Commit Graph

80 Commits

Nikita Karetnikov
bf08d1387c [primTorch] handle out in sort meta function (#96719)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96719
Approved by: https://github.com/ezyang
2023-03-16 07:38:53 +00:00
Edward Z. Yang
3606f59366 Default specialize_int to False (#96624)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96624
Approved by: https://github.com/janeyx99
2023-03-16 02:54:18 +00:00
PyTorch MergeBot
ba4fb9b6ad Revert "Default specialize_int to False (#96624)"
This reverts commit 1ac8782db2.

Reverted https://github.com/pytorch/pytorch/pull/96624 on behalf of https://github.com/kit1980 due to Broke inductor/test_torchinductor_dynamic_shapes.py
2023-03-14 19:43:47 +00:00
Edward Z. Yang
1ac8782db2 Default specialize_int to False (#96624)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96624
Approved by: https://github.com/janeyx99
2023-03-14 18:37:47 +00:00
Driss Guessous
f56cb41c2e Fix calls to sizes to enable dynamic shapes with sdpa (#96674)
Fixes part of #96414

Replaces any calls to `sizes` with `sym_sizes`. Still seeing an error with the repro script:
```
Exception raised from sizes_default at /scratch/drisspg/work/pytorch/c10/core/TensorImpl.h:635 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x7d (0x7f697f4a141d in /scratch/drisspg/work/pytorch/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0xdd (0x7f697f49fbcd in /scratch/drisspg/work/pytorch/torch/lib/libc10.so)
frame #2: c10::TensorImpl::sizes_custom() const + 0x95 (0x7f697f4824c5 in /scratch/drisspg/work/pytorch/torch/lib/libc10.so)
frame #3: at::native::empty_like(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, c10::optional<c10::MemoryFormat>) + 0x92c (0x7f69809d18ac in /scratch/drisspg/work/pytorch/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x23f5ce7 (0x7f698193bce7 in /scratch/drisspg/work/pytorch/torch/lib/libtorch_cpu.so)
```

Still trying to track down this empty call.

From the looks of it, it might be coming from `at::layer_norm`? The backtrace from lldb is 221 frames, however, so there is a lot of noise.
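
A hedged repro sketch of the failure class described above (not the original repro script, which is not included here): SDPA under dynamic shapes, where an internal `sizes()` call raises on tensors with symbolic sizes.
```python
import torch
import torch.nn.functional as F

# Sketch only: SDPA traced with dynamic shapes, the path where internal
# sizes() calls used to fail on tensors with symbolic sizes.
@torch.compile(dynamic=True)
def attn(q, k, v):
    return F.scaled_dot_product_attention(q, k, v)

q = k = v = torch.randn(2, 8, 16, 64)
print(attn(q, k, v).shape)
```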

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96674
Approved by: https://github.com/ezyang
2023-03-14 03:47:43 +00:00
Nikita Karetnikov
8e0d5bf538 [primTorch] add meta implementation for aten.min.dim (#96442)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96442
Approved by: https://github.com/ngimel
2023-03-11 18:51:51 +00:00
Yanbo Liang
7fcf8b1829 [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
For Meta internal use cases.
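
A minimal sketch of what this enables, assuming a CUDA device (`torch.cpu.amp.autocast` works analogously with `dtype=torch.bfloat16`):
```python
import torch

# Sketch only: entering an amp.autocast context inside a compiled function,
# which this PR teaches dynamo to handle.
@torch.compile
def matmul_amp(a, b):
    with torch.cuda.amp.autocast():
        return torch.mm(a, b)  # runs in float16 under autocast

a = torch.randn(8, 8, device="cuda")
b = torch.randn(8, 8, device="cuda")
print(matmul_amp(a, b).dtype)  # torch.float16
```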

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95416
Approved by: https://github.com/jansel
2023-03-10 21:48:08 +00:00
PyTorch MergeBot
3ce1e15cf7 Revert "[Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)"
This reverts commit c88aa336aa.

Reverted https://github.com/pytorch/pytorch/pull/95416 on behalf of https://github.com/huydhn due to Sorry for reverting your PR. But it seems that the smoke test issue is related as it starts to fail consistently in trunk https://hud.pytorch.org/hud/pytorch/pytorch/master/1?per_page=50&name_filter=inductor_torchbench_smoketest_perf
2023-03-08 06:51:57 +00:00
Yanbo Liang
c88aa336aa [Dynamo] Support torch.{cuda/cpu}.amp.autocast (#95416)
For Meta internal use cases.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95416
Approved by: https://github.com/jansel
2023-03-08 01:40:27 +00:00
Edward Z. Yang
d303665d33 Make int unspecialization actually work (#95621)
OK, so this PR used to be about reducing the number of constants we specialize on, but it turns out that unspecialization was ~essentially never used (because we still constant specialized way too aggressively) and I ended up having to fix a bunch of issues to actually get tests to pass. So this PR is now "make int unspecialization actually work". As part of this, I have to turn off unspecialization by default, as there are still latent bugs in inductor.

The general strategy is that an unspecialized int is represented as a SymInt. Representing it as a 0d tensor (which is what the code used to do) is untenable: (1) we often need unspecialized ints to participate in size computations, but we have no way of propagating sympy expressions through tensor compute, and (2) a lot of APIs work when passed SymInt, but not when passed a Tensor. However, I continue to represent Numpy scalars as Tensors, as they are rarely used for size computation and they have an explicit dtype, so they are more accurately modeled as 0d tensors.

* I folded in the changes from https://github.com/pytorch/pytorch/pull/95099 as I cannot represent unspecialized ints as SymInts without also turning on dynamic shapes. This also eliminates the necessity for test_unspec.py, as toggling specialization without dynamic shapes doesn't do anything. As dynamic shapes defaults to unspecializing, I just deleted this entirely; for the specialization case, I rely on regular static shape tests to catch it. (Hypothetically, we could also rerun all the tests with dynamic shapes, but WITH int/float specialization, but this seems... not that useful? I mean, I guess export wants it, but I'd kind of like our Source heuristic to improve enough that export doesn't have to toggle this either.)
* Only 0/1 integers get specialized by default now
* A hodgepodge of fixes. I'll comment on the PR about them.

Fixes https://github.com/pytorch/pytorch/issues/95469
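
A hedged sketch of the intended behavior, using the `specialize_int` flag named in the follow-up commit above (#96624):
```python
import torch
import torch._dynamo

# Sketch only: with int specialization off, a plain Python int argument is
# traced as a SymInt instead of being burned into the graph as a constant
# (per the PR, only 0 and 1 stay specialized by default).
torch._dynamo.config.specialize_int = False

@torch.compile(dynamic=True)
def scale(x, n):
    return x * n  # n participates in compute as a SymInt

x = torch.randn(4)
scale(x, 2)
scale(x, 3)  # in principle reuses the compiled graph instead of recompiling
```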

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95621
Approved by: https://github.com/jansel, https://github.com/Chillee
2023-03-04 01:22:08 +00:00
Driss Guessous
29c235e555 [SDPA] Fix bug in parsing scaled_dot_product_attention arguments (#95311)
Fixes #95266
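
For reference, a minimal call sketch with the argument layout being parsed here (`attn_mask`, `dropout_p`, `is_causal` are the keyword arguments as of this era):
```python
import torch
import torch.nn.functional as F

# Sketch only: a plain SDPA call exercising the argument-parsing path.
q = k = v = torch.randn(2, 8, 16, 64)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None,
                                     dropout_p=0.0, is_causal=False)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```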

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95311
Approved by: https://github.com/cpuhrsch
2023-02-23 03:12:46 +00:00
Michael Voznesensky
500ebb2cd6 Fine grained dynamic shape controls (#94787)
https://docs.google.com/document/d/1aoIyYE8_6cYpWqS25thzVoIiKsT5aaUEOiiPwbIXt8k/edit
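
A hedged usage sketch, assuming this is the PR that introduced `torch._dynamo.mark_dynamic` (per the linked design doc):
```python
import torch
import torch._dynamo

# Sketch only: mark a single dimension as dynamic while leaving the rest
# static, rather than toggling dynamic shapes globally.
x = torch.randn(8, 32)
torch._dynamo.mark_dynamic(x, 0)  # treat dim 0 as dynamic, dim 1 stays static

@torch.compile
def f(t):
    return t.relu()

f(x)  # compiled with a symbolic size for dim 0
```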

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94787
Approved by: https://github.com/ezyang
2023-02-17 22:28:37 +00:00
PyTorch MergeBot
e0ede1cc30 Revert "Fine grained dynamic shape controls (#94787)"
This reverts commit 2aa806608b.

Reverted https://github.com/pytorch/pytorch/pull/94787 on behalf of https://github.com/kit1980 due to After this PR, test_autocast_sdpa_dynamic_shapes_static_default started to fail with RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides: https://github.com/pytorch/pytorch/actions/runs/4206176846/jobs/7299657478
2023-02-17 19:52:16 +00:00
Michael Voznesensky
2aa806608b Fine grained dynamic shape controls (#94787)
https://docs.google.com/document/d/1aoIyYE8_6cYpWqS25thzVoIiKsT5aaUEOiiPwbIXt8k/edit

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94787
Approved by: https://github.com/ezyang
2023-02-17 17:39:22 +00:00
Yanbo Liang
950a9efcc3 [Dynamo] Enable test_autocast_sdpa (#95011)
Enable `test_autocast_sdpa` since the blocker has been removed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95011
Approved by: https://github.com/drisspg
2023-02-17 09:37:25 +00:00
Joel Schlosser
b5ef37b9a4 Dynamo: Fix graph break when iterating over tensor (#94326)
Supports the following with dynamic shapes:
```python
for element in tensor:
    # do stuff with element
```

Approach follows what's done when `call_range()` is invoked with dynamic shape inputs: guard on tensor size and continue tracing with a real size value from `dyn_dim0_size.evaluate_expr()`.
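
A minimal usage sketch of the supported pattern:
```python
import torch

# Sketch only: iterating over a tensor inside a compiled function no longer
# causes a graph break; tracing guards on t.size(0) and continues.
@torch.compile(dynamic=True)
def row_sums(t):
    out = []
    for row in t:               # guarded on t.size(0)
        out.append(row.sum())
    return torch.stack(out)

print(row_sums(torch.randn(3, 4)))
```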

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94326
Approved by: https://github.com/ezyang
2023-02-08 23:57:06 +00:00
Jason Ansel
10910758f4 Make dynamo tests work under pytest (#93251)
This now runs without error:
```
pytest test/dynamo
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93251
Approved by: https://github.com/ezyang, https://github.com/voznesenskym, https://github.com/mlazos
2023-02-01 02:11:52 +00:00
Yanbo Liang
b3e422948d [Dynamo] Support out variants of ops mutate the tensors out of the function frame (#93177)
Fixes #93136

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93177
Approved by: https://github.com/jansel
2023-01-29 22:22:58 +00:00
Nikita Karetnikov
53bfba0d72 [inductor] run CPU and CUDA tests with dynamic shapes (#92667)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92667
Approved by: https://github.com/ezyang
2023-01-23 08:54:31 +00:00
PyTorch MergeBot
6bc62a6392 Revert "[inductor] run CPU and CUDA tests with dynamic shapes (#92667)"
This reverts commit 425e506ffe.

Reverted https://github.com/pytorch/pytorch/pull/92667 on behalf of https://github.com/kit1980 due to test_topk_dynamic_shapes_cpu failing after this PR
2023-01-22 03:43:57 +00:00
Nikita Karetnikov
425e506ffe [inductor] run CPU and CUDA tests with dynamic shapes (#92667)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92667
Approved by: https://github.com/ezyang
2023-01-21 22:03:41 +00:00
ydwu4
a2368a7c13 [dynamo] delegate handling of len() of TensorVariable to size(0) (#92016)
We delegate the handling logic of `__len__` in `TensorVariable` to `size(0)`. This also seems to fix several expected failures related to `len()`.
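
A minimal sketch of the delegation:
```python
import torch

# Sketch only: len(tensor) inside a compiled function is handled as
# tensor.size(0) instead of breaking the graph.
@torch.compile(dynamic=True)
def f(t):
    return t.sum() / len(t)  # len(t) traces as t.size(0)

print(f(torch.ones(5)))  # tensor(1.)
```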

Fixes #91901

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92016
Approved by: https://github.com/ezyang
2023-01-12 13:40:48 +00:00
Joel Schlosser
3d8834bdbf SymIntify F.interpolate() with recompute_scale_factor=True (#91318)
This PR makes the minor changes necessary to get `F.interpolate()` working with symbolic shapes when `recompute_scale_factor=True` + adds `OpInfo` samples to test.
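
A minimal sketch exercising this path, assuming a build with `torch.compile` (the PR itself predates the stable 2.0 release):
```python
import torch
import torch.nn.functional as F

# Sketch only: interpolate with recompute_scale_factor=True under
# dynamic shapes, the combination this PR symintifies.
@torch.compile(dynamic=True)
def upsample(x):
    return F.interpolate(x, scale_factor=2.0, mode="bilinear",
                         recompute_scale_factor=True)

print(upsample(torch.randn(1, 3, 8, 8)).shape)  # torch.Size([1, 3, 16, 16])
```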

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91318
Approved by: https://github.com/ezyang
2022-12-29 01:42:56 +00:00
Sherlock Huang
f1fb586bc6 Symintify repeat_interleave.self_int (#89111)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89111
Approved by: https://github.com/ezyang
2022-11-18 05:04:02 +00:00
Sherlock Huang
5faa2792fa Symintify decomps for split and upsample_bilinear; Fix decomp for _softmax_backward_data and native_dropout_backward (#88761)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88761
Approved by: https://github.com/ezyang
2022-11-15 13:34:45 +00:00
Michael Voznesensky
06ce1338bc [dynamo] Port all pytorch/dynamo and test/dynamo pieces over from symbolic-shapes branch (#88768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88768
Approved by: https://github.com/jansel, https://github.com/ezyang
2022-11-13 04:50:21 +00:00
Edward Z. Yang
2a47b10780 Get the magic method try reverse protocol correct (#88030)
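
For context, a plain-Python illustration of the reflected ("reverse") magic-method protocol the title refers to; the `Meters` class is purely illustrative:
```python
# For a + b, Python tries a.__add__(b) first and falls back to
# b.__radd__(a) if the first returns NotImplemented.
class Meters:
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        return NotImplemented          # defer to the other operand
    def __radd__(self, other):
        return Meters(other + self.v)

print((1 + Meters(2)).v)  # 3, resolved via Meters.__radd__
```
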
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88030
Approved by: https://github.com/anjali411, https://github.com/albanD
2022-10-31 13:19:56 +00:00
Michael Voznesensky
bc19494814 [Dynamo] Symbolic shape guards (#87570)
**Introduces symbolic shape guards into dynamo.**

In this PR, we take the existing fake tensor infra and plumbing in dynamo and we start passing a shape_env around. This shape_env does not get plumbed down to middle layers / backend yet - it only collects expressions from frontend invocations at the moment. We then translate these expressions into guards at the point where we take other guards installed throughout dynamo - and add them to check_fn.
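
A rough illustration of the effect, phrased in current `torch.compile` terms (this PR itself predates that API):
```python
import torch

# Sketch only: a size check on a traced tensor becomes a symbolic-shape
# guard attached to the compiled frame's check_fn.
@torch.compile(dynamic=True)
def f(x):
    if x.shape[0] > 2:      # installed as a guard like s0 > 2
        return x.sum()
    return x * 2

f(torch.randn(4))  # compiles; guard s0 > 2 holds
f(torch.randn(8))  # same guard holds, so the compiled graph can be reused
```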

Part 1 of https://docs.google.com/document/d/1QJ-M4zfMkD-fjHIqW089RptjLl9EgozZGCceUbvmgfY/edit#

cc @jansel @lezcano @fdrocha @mlazos @soumith @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87570
Approved by: https://github.com/ezyang
2022-10-25 21:15:40 +00:00
Jason Ansel
8f71e8de7e Sync changes from pytorch/torchdynamo, enable tests (#86950)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86950
Approved by: https://github.com/Chillee
2022-10-14 23:08:58 +00:00
Jason Ansel
c7c09722ad Move TorchDynamo into PyTorch core (#86461)
Context:
https://github.com/pytorch/torchdynamo/issues/1588

This PR moves [TorchDynamo](https://github.com/pytorch/torchdynamo) and TorchInductor into PyTorch core.
- `torchdynamo` becomes `torch._dynamo`
- `torchinductor` becomes `torch._inductor`
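
In import terms, the move amounts to:
```python
# Old standalone packages (shown in comments for comparison):
# import torchdynamo      ->  import torch._dynamo
# import torchinductor    ->  import torch._inductor
import torch._dynamo
import torch._inductor
```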

This PR was generated by running `copy_to_core.sh` in https://github.com/pytorch/torchdynamo/pull/1538

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86461
Approved by: https://github.com/voznesenskym
2022-10-13 23:18:06 +00:00