Commit Graph

109 Commits

vfdev-5
a56af02913 [dynamo] Added support for is_contiguous with dynamic shapes (#113645)
Description:
- Added support for `x.is_contiguous()` with dynamic shapes

On `main`, the following code produces a graph break:
```python
import torch

@torch.compile(backend="eager", dynamic=True, fullgraph=True)
def f(x):
    if x.is_contiguous():
        return x
    else:
        return 0

x = torch.randn(13, 14)
f(x)
```
with the error message:
```
  File "pytorch/torch/_dynamo/variables/builder.py", line 1541, in wrap_fx_proxy_cls
    unimplemented(
  File "pytorch/torch/_dynamo/exc.py", line 193, in unimplemented
    raise Unsupported(msg)
torch._dynamo.exc.Unsupported: torch.* op returned non-Tensor bool call_method is_contiguous

from user code:
   File "check_is_contig_dynamic_true.py", line 37, in f
    if x.is_contiguous():
```

This PR fixes the issue. With the fix, the same script no longer hits the graph break:
```
TORCH_COMPILE_DEBUG=1 python check_is_contig_dynamic_true.py
[2023-11-14 15:49:04,399] [0/0] torch._dynamo.symbolic_convert: [INFO] Step 1: torchdynamo start tracing f check_is_contig_dynamic_true.py:34
[2023-11-14 15:49:04,403] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:34 in f ()
[2023-11-14 15:49:04,403] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG]     @torch.compile(backend="eager", dynamic=True, fullgraph=True)
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:37 in f (f)
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG]         if x.is_contiguous():
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_FAST x []
[2023-11-14 15:49:04,405] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_ATTR is_contiguous [LazyVariableTracker()]
[2023-11-14 15:49:04,804] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input L_x_ L['x']
[2023-11-14 15:49:04,805] [0/0] torch._dynamo.variables.builder: [DEBUG] wrap_to_fake L['x'] (5, 4) [<DimDynamic.DUCK: 1>, <DimDynamic.DUCK: 1>] [None, None]
[2023-11-14 15:49:04,839] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s0 L['x'].size()[0]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s1 L['x'].size()[1]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s2 L['x'].stride()[0]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.output_graph: [DEBUG] create_graph_input s1 L['x'].stride()[1]
[2023-11-14 15:49:04,840] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE CALL_FUNCTION 0 [GetAttrVariable(TensorVariable(), is_contiguous)]
[2023-11-14 15:49:04,843] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE POP_JUMP_IF_FALSE 12 [ConstantVariable(bool)]
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG] TRACE starts_line check_is_contig_dynamic_true.py:42 in f (f)
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert.__trace_source: [DEBUG]             return 0
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE LOAD_CONST 0 []
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.symbolic_convert: [DEBUG] TRACE RETURN_VALUE None [ConstantVariable(int)]
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.convert_frame: [DEBUG] Skipping frame because no content in function call f                     check_is_contig_dynamic_true.py 34
[2023-11-14 15:49:04,844] [0/0] torch._dynamo.convert_frame: [DEBUG] No graph captured with one_graph=True
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] TorchDynamo compilation metrics:
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] Function                           Runtimes (s)
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] -------------------------------  --------------
[2023-11-14 15:49:04,848] torch._dynamo.utils: [INFO] _compile.<locals>.compile_inner          1.2083
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113645
Approved by: https://github.com/lezcano
2023-11-17 12:32:38 +00:00
Brian Hirsh
cebad9867b graph break on intermediate leaves that require grad (#113277)
fixes https://github.com/pytorch/pytorch/issues/90552. This is a simpler fix that just detects the situation where AOTAutograd can't create a proper backward graph, and graph breaks instead. This was technically a silent correctness issue before.

This PR tries to always graph break when we see a factory function that returns a tensor requiring grad. I check this by seeing if the op returned a `TensorVariable` in dynamo, and if one of the input arguments was a `requires_grad=True` kwarg. I think this is high-fidelity enough, and I'm also hoping that this is uncommon enough that a graph break is reasonable here.

The fix to avoid the graph break in user land is also pretty easy: just instantiate your tensor outside of the compiled region and plumb it in, as sketched below.
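A hedged sketch of both the breaking pattern and the workaround (the exact behavior depends on the backend; names are illustrative):
```python
import torch

@torch.compile(backend="eager")
def f(x):
    # factory function creating an intermediate leaf that requires grad:
    # AOTAutograd can't build a proper backward graph here, so dynamo graph breaks
    w = torch.ones(3, requires_grad=True)
    return x * w

# workaround: instantiate the leaf outside the compiled region and plumb it in
w = torch.ones(3, requires_grad=True)

@torch.compile(backend="eager")
def g(x, w):
    return x * w

g(torch.randn(3), w)
```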

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113277
Approved by: https://github.com/eellison
ghstack dependencies: #113267, #113416, #113584
2023-11-16 02:47:45 +00:00
PyTorch MergeBot
1e60174891 Revert "[dynamo] Add run_inductor_tests entrypoint (#113278)"
This reverts commit b00311ce9e.

Reverted https://github.com/pytorch/pytorch/pull/113278 on behalf of https://github.com/huydhn due to Sorry for reverting your stack, but it is failing to list test internally with buck2 ([comment](https://github.com/pytorch/pytorch/pull/113278#issuecomment-1811646325))
2023-11-15 01:19:48 +00:00
Ken Jin
70064ac416 [Dynamo] Match closures by code ID (#109427)
Closes https://github.com/pytorch/pytorch/issues/107866

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109427
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-11-12 08:20:14 +00:00
Jason Ansel
b00311ce9e [dynamo] Add run_inductor_tests entrypoint (#113278)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113278
Approved by: https://github.com/yanboliang
2023-11-11 08:54:43 +00:00
Oguz Ulgen
68c4507bc2 [Inductor] Allow None values to be passed in as arguments to triton kernels (#113056)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113056
Approved by: https://github.com/jansel
ghstack dependencies: #112752, #113008, #112801
2023-11-07 05:29:42 +00:00
Oguz Ulgen
bfa717c6a6 [Inductor] Improve reinplace_scatters pass (#112801)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112801
Approved by: https://github.com/Chillee, https://github.com/jansel
ghstack dependencies: #112752, #113008
2023-11-07 05:29:42 +00:00
Oguz Ulgen
f6008be266 Move all triton related testing utils into shared file (#113008)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113008
Approved by: https://github.com/zou3519, https://github.com/jansel
ghstack dependencies: #112752
2023-11-07 05:29:29 +00:00
Oguz Ulgen
dbf44dffc9 [Inductor] Cache generated user defined triton kernels on tensor dtype and non tensor parameters (#112752)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112752
Approved by: https://github.com/jansel
2023-11-07 05:29:16 +00:00
Oguz Ulgen
001573b687 [Inductor] Support one node creating multiple mutations in scheduler (#112547)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112547
Approved by: https://github.com/Chillee
2023-11-03 16:01:31 +00:00
Oguz Ulgen
13d62e28a3 [Inductor] Add Dynamic shape support to user defined triton kernels (#112523)
1) This PR moves the grid function codegen to the wrapper so that we can use
   IndentBuffers as opposed to manually adding tabs for indentation.
2) In inductor, it emits the grid function in the body of the kernel call so
   that it can use free symbols from dynamic shapes (see the sketch below).
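
A minimal sketch of the user-level pattern this targets, assuming a CUDA device with Triton available; the kernel and names are illustrative:
```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_one_kernel(x_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + 1, mask=mask)

@torch.compile(dynamic=True)
def f(x):
    out = torch.empty_like(x)
    n = x.numel()  # a free symbol under dynamic shapes
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_one_kernel[grid](x, out, n, BLOCK_SIZE=1024)
    return out

f(torch.randn(4096, device="cuda"))
```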

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112523
Approved by: https://github.com/Chillee
2023-11-02 23:58:50 +00:00
Steven Troxler
17fd4885aa [dynamo] Support custom dict constructor with kwargs (#112513)
Summary:

As of https://github.com/pytorch/pytorch/pull/103192, dynamo
supports code that creates OrderedDict instances using kwargs
for the key-value pairs rather than passing a dict literal.

But custom dicts (for example, subclasses of OrderedDict) follow
a different codepath so that we can check for conditions, such
as a custom `__init__`, that need to force a graph break.

This commit allows kwargs for custom dict constructors: if the
positional args are empty and the class is not also a dataclass
(which is the case that, for example, a
`transformers.modeling_outputs.ModelOutput` instance will wind
up hitting), then the kwargs are treated as the key-value pairs.

NOTE: For this to behave 100% correctly, we are relying on
the fact that python dicts behave like ordered dicts so that they
preserve the kwargs' ordering. Technically it is not guaranteed that
future versions of Python will respect this; if that behavior changes
we would need to ensure that dynamo uses OrderedDict for kwargs all
the way down in order to handle special cases like OrderedDict where
the kwargs' ordering does matter.
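
A hedged sketch of the kind of code this unblocks (the class name is illustrative):
```python
import collections
import torch

class MyDict(collections.OrderedDict):
    pass

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    d = MyDict(a=x + 1, b=x - 1)  # kwargs as the key-value pairs
    return d["a"] * d["b"]

f(torch.randn(3))
```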

Test Plan:

```
pytest test/dynamo/test_functions.py
```

I also verified that the new test fails without the changes to
`dicts.py`.

Reviewers: yanboliang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112513
Approved by: https://github.com/yanboliang
2023-10-31 20:55:38 +00:00
Oguz Ulgen
219763c38d Support calling user defined triton kernels with kernel.run (#112292)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112292
Approved by: https://github.com/jansel
ghstack dependencies: #112290
2023-10-30 17:51:23 +00:00
Oguz Ulgen
1250032c2e [Inductor] Add triton.autotune support for user defined triton kernels with complex grids (#112290)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112290
Approved by: https://github.com/jansel
2023-10-30 17:48:27 +00:00
Oguz Ulgen
c14c4efc0e [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112228
Approved by: https://github.com/jansel
2023-10-28 17:30:35 +00:00
PyTorch MergeBot
8d44999183 Revert "[Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)"
This reverts commit dbb31a2984.

Reverted https://github.com/pytorch/pytorch/pull/112228 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing ROCm test in trunk dbb31a2984 ([comment](https://github.com/pytorch/pytorch/pull/112228#issuecomment-1783660326))
2023-10-28 01:51:32 +00:00
Oguz Ulgen
dbb31a2984 [Inductor] Add triton.autotune support for user defined triton kernels with constant/simple grids (#112228)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112228
Approved by: https://github.com/jansel
2023-10-27 21:40:22 +00:00
Oguz Ulgen
a29a844938 [Inductor] Support top level constants in user defined triton kernels (#111970)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111970
Approved by: https://github.com/jansel
ghstack dependencies: #111956
2023-10-25 02:43:51 +00:00
Oguz Ulgen
bb550b25c9 [Inductor] Support user defined triton kernels calling other triton kernels and activation functions (#111956)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111956
Approved by: https://github.com/jansel
2023-10-25 02:39:43 +00:00
Oguz Ulgen
ddcf9c050b [Inductor] Support calling user defined kernels with different type of arguments (#111939)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111939
Approved by: https://github.com/jansel, https://github.com/zou3519
ghstack dependencies: #111770, #111808
2023-10-24 19:49:48 +00:00
Jon Chuang
36d34ce951 [dynamo] support comparing LHS constant with tensor (#111492)
Fixes https://github.com/pytorch/pytorch/issues/108582

Depends on https://github.com/pytorch/pytorch/pull/111557 to fix broken integration tests (this PR unblocks an in-graph set membership).
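
A minimal sketch of the newly supported shape of comparison (illustrative):
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    return (0.5 < x).sum()  # constant on the LHS, tensor on the RHS

f(torch.randn(8))
```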

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111492
Approved by: https://github.com/Skylion007
2023-10-23 19:05:14 +00:00
Oguz Ulgen
2b2b6caf8f [inductor] Implement clone removal for user defined triton kernel via reinplace_scatters (#111627)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111627
Approved by: https://github.com/jansel
ghstack dependencies: #111434
2023-10-22 22:28:00 +00:00
Jon Chuang
c4ab229a82 [dynamo] Implement set.__contains__ for Tensor as object match of FakeTensor (#111738)
Fixes https://github.com/pytorch/pytorch/issues/111556

Dynamo's implementation of `set.__contains__` previously used an `__eq__` match.

But this is wrong when an `__eq__` match does not imply a `__hash__` match, as is the case for `torch.Tensor`, leading to inconsistent results. See: https://github.com/pytorch/pytorch/issues/111542

Hence, implement it as a Tensor object match, i.e. a match on the proxy node's `example_value` FakeTensor.
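
A hedged sketch of the intended identity semantics (membership should follow object identity, as in eager CPython, since `Tensor.__hash__` is id-based):
```python
import torch

@torch.compile(backend="eager")
def f(x, s):
    return x + 1 if x in s else x - 1

a = torch.randn(3)
b = a.clone()  # equal values, but a different object
f(a, {a})      # True branch: same object
f(b, {a})      # False branch, matching eager semantics
```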

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111738
Approved by: https://github.com/lezcano
2023-10-22 17:40:34 +00:00
Oguz Ulgen
977d3bcc46 [Inductor] Support user defined triton kernels in inductor (#111434)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111434
Approved by: https://github.com/jansel
2023-10-22 17:04:19 +00:00
Jon Chuang
47eed65481 [dynamo] Add is_ support for Tensors, force get_fake_value to reuse previously computed example_value if available (#111565)
Use a FakeTensor id match as equivalent to an object identity match.
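
A minimal sketch of the `is` support this adds (illustrative):
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x, y):
    return x * 2 if x is y else x + y

a = torch.randn(3)
f(a, a)               # identity holds
f(a, torch.randn(3))  # identity does not hold
```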

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111565
Approved by: https://github.com/ezyang
2023-10-21 13:56:30 +00:00
Jon Chuang
344fc98991 [dynamo] fix: SetVariable should test Tensor identity based example_value FakeTensor, not fx.Node (#111696)
The FX node changes after an in-place op; the FakeTensor remains the same.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111696
Approved by: https://github.com/ezyang
2023-10-21 08:49:21 +00:00
Jon Chuang
101210e2ce [dynamo] cast single-elem tensors to float and int (#111518)
Fixes https://github.com/pytorch/pytorch/issues/109538
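
A hedged sketch of the pattern this fixes (builtin casts on single-element tensors):
```python
import torch

@torch.compile(backend="eager")
def f(x):
    return float(x.sum()) + int(x.numel())

f(torch.randn(4))
```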

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111518
Approved by: https://github.com/ezyang
2023-10-20 22:53:58 +00:00
Jon Chuang
79529ef657 [dynamo] fix graph break when listlike of tensor contains const (#111572)
Fixes https://github.com/pytorch/pytorch/pull/111557#discussion_r1365620968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111572
Approved by: https://github.com/voznesenskym, https://github.com/lezcano
2023-10-19 19:51:28 +00:00
Oguz Ulgen
4e310fd875 [Autograd] Track when mutations are for triton kernels (#111500)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111500
Approved by: https://github.com/bdhirsh
2023-10-19 15:34:34 +00:00
Oguz Ulgen
defa0d3a2d Add a side table for triton kernels to avoid using itertools.partial (#110633)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110633
Approved by: https://github.com/jansel
2023-10-08 02:01:59 +00:00
Yanbo Liang
1b1bc08557 [Dynamo] SizeVariable can be indexed by symint (#110349)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110349
Approved by: https://github.com/williamwen42
2023-10-06 20:48:07 +00:00
PyTorch MergeBot
21019620ee Revert "[Dynamo] SizeVariable can be indexed by symint (#110349)"
This reverts commit 510ec7e3c5.

Reverted https://github.com/pytorch/pytorch/pull/110349 on behalf of https://github.com/PaliC due to breaking internal tests (check diff) ([comment](https://github.com/pytorch/pytorch/pull/110349#issuecomment-1748021641))
2023-10-05 04:42:33 +00:00
Oguz Ulgen
baa9af155e Add more tests for native triton kernels (#110486)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110486
Approved by: https://github.com/jansel
ghstack dependencies: #110403
2023-10-04 18:26:45 +00:00
Oguz Ulgen
f04b1a0d27 [AOTInductor] Implement autograd eager backend for native triton kernels (#110403)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110403
Approved by: https://github.com/zou3519, https://github.com/bdhirsh
2023-10-04 17:56:56 +00:00
Yanbo Liang
510ec7e3c5 [Dynamo] SizeVariable can be indexed by symint (#110349)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110349
Approved by: https://github.com/williamwen42
2023-10-04 03:20:18 +00:00
cdzhan
175b626216 Enable torch.promote_types in Dynamo tracing (#110358)
Fixes #109508
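
A minimal sketch of `torch.promote_types` traced inside a compiled function (illustrative):
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x, y):
    dt = torch.promote_types(x.dtype, y.dtype)
    return x.to(dt) + y.to(dt)

f(torch.randn(3), torch.arange(3))
```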

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110358
Approved by: https://github.com/Skylion007
2023-10-02 15:20:36 +00:00
Oguz Ulgen
f7ba3e85e2 [Dynamo] Add functional triton kernel wrapper (#110185)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110185
Approved by: https://github.com/jansel, https://github.com/zou3519, https://github.com/bdhirsh
ghstack dependencies: #109623
2023-09-30 04:20:20 +00:00
Oguz Ulgen
2d50a30d77 [Dynamo] Add native support for Triton Kernels to Dynamo (#109623)
This PR adds native support to Dynamo to detect Triton kernels and
create an FX graph node out of them. AOT eager and inductor modes will
be supported in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109623
Approved by: https://github.com/jansel
2023-09-29 15:49:18 +00:00
Yukio Siraichi
6f48d872d0 Re-land: Break graph on manual_seed. (#109109)
Re-landing: #108647 (old #107594)
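
A hedged sketch of the behavior being re-landed: `torch.manual_seed` inside a compiled function triggers a graph break rather than being traced:
```python
import torch

@torch.compile(backend="eager")
def f(x):
    torch.manual_seed(0)  # graph breaks here; seeding runs eagerly
    return torch.rand_like(x) + x

f(torch.randn(3))
```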

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109109
Approved by: https://github.com/lezcano
2023-09-28 15:28:40 +00:00
Michael Voznesensky
e4350d6d4e Functools partial support in dynamo (#108846)
The strategy for supporting functools partials is relatively straightforward.

There are 2 cases we need to support:

**1) Functools partials as input**
In this case, we are first seeing the functools partial and it is guaranteed to have a source. As such, the args, keywords, and func of the functools partial are passed through VariableBuilder. As this is the first time we are seeing these objects (as it is an input), we re-enter VariableBuilder with a source referencing the args, keywords, and func as attributes of the input to produce:

- func: A callable VariableTracker (UDF, TorchVariable, etc) depending on the value of `func`
- args: List[VariableTracker] - note, not ListVariableTracker!
- keywords: Dict[str, VariableTracker]

A major benefit of this structure is that it very elegantly matches the args to `call_function`.

We then compose a FunctoolsPartialVariable from the VariableTrackers made above.

**2) Functools partials created within compile**
In this case, we already have all the args as known VTs, and thus just compose a FunctoolsPartialVariable as we do for case (1).

For both (1) and (2), we propagate all guards from the func, args, and keyword VTs to the FunctoolsPartialVariable.
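
A hedged sketch exercising both cases (names are illustrative):
```python
import functools
import torch

def scale(x, factor):
    return x * factor

@torch.compile(backend="eager", fullgraph=True)
def f(x, fn):
    # case (2): a partial created inside the compiled region
    g = functools.partial(torch.add, alpha=2.0)
    # case (1): `fn` arrived as an input and was rebuilt via VariableBuilder
    return g(fn(x), x)

f(torch.randn(3), functools.partial(scale, factor=3.0))
```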

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108846
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-09-09 17:25:02 +00:00
PyTorch MergeBot
8caaa4f4cd Revert "Re-land: Break graph on manual_seed. (#108647)"
This reverts commit c887309437.

Reverted https://github.com/pytorch/pytorch/pull/108647 on behalf of https://github.com/huydhn due to Ouch, we are hit again my another internal import error from https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L205-L206 ([comment](https://github.com/pytorch/pytorch/pull/108647#issuecomment-1712230103))
2023-09-08 21:18:00 +00:00
Yukio Siraichi
c887309437 Re-land: Break graph on manual_seed. (#108647)
Trying to re-land #107594.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108647
Approved by: https://github.com/eellison
2023-09-07 12:52:38 +00:00
PyTorch MergeBot
48286d34a4 Revert "Break graph on manual_seed. (#107594)"
This reverts commit 6ad5568cbc.

Reverted https://github.com/pytorch/pytorch/pull/107594 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it has an import issue that breaks internal code ([comment](https://github.com/pytorch/pytorch/pull/107594#issuecomment-1705584405))
2023-09-04 18:00:37 +00:00
Yanbo Liang
9862c7196b [Dynamo] SetVariable supports contains (#108189)
Fixes #ISSUE_NUMBER
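
A minimal sketch of an in-graph set membership check (illustrative):
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    allowed = {torch.float32, torch.float64}
    return x + 1 if x.dtype in allowed else x - 1

f(torch.randn(3))
```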

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108189
Approved by: https://github.com/voznesenskym
2023-08-31 04:28:49 +00:00
Yukio Siraichi
6ad5568cbc Break graph on manual_seed. (#107594)
Fix: #107187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107594
Approved by: https://github.com/eellison
2023-08-30 17:24:11 +00:00
PyTorch MergeBot
4e47ea5131 Revert "Break graph on manual_seed. (#107594)"
This reverts commit 6c28de2437.

Reverted https://github.com/pytorch/pytorch/pull/107594 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it seems to cause failures in trunk on inductor/test_torchinductor_opinfo.py::TestInductorOpInfoCUDA::test_comprehensive_uniform_cuda_float, likely a landrace ([comment](https://github.com/pytorch/pytorch/pull/107594#issuecomment-1697783965))
2023-08-29 16:38:01 +00:00
Yukio Siraichi
6c28de2437 Break graph on manual_seed. (#107594)
Fix: #107187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107594
Approved by: https://github.com/eellison
2023-08-29 12:59:57 +00:00
lezcano
a9dca53438 NumPy support in torch.compile (#106211)
RFC: https://github.com/pytorch/rfcs/pull/54
First commit is the contents of https://github.com/Quansight-Labs/numpy_pytorch_interop/

We have already been using this in core for the last few months as an external dependency. This PR pulls all these into core.

In the next commits, I do a number of things in this order:
- Fix a few small issues
- Make the tests that this PR adds pass
- Bend over backwards until lintrunner passes
- Remove the optional dependency on `torch_np` and simply rely on the upstreamed code
- Fix a number of dynamo tests that were passing before (they were not testing anything, I think) and are not passing now.

Missing from this PR (but not blocking):
- Have a flag that deactivates tracing NumPy functions and simply breaks. There used to be one, but it stopped working after the merge and I removed it. @lezcano to investigate.
- https://github.com/pytorch/pytorch/pull/106431#issuecomment-1667079543. @voznesenskym to submit a fix after we merge.

All the tests in `tests/torch_np` take about 75s to run.

This was work by @ev-br, @rgommers, @honno, and me. I did not create this PR via ghstack (which would have been convenient), as this is a collaboration and ghstack doesn't allow for shared contributions.
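
A hedged sketch of the user-facing feature (mirroring the kind of example used to advertise it):
```python
import numpy as np
import torch

@torch.compile
def f(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    # NumPy calls are traced and executed through torch
    return np.sum(X[:, None] * Y[None, :], axis=-1)

f(np.random.randn(4, 3), np.random.randn(5, 3))
```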

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106211
Approved by: https://github.com/ezyang
2023-08-11 00:39:32 +00:00
Yanbo Liang
6560750d08 [Dynamo] Support list indexed by constant tensor (#105509)
Fixes #104092
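
A minimal sketch of the fixed pattern (illustrative; the 0-dim integer tensor indexes the list via `__index__`):
```python
import torch

@torch.compile(backend="eager")
def f(lst, i):
    return lst[i]  # python list indexed by a tensor

f([torch.randn(2), torch.randn(2)], torch.tensor(1))
```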

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105509
Approved by: https://github.com/eellison
2023-07-20 20:14:04 +00:00
kshitij12345
e137ac6c59 [dynamo][torch_np] support linalg, random and fft module (#105320)
Support tracing through `np.linalg` with `torch_np` installed. Will update with other modules if this approach makes sense.

TODO:
* [x] Add test for `fft` and `random`.

Fixes https://github.com/pytorch/pytorch/issues/105269
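
A hedged sketch of tracing through the newly supported modules (illustrative):
```python
import numpy as np
import torch

@torch.compile(backend="eager")
def f(x):
    n = x.numpy()
    return np.linalg.norm(n) + np.fft.rfft(n).real.sum()

f(torch.randn(8))
```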

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105320
Approved by: https://github.com/ezyang, https://github.com/lezcano
2023-07-19 11:06:37 +00:00