Sherlock Huang
a7baad04f6
Preserve stack trace for backward nodes over AOTAutograd (#83558)
...
For the following program:
```
import torch
import torchdynamo

def my_relu(a):
    return a.relu()

def func(a, b):
    a = torch.nn.Linear(10, 10)(a)
    d = torch.square(b)
    d = my_relu(d)
    loss = d.sum()
    return loss

with torchdynamo.optimize("aot_nop"):
    x = torch.rand(10, 10, requires_grad=True)
    y = torch.rand(10, 10, requires_grad=True)
    out = func(x, y)
```
It generates the following FX graph, with stack_trace populated on both forward and backward nodes:
```
def forward(self, primals, tangents):
    primals_1, primals_2, primals_3, primals_4, tangents_1, = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
    t_default = torch.ops.aten.t.default(primals_3); primals_3 = None
    addmm_default = torch.ops.aten.addmm.default(primals_4, primals_1, t_default); primals_4 = primals_1 = t_default = None
    pow_tensor_scalar = torch.ops.aten.pow.Tensor_Scalar(primals_2, 2)
    relu_default = torch.ops.aten.relu.default(pow_tensor_scalar); pow_tensor_scalar = None
    detach_default = torch.ops.aten.detach.default(relu_default)
    sum_default = torch.ops.aten.sum.default(relu_default); relu_default = None
    is_same_size_default = torch.ops.aten.is_same_size.default(sum_default, tangents_1)
    expand_default = torch.ops.aten.expand.default(tangents_1, [10, 10]); tangents_1 = None
    detach_default_1 = torch.ops.aten.detach.default(detach_default); detach_default = None
    threshold_backward_default = torch.ops.aten.threshold_backward.default(expand_default, detach_default_1, 0); expand_default = detach_default_1 = None
    pow_tensor_scalar_1 = torch.ops.aten.pow.Tensor_Scalar(primals_2, 1.0); primals_2 = None
    mul_scalar = torch.ops.aten.mul.Scalar(pow_tensor_scalar_1, 2.0); pow_tensor_scalar_1 = None
    mul_tensor = torch.ops.aten.mul.Tensor(threshold_backward_default, mul_scalar); threshold_backward_default = mul_scalar = None
    return pytree.tree_unflatten([sum_default, None, mul_tensor, None, None], self._out_spec)
====== joint graph =======
primals_1 None
primals_2 None
primals_3 None
primals_4 None
tangents_1 None
t_default  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 12, in func
    def func(a, b):
  File "/fsx/users/bahuang/repos/pytorch_fsx/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
addmm_default  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 12, in func
    def func(a, b):
  File "/fsx/users/bahuang/repos/pytorch_fsx/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
pow_tensor_scalar  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 14, in func
    d = torch.square(b)
relu_default  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 15, in func
    d = my_relu(d)
  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 10, in my_relu
    return a.relu()
detach_default  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 15, in func
    d = my_relu(d)
  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 10, in my_relu
    return a.relu()
sum_default
is_same_size_default
expand_default
detach_default_1  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 15, in func
    d = my_relu(d)
  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 10, in my_relu
    return a.relu()
threshold_backward_default  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 15, in func
    d = my_relu(d)
  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 10, in my_relu
    return a.relu()
pow_tensor_scalar_1  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 14, in func
    d = torch.square(b)
mul_scalar  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 14, in func
    d = torch.square(b)
mul_tensor  File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 14, in func
    d = torch.square(b)
output None
```
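For illustration (not part of the PR), a minimal sketch of how one might dump these traces from a traced graph module `gm`; `print_stack_traces` is a hypothetical helper name:
```
import torch.fx

def print_stack_traces(gm: torch.fx.GraphModule) -> None:
    # Walk the joint graph and print each node's originating user frame.
    # stack_trace is None for placeholders and for nodes without a
    # recorded user-code location (e.g. sum_default above).
    for node in gm.graph.nodes:
        print(node.name, node.stack_trace)
```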
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83558
Approved by: https://github.com/albanD
2022-08-18 22:13:04 +00:00
Horace He
0a48cdfb3b
re-enable aotautograd tests (#83485)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83485
Approved by: https://github.com/zou3519
2022-08-18 01:42:56 +00:00
Edward Z. Yang
ea037344e8
Reset compile cache to fix flaky test (#83608)
...
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83608
Approved by: https://github.com/seemethere, https://github.com/malfet
2022-08-17 20:12:18 +00:00
Horace He
5e8b4c64aa
Delayed compilation of backwards pass until backwards runs (#83367)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83367
Approved by: https://github.com/ngimel, https://github.com/zou3519
2022-08-17 16:38:36 +00:00
Horace He
9e1daf7644
skip flaky tests for now (#83482)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83482
Approved by: https://github.com/huydhn
2022-08-15 22:38:37 +00:00
Richard Zou
60295e3abd
[functorch] Delete functorch_lagging_op_db (#83418)
...
No need to have a lagging op db because there are no more sync issues
between functorch and pytorch. If someone adds a new OpInfo, then we
should explicitly check if we support it or not.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83418
Approved by: https://github.com/samdow
2022-08-15 19:23:03 +00:00
Horace He
a65825116a
clear cache in-between each test (#83431)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83431
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-08-15 18:49:24 +00:00
Horace He
fbe8c77427
Implemented basic version of AOTDispatcher that only chooses between autograd and no autograd (#83248)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83248
Approved by: https://github.com/zou3519, https://github.com/ezyang
2022-08-14 17:36:33 +00:00
Richard Zou
b99f972e07
[functorch] update functorch lagging db (#83346)
...
I'm planning on removing the functorch lagging op db because it doesn't make
sense now that functorch is part of PyTorch. Before that happens,
this PR updates it, and a future PR will delete it.
Test Plan:
- wait for tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83346
Approved by: https://github.com/samdow
2022-08-13 01:56:01 +00:00
Horace He
ea51e87b52
Added list clearing codegen to AOTAutograd (hidden behind config.aot_clear_list) (#83137)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83137
Approved by: https://github.com/jansel, https://github.com/albanD
2022-08-12 22:52:16 +00:00
Sherlock Huang
6915676448
Preserve node's stack trace during retrace (#83050)
...
AOTAutograd retraces the graph module produced by TorchDynamo; this PR preserves each original fx.Node's stack trace through the retrace.
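The effect can be pictured with a minimal sketch (a hypothetical helper, not the PR's actual mechanism) that copies stack_trace from each original node onto its retraced counterpart by name:
```
import torch.fx

def copy_stack_traces(orig: torch.fx.GraphModule, retraced: torch.fx.GraphModule) -> None:
    # Hypothetical helper: carry the user-level stack_trace from each
    # original dynamo node over to the retraced node of the same name.
    traces = {n.name: n.stack_trace for n in orig.graph.nodes}
    for node in retraced.graph.nodes:
        if traces.get(node.name):
            node.stack_trace = traces[node.name]
```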
Differential Revision: [D38595638](https://our.internmc.facebook.com/intern/diff/D38595638)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83050
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2022-08-11 04:18:14 +00:00
Richard Zou
5f4e8c0a4d
Add ability to run functorch tests via run_test.py (#82012)
...
This PR:
- adds the ability to run functorch tests via run_test.py
- changes the functorch shards in PyTorch CI to invoke functorch tests
via run_test.py
The main motivation for this is so that functorch tests hook into the
standard PyTorch test infrastructure.
Questions for reviewers:
- the functorch tests are located outside of the pytorch/test folder
(they're in the pytorch/functorch/test folder). Is this OK? (run_test.py
works locally for me).
Test Plan:
- checked that `python run_test.py --functorch` ran functorch tests
locally
- Local mock test: added `{"test_compilation_for_dynamic_shape (__main__.TestCompileCache)": ["https://github.com/pytorch/pytorch/issues/82016", ["linux"]]}` to .pytorch-disabled-tests.json, ran functorch tests, verified that the test was skipped.
- Wait for CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82012
Approved by: https://github.com/janeyx99
2022-07-25 14:23:18 +00:00
kshitij12345
5880a66758
[composite compliance] matrix_exp (#81225)
...
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81225
Approved by: https://github.com/zou3519
2022-07-25 11:11:29 +00:00
Samantha Andow
0d9fd3d521
[functorch] Fix CI (pytorch/functorch#931)
...
* fix ci
* add batch rule xfail, skip kl_div until tomorrow
2022-07-21 13:41:37 -07:00
Horace He
7527eac49d
[functorch] fix CI issues with AOTAutograd
2022-07-21 13:41:37 -07:00
vfdev
62868fa3d5
[functorch] Added chunks arg to vmap (pytorch/functorch#774)
...
* Added chunks arg to vmap
Description:
- Added chunks arg to vmap
- Added a test
* Created chunk_vmap in experimental
* Code formatting
* Updated tests
Refactored common code and fixed random state with randomness = same
* Updated docstring and split tests by randomness
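The chunking idea can be approximated with a short sketch (using today's torch.vmap rather than the functorch API of the time; chunked_vmap is an illustrative name, not the real implementation):
```
import torch

def chunked_vmap(f, x, chunks):
    # Split the batch dimension into `chunks` pieces, vmap over each
    # piece, and concatenate the results. This trades peak memory for
    # extra kernel launches.
    pieces = x.tensor_split(chunks)
    return torch.cat([torch.vmap(f)(p) for p in pieces])

# Example: reduce each of 1000 rows independently, 4 chunks at a time.
x = torch.randn(1000, 16)
out = chunked_vmap(lambda row: row.sin().sum(), x, chunks=4)
```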
2022-07-21 13:41:37 -07:00
Horace He
16ddcbb6c8
[functorch] Did some dead code elimination before partitioning
2022-07-21 13:41:36 -07:00
Horace He
58cda84f34
[functorch] made partitioner strip overloads
2022-07-21 13:41:36 -07:00
Animesh Jain
97a5d9b703
[functorch] Disable autocast (pytorch/functorch#794)
...
* Disable autocast
* Add global flag
* Add a test
2022-07-21 13:41:36 -07:00
Animesh Jain
6050b42388
[functorch] Present Random state (pytorch/functorch#887)
...
* Present Random state
* Add tests
2022-07-21 13:41:36 -07:00
Horace He
6c8874f1a9
[functorch] unify PythonKey (i.e. ProxyTensor) tracer with the one in core (pytorch/functorch#853)
...
* unify tracer with the one in core
* modified test
* fix lint issues
* fixed some things
2022-07-21 13:41:35 -07:00
Samantha Andow
c048d19772
[functorch] fix CI (pytorch/functorch#816)
2022-07-21 13:41:34 -07:00
Samantha Andow
d9b25b1a2a
[functorch] fix ci (pytorch/functorch#789)
2022-07-21 13:41:33 -07:00
Horace He
07e000478f
[functorch] fix flake
2022-07-21 13:41:33 -07:00
Horace He
1b2f01e712
[functorch] skip unpool tests on python key too
2022-07-21 13:41:33 -07:00
Horace He
fc24004a9f
[functorch] Updates to the min-cut partitioner to improve it (pytorch/functorch#653)
...
* updated partitioner
* some minor modifications
* stashed changes
* update some lists
* added some more decompositions
* remove clone decompositions
* updated partitioner
2022-07-21 13:41:30 -07:00
Richard Zou
a46fbe8d79
[functorch] skip flaky test
2022-07-21 13:41:30 -07:00
Richard Zou
244197a2a6
[functorch] Update lagging op db (pytorch/functorch#737)
2022-07-21 13:41:30 -07:00
Richard Zou
7bf45014ce
[functorch] Cleanup return_types handling now that it has been upstreamed (pytorch/functorch#730)
...
Fixes https://github.com/pytorch/functorch/issues/713
2022-07-21 13:41:30 -07:00
albanD
0b8752b0f7
[functorch] Fix test following tls update in core (pytorch/functorch#718)
...
This test now works!
2022-07-21 13:41:30 -07:00
Horace He
352c07c2f5
[functorch] fixed some minor issues
2022-07-21 13:41:30 -07:00
Horace He
1a85653108
[functorch] fix issue where default partitioner might recompute things
2022-07-21 13:41:30 -07:00
Animesh Jain
38cbf390e2
[functorch] Skip extracting meta tensor info for sparse tensors (pytorch/functorch#676)
...
* Skip extracting meta tensor info for sparse tensors
* Change expected failures
2022-07-21 13:41:29 -07:00
Animesh Jain
cb931b1649
[functorch] Reduce overhead of AOT Module (pytorch/functorch#660)
...
Adds aot_module_simplified and aot_function_simplified.
Falls back to the original aot_module until we prevent tracing of leaf modules.
2022-07-21 13:41:28 -07:00
Animesh Jain
f4ed73b3d1
[functorch] Trace the backward pass assuming contiguous tensors (pytorch/functorch#536)
2022-07-21 13:41:28 -07:00
Richard Zou
1cc3add0e7
[functorch] Update functorch lagging op db (pytorch/functorch#652)
2022-07-21 13:41:27 -07:00
Horace He
562c91b133
[functorch] removed special.zeta grad
2022-07-21 13:41:27 -07:00
Horace He
0eb5aa5c7d
[functorch] fixed some op packet issues and added some more tests for partition optimality
2022-07-21 13:41:26 -07:00
Horace He
3a26772c53
[functorch] Actually made some modifications to fix OpOverload changes (pytorch/functorch#579)
...
* actually fixed recent issues
* fixed new issues
* fixed overload issue
* Fix epsilons
2022-07-21 13:41:26 -07:00
Animesh Jain
1ac79567fc
[functorch] Setting tensor_meta attr for inplace ops (pytorch/functorch#565)
...
* Setting tensor_meta attr for inplace ops
* Tests
* Linter
* Failures
2022-07-21 13:41:25 -07:00
Samantha Andow
824eeb96e9
[functorch] remove s_where, use where.self (pytorch/functorch#581)
2022-07-21 13:41:25 -07:00
Edward Z. Yang
57e78c6654
[functorch] Don't unnecessarily wrap the elem in PythonTensor (pytorch/functorch#554)
...
* Don't unnecessarily wrap the elem in PythonTensor
Instead of saying that a PythonTensor *has* a regular (e.g., CPU) tensor
and an FX proxy, a PythonTensor *is a* regular CPU tensor that also
carries an FX proxy (which updates as we go along).
This should fix https://github.com/pytorch/functorch/issues/465 and
it also fixed some expected failures in the test suite.
This kills the meta variant logic entirely; maybe some other time we'll
try to bring it back.
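The is-a design can be pictured with a minimal sketch (assumed shape only, not the actual PythonTensor code):
```
import torch

class ProxyTensor(torch.Tensor):
    # The instance IS a real CPU tensor; it merely carries its FX proxy
    # alongside the data instead of wrapping an inner tensor.
    @staticmethod
    def __new__(cls, elem, proxy):
        t = torch.Tensor._make_subclass(cls, elem)
        t.proxy = proxy  # updated as tracing proceeds
        return t
```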
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2022-07-21 13:41:24 -07:00
Edward Z. Yang
6648a05b25
[functorch] Skip networkx if not installed
...
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2022-07-21 13:41:24 -07:00
Sam Andow
12724711b7
[functorch] skip decomps, fix ci
2022-07-21 13:41:22 -07:00
Horace He
d3406d9cd0
[functorch] Fix some CI fails for AOTAutograd and such
2022-07-21 13:41:21 -07:00
Horace He
7ba1e4303b
[functorch] Fixed minor bug with partitioner and other minor changes
2022-07-21 13:41:21 -07:00
Richard Zou
81f3bd4d58
[functorch] Fix CI (pytorch/functorch#461)
2022-07-21 13:41:21 -07:00
Horace He
4b72e62178
[functorch] Added Grad context to AOTAutograd (pytorch/functorch#441)
...
* refactored AOTAutograd code a bit
* Added grad context stack
* remove pdb
* fixed flake
2022-07-21 13:41:20 -07:00
Sam Andow
23ed94181a
[functorch] fix unexpected successes
2022-07-21 13:41:19 -07:00
Horace He
fc272929a6
[functorch] Added support for outputs that don't require gradients + fixed some other tests
2022-07-21 13:41:19 -07:00