This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.
In jit tests:
- Add and use a common raise_on_run_directly method for the case where a user directly runs a test file that should not be run that way; it prints the file the user should have run instead (a sketch of such a helper is below).
- Raise a RuntimeError on tests which have been disabled (not run)
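A minimal sketch of what such a helper might look like (the name matches the description above, but the message and location here are assumptions, not the PR's actual implementation):
```python
# Hypothetical sketch of a raise_on_run_directly helper; the real helper in
# the test utilities may differ in location, signature, and message.
def raise_on_run_directly(file_to_run):
    raise RuntimeError(
        "This test file is not meant to be run directly. "
        f"Run it via: python {file_to_run}"
    )


if __name__ == "__main__":
    raise_on_run_directly("test/test_jit.py")
```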
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/clee2000
Enables a few extra ruff rules, most of which have no violations because I already cleaned them up in earlier PRs; this just turns them on to enforce them going forward. Adds one noqa, as we want the suboptimal lambda generation + call kept as a test. Also enables the test in flake8.
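For illustration, this is the kind of pattern the noqa preserves (the exact rule code and snippet are assumptions, not taken from the PR):
```python
# A deliberately suboptimal lambda definition plus immediate call, kept on
# purpose because the test exercises exactly this pattern; the noqa silences
# the corresponding ruff rule (the rule code here is illustrative).
result = (lambda x: x + 1)(41)  # noqa: PLC3002
```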
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130700
Approved by: https://github.com/justinchuby, https://github.com/ezyang
Summary:
torch.testing.assert_equal doesn't support nested strided tensors because `sizes` is not implemented for them.
This adds special handling for nested tensors: when they are detected, they are unbound and their components are compared.
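A rough sketch of the idea (not the actual implementation in torch.testing; the function name is illustrative):
```python
import torch

def assert_close_maybe_nested(actual, expected):
    # Nested strided tensors do not implement sizes(), so unbind them and
    # compare component-wise; otherwise fall back to the regular comparison.
    if actual.is_nested and expected.is_nested:
        actual_parts, expected_parts = actual.unbind(), expected.unbind()
        assert len(actual_parts) == len(expected_parts)
        for a, e in zip(actual_parts, expected_parts):
            torch.testing.assert_close(a, e)
    else:
        torch.testing.assert_close(actual, expected)
```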
Test Plan: test_trace_with_nested_strided_tensor_output
Differential Revision: D54430238
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121039
Approved by: https://github.com/YuqingJ
Fixes https://github.com/pytorch/pytorch/issues/101960
When I trace a function that runs an out-operator with more than one output, I get the error below. This is because the case where the out operator has more than one output is not handled.
```python
def test_trace_out_operator_with_two_output():
    example_input = torch.rand(2, 8)
    out_1, out_2 = torch.cummax(example_input, 1)

    def run_cummax(example_input, out_1, out_2):
        output_1, output_2 = torch.cummax(example_input, 1, out=(out_1, out_2))
        return output_1, output_2

    trace_model = torch.jit.trace(run_cummax, (example_input, out_1, out_2))
```
and the error info:
```
    raise TracingCheckError(
torch.jit._trace.TracingCheckError: Tracing failed sanity checks!
    encountered an exception while running the trace with test inputs
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101563
Approved by: https://github.com/jgong5, https://github.com/EikanWang, https://github.com/davidberard98
Fixes #99665
Let me explain the root cause using the unit test I added:
* This bug is triggered when:
  * `wrapped` is a nested function.
  * `wrapped` is in a module different from the one containing the main function `fn`.
  * There is a graph break inside of `wrapped`.
* The root cause: when resuming the nested function, we are actually using the outermost function's (`fn` in my example) globals, but `wrapped` calls `inner_func`, which is not part of `fn`'s globals, so we have to set the correct globals when the nested function resumes execution (a sketch follows below).
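A hypothetical, approximate repro of the scenario; in the real setup `wrapped` and `inner_func` live in a different module than `fn`, which this single-file sketch only notes in comments:
```python
import torch
import torch._dynamo

def inner_func(x):
    return x + 1

def make_wrapped():
    # `wrapped` is a nested function with a graph break inside it.
    def wrapped(x):
        x = x * 2
        torch._dynamo.graph_break()
        # When execution resumes here, inner_func must be resolved against
        # this module's globals, not against fn's globals.
        return inner_func(x)
    return wrapped

wrapped = make_wrapped()

@torch.compile
def fn(x):
    return wrapped(x)

fn(torch.ones(3))
```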
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100426
Approved by: https://github.com/jansel
**Summary**: jit.trace usually adds shape information to all the jit::Values in its graph. This is mostly a side effect of how jit tracing is performed, but many users use this behavior for debugging and for better understanding the graph. Previously, CallFunction nodes (inserted by calling jit.script-ed functions) did _not_ have this information attached. This PR attaches this information for the tensor output values.
**Details**:
* First the jit tracer sets a global TracerState object
* Then the jit tracer invokes the python callable that is to be traced
* When the python function gets to a jit.script-ed function, [invokeScriptFunctionFromPython](8693604bc6/torch/csrc/jit/python/pybind_utils.h (L1060)) is called. It inserts a FunctionCall.
* Then after the actual scripted function gets called and we have a concrete output, we attach the concrete output [IValue to the TracerState](8693604bc6/torch/csrc/jit/python/pybind_utils.h (L1001))
* ^^ the setValueTrace call (linked in previous list item) is where this PR makes changes; we revise the jit::Value output of the CallFunction node to use the type of the concrete tensor, which will have actual shapes associated.
**Test**: added a test verifying that shape info appears in the output type for a CallFunction node in a jit-traced graph.
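A rough illustration of the kind of check described (not the PR's actual test; the function names are illustrative):
```python
import torch

@torch.jit.script
def scripted_add_one(x):
    return x + 1

def fn(x):
    return scripted_add_one(x)

traced = torch.jit.trace(fn, torch.randn(2, 3))
# With this change, the output Value of the prim::CallFunction node in
# traced.graph carries a concrete tensor type (with dtype and shape)
# instead of a plain Tensor type.
print(traced.graph)
```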
Differential Revision: [D43592880](https://our.internmc.facebook.com/intern/diff/D43592880)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95544
Approved by: https://github.com/qihqi
crossref is a new strategy for performing tests when you want
to run a normal PyTorch API call, separately run some variation of
the API call (e.g., same thing but all the arguments are meta tensors)
and then cross-reference the results to see that they are consistent.
Any logic you add to CrossRefMode will get run on *every* PyTorch API
call made in the course of PyTorch's test suite. This can
be a good choice for correctness testing if OpInfo testing is not
exhaustive enough.
For now, the crossref test doesn't do anything except verify that
we can validly push a mode onto the torch function mode stack for all
functions.
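For intuition, a hedged sketch of what a cross-referencing torch function mode could look like; CrossRefMode itself lives in the test suite, and this is not its implementation:
```python
import torch
from torch.overrides import TorchFunctionMode

class CrossRefSketch(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        out = func(*args, **kwargs)
        # A real crossref mode would additionally re-run `func` on some
        # variation of the arguments (e.g. meta tensors) and compare the
        # two results here.
        return out

with CrossRefSketch():
    torch.add(torch.ones(2), torch.ones(2))
```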
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75988
Approved by: https://github.com/seemethere
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69546
The arg is not used and was previously deprecated.
Also remove torch.onnx._export_to_pretty_string. It's redundant with the
public version.
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D32994270
Pulled By: msaroufim
fbshipit-source-id: f8f3933b371a0d868d9247510bcd73c31a9d6fcc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64373
* Fix some bad formatting and clarify things in onnx.rst.
* In `export_to_pretty_string`:
  * Add documentation for previously undocumented args.
  * Document that the `f` arg is ignored and mark it deprecated.
  * Update tests to stop setting `f`.
* Warn if `_retain_param_name` is set.
* Use double quotes for string literals in test_operators.py.
Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D30905271
Pulled By: malfet
fbshipit-source-id: 3627eeabf40b9516c4a83cfab424ce537b36e4b3
Summary:
When generating IR for an autograd.Function, if the function has multiple outputs, a TupleUnpack may be inserted after the original function node, and PyTorch only assigns the proper information (tensor element type and shape) to the TupleUnpack outputs, forgetting the original function node. In contrast, if the autograd.Function produces only one output, the original function node may have tensor element type and shape in its output schema. (A hypothetical repro of the multi-output case is sketched after the examples below.)
Before this PR:
- (simplified) IR for autograd.Function with one output: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output (tensor, dtype=float32, shape=[4, 5])
- (simplified) IR for autograd.Function with two outputs: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output_0 **(tensor)**, output_1 **(tensor)** -> TupleUnpack -> output_2 (tensor, dtype=float32, shape=[4, 5]), output_3 (tensor, dtype=float32, shape=[6, 7])
After this PR:
- (simplified) IR for autograd.Function with one output: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output (tensor, dtype=float32, shape=[4, 5])
- (simplified) IR for autograd.Function with two outputs: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output_0 **(tensor, dtype=float32, shape=[4, 5])**, output_1 **(tensor, dtype=float32, shape=[6, 7])** -> TupleUnpack -> output_2 (tensor, dtype=float32, shape=[4, 5]), output_3 (tensor, dtype=float32, shape=[6, 7])
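For concreteness, a hypothetical two-output autograd.Function of the kind described (names are illustrative, not the PR's test):
```python
import torch

class TwoOutputs(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x + 1, x * 2

    @staticmethod
    def backward(ctx, grad1, grad2):
        return grad1 + 2 * grad2

def fn(x):
    return TwoOutputs.apply(x)

traced = torch.jit.trace(fn, torch.randn(2, 3))
# With this change, the PythonOp node's own outputs carry dtype/shape
# information, matching what the TupleUnpack outputs already had.
print(traced.graph)
```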
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57966
Reviewed By: zhxchen17
Differential Revision: D30208207
Pulled By: gmagogsfm
fbshipit-source-id: 42a3d1f9c0932133112a85df0c49cf4ea0afa175
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53460
We have code to ignore this category of warnings, and we found that this particular warning call is incorrect.
Use `stacklevel=2`; otherwise the warning is always filtered out by TracerWarning.ignore_lib_warnings().
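A minimal illustration of the stacklevel fix (the warning text and surrounding function are made up):
```python
import warnings

def some_library_helper(x):
    # With stacklevel=2 the warning is attributed to the caller's frame
    # rather than to this library frame, so a filter that ignores warnings
    # originating inside the library no longer swallows it.
    warnings.warn("tracer warning text", stacklevel=2)
    return x

some_library_helper(1)
```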
Test Plan: sandcastle
Reviewed By: wanchaol
Differential Revision: D26867290
fbshipit-source-id: cda1bc74a28d5965d52387d5ea2c4dcd1a2b1e86
Summary:
`jit.trace` recursively gathers all named attributes in the module at the beginning of
tracing. This is fine in a pure-tracing environment, but it breaks when a
scripted module containing an InterfaceType'd submodule is involved.
Because an InterfaceType, by design, is not allowed to have any attributes,
some of the gathered attributes turn into fatal errors in subsequent
graph rewrite passes.
This PR fixes the bug by distinguishing InterfaceType'd submodules from
normal ClassType'd submodules.
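A hedged sketch of the failing setup (class names are illustrative; the real test in the PR may differ):
```python
import torch

@torch.jit.interface
class SubInterface(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pass

class SubImpl(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

class ScriptedOwner(torch.nn.Module):
    # The submodule is typed as the interface, so it has no attributes
    # visible to attribute gathering.
    sub: SubInterface

    def __init__(self):
        super().__init__()
        self.sub = SubImpl()

    def forward(self, x):
        return self.sub(x)

class TracedTop(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.owner = torch.jit.script(ScriptedOwner())

    def forward(self, x):
        return self.owner(x)

# Before the fix, attribute gathering through the InterfaceType'd submodule
# could cause later graph rewrite passes to fail; after the fix this traces.
torch.jit.trace(TracedTop(), torch.randn(2, 3))
```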
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53052
Reviewed By: wanchaol
Differential Revision: D26735566
Pulled By: gmagogsfm
fbshipit-source-id: a14aee6f1fe8000f80c2dc60bdf19acee6225090
Summary:
Previously, `torch.jit.trace` relied on autograd hooks to infer names of tensors in the computation, including those of function/method arguments. This often doesn't work out because:
- These names often do not exist.
- The tracer uses the argument name of the first tensor operation on each tensor as the inferred argument name, and these tensor operations have programmatically generated names like `argument_1`.
This PR extracts argument names directly from the Python function and passes them down to the tracer, which then assigns them to the correct graph inputs. This way, we always have the correct argument names captured in the IR.
This is useful both for debugging and for supporting the use of `InterfaceType` to represent traced modules.
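A small illustration of the resulting behavior (not the PR's test; the function is made up):
```python
import torch

def scale(input_tensor, weight):
    return input_tensor * weight

traced = torch.jit.trace(scale, (torch.randn(3), torch.randn(3)))
# With this change the graph inputs are named after the Python arguments
# (%input_tensor, %weight) rather than names inferred from the first tensor
# op that touched each tensor.
print(traced.graph)
```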
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51775
Reviewed By: izdeby
Differential Revision: D26273105
Pulled By: gmagogsfm
fbshipit-source-id: 934a385041137dc3731bb6fa8657b11532fed9e5
Summary:
The attributes in `dir(mod)` may not all be valid; calling `getattr` on an invalid one throws an error.
Use `hasattr` to test whether an attribute is valid.
Here is an example:
```python
class A:
    def __init__(self, x):
        if x:
            self._attr = 1

    @property
    def val(self):
        return getattr(self, '_attr')

a = A(False)
print('val' in dir(a))
print(hasattr(a, 'val'))

b = A(True)
print('val' in dir(b))
print(hasattr(b, 'val'))
```
And the outputs:
```
True
False
True
True
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50680
Reviewed By: malfet
Differential Revision: D26103975
Pulled By: eellison
fbshipit-source-id: 67a799afe7d726153c91654d483937c5e198ba94
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50414
If the index supplied from Python is an integral type, it is converted to int64_t, which is traced correctly.
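For illustration, a traced indexing function of the sort affected (a hypothetical example, not the PR's test case):
```python
import torch

def select_row(x):
    # The integral Python index is converted to int64_t so the select is
    # recorded correctly in the trace.
    return x[1]

traced = torch.jit.trace(select_row, torch.randn(4, 3))
print(traced.graph)
```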
Test Plan:
new test case
Imported from OSS
Reviewed By: ZolotukhinM
Differential Revision: D25930773
fbshipit-source-id: a3dfeb49df1394c5c8bea0de46038d2c549a0dc6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49765
Some PyTorch modules can have None as a submodule, which causes the following error in JIT tracing.
Repro script:
```python
import torch

class TestModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.submod = torch.nn.Linear(3, 4)
        self.submod = None

    def forward(self, inputs):
        return inputs

m = TestModule()
tm = torch.jit.trace(m, torch.tensor(1.))
```
Error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 742, in trace
    _module_class,
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 928, in trace_module
    module = make_module(mod, _module_class, _compilation_unit)
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 560, in make_module
    return _module_class(mod, _compilation_unit=_compilation_unit)
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 1039, in __init__
    submodule, TracedModule, _compilation_unit=None
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 560, in make_module
    return _module_class(mod, _compilation_unit=_compilation_unit)
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 988, in __init__
    assert isinstance(orig, torch.nn.Module)
AssertionError
```
This pull request changes the JIT-tracing logic to skip the None submodule when tracing.
Test Plan: `buck test mode/dev //caffe2/test:jit -- test_trace_skip_none_submodule`
Reviewed By: wanchaol
Differential Revision: D25670948
fbshipit-source-id: 468f42f5ddbb8fd3de06d0bc224dc67bd7172358