Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57749
Add to an FX test.
Test Plan: Imported from OSS
Reviewed By: huiguoo
Differential Revision: D28425974
fbshipit-source-id: 195c7a1944decb7a2a99c2831cab38485f32be17
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57719.
This PR fixes `torch.Tensor{__rsub__, __rdiv__, __rtruediv__, __pow__, __rmatmul__}` to return `NotImplemented` instead of raising a `TypeError`.
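To illustrate why this matters, a minimal sketch of Python's binary-op fallback protocol (`Wrapper` is an illustrative type, not from this PR):
```python
import torch

class Wrapper:
    def __init__(self, t):
        self.t = t

    def __rpow__(self, base):
        return base ** self.t

t = torch.ones(2)
# Tensor.__pow__ does not understand Wrapper. Returning NotImplemented
# (instead of raising TypeError) lets Python fall back to Wrapper.__rpow__:
print(t ** Wrapper(t))  # tensor([1., 1.])
```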
cc/ mruberry: The first commit of this PR is the same as 1d209db1cc except for the commit message.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57934
Reviewed By: mruberry
Differential Revision: D28351931
Pulled By: albanD
fbshipit-source-id: 985457a44dba24d2496794dfb8c1661cbcd4ff8f
Summary:
```
class Foo(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, y, x):
        for k in x:
            for v in x[k]:
                v += y
        return x

example_dict = {'x': {'a': [fx.HOLE], 'z': [fx.HOLE, fx.HOLE]}}
new_f = fx.symbolic_trace(Foo(), concrete_args=example_dict)
print(new_f.code)
new_f(torch.randn(5), {'x': {'a': [torch.randn(5)], 'z': [torch.randn(5), torch.randn(5)]}})
fx.symbolic_trace(new_f, concrete_args=example_dict)
```
prints out
```
def forward(self, y, x):
    y, tree_2, tree_3, tree_4 = pytree.tree_flatten([y, x])[0]
    add = tree_2 + y
    add_1 = tree_3 + y
    add_2 = tree_4 + y; y = None
    return {'a': [tree_2], 'z': [tree_3, tree_4]}
```
Currently, I store `in_spec` as an extra attribute on `fx.Graph`, and then include it when we do the codegen. I'm not sure if this is the right approach - it introduces a divergence between what's in `fx.Graph` and what's in the Python code.
Perhaps the best API is something explicit like `fx.Graph.flatten_args`, but that does make calling things a bit more verbose.
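For context, a minimal sketch of the `pytree` calls involved (`torch.utils._pytree` is a private module, so treat this as illustrative):
```python
import torch
from torch.utils import _pytree as pytree

# tree_flatten returns (leaves, spec); the spec is the `in_spec` that
# codegen needs in order to rebuild the original nested structure.
args = [torch.randn(5), {'x': {'a': [torch.randn(5)], 'z': [torch.randn(5)]}}]
leaves, spec = pytree.tree_flatten(args)
print(len(leaves))  # 3 flat tensors
restored = pytree.tree_unflatten(leaves, spec)
```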
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55888
Reviewed By: jamesr66a
Differential Revision: D27884694
Pulled By: Chillee
fbshipit-source-id: f9e8a70c63a8df63c9f9bd0a6459255daa5a8df8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57383
Notes: I picked up an activation from https://github.com/pytorch/pytorch/issues/56969. You can look at the [activations.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/Activation.cpp#L429) file which has both forward and backward kernel code to help you write the NNC lowering and the symbolic gradient.
I added a test in test_jit_fuser_te for the fusion, and I added an OpInfo and asserted that we expect to see autodiffable nodes to test the symbolic gradient.
Test Plan: Imported from OSS
Reviewed By: mrshenli
Differential Revision: D28197820
Pulled By: eellison
fbshipit-source-id: 05305d85c5bb0847c8f911b95ba47b137dca7e90
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45687
The fix changes the input size check for `InstanceNorm*d` to be more restrictive: it now correctly rejects sizes with only a single spatial element, regardless of batch size, to avoid infinite variance.
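A minimal repro sketch of the newly rejected case (assuming the check raises `ValueError`, like the analogous batch-norm check):
```python
import torch
import torch.nn as nn

m = nn.InstanceNorm1d(3)
m.train()
try:
    # (N, C, L) with L == 1: one spatial element per channel means the
    # variance is undefined, so this is rejected for any batch size.
    m(torch.randn(4, 3, 1))
except ValueError as e:
    print(e)
```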
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56659
Reviewed By: pbelevich
Differential Revision: D27948060
Pulled By: jbschlosser
fbshipit-source-id: 21cfea391a609c0774568b89fd241efea72516bb
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55398
Generates tests that call `symbolic_trace` on torchvision models and verify output parity between the eager model, the `fx.GraphModule`, and the `jit.ScriptModule`.
Test errors: GoogleNet and Inception models throw a type mismatch when scripting the traced `fx.GraphModule`:
```
Return value was annotated as having type __torch__.torchvision.models.googlenet.GoogLeNetOutputs but is actually of type Tensor:
    dropout = self.dropout(flatten); flatten = None
    fc = self.fc(dropout); dropout = None
    return fc
    ~~~~~~~~~ <--- HERE
```
Relevant type inconsistency: 512ea299d4/torchvision/models/googlenet.py (L200)
```
@torch.jit.unused
def eager_outputs(self, x: Tensor, aux2: Tensor, aux1: Optional[Tensor]) -> GoogLeNetOutputs:
    if self.training and self.aux_logits:
        return _GoogLeNetOutputs(x, aux2, aux1)
    else:
        return x  # type: ignore[return-value]
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55744
Reviewed By: albanD
Differential Revision: D27920595
Pulled By: suraj813
fbshipit-source-id: 01f6f2aef7badbde29b5162a7787b5af9398090d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55550
Add a test for `symbolic_trace` on `torch.nn.functional`.
Tests against all `functional`s that take a `torch.Tensor` argument, plus the `functional`s from `FUNCTIONALS_WITHOUT_ANNOTATION`:
```py
FUNCTIONALS_WITHOUT_ANNOTATION = (
"adaptive_max_pool1d",
"adaptive_max_pool2d",
"adaptive_max_pool3d",
"fractional_max_pool2d",
"fractional_max_pool3d",
"max_pool1d",
"max_pool2d",
"max_pool3d",
"gaussian_nll_loss",
"upsample",
"upsample_bilinear",
"upsample_nearest",
)
```
`UNTRACEABLE_FUNCTIONALS` lists 110 currently untraceable `functional`s with their expected `Error`s:
- `BUILT_IN_FUNC`: built-in functions or built-in methods cannot be traced.
- `PROXY_ITERATED`: Proxy object cannot be iterated. This can be attempted when used in a for loop or as *args or **kwargs function arguments.
- `LEN_ERROR`: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope (a usage sketch follows this list).
- `ARG_TYPE_MISMATCH`: `functional()`: argument <name> (position <n>) must be <type>, not Proxy.
- `CONTROL_FLOW`: symbolically traced variables cannot be used as inputs to control flow.
- `INTERPOLATE_ARGS_CONFLICT`: when tracing the functional by calling `interpolate(input, size, scale_factor, mode="bilinear", align_corners=True)`, `ValueError("only one of size or scale_factor should be defined")` is raised.
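As promised above, a minimal sketch of the `LEN_ERROR` workaround:
```python
import torch
import torch.fx

torch.fx.wrap('len')  # must be called at module scope, per the error message

def f(x):
    return x + len(x)

traced = torch.fx.symbolic_trace(f)
print(traced.code)  # len now shows up as a call_function node
```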
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D27659367
Pulled By: ejguan
fbshipit-source-id: d0d05e4d94e0b85f47e6c171a31f0d41b1387373
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52859
This reverts commit 92a4ee1cf6.
Added support for bfloat16 on CUDA 11 and removed the fast path for empty input tensors that was affecting the autograd graph.
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D27402390
Pulled By: heitorschueroff
fbshipit-source-id: 73c5ccf54f3da3d29eb63c9ed3601e2fe6951034
Summary:
This PR:
- Updates the structure of the SampleInput class to require the "input" attribute be a tensor
- Limits unary ufuncs to test only the uint8, long, float16, bfloat16, float and cfloat dtypes by default
- Limits variant testing to the float dtype
- Removes test_variant_consistency from test_unary_ufuncs.py since it's now redundant with variant testing in test_ops.py
- Adds backwards supported testing to clarify failures that were coming from variant testing
This should decrease end-to-end test time.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53255
Reviewed By: ngimel
Differential Revision: D27043643
Pulled By: mruberry
fbshipit-source-id: 91d6b483ad6e2cd1b9ade939d42082980ae14217
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53444
GraphModule construction has two options when constructing the base nn.Module: a dict of names to attrs to assign to the GraphModule, or another nn.Module to copy attrs from.
- For the dict case, add logic to explicitly register `torch.Tensor`s that are not `nn.Parameter`s as buffers on the GraphModule, else fall back to `__setattr__` (see the sketch after this list).
- For the other `nn.Module` case, check whether the attr to copy in is a buffer in the other module and register it as such, else fall back to `__setattr__`.
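A sketch of the dict case (the graph here is hand-built just to exercise the constructor):
```python
import torch
import torch.fx as fx

g = fx.Graph()
x = g.placeholder('x')
buf = g.get_attr('buf')
g.output(g.call_function(torch.add, (x, buf)))

# A plain tensor in the dict is now registered as a buffer rather than
# assigned via __setattr__:
gm = fx.GraphModule({'buf': torch.zeros(3)}, g)
print(list(gm.named_buffers()))
```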
Test Plan: Added tests for fetching params and buffers from a GraphModule using both dict and module `__init__`s
Reviewed By: jamesr66a
Differential Revision: D26860055
fbshipit-source-id: 8d9999f91fef20aaa10969558006fc356247591f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52658
DCE will reverse iterate over the graph looking for nodes without users and delete them. It will skip over unused placeholders (since this affects the signature of the method) and outputs (which never have users but we want to keep them :) )
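A usage sketch (assuming this lands as `Graph.eliminate_dead_code`):
```python
import torch
import torch.fx as fx

def f(x):
    dead = x * 2  # never used; DCE should delete the mul node
    return x + 1

gm = fx.symbolic_trace(f)
gm.graph.eliminate_dead_code()
gm.recompile()
print(gm.code)  # the x * 2 node is gone; placeholder and output remain
```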
Test Plan: Added unit tests
Reviewed By: jamesr66a, khabinov, chenccfb
Differential Revision: D26602212
fbshipit-source-id: f4f196973e40546076636090bb0008c24f33795e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51976
FX serializes things by serializing Python code as a string and exec'ing
it on load. This accomplishes one goal (we don't have to pickle the
graph object directly) but breaks the pickle abstraction in ways that
are not composable with `torch.package`.
In particular:
1. `forward` is serialized by saving Python code. On load, it's installed by `exec`ing that code. This `exec` call needs to have the right importer installed, otherwise it will not import modules from the `torch.package` but instead import from the Python environment.
2. Any types/functions used are emitted as `import` statements in the generated Python code. These are effectively dynamic dependencies of the `GraphModule` being saved, and need to be registered as such so that the `PackageImporter` will package them.
To address these, this PR introduces a new protocol for the importer/exporter: `__reduce_package__`.
A class can implement `__reduce_package__` to customize how it is placed in the importer/exporter. It functions very similarly to `__reduce__`, except:
- `__reduce_package__` takes one argument, which is the `PackageExporter` instance. Users can use this instance to save things to the package to implement their serialization. (`__reduce__` takes no args.)
- Only the 2-element tuple version of the `__reduce__` return value is supported (this could be extended if necessary).
- When the reduction function is called on load, an additional argument is prepended to the args tuple: the `PackageImporter` instance doing the loading.
The `__reduce_package__` protocol is defined using `persistent_id` and `persistent_load`, which ensures that we can still use the C implementation of the pickler by default.
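A sketch of a class opting into the protocol (`Payload` and the `save_text`/`load_text` resource helpers are illustrative assumptions, not code from this PR):
```python
class Payload:  # hypothetical example type
    def __init__(self, text):
        self.text = text

    def __reduce_package__(self, exporter):
        # `exporter` is the PackageExporter; stash state inside the package...
        exporter.save_text('payloads', 'data.txt', self.text)
        # ...then return (fn, args). On load, the PackageImporter is
        # prepended to args before fn is invoked.
        return (_load_payload, ('payloads', 'data.txt'))

def _load_payload(importer, pkg, resource):
    # `importer` is the PackageImporter prepended at load time.
    return Payload(importer.load_text(pkg, resource))
```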
Pull Request resolved: #51971
Test Plan: Imported from OSS
Reviewed By: zdevito
Differential Revision: D26340591
Pulled By: suo
fbshipit-source-id: 5872a7d22e832056399a7372bae8a57807717882
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52473
Use `map_aggregate` to create the output for the new graph so that it won't raise an error when we have outputs that are not `Proxy`s.
Test Plan: `test_transformer_multi_outputs` in `test_fx.py`
Reviewed By: jamesr66a
Differential Revision: D26502277
fbshipit-source-id: 404d9030a9b84db3f66f8505887a75717a28ad30
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974
Right now, when an FX `Graph` references an external object, we will emit code like:
```
import foo

def forward(input: foo.bar.baz):
    ...
```
This is problematic in a world with `torch.package`, since then name
`foo.bar.baz` may reference a name from any number of packages.
This PR lays the groundwork for FX-package integration by separating the resolution of external references from the generation of the function code.
When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.
At serialization time, we use a `ModuleEnv` to resolve the globals dict to a set of import statements that can be run to reproduce the `globals` namespace. This is only used on serialization/deserialization, and those functions are expected to check that the import statements are producing the correct results.
Concretely, the code above will now look like:
```
from foo.bar import baz as foo_bar_baz

def forward(input: foo_bar_baz):
    ...
```
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D26340593
Pulled By: suo
fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52297
Before, an `nn.Module` with submodules would fail AST rewriting with `TypeError: 'RewrittenModule' object does not support item assignment`. (Try the `test_ast_rewriter_reassigns_submodules` test case on `master`.) This PR fixes the issue and adds additional test cases.
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D26483820
Pulled By: ansley
fbshipit-source-id: 757e898dc2b0a67daf2bd039d555b85f4e443322
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51648
The following code will throw during the call to `traced(5)`:
```python
class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.W = torch.nn.Parameter(torch.randn(5))

    def forward(self, x):
        return torch.dot(self.W, x)

traced = fx.symbolic_trace(M())
traced(5)
```
Traceback before:
```
Traceback (most recent call last):
  File "test/tinytest.py", line 26, in <module>
    traced(5)
  File "/home/ansley/local/pytorch/torch/fx/graph_module.py", line 338, in wrapped_call
    return self._cls_call(self, *args, **kwargs)
  File "/home/ansley/local/pytorch/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_0>", line 4, in forward
TypeError: dot(): argument 'tensor' (position 2) must be Tensor, not int
```
Traceback after:
```
Traceback (most recent call last):
  File "/home/ansley/local/pytorch/torch/fx/graph_module.py", line 338, in wrapped_call
    return torch.nn.Module.__call__(self, *args, **kwargs)
  File "/home/ansley/local/pytorch/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_1>", line 4, in forward
    dot_1 = torch.dot(w, x); w = x = None
TypeError: dot(): argument 'tensor' (position 2) must be Tensor, not int

Call using an FX-traced Module, line 4 of the traced Module's generated forward function:
    w = self.W
    dot_1 = torch.dot(w, x); w = x = None
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    relu_1 = dot_1.relu(); dot_1 = None
    return relu_1
```
(Note that the same `TypeError` is thrown despite modifying the traceback.)
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D26424005
Pulled By: ansley
fbshipit-source-id: 368f46ba81fb3111bd09654825bb2ac5595207d1
Summary:
Currently it's passed in as a dict, but it might be worth considering whether we want to support other methods of passing it in (like a list corresponding to the positional args).
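A usage sketch of the dict form:
```python
import torch
import torch.fx as fx

def f(x, flag):
    if flag:  # control flow on a Proxy would normally fail to trace
        return x * 2
    return x - 1

# Specialize the trace on flag=True via the concrete_args dict:
traced = fx.symbolic_trace(f, concrete_args={'flag': True})
print(traced.code)  # only the x * 2 branch is captured
```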
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51609
Reviewed By: zou3519
Differential Revision: D26224464
Pulled By: Chillee
fbshipit-source-id: 305769db1a6e5fdcfb9e7dcacfdf153acd057a5a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51175
Gives a suggestion about how to deal with the immutable args/kwargs lists.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D26093478
Pulled By: zdevito
fbshipit-source-id: 832631c125561c3b343539e887c047f185060252
Summary:
This contains some improvements and refactoring to how patching is done in `torch.fx.symbolic_trace`.
1) Functions from `math.*` are now supported without needing to call `torch.fx.wrap()` (see the sketch after this list). `wrap()` actually errors on some of these functions because they are written in C and don't have `__code__`, requiring use of the string version. `math` usage is relatively common, for example [BERT uses math.sqrt here](6f79061bd1/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/attention/single.py (L16)). Both `math.sqrt()` and `from math import sqrt` (copying to the module namespace) are supported. When modules are called, FX now searches the module's global scope to find methods to patch.
2) [Guarded behind `env FX_PATCH_GETITEM=1`] Fixes a failed trace of [PositionalEmbedding from BERT](6f79061bd1/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/position.py (L24)), which failed to trace with the error `TypeError: slice indices must be integers or None or have an __index__ method` (a Proxy() is getting passed into `Tensor.__getitem__`). See https://github.com/pytorch/pytorch/issues/50710 for why this is disabled by default.
3) Support for automatically wrapping methods that may have been copied to a different module scope via an import like `from foo import wrapped_function`. This also isn't exposed in `torch.fx.wrap`, but is used to implement `math.*` support.
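A sketch of the `math.*` support from item 1:
```python
import math
import torch
import torch.fx as fx

def scaled_dot(q, k):
    # math.sqrt on a traced value no longer needs an explicit fx.wrap call
    return (q @ k.t()) / math.sqrt(q.size(-1))

print(fx.symbolic_trace(scaled_dot).code)
```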
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50793
Test Plan: Added unittests to check each feature
Reviewed By: jamesr66a
Differential Revision: D25999788
Pulled By: jansel
fbshipit-source-id: f1ce11a69b7d97f26c9e2741c6acf9c513a84467
Summary:
`torch.fx.wrap()` could not be used as a decorator as the docstring claimed because it returned None.
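A sketch of the now-working decorator form:
```python
import torch
import torch.fx

@torch.fx.wrap  # now returns the function instead of None
def padding_amount(x):
    return len(x) % 4

def f(x):
    return x + padding_amount(x)

traced = torch.fx.symbolic_trace(f)
print(traced.code)  # padding_amount appears as an opaque call_function
```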
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50677
Test Plan: Added `test_wrapped_via_decorator` which used to fail with `'NoneType' object is not callable` and now passes
Reviewed By: jamesr66a
Differential Revision: D25949313
Pulled By: jansel
fbshipit-source-id: 02d0f9adeed812f58ec94c94dd4adc43578f21ce
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49931
This fixes #49932. The `maybe_return_annotation` was not being passed by reference, so it was never getting modified.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D25725582
Pulled By: esqu1
fbshipit-source-id: 4136ff169a269d6b98f0b8e14d95d19e7c7cfa71
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48732
Add support for ScriptObject attributes in symbolic tracing.
Test Plan: OSS CI
Reviewed By: jamesr66a
Differential Revision: D25116185
fbshipit-source-id: c61993c84279fcb3c91f1d44fb952a8d80d0e552
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47973
Currently torch.Assert is not scriptable, which makes it not very useful for production code. According to jamesr66a, moving this to C++ op land will help with scriptability. This PR implements the change.
Note: with the current code the Assert is scriptable, but it is a no-op after being scripted. Would love suggestions on how to address that (can be in a future PR).
Test Plan:
```
python test/test_utils.py TestAssert.test_assert_scriptable
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
```
Reviewed By: supriyar
Differential Revision: D24974299
Pulled By: vkuzo
fbshipit-source-id: 20d4f4d8ac20d76eee122f2cdcdcdcaf1cda3afe
Summary:
Change Partitioner.py file name to partitioner.py
Change GraphManipulation.py file name to graph_manipulation.py
Move test_replace_target_nodes_with() to test_fx_experimental.py
Remove the unnecessary argument in size_based_partition() in Partitioner class
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47914
Reviewed By: gcatron
Differential Revision: D24956653
Pulled By: scottxu0730
fbshipit-source-id: 25b65be7dc7d64e90ffdc59cf394446fee83c3e6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47399
Currently torch.Assert is not scriptable, which makes it not very useful for production code. According to jamesr66a, moving this to C++ op land will help with scriptability. This PR implements the change.
Note: with the current code the Assert is scriptable, but it is a no-op after being scripted. Would love suggestions on how to address that (can be in a future PR).
Test Plan:
```
python test/test_utils.py TestAssert.test_assert_scriptable
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
```
Imported from OSS
Reviewed By: eellison
Differential Revision: D24740727
fbshipit-source-id: c7888e769c921408a3020ca8332f4dae33f2bc0e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47763
Changing the name due to the discussion in
https://github.com/pytorch/pytorch/pull/47399.
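A usage sketch under the new name:
```python
import torch
import torch.fx

def f(x):
    # symbolically traceable (and scriptable) replacement for `assert`
    torch._assert(x.dim() == 2, "expected a 2D tensor")
    return x.sum()

traced = torch.fx.symbolic_trace(f)
```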
Test Plan:
```
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
python test/test_fx_experimental.py
```
Imported from OSS
Reviewed By: ezyang
Differential Revision: D24891767
fbshipit-source-id: 01c7a5acd83bf9c962751552780930c242134dd2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47537
When a module only appears in a type constructor like `List[torch.Tensor]`, it previously didn't get added to the list of used modules. This fixes it by introspecting on the type constructor.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24806317
Pulled By: zdevito
fbshipit-source-id: 263391af71e1f2156cbefaab95b9818c6b9aaae1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46669
Make `Graph`'s deepcopy behavior iterative rather than recursive. This prevents stack overflow issues with very large `Graph`s.
Test Plan: Imported from OSS
Reviewed By: suo
Differential Revision: D24455120
Pulled By: jamesr66a
fbshipit-source-id: 5c37db5acabe313b9a7a464bebe2a82c59e4e2e9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45193
This change makes it possible to subclass the tracer to add additional
behavior when you know something about the shape of the Proxy objects,
by overriding the defaults for how the tracer tries to make something iterable,
looks for keys for **kwargs, or tries to convert to a boolean.
An example test shows how this can be used to tag inputs with shapes.
It can also be combined with create_node to do type propagation during tracing to fulfill requests like iter.
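A minimal sketch of such a subclass (the `to_bool` specialization policy here is a hypothetical example):
```python
import torch
import torch.fx as fx

class PermissiveTracer(fx.Tracer):
    # Rather than raising a TraceError when a Proxy is used in a boolean
    # context, specialize the trace as if the condition were True.
    def to_bool(self, obj):
        return True

def f(x):
    if x.sum() > 0:  # would normally abort tracing
        return x * 2
    return x

graph = PermissiveTracer().trace(f)
print(fx.GraphModule({}, graph).code)
```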
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24258993
Pulled By: zdevito
fbshipit-source-id: 6ece686bec292e51707bbc7860a1003d0c1321e8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46325
Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate the node by assigning a new value to .args or .kwargs
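A sketch of the sanctioned mutation pattern:
```python
import operator
import torch
import torch.fx as fx

def f(x, y):
    return x + y

gm = fx.symbolic_trace(f)
for node in gm.graph.nodes:
    if node.op == 'call_function' and node.target == operator.add:
        # .args is an immutable tuple; rewire by assigning a new tuple
        node.args = (node.args[1], node.args[0])
gm.recompile()
```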
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24308672
Pulled By: zdevito
fbshipit-source-id: a5305e1d82668b36e46876c3bc517f6f1d03dd78
Summary:
Reopen the PR: https://github.com/pytorch/pytorch/pull/45837
This PR adds a new feature to the Partitioner() class called size_based_partition. Given a list of devices with the same memory size, this function can distribute graph nodes across the devices. To implement this feature, several helper functions are created in Partitioner.py and GraphManipulation.py.
A unit test is also added in test/test_fx_experimental.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46282
Reviewed By: gcatron
Differential Revision: D24288470
Pulled By: scottxu0730
fbshipit-source-id: e81b1e0c56e34f61e497d868882126216eba7538
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46121
Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate the node by assigning a new value to .args or .kwargs
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24232288
Pulled By: zdevito
fbshipit-source-id: c95b1a73ae55ad9bdb922ca960c8f744ff732100
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45708
This makes it possible to define reasonable semantics for what happens
when a node in the list is deleted. In particular the iteration over nodes
will continue at the node that was after the deleted node _when it was deleted_.
If the new node is also deleted, we skip it and continue to the node after it.
Eventually we either reach a node still in the list or we reach the end of the list.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24089516
Pulled By: zdevito
fbshipit-source-id: d01312d11fe381c8d910a83a08582a2219f47dda
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45471
Instead of assuming that 'torch' is the only module used by generated code,
use the qualified names of builtin functions to generate import statements
for all builtins. This allows user-captured functions to also get code generated correctly.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23978696
Pulled By: zdevito
fbshipit-source-id: ecbff150e3de38532531cdadbfe4965468f29a38
Summary:
WIP: This PR is work in progress for the partition of the FX graph module. The _Partitioner_ class generates partitions for the graph module; the _Partition_ class represents a partition node within those partitions.
_Partitioner()_ : create a partitioner
_partition_graph(self, fx_module: GraphModule, devices: List[str]) -> None_:
use fx graph module and devices as the input and create partition_ids for each node inside the graph module
_dump_partition_DAG(self) -> None_:
print out the information about each partition, including its id, its backend type (what type of device this partition uses), all the nodes included in this partition, its parent partitions, children partitions, input nodes, and output nodes.
So far, only a single partition is considered, which means there is only one device with unlimited memory.
A unit test called _test_find_single_partition()_ is added to test if all nodes in the graph are marked for the only partition.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45429
Reviewed By: izdeby
Differential Revision: D24026268
Pulled By: scottxu0730
fbshipit-source-id: 119d506f33049a59b54ad993670f4ba5d8e15b0b
Summary:
This PR adds a new GraphManipulation library for operating on the GraphModule nodes.
It also adds an implementation of replace_target_nodes_with, which replaces all nodes in the GraphModule matching a specific op/target with a new specified op/target. An example use of this function is replacing a generic operator with an optimized operator for specific sizes and shapes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44775
Reviewed By: jamesr66a
Differential Revision: D23874561
Pulled By: gcatron
fbshipit-source-id: e1497cd11e0bbbf1fabdf137d65c746248998e0b
Summary:
This PR adds a get_all_users_of function, which returns all the users of a specific node. A unit test is also added.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45216
Reviewed By: ezyang
Differential Revision: D23883572
Pulled By: scottxu0730
fbshipit-source-id: 3eb68a411c3c6db39ed2506c9cb7bb7337520ee4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45188
This is a symbolically traceable alternative to Python's `assert`.
It should be useful to allow people who want to use FX to also
be able to assert things.
A bunch of TODO(before land) comments are inline - would love thoughts on where the best place is for this code to live, and what this function should be called (since `assert` is reserved).
Test Plan:
```
python test/test_fx.py TestFX.test_symbolic_trace_assert
```
Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23861567
fbshipit-source-id: d9d6b9556140faccc0290eba1fabea401d7850de
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44795
Today, we build our cpp tests twice, once as a standalone gtest binary,
and once linked in `libtorch_python` so we can call them from
`test_jit.py`.
This is convenient (it means that `test_jit.py` is a single entry point
for all our tests), but has a few drawbacks:
1. We can't actually use the gtest APIs, since we don't link gtest into
`libtorch_python`. We're stuck with the subset that we want to write
polyfills for, and an awkward registration scheme where you have to
write a test then include it in `tests.h`.
2. More seriously, we register custom operators and classes in these
tests. In a world where we may be linking many `libtorch_python`s, this
has a tendency to cause errors with `libtorch`.
So now, only tests that explicitly require cooperation with Python are
built into `libtorch_python`. The rest are built into
`build/bin/test_jit`.
There are tests which require that we define custom classes and
operators. In these cases, I've built them into separate `.so`s that we
call `torch.ops.load_library()` on.
Test Plan: Imported from OSS
Reviewed By: SplitInfinity, ZolotukhinM
Differential Revision: D23735520
Pulled By: suo
fbshipit-source-id: d146bf4e7eb908afa6f96b394e4d395d63ad72ff
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44566
The Delegate objects were confusing. They were supposed to be a way to configure how tracing works, but in some cases they appeared necessary for constructing graphs, which was not true. This makes the organization clearer by removing Delegate and moving its functionality into a Tracer class, similar to how pickle has a Pickler class.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23683177
Pulled By: zdevito
fbshipit-source-id: 7605a34e65dfac9a487c0bada39a23ca1327ab00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44092
Instead, submodules and weights are installed directly on the
GraphModule by transferring the original modules. This makes it more
likely that scripting will succeed (since we no longer have submodules
that are not used in the trace). It also prevents layered transforms
from having to special case handling of the `root` module. GraphModules
can now be re-traced as part of the input to other transforms.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23504210
Pulled By: zdevito
fbshipit-source-id: f79e5c4cbfc52eb0ffb5d6ed89b37ce35a7dc467
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43248
We add support for the `__torch_function__` override for C++ custom ops. The logic is the same as for the other components, like torch.nn.Module.
Refactored some code a little bit to make it reusable.
Test Plan: buck test //caffe2/test:fx -- test_torch_custom_ops
Reviewed By: bradleyhd
Differential Revision: D23203204
fbshipit-source-id: c462a86e407e46c777171da32d7a40860acf061e
Summary:
It's useful if we add additional attributes to nodes in the graph - it's easier to set the attribute on all nodes, even if the value happens to be None.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43432
Reviewed By: jamesr66a
Differential Revision: D23276433
Pulled By: dzhulgakov
fbshipit-source-id: c69e7cb723bbbb4dba3b508a3d6c0e456fe610df
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43653
When nodes are created without an explicit name, a name is generated for
it based on the target. In these cases, we need to avoid shadowing
builtin names. Otherwise, code like:
```
a.foo.bar
```
results in pretty-printed code like:
```
getattr = a.foo
getattr_1 = getattr.bar
```
While this is technically allowed in Python, it's probably a bad idea,
and more importantly is not supported by TorchScript (where `getattr` is
hardcoded).
This PR changes the name generation logic to avoid shadowing all
builtins and language keywords. We already do this for PyTorch
built-ins, so just extend that logic. So now the generated code will
look like:
```
getattr_1 = a.foo
getattr_2 = getattr_1.bar
```
Fixes #43522
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23357420
Pulled By: suo
fbshipit-source-id: 91e9974adc22987eca6007a2af4fb4fe67f192a8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43640
+ added a `self.checkGraphModule` utility function to wrap the common
test assert pattern.
Test Plan: Imported from OSS
Reviewed By: gmagogsfm
Differential Revision: D23356262
Pulled By: suo
fbshipit-source-id: a50626dcb01246d0dbd442204a8db5958cae23ab
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43082
Fixes all present errors in mypy. Does not try to add annotations everywhere.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23145854
Pulled By: zdevito
fbshipit-source-id: 18e483ed605e89ed8125971e84da1a83128765b7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42991
Having Node be both a record of the operator in the graph and the way we _build_ the graph made it difficult to keep the IR data structure separate from the proxying logic in the builder.
Among other issues this means that typos when using nodes would add
things to the graph:
```
for node in graph.nodes:
    node.grph  # does not error; returns a node.Attribute object!
```
This separates the builder into a Proxy object. Graph/Node no longer
need to understand `delegate` objects since they are now just pure IR.
This separates the `symbolic_trace` (proxy.py/symbolic_trace.py) from
the IR (node.py, graph.py).
This also allows us to add `create_arg` to the delegate object,
allowing the customization of how aggregate arguments are handled
when converting to a graph.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23099786
Pulled By: zdevito
fbshipit-source-id: 6f207a8c237e5eb2f326b63b0d702c3ebcb254e4