Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46669
Make `Graph`'s deepcopy behavior iterative rather than recursive. This prevents stack overflow issues with very large `Graph`s.
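As a generic illustration of the idea (not the FX implementation itself), a graph can be copied with plain loops and a memo table instead of recursing through each node's input chain, so very deep graphs cannot exhaust the Python stack. The `Node`/`Graph` classes below are hypothetical stand-ins:
```
# Hypothetical minimal graph structures, not torch.fx's classes.
class Node:
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)

class Graph:
    def __init__(self, nodes=()):
        self.nodes = list(nodes)

def copy_graph_iteratively(graph):
    """Deep-copy a Graph without recursing through each node's inputs."""
    memo = {}                              # old Node -> new Node
    new_nodes = []
    for old in graph.nodes:                # first pass: plain loop, no recursion
        copied = Node(old.name)
        memo[old] = copied
        new_nodes.append(copied)
    for old in graph.nodes:                # second pass: remap inputs via the memo
        memo[old].inputs = [memo[i] for i in old.inputs]
    return Graph(new_nodes)
```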
Test Plan: Imported from OSS
Reviewed By: suo
Differential Revision: D24455120
Pulled By: jamesr66a
fbshipit-source-id: 5c37db5acabe313b9a7a464bebe2a82c59e4e2e9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46573
Original commit changeset: 7dd709b585f8
ghstack-source-id: 114730143
Test Plan: Verified on CircleCI that the previously broken test is fixed.
Reviewed By: zdevito
Differential Revision: D24413096
fbshipit-source-id: 439568c631c4556b8ed6af20fcaa4b1375e554cf
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46493
This will allow us to override the following two methods of Tracer:
-- get_module_qualified_name: finds the qualified name of a module. The default implementation looks the module up among the registered modules and derives the name from there, but in some scenarios the module being called may not be the exact same module that was registered.
-- create_args_for_root: allows creating and passing custom structured input (such as a dictionary with specific keys) to the main module, rather than pure proxy objects. This also lets proxy objects represent only tensors when they are passed to modules (see the sketch below).
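A hedged sketch of what such a subclass might look like; the method names come from the summary above, but their exact signatures and return conventions here are assumptions, not the library's definitions:
```
import torch.fx

class DictInputTracer(torch.fx.Tracer):
    # Signature is an assumption for illustration only.
    def get_module_qualified_name(self, mod):
        # Default-style behavior: look the module up among registered submodules.
        for name, registered in self.root.named_modules():
            if registered is mod:
                return name
        # Fallback for wrapper modules that were never registered on the root.
        return type(mod).__name__.lower()

    # Signature and return convention are assumptions for illustration only.
    def create_args_for_root(self, root_fn, is_module):
        # Build a structured input: a dict with fixed keys whose values are
        # proxies, so only the tensor-valued leaves are traced symbolically.
        structured = {
            "tokens": self.create_proxy("placeholder", "tokens", (), {}),
            "mask": self.create_proxy("placeholder", "mask", (), {}),
        }
        return (structured,), root_fn
```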
ghstack-source-id: 114609258
Test Plan: Unit tests passed
Reviewed By: zdevito, bradleyhd
Differential Revision: D24269034
fbshipit-source-id: d7b67f2349dd516b6f7678e41601d6899403d9de
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45193
This change makes it possible to subclass the tracer to add additional
behavior when you know something about the shape of the Proxy objects,
by overriding the defaults for how the tracer tries to make something iterable,
looks for keys for **kwargs, or tries to convert to a boolean.
An example test shows how this can be used to tag inputs with shapes.
It can also be combined with create_node to do type propagation during
tracing to fulfill requests like `iter`.
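A hedged sketch of what such a subclass could look like; the hook names (`iter`, `keys`, `to_bool`) are assumptions based on the description above, and the policies shown are purely illustrative:
```
import torch.fx

class ShapeAwareTracer(torch.fx.Tracer):
    def __init__(self, known_lengths=None):
        super().__init__()
        # Assumed side table: proxy node name -> known sequence length.
        self.known_lengths = known_lengths or {}

    def iter(self, proxy):
        # If we know the length of this value, unpack it into indexed proxies
        # instead of refusing to iterate during tracing.
        n = self.known_lengths.get(proxy.node.name)
        if n is None:
            raise RuntimeError(f"cannot iterate over {proxy.node.name!r} of unknown length")
        return (proxy[i] for i in range(n))

    def keys(self, proxy):
        # Illustrative policy: assume dict-like traced values have a fixed key set.
        return iter(("input_ids", "attention_mask"))

    def to_bool(self, proxy):
        # Illustrative policy: treat traced values as truthy so simple guards
        # do not abort the trace.
        return True
```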
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24258993
Pulled By: zdevito
fbshipit-source-id: 6ece686bec292e51707bbc7860a1003d0c1321e8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46325
Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate a node by assigning a new value to `.args` or `.kwargs`.
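For example (a short sketch using the public torch.fx API; the module is a placeholder):
```
import torch
from torch.fx import symbolic_trace

class M(torch.nn.Module):
    def forward(self, x, y):
        return x - y

gm = symbolic_trace(M())
sub_node = next(n for n in gm.graph.nodes if n.op == "call_function")
x_node, y_node = sub_node.args

# In-place mutation of the argument container is disallowed, e.g.
#   sub_node.args[0] = y_node   # rejected, would desynchronize uses/users
# Reassigning the whole attribute is the supported way to mutate the node:
sub_node.args = (y_node, x_node)
gm.recompile()                  # generated code now computes y - x
```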
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24308672
Pulled By: zdevito
fbshipit-source-id: a5305e1d82668b36e46876c3bc517f6f1d03dd78
Summary:
Reopen the PR: https://github.com/pytorch/pytorch/pull/45837
This PR adds a new feature to the Partitioner() class called size_based_partition. Given a list of devices with the same memory size, this function distributes graph nodes across the devices. To implement this feature, several helper functions are added in Partitioner.py and GraphManipulation.py.
A unit test is also added in test/test_fx_experimental.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46282
Reviewed By: gcatron
Differential Revision: D24288470
Pulled By: scottxu0730
fbshipit-source-id: e81b1e0c56e34f61e497d868882126216eba7538
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46121
Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate a node by assigning a new value to `.args` or `.kwargs`.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24232288
Pulled By: zdevito
fbshipit-source-id: c95b1a73ae55ad9bdb922ca960c8f744ff732100
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45708
This makes it possible to define reasonable semantics for what happens
when a node in the list is deleted. In particular the iteration over nodes
will continue at the node that was after the deleted node _when it was deleted_.
If that node is also deleted, we skip it and continue to the node after it.
Eventually we either reach a node still in the list or we reach the end of the list.
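A generic sketch of these semantics (not the FX implementation): a doubly linked list whose erased nodes keep their `next` pointer, so an iterator paused on a deleted node can skip forward until it finds a live node or the end:
```
class ListNode:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None
        self.erased = False

class NodeList:
    def __init__(self):
        # A sentinel closes the list into a ring, so append/erase need no special cases.
        self.sentinel = ListNode(None)
        self.sentinel.prev = self.sentinel.next = self.sentinel

    def append(self, value):
        node = ListNode(value)
        last = self.sentinel.prev
        last.next = node
        node.prev = last
        node.next = self.sentinel
        self.sentinel.prev = node
        return node

    def erase(self, node):
        # Unlink from the live neighbours, but keep node.next intact so an
        # in-flight iterator continues at the node that followed it
        # at the moment it was deleted.
        node.prev.next = node.next
        node.next.prev = node.prev
        node.erased = True

    def __iter__(self):
        cur = self.sentinel.next
        while cur is not self.sentinel:
            if cur.erased:
                cur = cur.next   # deleted while we were paused here: skip forward
                continue
            yield cur
            cur = cur.next       # survives erase(cur): "the node after it when deleted"
```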
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D24089516
Pulled By: zdevito
fbshipit-source-id: d01312d11fe381c8d910a83a08582a2219f47dda
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45471
Instead of assuming that 'torch' is the only module used by generated code,
use the qualified names of builtin functions to generate import statements
for all builtins. This allows user-captured functions to also get code generated correctly.
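A hedged sketch of the idea (not the actual codegen): derive the import line from a callable's `__module__` instead of assuming everything lives under `torch`:
```
import math

def import_line_for(fn):
    """Return the import statement needed to call `fn` by qualified name,
    or None for true builtins that need no import."""
    module = getattr(fn, "__module__", None)
    if module in (None, "builtins"):
        return None
    return f"import {module}"

def qualified_name(fn):
    module = getattr(fn, "__module__", "")
    name = getattr(fn, "__qualname__", fn.__name__)
    return f"{module}.{name}" if module and module != "builtins" else name

print(import_line_for(math.sqrt))   # "import math"
print(qualified_name(math.sqrt))    # "math.sqrt"
print(import_line_for(len))         # None: a true builtin needs no import
```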
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23978696
Pulled By: zdevito
fbshipit-source-id: ecbff150e3de38532531cdadbfe4965468f29a38
Summary:
WIP: This PR is work in progress for partitioning an fx graph module. _class Partitioner_ generates partitions for the graph module. _class Partition_ represents a single partition node within those partitions.
_Partitioner()_ : create a partitioner
_partition_graph(self, fx_module: GraphModule, devices: List[str]) -> None_:
use fx graph module and devices as the input and create partition_ids for each node inside the graph module
_dump_partition_DAG(self) -> None_:
print out the information about each partition, including its id, its backend type (what type of device this partition uses), all the nodes included in this partition, its parent partitions, children partitions, input nodes, and output nodes.
So far, only a single partition is considered, which means there is only one device with unlimited memory.
A unit test called _test_find_single_partition()_ is added to check that all nodes in the graph are assigned to the single partition.
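A usage sketch based on the signatures listed above; the import path and the example module are assumptions for illustration:
```
import torch
from torch.fx import symbolic_trace
# Assumed import path for the experimental partitioner described above.
from torch.fx.experimental.Partitioner import Partitioner

class TwoLayer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = torch.nn.Linear(4, 4)
        self.lin2 = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.lin2(torch.relu(self.lin1(x)))

traced = symbolic_trace(TwoLayer())
partitioner = Partitioner()
# A single device with (for now) unlimited memory, per the summary above.
partitioner.partition_graph(traced, ["cpu"])
partitioner.dump_partition_DAG()
```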
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45429
Reviewed By: izdeby
Differential Revision: D24026268
Pulled By: scottxu0730
fbshipit-source-id: 119d506f33049a59b54ad993670f4ba5d8e15b0b
Summary:
This PR adds a new GraphManipulation library for operating on the GraphModule nodes.
It also adds an implementation of replace_target_nodes_with, which replaces all nodes in the GraphModule that match a specific op/target with a new specified op/target. An example use of this function would be replacing a generic operator with an optimized operator for specific sizes and shapes (a sketch of the idea follows).
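A minimal sketch of the underlying idea (not necessarily the library function's exact signature): walk the graph, rewrite the target of matching call nodes, and regenerate code:
```
import operator
import torch
from torch.fx import symbolic_trace

def replace_targets(gm, op, old_target, new_target):
    """Point every (op, old_target) node at new_target instead."""
    for node in gm.graph.nodes:
        if node.op == op and node.target == old_target:
            node.target = new_target
    gm.graph.lint()
    gm.recompile()
    return gm

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

gm = symbolic_trace(Add())
# Swap the generic add for torch.add (standing in for an "optimized" kernel).
replace_targets(gm, "call_function", operator.add, torch.add)
```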
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44775
Reviewed By: jamesr66a
Differential Revision: D23874561
Pulled By: gcatron
fbshipit-source-id: e1497cd11e0bbbf1fabdf137d65c746248998e0b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45261
**Summary**
This commit enables `unused` syntax for ignoring properties. Ignoring properties is more intuitive with this feature enabled.
`ignore` is not supported because class type properties cannot be executed in Python the way an `ignored` function can (they exist only as TorchScript types), and module properties that cannot be scripted are not added to the `ScriptModule` wrapper so that they may execute in Python.
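A hedged sketch of what the enabled syntax might look like; whether `torch.jit.unused` stacks under `property` exactly like this is an assumption based on the summary, not a confirmed API contract:
```
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = 2

    @property
    @torch.jit.unused  # the getter relies on Python-only behavior, so skip compiling it
    def python_only_prop(self):
        import random
        return self.a + random.random()

    def forward(self, x):
        return x + self.a

scripted = torch.jit.script(M())  # succeeds because the property is marked unused
```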
**Test Plan**
This commit updates the existing unit tests for class type and module
properties to test properties ignored using `unused`.
Test Plan: Imported from OSS
Reviewed By: navahgar, Krovatkin, mannatsingh
Differential Revision: D23971881
Pulled By: SplitInfinity
fbshipit-source-id: 8d3cc1bbede7753d6b6f416619e4660c56311d33
Summary:
This PR adds a get_all_users_of function, which returns all the users of a specific node (a sketch of the idea follows). A unit test is also added.
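A minimal sketch of the idea (not necessarily the library function itself): a node's users are the nodes that reference it among their inputs:
```
def get_all_users_of_sketch(graph_module, target_node):
    """Return every node that takes `target_node` directly in its args/kwargs.
    (A full implementation would also flatten nested lists/tuples of arguments.)"""
    users = []
    for node in graph_module.graph.nodes:
        inputs = list(node.args) + list(node.kwargs.values())
        if target_node in inputs:
            users.append(node)
    return users
```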
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45216
Reviewed By: ezyang
Differential Revision: D23883572
Pulled By: scottxu0730
fbshipit-source-id: 3eb68a411c3c6db39ed2506c9cb7bb7337520ee4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44566
The Delegate objects were confusing. They were supposed to be a way to
configure how tracing works, but in some cases they appeared necessary
for constructing graphs, which was not true. This makes the organization
clearer by removing Delegate and moving its functionality into a Tracer class,
similar to how pickle has a Pickler class.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23683177
Pulled By: zdevito
fbshipit-source-id: 7605a34e65dfac9a487c0bada39a23ca1327ab00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44092
Instead, submodules and weights are installed directly on the
graph_module by transferring the original modules. This makes it more
likely that scripting will succeed (since we no longer have submodules
that are not used in the trace). It also prevents layered transforms
from having to special case handling of the `root` module. GraphModules
can now be re-traced as part of the input to other transforms.
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23504210
Pulled By: zdevito
fbshipit-source-id: f79e5c4cbfc52eb0ffb5d6ed89b37ce35a7dc467
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43655
Pure, unadulterated bikeshed. The good stuff.
This makes things more consistent with ScriptModule.
Test Plan: Imported from OSS
Reviewed By: ZolotukhinM
Differential Revision: D23401528
Pulled By: suo
fbshipit-source-id: 7dd8396365f118abcd045434acd9348545314f44
Summary:
It's useful when we add additional attributes to nodes in the graph - it's easier to set the attribute on all nodes, even if the value happens to be None.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43432
Reviewed By: jamesr66a
Differential Revision: D23276433
Pulled By: dzhulgakov
fbshipit-source-id: c69e7cb723bbbb4dba3b508a3d6c0e456fe610df
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43653
When nodes are created without an explicit name, a name is generated for
it based on the target. In these cases, we need to avoid shadowing
builtin names. Otherwise, code like:
```
a.foo.bar
```
results in pretty-printed code like:
```
getattr = a.foo
getattr_1 = getattr.bar
```
While this is technically allowed in Python, it's probably a bad idea,
and more importantly is not supported by TorchScript (where `getattr` is
hardcoded).
This PR changes the name generation logic to avoid shadowing all
builtins and language keywords. We already do this for PyTorch
built-ins, so just extend that logic. So now the generated code will
look like:
```
getattr_1 = a.foo
getattr_2 = getattr_1.bar
```
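A hedged sketch of the kind of check involved (not the actual FX codegen): consult Python's keyword list and builtin namespace, and append a numeric suffix whenever a candidate name would shadow either:
```
import builtins
import itertools
import keyword

def sanitize_name(candidate, used_names):
    """Pick a variable name that shadows neither a keyword, a builtin,
    nor a previously generated name."""
    def is_illegal(name):
        return (keyword.iskeyword(name)
                or hasattr(builtins, name)
                or name in used_names)

    if not is_illegal(candidate):
        used_names.add(candidate)
        return candidate
    for i in itertools.count(1):
        suffixed = f"{candidate}_{i}"
        if not is_illegal(suffixed):
            used_names.add(suffixed)
            return suffixed

used = set()
print(sanitize_name("getattr", used))  # getattr_1 (never bare `getattr`)
print(sanitize_name("getattr", used))  # getattr_2
print(sanitize_name("relu", used))     # relu
```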
Fixes #43522
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Differential Revision: D23357420
Pulled By: suo
fbshipit-source-id: 91e9974adc22987eca6007a2af4fb4fe67f192a8