Commit Graph

9 Commits

Author SHA1 Message Date
Elias Ellison
28ed04c620 [JIT] remove list_with_default op (#37886)
Summary:
We can implement this as a builtin instead of as a registered op.
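
A minimal sketch of the idea, assuming the helper simply fills `None` entries of a requested size list from per-dimension defaults; the name, signature, and exact semantics below are assumptions for illustration, not the actual builtin:

```python
from typing import List, Optional

import torch


@torch.jit.script
def list_with_default(out_size: List[Optional[int]], defaults: List[int]) -> List[int]:
    # Fill unspecified (None) entries from the defaults list.
    result: List[int] = []
    for i in range(len(out_size)):
        v = out_size[i]
        if v is None:
            result.append(defaults[i])
        else:
            result.append(v)
    return result


print(list_with_default([7, None], [4, 4]))  # [7, 4]
```

Because logic like this is expressible directly in TorchScript, it does not need to live in the operator registry.
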
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37886

Differential Revision: D21414329

Pulled By: eellison

fbshipit-source-id: 6e130fa83fbf7ba4d4601f509cb169a2fa804108
2020-05-06 17:32:11 -07:00
Mike Ruberry
30fabd9398 Creates "Versioned Symbol" pattern to preserve serialized Torchscript semantics (#36300)
Summary:
PyTorch users write programs and save them as serialized TorchScript. When this TorchScript is loaded, it contains symbols like "aten::div" describing some of the program's behavior. If the behavior of these symbols has changed since the program was serialized, however, then the original program's semantics may not be preserved.

For example, when we make aten::div always perform "true" division, like NumPy, Python 3, and JAX, serialized TorchScript programs that rely on aten::div performing floor division on integral inputs will break.

This PR demonstrates the "Versioned Symbol" pattern, which lets symbols be remapped to TorchScript builtins that preserve their historic behavior. Using this pattern, after we update aten::div to always perform true division, we could remap it in older TorchScript programs to a builtin that implements its historic behavior.

The pattern is described in the [Versioned Symbols] note in the code and is implemented like this:

- If BuiltinModule is given a version, it checks, before returning a symbol, whether another symbol should be substituted for it.
- versioned_symbol.cpp holds a map from each such symbol to the version range for which another symbol should be substituted for it.
- The substitutions are implemented as builtin functions.

An example using the new, test-only _subcmul function is implemented to test this behavior. A test in jit/test_save_load.py follows the pattern described in the [Versioned Symbols] note and uses a fixture serialized with file version 2 to verify that the historic behavior is preserved.

In the future we will likely need a slightly more complex mechanism that supports multiple substitutions with distinct version ranges; this just requires changing the map to be Symbol->Iterable.
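
As a rough Python illustration of the pattern only (the actual remapping lives in C++ in versioned_symbol.cpp), the idea reduces to a version-gated lookup plus a builtin that reproduces the old behavior; the helper names, the `upgraders::` prefix, and the version range below are hypothetical:

```python
from typing import Dict, Tuple

import torch


def historic_div(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Pre-change aten::div behavior as described above: floor division on
    # integral inputs, true division otherwise.
    if a.is_floating_point() or b.is_floating_point():
        return torch.true_divide(a, b)
    return torch.div(a, b, rounding_mode="floor")


# symbol -> (min_file_version, max_file_version, substitute builtin)
VERSIONED_SYMBOLS: Dict[str, Tuple[int, int, str]] = {
    "aten::div": (0, 3, "upgraders::historic_div"),  # illustrative range
}


def resolve(symbol: str, file_version: int) -> str:
    """Return the symbol an archive serialized at `file_version` should use."""
    entry = VERSIONED_SYMBOLS.get(symbol)
    if entry is not None and entry[0] <= file_version <= entry[1]:
        return entry[2]
    return symbol


print(resolve("aten::div", 2))   # old archive -> "upgraders::historic_div"
print(resolve("aten::div", 99))  # current archive -> "aten::div"
```
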
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36300

Differential Revision: D21058990

Pulled By: mruberry

fbshipit-source-id: 2b7c732878c0ecfcd9f0a6205fb6d6421feeaf61
2020-04-16 04:56:53 -07:00
Mikhail Zolotukhin
cd00bbc23f clang-format. (#35605)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35605

Test Plan: Imported from OSS

Reviewed By: orionr

Differential Revision: D20720486

Pulled By: ZolotukhinM

fbshipit-source-id: f081a9fb6ef84fdce3b8f071d5e251e267854a18
2020-03-28 11:45:06 -07:00
davidriazati
43928effee [jit] Remove _assert_int_or_pair op (#34509)
Summary:
This one doesn't actually do anything, so we don't need an op for it.

It is used inside `torch.nn.functional.unfold`, which is already tested.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34509

Pulled By: driazati

Differential Revision: D20676445

fbshipit-source-id: b72d1308bdec593367ec4e14bf9a901d0b62e1cc
2020-03-27 18:37:49 -07:00
Elias Ellison
e68afe3ab9 [JIT] remove prim::shape op (#34286)
Summary:
Desugar prim::shape to aten::size so that passes don't need to reason about both ops. Serialized models still resolve to `prim::shape`, so this doesn't break BC.
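
A quick sanity check of the equivalence this relies on, as a hedged sketch (the graph printout is indicative, not verbatim compiler output):

```python
import torch


@torch.jit.script
def via_shape(x: torch.Tensor) -> int:
    return x.shape[0]


@torch.jit.script
def via_size(x: torch.Tensor) -> int:
    return x.size(0)


t = torch.randn(3, 4)
assert via_shape(t) == via_size(t) == 3
print(via_shape.graph)  # after this change, both forms lower to aten::size
```
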
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34286

Differential Revision: D20316818

Pulled By: eellison

fbshipit-source-id: d1585687212843f51e9396e07c108f5c08017818
2020-03-26 19:29:25 -07:00
Meghan Lele
6384c2d81b [JIT] clang-format JIT code (#35115)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35115

This commit runs the newly added tools/clang_format.py on the JIT
codebase and includes all of the formatting changes thus produced.

Testing:
Ran the script, CI.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D20568523

Pulled By: SplitInfinity

fbshipit-source-id: e09bdb982ccf090eecfb7c7b461b8d0681eef82b
2020-03-26 11:24:51 -07:00
Michael Suo
c235be42dd [jit] kill script namespace (#34515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515

Once upon a time we thought this was necessary. In reality it is not, so
removing it.

For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.

There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.

Test Plan: Imported from OSS

Differential Revision: D20353503

Pulled By: suo

fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
2020-03-11 23:32:48 -07:00
davidriazati
2111c4ff0c [jit] Add missing tensor properties (#33906)
Summary:
Fixes #30775

This adds TorchScript implementations (copied from `python_variable.cpp`) for the remaining `Tensor` properties that were missing from the JIT, along with a test that fails when new properties are added so we can decide whether we want to add them as well.

For `some_tensor`, adds (see the sketch after this list):

* `some_tensor.T`
* `some_tensor.ndim`
* `some_tensor.is_leaf`
* `some_tensor.name`
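
A minimal sketch exercising these properties from both eager mode and TorchScript (the expected outputs in the comments assume this particular input):

```python
import torch


@torch.jit.script
def props(t: torch.Tensor):
    # Each of these attributes is now also visible inside TorchScript.
    return t.ndim, t.T.shape, t.is_leaf


x = torch.randn(2, 3, requires_grad=True)
print(props(x))                       # (2, [3, 2], True)
print(x.ndim, x.T.shape, x.is_leaf)   # 2 torch.Size([3, 2]) True
```
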
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33906

Pulled By: driazati

Differential Revision: D20153288

fbshipit-source-id: 2ddc48a14267077bc176065267e5ce52181b3d6b
2020-02-28 19:06:11 -08:00
Michael Suo
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00