Commit Graph

111 Commits

Author SHA1 Message Date
Shawn Wu
572f7e069c Enable type check for torch.testing._internal.te_utils.* (#44927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44927

Test Plan: Imported from OSS

Reviewed By: walterddr

Differential Revision: D23776842

Pulled By: sshawnwu

fbshipit-source-id: 65c028169a37e1f2f7d9fdce8a958234ee1caa26
2020-09-18 18:09:15 -07:00
Michael Suo
374e9373b5 [jit] Pull (most) tests out of libtorch_python (#44795)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44795

Today, we build our cpp tests twice, once as a standalone gtest binary,
and once linked in `libtorch_python` so we can call them from
`test_jit.py`.

This is convenient (it means that `test_jit.py` is a single entry point
for all our tests), but has a few drawbacks:
1. We can't actually use the gtest APIs, since we don't link gtest into
`libtorch_python`. We're stuck with the subset that we're willing to write
polyfills for, and an awkward registration scheme where you have to
write a test and then include it in `tests.h`.
2. More seriously, we register custom operators and classes in these
tests. In a world where we may be linking many `libtorch_python`s, this
has a tendency to cause errors with `libtorch`.

So now, only tests that explicitly require cooperation with Python are
built into `libtorch_python`. The rest are built into
`build/bin/test_jit`.

There are tests that require that we define custom classes and
operators. In these cases, I've built them into separate `.so`s that we
call `torch.ops.load_library()` on (a loading sketch follows).
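
A minimal sketch of loading such a library; `torch.ops.load_library` is the real entry point named above, but the `.so` path and operator namespace below are illustrative assumptions:

```python
import torch

# Path and namespace are illustrative only.
torch.ops.load_library("build/lib/libjit_test_custom_ops.so")

# Once loaded, the operators registered by the library are callable, e.g.:
# torch.ops.my_test_namespace.my_custom_op(torch.randn(2, 2))
```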

Test Plan: Imported from OSS

Reviewed By: SplitInfinity, ZolotukhinM

Differential Revision: D23735520

Pulled By: suo

fbshipit-source-id: d146bf4e7eb908afa6f96b394e4d395d63ad72ff
2020-09-18 14:04:40 -07:00
Mikhail Zolotukhin
c6febc6480 [JIT] Add a python hook for a function to interpret JIT graphs. (#44493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44493

This function allows executing a graph exactly as it is, without going
through the graph executor, which would run passes on the graph before
interpreting it. I found this feature extremely helpful when I worked on
a stress-testing script to shake out bugs from the TE fuser: I needed to
run a very specific set of passes on a graph and nothing else, and
then execute exactly that graph (usage sketch below).
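
A hypothetical usage sketch; the binding name `_jit_interpret_graph` is an assumption, since the commit only states that a Python hook is added:

```python
import torch

@torch.jit.script
def f(a, b):
    return a * b + a

graph = f.graph
# Run exactly one chosen pass by hand...
torch._C._jit_pass_constant_propagation(graph)
# ...then execute the resulting graph directly, bypassing the graph
# executor (binding name is an assumption):
out = torch._C._jit_interpret_graph(graph, (torch.randn(2), torch.randn(2)))
```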

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23632505

Pulled By: ZolotukhinM

fbshipit-source-id: ea81fc838933743e2057312d3156b77284d832ef
2020-09-11 02:55:26 -07:00
neginraoof
3d7c22a2ce [ONNX] Enable new scripting passes for functionalization and remove_mutation (#43791)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/41413
This PR initiates the process of updating the torchscript backend interface used by the ONNX exporter.

* Replace the jit lower graph pass with the freeze module pass.

* Enable ScriptModule tests for ONNX operator tests (ORT backend) and model tests by default.

* Replace the jit remove_inplace_ops pass with remove_mutation, and consolidate all passes for handling inplace ops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43791

Reviewed By: houseroad

Differential Revision: D23421872

Pulled By: bzinodev

fbshipit-source-id: a98710c45ee905748ec58385e2a232de2486331b
2020-09-04 15:21:45 -07:00
Bert Maher
98ad5ff41f [te] Disable reductions by default (#44122)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44122

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D23504769

Pulled By: bertmaher

fbshipit-source-id: 1889217cd22da529e46ab30c9319a5646267e4ec
2020-09-03 23:37:45 -07:00
Lu Fang
f15e27265f [torch.fx] Add support for custom op (#43248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43248

We add support for __torch_function__ overrides for C++ custom ops. The logic is the same as for the other components, like torch.nn.Module.
Refactored some code a little to make it reusable.

Test Plan: buck test //caffe2/test:fx -- test_torch_custom_ops

Reviewed By: bradleyhd

Differential Revision: D23203204

fbshipit-source-id: c462a86e407e46c777171da32d7a40860acf061e
2020-09-02 16:08:37 -07:00
BowenBao
08126c9153 [ONNX] Utilize ONNX shape inference for ONNX exporter (#40628)
Summary:
The conversion from a torch operator to an onnx operator often requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this info, leaving a gap in the conversion of scripted modules.

We are extending the export with support for onnx shape inference. If enabled, onnx shape inference will be called whenever an onnx node is created. This is the first PR, introducing the initial look of the feature. More cases will be supported in follow-up PRs.

* Added pass to run onnx shape inference on a given node. The node has to have namespace `onnx`.
* Moved helper functions from `export.cpp` to a common place for re-use.
* This feature is currently experimental, and can be turned on through the flag `onnx_shape_inference` in the internal api `torch.onnx._export` (see the sketch after this list).
* Currently skipping ONNX Sequence ops, If/Loop and ConstantOfShape due to limitations. Support will be added in the future.
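
A sketch of turning on the experimental flag via the internal API named above; the exact argument placement is an assumption:

```python
import torch

class MyModel(torch.nn.Module):
    def forward(self, x):
        return x.relu()

model = torch.jit.script(MyModel())
dummy_input = torch.randn(2, 3)
# onnx_shape_inference is the flag named in this PR; passing it as a
# keyword argument of the internal _export API is an assumption.
torch.onnx._export(model, (dummy_input,), "model.onnx",
                   onnx_shape_inference=True)
```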

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40628

Reviewed By: mrshenli

Differential Revision: D22709746

Pulled By: bzinodev

fbshipit-source-id: b52aeeae00667e66e0b0c1144022f7af9a8b2948
2020-08-30 18:35:46 -07:00
Protonu Basu
58a7e73a95 [TensorExpr] Block Codegen (#40054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40054

Reviewed By: ZolotukhinM

Differential Revision: D22061350

Pulled By: protonu

fbshipit-source-id: 004f7c316629b16610ecdbb97e43036c72c65067
2020-08-28 09:53:42 -07:00
Zino Benaissa
abe878ce96 Allow Freezing of Module containing interface attribute (#41860)
Summary:
This patch allows freezing models that utilize interfaces. Freezing works
under the user's assumption that the interface module does not alias
any value used in the model.

To enable freezing of such modules, an extra parameter was added:

torch._C._freeze_module(module, ignoreInterfaces = True)
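
A minimal usage sketch built around the internal API above; the module here is illustrative, and passing the scripted module's `_c` handle is an assumption:

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

scripted = torch.jit.script(M())
# The user asserts the (hypothetical) interface-typed attribute aliases
# nothing else in the model, so freezing may ignore it.
frozen = torch._C._freeze_module(scripted._c, ignoreInterfaces=True)
```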

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41860

Reviewed By: eellison

Differential Revision: D22670566

Pulled By: bzinodev

fbshipit-source-id: 41197a724bc2dca2e8495a0924c224dc569f62a4
2020-08-21 18:57:13 -07:00
taivu
665da61d2b Replace Conv1d with Conv2d (#42867)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42867

Test Plan: Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D23177916

Pulled By: kimishpatel

fbshipit-source-id: 68cc40cf42d03e5b8432dc08f9933a4409c76e25
2020-08-20 21:36:51 -07:00
Sinan Nasir
6e1127ea3f [NCCL] Changed FutureNCCL's then callback logic for better efficiency. (#42869)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42869

We realized that when we invoke a simple callback that divides the tensors by `world_size` after `allreduce`, the performance was almost 50% lower in terms of QPS compared to the case where a simple `allreduce` hook is used with no `then` callback.

The main problem was that, because we call `work.wait()` before invoking the `then` callback, we were synchronizing `work`'s stream with the default PyTorch stream inside [`runHook`](https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp#L609) and stalling the backward computation.

In this PR, we ensure that FutureNCCL's `then` callback does not stall the backward computation. Assuming single-process single-device, `FutureNCCL` gets a new stream from the device's pool using `at::cuda::getStreamFromPool` to run the `callback`, and before invoking the `callback` inline it synchronizes the callback's stream with `WorkNCCL`'s stream rather than the default stream (a sketch of such a hook follows).
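
A sketch of the kind of hook being benchmarked here; the registration call and the bucket accessor names are assumptions based on the DDP comm-hook interface of this era:

```python
import torch.distributed as dist

def allreduce_then_scale(state, bucket):
    # All-reduce the bucket's tensor, then divide by world size inside a
    # `then` callback, which FutureNCCL now runs on a side stream instead
    # of blocking the default stream.
    tensor = bucket.get_tensors()[0]  # accessor name is an assumption
    fut = dist.all_reduce(tensor, async_op=True).get_future()
    return fut.then(lambda f: [f.value()[0] / dist.get_world_size()])

# Registration on a hypothetical DistributedDataParallel instance:
# ddp_model._register_comm_hook(state=None, hook=allreduce_then_scale)
```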

ghstack-source-id: 110208431

Test Plan: Run performance benchmark tests to validate performance issue is resolved. Also, `python test/distributed/test_c10d.py` to avoid any odd issues.

Reviewed By: pritamdamania87

Differential Revision: D23055807

fbshipit-source-id: 60e50993f1ed97497514eac5cb1018579ed2a4c5
2020-08-19 19:42:22 -07:00
taivu
02c8ad70f2 Reconstruct scopes (#41615)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41615

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D22611331

Pulled By: taivu1998

fbshipit-source-id: d4ed4cf6360bc1f72ac9fa24bb4fcf6b7d9e7576
2020-08-13 22:38:16 -07:00
Bram Wasti
ada8404f2d [jit] Scaffold a static runtime (#42753)
Summary:
The premise of this approach is that a small subset of neural networks are well represented by a data flow graph.  The README contains more information.

The name is subject to change, but I thought it was a cute reference to fire.

suo let me know if you'd prefer this in a different spot. Since it lowers a JIT'd module directly, I assumed the JIT folder would be appropriate. There is no exposed Python interface yet (but one is mocked up in `test_accelerant.py`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42753

Reviewed By: zou3519

Differential Revision: D23043771

Pulled By: bwasti

fbshipit-source-id: 5353731e3aae31c08b5b49820815da98113eb551
2020-08-12 13:05:27 -07:00
Ksenija Stanojevic
e845b0ab51 [Resending] [ONNX] Add eliminate_unused_items pass (#42743)
Summary:
This PR:

- Adds eliminate_unused_items pass that removes unused inputs and initializers.
- Fixes run_embed_params function so it doesn't export unnecessary parameters.
- Removes test_modifying_params in test_verify since it's no longer needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42743

Reviewed By: hl475

Differential Revision: D23058954

Pulled By: houseroad

fbshipit-source-id: cd1e81463285a0bf4e60766c8c87fc9a350d9c7e
2020-08-11 20:30:50 -07:00
Vasiliy Kuznetsov
79b8328aaf optimize_for_mobile: bring packed params to root module (#42740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42740

Adds a pass to hoist conv packed params to root module.
The benefit is that if there is nothing else in the conv module,
subsequent passes will delete it, which will reduce module size.

For context, freezing does not handle this because conv packed
params is a custom object.

Test Plan:
```
PYTORCH_JIT_LOG_LEVEL=">hoist_conv_packed_params.cpp" python test/test_mobile_optimizer.py TestOptimizer.test_hoist_conv_packed_params
```

Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D23005961

fbshipit-source-id: 31ab1f5c42a627cb74629566483cdc91f3770a94
2020-08-08 15:53:20 -07:00
BowenBao
a6c8730045 [ONNX] Add preprocess pass for onnx export (#41832)
Summary:
In `_jit_pass_onnx`, symbolic functions are called for each node for conversion. However, there are nodes that cannot be converted without additional context. For example, the number of outputs from split (and whether it is static or dynamic) is unknown until the point where it is unpacked by the listUnpack node (see the sketch after this list). This pass does preprocessing, preparing the nodes so that enough context can be received by the symbolic function.
* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later post-passes.
* `_jit_pass_onnx_peephole` should be a pass that does ONNX specific optimizations instead of ONNX specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of the ONNX shape inference https://github.com/pytorch/pytorch/issues/40628.
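
A minimal sketch of the split/listUnpack pattern described above, assuming a scripted function:

```python
import torch

@torch.jit.script
def f(x):
    # At the aten::split node alone, the number of outputs is unknown; it
    # only becomes clear at the prim::ListUnpack that consumes the result.
    a, b = torch.split(x, 2)
    return a + b
```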

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41832

Reviewed By: ZolotukhinM

Differential Revision: D22968334

Pulled By: bzinodev

fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
2020-08-06 20:34:12 -07:00
BowenBao
842759591d [ONNX] Refactor ONNX fixup for Loop and If (#40943)
Summary:
* move both under the new file `fixup_onnx_controlflow`
* move the fixup to where the ONNX loop/if node is created, as opposed to running the fixup as a post-pass. This will help with enabling onnx shape inference later.
* move `fuseSequenceSplitConcat` to `Peephole`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40943

Reviewed By: mrshenli

Differential Revision: D22709999

Pulled By: bzinodev

fbshipit-source-id: 51d316991d25dc4bb4047a6bb46ad1e2401d3d2d
2020-08-03 22:33:17 -07:00
Shen Li
d4736ef95f Add done() API to Future (#42013)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42013

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D22729596

Pulled By: mrshenli

fbshipit-source-id: ed31021a35af6e2c3393b9b14e4572cf51013bc0
2020-07-24 14:13:41 -07:00
Yanan Cao
890b52e09f Reduce instability in runCleanUpPasses by reordering passes. (#41891)
Summary:
Currently constant pooling runs before const propagation, which can create more constants that need pooling. This gets in the way of serialization/deserialization stability, because each time a user serializes and deserializes a module, runCleanUpPasses is called on it. Doing so multiple times would lead to a different saved module each time.

This PR moves constant pooling after const propagation, which may slow down const propagation a little, but side-steps the aforementioned problem.

test_constant_insertion in test_jit.py is also updated because, after fixing the pass ordering, the number of constants is no longer stable, and it is extremely difficult to get the exact number with the current convoluted test structure. So for now, I changed the test to check only that CSE doesn't change the number of "prim::Constant" nodes, rather than comparing against a known number. Also left a TODO to improve this test.

The ConstantPropagation pass is replaced by ConstantPropagationImmutableTypes because the latter is what runCleanUpPasses uses. If not replaced, the former would create new CSE opportunities by folding more constants, which would defeat the purpose of the test case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41891

Reviewed By: colesbury

Differential Revision: D22701540

Pulled By: gmagogsfm

fbshipit-source-id: 8e60dbdcc54a93dac111d81b8d88fb39387224f5
2020-07-24 11:39:20 -07:00
Ksenija Stanojevic
af5d0bff00 [ONNX] Add pass that fuses Conv and BatchNormalization (#40547)
Summary:
Add a pass that fuses Conv and BatchNormalization nodes into a single Conv node.
This pass is only applied in inference mode (training is None or TrainingMode.EVAL).
Since this pass needs access to param_dict, it is written outside the peephole file where these kinds of passes (fusing multiple nodes into one) are usually placed.

This PR also adds the wrapper skipIfNoEmbed to skip the debug_embed_params test:
The pass that fuses Conv and BatchNorm changes the params of the resnet model, so the parameters of the onnx and pytorch models won't match. Since the parameters don't match, the debug_embed_params test for test_resnet will fail, and that is expected; therefore the debug_embed_params test for test_resnet should be skipped.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40547

Reviewed By: gchanan

Differential Revision: D22631687

Pulled By: bzinodev

fbshipit-source-id: fe45812400398a32541e797f727fd8697eb6d8c0
2020-07-22 14:59:27 -07:00
Yanan Cao
4a3aad354a [1/N] Implement Enum JIT support (#41390)
Summary:
* Add EnumType and AnyEnumType as first-class jit types
* Add Enum-typed IValue
* Enhance aten::eq to support Enum

Supported (see the sketch below):
* Enum-typed function arguments
* Using the Enum type and comparing Enum values

TODO:
* Add Python sugared value for Enum
* Support getting name/value attrs of enums
* Support Enum-typed return values
* Support enum values of different types in the same Enum class
* Support serialization and deserialization
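
A sketch of the supported usage above; given the TODO list, full scripting support may not yet be complete at this first PR of the stack:

```python
from enum import Enum

import torch

class Color(Enum):
    RED = 1
    GREEN = 2

@torch.jit.script
def is_red(c: Color) -> bool:
    # aten::eq is extended to compare Enum values.
    return c == Color.RED

assert is_red(Color.RED)
```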

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41390

Reviewed By: eellison

Differential Revision: D22524388

Pulled By: gmagogsfm

fbshipit-source-id: 1627154a64e752d8457cd53270f3d14aea4b1150
2020-07-18 22:15:06 -07:00
Meghan Lele
758edcd7df [JIT] Replace use of "blacklist" in python/init.cpp (#41456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41456

**Test Plan**
Continuous integration.

**Fixes**
This commit partially addresses #41443.

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D22544270

Pulled By: SplitInfinity

fbshipit-source-id: 649b30e1fcc6516a4def6b148a1da07bc3ce941d
2020-07-17 11:33:05 -07:00
Kimish Patel
8a79eec98a Add add_relu fusion pass to optimize_for_mobile. (#40252)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40252

As title says.

Test Plan:
python test/test_mobile_optimizer.py

Imported from OSS

Differential Revision: D22126825

fbshipit-source-id: a1880587ba8db9dee0fa450bc463734e4a8693d9
2020-07-10 08:10:22 -07:00
Kimish Patel
c5dcf056ee JIT pass for add relu fusion. (#39343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39343

Building on top of the previous PR that adds a fused add_relu op, this PR adds
a JIT pass that transforms the input graph to find all fusable instances of add
+ relu and fuses them (a sketch of the pattern follows).
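
A minimal sketch of the fusable pattern, assuming a scripted function:

```python
import torch

@torch.jit.script
def f(x, y):
    # An add whose result feeds straight into a relu is the shape of
    # pattern the new JIT pass rewrites into the fused add_relu op.
    return torch.relu(torch.add(x, y))
```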

Test Plan:
python test/test_jit.py TestJit.test_add_relu_fusion

Imported from OSS

Differential Revision: D21822396

fbshipit-source-id: 12c7e8db54c6d70a2402b32cc06c7e305ffbb1be
2020-07-09 16:25:13 -07:00
generatedunixname89002005287564
86f72953dd [Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D22452776

fbshipit-source-id: a103da6a5b1db7f1c91ca25490358da268fdfe96
2020-07-09 08:49:32 -07:00
Elias Ellison
3f32332ee6 [JIT][Easy]move remove mutation to own file (#41137)
Summary:
This should be in its own file...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41137

Reviewed By: jamesr66a

Differential Revision: D22437922

Pulled By: eellison

fbshipit-source-id: 1b62dde1a4ebac673b5c60aea4f398f734d62501
2020-07-08 17:00:35 -07:00
peter
c71ec1c717 Fix zip serialization for file > 2GiB for Windows (#40783)
Summary:
`long long == int64_t != long` in MSVC
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40783

Differential Revision: D22328757

Pulled By: ezyang

fbshipit-source-id: bc7301d6b0e7e00ee6d7ca8637e3fce7810b15e2
2020-07-01 08:15:27 -07:00
Yanghan Wang
5923a802fa Back out "[pytorch][PR] [ONNX] Add eliminate_unused_items pass"
Summary:
Original commit changeset: 30e1a6e8823a

causes an issue with fusing BN

Test Plan: revert

Reviewed By: houseroad

Differential Revision: D22296958

fbshipit-source-id: 62664cc77baa8811ad6ecce9d0520a2ab7f89868
2020-06-30 10:26:35 -07:00
James Reed
320164f878 Fix zip serialization for file > 2GiB (#40722)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40722

Test Plan: Imported from OSS

Differential Revision: D22294016

Pulled By: jamesr66a

fbshipit-source-id: 0288882873d4b59bdef37d018c030519c4be7f03
2020-06-29 19:17:06 -07:00
Kimish Patel
4a174c83ca Add option to preserve certain methods during optimize_for_mobile. (#40629)
Summary:
By default, the freeze_module pass, invoked from optimize_for_mobile,
preserves only the forward method. There is an option to specify a list of
methods that can be preserved during freeze_module. This PR exposes that
option to the optimize_for_mobile pass (usage sketch below).
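
A usage sketch; the keyword name `preserved_methods` is an assumption about how the option is exposed:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

    @torch.jit.export
    def version(self) -> int:
        return 1

scripted = torch.jit.script(M())
# Ask freeze_module to keep `version` alive in addition to forward.
opt = optimize_for_mobile(scripted, preserved_methods=["version"])
```
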
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40629

Test Plan: python test/test_mobile_optimizer.py

Reviewed By: dreiss

Differential Revision: D22260972

Pulled By: kimishpatel

fbshipit-source-id: 452c653269da8bb865acfb58da2d28c23c66e326
2020-06-29 09:32:53 -07:00
Ksenija Stanojevic
547ea787ff [ONNX] Add eliminate_unused_items pass (#38812)
Summary:
This PR:

- Adds eliminate_unused_items pass that removes unused inputs and initializers.
- Fixes run_embed_params function so it doesn't export unnecessary parameters.
- Removes test_modifying_params in test_verify since it's no longer needed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38812

Reviewed By: ezyang

Differential Revision: D22236416

Pulled By: houseroad

fbshipit-source-id: 30e1a6e8823a7e36b51ae1823cc90476a53cd5bb
2020-06-25 22:00:26 -07:00
Zhang, Xiaobing
87c5f02f3d jit: Conv3d + BatchNorm3d fusion (#40082)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40082

Differential Revision: D22120340

Pulled By: jerryzh168

fbshipit-source-id: fce6c5f03fe7ab6c60620cbdf547d5a466a470e3
2020-06-22 11:15:52 -07:00
Ivan Kobzarev
3852215170 [vulkan] jit passes for vulkan conv2 prepack and fuse with clamp (#39282)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39282

Test Plan: Imported from OSS

Differential Revision: D21962424

Pulled By: IvanKobzarev

fbshipit-source-id: 2d20e827d2c3836b7e6b443293377c68dc1ffa5a
2020-06-20 14:12:21 -07:00
Shen Li
4463f59c2c Let torch.futures.wait_all re-throw errors (#40291)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40291

Test Plan: Imported from OSS

Differential Revision: D22141702

Pulled By: mrshenli

fbshipit-source-id: 50b5e5c687e87930aef3a50cc40839729a4eb9c6
2020-06-19 15:32:56 -07:00
Xingying Cheng
0b3755b1d0 Add optimization blacklist as second arg to optimizeForMobile method. (#37462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37462

Instead of running all the optimization passes in the optimizeForMobile method,
this introduces a whitelist optimizer dictionary as a second param to the method.
When it is not passed, the method runs all the optimization
passes; otherwise the method reads the dict and only runs the passes whose
value is True (see the sketch below).
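
A sketch of the dispatch logic described above, with illustrative pass names; the real pass registry and names are assumptions:

```python
from typing import Callable, Dict, Optional

def optimize_for_mobile_sketch(module,
                               optimization_dict: Optional[Dict[str, bool]] = None):
    # Illustrative no-op stand-ins for the real optimization passes.
    all_passes: Dict[str, Callable] = {
        "fold_conv_bn": lambda m: m,
        "fuse_add_relu": lambda m: m,
    }
    for name, run_pass in all_passes.items():
        # No dict given: run everything; otherwise run only entries set to True.
        if optimization_dict is None or optimization_dict.get(name, False):
            module = run_pass(module)
    return module
```
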
ghstack-source-id: 106104503

Test Plan:
python test/test_mobile_optimizer.py

Imported from OSS

Differential Revision: D22096029

fbshipit-source-id: daa9370c0510930f4c032328b225df0bcf97880f
2020-06-17 18:14:45 -07:00
Linbin Yu
7021635d61 fix more duplicated names (#40062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40062

fix duplicated op names after D21992552

Test Plan: build

Reviewed By: iseeyuan

Differential Revision: D22056588

fbshipit-source-id: 6d2fcf16b5b86b30b6ac7a4107b20c8cfb6816b0
2020-06-16 11:47:05 -07:00
Jerry Zhang
ec1833bc3c Revert D22069566: Revert D22013026: [quant][graphmode] Pass debug option into insert_quant_dequant pass
Test Plan: revert-hammer

Differential Revision:
D22069566

Original commit changeset: 6230bc806089

fbshipit-source-id: 930490ab0b6a017c949445620e7c6b7056693998
2020-06-16 11:37:33 -07:00
Christian Puhrsch
305921734a Revert D22013026: [quant][graphmode] Pass debug option into insert_quant_dequant pass
Test Plan: revert-hammer

Differential Revision:
D22013026

Original commit changeset: 714b938f25c1

fbshipit-source-id: 6230bc8060892e6485159ca88cc3ad49217791a2
2020-06-16 09:44:04 -07:00
Jerry Zhang
ee5ad6ce25 [quant][graphmode] Pass debug option into insert_quant_dequant pass (#39915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39915

Some of the usage, e.g. add_scalar, will not support the debug option;
that is, we will not have a numerically exact representation of the final quantized model
before finalize if people use add_scalar.
A warning will be added in a later PR.

Test Plan: Imported from OSS

Differential Revision: D22013026

fbshipit-source-id: 714b938f25c10fad3dfc79f095356b9803ef4b47
2020-06-16 08:14:50 -07:00
Shihao Xu
00651b8c93 [distribtued.nn] Implement TorchScript-compatible RemoteModule API (#37139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37139

See design doc in https://github.com/pytorch/pytorch/issues/37136

ghstack-source-id: 105926270

Test Plan:
TODO:

- Make the generated Interface usable. https://github.com/pytorch/pytorch/pull/37139#discussion_r434190978
- Avoid generating the same template instances for a Module that is not scriptable.
- Remove "infer_module_interface_cls".
- Use Python format instead of a CodeTemplate.
- Use Python tempfile to track and delete the file. Does it work if there is a crash?

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_scripted_remote_module_template

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_non_scripted_remote_module_template
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_spawn
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_async_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_sync_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_with_kwargs

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork
```

buck test mode/opt-asan //caffe2/test:jit -- 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_imported_classes'

Differential Revision: D20499658

fbshipit-source-id: dd9383ae4eb2343366c11127664f845b91ca3b0a
2020-06-15 19:07:35 -07:00
Jeremy Lilley
0c25428597 [futures] Reland: Add torch.futures.collect_all()/wait_all() python api. (#39964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39964

The "[fut.wait() for fut in futs]" idiom can introduce up to
O(len(futs)) thread switches, which may be excessive for large N.

This plumbs through the new c++ c10::collectAll() to Python space
so that we only employ a single jit-side wait.
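
A short usage sketch of the new API; `collect_all` and `wait_all` are the names stated in the title:

```python
import torch

futs = [torch.futures.Future() for _ in range(3)]
for i, fut in enumerate(futs):
    fut.set_result(i)

# collect_all returns one Future that completes when every input future
# does, replacing the per-future wait loop described above.
done = torch.futures.collect_all(futs).wait()
assert [f.wait() for f in done] == [0, 1, 2]

# wait_all is the convenience form that returns the results directly.
assert torch.futures.wait_all(futs) == [0, 1, 2]
```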

Test Plan: buck test mode/dev-nosan caffe2/test/distributed/rpc:rpc_spawn

Differential Revision: D22027412

fbshipit-source-id: 4e344a19a09638ee46e7fc478df80a41941b84ce
2020-06-15 14:07:12 -07:00
Mike Ruberry
8bc821f0d0 Revert D21976891: [futures] Add torch.futures.collect_all()/wait_all() python api.
Test Plan: revert-hammer

Differential Revision:
D21976891

Original commit changeset: 253c61f503f4

fbshipit-source-id: f839b16f4469e96325b607b6313a1397e1988856
2020-06-12 13:40:37 -07:00
Jeremy Lilley
a9aa6367c2 [futures] Add torch.futures.collect_all()/wait_all() python api. (#39790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39790

The "[fut.wait() for fut in futs]" idiom can introduce up to
O(len(futs)) thread switches, which may be excessive for large N.

This plumbs through the new c++ c10::collectAll() to Python space
so that we only employ a single jit-side wait.
ghstack-source-id: 105779443

Test Plan: buck test mode/dev-nosan caffe2/test/distributed/rpc:rpc_spawn

Reviewed By: kiukchung

Differential Revision: D21976891

fbshipit-source-id: 253c61f503f4ffb9be784e6c49a0656cede139fb
2020-06-12 12:36:04 -07:00
Vasiliy Kuznetsov
5d2f6d86e5 graph mode: add quantization type enum (#39795)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39795

Replaces the `is_dynamic` bool with enums in the Python and C++
graph quantization code. This makes the code more readable
and will make it easier to modify when adding QAT logic in the future (a sketch follows).
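
An illustrative sketch of the bool-to-enum change; the enum name and members here are assumptions, not the PR's actual definitions:

```python
import enum

class QuantType(enum.Enum):
    DYNAMIC = 0
    STATIC = 1

def insert_quant_passes(graph, quant_type: QuantType):
    # An enum reads better than is_dynamic=True/False and leaves room to
    # add a QAT member later without touching every call site.
    if quant_type == QuantType.DYNAMIC:
        pass  # dynamic-quantization handling
    else:
        pass  # static-quantization handling
```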

Test Plan:
CI, as well as
```
python test/test_quantization.py TestQuantizeDynamicScript
python test/test_quantization.py TestQuantizeScriptJitPasses
```

Imported from OSS

Differential Revision: D21981643

fbshipit-source-id: d475760407bcc794aeae92a2c696bac4acda843d
2020-06-10 21:34:23 -07:00
Zino Benaissa
9111ae7782 Preserve user specified attributes and methods (#38830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38830

This patch enables preserving user-specified attributes or non-forward
methods. The API (usage sketch below):
  _freeze_module(Module, ["a", "version"])
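
A minimal usage sketch of the API above; passing the scripted module's `_c` handle is an assumption:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = 1
        self.version = 2

    def forward(self, x):
        return x + self.a

scripted = torch.jit.script(M())
# Keep attributes `a` and `version` instead of folding them away.
frozen = torch._C._freeze_module(scripted._c, ["a", "version"])
```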

Test Plan: Imported from OSS

Differential Revision: D21957316

Pulled By: bzinodev

fbshipit-source-id: 5c9146ae679791070a9de868c45785725b48a9e6
2020-06-10 01:38:18 -07:00
Jerry Zhang
9551fb22d6 [quant][graphmode] Preserve numerics in debug option for clamp ops (#39219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39219

We don't model clamp ops correctly right now; this PR fixes that.

The reason is that the quantized clamp op quantizes the scalar arguments in the op implementation: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L614-L617

So we'll need to model this explicitly in the IR.
When we see an `aten::dequantize - aten::clamp(%x, %min, %max)` pattern,
we first make a scalar tensor with `aten::scalar_tensor(%scalar, ...)`, then we quantize the tensor with the same quantization parameters as the input tensor of the `aten::clamp`, dequantize the tensor, and then convert the dequantized tensor back to a scalar using `aten::item`.

Test Plan: Imported from OSS

Differential Revision: D21831350

fbshipit-source-id: d60731459a0465d64946aabc62065d25d92faefc
2020-06-08 17:15:39 -07:00
davidriazati
da8191a9ad Remove useless copy on zip file load (#36362)
Summary:
Instead of copying to a buffer and then setting a tensor's storage with that buffer, create a storage directly from the file.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/36362

Pulled By: driazati

Differential Revision: D21889537

fbshipit-source-id: edbd430073c2bbf52332fe7b3b2590e7d936dedf
2020-06-04 16:59:54 -07:00
Shen Li
bb0377bb24 Expose torch.futures.Future (#39008)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39008

This commit adds a `torch.futures.Future` type and exposes its ctor,
`wait`, `then`, and `set_result` APIs. This type is currently a
wrapper of `c10::ivalue::Future` and mainly used by RPC for now. Later,
we could revamp c10d APIs to return this `Future` type as well. More
utils will be added into `torch.futures` package in followup PRs.
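
A short sketch exercising the APIs this commit exposes (`wait`, `then`, and `set_result` are the names stated above):

```python
import torch

fut = torch.futures.Future()

# `then` registers a callback that runs once the future completes and
# itself returns a new Future.
chained = fut.then(lambda f: f.wait() + 1)

fut.set_result(41)
assert fut.wait() == 41
assert chained.wait() == 42
```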

Test Plan: Imported from OSS

Differential Revision: D21723022

Pulled By: mrshenli

fbshipit-source-id: 92e56160544e9bf00d11db3e8347a1b9707882c9
2020-06-02 10:12:56 -07:00
Jie
07518e120b [nvFuser] add torch.jit.fuser context manager (#38993)
Summary:
1. The `torch.jit.fuser(str)` context manager facilitates switching between backend fusers (see the sketch below):
  str - 'fuser0' enables only the legacy fuser;
  str - 'fuser1' enables only NNC;
  str - 'fuser2' enables only nvFuser;
2. cleanup of the updated python tests.
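
A minimal usage sketch of the context manager:

```python
import torch

@torch.jit.script
def f(x):
    return (x + 1).relu()

x = torch.randn(4)
# Enable only NNC ('fuser1') within this scope.
with torch.jit.fuser('fuser1'):
    f(x)
    f(x)  # a second call so the profiling executor can actually fuse
```
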
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38993

Reviewed By: nairbv, pbelevich

Differential Revision: D21800620

Pulled By: soumith

fbshipit-source-id: 7fe855f5a5b97368e5e84c98c28d04b2e1276c85
2020-06-01 10:52:40 -07:00
Jerry Zhang
85d0292c14 [quant][graphmode] Cleanup inplace API (#38827)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38827

Test Plan: Imported from OSS

Differential Revision: D21673481

fbshipit-source-id: becca38efcf720089407c981419b33f629a33e91
2020-05-29 11:13:25 -07:00