Commit Graph

401 Commits

Bram Wasti
43a9d6fb6e [TorchScript] Support user defined classes as constants (#5062)
Summary:
Pull Request resolved: https://github.com/pytorch/glow/pull/5062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45556

User defined classes can be used as constants.  This is useful when freezing and removing the module from the graph.

Test Plan: waitforsadcastle

Reviewed By: eellison

Differential Revision: D23994974

fbshipit-source-id: 5b4a5c91158aa7f22df39d71f2658afce1d29317
2020-11-16 20:52:02 -08:00
Zino Benaissa
11710598db Preserve module parameters in freezing (#47094)
Summary:
Added a preserveParameters option to the freezing API that allows module
parameters to be preserved.

Fixes #39613

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47094

Reviewed By: eellison

Differential Revision: D24792867

Pulled By: bzinodev

fbshipit-source-id: f0cd980f5aed617b778afe2f231067c7c30a1527
2020-11-13 20:18:32 -08:00
Elias Ellison
4380934b9b [JIT] Dont use specialized tensor type (#46130)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/46122

For `Any`, we infer the type of the ivalue to set the ivalue's type tag. When we saw a Tensor, we would use a specialized Tensor type, so when a `Dict[str, Tensor]` was passed in as an `Any` arg it would be inferred as `Dict[str, Float(2, 2, 2, 2)]`, which breaks runtime `isinstance` checking.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46130

Reviewed By: glaringlee

Differential Revision: D24261447

Pulled By: eellison

fbshipit-source-id: 8a2bb26ce5b6c56c8dcd8db79e420f4b5ed83ed5
2020-11-13 18:34:40 -08:00
generatedunixname89002005325676
8855c4e12f [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Differential Revision: D24946660

fbshipit-source-id: e47d04cac21314acb7f9ac3bdfa0d09289e399b4
2020-11-13 06:59:04 -08:00
Elias Ellison
fe81faee5f Add more CPU tests (#47369)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47369

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D24805251

Pulled By: eellison

fbshipit-source-id: f1a8210ffdc3cc88354cb4896652151d83a0345a
2020-11-12 11:13:47 -08:00
Elias Ellison
f221a19a7f Force LLVM Compilation for CPU Tests (#46949)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46949

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D24805247

Pulled By: eellison

fbshipit-source-id: 4fcaf02d8a78cc5cbcbde36940d0a2c85fba3fc5
2020-11-12 11:12:08 -08:00
Wanchao Liang
fa560ceb9c [reland] make intrusive_ptr as a pybind holder type (#47586)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47586

relanding PR https://github.com/pytorch/pytorch/pull/44492, and adding
additional Capsule-related wrapping to ensure we still have the correct
type in pybind11 to resolve Capsule as torch._C.CapsuleType

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24822519

Pulled By: wanchaol

fbshipit-source-id: eaaea446fb54b56ed3b0d04c31481c64096e9459
2020-11-10 10:09:08 -08:00
Wanchao Liang
31d041c946 Back out "[c10] make intrusive_ptr available as a pybind holder type"
Summary:
Original commit changeset: b9796e15074d

A weird issue is happening with custom class + recursive scripting; unlanding this first to figure out more details

Test Plan: wait for sandcastle

Reviewed By: zhangguanheng66

Differential Revision: D24780498

fbshipit-source-id: 99a937a26908897556d3bd9f1b2b39f494836fe6
2020-11-06 14:27:48 -08:00
Meghan Lele
dc0d68a1ee [JIT] Print out interface mismatch for prim::ModuleDictIndex (#47300)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47300

**Summary**
This commit augments the module interface subtyping check that is done
before the emission of the `prim::ModuleDictIndex` operator so that the
error message that is printed if the subtyping check fails provides more
information on which methods do not match.

**Test Plan**
Existing unit tests for `prim::ModuleDictIndex`. Compilation of `ModWithWrongAnnotation` now produces this error:
```
Attribute module is not of annotated type __torch__.jit.test_module_containers.ModuleInterface: Method on class '__torch__.jit.test_module_containers.DoesNotImplementInterface' (1) is not compatible with interface '__torch__.jit.test_module_containers.ModuleInterface' (2)
  (1) forward(__torch__.jit.test_module_containers.DoesNotImplementInterface self, Tensor inp) -> ((Tensor, Tensor))
  (2) forward(InterfaceType<ModuleInterface> self, Any inp) -> (Any)
:
```

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D24709538

Pulled By: SplitInfinity

fbshipit-source-id: 6b6cb75e4b2b12b08576a5530b4b90cbcad9b6e5
2020-11-03 13:07:21 -08:00
Wanchao Liang
70d58031d7 [c10] make intrusive_ptr available as a pybind holder type (#44492)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44492

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D23632278

Pulled By: wanchaol

fbshipit-source-id: b9796e15074d68a347de443983abf7f052a3cdfe
2020-11-02 12:11:45 -08:00
Meghan Lele
19ede75eb9 [JIT] Enable ModuleDict non-literal indexing (#45716)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45716

**Summary**
This commit enables indexing into `ModuleDict` using a non-literal
index if the `ModuleDict` is annotated with `Dict[str, X]`, where `X` is
a module interface type. These annotations must be expressed using a
class attribute named `__annotations__`, which is a `Dict[str, Type]`
where the keys are the names of module attributes and the values are
their types.
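
The annotation convention above can be illustrated in plain Python (no torch code involved; `ModuleInterface` here is a hypothetical stand-in for a type declared with `@torch.jit.interface`):

```python
from typing import Dict

class ModuleInterface:
    # Hypothetical stand-in for a type declared with @torch.jit.interface.
    pass

class Container:
    # The convention described above: a class attribute named
    # __annotations__ mapping attribute names to their types.
    __annotations__ = {"submodules": Dict[str, ModuleInterface]}

# The compiler can read the hint back from the class:
assert Container.__annotations__["submodules"] == Dict[str, ModuleInterface]
```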

The approach taken by this commit is that these annotations are stored
as "hints" along with the corresponding module attributes in the
`ConcreteSubmoduleTypeBuilder` instance for each module (which might be
a `ModuleDict`). These hints are passed into the `ModuleValue` that is
created for desugaring operations on submodules so that indexing into a
`ModuleDict` can be emitted as a getitem op into a dict emitted into the
graph that represents the `ModuleDict`.

**Test Plan**
This commit adds unit tests to `TestModuleContainers` to test this
feature (`test_typed_module_dict`).

Differential Revision: D24070606

Test Plan: Imported from OSS

Reviewed By: ansley

Pulled By: SplitInfinity

fbshipit-source-id: 6019a7242d53d68fbfc1aa5a49df6cfc0507b992
2020-10-31 21:36:23 -07:00
shubhambhokare1
1ea14e30f5 [ONNX] Enable NoneType inputs to export API (#45792)
Summary:
Enables the use of NoneType arguments in the inputs tuple of the export API

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45792

Reviewed By: heitorschueroff

Differential Revision: D24312784

Pulled By: bzinodev

fbshipit-source-id: 1717e856b56062add371af7dc09cdd9c7b5646da
2020-10-29 13:56:52 -07:00
Michael Suo
dc8176356e Various cleanups to ir_emitter and friends (#46686)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46686

I was trying to page this code back in after a while and some things
stuck out as unnecessarily confusing.

1. Improve documentation of closures and fork stuff to be more accurate
to how we use them today.
2. Change `prim::LocalVariableScope` to `prim::ListComprehension`. It is
only ever used for a list comprehensions, and in general the nodes
emitted by `ir_emitter` should correspond to concrete operations or
language features rather than semantic constraints.
3. Change the somewhat mysterious "inputs" and "attributes" argument
names throughout the codebase to be the more obvious "args" and "kwargs"
that they generally represent (I think "inputs" and "attributes" come
from the AST naming).

Test Plan: Imported from OSS

Reviewed By: navahgar, jamesr66a

Differential Revision: D24464197

Pulled By: suo

fbshipit-source-id: 1f4b1475b58b5690a0b204e705caceff969533b4
2020-10-28 16:28:05 -07:00
David Reiss
23bce17baa Add inputsSize to Python IR, like outputsSize (#46779)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46779

Test Plan: Used it in some notebooks.

Reviewed By: suo

Differential Revision: D24574005

Pulled By: dreiss

fbshipit-source-id: 78ba7a2bdb859fef5633212b73c7a3eb2cfbc380
2020-10-28 11:35:39 -07:00
Yanan Cao
f9b9430152 Support doc_string for TorchBind custom classes (#46576)
Summary:
With this PR, users can optionally provide a "doc_string" to describe a class or its method. doc_strings for TorchBind classes and methods are stored as `doc_string` properties in `Function` and `ScriptClass`. These `doc_string` properties are then exposed in the Python layer via PyBind for doc generation.

Fixes https://github.com/pytorch/pytorch/issues/46047

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46576

Reviewed By: wanchaol

Differential Revision: D24440636

Pulled By: gmagogsfm

fbshipit-source-id: bfa9b270a6c2d8bc769a88fad6be939cc6310412
2020-10-24 12:51:35 -07:00
Yi Wang
98aad933b6 [pytorch][PR] Record FutureNCCL callback stream on CUDA caching allocator (#45318)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45318

When calling `then()` from WorkNCCL, record the input data pointers in futureNCCLCallbackStream_ before the execution of the input callback.

Note that the recording cannot be directly added to the lambda used by addCallback in ProcessGroupNCCL.hpp. This is because the type of the future value in that context is a pyobject rather than a TensorList, and a type cast would require pybind and introduce a Python dependency, which should not be allowed in the c10d library.

I have considered creating a util function in a separate file to support this type casting, and then placing it under torch/csrc directory where python dependency is allowed. However, torch/csrc has a dependency on c10d, so this will create a circular dependency.

Finally, a `record_stream_cb_` member is added to FutureNCCL, with a default value of nullptr. A default `record_stream_cb_` implementation is added to `PythonFutureWrapper`, where a Python dependency is allowed.

In addition, a few lines are reformatted by lint.
caffe2/torch/csrc/distributed/c10d/init.cpp is only reformatted.

Closes: https://github.com/pytorch/pytorch/issues/44203

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- ProcessGroupNCCLTest
buck test mode/dev-nosan caffe2/test/distributed:c10d  -- test_accumulate_gradients_no_sync_allreduce_with_then_hook
buck test mode/dev-nosan caffe2/test/distributed:c10d  -- test_ddp_comm_hook_allreduce_with_then_hook_nccl

Reviewed By: pritamdamania87

Differential Revision: D23910257

fbshipit-source-id: 66920746c41f3a27a3689f22e2a2d9709d0faa15
2020-10-22 01:49:47 -07:00
Lillian Johnson
f83cf2dab3 [JIT] adding torch.jit.isinstance support (#46062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46062

Adds support for torch.jit.isinstance in both eager and script mode

Example use:

```
import torch
from typing import Any, List

class TestModule(torch.nn.Module):
    def __init__(self):
        super(TestModule, self).__init__()

    def call(self, input1: str, input2: str) -> str:
        return input1

    def forward(self, input: Any) -> None:
        if torch.jit.isinstance(input, List[str]):
            for el in input:
                print(el)

TestModule().forward(["1","2"])
scripted_module = torch.jit.script(TestModule())
scripted_module(["1", "2"])
```

Test Plan: Imported from OSS

Reviewed By: bertmaher, zou3519

Differential Revision: D24264415

Pulled By: Lilyjjo

fbshipit-source-id: 039c95bddd854c414027ac8332832e6bc830b5b9
2020-10-20 16:47:49 -07:00
jiej
ac146c4820 [nvFuser] Switching to CudaFusionGuard from BailOut for nvfuser - update 2 (#46452)
Summary:
1. Added CudaFusionGuard as the custom TypeCheck for nvfuser; enabled dynamic shape support with profiling executor;
2. dropped support for legacy fuser;
3. re-enabled nvfuser tests;
4. added registration for profiling record to allow profiling on user specified nodes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46452

Reviewed By: zou3519, anjali411

Differential Revision: D24364642

Pulled By: ngimel

fbshipit-source-id: daf53a9a6b6636e1ede420a3a6d0397d4a8b450b
2020-10-19 15:44:31 -07:00
Tao Xu
495070b388 [Metal] Add the Python binding for optimize_for_mobile (#46456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46456

Add the python binding in CMake. The general workflow is

- Build pytorch -  `USE_PYTORCH_METAL=ON python setup.py install --cmake`
- Run optimize_for_mobile

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

scripted_model = torch.jit.load('./mobilenetv2.pt')
optimized_model = optimize_for_mobile(scripted_model, backend='metal')
torch.jit.export_opnames(optimized_model)
torch.jit.save(optimized_model, './mobilenetv2_metal.bc')
```
The exported ops are

```
['aten::adaptive_avg_pool2d', 'aten::add.Tensor', 'aten::addmm', 'aten::reshape', 'aten::size.int', 'metal::copy_to_host', 'metal_prepack::conv2d_run']
```
ghstack-source-id: 114559878

Test Plan:
- Sandcastle CI
- Circle CI

Reviewed By: kimishpatel

Differential Revision: D24356768

fbshipit-source-id: fb5c4c4b6316347b67edb4132da044a81470ddfd
2020-10-17 10:26:25 -07:00
chengjun
5741de883a Define the record_stream method in native_functions.yaml (#44301)
Summary:
The record_stream method was hard-coded for the CUDA device. Defining record_stream in native_functions.yaml enables dynamic dispatch to different end devices.
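
As a sketch, the entry has this shape (the schema string follows the method described above; the field layout follows native_functions.yaml conventions, and the dispatch target name here is illustrative, not necessarily the one in the tree):

```yaml
- func: record_stream(Tensor(a!) self, Stream s) -> ()
  variants: method
  dispatch:
    CUDA: record_stream_cuda
```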

Fixes https://github.com/pytorch/pytorch/issues/36556

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44301

Reviewed By: glaringlee

Differential Revision: D23763954

Pulled By: ezyang

fbshipit-source-id: e6d24f5e7892b56101fa858a6cad2abc5cdc4293
2020-10-13 09:15:22 -07:00
Brian Hirsh
a3caa719af fix #45552 - adding add_done_callback(fn) to torch.futures.Future (#45675)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45675
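
The added API mirrors Python's standard-library future interface; a minimal sketch of the callback pattern using stdlib `concurrent.futures` (not `torch.futures` itself):

```python
from concurrent.futures import ThreadPoolExecutor

results = []
with ThreadPoolExecutor(max_workers=1) as ex:
    fut = ex.submit(lambda: 21 * 2)
    # The callback receives the completed future itself:
    fut.add_done_callback(lambda f: results.append(f.result()))
# Exiting the with-block waits for the work (and its callback) to finish.
assert results == [42]
```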

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24055353

Pulled By: bdhirsh

fbshipit-source-id: 9233c8e17acc878f0fecbe740a4397fb55cf722f
2020-10-13 07:47:36 -07:00
Elias Ellison
564296f051 [2/3] [JIT] Make sure fusion occurs in test_tensorexpr (#45789)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45789

Making sure that more tests invoke a run with a Fusion Group.

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D24169535

Pulled By: eellison

fbshipit-source-id: 54d7af434772ba52144b12d15d32ae30460c0c3c
2020-10-08 12:06:16 -07:00
Elias Ellison
1b97ffa07a [1/3] [JIT] Make sure fusion occurs in test_tensorexpr file (#45788)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45788

We were only running the traced graph once, at which point it would not yet have been fused. We should run for num_profiled_runs + 1, and also assert that all nodes in the graph were fused.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D24169537

Pulled By: eellison

fbshipit-source-id: 8499bb1a5bd9d2221b1f1c54d6352558cf07ba9a
2020-10-08 12:02:57 -07:00
James Reed
be45c3401a [JIT] Make objects throw Python AttributeError on nonexistant attr access (#45911)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45911

Test Plan: Imported from OSS

Reviewed By: robieta

Differential Revision: D24140971

Pulled By: jamesr66a

fbshipit-source-id: 046a2cffff898efad5bcc36a41bf992f36f555f9
2020-10-07 01:57:29 -07:00
Meghan Lele
4fdba30500 [JIT] Add API for ignoring arbitrary module attributes (#45262)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45262

**Summary**
This commit adds an API for ignoring arbitrary module attributes during
scripting. A class attribute named `ignored_attributes` containing names
of attributes to ignore can be added to the class of the instance being
scripted. Attributes ignored in this fashion cannot be used in
`forward`, methods used by `forward` or by `exported` methods. They
are, however, copied to the `RecursiveScriptModule` wrapper and can be
used by `ignored` methods and regular Python code.
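
A pure-Python sketch of the convention (the real filtering happens inside `torch.jit.script`'s recursive compilation, not in user code; the attribute names here are made up for illustration):

```python
class MyModule:
    # The class attribute described above, listing attributes to ignore:
    ignored_attributes = ["debug_log"]

    def __init__(self):
        self.weight_name = "w"
        self.debug_log = ["step 0"]

m = MyModule()
# Attributes listed in ignored_attributes are skipped when collecting
# scriptable state, but remain visible to plain Python code:
scripted_attrs = {
    k: v for k, v in vars(m).items()
    if k not in getattr(type(m), "ignored_attributes", [])
}
assert scripted_attrs == {"weight_name": "w"}
assert m.debug_log == ["step 0"]  # still accessible from Python
```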

**Test Plan**
This commit adds unit tests to `TestScriptPy3` to test this new API.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D23971882

Pulled By: SplitInfinity

fbshipit-source-id: 8c81fb415fde7b78aa2f87e5d83a477e876a7cc3
2020-10-06 18:02:06 -07:00
Ansley Ussery
f18cc9c57d Change type inferred from empty annotation (#45360)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45360

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24078645

Pulled By: ansley

fbshipit-source-id: 5d37d07df75bd7a2111d44638befe53c1021ee82
2020-10-05 15:16:56 -07:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding as `ShapeSymbol` in `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference will start with these axes set as dynamic.
* Add a new test file, `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py` but focusing on validating shapes for all nodes in the graph. This is currently not enabled in the CI, since there are still quite a few existing issues and corner cases to fix. The test defaults to running only at opset 12.
* Bug fixes, such as div, _len, and peephole.cpp passes for PackPadded, and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
Negin Raoof
6b42ca2d69 [ONNX] Update embedding_bag export (#44693)
Summary:
Export of embedding bag with dynamic list of offsets.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44693

Reviewed By: malfet

Differential Revision: D23831980

Pulled By: bzinodev

fbshipit-source-id: 3eaff1a0f20d1bcfb8039e518d78c491be381e1a
2020-09-30 13:36:40 -07:00
Ilia Cherniavskii
f5c95d5cf1 Source code level attribution in profiler (#43898)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43898

Adding with_source parameter to enable tracking source code
(filename and line) in profiler for eager, torchscript and autograd
modes
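
The kind of attribution being recorded (the caller's filename and line) can be sketched with the stdlib `inspect` module; this illustrates the concept only, not the profiler's implementation:

```python
import inspect

def source_location():
    # Return the caller's "filename(lineno)" string, the same shape of
    # attribution shown in the profiler table below.
    caller = inspect.stack()[1]
    return f"{caller.filename}({caller.lineno})"

loc = source_location()
assert loc.endswith(")")
```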

Test Plan:
python test/test_profiler.py
```
Name                                 Self CPU total %  Self CPU total   CPU total %      CPU total        CPU time avg     Number of Calls  Source Location
-----------------------------------  ---------------  ---------------  ---------------  ---------------  ---------------  ---------------  --------------------------------------------
ts_method_1                          10.43%           235.364us        36.46%           822.920us        822.920us        1                test/test_profiler.py(70): test_source
aten::add                            7.52%            169.833us        8.88%            200.439us        200.439us        1                test/test_profiler.py(69): test_source
aten::normal_                        6.26%            141.380us        6.26%            141.380us        141.380us        1                test/test_profiler.py(67): test_source
aten::add                            5.80%            130.830us        8.41%            189.800us        63.267us         3                test/test_profiler.py(72): test_source
aten::sum                            5.02%            113.340us        8.39%            189.475us        189.475us        1                test/test_profiler.py(64): ts_method_1
aten::add                            4.58%            103.346us        6.33%            142.847us        142.847us        1                test/test_profiler.py(62): ts_method_1
aten::mul                            4.05%            91.498us         9.62%            217.113us        217.113us        1                test/test_profiler.py(71): test_source
aten::add                            4.03%            90.880us         5.60%            126.405us        126.405us        1                test/test_profiler.py(58): ts_method_2
aten::empty                          3.49%            78.735us         3.49%            78.735us         19.684us         4                test/test_profiler.py(72): test_source
```

Reviewed By: ngimel

Differential Revision: D23432664

Pulled By: ilia-cher

fbshipit-source-id: 83ad7ebe0c2502494d3b48c4e687802db9c77615
2020-09-30 00:57:35 -07:00
shubhambhokare1
5b839bca78 [ONNX] Optimize export_onnx api to reduce string and model proto exchange (#44332)
Summary:
Optimize export_onnx api to reduce string and model proto exchange in export.cpp

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44332

Reviewed By: bwasti, eellison

Differential Revision: D23880129

Pulled By: bzinodev

fbshipit-source-id: 1d216d8f710f356cbba2334fb21ea15a89dd16fa
2020-09-27 16:29:08 -07:00
gunandrose4u
f07ac6a004 Fix Windows build failure after DDP PR merged (#45335)
Summary:
Fixes #{issue number}
This is a resubmit of PR https://github.com/pytorch/pytorch/issues/42897, together with a fix for the Windows build issue introduced by PR https://github.com/pytorch/pytorch/issues/44344.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45335

Reviewed By: zou3519

Differential Revision: D23931471

Pulled By: mrshenli

fbshipit-source-id: f49b5a114944c1450b32934b3292170be064f494
2020-09-25 12:37:50 -07:00
Mike Ruberry
103fa3894a Revert D23841786: [pytorch][PR] Enable distributed package on windows, Gloo backend supported only
Test Plan: revert-hammer

Differential Revision:
D23841786 (0122299f9b)

Original commit changeset: 334ba1ed73ef

fbshipit-source-id: ec95432f9957df56a5a04e52661f5db920b7f57f
2020-09-24 22:44:33 -07:00
gunandrose4u
0122299f9b Enable distributed package on windows, Gloo backend supported only (#42897)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42095

Test cases will be committed to this PR later

mrshenli, please help to review

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42897

Reviewed By: osalpekar

Differential Revision: D23841786

Pulled By: mrshenli

fbshipit-source-id: 334ba1ed73eff2f668857390fc32d1bc7f08e5f3
2020-09-24 21:13:55 -07:00
Jerry Zhang
f575df201f [quant][graphmode][jit][api] Expose preserved_attrs from finalize to convert_jit (#44490)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44490

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23631142

fbshipit-source-id: f0913f0cb4576067e2a7288326024942d12e0ae0
2020-09-22 19:37:25 -07:00
Meghan Lele
e045119956 [JIT] Add default arguments for class types (#45098)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45098

**Summary**
This commit adds support for default arguments in methods of class
types. Similar to how default arguments are supported for regular
script functions and methods on scripted modules, default values are
retrieved from the definition of a TorchScript class in Python as Python
objects, converted to IValues, and then attached to the schemas of
already compiled class methods.
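
The retrieval step can be sketched in plain Python with `inspect.signature`; `Accumulator` is a hypothetical class standing in for a TorchScript class definition, and this only illustrates how defaults are read from the Python source:

```python
import inspect

class Accumulator:
    # Hypothetical stand-in for a TorchScript class definition.
    def __init__(self, start: int = 0):
        self.total = start

    def add(self, x: int, times: int = 1) -> int:
        self.total += x * times
        return self.total

# Retrieve default values from the Python definition, as the commit
# describes, before attaching them to compiled method schemas:
defaults = {
    name: p.default
    for name, p in inspect.signature(Accumulator.add).parameters.items()
    if p.default is not inspect.Parameter.empty
}
assert defaults == {"times": 1}
```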

**Test Plan**
This commit adds a set of new tests to TestClassType to test default
arguments.

**Fixes**
This commit fixes #42562.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D23844769

Pulled By: SplitInfinity

fbshipit-source-id: ceedff7703bf9ede8bd07b3abcb44a0f654936bd
2020-09-22 18:37:44 -07:00
Ivan Kobzarev
e9941a5dd4 [vulkan][py] torch.utils.optimize_for_vulkan (#44903)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44903

Test Plan: Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D23766039

Pulled By: IvanKobzarev

fbshipit-source-id: dbdf484ee7d3a7719aab105efba51b92ebc51568
2020-09-18 18:20:11 -07:00
Shawn Wu
572f7e069c Enable type check for torch.testing._internal.te_utils.* (#44927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44927

Test Plan: Imported from OSS

Reviewed By: walterddr

Differential Revision: D23776842

Pulled By: sshawnwu

fbshipit-source-id: 65c028169a37e1f2f7d9fdce8a958234ee1caa26
2020-09-18 18:09:15 -07:00
Michael Suo
374e9373b5 [jit] Pull (most) tests out of libtorch_python (#44795)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44795

Today, we build our cpp tests twice, once as a standalone gtest binary,
and once linked in `libtorch_python` so we can call them from
`test_jit.py`.

This is convenient (it means that `test_jit.py` is a single entry point
for all our tests), but has a few drawbacks:
1. We can't actually use the gtest APIs, since we don't link gtest into
`libtorch_python`. We're stuck with the subset that we want to write
polyfills for, and an awkward registration scheme where you have to
write a test and then include it in `tests.h`.
2. More seriously, we register custom operators and classes in these
tests. In a world where we may be linking many `libtorch_python`s, this
has a tendency to cause errors with `libtorch`.

So now, only tests that explicitly require cooperation with Python are
built into `libtorch_python`. The rest are built into
`build/bin/test_jit`.

There are tests which require that we define custom classes and
operators. In these cases, I've built them into separate `.so`s that we
call `torch.ops.load_library()` on.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity, ZolotukhinM

Differential Revision: D23735520

Pulled By: suo

fbshipit-source-id: d146bf4e7eb908afa6f96b394e4d395d63ad72ff
2020-09-18 14:04:40 -07:00
Yanan Cao
174cbff00a Improve sugared value's error message (#42889)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **https://github.com/pytorch/pytorch/issues/42889 Improve sugared value's error message**

I think most (if not all) cases where this code path is reached can be attributed to closing over a global variable.
Improving error message to make this clearer to users.

close https://github.com/pytorch/pytorch/issues/41288

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42889

Reviewed By: SplitInfinity

Differential Revision: D23779347

Pulled By: gmagogsfm

fbshipit-source-id: ced702a96234040f79eb16ad998d202e360d6654
2020-09-18 11:01:40 -07:00
Yanan Cao
99093277c0 Support Python Slice class in TorchScript (#44335)
Summary:
Implements support for the [Python slice class](https://docs.python.org/3/c-api/slice.html) (not the slice expression, which is already supported)

A slice object can be used in any place that supports a slice expression, including multi-dim tensor slicing.
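
In plain Python, the interchangeability of slice objects and slice expressions looks like this (nested lists stand in for tensors here; the commit makes the same pattern work under scripting):

```python
s = slice(1, 3)
data = [10, 20, 30, 40]
# A slice object indexes exactly like the equivalent slice expression:
assert data[s] == data[1:3] == [20, 30]

# Multi-dim slicing with slice objects, shown on nested lists:
grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
rows, cols = slice(0, 2), slice(1, None)
sub = [row[cols] for row in grid[rows]]
assert sub == [[2, 3], [5, 6]]
```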

Fixes https://github.com/pytorch/pytorch/issues/43511
Fixes https://github.com/pytorch/pytorch/issues/43125

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44335

Reviewed By: suo, jamesr66a

Differential Revision: D23682213

Pulled By: gmagogsfm

fbshipit-source-id: f74fe25370e89fbfd2b3727d95ce4e1c4ba8dec4
2020-09-17 00:41:53 -07:00
Yanan Cao
6befc09465 Fix misuse of PyObject_IsSubclass (#44769)
Summary:
PyObject_IsSubclass may set the Python live-exception bit if the given object is not a class. `IsNamedTuple` is currently using it incorrectly, which may trip all following Python operations in a debug-build Python. Normal release-build Python is not affected because `assert` is a no-op in release builds.
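
The same pitfall exists at the Python level, where `issubclass` errors out for a non-class first argument just as `PyObject_IsSubclass` does; a hedged sketch of a checker that guards for it (illustrative only, not the C++ `IsNamedTuple`):

```python
from collections import namedtuple

def is_named_tuple_class(obj):
    # Guard with isinstance(obj, type) first: issubclass(), like the
    # C-API PyObject_IsSubclass, raises TypeError for a non-class.
    return (isinstance(obj, type)
            and issubclass(obj, tuple)
            and hasattr(obj, "_fields"))

Point = namedtuple("Point", ["x", "y"])
assert is_named_tuple_class(Point)
assert not is_named_tuple_class(Point(1, 2))  # an instance, not a class
assert not is_named_tuple_class(5)            # non-class: no exception raised
```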

Fixes https://github.com/pytorch/pytorch/issues/43577

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44769

Reviewed By: jamesr66a

Differential Revision: D23725584

Pulled By: gmagogsfm

fbshipit-source-id: 2dabd4f8667a045d5bf75813500876c6fd81542b
2020-09-16 16:19:01 -07:00
Dmytro Dzhulgakov
2f4c31ce3a [jit] Speed up saving in case of many classes (#44589)
Summary:
There's an annoying O(N^2) in the module export logic that makes saving some models (if they have many classes) take an eternity.

I'm not familiar enough with this code to properly untangle the deps and make it a pure hash lookup, so I just added a side lookup table for raw pointers. It's still quadratic, but it's O(num_classes^2) instead of O(num_classes * num_references), which already gives huge savings.
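
The idea of the fix can be sketched generically (illustrative Python, not the actual export code): replace a repeated linear scan with a side table keyed on object identity.

```python
class Registry:
    """Assign a stable index to each distinct object (sketch only)."""

    def __init__(self):
        self.items = []
        self._by_id = {}  # id(obj) -> index: the side lookup table

    def find_or_add(self, obj):
        idx = self._by_id.get(id(obj))
        if idx is None:  # O(1) average lookup instead of scanning self.items
            idx = len(self.items)
            self.items.append(obj)  # keeps obj alive, so id() stays valid
            self._by_id[id(obj)] = idx
        return idx

r = Registry()
a, b = object(), object()
assert r.find_or_add(a) == 0
assert r.find_or_add(b) == 1
assert r.find_or_add(a) == 0  # repeated references hit the table
```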

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44589

Test Plan:
Tested with one of the offending models - just loading a saving a Torchscript file:

```
Before:
load 1.9239683151245117
save 165.74712467193604

After:
load 1.9409027099609375
save 1.4711427688598633
```

Reviewed By: suo

Differential Revision: D23675278

Pulled By: dzhulgakov

fbshipit-source-id: 8f3fa7730941085ea20d9255b49a149ac1bf64fe
2020-09-15 15:10:45 -07:00
Meghan Lele
e7d782e724 [JIT] Add property support for ScriptModules (#42390)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42390

**Summary**
This commit extends support for properties to include
ScriptModules.

**Test Plan**
This commit adds a unit test that has a ScriptModule with
a user-defined property.

`python test/test_jit_py3.py TestScriptPy3.test_module_properties`

Test Plan: Imported from OSS

Reviewed By: eellison, mannatsingh

Differential Revision: D22880298

Pulled By: SplitInfinity

fbshipit-source-id: 74f6cb80f716084339e2151ca25092b6341a1560
2020-09-14 18:49:21 -07:00
Wanchao Liang
ab6126b50e [rpc][jit] support remote call in TorchScript (#43046)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43046

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D23621108

Pulled By: wanchaol

fbshipit-source-id: e8152c6cdd3831f32d72d46ac86ce22f3f13c651
2020-09-11 14:59:51 -07:00
Wanchao Liang
3e5df5f216 [rpc][jit] support rpc_sync in TorchScript (#43043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43043

This add the support for rpc_sync in TorchScript in a way similar to
rpc_async

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D23252039

Pulled By: wanchaol

fbshipit-source-id: 8a05329cb8a24079b2863178b73087d47273914c
2020-09-11 14:59:47 -07:00
Ann Shan
a61318a535 [pytorch] Replace mobile run_method with get_method and operator() (#44202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44202

In preparation for changing mobile run_method() to be variadic, this diff:

* Implements get_method() for mobile Module, which is similar to find_method but expects the method to exist.
* Replaces calls to the current nonvariadic implementation of run_method() by calling get_method() and then invoking the operator() overload on Method objects.
ghstack-source-id: 111848222

Test Plan: CI, and all the unit tests which currently contain run_method that are being changed.

Reviewed By: iseeyuan

Differential Revision: D23436351

fbshipit-source-id: 4655ed7182d8b6f111645d69798465879b67a577
2020-09-11 10:23:06 -07:00
Mikhail Zolotukhin
c6febc6480 [JIT] Add a python hook for a function to interpret JIT graphs. (#44493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44493

This function allows executing a graph exactly as it is, without going
through a graph executor, which would run passes on the graph before
interpreting it. I found this feature extremely helpful when I worked on
a stress-testing script to shake out bugs from the TE fuser: I needed to
run a very specific set of passes on a graph and nothing else, and
then execute exactly that graph.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23632505

Pulled By: ZolotukhinM

fbshipit-source-id: ea81fc838933743e2057312d3156b77284d832ef
2020-09-11 02:55:26 -07:00
Taewook Oh
7a64b0c27a Export Node::isBefore/isAfter for PythonAPI (#44162)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44162

This diff exports the Node::isBefore/isAfter methods to the Python API.

Test Plan: Tested locally. Please let me know if there is a set of unit tests to be passed.

Reviewed By: soumith

Differential Revision: D23514448

fbshipit-source-id: 7ef709b036370217ffebef52fd93fbd68c464e89
2020-09-09 00:57:08 -07:00
neginraoof
3d7c22a2ce [ONNX] Enable new scripting passes for functionalization and remove_mutation (#43791)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/41413
This PR initiates the process of updating the TorchScript backend interface used by the ONNX exporter.

Replace jit lower graph pass by freeze module pass

Enable ScriptModule tests for ONNX operator tests (ORT backend) and model tests by default.

Replace jit remove_inplace_ops pass with remove_mutation and consolidation all passes for handling inplace ops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43791

Reviewed By: houseroad

Differential Revision: D23421872

Pulled By: bzinodev

fbshipit-source-id: a98710c45ee905748ec58385e2a232de2486331b
2020-09-04 15:21:45 -07:00
Bert Maher
98ad5ff41f [te] Disable reductions by default (#44122)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44122

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D23504769

Pulled By: bertmaher

fbshipit-source-id: 1889217cd22da529e46ab30c9319a5646267e4ec
2020-09-03 23:37:45 -07:00
Michael Suo
9dd8670d7d [jit] Better match behavior of loaded ScriptModules vs. freshly created ones (#43298)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43298

IR emitter uses `ModuleValue` to represent ScriptModules and emit IR for
attribute access, submodule access, etc.

`ModuleValue` relies on two pieces of information, the JIT type of the
module, and the `ConcreteModuleType`, which encapsulates Python-only
information about the module.

ScriptModules loaded from a package used to create a dummy
ConcreteModuleType without any info in it. This led to divergences in
behavior during compilation.

This PR makes the two ways of constructing a ConcreteModuleType equivalent,
modulo any py-only information (which, by definition, is never present in
packaged files anyway).

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23228738

Pulled By: suo

fbshipit-source-id: f6a660f42272640ca1a1bb8c4ee7edfa2d1b07cc
2020-09-03 15:03:39 -07:00
Michael Suo
74f18476a2 [jit] fix segfault in attribute lookup on loaded ScriptModules (#43284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43284

The IR emitter looks for attributes on modules like:
1. Check the JIT type for the attribute
2. Check the originating Python class, in order to fulfill requests for, e.g. static methods or ignored methods.

In the case where you do:
```
inner_module = torch.jit.load("inner.pt")
wrapped = Wrapper(inner_module)  # wrap the loaded ScriptModule in an nn.Module
torch.jit.script(wrapped)
```

The IR emitter may check for attributes on `inner_module`. There is no
originating Python class for `inner_module`, since it was directly
compiled from the serialized format.

Due to a bug in the code, we don't guard against this case, and a segfault
results if the wrapper asks for an undefined attribute. The lookup in
this case looks like:
1. Check the JIT type for the attribute (not there!)
2. Check the originating Python class (this is a nullptr! segfault!)

This PR guards this case and properly just raises an attribute missing
compiler error instead of segfaulting.

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D23224337

Pulled By: suo

fbshipit-source-id: 0cf3060c427f2253286f76f646765ec37b9c4c49
2020-09-03 15:01:59 -07:00
Ann Shan
9b3c72d46e [pytorch] Make mobile find_method return an optional (#43965)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43965

As part of a larger effort to unify the API between the lite interpreter and full JIT:
- implement torch::jit::mobile::Method, a proxy for torch::jit::mobile::Function
- add support for overloaded operator() to mobile Method and Function
- mobile find_method now returns a c10::optional<Method> (so signature matches full jit)
- moves some implementation of Function from module.cpp to function.cpp
ghstack-source-id: 111161942

Test Plan: CI

Reviewed By: iseeyuan

Differential Revision: D23330762

fbshipit-source-id: bf0ba0d711d9566c92af31772057ecd35983ee6d
2020-09-03 14:46:18 -07:00
Lu Fang
f15e27265f [torch.fx] Add support for custom op (#43248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43248

We add the support of __torch_function__ override for C++ custom op. The logic is the same as the other components, like torch.nn.Module.
Refactored some code a little bit to make it reusable.

Test Plan: buck test //caffe2/test:fx -- test_torch_custom_ops

Reviewed By: bradleyhd

Differential Revision: D23203204

fbshipit-source-id: c462a86e407e46c777171da32d7a40860acf061e
2020-09-02 16:08:37 -07:00
BowenBao
08126c9153 [ONNX] Utilize ONNX shape inference for ONNX exporter (#40628)
Summary:
It is often the case that converting a torch operator to an ONNX operator requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this info, leaving a gap in the conversion of scripted modules.

We are extending the export with support from onnx shape inference. If enabled, onnx shape inference will be called whenever an onnx node is created. This is the first PR introducing the initial look of the feature. More and more cases will be supported following this PR.

* Added pass to run onnx shape inference on a given node. The node has to have namespace `onnx`.
* Moved helper functions from `export.cpp` to a common place for re-use.
* This feature is currently experimental, and can be turned on through flag `onnx_shape_inference` in internal api `torch.onnx._export`.
* Currently skipping ONNX Sequence ops, If/Loop and ConstantOfShape due to limitations. Support will be added in the future.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40628

Reviewed By: mrshenli

Differential Revision: D22709746

Pulled By: bzinodev

fbshipit-source-id: b52aeeae00667e66e0b0c1144022f7af9a8b2948
2020-08-30 18:35:46 -07:00
Ashkan Aliabadi
4e39c310eb Move torch/csrc/utils/hash.h to c10/util/hash.h. (#42503)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42503

Test Plan: Imported from OSS

Reviewed By: IvanKobzarev

Differential Revision: D23252331

Pulled By: AshkanAliabadi

fbshipit-source-id: 3c4c0e27b9a7eec8560e374c2a3ba5f1c65dae48
2020-08-29 17:47:00 -07:00
Dmytro Dzhulgakov
47e489b135 Make ExtraFilesMap return bytes instead of str (#43241)
Summary:
In case we want to store binary files using `ScriptModule.save(..., _extra_files=...)` functionality. With python3 we can just use bytes only and not bother about it.

I had to do a copy-paste from the pybind sources; maybe we should upstream it, but it'd mean adding a bunch of template arguments to `bind_map`, which is a bit untidy.

Let me know if there's a better place to park this function (it seems to be the only invocation of `bind_map` so I put it in the same file)
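The motivation can be illustrated in plain Python, independent of torch: arbitrary binary payloads round-trip as `bytes` but cannot, in general, be represented as `str` (the byte values below are arbitrary and purely illustrative):

```python
# Illustrative only: why a str-returning extra-files API breaks binary files.
payload = bytes([0, 159, 146, 150])  # arbitrary binary, not valid UTF-8

try:
    payload.decode("utf-8")
    representable_as_str = True
except UnicodeDecodeError:
    # 0x9f is a UTF-8 continuation byte with no leading byte, so decoding fails.
    representable_as_str = False

# A str-returning ExtraFilesMap would mangle or reject this payload;
# returning bytes round-trips it unchanged.
```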

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43241

Reviewed By: zdevito

Differential Revision: D23205244

Pulled By: dzhulgakov

fbshipit-source-id: 8f291eb4294945fe1c581c620d48ba2e81b3dd9c
2020-08-28 19:11:33 -07:00
Protonu Basu
58a7e73a95 [TensorExpr] Block Codegen (#40054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40054

Reviewed By: ZolotukhinM

Differential Revision: D22061350

Pulled By: protonu

fbshipit-source-id: 004f7c316629b16610ecdbb97e43036c72c65067
2020-08-28 09:53:42 -07:00
aizjForever
cdc3e232e9 Add __str__ and __repr__ bindings to SourceRange (#43601)
Summary:
Added the bindings for `__str__` and `__repr__` methods for SourceRange

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43601

Test Plan:
`python test/test_jit.py`

cc gmagogsfm

Reviewed By: agolynski

Differential Revision: D23366500

Pulled By: gmagogsfm

fbshipit-source-id: ab4be6e8f9ad5f67a323554437878198483f4320
2020-08-27 12:30:47 -07:00
Yanan Cao
35a36c1280 Implement JIT Enum type serialization and deserialization (#43460)
Summary:
[Re-review tips: nothing changed other than a type in python_ir.cpp to fix a windows build failure]

Adds code printing for enum type
Enhance enum type to include all contained enum names and values
Adds code parsing for enum type in deserialization
Enabled serialization/deserialization test in most TestCases. (With a few dangling issues to be addressed in later PRs to avoid this PR grows too large)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43460

Reviewed By: albanD

Differential Revision: D23284929

Pulled By: gmagogsfm

fbshipit-source-id: e3e81d6106f18b7337ac3ff5cd1eeaff854904f3
2020-08-24 12:04:31 -07:00
Pavel Belevich
d94b10a832 Revert D23223281: Add Enum TorchScript serialization and deserialization support
Test Plan: revert-hammer

Differential Revision:
D23223281 (f269fb83c1)

Original commit changeset: 716d1866b777

fbshipit-source-id: da1ad8387b7d7aad9ff69e1ebeb5cd0b9394c2df
2020-08-22 02:38:12 -07:00
Zino Benaissa
abe878ce96 Allow Freezing of Module containing interface attribute (#41860)
Summary:
This patch allows freezing a model that utilizes interfaces. Freezing works
under the user's assumption that the interface module does not alias
any value used in the model.

To enable freezing of such modules, an extra parameter was added:

torch._C._freeze_module(module, ignoreInterfaces = True)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41860

Reviewed By: eellison

Differential Revision: D22670566

Pulled By: bzinodev

fbshipit-source-id: 41197a724bc2dca2e8495a0924c224dc569f62a4
2020-08-21 18:57:13 -07:00
Yanan Cao
f269fb83c1 Add Enum TorchScript serialization and deserialization support (#42963)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42963

* Adds code printing for enum type
* Enhance enum type to include all contained enum names and values
* Adds code parsing for enum type in deserialization
* Enabled serialization/deserialization test in most TestCases. (With a few dangling issues to be addressed in later PRs to avoid this PR grows too large)

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D23223281

Pulled By: gmagogsfm

fbshipit-source-id: 716d1866b7770dfb7bd8515548cfe7dc4c4585f7
2020-08-21 18:13:27 -07:00
Yanan Cao
0bd35de30e Add Enum convert back to Python object support (#43121)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43121

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D23222628

Pulled By: gmagogsfm

fbshipit-source-id: 6850c56ced5b52943a47f627b2d1963cc9239408
2020-08-21 10:36:51 -07:00
taivu
665da61d2b Replace Conv1d with Conv2d (#42867)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42867

Test Plan: Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D23177916

Pulled By: kimishpatel

fbshipit-source-id: 68cc40cf42d03e5b8432dc08f9933a4409c76e25
2020-08-20 21:36:51 -07:00
Sinan Nasir
6e1127ea3f [NCCL] Changed FutureNCCL's then callback logic for better efficiency. (#42869)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42869

We realized that when we invoke a simple callback that divides the tensors by `world_size` after `allreduce`, the performance was almost 50% lower in terms of QPS compared to the case where a simple `allreduce` hook is used with no `then` callback.

The main problem was as we call `work.wait()` before invoking `then` callback, we were synchronizing `work`'s stream with the default PyTorch stream inside [`runHook`](https://github.com/pytorch/pytorch/blob/master/torch/csrc/distributed/c10d/reducer.cpp#L609) and stalling the backward computation.

In this PR, we ensure that FutureNCCL's `then` callback does not stall the backward computation. Assuming single-process single-device, `FutureNCCL` gets a new stream from the device's pool using `at::cuda::getStreamFromPool` to run the `callback`, and before invoking the `callback` inline, it synchronizes `WorkNCCL`'s stream with the callback's stream rather than the default stream.

ghstack-source-id: 110208431

Test Plan: Run performance benchmark tests to validate performance issue is resolved. Also, `python test/distributed/test_c10d.py` to avoid any odd issues.

Reviewed By: pritamdamania87

Differential Revision: D23055807

fbshipit-source-id: 60e50993f1ed97497514eac5cb1018579ed2a4c5
2020-08-19 19:42:22 -07:00
Yael Dekel
3c5e3966f4 [ONNX] Squeeze operator should give an error when trying to apply to a dimension with shape > 1 (#38476)
Summary:
The ONNX spec for the Squeeze operator:

> Remove single-dimensional entries from the shape of a tensor. Takes a parameter axes with a list of axes to squeeze. If axes is not provided, all the single dimensions will be removed from the shape. If an axis is selected with shape entry not equal to one, an error is raised.

Currently, as explained in issue https://github.com/pytorch/pytorch/issues/36796, it is possible to export such a model to ONNX, and this results in an exception from ONNX runtime.

Fixes https://github.com/pytorch/pytorch/issues/36796.
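A plain-Python sketch of the check this change enforces at export time — not the exporter's actual implementation; the helper name and shapes are illustrative:

```python
def check_squeeze_axes(shape, axes=None):
    """Return the squeezed shape, or raise if a selected axis has size != 1,
    matching the ONNX Squeeze semantics quoted above."""
    if axes is None:
        # With no axes given, drop every single-dimensional entry.
        return tuple(d for d in shape if d != 1)
    for axis in axes:
        if shape[axis] != 1:
            raise ValueError(
                f"cannot squeeze axis {axis}: shape entry is {shape[axis]}, not 1"
            )
    return tuple(d for i, d in enumerate(shape) if i not in axes)
```

With this kind of validation, the invalid export is rejected at conversion time instead of surfacing as an ONNX Runtime exception.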

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38476

Reviewed By: hl475

Differential Revision: D22158024

Pulled By: houseroad

fbshipit-source-id: bed625f3c626eabcbfb2ea83ec2f992963defa19
2020-08-17 17:41:46 -07:00
Meghan Lele
fcc10d75e1 [JIT] Add property support to TorchScript classes (#42389)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42389

**Summary**
This commit adds support for properties to TorchScript classes,
specifically for getters and setters. They are implemented essentially
as pointers to the methods that the corresponding decorators decorate,
which are treated like regular class methods. Deleters for properties
are considered to be out of scope (and probably useless for TorchScript
anyway).
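As a plain-Python illustration of the pattern this commit makes scriptable — a class with a getter/setter property; the class and attribute names here are hypothetical, not from the commit:

```python
class Pair:
    def __init__(self, first: int, second: int):
        self.first = first
        self.second = second

    @property
    def total(self) -> int:
        # Getter: compiled like a regular class method.
        return self.first + self.second

    @total.setter
    def total(self, value: int) -> None:
        # Setter: also compiled as an ordinary method.
        self.first = value
        self.second = 0
```

Under this change, decorating such a class for TorchScript compiles the getter and setter; deleters remain unsupported, as noted above.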

**Test Plan**
This commit adds a unit test for a class with a property that has both
getter and setter and one that has only a getter.

`python test/test_jit.py TestClassType.test_properties`

Test Plan: Imported from OSS

Reviewed By: eellison, ppwwyyxx

Differential Revision: D22880232

Pulled By: SplitInfinity

fbshipit-source-id: 4828640f4234cb3b0d4f3da4872a75fbf519e5b0
2020-08-14 12:56:57 -07:00
taivu
ccd9f3244b Get, save, and load module information for each operator (#42133)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42133

Test Plan:
We save a module with module debugging information as follows.
```
import torch
m = torch.jit.load('./detect.pt')
# Save module without debug info
m._save_for_lite_interpreter('./detect.bc')
# Save module with debug info
m._save_for_lite_interpreter('./detect.bc', _save_debug_info_in_bytecode=True)
```
Size of the file without module debugging information: 4.508 MB
Size of the file with module debugging information: 4.512 MB

Reviewed By: kimishpatel

Differential Revision: D22803740

Pulled By: taivu1998

fbshipit-source-id: c82ea62498fde36a1cfc5b073e2cea510d3b7edb
2020-08-14 01:25:27 -07:00
taivu
02c8ad70f2 Reconstruct scopes (#41615)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41615

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D22611331

Pulled By: taivu1998

fbshipit-source-id: d4ed4cf6360bc1f72ac9fa24bb4fcf6b7d9e7576
2020-08-13 22:38:16 -07:00
Bram Wasti
ada8404f2d [jit] Scaffold a static runtime (#42753)
Summary:
The premise of this approach is that a small subset of neural networks are well represented by a data flow graph.  The README contains more information.

The name is subject to change, but I thought it was a cute reference to fire.

suo let me know if you'd prefer this in a different spot.  Since it lowers a JIT'd module directly I assumed the JIT folder would be appropriate.  There is no exposed Python interface yet (but it is mocked up in `test_accelerant.py`)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42753

Reviewed By: zou3519

Differential Revision: D23043771

Pulled By: bwasti

fbshipit-source-id: 5353731e3aae31c08b5b49820815da98113eb551
2020-08-12 13:05:27 -07:00
Ksenija Stanojevic
e845b0ab51 [Resending] [ONNX] Add eliminate_unused_items pass (#42743)
Summary:
This PR:

- Adds eliminate_unused_items pass that removes unused inputs and initializers.
- Fixes run_embed_params function so it doesn't export unnecessary parameters.
- Removes test_modifying_params in test_verify since it's no longer needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42743

Reviewed By: hl475

Differential Revision: D23058954

Pulled By: houseroad

fbshipit-source-id: cd1e81463285a0bf4e60766c8c87fc9a350d9c7e
2020-08-11 20:30:50 -07:00
Vasiliy Kuznetsov
79b8328aaf optimize_for_mobile: bring packed params to root module (#42740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42740

Adds a pass to hoist conv packed params to root module.
The benefit is that if there is nothing else in the conv module,
subsequent passes will delete it, which will reduce module size.

For context, freezing does not handle this because conv packed
params is a custom object.

Test Plan:
```
PYTORCH_JIT_LOG_LEVEL=">hoist_conv_packed_params.cpp" python test/test_mobile_optimizer.py TestOptimizer.test_hoist_conv_packed_params
```

Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D23005961

fbshipit-source-id: 31ab1f5c42a627cb74629566483cdc91f3770a94
2020-08-08 15:53:20 -07:00
Yanan Cao
9597af01ca Support iterating through an Enum class (#42661)
Summary:
[5/N] Implement Enum JIT support

Implement Enum class iteration
Add aten.ne for EnumType

Supported:
Enum-typed function arguments
using Enum type and comparing them
Support getting name/value attrs of enums
Using Enum value as constant
Support Enum-typed return values
Support iterating through Enum class (enum value list)

TODO:
Support serialization and deserialization
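The shapes of code this enables can be illustrated in plain Python, whose Enum semantics TorchScript mirrors (the enum and function names are illustrative, not from the commit):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

def count_red(colors: list) -> int:
    # Comparing Enum values relies on the eq/ne support for EnumType.
    return sum(1 for c in colors if c == Color.RED)

def all_names() -> list:
    # Iterating through the Enum class yields its values in definition order.
    return [c.name for c in Color]
```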

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42661

Reviewed By: SplitInfinity

Differential Revision: D22977364

Pulled By: gmagogsfm

fbshipit-source-id: 1a0216f91d296119e34cc292791f9aef1095b5a8
2020-08-06 22:56:34 -07:00
BowenBao
a6c8730045 [ONNX] Add preprocess pass for onnx export (#41832)
Summary:
In `_jit_pass_onnx`, symbolic functions are called for each node for conversion. However, there are nodes that cannot be converted without additional context. For example, the number of outputs from split (and whether it is static or dynamic) is unknown until the point where it is unpacked by the listUnpack node. This pass does a preprocessing step and prepares the nodes such that enough context can be received by the symbolic function.
* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later postpasses.
* `_jit_pass_onnx_peephole` should be a pass that does ONNX specific optimizations instead of ONNX specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of the ONNX shape inference https://github.com/pytorch/pytorch/issues/40628.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41832

Reviewed By: ZolotukhinM

Differential Revision: D22968334

Pulled By: bzinodev

fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
2020-08-06 20:34:12 -07:00
Basil Hosmer
feeb515ad5 add Quantizer support to IValue (#42438)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42438

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D22894190

Pulled By: bhosmer

fbshipit-source-id: b2d08abd6f582f29daa6cc7ebf05bb1a99f7514b
2020-08-05 12:56:18 -07:00
Will Constable
6d1e43c5a6 Release the GIL before invokeOperator (#42341)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41865

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42341

Reviewed By: ezyang

Differential Revision: D22928622

Pulled By: wconstab

fbshipit-source-id: 8fa41277c9465f816342db6ec0e6cd4b30095c5c
2020-08-05 11:51:39 -07:00
BowenBao
842759591d [ONNX] Refactor ONNX fixup for Loop and If (#40943)
Summary:
* move both under new file `fixup_onnx_controlflow`
* move the fixup to where the ONNX loop/if node is created, as opposed to running the fixup as a postpass. This will help with enabling ONNX shape inference later.
* move `fuseSequenceSplitConcat` to `Peephole`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40943

Reviewed By: mrshenli

Differential Revision: D22709999

Pulled By: bzinodev

fbshipit-source-id: 51d316991d25dc4bb4047a6bb46ad1e2401d3d2d
2020-08-03 22:33:17 -07:00
Yanan Cao
bdcf320bed Support custom exception message (#41907)
Summary:
Raise and assert used to have a hard-coded error message, "Exception"; the user-provided error message was ignored. This PR adds support for representing the user's error message in TorchScript.

This breaks backward compatibility because now we actually need to script the user's error message, which can potentially contain unscriptable expressions. Such programs can break when scripting, but saved models can still continue to work.

Increased an op count in test_mobile_optimizer.py because now we need aten::format to form the actual exception message.
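In plain Python, the newly supported shape looks like the following (the function name and message are illustrative); under this change, the scripted version preserves the formatted message instead of a fixed "Exception":

```python
def check_positive(x: int) -> int:
    # Scripting this now keeps the formatted message below
    # (via aten::format) instead of a hard-coded "Exception".
    if x <= 0:
        raise ValueError("expected a positive value, got {}".format(x))
    return x
```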

This is built upon an WIP PR:  https://github.com/pytorch/pytorch/pull/34112 by driazati

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41907

Reviewed By: ngimel

Differential Revision: D22778301

Pulled By: gmagogsfm

fbshipit-source-id: 2b94f0db4ae9fe70c4cd03f4048e519ea96323ad
2020-08-01 13:03:45 -07:00
Yanan Cao
655f376460 Implement Enum sugared value and Enum constant support (#42085)
Summary:
[3/N] Implement Enum JIT support

* Add enum value as constant support
* Add sugared value for EnumClass

Supported:
Enum-typed function arguments
using Enum type and comparing them
Support getting name/value attrs of enums
Using Enum value as constant

TODO:
Add PyThon sugared value for Enum
Support Enum-typed return values
Support serialization and deserialization

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42085

Reviewed By: eellison

Differential Revision: D22758042

Pulled By: gmagogsfm

fbshipit-source-id: 5c6e571686c0b60d7fbad59503f5f94b3b3cd125
2020-07-31 17:29:55 -07:00
Elias Ellison
2285a2fc11 refactor canonical ordering to also be able to do isAfter checks (#42140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42140

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D22798378

Pulled By: eellison

fbshipit-source-id: d1a549f43b28fe927729597818a46674c58fe81d
2020-07-31 15:11:40 -07:00
Will Constable
646042e0fb Add suggestion to enumerate ModuleDict in error message (#41946)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41946

Reviewed By: ngimel

Differential Revision: D22774243

Pulled By: wconstab

fbshipit-source-id: 5cfbe52b5b1c540f824593e67ae6ba4973458bb5
2020-07-27 16:24:00 -07:00
Shen Li
d4736ef95f Add done() API to Future (#42013)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42013

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D22729596

Pulled By: mrshenli

fbshipit-source-id: ed31021a35af6e2c3393b9b14e4572cf51013bc0
2020-07-24 14:13:41 -07:00
Yanan Cao
890b52e09f Reduce instability in runCleanUpPasses by reordering passes. (#41891)
Summary:
Currently, constant pooling runs before const propagation, which can create more constants that need pooling. This gets in the way of serialization/deserialization stability, because each time a user serializes and deserializes a module, runCleanUpPasses is called on it; doing so multiple times would lead to a different saved module.

This PR moves constant pooling after const propagation, which may slow down const propagation a little bit, but would otherwise side-step aforementioned problem.

test_constant_insertion in test_jit.py is also updated because after fixing the pass ordering, the number of constants is no longer a constant and it is extremely difficult to get the exact number with the current convoluted test structure. So for now, I changed the test to check only that CSE doesn't change number of "prim::constant" rather than comparing against a known number. Also left a TODO to improve this test.

The ConstantPropagation pass is replaced by ConstantPropagationImmutableTypes because the latter is used in runCleanUpPasses. If not replaced, the former would create new CSE opportunities by folding more constants, which defeats the purpose of the test case.
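A toy model of the ordering problem — a hypothetical mini-IR, not PyTorch's actual ConstantPooling/ConstantPropagation passes: running propagation after pooling can mint fresh constants that are never deduplicated, so the result is not a fixed point.

```python
# Toy mini-IR: each instruction is (dest, op, args); "const" holds one value.
def const_prop(instrs):
    """Fold add(c1, c2) into a new constant instruction."""
    consts = {d: a[0] for d, op, a in instrs if op == "const"}
    out = []
    for dest, op, args in instrs:
        if op == "add" and all(a in consts for a in args):
            folded = consts[args[0]] + consts[args[1]]
            out.append((dest, "const", (folded,)))
            consts[dest] = folded
        else:
            out.append((dest, op, args))
    return out

def const_pool(instrs):
    """Deduplicate constants with equal values, rewriting uses."""
    canonical, rename, out = {}, {}, []
    for dest, op, args in instrs:
        if op == "const":
            if args[0] in canonical:
                rename[dest] = canonical[args[0]]
                continue
            canonical[args[0]] = dest
        out.append((dest, op, tuple(rename.get(a, a) for a in args)))
    return out

prog = [("a", "const", (2,)), ("b", "const", (2,)),
        ("c", "add", ("a", "b")), ("d", "const", (4,))]

# Pooling first, then propagation: propagation folds the add into a fresh
# constant 4 that duplicates "d" -- pooling would have to run again.
bad = const_prop(const_pool(prog))
# Propagation first, then pooling: duplicates are merged in one sweep.
good = const_pool(const_prop(prog))
```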

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41891

Reviewed By: colesbury

Differential Revision: D22701540

Pulled By: gmagogsfm

fbshipit-source-id: 8e60dbdcc54a93dac111d81b8d88fb39387224f5
2020-07-24 11:39:20 -07:00
Ksenija Stanojevic
af5d0bff00 [ONNX] Add pass that fuses Conv and BatchNormalization (#40547)
Summary:
Adds a pass that fuses Conv and BatchNormalization nodes into a single Conv node.
This pass is only applied in inference mode (training is None or TrainingMode.Eval).
Since this pass needs access to param_dict, it is written outside the peephole file, where these kinds of passes (fusing multiple nodes into one) are usually placed.

This PR also adds a skipIfNoEmbed wrapper to skip the debug_embed_params test:
The pass that fuses Conv and BatchNorm changes the params of the ResNet model, so the parameters of the ONNX and PyTorch models won't match. Since the parameters don't match, the debug_embed_params test for test_resnet will fail, and that is expected; therefore the debug_embed_params test for test_resnet should be skipped.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40547

Reviewed By: gchanan

Differential Revision: D22631687

Pulled By: bzinodev

fbshipit-source-id: fe45812400398a32541e797f727fd8697eb6d8c0
2020-07-22 14:59:27 -07:00
Yanan Cao
4a3aad354a [1/N] Implement Enum JIT support (#41390)
Summary:
* Add EnumType and AnyEnumType as first-class jit type
* Add Enum-typed IValue
* Enhanced aten::eq to support Enum

Supported:
Enum-typed function arguments
using Enum type and comparing them

TODO:
Add PyThon sugared value for Enum
Support getting name/value attrs of enums
Support Enum-typed return values
Support enum values of different types in same Enum class
Support serialization and deserialization

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41390

Reviewed By: eellison

Differential Revision: D22524388

Pulled By: gmagogsfm

fbshipit-source-id: 1627154a64e752d8457cd53270f3d14aea4b1150
2020-07-18 22:15:06 -07:00
Meghan Lele
758edcd7df [JIT] Replace use of "blacklist" in python/init.cpp (#41456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41456

**Test Plan**
Continuous integration.

**Fixes**
This commit partially addresses #41443.

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D22544270

Pulled By: SplitInfinity

fbshipit-source-id: 649b30e1fcc6516a4def6b148a1da07bc3ce941d
2020-07-17 11:33:05 -07:00
Taewook Oh
44b9306d0a Export replaceAllUsesAfterNodeWith for PythonAPI (#41414)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41414

This diff exports replaceAllUsesAfterNodeWith to PythonAPI.

Test Plan: Tested locally. Please let me know if there is a set of unit tests to be passed outside of the default ones triggered by Sandcastle.

Reviewed By: soumith

Differential Revision: D22523211

fbshipit-source-id: 3f075bafa6208ada462abc57d495c15179a6e53d
2020-07-14 22:20:19 -07:00
Michael Suo
ca1b8ebbcb move misc implementation out of jit/__init__.py (#41154)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41154

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22445213

Pulled By: suo

fbshipit-source-id: 200545715c5ef13beb1437f49e01efb21498ddb7
2020-07-13 16:59:55 -07:00
Meghan Lele
ce3ba3b9bc [JIT] Add support for backend-lowered submodules (#41146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41146

**Summary**
This commit adds support for using `Modules` that have been lowered as
submodules in `ScriptModules`.

**Test Plan**
This commit adds execution and save/load tests to test_backends.py for
backend-lowered submodules.

**Fixes**
This commit fixes #40069.

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D22459543

Pulled By: SplitInfinity

fbshipit-source-id: 02e0c0ccdce26c671ade30a34aca3e99bcdc5ba7
2020-07-10 16:35:24 -07:00
Kimish Patel
8a79eec98a Add add_relu fusion pass to optimize_for_mobile. (#40252)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40252

As title says.

Test Plan:
python test/test_mobile_optimizer.py

Imported from OSS

Differential Revision: D22126825

fbshipit-source-id: a1880587ba8db9dee0fa450bc463734e4a8693d9
2020-07-10 08:10:22 -07:00
Kimish Patel
c5dcf056ee JIT pass for add relu fusion. (#39343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39343

Building on top of previous PR that adds fused add_relu op, this PR adds
a JIT pass to transform input graph to find all fusable instancs of add
+ relu and fuses them.

Test Plan:
python test/test_jit.py TestJit.test_add_relu_fusion

Imported from OSS

Differential Revision: D21822396

fbshipit-source-id: 12c7e8db54c6d70a2402b32cc06c7e305ffbb1be
2020-07-09 16:25:13 -07:00
Zino Benaissa
690946c49d Generalize constant_table from tensor only to ivalue (#40718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40718

Currently only constant except tensor must be inlined during serialization.
Tensor are stored in the contant table. This patch generalizes this capability
to any IValue. This is particularly useful for non ASCII string literal that
cannot be inlined.

Test Plan: Imported from OSS

Differential Revision: D22298169

Pulled By: bzinodev

fbshipit-source-id: 88cc59af9cc45e426ca8002175593b9e431f4bac
2020-07-09 09:09:40 -07:00
generatedunixname89002005287564
86f72953dd [Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D22452776

fbshipit-source-id: a103da6a5b1db7f1c91ca25490358da268fdfe96
2020-07-09 08:49:32 -07:00
Elias Ellison
3f32332ee6 [JIT][Easy]move remove mutation to own file (#41137)
Summary:
This should be in its own file...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41137

Reviewed By: jamesr66a

Differential Revision: D22437922

Pulled By: eellison

fbshipit-source-id: 1b62dde1a4ebac673b5c60aea4f398f734d62501
2020-07-08 17:00:35 -07:00
Brian Vaughan
dfd21ec00d Revert D22418716: [JIT] Add support for backend-lowered submodules
Test Plan: revert-hammer

Differential Revision:
D22418716 (6777ea19fe)

Original commit changeset: d2b2c6d5d2cf

fbshipit-source-id: 5ce177e13cab0be60020f8979f9b6c520cc8654e
2020-07-08 13:14:21 -07:00
Michael Suo
c93e96fbd9 [jit] move script-related implementation out of torch/jit/__init__.py (#40902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40902

See the bottom of this stack for context.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D22360210

Pulled By: suo

fbshipit-source-id: 4275127173a36982ce9ad357aa344435b98e1faf
2020-07-08 11:38:34 -07:00
Meghan Lele
6777ea19fe [JIT] Add support for backend-lowered submodules (#40841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40841

**Summary**
This commit adds support for using `Modules` that have been lowered as
submodules in `ScriptModules`.

**Test Plan**
This commit adds execution and save/load tests to test_backends.py for
backend-lowered submodules.

**Fixes**
This commit fixes #40069.

Test Plan: Imported from OSS

Differential Revision: D22418716

Pulled By: SplitInfinity

fbshipit-source-id: d2b2c6d5d2cf3042a620b3bde7d494f1abe28dc1
2020-07-07 21:00:40 -07:00
Elias Ellison
37a572f33e fix grad thrashing of shape analysis (#40939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40939

Previously, when we did shape analysis by running the op with representative inputs, we would always set the grad property to false. This led to incorrect static analysis when we created differentiable subgraphs, propagated shapes without also propagating requires_grad, and then uninlined them.

Test Plan: Imported from OSS

Differential Revision: D22394676

Pulled By: eellison

fbshipit-source-id: 254e6e9f964b40d160befe0e125abe1b7aa2bd5e
2020-07-06 17:12:13 -07:00
James Reed
c0f9bf9bea s/torch::jit::class_/torch::class_/ (#40795)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40795

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D22314215

Pulled By: jamesr66a

fbshipit-source-id: a2fb5c6804d4014f8e437c6858a7be8cd3efb380
2020-07-06 15:53:33 -07:00
Will Constable
8ecd4f36aa fix __len__, __contains__, getitem inherited from interface class derived from nn container (closes #40603) (#40789)
Summary:
Define a static script implementation of __len__ and __contains__ on any subclass derived from a type such as ModuleList, Sequential, or ModuleDict. Implement __getitem__ for classes derived from ModuleDict.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40789

Reviewed By: eellison

Differential Revision: D22325159

Pulled By: wconstab

fbshipit-source-id: fc1562c29640fe800e13b5a1dd48e595c2c7239b
2020-07-04 15:45:18 -07:00
Nikolay Korovaiko
8223858cc1 shape inference of undefined for prim::grad (#40866)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40866

Reviewed By: pbelevich

Differential Revision: D22358988

Pulled By: Krovatkin

fbshipit-source-id: 7118d7f8d4eaf056cfb71dc0d588d38b1dfb0fc7
2020-07-04 14:10:22 -07:00
peter
c71ec1c717 Fix zip serialization for file > 2GiB for Windows (#40783)
Summary:
`long long == int64_t != long` in MSVC
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40783

Differential Revision: D22328757

Pulled By: ezyang

fbshipit-source-id: bc7301d6b0e7e00ee6d7ca8637e3fce7810b15e2
2020-07-01 08:15:27 -07:00
Yanghan Wang
5923a802fa Back out "[pytorch][PR] [ONNX] Add eliminate_unused_items pass"
Summary:
Original commit changeset: 30e1a6e8823a

cause issue to fusing BN

Test Plan: revert

Reviewed By: houseroad

Differential Revision: D22296958

fbshipit-source-id: 62664cc77baa8811ad6ecce9d0520a2ab7f89868
2020-06-30 10:26:35 -07:00
Sebastian Messmer
53af9df557 Unify boxed function signature between jit and c10 (#37034)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37034

c10 takes a Stack* in boxed functions while JIT took Stack&.
c10 doesn't return anything while JIT returns an int which is always zero.

This changes JIT to follow the c10 behavior.
ghstack-source-id: 106834069

Test Plan: unit tests

Differential Revision: D20567950

fbshipit-source-id: 1a7aea291023afc52ae706957e9a5ca576fbb53b
2020-06-29 19:24:26 -07:00
James Reed
320164f878 Fix zip serialization for file > 2GiB (#40722)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40722

Test Plan: Imported from OSS

Differential Revision: D22294016

Pulled By: jamesr66a

fbshipit-source-id: 0288882873d4b59bdef37d018c030519c4be7f03
2020-06-29 19:17:06 -07:00
Kimish Patel
4a174c83ca Add option to preserve certain methods during optimize_for_mobile. (#40629)
Summary:
By default, the freeze_module pass, invoked from optimize_for_mobile,
preserves only the forward method. There is an option to specify a list of
methods that should be preserved during freeze_module. This PR exposes that
option to the optimize_for_mobile pass.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40629

Test Plan: python test/test_mobile_optimizer.py

Reviewed By: dreiss

Differential Revision: D22260972

Pulled By: kimishpatel

fbshipit-source-id: 452c653269da8bb865acfb58da2d28c23c66e326
2020-06-29 09:32:53 -07:00
Ksenija Stanojevic
547ea787ff [ONNX] Add eliminate_unused_items pass (#38812)
Summary:
This PR:

- Adds eliminate_unused_items pass that removes unused inputs and initializers.
- Fixes run_embed_params function so it doesn't export unnecessary parameters.
- Removes  test_modifying_params in test_verify since it's no longer needed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38812

Reviewed By: ezyang

Differential Revision: D22236416

Pulled By: houseroad

fbshipit-source-id: 30e1a6e8823a7e36b51ae1823cc90476a53cd5bb
2020-06-25 22:00:26 -07:00
Will Constable
d855528186 wconstab/38034-sliced-sequential (#40445)
Summary:
Partial support for slicing of Sequential containers.

- works around missing Sequential slice functionality by converting to tuple
- only supports iteration of the resulting tuple values, not direct call() on the sliced Sequential
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40445

Differential Revision: D22192469

Pulled By: wconstab

fbshipit-source-id: 61c85deda2d58f6e3bea2f1fa1d5d5dde568b9b5
2020-06-24 09:05:51 -07:00
Jerry Zhang
cbd53bfee8 [jit] Remove unnecessary clone APIs for script::Module and RecursiveScriptModule (#40297)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40297

Test Plan: Imported from OSS

Differential Revision: D22191660

fbshipit-source-id: 4b338ca82caaca04784bffe01fdae3d180c192f4
2020-06-23 16:03:22 -07:00
Meghan Lele
5fce7137a9 [WIP][JIT] Add ScriptModule._reconstruct (#39979)
Summary:
**Summary**
This commit adds an instance method `_reconstruct` that permits users
to reconstruct a `ScriptModule` from a given C++ `Module` instance.

**Testing**
This commit adds a unit test for `_reconstruct`.

**Fixes**
This pull request fixes https://github.com/pytorch/pytorch/issues/33912.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39979

Differential Revision: D22172323

Pulled By: SplitInfinity

fbshipit-source-id: 9aa6551c422a5a324b822a09cd8d7c660f99ca5c
2020-06-23 14:42:27 -07:00
Elias Ellison
f000b44d89 Fork/Join Inline Docs (relanding) (#40438)
Summary:
Added fork/wait to docs/source/jit.rst, hopefully that will fix test error.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40438

Differential Revision: D22188152

Pulled By: eellison

fbshipit-source-id: c19277284455fb6e7c0138b0c1423d90b147d18e
2020-06-23 13:25:51 -07:00
Jerry Zhang
f652abc1dd [jit] Enable copy.deepcopy and copy.copy for RecursiveScriptModule (#32685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32685

att

Test Plan:
.

Imported from OSS

Differential Revision: D21220755

fbshipit-source-id: 5c71e9bb9f43032cf60563a9e67579118a8d7e33
2020-06-23 09:21:12 -07:00
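The copy support added in the commit above boils down to implementing Python's `__copy__`/`__deepcopy__` protocol on a wrapper class. A minimal pure-Python sketch (the names `Wrapper` and `_c` are illustrative, not PyTorch's actual internals):

```python
import copy

class Wrapper:
    """Toy stand-in for a module wrapper that owns an underlying object."""
    def __init__(self, inner):
        self._c = inner  # underlying object (a dict here, for illustration)

    def __copy__(self):
        # Shallow copy: share the underlying object.
        new = Wrapper.__new__(Wrapper)
        new._c = self._c
        return new

    def __deepcopy__(self, memo):
        # Deep copy: recursively clone the underlying object.
        new = Wrapper.__new__(Wrapper)
        new._c = copy.deepcopy(self._c, memo)
        return new

m = Wrapper({"weight": [1, 2, 3]})
shallow = copy.copy(m)
deep = copy.deepcopy(m)
shallow._c["weight"][0] = 99      # visible through m, not through deep
print(m._c["weight"][0], deep._c["weight"][0])  # 99 1
```

`copy.copy` and `copy.deepcopy` dispatch to these dunder methods automatically, which is what enables the stdlib `copy` module to work on RecursiveScriptModule.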
Zhang, Xiaobing
87c5f02f3d jit: Conv3d + BatchNorm3d fusion (#40082)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40082

Differential Revision: D22120340

Pulled By: jerryzh168

fbshipit-source-id: fce6c5f03fe7ab6c60620cbdf547d5a466a470e3
2020-06-22 11:15:52 -07:00
Ivan Kobzarev
3852215170 [vulkan] jit passes for vulkan conv2 prepack and fuse with clamp (#39282)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39282

Test Plan: Imported from OSS

Differential Revision: D21962424

Pulled By: IvanKobzarev

fbshipit-source-id: 2d20e827d2c3836b7e6b443293377c68dc1ffa5a
2020-06-20 14:12:21 -07:00
Shen Li
4463f59c2c Let torch.futures.wait_all re-throw errors (#40291)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40291

Test Plan: Imported from OSS

Differential Revision: D22141702

Pulled By: mrshenli

fbshipit-source-id: 50b5e5c687e87930aef3a50cc40839729a4eb9c6
2020-06-19 15:32:56 -07:00
Lu Fang
8315bb2359 Back out "[pytorch][PR] [JIT] Infer NamedTuple type attributes of nn.Modules correctly" (#40270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40270

Original commit changeset: 1227e243ab94

D22082806 (1e03d603c6) broke model generation for PyPer models. We trace the namedtuple as input. To unblock development of the PyPer project, let's revert the diff first.

Sorry about the inconvenience, SplitInfinity
ghstack-source-id: 106217609

Test Plan: buck run dper3/dper3_models/experimental/pytorch/feed:feed_generation_script -- --model_files_dir=/tmp/

Reviewed By: alyssawangqq

Differential Revision: D22132960

fbshipit-source-id: ce9278c8462602a341e231ea890e46f74e743ddf
2020-06-19 02:58:31 -07:00
Meghan Lele
d58b8222b7 [JIT] Add support for with statements (#34705)
Summary:
**Summary**
This commit adds support for with statements to PyTorch JIT. Each
of the with items in a with statement is represented in the JIT IR
as a pair of `prim::Enter` and `prim::Exit` nodes that call the
`__enter__` and `__exit__` methods defined on the context manager objects
returned by the expressions in the with item.

**Testing**
This commit adds unit tests for with statements with named with items,
nameless with items, and with statements that encounter exceptions.
```
$ python test/test_jit.py TestWith.test_with_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.430s

OK
```

```
$ python test/test_jit.py TestWith.test_with_no_as
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.264s

OK
```

```
$ python test/test_jit.py TestWith.test_with_exceptions
Fail to import hypothesis in common_utils, tests are not derandomized
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 1.053s

OK
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34705

Differential Revision: D22095945

Pulled By: SplitInfinity

fbshipit-source-id: f661565a834786725259b8ea014b4d7532f9419d
2020-06-18 16:57:18 -07:00
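The `prim::Enter`/`prim::Exit` nodes described above mirror Python's own context-manager protocol. A minimal pure-Python sketch of the semantics being compiled (the `Tracker` class is illustrative):

```python
class Tracker:
    """Toy context manager recording __enter__/__exit__ calls,
    mirroring what prim::Enter and prim::Exit nodes invoke."""
    def __init__(self):
        self.events = []
    def __enter__(self):
        self.events.append("enter")
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.events.append("exit")
        return False  # do not swallow exceptions

t = Tracker()
with t as cm:                   # roughly: prim::Enter(t) ... prim::Exit(t)
    cm.events.append("body")
print(t.events)  # ['enter', 'body', 'exit']

# __exit__ runs even when the body raises, matching the exception test case.
t2 = Tracker()
try:
    with t2:
        raise ValueError("boom")
except ValueError:
    pass
print(t2.events)  # ['enter', 'exit']
```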
Xingying Cheng
0b3755b1d0 Add optimization blacklist as second arg to optimizeForMobile method. (#37462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37462

Instead of running all the optimization passes in the optimizeForMobile method,
this introduces a whitelist optimizer dictionary as a second parameter.
When it is not passed, the method runs all the optimization
passes; otherwise, the method reads the dict and only runs the passes whose
value is True.
ghstack-source-id: 106104503

Test Plan:
python test/test_mobile_optimizer.py

Imported from OSS

Differential Revision: D22096029

fbshipit-source-id: daa9370c0510930f4c032328b225df0bcf97880f
2020-06-17 18:14:45 -07:00
James Reed
b5bf21a6bd [JIT] Expose __deepcopy__ on script::Object (#40068)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40068

Test Plan: Imported from OSS

Differential Revision: D22058808

Pulled By: jamesr66a

fbshipit-source-id: d8593b047c553389caea085337305ee893dc6877
2020-06-17 14:02:28 -07:00
Meghan Lele
1e03d603c6 [JIT] Infer NamedTuple type attributes of nn.Modules correctly (#39116)
Summary:
**Summary**
This commit modifies type inference for `nn.Module` instance attributes
such that the type of a `NamedTuple` attribute is inferred correctly and
such that the field names of this `NamedTuple` instance can be used in
scripted methods. At present, the type of this attribute is inferred to be
`Tuple[T, U, ..., V]`, so the field must be referred to by index and
cannot be referred to by name.

**Test Plan**
This commit adds a unit test to test that a field of a `NamedTuple`
attribute can be referred to by name in a scripted method.

**Fixes**
This commit fixes https://github.com/pytorch/pytorch/issues/37668.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39116

Differential Revision: D22082806

Pulled By: SplitInfinity

fbshipit-source-id: 1227e243ab941376cd5e382fb093751e88dc8846
2020-06-17 13:58:15 -07:00
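The difference the commit above makes can be seen with plain Python typing: inferring `Tuple[T, U]` loses field names, while a proper `NamedTuple` type keeps them. An illustrative example (the `Point` class is hypothetical):

```python
from typing import NamedTuple

class Point(NamedTuple):
    x: float
    y: float

p = Point(1.0, 2.0)

# Inferred as a NamedTuple: fields are accessible by name.
print(p.x, p.y)    # 1.0 2.0

# Under the old Tuple[float, float] inference, only positional
# access would have worked in a scripted method:
print(p[0], p[1])  # 1.0 2.0
```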
Linbin Yu
7021635d61 fix more duplicated names (#40062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40062

fix duplicated op names after D21992552

Test Plan: build

Reviewed By: iseeyuan

Differential Revision: D22056588

fbshipit-source-id: 6d2fcf16b5b86b30b6ac7a4107b20c8cfb6816b0
2020-06-16 11:47:05 -07:00
Jerry Zhang
ec1833bc3c Revert D22069566: Revert D22013026: [quant][graphmode] Pass debug option into insert_quant_dequant pass
Test Plan: revert-hammer

Differential Revision:
D22069566

Original commit changeset: 6230bc806089

fbshipit-source-id: 930490ab0b6a017c949445620e7c6b7056693998
2020-06-16 11:37:33 -07:00
Christian Puhrsch
305921734a Revert D22013026: [quant][graphmode] Pass debug option into insert_quant_dequant pass
Test Plan: revert-hammer

Differential Revision:
D22013026

Original commit changeset: 714b938f25c1

fbshipit-source-id: 6230bc8060892e6485159ca88cc3ad49217791a2
2020-06-16 09:44:04 -07:00
Jerry Zhang
ee5ad6ce25 [quant][graphmode] Pass debug option into insert_quant_dequant pass (#39915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39915

Some usages, e.g. add_scalar, will not support the debug option;
that is, we will not have a numerically exact representation of the final quantized model
before finalize if add_scalar is used.
A warning will be added in a later PR.

Test Plan: Imported from OSS

Differential Revision: D22013026

fbshipit-source-id: 714b938f25c10fad3dfc79f095356b9803ef4b47
2020-06-16 08:14:50 -07:00
Shihao Xu
00651b8c93 [distribtued.nn] Implement TorchScript-compatible RemoteModule API (#37139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37139

See design doc in https://github.com/pytorch/pytorch/issues/37136

ghstack-source-id: 105926270

Test Plan:
TODO:

- Make the generated Interface usable. https://github.com/pytorch/pytorch/pull/37139#discussion_r434190978
-
- Avoid generating the same template instances for Module that is not scriptable.
- Remove "infer_module_interface_cls".
- Use Python format instead of a CodeTemplate
- Use Python tempfile to track and delete file. Does it work if there is crash.

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_scripted_remote_module_template

buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \
buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_non_scripted_remote_module_template
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_spawn
```

```
buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_async_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_sync_script

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_with_kwargs

buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork
```

buck test mode/opt-asan //caffe2/test:jit -- 'test_script_forward_method_replacement

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_script_forward_method_replacement'

buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r 'test_imported_classes'

Differential Revision: D20499658

fbshipit-source-id: dd9383ae4eb2343366c11127664f845b91ca3b0a
2020-06-15 19:07:35 -07:00
Jeremy Lilley
0c25428597 [futures] Reland: Add torch.futures.collect_all()/wait_all() python api. (#39964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39964

The "[fut.wait() for fut in futs]" idiom can introduce up to
O(len(futs)) thread switches, which may be excessive for large N.

This plumbs through the new c++ c10::collectAll() to Python space
so that we only employ a single jit-side wait.

Test Plan: buck test mode/dev-nosan caffe2/test/distributed/rpc:rpc_spawn

Differential Revision: D22027412

fbshipit-source-id: 4e344a19a09638ee46e7fc478df80a41941b84ce
2020-06-15 14:07:12 -07:00
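The idea above, replacing N sequential waits with a single aggregated wait, can be sketched with the standard library's `concurrent.futures` (an analogy to `c10::collectAll()`, not the torch.futures implementation):

```python
import concurrent.futures as cf
import threading

def collect_all(futs):
    """Return a future that completes once all input futures complete.
    Sketch of the collectAll idea using stdlib futures."""
    done = cf.Future()
    lock = threading.Lock()
    remaining = [len(futs)]
    def on_complete(_):
        with lock:
            remaining[0] -= 1
            last = remaining[0] == 0
        if last:
            # One wake-up instead of O(len(futs)) sequential waits.
            done.set_result([f.result() for f in futs])
    for f in futs:
        f.add_done_callback(on_complete)
    return done

with cf.ThreadPoolExecutor(max_workers=2) as pool:
    futs = [pool.submit(lambda i=i: i * i) for i in range(4)]
    results = collect_all(futs).result()
print(results)  # [0, 1, 4, 9]
```

The caller blocks on a single `result()` call rather than iterating `[fut.wait() for fut in futs]`.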
Xingying Cheng
bdef721caf [fbcode] Add find_method into lite interpreter python binding.
Summary: Add 'find_method' to the 'LiteScriptModule' Python binding methods, so that we can use it to check for the existence of methods, e.g. "get_all_bundled_inputs".

Reviewed By: linbinyu, houseroad

Differential Revision: D22029002

fbshipit-source-id: 9acf76880fc989e825dc3a9186dab6928caee75e
2020-06-13 07:48:13 -07:00
Mike Ruberry
8bc821f0d0 Revert D21976891: [futures] Add torch.futures.collect_all()/wait_all() python api.
Test Plan: revert-hammer

Differential Revision:
D21976891

Original commit changeset: 253c61f503f4

fbshipit-source-id: f839b16f4469e96325b607b6313a1397e1988856
2020-06-12 13:40:37 -07:00
Jeremy Lilley
a9aa6367c2 [futures] Add torch.futures.collect_all()/wait_all() python api. (#39790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39790

The "[fut.wait() for fut in futs]" idiom can introduce up to
O(len(futs)) thread switches, which may be excessive for large N.

This plumbs through the new c++ c10::collectAll() to Python space
so that we only employ a single jit-side wait.
ghstack-source-id: 105779443

Test Plan: buck test mode/dev-nosan caffe2/test/distributed/rpc:rpc_spawn

Reviewed By: kiukchung

Differential Revision: D21976891

fbshipit-source-id: 253c61f503f4ffb9be784e6c49a0656cede139fb
2020-06-12 12:36:04 -07:00
Yanan Cao
c068233300 Add CHECK-SOURCE-HIGHLIGHTED to file check utils. (#39692)
Summary:
Enhance FileCheck util to check for highlighted source ranges. This is useful when writing tests regarding generated error messages that require source code highlighting.

Here is how the error looks in different cases:

- In case of needed source code token not found at all in input string:
```
RuntimeError: Expected to find "invalid_token" but did not find it
Searched string:

...  <--- HERE
def to_list_missing_type_annotation(x):
    # type: (torch.Tensor) -> List[float]
From CHECK-SOURCE-HIGHLIGHTED: invalid_token
```

- In case of source code token not highlighted:
```
Traceback (most recent call last):
  File "test_range.py", line 11, in <module>
    FileCheck().check_source_highlighted("x.tolist()").run(s)
RuntimeError: Expected to find "~~~~~~~~~~" but did not find it
Searched string:
    # type: (torch.Tensor) -> List[float]
    li = x.tolist()
         ~~~~~~~~~ <--- HERE
         ~~~~~~~~~~~~~~~~~~~...  <--- HERE
    return li
```

It is a bit confusing since both the input text (usually an error message) and the generated error messages have their own highlighted portions, but this is consistent with previous behavior. Another option is to generate plain error messages without additional range highlighting on the input text.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39692

Test Plan:
Added unit test.

Closes https://github.com/pytorch/pytorch/issues/38698

Differential Revision: D22001765

Pulled By: gmagogsfm

fbshipit-source-id: 6681441eee5853ab061d198ccfe55ebffddca202
2020-06-11 23:47:07 -07:00
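The mechanics of the check above (find the token, then verify the following line underlines it with `~` characters) can be sketched in pure Python (illustrative, not the actual FileCheck implementation):

```python
def check_source_highlighted(text, token):
    """Return True iff `token` occurs in `text` and the following line
    underlines exactly that column range with '~' characters."""
    lines = text.splitlines()
    for i, line in enumerate(lines[:-1]):
        col = line.find(token)
        if col == -1:
            continue
        underline = lines[i + 1]
        if underline[col:col + len(token)] == "~" * len(token):
            return True
    return False

# Reconstructed example in the style of the error output above.
msg = "    li = x.tolist()\n" + " " * 9 + "~" * 10 + " <--- HERE\n"
print(check_source_highlighted(msg, "x.tolist()"))  # True
print(check_source_highlighted(msg, "return li"))   # False
```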
Vasiliy Kuznetsov
5d2f6d86e5 graph mode: add quantization type enum (#39795)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39795

Replaces the `is_dynamic` bool with enums in the Python and C++
graph quantization code. This makes the code more readable
and will make it easier to modify when adding QAT logic in the future.

Test Plan:
CI, as well as
```
python test/test_quantization.py TestQuantizeDynamicScript
python test/test_quantization.py TestQuantizeScriptJitPasses
```

Imported from OSS

Differential Revision: D21981643

fbshipit-source-id: d475760407bcc794aeae92a2c696bac4acda843d
2020-06-10 21:34:23 -07:00
Yanan Cao
c22bbb2124 [JIT] Add Type::repr_str to return human-readable str (#39544)
Summary:
Clearly expressing that a type was inferred by PyTorch rather than explicitly annotated by the user makes many error messages more user-friendly.

Currently, Type has two string conversion methods: str() for IR printing and python_str() for serialization and error message generation. If we want to include more information in type printing while maintaining serialization/deserialization correctness, we need to split python_str() into annotation_str() and repr_str().

annotation_str() is solely responsible for serialization; it strictly matches the format of a Python type annotation. repr_str() is responsible for generating a human-readable error message that includes information like "this type is inferred, not explicitly annotated".

Closes https://github.com/pytorch/pytorch/issues/39449
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39544

Differential Revision: D21978759

Pulled By: gmagogsfm

fbshipit-source-id: 733566f5a62e748b5ca4bb3c5943ebb6d5b664d0
2020-06-10 12:01:24 -07:00
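The split described above can be illustrated with a toy type class: annotation_str must round-trip as a Python annotation, while repr_str may append human-only detail (a sketch; the real API lives on c10::Type in C++):

```python
class ListType:
    """Toy type object with the annotation_str/repr_str split."""
    def __init__(self, element, inferred=False):
        self.element = element
        self.inferred = inferred

    def annotation_str(self):
        # Strict: must parse back as a Python type annotation.
        return f"List[{self.element}]"

    def repr_str(self):
        # Human-readable: may carry extra provenance information.
        s = self.annotation_str()
        if self.inferred:
            s += " (inferred)"
        return s

lt = ListType("int", inferred=True)
print(lt.annotation_str())  # List[int]
print(lt.repr_str())        # List[int] (inferred)
```

Serialization uses only `annotation_str()`, so the extra "(inferred)" note can never corrupt a saved module.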
Zino Benaissa
9111ae7782 Preserve user specified attributes and methods (#38830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38830

This patch makes it possible to preserve user-specified attributes or non-forward
methods. The API:
  _freeze_module(Module, ["a", "version"])

Test Plan: Imported from OSS

Differential Revision: D21957316

Pulled By: bzinodev

fbshipit-source-id: 5c9146ae679791070a9de868c45785725b48a9e6
2020-06-10 01:38:18 -07:00
Jerry Zhang
9551fb22d6 [quant][graphmode] Preserve numerics in debug option for clamp ops (#39219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39219

We didn't model clamp ops correctly right now, this PR fixes that.

Reason is quantized clamp op quantizes the scalar arguments in the op implementation: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L614-L617

So we'll need to model this explicitly in the IR.
When we see a `aten::dequantize - aten::clamp(%x, %min, %max)`
we first make a scalar tensor with `aten::scalar_tensor(%scalar, ...)`, then we quantize the tensor with the same quantization parameters from the input tensor of the `aten::clamp`, dequantize the tensor, then convert the dequantized tensor to scalar using `aten::item`.

Test Plan: Imported from OSS

Differential Revision: D21831350

fbshipit-source-id: d60731459a0465d64946aabc62065d25d92faefc
2020-06-08 17:15:39 -07:00
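The numeric effect being modeled above, where the clamp bounds themselves pass through quantize/dequantize with the input's parameters, can be sketched with plain arithmetic (an illustration, not the quantized kernel):

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Affine quantization of a single value to the integer grid.
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# Quantization parameters of the clamp's input tensor.
scale, zp = 0.1, 0

# The scalar clamp bound is quantized/dequantized with the SAME
# parameters, so the effective bound snaps to the quantization grid.
min_bound = 0.123
effective_min = dequantize(quantize(min_bound, scale, zp), scale, zp)
print(effective_min)  # 0.1, not 0.123
```

Modeling this in the IR keeps the "debug" (fake-quantized) model numerically identical to the finalized quantized model.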
davidriazati
da8191a9ad Remove useless copy on zip file load (#36362)
Summary:
Instead of copying to a buffer, then setting a tensor's storage with that buffer, create a storage directly from the file

Pull Request resolved: https://github.com/pytorch/pytorch/pull/36362

Pulled By: driazati

Differential Revision: D21889537

fbshipit-source-id: edbd430073c2bbf52332fe7b3b2590e7d936dedf
2020-06-04 16:59:54 -07:00
Elias Ellison
49b69b2ade [JIT] fix broadcasting lists of ints (#39481)
Summary:
Previously, on conversion from Python to C++ it was cast to a double list through bad copy-pasta. It's pretty unusual for someone to script a broadcasting list function directly since it's an internal API, so this was unlikely to affect anyone.

Fix for https://github.com/pytorch/pytorch/issues/39450
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39481

Reviewed By: jamesr66a

Differential Revision: D21870557

Pulled By: eellison

fbshipit-source-id: e704e5e87d2702a270b7d65c4df444246a134480
2020-06-04 12:16:41 -07:00
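What broadcasting lists do, expanding a single scalar into an N-element list while preserving the element type, can be sketched in pure Python (illustrative; the real conversion happens at the Python-to-C++ boundary):

```python
def broadcast_int_list(x, n):
    """Sketch of BroadcastingList semantics for ints.
    The bug: a single int was wrongly converted to a list of doubles."""
    if isinstance(x, int):
        return [x] * n            # correct: stays a list of ints
    return [int(v) for v in x]

print(broadcast_int_list(3, 2))       # [3, 3]
print(broadcast_int_list([1, 2], 2))  # [1, 2]
# The pre-fix behavior would have produced [3.0, 3.0] for the first call.
```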
Xingying Cheng
adc13432fe Enabling lite interpreter in torch python API (#39181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39181

Create a Python binding class torch._C.LiteScriptModule for mobile::Module; a Python class called LiteScriptModule is created which wraps torch._C.LiteScriptModule.
The Python class LiteScriptModule contains preliminary functions including forward, run_method and __call__.

Create a Python API "load_for_lite_interpreter" under torch.jit.mobile which takes a pre-saved mobile module in a file-like object as input and returns the Python class LiteScriptModule.

Add a Python binding method "_save_to_buffer_for_mobile" under ScriptModule, and a Python method "_save_to_buffer_for_lite_interpreter" under RecursiveScriptModule which saves the mobile module into a buffer instead of a file.
ghstack-source-id: 105215736

Test Plan: buck test caffe2/test:mobile

Differential Revision: D21757474

fbshipit-source-id: 758b87497d65c4686459a567d41887c7a577aa4c
2020-06-03 18:33:23 -07:00
Shen Li
bb0377bb24 Expose torch.futures.Future (#39008)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39008

This commit adds a `torch.futures.Future` type and exposes its ctor,
`wait`, `then`, and `set_result` APIs. This type is currently a
wrapper of `c10::ivalue::Future` and mainly used by RPC for now. Later,
we could revamp c10d APIs to return this `Future` type as well. More
utils will be added into `torch.futures` package in followup PRs.

Test Plan: Imported from OSS

Differential Revision: D21723022

Pulled By: mrshenli

fbshipit-source-id: 92e56160544e9bf00d11db3e8347a1b9707882c9
2020-06-02 10:12:56 -07:00
Jie
07518e120b [nvFuser] add torch.jit.fuser context manager (#38993)
Summary:
1. `torch.jit.fuser(str)` context manager facilitates switch between backend fusers:
  str - 'fuser0' enables only legacy fuser;
  str - 'fuser1' enables only NNC;
  str - 'fuser2' enables only nvFuser;
2. cleanup updated python tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38993

Reviewed By: nairbv, pbelevich

Differential Revision: D21800620

Pulled By: soumith

fbshipit-source-id: 7fe855f5a5b97368e5e84c98c28d04b2e1276c85
2020-06-01 10:52:40 -07:00
Jerry Zhang
85d0292c14 [quant][graphmode] Cleanup inplace API (#38827)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38827

Test Plan: Imported from OSS

Differential Revision: D21673481

fbshipit-source-id: becca38efcf720089407c981419b33f629a33e91
2020-05-29 11:13:25 -07:00
Kimish Patel
bb12e4dca0 Add JIT fusion pass to fuse quantized add and relu. (#38897)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38897

Quantized ops support add_relu. This pass finds the quantized add + relu
pattern and fuses it into add_relu.

Test Plan: buck run caffe2/test:quantization -- test_quantization.TestFusionPasses

Reviewed By: jerryzh168

Differential Revision: D21690909

fbshipit-source-id: 607cf72dde535df15eb7638841543ab2156af464
2020-05-27 14:16:57 -07:00
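Pattern fusion over a linear sequence of ops can be sketched as a single rewrite pass (a toy list-based sketch with illustrative op names; the real pass works on the JIT graph IR):

```python
def fuse_add_relu(ops):
    """Rewrite adjacent ['quantized::add', 'aten::relu'] pairs into
    a single 'quantized::add_relu' op."""
    fused = []
    i = 0
    while i < len(ops):
        if (i + 1 < len(ops)
                and ops[i] == "quantized::add"
                and ops[i + 1] == "aten::relu"):
            fused.append("quantized::add_relu")
            i += 2  # consume both ops of the matched pattern
        else:
            fused.append(ops[i])
            i += 1
    return fused

print(fuse_add_relu(["quantized::conv2d", "quantized::add", "aten::relu"]))
# ['quantized::conv2d', 'quantized::add_relu']
```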
Elias Ellison
f90dc741eb [JIT] Normalize op aliases (#38735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38735

Follow up to my comment https://github.com/pytorch/pytorch/pull/36597/#issuecomment-613674329

This adds a pass to convert op aliases into a normalized form. Having two ops in our IR that do the same thing makes the IR harder to work with for downstream consumers, such as TorchScript passes but also ONNX, Glow, etc.

Another solution would have been to fix our code generation to only emit `aten::abs` from the start. This seems trickier, and doesn't really buy us much if we still have to expose `aten::absolute` in C++, as glaringlee of the C++ API thinks we should.

Bike shedding: maybe this should be `CanonicalizeOps` instead

Test Plan: Imported from OSS

Differential Revision: D21673108

Pulled By: eellison

fbshipit-source-id: c328618907de1af22e07f57fd27fa619978c2817
2020-05-21 21:47:17 -07:00
Elias Ellison
5183e3aa16 [JIT] Rename canonicalize ops (#38734)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38734

As far as I can tell, this pass only exists to canonicalize ops that are generated in the graph fuser, so it's kind of a misnomer.

Test Plan: Imported from OSS

Differential Revision: D21673109

Pulled By: eellison

fbshipit-source-id: b7bedf34ccaf1fcd442bfb2bbb990e64915f51d4
2020-05-21 21:45:15 -07:00
Nikita Shulga
4c0bf93a0e Revert D21057090: Remove useless copy on zip file load
Test Plan: revert-hammer

Differential Revision:
D21057090

Original commit changeset: e3d30a3b09f4

fbshipit-source-id: b24cbe77aae38b321882e7dcf41022710ee28ed0
2020-05-21 19:34:18 -07:00
davidriazati
455bf77da5 Remove useless copy on zip file load (#36362)
Summary:
Instead of copying to a buffer, then setting a tensor's storage with that buffer, create a storage directly from the file
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36362

Pulled By: driazati

Differential Revision: D21057090

fbshipit-source-id: e3d30a3b09f4d67bf4bb7a0dd7f4f60c3dd1a47e
2020-05-21 18:57:06 -07:00
Jerry Zhang
a8d8fc5532 [quant][graphmode] Different rule for add/add_/mul/mul_ (#38667)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38667

Test Plan: Imported from OSS

Differential Revision: D21633555

fbshipit-source-id: 03b0298e83bf4dbda41b048c0edc7bb92cd4e1df
2020-05-20 19:43:46 -07:00
Will Constable
6fd48e24f1 Add support, test for kwargs in jit._fork (#38357) (#38665)
Summary:
Closing 38357
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38665

Reviewed By: suo

Differential Revision: D21643697

Pulled By: wconstab

fbshipit-source-id: c292c037f87bc2bb69a4ca163d7107d5396c53a2
2020-05-19 13:02:46 -07:00
Michael Voznesensky
f6f1384811 [JIT] Refactor attributes to support buffers and parameters as first class citizens, add support for iterating over named_buffers() (#37905)
Summary:
First part of https://github.com/pytorch/pytorch/issues/36211 - still a WIP, but asking for commentary to ensure this is the direction we want to go in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37905

Differential Revision: D21633735

Pulled By: voznesenskym

fbshipit-source-id: f4e4302e40114513776c9e48867a90d72049e2e9
2020-05-18 23:23:43 -07:00
James Reed
db86c8c6f5 Test BC for built-in torchbind methods (#38560)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38560

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D21598067

Pulled By: jamesr66a

fbshipit-source-id: 26a0e92a5c2883326be261cf84b7e916ebfd60d8
2020-05-15 19:06:59 -07:00
Michael Voznesensky
960f4b51e3 [JIT] Fix @staticmethod access from self on modules (#37702)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30755
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37702

Differential Revision: D21389989

Pulled By: voznesenskym

fbshipit-source-id: f9b7e26a9eab7dc3d7762a5a28f85424dac5fbb3
2020-05-14 21:12:10 -07:00
David Reiss
6d642a6f6c Remove (most) Python 2 support from C++ code (#35614)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35614

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well.

Test Plan: CI

Differential Revision: D20842876

Pulled By: dreiss

fbshipit-source-id: 18abf0d324ed2185ec6d27c864e935d856dcc6ad
2020-05-14 15:01:49 -07:00
Michael Suo
0d220ef381 [torchbind] Better error message when missing init. (#37474)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37474

Previously we would segfault

Test Plan: Imported from OSS

Differential Revision: D21297542

Pulled By: suo

fbshipit-source-id: c7e2f828a250c490ec23fb51c6a4a642d3370e52
2020-05-13 17:38:31 -07:00
Shen Li
2e9d6d99be Explicitly decref py::object in ConcretePyObjectHolder and PythonFunctionGuard (#38364)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38364

Test Plan: Imported from OSS

Differential Revision: D21537611

Pulled By: mrshenli

fbshipit-source-id: e22d1f1360cf71bec526841b5014013b11316f8d
2020-05-12 20:55:53 -07:00
Elias Ellison
c2ac2127be [JIT] recursively compile class types (#38050)
Summary:
Make it so that non-nn.Module classes do not need to be annotated with `torch.jit.script`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38050

Differential Revision: D21482654

Pulled By: eellison

fbshipit-source-id: 22689e4d7a33f6e1574b9495cff29a1fe6abb910
2020-05-12 17:16:28 -07:00
Kimish Patel
f954dd7823 Add dropout removal pass. (#38253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38253

This pass removes dropout and dropout_ nodes when training is false. It
requires the freeze_module pass to have been run first, which does both inlining and constant
propagation; without it, the training variable remains an attribute instead of
a constant.
ghstack-source-id: 103939141

Test Plan: python test/test_jit.py TestScript.test_remove_dropout

Reviewed By: dreiss

Differential Revision: D21505863

fbshipit-source-id: 42ea45804e4653b625b6a254c8d8480757264aa8
2020-05-12 14:38:34 -07:00
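The pass described above can be illustrated with a toy sketch over a flat list of IR nodes (an assumed representation; the real pass also rewires each dropout's output to its input rather than merely filtering nodes):

```python
# Toy IR: each node is (op_name, input_name). After freezing has folded
# `training` into the constant False, dropout is an identity, so in this
# simplified model its nodes can simply be dropped.
def remove_dropout(nodes, training: bool):
    if training:
        return list(nodes)
    return [n for n in nodes if n[0] not in ("aten::dropout", "aten::dropout_")]

graph = [("aten::conv2d", "x"), ("aten::dropout", "y"), ("aten::relu", "z")]
print(remove_dropout(graph, training=False))
# [('aten::conv2d', 'x'), ('aten::relu', 'z')]
```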
Shen Li
dad552666e Add then(callback)->Future API to ivalue::Future (#37311)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37311

Test Plan: Imported from OSS

Differential Revision: D21247827

Pulled By: mrshenli

fbshipit-source-id: f8fe0617ccb957aa747a78554a000ce2c4a58495
2020-05-11 21:58:56 -07:00
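A stdlib sketch of the `then(callback)->Future` chaining semantics this commit adds (illustrative only; `ivalue::Future` is a C++ class with a different API):

```python
from concurrent.futures import Future
from typing import Any, Callable

def then(fut: Future, callback: Callable[[Future], Any]) -> Future:
    # Return a new Future that completes with callback(fut) once fut
    # finishes -- analogous to the then() API described above.
    chained: Future = Future()

    def _on_done(completed: Future) -> None:
        try:
            chained.set_result(callback(completed))
        except Exception as exc:
            # Propagate callback errors into the chained future.
            chained.set_exception(exc)

    fut.add_done_callback(_on_done)
    return chained

f: Future = Future()
g = then(f, lambda done: done.result() + 1)
f.set_result(41)
print(g.result())  # 42
```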
James Reed
a553935e3c [JIT] Expose magic methods on script::Object (#38167)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38167

Test Plan: Imported from OSS

Differential Revision: D21486709

Pulled By: jamesr66a

fbshipit-source-id: 17b44d979fc658768b0d64f7d8af6fb684043ea3
2020-05-11 15:01:15 -07:00
Shihao Xu
3d0279862d Consolidate builtin/python_udf RPC to return ivalue::Future like torchscript RPC does (#35154)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35154

This is for issue https://github.com/pytorch/pytorch/issues/34999.

close https://github.com/pytorch/pytorch/issues/34999.

https://github.com/pytorch/pytorch/issues/34997 needs more work.

This will make a few work items easier, like 1) Dist autograd profiler, 2) JIT annotation for Future.

Test Plan:
```
buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork

buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork -- test_rref_forward_chain --stress-runs 100

buck build mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork && \
buck-out/gen/caffe2/test/distributed/rpc/rpc_fork\#binary.par \
-r test_call_method_on_rref
```

buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork -- 'test_rref_proxy_class \(fb\.test_rpc_fork\.RpcTestWithFork\)' --stress-runs 100

test_rref_proxy_reuse
test_handle_send_exceptions

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork

buck build mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork && \
buck-out/gen/caffe2/test/distributed/rpc/jit/rpc_fork\#binary.par \
-r test_script_call_python_return_future
```

Differential Revision: D7722184

fbshipit-source-id: bd92b855bfea4913d6672700590c57622fa86e0e
2020-05-08 21:28:56 -07:00
Jerry Zhang
0ed7fc581c [quant][graphmode][refactor] Split quantization.cpp (#37975)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37975

Test Plan:
.

Imported from OSS

Differential Revision: D21468497

fbshipit-source-id: 35cbf98a344ca6e4094d616a4040eacf017fd2de
2020-05-08 12:24:50 -07:00
Jerry Zhang
ff9a809ccd [quant][graphmode][refactor] Remove unused code in quantization.cpp (#37974)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37974

Differential Revision: D21468498

Pulled By: jerryzh168

fbshipit-source-id: 96f34db9f98474ec8e5d33e9b7c406b1637f5de8
2020-05-08 11:03:03 -07:00
James Reed
c1e7758b5e Back out "Revert D20229168: [quantization] Use torchbind for Linear PackedParams" (#38101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38101

Original commit changeset: 29e8a4d3b8bf
ghstack-source-id: 103730417

Test Plan: waitforsadcastle

Differential Revision: D21471381

fbshipit-source-id: a922cdf31ba32021e7264ae1454c646c0bfd7ef4
2020-05-08 10:53:06 -07:00
Nikita Shulga
4bc0a7f86a Revert D20229168: [quantization] Use torchbind for Linear PackedParams
Test Plan: revert-hammer

Differential Revision:
D20229168

Original commit changeset: 3607cac9aa5b

fbshipit-source-id: 29e8a4d3b8bffd95ff6a58b46c4f1c1e23770304
2020-05-07 19:47:45 -07:00
James Reed
eaf9b28c55 [quantization] Use torchbind for Linear PackedParams (#34140)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34140

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D20229168

Pulled By: jamesr66a

fbshipit-source-id: 3607cac9aa5b4b044572329742baed03350491c6
2020-05-07 19:03:44 -07:00
eellison
d5df055bbb [WIP][JIT] Add JIT backend registration API (#35833)
Summary:
**Summary**
This commit adds `torch::jit::RegisterBackend`, an API that allows
external backends to be registered for the execution of JIT subgraphs
outside the JIT interpreter. In order to register an external backend,
one must extend the provided abstract class `PyTorchBackendInterface` and provide
two additional functions: one that creates an instance of the aforementioned subclass
of `PyTorchBackendInterface`, and another that preprocesses a `ScriptModule` so that
it can run on the backend. Then, a `ScriptModule` that can compile and execute a given
JIT subgraph using the functions provided at registration time is generated
for each registered backend.

**Testing**
This commit adds a unit test that uses a minimal test backend
to make sure that the registration endpoint and generated
`ScriptModule` work.

```
$ python test/test_jit.py TestBackends
Fail to import hypothesis in common_utils, tests are not derandomized
.
----------------------------------------------------------------------
Ran 1 test in 0.183s

OK

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35833

Differential Revision: D21231955

Pulled By: SplitInfinity

fbshipit-source-id: 452db1123d0e5d83f97fe5da8a00fdfdb50dbef9
2020-05-07 18:15:26 -07:00
Mikhail Zolotukhin
a44824c9ed [TensorExpr] Allow to enable/disable fallback mechanism thru an envvar PYTORCH_TENSOREXPR_FALLBACK. (#37971)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37971

Test Plan: Imported from OSS

Reviewed By: protonu

Differential Revision: D21444831

Pulled By: ZolotukhinM

fbshipit-source-id: c75f58772a4730e8f40f05491f9e5afa4aa3ed30
2020-05-07 12:20:31 -07:00
Shen Li
ee1ddcef8d Acquire GIL when constructing/destructing ConcretePyObjectHolder (#37870)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37870

Test Plan: Imported from OSS

Differential Revision: D21410785

fbshipit-source-id: 374d5f40fbdfec98262aa4c84ec4ccdc40fb2ac1
2020-05-07 07:37:39 -07:00
Michael Suo
b53e6bfd49 [jit] normalize getMethod (#37472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37472

Our convention is for `findX` to return an optional version and `getX`
to assert that the X is there. Fix up `getMethod` to be consistent with
this convention.

Test Plan: Imported from OSS

Differential Revision: D21297543

Pulled By: suo

fbshipit-source-id: b40f56231cc8183e61bbb01fe5c0c113bcb6464d
2020-05-06 15:22:25 -07:00
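The find/get convention this commit restores — `findX` returns an optional, `getX` asserts presence — looks like this in a minimal Python sketch (class and method names are hypothetical):

```python
from typing import Callable, Optional

class ModuleLike:
    """Minimal sketch of the findX/getX convention described above."""

    def __init__(self) -> None:
        self._methods: dict = {}

    def find_method(self, name: str) -> Optional[Callable]:
        # findX: returns None when the method is absent.
        return self._methods.get(name)

    def get_method(self, name: str) -> Callable:
        # getX: asserts that the method is there.
        method = self.find_method(name)
        assert method is not None, f"method '{name}' not found"
        return method

m = ModuleLike()
m._methods["forward"] = lambda: 1
print(m.get_method("forward")())  # 1
print(m.find_method("missing"))   # None
```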
Jerry Zhang
1ad46f470f [jit] __copy__ for RecursiveScriptModule (#36830)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36830

Test Plan:
build/bin/test_jit

Imported from OSS

Differential Revision: D21431012

fbshipit-source-id: 13a1bf9744ec95ea59622226c8d8a8d55ec3f0b0
2020-05-06 13:55:01 -07:00
Jerry Zhang
70f375becf [quant] ConvPackedParams with TorchBind (#35923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35923

(Note: this ignores all push blocking failures!)

Test Plan:
tbd

Imported from OSS

Differential Revision: D20957089

fbshipit-source-id: 74d8bd628ccba64e902ea6ebabc2b883924050b0
2020-05-05 20:18:36 -07:00
Jerry Zhang
9b3911c073 [quant][graphmode][refactor] rename SwapDequant and refactor code handling general ops (#37555)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37555

Test Plan:
.

Imported from OSS

Differential Revision: D21393514

fbshipit-source-id: 5bc9fa0f0be25f4c35a64acb23513f64ed07e230
2020-05-05 11:20:15 -07:00
Mikhail Zolotukhin
7fa968b10d [TensorExpr] Add python bindings for TE fuser. (#37831)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37831

Test Plan: Imported from OSS

Reviewed By: jackm321

Differential Revision: D21404947

Pulled By: ZolotukhinM

fbshipit-source-id: 8467346d4fd8413985a33832fb3994d3ead746dc
2020-05-05 10:58:30 -07:00
Elias Ellison
23d0441da7 [JIT] Fix GetAttr inconsistency (#37424)
Summary:
We were previously only looking at class attributes, which didn't include methods etc., and would silently give wrong semantics. This makes hasAttr go through the same resolution as our other attribute lookups.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37424

Differential Revision: D21282633

Pulled By: eellison

fbshipit-source-id: 8e970f365c2740d137a02331739c2ed93747b918
2020-05-05 09:06:51 -07:00
Michael Suo
b7f258bbd3 add fmt to libtorch_python.so (#37560)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37560

Test Plan: Imported from OSS

Differential Revision: D21320059

Pulled By: suo

fbshipit-source-id: 95cfe7cf26c515fdfcb4621cc58266d838a38a3e
2020-05-04 10:14:37 -07:00
Linbin Yu
099a84ef9b Add overload name for aten::tensor and aten::as_tensor (#37655)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37655

Add overload name for aten::tensor and aten::as_tensor.
These two ops are used in the NLU model, and they will be included in the lite interpreter

Test Plan: verified model can be loaded correctly

Reviewed By: iseeyuan

Differential Revision: D21346142

fbshipit-source-id: 05ff4d9e0bcf7f4f9a30d95ca81aef9c3f6b0990
2020-05-01 14:31:04 -07:00
Michael Voznesensky
91e74fd843 [JIT] Adds a code_with_constants method to module printing (#37586)
Summary:
Closes https://github.com/pytorch/pytorch/issues/36625
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37586

Differential Revision: D21331385

Pulled By: suo

fbshipit-source-id: 752e63eac8bdd06c6719efb972cdc832ad7c1535
2020-04-30 20:44:01 -07:00
Elias Ellison
cde1350a5d Add support for generic list constants (#36953)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36953

Add support for generic lists as a constant. generic dicts & tuples are already implemented. This is a pretty common pattern and cuts down on the number of non-tensor nodes executed in interpolate tests.

Test Plan: Imported from OSS

Differential Revision: D21160761

Pulled By: eellison

fbshipit-source-id: 1e6b7b25b7580f09067794772d44e615601c60c4
2020-04-28 23:28:07 -07:00
Elias Ellison
c516f84525 [JIT] Add Lower Tuples Call & Run remove mutation after list unrolling (#36829)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36829

This changes the IR complexity from the previous PR for the following tests:
```
('Name', 'Ifs/Loops', 'non-tensor ops')
Before:  ('max_unpool1d', 0, 3)
After:  ('max_unpool1d', 0, 0)
Before:  ('max_unpool2d', 0, 3)
After:  ('max_unpool2d', 0, 0)
Before:  ('max_unpool3d', 0, 4)
After:  ('max_unpool3d', 0, 0)
Before:  ('adaptive_max_pool2d', 0, 3)
After:  ('adaptive_max_pool2d', 0, 0)
Before:  ('adaptive_max_pool3d', 0, 4)
After:  ('adaptive_max_pool3d', 0, 0)
Before:  ('adaptive_avg_pool2d', 0, 3)
After:  ('adaptive_avg_pool2d', 0, 0)
Before:  ('adaptive_avg_pool3d', 0, 4)
After:  ('adaptive_avg_pool3d', 0, 0)
Before:  ('upsample', 13, 68)
After:  ('upsample', 4, 28)
Before:  ('upsample', 13, 68)
After:  ('upsample', 0, 5)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 13, 67)
After:  ('interpolate', 4, 27)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 13, 67)
After:  ('interpolate', 4, 27)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 13, 67)
After:  ('interpolate', 4, 27)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 13, 67)
After:  ('interpolate', 4, 27)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 14, 59)
After:  ('interpolate', 0, 3)
Before:  ('interpolate', 13, 57)
After:  ('interpolate', 4, 21)
Before:  ('interpolate', 14, 59)
After:  ('interpolate', 0, 3)
Before:  ('interpolate', 14, 59)
After:  ('interpolate', 0, 3)
Before:  ('interpolate', 13, 57)
After:  ('interpolate', 4, 21)
Before:  ('interpolate', 14, 59)
After:  ('interpolate', 0, 3)
Before:  ('interpolate', 14, 59)
After:  ('interpolate', 0, 3)
Before:  ('interpolate', 13, 57)
After:  ('interpolate', 4, 21)
Before:  ('interpolate', 14, 59)
After:  ('interpolate', 0, 3)
Before:  ('interpolate', 13, 77)
After:  ('interpolate', 4, 33)
Before:  ('interpolate', 14, 77)
After:  ('interpolate', 0, 5)
Before:  ('interpolate', 14, 77)
After:  ('interpolate', 0, 5)
Before:  ('interpolate', 13, 77)
After:  ('interpolate', 4, 33)
Before:  ('interpolate', 14, 77)
After:  ('interpolate', 0, 5)
Before:  ('interpolate', 14, 77)
After:  ('interpolate', 0, 5)
Before:  ('interpolate', 13, 77)
After:  ('interpolate', 4, 33)
Before:  ('interpolate', 14, 77)
After:  ('interpolate', 0, 5)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 14, 68)
After:  ('interpolate', 0, 4)
Before:  ('interpolate', 15, 103)
After:  ('interpolate', 1, 23)
Before:  ('interpolate', 14, 70)
After:  ('interpolate', 0, 6)
Before:  ('interpolate', 15, 103)
After:  ('interpolate', 1, 21)
Before:  ('interpolate', 14, 70)
After:  ('interpolate', 0, 6)
Before:  ('interpolate', 15, 91)
After:  ('interpolate', 1, 13)
Before:  ('interpolate', 14, 59)
After:  ('interpolate', 0, 3)
Before:  ('interpolate', 15, 93)
After:  ('interpolate', 1, 16)
Before:  ('interpolate', 14, 61)
After:  ('interpolate', 0, 5)
Before:  ('interpolate', 15, 111)
After:  ('interpolate', 1, 28)
Before:  ('interpolate', 14, 77)
After:  ('interpolate', 0, 5)
Before:  ('interpolate', 15, 113)
After:  ('interpolate', 1, 27)
Before:  ('interpolate', 14, 79)
After:  ('interpolate', 0, 7)
Before:  ('test_nn_AdaptiveMaxPool2d_single', 0, 3)
After:  ('test_nn_AdaptiveMaxPool2d_single', 0, 0)
Before:  ('test_nn_AdaptiveMaxPool2d_tuple', 0, 3)
After:  ('test_nn_AdaptiveMaxPool2d_tuple', 0, 0)
Before:  ('test_nn_AdaptiveMaxPool3d_single', 0, 4)
After:  ('test_nn_AdaptiveMaxPool3d_single', 0, 0)
Before:  ('test_nn_AdaptiveMaxPool3d_tuple', 0, 4)
After:  ('test_nn_AdaptiveMaxPool3d_tuple', 0, 0)
Before:  ('test_nn_AdaptiveMaxPool3d_single_nonatomic', 0, 4)
After:  ('test_nn_AdaptiveMaxPool3d_single_nonatomic', 0, 0)
Before:  ('test_nn_AdaptiveMaxPool3d_tuple_nonatomic', 0, 4)
After:  ('test_nn_AdaptiveMaxPool3d_tuple_nonatomic', 0, 0)
Before:  ('test_nn_AdaptiveAvgPool2d_single', 0, 3)
After:  ('test_nn_AdaptiveAvgPool2d_single', 0, 0)
Before:  ('test_nn_AdaptiveAvgPool2d_single_1x1output', 0, 3)
After:  ('test_nn_AdaptiveAvgPool2d_single_1x1output', 0, 0)
Before:  ('test_nn_AdaptiveAvgPool2d_tuple', 0, 3)
After:  ('test_nn_AdaptiveAvgPool2d_tuple', 0, 0)
Before:  ('test_nn_AdaptiveAvgPool3d_single', 0, 4)
After:  ('test_nn_AdaptiveAvgPool3d_single', 0, 0)
Before:  ('test_nn_AdaptiveAvgPool3d_tuple', 0, 4)
After:  ('test_nn_AdaptiveAvgPool3d_tuple', 0, 0)
```

Test Plan: Imported from OSS

Differential Revision: D21160758

Pulled By: eellison

fbshipit-source-id: 68ccbf3af74398e8dbad7e6bedb639635dafdb2e
2020-04-28 23:28:02 -07:00
Jerry Zhang
6fa76b8a0c [jit] __deepcopy__ for RecursiveScriptModule (#32684)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32684

Previously we had `clone` and `clone_instance`, where `clone` clones both type
and value, and `clone_instance` clones only the value; both of them are shallow copies.
We need to re-evaluate whether we should expose them as user-facing APIs.
I think we should hide `clone`, but `clone_instance` might be useful as well, especially
when copying a model with very large weights, where people might want only a shallow copy.

This PR adds a `deepcopy` that might be useful as a user API, which deep-copies the values,
including Tensors, but does not deep-copy `Blob`, `Capsule`, `Future`, or `PyObject`.
For more discussion, please see the following issue.

fixes: https://github.com/pytorch/pytorch/issues/32519

Test Plan: Imported from OSS

Differential Revision: D21220756

fbshipit-source-id: 476bf11fe82c08fac36e7457879a09f545ffdc5e
2020-04-28 18:47:11 -07:00
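The copy semantics described above — deep-copy values, but leave opaque handles (`Blob`, `Capsule`, `Future`, `PyObject`) shared — can be sketched with Python's `copy` protocol (a stand-in class, not the real RecursiveScriptModule):

```python
import copy

class ScriptModuleLike:
    """Stand-in illustrating the copy semantics: values are deep-copied,
    opaque handles are shared."""

    def __init__(self, weights, handle):
        self.weights = weights  # deep-copied, like Tensors
        self.handle = handle    # shared, like Blob/Capsule/Future/PyObject

    def __deepcopy__(self, memo):
        new = self.__class__.__new__(self.__class__)
        memo[id(self)] = new  # register early to handle reference cycles
        new.weights = copy.deepcopy(self.weights, memo)
        new.handle = self.handle  # intentionally NOT deep-copied
        return new

orig = ScriptModuleLike([1.0, 2.0], handle=object())
clone = copy.deepcopy(orig)
print(clone.weights == orig.weights, clone.weights is orig.weights)  # True False
print(clone.handle is orig.handle)  # True
```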
Nikolay Korovaiko
a80a438e37 correctly set and restore states in te tests (#37210)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37210

Differential Revision: D21238634

Pulled By: Krovatkin

fbshipit-source-id: 6462239753399c10c871baa5d5fdff5465cf2544
2020-04-24 20:16:51 -07:00
Xiang Gao
3880f14b64 Canonicalize includes in torch, and add tests for it (#36303)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36303

Test Plan: Imported from OSS

Differential Revision: D20943003

Pulled By: ezyang

fbshipit-source-id: 81fcbaccc1a7eec422bd8347d196bb66a5467884
2020-04-23 08:09:21 -07:00
David Reiss
63e5058c88 Fix naming of "strides" method in TensorType (#36727)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36727

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.
Lint.

Differential Revision: D21076697

Pulled By: dreiss

fbshipit-source-id: dbd18cb41c7b26479984a7a7b12ad41a4c5b7658
2020-04-16 17:07:27 -07:00
Elias Ellison
9cbeb0faed [JIT] Dont optimize shape peepholes on inline (#36404)
Summary:
With https://github.com/pytorch/pytorch/pull/35562, we are running peephole optimization on inlining to reduce the number of nodes that are copied.

The tracer encodes the sizes in the graph like:
```
graph(%0 : Double(7)):
  %1 : Function = prim::Constant[name="tensor_size"]()
  %2 : Tensor = prim::CallFunction(%1, %0)
  return (%2)
```

However, people would like to reuse the graph with different shapes, so running size invalidations would invalidate that. Long term, it might be better for the tracer to not include shape information, but there are downstream users of that.

Separates out FuseAddMM from peephole so that there is now a single `disable_size_optimizations` parameter, and ONNX explicitly invokes FuseAddMM.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36404

Differential Revision: D20968974

Pulled By: eellison

fbshipit-source-id: 56f8f1699e3b0adeeccdfd5a67bb975fd41a2913
2020-04-15 17:49:48 -07:00
Negin Raoof
f99a28f515 [ONNX] Adding a pass to replace interpolate function with aten::__interpolate (#35744)
Summary:
Since aten::__interpolate was removed in https://github.com/pytorch/pytorch/pull/34514, we need a pass that replaces the interpolate function with aten::__interpolate for ONNX export.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35744

Reviewed By: hl475

Differential Revision: D20907041

Pulled By: houseroad

fbshipit-source-id: f2d2cdfec47389245c50f538267124eedf682adf
2020-04-14 23:16:22 -07:00
Wanchao Liang
999d7f6ab2 [jit] tracer flag to guard risky behaviors (#36277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36277

This PR introduces a flag to the tracer that guards risky behaviors
like returning a list/dict as output of the tracer. Currently, to avoid
breaking backward compatibility, we throw a warning if the tracer output is a list, and
will throw an error when the tracer output is a dict, to enforce using this
flag (next PR)

Test Plan: Imported from OSS

Differential Revision: D20998157

Pulled By: wanchaol

fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
2020-04-13 22:35:03 -07:00
Mikhail Zolotukhin
765bf8f03d Remove duplicate bindings from torch/csrc/jit/python/init.cpp. (#36492)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36492

Test Plan: Imported from OSS

Differential Revision: D20995235

Pulled By: ZolotukhinM

fbshipit-source-id: 6afa3a956e57c2fb94bb29d332177be73a2bac2a
2020-04-13 12:28:32 -07:00
Mike Ruberry
62f9312abd Revert D20783298: Fix naming of "strides" method in TensorType
Test Plan: revert-hammer

Differential Revision:
D20783298

Original commit changeset: 8fcc146284af

fbshipit-source-id: 30e3cb6d7a30d82048534d4d2e794b7e08ae01bb
2020-04-09 04:24:43 -07:00
David Reiss
16980e455f Fix naming of "strides" method in TensorType (#35170)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35170

Looks like this was renamed by accident in 0cbd7fa46f

Test Plan:
Unit test.

Imported from OSS

Differential Revision: D20783298

fbshipit-source-id: 8fcc146284af022ec1afe8d651baf6721b190ad3
2020-04-08 15:59:28 -07:00
David Reiss
645d57ea01 Expose JIT Module's "register_attribute" to Python (#35630)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35630

Prefixed with an underscore for now because the semantics of this method can be
confusing.  It adds a new attribute to the *type*, which can be shared
by several objects.

Test Plan:
Next diff in stack uses it, and has unit tests.

Imported from OSS

Differential Revision: D20904253

fbshipit-source-id: dcbf60eacf0e0e075c19238165aa33954aa73b5f
2020-04-08 13:09:28 -07:00
Kimish Patel
d559a47933 Enable relu fusion with prepacked linear/conv. (#35705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35705

Introduces a pass for relu fusion.

Test Plan:
python test/test_xnnpack_integration.py

Imported from OSS

Differential Revision: D20746592

fbshipit-source-id: 6c22f60a20e9121618c85077b9b58fb8d4082b3b
2020-04-03 15:38:45 -07:00
Mikhail Zolotukhin
af5121f62a Invoke TensorExpr fuser pass from a graph executor. (#35913)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35913

The pass itself is still disabled by default, but with this change we
don't need to register it as a custom pass anymore. It allows us to
control its behavior with env variables more easily.

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D20827189

Pulled By: ZolotukhinM

fbshipit-source-id: e74d90b5e46422e7ab7bc40974a805220da50fbc
2020-04-03 12:20:26 -07:00
davidriazati
6e13a7787b [jit] Fix type comparisons segfault (#35929)
Summary:
Pybind will convert `None`s to `nullptr`s, so this adds a check to make
sure those don't get into the actual type comparison logic. Fixes #35778
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35929

Pulled By: driazati

Differential Revision: D20831278

fbshipit-source-id: 5800050e5eec280072afde58141ad00c1e8db8e2
2020-04-03 11:33:48 -07:00
Elias Ellison
2595c62208 [JIT] Better error on default params error (#35888)
Summary:
Someone messaged me about this when a better error message would have solved their problem
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35888

Differential Revision: D20819538

Pulled By: eellison

fbshipit-source-id: 95d124bfd162e1747dcdf7a981703a279a5dfaa6
2020-04-02 15:31:22 -07:00
Christian Sarofeen
6d24f8fe21 Infrastructure for a new CUDA Fuser (#34785)
Summary:
**Summary:** This PR contains the infrastructure of a new CUDA fuser. This CUDA fuser is based on many of the same principles as TensorExpressions and Halide; however, the implementation is ground up. The fusion pass itself is similar to the default CUDA fuser, but it has undergone some refactoring and is using the new code generation infrastructure.

For those who are interested in how the code generation in this PR works, I would recommend reviewing _test/cpp/jit/test_gpu_fusion.cpp_ as well as the long comment section at the beginning of _torch/csrc/jit/codegen/cuda/transform_replay.h_.

One of the largest differences between our approach and that of TVM/Halide is the concept of "TensorView". TensorView, from a high level, should be thought of similarly to how we think of working with Tensors in PyTorch: it's an N-D object which can undergo transformations that change its dimensionality. Dimensionality changes are done through the operations split/merge/reorder/computeAt. These transformations are similar to split/fuse/reorder/compute_at of TVM; they modify how a tensor is iterated over to generate GPU code. Interestingly, in our scheme these transformations are applied to tensors and only impact how that tensor is generated.

**Warning:** This PR is purposefully not feature complete with the current fuser. We wanted to separate out the infrastructure from the fusion capabilities. Once in, smaller incremental PRs will be submitted to expand capabilities of the fuser.

**Short term goals:**

Parity with current CUDA fuser (including performance):
- Dynamic shapes (no recompilation)
- Implicit handling of broadcast (broadcast tensors are treated as tensors of the broadcast size in the generated code)
- Dropout

**Mid-term goals:**

- Transposes fused with pointwise operations where transpose involves only 2 axes (across the fused operation).
- 1-D reductions fused with pointwise operations
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34785

Reviewed By: ZolotukhinM

Differential Revision: D20650977

Pulled By: soumith

fbshipit-source-id: ee39c95a880e1b9822e874ed4cc180971572bf63
2020-04-02 09:22:42 -07:00
Michael Suo
866d9d4e6a [jit] Fix name collision on load (#35720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35720

When modules are saved, all relevant types are serialized according to
their qualified name with a compilation unit. Since qualified names are
guaranteed to be unique within a compilation unit, this normally works
fine.

On load, all types are registered in a compilation unit owned by the
script::Module. Type names are not unique across compilation units, so
if you load two modules with colliding type names, make them submodules
of yet another module, and save that module, there is the potential of a
name collision. See the added tests for examples if that description is
confusing.

The solution is to unique type names when serializing code by mangling
them if we detect a name collision.

Test Plan: Imported from OSS

Differential Revision: D20749423

Pulled By: suo

fbshipit-source-id: a8827ff1d4a89f3e7964dbbb49b4381863da3e6a
2020-04-01 00:02:38 -07:00
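The collision fix can be sketched as a uniquing function: append a mangle suffix until the qualified name is unique within the target compilation unit (the `___torch_mangle_` suffix format here is illustrative, not the exact serialized form):

```python
def mangle(qualname: str, taken: set) -> str:
    # Append an increasing mangle suffix until the qualified name is
    # unique within the target compilation unit.
    if qualname not in taken:
        taken.add(qualname)
        return qualname
    i = 0
    while f"{qualname}.___torch_mangle_{i}" in taken:
        i += 1
    mangled = f"{qualname}.___torch_mangle_{i}"
    taken.add(mangled)
    return mangled

names: set = set()
print(mangle("__torch__.Foo", names))  # __torch__.Foo
print(mangle("__torch__.Foo", names))  # __torch__.Foo.___torch_mangle_0
```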
Michael Suo
06dcb70905 [jit] Fix Type equality in some cases (#35719)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35719

Test Plan: Imported from OSS

Differential Revision: D20749422

Pulled By: suo

fbshipit-source-id: 09b697766c1eb3e56f4cf8acc7e854b0981d7991
2020-03-31 22:29:12 -07:00
Supriya Rao
a090de380c [quant][graph] Add quant fusion for dynamic quantization (#35586)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35586

This pass fuses the choose_qparams-quant-dequant sequence
Fusion for weight tensor is the same as static quant.

Test Plan:
python test/test_quantize_script.py

Imported from OSS

Differential Revision: D20755680

fbshipit-source-id: b7443770642b6e6fa0fa9da8a44637e9b2d4df70
2020-03-30 23:34:56 -07:00
Supriya Rao
1f7ee7b6b7 [quant][graph] Add pass to insert quant dequant for dynamic quantization (#35448)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35448

Add _choose_qparams_per_tensor which returns scale and zero_point similar to the dynamic quantization in the operator

Test Plan:
python test/test_quantize_script.py

Imported from OSS

Differential Revision: D20755679

fbshipit-source-id: c9066d8f1bb3e331809be26c4be806faafc9b981
2020-03-30 23:33:32 -07:00
Jerry Zhang
6fc2403951 [quant][graphmode] qconfig_dict support None (#35336)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35336

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D20655302

fbshipit-source-id: b453f3240ac487aa29629953b4d71274dbbc25fc
2020-03-29 12:47:47 -07:00
Mikhail Zolotukhin
cd00bbc23f clang-format. (#35605)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35605

Test Plan: Imported from OSS

Reviewed By: orionr

Differential Revision: D20720486

Pulled By: ZolotukhinM

fbshipit-source-id: f081a9fb6ef84fdce3b8f071d5e251e267854a18
2020-03-28 11:45:06 -07:00