Commit Graph

18497 Commits

Author SHA1 Message Date
Zhengxu Chen
12daa4f663 [jit][edge] Enable CALL instruction in lite interpreter. (#65964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65964

ghstack-source-id: 141425519

Test Plan: buck run xplat/caffe2:test_lite_interpreter

Reviewed By: cccclai

Differential Revision: D31326149

fbshipit-source-id: 8a599d92f3fa4e6c125100adb36d89592e71e547
2021-10-25 14:44:33 -07:00
Pearu Peterson
333717eaf0 Improve assert failure message in test_get_torch_func_signature_exhaustive (#67039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67039

cc mruberry

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31899719

Pulled By: cpuhrsch

fbshipit-source-id: 819d07da5b18b31d462010b9f9382e0b8cd10f9f
2021-10-25 14:20:38 -07:00
Jacob Szwejbka
a6d0339492 [Pytorch Edge] Extend runtime compatibility to custom classes (#66972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66972

Add an API to view how many custom classes we have and what their names are.

Test Plan: unit test

Reviewed By: cccclai

Differential Revision: D31811337

fbshipit-source-id: 9f8ca1fc578a0a5360c9cd8f95475acc33f250e4
2021-10-25 13:42:26 -07:00
Zhengxu Chen
4dce051cb0 [jit][edge] Add control stack frame to lite interpreter (#65963)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65963

ghstack-source-id: 141425517

Test Plan: In next diff.

Reviewed By: qihqi, cccclai

Differential Revision: D31326150

fbshipit-source-id: dbbf65f2bf14846c45d0add71edc7d4dbfc6b92c
2021-10-25 12:15:16 -07:00
Sahan Chanuka Paliskara
9de0888891 Move the registration of CPython builtin modules to BuiltinRegistry (#67085)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67085

Leverages BuiltinRegistry to register the CPython standard C modules. The standard C modules moved are listed in the FOR_EACH macro.

Test Plan:
buck test mode/opt //caffe2/torch/csrc/deploy/interpreter:test_builtin_registry

buck test mode/opt //caffe2/torch/csrc/deploy:test_deploy

Reviewed By: shunting314

Differential Revision: D31848547

fbshipit-source-id: 7eb49d222eaaccb2b8ca5c984b05bf54cc233f25
2021-10-25 11:12:07 -07:00
Mike Iovine
5d9ff8f30e [Static Runtime] Add static_runtime::fused_sigrid_transforms (#66659)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66659

Original message: We added and registered a new operator, static_runtime::fused_sigrid_transforms, and modified the original sigrid_transforms to handle the non-fused case only.

Note: this diff was commandeered from a bootcamper. Some final touches were needed.

Test Plan: `buck test caffe2/benchmarks/static_runtime/...`

Reviewed By: swolchok

Differential Revision: D31550307

fbshipit-source-id: 287380be0cca20ee6e145bcc7217547bd58cf6d0
2021-10-25 10:44:46 -07:00
Pavithran Ramachandran
8d164a36fb Use at::native::is_nonzero in promoted ops to improve portability (#67097)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67097

All delegated models have `is_nonzero` ops by default; making the op native and consumable without dispatch eases the portability of such models.
ghstack-source-id: 141375082
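
For context, a minimal sketch of the op's Python-level semantics (`torch.is_nonzero` accepts only single-element tensors):

```
import torch

torch.is_nonzero(torch.tensor([1.5]))    # True
torch.is_nonzero(torch.tensor([0]))      # False
torch.is_nonzero(torch.tensor([False]))  # False
# More than one element raises: torch.is_nonzero(torch.tensor([1, 2]))
```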

Test Plan:
`buck test caffe2/test/cpp/jit:jit -- BackendTest.TestComposite`

```
[~/fbsource/fbcode] cd ~/fbsource/fbcode/ && buck test caffe2/test:jit -- test_trace_arange
Parsing buck files: finished in 0.5 sec
Building: finished in 9.4 sec (100%) 16035/16035 jobs, 0/16035 updated
  Total time: 10.0 sec
More details at https://www.internalfb.com/intern/buck/build/1e55eea5-2adb-41d1-96ae-cbf4b446d6c6
BUILD SUCCEEDED
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: 46eedba2-ae17-4e88-b205-93bd1332665d
Trace available for this run at /tmp/tpx-20211015-113905.235421/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/1970324912349177
    ✓ ListingSuccess: caffe2/test:jit - main (12.372)
    ✓ Pass: caffe2/test:jit - test_trace_arange (jit.test_tracer.TestTracer) (13.748)
    ✓ Pass: caffe2/test:jit - test_trace_arange_with_grad (jit.test_tracer.TestTracer) (13.892)
Summary
  Pass: 2
  ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/1970324912349177
```

Reviewed By: iseeyuan

Differential Revision: D31656842

fbshipit-source-id: c0e6c798478a2783c0e17e6e9100ba5ce044da78
2021-10-25 10:18:31 -07:00
Christopher Gray Howard
acb340de75 [Pytorch][Bootcamp] Add fixes and vanilla testing for Adagrad non-vectorized and vectorized optimizers to handle complex numbers (#66671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66671

Made changes in the step function of the vectorized and non-vectorized Adagrad optimizers to handle complex numbers as two real numbers, as per #65711 on GitHub.
ghstack-source-id: 141442350
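
A minimal sketch of the "complex as two reals" idea (illustrative only, not the actual optimizer code; `lr` is an assumed name):

```
import torch

p = torch.randn(4, dtype=torch.complex64)  # complex parameter
g = torch.randn(4, dtype=torch.complex64)  # complex gradient
lr = 0.1

# View the complex tensors as (..., 2) real tensors and apply the plain
# real-valued update; `p` sees the result through the shared storage.
torch.view_as_real(p).add_(torch.view_as_real(g), alpha=-lr)
```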

Test Plan:
buck test mode/dev caffe2/test:optim -- 'test_adagrad_complex'
https://pxl.cl/1Rd44

Reviewed By: albanD

Differential Revision: D31673503

fbshipit-source-id: 90a0d0c69b556716e2d17c59ce80f09c750fc464
2021-10-25 10:13:21 -07:00
Mike Iovine
a0495b3cdb [SR] Remove unused operator() overload (#67001)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67001

The overload of `operator()` taking `std::vector<at::Tensor>` was only used for testing. In a diff following this one, I will add a new overload that takes `std::vector<c10::IValue> args` and no `kwargs` so we can avoid default-constructing `kwargs` everywhere.

This new overload will probably take a forwarding reference, so to avoid problems with overloading on a forwarding reference and to simplify the interface, it's best to remove this unused one.

Test Plan:
`buck test caffe2/benchmarks/static_runtime/...`

`buck test caffe2/test:static_runtime`

Reviewed By: hlu1

Differential Revision: D31821990

fbshipit-source-id: 6d2e4a75ca4abe6e262651532eb96c3b274c6f4a
2021-10-25 08:18:58 -07:00
Mike Iovine
364645cd9d [SR] Factor operator() implementation into separate function (#67125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67125

Using explicit template instantiations in D31659973 (f2582a59d0) was a bad idea. The problem is that the lvalue instantiation was for a `const` vector of `IValue`, meaning that if you tried to pass SR a non-const vector of arguments, the linker would fail to find the symbol.

The reason we didn't catch this in D31659973 (f2582a59d0) is that predictor always passes a `const` reference anyway. But we should fix this to prevent unexpected problems in the future.

Test Plan: `buck test caffe2/benchmarks/static_runtime/...`

Reviewed By: hlu1

Differential Revision: D31873406

fbshipit-source-id: 5ab5a03334bed925cec11facadcedf9bec9b90ad
2021-10-25 08:17:40 -07:00
Sameer Deshmukh
edd4d246c3 Accept 0-dim channel inputs in convolution layer (#66256)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56998.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66256

Reviewed By: mrshenli

Differential Revision: D31859428

Pulled By: jbschlosser

fbshipit-source-id: 034b6c1ce35aac50eabfa09bbcd8b1e3c8b171bd
2021-10-25 08:12:29 -07:00
kshitij12345
6c985b57ff OpInfo : nn.functional.embedding (#66997)
Summary:
Adds OpInfo for `nn.functional.embedding`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66997

Reviewed By: mrshenli

Differential Revision: D31859799

Pulled By: zou3519

fbshipit-source-id: bbca860df4fbc243751f5fa81658231866c31d2e
2021-10-25 08:06:32 -07:00
Jerry Zhang
adc21f1966 [quant] Fix docs build (#67169)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67169

Looks like the doc error only appears after it's landed

Test Plan: Imported from OSS

Reviewed By: seemethere

Differential Revision: D31890431

fbshipit-source-id: d40cba082712c4b35704ea15d82fbc4749f85aec
2021-10-25 08:02:26 -07:00
Mike Iovine
dd81fa9027 [JIT] Freeze allows preservation of submodule attributes (#66102)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66102

This change allows the `preserved_attributes` parameter of `torch.jit.freeze` to accept attributes of submodules. Previously, only root-level attributes could be preserved. Example:

```
class SubModule(nn.Module):
    def __init__(self):
        super(SubModule, self).__init__()
        self.a = 1
        self.b = 2

    def forward(self):
        return self.a + self.b

class Module(nn.Module):
    def __init__(self):
        super(Module, self).__init__()
        self.sub = SubModule()

    def forward(self):
        return self.sub()

mod = torch.jit.script(Module())
mod.eval()
frozen_mod = torch.jit.freeze(mod, preserved_attrs=['sub.a'])

frozen_mod.sub   # OK
frozen_mod.sub.a # OK
frozen_mod.sub.b # Error, not preserved
frozen_mod()     # = 3
frozen_mod.sub.a = 0
frozen_mod()     # = 2
```

Test Plan: `buck test caffe2/test:jit -- TestFreezing`

Reviewed By: eellison

Differential Revision: D31383868

fbshipit-source-id: 34a05ca9528d4e5f04f71ac2a339fd584a8fa305
2021-10-25 07:56:20 -07:00
Jerry Zhang
364c4959c3 [quant] Fix docs error in convert_fx (#67152)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67152

Test Plan:
```
cd docs
make html
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D31884570

fbshipit-source-id: 2b521f617c93f6fa08da3387df2d25497293eee6
2021-10-24 19:26:45 -07:00
Nikolay Korovaiko
a7ebf76a15 jit trace (#59949)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59949

Reviewed By: ZolotukhinM

Differential Revision: D31366787

Pulled By: Krovatkin

fbshipit-source-id: 798cbcd97e8ecfba984f98cd70214954be9309af
2021-10-24 18:04:22 -07:00
Rohan Varma
b51731527d [ez] [Docs] Missing import in example for post_local_sgd (#67047)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67047

Fix missing import
ghstack-source-id: 141258423

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D31841837

fbshipit-source-id: 139e614517dcac7a53259ff7a0360bb5275bb53b
2021-10-24 01:44:06 -07:00
Rohan Varma
0000c88e10 [FSDP] No need for list() in _get_shard (#66957)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66957

`chunk` appears to return a tuple, which is enough given that we just
index to the right chunk and discard the rest.
ghstack-source-id: 141391149
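
A quick illustration of the behavior relied on here (`torch.chunk` returns a tuple of views, no copies):

```
import torch

t = torch.arange(12)
chunks = torch.chunk(t, 4)  # tuple of 4 views
print(type(chunks))         # <class 'tuple'>
shard = chunks[2]           # keep only the shard we need
```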

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D31780799

fbshipit-source-id: fdb1b77fffa916328e14a4cd692b5241ae46a514
2021-10-24 01:29:19 -07:00
Rohan Varma
580efb35a5 [FSDP] Add some comments after reading the code. (#66956)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66956

Adds some comments I found helpful while ramping up on FSDP code.
ghstack-source-id: 141391150

Test Plan: n/a

Reviewed By: mrshenli

Differential Revision: D31780798

fbshipit-source-id: e2d38a9801b4548b202a73615774d5f0f7f5e3ed
2021-10-24 01:28:19 -07:00
Natalia Gimelshein
b6fa998892 Revert D31514095: Use kernel_func_name from aotCompiler
Test Plan: revert-hammer

Differential Revision:
D31514095 (7b55dc8340)

Original commit changeset: b70c8e2c7336

fbshipit-source-id: ad4d828f33506e612b51c276149fa0e12b0565d5
2021-10-23 17:17:53 -07:00
Jerry Zhang
313939c9c6 [quant] Fix lint errors (#67138)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67138

Test Plan:
ossci

Imported from OSS

Reviewed By: supriyar

Differential Revision: D31879558

fbshipit-source-id: 271905d3d254c906aa78bae9f2bd411f9d57e1e8
2021-10-23 11:26:25 -07:00
Priya Ramani
7b55dc8340 Use kernel_func_name from aotCompiler (#66337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66337

Right now, assembly code generated for a given method from the model is named `wrapper` or `func` by default. The function name is then replaced with a proper kernel_func_name after target-specific assembly is generated.
This PR propagates the desired kernel_func_name from the aotCompiler API so that the generated function gets the needed name up front and doesn't need to be renamed later.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D31514095

Pulled By: priyaramani

fbshipit-source-id: b70c8e2c733600a435cd4e8b32092d37b7bf7de5
2021-10-23 02:20:45 -07:00
Jianyu Huang
64c68edaf3 [pt] Add Half precision support for bucketize and searchsorted op (#67077)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67077

Test Plan: CI

Reviewed By: yinghai

Differential Revision: D31852556

fbshipit-source-id: 1e4212146ee67edc6b6568d25db79de525782788
2021-10-22 23:37:37 -07:00
Jerry Zhang
2d81d5ab0a [quant][graphmode][fx] Remove fbgemm_backend_config_dict for now (#67066)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67066

We'll add it back later when the API is ready.

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31849079

fbshipit-source-id: 0c00d08510166b2d897cf1562c7276527319b05c
2021-10-22 21:57:56 -07:00
Supriya Rao
8460fa5707 [quant][fx] Add an option in convert_fx to accept qconfig_dict to skip quantization (#66878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66878

Currently convert_fx quantizes all layers that have been prepared, depending on the prepare qconfig_dict.
This PR adds support for accepting a variation of qconfig_dict in convert_fx that can be used to skip quantizing certain layers.

This helps with the workflow of preparing/observing all operators and then quantizing only a subset of them (based on quantization error), avoiding the need to prepare multiple times.

The qconfig_dict passed to convert_fx can only have its values set to `None`, with the keys being the same as what is allowed in the prepare qconfig_dict.
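
A hedged sketch of the intended workflow (the toy module, the `fbgemm` qconfig, and the exact qconfig_dict shape are assumptions based on the description above, not code from this PR):

```
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = torch.nn.Linear(4, 4)
        self.out = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.out(self.sub(x))

# Prepare/observe every layer...
prepared = prepare_fx(M().eval(), {"": get_default_qconfig("fbgemm")})
prepared(torch.randn(1, 4))  # calibration

# ...then skip quantizing `sub` at convert time; values may only be None.
quantized = convert_fx(prepared, qconfig_dict={"module_name": [("sub", None)]})
```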

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_convert_qconfig_dict

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D31808247

fbshipit-source-id: a4f5dca1090f0083fc3fea14aff56924033eb24f
2021-10-22 21:18:15 -07:00
Supriya Rao
d13829e6be [quant][fx] update observer_fqn to not depend on node.name (#66767)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66767

Make the observer FQN in the prepare step independent of the input_node/observed_node name.
This change names the observers `{input/output}_activation_post_process_{idx}`, where `idx` is incremented for each new observer instance and is guaranteed to be unique.
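
For illustration, the naming scheme as a tiny helper (hypothetical function, not part of this PR):

```
def observer_name(kind: str, idx: int) -> str:
    # kind is "input" or "output"; idx increments per observer instance
    return f"{kind}_activation_post_process_{idx}"

observer_name("input", 0)   # 'input_activation_post_process_0'
observer_name("output", 1)  # 'output_activation_post_process_1'
```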

Test Plan:
python test/test_quantization.py test_observer_fqn

Imported from OSS

Reviewed By: anjali411

Differential Revision: D31752052

fbshipit-source-id: e0995b1ef33a99d5b012133fe92d303d55a73b7d
2021-10-22 21:16:24 -07:00
Wanchao Liang
cf3a5160f8 [BE] move init_multigpu_helper to common_distributed (#67050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67050

This PR moves init_multigpu_helper to common_distributed so that it can be shared by different distributed tests.
ghstack-source-id: 141370119

Test Plan: wait for ci.

Reviewed By: mrshenli

Differential Revision: D31842644

fbshipit-source-id: c7bad25d6cef9bdce7ad1fb6c60c1cad4b765702
2021-10-22 17:16:11 -07:00
Yanli Zhao
df3f82a1ef Add more FSDP unit tests to cover core logic, freezing weights and flatten parameter wrapper (#66904)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66904

Add more FSDP unit tests to cover core logic, freezing weights, and the flatten parameter wrapper; these unit tests are refactored to align with commonly used PyTorch test classes.
ghstack-source-id: 141335614

Test Plan: unit tests

Reviewed By: mrshenli

Differential Revision: D31779565

fbshipit-source-id: c727110d1d7570c0ec49e42cadfc9e9a5e440073
2021-10-22 16:50:52 -07:00
Michael Suo
f6c88fa99d Revert D31627107: [BE] delete frontend.cpp
Test Plan: revert-hammer

Differential Revision:
D31627107

Original commit changeset: 07d30d280c25

fbshipit-source-id: 5e82f2158f5007c67adb8f947f8cc4d995a9a3bc
2021-10-22 16:39:02 -07:00
Michael Suo
f50bf16c04 Revert D31663043: [BE] minor improvement to dist quantization
Test Plan: revert-hammer

Differential Revision:
D31663043

Original commit changeset: 2f96b7346e9c

fbshipit-source-id: d38684dfe79ca335fbbe624496ad4c86c29d1570
2021-10-22 16:37:41 -07:00
Nikita Shulga
7b0408684b Fix linter (#67122)
Summary:
Fixes regression introduced by 7e5aa0d35a

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67122

Reviewed By: seemethere

Differential Revision: D31872569

Pulled By: malfet

fbshipit-source-id: ada0137db9a46cbec573489c9c37a94f3a7576ae
2021-10-22 16:02:36 -07:00
samdow
7e5aa0d35a fixed unique arguments documentation (#66132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66132

Differential Revision: [D31397746](https://our.intern.facebook.com/intern/diff/D31397746/)

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D31734476

Pulled By: samdow

fbshipit-source-id: 8999443c7f9b24394d7543652b8350261c1f8b3a
2021-10-22 14:50:02 -07:00
Jerry Zhang
a7bbf8814c [quant][graphmode][fx] Move quant-fx2trt unittests to test_quantize_fx.py (#67064)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67064

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31849075

fbshipit-source-id: 9c5e8aad7c88070830d853faf3106491726e77ff
2021-10-22 14:36:36 -07:00
Wanchao Liang
7379d4db20 [BE] minor improvement to dist quantization (#66649)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66649

Some minor changes to dist quantization: mainly changing the namespace and adding some notes for future code dedup.
ghstack-source-id: 141336191

Test Plan: wait for ci

Reviewed By: cbalioglu

Differential Revision: D31663043

fbshipit-source-id: 2f96b7346e9c90df5ab2536767f8301eb86a9c79
2021-10-22 13:46:28 -07:00
BowenBao
1da628bdb7 [ONNX] Update slice process shape to support rank only inference (#65782) (#66149)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66149

The updated logic can infer the rank of the slice output when only the rank of the slice input is known. This enables cases where `ConstantValueMap::HasRank(input)` is `True` while `ConstantValueMap::HasShape(input)` is `False`.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31423840

Pulled By: malfet

fbshipit-source-id: 17b2b24aa63435d5212ebe6bdf66ae3c348c4e3b

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-10-22 13:46:26 -07:00
Nikita Shulga
0bc9928f31 [ONNX] Symbolic: dynamic input for OneHot, bool for Einsum (#65940) (#66147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66147

Symbolic: dynamic input for OneHot, bool for Einsum

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424094

fbshipit-source-id: 76bea22b29c93d1621c597fe7ab59deb3685087f

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-22 13:46:24 -07:00
Nikita Shulga
2c0fe338da [ONNX] Modify softplus symbolic to support beta!=1 (#65001) (#66146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66146

* Modify softplus symbolic to support beta!=1

* Remove parse args

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424096

fbshipit-source-id: 971af54a28141737ccb17510ada03b0651be2a63
2021-10-22 13:46:22 -07:00
Nikita Shulga
6f3f302d9f [ONNX] Deprecate fold_if pass (#65697) (#66145)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66145

Deprecate fold_if pass

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424097

fbshipit-source-id: 25b89679c756393a1065ca6aaa24d29db960cbd4

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-22 13:46:20 -07:00
Nikita Shulga
a0fc14c20f [ONNX] Add diagonal symbolic (#64454) (#66144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66144

* Add logic and tests

* minor edits

* Eliminate expand ops

* Fix flake and editing

* Modified errant message

* Add overrun check

* Add overrun descriptions

* Remove emptyline

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424095

fbshipit-source-id: 5b8ef6ac21c32d43c3dbc8e51e1ef30bffb19c25
2021-10-22 13:46:18 -07:00
Nikita Shulga
b18c298f24 ONNX: Delete or document skipped ORT tests (#64470) (#66143)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66143

Delete test_list_remove. There's no point in testing conversion of
this model since TorchScript doesn't support it.

Add a link to an issue tracking test_embedding_bag_dynamic_input.

[ONNX] fix docs (#65379)

Mainly fix the Sphinx build by inserting empty lines before
bulleted lists.

Also some minor improvements:
Remove superfluous descriptions of deprecated and ignored args.
The user doesn't need to know anything other than that they are
deprecated and ignored.

Fix custom_opsets description.

Make indentation of Raises section consistent with Args section.

[ONNX] publicize func for discovering unconvertible ops (#65285)

* [ONNX] Provide public function to discover all unconvertible ATen ops

This can be more productive than finding and fixing a single issue at a
time.

* [ONNX] Reorganize test_utility_funs

Move common functionality into a base class that doesn't define any
tests.

Add a new test for opset-independent tests. This lets us avoid running
the tests repeatedly for each opset.

Use simple inheritance rather than the `type()` built-in. It's more
readable.

* [ONNX] Use TestCase assertions rather than `assert`

This provides better error messages.

* [ONNX] Use double quotes consistently.

[ONNX] Fix code block formatting in doc (#65421)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424093

fbshipit-source-id: 4ced841cc546db8548dede60b54b07df9bb4e36e
2021-10-22 13:46:16 -07:00
Nikita Shulga
7a78f715a6 [ONNX] Add warning for inplace updates on tensor.shape in tracing mode (#63170) (#66142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66142

* Add warning

* Lint and clang fixes

* Remove duplicate comments

* Added pitfalls section

* Modify sections

* Minor modifications

* Add underline to avoid doc build failures

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424092

fbshipit-source-id: c83195f3c66885ad1aecde13b3029c45dd171dbd
2021-10-22 13:46:14 -07:00
Nikita Shulga
136abf5aff [ONNX] Update sum symbolic to handle dtypes (#64289) (#66141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66141

* Update aten::sum symbolic for dtype

* Remove nesting and modify operator tests

* Fix expect files

[ONNX] Fix expect files added in #64289 (#65356)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424091

fbshipit-source-id: d4af21e9f0d7e1c68bf6ef2f3e385db84b4c53f3
2021-10-22 13:46:12 -07:00
Nikita Shulga
53a163a015 [ONNX] Export nn.Module call as ONNX local function (#63589) (#66140)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66140

* Add a new argument to the export API to enable users to specify `nn.Module` classes that they wish to be exported as local functions in the ONNX model (see the sketch after this list).
* Refactor `torch/csrc/jit/serialization/export.cpp`, and remove redundant `EncoderBase` class.
* ~~Contains changes from #63268~~
* Depends on #63716 to update onnx submodule.
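
A hedged usage sketch (`export_modules_as_functions` is the argument name in released PyTorch versions and requires opset >= 15; treating it here as an assumption for this PR's exact spelling):

```
import torch

class Gelu(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x)

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.act = Gelu()

    def forward(self, x):
        return self.act(x)

# Export calls to Gelu as an ONNX local function instead of inlining them.
torch.onnx.export(
    M(), torch.randn(2, 4), "model.onnx",
    opset_version=15,
    export_modules_as_functions={Gelu},
)
```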

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424098

fbshipit-source-id: c949d0b01c206c30b4182c2dd1a5b90e32b7a0d3

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-10-22 13:44:56 -07:00
Wanchao Liang
d1986a1cf5 [BE] delete frontend.cpp (#66581)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66581

c10d/frontend.cpp was originally proposed to introduce a pure C++ API and use TorchBind to share the Python-level API with TorchScript. This is no longer needed, so delete it to reduce code redundancy.
ghstack-source-id: 141336190

Test Plan: wait for ci

Reviewed By: rohan-varma

Differential Revision: D31627107

fbshipit-source-id: 07d30d280c25502a222a74c2c65dfa4069ed8713
2021-10-22 13:33:24 -07:00
Jerry Zhang
e8742f15cf [quant][graphmode][fx] Add observation_type.py (#67063)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67063

Adding ObservationType Enum for `backend_config_dict`

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D31849078

fbshipit-source-id: e9e7225d564b51fa9454f7f087dd134152c069a0
2021-10-22 12:17:54 -07:00
Mike Iovine
f2582a59d0 [SR] Add rvalue overload for operator() (#66648)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66648

Currently, SR shallow-copies its `IValue` inputs when running inferences. We can avoid refcount bumps by `std::move`-ing the inputs into their slots. To achieve this, I've made the following changes:

1. Add an overload for `set_inputs` that takes a `std::vector<IValue>&&`.
2. Change the signatures of `StaticModule::operator()` and `StaticRuntime::operator()`.
Old:
```
operator()(const std::vector<IValue>& args, const std::unordered_map<std::string, IValue>& kwargs)
```
New:
```
template <class IValueList>
operator()(IValueList&& args, const std::unordered_map<std::string, IValue>& kwargs)
```

The implementations use perfect forwarding to invoke the correct overload of `set_inputs`.

Test Plan: Added a short new unit test to exercise the new code path. All other unit tests still pass.

Reviewed By: hlu1

Differential Revision: D31659973

fbshipit-source-id: b8c194405b54a5af1b418f8edaa1dd29a061deed
2021-10-22 10:51:47 -07:00
Aditya Pillai
40a8a50913 Add static_runtime::fused_equally_split (#2)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch-canary/pull/2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66881

Adds the `static_runtime::fused_equally_split` operator and removes the `is_fused` logic from the original operator. Modifies `FuseUnpackListV2` to map `fb::equally_split` to this new operator.

Test Plan:
```
adityapillai@5960 /data/sandcastle/boxes/fbsource/fbcode 1m 13s
❯ buck test //caffe2/benchmarks/static_runtime/fb:test_fb_operators
```
and sandcastle
strange_what_could_go_wrong

Reviewed By: mikeiovine

Differential Revision: D31742293

fbshipit-source-id: 60b35589c8817719b005d49811f575b6590d1c39
2021-10-22 10:26:49 -07:00
Mike Iovine
391eb1dbe3 [JIT] UseVariadicOp handles multiple lists (#66288)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66288

This change makes it so `UseVariadicOp` can transform ops with many Tensor list inputs.

Input pattern:
```
%output : Type = op(%list_1, %arg_1, %list_2, %list_3)
```
Output pattern:
```
%output : Type = variadic_op(%list_11, ..., %list_1N, %arg_1, %list_21, ..., %list_2M, %list_31, ..., %list_3K, N, M, K)
```
The length of each list is passed at the end of the variadic op so that the op implementation can process the inputs appropriately. This also frees us from needing to update `hasVarArgs` in static runtime each time we add a variadic op.
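
A small sketch of how the trailing lengths let the op recover the original lists (illustrative Python, not the actual C++ implementation; interleaved scalar args are omitted for brevity):

```
def split_variadic(flat_inputs, lengths):
    # The trailing integers of the variadic op give each original list's
    # length; the preceding inputs are the flattened list elements.
    lists, i = [], 0
    for n in lengths:
        lists.append(flat_inputs[i:i + n])
        i += n
    return lists

split_variadic(["l1a", "l1b", "l2a"], [2, 1])  # [['l1a', 'l1b'], ['l2a']]
```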

This diff also makes `UseVariadicOp` more robust. Before, `list_idx` was passed as an argument. Now, `VariadicUpdater` determines `list_idx` from the node's schema.

Test Plan:
Existing variadic ops do not break:
`buck test caffe2/benchmarks/static_runtime:static_runtime_cpptest`

Reviewed By: d1jang

Differential Revision: D31450811

fbshipit-source-id: 808fcc3ae8940b9e602586f38f8cf9154c9a6462
2021-10-22 10:22:33 -07:00
Eddie Yan
d9c4b3feab Do rowwisemoments computation in float for half LayerNorm (#66920)
Summary:
https://github.com/pytorch/pytorch/issues/66707

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66920
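
The idea, sketched numerically (assumed shapes and eps; the actual change lives inside the CUDA kernel):

```
import torch

x = torch.randn(8, 512).half()

# Compute row-wise moments in float32 to avoid fp16 accumulation error,
# then normalize and cast the result back to half.
xf = x.float()
mean = xf.mean(dim=-1, keepdim=True)
var = xf.var(dim=-1, unbiased=False, keepdim=True)
y = ((xf - mean) * torch.rsqrt(var + 1e-5)).half()
```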

Reviewed By: mrshenli

Differential Revision: D31850612

Pulled By: ngimel

fbshipit-source-id: a95a33567285dcf9ee28d33f503cead3268960f9
2021-10-22 09:50:42 -07:00
Elias Ellison
6e6ede2e70 [JIT] Re-enable alias sensitive peepholes (#65860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65860

Re-enable peepholes like `x + 0 == x`. These were at one point enabled, then disabled because they did not properly account for aliasing, and then re-enabled by reconstructing the alias db every time, which is slow (O(n^2)). I've added correctness conditions, and I've also made it so that we avoid using stale aliasing properties for either the input or output of nodes we optimize.
Some of the other code that we have written to avoid re-instantiating the alias db involves internally mutating it; however, this is tricky to reason about and we would probably have to add some extra invariants...
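
An illustrative case of why the aliasing conditions matter (if `y = x + 0` were naively replaced with `y = x`, the in-place mutation below would corrupt the returned value):

```
import torch

@torch.jit.script
def f(x):
    y = x + 0   # fresh tensor; NOT safe to fold to `x` here
    x.add_(1)   # x is mutated after the add
    return y    # must still hold the pre-mutation values
```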

cc navahgar (relevant to graph opts) and d1jang (alias analysis is relevant here)

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D31352382

Pulled By: eellison

fbshipit-source-id: 441a27f17dc623d6c24538d1d43cba0412c3c482
2021-10-22 09:45:57 -07:00