Commit Graph

33381 Commits

Author SHA1 Message Date
Akifumi Imanishi
aa1fd6b45a Add LazyBatchNormXd (#51548)
Summary:
This PR implements `UninitializedBuffer` and `LazyBatchNormXd` based on https://github.com/pytorch/pytorch/issues/44538. (cc emcastillo and albanD)
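
For context, a minimal usage sketch of the lazy variant added here (shapes are illustrative):

```python
import torch
import torch.nn as nn

# num_features is inferred from the first forward pass instead of being
# passed to the constructor.
bn = nn.LazyBatchNorm1d()
x = torch.randn(4, 16)   # first input: 16 channels
out = bn(x)
print(bn.num_features)   # 16 -- weight, bias and running stats now exist
```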

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51548

Reviewed By: zhangguanheng66

Differential Revision: D26276903

Pulled By: albanD

fbshipit-source-id: 0ac706974178363f8af075e59b41d5989418922f
2021-02-05 10:27:04 -08:00
Yi Wang
5a962369e2 [Gradient Compression] Check if the backend is NCCL when a DDP communication hook is registered (#51759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51759

Currently, a DDP communication hook can only be supported on the NCCL backend, so add a check to the registration methods.

However, some unit tests register a comm hook on other backends like GLOO (example: `test_ddp_comm_hook_future_passing_cpu`). Therefore, only do the check in `register_builtin_comm_hook`.
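
For reference, a minimal sketch of hook registration (a no-op hook; it uses `GradBucket.buffer()`, whose exact name has varied across versions):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def noop_hook(state, bucket):
    # `bucket` is a dist.GradBucket; return its gradients unchanged.
    # A real hook would compress and allreduce them here.
    fut = torch.futures.Future()
    fut.set_result(bucket.buffer())
    return fut

# Assumes an initialized NCCL process group:
# ddp_model = DDP(model, device_ids=[rank])
# ddp_model.register_comm_hook(state=None, hook=noop_hook)
```
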
ghstack-source-id: 121115814

Test Plan: unit tests.

Reviewed By: pritamdamania87

Differential Revision: D26268581

fbshipit-source-id: c739fa4dca6d320202dc6689d790c2761c834c30
2021-02-05 09:59:12 -08:00
Jeffrey Wan
105c3d2196 Update CODEOWNERS (#51726)
Summary:
add myself and alban to folders

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51726

Reviewed By: albanD

Differential Revision: D26254528

Pulled By: soulitzer

fbshipit-source-id: 91477dda3ff81014dbadd3a93f5f511ac3da81e0
2021-02-05 09:01:18 -08:00
Kimish Patel
a7ba051fa6 [QNNPACK, Sparsity] Add dynamic linear sparse kernel for arm64 (#50591)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50591

Adds sparse kernels for arm64 with a register blocking factor of 8x8.

Test Plan:
q8gemm-sparse-test

Imported from OSS

Reviewed By: AshkanAliabadi

Differential Revision: D25925501

fbshipit-source-id: 8d62a8eb638f172ffaadfb1480ade0db35831189
2021-02-05 08:46:01 -08:00
Kimish Patel
70830b5ac0 [QNNPACK, Sparsity] Sparse kernel with 4x8 blocking (#50590)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50590

Larger blocking across the M dim, such as the 8 used in the previous PR, likely
introduces wasted compute on the shapes being benchmarked.
Here we introduce 4x8 blocking (mr x nr). This helps because 1) less data is
packed for small values of M and 2) the compute kernel writes the same number
of bytes but more contiguously. It is not certain, but it likely helps.
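
A toy illustration of point 1, not QNNPACK code: the smaller row block packs fewer padded rows for awkward values of M.

```python
def packed_rows(m, mr):
    # Rows actually packed: M rounded up to a multiple of the row-block size.
    return -(-m // mr) * mr  # ceil division

for m in (5, 9, 12):
    print(m, packed_rows(m, 8), packed_rows(m, 4))
# M=9: mr=8 packs 16 rows (7 wasted), mr=4 packs 12 rows (3 wasted)
```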

Test Plan:
q8gemm-sparse-test
fully-connected-sparse-test

Imported from OSS

Reviewed By: AshkanAliabadi

Differential Revision: D25925499

fbshipit-source-id: 01c661ceea38bd6ee8321bb85cf1d5da5de4e984
2021-02-05 08:42:53 -08:00
albanD
e8ee35a666 Add script to compare namespace content for release cleanup (#51685)
Summary:
Usage explanation will be in the release note runbook.

This allows generating diffs like:
```
Processing torch.nn
Things that were added:
{'quantizable', 'ChannelShuffle', 'LazyConvTranspose2d', 'LazyConv2d', 'LazyConvTranspose3d', 'LazyConv1d', 'GaussianNLLLoss', 'LazyConv3d', 'PixelUnshuffle', 'UninitializedParameter', 'LazyLinear', 'LazyConvTranspose1d'}

Things that were removed:
set()
```

This can then be shared with module owners along with the commits to help them validate that the namespace changes for their submodule are as expected.
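
The core of the comparison can be sketched in a few lines (an illustration of the idea, not the actual script added by this PR):

```python
import torch.nn

# `old_names` would be dumped while running the previous release and loaded
# here; an empty set stands in for it.
old_names: set = set()
new_names = set(dir(torch.nn))

print("Things that were added:")
print(new_names - old_names)
print("\nThings that were removed:")
print(old_names - new_names)
```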

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51685

Reviewed By: zhangguanheng66

Differential Revision: D26260258

Pulled By: albanD

fbshipit-source-id: 40e40f86314e17246899d01ffa4b2631e93b52f7
2021-02-05 07:54:00 -08:00
Meghan Lele
28c5d90b67 [JIT] Allow implicit boolean conversion of containers (#51683)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51683

**Summary**
This commit enables implicit boolean conversion of lists, strings, and
dictionaries in conditional expressions. As in Python, empty lists, strings,
and dictionaries evaluate to `False` and their non-empty counterparts
evaluate to `True`. This allows users to write code like

```
@torch.jit.script
def fn(l: List[int]):
  if l:
    ...
  else:
    ...
```

This has been requested by some users and would be a good usability
improvement.
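
The same conversion applies to strings and dictionaries; a small sketch (the function is illustrative):

```python
from typing import Dict

import torch

@torch.jit.script
def describe(s: str, d: Dict[str, int]) -> str:
    out = "non-empty string" if s else "empty string"  # '' -> False
    if d:  # non-empty dict -> True
        out = out + ", dict has entries"
    return out

print(describe("", {}))           # "empty string"
print(describe("hi", {"a": 1}))   # "non-empty string, dict has entries"
```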

**Test Plan**
This commit adds unit tests to `TestList`, `TestDict` and
`test_jit_string.py` to test this new feature.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26264410

Pulled By: SplitInfinity

fbshipit-source-id: b764c18fd766cfc128ea98a02b7c6c3fa49f8632
2021-02-05 00:34:35 -08:00
Natalia Gimelshein
d3023d86ba Revert D26249330: [Gradient Compression] Add a documentation page for DDP communication hooks
Test Plan: revert-hammer

Differential Revision:
D26249330 (e62aabac43)

Original commit changeset: ab973390ddb7

fbshipit-source-id: d508daed76219e7ca588cf7fb38aeaaffc61acfd
2021-02-04 22:38:06 -08:00
Yanan Cao
1065c2d5b6 Fix clang-tidy warnings in python_sugared_value.{h,cpp} (#51703)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51703

Reviewed By: gchanan

Differential Revision: D26245798

Pulled By: gmagogsfm

fbshipit-source-id: 01620adca820968324687982cc48390ff9336d20
2021-02-04 21:29:40 -08:00
Rohan Varma
c941730b96 [JIT/Futures] support set_exception api (#50983)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50983

There is currently no way to handle/propagate errors with the python-based futures API (they are raised correctly if set with an error, but this is only possible from C++).

This diff allows the Future's `unwrap_func` to be optionally set from Python, so users can complete futures with an exception and have the error raised as expected. This is mostly to support the following use case in the next diff:

```
def unwrap(python_result):
    # Raise the exception if one was set as the result.
    if isinstance(python_result, Exception):
        raise python_result
    return python_result

ret_fut = torch.futures.Future(unwrap_func=unwrap)

rpc_fut = rpc.rpc_async(...)  # RPC future that times out
# Goal is to propagate the RPC error to ret_fut.

def callback(res):
    # Note that ret_fut.set_result(res.wait()) alone won't propagate the error.
    try:
        ret_fut.set_result(res.wait())
    except Exception as e:
        ret_fut.set_result(e)

rpc_fut.add_done_callback(callback)
```
ghstack-source-id: 121021434

Test Plan:
unittest
```
buck test mode/dev-nosan mode/no-gpu //caffe2/test:futures -- te
st_unwrap --print-passing-details
```

Reviewed By: mrshenli

Differential Revision: D25950304

fbshipit-source-id: 7ee61e98fcd783b3f515706fa141d538e6d2174d
2021-02-04 20:22:19 -08:00
Rohan Varma
8e78dd6de8 [torch.futures] Fix doc inconsistency about callback args (#50979)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50979

Noticed that the documentation is inconsistent about the arg passed to the callback. It appears to require the future, so fix this in the docs.
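
In other words, both `then` and `add_done_callback` pass the completed future itself to the callback rather than its value; a quick sketch:

```python
import torch

fut = torch.futures.Future()
# The callback receives the Future; call .wait() to get the value out.
chained = fut.then(lambda f: f.wait() * 2)
fut.set_result(21)
print(chained.wait())  # 42
```
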
ghstack-source-id: 121021431

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D25944637

fbshipit-source-id: 0bfcd4040c4a1c245314186d29a0031e634b29c3
2021-02-04 20:22:14 -08:00
Rohan Varma
21afbba79b [torch.futures] Clarify callback behavior when future is completed (#50978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50978

Noticed that the documentation is not clear that the callbacks are invoked inline if the future is already completed. We should document this behavior.
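
A short sketch of the behavior being documented:

```python
import torch

fut = torch.futures.Future()
fut.set_result("done")

# The future is already completed, so the callback runs inline on the
# calling thread, before add_done_callback returns.
fut.add_done_callback(lambda f: print("ran inline:", f.wait()))
```
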
ghstack-source-id: 121021432

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D25944636

fbshipit-source-id: f4ac133d076ba9a5690fecfa56bde6d614a40191
2021-02-04 20:22:09 -08:00
Rohan Varma
c3f2f3294e [RPC] Add option to make rref.get_type not block. (#50977)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50977

Adds a `blocking` flag that can be set to `False` to make this API return a `Future` to the type instead of blocking. This is mostly for a future change that will allow `rref.rpc_async()` to be completely non-blocking (it currently calls, and waits for, this function, which issues an RPC inline).
ghstack-source-id: 121021433

Test Plan: Modified UT

Reviewed By: mrshenli

Differential Revision: D25944582

fbshipit-source-id: e3b48a52af2d4578551a30ba6838927b489b1c03
2021-02-04 20:18:50 -08:00
albanD
716a8c2153 make forward AD API private (#51693)
Summary:
Avoid leaking private functions into the `torch.` namespace.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51693

Reviewed By: gchanan

Differential Revision: D26245046

Pulled By: albanD

fbshipit-source-id: 5481b57eb56ba96581848598d32ebf5894a7adf0
2021-02-04 19:02:29 -08:00
Yi Wang
e62aabac43 [Gradient Compression] Add a documentation page for DDP communication hooks (#51715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51715

Add a documentation page for DDP communication hooks.

Screenshot:

{F369781049}

Test Plan: View locally

Reviewed By: pritamdamania87

Differential Revision: D26249330

fbshipit-source-id: ab973390ddb785c5191f587a1b2b6de7d229e50e
2021-02-04 18:53:53 -08:00
Mike Ruberry
de7eeb7752 Removes nonzero method warning (#51618)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/44284

https://github.com/pytorch/pytorch/pull/45413 left this only partially fixed because it did not update the separate list of deprecated method signatures. This PR correctly fixes https://github.com/pytorch/pytorch/issues/44284. A test is added for the behavior, but until the WARN_ONCE flag is added it's toothless.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51618

Reviewed By: ngimel

Differential Revision: D26220181

Pulled By: mruberry

fbshipit-source-id: 397b47ac7e962d108d8fde0f3dc6468d6327d1c3
2021-02-04 17:43:43 -08:00
Heitor Schueroff
e7ff0854c6 [doc] Fix inconsistencies with torch.linalg.inv and deprecate torch.inverse (#51672)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51672

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26240535

Pulled By: heitorschueroff

fbshipit-source-id: 16dbd0a8a8c0f851faa12bf092dbedfb7cb0b292
2021-02-04 17:19:45 -08:00
Heitor Schueroff
ff4848aaa1 [doc] Fix inconsistencies with linalg.pinv docs and deprecate pinverse (#51671)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51671

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26240534

Pulled By: heitorschueroff

fbshipit-source-id: 26e2a3cad2105e6e2b7779e785666b38597450c5
2021-02-04 17:19:41 -08:00
Heitor Schueroff
e7d7256f2d [doc] Fix inconsistencies with torch.linalg.matrix_rank doc (#51660)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51660

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234100

Pulled By: heitorschueroff

fbshipit-source-id: b9c48c0e172461ed2770d52c07a147152d51d4b7
2021-02-04 17:19:37 -08:00
Heitor Schueroff
0308261ddc [doc] Fix inconsistencies with torch.linalg.eigvalsh (#51659)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51659

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234102

Pulled By: heitorschueroff

fbshipit-source-id: 6a6711c7b129cd29f2c733c635c4192caaf42d22
2021-02-04 17:19:33 -08:00
Heitor Schueroff
87504c3265 [doc] Fix inconsistencies with torch.linalg.eigh (#51658)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51658

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234101

Pulled By: heitorschueroff

fbshipit-source-id: c1b5cc74ba0b32c49bfd843e97f957971d8be364
2021-02-04 17:19:29 -08:00
Heitor Schueroff
4835f203ec [doc] Fix inconsistencies with torch.linalg.det docs (#51651)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51651

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234103

Pulled By: heitorschueroff

fbshipit-source-id: 00ec7dae942bda887f57cb76752f8b5ef25d276a
2021-02-04 17:19:25 -08:00
Heitor Schueroff
7c12afb5e2 [doc] Fix inconsistencies with torch.linalg.cond doc (#51641)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51641

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D26234104

Pulled By: heitorschueroff

fbshipit-source-id: 5c2c9a206c4051092305d910ed0e808458e5afd9
2021-02-04 17:13:42 -08:00
jiej
4d703d040b Linear autodiff revert revert (#51613)
Summary:
Patch PR https://github.com/pytorch/pytorch/issues/50856 and roll back the revert D26105797 (e488e3c443)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51613

Reviewed By: mruberry

Differential Revision: D26253999

Pulled By: ngimel

fbshipit-source-id: a20b1591de06dd277e4cd95542e3291a2f5a252c
2021-02-04 16:32:05 -08:00
Kimish Patel
6dcbf396aa [QNNPACK, Sparsity] Added prepacking base aarch32 kernels (#50589)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50589

Adds 1) an input prepacking kernel and 2) compute kernels that process the
prepacked activation.
The hunch is that input prepacking will help with 1) cache locality and 2)
avoiding a lot of address-compute instructions.
The cache locality benefit mainly comes from the fact that we are doing mr=8
and nr=4: mr being 8 likely results in cache line evictions, as the cache
associativity is likely 4. Laying out transposed activations blocked by mr=8
puts the entire transposed activation in one contiguous block.
The downside is that we now transpose all the blocks regardless of whether
they participate in compute. However, it is likely that the entire activation
matrix participates in compute for some output block.
Also adds a benchmark.
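
A rough NumPy sketch of the mr-blocked, transposed layout described above (an illustration only, not the QNNPACK implementation):

```python
import numpy as np

def pack_activation(a, mr=8):
    """Pack an (M, K) activation into contiguous transposed row blocks of mr
    rows each, so a micro-kernel streams mr values per k from one block."""
    m, k = a.shape
    m_pad = -(-m // mr) * mr            # round M up to a multiple of mr
    padded = np.zeros((m_pad, k), dtype=a.dtype)
    padded[:m] = a
    # Each block is stored transposed, shape (K, mr), one after another.
    blocks = [padded[i:i + mr].T.copy() for i in range(0, m_pad, mr)]
    return np.concatenate([b.ravel() for b in blocks])

a = np.arange(12 * 3, dtype=np.float32).reshape(12, 3)
packed = pack_activation(a)             # two contiguous 8-row blocks
```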

Test Plan:
q8gemm-sparse-test
fully-connected-test-sparse

Imported from OSS

Reviewed By: AshkanAliabadi

Differential Revision: D25925502

fbshipit-source-id: b2c36419a2c5d23b4a49f25f9ee41cee8397c3be
2021-02-04 16:20:08 -08:00
Kimish Patel
47a6703bdb [QNNPACK, Sparsity] ARMV7, aarch32, kernels for dynamic linear (#50588)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50588

This diff introduces an aarch32 asm kernel for sparse-dense GEMM.

Test Plan: Imported from OSS

Reviewed By: AshkanAliabadi

Differential Revision: D25925498

fbshipit-source-id: e9e19ce67157a4bc3cba4656f926e828442f09ad
2021-02-04 16:16:35 -08:00
XiaobingSuper
3fec1e5025 fix hardsigmoid_backward for boundary case (#51454)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51438.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51454

Reviewed By: mruberry

Differential Revision: D26243461

Pulled By: ngimel

fbshipit-source-id: 7d954dc47427f02b7cbf0344e9889db223bfb525
2021-02-04 14:37:58 -08:00
Jane Xu
8c737f732b replacing ubuntu-latest with ubuntu-18.04 (#51744)
Summary:
following https://github.com/pytorch/pytorch/pull/51725#pullrequestreview-583703598

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51744

Reviewed By: samestep

Differential Revision: D26262089

Pulled By: janeyx99

fbshipit-source-id: fa24e5c15d24750f2a5ccd5b6a5aad9a4a3ad09f
2021-02-04 14:17:06 -08:00
Taylor Robie
094d597679 raise windows tol to 30% (#51733)
Summary:
Up the Windows tolerance set by https://github.com/pytorch/pytorch/pull/35818, as CI is still showing some flakes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51733

Test Plan: CI

Reviewed By: zou3519

Differential Revision: D26258005

Pulled By: robieta

fbshipit-source-id: 864c848b7b31a05a2d07d1e683342b3202377c10
2021-02-04 14:09:10 -08:00
guyang3532
ab0cf3b6b5 Add 'repeat' argument to profiler.schedule (#51630)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51630

Reviewed By: gdankel

Differential Revision: D26246317

Pulled By: ilia-cher

fbshipit-source-id: 28b572c837184fe1b2a07dd57e99aa72cb93a9cb
2021-02-04 13:51:04 -08:00
Howard Huang
62aea33d7f Revert D26237328: Add compare_set operation and test to TCPStore
Test Plan: revert-hammer

Differential Revision:
D26237328 (7d00aec6bc)

Original commit changeset: c6837a4cc34f

fbshipit-source-id: 662f8067ead9bce0da13b35d393fb781635dd2b9
2021-02-04 13:43:05 -08:00
guyang3532
ecfb73aaca Update docs for torch.profiler.tensorboard_trace_handler (#51636)
Summary:
![image](https://user-images.githubusercontent.com/62738430/106856207-17f8c000-66f9-11eb-80c9-844f79de423e.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51636

Reviewed By: orionr

Differential Revision: D26246309

Pulled By: ilia-cher

fbshipit-source-id: 083868e9231727638238c5f5ca31e3566d5e2e7e
2021-02-04 13:32:59 -08:00
Horace He
d4d5f8569f [FX] Fix mypy error in FX for rewriter (#51740)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51740

Reviewed By: jamesr66a

Differential Revision: D26261009

Pulled By: Chillee

fbshipit-source-id: ce97316aede5509fc8ed90b4eb6b758e2bc1fa7a
2021-02-04 13:15:51 -08:00
Peter Bell
b150f150ba Add division overload with rounding_mode selection (#51706)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51706

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50280

As mentioned in gh-43874, this adds a `rounding_mode={'true', 'trunc', 'floor'}`
argument so `torch.div` can be used as a replacement for `floor_divide` during
the transitional period.

I've included dedicated kernels for truncated and floor division which
aren't strictly necessary for float, but do perform significantly better (~2x) than
doing true division followed by a separate rounding kernel.

Note: I introduce new overloads for `aten::div` instead of just adding a default
`rounding_mode` because various JIT passes rely on the exact operator schema.
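
A quick illustration of the rounding modes (omitting `rounding_mode` performs true division):

```python
import torch

a = torch.tensor([ 7., -7.])
b = torch.tensor([ 2.,  2.])

print(torch.div(a, b))                         # true division: [ 3.5, -3.5]
print(torch.div(a, b, rounding_mode='trunc'))  # toward zero:   [ 3., -3.]
print(torch.div(a, b, rounding_mode='floor'))  # toward -inf:   [ 3., -4.]
```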

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D26123271

Pulled By: mruberry

fbshipit-source-id: 51a83717602114597ec9c4d946e35a392eb01d46
2021-02-04 13:08:36 -08:00
James Reed
949ab213dd Revert "Revert D26246231: [FX] Edits after comprehensive pass over docs" (#51728)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51728

This reverts commit 6c80fd005f.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26254130

Pulled By: jamesr66a

fbshipit-source-id: f301688f85c512076fee9b83a986677ef893d2c5
2021-02-04 13:01:09 -08:00
BowenBao
8c0da1f5e9 [ONNX] Modifications in remove inplace ops passes to better handle binary inplace ops (#51318) (#51572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51572

Modifications in remove_inplace_ops_for_onnx pass and remove_inplace_ops pass to better handle binary inplace ops

* Handles the special case of binary inplace ops, where the first input node has a lower type precedence than the second input node.
* When the inplace node is converted to a regular op, this type information is lost and the resulting type follows type precedence, just like for regular ops. To avoid this loss of information, we add a cast node before the input with the higher data-type precedence, so that both input types are the same.
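
The eager-mode behavior motivating the cast can be seen directly:

```python
import torch

a = torch.ones(3, dtype=torch.float32)   # lower type precedence
b = torch.ones(3, dtype=torch.float64)   # higher type precedence

print((a + b).dtype)  # torch.float64: a regular op follows type promotion
a.add_(b)             # the inplace op keeps a's dtype instead
print(a.dtype)        # torch.float32
```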

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203117

Pulled By: SplitInfinity

fbshipit-source-id: f018b503701b9067dba053c2764c3b92ef1abc38
2021-02-04 12:44:49 -08:00
BowenBao
c7f1595b19 fix bug (#51222) (#51527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51527

Fix bug in scatter_add symbolic

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203119

Pulled By: SplitInfinity

fbshipit-source-id: e61f024e2daa7bc396fb264b8823a72ebf94ccdb
2021-02-04 12:44:44 -08:00
BowenBao
25b18bb5d7 [ONNX] Support list remove for onnx export (#51373) (#51526)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51526

* Support aten::Delete
* Refactor prepare_inplace_ops_for_onnx into one pass.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203114

Pulled By: SplitInfinity

fbshipit-source-id: ce940bca54a30c39f4b0810f62b0e7b497508f59
2021-02-04 12:44:40 -08:00
BowenBao
6d47e2cff8 [ONNX] Fix opset 11 ConstantChunk with negative dim (#51396) (#51525)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51525

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203115

Pulled By: SplitInfinity

fbshipit-source-id: d76942f7cc5812c8a1cc16891e4956cc658283d8
2021-02-04 12:44:35 -08:00
BowenBao
ba824eb2d6 [ONNX] Update unsafe_chunk() method to support new version 13 of Split operator. (#51415) (#51524)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51524

* Add unsafe_chunk() support and a test in opset 13.

* Use _unsqueeze_helper instead of the Unsqueeze operator.

* Cast the splits into long.

* Change the test to a fixed dimension.

* Update test_pytorch_onnx_onnxruntime.py

* Disable test_loop_with_list for opset 13.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203123

Pulled By: SplitInfinity

fbshipit-source-id: b273aeff8339faa0e8e9f1fcfbf877d1b703209f

Co-authored-by: Negin Raoof <neginmr@utexas.edu>
2021-02-04 12:44:31 -08:00
BowenBao
8ae6b0c5f9 [ONNX] Enable Constant Folding for ONNX Opset 13 (#51096) (#51523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51523

* Enable Constant Folding for ONNX Opset 13

* fix CI clang-diagnostic

* fix integer types

* fix comments: sort axes and support negative numbers

* update squeeze op constant folding

* fix format warning

* fix clang-format issue

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203111

Pulled By: SplitInfinity

fbshipit-source-id: c33637ab39db614207bd442c6ab464bd09339b4a

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-02-04 12:44:26 -08:00
BowenBao
1c7d966432 Update error message that displays when encountering an op unsupported for ONNX export. (#51387) (#51522)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51522

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203121

Pulled By: SplitInfinity

fbshipit-source-id: 5920995b735cecb500b12948b8ad91803e576dcb
2021-02-04 12:44:22 -08:00
BowenBao
586c2e8d62 [ONNX] Fix graph sequence output from loop node (#51305) (#51521)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51521

* Add loop & if node to the list of nodes that could produce sequence type output.
* Switch from `[]` to `at()` to avoid segfault of out of range access.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203112

Pulled By: SplitInfinity

fbshipit-source-id: e990eeed933124b195be0be159271e33fb485063
2021-02-04 12:44:17 -08:00
BowenBao
3cc46002a3 [ONNX] Fix graph position to insert clone node for inplace op removal (#50123) (#51520)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51520

Previous insertBefore approach might end-up inserting clone node in inner sub-blocks, while then the node being used later at other outside call sites.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203124

Pulled By: SplitInfinity

fbshipit-source-id: 999511e901ad1087f360bb689fcdfc3743c78aa4
2021-02-04 12:44:12 -08:00
BowenBao
0e7e4d4217 [ONNX] Add silu operator support for onnx (#51193) (#51519)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51519

Support for yolov5 compound-scaled object detection models export.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203120

Pulled By: SplitInfinity

fbshipit-source-id: c70bd730ee5d6f8bdebaf8ff764b94ffe7673808
2021-02-04 12:44:08 -08:00
BowenBao
9191b639ba [ONNX] Enable remaining failed tests in opset13 (#50806) (#51518)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51518

* enable remaining tests in opset 13

* add comments for error version test info

* fix comments: opset 12 unbind problem

* add ignore[no-redef]

* fix format

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203122

Pulled By: SplitInfinity

fbshipit-source-id: e7d95bd2ce13f79f11965be82f640379cd55ff0f

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-02-04 12:44:04 -08:00
BowenBao
3f185ac18e [ONNX] Export get/set attribute nodes (#50768) (#51517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51517

Fix get/set attributes when getting/setting a model parameter.
This PR also fixes inplace ops in If blocks.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203116

Pulled By: SplitInfinity

fbshipit-source-id: bed6ee6dd92b5b43febc8c584a6872290f8fe33f
2021-02-04 12:43:59 -08:00
BowenBao
1829268e7f [ONNX] Improve error message for parse_arg in symbolic functions (#50512) (#51516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51516

previous error message looks like this
```
RuntimeError: Unexpected node type: onnx::Gather
```
now
```
RuntimeError: Expected node type 'onnx::Constant' for argument 'groups' of node 'conv1d', got 'onnx::Gather'.
```

Repro example:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.jit.script
def conv(x, w):
    return F.conv1d(x, w, groups=x.shape[0])

class Net(nn.Module):
    def forward(self, x, w):
        return conv(x, w)

model = Net()

x = torch.randn(8, 8, 512)
w = torch.randn(8, 1, 3)
torch.onnx.export(model,
                  (x, w),
                  "file.onnx",
                  opset_version=12)
```

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203118

Pulled By: SplitInfinity

fbshipit-source-id: 607b22f4cba4baa24154f197914b6817449ab9f8
2021-02-04 12:43:54 -08:00
BowenBao
8dd9fefacb [ONNX] Fix bug in unfold symbolic (#50504) (#51515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51515

Fix bug in unfold symbolic

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203113

Pulled By: SplitInfinity

fbshipit-source-id: 3a1b0013624d918de762a88ac6de8c9cafa0f732
2021-02-04 12:43:50 -08:00
BowenBao
7255b3f6b7 [ONNX] Update constant-folding of Gather op (#50554) (#51514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51514

Update constant-folding of the Gather operator so it also covers cases where the rank of the indices input is 0.
Currently it only supports cases where the rank of indices is 1.
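
The distinction corresponds to plain indexing: rank-0 indices drop the gathered dimension, rank-1 indices keep it. A quick illustration:

```python
import torch

x = torch.arange(12).reshape(3, 4)
print(x[1].shape)    # torch.Size([4])    -- rank-0 index drops the dim
print(x[[1]].shape)  # torch.Size([1, 4]) -- rank-1 indices keep a size-1 dim
```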

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26191323

Pulled By: SplitInfinity

fbshipit-source-id: 7edcbd8835b0248fefb908aca394f5cca5eae29e
2021-02-04 12:40:30 -08:00