Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12428
Group conv in NHWC layout has been enabled on CPU since D7547497.
In D7547497, the unit test for group conv in NHWC layout on CPU was enabled in group_conv_test.py but not in conv_test.py. This diff also enables it in conv_test.py.
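As a rough illustration of what the test exercises, here is a minimal sketch of running a grouped convolution in NHWC order on CPU with Caffe2; the shapes and the (M, kH, kW, C / group) filter layout are assumptions for illustration, not code from the actual test.
```python
import numpy as np
from caffe2.python import core, workspace

N, H, W, C, M, group, kernel = 2, 8, 8, 4, 6, 2, 3
X = np.random.rand(N, H, W, C).astype(np.float32)               # NHWC input
# Assumed NHWC filter layout: (M, kH, kW, C / group).
filt = np.random.rand(M, kernel, kernel, C // group).astype(np.float32)
bias = np.zeros(M, dtype=np.float32)

workspace.FeedBlob("X", X)
workspace.FeedBlob("W", filt)
workspace.FeedBlob("b", bias)
op = core.CreateOperator(
    "Conv", ["X", "W", "b"], ["Y"],
    kernel=kernel, group=group, order="NHWC",
)
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob("Y").shape)   # (N, H - 2, W - 2, M) with no padding
```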
Reviewed By: BIT-silence
Differential Revision: D10233252
fbshipit-source-id: aeeaf3eedc60e1cf6321b5a1dbe6a561e3aacbde
Summary:
Essentially makes cuDNN treat those kernels as Nx1 ones.
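The summary is terse, so here is a hedged PyTorch illustration of the underlying equivalence (not the code from this PR): a convolution whose kernel has a trivial trailing dimension can be computed as an Nx1 2-D convolution by adding a dummy spatial dimension.
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 16)   # (N, C, L)
w = torch.randn(5, 3, 4)    # (out_channels, in_channels, kernel)

y1d = F.conv1d(x, w)
# Same result via a 2-D convolution with an Nx1 kernel.
y2d = F.conv2d(x.unsqueeze(-1), w.unsqueeze(-1)).squeeze(-1)
print(torch.allclose(y1d, y2d, atol=1e-6))   # True
```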
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12902
Reviewed By: BIT-silence
Differential Revision: D10852862
Pulled By: soumith
fbshipit-source-id: 7416cf6d131177340d21cbf1d42c1daa6c7cad8c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13437
Revert: transform the NCHW Convolution operators to NHWC, along with the tensors around these operators.
Reviewed By: bwasti
Differential Revision: D12871789
fbshipit-source-id: 6509a29fa1654424d22904df0d3e60f8cd9c0ec7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13436
Revert: add a utility function to convert a list of caffe2_pb2.Argument to a dictionary.
Reviewed By: bwasti
Differential Revision: D12871811
fbshipit-source-id: 486ad09f3f37723c92a946c486ce3e24a649b4e6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13429
Made the SSA transformation idempotent. This ensures that if a caffe2 graph is already in SSA form, the names of the ONNX model's inputs/outputs match those of the caffe2 graph.
Avoid evaluating the model by running it when the shapes of all the blobs are already present in the value_info map. This speeds up the conversion and decreases its memory usage for medium to large nets.
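A hedged sketch of the shape-availability check described above; the function name and the exact condition are illustrative, not the actual frontend code.
```python
def must_run_net_for_shapes(predict_net, value_info):
    """Return True only if some external input has no shape entry in
    value_info, i.e. the net would have to be run to discover shapes."""
    return any(name not in value_info for name in predict_net.external_input)
```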
Reviewed By: abadams
Differential Revision: D12873354
fbshipit-source-id: d695b28e610562afa9a41c2d4da05be212ccb488
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13332
Add a utility function to convert a list of caffe2_pb2.Argument to a dictionary.
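A minimal sketch of what such a helper could look like, assuming the common scalar and repeated fields of caffe2_pb2.Argument; the real utility in the codebase may differ.
```python
from caffe2.proto import caffe2_pb2

def args_to_dict(args):
    """Convert a list of caffe2_pb2.Argument into a {name: value} dict."""
    d = {}
    for arg in args:
        if arg.HasField("f"):
            d[arg.name] = arg.f
        elif arg.HasField("i"):
            d[arg.name] = arg.i
        elif arg.HasField("s"):
            d[arg.name] = arg.s
        elif len(arg.floats):
            d[arg.name] = list(arg.floats)
        elif len(arg.ints):
            d[arg.name] = list(arg.ints)
        elif len(arg.strings):
            d[arg.name] = list(arg.strings)
        else:
            d[arg.name] = None
    return d

op = caffe2_pb2.OperatorDef()
op.type = "Conv"
arg = op.arg.add()
arg.name = "kernel"
arg.i = 3
print(args_to_dict(op.arg))   # {'kernel': 3}
```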
Reviewed By: bwasti
Differential Revision: D10861211
fbshipit-source-id: da2fcc3e3b4dbf8decbe14a8e2d5621b3fcc377f
Summary: Made the clangr rule more robust and it discovered more callsites.
Reviewed By: smessmer
Differential Revision: D12825017
fbshipit-source-id: 3be1eeb7ea697b36ef89e78ba64c0ee1259439c4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13206
Add a has-device-option check for whether a node has a device option set.
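The change itself is in the C++ graph code; as a hedged Python-side analogue, "has a device option" amounts to a HasField check on the operator's device_option message.
```python
from caffe2.proto import caffe2_pb2

def has_device_option(op):
    return op.HasField("device_option")

op = caffe2_pb2.OperatorDef()
print(has_device_option(op))        # False
op.device_option.device_type = 1    # e.g. CUDA
print(has_device_option(op))        # True
```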
Reviewed By: bwasti
Differential Revision: D12815365
fbshipit-source-id: 58477df93777f470cfb30cd75f02a659a7017b7c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13132
Expose more of the C++ API to Python.
Reviewed By: duc0
Differential Revision: D10855086
fbshipit-source-id: 98cc89bc72ef91ed1c59c1a19688e047765cf90b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13203
Minor changes in the test workflow to run the model on CPUs
Reviewed By: stephenyan1231
Differential Revision: D9925797
fbshipit-source-id: b7b1fb2658ab68b1ffc2b1f7b314958ea4732b32
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13004
Implement the BucketWeighted model layer, which learns a weight for each possible score in an IdScoreList. Here, we assume that the scores in the IdScoreList have already been converted into the appropriate 'buckets'; if this is not done, each score essentially represents its own bucket.
We assume that the scores/buckets are integers, and if max_score is not set, we assume that the cardinality of the scores is less than or equal to the cardinality of the ids.
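A hedged numpy sketch of the bucket-weighting idea (not the layer implementation itself): each bucketized score indexes into a learned per-bucket weight vector.
```python
import numpy as np

max_score = 5                               # number of buckets
w = np.random.randn(max_score + 1)          # one learned weight per bucket

ids    = np.array([3, 7, 42])               # ids from the IdScoreList
scores = np.array([0, 2, 5])                # already-bucketized integer scores
per_id_weight = w[scores]                   # weight looked up by bucket
print(per_id_weight)
```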
Reviewed By: chonglinsun
Differential Revision: D10413186
fbshipit-source-id: 743e643a1b36adf124502a8b6b29976158cdb130
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12843
This adds a CUDA implementation for the UpsampleBilinearOp and UpsampleBilinearGradientOp.
The CUDA code is based on the corresponding ResizeNearest operators, but with the bilinear interpolation logic taken from the CPU implementation.
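For reference, a hedged numpy sketch of bilinear upsampling in NCHW, mirroring the kind of interpolation the CUDA kernel performs; the sampling convention used here is illustrative and may not match the operator exactly.
```python
import numpy as np

def upsample_bilinear(x, scale):
    n, c, h, w = x.shape
    out_h, out_w = int(h * scale), int(w * scale)
    rh, rw = h / out_h, w / out_w
    y = np.empty((n, c, out_h, out_w), dtype=x.dtype)
    for oy in range(out_h):
        iy = oy * rh
        y0, dy = int(iy), iy - int(iy)
        y1 = min(y0 + 1, h - 1)
        for ox in range(out_w):
            ix = ox * rw
            x0, dx = int(ix), ix - int(ix)
            x1 = min(x0 + 1, w - 1)
            top = (1 - dx) * x[:, :, y0, x0] + dx * x[:, :, y0, x1]
            bot = (1 - dx) * x[:, :, y1, x0] + dx * x[:, :, y1, x1]
            y[:, :, oy, ox] = (1 - dy) * top + dy * bot
    return y

img = np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4)
print(upsample_bilinear(img, 2).shape)   # (1, 1, 8, 8)
```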
Reviewed By: houseroad
Differential Revision: D10453776
fbshipit-source-id: b29ac330b72465974ddb27c0587bca590773fdec
Summary:
This is mostly so we can reuse all the cuDNN test cases in our python operator_tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12278
Differential Revision: D10842592
Pulled By: bddppq
fbshipit-source-id: 4b3ed91fca64ff02060837b3270393bc2f9a9898
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13007
No reason to use the hook if it's set; this helps fbcode traces.
This slightly pessimizes the stack trace for ATen functions,
because we are no longer skipping all of the frames we should.
This is probably OK.
Reviewed By: Yangqing
Differential Revision: D10518499
fbshipit-source-id: be54e490df3c3fde7ff894b5b1473442ffc7ded3
Summary:
TSIA - we want to deprecate numba in fbcode when moving to new compiler tiers.
Converted the old test to a non-numba regular python op test.
Reviewed By: xw285cornell
Differential Revision: D10519910
fbshipit-source-id: 0e9188a6d0fc159100f0db704b106fbfde3c5833
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12848
Updated all non-test uses of protobuf::MessageLite::SerializeAsString to call
SerializeAsString_EnforceCheck so that the return value is checked and an
exception can be thrown on failure.
Most of the affected code was called from classes derived from BlobSerializeBase.
Didn't touch most tests and ENFORCE calls because they usually do checks
anyway.
Original commit changeset: c0760e73ecc7
Reviewed By: dzhulgakov
Differential Revision: D10453456
fbshipit-source-id: d2f2b7b4578e721924354149f08f627c7e3bf070
Summary:
- The exhaustive_search attribute will be blacklisted so it is discarded from the converted ONNX model. At present it throws an error while verifying the ONNX model.
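A hedged sketch of the attribute-blacklisting idea: when converting a caffe2 OperatorDef to an ONNX node, arguments whose names are blacklisted (such as the cuDNN-only exhaustive_search hint) are simply dropped. The helper name and set are illustrative, not the exporter's actual code.
```python
from caffe2.proto import caffe2_pb2

BLACKLISTED_ATTRS = {"exhaustive_search"}

def exportable_args(op):
    return [arg for arg in op.arg if arg.name not in BLACKLISTED_ATTRS]

op = caffe2_pb2.OperatorDef()
op.type = "Conv"
for name, value in [("kernel", 3), ("exhaustive_search", 1)]:
    arg = op.arg.add()
    arg.name = name
    arg.i = value
print([a.name for a in exportable_args(op)])   # ['kernel']
```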
Signed-off-by: Parth Raichura <parth.raichura@softnautics.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12805
Differential Revision: D10502374
Pulled By: ezyang
fbshipit-source-id: 0926dfa3237a8a431184e7f7250146e5b0cbfb85
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12900
The workspace will sometimes be populated with input tensors for shape inference, but net.external_input() is not a reliable way to tell weights from inputs in the workspace. We have seen use cases where net.external_input() is empty. In this case, we need to give the user an option to provide an input hint.
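A hedged sketch of how an input hint lets us separate weights from real inputs among the workspace blobs; the names here are illustrative only.
```python
def split_inputs_and_weights(workspace_blobs, input_hint):
    """With an explicit hint of which blobs are network inputs, everything
    else in the workspace can be treated as a weight."""
    inputs = [b for b in workspace_blobs if b in input_hint]
    weights = [b for b in workspace_blobs if b not in input_hint]
    return inputs, weights

print(split_inputs_and_weights(["data", "conv_w", "conv_b"], {"data"}))
# (['data'], ['conv_w', 'conv_b'])
```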
Reviewed By: bddppq
Differential Revision: D10476822
fbshipit-source-id: 1a3fa2df69b959d5b952a7824eba9e6c713f4f07
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12897
The UnsafeCoalesce op was used back in the memonger days, when we tried to coalesce operators
into more efficient computation kernels. It creates a somewhat unsafe
underlying memory storage pattern.
With the new tensor unification I am not sure it is still safe for us to do
so, so I propose we delete it for the sake of safety.
Reviewed By: bddppq, ilia-cher
Differential Revision: D10475980
fbshipit-source-id: b1a838c9f47d681c309ee8e2f961b432236e157e
Summary:
This test flushes out the issue that IDEEP cannot handle a tensor with dims like (0, 2), which is a valid tensor shape.
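For clarity, a tensor with a zero dimension is perfectly well-formed; it simply contains no elements:
```python
import numpy as np

x = np.ones((0, 2), dtype=np.float32)
print(x.shape, x.size)   # (0, 2) 0
```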
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8459
Differential Revision: D10419328
Pulled By: yinghai
fbshipit-source-id: c5efcd152364a544180a8305c47a2a2d126ab070
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12736
This updates UpsampleBilinearOp and UpsampleBilinearGradientOp to support scales, bringing them in line with ResizeNearestOp (https://github.com/pytorch/pytorch/pull/12720).
Reviewed By: houseroad
Differential Revision: D10416228
fbshipit-source-id: f339b7e06979c9c566afb4cee64a2d939b352957
Summary: Added 2 years ago in D3665603, never used, kill it.
Reviewed By: ezyang
Differential Revision: D10421336
fbshipit-source-id: 1b027a9ef2b71d0dd2c572cd4338bc8e046320d8