Commit Graph

40770 Commits

Mikhail Zolotukhin
03c5598ba2 Add python bindings for JIT trace.
Is it a hack?
No, we should provide python bindings for that.

[ghstack-poisoned]
2021-10-09 14:05:57 -07:00
Mikhail Zolotukhin
b2f0fc9656 Cherry-pick JIT trace feature from #59949
Is it a hack?
No, the PR is legit and we should land it. We could also use
annotate_input_shape+shape_prop+dtype_prop when all that is ready. Both
workflows are legit.

[ghstack-poisoned]
2021-10-09 14:05:54 -07:00
CodemodService FBSourceClangFormatLinterBot
b96c7aea73 [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D31527108

fbshipit-source-id: 40360ebf92e67fd95613cedea9988fbe52519de6
2021-10-09 06:03:49 -07:00
Richard Barnes
109aa135e6 Remove apparently unnecessary std::remove_cv_t (#66254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66254

`std::decay_t` already implies dropping the const

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D31465856

fbshipit-source-id: 851cdb9194354fe9a89b3a37a4463a43dbbcd77a
2021-10-09 00:38:44 -07:00
Richard Barnes
4cb4d11e0b Disable "-Wignored-qualifiers" for vec256_bfloat16.h (#66279)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66279

This error appears when compiling with `-Wextra` and cannot be resolved by fixing the code, since the return type of the intrinsic being passed to `map` is fixed.

Fixes:
```
caffe2/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:204:28: error: 'const' type qualifier on return type has no effect [-Werror,-Wignored-qualifiers]
  Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
                           ^~~~~~
caffe2/aten/src/ATen/cpu/vec/vec256/vec256_bfloat16.h:204:28: error: 'const' type qualifier on return type has no effect [-Werror,-Wignored-qualifiers]
  Vectorized<BFloat16> map(const __m256 (*const vop)(__m256)) const {
                           ^~~~~~
```

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D31480888

fbshipit-source-id: 919c0d48c8ce13ce1106a9df124a077945e36707
2021-10-08 21:47:41 -07:00
Chen Lai
3fe5895a00 Back out "Revert D30599136: [Pytorch Edge][tracing-based] build tracer in OSS" (#66267)
Summary:
Previously https://github.com/pytorch/pytorch/pull/64087 broke the test `binary_macos_wheel_3_7_cpu_build`, because the wheel build is not happy with `model_tracer`. Considering it's a prototype and there is no need to ship `model_tracer` via wheel at the moment, the tracer is now built behind the `TRACING_BASED` option. When tracing-based is mature enough, we can eventually ship the tracer binary via wheel.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66267

Original commit changeset: 8ac3d75a52d0
ghstack-source-id: 140122106

Test Plan:
binary_macos_wheel_3_7_cpu_build passes

{F668643831}

Reviewed By: dhruvbird

Differential Revision: D31478593

fbshipit-source-id: 726cab1b31c4596f6268b7824eecb20e2e59d161
2021-10-08 20:12:12 -07:00
Scott Wolchok
1763c25414 [PyTorch][jit] Fix excess refcounting in TupleType::compare (#66286)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66286

No need to take refcount bumps on each comparator call.

Test Plan: CI, review

Reviewed By: hlu1, JasonHanwen

Differential Revision: D31487058

fbshipit-source-id: 98d2447ac27a12695cb0ebe1e279a6b50744ff4f
2021-10-08 20:08:07 -07:00
Scott Wolchok
fb5a80ffd8 [jit] Don't force refcount bumps from getTypePtr (#66282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66282

Now that a bunch of the `FooType::get()` functions return a const reference, we can forward that behavior through `getTypePtr()` using return type deduction.

Test Plan: Inspect assembly for List_test.cpp before/after the rest of the change; reference counting is no longer in the happy path.

Reviewed By: hlu1, JasonHanwen

Differential Revision: D31486117

fbshipit-source-id: 863b677bb6685452a5b325d327bdc2a0a09627bf
2021-10-08 20:06:43 -07:00
Eshika Shah
85b562dd2b Fix type checking errors in fx/utils.py (#66311)
Summary:
- [x] Fix the Pyre type checking errors in `torch/quantization/fx/utils.py`
```
torch/quantization/fx/utils.py:490:4 Incompatible variable type [9]: target_module_type is declared to have type `Type[nn.modules.module.Module]` but is used as type `None`.
```
Fixes the issue: [MLH-Fellowship/pyre-check/issues/75](https://github.com/MLH-Fellowship/pyre-check/issues/75)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66311

Reviewed By: pradeep90

Differential Revision: D31506399

Pulled By: 0xedward

fbshipit-source-id: 3d866fba6005452378d4a2613b8689fa2d7a8b67
2021-10-08 19:14:22 -07:00
Shiyan Deng
e5f6f356da [hpc infer] fix bench perf number
Reviewed By: yinghai, jianyuh

Differential Revision: D31505288

fbshipit-source-id: e4951a7c5813e0ee38903dec4cef61531f1b4059
2021-10-08 19:11:04 -07:00
Jane Xu
904fbadaff Fix merge conflict in bc tests (#66356)
Summary:
The BC test is currently broken on trunk.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66356

Reviewed By: malfet

Differential Revision: D31523340

Pulled By: janeyx99

fbshipit-source-id: a8d1ff697f017c710f70a76b5bb6a2f89d7637c7
2021-10-08 18:45:15 -07:00
Scott Wolchok
5a67ffe0ad [PyTorch][Static Runtime] Combine ProcessedNode::{native_,}fn_ (#65414)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65414

Saves 24 bytes (`sizeof(std::function) - 8`) per ProcessedNode.
ghstack-source-id: 139999909

Test Plan: CI

Reviewed By: hlu1

Differential Revision: D31085561

fbshipit-source-id: 70734b8319e805736ba41aedaaf7fa3d463400c9
2021-10-08 18:11:59 -07:00
Vasiliy Kuznetsov
566922bbcd clean up mypy nit in torch/jit/_recursive.py (#66253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66253

This was initially broken in #65829 and fixed again in #66003; this PR cleans
it up by removing the mypy ignore line.

Test Plan:
```
mypy torch/jit/_recursive.py --no-incremental
```

Imported from OSS

Reviewed By: supriyar

Differential Revision: D31475100

fbshipit-source-id: 46ab2ede72c08b926f4f9a6b03b1a1375b884c8a
2021-10-08 18:07:33 -07:00
Richard Barnes
4a302a3074 Wextra fix for CUDAApplyUtils.cuh (#66323)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66323

Fixes
```
/data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/caffe2/aten/src/ATen/cuda/CUDAApplyUtils.cuh:310:48: error: comparison of integers of different signs: 'unsigned long' and 'int' [-Werror,-Wsign-compare]
  const IndexType bOffset = sizeof...(Offsets) < n ?
                            ~~~~~~~~~~~~~~~~~~ ^ ~
/data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/caffe2/aten/src/ATen/cuda/CUDAApplyUtils.cuh:306:48: error: comparison of integers of different signs: 'unsigned long' and 'int' [-Werror,-Wsign-compare]
  const IndexType aOffset = sizeof...(Offsets) < n ?
                            ~~~~~~~~~~~~~~~~~~ ^ ~
```

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D31505428

fbshipit-source-id: 326fa8f41f2b200981eddc5cab035b18536cd24e
2021-10-08 18:02:09 -07:00
Jane Xu
0a48f56318 Revert D31299350: Back out "Revert D31005792: [NCCL] Init dummy NCCL comms in constructor"
Test Plan: revert-hammer

Differential Revision:
D31299350 (f1f3bd8c36)

Original commit changeset: 9ad5c8fa17f7

fbshipit-source-id: d63d889922f507a4a0e2e042e451b95b9591c317
2021-10-08 17:55:28 -07:00
Jane Xu
c62ed96496 Revert D30710710: [Pytorch Edge] Support profiling kineto events from external source
Test Plan: revert-hammer

Differential Revision:
D30710710 (c1343ff706)

Original commit changeset: 51399f9b0b64

fbshipit-source-id: ab6bb8fe4e83ed1052e621e427259192a4f0f540
2021-10-08 17:46:18 -07:00
Peter Bell
c957d9fdf6 Replace _baddbmm_mkl_ with cpublas::gemm_batched (#66165)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66165

Test Plan: Imported from OSS

Reviewed By: dagitses

Differential Revision: D31493952

Pulled By: ngimel

fbshipit-source-id: 87cf79036c2d0f4955edbeeeb78f578b0fd223ab
2021-10-08 17:12:14 -07:00
Richard Barnes
51835bec07 Wextra fix 1 for caffe2 (#66272)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66272

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D31475543

fbshipit-source-id: f6e02d299d0b792ddb37534ad85db82af65bb42a
2021-10-08 16:36:13 -07:00
Zafar Takhirov
a28b038af4 [ao_migration] torch/nn/intrinsic: torch.quantization -> torch.ao.quantization (#65903)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65903

This changes the imports in the `caffe2/torch/nn/intrinsic` directory to use the new import locations.

```
codemod -d torch/nn/intrinsic --extensions py 'torch.quantization' 'torch.ao.quantization'
```

Test Plan: `python test/run_test.py`

Reviewed By: albanD

Differential Revision: D31301195

fbshipit-source-id: a5a9d84cb1ac33df6c90ee03cda3e2f1c5d5ff51
2021-10-08 16:21:23 -07:00
Zafar Takhirov
2daae532bd [ao_migration] torch/nn/qat: torch.quantization -> torch.ao.quantization (#65902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65902

This changes the imports in the `caffe2/torch/nn/qat` directory to use the new import locations.

```
codemod -d torch/nn/qat --extensions py 'torch.quantization' 'torch.ao.quantization'
```

Test Plan: `python test/run_test.py`

Reviewed By: jerryzh168

Differential Revision: D31301196

fbshipit-source-id: ff237790d74cd3b3b5be642a997810f4f439a1d8
2021-10-08 16:21:21 -07:00
Zafar Takhirov
1a6482ee2a [ao_migration] torch/nn/quantizable: torch.quantization -> torch.ao.quantization (#65901)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65901

This changes the imports in the `caffe2/torch/nn/quantizable` directory to use the new import locations.

```
codemod -d torch/nn/quantizable --extensions py 'torch.quantization' 'torch.ao.quantization'
```

Test Plan: `python test/run_test.py`

Reviewed By: jerryzh168

Differential Revision: D31301194

fbshipit-source-id: 8ce8a3015ea61da62d7658846d1ca64fbdabaf7a
2021-10-08 16:21:19 -07:00
Zafar Takhirov
b23709df03 [ao_migration] torch/nn/quantized: torch.quantization -> torch.ao.quantization (#65900)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65900

This changes the imports in the `caffe2/torch/nn/quantized` directory to use the new import locations.

```
codemod -d torch/nn/quantized --extensions py 'torch.quantization' 'torch.ao.quantization'
```

Test Plan: `python test/run_test.py`

Reviewed By: jerryzh168

Differential Revision: D31301193

fbshipit-source-id: 58efb1ad51a8b441e2a3bd5b91af11eab6b9331f
2021-10-08 16:19:53 -07:00
Rohan Varma
f1f3bd8c36 Back out "Revert D31005792: [NCCL] Init dummy NCCL comms in constructor" (#65883)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65883

Original commit changeset: d8e962b8aab6
ghstack-source-id: 139836954

Test Plan: ci

Reviewed By: zhaojuanmao

Differential Revision: D31299350

fbshipit-source-id: 9ad5c8fa17f7038ba579cb1eda6d9271ac07a130
2021-10-08 16:04:20 -07:00
Kimish Patel
c1343ff706 [Pytorch Edge] Support profiling kineto events from external source (#64397)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64397

This diff exposes a way to add events to the Kineto profiler from an external
source, such as a backend that executes a subgraph and wants to record that
execution in the profiler.
This diff also adds "backend" metadata to identify the backend an event
would have executed on.

Test Plan:
test_lite_interpreter

Imported from OSS

Reviewed By: raziel

Differential Revision: D30710710

fbshipit-source-id: 51399f9b0b647bc2d0076074ad4ea9286d0ef3e2
2021-10-08 15:59:42 -07:00
Richard Barnes
8a02d3e5d0 Wextra fix for Tensorshape.cpp (#66320)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66320

Fixes
```
stderr: caffe2/aten/src/ATen/native/TensorShape.cpp:619:36: error: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'long' [-Werror,-Wsign-compare]
    for (size_t offset = 0; offset < numel; offset++) {
                            ~~~~~~ ^ ~~~~~
stderr: caffe2/aten/src/ATen/native/TensorShape.cpp:619:36: error: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'long' [-Werror,-Wsign-compare]
    for (size_t offset = 0; offset < numel; offset++) {
                            ~~~~~~ ^ ~~~~~
```

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D31505374

fbshipit-source-id: 0fc393dacd72a8b29c0d82561f730cc047b38f0c
2021-10-08 15:03:47 -07:00
Peter Bell
731cf494f2 Remove cuda/Loops.cuh dependency on native_functions.yaml (#64168)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64168

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D30728582

Pulled By: dagitses

fbshipit-source-id: 99dcbb9bb790dd0440d498593ac43e2c18e54a0c
2021-10-08 12:58:52 -07:00
Raghavan Raman
92ce188510 Revert D31445799: [nnc] Use given kernel function name while emitting code
Test Plan: revert-hammer

Differential Revision:
D31445799 (c30dc52739)

Original commit changeset: 8d1642098313

fbshipit-source-id: 6b9d8c816437e9fcba8eb19cc683bc0a46a04cf5
2021-10-08 12:39:01 -07:00
Raghavan Raman
2e6fa0261f Revert D31445797: [nnc] Added a cache to use singleton instances of PytorchLLVMJIT for every triple,cpu,attrs combination
Test Plan: revert-hammer

Differential Revision:
D31445797 (7e5ef5e517)

Original commit changeset: 4e1450100928

fbshipit-source-id: fc13b34dbb66c7a22816eb46cf6d98ae9f332d39
2021-10-08 12:38:59 -07:00
Raghavan Raman
097fdcdf0c Revert D31445798: [Static Runtime] Cleanup LLVMCodeGen memory after code gen completes
Test Plan: revert-hammer

Differential Revision:
D31445798 (40dd2711b6)

Original commit changeset: c860d36456b2

fbshipit-source-id: 64d900cad87113e6b871aedd6669e771a7ede5cc
2021-10-08 12:37:48 -07:00
Peter Bell
0be36d798b Remove Tensor.h include from TensorIterator.h (#64167)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64167

Test Plan: Imported from OSS

Reviewed By: saketh-are

Differential Revision: D30728579

Pulled By: dagitses

fbshipit-source-id: 3888da00c9c8030013c8f4b39d300fe671defb05
2021-10-08 12:28:37 -07:00
Peter Bell
bc1dec9b81 Migrate THCStorage_resizeBytes to ATen (#65944)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65944

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D31386276

Pulled By: ngimel

fbshipit-source-id: a2b28bc09d11a856fdd3796d3df6f96613f13437
2021-10-08 11:50:52 -07:00
John Clow
3bad54069b Concatting multiple linear layers with same input Tensor (different weight/bias) (#63198)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63198

Linear layers that consume the same input tensor can be concatenated together
as long as their weights and biases are compatible.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D31240642

fbshipit-source-id: 1e78daa6b89822412ba2513d326ee0e072ceff1e
2021-10-08 10:55:46 -07:00
Scott Wolchok
94845fc44e [jit] Implement one-argument AliasDb::mayContainAlias more efficiently (#65177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65177

There is no need to heap-allocate any vectors in this case.
ghstack-source-id: 140052520

Test Plan:
CI

Startup for static runtime on ctr_mobile_feed local net decreased from 7.8s to about 7.0s

Reviewed By: malfet

Differential Revision: D30984194

fbshipit-source-id: 85091e55445f653ec728b27da4b459a2f1873013
2021-10-08 10:29:25 -07:00
Scott Wolchok
c80693f7e6 [jit] Add cache for MemoryDAG::collectAllContainedMemoryLocations (#65122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65122

Failure to cache this seems to contribute to quadratic startup time for the static runtime.

Disclaimer: I am entirely un-versed in the performance considerations for the JIT and have no idea what the other impacts of this change may be. Let the reviewer beware.
ghstack-source-id: 140052522

Reviewed By: suo

Differential Revision: D30983268

fbshipit-source-id: 4329aee6b5781f5c2e2d2334c396fab8528d4b7b
2021-10-08 10:29:23 -07:00
Scott Wolchok
3ef69a4598 [static runtime] Pre-allocate hash tables (#65343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65343

No reason not to save a bit on re-hashing.
ghstack-source-id: 140052518

Test Plan:
CI

Static runtime startup seems to go from 5.9-6.0s to 5.8-6.0s; perf shows less time spent rehashing

Reviewed By: mikeiovine

Differential Revision: D31027362

fbshipit-source-id: 39dd53ecd462693b518535856ddd92df78a4977b
2021-10-08 10:28:13 -07:00
Peter Bell
0020a151c6 slow_conv3d grad_weight: call gemm directly (#65759)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65759

Test Plan: Imported from OSS

Reviewed By: dagitses

Differential Revision: D31257873

Pulled By: ngimel

fbshipit-source-id: 1612c0be10b2aa269c807c7b9f5470172ed68dc1
2021-10-08 09:55:08 -07:00
Yanli Zhao
dfb64b3287 log API usage for fsdp API in PyTorch (#64964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64964

log API usage for fsdp API in PyTorch

Test Plan: unit test

Reviewed By: rohan-varma

Differential Revision: D30915734

fbshipit-source-id: 5e3b335327f4a3ff59b025e8e17a0fa0b7f6597d
2021-10-08 09:32:59 -07:00
Luca Wehrstedt
201174cb91 Revert D31389480: [pytorch][PR] Allow external CUDA streams to be set as current
Test Plan: revert-hammer

Differential Revision:
D31389480 (61f0bb70c1)

Original commit changeset: 2b2f40e5452c

fbshipit-source-id: c6631e51abcf3819732f981f646cb77b91569c7d
2021-10-08 09:20:24 -07:00
Rohan Varma
b72a1782d8 [PG Wrapper][BE] Add collective information when monitored barrier error is (#66167)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66167

Sometimes, due to desync, we see the PG wrapper monitored barrier fail. In
this case it would be useful to print info about the collective that was
trying to run along with the actual error.
ghstack-source-id: 140037653

Test Plan: CI

Reviewed By: cbalioglu

Differential Revision: D31353021

fbshipit-source-id: e2a515326c9314c98119978d5566eb5431cca96c
2021-10-08 09:14:24 -07:00
Rohan Varma
b5b1d49a66 [PG Wrapper][BE] Make some methods private (#66166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66166

These methods should be private.
ghstack-source-id: 139782587

Test Plan: CI

Reviewed By: cbalioglu

Differential Revision: D31353020

fbshipit-source-id: 583fb315cc2cacc37df3d29cd5793b42558930b3
2021-10-08 09:13:02 -07:00
Peter Bell
0cad2c0615 Move intraop_launch_future from Parallel.h (#64166)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64166

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D30728585

Pulled By: dagitses

fbshipit-source-id: 75a41418ae9218bec9bac27597051295222b6eee
2021-10-08 09:07:35 -07:00
Scott Wolchok
2d885ab73d [jit] Reduce refcounting of Types (#65345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65345

FooType::get() can return a const reference. Inconveniently, converting shared_ptr<FooType> to shared_ptr<Type> requires a copy & refcount bump, so to properly take advantage of this in unshapedType() we need to take a const Type& in isSubtypeOf(), which is good practice anyway -- don't require a shared_ptr if you don't need to take ownership.
ghstack-source-id: 140044165

Test Plan:
CI

perf says c10::unshapedType time decreased from 2.8% to 2.2% during static runtime startup, though I expect this to be generally beneficial.

Reviewed By: hlu1

Differential Revision: D31027361

fbshipit-source-id: 676feb81db9f74ad7b8651d8774f4ecb4cfa6ab8
2021-10-08 09:03:04 -07:00
Scott Wolchok
1ae468a484 [jit] Refcounting spot fixes (#65346)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65346

Tidying up the top sources of reference count decrements seen during static runtime startup.
ghstack-source-id: 140027349

Test Plan:
CI

perf now shows under 2% of time spent in ~__shared_count instead of about 5%.

Reviewed By: suo

Differential Revision: D31057277

fbshipit-source-id: 9a16daf2e655fda80d4ec21290b30f02ba63d8da
2021-10-08 08:39:20 -07:00
Kevin Tse
8ebe1a924d [DataPipe] moving mux IterDataPipe test to the right location (#66277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66277

Previously, it was grouped together with tests related to `MapDataPipe`, but it should be with `IterDataPipe`.

cc VitalyFedyunin ejguan NivekT

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D31485823

Pulled By: NivekT

fbshipit-source-id: d13d8c28cbfc305da0e3033d4109a0f971281a02
2021-10-08 08:32:29 -07:00
Kevin Tse
ed17851642 [DataPipe] adding test for IterableWrapperIterDataPipe (#66276)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66276

cc VitalyFedyunin ejguan NivekT

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D31485824

Pulled By: NivekT

fbshipit-source-id: c7b21636e4b17e264bfb5dbea69cd3c477472f0b
2021-10-08 08:32:26 -07:00
Kevin Tse
e808e3d3d6 [DataPipe] adding SequenceWrapperMapDataPipe (#66275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66275

Once this is added to Core, TorchData's PR will not need a custom class and can use this wrapper instead.

cc VitalyFedyunin ejguan NivekT

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D31485822

Pulled By: NivekT

fbshipit-source-id: 790de27629c89c0ca7163a8ee5a09ee8b8233340
2021-10-08 08:32:24 -07:00
Vasiliy Kuznetsov
a7cc07f109 quantized embedding: make error message clearer (#66051)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66051

Make the error message clearer when quantized embedding is converted
with an unsupported dtype. This is helpful when debugging quantization
errors on new models.

Test Plan:
```
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1, 1)

m = M().eval()
m.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8),
    weight=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8))
m.embedding.qconfig = m.qconfig
mp = torch.quantization.prepare(m)
mq = torch.quantization.convert(m)
# error message now includes the incorrect dtype
```

Imported from OSS

Reviewed By: dagitses

Differential Revision: D31472848

fbshipit-source-id: 86f6d90bc0ad611aa9d1bdae24497bc6f3d2acaa
2021-10-08 08:32:22 -07:00
Vasiliy Kuznetsov
c9aba3b128 make error message when trying to quantize non floats more specific (#66050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66050

Adds the dtype to an error message when trying to quantize something
other than a float.  This is useful for debugging quantization tools on
new models.

Test Plan:
```
x = torch.randn(1, 1, 1, 1, dtype=torch.double)
xq = torch.quantize_per_tensor(x, 0.01, 0, torch.quint8)
# error message now includes Double
```

Imported from OSS

Reviewed By: dagitses

Differential Revision: D31472849

fbshipit-source-id: 2331ffacefcbc6f8eca79694757d740de74a0f1d
2021-10-08 08:32:19 -07:00
Vasiliy Kuznetsov
81660c08f0 quantized add: enable broadcasting (#66049)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66049

Enables quantized add with broadcasting. As pointed out by jamesr66a,
this was disabled but TensorIterator already supports it. Added a test
case to verify.

Test Plan:
```
python test/test_quantization.py TestQuantizedOps.test_qadd_broadcast
```

Imported from OSS

Reviewed By: dagitses

Differential Revision: D31472850

fbshipit-source-id: a3b16d9000487918db743525d22db6864330762b
2021-10-08 08:31:07 -07:00
Edward Yang
ece0221854 Rename int to long, add more C++ types. (#66108)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66108

BC-breaking change: intT is now longT (which aligns it more accurately with how
the types are referred to in C++).  The benefit of this is that we can idiomatically
express all C++ dtypes (with intT now mapping to int32_t).  These types are needed
for ufunc codegen in a later patch.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D31385761

Pulled By: ezyang

fbshipit-source-id: ec6f3a0953794313470dbe14911f23ac116be425
2021-10-08 08:25:06 -07:00