Commit Graph

26749 Commits

Michael Voznesensky
960f4b51e3 [JIT] Fix @staticmethod access from self on modules (#37702)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30755
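
A minimal sketch of the pattern this fix enables (hedged: the module and method names below are illustrative, not taken from the PR):

```
import torch
import torch.nn as nn

class Doubler(nn.Module):
    @staticmethod
    def double(x):
        return x * 2

    def forward(self, x):
        # Calling the staticmethod through `self` used to fail under torch.jit.script.
        return self.double(x)

scripted = torch.jit.script(Doubler())
print(scripted(torch.ones(3)))  # tensor([2., 2., 2.])
```
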
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37702

Differential Revision: D21389989

Pulled By: voznesenskym

fbshipit-source-id: f9b7e26a9eab7dc3d7762a5a28f85424dac5fbb3
2020-05-14 21:12:10 -07:00
Yan Zhu
3d0532f3ab [c2] fix compute_norm test (#38529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38529

(Note: this ignores all push blocking failures!)

Test Plan: buck test mode/opt //caffe2/caffe2/python/modeling:compute_norm_for_blobs_test

Reviewed By: olittle

Differential Revision: D21588603

fbshipit-source-id: bdb0ae455e85a934cb5e369fbb0078f2ff842814
2020-05-14 20:49:36 -07:00
Pruthvi Madugundu
8df14c573e Add sccache support for hcc and hip-clang in ROCm (#38451)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38451

Differential Revision: D21589126

Pulled By: ezyang

fbshipit-source-id: dc4d08e7f393dbe369e501334c776071b2c176e0
2020-05-14 20:44:20 -07:00
Yan Zhu
fac9f36563 Back out "[c2] register cuda op for LpNorm (fallback)"
Summary: Original commit changeset: 573419e5a8da

Test Plan: D21562485 breaks the CI build. Unlanding.

Reviewed By: olittle

Differential Revision: D21588831

fbshipit-source-id: 6dda4b71904d7765f32f570f9722e4a9a6cbc97b
2020-05-14 20:25:30 -07:00
Jerry Zhang
ee52501976 [quant][graphmode][refactor] Factor out getInputTensorQParamOpFusionInfo (#38358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38358

Test Plan: Imported from OSS

Differential Revision: D21559806

fbshipit-source-id: b243b811c5c5917f50a11ef5b26174baf46e683f
2020-05-14 19:59:09 -07:00
Shen Li
155a287aea Enforce const on PyRRef functions (#38415)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38415

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D21554722

Pulled By: mrshenli

fbshipit-source-id: 53c2abd8de43545873be486e1fb893bc329d65a1
2020-05-14 19:01:28 -07:00
Supriya Rao
25177e2796 [quant] Support empty batch input for quantized ops (#38508)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38508

Test Plan:
python test/test_quantization.py TestQuantizedOps.test_empty_batch

Imported from OSS

Differential Revision: D21581937

fbshipit-source-id: e50580dec0682848a0703f7bdee6e9351ab79814
2020-05-14 18:42:50 -07:00
Ailing Zhang
bc49d938e2 Revert D21585458: [pytorch][PR] [RELAND] .circleci: Improve docker image build workflow
Test Plan: revert-hammer

Differential Revision: D21585458

Original commit changeset: 37792a1e0f5e

fbshipit-source-id: cd4c6794708f27a80077e0af27ccf52c5c6ba832
2020-05-14 18:11:03 -07:00
Yan Zhu
0e0b9496fe [c2] [easy] stop gradient when diagnose (#38518)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38518

as title

Test Plan: buck test

Reviewed By: olittle

Differential Revision: D21562570

fbshipit-source-id: 3a2e8dea3d821a2bdb9f30db25816a2bfa6c5dcf
2020-05-14 17:30:39 -07:00
Eli Uriegas
8cdc4807cd [RELAND] .circleci: Improve docker image build workflow (#38484)
Summary:
closes https://github.com/pytorch/pytorch/issues/37855

Relies on https://github.com/pytorch/pytorch/pull/38483

Previous attempts to get this right:
* https://github.com/pytorch/pytorch/pull/38335
* https://github.com/pytorch/pytorch/pull/38279
* https://github.com/pytorch/pytorch/pull/37976

This reverts commit 80639604a8.

Improves the docker image build workflow from many manual steps to something
basically transparent from a user's perspective.

To update docker images, all one now has to do is edit the .circleci/docker
folder; the images will be rebuilt automatically, and their tags will be added
dynamically to the list of tags kept from the garbage collector.

Adding a new image currently stays the same, but we can explore doing that
dynamically as well.

How the build workflow works:
  - Docker tags are determined by the hash defined from git for the
    .circleci/docker sub-directory, extracted using git rev-parse (a sketch
    follows after this list)
  - Images are only built if the computed hash is not found in ECR and
    the hash differs from the previously computed hash. The previously
    computed hash is found using the same process as before, but subbing
    out HEAD for the merge base between HEAD and the base git revision
  - That tag is then passed through the jobs using a shared workspace
    which is added to downstream jobs using the circleci ${BASH_ENV}
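
A hedged sketch of the tag computation described above (illustrative only, not the actual CI script; the base branch `origin/master` is an assumption):

```
import subprocess

def tree_hash(rev="HEAD", subdir=".circleci/docker"):
    # `git rev-parse <rev>:<path>` returns the tree object hash for that path,
    # which changes exactly when the directory's contents change.
    return subprocess.check_output(
        ["git", "rev-parse", "{}:{}".format(rev, subdir)], text=True
    ).strip()

merge_base = subprocess.check_output(
    ["git", "merge-base", "HEAD", "origin/master"], text=True
).strip()

# Only rebuild when the tag is new relative to the merge base with the base revision.
if tree_hash("HEAD") != tree_hash(merge_base):
    print("docker images need a rebuild, tag:", tree_hash("HEAD"))
```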

How the new garbage collection works:
  - Tags to keep are generated by stepping through all of the commits in
    the .circleci/docker subdirectory

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38484

Differential Revision: D21585458

Pulled By: seemethere

fbshipit-source-id: 37792a1e0f5e5531438c4ae61507639c133aa76d
2020-05-14 17:11:04 -07:00
Yan Zhu
bbfd0ef244 [c2] register cuda op for LpNorm (fallback) (#38517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38517

as title

Test Plan: buck test

Reviewed By: olittle

Differential Revision: D21562485

fbshipit-source-id: 573419e5a8dae4121d99d5b72ed3960a92db7a54
2020-05-14 16:54:12 -07:00
Jerry Zhang
504637a171 [quant][graphmode] Support ops with fixed quantization parameters (#38278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38278

Support ops like aten::hardsigmoid that have fixed quantization parameters (an illustrative sketch follows the list below):
```
  constexpr float o_scale = 1.0f / 256.0f;
  constexpr int32_t o_zero_point = 0;
```

Ops supported:
- hardsigmoid
- sigmoid
- tanh
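
As an illustration (a hedged eager-mode sketch, not the graph-mode pass itself): sigmoid outputs lie in [0, 1), so a fixed scale of 1/256 with zero_point 0 covers the full quint8 range without an observer:

```
import torch

y = torch.sigmoid(torch.randn(4))  # float outputs in [0, 1)
qy = torch.quantize_per_tensor(y, scale=1.0 / 256.0, zero_point=0,
                               dtype=torch.quint8)
print(qy)
```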

Test Plan: Imported from OSS

Differential Revision: D21559811

fbshipit-source-id: 26f3c9c3389dea4f07b350172e2974fac8c5c470
2020-05-14 16:36:06 -07:00
Supriya Rao
de7025fbdb [quant] Support for functional quantized::conv1d (#38449)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38449

Also update docs to reflect conv1d op support

Test Plan:
python test/test_quantization.py TestQuantizedFunctional.test_conv1d_api

Imported from OSS

Differential Revision: D21575921

fbshipit-source-id: 21c9f6b49ad456cd9d93e97f17cf5b8d87f0da6b
2020-05-14 16:09:51 -07:00
Supriya Rao
8e732514cd [quant][graphmode] Add support for quantized conv1d + relu fusion (#38441)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38441

Test Plan:
python test/test_quantization.py test_quantized_conv1d_relu

Imported from OSS

Differential Revision: D21575919

fbshipit-source-id: d43e33052ce1be5e38acef8fac16f22cb11c0695
2020-05-14 16:09:46 -07:00
Supriya Rao
f4605ae5c3 [quant] Fusion support for conv1d + ReLU (#38438)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38438

Fusion for PTQ flow in eager mode. Graph mode to follow
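
A minimal eager-mode sketch of the fusion (hedged: assumes a build where Conv1d/ReLU fusion is available; the toy model is illustrative):

```
import torch.nn as nn
from torch.quantization import fuse_modules

model = nn.Sequential(nn.Conv1d(2, 4, kernel_size=3), nn.ReLU()).eval()
# Fuse the Conv1d at index '0' with the ReLU at index '1' for the PTQ flow.
fused = fuse_modules(model, [['0', '1']])
print(fused)
```
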

Test Plan:
python test/test_quantization.py TestFusion

Imported from OSS

Differential Revision: D21575920

fbshipit-source-id: 5bac6602520f42ae3f4957d1a55e6a863daa0257
2020-05-14 16:08:11 -07:00
Jessica Lin
8b6bf2a457 Add C++ Landing Page (#38450)
Summary:
* Add cpp_index.rst for landing page to match 1.5 (https://github.com/pytorch/pytorch/blob/release/1.5/docs/source/cpp_index.rst)
* Link to new cpp landing page was added to the docs table of contents in this PR: https://github.com/pytorch/pytorch/pull/38350
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38450

Differential Revision: D21580939

Pulled By: jlin27

fbshipit-source-id: 021c43f207a100d554266e4e16cb6752ca9c56a0
2020-05-14 16:02:01 -07:00
David Reiss
1f87f15ba3 Remove _reset_warning_registry (#38485)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38485

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This class does nothing in Python 3.

Test Plan: CI

Reviewed By: ailzhang

Differential Revision: D21575260

Pulled By: dreiss

fbshipit-source-id: 184696c9fa501e8d2517950b47cdbc90b2ae8053
2020-05-14 15:03:30 -07:00
David Reiss
b140ed6848 Remove structseq_slice (#35625)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35625

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This function was already ifdef'ed out in Python 2.

Added a comment about when we might be able to remove this entire file.

Test Plan: CI

Differential Revision: D20842885

Pulled By: dreiss

fbshipit-source-id: 1fd3b1b2ff5a82caaf3bc11344dde2941427cfc0
2020-05-14 15:03:24 -07:00
David Reiss
6d642a6f6c Remove (most) Python 2 support from C++ code (#35614)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35614

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well.

Test Plan: CI

Differential Revision: D20842876

Pulled By: dreiss

fbshipit-source-id: 18abf0d324ed2185ec6d27c864e935d856dcc6ad
2020-05-14 15:01:49 -07:00
Karl Ostmo
1b973aa2a2 Sort CircleCI config.yml keys to facilitate diff review after codegen (#38496)
Summary:
This will support another round of migration from hand-written configs to code generation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38496

Differential Revision: D21581624

Pulled By: kostmo

fbshipit-source-id: aed814ef6d4fc6af9ce092727b2dacc99de14ae0
2020-05-14 14:33:25 -07:00
svcscm
69dca43c35 Updating submodules
Summary:
GitHub commits:

64bad39e0d
d10385b2cf
d1d606ea75
5b7309d5fe
e5c84d203b
6e64791678
b9a2b343c4
d9c1059140
1bcde534b5
46981b8186

Test Plan: n/a

Reviewed By: zpao

fbshipit-source-id: 5299cef9c91612f176ff0c29d0cc3acf629d2240
2020-05-14 14:15:22 -07:00
Igor Sugak
0e80c12bb4 [pytorch] fix -Wlogical-op-parentheses in SortingKthValue.cu (#38500)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38500

Reported by Clang:
```
caffe2/aten/src/ATen/native/cuda/SortingKthValue.cu:77:56: error: '&&' within '||' [-Werror,-Wlogical-op-parentheses]
                    || THCNumerics<scalar_t>::isnan(v) && THCNumerics<scalar_t>::isnan(kValue));
                    ~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
caffe2/aten/src/ATen/native/cuda/SortingKthValue.cu:77:56: note: place parentheses around the '&&' expression to silence this warning
                    || THCNumerics<scalar_t>::isnan(v) && THCNumerics<scalar_t>::isnan(kValue));
                                                       ^
                       (                                                                      )
```

Test Plan:
```
buck build mode/opt -c fbcode.cuda_use_clang=true fblearner/flow/projects/dper:workflow
```

Reviewed By: ngimel

Differential Revision: D21578871

fbshipit-source-id: 83595152a370a4acbb2c3b5823dbae9c21485f06
2020-05-14 13:59:31 -07:00
Michael Suo
9d0e935b48 skip torchbind on rocm (#38501)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38501

Test Plan: Imported from OSS

Differential Revision: D21579298

Pulled By: suo

fbshipit-source-id: 4ac0b6beac26c97c1e0ff68304996ce62be8e8ce
2020-05-14 12:58:27 -07:00
Rohan Varma
4d4895a62a Use Future's then() API to fix RPC profiling (#38352)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38352

Fixes the RPC profiling by using the `then()` API added in https://github.com/pytorch/pytorch/pull/37311. Instead of adding a regular callback, we return a new future that completes when the profiling callback is finished. This is transparent to the user as the future still completes with the value of the original future (i.e. the RPC's return value).
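
For reference, a hedged single-process sketch of the `then()` pattern (worker name, address, and port are placeholders; this is not the profiling code itself):

```
import os
import torch
import torch.distributed.rpc as rpc

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker0", rank=0, world_size=1)

fut = rpc.rpc_async("worker0", torch.add, args=(torch.ones(2), 1))
# `then` returns a new future that completes after the callback has run,
# while the callback still sees the original RPC result.
fut2 = fut.then(lambda f: f.wait() * 2)
print(fut2.wait())  # tensor([4., 4.])

rpc.shutdown()
```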

To make this work for RRef, we add a `_set_profiling_future` to set the profiling future, and `_get_profiling_future` to retrieve this future and wait on it in the tests.

Re-enabled profiling tests and stress tested them 1000 times to verify the fix
ghstack-source-id: 104086114

Test Plan: Re-enabled profiling tests

Differential Revision: D21506940

fbshipit-source-id: 35cde22f0551c825c9bc98ddc24cca412878a63a
2020-05-14 12:52:45 -07:00
Rohan Varma
f178bf10f1 Support rpc_async call with timeout in JIT (#37884)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37884

Adds support for using the rpc_timeout param in rpc_async calls from JIT, for
parity with eager mode. Done by:
1) Adding timeout as an input in ir_emitter.cpp if it is specified
2) Parsing the float IValue from the inputs in the `prim::rpc_async` operator, using the default if needed (see the sketch after this list).
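
A hedged sketch of the now-supported call shape in TorchScript (the destination name is a placeholder and an initialized RPC agent is assumed; this is not the code from the PR):

```
import torch
import torch.distributed.rpc as rpc
from torch import Tensor

@torch.jit.script
def remote_add(x: Tensor, y: Tensor) -> Tensor:
    return x + y

@torch.jit.script
def call_with_timeout(dst: str, x: Tensor, y: Tensor) -> Tensor:
    # The float timeout (in seconds) is forwarded through prim::rpc_async.
    fut = rpc.rpc_async(dst, remote_add, (x, y), timeout=1.0)
    return fut.wait()
```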

Added UTs in jit/rpc_test.
ghstack-source-id: 104083031

Test Plan: Added UTs in jit/rpc_test.

Differential Revision: D21268895

fbshipit-source-id: 34bb10a2ac08b67dd6b789121ab43e2c0e696229
2020-05-14 12:44:26 -07:00
Eli Uriegas
3300dd5227 .circleci: Keep tags that look like a sha1 (#38483)
Summary:
Previous attempts to get this right:
* https://github.com/pytorch/pytorch/pull/38335
* https://github.com/pytorch/pytorch/pull/38279
* https://github.com/pytorch/pytorch/pull/37976

This tag kept getting deleted before the docker image CI workflow could
be merged, causing upstream breakages.

It'd be best to make sure the garbage collector just doesn't garbage
collect it.

This is a pre-step to merge https://github.com/pytorch/pytorch/pull/38484

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38483

Differential Revision: D21577359

Pulled By: seemethere

fbshipit-source-id: c4e0709bd8fff8f24a988b60eaa9f8c01576ef2f
2020-05-14 12:38:33 -07:00
Will Feng (FAIAR)
38d141ede5 Support having a different forward method when we are not in scripting mode (#38158)
Summary:
TorchScript currently doesn’t support `*args, **kwargs` in a method signature, which is extensively used in DPER3 low-level modules’ forward methods. In order to make DPER3 low-level modules scriptable, I was thinking about a solution of having a forward method *only* for TorchScript, and replacing the forward method when we are not in scripting mode.

This solution works today, and I would like to add a test to make sure it will always work in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38158

Differential Revision: D21485657

Pulled By: yf225

fbshipit-source-id: df7368e8a5265418be7c305e6666ffd76e595466
2020-05-14 12:13:06 -07:00
SsnL
5f2a274015 Fix conv non zero padding being applied in wrong dim (#37881)
Summary:
Turns out F.pad takes in dims in reverse order. Fixes https://github.com/pytorch/pytorch/issues/37844
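
For reference, a small sketch of the F.pad ordering that caused the bug (the last dimension's padding comes first in the pad tuple):

```
import torch
import torch.nn.functional as F

x = torch.zeros(1, 1, 2, 3)  # NCHW
y = F.pad(x, (1, 1, 0, 0))   # (W_left, W_right, H_top, H_bottom)
print(y.shape)               # torch.Size([1, 1, 2, 5]): width is padded, not height
```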
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37881

Differential Revision: D21554011

Pulled By: soumith

fbshipit-source-id: a85a7f6db9f981d915728965903c5c57b6617c93
2020-05-14 11:56:38 -07:00
Omkar Salpekar
b57a339703 Guard against negative rpcTimeout being passed in to RpcBackendOptions (#38267)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38267

Assert that the rpcTimeout is positive in RpcBackendOptions
constructor
ghstack-source-id: 104029918

Test Plan: CI

Differential Revision: D21509850

fbshipit-source-id: c925490e3d8fa2ffa42b0ae1170ca2f740af11f7
2020-05-14 11:33:23 -07:00
Protonu Basu
d1eeb3b7bb [Tensorexpr] Fix and improve handling multiple gpu devices (#38365)
Summary:
These commits fix a bug that was exposed when we took away the fallback path. The fix is to set the appropriate device before setting the CUDA stream.
The improvement: when compiling, the device is set to the new device only if it differs from the prior device, and a redundant call to cudaFree is removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38365

Reviewed By: zheng-xq

Differential Revision: D21537469

Pulled By: protonu

fbshipit-source-id: b9662dd623b5c7cfd23eb6894e992a43665641e4
2020-05-14 11:17:17 -07:00
Omkar Salpekar
af597335d4 Remove unnecessary to_string in RPC logging code. (#38414)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38414

`std::to_string` call is unnecessary when using glog.
ghstack-source-id: 104030161

Test Plan: Ran the retry tests and checked logs to ensure the correct message was printed upon message failure.

Differential Revision: D21266330

fbshipit-source-id: 53519287778d47d99b94ea34b7c551f910affda2
2020-05-14 10:57:00 -07:00
David Reiss
2f4da7c00c Remove a use of exec (#35624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35624

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.

Test Plan: CI

Differential Revision: D20842877

Pulled By: dreiss

fbshipit-source-id: 856e72171496aa1d517f2f27a8a5066462cf4f76
2020-05-14 10:08:04 -07:00
David Reiss
7f7fdb1013 Remove a use of checkScript(str) (#35623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35623

Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.

Test Plan: CI

Differential Revision: D20842874

Pulled By: dreiss

fbshipit-source-id: 9f12e046f827d4f9d5eca99b0b0b46f73e06ff51
2020-05-14 10:07:58 -07:00
David Reiss
313bea84ef Remove _get_wrapped_func (#35621)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35621

Python 2 has reached end-of-life and is no longer supported by PyTorch.
`func.__wrapped__` can be used directly in Python 3.

Test Plan: CI

Differential Revision: D20842875

Pulled By: dreiss

fbshipit-source-id: 26f71df12db6d5118c8f278b27d747d647d07900
2020-05-14 10:07:53 -07:00
David Reiss
d060deb5bb Remove _compatible_subtest (#35620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35620

Python 2 has reached end-of-life and is no longer supported by PyTorch.
`self.subTest` can be used directly in Python 3.

Test Plan: CI

Differential Revision: D20842872

Pulled By: dreiss

fbshipit-source-id: 6ad42550c01e6959821ff07df767fc14b58c5a9e
2020-05-14 10:07:48 -07:00
David Reiss
7026b39ac7 Remove _uses_true_division (#35618)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35618

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Python 3 always uses true division.

Test Plan: CI

Differential Revision: D20842884

Pulled By: dreiss

fbshipit-source-id: 522e34bb584d4bdb01c9c40eb267955062a57774
2020-05-14 10:07:42 -07:00
David Reiss
328fc70b84 Remove (most) Python 2 support from setup.py (#35617)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35617

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up some cruft that we put in place to support it.

Test Plan: CI

Differential Revision: D20842883

Pulled By: dreiss

fbshipit-source-id: 18dc5219ba99658c0ca7e2f26863df008c420e6a
2020-05-14 10:06:20 -07:00
Supriya Rao
cbff959bd7 [quant] Return default qconfig when backend is 'none' (#38407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38407

We can still run some quantized tests even when fbgemm/qnnpack isn't enabled

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D21554257

fbshipit-source-id: e4fa8f61f6a6717881c00620ed7938c01ffbf958
2020-05-14 09:53:50 -07:00
Hong Xu
7f11079769 Delete "named_guard" in native_functions.yaml (#38429)
Summary:
"named_guard" is not a supported option (i.e., a typo).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38429

Differential Revision: D21572794

Pulled By: zou3519

fbshipit-source-id: 6e799611344f373b03f64410d7af9c2c89a75f55
2020-05-14 09:48:23 -07:00
Michael Carilli
25f918548d Allow GradScaler to be pickled (#38296)
Summary:
Should unblock https://github.com/PyTorchLightning/pytorch-lightning/issues/1782.
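
A minimal sketch of the round-trip this change allows (hedged: `enabled` is tied to CUDA availability so the snippet also runs on CPU-only builds):

```
import pickle
import torch

scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())
restored = pickle.loads(pickle.dumps(scaler))  # previously failed to pickle
print(restored.get_scale())
```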
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38296

Differential Revision: D21553296

Pulled By: albanD

fbshipit-source-id: 9041a72d7cf8833e4b01bc767fd2321f17c7c5f2
2020-05-14 09:14:28 -07:00
SsnL
ae392a77a6 Add better device idx parse checks (#37376)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/32079
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37376

Differential Revision: D21476036

Pulled By: zou3519

fbshipit-source-id: 86907083c23cbaf165b645307fb340f2656b814e
2020-05-14 09:07:12 -07:00
Peter Bell
0a159b0a3a Fix precision issues in CPU remainder (#38293)
Summary:
Together with https://github.com/pytorch/pytorch/issues/37758, this fixes https://github.com/pytorch/pytorch/issues/37743 and fixes https://github.com/pytorch/pytorch/issues/24861.

This follows the CUDA fix in https://github.com/pytorch/pytorch/issues/37758, vectorised using a `blendv` to replace the if conditionals.

Most of the complication is from `remainder` supporting `at::Half` where `fmod` doesn't. I've now got `fmod` working on `Vec256<at::Half>` as well as enabling half dispatch for `fmod` so it matches `remainder`.

I also added `fmod` support to `Vec256<at::BFloat16>` before realising that `remainder` doesn't support `BFloat16` anyway. I could also enable `BFloat16` if that's desirable. If not, I don't think `Vec256<BFloat16>` should be missing `fmod` anyway.
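
For reference, a hedged sketch of the sign conventions the two ops follow (it assumes a build where `fmod` has the `at::Half` CPU dispatch this change enables):

```
import torch

a = torch.tensor([-7.5, 7.5], dtype=torch.half)
b = torch.tensor([2.0, -2.0], dtype=torch.half)
print(torch.remainder(a, b))  # sign of the divisor:  tensor([ 0.5000, -0.5000], dtype=torch.float16)
print(torch.fmod(a, b))       # sign of the dividend: tensor([-1.5000,  1.5000], dtype=torch.float16)
```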
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38293

Differential Revision: D21539801

Pulled By: ezyang

fbshipit-source-id: abac6a3ed2076932adc459174cd3d8d510f3e1d5
2020-05-14 08:54:32 -07:00
Nikita Shulga
3e9b4332d2 Fix @skipIfNoFBGEMM for types (#38432)
Summary:
Return unmodified type from decorator if fbgemm is present.

Fix `Tried to trace <__torch__.torch.classes.rnn.CellParamsBase object at 0x55f504c56b40> but it is not part of the active trace. Modules that are called during a trace must be registered as submodules of the thing being traced` thrown from `TestPostTrainingDynamic.test_quantized_rnn`  by preserving modules in returned qRNNBase (i.e. by partially reverting https://github.com/pytorch/pytorch/pull/38134 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38432

Differential Revision: D21567333

Pulled By: malfet

fbshipit-source-id: 364fa2c8fc6e400b4f2e425b922a977756aec1d8
2020-05-14 08:27:29 -07:00
Takayoshi Nishida
628e3b6fbd Fix unreachable validation for gradcheck (#37915)
Summary:
Hi, I found validation code that is unreachable in the `gradcheck` function :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37915

Differential Revision: D21551661

Pulled By: albanD

fbshipit-source-id: 8acadcc09cd2afb539061eda0ca5e98860e321eb
2020-05-14 08:18:14 -07:00
Pearu Peterson
48c0331e01 Sparse softmax support (CPU) (#36305)
Summary:
This PR implements softmax support for sparse tensors.

The sparse softmax is related to the dense softmax when the values of unspecified sparse tensor entries are taken to be `-inf`, which has the effect of ignoring the unspecified ("zero") entries. This relation is used here for testing the correctness of the results.

Resolves https://github.com/pytorch/pytorch/issues/23651 for CPU.
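
A small usage sketch of the new op (hedged: assumes a build that includes this change):

```
import torch

i = torch.tensor([[0, 0, 1], [0, 2, 1]])
v = torch.tensor([1.0, 2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (2, 3))
# Unspecified entries behave as -inf, so they contribute nothing to each row's softmax.
print(torch.sparse.softmax(s, dim=1))
```
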

- [x] sparse softmax
  - [x] CPU C++ implementation
  - [x] unittests
  - [x] update softmax documentation
  - [x] autograd support
- [x] sparse log_softmax
  - [x] CPU C++ implementation
  - [x] unittests
  - [x] update log_softmax documentation
  - [x] autograd support
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36305

Differential Revision: D21566540

Pulled By: ezyang

fbshipit-source-id: a632ea69c38622f960721482e442efeb8d0a54fc
2020-05-14 08:08:40 -07:00
Huang Shuang
fedb70a8fb Fix encoding errors for hipify tool (#37906)
Summary:
Encoding errors occur when using Anaconda Python 3.6.10 to run hipify_python.py, e.g., "'ascii' codec can't decode byte 0xc3".
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37906

Differential Revision: D21549531

Pulled By: ezyang

fbshipit-source-id: 2ffb5787e192a5c03711baa5c7e2577cb5bcab5a
2020-05-14 08:07:04 -07:00
Robert Wang
2b2d2168e8 Issue #27441 Fix: Bug in updating ModuleDict & ParameterDict (#27814)
Summary:
Fix a bug in `nn.ModuleDict.update` and `nn.ParameterDict.update` when passing another dictionary of the same type as input.
Related issue: [Issue https://github.com/pytorch/pytorch/issues/27441](https://github.com/pytorch/pytorch/issues/27441)
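
A minimal sketch of the affected usage (the keys and modules are illustrative):

```
import torch.nn as nn

d1 = nn.ModuleDict({"a": nn.Linear(2, 2)})
d2 = nn.ModuleDict({"b": nn.ReLU()})
d1.update(d2)  # updating from another ModuleDict was previously mishandled
print(list(d1.keys()))  # ['a', 'b']
```
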
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27814

Differential Revision: D21518099

Pulled By: ezyang

fbshipit-source-id: 9e6bb6fcc26c8070e137e2e52c65f69a1fcaab37
2020-05-14 08:01:41 -07:00
Bharat123rox
15da26f8aa DOC: Add documentation for Tensor.is_nonzero (#37845)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/37438 by adding documentation for `Tensor.is_nonzero`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37845

Differential Revision: D21494422

Pulled By: mruberry

fbshipit-source-id: ee4f5979922d7c8100b5031d770ccdf59fe1c1a1
2020-05-14 04:46:55 -07:00
Nikolay Korovaiko
96885f73ed make test_jit infer the profiling mode, add a job for simple executor (#38374)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38374

Differential Revision: D21567658

Pulled By: Krovatkin

fbshipit-source-id: c0eb44cf6c842d5feebabf8c7d99c1b4aa6c4960
2020-05-13 23:55:40 -07:00
SsnL
b5868b2833 Relax sampler check in BatchSampler (#38403)
Summary:
Since the check was added in https://github.com/pytorch/pytorch/pull/6249, one cannot pass a plain iterable as a sampler to the data loader anymore, which was a very handy feature (e.g., https://github.com/pytorch/pytorch/issues/1337). I think the check should be removed for two reasons (a sketch of the usage follows the list below):
1. It is too strict. There is no reason that it should not be a general iterable.
2. It is inconsistent. In `DataLoader` (the main place where people use samplers), you can pass a general iterable as `batch_sampler` but not `sampler` due to this check.
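
A hedged sketch of the usage this re-enables (the dataset and index list are placeholders):

```
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.arange(10, dtype=torch.float32))
# Any iterable of indices can now serve as `sampler`, not only Sampler subclasses.
loader = DataLoader(ds, batch_size=4, sampler=[0, 2, 4, 6, 8])
for (batch,) in loader:
    print(batch)
```
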
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38403

Differential Revision: D21555958

Pulled By: soumith

fbshipit-source-id: c7267bb99a31edd8f2750689205d6edc5dab5cff
2020-05-13 22:24:29 -07:00