Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38518
as title
Test Plan: buck test
Reviewed By: olittle
Differential Revision: D21562570
fbshipit-source-id: 3a2e8dea3d821a2bdb9f30db25816a2bfa6c5dcf
Summary:
closes https://github.com/pytorch/pytorch/issues/37855
Relies on https://github.com/pytorch/pytorch/pull/38483
Previous attempts to get this right:
* https://github.com/pytorch/pytorch/pull/38335
* https://github.com/pytorch/pytorch/pull/38279
* https://github.com/pytorch/pytorch/pull/37976
This reverts commit 80639604a8.
Improves the docker image build workflow, going from a many-step manual
process to one that is basically transparent from a user's perspective.
To update docker images, all one now has to do is edit the
.circleci/docker folder; the images will be rebuilt automatically and the
new tags will be added dynamically to the list of tags the garbage
collector keeps.
Adding a new image currently stays the same, but we can explore doing
that dynamically as well.
How the build workflow works:
- Docker tags are determined by the git hash of the .circleci/docker
sub-directory (extracted using git rev-parse)
- Images are only built if the computed hash is not found in ECR and
the hash differs from the previously computed hash. The previous hash
is computed the same way, but substituting the merge base between
HEAD and the base git revision for HEAD (see the sketch below)
- That tag is then passed to downstream jobs through a shared
workspace and added to their environment using the CircleCI
${BASH_ENV}
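For illustration, a minimal Python sketch of the tag computation under these assumptions (the function names and the `origin/master` base branch are hypothetical; the actual logic lives in the CircleCI scripts):

```python
import subprocess

def docker_tag(rev: str = "HEAD") -> str:
    # Tree hash of the .circleci/docker sub-directory at the given revision;
    # the Docker image tag is derived from this hash.
    return subprocess.check_output(
        ["git", "rev-parse", f"{rev}:.circleci/docker"], text=True
    ).strip()

current_tag = docker_tag("HEAD")

# Previously built tag: same computation, but at the merge base between HEAD
# and the base git revision (assumed here to be origin/master).
merge_base = subprocess.check_output(
    ["git", "merge-base", "HEAD", "origin/master"], text=True
).strip()
previous_tag = docker_tag(merge_base)

# Rebuild only if the tag changed and is not already present in ECR.
rebuild_needed = current_tag != previous_tag
```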
How the new garbage collection works:
- Tags to keep are generated by stepping through all of the commits in
the .circleci/docker subdirectory
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38484
Differential Revision: D21585458
Pulled By: seemethere
fbshipit-source-id: 37792a1e0f5e5531438c4ae61507639c133aa76d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38517
as title
Test Plan: buck test
Reviewed By: olittle
Differential Revision: D21562485
fbshipit-source-id: 573419e5a8dae4121d99d5b72ed3960a92db7a54
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38449
Also update docs to reflect conv1d op support
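For reference, a minimal sketch of the functional conv1d API exercised by the test below (shapes and quantization parameters are arbitrary; assumes a build with the fbgemm or qnnpack backend):

```python
import torch
from torch.nn.quantized import functional as qF

# Quantized activations use quint8, weights use qint8
x = torch.quantize_per_tensor(torch.randn(1, 4, 16), scale=0.1, zero_point=0,
                              dtype=torch.quint8)
w = torch.quantize_per_tensor(torch.randn(8, 4, 3), scale=0.05, zero_point=0,
                              dtype=torch.qint8)
b = torch.randn(8)

# The output is re-quantized with the given scale/zero_point
y = qF.conv1d(x, w, b, stride=1, padding=1, scale=0.2, zero_point=0)
```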
Test Plan:
python test/test_quantization.py TestQuantizedFunctional.test_conv1d_api
Imported from OSS
Differential Revision: D21575921
fbshipit-source-id: 21c9f6b49ad456cd9d93e97f17cf5b8d87f0da6b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38485
Python 2 has reached end-of-life and is no longer supported by PyTorch.
This class does nothing in Python 3.
Test Plan: CI
Reviewed By: ailzhang
Differential Revision: D21575260
Pulled By: dreiss
fbshipit-source-id: 184696c9fa501e8d2517950b47cdbc90b2ae8053
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35625
Python 2 has reached end-of-life and is no longer supported by PyTorch.
This function was already ifdef'ed out in Python 2.
Added a comment about when we might be able to remove this entire file.
Test Plan: CI
Differential Revision: D20842885
Pulled By: dreiss
fbshipit-source-id: 1fd3b1b2ff5a82caaf3bc11344dde2941427cfc0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35614
Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well.
Test Plan: CI
Differential Revision: D20842876
Pulled By: dreiss
fbshipit-source-id: 18abf0d324ed2185ec6d27c864e935d856dcc6ad
Summary:
This will support another round of migration from hand-written configs to code generation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38496
Differential Revision: D21581624
Pulled By: kostmo
fbshipit-source-id: aed814ef6d4fc6af9ce092727b2dacc99de14ae0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38352
Fixes the RPC profiling by using the `then()` API added in https://github.com/pytorch/pytorch/pull/37311. Instead of adding a regular callback, we return a new future that completes when the profiling callback is finished. This is transparent to the user, as the future still completes with the value of the original future (i.e. the RPC's return value).
To make this work for RRef, we add a `_set_profiling_future` to set the profiling future, and a `_get_profiling_future` to retrieve this future and wait on it in the tests.
Re-enabled profiling tests and stress tested them 1000 times to verify the fix.
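For illustration, a minimal sketch of the `then()` pattern described above (the function and record names are hypothetical; the actual change lives in the RPC/profiler internals):

```python
def _attach_profiling_callback(rpc_fut, profiling_record):
    def _cb(completed_fut):
        profiling_record.stop()       # finalize the profiling event
        return completed_fut.wait()   # propagate the original RPC return value

    # then() returns a *new* future that completes only after _cb has run,
    # yet still resolves with the RPC's return value, so callers are unaffected.
    return rpc_fut.then(_cb)
```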
ghstack-source-id: 104086114
Test Plan: Re-enabled profiling tests
Differential Revision: D21506940
fbshipit-source-id: 35cde22f0551c825c9bc98ddc24cca412878a63a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37884
Adds support for the rpc_timeout param in rpc_async calls from JIT, for
parity with eager mode. Done by:
1) Adding timeout as an input in ir_emitter.cpp if it is specified
2) Parsing the float IValue from the inputs of the `prim::rpc_async` operator, falling back to the default if it is not given.
Added UTs in jit/rpc_test.
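A rough sketch of what this enables from TorchScript (worker name and functions are hypothetical, and an initialized RPC framework is assumed; the call form follows the eager-mode `rpc_async` signature):

```python
from typing import Dict

import torch
import torch.distributed.rpc as rpc

@torch.jit.script
def remote_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x + y

@torch.jit.script
def call_with_timeout(dst: str, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # args, kwargs, and timeout (in seconds) mirror the eager-mode signature;
    # the timeout is parsed from the prim::rpc_async inputs by this change
    kwargs = torch.jit.annotate(Dict[str, torch.Tensor], {})
    fut = rpc.rpc_async(dst, remote_add, (x, y), kwargs, 1.0)
    return fut.wait()
```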
ghstack-source-id: 104083031
Test Plan: Added UTs in jit/rpc_test.
Differential Revision: D21268895
fbshipit-source-id: 34bb10a2ac08b67dd6b789121ab43e2c0e696229
Summary:
TorchScript currently doesn't support `*args, **kwargs` in method signatures, which are extensively used in the forward methods of DPER3 low-level modules. In order to make DPER3 low-level modules scriptable, I was thinking about a solution of having a forward method *only* for TorchScript, and replacing the forward method when we are not in scripting mode.
This solution works today, and I would like to add a test to make sure it will always work in the future.
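A minimal sketch of the pattern (class and helper names are hypothetical): the class keeps a fixed-signature forward that TorchScript can compile, and eager-mode instances swap in a flexible `*args, **kwargs` wrapper.

```python
import torch

class MyDperModule(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fixed-signature forward that TorchScript can compile
        return x + 1

def _eager_forward(self, *args, **kwargs):
    # Flexible eager-mode entry point; TorchScript never sees this
    return MyDperModule.forward(self, args[0])

m = MyDperModule()
m.forward = _eager_forward.__get__(m)        # eager calls go through *args/**kwargs
scripted = torch.jit.script(MyDperModule())  # scripting compiles the class-level forward
```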
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38158
Differential Revision: D21485657
Pulled By: yf225
fbshipit-source-id: df7368e8a5265418be7c305e6666ffd76e595466
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38267
Assert that rpcTimeout is positive in the RpcBackendOptions
constructor
ghstack-source-id: 104029918
Test Plan: CI
Differential Revision: D21509850
fbshipit-source-id: c925490e3d8fa2ffa42b0ae1170ca2f740af11f7
Summary:
These commits fix a bug that was exposed when we took away the fallback path. The fix is to set the appropriate device before setting the CUDA stream.
The improvement is that, when compiling, we now set the device to the new device only if it differs from the prior device, and remove a redundant call to cudaFree.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38365
Reviewed By: zheng-xq
Differential Revision: D21537469
Pulled By: protonu
fbshipit-source-id: b9662dd623b5c7cfd23eb6894e992a43665641e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38414
The `std::to_string` call is unnecessary when using glog.
ghstack-source-id: 104030161
Test Plan: Ran the retry tests and checked logs to ensure the correct message was printed upon message failure.
Differential Revision: D21266330
fbshipit-source-id: 53519287778d47d99b94ea34b7c551f910affda2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35624
Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.
Test Plan: CI
Differential Revision: D20842877
Pulled By: dreiss
fbshipit-source-id: 856e72171496aa1d517f2f27a8a5066462cf4f76
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35623
Python 2 has reached end-of-life and is no longer supported by PyTorch.
This test case is valid syntax in Python 3.
Test Plan: CI
Differential Revision: D20842874
Pulled By: dreiss
fbshipit-source-id: 9f12e046f827d4f9d5eca99b0b0b46f73e06ff51
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35621
Python 2 has reached end-of-life and is no longer supported by PyTorch.
`func.__wrapped__` can be used directly in Python 3.
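For context, a small example of the Python 3 behavior relied on here: `functools.wraps` sets `__wrapped__` automatically (since Python 3.2), so no fallback attribute is needed.

```python
import functools

def deco(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@deco
def f():
    pass

assert f.__wrapped__.__name__ == "f"  # the original, undecorated function
```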
Test Plan: CI
Differential Revision: D20842875
Pulled By: dreiss
fbshipit-source-id: 26f71df12db6d5118c8f278b27d747d647d07900
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35620
Python 2 has reached end-of-life and is no longer supported by PyTorch.
`self.subTest` can be used directly in Python 3.
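For context, `self.subTest` is available on `unittest.TestCase` in Python 3 (3.4+), so the compatibility shim is no longer needed:

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_values(self):
        for i in (0, 1, 2):
            with self.subTest(i=i):  # each iteration is reported separately
                self.assertGreaterEqual(i, 0)
```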
Test Plan: CI
Differential Revision: D20842872
Pulled By: dreiss
fbshipit-source-id: 6ad42550c01e6959821ff07df767fc14b58c5a9e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35618
Python 2 has reached end-of-life and is no longer supported by PyTorch.
Python 3 always uses true division.
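For context, in Python 3 the `/` operator always performs true division, while `//` is floor division:

```python
assert 3 / 2 == 1.5   # true division, even for ints (Python 3 behavior)
assert 3 // 2 == 1    # floor division when an integer result is wanted
```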
Test Plan: CI
Differential Revision: D20842884
Pulled By: dreiss
fbshipit-source-id: 522e34bb584d4bdb01c9c40eb267955062a57774
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35617
Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up some cruft that we put in place to support it.
Test Plan: CI
Differential Revision: D20842883
Pulled By: dreiss
fbshipit-source-id: 18dc5219ba99658c0ca7e2f26863df008c420e6a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38407
We can still run some quantized tests even when fbgemm/qnnpack isn't enabled
Test Plan:
python test/test_quantization.py
Imported from OSS
Differential Revision: D21554257
fbshipit-source-id: e4fa8f61f6a6717881c00620ed7938c01ffbf958
Summary:
Together with https://github.com/pytorch/pytorch/issues/37758, this fixes https://github.com/pytorch/pytorch/issues/37743 and fixes https://github.com/pytorch/pytorch/issues/24861.
This follows the CUDA fix in https://github.com/pytorch/pytorch/issues/37758, vectorised using a `blendv` to replace the if conditionals.
Most of the complication is from `remainder` supporting `at::Half` where `fmod` doesn't. I've now got `fmod` working on `Vec256<at::Half>` as well as enabling half dispatch for `fmod` so it matches `remainder`.
I also added `fmod` support to `Vec256<at::BFloat16>` before realising that `remainder` doesn't support `BFloat16` anyway. I could also enable `BFloat16` if that's desirable. If not, I don't think `Vec256<BFloat16>` should be missing `fmod` anyway.
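For reference, a small illustration of the two ops' semantics, which the half-precision support above has to match (`remainder` follows the sign of the divisor, `fmod` the sign of the dividend; assumes the CPU half dispatch enabled by this change):

```python
import torch

a = torch.tensor([-3.0, 3.0], dtype=torch.half)
torch.remainder(a, 2)  # tensor([1., 1.], dtype=torch.float16)
torch.fmod(a, 2)       # tensor([-1., 1.], dtype=torch.float16)
```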
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38293
Differential Revision: D21539801
Pulled By: ezyang
fbshipit-source-id: abac6a3ed2076932adc459174cd3d8d510f3e1d5
Summary:
Return unmodified type from decorator if fbgemm is present.
Fix `Tried to trace <__torch__.torch.classes.rnn.CellParamsBase object at 0x55f504c56b40> but it is not part of the active trace. Modules that are called during a trace must be registered as submodules of the thing being traced`, thrown from `TestPostTrainingDynamic.test_quantized_rnn`, by preserving modules in the returned qRNNBase (i.e. by partially reverting https://github.com/pytorch/pytorch/pull/38134).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38432
Differential Revision: D21567333
Pulled By: malfet
fbshipit-source-id: 364fa2c8fc6e400b4f2e425b922a977756aec1d8
Summary:
Hi, I found a validation check in the `gradcheck` function that is unreachable :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37915
Differential Revision: D21551661
Pulled By: albanD
fbshipit-source-id: 8acadcc09cd2afb539061eda0ca5e98860e321eb
Summary:
This PR implements softmax support for sparse tensors.
The sparse softmax is related to the dense softmax as follows: if the values of unspecified sparse tensor entries are taken to be `-inf`, they have the effect of being ignored ("zero entries ignored"). This relation is used here for testing the correctness of the results (see the sketch after the checklist below).
Resolves https://github.com/pytorch/pytorch/issues/23651 for CPU.
- [x] sparse softmax
  - [x] CPU C++ implementation
  - [x] unittests
  - [x] update softmax documentation
  - [x] autograd support
- [x] sparse log_softmax
  - [x] CPU C++ implementation
  - [x] unittests
  - [x] update log_softmax documentation
  - [x] autograd support
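A minimal sketch of the relation used for testing (indices and values are arbitrary; the specified values are chosen nonzero so the dense masking trick below applies):

```python
import torch

i = torch.tensor([[0, 0, 1], [0, 2, 1]])
v = torch.tensor([1.0, 2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()

sparse_result = torch.sparse.softmax(s, dim=1)

# Dense reference: treat unspecified entries as -inf so they get zero mass
dense = s.to_dense()
dense = dense.masked_fill(dense == 0, float("-inf"))
dense_result = torch.softmax(dense, dim=1)

assert torch.allclose(sparse_result.to_dense(), dense_result)
```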
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36305
Differential Revision: D21566540
Pulled By: ezyang
fbshipit-source-id: a632ea69c38622f960721482e442efeb8d0a54fc
Summary:
Since the check was added in https://github.com/pytorch/pytorch/pull/6249, one cannot pass an iterable as a sampler to the data loader anymore, which was a very handy feature (e.g., https://github.com/pytorch/pytorch/issues/1337). I think the check should be removed for two reasons:
1. It is too strict. There is no reason that it should not be a general iterable.
2. It is inconsistent. In `DataLoader` (the main place where people use samplers), you can pass a general iterable as `batch_sampler` but not `sampler` due to this check.
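A minimal sketch of what removing the check enables (a plain list of indices as `sampler`):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.arange(10).float())

# Any iterable of indices now works as a sampler, not just Sampler subclasses
loader = DataLoader(ds, sampler=[3, 1, 4, 1, 5], batch_size=2)
for (batch,) in loader:
    print(batch)
```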
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38403
Differential Revision: D21555958
Pulled By: soumith
fbshipit-source-id: c7267bb99a31edd8f2750689205d6edc5dab5cff