Commit Graph

103 Commits

Author SHA1 Message Date
Tongzhou Wang
11c31aef04 Prevent hanging in data loader altogether
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11985

Differential Revision: D10202374

Pulled By: SsnL

fbshipit-source-id: 1ab1a07185f78a104f9b05930a87ef5a32f431e4
2018-10-09 09:54:19 -07:00
vishwakftw
47d65ed34f Fix issue 10492 (#11634)
Summary:
- pass infos vector by reference
- checkErrors takes infos vector by reference
- modified gesv tests to not cause infs or nans sporadically
- also clean up error messages

Reviewed By: ezyang

Differential Revision: D9818550

Pulled By: soumith

fbshipit-source-id: 00215205ff88767d6a5e921322394c5fd915d6d8
2018-09-17 12:13:45 -07:00
Neeraj Pradhan
c391c20063 Adding .expand method for TransformedDistribution (#11607)
Summary:
This PR:
 - adds a `.expand` method for `TransformedDistribution` along the lines of #11341.
 - uses this method to simplify `.expand` in distribution classes that subclass off of `TransformedDistribution`.
 - restores testing of `TransformedDistribution` fixtures.
 - fixes some bugs wherein we were not setting certain attributes in the expanded instances, and adds tests for `.mean` and `.variance` which use these attributes.

There are many cases where users use `TransformedDistribution` directly rather than subclassing it. In such cases, it seems rather inconvenient to have to write a separate class just to define a `.expand` method. The default implementation should suffice in these cases.
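The default-`.expand` pattern the PR describes can be sketched with a toy class (hypothetical names, not the actual `torch.distributions` internals): the base class provides an `expand` that copies the instance, rebinds the batch shape, and shares the transforms, so subclasses only pass `_instance` through instead of reimplementing it.

```python
from copy import copy

class ToyTransformedDistribution:
    """Toy stand-in for TransformedDistribution illustrating a default
    .expand that subclasses no longer need to override (sketch only)."""

    def __init__(self, batch_shape, transforms):
        self.batch_shape = tuple(batch_shape)
        self.transforms = list(transforms)

    def expand(self, batch_shape, _instance=None):
        # Reuse _instance when a subclass passes one in; otherwise make a
        # shallow copy so transforms are shared rather than re-validated.
        new = _instance if _instance is not None else copy(self)
        new.batch_shape = tuple(batch_shape)
        new.transforms = self.transforms  # shared, as in the default impl
        return new

d = ToyTransformedDistribution((3,), transforms=["exp"])
d2 = d.expand((5, 3))
assert d2.batch_shape == (5, 3)
assert d2.transforms is d.transforms
```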

cc. fritzo, vishwakftw, alicanb
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11607

Differential Revision: D9818225

Pulled By: soumith

fbshipit-source-id: 2c4b3812b9a03e6985278cfce0f9a127ce536f23
2018-09-14 07:55:33 -07:00
Richard Zou
040d75d455 Add option to use CUDA memory leak testing as a context manager (#11380)
Summary:
cc SsnL
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11380

Reviewed By: ezyang

Differential Revision: D9705877

Pulled By: zou3519

fbshipit-source-id: 02470c25236f57fa02f4ac9d7ed63d38a6355db2
2018-09-10 12:40:15 -07:00
Tongzhou Wang
e85f3fccb3 Fix relying on UB in test_data_parallel_nested_output (#11092)
Summary:
We shouldn't rely on plain `dict` ordering. Example failure: https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-linux-xenial-cuda8-cudnn6-py3-test1/8417/console
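The underlying issue can be sketched in plain Python (a minimal illustration, not the test's actual code): before CPython 3.7, plain `dict` iteration order was an implementation detail, so a test comparing nested outputs keyed by a `dict` could flake depending on the interpreter.

```python
from collections import OrderedDict

# Before CPython 3.7, plain dict iteration order was undefined behavior,
# so tests comparing nested outputs keyed by a dict could flake.
# Using OrderedDict (or sorted keys) makes the expected order explicit.
plain = {"b": 2, "a": 1}
stable = OrderedDict([("b", 2), ("a", 1)])

assert list(stable.keys()) == ["b", "a"]       # insertion order is guaranteed
assert sorted(plain.keys()) == ["a", "b"]      # or sort for determinism
```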
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11092

Reviewed By: ezyang

Differential Revision: D9583274

Pulled By: SsnL

fbshipit-source-id: ba80b96648c98c24c2ec5fa6fd9aa566c095cce7
2018-08-30 13:10:25 -07:00
Tongzhou Wang
23af7deea7 Add has_lapack flag (#11024)
Summary:
Currently our `skipIfLapack` uses a try-catch block and regex-matches the error message, which is highly unreliable. This PR adds `hasLAPACK` and `hasMAGMA` to the ATen context, and exposes the flags to Python.

Also fixes a refcounting bug with `PyModule_AddObject`. The method steals a reference, but we didn't `Py_INCREF` in some places before calling it with `Py_True` or `Py_False`.
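The flag-based skip this enables can be sketched as follows (names here are hypothetical stand-ins; the real decorator reads the flag exposed from the ATen context rather than a module constant). The point is that the skip decision comes from a build flag instead of running an op and regex-matching its error message:

```python
import unittest

# Hypothetical flag standing in for the has_lapack value exposed to Python
# by this change; the old approach ran an op and regex-matched the error.
HAS_LAPACK = False

def skipIfNoLapack(fn):
    """Skip decorator driven by a build flag instead of a try/except plus
    error-message regex, which was unreliable."""
    return unittest.skipIf(not HAS_LAPACK, "PyTorch compiled without LAPACK")(fn)

class Example(unittest.TestCase):
    @skipIfNoLapack
    def test_gesv(self):
        self.fail("would only run on LAPACK builds")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Example))
assert len(result.skipped) == 1
```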
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11024

Differential Revision: D9564898

Pulled By: SsnL

fbshipit-source-id: f46862ec3558d7e0058ef48991cd9c720cb317e2
2018-08-29 22:41:16 -07:00
Teng Li
56539f5fe1 PT1 Distributed Release MileStone No.1 - Completed Distributed Package and CI tests (#10871)
Summary:
The PR includes:
(1) torch.distributed.c10d, which now includes the complete backward compatible frontend API for `torch.distributed`
(2) `env://` init method functionality
(3) Minor change to `test_distributed.py`, which is now a test for `torch.distributed.c10d`.
(4) The old `test_distributed.py` is now moved to `test_distributed_thd`
(5) Miscellaneous bug fixes.
(6) The DDP CPU test is removed since c10d doesn't have this support yet, but it will be a very easy test to add once DDP CPU's dependency is moved to torch.distributed.c10d.
(7) CI config to test MPI, NCCL, and Gloo backend of c10d

**Now all the distributed test including c10d DDP can pass with the c10d frontend API**

TODO: (in a separate PR)
MPI subgroup support; once this is added, the CI group test will be enabled.
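The `env://` init method mentioned in (2) rendezvouses through environment variables that a launcher sets before the process calls `init_process_group`. A minimal sketch of that contract (addresses and values are illustrative):

```python
import os

# The env:// init method reads rendezvous info from environment variables
# that a launcher exports before init_process_group is called (sketch;
# the values below are illustrative, not required defaults).
os.environ.update({
    "MASTER_ADDR": "127.0.0.1",   # host of the rank-0 process
    "MASTER_PORT": "29500",       # free TCP port on that host
    "WORLD_SIZE": "2",            # total number of processes
    "RANK": "0",                  # this process's rank
})

# torch.distributed.init_process_group("gloo", init_method="env://")
# would then read the four variables above to form the process group.
rank, world = int(os.environ["RANK"]), int(os.environ["WORLD_SIZE"])
assert 0 <= rank < world
```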
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10871

Differential Revision: D9554514

Pulled By: teng-li

fbshipit-source-id: fb686ad42258526c8b4372148e82969fac4f42dd
2018-08-29 12:55:57 -07:00
David Riazati
5c58cda8ca Add subname to console output for assertExpected (#10559)
Summary:
Running `--accept` on a test doesn't tell you explicitly which sub-test is being updated; this PR fixes that
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10559

Differential Revision: D9353977

Pulled By: driazati

fbshipit-source-id: a9d4014386ff0fe388a092f3dcf50f157e460f04
2018-08-28 12:13:03 -07:00
iotamudelta
a38b572de3 enable unit tests and other changes (#10266)
Summary:
This PR for the ROCm target does the following:
* enable some unit tests on ROCm
* fix a missing static_cast that breaks BatchNorm call on ROCm
* fix BatchNorm to work on ROCm w/ ROCm warp sizes etc
* improve the hipify-python.py script by introducing kernel scope to some transpilations, among other improvements
* fix a linking issue on ROCm
* for more unit test sets: mark currently broken tests broken (to be fixed)
* enable THINLTO (phase one) to parallelize linking
* address the first failing of the elementwise kernel by removing non-working ROCm specialization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10266

Differential Revision: D9184178

Pulled By: ezyang

fbshipit-source-id: 03bcd1fe4ca4dd3241f09634dbd42b6a4c350297
2018-08-06 14:54:01 -07:00
iotamudelta
cfa05706ef ROCm contributions week 29 (#9653)
Summary:
In this changeset:
* improvements to `hipify-python.py`
* marking unit tests broken for ROCm
* reducing the number of jobs for the build to avoid out-of-memory issues
* switch to Thrust/cub-hip master for the CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9653

Differential Revision: D9117791

Pulled By: ezyang

fbshipit-source-id: a6c3c7b81f2bda9825974bf9bf89a97767244352
2018-08-02 09:09:00 -07:00
Gregory Chanan
34c7c56c73 Re-enable empty n-dimensional empty tensor and fix parallel CPU on empty tensors (#10077)
Summary:
This is a combination of https://github.com/pytorch/pytorch/pull/9947 (this was reverted) and https://github.com/pytorch/pytorch/pull/10076.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10077

Differential Revision: D9087491

Pulled By: gchanan

fbshipit-source-id: 9fe9905628000f2ff3e47df32533cd7d1f25a354
2018-07-31 16:43:45 -07:00
Gregory Chanan
6fb9acfc16 Revert empty n-dim and ATen in C2 integration builds
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10064

Differential Revision: D9082082

Pulled By: gchanan

fbshipit-source-id: ae49470f5b4c89b13beb55fd825de1ba05b6a4fa
2018-07-31 07:25:56 -07:00
Gregory Chanan
ce5f0d40b6 Enable n-dimensional empty tensors. (#9947)
Summary:
These could use some autograd tests, which are coming in a later PR, but using them in autograd is probably pretty rare.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9947

Reviewed By: ezyang

Differential Revision: D9032778

Pulled By: gchanan

fbshipit-source-id: fa5a6509d3bac31ea4fae25143e82de62daabfbd
2018-07-30 12:33:17 -07:00
Tongzhou Wang
27455e9c78 Use _six for inf and nan (#9500)
Summary:
Things like `float('inf')` are actually quite expensive.
```py
In [1]: import math

In [2]: %timeit -n 200 math.inf
49.3 ns ± 1.42 ns per loop (mean ± std. dev. of 7 runs, 200 loops each)

In [3]: %timeit -n 200 float('inf')
194 ns ± 39.1 ns per loop (mean ± std. dev. of 7 runs, 200 loops each)
```
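The fix amounts to binding the constants once at import time so hot paths compare against a name instead of reparsing a string; a minimal sketch of that pattern using the stdlib (here `math.inf` plays the role `torch._six` provides for torch):

```python
import math

# Bind the constants once at import time (the role torch._six plays),
# so hot loops compare against a name instead of calling float('inf').
inf = math.inf
nan = math.nan

def clip(value, lo=-inf, hi=inf):
    # Default bounds use the module-level constants, not float('inf').
    return max(lo, min(value, hi))

assert clip(float("inf")) == inf
assert clip(-5, lo=0) == 0
assert nan != nan  # NaN is never equal to itself
```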
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9500

Reviewed By: soumith

Differential Revision: D8876229

Pulled By: SsnL

fbshipit-source-id: 78602b76bb53d5588910b58270930c0bd413d2d7
2018-07-18 10:40:29 -07:00
Tongzhou Wang
050a2588b5 change stft to have consistent signature with librosa (#9497)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9497

Fixes #7883 by using `rfft`.

It's worth noting that this is BC-breaking, and it's impossible to detect the change because the two signatures before and after this change support a common subset of calling patterns, e.g., `stft(Tensor, int, int)` (some other calling patterns will raise errors).

soumith and I plan to change the current `stft` interface because it is a bit messy and non-standard. rafaelvalle suggested that `librosa` is a good reference API to align with. After discussing with soumith and ezyang, and given that `stft` has only been out for one release, I decided to go with directly changing the signature. Also, my understanding is that most researchers in this field will welcome this change, as `librosa` seems to be the gold standard here. (It doesn't yet support all `pad_mode` options, but those will become available if added to `F.pad`.)
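For intuition on the librosa-style signature, the frame-count arithmetic can be sketched as follows (a sketch of librosa's convention, which this change aligns with; with `center=True` the signal is padded by `n_fft // 2` on each side):

```python
# Frame-count arithmetic for a librosa-style STFT signature (sketch of
# librosa's convention; not the actual torch.stft implementation).
def num_frames(n_samples, n_fft, hop_length, center=True):
    if center:
        # Signal is padded by n_fft // 2 on both sides before framing.
        return 1 + n_samples // hop_length
    return 1 + (n_samples - n_fft) // hop_length

# One second of 16 kHz audio, 25 ms windows, 10 ms hop:
assert num_frames(16000, n_fft=400, hop_length=160) == 101
assert num_frames(16000, n_fft=400, hop_length=160, center=False) == 98
```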
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9308

Reviewed By: ezyang

Differential Revision: D8806148

Pulled By: SsnL

fbshipit-source-id: f6e8777d0c34d4a4d7024e638dc9c63242e8bb58
2018-07-17 10:55:43 -07:00
Will Feng
84884dc2d3 Allow passing '0' to ASAN/UBSAN flags (#9202)
Summary:
Similar to https://github.com/pytorch/pytorch/pull/9187, This PR makes setting the `PYTORCH_TEST_WITH_ASAN` and `PYTORCH_TEST_WITH_UBSAN` flags easier internally, by allowing the flags to be set to `0`.
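The behavior being enabled can be sketched with a small helper (a hypothetical helper illustrating the semantics, not the test suite's actual code): the flag is off when unset or `'0'`, and on when `'1'`, so internal tooling can explicitly disable it.

```python
import os

def env_flag(name, default=False):
    """Read a boolean test flag so that unset or '0' means off and '1'
    means on (sketch of the behavior this change enables)."""
    value = os.getenv(name)
    if value is None:
        return default
    return value.lower() not in ("0", "false", "")

os.environ["PYTORCH_TEST_WITH_ASAN"] = "0"
assert env_flag("PYTORCH_TEST_WITH_ASAN") is False   # '0' now disables
os.environ["PYTORCH_TEST_WITH_ASAN"] = "1"
assert env_flag("PYTORCH_TEST_WITH_ASAN") is True
```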
Closes https://github.com/pytorch/pytorch/pull/9202

Differential Revision: D8745533

Pulled By: yf225

fbshipit-source-id: 6293f52f2e8b1c3ef150becfdc2dd7ded56d5d80
2018-07-06 08:40:37 -07:00
Will Feng
ff501c30af Turn on UBSAN in the OSS build (#8813)
Summary:
Copy of https://github.com/pytorch/pytorch/pull/8802
Closes https://github.com/pytorch/pytorch/pull/8813

Differential Revision: D8707364

Pulled By: yf225

fbshipit-source-id: bc201980b50e9fb44c42a17f898b50d3558fc417
2018-07-05 15:55:49 -07:00
Will Feng
1c9073b43a Allow passing '0' to NO_MULTIPROCESSING_SPAWN (#9187)
Summary:
This PR makes setting the `NO_MULTIPROCESSING_SPAWN` easier internally, by allowing the flag to be set to `0`.
Closes https://github.com/pytorch/pytorch/pull/9187

Differential Revision: D8736206

Pulled By: yf225

fbshipit-source-id: b8a34cb9a747b13bc9428777a3ed766ce441cfe1
2018-07-05 11:10:46 -07:00
Will Feng
90fd4df695 Add flag for disabling tests with multiprocessing spawn start method (#9061)
Summary:
This will resolve some of the timeout issues in CPU and GPU tests internally.
Closes https://github.com/pytorch/pytorch/pull/9061

Reviewed By: ezyang

Differential Revision: D8707471

Pulled By: yf225

fbshipit-source-id: 9dc82a2c9da0c540ae015442f74b9b2b1a67a246
2018-06-30 14:39:11 -07:00
anderspapitto
48e90e3339 Build system changes (#8627)
* All changes needed to get rid of process_github.sh

* allow thnn_h_path
2018-06-20 17:45:26 -04:00
Will Feng
d6c873a393
Shard test_nn to reduce runtime for each test target (#8678)
* Shard test_nn to reduce runtime for each test target

* Use load_tests for selecting tests to enable

* fix lint

* Use arg parser from common.py
2018-06-20 15:01:28 -04:00
gchanan
b6af5d40bf
Some 0-sized dimension support, port catArray away from resizeLegacy. (#8666)
* Some 0-sized dimension support, port catArray away from resizeLegacy.

The goal of this PR is to port catArray away from resizeLegacy (so we can delete the legacy resize calls), but since catArray has some weird behavior because
we don't have arbitrary 0-sized dimension support, I made some effort to fix these both in one pass.

The major changes here are:
1) catArray uses the new resize API, no longer the old resizeLegacy API.
2) As 1) is the last usage of resizeLegacy, it is deleted.
3) If compiled with USE_TH_SIZE_ZERO_DIM, catArray will work and properly check shapes for n-dimensional empty tensors.
4) However, we retain the old behavior of "ignoring" size [0] tensors in catArray.  We previously allowed this because we didn't have n-dimensional empty tensors.
5) To get the above to work, we also add support for n-dimensional empty tensors for narrow and slice (ifdef USE_TH_SIZE_ZERO_DIM).
6) We change the stride formula for empty tensors to match NumPy; basically, we never multiply by a size of 0 (we use at least 1), so the
   strides are monotonically increasing in the empty tensor case.
7) We print the size of empty tensors if size != [0]; this matches NumPy behavior (even in cases where the size could be inferred from the brackets).
8) For test purposes, we add torch._C._use_zero_size_dim() to add tests for the above.
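The NumPy-matching stride rule from item 6 can be sketched in a few lines (an illustration of the formula, not the TH implementation): walking dimensions right-to-left, each size contributes `max(size, 1)` to the running product, so a zero size never collapses the remaining strides to 0.

```python
def contiguous_strides(sizes):
    """Row-major strides where each size contributes max(size, 1), so a
    zero-sized dim never zeroes out the strides (sketch of item 6)."""
    strides, acc = [], 1
    for size in reversed(sizes):
        strides.append(acc)
        acc *= max(size, 1)   # never multiply by 0
    return tuple(reversed(strides))

assert contiguous_strides((2, 0, 3)) == (3, 3, 1)   # matches NumPy
assert contiguous_strides((2, 3)) == (3, 1)
```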

* Fix flake8.

* Address review comments.
2018-06-20 13:26:08 -04:00
Tongzhou Wang
36b8cc5483 skip CUDA memory leak check on Windows altogether (#8213) 2018-06-06 17:29:53 -04:00
Tongzhou Wang
eb2f21f1e4
Skip CUDA memory leak test on BN tests on windows (#8043) 2018-06-01 18:09:14 -04:00
Tongzhou Wang
85ee94b7be
Add memory leak check in CUDA tests (#7270)
* Add memory leak check in CUDA tests

* Tracking multi-GPU too

* fix run_test.py not running __name__ == '__main__' content; add test for make_cuda_memory_checked_test

* add a comment

* skip if cuda

* 1. Change the wrapper to a method in common.py:TestCase
2. Refactor common constants/method that initialize CUDA context into common_cuda.py
3. Update some test files to use TEST_CUDA and TEST_MULTIGPU

* Fix MaxUnpool3d forward memory leak

* Fix MultiLabelMarginCriterion forward memory leak

* Fix MultiMarginLoss backward memory leak

* default doCUDAMemoryCheck to False

* make the wrapper skip-able

* use TEST_MULTIGPU

* add align_corners=True/False tests for Upsample; fix TEST_CUDNN

* finalize interface

* VolumetricMaxUnpooling_updateOutput

* fix test_nccl

* rename THC caching allocator methods to be clearer

* make the wrapped function a method

* address comments; revert changes to aten/src/THC/THCCachingAllocator.cpp

* fix renamed var
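The leak-check wrapper added here boils down to snapshotting allocator state before and after a test body. A minimal sketch of that pattern, with a fake allocator standing in for `torch.cuda.memory_allocated()` so it runs without a GPU (not the actual implementation):

```python
from contextlib import contextmanager

class FakeAllocator:
    """Stands in for torch.cuda.memory_allocated(); the real check reads
    the CUDA caching allocator per device (sketch only)."""
    def __init__(self):
        self.allocated = 0

@contextmanager
def assert_no_memory_leak(allocator):
    # Snapshot before the body, compare after: unfreed bytes fail the test.
    before = allocator.allocated
    yield
    leaked = allocator.allocated - before
    if leaked:
        raise AssertionError(f"leaked {leaked} bytes")

alloc = FakeAllocator()
with assert_no_memory_leak(alloc):
    alloc.allocated += 1024   # allocate
    alloc.allocated -= 1024   # free: balanced, so no error

leak_caught = False
try:
    with assert_no_memory_leak(alloc):
        alloc.allocated += 512   # a forgotten free
except AssertionError:
    leak_caught = True
assert leak_caught
```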
2018-05-31 15:09:54 -04:00
avmgithub
f02ae65727 skip test_utils.TestFFI.test_cpu for ppc64le due to incompatible exception handling (#7422) 2018-05-09 11:45:30 -04:00
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Tongzhou Wang
4f05cb710e Add underscore to nn.init.* and deprecate the original ones (#6093)
Fixes #5946.

* add underscore to nn.init.* and deprecate the original ones

* add a test for deprecation
2018-03-29 13:26:12 -04:00
Ailing
d707dae013 Add half test in test_nn for auto generated tests. (#5362)
* add half and double test in NewTestModule

* add half/double/float tests in NewCriterionTest

* resolve merge conflict with master
2018-03-21 16:55:06 -04:00
Ailing
84a73775d5 adding fp16 tests in test_nn (#5020) 2018-03-20 18:01:05 +01:00
Tongzhou Wang
22ef8e5654 [fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)
This is the first of three PRs that #5537 will be split into.

This PR adds mkl headers to included files, and provides helper functions for MKL fft and cuFFT.
In particular, on POSIX, headers are using mkl-include from conda, and on Windows, it is from a new file @yf225 and I made and uploaded to s3.

* add mkl-include to required packages

* include MKL headers; add AT_MKL_ENABLED flag; add a method to query MKL availability

* Add MKL and CUFFT helpers
2018-03-19 15:43:14 -04:00
Sam Gross
30ec06c140
Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code. Subsequent PRs will remove the dead code, clean-up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variable and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
2018-02-23 18:03:31 -05:00
Tongzhou Wang
1848cad108 [ready] Layer Normalization (#4922)
* at::maybe_data_ptr and Check.h => TensorUtils.h

* THNN support for optional BN running_*

* ATen support for optional BN running_*

* Python nn.* support for optional BN running_*; Improve IN and BN doc

* Add tests for IN and BN new option

* Layer Norm

* Fix LRN doc

* functional interface for LN and IN

* Layer norm tests

* fix BN double backward returning undefined tensors

* fix jit test using wrong dim inputs for BN

* add/improve BN, IN and LN GPU tests with half type

* Update docs to be consistent with Conv notation
Fix onnx
Clarified onnx symbolic wrapper

* fix typo

* Address comments
2018-02-22 11:56:41 -05:00
Ailing
3ef2e484bf Add fp16 testcases in test_cuda (#5122) 2018-02-21 14:35:29 +01:00
Sam Gross
9dfbc120f5
Fix assertNotEqual handling of message/precision (#5246) 2018-02-14 18:08:53 -05:00
gchanan
affe742d31
Add scalar module tests for test_nn. (#5116)
* Add scalar module tests for test_nn.

* Properly return from glu.

* Guard scalar test with skipIf.
2018-02-08 13:53:24 -05:00
Tongzhou Wang
805639906a Broadcast output requires_grad if only corresponding input requires_grad (#5061) 2018-02-05 23:38:35 -05:00
Richard Zou
885c874167 Fix refcycles in DataParallel scatter and gather (#4988)
* Eliminate reference cycles in scatter_gather

* Test for refcycles

* Better fix

* Add comments
2018-02-05 17:19:36 -05:00
Tongzhou Wang
64a9ecae02 Dataloader issues (#4643)
* EINTR and kill by loader fix

* addressed @apaszke 's comments

* remove EINTR handling and add test if we are in main thread before setting SIGCHLD
2018-01-29 01:18:17 +01:00
Vishwak Srinivasan
8593c6f4f7 Adding better KL-Tests (#4739) 2018-01-20 21:47:11 +01:00
gchanan
d38cf0e1e9 Allow assertEqual checks with mixed Tensors, Variables, numbers. (#4754)
Currently, a Variable can only be compared with a Variable, but a Tensor
can be compared with Tensors or numbers.  Relax this constraint so Variables
behave identically to Tensors.
2018-01-19 22:28:37 -05:00
Edward Z. Yang
7d25a41251
Fix #4492, make it impossible to forget to reset cudnn flags (#4503)
Three stage plan to no more stupidly weird "why isn't cuDNN enabled"
bugs:

- Add torch.backends.cudnn.disable_global_flags(), which as its name suggests,
  disables global flag setting in cuDNN, so that you are not allowed to
  make changes to this state.  However, the flags() context
  manager continues to work (since they are non-global changes).

- Call disable_global_flags() in test/common.py

- Switch all of the manual flag setting/unsetting in test/test_nn.py
  to use the context manager.
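The freeze-globals-but-keep-scoped-changes pattern described above can be sketched generically (illustrative names; not the actual `torch.backends.cudnn` implementation): global setters raise once frozen, while the `flags()` context manager still makes a change that is restored on exit.

```python
from contextlib import contextmanager

class Flags:
    """Sketch of the cudnn flag-guard pattern: global writes can be
    frozen, while flags() still makes scoped, auto-restored changes."""
    def __init__(self):
        self.enabled = True
        self._frozen = False

    def set_enabled(self, value):
        if self._frozen:
            raise RuntimeError("global flag setting disabled; use flags()")
        self.enabled = value

    def disable_global_flags(self):
        self._frozen = True

    @contextmanager
    def flags(self, enabled):
        old = self.enabled
        self.enabled = enabled   # scoped change bypasses the freeze
        try:
            yield
        finally:
            self.enabled = old   # always restored, even on error

cudnn = Flags()
cudnn.disable_global_flags()
with cudnn.flags(enabled=False):
    assert cudnn.enabled is False
assert cudnn.enabled is True      # restored on exit
try:
    cudnn.set_enabled(False)      # global write now forbidden
except RuntimeError:
    pass
```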

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2018-01-08 12:21:09 -05:00
Peter Goldsborough
77c792ec27 Vectorize normal_ (#4312) 2018-01-03 22:30:55 +01:00
Fritz Obermeyer
5c33400dd3 Implement OneHotCategorical distribution (#4357) 2017-12-28 16:54:55 +01:00
Edward Z. Yang
4453a5402f
allow_inf on test_beta_log_prob (#4354)
* allow_inf on test_beta_log_prob
* Support allow_inf on assertAlmostEqual

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-12-27 09:25:12 -05:00
Neeraj Pradhan
ffa7fab67f Minor changes to test utils to catch type errors (#4270) 2017-12-26 10:08:33 +01:00
Edward Z. Yang
e9bfe8ca92 Make expect file directory search more robust.
Previously, we assumed that __main__ was the test file
being run, which is not true if you are using pytest.  New
algorithm uses __module__ of the test class, which is a bit
more robust.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-12-23 11:44:14 +01:00
Will Feng
1681d07199 Disable tests and fix issues with Windows CUDA build (#4251) 2017-12-20 11:30:21 +01:00
Edward Z. Yang
51ca3a1a48
Make sparse test also check that coalesce status of tensors makes sense. (#3171)
This adds more heavy sanity checking when we run to_dense(); in particular,
we make sure that if it claims to be coalesced, it truly is coalesced, and if
it is not, that the coalesced version also to_dense() to the same thing.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-11-28 09:55:56 -05:00
Sam Gross
74fd79d889 Set seed at top-level of common.py (#3862)
Some tests, such as test_autograd.py, include random generation at the
top-level. It's going to be tough to police these files to ensure that
all randomness only happens within a test, so just set the seed as soon
as args are parsed (as well as before each test).

torch.manual_seed_all is no longer needed since torch.manual_seed also
seeds the CUDA random number generator.
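The pattern amounts to seeding immediately after argument parsing, so even module-level random generation is deterministic (a minimal sketch using the stdlib RNG; the real code seeds torch as well):

```python
import argparse
import random

# Sketch of the common.py pattern: seed as soon as args are parsed so
# top-level randomness is deterministic, not just code inside each test.
parser = argparse.ArgumentParser()
parser.add_argument("--seed", type=int, default=1234)
args = parser.parse_args([])      # empty argv for this sketch
random.seed(args.seed)

first = random.random()           # "top-level" random generation
random.seed(args.seed)
assert random.random() == first   # same seed, same stream
```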
2017-11-26 11:46:53 -05:00