Commit Graph

68 Commits

Author SHA1 Message Date
zeshengzong
cb71bcc542 Replace clone.detach with detach.clone (#140264)
Fixes #64532

As stated in the issue, replace `clone.detach` with `detach.clone`.
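For context, `detach().clone()` performs the copy outside the autograd graph, while `clone().detach()` records the clone in the graph only to discard it; a small illustration:

```python
import torch

a = torch.randn(3, requires_grad=True)

# preferred: detach first, then clone -- the copy is not tracked by autograd
b = a.detach().clone()

# discouraged: the clone is recorded in the autograd graph, then thrown away
c = a.clone().detach()
```

Both produce a tensor with the same values and no graph history; the difference is purely overhead.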

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140264
Approved by: https://github.com/soulitzer
2024-11-13 07:01:02 +00:00
Mikayla Gawarecki
d9576c9440 Fix failures when default is flipped for weights_only (#127627)
Tests on the XLA shard are not fixed yet, but there is an issue tracking this: https://github.com/pytorch/xla/issues/7799
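For context, a minimal sketch of the flag whose default this stack prepares to flip:

```python
import torch

torch.save({"w": torch.ones(2)}, "sd.pt")

# weights_only=True uses a restricted unpickler that only accepts tensors and
# primitive containers, guarding against arbitrary code execution on load
sd = torch.load("sd.pt", weights_only=True)
```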

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127627
Approved by: https://github.com/albanD
ghstack dependencies: #132349
2024-08-16 00:22:43 +00:00
Xuehai Pan
4d7bf72d93 [BE][Easy] fix ruff rule needless-bool (SIM103) (#130206)
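For context, SIM103 flags boolean returns that merely restate a condition; an illustrative fix:

```python
# before: flagged by ruff's needless-bool rule (SIM103)
def is_positive(x: int) -> bool:
    if x > 0:
        return True
    else:
        return False

# after: return the condition directly
def is_positive(x: int) -> bool:
    return x > 0
```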
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130206
Approved by: https://github.com/malfet
2024-07-14 08:17:52 +00:00
PyTorch MergeBot
82cd7a7dab Revert "Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)"
This reverts commit fa426b096b.

Reverted https://github.com/pytorch/pytorch/pull/126819 on behalf of https://github.com/izaitsevfb due to suspicious build instructions count regression, see [D58015016](https://www.internalfb.com/diff/D58015016) ([comment](https://github.com/pytorch/pytorch/pull/126814#issuecomment-2143545818))
2024-06-01 18:46:16 +00:00
Mikayla Gawarecki
fa426b096b Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126819
Approved by: https://github.com/albanD
ghstack dependencies: #127313, #126814
2024-05-30 18:28:13 +00:00
Mikayla Gawarecki
cd06ae0cb8 Relax use_count constraints for swap_tensors when AccumulateGrad holds a reference (#127313)
### Before this PR:
`torch.utils.swap_tensors(a, b)` required the `use_count` of `a` and `b` to be 1

```python
a = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, 4)
out = a * 2
out.sum().backward()
# Calling swap_tensors here would fail due to the reference held by AccumulateGrad node, which is not cleaned up after backward
# torch.utils.swap_tensors(a, b)
del out
# Calling swap_tensors here would pass
torch.utils.swap_tensors(a, b)
```
### After this PR:
`torch.utils.swap_tensors(a, b)` requires the `use_count` of `a` and `b` to be 1, or 2 if the second reference is held by `AccumulateGrad`.

A pre-hook will be registered on the `AccumulateGrad` node so that it fails if it is ever called (i.e., if the user attempts to backward through the graph).

```python
a = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, 4)
out = a * 2
out.sum().backward()
# Calling swap_tensors here is ok
torch.utils.swap_tensors(a, b)
# If we ever backward to the AccumulateGrad node it will error that it was poisoned by swap_tensors
```

### Application to `nn.Module`

This issue is especially pertinent in the context of `nn.Module`, where parameters will have `AccumulateGrad` nodes initialized after forward. Specifically, this is intended to address https://github.com/pytorch/pytorch/pull/126814#issuecomment-2127777866. Previously, this would fail at the `m.cpu()` call, but we want users to be able to do something like the following, raising an error only if the user ever attempts to backward through the poisoned `AccumulateGrad` node:

```python
import torch
import torch.nn as nn
m = nn.Linear(3, 5)
inp = torch.randn(2, 3)
out = m(inp)
out.sum().backward()
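# previously _apply raised here because AccumulateGrad held a second reference;
# with this PR the swap succeeds and the AccumulateGrad node is poisoned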
m.cpu()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127313
Approved by: https://github.com/soulitzer
2024-05-30 07:06:55 +00:00
PyTorch MergeBot
12d11fe4e5 Revert "reset dynamo cache before each test (#126586)"
This reverts commit bd24991f46.

Reverted https://github.com/pytorch/pytorch/pull/126586 on behalf of https://github.com/malfet due to Broke tons of tests, see bd24991f46  ([comment](https://github.com/pytorch/pytorch/pull/126586#issuecomment-2131365576))
2024-05-25 17:17:19 +00:00
Shunting Zhang
bd24991f46 reset dynamo cache before each test (#126586)
In https://github.com/pytorch/pytorch/issues/125967, we found that test results depend on test order. The root cause is that earlier tests populate the dynamo cache and affect later tests.

This PR clears the dynamo cache before each unit test so we get more deterministic results.
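A minimal sketch of the mechanism, assuming it is wired into the shared test-case setup:

```python
import torch._dynamo
from torch.testing._internal.common_utils import TestCase

class ExampleTest(TestCase):
    def setUp(self):
        super().setUp()
        # drop all compiled-code caches so earlier tests cannot leak state
        torch._dynamo.reset()
```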

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126586
Approved by: https://github.com/jansel
2024-05-25 04:48:09 +00:00
Shunting Zhang
b03dc3d167 don't check memory format for empty tensors (#126593)
Fix https://github.com/pytorch/pytorch/issues/125967. The test actually fails for empty 4D or 5D tensors when checking the memory format.

I'm not exactly sure which recent inductor change caused the failure, but it may not be that important to maintain strides for an empty tensor. (?)

I just skip the check for empty tensors.
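A sketch of the relaxed check (the helper name is hypothetical):

```python
import torch

def assert_same_memory_format(out: torch.Tensor, ref: torch.Tensor) -> None:
    # hypothetical helper: strides of empty tensors carry no meaningful layout,
    # so the memory-format comparison is skipped for them
    if out.numel() == 0:
        return
    assert out.stride() == ref.stride()
```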

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126593
Approved by: https://github.com/ezyang
2024-05-25 01:19:45 +00:00
PyTorch MergeBot
6a06d36296 Revert "Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)"
This reverts commit ab61309ab8.

Reverted https://github.com/pytorch/pytorch/pull/126819 on behalf of https://github.com/mikaylagawarecki due to broke xla ci ([comment](https://github.com/pytorch/pytorch/pull/126814#issuecomment-2127719337))
2024-05-23 17:43:06 +00:00
Mikayla Gawarecki
ab61309ab8 Default meta device to use swap_tensors in nn.Module._apply (.to_empty and .to('meta')) (#126819)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126819
Approved by: https://github.com/albanD
ghstack dependencies: #126814
2024-05-23 15:43:32 +00:00
PyTorch MergeBot
b1e214ceb1 Revert "don't check memory format for empty tensors (#126593)"
This reverts commit 12dee4f204.

Reverted https://github.com/pytorch/pytorch/pull/126593 on behalf of https://github.com/clee2000 due to broke tests on inductor? test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CTCLoss_cuda_float64 43f2f43eb3 https://github.com/pytorch/pytorch/actions/runs/9200644034/job/25308511495 ([comment](https://github.com/pytorch/pytorch/pull/126586#issuecomment-2126228689))
2024-05-23 04:54:28 +00:00
Shunting Zhang
12dee4f204 don't check memory format for empty tensors (#126593)
Fix https://github.com/pytorch/pytorch/issues/125967. The test actually fails for empty 4D or 5D tensors when checking the memory format.

I'm not exactly sure which recent inductor change caused the failure, but it may not be that important to maintain strides for an empty tensor. (?)

I just skip the check for empty tensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126593
Approved by: https://github.com/ezyang
ghstack dependencies: #126586
2024-05-22 22:43:09 +00:00
Mikayla Gawarecki
d621e3e3b8 Add exhaustive module and optimizer tests for torch.load(state_dict, weights_only=True) (#121049)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121049
Approved by: https://github.com/janeyx99
2024-03-05 14:27:50 +00:00
Mikayla Gawarecki
677e67c399 Update nn.Module._apply to not gate on should_use_set_data when swap_tensors is set (#120659)
This updates the nesting of if statements in `nn.Module._apply` such that when

`torch.__future__.set_swap_module_params_on_conversion(True)` is set, we always try to swap, regardless of whether
- `torch._has_compatible_shallow_copy_type(param, fn(param))` holds
- `torch.__future__.set_overwrite_module_params_on_conversion` is set

This means that `meta_module.to_empty(device=...)` can now use the swap_tensors path cc @awgu
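A sketch of the workflow this unlocks, assuming the future flag from this stack:

```python
import torch
import torch.nn as nn

torch.__future__.set_swap_module_params_on_conversion(True)

with torch.device("meta"):
    m = nn.Linear(3, 5)        # parameters allocated on the meta device
m.to_empty(device="cpu")       # materialized via the swap_tensors path
print(m.weight.device)         # cpu
```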

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120659
Approved by: https://github.com/albanD
2024-02-28 00:59:34 +00:00
Mikayla Gawarecki
23b030a79c [easy] Add testing utilities for torch.nn.utils.set_swap_module_params_on_conversion (#118023)
For the PR above, to parametrize the existing `load_state_dict` tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118023
Approved by: https://github.com/albanD
ghstack dependencies: #118028, #117167
2024-02-07 18:55:44 +00:00
Mikayla Gawarecki
d5a718d27b Add swap_tensors path to nn.Module._apply (#117167)
Added `torch.__future__.{get/set}_swap_module_params_on_conversion`, which defaults to `False` for now, but we probably want to override this and default to `True` in `nn.Module._apply` if the input is a tensor subclass.

From offline discussion, for now we are **not** allowing `swap_tensors` after the first module forward has been run*** if the autograd graph is still alive. The reason is that `torch.utils.swap_tensors(t1, t2)` requires the `use_count` of both `TensorImpl`s associated with `t1` and `t2` to be 1. The first forward pass will install `AccumulateGrad` nodes on each param, which [bump the refcount of the associated TensorImpl](6cf1fc66e3/torch/csrc/autograd/variable.cpp (L307)). **Future work might be to swap the refs that the `AccumulateGrad` nodes hold if necessary.**

***From this, it might seem like we don't need to handle gradients. However, I still handle the grads for the edge case where the grads are set via `p.grad = grad` OR the autograd graph is no longer alive because the output has been garbage collected.

If any `swap_tensors` fails on any of the parameters in the `nn.Module` we raise an error.

**`RNNBase` overrides `nn.Module._apply()` and installs weakrefs on some parameters. As a result, all modules that inherit from `RNNBase` (`RNN`, `GRU` and `LSTM`) cannot use the `swap_tensors` path as of now**
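A minimal sketch of the new path (note the `RNNBase` caveat above):

```python
import torch
import torch.nn as nn

torch.__future__.set_swap_module_params_on_conversion(True)

m = nn.Linear(2, 2)
m.half()                # _apply now swaps each param via torch.utils.swap_tensors
print(m.weight.dtype)   # torch.float16
```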

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117167
Approved by: https://github.com/albanD
ghstack dependencies: #118028
2024-02-07 18:55:44 +00:00
PyTorch MergeBot
91b848bf81 Revert "markDynamoStrictTest on more tests (#115879)"
This reverts commit 8b650cdd3c.

Reverted https://github.com/pytorch/pytorch/pull/115879 on behalf of https://github.com/atalman due to OSSCI oncall, broke inductor ([comment](https://github.com/pytorch/pytorch/pull/115879#issuecomment-1858418921))
2023-12-15 20:00:09 +00:00
rzou
8b650cdd3c markDynamoStrictTest on more tests (#115879)
Featuring:
test_mobile_optimizer.py
test_module_init.py
test_modules.py
test_multiprocessing.py
test_multiprocessing_spawn.py
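Usage is roughly the following (assuming the decorator lives in `torch.testing._internal.common_utils`):

```python
from torch.testing._internal.common_utils import TestCase, markDynamoStrictTest

# opt the whole test class into strict dynamo mode
@markDynamoStrictTest
class TestModules(TestCase):
    ...
```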
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115879
Approved by: https://github.com/voznesenskym
ghstack dependencies: #115845, #115855, #115856, #115857, #115858, #115870, #115871
2023-12-15 13:19:52 +00:00
Mikayla Gawarecki
ac60a70e06 Migrated loss functions to ModuleInfos (#115584)
Migrates most tests in `common_nn.py:criterion_tests` to ModuleInfos.

**I can split this up if it is too large to review**

What this PR does not include:
- [`no_batch_dim` tests](https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/common_nn.py#L3995-L4112)
- [tests that use the functional variant of the loss function and `wrap_functional`](https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/common_nn.py#L1079-L1128)

#### On test times
This PR increases test time by ~58s locally
Before this PR:
```
>>> python test/test_nn.py -k Loss
Ran 1003 tests in 28.977s
```
After this PR
```
>>> python test/test_nn.py -k Loss
Ran 368 tests in 23.073s
```

```
>>> python test/test_modules.py -k Loss
Ran 836 tests in 63.900s
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115584
Approved by: https://github.com/janeyx99
ghstack dependencies: #115617
2023-12-14 16:21:05 +00:00
PyTorch MergeBot
626b7dc847 Revert "Migrated loss functions to ModuleInfos (#115584)"
This reverts commit f138b08d2e.

Reverted https://github.com/pytorch/pytorch/pull/115584 on behalf of https://github.com/atalman due to OSS CI oncall, breaks slow test ([comment](https://github.com/pytorch/pytorch/pull/115584#issuecomment-1854855080))
2023-12-13 23:34:30 +00:00
Mikayla Gawarecki
f138b08d2e Migrated loss functions to ModuleInfos (#115584)
Migrates most tests in `common_nn.py:criterion_tests` to ModuleInfos.

**I can split this up if it is too large to review**

What this PR does not include:
- [`no_batch_dim` tests](https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/common_nn.py#L3995-L4112)
- [tests that use the functional variant of the loss function and `wrap_functional`](https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/common_nn.py#L1079-L1128)

#### On test times
This PR increases test time by ~58s locally
Before this PR:
```
>>> python test/test_nn.py -k Loss
Ran 1003 tests in 28.977s
```
After this PR
```
>>> python test/test_nn.py -k Loss
Ran 368 tests in 23.073s
```

```
>>> python test/test_modules.py -k Loss
Ran 836 tests in 63.900s
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115584
Approved by: https://github.com/janeyx99
ghstack dependencies: #115617
2023-12-12 22:20:20 +00:00
Peter Bell
bbd5b935e4 Use pytree.tree_leaves everywhere (#112324)
This changes all the instances I could find of `tree_flatten(...)[0]` or
`x, _ = tree_flatten(...)` to use `tree_leaves`.
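For illustration, the two equivalent spellings:

```python
import torch.utils._pytree as pytree

data = {"a": [1, 2], "b": (3, {"c": 4})}

# new, direct spelling
leaves = pytree.tree_leaves(data)           # [1, 2, 3, 4]

# old pattern this PR replaces
leaves2, _spec = pytree.tree_flatten(data)
assert leaves == leaves2
```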

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112324
Approved by: https://github.com/lezcano
ghstack dependencies: #112327, #112323
2023-10-30 03:39:04 +00:00
CaoE
3992450e8d Add backward check for test_memory_format (#106104)
Add backward check for test_memory_format.
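For illustration, the kind of property such a backward check asserts (a sketch; the exact expectation may vary per module):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3).to(memory_format=torch.channels_last)
x = torch.randn(2, 3, 8, 8).to(memory_format=torch.channels_last).requires_grad_()
conv(x).sum().backward()

# gradients should typically preserve the input's memory format
print(x.grad.is_contiguous(memory_format=torch.channels_last))
```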

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106104
Approved by: https://github.com/mikaylagawarecki
2023-08-25 18:11:54 +00:00
PyTorch MergeBot
02bcaf45f6 Revert "Add backward check for test_memory_format (#106104)"
This reverts commit 2e44adb066.

Reverted https://github.com/pytorch/pytorch/pull/106104 on behalf of https://github.com/huydhn due to Sorry for reverting this but it is failing the inductor job in trunk 2e44adb066. I will add the ciflow/inductor label to the PR to make sure that the test runs there ([comment](https://github.com/pytorch/pytorch/pull/106104#issuecomment-1683119990))
2023-08-17 23:45:31 +00:00
CaoE
2e44adb066 Add backward check for test_memory_format (#106104)
Add backward check for test_memory_format.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106104
Approved by: https://github.com/mikaylagawarecki
2023-08-17 21:19:34 +00:00
Mikayla Gawarecki
c9be60cd0e Add error inputs to ModuleInfo (mirroring OpInfo) (#106325)
Add infra for error inputs to ModuleInfos and migrate the first few error-input tests from test_nn.py (more to come!)
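For illustration, the kind of case an error-input entry encodes, a bad input plus its expected error (a sketch, not the internal ModuleInfo API):

```python
import torch
import torch.nn as nn

m = nn.Linear(3, 5)
bad = torch.randn(2, 4)        # in_features mismatch: 4 != 3
try:
    m(bad)
except RuntimeError:
    pass                       # the shape mismatch raises RuntimeError
else:
    raise AssertionError("expected a RuntimeError")
```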

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106325
Approved by: https://github.com/albanD
2023-08-01 12:49:56 +00:00
Mikayla Gawarecki
e18d53e2df Added ModuleInfo test for meta device ctx init (#105871)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105871
Approved by: https://github.com/albanD
2023-07-26 01:57:54 +00:00
Jens Glaser
86e0eda18d Add partial derivative unit tests (#103809)
Adds the unit tests requested in #95810

This PR also addresses a gap in unit testing of gradients, as `gradcheck` always performs total derivatives w.r.t. all arguments and module parameters. Some modules have different code paths for partial derivatives, e.g. `LayerNorm`, and those should be tested separately.
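For illustration, a partial derivative can be checked by marking only one argument as differentiable (a minimal sketch using `LayerNorm`'s functional form):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, dtype=torch.double, requires_grad=True)
w = torch.randn(3, dtype=torch.double)   # requires_grad=False: held constant

def f(x):
    return F.layer_norm(x, (3,)) * w

# checks d f / d x only, with w fixed -- a partial, not total, derivative
torch.autograd.gradcheck(f, (x,))
```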

The PR has the following limitations:
- it does not test partial derivatives w.r.t. every combination of arguments, which would exponentially increase CI time.
- it does not implement the same logic for Hessians, where the increase in CI time would be quadratic in the number of arguments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103809
Approved by: https://github.com/kit1980
2023-06-25 00:36:10 +00:00
Elias Ellison
40d70ba7ed Remove a number of fixed skips (#103162)
Also adds `PYTORCH_TEST_WITH_AOT_EAGER` to distinguish errors coming from aot_autograd rather than inductor (not tested in CI, but useful for local debugging)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103162
Approved by: https://github.com/desertfire
2023-06-08 17:37:59 +00:00
Fuzzkatt
45843c7f41 test_memory_format fix for test_modules.py (#102006)
add with_tf32_off, add sm80 check for thresholds

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102006
Approved by: https://github.com/ngimel
2023-05-24 02:32:45 +00:00
Ramin Azarmehr
cecfcf1e17 [MPS] Handle MPS failures of test_modules.py in common_modules.py (#95334)
- Also cleaned up the skipMPS code in `test_modules.py`.
- Added `skipMPS` for unsupported or failing tests on MPS backend in common_modules.py.
   (We'll remove `skipMPS` from those tests once a fix is available for them.)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95334
Approved by: https://github.com/kulinseth, https://github.com/albanD
2023-05-09 03:55:16 +00:00
Mikayla Gawarecki
2c6c7deeb3 Added ModuleInfos for Pooling ops (#98358)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98358
Approved by: https://github.com/albanD
2023-04-05 19:39:07 +00:00
Aaron Gokaslan
67d9790985 [BE] Apply almost all remaining flake8-comprehension checks (#94676)
Applies the remaining flake8-comprehension fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, more performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into just the set call.
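For illustration:

```python
# before: a generator expression immediately materialized by set()
s = set(x * 2 for x in range(10))

# after: a set comprehension -- more succinct and avoids the extra generator
s = {x * 2 for x in range(10)}
```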

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
2023-02-12 01:01:25 +00:00
PyTorch MergeBot
09eb4c2a70 Revert "Update Module.__setattr__ to respect property setters (#92044)"
This reverts commit 0c8f4b5893.

Reverted https://github.com/pytorch/pytorch/pull/92044 on behalf of https://github.com/saitcakmak due to Caused regressions in a Meta internal model
2023-01-21 02:39:21 +00:00
Sait Cakmak
0c8f4b5893 Update Module.__setattr__ to respect property setters (#92044)
Fixes #52664. Checks if the attribute is a property that defines a setter and uses `fset` in `__setattr__` rather than registering an inaccessible module / parameter.

This is BC-breaking as the attribute setters on nn.Module properties used to be ignored and now will be called properly.
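A minimal sketch of the behavior change (names are illustrative):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self._w = nn.Parameter(torch.ones(2))

    @property
    def w(self):
        return self._w

    @w.setter
    def w(self, value):
        # custom logic that previously never ran for Parameter assignments
        self._w = nn.Parameter(value.detach().clone())

m = M()
# before this PR: registered an inaccessible parameter named "w",
# shadowed by the property; after: routed through the setter's fset
m.w = nn.Parameter(torch.zeros(2))
```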

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92044
Approved by: https://github.com/albanD
2023-01-17 20:00:06 +00:00
Huy Do
61cdae0ce5 Switch Windows CI jobs to G5 runners (#91727)
### Changelist

* Change Windows TORCH_CUDA_ARCH_LIST from `7.0` to `8.6` to be compatible with the NVIDIA A10G GPU
* Correctly disable some tests that require flash attention, which is not available on Windows at the moment. This has been fixed by https://github.com/pytorch/pytorch/pull/91979
* The G5 runner has an `AMD EPYC 7R32` CPU, not an Intel one
  * This seems to change the behavior of `GetDefaultMobileCPUAllocator` in `cpu_profiling_allocator_test`. This might need to be investigated further (TODO: TRACKING ISSUE). In the meantime, the test has been updated to correctly use `GetDefaultCPUAllocator` instead of `GetDefaultMobileCPUAllocator` for the mobile build
  * Also, one periodic test, `test_cpu_gpu_parity_nn_Conv3d_cuda_float32`, fails with a tensor-not-close error when comparing grad tensors between CPU and GPU. This is fixed by turning off TF32 for the test.

###  Performance gain

* (CURRENT) p3.2xlarge - https://hud.pytorch.org/tts shows that each Windows CUDA shard (1-5 + functorch) takes about 2 hours to finish (duration)
* (NEW RUNNER) g5.4xlarge - A very rough estimate of the duration is 1h30m per shard, meaning a half-hour gain (**25%**)

### Pricing

On demand hourly rate:

* (CURRENT) p3.2xlarge: $3.428. Total = Total hours spent on Windows CUDA tests * 3.428
* (NEW RUNNER) g5.4xlarge: $2.36. Total = Total hours spent on Windows CUDA tests * Duration gain (0.75) * 2.36

So the current runner is not only more expensive but also slower. Switching to G5 runners for Windows should cut the cost by (3.428 - 0.75 * 2.36) / 3.428 = **~48%**

### Rolling out

https://github.com/pytorch/test-infra/pull/1376 needs to be reviewed and approved to ensure the capacity of the runner before PR can be merged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91727
Approved by: https://github.com/ZainRizvi, https://github.com/malfet, https://github.com/seemethere
2023-01-13 01:11:59 +00:00
Bin Bao
2c1efe7472 Enable some PyTorch core tests with inductor (#87490)
Summary:
1) Graph break on torch.random.set_rng_state since it blocks running
inductor core tests;
2) Add several inductor-specific skips;
3) Enable several core tests for inductor CI;

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87490
Approved by: https://github.com/eellison
2022-10-26 18:58:33 +00:00
Nikita Shulga
9eb4f9dd17 Tweak test tolerances to be compatible with A10G (#86538)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86538
Approved by: https://github.com/ngimel
2022-10-11 23:31:48 +00:00
Peter Bell
c2e9b9ec4a TestModule: Don't assume sample inputs version counter is zero (#85734)
The intention of this assert is to check that the input tensor's version
counter has increased, indicating it was mutated by `m_inplace`.
However, the cloning step that creates `input_arg_clone` resets the
version counter to zero, so this test may fail if the sample input
was ever mutated during its creation.
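For illustration of the version-counter behavior:

```python
import torch

t = torch.zeros(3)
t.add_(1)              # in-place op bumps the version counter
print(t._version)      # 1

c = t.clone()
print(c._version)      # 0 -- the clone starts from a fresh counter
```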
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85734
Approved by: https://github.com/albanD
2022-09-28 19:32:06 +00:00
Joel Benjamin Schlosser
d3dba3c42a Fix ModuleInfo skip logic (#80471)
Fixes #80247

This PR:
* Refactors the skip logic as done for OpInfo in #62713, fixing the logic error
* For tests that were wrongly skipped before and now fail:
    * Fix `TestModule.test_cpu_gpu_parity` to support Lazy modules - this was affecting `LazyConv*`
    * Adds `@expectedFailure` decorators and a follow-up message to address `Conv*` failures on `TestModule.test_memory_format`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80471
Approved by: https://github.com/mruberry
2022-07-08 19:52:15 +00:00
Joel Benjamin Schlosser
70d6446a3d Support both train / eval modes for ModuleInfo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78735

Approved by: https://github.com/albanD
2022-06-09 20:57:17 +00:00
PyTorch MergeBot
854c833f81 Revert "Support both train / eval modes for ModuleInfo"
This reverts commit 12658fcd5b.

Reverted https://github.com/pytorch/pytorch/pull/78735 on behalf of https://github.com/malfet due to Broke eval tests on Win, 10.2 and ROCM, see 12658fcd5b
2022-06-09 03:37:55 +00:00
Joel Benjamin Schlosser
12658fcd5b Support both train / eval modes for ModuleInfo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78735

Approved by: https://github.com/albanD
2022-06-08 23:20:17 +00:00
Kulin Seth
e011a8e18b Enable PyTorch operations on MPS Backend. (#77343)
Add PyTorch operations to MPS backend.

- https://github.com/pytorch/pytorch/issues/77394
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77343
Approved by: https://github.com/albanD
2022-05-13 18:28:53 +00:00
Philip Meier
1f74e082e2 only compare attributes for meta tensors (#72508)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72508

Todo:

- [x] document this behavior
- [x] add tests
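For illustration, the documented behavior after this change (a sketch):

```python
import torch

a = torch.empty(2, 3, device="meta")
b = torch.empty(2, 3, device="meta")

# meta tensors carry no data, so only attributes (shape, dtype, ...) are compared
torch.testing.assert_close(a, b)
```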

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D34262452

Pulled By: ezyang

fbshipit-source-id: bc5c9653d5c3ad5c6efccc9c8e0efc0d28e15104
(cherry picked from commit 233142c88e)
2022-02-17 02:33:08 +00:00
kshitij12345
b372be4211 [nn] lstm : no batch dim support (#71056)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/60585

TODO:
* [x] Update docs
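With this change, `nn.LSTM` accepts unbatched inputs; a small sketch:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 10)         # (seq_len, input_size) -- no batch dimension
out, (h, c) = lstm(x)
print(out.shape)               # torch.Size([5, 20])
```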

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71056

Reviewed By: samdow

Differential Revision: D33638643

Pulled By: jbschlosser

fbshipit-source-id: c0949829de8a8e6e7b2873f459a8d7da597a3be3
(cherry picked from commit f94d5849f6)
2022-01-24 15:13:40 +00:00
Joel Schlosser
7b8f73dd32 No-batch-dim support for ConvNd (#70506)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/70506
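With this change, `ConvNd` accepts unbatched inputs; a small sketch:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)
x = torch.randn(3, 32, 32)     # (C, H, W) -- no batch dimension
y = conv(x)
print(y.shape)                 # torch.Size([8, 30, 30])
```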

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33355034

Pulled By: jbschlosser

fbshipit-source-id: 5a42645299b1d82cee7d461826acca1c5b35a71c
2022-01-06 16:53:50 -08:00
Mikayla Gawarecki
fb78a31916 Add testing across mem_formats to ModuleInfos (#69317)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69317
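For illustration of the memory formats being exercised:

```python
import torch

x = torch.randn(2, 3, 4, 4)                    # NCHW, torch.contiguous_format
y = x.to(memory_format=torch.channels_last)    # same shape, NHWC strides

print(y.is_contiguous(memory_format=torch.channels_last))  # True
```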

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33285780

Pulled By: mikaylagawarecki

fbshipit-source-id: 1d19293e640e5581351a9c74892dcac4bcdd3f1d
2021-12-29 14:53:27 -08:00
Mikayla Gawarecki
14f4b91f6e Add Nondeterministic Tol to gradient test in test_modules (#69402)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69402

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D33285781

Pulled By: mikaylagawarecki

fbshipit-source-id: f1ab43173d4f558adc943a8acefc13c34cfa5cfa
2021-12-29 14:51:56 -08:00