Fixes #80247
This PR:
* Refactors the skip logic as done for OpInfo in #62713, fixing the logic error
* For tests that were wrongly skipped before and now fail:
  * Fixes `TestModule.test_cpu_gpu_parity` to support Lazy modules - this was affecting `LazyConv*`
  * Adds `@expectedFailure` decorators and a follow-up message to address `Conv*` failures on `TestModule.test_memory_format`
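For context, a hedged sketch of what such an expected-failure marker can look like; the import path and argument names follow the internal testing utilities, but the snippet is illustrative rather than the exact diff:
```python
import unittest
from torch.testing._internal.common_methods_invocations import DecorateInfo

# Attached to the relevant Conv* ModuleInfo entries, a marker like this turns
# the known TestModule.test_memory_format failure into an expected failure
# (instead of silently skipping the test).
conv_memory_format_xfail = DecorateInfo(
    unittest.expectedFailure, 'TestModule', 'test_memory_format')
```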
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80471
Approved by: https://github.com/mruberry
In general, if we expect users to use a base class such as
`_ConvNd`, we should rename it to something like `BaseConv`.
However, because this base class is only used inside the AO
packages, there is no need to expose it to users.
Test Plan:
```
python test/test_quantization.py
python test/test_module_init.py
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77344
Approved by: https://github.com/jerryzh168
Summary:
This PR absolves `_TestParametrizer`s (e.g. `ops`, `modules`, `parametrize`) of the responsibility of adding device type (e.g. `'cpu'`, `'cuda'`, etc.) / dtype (e.g. `'float32'`) to generated test names. This fixes repeated instances of the device string being added to generated test names (e.g. `test_batch_norm_training_True_cuda_track_running_stats_True_cuda_affine_True_cuda`).
Placing the device / dtype suffix is now handled by `instantiate_device_type_tests()` instead, so it is added a single time: `<device>_<dtype>` is placed at the end of the test name unconditionally, maintaining the current naming convention.
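For illustration, a small hypothetical device-type test showing where the suffix now lands: the parameter portion of the name comes from `parametrize`, and the single trailing `<device>_<dtype>` comes from `instantiate_device_type_tests()`:
```python
import torch
from torch.testing._internal.common_device_type import instantiate_device_type_tests, dtypes
from torch.testing._internal.common_utils import TestCase, parametrize, run_tests

class TestExample(TestCase):
    # Generates names along the lines of
    #   test_batch_norm_affine_True_track_running_stats_True_cuda_float32
    # with exactly one device/dtype suffix (exact parameter ordering may differ).
    @parametrize("affine", [False, True])
    @parametrize("track_running_stats", [False, True])
    @dtypes(torch.float32)
    def test_batch_norm(self, device, dtype, affine, track_running_stats):
        m = torch.nn.BatchNorm2d(3, affine=affine, track_running_stats=track_running_stats)
        m.to(device=device, dtype=dtype)
        out = m(torch.randn(2, 3, 4, 4, device=device, dtype=dtype))
        self.assertEqual(out.shape, (2, 3, 4, 4))

instantiate_device_type_tests(TestExample, globals())

if __name__ == "__main__":
    run_tests()
```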
As part of this work, I also tightened the semantics through some additional error case handling:
* Composing multiple decorators that each try to handle the same parameter will error out with a nice message (see the sketch after this list). This includes the case of trying to compose `modules` + `ops`, as they each try to handle `dtype`. Similarly, `ops` + `dtypes` is forbidden when both try to handle `dtype`. This required changes in the following test files:
* `test/test_unary_ufuncs.py`
* `test/test_foreach.py`
* The `modules` / `ops` decorators will now error out with a nice message if used with `instantiate_parametrized_tests()` instead of `instantiate_device_type_tests()`, since they're not (currently) written to work outside of a device-specific context.
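A hedged sketch of the first error case; the message wording is paraphrased, the point being that the conflict is reported when the tests are instantiated rather than producing silently broken names:
```python
from torch.testing._internal.common_device_type import instantiate_device_type_tests, ops
from torch.testing._internal.common_methods_invocations import op_db
from torch.testing._internal.common_modules import modules, module_db
from torch.testing._internal.common_utils import TestCase

class TestComposition(TestCase):
    # Both @modules and @ops want to own the `dtype` parameter, so this
    # composition is rejected with an explicit error.
    @modules(module_db)
    @ops(op_db)
    def test_modules_and_ops(self, device, dtype, module_info, op):
        pass

# Raises at instantiation time instead of generating malformed test names.
instantiate_device_type_tests(TestComposition, globals())
```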
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65217
Reviewed By: mruberry
Differential Revision: D32627303
Pulled By: jbschlosser
fbshipit-source-id: c2957228353ed46a0b7da8fa1a34c67598779312
Summary:
Follow-up to https://github.com/pytorch/pytorch/issues/61935
This PR:
1. Adds a test for non-contiguous tensors
2. Fixes a bug in `NLLLoss` that was caught by the test.
The reason this was not caught in `common_nn` is that `CriterionTest` overrides `test_cuda` but does not call `test_nonconfig`.
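The gist of the non-contiguity check, in simplified form (the real test lives in `test/test_modules.py`):
```python
import torch

loss = torch.nn.NLLLoss()
target = torch.tensor([0, 1, 2, 0])

# Slicing with a step produces a non-contiguous view; its .contiguous() copy
# holds the same values in contiguous memory.
base = torch.log_softmax(torch.randn(4, 6), dim=1)
noncontig = base[:, ::2]          # shape (4, 3), non-contiguous
contig = noncontig.contiguous()
assert not noncontig.is_contiguous() and contig.is_contiguous()

# The module must produce the same result regardless of memory layout.
torch.testing.assert_close(loss(noncontig, target), loss(contig, target))
```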
cc albanD mruberry jbschlosser walterddr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64954
Reviewed By: zou3519
Differential Revision: D31174149
Pulled By: jbschlosser
fbshipit-source-id: a16073e59b40ccc01c82ede016b63a8db2e810f5
Summary:
Follow-up to https://github.com/pytorch/pytorch/pull/61935
This PR adds inplace checks to `test_modules`. This version checks the module constructor for an `inplace` argument and performs the check automatically.
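A simplified sketch of the idea (not the actual implementation): look for an `inplace` constructor argument and, when present, compare the in-place and out-of-place results:
```python
import inspect
import torch

def check_inplace(module_cls, *ctor_args, sample_input):
    # Only modules whose constructor exposes `inplace` are checked.
    if 'inplace' not in inspect.signature(module_cls.__init__).parameters:
        return
    expected = module_cls(*ctor_args, inplace=False)(sample_input.clone())
    arg = sample_input.clone()
    result = module_cls(*ctor_args, inplace=True)(arg)
    torch.testing.assert_close(result, expected)
    torch.testing.assert_close(arg, expected)  # the input itself was mutated

check_inplace(torch.nn.ReLU, sample_input=torch.randn(8))
```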
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63739
Reviewed By: saketh-are
Differential Revision: D30737774
Pulled By: jbschlosser
fbshipit-source-id: 8813534511e9296c8424d1ca878412726ddd4043
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63554
Following https://github.com/pytorch/pytorch/pull/61840#issuecomment-884087809, this deprecates all the dtype getters publicly exposed in the `torch.testing` namespace. The reasons for this are twofold:
1. If someone is not familiar with the C++ dispatch macros PyTorch uses, the names are misleading. For example `torch.testing.floating_types()` will only give you `float32` and `float64`, skipping `float16` and `bfloat16`.
2. The dtype getters provide very minimal functionality that can be easily emulated by downstream libraries.
We thought about [providing a replacement](https://gist.github.com/pmeier/3dfd2e105842ad0de4505068a1a0270a), but ultimately decided against it. The major problem is BC: if we keep it, either the namespace gets messy again every time a new dtype is added, or we need to somehow version the return values of the getters.
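For example, a downstream library that relied on `torch.testing.floating_types()` can simply spell out the tuple it actually wants (names below are arbitrary):
```python
import torch

# Equivalent of the deprecated torch.testing.floating_types():
FLOATING_DTYPES = (torch.float32, torch.float64)
# A library that also wants the reduced-precision types says so explicitly:
ALL_FLOATING_DTYPES = FLOATING_DTYPES + (torch.float16, torch.bfloat16)
```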
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D30662206
Pulled By: mruberry
fbshipit-source-id: a2bdb10ab02ae665df1b5b76e8afa9af043bbf56
Summary:
This PR moves some modules into `common_modules` to see what it looks like.
While migrating some no-batch-dim modules into `common_modules`, I noticed that `desc` is not used for the name. This means we cannot use `-k` to filter tests. This PR moves the sample generation into `_parametrize_test` and passes the already generated `module_input` to users of `modules(module_db)`.
I can see this is a little different from OpInfo and would be happy to revert to the original implementation of `modules`.
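With the sample name folded into the generated test name, `-k` filtering works as expected; e.g. (the name fragment is illustrative):
```
python test/test_modules.py -k nn_Linear
```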
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62999
Reviewed By: heitorschueroff
Differential Revision: D30522737
Pulled By: jbschlosser
fbshipit-source-id: 7ed1aeb3753fc97a4ad6f1a3c789727c78e1bc73
Summary:
This PR contains the initial version of `ModuleInfo` for use in testing modules. The design philosophy taken here is to start small and simple and build out / refactor as needed when more test coverage or `ModuleInfo` entries are added. As such, it's not intended for general usage yet. The PR contains the following:
* (new file) `torch/testing/_internal/common_modules.py`
* `ModuleInfo` definition - metadata for each module to use in testing
* `module_db` - the actual `ModuleInfo` database; currently contains entries for two modules
* `ModuleInput` - analogous to `SampleInput` from OpInfo; contains `FunctionInput`s for both constructor and forward pass inputs
* Constructor and forward pass inputs are tied together within a `ModuleInput` because they are likely correlated
* `FunctionInput` - just contains args and kwargs to pass to a function (is there a nicer way to do this?)
* `modules` decorator - analogous to `ops`; specifies a set of modules to run a test over
* Some constants used to keep track of all modules under torch.nn:
* `MODULE_NAMESPACES` - list of all namespaces containing modules
* `MODULE_CLASSES` - list of all module class objects
* `MODULE_CLASS_NAMES` - dict from module class object to nice name (e.g. torch.nn.Linear -> "nn.Linear")
* (new file) `test/test_modules.py`
* Uses the above to define tests over modules
* Currently, there is one test for demonstration, `test_forward`, which instantiates a module, runs its forward pass, and compares it to a reference, if one is defined
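To make the moving parts concrete, here is a hedged, self-contained sketch; signatures are simplified, and the real definitions live in the files listed above:
```python
import torch
from torch.testing._internal.common_modules import ModuleInfo, ModuleInput, FunctionInput, modules
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import TestCase, run_tests

# module_inputs_func produces ModuleInputs: constructor args paired with forward args.
def module_inputs_nn_Linear(module_info, device, dtype, requires_grad, **kwargs):
    x = torch.randn(4, 10, device=device, dtype=dtype, requires_grad=requires_grad)
    return [ModuleInput(constructor_input=FunctionInput(10, 5),
                        forward_input=FunctionInput(x))]

linear_info = ModuleInfo(torch.nn.Linear, module_inputs_func=module_inputs_nn_Linear)

class TestModuleSketch(TestCase):
    @modules([linear_info])
    def test_forward(self, device, dtype, module_info, **kwargs):
        for module_input in module_info.module_inputs_func(
                module_info, device=device, dtype=dtype, requires_grad=False):
            # Instantiate the module from the constructor inputs, then run forward.
            m = module_info.module_cls(*module_input.constructor_input.args,
                                       **module_input.constructor_input.kwargs)
            m.to(device=device, dtype=dtype)
            m(*module_input.forward_input.args, **module_input.forward_input.kwargs)

instantiate_device_type_tests(TestModuleSketch, globals())

if __name__ == "__main__":
    run_tests()
```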
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61935
Reviewed By: mruberry
Differential Revision: D29881832
Pulled By: jbschlosser
fbshipit-source-id: cc05c7d85f190a3aa42d55d4c8b01847d1efd57f