Commit Graph

525 Commits

Thiago Crepaldi
23dbe2b517 Add test for skipping hf logging during export (#123410)
https://github.com/pytorch/pytorch/pull/123402 already supports HF logging because the HF logger is built on Python's `logging` module.

This PR only adds a test to guard against regressions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123410
Approved by: https://github.com/BowenBao, https://github.com/malfet
2024-04-12 17:42:46 +00:00
PyTorch MergeBot
b9d2b75bac Revert "Add test for skipping hf logging during export (#123410)"
This reverts commit ba55ef8e21.

Reverted https://github.com/pytorch/pytorch/pull/123410 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/123402#issuecomment-2044236088))
2024-04-09 06:28:12 +00:00
Thiago Crepaldi
ba55ef8e21 Add test for skipping hf logging during export (#123410)
https://github.com/pytorch/pytorch/pull/123402 already supports HF logging because the HF logger is built on Python's `logging` module.

This PR only adds a test to guard against regressions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123410
Approved by: https://github.com/BowenBao, https://github.com/malfet
ghstack dependencies: #123402
2024-04-08 23:20:30 +00:00
Catherine Lee
de950039fc Use .get in xml parsing (#122103)
Check that the `classname` attribute actually exists.
#122017
I expect this code path to be hit very rarely.

At some point, we should just remove this parsing altogether, since everything uses pytest now...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122103
Approved by: https://github.com/huydhn
2024-03-20 04:07:49 +00:00
Aaron Orenstein
edd80f87b8 Prevent infinite recursion within Tensor.__repr__ (#120206)
`Tensor.__repr__` calls functions which can perform logging, which ends up logging `self` (via `__repr__` again), causing an infinite loop. Instead of logging all the args in FakeTensor.dispatch, log the actual parameters (and use `id` to log the tensor itself).

The change to torch/testing/_internal/common_utils.py came up during testing: in some ways of running the test, `parts` was `('test', 'test_testing.py')`, so `i` was 0 and we were doing a join on `()`, which caused an error.

Repro:
```
import torch
from torch._subclasses.fake_tensor import FakeTensor, FakeTensorMode

# Printing the fake sparse tensor previously recursed infinitely when
# logging was enabled (TORCH_LOGS=+all).
t = torch.sparse_coo_tensor(((0, 1), (1, 0)), (1, 2), size=(2, 2))
t2 = FakeTensor.from_tensor(t, FakeTensorMode())
print(repr(t2))
```
and run with `TORCH_LOGS=+all`
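
A minimal sketch of the logging pattern described above (helper name and structure are illustrative, not the actual FakeTensor.dispatch code):

```
import logging

import torch

log = logging.getLogger(__name__)

def log_dispatch(func_name, args):
    # Log tensor arguments by id() rather than repr(), so the logging path
    # cannot re-enter Tensor.__repr__ and recurse.
    safe_args = [
        f"Tensor(id={id(a)})" if isinstance(a, torch.Tensor) else repr(a)
        for a in args
    ]
    log.debug("%s(%s)", func_name, ", ".join(safe_args))
```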

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120206
Approved by: https://github.com/yanboliang, https://github.com/pearu
2024-03-07 02:24:45 +00:00
Guilherme Leobas
491c2b4665 Let torch dynamo inline torch.func.grad (#118407)
When dynamo sees torch.func.grad, it tries to inline all frames related to it.
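
A small usage sketch of the behavior described above (illustrative function; the real coverage lives in the dynamo test suite):

```
import torch

def f(x):
    return (x ** 2).sum()

# With this change, dynamo traces through torch.func.grad instead of
# falling back on it.
compiled_grad = torch.compile(torch.func.grad(f))
print(compiled_grad(torch.randn(3)))
```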

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118407
Approved by: https://github.com/zou3519
2024-02-28 20:05:00 +00:00
James Wu
82099ab87b [easy] Reword unexpected success error messages and generated github issues now that we have sentinel files (#120766)
It's a bit annoying to have to read through the test name in verbose mode just to see what the test's sentinel file is actually called when encountering an unexpected success. Now that we have sentinel files, we can directly list the file path from root in the error message.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120766
Approved by: https://github.com/Skylion007
2024-02-28 11:15:29 +00:00
Oguz Ulgen
a5548c6886 Create a sentinel file for each dynamo test failure (#120355)
Created via
```
import os
current_dir = os.path.dirname(os.path.abspath(__file__))
directory = os.path.join(current_dir, 'dynamo_expected_failures')
# dynamo_expected_failures is the existing list of failing dynamo test names
for name in dynamo_expected_failures:
    path = os.path.join(directory, name)
    with open(path, 'w') as fp:
        pass
```

Differential Revision: [D54036062](https://our.internmc.facebook.com/intern/diff/D54036062)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120355
Approved by: https://github.com/aorenste, https://github.com/yanboliang
2024-02-23 05:22:11 +00:00
Alexander Grund
99cb807e25 Skip test_wrap_bad if run under pytest (#115070)
Pytest replaces sys.stdout/stderr with `TextIOWrapper` instances that do not support `fileno()`.
Hence skip that test in this case.

Fixes #115069
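
A sketch of the guard described above (class and helper names are illustrative):

```
import sys
import unittest

def _stdout_supports_fileno():
    # Under pytest's output capturing, sys.stdout/stderr are replaced by
    # objects whose fileno() is unusable, so probe before relying on it.
    try:
        sys.stdout.fileno()
    except Exception:
        return False
    return True

@unittest.skipIf(not _stdout_supports_fileno(),
                 "sys.stdout has no usable fileno() (e.g. under pytest)")
class TestWrapBad(unittest.TestCase):
    def test_wrap_bad(self):
        pass
```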

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115070
Approved by: https://github.com/clee2000
2024-02-15 00:10:05 +00:00
SandishKumarHN
db228f1efd [Lint] replace [assigment] with [method-assign] for methods (#119706)
Started with the TODO fix from https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/common_utils.py#L746,
using ignore[method-assign] instead of ignore[assignment].

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119706
Approved by: https://github.com/Skylion007, https://github.com/malfet, https://github.com/kit1980
2024-02-13 02:06:04 +00:00
Mikayla Gawarecki
3372aa51b4 Integrate swap_tensors into nn.Module.load_state_dict (#117913)
Added a `torch.Tensor` method that defines how to transform `other` (a value in the state dictionary) so it can be loaded into `self` (a param/buffer in an `nn.Module`) before swapping via `torch.utils.swap_tensors`:
* `param.module_load(sd[key])`

This method can be overridden using `__torch_function__`.
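
A hedged sketch, assuming the `module_load` hook described above, of a tensor subclass customizing how a state-dict value is transformed before the swap:

```
import torch

class CastOnLoadTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.Tensor.module_load:
            param, other = args[0], args[1]
            # e.g. cast the incoming state-dict value to the parameter's dtype
            return other.to(param.dtype)
        return super().__torch_function__(func, types, args, kwargs)
```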

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117913
Approved by: https://github.com/albanD
2024-02-09 22:32:29 +00:00
Mikayla Gawarecki
23b030a79c [easy] Add testing utilties for torch.nn.utils.set_swap_module_params_on_conversion (#118023)
For the above PR, to parametrize the existing `load_state_dict` tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118023
Approved by: https://github.com/albanD
ghstack dependencies: #118028, #117167
2024-02-07 18:55:44 +00:00
Edward Z. Yang
dab16b6b8e s/supress/suppress/ (#119132)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119132
Approved by: https://github.com/kit1980, https://github.com/malfet
2024-02-04 00:54:14 +00:00
rzou
bd8c91efc0 Remove some now-succeeding tests from dynamo_test_failures.py (#118928)
Test Plan:
- wait for CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118928
Approved by: https://github.com/aorenste, https://github.com/anijain2305, https://github.com/yanboliang
2024-02-02 19:49:26 +00:00
Catherine Lee
08d90a1ea9 Workaround for super() calls in test_torchinductor_dynamic_shapes (#118586)
Info about super in dynamic classes:
https://stackoverflow.com/questions/71879642/how-to-pass-function-with-super-when-creating-class-dynamically
https://stackoverflow.com/questions/43782944/super-does-not-work-together-with-type-supertype-obj-obj-must-be-an-i

Calling super(TestCase) actually calls TestCase's parent's functions, bypassing TestCase's own functions.

Mainly doing this because it's causing the disable bot to spam.

Test: checked locally and verified that https://github.com/pytorch/pytorch/issues/117954 actually got skipped.

Logs for `inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_unbacked_index_select_cuda`:
https://ossci-raw-job-status.s3.amazonaws.com/log/21083466405
AFAIK this PR doesn't actually cause the test to fail; it just surfaces the error, since the mem leak check wasn't running previously.
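
A small sketch of the dynamic-class super() pitfall behind this workaround (illustrative names, not the actual test code):

```
class Base:
    def setUp(self):
        print("Base.setUp")

def setUp(self):
    # Zero-argument super() needs the __class__ cell created by a class body,
    # which a plain function passed to type() does not have, so be explicit.
    super(Derived, self).setUp()

Derived = type("Derived", (Base,), {"setUp": setUp})
Derived().setUp()  # prints "Base.setUp"
```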

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118586
Approved by: https://github.com/huydhn
2024-02-02 00:40:37 +00:00
Yu, Guangye
a205e7bf56 [3/4] Intel GPU Runtime Upstreaming for Device (#116850)
# Motivation
Following [[1/4] Intel GPU Runtime Upstreaming for Device](https://github.com/pytorch/pytorch/pull/116019), and as mentioned in [[RFC] Intel GPU Runtime Upstreaming](https://github.com/pytorch/pytorch/issues/114842), this third PR covers the changes under `libtorch_python`.

# Design
This PR primarily offers device-related APIs in the Python frontend, including:
- `torch.xpu.is_available`
- `torch.xpu.device_count`
- `torch.xpu.current_device`
- `torch.xpu.set_device`
- `torch.xpu.device`
- `torch.xpu.device_of`
- `torch.xpu.get_device_name`
- `torch.xpu.get_device_capability`
- `torch.xpu.get_device_properties`
- ====================
- `torch.xpu._DeviceGuard`
- `torch.xpu._is_compiled`
- `torch.xpu._get_device`

# Additional Context
We will implement the support of lazy initialization in the next PR.
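
A brief usage sketch of the APIs listed above (assumes an XPU-enabled build with at least one device):

```
import torch

if torch.xpu._is_compiled() and torch.xpu.is_available():
    print(torch.xpu.device_count())
    torch.xpu.set_device(0)
    print(torch.xpu.get_device_name(0))
    print(torch.xpu.get_device_capability(0))
```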

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116850
Approved by: https://github.com/EikanWang, https://github.com/jgong5, https://github.com/gujinghui, https://github.com/malfet
2024-02-01 12:31:26 +00:00
PyTorch MergeBot
483001e846 Revert "Workaround for super() calls in test_torchinductor_dynamic_shapes (#118586)"
This reverts commit f2682e75e6.

Reverted https://github.com/pytorch/pytorch/pull/118586 on behalf of https://github.com/atalman due to Broke slow tests ([comment](https://github.com/pytorch/pytorch/pull/118586#issuecomment-1919810802))
2024-01-31 19:44:29 +00:00
Catherine Lee
f2682e75e6 Workaround for super() calls in test_torchinductor_dynamic_shapes (#118586)
Info about super in dynamic classes:
https://stackoverflow.com/questions/71879642/how-to-pass-function-with-super-when-creating-class-dynamically
https://stackoverflow.com/questions/43782944/super-does-not-work-together-with-type-supertype-obj-obj-must-be-an-i

Calling super(TestCase) actually calls TestCase's parent's functions, bypassing TestCase's own functions.

Mainly doing this because it's causing the disable bot to spam.

Test: checked locally and verified that https://github.com/pytorch/pytorch/issues/117954 actually got skipped.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118586
Approved by: https://github.com/huydhn
2024-01-30 21:34:05 +00:00
Edward Z. Yang
9bce208dfb Replace follow_imports = silent with normal (#118414)
This is a lot of files changed! Don't panic! Here's how it works:

* Previously, we set `follow_imports = silent` for our mypy.ini configuration. Per https://mypy.readthedocs.io/en/stable/running_mypy.html#follow-imports, what this does is whenever we have an import to a module which is not listed as a file to be typechecked in mypy, we typecheck it as normal but suppress all errors that occurred in that file.
* When mypy is run inside lintrunner, the list of files is precisely the files covered by the glob in lintrunner.toml, but with files in excludes excluded.
* The top-level directive `# mypy: ignore-errors` instructs mypy to typecheck the file as normal, but ignore all errors.
* Therefore, it should be equivalent to set `follow_imports = normal`, if we put `# mypy: ignore-errors` on all files that were previously excluded from the file list.
* Having done this, we can remove the exclude list from .lintrunner.toml, since excluding a file from typechecking is baked into the files themselves.
* torch/_dynamo and torch/_inductor were previously in the exclude list, because they were covered by MYPYINDUCTOR. It is not OK to mark these as `# mypy: ignore-errors` as this will impede typechecking on the alternate configuration. So they are temporarily being checked twice, but I am suppressing the errors in these files as the configurations are not quite the same. I plan to unify the configurations so this is only a temporary state.
* There were some straggler type errors after these changes somehow, so I fixed them as needed. There weren't that many.

In the future, to start type checking a file, just remove the ignore-errors directive from the top of the file.

The codemod was done with this script authored by GPT-4:

```
import glob

exclude_patterns = [
    ...
]

for pattern in exclude_patterns:
    for filepath in glob.glob(pattern, recursive=True):
        if filepath.endswith('.py'):
            with open(filepath, 'r+') as f:
                content = f.read()
                f.seek(0, 0)
                f.write('# mypy: ignore-errors\n\n' + content)
```

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118414
Approved by: https://github.com/thiagocrepaldi, https://github.com/albanD
2024-01-27 02:44:11 +00:00
PyTorch MergeBot
533637d9a3 Revert "Check if enable inside run call (#118101)"
This reverts commit 2abb812a78.

Reverted https://github.com/pytorch/pytorch/pull/118101 on behalf of https://github.com/clee2000 due to broke periodic multigpu test some how 6fc015fedc ([comment](https://github.com/pytorch/pytorch/pull/118101#issuecomment-1912357321))
2024-01-26 16:41:56 +00:00
Alexander Grund
b5b36cf0c4 Fix failure of test_dynamo_distributed & test_inductor_collectives (#117741)
When CUDA is not available, `c10d.init_process_group("nccl", ...)` will fail with
> RuntimeError: ProcessGroupNCCL is only supported with GPUs, no GPUs found!

Hence, add a corresponding skip marker to the classes deriving from DynamoDistributedSingleProcTestCase, next to the `requires_nccl` marker.
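
A sketch of the skip described above (class and test names are illustrative):

```
import unittest

import torch

@unittest.skipIf(not torch.cuda.is_available(), "NCCL requires CUDA GPUs")
class TestCollectivesSingleProc(unittest.TestCase):
    def test_allreduce(self):
        pass  # would call c10d.init_process_group("nccl", ...) here
```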

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117741
Approved by: https://github.com/ezyang, https://github.com/malfet
2024-01-25 13:25:36 +00:00
Peter Bell
7c33ce7702 [CI] Install dill in ci (#116214)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116214
Approved by: https://github.com/malfet
ghstack dependencies: #116230
2024-01-24 23:42:35 +00:00
Catherine Lee
2abb812a78 Check if enable inside run call (#118101)
In theory, this way we never have to worry about subclasses calling super().setUp() ever again.

Also, dynamically creating classes (e.g. via `type()` in instantiate_device_type_tests) makes super() calls a bit odd:
https://stackoverflow.com/questions/71879642/how-to-pass-function-with-super-when-creating-class-dynamically
https://stackoverflow.com/questions/43782944/super-does-not-work-together-with-type-supertype-obj-obj-must-be-an-i
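
A hedged sketch of the idea (hypothetical helper; not the actual common_utils implementation):

```
import unittest

class TestCase(unittest.TestCase):
    def _check_if_enable(self):
        pass  # consult the skip/disable lists here

    def run(self, result=None):
        # Doing the check inside run() means it happens even when a subclass
        # forgets to call super().setUp().
        self._check_if_enable()
        return super().run(result)
```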

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118101
Approved by: https://github.com/huydhn
2024-01-24 22:38:41 +00:00
PyTorch MergeBot
af9b6fa04e Revert "Check if enable inside run call (#118101)"
This reverts commit 6fc015fedc.

Reverted https://github.com/pytorch/pytorch/pull/118101 on behalf of https://github.com/clee2000 due to possibly causing failures on b025e5984c ([comment](https://github.com/pytorch/pytorch/pull/118101#issuecomment-1908940940))
2024-01-24 21:26:35 +00:00
Catherine Lee
6fc015fedc Check if enable inside run call (#118101)
In theory, this way we never have to worry about subclasses calling super().setUp() ever again.

Also, dynamically creating classes (e.g. via `type()` in instantiate_device_type_tests) makes super() calls a bit odd:
https://stackoverflow.com/questions/71879642/how-to-pass-function-with-super-when-creating-class-dynamically
https://stackoverflow.com/questions/43782944/super-does-not-work-together-with-type-supertype-obj-obj-must-be-an-i

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118101
Approved by: https://github.com/huydhn
2024-01-24 18:51:05 +00:00
PyTorch MergeBot
5ec2d7959d Revert "[ez] Provide a slightly better error message if process times out (#117865)"
This reverts commit 5538b37a06.

Reverted https://github.com/pytorch/pytorch/pull/117865 on behalf of https://github.com/clee2000 due to Does not play nice with retry_shell, which expects timeoutexpired, but i cant control the error message of that ([comment](https://github.com/pytorch/pytorch/pull/117865#issuecomment-1906640922))
2024-01-23 18:13:41 +00:00
chuanqiw
40890ba8e7 [CI] Add python test skip logic for XPU (#117621)
Add Python test skip logic for XPU.

For testing purposes, #116833 & #116850 were cherry-picked first, and the XPU test passed: https://github.com/pytorch/pytorch/actions/runs/7566746218/job/20604997985?pr=117621. They have now been reverted.

Works towards #114850

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117621
Approved by: https://github.com/huydhn
2024-01-23 08:20:42 +00:00
Catherine Lee
5538b37a06 [ez] Provide a slightly better error message if process times out (#117865)
Just a slightly clearer error message
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117865
Approved by: https://github.com/malfet, https://github.com/huydhn
2024-01-19 22:58:00 +00:00
rzou
16ebfbbf07 All tests run with markDynamoStrictTest now (#117763)
Last test to remove from the denylist was dynamo/test_logging.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117763
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117729, #117747, #117754, #117761
2024-01-18 19:42:41 +00:00
titaiwangms
ca0abf8606 Add inductor-specific testing strict mode denylist (#117553)
We have one for Dynamo that currently applies to all "compile"
configurations (PYTORCH_TEST_WITH_DYNAMO, PYTORCH_TEST_WITH_INDUCTOR). I
don't want to figure out the inductor situation right now, so we're
going to add another denylist for inductor and work through it later.

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117553
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117409, #116667, #117591, #117500, #116910
2024-01-17 19:12:41 +00:00
PyTorch MergeBot
e877c2e6ff Revert "Add inductor-specific testing strict mode denylist (#117553)"
This reverts commit ab6207a342.

Reverted https://github.com/pytorch/pytorch/pull/117553 on behalf of https://github.com/PaliC due to breaking internal discussed with author offline ([comment](https://github.com/pytorch/pytorch/pull/117500#issuecomment-1896426304))
2024-01-17 18:42:39 +00:00
rzou
fb06ed36d1 Change dynamo_test_failures.py to silently run skipped tests (#117401)
- We silently run skipped tests and then raise a skip message with the
  error message (if any)
- Instead of raising expectedFailure, we raise a skip message with the
  error message (if any)

We log the skip messages in CI, so this will let us read the logs and do
some basic triaging of the failure messages.
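
A hedged sketch of the behavior described above (decorator name and wording are illustrative):

```
import functools
import unittest

def run_then_skip(test_fn):
    @functools.wraps(test_fn)
    def wrapper(self, *args, **kwargs):
        # Run the "expected failure" test anyway, then surface the outcome as
        # a skip message that can be triaged from the CI logs.
        try:
            test_fn(self, *args, **kwargs)
        except Exception as e:
            raise unittest.SkipTest(f"failed under dynamo: {e}")
        raise unittest.SkipTest("passed under dynamo; remove from the list")
    return wrapper
```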

Test Plan:
- existing tests. I hope that there are no tests that cause each other
  to fail.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117401
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117391, #117400
2024-01-17 02:48:19 +00:00
rzou
ab6207a342 Add inductor-specific testing strict mode denylist (#117553)
We have one for Dynamo that currently applies to all "compile"
configurations (PYTORCH_TEST_WITH_DYNAMO, PYTORCH_TEST_WITH_INDUCTOR). I
don't want to figure out the inductor situation right now, so we're
going to add another denylist for inductor and work through it later.

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117553
Approved by: https://github.com/voznesenskym
2024-01-16 23:04:31 +00:00
Edward Z. Yang
5b24877663 Improve uint{16,32,64} dlpack/numpy compatibility (#116808)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116808
Approved by: https://github.com/malfet, https://github.com/albanD
2024-01-11 17:01:54 +00:00
rzou
7e6a04e542 Allow unMarkDynamoStrictTest to work on tests (instead of just classes) (#117128)
Tested locally.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117128
Approved by: https://github.com/voznesenskym
ghstack dependencies: #117114, #117127
2024-01-10 22:25:40 +00:00
rzou
79e6d2ae9d Remove incorrect usages of skipIfTorchDynamo (#117114)
Using `@skipIfTorchDynamo` without calling it is wrong; the correct usage is
`@skipIfTorchDynamo()` or `@skipIfTorchDynamo("msg")`. The incorrect form causes
tests to stop existing.
Added an assertion for this and fixed the incorrect call sites.
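
A usage sketch of the correct and incorrect forms (test names are illustrative):

```
from torch.testing._internal.common_utils import TestCase, skipIfTorchDynamo

class Example(TestCase):
    # Correct: the factory is called and returns the real decorator.
    @skipIfTorchDynamo("not supported under dynamo")
    def test_good(self):
        pass

    # Incorrect (the bug fixed above): without parentheses the test function
    # is passed as the message argument and the test silently stops existing.
    # @skipIfTorchDynamo
    # def test_bad(self):
    #     pass
```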
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117114
Approved by: https://github.com/voznesenskym
2024-01-10 22:25:31 +00:00
Edward Z. Yang
2e983fcfd3 Support unsigned int for randint, item, equality, fill, iinfo, tensor (#116805)
These are some basic utilities that are often used for testing.
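
A small usage sketch of the unsigned-integer support described above:

```
import torch

x = torch.tensor([1, 2, 3], dtype=torch.uint16)
x.fill_(7)
print(x[0].item() == 7)               # item() and equality
print(torch.iinfo(torch.uint16).max)  # 65535
print(torch.randint(0, 10, (4,), dtype=torch.uint16))
```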

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116805
Approved by: https://github.com/albanD
2024-01-10 02:17:23 +00:00
Jason Ansel
94363cee41 [inductor] Indexing refactors (#116078)
Perf differences seem to be noise:
![image](https://github.com/pytorch/pytorch/assets/533820/d7a36574-0388-46e4-bd4d-b274d37cab2b)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116078
Approved by: https://github.com/aakhundov
2024-01-09 19:06:51 +00:00
Aaron Gokaslan
3fe437b24b [BE]: Update flake8 to v6.1.0 and fix lints (#116591)
Updates flake8 to v6.1.0 and fixes a few lints using sed and some ruff tooling.
- Replace `assert(0)` with `raise AssertionError()`
- Remove extraneous parentheses, e.g.
  - `assert(a == b)` -> `assert a == b`
  - `if(x > y or y < z):`->`if x > y or y < z:`
  - And `return('...')` -> `return '...'`

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116591
Approved by: https://github.com/albanD, https://github.com/malfet
2024-01-03 06:04:44 +00:00
Aaron Gokaslan
bd10fea79a [BE]: Enable F821 and fix bugs (#116579)
Fixes #112371

I tried to fix as many of the bugs as I could; for a few I could not figure out the proper fix, so I left them with noqas.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116579
Approved by: https://github.com/ezyang
2024-01-01 08:40:46 +00:00
rzou
0fae3dfef7 Add convenient things for Dynamo testing (#116173)
- added a way to easily add a skip
- added a way to easily turn markDynamoStrictTest on by default for a
  particular test file
- added an envvar to turn markDynamoStrictTest on by default
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116173
Approved by: https://github.com/voznesenskym
2023-12-20 22:49:26 +00:00
rzou
4ccd8eb613 Add Dynamo test expected failure mechanism (#115845)
Tests that are added to a list in dynamo_test_failures.py will
automatically be marked as expectedFailure when run with
PYTORCH_TEST_WITH_DYNAMO=1. I'm splitting this PR off on its own so that
I can test various things on top of it.

Also added an unMarkDynamoStrictTest that is not useful until we turn
on strict mode by default.
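
A hedged sketch of the mechanism (list entry and helper name are illustrative):

```
import os
import unittest

dynamo_expected_failures = {"TestFoo.test_bar"}

def maybe_mark_expected_failure(cls_name, fn_name, test_fn):
    # Under PYTORCH_TEST_WITH_DYNAMO=1, tests listed as known failures get
    # wrapped with expectedFailure.
    if os.environ.get("PYTORCH_TEST_WITH_DYNAMO") == "1":
        if f"{cls_name}.{fn_name}" in dynamo_expected_failures:
            return unittest.expectedFailure(test_fn)
    return test_fn
```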

Test Plan:
- code reading
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115845
Approved by: https://github.com/voznesenskym
2023-12-15 01:22:17 +00:00
David Berard
89ee3af076 [Reland][Dynamo] Don't log compilation metrics for PyTorch unit tests (#115571)
Reland #115452, which was reverted to simplify a merge conflict with #115386

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115571
Approved by: https://github.com/yanboliang
2023-12-12 01:15:54 +00:00
Catherine Lee
b5578cb08b [ez] Remove unittest retries (#115460)
Pytest is used in CI now for reruns, and I doubt people are using the env vars when running locally. IMO removing this code makes the run function easier to read.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115460
Approved by: https://github.com/malfet, https://github.com/huydhn
2023-12-11 19:46:09 +00:00
David Berard
5c0976fa04 Revert "[dynamo] guarded config (#111299)" (#115386)
This reverts commit 5927e9cbf2.

Differential Revision: [D51959266](https://our.internmc.facebook.com/intern/diff/D51959266)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115386
Approved by: https://github.com/yanboliang, https://github.com/malfet
ghstack dependencies: #115384, #115401, #115385
2023-12-11 19:35:42 +00:00
PyTorch MergeBot
f06f51b152 Revert "[Dynamo] Don't log compilation metrics for PyTorch unit tests (#115452)"
This reverts commit cd444aa075.

Reverted https://github.com/pytorch/pytorch/pull/115452 on behalf of https://github.com/davidberard98 due to Merge conflict with #115385, which already landed in fbcode ([comment](https://github.com/pytorch/pytorch/pull/115452#issuecomment-1850729965))
2023-12-11 19:21:40 +00:00
Wang, Xiao
d7705f325d Patch --save-xml when TEST_IN_SUBPROCESS (#115463)
Patch `--save-xml` when `TEST_IN_SUBPROCESS`

When `--save-xml` is given as a unit test argument and the test is handled by a `TEST_IN_SUBPROCESS` handler (e.g., `run_test_with_subprocess` for `distributed/test_c10d_nccl`), the `--save-xml` args were first "consumed" by the argparser in `common_utils.py`. When a subsequent subprocess in this `if TEST_IN_SUBPROCESS:` section starts, there are no `--save-xml` args, leaving `args.save_xml` as `None`.

Since the argparser for the `--save-xml` option defaults to `_get_test_report_path()` when the arg is `None`, this is not a problem for GitHub CI runs. It could be an issue when people run those tests without `CI=1`: test reports won't be saved in that case even if they passed `--save-xml=xxx`.
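
A hedged sketch of the fix (helper name and call shape are illustrative, not the actual run_test.py code):

```
import subprocess
import sys

def relaunch_in_subprocess(argv, save_xml):
    # Re-append the --save-xml value that the parent's argparse already
    # consumed, so the child process still writes its test report.
    cmd = [sys.executable] + list(argv)
    if save_xml is not None and not any(a.startswith("--save-xml") for a in argv):
        cmd.append(f"--save-xml={save_xml}")
    return subprocess.call(cmd)
```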

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115463
Approved by: https://github.com/clee2000
2023-12-09 02:38:31 +00:00
Yanbo Liang
cd444aa075 [Dynamo] Don't log compilation metrics for PyTorch unit tests (#115452)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115452
Approved by: https://github.com/zou3519
2023-12-09 01:39:36 +00:00
rzou
a1bfaf75dc markDynamoStrictTest: add nopython flag, set default to False (#115276)
Default should be False because in general, we're interested
in reliability and composability: we want to check that
running PyTorch with and without Dynamo has the same semantics (with
graph breaks allowed).
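
A usage sketch of the flag described above (class names are illustrative):

```
from torch.testing._internal.common_utils import TestCase, markDynamoStrictTest

# The default (nopython=False) checks eager vs. dynamo semantics while still
# allowing graph breaks; nopython=True requires full-graph capture.
@markDynamoStrictTest(nopython=False)
class TestAllowsGraphBreaks(TestCase):
    pass

@markDynamoStrictTest(nopython=True)
class TestRequiresFullGraph(TestCase):
    pass
```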

Test Plan:
Existing tests?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115276
Approved by: https://github.com/voznesenskym
ghstack dependencies: #115267
2023-12-07 18:42:21 +00:00
voznesenskym
044cd56dcc [Easy] make @markDynamoStrictTest set nopython=True (#114308)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114308
Approved by: https://github.com/zou3519, https://github.com/oulgen
2023-11-22 01:36:29 +00:00