Commit Graph

13 Commits

Author SHA1 Message Date
Yuanyuan Chen
0d50e5d8d4 [3/N] Fix unused loop variables (#166509)
This PR removes unused loop variables in tests.
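
A minimal before/after sketch of the kind of cleanup this makes (illustrative, not the actual diff):

```python
results = []

# Before: the loop variable `i` is bound but never read in the body.
for i in range(3):
    results.append(len(results))

# After: renaming it to `_` signals the value is intentionally unused.
for _ in range(3):
    results.append(len(results))
```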

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166509
Approved by: https://github.com/Lucaskabela, https://github.com/Skylion007
2025-10-30 20:13:51 +00:00
Anthony Barbier
bf7e290854 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common raise_on_run_directly method for when a user directly runs a test file that should not be run that way; it prints the file the user should have run instead (a sketch of the guard follows this list).
- Raise a RuntimeError for tests which have been disabled (not run)
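
A minimal sketch of such a guard; only the name raise_on_run_directly comes from the commit, and the signature, message, and file name here are illustrative assumptions:

```python
# Hypothetical sketch of the guard described above; the real helper lives
# in PyTorch's test utilities and may differ in signature and wording.
def raise_on_run_directly(file_to_run: str) -> None:
    raise RuntimeError(
        "This test file is not meant to be run directly; "
        f"run {file_to_run} instead."
    )

if __name__ == "__main__":
    raise_on_run_directly("test/test_jit.py")
```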

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/clee2000
2025-06-16 10:28:45 +00:00
PyTorch MergeBot
20912673a6 Revert "Add __main__ guards to jit tests (#154725)"
This reverts commit 1a55fb0ee8.

Reverted https://github.com/pytorch/pytorch/pull/154725 on behalf of https://github.com/malfet because it added a second copy of raise_on_run to common_utils.py, which caused lint failures; see https://github.com/pytorch/pytorch/actions/runs/15445374980/job/43473457466 ([comment](https://github.com/pytorch/pytorch/pull/154725#issuecomment-2940503905))
2025-06-04 15:42:52 +00:00
Anthony Barbier
1a55fb0ee8 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common raise_on_run_directly method for when a user directly runs a test file that should not be run that way; it prints the file the user should have run instead.
- Raise a RuntimeError for tests which have been disabled (not run)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/Skylion007
2025-06-04 14:44:08 +00:00
Tom Ritchford
d8c8ba2440 Fix unused Python variables in test/[e-z]* (#136964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
Xuehai Pan
6ff1e43a41 [BE][Easy][13/19] enforce style for empty lines in import segments in test/j*/ (#129764)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129764
Approved by: https://github.com/ezyang
2024-08-01 12:13:42 +00:00
Yuanhao Ji
604c9c5601 Enable UFMT on all of test/jit (#123623)
Partially addresses #123062

Ran lintrunner on:

- `test/jit`

with command:

```bash
lintrunner -a --take UFMT --all-files
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123623
Approved by: https://github.com/ezyang
2024-04-11 23:45:05 +00:00
Jason Ansel
ae57bd6630 PT2/TorchScript interoperability fix (#94678)
Allows torch.compile() to inline into ScriptFunction
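
A small sketch of the interoperability this enables: a torch.compile'd function calling a TorchScript function, with the call inlined rather than falling back around it (the function bodies here are illustrative):

```python
import torch

@torch.jit.script
def scripted_mul_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x * y + y

@torch.compile
def fn(x, y):
    # With this fix, torch.compile() can inline into the ScriptFunction
    # instead of graph-breaking on the call.
    return scripted_mul_add(x, y) * 2

out = fn(torch.randn(4), torch.randn(4))
```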

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94678
Approved by: https://github.com/ezyang
2023-02-15 01:21:10 +00:00
David Berard
268ced5104 Retry - [JIT] Propagate profiled information to DifferentiableGraph outputs
Without profiled outputs, autodiff can't tell whether the outputs of a DifferentiableGraph should require grad. Autodiff would default to requires_grad=True if there was no profiled information, causing it to mark tensors as requires_grad when it shouldn't. This adds requires_grad info to the type of the output, if it can be found in later uses of the output.

Adds a test for correct autodiff requires_grad behavior and also a test to make sure the output type is correctly annotated in create_autodiff_subgraphs.
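
A hedged sketch of the behavior such a test exercises (not the actual test code): after warm-up runs give the profiling executor requires_grad information, outputs should only require grad when their inputs do:

```python
import torch

@torch.jit.script
def f(x, y):
    return x * y + x

x = torch.randn(4, requires_grad=False)
y = torch.randn(4, requires_grad=False)

# Warm up so the profiling executor can record profiled information.
for _ in range(3):
    out = f(x, y)

# With profiled requires_grad propagated, the output should not
# require grad when none of the inputs do.
assert not out.requires_grad
```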

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79498

Approved by: https://github.com/eellison
2022-06-21 22:57:17 +00:00
PyTorch MergeBot
5413580f9e Revert "[JIT] Propagate profiled information to DifferentiableGraph outputs"
This reverts commit 1d2a6c2e94.

Reverted https://github.com/pytorch/pytorch/pull/78875 on behalf of https://github.com/davidberard98 because internal failures were bisected to this change
2022-06-12 00:14:08 +00:00
David Berard
1d2a6c2e94 [JIT] Propagate profiled information to DifferentiableGraph outputs
Without profiled outputs, autodiff can't tell whether the outputs of a DifferentiableGraph should require grad. Autodiff would default to requires_grad=True if there was no profiled information, causing it to mark tensors as requires_grad when it shouldn't. This adds requires_grad info to the type of the output, if it can be found in later uses of the output.

Adds a test for correct autodiff requires_grad behavior and also a test to make sure the output type is correctly annotated in create_autodiff_subgraphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78875

Approved by: https://github.com/eellison
2022-06-10 00:54:11 +00:00
David Berard
c175fac2e7 [JIT] Autodiff - use more accurate requires_grad info
When autodiff is constructing the Gradient object, it looks at the
forward graph and records all the outputs that requires_grad into
df_input_vjps. Then at runtime, graph_executor.cpp will detach the
tensors before running the autodiff forward graph, and then add
requires_grad back onto the outputs if they need requires_grad.

Before, the requires_grad check was done by just checking
`output->requires_grad()`. But at the point when autodiff is called by
the profiling executor, the profiled information is still in the profile
nodes, not on the values. So requires_grad would not be set on the output
values, and requires_grad() would default to True on all tensors. As a
result, more output tensors than expected would require grad.
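
An illustrative Python rendering of the runtime pattern described above; the real logic lives in graph_executor.cpp in C++, so this is only a sketch of the idea:

```python
import torch

def rewrap_outputs(outputs, needs_grad_flags):
    # Detach each forward output, then re-mark requires_grad only on the
    # outputs recorded (df_input_vjps-style) as actually needing grad.
    rewrapped = []
    for out, needs_grad in zip(outputs, needs_grad_flags):
        out = out.detach()
        if needs_grad:
            out.requires_grad_(True)
        rewrapped.append(out)
    return rewrapped
```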

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78392

Approved by: https://github.com/eellison
2022-06-06 17:23:54 +00:00
David Berard
ad07b7c338 Fix to map an undefined tensor back to a tensor list
Taken from https://github.com/pytorch/pytorch/pull/60516

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75262

Approved by: https://github.com/Krovatkin
2022-04-07 20:07:27 +00:00