Commit Graph

20 Commits

Anthony Barbier
bf7e290854 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common `raise_on_run_directly` method for when a user directly runs a test file that should not be run this way; it prints the file the user should have run instead.
- Raise a `RuntimeError` for tests that have been disabled (not run).
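The guard described above can be sketched as follows. This is a hypothetical illustration; the helper name matches the PR description, but the exact signature and message are assumptions, not the code from the PR itself.

```python
# Hypothetical sketch of the __main__ guard described above; the message
# wording is an assumption, not the actual PR implementation.
def raise_on_run_directly(correct_file: str) -> None:
    # Fail loudly and tell the user which file they should have run.
    raise RuntimeError(
        f"This test file should not be run directly; run {correct_file} instead."
    )

# A jit test module would then end with:
# if __name__ == "__main__":
#     raise_on_run_directly("test/test_jit.py")
```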

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/clee2000
2025-06-16 10:28:45 +00:00
PyTorch MergeBot
20912673a6 Revert "Add __main__ guards to jit tests (#154725)"
This reverts commit 1a55fb0ee8.

Reverted https://github.com/pytorch/pytorch/pull/154725 on behalf of https://github.com/malfet due to This added 2nd copy of raise_on_run to common_utils.py which caused lint failures, see https://github.com/pytorch/pytorch/actions/runs/15445374980/job/43473457466 ([comment](https://github.com/pytorch/pytorch/pull/154725#issuecomment-2940503905))
2025-06-04 15:42:52 +00:00
Anthony Barbier
1a55fb0ee8 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common `raise_on_run_directly` method for when a user directly runs a test file that should not be run this way; it prints the file the user should have run instead.
- Raise a `RuntimeError` for tests that have been disabled (not run).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/Skylion007
2025-06-04 14:44:08 +00:00
Xuehai Pan
6ff1e43a41 [BE][Easy][13/19] enforce style for empty lines in import segments in test/j*/ (#129764)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129764
Approved by: https://github.com/ezyang
2024-08-01 12:13:42 +00:00
Yuanhao Ji
604c9c5601 Enable UFMT on all of test/jit (#123623)
Partially addresses #123062

Ran lintrunner on:

- `test/jit`

with command:

```bash
lintrunner -a --take UFMT --all-files
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123623
Approved by: https://github.com/ezyang
2024-04-11 23:45:05 +00:00
Tarun Karuturi
6d2da6106d Raise AttributeError in _OpsNamespace if __self__ attribute is requested (#104096)
Summary:
Trying to get the `__self__` attribute on any `_OpNamespace` object should be an invalid operation. The `__self__` attribute only exists on instance method objects, not on class objects.

In [dynamo](a152b3e3b8/torch/_dynamo/variables/torch.py (L164)) there is code that tries to access the `__self__` attribute on `TorchVariable`; this currently results in an expensive call to `torch._C._jit_get_operation` [here](a152b3e3b8/torch/_ops.py (L740)), which ultimately fails and throws an exception. For cases where it fails, the operation turns out to be quite expensive, on the order of ~0.03s.

For edge use cases, when exporting large models with quantized ops this exception is thrown hundreds of times, wasting a lot of time. By preventing the call to `torch._C._jit_get_operation` we can return from this function quickly and significantly reduce export times. On a large ASR model, for example, export currently takes **~405s**. With this change we can reduce it to **~340s**.

Overall this should also be a harmless change, as virtually no one should ever try to access the `__self__` attribute on an `_OpNamespace` object.

Test Plan: Added test case.

Differential Revision: D46959879

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104096
Approved by: https://github.com/larryliu0820, https://github.com/ezyang, https://github.com/zou3519
2023-06-27 01:42:06 +00:00
Edward Z. Yang
421f04dd02 Only allow numbers as tensors if operator was explicitly allowlisted so (#80587)
Fixes https://github.com/pytorch/pytorch/issues/80508

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80587
Approved by: https://github.com/ngimel
2022-06-30 18:59:38 +00:00
anjali411
50770b9e19 Add torch.ops per overload API (#72206)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72206

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33955635

Pulled By: anjali411

fbshipit-source-id: 4fbd0c0c4d032bcbd9d9f362b4b6f84eec9ad047
(cherry picked from commit 85076245c9)
2022-02-11 17:19:06 +00:00
Yan Li
6964aa2ced backout D33469839 (#71443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71443

cogwheel test inline_cvr_infer_canary_pyper_model_publish is timing out.

The convert_fx call takes >20 mins for the local and local_ro submodules, which used to take ~2 mins.

Test Plan:
Fblearn flow run
* the following cmd took 1113 seconds before the diff and 5002 seconds after.
    flow-cli clone-locally 320014219  --run-as-secure-group pytorch_at_scale  --operators pyper_model_publish_workflow.pyper_model_publish_workflow.process_torch_package_model_files.process_non_sparse_parameters[0]

Cogwheel test
* Cogwheel test with packages in B3588 (the last good run) took 4694.48s
* Cogwheel test with packages in B3590 (the first timeout) took 13975.83s
* Cogwheel test with the following packages took 4535.04s
  * all packages in B3588 except the model publish
  * the model publish built with D33469839 (043e84b3d2) reversed (created D33633570)

Reviewed By: albanD, jerryzh168

Differential Revision: D33633570

fbshipit-source-id: dc5e777c48a90c551641a3f79126461f6a60449e
(cherry picked from commit 03ab65023a)
2022-01-18 23:51:51 +00:00
anjali411
043e84b3d2 Per-overload torch.ops API (#67254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67254

Fixes https://github.com/pytorch/pytorch/issues/65997

BC breaking:
`output = torch.ops._test.leaky_relu(self=torch.tensor(-1.0))` now fails with the error `TypeError: __call__() got multiple values for argument 'self'` since we call into `OpOverloadBundle`'s `__call__` method that has `self` bound to it as its first argument.
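The failure mode described above is plain Python behavior and can be demonstrated without torch; the class below is a stand-in for illustration, not PyTorch's actual `OpOverloadBundle`.

```python
# Minimal demonstration (plain Python, no torch) of why the keyword call
# now fails: ``__call__``'s first parameter is named ``self``, and it is
# already bound positionally to the bundle object, so passing ``self=``
# as a keyword collides with it.
class OpOverloadBundle:
    def __call__(self, *args, **kwargs):
        return args, kwargs

bundle = OpOverloadBundle()
try:
    bundle(self=-1.0)  # mirrors torch.ops._test.leaky_relu(self=...)
except TypeError as exc:
    print(exc)  # ... got multiple values for argument 'self'
```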

Follow up work:
1. disallow `default` as an overload name for aten operators.
2. Add a method to obtain a list of all overloads (exclude the ones registered by JIT)
3. Add methods/properties to `OpOverload` to access more schema information (types of input and output args etc)

cc ezyang gchanan

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D33469839

Pulled By: anjali411

fbshipit-source-id: c3fc43460f1c7c9651c64b4d46337be21c400621
2022-01-10 17:29:06 -08:00
Michael Suo
402f2934bf Revert D33262228: Per-overload torch.ops API
Test Plan: revert-hammer

Differential Revision:
D33262228 (8e6d1738a4)

Original commit changeset: 600dbf511514

Original Phabricator Diff: D33262228 (8e6d1738a4)

fbshipit-source-id: 238fa88ea9c4f26c7511334765c07452fbca9655
2022-01-05 22:10:11 -08:00
anjali411
8e6d1738a4 Per-overload torch.ops API (#67254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67254

Fixes https://github.com/pytorch/pytorch/issues/65997

TODO: disallow `default` as an overload name for aten operators.

BC breaking:
`output = torch.ops._test.leaky_relu(self=torch.tensor(-1.0))` now fails with the error `TypeError: __call__() got multiple values for argument 'self'` since we call into `OpOverloadBundle`'s `__call__` method that has `self` bound to it as its first argument.

cc ezyang gchanan

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33262228

Pulled By: anjali411

fbshipit-source-id: 600dbf511514ea9b41aea3e6b1bc1102dab08909
2022-01-05 15:17:41 -08:00
Jane Xu
09c7771e9c Set test owners for jit tests (#66808)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66808

Reviewed By: mrshenli

Differential Revision: D31761414

Pulled By: janeyx99

fbshipit-source-id: baf8c49ff9c4bcda7b0ea0f6aafd26380586e72d
2021-10-25 07:51:10 -07:00
Jane Xu
b665298dc8 Use assertRaisesRegexWithHighlight test_custom_operators.py (#55519)
Summary:
Step to resolving https://github.com/pytorch/pytorch/issues/55072

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55519

Reviewed By: gmagogsfm

Differential Revision: D27627487

Pulled By: janeyx99

fbshipit-source-id: 6bd54433617180c56153785b69c2e49faf19ef34
2021-04-07 14:39:52 -07:00
Sebastian Messmer
e311e53abe Revert D18672405: Revert D18672405: Use codegen'ed unboxing wrappers (#36010)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36010

-
ghstack-source-id: 101645932

Test Plan: CI

Differential Revision: D20577433

fbshipit-source-id: f80fb62de68d1d11ea05dd1f694b36356a06189b
2020-04-09 10:45:28 -07:00
Jiakai Liu
7f7ea685c0 Revert D18672405: Use codegen'ed unboxing wrappers
Test Plan: revert-hammer

Differential Revision:
D18672405

Original commit changeset: bf2a7056082d

fbshipit-source-id: b7ef1529fc266b4856e49e4dbd1fe8c7ba3d455d
2020-02-29 15:27:54 -08:00
Sebastian Messmer
3c5677a676 Use codegen'ed unboxing wrappers (#32521)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32521

Not all ops support the templated unboxing wrappers yet. For the ones that don't,
let's use the codegen'ed unboxing wrappers from register_aten_ops.cpp, but register
them with c10 directly instead of JIT.

The `use_c10_dispatcher` setting in `native_functions.yaml` now has a new option, 'with_codegenerated_unboxing_wrapper', which means we take the codegened unboxing wrapper from register_aten_ops.cpp and stuff it into c10. This new option is the default; 'unboxed_only' is no longer the default. For the (very few) ops that don't support boxed dispatch yet (i.e. ops taking TensorOptions arguments), we set them to 'unboxed_only' and they follow the old behavior of having register_aten_ops.cpp register the jit op.

Next steps here are (1) to make TensorOptions work with boxed dispatch and remove the `unboxed_only` option from `use_c10_dispatcher`, so that all ops go through the new path and (2) make the new path template-only and remove codegen from it (see https://github.com/pytorch/pytorch/issues/32366).

First experiments show that
- For a small JITted model that calls add (i.e. an op with just two arguments, both tensors) on two tensors in a loop, we see a 2-4% performance improvement (~35-50ns) compared to the old path. This is a simple op with two tensor arguments and no non-tensor arguments, so iterating over it in boxed dispatch is cheap.
- For a small JITted model that calls avgpool1d (i.e. an op that has one tensor arg and 5 non-tensor args) on a tensor in a loop, we see a 3-4% performance regression (~60ns) compared to the old path. Unboxed dispatch doesn't have to look at the non-tensor arguments, but boxed dispatch still needs to iterate over them.

This performance difference is likely due to boxed dispatch iterating over all arguments in a loop and unboxed dispatch not having to look at non-tensor arguments.
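The cost model described above can be illustrated with a toy dispatcher. This is not PyTorch internals, just a sketch of the structural difference between the two paths.

```python
# Toy illustration (not PyTorch internals) of the cost model described
# above: boxed dispatch pushes every argument, tensor or not, onto a
# stack and walks it, while unboxed dispatch passes arguments natively.
def boxed_call(op, stack):
    # The boxed path must iterate over *all* boxed arguments, which is
    # why ops with many non-tensor args (like avgpool1d) pay more here.
    args = list(stack)
    return op(*args)

def unboxed_call(op, *args):
    # The unboxed path hands arguments straight through, so non-tensor
    # arguments cost nothing extra.
    return op(*args)

def add(a, b):
    return a + b

assert boxed_call(add, [1, 2]) == unboxed_call(add, 1, 2) == 3
```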

ghstack-source-id: 99161484

Test Plan: unit tests that call existing ops through JIT

Differential Revision: D18672405

fbshipit-source-id: bf2a7056082dfad61e7e83e9eeff337060eb6944
2020-02-28 14:48:25 -08:00
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
Michael Suo
8ed84a9123 skip broken custom op test (#29334)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29334

As title

Test Plan: Imported from OSS

Differential Revision: D18358592

Pulled By: suo

fbshipit-source-id: d7afbce52ddd008ae9c42aeda6be24e35086ef01
2019-11-06 14:33:01 -08:00
Michael Suo
cc457ca30f split remaining "easy" tests (#29249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29249

This splits out all the tests that are "easy", leaving `TestJit`,
`TestScript`, the autogenerated tests, and a small docs test.

Splitting those into reasonable chunks takes more effort and is less mechanical.

Differential Revision: D18339007

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 69164b9f9a2c379fe8923a846c98dd3c37ccb70e
2019-11-06 13:23:01 -08:00