Commit Graph

24 Commits

cyy
df458be4e5 [4/N] Apply py39 ruff and pyupgrade fixes (#143257)
`torch/fx/passes/annotate_getitem_nodes.py` was changed to support the new type hinting annotations.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143257
Approved by: https://github.com/justinchuby, https://github.com/albanD
2025-01-04 10:47:51 +00:00
Xuehai Pan
4226ed1585 [BE] Format uncategorized Python files with ruff format (#132576)
Remove patterns `**`, `test/**`, and `torch/**` in `tools/linter/adapters/pyfmt_linter.py` and run `lintrunner`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132576
Approved by: https://github.com/ezyang, https://github.com/Skylion007
ghstack dependencies: #132574
2024-08-04 17:13:31 +00:00
Aaron Orenstein
dcfa7702c3 Flip default value for mypy disallow_untyped_defs [1/11] (#127838)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127838
Approved by: https://github.com/oulgen
2024-06-08 18:16:33 +00:00
Yuanhao Ji
b3504af56e Enable UFMT on test/scripts and some files (#124137)
Part of: #123062

Ran lintrunner on:

- `test/scripts`
- `test/simulate_nccl_errors.py`
- `test/test_ao_sparsity.py`
- `test/test_autocast.py`
- `test/test_binary_ufuncs.py`
- `test/test_bundled_images.py`
- `test/test_bundled_inputs.py`
- `test/test_comparison_utils.py`
- `test/test_compile_benchmark_util.py`
- `test/test_complex.py`
- `test/test_cpp_api_parity.py`
- `test/test_cpp_extensions_aot.py`
- `test/test_cpp_extensions_jit.py`
- `test/test_cpp_extensions_open_device_registration.py`

Detail:

```bash
$ lintrunner -a --take UFMT --all-files
ok No lint issues.
Successfully applied all patches.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124137
Approved by: https://github.com/soulitzer
2024-04-19 22:01:27 +00:00
Aaron Gokaslan
1d6c5972c1 [BE]: Optimize min/max/sum comprehensions C419 (#123960)
Automated fixes that replace certain list comprehensions with generator expressions where appropriate, so that they are consumed immediately. This is preview functionality in ruff for rule C419, and it was applied automatically.
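As an illustration (not taken from the actual diff), a C419-style rewrite looks like this:

```python
values = [3, 1, 4, 1, 5]

# Before: the list comprehension builds an intermediate list just so
# sum() can consume it.
total_before = sum([v * v for v in values])

# After the C419 fix: a generator expression is consumed directly by
# sum(), skipping the intermediate allocation.
total_after = sum(v * v for v in values)

assert total_before == total_after == 52
```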

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123960
Approved by: https://github.com/malfet
2024-04-12 23:54:15 +00:00
Aaron Gokaslan
67d9790985 [BE] Apply almost all remaining flake8-comprehension checks (#94676)
Applies the remaining flake8-comprehensions fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, more performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into just the set call.
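For example, a hypothetical instance of the useless-generator pattern mentioned above and its fix:

```python
b = ["x", "y", "x"]

# Before: a pointless inner generator is wrapped in set().
redundant = set(a for a in b)

# After the flake8-comprehensions fix: the generator is resolved into
# just the set call (a set comprehension would work equally well).
direct = set(b)

assert redundant == direct == {"x", "y"}
```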

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
2023-02-12 01:01:25 +00:00
Jane Xu
88d86de7d8 Add lint to ensure all test files have headers with ownership info (#66826)
Summary:
UPDATE: CI should be green now with the added files.

This should fail for now, but will pass when all action for https://github.com/pytorch/pytorch/issues/66232 is done.

Example failure run: https://github.com/pytorch/pytorch/runs/4052881947?check_suite_focus=true

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66826

Reviewed By: seemethere

Differential Revision: D32087209

Pulled By: janeyx99

fbshipit-source-id: ad4b51e46de54f23aebacd592ee67577869f8bb6
2021-11-03 18:21:49 -07:00
Pavithran Ramachandran
5f997a7d2f [PyTorch][Edge] Improve InflatableArgs for Bundled Inputs (#62368)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62368

# Context
Bundled inputs accept an expression, in the form of the string `InflatableArg.fmt`, that is applied to the stored inputs to inflate them. `InflatableArg.fmt` provides the flexibility to apply a custom inflation transformation. However, when an input argument to a function is not of Tensor type, TorchScript casts it from type T to Optional[T] and expects the function to handle the nullable (None) clause as well. This becomes tricky to handle in one-line code or lambda functions.

We propose an alternative that allows an InflatableArg to include the text of a TorchScript function, which is defined on the module as a helper and then used in the inflation expression. This is provided via `InflatableArg.fmt_fn`. Please refer to pytorch/test/test_bundled_inputs.py for an example of how to use it.

Also see JacobSzwejbka's comment on this [here](https://github.com/pytorch/pytorch/pull/62368#issuecomment-892012812)

# Mitigation
Allow InflatableArg to include the text of a TorchScript function that is defined on the module as a helper and then used in its inflation expression.
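A minimal Python sketch of the underlying problem (hypothetical helper, no TorchScript involved): once a non-Tensor argument is treated as Optional[T], the inflation logic needs an explicit None branch, which reads naturally as a named helper function but not as a one-line format expression or lambda.

```python
from typing import List, Optional

# Hypothetical inflation helper: handles the Optional/None clause that
# the Optional[T] cast forces on non-Tensor arguments.
def inflate_int_list(arg: Optional[List[int]]) -> List[int]:
    if arg is None:  # the None clause that is awkward in a one-liner
        return []
    return [x * 2 for x in arg]  # the actual inflation transform

assert inflate_int_list(None) == []
assert inflate_int_list([1, 2, 3]) == [2, 4, 6]
```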
ghstack-source-id: 135158680

Test Plan:
To run `test_dict_args`

```
(base) [pavithran@devvm1803.vll0 /data/users/pavithran/fbsource/fbcode] buck test //caffe2/test:test_bundled_inputs -- test_dict_args
Action graph will be rebuilt because files have been added or removed.
Building: finished in 5.4 sec (100%) 12180/12180 jobs, 0/12180 updated
  Total time: 5.8 sec
More details at https://www.internalfb.com/intern/buck/build/fafcf277-1095-4cba-978d-6022f0d391ad
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: 5ef9de71-c1b1-406b-a6c0-3321c2368b8d
Trace available for this run at /tmp/tpx-20210727-163946.454212/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/7036874465805934
    ✓ ListingSuccess: caffe2/test:test_bundled_inputs - main (11.365)
    ✓ Pass: caffe2/test:test_bundled_inputs - test_dict_args (test_bundled_inputs.TestBundledInputs) (12.307)
Summary
  Pass: 1
  ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/7036874465805934
```

To check the py code of TS module:
P433043973

Reviewed By: dreiss

Differential Revision: D29950421

fbshipit-source-id: c819ec5c94429b7fbf6c4beb0259457f169b08ec
2021-08-20 09:36:08 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Jacob Szwejbka
1891e4bf1e [Pytorch] Remove run_on_bundled_input (#58344)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58344

Remove a helper function that's more trouble than it's worth.

ghstack-source-id: 129131889

Test Plan: ci and {P414950111}

Reviewed By: dhruvbird

Differential Revision: D28460607

fbshipit-source-id: 31bd6c1cc169785bb360e3113d258b612cad47fc
2021-05-17 12:44:00 -07:00
Jacob Szwejbka
a3b33139da [Pytorch] Add non mutator bundled inputs method (#58408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58408

It'd be nice to have a version of bundled inputs that didn't mutate the original class/object. So now there is!
ghstack-source-id: 129127316

Test Plan: The new unittests

Reviewed By: dhruvbird

Differential Revision: D28460231

fbshipit-source-id: f6f7a19e264bddfaa177304cbde40336060a237a
2021-05-17 11:36:49 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
Jacob Szwejbka
ea4af1511c [Pytorch] Better error message for bundling inputs a second time (#56086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56086

ghstack-source-id: 126671245

Test Plan: unittest

Reviewed By: dhruvbird

Differential Revision: D27778582

fbshipit-source-id: 6b59aa7ddb25c1b3162bbffdf0dd212a96f22bd3
2021-04-20 12:28:27 -07:00
David Reiss
0e7af36acd Make bundled inputs work with quantized zero inputs (#47407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47407

Previously, the code for bundling contiguous single-valued tensors (like
torch.zeros) wasn't working for quantized tensors because it was calling
the `torch.tensor` constructor without passing in the quantizer.
Instead, skip the constructor entirely, which makes this use case work
and also simplifies the code.  (Originally, I forgot that
`arg.flatten()[0]` would return a tensor, not a scalar.)

Test Plan: Bundled a quantized zero input and saw it run properly.

Reviewed By: dhruvbird

Differential Revision: D24752890

Pulled By: dreiss

fbshipit-source-id: 26bc4873a71dd44660cc0fcb74c227b754e31663
2021-04-06 13:47:35 -07:00
Jacob Szwejbka
3cf08eaf15 [Pytorch Mobile] Improve Bundled Inputs Error Checking (#52386)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52386

Remove the stale aliasing-inputs warning; error-check that inputs is not null and has at least one entry; and error-check that the list of inputs is a list of tuples. The last point can cause subtle bugs: if the user passes in a list of tensors (the most common mistake), the first dimension of each tensor is dropped. This can go unnoticed because it's often the batch dimension, which pytorch occasionally silently re-adds if it's missing.
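A rough sketch (not the actual PyTorch implementation) of this kind of validation:

```python
# Sketch of the error checking described above: bundled inputs must be a
# non-empty list of tuples, where each tuple is the argument list for one
# call. A bare list of tensors would otherwise have each tensor's first
# dimension silently treated as a separate positional argument.
def check_inputs(inputs):
    if inputs is None or len(inputs) == 0:
        raise ValueError("inputs must be a non-empty list")
    for i, entry in enumerate(inputs):
        if not isinstance(entry, tuple):
            raise TypeError(
                f"input {i} must be a tuple of arguments, "
                f"got {type(entry).__name__}"
            )

check_inputs([(1, 2), (3,)])  # ok: a list of argument tuples
```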
ghstack-source-id: 122363487

Test Plan:
Bundle something with an input, bundle something with {} for inputs

For typing check below paste

{P199554712}

Reviewed By: dhruvbird

Differential Revision: D26374867

fbshipit-source-id: cd176f34bad7a4da850b165827f8b2448cd9200d
2021-02-24 13:55:45 -08:00
Jacob Szwejbka
0118dec2e3 [Pytorch] Expanded Bundled Inputs To Any Public Function (#51153)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51153

Enabled bundled inputs for all public functions the user wants in a torchscript module. An important caveat: you can't add bundled inputs to functions that were on the nn.Module but weren't caught in the scripting/tracing process that brought the model to torchscript.

The old API is exactly the same: it still only works on forward, return types are the same, etc.

-----------New API-------------

Attachment of inputs:

***augment_model_with_bundled_inputs*** : works the same as before, but adds the option to specify an info dictionary.

***augment_many_model_functions_with_bundled_inputs*** : similar to the above, but allows the user to specify a Dict[Callable, List[<inputs>]] (mapping function references to the bundled inputs for that function) to attach bundled inputs to many functions at once.

Consumption of inputs:

***get_all_bundled_inputs_for_<function_name>()*** : works exactly like get_all_bundled_inputs does, but can be used for functions other than forward, if you know ahead of time what they are called and that they have bundled inputs.

***get_bundled_inputs_functions_and_info()*** : This is easily the hackiest function. Returns a dict mapping each function name to an info dict whose `'get_inputs_function_name'` entry holds the name of the corresponding `get_all_bundled_inputs_for_<function_name>` accessor. A user can then look up and call those accessors with something like

```
all_info = model.get_bundled_inputs_functions_and_info()
for func_name in all_info.keys():
    input_func_name = all_info[func_name]['get_inputs_function_name'][0]
    func_to_run = getattr(loaded, input_func_name)
```

The reason it's done this way is that torchscript doesn't support the 'Any' type yet, meaning the bundled inputs can't be returned directly because they could be a different type for each function. Torchscript also doesn't support callable, so a function reference can't be returned directly either.
ghstack-source-id: 120768561

Test Plan:
Got a model into torchscript using the available methods that I'm aware of (tracing, scripting, the old scripting method). Not really sure how tracing brings in functions that aren't in the forward call path, though. Attached bundled inputs and info to them successfully. Changes to TorchTest.py on all but the last version of this diff (where it is removed for land) illustrate what I did to test.

Created and ran unit test

Reviewed By: dreiss

Differential Revision: D25931961

fbshipit-source-id: 36e87c9a585554a83a932e4dcf07d1f91a32f046
2021-02-02 10:33:59 -08:00
Edward Yang
3ce539881a Back out "Revert D25757721: [pytorch][PR] Run mypy on more test files" (#50142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50142

Original commit changeset: 58437d719285

Test Plan: OSS CI

Reviewed By: walterddr, ngimel

Differential Revision: D25803866

fbshipit-source-id: d6b83a5211e430c0451994391876103f1ad96315
2021-01-06 11:27:36 -08:00
Mike Ruberry
9529ae3776 Revert D25757721: [pytorch][PR] Run mypy on more test files
Test Plan: revert-hammer

Differential Revision:
D25757721 (b7bfc723d3)

Original commit changeset: 44c396d8da9e

fbshipit-source-id: 58437d719285a4fecd8c05e487cc86fc2cebadff
2021-01-05 15:18:14 -08:00
Ralf Gommers
b7bfc723d3 Run mypy on more test files (#49658)
Summary:
Improves one annotation for `augment_model_with_bundled_inputs`

Also add a comment to not work on caffe2 type annotations, that's not worth the effort - those ignores can stay as they are.

xref gh-16574

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49658

Reviewed By: heitorschueroff

Differential Revision: D25757721

Pulled By: ezyang

fbshipit-source-id: 44c396d8da9ef3f41b97f9c46a528f0431c4b463
2021-01-05 09:28:38 -08:00
Mike Ruberry
b2b8af9645 Removes assertAlmostEqual (#41514)
Summary:
This test function is confusing since our `assertEqual` behavior already allows a tolerance to be specified, making this a redundant mechanism.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41514

Reviewed By: ngimel

Differential Revision: D22569348

Pulled By: mruberry

fbshipit-source-id: 2b2ff8aaa9625a51207941dfee8e07786181fe9f
2020-07-16 10:35:12 -07:00
David Reiss
375cd852fa Add a utility function for bundling large input tensors (#37055)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37055

Sometimes it's okay to bundle a large example input tensor with a model.
Add a utility function to make it easy for users to do that *on purpose*.

Test Plan: Unit test.

Differential Revision: D22264239

Pulled By: dreiss

fbshipit-source-id: 05c6422be1aa926cca850f994ff1ae83c0399119
2020-06-26 13:34:02 -07:00
David Reiss
41ea7f2d86 Add channels-last support to bundled_inputs (#36764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36764

This allows bundling inputs that are large uniform buffers in
channels-last memory format.

Test Plan: Unit test.

Differential Revision: D21142660

Pulled By: dreiss

fbshipit-source-id: 31bbea6586d07c1fd0bcad4cb36ed2b8bb88a7e4
2020-06-26 13:31:17 -07:00
David Reiss
fab06bfb75 Add utility for bundling sample inputs with models (#35631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35631

Bundling sample inputs with our models with a standardized interface
will make it possible to write benchmarking and code-coverage tools that
call all models in a uniform way.  The intent is to make this a standard
for mobile models within Facebook.  Putting it in torch/utils so tests
can run on GitHub and because it might be useful for others as well.

`augment_model_with_bundled_inputs` is the primary entry point.  See
its docstring for usage information and the test for some example uses.

One design question I had was how much power should be available for
automatic deflating and inflating of inputs.  The current scheme gives
some automatic handling and a reasonable escape hatch
("_bundled_input_inflate_format") for top-level tensor arguments, but no
automatic support for (e.g.) tensors in tuples or long strings.  For
more complex cases, we have the ultimate escape hatch of just defining
_generate_bundled_inputs in the model.

Another design question was whether to add the inputs to the model or
wrap the model in a wrapper module that had these methods and delegated
calls to `forward`.  Because models can have other exposed methods and
attributes, the wrapper seemed too onerous.

Test Plan: Unit test.

Differential Revision: D20925013

Pulled By: dreiss

fbshipit-source-id: 4dbbb4cce41e5752133b4ecdb05e1c92bac6b2d5
2020-04-08 13:10:36 -07:00