Commit Graph

35 Commits

Author SHA1 Message Date
Jane Xu
d47a9004c8 [skip ci] Set test owner for mobile tests (#66829)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66829

Reviewed By: albanD

Differential Revision: D31928812

Pulled By: janeyx99

fbshipit-source-id: 8116b7f3728df8632278b013007c06ecce583862
2021-10-26 10:20:01 -07:00
Vasiliy Kuznetsov
227e37dd39 pytorch quantization ao migration phase 2: caffe2/test (#65832)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65832

Renames `torch.quantization` to `torch.ao.quantization` in `caffe2/test`
folder.

```
find caffe2/test/ -type f -name "*.py" -print0 | xargs -0 sed -i "s/torch\.quantization/torch.ao.quantization/g"
HG: manually revert the files testing this migration
hg revert caffe2/test/quantization/ao_migration/common.py
hg revert caffe2/test/quantization/ao_migration/test_ao_migration.py
```
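
For reference, the rename only changes the import path, not behavior; a minimal before/after sketch (the imported names here are just common examples):

```
# Before the migration:
# from torch.quantization import get_default_qconfig, prepare, convert

# After the migration (new AO namespace):
from torch.ao.quantization import get_default_qconfig, prepare, convert

qconfig = get_default_qconfig("fbgemm")  # same API, new home
```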

Test Plan: CI

Reviewed By: z-a-f

Differential Revision: D31275754

fbshipit-source-id: 4ed54a74525634feb0f47a26d071102e19c30049
2021-10-01 06:26:30 -07:00
Philip Meier
99203580a9 Updates internal assert_allclose callsites in favor of assert_close (#61841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61841

Redo of #60863.
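
For context, the migration at a typical call site looks roughly like this (tolerance handling differs slightly between the two functions, so this is a sketch rather than a drop-in rule):

```
import torch

actual = torch.tensor([1.0, 2.0])
expected = torch.tensor([1.0, 2.0])

# Old, deprecated helper:
# torch.testing.assert_allclose(actual, expected)

# New helper this PR migrates to:
torch.testing.assert_close(actual, expected)
```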

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D30408145

Pulled By: mruberry

fbshipit-source-id: 0b34ebc7f23ba38ecd89640b61d8aca59b7eab58
2021-08-19 12:50:41 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Horace He
79a258f448 s/foward/forward/g (#58497)
Summary:
Annoying typo.

Prompted by these profiling results: https://github.com/pytorch/pytorch/issues/56419#issuecomment-825787828

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58497

Reviewed By: malfet

Differential Revision: D28521081

Pulled By: Chillee

fbshipit-source-id: ab91a2e167dd7d3387fd56106a6cff81f7a32f10
2021-05-19 11:42:42 -07:00
Jacob Szwejbka
1891e4bf1e [Pytorch] Remove run_on_bundled_input (#58344)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58344

Remove a helper function that's more trouble than it's worth.
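
A minimal sketch of what call sites look like without the helper, assuming a scripted module that already has bundled inputs attached:

```
import torch
import torch.utils.bundled_inputs as bundled_inputs

model = torch.jit.script(torch.nn.Linear(2, 2))
bundled_inputs.augment_model_with_bundled_inputs(model, [(torch.zeros(1, 2),)])

# Instead of the removed helper:
#   out = model.run_on_bundled_input(0)
# index into the stored inputs and call the module directly:
args = model.get_all_bundled_inputs()[0]  # tuple of args for input 0
out = model(*args)
```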

ghstack-source-id: 129131889

Test Plan: ci and {P414950111}

Reviewed By: dhruvbird

Differential Revision: D28460607

fbshipit-source-id: 31bd6c1cc169785bb360e3113d258b612cad47fc
2021-05-17 12:44:00 -07:00
Dhruv Matani
38e606d056 [RFC] Add method torch.jit._clone_module_with_class (#56152)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56152

Currently, the Bundled Inputs API mutates the module in place. It adds class methods, not instance methods. This results in a small problem: one can't re-run an already executed cell in Bento if the class has already been augmented with bundled inputs.

In addition, there is no way to add bundled inputs to a module that already has them. This API solves that problem as well: by accepting `ignored_methods` in the call to `clone()`, the bundled-inputs implementation can pass in the methods it is about to add, so that when it does add those methods it succeeds.

We'll have to be careful when ignoring those methods during the call to `torch.jit._clone_module_with_class` since any bundled input that relies on a user-provided method will need to be preserved and not ignored during the clone.

Looking for feedback on whether this is an acceptable direction.
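
A hypothetical sketch of the intended flow; the exact signature of `torch.jit._clone_module_with_class` is internal and may differ, and the method names passed as `ignored_methods` are assumptions here:

```
import torch
import torch.utils.bundled_inputs as bundled_inputs

m = torch.jit.script(torch.nn.Linear(2, 2))
bundled_inputs.augment_model_with_bundled_inputs(m, [(torch.zeros(1, 2),)])

# Hypothetical: clone the module while ignoring the methods that the
# bundled-inputs implementation is about to re-add, so that augmenting
# the clone can attach them cleanly.
# clone = torch.jit._clone_module_with_class(
#     m, ignored_methods=["get_all_bundled_inputs", "get_num_bundled_inputs"]
# )
```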
ghstack-source-id: 128908360

Test Plan:
Added unit test and ran it as `buck test //caffe2/test:mobile`

Also see this Bento Notebook: https://www.internalfb.com/intern/anp/view/?id=550829

Reviewed By: gmagogsfm

Differential Revision: D27788394

fbshipit-source-id: 48109cd4583506d4efdb345e4ba31385db23a273
2021-05-13 22:31:05 -07:00
Jacob Szwejbka
60a5ebfac2 [Pytorch Edge] Remove methods_to_optimize arg (#57045)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57045

Went back and adjusted the previous optimizations so they are applied to every function, and cleaned up the API to match.

ghstack-source-id: 127214412
ghstack-source-id: 127536155

Test Plan: unit test

Reviewed By: kimishpatel

Differential Revision: D27950859

fbshipit-source-id: 214e83d5a19b452747fe223615815c10fa4aee58
2021-04-27 14:54:13 -07:00
Jacob Szwejbka
7e9f7fb980 [Pytorch Edge] Prepack folding for functions besides forward (#56081)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56081
ghstack-source-id: 127205799

Test Plan: Unit test. Since I'm prepacking the weights of the same operators multiple times, I wonder if it's a "just works" thing?

Reviewed By: kimishpatel

Differential Revision: D27777337

fbshipit-source-id: 909d2a667d9eb51e205536b478a6668c33b3fb15
2021-04-23 10:40:15 -07:00
Sam Estep
4753100a3b Un-ignore F403 in .flake8 (#55838)
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html

This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).
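
For illustration, the two patterns this PR leaves behind (the module and names below are just examples):

```
# Preferred: an explicit list of imported items.
from torch.nn.modules.linear import Linear, Bilinear

# Where a wildcard is kept (e.g. re-exports in an __init__.py),
# the lint is silenced explicitly:
from torch.nn.modules.linear import *  # noqa: F403
```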

This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser

Differential Revision: D27724232

Pulled By: samestep

fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
2021-04-13 09:24:07 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch in the following locations (a sketch of the rewrite follows the list):
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_
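
A small sketch of the kind of rewrite applied throughout, assuming the legacy `torch.Tensor(...)` constructor form discussed in #47112 (values are illustrative):

```
import torch

# Legacy constructor being removed; its meaning depends on whether the
# argument is data or a size, which is a common source of bugs:
# t = torch.Tensor([1, 2, 3])

# Preferred replacements with explicit semantics:
t_data = torch.tensor([1, 2, 3])  # builds from data, infers dtype
t_empty = torch.empty(3)          # size-based, uninitialized storage
```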

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Jacob Szwejbka
20d7916a6a [Pytorch Mobile] Fold Conv BatchNorm for functions besides forward (#54619)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54619

Minor refactor to conv batchnorm folding to work on other functions besides forward
ghstack-source-id: 125767010

Test Plan: unit test and {P339453712}

Reviewed By: kimishpatel

Differential Revision: D27301452

fbshipit-source-id: 4e0cc544a171a970583979a496b2908935124497
2021-04-06 13:07:12 -07:00
Jacob Szwejbka
583c4bf7d3 [Pytorch Mobile] optimize_for_mobile: Fuse Add Relu on any function (#54441)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54441

Similar to the previous dropout pass.
ghstack-source-id: 124544176

Test Plan: Printed graphs before and after fusion. Verified that inputs and outputs stayed the same. {P299343882}

Reviewed By: kimishpatel

Differential Revision: D27014352

fbshipit-source-id: d0a9548f8743472bdd7e194efd8e8d5fe53b95b6
2021-03-23 12:11:59 -07:00
Jacob Szwejbka
9fef25e579 [Pytorch Mobile] optimize_for_mobile: Remove dropout from any function (#53846)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53846

There's already a variant of removeDropout that takes in a graph, so just switch to calling that one. It doesn't check that the module isn't in training mode (because it doesn't have a module), but optimize_for_mobile guarantees the cloned module is in eval mode.
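
A minimal sketch of the resulting behavior (the model is illustrative):

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = torch.nn.Dropout(0.5)

    def forward(self, x):
        return self.dropout(x)

scripted = torch.jit.script(M().eval())
opt = optimize_for_mobile(scripted)  # dropout is a no-op in eval and gets removed
```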
ghstack-source-id: 124544216

Test Plan: Called optimize on forward and foo; both contained dropouts, and both were removed. Called both functions afterwards to verify they ran and gave the same output. {P308987364}

Reviewed By: kimishpatel

Differential Revision: D26986251

fbshipit-source-id: 085e08cbaa982aa08803a718fee4380af5f86b78
2021-03-22 14:57:02 -07:00
Nikita Shulga
97b6b3df51 [Reland] Update XNNPACK (#52691)
Summary:
This update:
- contains the fix to XNNPACK by kimishpatel
- adds a unit test that exposed the problem
- updates the torchvision checkout to the 0.9.0rc1 hash

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52691

Reviewed By: walterddr

Differential Revision: D26614595

Pulled By: malfet

fbshipit-source-id: d0fe155a084690a3459a9358dac8488292e734fb
2021-02-24 06:40:38 -08:00
Howard Huang
2680ff7759 Revert D26598115: [pytorch][PR] Update XNNPACK
Test Plan: revert-hammer

Differential Revision:
D26598115 (3721962c33)

Original commit changeset: d652bacdee10

fbshipit-source-id: 7e0128aa9b7691ecd323687da6f6054363b3174a
2021-02-23 10:27:43 -08:00
Nikita Shulga
3721962c33 Update XNNPACK (#52645)
Summary:
This update:
- contains the fix to XNNPACK by kimishpatel
- adds a unit test that exposed the problem
- updates the torchvision checkout to the 0.9.0rc1 hash

Fixes https://github.com/pytorch/pytorch/issues/52463

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52645

Reviewed By: kimishpatel, seemethere

Differential Revision: D26598115

Pulled By: malfet

fbshipit-source-id: d652bacdee10bb975fc445ab227de37022b8ef51
2021-02-23 06:59:57 -08:00
jonykarki
934805bc49 cleaned up ModuleAttributeError (#50298)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49726
Just cleaned up the unnecessary `ModuleAttributeError`.

BC-breaking note:
`ModuleAttributeError` was added in the previous unsuccessful [PR](https://github.com/pytorch/pytorch/pull/49879) and removed here. If a user catches `ModuleAttributeError` specifically, this will no longer work. They should catch `AttributeError` instead.
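
For downstream callers, the fix is to catch the standard exception; a minimal sketch:

```
import torch

module = torch.nn.Linear(2, 2)

try:
    module.nonexistent_attribute
except AttributeError:  # previously torch.nn.modules.module.ModuleAttributeError
    pass
```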

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50298

Reviewed By: mrshenli

Differential Revision: D25907620

Pulled By: jbschlosser

fbshipit-source-id: cdfa6b1ea76ff080cd243287c10a9d749a3f3d0a
2021-01-14 06:58:01 -08:00
Rong Rong (AI Infra)
fc5db4265b [BE] replace unittest.main with run_tests (#50451)
Summary:
fix https://github.com/pytorch/pytorch/issues/50448.

This replaces `unittest.main()` with `run_tests()` in all `test/*.py` files. This PR does not address test files in the subdirectories because they seem unrelated.
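
The pattern this PR standardizes on, as used across `test/*.py`:

```
from torch.testing._internal.common_utils import TestCase, run_tests

class MyTest(TestCase):
    def test_trivial(self):
        self.assertTrue(True)

if __name__ == "__main__":
    run_tests()  # instead of unittest.main()
```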

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50451

Reviewed By: janeyx99

Differential Revision: D25899924

Pulled By: walterddr

fbshipit-source-id: f7c861f0096624b2791ad6ef6a16b1c4895cce71
2021-01-13 10:33:08 -08:00
Xiong Zhang
e2d2d9bb0c [PyTorch Mobile] Preserve bundled input related methods when calling optimize_for_mobile (#49170)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49170

Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.

Also added a check to see if all the methods in `preserved_methods` exist. If not, we will now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.

~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~
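
A quick sketch of the resulting behavior (model and inputs are illustrative):

```
import torch
import torch.utils.bundled_inputs as bundled_inputs
from torch.utils.mobile_optimizer import optimize_for_mobile

m = torch.jit.script(torch.nn.Linear(2, 2))
bundled_inputs.augment_model_with_bundled_inputs(m, [(torch.zeros(1, 2),)])

opt = optimize_for_mobile(m)  # no need to list the bundled-input methods
assert hasattr(opt, "get_all_bundled_inputs")  # preserved automatically
```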

Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`

or

`buck test caffe2/test:mobile` to run some other related tests as well.

Reviewed By: dhruvbird

Differential Revision: D25463719

fbshipit-source-id: 6670dfd59bcaf54b56019c1a43db04b288481b6a
2020-12-18 22:01:46 -08:00
Luca Wehrstedt
2255e68da8 Revert D25433268: [PyTorch Mobile] Preserve bundled input related methods when calling optimize_for_mobile
Test Plan: revert-hammer

Differential Revision:
D25433268 (95233870f2)

Original commit changeset: 0bf9b4afe64b

fbshipit-source-id: bba97e48ce0e72f9d1db5159065bb6495d62666c
2020-12-10 04:39:30 -08:00
Xiong Zhang
95233870f2 [PyTorch Mobile] Preserve bundled input related methods when calling optimize_for_mobile
Summary:
Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.

Also added a check to see if all the methods in `preserved_methods` exist. If not, we will now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.

~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~

Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`

or

`buck test caffe2/test:mobile` to run some other related tests as well.

Reviewed By: dhruvbird

Differential Revision: D25433268

fbshipit-source-id: 0bf9b4afe64b79ed1684a3db4c0baea40ed3cdd5
2020-12-09 22:53:56 -08:00
Bram Wasti
43a9d6fb6e [TorchScript] Support user defined classes as constants (#5062)
Summary:
Pull Request resolved: https://github.com/pytorch/glow/pull/5062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45556

User-defined classes can be used as constants. This is useful when freezing and removing the module from the graph.
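
A rough sketch of what this enables, assuming a TorchScript class held as a module attribute (exact scriptability constraints apply):

```
import torch

@torch.jit.script  # TorchScript class; instances can now be treated as constants
class Config(object):
    def __init__(self):
        self.scale = 2.0

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.cfg = Config()

    def forward(self, x):
        return x * self.cfg.scale

frozen = torch.jit.freeze(torch.jit.script(M()).eval())  # cfg can be folded away
```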

Test Plan: waitforsadcastle

Reviewed By: eellison

Differential Revision: D23994974

fbshipit-source-id: 5b4a5c91158aa7f22df39d71f2658afce1d29317
2020-11-16 20:52:02 -08:00
albanD
27e2ea4cea Make add_relu an internal function (#46676)
Summary:
Cleanup for 1.7

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46676

Reviewed By: gchanan

Differential Revision: D24458565

Pulled By: albanD

fbshipit-source-id: b1e4b4630233d3f1a4bac20e3077411d1ae17f7b
2020-10-22 18:08:15 -07:00
Meghan Lele
ce9df084d5 [pytorch] Replace "blacklist" in test/test_mobile_optimizer.py (#45512)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45512

This diff addresses https://github.com/pytorch/pytorch/issues/41443.
It is a clone of D23205313 which could not be imported from GitHub
for strange reasons.

Test Plan: Continuous integration.

Reviewed By: AshkanAliabadi

Differential Revision: D23967322

fbshipit-source-id: 744eb92de7cb5f0bc9540ed6a994f9e6dce8919a
2020-09-30 10:43:59 -07:00
Akshit Khurana
5f49d14be2 Add mobile_optimized tag to optimized model. (#45479)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45479

Add a top-level boolean attribute called `mobile_optimized` to the model; it is set to true if the model has been optimized.
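
A sketch of checking the tag after optimization (the attribute-access form here is an assumption):

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

opt = optimize_for_mobile(torch.jit.script(torch.nn.Linear(2, 2)))
print(getattr(opt, "mobile_optimized", False))  # True once tagged
```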

Test Plan: buck test //caffe2/test:mobile passes

Reviewed By: kimishpatel

Differential Revision: D23956728

fbshipit-source-id: 79c5931702208b871454319ca2ab8633596b1eb8
2020-09-29 10:06:57 -07:00
Vasiliy Kuznetsov
79b8328aaf optimize_for_mobile: bring packed params to root module (#42740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42740

Adds a pass to hoist conv packed params to root module.
The benefit is that if there is nothing else in the conv module,
subsequent passes will delete it, which will reduce module size.

For context, freezing does not handle this because conv packed
params is a custom object.

Test Plan:
```
PYTORCH_JIT_LOG_LEVEL=">hoist_conv_packed_params.cpp" python test/test_mobile_optimizer.py TestOptimizer.test_hoist_conv_packed_params
```

Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D23005961

fbshipit-source-id: 31ab1f5c42a627cb74629566483cdc91f3770a94
2020-08-08 15:53:20 -07:00
Vasiliy Kuznetsov
d8801f590c fix asan failure for module freezing in conv bn folding (#42739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42739

This is a test case which fails with ASAN on at the module freezing
step.

Test Plan:
```
USE_ASAN=1 USE_CUDA=0 python setup.py develop
LD_PRELOAD=/usr/lib64/libasan.so.4 python test/test_mobile_optimizer.py TestOptimizer.test_optimize_for_mobile_asan

// output tail: https://gist.github.com/vkuzo/7a0018b9e10ffe64dab0ac7381479f23
```

Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D23005962

fbshipit-source-id: b7d4492e989af7c2e22197c16150812bd2dda7cc
2020-08-08 15:51:59 -07:00
Yanan Cao
bdcf320bed Support custom exception message (#41907)
Summary:
Raise and assert used to produce a hard-coded error message, "Exception"; the user-provided error message was ignored. This PR adds support for representing the user's error message in TorchScript.

This breaks backward compatibility because now we actually need to script the user's error message, which can potentially contain unscriptable expressions. Such programs can break when scripting, but saved models will continue to work.

Increased an op count in test_mobile_optimizer.py because now we need aten::format to form the actual exception message.
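
A small sketch of what now round-trips through scripting; the message formatting is what pulls in aten::format:

```
import torch

@torch.jit.script
def checked_sqrt(x: float) -> float:
    if x < 0.0:
        raise ValueError("expected a non-negative input, got {}".format(x))
    return x ** 0.5
```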

This is built upon an WIP PR:  https://github.com/pytorch/pytorch/pull/34112 by driazati

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41907

Reviewed By: ngimel

Differential Revision: D22778301

Pulled By: gmagogsfm

fbshipit-source-id: 2b94f0db4ae9fe70c4cd03f4048e519ea96323ad
2020-08-01 13:03:45 -07:00
Kimish Patel
8a79eec98a Add add_relu fusion pass to optimize_for_mobile. (#40252)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40252

As title says.

Test Plan:
python test/test_mobile_optimizer.py

Imported from OSS

Differential Revision: D22126825

fbshipit-source-id: a1880587ba8db9dee0fa450bc463734e4a8693d9
2020-07-10 08:10:22 -07:00
Kimish Patel
4a174c83ca Add option to preserve certain methods during optimize_for_mobile. (#40629)
Summary:
By default, the freeze_module pass invoked from optimize_for_mobile preserves only the forward method. There is an option to specify a list of methods that can be preserved during freeze_module. This PR exposes that option through optimize_for_mobile.
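
A sketch of the exposed option (the `foo` method is illustrative):

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

    @torch.jit.export
    def foo(self, x):
        return x - 1

m = torch.jit.script(M().eval())
opt = optimize_for_mobile(m, preserved_methods=["foo"])  # foo survives freezing
```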
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40629

Test Plan: python test/test_mobile_optimizer.py

Reviewed By: dreiss

Differential Revision: D22260972

Pulled By: kimishpatel

fbshipit-source-id: 452c653269da8bb865acfb58da2d28c23c66e326
2020-06-29 09:32:53 -07:00
Xingying Cheng
0b3755b1d0 Add optimization blacklist as second arg to optimizeForMobile method. (#37462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37462

Instead of running all the optimization passes in the optimizeForMobile method, introduce a whitelist optimizer dictionary as a second param to the method. When it is not passed, the method will run all the optimization passes; otherwise, the method will read the dict and only run the passes whose value is True.
ghstack-source-id: 106104503

Test Plan:
python test/test_mobile_optimizer.py

Imported from OSS

Differential Revision: D22096029

fbshipit-source-id: daa9370c0510930f4c032328b225df0bcf97880f
2020-06-17 18:14:45 -07:00
Xingying Cheng
5c9d1e4824 Propagate module lints for mobile scripted module. (#37046)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37046
ghstack-source-id: 102669259

Creating a Python API entry point to generate mobile model lints; it takes a scripted module as an argument and returns a map of module lints.

The initial version creates a placeholder, with module bundled inputs as the first lint instance. More lints will be added in the future.
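
A sketch of the entry point, assuming it is `torch.utils.mobile_optimizer.generate_mobile_module_lints` as in current releases (the exact lint schema may vary):

```
import torch
from torch.utils.mobile_optimizer import generate_mobile_module_lints

m = torch.jit.script(torch.nn.Linear(2, 2))
for lint in generate_mobile_module_lints(m):
    print(lint)  # e.g. a bundled-input lint if no bundled inputs are attached
```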

Test Plan: python test/test_optimizer.py

Reviewed By: dreiss

Differential Revision: D21164648

fbshipit-source-id: 9e8f4e19d74b5464a55cc73b9dc18f358c5947d6
2020-04-27 10:20:12 -07:00
Xingying Cheng
86f354c530 Python binding api to optimize for mobile model on script module. (#36357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36357
ghstack-source-id: 101907180

Creating a Python API entry point to optimize a mobile model; it takes a scripted module as an argument and returns an optimized scripted module. The initial optimization features include inserting and folding prepack ops.
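
The basic shape of the API introduced here, as a sketch (the model is illustrative):

```
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return torch.relu(self.conv(x))

scripted = torch.jit.script(Net().eval())
optimized = optimize_for_mobile(scripted)  # returns an optimized ScriptModule
optimized.save("net_mobile.pt")
```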

Test Plan: python test/test_optimizer.py

Differential Revision: D20946076

fbshipit-source-id: 93cb4a5bb2371128f802d738eb26d0a4f3b2fe10
2020-04-17 16:21:27 -07:00