Summary:
This update contains a fix to XNNPACK by kimishpatel.
Adds a unit test that exposed the problem.
Updates the torchvision checkout to the 0.9.0rc1 hash.
Fixes https://github.com/pytorch/pytorch/issues/52463
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52645
Reviewed By: kimishpatel, seemethere
Differential Revision: D26598115
Pulled By: malfet
fbshipit-source-id: d652bacdee10bb975fc445ab227de37022b8ef51
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49726
Just cleaned up the unnecessary `ModuleAttributeError`
BC-breaking note:
`ModuleAttributeError` was added in the previous unsuccessful [PR](https://github.com/pytorch/pytorch/pull/49879) and removed here. If a user catches `ModuleAttributeError` specifically, this will no longer work. They should catch `AttributeError` instead.
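For illustration, a minimal sketch of the migration (the attribute name below is hypothetical):
```python
import torch

module = torch.nn.Linear(2, 2)
try:
    module.some_missing_attribute  # hypothetical attribute that does not exist
except AttributeError:  # previously code may have caught ModuleAttributeError here
    pass
```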
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50298
Reviewed By: mrshenli
Differential Revision: D25907620
Pulled By: jbschlosser
fbshipit-source-id: cdfa6b1ea76ff080cd243287c10a9d749a3f3d0a
Summary:
fix https://github.com/pytorch/pytorch/issues/50448.
This updates all `test/*.py` files to use `run_tests()`. This PR does not address test files in the subdirectories because they seem unrelated.
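For reference, a minimal sketch of the entry-point pattern this standardizes on (the test file and class below are hypothetical):
```python
# test/test_example.py -- hypothetical file following the standardized pattern
from torch.testing._internal.common_utils import TestCase, run_tests

class TestExample(TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    run_tests()
```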
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50451
Reviewed By: janeyx99
Differential Revision: D25899924
Pulled By: walterddr
fbshipit-source-id: f7c861f0096624b2791ad6ef6a16b1c4895cce71
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49170
Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.
Also added a check to verify that all the methods in `preserved_methods` exist. If not, we will now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.
~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~
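A minimal sketch of the expected behavior, assuming a module augmented with bundled inputs via `torch.utils.bundled_inputs`:
```python
import torch
import torch.utils.bundled_inputs
from torch.utils.mobile_optimizer import optimize_for_mobile

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

scripted = torch.jit.script(M())
torch.utils.bundled_inputs.augment_model_with_bundled_inputs(
    scripted, [(torch.zeros(1),)]
)
optimized = optimize_for_mobile(scripted)
# The bundled-input methods should now be preserved automatically.
print(optimized.get_all_bundled_inputs())
```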
Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`
or
`buck test caffe2/test:mobile` to run some other related tests as well.
Reviewed By: dhruvbird
Differential Revision: D25463719
fbshipit-source-id: 6670dfd59bcaf54b56019c1a43db04b288481b6a
Summary:
Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.
Also added a check to verify that all the methods in `preserved_methods` exist. If not, we will now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.
~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~
Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`
or
`buck test caffe2/test:mobile` to run some other related tests as well.
Reviewed By: dhruvbird
Differential Revision: D25433268
fbshipit-source-id: 0bf9b4afe64b79ed1684a3db4c0baea40ed3cdd5
Summary:
Pull Request resolved: https://github.com/pytorch/glow/pull/5062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45556
User-defined classes can be used as constants. This is useful when freezing and removing the module from the graph.
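As a hedged sketch (the class and module names below are made up), this is the kind of pattern freezing can now fold into the graph:
```python
import torch

@torch.jit.script
class Scale(object):
    def __init__(self, value: float):
        self.value = value

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = Scale(2.0)  # user-defined class held as a module attribute

    def forward(self, x):
        return x * self.scale.value

# Freezing can treat the user-defined class instance as a constant
# and remove the attribute from the module (behavior as described above).
frozen = torch.jit.freeze(torch.jit.script(M()).eval())
```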
Test Plan: waitforsadcastle
Reviewed By: eellison
Differential Revision: D23994974
fbshipit-source-id: 5b4a5c91158aa7f22df39d71f2658afce1d29317
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45512
This diff addresses https://github.com/pytorch/pytorch/issues/41443.
It is a clone of D23205313 which could not be imported from GitHub
for strange reasons.
Test Plan: Continuous integration.
Reviewed By: AshkanAliabadi
Differential Revision: D23967322
fbshipit-source-id: 744eb92de7cb5f0bc9540ed6a994f9e6dce8919a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45479
Adds a top-level boolean attribute named `mobile_optimized` to the model, set to true when the model has been optimized.
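A minimal sketch of checking the flag (the exact access path is an assumption):
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

optimized = optimize_for_mobile(torch.jit.script(M()))
# The pass is described as setting a top-level boolean attribute on the model.
print(getattr(optimized, "mobile_optimized", False))
```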
Test Plan: buck test //caffe2/test:mobile passes
Reviewed By: kimishpatel
Differential Revision: D23956728
fbshipit-source-id: 79c5931702208b871454319ca2ab8633596b1eb8
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42740
Adds a pass to hoist conv packed params to root module.
The benefit is that if there is nothing else in the conv module,
subsequent passes will delete it, which will reduce module size.
For context, freezing does not handle this because conv packed
params is a custom object.
Test Plan:
```
PYTORCH_JIT_LOG_LEVEL=">hoist_conv_packed_params.cpp" python test/test_mobile_optimizer.py TestOptimizer.test_hoist_conv_packed_params
```
Imported from OSS
Reviewed By: kimishpatel
Differential Revision: D23005961
fbshipit-source-id: 31ab1f5c42a627cb74629566483cdc91f3770a94
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42739
This is a test case which fails under ASAN at the module freezing
step.
Test Plan:
```
USE_ASAN=1 USE_CUDA=0 python setup.py develop
LD_PRELOAD=/usr/lib64/libasan.so.4 python test/test_mobile_optimizer.py TestOptimizer.test_optimize_for_mobile_asan
// output tail: https://gist.github.com/vkuzo/7a0018b9e10ffe64dab0ac7381479f23
```
Imported from OSS
Reviewed By: kimishpatel
Differential Revision: D23005962
fbshipit-source-id: b7d4492e989af7c2e22197c16150812bd2dda7cc
Summary:
`raise` and `assert` used to have a hard-coded error message, "Exception"; the user-provided error message was ignored. This PR adds support for representing the user's error message in TorchScript.
This breaks backward compatibility because now we actually need to script the user's error message, which can potentially contain unscriptable expressions. Such programs can break when scripting, but saved models can still continue to work.
Increased an op count in test_mobile_optimizer.py because now we need aten::format to form the actual exception message.
This is built upon a WIP PR: https://github.com/pytorch/pytorch/pull/34112 by driazati
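A minimal sketch of the behavior this enables (the function below is made up):
```python
import torch

@torch.jit.script
def check_positive(x: int) -> int:
    if x <= 0:
        # The user-provided message is now scripted (hence the aten::format op)
        # instead of being replaced by a generic "Exception".
        raise ValueError("expected a positive value, got {}".format(x))
    return x

try:
    check_positive(-1)
except Exception as e:
    print(e)  # the message should include the formatted user text
```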
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41907
Reviewed By: ngimel
Differential Revision: D22778301
Pulled By: gmagogsfm
fbshipit-source-id: 2b94f0db4ae9fe70c4cd03f4048e519ea96323ad
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40252
As title says.
Test Plan:
python test/test_mobile_optimizer.py
Imported from OSS
Differential Revision: D22126825
fbshipit-source-id: a1880587ba8db9dee0fa450bc463734e4a8693d9
Summary:
By default the freeze_module pass, invoked from optimize_for_mobile,
preserves only the forward method. There is an option to specify a list of
methods to preserve during freeze_module. This PR exposes that option
through the optimize_for_mobile pass.
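A minimal sketch of the exposed option (the keyword name `preserved_methods` is assumed from the current Python API):
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

    @torch.jit.export
    def preprocess(self, x):
        return x * 2

scripted = torch.jit.script(M())
# Freezing keeps only `forward` by default; `preprocess` is preserved explicitly.
optimized = optimize_for_mobile(scripted, preserved_methods=["preprocess"])
```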
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40629
Test Plan: python test/test_mobile_optimizer.py
Reviewed By: dreiss
Differential Revision: D22260972
Pulled By: kimishpatel
fbshipit-source-id: 452c653269da8bb865acfb58da2d28c23c66e326
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37462
Instead of always running every optimization pass in the optimizeForMobile method,
this introduces an optimizer whitelist dictionary as a second parameter.
When it is not passed, the method runs all optimization passes; otherwise,
the method reads the dict and runs only the passes whose value is True.
ghstack-source-id: 106104503
Test Plan:
python test/test_mobile_optimizer.py
Imported from OSS
Differential Revision: D22096029
fbshipit-source-id: daa9370c0510930f4c032328b225df0bcf97880f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37046
ghstack-source-id: 102669259
Creates a Python API entry point that generates mobile model lints. It takes a scripted module as an argument and returns a map of module lints.
The initial version creates a placeholder that includes module bundled inputs as the first lint instance. More lints will be added in the future.
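A minimal sketch of the API, assuming the entry point is `torch.utils.mobile_optimizer.generate_mobile_module_lints`:
```python
import torch
from torch.utils.mobile_optimizer import generate_mobile_module_lints

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

lints = generate_mobile_module_lints(torch.jit.script(M()))
for lint in lints:
    print(lint)  # e.g. a hint that the module has no bundled inputs attached
```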
Test Plan: python test/test_optimizer.py
Reviewed By: dreiss
Differential Revision: D21164648
fbshipit-source-id: 9e8f4e19d74b5464a55cc73b9dc18f358c5947d6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36357
ghstack-source-id: 101907180
Creates a Python API entry point to optimize a mobile model. It takes a scripted module as an argument and returns an optimized scripted module. The initial optimization features include inserting and folding prepack ops.
Test Plan: python test/test_optimizer.py
Differential Revision: D20946076
fbshipit-source-id: 93cb4a5bb2371128f802d738eb26d0a4f3b2fe10