Commit Graph

10 Commits

Author SHA1 Message Date
Jacob Szwejbka
155b19ef1a [Pytorch Mobile] Remove useless line from bundled_inputs (#52824)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52824

How was this not breaking? _bundled_inputs_deflated doesn't exist
ghstack-source-id: 122491970

Test Plan: unit tests

Reviewed By: iseeyuan

Differential Revision: D26658098

fbshipit-source-id: 9ebf961b8764ba8779052c520dd46a8724be042a
2021-02-26 10:36:32 -08:00
Jacob Szwejbka
3cf08eaf15 [Pytorch Mobile] Improve Bundled Inputs Error Checking (#52386)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52386

Remove the stale aliasing-inputs warning; add error checks that inputs is not null and has at least one entry, and that the list of inputs is a list of tuples. Passing a list of tensors instead (the most common mistake) can cause subtle bugs where the first dimension of each tensor is dropped. This can go unnoticed because it is often the batch dimension, which PyTorch occasionally re-adds silently when it is missing.
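A minimal torch-free sketch of why the list-of-tuples check matters: bundled-inputs machinery ultimately calls `forward(*entry)` for each bundled entry, so if an entry is a "tensor" rather than a tuple of arguments, the entry itself gets unpacked and its first (often batch) dimension is silently dropped. All names here are illustrative stand-ins, not the real torch.utils.bundled_inputs internals.

```python
def run_bundled(forward, inputs):
    # Each bundled entry must be a tuple of arguments: forward(*entry).
    return [forward(*entry) for entry in inputs]

def shape(x):
    # Report the nested-list "shape", standing in for tensor.shape.
    s = []
    while isinstance(x, list):
        s.append(len(x))
        x = x[0]
    return tuple(s)

def forward(x):
    return shape(x)  # stand-in for a model's forward

batch = [[[0.0] * 4] * 3]  # nested list standing in for a tensor of shape (1, 3, 4)

good = [(batch,)]  # list of tuples: forward sees the full (1, 3, 4) input
bad = [batch]      # list of "tensors": the entry itself is unpacked, so
                   # forward sees (3, 4) -- the first dimension is dropped

print(run_bundled(forward, good))  # [(1, 3, 4)]
print(run_bundled(forward, bad))   # [(3, 4)]
```

With a real model, the dropped dimension is usually the batch dimension, which is exactly why the bug goes unnoticed until shapes stop lining up.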
ghstack-source-id: 122363487

Test Plan:
Bundle something with an input, bundle something with {} for inputs

For typing check below paste

{P199554712}

Reviewed By: dhruvbird

Differential Revision: D26374867

fbshipit-source-id: cd176f34bad7a4da850b165827f8b2448cd9200d
2021-02-24 13:55:45 -08:00
Jacob Szwejbka
0118dec2e3 [Pytorch] Expanded Bundled Inputs To Any Public Function (#51153)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51153

Enabled bundled inputs for all public functions that the user wants in a TorchScript module. An important caveat here is that you can't add bundled inputs to functions that were in the nn.Module but weren't caught in the scripting/tracing process that brought the model to TorchScript.

The old API is exactly the same: it still works only on forward, return types are unchanged, etc.

-----------New API-------------

Attachment of inputs:

***augment_model_with_bundled_inputs*** : Works the same as before, but adds the option to specify an info dictionary.

***augment_many_model_functions_with_bundled_inputs*** : Similar to the function above, but lets the user specify a Dict[Callable, List[<inputs>]] (mapping function references to the bundled inputs for that function) to attach bundled inputs to many functions.

Consumption of inputs:

***get_all_bundled_inputs_for_<function_name>()*** : Works exactly like get_all_bundled_inputs does, but can be used for functions other than forward, provided you know ahead of time what they are called and that they have bundled inputs.

***get_bundled_inputs_functions_and_info()*** : This is easily the hackiest function. Returns a Dict[str, str] mapping function names to get_all_bundled_inputs_for_<function_name>. A user can then execute the functions specified in the values with something like
    all_info = model.get_bundled_inputs_functions_and_info()
    for func_name in all_info.keys():
        input_func_name = all_info[func_name]['get_inputs_function_name'][0]
        func_to_run = getattr(model, input_func_name)
The reason it's done this way is that TorchScript doesn't support the 'Any' type yet, meaning the bundled inputs can't be returned directly because they could be different types for each function. TorchScript also doesn't support Callable, so a function reference can't be returned directly either.
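The consumption pattern above can be exercised with a plain-Python stand-in for a scripted model. MockModel, its function names, and its inputs are all illustrative assumptions; only the lookup shape (name -> accessor-name -> getattr) mirrors the description.

```python
class MockModel:
    """Stand-in for a scripted model with bundled inputs on two functions."""

    def get_all_bundled_inputs_for_forward(self):
        return [((1, 2),)]

    def get_all_bundled_inputs_for_encode(self):
        return [("hello",)]

    def get_bundled_inputs_functions_and_info(self):
        # Maps each function name to metadata that includes the *name* of its
        # per-function accessor, since TorchScript can't return 'Any' values
        # or Callable references directly.
        return {
            "forward": {"get_inputs_function_name": ["get_all_bundled_inputs_for_forward"]},
            "encode": {"get_inputs_function_name": ["get_all_bundled_inputs_for_encode"]},
        }

model = MockModel()
all_info = model.get_bundled_inputs_functions_and_info()
bundled = {}
for func_name in all_info:
    input_func_name = all_info[func_name]["get_inputs_function_name"][0]
    func_to_run = getattr(model, input_func_name)
    bundled[func_name] = func_to_run()

print(bundled)
```

The indirection through string names is the workaround: the caller resolves each accessor with getattr and only then sees the (per-function, differently typed) inputs.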
ghstack-source-id: 120768561

Test Plan:
Got a model into TorchScript using the available methods that I'm aware of (tracing, scripting, old scripting method). Not really sure how tracing brings in functions that aren't in the forward call path, though. Attached bundled inputs and info to them successfully. Changes to TorchTest.py on all but the last version of this diff (where it will be/is removed for land) illustrate what I did to test.

Created and ran unit test

Reviewed By: dreiss

Differential Revision: D25931961

fbshipit-source-id: 36e87c9a585554a83a932e4dcf07d1f91a32f046
2021-02-02 10:33:59 -08:00
Edward Yang
3ce539881a Back out "Revert D25757721: [pytorch][PR] Run mypy on more test files" (#50142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50142

Original commit changeset: 58437d719285

Test Plan: OSS CI

Reviewed By: walterddr, ngimel

Differential Revision: D25803866

fbshipit-source-id: d6b83a5211e430c0451994391876103f1ad96315
2021-01-06 11:27:36 -08:00
Mike Ruberry
9529ae3776 Revert D25757721: [pytorch][PR] Run mypy on more test files
Test Plan: revert-hammer

Differential Revision:
D25757721 (b7bfc723d3)

Original commit changeset: 44c396d8da9e

fbshipit-source-id: 58437d719285a4fecd8c05e487cc86fc2cebadff
2021-01-05 15:18:14 -08:00
Ralf Gommers
b7bfc723d3 Run mypy on more test files (#49658)
Summary:
Improves one annotation for `augment_model_with_bundled_inputs`

Also adds a comment not to work on caffe2 type annotations; that's not worth the effort, so those ignores can stay as they are.

xref gh-16574

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49658

Reviewed By: heitorschueroff

Differential Revision: D25757721

Pulled By: ezyang

fbshipit-source-id: 44c396d8da9ef3f41b97f9c46a528f0431c4b463
2021-01-05 09:28:38 -08:00
Guilherme Leobas
cdf5e2ae86 add typing annotations for a few torch.utils.* modules (#43806)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/43431. Depends on [gh-43862](https://github.com/pytorch/pytorch/pull/43862) (EDIT: now merged)

Modules:
- torch.utils.mkldnn
- torch.utils.mobile_optimizer
- torch.utils.bundled_inputs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43806

Reviewed By: gmagogsfm

Differential Revision: D23635151

Pulled By: SplitInfinity

fbshipit-source-id: a85b75a7927dde6cc55bcb361f8ff601ffb0b2a1
2020-09-11 10:20:55 -07:00
David Reiss
375cd852fa Add a utility function for bundling large input tensors (#37055)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37055

Sometimes it's okay to bundle a large example input tensor with a model.
Add a utility function to make it easy for users to do that *on purpose*.

Test Plan: Unit test.

Differential Revision: D22264239

Pulled By: dreiss

fbshipit-source-id: 05c6422be1aa926cca850f994ff1ae83c0399119
2020-06-26 13:34:02 -07:00
David Reiss
41ea7f2d86 Add channels-last support to bundled_inputs (#36764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36764

This allows bundling inputs that are large uniform buffers in
channels-last memory format.

Test Plan: Unit test.

Differential Revision: D21142660

Pulled By: dreiss

fbshipit-source-id: 31bbea6586d07c1fd0bcad4cb36ed2b8bb88a7e4
2020-06-26 13:31:17 -07:00
David Reiss
fab06bfb75 Add utility for bundling sample inputs with models (#35631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35631

Bundling sample inputs with our models with a standardized interface
will make it possible to write benchmarking and code-coverage tools that
call all models in a uniform way.  The intent is to make this a standard
for mobile models within Facebook.  Putting it in torch/utils so tests
can run on GitHub and because it might be useful for others as well.

`augment_model_with_bundled_inputs` is the primary entry point.  See
its docstring for usage information and the test for some example uses.

One design question I had was how much power should be available for
automatic deflating and inflating of inputs.  The current scheme gives
some automatic handling and a reasonable escape hatch
("_bundled_input_inflate_format") for top-level tensor arguments, but no
automatic support for (e.g.) tensors in tuples or long strings.  For
more complex cases, we have the ultimate escape hatch of just defining
_generate_bundled_inputs in the model.
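The "ultimate escape hatch" described above can be sketched without torch: a model defines _generate_bundled_inputs itself and reconstructs arbitrarily complex inputs programmatically. The consumer-side helper here is an assumed illustration, not the real bundled-inputs machinery.

```python
class ModelWithCustomInputs:
    """Stand-in model that opts out of automatic inflation entirely."""

    def forward(self, x):
        return x

    def _generate_bundled_inputs(self):
        # Rebuild inputs that the automatic scheme can't handle, e.g.
        # tensors nested in tuples or long strings, at inflation time.
        return [(list(range(4)),), (["a" * 8],)]

def get_all_bundled_inputs(model):
    # Prefer the model-defined generator when it is present.
    if hasattr(model, "_generate_bundled_inputs"):
        return model._generate_bundled_inputs()
    raise RuntimeError("no bundled inputs attached")

inputs = get_all_bundled_inputs(ModelWithCustomInputs())
print(len(inputs))  # 2
```

The trade-off matches the design discussion: the automatic path covers top-level tensor arguments, while anything more exotic pushes the inflation logic into the model itself.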

Another design question was whether to add the inputs to the model or
wrap the model in a wrapper module that had these methods and delegated
calls to `forward`.  Because models can have other exposed methods and
attributes, the wrapper seemed too onerous.

Test Plan: Unit test.

Differential Revision: D20925013

Pulled By: dreiss

fbshipit-source-id: 4dbbb4cce41e5752133b4ecdb05e1c92bac6b2d5
2020-04-08 13:10:36 -07:00