Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52824
How was this not breaking? _bundled_inputs_deflated doesn't exist.
ghstack-source-id: 122491970
Test Plan: unit tests
Reviewed By: iseeyuan
Differential Revision: D26658098
fbshipit-source-id: 9ebf961b8764ba8779052c520dd46a8724be042a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52386
Remove the stale aliasing-inputs warning. Add error checks that the inputs argument is not null, has at least one entry, and is a list of tuples. Without these checks, subtle bugs can occur: if the user passes in a list of tensors (the most common mistake), the first dimension of each tensor is silently dropped. This can go unnoticed because it is often the batch dimension, which PyTorch occasionally re-adds silently when it is missing.
ghstack-source-id: 122363487
Test Plan:
Bundle something with an input, bundle something with {} for inputs
For typing check below paste
{P199554712}
Reviewed By: dhruvbird
Differential Revision: D26374867
fbshipit-source-id: cd176f34bad7a4da850b165827f8b2448cd9200d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51153
Enabled bundled inputs for all public functions that the user wants in a TorchScript module. An important caveat: you can't add bundled inputs to functions that existed on the nn.Module but weren't captured by the scripting/tracing process that brought the model to TorchScript.
The old API is exactly the same: it still only works on forward, return types are unchanged, etc.
-----------New API-------------
Attachment of inputs:
***augment_model_with_bundled_inputs*** : works the same as before, but adds the option to specify an info dictionary.
***augment_many_model_functions_with_bundled_inputs*** : similar to the above function, but lets the user specify a Dict[Callable, List[<inputs>]] (mapping function references to the bundled inputs for that function), allowing bundled inputs to be attached to many functions at once.
Consumption of inputs:
***get_all_bundled_inputs_for_<function_name>()*** : works exactly like get_all_bundled_inputs does, but can be used for functions other than forward, provided you know ahead of time what they are called and that they have bundled inputs.
***get_bundled_inputs_functions_and_info()*** : this is easily the hackiest function. It returns a Dict[str, Dict[str, List[str]]] mapping each function name to metadata that includes the name of its get_all_bundled_inputs_for_<function_name> accessor. A user can then execute the functions specified in the values with something like:
    all_info = model.get_bundled_inputs_functions_and_info()
    for func_name in all_info.keys():
        input_func_name = all_info[func_name]['get_inputs_function_name'][0]
        func_to_run = getattr(loaded, input_func_name)
The reason it's done this way is that TorchScript doesn't support the 'Any' type yet, meaning I can't return the bundled inputs directly because they could be of different types for each function. TorchScript also doesn't support Callable, so I can't return a function reference directly either.
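The consumption pattern above can be illustrated with a pure-Python mock. In the real API these methods are generated on a TorchScript module, but the name-based lookup logic is the same; the class and input values below are made up for illustration:

```python
# Pure-Python mock of the name-based lookup pattern. The real accessors
# are generated on a TorchScript module; here they are hand-written.
class MockModule:
    def forward(self, x):
        return x * 2

    def get_all_bundled_inputs_for_forward(self):
        # One tuple of arguments per bundled call.
        return [(3,), (4,)]

    def get_bundled_inputs_functions_and_info(self):
        # Maps each function name to metadata, including the name of the
        # accessor that returns its bundled inputs.
        return {
            "forward": {
                "get_inputs_function_name": [
                    "get_all_bundled_inputs_for_forward"
                ]
            }
        }

loaded = MockModule()
all_info = loaded.get_bundled_inputs_functions_and_info()
results = []
for func_name in all_info.keys():
    input_func_name = all_info[func_name]["get_inputs_function_name"][0]
    func_to_run = getattr(loaded, input_func_name)
    for args in func_to_run():
        results.append(getattr(loaded, func_name)(*args))
# results now holds forward(3) and forward(4), i.e. [6, 8]
```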
ghstack-source-id: 120768561
Test Plan:
Got a model into TorchScript using the available methods I'm aware of (tracing, scripting, the old scripting method). I'm not really sure how tracing brings in functions that aren't in the forward call path, though. Attached bundled inputs and info to them successfully. Changes to TorchTest.py on all but the last version of this diff (where it will be/is removed for land) illustrate what I did to test.
Created and ran unit test
Reviewed By: dreiss
Differential Revision: D25931961
fbshipit-source-id: 36e87c9a585554a83a932e4dcf07d1f91a32f046
Summary:
Improves one annotation for `augment_model_with_bundled_inputs`.
Also adds a comment saying not to work on the caffe2 type annotations; that's not worth the effort, and those ignores can stay as they are.
xref gh-16574
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49658
Reviewed By: heitorschueroff
Differential Revision: D25757721
Pulled By: ezyang
fbshipit-source-id: 44c396d8da9ef3f41b97f9c46a528f0431c4b463
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37055
Sometimes it's okay to bundle a large example input tensor with a model.
Add a utility function to make it easy for users to do that *on purpose*.
Test Plan: Unit test.
Differential Revision: D22264239
Pulled By: dreiss
fbshipit-source-id: 05c6422be1aa926cca850f994ff1ae83c0399119
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36764
This allows bundling inputs that are large uniform buffers in
channels-last memory format.
Test Plan: Unit test.
Differential Revision: D21142660
Pulled By: dreiss
fbshipit-source-id: 31bbea6586d07c1fd0bcad4cb36ed2b8bb88a7e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35631
Bundling sample inputs with our models with a standardized interface
will make it possible to write benchmarking and code-coverage tools that
call all models in a uniform way. The intent is to make this a standard
for mobile models within Facebook. Putting it in torch/utils so tests
can run on GitHub and because it might be useful for others as well.
`augment_model_with_bundled_inputs` is the primary entry point. See
its docstring for usage information and the test for some example uses.
One design question I had was how much power should be available for
automatic deflating and inflating of inputs. The current scheme gives
some automatic handling and a reasonable escape hatch
("_bundled_input_inflate_format") for top-level tensor arguments, but no
automatic support for (e.g.) tensors in tuples or long strings. For
more complex cases, we have the ultimate escape hatch of just defining
_generate_bundled_inputs in the model.
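The deflate/inflate idea can be sketched without torch: store a compact recipe for a large input rather than the buffer itself, and rebuild the full input on demand. The helper names below are hypothetical; the real implementation stores a TorchScript expression per argument:

```python
# Standalone sketch of the deflate/inflate scheme (hypothetical names).
# "Deflated" form: a cheap description. "Inflated" form: the full input.

def deflate(shape, fill):
    # Store only the recipe, not the (potentially huge) buffer.
    return {"kind": "filled", "shape": shape, "fill": fill}

def inflate(deflated):
    # Rebuild the full input from the recipe. A nested list stands in
    # for a tensor here.
    if deflated["kind"] == "filled":
        rows, cols = deflated["shape"]
        return [[deflated["fill"]] * cols for _ in range(rows)]
    raise ValueError("unknown deflated form")

recipe = deflate((2, 3), 0.0)   # tiny: just a dict
full = inflate(recipe)          # expanded 2x3 buffer of zeros
```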
Another design question was whether to add the inputs to the model or
wrap the model in a wrapper module that had these methods and delegated
calls to `forward`. Because models can have other exposed methods and
attributes, the wrapper approach seemed too onerous.
Test Plan: Unit test.
Differential Revision: D20925013
Pulled By: dreiss
fbshipit-source-id: 4dbbb4cce41e5752133b4ecdb05e1c92bac6b2d5