Updates flake8 to v6.1.0 and fixes a few lints using sed and some ruff tooling.
- Replace `assert(0)` with `raise AssertionError()`
- Remove extraneous parentheses, e.g.:
- `assert(a == b)` -> `assert a == b`
- `if(x > y or y < z):` -> `if x > y or y < z:`
- And `return('...')` -> `return '...'`
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116591
Approved by: https://github.com/albanD, https://github.com/malfet
Fixes #112633
Fixed pydocstyle errors in the following files. The remaining errors are not covered by this issue. `torch/utils/dlpack.py` was not modified, as its errors relate to the function signature in the first line of the docstring, which must be kept as-is for Sphinx to interpret it correctly:
```python
def from_dlpack(ext_tensor: Any) -> 'torch.Tensor':
"""from_dlpack(ext_tensor) -> Tensor
.....
"""
```
pydocstyle torch/utils/_contextlib.py --count
before: 4
after: 0
pydocstyle torch/backends/mps/__init__.py --count
before: 8
after: 1
**remaining errors**
```
torch/backends/mps/__init__.py:1 at module level:
D104: Missing docstring in public package
```
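(For reference, a D104 fix is just a module-level docstring as the first statement of the package's `__init__.py`; the wording below is hypothetical, and this PR leaves the error above unfixed as out of scope.)
```python
# torch/backends/mps/__init__.py -- hypothetical D104 fix:
"""Query and control the MPS (Metal Performance Shaders) backend."""
```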
pydocstyle torch/backends/xeon/run_cpu.py --count
before: 13
after: 1
**remaining errors**
```
torch/backends/xeon/run_cpu.py:864 in public function `main`:
D103: Missing docstring in public function
```
pydocstyle torch/backends/cpu/__init__.py --count
before: 2
after: 1
**remaining errors**
```
torch/backends/cpu/__init__.py:1 at module level:
D104: Missing docstring in public package
```
pydocstyle torch/utils/cpp_backtrace.py --count
before: 4
after: 1
**remaining errors**
```
torch/utils/cpp_backtrace.py:1 at module level:
D100: Missing docstring in public module
```
pydocstyle torch/utils/bundled_inputs.py --count
before: 8
after: 1
**remaining errors**
```
torch/utils/bundled_inputs.py:1 at module level:
D100: Missing docstring in public module
```
pydocstyle torch/utils/file_baton.py --count
before: 8
after: 1
**remaining errors**
```
torch/utils/file_baton.py:1 at module level:
D100: Missing docstring in public module
```
pydocstyle torch/utils/mobile_optimizer.py --count
before: 6
after: 1
**remaining errors**
```
torch/utils/mobile_optimizer.py:8 in public class `LintCode`:
D101: Missing docstring in public class
```
pydocstyle torch/backends/opt_einsum/__init__.py --count
before: 7
after: 5
**remaining errors**
```
torch/backends/opt_einsum/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/opt_einsum/__init__.py:67 in public function `set_flags`:
D103: Missing docstring in public function
torch/backends/opt_einsum/__init__.py:77 in public function `flags`:
D103: Missing docstring in public function
torch/backends/opt_einsum/__init__.py:93 in public class `OptEinsumModule`:
D101: Missing docstring in public class
torch/backends/opt_einsum/__init__.py:94 in public method `__init__`:
D107: Missing docstring in __init__
```
pydocstyle torch/utils/_device.py --count
before: 9
after: 6
**remaining errors**
```
torch/utils/_device.py:58 in public class `DeviceContext`:
D101: Missing docstring in public class
torch/utils/_device.py:59 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/_device.py:62 in public method `__enter__`:
D105: Missing docstring in magic method
torch/utils/_device.py:68 in public method `__exit__`:
D105: Missing docstring in magic method
torch/utils/_device.py:73 in public method `__torch_function__`:
D105: Missing docstring in magic method
torch/utils/_device.py:80 in public function `device_decorator`:
D103: Missing docstring in public function
```
pydocstyle torch/utils/_freeze.py --count
before: 15
after: 7
**remaining errors**
```
torch/utils/_freeze.py:77 in public function `indent_msg`:
D103: Missing docstring in public function
torch/utils/_freeze.py:89 in public class `FrozenModule`:
D101: Missing docstring in public class
torch/utils/_freeze.py:100 in public class `Freezer`:
D101: Missing docstring in public class
torch/utils/_freeze.py:101 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/_freeze.py:106 in public method `msg`:
D102: Missing docstring in public method
torch/utils/_freeze.py:185 in public method `get_module_qualname`:
D102: Missing docstring in public method
torch/utils/_freeze.py:206 in public method `compile_string`:
D102: Missing docstring in public method
```
pydocstyle torch/utils/throughput_benchmark.py --count
before: 25
after: 8
**remaining errors**
```
torch/utils/throughput_benchmark.py:1 at module level:
D100: Missing docstring in public module
torch/utils/throughput_benchmark.py:27 in public class `ExecutionStats`:
D101: Missing docstring in public class
torch/utils/throughput_benchmark.py:28 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/throughput_benchmark.py:33 in public method `latency_avg_ms`:
D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:37 in public method `num_iters`:
D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:46 in public method `total_time_seconds`:
D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:50 in public method `__str__`:
D105: Missing docstring in magic method
torch/utils/throughput_benchmark.py:94 in public method `__init__`:
D107: Missing docstring in __init__
```
pydocstyle torch/utils/hooks.py --count
before: 14
after: 11
**remaining errors**
```
torch/utils/hooks.py:1 at module level:
D100: Missing docstring in public module
torch/utils/hooks.py:23 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/hooks.py:34 in public method `remove`:
D102: Missing docstring in public method
torch/utils/hooks.py:44 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:50 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:64 in public method `__enter__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:67 in public method `__exit__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:82 in public function `warn_if_has_hooks`:
D103: Missing docstring in public function
torch/utils/hooks.py:103 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/hooks.py:188 in public method `setup_input_hook`:
D102: Missing docstring in public method
torch/utils/hooks.py:197 in public method `setup_output_hook`:
D102: Missing docstring in public method
```
pydocstyle torch/utils/_traceback.py --count
before: 19
after: 14
**remaining errors**
```
torch/utils/_traceback.py:47 in public function `report_compile_source_on_error`:
D103: Missing docstring in public function
torch/utils/_traceback.py:160 in public class `CapturedTraceback`:
D101: Missing docstring in public class
torch/utils/_traceback.py:163 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/_traceback.py:167 in public method `cleanup`:
D102: Missing docstring in public method
torch/utils/_traceback.py:170 in public method `summary`:
D102: Missing docstring in public method
torch/utils/_traceback.py:182 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/_traceback.py:190 in public method `extract`:
D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:190 in public method `extract`:
D400: First line should end with a period (not 't')
torch/utils/_traceback.py:213 in public method `format`:
D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:213 in public method `format`:
D400: First line should end with a period (not 'f')
torch/utils/_traceback.py:213 in public method `format`:
D401: First line should be in imperative mood (perhaps 'Format', not 'Formats')
torch/utils/_traceback.py:224 in public method `format_all`:
D200: One-line docstring should fit on one line with quotes (found 3)
torch/utils/_traceback.py:247 in private function `_extract_symbolized_tb`:
D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:247 in private function `_extract_symbolized_tb`:
D400: First line should end with a period (not 'f')
```
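(To illustrate the D205/D400/D401 class of fixes applied throughout this PR; the docstrings below are illustrative, not the actual source.)
```python
# Before: trips D205 (no blank line after the summary), D400 (summary
# doesn't end with a period), and D401 (not imperative mood).
def format(self):
    """Formats the traceback
    as a list of strings"""

# After: imperative one-line summary ending in a period, a blank line,
# then the description.
def format(self):
    """Format the traceback.

    Return the traceback as a list of strings.
    """
```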
pydocstyle torch/utils/mkldnn.py --count
before: 28
after: 26
**remaining errors**
```
torch/utils/mkldnn.py:1 at module level:
D100: Missing docstring in public module
torch/utils/mkldnn.py:4 in public class `MkldnnLinear`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:5 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:19 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:23 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:29 in public method `forward`:
D102: Missing docstring in public method
torch/utils/mkldnn.py:75 in public class `MkldnnConv1d`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:76 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:82 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:88 in public class `MkldnnConv2d`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:89 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:100 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:110 in public class `MkldnnConv3d`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:111 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:122 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:133 in public class `MkldnnBatchNorm`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:136 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:155 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:163 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:171 in public method `forward`:
D102: Missing docstring in public method
torch/utils/mkldnn.py:184 in public class `MkldnnPrelu`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:185 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:190 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:194 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:199 in public method `forward`:
D102: Missing docstring in public method
torch/utils/mkldnn.py:205 in public function `to_mkldnn`:
D103: Missing docstring in public function
```
pydocstyle torch/utils/weak.py --count
before: 32
after: 30
**remaining errors**
```
torch/utils/weak.py:1 at module level:
D100: Missing docstring in public module
torch/utils/weak.py:42 in public class `WeakIdRef`:
D101: Missing docstring in public class
torch/utils/weak.py:45 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/weak.py:54 in public method `__call__`:
D102: Missing docstring in public method
torch/utils/weak.py:61 in public method `__hash__`:
D105: Missing docstring in magic method
torch/utils/weak.py:64 in public method `__eq__`:
D105: Missing docstring in magic method
torch/utils/weak.py:84 in public class `WeakIdKeyDictionary`:
D101: Missing docstring in public class
torch/utils/weak.py:87 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/weak.py:131 in public method `__delitem__`:
D105: Missing docstring in magic method
torch/utils/weak.py:135 in public method `__getitem__`:
D105: Missing docstring in magic method
torch/utils/weak.py:138 in public method `__len__`:
D105: Missing docstring in magic method
torch/utils/weak.py:145 in public method `__repr__`:
D105: Missing docstring in magic method
torch/utils/weak.py:148 in public method `__setitem__`:
D105: Missing docstring in magic method
torch/utils/weak.py:151 in public method `copy`:
D102: Missing docstring in public method
torch/utils/weak.py:162 in public method `__deepcopy__`:
D105: Missing docstring in magic method
torch/utils/weak.py:172 in public method `get`:
D102: Missing docstring in public method
torch/utils/weak.py:175 in public method `__contains__`:
D105: Missing docstring in magic method
torch/utils/weak.py:182 in public method `items`:
D102: Missing docstring in public method
torch/utils/weak.py:189 in public method `keys`:
D102: Missing docstring in public method
torch/utils/weak.py:198 in public method `values`:
D102: Missing docstring in public method
torch/utils/weak.py:216 in public method `popitem`:
D102: Missing docstring in public method
torch/utils/weak.py:224 in public method `pop`:
D102: Missing docstring in public method
torch/utils/weak.py:228 in public method `setdefault`:
D102: Missing docstring in public method
torch/utils/weak.py:231 in public method `update`:
D102: Missing docstring in public method
torch/utils/weak.py:241 in public method `__ior__`:
D105: Missing docstring in magic method
torch/utils/weak.py:245 in public method `__or__`:
D105: Missing docstring in magic method
torch/utils/weak.py:252 in public method `__ror__`:
D105: Missing docstring in magic method
torch/utils/weak.py:262 in public method `__eq__`:
D105: Missing docstring in magic method
torch/utils/weak.py:276 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/weak.py:280 in public method `__call__`:
D102: Missing docstring in public method
```
@mikaylagawarecki @jbschlosser @svekars
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113311
Approved by: https://github.com/ezyang
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.
I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I'm enabling the rule to keep it that way. :)
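For reference, the pattern RUF017 flags looks like this (illustrative):
```python
import functools
import itertools
import operator

lists = [[1, 2], [3], [4, 5]]

# Quadratic: each `+` builds a new list, re-copying all earlier elements.
flat_slow = sum(lists, [])

# Linear alternatives:
flat_fast = list(itertools.chain.from_iterable(lists))
flat_iadd = functools.reduce(operator.iadd, lists, [])
```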
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65856
Occasionally functions don't have the `__name__` attribute set and have `name` set instead. Not sure why this happens, but this should catch it.
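A minimal sketch of the kind of guard described (the helper name is hypothetical):
```python
def _callable_name(fn) -> str:
    # Most callables carry __name__, but some apparently only expose `name`.
    name = getattr(fn, "__name__", None) or getattr(fn, "name", None)
    return name if isinstance(name, str) else "<unknown>"
```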
Test Plan: ci
Reviewed By: iseeyuan
Differential Revision: D31286787
fbshipit-source-id: 8a339541215329b6e9ff43ef77363be41f19c5ca
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64557
MaskRCNN speed depends on how many people are detected in the detection stage, so a random input from the dataloader doesn't give comparable numbers. To standardize the benchmark, we use 2 standard images, containing 2 and 3 people respectively.
Test Plan: AIBench result: https://www.internalfb.com/intern/aibench/details/945883114818980
Reviewed By: axitkhurana
Differential Revision: D30446049
fbshipit-source-id: a2826fdb69e9f840c0afc566c4cbbcde1c2fba89
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62368
# Context
Bundled inputs accept an expression in the form of a string, `InflatableArg.fmt`, that is applied to the inputs to inflate them; this provides the flexibility of a custom inflation transform. When the input arguments to a function are not of Tensor type, TorchScript casts the inputs from type `T` to `Optional[T]` and expects the function to handle the nullable (`None`) case as well. This is tricky to handle in one-line code or lambda functions.
We propose an alternative that allows `InflatableArg` to include the text of a TorchScript function, which is defined on the module as a helper and then used in the inflation expression. This is provided via `InflatableArg.fmt_fn`. See `pytorch/test/test_bundled_inputs.py` for an example of how to use it.
Also see JacobSzwejbka's comment on this [here](https://github.com/pytorch/pytorch/pull/62368#issuecomment-892012812).
# Mitigation
Allow `InflatableArg` to include the text of a TorchScript function that is defined on the module as a helper, then use that in its inflation expression, as sketched below.
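A hedged sketch of the new escape hatch (the helper body is illustrative; see `test/test_bundled_inputs.py` for the real usage):
```python
from typing import Dict, Optional

import torch
from torch.utils.bundled_inputs import InflatableArg

# `{}` in fmt_fn is replaced with the name of a helper function that gets
# defined on the module and is then called to inflate the stored value.
arg = InflatableArg(
    value={"a": torch.ones(1)},
    fmt_fn="""
    def {}(self, value: Optional[Dict[str, torch.Tensor]]):
        if value is None:
            raise RuntimeError("value must not be None")
        return value
    """,
)
```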
ghstack-source-id: 135158680
Test Plan:
To run `test_dict_args`
```
(base) [pavithran@devvm1803.vll0 /data/users/pavithran/fbsource/fbcode] buck test //caffe2/test:test_bundled_inputs -- test_dict_args
Action graph will be rebuilt because files have been added or removed.
Building: finished in 5.4 sec (100%) 12180/12180 jobs, 0/12180 updated
Total time: 5.8 sec
More details at https://www.internalfb.com/intern/buck/build/fafcf277-1095-4cba-978d-6022f0d391ad
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: 5ef9de71-c1b1-406b-a6c0-3321c2368b8d
Trace available for this run at /tmp/tpx-20210727-163946.454212/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/7036874465805934
✓ ListingSuccess: caffe2/test:test_bundled_inputs - main (11.365)
✓ Pass: caffe2/test:test_bundled_inputs - test_dict_args (test_bundled_inputs.TestBundledInputs) (12.307)
Summary
Pass: 1
ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/7036874465805934
```
To check the py code of TS module:
P433043973
Reviewed By: dreiss
Differential Revision: D29950421
fbshipit-source-id: c819ec5c94429b7fbf6c4beb0259457f169b08ec
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58344
Remove a helper function that's more trouble than it's worth.
ghstack-source-id: 129131889
Test Plan: ci and {P414950111}
Reviewed By: dhruvbird
Differential Revision: D28460607
fbshipit-source-id: 31bd6c1cc169785bb360e3113d258b612cad47fc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58408
It'd be nice to have a version of bundled inputs that didn't mutate the original class/object. So now there is!
ghstack-source-id: 129127316
Test Plan: The new unittests
Reviewed By: dhruvbird
Differential Revision: D28460231
fbshipit-source-id: f6f7a19e264bddfaa177304cbde40336060a237a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55181
There can be a dramatic model-size delta between saving a model after calling `generate_bundled_inputs_for_*` and saving it before, due to the caching of the inflated tensor.
Removing the cache increases latency when asking for the bundled inputs multiple times. I don't think this matters, but it might for something like benchmarking?
ghstack-source-id: 125746773
Test Plan: unit tests.
Reviewed By: dreiss
Differential Revision: D27519487
fbshipit-source-id: 6ba22bff9c4e3a8d86c04627b7cbf47ca2d141b9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47407
Previously, the code for bundling contiguous single-valued tensors (like
torch.zeros) wasn't working for quantized tensors because it was calling
the `torch.tensor` constructor without passing in the quantizer.
Instead, skip the constructor entirely, which makes this use case work
and also simplifies the code. (Originally, I forgot that
`arg.flatten()[0]` would return a tensor, not a scalar.)
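A rough sketch of the idea (an ordinary tensor is shown; the same slicing approach preserves the quantizer for quantized tensors):
```python
import torch

t = torch.zeros(2, 3, 4)

# torch.tensor(t.flatten()[0]) would go through the constructor with a
# 0-dim tensor (not a scalar) and, for quantized tensors, lose the
# quantizer. Slicing the original tensor sidesteps the constructor:
deflated = t.flatten()[0:1].clone()   # one element, same dtype/quantizer
inflated = deflated.expand(t.shape)   # view with the original shape
assert torch.equal(inflated, t)
```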
Test Plan: Bundled a quantized zero input and saw it run properly.
Reviewed By: dhruvbird
Differential Revision: D24752890
Pulled By: dreiss
fbshipit-source-id: 26bc4873a71dd44660cc0fcb74c227b754e31663
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52824
How was this not breaking? `_bundled_inputs_deflated` doesn't exist.
ghstack-source-id: 122491970
Test Plan: unit tests
Reviewed By: iseeyuan
Differential Revision: D26658098
fbshipit-source-id: 9ebf961b8764ba8779052c520dd46a8724be042a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52386
Remove the stale aliasing-inputs warning; error-check that inputs is not null and has at least one entry, and error-check that the list of inputs is a list of tuples. Passing a list of tensors (the most common mistake) can cause subtle bugs where the first dimension of each tensor is dropped. This can go unnoticed because it's often the batch dimension, which PyTorch occasionally silently re-adds if it's missing. A sketch of the mistake follows.
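A sketch of the guarded-against mistake (the module is a placeholder):
```python
import torch
from torch.utils.bundled_inputs import augment_model_with_bundled_inputs

model = torch.jit.script(torch.nn.Linear(4, 2))

# Correct: a list of *tuples*, one argument tuple per bundled call.
augment_model_with_bundled_inputs(model, inputs=[(torch.zeros(1, 4),)])

# The common mistake: a bare list of tensors. Each tensor would be
# unpacked as if it were the argument tuple, silently dropping its first
# (often batch) dimension -- this now errors out instead:
# augment_model_with_bundled_inputs(model, inputs=[torch.zeros(1, 4)])
```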
ghstack-source-id: 122363487
Test Plan:
Bundle something with an input, bundle something with {} for inputs
For the typing check, see the paste below
{P199554712}
Reviewed By: dhruvbird
Differential Revision: D26374867
fbshipit-source-id: cd176f34bad7a4da850b165827f8b2448cd9200d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51153
Enabled bundled inputs for all public functions that the user wants in a TorchScript module. An important caveat here is that you can't add bundled inputs to functions that were on the nn.Module but weren't caught in the scripting/tracing process that brought the model to TorchScript.
The old API is exactly the same: it still only works on forward, return types are the same, etc.
-----------New API-------------
Attachment of inputs:
***augment_model_with_bundled_inputs*** : works the same as before, but adds the option to specify an info dictionary.
***augment_many_model_functions_with_bundled_inputs*** : similar to the above, but takes a Dict[Callable, List[<inputs>]] (mapping function references to the bundled inputs for that function) so bundled inputs can be attached to many functions at once.
Consumption of inputs:
***get_all_bundled_inputs_for_<function_name>()*** : Works exactly like get_all_bundled_inputs does, but can be used for functions other than forward if you know ahead of time what they are called, and if they have bundled inputs.
***get_bundled_inputs_functions_and_info()*** : This is easily the hackiest function. Returns a dictionary mapping each function name to its metadata, including the name of its get_all_bundled_inputs_for_<function_name> accessor. A user can then execute the functions specified in the values with something like:
```python
all_info = model.get_bundled_inputs_functions_and_info()
for func_name in all_info.keys():
    input_func_name = all_info[func_name]['get_inputs_function_name'][0]
    func_to_run = getattr(loaded, input_func_name)
```
The reason it's done this way is that TorchScript doesn't support the 'Any' type yet, meaning I can't return the bundled inputs directly, because they could be different types for each function. TorchScript also doesn't support Callable, so I can't return a function reference directly either.
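A hedged sketch of the multi-function attachment (module and inputs are placeholders):
```python
import torch
from torch.utils import bundled_inputs

class TwoFunctionModule(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

    @torch.jit.export
    def infer(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

scripted = torch.jit.script(TwoFunctionModule())
bundled_inputs.augment_many_model_functions_with_bundled_inputs(
    scripted,
    inputs={
        scripted.forward: [(torch.zeros(1, 8),)],
        scripted.infer: [(torch.ones(1, 8),)],
    },
)
print(scripted.get_bundled_inputs_functions_and_info())
```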
ghstack-source-id: 120768561
Test Plan:
Got a model into TorchScript using the available methods that I'm aware of (tracing, scripting, old scripting method). Not really sure how tracing brings in functions that aren't in the forward call path, though. Attached bundled inputs and info to them successfully. Changes to TorchTest.py on all but the last version of this diff (where it will be/is removed for land) illustrate what I did to test.
Created and ran unit test
Reviewed By: dreiss
Differential Revision: D25931961
fbshipit-source-id: 36e87c9a585554a83a932e4dcf07d1f91a32f046
Summary:
Improves one annotation for `augment_model_with_bundled_inputs`
Also adds a comment not to work on Caffe2 type annotations; that's not worth the effort, and those ignores can stay as they are.
xref gh-16574
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49658
Reviewed By: heitorschueroff
Differential Revision: D25757721
Pulled By: ezyang
fbshipit-source-id: 44c396d8da9ef3f41b97f9c46a528f0431c4b463
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37055
Sometimes it's okay to bundle a large example input tensor with a model.
Add a utility function to make it easy for users to do that *on purpose*.
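Presumably the utility is along these lines (hedged: the summary doesn't name it, and `bundle_large_tensor` is an assumed entry point):
```python
import torch
from torch.utils.bundled_inputs import (
    augment_model_with_bundled_inputs,
    bundle_large_tensor,  # assumed name: opts a tensor out of the size check
)

model = torch.jit.script(torch.nn.Linear(512, 2))
big = torch.rand(64, 512)  # random data: too large to bundle by default
augment_model_with_bundled_inputs(
    model, inputs=[(bundle_large_tensor(big),)]
)
```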
Test Plan: Unit test.
Differential Revision: D22264239
Pulled By: dreiss
fbshipit-source-id: 05c6422be1aa926cca850f994ff1ae83c0399119
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36764
This allows bundling inputs that are large uniform buffers in
channels-last memory format.
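A hedged illustration of the use case:
```python
import torch
from torch.utils.bundled_inputs import augment_model_with_bundled_inputs

model = torch.jit.script(torch.nn.Conv2d(3, 8, kernel_size=3))
# A large uniform buffer in channels-last layout: deflated to a tiny stub
# and re-inflated with the original memory format on access.
x = torch.zeros(2, 3, 32, 32).contiguous(memory_format=torch.channels_last)
augment_model_with_bundled_inputs(model, inputs=[(x,)])
```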
Test Plan: Unit test.
Differential Revision: D21142660
Pulled By: dreiss
fbshipit-source-id: 31bbea6586d07c1fd0bcad4cb36ed2b8bb88a7e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35631
Bundling sample inputs with our models with a standardized interface
will make it possible to write benchmarking and code-coverage tools that
call all models in a uniform way. The intent is to make this a standard
for mobile models within Facebook. Putting it in torch/utils so tests
can run on GitHub and because it might be useful for others as well.
`augment_model_with_bundled_inputs` is the primary entry point. See
its docstring for usage information and the test for some example uses.
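A minimal usage sketch (the module is a placeholder; see the docstring for the full contract):
```python
import torch
from torch.utils.bundled_inputs import augment_model_with_bundled_inputs

model = torch.jit.script(torch.nn.Linear(4, 2))
augment_model_with_bundled_inputs(model, inputs=[(torch.zeros(1, 4),)])

# Benchmarking/coverage tools can now drive any bundled model uniformly:
for args in model.get_all_bundled_inputs():
    model(*args)
```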
One design question I had was how much power should be available for
automatic deflating and inflating of inputs. The current scheme gives
some automatic handling and a reasonable escape hatch
("_bundled_input_inflate_format") for top-level tensor arguments, but no
automatic support for (e.g.) tensors in tuples or long strings. For
more complex cases, we have the ultimate escape hatch of just defining
_generate_bundled_inputs in the model.
Another design question was whether to add the inputs to the model or
wrap the model in a wrapper module that had these methods and delegated
calls to `forward`. Because models can have other exposed methods and
attributes, the wrapper seemed too onerous.
Test Plan: Unit test.
Differential Revision: D20925013
Pulled By: dreiss
fbshipit-source-id: 4dbbb4cce41e5752133b4ecdb05e1c92bac6b2d5