# Fix typo errors across PyTorch codebase
This PR fixes various spelling errors throughout the PyTorch codebase to improve documentation quality and code readability.
## Changes Made
### Documentation Fixes
- Changed "seperate" to "separate" in multiple files:
- `setup.py`: Build system documentation
- `torch/_library/triton.py`: AOT compilation comments
- `torch/csrc/dynamo/compiled_autograd.h`: Node compilation documentation
- `torch/export/_unlift.py`: Pass population comments
- `torch/export/exported_program.py`: Decomposition table notes
### Code Comments and Error Messages
- Changed "occured" to "occurred" in:
- `test/mobile/test_lite_script_module.py`: Exception handling comments
- `torch/export/_draft_export.py`: Error message text
- `aten/src/ATen/native/cuda/linalg/BatchLinearAlgebra.cpp`: MAGMA bug comment
- `torch/csrc/utils/python_numbers.h`: Overflow handling comment
- `torch/csrc/jit/OVERVIEW.md`: Graph compilation documentation
- `torch/_dynamo/symbolic_convert.py`: Error explanation
### API Documentation
- Changed "fullfill" to "fulfill" in `torch/distributed/checkpoint/state_dict_loader.py`
- Changed "accross" to "across" in:
- `torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp`
- `torch/distributed/distributed_c10d.py`
## Motivation
These changes improve code readability and maintain consistent spelling throughout the codebase. No functional changes were made; this is purely a documentation and comment improvement PR.
## Test Plan
No testing required as these changes only affect comments and documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148262
Approved by: https://github.com/janeyx99
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Updates flake8 to v6.1.0 and fixes a few lints using sed and some ruff tooling.
- Replace `assert(0)` with `raise AssertionError()`
- Remove extraneous parentheses, i.e.
- `assert(a == b)` -> `assert a == b`
- `if(x > y or y < z):`->`if x > y or y < z:`
- And `return('...')` -> `return '...'`
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116591
Approved by: https://github.com/albanD, https://github.com/malfet
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74025
When users try to inspect IValues coming out of the Lite Interpreter, dynamic types are still attached, so torch::jit::toPyObject fails on these dynamic types while converting dictionary keys.
We should just let dynamic types pass through in this corner case, since they won't be used by anything later.
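A rough Python sketch of the conversion path involved (the module and dict are illustrative; the original failure was hit through bundled inputs, see the test plan below):
```
import torch
from typing import Dict
from torch.jit.mobile import _load_for_lite_interpreter

class M(torch.nn.Module):
    def forward(self) -> Dict[int, torch.Tensor]:
        return {1: torch.ones(1)}

scripted = torch.jit.script(M())
scripted._save_for_lite_interpreter("dyn_dict.ptl")
lite = _load_for_lite_interpreter("dyn_dict.ptl")
# Converting the returned dict back to a Python object goes through
# torch::jit::toPyObject; with this change the dict key's Dynamic type is
# let through instead of tripping the key-type check.
print(lite.run_method("forward"))
```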
ghstack-source-id: 151051826
Test Plan:
buck test //caffe2/test:mobile -- -r 'test_bundled_input_with_dynamic_type'
without patch:
```
BUILD SUCCEEDED
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: c6693277-2dad-4882-97c7-f69c58f67259
Trace available for this run at /tmp/tpx-20220310-000040.948069-c6693277-2dad-4882-97c7-f69c58f67259/trace.log
RemoteExecution session id: reSessionID-c6693277-2dad-4882-97c7-f69c58f67259-tpx
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/6473924544183693
✓ ListingSuccess: caffe2/test:mobile : 40 tests discovered (2.122)
✗ Fail: caffe2/test:mobile - test_bundled_input_with_dynamic_type (mobile.test_lite_script_module.TestLiteScriptQuantizedModule) (3.059)
Test output:
> RuntimeError: Cannot create dict for key type 'Dynamic<8>', only int, float, complex, Tensor, device and string keys are supported
File "/usr/local/fbcode/platform009/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
yield
File "/usr/local/fbcode/platform009/lib/python3.8/unittest/case.py", line 676, in run
self._callTestMethod(testMethod)
File "/usr/local/fbcode/platform009/lib/python3.8/unittest/case.py", line 633, in _callTestMethod
method()
File "/data/users/zhxchen17/fbsource/fbcode/buck-out/dbg/gen/caffe2/test/mobile#binary,link-tree/mobile/test_lite_script_module.py", line 558, in test_bundled_input_with_dynamic_type
i = mobile_module.run_method("get_all_bundled_inputs")
File "/data/users/zhxchen17/fbsource/fbcode/buck-out/dbg/gen/caffe2/test/mobile#binary,link-tree/torch/jit/mobile/__init__.py", line 69, in run_method
return self._c.run_method(method_name, input)
stdout:
stderr:
Summary
Fail: 1
✗ caffe2/test:mobile - test_bundled_input_with_dynamic_type (mobile.test_lite_script_module.TestLiteScriptQuantizedModule)
ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/6473924544183693
```
Reviewed By: cccclai
Differential Revision: D34780805
fbshipit-source-id: 88b139c5e91becc031e4b06e055a78a52a429c09
(cherry picked from commit 41abbacf3025cf8fc82516a3e1cefe8b4081a4b6)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67750
Add more information about why exporting a model fails.
Before, the error message was:
```
E1102 22:57:42.984015 3220949 ExceptionTracer.cpp:221] exception stack complete
terminate called after throwing an instance of 'c10::Error'
what(): __torch__ types other than torchbind (__torch__.torch.classes)are not supported in lite interpreter. Workaround: instead of using arbitrary class type (class Foo()), define a pytorch class (class Foo(torch.nn.Module)).
Exception raised from getFunctionTuple at caffe2/torch/csrc/jit/serialization/export_module.cpp:246 (most recent call first):
```
After:
```
E1102 22:57:42.984015 3220949 ExceptionTracer.cpp:221] exception stack complete
terminate called after throwing an instance of 'c10::Error'
what(): __torch__ types other than torchbind (__torch__.torch.classes)are not supported in lite interpreter. Workaround: instead of using arbitrary class type (class Foo()), define a pytorch class (class Foo(torch.nn.Module)). The problematic type is: __torch__.dper3.core.schema_utils.IdListFeature
Exception raised from getFunctionTuple at caffe2/torch/csrc/jit/serialization/export_module.cpp:246 (most recent call first):
```
ghstack-source-id: 143009294
Test Plan: CI
Reviewed By: larryliu0820
Differential Revision: D32129397
fbshipit-source-id: 0594a98a59f727dc284acd1c9bebcd7589ee7cbb
Summary:
Add type support for the namedtuple custom class. The namedtuple type is serialized to the following string format:
```
"qualified_named[
NamedTuple, [
[filed_name_1, field_type_1],
[filed_name_2, field_type_2]
]
]"
```
If it's nested, it will be
```
"__torch__.A[
NamedTuple, [
[field_name_a, __torch__.B [
NamedTuple, [
[field_name_b, __torch__.C [
NamedTuple, [
[field_name_c_1, Tensor],
[field_name_c_2, Tuple[Tensor, Tensor]],
]
]
]
]
]
]
]
]
"
```
The namedtuple type includes both `collections` and `typing`:
```
from typing import NamedTuple
from collections import namedtuple
```
This is a forward-incompatible change. However, this type was never supported or exported before, and we don't have a proper way to backport it. The optimal way to ship this change is probably:
1. Land the import change without the export change, so the runtime can read the new format but no new format is exported yet.
2. Land the export change, so the runtime can export the new format.
For the following example:
```
class Foo(NamedTuple):
    id: torch.Tensor

class Bar(torch.nn.Module):
    def __init__(self):
        super(Bar, self).__init__()
        self.foo = Foo(torch.tensor(1))

    def forward(self, a: torch.Tensor):
        self.foo = Foo(a)
        return self.foo
```
The new bytecode.pkl will be
```
(6,
('__torch__.mobile.test_lite_script_type.MyTestModule.forward',
(('instructions',
(('STOREN', 1, 2),
('DROPR', 1, 0),
('MOVE', 2, 0),
('LIST_CONSTRUCT', 0, 1),
('NAMED_TUPLE_CONSTRUCT', 1, 1),
('RET', 0, 0))),
('operators', ()),
('constants', ()),
('types',
('List[Tensor]',
'__torch__.mobile.test_lite_script_type.myNamedTuple[NamedTuple, [[a, '
'List[Tensor]]]]')),
('register_size', 2)),
(('arguments',
((('name', 'self'),
('type', '__torch__.mobile.test_lite_script_type.MyTestModule'),
('default_value', None)),
(('name', 'a'), ('type', 'Tensor'), ('default_value', None)))),
('returns',
((('name', ''),
('type', '__torch__.mobile.test_lite_script_type.myNamedTuple'),
('default_value', None)),)))))
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62612
ghstack-source-id: 141485500
Test Plan:
fb:
1. Add a simple unittest to test NamedTuple custom class
2. Use following cpp code (D30271153)
```
TEST(LiteTrainerTest, CustomOp) {
  std::string jit_model =
      "/home/chenlai/local/notebooks/ads_dper_fl_model_282250609.pt";
  Module jit_m = load(jit_model);
  jit_m.eval();
  torch::jit::Module module_freeze = freeze(jit_m);
  IValue tuple = c10::ivalue::Tuple::create(
      {1 * torch::ones({10, 1034}), 3 * torch::ones({10, 1034})});
  std::vector<IValue> inputs_1{tuple};
  auto jit_output = jit_m.forward(inputs_1);
  jit_output.dump();
  std::stringstream ss;
  jit_m._save_for_mobile(ss);
  jit_m._save_for_mobile("/home/chenlai/local/notebooks/tmp/tmp.ptl");
  torch::jit::mobile::Module mobile_m = _load_for_mobile(ss);
  auto mobile_output = mobile_m.forward(inputs_1);
  std::cout << "mobile output: " << std::endl;
  mobile_output.dump();
}
```
And output from both mobile and jit are
```
{prediction: ([ CPUFloatType{0} ], [ CPUFloatType{0} ])}
```
3. N1033894 with model inspection; also compared the results between jit and mobile with the dper model.
Reviewed By: iseeyuan
Differential Revision: D30004716
fbshipit-source-id: cfd30955e66a604af8f9633b1b608feddc13d7d7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63287
Recent changes in https://github.com/pytorch/pytorch/pull/62419 changed
the way module hierarchy is reported. Now it includes information about
function names as well.
Test Plan:
python test/mobile/test_lite_script_module.py
TestLiteScriptModule.test_save_mobile_module_with_debug_info_with_trace
Imported from OSS
Reviewed By: iseeyuan
Differential Revision: D30328512
fbshipit-source-id: ddd6b11b9ab01cc725f4568a35eff7a92f17204b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62634
Apply the same set of changes as in D27688352 (d728491fc1) to `module.cpp` as instructed by xcheng16.
Basically, this simplifies exception handling and allows propagation of the original message undisturbed to the caller, so that we can figure out the lineage of the exception in crash tasks such as t96812652.
ghstack-source-id: 134877012
Test Plan: Build/Sandcastle
Reviewed By: raziel
Differential Revision: D30038867
fbshipit-source-id: 8dfd415c510bcd0ab49814f4eb559ec6fc8f72e5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60476
# Context
Add tests for Lite modules that are quantized using fx API
Read these posts for details about why we need a test bench for quantized lite modules:
https://fb.workplace.com/groups/2322282031156145/permalink/4289792691071726/
https://github.com/pytorch/pytorch/pull/60226#discussion_r654615851
Moved common code to `caffe2/torch/testing/_internal/common_quantization.py`.
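A minimal sketch of what such a test bench exercises, using the current torch.ao.quantization FX APIs (the original tests used the older torch.quantization entry points; this also assumes a build with a default quantized engine available):
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
from torch.jit.mobile import _load_for_lite_interpreter

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 3, 3)

    def forward(self, x):
        return self.conv(x)

model = M().eval()
example_inputs = (torch.randn(1, 3, 8, 8),)
prepared = prepare_fx(model, get_default_qconfig_mapping(), example_inputs)
prepared(*example_inputs)                      # calibration pass
quantized = convert_fx(prepared)

# Script the quantized module and round-trip it through the lite interpreter.
scripted = torch.jit.script(quantized)
scripted._save_for_lite_interpreter("quantized_lite.ptl")
lite = _load_for_lite_interpreter("quantized_lite.ptl")
print(lite(*example_inputs).shape)
```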
ghstack-source-id: 133144292
Test Plan:
```
[~/fbsource/fbcode] buck test caffe2/test:fx_quantization_lite
Downloaded 0/2 artifacts, 0.00 bytes, 100.0% cache miss
Building: finished in 8.3 sec (100%) 11892/11892 jobs, 2 updated
Total time: 8.6 sec
More details at https://www.internalfb.com/intern/buck/build/ffb7d517-d85e-4c8f-9531-5e5d9ca1d34c
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: d79a5713-bd29-4bbf-ae76-33a413869a09
Trace available for this run at /tmp/tpx-20210630-105547.675980/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/3096224749578707
✓ ListingSuccess: caffe2/test:fx_quantization_lite - main (9.423)
✓ Pass: caffe2/test:fx_quantization_lite - test_embedding (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (10.630)
✓ Pass: caffe2/test:fx_quantization_lite - test_submodule (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (12.464)
✓ Pass: caffe2/test:fx_quantization_lite - test_conv2d (mobile.test_quantize_fx_lite_script_module.TestFuseFx) (12.728)
Summary
Pass: 3
ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/3096224749578707
```
Reviewed By: iseeyuan
Differential Revision: D29306402
fbshipit-source-id: aa481e0f696b7e9b04b9dcc6516e8a390f7dc1be
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60226
# Context
Read this post for details about why we need a test bench for quantized lite modules:
https://fb.workplace.com/groups/2322282031156145/permalink/4289792691071726/
# This Diff
Adds test cases for Quantized Lite modules
ghstack-source-id: 131859101
Test Plan:
```
[ ~/fbsource/fbcode] buck test caffe2/test:mobile -- mobile.test_lite_script_module.TestLiteScriptQuantizedModule
Unable to connect to Buck daemon, restarting it...
Running with tpx session id: 44cf0b2f-0905-444a-95df-4a2eec774163
Trace available for this run at /tmp/tpx-20210618-093849.343917/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/7036874461151326
✓ ListingSuccess: caffe2/test:mobile - main (16.736)
✓ Pass: caffe2/test:mobile - test_two_layer (mobile.test_lite_script_module.TestLiteScriptQuantizedModule) (14.836)
✓ Pass: caffe2/test:mobile - test_annotated_nested (mobile.test_lite_script_module.TestLiteScriptQuantizedModule) (15.073)
✓ Pass: caffe2/test:mobile - test_quantization_example (mobile.test_lite_script_module.TestLiteScriptQuantizedModule) (16.286)
✓ Pass: caffe2/test:mobile - test_single_layer (mobile.test_lite_script_module.TestLiteScriptQuantizedModule) (18.360)
Summary
Pass: 4
ListingSuccess: 1
```
https://www.internalfb.com/intern/testinfra/testconsole/testrun/7036874461151326/
Reviewed By: iseeyuan
Differential Revision: D29212232
fbshipit-source-id: 8d0b61b3f414e31720f1e3ce681ec8fa716555c1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55252
Previously, bytecode serialization saved debug handles only for OPs, not for all
instructions. This PR adds debug handles for all instructions.
Test Plan:
python test/mobile/test_lite_script_module.py TestLiteScriptModule
Imported from OSS
Reviewed By: dreiss
Differential Revision: D27542502
fbshipit-source-id: cff75118c721ce9f0c2f60d2c9471481f05264ca
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55062
This diff introduces the following changes:
1. InlinedCallStack pickler/serializer is introduced. It is serialized
as a tuple of {module_instance_info, source range tag, callee:InlinedCallStack}.
Module instance info is serialized as a tuple of {class_type_name,
instance_name}.
Note that the callee of the serialized inlined callstack points to the
tuple of the already-serialized callstack. This means the first callstack
pointer to be serialized serializes the entire path of the tree, while
some callee nodes may be shared with callstack pointers that are
serialized subsequently. The pickler supports memoization of pickled
objects: if a tuple has already been serialized, its object id is emitted
instead of serializing the object again. Thus we still serialize the tree
and not every path from the root separately. Furthermore,
InlinedCallStackSerializer also uses a cache to look up the pointer and
return the serialized IValue.
Furthermore, note that we must also serialize the source range of the
InlinedCallStack. To do this, the serializer requires a
source-range-tag-to-source-range map. This was done in the previous
diff, where as part of source range serialization we also generate
unique tags. These are the tags that are serialized in InlinedCallStack.
Thus during deserialization we would have to deserialize source range
before deserializing InlinedCallStacks.
2. Furthermore, each serialized InlinedCallStack is serialized with a
unique debug_handle and source range tag.
BackendDebugHandleManager manages generation of
unique debug handles and saves the map of
debug-handles-to-{source_range_tag, inlined-callstack-ptr}.
This map is then serialized as callstack_debug_map.pkl. Note that
an inlined callstack is not sufficient to get all the source information,
since it contains source information only about the nodes which are inlined.
The top-of-the-stack (or bottom) node, which is the actual op node, is
not part of the inlined callstack pointer and thus the source range of
this node is serialized separately using source_range_tag. This is
similar to how JIT creates callstack in
torch/csrc/jit/runtime/interpreter.cpp
Unique debug handles facilitate exception throwing or profiling using
just the debug handle, without any further qualification such as which
function or module the inlined callstack belongs to.
Furthermore, this diff refactors the old mobile code for tracking
module hierarchy information per op. Now bytecode serialization
serializes debug handles corresponding to ops/nodes in the graph, and
callstack_debug_map.pkl helps generate (a conceptual sketch follows the list):
1. Entire callstack and
2. Module hierarchy information.
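A conceptual Python model of the memoized callstack serialization described above (illustrative only, not the real pickler):
```
# Each entry is a tuple of (module_instance_info, source_range_tag, callee),
# where callee refers to an already-serialized entry, so shared suffixes of
# the tree are stored only once (mimicking pickle memoization).
memo = {}          # id(entry) -> index into `serialized`
serialized = []

def serialize_callstack(entry):
    # entry = (module_instance_info, source_range_tag, callee_entry_or_None)
    key = id(entry)
    if key in memo:                      # already pickled: reuse the object id
        return memo[key]
    module_info, tag, callee = entry
    callee_ref = serialize_callstack(callee) if callee is not None else None
    serialized.append((module_info, tag, callee_ref))
    memo[key] = len(serialized) - 1
    return memo[key]

# Two leaf callstacks sharing the same parent frame store that frame once.
root = (("__torch__.MyModule", "self"), 0, None)
leaf_a = (("__torch__.Sub", "sub_a"), 1, root)
leaf_b = (("__torch__.Sub", "sub_b"), 2, root)
serialize_callstack(leaf_a)
serialize_callstack(leaf_b)
print(serialized)   # the root entry appears once, referenced by both leaves
```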
Test Plan:
python test/mobile/test_lite_script_module.py TestLiteScriptModule
./build/bin/test_jit --gtest_filter=*ModuleInfo
Imported from OSS
Reviewed By: raziel
Differential Revision: D27468709
fbshipit-source-id: 53e2413e7703ead01c77718b7c333c7c6ff50a23
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54284
In order to bring mobile deployment, via the lite interpreter, to feature
parity with JIT with respect to model-level debug information, we must
make model-level debug information available to the mobile runtime.
At the moment, model-level debug information is stored in SourceRange,
which associates nodes of the graph with where they come from in the
original Python source code.
This information is serialized as part of debug_pkl and deserialized
when JIT loads the model and reads the model code.
In the lite interpreter, we do not have access to all the functionality
of JIT, so we cannot load the model in the same way as JIT, by reading
code, constructing the module hierarchy and the graphs corresponding to
module methods, etc. Instead, in the lite interpreter, only the bytecode
corresponding to the compiled graph (Code) is saved.
Thus in order to annotate OPs in the bytecode with equivalent
SourceRange information we do the following:
1. During model serialization, we create a unique tag for each source
range of the model.
2. Create a map of <SourceRange, tag>
3. During debug_pkl serialization we save tag along with SourceRange, on
top of byte offset.
4. During bytecode generation, the methods of the top module are
lowered. During this process methods are inlined. In the inlined graph,
when the node of a graph is lowered to bytecode, we query node's source
range and look it up against the map.
5. The resulting source range tag is serialized in module_debug_info.
6. During model deserialization, we read all the debug_pkl records in
the archive and create a map of <tag, SourceRange>.
7. This map can be used to find source code information.
During mobile runtime:
1. We read all the debug_pkl records and create a <tag=debug_handle,
SourceRange> map.
1.1 This map, MobileDebugInfo, is a member of the mobile Module.
2. The interpreter catches appropriate exceptions, sets the thread-local
debug handle, and rethrows the exception.
3. In Function's run method we catch the exception and query the current
debug handle where the exception happened.
4. We query MobileDebugInfo with the debug handle to retrieve the source
range and augment the error with source range info.
This information is still incomplete as it does not contain entire
callstack.
In the following diffs we will serialize InlinedCallStack directly.
Note that compilation is gated by the SYMBOLICATE_MOBILE_DEBUG_HANDLE macro,
so that mobile builds can avoid building MobileDebugInfo, source range,
and the source range pickler/unpickler. Later we will add a path where,
if built without debug support, the stack trace will contain only debug
handles; they can be symbolicated later.
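A conceptual Python model of the debug-handle lookup at runtime (names and messages are made up; the real implementation lives in the C++ mobile runtime):
```
# debug_pkl yields a debug-handle -> source-range map; the runtime augments
# errors with the source range for the handle where the exception occurred.
source_ranges = {
    0: "model.py:12: x = self.linear(x)",
    1: "model.py:13: return torch.relu(x)",
}

def run_op(debug_handle: int):
    raise RuntimeError("expected Tensor, got int")   # simulated interpreter failure

def run_with_symbolication(debug_handle: int):
    try:
        run_op(debug_handle)
    except RuntimeError as e:
        # Mirrors MobileDebugInfo: query the map with the current debug handle
        # and attach the source range to the error message.
        raise RuntimeError(f"{e}\n  at {source_ranges[debug_handle]}") from e

try:
    run_with_symbolication(1)
except RuntimeError as e:
    print(e)
```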
Test Plan:
Ported a bunch of source range tests from test_jit.py. Added one more test
in test_lite_interpreter.py.
Imported from OSS
Reviewed By: raziel
Differential Revision: D27174722
fbshipit-source-id: a7b7c6088ce16dec37e823c7fefa4f0b61047e12
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html
This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).
This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).
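A minimal before/after illustration of the rewrite, using the standard library rather than a real PyTorch module:
```
# Before (wildcard import, flagged by flake8 rule F403):
#   from math import *
#   print(sqrt(4.0))   # unclear where sqrt comes from; names may collide

# After (explicit list of imported items):
from math import sqrt

print(sqrt(4.0))
```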
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838
Test Plan: CI. You can also run `flake8` locally.
Reviewed By: jbschlosser
Differential Revision: D27724232
Pulled By: samestep
fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51432
ghstack-source-id: 120976584
Torchbind is a convenient way to expose a custom class to both Python and TorchScript. CREATE_OBJECT is used to create an object of a custom class.
CREATE_OBJECT was not supported by the lite interpreter. The major reason was that, for custom classes defined directly in Python, there is no language parser in the lite interpreter. That is still the case. However, for torchbind classes that are defined in C++, a Python/TorchScript parser is not needed.
This diff is to support the case of torchbind custom classes.
1. The class type can be resolved at import level.
2. If the class is not a supported torchbind class, an error message is provided at the export stage and a workaround is suggested.
3. Unit tests. C++: ```LiteInterpreterTest::BuiltinClass``` is added as an end-to-end test on supported class. Python: ```test_unsupported_createobject``` is changed to ```test_unsupported_classtype``` to test unsupported classes.
Test Plan: CI
Reviewed By: raziel
Differential Revision: D26168913
fbshipit-source-id: 74e8b6a12682ad8e9c39afdfd2b605c5f8e65427
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48863
Support default arguments when invoking a module via PyTorch Lite (`mobile::Module`).
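A small sketch of what this enables from Python (filenames are illustrative):
```
import torch
from torch.jit.mobile import _load_for_lite_interpreter

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor, scale: float = 2.0):
        return x * scale

m = torch.jit.script(M())
m._save_for_lite_interpreter("default_arg.ptl")
lite = _load_for_lite_interpreter("default_arg.ptl")
print(lite(torch.ones(2)))        # omitted arg: default scale=2.0 is filled in
print(lite(torch.ones(2), 3.0))   # explicit value still works
```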
Test Plan:
buck test mode/dbg //caffe2/test/cpp/jit:jit -- LiteInterpreterTest.MethodInvocation
buck test mode/dbg caffe2/test:mobile -- test_method_calls_with_optional_arg
Reviewed By: iseeyuan
Differential Revision: D25896212
fbshipit-source-id: 6d7e7fd5f3244a88bd44889024d81ad2e678ffa5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51312
Follow-up to D24690094 (4a870f6518), exposing the API in Python. Created a matching unit test.
ghstack-source-id: 120611452
Test Plan: Ran unit test
Reviewed By: dhruvbird
Differential Revision: D26112765
fbshipit-source-id: ffe3bb97de0a4f08b31719b4b47dcebd7d2fd42a
Summary:
Related to https://github.com/pytorch/pytorch/issues/50483.
Everything except the ONNX, detectron, and release-notes tests is moved to use common_utils.run_tests() so that CI reports XML correctly.
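A minimal sketch of the pattern being standardized (the test itself is illustrative):
```
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class TestExample(TestCase):
    def test_add(self):
        self.assertEqual(torch.tensor(1) + 1, torch.tensor(2))

if __name__ == "__main__":
    run_tests()   # emits the XML test report that CI expects
```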
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50923
Reviewed By: samestep
Differential Revision: D26027621
Pulled By: walterddr
fbshipit-source-id: b04c03f10d1fe96181b720c4c3868e86e4c6281a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48863
Support default arguments when invoking a module via PyTorch Lite (`mobile::Module`).
Test Plan:
buck test mode/dbg //caffe2/test/cpp/jit:jit -- LiteInterpreterTest.MethodInvocation
buck test mode/dbg caffe2/test:mobile -- test_method_calls_with_optional_arg
Reviewed By: raziel, iseeyuan
Differential Revision: D25152559
fbshipit-source-id: bbf52f1fbdbfbc6f8fa8b65ab524b1cd4648f9c0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46543
Add error messages and a workaround for the RET failure of containers with a torch class type (a rough repro sketch follows the condition list).
- Error case conditions:
1) ins.op == RET
2) input_type == TypeKind::ListType or TypeKind::DictType
3) Any(input_type's element type) == TypeKind::ClassType
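A rough repro of the error case (the class and module are illustrative; on current builds the export may succeed, since support has improved since this diff):
```
import torch
from typing import List

@torch.jit.script
class Point:   # a TorchScript class; its type kind is ClassType
    def __init__(self, x: int):
        self.x = x

class M(torch.nn.Module):
    def forward(self) -> List[Point]:
        return [Point(1), Point(2)]

m = torch.jit.script(M())
try:
    m._save_for_lite_interpreter("list_of_class.ptl")
    print("exported")
except Exception as e:
    print(e)   # with this change, a clear error message and workaround are printed
```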
ghstack-source-id: 114618426
Test Plan:
buck test mode/dev caffe2/test:mobile -- 'test'
Summary
Pass: 13
ListingSuccess: 1
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/7318349417617713
Reviewed By: iseeyuan
Differential Revision: D24388483
fbshipit-source-id: 7d30f6684a999054d0163e691422797cb818bb6a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46347
Added error messages & workarounds for when a named tuple is returned from a function of a class in PyTorch Mobile.
To identify the error cases (returning a NamedTuple type), I used the following conditions:
1) ins.op == RET (for returning)
2) type->kind() == TypeKind::TupleType (for pruning non-tuple types)
3) type->cast<TupleType>().name() (for pruning Tuple type)
- I could have used the type's str (str() or repr_str()) directly, but instead I check whether it has the "name" attribute. Please comment on this.
[Information of Tuple and NamedTuple types]
1. Tuple
type->str(): (int, int)
type->repr_str(): Tuple[int, int]
type->kind(): TypeKind::TupleType # different with other types
type()->cast<NamedType>(): True
type()->cast<NamedType>()>name(): False # different with NamedTuple
2. NamedTuple
type->str(): __torch__.myNamedTuple
type->repr_str(): __torch__.myNamedTuple
type->kind(): TypeKind::TupleType # different with other types
type()->cast<NamedType>(): True
type->cast<TupleType>().name() = True # different with Tuple
(In the next diff, I will handle the other error cases: 1) returning List<module class> or Dict<module class>, and 2) accessing a Module class's member functions.)
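A rough repro of the NamedTuple return case (illustrative; NamedTuple returns gained lite-interpreter type support in a later diff in this series, so current builds may export this fine):
```
import torch
from typing import NamedTuple

class Out(NamedTuple):
    value: torch.Tensor

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> Out:
        return Out(x + 1)

m = torch.jit.script(M())
try:
    m._save_for_lite_interpreter("named_tuple_return.ptl")
    print("exported")
except Exception as e:
    print(e)   # with this change, the error explains the limitation and suggests a workaround
```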
ghstack-source-id: 114361762
Test Plan:
[Added test results]
buck test mode/dev caffe2/test:mobile -- 'test_unsupported_return'
Summary
Pass: 2
ListingSuccess: 1
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/7036874440497926
[Whole test results]
buck test mode/dev caffe2/test:mobile -- 'test'
Summary
Pass: 11
ListingSuccess: 1
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/4503599664074084
Reviewed By: iseeyuan
Differential Revision: D24291962
fbshipit-source-id: a1a9e24e41a5f1e067738f59f1eae34d07cba31a
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42133
Test Plan:
We save a module with module debugging information as follows.
```
import torch
m = torch.jit.load('./detect.pt')
# Save module without debug info
m._save_for_lite_interpreter('./detect.bc')
# Save module with debug info
m._save_for_lite_interpreter('./detect.bc', _save_debug_info_in_bytecode=True)
```
Size of the file without module debugging information: 4.508 MB
Size of the file with module debugging information: 4.512 MB
Reviewed By: kimishpatel
Differential Revision: D22803740
Pulled By: taivu1998
fbshipit-source-id: c82ea62498fde36a1cfc5b073e2cea510d3b7edb
Summary: Add 'find_method' as a 'LiteScriptModule' Python binding method, so that we can use it to check for the existence of methods, e.g. "get_all_bundled_inputs".
Reviewed By: linbinyu, houseroad
Differential Revision: D22029002
fbshipit-source-id: 9acf76880fc989e825dc3a9186dab6928caee75e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39181
Create a Python binding class torch._C.LiteScriptModule for mobile::Module; a Python class called LiteScriptModule is created which wraps torch._C.LiteScriptModule.
The Python class LiteScriptModule contains preliminary functions including forward, run_method and __call__.
Create a Python API "load_for_lite_interpreter" under torch.jit.mobile which takes a pre-saved mobile module in a file-like object as input and returns a Python LiteScriptModule.
Add a Python binding method "_save_to_buffer_for_mobile" under ScriptModule, and a Python method "_save_to_buffer_for_lite_interpreter" under RecursiveScriptModule, which saves the mobile module into a buffer instead of a file.
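A short usage sketch of the Python binding described here and of find_method from the commit above (filenames are illustrative):
```
import torch
from torch.jit.mobile import _load_for_lite_interpreter

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor):
        return x + 1

scripted = torch.jit.script(M())
scripted._save_for_lite_interpreter("m.ptl")   # save a mobile module to a file
lite = _load_for_lite_interpreter("m.ptl")     # returns a LiteScriptModule
print(lite.find_method("forward"))             # check that a method exists
print(lite.forward(torch.zeros(2)))
print(lite.run_method("forward", torch.zeros(2)))
print(lite(torch.zeros(2)))                    # __call__ also works
```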
ghstack-source-id: 105215736
Test Plan: buck test caffe2/test:mobile
Differential Revision: D21757474
fbshipit-source-id: 758b87497d65c4686459a567d41887c7a577aa4c