Commit Graph

53 Commits

Kimish Patel
bb3c6699a5 [Pytorch Mobile DebugInfo Serialization] Save debug handles for all instructions. (#55252)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55252

Earlier, bytecode serialization saved debug handles only for OP instructions, not for all
instructions. This PR adds debug handles for all instructions.

Test Plan:
python test/mobile/test_lite_script_module.py TestLiteScriptModule

Imported from OSS

Reviewed By: dreiss

Differential Revision: D27542502

fbshipit-source-id: cff75118c721ce9f0c2f60d2c9471481f05264ca
2021-05-04 09:21:13 -07:00
Kimish Patel
e0fc473e47 [Pytorch, Mobile] Serialize inlined callstack pointer with debug handle. (#55062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55062

This diff introduces the following changes:
1. An InlinedCallStack pickler/serializer is introduced. Each InlinedCallStack
is serialized as a tuple of {module_instance_info, source_range_tag,
callee: InlinedCallStack}. Module instance info is serialized as a tuple of
{class_type_name, instance_name}.
Note that the callee of the serialized inlined callstack points to the tuple
of the already-serialized callstack. This means the first callstack ptr to be
serialized serializes the entire path of the tree, and some callee
nodes may be shared with callstack pointers that are serialized
subsequently. The pickler supports memoization of pickled objects: if
a tuple has already been serialized, its memo id is emitted instead of
serializing the object again. Thus we still serialize the tree, not every
path from the root separately. Furthermore, InlinedCallStackSerializer
also uses a cache to look up a pointer and return its serialized IValue.
Note that we must also serialize the source range of each
InlinedCallStack. To do this, the serializer requires a
source-range-tag-to-source-range map. This was done in the previous
diff, where as part of source range serialization we also generate
unique tags. These are the tags that are serialized in InlinedCallStack.
Thus during deserialization we would have to deserialize source range
before deserializing InlinedCallStacks.
2. Furthermore, each serialized InlinedCallStack is serialized with a
unique debug_handle and source range tag.
BackendDebugHandleManager manages generation of
unique debug handles and saves the map of
debug-handles-to-{source_range_tag, inlined-callstack-ptr}.
This map is then serialized as callstack_debug_map.pkl. Note that the
inlined callstack alone is not sufficient to recover all the source
information, since it only contains source information about the nodes
that were inlined. The top-of-the-stack (or bottom) node, the actual op
node, is not part of the inlined callstack pointer, so the source range of
this node is serialized separately using source_range_tag. This is
similar to how JIT creates callstack in
torch/csrc/jit/runtime/interpreter.cpp

Unique debug handles facilitate exception reporting and profiling using
just the debug handle, without any further qualification such as which
function or module the inlined callstack belongs to.

Furthermore, this diff refactors the old mobile code for tracking
module hierarchy information per op. Now bytecode serialization
serializes the debug handles corresponding to the ops/nodes in the graph,
and callstack_debug_map.pkl is used to generate:
1. Entire callstack and
2. Module hierarchy information.
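
The memoized tree serialization described above can be sketched with Python's `pickle`, whose memo table plays the same role. The node layout and names here are illustrative, not the actual PyTorch serialization format:

```python
import pickle

# Hypothetical node shape mirroring the tuple described above:
# (module_instance_info, source_range_tag, callee). Names are invented.
leaf = ("top.submodule", 0, None)   # callee node shared by two call paths
path_a = ("top.a", 1, leaf)
path_b = ("top.b", 2, leaf)

blob = pickle.dumps([path_a, path_b])
a, b = pickle.loads(blob)
# The pickler memoized the shared callee: it was written once, and both
# deserialized paths resolve to the same restored object.
print(a[2] is b[2])  # True
```

Because the shared callee is emitted once and referenced by memo id thereafter, the whole tree is serialized rather than every root-to-leaf path separately.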

Test Plan:
python test/mobile/test_lite_script_module.py TestLiteScriptModule
./build/bin/test_jit --gtest_filter=*ModuleInfo

Imported from OSS

Reviewed By: raziel

Differential Revision: D27468709

fbshipit-source-id: 53e2413e7703ead01c77718b7c333c7c6ff50a23
2021-05-04 09:21:12 -07:00
Chen Lai
ac71432c54 [PyTorch][Edge] Add api to get bytecode version from runtime (#56948)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56948

Add api to get runtime bytecode version

## Test
Both `caffe2/test/cpp/jit/test_lite_interpreter.cpp` and `caffe2/test/mobile/test_bytecode.py` pass
ghstack-source-id: 127939889

Test Plan: Both `caffe2/test/cpp/jit/test_lite_interpreter.cpp` and `caffe2/test/mobile/test_bytecode.py` pass

Reviewed By: raziel, iseeyuan

Differential Revision: D27987811

fbshipit-source-id: 35ed9bd626aecffc226f6dacfa046e6cdabfed51
2021-05-03 11:26:38 -07:00
Nikita Shulga
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
Martin Yuan
524cb0a514 [PyTorch Mobile] Dedup method names in bytecode serialization (#53677)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53677

Bytecode is serialized per method. When there are multiple instances of a class, the methods of that class may be serialized multiple times.

To reduce the duplication, we cache the qualified name of the methods, so that one method is serialized only once.
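
A minimal sketch of the dedup-by-qualified-name cache (the data shapes and bytecode placeholder are hypothetical, not the real serializer):

```python
# Hypothetical serializer: methods are keyed by qualified name so a method
# shared by multiple instances of one class is emitted only once.
def serialize_methods(instances):
    serialized = {}  # qualified name -> bytecode placeholder
    for inst in instances:
        for method in inst["methods"]:
            qualified = f"{inst['class']}.{method}"
            if qualified not in serialized:
                serialized[qualified] = f"<bytecode for {qualified}>"
    return serialized

# Two instances of class M: 'forward' is serialized once, not twice.
instances = [{"class": "M", "methods": ["forward"]},
             {"class": "M", "methods": ["forward"]}]
print(len(serialize_methods(instances)))  # 1
```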

Test Plan: existing unittests and CI

Reviewed By: dhruvbird, raziel

Differential Revision: D26933945

Pulled By: iseeyuan

fbshipit-source-id: 8a9833949fa18f7103a5a0be19e2028040dc7717
2021-03-16 15:24:47 -07:00
Martin Yuan
b2520ab3dc Add a demo backend with compiler (#52603)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52603

This PR introduces a backend with minimal compilation capability into the to_<backend> flow. The goals are:

- Demonstrate the end-to-end flow of adding a backend: compilation -> runtime
- Show how backend compilation errors are surfaced to the user, together with the original model's source code information. (C++ only in this PR. Python APIs will be demonstrated in a following PR.)

Changes:

- Compilation

1. A backend with minimum compilation features, "backend_with_compiler_demo" is added.
2. The compilation happens AOT in the ```pre_process``` function registered to this backend.
3. Compiled results are stored in a string blob for each method. They are serialized to the lowered module with ```__get_state__``` function.
4. An error message with the model's source code is thrown for features not handled by the backend compiler.

- Runtime

1. The compiled blob is loaded in ```__set_state__``` method.
2. The ```compile``` function of the backend passes through the AOT-compiled blob. (TODO: parsing the blob into a format that the backend can understand can happen here.)
3. The ```execute``` function of the backend executes the specified method (handle).
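
The preprocess/compile/execute split above can be sketched as follows. This is a toy stand-in (all names and behaviors are illustrative, not the real to_<backend> API):

```python
# Toy backend-with-compiler: preprocess compiles AOT to per-method blobs,
# compile passes them through at runtime, execute runs a method handle.
class DemoBackend:
    def preprocess(self, module_methods):
        # AOT step: "compile" each method into a string blob.
        return {m: f"blob:{m}" for m in module_methods}

    def compile(self, blobs):
        # Runtime step: pass-through (parsing the blob could happen here).
        return blobs

    def execute(self, handles, method, *args):
        # Run the specified method handle; summing stands in for real work.
        assert method in handles
        return sum(args)

backend = DemoBackend()
handles = backend.compile(backend.preprocess(["forward"]))
print(backend.execute(handles, "forward", 2, 3))  # 5
```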

Test Plan:
- ```BackendTest.TestCompiler```: the C++ end-to-end demonstration on a supported model. After compilation and running, the lowered model produces the same result as the original torchscript model.
- ```BackendTest.TestCompilerNotSupport```: Demonstrates the error message from AOT compilation for a feature not supported in the input module. The error message looks like:

```
"The node of aten::mul is not supported in this compiler. Source code:   File "<string>", line 3

    def forward(self, x, h):
        return x * h
               ~~~~~ <--- HERE
```

Reviewed By: raziel

Differential Revision: D26593968

Pulled By: iseeyuan

fbshipit-source-id: 8f264f60a0470e9f07e36fdeccbf17da6c1d7cd7
2021-02-26 11:53:34 -08:00
Dhruv Matani
755c60bffc [PyTorch Mobile] Allow loading of all extra files using the extra_file argument (#52635)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52635

Currently, the method `_load_for_mobile()` accepts an extra-files map named `extra_files`, which serves as an in-out parameter: the caller fills in the keys of this map with the names of the files under the `extra/` folder that they wish to extract, and the method fills in the values with the contents of those files.

In a specific case we have encountered, it is desirable to extract all the extra files so that they can be forwarded in an opaque manner into a `save_for_mobile()` call with the same set of extra files as during load.

This change adds a method `_get_all_archive_file_names()` which returns the names of all files in the `.ptl` archive. The caller can then extract the ones within the `extra/` directory and pass them in to the `extra_files` map argument.
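
The intent can be sketched with a toy zip archive; the file names below are illustrative, not the real .ptl layout:

```python
import io
import zipfile

# Toy archive standing in for a .ptl file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("model/bytecode.pkl", b"...")
    z.writestr("model/extra/meta.json", b'{"v": 1}')

with zipfile.ZipFile(buf) as z:
    names = z.namelist()  # analogue of _get_all_archive_file_names()
    # Caller keeps only the extra/ entries and reads their contents.
    extra = {n: z.read(n) for n in names if "/extra/" in n}

print(sorted(extra))  # ['model/extra/meta.json']
```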

ghstack-source-id: 122356928

Test Plan: Added additional test + `buck test //xplat/caffe2:test_lite_interpreter`

Reviewed By: iseeyuan

Differential Revision: D26590027

fbshipit-source-id: 4dc30997929e132f319c32cb9435d8a40fe0db5e
2021-02-23 21:57:13 -08:00
Martin Yuan
23c50a4a50 [PyTorch Mobile] Support torchbind custom classes in lite interpreter (#51432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51432

ghstack-source-id: 120976584

torchbind is a convenient way to expose a custom C++ class to both Python and TorchScript. CREATE_OBJECT is the instruction used to create an object of a custom class.

CREATE_OBJECT was not supported by the lite interpreter. The main reason is that for custom classes defined directly in Python, the lite interpreter has no language parser; that is still the case. However, for torchbind classes defined in C++, a Python/TorchScript parser is not needed.

This diff is to support the case of torchbind custom classes.
1. The class type can be resolved at import level.
2. If the class is not the supported torchbind class, an error message is provided at export stage. Workaround is also suggested.
3. Unit tests. C++: ```LiteInterpreterTest::BuiltinClass``` is added as an end-to-end test on supported class. Python: ```test_unsupported_createobject``` is changed to ```test_unsupported_classtype``` to test unsupported classes.

Test Plan: CI

Reviewed By: raziel

Differential Revision: D26168913

fbshipit-source-id: 74e8b6a12682ad8e9c39afdfd2b605c5f8e65427
2021-02-03 21:57:19 -08:00
Frank Seide
87ad77eb4e T66557700 Support default argument values of a method (#48863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48863

Support default arguments when invoking a module via PyTorch Lite (`mobile::Module`).

Test Plan:
buck test mode/dbg //caffe2/test/cpp/jit:jit -- LiteInterpreterTest.MethodInvocation

buck test mode/dbg caffe2/test:mobile -- test_method_calls_with_optional_arg

Reviewed By: iseeyuan

Differential Revision: D25896212

fbshipit-source-id: 6d7e7fd5f3244a88bd44889024d81ad2e678ffa5
2021-02-01 18:35:13 -08:00
Dhruv Matani
4a870f6518 [PyTorch Mobile] Export Operator List from Mobile CompilationUnit instead of from TorchScript Model (#49385)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49385

Currently, the API to export operator lists accepts a `torch::jit::Module` object and spits out an operator list. The operator list is practically used only for mobile. This is not ideal because the set of root operators may change by the time the model is subsequently optimized and exported for mobile.

What we need to do instead is glean the list of operators from the mobile model itself (`bytecode.pkl` specifically) and expose that.

Also updated the logic in `converter`.

### Before this change:
1. Get operator List from Torch Script Model
2. Convert to bytecode mobile model

### After this change:
1. Convert to bytecode mobile model
2. Use this converted mobile model to get the list of operators for each method on the model
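
A sketch of gleaning per-method operator lists from bytecode-like data; the instruction encoding below is invented for illustration, not the real bytecode.pkl format:

```python
# Invented encoding: ("OP", name) entries are operator calls; anything
# else stands in for non-operator instructions.
def root_operators(bytecode):
    # Distinct operators referenced by each method's instructions.
    return {method: sorted({name for kind, name in instrs if kind == "OP"})
            for method, instrs in bytecode.items()}

bc = {"forward": [("OP", "aten::conv2d"), ("OP", "aten::relu"),
                  ("MOVE", None), ("OP", "aten::relu")]}
print(root_operators(bc))  # {'forward': ['aten::conv2d', 'aten::relu']}
```

Reading the list from the converted mobile model guarantees it matches what the lite interpreter will actually execute.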

ghstack-source-id: 118796752

Test Plan:
Added a unit test in `test_lite_interpreter.cpp` to ensure that all model referenced operators show up in the exported operator list. Also make `test_lite_interpreter.cpp` runnable from `xplat/caffe2/BUCK` since this is where the production code will be built from.

Verified that the list of operators produced before and after this change for an example model (segmentation) are the same.

{P147863234}

Also verified that the operator lists for BI-Xray model is different (we have been having problems with missing operators for this one): {P154903132}

Reviewed By: iseeyuan

Differential Revision: D24690094

fbshipit-source-id: 0426a6ef90456a811010cfe337c415882ae2deff
2020-12-18 11:17:57 -08:00
Martin Yuan
2b61e4d84c Revert D25152559: T66557700 Support default argument values of a method
Test Plan: revert-hammer

Differential Revision:
D25152559 (6bde0ca6d3)

Original commit changeset: bbf52f1fbdbf

fbshipit-source-id: 592fdb3078b1ac86cd394adc6c1bfd6b10d829e1
2020-12-17 14:05:49 -08:00
Frank Seide
6bde0ca6d3 T66557700 Support default argument values of a method (#48863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48863

Support default arguments when invoking a module via PyTorch Lite (`mobile::Module`).

Test Plan:
buck test mode/dbg //caffe2/test/cpp/jit:jit -- LiteInterpreterTest.MethodInvocation

buck test mode/dbg caffe2/test:mobile -- test_method_calls_with_optional_arg

Reviewed By: raziel, iseeyuan

Differential Revision: D25152559

fbshipit-source-id: bbf52f1fbdbfbc6f8fa8b65ab524b1cd4648f9c0
2020-12-16 15:55:03 -08:00
Chen Lai
416dc68341 [Pytorch][Annotation] Update inlined callstack with module instance info (#47416)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47416

Test Plan: Imported from OSS

Reviewed By: kimishpatel

Differential Revision: D24752846

Pulled By: cccclai

fbshipit-source-id: 94d3c18c56161d1de3a16bb7c93502fedf71644c
2020-12-03 10:44:46 -08:00
Martin Yuan
a1fef453b6 Support extra files in _load_for_mobile (#47425)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47425

Extra files can be exported in a lite interpreter model, but they could not be loaded. This PR adds the capability to load extra files from a lite interpreter model. Because extra_files is a default argument, it should not affect the existing usage of _load_for_mobile. It's a simple assembly of a generic unordered_map; no additional dependency should be introduced and the size overhead should be small (to be tested).

Test Plan: Imported from OSS

Reviewed By: kwanmacher

Differential Revision: D24770266

Pulled By: iseeyuan

fbshipit-source-id: 7e8bd301ce734dbbf36ae56c9decb045aeb801ce
2020-11-06 20:26:54 -08:00
Kimish Patel
82b74bd929 Port torch::jit::module's attr method to mobile::module (#47059)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47059

This diff adds an attr getter to mobile::module, similar to the one in the TorchScript module at https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/api/object.h#L75-L83.

Test Plan: LiteInterpreterTest::CheckAttrAccess

Reviewed By: xta0

Differential Revision: D24604950

fbshipit-source-id: cfac187f47f5115807dc119fe6c203f60dbd5dff
2020-11-02 16:38:12 -08:00
Michael Suo
22401b850b port all JIT tests to gtest (#45264)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45264

Context for why we are porting to gtest in: https://github.com/pytorch/pytorch/pull/45018.

This PR completes the process of porting and removes unused files/macros.

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D23901392

Pulled By: suo

fbshipit-source-id: 89526890e1a49462f3f77718f4ee273c5bc578ba
2020-09-25 11:37:43 -07:00
Raziel Alvarez Guevara
2b38c09f69 Moves prim ops from C10 back to JIT (#45144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45144

Moves prim ops from C10 back to JIT.

These were originally moved to C10 from JIT in D19237648 (f362cd510d)
ghstack-source-id: 112775781

Test Plan:
buck test //caffe2/test/cpp/jit:jit

https://pxl.cl/1l22N

buck test adsatlas/gavel/lib/ata_processor/tests:ata_processor_test

https://pxl.cl/1lBxD

Reviewed By: iseeyuan

Differential Revision: D23697598

fbshipit-source-id: 36d1eb8c346e9b161ba6af537a218440a9bafd27
2020-09-24 09:44:20 -07:00
Martin Yuan
7862827269 [pytorch] Add variadic run_method for lite interpreter (#44337)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44337

Add a new run_method to mobile Module which is variadic (takes any number of arguments) to match full jit.
ghstack-source-id: 111909068
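
The variadic forwarding can be sketched in Python (a toy analogue of the C++ API, not the real implementation):

```python
# Toy mobile module: run_method takes any number of arguments and
# forwards them, matching the full JIT calling convention.
class MobileModule:
    def __init__(self):
        self._methods = {"forward": lambda x, y: x + y}

    def run_method(self, name, *args):
        return self._methods[name](*args)

print(MobileModule().run_method("forward", 2, 3))  # 5
```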

Test Plan: Added new unit test to test_jit test suite

Reviewed By: linbinyu, ann-ss

Differential Revision: D23585763

fbshipit-source-id: 007cf852290f03615b78c35aa6f7a21287ccff9e
2020-09-13 13:26:30 -07:00
Ann Shan
a61318a535 [pytorch] Replace mobile run_method with get_method and operator() (#44202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44202

In preparation for changing mobile run_method() to be variadic, this diff:

* Implements get_method() for mobile Module, which is similar to find_method but expects the method to exist.
* Replaces calls to the current nonvariadic implementation of run_method() by calling get_method() and then invoking the operator() overload on Method objects.
ghstack-source-id: 111848222
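
The find_method/get_method relationship can be sketched as follows (a Python analogue of the optional-returning C++ API; shapes are illustrative):

```python
class Module:
    def __init__(self, methods):
        self._methods = methods

    def find_method(self, name):
        # May be absent: returns None, like c10::optional without a value.
        return self._methods.get(name)

    def get_method(self, name):
        # Expects the method to exist; raises otherwise.
        method = self.find_method(name)
        if method is None:
            raise AttributeError(f"method '{name}' not found")
        return method

m = Module({"forward": lambda x: x + 1})
print(m.get_method("forward")(2))  # 3
```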

Test Plan: CI, and all the unit tests which currently contain run_method that are being changed.

Reviewed By: iseeyuan

Differential Revision: D23436351

fbshipit-source-id: 4655ed7182d8b6f111645d69798465879b67a577
2020-09-11 10:23:06 -07:00
Ann Shan
9b3c72d46e [pytorch] Make mobile find_method return an optional (#43965)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43965

As part of a larger effort to unify the API between the lite interpreter and full JIT:
- implement torch::jit::mobile::Method, a proxy for torch::jit::mobile::Function
- add support for overloaded operator() to mobile Method and Function
- mobile find_method now returns a c10::optional<Method> (so signature matches full jit)
- moves some implementation of Function from module.cpp to function.cpp
ghstack-source-id: 111161942

Test Plan: CI

Reviewed By: iseeyuan

Differential Revision: D23330762

fbshipit-source-id: bf0ba0d711d9566c92af31772057ecd35983ee6d
2020-09-03 14:46:18 -07:00
Gao, Xiang
5e97f251a8 Enable TF32 support for cuDNN (#40737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40737

Reviewed By: mruberry

Differential Revision: D22801525

Pulled By: ngimel

fbshipit-source-id: ac7f7e728b4b3e01925337e8c9996f26a6433fd2
2020-09-01 15:34:24 -07:00
Ann Shan
248b6a30f4 add training mode to mobile::Module (#42880)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42880

Enable switching between and checking for training and eval mode for torch::jit::mobile::Module using train(), eval(), and is_training(), as already exists for torch::jit::Module.

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D23063006

Pulled By: ann-ss

fbshipit-source-id: b79002148c46146b6e961cbef8aaf738bbd53cb2
2020-08-17 00:20:03 -07:00
taivu
ccd9f3244b Get, save, and load module information for each operator (#42133)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42133

Test Plan:
We save a module with module debugging information as follows.
```
import torch
m = torch.jit.load('./detect.pt')
# Save module without debug info
m._save_for_lite_interpreter('./detect.bc')
# Save module with debug info
m._save_for_lite_interpreter('./detect.bc', _save_debug_info_in_bytecode=True)
```
Size of the file without module debugging information: 4.508 MB
Size of the file with module debugging information: 4.512 MB

Reviewed By: kimishpatel

Differential Revision: D22803740

Pulled By: taivu1998

fbshipit-source-id: c82ea62498fde36a1cfc5b073e2cea510d3b7edb
2020-08-14 01:25:27 -07:00
Ann Shan
1039bbf4eb add named parameters to mobile module (#41376)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41376

torch::jit::mobile::Module does not currently support accessing parameters via their attribute names, but torch::jit::Module does. This diff adds equivalent functionality to mobile::Module.

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D22609142

Pulled By: ann-ss

fbshipit-source-id: 1a5272ff336f99a3c0bb6194c6a6384754f47846
2020-07-20 15:57:49 -07:00
James Reed
c0f9bf9bea s/torch::jit::class_/torch::class_/ (#40795)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40795

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D22314215

Pulled By: jamesr66a

fbshipit-source-id: a2fb5c6804d4014f8e437c6858a7be8cd3efb380
2020-07-06 15:53:33 -07:00
Martin Yuan
82087ee7f6 Add DICT_CONSTRUCT and NAMED_TUPLE_CONSTRUCT to lite interpreter (#36015)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36015

Test Plan: Imported from OSS

Reviewed By: linbinyu

Differential Revision: D20853995

Pulled By: iseeyuan

fbshipit-source-id: 153f76d223f9ffc71e2259b741a7e5d78ae63f22
2020-04-04 09:52:58 -07:00
Meghan Lele
6384c2d81b [JIT] clang-format JIT code (#35115)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35115

This commit runs the newly added tools/clang_format.py on the JIT
codebase and includes all of the formatting changes thus produced.

Testing:
Ran the script, CI.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D20568523

Pulled By: SplitInfinity

fbshipit-source-id: e09bdb982ccf090eecfb7c7b461b8d0681eef82b
2020-03-26 11:24:51 -07:00
James Reed
618c6214aa [reapply][JIT] Namespaces for TorchBind (#35254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35254

Reapply D20541090 with some BC fixes
ghstack-source-id: 100733987

Test Plan: buck test mode/dev-nosan //caffe2/torch/fb/predictor/model_repo/tests:ai_infra_representative_model_shard_6_test -- 'RepresentativeModelTest\/ShardedRepresentativeModelTest\.RunModel\/0'

Reviewed By: zdevito

Differential Revision: D20607111

fbshipit-source-id: 80f148d860571208c93e9308128cd480ff089f74
2020-03-24 00:39:48 -07:00
Lu Fang
a100cf5146 Revert D20541090: [JIT][torchbind] Namespaces for torchbind classes
Test Plan: revert-hammer

Differential Revision:
D20541090

Original commit changeset: ce3d9391dd3c

fbshipit-source-id: acc1d660fbda611941381315507dfe594c385db1
2020-03-21 12:20:44 -07:00
James Reed
e0496a70fc [JIT][torchbind] Namespaces for torchbind classes (#35054)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35054

Test Plan: Imported from OSS

Differential Revision: D20541090

Pulled By: jamesr66a

fbshipit-source-id: ce3d9391dd3cdf619042b8f6ba2645f4c1fc875c
2020-03-20 20:07:02 -07:00
James Reed
ab76a8206f [JIT][mobile] Support built-in Function call in lite interpreter (#34676)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34676

Test Plan: Imported from OSS

Differential Revision: D20427938

Pulled By: jamesr66a

fbshipit-source-id: 79eebfa858776f26da55ffd49d3f78fa7ae0df9b
2020-03-13 18:24:18 -07:00
eellison
44256199a9 [JIT] remove specialized list ops (#34520)
Summary:
Now that lists are no longer specialized, we can register only one operator for list ops that are generic to their element type.
This PR reorgs lists into three sets of ops:
- CREATE_GENERIC_LIST_OPS
- CREATE_SPECIALIZED_LIST_OPS
- CREATE_COMPARATOR_LIST_OPS_SPECIALIZED (we didn't bind certain specialized ops to Tensor)

This is important to land quickly because mobile is finalizing its bytecode soon, after which we could not remove these ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34520

Reviewed By: iseeyuan

Differential Revision: D20429775

Pulled By: eellison

fbshipit-source-id: ae6519f9b0f731eaa2bf4ac20736317d0a66b8a0
2020-03-12 17:49:23 -07:00
Elias Ellison
787c307e63 Revert D20368543: [pytorch][PR] [JIT] remove specialized list ops
Test Plan: revert-hammer

Differential Revision:
D20368543

Original commit changeset: ad0c6d70d2a6

fbshipit-source-id: b8b1a64ac830d5f544567714b940c57274194d3f
2020-03-12 12:55:49 -07:00
eellison
f9f8424386 [JIT] remove specialized list ops (#34520)
Summary:
Now that lists are no longer specialized, we can register only one operator for list ops that are generic to their element type.
This PR reorgs lists into three sets of ops:
- CREATE_GENERIC_LIST_OPS
- CREATE_SPECIALIZED_LIST_OPS
- CREATE_COMPARATOR_LIST_OPS_SPECIALIZED (we didn't bind certain specialized ops to Tensor)

This is important to land quickly because mobile is finalizing its bytecode soon, after which we could not remove these ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34520

Differential Revision: D20368543

Pulled By: eellison

fbshipit-source-id: ad0c6d70d2a6be6ff0e948d6786052167fc43e27
2020-03-12 10:48:14 -07:00
Michael Suo
c235be42dd [jit] kill script namespace (#34515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515

Once upon a time we thought this was necessary. In reality it is not, so
removing it.

For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.

There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.

Test Plan: Imported from OSS

Differential Revision: D20353503

Pulled By: suo

fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
2020-03-11 23:32:48 -07:00
Martin Yuan
ccf4d69b75 [Lite Interpreter] Enable __setstate__ (#33294)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33294

1. Serialize the bytecode of __setstate__ and run it when loading the model.
2. One use case is quantization. To test this use case, a few operators are registered temporarily for the lite interpreter. The "_"-prefixed registrations will be removed when the operators are all migrated to mobile.
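
The __setstate__ flow mirrors Python pickling, where derived state (as packed quantized weights would be) is dropped on save and rebuilt on load. A plain-pickle sketch with an invented class:

```python
import pickle

# 'packed' is derived state: dropped in __getstate__, rebuilt in
# __setstate__ when the object is loaded.
class Quantized:
    def __init__(self, weight):
        self.weight = weight
        self.packed = f"packed({weight})"

    def __getstate__(self):
        return {"weight": self.weight}   # serialize only the raw weight

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.packed = f"packed({self.weight})"  # recompute on load

q = pickle.loads(pickle.dumps(Quantized(3)))
print(q.packed)  # packed(3)
```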

Test Plan: Imported from OSS

Differential Revision: D20162898

Pulled By: iseeyuan

fbshipit-source-id: 7a3180807bf38fbce594d86993896861f12bb58c
2020-03-05 15:24:21 -08:00
Martin Yuan
f097ca503d Add and test training in lite interpreter. (#32359)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32359

Test Plan: Imported from OSS

Differential Revision: D19450614

Pulled By: iseeyuan

fbshipit-source-id: 6bafff39d7880a5b7fb9cd70c33a4e584812be12
2020-03-03 23:33:43 -08:00
Martin Yuan
55b44f6746 Throw an exception when method cannot be found from mobile module. (#33972)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33972

Test Plan: Imported from OSS

Differential Revision: D20168965

Pulled By: iseeyuan

fbshipit-source-id: 2efe5dcb1fb80407cd88a47c50cb382ecd8aa275
2020-02-28 18:28:09 -08:00
Michael Suo
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
Martin Yuan
0e753b2818 Fix SIGABORT caused by double exception in PyTorchStreamReader when file not found. (#33243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33243

If a file does not exist in an archive, PyTorchStreamReader throws an exception. However, when PyTorchStreamReader is destructed, another exception is thrown while the first one is still being processed. As a result of this double exception there is a SIGABRT.

Thanks dreiss for catching this bug and suggesting the fix. It happened when he used _load_for_mobile to load a TorchScript file without a bytecode session. A unittest is added to test this case.

Test Plan: Imported from OSS

Differential Revision: D19859205

Pulled By: iseeyuan

fbshipit-source-id: 8f96b6256f1a1f933fce1c256d64604c7e9269e4
2020-02-12 16:27:15 -08:00
Kimish Patel
820410b505 Added upsample_nearest2d op for lite interpreter. (#32913)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32913

This enables mobile detection and tracking models.

Test Plan: buck test caffe2/test/cpp/jit:jit -- JitTest.LiteInterpreterUpsampleNearest2d

Reviewed By: iseeyuan

Differential Revision: D19664502

fbshipit-source-id: 1c7270dcf394aba7b510c5aa80552c58a5038f24
2020-02-04 07:59:03 -08:00
Martin Yuan
f362cd510d Move prim ops from JIT registration to C10 (#30612)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30612

This is the first version of moving prim ops to C10 registration. Once the reviewers are fine with the initial changes, more operators will be moved in the same style.

Test Plan: Imported from OSS

Differential Revision: D19237648

Pulled By: iseeyuan

fbshipit-source-id: c5a519604efffb80564a556536f17d829f71d9f9
2020-01-04 13:47:44 -08:00
Martin Yuan
c272758b43 Mobile module forward() pass input by value. (#30060)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30060

Mobile forward() passed inputs by reference, which is different from JIT's script::module. To make it consistent, change it to pass by value.

Test Plan: Imported from OSS

Differential Revision: D18587786

Pulled By: iseeyuan

fbshipit-source-id: fa398124fd0a5168f708733ff88f0ba327726f43
2019-11-18 22:33:38 -08:00
Martin Yuan
6980cb2519 Add overload name to JIT prim operators, version 2 (#29960)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29960

Overload name is required in mobile operators with the same name but different schema. Since it's not used in JIT, it's safe to add overload names for JIT operators.
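
Keying the registry by (name, overload name) lets same-name schemas coexist; a minimal sketch (the registry shape and overload names are illustrative):

```python
# Two operators share the name "aten::add" but differ in overload name,
# so the composite key keeps them distinct.
registry = {}

def register_op(name, overload, fn):
    registry[(name, overload)] = fn

register_op("aten::add", "Tensor", lambda a, b: a + b)
register_op("aten::add", "int", lambda a, b: a + b)

print(registry[("aten::add", "int")](2, 3))  # 5
```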

Test Plan: Imported from OSS

Differential Revision: D18555484

fbshipit-source-id: b451379af24e255d8b0c61b964ae32fd1a64ed34
2019-11-16 23:59:07 -08:00
Zachary DeVito
a5b4d78c6d Revert D18499600: Add overload name to JIT prim operators.
Test Plan: revert-hammer

Differential Revision:
D18499600

Original commit changeset: a1b49e64c908

fbshipit-source-id: 73e27b72f53799c0133850d2352ae8cd8a82d87c
2019-11-15 18:36:17 -08:00
Martin Yuan
ff4e782e79 Add overload name to JIT prim operators.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29656

Test Plan: Imported from OSS

Differential Revision: D18499600

fbshipit-source-id: a1b49e64c908d16d40a6ddb048182d7bbe80bcd6
2019-11-15 16:22:47 -08:00
Martin Yuan
3003c5f91b OPN ops TupleConstruct/Unpack and format. (#29635)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29635

TupleConstruct/Unpack as OPN ops.

Test Plan: Imported from OSS

Differential Revision: D18499602

fbshipit-source-id: 389b21d3ea532ef6fa729d67ce34214d86700cd2
2019-11-15 16:22:42 -08:00
Martin Yuan
a4b872b65e Inline graph before writing the bytecode file. (#29421)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29421

Inline graph before writing the bytecode file, so that all the instructions are emitted from the top-level methods.

Test Plan: Imported from OSS

Differential Revision: D18404180

fbshipit-source-id: 4759474a8dba3813616ebce8253bea09941f6bbb
2019-11-08 13:23:32 -08:00
Martin Yuan
977445b635 Disable TSAN test for LiteInterpreterConv (#27748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27748

There's a TSAN test failure. From the stack, it's likely related to mkldnn (https://github.com/pytorch/pytorch/issues/27497). Until the issue is resolved, disable the TSAN test.
ghstack-source-id: 91761706

Test Plan: buck test mode/dev-tsan caffe2/test/cpp/jit:jit -- 'JitTest\.LiteInterpreterConv' --run-disabled

Reviewed By: bddppq

Differential Revision: D17880082

fbshipit-source-id: 251d9b9577838146231c8e122f755936edd1c281
2019-10-11 14:05:33 -07:00
Edward Yang
2488c29129 Revert D17846079: [TSAN unittest] Disable TSAN test in LiteInterpreterConv
Test Plan: revert-hammer

Differential Revision:
D17846079

Original commit changeset: 669d63856902

fbshipit-source-id: 996d64f12efab52d571fc81a7c602d7f18da7255
2019-10-10 16:29:16 -07:00