Commit Graph

509 Commits

Author SHA1 Message Date
PyTorch MergeBot
32ce06a5ab Revert "[Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)"
This reverts commit 4f2c007a1b.

Reverted https://github.com/pytorch/pytorch/pull/101949 on behalf of https://github.com/osalpekar due to As noted in @izaitsevfb's comment, we are still seeing linker errors, this time due to `nnc_prepacked_linear_clamp_run` being made a static function. ([comment](https://github.com/pytorch/pytorch/pull/101949#issuecomment-1560226880))
2023-05-23 22:53:47 +00:00
cyy
4f2c007a1b [Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)
This PR relands the changes introduced in PR #100849. The old PR turned `nnc_aten_embedding` into a static function; however, it is actually used in torch/csrc/jit/tensorexpr/operators/misc.cpp.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101949
Approved by: https://github.com/albanD
2023-05-22 10:53:07 +00:00
PyTorch MergeBot
498c34e8e8 Revert " fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)"
This reverts commit c2f28d1c1d.

Reverted https://github.com/pytorch/pytorch/pull/100849 on behalf of https://github.com/izaitsevfb due to fails internal Meta builds, including fbcode and android, see D46009888: ld.lld: error: undefined symbol: nnc_aten_embedding ([comment](https://github.com/pytorch/pytorch/pull/100849#issuecomment-1555105800))
2023-05-19 19:05:15 +00:00
cyy
c2f28d1c1d fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)
This PR fixes more missing-prototypes violations in the torch_cpu source following PRs #100053, #100147, and #100245.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100849
Approved by: https://github.com/albanD
2023-05-18 03:49:45 +00:00
PyTorch MergeBot
1272cd73da Revert "extend serialization for tensor metadata (#99808)"
This reverts commit 4b9bc6f2a6.

Reverted https://github.com/pytorch/pytorch/pull/99808 on behalf of https://github.com/izaitsevfb due to Breaks internal builds: ld.lld: error: undefined symbol: torch::jit::GetBackendMetaSerialization() ([comment](https://github.com/pytorch/pytorch/pull/99808#issuecomment-1550071656))
2023-05-16 17:22:25 +00:00
Xiang Gao
1732077758 Bump up flatbuffer submodule version to the latest release (v23.3.3) (#100716)
The current flatbuffer version uses `--std=c++0x`, which is too old. On my system, one of flatbuffer's dependencies has stopped supporting C++0x, causing a build issue on my system.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100716
Approved by: https://github.com/Skylion007, https://github.com/malfet
2023-05-15 22:28:01 +00:00
fakeYan
4b9bc6f2a6 extend serialization for tensor metadata (#99808)
Fixes #ISSUE_NUMBER
Add the serialization logic for backend metadata to tensor serialization, implemented through custom registration functions.

In #97429, the structure backendMeta is provided in TensorImpl, and we think this information may also need to be serialized for custom backends.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99808
Approved by: https://github.com/ezyang
2023-05-15 19:45:34 +00:00
PyTorch MergeBot
9ff547a57f Revert "Fix ordered dict loading with LibTorch (#100743)"
This reverts commit d371a890a2.

Reverted https://github.com/pytorch/pytorch/pull/100743 on behalf of https://github.com/jeanschmidt due to New test introduced SerializationTest.SaveStateDict is adding regressions ([comment](https://github.com/pytorch/pytorch/pull/100743#issuecomment-1542400538))
2023-05-10 15:29:14 +00:00
Daniel Falbel
d371a890a2 Fix ordered dict loading with LibTorch (#100743)
Fixes #100741

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100743
Approved by: https://github.com/Skylion007
2023-05-09 13:52:45 +00:00
Janet Yang
812cadf90a [3/n] loading meta to device (#100495)
Summary: Make it possible to `torch.jit.load(model, device)` to a target device when `model` contains weights that are on the `meta` device. Just leave the `meta` weights on `meta`, and load the weights that can be loaded to the target device.
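
A minimal libtorch sketch of that behavior (the model path is hypothetical; assumes a TorchScript archive whose state mixes `meta` and loadable weights):

```cpp
#include <torch/script.h>
#include <iostream>

int main() {
  // Hypothetical checkpoint in which some weights live on the `meta` device.
  torch::jit::Module module =
      torch::jit::load("model_with_meta_weights.pt", torch::kCUDA);

  for (const auto& p : module.named_parameters(/*recurse=*/true)) {
    // Loadable weights end up on the target device; `meta` weights stay on `meta`.
    std::cout << p.name << " -> " << p.value.device() << '\n';
  }
}
```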

Reviewed By: singlaiiit, RoshanPAN, sayitmemory

Differential Revision: D45099145

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100495
Approved by: https://github.com/houseroad
2023-05-08 22:14:38 +00:00
PyTorch MergeBot
fee6d46940 Revert "Bump up flatbuffer submodule version to the latest release (v23.3.3) (#100716)"
This reverts commit 8d31b81edc.

Reverted https://github.com/pytorch/pytorch/pull/100716 on behalf of https://github.com/malfet due to This will break internal builds, please wait for co-dev land ([comment](https://github.com/pytorch/pytorch/pull/100716#issuecomment-1536909954))
2023-05-05 23:45:11 +00:00
Xiang Gao
8d31b81edc Bump up flatbuffer submodule version to the latest release (v23.3.3) (#100716)
The current flatbuffer version uses `--std=c++0x`, which is too old. On my system, one of flatbuffer's dependencies has stopped supporting C++0x, causing a build issue on my system.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100716
Approved by: https://github.com/Skylion007, https://github.com/malfet
2023-05-05 21:58:36 +00:00
PyTorch MergeBot
5c14eea1de Revert "extend serialization for tensor metadata (#99808)"
This reverts commit 73dd6f04c9.

Reverted https://github.com/pytorch/pytorch/pull/99808 on behalf of https://github.com/atalman due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/99808#issuecomment-1536823538))
2023-05-05 21:55:52 +00:00
Bug Hunter Yan
73dd6f04c9 extend serialization for tensor metadata (#99808)
Fixes #ISSUE_NUMBER
Add the serialization logic for backend metadata to tensor serialization, implemented through custom registration functions.

In #97429, the structure backendMeta is provided in TensorImpl, and we think this information may also need to be serialized for custom backends.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99808
Approved by: https://github.com/ezyang
2023-05-04 20:32:11 +00:00
Theodor Arsenij
794e3971ab Add size check before calling stack_.at(dict_pos) in unpickler.cpp (#94300)
Hi!

I've been fuzzing different pytorch modules, and found a crash inside one of them.

Specifically, I'm talking about a module for unpickling and a function called `Unpickler::readInstruction()`. Running this function with the provided crash file results in a crash, which occurs while calling `auto dict = stack_.at(dict_pos).toGenericDict();` [unpickler.cpp:561](0e94fbc0c8/torch/csrc/jit/serialization/unpickler.cpp (L561)). The crash occurs because the index `dict_pos` is out of bounds (which itself happens because the stack size is 0).
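
A standalone sketch of the guard pattern the patch adds (illustrative only, not the actual unpickler code):

```cpp
#include <cstdio>
#include <stdexcept>
#include <vector>

// Validate an index decoded from untrusted input before touching the stack.
int readAtChecked(const std::vector<int>& stack, size_t pos) {
  if (pos >= stack.size()) {  // also covers the empty-stack case
    throw std::runtime_error("Parsing error: stack index out of bounds");
  }
  return stack.at(pos);
}

int main() {
  std::vector<int> stack;  // empty, as in the fuzzer-found crash
  try {
    // static_cast<size_t>(-3) == 18446744073709551613, the index from the report.
    readAtChecked(stack, static_cast<size_t>(-3));
  } catch (const std::runtime_error& e) {
    std::printf("%s\n", e.what());
  }
}
```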

Besides this pull-request, there is another one related to unpickler hardening: https://github.com/pytorch/pytorch/pull/84343

All tests were performed on this pytorch version: [abc54f9314](abc54f9314)

### How to reproduce

1. To reproduce the crash, use provided docker: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch)

2. Build the container: `docker build -t oss-sydr-fuzz-pytorch-reproduce .`

3. Copy crash file to the current directory:

    - [crash-042dff5e121580425d9d34d0f293918f3c9fbf1e.zip](https://github.com/pytorch/pytorch/files/10674361/crash-042dff5e121580425d9d34d0f293918f3c9fbf1e.zip)

4. Run the container: ``docker run --privileged --network host -v `pwd`:/homedir --rm -it oss-sydr-fuzz-pytorch-reproduce /bin/bash``

5. And execute the binary: `/message_deserialize_sydr /homedir/crash-042dff5e121580425d9d34d0f293918f3c9fbf1e`

After execution completes you will see this error message:

```txt
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 18446744073709551613) >= this->size() (which is 0)
```

And this stacktrace:

```asan
terminate called after throwing an instance of 'std::out_of_range'
  what():  vector::_M_range_check: __n (which is 18446744073709551613) >= this->size() (which is 0)
==39== ERROR: libFuzzer: deadly signal
    #0 0x5d0df1 in __sanitizer_print_stack_trace /llvm-project/compiler-rt/lib/asan/asan_stack.cpp:87:3
    #1 0x545727 in fuzzer::PrintStackTrace() /llvm-project/compiler-rt/lib/fuzzer/FuzzerUtil.cpp:210:5
    #2 0x52b933 in fuzzer::Fuzzer::CrashCallback() /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:233:3
    #3 0x7f9118e0341f  (/lib/x86_64-linux-gnu/libpthread.so.0+0x1441f)
    #4 0x7f9118c2300a in raise (/lib/x86_64-linux-gnu/libc.so.6+0x4300a)
    #5 0x7f9118c02858 in abort (/lib/x86_64-linux-gnu/libc.so.6+0x22858)
    #6 0x7f9119040910  (/lib/x86_64-linux-gnu/libstdc++.so.6+0x9e910)
    #7 0x7f911904c38b  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa38b)
    #8 0x7f911904c3f6 in std::terminate() (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa3f6)
    #9 0x7f911904c6a8 in __cxa_throw (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa6a8)
    #10 0x7f91190433aa  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xa13aa)
    #11 0x63acdf in std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_range_check(unsigned long) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1073:4
    #12 0xce8f93e in std::vector<c10::IValue, std::allocator<c10::IValue> >::at(unsigned long) /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1094:2
    #13 0xce8f93e in torch::jit::Unpickler::readInstruction() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:546:26
    #14 0xce8d527 in torch::jit::Unpickler::run() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:235:27
    #15 0xce8d1c2 in torch::jit::Unpickler::parse_ivalue() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:192:3
    #16 0xcdf0792 in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:127:20
    #17 0xcdf104d in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:137:10
    #18 0xe0532db in torch::distributed::rpc::ScriptRemoteCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/script_remote_call.cpp:74:16
    #19 0xe0ffa10 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch_fuzz/torch/csrc/distributed/rpc/utils.cpp:108:14
    #20 0x602a41 in LLVMFuzzerTestOneInput /message_deserialize_fuzz.cc:192:27
    #21 0x52ce61 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15
    #22 0x516d7c in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6
    #23 0x51cacb in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9
    #24 0x546062 in main /llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10
    #25 0x7f9118c04082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082)
    #26 0x51169d in _start (/message_deserialize_fuzz+0x51169d)

NOTE: libFuzzer has rudimentary signal handlers.
      Combine libFuzzer with AddressSanitizer or similar for better crash reports.
SUMMARY: libFuzzer: deadly signal
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94300
Approved by: https://github.com/malfet, https://github.com/apach301
2023-05-02 18:50:31 +00:00
Nikita Shulga
6b8ef8ea4c [BE] Build PyTorch with -Wnewline-eof (#99687)
This would avoid further regressions like the ones reported in https://github.com/pytorch/pytorch/pull/96668#issuecomment-1468029259

Surround some ONNX/flatbuffer includes with `C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wnewline-eof")` cone of shame

Fixes https://github.com/pytorch/pytorch/issues/96747

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99687
Approved by: https://github.com/kit1980
2023-04-21 14:46:47 +00:00
shibo
da322ea874 Enable torch.jit.load for custom device (#99535)
Fixes #ISSUE_NUMBER
1. torch.jit.load for custom device
```
# custom device named `foo`
ts_model = torch.jit.script(model.to(device="foo"))
ts_model.save("./ts.pt") # it is a scripted model on device `foo`

# and then we want to load it and run it
torch.jit.load("./ts.pt")
```
2. Add some extra keys for custom devices with `privateuse1`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99535
Approved by: https://github.com/albanD
2023-04-20 05:37:57 +00:00
Edward Z. Yang
756a86d52c Support large negative SymInt (#99157)
The strategy is that we will heap allocate a LargeNegativeIntSymNodeImpl whenever we have a large negative int, so that we can keep the old `is_symbolic` test (now called `is_heap_allocated`) on SymInt. Whenever we need to do something with these ints, though, we convert them back into a plain `int64_t` (and then, e.g., wrap it in whatever user-specified SymNodeImpl they need). We cannot wrap directly in the user-specified SymNodeImpl as we generally do not know what the "tracing context" is from C++. We expect large negative ints to be rare, so we don't apply optimizations like singleton-ifying INT_MIN. Here's the order to review:

* c10/core/SymInt.h and cpp
  * `is_symbolic` renamed to `is_heap_allocated` as I needed to audit all use sites: the old `is_symbolic` test would return true for a large negative int, but it would be wrong to then try to dispatch on the LargeNegativeIntSymNodeImpl, which supports very few operations. In this file, I had to update expect_int.
  * If you pass in a large negative integer, we instead heap allocate it in `promote_to_negative`. The function is written in a funny way to keep compact constructor code for SymInt (the heap allocation happens out of line)
  * clone is now moved out-of-line
  * New method maybe_as_int which will give you a constant int if it is possible, either because it's stored inline or in LargeNegativeIntSymNodeImpl. This is the preferred replacement for previous use of is_symbolic() and then as_int_unchecked() (see the sketch after this list).
  * Rename toSymNodeImpl to toSymNode, which is more correct (since it returns a SymNode)
  * Complete rewrite of `normalize_symints.cpp` to use the new `maybe_as_int`. Cannot easily use the old code structure, so it's now done using a macro and typing out each case manually (it's actually not that bad.)
  * Reimplementations of all the unary operators by hand to use `maybe_as_int`, relatively simple.
* c10/core/LargeNegativeIntSymNodeImpl.h - Just stores a int64_t value, but it has to be big and negative. Most methods are not implemented, since we will rewrap the large negative int in the real SymNodeImpl subclass before doing operations with it
* The rest of the files are just rewriting code to use `maybe_as_int`. There is a nontrivial comment in c10/core/SymIntArrayRef.h
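
A small sketch of the `maybe_as_int` pattern described in the list above (assumes only the API as characterized there):

```cpp
#include <c10/core/SymInt.h>
#include <cstdint>
#include <iostream>

void describe(const c10::SymInt& s) {
  if (auto i = s.maybe_as_int()) {
    // Constant either way: stored inline, or heap-allocated as a large negative int.
    std::cout << "constant: " << *i << '\n';
  } else {
    std::cout << "truly symbolic: rewrap and dispatch via SymNode\n";
  }
}

int main() {
  describe(c10::SymInt(42));         // small int: stored inline
  describe(c10::SymInt(INT64_MIN));  // large negative: heap-allocated, still constant
}
```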

Very minor test adjustment in c10/test/core/SymInt_test.cpp . Plan to exercise this properly in next PR.

Companion XLA PR: https://github.com/pytorch/xla/pull/4882

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99157
Approved by: https://github.com/albanD
2023-04-15 22:43:51 +00:00
Lu Fang
df43fef87f Support >4GB strings in the TorchScript model (#99104)
Summary: Support for BINUNICODE8 is missing, so this adds it, letting us support attributes larger than 4 GB. For example, for a very large model, we save the lowered model in the EngineHolder using a string attribute.
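
A standalone sketch of what decoding the opcode involves (illustrative only, not the actual unpickler code): BINUNICODE8 carries an 8-byte little-endian length, lifting the 4 GB cap of BINUNICODE's 4-byte length field.

```cpp
#include <cstdint>
#include <cstring>
#include <iostream>
#include <stdexcept>
#include <string>

std::string readBinUnicode8(const std::string& buf, size_t& pos) {
  uint64_t len = 0;
  if (buf.size() - pos < sizeof(len))
    throw std::runtime_error("truncated length");
  std::memcpy(&len, buf.data() + pos, sizeof(len));  // assumes a little-endian host
  pos += sizeof(len);
  if (buf.size() - pos < len)
    throw std::runtime_error("truncated payload");
  std::string s = buf.substr(pos, len);
  pos += len;
  return s;
}

int main() {
  std::string payload = "hello";
  uint64_t len = payload.size();
  std::string buf(reinterpret_cast<const char*>(&len), sizeof(len));
  buf += payload;
  size_t pos = 0;
  std::cout << readBinUnicode8(buf, pos) << '\n';  // prints "hello"
}
```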

Test Plan: buck2 test mode/opt //caffe2/test:jit -- --exact 'caffe2/test:jit - test_save_load_large_string_attribute (jit.test_save_load.TestSaveLoad)'

Differential Revision: D44905770

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99104
Approved by: https://github.com/qihqi
2023-04-14 18:46:19 +00:00
Han Qi (qihqi)
06d8e231d5 Make sure that while caching values we don't invoke any Aten operator (#99050)
Summary:
See title.
Also change the catch clause to a catch-all so it won't fail.

Test Plan: existing tests

Reviewed By: harishs88ss

Differential Revision: D44945942

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99050
Approved by: https://github.com/angelayi
2023-04-14 01:36:18 +00:00
Tugsbayasgalan Manlaibaatar
39fd7f945f Add Symbool support in python to C++ translation (#98453)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98453
Approved by: https://github.com/ezyang
2023-04-12 03:21:57 +00:00
Aleksei Nikiforov
ae0d06b42c Fix saving and loading pickle files on Big Endian systems (#95881)
This change fixes test/test_cpp_api_parity.py tests on Big Endian systems.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95881
Approved by: https://github.com/malfet
2023-04-05 06:11:31 +00:00
Scott Wolchok
794f6e50a1 [PyTorch] Accept string_view in Pickler::pushGlobal (#96402)
This should make a difference for users building with libstdc++: we pass string literals to pushGlobal with length longer than 15 bytes, and 15 bytes is the maximum inline size of libstdc++'s std::string before it will heap allocate.
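
A sketch of the rationale with hypothetical stand-in functions (not Pickler's real signatures): a `std::string` parameter copies the literal into a temporary, and libstdc++ heap-allocates once the length exceeds its 15-byte inline buffer, whereas `std::string_view` just points at the literal.

```cpp
#include <iostream>
#include <string>
#include <string_view>

void byString(const std::string& s) { std::cout << s.size() << '\n'; }  // temporary std::string built at the call site
void byView(std::string_view s) { std::cout << s.size() << '\n'; }      // zero-copy

int main() {
  // 28 characters: longer than libstdc++'s 15-byte inline capacity, so the
  // std::string overload triggers a heap allocation; the view does not.
  byString("torch._utils._rebuild_tensor");
  byView("torch._utils._rebuild_tensor");
}
```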

Differential Revision: [D43930698](https://our.internmc.facebook.com/intern/diff/D43930698/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96402
Approved by: https://github.com/ezyang
2023-03-31 19:33:46 +00:00
Scott Wolchok
79d2a8dd9e [PyTorch] Second try: use c10::FastMap for memoizing in Pickler (#96688)
These maps don't rely on reference stability, so FastMap should be fine.

First try (#96360) was reverted because it broke internal tests.

Differential Revision: [D43995796](https://our.internmc.facebook.com/intern/diff/D43995796/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D43995796/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96688
Approved by: https://github.com/malfet
2023-03-28 21:23:13 +00:00
mikey dagitses
82592f7e53 remove dead torch_pb.h library (#97599)
This is only used in one place; ensure it still builds.

Differential Revision: [D44395699](https://our.internmc.facebook.com/intern/diff/D44395699/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D44395699/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97599
Approved by: https://github.com/PaliC
2023-03-28 00:55:17 +00:00
mikey dagitses
85885301fd fix ignored qualifiers errors (#97443)
fix ignored qualifiers errors

Summary:
These errors exist in GCC 11, which is the default compiler on CentOS
9.

Test Plan: Rely on CI.

Reviewers: sahanp

Subscribers:

Tasks:

Tags:

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/97443).
* __->__ #97443
* #97442
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97443
Approved by: https://github.com/ezyang
2023-03-24 23:05:50 +00:00
Han Qi (qihqi)
e4b365a9a0 Use an equality operator that doesn't depend on nonzero for flatbuffer_serializer (#97298)
Summary: the call to is_nonzero here is not desirable: https://www.internalfb.com/code/fbsource/[ed0407ba3bf520baa2e9333483b274c5b40b54eb]/fbcode/caffe2/aten/src/ATen/core/ivalue.cpp?lines=278

Differential Revision: D44276685

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97298
Approved by: https://github.com/larryliu0820
2023-03-23 23:54:41 +00:00
PyTorch MergeBot
12da0c7037 Revert "remove dead torch_pb.h library (#97323)"
This reverts commit 364d92f9b6.

Reverted https://github.com/pytorch/pytorch/pull/97323 on behalf of https://github.com/malfet due to Reverting as PR dependent on https://github.com/pytorch/pytorch/pull/97322 that has been reverted
2023-03-23 23:19:05 +00:00
mikey dagitses
364d92f9b6 remove dead torch_pb.h library (#97323)
remove dead torch_pb.h library

Summary: This is only used in one place; ensure it still builds.

Test Plan: Rely on CI.

Reviewers: sahanp

Subscribers:

Tasks:

Tags:

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/97323).
* #97337
* #97336
* #97335
* #97334
* #97325
* #97324
* __->__ #97323
* #97322
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97323
Approved by: https://github.com/malfet
2023-03-22 17:06:21 +00:00
Han Qi (qihqi)
7a93865c46 Fix regression on loading jit module from flatbuffer (#97190)
Summary:
https://fb.workplace.com/groups/pytorch.edge.users/permalink/1287477365455887

Root cause:
Introduced in D44106776. But this loop is weird because class_dep can grow, so it cannot be replaced with c10::irange.

Test Plan:
Used model at `fbpkg fetch speech.tuna.milan.ondevice.en_us.transducer:6`
Then
`buck run xplat/caffe2/fb/lite_predictor:convert_model -- --model=$HOME/20230320debug/pytorchmodel.pt --output_name=/tmp/ffmodel.ff`

Differential Revision: D44234894

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97190
Approved by: https://github.com/larryliu0820
2023-03-21 19:54:44 +00:00
Scott Wolchok
7beac103ee [PyTorch] Remove unnecessary unpickler.h #include in jit/serialization/import.h (#96687)
A forward declaration will do here.

Differential Revision: [D43995795](https://our.internmc.facebook.com/intern/diff/D43995795/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96687
Approved by: https://github.com/suo
2023-03-21 03:43:05 +00:00
Nikita Shulga
a229e78544 [BE] Enforce sign-compare (#96723)
A number of OSS PRs were reverted because of new signed-unsigned comparison warnings, which are treated as errors in some internal builds.
Not sure how those selective rules are applied, but this PR removes `-Wno-sign-compare` from the PyTorch codebase.

The only tricky part in this PR was making sure that non-ASCII character detection works for both signed and unsigned chars here:
6e3d51b08a/torch/csrc/jit/serialization/python_print.cpp (L926)

Exclude several files from sign-compare if flash attention is used, due to a violation in cutlass, to be fixed by https://github.com/NVIDIA/cutlass/pull/869.
Do not try to fix sign-compare violations in the caffe2 codebase.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96723
Approved by: https://github.com/albanD
2023-03-15 06:04:20 +00:00
Ivan Zaitsev
41991710b2 Revert "[PyTorch] Use c10::FastMap for memoizing in Pickler (#96360)" (#96547)
This reverts commit 69d3fa2e4d.

Reason: breaks internal meta tests. See [D43926671](https://www.internalfb.com/diff/D43926671)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96547
Approved by: https://github.com/seemethere, https://github.com/malfet
2023-03-10 20:57:06 +00:00
Scott Wolchok
69d3fa2e4d [PyTorch] Use c10::FastMap for memoizing in Pickler (#96360)
These maps don't rely on reference stability, so FastMap should be fine.

Differential Revision: [D43926671](https://our.internmc.facebook.com/intern/diff/D43926671/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96360
Approved by: https://github.com/ezyang
2023-03-10 02:18:16 +00:00
Scott Wolchok
fdd7e76b95 [PyTorch][easy] Don't call IValue::type twice in Pickler::endTypeTag (#96214)
The duplicate call is unlikely to be eliminated by the compiler (it can return a new heap-allocated object).
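
A minimal sketch of the pattern (the surrounding helper is hypothetical): hoist the potentially allocating call into a local and reuse it.

```cpp
#include <ATen/core/ivalue.h>
#include <torch/torch.h>
#include <iostream>

void printTypeTwice(const c10::IValue& v) {
  // Before: two v.type() calls, each of which may return a new heap-allocated TypePtr.
  const c10::TypePtr type = v.type();  // after: one call, reused below
  std::cout << type->str() << " / " << type->annotation_str() << '\n';
}

int main() {
  printTypeTwice(c10::IValue(torch::ones({2, 2})));
}
```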

Differential Revision: [D43877846](https://our.internmc.facebook.com/intern/diff/D43877846/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96214
Approved by: https://github.com/zhxchen17
2023-03-08 01:29:21 +00:00
Daniil Kutz
59005bb998 Fix segmentation fault in script_type_parser.cpp and unpickler.cpp (#94815)
Hi!

I've been fuzzing different pytorch modules, and found a few crashes.

Proposed checks fixes multiple segmentation faults and heap buffer overflows that was found during fuzzing pytorch with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch).

### Crash files ###
1) Heap buffer overflow that leads to crash
[crash-842314913bf1820ec19cddfbb7400ffdbb756920.zip](https://github.com/pytorch/pytorch/files/9461316/crash-842314913bf1820ec19cddfbb7400ffdbb756920.zip)

```
  "AsanReport": [
    "==3751==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x619000033478 at pc 0x0000005f9bc3 bp 0x7fffffff1eb0 sp 0x7fffffff1ea8\n",
    "READ of size 4 at 0x619000033478 thread T0\n",
    "[Detaching after fork from child process 3762]\n",
    "    #0 0x5f9bc2 in c10::IValue::IValue(c10::IValue&&) /pytorch_fuzz/aten/src/ATen/core/ivalue.h:192:43\n",
    "    #1 0x9ecd0a7 in torch::jit::pop(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /pytorch_fuzz/aten/src/ATen/core/stack.h:102:12\n",
    "    #2 0x9ecd0a7 in torch::jit::Unpickler::readInstruction() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:380:17\n",
    "    #3 0x9ecafc7 in torch::jit::Unpickler::run() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:226:27\n",
    "    #4 0x9ecac62 in torch::jit::Unpickler::parse_ivalue() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:183:3\n",
    "    #5 0x9e45996 in torch::jit::unpickle(std::function<unsigned long (char*, unsigned long)>, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:127:20\n",
    "    #6 0x9e4626d in torch::jit::unpickle(char const*, unsigned long, std::function<c10::StrongTypePtr (c10::QualifiedName const&)>, c10::ArrayRef<at::Tensor>, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)) /pytorch_fuzz/torch/csrc/jit/serialization/pickle.cpp:137:10\n",
```

2) Segmentation fault
[crash-e690c58718e88921350562f0b4d9180938145d77.zip](https://github.com/pytorch/pytorch/files/9461331/crash-e690c58718e88921350562f0b4d9180938145d77.zip)

```
 "AsanReport": [
    "==3744==ERROR: AddressSanitizer: SEGV on unknown address (pc 0x000009122754 bp 0x7fffffff5290 sp 0x7fffffff5270 T0)\n",
    "==3744==The signal is caused by a READ memory access.\n",
    "==3744==Hint: this fault was caused by a dereference of a high value address (see register values below).  Disassemble the provided pc to learn which register was used.\n",
    "[Detaching after fork from child process 3763]\n",
    "    #0 0x9122754 in c10::intrusive_ptr<torch::jit::Tree, c10::detail::intrusive_target_default_null_type<torch::jit::Tree> >::retain_() /pytorch_fuzz/c10/util/intrusive_ptr.h:269:54\n",
    "    #1 0x9127929 in c10::intrusive_ptr<torch::jit::Tree, c10::detail::intrusive_target_default_null_type<torch::jit::Tree> >::intrusive_ptr(c10::intrusive_ptr<torch::jit::Tree, c10::detail::intrusive_target_default_null_type<torch::jit::Tree> > const&) /pytorch_fuzz/c10/util/intrusive_ptr.h:352:5\n",
    "    #2 0x9127929 in torch::jit::Expr::Expr(c10::intrusive_ptr<torch::jit::Tree, c10::detail::intrusive_target_default_null_type<torch::jit::Tree> > const&) /pytorch_fuzz/torch/csrc/jit/frontend/tree_views.h:269:49\n",
    "    #3 0x91b1bbb in torch::jit::Maybe<torch::jit::Expr>::get() const /pytorch_fuzz/torch/csrc/jit/frontend/tree_views.h:211:12\n",
    "    #4 0x92a8f74 in torch::jit::ScriptTypeParser::parseClassConstant(torch::jit::Assign const&) /pytorch_fuzz/torch/csrc/jit/frontend/script_type_parser.cpp:461:41\n",
    "    #5 0x9e1c09b in torch::jit::SourceImporterImpl::importClass(c10::QualifiedName const&, torch::jit::ClassDef const&, bool) /pytorch_fuzz/torch/csrc/jit/serialization/import_source.cpp:549:34\n",
    "    #6 0x9e13f00 in torch::jit::SourceImporterImpl::importNamedType(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, torch::jit::ClassDef const&) /pytorch_fuzz/torch/csrc/jit/serialization/import_source.cpp:288:5\n",
    "    #7 0x9e11fbc in torch::jit::SourceImporterImpl::findNamedType(c10::QualifiedName const&) /pytorch_fuzz/torch/csrc/jit/serialization/import_source.cpp:140:5\n",
```

3) Unhandled out of bounds access in a vector
[crash-ccd524e7ba19a37982dd91e0d6fc06bb26dd0b10.zip](https://github.com/pytorch/pytorch/files/9461367/crash-ccd524e7ba19a37982dd91e0d6fc06bb26dd0b10.zip)

```
  "AsanReport": [
    "==3792== ERROR: libFuzzer: deadly signal\n",
    "[Detaching after fork from child process 3809]\n",
    "    #0 0x59cc11 in __sanitizer_print_stack_trace /llvm-project/compiler-rt/lib/asan/asan_stack.cpp:87:3\n",
    "    #1 0x511547 in fuzzer::PrintStackTrace() /llvm-project/compiler-rt/lib/fuzzer/FuzzerUtil.cpp:210:5\n",
    "    #2 0x4f7753 in fuzzer::Fuzzer::CrashCallback() /llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:233:3\n",
    "    #3 0x7ffff7c6741f  (/lib/x86_64-linux-gnu/libpthread.so.0+0x1441f)\n",
    "    #4 0x7ffff7a8700a in __libc_signal_restore_set /build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/internal-signals.h:86:3\n",
    "    #5 0x7ffff7a8700a in raise /build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/raise.c:48:3\n",
    "    #6 0x7ffff7a66858 in abort /build/glibc-SzIz7B/glibc-2.31/stdlib/abort.c:79:7\n",
    "    #7 0x7ffff7e73910  (/lib/x86_64-linux-gnu/libstdc++.so.6+0x9e910)\n",
    "    #8 0x7ffff7e7f38b  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa38b)\n",
    "    #9 0x7ffff7e7f3f6 in std::terminate() (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa3f6)\n",
    "    #10 0x7ffff7e7f6a8 in __cxa_throw (/lib/x86_64-linux-gnu/libstdc++.so.6+0xaa6a8)\n",
    "    #11 0x7ffff7e763aa  (/lib/x86_64-linux-gnu/libstdc++.so.6+0xa13aa)\n",
    "    #12 0x6aeedf in std::vector<c10::IValue, std::allocator<c10::IValue> >::_M_range_check(unsigned long) const /usr/bin/../lib/gcc/x86_64-linux-gnu/10/../../../../include/c++/10/bits/stl_vector.h:1073:4\n",
    "    #13 0x9ecd66c in torch::jit::Unpickler::readInstruction() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp\n",
    "    #14 0x9ecafc7 in torch::jit::Unpickler::run() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:226:27\n",
    "    #15 0x9ecac62 in torch::jit::Unpickler::parse_ivalue() /pytorch_fuzz/torch/csrc/jit/serialization/unpickler.cpp:183:3\n",
```

Some other crashes found by fuzzer:
[crash-0cab888cbd1e9fea92ab6ddeadf40b958b87d62b.zip](https://github.com/pytorch/pytorch/files/9461406/crash-0cab888cbd1e9fea92ab6ddeadf40b958b87d62b.zip)
[crash-04c9ba8e3b0f15028fd0fb0ed014fd352e182a1d.zip](https://github.com/pytorch/pytorch/files/9461407/crash-04c9ba8e3b0f15028fd0fb0ed014fd352e182a1d.zip)
[crash-422ad8c3a3472980ba751f4c7f79cf2b53e49927.zip](https://github.com/pytorch/pytorch/files/9461408/crash-422ad8c3a3472980ba751f4c7f79cf2b53e49927.zip)

### How to reproduce ###

1. To reproduce the crashes, use provided docker: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/blob/master/projects/pytorch/Dockerfile)

2. Build the container: `docker build -t oss-sydr-fuzz-pytorch-reproduce .`

3. Copy crash file to the current directory

4. Run the container: `` docker run --privileged --network host -v `pwd`:/homedir --rm -it oss-sydr-fuzz-pytorch-reproduce /bin/bash ``

5. And execute fuzz-targets with provided crash-files.

After execution completes you will see ASAN reports.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94815
Approved by: https://github.com/davidberard98
2023-02-16 21:41:11 +00:00
PyTorch MergeBot
25820b69f6 Revert "[BE] Use data() method when possible as it's safer and more readable (#92755)"
This reverts commit 582485bf0f.

Reverted https://github.com/pytorch/pytorch/pull/92755 on behalf of https://github.com/ezyang due to could have forward fixed but not going to
2023-02-13 21:44:30 +00:00
Maxwell Nuyens
0d0ebcdfe5 feature: adding the ability to restore shapes after loading a traced model (#90744)
Adds the ability to store the inputs used in tracing models when calling torch.jit.save, and to restore the input shapes using torch.jit.load if the appropriate variables are set.

Fixes [89185](https://github.com/pytorch/pytorch/issues/89185)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90744
Approved by: https://github.com/davidberard98
2023-02-10 17:12:52 +00:00
Thiago Crepaldi
a63524684d [ONNX] Add col2im for opset 18 (#84594)
Opset 18 will be used to introduce support for ONNX's Col2Im-18 and resolve https://github.com/pytorch/pytorch/issues/84408

Depends: https://github.com/pytorch/pytorch/pull/83201 (CI will fail until ONNX submodule is updated)

As per Faith's recommendation, this PR should be merged only after ORT 1.13.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84594
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms, https://github.com/abock, https://github.com/BowenBao
2023-02-09 19:54:42 +00:00
AllenTiTaiWang
b27ac6dc56 [ONNX] Add full checker mode in torch.onnx.export (#83186)
Fix #82589
Why:
1. **full_check** works in the `onnx::checker::check_model` function as it turns on **strict_mode** in `onnx::shape_inference::InferShapes()`, which I think was the intention of this part of the code.
2. **strict_mode** catches failed shape type inference (an invalid ONNX model from the onnx perspective), and ONNXRUNTIME can't run these invalid models, as ONNXRUNTIME actually relies on ONNX shape type inference to optimize the ONNX graph. Why don't we set it to True by default? Some existing users use other platforms, such as caffe2, to run ONNX models, and those platforms don't need a valid ONNX model.
3. This PR doesn't change the original behavior of `check_onnx_proto`, but adds a warning message for models that can't pass strict shape type inference, saying those models would fail on onnxruntime.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83186
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi, https://github.com/jcwchen, https://github.com/BowenBao
2023-02-08 22:47:25 +00:00
Ivan Kobzarev
9daca46dc4 [jit][await] Apply review comments (#93284)
Differential Revision: [D42849920](https://our.internmc.facebook.com/intern/diff/D42849920)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93284
Approved by: https://github.com/malfet
2023-02-01 07:22:06 +00:00
Ivan Kobzarev
2fc73622f8 [jit] Support Awaitable type (#90863)
We want to make TorchRec sharded models TorchScriptable.

TorchRec sharded models uses generic types Awaitable[W] and LazyAwaitable[W] (https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/types.py#L212).
In the sharded model those types are used instead of the contained type W, carrying an initialization function that produces an object of type W.

At the moment the first attribute of W is requested, `LazyAwaitable[W]` will call its initialization function (on the same stack), cache the result inside, and work transparently as an object of W. So we can think of it as delayed object initialization.

To support this behavior in TorchScript - we propose a new type to TorchScript - `Await`.
In eager mode it works the same as `LazyAwaitable[W]` in TorchRec, being dynamically typed - acting as a type `W` while it is `Await[W]`.

Within TorchScript it is `Await[W]` and can only be explicitly converted to W using the special function `torch.jit._awaitable_wait(aw)`.
Creation of this `Await[W]` is done via another special function `torch.jit._awaitable(func, *args)`.

The semantics are close to `torch.jit.Future`, fork, and wait, and use the same jit mechanics (inline fork Closures), with the difference that the function is not started in parallel on fork. It is only stored as a lambda inside the IValue and called on the same thread when `torch.jit._awaitable_wait` is called.

For example (more examples in this PR `test/jit/test_await.py`)
```
      def delayed(z: int) -> int:
          return z * 3

      @torch.jit.script
      def fn(x: Tensor):
          aw: Await[int] = torch.jit._awaitable(delayed, 99)
          a = torch.eye(2)
          b = torch.jit._awaitable_wait(aw)
          return a + b + x
```

Functions semantics:

`_awaitable(func -> Callable[Tuple[...], W], *args, **kwargs) -> Await[W]`

Creates an Await object that owns args and kwargs. Once _awaitable_wait is called, it executes the function func and owns the result. Subsequent _awaitable_wait calls return this result from the first function call.

`_awaitable_wait(Await[W]) -> W`
Returns the cached result W if this is not the first _awaitable_wait call on this Await object; otherwise calls the stored function.

`_awaitable_nowait(W) -> Await[W]`

Creates a trivial Await[W] wrapper around the specified object, to be type compliant for the corner cases.

Differential Revision: [D42502706](https://our.internmc.facebook.com/intern/diff/D42502706)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90863
Approved by: https://github.com/davidberard98
2023-01-30 17:38:59 +00:00
Aaron Gokaslan
0247ed27cc Apply Clang-Tidy readability-container-size-empty (#93236)
Not only is this change usually shorter and more readable, it can also yield better performance: size() is not always a constant-time operation (such as on linked lists), but empty() always is.
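
A short illustration of the check (a minimal sketch, not code from this PR):

```cpp
#include <iostream>
#include <list>

int main() {
  std::list<int> queue;
  // Before: if (queue.size() == 0) ...  -- size() was allowed to count nodes
  // (O(n)) on std::list until C++11, and still can on custom linked lists.
  if (queue.empty()) {  // after: always O(1), and reads as intent
    std::cout << "no work\n";
  }
}
```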

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93236
Approved by: https://github.com/malfet
2023-01-29 23:28:19 +00:00
Aaron Gokaslan
387d769156 [BE]: Replace string compares with more efficient cpp comparisons (#92765)
Replace C++ string comparisons with more efficient equality operators. The equality operators are not just more readable; they also allow short-circuiting for faster string equality checks.
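
A short illustration (a minimal sketch, not code from this PR): `operator==` can bail out on a length mismatch before comparing bytes, whereas `compare()` computes a full ordering.

```cpp
#include <iostream>
#include <string>

int main() {
  std::string s = "torch.jit";
  // Before: if (s.compare("torch.jit") == 0) ...
  if (s == "torch.jit") {  // after: length check first, then bytewise compare
    std::cout << "match\n";
  }
}
```
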
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92765
Approved by: https://github.com/ezyang
2023-01-22 21:40:19 +00:00
Aaron Gokaslan
582485bf0f [BE] Use data() method when possible as it's safer and more readable (#92755)
Apply clang-tidy readability-data-pointer fixits. This essentially uses the data() method when possible instead of the less readable `&vec[0]` to get the address of the underlying backing implementation. Not only is this more readable, it is safer, as it lets you retrieve the pointer even when the std::vector or std::string is empty, where `&vec[0]` would be undefined behavior.
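
A short illustration of the fixit (a minimal sketch, not code from this PR):

```cpp
#include <iostream>
#include <vector>

int main() {
  std::vector<char> buf;       // empty on purpose
  const char* p = buf.data();  // fine: may be nullptr, never undefined behavior
  // const char* q = &buf[0];  // before: undefined behavior on an empty vector
  std::cout << (p == nullptr ? "empty" : "non-empty") << '\n';
}
```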

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92755
Approved by: https://github.com/ezyang
2023-01-22 20:05:41 +00:00
Nikita Shulga
8f1c3c68d3 [BE] Use nested namespaces in .cpp/.cu files (#92100)
As we live in C++17 world

This is a functional no-op, just
- `s/namespace at { namespace native {/namespace at::native {/`
- `s/namespace torch { namespace jit {/namespace torch::jit {/`
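
What the two substitutions amount to, as a minimal compilable sketch:

```cpp
// Before (C++03 style):
// namespace torch { namespace jit {
// int answer() { return 42; }
// }} // namespace jit, namespace torch

// After (C++17 nested namespace definition):
namespace torch::jit {
int answer() { return 42; }
} // namespace torch::jit

int main() { return torch::jit::answer() == 42 ? 0 : 1; }
```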

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92100
Approved by: https://github.com/izaitsevfb
2023-01-13 16:32:34 +00:00
Aaron Gokaslan
18b37bbff9 Clang-Tidy: Improve tensorexpr headers with additional std::moves (#91572)
Splitting #91559 into smaller pieces

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91572
Approved by: https://github.com/ezyang
2023-01-05 09:57:54 +00:00
Han Qi
b8ba4802fe Add an option to skip loading of debug traces (#91430)
Summary:
Debug traces consume lots of memory, especially for small models.

Test Plan:
Unit test

Reviewers:

Subscribers:

Tasks:

Tags:

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91430
Approved by: https://github.com/davidberard98
2022-12-29 22:53:17 +00:00
Theodor Arsenij Larionov
3ac6106523 Add out of bounds checks inside irparser.cpp and unpickler.cpp (#91401)
Hi!

I've been fuzzing different pytorch modules, and found a few crashes.

Inside unpickler.cpp/irparser.cpp there are a few places where `.at()` and `.pop_back()` are called before checking the target container size. The lack of these checks results in an attempt to access elements out of bounds (in the case of `.at()`), and an actual out-of-bounds access while calling `.pop_back()`/`.pop()` on the `stack_` variable.

Crash-files:

1. Crash location: `unpickler.cpp:439` (Call to `.at(idx)` with idx that exceeds `memo_table_` size).
    - Reproduce the crash: `/message_deserialize_fuzz /homedir/crash-5695ad5b2921127775d4137ee02e23834a0bedc4`
    - Crash file: [crash-5695ad5b2921127775d4137ee02e23834a0bedc4.zip](https://github.com/pytorch/pytorch/files/10308463/crash-5695ad5b2921127775d4137ee02e23834a0bedc4.zip)
    - ASAN report: [asan-report-crash-5695ad5b2921127775d4137ee02e23834a0bedc4.log](https://github.com/pytorch/pytorch/files/10308612/asan-report-crash-5695ad5b2921127775d4137ee02e23834a0bedc4.log)

2. Crash location: `irparser.cpp:504` (Call to `.at(idx)` with idx that exceeds `schema->returns()` size).
    - Reproduce the crash: `/irparser_fuzz /homedir/crash-779ecab3d637c8c87de21e23dddb9def82a26792`
    - Crash file: [crash-779ecab3d637c8c87de21e23dddb9def82a26792.zip](https://github.com/pytorch/pytorch/files/10308475/crash-779ecab3d637c8c87de21e23dddb9def82a26792.zip)
    - ASAN report: [asan-report-crash-779ecab3d637c8c87de21e23dddb9def82a26792.log](https://github.com/pytorch/pytorch/files/10308611/asan-report-crash-779ecab3d637c8c87de21e23dddb9def82a26792.log)

3. Crash location: `unpickler.cpp:451` (Call to `.pop_back()` with empty `stack_`).
    - Reproduce the crash: `/message_deserialize_fuzz /homedir/crash-735acc19c9f39b9bbb5667878af995c9167da37f`
    - Crash file: [crash-735acc19c9f39b9bbb5667878af995c9167da37f.zip](https://github.com/pytorch/pytorch/files/10308565/crash-735acc19c9f39b9bbb5667878af995c9167da37f.zip)
    - ASAN report: [asan-report-crash-735acc19c9f39b9bbb5667878af995c9167da37f.log](https://github.com/pytorch/pytorch/files/10308558/asan-report-crash-735acc19c9f39b9bbb5667878af995c9167da37f.log)

4. Crash location: `unpickler.cpp:469` (Call to `.pop()` with empty `stack_`).
    - Reproduce the crash: `/message_deserialize_fuzz /homedir/crash-b552f1a2bbba5eab0f6aeba58475175b18e5b1b9`
    - Crash file: [crash-b552f1a2bbba5eab0f6aeba58475175b18e5b1b9.zip](https://github.com/pytorch/pytorch/files/10308568/crash-b552f1a2bbba5eab0f6aeba58475175b18e5b1b9.zip)
    - ASAN report: [asan-report-crash-b552f1a2bbba5eab0f6aeba58475175b18e5b1b9.log](https://github.com/pytorch/pytorch/files/10308555/asan-report-crash-b552f1a2bbba5eab0f6aeba58475175b18e5b1b9.log)

The provided patch adds missing size checks.

### How to reproduce

1. To reproduce the crashes, use provided docker: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/blob/master/projects/pytorch/Dockerfile)

2. Build the container: `docker build -t oss-sydr-fuzz-pytorch-reproduce .`

3. Copy crash file to the current directory

4. Run the container: ``docker run --privileged --network host -v `pwd`:/homedir --rm -it oss-sydr-fuzz-pytorch-reproduce /bin/bash``

5. And execute fuzz-targets with the given arguments

After execution completes you will see ASAN reports.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91401
Approved by: https://github.com/davidberard98
2022-12-29 19:58:29 +00:00
mikey dagitses
322e4b4c8a set -Wsuggest-override for builds (#89852)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/89852).
* __->__ #89852
* #89851

set -Wsuggest-override for builds

Summary: This was flagged by a Meta internal build.

Test Plan: Rely on CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89852
Approved by: https://github.com/malfet
2022-12-19 22:08:47 +00:00
Aaron Gokaslan
3916d7a575 Apply modernize-use-emplace to aten, c10, torch (#91077)
Apply the clang-tidy check modernize-use-emplace. This is slightly more efficient, constructing elements in place, and is the recommended style in parts of the codebase covered by clang-tidy. This just manually applies the check to the rest of the codebase. Pinging @ezyang as this is related to my other PRs he reviewed, like #89000
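
A short illustration of the rewrite (a minimal sketch, not code from this PR): `emplace_back` forwards its arguments to the element's constructor, skipping the temporary that `push_back` would construct and then move.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
  std::vector<std::pair<std::string, int>> items;
  // Before: items.push_back(std::make_pair(std::string("conv1"), 64));
  items.emplace_back("conv1", 64);  // after: constructs the pair in place
  std::cout << items[0].first << " -> " << items[0].second << '\n';
}
```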

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91077
Approved by: https://github.com/ezyang
2022-12-19 07:49:56 +00:00
Aaron Gokaslan
da8f539e84 [Fix]: Add missing std::vector reserve in aten and torch/csrc (#90627)
Applies some clang-tidy static analysis fixes to places where the std::vector could call .reserve() first to allocate the appropriate amount of space.
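
A short illustration (a minimal sketch, not code from this PR): reserving up front replaces a series of growth reallocations with a single allocation.

```cpp
#include <cstddef>
#include <vector>

std::vector<int> squares(std::size_t n) {
  std::vector<int> out;
  out.reserve(n);  // one allocation instead of repeated capacity doubling
  for (std::size_t i = 0; i < n; ++i) {
    out.push_back(static_cast<int>(i * i));
  }
  return out;
}

int main() { return squares(1000).size() == 1000 ? 0 : 1; }
```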

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90627
Approved by: https://github.com/ezyang
2022-12-13 14:46:27 +00:00
Han Qi (qihqi)
25eb7c3ae3 Clean up dependency for flatbuffer_loader (#86041)
Test Plan: waitforsandcastle

Differential Revision: D38445936

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86041
Approved by: https://github.com/cccclai
2022-12-08 03:48:04 +00:00
Kazuaki Ishizaki
e0c194f10b Fix typos in messages under torch (#88961)
This PR fixes typos in messages and params in C++ source and header files under the `torch` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88961
Approved by: https://github.com/albanD
2022-11-14 19:06:41 +00:00
kshitij12345
f74946324e [fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)
Fixes: https://github.com/pytorch/pytorch/issues/72129

TODO:
* [x] Fix for Parameter

Benchmark
(Measurable diff for small tensors)
```
[-------------- Save and Load --------------]
                    |  After PR  |  Before PR
1 threads: ----------------------------------
      ()            |    111.7   |     106.9
      (4, 4)        |    114.4   |     109.2
      (128, 128)    |    135.2   |     128.3
      (1024, 1024)  |   1431.9   |    1431.3

Times are in microseconds (us).
```

<details>

<summary> Benchmark Script </summary>

```python
import torch
from torch.testing._internal.common_utils import BytesIOContext
from torch.utils import benchmark
import pickle

shapes = ((), (4, 4), (128, 128), (1024, 1024))

sizes = [1, 64, 1024, 10000]
results = []

def save_load_fn(t):
    with BytesIOContext() as f:
        torch.save(t, f)
        f.seek(0)
        torch.load(f)

for shape in shapes:
    t = torch.randn(shape)
    label = 'Save and Load'
    sub_label = f'{shape}'
    results.append(benchmark.Timer(
        stmt='save_load_fn(t)',
        globals={'t': t, 'save_load_fn':save_load_fn},
        label=label,
        sub_label=sub_label,
        description='Before PR',
    ).blocked_autorange(min_run_time=2))

compare = benchmark.Compare(results)
compare.print()

with open('before_pr.pkl', 'wb') as f:
    pickle.dump(results, f)

# with open('after_pr.pkl', 'rb') as f:
#     after_pr = pickle.load(f)

# with open('before_pr.pkl', 'rb') as f:
#     before_pr = pickle.load(f)

# compare = benchmark.Compare(after_pr + before_pr)
# compare.print()
```

</details>

NOTE : **BC-Breaking** : After this PR, all tensors (also regular tensors) will be serialised using `_rebuild_from_type_v2`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81616
Approved by: https://github.com/albanD, https://github.com/kurtamohler
2022-11-11 21:11:12 +00:00
kshitij12345
d15a6b0c97 Error on ZeroTensor serialization (#88803)
Follow-up : https://github.com/pytorch/pytorch/pull/88182#issuecomment-1308628415

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88803
Approved by: https://github.com/anjali411
2022-11-11 08:51:29 +00:00
kshitij12345
eb9b156019 [fix] MathBits: serialization (#88182)
Fixes #81690

TODO:

* [x] C++ Unpickler Fix (locally tested pickled in Python and unpickled in C++)
* [x] C++ Pickler Fix (locally tested pickled in C++ and unpickled in Python)
* [x] Do quant_tensor, sparse_tensor, etc require similar changes? (Sparse and Quant don't need this)
* [x] Add Comments
* [x] How to make sure C++ and Python are in sync? (Functions in `pickler.h` help in getting and setting Tensor Metadata (math-bits for now) on a tensor. They are the only place which should handle this.)

Notes:
Quant Tensor don't support complex dtypes and for float they segfault with `_neg_view` : https://github.com/pytorch/pytorch/issues/88484

Sparse Tensor:
```python
>>> a = torch.tensor([[0, 2.], [3j, 0]]).to_sparse()
>>> a.conj().is_conj()
False
>>> a._neg_view()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError: Cannot access storage of SparseTensorImpl
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88182
Approved by: https://github.com/ezyang, https://github.com/anjali411
2022-11-09 17:15:12 +00:00
PyTorch MergeBot
78a0ca29d9 Revert "[fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)"
This reverts commit 54b6188cc6.

Reverted https://github.com/pytorch/pytorch/pull/81616 on behalf of https://github.com/mehtanirav due to Internal publishing is broken
2022-11-07 18:51:16 +00:00
Kshiteej K
54b6188cc6 [fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)
Fixes: https://github.com/pytorch/pytorch/issues/72129

TODO:
* [x] Fix for Parameter

Benchmark
(Measurable diff for small tensors)
```
[-------------- Save and Load --------------]
                    |  After PR  |  Before PR
1 threads: ----------------------------------
      ()            |    111.7   |     106.9
      (4, 4)        |    114.4   |     109.2
      (128, 128)    |    135.2   |     128.3
      (1024, 1024)  |   1431.9   |    1431.3

Times are in microseconds (us).
```

<details>

<summary> Benchmark Script </summary>

```python
import torch
from torch.testing._internal.common_utils import BytesIOContext
from torch.utils import benchmark
import pickle

shapes = ((), (4, 4), (128, 128), (1024, 1024))

sizes = [1, 64, 1024, 10000]
results = []

def save_load_fn(t):
    with BytesIOContext() as f:
        torch.save(t, f)
        f.seek(0)
        torch.load(f)

for shape in shapes:
    t = torch.randn(shape)
    label = 'Save and Load'
    sub_label = f'{shape}'
    results.append(benchmark.Timer(
        stmt='save_load_fn(t)',
        globals={'t': t, 'save_load_fn':save_load_fn},
        label=label,
        sub_label=sub_label,
        description='Before PR',
    ).blocked_autorange(min_run_time=2))

compare = benchmark.Compare(results)
compare.print()

with open('before_pr.pkl', 'wb') as f:
    pickle.dump(results, f)

# with open('after_pr.pkl', 'rb') as f:
#     after_pr = pickle.load(f)

# with open('before_pr.pkl', 'rb') as f:
#     before_pr = pickle.load(f)

# compare = benchmark.Compare(after_pr + before_pr)
# compare.print()
```

</details>

NOTE : **BC-Breaking** : After this PR, all tensors (also regular tensors) will be serialised using `_rebuild_from_type_v2`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81616
Approved by: https://github.com/albanD, https://github.com/kurtamohler
2022-11-03 09:57:47 +00:00
Han Qi (qihqi)
5c3666cb81 [codev] Make backport work with flatbuffer models (#88127)
Summary: By adding flatbuffer as a dependency of backport.

Differential Revision: D40865452

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88127
Approved by: https://github.com/cccclai
2022-11-01 16:11:30 +00:00
Nikita Shulga
82c8365c16 [BE] Delete TH_DISALLOW_COPY_AND_ASSIGN (#87743)
Replace it with `AT_DISALLOW_COPY_AND_ASSIGN` and delete the header that
contained this define

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87743
Approved by: https://github.com/atalman, https://github.com/ngimel
2022-10-26 03:31:56 +00:00
AllenTiTaiWang
82dff8ee09 [ONNX] replace AT_ASSERT with TORCH_INTERNAL_ASSERT take 2 (#86405)
Address the AT_ASSERT in torch/jit/csrc/serialization (ONNX related).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86405
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-10-25 18:54:40 +00:00
PyTorch MergeBot
0c1dec375f Revert "Back out "Revert D40198461: [pytorch][PR] Backport currently dont work with some models if:" (#87124)"
This reverts commit a42fbfa0cb.

Reverted https://github.com/pytorch/pytorch/pull/87124 on behalf of https://github.com/ZainRizvi due to This is causing periodic jobs to fail
2022-10-21 16:03:00 +00:00
Han Qi (qihqi)
a42fbfa0cb Back out "Revert D40198461: [pytorch][PR] Backport currently dont work with some models if:" (#87124)
Summary:
reland after fixing windows build failure for OVR.

Notable change:
```
#if defined(FBCODE_CAFFE2) or defined(FB_XPLAT_BUILD)
```
changed to
```
#if defined(FBCODE_CAFFE2) || defined(FB_XPLAT_BUILD)
```
Apparently `-DFB_XPLAT_BUILD` wasn't getting picked up on Windows when using `or` to connect the conditions.

Original commit changeset: 7a31fc4b455f

Original Phabricator Diff: D40198461

Test Plan: waitforsandcastle

Reviewed By: davidberard98, cccclai

Differential Revision: D40290932

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87124
Approved by: https://github.com/gmagogsfm
2022-10-20 23:02:10 +00:00
albanD
c141f28b64 Fix compilation warning and spurious print (#87297)
Fixes a compilation warning, makes this warning an error, and removes a random print.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87297
Approved by: https://github.com/malfet
2022-10-19 20:56:37 +00:00
Han Qi (qihqi)
f32aeeae00 Set interface_call to true by default (#86668)
Summary: ASR models need it

Test Plan: existing unit tests

Reviewed By: cccclai

Differential Revision: D40251788

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86668
Approved by: https://github.com/cccclai
2022-10-11 20:07:58 +00:00
Zachary DeVito
736adc0808 Memory snapshots from C++ (#86190)
Sometimes the driving process wants to save memory snapshots but isn't Python.
Add a simple API to turn it on without Python stack traces. It still
saves to the same format for the visualization and summary scripts, using
the C++ Pickler.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86190
Approved by: https://github.com/ezyang
2022-10-05 07:36:39 +00:00
Alex Beloi
a38e43e936 [perf][1/5] Replace IValue::toString()->string() with IValue::toStringRef() (#85437)
Summary: `IValue::toString()` creates a `new c10::intrusive_ptr` (like `std::shared_ptr`) and `->string()` immediately accesses it, creating an atomic reference increment/decrement. We can skip both of these operations by calling `IValue::toStringRef()`.
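
A minimal sketch of the substitution (the surrounding helper is hypothetical):

```cpp
#include <ATen/core/ivalue.h>
#include <iostream>

void printName(const c10::IValue& v) {
  // Before: v.toString()->string() -- materializes an intrusive_ptr (atomic
  // refcount bump) that is immediately dropped again.
  const std::string& s = v.toStringRef();  // after: no refcount traffic
  std::cout << s << '\n';
}

int main() {
  printName(c10::IValue(std::string("linear.weight")));
}
```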

Test Plan: CI

Reviewed By: jaybean-dev

Differential Revision: D39605242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85437
Approved by: https://github.com/jfix71
2022-09-23 23:36:57 +00:00
Edward Z. Yang
61b4e8a7bf More SymFloat support (#85411)
- Support storing SymFloat in IValue
- Add SymFloat to JIT type system (erases to float)
- Printing support for SymFloat
- add/sub/mul/truediv operator support for SymFloat
- Support truediv on integers, it returns a SymFloat
- Support parsing SymFloat from Python object

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85411
Approved by: https://github.com/albanD
2022-09-22 08:07:22 +00:00
Justin Chu
46843be1e6 [ONNX] Update error messages (#85179)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85179
Approved by: https://github.com/kit1980
2022-09-16 22:48:19 +00:00
Paul Saab
e46c1c7931 [aarch64] Cast to signed char to fix aarch64 build (#84429)
Summary: Force SHORT_BINUNICODE and PROTO to signed char to fix build on aarch64

Test Plan: CI

Differential Revision: D39198776

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84429
Approved by: https://github.com/ajtulloch
2022-09-03 04:06:26 +00:00
Nikita Shulga
79e3a39f95 [BE] Remove unused export.h include (#84305)
As flatbuffer_serializer can be compiled without it

Found while debugging cause of https://github.com/pytorch/pytorch/pull/82040#issuecomment-1229503604
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84305
Approved by: https://github.com/kit1980, https://github.com/qihqi
2022-08-31 04:34:01 +00:00
BowenBao
806878518f [ONNX][Reland] Export node and value with scope name (#82040)
Introduce `_jit_pass_onnx_assign_node_and_value_names` to parse and assign
scoped name for nodes and values in exported onnx graph.
Module layer information is obtained from `ONNXScopeName` captured in `scope`
attribute in nodes. For nodes, the processed onnx node names are stored in the
attribute `onnx_name`. For values, the processed onnx output names are stored
as `debugName`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82040
Approved by: https://github.com/AllenTiTaiWang, https://github.com/justinchuby, https://github.com/abock
2022-08-29 20:10:38 +00:00
PyTorch MergeBot
8e6207bcd8 Revert "[ONNX] Export node and value with scope name (#82040)"
This reverts commit 6a3666282d.

Reverted https://github.com/pytorch/pytorch/pull/82040 on behalf of https://github.com/weiwangmeta due to Diff reverted internally
2022-08-29 06:36:18 +00:00
BowenBao
6a3666282d [ONNX] Export node and value with scope name (#82040)
Introduce `_jit_pass_onnx_assign_node_and_value_names` to parse and assign
scoped names for nodes and values in the exported ONNX graph.
Module layer information is obtained from the `ONNXScopeName` captured in the `scope`
attribute of nodes. For nodes, the processed ONNX node names are stored in
the attribute `onnx_name`. For values, the processed ONNX output names are stored
as `debugName`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82040
Approved by: https://github.com/AllenTiTaiWang, https://github.com/justinchuby, https://github.com/abock
2022-08-26 20:59:12 +00:00
zaf
d32a762147 [quant][ao_migration] torch.nn.quantized.dynamic → torch.ao.nn.quantized.dynamic (#78714)
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [X] [Current PR] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are just moved to the new location.
However, specific files need to be double-checked:

- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10
- [BC test](test/quantization/bc/test_backward_compatibility.py) @vkuzo
- [IR emitter](torch/csrc/jit/frontend/ir_emitter.cpp) @jamesr66a
- [JIT serialization](torch/csrc/jit/serialization/import_source.cpp) @IvanKobzarev @jamesr66a

Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860660/)!

Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78714
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:34 +00:00
zaf
c92e5ac95b [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are just moved to the new location.
However, specific files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012/)

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:33 +00:00
PyTorch MergeBot
6a9c02339d Revert "[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)"
This reverts commit 432f037498.

Reverted https://github.com/pytorch/pytorch/pull/78713 on behalf of https://github.com/janeyx99 due to Reverting for breaking the (trunk-only) iOS build
2022-08-22 07:32:37 +00:00
PyTorch MergeBot
b1a7b67529 Revert "[quant][ao_migration] torch.nn.quantized.dynamic → torch.ao.nn.quantized.dynamic (#78714)"
This reverts commit e6fb97d8ae.

Reverted https://github.com/pytorch/pytorch/pull/78714 on behalf of https://github.com/janeyx99 due to sorry, reverting so https://github.com/pytorch/pytorch/pull/78713 could be cleanly reverted
2022-08-22 07:30:48 +00:00
zaf
e6fb97d8ae [quant][ao_migration] torch.nn.quantized.dynamic → torch.ao.nn.quantized.dynamic (#78714)
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [X] [Current PR] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are just moved to the new location.
However, specific files need to be double-checked:

- [Documentation](docs/source/quantization-support.rst) @vkuzo
- [Public API test list](test/allowlist_for_publicAPI.json) @peterbell10
- [BC test](test/quantization/bc/test_backward_compatibility.py) @vkuzo
- [IR emitter](torch/csrc/jit/frontend/ir_emitter.cpp) @jamesr66a
- [JIT serialization](torch/csrc/jit/serialization/import_source.cpp) @IvanKobzarev @jamesr66a

Differential Revision: [D36860660](https://our.internmc.facebook.com/intern/diff/D36860660/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36860660/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78714
Approved by: https://github.com/jerryzh168
2022-08-22 05:22:00 +00:00
zaf
432f037498 [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid cluttering the `torch.nn` namespace,
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are just moved to the new location.
However, specific files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-22 01:38:55 +00:00
Justin Chu
05849eafb9 [ONNX] Create empty opset 17 symbolic file (#83287)
The PR

- Creates an empty symbolic file to house the new ops defined in ONNX 17
- Increments the max version to 17 and fixes the doc for version 16
- Enables tests for opset 17
- Updates the IR version in `export.cpp`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83287
Approved by: https://github.com/thiagocrepaldi, https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-19 02:02:46 +00:00
Daniil Kutz
d438e86719 Add assertions to fix torch::jit::load bugs (#79192)
Fixes #77561, #77563, #77573 and #77575
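
The practical effect is that corrupt or truncated archives now surface as a catchable error instead of undefined behavior; a minimal sketch (the file name is illustrative):

```cpp
#include <torch/script.h>

#include <iostream>

int main() {
  try {
    torch::jit::Module m = torch::jit::load("maybe_corrupt.pt");
  } catch (const c10::Error& e) {
    std::cerr << "rejected malformed archive: " << e.what() << std::endl;
    return 1;
  }
  return 0;
}
```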

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79192
Approved by: https://github.com/Gamrix
2022-08-11 18:03:00 +00:00
Han Qi (qihqi)
2ae1afd6ae When encountering dynamic types, one should cast it recursively. (#83218)
Summary:
Say we have a list as a dynamic type; we'd have something like:
D<1>[D<2>] (numbers are made up). Before this change, it would be emitted as List[D<2>], i.e. the first dynamic type is made legible but the nested type is left unchanged.

After this change it would show up as List[str]. (also changing D<2> to whatever it corresponds to).

Fixes backport issue in task.

Test Plan:
thrift fiddle
{F759935604}

Reviewed By: zhxchen17

Differential Revision: D38561471

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83218
Approved by: https://github.com/zhxchen17
2022-08-11 17:47:17 +00:00
Sergii Dymchenko
a0b3854548 Change seperate -> separate (#83056)
One instance was caught by Meta-internal "exact-word-misspell" linter in D38505529.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83056
Approved by: https://github.com/huydhn, https://github.com/seemethere
2022-08-09 23:11:34 +00:00
Han Qi (qihqi)
f9533560cc Use flatbuffer of alternate namespace (#82952)
Summary: Minimal change to make use of flatbuffer with fbsource namespace.

Test Plan: existing unit tests

Differential Revision: D38494999

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82952
Approved by: https://github.com/cccclai
2022-08-09 07:40:59 +00:00
Tugsbayasgalan Manlaibaatar
b4b60c2a2e Get rid of ENABLE_UPGRADERS macro (#77574)
Since it's been a while since we merged the upgrader design and we haven't encountered any issues, let's get rid of the macro that guarded the safe rollout
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77574
Approved by: https://github.com/gmagogsfm
2022-08-09 05:33:14 +00:00
Dave Bort
0810961d5f Remove flatbuffer types/headers from flatbuffer_serializer[_jit].h (#82619)
Hide the flatbuffers types and headers from the serialize APIs, and stop using the DEPRECATED functions from flatbuffer_loader.h.

This required creating the new `DetachedBuffer` type to replace/hide `flatbuffers::DetachedBuffer`, a class that owns a span of custom-allocated memory.

This is another step towards hiding the flatbuffers types and headers from the load/serialize APIs.
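
A rough sketch of the owning-buffer pattern described above; the member layout and deleter mechanism are assumptions, not the real class:

```cpp
#include <cstddef>
#include <functional>
#include <utility>

// Owns a span of custom-allocated bytes while hiding how they were
// allocated (here, behind a captured deleter).
class DetachedBufferSketch {
 public:
  DetachedBufferSketch(void* data, std::size_t size,
                       std::function<void(void*)> free_fn)
      : data_(data), size_(size), free_fn_(std::move(free_fn)) {}
  DetachedBufferSketch(const DetachedBufferSketch&) = delete;
  DetachedBufferSketch& operator=(const DetachedBufferSketch&) = delete;
  ~DetachedBufferSketch() {
    if (data_) free_fn_(data_);
  }
  void* data() const { return data_; }
  std::size_t size() const { return size_; }

 private:
  void* data_;
  std::size_t size_;
  std::function<void(void*)> free_fn_;
};
```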

Differential Revision: [D38292798](https://our.internmc.facebook.com/intern/diff/D38292798/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38292798/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82619
Approved by: https://github.com/qihqi
2022-08-05 02:23:34 +00:00
Amit Kumar Chawla
6592259ea5 [HPU] Enable torch.jit.load for HPU (#81759)
As per the torch.jit.load documentation, all previously saved modules,
irrespective of their device, are first loaded onto CPU and then
moved to the devices they were saved from. So far, the supported
devices included only CPU and CUDA. To enable torch.jit.load for
HPU, an additional check for HPU is introduced.
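
A sketch of the load-then-move flow described above; the explicit CPU argument maps all saved storages to CPU before the caller moves the module:

```cpp
#include <torch/script.h>

#include <string>

torch::jit::Module load_then_move(const std::string& path, torch::Device dev) {
  // Deserialize onto CPU first, then move to the target device explicitly.
  torch::jit::Module m = torch::jit::load(path, torch::kCPU);
  m.to(dev);
  return m;
}
```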

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81759
Approved by: https://github.com/eellison
2022-08-01 09:28:44 +00:00
Edward Z. Yang
0b9eb93fe9 Make type_resolver_ null error have more useful info (#81466)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81466
Approved by: https://github.com/yinghai
2022-07-15 05:58:37 +00:00
Han Qi (qihqi)
c93ceef658 Wrap static initializers in ifdef (#80590)
because some iOS projects build with -Wglobal-constructors, and the static initializers won't compile there.
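
A sketch of the pattern being guarded; the macro name here is hypothetical:

```cpp
// Any namespace-scope object whose constructor runs at load time trips
// -Wglobal-constructors. Gating one-time registration behind a build flag
// keeps such strict iOS builds working.
#if !defined(DISABLE_STATIC_REGISTRATION)  // hypothetical flag
static const bool kRegistered = []() {
  // ... perform one-time registration ...
  return true;
}();
#endif
```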

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80590
Approved by: https://github.com/cccclai
2022-07-01 04:42:17 +00:00
Han Qi (qihqi)
648224dd80 Use streams for import_ir_module for pickle case (#80131)
Summary: reading the whole file in one shot can blow up memory
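
A sketch of the stream-based entry point (the `std::istream` overload of `torch::jit::load` is part of the public API); handing the loader a stream lets it pull bytes on demand instead of buffering the whole archive first:

```cpp
#include <torch/script.h>

#include <fstream>
#include <string>

torch::jit::Module load_streaming(const std::string& path) {
  std::ifstream in(path, std::ios::binary);
  return torch::jit::load(in);
}
```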

Test Plan: unit tests & by hand

Differential Revision: D37376875

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80131
Approved by: https://github.com/houseroad
2022-06-23 21:32:52 +00:00
Han Qi (qihqi)
fed12ff680 [BE][flatbuffer] Remove code duplications and refactor (#79184)
Summary:
Remove code duplication in import.cpp / export_modules.cpp such that:
1. Only one copy of the switching logic (detect flatbuffer / is_flatbuffer) remains (a detection sketch follows below);
2. Detection of whether flatbuffer support is included moves to runtime (so no more macros).

This also reverses the dependency: import.cpp -> flatbuffer_loader.cpp becomes flatbuffer_loader.cpp -> import.cpp.
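
One way the runtime switching in item 1 can work, as a sketch: zip archives (the pickle container) start with the "PK" magic, while flatbuffer containers carry a 4-byte file identifier at offset 4. The "PTMF" identifier below is our assumption of that value:

```cpp
#include <cstddef>
#include <cstring>

bool looks_like_flatbuffer(const char* data, std::size_t size) {
  return size >= 8 && std::memcmp(data + 4, "PTMF", 4) == 0;
}

bool looks_like_zip(const char* data, std::size_t size) {
  return size >= 2 && std::memcmp(data, "PK", 2) == 0;
}
```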

Differential Revision: D36926217

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79184
Approved by: https://github.com/zhxchen17
2022-06-20 16:37:38 +00:00
Michael Andreas Dagitses
acd072967a canonicalize includes of form <aten/src/ATen/...>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78033

This was never intended to be supported.

@override-unit-failures
(Note: this ignores all push blocking failures!)

Differential Revision: [D36567054](https://our.internmc.facebook.com/intern/diff/D36567054/)

Approved by: https://github.com/kit1980
2022-06-16 17:46:45 +00:00
Han Qi (qihqi)
577f87bbff Make flatbuffer loads faster if loading as mobile module. (#78998)
BC/FC check: verified that a flatbuffer file created at this commit can
be loaded at HEAD, and that a file created at HEAD can be loaded at this commit

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78998
Approved by: https://github.com/zhxchen17
2022-06-14 18:57:01 +00:00
Michael Andreas Dagitses
606b234336 turn on -Werror=unused-function in our Bazel CPU build
Summary:
We also fix any existing issues. Note that we only do this for the CPU
build because nvcc is considered a C++ toolchain but it does not have
the same flag support. Adding flags to the GPU build will cause nvcc
errors.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79154

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-10 22:11:54 +00:00
PyTorch MergeBot
bcd7a20953 Revert "turn on -Werror=unused-function in our Bazel CPU build"
This reverts commit 67d313a032.

Reverted https://github.com/pytorch/pytorch/pull/79154 on behalf of https://github.com/malfet due to Breaks bazel build: 67d313a032
2022-06-10 20:43:03 +00:00
Michael Andreas Dagitses
67d313a032 turn on -Werror=unused-function in our Bazel CPU build
Summary:
We also fix any existing issues. Note that we only do this for the CPU
build because nvcc is considered a C++ toolchain but it does not have
the same flag support. Adding flags to the GPU build will cause nvcc
errors.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79154

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-10 18:30:08 +00:00
Michael Andreas Dagitses
f96d96a7fc turn on -Werror=type-limits in our Bazel CPU build
Summary:
We also fix any existing issues.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79139

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-10 10:04:08 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
1f680a2752 Temp fix for upgrader (#78589)
Summary: see: https://fb.workplace.com/groups/pytorch.edge.users/permalink/1086950342175258/

Test Plan: https://www.internalfb.com/phabricator/paste/view/P503629793

Reviewed By: qihqi

Differential Revision: D36694318

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78589
Approved by: https://github.com/qihqi
2022-06-01 01:31:51 +00:00
PyTorch MergeBot
ceb93afe3f Revert "Fix bug in flatbuffer deserialization"
This reverts commit 7e72c96b10.

Reverted https://github.com/pytorch/pytorch/pull/78344 on behalf of https://github.com/tugsbayasgalan due to needing to land it in fbcode asap
2022-05-31 23:34:04 +00:00
Tugsbayasgalan Manlaibaatar
7e72c96b10 Fix bug in flatbuffer deserialization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78344

Approved by: https://github.com/qihqi
2022-05-31 18:37:30 +00:00
Han Qi (qihqi)
9432be9b8c [flatbuffer] Move saving storage to the last step. (#78024)
Summary: Move storage saving to the last step; otherwise, tensors saved after the storages have already been written will not have storage.

Test Plan: Tested by loading the file from `clowder get GLDGLQnKrIsQFg8DAPxq9vg59ZwZbmQwAAAA orig.pt`, converting it to flatbuffer, and loading it again

Differential Revision: D36552645

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78024
Approved by: https://github.com/Jack-Khuu
2022-05-20 23:48:44 +00:00
Han Qi (qihqi)
0bc4b2af56 Populate bytecode version and operator version (#77685)
Summary: title

Test Plan: unittest

Differential Revision: D36459217

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77685
Approved by: https://github.com/pavithranrao
2022-05-19 23:51:51 +00:00
Pavel Belevich
94eba341f8 Revert RPC Meta device support
This reverts commits 058be5f162 and 2e2200d76c.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77875

Approved by: https://github.com/mrshenli
2022-05-19 23:47:47 +00:00
Tugsbayasgalan Manlaibaatar
31d9f7c303 Move other div variants to upgraders map
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73586

Approved by: https://github.com/gmagogsfm
2022-05-16 22:32:15 +00:00
Han Qi (qihqi)
14e59edd02 Saving JIT to flatbuffer should respect options. (#77456)
Summary: title

Test Plan: manual test with T120364740

Differential Revision: D36388746

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77456
Approved by: https://github.com/pavithranrao
2022-05-16 16:42:56 +00:00
Edward Z. Yang
0a14a4c280 Register prims as operators.
This makes prims look as if they were defined in native_functions.yaml,
but they're still all written in Python. You now need to give a full
schema string for your prims. The returned prim object is now a
torch.ops.prim overload (prims are not allowed to be overloaded,
so we return the overload, not the overload packet, for speed).

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77117

Approved by: https://github.com/mruberry, https://github.com/albanD
2022-05-11 16:38:14 +00:00
Pavel Belevich
2e2200d76c RPC Meta device support
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76882

Approved by: https://github.com/jamesr66a, https://github.com/mrshenli
2022-05-10 01:26:59 +00:00
BowenBao
679fc90cdb [ONNX] Support optional type (#68793) (#73284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73284

Some important ops won't support optional type until opset 16,
so we can't fully test things end-to-end, but I believe this should
be all that's needed. Once ONNX Runtime supports opset 16,
we can do more testing and fix any remaining bugs.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34625646

Pulled By: malfet

fbshipit-source-id: 537fcbc1e9d87686cc61f5bd66a997e99cec287b

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
(cherry picked from commit 822e79f31ae54d73407f34f166b654f4ba115ea5)
2022-05-04 20:24:30 +00:00
Han Qi
aca5594818 Turn on memory efficient format for jit pickle files.
Summary:
This enables the previous change made in D35196883 (b34b192d6b).
The previous change has been landed for 2 weeks to make sure that the format change introduced here can be handled by code already in the wild.

Test Plan: existing tests

Differential Revision: D36074453

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76688
Approved by: https://github.com/gmagogsfm
2022-05-03 18:42:30 +00:00
Masaki Kozuki
0ae3aa648e [torch.onnx] support torch.nn.functional.grid_sample
summary

- Adds `F.grid_sample` support
- Adds a test case

Fixes #27212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76159
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-05-02 22:07:58 +00:00
Han Qi
c55b425de5 [flatbuffer] Bugfix: some classes don't have __getstate__ (#76197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76197

Some classes only have __setstate__ but not __getstate__. Those should still be serialized correctly.

Test Plan:
by hand
`buck run fbcode/mode/dbg xplat/caffe2/fb/lite_predictor:convert_model -- --model=$HOME/models/pytorchmodel_from_manifold.pt --output_name=$HOME/models/pytorchmodel.ff --use_original_bytecode=1`

the model above is a .pt file created using version 4 of bytecode. In particular, __setstate__ is serialized there but __getstate__ is not.

Reviewed By: pavithranrao

Differential Revision: D35827479

fbshipit-source-id: 3d3fdb63d20d41170eac46d076b162d213169f96
(cherry picked from commit 13e966e5c62ce3faf85e8f8fe20e50ad9bb240e5)
2022-04-25 19:39:28 +00:00
Chen Lai
333da3eaef Handle simple tuple type inside Dict (#76164)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76164

For the case `Dict[int, Tuple[Tensor, Tensor]]`, the value type is a `Tuple`, and its qualified name should be `(Tensor, Tensor)`. Its definition won't be in the compilation unit, but the type parser can parse it easily, so we can just use the default string `(Tensor, Tensor)` directly.
ghstack-source-id: 154517975

Test Plan:
```
[chenlai@1833.od /data/sandcastle/boxes/fbsource/fbcode (aba59247a|remote/fbcode/warm)]$ buck test //smart/pytorch_mobile/backport_service:handler_test
Starting new Buck daemon...
Buck daemon started.
DEBUG: /data/sandcastle/boxes/fbsource/tools/build_defs/fbcode_macros/build_defs/lib/cpp_common.bzl:287:14: Using disallowed linker flag 'ANativeActivity_onCreate' in library rule 'fbsource//third-party/toolchains/android-ndk:r18b_native_app_glue'
DEBUG: /data/sandcastle/boxes/fbsource/tools/build_defs/fbcode_macros/build_defs/lib/cpp_common.bzl:287:14: Using disallowed linker flag 'arvr/third-party/toolchains/platform009/build/mesa/lib/libGL.so' in library rule 'fbsource//third-party/toolchains:opengl'
DEBUG: /data/sandcastle/boxes/fbsource/tools/build_defs/fbcode_macros/build_defs/lib/cpp_common.bzl:287:14: Using disallowed linker flag 'arvr/third-party/freeglut/3.0.0/libs/x64-linux/libglut.a' in library rule 'fbsource//third-party/toolchains:GLUT'
Parsing buck files: finished in 26.8 sec
Creating action graph: finished in 59.3 sec
[RE] Metadata: Session ID=[https://fburl.com/b/reSessionID-2ba31fa4-af8e-4de8-abba-76f0f1f91e45]
[RE] Waiting on 0 remote actions. Completed 45 actions remotely, action cache hit rate: 0.00%.
Downloaded 12580/12786 artifacts, 985.44 Mbytes, 0.8% cache miss (for updated rules)
Building: finished in 01:53.8 min (100%) 30935/30935 jobs, 12722/30935 updated
  Total time: 03:20.0 min
More details at https://www.internalfb.com/intern/buck/build/c3bdc062-413e-4646-9ac4-79cef0af8297
BUILD SUCCEEDED
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: a7cd4116-3ee3-4db2-9018-9a7c719a4d7b
Trace available for this run at /tmp/tpx-20220420-213554.773515-a7cd4116-3ee3-4db2-9018-9a7c719a4d7b/trace.log
RemoteExecution session id: reSessionID-a7cd4116-3ee3-4db2-9018-9a7c719a4d7b-tpx
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/3940649774928947
    ✓ ListingSuccess: smart/pytorch_mobile/backport_service:handler_test : 2 tests discovered (45.162)
    ✓ Pass: smart/pytorch_mobile/backport_service:handler_test - test_illegal_version_exception (smart.pytorch_mobile.backport_service.handler_test.BackportServiceTest) (0.398)
    ✓ Pass: smart/pytorch_mobile/backport_service:handler_test - test_backport (smart.pytorch_mobile.backport_service.handler_test.BackportServiceTest) (21.871)
Summary
  Pass: 2
  ListingSuccess: 1
```

Reviewed By: malfet, pavithranrao, guangy10

Differential Revision: D35805700

fbshipit-source-id: d40288715ec336c06dc8a91244dd5576b0af287c
(cherry picked from commit e908737fc37901ff2cb153936e3a57074146ba3a)
2022-04-21 21:32:36 -07:00
Pavithran Ramachandran
e28ac60dd7 Back out "[easy][PTE] Remove GetMutableSizePrefixed* functions" (#76187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76187

The file `torch/csrc/jit/serialization/mobile_bytecode_generated.h` should be a generated file, not edited by hand anymore. For internal usage please use `caffe2:mobile_bytecode`. For OSS, until it is generated on the fly, this checked-in file will be used.

ghstack-source-id: 154485024

(Note: this ignores all push blocking failures!)

Test Plan: CI

Reviewed By: qihqi

Differential Revision: D35822915

fbshipit-source-id: d64e2a270f58c82cccafdd9139e080af923b314d
(cherry picked from commit 80ee9f4bdd6cf371abcb1551889c5c2068942942)
2022-04-21 17:27:05 -07:00
Chen Lai
d938867f91 Export NamedTuple when it's nested in first type layer Dict (#75996)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75996

Nested NamedTuple is supported when loading the model. However, one case was missing when exporting the model: if it's in the first type layer, the `Dict` type wasn't covered yet.
Before:
```
// ty is a generic type pointer and can be any type
for (const TypePtr& ty : mobile_code.types_) {
   std::string type_str = get_type_str(ty);
   if (ty->cast<TupleType>()) { /* do B */ }
}
```
After:
```
for (const TypePtr& ty : mobile_code.types_) {
   std::string type_str = get_type_str(ty);
   if (ty->cast<DictType>()) { /* do A */ }
   else if (ty->cast<TupleType>()) { /* do B */ }
}
```
ghstack-source-id: 154292348

Test Plan:
Use the uploaded model from Everstore: `GBE5xgh6J6T0ZfsAAAhQ7n_pxB90br0LAAAP`. Get it by `clowder get GBE5xgh6J6T0ZfsAAAhQ7n_pxB90br0LAAAP namedtuple.ptl`.

```
TEST(LiteInterpreterTest, DebugDper) {
  std::string path =
      "/data/sandcastle/boxes/fbsource/fbcode/caffe2/test/cpp/jit/namedtuple.ptl";
  // mobile::Module bc = _load_for_mobile(path);
  Module jit_m = load(path);
  std::string resave_path =
      "/data/sandcastle/boxes/fbsource/fbcode/caffe2/test/cpp/jit/namedtuple_reave.ptl";
  jit_m._save_for_mobile(resave_path);
  mobile::Module bc = _load_for_mobile(resave_path);
}
```

```
buck test  //caffe2/test/cpp/jit:jit -- --exact 'caffe2/test/cpp/jit:jit - LiteInterpreterTest.DebugDper'
buck test mode/opt-split-dwarf //dper3/dper3/modules/tests:id_score_list_to_id_list_test
```

Reviewed By: iseeyuan

Differential Revision: D35705480

fbshipit-source-id: b8da2e720b8ca247bb40f13b67b75b5a04709f7a
(cherry picked from commit 73bb6f9ddbefcd7e55e8660a9b55ae6b9eb9759c)
2022-04-20 07:35:34 +00:00
Pavithran Ramachandran
c3e67d8a8c [easy][PTE] Remove GetMutableSizePrefixed* functions
Summary: fb: Fix the error: https://www.internalfb.com/intern/sandcastle/job/9007199888273681/insights

Test Plan:
CI

```
 ~/fbsource/fbcode] eval $(fbpkg info --json smart.pytorch_mobile.backport_service.persistent | jq -r .build.config.build_command)

Downloaded 26538/30277 artifacts, 1.00 Gbytes, 4.9% cache miss (for updated rules)
Building: finished in 10:22.3 min (100%) 48817/48817 jobs, 48817/48817 updated
  Total time: 11:13.5 min
More details at https://www.internalfb.com/intern/buck/build/f7743351-c166-4263-9140-bc59cbb39a37
BUILD SUCCEEDED
```

Reviewed By: qihqi, guangy10

Differential Revision: D35734987

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76019
Approved by: https://github.com/seemethere
2022-04-19 16:40:34 +00:00
Han Qi
b34b192d6b Reland "Make debug_pkl smaller by only emitting unique traces." (#73368)
Summary:
## Original commit message:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73368

The debug_pkl file inside PyTorch's .pt file consists of a list of SourceRanges. Each SourceRange points to a Source, which is a stack trace, a filename, and start/end numbers. Those are emitted in the debug_pkl file as strings.
Since many SourceRanges share the same Source, the string for each trace can be deduped.
The newer format saves the set of unique traces in a tuple; each SourceRange then saves the offset of its trace w.r.t. its position in that tuple (i.e. manually applying dictionary compression).
The above helps with smaller file size. On loading, if we copied each trace into a Source as a string, the runtime memory would still blow up.
To mitigate this, we use SourceView instead of Source, which takes a reference to the string inside the Deserializer and turns it into a string_view. This is safe because the Deserializer is held by the Unpickler via shared_ptr, and the Unpickler is in turn held via shared_ptr by another Source object. That Source object stays alive during model construction.
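
The dictionary-compression step is straightforward; a self-contained sketch of the idea (the types here are illustrative, not the serializer's actual ones):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

struct DedupedTraces {
  std::vector<std::string> unique;   // the tuple of unique traces
  std::vector<std::size_t> offsets;  // one index into `unique` per SourceRange
};

DedupedTraces dedup(const std::vector<std::string>& traces) {
  DedupedTraces out;
  std::unordered_map<std::string, std::size_t> seen;
  for (const auto& t : traces) {
    auto [it, inserted] = seen.emplace(t, out.unique.size());
    if (inserted) {
      out.unique.push_back(t);  // first occurrence: store the trace once
    }
    out.offsets.push_back(it->second);
  }
  return out;
}
```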

Test Plan:
## Original Test plan
unit test

Took an original file (312271638_930.predictor.disagg.local), loaded it with `torch.jit.load`, and saved it again with `torch.jit.save`. Unzipped both and compared the contents:
```
[qihan@devvm5585.vll0 ~]$ du archive -h
4.0K    archive/xl_model_weights
3.7M    archive/extra
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform
8.0K    archive/code/__torch__/caffe2/torch/fb
8.0K    archive/code/__torch__/caffe2/torch
8.0K    archive/code/__torch__/caffe2
20M     archive/code/__torch__/torch/fx/graph_module
20M     archive/code/__torch__/torch/fx
8.0K    archive/code/__torch__/torch/classes
20M     archive/code/__torch__/torch
20M     archive/code/__torch__
20M     archive/code
2.7M    archive/constants
35M     archive
[qihan@devvm5585.vll0 ~]$ du resaved -h
4.0K    resaved/extra
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform
8.0K    resaved/code/__torch__/caffe2/torch/fb
8.0K    resaved/code/__torch__/caffe2/torch
8.0K    resaved/code/__torch__/caffe2
1.3M    resaved/code/__torch__/torch/fx/graph_module
1.3M    resaved/code/__torch__/torch/fx
8.0K    resaved/code/__torch__/torch/classes
1.4M    resaved/code/__torch__/torch
1.4M    resaved/code/__torch__
1.4M    resaved/code
2.7M    resaved/constants
13M     resaved
[qihan@devvm5585.vll0 ~]$
```
## Additional test:
`buck test mode/dev-tsan //caffe2/benchmarks/static_runtime:static_runtime_cpptest -- --exact 'caffe2/benchmarks/static_runtime:static_runtime_cpptest - StaticRuntime.to'` passes

 test jest.fbios.startup_cold_start.local.simulator f333356873 -

Differential Revision: D35196883

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74869
Approved by: https://github.com/gmagogsfm
2022-04-18 22:34:21 +00:00
Han Qi
7d5c07830d Add upgrader related logic to flatbuffer (#71451)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71451

title

Test Plan: unittest

Reviewed By: tugsbayasgalan

Differential Revision: D33593056

fbshipit-source-id: c48d6ad50e6e2f757b68525dfe07693711b95840
(cherry picked from commit 8e09e20c1dafcdbdb45c2d1574da68a32e54a3a5)
2022-04-17 18:51:23 +00:00
Nikita Shulga
fe8eff3711 Revert "Add upgrader related logic to flatbuffer"
This reverts commit dfae96171a.
2022-04-17 11:38:59 -07:00
Han Qi
dfae96171a Add upgrader related logic to flatbuffer
Summary: title

Test Plan: unittest

Differential Revision: D33593056

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71451
Approved by: https://github.com/tugsbayasgalan
2022-04-16 02:04:48 +00:00
Thiago Crepaldi
9bbe1d632e Fix ONNX ATen fallback for non-caffe2 engines
This PR introduces 3 BC changes:

First, this PR propagates `BUILD_CAFFE2` flag to `libtorch` and `libtorch_python`, which is necessary for non-caffe2 ONNX runtimes when using `ONNX_ATEN_FALLBACK` operator export type.

Second, as a complement of https://github.com/pytorch/pytorch/pull/68490, this PR refactors Caffe2's Aten ops symbolics to consider not only the `operator_export_type` (aka `ONNX_ATEN_FALLBACK`) to emit Caffe2 Aten ops, but also whether `BUILD_CAFFE2` (which is called `torch.onnx._CAFFE2_ATEN_FALLBACK` in python binding) is set.

Lastly, it renames `onnx::ATen` to `aten::ATen` for ONNX spec consistency in a BC fashion.
ONNX doesn't have an `ATen` op in its spec, but the PyTorch ONNX converter emits them. Non-Caffe2 backend engines would be misled by such an operator's name/domain. A non-ideal workaround would be to handle ATen ops based on their name and ignore the (non-compliant) domain. Moreover, users could incorrectly file bugs against either ONNX or ONNX Runtime when they inspect the model and notice the presence of an unspecified ONNX operator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73954
Approved by: https://github.com/BowenBao, https://github.com/malfet, https://github.com/garymm, https://github.com/jiafatom
2022-04-14 23:18:45 +00:00
Pavithran Ramachandran
6402e62454 Refactor flatbuffer jit code
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75239

Refactor flatbuffer_serializer to move JIT-related code to a separate file.

Differential Revision: [D35301020](https://our.internmc.facebook.com/intern/diff/D35301020/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D35301020/)!

Approved by: https://github.com/iseeyuan
2022-04-11 23:41:48 +00:00
Pavithran Ramachandran
3001bda304 [PyTorchEdge] Backport from v9 flatbuffer to v8 pickle (#75201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75201

In this diff:
1. Bump supported version to 9, which will serve as a placeholder for upcoming version bump to v9 for flatbuffer format migration.
2. Implements backport from v9 flatbuffer file to v8 pickle file.
ghstack-source-id: 153225189

(Note: this ignores all push blocking failures!)

Test Plan:
fb:
```
cd ~/fbsource/fbcode/ && buck test  -c fbcode.caffe2_enable_flatbuffer=1 caffe2/test/cpp/jit:jit -- LiteInterpreterTest.BackPortByteCodeModelAllVersions
Parsing buck files: finished in 0.7 sec
Downloaded 0/25 artifacts, 0.00 bytes, 100.0% cache miss (for updated rules)
Building: finished in 20.7 sec (100%) 21783/21783 jobs, 5/21783 updated

cd ~/fbsource/fbcode/ && buck test caffe2/test/cpp/jit:jit -- FlatbufferTest.FlatbufferBackPortTest
Parsing buck files: finished in 0.7 sec
Building: finished in 4.5 sec (100%) 12972/53298 jobs, 0/53298 updated
  Total time: 5.3 sec
More details at https://www.internalfb.com/intern/buck/build/b658d597-d358-4293-97cb-28e7612b96e8
BUILD SUCCEEDED
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: 35d5542d-6ee3-4c28-be10-1d822c7a6fef
Trace available for this run at /tmp/tpx-20220308-090347.891303-35d5542d-6ee3-4c28-be10-1d822c7a6fef/trace.log
RemoteExecution session id: reSessionID-35d5542d-6ee3-4c28-be10-1d822c7a6fef-tpx
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/8444249379196000
    ✓ ListingSuccess: caffe2/test/cpp/jit:jit : 490 tests discovered (22.838)
    ✓ Pass: caffe2/test/cpp/jit:jit - FlatbufferTest.FlatbufferBackPortTest (0.289)
Summary
  Pass: 1
  ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/8444249379196000
```

Reviewed By: iseeyuan

Differential Revision: D34702597

fbshipit-source-id: 5c203c29d13360d7934ce6e57557739e7038c05e
(cherry picked from commit 6189e08a2bd968fdab636f77cb6bd73d6c36beb2)
2022-04-07 19:43:57 +00:00
Martin Yuan
00c1e01ad0 Remove internal logic to handle bytecode version 3 (#57775)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57775

The minimum supported bytecode version is updated from 3 to 4. We no longer support version 3 bytecode models.

Why?
* There is hacky code in operator loading that behaves differently for one operator when the global bytecode version is 3. Instead, operator-related metadata should be passed (for example, in #56845). To allow future development, we remove the hacky way first.
* The bytecode version was bumped from 3 to 4 more than half a year ago. Since all the production models have been bumped to version 4, it's not practical to keep and maintain version 3. The risk of deprecating version 3 is low.

Test Plan: Imported from OSS

Reviewed By: raziel

Differential Revision: D28270791

Pulled By: cccclai

fbshipit-source-id: 70b1bd6352fdaae5f8d2173b81578d77018c8e44
(cherry picked from commit 3e930fa381cd01f3705116795c6426df992372fc)
2022-04-07 01:45:52 +00:00
Pavithran Ramachandran
f984e50f39 Extend jit::load to work on flatbuffer file; Take 2 (#75256)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75256

ghstack-source-id: 153138970

Test Plan: CI

Reviewed By: iseeyuan

Differential Revision: D35399581

fbshipit-source-id: dafe9d301009d3f70986ed92bfe06d160ab90ba0
(cherry picked from commit ccc860fd07946de5aae12bc179a0b8bbba83b997)
2022-04-06 17:54:01 +00:00
Lu Fang
32e58c73c4 Back out "Extend jit::load to work on flatbuffer file" (#75244)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75244

Original commit changeset: d653a5af662a

Original Phabricator Diff: D35060736 (d9d34922a0)

Test Plan: Model loading test, verified that D35060736 (d9d34922a0) will cause the torch::save => torch::load failure.

Reviewed By: yinghai, jianyuh

Differential Revision: D35387009

fbshipit-source-id: 9d176992d402d57779e2af3d905b3c1538335298
(cherry picked from commit 6c8cc0d3b8a88b15e35702d70e18bbae8aa4628a)
2022-04-05 09:55:04 +00:00
Pavithran Ramachandran
d9d34922a0 Extend jit::load to work on flatbuffer file (#75022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75022

Extending torch::jit::load to read flatbuffer file
ghstack-source-id: 152820697

Test Plan: CI

Reviewed By: iseeyuan

Differential Revision: D35060736

fbshipit-source-id: d653a5af662a46107ff4fd70209fd2a0a4d40f20
(cherry picked from commit 109e14a54bd279011c8f9066e6c29e8e0b1fc4db)
2022-04-02 01:33:34 +00:00
Pavithran Ramachandran
7aaa75af05 Extending _get_bytecode_version to support flatbuffers format (#75021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75021

Extending `_get_bytecode_version` to support flatbuffers.
ghstack-source-id: 152771695

(Note: this ignores all push blocking failures!)

Test Plan:
```
~/fbsource/xplat] cd ~/fbsource/xplat/ && buck test //xplat/caffe2:test_lite_interpreter
Building: finished in 0.8 sec (100%) 327/327 jobs, 0/327 updated
  Total time: 0.9 sec
Testing: finished in 06:59.5 min (85 PASS/0 FAIL)
BUILD SUCCEEDED
RESULTS FOR //xplat/caffe2:test_lite_interpreter
PASS    412.3s 85 Passed   0 Skipped   0 Failed   //xplat/caffe2:test_lite_interpreter
TESTS PASSED
```

Reviewed By: iseeyuan

Differential Revision: D34900498

fbshipit-source-id: 65743076d43a933c5381ec128d0268f22c0a8441
(cherry picked from commit 457c76c7d1df6050b941c56a8198162e2e4a3388)
2022-04-01 15:05:37 +00:00
Nikolay Korovaiko
5177f95d21 Introducing SymInt to Pytorch (for tracing size arithmetic) (master rebase) (#74861)
Summary:
This PR introduces the `SymInt` type to PyTorch, which will be used by LTC and AOTAutograd for tracing size arithmetic, and adds tests.
`SymInt` is a C++ union structure [int64_t, SymbolicIntNode*] that wraps an int64_t field whose value can be either a real int or an index into a list of `shared_ptr<SymbolicIntNode>`.
This PR doesn't add any support for actually tracing symbolic ints; i.e. data_ for now can only contain real ints.
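
A toy sketch of the tagged-int64 idea described above; the real SymInt's encoding and constants differ, this only shows the shape of the trick:

```cpp
#include <cstdint>

class TaggedInt {
 public:
  explicit TaggedInt(int64_t v) : data_(v) {}
  static TaggedInt symbolic(int64_t table_index) {
    // Values in a reserved range index into a side table of symbolic nodes.
    return TaggedInt(kTagBase + table_index);
  }
  bool is_symbolic() const { return data_ >= kTagBase; }
  int64_t as_int() const { return data_; }                  // if !is_symbolic()
  int64_t table_index() const { return data_ - kTagBase; }  // if is_symbolic()

 private:
  static constexpr int64_t kTagBase = INT64_MAX / 2;
  int64_t data_;
};
```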

```
Goal 1: just to show we can add a type to PyTorch core. (wraps int) LANDEABLE
Finalize the naming - symint
Want the name to be short
Does invoke “size” - NO
SInt/SymInt/SymbolicInt
SInt could mean signed int
sym_int or symint or SymInt (originally it was “int”; capitalized implies object semantics, whereas lowercase implies value semantics)
JIT schema - symint
C++ - symint
```

See more details here: https://docs.google.com/document/d/1iiLNwR5ohAsw_ymfnOpDsyF6L9RTUaHMpD8YLw-jxEw

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74861

Reviewed By: qihqi, ngimel

Differential Revision: D35226230

Pulled By: Krovatkin

fbshipit-source-id: 34acf342bd50fcaa4d8d5dd49c2fd6a98823a5b3
(cherry picked from commit 218643f63ef181cabb92d13a6e837eb64f2dda3c)
2022-03-31 21:59:59 +00:00
Pavithran Ramachandran
6905feea1a Adding versions to flatbuffer schema (#74989)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74989

Adding bytecode and operator version to be serialized in flatbuffer files
ghstack-source-id: 152720235

(Note: this ignores all push blocking failures!)

Test Plan: CI

Reviewed By: iseeyuan

Differential Revision: D35265693

fbshipit-source-id: f47a21036e82c0df3e787e3f330a8140f9c922fc
(cherry picked from commit fc1d9b8dadaf454109a5c9ae583f283b2550ee4e)
2022-03-31 20:26:16 +00:00
Han Qi
75d6cbe605 [4/5]Testing jit module in flatbuffer in Python. (#74387)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74387

Make temporary python bindings for flatbuffer to test ScriptModule save / load.

(Note: this ignores all push blocking failures!)

Test Plan: unittest

Reviewed By: iseeyuan

Differential Revision: D34968080

fbshipit-source-id: d23b16abda6e4b7ecf6b1198ed6e00908a3db903
(cherry picked from commit 5cbbc390c5f54146a1c469106ab4a6286c754325)
2022-03-24 23:29:47 +00:00
Pavithran Ramachandran
fc2cf3d26f Back out "Revert D34805092: Extend _save_for_mobile and _load_for_mobile to support flatbuffer format; Default format is pickle + Change buck targets to support only pickle and pickle + flatbuffer for migration" (#74594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74594

Extending `_save_for_mobile` and `_load_for_mobile` to support the flatbuffer format via an additional optional argument, which defaults to pickle (a usage sketch follows below).

Adding a new binary target with the suffix `_pickle_and_flatbuffer` to help migration.

The size test in D34909502 shows a ~40K size regression, but after removing pickle and comparing lite_predictors we measure the ~120K saving that we will achieve when deprecating pickle and moving to flatbuffer.
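
A sketch of what the extended call looks like; the trailing boolean follows this description, and its exact name and position are assumptions:

```cpp
#include <torch/script.h>

void save_both_formats(torch::jit::Module& m) {
  torch::jit::ExtraFilesMap extra;
  m._save_for_mobile("model.ptl", extra,
                     /*save_mobile_debug_info=*/false,
                     /*use_flatbuffer=*/false);  // pickle, the default
  m._save_for_mobile("model.ff", extra,
                     /*save_mobile_debug_info=*/false,
                     /*use_flatbuffer=*/true);   // flatbuffer opt-in
}
```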

**BEFORE:**

```lang=mermaid
graph TD;
    torch_core-->torch_mobile_deserialize;

    torch_mobile_core-->torch_mobile_deserialize;

    jit_module_saving-->torch_core;
    jit_module_saving-->torch_mobile_core;

    torch_mobile_deserialize-->caffe2_serialize;
    torch_mobile_deserialize-->torch_mobile_module;

    caffe2_serialize-->miniz;

    flatbuffer_loader-->mobile_bytecode;
    flatbuffer_serializer-->mobile_bytecode;

    mobile_bytecode-->flatbuffer_2.0;

    flatbuffer_loader-->torch_mobile_module;
    flatbuffer_serializer-->torch_mobile_module;
```

**AFTER:**
```lang=mermaid
graph TD;
    torch_core-->torch_mobile_deserialize;

    torch_mobile_core-->torch_mobile_deserialize;

    jit_module_saving-->torch_core;
    jit_module_saving-->torch_mobile_core;

    torch_mobile_deserialize-->caffe2_serialize;
    torch_mobile_deserialize-->torch_mobile_module;

    caffe2_serialize-->miniz;

    flatbuffer_loader-->mobile_bytecode;
    flatbuffer_serializer-->mobile_bytecode;

    mobile_bytecode-->flatbuffer_2.0;

    torch_mobile_deserialize_pickle_and_flatbuffer-->|new| flatbuffer_loader;
    torch_mobile_deserialize_pickle_and_flatbuffer-->|new| torch_mobile_deserialize;
    torch_mobile_core_pickle_and_flatbuffer-->|new| torch_mobile_deserialize_pickle_and_flatbuffer;
    torch_core_pickle_and_flatbuffer-->|new| torch_mobile_deserialize_pickle_and_flatbuffer;

    jit_module_saving_pickle_and_flatbuffer-->|new| torch_core_pickle_and_flatbuffer;
    jit_module_saving_pickle_and_flatbuffer-->|new| torch_mobile_core_pickle_and_flatbuffer;

    flatbuffer_serializer-->torch_mobile_module;

    jit_module_saving_pickle_and_flatbuffer-->|new|jit_module_saving;
    jit_module_saving_pickle_and_flatbuffer-->|new|flatbuffer_serializer;

    flatbuffer_loader-->torch_mobile_module;
```

Original commit changeset: 780dfb6fd6ba

Original Phabricator Diff: D34805092 (284b2b7135)
ghstack-source-id: 152044801

(Note: this ignores all push blocking failures!)

Test Plan:
CI

```
~/fbsource/fbcode] cd ~/fbsource/fbcode/ && buck test -c fbcode.caffe2_enable_flatbuffer=1 //caffe2/test/cpp/jit:jit  -- FlatbufferTest.ExtraFiles
Parsing buck files: finished in 0.9 sec
Building: finished in 5.3 sec (100%) 12992/54304 jobs, 0/54304 updated
  Total time: 6.2 sec
More details at https://www.internalfb.com/intern/buck/build/2b387fff-f813-4cfa-b53f-eb2378630d4e
BUILD SUCCEEDED
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: f93a84d6-e7ce-41a0-a97f-0ef3fa6d199d
Trace available for this run at /tmp/tpx-20220323-134108.766518-f93a84d6-e7ce-41a0-a97f-0ef3fa6d199d/trace.log
RemoteExecution session id: reSessionID-f93a84d6-e7ce-41a0-a97f-0ef3fa6d199d-tpx
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/4503599723101693
    ✓ ListingSuccess: caffe2/test/cpp/jit:jit : 486 tests discovered (19.122)
    ✓ Pass: caffe2/test/cpp/jit:jit - FlatbufferTest.ExtraFiles (0.187)
Summary
  Pass: 1
  ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/4503599723101693
```

Similar Build Deps Dags

```
[pavithran@devvm5216.vll0 /data/users/pavithran/fbsource] buck query 'allpaths(//xplat/caffe2:torch_mobile_all_ops_pickle_and_flatbuffer, //xplat/caffe2:torch_mobile_deserialize_pickle_and_flatbuffer)' --output-format dot-compact  | pastry
P486770901: https://www.internalfb.com/intern/paste/P486770901/

[pavithran@devvm5216.vll0 /data/users/pavithran/fbsource] buck query 'allpaths(//xplat/caffe2:torch_mobile_all_ops, //xplat/caffe2:torch_mobile_deserialize)' --output-format dot-compact  | pastry
P486771278: https://www.internalfb.com/intern/paste/P486771278/
```

pickle_and_flatbuffer: https://www.internalfb.com/intern/dgw/graph/?build_id=P486770901
pickle: https://www.internalfb.com/intern/dgw/graph/?build_id=P486771278

Reviewed By: iseeyuan

Differential Revision: D35067157

fbshipit-source-id: 9044259c17a2e0da79bd6aedb28efbdfd57e23e0
(cherry picked from commit f738069ec3a72e79da56172741d027de514e9e5f)
2022-03-24 21:51:05 +00:00
Nikita Shulga
c53b3ed20f Revert D34805092: Extend _save_for_mobile and _load_for_mobile to support flatbuffer format; Default format is pickle + Change buck targets to support only pickle and pickle + flatbuffer for migration
Test Plan: revert-hammer

Differential Revision:
D34805092 (284b2b7135)

Original commit changeset: 57f3fc81d68f

Original Phabricator Diff: D34805092 (284b2b7135)

fbshipit-source-id: 780dfb6fd6ba5f9348f24a2fb3c57971b7155541
(cherry picked from commit bebeb8b84e11c34cbde4857d0e1c291731a7c781)
2022-03-22 22:45:50 +00:00
Pavithran Ramachandran
284b2b7135 Extend _save_for_mobile and _load_for_mobile to support flatbuffer format; Default format is pickle + Change buck targets to support only pickle and pickle + flatbuffer for migration (#74209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74209

Extending `_save_for_mobile` and `_load_for_mobile` to support the flatbuffer format via an additional optional argument, which defaults to pickle.

Adding a new binary target with the suffix `_pickle_and_flatbuffer` to help migration.

The size test in D34909502 shows a ~40K size regression, but after removing pickle and comparing lite_predictors we measure the ~120K saving that we will achieve when deprecating pickle and moving to flatbuffer.

**BEFORE:**

```lang=mermaid
graph TD;
    torch_core-->torch_mobile_deserialize;

    torch_mobile_core-->torch_mobile_deserialize;

    jit_module_saving-->torch_core;
    jit_module_saving-->torch_mobile_core;

    torch_mobile_deserialize-->caffe2_serialize;
    torch_mobile_deserialize-->torch_mobile_module;

    caffe2_serialize-->miniz;

    flatbuffer_loader-->mobile_bytecode;
    flatbuffer_serializer-->mobile_bytecode;

    mobile_bytecode-->flatbuffer_2.0;

    flatbuffer_loader-->torch_mobile_module;
    flatbuffer_serializer-->torch_mobile_module;
```

**AFTER:**
```lang=mermaid
graph TD;
    torch_core-->torch_mobile_deserialize;

    torch_mobile_core-->torch_mobile_deserialize;

    jit_module_saving-->torch_core;
    jit_module_saving-->torch_mobile_core;

    torch_mobile_deserialize-->caffe2_serialize;
    torch_mobile_deserialize-->torch_mobile_module;

    caffe2_serialize-->miniz;

    flatbuffer_loader-->mobile_bytecode;
    flatbuffer_serializer-->mobile_bytecode;

    mobile_bytecode-->flatbuffer_2.0;

    torch_mobile_deserialize_pickle_and_flatbuffer-->|new| flatbuffer_loader;
    torch_mobile_deserialize_pickle_and_flatbuffer-->|new| torch_mobile_deserialize;
    torch_mobile_core_pickle_and_flatbuffer-->|new| torch_mobile_deserialize_pickle_and_flatbuffer;
    torch_core_pickle_and_flatbuffer-->|new| torch_mobile_deserialize_pickle_and_flatbuffer;

    jit_module_saving_pickle_and_flatbuffer-->|new| torch_core_pickle_and_flatbuffer;
    jit_module_saving_pickle_and_flatbuffer-->|new| torch_mobile_core_pickle_and_flatbuffer;

    flatbuffer_serializer-->torch_mobile_module;

    jit_module_saving_pickle_and_flatbuffer-->|new|jit_module_saving;
    jit_module_saving_pickle_and_flatbuffer-->|new|flatbuffer_serializer;

    flatbuffer_loader-->torch_mobile_module;
```
ghstack-source-id: 151744258

Test Plan:
Similar Build Deps Dags

```
[pavithran@devvm5216.vll0 /data/users/pavithran/fbsource] buck query 'allpaths(//xplat/caffe2:torch_mobile_all_ops_pickle_and_flatbuffer, //xplat/caffe2:torch_mobile_deserialize_pickle_and_flatbuffer)' --output-format dot-compact  | pastry
P486770901: https://www.internalfb.com/intern/paste/P486770901/

[pavithran@devvm5216.vll0 /data/users/pavithran/fbsource] buck query 'allpaths(//xplat/caffe2:torch_mobile_all_ops, //xplat/caffe2:torch_mobile_deserialize)' --output-format dot-compact  | pastry
P486771278: https://www.internalfb.com/intern/paste/P486771278/
```

pickle_and_flatbuffer: https://www.internalfb.com/intern/dgw/graph/?build_id=P486770901
pickle: https://www.internalfb.com/intern/dgw/graph/?build_id=P486771278

Reviewed By: iseeyuan

Differential Revision: D34805092

fbshipit-source-id: 57f3fc81d68fce941a050c35bd8e6f05951183b3
(cherry picked from commit 671ae4ed29e65b86ffe507a503548d3e86ab0ea4)
2022-03-22 20:00:53 +00:00
BowenBao
54a6942f8d [ONNX] ONNX Exporter logging (#71342)
Summary:
Add an ONNX exporter logging facility, supporting both C++ and Python logging APIs. Logging can be turned on/off, and the logging output stream can be set to either `stdout` or `stderr`.

A few other changes:
* When an exception is raised in passes, the current IR graph being processed is logged.
* When an exception is raised from `_jit_pass_onnx` (the pass that converts nodes from namespace `ATen` to `ONNX`), both the ATen IR graph and the ONNX IR graph under construction are logged.
* The exception message for ConstantFolding is truncated to avoid being too verbose.
* Update the final printed IR graph with the node name in the ONNX ModelProto as a node attribute. A Torch IR Node does not have a name; adding this to the printed IR graph helps debugging.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71342

Reviewed By: msaroufim

Differential Revision: D34433473

Pulled By: malfet

fbshipit-source-id: 4b137dfd6a33eb681a5f2612f19aadf5dfe3d84a
(cherry picked from commit 67a8ebed5192c266f604bdcca931df6fe589699f)
2022-03-17 19:40:03 +00:00
Han Qi
4b4f652f79 [3/5] Put JIT source inside flatbuffer (#74245)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74245

title

Test Plan: unittest

Reviewed By: iseeyuan

Differential Revision: D34881612

fbshipit-source-id: 7037982e9267ad72b86e91cd5f2d92426d71dd56
(cherry picked from commit 88f34eb55b2bee6ef8ef27188e075fa2b8767fdf)
2022-03-17 18:46:47 +00:00
Han Qi
ded82ad7c7 Create method to map JIT module to (source, constant) and back. (#74119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74119

  Implemented a function to generate source as an ExtraFilesMap plus constants.

  Wrote a function to construct a JIT module given an (ivalue, source,
  constant) triple.

Test Plan: unittest

Reviewed By: pavithranrao

Differential Revision: D34803945

fbshipit-source-id: 2edc798407fe68294cb4c3c7516f5bd143df88c3
(cherry picked from commit 35e54e166b8f0f5cfe8f08c07866b59ae61ee79d)
2022-03-15 18:30:08 +00:00
Dave Bort
6c18a9951b [PyTorchEdge] Start writing magic to flatbuffer output (#74084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74084

Now that the schema includes a magic file header string, write it to the flatbuffer data generated by `flatbuffer_serializer`.
ghstack-source-id: 151109277

Test Plan: A later diff in this stack (D34408538) tests that the output data contains the magic header.

Reviewed By: pavithranrao

Differential Revision: D34809318

fbshipit-source-id: edb45d57e56fa4b30675eb9ce6e4e258abfd5417
(cherry picked from commit f5e8a3ff70eba186ac9e7b91739010e55cd6c5a6)
2022-03-14 23:44:58 +00:00
Han Qi
3e556efc29 regenerate flatbuffer header (#73810)
Summary:
Update the generated flatbuffer header and add it to the clang-format ignore list

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73810

Test Plan: CI

Reviewed By: iseeyuan

Differential Revision: D34652217

Pulled By: qihqi

fbshipit-source-id: fe281afd25d618d2e4852d6b76b813e2fbee0ddc
(cherry picked from commit 095ee360b573506ac946de142bd266b8d3bac58e)
2022-03-11 20:21:56 +00:00
Pavithran Ramachandran
cb4aeff7d8 [easy][PyTorchEdge] Add magic number to flatbuffer schema (#74048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74048

ghstack-source-id: 151064703

Test Plan:
```
Executing in directory: /data/users/pavithran/fbsource
buck build //xplat/caffe2:mobile_bytecode --config client.id=nuclide

DEBUG: /data/users/pavithran/fbsource/tools/build_defs/fbcode_macros/build_defs/lib/cpp_common.bzl:287:14: Using disallowed linker flag 'ANativeActivity_onCreate' in library rule 'fbsource//third-party/toolchains/android-ndk:r18b_native_app_glue'
Parsing buck files: finished in 1.2 sec
Building: finished in 0.4 sec (100%) 1/1 jobs, 1/1 updated
  Total time: 1.7 sec
More details at https://www.internalfb.com/intern/buck/build/ad0db098-e3c1-465c-b69a-3cda4ab9c2ee
BUILD SUCCEEDED
```

Reviewed By: dbort

Differential Revision: D34797167

fbshipit-source-id: f3c115f80951bb11e17163283603aa7877c7c472
(cherry picked from commit 2ded6963c5d57b6c1e5ff15b8fa3b7d81e66bb33)
2022-03-11 02:53:13 +00:00
Janet Yang
99db53eaa7 Jit save/load meta tensors (#73435)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73435

Add support for `torch.jit.save` and `torch.jit.load` of meta tensors, for use in meta-tensor-based XL weights.

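A minimal sketch of the round-trip this enables (parameters on the meta device carry only shape/dtype, no storage):

```
import io
import torch

# Script a module whose parameters live on the meta device.
m = torch.jit.script(torch.nn.Linear(4, 4).to("meta"))

buf = io.BytesIO()
torch.jit.save(m, buf)
buf.seek(0)

loaded = torch.jit.load(buf)
print(loaded.weight.is_meta)  # True
```
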
Test Plan:
```
buck test //caffe2/test:jit -- -r .*save_load_meta_tensors.*
```

Reviewed By: houseroad

Differential Revision: D34479511

fbshipit-source-id: 117ccb12e9e427290a17297204508ec85495e3be
(cherry picked from commit ee9aaaf8208d6c9530c828a4b9f28cf2cca05630)
2022-03-10 19:48:29 +00:00
Han Qi
0723639b60 Revert D34455360: Multisect successfully blamed D34455360 for test failures
Summary:
This diff reverts D34455360 (61d6c43864).
D34455360 (61d6c43864) is causing the test failures listed below; this revert diff is either the revert of the blamed diff itself or the revert of the stack of diffs that must be backed out to revert the blamed diff.

Tests affected:
- https://www.internalfb.com/intern/test/562950004334605/

Multisect link:
https://www.internalfb.com/intern/testinfra/multisect/756170

Test Plan: NA

Reviewed By: zhxchen17

Differential Revision: D34596156

fbshipit-source-id: a465bca0094db3caf6130c80f1ed49eea981359b
(cherry picked from commit ef5e5578c64ce9827570757fb016aafa9c782c6a)
2022-03-08 23:18:54 +00:00
Han Qi
61d6c43864 Make debug_pkl smaller by only emitting unique traces. (#73368)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73368

The debug_pkl file inside PyTorch's .pt file consists of a list of SourceRanges. Each SourceRange points to a Source, which is a stack trace, a filename, and start/end offsets; these are emitted in the debug_pkl file as strings.
Since many SourceRanges share the same Source, the string for each trace can be deduped.
The newer format saves the set of unique traces in a tuple; each SourceRange then stores only the offset of its trace within that tuple (i.e., manually applied dictionary compression; see the sketch below).
This yields a smaller file, but on loading, if we copied each trace into a Source as a string, runtime memory would still blow up.
To mitigate this, we use SourceView instead of Source: it takes a reference to the string inside the Deserializer and exposes it as a string_view. This is safe because the Deserializer is held by the Unpickler via shared_ptr, and the Unpickler is in turn held via shared_ptr by another Source object; that Source object stays alive for the duration of model construction.

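A minimal sketch of the dictionary-compression idea in plain Python (not the serializer's actual code):

```
# Each range is (trace, start, end); many ranges repeat the same trace.
ranges = [("trace_a", 0, 10), ("trace_a", 10, 20), ("trace_b", 0, 5)]

unique = tuple(dict.fromkeys(t for t, _, _ in ranges))  # ordered, deduped
index = {t: i for i, t in enumerate(unique)}
compressed = [(index[t], s, e) for t, s, e in ranges]   # offsets, not strings

# On load, each range looks its trace back up by offset.
restored = [(unique[i], s, e) for i, s, e in compressed]
assert restored == ranges
```
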
Test Plan:
unit test

Took the original file (312271638_930.predictor.disagg.local), loaded it with `torch.jit.load`, and saved it again with `torch.jit.save`. Unzipped both and compared contents:
```
[qihan@devvm5585.vll0 ~]$ du archive -h
4.0K    archive/xl_model_weights
3.7M    archive/extra
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform
8.0K    archive/code/__torch__/caffe2/torch/fb
8.0K    archive/code/__torch__/caffe2/torch
8.0K    archive/code/__torch__/caffe2
20M     archive/code/__torch__/torch/fx/graph_module
20M     archive/code/__torch__/torch/fx
8.0K    archive/code/__torch__/torch/classes
20M     archive/code/__torch__/torch
20M     archive/code/__torch__
20M     archive/code
2.7M    archive/constants
35M     archive
[qihan@devvm5585.vll0 ~]$ du resaved -h
4.0K    resaved/extra
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform
8.0K    resaved/code/__torch__/caffe2/torch/fb
8.0K    resaved/code/__torch__/caffe2/torch
8.0K    resaved/code/__torch__/caffe2
1.3M    resaved/code/__torch__/torch/fx/graph_module
1.3M    resaved/code/__torch__/torch/fx
8.0K    resaved/code/__torch__/torch/classes
1.4M    resaved/code/__torch__/torch
1.4M    resaved/code/__torch__
1.4M    resaved/code
2.7M    resaved/constants
13M     resaved
[qihan@devvm5585.vll0 ~]$
```

Reviewed By: gmagogsfm

Differential Revision: D34455360

fbshipit-source-id: 8cc716f9bba7183746b1b4ecc33a2de34ac503b9
(cherry picked from commit f1a04730fc9ac8fdab6c8e4c44cb5529e42090e4)
2022-03-02 08:37:08 +00:00
BowenBao
abb55c53b3 [ONNX] Make graph name spec-compliant (#71961)
[According to the ONNX spec](https://github.com/onnx/onnx/blob/main/docs/IR.md#names-within-a-graph),
all names must adhere to C90 identifier syntax rules, which means no
dashes.
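
An illustrative sanitizer for that rule (not the exporter's actual code): C90 identifiers may contain letters, digits, and underscores, and must not begin with a digit:

```
import re

def to_c90_identifier(name: str) -> str:
    # Replace every disallowed character, then guard against a leading digit.
    name = re.sub(r"[^A-Za-z0-9_]", "_", name)
    return "_" + name if name[:1].isdigit() else name

print(to_c90_identifier("torch-jit-export"))  # torch_jit_export
```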

Fixes: #30952

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73099
2022-02-24 21:43:56 +00:00
Pavithran Ramachandran
62eb7d64cf [PyTorchEdge] Extend flatbuffer to support extra files map (#72951)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72951

Extend flatbuffer to support extra files map

The flatbuffer schema now includes extra files. Users can write extra files by providing a `map<string, string>`, which becomes part of the flatbuffer model asset and can be loaded back, similar to the pickle path (see the sketch below).
ghstack-source-id: 149622799

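The pickle path already exposes this through the `_extra_files` argument of `torch.jit.save`/`torch.jit.load`; this commit gives the flatbuffer format the same behavior. A sketch against the existing pickle API:

```
import io
import torch

m = torch.jit.script(torch.nn.Linear(2, 2))

buf = io.BytesIO()
torch.jit.save(m, buf, _extra_files={"metadata.json": '{"version": 1}'})
buf.seek(0)

# Pre-populate the keys you want back; load fills in the values.
extra = {"metadata.json": ""}
torch.jit.load(buf, _extra_files=extra)
print(extra["metadata.json"])  # {"version": 1}
```
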
Test Plan:
fb:

```
[pavithran@devvm5216.vll0 ~/fbsource/fbcode] cd ~/fbsource/fbcode/ && buck test caffe2/test/cpp/jit:jit -- FlatbufferTest.ExtraFiles
Parsing buck files: finished in 0.7 sec
Downloaded 0/8 artifacts, 0.00 bytes, 100.0% cache miss (for updated rules)
Building: finished in 20.0 sec (100%) 22343/22343 jobs, 4/22343 updated
  Total time: 20.7 sec
More details at https://www.internalfb.com/intern/buck/build/7dba5034-d623-4a1e-afa1-b0e809df7066
BUILD SUCCEEDED
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: 9c1ac1e0-a8c0-4a62-95df-8f49695aa7d1
Trace available for this run at /tmp/tpx-20220216-144630.207992/trace.log
RemoteExecution session id: reSessionID-9c1ac1e0-a8c0-4a62-95df-8f49695aa7d1-tpx
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/7318349470518809
    ✓ ListingSuccess: caffe2/test/cpp/jit:jit : 468 tests discovered (17.211)
    ✓ Pass: caffe2/test/cpp/jit:jit - FlatbufferTest.ExtraFiles (0.169)
Summary
  Pass: 1
  ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/7318349470518809
```

Reviewed By: iseeyuan

Differential Revision: D34286346

fbshipit-source-id: 4e09ab25b8ed6af6f8923db3aab046c255f13bb8
(cherry picked from commit ce8d88e22a360b25253d8a75f428d523fa88a79a)
2022-02-24 19:39:32 +00:00
Alban Desmaison
3bd1507ff2 Revert D33994011: Make debug_pkl smaller by only emitting unique traces.
Test Plan: revert-hammer

Differential Revision:
D33994011 (3d37f5b052)

Original commit changeset: 8e6224c6e942

Original Phabricator Diff: D33994011 (3d37f5b052)

fbshipit-source-id: 885e739efa1081382e1fcf9c6cccba92c57e9f7a
(cherry picked from commit a6d98c85a736c2eb321a6f38005dd0f5dc43eb87)
2022-02-24 16:38:55 +00:00
Han Qi
3d37f5b052 Make debug_pkl smaller by only emitting unique traces. (#72596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72596

The debug_pkl file inside PyTorch's .pt file consists of a list of SourceRanges. Each SourceRange points to a Source, which is a stack trace, a filename, and start/end offsets; these are emitted in the debug_pkl file as strings.

Since many SourceRanges share the same Source, the string for each trace can be deduped.

The newer format saves the set of unique traces in a tuple; each SourceRange then stores only the offset of its trace within that tuple (i.e., manually applied dictionary compression).

This yields a smaller file, but on loading, if we copied each trace into a Source as a string, runtime memory would still blow up.
To mitigate this, we use SourceView instead of Source: it takes a reference to the string inside the Deserializer and exposes it as a string_view. This is safe because the Deserializer is held by the Unpickler via shared_ptr, and the Unpickler is in turn held via shared_ptr by another Source object; that Source object stays alive for the duration of model construction.

Test Plan:
unit test

Took the original file (312271638_930.predictor.disagg.local), loaded it with `torch.jit.load`, and saved it again with `torch.jit.save`. Unzipped both and compared contents:
```
[qihan@devvm5585.vll0 ~]$ du archive -h
4.0K    archive/xl_model_weights
3.7M    archive/extra
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    archive/code/__torch__/caffe2/torch/fb/model_transform
8.0K    archive/code/__torch__/caffe2/torch/fb
8.0K    archive/code/__torch__/caffe2/torch
8.0K    archive/code/__torch__/caffe2
20M     archive/code/__torch__/torch/fx/graph_module
20M     archive/code/__torch__/torch/fx
8.0K    archive/code/__torch__/torch/classes
20M     archive/code/__torch__/torch
20M     archive/code/__torch__
20M     archive/code
2.7M    archive/constants
35M     archive
[qihan@devvm5585.vll0 ~]$ du resaved -h
4.0K    resaved/extra
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform/splitting
8.0K    resaved/code/__torch__/caffe2/torch/fb/model_transform
8.0K    resaved/code/__torch__/caffe2/torch/fb
8.0K    resaved/code/__torch__/caffe2/torch
8.0K    resaved/code/__torch__/caffe2
1.3M    resaved/code/__torch__/torch/fx/graph_module
1.3M    resaved/code/__torch__/torch/fx
8.0K    resaved/code/__torch__/torch/classes
1.4M    resaved/code/__torch__/torch
1.4M    resaved/code/__torch__
1.4M    resaved/code
2.7M    resaved/constants
13M     resaved
[qihan@devvm5585.vll0 ~]$
```

Reviewed By: JasonHanwen

Differential Revision: D33994011

fbshipit-source-id: 8e6224c6e942e91c3403f686c8f0937d1002ed41
(cherry picked from commit a7014dd4029308c95007f362a57c31796d686647)
2022-02-24 09:31:16 +00:00
Bowen Bao
46123236db [ONNX] Relax sequence tensor dim_param serialization
Do not assign dim_param for the sequence tensor type:
sequences of tensors can differ in dimension sizes.
Use a dimension with neither dim_value nor dim_param set
to denote an unknown dimension (see the sketch below).
Create and assign dim_param only for the normal tensor type.
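
Not this commit's code, but a small onnx-python illustration of the convention it adopts; `None` in the shape list yields a dimension with neither field set:

```
from onnx import TensorProto, helper

# None -> a dimension with neither dim_value nor dim_param (unknown);
# ints set dim_value, strings set dim_param.
vi = helper.make_tensor_value_info("x", TensorProto.FLOAT, [None, 3])
dim0 = vi.type.tensor_type.shape.dim[0]
print(dim0.HasField("dim_value"), dim0.HasField("dim_param"))  # False False
```
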
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70651
2022-02-23 18:22:35 +00:00
CodemodService FBSourceClangFormatLinterBot
97898e5144 [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D34412981

fbshipit-source-id: a7aa81c0c69bf731db37813f431d9f6ed6a6a355
(cherry picked from commit a43ea6d9fc)
2022-02-23 10:29:48 +00:00