Commit Graph

64 Commits

Author SHA1 Message Date
PyTorch MergeBot
564d00f364 Revert "Fix clang-tidy warnings in Caffe2 code (#134935)"
This reverts commit 7cfd23636c.

Reverted https://github.com/pytorch/pytorch/pull/134935 on behalf of https://github.com/izaitsevfb due to breaks internal builds, caffe2 is still used internally ([comment](https://github.com/pytorch/pytorch/pull/134935#issuecomment-2349368152))
2024-09-13 16:42:37 +00:00
cyy
7cfd23636c Fix clang-tidy warnings in Caffe2 code (#134935)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134935
Approved by: https://github.com/ezyang
2024-09-12 03:27:09 +00:00
PyTorch MergeBot
ccbac091d2 Revert "Add write_record_metadata to PyTorchFileWriter (#125184)"
This reverts commit dd92637f44.

Reverted https://github.com/pytorch/pytorch/pull/125184 on behalf of https://github.com/izaitsevfb due to breaks internal builds, see D56962076 ([comment](https://github.com/pytorch/pytorch/pull/125184#issuecomment-2094976897))
2024-05-05 22:40:00 +00:00
Mikayla Gawarecki
dd92637f44 Add write_record_metadata to PyTorchFileWriter (#125184)
Add `PyTorchFileWriter.write_record_metadata(record_name, num_bytes)` that
- writes the zipfile header/end of central directory metadata for an entry*
- reserves `num_bytes` in the zipfile for the payload.

*Since the payload is not provided, the CRC32 computation is skipped and zeros are written in the corresponding entry of the zipfile header.
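A minimal usage sketch (the output path and record name are illustrative; `torch._C.PyTorchFileWriter` is the Python binding this PR extends):
```
import torch

# Reserve 1 KiB for a record without providing the payload; since no bytes
# are hashed, the CRC32 field of the entry is left as zeros.
writer = torch._C.PyTorchFileWriter("out.pt")
writer.write_record_metadata("data/0", 1024)
writer.write_end_of_file()
```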

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125184
Approved by: https://github.com/albanD
2024-05-03 07:29:52 +00:00
Quinn Zhu
3993771617 Expose recordSize in ChunkRecordIterator (#120239)
Summary: Add a public method to read recordSize in ChunkRecordIterator

Test Plan: ci

Differential Revision: D53931944

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120239
Approved by: https://github.com/zoranzhao
2024-02-21 04:33:03 +00:00
Zhijing Li (Accelerator Enablement)
55971c5c4e Enable concurrent reader for getRecord function (#112818)
Summary:
Use multiple concurrent readers to access a record from different start indices. This can provide better performance when the data being accessed is large.
bypass-github-pytorch-ci-checks

Test Plan:
```
buck2 run @//mode/dev //caffe2/caffe2/serialize:inline_container_test
```

Reviewed By: YazhiGao

Differential Revision: D50957607

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112818
Approved by: https://github.com/houseroad, https://github.com/huydhn
2023-11-03 22:55:27 +00:00
PyTorch MergeBot
2d5fec4d59 Revert "Enable concurrent reader for getRecord function (#111426)"
This reverts commit 12a6f5aa6b.

Reverted https://github.com/pytorch/pytorch/pull/111426 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/111426#issuecomment-1791733096))
2023-11-03 00:22:21 +00:00
Zhijing Li (Accelerator Enablement)
12a6f5aa6b Enable concurrent reader for getRecord function (#111426)
Summary:
Zion-4s cores have poor performance when reading large tensors (e.g. 300 GB), whether downloading from Manifold or reading from files. In this diff, I changed the getRecord function from single-threaded to multi-threaded by passing multiple readers to getRecord and accessing the same record at different chunks with different readers.
We control the number of additional readers with the `sigrid_model_manager_additional_reader` flag. The default value is 0. When `additional_reader=2`, we allocate 2 extra read-client threads.
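Conceptually the chunked concurrent read looks like the Python sketch below (PyTorch's actual implementation is in C++ in caffe2/serialize; the helper names here are illustrative):
```
import concurrent.futures
import os

def read_chunk(path, offset, length):
    # Each worker opens its own handle, playing the role of one "reader".
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def concurrent_get_record(path, additional_readers=2):
    size = os.path.getsize(path)
    n = 1 + additional_readers
    chunk = (size + n - 1) // n  # split the record into n chunks
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        parts = pool.map(lambda i: read_chunk(path, i * chunk, chunk), range(n))
    return b"".join(parts)
```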
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111426
Approved by: https://github.com/jiayisuse
2023-11-02 22:07:04 +00:00
Lujia Zhang
a6fadf643f Re-do D48544397: [TGIF Inplace] [xlv2][1/n] Expose a couple APIs from inline_container that will be used for chunk read (#109183)
Summary:
Original commit changeset: 4a5f31518ad0

Original Phabricator Diff: D48544397

fix easycla

Differential Revision: D49221088

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109183
Approved by: https://github.com/wqfish
2023-09-14 08:17:14 +00:00
Lujia Zhang
b897c57d47 [TGIF][Inplace][Perf] Copy tensor to device with pinned memory & move copy weight sleep to getRecord (#106849)
Summary:
There are 2 changes in this diff that help optimize perf during inplace update:
1. Read data with pinned memory (see the sketch after this list)
2. Move the copy-weight sleep from between whole-tensor copies to between chunk copies
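Point 1 is the standard pinned-memory pattern; a hedged Python illustration (not the TGIF code itself):
```
import torch

# Pinned (page-locked) host memory enables asynchronous host-to-device
# copies, so chunked weight copies overlap better with other work.
cpu_chunk = torch.empty(1 << 20, pin_memory=True)
gpu_chunk = cpu_chunk.to("cuda", non_blocking=True)  # requires a CUDA device
```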

Test Plan:
**Local Test**
```
./ai_infra/inference_platform/test_platform/script/run_sigrid_4card.sh --port 7451 --local_model_dir /home/lujia/script --cuda_devices 6 --bind_node 3 --model_id 962549778_514 --gflag_config_path sigrid/predictor/predictor_x_gflags_mrs_prospector_gpu_torchscript_fusedsolution_1card_opt_fm -- --enable_thrift_warmup=false --tgif_replicate_merge_by_tempfile=false --enable_inplace_snapshot_transition --model_version_config_path sigrid/predictor/models_version/lujia_test --inplace_update_max_retries 0 --submod_to_device="merge|cuda0"
```

**Load test on job  tsp_eag/smart/inference_platform_sp__sigrid_predictor_gpu_adhoc_realtimetest_m962549778_latest.s3**

Before: p99 latency {F1066957232}, SR error rate {F1066957650}
After: p99 latency {F1066957141}, SR error rate {F1066957376}

Differential Revision: D48182533

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106849
Approved by: https://github.com/842974287, https://github.com/kit1980
2023-08-13 07:37:46 +00:00
atannous
149237415f Using deterministic hashing instead of GUID for pytorch serialization id generation (#101964)
Summary:
serialization_id was added in a previous change as a random GUID written each time a module is saved, for the purpose of tracking saved artifacts. So as not to disturb existing systems that rely on the serialized bytes being deterministic when the same module is serialized, this change uses a combined hash of the uncompressed content and the file names instead of a GUID for the serialization id.
This hashing reuses the same CRC32 that is already calculated for zip writing, so it incurs no additional computational overhead.

The data descriptor is one of the file headers inside the zip format (https://en.wikipedia.org/wiki/ZIP_(file_format)#Data_descriptor). It contains the CRC32 of the uncompressed data. By inspecting the data written by PyTorchStreamWriter, the CRC32 can be found for each written record.
To make serialization_id a unique and deterministic id for the serialized files without computation overhead, the updated `serialization_id` is computed based on all files written, and is composed of:
1) a combined hash of the record name hashes
2) a combined CRC32 of the record uncompressed data

Example value: "15656915541136177431866432772"
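A hedged sketch of the idea (not PyTorch's exact algorithm or formatting): fold the record names and the per-record CRC32s, which the zip writer already computes, into one deterministic id:
```
import zlib

def combined_serialization_id(records):
    """records: iterable of (name, uncompressed_bytes) pairs."""
    name_hash = 0
    data_crc = 0
    for name, data in records:
        name_hash = zlib.crc32(name.encode("utf-8"), name_hash)
        data_crc = zlib.crc32(data, data_crc)  # CRC32 is already needed for zip
    return f"{name_hash}{data_crc}"
```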

Test Plan: buck2 test @//mode/dev //caffe2/caffe2/serialize:inline_container_test

Differential Revision: D46038973

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101964
Approved by: https://github.com/davidberard98
2023-05-23 20:47:30 +00:00
atannous
3ed1569e86 Adding serialization ID to inline container (#100994)
Summary:
In order to better track models after serialization, this change writes a serialization_id as a UUID to the inline container. Having this ID enables traceability of a model across saving and loading events.
serialization_id is generated as a new UUID every time serialization takes place. It can be thought of as a model snapshot identifier at the time of serialization.

Test Plan:
```
buck2 test @//mode/dev //caffe2/caffe2/serialize:inline_container_test
```

Local tests:
```
buck2 run @//mode/opt //scripts/atannous:example_pytorch_package
buck2 run @//mode/opt //scripts/atannous:example_pytorch
buck2 run @//mode/opt //scripts/atannous:example_pytorch_script
```

```
$ unzip -l output.pt
Archive:  output.pt
  Length      Date    Time    Name
---------  ---------- -----   ----
       36  00-00-1980 00:00   output/.data/serialization_id
      358  00-00-1980 00:00   output/extra/producer_info.json
       58  00-00-1980 00:00   output/data.pkl
      261  00-00-1980 00:00   output/code/__torch__.py
      326  00-00-1980 00:00   output/code/__torch__.py.debug_pkl
        4  00-00-1980 00:00   output/constants.pkl
        2  00-00-1980 00:00   output/version
---------                     -------
     1045                     7 files
```

```
unzip -p output.pt "output/.data/serialization_id"
a9f903df-cbf6-40e3-8068-68086167ec60
```

Differential Revision: D45683657

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100994
Approved by: https://github.com/davidberard98
2023-05-17 17:08:48 +00:00
Hongyi Jia
23a095ca5f Chunked inplace weight loading API (#100615)
Chunk the inplace memory writes to further reduce memory usage.

Reviewed By: zyan0

Differential Revision: D45506186

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100615
Approved by: https://github.com/davidberard98
2023-05-04 17:41:18 +00:00
Hongyi Jia
f558bb6f76 inplace PyTorchStreamReader getRecord() (#100418)
Summary: Sometimes we want getRecord to write into pre-allocated memory to save CPU memory. This adds a new API to support inplace memory writing.
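The Python analogue of the inplace pattern (illustrative only; the new API itself is C++ in inline_container):
```
record_size, record_offset = 1024, 0  # illustrative values
buf = bytearray(record_size)          # caller-owned, pre-allocated buffer
with open("model.pt", "rb") as f:     # path is illustrative
    f.seek(record_offset)
    n = f.readinto(buf)               # reads directly into buf, no extra copy
```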

Test Plan: caffe2/serialize/inline_container_test

Reviewed By: zyan0

Differential Revision: D45439517

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100418
Approved by: https://github.com/davidberard98, https://github.com/houseroad
2023-05-04 01:30:59 +00:00
Han Qi
b8ba4802fe Add an option to skip loading of debug traces (#91430)
Summary:
Debug traces consume a lot of memory, especially for small models.

Test Plan:
Unit test

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91430
Approved by: https://github.com/davidberard98
2022-12-29 22:53:17 +00:00
Nikita Shulga
caaf37a111 Fix PyTorchStreamWriter exception handling (#88128)
Avoid a double exception in the destructor when attempting to serialize to a
Python object that does not have a `write` method.

Use the `Finalizer` class in `PyTorchStreamWriter::writeEndOfFile()` to
always set the `finalized_` property even if an exception occurs (as there
isn't much one can do at that point).

Add an explicit check for the attribute to `_open_zipfile_writer_buffer` and
add unit tests.

Modernize the code a bit by using the Python 3 `super()` call.

Fixes https://github.com/pytorch/pytorch/issues/87997
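A hedged illustration of the failure mode being fixed (assuming the explicit check added to `_open_zipfile_writer_buffer` raises `AttributeError`):
```
import torch

class NotWritable:
    pass  # no write() method

try:
    torch.save(torch.ones(3), NotWritable())
except AttributeError as err:
    # A clean error instead of a double exception in the writer's destructor.
    print(err)
```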

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88128
Approved by: https://github.com/albanD
2022-10-31 23:38:03 +00:00
Tugsbayasgalan Manlaibaatar
b4b60c2a2e Get rid of ENABLE_UPGRADERS macro (#77574)
It's been a while since we merged the upgrader design and we haven't encountered any issues, so let's get rid of the macro that guarded its safe rollout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77574
Approved by: https://github.com/gmagogsfm
2022-08-09 05:33:14 +00:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
b0fdca8855 Bump version number to 7 and compile old operators with old schema (#68358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68358

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D33433730

Pulled By: tugsbayasgalan

fbshipit-source-id: 202c58365bae13195d3545cefcb0da9162b02151
2022-01-05 23:57:22 -08:00
Michael Suo
0ece9a49d7 Revert D33198155: Bump version number to 7 and compile old operators with old schema
Test Plan: revert-hammer

Differential Revision:
D33198155 (d35fc409ad)

Original commit changeset: 38a1185f9ecb

Original Phabricator Diff: D33198155 (d35fc409ad)

fbshipit-source-id: 411aaeb4e047aad9202db50d4d0f2ff35bc51f9d
2022-01-04 13:44:59 -08:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
d35fc409ad Bump version number to 7 and compile old operators with old schema (#68358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68358

Test Plan: Imported from OSS

Reviewed By: samdow

Differential Revision: D33198155

Pulled By: tugsbayasgalan

fbshipit-source-id: 38a1185f9ecb34a33f737ad0b060b3490956300c
2022-01-04 01:31:25 -08:00
Michael Suo
f02cfcc802 ban PyTorchStreamWriter from writing the same file twice (#61805)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61805

Similar in spirit to https://github.com/pytorch/pytorch/pull/61371.
While writing two files with the same name is allowed by the ZIP format,
most tools (including our own) handle this poorly. Previously I banned
this within `PackageExporter`, but that doesn't cover other uses of the
zip format like TorchScript.

Given that there are no valid use cases, and that debugging issues caused by
multiple file writes is fiendishly difficult, we ban this behavior entirely.

Differential Revision: D29748968

Test Plan: Imported from OSS

Reviewed By: Lilyjjo

Pulled By: suo

fbshipit-source-id: 0afee1506c59c0f283ef41e4be562f9c22f21023
2021-07-19 18:23:43 -07:00
Lillian Johnson
b72a72a477 torch.Package extend PyTorchStreamWriter to track written records (#52218)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52218

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26429794

Pulled By: Lilyjjo

fbshipit-source-id: 5f68e7991c673ada629d0370c705520243d0637a
2021-02-22 15:02:41 -08:00
Zachary DeVito
60518d10f6 [deploy] torch::deploy API (#51754)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51754

This API allows you to manage multiple python interpreters in a single
process to deploy PyTorch models packaged with torch.package.

torch/csrc/deploy/deploy.h contains the API definition
torch/csrc/deploy/test_deploy.cpp has some examples.

Notes:
* mutex is added to PyTorchStreamReader to make it safe to use from multiple threads at once.
* USE_DEPLOY is only true for the special libtorch_deployinterpreter.so library; when enabled
  we use a hash table to maintain the PyObject <> at::Tensor mapping rather than the internal pointer
  in Tensor, since more than one interpreter may have a reference to the tensor.
* serialization.py has some additional functions for creating pickle objects
  while keeping storages in memory, for use in transferring tensors between interpreters

Test Plan: Imported from OSS

Reviewed By: wconstab

Differential Revision: D26329468

Pulled By: zdevito

fbshipit-source-id: d75f4ebb9a27f1d911179d9996041bcb3ca04a07
2021-02-18 02:30:08 -08:00
Martin Yuan
46afd7fc9f [PyTorch] Decouple version numbers from c10 and caffe2 targets (#49905)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49905

There's a size regression in model delivery in D25682312. Only the model version numbers are used, yet the dependency on the entire c10 library (128 KB) is pulled in.

This diff decouples the version numbers into a separate header file, versions.h. Targets that refer only to version numbers can then depend on ```caffe2:version_headers```.
ghstack-source-id: 119161467

Test Plan: CI

Reviewed By: xcheng16, guangyfb

Differential Revision: D25716601

fbshipit-source-id: 07634bcf46eacfefa4aa75f2e4c9b9ee30c6929d
2020-12-30 15:34:01 -08:00
Jane Xu
71ca600af9 Renaming CAFFE2_API to TORCH_API (#49496)
Summary:
Since caffe2 and torch have been consolidated, CAFFE2_API should be merged with TORCH_API. Addresses a TODO.

Manually edited some references of the removed `CAFFE2_API`:
* `CONTRIBUTING.md`
* `caffe2/proto/CMakeLists.txt`
* `cmake/ProtoBuf.cmake`
* `c10/macros/Export.h`
* `torch/csrc/WindowsTorchApiMacro.h`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49496

Reviewed By: malfet, samestep

Differential Revision: D25600726

Pulled By: janeyx99

fbshipit-source-id: 7e068d959e397ac183c097d7e9a9afeca5ddd782
2020-12-18 10:54:50 -08:00
Martin Yuan
2b61e4d84c Revert D25152559: T66557700 Support default argument values of a method
Test Plan: revert-hammer

Differential Revision:
D25152559 (6bde0ca6d3)

Original commit changeset: bbf52f1fbdbf

fbshipit-source-id: 592fdb3078b1ac86cd394adc6c1bfd6b10d829e1
2020-12-17 14:05:49 -08:00
Frank Seide
6bde0ca6d3 T66557700 Support default argument values of a method (#48863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48863

Support default arguments when invoking a module via PyTorch Lite (`mobile::Module`).

Test Plan:
buck test mode/dbg //caffe2/test/cpp/jit:jit -- LiteInterpreterTest.MethodInvocation

buck test mode/dbg caffe2/test:mobile -- test_method_calls_with_optional_arg

Reviewed By: raziel, iseeyuan

Differential Revision: D25152559

fbshipit-source-id: bbf52f1fbdbfbc6f8fa8b65ab524b1cd4648f9c0
2020-12-16 15:55:03 -08:00
Liang Liu
19f4c5110e Add another torch::jit::load API to load PyTorch model with shared_ptr PyTorchStreamReader input (#48802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48802

The current torch::jit::load API only supports a unique_ptr ReadAdapterInterface input, but in some cases torch::jit::load may not be the only consumer of the reader adapter. This diff enables an overload of torch::jit::load that loads from a shared_ptr PyTorchStreamReader.

Reviewed By: malfet, houseroad

Differential Revision: D25241904

fbshipit-source-id: aa403bac9ed820cc0e94342aebfe524a1d5bf913
2020-12-06 18:09:25 -08:00
Gao, Xiang
5e97f251a8 Enable TF32 support for cuDNN (#40737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40737

Reviewed By: mruberry

Differential Revision: D22801525

Pulled By: ngimel

fbshipit-source-id: ac7f7e728b4b3e01925337e8c9996f26a6433fd2
2020-09-01 15:34:24 -07:00
Dmytro Dzhulgakov
cbdaa20c88 [serialize] Expose zip file alignment calculation functions (#43531)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43531

It's useful for building tooling out of tree that manipulates zip files in a PyTorch-y way.

Test Plan: contbuild

Reviewed By: houseroad

Differential Revision: D23277361

fbshipit-source-id: e15fad20e792d1e41018d32fd48295cfe74bea8c
2020-08-25 02:32:58 -07:00
Martin Yuan
93f1b5c8da Mobile backward compatibility (#42413)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42413

When a default argument is added, it does not break backward compatibility (BC) for full JIT, but it does break BC for mobile bytecode. For example, https://github.com/pytorch/pytorch/pull/40737. To keep bytecode backward compatible in this case, we:

1. Introduce kMinSupportedBytecodeVersion. The loaded model version must be between kMinSupportedBytecodeVersion and kProducedBytecodeVersion (see the sketch after this list).
2. If an operator is updated and we can handle BC, bump kProducedBytecodeVersion (for example, from 3 to 4).
3. If the model version corresponds to the older version of the operator, add an adapter function at load time. For the added default arg, we push the default value onto the stack before calling the actual operator function.
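A sketch of the version gate in point 1 (constant names mirror the C++ ones; the values are the example from point 2, not authoritative):
```
K_MIN_SUPPORTED_BYTECODE_VERSION = 3
K_PRODUCED_BYTECODE_VERSION = 4

def check_model_version(model_version: int) -> None:
    # Reject models outside the [min supported, produced] window.
    if not (K_MIN_SUPPORTED_BYTECODE_VERSION
            <= model_version
            <= K_PRODUCED_BYTECODE_VERSION):
        raise RuntimeError(
            f"bytecode version {model_version} is outside the supported range "
            f"[{K_MIN_SUPPORTED_BYTECODE_VERSION}, {K_PRODUCED_BYTECODE_VERSION}]")
```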

Test Plan: Imported from OSS

Reviewed By: xcheng16

Differential Revision: D22898314

Pulled By: iseeyuan

fbshipit-source-id: 90d339f8e1365f4bb178db8db7c147390173372b
2020-08-21 15:45:52 -07:00
Martin Yuan
131a0ea277 Add version number to bytecode. (#36439)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36439

A proposal of versioning in bytecode, as suggested by dzhulgakov in the internal post: https://fb.workplace.com/groups/pytorch.mobile.work/permalink/590192431851054/

kProducedBytecodeVersion is added. If the model version is not the same as the number in the code, an error will be thrown.

The updated bytecode looks like the example below. It's a tuple of elements, where the first element is the version number.
```
(3,
 ('__torch__.m.forward',
  (('instructions',
    (('STOREN', 1, 2),
     ('DROPR', 1, 0),
     ('MOVE', 2, 0),
     ('OP', 0, 0),
     ('RET', 0, 0))),
   ('operators', (('aten::Int', 'Tensor'),)),
   ('constants', ()),
   ('types', ()),
   ('register_size', 2))))
```

Test Plan: Imported from OSS

Differential Revision: D22433532

Pulled By: iseeyuan

fbshipit-source-id: 6d62e4abe679cf91a8e18793268ad8c1d94ce746
2020-07-08 12:30:58 -07:00
Mike Ruberry
e66445878d Adds dynamic versioning pattern (#40279)
Summary:
BC NOTE:

This change makes it so modules saved with torch.jit.save in PyTorch 1.6 can be loaded by previous versions of PyTorch unless they use torch.div or (soon) torch.full. It also lets tensors saved using torch.save be loaded by previous versions. So this is the opposite of BC-breaking, but I'm using that label to highlight this issue since we don't have a "BC-improving" label.

PR NOTE:
When an operator's semantics change in PyTorch we want to do two things:

1) Preserve the semantics of older serialized Torchscript programs that use the operator
2) Ensure the new semantics are respected

Historically, this meant writing a Versioned Symbol that would remap older versions of the operator into current PyTorch code (1), and bumping the produced file format version (2). Unfortunately, bumping the produced file format version is a nuclear option for ensuring semantics are respected, since it also prevents older versions of PyTorch from loading anything (even tensors!) from newer versions.

Dynamic versioning addresses the nuclear consequences of bumping the produced file format version by only bumping it when necessary. That is, when an operator with changed semantics is detected in the serialized Torchscript. This will prevent Torchscript programs that use the changed operator from loading on earlier versions of PyTorch, as desired, but will have no impact on programs that don't use the changed operator.

Note that this change is only applicable when using torch.jit.save and torch.jit.load. torch.save pickles the given object using pickle (by default), which saves a function's Python directly.

No new tests for this behavior are added since the existing tests for versioned division in test_save_load already validate that models with div are loaded correctly at version 4.
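In pseudocode, the policy amounts to something like the sketch below (the version numbers match the div and torch.full bumps described elsewhere in this log, but the table and function are hypothetical, not the real loader code):
```
BASE_FILE_FORMAT_VERSION = 3
# Hypothetical table: operators with changed semantics mapped to the minimum
# file format version that encodes the new behavior.
CHANGED_OPS = {"aten::div": 4, "aten::full": 5}

def produced_version(ops_in_program):
    # Only bump past the baseline when a changed operator is actually used,
    # so programs without those ops stay loadable on older PyTorch versions.
    return max([BASE_FILE_FORMAT_VERSION] +
               [CHANGED_OPS[op] for op in ops_in_program if op in CHANGED_OPS])
```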
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40279

Reviewed By: dzhulgakov

Differential Revision: D22168291

Pulled By: mruberry

fbshipit-source-id: e71d6380e727e25123c7eedf6d80e5d7f1fe9f95
2020-06-24 12:52:50 -07:00
Mike Ruberry
cb26661fe4 Throws runtime error when torch.full would infer a float dtype from a bool or integral fill value (#40364)
Summary:
BC-breaking NOTE:

In PyTorch 1.6, bool and integral fill values given to torch.full must set the dtype or out keyword arguments. In prior versions of PyTorch these fill values would return float tensors by default, but in PyTorch 1.7 they will return a bool or long tensor, respectively. The documentation for torch.full has been updated to reflect this.

PR NOTE:

This PR causes torch.full to throw a runtime error when it would have inferred a float dtype by being given a boolean or integer value. A versioned symbol for torch.full is added to preserve the behavior of already serialized Torchscript programs. Existing tests for this behavior being deprecated have been updated to reflect it now being unsupported, and a couple new tests have been added to validate the versioned symbol behavior. The documentation of torch.full has also been updated to reflect this change.
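The behavior change, illustrated (version notes as described above):
```
import torch

torch.full((2,), 1.0)                  # float fill value: infers a float tensor
torch.full((2,), 1, dtype=torch.long)  # explicit dtype: always fine
# torch.full((2,), 1)   # RuntimeError in 1.6; returns a long tensor in 1.7+
```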
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40364

Differential Revision: D22176640

Pulled By: mruberry

fbshipit-source-id: b20158ebbcb4f6bf269d05a688bcf4f6c853a965
2020-06-23 23:27:22 -07:00
Mike Ruberry
3d8de74e17 Bumps readable file format version for torch.full inferring float from int values (#40089)
Summary:
Reserves file format version 5 for marking when torch.full(int)->FloatTensor will be deprecated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40089

Differential Revision: D22066359

Pulled By: mruberry

fbshipit-source-id: 6158e03ca75e3795a2641123ff23d67975170f44
2020-06-16 15:09:40 -07:00
Mike Ruberry
95489b590f Throws runtime error when performing integer division using torch.div (#38620)
Summary:
**1.6 Deprecation Note**

In PyTorch 1.6 attempting to divide two integer tensors or an integer tensor and an integer scalar will throw a runtime error. This behavior was deprecated with a warning in PyTorch 1.5. In PyTorch 1.7 torch.div and the division operator will always perform true division like Python3 and NumPy.

To divide integer values use either torch.true_divide, for true division, or torch.floor_divide (the // operator) for floor division.
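The recommended replacements, illustrated:
```
import torch

a, b = torch.tensor(5), torch.tensor(2)
torch.true_divide(a, b)   # tensor(2.5000): true division
torch.floor_divide(a, b)  # tensor(2): floor division, same as the // operator
# torch.div(a, b)  # RuntimeError in 1.6 for integer inputs
```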

**PR Summary**

This PR updates the warning message when performing integer division to be a runtime error. Because some serialized Torchscript programs may rely on torch.div's historic behavior it also implements a "versioned symbol" for div that lets those models retain their current behavior. Extensive tests of this behavior are the majority of this PR.

Note this change bumps the produced file format version to delineate which programs should have their historic div behavior preserved.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38620

Differential Revision: D21612598

Pulled By: mruberry

fbshipit-source-id: c9c33591abce2f7e97f67f0f859901f5b03ed47d
2020-06-10 13:59:34 -07:00
Mike Ruberry
7d56ef27ee Bumps supported file format in anticipation of torch.div changes (#39529)
Summary:
See https://github.com/pytorch/pytorch/pull/38620 for additional context.

When PyTorch begins producing file format 4 with the updated div behavior, it's safe for older PyTorch versions to consume it, since file format 4 only prohibits functionality. Bumping the supported file format version now gives PyTorch users on master some leeway in updating services that consume PyTorch files versus those that produce them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39529

Differential Revision: D21886790

Pulled By: mruberry

fbshipit-source-id: d6098eff06c26f18c3fac5cc85e5db298ba86e27
2020-06-04 19:34:00 -07:00
davidriazati
2ec6a30722 Bump produced file format version (#36085)
Summary:
This was left off of #35741, but the max supported file format change
has been landed for several weeks, so this should be fine to land.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36085

Pulled By: driazati

Reviewed By: eellison

Differential Revision: D20875051

fbshipit-source-id: c3b84c85d791cb6f286a2ed38ca5cd1219b332b2
2020-04-09 22:52:49 -07:00
davidriazati
23b2fba79a [jit] Add type tags to lists/dicts in pickle (#33255)
Summary:
Stacked PRs
 * #33474 - [jit] Remove list specializations from pickler
 * **#33255 - [jit] Add type tags to lists/dicts in pickle**

This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255

Pulled By: driazati

Differential Revision: D20346780

fbshipit-source-id: c8534954ef4adb2e3c880401acbee30cd284f3db
2020-03-10 19:17:01 -07:00
Leah Dickstein
c5e822b7bb Back out "[jit] Add type tags to lists/dicts in pickle" (#34406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34406

Pull Request resolved: https://github.com/pytorch/pytorch/pull/34405

Original commit changeset: 2f1826e6679a

Test Plan: reverting, see S197156

Reviewed By: akyrola, volkhin

Differential Revision: D20317456

fbshipit-source-id: 89298a9c022edba1d54bcdc7541804cb919e33f5
2020-03-06 20:02:16 -08:00
davidriazati
99e211e661 [jit] Add type tags to lists/dicts in pickle (#33255)
Summary:
Stacked PRs
 * #33474 - [jit] Remove list specializations from pickler
 * **#33255 - [jit] Add type tags to lists/dicts in pickle**

This adds a global call to `torch.jit._pickle.restore_type_tags` for
lists and dicts so that we can preserve their types after serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33255

Pulled By: driazati

Reviewed By: xman1979, Tianshu-Bao

Differential Revision: D19868637

fbshipit-source-id: 2f1826e6679a786ca209198690269f399a542c04
2020-03-03 16:48:21 -08:00
Michael Ranieri
9b2b15f4fc misc windows warning fixes (#33632)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33632

* `inline_container.h` was unnecessarily exposing all includers to caffe2 headers via `caffe2/core/logging.h`
* Add the MSVC version of the unused-warning suppression.
* Make sure clang on Windows does not use MSVC pragmas.
* Don't redefine the math macro.

Test Plan: CI green

Differential Revision: D20017046

fbshipit-source-id: 230a9743eb88aee08d0a4833680ec2f01b7ab1e9
2020-02-21 19:36:25 -08:00
Zachary DeVito
7a2889b014 Stop producing op_version_set version numbers.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28122

Test Plan: Imported from OSS

Differential Revision: D17959565

Pulled By: zdevito

fbshipit-source-id: 701101bd870700eb0c9882c69e2cfdd2524b555e
2019-12-04 19:14:43 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
David Riazati
8c6f0c0587 Detect TorchScript archives in torch.load (#29339)
Summary:
This PR makes `torch.load` look for a `constants.pkl` file at the top level
of a zip file. If one is found, it calls `torch.jit.load` instead and issues
a warning telling the user to call `torch.jit.load` directly.
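A hedged sketch of the detection (TorchScript archives store entries under a single top-level directory, e.g. `archive/constants.pkl`; the function name is illustrative):
```
import zipfile

def looks_like_torchscript(path):
    with zipfile.ZipFile(path) as zf:
        # One path component for the archive root, then constants.pkl.
        return any(name.count("/") == 1 and name.endswith("/constants.pkl")
                   for name in zf.namelist())
```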
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29339

Differential Revision: D18611095

Pulled By: driazati

fbshipit-source-id: f070a02f6b5509054fc3876b3e8356bbbcc183e1
2019-11-22 12:30:30 -08:00
Jeremy Lilley
e80f7506c2 In torch::save(), make padding computation faster. (#29425)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29425

This change saves roughly 5-6% in the TorchSaveSmallTensor benchmark
(torch::save() on a tensor with 64 random floats) by reusing the
padding string across records.
ghstack-source-id: 93517961

Test Plan:
Correctness: buck test mode/dev-nosan caffe2/test/...
   Benchmark buck build mode/opt experimental/jeremyl/c2/...
     buck-out/opt/gen/experimental/jeremy/c2/SerializationBench

Differential Revision: D18385731

fbshipit-source-id: 20bcbe1efd2fb7e3012dd68080542f2a74a7d4f2
2019-11-08 15:03:25 -08:00
Jeremy Lilley
ac61adb5ef String opts related to deserialization. (#28263)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28263

When looking at profiles of deserializing small data from torch::load(),
we found some straightforward string-related changes that in aggregate
improve the base time by 25%.

One of the main problems was over-use of std::stringstream: the
constructors alone were 18%+ of the time spent. This change improves
unpickling/deserialization by converting a handful of the hottest
use cases from the profiles:

 - unpickler's readString() goes from 10.3% of the time to mostly out of the picture
 - the QualifiedName constructor (particularly the Join call) was 8.9% of the time,
   but afterwards disappears from the profiles
 - getRecordID/hasRecord were ~5% each, but also get somewhat smaller.
ghstack-source-id: 92158727

Test Plan:
Benchmark in buck build mode/opt experimental/jeremyl/c2:SerializationBench
  Correctness in buck test mode/dev-nosan caffe2/test/...

Differential Revision: D17997056

fbshipit-source-id: fc6d6c7da7557ff23c8e8c7dbe4c060abf860018
2019-10-18 07:36:17 -07:00
Zachary DeVito
58ed8ca9e1 clean up exported source format (#28129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28129

The previous PR in the stack removed the need to order classes/functions
or have correct import statements. This resolved circular dependency issues
that can arise when class constructors like ModuleList put new instances
of themselves in a common namespace.

This PR changes our export format to no longer produce this information.
By doing so we can make the logic significantly simpler, since we just
keep track of an individual PythonPrint object per file.

Notes:
* PythonPrint was changed to manage its own stream/list of ranges. It
was doing this anyway internally; this just makes the API clearer.
* Since we are changing the serialization format, I also removed op_version_set.
It is now replaced with the VERSION number that is written in the zip archive.
This further simplifies the code emission process.
* A test of op_version_set was removed since there is no longer any behavior
to test.

Test Plan: Imported from OSS

Differential Revision: D17961610

Pulled By: zdevito

fbshipit-source-id: ada362c4ca34d05393a1a7e799c94785ab9d9825
2019-10-16 22:47:24 -07:00
Jeremy Lilley
2e0294cb39 Make JIT Serialization support arbitrary std::function<> IO (#28039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28039

Right now, torch::save() uses std::ostream, which results in unnecessary
data copies in practice. Similar for torch::load().

Adding a std::function<size_t(const void*, size_t)> as an output option,
parallel to the existing filename and std::ostream apis, gives users the
flexibility to emit directly to a backing store.

For a simple case of appending the output to a std::string, we observe
significant benchmark savings (on the order of -50%), even with the
minor std::function<> dispatch overhead. The main reason is that
std::ostringstream effectively requires 2 extra copies of the data
beyond a simple string.append lambda.

We also provide a parallel api for the load(), though this one is
slightly more complex due to the need to do arbitrary position reads.

Test Plan:
buck test mode/dev-nosan caffe2/test/...
      (Basic serialization test in caffe2/test/cpp/api/serialize.cpp)
      Benchmark in experimental/jeremyl/c2/SerializationBench.cpp, with D17823443
        (1M time goes from 90ms -> 40ms, albeit with crc patch applied)

Differential Revision: D17939034

fbshipit-source-id: 344cce46f74b6438cb638a8cfbeccf4e1aa882d7
2019-10-15 22:12:04 -07:00
Will Feng
964d3d8b38 Revert D17822962: [pytorch][PR] Make JIT Serialization support arbitrary std::function<> IO
Test Plan: revert-hammer

Differential Revision:
D17822962

Original commit changeset: d344a7e59707

fbshipit-source-id: ba153a2110faf91d103bd0f8dea4e9613bd6b0da
2019-10-15 13:55:11 -07:00