Commit Graph

74 Commits

zabboud
01478f1afa Fix pydocstyle errors listed in issue 112589 (#113227)
Fixes #112589

Fixed errors relating to pydocstyle in the following files. The remaining errors are related to docstrings at the module level and at methods within each module (see details below).

pydocstyle torch/cuda/_utils.py --count
before: 3
after: 0

pydocstyle torch/cuda/jiterator.py --count
before: 3
after: 1

**remaining errors:**
```
torch/cuda/jiterator.py:1 at module level:
        D100: Missing docstring in public module
```

pydocstyle torch/cuda/graphs.py --count
before: 25
after: 7

**remaining errors:**
```
torch/cuda/graphs.py:1 at module level:
        D100: Missing docstring in public module
torch/cuda/graphs.py:54 in public method `__new__`:
        D102: Missing docstring in public method
torch/cuda/graphs.py:108 in public method `debug_dump`:
        D205: 1 blank line required between summary line and description (found 0)
torch/cuda/graphs.py:108 in public method `debug_dump`:
        D400: First line should end with a period (not ':')
torch/cuda/graphs.py:150 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/graphs.py:172 in public method `__enter__`:
        D105: Missing docstring in magic method
torch/cuda/graphs.py:186 in public method `__exit__`:
        D105: Missing docstring in magic method
```

pydocstyle torch/cuda/_sanitizer.py --count
before: 35
after: 31

**remaining errors:**
```
torch/cuda/_sanitizer.py:43 in public class `AccessType`:
        D101: Missing docstring in public class
torch/cuda/_sanitizer.py:47 in public method `__str__`:
        D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:84 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:96 in public method `__str__`:
        D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:139 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:142 in public method `__str__`:
        D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:218 in public class `StreamSynchronizations`:
        D101: Missing docstring in public class
torch/cuda/_sanitizer.py:219 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:256 in public method `create_stream`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:268 in public method `create_event`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:272 in public method `delete_event`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:276 in public method `update_seq_num`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:280 in public method `record_state`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:291 in public method `stream_wait_for_event`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:298 in public method `all_streams_wait_for_event`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:307 in public method `all_streams_wait_for_stream`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:316 in public method `sync_all_streams`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:323 in public method `is_ordered_after`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:339 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:460 in public function `zip_by_key`:
        D103: Missing docstring in public function
torch/cuda/_sanitizer.py:466 in public function `zip_arguments`:
        D103: Missing docstring in public function
torch/cuda/_sanitizer.py:478 in public class `ArgumentHandler`:
        D101: Missing docstring in public class
torch/cuda/_sanitizer.py:479 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:505 in public method `parse_inputs`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:520 in public method `parse_outputs`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:527 in public class `CUDASanitizerDispatchMode`:
        D101: Missing docstring in public class
torch/cuda/_sanitizer.py:528 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:562 in public method `__torch_dispatch__`:
        D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:597 in public method `__init__`:
        D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:601 in public method `enable`:
        D102: Missing docstring in public method
torch/cuda/_sanitizer.py:605 in public method `__del__`:
        D105: Missing docstring in magic method
```

pydocstyle torch/storage.py --count
before: 90
after: 37

**remaining errors:**
```
torch/storage.py:1 at module level:
        D100: Missing docstring in public module
torch/storage.py:310 in public class `UntypedStorage`:
        D101: Missing docstring in public class
torch/storage.py:311 in public method `__getitem__`:
        D105: Missing docstring in magic method
torch/storage.py:317 in public method `is_cuda`:
        D102: Missing docstring in public method
torch/storage.py:321 in public method `is_hpu`:
        D102: Missing docstring in public method
torch/storage.py:325 in public method `share_memory_`:
        D102: Missing docstring in public method
torch/storage.py:444 in public class `TypedStorage`:
        D101: Missing docstring in public class
torch/storage.py:453 in public method `fill_`:
        D102: Missing docstring in public method
torch/storage.py:458 in public method `__new__`:
        D102: Missing docstring in public method
torch/storage.py:530 in public method `__init__`:
        D107: Missing docstring in __init__
torch/storage.py:599 in public method `is_cuda`:
        D102: Missing docstring in public method
torch/storage.py:604 in public method `is_hpu`:
        D102: Missing docstring in public method
torch/storage.py:624 in public method `__len__`:
        D105: Missing docstring in magic method
torch/storage.py:653 in public method `__setitem__`:
        D105: Missing docstring in magic method
torch/storage.py:681 in public method `__getitem__`:
        D105: Missing docstring in magic method
torch/storage.py:715 in public method `copy_`:
        D102: Missing docstring in public method
torch/storage.py:723 in public method `nbytes`:
        D102: Missing docstring in public method
torch/storage.py:731 in public method `type`:
        D102: Missing docstring in public method
torch/storage.py:744 in public method `cuda`:
        D102: Missing docstring in public method
torch/storage.py:751 in public method `hpu`:
        D102: Missing docstring in public method
torch/storage.py:758 in public method `element_size`:
        D102: Missing docstring in public method
torch/storage.py:766 in public method `get_device`:
        D102: Missing docstring in public method
torch/storage.py:770 in public method `__str__`:
        D105: Missing docstring in magic method
torch/storage.py:781 in public method `__repr__`:
        D105: Missing docstring in magic method
torch/storage.py:785 in public method `__iter__`:
        D105: Missing docstring in magic method
torch/storage.py:789 in public method `__copy__`:
        D105: Missing docstring in magic method
torch/storage.py:793 in public method `__deepcopy__`:
        D105: Missing docstring in magic method
torch/storage.py:801 in public method `__sizeof__`:
        D105: Missing docstring in magic method
torch/storage.py:877 in public method `device`:
        D102: Missing docstring in public method
torch/storage.py:881 in public method `size`:
        D102: Missing docstring in public method
torch/storage.py:891 in public method `pickle_storage_type`:
        D102: Missing docstring in public method
torch/storage.py:902 in public method `__reduce__`:
        D105: Missing docstring in magic method
torch/storage.py:907 in public method `data_ptr`:
        D102: Missing docstring in public method
torch/storage.py:915 in public method `resize_`:
        D102: Missing docstring in public method
torch/storage.py:931 in public method `from_buffer`:
        D102: Missing docstring in public method
torch/storage.py:1032 in public method `from_file`:
        D402: First line should not be the function's "signature"
torch/storage.py:1075 in public method `is_shared`:
        D102: Missing docstring in public method

```
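
For context, a typical fix for a D205/D400 pair like the `debug_dump` one above looks like the following sketch (illustrative docstrings, not the literal diff from this PR):

```python
# Before: the summary runs into the description and ends with ':' (D205, D400).
def debug_dump(self, debug_path):
    """Call debug_dump:
    writes the underlying graph state to debug_path."""

# After: a one-line summary ending in a period, a blank line, then the details.
def debug_dump(self, debug_path):
    """Write the captured graph's debug state to a file.

    Arguments:
        debug_path (str): Path where the dump is written.
    """
```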

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113227
Approved by: https://github.com/kit1980
2023-11-13 22:05:45 +00:00
Mikayla Gawarecki
320ac546ed Clarify difference between share_memory and from_file (#111856)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111856
Approved by: https://github.com/albanD
ghstack dependencies: #111688
2023-11-01 03:25:09 +00:00
Mikayla Gawarecki
b54ab57522 Document torch.from_file and fix UntypedStorage.from_file docs (#111688)
Fixes https://github.com/pytorch/pytorch/issues/37439

Also threads through filename so it is accessible via `t.storage().filename`

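A minimal sketch of the documented behavior (file-backed, shared storage; illustrative, not code from the PR):

```python
import torch

# With shared=True, torch.from_file creates the file if needed and maps it.
t = torch.from_file("example.bin", shared=True, size=16, dtype=torch.uint8)

# The filename threaded through by this change is retrievable from the storage.
print(t.storage().filename)  # -> "example.bin"
```
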
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111688
Approved by: https://github.com/albanD
2023-10-25 19:28:11 +00:00
Amadeusz Skrzypczak
653f966df0 Fix type promotion of float8_e5m2 and float8_e4m3fn (#110279)
There is an issue with float8 type promotion because _promoteTypesLookup doesn't contain records for a few types between bfloat16 and float8.
I have simply moved the float8 types to just after bfloat16; however, I'm not sure whether this breaks serialization.

Please decide whether it can stay like this, or whether I should instead insert the missing records, filled with "ud", into _promoteTypesLookup rather than moving the types.

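A quick probe of the affected entries (a sketch, assuming a build that includes this fix):

```python
import torch

# Before the fix, _promoteTypesLookup was missing records between bfloat16
# and the float8 types; with it, lookups like this resolve instead of failing.
print(torch.promote_types(torch.float8_e5m2, torch.float32))  # torch.float32
```
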
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110279
Approved by: https://github.com/albanD
2023-10-05 01:28:48 +00:00
Justin Chu
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
Nikita Shulga
5837e95d30 [Reland] Update mypy to 1.4.1 (#105227)
This PR re-lands
- [Typing] Fix PEP 484 Violation (#105022)
- Update mypy to 1.4.1 (#91983)

That were reverted due to the conflict with internal source repo.

Mostly fixes for PEP-484 violations (i.e., when a default arg is set to None but the type is not annotated as Optional).
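The recurring pattern looks like this (illustrative function, not from the diff):

```python
from typing import Optional

# Before: violates PEP 484 -- the default is None, but the annotation says str.
def load(path: str = None): ...

# After: the annotation is made explicitly Optional.
def load(path: Optional[str] = None): ...
```
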
Plus a few real fixes:
  - Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
  - Add missing return statement to `torch._export.deserialize_graph`
  - Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
  - Add an assert in `torch/optim/optimizer.py` that an Optional list is not None
TODO (in followup PR):
  - Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`

Unrelated: to bypass CI failures due to the gcc9 dependency update in Ubuntu 18.04:
- Add a hack to `.ci/docker/install_conda.sh` that squashes the older libstdc++ from the conda environment in favor of the one from the OS
- Update bazel CUDA builds to focal, as with libstdc++-6.0.32 bazel builds lose the ability to catch exceptions (probably because they link with cupti statically, but I could not find where that is done)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105227
Approved by: https://github.com/atalman, https://github.com/albanD, https://github.com/Skylion007
2023-07-15 20:30:20 +00:00
PyTorch MergeBot
15fd1ea118 Revert "[Reland] Update mypy to 1.4.1 (#105227)"
This reverts commit c9c4f8efc3.

Reverted https://github.com/pytorch/pytorch/pull/105227 on behalf of https://github.com/atalman due to trying to mitigate ci sev #105248 ([comment](https://github.com/pytorch/pytorch/pull/105227#issuecomment-1636510935))
2023-07-14 22:28:35 +00:00
Nikita Shulga
c9c4f8efc3 [Reland] Update mypy to 1.4.1 (#105227)
This PR re-lands
- [Typing] Fix PEP 484 Violation (#105022)
- Update mypy to 1.4.1 (#91983)

That were reverted due to the conflict with internal source repo.

Mostly fixes for PEP-484 violations (i.e., when a default arg is set to None but the type is not annotated as Optional).
Plus a few real fixes:
  - Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
  - Add missing return statement to `torch._export.deserialize_graph`
  - Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
  - Add an assert in `torch/optim/optimizer.py` that an Optional list is not None
TODO (in followup PR):
  - Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105227
Approved by: https://github.com/atalman, https://github.com/albanD, https://github.com/Skylion007
2023-07-14 20:45:12 +00:00
PyTorch MergeBot
3c5a494d7a Revert "Update mypy to 1.4.1 (#91983)"
This reverts commit 634659e262.

Reverted https://github.com/pytorch/pytorch/pull/91983 on behalf of https://github.com/malfet due to It's dependent change was reverted, so reverting this one as well, to keep CI clean ([comment](https://github.com/pytorch/pytorch/pull/91983#issuecomment-1636059709))
2023-07-14 15:59:16 +00:00
Nikita Shulga
634659e262 Update mypy to 1.4.1 (#91983)
Mostly fixes for PEP-484 violations (i.e., when a default arg is set to None but the type is not annotated as Optional).
Plus a few real fixes:
  - Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
  - Add missing return statement to `torch._export.deserialize_graph`
  - Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
TODO (in followup PR):
  - Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91983
Approved by: https://github.com/kit1980, https://github.com/ZainRizvi, https://github.com/huydhn, https://github.com/thiagocrepaldi, https://github.com/aaronenyeshi
2023-07-13 16:30:36 +00:00
Paweł Piskorski
7fb2a928cf fix hpu storage serialization (#101680)
Change-Id: Ia534400a0e8972590374eceba5b62a2525b796e5

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101680
Approved by: https://github.com/mikaylagawarecki
2023-06-21 21:19:49 +00:00
Gao Tianlin
0a7351e9ee [Doc] Fix torch.UntypedStorage.mps() doc (#103797)
Fix doc typo.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103797
Approved by: https://github.com/kit1980
2023-06-17 18:56:18 +00:00
shibo19
d9c8f9a00d add storage dtype for custom device (#102481)
Fixes #ISSUE_NUMBER
1. Add an `isinstance` check for typed storage on custom devices.
2. Add `storage.type()` support for custom devices.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102481
Approved by: https://github.com/albanD
2023-06-01 12:46:19 +00:00
Kazuaki Ishizaki
be5e77ca4c Make _StorageBase.byteswap faster ( > 10000x) (#101925)
This PR addresses #101690 by implementing a faster swap of data elements in `_StorageBase` in C++ rather than in Python.

This helps when a large model saved on a little-endian machine is loaded on a big-endian machine.

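For intuition, the swap itself is a plain per-element byte reversal; a Python sketch of the concept (unrelated to the C++ fast path this PR adds):

```python
import struct

# The same float serialized little-endian vs. big-endian is just 4 reversed bytes.
le = struct.pack("<f", 1.0)
be = struct.pack(">f", 1.0)
assert le == be[::-1]
```
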
TODO:
- [x] Add test cases
- [x] Add performance comparison before and after the PR
- [ ] (Optional) Investigate further opportunities for performance improvements by [SIMDization](https://dev.to/wunk/fast-array-reversal-with-simd-j3p)

Fixes #101690

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101925
Approved by: https://github.com/mikaylagawarecki
2023-05-24 00:13:41 +00:00
wbigat
4486a1d09a Improve the functionality of untyped storage for privateuse1. (#100868)
Complete the implementation of the is_pinned() interface of the untyped storage class for privateuse1, and refactor the typed storage implementation to use untyped_storage.is_pinned().

Hi @ezyang,
This is another improvement of untyped storage for privateuse1; can you take a moment to review it? Thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100868
Approved by: https://github.com/kurtamohler, https://github.com/ezyang
2023-05-19 04:33:59 +00:00
Aleksei Nikiforov
87a2af6d4a Fix loading data on different encoding (#94503)
Add an endianness marker when saving; if it doesn't match the host endianness when loading, do a byteswap.

Older data will load correctly only on systems with the same endianness it was saved on. New data should load correctly on systems of any endianness.

Fixes #65300
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94503
Approved by: https://github.com/kurtamohler, https://github.com/ezyang
2023-04-25 21:05:20 +00:00
fakeYan
ecd2c71871 Implement the get_device method in the storage base class. (#99818)
Fixes #ISSUE_NUMBER
Like #99817, I found that a method is missing; I'm not sure if it was intentionally removed. The function is still called on the Python side and seems very simple to implement, so I made the change on the Python side.

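A plausible shape for such a change, assuming the storage already tracks its device (a sketch, not necessarily the exact diff):

```python
def get_device(self) -> int:
    # Delegate to the device the storage already knows about.
    return self.device.index
```
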
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99818
Approved by: https://github.com/ezyang
2023-04-25 03:45:39 +00:00
wbigat
ee5f09ab80 [Feature] storage pin memory support custom device. (#99712)
Fixes #99326

Support storage pin_memory and is_pinned for custom devices by calling dispatched tensor operations.

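"Calling dispatched tensor operations" here means viewing the storage through a zero-element tensor so the query reaches the backend that owns the memory; a sketch with assumed names, not the exact diff:

```python
import torch

def is_pinned(self, device="cuda"):
    # View the storage via an empty tensor, then dispatch the real check.
    return torch.tensor([], dtype=torch.uint8, device=self.device).set_(self).is_pinned(device)
```
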
@ezyang, this PR is what we discussed in issue #99326; would you please take a moment to review it? Thanks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99712
Approved by: https://github.com/ezyang
2023-04-21 18:31:01 +00:00
wbigat
b08c384106 Add parameter for pin memory of storage to support other devices. (#98692)
Fixes #ISSUE_NUMBER

Add a parameter for pinning storage memory to support other devices.
In the future, other backends will provide their own allocators to create pinned memory.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98692
Approved by: https://github.com/ezyang
2023-04-16 20:06:27 +00:00
XDaoHong
ea00f850e9 add new() method identifier to _StorageBase (#98201)
The method torch.UntypedStorage.new() is not detailed in the API docs. Adding a method identifier makes it easier to see that new() is implemented only in C++, like copy_() or nbytes().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98201
Approved by: https://github.com/ezyang
2023-04-05 12:47:40 +00:00
Aaron Gokaslan
47dca20d80 [BE] Enable flake8-comprehension rule C417 (#97880)
Enables flake8-comprehensions rule C417. Ruff autogenerated these fixes across the codebase.

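C417 flags unnecessary `map()` calls; the autofix rewrites them as comprehensions:

```python
# Before (flagged by C417):
squares = list(map(lambda x: x * x, range(10)))

# After (the autofix):
squares = [x * x for x in range(10)]
```
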
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97880
Approved by: https://github.com/ezyang, https://github.com/kit1980, https://github.com/albanD
2023-03-30 14:34:24 +00:00
Kurt Mohler
fbc803df0c Only warn once for TypedStorage deprecation (#97379)
Fixes #97207

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97379
Approved by: https://github.com/ezyang
2023-03-23 05:40:23 +00:00
albanD
6ea790c5b6 Make share_memory_ call thread safe within itself. (#96664)
To achieve this, a per-StorageImpl lock (it was per data_ptr in the previous version of this PR, but moved to StorageImpl to ensure the key is stable before/after sharing) is created when we are about to share a storage, and all other calls to share memory must wait on this lock before moving forward.
This does NOT make the call generally thread safe, as any concurrent call that is not sharing memory will race and lead to UB.

This ensures that the sample from @robertolat in https://github.com/pytorch/pytorch/issues/95606 works fine.
It does NOT fix the example from @imurray in that same issue, as the call still races with the `.sum()` call. That race is expected, and there is no easy way for us to make it work, I'm afraid (see the issue for more details).

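The now-safe pattern, roughly the shape of @robertolat's sample (a sketch, not the verbatim reproducer):

```python
import threading

import torch

t = torch.ones(3)
threads = [threading.Thread(target=t.share_memory_) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
assert t.is_shared()
```
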
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96664
Approved by: https://github.com/colesbury
2023-03-14 19:27:01 +00:00
Sridhar Perepu
9159599cd5 Grammatically updated the tech docs (#92896)
Small typo change in the torch tech docs
![Torch storage doc](https://user-images.githubusercontent.com/76240270/214272201-5e9cce2a-13cf-48b7-8806-9c492a0eb665.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92896
Approved by: https://github.com/mikaylagawarecki, https://github.com/kit1980
2023-03-13 22:51:42 +00:00
Xuehai Pan
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

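The main rewrite looks like this:

```diff
 class MyModule(nn.Module):
     def __init__(self):
-        super(MyModule, self).__init__()
+        super().__init__()
```
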
Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change the semantics are kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
Aaron Gokaslan
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: removes the need to inherit from object and removes unused future imports.

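Both changes in one sketch (illustrative class name):

```diff
-from __future__ import unicode_literals
-
-class Storage(object):
+class Storage:
     ...
```
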
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
Kurt Mohler
08a47549af Rename Tensor._storage to Tensor.untyped_storage and update docs (#91414)
Fixes #89224

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91414
Approved by: https://github.com/ezyang
2022-12-28 19:21:34 +00:00
Edward Z. Yang
2ad6ed8ac9 Fix some typed storage is deprecated warnings. (#89867)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89867
Approved by: https://github.com/albanD
2022-12-07 20:09:57 +00:00
Ram Rachum
7322f73c8f Fix exception cause in storage.py (#90118)
This change causes the correct message to be shown between the two tracebacks when an error is shown.

More context here: https://blog.ram.rachum.com/post/621791438475296768/improving-python-exception-chaining-with
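
The pattern being applied is PEP 3134 explicit chaining (illustrative, assumed names):

```python
storages = {}
try:
    storage = storages["abc"]
except KeyError as err:
    # `from err` marks the original error as the explicit cause, so the correct
    # message appears between the two tracebacks.
    raise RuntimeError("storage 'abc' not found") from err
```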
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90118
Approved by: https://github.com/kit1980
2022-12-04 06:51:25 +00:00
Kurt Mohler
ee28b865ee Deprecate TypedStorage, its derived classes, and all of their public methods (#85303)
Part of #85302

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85303
Approved by: https://github.com/ezyang
2022-11-08 18:11:01 +00:00
albanD
8a9aca7b8d Reland 2 Many symintifications (#87604) (#87980)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87980
Approved by: https://github.com/ezyang
2022-10-28 13:40:11 +00:00
PyTorch MergeBot
8b4d95759c Revert "Many symintifications (#87604)"
This reverts commit 777e6a2c51.

Reverted https://github.com/pytorch/pytorch/pull/87604 on behalf of https://github.com/weiwangmeta due to breaking internal builds
2022-10-28 03:00:11 +00:00
albanD
777e6a2c51 Many symintifications (#87604)
Adds:
- expand_inplace
- conv / conv_double_backward
- convolution
- adaptive_avg_pool2d_symint
- _embedding_bag_backward_symint
- cudnn_grid_sampler
- cuda 32-bit indexing
- nll_loss / nll_loss_2d
- tensor split
- pooling same mode
- cudnn_is_acceptable
- storage nbytes

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87604
Approved by: https://github.com/ezyang
2022-10-26 17:33:53 +00:00
Kurt Mohler
14d0296e5c Rename _Typed/_UntypedStorage to Typed/UntypedStorage and update docs (#82438)
### Description

Since the major changes for `_TypedStorage` and `_UntypedStorage` are now complete, they can be renamed to be public.

`TypedStorage._untyped()` is renamed to `TypedStorage.untyped()`.

Documentation for storages is improved as well.

### Issue
Fixes #82436

### Testing
N/A

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82438
Approved by: https://github.com/ezyang
2022-07-30 19:37:08 +00:00
Kurt Mohler
3a6306b9af Remove remaining eval calls from torch/storage.py (#81701)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81701
Approved by: https://github.com/ezyang
2022-07-19 20:04:41 +00:00
Kurt Mohler
8367fd9d6b Remove eval from torch.storage._TypedStorage.__new__ (#81679)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81679
Approved by: https://github.com/ezyang
2022-07-19 06:01:38 +00:00
Kurt Mohler
4c279994fd Fix Module.share_memory error (#80843)
Fixes #80733

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80843
Approved by: https://github.com/malfet
2022-07-05 15:17:36 +00:00
Alban Desmaison
0a651a231d Add full support for serialization of MPS Tensors (#79465)
Fix https://github.com/pytorch/pytorch/issues/79384
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79465
Approved by: https://github.com/kulinseth, https://github.com/malfet
2022-06-14 17:54:30 +00:00
PyTorch MergeBot
ce6ce74703 Revert "Add full support for serialization of MPS Tensors (#79465)"
This reverts commit 64c2a275c4.

Reverted https://github.com/pytorch/pytorch/pull/79465 on behalf of https://github.com/zengk95 due to this broke X linux-xenial-py3.7-clang7-onnx / test (default, 1, 2, linux.2xlarge). Not sure why since it passed on pull.
2022-06-14 16:42:36 +00:00
Alban Desmaison
64c2a275c4 Add full support for serialization of MPS Tensors (#79465)
Fix https://github.com/pytorch/pytorch/issues/79384
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79465
Approved by: https://github.com/kulinseth, https://github.com/malfet
2022-06-14 14:20:09 +00:00
Kurt Mohler
1705be8ff7 Fix _free_weak_ref error (#78575)
Fixes #74016

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78575
Approved by: https://github.com/ezyang
2022-06-01 00:07:48 +00:00
Kurt Mohler
e9afb43676 Add meta device support to _UntypedStorage and _TypedStorage (#78008)
Fixes #77885

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78008
Approved by: https://github.com/ezyang
2022-05-28 15:33:45 +00:00
Kurt Mohler
cecb2ad95e Restore old names for private funcs in legacy storages (#77861)
Followup from #75459

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77861
Approved by: https://github.com/ezyang
2022-05-20 02:03:34 +00:00
Kurt Mohler
aea6e2c396 Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459)
Fixes #74933

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75459
Approved by: https://github.com/ezyang
2022-05-19 13:54:39 +00:00
Kurt Mohler
79ddc72b85 Virtualize <type>Storage classes (#66970)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/66228

cc ezyang bhosmer smessmer ljk53 bdhirsh

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66970

Reviewed By: bdhirsh

Differential Revision: D33245612

Pulled By: ezyang

fbshipit-source-id: 4c61c2cb029e2b94b0e68927c377d3e1c358dd7c
(cherry picked from commit d29fcdfb4bc2cc17b1795d4349e4b56fa0d1cf12)
2022-03-22 23:44:48 +00:00
Kurt Mohler
8e7fe87630 Rename Typed/UntypedStorage to _Typed/_UntypedStorage (#72540)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72540

Reviewed By: jbschlosser

Differential Revision: D34216823

Pulled By: bdhirsh

fbshipit-source-id: 1bc9930ab582771ebf02308e035576cd1a0dbe47
(cherry picked from commit 329238f612)
2022-02-15 23:53:01 +00:00
Christian Puhrsch
4a7e07e53e Fix torch.save and detach for CSR Tensor (#71963)
Summary:
Currently saving a CSR Tensor simply fails. This also addresses the segfault encountered in https://github.com/pytorch/pytorch/issues/71652.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71963

Reviewed By: jbschlosser

Differential Revision: D33895938

Pulled By: cpuhrsch

fbshipit-source-id: a333505d3a216705147c2aaaaeb2a0fd0c2a5e43
(cherry picked from commit a88265921c)
2022-02-02 23:59:24 +00:00
Shijun Kong
e2be087207 [oss][pytorch] Add quint2x4 dtype (#65545)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65545

Introduce a 2-bit qtensor. The new dtype added for this is c10::quint2x4.

The underlying storage is still uint8_t, so we pack four 2-bit values into a byte while quantizing.

Kernels that use this dtype should be aware of the packing format (four 2-bit values in one byte).

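For illustration, packing four 2-bit values into one byte (the bit order here is an assumption, not necessarily the kernels' layout):

```python
def pack_2bit(vals):
    """Pack four values in [0, 3] into a single byte, lowest bits first."""
    assert len(vals) == 4 and all(0 <= v <= 3 for v in vals)
    byte = 0
    for i, v in enumerate(vals):
        byte |= v << (2 * i)
    return byte

assert pack_2bit([1, 2, 3, 0]) == 0b00111001
```
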
Test Plan: `buck test mode/dev-asan caffe2/test/:quantization -- test_qtensor`

Reviewed By: supriyar

Differential Revision: D31148141

fbshipit-source-id: 1dc1de719e097adaf93fee47c6d1b8010a3eae6c
2021-10-06 14:22:00 -07:00
Kurt Mohler
5883523c1d Remove dtype from torch.Storage and use only torch.ByteStorage (#62030)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62030

Remove dtype tracking from the Python Storage interface, remove all the different `<type>Storage` classes except for `ByteStorage`, and update serialization accordingly, while maintaining as much FC/BC as possible.

Fixes https://github.com/pytorch/pytorch/issues/47442

* **THE SERIALIZATION FORMAT IS FULLY FC/BC.** We worked very hard to make sure this is the case. We will probably want to break FC at some point to make the serialization structure of tensors make more sense, but not today.
* There is now only a single torch.ByteStorage class. Methods like `Tensor.set_` no longer check that the dtype of storage is appropriate.
* As we no longer know what dtype of a storage is, we've **removed** the size method from Storage, replacing it with nbytes. This is to help catch otherwise silent errors where you confuse number of elements with number of bytes.
* `Storage._new_shared` takes a `nbytes` kwarg and will reject previous positional only calls.  `Storage._new_with_file` and `_set_from_file` require explicit element size arguments.
* It's no longer possible to convert storages to different types using the float/double/etc. methods. Instead, do the conversion using a tensor.
* It's no longer possible to allocate a typed storage directly using the FloatStorage/DoubleStorage/etc. constructors. Instead, construct a tensor and extract its storage (see the sketch after this list). The classes still exist, but they are used purely for unpickling.
* The preexisting serialization format stores dtype with storage, and in fact this dtype is used to determine the dtype of the tensor overall. To accommodate this case, we introduce a new TypedStorage concept that exists only during unpickling time, which is used to temporarily store the dtype so we can construct a tensor. **If you overrode the handling of pickling/unpickling, you MUST add handling for TypedStorage**, or your serialization code will degrade to standard file-based serialization.

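A sketch of the recommended replacement workflow from the bullets above (illustrative; exact storage return types have varied across releases):

```python
import torch

# Allocate a "typed" storage by building a tensor and extracting its storage,
# instead of calling FloatStorage(...) directly.
s = torch.empty(4, dtype=torch.float32).storage()

# "Convert" a storage to another element type by going through a tensor.
t = torch.empty(4, dtype=torch.float64)
converted = t.to(torch.float32).storage()
```
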
Original pull request: https://github.com/pytorch/pytorch/pull/59671

Reviewed By: soulitzer, ngimel

Differential Revision: D29466819

Pulled By: ezyang

fbshipit-source-id: 4a14e5d3c2b08e06e558683d97f7378a3180b00e
2021-10-05 13:50:34 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00