Fixes #112589
Fixed pydocstyle errors in the following files. The remaining errors relate to docstrings at the module level and on methods within each module (see details below).
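For context, here is a hypothetical example (not taken from the diff) of the kind of docstring fix these counts reflect, e.g. resolving D205 ("1 blank line required between summary line and description") and D400 ("First line should end with a period"):
```python
# Before: summary runs into the description and does not end with a period.
def reset_state_before(device):
    """Resets the state for the given device:
    clears cached handles and reinitializes counters.
    """

# After: one-line summary ending with a period, blank line, then the description.
def reset_state_after(device):
    """Reset the state for the given device.

    Clears cached handles and reinitializes counters.
    """
```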
`pydocstyle torch/cuda/_utils.py --count`
before: 3
after: 0
`pydocstyle torch/cuda/jiterator.py --count`
before: 3
after: 1
**remaining errors:**
```
torch/cuda/jiterator.py:1 at module level:
D100: Missing docstring in public module
```
`pydocstyle torch/cuda/graphs.py --count`
before: 25
after: 7
**remaining errors:**
```
torch/cuda/graphs.py:1 at module level:
D100: Missing docstring in public module
torch/cuda/graphs.py:54 in public method `__new__`:
D102: Missing docstring in public method
torch/cuda/graphs.py:108 in public method `debug_dump`:
D205: 1 blank line required between summary line and description (found 0)
torch/cuda/graphs.py:108 in public method `debug_dump`:
D400: First line should end with a period (not ':')
torch/cuda/graphs.py:150 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/graphs.py:172 in public method `__enter__`:
D105: Missing docstring in magic method
torch/cuda/graphs.py:186 in public method `__exit__`:
D105: Missing docstring in magic method
```
`pydocstyle torch/cuda/_sanitizer.py --count`
before: 35
after: 31
**remaining errors:**
```
torch/cuda/_sanitizer.py:43 in public class `AccessType`:
D101: Missing docstring in public class
torch/cuda/_sanitizer.py:47 in public method `__str__`:
D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:84 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:96 in public method `__str__`:
D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:139 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:142 in public method `__str__`:
D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:218 in public class `StreamSynchronizations`:
D101: Missing docstring in public class
torch/cuda/_sanitizer.py:219 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:256 in public method `create_stream`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:268 in public method `create_event`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:272 in public method `delete_event`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:276 in public method `update_seq_num`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:280 in public method `record_state`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:291 in public method `stream_wait_for_event`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:298 in public method `all_streams_wait_for_event`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:307 in public method `all_streams_wait_for_stream`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:316 in public method `sync_all_streams`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:323 in public method `is_ordered_after`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:339 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:460 in public function `zip_by_key`:
D103: Missing docstring in public function
torch/cuda/_sanitizer.py:466 in public function `zip_arguments`:
D103: Missing docstring in public function
torch/cuda/_sanitizer.py:478 in public class `ArgumentHandler`:
D101: Missing docstring in public class
torch/cuda/_sanitizer.py:479 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:505 in public method `parse_inputs`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:520 in public method `parse_outputs`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:527 in public class `CUDASanitizerDispatchMode`:
D101: Missing docstring in public class
torch/cuda/_sanitizer.py:528 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:562 in public method `__torch_dispatch__`:
D105: Missing docstring in magic method
torch/cuda/_sanitizer.py:597 in public method `__init__`:
D107: Missing docstring in __init__
torch/cuda/_sanitizer.py:601 in public method `enable`:
D102: Missing docstring in public method
torch/cuda/_sanitizer.py:605 in public method `__del__`:
D105: Missing docstring in magic method
```
`pydocstyle torch/storage.py --count`
before: 90
after: 37
**remaining errors:**
```
torch/storage.py:1 at module level:
D100: Missing docstring in public module
torch/storage.py:310 in public class `UntypedStorage`:
D101: Missing docstring in public class
torch/storage.py:311 in public method `__getitem__`:
D105: Missing docstring in magic method
torch/storage.py:317 in public method `is_cuda`:
D102: Missing docstring in public method
torch/storage.py:321 in public method `is_hpu`:
D102: Missing docstring in public method
torch/storage.py:325 in public method `share_memory_`:
D102: Missing docstring in public method
torch/storage.py:444 in public class `TypedStorage`:
D101: Missing docstring in public class
torch/storage.py:453 in public method `fill_`:
D102: Missing docstring in public method
torch/storage.py:458 in public method `__new__`:
D102: Missing docstring in public method
torch/storage.py:530 in public method `__init__`:
D107: Missing docstring in __init__
torch/storage.py:599 in public method `is_cuda`:
D102: Missing docstring in public method
torch/storage.py:604 in public method `is_hpu`:
D102: Missing docstring in public method
torch/storage.py:624 in public method `__len__`:
D105: Missing docstring in magic method
torch/storage.py:653 in public method `__setitem__`:
D105: Missing docstring in magic method
torch/storage.py:681 in public method `__getitem__`:
D105: Missing docstring in magic method
torch/storage.py:715 in public method `copy_`:
D102: Missing docstring in public method
torch/storage.py:723 in public method `nbytes`:
D102: Missing docstring in public method
torch/storage.py:731 in public method `type`:
D102: Missing docstring in public method
torch/storage.py:744 in public method `cuda`:
D102: Missing docstring in public method
torch/storage.py:751 in public method `hpu`:
D102: Missing docstring in public method
torch/storage.py:758 in public method `element_size`:
D102: Missing docstring in public method
torch/storage.py:766 in public method `get_device`:
D102: Missing docstring in public method
torch/storage.py:770 in public method `__str__`:
D105: Missing docstring in magic method
torch/storage.py:781 in public method `__repr__`:
D105: Missing docstring in magic method
torch/storage.py:785 in public method `__iter__`:
D105: Missing docstring in magic method
torch/storage.py:789 in public method `__copy__`:
D105: Missing docstring in magic method
torch/storage.py:793 in public method `__deepcopy__`:
D105: Missing docstring in magic method
torch/storage.py:801 in public method `__sizeof__`:
D105: Missing docstring in magic method
torch/storage.py:877 in public method `device`:
D102: Missing docstring in public method
torch/storage.py:881 in public method `size`:
D102: Missing docstring in public method
torch/storage.py:891 in public method `pickle_storage_type`:
D102: Missing docstring in public method
torch/storage.py:902 in public method `__reduce__`:
D105: Missing docstring in magic method
torch/storage.py:907 in public method `data_ptr`:
D102: Missing docstring in public method
torch/storage.py:915 in public method `resize_`:
D102: Missing docstring in public method
torch/storage.py:931 in public method `from_buffer`:
D102: Missing docstring in public method
torch/storage.py:1032 in public method `from_file`:
D402: First line should not be the function's "signature"
torch/storage.py:1075 in public method `is_shared`:
D102: Missing docstring in public method
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113227
Approved by: https://github.com/kit1980
There is an issue with float8 type promotion because `_promoteTypesLookup` doesn't contain records for a few types between bfloat16 and float8.
I have simply moved the float8 types to just after bfloat16; however, I'm not sure whether this breaks serialization.
Please decide whether it can stay like this, or whether I should instead insert the missing records, filled with "ud", into `_promoteTypesLookup` rather than moving the types.
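A rough Python analogue of the lookup issue (illustrative only; the real `_promoteTypesLookup` is a dense C++ table indexed by ScalarType):
```python
# "ud" marks an undefined promotion; the point is that every dtype pair that can
# be looked up needs a record, even if that record is just "ud".
ud = "ud"
promote_lookup = {
    ("bfloat16", "float32"): "float32",
    ("bfloat16", "float8_e5m2"): ud,    # these records were effectively missing,
    ("bfloat16", "float8_e4m3fn"): ud,  # so lookups between bfloat16 and float8 failed
}

def promote(a: str, b: str) -> str:
    # A missing key here plays the role of reading past the end of the C++ table.
    return promote_lookup[(a, b)]

print(promote("bfloat16", "float32"))  # float32
```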
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110279
Approved by: https://github.com/albanD
This PR re-lands
- [Typing] Fix PEP 484 Violation (#105022)
- Update mypy to 1.4.1 (#91983)
Both were reverted due to a conflict with the internal source repo.
These are mostly fixes for PEP 484 violations (i.e., a default arg set to None without the type being annotated as Optional).
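For example (illustrative, not taken from the diff), the pattern being fixed looks like this:
```python
from typing import Optional

# Before: violates PEP 484 -- the default is None, but the annotation is not Optional.
def load_before(path: str, map_location: str = None):
    ...

# After: the implicit-Optional default is spelled out.
def load_after(path: str, map_location: Optional[str] = None):
    ...
```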
Plus a few real fixes:
- Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
- Add missing return statement to `torch._export.deserialize_graph`
- Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
- Add assert in `torch/optim/optimizer.py` that an Optional list is not None
TODO (in followup PR):
- Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`
Unrelated, to bypass CI failures due to the gcc9 dependency update in Ubuntu-18.04:
- Add a hack to `.ci/docker/install_conda.sh` to squash the older libstdc++ from the conda environment in favor of the one from the OS
- Update bazel CUDA builds to focal, as with libstdc++-6.0.32 bazel builds lose the ability to catch exceptions (probably because they link with cupti statically, but I could not find where that is done)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105227
Approved by: https://github.com/atalman, https://github.com/albanD, https://github.com/Skylion007
This PR addresses #101690 by implementing a faster data element swap in `_StorageBase` in C++ rather than in Python.
This helps when a large model saved on a little-endian machine is loaded on a big-endian machine.
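As a rough Python-level illustration of the operation being moved to C++ (not the actual implementation):
```python
import array

# Four float32 elements written on a little-endian machine...
buf = array.array("f", [1.0, 2.0, 3.0, 4.0])
# ...need every element's bytes swapped when read on a big-endian machine.
buf.byteswap()
```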
TODO:
- [x] Add test cases
- [x] Add performance comparison before and after the PR
- [ ] (Optional) Investigate further opportunities for performance improvements by [SIMDization](https://dev.to/wunk/fast-array-reversal-with-simd-j3p)
Fixes #101690
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101925
Approved by: https://github.com/mikaylagawarecki
Complete the implementation of the `is_pinned()` interface of the untyped storage class for privateuse1,
and refactor the typed storage implementation to use `untyped_storage.is_pinned()`.
Hi @ezyang,
this is another improvement to untyped storage for privateuse1; can you take a moment to review it? Thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100868
Approved by: https://github.com/kurtamohler, https://github.com/ezyang
Like #99817, I found that a method is missing.
I'm not sure whether it was intentionally removed, but the function is still called on the Python side and seems very simple to implement,
so I made the change on the Python side.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99818
Approved by: https://github.com/ezyang
Fixes #99326
Support storage `pin_memory` and `is_pinned` for custom devices by calling the dispatched tensor operations.
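A minimal sketch of that approach (illustrative; the helper names here are made up): wrap the storage in a temporary tensor and reuse the already-dispatched tensor ops, so a custom (privateuse1) backend only needs to implement the tensor-level operations.
```python
import torch

def storage_is_pinned(storage: torch.UntypedStorage, device="cuda") -> bool:
    # View the storage through a temporary tensor and dispatch the tensor op.
    t = torch.empty(0, dtype=torch.uint8, device=storage.device).set_(storage)
    return t.is_pinned(device)

def storage_pin_memory(storage: torch.UntypedStorage, device="cuda") -> torch.UntypedStorage:
    t = torch.empty(0, dtype=torch.uint8, device=storage.device).set_(storage)
    return t.pin_memory(device).untyped_storage()
```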
@ezyang, this PR is what we discussed in issue #99326; would you please take a moment to review it? Thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99712
Approved by: https://github.com/ezyang
The method `torch.UntypedStorage.new` is not detailed in the API docs. Adding a method identifier may make it easier to see that the `new()` method is implemented only in C++, like `copy_()` or `nbytes()`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98201
Approved by: https://github.com/ezyang
To achieve this, I have a per-StorageImpl lock (it was keyed on data_ptr in the previous version of this PR, but moved to StorageImpl to ensure stability of the key before/after sharing) that is created when we are about to share a storage, and I make sure that all other calls to share memory wait on this lock before moving forward.
This does NOT make this call generally thread safe, as any call that is not sharing memory will race and lead to UB.
This ensures that the sample from @robertolat in https://github.com/pytorch/pytorch/issues/95606 works fine.
This does NOT fix the example from @imurray in that same issue, as that call still races with the `.sum()` call. This race is expected and there is no easy way for us to make it work, I'm afraid (see the issue for more details).
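A conceptual Python sketch of the idea (the actual change is a C++-side lock held per StorageImpl, not Python code):
```python
import threading
import torch

_share_lock = threading.Lock()  # stands in for the per-StorageImpl lock

def share_memory_serialized(t: torch.Tensor) -> torch.Tensor:
    # Every caller that wants to share this storage waits here first, so two
    # threads cannot both run the move-to-shared-memory path at the same time.
    with _share_lock:
        t.share_memory_()
    return t
```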
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96664
Approved by: https://github.com/colesbury
### Description
Since the major changes for `_TypedStorage` and `_UntypedStorage` are now complete, they can be renamed to be public.
`TypedStorage._untyped()` is renamed to `TypedStorage.untyped()`.
Documentation for storages is improved as well.
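A quick usage illustration of the rename (hedged; exact semantics follow the updated docs):
```python
import torch

s = torch.arange(4, dtype=torch.float32).storage()  # a TypedStorage
u = s.untyped()  # public spelling introduced by this change (previously `_untyped()`)
print(type(u))   # the untyped (byte-level) storage backing `s`
```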
### Issue
Fixes#82436
### Testing
N/A
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82438
Approved by: https://github.com/ezyang
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65545
Introduce a 2-bit qtensor. The new dtype added for this is `c10::quint2x4`.
The underlying storage is still `uint8_t`, so we pack four 2-bit values into a byte while quantizing.
Kernels that use this dtype should be aware of the packing format (four 2-bit values per byte).
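A hedged sketch of such packing (the bit ordering within the byte is an assumption here, not necessarily what the kernels use):
```python
def pack_2bit(vals):
    """Pack ints in [0, 3] four-per-byte, first value in the lowest bits."""
    out = bytearray()
    for i in range(0, len(vals), 4):
        b = 0
        for j, v in enumerate(vals[i:i + 4]):
            b |= (v & 0b11) << (2 * j)  # value j occupies bits [2j, 2j+1]
        out.append(b)
    return bytes(out)

print(pack_2bit([0, 1, 2, 3]).hex())  # 'e4' == 0b11100100
```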
Test Plan: `buck test mode/dev-asan caffe2/test/:quantization -- test_qtensor`
Reviewed By: supriyar
Differential Revision: D31148141
fbshipit-source-id: 1dc1de719e097adaf93fee47c6d1b8010a3eae6c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62030
Remove dtype tracking from the Python Storage interface, remove all the different `<type>Storage` classes except for `ByteStorage`, and update serialization accordingly, while maintaining as much FC/BC as possible.
Fixes https://github.com/pytorch/pytorch/issues/47442
* **THE SERIALIZATION FORMAT IS FULLY FC/BC.** We worked very hard to make sure this is the case. We will probably want to break FC at some point to make the serialization structure of tensors make more sense, but not today.
* There is now only a single torch.ByteStorage class. Methods like `Tensor.set_` no longer check that the dtype of storage is appropriate.
* As we no longer know what the dtype of a storage is, we've **removed** the size method from Storage, replacing it with nbytes. This is to help catch otherwise silent errors where you confuse the number of elements with the number of bytes.
* `Storage._new_shared` takes a `nbytes` kwarg and will reject previous positional only calls. `Storage._new_with_file` and `_set_from_file` require explicit element size arguments.
* It's no longer possible to convert storages to different types using the float/double/etc. methods. Instead, do the conversion using a tensor (see the sketch after this list).
* It's no longer possible to allocate a typed storage directly using FloatStorage/DoubleStorage/etc constructors. Instead, construct a tensor and extract its storage. The classes still exist but they are used purely for unpickling.
* The preexisting serialization format stores dtype with storage, and in fact this dtype is used to determine the dtype of the tensor overall.
To accommodate this case, we introduce a new TypedStorage concept that exists only during unpickling time which is used to temporarily store the dtype so we can construct a tensor. **If you overrode the handling of pickling/unpickling, you MUST add handling for TypedStorage** or your serialization code will degrade to standard file-based serialization.
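A hedged illustration of the workflow the bullets above describe (method spellings and return types have evolved since this change):
```python
import torch

t = torch.arange(3, dtype=torch.int32)
s = t.storage()            # the underlying storage is just bytes
print(s.nbytes())          # 12: size in bytes, not number of elements
t64 = t.to(torch.float64)  # dtype conversion now happens at the tensor level,
s64 = t64.storage()        # and the converted storage is extracted from the tensor
```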
Original pull request: https://github.com/pytorch/pytorch/pull/59671
Reviewed By: soulitzer, ngimel
Differential Revision: D29466819
Pulled By: ezyang
fbshipit-source-id: 4a14e5d3c2b08e06e558683d97f7378a3180b00e