Jez Ng
a3b859fc67
Drop dynamo-specific type hints on Tensor in favor of type-ignores (#113720)
...
Per [this][1] discussion, plus some offline discussion. The summary:
@albanD considers the core PyTorch types like Tensor to be extremely
brittle, and does not think the risk of adding these typed attributes
is worth it.
@eellison mentioned that we could use `WeakTensorKeyDictionary` instead.
However, based on the sparse usage of these bonus attributes, I think
that would be overkill. So I've opted to go with a few more type-ignore
comments instead.
[1]: https://github.com/pytorch/pytorch/pull/113610#discussion_r1392907367
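A minimal sketch of the chosen approach, with a hypothetical attribute name; the point is that dynamo-specific attributes stay undeclared on `Tensor` and each assignment carries a suppression instead:
```python
import torch

t = torch.randn(2, 2)
# Attribute name is illustrative: dynamo tags tensors with bookkeeping
# attributes that are intentionally not declared on the Tensor class.
t._dynamo_example_attr = True  # type: ignore[attr-defined]
```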
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113720
Approved by: https://github.com/ezyang, https://github.com/albanD, https://github.com/eellison
ghstack dependencies: #113534, #113610
2023-11-16 01:54:00 +00:00
Jez Ng
d00c983b63
[dynamo] Make {testing,debug_utils,utils}.py pass follow_imports typechecking (#113519)
...
Notes:
* `debug_insert_nops` in testing.py was passing `None` to the compiler_fn
parameter of `OutputGraph`, hence the modifications there.
* I added `disable-error-code="method-assign"` to debug_utils.py as it
does several such assignments. I guess mypy doesn't like it because it
makes code near-impossible to safely typecheck.
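A minimal sketch (names hypothetical) of the pattern that trips mypy's `method-assign` error code, and of the per-file directive used to suppress it:
```python
# mypy: disable-error-code="method-assign"

class Runner:
    def run(self) -> None:
        print("original")

def patched_run(self: "Runner") -> None:
    print("patched")

# Without the directive above, mypy reports [method-assign] here: assigning
# to a method at runtime defeats static checking of its call sites.
Runner.run = patched_run
```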
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113519
Approved by: https://github.com/Skylion007
ghstack dependencies: #113413, #113518
2023-11-11 22:15:46 +00:00
Jez Ng
767ce2b81c
[dynamo] Make decorators.py pass follow-import typechecking (#113304)
...
I am trying to turn on `follow_imports=silent` for MYPYNOFOLLOW.
However, this requires a huge number of changes, so I am breaking it
down on a per-file basis.
Unfortunately, we will not be able to turn on `follow_imports` until all
files are fixed, so there is no way to prevent regressions in the
meantime. I therefore hope to land these fixes as quickly as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113304
Approved by: https://github.com/Skylion007
2023-11-09 21:55:49 +00:00
Joel Schlosser
51a38380d1
Fix torch.load(..., weights_only=True) for NT (#112516)
...
Found when looking into #112509
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112516
Approved by: https://github.com/soulitzer
2023-11-02 14:41:04 +00:00
Mikayla Gawarecki
320ac546ed
Clarify difference between share_memory and from_file (#111856)
...
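A hedged illustration of the two APIs the docs distinguish: `Tensor.share_memory_()` moves an existing tensor's storage into shared memory, while `torch.from_file(..., shared=True)` backs a tensor's storage with a named file (the path below is illustrative):
```python
import torch

# share_memory_: move this tensor's storage into shared memory.
t = torch.zeros(4)
t.share_memory_()
print(t.is_shared())  # True

# from_file: use a file on disk as the tensor's storage.
with open("/tmp/demo.bin", "wb") as fh:
    fh.write(bytes(16))  # room for four float32 elements
f = torch.from_file("/tmp/demo.bin", shared=True, size=4, dtype=torch.float32)
f.fill_(1.0)  # with shared=True, writes propagate to the file
```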
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111856
Approved by: https://github.com/albanD
ghstack dependencies: #111688
2023-11-01 03:25:09 +00:00
Joel Schlosser
3693777a86
Pickle support for NT (#110219)
...
Fixes #104198
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110219
Approved by: https://github.com/cpuhrsch
2023-09-29 15:30:06 +00:00
Moritz Hennen
09c598745c
Rename torch._C._TensorBase to TensorBase (#109940)
...
I have renamed the type `torch._C._TensorBase` to the non-private class name `TensorBase`.
The changes also leave `torch._C._TensorBase` as an alias to the new type, both in the C++ code: 70458768fb/torch/csrc/autograd/python_variable.cpp (L2196-L2197) and in the corresponding `__init__.pyi.in` file:
70458768fb/torch/_C/__init__.pyi.in (L1522)
Fixes #109438
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109940
Approved by: https://github.com/ezyang
2023-09-25 19:10:22 +00:00
Digant Desai
8a7a6867b9
[PyTorch][Tensor] Introduce tensor.dim_order (#106835)
...
Summary:
This adds a stride-based attribute for a tensor, available in Python.
It can help inspect tensors generated using `torch.empty_permuted(.., physical_layout, ...)`, where physical_layout should match the dim_order returned here (`empty_permuted` will be renamed to use dim_order as the param name in the future). It also helps the ExecuTorch export pipeline implement dim_order-based tensors.
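A brief sketch of that relationship; the expected outputs follow the semantics described above:
```python
import torch

# Physical layout places dim 2 outermost, then dim 0, then dim 1.
t = torch.empty_permuted((2, 3, 4), (2, 0, 1))
print(t.dim_order())  # (2, 0, 1): dims ordered from largest to smallest stride
print(t.shape)        # torch.Size([2, 3, 4]): the logical shape is unchanged
```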
Differential Revision: D48134476
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106835
Approved by: https://github.com/ezyang
2023-08-25 00:06:03 +00:00
Jane Xu
6e71ad0509
Add tensor post accumulate grad hook API (#107063)
...
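A hedged sketch of the API named in the title: the hook receives the leaf tensor after its `.grad` has been accumulated during backward:
```python
import torch

t = torch.randn(3, requires_grad=True)

def on_grad_ready(param: torch.Tensor) -> None:
    # Runs after .grad has been written for this leaf tensor.
    print("accumulated grad:", param.grad)

t.register_post_accumulate_grad_hook(on_grad_ready)
(t * 2.0).sum().backward()  # the hook fires during this call
```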
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107063
Approved by: https://github.com/albanD, https://github.com/soulitzer
2023-08-24 00:19:35 +00:00
PyTorch MergeBot
432fce4e0d
Revert "Add tensor post accumulate grad hook API ( #107063 )"
...
This reverts commit 3f655277d4.
Reverted https://github.com/pytorch/pytorch/pull/107063 on behalf of https://github.com/ZainRizvi due to Diff train weirdness. Need to temporarily revert this PR and will re-land it soon afterwards ([comment](https://github.com/pytorch/pytorch/pull/107063#issuecomment-1690799057))
2023-08-24 00:12:34 +00:00
Jun Luo
221daeb1a7
Fix deepcopy for tensor with MTIA device key. (#107427)
...
Summary: Tensors with the MTIA device type don't have storage, so we need to treat them the same as other tensors that don't have storage.
Test Plan: CI tests.
Differential Revision: D48456004
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107427
Approved by: https://github.com/cx-yin, https://github.com/ezyang
2023-08-23 20:47:36 +00:00
Jane Xu
3f655277d4
Add tensor post accumulate grad hook API (#107063)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107063
Approved by: https://github.com/albanD, https://github.com/soulitzer
2023-08-22 15:15:57 +00:00
Justin Chu
4cc1745b13
[BE] f-stringify torch/ and scripts (#105538)
...
This PR is a follow-up on the pyupgrade series, converting more strings to f-strings using `flynt`.
- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/
Command used:
```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```
and excluded `collect_env.py`
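An illustrative before/after of the conversion flynt performs (variables are hypothetical):
```python
name, dims = "tensor", 3

old = "created %s with %d dims" % (name, dims)  # before flynt
new = f"created {name} with {dims} dims"        # after flynt
assert old == new
```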
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
Justin Chu
79c5e33349
[BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
albanD
08cbfb2a58
Avoid tensor creation and use scalar overload (#104264)
...
I would expect this to preserve the behavior, but there might be weird edge cases; @mruberry might know.
The aim is to fix https://github.com/pytorch/pytorch/pull/104254 (and make `1 ** t` capturable via cudagraph).
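A small sketch of the expression in question; per the description above, the scalar base can go through the Scalar overload of `pow` rather than being wrapped in a tensor first:
```python
import torch

t = torch.arange(3.0)
print(1 ** t)           # tensor([1., 1., 1.])
print(torch.pow(2, t))  # Scalar-base overload: tensor([1., 2., 4.])
```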
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104264
Approved by: https://github.com/zou3519
2023-07-12 18:11:27 +00:00
Edward Z. Yang
872fdb329b
This extra message would have helped with Wav2Vec2 debugging. (#103002)
...
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103002
Approved by: https://github.com/janeyx99, https://github.com/anijain2305, https://github.com/voznesenskym, https://github.com/malfet
2023-06-06 04:28:16 +00:00
Pearu Peterson
39b04370db
Preserve coalesce state in sparse COO tensor serialization (#102647)
...
Fixes #101186
Also, resolves the "serialization to preserve coalesced-ness" part in https://github.com/pytorch/pytorch/issues/73479
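A hedged sketch of the round trip whose coalesce state is now preserved:
```python
import io
import torch

indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1.0, 2.0])
a = torch.sparse_coo_tensor(indices, values, (2, 2)).coalesce()

buf = io.BytesIO()
torch.save(a, buf)
buf.seek(0)
b = torch.load(buf)
print(b.is_coalesced())  # True: coalesced-ness survives serialization
```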
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102647
Approved by: https://github.com/mikaylagawarecki
2023-06-03 01:37:52 +00:00
eqy
66f6e0e605
[CUDA][DLPack] Handle legacy default streams for DLPack conversion (#101318)
...
It seems that some legacy default stream logic (e.g., present in a8ff647e42/torch/utils/dlpack.py (L114)) is not handled on the potential receiving end in `torch/_tensor.py`.
Open to suggestions on how to make the test case less clunky, as this was the combination we arrived at after discovering flakiness in alternate versions.
Thanks to Olga Andreeva for surfacing this issue and providing a repro.
CC @Aidyn-A @ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101318
Approved by: https://github.com/ngimel
2023-05-24 16:14:50 +00:00
Kiersten Stokes
bafa2c4724
Change 'w.r.t.' to 'wrt' in function docstrings to fix doc rendering (#100028)
...
Fixes #72428 according to the decision reached in the comments.
I've left other instances of `w.r.t.` intact (e.g. in parameter/return descriptions, in comments, etc.) because there were many, and I didn't want to go out of scope. That being said, I'm happy to change those as well if we'd prefer the consistency!
I've also fixed a typo that I came across while grepping for instances.
Will update with screenshots once docs are built.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100028
Approved by: https://github.com/albanD
2023-04-25 23:53:26 +00:00
Justin Chu
79c9e82e27
Fix flake8 lint errors reported by ruff - take 2 (#99798)
...
Replaces #99784. This PR is a pure autofix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99798
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-04-23 23:09:51 +00:00
Edward Z. Yang
419ad49e65
Make Tensor.__contains__ accept SymInt/Float/Bool. (#98933)
...
Fixes https://github.com/pytorch/pytorch/issues/98870
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
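A quick illustration of the operator involved; after this PR the same membership check also accepts symbolic scalars (SymInt/SymFloat/SymBool) under the compiler:
```python
import torch

t = torch.tensor([1, 2, 3])
print(2 in t)  # True: dispatches to an elementwise equality check
print(5 in t)  # False
```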
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98933
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-04-12 19:16:33 +00:00
Yu, Guangye
53c9bc8c68
Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack … (#94968)
...
# Motivation
The DLPack device type kDLOneAPI stands for Unified Shared Memory allocated on a oneAPI device. The corresponding PyTorch backend type is XPU.
This adds support for exporting/importing a PyTorch XPU tensor as a DLPack tensor with the kDLOneAPI device type.
# Solution
1. Update the DLPack protocol to v0.7.
2. Add XPU hooks to map between the ATen device and the DLPack device using the address value and device information.
# Additional Context
Reopen (#82867)
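A hedged sketch of the round trip this enables, assuming a build with XPU support and an available device (the `torch.xpu.is_available` guard is an assumption of this sketch):
```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

if torch.xpu.is_available():  # assumption: XPU runtime is present
    t = torch.ones(4, device="xpu")
    capsule = to_dlpack(t)    # exported with DLPack device type kDLOneAPI
    u = from_dlpack(capsule)  # imported back as an XPU tensor
    print(u.device)
```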
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94968
Approved by: https://github.com/kit1980
2023-03-30 04:32:15 +00:00
shibo
2ea097071a
fix device type bug for custom device (#97213)
...
Fixes #ISSUE_NUMBER
Support the custom renamed device. @bdhirsh, please review my changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97213
Approved by: https://github.com/bdhirsh, https://github.com/kit1980
2023-03-27 18:36:47 +00:00
Sujoy Saraswati
4a5ce921a0
Add HPU to compatible shallow copy list and remove lazy HPU changes (#94673)
...
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94673
Approved by: https://github.com/wconstab
2023-02-14 17:15:25 +00:00
Xuehai Pan
5b1cedacde
[BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
...
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.
- #94587
- #94588
- #94592
Also, methods with only a `super()` call are removed:
```diff
class MyModule(nn.Module):
- def __init__(self):
- super().__init__()
-
def forward(self, ...):
...
```
Cases where the rewrite would change the semantics are kept unchanged. E.g.:
f152a79be9/caffe2/python/net_printer.py (L184-L190)
f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
Ivan Yashchuk
fba13d94a1
Remove deprecated torch.symeig (#70988)
...
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.
- [x] XLA PR: https://github.com/pytorch/xla/pull/4498
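A migration sketch: the replacement documented in the deprecation notice is `torch.linalg.eigh`:
```python
import torch

a = torch.randn(3, 3)
a = a + a.T  # eigh expects a symmetric (or Hermitian) input
eigenvalues, eigenvectors = torch.linalg.eigh(a)
```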
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980, https://github.com/malfet
2023-01-31 11:59:11 +00:00
PyTorch MergeBot
acdd462b1a
Revert "Remove deprecated torch.symeig ( #70988 )"
...
This reverts commit d70ed68162.
Reverted https://github.com/pytorch/pytorch/pull/70988 on behalf of https://github.com/kit1980 due to failing XLA tests; forward fix unsuccessful
2023-01-24 19:03:40 +00:00
Ivan Yashchuk
d70ed68162
Remove deprecated torch.symeig (#70988)
...
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980
2023-01-23 22:51:40 +00:00
soulitzer
88366a9075
Document hooks ordering behavior in the autograd note (#91667)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91667
Approved by: https://github.com/albanD
2023-01-18 00:20:13 +00:00
Tugsbayasgalan Manlaibaatar
b32b81a0c5
Make torch.split take symint as arg (#91724)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91724
Approved by: https://github.com/voznesenskym
2023-01-07 00:00:03 +00:00
Samantha Andow
a7749ae177
[reland] rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218) (#89221)
...
Summary: First half of #87990. This doesn't change any of the behavior and is just a rename.
#88218 got reverted for internal breakages. This is the reland, started from internal.
Differential Revision: D41268423
LaMa Project: L1098534
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89221
Approved by: https://github.com/meliy-meyada, https://github.com/zou3519
2023-01-04 18:32:49 +00:00
Kurt Mohler
08a47549af
Rename Tensor._storage to Tensor.untyped_storage and update docs (#91414)
...
Fixes #89224
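A small illustration of the renamed accessor:
```python
import torch

t = torch.arange(4)
s = t.untyped_storage()  # formerly the private Tensor._storage()
print(s.nbytes())        # 32: four int64 elements
```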
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91414
Approved by: https://github.com/ezyang
2022-12-28 19:21:34 +00:00
Edward Z. Yang
2ad6ed8ac9
Fix some typed storage is deprecated warnings. (#89867)
...
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89867
Approved by: https://github.com/albanD
2022-12-07 20:09:57 +00:00
Luis Montero
740860d414
Add type hint to torch.norm and Tensor.norm (#89728)
...
Fixes #89727
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89728
Approved by: https://github.com/kit1980
2022-11-29 02:09:51 +00:00
Pearu Peterson
50e2e4faf3
Sparse CSC/BSR/BSC serialization and pickle support (#89553)
...
Fixes https://github.com/pytorch/pytorch/issues/89497
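A hedged sketch of the round trip this adds for the compressed sparse layouts:
```python
import io
import torch

a = torch.tensor([[0.0, 1.0], [2.0, 0.0]]).to_sparse_csc()

buf = io.BytesIO()
torch.save(a, buf)
buf.seek(0)
b = torch.load(buf)
print(b.layout)  # torch.sparse_csc
```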
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89553
Approved by: https://github.com/cpuhrsch
2022-11-23 20:56:48 +00:00
Charlie West-Taylor
27db806888
Handle Tensor.__deepcopy__ via clone(), on IPU (#89129)
...
Currently it falls through to a call to `storage()`, which the IPU doesn't support.
I've made the minimal change here for ease of merging (this'd help us if it was in for 1.13.1); however...
**QUESTION**: Is there any reason why `not torch._C._has_storage(self)` needs to *also* be guarded on `self.device.type == privateuseone`? In other words, could the condition for using `clone` not be this?
```python
self.is_sparse
or self.device.type
in ["lazy", "xla", "mps", "ort", "meta", "hpu", "ipu"]
or not torch._C._has_storage(self)
or (type(self) is not Tensor and self.data_ptr() == 0)
```
If the condition fails, the very next thing is a call to `self._typed_storage()` which will fail, so it feels to me like *any* case without storage shouldn't fall through to the `storage()` call.
The original PR for adding the 'no storage and device is `PrivateUse1`' condition ([86557](https://github.com/pytorch/pytorch/pull/86557)) doesn't discuss whether this could be broadened.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89129
Approved by: https://github.com/albanD
2022-11-23 19:41:09 +00:00
kshitij12345
f74946324e
[fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)
...
Fixes: https://github.com/pytorch/pytorch/issues/72129
TODO:
* [x] Fix for Parameter
Benchmark
(Measurable diff for small tensors)
```
[-------------- Save and Load --------------]
                    |  After PR  |  Before PR
1 threads: ----------------------------------
      ()            |    111.7   |    106.9
      (4, 4)        |    114.4   |    109.2
      (128, 128)    |    135.2   |    128.3
      (1024, 1024)  |   1431.9   |   1431.3
Times are in microseconds (us).
```
<details>
<summary> Benchmark Script </summary>
```python
import torch
from torch.testing._internal.common_utils import BytesIOContext
from torch.utils import benchmark
import pickle
shapes = ((), (4, 4), (128, 128), (1024, 1024))
sizes = [1, 64, 1024, 10000]
results = []

def save_load_fn(t):
    # Round-trip the tensor through torch.save/torch.load in memory.
    with BytesIOContext() as f:
        torch.save(t, f)
        f.seek(0)
        torch.load(f)

for shape in shapes:
    t = torch.randn(shape)
    label = 'Save and Load'
    sub_label = f'{shape}'
    results.append(benchmark.Timer(
        stmt='save_load_fn(t)',
        globals={'t': t, 'save_load_fn': save_load_fn},
        label=label,
        sub_label=sub_label,
        description='Before PR',
    ).blocked_autorange(min_run_time=2))

compare = benchmark.Compare(results)
compare.print()

with open('before_pr.pkl', 'wb') as f:
    pickle.dump(results, f)

# with open('after_pr.pkl', 'rb') as f:
#     after_pr = pickle.load(f)
# with open('before_pr.pkl', 'rb') as f:
#     before_pr = pickle.load(f)
# compare = benchmark.Compare(after_pr + before_pr)
# compare.print()
```
</details>
NOTE: **BC-Breaking**: After this PR, all tensors (also regular tensors) will be serialised using `_rebuild_from_type_v2`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81616
Approved by: https://github.com/albanD, https://github.com/kurtamohler
2022-11-11 21:11:12 +00:00
PyTorch MergeBot
ba4d5aae06
Revert "rename DisableTorchFunction to DisableTorchFunctionSubclass ( #88218 )"
...
This reverts commit 7f28be10e5.
Reverted https://github.com/pytorch/pytorch/pull/88218 on behalf of https://github.com/izaitsevfb due to BC-breaking change, D41211901
2022-11-11 19:13:05 +00:00
samdow
7f28be10e5
rename DisableTorchFunction to DisableTorchFunctionSubclass (#88218)
...
First half of #87990. This doesn't change any of the behavior and is just a rename.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88218
Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-11-10 14:51:13 +00:00
kshitij12345
eb9b156019
[fix] MathBits: serialization (#88182)
...
Fixes #81690
TODO:
* [x] C++ Unpickler Fix (locally tested pickled in Python and unpickled in C++)
* [x] C++ Pickler Fix (locally tested pickled in C++ and unpickled in Python)
* [x] Do quant_tensor, sparse_tensor, etc require similar changes? (Sparse and Quant don't need this)
* [x] Add Comments
* [x] How to make sure C++ and Python are in sync? (Functions in `pickler.h` help in getting and setting Tensor Metadata (math-bits for now) on a tensor. They are the only place which should handle this.)
Notes:
Quantized tensors don't support complex dtypes, and for float they segfault with `_neg_view`: https://github.com/pytorch/pytorch/issues/88484
Sparse Tensor:
```python
>>> a = torch.tensor([[0, 2.], [3j, 0]]).to_sparse()
>>> a.conj().is_conj()
False
>>> a._neg_view()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NotImplementedError: Cannot access storage of SparseTensorImpl
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88182
Approved by: https://github.com/ezyang, https://github.com/anjali411
2022-11-09 17:15:12 +00:00
Kurt Mohler
ee28b865ee
Deprecate TypedStorage, its derived classes, and all of their public methods (#85303)
...
Part of #85302
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85303
Approved by: https://github.com/ezyang
2022-11-08 18:11:01 +00:00
PyTorch MergeBot
78a0ca29d9
Revert "[fix] allow saving python attr on Tensor and Parameter via torch.save ( #81616 )"
...
This reverts commit 54b6188cc6.
Reverted https://github.com/pytorch/pytorch/pull/81616 on behalf of https://github.com/mehtanirav due to internal publishing being broken
2022-11-07 18:51:16 +00:00
Kshiteej K
54b6188cc6
[fix] allow saving python attr on Tensor and Parameter via torch.save (#81616)
...
Fixes: https://github.com/pytorch/pytorch/issues/72129
TODO:
* [x] Fix for Parameter
Benchmark
(Measurable diff for small tensors)
```
[-------------- Save and Load --------------]
                    |  After PR  |  Before PR
1 threads: ----------------------------------
      ()            |    111.7   |    106.9
      (4, 4)        |    114.4   |    109.2
      (128, 128)    |    135.2   |    128.3
      (1024, 1024)  |   1431.9   |   1431.3
Times are in microseconds (us).
```
<details>
<summary> Benchmark Script </summary>
```python
import torch
from torch.testing._internal.common_utils import BytesIOContext
from torch.utils import benchmark
import pickle
shapes = ((), (4, 4), (128, 128), (1024, 1024))
sizes = [1, 64, 1024, 10000]
results = []

def save_load_fn(t):
    # Round-trip the tensor through torch.save/torch.load in memory.
    with BytesIOContext() as f:
        torch.save(t, f)
        f.seek(0)
        torch.load(f)

for shape in shapes:
    t = torch.randn(shape)
    label = 'Save and Load'
    sub_label = f'{shape}'
    results.append(benchmark.Timer(
        stmt='save_load_fn(t)',
        globals={'t': t, 'save_load_fn': save_load_fn},
        label=label,
        sub_label=sub_label,
        description='Before PR',
    ).blocked_autorange(min_run_time=2))

compare = benchmark.Compare(results)
compare.print()

with open('before_pr.pkl', 'wb') as f:
    pickle.dump(results, f)

# with open('after_pr.pkl', 'rb') as f:
#     after_pr = pickle.load(f)
# with open('before_pr.pkl', 'rb') as f:
#     before_pr = pickle.load(f)
# compare = benchmark.Compare(after_pr + before_pr)
# compare.print()
```
</details>
NOTE: **BC-Breaking**: After this PR, all tensors (also regular tensors) will be serialised using `_rebuild_from_type_v2`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81616
Approved by: https://github.com/albanD, https://github.com/kurtamohler
2022-11-03 09:57:47 +00:00
Sheil Kumar
4839f73f32
Fix incorrect tensor storage check (#86845)
...
The change in https://github.com/pytorch/pytorch/pull/86557 contained an incorrect check for storage:
**self.storage is not None**
should have been:
**not torch._C._has_storage(self)**
These fixes were run through the DirectML test suite, which confirms the check is now working correctly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86845
Approved by: https://github.com/martinb35, https://github.com/bdhirsh
2022-10-13 17:54:28 +00:00
Sheil Kumar
f24d174fff
Allow PrivateUse1 backends to not have Storage (#86557)
...
To unblock the DirectML backend, this change would be needed for 1.13 as well.
The DirectML backend creates tensors using the open registration pattern documented here: https://pytorch.org/tutorials/advanced/extend_dispatcher.html
[registration example](https://github.com/bdhirsh/pytorch_open_registration_example)
However, DirectML tensors are opaque, and do not have Storage.
The DirectML Tensor Impl derives from OpaqueTensorImpl, which does not have storage. Because of this, various places in the code that expect storage to be present fail. We had made various changes in-tree to accommodate this:
a. def __deepcopy__(self, memo):
b5acba8895/torch/_tensor.py (L119)
or self.device.type in ["lazy", "xla", "mps", "ort", "meta", "hpu", 'dml']
b. def _reduce_ex_internal(self, proto):
b5acba8895/torch/_tensor.py (L275)
if self.device.type in ["xla", "ort", "hpu", "dml"]:
c. TensorIteratorBase::build has an unsupported list for tensors without storage.
b5acba8895/aten/src/ATen/TensorIterator.cpp (L1497)
Using the PrivateUse1 backend, similar exemptions need to be made in order to relax requirements on Storage so that the DirectML backend tensors can work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86557
Approved by: https://github.com/bdhirsh, https://github.com/martinb35
2022-10-12 15:26:29 +00:00
胡玮文
92562046e9
Optimize __dlpack_device__ performance (#86665)
...
This can be critical when processing a large number of tensors:
```bash
python -m timeit --setup 'import torch; t = torch.empty(1000, device="cuda")' 't.__dlpack_device__()'
```
based on 1.12.1:
before:
100000 loops, best of 5: 2.32 usec per loop
after:
500000 loops, best of 5: 844 nsec per loop
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86665
Approved by: https://github.com/SunDoge, https://github.com/soulitzer
2022-10-11 19:03:46 +00:00
Ivan Yashchuk
539076e2c2
Remove deprecated torch.lstsq (#70980)
...
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.lstsq`.
There's a note in `tools/codegen/gen.py` about the `lstsq` schema in `native_functions.yaml` that I will not remove:
87139d8532/tools/codegen/gen.py (L734-L770)
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
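A migration sketch: the documented replacement is `torch.linalg.lstsq`; note that the argument order differs from the removed API:
```python
import torch

A = torch.randn(5, 3)
b = torch.randn(5, 2)
# The old API was torch.lstsq(b, A); the new one takes (A, b) and returns a
# named tuple.
x = torch.linalg.lstsq(A, b).solution
```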
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70980
Approved by: https://github.com/lezcano, https://github.com/kit1980
2022-09-23 00:16:55 +00:00
Aidyn-A
1456cca1fc
Fix exception handling, improve overheads and avoid constructing storage for element size (#84612)
...
These changes were proposed by @MatthiasKohl in #84271 and #84542, which fix #84267 and #84056 respectively.
I am creating this pull request because of the CLA check (see the original PRs).
cc @ptrblck @malfet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84612
Approved by: https://github.com/ngimel
2022-09-19 20:21:46 +00:00
Sergii Dymchenko
e980ff8eb9
Remove unused method_assignments (#84917)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84917
Approved by: https://github.com/huydhn
2022-09-13 04:04:07 +00:00
Ivan Yashchuk
01c54ad6de
Remove deprecated torch.eig (#70982)
...
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.eig`.
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
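A migration sketch: the documented replacement is `torch.linalg.eig`, which returns complex eigenvalues and eigenvectors directly:
```python
import torch

a = torch.randn(3, 3)
eigenvalues, eigenvectors = torch.linalg.eig(a)  # complex-valued results
```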
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70982
Approved by: https://github.com/Lezcano, https://github.com/malfet
2022-09-09 21:31:57 +00:00