Commit Graph

247 Commits

Author SHA1 Message Date
Brian Hirsh
1665715cb0 add sym_strides() function, use in fake/proxy tensors (#81300)
Add `TensorImpl::sym_strides`, bind it to python with `torch.ops.aten.sym_strides`, and use it in `ProxyTensor` and `FakeTensor`.

Before, `ProxyTensor` was generating `ProxySymInt`s for the sizes, but not for the strides. Internally we still represent strides with a `SymIntArrayRef`, though, so I ran into some weird issues where sizes were showing up as `ProxySymInt`s but strides were `PySymInt`s.
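A conceptual sketch (plain Python, not the actual ProxyTensor internals) of why sizes and strides need to share one symbolic representation: for a contiguous tensor the strides are derived from the sizes, so symbolic sizes imply symbolic strides.

```
import torch

# Conceptual sketch only: for a contiguous tensor, strides are computed from
# sizes, so whatever wrapper the sizes use during tracing (ProxySymInt), the
# strides must use the same one to stay consistent.
def contiguous_strides(sizes):
    strides, running = [], 1
    for s in reversed(sizes):
        strides.append(running)
        running *= s
    return list(reversed(strides))

t = torch.empty(2, 3, 4)
assert contiguous_strides(list(t.size())) == list(t.stride())
```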

Differential Revision: [D38594558](https://our.internmc.facebook.com/intern/diff/D38594558)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81300
Approved by: https://github.com/ezyang
2022-08-16 14:31:27 +00:00
soulitzer
ccb7d56a18 Rename PyFunctionPreHook to PyFunctionTensorPreHook (#83225)
Now that there will be two types of Python function pre-hooks, I prefer to have the PyFunction hook that takes all grad_outputs and returns all grad_inputs as the more "canonical" one.
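For illustration, a tensor-level hook registered through `Tensor.register_hook` (the kind backed by the renamed `PyFunctionTensorPreHook`) sees a single gradient; the function-level pre-hook described above would instead take and return all grad_outputs. Minimal sketch:

```
import torch

# A tensor-level hook receives one gradient and may return a replacement.
x = torch.randn(3, requires_grad=True)
x.register_hook(lambda grad: grad * 2)  # scale the incoming gradient
(x * x).sum().backward()
print(x.grad)  # equals 2 * (2 * x) because the hook doubled the gradient
```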
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83225
Approved by: https://github.com/albanD
2022-08-12 22:14:32 +00:00
Nikolay Korovaiko
5b621205f4 Revert "Revert "adding a custom caster for c10::SymInt (#82692)"" (#83223)
This should fix the macOS build errors and reland #82692.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83223
Approved by: https://github.com/albanD
2022-08-12 00:46:50 +00:00
Mateusz Sypniewski
916def84d4 CUDA trace Python hooks (#82824)
### Description
This adds Python hooks into PyTorch that allow the user to register their own callbacks for events such as tensor allocation, stream allocation, event record/wait, etc.
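A hedged usage sketch; the module path and callback name below (`torch.cuda._gpu_trace.register_callback_for_cuda_memory_allocation`) are assumptions, so they are looked up defensively:

```
import torch

# Hedged sketch: register a callback for CUDA memory allocations.
# The module/function names are assumptions, hence the defensive lookups.
trace = getattr(torch.cuda, "_gpu_trace", None)
if trace is not None and hasattr(trace, "register_callback_for_cuda_memory_allocation"):
    trace.register_callback_for_cuda_memory_allocation(
        lambda ptr: print("allocated CUDA memory at", ptr)
    )
    if torch.cuda.is_available():
        _ = torch.empty(1024, device="cuda")  # should fire the callback
```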
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82824
Approved by: https://github.com/lw, https://github.com/ezyang, https://github.com/malfet
2022-08-11 10:21:40 +00:00
PyTorch MergeBot
daeea7d2c3 Revert "adding a custom caster for c10::SymInt (#82692)"
This reverts commit dee63f4f7b.

Reverted https://github.com/pytorch/pytorch/pull/82692 on behalf of https://github.com/seemethere due to Broke internal builds, see [logs](https://www.internalfb.com/intern/sandcastle/job/4503600373141339/insights)
2022-08-09 22:17:41 +00:00
Nikolay Korovaiko
dee63f4f7b adding a custom caster for c10::SymInt (#82692)
### Description
Adding a custom caster for `c10::SymInt`. This simplifies the handling of `c10::SymInt` at the C++/PyTorch boundary, namely by removing the if statements needed to handle the union nature (e.g. SymIntNode, int) of `c10::SymInt`.
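A conceptual sketch in plain Python (not the pybind11 caster itself) of the branching that a dedicated caster centralizes at the boundary:

```
# Conceptual sketch only: c10::SymInt is a union of a concrete int and a
# symbolic node. Without a dedicated caster, every binding site needs a
# branch like the helper below; the custom caster moves this conversion to
# one place on the C++/Python boundary. FakeSymIntNode is a stand-in here,
# not a real PyTorch type.
class FakeSymIntNode:
    def __init__(self, expr):
        self.expr = expr

def to_python(symint):
    if isinstance(symint, int):   # the kind of `if` the caster removes
        return symint
    return FakeSymIntNode(symint)

print(to_python(3), to_python("s0").expr)
```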

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82692
Approved by: https://github.com/ezyang
2022-08-08 21:40:53 +00:00
Nikolay Korovaiko
bfebf254dd Re-land sym_numel (#82374) (#82726) (#82731) (#82855)
### Description
This is a reland of #82374, #82726 and #82731.
This PR has no extra fixes; it simply updates the pin to point to the **correct** commit on the XLA side that has the corresponding changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82855
Approved by: https://github.com/ezyang, https://github.com/qihqi
2022-08-05 03:36:09 +00:00
PyTorch MergeBot
78bd95b13a Revert "Re-land sym_numel (#82374) (#82726) (#82731)"
This reverts commit c90e00cf85.

Reverted https://github.com/pytorch/pytorch/pull/82731 on behalf of https://github.com/zengk95 due to it breaking XLA tests on trunk. It seems to have passed on the PR, and I was able to check out that commit (c90e00cf85).
2022-08-04 22:45:26 +00:00
Nikolay Korovaiko
c90e00cf85 Re-land sym_numel (#82374) (#82726) (#82731)
This PR relands sym_numel (#82374) and fixes the iOS build break in commit 8cbd0031c5,
which was a type mismatch in an equality.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82731
Approved by: https://github.com/malfet
2022-08-04 21:05:24 +00:00
zengk95
d0e6e5a5bb Revert "sym_numel (#82374)" (#82726)
TSIA

It looks like PR #82374 is breaking Mac builds on trunk, but I can't revert it normally since there's a merge conflict in the XLA hash.
(Screenshot of the failing builds: https://user-images.githubusercontent.com/34172846/182644661-b7fdda4b-e5ce-45c3-96a2-ad6737d169ae.png)

I reverted it and resolved the conflict using the old XLA hash that this commit was based upon.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82726
Approved by: https://github.com/albanD, https://github.com/janeyx99
2022-08-03 15:23:47 +00:00
Nikolay Korovaiko
fd68b0931f sym_numel (#82374)
### Description
This PR makes `numel` symint-aware, similar to `sym_sizes()` and `sym_strides()` (see https://github.com/pytorch/pytorch/pull/81300). This PR is part of a bigger project to support dynamic shapes.
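A conceptual check of the invariant involved: `numel` is the product of the sizes, so once sizes can be symbolic, numel must be computed in the same symbolic domain (`sym_numel`) rather than collapsing to a plain int.

```
import operator
from functools import reduce

import torch

# For a concrete tensor, numel is just the product of the sizes.
t = torch.empty(2, 3, 4)
assert t.numel() == reduce(operator.mul, t.size(), 1)
```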

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82374
Approved by: https://github.com/ezyang
2022-08-03 06:33:45 +00:00
Elias Ellison
642aed8b99 Add Autocast Support for FakeTensors / use fake device dispatch keys (#82449)
From PR:
```
Note: [Fake Tensor Dispatch Keys]
In order to model the behavior of device-specific autocast
and autograd logic, we update the dispatch keys of FakeTensors
to reflect their fake device. This includes the BackendComponent
(DispatchKey::Meta -> DispatchKey::CUDA), and also the BackendComponent
related Autocast and Autograd keys. __torch_dispatch__ sits below
Autocast and Autograd, and is only invoked when we are at the
kernel for the BackendComponent. Then, we add Meta to the
thread-local dispatch include set to hit the meta kernel
instead of the kernel of the BackendComponent for the fake device.
```

Also adds the `conv1/2/3d.padding` operators to the Autocast rule set. Without that fix, the FakeTensor dtype would diverge.

See: https://github.com/pytorch/pytorch/issues/81608
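A hedged sketch of the behavior this enables; the `FakeTensorMode` import path is assumed, and the float16 result requires CUDA autocast to be active on the machine:

```
import torch
from torch._subclasses import FakeTensorMode  # import path assumed

with FakeTensorMode():
    a = torch.randn(4, 4, device="cuda")   # fake CUDA tensors, no GPU needed
    b = torch.randn(4, 4, device="cuda")
    with torch.autocast("cuda"):
        out = a @ b
    print(out.dtype)  # torch.float16 when CUDA autocast is active
```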

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82449
Approved by: https://github.com/ezyang
2022-08-01 21:40:36 +00:00
Edward Z. Yang
a9320e6d96 Delete SymInt::data() in favor of as_int_unchecked() (#82477)
I audited all the sites while I was at it, and marked a few suspicious
ones.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82477
Approved by: https://github.com/Chillee
2022-08-01 15:07:22 +00:00
Edward Z. Yang
fd5ac1e6b5 Rename SymbolicIntNode to SymIntNodeImpl (#82350)
Done via

```
git grep -l 'SymbolicIntNode' | xargs sed -i 's/SymbolicIntNode/SymIntNodeImpl/g'
```

Reasoning for the change:

* Sym is shorter than Symbolic, and consistent with SymInt
* You usually will deal in shared_ptr<...>, so we're going to
  reserve the shorter name (SymIntNode) for the shared pointer.

But I don't want to update the Python name, so afterwards I ran

```
 git grep -l _C.SymIntNodeImpl | xargs sed -i 's/_C.SymIntNodeImpl/_C.SymIntNode/'
```

and manually fixed up the binding code

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82350
Approved by: https://github.com/Krovatkin
2022-07-28 18:27:45 +00:00
albanD
4b7de26556 Fix C API to be compatible with latest 3.11 beta (#81242)
Based on https://github.com/pytorch/pytorch/pull/80511 with extra changes:
- Update pybind to the latest release, as it contains some needed fixes
- Extend the compat header to reduce the changes needed in the code
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81242
Approved by: https://github.com/malfet, https://github.com/mattip
2022-07-27 08:37:10 +00:00
Edward Z. Yang
563f6c7a9e Pass stride overload, not overload packet; add aten.stride.default (#82083)
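For context, a small illustration of the overload packet vs. overload distinction this commit relies on (shown with `aten.add`, not `aten.stride`):

```
import torch

packet = torch.ops.aten.add           # OpOverloadPacket: all overloads of add
overload = torch.ops.aten.add.Tensor  # OpOverload: one concrete schema
print(type(packet).__name__, type(overload).__name__)
x = torch.ones(2)
print(overload(x, x))                 # calling a specific overload directly
```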
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82083
Approved by: https://github.com/albanD
2022-07-25 18:28:30 +00:00
lezcano
0b5b10002a Reduce the boilerplate needed to bind properties (#81576)
We implement a template and fill it in via CRTP. This heavily reduces
the amount of repeated code.

Just testing the waters here. If you like this idea, I can easily extend
it to cover many of the properties that we currently implement.

N.b. It'd be nice to have proper `if constexpr` support for this one,
but here we are.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81576
Approved by: https://github.com/ezyang
2022-07-16 08:58:42 +00:00
Horace He
a0af1d73ed Check if symbolic shapes are present before using the fallback for sizes, and also check for a custom size policy in shallow_copy_and_detach (#81078)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81078
Approved by: https://github.com/ezyang
2022-07-16 04:54:10 +00:00
Edward Z. Yang
57c6bbd274 Make TensorImpl::check_pyobj const (#81001)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81001
Approved by: https://github.com/albanD
2022-07-08 14:07:33 +00:00
Nikolay Korovaiko
8389ccbcd8 reinstate size and shape returning symints (#79560)
This PR redirects `size` and `.shape` to call `sym_sizes`
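A minimal check of the user-visible behavior: `.shape` and `size()` stay in agreement since both now route through `sym_sizes`; for a plain tensor they still return ordinary ints.

```
import torch

t = torch.empty(2, 3)
print(t.size(), t.shape)                  # torch.Size([2, 3]) twice
assert tuple(t.shape) == tuple(t.size())
```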
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79560
Approved by: https://github.com/Chillee
2022-07-08 01:17:33 +00:00
Edward Z. Yang
74877943b8 Don't invoke mode as overloaded argument in torch dispatch (#80992)
I noticed that in some situations torch dispatch modes were being
invoked with a mode active, which isn't supposed to happen (we
disable modes before calling into the user mode). I also noticed that
I was getting a warning about a deprecated non-static definition of
torch dispatch on an argument even though there wasn't one.

It turns out this is because modes were part of the overloaded arguments
list in the Python fallback kernel for torch dispatch.  This is wrong;
instead we should rely on the actual dispatching function to consult
modes.  This makes the code simpler.
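A hedged sketch of the invariant being restored: while a mode's `__torch_dispatch__` runs, the mode itself is disabled, so re-dispatching inside it does not recurse back into the mode (the `TorchDispatchMode` import location is assumed).

```
import torch
from torch.utils._python_dispatch import TorchDispatchMode  # location assumed

class LoggingMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print("dispatching", func)
        # the mode is popped here, so this call does not re-enter LoggingMode
        return func(*args, **(kwargs or {}))

with LoggingMode():
    torch.ones(2) + 1
```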

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80992
Approved by: https://github.com/zou3519
2022-07-06 23:45:59 +00:00
George Qi
393f7f6ad7 add layout to slow path (#80429)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80429
Approved by: https://github.com/ezyang
2022-07-06 18:01:31 +00:00
Nikolay Korovaiko
7e34edf12d adding sym_size override (#80357)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80357
Approved by: https://github.com/ezyang
2022-06-29 00:53:45 +00:00
Edward Z. Yang
f7ee061638 Wconstab/reland pysymint (#79795)
rebased https://github.com/pytorch/pytorch/pull/79617/ to see if issues are reproducible.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79795
Approved by: https://github.com/malfet
2022-06-20 22:55:06 +00:00
samdow
24243659e4 disable modes during constructor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79143

Approved by: https://github.com/ezyang
2022-06-17 22:28:27 +00:00
PyTorch MergeBot
44436947bc Revert "Reland PySymInt (#79617)"
This reverts commit 8ef6356f26.

Reverted https://github.com/pytorch/pytorch/pull/79617 on behalf of https://github.com/zengk95 due to this is breaking periodic jobs (and maybe pull) on trunk
2022-06-16 19:40:27 +00:00
Nikolay Korovaiko
8ef6356f26 Reland PySymInt (#79617)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79617
Approved by: https://github.com/Chillee
2022-06-16 04:18:06 +00:00
PyTorch MergeBot
b8db0a0475 Revert "Python Bindings for SymInts (#78135)"
This reverts commit d332724071.

Reverted https://github.com/pytorch/pytorch/pull/78135 on behalf of https://github.com/ezyang due to broke torchvision tests
2022-06-15 13:52:14 +00:00
Nikolay Korovaiko
d332724071 Python Bindings for SymInts (#78135)
This PR adds support for `SymInt`s in Python (see the sketch after this list). Namely,
* `THPVariable_size` now returns `sym_sizes()`
* the python arg parser is modified to parse PyObjects into ints and `SymbolicIntNode`s
* pybind11 bindings for `SymbolicIntNode` are added, so size expressions can be traced
* a large number of tests are added to demonstrate how to implement python symints.
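A minimal sketch of the unchanged concrete-tensor behavior (symbolic ints only show up under tracing with dynamic shapes, which is not exercised here):

```
import torch

t = torch.empty(2, 3)
print([type(s) for s in t.size()])   # plain Python ints for a concrete tensor
print(torch.empty(*t.size()).shape)  # sizes round-trip through the arg parser
```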
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78135
Approved by: https://github.com/ezyang
2022-06-14 02:17:59 +00:00
George Qi
05624bcf7b add sizes to slowpath
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79295

Approved by: https://github.com/ezyang
2022-06-14 01:19:59 +00:00
Michael Suo
30fb2c4aba [lint] autoformat test/cpp and torch/csrc
Let's have some fun.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78828

Approved by: https://github.com/ezyang
2022-06-11 21:11:16 +00:00
George Qi
a90f006fe5 add strides to slow path
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78610

Approved by: https://github.com/ezyang
2022-06-10 16:59:14 +00:00
vitrioil
ebb7f424b8 Add Tensor.is_cpu (#78887)
Fixes #76872

Not sure if this is also required.
ac8c6d09d1/torch/csrc/tensor/python_tensor.cpp (L146)
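Minimal usage check of the new property:

```
import torch

print(torch.randn(2).is_cpu)                     # True
if torch.cuda.is_available():
    print(torch.randn(2, device="cuda").is_cpu)  # False
```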
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78887
Approved by: https://github.com/ezyang
2022-06-06 22:01:12 +00:00
PyTorch MergeBot
8047d2a564 Revert "Reenable assert after test update"
This reverts commit b0814b63df.

Reverted https://github.com/pytorch/pytorch/pull/78658 on behalf of https://github.com/malfet due to test_ops crashes with SIGIOT on both PR and trunk CI, see b0814b63df
2022-06-03 00:21:23 +00:00
Michael Suo
22b10873f3 Allow torchdispatch to customize dim()
This follows the template in
https://github.com/pytorch/pytorch/pull/77396

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78691

Approved by: https://github.com/ezyang
2022-06-02 20:54:13 +00:00
Howard Huang
b0814b63df Reenable assert after test update
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78658

Approved by: https://github.com/ezyang, https://github.com/albanD
2022-06-02 16:40:06 +00:00
Michael Suo
876c359347 Generalize sizes and strides policy on _make_wrapper_subclass
Previously, there was a `dispatch_strides` boolean arg. Change this to
a string argument that directly maps onto `SizesStridesPolicy`.
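A hedged sketch of the new surface; the keyword name (`dispatch_sizes_strides_policy`) and the accepted values (`"sizes"`/`"strides"`) are assumptions based on this description:

```
import torch

class Wrapper(torch.Tensor):
    @staticmethod
    def __new__(cls, elem):
        # keyword name and values are assumptions; previously a bool arg
        return torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), strides=elem.stride(), dtype=elem.dtype,
            device=elem.device, dispatch_sizes_strides_policy="strides",
        )

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        raise NotImplementedError(f"wrapper got {func}")

w = Wrapper(torch.randn(2, 3))
print(type(w).__name__, w.size())  # sizes still come from the stored metadata
```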

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78646

Approved by: https://github.com/ezyang
2022-06-02 02:06:38 +00:00
Zachary DeVito
b6920405da reorder checks to shave 1 us off no-op dispatch time
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78316

Approved by: https://github.com/Chillee, https://github.com/ezyang
2022-05-26 02:27:33 +00:00
Elias Ellison
2d93e1fada Add slow path for device
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77684

Approved by: https://github.com/ezyang
2022-05-24 21:56:01 +00:00
George Qi
294fff16ec add slow path for is_contiguous (#77906)
Test Plan: CI

Reviewed By: malfet, b0noI

Differential Revision: D36493890

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77906
Approved by: https://github.com/malfet
2022-05-19 22:52:45 +00:00
PyTorch MergeBot
00a187c373 Revert "add slow path for is_contiguous"
This reverts commit f6beda89c6.

Reverted https://github.com/pytorch/pytorch/pull/77396 on behalf of https://github.com/malfet
2022-05-19 17:07:54 +00:00
George Qi
f6beda89c6 add slow path for is_contiguous
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77396

Approved by: https://github.com/ezyang, https://github.com/cpuhrsch
2022-05-18 02:25:27 +00:00
Edward Z. Yang
b5bc954a71 Fix optional dtype/layout/memory_format pycall; fix memory format
Double-header bug fix:

- As reported by jansel, dtypes are still showing up as integers
  when the schema is an optional dtype.  This is simple enough to
  fix and I added a test for it.  But while I was at it...

- I noticed that the THPMemoryFormat_new idiom with an "unused" name
  doesn't actually work: the repr of the returned memory format
  object is wrong, and this shows up when we try to log the args/kwargs.
  So I fixed memory format to do it properly, along with everything
  else.

Fixes https://github.com/pytorch/pytorch/issues/77135
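A hedged sketch of what the fix guarantees when logging from torch dispatch: optional dtype/memory_format kwargs arrive as `torch.dtype`/`torch.memory_format` objects rather than bare integers (the `TorchDispatchMode` import location is assumed).

```
import torch
from torch.utils._python_dispatch import TorchDispatchMode  # location assumed

class ArgLogger(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        for k, v in kwargs.items():
            print(func, k, type(v).__name__, v)  # dtype should be a torch.dtype
        return func(*args, **kwargs)

x = torch.randn(2, 3)
with ArgLogger():
    x.sum(dim=0, dtype=torch.float64)
```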

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77543

Approved by: https://github.com/albanD, https://github.com/jansel
2022-05-16 16:46:08 +00:00
Edward Z. Yang
0a14a4c280 Register prims as operators.
This makes prims look as if they were defined in native_functions.yaml,
but they're still all written in Python. You now need to give a full
schema string for your prims. The returned prim object is now a
torch.ops.prim overload (prims are not allowed to be overloaded,
so we return the overload, not the overload packet, for speed).
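A hedged sketch of what "registered as operators" means in practice; the namespace spelling (`prims`) and the `.default` overload below follow current PyTorch and are assumptions relative to this commit:

```
import torch
import torch._prims  # ensures the prim definitions are registered (path assumed)

add_prim = torch.ops.prims.add.default          # a concrete overload, not a packet
print(add_prim(torch.ones(2), torch.ones(2)))   # tensor([2., 2.])
```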

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77117

Approved by: https://github.com/mruberry, https://github.com/albanD
2022-05-11 16:38:14 +00:00
anjali411
55f55a4cf6 Allow users to override kernels for existing C++ ops through Python
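A hedged sketch of overriding an existing C++ op's CPU kernel from Python via `torch.library`; the exact registration surface available at the time of this commit may have differed:

```
import torch

lib = torch.library.Library("aten", "IMPL")

def my_relu(x):
    print("python relu kernel")
    return torch.clamp_min(x, 0)  # implemented with a different op to avoid recursion

lib.impl("relu", my_relu, "CPU")
print(torch.tensor([-1.0, 2.0]).relu())
```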
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75905

Approved by: https://github.com/ezyang
2022-05-05 03:31:39 +00:00
samdow
6779366f27 add nested mode to python mode
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75965

Approved by: https://github.com/albanD, https://github.com/ezyang, https://github.com/zou3519
2022-05-04 13:01:06 +00:00
samdow
598e7e5f19 [Reland] Change 'python mode' to 'torch dispatch mode'
Renames Python Mode to Torch Dispatch Mode: there is now a Torch Function Mode, so Torch Dispatch Mode and Torch Function Mode are consistent with each other.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76562
Approved by: https://github.com/zou3519, https://github.com/albanD
2022-05-02 20:06:43 +00:00
PyTorch MergeBot
395a620a4f Revert "Change 'python mode' to 'torch dispatch mode'"
This reverts commit 7203a73986.

Reverted https://github.com/pytorch/pytorch/pull/76562 on behalf of https://github.com/janeyx99
2022-05-02 14:42:11 +00:00
samdow
7203a73986 Change 'python mode' to 'torch dispatch mode'
Renames Python Mode to Torch Dispatch Mode: there is now a Torch Function Mode, so Torch Dispatch Mode and Torch Function Mode are consistent with each other.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76562
Approved by: https://github.com/zou3519
2022-05-02 13:33:58 +00:00
Kulin Seth
54c75e1e8f Add "mps" device to PyTorch framework.
Remove the "mlc" device for Mac platforms.

This commit will be followed up with:

* adding MPS runtime components
* PyTorch ops for MPS device

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76291
Approved by: https://github.com/albanD
2022-04-27 19:21:57 +00:00