Commit Graph

392 Commits

Author SHA1 Message Date
Peter Bell
7843a5e882 Move Tensor.grad back into C++
`Tensor.grad` was moved to python in #30531 to add a warning. However,
that warning has since been lowered into C++ so this wrapper is no
longer necessary.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76675

Approved by: https://github.com/albanD
2022-06-10 13:44:45 +00:00
vitrioil
ebb7f424b8 Add Tensor.is_cpu (#78887)
Fixes #76872

Not sure if this is also required.
ac8c6d09d1/torch/csrc/tensor/python_tensor.cpp (L146)
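
A minimal sketch of the new attribute (illustrative):

```python
import torch

torch.tensor([1.0]).is_cpu    # True
# a tensor on an accelerator (e.g. CUDA) would report False, mirroring is_cuda
```
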
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78887
Approved by: https://github.com/ezyang
2022-06-06 22:01:12 +00:00
Rohit Goswami
3f58dd18dc ENH: Add a force argument to numpy() (#78564)
**Reopened** to help with merge issues. See #59790 for full context.

Fixes #20778. Helps #71688.

Finalizes @martinPasen's force argument for `Tensor.numpy()`. It is set to False by default. If it is set to True then we (see the sketch after this list):
1. Detach the tensor, if requires_grad == True
2. Move it to the CPU, if it is not on the CPU already
3. Use .resolve_conj(), if .is_conj() == True
4. Use .resolve_neg(), if .is_neg() == True
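
A minimal sketch of the behaviour described above (values illustrative):

```python
import torch

t = torch.ones(2, requires_grad=True)
# t.numpy() raises because the tensor requires grad; force=True detaches first
# (and would likewise move a GPU tensor to CPU and resolve conj/neg bits)
t.numpy(force=True)    # array([1., 1.], dtype=float32)
```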

cc @albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78564
Approved by: https://github.com/albanD
2022-06-06 14:14:17 +00:00
YifanShenSZ
6ba1d05fa4 to_padded_tensor doc v0 (#78657)
Fixes #76846
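
A minimal usage sketch for the documented op (assuming the `torch.nested` namespace of current releases):

```python
import torch

nt = torch.nested.nested_tensor([torch.tensor([1.0, 2.0]), torch.tensor([3.0])])
nt.to_padded_tensor(0.0)   # pads ragged entries with 0.0
# tensor([[1., 2.],
#         [3., 0.]])
```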

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78657
Approved by: https://github.com/jbschlosser
2022-06-03 14:27:31 +00:00
Thomas J. Fan
3524428fad DOC Corrects default value for storage_offset in as_strided (#78202)
Fixes #77730
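
For reference, a sketch of the corrected default (storage_offset=None reuses the input tensor's offset rather than 0):

```python
import torch

x = torch.arange(6.)
x.as_strided((2, 2), (2, 1))   # no storage_offset given: x's own offset is reused
# tensor([[0., 1.],
#         [2., 3.]])
```
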
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78202
Approved by: https://github.com/mruberry
2022-05-31 19:28:36 +00:00
Brian Hirsh
07e4533403 reland of as_strided support for functionalization; introduce as_strided_scatter
This reverts commit a95f1edd85.
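
A minimal sketch of the newly introduced op (assuming the signature as relanded here):

```python
import torch

base = torch.zeros(4)
src = torch.ones(2)
# writes `src` into the strided view of `base` described by size/stride,
# returning a new tensor instead of mutating `base`
torch.as_strided_scatter(base, src, (2,), (2,))   # tensor([1., 0., 1., 0.])
```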

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78199

Approved by: https://github.com/ezyang
2022-05-24 22:40:44 +00:00
PyTorch MergeBot
a95f1edd85 Revert "as_strided support for functionalization; introduce as_strided_scatter"
This reverts commit 3a921f2d26.

Reverted https://github.com/pytorch/pytorch/pull/77128 on behalf of https://github.com/suo due to This broke rocm tests on master 3a921f2d26. rocm tests are no longer run on PRs, you should add a `ciflow/trunk` label if you want to run them
2022-05-24 20:19:12 +00:00
Brian Hirsh
3a921f2d26 as_strided support for functionalization; introduce as_strided_scatter
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77128

Approved by: https://github.com/ezyang
2022-05-24 18:20:31 +00:00
Alban Desmaison
04ac80c73a Fix a few issues on assert/double error/legacy constructor (#77966)
Fixes https://github.com/pytorch/pytorch/issues/77960, https://github.com/pytorch/pytorch/issues/77957, https://github.com/pytorch/pytorch/issues/77781
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77966
Approved by: https://github.com/soulitzer, https://github.com/kulinseth
2022-05-20 20:25:12 +00:00
Christian Puhrsch
289192199a Add to_sparse_bsr (#77366)
Conversion function from CSR to BSR.
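
A minimal usage sketch (blocksize illustrative):

```python
import torch

csr = torch.eye(4).to_sparse_csr()
bsr = csr.to_sparse_bsr((2, 2))   # convert CSR to BSR with 2x2 blocks
bsr.layout                        # torch.sparse_bsr
```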

Follow up work includes
- Conversion from strided, COO, CSC, BSC
- autograd
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77366
Approved by: https://github.com/IvanYashchuk, https://github.com/mikaylagawarecki
2022-05-13 20:16:03 +00:00
Mikayla Gawarecki
841c65f499 Unprivate _index_reduce and add documentation
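
A minimal sketch of the now-public op (assuming the documented signature `Tensor.index_reduce_(dim, index, source, reduce, *, include_self=True)`):

```python
import torch

t = torch.ones(3)
t.index_reduce_(0, torch.tensor([0, 0, 1]), torch.tensor([2., 3., 4.]), 'prod')
# tensor([6., 4., 1.]) -- t[0] = 1*2*3, t[1] = 1*4, t[2] untouched
```
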
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76997

Approved by: https://github.com/cpuhrsch
2022-05-13 19:48:38 +00:00
Ivan Yashchuk
890bdf13e1 Remove deprecated torch.solve (#70986)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.solve`.
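
For migrating code, a sketch of the replacement (note that `torch.linalg.solve` swaps the argument order and returns only the solution):

```python
import torch

A = torch.randn(3, 3)
B = torch.randn(3, 2)
# before: X, LU = torch.solve(B, A)
X = torch.linalg.solve(A, B)
```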

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70986
Approved by: https://github.com/Lezcano, https://github.com/albanD
2022-05-10 13:44:07 +00:00
PyTorch MergeBot
0c7c50972b Revert "Move Tensor.grad back into C++"
This reverts commit 3e4bff7285.

Reverted https://github.com/pytorch/pytorch/pull/76675 on behalf of https://github.com/albanD
2022-05-09 21:08:10 +00:00
PyTorch MergeBot
2c5bf12584 Revert "stft: remove non-center overload and python functional wrapper"
This reverts commit d23ecbfc9a.

Reverted https://github.com/pytorch/pytorch/pull/73434 on behalf of https://github.com/albanD
2022-05-09 19:59:46 +00:00
Peter Bell
3e4bff7285 Move Tensor.grad back into C++
`Tensor.grad` was moved to python in #30531 to add a warning. However,
that warning has since been lowered into C++ so this wrapper is no
longer necessary.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76675

Approved by: https://github.com/albanD
2022-05-09 19:58:57 +00:00
Peter Bell
d23ecbfc9a stft: remove non-center overload and python functional wrapper
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73434

Approved by: https://github.com/anjali411
2022-05-03 14:30:35 +00:00
PyTorch MergeBot
77f23d6460 Revert "stft: remove non-center overload and python functional wrapper"
This reverts commit 6b7d89c4f1.

Reverted https://github.com/pytorch/pytorch/pull/73434 on behalf of https://github.com/osalpekar
2022-04-23 23:21:27 +00:00
Peter Bell
6b7d89c4f1 stft: remove non-center overload and python functional wrapper
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73434

Approved by: https://github.com/anjali411
2022-04-23 00:17:01 +00:00
kshitij12345
aa51704ce5 [complex32] add chalf alias for complex32 and chalf method
Reference: https://github.com/pytorch/pytorch/issues/74537

Adds chalf alias for complex32 and also adds method `chalf` similar to `cfloat, cdouble`
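
A minimal sketch of the alias and method (illustrative):

```python
import torch

torch.chalf == torch.complex32    # True -- new dtype alias
x = torch.rand(2)
x.chalf().dtype                   # torch.complex32, matching cfloat()/cdouble()
```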

TODO:
* [x] Add docs
* [x] Add override
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75320
Approved by: https://github.com/anjali411
2022-04-20 23:44:47 +00:00
Anthony Barbier
ce9e27a0fc Add new keys for Graphcore IPU (DispatchKey / Backend / DeviceType)
We need a key to register our out of tree backend: https://github.com/graphcore/poptorch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74763
Approved by: https://github.com/bdhirsh
2022-04-07 17:18:45 +00:00
Mikayla Gawarecki
11f1fef981 Update documentation for scatter_reduce
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74608

Approved by: https://github.com/cpuhrsch
2022-04-07 15:41:23 +00:00
Mikayla Gawarecki
e9a8e6f74a Add include_self flag to scatter_reduce
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74607

Approved by: https://github.com/cpuhrsch
2022-04-05 16:31:39 +00:00
Mikayla Gawarecki
2bfa018462 [BC-breaking] Use ScatterGatherKernel for scatter_reduce (CPU-only) (#74226)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74226

Update signature of `scatter_reduce_` to match `scatter_/scatter_add_`

`Tensor.scatter_reduce_(int64 dim, Tensor index, Tensor src, str reduce)`
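
A minimal sketch of the updated signature (assuming the `include_self` flag from the companion PR, which defaults to True):

```python
import torch

src = torch.tensor([1., 2., 3., 4.])
index = torch.tensor([0, 0, 1, 1])
out = torch.zeros(2)
out.scatter_reduce_(0, index, src, reduce="sum")
# tensor([3., 7.]) -- the zeros in `out` participate since include_self=True
```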

- Add new reduction options in ScatterGatherKernel.cpp and update `scatter_reduce` to call into the cpu kernel for `scatter.reduce`
- `scatter_reduce` now has the same shape constraints as `scatter_` and `scatter_add_`
- Migrate `test/test_torch.py:test_scatter_reduce` to `test/test_scatter_gather_ops.py`

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D35222842

Pulled By: mikaylagawarecki

fbshipit-source-id: 84930add2ad30baf872c495251373313cb7428bd
(cherry picked from commit 1b45139482e22eb0dc8b6aec2a7b25a4b58e31df)
2022-04-01 05:57:45 +00:00
Christian Puhrsch
807b2e190b Move to_sparse_csr to C++
Allows use of to_sparse_csr from C++
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74294
Approved by: https://github.com/ngimel, https://github.com/malfet
2022-03-23 17:17:45 +00:00
Nikita Shulga
cfb6c942fe scatter_reduce documentation (#73125)
Summary:
Reland of https://github.com/pytorch/pytorch/issues/68580 (which were milestoned for 1.11) plus partial revert of https://github.com/pytorch/pytorch/pull/72543

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73125

Reviewed By: bdhirsh

Differential Revision: D34355217

Pulled By: malfet

fbshipit-source-id: 325ecdeaf53183d653b44ee5e6e8839ceefd9200
(cherry picked from commit 71db31748a)
2022-02-22 19:33:46 +00:00
Nikita Shulga
cb00d9601c Revert D33800694: [pytorch][PR] scatter_reduce documentation
Test Plan: revert-hammer

Differential Revision:
D33800694 (12a1df27c7)

Original commit changeset: 2e09492a29ce

Original Phabricator Diff: D33800694 (12a1df27c7)

fbshipit-source-id: 2a4775c0042551607fe3ab77f5bfe9f2e4b6b78e
(cherry picked from commit 4bd6c0d2bb)
2022-02-15 20:10:26 +00:00
rusty1s
12a1df27c7 scatter_reduce documentation (#68580)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/63780 (part 2)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68580

Reviewed By: atalman

Differential Revision: D33800694

Pulled By: malfet

fbshipit-source-id: 2e09492a29cef115a7cca7c8209d1dcb6ae24eb9
(cherry picked from commit 696ff75940)
2022-02-15 19:43:54 +00:00
anjali411
765669e1b9 Update docs for torch.real to indicate that it's supported for real tensors (#71962)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/71962
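
Illustrative behaviour being documented:

```python
import torch

torch.real(torch.tensor([1.0, 2.0]))   # tensor([1., 2.]) -- real input is supported
torch.real(torch.tensor([1 + 2j]))     # tensor([1.])
```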

Test Plan: Imported from OSS

Reviewed By: davidberard98

Differential Revision: D33846613

Pulled By: anjali411

fbshipit-source-id: a9782bf4e8a7f3ae1fcd4f7ff558ba80b6af012c
(cherry picked from commit 93ea37800f)
2022-01-28 18:46:40 +00:00
kshitij12345
d3bbb281f3 [numpy] add decimals argument to round (#66195)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/65908

Added a new overload instead of updating the current signature. (Had issues with JIT and **maybe** it would have been FC breaking)
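
A minimal sketch of the new overload (decimals is keyword-only):

```python
import torch

torch.round(torch.tensor([0.1234, 5.6789]), decimals=2)
# tensor([0.1200, 5.6800])
```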

TODO:

* [x] Don't compute `std::pow(10, decimals)` for each element.
* [x] Update docs (https://docs-preview.pytorch.org/66195/generated/torch.round.html?highlight=round#torch.round)
* [x] Add tests
* ~~Should we try to make it composite?~~
* ~~Should we add specialized test with more values of `decimals` outside of OpInfo with larger range of values in input tensor?~~

cc mruberry rgommers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66195

Reviewed By: anjali411

Differential Revision: D31821385

Pulled By: mruberry

fbshipit-source-id: 9a03fcb809440f0c83530108284e69c345e1850f
(cherry picked from commit 50b67c6968)
2022-01-26 17:35:03 +00:00
Khushi Agrawal
1b496cf158 Fixes doc errors in Tensor.triu(), Tensor.tril(), Tensor.ravel(). (#71057)
Summary:
Hi, PyTorch Team!
I am very much interested in starting up my contribution to PyTorch. I made several contributions to NumPy and CuPy, but this is my first PR towards PyTorch. I aim to contribute more in the upcoming future.

The PR fixes https://github.com/pytorch/pytorch/issues/70972 and https://github.com/pytorch/pytorch/issues/70975.

#### Aim of PR
The functions like `Tensor.ravel`, `Tensor.tril`, `Tensor.tril_`, `Tensor.triu`, and `Tensor.triu_` had a couple of typos in docs. The PR aims to resolve that.

I'm looking forward to your viewpoints. Thanks!

cc: kshitij12345 vadimkantorov Lezcano TestSomething22

cc brianjo mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71057

Reviewed By: preeti1205

Differential Revision: D33502911

Pulled By: mruberry

fbshipit-source-id: 8ce0b68a29658a5a0be79bc807dfa7d71653532d
2022-01-11 07:34:59 -08:00
Heitor Schueroff
34c49d3d3b Document torch.quantile interpolation kwarg (#70637)
Summary:
clone of https://github.com/pytorch/pytorch/pull/59397

This PR documents the interpolation kwarg parameter added in https://github.com/pytorch/pytorch/issues/49267. Now that the forward compatibility period is over, we can expose this parameter.
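
A minimal sketch of the now-exposed kwarg (interpolation is keyword-only; values illustrative):

```python
import torch

x = torch.arange(4.)                               # tensor([0., 1., 2., 3.])
torch.quantile(x, 0.4)                             # tensor(1.2000), 'linear' default
torch.quantile(x, 0.4, interpolation='lower')      # tensor(1.)
torch.quantile(x, 0.4, interpolation='nearest')    # tensor(1.)
```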

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70637

Reviewed By: jbschlosser

Differential Revision: D33411707

Pulled By: anjali411

fbshipit-source-id: f5f2d0a6739b3a855bbdf58fc671ac2f0342ce69
2022-01-05 11:02:13 -08:00
Brian Hirsh
457ba1dd3e Porting index_add to structured kernels, add an out variant (#65993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65993

This PR attempts to port `index_add` to structured kernels, but does more than that:

* Adds an `out=` variant to `index_add` (see the sketch after this list)
* Revises `native_functions.yaml` registrations, to not have multiple entries and instead pass default value to `alpha`.
* Changes in `derivatives.yaml` file for autograd functioning
* Revises error messages, please see: https://github.com/pytorch/pytorch/pull/65993#issuecomment-945441615
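
A minimal sketch of the `out=` variant mentioned above (assuming the keyword-only `alpha`/`out` as described):

```python
import torch

x = torch.ones(3)
out = torch.empty(3)
torch.index_add(x, 0, torch.tensor([0, 2]), torch.tensor([10., 20.]), alpha=2, out=out)
# tensor([21.,  1., 41.]) -- x[0] + 2*10 and x[2] + 2*20
```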

Follow-up PRs in near future will attempt to refactor the OpInfo test, and will give another look at tests in `test/test_torch.py` for this function. (hence the use of ghstack for this)

~This is WIP because there are tests failing for `Dimname` variant on mobile/android builds, and I'm working on fixing them.~

Issue tracker: https://github.com/pytorch/pytorch/issues/55070

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D32646426

fbshipit-source-id: b035ecf843a9a27d4d1e18b202b035adc2a49ab5
2021-12-14 11:57:13 -08:00
Thomas Metcalfe
ba16b1eca7 [numpy] Alias arctan2 to atan2 (#67010)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/65906

Adds an alias `arctan2` to improve numpy compatibility
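
A minimal sketch of the alias (function and method variants):

```python
import torch

y = torch.tensor([1.0, -1.0])
x = torch.tensor([1.0, 1.0])
torch.arctan2(y, x)     # identical to torch.atan2(y, x)
y.arctan2(x)            # method variant is aliased too
```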

cc mruberry rgommers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67010

Reviewed By: anjali411

Differential Revision: D32378998

Pulled By: mruberry

fbshipit-source-id: 424c5c10c12b49c20ee83ccd109325c480b5b6cf
2021-11-16 09:41:09 -08:00
Kurt Mohler
0420545639 Enable all dtype combinations in torch.Tensor.view(dtype) (#66493)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/29013

Note: This PR does not enable autograd. This can be done in a future PR.
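
A minimal sketch of what this enables (same-size reinterpretation worked before; differing element sizes are the new part):

```python
import torch

torch.tensor([1.0]).view(torch.int32)   # tensor([1065353216], dtype=torch.int32)
# different element sizes now work too; the last dimension is rescaled:
torch.zeros(2, dtype=torch.float64).view(torch.float32).shape   # torch.Size([4])
```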

cc mruberry rgommers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66493

Reviewed By: gchanan

Differential Revision: D32314680

Pulled By: mruberry

fbshipit-source-id: 69d325573b2331f32b83c05c91ffbe80571e7ae2
2021-11-11 13:55:21 -08:00
Marco Bertolini
4fe3965b3a Fix dtype arg typing for Tensor.type doc string (#67019)
Summary:
Fix typing error in PyCharm when using torch.Tensor.type(dtype=torch.int64)

Thanks for your great work! :)

cc brianjo mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67019

Reviewed By: malfet

Differential Revision: D32311313

Pulled By: mruberry

fbshipit-source-id: 90fc453bc4129a301d567d4b39137b93c5dac01e
2021-11-11 12:58:46 -08:00
Anirudh Dagar
b07a11929d Array API: Add torch.linalg.cross (#63285)
Summary:
### Create `linalg.cross`

Fixes https://github.com/pytorch/pytorch/issues/62810

As discussed in the corresponding issue, this PR adds `cross` to the `linalg` namespace (**Note**: There is no method variant) which is slightly different in behaviour compared to `torch.cross`.

**Note**: this is NOT an alias as suggested in mruberry's [https://github.com/pytorch/pytorch/issues/62810 comment](https://github.com/pytorch/pytorch/issues/62810#issuecomment-897504372) below
> linalg.cross being consistent with the Python Array API (over NumPy) makes sense because NumPy has no linalg.cross. I also think we can implement linalg.cross without immediately deprecating torch.cross, although we should definitely refer users to linalg.cross. Deprecating torch.cross will require additional review. While it's not used often it is used, and it's unclear if users are relying on its unique behavior or not.

The current default implementation of `torch.cross` is extremely weird and confusing. This has also been reported multiple times previously. (See https://github.com/pytorch/pytorch/issues/17229, https://github.com/pytorch/pytorch/issues/39310, https://github.com/pytorch/pytorch/issues/41850, https://github.com/pytorch/pytorch/issues/50273)

- [x] Add `torch.linalg.cross` with default `dim=-1`
- [x] Add OpInfo and other tests for `torch.linalg.cross`
- [x] Add broadcasting support to `torch.cross` and `torch.linalg.cross`
- [x] Remove out skip from `torch.cross` OpInfo
- [x] Add docs for `torch.linalg.cross`. Improve docs for `torch.cross` mentioning `linalg.cross` and the difference between the two. Also adds a warning to `torch.cross`, that it may change in the future (we might want to deprecate it later)

 ---

### Additional Fixes to `torch.cross`
- [x] Fix Doc for Tensor.cross
- [x] Fix torch.cross in `torch/overrides.py`

While working on `linalg.cross` I noticed these small issues with `torch.cross` itself.

[Tensor.cross docs](https://pytorch.org/docs/stable/generated/torch.Tensor.cross.html) still mentions `dim=-1` default which is actually wrong. It should be `dim=None` after the behaviour was updated in PR https://github.com/pytorch/pytorch/issues/17582 but the documentation for the `method` or `function` variant wasn’t updated. Later PR https://github.com/pytorch/pytorch/issues/41850 updated the documentation for the `function` variant i.e `torch.cross` and also added the following warning about the weird behaviour.
> If `dim` is not given, it defaults to the first dimension found with the size 3. Note that this might be unexpected.

But still, the `Tensor.cross` docs were missed and remained outdated. I’m finally fixing that here. Also fixing `torch/overrides.py` for `torch.cross` as well now, with `dim=None`.

To verify according to the docs the default behaviour of `dim=-1` should raise, you can try the following.

```python
a = torch.randn(3, 4)
b = torch.randn(3, 4)
b.cross(a)  # works: the implementation finds size 3 in the first dimension, so the documented dim=-1 default is not the actual behaviour
>>> tensor([[ 0.7171, -1.1059,  0.4162,  1.3026],
        [ 0.4320, -2.1591, -1.1423,  1.2314],
        [-0.6034, -1.6592, -0.8016,  1.6467]])

b.cross(a, dim=-1)  # this raises as expected since the last dimension doesn't have a 3
>>> RuntimeError: dimension -1 does not have size 3
```

Please take a closer look (particularly the autograd part, this is the first time I'm dealing with `derivatives.yaml`). If there is something missing, wrong or needs more explanation, please let me know. Looking forward to the feedback.

cc mruberry Lezcano IvanYashchuk rgommers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63285

Reviewed By: gchanan

Differential Revision: D32313346

Pulled By: mruberry

fbshipit-source-id: e68c2687c57367274e8ddb7ef28ee92dcd4c9f2c
2021-11-11 12:49:41 -08:00
kshitij12345
510e3026a9 [numpy] add torch.argwhere (#64257)
Summary:
Adds `torch.argwhere` as an alias to `torch.nonzero`

Currently, `torch.nonzero` already provides functionality equivalent to `np.argwhere`.

From NumPy docs,
> np.argwhere(a) is almost the same as np.transpose(np.nonzero(a)), but produces a result of the correct shape for a 0D array.
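
A minimal sketch of the alias:

```python
import torch

t = torch.tensor([0, 1, 0, 2])
torch.argwhere(t)       # tensor([[1], [3]]) -- same as torch.nonzero(t)
```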

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64257

Reviewed By: qihqi

Differential Revision: D32049884

Pulled By: saketh-are

fbshipit-source-id: 016e49884698daa53b83e384435c3f8f6b5bf6bb
2021-10-30 15:26:11 -07:00
Brian Hirsh
03f3a0331b add slice/select/diagonal_scatter variants as primitive ops (#64430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64430

The functionalization pass needs `{view}_scatter` versions of the slice/select/diagonal ops in order to correctly propagate mutations from a view to its base. On top of that, the implementations need to be primitive w.r.t. autograd, because they look something like `...slice().copy_()`, and the functionalization pass can't use views + mutations inside of its own alias-removal machinery!
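
A minimal sketch of the new primitives (assuming the signatures as merged):

```python
import torch

base = torch.zeros(3, 3)
torch.diagonal_scatter(base, torch.ones(3))   # identity-like result; `base` is not mutated
torch.slice_scatter(base, torch.ones(1, 3), dim=0, start=1, end=2)   # row 1 replaced
```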

I added some basic tests that I tried to base off of existing tests for views (particularly around testing the derivative formulas), but I'm wondering if I should add something more comprehensive.

Also, as_strided fits into this category - the functionalization pass will need an `as_strided_scatter` op that's primitive w.r.t. autograd. I didn't add it for now, because it'll involve duplicating a bunch of logic from the current `as_strided_backward()` function, and also writing a derivative formula that I wasn't sure how to write :)

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D31942092

Pulled By: bdhirsh

fbshipit-source-id: c702a57c2748a7c771c14e4bcc3e996b48fcc4c8
2021-10-28 10:51:12 -07:00
Natalia Gimelshein
f29e5220a6 Revert D31474901: [pytorch][PR] [numpy] add torch.argwhere
Test Plan: revert-hammer

Differential Revision:
D31474901

Original commit changeset: 335327a4986f

fbshipit-source-id: 534093e459762ff7a888c58d76e49e362015f2ba
2021-10-21 15:50:54 -07:00
kshitij12345
462f333c01 [numpy] add torch.argwhere (#64257)
Summary:
Adds `torch.argwhere` as an alias to `torch.nonzero`

Currently, `torch.nonzero` already provides functionality equivalent to `np.argwhere`.

From NumPy docs,
> np.argwhere(a) is almost the same as np.transpose(np.nonzero(a)), but produces a result of the correct shape for a 0D array.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64257

Reviewed By: dagitses

Differential Revision: D31474901

Pulled By: saketh-are

fbshipit-source-id: 335327a4986fa327da74e1fb8624cc1e56959c70
2021-10-21 14:02:11 -07:00
lezcano
fe41df3601 Deprecate x.T on tensors of dimension other than 0 or 2 (#64180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64180

**BC-breaking note:**

This PR deprecates the use of `Tensor.T` on tensors that are not matrices (i.e. of dimension other than 0 or 2). An upgrade guide is added to the
documentation for `Tensor.T`.

This PR DOES NOT make this attribute throw an error when called on a tensor of `dim != 2`,
but this will be its behavior in a future PyTorch release.
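
A sketch of the recommended replacements for non-matrix tensors (names per the related `mT` PR below):

```python
import torch

x = torch.rand(2, 3, 4)
x.permute(2, 1, 0)      # what x.T used to compute for ndim > 2
x.transpose(-2, -1)     # the usual batched-matrix transpose (also x.mT)
```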

cc mruberry rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi heitorschueroff

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D31610611

Pulled By: anjali411

fbshipit-source-id: af8ff7e862790dda9f06921de005b3f6fd0803c3
2021-10-14 08:17:32 -07:00
lezcano
82a216c45b Add tensor.{adjoint(),H,mT,mH} methods and properties (#64179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64179

This PR follows the discussion in https://github.com/pytorch/pytorch/issues/45063#issuecomment-904431478

Fixes https://github.com/pytorch/pytorch/issues/45063
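
A minimal sketch of the new properties (illustrative):

```python
import torch

x = torch.rand(2, 3, 4, dtype=torch.complex64)
x.mT.shape                        # torch.Size([2, 4, 3]) -- swaps the last two dims
torch.equal(x.mH, x.mT.conj())    # True -- mH is the conjugate (adjoint) transpose
x[0].H                            # for a 2-D tensor, H is the matrix adjoint
```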

cc ezyang anjali411 dylanbespalko mruberry Lezcano nikitaved rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi heitorschueroff

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D30730483

Pulled By: anjali411

fbshipit-source-id: 821d25083f5f682450f6812bf852dc96a1cdf9f2
2021-10-13 07:44:43 -07:00
Ankita Sharma
4af913a7cf fixed minor issues for index_add in docs (#65806)
Summary:
Hi, I'm looking forward to contributing to PyTorch, so starting with a minor fix in the documentation for `index_add`.

Currently, in the documentation for `index_add_` (please see https://pytorch.org/docs/master/generated/torch.Tensor.index_add_.html#torch.Tensor.index_add_):

1. The `tensor` attribute was pointing to the `torch.tensor` class, which IMO is (though it may not be a big deal) unintentional.
2. The `dim` attribute is pointing to `torch.Tensor.dim`, which again IMO is unintentional.

This PR suggests a correction for the first point above, renaming the `tensor` attribute to `input` so that it doesn't point to the `torch.tensor` class. (I've verified that other ops like `scatter` use `input`, so this should not break the consistency in the documentation). I couldn't find an appropriate fix for the second point above, since renaming `dim` to something else would break consistency (as almost all other ops in PyTorch use `dim` as the attribute name).

I may be wrong here, so please let me know if there is any feedback or an alternate fix for this.

_Note:_ I plan to fix this behavior for `index_copy_` (https://pytorch.org/docs/master/generated/torch.Tensor.index_copy_.html#torch.Tensor.index_copy_) once and if this PR is approved.

To the reviewers, please help me tag the correct person who could help review this PR.

cc: krshrimali mruberry zou3519

cc brianjo mruberry

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65806

Reviewed By: dagitses, mruberry

Differential Revision: D31431182

Pulled By: zou3519

fbshipit-source-id: 66ced9677ac3bc71d672d13366f9f567ecea0a2d
2021-10-08 07:17:15 -07:00
Kurt Mohler
5883523c1d Remove dtype from torch.Storage and use only torch.ByteStorage (#62030)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62030

Remove dtype tracking from Python Storage interface, remove all the different `<type>Storage` classes except for `ByteStorage`, and update serialization accordingly, while maintaining as much FC/BC as possible

Fixes https://github.com/pytorch/pytorch/issues/47442

* **THE SERIALIZATION FORMAT IS FULLY FC/BC.** We worked very hard to make sure this is the case. We will probably want to break FC at some point to make the serialization structure of tensors make more sense, but not today.
* There is now only a single torch.ByteStorage class. Methods like `Tensor.set_` no longer check that the dtype of storage is appropriate.
* As we no longer know what the dtype of a storage is, we've **removed** the size method from Storage, replacing it with nbytes (see the sketch after this list). This is to help catch otherwise silent errors where you confuse the number of elements with the number of bytes.
* `Storage._new_shared` takes a `nbytes` kwarg and will reject previous positional only calls.  `Storage._new_with_file` and `_set_from_file` require explicit element size arguments.
* It's no longer possible to convert storages to different types using the float/double/etc methods. Instead, do the conversion using a tensor.
* It's no longer possible to allocate a typed storage directly using FloatStorage/DoubleStorage/etc constructors. Instead, construct a tensor and extract its storage. The classes still exist but they are used purely for unpickling.
* The preexisting serialization format stores dtype with storage, and in fact this dtype is used to determine the dtype of the tensor overall.
 To accommodate this case, we introduce a new TypedStorage concept that exists only during unpickling time which is used to temporarily store the dtype so we can construct a tensor. **If you overrode the handling of pickling/unpickling, you MUST add handling for TypedStorage** or your serialization code will degrade to standard file-based serialization.
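
A sketch of the nbytes accessor referenced above (later releases wrap storages in TypedStorage/UntypedStorage, but the byte-count semantics are the same):

```python
import torch

t = torch.ones(4, dtype=torch.float32)
t.storage().nbytes()    # 16 -- a byte count, replacing the old element-count size()
```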

Original pull request: https://github.com/pytorch/pytorch/pull/59671

Reviewed By: soulitzer, ngimel

Differential Revision: D29466819

Pulled By: ezyang

fbshipit-source-id: 4a14e5d3c2b08e06e558683d97f7378a3180b00e
2021-10-05 13:50:34 -07:00
Heitor Schueroff
b37503e452 Initial implementation of nanmean (#62671)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62671

Very crude first implementation of `torch.nanmean`. The current reduction kernels do not have good support for implementing nan* variants. Rather than implementing new kernels for each nan* operator, I will work on new reduction kernels with support for a `nan_policy` flag and then I will port `nanmean` to use that.
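
A minimal sketch of the behaviour (illustrative):

```python
import torch

x = torch.tensor([1.0, float('nan'), 3.0])
torch.nanmean(x)    # tensor(2.) -- NaNs are skipped
torch.mean(x)       # tensor(nan), for comparison
```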

**TODO**

- [x] Fix autograd issue

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D30515181

Pulled By: heitorschueroff

fbshipit-source-id: 303004ebd7ac9cf963dc4f8e2553eaded5f013f0
2021-09-13 05:53:58 -07:00
Meghan Lele
e5c32cdde7 [docs] Remove input parameter from Tensor.flatten docs (#63180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63180

**Summary**
This commit removes the `input` parameter from the signature for
`Tensor.flatten` shown in its documentation. This parameter is accepted
by `torch.flatten` but not `Tensor.flatten` (since the input is the
`Tensor` on which `flatten` is invoked).

**Test Plan**
Continuous integration.

**Fixes**
This commit fixes #57478.

Test Plan: Imported from OSS

Reviewed By: VitalyFedyunin

Differential Revision: D30293156

Pulled By: SplitInfinity

fbshipit-source-id: 4ad70d638af009fb6bdeb703433b306904d39a76
2021-08-13 12:10:16 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Heitor Schueroff
d7d399f3df Exposes _aminmax as aminmax and makes it structured (#62401)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62401

This PR exposes the `torch._aminmax` operator as `torch.aminmax`.
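
A minimal usage sketch (dim is keyword-only):

```python
import torch

t = torch.tensor([1., 3., 2.])
mn, mx = torch.aminmax(t)                # tensor(1.), tensor(3.) in one pass
torch.aminmax(t.reshape(1, 3), dim=1)    # per-row min/max
```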

**TODO**

- [x] add examples to documentation
- [x] add aminmax to rst docs

fixes https://github.com/pytorch/pytorch/issues/62164

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D30072246

Pulled By: heitorschueroff

fbshipit-source-id: 557d30af7c28ca6c238c59122367104036429ecd
2021-08-03 16:10:43 -07:00
Gary Miguel
9fdf7ec6a2 [docs] Update sphinx to 3.5.4 (#61601)
Summary:
Sphinx 4.x is out, but it seems to require many more changes to
adopt. So instead, use the latest version of 3.x, which includes
several nice features.

* Add some noindex directives to deal with warnings that would otherwise
  be triggered by this change due to conflicts between the docstrings
  declaring a function and the autodoc extension declaring the
  same function.
* Update distributions.utils.lazy_property to make it look like a
  regular property when sphinx autodoc inspects classes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61601

Reviewed By: ejguan

Differential Revision: D29801876

Pulled By: albanD

fbshipit-source-id: 544d2434a15ceb77bff236e934dbd8e4dbd9d160
2021-07-30 06:23:10 -07:00