Commit Graph

917 Commits

Kurt Mohler
fd209543d5 Add torch.utils.deterministic.fill_uninitialized_memory flag (#111377)
Part of #109802
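A minimal sketch of how the flag composes with deterministic mode (illustrative; the default fill behavior is as described in the PR):
```python
import torch

torch.use_deterministic_algorithms(True)
# With deterministic algorithms enabled, uninitialized memory is filled
# (floats with NaN, ints with the max value) by default. The new flag
# lets you opt out of that fill for performance:
torch.utils.deterministic.fill_uninitialized_memory = False
t = torch.empty(3)  # contents are unspecified again
```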

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111377
Approved by: https://github.com/albanD, https://github.com/aaronenyeshi
2023-11-01 16:10:09 +00:00
PyTorch MergeBot
ace2713d1e Revert "Add torch.utils.deterministic.fill_uninitialized_memory flag (#111377)"
This reverts commit f1785373c0.

Reverted https://github.com/pytorch/pytorch/pull/111377 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/111377#issuecomment-1784179040))
2023-10-29 17:41:55 +00:00
Kurt Mohler
f1785373c0 Add torch.utils.deterministic.fill_uninitialized_memory flag (#111377)
Part of #109802

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111377
Approved by: https://github.com/albanD
2023-10-26 02:39:06 +00:00
Mikayla Gawarecki
b54ab57522 Document torch.from_file and fix UntypedStorage.from_file docs (#111688)
Fixes https://github.com/pytorch/pytorch/issues/37439

Also threads through the filename so it is accessible via `t.storage().filename`
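A small illustrative sketch (the file name and contents here are hypothetical):
```python
import torch

# Write 16 bytes (4 float32 values) to back the mapping.
torch.arange(4, dtype=torch.float32).numpy().tofile("data.bin")

# Map the file into a tensor; shared=True mmaps the file directly.
t = torch.from_file("data.bin", shared=True, size=4, dtype=torch.float32)
print(t.storage().filename)  # "data.bin", exposed by this change
```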

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111688
Approved by: https://github.com/albanD
2023-10-25 19:28:11 +00:00
Peter Bell
46e80ce58a [ATen] Support multi dim any and all reductions (#110310)
This adds a new overload to `all` and `any` with support for multiple reduction dims.
```
all.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
any.dims(Tensor self, int[1]? dim=None, bool keepdim=False) -> Tensor
```
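For illustration, a quick sketch of the new overload in use:
```python
import torch

x = torch.rand(2, 3, 4) > 0.5
# Reduce over several dims at once via the new `dims` overload.
print(torch.all(x, dim=(0, 2)).shape)                # torch.Size([3])
print(torch.any(x, dim=(0, 2), keepdim=True).shape)  # torch.Size([1, 3, 1])
```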
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110310
Approved by: https://github.com/lezcano, https://github.com/albanD, https://github.com/justinchuby
2023-10-24 21:33:53 +00:00
Igor Sugak
93e5065ba0 [CODEMOD][caffe2] replace numpy.bool with bool (#111432)
Test Plan:
numpy.bool has long been deprecated and was removed as of numpy-1.20.0 [1]. This replaces all references with the equivalent `bool` type using the following one-liner:
```
rg -l 'np\.bool' caffe2 | grep '\.py$' | xargs perl -pi -e 's,\bnp\.bool\b,bool,'
```
1. https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

Differential Revision: D50372711

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111432
Approved by: https://github.com/Skylion007
2023-10-18 18:56:40 +00:00
Jez Ng
d8de45d22c Update arg{min,max} tests and docs (#110845)
The `argmin` docs had been updated in
https://github.com/pytorch/pytorch/issues/78791, but a minor typo was left behind.

`argmax` had a similar issue that went unnoticed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110845
Approved by: https://github.com/eellison
2023-10-13 21:40:29 +00:00
Raphael Reme
9f0601df6d Fix a typo in cholesky_inverse documentation (#110364)
Very small PR to fix a typo in the [cholesky_inverse](https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html) doc.

According to the current doc, the function expects $A$, the symmetric positive-definite matrix, as input. But the examples given (and, more importantly, the code) use $u$, the Cholesky decomposition of this matrix (as `cholesky_solve` does).

It also provides a correct example of batched usage of this function.
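A minimal sketch of the corrected usage (illustrative, not the exact example from the PR):
```python
import torch

A = torch.randn(3, 3)
A = A @ A.mT + 3 * torch.eye(3)  # make A symmetric positive-definite
u = torch.linalg.cholesky(A)     # the function consumes u, not A
print(torch.allclose(torch.cholesky_inverse(u), torch.inverse(A), atol=1e-5))
```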

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110364
Approved by: https://github.com/lezcano
2023-10-04 12:30:11 +00:00
Tobias Ringwald
7b53303d3c Improved the docs for torch.std, torch.var, torch.std_mean, torch.var_mean and torch.cov (#109326)
Fixes #109186.

This PR updates the docs for
- `torch.var`
- `torch.var_mean`
- `torch.std`
- `torch.std_mean`
- `torch.cov`

to reflect the actual implementation behavior when `correction >= N`. The math for `torch.cov` should probably be double-checked before merging.
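For reference, a sketch of the edge case the updated docs describe (the denominator is `max(0, N - correction)`, so `correction >= N` divides by zero and yields a non-finite result):
```python
import torch

x = torch.randn(5)                 # N = 5
print(torch.var(x, correction=0))  # divides by N (population variance)
print(torch.var(x, correction=1))  # divides by N - 1 (sample variance)
print(torch.var(x, correction=5))  # denominator is 0 -> inf or nan
```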

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109326
Approved by: https://github.com/albanD
2023-09-19 20:47:24 +00:00
Nikita Shulga
2f53bca0fc [Docs] Fix typo in torch.unflatten (#109588)
Fixes https://github.com/pytorch/pytorch/issues/109559

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109588
Approved by: https://github.com/lezcano
2023-09-19 10:37:45 +00:00
Andrea D'Eusanio
a6d34c60a1 Fixing searchsorted doc (#109364)
Removing ambiguous description

Fixes #109298

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109364
Approved by: https://github.com/colesbury
2023-09-18 23:12:53 +00:00
lezcano
c382ad47dd Deprecate torch.cross default behaviour (#108760)
Long overdue this one. We may be able to change it in a few years :hopeful:.

**BC-breaking note**

This PR deprecates `torch.cross`'s default dim in favor of
`torch.linalg.cross`.
An upgrade guide is added to the documentation for `torch.cross`.

Note this PR DOES NOT remove `torch.cross`.
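A sketch of the migration path the upgrade guide points to (assuming 3-element vectors along the last dim):
```python
import torch

a, b = torch.randn(4, 3), torch.randn(4, 3)
c_old = torch.cross(a, b, dim=-1)  # pass dim explicitly; the default is deprecated
c_new = torch.linalg.cross(a, b)   # defaults to dim=-1
print(torch.allclose(c_old, c_new))
```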

Fixes https://github.com/pytorch/pytorch/issues/108664

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108760
Approved by: https://github.com/albanD
2023-09-14 19:36:29 +00:00
Guilherme Leobas
61f0578787 Update take_along_dim docs to include dim=None case (#109120)
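A sketch of the `dim=None` case the docs now cover (my reading: `input` and `indices` are flattened first):
```python
import torch

t = torch.tensor([[10, 30], [20, 40]])
idx = torch.argsort(t.flatten())
# With dim=None, input and indices are flattened before gathering.
print(torch.take_along_dim(t, idx))  # tensor([10, 20, 30, 40])
```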
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109120
Approved by: https://github.com/lezcano
ghstack dependencies: #108879, #108880
2023-09-13 23:13:09 +00:00
Li-Huai (Allan) Lin
b2cba439b4 Introduce Tensor overload to linspace and logspace (#104889)
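A short sketch of the new overload (assuming, per the PR title, that 0-dim tensors are accepted for `start`/`end`):
```python
import torch

start, end = torch.tensor(0.0), torch.tensor(1.0)
print(torch.linspace(start, end, steps=5))
print(torch.logspace(start, end, steps=5))  # 10 ** linspace(start, end, steps)
```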
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104889
Approved by: https://github.com/zou3519
ghstack dependencies: #107958
2023-09-11 23:30:40 +00:00
igm503
03fd3544a2 fixed lgamma documentation error (#108719)
Fixes #108527
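For context, the subtlety the fix concerns (as I read #108527, `lgamma` computes the log of the absolute value of the gamma function):
```python
import torch

x = torch.tensor([-0.5])
# Γ(-0.5) is negative, so the result is log(|Γ(-0.5)|), not log(Γ(-0.5)).
print(torch.lgamma(x))
```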

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108719
Approved by: https://github.com/zou3519
2023-09-11 22:29:06 +00:00
PyTorch MergeBot
a7f5abeade Revert "Introduce Tensor overload to linspace and logspace (#104889)"
This reverts commit 57e5239321.

Reverted https://github.com/pytorch/pytorch/pull/104889 on behalf of https://github.com/clee2000 due to sorry have to revert this to revert https://github.com/pytorch/pytorch/pull/107958 ([comment](https://github.com/pytorch/pytorch/pull/104889#issuecomment-1714305768))
2023-09-11 17:33:48 +00:00
Li-Huai (Allan) Lin
57e5239321 Introduce Tensor overload to linspace and logspace (#104889)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104889
Approved by: https://github.com/zou3519
ghstack dependencies: #107958
2023-09-11 15:29:39 +00:00
PyTorch MergeBot
e5e653a660 Revert "docs: Match open bracket with close bracket in unsqueeze (#95215)"
This reverts commit 9d04d376d8.

Reverted https://github.com/pytorch/pytorch/pull/95215 on behalf of https://github.com/kit1980 due to Incorrect assumptions ([comment](https://github.com/pytorch/pytorch/pull/95215#issuecomment-1708852420))
2023-09-06 18:04:10 +00:00
Pearu Peterson
fe3309b4b8 Add optional is_coalesced argument to sparse coo tensor factory function. (#107638)
Resolves https://github.com/pytorch/pytorch/issues/107097

After this PR, instead of
```python
torch.sparse_coo_tensor(indices, values, size)._coalesced_(is_coalesced)
```
(that does not work in the autograd context, see #107097), use
```python
torch.sparse_coo_tensor(indices, values, size, is_coalesced=is_coalesced)
```

All sparse coo factory functions that take indices as input support the `is_coalesced` argument.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107638
Approved by: https://github.com/cpuhrsch
2023-08-26 07:24:29 +00:00
FFFrog
e00bd83124 Fix the example of torch.slice_scatter (#107849)
Fixes #107681
Fix the example of `torch.slice_scatter`.
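An illustrative example in the spirit of the fixed one (the values are mine, not the PR's):
```python
import torch

a = torch.zeros(8)
b = torch.ones(2)
# b must match the shape of the slice a[2:6:2] being overwritten.
print(torch.slice_scatter(a, b, start=2, end=6, step=2))
# tensor([0., 0., 1., 0., 1., 0., 0., 0.])
```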
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107849
Approved by: https://github.com/drisspg
2023-08-25 04:19:49 +00:00
Digant Desai
8a7a6867b9 [PyTorch][Tensor] Introduce tensor.dim_order (#106835)
Summary:
This is a stride-based attribute for a tensor, available in Python.

This can help inspect tensors generated using `torch.empty_permuted(.., physical_layout, ...)`, where `physical_layout` should match the `dim_order` returned here. `empty_permuted` will be renamed to use `dim_order` as the param name in the future. It will also help the ExecuTorch export pipeline implement dim_order-based tensors.
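A quick sketch of the new attribute on a channels-last tensor (output per my understanding of the stride-based definition):
```python
import torch

t = torch.empty(2, 3, 4, 5).to(memory_format=torch.channels_last)
# dim_order reports the dimension permutation implied by the strides:
print(t.dim_order())  # (0, 2, 3, 1) for an NHWC / channels-last layout
```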

Differential Revision: D48134476

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106835
Approved by: https://github.com/ezyang
2023-08-25 00:06:03 +00:00
David Berard
e9af315e02 Fix torch.bucketize docs for "right" (#104474)
The docs correctly (i.e., matching actual op behavior) state that

`right = False` means `boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]`.

However they previously stated that
`If 'right' is False (default), then the left boundary is closed.`

which contradicts the `boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]` statement.

This modifies the docs to say `... then the left boundary is OPEN.` and also clarifies that this is the opposite behavior of numpy.digitize.
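A sketch of both conventions (values follow the semantics stated above):
```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([3, 6, 9])
# right=False (default): boundaries[i-1] < v <= boundaries[i]  (left boundary open)
print(torch.bucketize(v, boundaries))              # tensor([1, 3, 4])
# right=True: boundaries[i-1] <= v < boundaries[i] (left boundary closed)
print(torch.bucketize(v, boundaries, right=True))  # tensor([2, 3, 5])
```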

Fixes #91580
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104474
Approved by: https://github.com/aakhundov, https://github.com/svekars
2023-08-17 03:08:07 +00:00
Yukio Siraichi
a5d841ef01 asarray: take the default device into consideration. (#106779)
Fix: #106773

This PR makes it so `asarray` takes the default device into consideration when called with
a Python sequence as the data.
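A sketch of the fixed behavior (assumes a CUDA device is available; any non-CPU default would do):
```python
import torch

torch.set_default_device("cuda")
t = torch.asarray([1.0, 2.0, 3.0])  # a Python sequence as the data
print(t.device)                     # now honors the default device
torch.set_default_device("cpu")
```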
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106779
Approved by: https://github.com/rgommers, https://github.com/lezcano
2023-08-11 13:16:42 +00:00
Oren Leung
f725e6374d doc: fix fake quantize per channel doc (#105955)
Another doc bug for `fake_quantize_per_channel`.

The function doc now matches e7142700ed/aten/src/ATen/native/quantized/FakeQuantPerChannelAffine.cpp (L32)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105955
Approved by: https://github.com/kit1980
2023-07-26 19:17:41 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Shreyas Bhat Kera
7b211ff8dd doc: fix fake_quantize_per_channel_affine (#105241)
Fixes #105085

Fix in formula

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105241
Approved by: https://github.com/jcaip
2023-07-22 00:49:28 +00:00
Justin Chu
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
kato8966
64c39ece65 Fix a docstring of resolve_neg (#104151)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104151
Approved by: https://github.com/malfet
2023-07-19 03:55:20 +00:00
Horace He
b88b742db8 fixed torch.manual_seed note (#105175)
Fixes https://github.com/pytorch/pytorch/issues/87509

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105175
Approved by: https://github.com/ezyang
2023-07-13 23:43:44 +00:00
Kurt Mohler
f987d11fa7 Reland: Make torch.empty* deterministic by filling with NaN or max int (#104995)
Relands #101849 after #104302 reverted it.

torchrec PR https://github.com/pytorch/torchrec/pull/1269 fixes the torchrec failure that caused #101849 to be reverted

Part of #82004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104995
Approved by: https://github.com/albanD
2023-07-13 22:18:03 +00:00
Oren Leung
3ff111a4b4 doc: fix fake_quantize_per_tensor_affine docs (#104453)
Fixes #82800

Fixes wrong `fake_quantize_per_tensor_affine` example and wrong `fake_quantize_per_tensor_affine` formula
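For reference, a sketch of the call the corrected docs describe (formula paraphrased from the documentation):
```python
import torch

x = torch.randn(4)
scale, zero_point = 0.1, 10
# out = (clamp(round(x / scale) + zero_point, quant_min, quant_max) - zero_point) * scale
out = torch.fake_quantize_per_tensor_affine(x, scale, zero_point, quant_min=0, quant_max=255)
print(out)
```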

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104453
Approved by: https://github.com/kit1980
2023-06-30 22:59:00 +00:00
Amr Elshennawy
a78bddac01 Revert D46920584: Multisect successfully blamed D46920584 for test or build failures (#104269) (#104302)
Summary:

This diff is reverting D46920584
D46920584: Make `torch.empty*` deterministic by filling with NaN or max int value (#101849) by generatedunixname499836121 has been identified to be causing the following test or build failures:

Tests affected:
- [torchrec/distributed/composable/tests:test_fsdp - torchrec.distributed.composable.tests.test_fsdp.FullyShardTest: test_composable_checkpoint](https://www.internalfb.com/intern/test/281475062923125/)

Here's the Multisect link:
https://www.internalfb.com/multisect/2341386
Here are the tasks that are relevant to this breakage:

We're generating a revert to back out the changes in this diff, please note the backout may land if someone accepts it.

If you believe this diff has been generated in error you may Commandeer and Abandon it.

Test Plan: NA

Reviewed By: huydhn, osalpekar

Differential Revision: D46997394

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104302
Approved by: https://github.com/osalpekar
2023-06-29 20:20:58 +00:00
JenDL
a6b9a61a6a Added a note to torch.round doc to indicate the return type (#97227)
Added a note to the torch.round doc to indicate the return type of the output tensor.
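A small sketch of the behaviors the note touches on (the output dtype matches the input; halves round to even):
```python
import torch

x = torch.tensor([0.5, 1.5, 2.5])
r = torch.round(x)
print(r)        # tensor([0., 2., 2.]) -- rounds half to even
print(r.dtype)  # torch.float32, same as the input
```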

Fixes #89056

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97227
Approved by: https://github.com/albanD
2023-06-29 20:02:59 +00:00
Kurt Mohler
2642f31e4c Make torch.empty* deterministic by filling with NaN or max int value (#101849)
Part of #82004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101849
Approved by: https://github.com/lezcano, https://github.com/albanD, https://github.com/kulinseth
2023-06-21 02:53:22 +00:00
Zheng, Zhaoqiong
d52d1fd5ba add description for unexpected case (#103500)
Fixes #88547

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103500
Approved by: https://github.com/mingfeima, https://github.com/mikaylagawarecki
2023-06-20 19:02:45 +00:00
Liang Hou
e82616d900 Add generator argument in torch.randn signature (#102075)
Fix the documentation issue of `torch.randn`
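A sketch of the argument the signature now documents:
```python
import torch

g = torch.Generator().manual_seed(42)
x = torch.randn(3, generator=g)  # reproducible draws from a local generator
print(x)
```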

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102075
Approved by: https://github.com/kit1980, https://github.com/soulitzer
2023-06-14 23:37:19 +00:00
Simon-Martin Schröder
a0885dff98 Link torch.cat in docstring of torch.stack and vice versa (#103421)
torch.cat and torch.stack are similar enough that they should point to each other.
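A quick contrast for context:
```python
import torch

a, b = torch.zeros(2, 3), torch.ones(2, 3)
print(torch.cat([a, b], dim=0).shape)    # torch.Size([4, 3]) -- joins along an existing dim
print(torch.stack([a, b], dim=0).shape)  # torch.Size([2, 2, 3]) -- inserts a new dim
```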

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103421
Approved by: https://github.com/malfet, https://github.com/svekars, https://github.com/kit1980
2023-06-14 23:31:22 +00:00
Snorf Yang
2a3e45a2a8 Docs: update default device description (#101283)
Closes #101274

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101283
Approved by: https://github.com/albanD
2023-05-16 17:07:31 +00:00
Yukio Siraichi
b3b333205f Fix asarray doc examples. (#100971)
Fixes issue raised on [PyTorch discuss](https://discuss.pytorch.org/t/confused-on-an-example-on-pytorch-official-documentation/178785).

**Summary:** the examples in the `asarray` docs have a few mistakes that make them not work. This PR fixes those.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100971
Approved by: https://github.com/Skylion007, https://github.com/lezcano
2023-05-12 11:52:10 +00:00
ts
2a6a159c0c Modify repeat_interleave docs to highlight potential overloading (#99650)
Fixes #99259, drawing attention to the fact that `input` is optional, by putting a variation of the method signature at the top of the file and by modifying the input arguments.

Note that I'm not certain how to get the additional signature at the same level of indentation as the first one, but I think this change does a good job of highlighting that the input is optional.

Would be happy to iterate on this if there are any issues.
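For illustration, both call forms (the second is the `input`-less overload the PR highlights):
```python
import torch

x = torch.tensor([1, 2, 3])
print(torch.repeat_interleave(x, 2))                  # tensor([1, 1, 2, 2, 3, 3])
# Without `input`: repeats alone yields indices, each i repeated repeats[i] times.
print(torch.repeat_interleave(torch.tensor([2, 3])))  # tensor([0, 0, 1, 1, 1])
```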

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99650
Approved by: https://github.com/mikaylagawarecki
2023-05-01 17:53:03 +00:00
Akinori Mitani
c11441fda3 Update torch.arange doc. (#99963)
To always exclude `end` without being affected by rounding error, `epsilon` should be subtracted, instead of being added.
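A sketch of the rounding hazard the note addresses (the exact output is precision dependent):
```python
import torch

# With a non-integral step, accumulated rounding can pull `end` into the result:
print(torch.arange(1.0, 1.3, 0.1))        # may contain a value close to 1.3
eps = 1e-6
print(torch.arange(1.0, 1.3 - eps, 0.1))  # subtracting eps keeps `end` excluded
```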

Fixes #99853

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99963
Approved by: https://github.com/kit1980
2023-04-26 04:18:56 +00:00
Kiersten Stokes
5c16dfd708 Add half to real param description in torch.complex docs (#99938)
Fixes #89733 according to the issue description
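A sketch of the `half` support the parameter description now mentions:
```python
import torch

real = torch.tensor([1.0], dtype=torch.half)
imag = torch.tensor([2.0], dtype=torch.half)
print(torch.complex(real, imag).dtype)  # torch.complex32
```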

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99938
Approved by: https://github.com/Skylion007
2023-04-25 21:23:16 +00:00
gusty1g
efc90c797d improvements to torch.gradient docs (#98824)
Fixes #98693

Clarified docs for `torch.gradient` on `h_l` and how the gradient is computed. For the mathematical equations, I followed this reference: https://www.dam.brown.edu/people/alcyew/handouts/numdiff.pdf.
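An illustrative call on a non-uniform grid (the coordinates are mine, not from the PR):
```python
import torch

coords = torch.tensor([0.0, 1.0, 1.5, 3.5])
y = coords ** 2
(dy,) = torch.gradient(y, spacing=(coords,))
print(dy)  # approximates 2 * coords; interior points use central differences
```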

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98824
Approved by: https://github.com/ngimel, https://github.com/kit1980
2023-04-12 23:43:40 +00:00
Connor Henderson
9d04d376d8 docs: Match open bracket with close bracket in unsqueeze (#95215)
Was going to fix something else that I thought was an issue, but isn't, so just leaving this tiny thing in case it's wanted
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95215
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-02-24 03:56:59 +00:00
Edward Z. Yang
ce950b412f Reland "Add torch.empty_permuted (#95069)" (#95208)
This reverts commit 92e03cd583.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95208
Approved by: https://github.com/albanD
2023-02-21 18:02:48 +00:00
PyTorch MergeBot
92e03cd583 Revert "Add torch.empty_permuted (#95069)"
This reverts commit bedeb1f014.

Reverted https://github.com/pytorch/pytorch/pull/95069 on behalf of https://github.com/jeanschmidt due to Breaking internal builds. More in https://fburl.com/phabricator/ztrxrroq
2023-02-21 12:05:20 +00:00
Edward Z. Yang
bedeb1f014 Add torch.empty_permuted (#95069)
torch.empty_permuted is a generalized version of torch.empty(memory_format=...), where you can pass an arbitrary physical layout as a tuple of dims, allowing you to set up dense, non-overlapping tensors with a non-standard memory format. Check the docblock for a full description of the semantics.

The initial motivation for this PR is with guard-less unbacked SymInts. Traditionally, the way we allocate dense tensors with arbitrary layout is with `empty_strided`. However, `empty_strided` does not know that the given strides are actually contiguous, and must test this manually to find out if it is the case. With `empty_permuted`, this is known statically to be the case and helps us skip some 0/1 guards.

However, I also think torch.empty_permuted is a useful API in its own right. It is technically possible to simulate this with an empty and a permute; however, there are some downsides:

* The manual incantation is tricky to work out. To allocate an NHWC tensor, the invocation is `torch.empty(N, H, W, C).permute(0, 3, 1, 2)`; the permute call has to take NHWC to NCHW, and is the *inverse* of the permutation people are typically thinking of when they talk about NHWC (0, 2, 3, 1). Instead, torch.empty_permuted lets you write `torch.empty_permuted((N, C, H, W), (0, 2, 3, 1))`, providing the intuitive permutation. It can literally be read off as NHWC if you assign N=0, C=1, H=2, W=3.
* An empty(requires_grad=True).permute() is no longer a leaf tensor. You can force it to be a leaf with a detach(), but it is more straightforward and less error-prone to allow directly allocating a tensor with the correct permutation.

It is also technically possible to simulate this with empty_strided. However, this requires the user to manually compute the contiguous output strides and is bad from a reduction of guards perspective. For what it's worth, this is one of the more common uses of as_strided in the wild, and it would be nice to get rid of it.

A nice enhancement of this feature would be to accept `physical_layout` anywhere `memory_format` is accepted. However, this would be a pretty involved change, so I'm doing the easy thing instead.
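Concretely, a sketch of the NHWC case described above:
```python
import torch

N, C, H, W = 2, 3, 4, 5
t = torch.empty_permuted((N, C, H, W), (0, 2, 3, 1))
print(t.shape)     # torch.Size([2, 3, 4, 5]) -- logical NCHW shape
print(t.stride())  # (60, 1, 15, 3) -- dense NHWC physical layout
print(t.is_contiguous(memory_format=torch.channels_last))  # True
```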

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95069
Approved by: https://github.com/malfet, https://github.com/ngimel, https://github.com/albanD, https://github.com/dagitses
2023-02-20 00:23:10 +00:00
Ivan Yashchuk
fba13d94a1 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

- [x] XLA PR: https://github.com/pytorch/xla/pull/4498
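For anyone migrating, a sketch of the documented replacement:
```python
import torch

A = torch.randn(3, 3)
A = A + A.mT  # symmetric
# torch.symeig is gone; torch.linalg.eigh is the replacement:
eigvals, eigvecs = torch.linalg.eigh(A)
```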

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980, https://github.com/malfet
2023-01-31 11:59:11 +00:00
Minh-Long Luu (刘明龙)
00b3f22210 Add missing scalar example in docs of torch.where (#93145)
[`torch.where(condition, x, y)`](https://pytorch.org/docs/stable/generated/torch.where.html) accepts `x` and `y` as either `Tensor` or Scalar, but the Scalar example is missing in the docs. I simply add the example.
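The kind of example being added, sketched here:
```python
import torch

x = torch.randn(4)
# `y` (and/or `x`) may be a Scalar rather than a Tensor:
print(torch.where(x > 0, x, 0.0))
```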

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93145
Approved by: https://github.com/ngimel
2023-01-28 03:46:44 +00:00
PyTorch MergeBot
acdd462b1a Revert "Remove deprecated torch.symeig (#70988)"
This reverts commit d70ed68162.

Reverted https://github.com/pytorch/pytorch/pull/70988 on behalf of https://github.com/kit1980 due to Failing XLA tests, forward fix unsuccessful
2023-01-24 19:03:40 +00:00