Commit Graph

394 Commits

Author SHA1 Message Date
Mike Ruberry
cb26661fe4 Throws runtime error when torch.full would infer a float dtype from a bool or integral fill value (#40364)
Summary:
BC-breaking NOTE:

In PyTorch 1.6, bool and integral fill values given to torch.full must set the dtype or out keyword argument. In prior versions of PyTorch these fill values would return float tensors by default, but in PyTorch 1.7 they will return a bool or long tensor, respectively. The documentation for torch.full has been updated to reflect this.

PR NOTE:

This PR causes torch.full to throw a runtime error when it would have inferred a float dtype from a boolean or integer fill value. A versioned symbol for torch.full is added to preserve the behavior of already serialized TorchScript programs. Existing tests for the deprecated behavior have been updated to reflect that it is now unsupported, and a couple of new tests have been added to validate the versioned symbol behavior. The documentation of torch.full has also been updated to reflect this change.
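For illustration, a minimal sketch of the 1.6 behavior described above (shapes and values chosen for illustration):

```
import torch

# torch.full((2, 3), 7)                  # RuntimeError in 1.6: bool/integral fill
#                                        # values require an explicit dtype or out
torch.full((2, 3), 7, dtype=torch.long)  # OK: explicit dtype
torch.full((2, 3), 7.0)                  # OK: float fill values still infer a float dtype
```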
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40364

Differential Revision: D22176640

Pulled By: mruberry

fbshipit-source-id: b20158ebbcb4f6bf269d05a688bcf4f6c853a965
2020-06-23 23:27:22 -07:00
anjali411
09285070a7 Doc fix for complex views (#40450)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40450

Test Plan: Imported from OSS

Differential Revision: D22190911

Pulled By: anjali411

fbshipit-source-id: eb13559c7a2f62d63344601c750b5715686e95c3
2020-06-23 15:03:22 -07:00
anjali411
8ec2ae9a9f Add view_as_real, view_as_complex for complex tensors (#39099)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39099

Test Plan: Imported from OSS

Differential Revision: D22057886

Pulled By: anjali411

fbshipit-source-id: bad5ba7097ba0dd13f2c549b2463094dee9afa14
2020-06-22 15:15:27 -07:00
Mike Ruberry
95489b590f Throws runtime error when performing integer division using torch.div (#38620)
Summary:
**1.6 Deprecation Note**

In PyTorch 1.6 attempting to divide two integer tensors, or an integer tensor and an integer scalar, will throw a runtime error. This behavior was deprecated with a warning in PyTorch 1.5. In PyTorch 1.7 torch.div and the division operator will always perform true division, like Python 3 and NumPy.

To divide integer values use either torch.true_divide, for true division, or torch.floor_divide (the // operator) for floor division.
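A small sketch of the options above (values chosen for illustration):

```
import torch

a = torch.tensor([5, 3])
b = torch.tensor([2, 2])

# a / b  or  torch.div(a, b)   # RuntimeError in 1.6 for two integer tensors
torch.true_divide(a, b)        # tensor([2.5000, 1.5000])
torch.floor_divide(a, b)       # tensor([2, 1]), same as a // b
```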

**PR Summary**

This PR updates the warning message when performing integer division to be a runtime error. Because some serialized TorchScript programs may rely on torch.div's historic behavior, it also implements a "versioned symbol" for div that lets those models retain their current behavior. Extensive tests of this behavior make up the majority of this PR.

Note this change bumps the produced file format version to delineate which programs should have their historic div behavior preserved.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38620

Differential Revision: D21612598

Pulled By: mruberry

fbshipit-source-id: c9c33591abce2f7e97f67f0f859901f5b03ed47d
2020-06-10 13:59:34 -07:00
kshitij12345
9733390998 Add torch.flip{lr, ud} (#38599)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/38349

TODO:
* [x] Add Tests
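A minimal usage sketch of the two new ops:

```
import torch

x = torch.arange(6).reshape(2, 3)
torch.flipud(x)  # reverse the rows, equivalent to torch.flip(x, [0])
torch.fliplr(x)  # reverse the columns, equivalent to torch.flip(x, [1])
```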
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38599

Differential Revision: D21941884

Pulled By: mruberry

fbshipit-source-id: 7a442ff11051c2c868cf8e3c04e4bba0f1a1d426
2020-06-09 07:19:37 -07:00
KushajveerSingh
88fe05e106 [Docs] Update torch.(squeeze, split, set_printoptions, save) docs. (#39303)
Summary:
I added the following to the docs:
1. `torch.save`.
    1. Added doc for `_use_new_zipfile_serialization` argument.
    2. Added a note telling that extension does not matter while saving.
    3. Added an example showing the use of above argument along with `pickle_protocol=5`.

2. `torch.split`
    1. Added an example showing the use of the function (see the sketch after this list).

3. `torch.squeeze`
   1. Added a warning for batch_size=1 case.

4. `torch.set_printoptions`
    1. Changed the docs of `sci_mode` argument from
        ```
        sci_mode: Enable (True) or disable (False) scientific notation. If
                 None (default) is specified, the value is defined by `_Formatter`
        ```
        to
        ```
        sci_mode: Enable (True) or disable (False) scientific notation. If
                 None (default=False) is specified, the value is defined by
                `torch._tensor_str._Formatter`.
        ```
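For instance, the kind of torch.split example added in item 2 reads roughly like this sketch (shapes chosen for illustration):

```
import torch

a = torch.arange(10).reshape(5, 2)
torch.split(a, 2)       # three chunks of sizes 2, 2, 1 along dim 0
torch.split(a, [1, 4])  # two chunks of sizes 1 and 4 along dim 0
```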
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39303

Differential Revision: D21904504

Pulled By: zou3519

fbshipit-source-id: 92a324257d09d6bcfa0b410d4578859782b94488
2020-06-05 12:57:53 -07:00
krshrimali
335e4a1e3b Add arcosh, arcsinh and arctanh to unary ops (#38388)
Summary:
This PR aims to add `arcosh`, `arcsinh` and `arctanh` support. Please see issue https://github.com/pytorch/pytorch/issues/38349 for more details.

**TODOs:**

* [x] Add test cases for `arcosh`, `arcsinh` and `arctanh`. (need help)
* [x] Overload ops if `std::op` does not work with `thrust::complex` types (like for `sinh`, `cosh`).

Note: `std::acosh, std::asinh, std::atanh` do not support `thrust::complex` types. Added support for complex types for these 3 ops (`arccosh, arcsinh, arctanh`)
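A usage sketch, using the op names as exposed in current PyTorch (acosh/asinh/atanh; values chosen for illustration):

```
import torch

x = torch.tensor([1.5, 2.0, 3.0])
torch.acosh(x)                       # inverse hyperbolic cosine, defined for x >= 1
torch.asinh(x)                       # inverse hyperbolic sine
torch.atanh(torch.tensor([0.5]))     # inverse hyperbolic tangent, defined on (-1, 1)
torch.acosh(torch.tensor([1 + 1j]))  # complex inputs are supported per this PR
```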

cc: mruberry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38388

Differential Revision: D21882055

Pulled By: mruberry

fbshipit-source-id: d334590b47c5a89e491a002c3e41e6ffa89000e3
2020-06-04 11:40:55 -07:00
Aayush Naik
0829cadca3 Implement rad2deg, deg2rad (#38852)
Summary:
Resolves https://github.com/pytorch/pytorch/issues/38372.

cc mruberry
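A minimal sketch of the two ops:

```
import math
import torch

torch.rad2deg(torch.tensor([math.pi, math.pi / 2]))  # tensor([180.,  90.])
torch.deg2rad(torch.tensor([180.0, 90.0]))           # tensor([3.1416, 1.5708])
```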
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38852

Differential Revision: D21868935

Pulled By: mruberry

fbshipit-source-id: ae6ded11b743c9d1cdc032984b4abe0a115290d6
2020-06-03 22:21:54 -07:00
Cloud Han
05f097b5bb Implement logaddexp (#38384)
Summary:
Resolve https://github.com/pytorch/pytorch/issues/38377
Related https://github.com/pytorch/pytorch/issues/38349

This op should be distinguished from `logsumexp`, which performs a reduction on a tensor over a specific axis.
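A sketch of the distinction (values chosen for illustration):

```
import torch

a = torch.tensor([-1000.0, -1.0])
b = torch.tensor([-1000.0, -2.0])
torch.logaddexp(a, b)  # elementwise log(exp(a) + exp(b)), stable for large negative inputs
torch.logsumexp(torch.stack([a, b]), dim=0)  # the reduction form; same values here
```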
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38384

Differential Revision: D21737336

Pulled By: mruberry

fbshipit-source-id: 7864d04ca304c0fb2937bb083583e3e3d6ef205d
2020-05-27 20:27:31 -07:00
Mike Ruberry
4239416c72 Throws runtime error on attempted addcdiv integer division (#38762)
Summary:
1.6 Deprecation Note:

In 1.6 attempting to perform integer division using addcdiv will throw a RuntimeError, and in 1.7 the behavior will change so that addcdiv always performs a true division of its tensor1 and tensor2 inputs. See the warning in torch.addcdiv's documentation for more information.

PR Summary:

This PR updates the warning that appears when addcdiv performs integer division to throw a RuntimeError. This is intended to prevent silent errors when torch.addcdiv's behavior is changed to always perform true division in 1.7. The documentation is updated (slightly) to reflect this, as are the addcdiv tests in test_torch and test_type_promotion.
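A small sketch of addcdiv with floating-point inputs (values chosen for illustration):

```
import torch

t  = torch.zeros(2)
t1 = torch.tensor([1.0, 2.0])
t2 = torch.tensor([2.0, 4.0])
torch.addcdiv(t, t1, t2, value=1.0)  # t + value * (t1 / t2) -> tensor([0.5000, 0.5000])
# Integer tensor1/tensor2 inputs raise a RuntimeError as of 1.6
```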
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38762

Differential Revision: D21657585

Pulled By: mruberry

fbshipit-source-id: c514b44409706f2bcfeca4473424b30cc48aafbc
2020-05-27 14:40:07 -07:00
kshitij12345
2751dda7f6 [docs] fix formula torch.logcumsumexp (#38952)
Summary:
Reference : https://github.com/pytorch/pytorch/pull/36308#issuecomment-632282641

After fix:

![Screenshot from 2020-05-23 15-35-09](https://user-images.githubusercontent.com/19503980/82727956-4bcabb80-9d0b-11ea-85a8-81b35012abbc.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38952

Differential Revision: D21722196

Pulled By: ezyang

fbshipit-source-id: 62b08c14e0ce9603133841940627df40d7b1e861
2020-05-26 16:02:43 -07:00
Shawn Zhong
ba3893e736 Rename torch._C.Generator to torch.Generator (#38773)
Summary:
Fix https://github.com/pytorch/pytorch/issues/26528
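After the rename, the public name works directly, e.g.:

```
import torch

g = torch.Generator().manual_seed(42)  # previously only reachable as torch._C.Generator
torch.randn(2, generator=g)
```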
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38773

Differential Revision: D21701053

Pulled By: pbelevich

fbshipit-source-id: 57632ca9ce430ec30dc8e40739194ee2b5860f71
2020-05-26 08:29:46 -07:00
Shawn Zhong
b1982c4bdb Fix multiline signatures in docstring (#38768)
Summary:
Fix https://github.com/pytorch/pytorch/issues/38694

See https://5533621-65600975-gh.circle-artifacts.com/0/docs/torch.html

## Index Page
| Before | After |
| --- | --- |
| ![image](https://user-images.githubusercontent.com/6421097/82448124-ee1a4300-9a6e-11ea-9a48-cabf62eedd92.png)  | ![image](https://user-images.githubusercontent.com/6421097/82448175-fd00f580-9a6e-11ea-8c79-c3dd6bac0b69.png) |
| ![image](https://user-images.githubusercontent.com/6421097/82448234-0f7b2f00-9a6f-11ea-8221-19335ee60aa2.png) | ![image](https://user-images.githubusercontent.com/6421097/82448262-19049700-9a6f-11ea-9eea-ac2f71068d7f.png) |

## Detail Page
| Before | After |
| --- | --- |
| ![image](https://user-images.githubusercontent.com/6421097/82448421-4fdaad00-9a6f-11ea-9909-29692cb8ca01.png) | ![image](https://user-images.githubusercontent.com/6421097/82448440-5701bb00-9a6f-11ea-8c07-d06cb0cdfa50.png) |
| ![image](https://user-images.githubusercontent.com/6421097/82448496-68e35e00-9a6f-11ea-8db9-2d75a9328b3a.png) | ![image](https://user-images.githubusercontent.com/6421097/82448539-7567b680-9a6f-11ea-9c2e-a59eca4090c4.png) |
| ![image](https://user-images.githubusercontent.com/6421097/82448563-7d275b00-9a6f-11ea-97af-51f45969f473.png) | |
| ![image](https://user-images.githubusercontent.com/6421097/82448329-320d4800-9a6f-11ea-8d24-3d33445cf591.png) | ![image](https://user-images.githubusercontent.com/6421097/82448353-389bbf80-9a6f-11ea-8cc8-752d3fd0dee1.png) |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38768

Differential Revision: D21691859

Pulled By: zou3519

fbshipit-source-id: 336158be450436554a1fa2105a5eedf24236c56b
2020-05-21 14:39:32 -07:00
kshitij12345
3487744821 Add torch.logcumsumexp (#36308)
Summary:
Creating a new PR as I am unable to push to pandeykartikey's branch, since I don't have the permissions.

Closes https://github.com/pytorch/pytorch/issues/26411

Based on https://github.com/pytorch/pytorch/issues/32876 Thanks pandeykartikey for starting this out.

Have addressed the comments.
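A sketch of what the op computes:

```
import torch

x = torch.randn(4)
out = torch.logcumsumexp(x, dim=0)  # log of the cumulative sum of exp(x), computed stably
torch.allclose(out, torch.log(torch.cumsum(torch.exp(x), dim=0)))  # True (up to fp error)
```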

anjali411 agadetsky albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36308

Differential Revision: D21648573

Pulled By: albanD

fbshipit-source-id: bc1a8fc4ab474a1148298117a1549b0e46f7c3ff
2020-05-21 09:12:31 -07:00
Bharat123rox
15da26f8aa DOC: Add documentation for Tensor.is_nonzero (#37845)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/37438 by adding documentation for `Tensor.is_nonzero`
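For reference, the documented behavior looks like:

```
import torch

torch.tensor([1.0]).is_nonzero()     # True
torch.tensor([0]).is_nonzero()       # False
# torch.tensor([1, 2]).is_nonzero()  # RuntimeError: only defined for single-element tensors
```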
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37845

Differential Revision: D21494422

Pulled By: mruberry

fbshipit-source-id: ee4f5979922d7c8100b5031d770ccdf59fe1c1a1
2020-05-14 04:46:55 -07:00
Xinyu Li
dcf1861f88 add documentation for bucketization (#38119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38119

This is for (#37435).
Demo is here:
https://glaringlee.github.io/generated/torch.searchsorted.html
https://glaringlee.github.io/generated/torch.bucketize.html
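A minimal sketch of the two documented ops (values chosen for illustration):

```
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
values = torch.tensor([3, 6, 9])
torch.bucketize(values, boundaries)     # tensor([1, 3, 4])
torch.searchsorted(boundaries, values)  # tensor([1, 3, 4]); note the argument order differs
```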

Test Plan: Imported from OSS

Differential Revision: D21517392

Pulled By: glaringlee

fbshipit-source-id: b35795c7f07e9ae4c4806c528eb51fd4ca14d499
2020-05-11 21:54:19 -07:00
Xiao Wang
63b1ae6983 Fix overflow in torch.remainder when dividend is very large (#37758)
Summary:
This will fix the GPU implementation in https://github.com/pytorch/pytorch/issues/37743 and https://github.com/pytorch/pytorch/issues/24861. Please also check my [comment](https://github.com/pytorch/pytorch/issues/37743#issuecomment-623285707).

The fixed `remainder_kernel` follows a similar implementation in NumPy. See 79d7bc276a/numpy/core/src/npymath/npy_math_internal.h.src (L649-L658)

I also slightly updated the doc for `torch.remainder`, to make it similar to `torch.fmod`.

I'm not sure how to modify the Vec256 code of the CPU remainder_kernel, so I have left it as is.
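The remainder/fmod contrast referenced above, as a sketch:

```
import torch

torch.remainder(torch.tensor([-3.0]), 2)  # tensor([1.]): result takes the divisor's sign
torch.fmod(torch.tensor([-3.0]), 2)       # tensor([-1.]): result takes the dividend's sign
```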
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37758

Differential Revision: D21388417

Pulled By: ngimel

fbshipit-source-id: 770ba5801cf34619b2b68b8b0cf95d8cfa52e6f6
2020-05-08 16:46:55 -07:00
Edward Yang
4fef3763dd Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings" (#37778)
Summary:
Original PR: https://github.com/pytorch/pytorch/pull/37419

cc mattip suo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37778

Differential Revision: D21385774

Pulled By: ezyang

fbshipit-source-id: 5de532faab8bae132736b6b5189e0ee2ac9935be
2020-05-04 14:32:35 -07:00
Michael Suo
20f7e62b1d Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings
Test Plan: revert-hammer

Differential Revision:
D21337640

Original commit changeset: d4ad198780c3

fbshipit-source-id: fa9ba6ac542173a50bdb45bfa12f3fec0ed704fb
2020-05-04 10:57:55 -07:00
mattip
f10fbcc820 Split up documentation into subpages and clean up some warnings (#37419)
Summary:
xref gh-32838, gh-34032

This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature, which builds out `autofunction` and `autoclass` stub files and links to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like tables of contents pointing to the actual single-class or single-function documentation pages.

Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective was to add names to `__all__` when adding them to `globals()` in `torch.__init__.py`

I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419

Differential Revision: D21337640

Pulled By: ezyang

fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f
2020-05-04 09:39:22 -07:00
Jesse Brizzi
bca82801e7 add support for generating Vandermonde matrices (#36725)
Summary:
Adds support for generating Vandermonde matrices based on the NumPy implementation found [here](https://github.com/numpy/numpy/blob/v1.17.0/numpy/lib/twodim_base.py#L475-L563).

Adds a test to ensure the generated matrix matches the expected NumPy implementation. Note tests are limited to torch.long and torch.double due to differences in how PyTorch and NumPy deal with type promotion.
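A usage sketch mirroring the NumPy semantics:

```
import torch

x = torch.tensor([1, 2, 3])
torch.vander(x, N=3)
# tensor([[1, 1, 1],
#         [4, 2, 1],
#         [9, 3, 1]])  # columns are x**2, x**1, x**0 (decreasing powers by default)
```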
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36725

Differential Revision: D21075138

Pulled By: jessebrizzi

fbshipit-source-id: 6bb1559e8247945714469b0e2b07c6f4d5fd1fd0
2020-04-29 13:16:26 -07:00
Mike Ruberry
bf860a4eba Adds missing documentation. (#37295)
Summary:
Fixes torch.isclose documentation missing a `.`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37295

Differential Revision: D21245426

Pulled By: mruberry

fbshipit-source-id: 88ce57ed68c2eac6aa83932780a6ba30e9fa69ea
2020-04-25 15:36:35 -07:00
Mike Ruberry
4a2372bc90 Implements torch.isclose for complex tensors (#36456)
Summary:
Previously torch.isclose would throw a RuntimeError when called on complex tensors. This PR updates torch.isclose to run on complex tensors and be consistent with [NumPy](https://numpy.org/doc/1.18/reference/generated/numpy.isclose.html). However, NumPy's handling of NaN, -inf, and inf values is odd, so I adopted Python's [cmath.isclose](https://docs.python.org/3/library/cmath.html) behavior when dealing with them. See https://github.com/numpy/numpy/issues/15959 for more on NumPy's behavior.

While implementing complex isclose I also simplified the isclose algorithm to:

- A is close to B if A and B are equal, if equal_nan is true then NaN is equal to NaN
- If A and B are finite, then A is close to B if `abs(a - b) <= (atol + abs(rtol * b))`

This PR also documents torch.isclose, since it was undocumented, and adds multiple tests for its behavior to test_torch.py since it had no dedicated tests.

The PR leaves equal_nan=True with complex inputs an error for now, pending the outcome of https://github.com/numpy/numpy/issues/15959.
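A sketch of the documented comparison rule (values chosen for illustration):

```
import torch

a = torch.tensor([1.0 + 1.0j])
b = torch.tensor([1.0 + 1.000001j])
torch.isclose(a, b)  # tensor([True]): abs(a - b) <= atol + abs(rtol * b)
```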
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36456

Differential Revision: D21159853

Pulled By: mruberry

fbshipit-source-id: fb18fa7048e6104cc24f5ce308fdfb0ba5e4bb30
2020-04-21 19:53:55 -07:00
Jesse Brizzi
28f439d4f4 add absolute alias for abs (#36597)
Summary:
Adds an absolute alias for the abs function to match NumPy's use of both:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.absolute.html

Adds a test to ensure the outputs from abs and absolute are the same.
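A minimal sketch of the alias:

```
import torch

x = torch.tensor([-1.5, 2.0])
torch.absolute(x)                              # alias for torch.abs
torch.equal(torch.absolute(x), torch.abs(x))   # True
```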
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36597

Differential Revision: D21024458

Pulled By: jessebrizzi

fbshipit-source-id: 4f2987e7bc7cde444d0a93e833a0350844b48d44
2020-04-20 14:49:51 -07:00
Kurt Mohler
c7cf4c1bd6 Bmm sparse dense (#33430)
Summary:
Add sparse-dense BMM operation for CUDA and CPU.

Closes https://github.com/pytorch/pytorch/issues/5672
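A minimal sketch of the sparse-dense batched product (shapes assumed for illustration):

```
import torch

s = torch.randn(2, 3, 4).to_sparse()  # batch of sparse (COO) matrices
d = torch.randn(2, 4, 5)              # batch of dense matrices
torch.bmm(s, d).shape                 # torch.Size([2, 3, 5])
```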
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33430

Differential Revision: D21017828

Pulled By: ezyang

fbshipit-source-id: 5bf60efcb16d05c08c7a284accc04d8968f98752
2020-04-20 09:35:16 -07:00
Mike Ruberry
d7fabfd5df Implements complex isfinite and isinf (#36648)
Summary:
Implements complex isfinite and isinf, consistent with NumPy.

A complex value is finite if and only if both its real and imaginary parts are finite.

A complex value is infinite if and only if its real or imaginary part is infinite.
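These definitions, as a sketch:

```
import torch

z = torch.tensor([1 + 1j, complex(float('inf'), 1), complex(1, float('nan'))])
torch.isfinite(z)  # tensor([ True, False, False])
torch.isinf(z)     # tensor([False,  True, False])
torch.isnan(z)     # tensor([False, False,  True])
```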

Old isfinite, isinf, and isnan tests are modernized and instead of fixtures the torch results are compared with NumPy. A new test is added for complex isfinite, isinf, and isnan. The docs for each function are updated to clarify what finite, infinite, and NaN values are.

The new tests rely on a new helper, _np_compare, that we'll likely want to generalize in the near future and use in more tests.

Addresses part of the complex support tasks. See https://github.com/pytorch/pytorch/issues/33152.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36648

Differential Revision: D21054766

Pulled By: mruberry

fbshipit-source-id: d947707c5437385775c82f4e6c722349ca5a2174
2020-04-16 09:09:02 -07:00
Nik Ved
d3cf9452af doc note on deterministic/non-deterministic gradient for min/max/median (#36481)
Summary:
An update on the note that the subgradients for min/max are not deterministic.
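A sketch of the tie case the note covers (which element receives the gradient is unspecified):

```
import torch

x = torch.tensor([1.0, 1.0], requires_grad=True)
x.min().backward()
x.grad  # gradient flows to one of the tied minima; which one is not deterministic
```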
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36481

Differential Revision: D20993887

Pulled By: albanD

fbshipit-source-id: 4e1a7519d94a9dcf9d359ad679360874d32c1fe2
2020-04-14 07:27:18 -07:00
Richard Zou
9662ef66b7 Fix torch.min docs (#36319)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36319

On the way to resolving #35216.
This is a fix for just the master branch but once this goes in,
I'll send a cherry-pick to release/1.5

The problem is that we were not calling `format` on a string that had
templates (e.g., '{input}', '{dim}'). This change makes it so that we
call format on the entire docstring for `torch.min`.

Test Plan:
- The `torch.max` docs are OK:
https://pytorch.org/docs/master/torch.html#torch.max and don't need
changing.
- `torch.min` docs, before this change: see second screenshot in #35216.
- after this change: <Insert link here on github>

![image](https://user-images.githubusercontent.com/5652049/78921702-4e2acc00-7a63-11ea-9ea0-89636ff6fb0a.png)

Differential Revision: D20946702

Pulled By: zou3519

fbshipit-source-id: a1a28707e41136a9bb170c8a4191786cf037a0c2
2020-04-09 15:10:59 -07:00
Mike Ruberry
d1a4a64092 Disables imag for real-valued tensors (#35728)
Summary:
In NumPy, calling np.imag on a real-valued array returns a non-writable array (view) of zeros. In PyTorch we don't support non-writable tensors (or views), so we can either return a writable tensor or error.

If we do the former, that may confuse people who try to write to the imaginary part of a real-valued tensor, and may cause a BC issue if we later support non-writable tensors. This PR errors instead, providing us flexibility to implement the solution we'd like in the future, while protecting users from unexpected behavior today.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35728

Differential Revision: D20760687

Pulled By: mruberry

fbshipit-source-id: f60d445746cc75ba558804c853993d9e4621dad3
2020-03-31 21:34:46 -07:00
anjali411
2c6d1e57cd is_complex doc fix (#35680)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35680

Differential Revision: D20740814

Pulled By: anjali411

fbshipit-source-id: dd35594ef7661a2876479b974b37be83cf472f44
2020-03-31 18:14:51 -07:00
Natalia Gimelshein
a15a4a5caf Revert D20722426: [pytorch][PR] [doc] Add overflow notice for cuFFT on half precision
Test Plan: revert-hammer

Differential Revision:
D20722426

Original commit changeset: 68f7304de5d6

fbshipit-source-id: 462133d8e8abff2e815a4a9b1eb047e7ecaa041a
2020-03-30 17:52:03 -07:00
Xiao Wang
e021c13d2d [doc] Add overflow notice for cuFFT on half precision (#35594)
Summary:
This would fix https://github.com/pytorch/pytorch/issues/33485.

cc ptrblck
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35594

Differential Revision: D20722426

Pulled By: ngimel

fbshipit-source-id: 68f7304de5d6cecdd9e34e8697fc84bc551b1a45
2020-03-30 15:53:32 -07:00
Mike Ruberry
860790de88 Makes torch.real and torch.imag NumPy compatible, but disables them for complex tensors (#35560)
Summary:
The current implementations of torch.real and torch.imag are not NumPy compatible. In particular:

- torch.real on a real tensor does not return the tensor itself, the way contiguous does
- torch.real on a complex tensor does not return a real-valued view of the real part
- torch.imag on a complex tensor does not return a real-valued view of the imaginary part
- torch.Tensor.real and torch.Tensor.imag exist as methods, but in NumPy they are writable attributes

This PR makes the functions NumPy compatible by removing the method variants and out kwarg, restricting them to work on only real tensors, and updating the behavior of torch.real to return its input. New tests are added to test_torch.py to verify the behavior, a couple existing complex tests are skipped, and the documentation is updated to reflect the change.
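Under the semantics this PR lands, a minimal sketch:

```
import torch

x = torch.tensor([1.0, 2.0])
torch.real(x)  # returns the input tensor itself for real-valued tensors
# torch.real(torch.tensor([1 + 1j]))  # errors: complex inputs are disabled for now
```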
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35560

Differential Revision: D20714568

Pulled By: mruberry

fbshipit-source-id: 5dd092f45757b620c8426c829dd15ee997246a26
2020-03-29 02:09:00 -07:00
anjali411
96eec95ece torch.from_numpy for complex dtypes (#35531)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35531

Differential Revision: D20693581

Pulled By: anjali411

fbshipit-source-id: d53e26b4175452fa00b287efbfceea18104c1364
2020-03-27 14:40:28 -07:00
Mario Kostelac
02d6e6e55f histc: Add a note on elements outside of given bounds (#34889)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34889

Differential Revision: D20625916

Pulled By: albanD

fbshipit-source-id: febb769f40d86bae8e1c7bb51d719b92bf4a572d
2020-03-27 14:04:51 -07:00
Mike Ruberry
7c1ea736ba Extends true_divide to be a method (#34794)
Summary:
Per title. See related https://github.com/pytorch/pytorch/pull/34570.

In PyTorch 1.7 the plan is for torch.div and Python's division operator to perform "true" division, like Python 3, JAX, and NumPy. To facilitate this change, this PR expands true_divide to be a method so it can cover all of torch.div's use cases.
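With the method variant, division call sites can be migrated mechanically, e.g.:

```
import torch

x = torch.tensor([3, 1])
x.true_divide(torch.tensor([2, 2]))  # tensor([1.5000, 0.5000]); method form added by this PR
```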

New true_divide tests are added to test_torch.py, test_type_promotion.py, and test_sparse.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34794

Differential Revision: D20545507

Pulled By: mruberry

fbshipit-source-id: 55286f819716c8823d1930441a69008560ac2bd5
2020-03-23 23:12:23 -07:00
Vitaly Fedyunin
40da01db6a Add docs about memory format (#34818)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34818

Test Plan: Imported from OSS

Differential Revision: D20601336

Pulled By: VitalyFedyunin

fbshipit-source-id: d34ad226be950bf134c6b383a4810ea6aa75599e
2020-03-23 15:06:33 -07:00
Jerry Zhang
3fa7813b9f [quant] Add dequantize.tensors (#34348)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34348

We need this function to swap in dequantize for prim::ListConstruct, since
the output of prim::ListConstruct is a list of Tensors.

Test Plan:
.

Imported from OSS

Differential Revision: D20504454

fbshipit-source-id: e6155e37da98e2219a6f79737cd46fe32a509c9f
2020-03-20 22:51:51 -07:00
Mike Ruberry
0d8447a9b8 Warns when performing integer division with div and addcdiv (#34570)
Summary:
Per title.

In the future we want to make div(), the division operator, and addcdiv perform true division as in Python 3, NumPy, and JAX. To do this without silently breaking users we plan to:

- Warn (once) in 1.5 when a user performs integer division using div or addcdiv
- RuntimeError in 1.6 when a user attempts to perform integer division using div or addcdiv
- Always perform true division in 1.7 using div, /, and addcdiv

Users can use true_divide or floor_divide today to explicitly specify the type of division they like.

A test for this behavior is added to test_type_promotion. Unfortunately, because we are only warning once (to avoid a deluge) the test only uses maybeWarnsRegex.

The XLA failure is real but will be solved by https://github.com/pytorch/pytorch/pull/34552. I'll be sure to land that PR first to avoid temporarily breaking the XLA build.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34570

Differential Revision: D20529211

Pulled By: mruberry

fbshipit-source-id: 65af5a9641c5825175d029e8413c9e1730c661d0
2020-03-19 04:10:55 -07:00
Mike Ruberry
3b7e1cd2cc Makes floor_divide a method, adds sparse floor division (#34552)
Summary:
(Updated per review feedback)

`torch.floor_divide` is currently a function that can operate on two tensors or a tensor and a scalar (scalar x scalar floor division is handled natively by Python and the JIT has a builtin function for it). This PR updates it to:

- have an out variant: `floor_divide(x, y, out=z)`
- be a method on a tensor: `x.floor_divide(y)`
- have an in-place variant: `x.floor_divide_(y)`
- work with sparse tensors (see the sketch after this list)
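The dense variants, as a sketch (values chosen for illustration):

```
import torch

x = torch.tensor([5, 7])
out = torch.empty_like(x)
torch.floor_divide(x, 2, out=out)  # out variant
x.floor_divide(2)                  # method: tensor([2, 3])
x.floor_divide_(2)                 # in-place
```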

Tests are added to test_sparse.py and test_torch.py for these new behaviors.

In addition, this PR:

- cleans up the existing sparse division and true_division code and improves their error message
- adds testing of sparse true_division to test_sparse.py
- extends existing floor_divide testing in test_torch to run on CUDA, too, not just the CPU

Unfortunately, making floor_divide a method requires breaking backwards compatibility, and floor_divide has been added to the BC whitelist since this is intentional. The BC issue is that the first parameter name to torch.floor_divide is changing from input to self. If you previously called torch.floor_divide with keyword arguments, e.g. torch.floor_divide(input=x, other=y), you will need to update to torch.floor_divide(self=x, other=y), or the more common torch.floor_divide(x, y).

The intent of this PR is to allow floor_divide to be substituted for division (torch.div, /) wherever division was previously used. In 1.6 we expect torch.div to perform true_division, and floor_divide is how users can continue to perform integer division with tensors.

There are two potential follow-up issues suggested by this PR:

- the test framework might benefit from additional tensor construction classes, like one to create dividends and divisors for multiple dtypes
- the test framework might benefit from a universal function test class. While methods have reasonable coverage as part of test_torch.py's TestTensorOp tests, function coverage is spotty. Universal functions are similar enough that it should be possible to generate tests for them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34552

Differential Revision: D20509850

Pulled By: mruberry

fbshipit-source-id: 2cd3c828aad67191c77f2ed8470411e246f604f8
2020-03-18 15:00:53 -07:00
Mike Ruberry
1afc584188 Deprecates current torch.full integral type inference, adds torch.full complex type inference (#34709)
Summary:
Per title.

Currently torch.full will always (attempt to) produce a float tensor. This is inconsistent with NumPy in (at least) two cases:

- When integral fill values (including bool) are given
- When complex fill values are given

For example:

```
np.full((1, 2), 1).dtype
: dtype('int64')

np.full((1, 2), (1 + 1j)).dtype
: dtype('complex128')
```

Whereas in PyTorch

```
torch.full((1, 2), 1).dtype
: torch.float32

torch.full((1, 2), (1 + 1j)).dtype
: RuntimeError: value cannot be converted to type float without overflow: (1,1)
```

This PR begins the process of deprecating our current behavior of returning float tensors (by default) when given integer fill values: in 1.6 it warns the user that integer fill values will require explicitly specifying the dtype or out kwarg, and in 1.7 the behavior will change to return a LongTensor by default (a BoolTensor for bool values). The intermediate 1.6 release is to prevent changing the behavior silently and unexpectedly.

The PR also implements inference for complex types. So that with it:

```
torch.full((1, 2), (1 + 1j)).dtype
: torch.complex64
```

The complex type inference returns a ComplexFloat tensor when given a complex fill value (and no dtype or out kwarg is specified), unless the default dtype is Double, in which case a ComplexDouble tensor is returned.

A test for these behaviors is added to test_torch.py.

Implementation note:

This PR required customizing full's dispatch because currently in eager codegen the TensorOptions object passed to functions improperly sets has_dtype() to true, even if the user did not explicitly provide a dtype. torch.arange already worked around this issue with its own custom implementation. The JIT, however, does pass a properly constructed TensorOptions object.

Future Work:

This PR does not extend torch.full's complex type inference to ONNX. This seems unlikely to come up and will be a clear error if it does. When integer type inference is added to torch.full, however, then porting the behavior to ONNX may be warranted. torch.arange ported its complex type promotion logic to ONNX, for example.

Additionally, this PR mostly leaves existing call sites in PyTorch that would trigger this warning intact. This is to be more minimal (since the PR is BC breaking). I will submit a separate PR fixing PyTorch's call sites.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34709

Differential Revision: D20509387

Pulled By: mruberry

fbshipit-source-id: 129593ba06a1662032bbbf8056975eaa59baf933
2020-03-18 12:19:31 -07:00
Mike Ruberry
a1eaaea288 Revert D20497453: [pytorch][PR] Makes floor_divide a method, adds sparse floor division
Test Plan: revert-hammer

Differential Revision:
D20497453

Original commit changeset: ac326f2007d8

fbshipit-source-id: b94b89b1a25521506e3d0a6b072d3d4d8c55e63d
2020-03-18 01:48:50 -07:00
Mike Ruberry
b7129050e7 Makes floor_divide a method, adds sparse floor division (#34552)
Summary:
(Updated per review feedback)

`torch.floor_divide` is currently a function that can operate on two tensors or a tensor and a scalar (scalar x scalar floor division is handled natively by Python and the JIT has a builtin function for it). This PR updates it to:

- have an out variant: `floor_divide(x, y, out=z)`
- be a method on a tensor: `x.floor_divide(y)`
- have an in-place variant: `x.floor_divide_(y)`
- work with sparse tensors

Tests are added to test_sparse.py and test_torch.py for these new behaviors.

In addition, this PR:

- cleans up the existing sparse division and true_division code and improves their error message
- adds testing of sparse true_division to test_sparse.py
- extends existing floor_divide testing in test_torch to run on CUDA, too, not just the CPU

Unfortunately, making floor_divide a method requires breaking backwards compatibility, and floor_divide has been added to the BC whitelist since this is intentional. The BC issue is that the first parameter name to torch.floor_divide is changing from input to self. If you previously called torch.floor_divide with keyword arguments, e.g. torch.floor_divide(input=x, other=y), you will need to update to torch.floor_divide(self=x, other=y), or the more common torch.floor_divide(x, y).

The intent of this PR is to allow floor_divide to be substituted for division (torch.div, /) wherever division was previously used. In 1.6 we expect torch.div to perform true_division, and floor_divide is how users can continue to perform integer division with tensors.

There are two potential follow-up issues suggested by this PR:

- the test framework might benefit from additional tensor construction classes, like one to create dividends and divisors for multiple dtypes
- the test framework might benefit from a universal function test class. While methods have reasonable coverage as part of test_torch.py's TestTensorOp tests, function coverage is spotty. Universal functions are similar enough that it should be possible to generate tests for them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34552

Differential Revision: D20497453

Pulled By: mruberry

fbshipit-source-id: ac326f2007d8894f730d1278fef84d63bcb07b5d
2020-03-18 00:01:45 -07:00
Mike Ruberry
3671036ef3 Adds true_divide function, analogous to Python 's, JAX's, NumPy's (true) division (#34236)
Summary:
See NumPy's division documentation here: https://numpy.org/doc/1.18/reference/generated/numpy.divide.html#numpy.divide.

True division is the same as PyTorch's default division except when both inputs are integer or bool tensors. In the latter case the inputs are (conceptually) cast to the default floating type before the division is performed.
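Concretely, as a sketch:

```
import torch

torch.true_divide(torch.tensor([1, 2]), 2)  # tensor([0.5000, 1.0000]): integer inputs are
                                            # cast to the default floating type first
```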

The function is implemented for dense and sparse tensors and supports exporting to ONNX from PyTorch's eager mode or JIT traces. The function is inherently incompatible with exporting to ONNX via JIT script, and is another datapoint suggesting we should deprecate exporting scripted graphs to ONNX.

Tests are added for the type promotion, named tensor, and ONNX export behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34236

Reviewed By: houseroad

Differential Revision: D20334087

Pulled By: mruberry

fbshipit-source-id: 83d00d886f46f713215d7d9e02ffd043164c57f1
2020-03-09 21:06:33 -07:00
Xiao Wang
ccf6fab65e Fix doc and type hints for "torch.add"; fix deprecated python calls in tests (#33935)
Summary:
This PR fixed the documentation for `torch.add` with alpha. It also fixed deprecated Python calls to `torch.add` and `torch.addmm` in tests, which may affect performance in *test/test_sparse.py* and *test/test_nn.py*.
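The documented form is the keyword alpha variant, e.g. (values chosen for illustration):

```
import torch

torch.add(torch.tensor([1.0]), torch.tensor([2.0]), alpha=10)  # input + alpha * other
                                                               # -> tensor([21.])
```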

cc csarofeen ptrblck
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33935

Differential Revision: D20313320

Pulled By: ngimel

fbshipit-source-id: fb08413d7e244865952e3fc0e1be7f1794ce4e9a
2020-03-06 15:53:58 -08:00
momohatt
a23e8099dd Fix typo (#34008)
Summary:
This PR removes apparently unnecessary dots in the documentation of `torch.t`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34008

Differential Revision: D20195084

Pulled By: ezyang

fbshipit-source-id: a34022de6b7a32d05a0bb3da197ee3507f4b8d8e
2020-03-03 07:38:40 -08:00
anjali411
13e4ee7883 Added tensor.is_complex(), is_complex and dtype.is_complex py bindings, tensor printing, and fixed the scalar type returned for complex float (#33268)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33268

Test Plan: Imported from OSS

Differential Revision: D19907698

Pulled By: anjali411

fbshipit-source-id: c3ce2e99fc09da91a90a8fb94e5525a00bb23703
2020-02-20 13:38:01 -08:00
Matthew Haines
1d9fcf8bd2 Correct documentation for torch.unsqueeze (#33478)
Summary:
"out" argument in torch.unsqueeze is not actually implemented, fixed documentation https://github.com/pytorch/pytorch/issues/29800
After: ![image](https://user-images.githubusercontent.com/33493903/74796371-6289ee00-5296-11ea-8493-e8c18ac63bdf.png)

Before: ![image](https://user-images.githubusercontent.com/33493903/74796444-96651380-5296-11ea-816c-2adacfa79e35.png)
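For reference, the corrected signature takes no out argument:

```
import torch

x = torch.tensor([1, 2, 3])
torch.unsqueeze(x, 0).shape  # torch.Size([1, 3]); there is no out= kwarg
```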
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33478

Differential Revision: D19978477

Pulled By: yf225

fbshipit-source-id: 42337326c1ec04975307366c94591ee32a11b091
2020-02-19 14:01:06 -08:00
anjali411
a8bd1d24c9 [Documentation] cummin doc fix (#33492)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33492

Differential Revision: D19976082

Pulled By: anjali411

fbshipit-source-id: c9f8f541783fded98b8aba54e293f824c926496e
2020-02-19 13:51:38 -08:00
anjali411
da015c77a1 Cummax and Cummin doc update and performance benchmark (#32537)
Summary:
Benchmark results for cummax, cummin (CUDA and CPU):

In [1]: import torch

In [2]: x=torch.randn(5,6,7).cuda()

In [3]: %timeit x.cummax(0)
134 µs ± 1.59 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [4]: %timeit x.max(0)
114 µs ± 560 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [5]: %timeit x.cummax(1)
134 µs ± 760 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [6]: %timeit x.max(1)
118 µs ± 514 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [7]: %timeit x.cumsum(0)
97.1 µs ± 6.93 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [8]: %timeit x.cumprod(0)
83.6 µs ± 689 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [9]: %timeit x.cumprod(1)
86.3 µs ± 528 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [10]: y=torch.randn(5,6,7)

In [11]: %timeit y.cummax(0)
148 µs ± 1.43 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [12]: %timeit y.max(0)
111 µs ± 125 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [13]: %timeit y.cumsum(0)
54.8 µs ± 311 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [14]: %timeit y.cumprod(0)
56.2 µs ± 836 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
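
For reference, a sketch of what the benchmarked ops return (values chosen for illustration):

```
import torch

x = torch.tensor([1.0, 3.0, 2.0])
values, indices = torch.cummax(x, dim=0)  # values: [1., 3., 3.], indices: [0, 1, 1]
values, indices = torch.cummin(x, dim=0)  # values: [1., 1., 1.], indices: [0, 0, 0]
```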
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32537

Differential Revision: D19951171

Pulled By: anjali411

fbshipit-source-id: cf972c550189473e9ce62e24ac7dd34b9373fef9
2020-02-18 14:12:25 -08:00