Commit Graph

713 Commits

Author SHA1 Message Date
Kazuaki Ishizaki
e0c194f10b Fix typos in messages under torch (#88961)
This PR fixes typos in messages and parameter names in C++ source and header files under the `torch` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88961
Approved by: https://github.com/albanD
2022-11-14 19:06:41 +00:00
Antoni Viros i Martin
c77368d416 Implement a constructor for nested_tensor that is similar to torch.tensor() (#88213)
Summary: This diff merges both previous implementations of constructors for nested tensors, the one from lists of tensors and the one from arbitrary Python lists, and implements it in PyTorch core so no extensions are needed to construct NTs.
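
Illustrative usage (an editor's sketch, not part of the commit; assumes the `torch.nested.nested_tensor` constructor as it shipped):

```Python
import torch

# From a list of tensors with ragged shapes.
nt = torch.nested.nested_tensor([torch.randn(3), torch.randn(5)])

# From nested Python lists, analogous to torch.tensor().
nt2 = torch.nested.nested_tensor([[1.0, 2.0], [3.0, 4.0, 5.0]])
```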

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88213
Approved by: https://github.com/cpuhrsch
2022-11-08 00:03:18 +00:00
David Reiss
885122b7dc Move PadNd from ATen/native to ATen (#82379)
Summary:
This header is being included from both aten/native and torch/csrc, but
some of our build configurations don't allow direct dependencies from
torch/csrc to aten/native, so put the header in aten where it's always
accessible.

Resolves https://github.com/pytorch/pytorch/issues/81198

Test Plan:
CI.
```
./scripts/build_android.sh
env ANDROID_ABI="x86_64" ANDROID_NDK=".../ndk-bundle" CMAKE_CXX_COMPILER_LAUNCHER=ccache CMAKE_C_COMPILER_LAUNCHER=ccache USE_VULKAN=0 ./scripts/build_android.sh
echo '#include <torch/torch.h>' > test.cpp
g++ -E -I $PWD/build_android/install/include/ -I $PWD/build_android/install/include/torch/csrc/api/include test.cpp >/dev/null
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82379
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-10-10 20:26:57 +00:00
Mikayla Gawarecki
afaee00fec Add python nested_tensor and as_nested_tensor constructors in torch.nested (#85593)
Remove `torch.nested_tensor`, which has erroneous behavior w.r.t. gradients (the result could be either a leaf or not). Introduce `torch.nested.nested_tensor` and `torch.nested.as_nested_tensor` in the vein of `torch.tensor` and `torch.as_tensor`. Done in the nested `__init__.py` for now but can move to pybind in the future (when we want to load from numpy/nested lists).

Discussed offline with @cpuhrsch and pybind constructor (https://github.com/pytorch/pytorch/pull/85536) was more gnarly than expected, so we can move to that when we do need loading from numpy etc.
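
A sketch of the intended distinction (editor's example, hedged on the shipped API):

```Python
import torch

t = torch.randn(3, requires_grad=True)

# nested_tensor copies its inputs and always returns a leaf tensor.
leaf = torch.nested.nested_tensor([torch.randn(2), torch.randn(4)])

# as_nested_tensor preserves autograd history where possible,
# mirroring the torch.tensor / torch.as_tensor split.
nt = torch.nested.as_nested_tensor([t])
```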

Differential Revision: [D39806622](https://our.internmc.facebook.com/intern/diff/D39806622)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85593
Approved by: https://github.com/drisspg, https://github.com/cpuhrsch
2022-09-28 20:15:02 +00:00
vfdev-5
d951165bd8 [C++ API] Added missing antialiasing path in interpolation C++ api (#84599)
Description:

Following https://github.com/pytorch/pytorch/pull/69318#issuecomment-1238433540, this adds the missing bicubic path for the anti-alias flag to the C++ frontend.

- https://github.com/pytorch/pytorch/pull/70930

- added tests in pytorch/test/cpp/api/functional.cpp
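
For reference, the Python frontend equivalent of the path being added (editor's sketch):

```Python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 64, 64)
# Bicubic downsampling with anti-aliasing, as in the C++ frontend path added here.
y = F.interpolate(x, size=(32, 32), mode='bicubic', antialias=True)
```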

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84599
Approved by: https://github.com/kit1980, https://github.com/malfet
2022-09-13 03:54:07 +00:00
Mikayla Gawarecki
e217b30b0f Add torch.nested namespace (#84102)
First step towards #83775
- only `to_padded_tensor` is moved to the nested namespace for now
- following the schema used for `special`, `fft`, `linalg` and other namespaces, nested functions are registered in native_functions.yaml as `nested_{function_name}` and are bound to the desired Python name in
`torch/nested/__init__.py`, and the desired C++ name in `torch/csrc/api/include/torch/nested.h`.

~~**Question**: should we keep the documentation for `Tensor.to_padded_tensor` or can this be deleted since it is shared by `torch.nested.to_padded_tensor`?~~

[generated nested docs](https://docs-preview.pytorch.org/84102/nested.html?highlight=nested#module-torch.nested)
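
Editor's usage sketch for the relocated function:

```Python
import torch

nt = torch.nested.nested_tensor([torch.randn(2), torch.randn(4)])
# Pad ragged entries with zeros up to a common rectangular shape.
padded = torch.nested.to_padded_tensor(nt, padding=0.0)
```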

Differential Revision: [D39361148](https://our.internmc.facebook.com/intern/diff/D39361148)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84102
Approved by: https://github.com/drisspg
2022-09-12 16:31:05 +00:00
Peter Bell
b429a17545 Enable -Wunused-local-typedefs (#83708)
I recently had a PR reverted because it triggered an
unused-local-typedefs warning, so disabling these warnings in the CMake
build is counter-productive.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83708
Approved by: https://github.com/albanD
2022-08-26 15:45:47 +00:00
Edward Z. Yang
df69660832 Revert "Revert "Add a lint rule for torch/csrc/util/pybind.h include (#82552)"" (#82599)
This reverts commit 532b8a9e00.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82599
Approved by: https://github.com/albanD
2022-08-02 19:37:02 +00:00
PyTorch MergeBot
532b8a9e00 Revert "Add a lint rule for torch/csrc/util/pybind.h include (#82552)"
This reverts commit 9465c0e0b5.

Reverted https://github.com/pytorch/pytorch/pull/82552 on behalf of https://github.com/zengk95 due to This seems to be breaking windows binary wheels
2022-08-01 20:25:35 +00:00
Edward Z. Yang
9465c0e0b5 Add a lint rule for torch/csrc/util/pybind.h include (#82552)
We define specializations for pybind11 defined templates
(in particular, PYBIND11_DECLARE_HOLDER_TYPE) and consequently
it is important that these specializations *always* be #include'd
when making use of pybind11 templates whose behavior depends on
these specializations, otherwise we can cause an ODR violation.

The easiest way to ensure that all the specializations are always
loaded is to designate a header (in this case, torch/csrc/util/pybind.h)
that ensures the specializations are defined, and then add a lint
to ensure this header is included whenever pybind11 headers are
included.

The existing grep linter didn't have enough knobs to do this
conveniently, so I added some features.  I'm open to suggestions
for how to structure the features better.  The main changes:

- Added an --allowlist-pattern flag, which turns off the grep lint
  if some other line exists.  This is used to stop the grep
  lint from complaining about pybind11 includes if the util
  include already exists.

- Added --match-first-only flag, which lets grep only match against
  the first matching line.  This is because, even if there are multiple
  includes that are problematic, I only need to fix one of them.
  We don't /really/ need this, but when I was running lintrunner -a
  to fixup the preexisting codebase it was annoying without this,
  as the lintrunner overall driver fails if there are multiple edits
  on the same file.

I excluded any files that didn't otherwise have a dependency on
torch/ATen; this was mostly caffe2 and the valgrind wrapper compat
bindings.

Note the grep replacement is kind of crappy, but clang-tidy lint
cleaned it up in most cases.

See also https://github.com/pybind/pybind11/issues/4099

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82552
Approved by: https://github.com/albanD
2022-08-01 17:16:58 +00:00
Fabio Rocha
fd84c458f4 Add torch.unflatten and improve its docs (#81399)
unflatten now has a free function version in torch.unflatten, in addition to
the method torch.Tensor.unflatten.

Updated docs to reflect this and polished them a little.
For consistency, changed the signature of the int version of unflatten in
native_functions.yaml.

Some override tests were failing because unflatten has unusual
characteristics in terms of the .int and .Dimname versions having
different numbers of arguments, so this required some changes
to test/test_override.py.

Removed support for using a mix of integer and string arguments
when specifying dimensions in unflatten.
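
Editor's sketch of the two forms described above:

```Python
import torch

x = torch.randn(2, 12)
y = torch.unflatten(x, 1, (3, 4))  # free function: shape (2, 3, 4)
z = x.unflatten(1, (3, 4))         # equivalent Tensor method
```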
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81399
Approved by: https://github.com/Lezcano, https://github.com/ngimel
2022-07-29 15:02:42 +00:00
lezcano
19f3d4d795 Expose linalg.solve_ex (#80073)
This prepares for making `linalg.inv_ex` just a call into this function
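
Editor's sketch of the new function:

```Python
import torch

A = torch.randn(3, 3)
b = torch.randn(3)
# Returns the solution plus an info tensor rather than raising on
# singular inputs when check_errors=False (the default).
x, info = torch.linalg.solve_ex(A, b)
```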
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80073
Approved by: https://github.com/IvanYashchuk, https://github.com/albanD
2022-07-01 16:09:23 +00:00
Allen Goodman
63ef2a03e5 torch.special.scaled_modified_bessel_k0 (#78900)
```Python
scaled_modified_bessel_k0(input, *, out=None) -> Tensor
```

Scaled modified Bessel function of the second kind of order $0$.
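
Editor's usage sketch:

```Python
import torch

x = torch.linspace(0.5, 5.0, 5)
# Equivalent to exp(x) * K0(x); the scaling avoids underflow for large x.
y = torch.special.scaled_modified_bessel_k0(x)
```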
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78900
Approved by: https://github.com/mruberry
2022-06-29 14:53:37 +00:00
PyTorch MergeBot
602c38ff63 Revert "torch.special.gamma (#78904)"
This reverts commit f563f25efd.

Reverted https://github.com/pytorch/pytorch/pull/78904 on behalf of https://github.com/suo due to This PR appears to have broken mac tests on master f563f25efd
2022-06-28 00:54:22 +00:00
Allen Goodman
ab8797d69b torch.special.spherical_bessel_j0 (#78912)
```Python
spherical_bessel_j0(input, *, out=None) -> Tensor
```

Spherical Bessel function of the first kind of order $0$.
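
Editor's usage sketch:

```Python
import torch

x = torch.linspace(0.0, 10.0, 5)
y = torch.special.spherical_bessel_j0(x)  # sin(x)/x, with the x = 0 limit handled
```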
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78912
Approved by: https://github.com/mruberry
2022-06-27 20:14:46 +00:00
Allen Goodman
f563f25efd torch.special.gamma (#78904)
```Python
gamma(input, *, out=None) -> Tensor
```

Gamma function $\Gamma\left(\text{input}\right)$.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78904
Approved by: https://github.com/mruberry
2022-06-27 19:36:17 +00:00
Allen Goodman
b3ca3638be torch.special.scaled_modified_bessel_k1 (#78901)
```Python
scaled_modified_bessel_k1(input, *, out=None) -> Tensor
```

Scaled modified Bessel function of the second kind of order $1$.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78901
Approved by: https://github.com/mruberry
2022-06-24 20:57:38 +00:00
Allen Goodman
b3308e21bf torch.special.airy_ai (#78902)
```Python
airy_ai(input, *, out=None) -> Tensor
```

Airy function $\text{Ai}\left(\text{input}\right)$.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78902
Approved by: https://github.com/mruberry, https://github.com/linbinyu, https://github.com/seemethere
2022-06-23 19:33:40 +00:00
Michael Suo
30fb2c4aba [lint] autoformat test/cpp and torch/csrc
Let's have some fun.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78828

Approved by: https://github.com/ezyang
2022-06-11 21:11:16 +00:00
lezcano
54949a5abc Simplify and optimize linalg.solve
This PR heavily simplifies the code of `linalg.solve`. At the same time,
this implementation saves quite a few copies of the input data in some
cases (e.g. A is contiguous)

We also implement it in such a way that the derivative goes from
computing two LU decompositions and two LU solves to no LU
decomposition and one LU solve. It also avoids a number of copies
the derivative was unnecessarily performing (at least the copies of
two matrices).

On top of this, we add a `left` kw-only arg that allows the user to
solve `XA = B` rather concisely.
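
Editor's sketch of the new `left` kwarg:

```Python
import torch

A = torch.randn(3, 3)
B = torch.randn(2, 3)
# Solve X A = B directly, without transposing by hand.
X = torch.linalg.solve(A, B, left=False)
```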

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74046

Approved by: https://github.com/nikitaved, https://github.com/IvanYashchuk, https://github.com/mruberry
2022-06-11 04:06:40 +00:00
Allen Goodman
bc84143152 Orthogonal Polynomials (#78304)
```Python
chebyshev_polynomial_v(input, n, *, out=None) -> Tensor
```

Chebyshev polynomial of the third kind $V_{n}(\text{input})$.

```Python
chebyshev_polynomial_w(input, n, *, out=None) -> Tensor
```

Chebyshev polynomial of the fourth kind $W_{n}(\text{input})$.

```Python
legendre_polynomial_p(input, n, *, out=None) -> Tensor
```

Legendre polynomial $P_{n}(\text{input})$.

```Python
shifted_chebyshev_polynomial_t(input, n, *, out=None) -> Tensor
```

Shifted Chebyshev polynomial of the first kind $T_{n}^{\ast}(\text{input})$.

```Python
shifted_chebyshev_polynomial_u(input, n, *, out=None) -> Tensor
```

Shifted Chebyshev polynomial of the second kind $U_{n}^{\ast}(\text{input})$.

```Python
shifted_chebyshev_polynomial_v(input, n, *, out=None) -> Tensor
```

Shifted Chebyshev polynomial of the third kind $V_{n}^{\ast}(\text{input})$.

```Python
shifted_chebyshev_polynomial_w(input, n, *, out=None) -> Tensor
```

Shifted Chebyshev polynomial of the fourth kind $W_{n}^{\ast}(\text{input})$.
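
Editor's usage sketch for one of the added functions:

```Python
import torch

x = torch.linspace(-1.0, 1.0, 5)
p3 = torch.special.legendre_polynomial_p(x, 3)  # P_3 evaluated elementwise
```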
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78304
Approved by: https://github.com/mruberry
2022-06-03 22:38:56 +00:00
Allen Goodman
4a5381ab40 Bessel functions (#78451)
Adds:

```Python
bessel_j0(input, *, out=None) -> Tensor
```

Bessel function of the first kind of order $0$, $J_{0}(\text{input})$.

```Python
bessel_j1(input, *, out=None) -> Tensor
```

Bessel function of the first kind of order $1$, $J_{1}(\text{input})$.

```Python
bessel_y0(input, *, out=None) -> Tensor
```

Bessel function of the second kind of order $0$, $Y_{0}(\text{input})$.

```Python
bessel_y1(input, *, out=None) -> Tensor
```

Bessel function of the second kind of order $1$, $Y_{1}(\text{input})$.

```Python
modified_bessel_i0(input, *, out=None) -> Tensor
```

Modified Bessel function of the first kind of order $0$, $I_{0}(\text{input})$.

```Python
modified_bessel_i1(input, *, out=None) -> Tensor
```

Modified Bessel function of the first kind of order $1$, $I_{1}(\text{input})$.

```Python
modified_bessel_k0(input, *, out=None) -> Tensor
```

Modified Bessel function of the second kind of order $0$, $K_{0}(\text{input})$.

```Python
modified_bessel_k1(input, *, out=None) -> Tensor
```

Modified Bessel function of the second kind of order $1$, $K_{1}(\text{input})$.
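
Editor's usage sketch covering the four families:

```Python
import torch

x = torch.linspace(0.1, 4.0, 5)
j0 = torch.special.bessel_j0(x)           # first kind, J0
y0 = torch.special.bessel_y0(x)           # second kind, Y0
i0 = torch.special.modified_bessel_i0(x)  # modified first kind, I0
k0 = torch.special.modified_bessel_k0(x)  # modified second kind, K0
```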
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78451
Approved by: https://github.com/mruberry
2022-06-02 14:06:20 +00:00
Xiao Wang
d136852bda [CUDA][Linalg] Add a driver= kwarg to torch.linalg.svd and svdvals; add cusolver gesvdaStridedBatched driver to svd (#74521)

cusolver doc: https://docs.nvidia.com/cuda/cusolver/index.html#cuSolverDN-lt-t-gt-gesvda

Todo:

- [X] add cusolver `gesvdaStridedBatched` driver
- [X] add `driver=` kwarg to `torch.linalg.svd` and `torch.linalg.svdvals`
- [X] doc
- [X] error out (?) on other non-cusolver use cases: CPU, MAGMA
- [X] change svd api in `torch/csrc/api/include/torch/linalg.h` ?
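
Editor's sketch of the new kwarg (CUDA-only, since the drivers are cusolver routines):

```Python
import torch

A = torch.randn(5, 3, device='cuda')
U, S, Vh = torch.linalg.svd(A, driver='gesvda')
S2 = torch.linalg.svdvals(A, driver='gesvd')
```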

Close https://github.com/pytorch/pytorch/issues/41306

Related https://github.com/pytorch/pytorch/issues/75494
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74521
Approved by: https://github.com/Lezcano, https://github.com/IvanYashchuk, https://github.com/mruberry
2022-05-31 16:11:53 +00:00
Allen Goodman
64e0d0c4fe Laguerre polynomial (#78366)
Adds:

```Python
laguerre_polynomial_l(input, n, *, out=None) -> Tensor
```

Laguerre polynomial $L_{n}(\text{input})$.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$\frac{d^{k}}{d\,\text{input}^{k}} L_{n}(\text{input}) = (-1)^{k} \times L_{n - k}^{k}(\text{input})$$

where $L_{n}^{\alpha}$ is the associated Laguerre polynomial.
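
Editor's usage sketch:

```Python
import torch

x = torch.linspace(0.0, 4.0, 5)
l2 = torch.special.laguerre_polynomial_l(x, 2)  # L_2 evaluated elementwise
```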
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78366
Approved by: https://github.com/mruberry
2022-05-30 17:24:00 +00:00
Allen Goodman
9dc6d42c18 Probabilist’s Hermite polynomial (#78357)
Adds:

```Python
hermite_polynomial_he(input, n, *, out=None) -> Tensor
```
Probabilist’s Hermite polynomial $He_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $\text{input}$ is returned. Otherwise, the recursion:

$$He_{n + 1}(\text{input}) = \text{input} \times He_{n}(\text{input}) - n \times He_{n - 1}(\text{input})$$

is evaluated.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$\frac{d^{k}}{d\,\text{input}^{k}} He_{n}(\text{input}) = \frac{n!}{(n - k)!} He_{n - k}(\text{input}).$$
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78357
Approved by: https://github.com/mruberry
2022-05-28 13:56:12 +00:00
Allen Goodman
18273c39da Physicist’s Hermite polynomial (#78352)
Adds:

```Python
hermite_polynomial_h(input, n, *, out=None) -> Tensor
```
Physicist’s Hermite polynomial $H_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $\text{input}$ is returned. Otherwise, the recursion:

$$H_{n + 1}(\text{input}) = 2 \times \text{input} \times H_{n}(\text{input}) - H_{n - 1}(\text{input})$$

is evaluated.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$\frac{d^{k}}{d\,\text{input}^{k}} H_{n}(\text{input}) = 2^{k} \times \frac{n!}{(n - k)!} H_{n - k}(\text{input})$$
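
Editor's usage sketch covering this and the companion probabilist's variant above:

```Python
import torch

x = torch.linspace(-2.0, 2.0, 5)
h3 = torch.special.hermite_polynomial_h(x, 3)    # physicist's H_3
he3 = torch.special.hermite_polynomial_he(x, 3)  # probabilist's He_3
```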
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78352
Approved by: https://github.com/mruberry
2022-05-28 02:26:30 +00:00
Allen Goodman
40a6cc6cc6 Chebyshev polynomial of the second kind (#78293)
Adds:

```Python
chebyshev_polynomial_u(input, n, *, out=None) -> Tensor
```

Chebyshev polynomial of the second kind $U_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $2 \times \text{input}$ is returned. If $n < 6$ or $|\text{input}| > 1$, the recursion:

$$U_{n + 1}(\text{input}) = 2 \times \text{input} \times U_{n}(\text{input}) - U_{n - 1}(\text{input})$$

is evaluated. Otherwise, the explicit trigonometric formula:

$$\frac{\text{sin}((n + 1) \times \text{arccos}(\text{input}))}{\text{sin}(\text{arccos}(\text{input}))}$$

is evaluated.

## Derivatives

Recommended first derivative formula with respect to $\text{input}$:

$$\frac{(-1 - n) \times U_{n - 1}(\text{input}) + n \times \text{input} \times U_{n}(\text{input})}{-1 + \text{input}^{2}}.$$

Recommended $k$-derivative formula with respect to $\text{n}$:

$$\frac{\text{arccos}(\text{input})^{k} \times \text{sin}(\frac{k \times \pi}{2} + (1 + n) \times \text{arccos}(\text{input}))}{\sqrt{1 - \text{input}^{2}}}.$$

## Example

```Python
x = torch.linspace(-1.0, 1.0, 256)

matplotlib.pyplot.plot(x, torch.special.chebyshev_polynomial_u(x, 10))
```

![image](https://user-images.githubusercontent.com/315821/170352780-12af63d3-ce31-4948-8b68-8ecc37c71ac5.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78293
Approved by: https://github.com/mruberry
2022-05-27 18:32:11 +00:00
Allen Goodman
029bbe4995 Chebyshev polynomial of the first kind (#78196)
Adds:

```Python
chebyshev_polynomial_t(input, n, *, out=None) -> Tensor
```

Chebyshev polynomial of the first kind $T_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $\text{input}$ is returned. If $n < 6$ or $|\text{input}| > 1$, the recursion:

$$T_{n + 1}(\text{input}) = 2 \times \text{input} \times T_{n}(\text{input}) - T_{n - 1}(\text{input})$$

is evaluated. Otherwise, the explicit trigonometric formula:

$$T_{n}(\text{input}) = \text{cos}(n \times \text{arccos}(\text{input}))$$

is evaluated.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$2^{-1 + k} \times n \times \Gamma(k) \times C_{-k + n}^{k}(\text{input})$$

where $C$ is the Gegenbauer polynomial.

Recommended $k$-derivative formula with respect to $\text{n}$:

$$\text{arccos}(\text{input})^{k} \times \text{cos}(\frac{k \times \pi}{2} + n \times \text{arccos}(\text{input})).$$

## Example

```Python
x = torch.linspace(-1, 1, 256)

matplotlib.pyplot.plot(x, torch.special.chebyshev_polynomial_t(x, 10))
```

![image](https://user-images.githubusercontent.com/315821/170125525-60415735-4d49-4cbd-9278-26286413f635.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78196
Approved by: https://github.com/mruberry
2022-05-26 21:06:44 +00:00
Ayaka Mikazuki
91a4fe0777 [docs] Move a sentence from nn.Transformer to nn.TransformerEncoder (#78337)
`nn.Transformer` cannot be used to implement BERT, while `nn.TransformerEncoder` can, so this PR moves the sentence 'Users can build the BERT model with corresponding parameters.' from `nn.Transformer` to `nn.TransformerEncoder`.

Fixes #68053
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78337
Approved by: https://github.com/jbschlosser
2022-05-26 15:44:38 +00:00
PyTorch MergeBot
d450034f24 Revert "Beta function (#78031)"
This reverts commit da16450360.

Reverted https://github.com/pytorch/pytorch/pull/78031 on behalf of https://github.com/suo due to broke trunk, see the above message
2022-05-24 22:55:06 +00:00
Allen Goodman
da16450360 Beta function (#78031)
Euler beta function:

```Python
torch.special.beta(input, other, *, out=None) -> Tensor
```

`reentrant_gamma` and `reentrant_ln_gamma` implementations (using Stirling’s approximation) are provided. I started working on this before I realized we were missing a gamma implementation (despite providing incomplete gamma implementations). Uses the coefficients computed by Steve Moshier to replicate SciPy’s implementation. Likewise, it mimics SciPy’s behavior (instead of the behavior in Cephes).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78031
Approved by: https://github.com/mruberry
2022-05-24 21:07:25 +00:00
lezcano
7cb7cd5802 Add linalg.lu
This PR modifies `lu_unpack` by:
- Using less memory when unpacking `L` and `U`
- Fusing the subtraction of `-1` with `unpack_pivots_stub`
- Defining tensors of the correct types to avoid copies
- Porting `lu_unpack` to be a structured kernel so that its `_out` version
does not incur extra copies

Then we implement `linalg.lu` as a structured kernel, as we want to
compute its derivative manually. We do so because composing the
derivatives of `torch.lu_factor` and `torch.lu_unpack` would be less efficient.

This new function and `lu_unpack` come with everything they can:
forward and backward AD, decent docs, correctness tests, OpInfo, complex support,
support for meta tensors, and support for vmap and vmap over the gradients.

I really hope we don't continue adding more features.

This PR also avoids saving some of the tensors that were previously
saved unnecessarily for the backward in `lu_factor_ex_backward` and
`lu_backward` and does some other general improvements here and there
to the forward and backward AD formulae of other related functions.
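
Editor's sketch of the new function:

```Python
import torch

A = torch.randn(4, 4)
P, L, U = torch.linalg.lu(A)
torch.testing.assert_close(P @ L @ U, A)  # P is a permutation matrix
```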

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67833

Approved by: https://github.com/IvanYashchuk, https://github.com/nikitaved, https://github.com/mruberry
2022-05-05 09:17:05 +00:00
Richard Howell
3a2fc312be [xplat] add static_cast where missing (#76756)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76756

Add `static_cast` to implicit int -> float conversions.

Test Plan: CI

Reviewed By: yfeldblum

Differential Revision: D35857046

fbshipit-source-id: 0560125fed19e74eff85e22cfab971893515f4dc
(cherry picked from commit 7cd5b2347d0e95938c73e39b20e59e647c74de69)
2022-05-03 22:59:10 +00:00
Nikita Shulga
8473173c36 Remove breakpad dependency
This functionality does not seem to be used,
and there are some requests to update the dependency.

Add `third_party` to torch_cpu include directories if compiling with
Caffe2 support, as `caffe2/quantization/server/conv_dnnlowp_op.cc` depends on `third_party/fbgemm/src/RefImplementations.h`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75394
Approved by: https://github.com/janeyx99, https://github.com/seemethere
2022-05-03 20:21:55 +00:00
Ivan Yashchuk
8bb7203049 Add torch.linalg.ldl_factor_ex and torch.linalg.ldl_solve
This PR adds a function for computing the LDL decomposition and a function that can solve systems of linear equations using this decomposition. The result of `torch.linalg.ldl_factor_ex` is in a compact form and is meant to be used only through `torch.linalg.ldl_solve`. In the future, we could provide an `ldl_unpack` function that transforms the compact representation into explicit matrices.
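
Editor's sketch of the intended pairing:

```Python
import torch

A = torch.randn(4, 4)
A = A + A.mT  # the factorization expects a symmetric (or Hermitian) matrix
B = torch.randn(4, 2)
LD, pivots, info = torch.linalg.ldl_factor_ex(A)
X = torch.linalg.ldl_solve(LD, pivots, B)
```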

Fixes https://github.com/pytorch/pytorch/issues/54847.

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69828
Approved by: https://github.com/Lezcano, https://github.com/mruberry, https://github.com/albanD
2022-04-28 19:23:37 +00:00
PyTorch MergeBot
d79d9fa283 Revert "Remove breakpad dependency"
This reverts commit 9aa3c7fd83.

Reverted https://github.com/pytorch/pytorch/pull/75394 on behalf of https://github.com/malfet
2022-04-17 17:58:51 +00:00
Nikita Shulga
9aa3c7fd83 Remove breakpad dependency
This functionality does not seem to be used,
and there are some requests to update the dependency.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75394
Approved by: https://github.com/janeyx99, https://github.com/seemethere
2022-04-17 17:43:45 +00:00
Yulv-git
ac2d2e3a3d Fix some typos.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75561
Approved by: https://github.com/albanD
2022-04-11 21:55:59 +00:00
Peter Bell
7f051b4d2b Implement F.pad in ATen
This moves the C++ torch pad function into ATen proper. Once the
forward-compatibility period is over, the Python interface can use
this directly.
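
For reference, the Python-level call whose implementation moves here (editor's sketch):

```Python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
# Pad the last dim by (1, 1) and the second-to-last by (2, 2).
y = F.pad(x, (1, 1, 2, 2), mode='constant', value=0.0)
```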

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73431

Approved by: https://github.com/ezyang
2022-04-01 01:10:12 +00:00
Sherlockk Huang
bbf7e159e0 Implement torch.special.log_ndtr

Issue: https://github.com/pytorch/pytorch/issues/50345

TODO:
- [x] adding proper reference to scipy implementation
- [x] double check if the changes in test/test_unary_ufuncs.py are really necessary
- [x] check setting for UnaryUfuncInfo
cc: @kshitij12345 @mruberry
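
Editor's usage sketch:

```Python
import torch

x = torch.tensor([-10.0, 0.0, 3.0])
# log of the standard normal CDF; stable where ndtr(x) underflows.
y = torch.special.log_ndtr(x)
```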
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74795
Approved by: https://github.com/anjali411
2022-03-29 23:13:37 +00:00
Kurt Mohler
5375b2e994 Resolve int[]? arguments to new OptionalIntArrayRef class
This PR uses the `OptionalArrayRef` template class that was drafted in #64084.

Fixes #44409
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70864
Approved by: https://github.com/ezyang
2022-03-26 01:45:50 +00:00
Peter Bell
f86bb2d6e4 Implement _pad_circular in ATen
Closes #44459

This migrates the Python implementation of `_pad_circular` to ATen and
removes the old C++ implementation that had diverged from the Python one.

Note that `pad` can't actually use this until the
forward-compatibility period is over.
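
Editor's sketch of the behavior being migrated:

```Python
import torch
import torch.nn.functional as F

x = torch.arange(9.0).reshape(1, 1, 3, 3)
# Circular padding wraps values around each spatial edge.
y = F.pad(x, (1, 1, 1, 1), mode='circular')
```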

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73410

Approved by: https://github.com/ezyang
2022-03-25 02:09:01 +00:00
Shuitao Fan
05c86c2be1 T112685841: Use irange in PyTorch (#73378)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73378

1) Ran check_for_c10_loops.py to automatically update all files (*.h, *.hpp, *.cpp) under fbcode/caffe2/torch (this is the path in check_for_c10_loops.py, slightly different from the task description, where the path mentioned was fbcode/caffe2; since the current commit already contains 27 files, a separate commit will be used for additional files).

2) Manually reviewed each change, and reverted a few files:
    (a) select_keys.cpp, bucketize_calibration.cpp, index_mmh and TCPStore.cpp: iterator modified in loop
    (b) qlinear_4bit_ops.cpp and id_list_feature_merge_conversion.cpp: condition containing multiple expressions.

Test Plan:
Doing the following (still in progress, will address issues as they appear):
buck build ...
buck test ...

Reviewed By: r-barnes

Differential Revision: D34435473

fbshipit-source-id: b8d3c94768b02cf71ecb24bb58d29ee952f672c2
(cherry picked from commit fa9b0864f3761a501868fe0373204b12fdfc2b32)
2022-02-26 06:34:22 +00:00
Anton Jansson
f43165a75f Remove duplicate call to objective function in strong wolfe line search in L-BFGS optimizer. (#72773)
Summary:
With this change, the optimizer is almost twice as fast as before. As the result of the first call is never used, it looks like a copy-paste error and can therefore be removed. In addition, this duplicate call is not present in the Python implementation.
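
The change is in the C++ optimizer; for orientation, the analogous Python API (editor's sketch):

```Python
import torch

model = torch.nn.Linear(2, 1)
data = torch.randn(8, 2)
opt = torch.optim.LBFGS(model.parameters(), line_search_fn='strong_wolfe')

def closure():
    # The strong-wolfe line search may re-evaluate this closure several times per step.
    opt.zero_grad()
    loss = model(data).pow(2).mean()
    loss.backward()
    return loss

opt.step(closure)
```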

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72773

Reviewed By: samdow

Differential Revision: D34214312

Pulled By: albanD

fbshipit-source-id: 4f4de08633c7236f3ccce8a2a74e56500003281b
(cherry picked from commit 4a63f812ab)
2022-02-15 15:33:13 +00:00
Ryan Spring
4f8b986e28 Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
import math
import torch

def normcdf(x):
    # standard normal CDF via erf
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```
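
Editor's sketch of the corresponding public API:

```Python
import torch
import torch.nn.functional as F

x = torch.randn(4)
y_exact = F.gelu(x)                     # erf-based default ('none')
y_tanh = F.gelu(x, approximate='tanh')  # the new tanh approximation
```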

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: VitalyFedyunin

Differential Revision: D33894937

Pulled By: jbschlosser

fbshipit-source-id: b65e8fb6ea66168af8f34f45ed50e92737a33851
(cherry picked from commit 6e986f91a9)
2022-02-14 03:40:32 +00:00
kshitij12345
02f6226bff [fix] Dropout2d-3d no-batch-dim (#69885)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/69801

TODO:
* [x] Update C++ API
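
Editor's sketch of the no-batch-dim case being fixed (later releases warn about the 3D-input ambiguity):

```Python
import torch

m = torch.nn.Dropout2d(p=0.5)
x = torch.randn(3, 8, 8)  # unbatched (C, H, W) input
y = m(x)
```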

cc albanD mruberry jbschlosser walterddr kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69885

Reviewed By: mruberry

Differential Revision: D33175470

Pulled By: jbschlosser

fbshipit-source-id: c9d7d9e0f59ba290a0157725c338a345f3d58b9f
(cherry picked from commit 7e4271a156)
2022-02-02 16:40:32 +00:00
Nikita Shulga
74c44ba9d6 Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33850228 (23d03025dc)

Original commit changeset: 3cc33fb298e4

Original Phabricator Diff: D33850228 (23d03025dc)

fbshipit-source-id: 9436e7df73c2b2e2011f321674f24973316d3692
(cherry picked from commit c9efb58223)
2022-01-31 17:44:19 +00:00
Ryan Spring
23d03025dc Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
import math
import torch

def normcdf(x):
    # standard normal CDF via erf
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: cpuhrsch

Differential Revision: D33850228

Pulled By: jbschlosser

fbshipit-source-id: 3cc33fb298e480d7ecc5c67716da019d60c6ab33
(cherry picked from commit 3a53b3e94f)
2022-01-31 17:07:45 +00:00
Joel Schlosser
cb823d9f07 Revert D33744717: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33744717 (f499ab9cef)

Original commit changeset: d64532a562ed

Original Phabricator Diff: D33744717 (f499ab9cef)

fbshipit-source-id: 396c3f63de5865f894dbc353d0790a01a624be93
(cherry picked from commit e9fb2d1db1)
2022-01-28 18:35:01 +00:00
Ryan Spring
f499ab9cef Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
import math
import torch

def normcdf(x):
    # standard normal CDF via erf
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: mikaylagawarecki

Differential Revision: D33744717

Pulled By: jbschlosser

fbshipit-source-id: d64532a562ed53247bb4fa52bb16722634d5c187
(cherry picked from commit 4713dd9cca)
2022-01-28 16:59:09 +00:00