Commit Graph

46867 Commits

Edward Z. Yang
789115e05e Don't check for linalg errors on meta tensors
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78467

Approved by: https://github.com/Chillee
2022-05-31 14:18:49 +00:00
Edward Z. Yang
59fdb627a3 Reenable TestMeta native_batch_norm and native_layer_norm
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78466

Approved by: https://github.com/Chillee
2022-05-31 14:18:49 +00:00
Edward Z. Yang
be0629e925 Reenable TestMeta slice
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78465

Approved by: https://github.com/mruberry
2022-05-31 14:18:49 +00:00
Edward Z. Yang
4bbc3e2809 Some helper code for determining missing meta coverage for XLA ops
When I ran it I got this:

```
$ PYTORCH_COMPARE_XLA=/scratch/ezyang/xla/xla_native_functions.yaml python test/test_meta.py
aten.logdet.default
aten._local_scalar_dense.default
aten.cholesky.default
aten.diag.default
aten.empty.memory_format  # SKIP
aten.index.Tensor
aten.kthvalue.default
aten.masked_select.default
aten.max_pool3d_with_indices.default
aten.max_unpool2d.default
aten.max_unpool3d.default
aten.native_batch_norm.default  # SKIP
aten.nll_loss2d_forward.default
aten.nonzero.default
aten.prelu.default
aten.relu.default
aten.roll.default
aten.rrelu_with_noise.default
aten.std_mean.correction
aten.symeig.default
aten.take.default
aten.trace.default
aten.native_layer_norm.default  # SKIP
```
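
For reference, a minimal sketch of how such a comparison could work; this is hypothetical (the YAML key name and the dispatcher helpers used here are assumptions), not the code added by this PR:

```python
import os

import torch
import yaml  # assumes PyYAML is available

# Collect the op names XLA lowers; the "supported" key is an assumption
# about the layout of xla_native_functions.yaml.
with open(os.environ["PYTORCH_COMPARE_XLA"]) as f:
    xla_ops = yaml.safe_load(f).get("supported", [])

# Report XLA ops that have no kernel registered for the Meta dispatch key.
for name in sorted(xla_ops):
    qualified = f"aten::{name}"
    if not torch._C._dispatch_has_kernel(qualified):
        continue  # entry is not a plain aten op name; skip it
    if not torch._C._dispatch_has_kernel_for_dispatch_key(qualified, "Meta"):
        print(f"aten.{name}.default")
```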

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78464

Approved by: https://github.com/albanD
2022-05-31 14:18:49 +00:00
Edward Z. Yang
0865df4b87 Reenable TestMeta testing for isnan
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78463

Approved by: https://github.com/ngimel, https://github.com/mruberry
2022-05-31 14:18:49 +00:00
Edward Z. Yang
1e11fc894c Reenable tensor_split meta testing
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78462

Approved by: https://github.com/mruberry
2022-05-31 14:18:49 +00:00
Edward Z. Yang
b7215de32f prod ref
It turns out the prim is implemented incorrectly, as torch.prod does not accept
a dim list, so I added a little stub for this.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78461

Approved by: https://github.com/ngimel
2022-05-31 14:18:49 +00:00
Edward Z. Yang
e562ed0964 Register PrimTorch sum as a decomposition.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78460

Approved by: https://github.com/ngimel
2022-05-31 14:18:49 +00:00
Edward Z. Yang
5620ebad5f Unconditionally transform dtype arguments to double for upcast
This ensures we can also hit functions like torch.sum that take
a dtype argument, without having to manually list them all.
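
A hedged sketch of the idea (illustrative helper, not the actual test code): when upcasting a sample input to float64, rewrite any explicit floating-point `dtype=` argument as well, so ops like `torch.sum(x, dtype=torch.float32)` are exercised in the upcast run.

```python
import torch

def upcast_kwargs(kwargs):
    """Return kwargs with any floating-point dtype argument upcast to double."""
    out = dict(kwargs)
    dtype = out.get("dtype")
    if isinstance(dtype, torch.dtype) and dtype.is_floating_point:
        out["dtype"] = torch.double
    return out

# e.g. upcast_kwargs({"dtype": torch.float32}) -> {"dtype": torch.float64}
```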

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78459

Approved by: https://github.com/albanD
2022-05-31 14:18:49 +00:00
yanbing-j
f09f6aadb4 Update ideep for ideep conv changes (#78238)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78238
Approved by: https://github.com/dagitses
2022-05-31 14:18:37 +00:00
Nikita Shulga
8f7e3791ef Make PyTorch importable on python-3.7.0 (#78500)
By stringifying "typing.OrderedDict", as [`typing.OrderedDict`](https://docs.python.org/3.10/library/typing.html#typing.OrderedDict) were introduced by Python-3.7.2+

See similar fix in 21a82fb519

Partially addresses https://github.com/pytorch/pytorch/issues/78499
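
The pattern in miniature (illustrative, not the exact PyTorch code): quoting the annotation defers its evaluation, so importing the module no longer touches `typing.OrderedDict`.

```python
import typing

# On Python 3.7.0/3.7.1, evaluating typing.OrderedDict at import time raises
# AttributeError. As a string, the annotation is never evaluated on import:
def f(d: "typing.OrderedDict[str, int]") -> None:
    pass
```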

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78500
Approved by: https://github.com/atalman
2022-05-31 06:11:30 +00:00
Jason Ansel
dabf8f0569 Populate the torch._decomp table on import (#78476)
#78041 broke TorchInductor, because of:
```
>>> from torch import _decomp
>>> import torch
>>> _decomp.get_decompositions([torch.ops.aten.leaky_relu])
{}
>>> import torch._refs.nn.functional
>>> _decomp.get_decompositions([torch.ops.aten.leaky_relu])
{<OpOverload(op='aten.leaky_relu', overload='default')>: <function leaky_relu at 0x7f5a39b56c10>, <OpOverload(op='aten.leaky_relu', overload='out')>: <function leaky_relu at 0x7f5a39b56c10>}
```
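
After this change, a plain `import torch` suffices; a hedged usage check mirroring the repro above:

```python
import torch
from torch import _decomp

# No explicit `import torch._refs.nn.functional` is needed anymore.
decomps = _decomp.get_decompositions([torch.ops.aten.leaky_relu])
assert decomps, "decomposition table should be populated on import"
```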

cc @Chillee

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78476
Approved by: https://github.com/Chillee
2022-05-31 03:46:38 +00:00
Shawn Zhong
a468941355 Fix jiterator doc format (#78471)
Current docs do not show the code example properly:
https://pytorch.org/docs/master/generated/torch.cuda.jiterator._create_jit_fn.html
https://pytorch.org/docs/master/generated/torch.cuda.jiterator._create_multi_output_jit_fn.html

This PR fixes the formatting issue:
https://docs-preview.pytorch.org/78471/generated/torch.cuda.jiterator._create_jit_fn.html
https://docs-preview.pytorch.org/78471/generated/torch.cuda.jiterator._create_multi_output_jit_fn.html
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78471
Approved by: https://github.com/ngimel
2022-05-31 03:44:52 +00:00
Kulin Seth
017b0ae943 MPS: Fix crashes in view tensors due to buffer size mismatch (#78496)
Fixes #78247, #77886

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78496
Approved by: https://github.com/albanD, https://github.com/malfet
2022-05-31 02:09:03 +00:00
Bairen Yi
b6672b10e1 Fix incorrect decomposition for native_dropout (#77933)
Quick sanity check: it should be the identity function if p=0.
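
That sanity check as runnable code (hedged: calls the aten op directly):

```python
import torch

x = torch.randn(4, 4)
# With p=0 every element is kept, so dropout must be the identity.
out, mask = torch.ops.aten.native_dropout(x, 0.0, True)
assert torch.equal(out, x) and bool(mask.all())
```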
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77933
Approved by: https://github.com/Chillee
2022-05-30 20:08:48 +00:00
Yukio Siraichi
3f334f0dfd Fix asarray documentation formatting (#78485)
Fixes #78290

Here's a screenshot of the modified doc:
![asarray](https://user-images.githubusercontent.com/3337141/170971723-aafe20a9-8e51-420f-ae98-67dc2df579a2.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78485
Approved by: https://github.com/ngimel
2022-05-30 19:28:10 +00:00
Wang, Eikan
11b9a81e02 [NNC] channels last propagation within NNC fusion group (#76948)
Decide the memory layout propagation policy and propagate it within the NNC fusion group. The memory layout propagation policy could be `Contiguous` and `Channels-last contiguous`.
 - `Contiguous`: convert non-contiguous input tensors (including channels-last contiguous ones) to contiguous and generate a contiguous output `Buf` for the lowering function.
 - `Channels-last contiguous`: convert the input tensors to channels-last contiguous and generate a channels-last contiguous output `Buf` for the lowering function.

Currently, the rule is simple: if all the input and output tensors of the NNC fusion group are channels-last contiguous, then the propagated memory layout is `Channels-last contiguous`; otherwise, it is always `Contiguous`, which matches the current behavior. This means the PR provides a fast path to channels-last, and the optimization is conservative since its trigger conditions are strict.
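
A hedged Python restatement of that rule (the real logic lives in NNC's C++ kernel compiler; 4-d tensors assumed):

```python
import torch

def propagated_memory_format(tensors):
    # Channels-last is propagated only when every input/output qualifies;
    # otherwise fall back to plain contiguous, as before this PR.
    if all(t.is_contiguous(memory_format=torch.channels_last) for t in tensors):
        return torch.channels_last
    return torch.contiguous_format
```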
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76948
Approved by: https://github.com/ZolotukhinM
2022-05-30 18:31:49 +00:00
Andrew Or
c7b4eec233 [Quant][fx][bc-breaking] Replace qconfig_dict with a config object (#78452)
**Summary:** Previously, FX graph mode quantization configurations
were specified through a dictionary of qconfigs. However, this
API was not in line with other core APIs in PyTorch. This commit
replaces this dictionary with a config object that users will
create and pass to prepare and convert. This leads to better
type safety and better user experience in notebook settings
due to improved auto completion.

The new API is as follows:

```
import torch
from torch.ao.quantization import QConfigMapping
from torch.ao.quantization.quantize_fx import prepare_fx

qconfig_mapping = (
    QConfigMapping()
    .set_global(qconfig)
    .set_object_type(torch.nn.Linear, qconfig)
    .set_module_name_regex("foo.*bar", qconfig)
    .set_module_name("mod", qconfig)
)

prepare_fx(model, qconfig_mapping)
```

For backwards compatibility, `prepare_fx`, `prepare_qat_fx`,
and `convert_fx` will continue to accept qconfig_dicts, which
will be converted to QConfigMappings internally.
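
For comparison, the equivalent legacy qconfig_dict would look roughly like this (a sketch using the pre-existing qconfig_dict keys):

```python
qconfig_dict = {
    "": qconfig,                                   # global
    "object_type": [(torch.nn.Linear, qconfig)],
    "module_name_regex": [("foo.*bar", qconfig)],
    "module_name": [("mod", qconfig)],
}
```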

Note that this commit does not modify existing tests to use the
new API; they will continue to pass in qconfig_dict as before,
which still works but triggers a deprecation warning. This will
be handled in a future commit.

**Test Plan:**
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

**Reviewers:** jerryzh168, vkuzo

**Subscribers:** jerryzh168, vkuzo

Differential Revision: D36747998

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78452
Approved by: https://github.com/jerryzh168
2022-05-30 18:30:07 +00:00
Alban Desmaison
bde246fcc6 Speed up test_mps from 9min to 25s
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78488

Approved by: https://github.com/kulinseth
2022-05-30 18:16:53 +00:00
Alban Desmaison
02551a0025 Remove prints and add proper asserts
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78454

Approved by: https://github.com/kulinseth
2022-05-30 18:16:53 +00:00
Nikita Shulga
472d67a727 Revert "move XNNPACK buck build to shared build file (#77941)"
This reverts commit b8b46f932b.

This change was reverted internally but has not yet been propagated to
either the `master` or `fbsync` branch.
2022-05-30 10:59:51 -07:00
Allen Goodman
64e0d0c4fe Laguerre polynomial (#78366)
Adds:

```Python
laguerre_polynomial_l(input, n, *, out=None) -> Tensor
```

Laguerre polynomial $L_{n}(\text{input})$.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$\frac{d^{k}}{d \, \text{input}^{k}} L_{n}(\text{input}) = (-1)^{k} \times L_{n - k}^{k}(\text{input})$$

where $L_{n}^{\alpha}$ is the associated Laguerre polynomial.
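
A usage sketch in the style of the sibling polynomial PRs (assumes the op lands under `torch.special`):

```python
import torch

# Evaluate L_5 on a grid; Laguerre polynomials are usually plotted on [0, inf).
x = torch.linspace(0.0, 10.0, 256)
y = torch.special.laguerre_polynomial_l(x, 5)
```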
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78366
Approved by: https://github.com/mruberry
2022-05-30 17:24:00 +00:00
Mike Ruberry
089203f8bc Updates floor_divide to perform floor division (#78411)
Fixes https://github.com/pytorch/pytorch/issues/43874

This PR changes floor_divide to perform floor division instead of truncation division.

This is a BC-breaking change, but it's a "bug fix," and we've already warned users for several releases that this behavior would change.
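
The change in behavior, illustrated:

```python
import torch

a, b = torch.tensor(-7.0), torch.tensor(2.0)
print(torch.floor_divide(a, b))                # -4.0: floor (new behavior)
print(torch.div(a, b, rounding_mode="trunc"))  # -3.0: truncation (old behavior)
```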
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78411
Approved by: https://github.com/ngimel
2022-05-29 21:28:45 +00:00
Jagadish Krishnamoorthy
3ee863cb7c [ROCm] enable test_lobpcg_ortho_cuda_float64 (#78385)
Signed-off-by: Jagadish Krishnamoorthy <jagdish.krishna@gmail.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78385
Approved by: https://github.com/Lezcano, https://github.com/pruthvistony
2022-05-28 22:46:23 +00:00
Kurt Mohler
e9afb43676 Add meta device support to _UntypedStorage and _TypedStorage (#78008)
Fixes #77885
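
A hedged illustration of what this enables, written against the current public storage API rather than the private `_TypedStorage`/`_UntypedStorage` classes named in the title:

```python
import torch

# Requesting the storage of a meta tensor no longer fails.
s = torch.empty(4, device="meta").untyped_storage()
print(s.device)  # meta
```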

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78008
Approved by: https://github.com/ezyang
2022-05-28 15:33:45 +00:00
Kulin Seth
d63db52349 MPS: Fixes the as_strided_mps implementation for contiguous view operations (#78440)
Fixes https://github.com/pytorch/pytorch/issues/78107; https://github.com/pytorch/pytorch/issues/77750

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78440
Approved by: https://github.com/malfet
2022-05-28 14:41:56 +00:00
Allen Goodman
9dc6d42c18 Probabilist’s Hermite polynomial (#78357)
Adds:

```Python
hermite_polynomial_he(input, n, *, out=None) -> Tensor
```
Probabilist’s Hermite polynomial $He_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $\text{input}$ is returned. Otherwise, the recursion:

$$He_{n + 1}(\text{input}) = \text{input} \times He_{n}(\text{input}) - n \times He_{n - 1}(\text{input})$$

is evaluated.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$\frac{d^{k}}{d \, \text{input}^{k}} He_{n}(\text{input}) = \frac{n!}{(n - k)!} He_{n - k}(\text{input}).$$
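
A hedged numeric check of the recursion above (assumes the `torch.special` op):

```python
import torch

x = torch.linspace(-3.0, 3.0, 8)
# He_2(input) = input^2 - 1 follows from the recursion with He_0 = 1, He_1 = input.
torch.testing.assert_close(torch.special.hermite_polynomial_he(x, 2), x * x - 1.0)
```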
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78357
Approved by: https://github.com/mruberry
2022-05-28 13:56:12 +00:00
Linbin Yu
b8b46f932b move XNNPACK buck build to shared build file (#77941)
Summary:
This diff moved the XNNPACK buck build to a shared build file in xplat/caffe2/third_party, so it can be reused by OSS buck CI in the future. There's no functionality change.

**Background**: as we are moving to GitHub-first, we want the community to receive more signals from our internal build. XNNPACK is part of the PyTorch mobile build, so we want to add it to the OSS Buck CI.

**How it works**: all XNNPACK targets are defined in xplat/caffe2/third_party/xnnpack_defs.bzl. When we build it internally, the XNNPACK source is still at xplat/third-party/XNNPACK and we will load that bzl file in xplat/third-party/XNNPACK/BUCK. Everything should work as before. In OSS build, XNNPACK is a submodule in xplat/caffe2/third_party and we will load the same bzl file in pytorch/third_party/BUILD.buck.

**Wrapper Generation**: the wrapper generation script is moved to xplat/caffe2/third_party/generate-xnnpack-wrappers.py. It will take an optional argument for the path of XNNPACK (they are different in internal build and OSS build). The wrapper files will always be generated at the parent folder of XNNPACK source. But the src_defs.bzl and wrapper_defs.bzl will always be in xplat/caffe2/third_party/ (they are now called xnnpack_src_defs.bzl and xnnpack_wrapper_defs.bzl). For OSS build this script will only be used in CI, and the generated files will not be committed.

**Next Steps:** Once landed, I will try to build XNNPACK in OSS BUCK using xnnpack_defs.bzl. Meta-specific symbols need to be resolved, so there will be some refactors to the build file.

Test Plan: buck build xplat/third-party/XNNPACK:XNNPACK

Differential Revision: D36529332

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77941
Approved by: https://github.com/malfet, https://github.com/seemethere
2022-05-28 04:39:37 +00:00
Allen Goodman
18273c39da Physicist’s Hermite polynomial (#78352)
Adds:

```Python
hermite_polynomial_h(input, n, *, out=None) -> Tensor
```
Physicist’s Hermite polynomial $H_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $2 \times \text{input}$ is returned. Otherwise, the recursion:

$$H_{n + 1}(\text{input}) = 2 \times \text{input} \times H_{n}(\text{input}) - 2 \times n \times H_{n - 1}(\text{input})$$

is evaluated.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$\frac{d^{k}}{d \, \text{input}^{k}} H_{n}(\text{input}) = 2^{k} \times \frac{n!}{(n - k)!} H_{n - k}(\text{input})$$
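
A hedged numeric check of the recursion above (assumes the `torch.special` op):

```python
import torch

x = torch.linspace(-3.0, 3.0, 8)
# H_2(input) = 4 * input^2 - 2 follows from the recursion with H_0 = 1, H_1 = 2 * input.
torch.testing.assert_close(torch.special.hermite_polynomial_h(x, 2), 4.0 * x * x - 2.0)
```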
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78352
Approved by: https://github.com/mruberry
2022-05-28 02:26:30 +00:00
Ryan Spring
2df1da09e1 Add Elementwise unary ops 4 references (#78216)
Add reference implementations for `nan_to_num, positive, sigmoid, signbit, tanhshrink`
Add prims for `minimum_value(dtype)` and `maximum_value(dtype)`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78216
Approved by: https://github.com/mruberry
2022-05-27 21:55:34 +00:00
Nikita Shulga
437ecfc461 [MPS] Fix copy_kernel_mps (#78428)
By passing the `storage_offset` of the source and destination Tensors.
This fixes the following simple use case:
```
python3 -c "import torch; x=torch.zeros(3, 3, device='mps'); x[1, 1]=1; print(x)"
```

Adds a test to validate it will not regress in the future.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78428
Approved by: https://github.com/kulinseth
2022-05-27 20:46:53 +00:00
Allen Goodman
40a6cc6cc6 Chebyshev polynomial of the second kind (#78293)
Adds:

```Python
chebyshev_polynomial_u(input, n, *, out=None) -> Tensor
```

Chebyshev polynomial of the second kind $U_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $2 \times \text{input}$ is returned. If $n < 6$ or $|\text{input}| > 1$, the recursion:

$$U_{n + 1}(\text{input}) = 2 \times \text{input} \times U_{n}(\text{input}) - U_{n - 1}(\text{input})$$

is evaluated. Otherwise, the explicit trigonometric formula:

$$\frac{\text{sin}((n + 1) \times \text{arccos}(\text{input}))}{\text{sin}(\text{arccos}(\text{input}))}$$

is evaluated.

## Derivatives

Recommended first derivative formula with respect to $\text{input}$:

$$\frac{(-1 - n) \times U_{n - 1}(\text{input}) + n \times \text{input} \times U_{n}(\text{input})}{\text{input}^{2} - 1}.$$

Recommended $k$-derivative formula with respect to $\text{n}$:

$$\frac{\text{arccos}(\text{input})^{k} \times \text{sin}(\frac{k \times \pi}{2} + (1 + n) \times \text{arccos}(\text{input}))}{\sqrt{1 - \text{input}^{2}}}.$$

## Example

```Python
x = torch.linspace(-1.0, 1.0, 256)

matplotlib.pyplot.plot(x, torch.special.chebyshev_polynomial_u(x, 10))
```

![image](https://user-images.githubusercontent.com/315821/170352780-12af63d3-ce31-4948-8b68-8ecc37c71ac5.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78293
Approved by: https://github.com/mruberry
2022-05-27 18:32:11 +00:00
Aidyn-A
4963d41f9d Add logsumexp to AMP autocast (#76330)
Add `logsumexp` function to AMP rules.

This PR fixes an issue described in [PyTorch forum](https://discuss.pytorch.org/t/kl-divergence-negative-with-amp/149312).
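
A hedged illustration of the effect (CUDA device assumed): under autocast, `logsumexp` now runs in float32, since it is numerically sensitive in half precision.

```python
import torch

x = torch.randn(8, 8, device="cuda", dtype=torch.half)
with torch.autocast("cuda"):
    y = torch.logsumexp(x, dim=1)
print(y.dtype)  # torch.float32 after this change
```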

cc @ptrblck @mcarilli
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76330
Approved by: https://github.com/mcarilli, https://github.com/ptrblck, https://github.com/ngimel
2022-05-27 17:26:20 +00:00
jjsjann123
1a9a1b8b5e fixing typo (#78417)
PrimTorch `prod` is mistakenly using `_sum_doc`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78417
Approved by: https://github.com/malfet
2022-05-27 17:10:15 +00:00
Kulin Seth
8552acbd74 MPS: Eye op (#78408)
This can be used as a reference PR for how to add an op to the MPS backend.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78408
Approved by: https://github.com/albanD
2022-05-27 17:07:02 +00:00
Nikita Shulga
aefb4c9fba [mps] Do not use malloc/free in Indexing.mm (#78409)
Allocating just two int64 values on the heap is especially wasteful (and they are leaked if the function returns early).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78409
Approved by: https://github.com/seemethere, https://github.com/kulinseth
2022-05-27 14:16:18 +00:00
Nikita Shulga
ee86af17fb [CI]Preserve .ninja_log for Mac builds (#78387)
And `compile_commands.json`, which would be very useful for debugging.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78387
Approved by: https://github.com/suo
2022-05-27 13:58:54 +00:00
Pearu Peterson
8c88a55d44 Fix sparse BSR tensor validation.
Also adds bits to support dense dimensions for Sparse Compressed tensors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78359

Approved by: https://github.com/cpuhrsch
2022-05-27 13:26:35 +00:00
Kulin Seth
2e32d5fcd8 MPS: Add adaptive max pool2d op (#78410)
Adaptive max pool 2d forward and backward with test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78410
Approved by: https://github.com/albanD
2022-05-27 11:59:07 +00:00
Brian Hirsh
8ad305f375 default argument handling for mobile unboxing codegen
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78353

Differential Revision: [D36702695](https://our.internmc.facebook.com/intern/diff/D36702695/)

Approved by: https://github.com/priyaramani, https://github.com/larryliu0820
2022-05-27 06:41:23 +00:00
Jerry Zhang
85f308275e [fx2trt] Fix dummy weight initialization in conv1d converter (#78402)
Summary:
As titled; currently it errors out with the following error:
```
---> 72         dummy_weight = trt.Weights(weight_shape)
     73         layer = network.add_convolution_nd(
     74             input=input_val,
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. tensorrt.tensorrt.Weights(type: tensorrt.tensorrt.DataType = <DataType.FLOAT: 0>)
    2. tensorrt.tensorrt.Weights(a: numpy.ndarray)
```
full error: https://www.internalfb.com/phabricator/paste/view/P503598381
We need to pass around a numpy ndarray instead of a shape here (see the sketch after the summary).

This PR also adds conv1d support in backend_config_dict for TensorRT.
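
A sketch of that fix (hedged; the shape value is illustrative):

```python
import numpy as np
import tensorrt as trt

weight_shape = (16, 3, 3)  # hypothetical conv1d weight shape
# trt.Weights accepts a numpy ndarray (or a DataType), not a shape tuple.
dummy_weight = trt.Weights(np.zeros(weight_shape, dtype=np.float32))
```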

Test Plan:
```
buck test mode/opt deeplearning/trt/fx2trt_oss/test/converters:test_convolution
```

```
buck test mode/opt deeplearning/trt/fx2trt_oss/test/quant:test_quant_trt
```

Differential Revision: D36721313

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78402
Approved by: https://github.com/842974287
2022-05-27 04:48:45 +00:00
Justin Chu
299fbbccec [ONNX] Fix check_training_mode in symbolic_helper (#78376)
`check_training_mode` always warned that an op is set to training because it was comparing an int `op_train_mode` with an Enum `GLOBALS.training_mode`. This PR fixes the behavior.
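
The bug pattern in miniature (illustrative, not the actual ONNX exporter code):

```python
from enum import Enum

class TrainingMode(Enum):
    EVAL = 0
    TRAINING = 1

op_train_mode = 1
print(op_train_mode == TrainingMode.TRAINING)        # False: int never equals an Enum member
print(op_train_mode == TrainingMode.TRAINING.value)  # True: compare against the value
```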
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78376
Approved by: https://github.com/garymm
2022-05-27 00:38:16 +00:00
Nikita Shulga
dfd78bf4ab Generate CUDAConfig.h only for CUDA builds (#78218)
This should prevent failures like https://github.com/pytorch/pytorch/pull/77002 from sneaking in as CUDAConfig.h would no longer be available for cpu builds.
A note from 2018 about MIOpen builds does not seem relevant, though CUDAConfig.h is still needed by ROCm (tested in https://github.com/pytorch/pytorch/runs/6613660811?check_suite_focus=true build).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78218
Approved by: https://github.com/seemethere, https://github.com/atalman
2022-05-26 23:46:17 +00:00
Max Podkorytov
2679755bdc [static-runtime] out variant for aten::max (#78271)
Summary: Previously the op was auto-generated, but it only covered the pointwise overload of aten::max. This adds support for the reduction overloads, both overall and along a dim.

Test Plan: Added a unit test

Differential Revision: D36656378

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78271
Approved by: https://github.com/mikeiovine
2022-05-26 23:29:27 +00:00
Jane Xu
ac031e1326 [GH1] Switch trymerge concurrency to be PR-based (#78296)
Fixes https://github.com/pytorch/pytorch/issues/78176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78296
Approved by: https://github.com/malfet
2022-05-26 22:23:46 +00:00
Sergii Dymchenko
26852d6fe1 Remove mentions of py3.5-3.6 (#78318)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78318
Approved by: https://github.com/malfet
2022-05-26 22:17:13 +00:00
yuguo68
6ee072a324 fix missing dim out of range check for logcumsumexp_cuda with empty source tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78284

Approved by: https://github.com/ngimel
2022-05-26 22:05:59 +00:00
Allen Goodman
029bbe4995 Chebyshev polynomial of the first kind (#78196)
Adds:

```Python
chebyshev_polynomial_t(input, n, *, out=None) -> Tensor
```

Chebyshev polynomial of the first kind $T_{n}(\text{input})$.

If $n = 0$, $1$ is returned. If $n = 1$, $\text{input}$ is returned. If $n < 6$ or $|\text{input}| > 1$, the recursion:

$$T_{n + 1}(\text{input}) = 2 \times \text{input} \times T_{n}(\text{input}) - T_{n - 1}(\text{input})$$

is evaluated. Otherwise, the explicit trigonometric formula:

$$T_{n}(\text{input}) = \text{cos}(n \times \text{arccos}(\text{input}))$$

is evaluated.

## Derivatives

Recommended $k$-derivative formula with respect to $\text{input}$:

$$2^{k - 1} \times n \times \Gamma(k) \times C_{n - k}^{k}(\text{input})$$

where $C$ is the Gegenbauer polynomial.

Recommended $k$-derivative formula with respect to $\text{n}$:

$$\text{arccos}(\text{input})^{k} \times \text{cos}(\frac{k \times \pi}{2} + n \times \text{arccos}(\text{input})).$$

## Example

```Python
x = torch.linspace(-1, 1, 256)

matplotlib.pyplot.plot(x, torch.special.chebyshev_polynomial_t(x, 10))
```

![image](https://user-images.githubusercontent.com/315821/170125525-60415735-4d49-4cbd-9278-26286413f635.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78196
Approved by: https://github.com/mruberry
2022-05-26 21:06:44 +00:00
kshitij12345
8bd8f62812 [primTorch] refs: margin_ranking_loss, hinge_embedding_loss (#78057)
Refs for `nn.functional.margin_ranking_loss` and `nn.functional.hinge_embedding_loss`.
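
For reference, the semantics the margin_ranking_loss ref must match; a hedged restatement of `torch.nn.functional.margin_ranking_loss` with the default 'mean' reduction, not the new ref code itself:

```python
import torch

def margin_ranking_loss(input1, input2, target, margin=0.0):
    # loss = max(0, -target * (input1 - input2) + margin), averaged
    return torch.clamp_min(-target * (input1 - input2) + margin, 0).mean()
```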
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78057
Approved by: https://github.com/mruberry
2022-05-26 21:01:57 +00:00
Kshiteej K
4e1f41f66a [docs][nn] conv: complex support note (#78351)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78351
Approved by: https://github.com/anjali411, https://github.com/jbschlosser
2022-05-26 20:33:36 +00:00