Commit Graph

47378 Commits

Author SHA1 Message Date
Brian Hirsh
7b3a0ff87a Port index.Tensor to structured kernels.
Tracking issue: #55070

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69607

Approved by: https://github.com/bdhirsh
2022-06-10 17:27:47 +00:00
George Qi
a90f006fe5 add strides to slow path
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78610

Approved by: https://github.com/ezyang
2022-06-10 16:59:14 +00:00
Linbin Yu
1d7627955b Add instructions for iOS test (#79100)
as title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79100
Approved by: https://github.com/atalman
2022-06-10 16:31:30 +00:00
Jane Xu
cde0cefa1c [CI] Remove broken upload binary size step (#79282)
Looking at our logs, I noticed that this step has been failing for I don't know how long. If it's gone unnoticed and no one has really cared to look at these stats, we should just stop reporting.

Failing regular build size upload: https://github.com/pytorch/pytorch/runs/6833171493?check_suite_focus=true
Failing android build size upload: https://github.com/pytorch/pytorch/runs/6832343869?check_suite_focus=true
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79282
Approved by: https://github.com/suo, https://github.com/malfet
2022-06-10 16:00:26 +00:00
Michael Suo
eaaa34daef [ci] write test suites to rockset
Currently we upload all `testcase` elements as individual test runs to
Rockset. It would be nice to also have `testsuite`s as well, which
aggregate high level information.

These aggregations could technically be performed in the backend, but it's
faster to just log the data since we already have it in the XML test
report.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79265

Approved by: https://github.com/seemethere
2022-06-10 15:38:09 +00:00
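The suite-level roll-up described above can be sketched with the standard library; `suite_stats` is an illustrative name, not the PR's actual code.

```python
# Illustrative sketch (hypothetical helper, not the PR's code) of rolling
# `testcase` elements up into per-`testsuite` aggregates from a JUnit-style
# XML report -- the high-level numbers the commit describes uploading.
import xml.etree.ElementTree as ET

def suite_stats(xml_text):
    stats = []
    for suite in ET.fromstring(xml_text).iter("testsuite"):
        cases = suite.findall("testcase")
        stats.append({
            "name": suite.get("name"),
            "tests": len(cases),
            "failures": sum(1 for c in cases if c.find("failure") is not None),
            "time": sum(float(c.get("time", 0.0)) for c in cases),
        })
    return stats
```

Since the XML test report already carries this data, aggregating at upload time avoids a backend query later.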
Michael Suo
cec251fc4b [lint] Don't invoke lintrunner with --verbose
It's been running for a while and is stable, so we don't need debugging
logging anymore. This should reduce noise for people.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79216

Approved by: https://github.com/seemethere
2022-06-10 15:38:09 +00:00
Michael Suo
b26c5b4638 [ci] refactor upload_test_stats + add unit test
Clean some things up and add a unit test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79210

Approved by: https://github.com/janeyx99
2022-06-10 15:38:09 +00:00
Michael Suo
0117fb7600 [ci] remove IS_GHA env var
This is unnecessary, GitHub automatically populates a `GITHUB_ACTION`
env var:
https://docs.github.com/en/actions/learn-github-actions/environment-variables#default-environment-variables

For docker, this env var is automatically propagated through our use of `--env-file`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79219

Approved by: https://github.com/seemethere
2022-06-10 15:29:20 +00:00
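The replacement check is a one-liner; note that the documented boolean flag is `GITHUB_ACTIONS` (set to `"true"` inside a workflow run), while `GITHUB_ACTION` holds the current step's action id. A minimal sketch:

```python
import os

def in_github_actions(env=None):
    # GITHUB_ACTIONS == "true" is the documented way to detect a workflow run,
    # so a hand-maintained IS_GHA variable is redundant.
    env = os.environ if env is None else env
    return env.get("GITHUB_ACTIONS") == "true"
```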
zengk95
ebc936d608 [mergebot] Default on green (#79242)
Another attempt at landing this
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79242
Approved by: https://github.com/janeyx99
2022-06-10 14:26:31 +00:00
kshitij12345
adaecb2cbb [chalf] index_select: cpu support (#79217)
Fixes https://github.com/pytorch/pytorch/issues/79204

PR https://github.com/pytorch/pytorch/pull/78173 took care of adding CUDA support.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79217
Approved by: https://github.com/mruberry
2022-06-10 14:06:32 +00:00
Peter Bell
7843a5e882 Move Tensor.grad back into C++
`Tensor.grad` was moved to python in #30531 to add a warning. However,
that warning has since been lowered into C++ so this wrapper is no
longer necessary.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76675

Approved by: https://github.com/albanD
2022-06-10 13:44:45 +00:00
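The pattern being removed — a Python property wrapping a C++-backed attribute just to emit a warning — looks roughly like this hypothetical sketch (not PyTorch source):

```python
import warnings

class TensorLike:
    """Hypothetical stand-in for a Tensor whose storage lives in C++."""
    def __init__(self, is_leaf=False):
        self.is_leaf = is_leaf
        self._grad = None  # stand-in for the C++-side grad slot

    @property
    def grad(self):
        # The warning the wrapper existed for; once it moved into C++,
        # this Python layer became pure overhead.
        if not self.is_leaf:
            warnings.warn("The .grad attribute of a non-leaf Tensor is being "
                          "accessed; it will not be populated during backward.")
        return self._grad
```

Dropping the wrapper lets attribute access go straight to the C++ binding.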
Tongzhou Wang
dd620c4575 add type annotation to distributions.kl_divergence (#78432)
Fixes #78431

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78432
Approved by: https://github.com/fritzo, https://github.com/ejguan
2022-06-10 13:39:20 +00:00
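`torch.distributions` dispatches KL computations through a registry keyed on the pair of distribution types, and the commit annotates the public entry point. A self-contained sketch of that pattern (simplified; everything here is a stand-in except the names `kl_divergence` and `register_kl`):

```python
import math

_KL_REGISTRY = {}

def register_kl(p_cls, q_cls):
    """Register a KL implementation for a pair of distribution types."""
    def decorator(fn):
        _KL_REGISTRY[(p_cls, q_cls)] = fn
        return fn
    return decorator

class Normal:
    def __init__(self, loc, scale):
        self.loc, self.scale = loc, scale

@register_kl(Normal, Normal)
def _kl_normal_normal(p, q):
    # Closed-form KL(N(p) || N(q)).
    var_ratio = (p.scale / q.scale) ** 2
    t1 = ((p.loc - q.loc) / q.scale) ** 2
    return 0.5 * (var_ratio + t1 - 1 - math.log(var_ratio))

def kl_divergence(p, q) -> float:  # in PyTorch the annotation is -> torch.Tensor
    return _KL_REGISTRY[(type(p), type(q))](p, q)
```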
Kulin Seth
77b6885a22 MPS: add layer_norm_backward (#79189)
Layernorm backward

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79189
Approved by: https://github.com/razarmehr, https://github.com/albanD
2022-06-10 13:25:41 +00:00
Kulin Seth
83239351c5 MPS: add exponential op (#79188)
Add exponential distribution

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79188
Approved by: https://github.com/razarmehr, https://github.com/albanD
2022-06-10 13:16:21 +00:00
PyTorch MergeBot
c99ea0db46 Revert "[PyTorch] Record Sequence Number to Match Forward and Backward Operators (#78795)"
This reverts commit a299a2fa26.

Reverted https://github.com/pytorch/pytorch/pull/78795 on behalf of https://github.com/janeyx99 due to Broke profiler tests a299a2fa26
2022-06-10 13:11:44 +00:00
PyTorch MergeBot
d2200e38f7 Revert "fix _unsafe_view schema to work with functionalization"
This reverts commit 46234df5f1.

Reverted https://github.com/pytorch/pytorch/pull/79148 on behalf of https://github.com/janeyx99 due to Broke 11.3 tests on trunk and on PR, see 46234df5f1
2022-06-10 13:09:00 +00:00
Michael Andreas Dagitses
f96d96a7fc turn on -Werror=type-limits in our Bazel CPU build
Summary:
We also fix any existing issues.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79139

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-10 10:04:08 +00:00
vspenubarthi
28c541776c [ao] Added fx model report per_channel detector
Summary: This code is meant to be a tool to help people get the most out
of their backend by suggesting per_channel quantization when it's
supported, which can significantly increase accuracy. The code is
complete and ready to be reviewed.

Test Plan: test/quantization/fx/test_model_report_fx.py

Reviewers:

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79104

Approved by: https://github.com/HDCharles
2022-06-10 08:09:59 +00:00
pritam
b9e3d722c4 Use appropriate dtype for sharded linear implementation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79255

We use several collective operations in our sharded linear
implementation and for many collectives, we do not set the `dtype` of the
output tensor appropriately. As a result, using a datatype like torch.float16
(which is not the default torch.float32) results in errors.

Fixing this across the board and adding appropriate tests.

Differential Revision: [D37059752](https://our.internmc.facebook.com/intern/diff/D37059752/)

Approved by: https://github.com/fduwjj, https://github.com/wanchaol
2022-06-10 07:32:15 +00:00
Louis Feng
a299a2fa26 [PyTorch] Record Sequence Number to Match Forward and Backward Operators (#78795)
Summary: Add sequence number to map forward and backward operators.

Test Plan:
```
buck build mode/dev-nosan cea/ml_perf_model/gpu/scripts: --show-output
buck-out/gen/caffe2/test/profiler#binary.par test_profiler.TestExecutionGraph.test_execution_graph_start_stop
```

Outputs with seq_id: P505545974

Differential Revision: D36881999

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78795
Approved by: https://github.com/robieta
2022-06-10 05:51:17 +00:00
Tran Le
43ad2833b9 [pytorch][papaya] Replace parseObject with dummy function in FlatBufferLoader when loading parameters (#79167)
Summary: As titled.

Test Plan:
```
buck test //xplat/caffe2:test_lite_trainer_pickle_and_flatbuffer //xplat/caffe2:test_lite_trainer
```

Differential Revision: D37021858

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79167
Approved by: https://github.com/qihqi
2022-06-10 05:41:25 +00:00
Kshiteej K
d837443a6f [fix] composite compliance: matrix_rank (#78968)
Ref: https://github.com/pytorch/pytorch/issues/69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78968
Approved by: https://github.com/zou3519
2022-06-10 05:41:19 +00:00
PyTorch MergeBot
fefff54cad Revert "Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads"""
This reverts commit a2d2981e8e.

Reverted https://github.com/pytorch/pytorch/pull/79224 on behalf of https://github.com/suo due to broke lots of things a2d2981e8e
2022-06-10 04:40:43 +00:00
Horace He
a2d2981e8e Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads""
This reverts commit d67309aefb.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79224

Approved by: https://github.com/mruberry
2022-06-10 03:07:14 +00:00
PyTorch MergeBot
87a5ecced2 Revert "Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CUDA)"
This reverts commit 40f7ef1f3d.

Reverted https://github.com/pytorch/pytorch/pull/77061 on behalf of https://github.com/janeyx99 due to Broke segment_reduce tests on trunk, e.g., 40f7ef1f3d
2022-06-10 01:57:34 +00:00
Brian Hirsh
46234df5f1 fix _unsafe_view schema to work with functionalization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79148

Approved by: https://github.com/albanD
2022-06-10 01:45:04 +00:00
David Berard
1d2a6c2e94 [JIT] Propagate profiled information to DifferentiableGraph outputs
Without profiled outputs, autodiff can't tell whether or not the outputs of a DifferentiableGraph should requires_grad. Autodiff would default to requires_grad=True if there was no profiled information, causing autodiff to mark tensors as requires_grad when they shouldn't have. This adds requires_grad info onto the type of the output, if it can be found in later uses of the output.

Adds a test for correct autodiff requires_grad behavior and also a test to make sure the output type is correctly annotated in create_autodiff_subgraphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78875

Approved by: https://github.com/eellison
2022-06-10 00:54:11 +00:00
Mikayla Gawarecki
40f7ef1f3d Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CUDA)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77061

Approved by: https://github.com/cpuhrsch
2022-06-10 00:49:37 +00:00
Luka Mushkudiani
c0a7c1d02e Expose _export_data from C++ to Python (#79207)
Summary:
https://www.internalfb.com/code/fbsource/[477a5768452957f87e56044169de47f051197567]/fbcode/caffe2/torch/csrc/jit/mobile/train/export_data.cpp
export_data is used to serialize data.

I bound this method to Python with pybind11.

Test Plan:
Wrote a file pybind_check.py which checks if the binding works.

Then, tried to read the produced data file from C++ with "torch::jit::_load_parameters" and checked that content matched.

Differential Revision: D37029253

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79207
Approved by: https://github.com/qihqi
2022-06-10 00:41:33 +00:00
PyTorch MergeBot
338bfe6315 Revert "[ci] remove IS_GHA env var"
This reverts commit 1a2d95c68a.

Reverted https://github.com/pytorch/pytorch/pull/79219 on behalf of https://github.com/malfet due to Broke binary jobs see 1a2d95c68a
2022-06-10 00:05:40 +00:00
Nikita Shulga
01929b7374 Delete syncbranches script/workflow (#79245)
As the majority of changes are going through @pytorchmergebot and
there is good enough tooling for making sure that code-development
changes are merged as well.

The https://github.com/pytorch/pytorch/tree/fbsync branch will still stay in
place and can be used to verify if something is amiss.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79245
Approved by: https://github.com/seemethere
2022-06-09 23:37:26 +00:00
Michael Suo
1a2d95c68a [ci] remove IS_GHA env var
This is unnecessary, GitHub automatically populates a `GITHUB_ACTION`
env var:
https://docs.github.com/en/actions/learn-github-actions/environment-variables#default-environment-variables

For docker, this env var is automatically propagated through our use of `--env-file`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79219

Approved by: https://github.com/kit1980, https://github.com/malfet, https://github.com/seemethere
2022-06-09 23:32:25 +00:00
Michael Suo
1e2890ab51 [ci] remove COMPACT_JOB_NAME env var
This is a holdover from jenkins days. It is unused now, so remove it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79214

Approved by: https://github.com/kit1980, https://github.com/malfet, https://github.com/seemethere
2022-06-09 23:32:25 +00:00
Michael Suo
0a4d646529 [ci] remove ppc64le build env
We don't run this in the CI (not sure if we ever did!), so remove the
dead code paths.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79213

Approved by: https://github.com/kit1980, https://github.com/malfet, https://github.com/seemethere
2022-06-09 23:32:24 +00:00
Michael Suo
18c3314e47 [ci] fix USE_DEPLOY condition
This was wrong...we should only build with deploy in the deploy-specific
build config.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79212

Approved by: https://github.com/kit1980, https://github.com/malfet, https://github.com/seemethere
2022-06-09 23:32:24 +00:00
Can Balioglu
21be4d40ba Fix how we handle host memory in CUDA getDeviceFromPtr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76902

According to the [CUDA Toolkit documentation](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__UNIFIED.html#group__CUDART__UNIFIED_1gd89830e17d399c064a2f3c3fa8bb4390) `cudaPointerGetAttributes()` function introduces a backwards-incompatible behavior with v11.0 and later. For host memory pointers that are not registered with any CUDA device, instead of returning an error, it now returns success, but sets `attr::type` to `cudaMemoryTypeUnregistered`. A side effect of this change can be seen in #74114.

This PR adds an additional check to `at::cuda::getDeviceFromPtr()` to address this subtle change.

Differential Revision: [D36171315](https://our.internmc.facebook.com/intern/diff/D36171315/)

Approved by: https://github.com/ezyang
2022-06-09 23:00:29 +00:00
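The behavior change can be mocked in a few lines (Python stand-ins, not real CUDA bindings): with CUDA 11 and later the attribute query succeeds for plain host pointers but reports an unregistered memory type, so a successful query alone no longer proves the pointer belongs to a device.

```python
from enum import Enum

class MemoryType(Enum):
    UNREGISTERED = 0  # host memory not known to CUDA (success + this type since v11)
    HOST = 1
    DEVICE = 2

def get_device_from_ptr(attr_type, device_index):
    """Mirror of the extra check: refuse to map unregistered host memory."""
    if attr_type is MemoryType.UNREGISTERED:
        raise ValueError("pointer is not backed by CUDA-visible memory")
    return device_index
```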
Jane Xu
1bc8c87322 [CI] Turn flaky test signal to green (#79220)
This implements the RFC #73573
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79220
Approved by: https://github.com/suo
2022-06-09 22:27:15 +00:00
Elias Ellison
13a8867c01 Add Dynamic Output Shape Tag for data-dependent ops, handle in FakeTensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79170

Approved by: https://github.com/ezyang
2022-06-09 22:16:16 +00:00
Omkar Salpekar
f221b16b2c Remove Unused Nervana GPU submodule (#79168)
Removes the Nervana GPU submodule.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79168
Approved by: https://github.com/seemethere
2022-06-09 22:08:23 +00:00
Justin Chu
54ea6bbc6f Add onnx / test to required merge rules (#78790)
Add two `linux-xenial-py3.7-clang7-onnx / test` checks to required merge rules for `torch.onnx`

~~Question: Do I need the full name (`linux-xenial-py3.7-clang7-onnx / test (default, 1, 2, linux.2xlarge)`) or will the bot match the prefix?~~
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78790
Approved by: https://github.com/BowenBao, https://github.com/janeyx99
2022-06-09 22:04:51 +00:00
PyTorch MergeBot
39bfdf4175 Revert "[mergebot] Make merge on green default behavior (#79199)"
This reverts commit ddeeca0bb5.

Reverted https://github.com/pytorch/pytorch/pull/79199 on behalf of https://github.com/zengk95 due to messed up on-mandatory which is a functional issue
2022-06-09 21:58:12 +00:00
zengk95
ddeeca0bb5 [mergebot] Make merge on green default behavior (#79199)
This makes the merge-on-green behavior the default, as we tried to do earlier but forgot to pass in the force arguments.

This also adds a few tests so that we don't make that mistake again (although people can still forget to pass the arguments through the shell).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79199
Approved by: https://github.com/janeyx99, https://github.com/seemethere, https://github.com/malfet
2022-06-09 21:07:02 +00:00
Joel Benjamin Schlosser
70d6446a3d Support both train / eval modes for ModuleInfo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78735

Approved by: https://github.com/albanD
2022-06-09 20:57:17 +00:00
Animesh Jain
79f18c1aee Minor FX test fix for TorchDynamo (#79206)
`torch.nn.Identity` gets optimized away with TorchDynamo. Replacing with `torch.nn.tanh`.

@jansel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79206
Approved by: https://github.com/jansel
2022-06-09 20:36:04 +00:00
Edward Z. Yang
b18ba7e036 Properly setup __name__ on refs functions.
My hands hurt now. Yes, I could have added type annotations; if you care, do it
yourself.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79178

Approved by: https://github.com/mruberry
2022-06-09 20:17:48 +00:00
Taylor Robie
84b9e5ba84 Move test_profiler tests to tree rather than icicle format
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79175

Approved by: https://github.com/ezyang
2022-06-09 19:45:02 +00:00
Taylor Robie
9f2e2aa28b Revert "Revert "[Profiler] Move python tracing to unified event type (Part 2)""
This reverts commit 4305f8e9bd.

replace TEST_CUDA with torch.has_cuda

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79173

Approved by: https://github.com/ezyang
2022-06-09 19:45:02 +00:00
qqaatw
2bafb42a0a Add onnx support for movedim and moveaxis (#78931)
Fixes #68918

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78931
Approved by: https://github.com/BowenBao
2022-06-09 19:41:09 +00:00
yuguo68
c1b831f9cd Fix jit schema_matching ignoring self resulting in wrong operator schema
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79101

Approved by: https://github.com/gmagogsfm, https://github.com/eellison
2022-06-09 19:36:06 +00:00
Mikayla Gawarecki
e289a18e78 Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CPU-only)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77058

Approved by: https://github.com/cpuhrsch
2022-06-09 19:27:29 +00:00