Commit Graph

1893 Commits

Author SHA1 Message Date
Edward Z. Yang
1ff52225f1 Unify SymIntNode and SymFloatNode into SymNode (#87817)
This refactor was prompted by challenges handling mixed int/float
operations in C++.  A previous version of this patch
(https://github.com/pytorch/pytorch/pull/87722/) added overloads for
each permutation of int/float and was unwieldy.  This PR takes a
different approach.

The general outline of the patch is to combine the C++ types SymIntNode
and SymFloatNode into a single type, SymNode.  This type is erased; we
no longer know statically in C++ whether we have an int or a float and
have to test it with the is_int()/is_float() virtual methods.  This has
a number of knock-on effects.

- We no longer have C++ classes to bind to Python.  Instead, we take an
  entirely new approach to our Python API, where SymInt/SymFloat classes
  are defined entirely in Python, each holding a SymNode (which corresponds
  to the C++ SymNode).  However, SymNode is not pybind11-bound; instead,
  it lives as-is in Python and is wrapped into a C++ SymNode using
  PythonSymNode when it crosses into C++.  This implies a userland rename
  (see the sketch after this list).

  In principle, it is also possible for the canonical implementation of SymNode
  to be written in C++ and then bound to Python with pybind11 (we have
  this code, although it is commented out).  However, I did not implement
  this, as we currently have no C++ implementations of SymNode.

  Because we do return SymInt/SymFloat from C++ bindings, the C++ binding
  code needs to know how to find these classes.  Currently, this is done
  just by manually importing torch and getting the attributes.

- Because SymInt/SymFloat are thin Python wrappers, __sym_dispatch__ now
  takes SymInt/SymFloat, rather than SymNode, bringing it in line with how
  __torch_dispatch__ works.

Some miscellaneous improvements:

- SymInt now has a constructor that takes SymNode.  Note that this
  constructor is ambiguous if you pass in a subclass of SymNode,
  so an explicit downcast is necessary.  This means toSymFloat/toSymInt
  are gone.  This is a mild optimization, as rvalue references now
  work automatically.

- We uniformly use the caster for c10::SymInt/SymFloat, rather than
  going the long way via the SymIntNode/SymFloatNode.

- Removed some unnecessary toSymInt/toSymFloat calls in normalize_*
  functions; pretty sure this doesn't change behavior.

- guard_int is now a free function, since to guard on an int you cannot
  assume the method exists; a free function can handle both int and
  SymInt inputs.

- We clean up the magic method definition code for SymInt/SymFloat/SymNode.
  ONLY the user classes (SymInt/SymFloat) get magic methods; SymNode gets
  plain methods; this is to help avoid confusion between the two types.
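
A minimal, hypothetical Python sketch of the layering described above
(names follow the prose; this is not the actual PyTorch source):

```
# Hypothetical sketch: a type-erased node with plain methods, and a
# user-facing SymInt wrapper that alone carries magic methods.
class SymNode:
    def __init__(self, value):
        self._value = value

    def is_int(self):
        return isinstance(self._value, int)

    def is_float(self):
        return isinstance(self._value, float)

    def add(self, other):
        return SymNode(self._value + other._value)


class SymInt:
    def __init__(self, node):
        self.node = node  # holds a SymNode, per the bullet above

    def __add__(self, other):
        return SymInt(self.node.add(other.node))
```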

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87817
Approved by: https://github.com/albanD, https://github.com/anjali411
2022-10-27 20:56:02 +00:00
HDCharles
d0e12d1cc8 [ao] Adding FAQ to docs (#87322)
Summary: migrated from: https://discuss.pytorch.org/t/quantization-frequently-asked-questions/161251

Test Plan: circle CI tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87322
Approved by: https://github.com/z-a-f
2022-10-25 20:18:04 +00:00
Masaki Kozuki
28593a8339 [docs] batch_isend_irecv and P2POp of torch.distributed (#86438)
Reopening https://github.com/pytorch/pytorch/pull/79722
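
For reference, a minimal usage sketch of the two APIs being documented
(assumes a two-process group is already initialized):

```
import torch
import torch.distributed as dist

rank = dist.get_rank()
peer = 1 - rank
send_t = torch.ones(2) * rank
recv_t = torch.zeros(2)
# Batch the point-to-point ops and kick them off together.
ops = [dist.P2POp(dist.isend, send_t, peer),
       dist.P2POp(dist.irecv, recv_t, peer)]
for req in dist.batch_isend_irecv(ops):
    req.wait()
```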

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86438
Approved by: https://github.com/kit1980
2022-10-25 00:11:50 +00:00
Kazuaki Ishizaki
72ec1b5fc1 Fix typo under docs directory (#87583)
This PR fixes typos in `.rst` files under the docs directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87583
Approved by: https://github.com/kit1980
2022-10-24 23:52:44 +00:00
Svetlana Karslioglu
7e83f65ad5 Add General Project Policies (#87385)
Add General Project Policies to the Governance page

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87385
Approved by: https://github.com/orionr
2022-10-20 21:02:09 +00:00
George Qi
17202b3637 [maskedtensor] fix docs formatting (#87387)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87387
Approved by: https://github.com/cpuhrsch
2022-10-20 20:48:25 +00:00
George Qi
cf2be34ff5 [maskedtensor] add docs (#84887)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84887
Approved by: https://github.com/cpuhrsch
2022-10-19 20:44:34 +00:00
Christian Puhrsch
e8c4adf3c3 Add torch.sparse overview section (#85265)
The goal of this section is to provide a general overview of how PyTorch handles sparsity for readers who are already familiar with sparse matrices and their operators.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85265
Approved by: https://github.com/jisaacso
2022-10-18 21:07:57 +00:00
albanD
9db7270ee7 Small update to Module note (#87142)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87142
Approved by: https://github.com/cpuhrsch
2022-10-17 22:56:49 +00:00
Jan Margeta
e85dbcc9b0 [docs] Fix ScalarTensor __repr__ in Extending PyTorch example (#86330)
This PR fixes the `__repr__` of the `ScalarTensor` class in the Extending PyTorch example so that it reports the class's own name instead of `DiagonalTensor`.
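
A hypothetical sketch of the kind of fix involved (the class body here
is illustrative, not the docs' exact code):

```
class ScalarTensor:
    def __init__(self, N, value):
        self._N = N
        self._value = value

    def __repr__(self):
        # Before the fix, this reported "DiagonalTensor(...)";
        # it should name the class itself.
        return f"ScalarTensor(N={self._N}, value={self._value})"
```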
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86330
Approved by: https://github.com/bdhirsh
2022-10-17 20:01:10 +00:00
Nikita Karetnikov
91b3cd0b5a [primTorch] Add a ref for narrow_copy (#86748)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86748
Approved by: https://github.com/mruberry
2022-10-17 10:16:05 +00:00
Lukas Mührke
e027740e77 Chore: Add 'mps' to the docs of tensor_attributes (#86585)
Since PyTorch supports 'mps' (Apple Metal) devices, this should be reflected in the documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86585
Approved by: https://github.com/albanD
2022-10-14 19:59:33 +00:00
Alvaro Gaona
b48deedb77 Set up new module torch.signal.windows (#85599)
Resolves #85366
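
A brief usage sketch, assuming the new module exposes window functions
such as `gaussian` (as it does in current releases):

```
import torch

# Compute a 10-point Gaussian window from the new module.
win = torch.signal.windows.gaussian(10, std=1.0)
print(win.shape)  # torch.Size([10])
```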

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85599
Approved by: https://github.com/lezcano, https://github.com/mruberry
2022-10-14 11:33:32 +00:00
Kshiteej K
54ee95c8ec [nn] module: full_backward_pre_hook (#86700)
Fixes https://github.com/pytorch/pytorch/issues/42824

* [x] Test
* [x] Doc
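
A minimal usage sketch of the new hook (the signature is assumed to
mirror the existing full_backward_hook convention):

```
import torch
import torch.nn as nn

def bwd_pre_hook(module, grad_output):
    # Runs just before the module's backward is computed.
    print(f"{type(module).__name__}: backward about to run")

lin = nn.Linear(2, 2)
handle = lin.register_full_backward_pre_hook(bwd_pre_hook)
lin(torch.randn(1, 2)).sum().backward()
handle.remove()
```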
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86700
Approved by: https://github.com/soulitzer
2022-10-13 17:36:39 +00:00
Shawn Zhong
e552cf1050 [DOC] Use type hints to show annotation in the docs (#79086)
Fixes #44964

Use type hints in the code to show type annotations in the parameters section of the docs.

For parameters that are already documented in the docstring but lack a type annotation, the type hints from the code are used:

| [Before](https://pytorch.org/docs/master/generated/torch.nn.AdaptiveMaxPool1d.html) | [After](https://docs-preview.pytorch.org/79086/generated/torch.nn.AdaptiveMaxPool1d.html) |
| --- | --- |
| <img width="462" alt="image" src="https://user-images.githubusercontent.com/6421097/172954756-96d2d8a6-7df9-4c0f-ad34-c12912a5a740.png"> | <img width="479" alt="image" src="https://user-images.githubusercontent.com/6421097/172954770-a6ce2425-99a6-4853-ac2c-e182c3849344.png"> |

| [Before](https://pytorch.org/docs/master/generated/torch.nn.Linear.html) | [After](https://docs-preview.pytorch.org/79086/generated/torch.nn.Linear.html) |
| --- | --- |
| <img width="482" alt="image" src="https://user-images.githubusercontent.com/6421097/172954992-10ce6b48-44a2-487e-b855-2a15a50805bb.png"> | <img width="471" alt="image" src="https://user-images.githubusercontent.com/6421097/172954839-84012ce6-bf42-432c-9226-d3e81500e72d.png"> |

Ref:
- PR https://github.com/pytorch/pytorch/pull/49294 removed type annotations from signatures in HTML docs.
- Sphinx version was bumped to 5.0.0 in PR #70309
- Duplicated (closed) issues: #78311 and #77501

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79086
Approved by: https://github.com/malfet
2022-10-12 22:31:48 +00:00
Mikayla Gawarecki
a77f2a95a7 Improve NestedTensor documentation (#85186)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85186
Approved by: https://github.com/cpuhrsch
2022-10-12 22:03:04 +00:00
Daniel Dale
ce56ee11fd Extend torch.cuda.is_available() to attempt an NVML-based CUDA availability assessment when explicitly requested by the user (#85951)
Fixes #83973 (This is a substitute PR for https://github.com/pytorch/pytorch/pull/85024)

First of all, thanks for your invaluable contributions to PyTorch everyone!

Given how extensively `torch.cuda.is_available` is used in the PyTorch ecosystem, IMHO it's worthwhile to provide downstream libraries/frameworks/users the ability to alter the default behavior of `torch.cuda.is_available` in the context of their PyTorch usage.

I'm confident there are many current and future such use cases which could benefit from leveraging a weakened, NVML-based `torch.cuda.is_available` assessment at a downstream framework's explicit direction (thanks @malfet, 81da50a972!). Though one could always patch out the `torch.cuda.is_available` function with another implementation in a downstream library, I think this environment-variable-based configuration option is more convenient, and the cost of including it is quite low.

As discussed in https://github.com/pytorch/pytorch/pull/85024#issuecomment-1261542045, this PR gates new non-default NVML-based CUDA behavior with an environment variable (PYTORCH_NVML_BASED_CUDA_CHK) that allows a user/framework to invoke non-default, NVML-based `is_available()` assessments if desired.
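
A usage sketch of the opt-in (the variable name comes from the PR text;
it must be set before CUDA availability is first queried):

```
import os

# Opt in to the weaker, NVML-based assessment before importing torch.
os.environ["PYTORCH_NVML_BASED_CUDA_CHK"] = "1"

import torch
print(torch.cuda.is_available())  # may now use the NVML-based check
```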

Thanks again for your work everyone!
@ngimel @malfet @awaelchli

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85951
Approved by: https://github.com/ngimel
2022-10-12 18:37:50 +00:00
Eddie Yan
25725fd624 (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682)
Rebased version of @mcarilli's cudaMallocAsync PR #65365, for continued testing
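
A hedged usage sketch: the backend is selected through
`PYTORCH_CUDA_ALLOC_CONF` (the `backend:cudaMallocAsync` setting as
documented for the allocator):

```
import os

# Select the cudaMallocAsync backend; read once at CUDA initialization.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

import torch
x = torch.ones(4, device="cuda")  # served by cudaMallocAsync
```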
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82682
Approved by: https://github.com/ngimel
2022-10-12 03:44:21 +00:00
Partho
42bd275233 [doc] LR scheduler example fix (#86629)
Fixes issue #86208
As suggested in the issue, updated the LR scheduler example to use a regular nn.Module like the other examples on the same page.
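
A sketch in the style of the updated example (a regular `nn.Module`
driving the optimizer and scheduler):

```
import torch
from torch import nn, optim

model = nn.Linear(4, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for _ in range(3):
    optimizer.zero_grad()
    model(torch.randn(8, 4)).sum().backward()
    optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
```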
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86629
Approved by: https://github.com/soulitzer
2022-10-11 21:41:50 +00:00
zaf
3a02873183 [quant][ao_migration] nn.intrinsic.quantized migration to ao (#86172)
All quantization-related modules are being migrated to `torch.ao`. This migrates `nn.intrinsic.quantized` (see the import sketch below). Please see the [tracker](https://github.com/pytorch/pytorch/issues/81667) for the timeline.

```
python test/test_quantization.py -- TestAOMigrationNNIntrinsic
```

Internal:

```
buck2 test @mode/dev-nosan //caffe2/test:quantization -- TestAOMigrationNNIntrinsic
```
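
A sketch of what the migration means for imports (the old path is
assumed to remain importable during the migration window, per the
tracker):

```
# New, canonical location after the migration:
from torch.ao.nn.intrinsic import quantized as nniq

# Old location; assumed to keep working (aliased) while the
# migration tracked in gh-81667 is in flight:
from torch.nn.intrinsic import quantized as nniq_old
```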

Differential Revision: [D39425515](https://our.internmc.facebook.com/intern/diff/D39425515/)

Differential Revision: [D39425515](https://our.internmc.facebook.com/intern/diff/D39425515)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86172
Approved by: https://github.com/jerryzh168
2022-10-08 00:01:38 +00:00
zaf
efccb6401c [quant][ao_migration] nn.intrinsic.qat migration to ao (#86171)
All quantization-related modules are being migrated to `torch.ao`. This migrates `nn.intrinsic.qat`. Please see the [tracker](https://github.com/pytorch/pytorch/issues/81667) for the timeline.

```
python test/test_quantization.py TestAOMigrationNNIntrinsic
```

Differential Revision: [D39419993](https://our.internmc.facebook.com/intern/diff/D39419993/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39419993/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86171
Approved by: https://github.com/jerryzh168
2022-10-07 17:29:42 +00:00
Howard Huang
cc9183eb4c Update distributed.rst backend collective support chart (#86406)
NCCL `scatter` was added by Wanchao in https://github.com/pytorch/pytorch/pull/70029

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86406
Approved by: https://github.com/wanchaol
2022-10-07 12:59:09 +00:00
Zafar
0e30da3f2f [refactor] Renaming ao.sparsity to ao.pruning (#84867)
`Sparsity` as a term doesn't reflect the tools developed by the AO team. `torch/ao/sparsity` also has utilities for structured pruning, which internally we always referred to as just "pruning". To avoid any confusion, we renamed `Sparsity` to `Prune`. We will not be introducing backwards compatibility, as this toolset has so far been kept under silent development.

This change will reflect the changes in the documentation as well.

**TODO:**
- [ ] Change the tutorials
- [ ] Confirm no bc-breakages
- [ ] Reflect the changes in the trackers and RFC docs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84867
Approved by: https://github.com/supriyar
2022-10-07 00:58:41 +00:00
Sahan Paliskara
936e93058b Delete torch::deploy from pytorch core (#85953)
As we have migrated torch::deploy over to https://github.com/pytorch/multipy, we can now delete it from pytorch core as ongoing development will happen there.

This PR was created due to syncing issues with https://github.com/pytorch/pytorch/pull/85443 which is where the review history can be found.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85953
Approved by: https://github.com/seemethere, https://github.com/malfet
2022-10-06 07:20:16 +00:00
Elias Ellison
d04889323e Add Context Manager for Disabling Multithreading in Backwards, use in aot autograd (#86245)
We were running into a few issues with running multithreaded backwards in aot_autograd, such as https://github.com/pytorch/pytorch/issues/86136, and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. The multithreading is lost in translation when we trace out the backwards anyway, and it adds a lot of additional complexity.
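
A hedged usage sketch of the context manager: the name
`set_multithreading_enabled` is taken from current `torch.autograd`;
this PR's exact spelling may differ:

```
import torch

x = torch.randn(3, requires_grad=True)
with torch.autograd.set_multithreading_enabled(False):
    # Backward executes single-threaded inside this block.
    x.sum().backward()
```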

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86245
Approved by: https://github.com/albanD, https://github.com/yf225
2022-10-06 03:27:42 +00:00
Jane Xu
a348975e00 Add opteinsum backend to give users control (#86219)
This achieves the same thing as https://github.com/pytorch/pytorch/pull/85908 but using backends instead of kwargs (which unfortunately break torchscript). This does mean we let go of numpy compatibility, BUT the win here is that users can control which opt_einsum strategy they want!

The backend allows for... well, you should just read the docs:
```
.. attribute::  torch.backends.opteinsum.enabled

    A :class:`bool` that controls whether opt_einsum is enabled (on by default). If so,
    torch.einsum will use opt_einsum (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html)
    to calculate an optimal path of contraction for faster performance.

.. attribute::  torch.backends.opteinsum.strategy

    A :class:`str` that specifies which strategies to try when `torch.backends.opteinsum.enabled` is True.
    By default, torch.einsum will try the "auto" strategy, but the "greedy" and "optimal" strategies are
    also supported. Note that the "optimal" strategy is factorial on the number of inputs as it tries all
    possible paths. See more details in opt_einsum's docs
    (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html).
```
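
A usage sketch of the flags quoted above; note that released PyTorch
spells the module `torch.backends.opt_einsum`, and setting the strategy
requires the opt-einsum package to be installed:

```
import torch

torch.backends.opt_einsum.enabled = True
torch.backends.opt_einsum.strategy = "greedy"  # or "auto" / "optimal"
out = torch.einsum("ij,jk,kl->il",
                   torch.randn(8, 16),
                   torch.randn(16, 4),
                   torch.randn(4, 8))
```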

In trying (and failing) to land 85908, I discovered that jit script does NOT actually pull from Python's version of einsum (because it supports neither variadic args nor kwargs). Thus I learned that jitted einsum does not subscribe to the new opt_einsum path calculation. Overall, this is fine since jit script is getting deprecated, but where is the best place to document this?

## Test plan:
- added tests to CI
- locally tested that trying to set the strategy to something invalid will error properly
- locally tested that tests will pass even if you don't have opt-einsum
- locally tested that setting the strategy when opt-einsum is not there will also error properly
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86219
Approved by: https://github.com/soulitzer, https://github.com/malfet
2022-10-05 06:33:25 +00:00
Jing Xu
f20e4eab7b Fix ITT unit-tests if PyTorch is compiled with USE_ITT=OFF (#86199)
Fixes https://github.com/pytorch/pytorch/pull/84848#discussion_r986329680

@malfet @slgong-fb
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86199
Approved by: https://github.com/malfet
2022-10-04 21:57:05 +00:00
Khushi
d6b030856b [primTorch] special: j0, j1, spherical_j0 (#86049)
Adds prims and refs for special functions (bessel_j0, bessel_j1, spherical_bessel_j0). Thanks!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86049
Approved by: https://github.com/mruberry
2022-10-04 18:21:46 +00:00
Driss Guessous
cd6477617c Custom sdp implementations dense (#85984)
# Summary

- This code creates the runtime dispatch system for choosing a performant fused SDP kernel. The only fused kernel choice is flash_attention. It also creates Python flags and a context manager that can be used to turn dispatch behavior on and off (see the sketch below).
- This also adds support for flash_attention with dense tensors.
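
A hedged sketch of the Python flags (names assumed from the
`torch.backends.cuda` surface this work targets):

```
import torch

# Allow the flash_attention fused kernel to be chosen by the dispatcher.
torch.backends.cuda.enable_flash_sdp(True)
print(torch.backends.cuda.flash_sdp_enabled())  # True
```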

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85984
Approved by: https://github.com/cpuhrsch
2022-10-03 17:36:37 +00:00
vfdev
8d9472d7d4 [skip-ci] Fixed bad link in build_ci_governance.rst (#85933)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85933
Approved by: https://github.com/albanD
2022-10-03 17:35:44 +00:00
Masaki Kozuki
85d520d448 [docs] Add `torch.channels_last_3d` (#85888)
As per title, updating https://pytorch.org/docs/master/tensor_attributes.html#torch-memory-format.
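
A one-liner sketch of the memory format being documented:

```
import torch

x = torch.randn(1, 3, 4, 8, 8).to(memory_format=torch.channels_last_3d)
print(x.is_contiguous(memory_format=torch.channels_last_3d))  # True
```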

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85888
Approved by: https://github.com/ngimel
2022-10-03 17:32:07 +00:00
Codrin Popa
d401732baa Added roundup_bypass_threshold_mb knobs to the PyTorch Caching Allocator (#85940)
Summary:
Added an additional roundup knob (``roundup_bypass_threshold_mb``) to bypass rounding of the requested allocation size for allocation requests larger than the threshold value (in MB). This can help reduce the memory footprint when making large allocations that are expected to be persistent or long-lived.
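
A hedged configuration sketch: allocator knobs are passed via
`PYTORCH_CUDA_ALLOC_CONF`; the exact `key:value` syntax here is assumed
from the allocator's other options:

```
import os

# Bypass size rounding for allocations larger than 256 MB.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "roundup_bypass_threshold_mb:256"

import torch
```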

Differential Revision: D39868104

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85940
Approved by: https://github.com/zdevito
2022-10-03 16:56:22 +00:00
Richard Zou
a262ccea58 Change torch.autograd.graph.disable_saved_tensors_hooks to be public API (#85994)
Also addresses some comments from the review in
https://github.com/pytorch/pytorch/pull/85971
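
A minimal usage sketch of the now-public API (it takes the error
message to raise if a hook is registered inside the block):

```
import torch

x = torch.randn(2, requires_grad=True)
with torch.autograd.graph.disable_saved_tensors_hooks(
        "saved-tensors hooks are not allowed here"):
    y = (x * x).sum()  # registering a hook in here would raise
y.backward()
```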
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85994
Approved by: https://github.com/albanD, https://github.com/soulitzer
2022-10-03 16:25:01 +00:00
vfdev
6fd5d6397a [Docs] Updated torchvision people (#85931)
cc @datumbox @pmeier

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85931
Approved by: https://github.com/fmassa, https://github.com/datumbox
2022-10-03 10:57:08 +00:00
Ke Wen
05d1128106 [c10d] Start deprecating *_multigpu APIs (#85961)
### Deprecation reasons:
- For most users, training is on one GPU per process, so these APIs are rarely used
- They added one more API dimension
- They can be expressed in a composed manner
- They are not abstracted; they are specific to GPUs
- They caused backend APIs and implementations to have nested `std::vector<std::vector<Tensor>>`, which is hard to read or maintain

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85961
Approved by: https://github.com/XilunWu, https://github.com/H-Huang
2022-10-01 00:59:39 +00:00
Justin Chu
69b927701a [ONNX] Update user documentation (#85819)
- Remove mentions of `SymbolicContext` in the doc
- Comment out the PythonOp example so that it is not shown to users
- Updated code blocks and wording
- Changed to recommend using `pip` for installing onnx.

Now adds a deprecation message to the docs (demo only):

![image](https://user-images.githubusercontent.com/11205048/193327649-f789b369-6b59-49e0-8bba-34a6785eb128.png)

Fixes #85608

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85819
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-30 19:35:34 +00:00
erjia
b13b10a8fa Extend collate function that can register collate functions to handle specific types (#85748)
As per a request from the Vision team, this adds a `collate` function with an extra argument, `collate_fn_map`, to dispatch custom collate functions for non-collection objects and specific object types.
If the type of a batch element is not present in `collate_fn_map`, it will go through the keys in insertion order to check whether the type is a subclass of a key. If so, it will invoke the corresponding collate function.

`default_collate` will utilize the `collate` function with a few default collate functions for `int`, `float`, `str`, and NumPy objects.

Benefits:
- Domain teams can register their own `collate` functions to handle their specific object types (see the sketch below)
- It is easier for users to extend the `collate` function.
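
A hedged sketch of the registration mechanism (the module path is
assumed from where `default_collate` lives; the custom function's
keyword-only `collate_fn_map` parameter follows the description above):

```
import torch
from torch.utils.data._utils.collate import collate, default_collate_fn_map

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def collate_point_fn(batch, *, collate_fn_map=None):
    # Collate a list of Points into one Point of batched tensors.
    return Point(torch.tensor([p.x for p in batch]),
                 torch.tensor([p.y for p in batch]))

fn_map = {**default_collate_fn_map, Point: collate_point_fn}
batched = collate([Point(0, 1), Point(2, 3)], collate_fn_map=fn_map)
```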
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85748
Approved by: https://github.com/NivekT, https://github.com/pmeier
2022-09-30 13:30:18 +00:00
Ke Wen
ade1c19612 Add reduce_scatter_tensor in place of _reduce_scatter_base (#85867)
This is a twin PR similar to the one for `all_gather_into_tensor` (#85686).
The philosophy for renaming `_reduce_scatter_base` instead of merging it is described in #85686.
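
A minimal usage sketch (assumes an initialized process group with one
GPU per rank):

```
import torch
import torch.distributed as dist

world_size = dist.get_world_size()
input_t = torch.arange(world_size * 2, device="cuda", dtype=torch.float)
output_t = torch.empty(2, device="cuda")
# Each rank receives the reduced (summed) slice matching its rank.
dist.reduce_scatter_tensor(output_t, input_t)
```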

Cc @rohan-varma @H-Huang @crcrpar @ptrblck @mrshenli

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85867
Approved by: https://github.com/crcrpar, https://github.com/H-Huang
2022-09-30 05:48:16 +00:00
Kazuaki Ishizaki
bc57306bdd Fix typo under docs directory and RELEASE.md (#85896)
This PR fixes typos in `.rst` files under the docs directory and in `RELEASE.md`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85896
Approved by: https://github.com/kit1980
2022-09-29 21:41:59 +00:00
zaf
d542aab5c1 [quant][ao_migration] nn.intrinsic migration to ao (#84842)
All quantization-related modules are being migrated to `torch.ao`. This migrates `nn.intrinsic.modules`. Please see the [tracker](https://github.com/pytorch/pytorch/issues/81667) for the timeline.

Differential Revision: [D39419733](https://our.internmc.facebook.com/intern/diff/D39419733/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39419733/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84842
Approved by: https://github.com/jerryzh168
2022-09-28 23:54:29 +00:00
Mikayla Gawarecki
afaee00fec Add python nested_tensor and as_nested_tensor constructors in torch.nested (#85593)
Remove `torch.nested_tensor`, which has erroneous behavior w.r.t. gradients (the result could be either a leaf or not). Introduce `torch.nested.nested_tensor` and `torch.nested.as_nested_tensor` in the vein of `torch.tensor` and `torch.as_tensor`. Done in the nested `__init__.py` for now but can move to pybind in the future (when we want to load from numpy/nested lists).

Discussed offline with @cpuhrsch; the pybind constructor (https://github.com/pytorch/pytorch/pull/85536) was more gnarly than expected, so we can move to that when we do need loading from numpy etc. (see the usage sketch below).
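
A minimal usage sketch of the two new constructors (mirroring
`torch.tensor` / `torch.as_tensor`, per the description above):

```
import torch

ts = [torch.randn(2), torch.randn(3)]
nt = torch.nested.nested_tensor(ts)      # copies, like torch.tensor
nt2 = torch.nested.as_nested_tensor(ts)  # may alias, like torch.as_tensor
```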

Differential Revision: [D39806622](https://our.internmc.facebook.com/intern/diff/D39806622)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85593
Approved by: https://github.com/drisspg, https://github.com/cpuhrsch
2022-09-28 20:15:02 +00:00
Jing Xu
80b8886223 add itt unit test and docstrings (#84848)
Add unit tests and docstrings corresponding to PR https://github.com/pytorch/pytorch/pull/63289
UT:
1. `test_profiler_emit_itt` in `test/test_autograd.py`. This test is merely intended to catch if emit_itt breaks on construction.
2. Test `torch.profiler.itt` functions in `test/test_itt.py`
3. Only testing that emit_itt runs when `record_shapes` option is enabled in `test/test_profiler.py`.

Docstrings:
1. add ITT-related info to `docs/source/bottleneck.rst`
2. add `torch.profiler.itt` functions to `docs/source/profiler.rst`
3. add docstrings to `torch.profiler.itt` functions in `torch/profiler/itt.py` (see the usage sketch below)
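
A brief usage sketch of the `torch.profiler.itt` functions being
documented (the API mirrors `torch.cuda.nvtx`):

```
import torch

if torch.profiler.itt.is_available():
    torch.profiler.itt.range_push("my_region")
    torch.randn(64).sum()
    torch.profiler.itt.range_pop()
```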
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84848
Approved by: https://github.com/malfet
2022-09-28 01:39:58 +00:00
Andrew M. James
5bfcf1f01a [Docs] Update sparse Maintainers (#85126)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85126
Approved by: https://github.com/cpuhrsch
2022-09-27 22:50:31 +00:00
Ke Wen
775a22c7c6 Add all_gather_into_tensor in place of _all_gather_base (#85686)
### Description
- This PR renames `_all_gather_base` to `all_gather_into_tensor` so that it is clearer in meaning.
- The `all_gather_into_tensor` API differs from the `all_gather` API in the output it accepts -- a single, large tensor instead of a list of tensors.
- This PR also adds a deprecation warning to `_all_gather_base`.

### Issue
`_all_gather_base` was implemented in https://github.com/pytorch/pytorch/pull/33924 to avoid unnecessary flattening. There was previous effort (#82639) to merge `_all_gather_base` with the existing `all_gather` API by detecting the parameter type passed in for the output.

There are, however, two "blockers" that make the merge difficult:
(i) The merge leads to a backward compatibility break. We would need to change the parameter name `tensor_list` in `all_gather` to a general name, `output`, that can cover both a tensor and a tensor list.
(ii) Recently, the `all_gather` API added uneven tensor support, utilizing the tensor boundaries implied by the list. We are, however, not sure whether to add such support to the `_all_gather_base` function, because that would require users to pass in additional tensor boundary information.

In view of the above, we decided to productize `_all_gather_base` as a separate function, but with a clearer name.

### Testing
Added tests:
- `test_all_gather_into_cat_tensor_cuda` -- output form as with `torch.cat`. For example:
```
        >>> tensor_in
        tensor([1, 2], device='cuda:0') # Rank 0
        tensor([3, 4], device='cuda:1') # Rank 1
        >>> tensor_out
        tensor([1, 2, 3, 4], device='cuda:0') # Rank 0
        tensor([1, 2, 3, 4], device='cuda:1') # Rank 1
```
- `test_all_gather_into_stack_tensor_cuda` -- output form as with `torch.stack`. For example:
```
        >>> tensor_out2
        tensor([[1, 2],
                [3, 4]], device='cuda:0') # Rank 0
        tensor([[1, 2],
                [3, 4]], device='cuda:1') # Rank 1
```
The output form is determined by the shape of the output tensor passed by the user; no flag is used.

Cc @rohan-varma @mrshenli @crcrpar @ptrblck @H-Huang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85686
Approved by: https://github.com/rohan-varma, https://github.com/crcrpar
2022-09-27 22:50:22 +00:00
supriyar
18685b7fe1 Update PT maintainers list for AO (#85125)
Summary:
Update the list based on recommendation in
https://github.com/pytorch/pytorch/blob/master/docs/source/community/build_ci_governance.rst

Differential Revision: [D39745619](https://our.internmc.facebook.com/intern/diff/D39745619)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85125
Approved by: https://github.com/gchanan
2022-09-23 23:38:57 +00:00
Ivan Yashchuk
539076e2c2 Remove deprecated torch.lstsq (#70980)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.lstsq`.

There's a note in `tools/codegen/gen.py` about the `lstsq` schema in `native_functions.yaml` that I will not remove:
87139d8532/tools/codegen/gen.py (L734-L770)
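
For migrators, a sketch of the documented replacement:

```
import torch

A = torch.randn(5, 3)
b = torch.randn(5, 2)
# torch.linalg.lstsq replaces the removed torch.lstsq; note the result
# is a named tuple and the argument order is (A, b).
solution = torch.linalg.lstsq(A, b).solution
```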

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70980
Approved by: https://github.com/lezcano, https://github.com/kit1980
2022-09-23 00:16:55 +00:00
Ivan Yashchuk
bcf93181a0 Remove deprecated torch.matrix_rank (#70981)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.matrix_rank`.
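
Likewise, a sketch of the replacement:

```
import torch

# torch.linalg.matrix_rank replaces the removed torch.matrix_rank.
rank = torch.linalg.matrix_rank(torch.eye(4))
```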

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70981
Approved by: https://github.com/lezcano, https://github.com/kit1980
2022-09-22 17:40:46 +00:00
lezcano
de0f3c4200 Change Lezcano to lezcano (#85396)
I changed my handle to lezcano (no caps) as there are always issues with
capital letters when automating stuff.

The last issue was https://github.com/pytorch/test-infra/pull/751
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85396
Approved by: https://github.com/ezyang
2022-09-21 13:49:55 +00:00
Mateusz Sypniewski
b70c254ebb Rework printing tensor aliases in CSAN error message (#85008)
Small rework of how the error message is formatted; it introduces a distinction between the arguments and the output of kernels. Manually verified on multiple examples that the message is printed as expected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85008
Approved by: https://github.com/lw
2022-09-21 13:41:52 +00:00
Justin Chu
d6c2080eb4 [ONNX] Update ONNX documentation to include unsupported operators (#84496)
- Update ONNX documentation to include unsupported operators
- Include aten, quantized and other namespaces
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84496
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao, https://github.com/kit1980
2022-09-16 23:48:37 +00:00