Commit Graph

1972 Commits

Author SHA1 Message Date
Emilio Castillo
07e595e88a Add device_idx to free_fn in CUDAPluggableAllocator (#91398)
Requested by NVIDIA folks: also track the device index in the free function.
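
A minimal sketch of the Python side, assuming an allocator compiled into a shared library (the path and function names below are hypothetical):
```
import torch

# The C/C++ side is assumed to export functions along these lines; after this
# change the free function also receives the device index:
#   void* my_malloc(ssize_t size, int device, cudaStream_t stream);
#   void  my_free(void* ptr, ssize_t size, int device, cudaStream_t stream);
allocator = torch.cuda.memory.CUDAPluggableAllocator(
    "./my_allocator.so",  # hypothetical path to the compiled library
    "my_malloc",          # exported allocation function
    "my_free",            # exported free function
)
torch.cuda.memory.change_current_allocator(allocator)
```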
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91398
Approved by: https://github.com/albanD
2023-01-12 05:03:48 +00:00
BowenBao
c537f5bee8 [ONNX] Documentation for torch.onnx.find_mismatch (#90728)
Doc preview:
* `find_mismatch`: https://docs-preview.pytorch.org/90728/onnx.html#torch.onnx.verification.find_mismatch
* `GraphInfo`: https://docs-preview.pytorch.org/90728/onnx.html#classes and https://docs-preview.pytorch.org/90728/generated/torch.onnx.verification.GraphInfo.html#torch.onnx.verification.GraphInfo
* `VerificationOptions`: https://docs-preview.pytorch.org/90728/onnx.html#classes and https://docs-preview.pytorch.org/90728/generated/torch.onnx.verification.VerificationOptions.html#torch.onnx.verification.VerificationOptions
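
A minimal usage sketch of the documented API (assumes onnx and onnxruntime are installed; the toy module is illustrative):
```
import torch
from torch.onnx import verification

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

# Export, compare PyTorch vs. ONNX Runtime outputs, and bisect any mismatch:
graph_info = verification.find_mismatch(
    Model(),
    (torch.randn(2, 3),),
    options=verification.VerificationOptions(),
)
graph_info.pretty_print_mismatch()
```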

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90728
Approved by: https://github.com/titaiwangms, https://github.com/justinchuby
2023-01-11 23:58:57 +00:00
Pearu Peterson
b9a035c1c5 Add check-sparse-tensor-invariants flag to Context. (#90849)
This PR adds a "check sparse tensor invariants" flag to Context that, when enabled, triggers sparse tensor data invariant checks in the unsafe methods for constructing sparse COO/CSR/CSC/BSR/BSC tensors. The feature includes the following changes to the UI:

- `torch.enable_check_sparse_tensor_invariants` and `torch.is_check_sparse_tensor_invariants_enabled` functions to globally enable/disable the invariant checks and to retrieve the state of the feature, respectively
- `torch.sparse_coo/csr/csc/bsr/bsc/compressed_tensor` functions have a new optional argument `check_invariants` to enable/disable the invariant checks explicitly. When the `check_invariants` argument is specified, the global state of the feature is temporarily overridden.
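
A minimal sketch of both entry points, using the API names as described above (the invalid `col_indices` value is deliberate):
```
import torch

crow_indices = torch.tensor([0, 2, 4])
col_indices = torch.tensor([0, 1, 0, 5])  # 5 is out of bounds for 4 columns
values = torch.tensor([1.0, 2.0, 3.0, 4.0])

# Per-call override via the new optional argument:
try:
    torch.sparse_csr_tensor(crow_indices, col_indices, values,
                            size=(2, 4), check_invariants=True)
except RuntimeError as e:
    print(e)  # reports the out-of-bounds column index

# Global toggle and state query:
torch.enable_check_sparse_tensor_invariants()
assert torch.is_check_sparse_tensor_invariants_enabled()
```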

The PR also fixes https://github.com/pytorch/pytorch/issues/90833

# Main issue

*The following content is outdated after merging the PRs in this ghstack but kept for the record.*

The importance of this feature is that, when the invariant checks are enabled by default, say via

<details>

```
$ git diff
diff --git a/torch/__init__.py b/torch/__init__.py
index c8543057c7..19a91d0482 100644
--- a/torch/__init__.py
+++ b/torch/__init__.py
@@ -1239,3 +1239,8 @@ if 'TORCH_CUDA_SANITIZER' in os.environ:

 # Populate magic methods on SymInt and SymFloat
 import torch.fx.experimental.symbolic_shapes
+
+# temporarily enable sparse tensor arguments validation in unsafe
+# constructors:
+
+torch._C._set_check_sparse_tensor_invariants(True)
```

</details>

a massive number of test failures/errors occur in test_sparse_csr.py tests:
```
$ pytest -sv test/test_sparse_csr.py
<snip>
==== 4293 failed, 1557 passed, 237 skipped, 2744 errors in 69.71s (0:01:09) ====
```
This means we have been silently constructing sparse compressed tensors that do not satisfy the sparse tensor invariants. In particular, the following errors are raised:

```
AssertionError: "resize_as_sparse_compressed_tensor_: self and src must have the same layout" does not match "expected values to be a strided and contiguous tensor"

RuntimeError: CUDA error: device-side assert triggered

RuntimeError: `col_indices[..., crow_indices[..., i - 1]:crow_indices[..., i]] for all i = 1, ..., nrows are sorted and distinct along the last dimension values` is not satisfied.

RuntimeError: expected col_indices to be a strided and contiguous tensor

RuntimeError: expected row_indices to be a strided and contiguous tensor

RuntimeError: expected values to be a strided and contiguous tensor

RuntimeError: for_each: failed to synchronize: cudaErrorAssert: device-side assert triggered

RuntimeError: tensor dimensionality must be sum of batch, base, and dense dimensionalities (=0 + 2 + 0) but got 3
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90849
Approved by: https://github.com/amjames, https://github.com/cpuhrsch
2023-01-11 01:05:14 +00:00
Kazuaki Ishizaki
4f91b8e0ee Fix typo under docs directory (#91871)
This PR fixes typo in '.rst' files under 'docs' directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91871
Approved by: https://github.com/ngimel
2023-01-10 22:33:36 +00:00
PyTorch MergeBot
3aeb7127b4 Revert "Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#91600)"
This reverts commit 370df963e0.

Reverted https://github.com/pytorch/pytorch/pull/91600 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2023-01-10 21:38:40 +00:00
Salil Desai
370df963e0 Clean Up MobileOptimizerType Rewrite Flags Public API and Documentation (#91600)
Summary:
X-link: https://github.com/facebookresearch/d2go/pull/452

Remove MobileOptimizerType and all rewrite flags from the torch.X and torch._C.X namespaces to clean them up.

The affected rewrite flags are
- CONV_BN_FUSION
- FUSE_ADD_RELU
- HOIST_CONV_PACKED_PARAMS
- INSERT_FOLD_PREPACK_OPS
- REMOVE_DROPOUT
- VULKAN_AUTOMATIC_GPU_TRANSFER

BC-Breaking Change:

Before this change, the rewrite flags were accessible through all of
1. torch.utils.mobile_optimizer.MobileOptimizerType.X
2. torch._C.MobileOptimizerType.X
3. torch.X
4. torch.MobileOptimizerType.X
5. torch._C.X

But after this change, only torch.utils.mobile_optimizer.MobileOptimizerType.X (option 1 above) and the newly added torch._C._MobileOptimizerType.X remain (see the sketch below).

Corresponding updates to PyTorch Tutorial Docs are in https://github.com/pytorch/tutorials/pull/2163
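
A minimal sketch of the surviving public path, assuming the standard `optimize_for_mobile` entry point:
```
import torch
from torch.utils.mobile_optimizer import MobileOptimizerType, optimize_for_mobile

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
).eval()
scripted = torch.jit.script(model)

# The enum is still reachable via option 1; here it skips one rewrite pass:
optimized = optimize_for_mobile(
    scripted,
    optimization_blocklist={MobileOptimizerType.CONV_BN_FUSION},
)
```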

Test Plan:
```buck test caffe2/test:test_mobile_optimizer```
```
Summary
  Pass: 6
  Skip: 1
    ↻ caffe2/test:test_mobile_optimizer - test_mobilenet_optimize_for_mobile (test_mobile_optimizer.TestOptimizer)
  ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/4222124793514412
```
___

With temporary testing changes in D41690204:

```buck run caffe2:test_rewrite_flags_api```
Before:
```
torch.utils.mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ✅ | Result: ✅
torch._C._MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ✅ | Result: ❌ (module 'torch._C' has no attribute '_MobileOptimizerType')
torch._C.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ✅
torch.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ✅
torch.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ✅
torch._C.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ✅
```
After:
```
torch.utils.mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ✅ | Result: ✅
torch._C._MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ✅ | Result: ✅
torch._C.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ❌ (module 'torch._C' has no attribute 'MobileOptimizerType')
torch.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ❌ (module 'torch' has no attribute 'VULKAN_AUTOMATIC_GPU_TRANSFER')
torch.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ❌ (module 'torch' has no attribute 'MobileOptimizerType')
torch._C.VULKAN_AUTOMATIC_GPU_TRANSFER
        Expected: ❌ | Result: ❌ (module 'torch._C' has no attribute 'VULKAN_AUTOMATIC_GPU_TRANSFER')
```

```buck test caffe2/test:public_bindings -- test_no_new_bindings```
```
Summary
  Pass: 1
  ListingSuccess: 1
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/7881299473114294
```

Differential Revision: D41690203

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91600
Approved by: https://github.com/albanD, https://github.com/malfet
2023-01-10 20:16:53 +00:00
Sean Silva
e9cd7e0869 [dynamo] Fix rst syntax for list (#90390)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90390
Approved by: https://github.com/soumith
2023-01-10 19:56:26 +00:00
Edward Z. Yang
333540a458 Reland "Add torch.utils.device_mode" (#91796)
Original PR https://github.com/pytorch/pytorch/pull/91525

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91796
Approved by: https://github.com/albanD
2023-01-09 20:57:12 +00:00
Will Constable
630ef6c711 Fix Dynamo+DDP documentation (#91832)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91832
Approved by: https://github.com/soumith, https://github.com/davidberard98
2023-01-09 17:35:49 +00:00
PyTorch MergeBot
9b415240d4 Revert "Reland "Add torch.utils.device_mode" (#91796)"
This reverts commit 81b5eff3c3.

Reverted https://github.com/pytorch/pytorch/pull/91796 on behalf of https://github.com/huydhn due to This breaks trunk with the following failed test https://hud.pytorch.org/failure/test_jit_save%2CTestTracer
2023-01-09 04:45:47 +00:00
Edward Z. Yang
81b5eff3c3 Reland "Add torch.utils.device_mode" (#91796)
Original PR https://github.com/pytorch/pytorch/pull/91525

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91796
Approved by: https://github.com/albanD
2023-01-08 03:44:56 +00:00
drisspg
eb8547e939 Add a NestedTensor Readme (#91472)
# Summary
This PR adds a NestedTensor README that explains the code structure and will hopefully serve as a reference point for new contributors, especially those who would like to implement a NestedTensor kernel.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91472
Approved by: https://github.com/mikaylagawarecki, https://github.com/cpuhrsch
2023-01-06 14:44:55 +00:00
PyTorch MergeBot
f571ae4fdb Revert "Make torch.device usable as a context manager (#91525)"
This reverts commit 619d52a5d2.

Reverted https://github.com/pytorch/pytorch/pull/91525 on behalf of https://github.com/mehtanirav due to Internal breakages
2023-01-05 21:34:50 +00:00
Edward Z. Yang
619d52a5d2 Make torch.device usable as a context manager (#91525)
Fixes https://github.com/pytorch/pytorch/issues/82296
Fixes https://github.com/pytorch/pytorch/issues/27878
Fixes https://github.com/pytorch/pytorch/issues/260
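
A minimal sketch of the resulting usage (falls back to CPU when CUDA is unavailable):
```
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
with torch.device(device):
    x = torch.randn(3)             # allocated on `device` with no explicit argument
    layer = torch.nn.Linear(3, 3)  # parameters are created on `device` too
print(x.device, layer.weight.device)
```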

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91525
Approved by: https://github.com/albanD
2023-01-04 01:32:00 +00:00
samdow
162474d7fd [functorch] add new ensembling api, demonstrate in example (#88850)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88850
Approved by: https://github.com/zou3519
2023-01-04 00:33:14 +00:00
samdow
c5e5916fff [functorch] add functorch functional_call, update tests to test this (#89213)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89213
Approved by: https://github.com/zou3519
2023-01-04 00:33:14 +00:00
Richard Zou
264f5ed516 [autograd.Function] Add docs on the functorch interaction (#91452)
This PR:
- Updates the autograd.Function.forward docs to reflect how you either
  define a forward that takes ctx or a separate forward and setup_context
  (a minimal sketch of the latter follows this list)
- Updates the "Extending Autograd" docs to suggest the usage of
  autograd.Function with separate forward and setup_context. This should
  be the default because there is a low barrier to go from this to
  an autograd.Function that is fully supported by functorch transforms.
- Adds a new "Extending torch.func with autograd.Function" doc that
  explains how to use autograd.Function with torch.func. It also
  explains how to use generate_vmap_rule and how to manually write a
  vmap staticmethod.
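
A minimal sketch of the separate forward/setup_context style the docs now recommend:
```
import torch

class Exp(torch.autograd.Function):
    # New style: forward no longer takes ctx...
    @staticmethod
    def forward(x):
        return x.exp()

    # ...saving state moves into setup_context, which is what keeps the
    # Function compatible with functorch transforms.
    @staticmethod
    def setup_context(ctx, inputs, output):
        ctx.save_for_backward(output)

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        return grad_output * result

x = torch.randn(3, requires_grad=True)
Exp.apply(x).sum().backward()
```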

While writing this, I noticed that the implementations of the
setup_context staticmethod, generate_vmap_rule, and the vmap staticmethod
are a bit inconsistent with the other methods/attributes on autograd.Function:
- https://github.com/pytorch/pytorch/issues/91451
- I'm happy to fix those if we think it is a problem, either in this PR
  or a followup (this PR is getting long, I want some initial docs
  out that I can point early adopters at, and fixing the problems in the
  future isn't really BC-breaking).

Test Plan:
- view docs preview
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91452
Approved by: https://github.com/soulitzer
2023-01-04 00:28:19 +00:00
Richard Zou
31e66ca4ef [torch.func] Add docs (#91319)
Docs copy-pasted from functorch docs with minor adjustments. We are
keeping the functorch docs for BC, though that's up for debate -- we
could also just say "see .. in torch.func" for some, but not all doc
pages (we still want to keep around any examples that use
make_functional so that users can tell what the difference between that
and the new functional_call is).

Test Plan:
- docs preview
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91319
Approved by: https://github.com/samdow
2022-12-30 02:51:18 +00:00
Kurt Mohler
08a47549af Rename Tensor._storage to Tensor.untyped_storage and update docs (#91414)
Fixes #89224

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91414
Approved by: https://github.com/ezyang
2022-12-28 19:21:34 +00:00
Joel Schlosser
8b55b86dbd Move sym_int and sym_float alongside SymInt / SymFloat in base torch package (#91317)
This PR moves the definitions for:
* `sym_int`
* `sym_ceil` (used only for `sym_int`)
* `sym_floor` (used only for `sym_int`)
* `sym_float`

from `torch/fx/experimental/symbolic_shapes.py` to `torch/__init__.py`, where `SymInt` and `SymFloat` are already defined.

This removes the need for several in-line imports, and enables proper JIT script gating for #91318. I'm very open to doing this in a better way!
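
After the move, the helpers are reachable at the top level; on plain Python numbers they behave like the sketch below (sym_int rounds toward zero):
```
import torch

print(torch.sym_float(2))   # 2.0
print(torch.sym_int(2.7))   # 2
print(torch.sym_int(-2.7))  # -2
```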

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91317
Approved by: https://github.com/ezyang, https://github.com/anijain2305
2022-12-28 16:08:16 +00:00
Salahuddin
f1d8fef4d4 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py`
- `pytorch/docs/XXX.rst` files, where XXX stands for each of the files I edited

Although I added `softmax` to the `docs` directory, I was not sure exactly which files/folders required the edits, so there could be issues
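
A small usage sketch of the two documented spellings:
```
import torch

x = torch.randn(2, 3)
a = torch.softmax(x, dim=-1)  # torch-level function
b = x.softmax(dim=-1)         # Tensor method
assert torch.allclose(a, b)
```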

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-28 15:06:24 +00:00
PyTorch MergeBot
af7132302a Revert "Softmax added to tensor, torch and docs (#91292)"
This reverts commit f8b28799f8.

Reverted https://github.com/pytorch/pytorch/pull/91292 on behalf of https://github.com/weiwangmeta due to breaking internal distributed testing builds
2022-12-28 14:30:46 +00:00
Salahuddin
f8b28799f8 Softmax added to tensor, torch and docs (#91292)
Fixes #91107

Added `softmax` docs in

- `pytorch/torch/_tensor_docs.py`
- `pytorch/torch/_torch_docs.py`
- `pytorch/docs/XXX.rst` files, where XXX stands for each of the files I edited

Although I added `softmax` to the `docs` directory, I was not sure exactly which files/folders required the edits, so there could be issues

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91292
Approved by: https://github.com/lezcano
2022-12-25 12:59:45 +00:00
Ikko Ashimine
a188e6ddc0 Fix typo in troubleshooting.rst (#91301)
enviornment -> environment

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91301
Approved by: https://github.com/msaroufim
2022-12-23 21:39:38 +00:00
Takeshi Watanabe
55749b9c41 [dynamo] Write full code of how to enable output_code (#91230)
Ref https://github.com/pytorch/pytorch/pull/91223
Since it was trickier than I expected.
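
The full snippet was along these lines (config field names assumed from the torch._dynamo of this era):
```
import logging
import torch
import torch._dynamo

torch._dynamo.config.log_level = logging.INFO  # assumed field: raise log verbosity
torch._dynamo.config.output_code = True        # assumed field: print generated code

@torch.compile
def f(x):
    return torch.sin(x) + x.max()

f(torch.randn(8))  # the generated code is logged on the first call
```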

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91230
Approved by: https://github.com/soumith
2022-12-22 14:09:06 +00:00
bowen0701
e803d336eb Fix missing indentation in serialization.rst (#91253)
Fixes #ISSUE_NUMBER

In serialization.rst, fix missing indentation in class ControlFlowModule's forward().
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91253
Approved by: https://github.com/kit1980
2022-12-21 20:14:44 +00:00
Eddie Yan
8b617f813d [cuBLAS] Add an option to disable reduced precision reductions for BF16 GEMM (#89172)
Essentially the same change as #67946, except that the default is to disallow reduced precision reductions in `BFloat16` GEMMs (for now). If performance is severely regressed, we can change the default, but this option appears to be necessary to pass some `addmm` `BFloat16` tests on H100.
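
A minimal sketch of the new toggle (mirroring the FP16 switch from #67946):
```
import torch

# Per this PR, the BF16 default is False, i.e. reduced-precision
# reductions are disallowed:
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False

if torch.cuda.is_available():
    a = torch.randn(64, 64, device="cuda", dtype=torch.bfloat16)
    b = torch.randn(64, 64, device="cuda", dtype=torch.bfloat16)
    c = a @ b  # GEMM accumulates without reduced-precision reductions
```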

CC @ptrblck @ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89172
Approved by: https://github.com/ngimel
2022-12-21 18:58:28 +00:00
Takeshi Watanabe
0476201482 Update debug option for torch._dynamo (#91223)
The debug option shown in https://www.youtube.com/watch?v=egZB5Uxki0I appears outdated.

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91223
Approved by: https://github.com/ngimel
2022-12-21 05:06:42 +00:00
richardachen
f460893cec Update optim.rst (#91195)
Fixes #91080

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91195
Approved by: https://github.com/kit1980
2022-12-20 23:22:25 +00:00
Richard Zou
41846e205e [torch.func] Setup torch.func, populate it with all transforms (#91016)
This PR sets up torch.func and populates it with the following APIs (a short usage sketch follows the list):
- grad
- grad_and_value
- vjp
- jvp
- jacrev
- jacfwd
- hessian
- functionalize
- vmap
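
A short usage sketch of two of the populated APIs:
```
import torch
from torch.func import grad, vmap

def loss(x):
    return torch.sin(x).sum()

x = torch.randn(5)
print(grad(loss)(x))  # cos(x), elementwise

xs, ys = torch.randn(4, 3), torch.randn(4, 3)
print(vmap(torch.dot)(xs, ys))  # four batched dot products
```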

It also renames all instances of `functorch` in the APIs for those docs
to `torch.func`.

We rewrite the `__module__` fields on some of the above APIs so that the
APIs fit PyTorch's public api definition.
- For an API to be public, it must have a `__module__` that points to a
  public PyTorch submodule. However, `torch._functorch.eager_transforms`
  is not public due to the leading underscore.
- The solution is to rewrite `__module__` to point to where the API is
  exposed (torch.func). This is what both Numpy and JAX do for their
  APIs.
- h/t pmeier in
  https://github.com/pytorch/pytorch/issues/90284#issuecomment-1348595246
  for idea and code
- The helper function, `exposed_in`, is confined to
  torch._functorch/utils for now because we're not completely sure if
  this should be the long-term solution.

Implication for functorch.* APIs:
- functorch.grad is the same object as torch.func.grad
- this means that the functorch.grad docstring is actually the
  torch.func.grad docstring and will refer to torch.func instead of
  functorch.
- This isn't really a problem since the plan on record is to deprecate
  functorch in favor of torch.func. We can fix these if we really want,
  but I'm not sure if a solution is worth maintaining.

Test Plan:
- view docs preview

Future:
- vmap should actually just be torch.vmap. This requires an extra step
  where I need to test internal callsites, so, I'm separating it into a
  different PR.
- make_fx should be in torch.func to be consistent with `import
  functorch`. This one is a bit more of a headache to deal with w.r.t.
  public api, so going to deal with it separately.
- beef up func.rst with everything else currently on the functorch
  documentation website. func.rst is currently just an empty shell.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91016
Approved by: https://github.com/samdow
2022-12-20 00:00:52 +00:00
Bin Bao
548960f68e Replace TORCHINDUCTOR_TRACE with TORCH_COMPILE_DEBUG in documentation (#91011)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91011
Approved by: https://github.com/mlazos, https://github.com/jansel, https://github.com/msaroufim
2022-12-19 14:45:27 +00:00
Alvaro Gaona
ddf5b68dcb Nuttall window (#90103)
Relates #85366
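
A minimal usage sketch, assuming the window lands under torch.signal.windows alongside the others tracked in #85366:
```
import torch

w = torch.signal.windows.nuttall(64)
print(w.shape)  # torch.Size([64])
```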
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90103
Approved by: https://github.com/lezcano
2022-12-16 09:05:53 +00:00
HDCharles
1ca9d43d4e [ao] quantize.py fixing public v private (#87521)
Summary: made _register_activation_post_process_hook, _add_observer,
_get_unique_devices_, _get_observer_dict private

Test Plan: python test/test_public_bindings.py

Differential Revision: [D40709277](https://our.internmc.facebook.com/intern/diff/D40709277)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87521
Approved by: https://github.com/jerryzh168
2022-12-14 22:50:39 +00:00
Sherlock Huang
b4b8a56589 Doc for Canonical Aten and Prims IR (#90644)
as title.

Sample output: https://docs-preview.pytorch.org/90644/ir.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90644
Approved by: https://github.com/ezyang
2022-12-13 21:30:47 +00:00
Arek Sredzki
44dac51c36 Improve Autograd Documentation Clarity (#89401)
This makes minor adjustments to the autograd docs, improving clarity and resolving grammatical errors

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89401
Approved by: https://github.com/kit1980
2022-12-06 06:45:04 +00:00
xiny
57bb4cd046 [Doc][Distributed] Add missing functions to distributed.rst (#89905)
Add missing documentation for `torch.distributed.all_to_all_single` and other functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89905
Approved by: https://github.com/kit1980
2022-12-04 07:22:54 +00:00
Christian Puhrsch
a306f85ea7 Update Persons of Interest (#90069)
Creates sections for contributors to MaskedTensor and NestedTensor and updates torchaudio.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90069
Approved by: https://github.com/drisspg, https://github.com/mikaylagawarecki, https://github.com/nateanl
2022-12-02 23:06:57 +00:00
PyTorch MergeBot
cba96366a2 Revert "remove torch.equal usages (#89527)"
This reverts commit 4095ef8b80.

Reverted https://github.com/pytorch/pytorch/pull/89527 on behalf of https://github.com/clee2000 due to broke periodic multigpu tests 4095ef8b80 https://github.com/pytorch/pytorch/actions/runs/3592806602/jobs/6049368502
2022-12-02 21:36:13 +00:00
XiaobingSuper
8b2f9887bf update quantization doc: add x86 backend as default backend of server inference (#86794)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86794
Approved by: https://github.com/jgong5, https://github.com/kit1980
2022-12-02 02:10:25 +00:00
Nikita Shulga
768bd3fb4a Add torch.compile implementation (#89607)
`torch.compile` can be used either as a decorator or to optimize a model directly, for example:
```
@torch.compile
def foo(x):
  return torch.sin(x) + x.max()
```
or
```
mod = torch.nn.ReLU()
optimized_mod = torch.compile(mod, mode="max-autotune")
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89607
Approved by: https://github.com/soumith
2022-12-01 20:17:52 +00:00
Svetlana Karslioglu
015b05af18 Editorial pass on Dynamo docs (#89921)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89921
Approved by: https://github.com/msaroufim
2022-12-01 18:53:16 +00:00
Philip Meier
4095ef8b80 remove torch.equal usages (#89527)
Preparation for the next PR in this stack: #89559.

I replaced

- `self.assertTrue(torch.equal(...))` with `self.assertEqual(..., rtol=0, atol=0, exact_device=True)`,
- the same for `self.assertFalse(...)` with `self.assertNotEqual(...)`, and
- `assert torch.equal(...)` with `torch.testing.assert_close(..., rtol=0, atol=0)` (note that we don't need to set `check_device=True` here since that is the default).

There were a few instances where the result of `torch.equal` was used directly. In those cases I replaced it with `(... == ...).all().item()`, sometimes also dropping the `.item()` depending on the context.
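
A small sketch of the third replacement, which keeps the comparison bitwise-exact:
```
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([1.0, 2.0])

# Old pattern:
assert torch.equal(a, b)

# New pattern -- exact comparison via assert_close:
torch.testing.assert_close(a, b, rtol=0, atol=0)
```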

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89527
Approved by: https://github.com/mruberry
2022-12-01 11:22:52 +00:00
Philip Meier
d72cd4c4e5 document torch.testing.assert_allclose (#89526)
After our failed attempt to remove `assert_allclose` in #87974, we decided to add it to the documentation after all. Although we dropped the expected removal date, the function continues to be deprecated in favor of `assert_close`.
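
A minimal sketch of the migration the deprecation points at:
```
import torch

actual = torch.tensor([1.0, 2.0])
expected = torch.tensor([1.0, 2.0])

torch.testing.assert_allclose(actual, expected)  # deprecated, but now documented
torch.testing.assert_close(actual, expected)     # preferred replacement
```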

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89526
Approved by: https://github.com/mruberry
2022-12-01 11:22:50 +00:00
Wanchao Liang
4451eb24e6 Move tensor_parallel out to distributed.tensor folder (#89878)
This PR moves tensor parallel from torch.distributed._tensor.parallel
to torch.distributed.tensor.parallel, to prepare for the beta release.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89878
Approved by: https://github.com/fduwjj
2022-11-30 22:13:10 +00:00
Will Constable
447283752c Update DDP docs for Dynamo/DDPOptimizer (#89096)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89096
Approved by: https://github.com/msaroufim
2022-11-30 05:50:12 +00:00
andrewor14
fb47a66989 [Quant][docs] Use get_default_qconfig_mapping (#87299)
Summary: The recommended way to use QConfigMapping is through
`get_default_qconfig_mapping`. However, the docs still reference
usages of `QConfigMapping().set_global(...)`. This doesn't
actually work well in practice when, for example, the model has
fixed-qparams ops. This commit updates these usages.
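
A minimal FX-mode sketch of the recommended flow (the toy model includes Sigmoid, a fixed-qparams op):
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Sigmoid()).eval()
example_inputs = (torch.randn(1, 4),)

# Recommended: start from the default mapping, which already carries the
# special qconfigs for fixed-qparams ops like Sigmoid.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
quantized = convert_fx(prepared)
```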

Reviewers: vkuzo

Subscribers: vkuzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87299
Approved by: https://github.com/jerryzh168
2022-11-29 18:08:16 +00:00
Mark Saroufim
9048cf16fe Move Dynamo docs back to core (#89769)
With contributions from @svekars and @malfet

Waiting for doc build job to complete
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89769
Approved by: https://github.com/soumith, https://github.com/malfet
2022-11-29 04:38:53 +00:00
PyTorch MergeBot
47cca5e444 Revert "Move Dynamo docs back to core (#89769)"
This reverts commit be2816db18.

Reverted https://github.com/pytorch/pytorch/pull/89769 on behalf of https://github.com/clee2000 due to broke lint
2022-11-28 21:04:33 +00:00
eqy
8321066031 Tweak formatting of note on macros (#89598)
For readability when viewing the rendered file, e.g. from the browser.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89598
Approved by: https://github.com/kit1980
2022-11-28 20:42:30 +00:00
Mark Saroufim
be2816db18 Move Dynamo docs back to core (#89769)
With contributions from @svekars and @malfet

Waiting for doc build job to complete
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89769
Approved by: https://github.com/soumith
2022-11-28 20:32:05 +00:00