Commit Graph

51 Commits

Author SHA1 Message Date
Tugsbayasgalan Manlaibaatar
bf7307adf8 Support inference_mode decorator (#109274)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109274
Approved by: https://github.com/williamwen42
2023-09-27 22:21:42 +00:00
soulitzer
aa04b0536b Fix inference_mode decorator pass mode as kwarg (#107349)
Fixes https://fb.workplace.com/groups/1405155842844877/permalink/7330520550308347/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107349
Approved by: https://github.com/albanD
ghstack dependencies: #107296
2023-08-17 17:12:31 +00:00
andreasfloros
c9c90765c1 grad_mode decorators without paren (#107086)
This PR implements the feature described in #107036 for `no_grad`, `enable_grad` and `inference_mode`.

Users can still use these as before, but they can now also use them without parentheses.

For example:

```python
import torch

a = torch.ones(1, requires_grad=True)

def do_something():
    print(2 * a)

with torch.no_grad():
    do_something()  # tensor([2.])

torch.no_grad()(do_something)()  # tensor([2.])

torch.no_grad(do_something)()  # tensor([2.])

do_something()  # tensor([2.], grad_fn=<MulBackward0>)
```

For `inference_mode`, decorating without parentheses is equivalent to decorating with the default `mode=True`, similar to how dataclasses behave (https://docs.python.org/3/library/dataclasses.html#module-contents).
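The mechanics of supporting both forms can be sketched in plain Python (a toy stand-in, not the actual PyTorch implementation; all names here are hypothetical):

```python
import functools

class toy_mode:
    # Toy stand-in for a grad-mode context manager (hypothetical class, not
    # the PyTorch code): usable as `with toy_mode():`, `@toy_mode()`, or
    # `@toy_mode` without parentheses.
    enabled = False  # stands in for the global mode flag

    def __new__(cls, orig_func=None):
        if callable(orig_func):
            # @toy_mode without parentheses: decorate with default settings
            return cls()(orig_func)
        return super().__new__(cls)

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with type(self)():  # fresh instance per call
                return func(*args, **kwargs)
        return wrapper

    def __enter__(self):
        self.prev = type(self).enabled
        type(self).enabled = True

    def __exit__(self, exc_type, exc_value, traceback):
        type(self).enabled = self.prev

@toy_mode          # no parentheses
def f():
    return toy_mode.enabled

@toy_mode()        # with parentheses
def g():
    return toy_mode.enabled

print(f(), g(), toy_mode.enabled)  # True True False
```

The `__new__` check for a callable first argument is what lets the same class name serve both the bare-decorator and call-then-decorate spellings.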

Closes #107036

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107086
Approved by: https://github.com/albanD
2023-08-15 05:25:33 +00:00
poseljacob
a25eee1d77 _force_original_view_tracking to work as both context manager and function (#106706)
Fix _force_original_view_tracking to work as a function as well as a context manager, as stated by documentation.

Applied similar fixes to PR: https://github.com/pytorch/pytorch/pull/105291
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106706
Approved by: https://github.com/albanD
2023-08-07 23:29:22 +00:00
Edward Z. Yang
3bf922a6ce Apply UFMT to low traffic torch modules (#106249)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106249
Approved by: https://github.com/Skylion007
2023-07-29 23:37:30 +00:00
Furkan Akkurt
3959695fbd Fix typo ; Update grad_mode.py (#106045)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106045
Approved by: https://github.com/albanD, https://github.com/soulitzer
2023-07-27 00:24:50 +00:00
poseljacob
1aba399138 allow set_multithreading_enabled to act as function and context manager (#105291)
Fixes #104985

Implemented `set_multithreading_enabled` C++ function to directly alter state rather than using `MultithreadingEnabled` class, which was automatically resetting the state when the object was destroyed. This behavior more closely aligns with set_grad_enabled which does work as expected. This allows us to change python class `set_multithreading_enabled` to act as both a function and context manager.

I also added a getter: `torch._C.is_multithreading_enabled`
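The function-plus-context-manager pattern described above can be sketched in plain Python (hypothetical names, not the PyTorch source): applying the state in `__init__` makes a bare call take effect immediately, while `__exit__` restores the previous state only when the instance is used in a `with` statement.

```python
class set_flag:
    # Hypothetical illustration of the set_grad_enabled-style pattern:
    # one class acts as both a plain function call and a context manager.
    _state = True  # stands in for the global flag

    def __init__(self, mode):
        self.prev = type(self)._state
        type(self)._state = mode  # takes effect immediately, on the call itself

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Restored only when used as a context manager.
        type(self)._state = self.prev

set_flag(False)             # plain call: the new state persists
print(set_flag._state)      # False
with set_flag(True):        # context manager: restored on exit
    print(set_flag._state)  # True
print(set_flag._state)      # False
```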

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105291
Approved by: https://github.com/albanD
2023-07-18 16:55:40 +00:00
Richard Zou
74f10b9ea5 Switch most Python RAII guard usages to context manager (#102642)
There are some I can't easily switch due to reasons like:
- Dynamo modelling the guard
- BC concerns (for torch.autograd.set_multithreading_enabled)

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102642
Approved by: https://github.com/albanD
2023-06-01 16:28:37 +00:00
Jane Xu
6dc81f7bdd Update docs that Parameters are immune to no_grad mode (#95232)
Fixes https://github.com/pytorch/pytorch/issues/83998

![image](https://user-images.githubusercontent.com/31798555/220971800-4af57d92-9f15-4e13-bfe4-73e2ff1cd943.png)
![image](https://user-images.githubusercontent.com/31798555/221019508-d7330a16-7f01-4d37-a1af-a4905e9596c4.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95232
Approved by: https://github.com/soulitzer
2023-02-23 23:33:19 +00:00
PyTorch MergeBot
cb6e38d89d Revert "Update docs that Parameters are immune to no_grad mode (#95232)"
This reverts commit 5783cee2a3.

Reverted https://github.com/pytorch/pytorch/pull/95232 on behalf of https://github.com/ZainRizvi due to This caused the test_doc_examples test to fail on trunk
2023-02-23 17:43:45 +00:00
Jane Xu
5783cee2a3 Update docs that Parameters are immune to no_grad mode (#95232)
Fixes https://github.com/pytorch/pytorch/issues/83998

![image](https://user-images.githubusercontent.com/31798555/220971800-4af57d92-9f15-4e13-bfe4-73e2ff1cd943.png)
![image](https://user-images.githubusercontent.com/31798555/220971892-35554d17-fc44-4211-9017-7a5555ae3bb1.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95232
Approved by: https://github.com/soulitzer
2023-02-23 16:41:54 +00:00
Brian Hirsh
2b36d35b9c add torch.autograd._unsafe_set_version_counter API (#92924)
better description coming soon (but this is meant to fix https://github.com/pytorch/pytorch/issues/91093)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92924
Approved by: https://github.com/ezyang, https://github.com/alanwaketan, https://github.com/albanD
2023-02-11 21:07:08 +00:00
Brian Hirsh
83275d8cdf add torch.autograd._set_view_replay_enabled, use in aot autograd (#92588)
tl;dr: this should fix some minor perf regressions that were caused by adding more as_strided() calls in aot autograd.

This PR adds a new context manager, `torch.autograd._set_view_replay_enabled()`.

Context: AOT Autograd has special handling for "outputs that alias graph intermediates". E.g. given this function:

```python
def f(x):
    y = torch.mul(x, 2)
    out = y.view(-1)
    return out
```

AOT Autograd will do the following:

```python
def fn_to_compile(x):
    y = torch.mul(x, 2)
    out = y.view(-1)
    # return the graph intermediate
    return y, out

compiled_fn = compile(fn_to_compile)

def wrapper(x):
    y, out = compiled_fn(x)
    # regenerate the alias of the graph intermediate
    return out._view_func(y)
```

What's annoying is that `out._view_func()` will result in a `.as_strided` call, because `out` is an ordinary runtime tensor. This (likely?) caused a perf regression, because when running the backward, our `as_strided_backward()` is slower than our `view_backward()`.

In this PR, I added some TLS for instructing autograd to do view replay instead of as_strided, even when given a normal tensor. I'm definitely interested in thoughts from autograd folks (cc @albanD @soulitzer). A few points that I want to bring up:

(1) One reason that this API seems generally useful to me is because of the case where you `torch.compile()` a function, and you pass in two inputs that alias each other, and mutate one of the inputs. Autograd is forced to add a bunch of as_strided() calls into the graph when this happens, but this would give users an escape hatch for better compiled perf in this situation

(2) To be fair, AOT Autograd probably won't need this TLS in the long term. There's a better (more complicated) solution, where AOT Autograd manually precomputes the view chain off of graph intermediates during tracing, and re-applies them at runtime. This is kind of complicated though and feels lower priority to implement immediately.

(3) Given all of that I made the API private, but lmk what you all think.

This is a followup of https://github.com/pytorch/pytorch/pull/92255.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-08 01:48:32 +00:00
Adam J. Stewart
ec25db7741 torch.inference_mode: add type hints (#94223)
Copied the type hints from the other context managers.

Not sure how to add type hints for `clone` since it returns the same class. The `Self` type isn't introduced until Python 3.11 and mypy just recently added support for it. Could also use `"inference_mode"` with quotes to avoid using it before it's declared, or `from __future__ import annotations` to allow its use without quotes. Or we could just skip it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94223
Approved by: https://github.com/albanD
2023-02-07 23:16:55 +00:00
Edward Z. Yang
333540a458 Reland "Add torch.utils.device_mode" (#91796)
Original PR https://github.com/pytorch/pytorch/pull/91525

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91796
Approved by: https://github.com/albanD
2023-01-09 20:57:12 +00:00
PyTorch MergeBot
9b415240d4 Revert "Reland "Add torch.utils.device_mode" (#91796)"
This reverts commit 81b5eff3c3.

Reverted https://github.com/pytorch/pytorch/pull/91796 on behalf of https://github.com/huydhn due to This breaks trunk with the following failed test https://hud.pytorch.org/failure/test_jit_save%2CTestTracer
2023-01-09 04:45:47 +00:00
Edward Z. Yang
81b5eff3c3 Reland "Add torch.utils.device_mode" (#91796)
Original PR https://github.com/pytorch/pytorch/pull/91525

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91796
Approved by: https://github.com/albanD
2023-01-08 03:44:56 +00:00
PyTorch MergeBot
f571ae4fdb Revert "Make torch.device usable as a context manager (#91525)"
This reverts commit 619d52a5d2.

Reverted https://github.com/pytorch/pytorch/pull/91525 on behalf of https://github.com/mehtanirav due to Internal breakages
2023-01-05 21:34:50 +00:00
Edward Z. Yang
619d52a5d2 Make torch.device usable as a context manager (#91525)
Fixes https://github.com/pytorch/pytorch/issues/82296
Fixes https://github.com/pytorch/pytorch/issues/27878
Fixes https://github.com/pytorch/pytorch/issues/260

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91525
Approved by: https://github.com/albanD
2023-01-04 01:32:00 +00:00
joncrall
ad782ff7df Enable xdoctest runner in CI for real this time (#83816)
Builds on #83317 and enables running the doctests. Just need to figure out what is causing the failures.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83816
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-12-29 05:32:42 +00:00
albanD
347a7d97a5 Deprecate decorating classes with torch.no_grad and similar (#89522)
Fixes https://github.com/pytorch/pytorch/issues/89450

I would have completely removed it but I don't think this is particularly urgent and there are some use of it in the wild: https://github.com/search?q=%2Ftorch%5C.no_grad%5C%28%5C%29%5Cnclass%2F&type=code
So we might as well take one release to do it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89522
Approved by: https://github.com/lezcano, https://github.com/soulitzer, https://github.com/janeyx99
2022-11-23 16:51:42 +00:00
Elias Ellison
d04889323e Add Context Manager for Disabling Multithreading in Backwards, use in aot autograd (#86245)
We were running into a few issues with running multithreaded backwards in aot_autograd: such as https://github.com/pytorch/pytorch/issues/86136, and `FakeTensorMode` getting into a weird state as a result of not executing functions completely sequentially. The multithreaded backwards is lost in translation when we trace out the backwards anyway, and adds a lot of additional complexity.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86245
Approved by: https://github.com/albanD, https://github.com/yf225
2022-10-06 03:27:42 +00:00
joncrall
4618371da5 Integrate xdoctest - Rebased (#82797)
This is a new version of #15648 based on the latest master branch.

Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.

In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will be to disable those tests. (unfortunately I don't have a tool that will insert the `#xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
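For context, the skip directive mentioned above looks like this inside a docstring (a generic example, not a snippet from the PR; `some_failing_call` is hypothetical):

```python
def add(a, b):
    """Add two numbers.

    Example:
        >>> add(1, 2)
        3
        >>> some_failing_call()  # xdoctest: +SKIP
    """
    return a + b
```

The directive comment tells the xdoctest runner to skip that one statement while still collecting and running the rest of the doctest.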

Fixes https://github.com/pytorch/pytorch/issues/71105

@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
2022-08-12 02:08:01 +00:00
Adam J. Stewart
dfde877c0b Add type hints for a few random functions/classes
Adds type hints for a few functions/classes that we use in [TorchGeo](https://github.com/microsoft/torchgeo).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74171
Approved by: https://github.com/jbschlosser, https://github.com/anjali411
2022-05-04 13:53:00 +00:00
PyTorch MergeBot
80fe96c860 Revert "Add type hints for a few random functions/classes"
This reverts commit cdb40eb528.

Reverted https://github.com/pytorch/pytorch/pull/74171 on behalf of https://github.com/zengk95
2022-04-21 21:07:15 +00:00
Adam J. Stewart
cdb40eb528 Add type hints for a few random functions/classes
Adds type hints for a few functions/classes that we use in [TorchGeo](https://github.com/microsoft/torchgeo).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74171
Approved by: https://github.com/jbschlosser
2022-04-21 20:09:40 +00:00
MattiaSarti
6656c71049 docs: code examples running successfully
## Description

This pull request solves both #72921 and an identical, unreported issue in another part of the documentation.

## Documentation Changes Preview

- ### Typo in https://pytorch.org/docs/stable/generated/torch.no_grad.html#codecell0:

  #### Before:
  ![1-original](https://user-images.githubusercontent.com/59971270/157734766-e15bf0cb-c6d4-4958-82cb-53a8de9fa186.png)

  #### After:
  ![1-corrected](https://user-images.githubusercontent.com/59971270/157734774-570108b8-0b93-4139-817d-02315a6262e3.png)

- ### Typo in https://pytorch.org/docs/stable/generated/torch.autograd.set_grad_enabled.html?highlight=set_grad_enabled#codecell0:

  #### Before:
  ![2-original](https://user-images.githubusercontent.com/59971270/157734859-9185f57b-0767-465e-b683-809be836a947.png)

  #### After:
  ![2-corrected](https://user-images.githubusercontent.com/59971270/157734863-5fa4caee-818a-463d-872b-4231007940f8.png)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74044
Approved by: https://github.com/albanD
2022-03-10 22:13:32 +00:00
soulitzer
99d490e911 Document forward AD interaction with grad mode (#72216)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72216

Fix https://github.com/pytorch/pytorch/issues/72202

Test Plan: Imported from OSS

Reviewed By: dagitses, albanD

Differential Revision: D33991687

Pulled By: soulitzer

fbshipit-source-id: 6b074c48cb2412efdec1af1ccb050cdb2cfea4e7
(cherry picked from commit 9b87bd83b2)
2022-02-04 17:43:09 +00:00
milesial
0ccb1dcdbb Fix inference_mode decorator (#68617)
Summary:
This fixes the case when `torch.inference_mode` is called with `mode=False` (disabled). When used as a decorator, it ignored the argument and enabled inference mode anyway.

`_DecoratorContextManager` is changed so that a new instance is a copy instead of a new instance with default parameters.

I also added more tests to cover this case.

Current behaviour:

```python
>>> import torch
>>> x = torch.ones(1, 2, 3, requires_grad=True)
>>> @torch.inference_mode(mode=False)
... def func(x):
...     return x * x
...
>>> out = func(x)
>>> out.requires_grad
False
```

New behaviour (fixed):

```python
>>> import torch
>>> x = torch.ones(1, 2, 3, requires_grad=True)
>>> @torch.inference_mode(mode=False)
... def func(x):
...     return x * x
...
>>> out = func(x)
>>> out.requires_grad
True
```
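The underlying fix can be sketched generically (a hypothetical class, not the actual `_DecoratorContextManager` source): when used as a decorator, the manager must clone itself with the *same* constructor arguments for each call, instead of re-instantiating with defaults.

```python
import functools

class toy_ctx:
    # Hypothetical sketch of the fix, not the PyTorch code.
    current = None  # stands in for the global inference-mode flag

    def __init__(self, mode=True):
        self.mode = mode

    def clone(self):
        return type(self)(self.mode)  # the buggy version amounted to: type(self)()

    def __call__(self, func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with self.clone():  # fresh instance per call, same mode
                return func(*args, **kwargs)
        return wrapper

    def __enter__(self):
        self.prev = type(self).current
        type(self).current = self.mode

    def __exit__(self, exc_type, exc_value, traceback):
        type(self).current = self.prev

@toy_ctx(mode=False)
def func():
    return toy_ctx.current

print(func())  # False: the mode argument is honored
```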

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68617

Reviewed By: mrshenli

Differential Revision: D32958434

Pulled By: albanD

fbshipit-source-id: 133c69970ef8bffb9fc9ab5142dedcffc4c32945
2021-12-09 10:45:09 -08:00
Jeffrey Wan
a7a5992d7d Add no-grad inference mode note (#58513)
Summary:
Adds a note explaining the difference between several often conflated mechanisms in the autograd note
Also adds a link to this note from the docs in `grad_mode` and `nn.module`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58513

Reviewed By: gchanan

Differential Revision: D28651129

Pulled By: soulitzer

fbshipit-source-id: af9eb1749b641fc1b632815634eea36bf7979156
2021-05-25 13:06:54 -07:00
Jeffrey Wan
e71b526e7e Add inference mode python bindings and tests (#58045)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56608

 - Adds binding to the `c10::InferenceMode` RAII class in `torch._C._autograd.InferenceMode` through pybind. Also binds the `torch.is_inference_mode` function.
 - Adds context manager `torch.inference_mode` to manage an instance of `c10::InferenceMode` (global).  Implemented in `torch.autograd.grad_mode.py` to reuse the `_DecoratorContextManager` class.
 - Adds some tests based on those linked in the issue + several more for just the context manager

Issues/todos (not necessarily for this PR):
- Improve short inference mode description
- Small example
- Improved testing since there is no direct way of checking TLS/dispatch keys
-

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58045

Reviewed By: agolynski

Differential Revision: D28390595

Pulled By: soulitzer

fbshipit-source-id: ae98fa036c6a2cf7f56e0fd4c352ff804904752c
2021-05-13 08:55:35 -07:00
Jeff Yang
74e01c1dd9 docs: change to FloatTensor for requires_grad=True (#54658)
Summary:
fixes https://github.com/pytorch/pytorch/issues/54506

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54658

Reviewed By: ailzhang

Differential Revision: D27328321

Pulled By: zou3519

fbshipit-source-id: d29fa266a1cb2b6d8566055dfb6ce001edde9d96
2021-03-29 10:25:56 -07:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants, however it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.
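For reference, the Google style the PR converges on looks like this (a generic example, not from the PyTorch codebase):

```python
def scale(x, factor=2.0):
    """Scale a value.

    Args:
        x (float): The value to scale.
        factor (float): Multiplier applied to ``x``. Default: 2.0.

    Returns:
        float: The scaled value.
    """
    return x * factor
```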

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
ivannz
efc090652e Enhanced generators with grad-mode decorators (#49017)
Summary:
This PR addresses the feature request outlined in https://github.com/pytorch/pytorch/issues/48713 for two-way communication with enhanced generators from [pep-342](https://www.python.org/dev/peps/pep-0342/).

Briefly, the logic of the patch resembles `yield from` [pep-380](https://www.python.org/dev/peps/pep-0380/), which cannot be used, since the generator **must be interacted with from within the grad-mode context**, while yields from the decorator **must take place outside of the context**. Hence any interaction with the wrapped generator, be it via [.send](https://docs.python.org/3/reference/expressions.html?highlight=throw#generator.send), [.throw](https://docs.python.org/3/reference/expressions.html?highlight=throw#generator.throw), and even [.close](https://docs.python.org/3/reference/expressions.html?highlight=throw#generator.close) must be wrapped by a `with` clause. The patch is compatible with `for i in gen: pass` and `next(gen)` use cases and allows two-way communication with the generator via `.send <-> yield` points.

### Logic
At lines [L37-L38](2d40296c0c/torch/autograd/grad_mode.py (L37-L38)) we (the decorator) **start the wrapped generator** (coroutine) by issuing `None` into it (equivalently, we can use `next(gen)` here). Then we **dispatch responses of the generator** to our ultimate caller and **relay the latter's requests** into the generator in the loop on lines [L39-L52](2d40296c0c/torch/autograd/grad_mode.py (L39-L52)).

We yield the most recent response on [L40-L41](2d40296c0c/torch/autograd/grad_mode.py (L40-L41)), at which point we become **paused**, waiting for the next ultimate caller's interaction with us. If the caller **sends us a request**, then we become unpaused and move to [L51-L52](2d40296c0c/torch/autograd/grad_mode.py (L51-L52)) and **forward it into the generator**, at which point we pause, waiting for its response. The response might be a value, an exception or a `StopIteration`. In the case of an exception from the generator, we let it **bubble up** from the immediately surrounding [except clause](https://docs.python.org/3/reference/compound_stmts.html#the-try-statement) to the ultimate caller through the [outer try-except](2dc287bba8/torch/autograd/grad_mode.py (L36-L54)). In the case of a `StopIteration`, we **take its payload and propagate it** to the caller via [return](2d40296c0c/torch/autograd/grad_mode.py (L54)). In the case of a value, the flow and the loop continue.

The caller **throwing an exception at us** is handled much like a proper request, except for the exception playing the role of the request. In this case we **forward it into the generator** on lines [L47-L49](2d40296c0c/torch/autograd/grad_mode.py (L47-L49)) and await its response. We explicitly **advance** the traceback one frame up, in order to indicate the **source of the exception within the generator**.

Finally the `GeneratorExit` is handled on lines [L42-L45](2d40296c0c/torch/autograd/grad_mode.py (L42-L45)) and closes the generator.

Updates: clarified exception propagation
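The relay described above can be sketched as a standalone helper in plain Python (hypothetical names, not the actual grad_mode.py code): every resume of the wrapped generator runs *inside* the context, while our own yields happen *outside* of it.

```python
import contextlib
import functools

def relay_in_context(ctx_factory, gen_func):
    # Hypothetical PEP 342-style relay sketch, not the PyTorch source.
    @functools.wraps(gen_func)
    def wrapper(*args, **kwargs):
        gen = gen_func(*args, **kwargs)
        try:
            with ctx_factory():
                response = gen.send(None)  # start the wrapped coroutine
        except StopIteration as e:
            return e.value
        while True:
            try:
                # Paused here, outside the context, awaiting the caller.
                request = yield response
            except GeneratorExit:
                with ctx_factory():
                    gen.close()
                raise
            except BaseException as exc:
                # The caller threw at us: forward it into the generator.
                try:
                    with ctx_factory():
                        response = gen.throw(exc)
                except StopIteration as e:
                    return e.value
            else:
                # An ordinary .send / next: relay it into the generator.
                try:
                    with ctx_factory():
                        response = gen.send(request)
                except StopIteration as e:
                    return e.value
    return wrapper

# Usage: the generator body only ever observes the mode as enabled.
state = {"on": False}

@contextlib.contextmanager
def mode():
    state["on"] = True
    try:
        yield
    finally:
        state["on"] = False

def gen():
    yield state["on"]
    yield state["on"]

print(list(relay_in_context(mode, gen)()))  # [True, True]
print(state["on"])                          # False
```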

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49017

Reviewed By: izdeby

Differential Revision: D25567796

Pulled By: albanD

fbshipit-source-id: 801577cccfcb2b5e13a08e77faf407881343b7b0
2020-12-16 07:15:33 -08:00
Himangshu
9fc7a942f0 Change from self to self.__class__() in _DecoratorContextManager to ensure a new object is created every time a function is called recursively (#44633)
Summary:
Change from `self` to `self.__class__()` in `_DecoratorContextManager` to ensure a new object is created every time a function is called recursively

Fixes https://github.com/pytorch/pytorch/issues/44531

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44633

Reviewed By: agolynski

Differential Revision: D23783601

Pulled By: albanD

fbshipit-source-id: a818664dee7bdb061a40ede27ef99e9546fc80bb
2020-09-22 09:13:39 -07:00
Ralf Gommers
4c19a1e350 Move torch/autograd/grad_mode.pyi stubs inline (#43415)
Summary:
- Add `torch._C` bindings from `torch/csrc/autograd/init.cpp`
- Renamed `torch._C.set_grad_enabled` to `torch._C._set_grad_enabled`
  so it doesn't conflict with torch.set_grad_enabled anymore

This is a continuation of gh-38201. All I did was resolve merge conflicts and finish the annotation of `_DecoratorContextManager.__call__` that ezyang started in the first commit.

~Reverts commit b5cd3a80bb, which was only motivated by not having `typing_extensions` available.~ (JIT can't be made to understand `Literal[False]`, so keep as is).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43415

Reviewed By: ngimel

Differential Revision: D23301168

Pulled By: malfet

fbshipit-source-id: cb5290f2e556b4036592655b9fe54564cbb036f6
2020-08-31 16:14:41 -07:00
Meghan Lele
87d7c362b1 [JIT] Add JIT support for torch.no_grad (#41371)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41371

**Summary**
This commit enables the use of `torch.no_grad()` in a with item of a
with statement within JIT. Note that the use of this context manager as
a decorator is not supported.

**Test Plan**
This commit adds a test case to the existing with statements tests for
`torch.no_grad()`.

**Fixes**
This commit fixes #40259.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D22649519

Pulled By: SplitInfinity

fbshipit-source-id: 7fa675d04835377666dfd0ca4e6bc393dc541ab9
2020-08-27 15:32:57 -07:00
vfdev-5
2f47e953f7 Fixes #40158 (#40617)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/40158

Description
- docs update: removed incorrect statements
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40617

Reviewed By: ezyang

Differential Revision: D22308802

Pulled By: yns88

fbshipit-source-id: e33084af320f249c0c9ba04bdbe2191d1b954d17
2020-07-01 18:05:44 -07:00
Keigo Kawamura
b5cd3a80bb Return None instead of False, and change bool to None in type stub (#39324)
Summary:
# What's this

Just a small bug fix related to typing stubs.
I haven't opened an issue. I will open one if necessary, but this PR is very small (only a 6-line diff).

## What I encountered

pytorch 1.5.0 with mypy 0.770 behaves oddly. The code is as follows:
```python
import torch

def f() -> int:  # Mypy says: `error: Missing return statement`
    with torch.no_grad():
        return 1
```

No mypy error is expected, but actually mypy 0.770 warns about `Missing return statement`.

## This is because

`mypy >= 0.730` with `--warn-unreachable` says it's unreachable because `torch.no_grad()` may "swallow" the error in the return statement.
http://mypy-lang.blogspot.com/2019/09/mypy-730-released.html

Here is a small "swallowing" example:

```python
from typing import Generator
from contextlib import contextmanager

@contextmanager
def swallow_zerodiv() -> Generator[None, None, None]:
    try:
        yield None
    except ZeroDivisionError:
        pass
    finally:
        pass

def div(a: int, b: int) -> float:  # This function looks like `(int, int) -> float` but is actually `(int, int) -> Optional[float]`, because `return a / b` may be swallowed
    with swallow_zerodiv():
        return a / b

if __name__ == '__main__':
    result = div(1, 0)
    print(result, type(result))  # None <class 'NoneType'>
```

To suppress this behavior, one can tell mypy that the context manager never swallows exceptions, by returning `Literal[False]` or `None` from its `__exit__` method.

# What I did

Return `None` instead of `bool` to tell mypy that "I never swallow your exception".
I chose `None` because `Literal[False]` is not available without typing_extensions on `python <= 3.7`.
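The fix can be illustrated with a generic sketch (not the actual stub file): annotating `__exit__` as returning `None` tells mypy that the context manager never suppresses exceptions, so a `return` inside the with-block is no longer flagged as potentially unreachable.

```python
from types import TracebackType
from typing import Optional, Type

class no_swallow:
    # Generic sketch, not the PyTorch stub.
    def __enter__(self) -> "no_swallow":
        return self

    def __exit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc: Optional[BaseException],
        tb: Optional[TracebackType],
    ) -> None:  # None (or Literal[False]): exceptions are never swallowed
        return None

def f() -> int:  # mypy no longer reports "Missing return statement"
    with no_swallow():
        return 1

print(f())  # 1
```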
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39324

Differential Revision: D21833651

Pulled By: albanD

fbshipit-source-id: d5cad2e5e19068bd68dc773e997bf13f7e60f4de
2020-06-02 10:46:44 -07:00
Moto Hira
6631c2a627 [doc] Add grad context manager doc to toplevel torch module. (#33877)
Summary:
fixes https://github.com/pytorch/pytorch/issues/32014
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33877

Differential Revision: D20141801

Pulled By: albanD

fbshipit-source-id: bac713382a71666dd5e2499f710c51a55cc579ba
2020-03-02 06:32:36 -08:00
Peter Bell
5d80f63478 no_grad, enable_grad: support for decorating generator functions (#31792)
Summary:
Closes https://github.com/pytorch/pytorch/issues/31497

This allows `torch.no_grad` and `torch.enable_grad` to be used as decorators for generator functions. In which case it disables/enables grad only inside the body of the generator and restores the context outside of the generator.

https://github.com/pytorch/pytorch/issues/31497 doesn't include a complete reproducer but the included test with `torch.is_grad_enabled` show this is working where it failed before.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31792

Differential Revision: D19274971

Pulled By: albanD

fbshipit-source-id: fde6d3fd95d76c8d324ad02db577213a4b68ccbe
2020-01-06 15:21:20 -08:00
Prasun
0c79753c0d Improve documentation for torch.enable_grad, torch.no_grad and torch.set_grad_enabled (#23310)
Summary:
Modified documentation for `torch.enable_grad`, `torch.no_grad` and `torch.set_grad_enabled`.

Fixes https://github.com/pytorch/pytorch/issues/19189
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23310

Differential Revision: D16489626

Pulled By: soumith

fbshipit-source-id: f0926e4f51ffd97521e67bee3a16ad954458247a
2019-07-25 05:48:33 -07:00
Edward Yang
b858f42e16 Document that no_grad is thread local. (#21755)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21755
ghimport-source-id: dfb53759024d9ba9d104fdb2a8151ab996e55234

Differential Revision: D15811172

Pulled By: ezyang

fbshipit-source-id: c8c7c1c15277d8fe8cc513e20af449257d7ff15c
2019-06-13 13:47:09 -07:00
Thomas Viehmann
d34578026c Various example code fixes (#12707)
Summary:
- Fix broken sparse_coo_examples, update output
- Tensor(...) to tensor(...)
- Fix arguments to math.log to be floats

While the last might be debatable, mypy currently complains when passing an int to math.log. As it is not essential for our examples, let's be clean w.r.t. other people's expectations.

These popped up while checking examples in the context of  #12500 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12707

Differential Revision: D10415256

Pulled By: SsnL

fbshipit-source-id: c907b576b02cb0f89d8f261173dbf4b3175b4b8d
2018-10-16 21:59:40 -07:00
Wei Yang
cda74ac476 fix nested no_grad decorator and with-statement (#11479)
Summary:
- fixes https://github.com/pytorch/pytorch/issues/10858
- allow `no_grad` decorator to apply `with torch.no_grad()` at the correct context
- current behavior:
```
import torch

@torch.no_grad()
def nothing(x):
    return x

testin = torch.Tensor([0])
with torch.no_grad():
    print(torch.is_grad_enabled()) # False
    testout = nothing(testin)
    print(torch.is_grad_enabled()) # False
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11479

Differential Revision: D9758691

Pulled By: weiyangfb

fbshipit-source-id: 87de2219c6c45f65a2c0406ae152c3ad760be8f2
2018-09-11 17:56:40 -07:00
jvmancuso
4352eab367 Call grad_mode.py context managers as decorators (#7737)
* call grad_mode.py context managers as decorators

* flake fixes

* switch to using context manager in wrapper

* fix set_grad_enabled test

* removed dumb github UI whitespace

* revert set_grad_enabled to normal, update tests
2018-05-23 17:39:13 -04:00
Sang-gil Lee
c92b5422f7 Fix typo in set_grad_enabled description (#6931)
After calling set_grad_enabled(False), y.requires_grad returns False, but the example described it as True.
2018-04-25 09:23:15 +02:00
Richard Zou
1449c9f754 Update autograd docs (#5907)
* Update autograd docs

* Deprecate 'grad_variables' in backward().

Advise to replace with 'grad_tensors'.

* Resolve saved_variables/saved_tensors

* Tensor section

* Address comments

* Address comments

* Address comments
2018-03-30 15:33:11 -04:00
Thomas Viehmann
a33aeed1dc Add set_grad_enabled as context manager and function (#5555) 2018-03-09 11:36:56 +01:00
albanD
d400305eb9 fix typo in grad_mode 2017-12-20 17:13:39 +01:00