Commit Graph

48 Commits

Author SHA1 Message Date
Edward Z. Yang
361db32d47 Consolidate SymDispatchMode into ProxyTensorMode (#132674)
Instead of having a separate context variable for SymDispatchMode, we
now simply delegate to the current active proxy tensor mode when we
need to trace a SymInt.  We maintain a separate `__sym_dispatch__` magic
method because its calling convention differs from that of `__torch_dispatch__`.

Consolidating the modes in this way means that we can consistently
disable both of these modes in tandem simply by removing the mode
from the proxy mode infra slot.
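To make the split concrete, here is a minimal sketch (ours, not the actual implementation in torch/fx/experimental/proxy_tensor.py) of the two hooks now living on one mode:

```python
# A minimal sketch, not the real ProxyTorchDispatchMode: it only
# illustrates why the two hooks stay separate. __torch_dispatch__
# receives aten OpOverloads called with Tensors, while __sym_dispatch__
# receives plain Python functions (e.g. operator.add) called with
# SymNode arguments.
class ProxyTensorModeSketch:
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        ...  # trace a tensor op: func is an aten OpOverload

    def __sym_dispatch__(self, func, types, args, kwargs):
        ...  # trace a SymInt op: func is e.g. operator.add
```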

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132674
Approved by: https://github.com/zou3519, https://github.com/bdhirsh
2024-08-08 12:02:54 +00:00
PyTorch MergeBot
a9ff190867 Revert "Consolidate SymDispatchMode into ProxyTensorMode (#132674)"
This reverts commit ffdf48e63b.

Reverted https://github.com/pytorch/pytorch/pull/132674 on behalf of https://github.com/PaliC due to We need to now revert https://github.com/pytorch/pytorch/pull/132216 in OSS and there is a dependency on this pr ([comment](https://github.com/pytorch/pytorch/pull/132674#issuecomment-2274062785))
2024-08-07 18:25:33 +00:00
Edward Z. Yang
ffdf48e63b Consolidate SymDispatchMode into ProxyTensorMode (#132674)
Instead of having a separate context variable for SymDispatchMode, we
now simply delegate to the current active proxy tensor mode when we
need to trace a SymInt.  We maintain a separate `__sym_dispatch__` magic
method because its calling convention differs from that of `__torch_dispatch__`.

Consolidating the modes in this way means that we can consistently
disable both of these modes in tandem simply by removing the mode
from the proxy mode infra slot.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132674
Approved by: https://github.com/zou3519, https://github.com/bdhirsh
2024-08-06 17:03:17 +00:00
Wouter Devriendt
e8645fa2b9 [Doc] fix some typos (found by codespell and typos) (#132544)
Applying the doc fixes from PR https://github.com/pytorch/pytorch/pull/127267, re-submitted with a signed CLA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132544
Approved by: https://github.com/kit1980
2024-08-05 17:21:56 +00:00
Sheng Fu
c1dd3a615f Implement Graph Transform Observer (#127427)
Summary: Implement Graph Transform Observer

Differential Revision: D57887518

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127427
Approved by: https://github.com/angelayi
2024-06-02 06:49:47 +00:00
Boyuan Feng
aa2da0cdd2 [Export] Add runtime assert to non-strict export (#123681)
This PR moves insert_deferred_runtime_asserts from dynamo to torch.fx.passes and uses it to add runtime assertions to non-strict export.
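For a sense of the user-visible effect, a hedged sketch (the module, dim name, and exact set of inserted asserts are illustrative):

```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

# Export non-strictly with a constrained dynamic batch dimension. The
# pass described above inserts nodes into the exported graph that
# re-check 4 <= batch <= 16 at runtime rather than silently assuming it.
batch = Dim("batch", min=4, max=16)
ep = export(M(), (torch.randn(8, 3),), dynamic_shapes={"x": {0: batch}}, strict=False)
print(ep.graph)
```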

Differential Revision: D55944267

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123681
Approved by: https://github.com/tugsbayasgalan, https://github.com/angelayi
2024-04-18 16:13:27 +00:00
Edward Z. Yang
3f0fd36835 Introduce size oblivious guards (#118579)
Fixes https://github.com/pytorch/pytorch/issues/117361

The implementation here slightly diverges from what was proposed in the issue, so I will recap what this PR is doing here. Today, when doing computations involving size-like unbacked SymInts, we assume for all operations that the compile time range of the integer is `[2, inf]`, even though at runtime we also accept zero and one.

This PR removes the carte blanche assumption, and instead does the analysis in a much more limited and controlled fashion: only for guards which we have designated as "size oblivious" are we willing to do the analysis under the assumption that the range of all size-like unbacked SymInts is `[2, inf]`; otherwise, we will faithfully only do analysis with `[0, inf]` (or whatever the user provided) bounds.

The infra pieces of this PR are:

* Remove runtime_var_to_range from torch/fx/experimental/symbolic_shapes.py; modify `_constrain_range_for_size` to refine the range without clamping min to 2, and instead add the symbol to a `size_like` set in the ShapeEnv
* When evaluating an expression, if the expression is requested to be evaluated in a `size_oblivious` way, we attempt to statically compute its value under the assumption that all symbols in `size_like` are `>= 2`.
* Add Python and C++ APIs for guarding on a SymBool in a size-oblivious way. In C++, I also need to add some helpers for performing symbolic comparisons, since the stock comparisons immediately specialize in the "normal" way.

The rest of the changes of the PR are marking various spots in PyTorch framework code as size oblivious, based on what our current test suite exercises.

As you review the places where we have marked things as size oblivious, it may become clear why I ended up not opting for the "designate a branch as the default branch when it's not statically obvious which way to go": for some of the conditions, this answer is rather non-obvious. I think potentially there is another refinement on top of this PR, which is something like "I don't care if you can't figure it out with ValueRange analysis, go down this path anyway if there are unbacked sizes involved." But even if we add this API, I think we are obligated to attempt the ValueRange analysis first, since it can lead to better outcomes sometimes (e.g., we are able to figure out that something is contiguous no matter what the unbacked size is.)

When is it permissible to mark something as size oblivious? Heuristically, it is OK anywhere in framework code if it gets you past a guard on unbacked SymInt problem. It is somewhat difficult to provide a true semantic answer, however. In particular, these annotations don't have any observational equivalence guarantee; for example, if I have `torch.empty(u0, 1).squeeze()`, we will always produce a `[u0]` size tensor, even though if `u0 == 1` PyTorch will actually produce a `[]` size tensor. The argument that I gave to Lezcano is that we are in fact defining an alternate semantics for a "special" size = 0, 1, for which we have these alternate eager mode semantics. In particular, suppose that we have a constant `special1` which semantically denotes 1, but triggers alternate handling rules. We would define `torch.empty(special1, 1).squeeze()` to always produce a `[special1]` size tensor, making its semantics coincide with unbacked SymInt semantics. In this model, the decision to designate guards as size oblivious is simply a user API question: you put them where ever you need some handling for special1! As we conservatively error out whenever it is not obvious what `special1` semantics should be, it is always valid to expand these semantics to cover more cases (although you can always choose the wrong semantics!)
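As a usage illustration, a minimal sketch assuming the Python entry point is `guard_size_oblivious` (per the bullet above about Python and C++ APIs):

```python
from torch.fx.experimental.symbolic_shapes import guard_size_oblivious

# A hedged sketch of how framework code opts in. With an unbacked
# size-like SymInt u0 = x.size(0), a plain `if x.size(0) == 1:` would
# raise a data-dependent guard error; the size-oblivious form evaluates
# the branch under the assumption u0 >= 2, so it statically resolves to
# False and no guard is emitted. On plain Python bools it is a no-op.
def maybe_squeeze_front(x):
    if guard_size_oblivious(x.size(0) == 1):
        return x.squeeze(0)
    return x
```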

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118579
Approved by: https://github.com/eellison, https://github.com/lezcano
2024-02-06 19:45:32 +00:00
lezcano
47ccf04885 Split SymNode into its own file (#112037)
This PR:

- Moves TrueDiv, LShift, RShift, IsNonOverlappingAndDenseIndicator to `_sympy.functions.py`
- Moves SymNode to `fx.experimental.sym_node`.
  - This file does not have any SymPy dependencies at import time
  - It installs the magic methods in Sym{Bool,Int,Float}.
  - N.b. With this split, we may be able to move Sym{Bool,Int,Float} to this file, and remove quite a few of the hacks around these classes
- Imports `sym_node` in `torch/__init__.py` rather than the whole `symbolic_shapes.py`.
  This breaks the import-time dependency between torch and SymPy
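A quick way to see the claim (a hedged check; true as of this change, later refactors may move things again):

```python
import sys

# After this split, importing torch pulls in torch.fx.experimental.sym_node
# but should no longer import SymPy eagerly.
import torch  # noqa: F401

assert "sympy" not in sys.modules, "SymPy was imported at torch import time"

from torch.fx.experimental.sym_node import SymNode  # the new home  # noqa: F401
```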

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112037
Approved by: https://github.com/peterbell10
ghstack dependencies: #112035, #112036
2023-10-26 23:32:27 +00:00
Jerry Zhang
7a69e3d30b [fx][subgraph_matcher] Add a matcher that supports name to node map (#110743)
Summary:
We want the matcher to return a name -> node map for the target graph
so that we can refer to nodes by name; this is useful for downstream
applications such as quantization.

It also lets us use the torch API as the source of truth instead of
matching the aten API directly.
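A hedged usage sketch (the pattern and target modules are made up; the convention of returning a name -> node dict alongside the output is per this PR):

```python
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace
from torch.fx.passes.utils.matcher_with_name_node_map_utils import (
    SubgraphMatcherWithNameNodeMap,
)

class Pattern(torch.nn.Module):
    def forward(self, x):
        relu = F.relu(x)
        # The pattern returns a name -> node dict next to its output;
        # this is how the matcher learns which names to report.
        return relu, {"relu": relu}

class Target(torch.nn.Module):
    def forward(self, x):
        return F.relu(x) + 1

matcher = SubgraphMatcherWithNameNodeMap(symbolic_trace(Pattern()))
for match in matcher.match(symbolic_trace(Target()).graph):
    relu_node = match.name_node_map["relu"]  # look nodes up by name
```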

Test Plan:
python test/fx/test_matcher_utils.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110743
Approved by: https://github.com/SherlockNoMad
2023-10-10 22:21:24 +00:00
albanD
c4db607607 Doc test non packages (#110568)
Add non-package python modules to the public API checks.
The original change is to remove the `ispkg` check in this line
https://github.com/pytorch/pytorch/blob/main/docs/source/conf.py#L518

Everything else adds the appropriate modules to the rst files, makes sure every module we provide can be imported (fixed by either making optional dependencies actually optional or deleting files that have been un-importable for 3 years), makes APIs that are both modules and functions (like torch.autograd.gradcheck) render properly on the docs website without confusion, and adds every non-documented API to the allow list (~3k of them).

The next step will be to try to fix these missing docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110568
Approved by: https://github.com/zou3519
2023-10-06 14:16:01 +00:00
Michael Suo
a475ea4542 [fx] change from #users to num_users in graph printout (#101140)
`#users` gets interpreted as a tag in various chat apps, which makes it annoying to copy-paste graphs into them.
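For example (illustrative; `M` is ours):

```python
import torch
from torch.fx import symbolic_trace

class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# Before this change, nodes rendered as `%x : [#users=1] = ...`;
# after it, as `%x : [num_users=1] = ...`, which chat apps leave alone.
print(symbolic_trace(M()).graph)
```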

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101140
Approved by: https://github.com/ezyang
2023-06-20 21:24:32 +00:00
PyTorch MergeBot
66eef31444 Revert "[fx] change from #users to num_users in graph printout (#101140)"
This reverts commit e568c5a18d.

Reverted https://github.com/pytorch/pytorch/pull/101140 on behalf of https://github.com/jeanschmidt due to There are internal changes to this commit that are preventing landing, so I am reverting to unblock the diff train ([comment](https://github.com/pytorch/pytorch/pull/101140#issuecomment-1547989487))
2023-05-15 14:35:22 +00:00
Michael Suo
e568c5a18d [fx] change from #users to num_users in graph printout (#101140)
`#users` gets interpreted as a tag in various chat apps, which makes it annoying to copy-paste graphs into them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101140
Approved by: https://github.com/ezyang
2023-05-12 04:34:01 +00:00
Svetlana Karslioglu
d425da8bf3 Replace master with main in links and docs/conf.py (#100176)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100176
Approved by: https://github.com/albanD, https://github.com/malfet
2023-05-02 18:20:32 +00:00
Philip Meier
bc73affdad prepare removal of deprecated functionality in torch.testing (#87969)
_Redo of #86586 with all BC breaking changes granularly placed into separate commits._

---

Per title. Deprecation happened on Feb 25, 2022 in c6f1bbc0ac, which made it into the 1.12 release. Since it is now 245 days later and the next release will be 1.14, the removals later in the stack comply with the [BC policy](https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#minimizing-the-disruption-of-bc-breaking-changes).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87969
Approved by: https://github.com/mruberry
2022-11-02 14:04:48 +00:00
Kazuaki Ishizaki
7d2f1cd211 Fix typos under docs directory (#88033)
This PR fixes typos in `.rst` and `.Doxyfile` files under the docs directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88033
Approved by: https://github.com/soulitzer
2022-10-31 19:31:56 +00:00
Shangdi Yu
c52ee6dc0a CSE Pass and common pass Tests (#81742)
Test cases for CSE Pass and common passes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81742
Approved by: https://github.com/SherlockNoMad
2022-07-22 03:45:09 +00:00
Sherlock Huang
fc10a63727 Prims+NvFuser Backend Prototype (#80591)
This PR integrates FX graph partitioner + Aten2Prims DecompositionInterpreter + Prims' TraceExecutor + naive caches for nvFuser.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80591
Approved by: https://github.com/jjsjann123, https://github.com/ezyang
2022-07-08 19:53:03 +00:00
migeedz
443db9b58e Introduce Z3 types and utility functions for constraint generation (#80084)
Create Z3 types: in particular, dynamic dimensions, a dynamic tensor type, and tensor types up to size 4. Note that for Z3 decidability reasons we use uninterpreted functions for tensor types, which means we must explicitly define tensor constructors with a concrete size (for now, up to size 4). We defer lifting this requirement to future work.
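A hedged sketch of the encoding idea in z3py (our names, not the PR's exact declarations):

```python
import z3

# A dimension is a pair (occurred flag, size); tensor types get one
# explicit constructor per rank, up to 4, which keeps the resulting
# theory decidable.
Dim = z3.Datatype("Dim")
Dim.declare("dim", ("occurred", z3.IntSort()), ("size", z3.IntSort()))
Dim = Dim.create()

TensorType = z3.Datatype("TensorType")
TensorType.declare("Dyn")  # the fully dynamic tensor type
TensorType.declare("tensor1", ("d1", Dim))
TensorType.declare("tensor2", ("d1", Dim), ("d2", Dim))
TensorType.declare("tensor3", ("d1", Dim), ("d2", Dim), ("d3", Dim))
TensorType.declare("tensor4", ("d1", Dim), ("d2", Dim), ("d3", Dim), ("d4", Dim))
TensorType = TensorType.create()

t = z3.Const("t", TensorType)
s = z3.Solver()
s.add(t == TensorType.tensor2(Dim.dim(1, 3), Dim.dim(1, 4)))
print(s.check())  # sat
```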
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80084
Approved by: https://github.com/anijain2305
2022-06-25 22:27:33 +00:00
Sherlock Huang
752c06e0e1 FX graph partitioner and fuser (#79439)
This PR introduces two components.

CapabilityBasedPartitioner for FX graphs: given a list of supported operators, this partitioner tries to form the largest subgraphs that contain only the supported ops.

Fuser utility: given a list of nodes in an FX graph, it lifts them into a sub-GraphModule within the original graph.
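A hedged sketch of how the two pieces compose (the support class and module are made up; the partitioner class is as named in this PR):

```python
import operator

import torch
from torch.fx import symbolic_trace
from torch.fx.passes.infra.partitioner import CapabilityBasedPartitioner
from torch.fx.passes.operator_support import OperatorSupport

class AddOnlySupport(OperatorSupport):
    # Claim support only for add; everything else stays outside the
    # fused subgraphs.
    def is_node_supported(self, submodules, node):
        return node.op == "call_function" and node.target is operator.add

class M(torch.nn.Module):
    def forward(self, x):
        y = x + x
        y = y + x  # two adjacent supported nodes -> one partition
        return torch.relu(y)

gm = symbolic_trace(M())
partitioner = CapabilityBasedPartitioner(gm, AddOnlySupport())
partitions = partitioner.propose_partitions()       # largest supported subgraphs
fused_gm = partitioner.fuse_partitions(partitions)  # lift each into a sub-GraphModule
print(fused_gm.graph)
```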

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79439
Approved by: https://github.com/jjsjann123, https://github.com/davidberard98
2022-06-24 18:49:37 +00:00
David Berard
8edaf388e5 Fix fx decomposition example
Previously GraphAppendingTracer was appending to the wrong graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79807

Approved by: https://github.com/kit1980
2022-06-20 17:26:17 +00:00
Alban Desmaison
bd7e99cbb9 Fix doc build
Regression introduced in https://github.com/pytorch/pytorch/pull/73224
The caller of this script was never updated to pass in `main`: 2ecc59086a/.github/workflows/_docs.yml (L81-L85)

As a result, all PR docs were built as if they were for a release (for example https://github.com/pytorch/pytorch/runs/6031182009?check_suite_focus=true), and the doc coverage test didn't run for a month :(
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75997
Approved by: https://github.com/musebc, https://github.com/seemethere
2022-04-19 04:07:47 +00:00
Alban Desmaison
734281c3d6 Cleanup all module references in doc (#73983)
Summary:
Working towards https://docs.google.com/document/d/10yx2-4gs0gTMOimVS403MnoAWkqitS8TUHX73PN8EjE/edit?pli=1#

This PR:
- Ensure that all the submodules are listed in an rst file (this ensures they are considered by the coverage tool)
- Remove some long-deprecated code that just errors out on import
- Remove the allow list altogether to ensure nothing gets added back there

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73983

Reviewed By: anjali411

Differential Revision: D34787908

Pulled By: albanD

fbshipit-source-id: 163ce61e133b12b2f2e1cbe374f979e3d6858db7
(cherry picked from commit c9edfead7a01dc45bfc24eaf7220d2a84ab1f62e)
2022-03-10 22:26:29 +00:00
Priyam Parashar
f75e92a936 Fix for retracing documentation which would break for n-ary operators (#71599)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68195

Updated the fx.rst documentation and followed the instructions in [contributing.md](https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#writing-documentation) to generate the HTML. I hit errors that looked very similar to https://github.com/pytorch/pytorch/issues/32703, but gathered from that thread that a non-zero exit is OK for documentation builds and that these are warnings that don't affect HTML generation (at least for the root rst folder). The HTML output is plain, without any styling; please confirm this is intentional.

Screenshot of the generated HTML: https://user-images.githubusercontent.com/9580531/150439448-1a626d74-68ba-4f94-91f2-a6942959b049.png

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71599

Reviewed By: jamesr66a

Differential Revision: D33719546

Pulled By: zephirefaith

fbshipit-source-id: cc9b8ddb13cfdb9f14ebff54cf0d894a8b842aa1
(cherry picked from commit 170db5d7be)
2022-01-24 20:07:08 +00:00
JUBIN CHHEDA
27228656e6 [FX][docs] Document gotcha about training flag (#68915)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/68913

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68915

Reviewed By: jamesr66a

Differential Revision: D32705410

Pulled By: jubinchheda

fbshipit-source-id: a44c17ab0e62465823ceb0ef983ae330b50fb073
2021-11-29 16:13:32 -08:00
Xiaoyu Zhang
273f7ae9b3 fx: Update fx.rst (#68043)
Summary:
When I ran this part of the code from the document with PyTorch 1.10.0, I found some differences between the actual output and the document, as follows:

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# Create an instance of `M`
m = M()

traced = fx.symbolic_trace(m)
print(traced)
print(traced.graph)
traced.graph.print_tabular()
```

I get the result:

```shell
def forward(self, x, y):
    add = x + y;  x = y = None
    return add

graph():
    %x : [#users=1] = placeholder[target=x]
    %y : [#users=1] = placeholder[target=y]
    %add : [#users=1] = call_function[target=operator.add](args = (%x, %y), kwargs = {})
    return add
opcode         name    target                   args    kwargs
-------------  ------  -----------------------  ------  --------
placeholder    x       x                        ()      {}
placeholder    y       y                        ()      {}
call_function  add     <built-in function add>  (x, y)  {}
output         output  output                   (add,)  {}
```

This PR updates the document to match.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68043

Reviewed By: driazati

Differential Revision: D32287178

Pulled By: jamesr66a

fbshipit-source-id: 48ebd0e6c09940be9950cd57ba0c03274a849be5
2021-11-09 14:00:45 -08:00
Sam Estep
3a0801f960 [skip ci] Fix "arugment" typos (#61459)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61455.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61459

Reviewed By: soulitzer

Differential Revision: D29636559

Pulled By: samestep

fbshipit-source-id: 9ad65265c0491d9e81bb303abe3a07c6843bfa4a
2021-07-15 15:20:18 -07:00
James Reed
ac64a41e8a [FX][docs] Add note about python set pitfall (#61597)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61597

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D29685735

Pulled By: jamesr66a

fbshipit-source-id: b5c5b53ff94fac1022f69b7c0ad4e4055b116029
2021-07-13 20:09:13 -07:00
James Reed
02d380450d [FX][docs][EZ] Fix link to fuser example (#59670)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59670

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D28975704

Pulled By: jamesr66a

fbshipit-source-id: 2fb759224b5b1ecc62c0ab26563d2a35ed422794
2021-06-08 17:32:55 -07:00
Horace He
79a258f448 s/foward/forward/g (#58497)
Summary:
Annoying typo.

Prompted by these profiling results: https://github.com/pytorch/pytorch/issues/56419#issuecomment-825787828

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58497

Reviewed By: malfet

Differential Revision: D28521081

Pulled By: Chillee

fbshipit-source-id: ab91a2e167dd7d3387fd56106a6cff81f7a32f10
2021-05-19 11:42:42 -07:00
Gary Miguel
f9c8b7f1a8 [FX][docs] minor fixes (#58085)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58085

Reviewed By: mruberry

Differential Revision: D28364553

Pulled By: jamesr66a

fbshipit-source-id: 0d953672de9a86ecf5b1900b22e6ddef850dbe8f
2021-05-11 15:35:49 -07:00
James Reed
f8e7d8bb0d [FX][docs] Render inherited methods in fx.Tracer API reference (#53630)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53630

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26918962

Pulled By: jamesr66a

fbshipit-source-id: 2c84e308889d4ba3176018c7bd44a841e715e6c8
2021-03-09 14:30:41 -08:00
Horace He
c07a62b854 [FX] change dynamic control flow example to a *more* dynamic version (#53250)
Summary:
This is a more fundamental example, as we may support some amount of shape specialization in the future.
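The flavor of example the section moved to, sketched (hedged; not the exact docs text):

```python
import torch
from torch import fx

def f(x):
    # Branching on a tensor *value*, not just its shape, is truly
    # dynamic: the Proxy seen during tracing has no concrete truth
    # value, so symbolic tracing cannot capture this function.
    if x.sum() > 0:
        return torch.relu(x)
    return torch.neg(x)

fx.symbolic_trace(f)  # raises fx.proxy.TraceError
```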

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53250

Reviewed By: navahgar

Differential Revision: D26841272

Pulled By: Chillee

fbshipit-source-id: 027c719afafc03828a657e40859cbfbf135e05c9
2021-03-08 10:00:19 -08:00
Wanchao Liang
79944f7ad9 [fx] simple doc fix
Reviewed By: houseroad

Differential Revision: D26739803

fbshipit-source-id: e680ce961a9ed1a5042d675aca9f5cf118c8ff85
2021-03-03 15:47:40 -08:00
Horace He
475278f1c0 [FX] Make some modifications to limitation section (#51928)
Summary:
![](https://i.imgur.com/P0Tq4xR.jpg)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51928

Reviewed By: jamesr66a

Differential Revision: D26329664

Pulled By: Chillee

fbshipit-source-id: 94fd7b03ca53f48b1e4633a462c6e02bb0fd2f3c
2021-02-09 18:32:28 -08:00
Horace He
9c2dd5775a Fixed slight bug in FX docs (#51779)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51779

Reviewed By: ngimel

Differential Revision: D26279623

Pulled By: Chillee

fbshipit-source-id: 0cd2a487ce6b80ce0d3f81e2b2334ade20d816bb
2021-02-05 11:27:39 -08:00
James Reed
949ab213dd Revert "Revert D26246231: [FX] Edits after comprehensive pass over docs" (#51728)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51728

This reverts commit 6c80fd005f.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D26254130

Pulled By: jamesr66a

fbshipit-source-id: f301688f85c512076fee9b83a986677ef893d2c5
2021-02-04 13:01:09 -08:00
Alban Desmaison
6c80fd005f Revert D26246231: [FX] Edits after comprehensive pass over docs
Test Plan: revert-hammer

Differential Revision:
D26246231 (c22bc4821d)

Original commit changeset: 8d6278a9fe1d

fbshipit-source-id: fdc83289f8fe7986bc02181eec55e4e72be2d812
2021-02-04 09:26:21 -08:00
James Reed
c22bc4821d [FX] Edits after comprehensive pass over docs (#51705)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51705

Pull Request resolved: #51679

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26246231

Pulled By: jamesr66a

fbshipit-source-id: 8d6278a9fe1da5e6c34eff4fedc4c7e18533fe0f
2021-02-04 08:11:07 -08:00
Horace He
f1a63b7c10 [FX] Added how to write transformations section (#51278)
Summary:
![image](https://user-images.githubusercontent.com/6355099/106121588-b8614a00-6125-11eb-923f-fcdf575cd6cd.png)

I still need to add links to vmap/grad/decomposition, but those haven't been added to the examples folder yet.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51278

Reviewed By: zou3519

Differential Revision: D26223103

Pulled By: Chillee

fbshipit-source-id: 3ad9bf76cd3438743edecdc17c44f8d1e00e5ea1
2021-02-03 21:32:43 -08:00
Ansley Ussery
ab4623da16 Document FX debugging (#51530)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51530

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26192641

Pulled By: ansley

fbshipit-source-id: c69ab1bb2451d8ee5a729445f52bccc66e6f431b
2021-02-02 23:17:51 -08:00
James Reed
609f76f27a [WIP][FX] Add Interpreter and Transformer (#50420)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50420

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25880330

Pulled By: jamesr66a

fbshipit-source-id: 27d34888e36e39924821fed891d79f969237a104
2021-02-01 11:40:12 -08:00
James Reed
789f6f1250 [FX] Minor docs changes (#50966)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50966

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26029101

Pulled By: jamesr66a

fbshipit-source-id: 4374771be74d0a4d05fdd29107be5357130c2a76
2021-01-22 16:23:19 -08:00
James Reed
d0e942f9a7 [FX][docs] Add limitations of symbolic tracing (#50638)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50638

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25933780

Pulled By: jamesr66a

fbshipit-source-id: 0aa97ea05203fbcb707b0e947a465e206104b7df
2021-01-20 21:42:16 -08:00
James Reed
d9f71b5868 [WIP][FX] new sections in docs (#50562)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50562

Adding new top-level sections to the docs to be filled out

![image](https://user-images.githubusercontent.com/4685384/104666703-5b778580-5689-11eb-80ab-7df07f816b5b.png)

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25919592

Pulled By: jamesr66a

fbshipit-source-id: 45f564eb8fddc7a42abb5501e160cca0dd0745c8
2021-01-14 21:34:36 -08:00
James Reed
6882f9cc1c [FX] Add wrap() docstring to docs and add decorator example (#50555)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50555
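A hedged sketch of the decorator form this commit documents (`num_dims` is our illustrative function):

```python
import torch
import torch.fx

@torch.fx.wrap
def num_dims(x):
    # len() on a Proxy would raise during symbolic tracing; wrapping
    # records the call as a single call_function node and defers the
    # real body to runtime.
    return len(x.shape)

class M(torch.nn.Module):
    def forward(self, x):
        return x + num_dims(x)

print(torch.fx.symbolic_trace(M()).graph)  # shows call_function[target=num_dims]
```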

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25917564

Pulled By: jamesr66a

fbshipit-source-id: 20c7c8b1192fa80c6a0bb9e18910791bd7167232
2021-01-14 21:31:51 -08:00
Ansley Ussery
080a097935 Add docstring for Proxy (#50145)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50145

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25854281

Pulled By: ansley

fbshipit-source-id: d7af6fd6747728ef04e86fbcdeb87cb0508e1fd8
2021-01-11 13:47:55 -08:00
James Reed
778006918c [WIP][FX] Add FX page to docs (#48814)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48814

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25320051

Pulled By: jamesr66a

fbshipit-source-id: b1fdec9615a7a4eb97c557bb3cba7f90b0a4d933
2020-12-15 09:48:29 -08:00