Commit Graph

332 Commits

Ram Rachum
351d73b97f Fix exception causes all over the codebase (#90271)
This is the continuation of #90134 and hopefully the final PR in this series.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90271
Approved by: https://github.com/kit1980
2022-12-07 04:29:00 +00:00
Jongsoo Park
2bca280a31 Revert D41683102: Multisect successfully blamed D41683102 for test or build failures (#90117)
Summary:
This diff is reverting D41683102, which has been identified as causing the following test or build failures:
Tests affected:
- https://www.internalfb.com/intern/test/281475051072735/

Here's the Multisect link:
https://www.internalfb.com/intern/testinfra/multisect/1444960
Here are the tasks that are relevant to this breakage:
T124964606: 41 tests started failing for oncall ads_trainer_release in the last 2 weeks
We're generating a revert to back out the changes in this diff; please note that the backout may land if someone accepts it.

Test Plan: NA

Reviewed By: jspark1105

Differential Revision: D41710842

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90117
Approved by: https://github.com/soumith
2022-12-03 19:54:04 +00:00
alexmsettle
b703e4b3c2 Add hierarchical module names to torchFX graph.node #87659 (#87742)
Fixes #87659

Pass down the module hierarchy from module.named_modules() to the name field of graph.node.
This makes it so that the name of each node carries descriptive information about the network architecture.
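For illustration only (not part of the PR), a minimal sketch that traces a module with a nested submodule and prints the resulting node names; with this change, nodes created inside submodules are expected to carry a module-hierarchy prefix (e.g. `block_relu` rather than `relu`):

```python
import torch
import torch.nn as nn
from torch.fx import symbolic_trace

class Block(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = Block()

    def forward(self, x):
        return self.block(x) * 2

gm = symbolic_trace(Net())
for node in gm.graph.nodes:
    # After this change, names of nodes traced from inside `self.block`
    # should reflect the module hierarchy.
    print(node.op, node.name)
```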

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87742
Approved by: https://github.com/jerryzh168
2022-12-02 05:58:06 +00:00
Ryan Spring
534ae6ae47 [primTorch] Implement group norm reference (#87054)
Add group norm reference
Split from #81191
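For illustration (this is not the actual primTorch reference, just a sketch of the math such a reference implements in terms of existing ops):

```python
import torch
import torch.nn.functional as F

def group_norm_reference(x, num_groups, weight=None, bias=None, eps=1e-5):
    # Normalize over each group of channels: reshape (N, C, *) -> (N, G, rest).
    n, c = x.shape[0], x.shape[1]
    orig_shape = x.shape
    g = x.reshape(n, num_groups, -1)
    mean = g.mean(dim=-1, keepdim=True)
    var = g.var(dim=-1, unbiased=False, keepdim=True)
    g = (g - mean) / torch.sqrt(var + eps)
    out = g.reshape(orig_shape)
    affine_shape = (1, c) + (1,) * (x.dim() - 2)
    if weight is not None:
        out = out * weight.reshape(affine_shape)
    if bias is not None:
        out = out + bias.reshape(affine_shape)
    return out

x = torch.randn(2, 6, 4, 4)
torch.testing.assert_close(group_norm_reference(x, 3), F.group_norm(x, 3), rtol=1e-5, atol=1e-5)
```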
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87054
Approved by: https://github.com/mruberry
2022-11-11 01:08:20 +00:00
soulitzer
4c20c0509d Split out forward AD tests from test_ops_gradients and reenable slow gradcheck CI (#88216)
Fixes: https://github.com/pytorch/pytorch/issues/88010

This PR does a couple things to stop slow gradcheck from timing out:
- Splits out test_ops_fwd_gradients from test_ops_gradients, and factors out TestFwdGradients and TestBwdGradients which both inherit from TestGradients, now situated in common_utils (maybe there is a better place?)
- Skips CompositeCompliance (and several other test files) for slow gradcheck CI since they do not use gradcheck
- Because test times for test_ops_fwd_gradients and test_ops_gradients are either unknown or wrong, we hardcode them for now to prevent the two from being put together. We can undo the hack after the actual test times are updated. (`def calculate_shards` divides tests with unknown test times in a round-robin fashion.)
- Updates references to test_ops_gradients and TestGradients
- Test files that are skipped for slow gradcheck CI are now centrally located in run_tests.py. This reduces how fine-grained we can be with the skips, so for some skips (one so far) we still use the old skipping mechanism, e.g. for test_mps.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88216
Approved by: https://github.com/albanD
2022-11-03 00:20:45 +00:00
Philip Meier
bc73affdad prepare removal of deprecated functionality in torch.testing (#87969)
_Redo of #86586 with all BC breaking changes granularly placed into separate commits._

---

Per title. Deprecation happened on Feb 25, 2022 in c6f1bbc0ac, which made it into the 1.12 release. Since it is now 245 days later and the next release will be 1.14, the removals later in the stack comply with the [BC policy](https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#minimizing-the-disruption-of-bc-breaking-changes).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87969
Approved by: https://github.com/mruberry
2022-11-02 14:04:48 +00:00
Peter Bell
bc9caafc78 record_function: update to use custom_class API (#76420)
Re-submit of gh-72302

This still has a small performance hit, but it is much smaller. On my
machine I see `_record_function_exit._RecordFunction` takes 1.05 us
compared to the `Tensor` overload taking 0.79 us.

In an overall comparison, I see a 0.7 us slowdown from 6.0 us to
6.7 us for this timeit benchmark
```python
import torch

def foo():
  with torch.profiler.record_function("foo"):
    return torch.eye(3)

%timeit foo()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76420
Approved by: https://github.com/robieta
2022-11-02 00:39:28 +00:00
lezcano
787028cadb Implement col2im decomposition and fix im2col and add a few preconditions (#85541)
As per title
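For context, a quick illustration (not from this PR) of the im2col/col2im pair that the decomposition has to reproduce: `F.unfold` is im2col, `F.fold` is col2im, and with non-overlapping patches they are inverses of each other.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 6, 6)
cols = F.unfold(x, kernel_size=3, stride=3)                     # im2col: (1, 2*3*3, 4)
y = F.fold(cols, output_size=(6, 6), kernel_size=3, stride=3)   # col2im
torch.testing.assert_close(x, y)  # exact round trip when patches don't overlap
```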
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85541
Approved by: https://github.com/jansel
2022-09-30 09:31:53 +00:00
Renfei Chen
4befe45084 [FX] Add one option to maintain the FX graph execution order after splitting_module (#85188)
Summary:

Given the original execution order and the node dependency relationships (note that the same dependencies can be satisfied by multiple execution orders, i.e. different topological orders), after reunion we can find that the execution order of the new GraphModule is different from the original one, which is not what we want.
For example, assume NewLeaf_1 is an EmbeddingLookup (calling EmbeddingLookup is awaitable: we keep executing the following nodes rather than waiting for the result until we actually have to know it), and NewLeaf_4 is the node where we HAVE to use the lookup result to interact with NewLeaf_3. NewLeaf_1 launches a lookup kernel and an all2all communication stream to distribute the result to all ranks; in the meantime, we want to keep executing NewLeaf_2 and NewLeaf_3 to avoid meaningless waiting. With the new execution order, however, we have to wait for the lookup kernel and the all2all communication to finish, since the next node, NewLeaf_4, needs the result; only then can we execute NewLeaf_2, etc. This cannot leverage the overlap between parallel computation and the communication stream and hurts QPS a lot.
So, while constructing the GraphModule, we have to switch from the topological order back to the original order (see the sketch below).
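A minimal usage sketch, assuming the option landed as the `keep_original_order` flag on `torch.fx.passes.split_module.split_module`:

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.split_module import split_module

class M(torch.nn.Module):
    def forward(self, x):
        a = x + 1
        b = x * 2
        return a + b

gm = symbolic_trace(M())

def split_callback(node):
    # Put the two leaf ops in partition 0 and the final add in partition 1.
    return 0 if node.name in ("add", "mul") else 1

split = split_module(gm, M(), split_callback, keep_original_order=True)
print(split.graph)   # submodule calls follow the original execution order
```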

Test Plan:
Unit test

Not sure how to add tests in FX as there's no TARGETS, so I added them in the TorchRec folder

Differential Revision: D39567314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85188
Approved by: https://github.com/SherlockNoMad
2022-09-23 23:21:54 +00:00
Kunal Bhalla
b00a4b7cf1 [torch.fx.wrap] Use callable / function.__name__ instead of function.__code__.co_name (#84373)
Ran across this issue while using torch.fx.wrap on a decorated function: it triggered a KeyError: 'wrapper_inside_decorator'. torch.fx.wrap stores function.__code__.co_name, but that isn't set correctly (and doesn't match its name in the global namespace) for decorators; function.__name__ is set correctly.

Also adjusted to checking for callable instead of checking for the existence of __code__, to allow a broader variety of functions to be passed in. E.g. functools.cache returns a callable that won't have a __code__ attribute.

I added a unit test (that incidentally fails every test in the suite before the fix commit -- because it affects the global state), and then a fix that addresses it.

```
In [1]: import functools

In [2]: def decorator(f):
   ...:     @functools.wraps(f)
   ...:     def wrapper(*args, **kwargs):
   ...:         return f(*args, **kwargs)
   ...:     return wrapper
   ...:

In [3]: @decorator
   ...: def some_function(x):
   ...:     return x
   ...:

In [4]: some_function.__name__
Out[4]: 'some_function'

In [5]: some_function.__code__.co_name
Out[5]: 'wrapper'
```
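A minimal end-to-end sketch of the scenario (names are illustrative): after the fix, passing the decorated callable to `torch.fx.wrap` registers it under `__name__`, so it is kept as a leaf call during tracing.

```python
import functools
import torch
import torch.fx

def decorator(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@decorator
def some_function(x):
    return torch.neg(x)

torch.fx.wrap(some_function)   # callable accepted; previously this keyed on __code__.co_name

class M(torch.nn.Module):
    def forward(self, x):
        return some_function(x)

print(torch.fx.symbolic_trace(M()).code)   # some_function appears as a leaf call
```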
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84373
Approved by: https://github.com/jamesr66a, https://github.com/SherlockNoMad
2022-09-09 05:44:29 +00:00
Vasilis Vryniotis
7e05879b46 Fix fx test for S3D (#84526)
Fixing [failing](https://github.com/pytorch/pytorch/runs/8083404365?check_suite_focus=true) tests by adjusting the input size for S3D. The reason the test is failing is that S3D requires a larger input size than was previously passed.

As noted before, TorchVision already checks that its models are FX traceable and ensures all the tests are updated and work properly prior to adding new architectures. The tests here seem to duplicate our efforts and often break because they don't factor in details about each model. It might be worth considering running TorchVision's tests instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84526
Approved by: https://github.com/pbelevich
2022-09-05 13:15:55 +00:00
Vasilis Vryniotis
cff55682d8 Change the input of mvit_v2_s on the FX test (#83242)
Addresses some [breakages](https://github.com/pytorch/pytorch/runs/7782559841?check_suite_focus=true) from #82560

Context: The tests are breaking because a new architecture was added in TorchVision (see https://github.com/pytorch/vision/pull/6373) that requires a different input size. This PR addresses it by using the right size for the `mvit_v2_s` architecture.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83242
Approved by: https://github.com/ezyang
2022-08-11 15:27:38 +00:00
Vasilis Vryniotis
6a09847c42 Fix broken FX tests (#83187)
Resolves [breakages](https://github.com/pytorch/pytorch/runs/7762125339?check_suite_focus=true) observed at #82560

Context:
The current FX tests assume that every public method under `torchvision.models` is a model builder method. To get a list of those methods, they query the `__dict__` attribute of the module. Unfortunately this assumption is not true and the tests already contain some workarounds to filter some methods. A better approach would be to query TorchVision for all of its available models under a specific module. This is exactly what the new Registration API can help us do and that's what we use in this PR.
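For reference, the registration API presumably refers to `torchvision.models.list_models` / `get_model` (available in recent TorchVision releases), e.g.:

```python
import torchvision.models as models

# Ask TorchVision for its registered model builders instead of scraping __dict__.
names = models.list_models(module=models)
m = models.get_model(names[0], weights=None)
print(names[:5], type(m).__name__)
```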
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83187
Approved by: https://github.com/ezyang
2022-08-11 07:38:35 +00:00
Sherlock Huang
6915676448 Preserve node's stack trace during retrace (#83050)
AOTAutograd retraces the graph module produced by TorchDynamo; this PR preserves the stack trace from the original fx.Node.

Differential Revision: [D38595638](https://our.internmc.facebook.com/intern/diff/D38595638)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83050
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2022-08-11 04:18:14 +00:00
David Berard
45e7d0268a [fx] Implement __deepcopy__ for fx.Tracer (#83130)
Copied from @jamesr66a 's example in #83116.

Implements `__deepcopy__` to skip deepcopying the elements of `_autowrap_search`, because it contains modules, which cannot/should not be deepcopied.

Differential Revision: [D38560212](https://our.internmc.facebook.com/intern/diff/D38560212)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83130
Approved by: https://github.com/SherlockNoMad
2022-08-11 00:13:21 +00:00
Sherlock Huang
752579a373 Preserve stack trace in nodes during fx.Transform (#82670)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82670
Approved by: https://github.com/ezyang
2022-08-03 20:24:19 +00:00
Nikita Shulga
d80fe49de0 [Reland] Add py-3.10 config (#82329)
This is a re-land of #81372 and #81233 with the exception that it does not force the range-checks on older Python runtime versions and as such should not affect the internal workloads, which were the reason for revert, see https://github.com/pytorch/pytorch/pull/81372#issuecomment-1187516464

- [Py3.10] Allow floats to be imported as Long (#81372)
- [CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)
- Don't do anything about range checks for pre-py3.10
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82329
Approved by: https://github.com/kit1980
2022-07-27 20:22:47 +00:00
PyTorch MergeBot
5df1ce46f0 Revert "[resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)"
This reverts commit ce92c1cfe9.

Reverted https://github.com/pytorch/pytorch/pull/81999 on behalf of https://github.com/ZainRizvi due to test_bce_with_logits_has_correct_forward_grad consistently fails with an error that it takes 2 positional arguments but 3 were given
2022-07-26 03:29:50 +00:00
soulitzer
0fcdf936e7 Skip tests that don't call gradcheck in slow gradcheck CI (#82117)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82117
Approved by: https://github.com/kit1980, https://github.com/albanD
2022-07-25 21:33:52 +00:00
James Reed
ce92c1cfe9 [resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)
Differential Revision: D38077793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81999
Approved by: https://github.com/pbelevich, https://github.com/osalpekar
2022-07-25 21:00:42 +00:00
PyTorch MergeBot
0d1710ade5 Revert "[FX] Fix PyTree unpacking carrying forward type annotations (#81906)"
This reverts commit e0d83a0bdc.

Reverted https://github.com/pytorch/pytorch/pull/81906 on behalf of https://github.com/jeanschmidt due to breaking internal builds
2022-07-22 11:11:10 +00:00
James Reed
e0d83a0bdc [FX] Fix PyTree unpacking carrying forward type annotations (#81906)
Resolves https://github.com/pytorch/pytorch/issues/81902

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81906
Approved by: https://github.com/Chillee, https://github.com/voznesenskym
2022-07-22 04:25:23 +00:00
Shangdi Yu
c52ee6dc0a CSE Pass and common pass Tests (#81742)
Test cases for CSE Pass and common passes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81742
Approved by: https://github.com/SherlockNoMad
2022-07-22 03:45:09 +00:00
Edward Z. Yang
5b88a2078b Follow GitHub relabeling of oncall: fx for test owners (#81821)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81821
Approved by: https://github.com/janeyx99
2022-07-21 01:50:06 +00:00
PyTorch MergeBot
75aa049a81 Revert "[Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)"
This reverts commit c09d84d325.

Reverted https://github.com/pytorch/pytorch/pull/81695 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-20 23:31:05 +00:00
Pavel Belevich
c09d84d325 [Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)
Test Plan: CI

Differential Revision: D37956824

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81695
Approved by: https://github.com/jamesr66a
2022-07-20 03:50:09 +00:00
PyTorch MergeBot
fde1107fe8 Revert "Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)"
This reverts commit d52f8c2533.

Reverted https://github.com/pytorch/pytorch/pull/81510 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-19 09:51:54 +00:00
PyTorch MergeBot
c96485804f Revert "[CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)"
This reverts commit 7ccf693cf6.

Reverted https://github.com/pytorch/pytorch/pull/81233 on behalf of https://github.com/janeyx99 due to this should have been reverted along with 81372 for breaking internal builds
2022-07-18 17:15:50 +00:00
Nikita Shulga
7ccf693cf6 [CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)
Second attempt of landing the change after https://github.com/pytorch/pytorch/pull/66530

Skip NaN hash comparison validation in `jit/test_hash.py`, as it behaves differently in 3.10 vs other Python versions
Skip tensor_fx assert tests
Skip initializing uint8 tensors from negative values in `TestScript.test_torch_tensor_as_tensor`

Final step in closing https://github.com/pytorch/pytorch/issues/66424

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81233
Approved by: https://github.com/seemethere
2022-07-16 20:41:04 +00:00
Pavel Belevich
d52f8c2533 Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)
Adds an optional callback that checks whether map_aggregate should continue recursive traversal. The main motivation is to not traverse torch.Size, which is a tuple.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81510
Approved by: https://github.com/SherlockNoMad, https://github.com/jamesr66a
2022-07-15 11:57:07 +00:00
Angela Yi
3d0b0b2f9b [fx] PassManager changes (#80531)
PassManager is a class used to run multiple passes on a given graph module.

Class Attributes
* `passes: List[Callable]`: A list of callable passes
* `constraints: List[Callable]`: A list of constraints
* `run_checks_after_each_pass`: Flag for running checks after each pass

Class Methods:
* `__call__(graph_module: DispatchGraphModule)`:
    * Runs the passes in order until the graph stops changing, or until they have run `steps` times.
    * Each time a pass is run, it will check that the graph module still maintains the required invariants by calling `check()` and will lint the graph to check that it’s well formed if the flag `run_checks_after_each_pass` is set.
* `check(graph_module: DispatchGraphModule)`: Runs various checks on the given graph module to make sure that it contains the needed data for passes
* `add_check(check: Callable)`: Adds the `check` function to the given pass manager instance
* `add_constraint(constraint: Callable)`: Adds a constraint to the current list of constraints

We can create a PassManager and run it by doing:
```
PassManager(passes=[pass1, pass2])(graph_module)
```
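A self-contained sketch of the loop semantics described above (run until the graph stops changing or for `steps` iterations); this illustrates the pattern only and is not the actual torch.fx implementation:

```python
from typing import Callable, List

class SimplePassManager:
    def __init__(self, passes: List[Callable], steps: int = 10,
                 run_checks_after_each_pass: bool = False):
        self.passes = passes
        self.steps = steps
        self.run_checks_after_each_pass = run_checks_after_each_pass
        self.checks: List[Callable] = []

    def add_check(self, check: Callable) -> None:
        self.checks.append(check)

    def check(self, graph_module) -> None:
        for c in self.checks:
            c(graph_module)

    def __call__(self, graph_module):
        for _ in range(self.steps):
            changed = False
            for p in self.passes:
                # Convention for this sketch: a pass returns True if it modified the module.
                changed = bool(p(graph_module)) or changed
                if self.run_checks_after_each_pass:
                    self.check(graph_module)
            if not changed:
                break
        return graph_module
```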

Differential Revision: [D37523159](https://our.internmc.facebook.com/intern/diff/D37523159)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80531
Approved by: https://github.com/SherlockNoMad
2022-07-15 00:58:43 +00:00
Tim Gates
3a87b47de9 docs: Fix a few typos (#81435)
There are small typos in:
- caffe2/python/recurrent.py
- test/distributed/test_c10d_nccl.py
- test/test_fx.py
- torch/csrc/jit/runtime/autodiff.cpp
- torchgen/gen.py

Fixes:
- Should read `propagation` rather than `propogation`.
- Should read `multiplied` rather than `multuplied`.
- Should read `eliminate` rather than `elminate`.
- Should read `dispatcher` rather than `disaptcher`.

Semi-automated pull request generated by
https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81435
Approved by: https://github.com/ngimel
2022-07-14 04:20:26 +00:00
Jeff Daily
340ae3ca43 [ROCm] unskip test_fx tests (#81125)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81125
Approved by: https://github.com/ngimel
2022-07-14 00:42:16 +00:00
Nikita Shulga
80bf2ea3d9 [CI] Install vision without pep517 (#81074)
If installed with pep517 support, `torchvision` will be built against the released version of PyTorch rather than against the one currently installed on the system

Also update `torchvision` hash to 8a45147f9d and:
 - Added `maskrcnn_resnet50_fpn_v2`, `maskrcnn_resnet50_fpn_v2`, `retinanet_resnet50_fpn_v2`, `ssd300_vgg16`, `fcos_resnet50_fpn` and `ssdlite320_mobilenet_v3_large` to the list of untraceable models
 - Set default input size to (1, 3, 16, 224, 224) for `mvit_v1_b` model
 - Skipped `test_roi_aligned`,`test_batched_nms`, `test_roi_pooled` and `test_roi_align_aligned`  ONNX test (tracked in https://github.com/pytorch/pytorch/issues/81121 )
 - Skipped TorchVision integration tests in `test_package` (tracked in https://github.com/pytorch/pytorch/issues/81115 )

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81074
Approved by: https://github.com/kit1980
2022-07-08 22:53:44 +00:00
Joel Benjamin Schlosser
2d73c8e6e0 Add Dropout1d module
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79545

Approved by: https://github.com/ngimel, https://github.com/albanD
2022-06-15 14:39:07 +00:00
Brian Hirsh
0161e9eb00 [test] attempt to functionalize ops with mutable positional-only args
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76320

Approved by: https://github.com/ezyang
2022-05-19 18:50:34 +00:00
sijiac
efcbbb177e [Re-submit] Make tracer be able to trace different forward functions
Summary: The root module may have multiple forward functions. The current implementation assumes that only the `forward` function can be traced. In this PR, we add a function-name attribute to the Tracer class to enable users to trace other functions (see the sketch below).
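A minimal usage sketch, assuming the attribute landed on the Tracer as `traced_func_name`:

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

    def other_forward(self, x):   # alternative entry point
        return x * 2

tracer = torch.fx.Tracer()
tracer.traced_func_name = "other_forward"   # attribute name assumed from this PR
graph = tracer.trace(M())
print(graph)   # records x * 2 instead of x + 1
```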

Test Plan:
python3 test/test_fx.py TestFX.test_trace_multiple_funcs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77502

Approved by: https://github.com/jamesr66a
2022-05-17 01:05:33 +00:00
Jon Janzen
fa018ef989 Revert "Make tracer be able to trace different forward functions (#77109)"
This reverts commit bf4b6d0dce.
2022-05-13 13:06:47 -07:00
Sijia Chen
bf4b6d0dce Make tracer be able to trace different forward functions (#77109)
Summary: The root module may have multiple forward functions. The current implementation assumes that only the `forward` function can be traced. In this diff, we add a forward-function-name argument to enable users to trace other forward functions

Test Plan: N1903198

Differential Revision: D36157032

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77109
Approved by: https://github.com/jamesr66a
2022-05-13 16:11:23 +00:00
Xiaodong Wang
2291960d3f Back out "record_function: update to use custom_class API" (#76253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76253

We're observing a large QPS regression on the original PR https://github.com/pytorch/pytorch/pull/72302. For the training job we had, it regressed from 720k QPS to 450k QPS (see the test plan in FB internal). We suspect this is because the API was changed from `_record_function_enter` to `_record_function_enter_new`, and we're running experiments to confirm that. Will add more details when the runs in the test plan have finished. For now, it's better to revert the diff to unblock internal use cases, and we can think about how to reland this diff later.

Original commit changeset: dc9939f1fa6d

Original Phabricator Diff: D35257354

Test Plan:
on trunk: f338665947

with this diff: f338502850

Reviewed By: malfet, robieta

Differential Revision: D35853300

fbshipit-source-id: dd38042aeacb848f66756491a4c849c7c652a0e1
2022-04-26 17:49:57 -04:00
Peter Bell
cb37e7a080 Remove F.pad python implementation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73433

Approved by: https://github.com/albanD, https://github.com/jbschlosser
2022-04-23 00:13:20 +00:00
Alban Desmaison
eb69e8a3ed Revert "Revert "record_function: update to use custom_class API""
This reverts commit 3f9f35b9f8.

This should be done via a clean revert as this has been in master for a long time.
Doing a quick fix here to make sure we don't break master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76172
Approved by: https://github.com/atalman
2022-04-21 14:18:28 +00:00
PyTorch MergeBot
3f9f35b9f8 Revert "record_function: update to use custom_class API"
This reverts commit 5630c5ac75.

Reverted https://github.com/pytorch/pytorch/pull/72302 on behalf of https://github.com/atalman
2022-04-21 13:59:48 +00:00
Peter Bell
5630c5ac75 record_function: update to use custom_class API
Merge after forward-compatibility period is over.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72302

Approved by: https://github.com/albanD
2022-03-30 15:57:28 +00:00
James Reed
a2d2610ec9 [FX] Assert None concrete_args and improve error messages (#74662)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74662

Previously, we would not emit a check that `concrete_args` with value `None` matched that value at runtime. This fixes that and improves some of the warning messages.
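A minimal sketch of the behavior being fixed: specializing a trace on a `None` argument should now emit a runtime check for it.

```python
import torch
import torch.fx

def f(x, flag):
    if flag is None:
        return x + 1
    return x - 1

# Specialize the trace on flag=None; the generated code should now verify this at call time.
gm = torch.fx.symbolic_trace(f, concrete_args={"flag": None})
print(gm.code)
```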

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D35137362

Pulled By: jamesr66a

fbshipit-source-id: 222a2c8a907748f90290f1c1b4ab8012b46099a0
(cherry picked from commit b960405ad87e57dcf62ca25dd4d4bdfc34c8744c)
2022-03-25 23:36:27 +00:00
Shiyan Deng
3f164e0395 [reland] Process inputs and outputs in fx interpreter (#74637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74637

Forgot to update the expect file in https://github.com/pytorch/pytorch/pull/74242. Reland to include changes in expect file.

Test Plan: unit test

Reviewed By: yinghai

Differential Revision: D35089989

fbshipit-source-id: 5e3ad9c696cf31cbc691d34fdb77eff26f92e38d
(cherry picked from commit 110ac12f5e2bcca7552d4b4691c7d98fafb21a57)
2022-03-24 18:32:57 +00:00
Michael Suo
bf5e25f3a9 Revert D34898108: Process inputs and outputs in fx interpreter
Test Plan: revert-hammer

Differential Revision:
D34898108 (f65594fc9f)

Original commit changeset: 250bd236f6c8

Original Phabricator Diff: D34898108 (f65594fc9f)

fbshipit-source-id: 5f634bbc0b393ebcacc0298fd86505a26637ea84
(cherry picked from commit 5804247425afd758d6df6e935374f6965a1c0f54)
2022-03-22 19:14:24 +00:00
Shiyan Deng
f65594fc9f Process inputs and outputs in fx interpreter (#74242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74242

The inputs and outputs of the graph module might differ from the graph inputs and outputs if users are using custom codegen. The Interpreter runs the graph instead of the generated forward function, so it might not work if the user provides the inputs intended for the graph module. To fill the gap, we call `process_inputs` and `process_outputs` inside the Interpreter.
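For reference, a minimal Interpreter run (with the default codegen, `process_inputs`/`process_outputs` are identity functions; the change matters when a custom codegen is installed on the graph):

```python
import torch
import torch.fx

def f(a, b):
    return a + b

gm = torch.fx.symbolic_trace(f)
out = torch.fx.Interpreter(gm).run(torch.ones(2), torch.ones(2))
print(out)   # tensor([2., 2.])
```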

Test Plan: unit test: test_interpreter_with_codegen

Reviewed By: jamesr66a, Chillee

Differential Revision: D34898108

fbshipit-source-id: 250bd236f6c8c1268a363cf19a09521a4f64b3a9
(cherry picked from commit b33076fa3b10788d455cecc590bc01c4ad8ef94c)
2022-03-22 17:26:01 +00:00
Jane Xu
6ecd13dfef Add super() calls for Fx TestCases (#74216)
Summary:
The fx test case wasn't disabled properly because it didn't call the parent class' setUp().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74216

Reviewed By: zou3519

Differential Revision: D34898707

Pulled By: janeyx99

fbshipit-source-id: 83e56f5a1efc50d24646c182160f7cfcb5bc9935
(cherry picked from commit bb8dd72d1640c1ef0201d615c5d405479afdf078)
2022-03-16 22:12:56 +00:00
Shiyan Deng
f98b316f13 Preserve codegen on fx graph in transformer (#74189)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74189

Use the codegen on the original graph module for the new graph module produced by transformer.

Test Plan: Added a unit test: test_custom_codegen_with_transformer

Reviewed By: yinghai

Differential Revision: D34867938

fbshipit-source-id: fcda6600faeccfa7a650ba7226ca125e8440b19c
(cherry picked from commit d098c12081f61ddcf69052db5b8a1f31b0a0b67b)
2022-03-16 16:33:44 +00:00
James Reed
b68f227709 [FX] Disable buffer tracing test due to SEV remediation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74207

Approved by: https://github.com/malfet
2022-03-14 23:54:38 +00:00
James Reed
6a44efa888 [FX] Fix bare generic type annotations (#74135)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/74135

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D34839339

Pulled By: jamesr66a

fbshipit-source-id: fd026cab684acaae9bf7c2fa4228ed8eb7aeb788
(cherry picked from commit 3acc565324e78bbabde3f796db9f5fcc99394d6b)
2022-03-14 23:30:53 +00:00
Animesh Jain
7ebab9247d FX graph module - prevent infinite recursion (#73866)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73866

super(type(self), self) in wrapped_call leads to infinite recursion for a subclass of an FX GraphModule. This happens when we call _stateless.functional_call on an FX module: https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/_stateless.py
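A plain-Python illustration of why `super(type(self), self)` recurses for a subclass instance:

```python
class Base:
    def call(self):
        # Resolves relative to the *dynamic* type of self: for a Sub instance this is
        # super(Sub, self), which dispatches right back into Base.call forever.
        # super() / super(Base, self) would be safe here.
        return super(type(self), self).call()

class Sub(Base):
    def call(self):
        return super().call()

try:
    Sub().call()
except RecursionError:
    print("super(type(self), self) recursed, as described above")
```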

Test Plan:
Tests added in https://github.com/pytorch/pytorch/pull/62436

Imported from OSS

Reviewed By: jansel

Differential Revision: D34737828

fbshipit-source-id: 871b897e1210173ccc83fe34d53fc41af00a39ee
(cherry picked from commit 3d0c5fc71503fa2782b497a9d39ce26288fd219b)
2022-03-09 06:09:57 +00:00
anjali411
beda4e8b2f Fix fx tracing for OpOverload (#73940)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73940

Test Plan: Imported from OSS

Reviewed By: zhxchen17

Differential Revision: D34727831

Pulled By: anjali411

fbshipit-source-id: 26e7044a1d5ba9ee0854bda784633b134971074b
(cherry picked from commit 69685e19b3de5ea3f494464eddcce44e93cb0f4d)
2022-03-08 21:47:55 +00:00
James Reed
a8d9fbb021 [FX] Make immutable_list and immutable_dict work with pytrees (#73766)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73766

Test Plan: Imported from OSS

Reviewed By: zou3519, Chillee

Differential Revision: D34630217

Pulled By: jamesr66a

fbshipit-source-id: f23420deaeed7e54d5e6759b486ca4a02243a7b3
(cherry picked from commit 8854c60e60e79b144077f3021d305ea3d06a2a21)
2022-03-04 19:35:41 +00:00
Jay Banerjee
5332d8705b [FX lowering] Modify replace_all_uses_with to allow filtering of nodes to update; use it to (#73763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73763

The test that is enabled generates a graph as such:

```
linear_25 --> sigmoid_14 --> output_1
         \--> output_2
```
Before this diff, (unpadding) layout_transform nodes would be added as follows:

```
linear_25 --> layout_xform1 --> sigmoid_14 --> layout_xform2--> output_1
                           \--> output_2
```
This causes an assertion to fail for the sigmoid node where the input and output types
don't match due to padding differences.

This diff modifies the replacement algorithm to not affect users of an output's parent node
when the user requires padded inputs. This yields the following graph instead:

```
linear_25 --> sigmoid_14 --> layout_xform2--> output_1
         \--> layout_xform1 --> output_2
```
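For illustration, the same idea expressed with the public FX API, assuming the filter is exposed as the `delete_user_cb` callback on `Node.replace_all_uses_with` (the toy graph below stands in for the linear/sigmoid example above):

```python
import torch
import torch.fx

def f(x):
    a = torch.relu(x)
    return torch.sigmoid(a), a    # two users of `a`

gm = torch.fx.symbolic_trace(f)
graph = gm.graph
relu = next(n for n in graph.nodes if n.target is torch.relu)

# Insert a transform after relu, but rewire only the sigmoid user;
# the direct use of `a` in the output keeps pointing at the original node.
with graph.inserting_after(relu):
    xform = graph.call_function(torch.neg, (relu,))
relu.replace_all_uses_with(xform, delete_user_cb=lambda user: user.target is torch.sigmoid)

gm.recompile()
print(gm.code)
```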

Test Plan: Manually and CI

Reviewed By: jfix71, dborkovic

Differential Revision: D34623590

fbshipit-source-id: 3834b06c95fc5626eccc282216cbe039ac5a3242
(cherry picked from commit af012372ae1a6bb654b0ed9b765993960d5251e4)
2022-03-04 19:35:41 +00:00
Kushashwa Ravi Shrimali
452c26bbeb Fix functional.max_poolNd warning spam in the CI
Fixes https://github.com/pytorch/pytorch/issues/71257.

Warnings have been removed, please see [this](https://github.com/pytorch/pytorch/pull/71258#issuecomment-1058503649) comment.

cc: @Lezcano @jbschlosser @zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71258
Approved by: https://github.com/Lezcano, https://github.com/jbschlosser
2022-03-04 18:42:23 +00:00
James Reed
dae7ed179f [FX] Make module getattr wrapper proxy buffers (#73612)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73612

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D34568113

Pulled By: jamesr66a

fbshipit-source-id: 95a7106cf6ce45999c1b3c06b34965e725961771
(cherry picked from commit 54841e028478ea641fb4d7895f726553b8b48353)
2022-03-03 04:32:49 +00:00
Jordan Fix
987f146185 [fx] Improve support for tuple subclasses such as NamedTuple (#73198)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73198

Previously, if an arg to an FX node was a subclass of tuple, it got sanitized essentially back to that base class. For example, when an arg is set to a TensorMetadata object, which is a NamedTuple, it ends up stored as a plain tuple instead.

- Change `map_aggregate` to repack the tuple to `type(a)` when it's not directly a tuple (try/except for best attempt)
- During codegen, call `add_global` for `type(a)` if it's not directly a tuple.
- Add an option for an arg to provide a `_custom_fx_repr_fn` for use inside stringifying via `_format_arg`
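A small sketch of the first point: after this change, `map_aggregate` should hand back the same tuple subclass it was given.

```python
from typing import NamedTuple
from torch.fx.node import map_aggregate

class Meta(NamedTuple):
    shape: tuple
    requires_grad: bool

m = Meta((2, 3), False)
out = map_aggregate(m, lambda leaf: leaf)
print(type(out).__name__)   # expected 'Meta' rather than a plain tuple
```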

Test Plan: Added unit test coverage, where we inline the named tuple into arg/kwarg.

Reviewed By: jamesr66a

Differential Revision: D34381888

fbshipit-source-id: bd672a8542e2bba5aa604b448bec920efc256440
(cherry picked from commit 68f99c12dd)
2022-02-23 11:31:10 +00:00
Vitaly Fedyunin
81fbeea760 Add docstrings to native_channel_shuffle (#72919)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72919

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D34274717

Pulled By: VitalyFedyunin

fbshipit-source-id: fa42f91ef2335e2594b19ef65d914c711f7a94fd
(cherry picked from commit a6f6fe9112)
2022-02-17 02:33:08 +00:00
Horace He
d635d0f86e Refactor FX codegen into extensible Codegen object (#72566)
Summary:
The goal of this is to make FX's codegen extensible. I've refactored it into a class with 5 extensibility points on it.

```
class Codegen(object):
    def generate_prologue(self, free_vars: List[str], maybe_return_annotation: str) -> str:
        """
        Given the free variables and a return annotation, generates the beginning of the FX function.
        By default, `generate_prologue(['a', 'b'], '') == 'def forward(a, b):'`
        """
    def generate_output(self, output_args: Argument) -> str:
        """
        Given the output arguments, generates the return statement of the FX function.
        """
    def process_inputs(self, args: Any) -> Any:
        """
        Transforms the inputs so that the graph can take them as arguments, as
        non-default codegen may result in the inputs to the function being
        different from the inputs to the graph.

        If the graph was directly runnable, this invariant should hold true
        `f.process_outputs(f.graph(*f.process_inputs(*inputs))) == f(*inputs)`
        """
    def process_outputs(self, outputs: Any) -> Any:
        """
        Transforms the outputs of the graph to be identical to the codegen.

        See ``process_inputs`` for more details.
        """
    def additional_globals(self) -> List[Tuple[str, Any]]:
        """
        If your codegen uses extra global values, add them here.
        For example, return ['List', typing.List] if you need ``List`` in the global context.
        """
```

So, for example, the `ListCodeGen` we want for AOTAutograd looks like this
```
        class ListCodeGen(CodeGen):
            def generate_prologue(self, free_vars, maybe_return_annotation):
                lst_unpack = f"""
def forward(self, args_list: List[torch.Tensor]){maybe_return_annotation}:
    {', '.join(free_vars)} = args_list"""
                return lst_unpack

            def additional_globals(self):
                return [('List', typing.List)]

            def process_inputs(self, *inputs):
                assert(len(inputs) == 1)
                return inputs[0]
```
and
```
        def f(a, b):
            return a + b

        nf = fx.symbolic_trace(f)
        nf.graph.set_codegen(ListCodeGen())
        nf.recompile()
        print(nf.code)
```
would result in
```
def forward(self, args_list: List[torch.Tensor]):
    a, b = args_list
    add = a + b;  a = b = None
    return add
```

Backwards compatibility changes - I added `process_outputs` and `process_inputs` to `fx.Graph`, while removing `flatten_inputs` and `flatten_outputs` - those didn't have `backwards_compatibility` on them, so I *think* it's probably fine?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72566

Reviewed By: desertfire

Differential Revision: D34160424

Pulled By: Chillee

fbshipit-source-id: ebf6411312b373e3fbcb13288a34befa449a2375
(cherry picked from commit 13cd12eaa1)
2022-02-11 18:13:29 +00:00
James Reed
3f6643e661 [FX] Fix default argument handling for Interpreter (#72272)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72272

Test Plan: Imported from OSS

Reviewed By: dagitses

Differential Revision: D33984862

Pulled By: jamesr66a

fbshipit-source-id: 7d89901c2041806df86c9b08f3af731f3afc9100
(cherry picked from commit f79f0e451e)
2022-02-04 01:46:20 +00:00
Peter Bell
e8d226cd9a Remove some unnecessary python functional wrappers (#61608)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61608

See #61544 for an example of issues created by functional wrappers. In this
case, these are directly wrapping the native function with no added
functionality. One exception was `bilinear` which was just missing the default
argument in C++, but was otherwise the same.

I've kept the symbol `torch.functional.istft` because it looks like public API,
but it could just as easily be moved to `_torch_docs.py`.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31401361

Pulled By: albanD

fbshipit-source-id: 162b74d0b2d4f2e5c4834687a94541960cefdd52
(cherry picked from commit 700cd73ca1)
2022-02-01 16:59:26 +00:00
Nikita Shulga
74c44ba9d6 Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33850228 (23d03025dc)

Original commit changeset: 3cc33fb298e4

Original Phabricator Diff: D33850228 (23d03025dc)

fbshipit-source-id: 9436e7df73c2b2e2011f321674f24973316d3692
(cherry picked from commit c9efb58223)
2022-01-31 17:44:19 +00:00
Ryan Spring
23d03025dc Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate : str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: cpuhrsch

Differential Revision: D33850228

Pulled By: jbschlosser

fbshipit-source-id: 3cc33fb298e480d7ecc5c67716da019d60c6ab33
(cherry picked from commit 3a53b3e94f)
2022-01-31 17:07:45 +00:00
Joel Schlosser
cb823d9f07 Revert D33744717: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33744717 (f499ab9cef)

Original commit changeset: d64532a562ed

Original Phabricator Diff: D33744717 (f499ab9cef)

fbshipit-source-id: 396c3f63de5865f894dbc353d0790a01a624be93
(cherry picked from commit e9fb2d1db1)
2022-01-28 18:35:01 +00:00
Ryan Spring
f499ab9cef Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate : str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: mikaylagawarecki

Differential Revision: D33744717

Pulled By: jbschlosser

fbshipit-source-id: d64532a562ed53247bb4fa52bb16722634d5c187
(cherry picked from commit 4713dd9cca)
2022-01-28 16:59:09 +00:00
Zachary DeVito
7bc5962329 Trace asserts with fx by looking at byte code (#70960)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70960

This patch uses some bytecode introspection logic to see if a boolean is being used as an assert condition and, if so, records the assert in the FX graph and allows the trace to continue.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D33570397

Pulled By: zdevito

fbshipit-source-id: 99d26cede8fe42c96d4032d9353c1ede7eb3d969
(cherry picked from commit 30d002da25)
2022-01-28 02:04:21 +00:00
Jason Ansel
567c2bb8e9 Support printing inplace operators in FX (#71887)
Summary:
Pretty-print inplace operators (`a += b`, etc.) in generated FX code. This is useful because it allows `torch.jit.script()` to parse these operators without error.

I don't believe FX tracing supports inplace ops yet, though I am generating them in TorchDynamo and want to be able to lower them with TorchScript.
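A minimal sketch: build a graph containing `operator.iadd` by hand and look at the generated code, which is expected to use the `+=` form.

```python
import operator
import torch
import torch.fx

g = torch.fx.Graph()
x = g.placeholder("x")
y = g.placeholder("y")
acc = g.call_function(operator.iadd, (x, y))
g.output(acc)

gm = torch.fx.GraphModule(torch.nn.Module(), g)
print(gm.code)   # expected to pretty-print the call as `x += y`
```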

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71887

Reviewed By: jamesr66a

Differential Revision: D33806248

Pulled By: jansel

fbshipit-source-id: 5eb9f744caab2f745cefc83ea658e12e9e7a817d
(cherry picked from commit eacbd6bb83)
2022-01-27 20:35:22 +00:00
Philip Meier
d4d0ab71b3 use torch.testing.assert_equal in TestCase.assertEqual (#67796)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67796

Supersedes #58981.

cc mruberry

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D33542994

Pulled By: mruberry

fbshipit-source-id: 527099f5fdc154fd95ee48cd19f0a85eeec43443
(cherry picked from commit 1a58915e2c)
2022-01-27 08:33:55 +00:00
Richard Zou
620a1fcb55 OpInfos for: normal, bernoulli, multinomial (#66358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66358

Test Plan: - run tests

Reviewed By: mruberry

Differential Revision: D31551695

Pulled By: zou3519

fbshipit-source-id: cf1b43118a0414a1af9ece9ae8c0598b2701aa0a
2021-12-14 06:59:38 -08:00
Vasiliy Kuznetsov
2dd46d3aa9 FX: ensure node stack trace survives copying (#69368)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69368

Before this PR, copying a node would lose the stack trace. This PR
ensures that the stack trace is preserved across copies.

This is useful because quantization passes would like to start
allowing the user to preserve stack traces, and we use the copy
behavior.

Test Plan:
```
python test/test_fx.py TestFX.test_stack_traces
```

Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D32835248

fbshipit-source-id: 91610fd8d05f5683cfa5e11fb6f9f3feacb8e241
2021-12-07 06:18:38 -08:00
Michael Suo
0aa9d177fe [fx] remove CPatcher (#69032)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69032

I am removing it because, for packaging-related reasons, it's easier if
torch.fx is a pure Python module.

I don't think there is much reason to keep it: this functionality was
experimental, has no known users currently, and we didn't have a clear
path to turning it on by default due to regressions in tracing
performance. Also, it was only ever enabled for `rand` and friends.

Technically the removal of the `enable_cpatching` arguments on
`symbolic_trace` and `Tracer.__init__` are BC-breaking, but the
docstrings clearly state that the argument is experimental and BC is not
guaranteed, so I think it's fine.

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D32706344

Pulled By: suo

fbshipit-source-id: 501648b5c3610ae71829b5e7db74e3b8c9e1a480
2021-11-30 11:59:57 -08:00
Richard Zou
d4ae789655 OpInfos for new_blah functions and some _like functions (#67357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67357

This PR adds OpInfos for:
- new_ones, new_zeros, new_full, new_empty
- rand_like, randint_like

I forgot to add the _like functions in a previous PR, so here they are.

Test Plan: - wait for tests

Reviewed By: mruberry

Differential Revision: D31969533

Pulled By: zou3519

fbshipit-source-id: 236d70d66e82f1d6f8e5254b55ca2a37b54c9494
2021-11-11 07:21:23 -08:00
Horace He
0b2f68eadf Remove special FX OpInfo list (#67520)
Summary:
Most of the tests were failing because the test doesn't work with Python functions (only builtins like `torch.add`).

I added a check for that and ported the remaining skips into the `skips` field.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67520

Reviewed By: ZolotukhinM

Differential Revision: D32046856

Pulled By: Chillee

fbshipit-source-id: 05fa3e3c40fa6cc4f776e0c24f667629b14afd25
2021-11-02 16:01:46 -07:00
Saketh Are
b24c34426f Add OpInfo for torch.unique and torch.unique_consecutive (#67529)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67529

Reviewed By: pbelevich

Differential Revision: D32045941

Pulled By: saketh-are

fbshipit-source-id: fefea1ddabcd3c4b40e9374b991410626437cdb4
2021-10-30 08:33:41 -07:00
Shiyan Deng
4b9464f4b9 [fx]Early return if a node tries prepend self (#67068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67068

Prepending a node to itself results in the node getting removed from the graph.

People rarely prepend a node to itself on purpose, but they may accidentally append a node that is already right after `self`, which amounts to prepending `self` to `self`.
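A small reproduction of the accidental case (the pre-fix behavior described here is taken from this commit message):

```python
import torch
import torch.fx

def f(x):
    return torch.relu(x) + 1

gm = torch.fx.symbolic_trace(f)
relu = next(n for n in gm.graph.nodes if n.target is torch.relu)
add = relu.next                # the node immediately after relu

# append() is implemented via prepend() on the next node, so this is add.prepend(add).
relu.append(add)               # previously this silently unlinked `add`; now it's a no-op
gm.graph.lint()                # graph is still intact
print(gm.code)
```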

Test Plan: Added a unit test

Reviewed By: jamesr66a

Differential Revision: D31849030

fbshipit-source-id: b0fdfbb893f785f268595acd823b426d57c15e61
2021-10-27 10:49:45 -07:00
Pearu Peterson
333717eaf0 Improve assert failure message in test_get_torch_func_signature_exhaustive (#67039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67039

cc mruberry

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31899719

Pulled By: cpuhrsch

fbshipit-source-id: 819d07da5b18b31d462010b9f9382e0b8cd10f9f
2021-10-25 14:20:38 -07:00
Saketh Are
33790c4e06 Implement histogramdd on CPU (#65318)
Summary:
Implements `torch.histogramdd` analogous to `numpy.histogramdd`.

Builds on https://github.com/pytorch/pytorch/pull/58780, generalizing the existing `torch.histogram` kernel to handle D-dimensional inputs.
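Basic usage for reference:

```python
import torch

points = torch.rand(1000, 2)                        # 1000 points in 2-D
hist, edges = torch.histogramdd(points, bins=[4, 4])
print(hist.shape)                                   # torch.Size([4, 4])
print([e.shape for e in edges])                     # two edge tensors of length 5
```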

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65318

Reviewed By: soulitzer

Differential Revision: D31654555

Pulled By: saketh-are

fbshipit-source-id: 14b781fac0fd3698b052dbd6f0fda46e50d4c5f1
2021-10-21 16:09:31 -07:00
Jane Xu
9ea3424747 Set test owner for fx (#66807)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66807

Reviewed By: jamesr66a

Differential Revision: D31736722

Pulled By: janeyx99

fbshipit-source-id: 5ffcb02a858137211bff1eabf158001dcb0359a6
2021-10-18 12:25:38 -07:00
Pearu Peterson
472a6f2787 Strided masked reductions: sum, amax. Testing of masked reductions. (#65990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65990

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D31729532

Pulled By: albanD

fbshipit-source-id: 855a6bb2a7c6e75c780a64ce23c0f29321f0e511
2021-10-18 11:10:32 -07:00
Kushashwa Ravi Shrimali
909694fd88 Fix nn.functional.max_poolNd dispatch (for arg: return_indices) (#62544)
Summary:
Please see https://github.com/pytorch/pytorch/issues/62545 for context.

The order of `return_indices, ceil_mode` in the `nn.functional.max_poolNd` functions differs from what is seen with `torch.nn.MaxPoolNd` (the module form). While this should be resolved in the future, it was decided to first raise a warning that the behavior will change. (Please see https://github.com/pytorch/pytorch/pull/62544#issuecomment-893770955 for more context.)

This PR thus raises appropriate warnings and updates the documentation to show the full signature (along with a note) for `torch.nn.functional.max_poolNd` functions.

**Quick links:**

(_upstream_)

* Documentation of [`nn.functional.max_pool1d`](https://pytorch.org/docs/1.9.0/generated/torch.nn.functional.max_pool1d.html), [`nn.functional.max_pool2d`](https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool2d.html), and [`nn.functional.max_pool3d`](https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool3d.html).

(_this branch_)

* Documentation of [`nn.functional.max_pool1d`](https://docs-preview.pytorch.org/62544/generated/torch.nn.functional.max_pool1d.html?highlight=max_pool1d), [`nn.functional.max_pool2d`](https://docs-preview.pytorch.org/62544/generated/torch.nn.functional.max_pool2d.html?highlight=max_pool2d#torch.nn.functional.max_pool2d), and [`nn.functional.max_pool3d`](https://docs-preview.pytorch.org/62544/generated/torch.nn.functional.max_pool3d.html?highlight=max_pool3d#torch.nn.functional.max_pool3d).

cc mruberry jbschlosser

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62544

Reviewed By: gchanan

Differential Revision: D31179038

Pulled By: jbschlosser

fbshipit-source-id: 0a2c7215df9e132ce9ec51448c5b3c90bbc69030
2021-10-18 08:34:38 -07:00
Richard Zou
d810e738b9 OpInfo for *_like functions (#65941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65941

OpInfos for: empty_like, zeros_like, ones_like, full_like, randn_like

Test Plan: - run tests

Reviewed By: dagitses

Differential Revision: D31452625

Pulled By: zou3519

fbshipit-source-id: 5e6c45918694853f9252488d62bb7f4ccfa1f1e4
2021-10-14 09:14:51 -07:00
Richard Zou
5d4452937d OpInfos for some Tensor dtype conversion methods (#64282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64282

OpInfos for:
- Tensor.bfloat16, Tensor.bool, Tensor.byte, Tensor.char
- Tensor.double, Tensor.float, Tensor.half, Tensor.int
- Tensor.short, Tensor.long

None of these are supported by TorchScript. Also, the OpInfo autograd
test runner assumes that the operation is not allowed to change the
dtype of the argument, so only Tensor.double has
`supports_autograd=True` (in theory Tensor.bfloat16, Tensor.float,
Tensor.half should be differentiable).

Test Plan: - run tests

Reviewed By: dagitses

Differential Revision: D31452627

Pulled By: zou3519

fbshipit-source-id: b7f272e558558412c47aefe947af7f060dfb45c5
2021-10-14 09:13:30 -07:00
lezcano
82a216c45b Add tensor.{adjoint(),H,mT,mH} methods and properties (#64179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64179

This PR follows the discussion in https://github.com/pytorch/pytorch/issues/45063#issuecomment-904431478

Fixes https://github.com/pytorch/pytorch/issues/45063
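For reference, the new methods and properties in action:

```python
import torch

x = torch.randn(2, 3, 4, dtype=torch.complex64)
print(x.mT.shape)                                   # (2, 4, 3): transposes the last two dims
torch.testing.assert_close(x.mH, x.mT.conj())       # mH / adjoint() is the conjugate transpose
torch.testing.assert_close(x.adjoint(), x.mH)

m = torch.randn(3, 3, dtype=torch.complex64)
torch.testing.assert_close(m.H, m.t().conj())       # .H is the matrix (2-D) version
```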

cc ezyang anjali411 dylanbespalko mruberry Lezcano nikitaved rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi heitorschueroff

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D30730483

Pulled By: anjali411

fbshipit-source-id: 821d25083f5f682450f6812bf852dc96a1cdf9f2
2021-10-13 07:44:43 -07:00
James Reed
3eb9443619 [FX] Fix issue where GraphModule.delete_all_unused_submodules deletes submodules from called leaf modules (#66430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66430

On the whole, I'm not totally satisfied with this approach. I think we should be building a prefix tree data structure during initial iteration over the submodules and querying that when deleting submodules. But I think this approach works and I want to see if we can get it in before 1.10

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D31546137

Pulled By: jamesr66a

fbshipit-source-id: f08b8409a3cf511277017ccccb916097b7c4c4fe
2021-10-11 19:37:51 -07:00
Horace He
300613dc60 make FX symbolic tracing reuse buffers if they're the same (#66211)
Summary:
Currently, if the same tensor constant is reused multiple times, we'll store a tensor constant for each time we use it.

For example
```
val = torch.randn(5)
for _ in range(10):
    x = x + val
```
ends up storing 10 tensor constants.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66211

Reviewed By: jamesr66a

Differential Revision: D31437089

Pulled By: Chillee

fbshipit-source-id: 401169c8d58ce0afb7025ae11060680ef544419f
2021-10-06 18:35:38 -07:00
Yinghai Lu
6b0aa2958d [FX] Support torch.layout as arg (#66048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66048

Previously, create_arg would fail if it encountered a non-`None` layout argument. Adding it to the `BaseArgumentTypes` list should be enough to fix that.

Test Plan: Added unittest

Reviewed By: jamesr66a

Differential Revision: D31362662

fbshipit-source-id: 20049971e18c17e9c75e50540500c567266daa55
2021-10-04 19:58:08 -07:00
Jason Ansel
487c771593 [FX] Fix tracing of bitwise and/or (#65196)
Summary:
Previously resulted in `AttributeError: module 'operator' has no attribute 'and'`

`and`/`or` are Python keywords, so the operator module names them `operator.and_` and `operator.or_`.
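A quick check of the fixed behavior:

```python
import operator
import torch
import torch.fx

def f(a, b):
    return a & b, a | b

gm = torch.fx.symbolic_trace(f)
targets = [n.target for n in gm.graph.nodes if n.op == "call_function"]
print(targets)   # [operator.and_, operator.or_] -- the underscore-suffixed names
print(gm.code)   # generated code now references the correct attribute names
```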

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65196

Reviewed By: Chillee

Differential Revision: D31020336

Pulled By: jansel

fbshipit-source-id: 51d888151fe78c0c1197ecaf161976b219c59694
2021-09-17 14:33:02 -07:00
James Reed
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
James Reed
9117eed6ed [FX] Add torch.ops.profiler._record_function_{enter,exit} as stateful ops for DCE (#65180)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65180

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31007115

Pulled By: jamesr66a

fbshipit-source-id: 823b15db712a382a4f2a4fd409983d47bc067150
2021-09-16 21:31:54 -07:00
soulitzer
4bf7959de2 Remove run_functional_checks from test_autograd and create necessary OpInfos (#64993)
Summary:
OpInfo tracker: https://github.com/pytorch/pytorch/issues/54261

 - Eliminate duplicated testing logic in test_autograd
 - Moved tests that rely on this testing logic to use OpInfos
   - `cat` already has OpInfo (no action needed)
   - Created OpInfo for `block_diag` and `broadcast_tensors`

Running into some FX errors. Added op to skip-list and created an issue here: https://github.com/pytorch/pytorch/issues/64997
Both `block_diag` and `broadcast_tensors` are variadic, so skipping `test_variant_consistency_jit` (from comments on other OpInfos, it looks like JIT does not support variadic tensors)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64993

Reviewed By: jbschlosser

Differential Revision: D30961736

Pulled By: soulitzer

fbshipit-source-id: e169305384a683acae1178c4e12e9e214a67226a
2021-09-15 12:45:38 -07:00
Horace He
35413a16f7 Add __matmul__ to the magic methods for FX tracing (#64512)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/64483
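A quick sketch of what now works:

```python
import torch
import torch.fx

def f(a, b):
    return a @ b     # previously the `@` operator had no magic-method mapping for tracing

gm = torch.fx.symbolic_trace(f)
print(gm.code)
print(gm(torch.randn(2, 3), torch.randn(3, 4)).shape)   # torch.Size([2, 4])
```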

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64512

Reviewed By: mrshenli

Differential Revision: D30797265

Pulled By: Chillee

fbshipit-source-id: 7630e048a960e0b27c4309d04d85301abe325189
2021-09-08 10:03:48 -07:00
kshitij12345
2c351c76e0 [special] Alias igamma, igammac to special.gammainc, special.gammaincc (#61902)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Also added relevant OpInfo

TODO:
* [x] Check rendered docs gammainc : https://docs-preview.pytorch.org/61902/special.html#torch.special.gammainc
* [x] Check rendered docs gammaincc: https://docs-preview.pytorch.org/61902/special.html#torch.special.gammaincc

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61902

Reviewed By: ngimel

Differential Revision: D30761428

Pulled By: mruberry

fbshipit-source-id: 06a16432873357958d53364f12a4e91c29779d26
2021-09-07 15:31:26 -07:00
James Reed
e1c3e5f830 [resubmit][FX] Prototype for guarding against mutable operations in tracing (#64467)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64467

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D30744870

Pulled By: jamesr66a

fbshipit-source-id: fc652f8b17748f90dbeb83fabf3bd5bb57d6ff1a
2021-09-02 21:13:21 -07:00
Eli Uriegas
32a93c2424 Revert D30675780: [FX] Prototype for guarding against mutable operations in tracing
Test Plan: revert-hammer

Differential Revision:
D30675780 (795387477f)

Original commit changeset: b2116b51dcc8

fbshipit-source-id: d4f1173f4989556ea54974f4c2739ef85a705fae
2021-09-02 16:07:29 -07:00
James Reed
795387477f [FX] Prototype for guarding against mutable operations in tracing (#64295)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64295

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D30675780

Pulled By: jamesr66a

fbshipit-source-id: b2116b51dcc87357f0c84192c4c336680875e27a
2021-09-02 15:17:04 -07:00
Patrick Hu
c6505cc383 [FX] Fix python code generation for wrapped getattr() with default value (#64271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64271

Closes #60417

Modified emit_node() in fx/graph.py to generate a getattr() call with a default value when len(node.args) != 2, instead of accessing the attribute directly.
Added test_torch_fx_getattr() in test/test_fx.py.
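A small sketch of the generated code path (graph built by hand for illustration):

```python
import torch
import torch.fx

g = torch.fx.Graph()
x = g.placeholder("x")
# A call_function node targeting builtins.getattr with a default value (three args).
shape = g.call_function(getattr, (x, "shape", None))
g.output(shape)

gm = torch.fx.GraphModule(torch.nn.Module(), g)
print(gm.code)                    # emits `getattr(x, 'shape', None)` rather than `x.shape`
print(gm(torch.randn(2, 3)))      # torch.Size([2, 3])
```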

Test Plan:
pytest test/test_fx.py

Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D30671265

fbshipit-source-id: f2db9ea47e0cb247547e200684f715aab006c374
2021-09-01 11:30:27 -07:00
Jay Leverett
44fcb00a56 Fix redundant class definition in GraphModule singleton constructor (#64274)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/63883

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64274

Reviewed By: jamesr66a

Differential Revision: D30675970

Pulled By: jayleverett

fbshipit-source-id: e74ef2a28013f0fa7c58d14f38e66cfe48d26b74
2021-08-31 17:34:14 -07:00
James Reed
538647fe1f [WIP][FX] BC guarantees for 1.10 (#63888)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63888

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D30523133

Pulled By: jamesr66a

fbshipit-source-id: b04cc0d842a74862f42ecba98b757310cd2ec7b0
2021-08-30 19:56:46 -07:00