Commit Graph

247 Commits

Author SHA1 Message Date
Brian Hirsh
0161e9eb00 [test] attempt to functionalize ops with mutable positional-only args
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76320

Approved by: https://github.com/ezyang
2022-05-19 18:50:34 +00:00
sijiac
efcbbb177e [Re-submit] Make tracer be able to trace different forward functions
Summary: The root module may have multiple forward functions. The current implementation assumes that only the function `forward` can be traced. In this PR, we add a function-name attribute to the Tracer class so users can trace other functions.
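
For illustration, a minimal sketch of how this could be used; the attribute name `traced_func_name` is an assumption here, not confirmed by this summary:

```
import torch
from torch import fx

class MyModule(torch.nn.Module):
    def forward(self, x):
        return x + 1

    def forward_alt(self, x):
        return x * 2

# Point the tracer at a non-default forward function before tracing.
tracer = fx.Tracer()
tracer.traced_func_name = "forward_alt"  # assumed attribute name
graph = tracer.trace(MyModule())
gm = fx.GraphModule(tracer.root, graph)
print(gm.code)
```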

Test Plan:
python3 test/test_fx.py TestFX.test_trace_multiple_funcs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77502

Approved by: https://github.com/jamesr66a
2022-05-17 01:05:33 +00:00
Jon Janzen
fa018ef989 Revert "Make tracer be able to trace different forward functions (#77109)"
This reverts commit bf4b6d0dce.
2022-05-13 13:06:47 -07:00
Sijia Chen
bf4b6d0dce Make tracer be able to trace different forward functions (#77109)
Summary: The root module may have multiple forward functions. The current implementation assumes that only the function `forward` can be traced. In this diff, we add an argument for the forward function name so users can trace other forward functions.

Test Plan: N1903198

Differential Revision: D36157032

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77109
Approved by: https://github.com/jamesr66a
2022-05-13 16:11:23 +00:00
Xiaodong Wang
2291960d3f Back out "record_function: update to use custom_class API" (#76253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76253

We're observing a large QPS regression from the original PR https://github.com/pytorch/pytorch/pull/72302. For the training job we had, it regressed from 720k QPS to 450k QPS (see the test plan in FB internal). We suspect this is because the API was changed from `_record_function_enter` to `_record_function_enter_new`, and we're running experiments to confirm that. We will add more details once the runs in the test plan have finished. For now, it's better to revert the diff to unblock internal use cases, and we can think about how to reland it later.

Original commit changeset: dc9939f1fa6d

Original Phabricator Diff: D35257354

Test Plan:
on trunk: f338665947

with this diff: f338502850

Reviewed By: malfet, robieta

Differential Revision: D35853300

fbshipit-source-id: dd38042aeacb848f66756491a4c849c7c652a0e1
2022-04-26 17:49:57 -04:00
Peter Bell
cb37e7a080 Remove F.pad python implementation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73433

Approved by: https://github.com/albanD, https://github.com/jbschlosser
2022-04-23 00:13:20 +00:00
Alban Desmaison
eb69e8a3ed Revert "Revert "record_function: update to use custom_class API""
This reverts commit 3f9f35b9f8.

This should be done via a clean revert as this has been in master for a long time.
Doing a quick fix here to make sure we don't break master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76172
Approved by: https://github.com/atalman
2022-04-21 14:18:28 +00:00
PyTorch MergeBot
3f9f35b9f8 Revert "record_function: update to use custom_class API"
This reverts commit 5630c5ac75.

Reverted https://github.com/pytorch/pytorch/pull/72302 on behalf of https://github.com/atalman
2022-04-21 13:59:48 +00:00
Peter Bell
5630c5ac75 record_function: update to use custom_class API
Merge after forward-compatibility period is over.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72302

Approved by: https://github.com/albanD
2022-03-30 15:57:28 +00:00
James Reed
a2d2610ec9 [FX] Assert None concrete_args and improve error messages (#74662)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74662

Previously, we would not emit a check that a `concrete_args` entry with value `None` actually matched that value at runtime. This fixes that and improves some of the warning messages.
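
For context, a small sketch of the kind of specialization involved (not taken from the PR's tests):

```
import torch
from torch import fx

def f(x, y=None):
    if y is None:
        return x
    return x + y

# Specializing on y=None bakes the y-is-None branch into the traced code.
# The point of this change: the generated forward should now also check at
# runtime that the y it receives really is None.
gm = fx.symbolic_trace(f, concrete_args={"y": None})
print(gm.code)
```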

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D35137362

Pulled By: jamesr66a

fbshipit-source-id: 222a2c8a907748f90290f1c1b4ab8012b46099a0
(cherry picked from commit b960405ad87e57dcf62ca25dd4d4bdfc34c8744c)
2022-03-25 23:36:27 +00:00
Shiyan Deng
3f164e0395 [reland] Process inputs and outputs in fx interpreter (#74637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74637

Forgot to update the expect file in https://github.com/pytorch/pytorch/pull/74242. Relanding to include the changes to the expect file.

Test Plan: unit test

Reviewed By: yinghai

Differential Revision: D35089989

fbshipit-source-id: 5e3ad9c696cf31cbc691d34fdb77eff26f92e38d
(cherry picked from commit 110ac12f5e2bcca7552d4b4691c7d98fafb21a57)
2022-03-24 18:32:57 +00:00
Michael Suo
bf5e25f3a9 Revert D34898108: Process inputs and outputs in fx interpreter
Test Plan: revert-hammer

Differential Revision:
D34898108 (f65594fc9f)

Original commit changeset: 250bd236f6c8

Original Phabricator Diff: D34898108 (f65594fc9f)

fbshipit-source-id: 5f634bbc0b393ebcacc0298fd86505a26637ea84
(cherry picked from commit 5804247425afd758d6df6e935374f6965a1c0f54)
2022-03-22 19:14:24 +00:00
Shiyan Deng
f65594fc9f Process inputs and outputs in fx interpreter (#74242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74242

The inputs and outputs of the graph module might differ from the graph's inputs and outputs if users are using custom codegen. The Interpreter runs the graph rather than the generated forward function, so it might not work if a user provides the graph module's inputs. To close the gap, we call `process_inputs` and `process_outputs` inside the Interpreter.

Test Plan: unit test: test_interpreter_with_codegen

Reviewed By: jamesr66a, Chillee

Differential Revision: D34898108

fbshipit-source-id: 250bd236f6c8c1268a363cf19a09521a4f64b3a9
(cherry picked from commit b33076fa3b10788d455cecc590bc01c4ad8ef94c)
2022-03-22 17:26:01 +00:00
Jane Xu
6ecd13dfef Add super() calls for Fx TestCases (#74216)
Summary:
The fx test case wasn't disabled properly because it didn't call the parent class' setUp().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74216

Reviewed By: zou3519

Differential Revision: D34898707

Pulled By: janeyx99

fbshipit-source-id: 83e56f5a1efc50d24646c182160f7cfcb5bc9935
(cherry picked from commit bb8dd72d1640c1ef0201d615c5d405479afdf078)
2022-03-16 22:12:56 +00:00
Shiyan Deng
f98b316f13 Preserve codegen on fx graph in transformer (#74189)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74189

Use the codegen of the original graph module for the new graph module produced by the Transformer.

Test Plan: Added a unit test: test_custom_codegen_with_transformer

Reviewed By: yinghai

Differential Revision: D34867938

fbshipit-source-id: fcda6600faeccfa7a650ba7226ca125e8440b19c
(cherry picked from commit d098c12081f61ddcf69052db5b8a1f31b0a0b67b)
2022-03-16 16:33:44 +00:00
James Reed
b68f227709 [FX] Disable buffer tracing test due to SEV remediation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74207

Approved by: https://github.com/malfet
2022-03-14 23:54:38 +00:00
James Reed
6a44efa888 [FX] Fix bare generic type annotations (#74135)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/74135

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D34839339

Pulled By: jamesr66a

fbshipit-source-id: fd026cab684acaae9bf7c2fa4228ed8eb7aeb788
(cherry picked from commit 3acc565324e78bbabde3f796db9f5fcc99394d6b)
2022-03-14 23:30:53 +00:00
Animesh Jain
7ebab9247d FX graph module - prevent infinite recursion (#73866)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73866

`super(type(self), self)` in `wrapped_call` leads to infinite recursion for subclasses of an FX GraphModule. This happens when we call `_stateless.functional_call` on an FX module: https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/_stateless.py
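
A standalone repro of the underlying Python pitfall (no FX involved), to show why the pattern breaks for subclasses:

```
class Base:
    def f(self):
        return "base"

class Mid(Base):
    def f(self):
        # Buggy pattern: for an instance of Sub, type(self) is Sub, so the
        # MRO lookup starts just after Sub and resolves to Mid.f again.
        return super(type(self), self).f()

class Sub(Mid):
    pass

print(Mid().f())  # "base" -- works when type(self) really is Mid
Sub().f()         # RecursionError: Mid.f keeps calling itself
```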

Test Plan:
Tests added in https://github.com/pytorch/pytorch/pull/62436

Imported from OSS

Reviewed By: jansel

Differential Revision: D34737828

fbshipit-source-id: 871b897e1210173ccc83fe34d53fc41af00a39ee
(cherry picked from commit 3d0c5fc71503fa2782b497a9d39ce26288fd219b)
2022-03-09 06:09:57 +00:00
anjali411
beda4e8b2f Fix fx tracing for OpOverload (#73940)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73940

Test Plan: Imported from OSS

Reviewed By: zhxchen17

Differential Revision: D34727831

Pulled By: anjali411

fbshipit-source-id: 26e7044a1d5ba9ee0854bda784633b134971074b
(cherry picked from commit 69685e19b3de5ea3f494464eddcce44e93cb0f4d)
2022-03-08 21:47:55 +00:00
James Reed
a8d9fbb021 [FX] Make immutable_list and immutable_dict work with pytrees (#73766)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73766

Test Plan: Imported from OSS

Reviewed By: zou3519, Chillee

Differential Revision: D34630217

Pulled By: jamesr66a

fbshipit-source-id: f23420deaeed7e54d5e6759b486ca4a02243a7b3
(cherry picked from commit 8854c60e60e79b144077f3021d305ea3d06a2a21)
2022-03-04 19:35:41 +00:00
Jay Banerjee
5332d8705b [FX lowering] Modify replace_all_uses_with to allow filtering of nodes to update; use it to (#73763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73763

The test that is enabled generates a graph as such:

```
linear_25 --> sigmoid_14 --> output_1
         \--> output_2
```
Before this diff, (unpadding) layout_transform nodes would be added as follows:

```
linear_25 --> layout_xform1 --> sigmoid_14 --> layout_xform2--> output_1
                           \--> output_2
```
This causes an assertion to fail for the sigmoid node where the input and output types
don't match due to padding differences.

This diff modifies the replacement algorithm to not affect users of an output's parent node
when the user requires padded inputs. This yields the following graph instead:

```
linear_25 --> sigmoid_14 --> layout_xform2--> output_1
         \--> layout_xform1 --> output_2
```
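
For illustration, a rough sketch of applying such a filter on a toy graph; the callback name `delete_user_cb` and this exact flow are assumptions, not taken from the diff:

```
import operator
import torch
from torch import fx

def f(x):
    return torch.sigmoid(x), x   # x feeds both sigmoid and a second output

gm = fx.symbolic_trace(f)
x_node = next(n for n in gm.graph.nodes if n.op == "placeholder")

# Insert a stand-in "layout transform" after x, then rewire only the users
# that can accept the transformed value, leaving sigmoid on the original x.
with gm.graph.inserting_after(x_node):
    xform = gm.graph.call_function(operator.neg, args=(x_node,))
x_node.replace_all_uses_with(
    xform,
    delete_user_cb=lambda user: user is not xform
    and user.target is not torch.sigmoid,
)
gm.recompile()
print(gm.code)
```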

Test Plan: Manually and CI

Reviewed By: jfix71, dborkovic

Differential Revision: D34623590

fbshipit-source-id: 3834b06c95fc5626eccc282216cbe039ac5a3242
(cherry picked from commit af012372ae1a6bb654b0ed9b765993960d5251e4)
2022-03-04 19:35:41 +00:00
Kushashwa Ravi Shrimali
452c26bbeb Fix functional.max_poolNd warning spam in the CI
Fixes https://github.com/pytorch/pytorch/issues/71257.

Warnings have been removed; please see [this](https://github.com/pytorch/pytorch/pull/71258#issuecomment-1058503649) comment.

cc: @Lezcano @jbschlosser @zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71258
Approved by: https://github.com/Lezcano, https://github.com/jbschlosser
2022-03-04 18:42:23 +00:00
James Reed
dae7ed179f [FX] Make module getattr wrapper proxy buffers (#73612)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73612

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D34568113

Pulled By: jamesr66a

fbshipit-source-id: 95a7106cf6ce45999c1b3c06b34965e725961771
(cherry picked from commit 54841e028478ea641fb4d7895f726553b8b48353)
2022-03-03 04:32:49 +00:00
Jordan Fix
987f146185 [fx] Improve support for tuple subclasses such as NamedTuple (#73198)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73198

Previously, if an arg to an FX node was a subclass of tuple, it got sanitized back to the base tuple class. For example, when setting an arg to a TensorMetadata object, which is a NamedTuple, it would be stored as a plain tuple instead.

- Change `map_aggregate` to repack the tuple to `type(a)` when it's not directly a tuple (try/except for best attempt)
- During codegen, call `add_global` for `type(a)` if it's not directly a tuple.
- Add an option for an arg to provide a `_custom_fx_repr_fn` for use inside stringifying via `_format_arg`
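
A rough sketch of the behavior this targets (assumed from the description, not the PR's own test):

```
from typing import NamedTuple
import torch
from torch import fx

class Pair(NamedTuple):
    first: torch.Tensor
    second: torch.Tensor

def f(x):
    return Pair(x + 1, x * 2)

gm = fx.symbolic_trace(f)
print(gm.code)
# With this change, the tuple subclass should survive arg processing and
# codegen (its type is registered as a global) instead of being flattened
# to a plain tuple.
out = gm(torch.randn(3))
print(type(out))
```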

Test Plan: Added unit test coverage, where we inline the named tuple into arg/kwarg.

Reviewed By: jamesr66a

Differential Revision: D34381888

fbshipit-source-id: bd672a8542e2bba5aa604b448bec920efc256440
(cherry picked from commit 68f99c12dd)
2022-02-23 11:31:10 +00:00
Vitaly Fedyunin
81fbeea760 Add docstrings to native_channel_shuffle (#72919)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72919

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D34274717

Pulled By: VitalyFedyunin

fbshipit-source-id: fa42f91ef2335e2594b19ef65d914c711f7a94fd
(cherry picked from commit a6f6fe9112)
2022-02-17 02:33:08 +00:00
Horace He
d635d0f86e Refactor FX codegen into extensible Codegen object (#72566)
Summary:
The goal of this is to make FX's codegen extensible. I've refactored it into a class with 5 extensibility points on it.

```
class Codegen(object):
    def generate_prologue(self, free_vars: List[str], maybe_return_annotation: str) -> str:
        """
        Given the free variables and a return annotation, generates the beginning of the FX function.
        By default, `generate_prologue(['a', 'b'], '') == 'def forward(a, b):'`
        """
    def generate_output(self, output_args: Argument) -> str:
        """
        Given the output arguments, generates the return statement of the FX function.
        """
    def process_inputs(self, args: Any) -> Any:
        """
        Transforms the inputs so that the graph can take them as arguments, as
        non-default codegen may result in the inputs to the function being
        different from the inputs to the graph.

        If the graph was directly runnable, this invariant should hold true
        `f.process_outputs(f.graph(*f.process_inputs(*inputs))) == f(*inputs)`
        """
    def process_outputs(self, outputs: Any) -> Any:
        """
        Transforms the outputs of the graph to be identical to the codegen.

        See ``process_inputs`` for more details.
        """
    def additional_globals(self) -> List[Tuple[str, Any]]:
        """
        If your codegen uses extra global values, add them here.
        For example, return [('List', typing.List)] if you need ``List`` in the global context.
        """
```

So, for example, the `ListCodeGen` we want for AOTAutograd looks like this
```
        class ListCodeGen(CodeGen):
            def generate_prologue(self, free_vars, maybe_return_annotation):
                lst_unpack = f"""
def forward(self, args_list: List[torch.Tensor]){maybe_return_annotation}:
    {', '.join(free_vars)} = args_list"""
                return lst_unpack

            def additional_globals(self):
                return [('List', typing.List)]

            def process_inputs(self, *inputs):
                assert(len(inputs) == 1)
                return inputs[0]
```
and
```
        def f(a, b):
            return a + b

        nf = fx.symbolic_trace(f)
        nf.graph.set_codegen(ListCodeGen())
        nf.recompile()
        print(nf.code)
```
would result in
```
def forward(self, args_list: List[torch.Tensor]):
    a, b = args_list
    add = a + b;  a = b = None
    return add
```

Backwards compatibility changes - I added `process_outputs` and `process_inputs` to `fx.Graph`, while removing `flatten_inputs` and `flatten_outputs` - those didn't have `backwards_compatibility` on them, so I *think* it's probably fine?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72566

Reviewed By: desertfire

Differential Revision: D34160424

Pulled By: Chillee

fbshipit-source-id: ebf6411312b373e3fbcb13288a34befa449a2375
(cherry picked from commit 13cd12eaa1)
2022-02-11 18:13:29 +00:00
James Reed
3f6643e661 [FX] Fix default argument handling for Interpreter (#72272)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72272

Test Plan: Imported from OSS

Reviewed By: dagitses

Differential Revision: D33984862

Pulled By: jamesr66a

fbshipit-source-id: 7d89901c2041806df86c9b08f3af731f3afc9100
(cherry picked from commit f79f0e451e)
2022-02-04 01:46:20 +00:00
Peter Bell
e8d226cd9a Remove some unnecessary python functional wrappers (#61608)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61608

See #61544 for an example of issues created by functional wrappers. In this
case, these are directly wrapping the native function with no added
functionality. One exception was `bilinear` which was just missing the default
argument in C++, but was otherwise the same.

I've kept the symbol `torch.functional.istft` because it looks like public API,
but it could just as easily be moved to `_torch_docs.py`.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31401361

Pulled By: albanD

fbshipit-source-id: 162b74d0b2d4f2e5c4834687a94541960cefdd52
(cherry picked from commit 700cd73ca1)
2022-02-01 16:59:26 +00:00
Nikita Shulga
74c44ba9d6 Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33850228 (23d03025dc)

Original commit changeset: 3cc33fb298e4

Original Phabricator Diff: D33850228 (23d03025dc)

fbshipit-source-id: 9436e7df73c2b2e2011f321674f24973316d3692
(cherry picked from commit c9efb58223)
2022-01-31 17:44:19 +00:00
Ryan Spring
23d03025dc Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds an `approximate` string flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enables Tanh Gelu in NvFuser

```
def gelu(x, approximate : str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```
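
Assuming the flag lands as sketched above, usage would look roughly like:

```
import torch
import torch.nn.functional as F

x = torch.randn(8)
exact = F.gelu(x)                        # erf-based GELU (default)
approx = F.gelu(x, approximate="tanh")   # tanh approximation
print((exact - approx).abs().max())      # small, but not exactly zero
```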

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: cpuhrsch

Differential Revision: D33850228

Pulled By: jbschlosser

fbshipit-source-id: 3cc33fb298e480d7ecc5c67716da019d60c6ab33
(cherry picked from commit 3a53b3e94f)
2022-01-31 17:07:45 +00:00
Joel Schlosser
cb823d9f07 Revert D33744717: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33744717 (f499ab9cef)

Original commit changeset: d64532a562ed

Original Phabricator Diff: D33744717 (f499ab9cef)

fbshipit-source-id: 396c3f63de5865f894dbc353d0790a01a624be93
(cherry picked from commit e9fb2d1db1)
2022-01-28 18:35:01 +00:00
Ryan Spring
f499ab9cef Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds an `approximate` string flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enables Tanh Gelu in NvFuser

```
def gelu(x, approximate : str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: mikaylagawarecki

Differential Revision: D33744717

Pulled By: jbschlosser

fbshipit-source-id: d64532a562ed53247bb4fa52bb16722634d5c187
(cherry picked from commit 4713dd9cca)
2022-01-28 16:59:09 +00:00
Zachary DeVito
7bc5962329 Trace asserts with fx by looking at byte code (#70960)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70960

This patch uses some bytecode introspection logic to see if a boolean is being used as an assert condition and, if so, records the assert in the FX graph and allows the trace to continue.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D33570397

Pulled By: zdevito

fbshipit-source-id: 99d26cede8fe42c96d4032d9353c1ede7eb3d969
(cherry picked from commit 30d002da25)
2022-01-28 02:04:21 +00:00
Jason Ansel
567c2bb8e9 Support printing inplace operators in FX (#71887)
Summary:
Pretty-print inplace operators (`a += b`, etc.) in generated FX code. This is useful because it allows `torch.jit.script()` to parse these operators without error.

I don't believe FX tracing supports inplace ops yet, though I am generating them in torchdynamo and want to be able to lower them with torchscript.
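
As an illustration, a hand-built graph with an inplace op (a sketch; the exact printed form isn't quoted from the PR):

```
import operator
import torch
from torch import fx

g = fx.Graph()
x = g.placeholder("x")
y = g.placeholder("y")
z = g.call_function(operator.iadd, (x, y))
g.output(z)

gm = fx.GraphModule(torch.nn.Module(), g)
# With this change, the generated code renders the iadd node as an inplace
# statement rather than as a call to operator.iadd, which
# torch.jit.script() can then parse.
print(gm.code)
```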

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71887

Reviewed By: jamesr66a

Differential Revision: D33806248

Pulled By: jansel

fbshipit-source-id: 5eb9f744caab2f745cefc83ea658e12e9e7a817d
(cherry picked from commit eacbd6bb83)
2022-01-27 20:35:22 +00:00
Philip Meier
d4d0ab71b3 use torch.testing.assert_equal in TestCase.assertEqual (#67796)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67796

Supersedes #58981.

cc mruberry

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D33542994

Pulled By: mruberry

fbshipit-source-id: 527099f5fdc154fd95ee48cd19f0a85eeec43443
(cherry picked from commit 1a58915e2c)
2022-01-27 08:33:55 +00:00
Richard Zou
620a1fcb55 OpInfos for: normal, bernoulli, multinomial (#66358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66358

Test Plan: - run tests

Reviewed By: mruberry

Differential Revision: D31551695

Pulled By: zou3519

fbshipit-source-id: cf1b43118a0414a1af9ece9ae8c0598b2701aa0a
2021-12-14 06:59:38 -08:00
Vasiliy Kuznetsov
2dd46d3aa9 FX: ensure node stack trace survives copying (#69368)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69368

Before this PR, copying a node would lose the stack trace. This PR
ensures that the stack trace is preserved across copies.

This is useful because quantization passes would like to start allowing
the user to preserve stack traces, and those passes rely on the copy
behavior.
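
A rough check of the copy behavior (a sketch only; the stack trace string below is filled in by hand rather than by the tracer):

```
import torch
from torch import fx

def f(x):
    return torch.relu(x)

gm = fx.symbolic_trace(f)
relu = next(n for n in gm.graph.nodes if n.op == "call_function")
relu.stack_trace = "example.py:4, in f"   # normally set by the tracer

new_graph = fx.Graph()
val_map: dict = {}
new_graph.graph_copy(gm.graph, val_map)
# With this change the copied node keeps the original stack trace.
print(val_map[relu].stack_trace)
```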

Test Plan:
```
python test/test_fx.py TestFX.test_stack_traces
```

Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D32835248

fbshipit-source-id: 91610fd8d05f5683cfa5e11fb6f9f3feacb8e241
2021-12-07 06:18:38 -08:00
Michael Suo
0aa9d177fe [fx] remove CPatcher (#69032)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69032

I am removing it because, for packaging-related reasons, it's easier if
torch.fx is a pure Python module.

I don't think there is much reason to keep it: this functionality was
experimental, has no known users currently, and we didn't have a clear
path to turning it on by default due to regressions in tracing
performance. Also, it was only ever enabled for `rand` and friends.

Technically the removal of the `enable_cpatching` arguments on
`symbolic_trace` and `Tracer.__init__` are BC-breaking, but the
docstrings clearly state that the argument is experimental and BC is not
guaranteed, so I think it's fine.

Test Plan: Imported from OSS

Reviewed By: soulitzer

Differential Revision: D32706344

Pulled By: suo

fbshipit-source-id: 501648b5c3610ae71829b5e7db74e3b8c9e1a480
2021-11-30 11:59:57 -08:00
Richard Zou
d4ae789655 OpInfos for new_blah functions and some _like functions (#67357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67357

This PR adds OpInfos for:
- new_ones, new_zeros, new_full, new_empty
- rand_like, randint_like

I forgot to add the _like functions in a previous PR, so here they are.

Test Plan: - wait for tests

Reviewed By: mruberry

Differential Revision: D31969533

Pulled By: zou3519

fbshipit-source-id: 236d70d66e82f1d6f8e5254b55ca2a37b54c9494
2021-11-11 07:21:23 -08:00
Horace He
0b2f68eadf Remove special FX OpInfo list (#67520)
Summary:
Most of the failing tests fail because the test doesn't work with Python functions (only with builtins like `torch.add`).

I added a check for that and ported the remaining skips into the `skips` field.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67520

Reviewed By: ZolotukhinM

Differential Revision: D32046856

Pulled By: Chillee

fbshipit-source-id: 05fa3e3c40fa6cc4f776e0c24f667629b14afd25
2021-11-02 16:01:46 -07:00
Saketh Are
b24c34426f Add OpInfo for torch.unique and torch.unique_consecutive (#67529)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67529

Reviewed By: pbelevich

Differential Revision: D32045941

Pulled By: saketh-are

fbshipit-source-id: fefea1ddabcd3c4b40e9374b991410626437cdb4
2021-10-30 08:33:41 -07:00
Shiyan Deng
4b9464f4b9 [fx]Early return if a node tries prepend self (#67068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67068

Prepending a node to itself results in the node being removed from the graph.

Usually people won't deliberately prepend a node to itself, but one can do it accidentally by appending a node that already directly follows `self`, which ends up prepending that node to itself.
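
A small sketch of the accidental pattern (assumed from the description):

```
import torch
from torch import fx

def f(x):
    y = x + 1
    return y * 2

gm = fx.symbolic_trace(f)
add_node, mul_node = (n for n in gm.graph.nodes if n.op == "call_function")

# mul_node already directly follows add_node, so append() here reduces to
# mul_node.prepend(mul_node). Previously that removed mul_node from the
# node list; with the early return it is now a harmless no-op.
add_node.append(mul_node)
assert mul_node in gm.graph.nodes
```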

Test Plan: Added a unit test

Reviewed By: jamesr66a

Differential Revision: D31849030

fbshipit-source-id: b0fdfbb893f785f268595acd823b426d57c15e61
2021-10-27 10:49:45 -07:00
Pearu Peterson
333717eaf0 Improve assert failure message in test_get_torch_func_signature_exhaustive (#67039)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67039

cc mruberry

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31899719

Pulled By: cpuhrsch

fbshipit-source-id: 819d07da5b18b31d462010b9f9382e0b8cd10f9f
2021-10-25 14:20:38 -07:00
Saketh Are
33790c4e06 Implement histogramdd on CPU (#65318)
Summary:
Implements `torch.histogramdd` analogous to `numpy.histogramdd`.

Builds on https://github.com/pytorch/pytorch/pull/58780, generalizing the existing `torch.histogram` kernel to handle D-dimensional inputs.
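
Basic usage, mirroring the NumPy API:

```
import torch

points = torch.rand(100, 3)                       # 100 points in 3-D
hist, bin_edges = torch.histogramdd(points, bins=[4, 4, 4])
print(hist.shape)       # torch.Size([4, 4, 4])
print(len(bin_edges))   # 3 -- one edge tensor per dimension
```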

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65318

Reviewed By: soulitzer

Differential Revision: D31654555

Pulled By: saketh-are

fbshipit-source-id: 14b781fac0fd3698b052dbd6f0fda46e50d4c5f1
2021-10-21 16:09:31 -07:00
Jane Xu
9ea3424747 Set test owner for fx (#66807)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66807

Reviewed By: jamesr66a

Differential Revision: D31736722

Pulled By: janeyx99

fbshipit-source-id: 5ffcb02a858137211bff1eabf158001dcb0359a6
2021-10-18 12:25:38 -07:00
Pearu Peterson
472a6f2787 Strided masked reductions: sum, amax. Testing of masked reductions. (#65990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65990

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D31729532

Pulled By: albanD

fbshipit-source-id: 855a6bb2a7c6e75c780a64ce23c0f29321f0e511
2021-10-18 11:10:32 -07:00
Kushashwa Ravi Shrimali
909694fd88 Fix nn.functional.max_poolNd dispatch (for arg: return_indices) (#62544)
Summary:
Please see https://github.com/pytorch/pytorch/issues/62545 for context.

The order of `return_indices, ceil_mode` in the `nn.functional.max_poolNd` functions differs from that of `torch.nn.MaxPoolNd` (the module form). While this should be resolved in the future, it was decided to first raise a warning that the behavior will change. (Please see https://github.com/pytorch/pytorch/pull/62544#issuecomment-893770955 for more context.)

This PR thus raises the appropriate warnings and updates the documentation to show the full signature (along with a note) for the `torch.nn.functional.max_poolNd` functions.
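
To make the mismatch concrete (signatures paraphrased from the docs):

```
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8)

# Module form:     MaxPool1d(kernel_size, stride, padding, dilation,
#                            return_indices, ceil_mode)
# Functional form: max_pool1d(input, kernel_size, stride, padding,
#                             dilation, ceil_mode, return_indices)
# Passing return_indices positionally in the functional form therefore
# lands on ceil_mode; pass it by keyword instead.
out, idx = F.max_pool1d(x, 2, return_indices=True)
print(out.shape, idx.shape)
```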

**Quick links:**

(_upstream_)

* Documentation of [`nn.functional.max_pool1d`](https://pytorch.org/docs/1.9.0/generated/torch.nn.functional.max_pool1d.html), [`nn.functional.max_pool2d`](https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool2d.html), and [`nn.functional.max_pool3d`](https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool3d.html).

(_this branch_)

* Documentation of [`nn.functional.max_pool1d`](https://docs-preview.pytorch.org/62544/generated/torch.nn.functional.max_pool1d.html?highlight=max_pool1d), [`nn.functional.max_pool2d`](https://docs-preview.pytorch.org/62544/generated/torch.nn.functional.max_pool2d.html?highlight=max_pool2d#torch.nn.functional.max_pool2d), and [`nn.functional.max_pool3d`](https://docs-preview.pytorch.org/62544/generated/torch.nn.functional.max_pool3d.html?highlight=max_pool3d#torch.nn.functional.max_pool3d).

cc mruberry jbschlosser

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62544

Reviewed By: gchanan

Differential Revision: D31179038

Pulled By: jbschlosser

fbshipit-source-id: 0a2c7215df9e132ce9ec51448c5b3c90bbc69030
2021-10-18 08:34:38 -07:00
Richard Zou
d810e738b9 OpInfo for *_like functions (#65941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65941

OpInfos for: empty_like, zeros_like, ones_like, full_like, randn_like

Test Plan: - run tests

Reviewed By: dagitses

Differential Revision: D31452625

Pulled By: zou3519

fbshipit-source-id: 5e6c45918694853f9252488d62bb7f4ccfa1f1e4
2021-10-14 09:14:51 -07:00
Richard Zou
5d4452937d OpInfos for some Tensor dtype conversion methods (#64282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64282

OpInfos for:
- Tensor.bfloat16, Tensor.bool, Tensor.byte, Tensor.char
- Tensor.double, Tensor.float, Tensor.half, Tensor.int
- Tensor.short, Tensor.long

None of these are supported by TorchScript. Also, the OpInfo autograd
test runner assumes that the operation is not allowed to change the
dtype of the argument, so only Tensor.double has
`supports_autograd=True` (in theory Tensor.bfloat16, Tensor.float,
Tensor.half should be differentiable).

Test Plan: - run tests

Reviewed By: dagitses

Differential Revision: D31452627

Pulled By: zou3519

fbshipit-source-id: b7f272e558558412c47aefe947af7f060dfb45c5
2021-10-14 09:13:30 -07:00
lezcano
82a216c45b Add tensor.{adjoint(),H,mT,mH} methods and properties (#64179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64179

This PR follows the discussion in https://github.com/pytorch/pytorch/issues/45063#issuecomment-904431478

Fixes https://github.com/pytorch/pytorch/issues/45063
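
A quick sketch of the added accessors' semantics:

```
import torch

x = torch.randn(2, 3, dtype=torch.complex64)

# mT             -- transpose of the last two dims (batched matrix transpose)
# mH / adjoint() -- conjugate transpose of the last two dims
# H              -- conjugate transpose for matrices, i.e. x.t().conj()
assert torch.allclose(x.mT, x.transpose(-2, -1))
assert torch.allclose(x.mH, x.transpose(-2, -1).conj())
assert torch.allclose(x.adjoint(), x.mH)
assert torch.allclose(x.H, x.mH)
```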

cc ezyang anjali411 dylanbespalko mruberry Lezcano nikitaved rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi heitorschueroff

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D30730483

Pulled By: anjali411

fbshipit-source-id: 821d25083f5f682450f6812bf852dc96a1cdf9f2
2021-10-13 07:44:43 -07:00