Commit Graph

64 Commits

Author SHA1 Message Date
Yuanyuan Chen
0d50e5d8d4 [3/N] Fix unused loop variables (#166509)
This PR removes unused loop variables in tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166509
Approved by: https://github.com/Lucaskabela, https://github.com/Skylion007
2025-10-30 20:13:51 +00:00
Xuehai Pan
775788f93b [BE][PYFMT] migrate PYFMT for test/[i-z]*/ to ruff format (#144556)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144556
Approved by: https://github.com/ezyang
2025-07-29 03:26:09 +00:00
Anthony Barbier
bf7e290854 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common raise_on_run_directly method for when a user directly runs a test file that should not be run this way; it prints the file the user should have run instead (sketched below).
- Raise a RuntimeError on tests which have been disabled (not run).
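
For illustration, a minimal sketch of the guard pattern; the helper name matches the PR, but its signature and message here are assumptions:

```python
def raise_on_run_directly(correct_file):
    # Point the user at the file they should have run instead.
    raise RuntimeError(
        "This test file is not meant to be run directly; "
        f"run {correct_file} instead."
    )

if __name__ == "__main__":
    raise_on_run_directly("test/test_jit.py")
```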

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/clee2000
2025-06-16 10:28:45 +00:00
PyTorch MergeBot
20912673a6 Revert "Add __main__ guards to jit tests (#154725)"
This reverts commit 1a55fb0ee8.

Reverted https://github.com/pytorch/pytorch/pull/154725 on behalf of https://github.com/malfet due to This added 2nd copy of raise_on_run to common_utils.py which caused lint failures, see https://github.com/pytorch/pytorch/actions/runs/15445374980/job/43473457466 ([comment](https://github.com/pytorch/pytorch/pull/154725#issuecomment-2940503905))
2025-06-04 15:42:52 +00:00
Anthony Barbier
1a55fb0ee8 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common raise_on_run_directly method for when a user directly runs a test file that should not be run this way; it prints the file the user should have run instead.
- Raise a RuntimeError on tests which have been disabled (not run).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/Skylion007
2025-06-04 14:44:08 +00:00
Aaron Gokaslan
292af3cc89 [BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408)
Apply the ruff rule about implicit string concatenation; this autofixes strings that are all of the same type and on the same line. These lines were likely broken up by autoformatters in the past. All fixes are automated using the autofixes in ISC001.
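
For illustration, the kind of change ISC001 applies:

```python
# before: one string split into implicitly concatenated pieces on one line,
# likely by an old autoformatter
message = "this is one " "logical " "string"

# after the ISC001 autofix: a single literal
message = "this is one logical string"
```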

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146408
Approved by: https://github.com/justinchuby, https://github.com/janeyx99
2025-02-04 19:07:04 +00:00
Tom Ritchford
d8c8ba2440 Fix unused Python variables in test/[e-z]* (#136964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
Oguz Ulgen
920f0426ae Add None return type to init -- tests rest (#132376)
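For illustration, the shape of the change (a sketch, not a diff from the PR):

```python
import torch

class MyModule(torch.nn.Module):
    def __init__(self) -> None:  # explicit None return annotation added
        super().__init__()
```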
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132376
Approved by: https://github.com/jamesjwu
ghstack dependencies: #132335, #132351, #132352
2024-08-01 15:44:51 +00:00
Xuehai Pan
6ff1e43a41 [BE][Easy][13/19] enforce style for empty lines in import segments in test/j*/ (#129764)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129764
Approved by: https://github.com/ezyang
2024-08-01 12:13:42 +00:00
Aaron Gokaslan
d1c4e6b55f [BE]: Enable a few additional ruff rules (#130700)
Enables a few extra ruff rules, most of which have no violations since I already cleaned them up in earlier PRs; this just turns them on to enforce them. Adds one noqa, as we want the suboptimal lambda generation + call kept as a test. Also enables the test in flake8.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130700
Approved by: https://github.com/justinchuby, https://github.com/ezyang
2024-07-17 02:06:04 +00:00
cyy
cb5e9183c6 [Caffe2] [2/N] Remove Caffe2 from tests (#128911)
Follows #128675

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128911
Approved by: https://github.com/titaiwangms, https://github.com/r-barnes
2024-06-19 00:05:50 +00:00
Yuanhao Ji
604c9c5601 Enable UFMT on all of test/jit (#123623)
Partially addresses #123062

Ran lintrunner on:

- `test/jit`

with command:

```bash
lintrunner -a --take UFMT --all-files
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123623
Approved by: https://github.com/ezyang
2024-04-11 23:45:05 +00:00
David Berard
cead0363a8 [jit][nested strided tensor] support nested tensor in check_trace (#121039)
Summary:
torch.testing.assert_equal doesn't support nested strided tensors because sizes() is not implemented.

This adds special handling for nested tensors: check whether a tensor is nested and, if so, compare after unbinding.
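
A minimal sketch of the idea (not the PR's exact code):

```python
import torch

def assert_nested_close(actual, expected):
    # Nested tensors don't implement sizes(), so compare them
    # component-wise after unbind() instead.
    if actual.is_nested and expected.is_nested:
        for a, e in zip(actual.unbind(), expected.unbind()):
            torch.testing.assert_close(a, e)
    else:
        torch.testing.assert_close(actual, expected)

nt = torch.nested.nested_tensor([torch.ones(2, 3), torch.ones(4, 5)])
assert_nested_close(nt, nt)
```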

Test Plan: test_trace_with_nested_strided_tensor_output

Differential Revision: D54430238

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121039
Approved by: https://github.com/YuqingJ
2024-03-04 01:15:45 +00:00
Aaron Gokaslan
bd10fea79a [BE]: Enable F821 and fix bugs (#116579)
Fixes #112371

I tried to fix as many of the bugs as I could, a few I could not figure out what the proper fix for them was though and so I left them with noqas.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116579
Approved by: https://github.com/ezyang
2024-01-01 08:40:46 +00:00
eellison
2a246c5259 update type() calling to not use unneeded device (#110163)
The previous code path was doing an unnecessary CUDA init as well as causing an unnecessary "device" to appear in the jit trace.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110163
Approved by: https://github.com/henryhu6, https://github.com/albanD
2023-09-28 17:34:46 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.
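
For illustration, the kind of change applied:

```python
mapping = {"a": 1, "b": 2}

# before: the value is bound but never used
for key, _value in mapping.items():
    print(key)

# after the autofix: iterate over the keys directly
for key in mapping:
    print(key)
```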

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Kurt Mohler
ffce2492af Remove set_default_dtype calls from jit and ops tests (#105072)
Part of #68972

This only attempts to avoid setting the default dtype for `test_jit.py` and `test_ops.py`. There are other tests, like `test_nn.py`, which will be addressed in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105072
Approved by: https://github.com/ezyang
2023-07-15 03:18:33 +00:00
nihuini
7157dfdd4a [jit] fix duplicated module input and output values in tracing module (#102510)
The remap should record the original input pointers instead of the remapped ones.

Test case:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Normalize(nn.Module):
    def __init__(self):
        super().__init__()

        self.norm = nn.GroupNorm(num_groups=32, num_channels=32)

    def forward(self, x, y):
        if y is None:
            y = x
        else:
            y = self.norm(y)

        y = y * 2

        return y

class G(nn.Module):
    def __init__(self):
        super().__init__()

        self.norm = Normalize()

    def forward(self, x):

        A = self.norm(x, None)
        B = F.relu(A)

        return A, B

class Net(nn.Module):
    def __init__(self):
        super().__init__()

        self.g = G()

        self.norm_1 = Normalize()

    def forward(self, x):
        hs = self.g(x)

        A, B = hs

        h = self.norm_1(B, A)
        return h

net = Net()
net = net.eval()

x = torch.randn(1, 32, 16, 16)

traced = torch.jit.trace(net, x)

print(traced.graph)
```

Without this patch, there are duplicated lifted values (%80, %81, %82, %83, %84, %85):
```
graph(%self.1 : __torch__.Net,
      %x : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu)):
  %norm_1 : __torch__.___torch_mangle_1.Normalize = prim::GetAttr[name="norm_1"](%self.1)
  %g : __torch__.G = prim::GetAttr[name="g"](%self.1)
  %86 : (Tensor, Tensor, Tensor, Tensor, Tensor, Tensor, Tensor) = prim::CallMethod[name="forward"](%g, %x)
  %79 : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu), %80 : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu), %81 : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu), %82 : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu), %83 : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu), %84 : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu), %85 : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu) = prim::TupleUnpack(%86)
  %87 : Tensor = prim::CallMethod[name="forward"](%norm_1, %79, %80, %81, %82, %83, %84, %85)
  return (%87)

```

With this patch:
```
graph(%self.1 : __torch__.Net,
      %x : Float(1, 32, 16, 16, strides=[8192, 256, 16, 1], requires_grad=0, device=cpu)):
  %norm_1 : __torch__.___torch_mangle_1.Normalize = prim::GetAttr[name="norm_1"](%self.1)
  %g : __torch__.G = prim::GetAttr[name="g"](%self.1)
  %71 : Tensor = prim::CallMethod[name="forward"](%g, %x)
  %72 : Tensor = prim::CallMethod[name="forward"](%norm_1, %71)
  return (%72)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102510
Approved by: https://github.com/davidberard98
2023-06-27 03:43:06 +00:00
shibo19
213e10dc3d fix bug in trace model when out-operator has more than one output (#101563)
Fixes https://github.com/pytorch/pytorch/issues/101960
When I trace a function that runs an out-operator with more than one output, I get the error below. This happens because the case where the out operator has more than one output is not handled.
```
import torch

def test_trace_out_operator_with_two_output():
    example_input = torch.rand(2, 8)
    out_1, out_2 = torch.cummax(example_input, 1)

    def run_cummax(example_input, out_1, out_2):
        output_1, output_2 = torch.cummax(example_input, 1, out=(out_1, out_2))
        return output_1, output_2

    trace_model = torch.jit.trace(run_cummax, (example_input, out_1, out_2))
```

and the error info:

```
    raise TracingCheckError(
torch.jit._trace.TracingCheckError: Tracing failed sanity checks!
encountered an exception while running the trace with test inputs
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101563
Approved by: https://github.com/jgong5, https://github.com/EikanWang, https://github.com/davidberard98
2023-05-31 19:39:52 +00:00
Yanbo Liang
075d36d37f [Dynamo] Fix nested function resume execution (#100426)
Fixes #99665

Let me explain the root cause using the unit test I added:
* This bug is triggered when:
  * `wrapped` is a nested function.
  * `wrapped` is in another module, different from the main function `fn`.
  * There is a graph break inside `wrapped`.
* The root cause: when resuming the nested function, we were actually using the outermost function's (`fn` in my example) globals, but `wrapped` calls `inner_func`, which is not part of `fn`'s globals, so we have to set the correct globals when the nested function resumes execution (a minimal sketch of the pattern follows).
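
A minimal sketch of the failing pattern, with illustrative module and function names (not the PR's actual unit test):

```python
# helpers.py -- a module different from the one defining fn
import torch

def inner_func(x):
    return x + 1

def outer():
    def wrapped(x):
        x = x * 2
        torch._dynamo.graph_break()  # graph break inside the nested function
        # Resuming after the break must resolve inner_func via this
        # module's globals, not via fn's globals.
        return inner_func(x)
    return wrapped

# In the main module, fn would do roughly:
#   import helpers
#   fn = torch.compile(helpers.outer())
#   fn(torch.randn(3))
```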

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100426
Approved by: https://github.com/jansel
2023-05-11 03:10:23 +00:00
PyTorch MergeBot
4b8127b90e Revert "[Dynamo] Fix nested function resume execution (#100426)"
This reverts commit d719f0276d.

Reverted https://github.com/pytorch/pytorch/pull/100426 on behalf of https://github.com/jeanschmidt due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/100426#issuecomment-1540915913))
2023-05-09 21:32:13 +00:00
Yanbo Liang
d719f0276d [Dynamo] Fix nested function resume execution (#100426)
Fixes #99665

Let me explain the root cause using the unit test I added:
* This bug is triggered when:
  * `wrapped` is a nested function.
  * `wrapped` is in another module, different from the main function `fn`.
  * There is a graph break inside `wrapped`.
* The root cause: when resuming the nested function, we were actually using the outermost function's (`fn` in my example) globals, but `wrapped` calls `inner_func`, which is not part of `fn`'s globals, so we have to set the correct globals when the nested function resumes execution.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100426
Approved by: https://github.com/jansel
2023-05-06 05:04:50 +00:00
Wang, Yi A
8564ed24a8 do not need to check if element in dict input is Tensor. (#97866)
Sometimes it is a tuple with tensor elements, such as past key values in a text-generation case.

Fixes https://github.com/pytorch/pytorch/issues/97229

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97866
Approved by: https://github.com/jgong5, https://github.com/davidberard98
2023-03-31 19:39:00 +00:00
David Berard
10bf019b71 [jit] Add shapes info to the output type of CallFunction nodes after tracing, if the output is a tensor (#95544)
**Summary**: jit.trace usually adds shape information to all the jit::Values in its graph. This is mostly a side effect of how jit tracing is performed, but many users use this behavior for debugging and for better understanding the graph. Previously, CallFunction nodes (inserted by calling jit.script-ed functions) did _not_ have this information attached. This PR attaches this information for the tensor output values.

**Details**:
* First the jit tracer sets a global TracerState object
* Then the jit tracer invokes the python callable that is to be traced
* When the python function gets to a jit.script-ed function, [invokeScriptFunctionFromPython](8693604bc6/torch/csrc/jit/python/pybind_utils.h (L1060)) is called. It inserts a FunctionCall.
* Then after the actual scripted function gets called and we have a concrete output, we attach the concrete output [IValue to the TracerState](8693604bc6/torch/csrc/jit/python/pybind_utils.h (L1001))
* ^^ the setValueTrace call (linked in previous list item) is where this PR makes changes; we revise the jit::Value output of the CallFunction node to use the type of the concrete tensor, which will have actual shapes associated.

**Test**: added a test verifying that shape info appears in the output type for a CallFunction node in a jit-traced graph.
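
An illustrative repro of the behavior (not the PR's added test):

```python
import torch

@torch.jit.script
def scripted_add(x):
    return x + 1

def fn(x):
    # Calling a scripted function from traced code inserts a
    # prim::CallFunction node into the traced graph.
    return scripted_add(x)

traced = torch.jit.trace(fn, torch.randn(2, 3))
# With this change, the CallFunction output is typed with concrete shapes
# (e.g. Float(2, 3, ...)) instead of a plain Tensor.
print(traced.graph)
```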

Differential Revision: [D43592880](https://our.internmc.facebook.com/intern/diff/D43592880)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95544
Approved by: https://github.com/qihqi
2023-02-27 22:50:29 +00:00
Wang, Yi A
d82c2b14c7 jit trace will fail for parameter check if it contains param whose kind is _ParameterKind.VAR_KEYWORD (#94032)
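A minimal sketch of the failing shape (illustrative):

```python
import inspect
import torch

def f(x, **kwargs):
    return x * 2

assert (
    inspect.signature(f).parameters["kwargs"].kind
    is inspect.Parameter.VAR_KEYWORD
)
# Before this fix, jit trace's parameter check could fail on such a function.
traced = torch.jit.trace(f, (torch.randn(2),))
```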

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94032
Approved by: https://github.com/qihqi, https://github.com/davidberard98
2023-02-13 20:33:30 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change semantics are kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
AllenTiTaiWang
bdb14238ec [Reland][ONNX] Move all torch.onnx.export related tests to test/onnx (#87292)
Moving torch.onnx.export-related tests to test/onnx consolidates ONNX tests onto the same CI machine, so the testing environment can be better managed.

Fixes https://github.com/pytorch/pytorch/issues/87320
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87292
Approved by: https://github.com/thiagocrepaldi, https://github.com/BowenBao, https://github.com/kit1980, https://github.com/malfet
2022-11-01 14:22:46 +00:00
PyTorch MergeBot
e9599724fa Revert "[ONNX] Move all torch.onnx.export related tests to test/onnx (#87292)"
This reverts commit e3e84830aa.

Reverted https://github.com/pytorch/pytorch/pull/87292 on behalf of https://github.com/weiwangmeta due to breaking internal test relating to quantization eager tests, see test/quantization/eager/test_quantize_eager_ptq.py test_lower_graph_linear and test_lower_graph_conv2d
2022-10-31 19:55:58 +00:00
AllenTiTaiWang
e3e84830aa [ONNX] Move all torch.onnx.export related tests to test/onnx (#87292)
Moving torch.onnx.export-related tests to test/onnx consolidates ONNX tests onto the same CI machine, so the testing environment can be better managed.

Fixes https://github.com/pytorch/pytorch/issues/87320
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87292
Approved by: https://github.com/thiagocrepaldi, https://github.com/BowenBao, https://github.com/kit1980
2022-10-29 05:31:30 +00:00
Animesh Jain
1d90d6ee60 Setup for running PyTorch tests with TorchDynamo and skips for known failing tests (#80106)
@ezyang I am going to keep adding more skips in this PR for now. Once we have the CI running, I will replace them with the appropriate decorators.

cc @mlazos , we should add those tests in test_ops.py in this PR as well

cc @jansel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80106
Approved by: https://github.com/ezyang, https://github.com/jansel
2022-07-07 18:57:33 +00:00
Edward Z. Yang
a3f10ec281 Move functorch decompositions to PyTorch
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76311

Approved by: https://github.com/Chillee
2022-04-30 16:47:53 +00:00
Edward Z. Yang
ee955b8bb9 Cannibalize noarch CI job into crossref CI job
crossref is a new strategy for performing tests when you want
to run a normal PyTorch API call, separately run some variation of
the API call (e.g., same thing but all the arguments are meta tensors)
and then cross-reference the results to see that they are consistent.
Any logic you add to CrossRefMode will get run on *every* PyTorch API
call that is called in the course of PyTorch's test suite.  This can
be a good choice for correctness testing if OpInfo testing is not
exhaustive enough.

For now, the crossref test doesn't do anything except verify that
we can validly push a mode onto the torch function mode stack for all
functions.
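
For illustration, a minimal sketch of the mode-stack idea, written against the public `torch.overrides.TorchFunctionMode` API; the real crossref hooks differ:

```python
import torch
from torch.overrides import TorchFunctionMode

class CrossRefMode(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # A real crossref mode would also run a variant of the call
        # (e.g. with meta tensors) and compare the two results.
        return func(*args, **kwargs)

with CrossRefMode():
    torch.add(torch.ones(2), torch.ones(2))
```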

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75988

Approved by: https://github.com/seemethere
2022-04-20 11:56:25 +00:00
BowenBao
aa51ee2345 Enable numel tracing
- clang-format
- resolve onnx test failure
- update expect file

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74081

Approved by: https://github.com/garymm, https://github.com/eellison, https://github.com/malfet
2022-04-13 22:23:41 +00:00
BowenBao
ff78c73286 [ONNX] Remove f arg from export_to_pretty_string (#69045) (#69546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69546

The arg is not used and was previously deprecated.

Also remove torch.onnx._export_to_pretty_string. It's redundant with the
public version.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32994270

Pulled By: msaroufim

fbshipit-source-id: f8f3933b371a0d868d9247510bcd73c31a9d6fcc
2022-01-12 21:31:36 -08:00
Jane Xu
09c7771e9c Set test owners for jit tests (#66808)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66808

Reviewed By: mrshenli

Differential Revision: D31761414

Pulled By: janeyx99

fbshipit-source-id: baf8c49ff9c4bcda7b0ea0f6aafd26380586e72d
2021-10-25 07:51:10 -07:00
BowenBao
9323ea2195 [ONNX] minor doc improvements and cleanup (#62514) (#64373)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64373

* Fix some bad formatting and clarify things in onnx.rst.
* In `export_to_pretty_string`:
    * Add documentation for previously undocumented args.
    * Document that `f` arg is ignored and mark it deprecated.
    * Update tests to stop setting `f`.
    * Warn if `_retain_param_name` is set.
* Use double quotes for string literals in test_operators.py.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905271

Pulled By: malfet

fbshipit-source-id: 3627eeabf40b9516c4a83cfab424ce537b36e4b3
2021-09-23 22:20:44 -07:00
Philip Meier
57d4c6cf42 replace self.assertTrue(torch.allclose(..)) with self.assertEqual(…) (#63637)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/63565
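
For illustration, the shape of the replacement (a sketch using the internal TestCase from torch.testing._internal.common_utils):

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests

class ExampleTest(TestCase):
    def test_close(self):
        actual = torch.ones(2)
        expected = torch.ones(2)
        # before: self.assertTrue(torch.allclose(actual, expected))
        # after: TestCase.assertEqual compares tensors with tolerances
        # and reports a useful diff on failure.
        self.assertEqual(actual, expected)

if __name__ == "__main__":
    run_tests()
```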

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63637

Reviewed By: malfet

Differential Revision: D30541266

Pulled By: mruberry

fbshipit-source-id: ab461949782c6908a589ea098fcfcf5c3e081ee6
2021-08-25 16:47:40 -07:00
Wei-Sheng Chin
a55cae3d37 Fix missing element types and shapes when autograd.Function has multiple tensor outputs (#57966)
Summary:
When generating IR for autograd.Function, if the function has multiple outputs, a TupleUnpack may be inserted after the original function node, and PyTorch only assigns proper information (tensor element type and shape) to the TupleUnpack, forgetting the original function node. In contrast, if autograd.Function only produces one output, the original function node may have tensor element type and shape in its output schema.

Before this PR:
- (simplified) IR for autograd.Function with one output: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output (tensor, dtype=float32, shape=[4, 5])
- (simplified) IR for autograd.Function with multiple outputs: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output_0 **(tensor)**, output_1 **(tensor)** -> TupleUnpack output_2 (tensor, dtype=float32, shape=[4, 5]), output_3 (tensor, dtype=float32, shape=[6, 7])

After this PR:
- (simplified) IR for autograd.Function with one output: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output (tensor, dtype=float32, shape=[4, 5])
- (simplified) IR for autograd.Function with multiple outputs: input (tensor, dtype=float32, shape=[2, 3]) -> PythonOp -> output_0 **(tensor, dtype=float32, shape=[4, 5])**, output_1 **(tensor, dtype=float32, shape=[6, 7])** -> TupleUnpack output_2 (tensor, dtype=float32, shape=[4, 5]), output_3 (tensor, dtype=float32, shape=[6, 7])

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57966

Reviewed By: zhxchen17

Differential Revision: D30208207

Pulled By: gmagogsfm

fbshipit-source-id: 42a3d1f9c0932133112a85df0c49cf4ea0afa175
2021-08-10 19:48:11 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_
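
For illustration, the kind of rewrite applied throughout:

```python
import torch

# legacy constructor call (removed): dtype depends on the default tensor type
x = torch.Tensor([1, 2, 3])

# preferred factory function: dtype is inferred from the data
y = torch.tensor([1.0, 2.0, 3.0])
```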

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Yuxin Wu
b0984f7925 [pytorch] use correct warning type for tracer warnings (#53460)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53460

We have code to ignore this category of warnings and found this one is incorrect.

Use `stacklevel=2`, otherwise the warning is always filtered by TracerWarning.ignore_lib_warnings()
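
A sketch of the corrected call-site pattern (illustrative, not the diff's exact code):

```python
import warnings
import torch

warnings.warn(
    "example tracer warning",
    category=torch.jit.TracerWarning,
    # Attribute the warning to the caller so it is not swallowed by
    # TracerWarning.ignore_lib_warnings().
    stacklevel=2,
)
```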

Test Plan: sandcastle

Reviewed By: wanchaol

Differential Revision: D26867290

fbshipit-source-id: cda1bc74a28d5965d52387d5ea2c4dcd1a2b1e86
2021-03-08 17:02:41 -08:00
Yanan Cao
f448c59a57 Fix jit.trace mis-handling of InterfaceType (#53052)
Summary:
`jit.trace` recursively gathers all named attributes in the module at the beginning of tracing. This is fine in a pure-tracing environment, but breaks when a scripted module that contains an InterfaceType'd submodule is involved. Because InterfaceType, by design, is not allowed to have any attributes, some of the gathered attributes turn into fatal errors in subsequent graph rewrite passes.

This PR fixes this bug by distinguishing InterfaceType'd submodules from
normal ClassType'd submodules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53052

Reviewed By: wanchaol

Differential Revision: D26735566

Pulled By: gmagogsfm

fbshipit-source-id: a14aee6f1fe8000f80c2dc60bdf19acee6225090
2021-03-02 00:40:19 -08:00
Elias Ellison
752d808fa0 Trace linear as aten::linear (#51897)
Summary:
https://github.com/pytorch/pytorch/pull/51613 made `torch.nn.functional.linear` compile as `aten::linear`; this PR extends the same behavior to tracing.
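
A quick way to observe the new behavior (illustrative):

```python
import torch
import torch.nn.functional as F

def fn(x, w, b):
    return F.linear(x, w, b)

traced = torch.jit.trace(
    fn, (torch.randn(2, 3), torch.randn(4, 3), torch.randn(4))
)
# With this change the traced graph contains a single aten::linear call
# instead of the decomposed matmul/add form.
print(traced.graph)
```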

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51897

Reviewed By: albanD

Differential Revision: D26320711

Pulled By: eellison

fbshipit-source-id: a26d3c37323a0706313c6ebb210bad60eec6a64b
2021-02-19 10:20:42 -08:00
Yanan Cao
705fa7e964 [Usability] Capture argument names for traced functions and modules (#51775)
Summary:
Previously `torch.jit.trace` relied on autograd hooks to infer names of tensors in the computation, including those of function/method arguments. This often doesn't work out because:

- These names often do not exist
- The tracer uses the argument name of the first tensor operation on each tensor as the inferred argument name, and these tensor operations have programmatically generated names like `argument_1`

This PR extracts argument names directly from the Python function and passes them down to the tracer, which then assigns them to the correct graph inputs. This way, we always have the correct argument names captured in the IR.

This is useful both for debugging and for supporting the use of `InterfaceType` to represent traced modules.
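
For illustration, how the captured names show up in a traced graph (a sketch):

```python
import torch

def fn(first, second):
    return first + second

traced = torch.jit.trace(fn, (torch.randn(2), torch.randn(2)))
# With this change the graph inputs carry the Python argument names,
# e.g. graph(%first : Float(2, ...), %second : Float(2, ...)).
print(traced.graph)
```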

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51775

Reviewed By: izdeby

Differential Revision: D26273105

Pulled By: gmagogsfm

fbshipit-source-id: 934a385041137dc3731bb6fa8657b11532fed9e5
2021-02-10 18:28:08 -08:00
Luca(Wei) Chen
3f23ad5bce [Bug] fix for module_has_exports (#50680)
Summary:
The attributes in `dir(mod)` may not be valid; calling `getattr` on them can throw an error.
Use `hasattr` to test whether an attribute is valid.

Here is an example:
```python
class A:
    def __init__(self, x):
        if x:
            self._attr = 1

    @property
    def val(self):
        return getattr(self, '_attr')

a = A(False)
print('val' in dir(a))
print(hasattr(a, 'val'))

b = A(True)
print('val' in dir(b))
print(hasattr(b, 'val'))
```

And the outputs:
```
True
False
True
True
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50680

Reviewed By: malfet

Differential Revision: D26103975

Pulled By: eellison

fbshipit-source-id: 67a799afe7d726153c91654d483937c5e198ba94
2021-01-27 16:03:24 -08:00
Tugsbayasgalan Manlaibaatar
0c9fb4aff0 Disable tracer warning for slicing indices. (#50414)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50414

If the index supplied from Python is an integral type, everything is converted to int64_t, which is traced correctly.

Test Plan:
new test case

Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D25930773

fbshipit-source-id: a3dfeb49df1394c5c8bea0de46038d2c549a0dc6
2021-01-19 14:15:50 -08:00
Richard Barnes
cf45d65f1c Clean up some type annotations in test/jit/...../test_class_type.py (#50156)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50156

Upgrades type annotations from Python2 to Python3

Test Plan: Sandcastle tests

Reviewed By: xush6528

Differential Revision: D25720035

fbshipit-source-id: 7e1aec34b21f3c9a3e8db9578258d99ffb87e6d4
2021-01-12 13:28:13 -08:00
Heitor Schueroff
1bb7d8ff93 Revert D25717504: Clean up some type annotations in test/jit
Test Plan: revert-hammer

Differential Revision:
D25717504 (a4f30d48d8)

Original commit changeset: 9a83c44db02e

fbshipit-source-id: e6e3a83bed22701d8125f5a293dfcd5093c1a2cd
2021-01-08 12:14:48 -08:00
Richard Barnes
a4f30d48d8 Clean up some type annotations in test/jit (#50158)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50158

Upgrades type annotations from Python2 to Python3

Test Plan: Sandcastle tests

Reviewed By: xush6528

Differential Revision: D25717504

fbshipit-source-id: 9a83c44db02ec79f353862255732873f6d7f885e
2021-01-08 10:56:55 -08:00
peter
8d7338e820 Enable tests using named temp files on Windows (#49640)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49640

Reviewed By: ngimel

Differential Revision: D25681548

Pulled By: malfet

fbshipit-source-id: 0e2b25817c98d749920cb2b4079033a2ee8c1456
2020-12-29 09:57:35 -08:00
Will Feng (DPER)
010b9c52f4 Skip None submodule during JIT-tracing (#49765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49765

Some PyTorch modules can have None as a submodule, which causes the following error in JIT tracing:

Repro script:
```
import torch

class TestModule(torch.nn.Module):
  def __init__(self):
    super().__init__()
    self.submod = torch.nn.Linear(3, 4)
    self.submod = None

  def forward(self, inputs):
    return inputs

m = TestModule()
tm = torch.jit.trace(m, torch.tensor(1.))
```
Error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 742, in trace
    _module_class,
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 928, in trace_module
    module = make_module(mod, _module_class, _compilation_unit)
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 560, in make_module
    return _module_class(mod, _compilation_unit=_compilation_unit)
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 1039, in __init__
    submodule, TracedModule, _compilation_unit=None
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 560, in make_module
    return _module_class(mod, _compilation_unit=_compilation_unit)
  File "/data/miniconda3/envs/master_nightly/lib/python3.7/site-packages/torch/jit/_trace.py", line 988, in __init__
    assert isinstance(orig, torch.nn.Module)
AssertionError
```

This pull request changes the JIT-tracing logic to skip the None submodule when tracing.

Test Plan: `buck test mode/dev //caffe2/test:jit -- test_trace_skip_none_submodule`

Reviewed By: wanchaol

Differential Revision: D25670948

fbshipit-source-id: 468f42f5ddbb8fd3de06d0bc224dc67bd7172358
2020-12-22 17:45:35 -08:00