Commit Graph

1509 Commits

Author SHA1 Message Date
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe2/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
Yanli Zhao
193ac31441 [jit] Enable IValue to hold a PyObject (#32491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32491

This PR enables IValue to hold a pure PyObject by adding a new enum tag and a
new jit_type to denote PyObject existence in IValue and the JIT type system.
We don't, and do not plan to, expose this to users.

This is the basic piece that enables IValue to be adopted more broadly, e.g.
making RRef always hold an IValue; it might also simplify some compiler logic.
ghstack-source-id: 97039980

Test Plan: Imported from OSS

Differential Revision: D19502234

fbshipit-source-id: 90be001706d707d376cfbea25980fd82980df84a
2020-01-22 15:48:32 -08:00
Elias Ellison
38d122eca9 implement tuple constants (#31841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31841

Add Tuple Constants to JIT. The constraint here is that all elements of a tuple must themselves be insertable as a constant. Previously tuples were special-cased in constant propagation, but now that there are more passes that insert constants, such as freezing, we should just have tuples be representable as constants.
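As a rough illustration (not taken from the PR), a tuple whose elements are all constant-representable can now itself be emitted as a single constant:

```
import torch

@torch.jit.script
def make_tuple():
    # every element is individually insertable as a constant, so the whole
    # tuple can be represented as one constant in the graph
    return (1, 2.0, "three")
```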

Test Plan: Imported from OSS

Differential Revision: D19439514

Pulled By: eellison

fbshipit-source-id: 3810ba08ee349fa5598f4b53ea64525996637b1a
2020-01-22 12:13:31 -08:00
Elias Ellison
adf0916606 Add str[] float[] constants resubmit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31791

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D19439513

Pulled By: eellison

fbshipit-source-id: a04c7401687b051f0d4fb4794963931ebe004194
2020-01-22 12:11:58 -08:00
peter
b77c25dec0 Fix dll load logic for Python 3.8 on Windows (#32215)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/31181 and https://github.com/pytorch/pytorch/pull/31162#discussion_r362495611.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32215

Differential Revision: D19501869

Pulled By: ezyang

fbshipit-source-id: 363824e52d2592ad968ecf1df345aa4c0daff915
2020-01-22 08:33:34 -08:00
Jerry Zhang
44b270d892 insert_quant_dequant pass support shared class types (#31408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31408

We'll error out when a graph is quantized with different QSchemes.
This only occurs when we have two modules of the same type (e.g. two Conv2d modules initialized with
the same arguments) that are quantized with two configs that would produce different quantized graphs, for example
per tensor affine and per channel affine. This is a rare case, so it should be OK to skip for now.
Actual support will come later.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D19162366

fbshipit-source-id: 798f06d0ddef0c8458237ce88b62159cc77eec8b
2020-01-21 22:18:49 -08:00
James Reed
1ecad2bb2b Test passing custom class instance to bound method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32320

Test Plan: Imported from OSS

Differential Revision: D19437335

Pulled By: jamesr66a

fbshipit-source-id: 8f5166dbe6fc5704b12b6224932460b12be0d39b
2020-01-17 23:09:38 -08:00
James Reed
c7078a1ce8 Fix returning instance of custom class from method
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32312

Test Plan: Imported from OSS

Differential Revision: D19433511

Pulled By: jamesr66a

fbshipit-source-id: f048d5f60eaba992ee42fea2d318a59b3a156578
2020-01-17 23:09:34 -08:00
Elias Ellison
e7bc1663bd fix unchecked cast alias analysis (#32309)
Summary:
Unchecked cast just refines the type of a value; the value stays the same, so the output should alias the input.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32309

Differential Revision: D19439037

Pulled By: eellison

fbshipit-source-id: fe6902d0d9a5a9ef5e9c13e1dbd056576d8c327e
2020-01-17 12:29:28 -08:00
Nikolay Korovaiko
53708e21ed classic fixed-point liveness
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31724

Differential Revision: D19426570

Pulled By: Krovatkin

fbshipit-source-id: 3387dfb25e6e9456d5d0517eac1d2e44e61d6813
2020-01-16 15:13:22 -08:00
Michael Suo
90c65b81c3 Define repr() on IValues (#32232)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32232

Previously, we were using `operator<<` as the default way of printing
IValue constants during serialization. The semantics of `operator<<`
were ill-defined, and this bit us in particular with strings and their
lack of quoting.

This PR defines the role of `operator<<`: much like Python `str()`, it
is intended to produce a human-readable-ish representation for
debugging purposes.

This PR also defines a new `repr()` function on IValue that is intended
to produce a valid Python expression that can be used to recreate an
object with the same value. `repr()` is not defined on all IValue kinds
(notably tensors!) for this reason.
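The split mirrors Python's `str()`/`repr()` distinction; a minimal Python analogy (not the C++ API itself):

```
s = 'he said "hi"'
print(str(s))   # he said "hi"       -- human-readable, no quoting
print(repr(s))  # 'he said "hi"'     -- a valid expression that recreates the value
```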

Test Plan: Imported from OSS

Differential Revision: D19417036

Pulled By: suo

fbshipit-source-id: c102d509eaf95a28b6a62280bc99ca6f09603de5
2020-01-15 17:35:41 -08:00
Richard Zou
19bbb4fccb Stop building documentation in pytorch_linux_xenial_cuda*_build (#32187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32187

Fixes #32058. Previously we would build documentation during the pytorch
linux cuda build. We don't actually need to do this because we have a
dedicated python_doc_build job that builds the docs. With this change,
the CUDA build should run ~10 minutes faster, giving devs faster signal.

Test Plan: - Check the CUDA (10.1) build on this PR, make sure it doesn't build the docs.

Differential Revision: D19400417

Pulled By: zou3519

fbshipit-source-id: e8fb2b818146f33330e06760377a9afbc18a71ed
2020-01-15 07:48:42 -08:00
Nikolay Korovaiko
02c3493a84 Fix an invalid peephole transformation if input/output values are written to (#28455)
Summary:
fixes https://github.com/pytorch/pytorch/issues/28360
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28455

Differential Revision: D19374601

Pulled By: Krovatkin

fbshipit-source-id: 622f24b40aba03e79e55a6b8d25d88417f7d8bad
2020-01-14 16:28:07 -08:00
davidriazati
61e509b992 Skip un-runnable tests (#31965)
Summary:
`test_init_ops` calls `orthogonal_`, which fails without LAPACK (this test was just missing a skip condition)

The cpp tests would fail with an `undefined symbol` error if run with `BUILD_TESTS=0`, so this PR skips them if that flag is `0`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31965

Pulled By: driazati

Differential Revision: D19320064

fbshipit-source-id: d1dcd36714107688ded25a414e8969abe026bd03
2020-01-14 11:36:52 -08:00
Jerry Zhang
1f34801460 More robust mangling (#31978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31978

Currently we keep a `mangleIndex_` that's internal to the compilation unit and
just increment the index when we find that the original name is mangled; this doesn't
guarantee the new name is not defined.
This PR fixes the problem by querying whether the new name is defined or not.
Fixes: https://github.com/pytorch/pytorch/issues/31268

Test Plan:
fixes the issue

Imported from OSS

Differential Revision: D19350535

fbshipit-source-id: fe3262b2838d4208ab72e2cd4a5970b3a792ae86
2020-01-13 11:11:50 -08:00
Elias Ellison
8ecd3f783d check for object equality in constant pooling (#31800)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31800

If we know that two constants are the same object, we can ignore other constraints and pool them together. This fixes an issue introduced by the other PR where quantization relied on constant pooling happening for correctness.

Test Plan: Imported from OSS

Differential Revision: D19269499

Pulled By: eellison

fbshipit-source-id: 9d4396125aa6899cb081863d463d4f024135cbf4
2020-01-08 16:47:07 -08:00
davidriazati
883fb5434a Use real argument names for Python functions (#29300)
Summary:
This hooks up `inspect` so that Python functions get their parameter
names attached instead of naming them `0, 1, 2, ...`. This also fixes
issue #28537, where `ignore` functions were improperly typing `self`.
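A rough sketch of the observable effect (the function and names below are made up for illustration):

```
import torch

def add(alpha, beta):
    return alpha + beta

scripted = torch.jit.script(add)
# the inspect-derived parameter names now show up in the graph,
# e.g. inputs named %alpha / %beta rather than %0 / %1
print(scripted.graph)
```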
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29300

Pulled By: driazati

Differential Revision: D19256434

fbshipit-source-id: 6a1fe7bd0afab708b8439517798955d0abfeb44c
2020-01-08 15:41:28 -08:00
Artem Volkhin
3a2757c682 Fix tracing for modules with List[Tensor] as output (#31343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31343

Fix an issue in TorchScript tracing for modules with `c10::List<at::Tensor>` as an output. TensorList was not supported properly.

Test Plan: unit tests

Reviewed By: wanchaol

Differential Revision: D18850722

fbshipit-source-id: 87a223104d1361fe754d55deceeb1e8bbcad629b
2020-01-07 11:57:25 -08:00
Jerry Zhang
5579611544 Enable foldbn tests (#29220)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29220

Support for accessing constants was added in previous
PRs; this PR re-enables the foldbn tests.

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D18846848

fbshipit-source-id: 90ceaf42539ffee80b984e0d8b2420da66c263c3
2020-01-04 11:47:01 -08:00
Jerry Zhang
ebe69236d1 Expose class constant through attr and setattr in object (#29219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29219

We added class constants in previous PRs; this PR allows access to
class constants in the object API.

Test Plan:
build/bin/test_jit
python test/test_jit.py

Imported from OSS

Differential Revision: D18846851

fbshipit-source-id: 888a6517d5f747d1f8ced283c0c2c30b2f6c72c6
2020-01-04 11:09:35 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
Lu Fang
cb1af5f61f Revert D19233558: add float[] str[] constants
Test Plan: revert-hammer

Differential Revision:
D19233558

Original commit changeset: 4f7c6d9ddbe7

fbshipit-source-id: a5020a9169e349a5970323471d673e8cd7818c66
2019-12-31 11:57:34 -08:00
Elias Ellison
dd0f2f0c19 add float[] str[] constants (#31503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31503

Add support for float lists and string lists constants, which enables better constant propagation + constant pooling + freezing.

Test Plan: Imported from OSS

Differential Revision: D19233558

Pulled By: eellison

fbshipit-source-id: 4f7c6d9ddbe7623757a9a20606ce5f394e14e93d
2019-12-30 11:58:17 -08:00
davidriazati
6064223808 @slowTest some slow tests (#31706)
Summary:
These are all the jit tests that take > 10 seconds according to `pytest test/test_jit.py --durations=15`

```
32.76s call     test/test_jit.py::TestModels::test_super_resolution
32.20s call     test/test_jit.py::TestModels::test_neural_style
30.90s call     test/test_jit.py::TestJit::test_export_batchnorm
25.95s call     test/test_jit.py::TestJit::test_dropout_module_requires_grad
22.24s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Transformer
12.38s call     test/test_jit.py::TestScript::test_fuser_double_float_codegen
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31706

Pulled By: driazati

Differential Revision: D19251567

fbshipit-source-id: 8e76f717506b8bf28d1a63ce302feb0446dc9141
2019-12-30 11:45:24 -08:00
Mingbo Wan
647569e546 get rid of choco install (#30897)
Summary:
7zip and cmake are part of the base image, so there is no need to re-install them. Removing the install step makes build/test more stable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30897

Differential Revision: D19232961

Pulled By: mingbowan

fbshipit-source-id: fa3bbd1325839a2a977bf13fdbd97fda43793b8d
2019-12-27 13:12:04 -08:00
davidriazati
446e9af5b9 Fix parsing of big float literals (#29940)
Summary:
Stacked PRs
 * **#29940 - [jit] Fix parsing of big float literals**
 * #29935 - [jit] Fix hex literal parsing
 * #29931 - [jit] Throw a better error for int too big for int64_t
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29940

Pulled By: driazati

Differential Revision: D19186604

fbshipit-source-id: 6ef66588a5cf956f281e7bd1e5584ef06f5296e9
2019-12-23 17:21:07 -08:00
Gregory Chanan
68e5172382 Support optional float parameters (float?, optional<double>). (#31517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31517

This is going to be used by upsample (which currently uses magic values to represent optionals).

For now, we just introduce a fake function for testing (torch._test_optional_float(x)).

Test Plan: Imported from OSS

Differential Revision: D19198721

Pulled By: gchanan

fbshipit-source-id: 0a1382fde0927c5d277d02d62bfb31fb574b8c74
2019-12-23 08:33:39 -08:00
James Reed
7d630278da Separate torchbind from Python (#30242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30242

Pull Request resolved: https://github.com/pytorch/pytorch/pull/29501

Currently blocked on schema serialization issue

Test Plan: Imported from OSS

Differential Revision: D18463063

Pulled By: jamesr66a

fbshipit-source-id: c12a1b644eb9bf04e68ff93cccf91d6cb3e75359
2019-12-21 22:52:40 -08:00
Martin Yuan
11854bcd38 Add test to torch.jit.export_opnames, make the _C function private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31446

Test Plan: Imported from OSS

Differential Revision: D19172851

Pulled By: iseeyuan

fbshipit-source-id: f06d8766ed73c9abe4ebf41c402ee64880d745be
2019-12-20 13:38:43 -08:00
Nikolay Korovaiko
5375ceae80 run optimizations on pre-profiled graph (#31392)
Summary:
This is the first stab at running profile-insensitive optimizations on pre-profiled graphs. Running those optimizations has the potential to simplify graphs greatly before GuardElimination, so GuardElimination should be able to remove more guards.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31392

Differential Revision: D19173639

Pulled By: Krovatkin

fbshipit-source-id: 2485a2a598c10f9b5445efb30b16439ad4551b3f
2019-12-20 10:49:08 -08:00
Zachary DeVito
457286a383 fix missing type check in dictionary literal
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31375

Test Plan: Imported from OSS

Differential Revision: D19145440

Pulled By: zdevito

fbshipit-source-id: 69909089586149ef766b4858d3420864a81b2493
2019-12-19 16:22:36 -08:00
Nikolay Korovaiko
fc3103b116 fixing a naming issue in creating a residual loop node in a bailout graph (#31400)
Summary:
This addresses the issue of differentiating between the `%4` in
`%12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3)` and the `%4` in `%y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24` inside the loop's body in a residual continuation loop, because these should be different values.

```
[DUMP profiling_graph_executor_impl.cpp:124] with prim::BailoutTemplate_0 = graph(%z.1 : int,
[DUMP profiling_graph_executor_impl.cpp:124]       %size.1 : int):
[DUMP profiling_graph_executor_impl.cpp:124]   %2 : Tensor = prim::Constant[value= 1  1 [ CPUDoubleType{2} ]]()
[DUMP profiling_graph_executor_impl.cpp:124]   %3 : Double(2) = prim::BailOut[index=0](%2, %z.1, %size.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %4 : int = prim::Constant[value=0]() # test_jit.py:3772:54
[DUMP profiling_graph_executor_impl.cpp:124]   %5 : None = prim::Constant()
[DUMP profiling_graph_executor_impl.cpp:124]   %6 : bool = prim::Constant[value=1]() # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]   %counters.1 : int[] = prim::ListConstruct()
[DUMP profiling_graph_executor_impl.cpp:124]   %8 : int = prim::Constant[value=8]()
[DUMP profiling_graph_executor_impl.cpp:124]   %9 : int = aten::__round_to_zero_floordiv(%size.1, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %10 : int = aten::mul(%9, %8)
[DUMP profiling_graph_executor_impl.cpp:124]   %11 : int = aten::sub(%size.1, %10)
[DUMP profiling_graph_executor_impl.cpp:124]   %12 : int, %y.1 : Tensor = prim::Loop(%9, %6, %4, %3) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.2 : int, %15 : int, %y.7 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %17 : Double(2) = prim::BailOut[index=1](%y.7, %z.1, %counters.1, %9, %11, %i.2, %15)
[DUMP profiling_graph_executor_impl.cpp:124]       %18 : int[] = aten::append(%counters.1, %15) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %19 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %20 : Tensor = aten::ones(%19, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %21 : Double(1) = prim::BailOut[index=2](%20, %z.1, %counters.1, %9, %11, %i.2, %15, %17)
[DUMP profiling_graph_executor_impl.cpp:124]       %22 : Tensor[] = prim::ListConstruct(%17, %21)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.5 : Double(3) = aten::cat(%22, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %24 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %25 : int = aten::add(%15, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %26 : int[] = aten::append(%counters.1, %25) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %27 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %28 : Tensor = aten::ones(%27, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %29 : Double(1) = prim::BailOut[index=3](%28, %z.1, %counters.1, %9, %11, %i.2, %y.5, %25)
[DUMP profiling_graph_executor_impl.cpp:124]       %30 : Tensor[] = prim::ListConstruct(%y.5, %29)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.9 : Double(4) = aten::cat(%30, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %32 : int = aten::add(%25, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %33 : int[] = aten::append(%counters.1, %32) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %34 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %35 : Tensor = aten::ones(%34, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %36 : Double(1) = prim::BailOut[index=4](%35, %z.1, %counters.1, %9, %11, %i.2, %y.9, %32)
[DUMP profiling_graph_executor_impl.cpp:124]       %37 : Tensor[] = prim::ListConstruct(%y.9, %36)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.10 : Double(5) = aten::cat(%37, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %39 : int = aten::add(%32, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %40 : int[] = aten::append(%counters.1, %39) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %41 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %42 : Tensor = aten::ones(%41, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %43 : Double(1) = prim::BailOut[index=5](%42, %z.1, %counters.1, %9, %11, %i.2, %y.10, %39)
[DUMP profiling_graph_executor_impl.cpp:124]       %44 : Tensor[] = prim::ListConstruct(%y.10, %43)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.11 : Double(6) = aten::cat(%44, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %46 : int = aten::add(%39, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %47 : int[] = aten::append(%counters.1, %46) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %48 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %49 : Tensor = aten::ones(%48, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %50 : Double(1) = prim::BailOut[index=6](%49, %z.1, %counters.1, %9, %11, %i.2, %y.11, %46)
[DUMP profiling_graph_executor_impl.cpp:124]       %51 : Tensor[] = prim::ListConstruct(%y.11, %50)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.12 : Double(7) = aten::cat(%51, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %53 : int = aten::add(%46, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %54 : int[] = aten::append(%counters.1, %53) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %55 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %56 : Tensor = aten::ones(%55, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %57 : Double(1) = prim::BailOut[index=7](%56, %z.1, %counters.1, %9, %11, %i.2, %y.12, %53)
[DUMP profiling_graph_executor_impl.cpp:124]       %58 : Tensor[] = prim::ListConstruct(%y.12, %57)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.13 : Double(8) = aten::cat(%58, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %60 : int = aten::add(%53, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %61 : int[] = aten::append(%counters.1, %60) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %62 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %63 : Tensor = aten::ones(%62, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %64 : Double(1) = prim::BailOut[index=8](%63, %z.1, %counters.1, %9, %11, %i.2, %y.13, %60)
[DUMP profiling_graph_executor_impl.cpp:124]       %65 : Tensor[] = prim::ListConstruct(%y.13, %64)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.14 : Double(9) = aten::cat(%65, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %67 : int = aten::add(%60, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       %68 : int[] = aten::append(%counters.1, %67) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %69 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %70 : Tensor = aten::ones(%69, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %71 : Double(1) = prim::BailOut[index=9](%70, %z.1, %counters.1, %9, %11, %i.2, %y.14, %67)
[DUMP profiling_graph_executor_impl.cpp:124]       %72 : Tensor[] = prim::ListConstruct(%y.14, %71)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.15 : Tensor = aten::cat(%72, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %74 : int = aten::add(%67, %24)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %74, %y.15)
[DUMP profiling_graph_executor_impl.cpp:124]   %75 : Double(10) = prim::BailOut[index=10](%y.1, %z.1, %counters.1, %11, %12)
[DUMP profiling_graph_executor_impl.cpp:124]   %76 : int, %y : Tensor = prim::Loop(%11, %6, %12, %75) # test_jit.py:3770:16
[DUMP profiling_graph_executor_impl.cpp:124]     block0(%i.1 : int, %79 : int, %y.6 : Tensor):
[DUMP profiling_graph_executor_impl.cpp:124]       %81 : Double(*) = prim::BailOut[index=11](%y.6, %z.1, %counters.1, %11, %i.1, %79)
[DUMP profiling_graph_executor_impl.cpp:124]       %82 : int[] = aten::append(%counters.1, %79) # test_jit.py:3771:20
[DUMP profiling_graph_executor_impl.cpp:124]       %83 : int[] = prim::ListConstruct(%z.1)
[DUMP profiling_graph_executor_impl.cpp:124]       %84 : Tensor = aten::ones(%83, %5, %5, %5, %5) # test_jit.py:3772:38
[DUMP profiling_graph_executor_impl.cpp:124]       %85 : Double(1) = prim::BailOut[index=12](%84, %counters.1, %11, %i.1, %79, %81)
[DUMP profiling_graph_executor_impl.cpp:124]       %86 : Tensor[] = prim::ListConstruct(%81, %85)
[DUMP profiling_graph_executor_impl.cpp:124]       %y.4 : Tensor = aten::cat(%86, %4) # test_jit.py:3772:24
[DUMP profiling_graph_executor_impl.cpp:124]       %88 : int = prim::Constant[value=1]()
[DUMP profiling_graph_executor_impl.cpp:124]       %89 : int = aten::add(%79, %88)
[DUMP profiling_graph_executor_impl.cpp:124]       -> (%6, %89, %y.4)
[DUMP profiling_graph_executor_impl.cpp:124]   %90 : Double(12) = prim::BailOut[index=13](%y, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   %91 : (Tensor, int[]) = prim::TupleConstruct(%90, %counters.1)
[DUMP profiling_graph_executor_impl.cpp:124]   return (%91)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31400

Differential Revision: D19172750

Pulled By: Krovatkin

fbshipit-source-id: 85d3aac4e80b65b83b6be3c0bca8075a731a2b7e
2019-12-19 00:34:50 -08:00
Elias Ellison
fb24f7c4ad catch all exceptions in converting default values to ivalues (#31398)
Summary:
Previously we would only catch `py::cast_error`, which led to incomprehensible error messages like `TypeError: 'NoneType' object is not iterable`. We are running arbitrary pybind code here and not doing anything with the error message, so we should be less restrictive about the types of errors we catch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31398

Differential Revision: D19166655

Pulled By: eellison

fbshipit-source-id: 84db8b3714c718b475913f2f4bb6f19e62f2d9ec
2019-12-18 20:27:46 -08:00
Jerry Zhang
fe707c7849 Use default_observer and default_weight_observer in tests (#31424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31424

att

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D19162368

fbshipit-source-id: 33b95ba643eeeae942283bbc33f7ceda8d14c431
2019-12-18 18:35:07 -08:00
James Reed
a3cdb7eca3 Fix default instantation of dynamic quantized LSTM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31433

Test Plan: Imported from OSS

Differential Revision: D19164539

Pulled By: jamesr66a

fbshipit-source-id: 7045817ab3dfb530c4480a10523c4c6bcdbfc7eb
2019-12-18 16:59:00 -08:00
davidriazati
148bcd3ee5 Add support for builtins as attributes (#31269)
Summary:
Fixes #27495

This adds builtins as another piece of a concrete type. They're separate from normal functions since they represent the `BuiltinFunction` sugared value (which is a direct call to a builtin op). It also moves the builtins related logic from `jit/__init__.py` to `jit/_builtins.py` so it can be used from `jit/_recursive.py` to look up functions in the builtins table.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31269

Pulled By: driazati

Differential Revision: D19149779

fbshipit-source-id: d4e5e5d7d7d528b75a2f503e6004394251a4e82d
2019-12-18 15:24:45 -08:00
davidriazati
7692494c67 Fix hex literal parsing (#29935)
Summary:
Stacked PRs
 * #29940 - [jit] Fix parsing of big float literals
 * **#29935 - [jit] Fix hex literal parsing**
 * #29931 - [jit] Throw a better error for int too big for int64_t

Previously these were all parsed as `0`
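A small, illustrative TorchScript function exercising the kinds of literals this stack is about (not taken from the tests):

```
import torch

@torch.jit.script
def literals():
    mask = 0xFF        # hex literals previously parsed as 0
    big = 1.7976e308   # large float literals now round-trip correctly
    return mask, big
```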
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29935

Pulled By: driazati

Differential Revision: D19124944

fbshipit-source-id: 1ee0c1dee589933363a5efba069a2cfaf94373c5
2019-12-18 14:00:22 -08:00
davidriazati
1f50cfc24d Throw a better error for int too big for int64_t
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29931

Pulled By: driazati

Differential Revision: D19124934

fbshipit-source-id: 91841d7ba4f2f6142c51fba07b7faa14bb817e3a
2019-12-18 14:00:16 -08:00
Elias Ellison
fb30a48b4e add unsupported section (#31329)
Summary:
Add a section for unsupported ops and modules. Automatically generate the properties and attributes that aren't bound, and for ops that have semantic mismatches, set up tests so the docs stay up to date.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31329

Differential Revision: D19164472

Pulled By: eellison

fbshipit-source-id: 46290bb8a64d9de928cfb1eda5ff4558c3799c88
2019-12-18 13:56:02 -08:00
Alexander Stante
f30b14dead Fix handling of type comments in body (#30590)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/30477. Any type comment after `# type: (...) -> ` is ignored.
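A minimal sketch of the kind of function this affects (illustrative only): a type comment inside the body should not be confused with the signature type comment.

```
import torch

@torch.jit.script
def foo(x):
    # type: (int) -> int
    y = x + 1  # type: int   (a body type comment; only the first comment is the signature)
    return y
```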
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30590

Differential Revision: D18887351

Pulled By: driazati

fbshipit-source-id: 162c652f6d7610d14609bbcb25aaa27cdd947a76
2019-12-12 18:19:30 -08:00
Elias Ellison
bee6344d4e remove / rewrite weak module tests (#31193)
Summary:
Remove most of the testing for `weak_script`, since we removed it. Refactor a few of the existing tests to use the recursive scripting API.

Fix for https://github.com/pytorch/pytorch/issues/23965
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31193

Differential Revision: D18966291

Pulled By: eellison

fbshipit-source-id: 6b1e18c293f55017868a14610d87b69be42bde12
2019-12-12 13:33:38 -08:00
Elias Ellison
56de8853da Resubmit overload v2 (#31123)
Summary:
Resubmit of https://github.com/pytorch/pytorch/pull/30356 and https://github.com/pytorch/pytorch/pull/31014 :'(

The last commit contains the fix. There was an internal fbcode error: it was not able to compile the previous `impl_default->second.equal(default_val.second))` line. I tried various fixes in C++ internally but couldn't figure anything out. This is a good example of the programming costs of going from Python -> C++ for different types of objects, because the conceptual overhead has expanded in scope from (python) to (python, c++, pybind).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31123

Differential Revision: D18936128

Pulled By: eellison

fbshipit-source-id: 7d8fd66a6dd4a3e9838f3a0b68c219b6565a9462
2019-12-12 07:54:23 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for : https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
davidriazati
679b20b1e4 Unify list elements for all list types (#30777)
Summary:
Previously list elements were only unified for tensor lists.
This improves error messages and expands the unification logic
to include all types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30777

Pulled By: driazati

Differential Revision: D18837726

fbshipit-source-id: c4d275562a8429700987569426d694faa8f6002e
2019-12-11 17:00:52 -08:00
David Riazati
1f87e823b8 Make nn.Transformer TorchScript compatible (#28561)
Summary:
This makes `nn.Transformer` usable from TorchScript. It preserves backwards compatibility via `__setstate__` on the encoder/decoder.

Fixes https://github.com/pytorch/pytorch/issues/24173
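A minimal sketch of what this enables (sizes chosen arbitrarily for illustration):

```
import torch

model = torch.nn.Transformer(d_model=32, nhead=4,
                             num_encoder_layers=1, num_decoder_layers=1)
scripted = torch.jit.script(model)   # previously this failed to compile
```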
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28561

Differential Revision: D18124753

Pulled By: driazati

fbshipit-source-id: 7314843e5aa9c9bf974c4672e4edb24ed8ef4a6f
2019-12-11 10:57:31 -08:00
Alban Desmaison
717274c001 Add useful warnings for t.grad when it won't be populated for known reasons (#30531)
Summary:
Fix https://github.com/pytorch/pytorch/issues/2362 and https://github.com/pytorch/pytorch/issues/19778

To avoid issues with frozen models, we only consider warning for Tensors that require gradients and are neither leaves nor retaining gradients.
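A quick sketch of when the new warning fires (illustrative; exact warning text omitted):

```
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2                 # non-leaf that does not retain grad
y.sum().backward()
print(y.grad)             # None -- and accessing it now emits a warning
```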
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30531

Differential Revision: D18832767

Pulled By: albanD

fbshipit-source-id: 743e863dc14ab57713e66da78b2e4d759dfba0ff
2019-12-11 09:47:18 -08:00
Elias Ellison
9f3fe78239 peephole optimize type refinements (#31024)
Summary:
Peephole optimize out type refinements when they are no longer refining the type.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31024

Differential Revision: D18920958

Pulled By: eellison

fbshipit-source-id: 6d05d9812b9f9dcf001de760a78a2042fb832773
2019-12-10 18:32:28 -08:00
Pieter Noordhuis
78a00d72b4 Revert D18899127: resubmit polish up overloads on free functions
Test Plan: revert-hammer

Differential Revision:
D18899127

Original commit changeset: 9049b8718926

fbshipit-source-id: c70a8aa4120aa757dce0926a8ab3cc5c92cd6041
2019-12-10 10:51:07 -08:00
Elias Ellison
af4040d808 resubmit polish up overloads on free functions (#31014)
Summary:
Resubmitting https://github.com/pytorch/pytorch/pull/30356

The second commit reintroduces a deleted function whose removal caused the revert previously.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31014

Differential Revision: D18899127

Pulled By: eellison

fbshipit-source-id: 9049b8718926c329d9cb46bb96eac6c278e9b866
2019-12-10 07:57:47 -08:00
Elias Ellison
f48a8901c5 Add floor_divide function (#30493)
Summary:
Adds `torch.floor_divide`, following numpy's `floor_divide` API. I only implemented the out-of-place version; I can add the in-place version if requested.

Also fixes https://github.com/pytorch/pytorch/issues/27512
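A quick usage sketch of the new op (values chosen to be unambiguous):

```
import torch

torch.floor_divide(torch.tensor([7, 9]), 2)
# tensor([3, 4])
```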
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30493

Differential Revision: D18896211

Pulled By: eellison

fbshipit-source-id: ee401c96ab23a62fc114ed3bb9791b8ec150ecbd
2019-12-10 07:51:39 -08:00
Wanchao Liang
73dd8c005a Revert D18864774: polish up overloads on free functions
Test Plan: revert-hammer

Differential Revision:
D18864774

Original commit changeset: 6c566738bd6f

fbshipit-source-id: 669192605a3bc1a6ba06bbb5cae54f61637a45ae
2019-12-09 15:41:45 -08:00
Elias Ellison
446488960a polish up overloads on free functions (#30356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30356

This finishes up the `torch.jit.overload` API for free functions.
- defaults now required on the implementation function itself
- fully follows [overload spec](https://mypy.readthedocs.io/en/latest/more_types.html#function-overloading) such that the following is supported

```
@overload
def mouse_event(x1: int, y1: int) -> ClickEvent: ...
def mouse_event(x1: int,
                y1: int,
                x2: Optional[int] = None,
                y2: Optional[int] = None): ...
```

Note: `jit.overload` isn't supported yet for UDTs, but is supported for modules. This PR doesn't make the same changes for modules; if reviewers think I should include them, I could do so in a follow-up PR or wait to land this. Since that's still an internal API I think it's fine, and the changes here would allow us to expose `torch.jit.overload` on free functions.

Test Plan: Imported from OSS

Differential Revision: D18864774

Pulled By: eellison

fbshipit-source-id: 6c566738bd6f0551a000a9ea8d56e403636b7856
2019-12-09 15:12:18 -08:00
Elias Ellison
82268bf300 handle reassignment to inf and nan (#30877)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30877

Previously, when the environment tried to reassign variables which had been assigned to "inf" or "nan", it would fail because they are not simple values. Constant prop exposed this; a test was failing internally because of it.

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D18861016

Pulled By: eellison

fbshipit-source-id: b9b72978a26a0b00b13bf8ea7685825551f5a541
2019-12-09 14:20:17 -08:00
Elias Ellison
3eefc06feb add constant prop for immutable types (#30544)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30544

Run Constant Propagation upon compilation only on ops with non-aliasing inputs and outputs. This speeds up the first run of `torchvision.models.resnet18` by over 50% and speeds up compilation by about 25% (although the effects didn't seem additive with https://github.com/pytorch/pytorch/pull/30503, so I'm going to land this PR first and then see if caching still has a sizable impact).

Running constant prop only with non-aliasing types does a lot of graph cleanup by removing constant ifs and a bunch of other smaller ops. It also avoids all the jitter problems we had when we tried running full constant prop previously. Because it is idempotent it doesn't jitter, and it doesn't jitter graphs constructed from tracing because tracing doesn't emit any ops that only involve non-aliasing inputs.

Full constant prop isn't idempotent because which ops are run depends on the state of mutation in the alias db, which will often change upon successive iterations of constant propagation, and because it affects graphs constructed from tracing.

Edit: if we were okay with running constant propagation on graphs constructed from tracing (potentially making them hard to debug), an alternative would be to run constant propagation until the graph reaches a fixed point.

Test Plan: Imported from OSS

Differential Revision: D18833607

Pulled By: eellison

fbshipit-source-id: 92a0adb4882d67ed5a0db5c279f5e122aeeba54a
2019-12-09 14:20:12 -08:00
Michael Suo
62b10721fb Actually make flake8 do something (#30892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892

Fixes all outstanding lints and actually installs a properly configured
flake8

Test Plan: Imported from OSS

Differential Revision: D18862825

Pulled By: suo

fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
2019-12-06 17:50:50 -08:00
Edward Yang
11b3065323 Run method_tests on CUDA. (#30821)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30821

While investigating why our tests didn't catch #30704, I noticed that none
of our tests in method_tests() were being run on CUDA.  This diff moves
those tests into the new device-generic test framework so that we also get
CUDA coverage.  For expediency, I blacklisted all tests which didn't work
on CUDA (rather than fix them); that's something we can leave for future PRs.
This is done by way of a new expectedFailure gadget.

Note that all occurrences of skipIfNoLapack needed to be replaced with
skipCPUIfNoLapack.

I punted for test_jit; it's possible those tests should also run on CUDA, but a JIT
expert should take a look here.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18840089

Pulled By: ezyang

fbshipit-source-id: 66b613b5024c91d3e391c456bb642be7e73d4785
2019-12-06 07:24:27 -08:00
Jerry Zhang
f1755d9aea Insert GetAttr for quantization parameters instead of Constant (#30551)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30551

To enable quantizing with shared types, we need to insert GetAttr nodes for
quantization parameters since the code might be shared by multiple module instances
and we'd like quantized module instances to also share the same code but with
different values of attributes.

Test Plan:
test_jit.py, test_quantization.py

Imported from OSS

Differential Revision: D18818652

fbshipit-source-id: fc95623cac59dcedd9e3f95397524eae515e7a11
2019-12-05 22:52:45 -08:00
Edward Yang
2ced81f289 Revert "Default to not build Caffe2 operators on Windows. (#29061)" (#30740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30740

This reverts commit 7102aceaf8.

Test Plan: Imported from OSS

Differential Revision: D18834315

Pulled By: ezyang

fbshipit-source-id: 2dbd1cf686864b9840365083182cd6188a285399
2019-12-05 14:01:59 -08:00
Jerry Zhang
c4c2e23385 Supporting making submodules unique (#30037)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30037

Support quantization for modules with reused submodules, e.g. relu (automatically make them unique).
We first do a pass on the graph to find all duplicate uses of the same module and record the `Value`s of the
module instance; for each of these values we create a new module and change the access to that module.
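A hypothetical module of the kind this pass now handles (the module itself is made up; the reuse of one ReLU instance is the point):

```
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        # the same ReLU instance is used twice; the pass clones it so each
        # use can carry its own observer / quantization parameters
        return self.relu(self.relu(x))
```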

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18821483

fbshipit-source-id: 1698b981e9e9f0c728d9f03fcbcfbd260151f679
2019-12-04 19:26:56 -08:00
Elias Ellison
d38f9117fd Cache compilation of free functions (#30503)
Summary:
We don't have to recompile free functions if we've already compiled them.

Improved compilation of resnet18 by 27%.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30503

Differential Revision: D18796501

Pulled By: eellison

fbshipit-source-id: 2dee0fc5fcf9adc5b92213f8cb813730d71b376f
2019-12-04 12:45:35 -08:00
Jerry Zhang
f73cd28082 InsertObservers for shared class types (#30548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30548

ClassTypes can be shared among different module instances, but previously we assumed
they would be unique; this PR enables the insert_observers pass to work with shared class types.

Test Plan:
python test/test_jit.py
python test/test_quantization.py

Imported from OSS

Differential Revision: D18802465

fbshipit-source-id: b782e71e44a043af45577ac2b5c83e695155bb8b
2019-12-04 09:34:47 -08:00
Nikolay Korovaiko
d4c25add45 make sure the counter stays correct in between bailout transitions (#30186)
Summary:
This fixes the second issue reported in https://github.com/pytorch/pytorch/issues/29909, namely that a loop counter is assigned the wrong value after transitioning to a bailout graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30186

Differential Revision: D18646845

Pulled By: Krovatkin

fbshipit-source-id: 1f7c601dd9f35892979385ffa132fb0886a4f203
2019-12-03 14:59:08 -08:00
davidriazati
9c02b88791 Add pickler support for Device (#30131)
Summary:
This PR adds (un)pickling support for `c10::Device`. It also adds `torch.device` as a type annotation for device attributes.
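A minimal sketch (the module and attribute are made up) of a scripted module with a `torch.device` attribute that can now be serialized:

```
import torch

class M(torch.nn.Module):
    device: torch.device   # attribute typed with the new annotation

    def __init__(self):
        super().__init__()
        self.device = torch.device("cpu")

    def forward(self, x):
        return x.to(self.device)

m = torch.jit.script(M())
torch.jit.save(m, "m.pt")   # the device attribute is pickled into the archive
```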
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30131

Pulled By: driazati

Differential Revision: D18664421

fbshipit-source-id: 64378fb42b2d1bbe2bd86259e5ed10f24b5d1e49
2019-12-02 17:43:08 -08:00
Jerry Zhang
fec903ce00 Fix test case after get_qparams refactor (#30470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30470

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18710775

fbshipit-source-id: b1c7c0afbc538ff1d3e19c5d3d6bd425e4f94f06
2019-11-26 12:16:29 -08:00
Jerry Zhang
0b71e7e1fd Refactor QAT Conv module for better extensibility (#30362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30362

Right now the QAT modules (qat.ConvBn2d, qat.ConvBnReLU2d, qat.Conv2d)
are not convenient for supporting other dimensions of Conv; this PR refactors
these modules so that we can support Conv1d/Conv3d better.

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D18691152

fbshipit-source-id: 5b561e6b054eadd31b98cabdf1ac67a61ee9b805
2019-11-26 06:53:12 -08:00
Lingyi Liu
b8f50d9cc8 Support to add dequant for each use of Value (#30145)
Summary:
In this PR, we mainly handle the case where there are multiple uses of a Value when inserting the quant-dequant pair. This change will add one dequant for each use of the Value.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30145

Differential Revision: D18671600

Pulled By: lly-zero-one

fbshipit-source-id: 61324a98861da85b80dcf7e930381311118ae53b
2019-11-25 14:52:58 -08:00
David Riazati
8c6f0c0587 Detect TorchScript archives in torch.load (#29339)
Summary:
This PR looks for a `constants.pkl` file at the top level in a zip file
in `torch.load`. If found, it calls `torch.jit.load` instead and issues
a warning to call `torch.jit.load` directly
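A rough usage sketch of the new behavior (the file name is illustrative):

```
import torch

m = torch.jit.script(torch.nn.Linear(2, 2))
torch.jit.save(m, "model.pt")

# torch.load spots the constants.pkl inside the zip archive, warns,
# and forwards to torch.jit.load
loaded = torch.load("model.pt")
```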
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29339

Differential Revision: D18611095

Pulled By: driazati

fbshipit-source-id: f070a02f6b5509054fc3876b3e8356bbbcc183e1
2019-11-22 12:30:30 -08:00
James Reed
97fae401f0 Use LinearPackedParams everywhere
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30198

Test Plan: Imported from OSS

Differential Revision: D18628003

Pulled By: jamesr66a

fbshipit-source-id: 76ff0248fd859e805a15cde555d26dd2138636fa
2019-11-22 11:31:17 -08:00
Nikolay Korovaiko
e3334723b2 fix a crash due in nested bailouts (#30097)
Summary:
A prim::BailOut also needs to capture max trip counts, as for some graphs they aren't constants and they are used in continuation graphs to figure out the remaining number of iterations to run.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30097

Differential Revision: D18624446

Pulled By: Krovatkin

fbshipit-source-id: 085d25981c6669f65848996cd2d50066cc252048
2019-11-21 09:53:12 -08:00
Wanchao Liang
f7b12a9858 fix aten::grad to return optional list (#29577)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29577

`torch.autograd.grad` can return None if one of the inputs is not in the
autograd graph or does not require grad; this fixes it so that it returns a
list of optional tensors instead of a list of tensors.

This might have a BC issue unfortunately, but I think it's rare both
internally and externally (only training uses it, and most training
uses backward instead of autograd.grad), so whitelist it.

Test Plan: Imported from OSS

Differential Revision: D18491642

fbshipit-source-id: d32b2b3446cf9e8b9a98f6d203a21a75643d8991
2019-11-20 22:19:10 -08:00
James Reed
1eb9f49cc6 Fix test_jit under pytest
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30212

Test Plan: Imported from OSS

Differential Revision: D18632004

Pulled By: jamesr66a

fbshipit-source-id: d5cfd351890140c604535744598d0f6ad8989450
2019-11-20 20:44:28 -08:00
James Reed
449828378d Serialize ClassType as its qualname
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30058

Test Plan: Imported from OSS

Differential Revision: D18584269

Pulled By: jamesr66a

fbshipit-source-id: 5f1d0142bd7cd94eecbd2ed9250a0de47639040b
2019-11-20 16:17:26 -08:00
Jerry Zhang
f2b851a9e5 Returning axis from calculate_qparams (#29494)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29494

`calculate_qparams` for per-channel quantization should return the axis; this
PR adds that and also adds corresponding support in graph mode.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18580905

fbshipit-source-id: f9691c1f043f8bca39f81716a4d0b10f60a65396
2019-11-20 11:06:48 -08:00
Jerry Zhang
64817a43d2 Test for per channel graph mode quantization (#29493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29493

att

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D18580907

fbshipit-source-id: 05218e012c0322bb88714670d5dbe9332252f2ee
2019-11-20 11:06:44 -08:00
Mikhail Zolotukhin
2c8dce915c Show full call stack in TorchScript exception even when calls were inlined.
Summary:
This uses the newly added InlinedCallStack to print the original call stack
even if the real call stack is shallower because of inlining.
This change also makes TorchScript stack traces look like Python ones.

Example:
```
@torch.jit.script
def baz(c, b):
    return c + b

@torch.jit.script
def foo(c, b):
    return baz(c, b)

@torch.jit.script
def bar(c, b):
    return foo(c, b)

bar(torch.rand(10), torch.rand(9))
```

Output before:
```
Traceback (most recent call last):
  File "fail.py", line 25, in <module>
    bar(torch.rand(10), torch.rand(9))
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
The above operation failed in interpreter, with the following stack trace:
at fail.py:15:11
@torch.jit.script
def baz(c, b):
    return c + b
           ~~~~~ <--- HERE
```

Output after:
```
Traceback (most recent call last):
  File "fail.py", line 41, in <module>
    bar(torch.rand(10), torch.rand(9))
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
The above operation failed in interpreter.
Traceback (most recent call last):
  File "fail.py", line 33
@torch.jit.script
def bar(c, b):
    return foo(c, b)
           ~~~ <--- HERE
  File "fail.py", line 29, in foo
@torch.jit.script
def foo(c, b):
    return baz(c, b)
           ~~~ <--- HERE
  File "fail.py", line 25, in baz
@torch.jit.script
def baz(c, b):
    return c + b
           ~~~~~ <--- HERE
```

Output of non-scripted python code:
```
Traceback (most recent call last):
  File "fail.py", line 36, in <module>
    bar(torch.rand(10), torch.rand(9))
  File "fail.py", line 21, in bar
    return foo(c, b)
  File "fail.py", line 18, in foo
    return baz(c, b)
  File "fail.py", line 15, in baz
    return c + b
RuntimeError: The size of tensor a (10) must match the size of tensor b (9) at non-singleton dimension 0
```

Differential Revision: D18532812

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: e7e5ba5e4a8f1c7086406271d0f1685d9db8541a
2019-11-19 17:58:55 -08:00
Jerry Zhang
c2e576e74b Per channel quantization support in insert_prepack_unpack (#29701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29701

att

Test Plan:
python test/test_jit.py 'TestJit.test_insert_prepack_unpack'

Imported from OSS

Differential Revision: D18580908

fbshipit-source-id: 2d1ce9b6279586198cb53a7fd2a35325fa20bf20
2019-11-19 15:53:04 -08:00
David Riazati
dca123e76d Add zipfile serialization (#29232)
Summary:
Stacked PRs
 * https://github.com/pytorch/pytorch/issues/29244 - Use custom CRC
 * **https://github.com/pytorch/pytorch/issues/29232 - Add zipfile serialization**

This adds a serialization method that uses a zipfile (https://github.com/pytorch/pytorch/issues/26567). Right now it is
guarded behind a flag `_use_new_zipfile_serialization`. In release mode it seems to have performance about the same / slightly better than the current serialization in some simple benchmarks for large/small tensors.

Follow ups:
* Flip the `_use_new_zipfile_serialization` flag
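A quick sketch of opting in while the format is still behind the flag:

```
import torch

t = torch.randn(4)
torch.save(t, "t.pt", _use_new_zipfile_serialization=True)
loaded = torch.load("t.pt")
```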
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29232

Differential Revision: D18332036

Pulled By: driazati

fbshipit-source-id: 1bac0847c4d599612cba905f2cac8248783be2f4
2019-11-19 10:17:32 -08:00
Vitaly Fedyunin
5f510374e7 Add torch.memory_format support to the TorchScript
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28544

Test Plan: Imported from OSS

Differential Revision: D18093801

Pulled By: VitalyFedyunin

fbshipit-source-id: 2c82a1508da50a24825b44939434d86546cf1e19
2019-11-18 05:35:49 -08:00
Elias Ellison
902c1f9ef1 Check for mutable default parameters (#29833)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/21545

We were silently giving wrong semantics previously:

Python behavior:
```
def test(x=[]):
   x.append(1)
   return len(x)

print(test()) # 1
print(test()) # 2
```

By checking at the python layer, we prevent any new models from serializing this behavior but do not break existing serialized models.
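A sketch of the pattern the check now rejects at scripting time (the call is left commented out since it is expected to raise):

```
import torch

def append_one(x=[]):      # mutable default argument
    x.append(1)
    return len(x)

# torch.jit.script(append_one)  # expected to error rather than silently
#                               # freeze the default into a constant
```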
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29833

Differential Revision: D18513168

Pulled By: eellison

fbshipit-source-id: 6fe73f28e1f9d39dedeaf67a04718089d14401a1
2019-11-14 18:28:48 -08:00
James Reed
90ac35b7bd Fix tracing of autograd functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29791

Test Plan: Imported from OSS

Differential Revision: D18499142

Pulled By: jamesr66a

fbshipit-source-id: 6c2864dfbfa0419c8c888d55e082a619d058b3ee
2019-11-14 11:18:07 -08:00
Nikolay Korovaiko
78bd0069d3 enable back 2 tests for simple exec
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29661

Differential Revision: D18456143

Pulled By: Krovatkin

fbshipit-source-id: 9e4ae3ae681e3c9a81ada1e8b39da1e1342ce394
2019-11-13 14:22:19 -08:00
Ailing Zhang
8875120b54 Make dropout condition on training.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29436

Reviewed By: bddppq

Differential Revision: D18438288

Pulled By: ailzhang

fbshipit-source-id: d9c6fe4bd734dc87b2154b0ccd80efcb61740ec9
2019-11-12 16:32:02 -08:00
Jerry Zhang
fd8f74e688 Remove observer module after insert_quant_dequant (#29622)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29622

Remove the observer module in the quantized model

Test Plan: python test/test_jit.py 'TestJit.test_insert_quant_dequant'

Differential Revision: D18442888

Pulled By: jerryzh168

fbshipit-source-id: 22c777569af0e814661fe51f76341b39600fae0d
2019-11-12 14:48:40 -08:00
Elias Ellison
fbe90b65fa Cleanup special handling of Containers, allowing custom forwards (#28988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28988

Make ModuleList, Sequential, ModuleDict go through the same pathway as other modules, cleaning up a bunch of code and allowing them to define custom forwards and other methods.

EDIT: Previously, we would ignore an nn.Sequential attribute if it was not in `__constants__` ("did you forget to add it to Constants"). This PR scripts it even if it is not in `__constants__`. Is that what we want?

Test Plan: Imported from OSS

Differential Revision: D18402821

Pulled By: eellison

fbshipit-source-id: dd4f28fb0df0d1ba4ad1b3bc34ba141959a433f7
2019-11-12 14:10:38 -08:00
Junjie Bai
949d6ae184 Fix jit tracing namedtuple (#29477)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29477

When passing in a namedtuple as tracing input, __clone_inputs will call into `torch.autograd.function._nested_map` and https://github.com/pytorch/pytorch/blob/593bb14/torch/autograd/function.py#L256 will run into an error (because namedtuple doesn't support this style of constructor).
ghstack-source-id: 93586773

Differential Revision: D18405504

fbshipit-source-id: 8d0135cff0bdaaabcf6e06fac63df0f75c0c50b9
2019-11-12 10:38:20 -08:00
Jianyu Huang
bbff06ee96 Convert conv_prepack to conv2d_prepack and conv_unpack to conv2d_unpack (#29529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29529

Pull Request resolved: https://github.com/pytorch/glow/pull/3771

We would like to replace `conv_prepack` with `conv2d_prepack` and  `conv_unpack` with `conv2d_unpack`.

This makes the naming consistent between 2D and 3D conv:
```
torch.ops.quantized.conv2d_prepack
torch.ops.quantized.conv2d_unpack
torch.ops.quantized.conv2d
torch.ops.quantized.conv3d_prepack
torch.ops.quantized.conv3d_unpack
torch.ops.quantized.conv3d
```

We should do this earlier rather than later when we have more users for the quantized conv2d ops, for better engineering.

The replacement bash command is as the follows:
```
find ./ -type f -exec sed -i -e 's/quantized::conv_prepack/quantized::conv2d_prepack/g' {} \;
find ./ -type f -exec sed -i -e 's/quantized::conv_unpack/quantized::conv2d_unpack/g' {} \;
find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_prepack/torch.ops.quantized.conv2d_prepack/g' {} \;
find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_unpack/torch.ops.quantized.conv2d_unpack/g' {} \;
```
ghstack-source-id: 93661879

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D18421079

fbshipit-source-id: 17ae8b1ee79223bd2c5d4bbccd57af6580c4ab12
2019-11-11 21:54:10 -08:00
Jerry Zhang
70f886ffa4 Revert D18253777: Remove observer module after insert_quant_dequant
Test Plan: revert-hammer

Differential Revision:
D18253777

Original commit changeset: 26081c4c3fd3

fbshipit-source-id: 88f330c34976030c9310e7982fa6ae74e093ebbf
2019-11-11 17:09:58 -08:00
Jerry Zhang
587996ef04 Remove observer module after insert_quant_dequant (#28985)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28985

Remove the observer module in the quantized model

Test Plan:
python test/test_jit.py 'TestJit.test_insert_quant_dequant'

Imported from OSS

Differential Revision: D18253777

fbshipit-source-id: 26081c4c3fd3dc049cafa8c0383219bc4c233589
2019-11-11 16:31:01 -08:00
Zachary DeVito
4e4e29a511 Simplify ScriptModule bindings. (#29432)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29432

This removes a lot of the private methods on torch._C.ScriptModule,
and instead implements functionality in terms of slot_dict_impl views
to implement _parameters, _buffers, and _modules in nn.Module.

A followup PR should also remove the _register_attribute,
_register_module, and _register_parameter methods, but this requires
more refactoring of the way tracing creates modules and replication
for data parallel works.

Test Plan: Imported from OSS

Differential Revision: D18387963

Pulled By: zdevito

fbshipit-source-id: f10d47afeb30c1e05d704ae5ac4166830933125c
2019-11-11 13:52:36 -08:00
Nikolay Korovaiko
5b702ab52b switching to a simple/full executor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29230

Differential Revision: D18402229

Pulled By: Krovatkin

fbshipit-source-id: 62f4bc9bc89c0c7369359bba1359c22a2fa80f46
2019-11-11 13:41:35 -08:00
eellison
e01fc56ecb move type inference for arange into c++ (#27629)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/17662

I'm not sure if `arange` needs to be in python_arg_parser at all, given the schemas in native_functions.yaml. In any case this at least fixes the dtype mismatch.

In follow up PRs I will try to handle some of the other ops that do type inference at the python level, like randint.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27629

Differential Revision: D17885939

Pulled By: eellison

fbshipit-source-id: f97a8bc722b7ab77de1c42a992e49a4a3175ad60
2019-11-11 11:26:21 -08:00
Elias Ellison
91e1f07967 Check for unrolled loop in break & continue (#29474)
Summary:
For the same reason we don't allow iteration over heterogeneous types (modulelists/tuples) with types that don't have a static length, we also can't break/continue within them - we need to statically know all types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29474

Differential Revision: D18406097

Pulled By: eellison

fbshipit-source-id: 70ed3fc4947b6237cdd6703135a988a5c13ce786
2019-11-08 15:51:13 -08:00
Michael Suo
52456b2eba add hasattr() (#29332)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29332

Even though we're statically typed, this can be useful, e.g. as
shorthand when iterating through a module list.
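
A small usage sketch (the module here is made up for illustration):

```
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.ReLU()])

    def forward(self, x):
        for layer in self.layers:
            # hasattr() is resolved statically while the loop is unrolled,
            # so we can branch on whether a submodule has a 'weight'.
            if hasattr(layer, 'weight'):
                x = layer(x)
        return x

scripted = torch.jit.script(Net())
print(scripted(torch.randn(2, 4)).shape)
```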

Test Plan: Imported from OSS

Differential Revision: D18393097

Pulled By: suo

fbshipit-source-id: aa42e955f88d1b8a876d0727055eb596453b9839
2019-11-08 13:58:14 -08:00
Edward Yang
4e21157e01 Revert "Revert D18171156: Merge Tensor and Variable." (#29299)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29299

This reverts commit 9c43b16df9, but also
with the changes from D18348622.  Comments there:

thpp-compatibility is used by admarket/adreview/service:adreviewservice and
libtorch is too big for the service to deal with.

thpp-compatibility doesn't support autograd, so we hack around dispatching
variables by using AutoNonVariableTypeMode everywhere we call into ATen,
so we never attempt to call into Variable stubs.  If you get it wrong,
you'll get an error like:

```
what():  Could not run 'aten::empty' with arguments from the 'VariableTensorId' backend. 'aten::empty' is only available for these backends: [SparseCPUTensorId, CPUTensorId, MkldnnCPUTensorId]. (lookup_ at caffe2/aten/src/ATen/core/dispatch/DispatchTable.h:298)
```

Test Plan:
Imported from OSS

```
buck test //thpp-compatibility/...
buck build mode/opt-clang admarket/adreview/service:adreviewservice
```

adreviewservice canary: https://our.intern.facebook.com/intern/ads/canary/422290029716387895 (comparing against parent comment due to current breakage) ==> experiment store https://our.intern.facebook.com/intern/experiment_store/experiment/43990006/
adfinder canary: https://our.intern.facebook.com/intern/ads/canary/422268535840333934
adindexer canary: https://our.intern.facebook.com/intern/ads/canary/422268550559034675

adreview second canary:  https://our.intern.facebook.com/intern/ads/canary/422307863515591925

canary without thpp-compat fixups https://our.intern.facebook.com/intern/ads/canary/422308951649168772

Reviewed By: dreiss

Differential Revision: D18353504

Pulled By: ezyang

fbshipit-source-id: 65feaba39fa07bb66762810909aeb38868668a30
2019-11-08 09:11:20 -08:00
Elias Ellison
19d3a7ad02 fix negative string indexing (#22700)
Summary:
Strings allow negative indexing in Python.
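
A minimal sketch of the now-supported behavior:

```
import torch

@torch.jit.script
def last_char(s: str) -> str:
    return s[-1]

print(last_char("hello"))  # 'o'
```
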
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22700

Differential Revision: D18382382

Pulled By: eellison

fbshipit-source-id: 05c3fa0890be6234ee1467da0e65697f51236523
2019-11-07 17:28:16 -08:00
James Reed
782e80e6e7 Make jit.trace_module reentrant (#29411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29411

Fixes https://github.com/pytorch/pytorch/issues/29367

Test Plan: Imported from OSS

Differential Revision: D18380559

Pulled By: jamesr66a

fbshipit-source-id: 5caf606ccbc5dc79dac14e3c28cc02dec19ce695
2019-11-07 16:29:06 -08:00
Jerry Zhang
de9a54466d clone should preserve the type of attribute (#29269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29269

Hit this bug when I have an attribute of type `Optional[Tensor]` which
is initialized to None and reassigned later to some tensor.

Test Plan:
.

Imported from OSS

Differential Revision: D18364338

fbshipit-source-id: d8e1277a84ab7d80331cba83f5639469d398632e
2019-11-07 15:25:20 -08:00
James Reed
1dd3c8e539 Skip flaky test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29403

Test Plan: Imported from OSS

Differential Revision: D18377162

Pulled By: jamesr66a

fbshipit-source-id: 69052a7466d03468146e99da45f1ee2c9e85dfa8
2019-11-07 12:52:47 -08:00
Alban Desmaison
b14c5943d4 Handle warning in torchscript (#27154)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27154

Fix for #25859

* #28283 Fix clang-tidy errors in csrc/Module.cpp

Test Plan: Imported from OSS

Differential Revision: D18249631

Pulled By: albanD

fbshipit-source-id: 4e9bbad07cc39e7c7f0546ef7587bd4ab2dd644e
2019-11-07 08:35:16 -08:00
Alban Desmaison
9b875e1256 Buffer python warning to avoid deadlocks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26613

Test Plan: Imported from OSS

Differential Revision: D18249633

Pulled By: albanD

fbshipit-source-id: 863f52400e1b97943a67a9e1abb09ae8d045e7f0
2019-11-07 08:35:06 -08:00
Zachary DeVito
796363147f Implement more of of the nn.Module API (#28828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28828

This updates torch::script::Module to more closely match the behavior
of nn.Module. In particular, it implements the (optionally recursive)
iterators that retrieve submodules, parameters, and buffers and makes
their names match the python versions.

This also removes the individual accessors for Parameter, Module, Buffer, etc.
and replaces them with a single `attr` function which is equivalent to
writing `a.foo` in Python (`setattr` emulates `a.foo = v`).
As we build out the user-facing API for TorchScript values this will end
up matching how an  attribute is accessed on general objects.

This PR preserves the python bindings for script::Module by emulating the
old API at the binding level. A followup will clean up the usage to more
directly match the C++ API.

Test Plan: Imported from OSS

Differential Revision: D18197611

Pulled By: zdevito

fbshipit-source-id: 7ee4dcbb258605d1c988314b05d938423f1ccee5
2019-11-06 22:58:25 -08:00
James Reed
309b28ee3a Trace module calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29261

Test Plan: Imported from OSS

Differential Revision: D18343363

Pulled By: jamesr66a

fbshipit-source-id: 0c6394205e2c0ea8708028d20df83fe17b466ff4
2019-11-06 15:05:49 -08:00
Michael Suo
cc457ca30f split remaining "easy" tests (#29249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29249

This splits out all the tests that are "easy", leaving `TestJit`,
`TestScript`, the autogenerated tests, and a small docs test.

Splitting those into reasonable chunks is more effort which is less
mechanical.

Differential Revision: D18339007

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 69164b9f9a2c379fe8923a846c98dd3c37ccb70e
2019-11-06 13:23:01 -08:00
Edward Yang
9c43b16df9 Revert D18171156: Merge Tensor and Variable.
Test Plan: revert-hammer

Differential Revision:
D18171156

Original commit changeset: 5b6a045beba3

fbshipit-source-id: f5581d902c2305018ea49f8473592be2a465560b
2019-11-06 10:57:00 -08:00
James Reed
6e38c3b89e Make get_trace_graph private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29149

Test Plan: Imported from OSS

Differential Revision: D18307559

Pulled By: jamesr66a

fbshipit-source-id: 0b6aec2a1d10810d4e7f6b30b256cca79fc4e854
2019-11-05 17:04:36 -08:00
Elias Ellison
a5aeb37493 Don't throw when type is used in TorchScript (#28053)
Summary:
Type objects in python have an attribute `__abstractmethods__` that throws when it is accessed, so we were failing with an AttributeError whenever a type was used in TorchScript.

This PR prevents that error from happening. We can't just throw when a type is used because it could be used to access a static method: https://github.com/pytorch/pytorch/pull/27163
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28053

Differential Revision: D18332347

Pulled By: eellison

fbshipit-source-id: 9c7f2220f92674ad4d903621d9762cecc566ab0d
2019-11-05 15:15:12 -08:00
Edward Yang
25261a4776 Merge Tensor and Variable. (#28620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28620

All Tensors are Variables now, they just happen to have requires_grad=False. Tensors ALWAYS have `VariableTensorId` in their type set.

When constructing this patch, I had to make decisions about what I would fix in this patch, and what I would leave for follow up PRs. Here is the cleanup that happens in this patch:

- The `is_variable` property is removed from TensorOptions. I removed this immediately because unlike Tensor::is_variable, TensorOptions::is_variable doesn't respect our VariableTensorId thread-local state. This means that there were a bunch of places where TensorOptions::is_variable was false, which is obviously bogus in the world when tensor and variable are merged. Instead of keeping the method as a function that always returns true, I just opted to remove it entirely (it's not public API.) All places we set `is_variable` are deleted.
  - Knock on effect: there is no longer a separate DeprecatedTypeProperties for the variable and non-variable versions of type.
  - Knock on effect: instead of asserting on TensorOptions::is_variable, instead we just test `at::impl::variable_is_excluded()`
- There is now only one copy of the cuDNN RNN dropout cache, not two (I'm not sure why we had two to begin with)

Some cleanup that doesn't happen in this patch:
- Eliminating unnecessary uses of `make_variable`
- Eliminating `Tensor::is_variable`

The most subtle part of this patch is retaining tracing behavior: the fact that everything is a Variable means that more code gets routed to VariableType than before; this can change traces. I identified two places where we didn't appropriately turn off VariableType, mostly factory functions:

- `torch.tensor` must turn off VariableType before invoking `at::empty` to construct the tensor, as it subsequently does direct data access
- `tensor_slow` (invoked when you pass a Python scalar to a tensor argument) must turn off VariableType before calling `scalar_to_tensor` so the scalar gets traced as constant, rather than as a call to `scalar_to_tensor`.

Honestly, these are all giant hacks, and should be replaced with a more specialized guard that just toggles tracing.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D18171156

Pulled By: ezyang

fbshipit-source-id: 5b6a045beba37492647e350190f495114e86504d
2019-11-04 14:59:57 -08:00
Elias Ellison
60cb56d128 Refactor iterables (#29138)
Summary:
Refactor list comprehensions so they go through the same path as other for loops. This makes list comprehensions work with modulelists and also fixes https://github.com/pytorch/pytorch/issues/27255.

Replacing https://github.com/pytorch/pytorch/pull/28296 which was gh-poisoned and previously accepted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29138

Differential Revision: D18303432

Pulled By: eellison

fbshipit-source-id: 8e4c0ba6f800142d5c4d921d56917cfae0c74655
2019-11-04 14:39:22 -08:00
Edward Yang
7102aceaf8 Default to not build Caffe2 operators on Windows. (#29061)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29061

It looks like we are too close to the maximum library size on
Windows.  Kill Caffe2 operators to get us lower again.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: smessmer

Differential Revision: D18281083

Pulled By: ezyang

fbshipit-source-id: 8a11f9059dbf330f659bd96cc0cc2abc947723a8
2019-11-04 14:32:47 -08:00
Elias Ellison
fdeef45852 Add Support For Module Containers as Iterables (#28255)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28255

Add support for treating Sequentials, ModuleLists, and ModuleDicts as iterables.

As before, when emitting a for loop over a Module Container we unroll the loop over all elements. We require that any SugaredValue in an iterable alongside a Module Container have a statically determinable length.

Otherwise, if you zipped over a list of varying length and an nn.Sequential that alternated between returning a Tensor and a Dictionary, the output type would change based on the length of the list.
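
A sketch of the now-supported pattern (the module definition is illustrative):

```
import torch
import torch.nn as nn
from typing import List

class Tower(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(4, 4) for _ in range(3)])

    def forward(self, x):
        outs: List[torch.Tensor] = []
        # The loop over the ModuleList is unrolled at compile time, so the
        # type of every element is statically known.
        for block in self.blocks:
            x = block(x)
            outs.append(x)
        return outs

scripted = torch.jit.script(Tower())
```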

Fix for #17179
And https://github.com/pytorch/pytorch/issues/27401
and https://github.com/pytorch/pytorch/issues/27506

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D18278124

Pulled By: eellison

fbshipit-source-id: aca336a5b8da89c756b1f0884883649510cbde3c
2019-11-04 09:19:40 -08:00
Wanchao Liang
1e904049ca guard against inheritance on torchscript classes (#28407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28407

Given that we do not have support for inheritance or any polymorphism
strategy yet, we should guard against users using it until we have
full support, so that users won't be confused by the weird behaviors.

Test Plan: Imported from OSS

Differential Revision: D18284310

fbshipit-source-id: f55a224f4190d57926d91ed98f6168d787387eb8
2019-11-02 16:38:56 -07:00
Jerry Zhang
5ac3df7712 Minor fix and turn off fold_convbn (#27403)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27403

In the fold_convbn pass, we need to recompute the parameters (weight, bias) for
conv, update the attributes of conv, and update the access of bias in conv,
because if the original conv has no bias, the `self.bias` access will be
inlined and replaced by the Constant node `None = prim::Constant()`; we need to
update this to use `GetAttr[name="bias"]` to make it work. But there is
also some work going on to handle constants, so we'll fix this pass after
that is done.

Test Plan:
.

Imported from OSS

Differential Revision: D18182918

fbshipit-source-id: bba510bc41ab58e0eb76f7b77335b6e3ffe2862d
2019-11-01 12:15:38 -07:00
Vitaly Fedyunin
4bfe2f0900 Fix jit outplace tracing and reapply changes to *_like operators. (#28839)
Summary:
Reapply reverted and fix files `gen_variable_type.py` `test_jit.py`

https://github.com/pytorch/pytorch/issues/27891 Cleanup testing of _like operators
https://github.com/pytorch/pytorch/issues/27890 Add memory format support to randn_like operator
https://github.com/pytorch/pytorch/issues/27889 Add memory format support to randint_like operator
https://github.com/pytorch/pytorch/issues/27562 Add memory format support to zeros_like operator
https://github.com/pytorch/pytorch/issues/27561 Add memory format support to rand_like operator
https://github.com/pytorch/pytorch/issues/27270 Add memory format support to ones_like operator
https://github.com/pytorch/pytorch/issues/27262 Add memory format support to full_like operator
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28839

Test Plan:
Imported from GitHub, without a `Test Plan:` line.

buck test mode/dev //language_technology/neural_mt/os/pytorch_translate/test:test_onnx -- 'test_forced_decoder_export_vocab_reduction \(language_technology\.neural_mt\.os\.pytorch_translate\.test\.test_onnx\.TestONNX\)'

Differential Revision: D18203397

Pulled By: VitalyFedyunin

fbshipit-source-id: eea41cbd4c232cf5a54172b1e1b16b173798f298
2019-10-31 13:23:08 -07:00
Jerry Zhang
6b5bfd4cfc Make inserted child module names unique (#27237)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27237

Making inserted observer module and wrapper module names unique

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D18182917

fbshipit-source-id: 77aa5997fbf024c73085866550372b5e68ad9ae1
2019-10-29 12:30:49 -07:00
Nikolay Korovaiko
47faee2fae Switching tests to ProfilingExecutor (rebased)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28535

Differential Revision: D18197932

Pulled By: Krovatkin

fbshipit-source-id: 2639b205e899f800787ee57c157447d54e4669c3
2019-10-29 11:41:42 -07:00
James Reed
f782500ee0 Abstract tracer::enter and tracer::exit into a function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28473

Test Plan: Imported from OSS

Differential Revision: D18121007

Pulled By: jamesr66a

fbshipit-source-id: 4c4a4344ad9bcc4630b945d2a645a0b05928933c
2019-10-26 18:41:14 -07:00
davidriazati
dbf1996f79 Support MultiheadedAttention module (#28555)
Summary:
This makes MultiheadedAttention TorchScript compatible

It also breaks backward compatibility for old models that do not have `_qkv_same_embed_dim` as an attribute.
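
A usage sketch (shapes and sizes are illustrative):

```
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=8, num_heads=2)
scripted = torch.jit.script(mha)

q = k = v = torch.randn(5, 1, 8)  # (seq_len, batch, embed_dim)
attn_output, attn_weights = scripted(q, k, v)
```
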
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28555

Pulled By: driazati

Differential Revision: D18124746

fbshipit-source-id: 5c5042fc6fc0e557db859a8ae05174cba5fce6a9
2019-10-25 17:28:53 -07:00
Jerry Zhang
e280f93e31 Prepack folding for conv2d (#27119)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27119

att

Test Plan:
python test/test_jit.py 'TestJit.test_fold_prepack'

Imported from OSS

Differential Revision: D17717636

fbshipit-source-id: 97e9f8d927f7eacedf09f47b8ae1bf8216b8cad4
2019-10-23 09:03:14 -07:00
neginraoof
d2eb08d17b Fix tracing slice/select with dynamic inputs (#26549)
Summary:
Fix Slice/Select trace arguments. This PR stashes arguments to functions in order to avoid tracing them as constants.
This PR depends on a fix for select op in PR:
https://github.com/pytorch/pytorch/pull/25273
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26549

Reviewed By: hl475

Differential Revision: D17623851

Pulled By: houseroad

fbshipit-source-id: ae314004266688d2c25c5bada2dcedbfc4f39c5b
2019-10-22 17:09:40 -07:00
Michael Suo
4e033b0040 split TestLogging, TestDict, TestList
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28038

Test Plan: Imported from OSS

Differential Revision: D17954441

Pulled By: suo

fbshipit-source-id: 4703fb577adea3aa00fabb13c577b055e9ab4d7c
2019-10-21 17:15:15 -07:00
Michael Suo
f36497e687 split test_type_sharing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28037

Test Plan: Imported from OSS

Differential Revision: D17954442

Pulled By: suo

fbshipit-source-id: 6edee4d7dee0e52b58e71d3b520c0503fb7bd0ed
2019-10-21 17:15:11 -07:00
Zachary DeVito
fb4517132f Allow 'Any' to appear as a type argument. (#26572)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26572

Combined with isinstance specialization, this allows a degree of polymorphism
in functions without needing to use our weirder overload hacks.

We do not define any operators on Any, so the only thing you can do with it
is to put it in containers or type refine it using an isinstance check.
Any is restricted from appearing in non-argument position because we
cannot restore type tags if it ends up as a field in a class.
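
A sketch of the kind of polymorphic function this enables (assumed usage, not taken from the PR):

```
import torch
from typing import Any

@torch.jit.script
def describe(x: Any) -> int:
    # Any supports no operators directly, so we refine it first.
    if isinstance(x, torch.Tensor):
        return x.numel()
    elif isinstance(x, int):
        return x
    return 0
```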

Test Plan: Imported from OSS

Differential Revision: D17530643

Pulled By: zdevito

fbshipit-source-id: f06f78ce84819f7773953a492f3d4c49219ee94c
2019-10-16 11:07:08 -07:00
Hiroshi Ogawa
97b39a296f Fix error report highlight for unmatched type annotation (#27195)
Summary:
This PR fixes https://github.com/pytorch/pytorch/issues/25801 (see there for my verbose analysis).

As an example, for the following code:

```
import torch

torch.jit.script
def f1(x):
    # type: (int, int) -> None
    pass
```

this PR will change error message from this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
# type: (int, int) -> None
```

to this:

```
RuntimeError:
Number of type annotations (2) did not match the number of function parameters (1):
at __scratch__/example.py:4:0
torch.jit.script
def f1(x):
~~~~~~~~ <--- HERE
    # type: (int, int) -> None
    pass
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27195

Differential Revision: D17910902

Pulled By: driazati

fbshipit-source-id: af5c6353069d005752d6c7f0bd6a0c6db8437e55
2019-10-16 10:39:36 -07:00
davidriazati
8cdc262063 Add support for @staticmethod (#27163)
Summary:
Resolve static methods as functions

Fixes #26792
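
A rough sketch of the enabled pattern (the module and method names are made up; the exact supported forms may differ):

```
import torch
import torch.nn as nn

class Shift(nn.Module):
    @staticmethod
    def offset(x: torch.Tensor) -> torch.Tensor:
        return x + 100

    def forward(self, x):
        # The staticmethod is resolved as a plain function during scripting.
        return self.offset(x)

scripted = torch.jit.script(Shift())
```
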
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27163

Pulled By: driazati

Differential Revision: D17695094

fbshipit-source-id: 4671cae1a92526a35c83b8d9c12a50aa5442412b
2019-10-16 10:36:38 -07:00
Zachary DeVito
cf43aa3e16 add type refinements for isinstance checks (#27772)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27772

This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
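
A minimal sketch of the user-visible behavior (the function is illustrative):

```
import torch
from typing import Optional

@torch.jit.script
def add_or_zeros(x: Optional[torch.Tensor]) -> torch.Tensor:
    if isinstance(x, torch.Tensor):
        # x is refined from Optional[Tensor] to Tensor in this branch; under
        # the hood this is an unchecked_cast, not unchecked_unwrap_optional.
        return x + 1
    return torch.zeros(1)
```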

Test Plan: Imported from OSS

Differential Revision: D17885424

Pulled By: zdevito

fbshipit-source-id: ce81077d6fbeaf2a802a2e0b17349aca85670466
2019-10-15 16:00:42 -07:00
Zachary DeVito
30d9316f35 refactor tryMatchSchema (#26499) (#27773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27773

We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:

* tryMatchSchemas was added since the overload logic got copy/pasted
into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
  outside of tryMatchSchemas
* emitBuiltinFunction no longer needs a requires argument (it was always true)

* Argument order for all the schema matching stuff now puts the 'self'
builtin override last. This is only rarely used and was inconsistent with
matchSchema

Test Plan: Imported from OSS

Differential Revision: D17885425

Pulled By: zdevito

fbshipit-source-id: 064bc9fa4bd57b2e5366fff9f3c6ab9b9945e08b
2019-10-14 20:45:25 -07:00
Michael Suo
a4a5b6fcaa Revert D17913708: [pytorch][PR] [JIT] throw on custom forward for module containers
Test Plan: revert-hammer

Differential Revision:
D17913708

Original commit changeset: 1cc2a8a4b573

fbshipit-source-id: 19ad68a1b0fd8e0f17e1b7ab92879106517e13d2
2019-10-14 17:48:31 -07:00
Michael Suo
aaedf1b38b break out test_recursive_script (#27819)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27819

The idea here is to preserve the fact that `test_jit.py` contains all the JIT tests. So we import `JitTestCase`s from `jit/` into `test_jit.py` so that the test loader will find and run them when you do `python test_jit.py`. This also means that things like `-k` will work as expected.

The individual test files in `jit/` will throw if run directly, to prevent cases where the CI accidentally runs multiple versions of the same test.

Differential Revision: D17898105

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: 0cd6f8421c86c90a6e1bae33a3fdbe998f570e07
2019-10-14 16:00:35 -07:00
Michael Suo
151483e25d move import_class_test files around (#26722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26722

Put them in a directory under jit/ to prep for test splitting

Test Plan: Imported from OSS

Differential Revision: D17550582

Pulled By: suo

fbshipit-source-id: a592b671ffe808f02d0a597d441bd98a18c9109e
2019-10-14 16:00:31 -07:00
James Reed
fdea0cbe40 s/TestEndToEndHybridFrontendModels/TestModels/
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27877

Test Plan: Imported from OSS

Differential Revision: D17909137

Pulled By: jamesr66a

fbshipit-source-id: d8d730eed562b0f08caed7be302dd122af61e877
2019-10-14 13:13:30 -07:00
Elias Ellison
cd6b37afa7 throw on custom forward for module containers (#27763)
Summary:
Custom forwards of containers would silently not be compiled previously. Throw an error now instead.

Fix for https://github.com/pytorch/pytorch/issues/26671
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27763

Differential Revision: D17913708

Pulled By: eellison

fbshipit-source-id: 1cc2a8a4b57356ba7f007a95ede0a31e5d61aa82
2019-10-14 13:08:10 -07:00
Mike Ruberry
f6bda1e07b Removes @default_floating_dtype decorator (#27628)
Summary:
One fewer legacy decorator cluttering the test suite.

Functions relying on this decorator were updated or, in the case of test_sparse, the test suite was put back on double by default.

Note: this PR is blocked on https://github.com/pytorch/pytorch/issues/27599.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27628

Differential Revision: D17896254

Pulled By: mruberry

fbshipit-source-id: 13d460301f50ef4af7a660372432108164c0de1f
2019-10-12 12:39:34 -07:00
Michael Suo
341262754f module dedupe (#26666)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26666

Changes:
- Introduce a `ConcreteModuleType` concept. This acts both as the key into the type
  cache, and as the source of truth for `ModuleValue::attr` queries. It needs
  to do both jobs because that's how we ensure correctness (if the types are
  different, it's because `ModuleValue::attr` would return different things).
- Now `recursive_script` will first construct a `ConcreteModuleType` and search for a
  pre-existing type before starting compilation.
- All previous paths to creating a `ScriptModule` (including inheriting from
  `ScriptModule`) are now rewritten to go through `create_script_module`, so
  that we have only a single place where construction happens.

Behavioral changes:
- Big change to `torch.jit.ScriptModule` inheritance: all attributes are now
  recursively scripted if possible, matching recursive scripting semantics.
  This makes it hard to keep something from being scripted (for example, a
  Python submodule). Possibly we'll need an `ignore()` type thing for
  attributes. In particular, this adds `self.training` to *every* ScriptModule, since
  it's present on every `nn.Module`.
- I believe this change to be transparent to existing users of the inheritance API, since if you had an attribute that is unscriptable that you never used, there is no error. In some cases, we will create new attributes (even if they are unused), which will increase serialized model size from before.

Test Plan: Imported from OSS

Differential Revision: D17551196

Pulled By: suo

fbshipit-source-id: b476d1c9feb3ddfd63406d90989aaf9dfe890591
2019-10-12 09:51:57 -07:00
Michael Suo
759c99c2e3 [jit] Python None should have its type inferred as NoneType (#26665)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26665

This is actually useful. For example: in batchnorm.py, all the tracked
stats are either `nn.Parameter` or `None`. We should register them as
params if they are set, or attributes with type NoneType if they are
not.

Test Plan: Imported from OSS

Reviewed By: shannonzhu

Differential Revision: D17551197

Pulled By: suo

fbshipit-source-id: 8d6f6d76d4dab0d524c4ffdfe0c1dd465771cd00
2019-10-12 09:51:49 -07:00
Edward Yang
7135f7c263 Revert D17412856: [JIT] add type refinements for isinstance checks
Test Plan: revert-hammer

Differential Revision:
D17412856

Original commit changeset: ded47eb086c4

fbshipit-source-id: 854a6c8f322435c3f3416dbedcb642cb2d2902b1
2019-10-11 13:02:30 -07:00
Edward Yang
07fc7d05ce Revert D17488297: [jit] refactor tryMatchSchema
Test Plan: revert-hammer

Differential Revision:
D17488297

Original commit changeset: a32d838ce355

fbshipit-source-id: 2bd319d9554d81d09231bf1e34c8417bff468940
2019-10-10 17:39:48 -07:00
Zachary DeVito
51656eefb0 refactor tryMatchSchema (#26499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26499

We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:

* tryMatchSchemas was added since the overload logic got copy/pasted
into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
  outside of tryMatchSchemas
* emitBuiltinFunction no longer needs a requires argument (it was always true)

* Argument order for all the schema matching stuff now puts the 'self'
builtin override last. This is only rarely used and was inconsistent with
matchSchema

Test Plan: Imported from OSS

Differential Revision: D17488297

Pulled By: zdevito

fbshipit-source-id: a32d838ce35544972fa8767557acc22149081b55
2019-10-09 22:11:24 -07:00
Zachary DeVito
d44b9cd4bb add type refinements for isinstance checks (#26271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26271

This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.

Test Plan: Imported from OSS

Differential Revision: D17412856

Pulled By: zdevito

fbshipit-source-id: ded47eb086c4610998ad92bb1174225af00220f7
2019-10-09 22:11:19 -07:00
Zachary DeVito
eb9000be4e always use the closure to resolve variable names (#27515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27515

Resolving variable names using the local activation frames does not work
when using recursive scripting, but our current code tries to do it
(incorrectly) anyway. The reason it works is only because the script
call is in the same local frame as the definition. This will not be
true in practice and makes it seem like the API works in more cases
than it really does. This forces us to always use closure-based annotations,
documents it, and it fixes the tests so that they still pass.

Test Plan: Imported from OSS

Differential Revision: D17803403

Pulled By: zdevito

fbshipit-source-id: e172559c655b05f0acf96c34f5bdc849f4e09ce2
2019-10-09 12:16:15 -07:00
James Reed
e63bfb7877 Use orig source range in Node::print
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27524

Test Plan: Imported from OSS

Differential Revision: D17806454

Pulled By: jamesr66a

fbshipit-source-id: 5e3edb87fc79ad8dd1aed0b7d4a2153e7e0429ab
2019-10-08 10:30:56 -07:00
davidriazati
725810f42c Set existing attributes under recursive script (#27514)
Summary:
This is related to #27109: `training` was being skipped since modules
have it as an attribute by default, but it should be copied anyway.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27514

Pulled By: driazati

Differential Revision: D17802544

fbshipit-source-id: 9e8f068903b67073c509c2c598b27622fcada2d7
2019-10-08 10:12:04 -07:00
Mike Ruberry
7f183a978f Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor) (#27444)
Summary:
This PR stops common_utils.py from setting the default tensor type when it's imported. See issue https://github.com/pytorch/pytorch/issues/27355. This is a frequent source of confusion for test writers.

Many tests relied on this setting (whether they knew it or not), and this PR also updates the test suite to pass without common_utils.py setting the default tensor type. Some larger test files now set the default floating dtype themselves, however. These test files are:

- test_autograd.py
- test_distributions.py
- test_jit.py
- test_nn.py

This is still a significant improvement from today, however. First, these files set the default floating dtype much more clearly than importing it from common_utils. Second, the rest of the test suite no longer sets this globally. Third, this PR is a springboard to updating those tests, too. In particular, as tests are made generic they can be moved aways from relying on this global setting.

Notable technical changes in this PR are:

- Significant updates to test_torch.py to make it pass without setting the default floating dtype globally.
- The default_floating_dtype decorator is now defined in common_utils, a couple versions of this operator were defined in test files previously.
- test_torch-specific parts of common_utils were refactored into test_torch.
- tensor creation methods in common_utils were updated to accept an optional dtype and device.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27444

Differential Revision: D17795235

Pulled By: mruberry

fbshipit-source-id: 7f77271c0c836e69f183ad9057a2c4b29f09d2e1
2019-10-08 09:52:44 -07:00
davidriazati
0046092178 Reduce special casing around 'training' (#27109)
Summary:
Most of this was old cruft left over from special handling of `training` before we had a `bool` type. This makes all modules have a `training` attribute that is true by default and removes all other special handling.

Fixes #26884
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27109

Pulled By: driazati

Differential Revision: D17728129

fbshipit-source-id: 8ddc9fbb07a953dd05529538bfdd01ed88b5cb57
2019-10-07 13:52:59 -07:00
Wanchao Liang
b05ec828ad Add interface/object serialization as module attribute (#26770)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26770

This PR adds interface/object serialization as a module attribute, to
allow initializing an object as an interface type during python
initialization. Because an interface type can be backed by any class object
that implements that interface, if we declare it in
python/module.__init__, we need to collect the runtime types of the
value and serialize them to ensure complete code information.

Test Plan: Imported from OSS

Differential Revision: D17742707

fbshipit-source-id: 7f614ad4f982996d320a0e2dd3515bf47370e730
2019-10-04 17:12:08 -07:00
Zachary DeVito
9ade1e6944 improve error messages when a method or attribute is missing (#27110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27110

Previously, errors about missing methods on some types like tensors would talk
about 'builtins', which are only a thing inside of the compiler. Furthermore,
the error would only occur when the builtin was applied and it was discovered
that no builtin existed. This changes the error message so that a missing
method on our builtin types is discovered at attribute lookup.

Test Plan: Imported from OSS

Differential Revision: D17677616

Pulled By: zdevito

fbshipit-source-id: 2f7cf6c6093a9c832569c44f4b1044a2e56fe205
2019-10-03 21:25:01 -07:00
davidriazati
8fe5dcf699 Skip tests that use numpy if it's not present
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27165

Pulled By: driazati

Differential Revision: D17695078

fbshipit-source-id: d25c920f4c43285028537f88761d47a2c9db7b8f
2019-10-03 17:18:41 -07:00
Wanchao Liang
827a00cf63 Support interface python assignment as an attribute (#26734)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26734

This PR adds python assignment of an interface as an attribute in the
module; it enables any object that implicitly implements the specific
interface to be assigned to the interface type in python.

Serialization support for interface/class assignment will be done in the
follow up PR

Test Plan: Imported from OSS

Differential Revision: D17742708

Pulled By: wanchaol

fbshipit-source-id: a0a2d8c74b60ed3fa6c05e1b0d49b7ad1abc670b
2019-10-03 17:18:37 -07:00
Nikolay Korovaiko
1bc7ea17b2 more profiler changes in C++ before enabling checkScript changes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26909

Differential Revision: D17683632

Pulled By: Krovatkin

fbshipit-source-id: 5d36c3c4cf7411c56485ef19fe59262b9f8b45b2
2019-10-03 10:39:54 -07:00
albanD
5b5f398dd4 Make cpp-backed jit classes appear as being in torch.jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27220

Test Plan: Imported from OSS

Differential Revision: D17715305

Pulled By: albanD

fbshipit-source-id: 574704ad23ece6da7aa2780b78867307bef523cc
2019-10-03 08:28:36 -07:00
Jerry Zhang
e33ec3942e Add insert_prepack_unpack for conv2d (#27118)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27118

att

Test Plan:
test_jit.py

Imported from OSS

Differential Revision: D17717637

fbshipit-source-id: 83c94ff12e6a2137e0161a338fcdd100514c452f
2019-10-02 15:14:24 -07:00
Egor Peshkov
bb51980766 make default string arguments in schemas human readable (#27088)
Summary:
[jit] String default args get printed as ascii values https://github.com/pytorch/pytorch/issues/25804
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27088

Differential Revision: D17689732

Pulled By: Krovatkin

fbshipit-source-id: f385b2fe44c5a2387bfcb6484edf7faa92bc8edf
2019-10-02 11:32:24 -07:00
Zachary DeVito
becf080e4a add dynamic isinstance (#26269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26269

Previously, isinstance only worked when we could statically determine
whether it was true or false. Now we can actually issue an isinstance check
in cases where it depends on the runtime type, e.g. Optional[int]
being an instance of int. This is not very useful on its own yet,
but with type refinement and allowing Any as an argument type this will
allow for python-style "overloaded" functions such that we can
remove our __overload__ support.

Test Plan: Imported from OSS

Differential Revision: D17412853

Pulled By: zdevito

fbshipit-source-id: e2c37040f25f6b94ee1676854fceecd22de190ef
2019-10-01 16:46:59 -07:00
Jerry Zhang
14d29aeece insert_prepack_unpack pattern (#27102)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27102

We need to prepack the quantized weight rather than the original weight

Test Plan:
.

Imported from OSS

Differential Revision: D17678264

fbshipit-source-id: 50614b841cc41007affcf3df7251f042a5a97c10
2019-10-01 11:47:33 -07:00
Jerry Zhang
f742ceaa46 API - add more passes to graph mode (#27093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27093

Add `insert_prepack_unpack` and `fold_prepack` to `convert_script`

Test Plan:
.

Imported from OSS

Differential Revision: D17678262

fbshipit-source-id: 4bfd6681af6fce226cc77aed8dd84066cbd8ed17
2019-10-01 11:26:02 -07:00
James Reed
6a4ca9abec Support layout() in script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27100

Test Plan: Imported from OSS

Differential Revision: D17675837

Pulled By: jamesr66a

fbshipit-source-id: d561664368382c28b26053d5879b17450c60a810
2019-09-30 19:30:38 -07:00
Elias Ellison
990f4ca76d make class types callable (#26743)
Summary:
Allow invoking a UDT (user-defined type) if it has a `__call__` method

Fix for https://github.com/pytorch/pytorch/issues/26725
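
A small sketch of the enabled pattern (the class is illustrative):

```
import torch

@torch.jit.script
class Scale(object):
    def __init__(self, factor: float):
        self.factor = factor

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.factor

@torch.jit.script
def apply_scale(x: torch.Tensor) -> torch.Tensor:
    s = Scale(2.0)
    return s(x)  # invokes Scale.__call__
```
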
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26743

Differential Revision: D17677795

Pulled By: eellison

fbshipit-source-id: 0ceb6088e22c4689e0735fdb9e07418a75603486
2019-09-30 17:25:26 -07:00
Jerry Zhang
e367f605cd Integrate prepacked workaround in QuantFusion (#26939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26939

Updated quant fusion patterns to work with modules with prepack
params folded into module.

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17636398

fbshipit-source-id: 8e7917e981260b81ed6038a1c2ccf19049726395
2019-09-30 10:35:04 -07:00
Wanchao Liang
a252aee8c2 serialize autograd ops into its own namespace (#26761)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26761

This PR serializes autograd ops into their own namespace by turning the
serialization op name into `torch.autograd.op`. This is to keep the
original code namespace rather than moving everything to the global namespace;
this will be handled more properly in the future when we handle the module
namespace. This change also preserves BC until we have namespace handling.

Test Plan: Imported from OSS

Differential Revision: D17645438

fbshipit-source-id: 656ec6b31d4fc2252585de73117c4d40a122678e
2019-09-30 10:28:40 -07:00
Jerry Zhang
d91e490a9f Fold prepacked weight into module (#26579)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26579

Remove the `linear_prepack` call and attach a module to the
parent class that contains the packed weight and bias.
This is to support serialization of the quantized model,
since the packed weight and bias are not serializable and
we need to overwrite the `__getstate__` and `__setstate__`
functions to be able to serialize them.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D17636397

fbshipit-source-id: 3b81b6faa4413e4309453fd6acec2f0be6fd2f16
2019-09-30 10:12:10 -07:00
peter
1eaa9f89fb Fix Windows CI
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27031

Differential Revision: D17665998

Pulled By: ezyang

fbshipit-source-id: 6926e304c75ba878520627f1e829412f633b1bec
2019-09-30 07:38:53 -07:00
Jerry Zhang
9f9ba3a900 Add InsertPackUnpack pass (#26959)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26959

Add the insert_pack_unpack pass for future transformations.
Only a pattern for linear is added; we will need a
pattern for conv2d as well.

Test Plan:
tbd

Imported from OSS

Differential Revision: D17636400

fbshipit-source-id: 8dc64213aac0f91b55dbe3aafd92c6dce36ddd89
2019-09-27 22:16:41 -07:00
Jerry Zhang
5e79b5b1c7 Move some class/functions in test_jit.py to jit_utils.py (#26839)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26839

att

Test Plan:
ci

Imported from OSS

Differential Revision: D17643010

fbshipit-source-id: 5768b70410b7bdfdbee734d3a00296e5b1ad30d5
2019-09-27 18:07:24 -07:00
Elias Ellison
ff8b7ef63d fix range for non-int inputs and pow implementation (#26926)
Summary:
Previously we did not throw if an input to `range` was a non-integer.

We also typed the result from `int ** int` as an integer but returned a float value. The return type should be a float, because if the exponent is negative `int ** int` returns a float.
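
A minimal sketch of the new typing (the printed value assumes ordinary Python semantics):

```
import torch

@torch.jit.script
def int_pow(a: int, b: int) -> float:
    # int ** int is now statically typed as float, since a negative exponent
    # produces a non-integer result.
    return a ** b

print(int_pow(2, -1))  # 0.5
```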

Batching these two PRs together because it is easier to land and we're almost at the branch cut.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26926

Differential Revision: D17643039

Pulled By: eellison

fbshipit-source-id: b49203a9d420417e1307bbb653d2e33cd9e530e3
2019-09-27 17:14:23 -07:00
Dmytro Dzhulgakov
764bf826e3 Remove fbgemm_is_cpu_supported in favor of torch.backends.quantized.supported_qengines (#26840)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26840

Cleaning up top-level namespace. Also cosmetic changes to torch.backends.quantized

Test Plan: Imported from OSS

Differential Revision: D17604403

Pulled By: dzhulgakov

fbshipit-source-id: c55af277ea7319d962a82a6120f65ccd47a60abc
2019-09-27 13:45:15 -07:00
vishwakftw
43b07ff2c4 Fix nuclear norm with requires_grad=True (#26303)
Summary:
Changelog:
- Selectively assign compute_uv in the at::svd used internally in the implementation of at::nuclear_norm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26303

Test Plan:
- Add tests in common_method_invocations.py

Refixes: https://github.com/pytorch/pytorch/issues/18275

Differential Revision: D17605357

Pulled By: ezyang

fbshipit-source-id: d87d60afe678e2546dca6992ea66f2daeb6b0346
2019-09-26 12:08:25 -07:00
Zachary DeVito
0e3389dced Fix circular deps in loading (#26758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26758

This PR changes the order in which we import classes and functions so
that it is no longer necessary for them to be defined in order in a file,
or for there to be proper import statements in the exported file.

Actually importing a function/class now is driven by the need to resolve
the entity during unpickling, type resolution, or value resolution.

While this should allow significant simplification to the code that
serializes classes, this work has not been done yet in order to avoid
inevitable forward compat issues in the transition period.

Notes:
* Individual functions have been replaced with a SourceImporter object
  that exposes a resolveType method. This method loads the type if
  it has not been loaded yet, potentially parsing  (but not loading)
  the file it exists in if that file hasn't been parsed yet.
* Some legacy functionality needed to be added as a method to this object
  since the old format still used some of this logic for class resolution.

Test Plan: Imported from OSS

Differential Revision: D17558989

Pulled By: zdevito

fbshipit-source-id: 7eae3470bcbd388c4de463e3462d527776ed46c6
2019-09-26 11:39:16 -07:00
Elias Ellison
d43480d6d1 support iterables, rangevalue in list comprehensions (#26768)
Summary:
Support IterableValue expressions and RangeValue in list comprehensions. Just as when supporting list comprehensions where the expression changes the input list type, we need to correctly type the list we create.

Fixes https://github.com/pytorch/pytorch/issues/26693
Fixes https://github.com/pytorch/pytorch/issues/22483
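
A minimal sketch of what now compiles:

```
import torch
from typing import List

@torch.jit.script
def squares(n: int) -> List[int]:
    return [i * i for i in range(n)]

print(squares(5))  # [0, 1, 4, 9, 16]
```
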
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26768

Differential Revision: D17562762

Pulled By: eellison

fbshipit-source-id: 7ce8bf8605758dfd99057bc0376b4b724c4f9251
2019-09-25 15:41:32 -07:00
Basil Hosmer
167722d36e Typevar matching fix + implicit conversions from Scalar to int/float (#26453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26453

Previously, schema matching would incorrectly widen typevar bindings
when later occurrences were supertypes of earlier ones. This allowed
callsites like `floatlist.append(tensor.item())` to pass the typechecker,
causing a runtime assert (issue #24856).

An earlier, reverted fix (#25136) insisted on strict equality across all
occurrences of a typevar, necessitating explicit casts around Scalar-typed
arguments to int- or float-typed parameters, like `tensor.item()` above.
This was per the original type system design, but turned out to break
existing user code that relied on the de facto dynamic downcast. (The
error required a specialized list representation.)

The current fix includes the prevention of typevar widening, but
adds logic to insert implicit conversions from Scalar to float or int
as needed to satisfy a matched schema.
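
A minimal sketch of the callsite pattern from the issue:

```
import torch
from typing import List

@torch.jit.script
def collect(t: torch.Tensor) -> List[float]:
    xs: List[float] = []
    # t.item() is a Scalar; schema matching now inserts an implicit
    # conversion to float rather than widening the list's element type.
    xs.append(t.item())
    return xs
```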

Test Plan: Imported from OSS

Differential Revision: D17470598

Pulled By: bhosmer

fbshipit-source-id: d260dbf3cd78b9c2f2229bc61afc84e1910b5659
2019-09-25 13:49:55 -07:00
Nikolay Korovaiko
db5791d543 autodiff changes to enable profiling
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25397

Differential Revision: D17565747

Pulled By: Krovatkin

fbshipit-source-id: b772437d9e02df99db6e662cb7d1227359959bed
2019-09-25 10:11:44 -07:00
davidriazati
ef8d1c50c4 Fix builtin lookup for Python functions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26688

Pulled By: driazati

Differential Revision: D17560634

fbshipit-source-id: e1c50d1ca24e0313c2b7d704c488a29ef6a47cad
2019-09-24 18:02:36 -07:00
Michael Suo
f43b7c4435 Revert D17513451: Register values listed in __constants__ as attributes of the Module.
Test Plan: revert-hammer

Differential Revision:
D17513451

Original commit changeset: cf8f9b450e71

fbshipit-source-id: 319ec9399173eb06556969dc6be365b319c1ab6c
2019-09-24 16:30:06 -07:00
Michael Suo
1058373205 Revert D17514653: [quant] Un-hardcode epsilon constant in FoldConvBatchNorm2d.
Test Plan: revert-hammer

Differential Revision:
D17514653

Original commit changeset: 7d9cc8f619b7

fbshipit-source-id: 2cf32082a46fe169a1db4926df78a9f3256616ad
2019-09-24 16:30:04 -07:00
davidriazati
d0fff0ebc8 Make is_optional check more robust (#26312)
Summary:
If the `Union` contains a non-class type, `issubclass` would fail;
this adds a check for that case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26312

Pulled By: driazati

Differential Revision: D17505206

fbshipit-source-id: 1331e412f938e2f08ecb079972147f11e3ec77cd
2019-09-24 10:44:40 -07:00
Mikhail Zolotukhin
eddda3afdc Un-hardcode epsilon constant in FoldConvBatchNorm2d.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26584

Test Plan: Imported from OSS

Differential Revision: D17514653

Pulled By: ZolotukhinM

fbshipit-source-id: 7d9cc8f619b7dbe26fa58eac37cc131929c004d4
2019-09-24 10:30:35 -07:00
Mikhail Zolotukhin
6c758ff244 Register values listed in __constants__ as attributes of the Module. (#26581)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26581

We're currently inlining immediate values of the constants directly into
the IR when we generate it, providing no way to access these values by their
names later. This change registers such values as attributes of the
module so that they are not lost after IR generation.

Differential Revision: D17513451

Test Plan: Imported from OSS

Pulled By: ZolotukhinM

fbshipit-source-id: cf8f9b450e7178692211abd905ffd2d7ce5a6ce1
2019-09-24 10:30:31 -07:00
Jerry Zhang
52b69fbcd4 Remove _dequantize_per_channel in the pattern (#26680)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26680

This was introduced earlier under the assumption that we'd have a qconv_per_tensor_affine
and a qconv_per_channel_affine, but it turns out we don't have these, so we'll remove
those functions.

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17542607

fbshipit-source-id: b90ce5738170f0922bdc2eb1c4dbecd930f68a48
2019-09-24 10:27:52 -07:00
Elias Ellison
efaa65dd60 resolve ignored module method type annotations (#26683)
Summary:
Previously we weren't passing an rcb around, causing NamedTuples with unused methods to fail.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26683

Differential Revision: D17539656

Pulled By: eellison

fbshipit-source-id: 50091e78eea5fa3a22b4655b65384eee47a1c9d6
2019-09-24 08:16:08 -07:00
Dmytro Dzhulgakov
b93823cb65 Per-channel quantized tensor to have only a single axis (#26675)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26675

Based on an offline poll, we're very unlikely to have multi-axis quantized tensors in the foreseeable future. Let's simplify the API and just return an int instead of a list. It also matches the singular `axis` name.
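
A sketch using the current op names (assumed; the exact names were in flux around this time):

```
import torch

w = torch.randn(4, 3)
scales = torch.full((4,), 0.1)
zero_points = torch.zeros(4, dtype=torch.int64)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0, dtype=torch.qint8)

print(qw.q_per_channel_axis())  # a single int (0), no longer a list
```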

Test Plan: Imported from OSS

Differential Revision: D17537052

Pulled By: dzhulgakov

fbshipit-source-id: 676abc3b251d288468aaed467b5e5ca4063b98b0
2019-09-23 22:29:01 -07:00
Jerry Zhang
8a919f4f3d Skip observing bias across function call hierarchy (#26642)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26642

att

Test Plan:
python test/test_jit.py 'TestJit.test_insert_observers'

Imported from OSS

Differential Revision: D17538667

fbshipit-source-id: ac8f561160eed0803f6ac48cf0fed253adb58bb5
2019-09-23 18:49:40 -07:00
Zachary DeVito
fcd13549f9 add CondValue to unify refinements and code emission (#26145)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26145

This is step towards isinstance type refinement.
It primarily does yak shaving in compiler.cpp to unify the handling
of special case behavior that occurs in conditional expressions:

* Handling type refinement as part of emission.
* Handling `is None` static-if specialization.

It introduces a CondValue object that is a Value that also has
additional type refinements that are true when that Value is true,
and potentialy a static-true/false value that, if set, will cause if
statements to be handled statically, omitting typechecking of the other side.

This ends up expanding some behavior, for instance `is None` specialization
used to happen only for single expressions, but now works through
boolean logic.

Test Plan: Imported from OSS

Differential Revision: D17359500

Pulled By: zdevito

fbshipit-source-id: ce93804496c8b4c3197a5966bc28c608465fda64
2019-09-23 14:24:18 -07:00
Dmytro Dzhulgakov
ebc2365fd3 Serialization for per channel qtensor (#26339)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26339

Serializes per-channel tensors in both torch.serialization and jit. Since we haven't bound Quantizer properly yet, I chose to save a tuple representing the quantizer settings. To avoid recursive tensor serialization calls, I'm using a tuple instead of a tensor to store scales and zero points.

driazati - please check the serialization logic. Is there a good test that compares that JIT serialization and python serialization are equivalent? (I haven't tested it yet)

Test Plan: Imported from OSS

Differential Revision: D17443222

Pulled By: dzhulgakov

fbshipit-source-id: a34758de1ffd2ec1cdc5355f5baf95284a4ccf4b
2019-09-23 13:28:11 -07:00
Jerry Zhang
95cb22f21f _dequantize_linear -> _dequantize_per_tensor (#26576)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26576

to match `quantize_per_tensor`

Test Plan:
ci

Imported from OSS

Differential Revision: D17517439

fbshipit-source-id: 8c20f9b5d2a50d0e42e4444994b0987e6204ac56
2019-09-21 11:52:19 -07:00
Jerry Zhang
d09d1d9aac Add inplace argument to InsertObservers and InsertQuantDeQuant (#26389)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26389

att

Test Plan:
.

Imported from OSS

Differential Revision: D17504458

fbshipit-source-id: a1a5c908eabf270c1e8d2098532ffc46978a240c
2019-09-20 22:43:29 -07:00
Jerry Zhang
1bec8d7a15 Get scalar type from observer module (#26425)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26425

Currently the scalar type is hardcoded for weight and normal tensors,
but what we want is to get it from the corresponding observer module

Test Plan:
there are some known issues right now,
will test e2e later when all the issues are fixed

Imported from OSS

Differential Revision: D17504459

fbshipit-source-id: f5a21789c2ebeb60bff4acc777db80170063c9f8
2019-09-20 22:19:18 -07:00
Jerry Zhang
254122dd4e quantize_linear -> quantize_per_tensor (#26574)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26574

Since we also have `quantized::linear`, `quantize_linear` sounds
confusing, so we plan to rename it before the branch cut

Test Plan:
ci

Imported from OSS

Differential Revision: D17514876

fbshipit-source-id: 01d9005e6ec8cb9950b9d8bba122109c389641d3
2019-09-20 21:58:48 -07:00
BowenBao
1a114948ce Fix jit/pass/peephole.cpp fuse addmm (#26357)
Summary:
Fix https://github.com/pytorch/pytorch/issues/26328 by reversing the order of inserting nodes. Previously the IR graph looked like

```
graph(%0 : Float(3, 3)):
  %5 : Float(3, 3) = aten::addmm(%0, %0, %0, %6, %6)
  %6 : int = prim::Constant[value=1]()
  return (%5)
```
where %6 is used before created. Now
```
graph(%0 : Float(3, 3)):
  %5 : int = prim::Constant[value=1]()
  %6 : Float(3, 3) = aten::addmm(%0, %0, %0, %5, %5)
  return (%6)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26357

Reviewed By: hl475

Differential Revision: D17463945

Pulled By: houseroad

fbshipit-source-id: 4f483c2bc004a4a88f0976a7b37d7994d97ba41a
2019-09-20 13:32:03 -07:00
Edward Yang
b59e856517 Revert D17486465: [jit] Make is_optional check more robust
Test Plan: revert-hammer

Differential Revision:
D17486465

Original commit changeset: c513cef3bbc0

fbshipit-source-id: 567311c001d7dd0b7ab9ffe8bb894954bea583c9
2019-09-20 11:06:19 -07:00
davidriazati
4c40dbcb75 Resolve NamedTuple types in Python (#26443)
Summary:
When used as annotations on Python functions, `NamedTuple`s go through our Python annotation -> type mapping which previously had no way of looking up `NamedTuple`s (which are created lazily by checking if the type has certain properties, so the lookup is creating the `TupleType` from scratch). This PR threads through the necessary data to make them work.

Fixes #26437
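
A minimal sketch of the pattern this enables (the `Point` class here is illustrative, not from the PR):
```
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

# The NamedTuple annotation below now resolves to the corresponding TupleType.
@torch.jit.script
def total(p: Point) -> torch.Tensor:
    return p.x + p.y

print(total(Point(torch.ones(1), torch.ones(1))))  # tensor([2.])
```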
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26443

Pulled By: driazati

Differential Revision: D17486441

fbshipit-source-id: a6bbb543ff05a5abe61f1a7f68db9ecdb652b358
2019-09-20 10:53:25 -07:00
davidriazati
9a5b784eac Make is_optional check more robust (#26312)
Summary:
If the `Union` contains a non-class type, `issubclass` would fail; this
adds a check for that case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26312

Pulled By: driazati

Differential Revision: D17486465

fbshipit-source-id: c513cef3bbc038f15c021eb0c1bf36be0df1eb90
2019-09-20 10:50:00 -07:00
Jerry Zhang
4444b91141 Fix quantized::conv2d patterns in QuantFusion (#26515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26515

Fix patterns of `prepack` and `permute` after recent changes
to `quantized::conv2d` and `quantized::conv2d_prepack`

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17502573

fbshipit-source-id: 1a719fd610e8ea9dc16075abaa042556e1edbceb
2019-09-20 10:40:44 -07:00
Mike Ruberry
60dd203a1d Fixes test_wrapped_number (#26523)
Summary:
test_wrapped_number was calling torch.set_default_tensor_type('torch.FloatTensor'), which was setting the default tensor type for all following tests until a class boundary (with unittest) or until the end of the file (with pytest). Tests that don't expect the default tensor type to be set this way were then failing if run afterwards.

This fixes the issue by copying the default_tensor_type decorator from test_nn and using that instead with test_wrapped_number. The decorator correctly resets the default tensor type after the test has run.
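
Roughly, the decorator works like the sketch below (illustrative only; the actual helper lives in test_nn and may differ in details):
```
import functools
import torch

def default_tensor_type(type_name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            old_type = torch.tensor([]).type()          # remember the current default
            torch.set_default_tensor_type(type_name)
            try:
                return fn(*args, **kwargs)
            finally:
                torch.set_default_tensor_type(old_type)  # always restore it afterwards
        return wrapper
    return decorator
```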

This fixes the many errors encountered when running pytest test_jit.py.

Note: test_wrapped_number was introduced in https://github.com/pytorch/pytorch/issues/22273.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26523

Differential Revision: D17495283

Pulled By: mruberry

fbshipit-source-id: ab518c78b7706af7cb1c2d1c17823d311178996d
2019-09-20 09:39:00 -07:00
Elias Ellison
0f42881269 fix schema matching of tuples to vartype lists (#25944)
Summary:
In schema matching we allow a homogenous tuple to be matched to list arguments. This logic wasn't yet extended for vartype lists, causing stuff like `len((1, 2, 3))` to fail.

Fix for https://github.com/pytorch/pytorch/issues/20500
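
A minimal repro of the case this fixes (assuming a build with this change, the call now compiles):
```
import torch

@torch.jit.script
def tuple_len() -> int:
    # the homogeneous tuple argument is matched against the vartype list parameter of len()
    return len((1, 2, 3))

print(tuple_len())  # 3
```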
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25944

Differential Revision: D17482510

Pulled By: eellison

fbshipit-source-id: aa63318c27a01d965a7a7b68ce8bec638168dc26
2019-09-19 15:46:27 -07:00
Elias Ellison
1f2fa8d4d8 Make jit dicts ordered (#26465)
Summary:
Makes c10::Dict ordered and binds the OrderedDict() and dict() constructors into TorchScript. For the case of the empty constructor dict(), I typed it as [str, Tensor] because:
• we're almost dropping support for python 2, at which point all dicts are ordered
• then it's more conventional to write x : Dict[int, int] = {} which is already supported
• It is possible to construct an arbitrarily typed empty OrderedDict through
OrderedDict(torch.jit.annotate(List[Tuple[key, value]], []))

We could consider dropping the no inputs aten::dict constructor since then the types would be more explicit.
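
A rough illustration of the already-supported annotated form mentioned above, now with ordered semantics:
```
import torch
from typing import Dict

@torch.jit.script
def make_dict() -> Dict[str, torch.Tensor]:
    d: Dict[str, torch.Tensor] = {}   # conventional annotated empty dict
    d["a"] = torch.ones(1)
    d["b"] = torch.zeros(1)
    return d                          # insertion order is preserved now that c10::Dict is ordered
```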

This replaces https://github.com/pytorch/pytorch/issues/26170 and https://github.com/pytorch/pytorch/pull/26372 because ghstack was poisoned and I had to resubmit.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26465

Differential Revision: D17481604

Pulled By: eellison

fbshipit-source-id: d2d49795a518c3489881afac45d070e5262c5849
2019-09-19 15:09:02 -07:00
Jerry Zhang
aad8738681 Remove quantization for bias in pattern (#26415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26415

We do dynamic quantization for bias right now, so remove it from the pattern.

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17465555

fbshipit-source-id: 5e229cbc6ae85ea4ce727b3479993d79747d7792
2019-09-19 11:57:11 -07:00
Elias Ellison
4c1a2c2033 add setitem to class types (#25750)
Summary:
Follow up to https://github.com/pytorch/pytorch/pull/25664, add `class_type[ind] = val`. Like `__getitem__`, `__setitem__` has a custom compilation path so it wasn't added with the rest of the magic methods.
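
A small illustrative sketch (the class here is hypothetical) combining this with the `__getitem__` support from #25664:
```
import torch
from typing import Dict

@torch.jit.script
class IntStore(object):
    def __init__(self):
        self.data = torch.jit.annotate(Dict[str, int], {})

    def __setitem__(self, key: str, value: int):
        self.data[key] = value

    def __getitem__(self, key: str) -> int:
        return self.data[key]

@torch.jit.script
def use_store() -> int:
    s = IntStore()
    s["answer"] = 42    # compiled via IntStore.__setitem__
    return s["answer"]  # compiled via IntStore.__getitem__
```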
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25750

Differential Revision: D17428725

Pulled By: eellison

fbshipit-source-id: ff3767ef41515baf04b0c0f5c896dbd3f1d20cd3
2019-09-19 10:01:39 -07:00
Jerry Zhang
cbc7172a02 Fix quantized::linear QuantFusion patterns (#26414)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26414

Fix the patterns after changes to prepack functions(https://github.com/pytorch/pytorch/pull/25626)

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17465553

fbshipit-source-id: 7df6a6aa8389bb4a7a370c65ade4c2585b45b882
2019-09-18 19:59:07 -07:00
Nikolay Korovaiko
18eb92e2af Add support for lists for prim::min and prim::max
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26155

Differential Revision: D17455540

Pulled By: Krovatkin

fbshipit-source-id: e3aee465d108b59691d6c68f85fbf212a5d6a125
2019-09-18 13:39:08 -07:00
Michael Suo
193a6a6f98 Revert D17431514: [pytorch][PR] fix schema matching of tuples to vartype lists
Test Plan: revert-hammer

Differential Revision:
D17431514

Original commit changeset: 2ad98bab15ea

fbshipit-source-id: 5cf445fd1e37629c700b9b3740fe13ca941e4db9
2019-09-17 17:23:12 -07:00
Elias Ellison
a06e1c3af7 min(li) max(li) (#26351)
Summary:
Add min and max of a list to JIT. Fixes https://github.com/pytorch/pytorch/issues/26036
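
A minimal example of the new builtins:
```
import torch
from typing import List

@torch.jit.script
def spread(xs: List[int]) -> int:
    # min()/max() over a list argument are now supported in script
    return max(xs) - min(xs)

print(spread([3, 1, 4, 1, 5]))  # 4
```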
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26351

Differential Revision: D17427547

Pulled By: eellison

fbshipit-source-id: 45796b4076eef0b496b01c2cc710ec4dc15a1ee6
2019-09-17 14:50:33 -07:00
Elias Ellison
a8073f34af fix schema matching of tuples to vartype lists (#25944)
Summary:
In schema matching we allow a homogenous tuple to be matched to list arguments. This logic wasn't yet extended for vartype lists, causing stuff like `len((1, 2, 3))` to fail.

Fix for https://github.com/pytorch/pytorch/issues/20500
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25944

Differential Revision: D17431514

Pulled By: eellison

fbshipit-source-id: 2ad98bab15eaa496471df651572735eb35183323
2019-09-17 13:47:46 -07:00
peter
2ce8c83f67 Enable CPU fused kernel on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25578

Differential Revision: D17397156

Pulled By: ezyang

fbshipit-source-id: b243528c2bfd5a0d401897833048429e67fe40ef
2019-09-17 07:29:40 -07:00
Jerry Zhang
06c69ad8ed Whitelist and fusion support for quantized::linear - matmul (with bias) (#26204)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26204

Support quant fusion for `matmul` with bias to `quantized::linear`.

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17380073

fbshipit-source-id: 00014469a852cc5d5b66469fc4b8d05eafba1e3e
2019-09-16 14:05:50 -07:00
Jerry Zhang
fd3cc36fab Whitelist and fusion support for quantized::linear - matmul (without bias) (#26209)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26209

Support quant fusion for `matmul` (without bias) -> `quantized::linear`

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17380075

fbshipit-source-id: 290caee7f7bcf94d2731c0ee9bd40054f0fb9b07
2019-09-16 11:33:48 -07:00
Jerry Zhang
f95d2b61d1 Whitelist and fusion support for quantized::linear - addmm (#26208)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26208

Supporting `addmm` -> `quantized::linear` quant fusion

Test Plan:
python test/test_jit.py 'TestJit.test_quant_fusion'

Imported from OSS

Differential Revision: D17380074

fbshipit-source-id: fae88f118f85663d777648695768b0504ed7ccf9
2019-09-16 10:48:20 -07:00
Jerry Zhang
6d3ac7f85c use whitelist for selecting observed values (#25974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25974

Previously we observed all the Tensor values, but what we actually want is
to observe only the ones that can be quantized.

Test Plan:
python test/test_jit.py
python test/test_quantizer.py

Imported from OSS

Differential Revision: D17348986

fbshipit-source-id: 55be0d73862a0e7eb1e7fd882d16e0d830618b63
2019-09-13 15:38:31 -07:00
Jerry Zhang
43335cddb7 Fold quantize op into module (#25625)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25625

We want to fold the quantize op for weights/bias into the module to avoid quantizing weights on the fly.

Test Plan:
python test/test_jit.py

Imported from OSS

Differential Revision: D17208889

fbshipit-source-id: 1854b8953b065855d210bc1166533c08ca264354
2019-09-13 12:27:16 -07:00
Richard Zou
babaac3e08 Fix bug with named tensors and (no) tracer support (#26106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26106

Previously, in the named tensors build, an operator is marked as
non-traceable if ANY of its overloads are named tensor overloads. This
breaks the tracer for things like torch.full (has a names= overload for
named tensor) and tensor.sum (has a Dimname overload for named tensor).

This PR fixes the problem by putting the "no tracer support" logic into
the location where the tracer attempts to construct a graph by adding a
Dimname/DimnameList argument to a node.
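
A small check in the spirit of the new test_jit.py test (the function here is illustrative, not the actual test):
```
import torch

def fn(x):
    # torch.full has a names= overload for named tensors, but a call that does not
    # pass names should still be traceable after this fix
    return torch.full((2, 2), 1.0) + x

traced = torch.jit.trace(fn, torch.randn(2, 2))
print(traced.graph)
```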

Test Plan:
- new test in test_jit.py to check if torch.full is traceable
- new test in test_namedtensor.py to check what happens when someone
tries to trace a function that uses named tensor APIs.
- [namedtensor ci]

Differential Revision: D17353452

Pulled By: zou3519

fbshipit-source-id: b0b843c8357ffe54baee6e8df86db914f0b1ece4
2019-09-13 06:45:00 -07:00
Jerry Zhang
94964a9ba2 Add fusion for quantized linear (#25624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25624

First fuse the split ops into aten::linear and then fuse
`dequant - aten::linear - quant` into the quantized linear op.

Test Plan:
python test/test_jit.py 'TestJit.quant_fusion'

Imported from OSS

Differential Revision: D17208891

fbshipit-source-id: 864b19fabab2e8e6f8f8ad35eb3dbbf2d5fdb8c4
2019-09-12 20:52:37 -07:00
Jerry Zhang
be82239c86 Port fuse_linear from pytorch/tvm (#25623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25623

Port over the fuse_linear pass from the pytorch/tvm project; we'll need this
in the backend-specific quantization pass to match aten::linear and swap
it with quantized linear.

Test Plan:
python test/test_jit.py 'TestJit.test_fuse_linear'

Imported from OSS

Differential Revision: D17208890

fbshipit-source-id: f4ff3889ae4525797d3b986f46ae37e50ea49116
2019-09-12 18:51:13 -07:00
Jerry Zhang
1d87090051 Support quantizing any methods called (#25505)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25505

Support for quantizing all the methods called by the forward method, including
child module methods and other methods in the current module.

It relies on module-level constant prop; we need to figure out a way to do constant prop
for these methods as well. We can either do constant prop at the module level or in the
quantization function, but this will need some discussion.

Test Plan:
python test/test_jit.py 'TestJit.insert_quant_dequant'
python test/test_quantizer.py

Imported from OSS

Differential Revision: D17208887

fbshipit-source-id: 21749457b21b00a6edada290c26324e2fb210b10
2019-09-12 18:09:44 -07:00
Jerry Zhang
f559c1d85d Skip inserting duplicate observers (#25504)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25504

Skip inserting duplicate observers for values that are already observed
in the forward method of a child module or in other methods of
the current module.

Test Plan:
python test/test_jit.py -- 'TestJit.insert_observers'
python test/test_jit.py -- 'TestJit.insert_observers_child_qconfig'
python test/test_jit.py -- 'TestJit.insert_observers_skip_values'

Imported from OSS

Differential Revision: D17208888

fbshipit-source-id: e04f1c22ab1c4f410933a17a3ef31acf5f217323
2019-09-12 16:22:51 -07:00
J M Dieterich
5376ee51fd Enable more mGPU tests (#26055)
Summary:
Enable mGPU tests that pass on ROCm as of 2.7.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26055

Differential Revision: D17331484

Pulled By: bddppq

fbshipit-source-id: 51f956a84a6c14a1a41473d322950994fa29c25c
2019-09-11 17:54:35 -07:00
davidriazati
68f40fb2c8 Add in membership checks for lists (#25796)
Summary:
Since it requires an equality operator, it's only implemented for lists
of `int`, `float`, and `str`.

Fixes some of #25758
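
A minimal sketch of the new support:
```
import torch
from typing import List

@torch.jit.script
def contains(xs: List[str], x: str) -> bool:
    # `in` is supported for lists of int, float, and str, since it needs element equality
    return x in xs

print(contains(["a", "b"], "b"))  # True
```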
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25796

Pulled By: driazati

Differential Revision: D17296216

fbshipit-source-id: 561245bfa75b65cee4e3395e242b2439b3c87b2e
2019-09-11 14:10:38 -07:00
davidriazati
d546c069a4 Preserve module names in recursive script (#24505)
Summary:
Turns
```
ScriptModule(
  (conv): ScriptModule()
  (lin): ScriptModule()
  (sub): ScriptModule()
)
```

into

```
ScriptModule(
  original=MyModule
  (conv): ScriptModule(original=Conv2d)
  (lin): ScriptModule(original=Linear)
  (sub): ScriptModule(original=Submodule)
)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24505

Pulled By: driazati

Differential Revision: D16862032

fbshipit-source-id: 76dc4e5252bbf746f5cc26450b577dab10477732
2019-09-11 14:07:04 -07:00
Lara Haidar
8ca93ec351 Fix torch.arange traced as constant (#25363)
Summary:
torch.arange is always traced as a constant, which makes it impossible to correctly trace TestModel() from the example below.

```
class TestModel(torch.nn.Module):
  def forward(self, input):
    return torch.arange(input.shape[0])

input = torch.randn(5, 3, 2)
print(torch.jit.trace(TestModel(), input).graph)
```

Currently the trace of TestModel() looks like:

```
graph(%self : ClassType<TestModel>,
      %input : Float(5, 3, 2)):
  %11 : int = prim::Constant[value=5]()
  %12 : int = prim::Constant[value=4]()
  %13 : int = prim::Constant[value=0]()
  %14 : Device = prim::Constant[value="cpu"]()
  %15 : bool = prim::Constant[value=0]()
  %16 : Long(5) = aten::arange(%11, %12, %13, %14, %15)
  return (%16)
```

This PR allows the trace to have a variable value for %11.
The trace of TestModel() with this PR's modifications looks like:

```
graph(%self : ClassType<TestModel>,
      %input : Float(5, 3, 2)):
  %2 : int = prim::Constant[value=0]()
  %3 : int = aten::size(%input, %2)
  %4 : Long() = prim::NumToTensor(%3)
  %11 : Scalar = prim::ImplicitTensorToNum(%4)
  %12 : int = prim::Constant[value=4]()
  %13 : int = prim::Constant[value=0]()
  %14 : Device = prim::Constant[value="cpu"]()
  %15 : bool = prim::Constant[value=0]()
  %16 : Long(5) = aten::arange(%11, %12, %13, %14, %15)
  return (%16)
```

More info : https://github.com/pytorch/pytorch/issues/20075
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25363

Reviewed By: zrphercule

Differential Revision: D17301934

Pulled By: houseroad

fbshipit-source-id: d9907763742cb51d8c761bf63fc2e4918f7b9941
2019-09-11 13:39:54 -07:00
J M Dieterich
00d967c39d enable unit tests (#25963)
Summary:
These unit tests pass after landing all the warp size awareness patches.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25963

Differential Revision: D17319124

Pulled By: bddppq

fbshipit-source-id: 22f5d5f1ca9c67e66a7ccf983b2d2f889a74e729
2019-09-11 12:31:43 -07:00
Elias Ellison
8f7020bbdb add support for ModuleDict (#25715)
Summary:
Add support for nn.ModuleDict in script. This is needed to support torchvision.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25715

Differential Revision: D17301826

Pulled By: eellison

fbshipit-source-id: 541b5477e980f519a8c3bbb1be91dac227f6d00f
2019-09-10 18:43:49 -07:00
Elias Ellison
1897440e02 add torch.jit.is_scripting api (#25955)
Summary:
The PR that https://github.com/pytorch/pytorch/pull/25263 was based on got reverted, and ghimport got confused. Relanding here.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25955

Differential Revision: D17296727

Pulled By: eellison

fbshipit-source-id: 96200d3ef4c86f0d9907dc41b05619cb33bf2bab
2019-09-10 17:28:59 -07:00
Wanchao Liang
a7eaec6cf2 add set_grad_enabled to TorchScript and fix data attribute
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25350

Test Plan: Imported from OSS

Differential Revision: D17100829

fbshipit-source-id: d85d6f3b03218b9c77e144365940eeaa5b4cce9a
2019-09-10 14:36:26 -07:00
Elias Ellison
7ab4ad7b6d add torch.jit.is_scripting() api (#25263)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25263

This adds an API that returns True in script and False in eager mode, which together with ignore allows guarding of not-yet-supported JIT features. Bikeshedding requested, please.

cc zou3519

```
def foo():
   if not torch.jit.is_scripting():
      return torch.linear(...)
   else:
      return addmm(...)
```

Test Plan: Imported from OSS

Differential Revision: D17272443

Pulled By: eellison

fbshipit-source-id: de0f769c7eaae91de0007b98969183df93a91f42
2019-09-09 20:24:36 -07:00
Supriya Rao
9d2d31e626 Store bias in PackedLinearWeight struct in fbgemm (#25428)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25428

Added bias as an optional param to the quantized_linear_prepack function.
Bias is quantized during runtime using input scale and weight scale.
ghstack-source-id: 89601399

Test Plan: python test/run_test.py --exclude nn --verbose --bring-to-front quantization quantized quantized_tensor quantized_nn_mods quantizer

Differential Revision: D17121304

fbshipit-source-id: 8adb0e55e4aed0a5430aaa2c8639c8ad1639c85a
2019-09-06 08:37:34 -07:00
Brian Vaughan
88e4cee3e7 Improve handling of mixed-type tensor operations (#22273)
Summary:
Improve handling of mixed-type tensor operations.

This PR affects the arithmetic (add, sub, mul, and div) operators implemented via TensorIterator (so dense but not sparse tensor ops).

For these operators, we will now promote to reasonable types where possible, following the rules defined in https://github.com/pytorch/pytorch/issues/9515, and error in cases where the cast would require floating point -> integral or non-boolean to boolean downcasts.

The details of the promotion rules are described here:
https://github.com/nairbv/pytorch/blob/promote_types_strict/docs/source/tensor_attributes.rst

Some specific backwards incompatible examples:
* now `int_tensor * float` will result in a float tensor, whereas previously the floating point operand was first cast to an int. Previously `torch.tensor(10) * 1.9` => `tensor(10)` because the 1.9 was downcast to `1`. Now the result will be the more intuitive `tensor(19)`
* Now `int_tensor *= float` will error, since the floating point result of this operation can't be cast into the in-place integral type result.

See more examples/detail in the original issue (https://github.com/pytorch/pytorch/issues/9515), in the above linked tensor_attributes.rst doc, or in the test_type_promotion.py tests added in this PR:
https://github.com/nairbv/pytorch/blob/promote_types_strict/test/test_type_promotion.py
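
A quick illustration of the two backwards-incompatible examples above (assuming a build with this change):
```
import torch

t = torch.tensor(10)
print(t * 1.9)        # now a floating-point tensor; 1.9 is no longer truncated to 1

t_int = torch.tensor([1, 2, 3])
try:
    t_int *= 1.5      # the floating-point result cannot be cast back into the integral tensor
except RuntimeError as err:
    print("in-place mixed-type op raised:", err)
```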
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22273

Reviewed By: gchanan

Differential Revision: D16582230

Pulled By: nairbv

fbshipit-source-id: 4029cca891908cdbf4253e4513c617bba7306cb3
2019-09-05 18:26:09 -07:00
Elias Ellison
82c8949a9d add __getitem__ to class types (#25664)
Summary:
Add the magic method for `class_type[index]`. Since the compiler has custom logic for indexing, this was not included with the other magic methods.

Fix for https://github.com/pytorch/pytorch/issues/25637
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25664

Differential Revision: D17214996

Pulled By: eellison

fbshipit-source-id: bf77f70851f6c3487147da710cc996624492a0c8
2019-09-05 17:19:15 -07:00
Michael Suo
11eb8ac2a9 Revert D17199043: [JIT] preserve ignored function return value type
Test Plan: revert-hammer

Differential Revision:
D17199043

Original commit changeset: 1196fd94c207

fbshipit-source-id: 49789ae1f128262bc40a9d5b0d2b7bfbbf0b7e1e
2019-09-05 15:51:06 -07:00
Jerry Zhang
99cd83ea22 Inserting observers for all methods called in forward (#25503)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25503

Previously we only inserted observers for forward methods; this PR
extends the support to all methods called in forward. It will insert
duplicated observers right now; we'll remove them in the next PR.

Test Plan:
python test/test_jit.py -- 'TestJit.insert_observers'

Imported from OSS

Differential Revision: D17208886

fbshipit-source-id: 04084c8f42c56cb66a11968987a15752f532ac04
2019-09-05 15:11:22 -07:00
Elias Ellison
df043cd49d preserve ignored function return value type (#25262)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25262

Preserve the type of ignore'd functions on serialization. Currently we compile an ignore'd function with its annotated type when first compiling, but do not preserve that type. This is important for being able to compile models with features not yet supported in JIT.

```
@torch.jit.ignore
def unsupported(x):
    return x

def foo():
   if not torch.jit._is_scripting():
      return torch.linear(...)
   else:
      return unsupported(...)
```

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D17199043

Pulled By: eellison

fbshipit-source-id: 1196fd94c207b9fbee1087e4b2ef7d4656a6647f
2019-09-05 11:21:55 -07:00
Supriya Rao
61819260f7 Rename FBGEMM quantized operators to generic quantized ops (#25678)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25678

As an effort to unify fbgemm and qnnpack at the dispatcher level, we need to have a generic name for the quantized backend ops.
Currently FBGEMM is guarded by the USE_FBGEMM macro and QNNPACK uses USE_QNNPACK.
ghstack-source-id: 89518961

Test Plan: buck test caffe2/test:quantized

Differential Revision: D17194364

fbshipit-source-id: 5960aedff6b8cb89eb3872c39b74caf54c0fbf20
2019-09-05 10:13:08 -07:00
Edward Yang
55da02a86d Revert D17097735: [quantization] Rename fbgemm quantized operators to generic quantized ops
Test Plan: revert-hammer

Differential Revision:
D17097735

Original commit changeset: 447112a7a421

fbshipit-source-id: 78368b6f84d96cea70692fb000cebe99602a08c1
2019-09-04 15:02:32 -07:00
Supriya Rao
c9ba5186d3 Rename fbgemm quantized operators to generic quantized ops (#25338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25338

As an effort to unify fbgemm and qnnpack at the dispatcher level, we need to have a generic name for the quantized backend ops.
Currently FBGEMM is guarded by the USE_FBGEMM macro and QNNPACK uses USE_QNNPACK.

TBD: Use compile time macro or run_time to switch between fbgemm and qnnpack.
ghstack-source-id: 89454244

Test Plan: buck test caffe2/test:quantized

Differential Revision: D17097735

fbshipit-source-id: 447112a7a421387724d3e29b8fd8412dfb1c373a
2019-09-04 14:27:27 -07:00
Zachary DeVito
efc5306ad2 Make NoneType <: Optional[T] (#25361)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25361

Previously we had a different None object for each type T so that
unwrap optional could still recover the type T from it. After a few
months of having this conversion behavior, it has become clear that
only the unwrap optional operators cause this problem. Furthermore, it
is beneficial to have NoneType <: Optional[T] because this is how IValues
work (in particular the None IValue is not tagged). This patch makes the
necessary changes to do this. In particular it special cases unwrap optional
in export so that it annotates the None to make sure we can recover the type.

This also changes how matching and evaluating type values work so that we
can consider None matchable to type Optional[T], even though we cannot
derive T from that match.
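
In user-facing terms, the subtyping change is what makes the usual Optional pattern work with a bare None (illustrative sketch):
```
import torch
from typing import Optional

@torch.jit.script
def maybe_add(x: torch.Tensor, y: Optional[torch.Tensor] = None) -> torch.Tensor:
    if y is None:      # None is accepted wherever an Optional[T] is expected
        return x
    return x + y

print(maybe_add(torch.ones(1)))
print(maybe_add(torch.ones(1), torch.ones(1)))
```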

Test Plan: Imported from OSS

Differential Revision: D17103072

Pulled By: zdevito

fbshipit-source-id: 37678ed3e5ce54f2eb3ee4dff2734a39f0bee028
2019-09-04 13:52:40 -07:00
Michael Suo
0c6ee947b6 Remove forward compat code for serialization format (#25440)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25440

See the comments deleted for what this PR is all about

Test Plan: Imported from OSS

Differential Revision: D17125690

Pulled By: suo

fbshipit-source-id: a4a2f541a3e161f9c15b51df475130e7bf683cf8
2019-09-04 12:22:31 -07:00
Horace He
f3f83ccb23 Added invert bitwise operation to JIT (#22324)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25360
Fixes https://github.com/pytorch/pytorch/issues/22124
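
A minimal example of the newly supported operator in script:
```
import torch

@torch.jit.script
def invert(x: torch.Tensor) -> torch.Tensor:
    return ~x   # bitwise invert

print(invert(torch.tensor([0, 1, 2])))  # tensor([-1, -2, -3])
```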
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22324

Differential Revision: D17140477

Pulled By: yf225

fbshipit-source-id: f42aec5e688fe079d9e79726b7a6c345da94ae2e
2019-09-03 11:16:30 -07:00
davidriazati
7a921ba17d Manually implement is_zipfile (#25279)
Summary:
The default implementation is lenient in that it recognizes a zipfile if the magic number appears anywhere in the archive. So if someone has the bytes `PK\x03\x04` in a tensor, it gets recognized as a zipfile. See https://bugs.python.org/issue28494

This implementation only checks the first 4 bytes of the file for the zip magic number. We could also copy https://github.com/python/cpython/pull/5053's fix, but that seems like overkill.

Fixes #25214
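
A rough standalone sketch of the stricter check (not the exact helper added to torch.serialization):
```
def _looks_like_zipfile(path: str) -> bool:
    # only accept the zip local-file-header magic at the very start of the file
    with open(path, "rb") as f:
        return f.read(4) == b"PK\x03\x04"
```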
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25279

Pulled By: driazati

Differential Revision: D17102516

fbshipit-source-id: 4d09645bd97e9ff7136a2229fba1d9a1bce5665a
2019-08-30 16:47:50 -07:00
Elias Ellison
d2a8435c08 add tuple keyword (#25474)
Summary:
Doesn't really add much functionality, since the inputs to `tuple()` for which we can statically infer the output size are pretty much just tuples. It does improve the error message though.

Fix for https://github.com/pytorch/pytorch/issues/24000
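
A minimal sketch, assuming a tuple literal input as described above:
```
import torch
from typing import Tuple

@torch.jit.script
def as_tuple() -> Tuple[int, int, int]:
    # a tuple input is the main case where the output size can be statically inferred
    return tuple((1, 2, 3))
```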
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25474

Differential Revision: D17133800

Pulled By: eellison

fbshipit-source-id: 41a052895e6ed24a384ec6f8aef0a6769ac094e6
2019-08-30 11:33:49 -07:00
Michael Suo
60f6cc9d59 Emit script function calls during tracing. (#25089)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25089

Previously, when the tracer encountered a scripted function (or method), it
inlined the function into the graph. Now, we emit a CallFunction or
CallMethod node instead.

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D16987936

Pulled By: suo

fbshipit-source-id: a3e38a4621f3504909ec0542865dc7e381c243d6
2019-08-30 01:30:03 -07:00
Michael Suo
194acd023a Some alias analysis fixes (#25425)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25425

1. Properly invalidate memory locations when we change the points-to
set.
2. Don't build a new indexToElementMap in toString(), just use
`MemoryDag::fromIndex`
3. Fix transitive wildcard assignment

Test Plan: Imported from OSS

Differential Revision: D17126402

Pulled By: suo

fbshipit-source-id: cbd99027d2e78fd333dbf030172d3b7ac4df8349
2019-08-29 23:32:07 -07:00
Gregory Chanan
93b653bba3 Attempt to enable CrossMapLRN2d, as it no longer uses Module._backend.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25343

Test Plan: Imported from OSS

Differential Revision: D17101574

Pulled By: gchanan

fbshipit-source-id: 71d40f5c2a9c94a71abc52e61f6f7be449a2b41a
2019-08-29 20:15:14 -07:00
Jerry Zhang
f495a3abac Skip inserting observers for Tensors inside fused op (#25281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25281

We want to skip inserting observers for the Tensors that are between two
ops that will be fused, e.g. Conv -> ReLU. This PR just adds this pattern,
but new patterns can easily be added in the future.

Test Plan:
python test/test_jit.py -- 'TestJit.test_insert_observers_skip_values'

Imported from OSS

Differential Revision: D17106037

fbshipit-source-id: 49697f4d9598a461edc62a2b4148495764a99574
2019-08-29 18:19:26 -07:00
Iurii Zdebskyi
1ea1d7f095 Fixed masking warnings in tests (#25317)
Summary:
Fixing deprecation warnings in tests related to uint8 masking and indexing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25317

Differential Revision: D17099063

Pulled By: izdeby

fbshipit-source-id: 49f1d85dcd9464d61e3156eebc07390e9f6fa1b4
2019-08-29 12:13:52 -07:00
Mikhail Zolotukhin
910d2f18fc Implement FoldConvBatchnorm2d pass. (#25282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25282

For now it will be used in quantization, but it can be used as a
standalone pass too.

Couple of things are not finished at this moment:
- Batchnorm.eps value is hardcoded. This is bad and wrong, but we cannot
access fields listed in __constants__ from IR now. Once we fix this, we
should remove the hardcoded value.
- We do not remove Batchnorm submodules from the parent module even when
they were merged into a Conv. Once we figure out API for removing
attributes and modules, we should fix this.
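
For reference, the arithmetic the pass performs corresponds to the usual Conv+BatchNorm folding (standalone sketch, with eps hardcoded here as well):
```
import torch

def fold_conv_bn(conv_w, conv_b, bn_mean, bn_var, bn_w, bn_b, eps=1e-5):
    # scale each output channel by gamma / sqrt(var + eps)
    scale = bn_w / torch.sqrt(bn_var + eps)
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)
    if conv_b is None:
        conv_b = torch.zeros_like(bn_mean)
    fused_b = (conv_b - bn_mean) * scale + bn_b
    return fused_w, fused_b
```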

Test Plan: Imported from OSS

Differential Revision: D17086611

Pulled By: ZolotukhinM

fbshipit-source-id: d58a947a3b2205d8f3629d693b70b9ad2b5a9102
2019-08-28 21:56:05 -07:00
Jerry Zhang
96db3ad413 insert_quant_dequant work with qconfig_dict (#25127)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25127

Extend insert_quant_dequant pass to go through forward function graphs

Test Plan:
```
python test/test_jit.py 'TestJit.test_insert_quant_dequant'
python test/test_quantizer.py
```

Imported from OSS

Differential Revision: D17001137

fbshipit-source-id: 41b029906fe5c8bc0de01956059388a7d552a380
2019-08-28 21:43:29 -07:00
Jerry Zhang
11b4d57711 insert_observers use qconfig_dict (#25069)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25069

This PR changes the API of insert_observers to use qconfig_dict,
full functionality support will come in later PRs

Test Plan:
```
python test/test_quantizer.py
python test/test_jit.py
```

Imported from OSS

Differential Revision: D17001135

fbshipit-source-id: 16df6fa521fcc0c9e268a375be8e1a630e77011a
2019-08-28 21:07:31 -07:00
davidriazati
efe808b326 Fix old annotate() error (#25261)
Summary:
Fixes #25067

Pull Request resolved: https://github.com/pytorch/pytorch/pull/25261

Pulled By: driazati

Differential Revision: D17103889

fbshipit-source-id: bd94cb36cf4829e63ad39ae169047b9b9e857679
2019-08-28 20:50:24 -07:00
davidriazati
43c4b9f2a5 Add source location to class instantiation error (#24990)
Summary:
Fixes #24987
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24990

Pulled By: driazati

Differential Revision: D17099779

fbshipit-source-id: 296e2b4ccc3fddabd4998497d0753e99680ba92d
2019-08-28 17:14:00 -07:00
Elias Ellison
44bd63c7a1 don't throw in constant prop (#25270)
Summary:
Don't throw in constant propagation, since the op we're running may not be reached. Previously we would only catch `c10::Error`; however, it's hard to maintain that the entire codebase doesn't throw any other types of errors, and some errors map nicely to python errors, like `std::index_error` to IndexError.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25270

Differential Revision: D17102545

Pulled By: eellison

fbshipit-source-id: 9fd485821743ad882e5c6fc912ca47b0b001b0e9
2019-08-28 15:34:01 -07:00
Zachary DeVito
ca4bc9fc07 improve interface error messages (#25228)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25228

This adds a facility to isSubtypeOf for it to explain why a type is
not a subtype of something else. It is used in situations where it
is not clear from the types' python_str alone why the relationship
is not true. Because of the subtle interaction between default arguments,
overloads, and virtual methods, it uses isSubtypeOfExt for the extended
version to avoid requiring readers to understand the interaction.

Test Plan: Imported from OSS

Differential Revision: D17066673

Pulled By: zdevito

fbshipit-source-id: 4de7c40fbf7f9eeae045d33a89a038538cf87155
2019-08-27 22:54:50 -07:00
Zachary DeVito
fba107f18e add serialization of interface (#25227)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25227

Adds cases to NamedType serialization so that interfaces are written.
Similar implementation to NamedTuples.

Test Plan: Imported from OSS

Differential Revision: D17066674

Pulled By: zdevito

fbshipit-source-id: fda5419260fad29e8c4ddb92de1d3447d621d982
2019-08-27 22:54:46 -07:00
Zachary DeVito
61818b8986 Add interface declarations to JIT (#25258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25258

This is the first commit in a series to add interfaces to JIT.
Interfaces allow the specification through a blank python class of an
abstract interface that can be used in type annotations for Script functions.
If a TorchScript class implements all the methods in the interface with
the appropriate types, then it is implicitly considered to implement
that interface.
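
Conceptually (the parser frontend is listed below as a follow-up, so the torch.jit.interface decorator shown here reflects how the feature is ultimately exposed rather than what this specific commit adds):
```
import torch

@torch.jit.interface
class Summarizer(object):
    def summarize(self, x: torch.Tensor) -> torch.Tensor:
        pass

@torch.jit.script
def run(s: Summarizer, x: torch.Tensor) -> torch.Tensor:
    # any TorchScript class with a matching summarize() implicitly implements Summarizer
    return s.summarize(x)
```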

Follow-ups required:
* implementation of serialization
* implementation in the parser frontend
* better error reporting for explaining why a class does not meet an
  interface specification.

Test Plan: Imported from OSS

Differential Revision: D17079963

Pulled By: zdevito

fbshipit-source-id: a9986eeba2d4fdedd0064ce7d459c0251480a5a0
2019-08-27 22:54:37 -07:00
Elias Ellison
011db3bcaa fix closures which always throw. (#25278)
Summary:
When a closure was declared that always threw, we would erroneously propagate the ExitThrows status to the block in which it was declared, causing us to remove the subsequent code in the block. [This code](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/script/exit_transforms.cpp#L462) was meant to handle this case; however, it didn't handle the case where we were transforming Loops and the prim::Function wasn't a target block.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25278

Differential Revision: D17084780

Pulled By: eellison

fbshipit-source-id: ee31a4cc243653f615e4607ece29cdac8ef5710e
2019-08-27 22:16:54 -07:00
Edward Yang
9340b155bc Revert D15901930: Add interface declarations to JIT
Test Plan: revert-hammer

Differential Revision:
D15901930

Original commit changeset: 22c82d12c9c2

fbshipit-source-id: 4009a3ce7af245d7e0f4924824ece59cdc774180
2019-08-27 06:41:32 -07:00