Commit Graph

355 Commits

Author SHA1 Message Date
Elias Ellison
00aedfc0e2 constant pooling pass (#12222)
Summary:
Add a pass to move all constants to the beginning of the graph and deduplicate them.

This extends https://github.com/pytorch/pytorch/pull/10231 to also handle constants introduced in inlining, constant propagation, etc.
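
A minimal sketch (not from the PR) of the kind of duplication the pass removes; inspecting the graph is just one way to observe it, and which graph shows the pooled form depends on which passes have run:
```
import torch

@torch.jit.script
def f(x):
    # The literal 2 appears twice below; after constant pooling a single
    # prim::Constant should be hoisted to the top of the graph and reused.
    return x * 2 + 2

print(f.graph)  # sketch only: the printed graph depends on the optimization stage
```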
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12222

Reviewed By: driazati

Differential Revision: D10201616

Pulled By: eellison

fbshipit-source-id: bc9c5be26868c8b5414257a0d4462de025aeb9bd
2018-10-08 11:55:02 -07:00
David Riazati
92b0e7026e Add weak script mode for script functions (#11963)
Summary:
This PR is the start of weak script mode for functions

Weak scripts allow you to compile a graph from Python code at runtime by annotating with `torch.jit.weak_script` for use in the JIT without affecting eager execution. Scripts are compiled lazily on the first call in a graph to avoid long Python startup times.
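
A hypothetical usage sketch based on the description above; the decorator name and location (`torch.jit.weak_script`) are taken from this summary and may not be a stable public entry point:
```
import torch

# Assumed decorator name per the summary above.
@torch.jit.weak_script
def scale_shift(x, scale, shift):
    return x * scale + shift

# Eager calls run as ordinary Python; compilation only happens lazily the
# first time the function is used from a scripted graph.
print(scale_shift(torch.ones(2), 2.0, 1.0))
```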

apaszke zdevito ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11963

Differential Revision: D10183451

Pulled By: driazati

fbshipit-source-id: 128750994d5eb148a984f8aba4113525c3e248c8
2018-10-05 18:55:49 -07:00
Zachary DeVito
b937cbb776 Fix a bug that would resize tensor storage on export
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12377

Differential Revision: D10219213

Pulled By: zdevito

fbshipit-source-id: 85cfa4467c672ff5a718e58cfae7e8c8b1cfc532
2018-10-05 16:24:54 -07:00
David Riazati
f0b73ff790 Pretty printer improvements (#12179)
Summary:
* Replaces `prim::PythonOp` with the name of the function being called
* Delays printing values used in `prim::Return` nodes until the return node itself, if that is the only place the value is used, to remove some useless assigns

zdevito apaszke ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12179

Differential Revision: D10132661

Pulled By: driazati

fbshipit-source-id: cbc4ac34137ed5872049082e25d19eb1ebc71208
2018-10-04 15:14:51 -07:00
David Riazati
c9f9df002d Properly catch errors in PythonOps (#12243)
Summary:
If a PythonOp throws an error, it raises an exception to the interpreter and also releases the GIL, which causes [pybind to segfault](https://github.com/potassco/clingo/issues/42).

This fix catches pybind errors while the GIL is still held and throws a `python_error` to re-acquire the GIL.

Fixes #12118

apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12243

Differential Revision: D10182787

Pulled By: driazati

fbshipit-source-id: 719d4a7c3294af201e061cf7141bec3ca0fb1f04
2018-10-03 17:25:03 -07:00
David Riazati
d1ac1eba3b Add bool type to IR (#11834)
Summary:
This PR adds a bool type to `IValue` and puts it into place.

* changes conds for `prim::If` and `prim::Loop` to use `bool` type
* changes operators that take `bool`s to match their native ops
* fixes ambiguous `aten` ops `aten::std` and `aten::var`
	* fixes tests in `test_jit.py TestJitGenerated`
		```
		'test_std_dim',
		'test_std_dim_1d',
		'test_std_dim_1d_neg0',
		'test_std_dim_neg0',
		'test_var_dim',
		'test_var_dim_1d',
		'test_var_dim_1d_neg0',
		'test_var_dim_neg0'
		```
* adds `prim::BoolToTensor` and `prim::TensorToBool`
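
A small script-mode sketch of the new boolean handling; the explicit `bool()` cast corresponds to the `prim::TensorToBool` conversion mentioned above:
```
import torch

@torch.jit.script
def step(x):
    # The if-condition is now a bool value in the IR rather than a tensor.
    if bool(x.sum() > 0):
        return x + 1
    return x - 1
```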

apaszke zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11834

Differential Revision: D9928570

Pulled By: driazati

fbshipit-source-id: 373c53df2f1a8ffa9e33d9a517002fbeef25f3eb
2018-10-03 12:40:03 -07:00
Elias Ellison
fed91f873f (Very small) allow trailing commas in assign or tuples (#11723)
Summary:
Allow trailing commas in assignment statements and tuples, which also allows single-element tuples.
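
A quick sketch of what now parses in script mode:
```
import torch

@torch.jit.script
def f(x):
    single = (x,)     # single-element tuple
    pair = x, x + 1,  # trailing comma in a tuple assignment
    return single[0] + pair[1]
```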
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11723

Differential Revision: D10052162

Pulled By: eellison

fbshipit-source-id: 344d908a3ad942a23ebd9f341794bc9734226aa8
2018-10-01 10:10:13 -07:00
iotamudelta
a2ebbccc9f fix unit tests on CI
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12187

Differential Revision: D10118483

Pulled By: bddppq

fbshipit-source-id: 986c8fb48d61e00103c713548a50e74489a0e442
2018-09-28 23:11:55 -07:00
mruberry
7b2c0a09e4 Adds support for NaN, +inf, -inf float scalars to CPU and CUDA fusers (#12070)
Summary:
In current upstream float scalars are always written into kernels with:

`out << std::scientific << v << "f";`

When the floats are special values like NaN, +inf, or -inf, this produces nonsense that causes compilation to fail. This fix updates the conversion of float scalars to use device-specific special values. The appropriate macros are added to the CPU and CUDA resource strings. Note that a NAN macro was not necessary on the CPU since math.h defines NAN.

To verify this fix I updated the test_clamp_fusion test in test_jit.py. I wanted to test -inf, too, but -inf is not currently accepted by the interpreter.

Edit:

Forgot to mention, this partially addresses issue #12067.
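
A sketch of the kind of code this enables (assumes a CUDA device for the CUDA fuser; the same idea applies to the CPU fuser):
```
import torch

def clamp_chain(x, y):
    # float('inf') becomes a float scalar in the fused kernel; it is now
    # emitted via a device-specific macro instead of scientific notation.
    return torch.clamp(x * y, min=0.0, max=float('inf')) * 2

inputs = (torch.randn(8, device='cuda'), torch.randn(8, device='cuda'))
fn = torch.jit.trace(clamp_chain, inputs)
out = fn(*inputs)  # repeated calls exercise the fused kernel
```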
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12070

Reviewed By: ezyang

Differential Revision: D10044704

Pulled By: soumith

fbshipit-source-id: 8f4a930862d66a7d37d985e3f6a6fb724579e74c
2018-09-28 14:11:49 -07:00
Luca Antiga
5be0baefa2 Use streams in JIT serialization, allow JIT serialization to/from buffer (#11932)
Summary:
This PR replaces the use of `std::FILE` with `istream`/`ostream` for JIT serialization.
It uses this mechanism to add the possibility to serialize to/from binary buffers, in addition to files, both in `libtorch` and from Python.

`getExportImportCopy` in `test_jit.py` has been updated so that both file and buffer codepaths are exercised during tests.
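
A sketch of the buffer round-trip this enables from Python; the exact entry points are assumptions here, the summary only states that in-memory serialization is available alongside files:
```
import io
import torch

m = torch.jit.trace(torch.nn.Linear(3, 3), torch.randn(1, 3))

buf = io.BytesIO()
m.save(buf)               # assumed: save accepts a file-like object per this PR
buf.seek(0)
m2 = torch.jit.load(buf)  # load back from the in-memory buffer
```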
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11932

Differential Revision: D10084303

Pulled By: apaszke

fbshipit-source-id: b850801b3932922fa1dbac6fdaed5063d58bc20d
2018-09-28 07:54:27 -07:00
Michael Suo
7f35e92af2 mutable lists (#10700)
Summary:
This PR implements the design that we discussed. Changes:
- Added a World token IValue and type. The IValue is basically a dummy struct for now; in the future we may extend it (say, to add thread-local state).
- Effectful ops explicitly declare that they are mutable by having World tokens as inputs and outputs in their schema.
- Purely functional ops that use mutable values get "fenced", and the world token is threaded through the fences.
- An AnnotateEffects pass wires up all the world tokens together.
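
A minimal script-mode sketch of what mutation support allows; the world-token plumbing described above stays internal to the IR:
```
import torch

@torch.jit.script
def gather(x):
    xs = [x]
    xs.append(x + 1)   # aten::append mutates the list in place
    xs.append(x + 2)
    return xs[0] + xs[1] + xs[2]
```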
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10700

Reviewed By: eellison

Differential Revision: D9547881

Pulled By: michaelsuo

fbshipit-source-id: ebbd786c31f15bf45e2ddb0c188438ff2f5f3c88
2018-09-27 19:25:13 -07:00
Zachary DeVito
478803a75f Introduce type variables to implement generic list operators (#12040)
Summary:
We generate specialized list operations for int, float, and Tensor lists so that small lists of integers like the arguments to conv do not involve tons of boxing code.

This PR adds a fallback GenericList for List types that contain any other type. It does so by adding type variables to `jit::Type`, and machinery for matching/replacing the type variables during `tryMatchSchema` and operator lookup.

It also modifies the builtin list ops to include a fallback that works on a GenericList object that simply holds IValues. This is distinguished from IValue's tuple type so that conversion to/from Python still happens losslessly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12040

Differential Revision: D10037098

Pulled By: zdevito

fbshipit-source-id: 0c5f2864d12e7d33554bf34cc29e5fb700dde150
2018-09-26 17:02:51 -07:00
Adam Paszke
18f9c07b18 Enable tracing of tensor factories with an out argument
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12051

Differential Revision: D10044890

Pulled By: apaszke

fbshipit-source-id: 2d794bf408875600bc71f354f0b4961d6b715094
2018-09-26 09:40:34 -07:00
Richard Zou
c8a0b11b7f add autodiff expressions for common operations (#11832)
Summary:
This PR does a few things:

Previously test_jit.py only tested autograd on backward graphs.
This is because we borrow from test_autograd and construct graphs with a small
number of nodes. Because the number of nodes is small (typically 1-2), those graphs
do not end up containing autodiff subgraphs, so autodiff never gets tested.

This PR enables autodiff testing by doing the following:
- added disableDebugAutodiffSubgraphInlining fn to graph_executor to disable
  autodiff subgraph inlining.
- (implementation) added autodiffSubgraphNodeThreshold and autodiffSubgraphInlineThreshold.
  These are set to their default values (2, 5) but disableDebugAutodiffSubgraphInlining()
  sets both to 1, disabling subgraph inlining and allowing 1-node autodiff subgraphs.
- The relevant backward jit tests disable autodiff subgraph inlining so they
  will test the autodiff versions of the operators instead of autograd whenever
  an autodiff variant exists.
- We don't run the tests that do inline autodiff subgraphs anymore.
  This has no impact on testing correctness because the assumption is
  that autograd functions are correct and are tested in test_autograd.py

This allows the graph fuser to work better because a lot of these ops were previously not autodiff-compatible but are fusible. As a more concrete example, LSTM backward contains a lot of tensor-scalar operations; these autodiff formulas help its double backward pass.

Included:
- arithmetic overloads
- abs, acos, asin, atan, ceil, cos, cosh, exp, expm1, floor, fmod, frac, log, log10, log1p, log2, reciprocal, remainder, round, sin, sinh, tan, trunc, rsqrt

TestJitGenerated tests autodiff for all of the added operations.

cc apaszke zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11832

Differential Revision: D10031256

Pulled By: zou3519

fbshipit-source-id: 9daf9900a5ad187743609cd0fbbd10b15411ad93
2018-09-26 08:10:04 -07:00
Adam Paszke
a830964007 Eliminate no-op adds and muls in peephole pass (#11801)
Summary:
Because we emit a lot of them in our symbolic AD. This brings down the backward time of an LSTM I'm testing from 14.2ms to 12.5ms (a 15% improvement).
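
A toy illustration (not from the PR) of the kind of identity the peephole pass can now erase; symbolic AD emits many such adds and muls internally:
```
import torch

@torch.jit.script
def f(x):
    # x + 0 and x * 1 are no-ops; the peephole pass can remove them so
    # they never reach the fuser or the interpreter.
    return (x + 0) * 1
```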
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11801

Differential Revision: D9916815

Pulled By: apaszke

fbshipit-source-id: 2d9cb886c424ccd43b9f996aad89950d3bddf494
2018-09-24 17:48:48 -07:00
Adam Paszke
51414822f5 Stop moving constants into DifferentiableSubgraphs (#11809)
Summary:
Or even taking them as inputs. This prevents optimizations to happen
either inside the differentiable subgraphs, or in the surrounding graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11809

Differential Revision: D10009680

Pulled By: apaszke

fbshipit-source-id: face638566228e470a6deec48dc2aa3a1cce26d4
2018-09-24 13:24:53 -07:00
Richard Zou
b5f60af94c Shape prop view/reshape/as_strided through prim::ListConstructs (#11877)
Summary:
Previously, aten::view returned a Dynamic type when attr::size was a prim::ListConstruct.
See [this for a repro](https://gist.github.com/zou3519/cbd610472ba3369f556fa612a7d93b28).
This prevented a pre-multiplied LSTM input graph from being fusible (aten::view is necessary
to do the premultiplication).

If aten::view is passed an output of a prim::ListConstruct node, then shape prop should
be able to figure out its TensorType because we statically know the number of inputs to
prim::ListConstruct. This PR implements that.
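
A sketch of the pattern this unblocks: the size list passed to `aten::view` is built by a `prim::ListConstruct`, and shape prop can now propagate a concrete `TensorType` through it:
```
import torch

@torch.jit.script
def premultiply(x):
    # The view sizes are computed dynamically and gathered via prim::ListConstruct.
    return x.view(x.size(0) * x.size(1), x.size(2))
```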
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11877

Differential Revision: D9972356

Pulled By: zou3519

fbshipit-source-id: cb87786f6e7f222d4b8f07d8f2a9de34859cb6a5
2018-09-21 14:20:01 -07:00
Adam Paszke
7efbf3a827 Specialize ArgumentSpecs on tuple elements too (#11863)
Summary:
This is pretty important because a common situation of passing LSTM hidden states as a tuple completely trashes performance of a network.

Cleans up all our propagation/undef specialization passes, at a cost of increased complexity of `ArgumentSpec` and `GraphExecutor`. An alternative would be to simply flatten all tuple inputs to a graph ahead of time, but that might just end up being confusing in the future (you never know if you're working with a graph that can have tuples or not).
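
A sketch of the motivating pattern, assuming mypy-style type comments are available for declaring the tuple input:
```
import torch
from typing import Tuple

@torch.jit.script
def cell_wrapper(x, hidden):
    # type: (Tensor, Tuple[Tensor, Tensor]) -> Tensor
    h, c = hidden
    # Specializing the ArgumentSpec on the tuple elements means h and c get
    # concrete tensor types instead of falling back to dynamic shapes.
    return x + h * c
```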
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11863

Differential Revision: D9992814

Pulled By: apaszke

fbshipit-source-id: 0a565a3b23e32f8fa72c0534e07c1ce6187739fc
2018-09-21 14:19:58 -07:00
Adam Paszke
1ad7e0c5ec Minor JIT improvements (#11654)
Summary:
- Disable addmm fusion. The reason for this is explained in the comment.
- Tiny change in `stack.h` that lets us avoid constructing an unnecessary temporary `IValue` on the (C++) stack (it will only get created on the interpreter stack directly).
- Fixed a correctness issue in requires grad propagation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11654

Reviewed By: colesbury

Differential Revision: D9813739

Pulled By: apaszke

fbshipit-source-id: 23e83bc8605802f39bfecf447efad9239b9421c3
2018-09-21 14:19:54 -07:00
David Riazati
4e65fbfee5 Remove tests from EXCLUDE_SCRIPT that pass (#11916)
Summary:
Spuriously added in #11261

I had a PR to catch these automatically (#11279), but it had issues where it
passed on some CI environments but not others (e.g. for
`test_nn_group_norm`); any ideas?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11916

Differential Revision: D9992065

Pulled By: driazati

fbshipit-source-id: 05cfa8ed9af939e8ffd5827847ee7bfe0be799b2
2018-09-21 14:19:50 -07:00
Luca Antiga
58d28a5f12 Fix saving loaded module (#11915)
Summary:
This PR fixes #11913.

In order to test for this, the model is serialized twice in `getExportImportCopy`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11915

Differential Revision: D9984697

Pulled By: soumith

fbshipit-source-id: ae0250c179000c03db1522b99410f6ecb9681297
2018-09-21 06:58:16 -07:00
yya007
b91b15d86e Implementing Matrix Norm for torch.norm (#11261)
Summary:
Currently, the norm function only supports vector norms. This PR extends it to matrix norms.
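
A brief usage sketch; the 'fro' spelling of the Frobenius norm is assumed here rather than taken from the PR text:
```
import torch

A = torch.randn(4, 5)
frobenius = torch.norm(A, p='fro')  # matrix (Frobenius) norm
flattened = torch.norm(A.view(-1))  # 2-norm of the flattened tensor, the old behaviour
```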
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11261

Reviewed By: li-roy

Differential Revision: D9652379

Pulled By: yya007

fbshipit-source-id: 519b3fb80b563c17c56a24675c7b0e46bf5a3a1c
2018-09-20 14:43:13 -07:00
Thomas Viehmann
068eac255b Jit fuse clamp (#11574)
Summary:
This patch adds fused forward and backward for clamp to the jit.
This is one item of #11118. If it's OK, I'd be happy to also add some more of #11118.

The patch depends on #11150, which I merged into master as a base. I'll rebase it when that or #10981 is merged.

This is my first serious JIT patch; thank you, ngimel and the others, for the guidance. All errors are my own.
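
A sketch of a pointwise chain that can now be fused end to end (assumes a CUDA device; the torch.jit.trace call form here is the current one):
```
import torch

def gated(x, y):
    # clamp now participates in the fusion group together with the mul/add.
    return torch.clamp(x * y + y, min=-1.0, max=1.0)

inputs = (torch.randn(16, device='cuda'), torch.randn(16, device='cuda'))
fn = torch.jit.trace(gated, inputs)
out = fn(*inputs)
```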
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11574

Differential Revision: D9943090

Pulled By: apaszke

fbshipit-source-id: c40954b8c28c374baab8d3bd89acc9250580dc67
2018-09-20 14:43:10 -07:00
Richard Zou
8f4601fbac re-enable test_scalar_fusion
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11378

Differential Revision: D9943578

Pulled By: zou3519

fbshipit-source-id: fb9e4303e844d5e2515acce7869bcbe11526ab56
2018-09-20 07:56:25 -07:00
David Riazati
a79f5d77ad Add pretty printer for JIT IR (#10319)
Summary:
Adds some pretty-printing capability to the IR graph to make debugging easier and more human-readable; see `torch/csrc/jit/test_jit.cpp:925` and onwards for example outputs. Results aren't perfect yet but it's a start.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10319

Reviewed By: zdevito

Differential Revision: D9558402

Pulled By: driazati

fbshipit-source-id: 1d61c02818daa4c9bdca36d1477d1734cfc7d043
2018-09-18 17:39:44 -07:00
Wanchao Liang
d4e1fa45d0 allow no-alpha add/sub in onnx symbolic (#10972)
Summary:
The PR fixes #10873

The context is that the `aten::add` and `aten::sub` ST overloads don't have `alpha`, so the ONNX symbolic does not match.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10972

Reviewed By: jamesr66a

Differential Revision: D9724224

Pulled By: wanchaol

fbshipit-source-id: eb5d1b09fa8f1604b288f4a62b8d1f0bc66611af
2018-09-18 13:55:39 -07:00
David Riazati
7671f4ab1c Add math to scope when using inf in tests (#11302)
Summary:
This fixes #8515, which was mostly due to issues in the tests themselves. As long
as `math` is imported in the scope in which the script runs it resolves
to a `prim::Constant` with value `inf` correctly. This PR adds this to
the `test_jit.py` tests involving `inf` and adds a test to demonstrate
`inf` in a non-generated test.
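
A small sketch of the pattern the fixed tests rely on:
```
import math
import torch

@torch.jit.script
def saturate(x):
    # math.inf resolves to a prim::Constant with value inf as long as
    # `math` is imported in the enclosing scope.
    return torch.clamp(x, min=0.0, max=math.inf)
```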
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11302

Differential Revision: D9684336

Pulled By: driazati

fbshipit-source-id: 73df2848dfdb45ab50690a7c88df8fda269a64eb
2018-09-17 14:08:32 -07:00
Natalia Gimelshein
336323f53c return aten::gt to the list of fusable operations, add expected graphs (#11150)
Summary:
Fixes one of the #11118 issues.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11150

Differential Revision: D9861372

Pulled By: apaszke

fbshipit-source-id: 98b196b89e991d3936360b30568360367fd32e8b
2018-09-17 13:40:41 -07:00
Mike Ruberry
96d3f968eb Splits CPU and CUDA fusion compilers (#10981)
Summary:
This PR splits the CPU and CUDA fusion compilers, putting them into a new jit/fusers/ directory with jit/fusers/common for common components. In particular:

- A fusion interface is created that allows "fusion handles" to be requested
- The CPU and CUDA fusers implement this interface, with dispatch determined by device
- The fusion compilers, fusion function specializations and resource strings are split
- CPU-specific classes like TempFile and DynamicLibrary are in the CPU fuser
- Common classes like TensorDesc and the base fusion function class are in jit/fusers/common
- There is still some specialization in jit/fusers/common, but these specializations are small(-ish)
- Updates the build system to remove the dummy interface on Windows and minimize the use of macros

This structure should allow in-flight PRs to easily rebase while providing a clear interface to the fusers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10981

Reviewed By: soumith

Differential Revision: D9701999

Pulled By: apaszke

fbshipit-source-id: 3b6bec7b97e0444b2a93caa38d9b897f2e68c1b3
2018-09-14 14:05:34 -07:00
James Reed
278e304c18 Implement elif in string frontend (#11667)
Summary:
Closes #11625
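
A usage sketch via the string frontend; using `torch.jit.CompilationUnit` as the entry point is an assumption here:
```
import torch

# Compile a function from source, exercising the newly supported `elif`.
cu = torch.jit.CompilationUnit('''
def sign(x):
    if bool(x > 0):
        return 1
    elif bool(x < 0):
        return -1
    else:
        return 0
''')

print(cu.sign(torch.tensor(-2.0)))
```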
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11667

Differential Revision: D9828145

Pulled By: jamesr66a

fbshipit-source-id: c72dc41cb310a4211b4e4c6b33f7e2c1fb3581a0
2018-09-14 10:09:46 -07:00
Adam Paszke
98e04db955 Implement requires_grad propagation in the JIT (#11586)
Summary:
Previously, we would pretty much assume that all floating point tensors do require grad, which might result in some unnecessary compute.

I don't really like the fact that `TensorType` uses `tensor.is_variable() && tensor.requires_grad()` to infer the value of `requires_grad`, but changing constants to keep variables turns out to be pretty hard. I got halfway there, but it would still need some more work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11586

Reviewed By: ezyang

Differential Revision: D9813648

Pulled By: apaszke

fbshipit-source-id: 77f77756d18ff7632fca3aa68ce855e1d7f3bdb8
2018-09-13 19:25:26 -07:00
James Reed
0f1ca569ce End-to-end dynamic slicing with ONNX DynamicSlice experimental operator (#11255)
Summary:
Requires https://github.com/onnx/onnx/pull/1377

This PR makes it so that slices with dynamic boundary values can be exported from pytorch and run in caffe2 via ONNX.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11255

Differential Revision: D9790216

Pulled By: jamesr66a

fbshipit-source-id: 6adfcddc5788df4d34d7ca98341077140402a3e2
2018-09-13 12:39:52 -07:00
Roy Li
75f49befeb move instance_norm to aten (#10792)
Summary:
This also removes the usage of torch.onnx.symbolic_override in instance_norm. Fixes #8439.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10792

Differential Revision: D9800643

Pulled By: li-roy

fbshipit-source-id: fa13a57de5a31fbfa2d4d02639d214c867b9e1f1
2018-09-13 12:26:22 -07:00
Richard Zou
45e9ee096e Fix test_mnist_training_leaks_no_memory_cuda warning (#11639)
Summary:
Before this PR it would warn that "dropout is non deterministic and can
cause problems when checking trace", so I disabled the trace checking.

cc zdevito apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11639

Differential Revision: D9812493

Pulled By: zou3519

fbshipit-source-id: fab86928a5fba8b218b47543533aaf7c82a10b4a
2018-09-13 12:09:20 -07:00
David Riazati
6f53b4efea Remove implicit bool casts (#11503)
Summary:
In order to comply with Python's rules on implicit casting of
non-booleans to booleans, this PR removes implicit casting in favor of
explicit casts via `bool()`
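
A minimal sketch of the new requirement in script mode:
```
import torch

@torch.jit.script
def relu_or_zero(x):
    # An implicit tensor-to-bool condition such as `if x.sum() > 0:` is no
    # longer accepted; the cast must be explicit.
    if bool(x.sum() > 0):
        return x
    return torch.zeros_like(x)
```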

cc zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11503

Differential Revision: D9780869

Pulled By: driazati

fbshipit-source-id: c753acaca27f4e79dddf424c6b04674f44a6aad9
2018-09-13 11:26:45 -07:00
Zachary DeVito
ab3a2d25fb Improve error messages when trying to use nested lists.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11606

Differential Revision: D9806949

Pulled By: zdevito

fbshipit-source-id: c38abc4ce745a63d26a64f6aa1b41350e4b1acd5
2018-09-13 11:10:38 -07:00
Roy Li
a861573e36 fix tensor export bug in IR export (#11613)
Differential Revision: D9811094

Pulled By: li-roy

fbshipit-source-id: 012792dbedc70bd3fa242fdf2e39da0b21ce158d
2018-09-13 11:10:35 -07:00
Elias Ellison
77f6998e54 Guard against inputting or returning sparse tensors (#11550)
Summary:
Add guards against using sparse tensor by checking the conversion from IValue -> PyObject & PyObject -> IValue.

This diff also changes the behavior in constant propagation to not run python ops even if all of their inputs are constant, because of possible mutation to global state. This came up in trying to run get_sparse(), and I'm including it here to make it easier to land.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11550

Differential Revision: D9804712

Pulled By: eellison

fbshipit-source-id: 9fe7daf721c6d6e48df4925c0f9c775873bcdc77
2018-09-13 08:58:29 -07:00
Wanchao Liang
44b2b6b150 clean up jit generated tests (#11403)
Summary:
Clean up some generated tests now that we have nice new features like varargs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11403

Differential Revision: D9800545

Pulled By: wanchaol

fbshipit-source-id: e9973b113f78dc38cf99a81b6ede3fa3485f1cfa
2018-09-12 22:55:03 -07:00
Wanchao Liang
739e6af869 Add remainder % to the jit
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11557
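
A minimal usage sketch of the new operator in script mode:
```
import torch

@torch.jit.script
def mod3(x):
    return x % 3
```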

Reviewed By: apaszke

Differential Revision: D9784642

Pulled By: wanchaol

fbshipit-source-id: b7c60c3e9534555c9d7db83769965b3f2f277cdf
2018-09-12 12:40:38 -07:00
Zachary DeVito
ad7936e108 Fix reloading modules back into python (#11552)
Summary:
This changes the way module import works so that when a module
is reloaded in python it becomes a ScriptModule and not a _C.ScriptModule
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11552

Differential Revision: D9782751

Pulled By: zdevito

fbshipit-source-id: 9576850b75494b228ce3def94c0d371a4a44b11d
2018-09-12 12:25:15 -07:00
Richard Zou
13b05c8c78 Add EndToEndHybridModel CUDA tests (#11544)
Summary:
Also adds two additional tests that check for memory leaks while the relevant graph executors are alive:
- (minimal test): Create a ScriptModule, keep it alive, and test that it does not leak memory while it is alive
- (large test) Do MNIST training with a traced MNIST module and test that no memory is leaked while the traced module (with graph executor) is alive

cc apaszke zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11544

Reviewed By: apaszke

Differential Revision: D9778479

Pulled By: zou3519

fbshipit-source-id: 2d6cdea81dd1264f2c0396b662f70fdafecb3647
2018-09-12 11:25:18 -07:00
Adam Paszke
62c9d4ac96 Make .to() methods native functions (to fix JIT tracing)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11491

Differential Revision: D9771121

Pulled By: apaszke

fbshipit-source-id: 08d11101fb12093f8cf913b06359adddf3af9da7
2018-09-11 21:55:42 -07:00
Adam Paszke
8b196d671b Allow tracing random functions (only when using default generators) (#11539)
Summary:
Fixes #11504.

zdevito, neerajprad, fritzo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11539

Differential Revision: D9777897

Pulled By: apaszke

fbshipit-source-id: 56983260f5b93da7d5540a6242769ea7bd50eb06
2018-09-11 17:56:39 -07:00
Zachary DeVito
289a8c9b7d Allow train/eval, and non-Tensor arguments to python functions (#11505)
Summary:
This whitelists train/eval functions in script modules, and tests that nested nn.Modules still work.

This also changes the code for calling python functions from script to allow non-tensor inputs/outputs.
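
A minimal sketch of the newly whitelisted calls on a ScriptModule:
```
import torch

class Doubler(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        return x * 2

m = Doubler()
m.eval()    # train/eval are now allowed on script modules
m.train()
out = m(torch.ones(3))
```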
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11505

Differential Revision: D9765466

Pulled By: zdevito

fbshipit-source-id: 1177bff931324422b69e18fa0bbaa82e3c98ec69
2018-09-11 15:05:09 -07:00
James Reed
deac304b6b Bugfix for basic slicing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11428

Differential Revision: D9753999

Pulled By: jamesr66a

fbshipit-source-id: cfc4163a5a06b41beb808a4e24650d71f5d91f4f
2018-09-11 09:39:29 -07:00
Adam Paszke
120d769432 Add support for tracing strings (#11506)
Summary:
This enables `torch.einsum` both in tracing and in script mode. It's used all over Pyro at the moment, and is needed for any use of the JIT there.

Fixes #11157.
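
A small tracing sketch; the operand-tuple call form matches the torch.einsum signature of that time:
```
import torch

def batched_matmul(a, b):
    # The equation string is a compile-time constant; the tracer can now
    # handle string arguments like this one.
    return torch.einsum('bij,bjk->bik', (a, b))

example = (torch.randn(2, 3, 4), torch.randn(2, 4, 5))
traced = torch.jit.trace(batched_matmul, example)
```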

zdevito fritzo neerajprad
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11506

Differential Revision: D9764787

Pulled By: apaszke

fbshipit-source-id: 9b5251b9e7c5897034602bd07ff67b425d33326c
2018-09-11 06:02:41 -07:00
Adam Paszke
0ddbe668cd Improve shape analysis to cover all most commonly used ops (#11358)
Summary:
[Here's a list](https://gist.github.com/apaszke/f0821840bdcc67a977832dc58acc1b85) of ops that are in `register_aten_ops.cpp`, but aren't supported in shape prop. Everything else should work now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11358

Differential Revision: D9753693

Pulled By: apaszke

fbshipit-source-id: efeae0126ce16cb56b8797fc5246405588bcae3c
2018-09-11 06:02:39 -07:00
James Reed
3ad67c60f0 Traceable explicit Variable instantiation (#11463)
Summary:
There's a bunch of legacy code where people are explicitly instantiating Variable, and these call-sites have thus far been untraceable (appearing as prim::Constant nodes with the tensor value at the time of tracing). This makes it so that the new variable inherits the traced Value* from the tensor it's being constructed from
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11463

Differential Revision: D9756529

Pulled By: jamesr66a

fbshipit-source-id: da99c6a7621957a305f2699ec9cb9def69b1b2d7
2018-09-10 17:03:24 -07:00
Adam Paszke
3e665cc29b Improve support for tracing sizes, add more tracer warnings (#11288)
Summary:
Many constructors like `torch.zeros` or `torch.randn` didn't support
size tracing correctly, which is fixed by this patch. The same issue has been
fixed in the legacy tensor constructors.
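
A sketch of the size-tracing fix (torch.jit.trace call form as in current PyTorch):
```
import torch

def make_buffer(x):
    # The sizes passed to torch.zeros are traced symbolically instead of
    # being baked in as the concrete shape seen at trace time.
    return torch.zeros(x.size(0), x.size(1)) + x

traced = torch.jit.trace(make_buffer, torch.randn(3, 4))
out = traced(torch.randn(5, 6))  # different shape: the graph recomputes the sizes
```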

Additionally, new tensor constructors, which do not participate in
tracing (most notably `torch.tensor`, `torch.as_tensor` and
`torch.from_numpy`) raise a warning when they are used.

Finally, entering a traceable operation disables the tracing in its body.
This is needed because

zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11288

Reviewed By: ezyang

Differential Revision: D9751183

Pulled By: apaszke

fbshipit-source-id: 51444a39d76a3e164adc396c432fd5ee3c8d5f7f
2018-09-10 15:22:48 -07:00