Commit Graph

2657 Commits

Author SHA1 Message Date
Pieter Noordhuis
9b69da2b55 Allow for iterations where no module parameter is used (#19821)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19821

It is possible that not a single parameter is used during an
iteration. If this is the case, the `prepare_for_backward` function
marks all parameters as unused, kicks off reduction of all buckets,
*and* finalizes the reduction.

This differs from the prior implementation, where we assumed that
autograd would produce a gradient for at least one parameter and used
the autograd callback mechanism to queue a finalizer callback. Now,
this finalizer may be executed inline.
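
A minimal sketch of the scenario this handles, assuming the `find_unused_parameters` reducer flag; the single-process gloo group is purely for illustration:

```
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class SometimesIdentity(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x, use_params: bool):
        # When use_params is False, no module parameter participates in
        # this iteration's computation at all.
        return self.linear(x) if use_params else x

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(SometimesIdentity(), find_unused_parameters=True)
out = model(torch.randn(2, 4, requires_grad=True), use_params=False)
# prepare_for_backward marks every parameter unused, kicks off reduction
# of all buckets, and finalizes the reduction inline.
out.sum().backward()
dist.destroy_process_group()
```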

Reviewed By: mrshenli

Differential Revision: D15113272

fbshipit-source-id: dc91458b569cd8c106ddaeea558464b515683550
2019-04-27 22:57:59 -07:00
Michael Suo
f0a007a26c Use QualifiedName for classes (#19575)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19575
ghimport-source-id: eaed1d9d66672aadfe893e62a3a811da8ac7d966

Differential Revision: D15035281

Reviewed By: shannonzhu

Pulled By: suo

fbshipit-source-id: 7bac5e5f9223af77268bc03e08b37450a6840dbe
2019-04-27 16:13:27 -07:00
Michael Suo
1d6e868c2f make QualifiedName a value type (#19626)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19626
ghimport-source-id: 20f03b245245a7fab1c175cebc41e477b1eadc68

Reviewed By: shannonzhu

Differential Revision: D15049582

Pulled By: suo

fbshipit-source-id: 9e73a1d8ee8aa1849880815fa6fddebca408b8b4
2019-04-27 16:13:24 -07:00
Michael Suo
096dd8a4f2 separate QualifiedName into its own file (#19566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19566
ghimport-source-id: c237f2a25d1aa9fc41f19fefe7a08a53a54279db

Differential Revision: D15032205

Reviewed By: shannonzhu

Pulled By: suo

fbshipit-source-id: 7527d97565559ebfb2556600eea5d93c1e141ac8
2019-04-27 16:13:20 -07:00
Michael Suo
a25b79531c use fully qualified name for ScriptClasses (#19239)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19239
ghimport-source-id: 830aad6dc11d2a7247760a9c7c9fc8556f70a706

Differential Revision: D14928293

Reviewed By: eellison

Pulled By: suo

fbshipit-source-id: d2efa5d7f7397526083278d6650b9cee8d967b1a
2019-04-26 19:17:21 -07:00
David Riazati
f9786ad351 Add support for LONG_BINGET pickler op (#19815)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19815
ghimport-source-id: dd51c13892a8f0d91d726ae8ec65206d5e81f33e

Differential Revision: D15109969

Pulled By: driazati

fbshipit-source-id: da0bb5e30038173e74ca3e0e103dc11ba1638797
2019-04-26 17:13:48 -07:00
Max Wang
c5845c4482 Add support for reduce-scatter in c10d (#18844)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18844
ghimport-source-id: c6b2f0032c7c2212be2000a9c1f262f63d878a97

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18844 Add support for reduce-scatter in c10d**
* #18820 Refactor ProcessGroupNCCL collective primitives
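
A hedged sketch of the collective's semantics (the c10d implementation here is NCCL, so real use needs CUDA tensors and an already-initialized process group; names below are illustrative):

```
import torch
import torch.distributed as dist

def demo_reduce_scatter(rank: int, world_size: int) -> torch.Tensor:
    # Each rank contributes one tensor per rank; rank i receives the
    # elementwise SUM of every rank's i-th contribution.
    inputs = [torch.full((4,), float(rank)).cuda() for _ in range(world_size)]
    output = torch.empty(4).cuda()
    dist.reduce_scatter(output, inputs, op=dist.ReduceOp.SUM)
    return output
```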

Reviewed By: mrshenli

Differential Revision: D14768369

fbshipit-source-id: a9def7a0da6e9cd995e982371cc1e22f3df1a156
2019-04-26 13:46:57 -07:00
Wanchao Liang
236c2b2387 Let script module buffer attributes also cast device/type (#19700)
Summary:
Tested locally that this fixes #19039; did not add a test since there's no way to create a script module in the C++ world.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19700
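
A hedged sketch of the fixed behavior: a registered buffer on a scripted module should follow the module's `.to(...)` cast, just as parameters do.

```
import torch

class WithBuffer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("running_mean", torch.zeros(3))

    def forward(self, x):
        return x + self.running_mean

m = torch.jit.script(WithBuffer())
m.to(torch.float64)
# The buffer is cast along with the module rather than being skipped.
assert m.running_mean.dtype == torch.float64
```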

Differential Revision: D15094195

Pulled By: wanchaol

fbshipit-source-id: fcc2c1e5efbc160d976ae485ba2457442f62f065
2019-04-26 13:06:52 -07:00
Will Feng
5099db08d4 Ignore nn::Functional submodules in nn::Module serialization (#19740)
Summary:
Currently, the Python API doesn't serialize layers that don't have weights (such as `nn.ReLU` and `nn.MaxPool2d`, e.g. in https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py#L80-L81). If one saves a model that contains weight-less layers in Python and tries to load it into C++, the C++ module loading code (`torch::load(...)`) will throw an error complaining that the expected layers are not found in the serialized file (e.g. https://github.com/pytorch/vision/pull/728#issuecomment-480974175). This PR solves the problem by ignoring layers that are not serializable (which currently only means `nn::Functional`) in the C++ module serialization code (`torch::save(...)` and `torch::load(...)`); the user is expected to wrap weight-less layers in `nn::Functional` so that they are ignored when serializing / deserializing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19740
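
For reference, a sketch of the Python-side behavior described above: weight-less layers contribute nothing to the serialized state.

```
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.ReLU(),        # no parameters or buffers
    torch.nn.MaxPool2d(2),  # no parameters or buffers
)
# Only the Conv2d entries ('0.weight', '0.bias') appear; the weight-less
# layers are absent from the serialized state entirely.
print(list(model.state_dict().keys()))
```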

Differential Revision: D15100575

Pulled By: yf225

fbshipit-source-id: 956481a2355d1de45341585abedda05e35d2ee8b
2019-04-26 12:47:23 -07:00
Pieter Noordhuis
0d8a3610c5 Multiple module outputs and multiple calls to backward (#19799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19799

A module that returns multiple outputs, and where the caller may end up
making multiple calls to torch.autograd.backward, did not work with
DistributedDataParallel. It expected the first call to
torch.autograd.backward to provide gradients for ALL parameters that
expect gradients and were used in computing the module output. If you
have outputs with disjoint autograd graphs, it is fine to call
torch.autograd.backward on both and fill in the module's parameter
gradients in separate chunks.

With this change we delay queuing the finalizer callback until we have
marked all buckets as ready, instead of queueing it the first time we
receive an autograd hook. This returns the current implementation to
be functionally equivalent to the DistributedDataParallel
implementation before #18953 was merged.

Reviewed By: mrshenli

Differential Revision: D15097045

fbshipit-source-id: 2df023319713bc31e29a8b45108c78e6593fccd4
2019-04-26 08:20:10 -07:00
Eric Faust
dcfb5620df Allow passing lists as trace inputs.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19580
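
A hedged sketch of what the title enables: a list of tensors as a trace example input.

```
import torch

def sum_all(tensors):
    return torch.cat(tensors).sum()

# The example-input tuple contains a list of tensors, which tracing
# now accepts.
traced = torch.jit.trace(sum_all, ([torch.randn(3), torch.randn(3)],))
print(traced([torch.ones(3), torch.ones(3)]))  # tensor(6.)
```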

Differential Revision: D15034978

fbshipit-source-id: d3bc32ccae1c12104f2bde43fd4700d220bb3ca9
2019-04-26 02:41:57 -07:00
Karl Ostmo
8f0603b128 C++ changes toward libtorch and libcaffe2 unification (#19554)
Summary:
* adds TORCH_API and AT_CUDA_API in places
* refactor code generation Python logic to separate
  caffe2/torch outputs
* fix hip and asan
* remove profiler_cuda from hip
* fix gcc warnings for enums
* Fix PythonOp::Kind
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19554

Differential Revision: D15082727

Pulled By: kostmo

fbshipit-source-id: 83a8a99717f025ab44b29608848928d76b3147a4
2019-04-26 01:38:10 -07:00
Zachary DeVito
a425e1cbf8 Remove duplicate inlineCallToCode (#19724)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19724
ghimport-source-id: a68d28ac9bbe62dd61f03bfd9d57f4ef1d0ce9c9

Reviewed By: jamesr66a

Differential Revision: D15078532

Pulled By: zdevito

fbshipit-source-id: bebd34ff6105f538395260b027dc169448b5bc96
2019-04-25 15:53:10 -07:00
Zachary DeVito
330990d878 Serialize first-class version of functions (#19723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19723
ghimport-source-id: 7f7ec6200c3b42d19046a3e228a3d82212697f14

Reviewed By: jamesr66a

Differential Revision: D15078533

Pulled By: zdevito

fbshipit-source-id: fe421afab9607ee942f6d200f04bb6335fc0aa97
2019-04-25 15:53:07 -07:00
Zachary DeVito
6cb1b994d8 Trace directly into first-class module form. (#19722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19722
ghimport-source-id: b024666feccb324f5ba9aae4a6301723e04d9846

Reviewed By: jamesr66a

Differential Revision: D15078535

Pulled By: zdevito

fbshipit-source-id: b866b31c1864a090c545560cbecee81e34ad2d16
2019-04-25 15:53:03 -07:00
Zachary DeVito
31524bda1f @torch.jit.script(fn) now is a torch.jit.Function (#19721)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19721
ghimport-source-id: b4f5024adc845a82dc5197d19aab1496bf85089f
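
A hedged sketch of the user-visible change in the title: scripting a free function yields a compiled function object directly.

```
import torch

@torch.jit.script
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x + y

print(type(add))  # a compiled torch.jit function, not a Python wrapper
print(add.graph)  # compiled functions expose their TorchScript IR graph
```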

Reviewed By: jamesr66a

Differential Revision: D15078534

Pulled By: zdevito

fbshipit-source-id: 408d3a871302c5ac5d6426dc5de567f2188ebf4c
2019-04-25 15:53:00 -07:00
Zachary DeVito
12f7c2dea3 pybind CompilationUnit and Function directly (#19720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19720
ghimport-source-id: c5829234dbbe8f7fe719ffce3fa92ce5198ffd21

Reviewed By: jamesr66a

Differential Revision: D15078536

Pulled By: zdevito

fbshipit-source-id: e617de31fc907a408fb50e18d9358dfd64de1f9e
2019-04-25 15:52:57 -07:00
Daniel Thul
29d8711ef0 Fix compilation on Windows 10 (CUDA 10.0, Visual Studio 2017) (#19615)
Summary:
I want to use libtorch in a C++/CUDA project, but as soon as I include `<torch/torch.h>`, ".cu" files fail to compile:

`torch/csrc/jit/script/tree.h(64): error C3520: 'args': parameter pack must be expanded in this context`

This PR makes it build on my machine (don't know if it breaks anything though).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19615

Differential Revision: D15063712

Pulled By: ezyang

fbshipit-source-id: 7561e705f8f5b42b8e6a23430710b36508fee1ee
2019-04-25 09:37:17 -07:00
Natalia Gimelshein
3875e1ba45 try to make at::cat in mm_tree_reduction operate on contig tensors (#18816)
Summary:
Sometimes at::cat gets transposed inputs and falls onto a slow path. Also, make the jit_premul LSTM benchmark add the bias to the whole input tensor, to avoid separate reduction kernels in the backward pass.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18816

Differential Revision: D15013576

Pulled By: wanchaol

fbshipit-source-id: bcfa1cf44180b11b05b0f55f034707012f66281a
2019-04-24 23:44:25 -07:00
Wanchao Liang
c571969148 Fix the insert_guard for norm decomposition (#19646)
Summary:
Move the insert_guard all the way up to the beginning of the decomposition. This fixes the case where we lose the insert-point context after decomposeCommonNormalization but still need to modify the graph.

fixes #19502
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19646

Differential Revision: D15058040

Pulled By: wanchaol

fbshipit-source-id: ebdbf8623ebfe4556c461e1b650e94b905791adb
2019-04-24 23:12:37 -07:00
Wanchao Liang
3c81eb3aa7 add max_pool2d to AD, add tests for both autodiff and inference mode
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19661

Differential Revision: D15074431

Pulled By: wanchaol

fbshipit-source-id: b31cf2126c2c5d6a12c2ef5dc67b57677652f1fc
2019-04-24 22:19:51 -07:00
Natalia Gimelshein
614871d948 use relative path to load libthnvrtc (#19690)
Summary:
We had a few hard-to-repro cases where, very occasionally, libthnvrtc failed to load due to what looked like a garbled dladdr return in the `info.dli_fname` field. We could not root-cause why this happens, but this workaround avoids the problem altogether. $ORIGIN is already added to RPATH as the first search location, so dlopen("libthnvrtc.so") will look for libthnvrtc in the caller's (`libtorch.so.1`) directory, which was the purpose of the previous code that obtained the `libtorch.so.1` directory using dladdr.
```
root@4ec0aab027a0:/opt/conda/lib/python3.6/site-packages/torch/lib# readelf -d ./libtorch.so.1 | grep RPATH
 0x000000000000000f (RPATH)              Library rpath: [$ORIGIN:/usr/local/cuda/lib64:/opt/conda/lib]
```
Hopefully the same holds on Mac.
cc zdevito ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19690

Differential Revision: D15076990

Pulled By: soumith

fbshipit-source-id: a4d2992ccf26953f1fc73f17c4e752d69c58e2fc
2019-04-24 21:09:31 -07:00
Roy Li
72b8b6c374 Change some comments related to moving copy_ to native (#19618)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19618
ghimport-source-id: 6bb9965f2f7b72f602f03e27b664d7d7696edd00

Differential Revision: D15048632

Pulled By: li-roy

fbshipit-source-id: a2707e3086f3a9993780a7f76104c5f00f2a9618
2019-04-24 19:23:06 -07:00
davidriazati
c08f3d06c3 Add some of nn.init to weak script (#19640)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19640 [jit] Add some of nn.init to weak script**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19640

Pulled By: driazati

Differential Revision: D15065332

fbshipit-source-id: 30df9f02e527cd5e5ebe34b7e003444eae96c66d
2019-04-24 17:00:48 -07:00
Will Feng
9aa0e6078f Support serializing std::vector<torch::Tensor> (#19677)
Summary:
In the distributed training development work, we need to be able to serialize a `std::vector` of `torch::Tensor`s. This PR adds support for serializing `std::vector<torch::Tensor>`.

cc. mrshenli
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19677

Differential Revision: D15069860

Pulled By: yf225

fbshipit-source-id: 505147e5f5fea78be1bf60fb8418bc187dbc2a98
2019-04-24 16:50:16 -07:00
David Riazati
3d6e956412 Add LONG_BINPUT to unpickler (#19696)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19696
ghimport-source-id: 8d711cd3ed2b2810b5b3d765564429882f96d1f1

Differential Revision: D15072658

Pulled By: driazati

fbshipit-source-id: a28a90218874e07cfbed4f8df3d6d23ae5e70933
2019-04-24 16:39:23 -07:00
Elias Ellison
62447a5aa3 improve err msg (#19645)
Summary:
Print out the tensor value when throwing the "cannot insert tensor with grad" error.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19645

Differential Revision: D15057809

Pulled By: eellison

fbshipit-source-id: 3f622ef1322a75c965e780275f1fb447e9acf38d
2019-04-24 16:22:07 -07:00
Jerry Zhang
6ec55c13a9 Enable assignment for QTensor in pytorch frontend (#19676)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19676

Make copy work with QTensor and enable assignment of QTensor in the PyTorch frontend.

Differential Revision: D15064710

fbshipit-source-id: 04f2dc02a825695d41fa1114bfca49e92108fef3
2019-04-24 16:05:34 -07:00
Vitaly Fedyunin
d14abe3aff Add torch.from_file function similar to Storage.from_file, but returning a tensor (#18688)
Summary:
Ports the `torch.Storage.from_file(filename, shared, size)` function to `torch.from_file(filename, shared, size, dtype=torch.int)`, returning a tensor instead of a storage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18688
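
A hedged sketch of the ported function (the scratch path is illustrative):

```
import torch

# Write 16 int32 values to a file, then map the file back as a tensor.
torch.arange(16, dtype=torch.int32).numpy().tofile("/tmp/weights.bin")
t = torch.from_file("/tmp/weights.bin", shared=False, size=16,
                    dtype=torch.int32)
print(t)
```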

Differential Revision: D15012644

Pulled By: VitalyFedyunin

fbshipit-source-id: 3f62ca9e414fad3847fe71b785ff97b5bdc2d2cd
2019-04-24 15:38:56 -07:00
Zachary DeVito
87a6974193 Make it possible for self.forward to return a ScriptMethod (#19217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19217
ghimport-source-id: 6fdd7f5ac041dae950b47ca316f30682ede0b083

Reviewed By: suo

Differential Revision: D14922120

Pulled By: zdevito

fbshipit-source-id: 5e82e5d7ee72df6f401146d2519c80ea336ff40e
2019-04-24 11:14:34 -07:00
Edward Yang
c42f3f9055 Revert D15008160: Enable assignment for QTensor in pytorch frontend
Differential Revision:
D15008160

Original commit changeset: 5f1166246d76

fbshipit-source-id: 24c7350431ae6a87199d6e3f7ffbbc8ec7d3c28b
2019-04-24 06:58:13 -07:00
efaust
8273b9b3cb Enforce consistent dict iteration order for trace inputs. (#19528)
Summary:

Don't iterate over unordered_maps and expect a stable ordering. Should fix test flakiness.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19528

Differential Revision: D15023656

Pulled By: efaust

fbshipit-source-id: 91c9a31a8652fcf93ae0e942bea4cec67bb490c9
2019-04-23 23:36:48 -07:00
Jerry Zhang
309c15e2df Enable assignment for QTensor in pytorch frontend (#19530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19530
Make copy work with QTensor and enable assignment of QTensor in the PyTorch frontend.

Differential Revision: D15008160

fbshipit-source-id: 5f1166246d768b23f009cde1fa03e8952368a332
2019-04-23 21:29:31 -07:00
eellison
d902774cad Dont introduce aliasing in CSE or Constant Pooling (#19576)
Summary:
We can't introduce aliasing to a graph output, since it may be mutated afterwards.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19576

Differential Revision: D15057734

Pulled By: eellison

fbshipit-source-id: 33594c05d985a0c58edebd6252e1ee2c0efb6f0e
2019-04-23 20:39:09 -07:00
Elias Ellison
5119cc7cdf builtin ivalues sort (#19572)
Summary:
Add sorting to all the lists which we specialize on (Tensor, int, float, bool).

First part of https://github.com/pytorch/pytorch/issues/19372
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19572
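
A hedged sketch, sorting one of the specialized list types (int) inside TorchScript:

```
from typing import List

import torch

@torch.jit.script
def sorted_ints(xs: List[int]) -> List[int]:
    xs.sort()  # the new builtin sort on specialized lists
    return xs

print(sorted_ints([3, 1, 2]))  # [1, 2, 3]
```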

Differential Revision: D15052677

Pulled By: eellison

fbshipit-source-id: 301e8e0e3e29e04aca1311410db0a474fd833cff
2019-04-23 16:38:08 -07:00
James Reed
80020b3d2d Guard {set,rebase}_history on grad_fn check (#19623)
Summary:
We would previously have statements like

```
set_history(flatten_tensor_args( result ), grad_fn);
```

Internally, {set,rebase}_history would check grad_fn and short-circuit if it is nullptr. However, this means we were executing the expression `flatten_tensor_args( result )` and immediately throwing away the result, causing unnecessary allocations and overhead.

My JIT overhead benchmark script (with custom benchmark method):

```
import torch, time

@torch.jit.script
def add(x, y):
    return x + y

a = torch.rand([])
b = torch.rand([])

niter = 1000000

with torch.no_grad():
    s = time.time()
    add.__getattr__('forward').benchmark(niter, a, b)
    e = time.time() - s
    print('overhead per call (us)', e / niter * 1e6)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19623

Differential Revision: D15053399

Pulled By: jamesr66a

fbshipit-source-id: 8777e1a2b5c5a5bbd3a035b7247c8154c5fc4aa6
2019-04-23 15:40:11 -07:00
Elias Ellison
0922a64d22 add torch.tensor requires grad (#19445)
Summary:
Add support for setting requires_grad = True on torch.tensor within TorchScript.

Within constant propagation, we can't insert any constants that require grad.

Also added shape analysis and requires-grad analysis for torch.tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19445
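
A hedged sketch of the enabled construction:

```
import torch

@torch.jit.script
def make_leaf() -> torch.Tensor:
    # requires_grad=True is now honored inside TorchScript.
    return torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

print(make_leaf().requires_grad)  # True
```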

Differential Revision: D15046211

Pulled By: eellison

fbshipit-source-id: b4ef7a6b4b6b8dc03e1fa49f87dc415874cd1998
2019-04-23 12:27:52 -07:00
Bram Wasti
d8729efabe Only require python print on certain namespaces (#19383)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19383
ghimport-source-id: b93c7849a52d11ecbf26b614704740d44a2447f9

Differential Revision: D15032727

Pulled By: bwasti

fbshipit-source-id: a19f72abb99e63d87eab13022538f325b2e20526
2019-04-23 10:52:48 -07:00
Michael Suo
5bafb64e67 Revert D15039713: [pytorch][PR] add torch.tensor requires grad
Differential Revision:
D15039713

Original commit changeset: 47f1931b6fc4

fbshipit-source-id: fd91ce8ddd6d2f4e0016054dcdc2541dacc0e191
2019-04-22 23:15:49 -07:00
James Reed
e7fc7c732c Bugfix for fusion device check (#19594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19594

I missed a callsite

Reviewed By: wanchaol

Differential Revision: D15041457

fbshipit-source-id: eef76ad51bee06a56d31b4ab64f19250fe2ad8f0
2019-04-22 20:55:17 -07:00
Elias Ellison
d2b03512da add torch.tensor requires grad (#19445)
Summary:
Add support for setting requires_grad = True on torch.tensor within TorchScript.

Within constant propagation, we can't insert any constants that require grad.

Also added shape analysis and requires-grad analysis for torch.tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19445

Differential Revision: D15039713

Pulled By: eellison

fbshipit-source-id: 47f1931b6fc4a1137c13d80110cc404465bfdf06
2019-04-22 18:02:41 -07:00
James Reed
5be4bee4ff Don't create FusionGroups for known-CPU producer values (#19342)
Summary:
I believe the existing check in FuseGraph was only `false` if PyTorch was built with NO_CUDA=1. Otherwise, we would create fusion groups even on a CPU-only machine running CPU code, which is confusing. Instead, the decision to fuse now depends on whether the producer Value is a known CPU tensor. If it is, we skip fusion.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19342

Differential Revision: D15038351

Pulled By: jamesr66a

fbshipit-source-id: fce9d83929309a7bf14346833f84b996f3e7f6db
2019-04-22 16:57:18 -07:00
Mikhail Zolotukhin
8abab61d39 IRParser: optionally create name->value map of the parsed IR. (#19551)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19551
ghimport-source-id: e666e3c00786a3b1c747f2dd6e85a48a63bdd69d

Differential Revision: D15028056

Pulled By: ZolotukhinM

fbshipit-source-id: 37e08d6df1d43513748ecfdd8549738eac7ec24e
2019-04-22 16:09:05 -07:00
Nikolay Korovaiko
43d0b78c31 Profiling: Adding Profile Op to provide storage for profiling lambdas
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19365

Differential Revision: D14998968

Pulled By: Krovatkin

fbshipit-source-id: a7f7d1529cbe4e8b30638c6eb8e2ff68f6e114c3
2019-04-22 15:09:30 -07:00
Elias Ellison
19f73180cf Add manual_seed in script (#19510)
Summary:
Add manual_seed to torch script.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19510
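
A hedged sketch of seeding from inside TorchScript:

```
import torch

@torch.jit.script
def seeded_rand() -> torch.Tensor:
    torch.manual_seed(42)  # now callable in script
    return torch.rand(3)

assert torch.equal(seeded_rand(), seeded_rand())  # same seed, same sample
```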

Reviewed By: suo, driazati

Differential Revision: D15018823

Pulled By: eellison

fbshipit-source-id: d7734a8ad05ba254c0d88abf3fb58c4ce6a4e53b
2019-04-22 10:58:15 -07:00
Roy Li
689dd800ed Generate only one Type class per backend (#19295)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19295
ghimport-source-id: 9345110f91f044a449804ddd5116cc9179444a00

Differential Revision: D14948581

Pulled By: li-roy

fbshipit-source-id: a317b03d58d621e8df162918038f7543bfb13ba2
2019-04-21 21:16:14 -07:00
Roy Li
189f30603c Make complex its own backend (#19275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19275
ghimport-source-id: 73fd40b02152aed6f24225a88d7ffde7f700899e

Differential Revision: D14948582

Pulled By: li-roy

fbshipit-source-id: a1be6e57057defc74a007c5351c5edb2b9dcaf30
2019-04-21 21:16:10 -07:00
Roy Li
ab78449e8c Add ScalarType argument to Type::options() (#19270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19270
ghimport-source-id: a5ade6131f3260066c5750ea1fa9ed5c998bb791

Differential Revision: D14938707

Pulled By: li-roy

fbshipit-source-id: 018fb3f01706531a06515d6d861e5683a455a705
2019-04-21 21:16:07 -07:00
Mikhail Zolotukhin
868933a467 Fix clang-format. (#19550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19550
ghimport-source-id: 980d96762426d3e97c26839edbaf107a3fc18b2f

Differential Revision: D15028055

Pulled By: ZolotukhinM

fbshipit-source-id: a50a0aaa74d0f1b9249ad79ab80e4b7747c3bffc
2019-04-21 20:31:09 -07:00
Shen Li
1bd5f2c181 Fix some typos in jit README
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19548

Differential Revision: D15028275

Pulled By: mrshenli

fbshipit-source-id: 84ff635be3b4681962451b4c301271683174d7a8
2019-04-21 19:45:05 -07:00