Commit Graph

181 Commits

Michael Suo
b6a88b3344 Make traced fns also go into the global python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22901

Test Plan: Imported from OSS

Differential Revision: D16278160

Pulled By: suo

fbshipit-source-id: f3e7d83b48d5f5b5cb1548ccc5b9bd382a3c411a
2019-07-16 12:04:16 -07:00
Michael Suo
c5afdd0b55 Revert D16197605: [jit] Make traced fns also go into the global python CU
Differential Revision: D16197605

Original commit changeset: d32c975486b0

fbshipit-source-id: a00f0490cc23824792f3e745d7b5a003b1a33d20
2019-07-15 22:31:33 -07:00
Michael Suo
5fc1260e0a Make traced fns also go into the global python CU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22725

Differential Revision: D16197605

Test Plan: Imported from OSS

Pulled By: suo

fbshipit-source-id: d32c975486b0cb4808687f0aa89325571f2817c4
2019-07-15 13:13:12 -07:00
BowenBao
b3147bc674 PyTorch export to ONNX Opset 7 and 8 - Cont (#22421)
Summary:
This is an extension to the original PR https://github.com/pytorch/pytorch/pull/21765

1. Increase coverage of support for different opsets, along with comments and blacklisting.
2. Add backend tests for both Caffe2 and ONNX Runtime on opset 7 and opset 8.
3. Reuse the ONNX model tests in Caffe2 for ONNX Runtime.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22421

Reviewed By: zrphercule

Differential Revision: D16225518

Pulled By: houseroad

fbshipit-source-id: 01ae3eed85111a83a0124e9e95512b80109d6aee
2019-07-12 14:52:48 -07:00
Spandan Tiwari
9d11004ee4 Update ONNX constant folding to support opset 10. (#22515)
Summary:
Currently ONNX constant folding (the `do_constant_folding=True` arg in the `torch.onnx.export` API) supports only opset 9 of ONNX. For opset 10, it is a no-op. This change enables ONNX constant folding for opset 10. Specifically, there are three main changes:
1) Turn on constant folding ONNX pass for opset 10.
2) Update support for opset 10 version of `onnx::Slice` op for backend computation during constant folding.
3) Enable constant folding tests in `test/onnx/test_utility_funs.py` for multiple opsets (9 and 10).
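The folding idea the change builds on can be sketched in a few lines. This is a toy illustration over a made-up list-based IR, not PyTorch's actual pass or its `onnx::Slice` handling: any node whose inputs are all known constants is evaluated at export time and replaced by a constant node.

```python
# Toy constant folding over a list-based IR (illustrative only, not
# PyTorch's implementation). Each node is (name, op, inputs).
import operator

OPS = {"add": operator.add, "mul": operator.mul}

def constant_fold(nodes):
    env, out = {}, []  # env maps names to known constant values
    for name, op, inputs in nodes:
        if op == "const":
            env[name] = inputs[0]
            out.append((name, op, inputs))
        elif op in OPS and all(i in env for i in inputs):
            # All inputs are constants: evaluate now and emit a const node.
            env[name] = OPS[op](*(env[i] for i in inputs))
            out.append((name, "const", [env[name]]))
        else:
            out.append((name, op, inputs))  # depends on a runtime input
    return out

graph = [
    ("a", "const", [2]),
    ("b", "const", [3]),
    ("c", "mul", ["a", "b"]),   # foldable: 2 * 3
    ("d", "add", ["c", "x"]),   # "x" is a runtime input, not foldable
]
folded = constant_fold(graph)
```

After folding, `c` has become a constant `6` while `d` is left alone because it consumes a runtime input.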
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22515

Reviewed By: zrphercule

Differential Revision: D16189336

Pulled By: houseroad

fbshipit-source-id: 3e2e748a06e4228b69a18c5458ca71491bd13875
2019-07-11 16:29:03 -07:00
Michael Suo
3b2844eeea Make CompilationUnit own Functions (#22202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22202
ghimport-source-id: de6c963af1df76d2d6357155e64a5913ab879f76

Test Plan: Imported from OSS

Differential Revision: D15998761

Pulled By: suo

fbshipit-source-id: 5414a6424953738d823b265d20dc67dde6e5b2d8
2019-07-04 17:12:00 -07:00
Sebastian Messmer
17cc79865d Fix dead code elimination in onnx export (#22476)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22476

Dead code elimination assumes a valid jit graph because it checks if operators have side effects.
The ONNX export path destroys the jit graph right before calling dead code elimination, but ONNX export doesn't actually care about side effects.
We can just call dead code elimination with the side effect lookup disabled, and things should work.
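The interaction between liveness and the side-effect check can be shown on a toy IR. This is an illustrative sketch, not PyTorch's pass: nodes whose outputs are never used are removed, unless they may have side effects and the check is enabled.

```python
# Toy dead code elimination with an optional side-effect check
# (illustrative only). Each node is (name, op, inputs, has_side_effect).
def eliminate_dead_code(nodes, outputs, check_side_effects=True):
    live = set(outputs)
    kept = []
    # Walk backwards so uses are seen before definitions.
    for name, op, inputs, side_effect in reversed(nodes):
        if name in live or (check_side_effects and side_effect):
            kept.append((name, op, inputs, side_effect))
            live.update(inputs)
    return list(reversed(kept))

graph = [
    ("a", "load", [], False),
    ("b", "print", ["a"], True),      # side-effecting, output unused
    ("c", "add", ["a", "a"], False),  # dead: "c" is never used
    ("d", "mul", ["a", "a"], False),
]
# With the check, the print survives; with it disabled, it is dropped too.
with_check = eliminate_dead_code(graph, ["d"])
without_check = eliminate_dead_code(graph, ["d"], check_side_effects=False)
```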

Reviewed By: houseroad

Differential Revision: D16100172

fbshipit-source-id: 8c790055e0d76c4227394cafa93b07d1310f2cea
2019-07-02 21:28:57 -07:00
Sebastian Messmer
1f9c4fdb5e split onnx passes (#22413)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22413

_jit_pass_erase_number_types invalidates the jit graph, but parts of _jit_pass_onnx rely on having a valid jit graph.

This splits _jit_pass_onnx into _jit_pass_onnx_remove_print and _jit_pass_onnx_preprocess_caffe2 (which rely on the valid jit graph), runs these before _jit_pass_erase_number_types,
and then runs the rest of _jit_pass_onnx after _jit_pass_erase_number_types.
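The ordering constraint can be made concrete with a schematic pipeline. The pass names below come from the commit message, but the mechanics (a `valid` flag on a dict) are purely illustrative, not PyTorch's code:

```python
# Schematic sketch of the pass ordering: passes that need a valid JIT
# graph must run before the pass that invalidates it.
def remove_print(graph):          # relies on a valid JIT graph
    assert graph["valid"]
    return graph

def preprocess_caffe2(graph):     # relies on a valid JIT graph
    assert graph["valid"]
    return graph

def erase_number_types(graph):
    graph = dict(graph)
    graph["valid"] = False        # invalidates the JIT graph
    return graph

def onnx_rest(graph):
    return graph                  # tolerates the invalidated graph

graph = {"valid": True}
for p in (remove_print, preprocess_caffe2, erase_number_types, onnx_rest):
    graph = p(graph)
```

Running `remove_print` or `preprocess_caffe2` after `erase_number_types` would trip the assertion, which is the bug class the split avoids.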

Reviewed By: houseroad

Differential Revision: D16079890

fbshipit-source-id: ae68b87dced077f76cbf1335ef3bf89984413224
2019-07-01 18:16:53 -07:00
Zachary DeVito
5b87049c66 remove uses of std::shared_ptr<Module> (#21934)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21934
ghimport-source-id: e64ab9096f43749ead3ac5567675b815da295664

Test Plan: Imported from OSS

Differential Revision: D15892401

Pulled By: zdevito

fbshipit-source-id: 6424139206593ff944556c69d8a54723884eacaf
2019-06-25 13:24:38 -07:00
David Riazati
afad3e4954 Add support for class annotations (#21379)
Summary:
This adds support for inferred attributes (everything except empty lists, dicts, and tuples) as well as PEP 526-style annotations on a class, eliminating the need for `torch.jit.Attribute`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21379
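The PEP 526 annotation style the message refers to looks like this in plain Python (the `Config` class and its fields are hypothetical; `torch.jit` is deliberately not imported so the snippet stands alone and only shows the annotation syntax itself):

```python
# PEP 526 variable annotations at class scope. Under this change, a
# scripted module could declare attribute types the same way, instead
# of wrapping values in torch.jit.Attribute.
from typing import Dict, List

class Config:
    names: List[str]
    scores: Dict[str, float]

    def __init__(self) -> None:
        self.names = ["a", "b"]
        self.scores = {"a": 1.0}

# The annotations are recorded on the class and can be inspected:
annotations = Config.__annotations__
```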

Differential Revision: D15718537

Pulled By: driazati

fbshipit-source-id: b7481ae3d7ee421613e931b7dc3427ef2a99757f
2019-06-18 09:49:09 -07:00
Zachary DeVito
972ec676b2 Remove lowered execution (#21674)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21674
ghimport-source-id: b8e27f0ce9b8b362daf73556ee67457fb5355062

Reviewed By: eellison

Differential Revision: D15777726

Pulled By: zdevito

fbshipit-source-id: 718ac676c9a1bcf99b856862fd29631d825645da
2019-06-16 14:29:18 -07:00
James Reed
c2a18a6702 Override print when python is present (#21625)
Summary:
This makes it so we can see the output of prim::Print in environments like IPython notebooks that override sys.stdout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21625

Differential Revision: D15756793

Pulled By: jamesr66a

fbshipit-source-id: 7d9a14b2e229ed358e784318e9d862677db2c461
2019-06-11 22:58:22 -07:00
Michael Suo
cab3e726df Split out Function into its own file (#21539)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21539
ghimport-source-id: f1e4396a0bec6e30d3179f926ec4da68807942f7

Differential Revision: D15741979

Pulled By: suo

fbshipit-source-id: 4cd0ed36bcbf8db0b36a101dda6f58975f806889
2019-06-10 16:37:58 -07:00
Zachary DeVito
ea822d9626 Interpreter support for CallFunction/CallMethod (#21562)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21562
ghimport-source-id: 17e5e183f730f50d97ef48973aafc6249d54978f

Reviewed By: suo

Differential Revision: D15729500

Pulled By: zdevito

fbshipit-source-id: efa8a133b617b1498810392a8da6b513ce00b5eb
2019-06-09 15:28:26 -07:00
Zachary DeVito
ad0c08f950 Expose ExecutionPlan in prep for function calls (#21561)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21561
ghimport-source-id: 4bf28d8140610a0cefef0c0a17f0a513ae855dde

Reviewed By: suo

Differential Revision: D15729498

Pulled By: zdevito

fbshipit-source-id: b26458336da1efaba71d8a577c3917c6622dae0d
2019-06-09 15:28:22 -07:00
Zachary DeVito
de31f6719c Add flag to temporarily enable first class modules (#21560)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21560
ghimport-source-id: a555ca33fcd3efd1147aaf90f26a8e63da1c1a67

Reviewed By: suo

Differential Revision: D15729502

Pulled By: zdevito

fbshipit-source-id: d6c11472bfc791e2ad1e9aa695b0439d72b79681
2019-06-09 15:28:19 -07:00
Zachary DeVito
03641413e5 Revert D15600068: Add flag to temporarily enable first class modules
Differential Revision: D15600068

Original commit changeset: 9b68e23d7f8b

fbshipit-source-id: 45f36b3aaa4f1c457c27490579496456cbbc680b
2019-06-07 22:20:47 -07:00
Zachary DeVito
e616a5e8b8 Revert D15600067: Expose ExecutionPlan in prep for function calls
Differential Revision: D15600067

Original commit changeset: 82b7de458dd6

fbshipit-source-id: ca26a362cd73bdb9e8c4eba15dd5c10986fa79fe
2019-06-07 22:20:44 -07:00
Zachary DeVito
bfb235b8c9 Revert D15618275: Interpreter support for CallFunction/CallMethod
Differential Revision: D15618275

Original commit changeset: 038ae27e5416

fbshipit-source-id: 8dbe0f564ba103fe445dacc471085c659171705f
2019-06-07 22:20:40 -07:00
Zachary DeVito
5f6afafdef Interpreter support for CallFunction/CallMethod (#21325)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21325
ghimport-source-id: eeca1176f5e00c85a69cd016acccf5105e670e02

Reviewed By: jamesr66a

Differential Revision: D15618275

Pulled By: zdevito

fbshipit-source-id: 038ae27e5416f1ce338009627c839a4d61a00658
2019-06-07 20:56:58 -07:00
Zachary DeVito
1517ff66a1 Expose ExecutionPlan in prep for function calls (#21273)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21273
ghimport-source-id: b92c1e07fbe4122467a21b98d29635295093e0c2

Reviewed By: jamesr66a

Differential Revision: D15600067

Pulled By: zdevito

fbshipit-source-id: 82b7de458dd65c175f55b0f383bfc3fcf4704032
2019-06-07 20:56:55 -07:00
Zachary DeVito
7e08bc42d5 Add flag to temporarily enable first class modules (#21272)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21272
ghimport-source-id: 43e73d1b93ccbe0dd6845eb3f7444c9d0abd444b

Reviewed By: jamesr66a

Differential Revision: D15600068

Pulled By: zdevito

fbshipit-source-id: 9b68e23d7f8b6046a5a0d6d9fd16138ac384b863
2019-06-07 20:56:52 -07:00
Nishant Pandit
bd2d318e23 Modify quant-dequant node api to take module object and method name (#21407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21407

The modified API takes a module object and the name of the method whose graph is
instrumented to insert the quant-dequant nodes.

Differential Revision: D15651624

fbshipit-source-id: 1ff1ae446c986184c724504c8fdd0dcd43864016
2019-06-05 19:08:56 -07:00
Wanchao Liang
113a27ee45 bake constants into the traced graph, get rid of getNestedValueTrace (#21046)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21046
ghimport-source-id: 5cb3efb1896fbe42336e24c14fbf0bb5e646528e

Differential Revision: D15530991

Pulled By: wanchaol

fbshipit-source-id: b096ca5a1cdce496742b7f7e1de3ef8d21e9a8b0
2019-06-03 21:48:11 -07:00
Nishant Pandit
a501e7d5be Add quant-dequant nodes for bias. (#20045)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20045

This pass adds quant-dequant nodes for bias. It requires the quant-dequant
passes for activations and weights to have run first, since their results are needed
to compute the qparams for bias.
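For context, a common convention in int8 quantization schemes (hedged here: the exact scheme this pass uses is not stated in the message) is to quantize a 32-bit bias with a scale derived from the activation and weight scales, with zero point fixed at 0:

```python
# Sketch of the standard bias-qparam convention used by many int8
# quantization schemes (illustrative, not necessarily this pass's rule):
# bias_scale = input_scale * weight_scale, zero_point = 0.
def bias_qparams(input_scale, weight_scale):
    return input_scale * weight_scale, 0

scale, zp = bias_qparams(0.02, 0.5)
```

This is why the bias pass depends on the activation and weight passes having produced their qparams first.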

Differential Revision: D15179141

fbshipit-source-id: 3aab9fceefcadc3fa42a4e802d9b1e18addad78a
2019-05-21 21:59:37 -07:00
Nishant Pandit
d73caca2a1 Add mandatory ScalarType nodes as input to the quant-dequant nodes. (#20468)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20468

A ScalarType node is now mandatory for activations and parameters.
This change inserts a ScalarType node for all the quant-dequant nodes. For activations, the default value was previously at::ScalarType::Undefined; this change removes that default and explicitly passes the at::ScalarType::QUint8 dtype.

Differential Revision: D15331600

fbshipit-source-id: 5b51e0b42e694bf409026af4783a12da6d7e234b
2019-05-20 20:01:17 -07:00
Jerry Zhang
220e6894c5 Rename qint8 data type (#19932)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19932

In preparation for adding an int8_t data type for QTensor.

Reviewed By: zafartahirov

Differential Revision: D15137838

fbshipit-source-id: 59462c36d6fc5982986d4196bf3f32f49bb294d7
2019-05-16 18:09:28 -07:00
Edward Yang
97e1f07ffc Replace AT_CHECK with TORCH_CHECK [shard 10/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20436

Reviewed By: jerryzh168

Differential Revision: D15318926

fbshipit-source-id: 71a43070cc50cc174f703ebc595f1d87c6fc1e91
2019-05-15 07:35:37 -07:00
Nishant Pandit
6a8f55796a Add quant-dequant nodes for weights
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20041

Differential Revision: D15178086

fbshipit-source-id: 8cb060d72b68e44bf042338924f203ae62d74f6a
2019-05-11 14:03:10 -07:00
Nikolay Korovaiko
9499c7b7ee Profiling GraphExecutor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19994

Differential Revision: D15307752

Pulled By: Krovatkin

fbshipit-source-id: 7b35191042199ef16823487e15fe639968cbdc89
2019-05-10 23:05:47 -07:00
Wanchao Liang
4d676d53a6 split canonicalize_ops, make a decompose pass (#19988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19988
ghimport-source-id: 1dbf39e07099fa24ef9a6c0221312bf01a8011b7

Differential Revision: D15190355

Pulled By: wanchaol

fbshipit-source-id: 83f2b6557efd758810ccb4a4229d71fdebfd06e0
2019-05-08 17:21:59 -07:00
Mikhail Zolotukhin
c931d7e9d2 SubgraphRewriter: Add a support for arbitrary replacement graphs in subgraph rewriter. (#20084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20084
ghimport-source-id: 91b3b0b66da00c6592a2d57c8f2a88a73c019d1a

Differential Revision: D15190191

Pulled By: ZolotukhinM

fbshipit-source-id: d57ba6b6790ea2fd277b2feb3f4a58895ed15486
2019-05-08 11:50:46 -07:00
Mikhail Zolotukhin
b3324d0fe3 SubgraphRewriter: Expose runOnGraph and use it in tests. (#20083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20083
ghimport-source-id: e4d425775c2a2fb5ed334727e902a91f744b697c

Differential Revision: D15190192

Pulled By: ZolotukhinM

fbshipit-source-id: 5fbcd61fa631d8f22b5016754f8d1a46eefb19c5
2019-05-08 11:50:43 -07:00
Mikhail Zolotukhin
8a6072c3bd SubgraphRewriter: Rename pattern fusion to subgraph rewrite. (#20082)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20082
ghimport-source-id: f0594f4ad918288fb3158b4ecfa8010cf09dd0c2

Differential Revision: D15190193

Pulled By: ZolotukhinM

fbshipit-source-id: 81b026398c94f2fbf7487cafbb86b7364a78d827
2019-05-08 11:22:29 -07:00
Thomas Viehmann
5c9ab6f411 Specialize Optional[T] to T (or subtype for Tensor) or None when executing graph (#18407)
Summary:
This patch specializes `Optional[Tensor]` graph inputs to either a `DimensionedTensorType` (if a Tensor is passed) or `NoneType`. Other `Optional[T]` are specialized to `T` or `None`.

- For unwrapping (checked and unchecked) we need to keep the output type, as IR code that follows unwrapping may not work with NoneType (just as it doesn't deal with Optional). While it would not be hit during execution, it will run against the (legitimate) assumptions of the analysis passes.
- Function lookup currently will not match NoneType when it expects Optional (I'm not entirely sure why this doesn't lead to unhappiness currently, but hey); I amend this at the level of the function-matching code (`operator.cpp`), but see Adam's comments. We would run into trouble if we needed to select between functions whose signatures only differ in Optional types with different subtypes, but we would have the same problem when calling them directly, so I would think this is OK.

- It would enable throwing away branches we can't hit. This also reduces the "blockiness" of the graph, so it may be easier to apply optimizations (e.g., fusing things inside `if t is None: ...` and outside the `if`).
- Values passed into `Optional[Tensor]` arguments will get shape information, which is very handy.
- It gets rid of the problem that tensors passed into Optional arguments erroneously get requires_grad set (#18270), though that also affects lists, which aren't fixed here.
- `Optional[List[int]]` is needed for #18697.

- We're changing typing in a more subtle way than the `TensorType` -> `DimensionedTensorType` change did.
- In particular, specializing to NoneType loses the Type information captured in the `OptionalType` element type.
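The specialization rule described above can be summarized as a small decision function. This is a toy sketch of the idea, not the actual JIT type system: an `Optional[T]` input is narrowed to `T` (or a dimensioned tensor type, for tensors) when a value is present, and to `NoneType` when `None` is passed.

```python
# Toy sketch of Optional[T] input specialization (illustrative only).
def specialize_optional(declared_inner, value):
    """Return the specialized type name for an Optional[declared_inner] input."""
    if value is None:
        return "NoneType"  # note: loses the OptionalType element type
    if declared_inner == "Tensor":
        return "DimensionedTensorType"  # tensors pick up shape information
    return declared_inner

t1 = specialize_optional("Tensor", object())  # stand-in for a real tensor
t2 = specialize_optional("Tensor", None)
t3 = specialize_optional("int", 5)
```

The `NoneType` branch is exactly the caveat noted above: once specialized, the element type recorded in the original `OptionalType` is gone.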
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18407

Reviewed By: zdevito

Differential Revision: D15216808

Pulled By: eellison

fbshipit-source-id: 01f1a7643deaf4962c3f55eff2070d54b0e54b69
2019-05-06 15:35:03 -07:00
Mikhail Zolotukhin
8b46938355 Cleanup includes in torch/csrc/jit/* (#19922)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19922
ghimport-source-id: 0434c46bf75621ff79ea27a18a2475e7f13e2487

Differential Revision: D15125015

Pulled By: ZolotukhinM

fbshipit-source-id: 5685edfc94067f62e363a85e9badb7f757b1d321
2019-05-06 13:40:26 -07:00
Nishant Pandit
e04caa3f44 Pass Quantization parameters for quant nodes (#19402)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19402

This pass propagates the qparams calculated after calibration to the
quant nodes; these will be used later for quantization.

Differential Revision: D14995230

fbshipit-source-id: 5709153ea1c039c4ab4470ddb689a303b0bcc6fd
2019-05-01 09:15:59 -07:00
Nishant Pandit
1f9a0c5dd6 Add observer nodes for input data nodes (#19232)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19232

Add observer nodes to collect stats for input data nodes, excluding params,
which are constant at inference and need not be observed. This information
is required to compute quantization params.
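What an observer ultimately feeds into is standard affine-quantization math (shown here generically; this is not PyTorch's internal observer code): track the min/max of observed values, then derive a scale and zero point for an unsigned 8-bit range.

```python
# Generic sketch: derive uint8 affine qparams from observed min/max.
def observe(values):
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must include zero
    scale = (hi - lo) / 255.0 or 1.0     # avoid zero scale for constants
    zero_point = int(round(-lo / scale))
    return scale, zero_point

scale, zp = observe([-1.0, 0.5, 3.0])
```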

Differential Revision: D14885485

fbshipit-source-id: 8762cc2a4e510e1553b3dbd1d1aecd55b4bdb89f
2019-05-01 03:49:14 -07:00
Mikhail Zolotukhin
2a95cf6345 Add a pattern-based fusion pass. (#19596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19596
ghimport-source-id: 1d7af5877dbeffa826201812649a9009c06c6305

Differential Revision: D15042033

Pulled By: ZolotukhinM

fbshipit-source-id: e3178d9aec2ac63fc3779ddedbd967aae0401c76
2019-04-29 19:17:31 -07:00
Karl Ostmo
8f0603b128 C++ changes toward libtorch and libcaffe2 unification (#19554)
Summary:
* adds TORCH_API and AT_CUDA_API in places
* refactor code generation Python logic to separate
  caffe2/torch outputs
* fix hip and asan
* remove profiler_cuda from hip
* fix gcc warnings for enums
* Fix PythonOp::Kind
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19554

Differential Revision: D15082727

Pulled By: kostmo

fbshipit-source-id: 83a8a99717f025ab44b29608848928d76b3147a4
2019-04-26 01:38:10 -07:00
James Reed
5be4bee4ff Don't create FusionGroups for known-CPU producer values (#19342)
Summary:
I believe the existing check in FuseGraph was only `false` if PyTorch was built with NO_CUDA=1. Otherwise, we would create fusion groups even on a CPU-only machine running CPU code, which is confusing. Instead, the decision to fuse or not now depends on whether the producer Value is a known CPU tensor; if it is, we skip fusion.
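The decision described above boils down to a per-value predicate rather than a build-time flag. A minimal sketch, with device names as plain strings (illustrative only):

```python
# Sketch: fuse based on the producer value's known device, not on how
# PyTorch was built. None models an unknown device, which still fuses.
def should_fuse(producer_device):
    """producer_device: 'cpu', 'cuda', or None when unknown."""
    return producer_device != "cpu"

decisions = [should_fuse(d) for d in ("cuda", "cpu", None)]
```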
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19342

Differential Revision: D15038351

Pulled By: jamesr66a

fbshipit-source-id: fce9d83929309a7bf14346833f84b996f3e7f6db
2019-04-22 16:57:18 -07:00
Mikhail Zolotukhin
868933a467 Fix clang-format. (#19550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19550
ghimport-source-id: 980d96762426d3e97c26839edbaf107a3fc18b2f

Differential Revision: D15028055

Pulled By: ZolotukhinM

fbshipit-source-id: a50a0aaa74d0f1b9249ad79ab80e4b7747c3bffc
2019-04-21 20:31:09 -07:00
Spandan Tiwari
a64cce326f Add constant folding to ONNX graph during export (Resubmission) (#18698)
Summary:
Rewritten version of https://github.com/pytorch/pytorch/pull/17771 using graph C++ APIs.

This PR adds the ability to do constant folding on ONNX graphs during PT->ONNX export. This is done mainly to optimize the graph and make it leaner. The two attached snapshots show a multiple-node LSTM model before and after constant folding.
A couple of notes:
1. Constant folding is turned off by default for now. The goal is to turn it on by default once we have validated it through all the tests.
2. Support for folding in nested blocks is not in place, but will be added in the future, if needed.

**Original Model:**
![multiple_lstm_original](https://user-images.githubusercontent.com/23646532/53987630-6ac53980-40d6-11e9-9702-1ccfee124a83.JPG)
**Constant-folded model:**
![multiple_lstm_constant_folded](https://user-images.githubusercontent.com/23646532/53987632-6c8efd00-40d6-11e9-81c5-362c16f68861.JPG)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18698

Differential Revision: D14889768

Pulled By: houseroad

fbshipit-source-id: b6616b1011de9668f7c4317c880cb8ad4c7b631a
2019-04-18 00:10:04 -07:00
Zachary DeVito
e958ceb5d7 Remove GraphExecutor's python bindings (#19141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19141
ghimport-source-id: 796a41f5514d29959af052fcf5391a2834850a80

Reviewed By: jamesr66a

Differential Revision: D14888702

Pulled By: zdevito

fbshipit-source-id: c280145f08e7bc210434d1c99396a3257b626cf9
2019-04-13 08:42:24 -07:00
Zachary DeVito
ddda563f22 Cleanup ScriptModule bindings (#19138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19138
ghimport-source-id: 10f810f5e7551c1cb65fc4799744083bd7ffd1ee

Reviewed By: jamesr66a

Differential Revision: D14886945

Pulled By: zdevito

fbshipit-source-id: a5e5bb08694d03166a7516ec038656c2a02e7896
2019-04-13 08:42:21 -07:00
Nishant Pandit
bcd527190a Quantizer pass to insert quant-dequant nodes into IR (#18446)
Summary:
- Quantizer pass to mutate the IR by inserting quant-dequant nodes
before and after nodes that support quantized ops. This information
will be used by the JIT compiler to substitute quantized ops.

- This currently covers simple models. It will be expanded later
with subgraph pattern matching to cover more complex patterns.
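The insertion step can be sketched on a toy op list. This is an illustration of the wrapping idea only, not PyTorch's pass (the set of quantizable ops below is a made-up example): every node that supports quantized ops gets a quant node before it and a dequant node after it, marking where a later pass can substitute quantized kernels.

```python
# Toy quantizer pass over a flat op list (illustrative only).
QUANTIZABLE = {"conv2d", "linear"}  # hypothetical example set

def insert_quant_dequant(nodes):
    out = []
    for op in nodes:
        if op in QUANTIZABLE:
            out += ["quant", op, "dequant"]  # wrap supported ops
        else:
            out.append(op)                   # leave others untouched
    return out

ir = insert_quant_dequant(["conv2d", "relu", "linear"])
```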
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18446

Differential Revision: D14592265

Pulled By: nishantpdce

fbshipit-source-id: c9ba6c12aa96cb9c117826e386721eec83a55ea6
2019-04-06 12:39:26 -07:00
James Reed
6084908287 Code string API for fuser testing (#18884)
Summary:
This adds a C++ function `debugGetFusedKernelCode` as well as a Python binding `_jit_fuser_get_fused_kernel_code` that will, given a FusionGroup graph and a set of specified inputs, return the compiled kernel source code. We can then check the contents of this source code for verification of the fuser codegen backend.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18884

Differential Revision: D14795508

Pulled By: jamesr66a

fbshipit-source-id: 8f6e9dd13ebbb517737d893b0b5f5e9aa06af124
2019-04-05 17:13:17 -07:00
Michael Suo
0a4117a36e run cpp tests for non-cuda builds in test_jit.py (#18826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18826
ghimport-source-id: 7ffa3bc7ef7402a6d6eb6ba5849e197019d77bf8

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18826 [jit] run cpp tests for non-cuda builds in test_jit.py**

We did all the work of nicely separating our cpp tests that don't require
CUDA, but they aren't run from test_jit.py if CUDA is missing.

Reviewed By: ZolotukhinM

Differential Revision: D14766287

fbshipit-source-id: 9326b3a5c90f6c20fc8cfaf1a1885a363b91f30a
2019-04-03 22:23:58 -07:00
Zachary DeVito
2d07993bcb Add ability to specialize class types to ArgumentSpec (#18314)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18314
ghimport-source-id: 8cecb768d476ab19c9460f39c8f94a764e4cb052

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18314 Add ability to specialize class types to ArgumentSpec**
* #18226 Add Slot type to abstract the raw pointers being used for slots.

Differential Revision: D14574395

fbshipit-source-id: cc3af6e56e9ae52990f4a1ad56ecceaa2d493577
2019-04-02 17:35:57 -07:00
eellison
af9335436d Re-land Parsing file check (#18570)
Summary:
The last time I tried to land it, there was a merge race with the docs coverage test. Re-landing with the fix.

Re-land of https://github.com/pytorch/pytorch/pull/18304
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18570

Reviewed By: driazati

Differential Revision: D14707285

Pulled By: eellison

fbshipit-source-id: 3a0265928aa8cad78961723d8bf0fbf871fdb71d
2019-04-01 11:56:32 -07:00