Commit Graph

53 Commits

Mikhail Zolotukhin
360640bc9c Extract Python-specific SugaredValues to a separate file from init.cpp. (#19986)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19986
ghimport-source-id: 67f5fec4b5b2114f2922505a7743ed27e6d7e6cc

Differential Revision: D15160820

Pulled By: ZolotukhinM

fbshipit-source-id: e39238db8f30a8809891bff8a2fe39977124f6ca
2019-04-30 19:38:23 -07:00
Mikhail Zolotukhin
2a95cf6345 Add a pattern-based fusion pass. (#19596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19596
ghimport-source-id: 1d7af5877dbeffa826201812649a9009c06c6305

Differential Revision: D15042033

Pulled By: ZolotukhinM

fbshipit-source-id: e3178d9aec2ac63fc3779ddedbd967aae0401c76
2019-04-29 19:17:31 -07:00
Michael Suo
a25b79531c use fully qualified name for ScriptClasses (#19239)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19239
ghimport-source-id: 830aad6dc11d2a7247760a9c7c9fc8556f70a706

Differential Revision: D14928293

Reviewed By: eellison

Pulled By: suo

fbshipit-source-id: d2efa5d7f7397526083278d6650b9cee8d967b1a
2019-04-26 19:17:21 -07:00
Karl Ostmo
8f0603b128 C++ changes toward libtorch and libcaffe2 unification (#19554)
Summary:
* adds TORCH_API and AT_CUDA_API in places
* refactor code generation Python logic to separate
  caffe2/torch outputs
* fix hip and asan
* remove profiler_cuda from hip
* fix gcc warnings for enums
* Fix PythonOp::Kind
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19554

Differential Revision: D15082727

Pulled By: kostmo

fbshipit-source-id: 83a8a99717f025ab44b29608848928d76b3147a4
2019-04-26 01:38:10 -07:00
Dmytro Dzhulgakov
d247912dbf Add no-gpu build mode for all of PyTorch and Caffe2
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19687

Differential Revision: D15023347

fbshipit-source-id: 5bed0d72e8ff337e066c142ca5c8e2c2bae93746
2019-04-24 13:27:59 -07:00
Dmytro Dzhulgakov
8b798f43e3 Commit explicit libtorch_python sources (#19607)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19607

Explicit is better than implicit - it's pretty hard to debug where a particular file is if it's not greppable.

As a follow-up step, we should look at whether we can include build_variables.py in CMake directly to share the setup of the two build systems.

Reviewed By: ezyang

Differential Revision: D15023348

fbshipit-source-id: 600ef2d1871bc28530c6a02681b284f7499904df
2019-04-23 19:49:42 -07:00
Mikhail Zolotukhin
9818c7cb63 Add minimalistic implementation of subgraph matcher. (#19322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19322
ghimport-source-id: 93c713f829d1b2a9aa5d104cb1f30148dd37c967

Differential Revision: D14962182

Pulled By: ZolotukhinM

fbshipit-source-id: 3989fba06502011bed9c24f12648d0baa2a4480c
2019-04-19 16:35:16 -07:00
Sebastian Messmer
41dc54e291 Move function schema parser to ATen/core build target (#19282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19282

This is largely a hack because we need to use the function schema parser from ATen/core
but aren't clear yet on what the final software architecture should look like.

- Add function schema parser files from jit to ATen/core build target.
- Also move ATen/core build target one directory up to allow this.

We only change the build targets and don't move the files yet, because this is likely
not the final build setup and we want to avoid repeated interruptions
for other developers. cc zdevito

Reviewed By: dzhulgakov

Differential Revision: D14931922

fbshipit-source-id: 26462e2e7aec9e0964706138edd3d87a83b964e3
2019-04-18 01:03:37 -07:00
Sebastian Messmer
c7b1fdb767 Fixing function schema parser for Android (#19281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19281

String<->number conversions aren't available in the STL used in our Android environment.
This diff adds workarounds so that the function schema parser can be compiled for Android.

Reviewed By: dzhulgakov

Differential Revision: D14931649

fbshipit-source-id: d5d386f2c474d3742ed89e52dff751513142efad
2019-04-17 23:50:17 -07:00
Sebastian Messmer
094678c04b Split function schema parser from operator (#19280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19280

We want to use the function schema parser from ATen/core, but with as little dependencies as possible.
This diff moves the function schema parser into its own file and removes some of its dependencies.

Reviewed By: dzhulgakov

Differential Revision: D14931651

fbshipit-source-id: c2d787202795ff034da8cba255b9f007e69b4aea
2019-04-17 23:50:15 -07:00
Nikolay Korovaiko
58d4414c33 Profiling pipeline part1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18772

Differential Revision: D14952781

Pulled By: Krovatkin

fbshipit-source-id: 1e99fc9053c377291167f0b04b0f0829b452dbc4
2019-04-16 21:21:08 -07:00
Bram Wasti
b1539412db Add pass registration mechanism (#18587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18587
ghimport-source-id: 80d753f7046a2a719e0c076684f44fa2059a0921

Differential Revision: D14901227

Pulled By: bwasti

fbshipit-source-id: 56511d0313419b63945a36b80e9ea51abdef2bd4
2019-04-12 15:32:00 -07:00
Zachary DeVito
ef406ee925 First class modules in the compiler, round 2 (#19167)
Summary:
This PR propagates the use of first-class module objects into the compiler. This creates a transitional state where:

* compiler.cpp creates Graphs where `self` is a Module class and attributes/parameters/buffers/submodules are looked up with `prim::GetAttr`
* GraphExecutor still runs "lowered graphs" where the self object has been removed by a compiler pass `lower_first_class_method`.
* Tracing still creates "lowered graphs", and a pass "lift_lowered_method" creates a first-class method graph for things.

* This PR separates out Method and Function. A script::Function is a pure Graph with no `self` bound.  Similar to Python, a script::Method is just a bound `self` and its underlying `script::Function`.
* This PR also separates CompilationUnit from Module. A CompilationUnit is just a list of named script::Functions. Classes have a CompilationUnit holding the class methods, and Modules also have a CompilationUnit holding their Methods. This avoids the weird circular case Module --has a--> Class --has a--> Module ...

Details:
* In this transitional state, we maintain two copies of a Graph: first-class module and lowered. The first-class one has a self argument that is the module's class type. The lowered one is the lowered graph that uses the initial_ivalues inputs.
* When defining lowered methods using `_defined_lowered` we immediately create the first-class equivalent. The reverse is done lazily, creating lowered_methods on demand from the class.
* The two-way conversions will be deleted in a future PR when the executor itself runs first-class objects. However, this requires more changes to (1) the traces, (2) the python bindings, and (3) the onnx export pass, and would make this PR way too large.
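
As a rough illustration of the first-class form described above, here is a minimal sketch using the era's `torch.jit.ScriptModule` API; the module and attribute names are invented for the example:

```python
import torch

class MyModule(torch.jit.ScriptModule):
    def __init__(self):
        super(MyModule, self).__init__()
        # an attribute the first-class graph fetches via prim::GetAttr
        self.weight = torch.nn.Parameter(torch.rand(3))

    @torch.jit.script_method
    def forward(self, x):
        # in the first-class graph, `self` is a module-typed input and
        # `self.weight` lowers to prim::GetAttr; the lowered graph instead
        # receives the parameter through its initial_ivalues inputs
        return x + self.weight

m = MyModule()
print(m(torch.rand(3)))
print(m.graph)  # inspect which form of the graph you get
```
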
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19167

Differential Revision: D14891966

Pulled By: zdevito

fbshipit-source-id: 0b5f03118aa65448a15c7a7818e64089ec93d7ea
2019-04-11 13:55:48 -07:00
Zachary DeVito
f5165ade5b Revert D14842057: Compiler uses first-class modules**
Differential Revision: D14842057

Original commit changeset: ca6e7b5a4380

fbshipit-source-id: e8f1862a59bf20d5f78648b2fdc53a8b3750ead3
2019-04-11 06:17:01 -07:00
Zachary DeVito
5e1f0b2a07 Compiler uses first-class modules** (#19043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19043
ghimport-source-id: 0c9e80d5f35654af6d472abd5643bff3e9eb9ddf

Differential Revision: D14842057

Pulled By: zdevito

fbshipit-source-id: ca6e7b5a43805240f40b84d30e54495061067dc0
2019-04-11 00:00:48 -07:00
Zachary DeVito
2d07993bcb Add ability to specialize class types to ArgumentSpec (#18314)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18314
ghimport-source-id: 8cecb768d476ab19c9460f39c8f94a764e4cb052

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18314 Add ability to specialize class types to ArgumentSpec**
* #18226 Add Slot type to abstract the raw pointers being used for slots.

Differential Revision: D14574395

fbshipit-source-id: cc3af6e56e9ae52990f4a1ad56ecceaa2d493577
2019-04-02 17:35:57 -07:00
Mikhail Zolotukhin
74d9146559 build_variables.py: turn on link_whole for _C_impl library. (#18763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18763

Without the `link_whole` flag, in opt-builds some of the files are not linked into the `_C_impl` library, which causes some static initializers not to run (namely, registering a customPythonOperation from python_interpreter.cpp). This diff fixes that.

Differential Revision: D14732471

fbshipit-source-id: 57cff6b4b6d479ad7ab7fd29f677746d91d6ff45
2019-04-02 15:17:13 -07:00
Pieter Noordhuis
bdfdf6c2b9 C++ handler for gradient reduction (#18251)
Summary:
This commit adds the `c10d::Reducer` class that hooks into autograd
and performs gradient bucketing and reduction. These are the core
parts of `nn.parallel.DistributedDataParallel` that up to now were
only usable for CUDA models.

This should enable the following:

* Distributed data parallelism for models defined using the C++ frontend.
* Allow overlap of gradient computation and reduction for non-CUDA models.
* Enable distributed data parallelism for models with some unused parameters.

This does not include any logic for computing bucket assignment, which
can be done separately; either by observing autograd execution order
(this is what Apex does), or by assigning buckets based on some
maximum byte size, or both.

Also see #17757 and #13273.
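
For orientation, a minimal sketch of the user-facing feature this enables (DDP over a plain CPU model); the single-process gloo setup and the address/port are assumptions made only to keep the sketch self-contained:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# assumed single-process gloo group, only for illustration
dist.init_process_group(backend="gloo",
                        init_method="tcp://127.0.0.1:29500",
                        rank=0, world_size=1)

model = torch.nn.Linear(10, 10)   # a plain CPU model
ddp = DDP(model)                  # gradients now flow through c10d::Reducer

loss = ddp(torch.randn(4, 10)).sum()
loss.backward()                   # autograd hooks feed the bucketed all-reduce
dist.destroy_process_group()
```
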
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18251

Reviewed By: mrshenli

Differential Revision: D14571899

Pulled By: pietern

fbshipit-source-id: 20f95eefd288dfe8cfffe0a28ca22fa7c9c3cd4c
2019-04-01 14:30:02 -07:00
James Reed
85f36014e2 Experimental logging/counters API (#18235)
Summary:
This defines a generic counters API that users can use to provide monitoring functionality in, e.g., a production service. We expose both counters for runtime internals and a TorchScript API to create user-defined counters. Synopsis of the API:

- `torch/csrc/jit/script/logging.h` specifies the externally-facing API in C++
- `torch/jit/_logging.py` specifies the Python API

We use an interface, `LoggerBase`, to define the interactions between users and a logging backend. Implementing a subclass of `LoggerBase` allows the user to handle these events in a custom way, such as logging into a DB or calling into an infra-specific counters API.

From the frontend perspective, we can create log events in two ways:
1. We provide an `add_stat_value(name, val)` function. This calls into the Logger backend with a key/value pair. For example, we might call `add_stat_value('foo', 1)` to bump an event counter.
2. We provide a `time_point()` function to record a timestamp in nanoseconds. This can be used in conjunction with `add_stat_value` to record runtime wall clock durations.

Examples of frontend usage can be found in `test_jit.py TestLogging`.

We provide a trivial `LockingLogger` implementation as an example and for testing purposes. It is likely not ready for production usage. It demonstrates that a backend implementing the API can do things like specify aggregation types and report these aggregate stats via the `get_counters()` API.
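
A hedged usage sketch based only on the names given in this summary (`add_stat_value`, `time_point`, `LockingLogger`, `get_counters`); the `set_logger` registration hook is an assumption and may be named differently:

```python
import torch.jit._logging as logging

logger = logging.LockingLogger()     # the example backend described above
prev = logging.set_logger(logger)    # assumption: swap in our backend

start = logging.time_point()         # nanosecond timestamp
logging.add_stat_value('foo', 1)     # bump an event counter
logging.add_stat_value('foo_ns', logging.time_point() - start)

print(logger.get_counters())         # aggregated stats from the backend
logging.set_logger(prev)
```
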
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18235

Differential Revision: D14545060

Pulled By: jamesr66a

fbshipit-source-id: 04099543a1898cfdd411511e46e03d5dce9b4881
2019-03-29 17:14:03 -07:00
Ilia Cherniavskii
600eeecbf4 Add external callbacks into RecordFunction (#17844)
Summary:
Add a way to insert external callbacks into PT's RecordFunction
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17844

Differential Revision: D14399664

Pulled By: ilia-cher

fbshipit-source-id: 76654799811fefd3ffed4abfb46ed95b492cebab
2019-03-28 17:48:45 -07:00
Mikhail Zolotukhin
13b95eac55 Add quant-passes stubs. (#18151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18151
ghimport-source-id: 7d12462971bdf3e5e26a3f150f1fcad05bba1a15

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18152 Initial implementation of InsertObserverNodes pass.
* **#18151 Add quant-passes stubs.**

gh-metadata: pytorch pytorch 18149 gh/zolotukhinm@gmail.com/1/head

Differential Revision: D14584224

fbshipit-source-id: b3d0b5ff797160d5ad23f91f732e627b0129086c
2019-03-25 17:48:54 -07:00
Dmytro Dzhulgakov
6e0cbc7f31 Untangle internal build python and cpp dependencies
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18326

Reviewed By: ezyang

Differential Revision: D14576294

fbshipit-source-id: 186ce1e3d026d962b7386f861eddf093f583a878
2019-03-22 12:18:03 -07:00
Dmytro Dzhulgakov
7397eb7e8e End to end hack to call server side Caffe2 ops (#18267)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18267

Motivation: we don't actually want to use it for real under any circumstances. This is an idea to unblock our internal progress and parallelize workstreams. We can easily define schemas for all ops in question and implement forwarding to C2 ops, which is NOT going to be performant. Then several things can happen in parallel:
* move code of ops outside of C2 ops that depend on protobuf into c10
* development of optimization/fusion passes
* building python-level wrappers with clean API
* improving perf

This demonstrates Relu, quant, and dequant. It seems to cover all necessary use cases (except maybe weights prepacking). Ideally I'd demonstrate Conv, but I'll get to that later in a separate PR (contributions welcome).

Reviewed By: ezyang

Differential Revision: D14531232

fbshipit-source-id: 4cd4a71ae0cb373c6c0e81f965c442b82a1b4069
2019-03-22 11:17:45 -07:00
David Riazati
3d44305e9d Attribute serialization (#17423)
Summary:
Allows serialization/loading of attributes (`IValue`s of any type).
* metadata (attribute name, type) is stored in the `model.json`
* The binary format is a subset of the `pickle` module that supports the operations necessary for `IValue`s
    * Attributes are serialized in the order they are defined on a module to a list in a single `attributes` file, with submodule attributes coming first. This order directly matches the order attributes are listed in `model.json`
    * This can be inspected in Python with `pickle.load()` or with `pickletools` (PyTorch need not be installed for this to work)
        * A class is used to store a tensor's index into the tensor table of the model, so to unpickle the file you have to use a custom Unpickler:
        ```python
        import pickle

        class TensorID(object):
            def __setstate__(self, id):
                self.id = id

        class JitUnpickler(pickle.Unpickler):
            def find_class(self, module, name):
                if module == '__main__' and name == 'TensorID':
                    return TensorID
                # fall back to the normal lookup for everything else
                return super(JitUnpickler, self).find_class(module, name)

        JitUnpickler(open("my_model/attributes.pkl", "rb")).load()
        ```
    * pickle format: https://svn.python.org/projects/python/trunk/Lib/pickletools.py
* It does not currently support/guarantee that anything saved out with `pickle` directly (i.e. if you edit `attributes` with `pickle` instead of our tools) will be imported correctly

Also will fix #17683 and fix #16367

Followup Work:
* document format / choice of pickle: #17951
* create an example
* list specializations
* int size specializations, large binputs
* do a first pass over attributes to output only necessary `BINPUT` ops
* attribute reassignment (e.g `self.my_attribute = new_value`)
* `tensor.save("some_checkpoint.pkl")` support with tensors embedded in Pickle file
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17423

Differential Revision: D14470965

Pulled By: driazati

fbshipit-source-id: 6a21a9939efdbe59b4bc57fd31d6d630bab5297e
2019-03-18 18:18:22 -07:00
Michael Suo
18f721fb9a support serialization of classes (#17856)
Summary:
Stack:
* **#17856 [jit] support serialization of classes** [💛](https://our.intern.facebook.com/intern/diff/D14402599/)

Add support for saving/loading TorchScript modules that depend on user-defined classes.

We track class dependencies the same way we track tensor constants, then write them
all out so that we can just compile them in order before compiling the module
hierarchy.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17856

Reviewed By: shannonzhu

Differential Revision: D14461599

Pulled By: suo

fbshipit-source-id: 7115f87e069fd00dc8381d7de9997864fef7ea9f
2019-03-15 12:06:23 -07:00
Sebastian Messmer
7f7d12854d Remove legacy way of exposing caffe2 operators to PyTorch (#17742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17742

This path isn't used anymore, and is incompatible with the changes stacked on top of this diff.
Removing it.
cc bwasti to check and confirm these can really be deleted

Reviewed By: ezyang

Differential Revision: D14362426

fbshipit-source-id: 32cdc19f28c2a981ae1e204901420998367ee588
2019-03-08 10:22:41 -08:00
Sebastian Messmer
7d02a1fbc7 caffe2:libtorch_cuda depends on caffe2:caffe2_gpu (#17729)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17729

When doing "import torch" in fbcode, previously the caffe2 cuda kernels weren't loaded because libcaffe2_gpu.so wasn't loaded.
Once you also did "from caffe2.python import workspace", then the cuda kernels were loaded because that triggered a runtime mechanism for loading libcaffe2_gpu.so.

We want the cuda kernels to always be available, so this diff adds a dependency from caffe2:libtorch_cuda to caffe2:caffe2_gpu.

Reviewed By: ezyang

Differential Revision: D14353498

fbshipit-source-id: 76a9fe69f231b308ab40eac393bb216c6fad3658
2019-03-06 23:53:16 -08:00
Iurii Zdebskyi
3257608276 Removed all usages of TH_Index_Base (#17591)
Summary:
TH_Index_Base is hard-coded to 0 and can be removed from the code base.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17591

Differential Revision: D14269273

Pulled By: izdeby

fbshipit-source-id: d844e261f4af7297bad8a81e7d6dcf0a391b94e6
2019-03-04 12:51:42 -08:00
Wanchao Liang
ab95b5c6cc Rename prim::Undefined to prim::AutogradZero (#17611)
Summary:
supersedes #17245
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17611

Differential Revision: D14283581

Pulled By: wanchaol

fbshipit-source-id: 8022d02b8a021ea2fee9a18a2c8920eb123200c5
2019-03-01 15:13:18 -08:00
Michael Suo
e6a9062335 usertype -> class (#17528)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17528

As titled. register_prim_ops is messy because someone ruined clang-format, but I figured it's okay to include it here since this is such a mechanical change.

Reviewed By: driazati

Differential Revision: D14236943

fbshipit-source-id: c2b22845837b7f830015510e48ec2ee5202fa407
2019-03-01 10:08:23 -08:00
Michael Suo
830ca665f5 alias analysis refactor take 2 (#17594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17594

The original version of this broke things because a concurrent change raced with it in CI.

Reviewed By: ezyang

Differential Revision: D14266663

fbshipit-source-id: e8ac5dfcb7349b4f2c425d9f0eabbfc964314063
2019-03-01 10:08:22 -08:00
Michael Suo
1046593509 Revert D14231251: [jit] alias_analysis refactor
Differential Revision: D14231251

Original commit changeset: 6cd98ae6fced

fbshipit-source-id: 96189f47daf7cc4cf4ef5cd343022d56a2296b39
2019-02-28 12:56:17 -08:00
Michael Suo
54c5b10934 alias_analysis refactor (#17511)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17511

AliasTracker was doing bookkeeping for three concepts: the points-to graph,
writes, and wildcards.

This PR makes AliasTracker's job clearer: it keeps track of the points-to
graph. Thus it has been renamed MemoryDAG. Write and wildcard information were
pulled back into AliasDb as part of this—I may decide to pull them into their
own little modules since I don't want the alias analysis stuff to get too
bloated.

This refactor is necessary because we want to start tracking information for
aliasing elements that _aren't_ first-class IR Values (e.g. the "stuff" inside
a list), so MemoryDAG can't know too much about Values.

Reviewed By: houseroad

Differential Revision: D14231251

fbshipit-source-id: 6cd98ae6fced8d6c1522c2454da77c3c1b2b0504
2019-02-28 12:00:36 -08:00
Michael Suo
2cdbb140e6 user defined types (#17314)
Summary:
First pass at user defined types. The following is contained in this PR:
- `UserType` type, which contains a reference to a module with all methods for the type, and a separate namespace for data attributes (map of name -> TypePtr).
- `UserTypeRegistry`, similar to the operator registry
- `UserObject` which is the runtime representation of the user type (just a map of names -> IValues)
- `UserTypeValue` SugaredValue, to manage getattr and setattr while generating IR, plus compiler.cpp changes to make that work.
- Frontend changes to get `torch.jit.script` to work as a class decorator (see the sketch after these lists)
- `ClassDef` node in our AST.
- primitive ops for object creation, setattr, and getattr, plus alias analysis changes to make mutation safe.

Things that definitely need to get done:
- Import/export, python_print support
- String frontend doesn't understand class definitions yet
- Python interop (using a user-defined type outside TorchScript) is completely broken
- Static methods (without `self`) don't work

Things that are nice but not essential:
- Method definition order shouldn't matter (right now you can only reference a method that's already been defined)
- Class definitions can only contain defs; no other expressions are supported.

Things I definitely won't do initially:
- Polymorphism/inheritance
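
A minimal sketch of the class-decorator frontend described above; the class and method names are invented for the example:

```python
import torch

@torch.jit.script  # the class-decorator frontend from this PR
class Pair(object):
    def __init__(self, a, b):
        # data attributes live in the UserObject's name -> IValue map
        self.a = a
        self.b = b

    def sum(self):
        return self.a + self.b

@torch.jit.script
def use_pair(x, y):
    # object creation, getattr, and setattr compile to the new prim ops
    p = Pair(x, y)
    return p.sum()

print(use_pair(torch.ones(2), torch.ones(2)))
```
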
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17314

Differential Revision: D14194065

Pulled By: suo

fbshipit-source-id: c5434afdb9b39f84b7c85a9fdc2891f8250b5025
2019-02-26 01:34:07 -08:00
Zachary DeVito
356a94b64e Lazily load libcuda libnvrtc from c++ (#17317)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16860
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17317

Differential Revision: D14157877

Pulled By: zdevito

fbshipit-source-id: c37aec2d77c2e637d4fc6ceffe2bd32901c70317
2019-02-22 13:51:45 -08:00
Elias Ellison
81b43202ae Refactor Type Parser b/w Schemas & IRParser into a type common parser (#17383)
Summary:
Creates a new type parser shared between the IR parser and the schema parser.

Also adds parsing of CompleteTensorType and DimensionedTensorType, and feature-gates that for the IRParser.

Renames the existing type parser for Python annotations to python_type_parser, and names the new one jit_type_parser.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17383

Differential Revision: D14186438

Pulled By: eellison

fbshipit-source-id: bbd5e337917d8862c7c6fa0a0006efa101c76afe
2019-02-22 13:43:55 -08:00
Elias Ellison
89df22e57b Lightweight String check Utility (#16858)
Summary:
A lightweight implementation of the LLVM FileCheck utility. It currently only handles string matching; regexes and saving a regex to a variable name can be added as needed.

The currently intended usage is through the FileCheckBuilder Python handle, as shown in the tests and in the sketch below.
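
A short hedged sketch; it assumes the utility is reachable through `torch.testing.FileCheck`, a later exposure point that may differ from the original FileCheckBuilder handle:

```python
import torch
from torch.testing import FileCheck  # assumed exposure point

@torch.jit.script
def f(x):
    return x * 2 + 1

# assert the graph contains a mul followed (somewhere later) by an add
FileCheck().check("aten::mul").check("aten::add").run(str(f.graph))
```
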
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16858

Differential Revision: D14096244

Pulled By: eellison

fbshipit-source-id: c7c8d1457691c105e6ccbb3c1a378d96baac2569
2019-02-19 12:31:57 -08:00
Mikhail Zolotukhin
3a01a45f06 Implement IRParser. (#16987)
Summary:
It might need some cleaning up and might be missing some features, but it should already work for most cases.

This PR is based on top of PR16986 (so please review only the last commit here).
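
A minimal usage sketch, assuming the parser is exposed to Python as `torch._C.parse_ir`:

```python
import torch

# textual JIT IR, parsed back into a Graph object
ir = """
graph(%a : Tensor, %b : Tensor):
  %c : Tensor = aten::mul(%a, %b)
  return (%c)
"""
g = torch._C.parse_ir(ir)  # assumed binding name
print(g)
```
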
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16987

Differential Revision: D14074577

Pulled By: ZolotukhinM

fbshipit-source-id: 712b598f423265655f574bb9903e2066628eaad3
2019-02-16 20:23:50 -08:00
Mikhail Zolotukhin
6c06b32558 Implement NetDef <--> JIT IR converters. Try 2. (#17123)
Summary:
Currently the converters are very straightforward, i.e. there is no code for trying to
preserve semantics; we purely perform conversion from one format to another.

Two things that we might want to add/change:

1. Add semantic conversion as well (but it would probably be a good idea to keep
   it separate as a temporary thing).
2. Make sure we don't mess with value names, as they are crucial for current
   uses of NetDefs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17123

Differential Revision: D14090244

Pulled By: ZolotukhinM

fbshipit-source-id: 07175fa9235582e1d1da5f10a42a5c1280b1b394
2019-02-15 20:39:30 -08:00
Sebastian Messmer
16468a9f45 Automatically register c10 ops with JIT (#16534)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16534

All c10 ops from the c10 dispatcher are now automatically registered with JIT

Reviewed By: dzhulgakov

Differential Revision: D13869275

fbshipit-source-id: 5ab5dec5b983fe661f977f9d29d8036768cdcab6
2019-02-06 21:21:33 -08:00
Michael Suo
72a431edce split up AliasTracker into a separate file (#16588)
Summary:
This just moves things around to make AliasTracker independently testable and keeps things a little more separate. Follow-on PRs will change the interfaces of AliasDb and AliasTracker to be more clearly distinct.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16588

Differential Revision: D13891894

Pulled By: suo

fbshipit-source-id: c5b590b5fdd462afefe743e499034068bf35784a
2019-01-31 10:53:53 -08:00
James Reed
d1ed0176df Trace fork and join calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16232

Differential Revision: D13772974

Pulled By: jamesr66a

fbshipit-source-id: b2db370271809e26d3301f8cc98eec567db5e62b
2019-01-26 14:42:45 -08:00
Bram Wasti
13fde345fb plug caffe2 into jit (#16388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16388

previous diff broke master -- this refactors out the custom_operator.cpp file into a separate header + cpp pair (caffe2_operator.{h,cpp})

Reviewed By: smessmer

Differential Revision: D13823550

fbshipit-source-id: 00e005e650336132d05aef97c1f0e5242ccad5ba
2019-01-25 16:52:32 -08:00
Zachary DeVito
b2eb98f6c3 Remove cuda from autograd profiler (#15898)
Summary:
This puts stubs in the autograd profiler for the use of cuda APIs, allowing the cuda parts of libtorch to be linked separately from the CPU parts.

This also edits the buck build.

Previous:

For GPU builds:
_C -> csrc -> caffe2
For CPU builds:
_C -> csrc-cpu -> caffe2

Now:
GPU:
_C -> libtorch_cuda -> (libtorch -> caffe2, for CPU)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/15898

Reviewed By: ailzhang

Differential Revision: D13617991

Pulled By: zdevito

fbshipit-source-id: 6d84a50bb356a54b4217f93219902755601b00e1
2019-01-15 16:43:11 -08:00
Zachary DeVito
3f6b212e80 Register CPU/CUDA fuser dynamically (#15887)
Summary:
This avoids a bunch of conditional compilation logic
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15887

Reviewed By: eellison

Differential Revision: D13613239

Pulled By: zdevito

fbshipit-source-id: a18fc69676b3ef19b4469ab58d8714d1f6efccbb
2019-01-11 10:50:35 -08:00
Sebastian Messmer
8136c39b5e Enable calling caffe2 LayerNorm from PyTorch and JIT (#15243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15243

Register it as a custom JIT op.
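
A hedged sketch of the calling pattern for ops registered this way; `aten::relu` is used below as a runnable stand-in, since the exact registered name of the caffe2 LayerNorm schema isn't given in this summary:

```python
import torch

# torch.ops.<namespace>.<name> dispatches to any registered custom op;
# aten::relu stands in for the caffe2 LayerNorm schema here
x = torch.randn(4, 8)
y = torch.ops.aten.relu(x)
print(y.min())  # >= 0
```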

Reviewed By: dzhulgakov

Differential Revision: D13473791

fbshipit-source-id: 0f7e72e3efc85a75060a7597fadaf0a8bd289651
2019-01-10 16:22:18 -08:00
Elias Ellison
b1529eeadb Print out operator suggestions for unknown builtin op (#15183)
Summary:
This improves the error message for "unknown builtin op" to suggest similarly named ops.

Currently it prints out all operators whose names are within an edit distance of two.

Related issue: https://github.com/pytorch/pytorch/issues/13409
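
A sketch of triggering the improved message; `tanhh` is a hypothetical typo of `torch.tanh`:

```python
import torch

try:
    @torch.jit.script
    def f(x):
        return torch.tanhh(x)  # hypothetical typo of torch.tanh
except RuntimeError as e:
    # the message now suggests similarly named ops (within two edits),
    # e.g. aten::tanh
    print(e)
```
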
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15183

Differential Revision: D13578509

Pulled By: eellison

fbshipit-source-id: 5c73408eda1f7aa456f5bd28790c34df0c76aeca
2019-01-04 13:04:44 -08:00
Zachary DeVito
6bf05bfde6 allow non-final returns (#15463)
Summary:
This PR allows a subset of programs that have return statements that are not final in the graph.

`final_returns.h` contains a comment describing how this is accomplished.
To minimize complexity in `compiler.cpp`, this pass is done as an AST-to-AST rewrite before the compiler runs.
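
A minimal sketch of the newly allowed shape, using the standard script decorator:

```python
import torch

@torch.jit.script
def sign(x: int) -> int:
    if x > 0:
        return 1    # early, non-final return: rewritten by the AST pass
    elif x < 0:
        return -1
    return 0

print(sign(5), sign(-3), sign(0))
```
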
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15463

Differential Revision: D13538962

Pulled By: zdevito

fbshipit-source-id: 67105ca873351825b4a364092ab1873779f3e462
2018-12-21 14:01:33 -08:00
Zachary DeVito
0368054a6d Split up compiler.cpp (#15355)
Summary:
This separates the different parts of compiler.cpp to make their relationships clearer. In particular it adds:

* sugared_value.{h,cpp} - all the public SugaredValues that the compiler defines and a few that were inside compiler.cpp
* type_parser.{h, cpp} - Turns TreeRef's defining types into TypePtr
* schema_matching.{h, cpp} - infrastructure for matching arguments against overloaded schema and emitting builtin operators with a particular schema.
Retains:
* compiler.{h, cpp} - now responsible simply for the `defineMethodsInModule` infrastructure.

Some utility functions like inlineCallTo have moved to ir.h.

The only thing that is not a move is some changes in module.h/cpp that remove multiple returns from `Method::emit_call_to`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15355

Reviewed By: suo, wanchaol

Differential Revision: D13507524

Pulled By: zdevito

fbshipit-source-id: 69ec936a9ff1a383c12a883616346b219c72e393
2018-12-18 19:43:35 -08:00
Ailing Zhang
6ab2e7442d Autograd using torchscript (#14604)
Summary:
This PR enables autodiff to use the forward/backward graphs compiled from Python code, instead of using symbolic gradients (modifying the original graph directly).

We put the map in a separate .h file for now to wait for the native_functions.yaml and derivatives.yaml merge. This should ideally go into native_functions.yaml eventually.

This PR should be enough to unblock us for now; we can start writing gradients for aten functions in Python (see the sketch below).
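
A hypothetical sketch of the forward/backward pattern this describes; the function names and the pairing convention are illustrative only, not the actual registration mechanism:

```python
import torch

@torch.jit.script
def tanh_forward(x):
    out = torch.tanh(x)
    return out, out                      # result plus context saved for backward

@torch.jit.script
def tanh_backward(grad_out, saved_out):
    # d/dx tanh(x) = 1 - tanh(x)^2, computed from the saved forward output
    return grad_out * (1 - saved_out * saved_out)
```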

Differential Revision: D13494635

Pulled By: ailzhang

fbshipit-source-id: f8d51a15243ac46afd09d930c573ccdfcd9fdaaf
2018-12-18 19:10:57 -08:00