Commit Graph

78 Commits

Sebastian Messmer
b527e48588 Use c10::List (#21177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21177

- Integrate c10::ListPtr into IValue and the c10 dispatcher.
- Streamline conversion to/from IValue. Before, we had IValue::to<> and kernel_functor.h had its own ivalue_to_arg_type and return_type_to_ivalue. They are now unified. Also, this means that nested types like Dicts of Lists of Optional of Dict of ... now work as expected.

Differential Revision: D15476433

fbshipit-source-id: bde9df80df20091aa8e6ae17ba7e90abd149b954
2019-06-12 13:58:24 -07:00
Michael Suo
b6d1a72f48 improve error message on inferred type (#21058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21058
ghimport-source-id: 7fad3a0567022dd417f4bd079a50a22e3c1dc020

Differential Revision: D15547218

Pulled By: suo

fbshipit-source-id: 5dbd567c79e6d01e9af4b8552777f7f0043df5b2
2019-05-30 10:50:34 -07:00
Zachary DeVito
3083c71cde First class functions in IR, inlined eagerly (#21052)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21052
ghimport-source-id: cc476b9cc301967dde5de6212ca144cdb252e84c

Differential Revision: D15533353

Pulled By: zdevito

fbshipit-source-id: 4d25461969cfcc9e5f641d585584cc100c7b34ae
2019-05-29 23:04:18 -07:00
Michael Suo
154029a6ff Revert D15534670: [jit] improve error message on inferred type
Differential Revision:
D15534670

Original commit changeset: 8bbfd6e9c1af

fbshipit-source-id: fe62cf954292e8ef1d00a3cc569206f73cedcd31
2019-05-29 14:56:08 -07:00
Michael Suo
5dacf6b048 improve error message on inferred type (#21058)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21058
ghimport-source-id: e7d6e082b0faf4f3d3e683f2c98863ee269439f0

Differential Revision: D15534670

Pulled By: suo

fbshipit-source-id: 8bbfd6e9c1afbc3006d7d55ed633e18618e05021
2019-05-29 14:47:00 -07:00
Sebastian Messmer
d5b7138a2c Dict is a reference type (#20669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20669

Before, Dict was a value type, i.e. copying it did a deep copy.
Unfortunately, this doesn't work well with storing and passing Dicts around in IValues because IValues are reference types.
This diff changes Dict to be a reference type.

Reviewed By: dzhulgakov

Differential Revision: D15404911

fbshipit-source-id: dc990d3eb7cae044b74dd0253f8b704dde6a6c86
2019-05-23 15:24:31 -07:00
Junjie Bai
8dedb04c26 Enable torch.jit.trace for mkldnn modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20800

Differential Revision: D15447892

fbshipit-source-id: 78e76523c5412c020a2bc22d6998ff7b36356720
2019-05-23 12:51:54 -07:00
Sebastian Messmer
ace506fb38 Dict (#20372)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20372

Implement a Dict type that allows us to abstract away from the concrete implementation used.
The API is similar to std::unordered_map, but behind the scenes we can switch to any map implementation we like: ska::flat_hash_map, Google dense map, or any future map implementation with better performance.
Switching such an implementation choice does not have to break backwards compatibility of kernel code using the Dict type.

Reviewed By: zdevito

Differential Revision: D15298234

fbshipit-source-id: b5ad368a9e9516030805cd8f5f1b02e3986933c0
2019-05-14 18:37:02 -07:00
Edward Yang
c397134d6b Revert D15156384: Dict
Differential Revision:
D15156384

Original commit changeset: b9313ec4dd9a

fbshipit-source-id: 3b44f49ec4eaba692cfb2cfe46e5f98102e337d9
2019-05-10 06:11:25 -07:00
Sebastian Messmer
c92129033a Dict (#19976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19976

Implement a Dict type that allows us to abstract away from the concrete implementation used.
The API is similar to std::unordered_map, but behind the scenes we can switch to any map implementation we like: ska::flat_hash_map, Google dense map, or any future map implementation with better performance.
Switching such an implementation choice does not have to break backwards compatibility of kernel code using the Dict type.

Reviewed By: li-roy

Differential Revision: D15156384

fbshipit-source-id: b9313ec4dd9acb3b6a0035345b6ba4f2a437d1e5
2019-05-09 10:54:07 -07:00
Michael Suo
26dd65eaf8 Namespace isolation for classes (#19903)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19903
ghimport-source-id: deadf59f469ad620d0ee10b089dfc9bb92171710

Differential Revision: D15118978

Pulled By: suo

fbshipit-source-id: f2b487fd65520d1b7f45cb74145634d334ef1614
2019-05-07 22:48:31 -07:00
Michael Suo
f0a007a26c Use QualifiedName for classes (#19575)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19575
ghimport-source-id: eaed1d9d66672aadfe893e62a3a811da8ac7d966

Differential Revision: D15035281

Reviewed By: shannonzhu

Pulled By: suo

fbshipit-source-id: 7bac5e5f9223af77268bc03e08b37450a6840dbe
2019-04-27 16:13:27 -07:00
Eric Faust
dcfb5620df Allow passing lists as trace inputs.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19580

Differential Revision: D15034978

fbshipit-source-id: d3bc32ccae1c12104f2bde43fd4700d220bb3ca9
2019-04-26 02:41:57 -07:00
Zachary DeVito
12f7c2dea3 pybind CompilationUnit and Function directly (#19720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19720
ghimport-source-id: c5829234dbbe8f7fe719ffce3fa92ce5198ffd21

Reviewed By: jamesr66a

Differential Revision: D15078536

Pulled By: zdevito

fbshipit-source-id: e617de31fc907a408fb50e18d9358dfd64de1f9e
2019-04-25 15:52:57 -07:00
Zachary DeVito
87a6974193 Make it possible for self.forward to return a ScriptMethod (#19217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19217
ghimport-source-id: 6fdd7f5ac041dae950b47ca316f30682ede0b083

Reviewed By: suo

Differential Revision: D14922120

Pulled By: zdevito

fbshipit-source-id: 5e82e5d7ee72df6f401146d2519c80ea336ff40e
2019-04-24 11:14:34 -07:00
Michael Suo
26f1c6d4d4 Revert D14689639: [pytorch] Allow passing lists as trace inputs.
Differential Revision:
D14689639

Original commit changeset: 6dcec8a64319

fbshipit-source-id: 03a5e7c80e7f2420e33b056b5844a78d7fd41141
2019-04-20 08:50:47 -07:00
Eric Faust
2d4875b8ed Allow passing lists as trace inputs.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18636

Differential Revision: D14689639

fbshipit-source-id: 6dcec8a64319ae3c4da9a93f574a13ce8ec223a5
2019-04-19 13:37:50 -07:00
Eric Faust
593bb145ce Allow passing dicts as trace inputs. (#18092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18092

Previously, tracing required all inputs to be either tensors
or tuples of tensors. Now, we allow users to pass dicts as well.
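
A minimal sketch of what this enables (the function body, keys, and tensor shapes here are illustrative, not taken from the PR):

```python
import torch

def fn(d):
    # the traced function indexes into the dict of tensors it receives
    return d['x'] + 2 * d['y']

example = {'x': torch.ones(2), 'y': torch.zeros(2)}
# a dict can now be passed directly as one of the example trace inputs
traced = torch.jit.trace(fn, (example,))
print(traced({'x': torch.ones(2), 'y': torch.ones(2)}))
```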

Differential Revision: D14491795

fbshipit-source-id: 7a2df218e5d00f898d01fa5b9669f9d674280be3
2019-04-18 23:52:00 -07:00
Nikolay Korovaiko
58d4414c33 Profiling pipeline part1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18772

Differential Revision: D14952781

Pulled By: Krovatkin

fbshipit-source-id: 1e99fc9053c377291167f0b04b0f0829b452dbc4
2019-04-16 21:21:08 -07:00
Zachary DeVito
c38c7b0ec5 Support Kwargs in C++ Function/Method calls (#19086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19086
ghimport-source-id: 7790a5cc6e32f6f72e92add0b9f76dfa49ad9859

Reviewed By: jamesr66a

Differential Revision: D14875729

Pulled By: zdevito

fbshipit-source-id: ad1e4542381d9c33722155459e794f1ba4660dbb
2019-04-13 08:42:11 -07:00
Zachary DeVito
13f03a42d2 Create Object that represents a Module (#18469)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18469
ghimport-source-id: 73cb8b58f43f10b1dcfca805fd5b25c4fa977632

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18469 Create Object that represents a Module**
* #18468 slots with explicit value/setValue make more sense in future patches
* #18467 Make Object hold its ClassType
* #18379 Enforce single parent for script submodules
* #18378 Unify namespace of script::Module
* #18314 Add ability to specialize class types to ArgumentSpec
* #18226 Add Slot type to abstract the raw pointers being used for slots.

This changes the underlying storage for script::Module to hold
a ivalue::Object which has slots for all the parameters and attributes.

NamedIValue and Slot are now merged together into one class Slot that stores
the tuple (ivalue::Object, offset) and can be used to read the name, type,
or value of the slot and also to set the value. This cleans up a bunch
of client uses.

This PR does not actually use the module object in any generated code.
A future PR will switch how code is generated to treat modules as
first class.

Differential Revision: D14613508

fbshipit-source-id: d853a7559f58d244de2ef54a781427fcd1060ed0
2019-04-05 18:58:52 -07:00
Zachary DeVito
091acb0978 Make Object hold its ClassType (#18467)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18467
ghimport-source-id: d51bdd64d2529d08c634c58df1a0870b54ad49fb

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18469 Create Object that represents a Module
* #18468 slots with explicit value/setValue make more sense in future patches
* **#18467 Make Object hold its ClassType**
* #18379 Enforce single parent for script submodules
* #18378 Unify namespace of script::Module
* #18314 Add ability to specialize class types to ArgumentSpec
* #18226 Add Slot type to abstract the raw pointers being used for slots.

Currently it holds a symbol whose unqualified name is the name of the
class. This will get confusing when there are multiple possible registries,
and it makes getting the class type from the object difficult.
The pointer to the class is only 4 more bytes so this patch just puts
it in the object.

Reviewed By: suo

Differential Revision: D14613510

fbshipit-source-id: b35175ba4be83d2522deaa6dad5070d6ec691fed
2019-04-05 13:40:59 -07:00
Elias Ellison
1eee2090d4 Const trace error v2 (#18535)
Summary:
Trying to reland https://github.com/pytorch/pytorch/pull/18298
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18535

Differential Revision: D14652391

Pulled By: eellison

fbshipit-source-id: 699e30045dd5f14f0a2b98378272045a292e1e2a
2019-03-27 14:40:56 -07:00
Michael Suo
d85451c07b Revert D14584266: [pytorch][PR] Better error message for tensor with grad as constant in tracing
Differential Revision:
D14584266

Original commit changeset: 4e7850dadc78

fbshipit-source-id: 3bb3b5006e469edff984c16e0ff8d5dac2862d88
2019-03-23 02:50:54 -07:00
Michael Suo
10751d5fb4 python interop for script classes (#18148)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18148
ghimport-source-id: 40a9d745dc9aeba53d098743323fcbd50ca65137

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18148 py interop**

Support for converting classes across the Python–TorchScript boundary. Like other TorchScript values, ScriptClasses are native Python values when used in Python and IValues when used in TorchScript.

Notably, there is a copy across this boundary, which will be surprising to users who expect standard Python reference semantics. I have some ideas for fixing that, but it's a more involved process.
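
A minimal sketch of the interop described above, using a hypothetical `Pair` class (not from the PR):

```python
import torch
from torch import Tensor

@torch.jit.script
class Pair(object):
    def __init__(self, first: Tensor, second: Tensor):
        self.first = first
        self.second = second

@torch.jit.script
def swap(p: Pair) -> Pair:
    return Pair(p.second, p.first)

# Pair behaves like an ordinary Python value here and is converted to an
# IValue (with a copy across the boundary) when passed into TorchScript.
p = Pair(torch.ones(2), torch.zeros(2))
q = swap(p)
print(q.first, q.second)
```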

Reviewed By: jamesr66a

Differential Revision: D14526259

fbshipit-source-id: 5916e3032488a42dc7da756c1826d7c040a21ebd
2019-03-22 16:30:04 -07:00
Elias Ellison
3badea6eb3 Better error message for tensor with grad as constant in tracing (#18298)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/17583

There's an unrelated issue right now causing a segfault when printing tensors, so that might have to be fixed first for this to land.
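
A minimal sketch of the situation that now gets the clearer error (names and shapes are illustrative):

```python
import torch

weight = torch.randn(3, 3, requires_grad=True)

def fn(x):
    # `weight` is captured from the enclosing scope, so the tracer would have
    # to bake it into the graph as a constant
    return x.matmul(weight)

try:
    torch.jit.trace(fn, (torch.randn(2, 3),))
except RuntimeError as e:
    # with this change, the message explains that a tensor requiring grad
    # cannot be inserted as a constant
    print(e)
```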
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18298

Differential Revision: D14584266

Pulled By: eellison

fbshipit-source-id: 4e7850dadc78ef1e98ad40b9d8adc0fef42acf48
2019-03-22 15:29:30 -07:00
peterjc123
2a6cbfaccf Enable 32 bit CPU build on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18176

Differential Revision: D14539884

Pulled By: ezyang

fbshipit-source-id: 0e4bd9c1ef1830cd9bcc40df36b87534f61def08
2019-03-20 09:26:50 -07:00
David Riazati
a2381fa346 Add module attributes (#17309)
Summary:
Similar to `nn.Parameter`s, this PR lets you store any `IValue` as an attribute on a `ScriptModule` (only from the Python front-end currently). To mark something as an attribute, wrap it in `jit.Attribute(value, type)` (e.g. `self.table = torch.jit.Attribute(table, Dict[str, torch.Tensor])`); a short usage sketch follows the list below.

Followup Work:
* (de)serializing for use in C++
* change `self.training` to be a `bool` attribute instead of a buffer
* mutable attributes
* string frontend support
* documentation
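
A minimal usage sketch of `jit.Attribute` on a `ScriptModule` (the module and table here are hypothetical):

```python
import torch
from typing import Dict

class Lookup(torch.jit.ScriptModule):
    def __init__(self, table):
        super(Lookup, self).__init__()
        # store an arbitrary IValue (here a Dict[str, Tensor]) as an attribute
        self.table = torch.jit.Attribute(table, Dict[str, torch.Tensor])

    @torch.jit.script_method
    def forward(self, key: str) -> torch.Tensor:
        return self.table[key]

m = Lookup({'a': torch.ones(2), 'b': torch.zeros(2)})
print(m('a'))
```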
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17309

Differential Revision: D14354316

Pulled By: driazati

fbshipit-source-id: 67e08ab5229366b67fbc837e67b58831a4fb3318
2019-03-07 10:44:10 -08:00
Wanchao Liang
ab95b5c6cc Rename prim::Undefined to prim::AutogradZero (#17611)
Summary:
supersedes #17245
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17611

Differential Revision: D14283581

Pulled By: wanchaol

fbshipit-source-id: 8022d02b8a021ea2fee9a18a2c8920eb123200c5
2019-03-01 15:13:18 -08:00
Michael Suo
e6a9062335 usertype -> class (#17528)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17528

as title. register_prim_ops is messy because someone ruined clang-format, but I figured it's okay to include here since this is such a mechanical change

Reviewed By: driazati

Differential Revision: D14236943

fbshipit-source-id: c2b22845837b7f830015510e48ec2ee5202fa407
2019-03-01 10:08:23 -08:00
Michael Suo
2cdbb140e6 user defined types (#17314)
Summary:
First pass at user defined types (a usage sketch follows the lists below). The following is contained in this PR:
- `UserType` type, which contains a reference to a module with all methods for the type, and a separate namespace for data attributes (map of name -> TypePtr).
- `UserTypeRegistry`, similar to the operator registry
- `UserObject` which is the runtime representation of the user type (just a map of names -> IValues)
- `UserTypeValue` SugaredValue, to manage getattr and setattr while generating IR, plus compiler.cpp changes to make that work.
- Frontend changes to get `torch.jit.script` to work as a class decorator
- `ClassDef` node in our AST.
- primitive ops for object creation, setattr, and getattr, plus alias analysis changes to make mutation safe.

Things that definitely need to get done:
- Import/export, python_print support
- String frontend doesn't understand class definitions yet
- Python interop (using a user-defined type outside TorchScript) is completely broken
- Static methods (without `self`) don't work

Things that are nice but not essential:
- Method definition order shouldn't matter (right now you can only reference a method that's already been defined)
- Class definitions can only contain defs; no other expressions are supported.

Things I definitely won't do initially:
- Polymorphism/inheritance
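
A rough usage sketch of what the front-end changes above allow (the class and function names are made up):

```python
import torch
from torch import Tensor

@torch.jit.script
class Accumulator(object):
    def __init__(self, start: Tensor):
        self.total = start                # data attribute on the user type

    def add(self, delta: Tensor):
        self.total = self.total + delta   # setattr inside a method

@torch.jit.script
def run(x: Tensor) -> Tensor:
    acc = Accumulator(x)   # object creation
    acc.add(x)             # method call
    return acc.total       # getattr

print(run(torch.ones(3)))
```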
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17314

Differential Revision: D14194065

Pulled By: suo

fbshipit-source-id: c5434afdb9b39f84b7c85a9fdc2891f8250b5025
2019-02-26 01:34:07 -08:00
Michael Liu
0fc03d155a Fix remaining -Wreturn-std-move violations in fbcode
Summary:
Some values are copied when they could have been moved.
Detected by the compiler flag -Wreturn-std-move

Reviewed By: igorsugak

Differential Revision: D14134303

fbshipit-source-id: 8fc3bb2017108b3d65097cb8447e33f5b6c743b4
2019-02-19 12:41:27 -08:00
David Riazati
b3d8c569d3 Remove templates for GenericDict
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17175

Differential Revision: D14113022

Pulled By: driazati

fbshipit-source-id: 5183e131cc8ccb58525875f76fa03133570a59ea
2019-02-15 21:35:19 -08:00
Wanchao Liang
ac00e85e36 Remove undefined tensor in jit script (#16379)
Summary:
This PR is a follow-up to #15460; it does the following:

* remove the undefined tensor semantic in jit script/tracing mode
* change ATen/JIT schema for at::index and other index related ops with `Tensor?[]` to align with what at::index is really doing and to adopt `optional[tensor]` in JIT
* change python_print to correctly print the exported script
* register both TensorList and ListOfOptionalTensor in JIT ATen ops to support both
* Backward compatibility for `torch.jit.annotate(Tensor, None)`

List of follow ups:

* remove the undefined tensor semantic in jit autograd, autodiff and grad_of
* remove prim::Undefined fully

For easy reviews, please turn on `hide white space changes` in diff settings.
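
A minimal sketch of the resulting Optional[Tensor] behavior (the function is illustrative):

```python
import torch
from torch import Tensor
from typing import Optional

@torch.jit.script
def add_bias(x: Tensor, bias: Optional[Tensor]) -> Tensor:
    # None is now a real None of type Optional[Tensor] in script,
    # not an undefined tensor
    if bias is not None:
        return x + bias
    return x

print(add_bias(torch.ones(2), None))
print(add_bias(torch.ones(2), torch.full((2,), 3.0)))
# for backward compatibility, torch.jit.annotate(Tensor, None) keeps working
```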
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16379

Differential Revision: D13855677

Pulled By: wanchaol

fbshipit-source-id: 0e21c14d7de250c62731227c81bfbfb7b7da20ab
2019-02-07 11:02:14 -08:00
Zachary DeVito
f34192db0f Rename DynamicType -> TensorType (#16787)
Summary:
```
import json
from subprocess import check_call
from pprint import pprint
renames = {
    'c10::TensorType': 'DimentionedTensorType',
    'c10::DynamicType': 'TensorType',
    'c10::TensorTypePtr': 'DimentionedTensorTypePtr',
    'c10::DynamicTypePtr': 'TensorTypePtr',
    'c10::TypeKind::DynamicType': 'TensorType',
    'c10::TypeKind::TensorType': 'DimentionedTensorType',
}

entries = json.loads(open('compile_commands.json', 'r').read())

build = None
sources = []

for e in entries:
    name = e['file']
    if not ('jit' in name or 'ATen/core' in name):
        continue
    build = e['directory']
    sources.append(name)

args = ['clang-rename', '-i', '-force', '-pl']
for name in sorted(renames.keys()):
    args += ['-qualified-name={}'.format(name), '-new-name={}'.format(renames[name])]

for source in sources:
    cmd = args + [source]
    pprint(args)
    check_call(cmd, cwd=build)
    check_call(['git', 'stash', 'push', '-m', 'rename'])
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16787

Differential Revision: D13974132

Pulled By: zdevito

fbshipit-source-id: 8368fd53e17cff83707bbe77f2d7aad74f8ce60e
2019-02-06 17:31:07 -08:00
David Riazati
3f8fd19a86 Add immutable dict support (#16208)
Summary:
This PR adds basic support (creation and indexing) for immutable dictionaries in Script. This includes Python/string frontend support and an `IValue::GenericDict` type backed by a `std::unordered_map`. Only `str`, `int`, and `float` are supported as keys; any type can be a value. The structure is pretty similar to list.
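
A minimal sketch of dict creation and indexing in script (the function is illustrative):

```python
import torch
from typing import Dict

@torch.jit.script
def lookup(key: str) -> int:
    # keys may be str, int, or float; values may be any type
    table = torch.jit.annotate(Dict[str, int], {'one': 1, 'two': 2})
    return table[key]

print(lookup('two'))  # 2
```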
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16208

Differential Revision: D13881686

Pulled By: driazati

fbshipit-source-id: 29ce9835b953c3456f57bcc2bbdf7fe0cbf941c0
2019-01-31 14:29:23 -08:00
Sebastian Messmer
ed4776820a Fix includes for ATen/core/stack.h (#16462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16462

This file was moved, now we change the includes to the new location and remove the proxy header.

Reviewed By: ezyang

Differential Revision: D13847279

fbshipit-source-id: 4617d52fdcfe785cb7b2154460a6686c437abd8b
2019-01-29 23:33:13 -08:00
Mikhail Zolotukhin
47bf30661f Directly include headers from ATen.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16287

Differential Revision: D13792949

Pulled By: ZolotukhinM

fbshipit-source-id: d627d8dc469df048063c70d0b5b8d33fede809a3
2019-01-24 11:22:27 -08:00
Xiang Gao
c5e1b469be Return namedtuples from torch.* function with multiple return arguments for C++ operators (#15429)
Summary:
Partially fixes: https://github.com/pytorch/pytorch/issues/394

Implementation detail:

Codegen is modified to generate codes that looks like below:
```C++
static PyObject * THPVariable_svd(PyObject* self_, PyObject* args, PyObject* kwargs)
{
  HANDLE_TH_ERRORS
  static PythonArgParser parser({
    "svd(Tensor input, bool some=True, bool compute_uv=True, *, TensorList[3] out=None)",
  }, /*traceable=*/true);

  ParsedArgs<6> parsed_args;
  auto r = parser.parse(args, kwargs, parsed_args);
  static PyStructSequence_Field fields0[] = {
    {"U", ""}, {"S", ""}, {"V", ""}, {nullptr}
  };
  static PyStructSequence_Desc desc0 = {
    "torch.return_types.svd_out", nullptr,
    fields0, 3
  };
  static PyTypeObject type0;
  static bool namedtuple_type_initialized0 = false;
  if (!namedtuple_type_initialized0) {
    PyStructSequence_InitType(&type0, &desc0);
    namedtuple_type_initialized0 = true;
  }
  static PyStructSequence_Field fields1[] = {
    {"U", ""}, {"S", ""}, {"V", ""}, {nullptr}
  };
  static PyStructSequence_Desc desc1 = {
    "torch.return_types.svd", nullptr,
    fields1, 3
  };
  static PyTypeObject type1;
  static bool namedtuple_type_initialized1 = false;
  if (!namedtuple_type_initialized1) {
    PyStructSequence_InitType(&type1, &desc1);
    namedtuple_type_initialized1 = true;
  }
  if (r.idx == 0) {
    if (r.isNone(3)) {
      return wrap(&type1, dispatch_svd(r.tensor(0), r.toBool(1), r.toBool(2)));
    } else {
      auto results = r.tensorlist_n<3>(3);
      return wrap(&type0, dispatch_svd(r.tensor(0), r.toBool(1), r.toBool(2), results[0], results[1], results[2]));
    }
  }
  Py_RETURN_NONE;
  END_HANDLE_TH_ERRORS
}
```
Types are defined as static members of `THPVariable_${op_name}` functions, and initialized the first time the function is called.

When parsing function prototypes in `native_functions.yaml`, the parser will set the specified name as `field_name` when it sees things like `-> (Tensor t1, ...)`. These field names will be the field names of the namedtuple. The class of namedtuples will be named `torch.return_types.${op_name}`.

In some Python 2 versions, `PyStructSequence` is not a subtype of tuple, so we have to create some functions to check whether an object is a tuple or namedtuple for compatibility.

Operators in `native_functions.yaml` are changed such that only `max` and `svd` are generated as namedtuples. Tests are added for these two operators to check that the return value works as expected. Docs for these two ops are also updated to explicitly mention that the return value is a namedtuple. More ops will be added in later PRs.

There is an issue with the Windows build where the linker is unable to resolve `PyStructSequence_UnnamedField`; a workaround is added to deal with this case.
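
As a usage sketch, the two converted operators behave like this (input values are illustrative):

```python
import torch

x = torch.randn(4, 4)

# torch.svd now returns a namedtuple of type torch.return_types.svd
result = torch.svd(x)
print(result.U.shape, result.S.shape, result.V.shape)

# it still unpacks like a plain tuple
U, S, V = torch.svd(x)

# torch.max with a dim argument is the other op converted in this PR
m = torch.max(x, dim=1)
print(m.values, m.indices)
```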
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15429

Differential Revision: D13709678

Pulled By: ezyang

fbshipit-source-id: 23a511c9436977098afc49374e9a748b6e30bccf
2019-01-22 11:12:18 -08:00
Michael Suo
f636dc9276 clang format world (#15524)
Summary:
The PR clang-formats everything in `torch/csrc/jit/` and adds it to the pre-commit hook.

Here is a list of non-mechanical changes:
- I went over each file and fixed things up wherever I could tell that clang-format was clobbering comment formatting.
- Made the macros in register_prim_ops a little more clang-format friendly by omitting trailing commas
- Refactored autodiff.cpp to use a helper class with explicit state rather than a bunch of capturing lambdas
- Small improvements to the precommit hook clang-format
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15524

Differential Revision: D13547989

Pulled By: suo

fbshipit-source-id: 3ff1541bb06433ccfe6de6e33f29227a2b5bb493
2018-12-26 06:55:01 -08:00
Zachary DeVito
056cfaf3ff Method returns a single argument (#15289)
Summary:
This PR changes Method (just Method, not all graphs) to always have a single return argument.

This is part 1 in a set of changes that will enable us to have better handling of early return statements.
The simplification that this change provides greatly reduces the work for the next step.

This change makes it so that Method and Python handle multiple returns in the same way:
* 0 - None
* 1 - <single value>
* many - Tuple[...]

The result is that a lot of special-case handling in compiler.cpp and its
bindings can be removed. It also fixes several bugs in return handling,
including one where return values were not always checked against their
attributed values.
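
A small sketch of the resulting calling convention, shown with standalone script functions (bodies are illustrative; the PR itself changes Method):

```python
import torch
from torch import Tensor
from typing import Tuple

@torch.jit.script
def zero_returns(x: Tensor) -> None:
    pass                      # 0 values -> None

@torch.jit.script
def one_return(x: Tensor) -> Tensor:
    return x + 1              # 1 value  -> the value itself

@torch.jit.script
def two_returns(x: Tensor) -> Tuple[Tensor, Tensor]:
    return x, x + 1           # many values -> Tuple[...]

a, b = two_returns(torch.zeros(2))
```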

Notes:
* inferTypeFrom is renamed to be more accurate and discourage use.
* This has uncovered some bugs in other components, which are noted in
  the diff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15289

Differential Revision: D13481649

Pulled By: zdevito

fbshipit-source-id: 0e2242a40bb28cca2d0e8be48bede96195e4858c
2018-12-18 10:44:09 -08:00
Michael Suo
78bf1a9065 Revert D13407930: [pytorch][PR] Support torch.tensor in script
Differential Revision:
D13407930

Original commit changeset: d17f1195a221

fbshipit-source-id: f4458872c48ec4a2c9983b21ed90bcdc0ae665b7
2018-12-13 22:13:07 -08:00
Elias Ellison
aecab53778 Support torch.tensor in script (#14913)
Summary:
Adding support for torch.tensor in script.

The input list is typed as t[], because it can be arbitrarily nested. I added a compile-time check that the inner type of the list is a bool, float, or int.

Also adds specialization for Boolean Lists, which already existed at the ivalue level but had not been added to the compiler yet
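
A minimal sketch (the function is illustrative):

```python
import torch
from torch import Tensor
from typing import List

@torch.jit.script
def make(xs: List[float]) -> Tensor:
    # torch.tensor now works inside script; the list's inner element type must
    # be a bool, float, or int (checked at compile time), and nesting is allowed
    base = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    return base + torch.tensor(xs)

print(make([0.5, 1.5]))
```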
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14913

Differential Revision: D13407930

Pulled By: eellison

fbshipit-source-id: d17f1195a22149d5b0d08d76c89a7fab8444f7c5
2018-12-13 17:38:38 -08:00
Peter Goldsborough
1e9c384afb Enable performance-unnecessary-value-param in .clang-tidy (#15026)
Summary:
This PR fixes around 250 places in the codebase where we were making unnecessary copies of objects (some large, some small).

ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15026

Differential Revision: D13458784

Pulled By: goldsborough

fbshipit-source-id: be5148b2ce09493588d70952e6f6d6ff5ec5199b
2018-12-13 16:15:35 -08:00
David Riazati
5837320b70 Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.

* Adds `my_script_module._get_method('forward').schema()` method to get the function schema from a `ScriptModule` (see the sketch after this list)
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective it looks like they work.
* Allow printing of `IValue::Device` by getting its string representation
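
A short sketch of the schema inspection helper listed above, mirroring the expression this PR adds (module contents are illustrative):

```python
import torch
from torch import Tensor

class M(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x: Tensor) -> Tensor:
        return x + 1

m = M()
# print the compiled function schema of a ScriptModule method
print(m._get_method('forward').schema())
```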
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912

Differential Revision: D13385928

Pulled By: driazati

fbshipit-source-id: e391691b2f87dba6e13be05d4aa3ed2f004e31da
2018-12-12 12:30:13 -08:00
Edward Yang
517c7c9861 Canonicalize all includes in PyTorch. (#14849)
Summary:
Anywhere we used #include "foo.h", we now say #include <foo.h>
Paths are adjusted to be rooted out of aten/src, torch/lib, or
the root level directory.

I modified CMakeLists.txt by hand to remove TH and THC from
the include paths.

I used the following script to do the canonicalization:

```
  import subprocess
  import re
  import os.path

  files = subprocess.check_output(['git', 'ls-files']).decode('utf-8').rstrip().split('\n')
  for fn in files:
      if not any(fn.endswith(suff) for suff in ['.cu', '.cpp', '.in', '.h', '.hpp', '.cu', '.cuh', '.cc']):
          continue
      if not any(fn.startswith(pref) for pref in ["aten/", "torch/"]):
          continue
      with open(fn, 'r') as f:
          c = f.read()
      def fmt(p):
          return "#include <{}>".format(p)
      def repl(m):
          p = m.group(1)
          if p in ["dlfcn.h", "unistd.h", "nvrtc.h", "cuda.h", "cuda_runtime.h", "cstdint", "cudnn.h", "Python.h", "cusparse.h", "cuda_runtime_api.h", "cuda_fp16.h", "cublas_v2.h", "stdint.h", "curand_kernel.h"]:
              return fmt(p)
          if any(p.startswith(pref) for pref in ["torch/csrc", "c10/", "ATen/", "caffe2/", "TH/", "THC/", "Eigen/", "gtest/", "zdl/", "gloo/", "onnx/", "miopen/"]):
              return fmt(p)
          for root in ["aten/src", "torch/lib", ""]:
              for bad_root in [os.path.dirname(fn), "aten/src/TH", "aten/src/THC", "torch/csrc"]:
                  new_p = os.path.relpath(os.path.join(bad_root, p), root)
                  if not new_p.startswith("../") and (os.path.exists(os.path.join(root, new_p)) or os.path.exists(os.path.join(root, new_p + ".in"))):
                      return fmt(new_p)
          print("ERROR: ", fn, p)
          return m.group(0)
      new_c = re.sub(r'#include "([^"]+)"', repl, c)
      if new_c != c:
          print(fn)
          with open(fn, 'w') as f:
              f.write(new_c)
```

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14849

Reviewed By: dzhulgakov

Differential Revision: D13363445

Pulled By: ezyang

fbshipit-source-id: 52361f878a672785f9306c9e9ab2513128092b68
2018-12-08 19:38:30 -08:00
Zachary DeVito
78d594f46c Implement Device as a type in the script (#14666)
Summary:
[ note:  stacked on expect files changes, will unstack once they land ]
This adds DeviceObjType (we cannot use DeviceType, as it is already an enum)
to the type hierarchy and an isDevice/toDevice pair to IValue.
Previous hacks which used an int[] to represent Device are removed
and at::Device is used instead.

Note: the behavior of .to is only a subset of the Python behavior; we need to
fix the aten op so that it accepts Optional[Device] and Optional[ScalarType].
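
A minimal sketch of using the new Device type in script (the function is illustrative):

```python
import torch
from torch import Tensor

@torch.jit.script
def move(x: Tensor, d: torch.device) -> Tensor:
    # torch.device is now a first-class type (DeviceObjType) in script
    if x.device != d:
        return x.to(d)
    return x

print(move(torch.ones(2), torch.device('cpu')))
```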
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14666

Reviewed By: suo

Differential Revision: D13290405

Pulled By: zdevito

fbshipit-source-id: 68b4381b292f5418a6a46aaa077f1c902750b134
2018-12-03 16:54:40 -08:00
Wanchao Liang
79ceecec8e Optional undefined tensor support (#13650)
Summary:
This PR is a part of task to unblock standard library export.
* We treat None differently from Tensor and other types: when passing None as a Tensor, it becomes an undefined tensor rather than the None IValue.
* Refine the type system so that we have a correct tensor type hierarchy (Dynamic/Tensor/CompleteTensor); Dynamic should be at the top of the inheritance hierarchy.
* It also tries to export bilinear as an example of undefined-tensor (None) input.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13650

Differential Revision: D12967026

Pulled By: wanchaol

fbshipit-source-id: 6aedccc7ce2a12fadd13d9e620c03e1260103a5a
2018-11-09 11:29:57 -08:00
Michael Suo
57e162da56 Switch mutable lists to new mutable schema (#13406)
Summary:
Goodbye, World! This PR removes the world tokens and associated pass and switches lists over to the new mutability/aliasing annotations.

Should resolve #12780 since we are disabling optimization pending alias analysis.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13406

Differential Revision: D12886463

Pulled By: suo

fbshipit-source-id: e64e55905aebdcad273b39862df3209f823f5408
2018-11-01 19:41:04 -07:00
Elias Ellison
70db53661b expose fixed length list argument (#13142)
Summary:
Arguments have an optional fixed-length list field which allows either a list or a single element that will be broadcast to a fixed length.

This PR exposes that as a denotable argument, mostly to cover the many instances in which this is used in the standard library. It appears in the standard library with ints & floats. Since this is not really a pattern we want to promote moving forward, I did not expose this for booleans or tensors.

We could consider making the optional static length part of the list type, instead of the argument, which would make some of this code much nicer.
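
A short sketch of the broadcast behavior this exposes; the pooling op here is just one example of a schema with a fixed-length int list argument:

```python
import torch
import torch.nn.functional as F
from torch import Tensor

@torch.jit.script
def pool(x: Tensor) -> Tensor:
    # kernel_size is declared as a fixed-length int list (int[2]);
    # a single int broadcasts to (3, 3), and an explicit list also works
    a = F.avg_pool2d(x, 3)
    b = F.avg_pool2d(x, [3, 3])
    return a + b

print(pool(torch.randn(1, 1, 6, 6)).shape)
```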
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13142

Differential Revision: D12876047

Pulled By: eellison

fbshipit-source-id: e7359d2a878b4627fc2b9ebc090f9849ee524693
2018-11-01 10:34:52 -07:00