Commit Graph

359 Commits

Author SHA1 Message Date
Alexander Golynski
989de7a0f8 Implementing negative striding for python lists
ghstack-source-id: c2736c648c
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33019
2020-02-11 09:36:59 -08:00
Elias Ellison
040bc1d0e1 [JIT] make is_scripting a condvalue (#32871)
Summary:
Add `torch.jit.is_scripting` to the list of CondValues, i.e. values that, when used as the condition of an if statement, cause us to compile only one side of the if. I'm not sure we actually want this PR.

Pros:
- Makes it easier to add features that are not yet supported in TorchScript (like has_torch_function)
- The current idiom of writing `torch.jit.is_scripting` and factoring out the block to a function annotated with `torch.jit.ignore` is functionally equivalent and much more cumbersome

Cons:
- Makes it easier to add features that are not yet supported in TorchScript
- Perhaps it is confusing to a reader what is being compiled. We could potentially give it an all-caps name or otherwise rename it to make it stand out visually.
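A minimal eager-Python sketch of the pattern this enables (the `is_scripting()` stub below is only a stand-in for `torch.jit.is_scripting`; under `torch.jit.script` the untaken side of the if would simply not be compiled):

```python
def is_scripting() -> bool:
    # Stand-in for torch.jit.is_scripting: False in eager mode; under
    # torch.jit.script the condition would be statically known and the
    # non-taken branch would not be compiled at all.
    return False

def forward(x: int) -> int:
    if not is_scripting():
        # Code using features TorchScript can't compile could live here.
        x = x + 100
    return x

assert forward(1) == 101  # eager mode takes the branch
```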
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32871

Differential Revision: D19670383

Pulled By: eellison

fbshipit-source-id: 5257b0bd23c66f199d59a7f2c911e948301e5588
2020-01-31 18:23:42 -08:00
Elias Ellison
10bd21d550 [JIT] fix nested select assign (#32877)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/31902

```
self.sub.a = 1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32877

Differential Revision: D19670322

Pulled By: eellison

fbshipit-source-id: 6d8f350b4d1169be1d2a56050fccd7c246ad9212
2020-01-31 16:58:26 -08:00
Michael Suo
63170431f9 [jit] fix segfault on missing getstate (#32642)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32642

Previously, if we defined `__setstate__` but not `__getstate__`, we
would segfault. This PR turns that into a comprehensible error message
(and improves another error message as well).

Fixes https://github.com/pytorch/pytorch/issues/25886
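For reference, the pairing the new error message steers you toward, as a plain-Python sketch with illustrative names (in TorchScript both methods would be defined on the scripted class):

```python
class Counter:
    def __init__(self) -> None:
        self.n = 3

    def __getstate__(self):
        # Serialize: return whatever state __setstate__ can restore from.
        return self.n

    def __setstate__(self, state) -> None:
        # Deserialize: rebuild the object from the state above.
        self.n = state

c = Counter()
restored = Counter.__new__(Counter)  # bypass __init__, as unpickling does
restored.__setstate__(c.__getstate__())
assert restored.n == 3
```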

Test Plan: Imported from OSS

Differential Revision: D19596463

Pulled By: suo

fbshipit-source-id: dbe76bc36bc747d65fb0223184c009e0e9ba072c
2020-01-28 01:25:37 -08:00
Jerry Zhang
1f34801460 More robust mangling (#31978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31978

Currently we keep a `mangleIndex_` that's internal to the compilation unit and
just increment the index when we find the original name is mangled; this doesn't
guarantee the new name is not already defined.
This PR fixes the problem by querying whether the new name is defined or not.
fixes: https://github.com/pytorch/pytorch/issues/31268

Test Plan:
fixes the issue

Imported from OSS

Differential Revision: D19350535

fbshipit-source-id: fe3262b2838d4208ab72e2cd4a5970b3a792ae86
2020-01-13 11:11:50 -08:00
davidriazati
06dbef663d Add support for del (#31273)
Summary:
Adds the `del` keyword to the parser and corresponding `aten::Delete` op for lists and dicts

Fixes #20615
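The eager-Python semantics being matched (a minimal sketch; scripting the same statements should behave identically after this change):

```python
xs = [1, 2, 3]
del xs[1]            # delete by index from a list
assert xs == [1, 3]

d = {"a": 1, "b": 2}
del d["b"]           # delete by key from a dict
assert d == {"a": 1}
```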
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31273

Pulled By: driazati

Differential Revision: D19181473

fbshipit-source-id: c42a2d43ec361a98e0c425232981edc9c39388c4
2019-12-19 21:48:11 -08:00
Zachary DeVito
457286a383 fix missing type check in dictionary literal
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31375

Test Plan: Imported from OSS

Differential Revision: D19145440

Pulled By: zdevito

fbshipit-source-id: 69909089586149ef766b4858d3420864a81b2493
2019-12-19 16:22:36 -08:00
David Riazati
1e116a5089 Revert D19054937: Add support for del
Test Plan: revert-hammer

Differential Revision:
D19054937

Original commit changeset: c535ea16a9e6

fbshipit-source-id: e57d31811441947b7ee38c8c2b16eecde5005792
2019-12-18 22:39:41 -08:00
davidriazati
e1509cb468 Add support for del (#31273)
Summary:
Adds the `del` keyword to the parser and corresponding `aten::Delete` op for lists and dicts

Fixes #20615
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31273

Pulled By: driazati

Differential Revision: D19054937

fbshipit-source-id: c535ea16a9e62d176f8ad45947670fc3535af77c
2019-12-18 18:19:22 -08:00
davidriazati
679b20b1e4 Unify list elements for all list types (#30777)
Summary:
Previously list elements were only unified for tensor lists.
This improves error messages and expands the unification logic
to include all types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30777

Pulled By: driazati

Differential Revision: D18837726

fbshipit-source-id: c4d275562a8429700987569426d694faa8f6002e
2019-12-11 17:00:52 -08:00
Elias Ellison
3eefc06feb add constant prop for immutable types (#30544)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30544

Run Constant Propagation upon compilation only on ops with non-aliasing inputs and outputs. This speeds up the first run of `torchvision.models.resnet18` by over 50% and speeds up compilation by about 25% (although the effects didn't seem additive with https://github.com/pytorch/pytorch/pull/30503, so I'm going to land this PR first and then see if caching still has a sizable impact).

Running constant prop only with non-aliasing types does a lot of graph cleanup by removing constant ifs and a bunch of other smaller ops. It also avoids all the jitter problems we had when we tried running full constant prop previously. Because it is idempotent it doesn't jitter, and it doesn't alter graphs constructed from tracing because tracing doesn't emit any ops that only involve non-aliasing inputs.

Full constant prop isn't idempotent because which ops are run depends on the state of mutation in the alias db, which will often change upon successive iterations of constant propagation, and because it affects graphs constructed from tracing.

Edit: if we were okay with running constant propagation on graphs constructed from tracing (potentially making them hard to debug), an alternative would be to run constant propagation until the graph reaches a fixed point.

Test Plan: Imported from OSS

Differential Revision: D18833607

Pulled By: eellison

fbshipit-source-id: 92a0adb4882d67ed5a0db5c279f5e122aeeba54a
2019-12-09 14:20:12 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Elias Ellison
91e1f07967 Check for unrolled loop in break & continue (#29474)
Summary:
For the same reason we don't allow iteration over heterogeneous types (modulelists/tuples) with types that don't have a static length, we also can't break/continue within them - we need to statically know all types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29474

Differential Revision: D18406097

Pulled By: eellison

fbshipit-source-id: 70ed3fc4947b6237cdd6703135a988a5c13ce786
2019-11-08 15:51:13 -08:00
Michael Suo
52456b2eba add hasattr() (#29332)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29332

Even though we're statically typed, this can be useful, e.g. as
shorthand when iterating through a module list.
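The eager-Python behavior being mirrored (illustrative names; in TorchScript the check is evaluated statically at compile time):

```python
class Block:
    def __init__(self) -> None:
        self.weight = 1.0  # hypothetical attribute

b = Block()
assert hasattr(b, "weight")      # attribute exists
assert not hasattr(b, "bias")    # attribute does not exist
```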

Test Plan: Imported from OSS

Differential Revision: D18393097

Pulled By: suo

fbshipit-source-id: aa42e955f88d1b8a876d0727055eb596453b9839
2019-11-08 13:58:14 -08:00
James Reed
309b28ee3a Trace module calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29261

Test Plan: Imported from OSS

Differential Revision: D18343363

Pulled By: jamesr66a

fbshipit-source-id: 0c6394205e2c0ea8708028d20df83fe17b466ff4
2019-11-06 15:05:49 -08:00
Elias Ellison
60cb56d128 Refactor iterables (#29138)
Summary:
Refactor list comprehensions so they go through the same path as other for loops, making list comprehensions work with ModuleLists and also fixing https://github.com/pytorch/pytorch/issues/27255

Replacing https://github.com/pytorch/pytorch/pull/28296 which was gh-poisoned and previously accepted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29138

Differential Revision: D18303432

Pulled By: eellison

fbshipit-source-id: 8e4c0ba6f800142d5c4d921d56917cfae0c74655
2019-11-04 14:39:22 -08:00
Elias Ellison
fdeef45852 Add Support For Module Containers as Iterables (#28255)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28255

Add support for treating Sequentials, ModuleLists, and ModuleDicts as iterables.

As before, when emitting a for loop over a module container we unroll the for loop over all elements. We require that any SugaredValue in an iterable with a module container have a statically determinable length.

Otherwise, if you zipped over a list of varying length and an nn.Sequential that alternated between returning a Tensor and a Dictionary, the output type would change based on the length of the list.

Fix for #17179
And https://github.com/pytorch/pytorch/issues/27401
and https://github.com/pytorch/pytorch/issues/27506

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D18278124

Pulled By: eellison

fbshipit-source-id: aca336a5b8da89c756b1f0884883649510cbde3c
2019-11-04 09:19:40 -08:00
Wanchao Liang
e95dc9814e introduce module interface declaration (#28408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28408

This enables an interface to be defined on an nn.Module; InterfaceType
now has an is_module_ field to distinguish a module interface
from a normal interface (similar to how ClassType distinguishes
modules from TorchScript classes).

A module interface can be assigned any ScriptModule that has
compatible signatures on its schemas. A normal object that is not a
ScriptModule cannot be assigned to a module interface and
will error out when the user explicitly does so. Assigning a ScriptModule
to a class interface makes it available only in the attribute list, not
the module list. More details on the subtyping relationship are documented
in jit_type.h.

If you declare a module interface inside an nn.Module that is being
compiled to a ScriptModule, our internal compilation behaves as
follows:

1. ConcreteModuleType records it as a module attribute and adds it to
   the attributes_ list.
2. The JitType created from the ConcreteModuleType records it as
   an attribute and pre-generates the slot. The slot is still marked as
   EntityType::MODULE to make sure JitType records it as a Module
   slot.
3. cpp_module also registers it as a Module, since the Slot type is the
   source of truth.

Since JitType records it as an attribute and stores its type, it
behaves as class interface attributes behave now. This means
the submodule assigned to this module interface is not inlined
into the graph the way a normal `Module::attr` would be; instead it
generates an interface callMethod, allowing us to later swap it with
another ScriptModule that implicitly implements this module interface.
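A plain-Python sketch of the structural matching described above (all names are illustrative; in TorchScript the interface would be declared with `@torch.jit.interface` on an nn.Module and `m` annotated with the interface type):

```python
# Neither class declares that it implements the interface; each matches
# purely because its method signature is compatible.
class Impl:
    def run(self, x: int) -> int:
        return x + 1

class OtherImpl:
    def run(self, x: int) -> int:
        return x * 10

def use(m, x: int) -> int:
    # In TorchScript this call would be emitted as an interface callMethod,
    # so the submodule behind `m` can be swapped out later.
    return m.run(x)

assert use(Impl(), 2) == 3
assert use(OtherImpl(), 2) == 20
```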

Test Plan: Imported from OSS

Differential Revision: D18284311

fbshipit-source-id: e0b8f6e8c34b2087fab337a969e5ea3fb37ec209
2019-11-02 16:39:00 -07:00
David Reiss
da6b8a905a Use c10::to_string in more places (#28605)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28605

This was added because std::to_string isn't available in libstdc++
on Android. Use it in more places to get the PyTorch Android
build working with libstdc++.

Test Plan: Internal android build.

Reviewed By: jerryzh168

Differential Revision: D18099520

fbshipit-source-id: 17a2b617c2d21deadd0fdac1db849823637981fc
2019-10-24 15:52:05 -07:00
davidriazati
8cdc262063 Add support for @staticmethod (#27163)
Summary:
Resolve static methods as functions

Fixes #26792
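The semantics this resolves, as a minimal eager-Python sketch (illustrative names):

```python
class Scaler:
    @staticmethod
    def double(x: int) -> int:
        return x * 2

assert Scaler.double(3) == 6    # resolved as a plain function on the class
assert Scaler().double(3) == 6  # also callable through an instance
```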
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27163

Pulled By: driazati

Differential Revision: D17695094

fbshipit-source-id: 4671cae1a92526a35c83b8d9c12a50aa5442412b
2019-10-16 10:36:38 -07:00
Zachary DeVito
cf43aa3e16 add type refinements for isinstance checks (#27772)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27772

This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
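A minimal sketch of the refinement this generalizes (plain Python, illustrative names):

```python
from typing import Optional

def bump(x: Optional[int]) -> int:
    if isinstance(x, int):
        # x is refined from Optional[int] to int inside this branch,
        # so no explicit unwrap is needed in user code.
        return x + 1
    return 0

assert bump(3) == 4
assert bump(None) == 0
```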

Test Plan: Imported from OSS

Differential Revision: D17885424

Pulled By: zdevito

fbshipit-source-id: ce81077d6fbeaf2a802a2e0b17349aca85670466
2019-10-15 16:00:42 -07:00
Zachary DeVito
30d9316f35 refactor tryMatchSchema (#26499) (#27773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27773

We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:

* tryMatchSchemas was added since the overload logic got copy/pasted
into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
  outside of tryMatchSchemas
* emitBuiltinFunction no longer needs a requires argument (it was always true)

* Argument order for all the schema matching stuff now puts the 'self'
builtin override last. This is only rarely used and was inconsistent with
matchSchema

Test Plan: Imported from OSS

Differential Revision: D17885425

Pulled By: zdevito

fbshipit-source-id: 064bc9fa4bd57b2e5366fff9f3c6ab9b9945e08b
2019-10-14 20:45:25 -07:00
Edward Yang
7135f7c263 Revert D17412856: [JIT] add type refinements for isinstance checks
Test Plan: revert-hammer

Differential Revision:
D17412856

Original commit changeset: ded47eb086c4

fbshipit-source-id: 854a6c8f322435c3f3416dbedcb642cb2d2902b1
2019-10-11 13:02:30 -07:00
Edward Yang
07fc7d05ce Revert D17488297: [jit] refactor tryMatchSchema
Test Plan: revert-hammer

Differential Revision:
D17488297

Original commit changeset: a32d838ce355

fbshipit-source-id: 2bd319d9554d81d09231bf1e34c8417bff468940
2019-10-10 17:39:48 -07:00
Zachary DeVito
51656eefb0 refactor tryMatchSchema (#26499)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26499

We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:

* tryMatchSchemas was added since the overload logic got copy/pasted
into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
  outside of tryMatchSchemas
* emitBuiltinFunction no longer needs a requires argument (it was always true)

* Argument order for all the schema matching stuff now puts the 'self'
builtin override last. This is only rarely used and was inconsistent with
matchSchema

Test Plan: Imported from OSS

Differential Revision: D17488297

Pulled By: zdevito

fbshipit-source-id: a32d838ce35544972fa8767557acc22149081b55
2019-10-09 22:11:24 -07:00
Zachary DeVito
d44b9cd4bb add type refinements for isinstance checks (#26271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26271

This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.

Test Plan: Imported from OSS

Differential Revision: D17412856

Pulled By: zdevito

fbshipit-source-id: ded47eb086c4610998ad92bb1174225af00220f7
2019-10-09 22:11:19 -07:00
James Reed
84e2dc692a Fix broken name mangling
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27511

Test Plan: Imported from OSS

Differential Revision: D17801185

Pulled By: jamesr66a

fbshipit-source-id: 3eaa9542a445c9401f3f96e11138ec09b0d8350a
2019-10-07 20:05:32 -07:00
davidriazati
a6bb8b52d4 Reduce error context from 10 -> 3 (#26765)
Summary:
10 lines of error context (on both sides) is overkill, especially now
that we have line numbers. With a compilation stack of a couple
functions, it becomes a pain to scroll to the top of the stack to see
the real error every time.

This also fixes class names in the compilation stack to the format
`ClassName.method_name` instead of the full qualified name.
Old output
```
clip_boxes_to_image(Tensor boxes, (int, int) size) -> (Tensor):
Expected a value of type 'Tuple[int, int]' for argument 'size' but instead found type 'Tuple[int, int, int]'.
:
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:365:20
        top_n_idx = self._get_top_n_idx(objectness, num_anchors_per_level)
        batch_idx = torch.arange(num_images, device=device)[:, None]
        objectness = objectness[batch_idx, top_n_idx]
        levels = levels[batch_idx, top_n_idx]
        proposals = proposals[batch_idx, top_n_idx]

        final_boxes = []
        final_scores = []
        for boxes, scores, lvl, img_shape in zip(proposals, objectness, levels, image_shapes):
            boxes = box_ops.clip_boxes_to_image(boxes, img_shape)
                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            keep = box_ops.remove_small_boxes(boxes, self.min_size)
            boxes, scores, lvl = boxes[keep], scores[keep], lvl[keep]
            # non-maximum suppression, independently done per level
            keep = box_ops.batched_nms(boxes, scores, lvl, self.nms_thresh)
            # keep only topk scoring predictions
            keep = keep[:self.post_nms_top_n]
            boxes, scores = boxes[keep], scores[keep]
            final_boxes.append(boxes)
            final_scores.append(scores)
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:446:8
        num_images = len(anchors)
        num_anchors_per_level = [o[0].numel() for o in objectness]
        objectness, pred_bbox_deltas = \
            concat_box_prediction_layers(objectness, pred_bbox_deltas)
        # apply pred_bbox_deltas to anchors to obtain the decoded proposals
        # note that we detach the deltas because Faster R-CNN do not backprop through
        # the proposals
        proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
        proposals = proposals.view(num_images, -1, 4)
        boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

        losses = {}
        if self.training:
            assert targets is not None
            labels, matched_gt_boxes = self.assign_targets_to_anchors(anchors, targets)
            regression_targets = self.box_coder.encode(matched_gt_boxes, anchors)
            loss_objectness, loss_rpn_box_reg = self.compute_loss(
                objectness, pred_bbox_deltas, labels, regression_targets)
            losses = {
'RegionProposalNetwork.forward' is being compiled since it was called from 'MaskRCNN.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/generalized_rcnn.py:53:8
        """
        if self.training and targets is None:
            raise ValueError("In training mode, targets should be passed")
        original_image_sizes = [(img.shape[-2], img.shape[-3])  for img in images]

        images, targets = self.transform(images, targets)
        features = self.backbone(images.tensors)
        if isinstance(features, torch.Tensor):
            features = OrderedDict([(0, features)])
        proposals, proposal_losses = self.rpn(images, features, targets)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
        detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)

        losses = {}
        losses.update(detector_losses)
        losses.update(proposal_losses)

        # TODO: multiple return types??
        # if self.training:
```

New output

```
RuntimeError:

clip_boxes_to_image(Tensor boxes, (int, int) size) -> (Tensor):
Expected a value of type 'Tuple[int, int]' for argument 'size' but instead found type 'Tuple[int, int, int]'.
:
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:365:20
        final_scores = []
        for boxes, scores, lvl, img_shape in zip(proposals, objectness, levels, image_shapes):
            boxes = box_ops.clip_boxes_to_image(boxes, img_shape)
                    ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            keep = box_ops.remove_small_boxes(boxes, self.min_size)
            boxes, scores, lvl = boxes[keep], scores[keep], lvl[keep]
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:446:8
        proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
        proposals = proposals.view(num_images, -1, 4)
        boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

        losses = {}
'RegionProposalNetwork.forward' is being compiled since it was called from 'MaskRCNN.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/generalized_rcnn.py:53:8
        if isinstance(features, torch.Tensor):
            features = OrderedDict([(0, features)])
        proposals, proposal_losses = self.rpn(images, features, targets)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
        detections = self.transform.postprocess
```
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26765

Pulled By: driazati

Differential Revision: D17560963

fbshipit-source-id: e463548744b505ca17f0158079b80e08fda47d49
2019-10-04 11:24:52 -07:00
Zachary DeVito
2ea1d3d01f refactor extra sugared values (#26270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26270

We've accumulated a lot of sugared values whose only purpose is
to be instance-checked against in emitApplyExpr. I need to add
another one to insert an unchecked_cast, and do not want to continue
the pattern. This creates an abstraction for this concept (SpecialFormValue),
and removes all the unneeded sugared values. There is no functionality
change here, just a bunch of code movement in compiler.cpp.

Test Plan: Imported from OSS

Differential Revision: D17412854

Pulled By: zdevito

fbshipit-source-id: 15877c91decaea5a00d1fe737ed2d0f0f8a79a28
2019-10-03 21:25:05 -07:00
Nikolay Korovaiko
882b2abb80 Fix segfault while printing value type for an error msg in emitListComprehension
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27261

Differential Revision: D17740159

Pulled By: Krovatkin

fbshipit-source-id: 90439282aea14d8634eb41ffece5b6320d615fa7
2019-10-03 11:08:25 -07:00
Zachary DeVito
becf080e4a add dynamic isinstance (#26269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26269

Previously, isinstance only worked when we could statically determine
whether it was true or false. Now we can actually issue an isinstance check
in cases where it depends on the runtime type, e.g. an Optional[int]
being an instance of int. This is not very useful on its own yet,
but with type refinement and allowing Any as an argument type this will
allow for python-style "overloaded" functions such that we can
remove our __overload__ support.

Test Plan: Imported from OSS

Differential Revision: D17412853

Pulled By: zdevito

fbshipit-source-id: e2c37040f25f6b94ee1676854fceecd22de190ef
2019-10-01 16:46:59 -07:00
Dmytro Dzhulgakov
0ae0c9788e Fix misuses of TORCH_CHECK/TORCH_INTERNAL_ASSERT with string (#26897)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26897

TORCH_INTERNAL_ASSERT("foo") doesn't do what you think it does :)

I'll try to do a fix to catch it in the compiler, but for now - let's fix usages

Found them using regex:
```
ag --cpp "TORCH_(CHECK|INTERNAL_ASSERT)\([ \n]*\"" --multiline
```

Test Plan: Imported from OSS

Differential Revision: D17624299

Pulled By: dzhulgakov

fbshipit-source-id: 74f05737ef598fd92b5e61541ee36de2405df23d
2019-09-27 13:45:19 -07:00
Zachary DeVito
0e3389dced Fix circular deps in loading (#26758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26758

This PR changes the order in which we import classes and functions so
that it is no longer necessary for them to be defined in order in a file,
or for there to be proper import statements in the exported file.

Actually importing a function/class now is driven by the need to resolve
the entity during unpickling, type resolution, or value resolution.

While this should allow significant simplification to the code that
serializes classes, this work has not been done yet in order to avoid
inevitable forward compat issues in the transition period.

Notes:
* Individual functions have been replaced with a SourceImporter object
  that exposes a resolveType method. This method loads the type if
  it has not been loaded yet, potentially parsing (but not loading)
  the file it exists in if that file hasn't been parsed yet.
* Some legacy functionality needed to be added as a method to this object
  since the old format still used some of this logic for class resolution.

Test Plan: Imported from OSS

Differential Revision: D17558989

Pulled By: zdevito

fbshipit-source-id: 7eae3470bcbd388c4de463e3462d527776ed46c6
2019-09-26 11:39:16 -07:00
Mikhail Zolotukhin
d842435c01 Remove convert_to_ssa argument from runCleanupPasses - it is only used in one place.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26703

Test Plan: Imported from OSS

Differential Revision: D17543131

Pulled By: ZolotukhinM

fbshipit-source-id: c4a209c55ac76d8472e64af79f76e9a61fd2a941
2019-09-25 19:18:46 -07:00
Elias Ellison
d43480d6d1 support iterables, rangevalue in list comprehensions (#26768)
Summary:
Support IterableValue expressions and RangeValue in list comprehensions. Just as with list comprehensions where the expression changes the input list type, we need to correctly type the list we create.

Fixes https://github.com/pytorch/pytorch/issues/26693
Fixes https://github.com/pytorch/pytorch/issues/22483
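The shapes of comprehension this covers, as a minimal eager-Python sketch (under scripting, the result list should be typed from the expression):

```python
squares = [i * i for i in range(4)]   # range as the iterable
assert squares == [0, 1, 4, 9]

scales = [10.0, 20.0, 30.0]
pairs = [(i, s) for i, s in zip(range(3), scales)]  # zipped iterables
assert pairs == [(0, 10.0), (1, 20.0), (2, 30.0)]
```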
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26768

Differential Revision: D17562762

Pulled By: eellison

fbshipit-source-id: 7ce8bf8605758dfd99057bc0376b4b724c4f9251
2019-09-25 15:41:32 -07:00
Zachary DeVito
fcd13549f9 add CondValue to unify refinements and code emission (#26145)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26145

This is step towards isinstance type refinement.
It primarily does yak shaving in compiler.cpp to unify the handling
of special case behavior that occurs in conditional expressions:

* Handling type refinement as part of emission.
* Handling `is None` static-if specialization.

It introduces a CondValue object that is a Value that also has
additional type refinements that are true when that Value is true,
and potentially a static-true/false value that, if set, will cause if
statements to be handled statically, omitting typechecking of the other side.

This ends up expanding some behavior, for instance `is None` specialization
used to happen only for single expressions, but now works through
boolean logic.

Test Plan: Imported from OSS

Differential Revision: D17359500

Pulled By: zdevito

fbshipit-source-id: ce93804496c8b4c3197a5966bc28c608465fda64
2019-09-23 14:24:18 -07:00
Elias Ellison
4c1a2c2033 add setitem to class types (#25750)
Summary:
Follow up to https://github.com/pytorch/pytorch/pull/25664, add `class_type[ind] = val`. Like `__getitem__`, `__setitem__` has a custom compilation path so it wasn't added with the rest of the magic methods.
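A minimal sketch of the sugar being added (plain Python, illustrative names; in TorchScript `Store` would be a scripted class):

```python
class Store:
    def __init__(self) -> None:
        self._d = {}

    def __getitem__(self, key: str) -> int:
        return self._d[key]

    def __setitem__(self, key: str, value: int) -> None:
        self._d[key] = value

s = Store()
s["a"] = 5            # desugars to s.__setitem__("a", 5)
assert s["a"] == 5    # desugars to s.__getitem__("a")
```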
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25750

Differential Revision: D17428725

Pulled By: eellison

fbshipit-source-id: ff3767ef41515baf04b0c0f5c896dbd3f1d20cd3
2019-09-19 10:01:39 -07:00
Zachary DeVito
8d9364ef32 Refactor emitIsInstance (#26061)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26061

This is in preparation for actually emitting a dynamic isinstance check instruction.
It re-arranges the  logic so that all the types and properties to check
against are in a flat list. In the future this flat list will be encoded
into an actual instruction if we determine that we cannot perform
the check statically.

Test Plan: Imported from OSS

Differential Revision: D17332062

Pulled By: zdevito

fbshipit-source-id: 4c0b65436f8e030170d469fe747e79de24bb24eb
2019-09-18 23:27:13 -07:00
Zachary DeVito
efc5306ad2 Make NoneType <: Optional[T] (#25361)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25361

Previously we had a different None object for each type T so that
unwrap optional could still recover the type T from it. After a few
months of having this conversion behavior, it has become clear that
only the unwrap optional operators cause this problem. Furthermore, it
is beneficial to have NoneType <: Optional[T] because this is how IValues
work (in particular the None IValue is not tagged). This patch makes the
necessary changes to do this. In particular it special cases unwrap optional
in export so that it annotates the None to make sure we can recover the type.

This also changes how matching and evaluating type values work so that we
can consider None matchable to type Optional[T], even though we cannot
derive T from that match.
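What the subtyping rule means for user code, as a minimal plain-Python sketch:

```python
from typing import Optional

def first_or(x: Optional[int], default: int) -> int:
    # With NoneType <: Optional[T], a bare None is accepted anywhere an
    # Optional[int] is expected, without a per-T None object.
    return default if x is None else x

assert first_or(None, -1) == -1
assert first_or(7, -1) == 7
```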

Test Plan: Imported from OSS

Differential Revision: D17103072

Pulled By: zdevito

fbshipit-source-id: 37678ed3e5ce54f2eb3ee4dff2734a39f0bee028
2019-09-04 13:52:40 -07:00
Horace He
f3f83ccb23 Added invert bitwise operation to JIT (#22324)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25360
Fixes https://github.com/pytorch/pytorch/issues/22124
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22324

Differential Revision: D17140477

Pulled By: yf225

fbshipit-source-id: f42aec5e688fe079d9e79726b7a6c345da94ae2e
2019-09-03 11:16:30 -07:00
Elias Ellison
d2a8435c08 add tuple keyword (#25474)
Summary:
Doesn't really add much functionality, since the inputs to `tuple()` for which we can statically infer the output size are pretty much just tuples. It does improve the error message, though.

Fix for https://github.com/pytorch/pytorch/issues/24000
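The accepted cases, sketched in plain Python (in TorchScript, only inputs whose length is statically known should be allowed):

```python
u = tuple((4, 5))      # tuple input: length is statically known
assert u == (4, 5)

t = tuple([1, 2, 3])   # fine in eager Python; in TorchScript, only inputs
assert t == (1, 2, 3)  # with statically inferable length are accepted
```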
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25474

Differential Revision: D17133800

Pulled By: eellison

fbshipit-source-id: 41a052895e6ed24a384ec6f8aef0a6769ac094e6
2019-08-30 11:33:49 -07:00
davidriazati
efe808b326 Fix old annotate() error (#25261)
Summary:
Fixes #25067

Pull Request resolved: https://github.com/pytorch/pytorch/pull/25261

Pulled By: driazati

Differential Revision: D17103889

fbshipit-source-id: bd94cb36cf4829e63ad39ae169047b9b9e857679
2019-08-28 20:50:24 -07:00
Zachary DeVito
ca4bc9fc07 improve interface error messages (#25228)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25228

This adds a facility to isSubtypeOf for it to explain why a type is
not a subtype of something else. It is used in situations where it
is not clear from the type's python_str alone why the relationship
does not hold. Because of subtle interactions between default arguments,
overloads, and virtual methods, it uses isSubtypeOfExt for the extended
version to avoid requiring readers to understand the interaction.

Test Plan: Imported from OSS

Differential Revision: D17066673

Pulled By: zdevito

fbshipit-source-id: 4de7c40fbf7f9eeae045d33a89a038538cf87155
2019-08-27 22:54:50 -07:00
Zachary DeVito
fba107f18e add serialization of interface (#25227)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25227

Adds cases to NamedType serialization so that interfaces are written.
The implementation is similar to NamedTuples.

Test Plan: Imported from OSS

Differential Revision: D17066674

Pulled By: zdevito

fbshipit-source-id: fda5419260fad29e8c4ddb92de1d3447d621d982
2019-08-27 22:54:46 -07:00
Zachary DeVito
61818b8986 Add interface declarations to JIT (#25258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25258

this is the first commit in a series to add interfaces to JIT.
Interfaces allow the specification through a blank python class of an
abstract interface that can be used in type annotations for Script functions.
If a TorchScript class implements all the methods in the interface with
the appropriate types, then it is implicitly considered to implement
that interface.

Follow-ups required:
* implementation of serialization
* implementation in the parser frontend
* better error reporting for explaining why a class does not meet an
  interface specification.

Test Plan: Imported from OSS

Differential Revision: D17079963

Pulled By: zdevito

fbshipit-source-id: a9986eeba2d4fdedd0064ce7d459c0251480a5a0
2019-08-27 22:54:37 -07:00
Edward Yang
9340b155bc Revert D15901930: Add interface declarations to JIT
Test Plan: revert-hammer

Differential Revision:
D15901930

Original commit changeset: 22c82d12c9c2

fbshipit-source-id: 4009a3ce7af245d7e0f4924824ece59cdc774180
2019-08-27 06:41:32 -07:00
Zachary DeVito
4b22cf6bd5 Add interface declarations to JIT (#21972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21972
ghimport-source-id: 280f89ca678615f915be2139d1c05cb6bc39eefc

Test Plan: Imported from OSS

Differential Revision: D15901930

Pulled By: zdevito

fbshipit-source-id: 22c82d12c9c2600e569d7083e2771fd6ec3de2b1
2019-08-26 16:57:59 -07:00
Zachary DeVito
121839b2f8 Fix bugs in assignment to optionals (#25059)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25059

This fixes cases where a value annotated as Optional could not
be conditionally assigned None:

```
x : Optional[int] = 4
if ...:
 x = None
```

Test Plan: Imported from OSS

Differential Revision: D16975166

Pulled By: zdevito

fbshipit-source-id: 5a7a81224d08b9447e1f4d957fcd882091e02f32
2019-08-26 13:47:54 -07:00
Zachary DeVito
5254b12002 cleanup tmp name generation (#25065)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25065

Using global atomic variables is bad because sending the same AST through
the compiler twice will produce different graphs. This makes it a
member of the translation struct.

Test Plan: Imported from OSS

Differential Revision: D16975355

Pulled By: zdevito

fbshipit-source-id: 23e15ffd58937a207898a4c4bed82628237e3c2e
2019-08-22 22:49:16 -07:00
Zachary DeVito
f9f5af0ed7 Revert D16949314: [jit] Fix bugs in assignment to optionals
Test Plan: revert-hammer

Differential Revision:
D16949314

Original commit changeset: 7f63d88b30a3

fbshipit-source-id: d1f00de2ad9c3484b731ad1b24205ca60024355d
2019-08-22 16:50:48 -07:00