Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27772
This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
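To make the refinement concrete, here is a hedged sketch of the user-level pattern this enables; the graph-level detail in the comment (which op gets inserted) is an assumption based on the description above:
```python
import torch
from typing import Optional

@torch.jit.script
def f(x: Optional[int]) -> int:
    if isinstance(x, int):
        # x is refined to int here; per this change, the compiler can
        # insert an unchecked_cast to int rather than
        # unchecked_unwrap_optional.
        return x + 1
    return 0
```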
Test Plan: Imported from OSS
Differential Revision: D17885424
Pulled By: zdevito
fbshipit-source-id: ce81077d6fbeaf2a802a2e0b17349aca85670466
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27773
We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:
* tryMatchSchemas was added since the overload logic got copy/pasted
into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
outside of tryMatchSchemas
* emitBuiltinFunction no longer needs a requires argument (it was always true)
* Argument order for all the schema matching stuff now puts the 'self'
builtin override last. This is only rarely used and was inconsistent with
matchSchema
Test Plan: Imported from OSS
Differential Revision: D17885425
Pulled By: zdevito
fbshipit-source-id: 064bc9fa4bd57b2e5366fff9f3c6ab9b9945e08b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26499
We've changed how these functions are used over time, so I cleaned up
the header file API to match. In particular:
* tryMatchSchemas was added since the overload logic got copy/pasted
into three separate locations.
* With this change, tryMatchSchema is no longer public, as it is not needed
outside of tryMatchSchemas
* emitBuiltinFunction no longer needs a requires argument (it was always true)
* Argument order for all the schema matching stuff now puts the 'self'
builtin override last. This is only rarely used and was inconsistent with
matchSchema
Test Plan: Imported from OSS
Differential Revision: D17488297
Pulled By: zdevito
fbshipit-source-id: a32d838ce35544972fa8767557acc22149081b55
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26271
This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
Test Plan: Imported from OSS
Differential Revision: D17412856
Pulled By: zdevito
fbshipit-source-id: ded47eb086c4610998ad92bb1174225af00220f7
Summary:
10 lines of error context (on both sides) is overkill, especially now
that we have line numbers. With a compilation stack of a couple
functions, it becomes a pain to scroll to the top of the stack to see
the real error every time.
This also fixes class names in the compilation stack to a format of
`ClassName.method_name` instead of the fully qualified name.
Old output
```
clip_boxes_to_image(Tensor boxes, (int, int) size) -> (Tensor):
Expected a value of type 'Tuple[int, int]' for argument 'size' but instead found type 'Tuple[int, int, int]'.
:
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:365:20
top_n_idx = self._get_top_n_idx(objectness, num_anchors_per_level)
batch_idx = torch.arange(num_images, device=device)[:, None]
objectness = objectness[batch_idx, top_n_idx]
levels = levels[batch_idx, top_n_idx]
proposals = proposals[batch_idx, top_n_idx]
final_boxes = []
final_scores = []
for boxes, scores, lvl, img_shape in zip(proposals, objectness, levels, image_shapes):
boxes = box_ops.clip_boxes_to_image(boxes, img_shape)
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
keep = box_ops.remove_small_boxes(boxes, self.min_size)
boxes, scores, lvl = boxes[keep], scores[keep], lvl[keep]
# non-maximum suppression, independently done per level
keep = box_ops.batched_nms(boxes, scores, lvl, self.nms_thresh)
# keep only topk scoring predictions
keep = keep[:self.post_nms_top_n]
boxes, scores = boxes[keep], scores[keep]
final_boxes.append(boxes)
final_scores.append(scores)
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:446:8
num_images = len(anchors)
num_anchors_per_level = [o[0].numel() for o in objectness]
objectness, pred_bbox_deltas = \
concat_box_prediction_layers(objectness, pred_bbox_deltas)
# apply pred_bbox_deltas to anchors to obtain the decoded proposals
# note that we detach the deltas because Faster R-CNN do not backprop through
# the proposals
proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
proposals = proposals.view(num_images, -1, 4)
boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
losses = {}
if self.training:
assert targets is not None
labels, matched_gt_boxes = self.assign_targets_to_anchors(anchors, targets)
regression_targets = self.box_coder.encode(matched_gt_boxes, anchors)
loss_objectness, loss_rpn_box_reg = self.compute_loss(
objectness, pred_bbox_deltas, labels, regression_targets)
losses = {
'RegionProposalNetwork.forward' is being compiled since it was called from 'MaskRCNN.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/generalized_rcnn.py:53:8
"""
if self.training and targets is None:
raise ValueError("In training mode, targets should be passed")
original_image_sizes = [(img.shape[-2], img.shape[-3]) for img in images]
images, targets = self.transform(images, targets)
features = self.backbone(images.tensors)
if isinstance(features, torch.Tensor):
features = OrderedDict([(0, features)])
proposals, proposal_losses = self.rpn(images, features, targets)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)
losses = {}
losses.update(detector_losses)
losses.update(proposal_losses)
# TODO: multiple return types??
# if self.training:
```
New output
```
RuntimeError:
clip_boxes_to_image(Tensor boxes, (int, int) size) -> (Tensor):
Expected a value of type 'Tuple[int, int]' for argument 'size' but instead found type 'Tuple[int, int, int]'.
:
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:365:20
final_scores = []
for boxes, scores, lvl, img_shape in zip(proposals, objectness, levels, image_shapes):
boxes = box_ops.clip_boxes_to_image(boxes, img_shape)
~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
keep = box_ops.remove_small_boxes(boxes, self.min_size)
boxes, scores, lvl = boxes[keep], scores[keep], lvl[keep]
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/rpn.py:446:8
proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
proposals = proposals.view(num_images, -1, 4)
boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
losses = {}
'RegionProposalNetwork.forward' is being compiled since it was called from 'MaskRCNN.forward'
at /home/davidriazati/dev/vision/torchvision/models/detection/generalized_rcnn.py:53:8
if isinstance(features, torch.Tensor):
features = OrderedDict([(0, features)])
proposals, proposal_losses = self.rpn(images, features, targets)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
detections = self.transform.postprocess
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26765
Pulled By: driazati
Differential Revision: D17560963
fbshipit-source-id: e463548744b505ca17f0158079b80e08fda47d49
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26270
We've accumulated a lot of sugared values whose only purpose is
to be instance-checked against in emitApplyExpr. I need to add
another one to insert an unchecked_cast, and do not want to continue
the pattern. This creates an abstraction for this concept (SpecialFormValue),
and removes all the unneeded sugared values. There is no functionality
change here, just a bunch of code movement in compiler.cpp.
Test Plan: Imported from OSS
Differential Revision: D17412854
Pulled By: zdevito
fbshipit-source-id: 15877c91decaea5a00d1fe737ed2d0f0f8a79a28
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26269
Previously, isinstance only worked when we could statically determine
whether it was true/false. Now we can actually issue an isinstance check
in cases where the result depends on the runtime type, e.g. whether an
Optional[int] is an instance of int. This is not very useful on its own yet,
but with type refinement and allowing Any as an argument type, this will
allow for Python-style "overloaded" functions such that we can
remove our __overload__ support.
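For illustration, a minimal sketch of a check that can no longer be folded statically (the emitted instruction itself is internal; this just shows the user-level shape):
```python
import torch
from typing import Optional

@torch.jit.script
def holds_int(x: Optional[int]) -> bool:
    # Whether an Optional[int] actually holds an int is only known at
    # runtime, so this becomes a real isinstance check instead of a
    # compile-time constant.
    return isinstance(x, int)
```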
Test Plan: Imported from OSS
Differential Revision: D17412853
Pulled By: zdevito
fbshipit-source-id: e2c37040f25f6b94ee1676854fceecd22de190ef
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26897
TORCH_INTERNAL_ASSERT("foo") doesn't do what you think it does :)
I'll try to do a fix to catch it in the compiler, but for now - let's fix usages
Found them using regex:
```
ag --cpp "TORCH_(CHECK|INTERNAL_ASSERT)\([ \n]*\"" --multiline
```
Test Plan: Imported from OSS
Differential Revision: D17624299
Pulled By: dzhulgakov
fbshipit-source-id: 74f05737ef598fd92b5e61541ee36de2405df23d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26758
This PR changes the order in which we import classes and functions so
that it is no longer necessary for them to be defined in order in a file,
or for there to be proper import statements in the exported file.
Actually importing a function/class now is driven by the need to resolve
the entity during unpickling, type resolution, or value resolution.
While this should allow significant simplification to the code that
serializes classes, this work has not been done yet in order to avoid
inevitable forward compat issues in the transition period.
Notes:
* Individual functions have been replaced with a SourceImporter object
that exposes a resolveType method. This method loads the type if
it has not been loaded yet, potentially parsing (but not loading)
the file it exists in if that file hasn't been parsed yet.
* Some legacy functionality needed to be added as a method to this object
since the old format still used some of this logic for class resolution.
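A rough, runnable Python analogy of the lazy flow described above; the real implementation is C++ (SourceImporter and its resolveType method), and the parsing/compiling steps here are stand-in stubs:
```python
class LazyImporter:
    def __init__(self, sources):
        self.sources = sources  # file name -> source text
        self.parsed = {}        # file name -> parsed (but not loaded) source
        self.loaded = {}        # qualified name -> compiled entity

    def resolve_type(self, qualified_name):
        # Loading is driven by resolution requests, not by definition order
        # or import statements in the file.
        if qualified_name in self.loaded:
            return self.loaded[qualified_name]
        file_name = qualified_name.split(".", 1)[0]
        if file_name not in self.parsed:
            # Parse the whole file on first touch, loading nothing from it yet.
            self.parsed[file_name] = self.sources[file_name].splitlines()
        self.loaded[qualified_name] = ("compiled", qualified_name)  # stub
        return self.loaded[qualified_name]

importer = LazyImporter({"mod": "class B: ...\nclass A: ..."})
# A resolves even though B appears first and there are no imports.
print(importer.resolve_type("mod.A"))
```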
Test Plan: Imported from OSS
Differential Revision: D17558989
Pulled By: zdevito
fbshipit-source-id: 7eae3470bcbd388c4de463e3462d527776ed46c6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26145
This is a step towards isinstance type refinement.
It primarily does yak shaving in compiler.cpp to unify the handling
of special case behavior that occurs in conditional expressions:
* Handling type refinement as part of emission.
* Handling `is None` static-if specialization.
It introduces a CondValue object that is a Value that also has
additional type refinements that are true when that Value is true,
and potentially a static true/false value that, if set, will cause if
statements to be handled statically, omitting typechecking of the other side.
This ends up expanding some behavior; for instance, `is None` specialization
used to happen only for single expressions, but now works through
boolean logic.
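As a sketch of that last point, a refinement can now be carried through a conjunction (assuming the Optional refinement behavior described elsewhere in this stack):
```python
import torch
from typing import Optional

@torch.jit.script
def f(x: Optional[int], flag: bool) -> int:
    # The `x is not None` refinement survives the `and`, so `x` is
    # treated as int inside the branch.
    if x is not None and flag:
        return x + 1
    return 0
```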
Test Plan: Imported from OSS
Differential Revision: D17359500
Pulled By: zdevito
fbshipit-source-id: ce93804496c8b4c3197a5966bc28c608465fda64
Summary:
Follow up to https://github.com/pytorch/pytorch/pull/25664, add `class_type[ind] = val`. Like `__getitem__`, `__setitem__` has a custom compilation path so it wasn't added with the rest of the magic methods.
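A hedged sketch of the pattern this enables (the class and helper names here are invented for illustration):
```python
import torch

@torch.jit.script
class IntBuffer(object):
    def __init__(self):
        self.data = [0, 0, 0]

    def __setitem__(self, idx: int, val: int):
        self.data[idx] = val

@torch.jit.script
def fill(buf: IntBuffer):
    buf[1] = 42  # desugars to a call to IntBuffer.__setitem__
```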
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25750
Differential Revision: D17428725
Pulled By: eellison
fbshipit-source-id: ff3767ef41515baf04b0c0f5c896dbd3f1d20cd3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26061
This is in preparation for actually emitting a dynamic isinstance check instruction.
It re-arranges the logic so that all the types and properties to check
against are in a flat list. In the future this flat list will be encoded
into an actual instruction if we determine that we cannot perform
the check statically.
Test Plan: Imported from OSS
Differential Revision: D17332062
Pulled By: zdevito
fbshipit-source-id: 4c0b65436f8e030170d469fe747e79de24bb24eb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25361
Previously we had a different None object for each type T so that
unwrap optional could still recover the type T from it. After a few
months of having this conversion behavior, it has become clear that
only the unwrap optional operators cause this problem. Furthermore, it
is beneficial to have NoneType <: Optional[T] because this is how IValues
work (in particular the None IValue is not tagged). This patch makes the
necessary changes to do this. In particular it special cases unwrap optional
in export so that it annotates the None to make sure we can recover the type.
This also changes how matching and evaluating type values work so that we
can consider None matchable to type Optional[T], even though we cannot
derive T from that match.
Test Plan: Imported from OSS
Differential Revision: D17103072
Pulled By: zdevito
fbshipit-source-id: 37678ed3e5ce54f2eb3ee4dff2734a39f0bee028
Summary:
Doesn't really add much functionality, since the only inputs to `tuple()` for which we can statically infer the output size are pretty much just tuples. It does improve the error message, though.
Fix for https://github.com/pytorch/pytorch/issues/24000
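A minimal sketch of the statically-inferable case (treat the exact supported call forms as an assumption):
```python
import torch

@torch.jit.script
def f() -> int:
    t = tuple((1, 2, 3))  # a tuple input has a statically known length
    return t[0]
```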
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25474
Differential Revision: D17133800
Pulled By: eellison
fbshipit-source-id: 41a052895e6ed24a384ec6f8aef0a6769ac094e6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25228
This adds a facility to isSubtypeOf for it to explain why a type is
not a subtype of something else. It is used in situations where it
is not clear from the types' python_str alone why the relationship
does not hold. Because of a subtle interaction between default arguments,
overloads, and virtual methods, the extended version lives in a separate
method, isSubtypeOfExt, so that readers of the common path do not need to
understand the interaction.
Test Plan: Imported from OSS
Differential Revision: D17066673
Pulled By: zdevito
fbshipit-source-id: 4de7c40fbf7f9eeae045d33a89a038538cf87155
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25227
Adds cases to NamedType serialization so that interfaces are written.
Similar implementation to NamedTuples
Test Plan: Imported from OSS
Differential Revision: D17066674
Pulled By: zdevito
fbshipit-source-id: fda5419260fad29e8c4ddb92de1d3447d621d982
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25258
This is the first commit in a series to add interfaces to JIT.
Interfaces allow the specification through a blank python class of an
abstract interface that can be used in type annotations for Script functions.
If a TorchScript class implements all the methods in the interface with
the appropriate types, then it is implicitly considered to implement
that interface.
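A hedged sketch of the concept; the `torch.jit.interface` decorator spelling is how the feature was eventually exposed, and this first commit may predate that exact API:
```python
import torch

@torch.jit.interface
class Adder(object):
    def add(self, x: int) -> int:
        pass

@torch.jit.script
class PlusOne(object):
    def __init__(self):
        self.base = 1

    def add(self, x: int) -> int:
        return x + self.base  # matching signature => implicitly an Adder

@torch.jit.script
def use(a: Adder) -> int:
    return a.add(41)
```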
Follow-ups required:
* implementation of serialization
* implementation in the parser frontend
* better error reporting for explaining why a class does not meet an
interface specification.
Test Plan: Imported from OSS
Differential Revision: D17079963
Pulled By: zdevito
fbshipit-source-id: a9986eeba2d4fdedd0064ce7d459c0251480a5a0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25059
This fixes the cases where a variable annotated as Optional cannot
be conditionally assigned None:
```
x : Optional[int] = 4
if ...:
    x = None
```
Test Plan: Imported from OSS
Differential Revision: D16975166
Pulled By: zdevito
fbshipit-source-id: 5a7a81224d08b9447e1f4d957fcd882091e02f32
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25065
Using global atomic variables is bad because sending the same AST through
the compiler twice will produce different graphs. This makes it a
member of the translation struct.
Test Plan: Imported from OSS
Differential Revision: D16975355
Pulled By: zdevito
fbshipit-source-id: 23e15ffd58937a207898a4c4bed82628237e3c2e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24989
This fixes the cases where a variable annotated as Optional cannot
be conditionally assigned None:
```
x : Optional[int] = 4
if ...:
    x = None
```
Test Plan: Imported from OSS
Differential Revision: D16949314
Pulled By: zdevito
fbshipit-source-id: 7f63d88b30a3f5b024c2a539aa74967c9202af00
Summary:
Previously we weren't clearing the stack, so any failures that didn't
stop the program stayed around in the stack and would show up if
something else accessed the stack.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23458
Pulled By: driazati
Differential Revision: D16866719
fbshipit-source-id: 29739b11f79de91c6468129da1bdcbf3c53b42d9
Summary:
Previously we didn't handle list comprehensions where the expression produced a different type than the input list:
`[float(x) for x in [1, 2, 3]]`
Fix for https://github.com/pytorch/pytorch/issues/24239
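A small sketch of the now-supported case:
```python
import torch
from typing import List

@torch.jit.script
def f() -> List[float]:
    # The input elements are ints, but the comprehension yields floats.
    return [float(x) for x in [1, 2, 3]]
```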
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24271
Differential Revision: D16806564
Pulled By: eellison
fbshipit-source-id: 1af6a174b9d17a6ea7154511133c12c691eb9188
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23885
This is a series of PRs that will allow us to support adding [padding to conv](https://github.com/pytorch/pytorch/pull/22484) and also reduce the friction of adding method overloads that was brought up in https://github.com/pytorch/pytorch/pull/23266.
This PR compiles only one branch of an `if` when the condition is an isinstance check that can be statically determined. This is consistent with what mypy does; it does not report errors if a branch can be statically determined to be unreachable.
```
def foo(x):
    # type: (int) -> int
    if isinstance(x, str):
        return x["1"]
    return x + 1

reveal_type(foo)  # no error, shows int -> int
```
Test Plan: Imported from OSS
Differential Revision: D16697092
Pulled By: eellison
fbshipit-source-id: d3eb4925cd16d551515ac6ff620a69897dbec130
Summary:
Before calling `__setstate__` when loading a module, we need to disable
the optimizer since the module's type does not match the values on the
stack (all the tensors will be `UndefinedTensor`)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23698
Pulled By: driazati
Differential Revision: D16690935
fbshipit-source-id: 71e2238fd25cd16271af478ef21a3cf4e514a462
Summary:
When we're emitting an if node, if one branch exits, allow variables in the other branch to escape scope. This uses the same machinery that already exists for early returns, so there are minimal changes to the compiler. Most of the changes are in the exit_transform pass, so we don't create terrible graphs when exceptions exist. In a follow-up PR I will add a writeup of the transform pass to docs, since this should be the last change made to it for a while.
This will allow assertions to refine Optional types, as well as allow JIT to understand things like:
```
def foo(x):
    if x == 1:
        raise Exception()
    else:
        a = 1
    return a
```
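The assertion case mentioned above would look like this (a sketch):
```python
import torch
from typing import Optional

@torch.jit.script
def bar(x: Optional[int]) -> int:
    # A failing assert exits the block, so `x` is refined to int below.
    assert x is not None
    return x + 1
```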
If you look in nn/functional.py, like 3/4 of the TODOs are this issue. One note: if a function always throws, I accept whatever annotation exists for the return type, and otherwise set it to None. This is consistent with what mypy does.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23565
Differential Revision: D16679572
Pulled By: eellison
fbshipit-source-id: e58c9e9ddaeb13144c803d90e2beae253c851f7f
Summary:
Add `sorted` keyword to JIT for lists and dicts. This desugars to a list copy and a call to `list.sort()`. Since we don't have interfaces yet, I implement it in terms of `list.sort()`; when we do, we can revisit implementing this op in a different manner.
The test fails because of a fix to specialized lists, which is landing here: https://github.com/pytorch/pytorch/pull/23267
Ignore the first commit because it is formatting, plz use clang_format ppl :'(
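A minimal sketch of the new keyword in use:
```python
import torch
from typing import List

@torch.jit.script
def f(xs: List[int]) -> List[int]:
    # Desugars to a copy of xs followed by a call to list.sort().
    return sorted(xs)
```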
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23274
Differential Revision: D16527323
Pulled By: eellison
fbshipit-source-id: aed8faef23cb790b9af036cd6c1b9b1d7066345d
Summary:
Add early returns to JIT with minimal changes to compiler.cpp and an IR->IR pass that will transform the graph so that there is only one return value.
In compiler.cpp, record when a block will exit so that the following example will work:
```
if cond:
    a = torch.zeros([2])
else:
    return 2
a += 2
...
```
To match block outputs with values that will not be used, like in the above example with `a`, I add a Bottom Type that subtypes everything else. This allows shape propagation to continue to work, and makes it so that we don't need many extra nodes filling up the graph.
The IR transform currently doesn't work on Loops; I didn't add that to this PR to avoid too much complexity, but will add it in a stacked PR (and it should be very little extra code). The IR transform is commented at the top of the file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19179
Differential Revision: D16519819
Pulled By: eellison
fbshipit-source-id: 322a27f69966d1fd074ebe723c3e948b458b0e68
Summary:
There are a lot of formatting changes, which make other diffs to these files noisy & hard to read.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23283
Differential Revision: D16453590
Pulled By: eellison
fbshipit-source-id: 97b4bf1dbbbfb09c44c57402f61ea27287060044
Summary:
https://github.com/pytorch/pytorch/issues/20153
I believe you need 2 passes for this. Take this example
```python
@torch.jit.script
def f():
    x = torch.ones(10, 9, 8, 7, 6)
    return x[..., None, None].shape
```
which results in `[10, 9, 8, 7, 6, 1, 1]`
vs
```python
@torch.jit.script
def f():
    x = torch.ones(10, 9, 8, 7, 6)
    return x[..., None, None, :].shape
```
which results in `[10, 9, 8, 7, 1, 1, 6]`
After only processing `x[..., None, None` we don't know whether we should be creating a new dimension at the end of the dimension list or somewhere in the middle. What we do depends on the elements to the right of it.
Thus, I do 2 passes - one to collect all the dimensions that the index operations operate on, and another that executes the index operations.
This still doesn't work for an ellipsis index followed by a tensor index, but it wasn't working previously either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22905
Differential Revision: D16433558
Pulled By: Chillee
fbshipit-source-id: c1b303cb97b1af8b6e405bad33495ef3b4c27c4a