Summary:
Fixes https://github.com/pytorch/pytorch/issues/29161.
I looked a bit at the code changes related to this and think I have all of the use cases of `DeprecatedTypeProperties` covered in the message, but suggestions from someone with more context on this would be very much appreciated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30281
Differential Revision: D18830818
Pulled By: ezyang
fbshipit-source-id: 1a7fcee15354ae09e6644577e7fa33bd26acfe20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29217
We want to preserve constant information in ClassType so that
users can access the constants in the module by name (see the
sketch below). This is also used later for freezing some attributes
(converting attributes to constants).
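As a user-level sketch of the intent (the `__constants__` mechanism and the scripting call below are standard TorchScript usage, not code from this diff):
```
import torch

class M(torch.nn.Module):
    __constants__ = ['scale']  # ask the compiler to record `scale` as a constant

    def __init__(self):
        super().__init__()
        self.scale = 2.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale

m = torch.jit.script(M())
print(m.scale)  # the constant is reachable by name on the scripted module
```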
Test Plan:
tbd
Imported from OSS
Differential Revision: D18799955
fbshipit-source-id: fbfbcd5d3f7f560368b96e2a87e270c822a3d03a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29653
I didn't remove is_variable from Tensor for BC reasons, but I did
remove as many uses as I could from the codebase.
at::impl::variable_excluded_from_dispatch got moved to TensorBody.h
so that it's more widely accessible.
This diff is NOT semantics preserving. Here are the major differences:
- In a number of native operator implementations, we tested that arguments
are not variable. I replaced these with asserts that variable is
excluded from dispatch. I actually don't think these asserts are really
necessary now (they should certainly be true, and it's hard to get
them wrong), but I've kept them for old times' sake. At least, they'll detect
if you call these functions before you've processed variable (indicating
a bug in your kernel.)
- There are a number of places where we do a per-tensor test for being a
variable, for better error reporting when someone commits Tensor/Variable
confusion. Although these tests are substantively the same as the
tests above, in these cases I decided to *delete* the test entirely.
The reasoning is that in these cases, we didn't really care about
dispatch (also, see above; I'm not too sure we really need the dispatch
asserts), we cared about Tensor/Variable confusion. Since Tensor/Variable
confusion is impossible now (see the sketch after this list), we don't
need the tests. One of the key
factors which pushed me one way or another was whether or not a function
was doing per-tensor validation; if I kept the assert in such functions,
I'd repeatedly access the TLS. Even if we want to bring back the asserts,
they would have to go somewhere else.
Another similar idiom is the number of places we do `!x.defined() ||
x.is_variable()`; I treated this equivalently.
- nuclear_norm's computation of compute_uv is a bit weird, but I think
it's OK to just delete the is_variable case (I *suspect* that it is
always the case that self.is_variable(), but it doesn't really matter.)
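For illustration, a minimal sketch of the merged Tensor/Variable world this diff relies on (ordinary user-level code, not code from this diff):
```
import torch

# There is a single torch.Tensor type; autograd state lives on the tensor
# itself, so a separate per-tensor "is this a Variable?" test is moot.
t = torch.ones(2, requires_grad=True)
print(type(t))          # <class 'torch.Tensor'>
print(t.requires_grad)  # True
```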
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D18496168
Pulled By: ezyang
fbshipit-source-id: 5a1ded931e0c10a6b758ba64a8380d34110e0c3e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28408
This enables an interface to be defined on an nn.Module, and InterfaceType
now has an is_module_ field to distinguish a module interface from a
normal interface (similar to how ClassType distinguishes modules from
TorchScript classes).
A module interface can be assigned any ScriptModule with compatible
method schemas. An object that is not a ScriptModule cannot be assigned
to a module interface, and attempting to do so explicitly is an error.
Assigning a ScriptModule to a class interface makes it available only in
attribute_list, not module_list. More details on the subtyping
relationship are documented in jit_type.h.
If you declare a module interface inside an nn.Module that is being
compiled to a ScriptModule, our internal compilation behaves as follows:
1. ConcreteModuleType records it as a module attribute and adds it to
the attributes_ list.
2. The JitType created from the ConcreteModuleType records it as an
attribute and pre-generates the slot. The slot is still marked as
EntityType::MODULE to make sure the JitType records it as a module
slot.
3. cpp_module also registers it as a Module, since the Slot type is the
source of truth.
Since the JitType records it as an attribute and stores its type, it
behaves like a class interface attribute does today. This means the
submodule assigned to this module interface is not inlined into the
graph the way a normal `Module::attr` is; instead we emit an interface
callMethod, which lets us later swap in another ScriptModule that
implicitly implements this module interface (see the sketch below).
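A minimal sketch of how this might look from Python (the interface and impl names are made up, and that plain attribute assignment performs the swap is an assumption):
```
import torch
from torch import nn, Tensor

@torch.jit.interface
class ModuleInterface(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        pass

class ImplA(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        return x + 1

class ImplB(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        return x * 2

class Wrapper(nn.Module):
    impl: ModuleInterface

    def __init__(self):
        super().__init__()
        self.impl = ImplA()

    def forward(self, x: Tensor) -> Tensor:
        # emitted as an interface callMethod, not inlined into the graph
        return self.impl.forward(x)

m = torch.jit.script(Wrapper())
# swap in another ScriptModule that implicitly implements the interface
m.impl = torch.jit.script(ImplB())
```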
Test Plan: Imported from OSS
Differential Revision: D18284311
fbshipit-source-id: e0b8f6e8c34b2087fab337a969e5ea3fb37ec209
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28605
This was added because std::to_string isn't available in libstdc++
on Android. Use it in more places to get the PyTorch Android
build working with libstdc++.
Test Plan: Internal android build.
Reviewed By: jerryzh168
Differential Revision: D18099520
fbshipit-source-id: 17a2b617c2d21deadd0fdac1db849823637981fc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28604
This isn't used anywhere, and it doesn't work with older libstdc++
because std::ostringstream is not copyable or movable.
Test Plan: Internal android build.
Reviewed By: jamesr66a
Differential Revision: D18099511
fbshipit-source-id: 1ffb49303aa5d7890ca7f057b21886f88c04ce20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28129
The previous PR in the stack removed the need to order classes/functions
or have correct import statements. This resolved circular dependency issues
that can arise when class constructors like ModuleList put new instances
of themselves in a common namespace.
This PR changes our export format to no longer produce this information.
By doing so we can make the logic significantly simpler, since we just
keep track of an individual PythonPrint object per file.
Notes:
* PythonPrint was changed to manage its own stream/list of ranges. It
was doing this anyway internally; this just makes the API clearer.
* Since we are changing the serialization format, I also removed op_version_set.
It is now replaced with the VERSION number that is written in the zip archive
(see the sketch after these notes).
This further simplifies the code emission process.
* A test of op_version_set was removed since there is no longer any behavior
to test.
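A sketch of how one might observe the version record (the exact entry name inside the zip is an assumption):
```
import zipfile
import torch

m = torch.jit.script(torch.nn.Linear(2, 2))
m.save("m.pt")
with zipfile.ZipFile("m.pt") as z:
    # the archive carries a single version record in place of op_version_set
    entry = [n for n in z.namelist() if n.endswith("version")][0]
    print(entry, z.read(entry))
```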
Test Plan: Imported from OSS
Differential Revision: D17961610
Pulled By: zdevito
fbshipit-source-id: ada362c4ca34d05393a1a7e799c94785ab9d9825
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27772
This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
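A sketch of where the new op shows up (assuming `assert`-based refinement lowers to unchecked_cast, which can be checked by printing the graph):
```
import torch
from typing import Optional

@torch.jit.script
def f(x: Optional[int]) -> int:
    assert x is not None  # refines Optional[int] to int past this point
    return x

# the refinement appears in the IR as prim::unchecked_cast, which names the
# target type explicitly (unlike the old unchecked_unwrap_optional)
print(f.graph)
```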
Test Plan: Imported from OSS
Differential Revision: D17885424
Pulled By: zdevito
fbshipit-source-id: ce81077d6fbeaf2a802a2e0b17349aca85670466
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26787
A follow-up PR will remove the need to issue import statements
or write classes in order, since they are no longer needed.
This change allows the same PythonPrint class
to be used for an entire file, which will be needed in that patch.
Test Plan: Imported from OSS
Differential Revision: D17566440
Pulled By: zdevito
fbshipit-source-id: 1ee896da0cdfe6a003298e1d4b0238403b9ed6dd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26271
This replaces unchecked_unwrap_optional with unchecked_cast. This
enables the generalization of type refinement so that it works for
isinstance checks as well. This also removes unchecked_unwrap_optional from
code we generate, which is good because it is a hard op to serialize well
since it doesn't directly encode the Optional[T] being unwrapped. In contrast,
unchecked_cast always explicitly lists the type.
Test Plan: Imported from OSS
Differential Revision: D17412856
Pulled By: zdevito
fbshipit-source-id: ded47eb086c4610998ad92bb1174225af00220f7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26269
Previously, isinstance only worked when we could statically determine
whether it was true or false. Now we can actually issue an isinstance check
in cases where the result depends on the runtime type, e.g. whether an
Optional[int] holds an int. This is not very useful on its own yet,
but with type refinement and allowing Any as an argument type, this will
allow for python-style "overloaded" functions so that we can
remove our __overload__ support (see the sketch below).
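A minimal sketch of the new behavior (assuming the runtime check is user-visible through torch.jit.script):
```
import torch
from typing import Optional

@torch.jit.script
def add_one(x: Optional[int]) -> int:
    # this isinstance is a genuine runtime check; the true branch also
    # refines the type of x from Optional[int] to int
    if isinstance(x, int):
        return x + 1
    return 0

print(add_one(3), add_one(None))  # 4 0
```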
Test Plan: Imported from OSS
Differential Revision: D17412853
Pulled By: zdevito
fbshipit-source-id: e2c37040f25f6b94ee1676854fceecd22de190ef
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26761
This PR serializes autograd ops into their own namespace by turning the
serialized op name into `torch.autograd.op`. This keeps the original code
namespace rather than moving everything into the global namespace; this
will be handled more properly in the future when we handle module
namespaces. This change also preserves BC until we have namespace handling.
Test Plan: Imported from OSS
Differential Revision: D17645438
fbshipit-source-id: 656ec6b31d4fc2252585de73117c4d40a122678e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25263
This adds an API that returns true in script and false in eager, which together with ignore allows guarding of not-yet-supported JIT features. Bikeshedding requested please.
cc zou3519
```
def foo():
    if not torch.jit.is_scripting():
        return torch.linear(...)
    else:
        return addmm(...)
```
Test Plan: Imported from OSS
Differential Revision: D17272443
Pulled By: eellison
fbshipit-source-id: de0f769c7eaae91de0007b98969183df93a91f42
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25262
Preserve the type of ignore'd functions on serialization. Currently we compile an ignore'd function with its annotated type, but we do not preserve that type. This is important for being able to compile models with not-yet-supported features in JIT.
```
@torch.jit.ignore
def unsupported(x):
    return x

def foo():
    if not torch.jit._is_scripting():
        return torch.linear(...)
    else:
        return unsupported(...)
```
Test Plan: Imported from OSS
Reviewed By: driazati
Differential Revision: D17199043
Pulled By: eellison
fbshipit-source-id: 1196fd94c207b9fbee1087e4b2ef7d4656a6647f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25361
Previously we had a different None object for each type T so that
unwrap optional could still recover the type T from it. After a few
months of having this conversion behavior, it has become clear that
only the unwrap optional operators cause this problem. Furthermore, it
is beneficial to have NoneType <: Optional[T] because this is how IValues
work (in particular the None IValue is not tagged). This patch makes the
necessary changes to do this. In particular, it special-cases unwrap optional
in export so that it annotates the None to make sure we can recover the type.
This also changes how matching and evaluating type values work so that we
can consider None matchable to the type Optional[T], even though we cannot
derive T from that match (see the sketch below).
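A small sketch of the user-visible consequence (standard TorchScript usage; not code from this patch):
```
import torch
from typing import Optional

@torch.jit.script
def opt_int(x: Optional[int]) -> int:
    return 0 if x is None else x

@torch.jit.script
def opt_str(x: Optional[str]) -> str:
    return "" if x is None else x

# a single untagged None is accepted for every Optional[T], mirroring the
# untagged None IValue described above
print(opt_int(None), opt_str(None))
```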
Test Plan: Imported from OSS
Differential Revision: D17103072
Pulled By: zdevito
fbshipit-source-id: 37678ed3e5ce54f2eb3ee4dff2734a39f0bee028
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25227
Adds cases to NamedType serialization so that interfaces are written.
The implementation is similar to NamedTuples.
Test Plan: Imported from OSS
Differential Revision: D17066674
Pulled By: zdevito
fbshipit-source-id: fda5419260fad29e8c4ddb92de1d3447d621d982
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25226
Given the current structure, it is easier to just call different functions
to get the desired behavior.
Test Plan: Imported from OSS
Differential Revision: D17066672
Pulled By: zdevito
fbshipit-source-id: 88e76c5ee870d9d1e9887aebcac5e7873fabe6b1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25059
This fixes the cases where a type annotated as Optional cannot
be conditionally assigned None:
```
x: Optional[int] = 4
if ...:
    x = None
```
Test Plan: Imported from OSS
Differential Revision: D16975166
Pulled By: zdevito
fbshipit-source-id: 5a7a81224d08b9447e1f4d957fcd882091e02f32
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24989
This fixes the cases where a type annotated as Optional cannot
be conditionally assigned None:
```
x: Optional[int] = 4
if ...:
    x = None
```
Test Plan: Imported from OSS
Differential Revision: D16949314
Pulled By: zdevito
fbshipit-source-id: 7f63d88b30a3f5b024c2a539aa74967c9202af00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24871
Bind the torch.autograd.grad function into TorchScript so that well-formed
inputs can call it directly from a TorchScript function (see the sketch below).
This also changes the serialization a bit: it fixes a small bug where a node
output type could never be a tensor type in prim::ListConstruct (only its
element type could be), and adds a case where we annotate the ListType when
the element type is an optional type, to preserve type information on
re-import.
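A minimal sketch of the binding in use (the list-based signature and the Optional results are assumptions about the scripted overload):
```
import torch
from typing import List, Optional

@torch.jit.script
def grad_of_square(x: torch.Tensor) -> List[Optional[torch.Tensor]]:
    y = (x * x).sum()
    # well-formed inputs can call torch.autograd.grad directly in script
    return torch.autograd.grad([y], [x])

x = torch.randn(3, requires_grad=True)
print(grad_of_square(x))  # the gradient, 2 * x
```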
Differential Revision: D16923273
fbshipit-source-id: 151cc13411c8c287def35b4e65122d9fc083ccfd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23799
Before, we inlined as part of the initial IR generation process, which
has a few disadvantages:
1. It loses information about what nodes came from which function/method
calls. Other parties who want to implement transformations on the
function/module level don't have a reliable way of doing so.
2. It duplicates a ton of code if we are inlining the same
function/method many times.
After this PR: inlining is deferred to the optimization stage, so
optimizations that rely on inlining will still work. But things get
serialized with the function/method calls left in (see the sketch below).
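A sketch of the resulting behavior, using the internal torch._C._jit_pass_inline pass to trigger inlining explicitly (a sketch of the intended workflow, not code from this PR):
```
import torch

@torch.jit.script
def inner(x: torch.Tensor) -> torch.Tensor:
    return x + 1

@torch.jit.script
def outer(x: torch.Tensor) -> torch.Tensor:
    return inner(x) * 2

print(outer.graph)  # the call to `inner` is preserved as a call node
torch._C._jit_pass_inline(outer.graph)  # inlining now runs as a pass
print(outer.graph)  # after the pass, the body of `inner` is inlined
```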
Differential Revision: D16652819
Test Plan: Imported from OSS
Reviewed By: jamesr66a
Pulled By: suo
fbshipit-source-id: a11af82aec796487586f81f5a9102fefb6c246db
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24282
This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.
Test Plan: Imported from OSS
Differential Revision: D16800562
Pulled By: suo
fbshipit-source-id: ebc29bb81f4fb2538081fa309ead1739980f1093
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24281
These are not just classes anymore; rename them.
Test Plan: Imported from OSS
Differential Revision: D16800564
Pulled By: suo
fbshipit-source-id: 8b8d508944c26a8916fc7642df43f22583dfcf82
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24278
We had a lot of redundant methods. Killing them.
Test Plan: Imported from OSS
Differential Revision: D16800561
Pulled By: suo
fbshipit-source-id: 60acc1d5b0f34130a1f66a1e5bc7df364a5feb57
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23846
This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.
Test Plan: Imported from OSS
Differential Revision: D16684390
Pulled By: suo
fbshipit-source-id: fca81ca14d1ac9e4d6b47ae5eecaa42b38d69147
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23845
These are not just classes anymore; rename them.
Test Plan: Imported from OSS
Differential Revision: D16684391
Pulled By: suo
fbshipit-source-id: af0024c0b7fbcca68785ec3fc6dc288ec46a1b84
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23691
We had a lot of redundant methods. Killing them.
Test Plan: Imported from OSS
Differential Revision: D16611883
Pulled By: suo
fbshipit-source-id: a32c0a8b8b7e909b386a70abb0827c26cbd37e20
Summary:
Add `sorted` keyword to JIT for lists and dicts. This desugars to a list copy and a call to `list.sort()`. Since we don't have interfaces yet, I implement it in terms of `list.sort()`. When we do, we can revisit implementing this op in a different manner.
The test fails because of a fix to specialized lists which is landing here: https://github.com/pytorch/pytorch/pull/23267
Ignore the first commit because it is formatting, plz use clang_format ppl :'(
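A minimal sketch of the new keyword in script (standard usage, assuming a list input):
```
import torch
from typing import List

@torch.jit.script
def sort_copy(xs: List[int]) -> List[int]:
    # desugars to a copy of the input list followed by list.sort()
    return sorted(xs)

print(sort_copy([3, 1, 2]))  # [1, 2, 3]
```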
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23274
Differential Revision: D16527323
Pulled By: eellison
fbshipit-source-id: aed8faef23cb790b9af036cd6c1b9b1d7066345d
Summary:
Add support for breaks and continues in the jit. We do this with a Graph transform pre-SSA.
A graph of the form
```
def test():
    while i < 5:
        if i == 3:
            break
        i += 1
        print(i)
```
has the body of the loop transformed to
```
if i == 3:
    did_break = True
else:
    did_break = False
if did_break:
    loop_exit = True
else:
    i += 1
    print(i)
    loop_exit = i < 5
```
I am going to add more tests but I think it is ready for review now.
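For reference, a runnable variant of the example above (with `i` initialized so it scripts cleanly; a sketch, not a test from this PR):
```
import torch

@torch.jit.script
def test() -> int:
    i = 0
    while i < 5:
        if i == 3:
            break
        i += 1
        print(i)
    return i

print(test())  # prints 1, 2, 3 and returns 3
```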
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21692
Differential Revision: D16215807
Pulled By: eellison
fbshipit-source-id: 365102f42de4861d9323caaeb39a96de7619a667