Commit Graph

1178 Commits

Gao, Xiang
5e97f251a8 Enable TF32 support for cuDNN (#40737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40737
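A brief usage sketch; the `torch.backends.cudnn.allow_tf32` flag name is an assumption based on the public TF32 documentation rather than this commit's diff:

```
import torch

# hedged sketch: toggling TF32 for cuDNN convolutions on Ampere-class GPUs
torch.backends.cudnn.allow_tf32 = True   # allow TF32 math in cuDNN
torch.backends.cudnn.allow_tf32 = False  # force full-precision FP32 instead
```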

Reviewed By: mruberry

Differential Revision: D22801525

Pulled By: ngimel

fbshipit-source-id: ac7f7e728b4b3e01925337e8c9996f26a6433fd2
2020-09-01 15:34:24 -07:00
Ralf Gommers
4c19a1e350 Move torch/autograd/grad_mode.pyi stubs inline (#43415)
Summary:
- Add `torch._C` bindings from `torch/csrc/autograd/init.cpp`
- Renamed `torch._C.set_grad_enabled` to `torch._C._set_grad_enabled`
  so it doesn't conflict with torch.set_grad_enabled anymore

This is a continuation of gh-38201. All I did was resolve merge conflicts and finish the annotation of `_DecoratorContextManager.__call__` that ezyang started in the first commit.

~Reverts commit b5cd3a80bb, which was only motivated by not having `typing_extensions` available.~ (JIT can't be made to understand `Literal[False]`, so keep as is).
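A small sketch of the rename's effect; `torch._C._set_grad_enabled` is the binding named above, and the public wrapper is unchanged:

```
import torch

torch._C._set_grad_enabled(False)  # the renamed private binding
print(torch.is_grad_enabled())     # False
torch.set_grad_enabled(True)       # public API, no longer shadowed by the binding
```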

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43415

Reviewed By: ngimel

Differential Revision: D23301168

Pulled By: malfet

fbshipit-source-id: cb5290f2e556b4036592655b9fe54564cbb036f6
2020-08-31 16:14:41 -07:00
Dmytro Dzhulgakov
47e489b135 Make ExtraFilesMap return bytes instead of str (#43241)
Summary:
This is useful if we want to store binary files using the `ScriptModule.save(..., _extra_files=...)` functionality. With Python 3 we can just use bytes and not worry about encoding.

I had to do a copy-pasta from the pybind sources; maybe we should upstream it, but it'd mean adding a bunch of template arguments to `bind_map`, which is a bit untidy.

Let me know if there's a better place to park this function (it seems to be the only invocation of `bind_map`, so I put it in the same file).
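A usage sketch of the bytes-valued extra files this enables, assuming the standard `torch.jit` save/load API:

```
import torch

m = torch.jit.script(torch.nn.Linear(2, 2))
m.save("model.pt", _extra_files={"blob.bin": b"\x00\x01\x02"})

extra = {"blob.bin": b""}                       # keys to extract on load
torch.jit.load("model.pt", _extra_files=extra)  # values are filled in place
print(extra["blob.bin"])                        # b'\x00\x01\x02' -- bytes, not str
```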

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43241

Reviewed By: zdevito

Differential Revision: D23205244

Pulled By: dzhulgakov

fbshipit-source-id: 8f291eb4294945fe1c581c620d48ba2e81b3dd9c
2020-08-28 19:11:33 -07:00
Nikita Shulga
0fa99d50bc Enable torch.cuda.memory typechecking (#43444)
Summary:
Add a number of function prototypes defined in torch/csrc/cuda/Module.cpp to `__init__.pyi.in`
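A hypothetical sketch of the kind of prototypes this adds; the signatures below are assumptions modeled on the `torch.cuda.memory` wrappers:

```
# sketch of stub declarations for __init__.pyi.in (signatures are assumptions)
from typing import Any, Dict

def _cuda_emptyCache() -> None: ...
def _cuda_memoryAllocated(device: int) -> int: ...
def _cuda_memoryStats(device: int) -> Dict[str, Any]: ...
```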

Fixes https://github.com/pytorch/pytorch/issues/43442

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43444

Reviewed By: ezyang

Differential Revision: D23280221

Pulled By: malfet

fbshipit-source-id: 7d67dff7b24c8d7b7e72c919e6e7b847f242ef83
2020-08-24 11:46:04 -07:00
Nikita Shulga
db78c07ced Enable torch.cuda.nvtx typechecking (#43443)
Summary:
Add pyi file covering torch._C.nvtx submodule
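A hypothetical sketch of what such a stub could contain; the function names are assumptions based on the public `torch.cuda.nvtx` wrappers:

```
# sketch of a torch/_C/_nvtx.pyi stub (names are assumptions)
def rangePushA(message: str) -> int: ...
def rangePop() -> int: ...
def markA(message: str) -> None: ...
```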

Fixes https://github.com/pytorch/pytorch/issues/43436

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43443

Reviewed By: ezyang

Differential Revision: D23280188

Pulled By: malfet

fbshipit-source-id: 882860cce9feb0b5307c8b7c887f4a2f2c1548a2
2020-08-24 08:20:12 -07:00
Ralf Gommers
c84f78470b Fix type annotations for a number of torch.utils submodules (#42711)
Summary:
Related issue on `torch.utils` type annotation hiccups: gh-41794

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42711

Reviewed By: mrshenli

Differential Revision: D23005434

Pulled By: malfet

fbshipit-source-id: 151554b1e7582743f032476aeccdfdad7a252095
2020-08-14 18:12:48 -07:00
Keigo Kawamura
5d2e9b6ed9 Add missing type annotation for Tensor.ndim (#42909)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42908
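A minimal sketch of the missing annotation; the enclosing stub class is an assumption, while `Tensor.ndim` itself is the real attribute:

```
# sketch of a Tensor stub entry (placement is an assumption)
class Tensor:
    @property
    def ndim(self) -> int: ...
```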

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42909

Reviewed By: zhangguanheng66

Differential Revision: D23090364

Pulled By: malfet

fbshipit-source-id: 44457fddc86f6abde635aa671e7611b405780ab9
2020-08-12 17:14:20 -07:00
Richard Zou
e8f4b04d9a vmap: temporarily disable support for random functions (#42617)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42617

While we figure out the random plan, I want to initially disable
support for random operations. This is because there is an ambiguity in
what randomness means. For example,

```
tensor = torch.zeros(B0, 1)
vmap(lambda t: t.normal_())(tensor)
```

in the above example, should tensor[0] and tensor[1] be equal (i.e.,
use the same random seed), or should they be different?

The mechanism for disabling random support is as follows:
- We add a new dispatch key called VmapMode
- Whenever we're inside vmap, we enable VmapMode for all tensors.
This is done via at::VmapMode::increment_nesting and
at::VmapMode::decrement_nesting.
- DispatchKey::VmapMode's fallback kernel is the fallthrough kernel.
- We register kernels that raise errors for all random functions on
DispatchKey::VmapMode. This way, whenever someone calls a random
function on any tensor (not just BatchedTensors) inside of a vmap block,
an error gets thrown (see the sketch below).
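A minimal sketch of the resulting behavior; the `torch.vmap` import path and the caught error are assumptions for the master branch at the time:

```
import torch
from torch import vmap  # assumed import path at the time of this PR

tensor = torch.zeros(3, 1)
try:
    vmap(lambda t: t.normal_())(tensor)
except RuntimeError as err:
    # the error kernels registered on DispatchKey::VmapMode fire here
    print(err)
```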

Test Plan: - pytest test/test_vmap.py -v -k "Operators"

Reviewed By: ezyang

Differential Revision: D22954840

Pulled By: zou3519

fbshipit-source-id: cb8d71062d4087e10cbf408f74b1a9dff81a226d
2020-08-11 07:19:51 -07:00
Ralf Gommers
bcab2d6848 Add type annotations for cpp_extension, utils.data, signal_handling (#42647)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42647

Reviewed By: ezyang

Differential Revision: D22967041

Pulled By: malfet

fbshipit-source-id: 35e124da0be56934faef56834a93b2b400decf66
2020-08-06 09:42:07 -07:00
Guilherme Leobas
0c48aa1e07 Add typing annotations to hub.py and _jit_internal.py (#42252)
Summary:
xref: https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42252

Reviewed By: malfet

Differential Revision: D22916480

Pulled By: ezyang

fbshipit-source-id: 392ab805b0023640a3b5cdf600f70638b375f84f
2020-08-04 08:20:44 -07:00
Shen Li
d4736ef95f Add done() API to Future (#42013)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42013
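A brief usage sketch; that `done()` reports whether the future has completed is inferred from the title, so treat the exact semantics as an assumption:

```
import torch

fut = torch.futures.Future()
print(fut.done())   # False: no result has been set yet
fut.set_result(42)
print(fut.done())   # True: the future has completed
```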

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D22729596

Pulled By: mrshenli

fbshipit-source-id: ed31021a35af6e2c3393b9b14e4572cf51013bc0
2020-07-24 14:13:41 -07:00
Nikita Shulga
c0bfa45f9d Enable typechecking for torch.futures (#41675)
Summary:
Add typing declarations for torch._C.Future and torch._C._collect_all
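A hypothetical sketch of such declarations; the method set and the generic parameter are assumptions modeled on the public `torch.futures.Future` API:

```
# sketch of typing declarations (shapes are assumptions)
from typing import Any, Callable, Generic, List, TypeVar

T = TypeVar("T")

class Future(Generic[T]):
    def wait(self) -> T: ...
    def then(self, callback: Callable[["Future[T]"], Any]) -> "Future[Any]": ...

def _collect_all(futures: List[Future]) -> Future: ...
```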

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41675

Reviewed By: izdeby

Differential Revision: D22627539

Pulled By: malfet

fbshipit-source-id: 29b87685d65dd24ee2094bae8a84a0fe3787e7f8
2020-07-23 23:06:45 -07:00
xueht-fnst
0651887eb4 Improve repr for torch.iinfo & torch.finfo (#40488)
Summary:
- Fix https://github.com/pytorch/pytorch/issues/39991
- Directly include `min`/`max`/`eps`/`tiny` values in the repr of `torch.iinfo` & `torch.finfo` for inspection
- Use `torch.float16` / `torch.int16` instead of the non-corresponding names `Half` / `Short`
- The improved repr looks like this:
```
>>> torch.iinfo(torch.int8)
iinfo(type=torch.int8, max=127, min=-128)
>>> torch.iinfo(torch.int16)
iinfo(type=torch.int16, max=32767, min=-32768)
>>> torch.iinfo(torch.int32)
iinfo(type=torch.int32, max=2.14748e+09, min=-2.14748e+09)
>>> torch.iinfo(torch.int64)
iinfo(type=torch.int64, max=9.22337e+18, min=-9.22337e+18)
>>> torch.finfo(torch.float16)
finfo(type=torch.float16, eps=0.000976563, max=65504, min=-65504, tiny=6.10352e-05)
>>> torch.finfo(torch.float32)
finfo(type=torch.float32, eps=1.19209e-07, max=3.40282e+38, min=-3.40282e+38, tiny=1.17549e-38)
>>> torch.finfo(torch.float64)
finfo(type=torch.float64, eps=2.22045e-16, max=1.79769e+308, min=-1.79769e+308, tiny=2.22507e-308)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40488

Differential Revision: D22445301

Pulled By: mruberry

fbshipit-source-id: 552af9904c423006084b45d6c4adfb4b5689db54
2020-07-10 15:22:55 -07:00
Tongzhou Wang
10caf58a52 [typing] tensor._version is int (#41125)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41125

Differential Revision: D22440717

Pulled By: ezyang

fbshipit-source-id: f4849c6e13f01cf247b2f64f68a621b055c8bc17
2020-07-08 17:00:30 -07:00
Tongzhou Wang
0790d11a18 typing for tensor.T/grad_fn torch.Size (#40879)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/40658

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40879

Reviewed By: pbelevich

Differential Revision: D22339146

Pulled By: ezyang

fbshipit-source-id: 6b4695e102591e7a2c391eb337c154414bacf67c
2020-07-04 11:58:29 -07:00
Nikita Shulga
591fffc524 Type-annotate serialization.py (#40862)
Summary:
Move the Storage class from `__init__.pyi.in` to types.py and make it a protocol, since this is not a real class
Expose `PyTorchFileReader` and `PyTorchFileWriter` native classes

Ignore function attributes, as there is not yet a good way to type-annotate those; see https://github.com/python/mypy/issues/2087
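A minimal sketch of the protocol approach; the method set is illustrative, not the actual `torch.types.Storage` definition:

```
# sketch only: a structural ("protocol") Storage type; methods are illustrative
from typing import Protocol  # Python >= 3.8; typing_extensions.Protocol before that

class Storage(Protocol):
    def size(self) -> int: ...
    def element_size(self) -> int: ...
```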
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40862

Differential Revision: D22344743

Pulled By: malfet

fbshipit-source-id: 95cdb6f980ee79383960f306223e170c63df3232
2020-07-02 07:10:55 -07:00
Diego M. Rodriguez
e180ca652f Add __all__ to torch/_C/_VariableFunctions.pyi (#40499)
Summary:
Related to https://github.com/pytorch/pytorch/issues/40397

Inspired by ezyang's comment at https://github.com/pytorch/pytorch/issues/40397#issuecomment-648233001, this PR attempts to use `__all__` to explicitly export private functions from `_VariableFunctions.pyi` in order to make `mypy` aware of them after:

```
if False:
    from torch._C._VariableFunctions import *
```

The generation of the `__all__` template variable excludes some items from `unsorted_function_hints`, as it seems that those without hints end up not being explicitly included in the `.pyi` file: I erred on the side of caution and opted to keep `__all__` consistent with the definitions inside the file. Additionally, added some pretty-printing to avoid having an extremely long line.
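A sketch of the resulting stub layout; the entries are illustrative, since the real list is generated from the function hints:

```
# torch/_C/_VariableFunctions.pyi (sketch; entries are illustrative)
__all__ = [
    "abs",
    "add",
    "_private_helper",  # illustrative: private names become visible via __all__
]
```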
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40499

Differential Revision: D22240716

Pulled By: ezyang

fbshipit-source-id: 77718752577a82b1e8715e666a8a2118a9d3a1cf
2020-06-25 14:10:07 -07:00
Nikita Shulga
b670ff2d3a Add typing for _CudaStreamBase and _CudaEventBase classes (#40256)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40256

Differential Revision: D22139369

Pulled By: malfet

fbshipit-source-id: c7f4f8709700eb85d971ad504dd3552e311cb58d
2020-06-19 11:29:41 -07:00
Nikita Shulga
8b5732e8ad Move torch.cuda annotations inline (#40075)
Summary:
Also enable `torch.cuda` typechecking
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40075

Differential Revision: D22121275

Pulled By: malfet

fbshipit-source-id: dbecef09911334e8f3d87f5ecab66349da9f2325
2020-06-18 15:52:29 -07:00
Kurt Mohler
124cdf2290 Add experimental deterministic flag (#38683)
Summary:
Adds a `torch.experimental.deterministic` flag to enforce deterministic algorithms across all of PyTorch.
Adds `torch.experimental.deterministic_error_level` to let users choose between error/warning/silent if determinism is not available for an operation.
Adds `torch.experimental.alert_not_deterministic()`, which should be called within operations that are not deterministic.
Offers both Python and ATen interfaces.
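A usage sketch based strictly on the names in this summary; this experimental API evolved later, so treat the exact attributes and values as this PR's proposal:

```
import torch

torch.experimental.deterministic = True           # enforce deterministic algorithms
torch.experimental.deterministic_error_level = 1  # illustrative: error/warning/silent level
```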

Issue https://github.com/pytorch/pytorch/issues/15359
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38683

Differential Revision: D21998093

Pulled By: ezyang

fbshipit-source-id: 23aabbddd20f6199d846f97764ff24d728163737
2020-06-12 08:44:06 -07:00
xueht-fnst
51504cb8dd Fix IDE hint channels_last & preserve_format (#39120)
Summary:
- Fixes the nit introduced in https://github.com/pytorch/pytorch/issues/38784.
- `torch.preserve_format` does not show an IDE hint either; it is fixed here as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39120

Differential Revision: D21904575

Pulled By: ezyang

fbshipit-source-id: 80fa1e838e0c444d7b1f2d45e649b51d6c38b54d
2020-06-05 09:48:05 -07:00
Nikita Shulga
8811e4d00d Add/fix typing annotations to some functions (#39075)
Summary:
Add missing typing imports to some jit tests
Add typing annotations to `torch.testing._compare_scalars_internal` and `torch.testing._internal.assertTrue`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39075

Differential Revision: D21882468

Pulled By: malfet

fbshipit-source-id: dd9858eb8e11a38411544cc64daf36fced807d76
2020-06-04 13:40:04 -07:00
Edward Yang
4d880c0693 Device and torch._C function cleanup (#38173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38173

- Introduce torch.types.Device representing all "device-like" types (see the sketch below)
- Stubbed torch.device.__reduce__
- Stubbed all torch._C functions comprehensively
- Deleted _safe_call which is unused throughout the codebase
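A small sketch of what the "device-like" union can look like; the exact members are an assumption modeled on the values `torch.device` accepts elsewhere in the API:

```
from typing import Union

import torch

# sketch only: members of the union are assumptions
Device = Union[torch.device, str, int, None]

def describe(d: Device) -> str:
    # accepts anything "device-like" without forcing a torch.device up front
    return f"device-like value: {d!r}"

print(describe("cuda:0"))
print(describe(torch.device("cpu")))
```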

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21497399

Pulled By: ezyang

fbshipit-source-id: 1f534442b0ec9a70d556545d072f2c06a08b9d15
2020-06-03 19:17:22 -07:00
Shawn Zhong
21ba3b4f40 Fix torch.backends.cudnn mypy error (#38947)
Summary:
Fix https://github.com/pytorch/pytorch/issues/38410

![image](https://user-images.githubusercontent.com/6421097/82724121-74b26880-9c99-11ea-9b63-e92de2dccdf2.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38947

Differential Revision: D21765290

Pulled By: ezyang

fbshipit-source-id: 5d2b25f039a653c609d60cdaac4a7ac5812ae291
2020-06-03 10:55:43 -07:00
anjali411
a50d781c03 Added real and imag views as tensor attributes (#39033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39033

Added `real` and `imag` views as tensor attributes. Right now, `tensor.imag` is disabled for real tensors: if we returned a new tensor of zeros, the user would be able to update the tensor returned by `tensor.imag`, which should not be allowed, since NumPy returns a read-only array and PyTorch doesn't support read-only tensors yet.
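A brief usage sketch of the new attributes; the behavior for real tensors follows the description above:

```
import torch

z = torch.tensor([1 + 2j, 3 - 4j])
print(z.real)  # tensor([1., 3.])
print(z.imag)  # tensor([ 2., -4.])

x = torch.tensor([1.0, 2.0])
# x.imag would raise here: imag is disabled for real tensors (per this PR)
```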

TODO in follow-up PRs:
1. add a setter for `real` and `imag`
2. add special case in codegen for `real` and `imag` backward functions.
3. remove `copy_real` and `copy_imag` methods.

Test Plan: Imported from OSS

Differential Revision: D21767542

Pulled By: anjali411

fbshipit-source-id: 539febf01f01ff055e3fbc7e9ff01fd3fe729056
2020-05-29 12:31:51 -07:00
Ralf Gommers
cebf5a8767 Run mypy on some test files, add iinfo/finfo annotations (#38220)
Summary:
Most test files have a ton of errors; there's not much point adding ignores for them though. The way of working is simply to run `mypy test/test_somefile.py`, fix up the errors, then add that file to the `files =` list in `mypy.ini`.

Can't add all of `test/*` by default, because the JIT test files have (on purpose) syntax errors that are meant to exercise the robustness of the JIT to bad annotations. Leave those alone for now.
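A sketch of the opt-in step this describes; the file names below are illustrative:

```
# mypy.ini (sketch; file names are illustrative)
[mypy]
files =
    test/test_type_info.py,
    test/test_type_hints.py
```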

_Depends on the ghstacked PRs in gh-38173, only the last 2 commits are new._
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38220

Differential Revision: D21503481

Pulled By: ezyang

fbshipit-source-id: 63026e73201c549d64647a03a20a4c6687720244
2020-05-11 20:18:41 -07:00
Edward Yang
6edf340338 Delete torch/__init__.pyi, deferring to direct extension stubs (#38157)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38157

This removes the error-prone process of assembling `torch/__init__.pyi`
(and frequently forgetting to expose things), since now we can simply
rely on the true source file to get things done.  Most of the old
codegen in gen_pyi.py is now rerouted to various files:

- `torch/_C/__init__.pyi` (the dumping pile of all misc bindings)
- `torch/_C/_nn.pyi` (NN function bindings)
- `torch/_C/_VariableFunctions.pyi` (torch function bindings)

`torch.types` grew a bunch more definitions that were previously
defined in `torch/__init__.pyi`

Some miscellaneous changes

- Fixed a bug where we treated a single TensorList argument as implying
  that varargs are accepted. This is actually only supported on IntList.
  This means we can correctly generate a stub for dequantize.
- Add missing manual stub for nonzero
- Switched torch/onnx/operators.py to directly refer to _C module,
  since apparently mypy doesn't think that methods prefixed with
  underscores get reexported.  This may be a recurring theme; maybe
  we need to find a better way to solve it.

Because I was really lazy, I dumped namedtuple definitions in both
`torch._C` and `torch._C._VariableFunctions`.  This is definitely wrong.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21497400

Pulled By: ezyang

fbshipit-source-id: 07b126141c82efaca37be27c07255cb2b9b3f064
2020-05-11 07:20:13 -07:00
Edward Yang
7e9af67ca1 Add minimal skeleton for _C type stubs, delete torch.autograd stub (#38080)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38080

Originally, my plan was to just delete the torch.autograd stub, but
this triggered a bunch of downstream errors relating to non-existent
_C modules, so instead of ignoring those files, I decided to
add minimal _C type stubs where it was easy (I ignored the cases
that were codegened).

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21487841

Pulled By: ezyang

fbshipit-source-id: cfcc467ff1c146d242cb9ff33a46ba26b33b8213
2020-05-08 22:33:21 -07:00