pytorch/docs/source
Edward Yang 173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on by default at Facebook.  "Sure, why not."
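
For reference, the F401 code comes from pyflakes ("module imported
but unused") and is active whenever flake8's config does not ignore
it.  A minimal sketch of such a config (hypothetical; the actual
config file and ignore list in the repository differ):

    # .flake8 -- hypothetical minimal config at the repository root.
    [flake8]
    # F401 is on because it is not listed under ignore; adding it
    # here would suppress the unused-import lint again.
    ignore = E203, E501, W503
    max-line-length = 120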

I had to noqa a number of imports in __init__.py.  Strictly
speaking we're supposed to use __all__ in this case, but I was too
lazy to fix it.  Left for future work.
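
To illustrate both styles (a hypothetical __init__.py, not code from
this patch):

    # package/__init__.py -- hypothetical re-export example.

    # Style used in this patch: keep the re-export, silence the lint.
    from .tensor import Tensor  # noqa: F401

    # Alternative left for future work: names listed in __all__ are
    # the package's public API, so pyflakes does not flag their
    # imports as unused:
    #
    #     from .tensor import Tensor
    #     __all__ = ['Tensor']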

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments: flake8-3 will
report such an import as unused, while flake8-2 will not.  For now,
I just noqa'd all these sites.
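
The pattern in question looks like this (hypothetical function, shown
only to illustrate where the noqa goes):

    # List is only used in the type comment below, hence the noqa.
    from typing import List  # noqa: F401

    def frobnicate(xs):
        # type: (List[int]) -> int
        # flake8-3's pyflakes sees no runtime use of List and flags
        # the import as unused; flake8-2's does not.
        return sum(xs)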

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
Name | Last commit | Date
_static/img | upload alias tracker graph for docs (#17476) | 2019-02-25 16:58:43 -08:00
_templates | Generate sphinx docs with secure content. (#18508) | 2019-03-27 11:01:48 -07:00
community | Fix contribution_guide docs (#18237) | 2019-03-21 13:20:57 -07:00
notes | Add magma debug version for Windows | 2019-03-14 10:15:57 -07:00
scripts | Add CELU activation to pytorch (#8551) | 2018-08-01 07:54:44 -07:00
autograd.rst | Update Tensor doc (#14339) | 2018-11-28 15:28:17 -08:00
bottleneck.rst | [docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763) | 2018-04-19 13:15:27 -04:00
checkpoint.rst | Stashing checkpointing RNG states based on devices of arg tensors (#14518) | 2018-12-11 09:48:45 -08:00
conf.py | Turn on F401: Unused import warning. (#18598) | 2019-03-30 09:01:17 -07:00
cpp_extension.rst | Inline JIT C++ Extensions (#7059) | 2018-04-30 11:48:44 -04:00
cuda_deterministic_backward.rst | Amend nondeterminism notes (#12217) | 2018-10-16 23:59:26 -07:00
cuda_deterministic.rst | Amend nondeterminism notes (#12217) | 2018-10-16 23:59:26 -07:00
cuda.rst | Add cuda.reset_max_memory_* (#15985) | 2019-01-14 07:31:51 -08:00
cudnn_deterministic.rst | Amend nondeterminism notes (#12217) | 2018-10-16 23:59:26 -07:00
cudnn_persistent_rnn.rst | don't copy weight gradients in rnn (#12600) | 2018-10-12 13:34:10 -07:00
data.rst | add fold example and add nn.Fold/nn.Unfold and F.fold/F.unfold to doc (#8600) | 2018-06-18 09:36:42 -04:00
distributed_deprecated.rst | Documentation for c10d: torch.distributed and deprecate the old distributed doc (#11450) | 2018-09-11 02:10:28 -07:00
distributed.rst | Making dist.get_default_group private for PT1 release (#14767) | 2018-12-04 19:22:24 -08:00
distributions.rst | Typos and broken RSTs fixed in torch.distribution (#16136) | 2019-01-23 03:03:10 -08:00
dlpack.rst | document torch.utils.dlpack (#9343) | 2018-07-11 07:46:09 -07:00
hub.rst | fix typo in hub doc | 2019-03-05 23:19:30 -08:00
index.rst | Add PyTorch Governance, Contributor Guide, and List of Persons of Interest | 2019-03-11 10:36:41 -07:00
jit.rst | Add section about .code to docs | 2019-03-26 20:52:31 -07:00
model_zoo.rst | Add model_zoo utility to torch.utils (#424) | 2017-01-09 13:16:58 -05:00
multiprocessing.rst | Implement reference counting for shared IPC CUDA tensors (#16854) | 2019-03-25 10:24:38 -07:00
nn.rst | one_hot docs missing (#17142) | 2019-02-15 10:48:18 -08:00
onnx.rst | Add trigonometry functions to docs/source/onnx.rst | 2018-09-12 12:10:01 -07:00
optim.rst | Adds Cyclical Learning Rate and Momentum (#18001) | 2019-03-27 19:56:04 -07:00
sparse.rst | sparse.mm(), reland #14526 (#14661) | 2018-12-03 10:39:27 -08:00
storage.rst | Start documenting torch.Tensor (#377) | 2016-12-30 01:21:34 -05:00
tensor_attributes.rst | Fix the error in the note about torch.device documentation. (#16839) | 2019-02-09 20:18:58 -08:00
tensors.rst | Rename btrifact* to lu (#18435) | 2019-03-29 00:34:30 -07:00
torch.rst | Rename btriunpack to lu_unpack (#18529) | 2019-03-29 13:01:30 -07:00
type_info.rst | Allow converting char tensor to numpy; add [fi]info.min (#15046) | 2018-12-24 09:11:24 -08:00