pytorch/docs/source
Latest commit: Fixed flatten docs (I think) (#25544), by Horace He (71c97d3747), 2019-09-02 11:34:56 -07:00

Summary:
I think...

I'm having issues building the site, but it appears to get rid of the error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/25544
Differential Revision: D17157327
Pulled By: ezyang
fbshipit-source-id: 170235c52008ca78ff0d8740b2d7f5b67397b614
_static/img                       hyperparameter plugin (#23134)  2019-08-26 10:40:34 -07:00
_templates                        Generate sphinx docs with secure content. (#18508)  2019-03-27 11:01:48 -07:00
community                         Adjust maintainers list (#23693)  2019-08-01 22:59:02 -07:00
notes                             Document benchmarking practice for CUDA  2019-08-13 15:07:23 -07:00
scripts                           Add CELU activation to pytorch (#8551)  2018-08-01 07:54:44 -07:00
__config__.rst                    Allow a non-OpenMP based build (#19749)  2019-05-06 19:34:48 -07:00
autograd.rst                      Added torch.autograd.profiler.record_function() as context manager. (#23428)  2019-07-30 11:10:01 -07:00
bottleneck.rst                    [docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763)  2018-04-19 13:15:27 -04:00
checkpoint.rst                    Stashing checkpointing RNG states based on devices of arg tensors (#14518)  2018-12-11 09:48:45 -08:00
conf.py                           Add docs to CI (#24435)  2019-08-20 21:40:44 -07:00
cpp_extension.rst                 Inline JIT C++ Extensions (#7059)  2018-04-30 11:48:44 -04:00
cuda_deterministic_backward.rst   Typo correction in cuda_deterministic_backward.rst (#25011)  2019-08-22 21:19:39 -07:00
cuda_deterministic.rst            Amend nondeterminism notes (#12217)  2018-10-16 23:59:26 -07:00
cuda.rst                          Add cuda.reset_max_memory_* (#15985)  2019-01-14 07:31:51 -08:00
cudnn_deterministic.rst           Amend nondeterminism notes (#12217)  2018-10-16 23:59:26 -07:00
cudnn_persistent_rnn.rst          don't copy weight gradients in rnn (#12600)  2018-10-12 13:34:10 -07:00
data.rst                          Slightly improve dataloader docs on when auto-batching is disabled (#23671)  2019-08-01 12:10:17 -07:00
distributed.rst                   Update distributed.rst (#23289)  2019-07-26 16:55:52 -07:00
distributions.rst                 More doc edits (#19929)  2019-04-30 13:52:07 -07:00
dlpack.rst                        document torch.utils.dlpack (#9343)  2018-07-11 07:46:09 -07:00
hub.rst                           better example for local weights (#21685)  2019-06-13 17:56:25 -07:00
index.rst                         Fix builtin function reference (#24056)  2019-08-09 15:58:15 -07:00
jit_builtin_functions.rst         Fix builtin function reference (#24056)  2019-08-09 15:58:15 -07:00
jit.rst                           Fix item() call in docs  2019-08-29 13:50:04 -07:00
model_zoo.rst                     add/move a few apis in torch.hub (#18758)  2019-04-10 23:10:39 -07:00
multiprocessing.rst               Update multiprocessing note now that shared CUDA tensors are refcounted (#19904)  2019-05-25 17:40:42 -07:00
nn.functional.rst                 Breaks up NN module in docs so it loads faster.  2019-06-11 09:38:41 -07:00
nn.init.rst                       Add document of functions nn.init.ones_/zeros_ (#23145)  2019-07-25 06:09:50 -07:00
nn.rst                            Fixed flatten docs (I think) (#25544)  2019-09-02 11:34:56 -07:00
onnx.rst                          Fix dead link and syntax in ONNX landing page  2019-08-29 23:58:34 -07:00
optim.rst                         Add OneCycleLR (#25324)  2019-08-28 16:59:40 -07:00
random.rst                        Adds torch.random to docs/toc (#23553)  2019-08-07 16:31:32 -07:00
sparse.rst                        sparse.mm(), reland #14526 (#14661)  2018-12-03 10:39:27 -08:00
storage.rst                       Start documenting torch.Tensor (#377)  2016-12-30 01:21:34 -05:00
tensor_attributes.rst             Clarify that torch.device without an index will always represent the current device (#23468)  2019-07-27 06:49:52 -07:00
tensorboard.rst                   Update tensorboard.rst (#22026)  2019-08-12 15:02:26 -07:00
tensors.rst                       Add missing functions and methods for channelwise quantization (#24934)  2019-08-23 15:44:16 -07:00
torch.rst                         Add logical_xor operator (#23847)  2019-08-15 08:40:25 -07:00
type_info.rst                     Allow converting char tensor to numpy; add [fi]info.min (#15046)  2018-12-24 09:11:24 -08:00