pytorch/docs/source
Xiang Gao 4fcab92d6c Move outplace ops to ATen (#16788)
Summary:
Based on https://github.com/pytorch/pytorch/pull/12413, with the following additional changes:

- In `native_functions.yml`, move the outplace operators next to their corresponding inplace operators so reviewers can easily check that the two match
- Set `matches_jit_signature: True` for them
- Add the missing `scatter` overload with a Scalar source
- Add the missing `masked_fill` and `index_fill` overloads with a Tensor source
- Add a missing test for `scatter` with a Scalar source
- Add missing tests for `masked_fill` and `index_fill` with a Tensor source by checking the gradient w.r.t. the source (see the sketch after this list)
- Add missing docs to `tensor.rst`

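For illustration, a minimal sketch of how the outplace variants and the Scalar/Tensor-source overloads behave; the shapes, values, and mask here are hypothetical, chosen only to exercise the ops:

```python
import torch

# Outplace vs. inplace: masked_fill returns a new tensor,
# masked_fill_ mutates its input.
x = torch.zeros(3, 4)
y = x.masked_fill(x == 0, 1.0)   # x is unchanged
x.masked_fill_(x == 0, 1.0)      # x is mutated in place

# scatter with a Scalar source: write the value 5.0 at the
# positions selected by `index` along dim 0, out of place.
index = torch.tensor([[0, 1, 2, 0]])
z = torch.zeros(3, 4).scatter(0, index, 5.0)

# masked_fill / index_fill with a 0-dim Tensor source, so the
# fill value itself can participate in autograd.
src = torch.tensor(3.0, requires_grad=True)
out = torch.zeros(2, 3).masked_fill(torch.zeros(2, 3) > -1, src)
out.sum().backward()
print(src.grad)  # gradient w.r.t. the source: one per filled element

# index_fill with the same 0-dim Tensor source, filling columns 0 and 2.
w = torch.zeros(3, 4).index_fill(1, torch.tensor([0, 2]), src)
```
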
Differential Revision: D14069925

Pulled By: ezyang

fbshipit-source-id: bb3f0cb51cf6b756788dc4955667fead6e8796e5
2019-02-15 15:58:10 -08:00
_static/img Optimize images (#14084) 2018-12-05 22:46:32 -08:00
_templates Add Google pixel code 2018-10-23 13:26:37 -07:00
notes Add cuda.reset_max_memory_* (#15985) 2019-01-14 07:31:51 -08:00
scripts Add CELU activation to pytorch (#8551) 2018-08-01 07:54:44 -07:00
autograd.rst Update Tensor doc (#14339) 2018-11-28 15:28:17 -08:00
bottleneck.rst [docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763) 2018-04-19 13:15:27 -04:00
checkpoint.rst Stashing checkpointing RNG states based on devices of arg tensors (#14518) 2018-12-11 09:48:45 -08:00
conf.py Remove outdated css and font files in html docs (#13699) 2018-11-07 16:31:28 -08:00
cpp_extension.rst Inline JIT C++ Extensions (#7059) 2018-04-30 11:48:44 -04:00
cuda_deterministic_backward.rst Amend nondeterminism notes (#12217) 2018-10-16 23:59:26 -07:00
cuda_deterministic.rst Amend nondeterminism notes (#12217) 2018-10-16 23:59:26 -07:00
cuda.rst Add cuda.reset_max_memory_* (#15985) 2019-01-14 07:31:51 -08:00
cudnn_deterministic.rst Amend nondeterminism notes (#12217) 2018-10-16 23:59:26 -07:00
cudnn_persistent_rnn.rst don't copy weight gradients in rnn (#12600) 2018-10-12 13:34:10 -07:00
data.rst add fold example and add nn.Fold/nn.Unfold and F.fold/F.unfold to doc (#8600) 2018-06-18 09:36:42 -04:00
distributed_deprecated.rst Documentation for c10d: torch.distributed and deprecate the old distributed doc (#11450) 2018-09-11 02:10:28 -07:00
distributed.rst Making dist.get_default_group private for PT1 release (#14767) 2018-12-04 19:22:24 -08:00
distributions.rst Typos and broken RSTs fixed in torch.distribution (#16136) 2019-01-23 03:03:10 -08:00
dlpack.rst document torch.utils.dlpack (#9343) 2018-07-11 07:46:09 -07:00
hub.rst Improve hub documentation (#14862) 2018-12-07 14:59:01 -08:00
index.rst remove legacy from docs (#15112) 2018-12-25 21:57:54 -08:00
jit.rst doc updates for TorchScript (#16866) 2019-02-07 18:03:57 -08:00
model_zoo.rst Add model_zoo utility to torch.utils (#424) 2017-01-09 13:16:58 -05:00
multiprocessing.rst add example multiprocess code (#16345) 2019-01-30 09:35:58 -08:00
nn.rst one_hot docs missing (#17142) 2019-02-15 10:48:18 -08:00
onnx.rst Add trigonometry functions to docs/source/onnx.rst 2018-09-12 12:10:01 -07:00
optim.rst Add Cosine Annealing LR Scheduler (#3311) 2017-12-18 02:43:08 -05:00
sparse.rst sparse.mm(), reland #14526 (#14661) 2018-12-03 10:39:27 -08:00
storage.rst Start documenting torch.Tensor (#377) 2016-12-30 01:21:34 -05:00
tensor_attributes.rst Fix the error in the note about torch.device documentation. (#16839) 2019-02-09 20:18:58 -08:00
tensors.rst Move outplace ops to ATen (#16788) 2019-02-15 15:58:10 -08:00
torch.rst Add some missing docs to torch.rst, new unittest to enforce torch.rst no longer miss anything (#16039) 2019-02-15 07:02:31 -08:00
type_info.rst Allow converting char tensor to numpy; add [fi]info.min (#15046) 2018-12-24 09:11:24 -08:00