pytorch/docs/source
Tony Beltramelli 7fcaf3b49e Update torch.nn.init and torch.nn.utils.clip_grad (#6173)
Introducing two updates.

1. Add a non-linearity parameter to the He initialization scheme in torch.nn.init
Problem solved:
The function calculate_gain accepts an argument specifying the type of non-linearity used, but it was not possible to pass this argument directly to the He / Kaiming weight initialization functions.
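A minimal usage sketch of the new parameter (assuming the nonlinearity keyword as it exists in torch.nn.init today; the tensor shape and negative slope are illustrative):

```python
import torch
import torch.nn.init as init

w = torch.empty(256, 128)

# Before: the gain had to be computed separately via calculate_gain,
# with no way to hand the non-linearity to the initializer itself.
gain = init.calculate_gain('leaky_relu', 0.2)

# After this change: pass the non-linearity (and its negative slope `a`)
# straight to the He / Kaiming initializer.
init.kaiming_normal_(w, a=0.2, mode='fan_in', nonlinearity='leaky_relu')
```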

2. Add a util to clip gradient values in torch.nn.utils.clip_grad
Problem solved:
Deep-learning libraries typically give users easy access to functions that clip gradients both by norm and by a fixed value. However, torch/nn/utils/clip_grad.py only provided a function to clip the gradient norm.
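A minimal sketch of the new util next to the existing norm-based one (function names as in torch.nn.utils; the model and thresholds are illustrative):

```python
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_norm_, clip_grad_value_

model = nn.Linear(10, 2)
model(torch.randn(4, 10)).sum().backward()

# Existing util: rescale gradients so their total norm is at most 1.0.
clip_grad_norm_(model.parameters(), max_norm=1.0)

# New util from this change: clamp every gradient element to [-0.5, 0.5].
clip_grad_value_(model.parameters(), clip_value=0.5)
```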

* add param to He initialization scheme in torch.nn.init

* add util to clip gradient value in torch/nn/utils/clip_grad.py

* update doc in torch.nn.utils.clip_grad

* update and add test for torch.nn.utils.clip_grad

* update function signature in torch.nn.utils.clip_grad to match the suffix_ convention

* ensure backward compatibility in torch.nn.utils.clip_grad (see the sketch after this list)

* remove DeprecationWarning in torch.nn.utils.clip_grad

* extend test and implementation of torch.nn.utils.clip_grad

* update test and implementation of torch.nn.utils.clip_grad
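A minimal sketch of the backward-compatibility pattern the commits above describe (the wrapper body is an assumption; only the renamed clip_grad_norm_ is confirmed by this change):

```python
from torch.nn.utils import clip_grad_norm_

def clip_grad_norm(parameters, max_norm, norm_type=2):
    """Old spelling kept for backward compatibility; delegates to the
    suffix_-named clip_grad_norm_ without raising a DeprecationWarning."""
    return clip_grad_norm_(parameters, max_norm, norm_type)
```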
2018-04-17 11:32:32 -04:00
_static Add different logo for master docs (#6446) 2018-04-09 18:48:53 -04:00
_templates Add link in docs menu to stable docs (#6475) 2018-04-10 14:53:04 -04:00
notes Link relevant FAQ section in DataLoader docs (#6476) 2018-04-11 13:41:46 -04:00
scripts fix activation images not showing up on official website (#6367) 2018-04-07 11:06:24 -04:00
autograd.rst fix typo (#6329) 2018-04-05 21:36:16 -04:00
bottleneck.rst bottleneck supports better user-provided arguments (#6425) 2018-04-09 13:57:26 -04:00
checkpoint.rst [Re-checkpointing] Autograd container for trading compute for memory (#6467) 2018-04-10 15:26:24 -04:00
conf.py Add different logo for master docs (#6446) 2018-04-09 18:48:53 -04:00
cpp_extension.rst Enable documentation for C++ extensions on the website (#5597) 2018-03-07 14:07:26 +01:00
cuda.rst Fix Python docs for broadcast and broadcast_coalesced (#4727) 2018-01-19 10:57:20 -05:00
data.rst Add ConcatDataset to docs (#2337) 2017-08-08 07:16:04 -04:00
device.rst Add device docs; match constructor parameter names with attribute names. (#6633) 2018-04-17 09:55:44 -04:00
distributed.rst Added distributed docs on NCCL2 backend/functions and launch module (#6579) 2018-04-15 21:53:10 -04:00
distributions.rst [distributions] Implement Independent distribution (#6615) 2018-04-16 11:42:12 -04:00
ffi.rst Improve ffi utils (#479) 2017-01-18 11:17:01 -05:00
index.rst Add device docs; match constructor parameter names with attribute names. (#6633) 2018-04-17 09:55:44 -04:00
legacy.rst Add anything in torch.legacy docs 2017-01-16 12:59:47 -05:00
model_zoo.rst Add model_zoo utility to torch.utils (#424) 2017-01-09 13:16:58 -05:00
multiprocessing.rst Typofix 2017-10-13 01:31:22 +02:00
nn.rst Update torch.nn.init and torch.nn.utils.clip_grad (#6173) 2018-04-17 11:32:32 -04:00
onnx.rst fixed softmax support documentation (#5557) 2018-03-05 08:59:06 -05:00
optim.rst Add Cosine Annealing LR Scheduler (#3311) 2017-12-18 02:43:08 -05:00
sparse.rst Copy-edit sparse constructor docs for clarity. 2017-08-15 13:36:30 -04:00
storage.rst Start documenting torch.Tensor (#377) 2016-12-30 01:21:34 -05:00
tensors.rst Add device docs; match constructor parameter names with attribute names. (#6633) 2018-04-17 09:55:44 -04:00
torch.rst Split set_default_tensor_type(dtype) into set_default_dtype(dtype). (#6599) 2018-04-16 13:49:00 -04:00