pytorch/docs/source
Ailing Zhang 227c8f2654 Implement nn.functional.interpolate based on upsample. (#8591)
Summary:
This PR addresses #5823.

* Fix docstring: upsample doesn't support LongTensor.

* Enable float-scale up- and downsampling for the linear/bilinear/trilinear modes (following SsnL's commit).

* Enable float-scale up- and downsampling for the nearest mode. Note that our implementation differs slightly from TF's: there is no "align_corners" concept in this mode.

* Add a new interpolate function API to replace upsample, and add a deprecation warning to upsample (see the usage sketches after this list).

* Add an "area" mode, which is essentially adaptive average pooling, to resize_image.

* Add test cases for interpolate in test_nn.py.

* Add a few comments to help readers understand the *linear interpolation code.

* The only mode still missing from the resize_images API is "*cubic", which is pretty useful in practice; it is labeled as a hackamonth task in #1552. I discussed with SsnL that we probably want to implement all new ops in ATen instead of THNN/THCUNN. Depending on the priority, I could either put it in my queue or leave it for a HAMer.

* After this change, the files named *Upsampling*.c handle both up- and downsampling. I could rename the files if needed.
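
A minimal sketch of the new API described above (the tensor shapes and the explicit align_corners value are illustrative assumptions, not taken from the PR):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)  # NCHW input

# Non-integer scale factors now work for both up- and downsampling.
up = F.interpolate(x, scale_factor=2.5, mode='bilinear', align_corners=False)
down = F.interpolate(x, scale_factor=0.5, mode='nearest')
print(up.shape, down.shape)  # torch.Size([1, 3, 20, 20]) torch.Size([1, 3, 4, 4])

# The old entry point still works, but it now emits a warning
# pointing callers at interpolate.
legacy = F.upsample(x, scale_factor=2, mode='nearest')
```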
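
And a small check of the "area" mode against adaptive average pooling; the PR only says "essentially", so exact numerical agreement here is an assumption:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 16, 16)

# 'area' downsampling should behave like adaptive average pooling
# over the same output size.
area = F.interpolate(x, size=(4, 4), mode='area')
pooled = F.adaptive_avg_pool2d(x, output_size=(4, 4))
print(torch.allclose(area, pooled))  # expected: True
```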

Differential Revision: D8729635

Pulled By: ailzhang

fbshipit-source-id: a98dc5e1f587fce17606b5764db695366a6bb56b
2018-07-06 15:28:11 -07:00
_static i2h<->h2h in gif (#8750) 2018-06-21 14:46:47 -04:00
_templates docs: add canonical_url and fix redirect link (#8155) 2018-06-05 10:29:55 -04:00
notes Clarify mp note about sharing a tensor's grad field. (#8688) 2018-06-20 14:22:38 -04:00
scripts Add grid lines for activation images, fixes #9130 (#9134) 2018-07-03 19:10:00 -07:00
autograd.rst Add autograd automatic anomaly detection (#7677) 2018-06-11 21:26:17 -04:00
bottleneck.rst [docs] Clarify more CUDA profiling gotchas in bottleneck docs (#6763) 2018-04-19 13:15:27 -04:00
checkpoint.rst [docs] Fix some sphinx warnings (#6764) 2018-04-19 12:37:42 -04:00
conf.py docs: add canonical_url and fix redirect link (#8155) 2018-06-05 10:29:55 -04:00
cpp_extension.rst Inline JIT C++ Extensions (#7059) 2018-04-30 11:48:44 -04:00
cuda.rst Fix Python docs for broadcast and braodcast_coalesced (#4727) 2018-01-19 10:57:20 -05:00
data.rst add fold example and add nn.Fold/nn.Unfold and F.fold/F.unfold to doc (#8600) 2018-06-18 09:36:42 -04:00
distributed.rst fix nccl distributed documentation 2018-05-17 18:03:54 -04:00
distributions.rst Add half cauchy, half normal distributions (#8411) 2018-06-14 10:28:42 +02:00
ffi.rst Improve ffi utils (#479) 2017-01-18 11:17:01 -05:00
index.rst Update docs with new tensor repr (#6454) 2018-04-21 07:35:37 -04:00
legacy.rst Add anything in torch.legacy docs 2017-01-16 12:59:47 -05:00
model_zoo.rst Add model_zoo utility torch torch.utils (#424) 2017-01-09 13:16:58 -05:00
multiprocessing.rst Typofix 2017-10-13 01:31:22 +02:00
nn.rst Implement nn.functional.interpolate based on upsample. (#8591) 2018-07-06 15:28:11 -07:00
onnx.rst Add gt lt ge le to the supported operators list (#8375) 2018-06-12 15:28:34 -04:00
optim.rst Add Cosine Annealing LR Scheduler (#3311) 2017-12-18 02:43:08 -05:00
sparse.rst Update docs with new tensor repr (#6454) 2018-04-21 07:35:37 -04:00
storage.rst Start documenting torch.Tensor (#377) 2016-12-30 01:21:34 -05:00
tensor_attributes.rst Update device docs (#6887) 2018-04-23 19:04:20 -04:00
tensors.rst Implement torch.pinverse : Pseudo-inverse (#9052) 2018-07-05 09:11:24 -07:00
torch.rst Implement torch.pinverse : Pseudo-inverse (#9052) 2018-07-05 09:11:24 -07:00