pytorch/torch
Peter Bell 04e0cbf5a9 Add padding='same' mode to conv{1,2,3}d (#45667)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45667

First part of #3867 (Pooling operators still to do)

This adds a `padding='same'` mode to the interface of `conv{n}d` and `nn.Conv{n}d`. This should match the behaviour of `tensorflow`. I couldn't find it explicitly documented, but through experimentation I found that `tensorflow` returns an output of shape `ceil(len/stride)` and always adds any extra asymmetric padding onto the right side of the input.
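
For illustration, a minimal usage sketch of the new interface (the tensor shapes here are arbitrary; with the default stride of 1, `padding='same'` preserves the spatial length):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 10)   # (batch, channels, length)
w = torch.randn(8, 4, 3)    # (out_channels, in_channels, kernel_size)

y = F.conv1d(x, w, padding='same')
print(y.shape)              # torch.Size([1, 8, 10]): length preserved
```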

Since the `native_functions.yaml` schema doesn't seem to support strings or enums, I've moved the function interface into Python, where it now dispatches between the numerically padded `conv{n}d` and the `_conv{n}d_same` variant. The leading underscore is there because I couldn't see any way to avoid exporting the function into the `torch` namespace.
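
A simplified, hypothetical sketch of that dispatch (the `torch._conv1d_same` entry point is named after the description above; the real internal name, signature, and error handling may differ):

```python
import torch

def conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
    if padding == 'same':
        # Hypothetical underscored variant described above; illustrative only.
        return torch._conv1d_same(input, weight, bias, stride, dilation, groups)
    # Numeric padding: falls through to the existing native kernel unchanged.
    return torch.conv1d(input, weight, bias, stride, padding, dilation, groups)
```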

A note on asymmetric padding: the total padding required can be odd if the kernel length is even and the dilation is odd. mkldnn has native support for asymmetric padding, so there is no overhead there, but for other backends I resort to padding the input tensor by 1 on the right-hand side to make the remaining padding symmetrical. In these cases, I use `TORCH_WARN_ONCE` to notify the user of the performance implications.
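
A sketch of that workaround for the 1-d case with stride 1 (a pure-Python approximation of the behaviour described, not the merged code):

```python
import torch
import torch.nn.functional as F

def conv1d_same(x, weight, bias=None, dilation=1):
    k = weight.shape[-1]
    total = dilation * (k - 1)      # total padding required for 'same'
    if total % 2 == 1:
        # Odd total (even kernel length, odd dilation): zero-pad the input
        # by 1 on the right so the remaining padding is symmetrical.
        x = F.pad(x, [0, 1])
        total -= 1
    return F.conv1d(x, weight, bias, stride=1,
                    padding=total // 2, dilation=dilation)

x = torch.randn(1, 4, 10)
w = torch.randn(8, 4, 2)            # even kernel length -> odd total padding
print(conv1d_same(x, w).shape)      # torch.Size([1, 8, 10])
```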

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D27170744

Pulled By: jbschlosser

fbshipit-source-id: b3d8a0380e0787ae781f2e5d8ee365a7bfd49f22
2021-03-18 16:22:03 -07:00
_C [distributed] add processgroup options as argument (#53663) 2021-03-18 01:04:17 -07:00
autograd Remove usage of onEachDevice from legacy profiler (#54125) 2021-03-18 12:19:51 -07:00
backends
contrib
csrc Add padding='same' mode to conv{1,2,3}d (#45667) 2021-03-18 16:22:03 -07:00
cuda [RELAND] [CUDA graphs] Private mempools for CUDA graphs (#54038) 2021-03-16 12:13:33 -07:00
distributed [distributed] add processgroup options as argument (#53663) 2021-03-18 01:04:17 -07:00
distributions Fix distributions which don't properly honor validate_args=False (#53600) 2021-03-10 13:16:32 -08:00
fft [doc] Fix documentations of torch functions (#52982) 2021-03-01 09:59:57 -08:00
for_onnx
futures [torch.futures] Add note about error handling for non-chained futures. (#53212) 2021-03-04 18:09:23 -08:00
fx [FX] Normalize torch. namespace ops (#53832) 2021-03-17 23:34:29 -07:00
jit [JIT] Update Namespace from cuda to _cuda (#53378) 2021-03-11 00:52:01 -08:00
legacy
lib [distributed] add options field in ProcessGroupGloo/NCCL (#54090) 2021-03-17 18:41:55 -07:00
linalg Add torch.linalg.vector_norm function (#51099) 2021-03-18 06:41:39 -07:00
multiprocessing
nn Add padding='same' mode to conv{1,2,3}d (#45667) 2021-03-18 16:22:03 -07:00
onnx [ONNX] Update embedding export wrt padding_idx (#53931) 2021-03-15 10:03:53 -07:00
optim Cleanup of unused list in adam.py (#53874) 2021-03-15 09:49:27 -07:00
package [package] autoformat (#53783) 2021-03-15 17:18:43 -07:00
profiler
quantization ns for fx: move lstm dynamic weight test case to new API (#53772) 2021-03-12 10:02:43 -08:00
sparse Forbid trailing whitespace (#53406) 2021-03-05 17:22:55 -08:00
special [special] add torch.special namespace (#52296) 2021-03-04 00:04:36 -08:00
testing Add padding='same' mode to conv{1,2,3}d (#45667) 2021-03-18 16:22:03 -07:00
utils Revert D26907093: Add repeats to Timer.collect_callgrind(...) 2021-03-17 20:14:21 -07:00
__config__.py
__future__.py
__init__.py Fix pylint error torch.tensor is not callable (#53424) 2021-03-09 11:32:53 -08:00
_appdirs.py
_autograd_functions.py
_classes.py
_deploy.py [package] Pull out _UnpicklerWrapper into PackageUnpickler (#53049) 2021-03-01 18:40:52 -08:00
_jit_internal.py Fix doc (#53996) 2021-03-15 15:44:30 -07:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_storage_docs.py
_tensor_docs.py Implements cpu_kernel_multiple_outputs and torch.frexp (#51097) 2021-03-15 10:44:32 -07:00
_tensor_str.py
_tensor.py Fix typo "informations" -> "information" (#53746) 2021-03-12 12:07:38 -08:00
_torch_docs.py Added autograd support for torch.orgqr (#52637) 2021-03-18 05:42:18 -07:00
_utils_internal.py
_utils.py Introduce mlc device (ML Compute device) to PyTorch's device list (#50634) 2021-02-24 22:39:11 -08:00
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Use touch() in pathlib for better compatibility on Windows (#52729) 2021-02-25 13:46:21 -08:00
custom_class_detail.h [PyTorch] Remove reference_cast in make_boxed_from_unboxed_functor (#51319) 2021-02-17 10:58:44 -08:00
custom_class.h Add property binding in torchbind (#50670) 2021-03-03 14:25:52 -08:00
deploy.h [deploy] torch::deploy API (#51754) 2021-02-18 02:30:08 -08:00
extension.h
functional.py Fix pylint error torch.tensor is not callable (#53424) 2021-03-09 11:32:53 -08:00
hub.py
library.h Make meta a device (getting rid of empty_meta) (#53143) 2021-03-03 11:24:13 -08:00
overrides.py [JIT] Add CUDNN Conv-Add-Relu fusion for Frozen Model Optimization (#52102) 2021-03-18 15:18:52 -07:00
py.typed
quasirandom.py
random.py
README.txt
script.h
serialization.py Fix pylint error torch.tensor is not callable (#53424) 2021-03-09 11:32:53 -08:00
storage.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.