Add linalg.vander

Author: lezcano
Commit: 621ff0f973
This PR adds `linalg.vander`, the linalg version of `torch.vander`.

We add autograd support and support for batched inputs.

We also take this chance to improve the docs (TODO: Check that they
render correctly!) and add an OpInfo.
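
For reference, a minimal usage sketch (the exact signature, assumed here to be `torch.linalg.vander(x, N=None)`, is still subject to the discussion below):

```python
import torch

# Sketch only: assumes the signature torch.linalg.vander(x, N=None)
# proposed in this PR, with increasing powers in the columns.
x = torch.tensor([1., 2., 3.], requires_grad=True)
V = torch.linalg.vander(x)           # (3, 3): columns x**0, x**1, x**2
V.sum().backward()                   # autograd support added in this PR

# Batched inputs: a (*, n) input yields a (*, n, N) batch of matrices.
xb = torch.randn(4, 5)
Vb = torch.linalg.vander(xb, N=5)    # (4, 5, 5)
```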

**Discussion**: The current default for the `increasing` kwarg is extremely
odd, as it is the opposite of the classical definition (see
[wiki](https://en.wikipedia.org/wiki/Vandermonde_matrix)). This is
reflected in the docs, where I make explicit both the odd default that we
use and the classical definition. See also [this stackoverflow
post](https://stackoverflow.com/a/71758047/5280578), which shows how
people are confused by this default.
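
To make the oddity concrete, here is NumPy's behaviour (which `torch.vander` copies): the default produces decreasing powers, the reverse of the classical definition `V[i][j] = x[i]**j`:

```python
import numpy as np

x = np.array([1, 2, 3])

np.vander(x)                   # default increasing=False: DECREASING powers
# array([[1, 1, 1],
#        [4, 2, 1],
#        [9, 3, 1]])

np.vander(x, increasing=True)  # classical definition: V[i, j] = x[i]**j
# array([[1, 1, 1],
#        [1, 2, 4],
#        [1, 3, 9]])
```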

My take on this would be to correct the default to `increasing=True`
and document the divergence from NumPy (as we do for other `linalg`
functions), because:

- It is what people expect
- It gives the determinant classically known as "the Vandermonde determinant", rather than (-1)^{n(n-1)/2} times the Vandermonde determinant (ugh); see the numeric check after this list.
- [Minor] It is more efficient (no `flip` needed)
- Since it lives under `linalg.vander`, it does not strictly need to be a drop-in replacement for `np.vander`.
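
A quick numeric check of the determinant point above (using NumPy only for illustration):

```python
import numpy as np
from itertools import combinations

x = np.array([1., 2., 4.])
n = len(x)

# Classical Vandermonde determinant: prod over i < j of (x[j] - x[i]).
classical = np.prod([x[j] - x[i] for i, j in combinations(range(n), 2)])

det_inc = np.linalg.det(np.vander(x, increasing=True))  # matches the formula
det_dec = np.linalg.det(np.vander(x))                   # picks up the reversal sign

print(classical, det_inc, det_dec)  # ~ 6.0, 6.0, -6.0
# Reversing the n columns multiplies the determinant by (-1)**(n*(n-1)//2);
# for n = 3 that is (-1)**3 = -1.
```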

We will deprecate `torch.vander` in a follow-up PR in this stack
(once we settle on the correct default).

Thoughts? @mruberry

cc @kgryte @rgommers, as they might have some context on NumPy's choice of
defaults.

Fixes https://github.com/pytorch/pytorch/issues/60197

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76303

Approved by: https://github.com/albanD, https://github.com/mruberry
Committed: 2022-05-06 08:44:14 +00:00