Andrew M. James cfb2034b65 Add spdiags sparse matrix initialization (#78439)
Similar to [scipy.sparse.spdiags](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.spdiags.html#scipy-sparse-spdiags)

Part of #70926

In other functions (e.g. [torch.diagonal](https://pytorch.org/docs/stable/generated/torch.diagonal.html#torch.diagonal)), diagonals of a tensor are referenced using the offset and the two dimensions that the diagonal is taken with respect to.
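
For reference, a minimal illustration of how `torch.diagonal` identifies a diagonal by an offset and a pair of dimensions (not part of this proposal, just the existing convention):

```
import torch

x = torch.arange(16).reshape(4, 4)
# The diagonal one above the main diagonal, taken with respect to dims 0 and 1.
d = torch.diagonal(x, offset=1, dim1=0, dim2=1)
print(d)  # -> tensor([1, 6, 11])
```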

The reference implementation from scipy only considers matrix output, so we will only support 2-D output at first. Even so, it may be useful to consider how the dimensions corresponding to each diagonal would be specified for higher-dimensional output.

The proposed torch signature implies that all offsets refer to the diagonals with respect to the only two dimensions of the output:

```
torch.sparse.spdiags(Tensor diagonals, IntTensor offsets, int[] shape, Layout? layout=None) -> SparseTensor
```
Above, it is required that `diagonals.ndimension() == 2`, `offsets.ndimension() == 1`, `offsets.shape[0] == diagonals.shape[0]`, and `len(shape) == 2`.
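
To make this concrete, a sketch of intended usage under the proposed 2-D signature (values are illustrative only; the exact alignment of rows to diagonals follows the scipy reference linked above):

```
import torch

# Row i of `diagonals` supplies values for the diagonal at `offsets[i]`.
diagonals = torch.tensor([[1., 2., 3.],
                          [4., 5., 6.]])
offsets = torch.tensor([0, -1])
# Proposed call: build a 3x3 sparse matrix from the two diagonals.
result = torch.sparse.spdiags(diagonals, offsets, (3, 3))
print(result.to_dense())
```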

This would need to be altered for the case where `len(shape) > 2`. One option is:
```
torch.sparse.spdiags(Tensor[] diagonals, IntTensor[] offsets, IntTensor dims, int[] shape, Layout? layout=None) -> SparseTensor
```

Here `offsets` and `diagonals` become lists of tensors, and the `IntTensor dims` argument is introduced. This would require that `len(diagonals) == len(offsets) == dims.shape[0]`, `dims.ndimension() == 2`, and `dims.shape[1] == 2`. The same restrictions as in the 2-D case above apply to the elements of `diagonals` and `offsets` pairwise (that is, `diagonals[i].ndimension() == 2`, `offsets[i].ndimension() == 1`, and `offsets[i].shape[0] == diagonals[i].shape[0]` for all `i`). This form of the signature would construct the sparse result by placing the values from `diagonals[i][j]` into the diagonal with offset `offsets[i][j]`, taken with respect to dimensions `dims[i]`. The specialization back to the original signature for the 2-D case could be seen as allowing the single row of `dims` to default to `[0, 1]` when only one `diagonals`/`offsets` pair is provided and `shape` is 2-D. This option allows the rows of an input element `diagonals[i]` to have a different length, which may be appropriate as the maximum length of a diagonal along different dimension pairs will differ.
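
For illustration, a hypothetical call under this alternative signature (nothing here is implemented; names and shapes are made up):

```
import torch

# Hypothetical: build a (2, 4, 4) sparse tensor from two groups of diagonals.
# Group 0: three diagonals taken with respect to output dims (1, 2).
# Group 1: one diagonal taken with respect to output dims (0, 1).
diagonals = [torch.randn(3, 4), torch.randn(1, 2)]
offsets = [torch.tensor([-1, 0, 1]), torch.tensor([0])]
dims = torch.tensor([[1, 2], [0, 1]])
# out = torch.sparse.spdiags(diagonals, offsets, dims, (2, 4, 4))
```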

Another option is to specify, for each offset, the dimensions the diagonal is taken with respect to. This signature would look like:

```
torch.sparse.spdiags(Tensor diagonals, IntTensor offsets, IntTensor dims, int[] shape, Layout? layout=None) -> SparseTensor
```
Here, `diagonals` is still 2-D, with dimension 0 matching the length of the 1-D `offsets`. The tensor input `dims` is also 2-D, with dimension 0 matching the length of `offsets` and dimension 1 fixed at `2`. In this case the sparse result is constructed by placing the elements from `diagonals[i]` into the output diagonal `output.diagonal(offsets[i], dim1=dims[i][0], dim2=dims[i][1])` (with some additional consideration that makes it more complicated than simply assigning to that view). The specialization from this back to the 2-D form could be seen as assuming `dims = [[0, 1], [0, 1], ...]` (repeated `len(offsets)` times) when `len(shape) == 2`.
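
A rough dense sketch of the construction described above (a hypothetical helper, ignoring the alignment subtleties mentioned in the parenthetical):

```
import torch

def spdiags_nd_dense_sketch(diagonals, offsets, dims, shape):
    # Hypothetical reference: place row i of `diagonals` on the diagonal with
    # offset `offsets[i]`, taken with respect to the dimension pair `dims[i]`.
    out = torch.zeros(shape)
    for i, off in enumerate(offsets.tolist()):
        d0, d1 = dims[i].tolist()
        view = out.diagonal(offset=off, dim1=d0, dim2=d1)
        n = min(view.shape[-1], diagonals.shape[1])
        # Simplistic alignment; a real implementation would follow the scipy
        # convention for trimming each row against its diagonal's length.
        view[..., :n] = diagonals[i, :n]
    return out
```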

In both proposed signatures for the N-D case, the specialization back to the 2-D signature is a bit of a stretch for typical default-argument logic; however, I think the first is the better choice, as it offers more flexibility.

I think some discussion is required about:
- [x] Should the N-D output case be implemented from the outset?
- [x] If not, should the future addition of the N-D output case be considered when designing the interface?
- [x] Other thoughts on the signature that includes the `dims` information for the N-D output case.

**Resolution**: Since no one has requested N-D output support, I think it is fine to restrict this to sparse matrix generation. Should a request for N-D support come later, an overload accepting the additional `dims` could be added.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78439
Approved by: https://github.com/nikitaved, https://github.com/cpuhrsch, https://github.com/pearu
2022-06-30 19:54:47 +00:00