"""Adds docstrings to functions defined in the torch._C"""
|
|
|
|
import re
|
|
|
|
import torch._C
|
|
from torch._C import _add_docstr as add_docstr
|
|
|
|
|
|
def parse_kwargs(desc):
    """Maps a description of args to a dictionary of {argname: description}.
    Input:
        ('    weight (Tensor): a weight tensor\n' +
         '        Some optional description')
    Output: {
        'weight': \
        'weight (Tensor): a weight tensor\n    Some optional description'
    }
    """
    # Split on exactly 4 spaces after a newline
    regx = re.compile(r"\n\s{4}(?!\s)")
    kwargs = [section.strip() for section in regx.split(desc)]
    kwargs = [section for section in kwargs if len(section) > 0]
    return {desc.split(' ')[0]: desc for desc in kwargs}

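# Illustrative sketch (added comment, not part of the original module): for
# descriptions written in the 4-space-per-argument style used below, the helper
# returns one entry per argument, keyed by the argument's name.
#
#     >>> d = parse_kwargs("""
#     ...     out (Tensor, optional): the output tensor
#     ...     dtype (:class:`torch.dtype`, optional): the desired type
#     ... """)
#     >>> sorted(d.keys())
#     ['dtype', 'out']
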
reduceops_common_args = parse_kwargs("""
    dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
        If specified, the input tensor is cast to :attr:`dtype` before the operation
        is performed. This is useful for preventing data type overflows. Default: None.
""")

factory_common_args = parse_kwargs("""
    out (Tensor, optional): the output tensor
    dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
        Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
    layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
        Default: ``torch.strided``.
    device (:class:`torch.device`, optional): the desired device of returned tensor.
        Default: if ``None``, uses the current device for the default tensor type
        (see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
        for CPU tensor types and the current CUDA device for CUDA tensor types.
    requires_grad (bool, optional): If autograd should record operations on the
        returned tensor. Default: ``False``.
""")

factory_like_common_args = parse_kwargs("""
    input (Tensor): the size of :attr:`input` will determine size of the output tensor
    layout (:class:`torch.layout`, optional): the desired layout of returned tensor.
        Default: if ``None``, defaults to the layout of :attr:`input`.
    dtype (:class:`torch.dtype`, optional): the desired data type of returned Tensor.
        Default: if ``None``, defaults to the dtype of :attr:`input`.
    device (:class:`torch.device`, optional): the desired device of returned tensor.
        Default: if ``None``, defaults to the device of :attr:`input`.
    requires_grad (bool, optional): If autograd should record operations on the
        returned tensor. Default: ``False``.
""")

factory_data_common_args = parse_kwargs("""
    data (array_like): Initial data for the tensor. Can be a list, tuple,
        NumPy ``ndarray``, scalar, and other types.
    dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
        Default: if ``None``, infers data type from :attr:`data`.
    device (:class:`torch.device`, optional): the desired device of returned tensor.
        Default: if ``None``, uses the current device for the default tensor type
        (see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
        for CPU tensor types and the current CUDA device for CUDA tensor types.
    requires_grad (bool, optional): If autograd should record operations on the
        returned tensor. Default: ``False``.
""")

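# Sketch of how these shared argument blocks can be consumed (added comment; the
# actual call sites fall outside this excerpt): a docstring template leaves
# ``{}``-style placeholders that are then filled in with ``str.format``, e.g.
#
#     example_doc = r"""
#     ones(*sizes, out=None, dtype=None, layout=torch.strided) -> Tensor
#
#     Args:
#         {out}
#         {dtype}
#         {layout}
#     """.format(**factory_common_args)
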
add_docstr(torch.abs,
           r"""
abs(input, out=None) -> Tensor

Computes the element-wise absolute value of the given :attr:`input` tensor.

.. math::
    \text{out}_{i} = |\text{input}_{i}|

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> torch.abs(torch.tensor([-1, -2, 3]))
    tensor([ 1, 2, 3])
""")

add_docstr(torch.acos,
           r"""
acos(input, out=None) -> Tensor

Returns a new tensor with the arccosine of the elements of :attr:`input`.

.. math::
    \text{out}_{i} = \cos^{-1}(\text{input}_{i})

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.3348, -0.5889, 0.2005, -0.1584])
    >>> torch.acos(a)
    tensor([ 1.2294, 2.2004, 1.3690, 1.7298])
""")

add_docstr(torch.add,
           r"""
.. function:: add(input, value, out=None)

Adds the scalar :attr:`value` to each element of the input :attr:`input`
and returns a new resulting tensor.

.. math::
    out = input + value

If :attr:`input` is of type FloatTensor or DoubleTensor, :attr:`value` must be
a real number, otherwise it should be an integer.

Args:
    input (Tensor): the input tensor
    value (Number): the number to be added to each element of :attr:`input`

Keyword arguments:
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.0202, 1.0985, 1.3506, -0.6056])
    >>> torch.add(a, 20)
    tensor([ 20.0202, 21.0985, 21.3506, 19.3944])

.. function:: add(input, value=1, other, out=None)

Each element of the tensor :attr:`other` is multiplied by the scalar
:attr:`value` and added to each element of the tensor :attr:`input`.
The resulting tensor is returned.

The shapes of :attr:`input` and :attr:`other` must be
:ref:`broadcastable <broadcasting-semantics>`.

.. math::
    out = input + value \times other

If :attr:`other` is of type FloatTensor or DoubleTensor, :attr:`value` must be
a real number, otherwise it should be an integer.

Args:
    input (Tensor): the first input tensor
    value (Number): the scalar multiplier for :attr:`other`
    other (Tensor): the second input tensor

Keyword arguments:
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([-0.9732, -0.3497, 0.6245, 0.4022])
    >>> b = torch.randn(4, 1)
    >>> b
    tensor([[ 0.3743],
            [-1.7724],
            [-0.5811],
            [-0.8017]])
    >>> torch.add(a, 10, b)
    tensor([[  2.7695,   3.3930,   4.3672,   4.1450],
            [-18.6971, -18.0736, -17.0994, -17.3216],
            [ -6.7845,  -6.1610,  -5.1868,  -5.4090],
            [ -8.9902,  -8.3667,  -7.3925,  -7.6147]])
""")

add_docstr(torch.addbmm,
           r"""
addbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) -> Tensor

Performs a batch matrix-matrix product of matrices stored
in :attr:`batch1` and :attr:`batch2`,
with a reduced add step (all matrix multiplications get accumulated
along the first dimension).
:attr:`mat` is added to the final result.

:attr:`batch1` and :attr:`batch2` must be 3-D tensors each containing the
same number of matrices.

If :attr:`batch1` is a :math:`(b \times n \times m)` tensor, :attr:`batch2` is a
:math:`(b \times m \times p)` tensor, :attr:`mat` must be
:ref:`broadcastable <broadcasting-semantics>` with a :math:`(n \times p)` tensor
and :attr:`out` will be a :math:`(n \times p)` tensor.

.. math::
    out = \beta\ mat + \alpha\ (\sum_{i=0}^{b-1} batch1_i \mathbin{@} batch2_i)

For inputs of type `FloatTensor` or `DoubleTensor`, arguments :attr:`beta` and :attr:`alpha`
must be real numbers, otherwise they should be integers.

Args:
    beta (Number, optional): multiplier for :attr:`mat` (:math:`\beta`)
    mat (Tensor): matrix to be added
    alpha (Number, optional): multiplier for `batch1 @ batch2` (:math:`\alpha`)
    batch1 (Tensor): the first batch of matrices to be multiplied
    batch2 (Tensor): the second batch of matrices to be multiplied
    out (Tensor, optional): the output tensor

Example::

    >>> M = torch.randn(3, 5)
    >>> batch1 = torch.randn(10, 3, 4)
    >>> batch2 = torch.randn(10, 4, 5)
    >>> torch.addbmm(M, batch1, batch2)
    tensor([[  6.6311,   0.0503,   6.9768, -12.0362,  -2.1653],
            [ -4.8185,  -1.4255,  -6.6760,   8.9453,   2.5743],
            [ -3.8202,   4.3691,   1.0943,  -1.1109,   5.4730]])
""")

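# Equivalence sketch (added comment, not from the original file): with the
# default beta=1 and alpha=1, the "reduced add step" above means
#
#     torch.addbmm(M, batch1, batch2)
#
# matches
#
#     M + torch.bmm(batch1, batch2).sum(dim=0)
#
# up to floating-point accumulation order.
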
add_docstr(torch.addcdiv,
           r"""
addcdiv(tensor, value=1, tensor1, tensor2, out=None) -> Tensor

Performs the element-wise division of :attr:`tensor1` by :attr:`tensor2`,
multiplies the result by the scalar :attr:`value` and adds it to :attr:`tensor`.

.. math::
    out_i = tensor_i + value \times \frac{tensor1_i}{tensor2_i}

The shapes of :attr:`tensor`, :attr:`tensor1`, and :attr:`tensor2` must be
:ref:`broadcastable <broadcasting-semantics>`.

For inputs of type `FloatTensor` or `DoubleTensor`, :attr:`value` must be
a real number, otherwise an integer.

Args:
    tensor (Tensor): the tensor to be added
    value (Number, optional): multiplier for :math:`tensor1 ./ tensor2`
    tensor1 (Tensor): the numerator tensor
    tensor2 (Tensor): the denominator tensor
    out (Tensor, optional): the output tensor

Example::

    >>> t = torch.randn(1, 3)
    >>> t1 = torch.randn(3, 1)
    >>> t2 = torch.randn(1, 3)
    >>> torch.addcdiv(t, 0.1, t1, t2)
    tensor([[-0.2312, -3.6496,  0.1312],
            [-1.0428,  3.4292, -0.1030],
            [-0.5369, -0.9829,  0.0430]])
""")

add_docstr(torch.addcmul,
|
|
r"""
|
|
addcmul(tensor, value=1, tensor1, tensor2, out=None) -> Tensor
|
|
|
|
Performs the element-wise multiplication of :attr:`tensor1`
by :attr:`tensor2`, multiplies the result by the scalar :attr:`value`
and adds it to :attr:`tensor`.
|
|
|
|
.. math::
|
|
out_i = tensor_i + value \times tensor1_i \times tensor2_i
|
|
|
|
The shapes of :attr:`tensor`, :attr:`tensor1`, and :attr:`tensor2` must be
|
|
:ref:`broadcastable <broadcasting-semantics>`.
|
|
|
|
For inputs of type `FloatTensor` or `DoubleTensor`, :attr:`value` must be
|
|
a real number, otherwise an integer.
|
|
|
|
Args:
|
|
tensor (Tensor): the tensor to be added
|
|
value (Number, optional): multiplier for :math:`tensor1 .* tensor2`
|
|
tensor1 (Tensor): the tensor to be multiplied
|
|
tensor2 (Tensor): the tensor to be multiplied
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> t = torch.randn(1, 3)
|
|
>>> t1 = torch.randn(3, 1)
|
|
>>> t2 = torch.randn(1, 3)
|
|
>>> torch.addcmul(t, 0.1, t1, t2)
|
|
tensor([[-0.8635, -0.6391, 1.6174],
|
|
[-0.7617, -0.5879, 1.7388],
|
|
[-0.8353, -0.6249, 1.6511]])
|
|
""")
|
|
|
|
add_docstr(torch.addmm,
|
|
r"""
|
|
addmm(beta=1, mat, alpha=1, mat1, mat2, out=None) -> Tensor
|
|
|
|
Performs a matrix multiplication of the matrices :attr:`mat1` and :attr:`mat2`.
|
|
The matrix :attr:`mat` is added to the final result.
|
|
|
|
If :attr:`mat1` is a :math:`(n \times m)` tensor, :attr:`mat2` is a
|
|
:math:`(m \times p)` tensor, then :attr:`mat` must be
|
|
:ref:`broadcastable <broadcasting-semantics>` with a :math:`(n \times p)` tensor
|
|
and :attr:`out` will be a :math:`(n \times p)` tensor.
|
|
|
|
:attr:`alpha` and :attr:`beta` are scaling factors on the matrix-matrix product between
:attr:`mat1` and :attr:`mat2` and the added matrix :attr:`mat` respectively.
|
|
|
|
.. math::
|
|
out = \beta\ mat + \alpha\ (mat1 \mathbin{@} mat2)
|
|
|
|
For inputs of type `FloatTensor` or `DoubleTensor`, arguments :attr:`beta` and
|
|
:attr:`alpha` must be real numbers, otherwise they should be integers.
|
|
|
|
Args:
|
|
beta (Number, optional): multiplier for :attr:`mat` (:math:`\beta`)
|
|
mat (Tensor): matrix to be added
|
|
alpha (Number, optional): multiplier for :math:`mat1 @ mat2` (:math:`\alpha`)
|
|
mat1 (Tensor): the first matrix to be multiplied
|
|
mat2 (Tensor): the second matrix to be multiplied
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> M = torch.randn(2, 3)
|
|
>>> mat1 = torch.randn(2, 3)
|
|
>>> mat2 = torch.randn(3, 3)
|
|
>>> torch.addmm(M, mat1, mat2)
|
|
tensor([[-4.8716, 1.4671, -1.3746],
|
|
[ 0.7573, -3.9555, -2.8681]])
|
|
""")
|
|
|
|
add_docstr(torch.addmv,
|
|
r"""
|
|
addmv(beta=1, tensor, alpha=1, mat, vec, out=None) -> Tensor
|
|
|
|
Performs a matrix-vector product of the matrix :attr:`mat` and
|
|
the vector :attr:`vec`.
|
|
The vector :attr:`tensor` is added to the final result.
|
|
|
|
If :attr:`mat` is a :math:`(n \times m)` tensor, :attr:`vec` is a 1-D tensor of
|
|
size `m`, then :attr:`tensor` must be
|
|
:ref:`broadcastable <broadcasting-semantics>` with a 1-D tensor of size `n` and
|
|
:attr:`out` will be a 1-D tensor of size `n`.
|
|
|
|
:attr:`alpha` and :attr:`beta` are scaling factors on the matrix-vector product between
|
|
:attr:`mat` and :attr:`vec` and the added tensor :attr:`tensor` respectively.
|
|
|
|
.. math::
|
|
out = \beta\ tensor + \alpha\ (mat \mathbin{@} vec)
|
|
|
|
For inputs of type `FloatTensor` or `DoubleTensor`, arguments :attr:`beta` and
|
|
:attr:`alpha` must be real numbers, otherwise they should be integers.
|
|
|
|
Args:
|
|
beta (Number, optional): multiplier for :attr:`tensor` (:math:`\beta`)
|
|
tensor (Tensor): vector to be added
|
|
alpha (Number, optional): multiplier for :math:`mat @ vec` (:math:`\alpha`)
|
|
mat (Tensor): matrix to be multiplied
|
|
vec (Tensor): vector to be multiplied
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> M = torch.randn(2)
|
|
>>> mat = torch.randn(2, 3)
|
|
>>> vec = torch.randn(3)
|
|
>>> torch.addmv(M, mat, vec)
|
|
tensor([-0.3768, -5.5565])
|
|
""")
|
|
|
|
add_docstr(torch.addr,
|
|
r"""
|
|
addr(beta=1, mat, alpha=1, vec1, vec2, out=None) -> Tensor
|
|
|
|
Performs the outer-product of vectors :attr:`vec1` and :attr:`vec2`
|
|
and adds it to the matrix :attr:`mat`.
|
|
|
|
Optional values :attr:`beta` and :attr:`alpha` are scaling factors on the
|
|
outer product between :attr:`vec1` and :attr:`vec2` and the added matrix
|
|
:attr:`mat` respectively.
|
|
|
|
.. math::
|
|
out = \beta\ mat + \alpha\ (vec1 \otimes vec2)
|
|
|
|
If :attr:`vec1` is a vector of size `n` and :attr:`vec2` is a vector
|
|
of size `m`, then :attr:`mat` must be
|
|
:ref:`broadcastable <broadcasting-semantics>` with a matrix of size
|
|
:math:`(n \times m)` and :attr:`out` will be a matrix of size
|
|
:math:`(n \times m)`.
|
|
|
|
For inputs of type `FloatTensor` or `DoubleTensor`, arguments :attr:`beta` and
|
|
:attr:`alpha` must be real numbers, otherwise they should be integers.
|
|
|
|
Args:
|
|
beta (Number, optional): multiplier for :attr:`mat` (:math:`\beta`)
|
|
mat (Tensor): matrix to be added
|
|
alpha (Number, optional): multiplier for :math:`vec1 \otimes vec2` (:math:`\alpha`)
|
|
vec1 (Tensor): the first vector of the outer product
|
|
vec2 (Tensor): the second vector of the outer product
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> vec1 = torch.arange(1., 4.)
|
|
>>> vec2 = torch.arange(1., 3.)
|
|
>>> M = torch.zeros(3, 2)
|
|
>>> torch.addr(M, vec1, vec2)
|
|
tensor([[ 1., 2.],
|
|
[ 2., 4.],
|
|
[ 3., 6.]])
|
|
""")
|
|
|
|
add_docstr(torch.as_tensor,
|
|
r"""
|
|
as_tensor(data, dtype=None, device=None) -> Tensor
|
|
|
|
Converts the data into a `torch.Tensor`. If the data is already a `Tensor` of the same `dtype` and `device`, no copy
|
|
will be performed. Similarly, if the data is an ``ndarray`` of the corresponding `dtype` and the `device` is the cpu,
|
|
no copy will be performed.
|
|
|
|
Args:
|
|
{data}
|
|
{dtype}
|
|
{device}
|
|
|
|
Example::
|
|
|
|
>>> torch.as_tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
|
|
tensor([[ 0.1000, 1.2000],
|
|
[ 2.2000, 3.1000],
|
|
[ 4.9000, 5.2000]])
|
|
|
|
>>> a = numpy.array([1, 2, 3])
|
|
>>> t = torch.as_tensor(a)
|
|
>>> t
|
|
tensor([ 1, 2, 3])
|
|
>>> t[0] = -1
|
|
>>> a
|
|
array([-1, 2, 3])
|
|
""".format(**factory_data_common_args))
|
|
|
|
add_docstr(torch.asin,
|
|
r"""
|
|
asin(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the arcsine of the elements of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \sin^{-1}(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-0.5962, 1.4985, -0.4396, 1.4525])
|
|
>>> torch.asin(a)
|
|
tensor([-0.6387, nan, -0.4552, nan])
|
|
""")
|
|
|
|
add_docstr(torch.atan,
|
|
r"""
|
|
atan(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the arctangent of the elements of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \tan^{-1}(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 0.2341, 0.2539, -0.6256, -0.6448])
|
|
>>> torch.atan(a)
|
|
tensor([ 0.2299, 0.2487, -0.5591, -0.5727])
|
|
""")
|
|
|
|
add_docstr(torch.atan2,
|
|
r"""
|
|
atan2(input1, input2, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the arctangent of the elements of :attr:`input1`
|
|
and :attr:`input2`.
|
|
|
|
The shapes of :attr:`input1` and :attr:`input2` must be
|
|
:ref:`broadcastable <broadcasting-semantics>`.
|
|
|
|
Args:
|
|
input1 (Tensor): the first input tensor
|
|
input2 (Tensor): the second input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 0.9041, 0.0196, -0.3108, -2.4423])
|
|
>>> torch.atan2(a, torch.randn(4))
|
|
tensor([ 0.9833, 0.0811, -1.9743, -1.4151])
|
|
""")
|
|
|
|
add_docstr(torch.baddbmm,
|
|
r"""
|
|
baddbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) -> Tensor
|
|
|
|
Performs a batch matrix-matrix product of matrices in :attr:`batch1`
|
|
and :attr:`batch2`.
|
|
:attr:`mat` is added to the final result.
|
|
|
|
:attr:`batch1` and :attr:`batch2` must be 3-D tensors each containing the same
|
|
number of matrices.
|
|
|
|
If :attr:`batch1` is a :math:`(b \times n \times m)` tensor, :attr:`batch2` is a
|
|
:math:`(b \times m \times p)` tensor, then :attr:`mat` must be
|
|
:ref:`broadcastable <broadcasting-semantics>` with a
|
|
:math:`(b \times n \times p)` tensor and :attr:`out` will be a
|
|
:math:`(b \times n \times p)` tensor. Both :attr:`alpha` and :attr:`beta` mean the
|
|
same as the scaling factors used in :meth:`torch.addbmm`.
|
|
|
|
.. math::
|
|
out_i = \beta\ mat_i + \alpha\ (batch1_i \mathbin{@} batch2_i)
|
|
|
|
For inputs of type `FloatTensor` or `DoubleTensor`, arguments :attr:`beta` and
|
|
:attr:`alpha` must be real numbers, otherwise they should be integers.
|
|
|
|
Args:
|
|
beta (Number, optional): multiplier for :attr:`mat` (:math:`\beta`)
|
|
mat (Tensor): the tensor to be added
|
|
alpha (Number, optional): multiplier for `batch1 @ batch2` (:math:`\alpha`)
|
|
batch1 (Tensor): the first batch of matrices to be multiplied
|
|
batch2 (Tensor): the second batch of matrices to be multiplied
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> M = torch.randn(10, 3, 5)
|
|
>>> batch1 = torch.randn(10, 3, 4)
|
|
>>> batch2 = torch.randn(10, 4, 5)
|
|
>>> torch.baddbmm(M, batch1, batch2).size()
|
|
torch.Size([10, 3, 5])
|
|
""")
|
|
|
|
add_docstr(torch.bernoulli,
|
|
r"""
|
|
bernoulli(input, out=None) -> Tensor
|
|
|
|
Draws binary random numbers (0 or 1) from a Bernoulli distribution.
|
|
|
|
The :attr:`input` tensor should be a tensor containing probabilities
|
|
to be used for drawing the binary random number.
|
|
Hence, all values in :attr:`input` have to be in the range:
|
|
:math:`0 \leq \text{input}_i \leq 1`.
|
|
|
|
The :math:`\text{i}^{th}` element of the output tensor will draw a
|
|
value `1` according to the :math:`\text{i}^{th}` probability value given
|
|
in :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} \sim \mathrm{Bernoulli}(p = \text{input}_{i})
|
|
|
|
The returned :attr:`out` tensor only has values 0 or 1 and is of the same
|
|
shape as :attr:`input`
|
|
|
|
Args:
|
|
input (Tensor): the input tensor of probability values for the Bernoulli distribution
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.empty(3, 3).uniform_(0, 1) # generate a uniform random matrix with range [0, 1]
|
|
>>> a
|
|
tensor([[ 0.1737, 0.0950, 0.3609],
|
|
[ 0.7148, 0.0289, 0.2676],
|
|
[ 0.9456, 0.8937, 0.7202]])
|
|
>>> torch.bernoulli(a)
|
|
tensor([[ 1., 0., 0.],
|
|
[ 0., 0., 0.],
|
|
[ 1., 1., 1.]])
|
|
|
|
>>> a = torch.ones(3, 3) # probability of drawing "1" is 1
|
|
>>> torch.bernoulli(a)
|
|
tensor([[ 1., 1., 1.],
|
|
[ 1., 1., 1.],
|
|
[ 1., 1., 1.]])
|
|
>>> a = torch.zeros(3, 3) # probability of drawing "1" is 0
|
|
>>> torch.bernoulli(a)
|
|
tensor([[ 0., 0., 0.],
|
|
[ 0., 0., 0.],
|
|
[ 0., 0., 0.]])
|
|
""")
|
|
|
|
add_docstr(torch.bincount,
|
|
r"""
|
|
bincount(self, weights=None, minlength=0) -> Tensor
|
|
|
|
Counts the frequency of each value in an array of non-negative ints.
|
|
|
|
The number of bins (size 1) is one larger than the largest value in
|
|
:attr:`input`. If :attr:`minlength` is specified, the number of bins is at least
|
|
:attr:`minlength`. If ``n`` is the value at position ``i``,
|
|
:math:`out[n] += weights[i]` if :attr:`weights` is specified else
|
|
:math:`out[n] += 1`.
|
|
|
|
Arguments:
|
|
input (Tensor): 1-d int tensor
|
|
weights (Tensor): optional, weight for each value in the input tensor.
|
|
Should be of same size as input tensor.
|
|
minlength (int): optional, min number of bins. Should be non-negative.
|
|
|
|
Shape:
|
|
output (Tensor): ``Size([max(input) + 1])``
|
|
|
|
Example::
|
|
|
|
>>> input = torch.randint(0, 8, (5,), dtype=torch.int64)
|
|
>>> weights = torch.linspace(0, 1, steps=5)
|
|
>>> input, weights
|
|
(tensor([4, 3, 6, 3, 4]),
|
|
tensor([ 0.0000, 0.2500, 0.5000, 0.7500, 1.0000]))
|
|
|
|
>>> torch.bincount(input)
|
|
tensor([0, 0, 0, 2, 2, 0, 1])
|
|
|
|
>>> input.bincount(weights)
|
|
tensor([0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 0.0000, 0.5000])
|
|
""")
|
|
|
|
add_docstr(torch.bmm,
|
|
r"""
|
|
bmm(batch1, batch2, out=None) -> Tensor
|
|
|
|
Performs a batch matrix-matrix product of matrices stored in :attr:`batch1`
|
|
and :attr:`batch2`.
|
|
|
|
:attr:`batch1` and :attr:`batch2` must be 3-D tensors each containing
|
|
the same number of matrices.
|
|
|
|
If :attr:`batch1` is a :math:`(b \times n \times m)` tensor, :attr:`batch2` is a
|
|
:math:`(b \times m \times p)` tensor, :attr:`out` will be a
|
|
:math:`(b \times n \times p)` tensor.
|
|
|
|
.. math::
|
|
out_i = batch1_i \mathbin{@} batch2_i
|
|
|
|
.. note:: This function does not :ref:`broadcast <broadcasting-semantics>`.
|
|
For broadcasting matrix products, see :func:`torch.matmul`.
|
|
|
|
Args:
|
|
batch1 (Tensor): the first batch of matrices to be multiplied
|
|
batch2 (Tensor): the second batch of matrices to be multiplied
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> batch1 = torch.randn(10, 3, 4)
|
|
>>> batch2 = torch.randn(10, 4, 5)
|
|
>>> res = torch.bmm(batch1, batch2)
|
|
>>> res.size()
|
|
torch.Size([10, 3, 5])
|
|
""")
|
|
|
|
add_docstr(torch.stack,
|
|
r"""
|
|
stack(seq, dim=0, out=None) -> Tensor
|
|
|
|
Concatenates sequence of tensors along a new dimension.
|
|
|
|
All tensors need to be of the same size.
|
|
|
|
Arguments:
|
|
seq (sequence of Tensors): sequence of tensors to concatenate
|
|
dim (int): dimension to insert. Has to be between 0 and the number
|
|
of dimensions of concatenated tensors (inclusive)
|
|
out (Tensor, optional): the output tensor
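
A minimal illustration (values chosen by hand so the result is easy to check):

Example::

>>> x = torch.tensor([1., 2.])
>>> y = torch.tensor([3., 4.])
>>> torch.stack((x, y))
tensor([[ 1., 2.],
[ 3., 4.]])
>>> torch.stack((x, y), dim=1)
tensor([[ 1., 3.],
[ 2., 4.]])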
""")
|
|
|
|
add_docstr(torch.chunk,
|
|
r"""
|
|
chunk(tensor, chunks, dim=0) -> List of Tensors
|
|
|
|
Splits a tensor into a specific number of chunks.
|
|
|
|
Last chunk will be smaller if the tensor size along the given dimension
|
|
:attr:`dim` is not divisible by :attr:`chunks`.
|
|
|
|
Arguments:
|
|
tensor (Tensor): the tensor to split
|
|
chunks (int): number of chunks to return
|
|
dim (int): dimension along which to split the tensor
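
A small worked example (an 8-element tensor split into 3 chunks, so the last
chunk is smaller than the others):

Example::

>>> x = torch.arange(8.)
>>> torch.chunk(x, 3)
(tensor([ 0., 1., 2.]), tensor([ 3., 4., 5.]), tensor([ 6., 7.]))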
""")
|
|
|
|
add_docstr(torch.cat,
|
|
r"""
|
|
cat(seq, dim=0, out=None) -> Tensor
|
|
|
|
Concatenates the given sequence of :attr:`seq` tensors in the given dimension.
|
|
All tensors must either have the same shape (except in the concatenating
|
|
dimension) or be empty.
|
|
|
|
:func:`torch.cat` can be seen as an inverse operation for :func:`torch.split`
|
|
and :func:`torch.chunk`.
|
|
|
|
:func:`torch.cat` can be best understood via examples.
|
|
|
|
Args:
|
|
seq (sequence of Tensors): any python sequence of tensors of the same type.
|
|
Non-empty tensors provided must have the same shape, except in the
|
|
cat dimension.
|
|
dim (int, optional): the dimension over which the tensors are concatenated
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(2, 3)
|
|
>>> x
|
|
tensor([[ 0.6580, -1.0969, -0.4614],
|
|
[-0.1034, -0.5790, 0.1497]])
|
|
>>> torch.cat((x, x, x), 0)
|
|
tensor([[ 0.6580, -1.0969, -0.4614],
|
|
[-0.1034, -0.5790, 0.1497],
|
|
[ 0.6580, -1.0969, -0.4614],
|
|
[-0.1034, -0.5790, 0.1497],
|
|
[ 0.6580, -1.0969, -0.4614],
|
|
[-0.1034, -0.5790, 0.1497]])
|
|
>>> torch.cat((x, x, x), 1)
|
|
tensor([[ 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614, 0.6580,
|
|
-1.0969, -0.4614],
|
|
[-0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497, -0.1034,
|
|
-0.5790, 0.1497]])
|
|
""")
|
|
|
|
add_docstr(torch.ceil,
|
|
r"""
|
|
ceil(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the ceil of the elements of :attr:`input`,
|
|
the smallest integer greater than or equal to each element.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \left\lceil \text{input}_{i} \right\rceil
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-0.6341, -1.4208, -1.0900, 0.5826])
|
|
>>> torch.ceil(a)
|
|
tensor([-0., -1., -1., 1.])
|
|
""")
|
|
|
|
add_docstr(torch.reciprocal,
|
|
r"""
|
|
reciprocal(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the reciprocal of the elements of :attr:`input`
|
|
|
|
.. math::
|
|
\text{out}_{i} = \frac{1}{\text{input}_{i}}
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-0.4595, -2.1219, -1.4314, 0.7298])
|
|
>>> torch.reciprocal(a)
|
|
tensor([-2.1763, -0.4713, -0.6986, 1.3702])
|
|
""")
|
|
|
|
add_docstr(torch.clamp,
|
|
r"""
|
|
clamp(input, min, max, out=None) -> Tensor
|
|
|
|
Clamp all elements in :attr:`input` into the range `[` :attr:`min`, :attr:`max` `]` and return
|
|
a resulting tensor:
|
|
|
|
.. math::
|
|
y_i = \begin{cases}
|
|
\text{min} & \text{if } x_i < \text{min} \\
|
|
x_i & \text{if } \text{min} \leq x_i \leq \text{max} \\
|
|
\text{max} & \text{if } x_i > \text{max}
|
|
\end{cases}
|
|
|
|
If :attr:`input` is of type `FloatTensor` or `DoubleTensor`, args :attr:`min`
|
|
and :attr:`max` must be real numbers, otherwise they should be integers.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
min (Number): lower-bound of the range to be clamped to
|
|
max (Number): upper-bound of the range to be clamped to
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-1.7120, 0.1734, -0.0478, -0.0922])
|
|
>>> torch.clamp(a, min=-0.5, max=0.5)
|
|
tensor([-0.5000, 0.1734, -0.0478, -0.0922])
|
|
|
|
.. function:: clamp(input, *, min, out=None) -> Tensor
|
|
|
|
Clamps all elements in :attr:`input` to be larger than or equal to :attr:`min`.
|
|
|
|
If :attr:`input` is of type `FloatTensor` or `DoubleTensor`, :attr:`min`
|
|
should be a real number, otherwise it should be an integer.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
min (Number): minimal value of each element in the output
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-0.0299, -2.3184, 2.1593, -0.8883])
|
|
>>> torch.clamp(a, min=0.5)
|
|
tensor([ 0.5000, 0.5000, 2.1593, 0.5000])
|
|
|
|
.. function:: clamp(input, *, max, out=None) -> Tensor
|
|
|
|
Clamps all elements in :attr:`input` to be smaller than or equal to :attr:`max`.
|
|
|
|
If :attr:`input` is of type `FloatTensor` or `DoubleTensor`, :attr:`max`
|
|
should be a real number, otherwise it should be an integer.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
max (Number): maximal value of each element in the output
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 0.7753, -0.4702, -0.4599, 1.1899])
|
|
>>> torch.clamp(a, max=0.5)
|
|
tensor([ 0.5000, -0.4702, -0.4599, 0.5000])
|
|
""")
|
|
|
|
add_docstr(torch.cos,
|
|
r"""
|
|
cos(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the cosine of the elements of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \cos(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 1.4309, 1.2706, -0.8562, 0.9796])
|
|
>>> torch.cos(a)
|
|
tensor([ 0.1395, 0.2957, 0.6553, 0.5574])
|
|
""")
|
|
|
|
add_docstr(torch.cosh,
|
|
r"""
|
|
cosh(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the hyperbolic cosine of the elements of
|
|
:attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \cosh(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 0.1632, 1.1835, -0.6979, -0.7325])
|
|
>>> torch.cosh(a)
|
|
tensor([ 1.0133, 1.7860, 1.2536, 1.2805])
|
|
""")
|
|
|
|
add_docstr(torch.cross,
|
|
r"""
|
|
cross(input, other, dim=-1, out=None) -> Tensor
|
|
|
|
|
|
Returns the cross product of vectors in dimension :attr:`dim` of :attr:`input`
|
|
and :attr:`other`.
|
|
|
|
:attr:`input` and :attr:`other` must have the same size, and the size of their
|
|
:attr:`dim` dimension should be 3.
|
|
|
|
If :attr:`dim` is not given, it defaults to the first dimension found with the
|
|
size 3.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
other (Tensor): the second input tensor
|
|
dim (int, optional): the dimension to take the cross-product in.
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4, 3)
|
|
>>> a
|
|
tensor([[-0.3956, 1.1455, 1.6895],
|
|
[-0.5849, 1.3672, 0.3599],
|
|
[-1.1626, 0.7180, -0.0521],
|
|
[-0.1339, 0.9902, -2.0225]])
|
|
>>> b = torch.randn(4, 3)
|
|
>>> b
|
|
tensor([[-0.0257, -1.4725, -1.2251],
|
|
[-1.1479, -0.7005, -1.9757],
|
|
[-1.3904, 0.3726, -1.1836],
|
|
[-0.9688, -0.7153, 0.2159]])
|
|
>>> torch.cross(a, b, dim=1)
|
|
tensor([[ 1.0844, -0.5281, 0.6120],
|
|
[-2.4490, -1.5687, 1.9792],
|
|
[-0.8304, -1.3037, 0.5650],
|
|
[-1.2329, 1.9883, 1.0551]])
|
|
>>> torch.cross(a, b)
|
|
tensor([[ 1.0844, -0.5281, 0.6120],
|
|
[-2.4490, -1.5687, 1.9792],
|
|
[-0.8304, -1.3037, 0.5650],
|
|
[-1.2329, 1.9883, 1.0551]])
|
|
""")
|
|
|
|
add_docstr(torch.cumprod,
|
|
r"""
|
|
cumprod(input, dim, dtype=None) -> Tensor
|
|
|
|
Returns the cumulative product of elements of :attr:`input` in the dimension
|
|
:attr:`dim`.
|
|
|
|
For example, if :attr:`input` is a vector of size N, the result will also be
|
|
a vector of size N, with elements.
|
|
|
|
.. math::
|
|
y_i = x_1 \times x_2\times x_3\times \dots \times x_i
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int): the dimension to do the operation over
|
|
{dtype}
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(10)
|
|
>>> a
|
|
tensor([ 0.6001, 0.2069, -0.1919, 0.9792, 0.6727, 1.0062, 0.4126,
|
|
-0.2129, -0.4206, 0.1968])
|
|
>>> torch.cumprod(a, dim=0)
|
|
tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0158, -0.0065,
|
|
0.0014, -0.0006, -0.0001])
|
|
|
|
>>> a[5] = 0.0
|
|
>>> torch.cumprod(a, dim=0)
|
|
tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000,
|
|
0.0000, -0.0000, -0.0000])
|
|
""".format(**reduceops_common_args))
|
|
|
|
add_docstr(torch.cumsum,
|
|
r"""
|
|
cumsum(input, dim, dtype=None) -> Tensor
|
|
|
|
Returns the cumulative sum of elements of :attr:`input` in the dimension
|
|
:attr:`dim`.
|
|
|
|
For example, if :attr:`input` is a vector of size N, the result will also be
|
|
a vector of size N, with elements.
|
|
|
|
.. math::
|
|
y_i = x_1 + x_2 + x_3 + \dots + x_i
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int): the dimension to do the operation over
|
|
{dtype}
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(10)
|
|
>>> a
|
|
tensor([-0.8286, -0.4890, 0.5155, 0.8443, 0.1865, -0.1752, -2.0595,
|
|
0.1850, -1.1571, -0.4243])
|
|
>>> torch.cumsum(a, dim=0)
|
|
tensor([-0.8286, -1.3175, -0.8020, 0.0423, 0.2289, 0.0537, -2.0058,
|
|
-1.8209, -2.9780, -3.4022])
|
|
""".format(**reduceops_common_args))
|
|
|
|
add_docstr(torch.diag,
|
|
r"""
|
|
diag(input, diagonal=0, out=None) -> Tensor
|
|
|
|
- If :attr:`input` is a vector (1-D tensor), then returns a 2-D square tensor
|
|
with the elements of :attr:`input` as the diagonal.
|
|
- If :attr:`input` is a matrix (2-D tensor), then returns a 1-D tensor with
|
|
the diagonal elements of :attr:`input`.
|
|
|
|
The argument :attr:`diagonal` controls which diagonal to consider:
|
|
|
|
- If :attr:`diagonal` = 0, it is the main diagonal.
|
|
- If :attr:`diagonal` > 0, it is above the main diagonal.
|
|
- If :attr:`diagonal` < 0, it is below the main diagonal.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
diagonal (int, optional): the diagonal to consider
|
|
out (Tensor, optional): the output tensor
|
|
|
|
.. seealso::
|
|
|
|
:func:`torch.diagonal` always returns the diagonal of its input.
|
|
|
|
:func:`torch.diagflat` always constructs a tensor with diagonal elements
|
|
specified by the input.
|
|
|
|
Examples:
|
|
|
|
Get the square matrix where the input vector is the diagonal::
|
|
|
|
>>> a = torch.randn(3)
|
|
>>> a
|
|
tensor([ 0.5950,-0.0872, 2.3298])
|
|
>>> torch.diag(a)
|
|
tensor([[ 0.5950, 0.0000, 0.0000],
|
|
[ 0.0000,-0.0872, 0.0000],
|
|
[ 0.0000, 0.0000, 2.3298]])
|
|
>>> torch.diag(a, 1)
|
|
tensor([[ 0.0000, 0.5950, 0.0000, 0.0000],
|
|
[ 0.0000, 0.0000,-0.0872, 0.0000],
|
|
[ 0.0000, 0.0000, 0.0000, 2.3298],
|
|
[ 0.0000, 0.0000, 0.0000, 0.0000]])
|
|
|
|
Get the k-th diagonal of a given matrix::
|
|
|
|
>>> a = torch.randn(3, 3)
|
|
>>> a
|
|
tensor([[-0.4264, 0.0255,-0.1064],
|
|
[ 0.8795,-0.2429, 0.1374],
|
|
[ 0.1029,-0.6482,-1.6300]])
|
|
>>> torch.diag(a, 0)
|
|
tensor([-0.4264,-0.2429,-1.6300])
|
|
>>> torch.diag(a, 1)
|
|
tensor([ 0.0255, 0.1374])
|
|
""")
|
|
|
|
add_docstr(torch.diagflat,
|
|
r"""
|
|
diagflat(input, offset=0) -> Tensor
|
|
|
|
- If :attr:`input` is a vector (1-D tensor), then returns a 2-D square tensor
|
|
with the elements of :attr:`input` as the diagonal.
|
|
- If :attr:`input` is a tensor with more than one dimension, then returns a
|
|
2-D tensor with diagonal elements equal to a flattened :attr:`input`.
|
|
|
|
The argument :attr:`offset` controls which diagonal to consider:
|
|
|
|
- If :attr:`offset` = 0, it is the main diagonal.
|
|
- If :attr:`offset` > 0, it is above the main diagonal.
|
|
- If :attr:`offset` < 0, it is below the main diagonal.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
offset (int, optional): the diagonal to consider. Default: 0 (main
|
|
diagonal).
|
|
|
|
Examples::
|
|
|
|
>>> a = torch.randn(3)
|
|
>>> a
|
|
tensor([-0.2956, -0.9068, 0.1695])
|
|
>>> torch.diagflat(a)
|
|
tensor([[-0.2956, 0.0000, 0.0000],
|
|
[ 0.0000, -0.9068, 0.0000],
|
|
[ 0.0000, 0.0000, 0.1695]])
|
|
>>> torch.diagflat(a, 1)
|
|
tensor([[ 0.0000, -0.2956, 0.0000, 0.0000],
|
|
[ 0.0000, 0.0000, -0.9068, 0.0000],
|
|
[ 0.0000, 0.0000, 0.0000, 0.1695],
|
|
[ 0.0000, 0.0000, 0.0000, 0.0000]])
|
|
|
|
>>> a = torch.randn(2, 2)
|
|
>>> a
|
|
tensor([[ 0.2094, -0.3018],
|
|
[-0.1516, 1.9342]])
|
|
>>> torch.diagflat(a)
|
|
tensor([[ 0.2094, 0.0000, 0.0000, 0.0000],
|
|
[ 0.0000, -0.3018, 0.0000, 0.0000],
|
|
[ 0.0000, 0.0000, -0.1516, 0.0000],
|
|
[ 0.0000, 0.0000, 0.0000, 1.9342]])
|
|
""")
|
|
|
|
add_docstr(torch.diagonal,
|
|
r"""
|
|
diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor
|
|
|
|
Returns a partial view of :attr:`input` with its diagonal elements
|
|
with respect to :attr:`dim1` and :attr:`dim2` appended as a dimension
|
|
at the end of the shape.
|
|
|
|
The argument :attr:`offset` controls which diagonal to consider:
|
|
|
|
- If :attr:`offset` = 0, it is the main diagonal.
|
|
- If :attr:`offset` > 0, it is above the main diagonal.
|
|
- If :attr:`offset` < 0, it is below the main diagonal.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor. Must be at least 2-dimensional.
|
|
offset (int, optional): which diagonal to consider. Default: 0
|
|
(main diagonal).
|
|
dim1 (int, optional): first dimension with respect to which to
|
|
take diagonal. Default: 0.
|
|
dim2 (int, optional): second dimension with respect to which to
|
|
take diagonal. Default: 1.
|
|
|
|
.. note:: To take a batch diagonal, pass in dim1=-2, dim2=-1.
|
|
|
|
Examples::
|
|
|
|
>>> a = torch.randn(3, 3)
|
|
>>> a
|
|
tensor([[-1.0854, 1.1431, -0.1752],
|
|
[ 0.8536, -0.0905, 0.0360],
|
|
[ 0.6927, -0.3735, -0.4945]])
|
|
|
|
|
|
>>> torch.diagonal(a, 0)
|
|
tensor([-1.0854, -0.0905, -0.4945])
|
|
|
|
|
|
>>> torch.diagonal(a, 1)
|
|
tensor([ 1.1431, 0.0360])
|
|
|
|
|
|
>>> x = torch.randn(2, 5, 4, 2)
|
|
>>> torch.diagonal(x, offset=-1, dim1=1, dim2=2)
|
|
tensor([[[-1.2631, 0.3755, -1.5977, -1.8172],
|
|
[-1.1065, 1.0401, -0.2235, -0.7938]],
|
|
|
|
[[-1.7325, -0.3081, 0.6166, 0.2335],
|
|
[ 1.0500, 0.7336, -0.3836, -1.1015]]])
|
|
""")
|
|
|
|
add_docstr(torch.dist,
|
|
r"""
|
|
dist(input, other, p=2) -> Tensor
|
|
|
|
Returns the p-norm of (:attr:`input` - :attr:`other`)
|
|
|
|
The shapes of :attr:`input` and :attr:`other` must be
|
|
:ref:`broadcastable <broadcasting-semantics>`.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
other (Tensor): the Right-hand-side input tensor
|
|
p (float, optional): the norm to be computed
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(4)
|
|
>>> x
|
|
tensor([-1.5393, -0.8675, 0.5916, 1.6321])
|
|
>>> y = torch.randn(4)
|
|
>>> y
|
|
tensor([ 0.0967, -1.0511, 0.6295, 0.8360])
|
|
>>> torch.dist(x, y, 3.5)
|
|
tensor(1.6727)
|
|
>>> torch.dist(x, y, 3)
|
|
tensor(1.6973)
|
|
>>> torch.dist(x, y, 0)
|
|
tensor(inf)
|
|
>>> torch.dist(x, y, 1)
|
|
tensor(2.6537)
|
|
""")
|
|
|
|
add_docstr(torch.div,
|
|
r"""
|
|
.. function:: div(input, value, out=None) -> Tensor
|
|
|
|
Divides each element of the input :attr:`input` with the scalar :attr:`value`
|
|
and returns a new resulting tensor.
|
|
|
|
.. math::
|
|
out_i = \frac{input_i}{value}
|
|
|
|
If :attr:`input` is of type `FloatTensor` or `DoubleTensor`, :attr:`value`
|
|
should be a real number, otherwise it should be an integer.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
value (Number): the number to be divided to each element of :attr:`input`
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(5)
|
|
>>> a
|
|
tensor([ 0.3810, 1.2774, -0.2972, -0.3719, 0.4637])
|
|
>>> torch.div(a, 0.5)
|
|
tensor([ 0.7620, 2.5548, -0.5944, -0.7439, 0.9275])
|
|
|
|
.. function:: div(input, other, out=None) -> Tensor
|
|
|
|
Each element of the tensor :attr:`input` is divided by each element
|
|
of the tensor :attr:`other`. The resulting tensor is returned. The shapes of
|
|
:attr:`input` and :attr:`other` must be
|
|
:ref:`broadcastable <broadcasting-semantics>`.
|
|
|
|
.. math::
|
|
out_i = \frac{input_i}{other_i}
|
|
|
|
Args:
|
|
input (Tensor): the numerator tensor
|
|
other (Tensor): the denominator tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4, 4)
|
|
>>> a
|
|
tensor([[-0.3711, -1.9353, -0.4605, -0.2917],
|
|
[ 0.1815, -1.0111, 0.9805, -1.5923],
|
|
[ 0.1062, 1.4581, 0.7759, -1.2344],
|
|
[-0.1830, -0.0313, 1.1908, -1.4757]])
|
|
>>> b = torch.randn(4)
|
|
>>> b
|
|
tensor([ 0.8032, 0.2930, -0.8113, -0.2308])
|
|
>>> torch.div(a, b)
|
|
tensor([[-0.4620, -6.6051, 0.5676, 1.2637],
|
|
[ 0.2260, -3.4507, -1.2086, 6.8988],
|
|
[ 0.1322, 4.9764, -0.9564, 5.3480],
|
|
[-0.2278, -0.1068, -1.4678, 6.3936]])
|
|
""")
|
|
|
|
add_docstr(torch.dot,
|
|
r"""
|
|
dot(tensor1, tensor2) -> Tensor
|
|
|
|
Computes the dot product (inner product) of two tensors.
|
|
|
|
.. note:: This function does not :ref:`broadcast <broadcasting-semantics>`.
|
|
|
|
Example::
|
|
|
|
>>> torch.dot(torch.tensor([2, 3]), torch.tensor([2, 1]))
|
|
tensor(7)
|
|
""")
|
|
|
|
add_docstr(torch.eig,
|
|
r"""
|
|
eig(a, eigenvectors=False, out=None) -> (Tensor, Tensor)
|
|
|
|
Computes the eigenvalues and eigenvectors of a real square matrix.
|
|
|
|
Args:
|
|
a (Tensor): the square matrix for which the eigenvalues and eigenvectors will be computed
|
|
eigenvectors (bool): ``True`` to compute both eigenvalues and eigenvectors;
|
|
otherwise, only eigenvalues will be computed
|
|
out (tuple, optional): the output tensors
|
|
|
|
Returns:
|
|
(Tensor, Tensor): A tuple containing
|
|
|
|
- **e** (*Tensor*): the right eigenvalues of ``a``
|
|
- **v** (*Tensor*): the eigenvectors of ``a`` if ``eigenvectors`` is ``True``; otherwise an empty tensor
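
A minimal illustration on the identity matrix, whose eigenvalues are all 1;
each eigenvalue is reported as a (real, imaginary) pair:

Example::

>>> e, v = torch.eig(torch.eye(2))
>>> e
tensor([[ 1., 0.],
[ 1., 0.]])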
""")
|
|
|
|
add_docstr(torch.einsum,
|
|
r"""
|
|
einsum(equation, operands) -> Tensor
|
|
|
|
This function provides a way of computing multilinear expressions (i.e. sums of products) using the
|
|
Einstein summation convention.
|
|
|
|
Args:
|
|
equation (string): The equation is given in terms of lower case letters (indices) to be associated
|
|
with each dimension of the operands and result. The left hand side lists the operands
|
|
dimensions, separated by commas. There should be one index letter per tensor dimension.
|
|
The right hand side follows after `->` and gives the indices for the output.
|
|
If the `->` and right hand side are omitted, it is implicitly defined as the alphabetically
|
|
sorted list of all indices appearing exactly once in the left hand side.
|
|
The indices not appearing in the output are summed over after multiplying the operands
|
|
entries.
|
|
If an index appears several times for the same operand, a diagonal is taken.
|
|
Ellipses `...` represent a fixed number of dimensions. If the right hand side is inferred,
|
|
the ellipsis dimensions are at the beginning of the output.
|
|
operands (list of Tensors): The operands to compute the Einstein sum of.
|
|
Note that the operands are passed as a list, not as individual arguments.
|
|
|
|
Examples::
|
|
|
|
>>> x = torch.randn(5)
|
|
>>> y = torch.randn(4)
|
|
>>> torch.einsum('i,j->ij', (x,y)) # outer product
|
|
tensor([[-0.0570, -0.0286, -0.0231, 0.0197],
|
|
[ 1.2616, 0.6335, 0.5113, -0.4351],
|
|
[ 1.4452, 0.7257, 0.5857, -0.4984],
|
|
[-0.4647, -0.2333, -0.1883, 0.1603],
|
|
[-1.1130, -0.5588, -0.4510, 0.3838]])
|
|
|
|
|
|
>>> A = torch.randn(3,5,4)
|
|
>>> l = torch.randn(2,5)
|
|
>>> r = torch.randn(2,4)
|
|
>>> torch.einsum('bn,anm,bm->ba', (l,A,r)) # compare torch.nn.functional.bilinear
|
|
tensor([[-0.3430, -5.2405, 0.4494],
|
|
[ 0.3311, 5.5201, -3.0356]])
|
|
|
|
|
|
>>> As = torch.randn(3,2,5)
|
|
>>> Bs = torch.randn(3,5,4)
|
|
>>> torch.einsum('bij,bjk->bik', (As, Bs)) # batch matrix multiplication
|
|
tensor([[[-1.0564, -1.5904, 3.2023, 3.1271],
|
|
[-1.6706, -0.8097, -0.8025, -2.1183]],
|
|
|
|
[[ 4.2239, 0.3107, -0.5756, -0.2354],
|
|
[-1.4558, -0.3460, 1.5087, -0.8530]],
|
|
|
|
[[ 2.8153, 1.8787, -4.3839, -1.2112],
|
|
[ 0.3728, -2.1131, 0.0921, 0.8305]]])
|
|
|
|
>>> A = torch.randn(3, 3)
|
|
>>> torch.einsum('ii->i', (A,)) # diagonal
|
|
tensor([-0.7825, 0.8291, -0.1936])
|
|
|
|
>>> A = torch.randn(4, 3, 3)
|
|
>>> torch.einsum('...ii->...i', (A,)) # batch diagonal
|
|
tensor([[-1.0864, 0.7292, 0.0569],
|
|
[-0.9725, -1.0270, 0.6493],
|
|
[ 0.5832, -1.1716, -1.5084],
|
|
[ 0.4041, -1.1690, 0.8570]])
|
|
|
|
>>> A = torch.randn(2, 3, 4, 5)
|
|
>>> torch.einsum('...ij->...ji', (A,)).shape # batch permute
|
|
torch.Size([2, 3, 5, 4])
|
|
""")
|
|
|
|
add_docstr(torch.eq,
|
|
r"""
|
|
eq(input, other, out=None) -> Tensor
|
|
|
|
Computes element-wise equality
|
|
|
|
The second argument can be a number or a tensor whose shape is
|
|
:ref:`broadcastable <broadcasting-semantics>` with the first argument.
|
|
|
|
Args:
|
|
input (Tensor): the tensor to compare
|
|
other (Tensor or float): the tensor or value to compare
|
|
out (Tensor, optional): the output tensor. Must be a `ByteTensor` or the same type as `input`.
|
|
|
|
Returns:
|
|
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
|
|
|
|
Example::
|
|
|
|
>>> torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
|
|
tensor([[ 1, 0],
|
|
[ 0, 1]], dtype=torch.uint8)
|
|
""")
|
|
|
|
add_docstr(torch.equal,
|
|
r"""
|
|
equal(tensor1, tensor2) -> bool
|
|
|
|
``True`` if two tensors have the same size and elements, ``False`` otherwise.
|
|
|
|
Example::
|
|
|
|
>>> torch.equal(torch.tensor([1, 2]), torch.tensor([1, 2]))
|
|
True
|
|
""")
|
|
|
|
add_docstr(torch.erf,
|
|
r"""
|
|
erf(tensor, out=None) -> Tensor
|
|
|
|
Computes the error function of each element. The error function is defined as follows:
|
|
|
|
.. math::
|
|
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt
|
|
|
|
Args:
|
|
tensor (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> torch.erf(torch.tensor([0, -1., 10.]))
|
|
tensor([ 0.0000, -0.8427, 1.0000])
|
|
""")
|
|
|
|
add_docstr(torch.erfinv,
|
|
r"""
|
|
erfinv(tensor, out=None) -> Tensor
|
|
|
|
Computes the inverse error function of each element. The inverse error function is defined
|
|
in the range :math:`(-1, 1)` as:
|
|
|
|
.. math::
|
|
\mathrm{erfinv}(\mathrm{erf}(x)) = x
|
|
|
|
Args:
|
|
tensor (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> torch.erfinv(torch.tensor([0, 0.5, -1.]))
|
|
tensor([ 0.0000, 0.4769, -inf])
|
|
""")
|
|
|
|
add_docstr(torch.exp,
|
|
r"""
|
|
exp(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the exponential of the elements
|
|
of :attr:`input`.
|
|
|
|
.. math::
|
|
y_{i} = e^{x_{i}}
|
|
|
|
Args:
input (Tensor): the input tensor
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> torch.exp(torch.tensor([0, math.log(2)]))
|
|
tensor([ 1., 2.])
|
|
""")
|
|
|
|
add_docstr(torch.expm1,
|
|
r"""
|
|
expm1(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the exponential of the elements of :attr:`input`, minus 1.
|
|
|
|
.. math::
|
|
y_{i} = e^{x_{i}} - 1
|
|
|
|
Args:
input (Tensor): the input tensor
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> torch.expm1(torch.tensor([0, math.log(2)]))
|
|
tensor([ 0., 1.])
|
|
""")
|
|
|
|
add_docstr(torch.eye,
|
|
r"""
|
|
eye(n, m=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
|
|
|
|
Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.
|
|
|
|
Args:
|
|
n (int): the number of rows
|
|
m (int, optional): the number of columns with default being :attr:`n`
|
|
{out}
|
|
{dtype}
|
|
{layout}
|
|
{device}
|
|
{requires_grad}
|
|
|
|
Returns:
|
|
Tensor: A 2-D tensor with ones on the diagonal and zeros elsewhere
|
|
|
|
Example::
|
|
|
|
>>> torch.eye(3)
|
|
tensor([[ 1., 0., 0.],
|
|
[ 0., 1., 0.],
|
|
[ 0., 0., 1.]])
|
|
""".format(**factory_common_args))
|
|
|
|
add_docstr(torch.floor,
|
|
r"""
|
|
floor(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the floor of the elements of :attr:`input`,
|
|
the largest integer less than or equal to each element.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \left\lfloor \text{input}_{i} \right\rfloor
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-0.8166, 1.5308, -0.2530, -0.2091])
|
|
>>> torch.floor(a)
|
|
tensor([-1., 1., -1., -1.])
|
|
""")
|
|
|
|
add_docstr(torch.fmod,
|
|
r"""
|
|
fmod(input, divisor, out=None) -> Tensor
|
|
|
|
Computes the element-wise remainder of division.
|
|
|
|
The dividend and divisor may contain both integer and floating point
numbers. The remainder has the same sign as the dividend :attr:`input`.
|
|
|
|
When :attr:`divisor` is a tensor, the shapes of :attr:`input` and
|
|
:attr:`divisor` must be :ref:`broadcastable <broadcasting-semantics>`.
|
|
|
|
Args:
|
|
input (Tensor): the dividend
|
|
divisor (Tensor or float): the divisor, which may be either a number or a tensor of the same shape as the dividend
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> torch.fmod(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
|
|
tensor([-1., -0., -1., 1., 0., 1.])
|
|
>>> torch.fmod(torch.tensor([1., 2, 3, 4, 5]), 1.5)
|
|
tensor([ 1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
|
|
|
|
|
|
""")
|
|
|
|
add_docstr(torch.frac,
|
|
r"""
|
|
frac(tensor, out=None) -> Tensor
|
|
|
|
Computes the fractional portion of each element in :attr:`tensor`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \text{input}_{i} - \left\lfloor \text{input}_{i} \right\rfloor
|
|
|
|
Example::
|
|
|
|
>>> torch.frac(torch.tensor([1, 2.5, -3.2]))
|
|
tensor([ 0.0000, 0.5000, -0.2000])
|
|
""")
|
|
|
|
add_docstr(torch.from_numpy,
|
|
r"""
|
|
from_numpy(ndarray) -> Tensor
|
|
|
|
Creates a :class:`Tensor` from a :class:`numpy.ndarray`.
|
|
|
|
The returned tensor and :attr:`ndarray` share the same memory. Modifications to
|
|
the tensor will be reflected in the :attr:`ndarray` and vice versa. The returned
|
|
tensor is not resizable.
|
|
|
|
Example::
|
|
|
|
>>> a = numpy.array([1, 2, 3])
|
|
>>> t = torch.from_numpy(a)
|
|
>>> t
|
|
tensor([ 1, 2, 3])
|
|
>>> t[0] = -1
|
|
>>> a
|
|
array([-1, 2, 3])
|
|
""")
|
|
|
|
add_docstr(torch.gather,
|
|
r"""
|
|
gather(input, dim, index, out=None) -> Tensor
|
|
|
|
Gathers values along an axis specified by `dim`.
|
|
|
|
For a 3-D tensor the output is specified by::
|
|
|
|
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0
|
|
out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
|
|
out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
|
|
|
|
If :attr:`input` is an n-dimensional tensor with size
|
|
:math:`(x_0, x_1..., x_{i-1}, x_i, x_{i+1}, ..., x_{n-1})`
|
|
and :attr:`dim` :math:`= i`, then :attr:`index` must be an :math:`n`-dimensional tensor with
|
|
size :math:`(x_0, x_1, ..., x_{i-1}, y, x_{i+1}, ..., x_{n-1})` where :math:`y \geq 1`
|
|
and :attr:`out` will have the same size as :attr:`index`.
|
|
|
|
Args:
|
|
input (Tensor): the source tensor
|
|
dim (int): the axis along which to index
|
|
index (LongTensor): the indices of elements to gather
|
|
out (Tensor, optional): the destination tensor
|
|
|
|
Example::
|
|
|
|
>>> t = torch.tensor([[1,2],[3,4]])
|
|
>>> torch.gather(t, 1, torch.tensor([[0,0],[1,0]]))
|
|
tensor([[ 1, 1],
|
|
[ 4, 3]])
|
|
""")
|
|
|
|
add_docstr(torch.ge,
|
|
r"""
|
|
ge(input, other, out=None) -> Tensor
|
|
|
|
Computes :math:`input \geq other` element-wise.
|
|
|
|
The second argument can be a number or a tensor whose shape is
|
|
:ref:`broadcastable <broadcasting-semantics>` with the first argument.
|
|
|
|
Args:
|
|
input (Tensor): the tensor to compare
|
|
other (Tensor or float): the tensor or value to compare
|
|
out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`
|
|
|
|
Returns:
|
|
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
|
|
|
|
Example::
|
|
|
|
>>> torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
|
|
tensor([[ 1, 1],
|
|
[ 0, 1]], dtype=torch.uint8)
|
|
""")
|
|
|
|
add_docstr(torch.gels,
|
|
r"""
|
|
gels(B, A, out=None) -> Tensor
|
|
|
|
Computes the solution to the least squares and least norm problems for a full
|
|
rank matrix :math:`A` of size :math:`(m \times n)` and a matrix :math:`B` of
|
|
size :math:`(m \times k)`.
|
|
|
|
If :math:`m \geq n`, :func:`gels` solves the least-squares problem:
|
|
|
|
.. math::
|
|
|
|
\begin{array}{ll}
|
|
\min_X & \|AX-B\|_2.
|
|
\end{array}
|
|
|
|
If :math:`m < n`, :func:`gels` solves the least-norm problem:
|
|
|
|
.. math::
|
|
|
|
\begin{array}{ll}
|
|
\min_X & \|X\|_2 & \mbox{subject to} & AX = B.
|
|
\end{array}
|
|
|
|
Returned tensor :math:`X` has shape :math:`(\max(m, n) \times k)`. The first :math:`n`
|
|
rows of :math:`X` contain the solution. If :math:`m \geq n`, the residual sum of squares
|
|
for the solution in each column is given by the sum of squares of elements in the
|
|
remaining :math:`m - n` rows of that column.
|
|
|
|
Args:
|
|
B (Tensor): the matrix :math:`B`
|
|
A (Tensor): the :math:`m` by :math:`n` matrix :math:`A`
|
|
out (tuple, optional): the optional destination tensor
|
|
|
|
Returns:
|
|
(Tensor, Tensor): A tuple containing:
|
|
|
|
- **X** (*Tensor*): the least squares solution
|
|
- **qr** (*Tensor*): the details of the QR factorization
|
|
|
|
.. note::
|
|
|
|
The returned matrices will always be transposed, irrespective of the strides
|
|
of the input matrices. That is, they will have stride `(1, m)` instead of
|
|
`(m, 1)`.
|
|
|
|
Example::
|
|
|
|
>>> A = torch.tensor([[1., 1, 1],
|
|
[2, 3, 4],
|
|
[3, 5, 2],
|
|
[4, 2, 5],
|
|
[5, 4, 3]])
|
|
>>> B = torch.tensor([[-10., -3],
|
|
[ 12, 14],
|
|
[ 14, 12],
|
|
[ 16, 16],
|
|
[ 18, 16]])
|
|
>>> X, _ = torch.gels(B, A)
|
|
>>> X
|
|
tensor([[ 2.0000, 1.0000],
|
|
[ 1.0000, 1.0000],
|
|
[ 1.0000, 2.0000],
|
|
[ 10.9635, 4.8501],
|
|
[ 8.9332, 5.2418]])
|
|
""")
|
|
|
|
add_docstr(torch.geqrf,
|
|
r"""
|
|
geqrf(input, out=None) -> (Tensor, Tensor)
|
|
|
|
This is a low-level function for calling LAPACK directly.
|
|
|
|
You'll generally want to use :func:`torch.qr` instead.
|
|
|
|
Computes a QR decomposition of :attr:`input`, but without constructing
|
|
:math:`Q` and :math:`R` as explicit separate matrices.
|
|
|
|
Rather, this directly calls the underlying LAPACK function `?geqrf`
|
|
which produces a sequence of 'elementary reflectors'.
|
|
|
|
See `LAPACK documentation for geqrf`_ for further details.
|
|
|
|
Args:
|
|
input (Tensor): the input matrix
|
|
out (tuple, optional): the output tuple of (Tensor, Tensor)
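
A minimal usage sketch (the first returned tensor holds the packed factorization,
the second the elementary reflector coefficients; their raw values are not shown
here):

Example::

>>> a = torch.randn(3, 3)
>>> a2, tau = torch.geqrf(a)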
|
|
|
|
.. _LAPACK documentation for geqrf:
|
|
https://software.intel.com/en-us/node/521004
|
|
|
|
""")
|
|
|
|
add_docstr(torch.ger,
|
|
r"""
|
|
ger(vec1, vec2, out=None) -> Tensor
|
|
|
|
Outer product of :attr:`vec1` and :attr:`vec2`.
|
|
If :attr:`vec1` is a vector of size :math:`n` and :attr:`vec2` is a vector of
|
|
size :math:`m`, then :attr:`out` must be a matrix of size :math:`(n \times m)`.
|
|
|
|
.. note:: This function does not :ref:`broadcast <broadcasting-semantics>`.
|
|
|
|
Args:
|
|
vec1 (Tensor): 1-D input vector
|
|
vec2 (Tensor): 1-D input vector
|
|
out (Tensor, optional): optional output matrix
|
|
|
|
Example::
|
|
|
|
>>> v1 = torch.arange(1., 5.)
|
|
>>> v2 = torch.arange(1., 4.)
|
|
>>> torch.ger(v1, v2)
|
|
tensor([[ 1., 2., 3.],
|
|
[ 2., 4., 6.],
|
|
[ 3., 6., 9.],
|
|
[ 4., 8., 12.]])
|
|
""")
|
|
|
|
add_docstr(torch.gesv,
|
|
r"""
|
|
gesv(B, A) -> (Tensor, Tensor)
|
|
|
|
This function returns the solution to the system of linear
|
|
equations represented by :math:`AX = B` and the LU factorization of
|
|
A, in order as a tuple `X, LU`.
|
|
|
|
`LU` contains `L` and `U` factors for LU factorization of `A`.
|
|
|
|
`torch.gesv(B, A)` can take in 2D inputs `B, A` or inputs that are
|
|
batches of 2D matrices. If the inputs are batches, then returns
|
|
batched outputs `X, LU`.
|
|
|
|
.. note::
|
|
|
|
The `out` keyword only supports 2D matrix inputs, that is,
|
|
`B, A` must be 2D matrices.
|
|
|
|
.. note::
|
|
|
|
Irrespective of the original strides, the returned matrices
|
|
`X` and `LU` will be transposed, i.e. with strides like
|
|
`B.contiguous().transpose(-1, -2).strides()` and
|
|
`A.contiguous().transpose(-1, -2).strides()` respectively.
|
|
|
|
Args:
|
|
B (Tensor): input matrix of size :math:`(*, m, k)` , where `*`
|
|
is zero or more batch dimensions.
|
|
A (Tensor): input square matrix of size :math:`(*, m, m)`, where
|
|
`*` is zero or more batch dimensions.
|
|
out ((Tensor, Tensor), optional): optional output tuple.
|
|
|
|
Example::
|
|
|
|
>>> A = torch.tensor([[6.80, -2.11, 5.66, 5.97, 8.23],
|
|
[-6.05, -3.30, 5.36, -4.44, 1.08],
|
|
[-0.45, 2.58, -2.70, 0.27, 9.04],
|
|
[8.32, 2.71, 4.35, -7.17, 2.14],
|
|
[-9.67, -5.14, -7.26, 6.08, -6.87]]).t()
|
|
>>> B = torch.tensor([[4.02, 6.19, -8.22, -7.57, -3.03],
|
|
[-1.56, 4.00, -8.67, 1.75, 2.86],
|
|
[9.81, -4.09, -4.57, -8.61, 8.99]]).t()
|
|
>>> X, LU = torch.gesv(B, A)
|
|
>>> torch.dist(B, torch.mm(A, X))
|
|
tensor(1.00000e-06 *
|
|
7.0977)
|
|
|
|
>>> # Batched solver example
|
|
>>> A = torch.randn(2, 3, 1, 4, 4)
|
|
>>> B = torch.randn(2, 3, 1, 4, 6)
|
|
>>> X, LU = torch.gesv(B, A)
|
|
>>> torch.dist(B, A.matmul(X))
|
|
tensor(1.00000e-06 *
|
|
3.6386)
|
|
|
|
""")
|
|
|
|
add_docstr(torch.get_default_dtype,
|
|
r"""
|
|
get_default_dtype() -> :class:`torch.dtype`
|
|
|
|
Get the current default floating point :class:`torch.dtype`.
|
|
|
|
Example::
|
|
|
|
>>> torch.get_default_dtype() # initial default for floating point is torch.float32
|
|
torch.float32
|
|
>>> torch.set_default_dtype(torch.float64)
|
|
>>> torch.get_default_dtype() # default is now changed to torch.float64
|
|
torch.float64
|
|
>>> torch.set_default_tensor_type(torch.FloatTensor) # setting tensor type also affects this
|
|
>>> torch.get_default_dtype() # changed to torch.float32, the dtype for torch.FloatTensor
|
|
torch.float32
|
|
|
|
""")
|
|
|
|
add_docstr(torch.get_num_threads,
|
|
r"""
|
|
get_num_threads() -> int
|
|
|
|
Gets the number of OpenMP threads used for parallelizing CPU operations
|
|
""")
|
|
|
|
add_docstr(torch.gt,
|
|
r"""
|
|
gt(input, other, out=None) -> Tensor
|
|
|
|
Computes :math:`input > other` element-wise.
|
|
|
|
The second argument can be a number or a tensor whose shape is
|
|
:ref:`broadcastable <broadcasting-semantics>` with the first argument.
|
|
|
|
Args:
|
|
input (Tensor): the tensor to compare
|
|
other (Tensor or float): the tensor or value to compare
|
|
out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`
|
|
|
|
Returns:
|
|
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
|
|
|
|
Example::
|
|
|
|
>>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
|
|
tensor([[ 0, 1],
|
|
[ 0, 0]], dtype=torch.uint8)
|
|
""")
|
|
|
|
add_docstr(torch.histc,
|
|
r"""
|
|
histc(input, bins=100, min=0, max=0, out=None) -> Tensor
|
|
|
|
Computes the histogram of a tensor.
|
|
|
|
The elements are sorted into equal width bins between :attr:`min` and
|
|
:attr:`max`. If :attr:`min` and :attr:`max` are both zero, the minimum and
|
|
maximum values of the data are used.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
bins (int): number of histogram bins
|
|
min (int): lower end of the range (inclusive)
|
|
max (int): upper end of the range (inclusive)
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Returns:
|
|
Tensor: Histogram represented as a tensor
|
|
|
|
Example::
|
|
|
|
>>> torch.histc(torch.tensor([1., 2, 1]), bins=4, min=0, max=3)
|
|
tensor([ 0., 2., 1., 0.])
|
|
""")
|
|
|
|
add_docstr(torch.index_select,
|
|
r"""
|
|
index_select(input, dim, index, out=None) -> Tensor
|
|
|
|
Returns a new tensor which indexes the :attr:`input` tensor along dimension
|
|
:attr:`dim` using the entries in :attr:`index` which is a `LongTensor`.
|
|
|
|
The returned tensor has the same number of dimensions as the original tensor
|
|
(:attr:`input`). The :attr:`dim`\ th dimension has the same size as the length
|
|
of :attr:`index`; other dimensions have the same size as in the original tensor.
|
|
|
|
.. note:: The returned tensor does **not** use the same storage as the original
|
|
tensor. If :attr:`out` has a different shape than expected, we
|
|
silently change it to the correct shape, reallocating the underlying
|
|
storage if necessary.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int): the dimension in which we index
|
|
index (LongTensor): the 1-D tensor containing the indices to index
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(3, 4)
|
|
>>> x
|
|
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
|
|
[-0.4664, 0.2647, -0.1228, -1.1068],
|
|
[-1.1734, -0.6571, 0.7230, -0.6004]])
|
|
>>> indices = torch.tensor([0, 2])
|
|
>>> torch.index_select(x, 0, indices)
|
|
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
|
|
[-1.1734, -0.6571, 0.7230, -0.6004]])
|
|
>>> torch.index_select(x, 1, indices)
|
|
tensor([[ 0.1427, -0.5414],
|
|
[-0.4664, -0.1228],
|
|
[-1.1734, 0.7230]])
|
|
""")
|
|
|
|
add_docstr(torch.inverse,
|
|
r"""
|
|
inverse(input, out=None) -> Tensor
|
|
|
|
Takes the inverse of the square matrix :attr:`input`.
|
|
|
|
.. note::
|
|
|
|
Irrespective of the original strides, the returned matrix will be
|
|
transposed, i.e. with strides `(1, m)` instead of `(m, 1)`
|
|
|
|
Args:
|
|
input (Tensor): the input 2-D square tensor
|
|
out (Tensor, optional): the optional output tensor
|
|
|
|
Example::
|
|
|
|
>>> x = torch.rand(4, 4)
|
|
>>> y = torch.inverse(x)
|
|
>>> z = torch.mm(x, y)
|
|
>>> z
|
|
tensor([[ 1.0000, -0.0000, -0.0000, 0.0000],
|
|
[ 0.0000, 1.0000, 0.0000, 0.0000],
|
|
[ 0.0000, 0.0000, 1.0000, 0.0000],
|
|
[ 0.0000, -0.0000, -0.0000, 1.0000]])
|
|
>>> torch.max(torch.abs(z - torch.eye(4))) # Max nonzero
|
|
tensor(1.00000e-07 *
|
|
1.1921)
|
|
""")
|
|
|
|
add_docstr(torch.kthvalue,
|
|
r"""
|
|
kthvalue(input, k, dim=None, keepdim=False, out=None) -> (Tensor, LongTensor)
|
|
|
|
Returns the :attr:`k` th smallest element of the given :attr:`input` tensor
|
|
along a given dimension.
|
|
|
|
If :attr:`dim` is not given, the last dimension of the `input` is chosen.
|
|
|
|
A tuple of `(values, indices)` is returned, where `indices` is the index
of the k-th smallest element in the original `input` tensor in dimension `dim`.
|
|
|
|
If :attr:`keepdim` is ``True``, both the :attr:`values` and :attr:`indices` tensors
|
|
are the same size as :attr:`input`, except in the dimension :attr:`dim` where
|
|
they are of size 1. Otherwise, :attr:`dim` is squeezed
|
|
(see :func:`torch.squeeze`), resulting in both the :attr:`values` and
|
|
:attr:`indices` tensors having 1 fewer dimension than the :attr:`input` tensor.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
k (int): k for the k-th smallest element
|
|
dim (int, optional): the dimension to find the kth value along
|
|
keepdim (bool): whether the output tensors have :attr:`dim` retained or not
|
|
out (tuple, optional): the output tuple of (Tensor, LongTensor)
|
|
can be optionally given to be used as output buffers
|
|
|
|
Example::
|
|
|
|
>>> x = torch.arange(1., 6.)
|
|
>>> x
|
|
tensor([ 1., 2., 3., 4., 5.])
|
|
>>> torch.kthvalue(x, 4)
|
|
(tensor(4.), tensor(3))
|
|
|
|
>>> x=torch.arange(1.,7.).resize_(2,3)
|
|
>>> x
|
|
tensor([[ 1., 2., 3.],
|
|
[ 4., 5., 6.]])
|
|
>>> torch.kthvalue(x,2,0,True)
|
|
(tensor([[ 4., 5., 6.]]), tensor([[ 1, 1, 1]]))
|
|
""")
|
|
|
|
add_docstr(torch.le,
|
|
r"""
|
|
le(input, other, out=None) -> Tensor
|
|
|
|
Computes :math:`input \leq other` element-wise.
|
|
|
|
The second argument can be a number or a tensor whose shape is
|
|
:ref:`broadcastable <broadcasting-semantics>` with the first argument.
|
|
|
|
Args:
|
|
input (Tensor): the tensor to compare
|
|
other (Tensor or float): the tensor or value to compare
|
|
out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`
|
|
|
|
Returns:
|
|
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
|
|
|
|
Example::
|
|
|
|
>>> torch.le(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
|
|
tensor([[ 1, 0],
|
|
[ 1, 1]], dtype=torch.uint8)
|
|
""")
|
|
|
|
add_docstr(torch.lerp,
|
|
r"""
|
|
lerp(start, end, weight, out=None)
|
|
|
|
Does a linear interpolation of two tensors :attr:`start` and :attr:`end` based
|
|
on a scalar :attr:`weight` and returns the resulting :attr:`out` tensor.
|
|
|
|
.. math::
|
|
out_i = start_i + weight \times (end_i - start_i)
|
|
|
|
The shapes of :attr:`start` and :attr:`end` must be
|
|
:ref:`broadcastable <broadcasting-semantics>`.
|
|
|
|
Args:
|
|
start (Tensor): the tensor with the starting points
|
|
end (Tensor): the tensor with the ending points
|
|
weight (float): the weight for the interpolation formula
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> start = torch.arange(1., 5.)
|
|
>>> end = torch.empty(4).fill_(10)
|
|
>>> start
|
|
tensor([ 1., 2., 3., 4.])
|
|
>>> end
|
|
tensor([ 10., 10., 10., 10.])
|
|
>>> torch.lerp(start, end, 0.5)
|
|
tensor([ 5.5000, 6.0000, 6.5000, 7.0000])
|
|
""")

add_docstr(torch.linspace,
r"""
linspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a one-dimensional tensor of :attr:`steps`
equally spaced points between :attr:`start` and :attr:`end`.

The output tensor is 1-D of size :attr:`steps`.

Args:
    start (float): the starting value for the set of points
    end (float): the ending value for the set of points
    steps (int): number of points to sample between :attr:`start`
        and :attr:`end`. Default: ``100``.
    {out}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.linspace(3, 10, steps=5)
    tensor([ 3.0000, 4.7500, 6.5000, 8.2500, 10.0000])
    >>> torch.linspace(-10, 10, steps=5)
    tensor([-10., -5., 0., 5., 10.])
    >>> torch.linspace(start=-10, end=10, steps=5)
    tensor([-10., -5., 0., 5., 10.])
""".format(**factory_common_args))

add_docstr(torch.log,
r"""
log(input, out=None) -> Tensor

Returns a new tensor with the natural logarithm of the elements
of :attr:`input`.

.. math::
    y_{i} = \log_{e} (x_{i})

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(5)
    >>> a
    tensor([-0.7168, -0.5471, -0.8933, -1.4428, -0.1190])
    >>> torch.log(a)
    tensor([ nan, nan, nan, nan, nan])
""")

add_docstr(torch.log10,
r"""
log10(input, out=None) -> Tensor

Returns a new tensor with the logarithm to the base 10 of the elements
of :attr:`input`.

.. math::
    y_{i} = \log_{10} (x_{i})

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.rand(5)
    >>> a
    tensor([ 0.5224, 0.9354, 0.7257, 0.1301, 0.2251])

    >>> torch.log10(a)
    tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476])
""")

add_docstr(torch.log1p,
r"""
log1p(input, out=None) -> Tensor

Returns a new tensor with the natural logarithm of (1 + :attr:`input`).

.. math::
    y_i = \log_{e} (x_i + 1)

.. note:: This function is more accurate than :func:`torch.log` for small
          values of :attr:`input`

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(5)
    >>> a
    tensor([-1.0090, -0.9923, 1.0249, -0.5372, 0.2492])
    >>> torch.log1p(a)
    tensor([ nan, -4.8653, 0.7055, -0.7705, 0.2225])
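
A small illustration of the note above: ``1 + 1e-10`` rounds to ``1`` in
single precision, so the naive formulation loses the tiny value entirely,
while ``log1p`` keeps it (outputs omitted)::

    >>> x = torch.tensor([1e-10])
    >>> torch.log1p(x)    # approximately 1e-10
    >>> torch.log(1 + x)  # collapses to zero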
""")

add_docstr(torch.log2,
r"""
log2(input, out=None) -> Tensor

Returns a new tensor with the logarithm to the base 2 of the elements
of :attr:`input`.

.. math::
    y_{i} = \log_{2} (x_{i})

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.rand(5)
    >>> a
    tensor([ 0.8419, 0.8003, 0.9971, 0.5287, 0.0490])

    >>> torch.log2(a)
    tensor([-0.2483, -0.3213, -0.0042, -0.9196, -4.3504])
""")

add_docstr(torch.logspace,
r"""
logspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a one-dimensional tensor of :attr:`steps` points
logarithmically spaced between :math:`10^{{\text{{start}}}}` and :math:`10^{{\text{{end}}}}`.

The output tensor is 1-D of size :attr:`steps`.

Args:
    start (float): the starting value for the set of points
    end (float): the ending value for the set of points
    steps (int): number of points to sample between :attr:`start`
        and :attr:`end`. Default: ``100``.
    {out}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.logspace(start=-10, end=10, steps=5)
    tensor([ 1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])
    >>> torch.logspace(start=0.1, end=1.0, steps=5)
    tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])
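
As a quick cross-check against :func:`torch.linspace` (an illustrative pairing,
not an additional signature), the call below produces the values 1, 10 and 100,
i.e. 10 raised to ``torch.linspace(0, 2, steps=3)``::

    >>> torch.logspace(start=0, end=2, steps=3)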
""".format(**factory_common_args))

add_docstr(torch.logsumexp,
r"""
logsumexp(input, dim, keepdim=False, out=None)

Returns the log of summed exponentials of each row of the :attr:`input`
tensor in the given dimension :attr:`dim`. The computation is numerically
stabilized.

For summation index :math:`j` given by `dim` and other indices :math:`i`, the result is

.. math::
    \text{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij})

If :attr:`keepdim` is ``True``, the output tensor is of the same size
as :attr:`input` except in the dimension :attr:`dim` where it is of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in
the output tensor having 1 fewer dimension than :attr:`input`.

Args:
    input (Tensor): the input tensor
    dim (int or tuple of ints): the dimension or dimensions to reduce
    keepdim (bool): whether the output tensor has :attr:`dim` retained or not
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(3, 3)
    >>> torch.logsumexp(a, 1)
    tensor([ 0.8442, 1.4322, 0.8711])
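
A small illustration of the numerical stabilization: the naive expression
``torch.log(torch.sum(torch.exp(x), 1))`` overflows to ``inf`` on the input
below, while ``logsumexp`` returns ``1000 + log(2)``::

    >>> x = torch.tensor([[1000., 1000.]])
    >>> torch.logsumexp(x, 1)                   # ~ 1000.6931
    >>> torch.log(torch.sum(torch.exp(x), 1))   # inf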
""")

add_docstr(torch.lt,
r"""
lt(input, other, out=None) -> Tensor

Computes :math:`input < other` element-wise.

The second argument can be a number or a tensor whose shape is
:ref:`broadcastable <broadcasting-semantics>` with the first argument.

Args:
    input (Tensor): the tensor to compare
    other (Tensor or float): the tensor or value to compare
    out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`

Returns:
    Tensor: A `torch.ByteTensor` containing a 1 at each location where comparison is true

Example::

    >>> torch.lt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
    tensor([[ 0, 0],
            [ 1, 0]], dtype=torch.uint8)
""")

add_docstr(torch.masked_select,
r"""
masked_select(input, mask, out=None) -> Tensor

Returns a new 1-D tensor which indexes the :attr:`input` tensor according to
the binary mask :attr:`mask` which is a `ByteTensor`.

The shapes of the :attr:`mask` tensor and the :attr:`input` tensor don't need
to match, but they must be :ref:`broadcastable <broadcasting-semantics>`.

.. note:: The returned tensor does **not** use the same storage
          as the original tensor

Args:
    input (Tensor): the input data
    mask (ByteTensor): the tensor containing the binary mask to index with
    out (Tensor, optional): the output tensor

Example::

    >>> x = torch.randn(3, 4)
    >>> x
    tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],
            [-1.2035, 1.2252, 0.5002, 0.6248],
            [ 0.1307, -2.0608, 0.1244, 2.0139]])
    >>> mask = x.ge(0.5)
    >>> mask
    tensor([[ 0, 0, 0, 0],
            [ 0, 1, 1, 1],
            [ 0, 0, 0, 1]], dtype=torch.uint8)
    >>> torch.masked_select(x, mask)
    tensor([ 1.2252, 0.5002, 0.6248, 2.0139])
""")

add_docstr(torch.max,
r"""
.. function:: max(input) -> Tensor

Returns the maximum value of all elements in the :attr:`input` tensor.

Args:
    input (Tensor): the input tensor

Example::

    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[ 0.6763, 0.7445, -2.2369]])
    >>> torch.max(a)
    tensor(0.7445)

.. function:: max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)

Returns the maximum value of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`. The second return value is the index location of each
maximum value found (argmax).

If :attr:`keepdim` is ``True``, the output tensors are of the same size
as :attr:`input` except in the dimension :attr:`dim` where they are of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting
in the output tensors having 1 fewer dimension than :attr:`input`.

Args:
    input (Tensor): the input tensor
    dim (int): the dimension to reduce
    keepdim (bool): whether the output tensors have :attr:`dim` retained or not
    out (tuple, optional): the result tuple of two output tensors (max, max_indices)

Example::

    >>> a = torch.randn(4, 4)
    >>> a
    tensor([[-1.2360, -0.2942, -0.1222, 0.8475],
            [ 1.1949, -1.1127, -2.2379, -0.6702],
            [ 1.5717, -0.9207, 0.1297, -1.8768],
            [-0.6172, 1.0036, -0.6060, -0.2432]])
    >>> torch.max(a, 1)
    (tensor([ 0.8475, 1.1949, 1.5717, 1.0036]), tensor([ 3, 0, 0, 1]))

.. function:: max(input, other, out=None) -> Tensor

Each element of the tensor :attr:`input` is compared with the corresponding
element of the tensor :attr:`other` and an element-wise maximum is taken.

The shapes of :attr:`input` and :attr:`other` don't need to match,
but they must be :ref:`broadcastable <broadcasting-semantics>`.

.. math::
    out_i = \max(tensor_i, other_i)

.. note:: When the shapes do not match, the shape of the returned output tensor
          follows the :ref:`broadcasting rules <broadcasting-semantics>`.

Args:
    input (Tensor): the input tensor
    other (Tensor): the second input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.2942, -0.7416, 0.2653, -0.1584])
    >>> b = torch.randn(4)
    >>> b
    tensor([ 0.8722, -1.7421, -0.4141, -0.5055])
    >>> torch.max(a, b)
    tensor([ 0.8722, -0.7416, 0.2653, -0.1584])
""")

add_docstr(torch.mean,
r"""
.. function:: mean(input) -> Tensor

Returns the mean value of all elements in the :attr:`input` tensor.

Args:
    input (Tensor): the input tensor

Example::

    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[ 0.2294, -0.5481, 1.3288]])
    >>> torch.mean(a)
    tensor(0.3367)

.. function:: mean(input, dim, keepdim=False, out=None) -> Tensor

Returns the mean value of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`.

If :attr:`keepdim` is ``True``, the output tensor is of the same size
as :attr:`input` except in the dimension :attr:`dim` where it is of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in the
output tensor having 1 fewer dimension.

Args:
    input (Tensor): the input tensor
    dim (int): the dimension to reduce
    keepdim (bool, optional): whether the output tensor has :attr:`dim` retained or not
    out (Tensor): the output tensor

Example::

    >>> a = torch.randn(4, 4)
    >>> a
    tensor([[-0.3841, 0.6320, 0.4254, -0.7384],
            [-0.9644, 1.0131, -0.6549, -1.4279],
            [-0.2951, -1.3350, -0.7694, 0.5600],
            [ 1.0842, -0.9580, 0.3623, 0.2343]])
    >>> torch.mean(a, 1)
    tensor([-0.0163, -0.5085, -0.4599, 0.1807])
    >>> torch.mean(a, 1, True)
    tensor([[-0.0163],
            [-0.5085],
            [-0.4599],
            [ 0.1807]])
""")

add_docstr(torch.median,
r"""
.. function:: median(input) -> Tensor

Returns the median value of all elements in the :attr:`input` tensor.

Args:
    input (Tensor): the input tensor

Example::

    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[ 1.5219, -1.5212, 0.2202]])
    >>> torch.median(a)
    tensor(0.2202)

.. function:: median(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)

Returns the median value of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`. Also returns the index location of the median value
as a `LongTensor`.

By default, :attr:`dim` is the last dimension of the :attr:`input` tensor.

If :attr:`keepdim` is ``True``, the output tensors are of the same size
as :attr:`input` except in the dimension :attr:`dim` where they are of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in
the output tensors having 1 fewer dimension than :attr:`input`.

Args:
    input (Tensor): the input tensor
    dim (int): the dimension to reduce
    keepdim (bool): whether the output tensors have :attr:`dim` retained or not
    values (Tensor, optional): the output tensor
    indices (Tensor, optional): the output index tensor

Example::

    >>> a = torch.randn(4, 5)
    >>> a
    tensor([[ 0.2505, -0.3982, -0.9948, 0.3518, -1.3131],
            [ 0.3180, -0.6993, 1.0436, 0.0438, 0.2270],
            [-0.2751, 0.7303, 0.2192, 0.3321, 0.2488],
            [ 1.0778, -1.9510, 0.7048, 0.4742, -0.7125]])
    >>> torch.median(a, 1)
    (tensor([-0.3982, 0.2270, 0.2488, 0.4742]), tensor([ 1, 4, 4, 3]))
""")

add_docstr(torch.min,
r"""
.. function:: min(input) -> Tensor

Returns the minimum value of all elements in the :attr:`input` tensor.

Args:
    input (Tensor): the input tensor

Example::

    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[ 0.6750, 1.0857, 1.7197]])
    >>> torch.min(a)
    tensor(0.6750)

.. function:: min(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)

Returns the minimum value of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`. The second return value is the index location of each
minimum value found (argmin).

If :attr:`keepdim` is ``True``, the output tensors are of the same size as
:attr:`input` except in the dimension :attr:`dim` where they are of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in
the output tensors having 1 fewer dimension than :attr:`input`.

Args:
    input (Tensor): the input tensor
    dim (int): the dimension to reduce
    keepdim (bool): whether the output tensors have :attr:`dim` retained or not
    out (tuple, optional): the tuple of two output tensors (min, min_indices)

Example::

    >>> a = torch.randn(4, 4)
    >>> a
    tensor([[-0.6248, 1.1334, -1.1899, -0.2803],
            [-1.4644, -0.2635, -0.3651, 0.6134],
            [ 0.2457, 0.0384, 1.0128, 0.7015],
            [-0.1153, 2.9849, 2.1458, 0.5788]])
    >>> torch.min(a, 1)
    (tensor([-1.1899, -1.4644, 0.0384, -0.1153]), tensor([ 2, 0, 1, 0]))

.. function:: min(input, other, out=None) -> Tensor

Each element of the tensor :attr:`input` is compared with the corresponding
element of the tensor :attr:`other` and an element-wise minimum is taken.
The resulting tensor is returned.

The shapes of :attr:`input` and :attr:`other` don't need to match,
but they must be :ref:`broadcastable <broadcasting-semantics>`.

.. math::
    out_i = \min(tensor_i, other_i)

.. note:: When the shapes do not match, the shape of the returned output tensor
          follows the :ref:`broadcasting rules <broadcasting-semantics>`.

Args:
    input (Tensor): the input tensor
    other (Tensor): the second input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.8137, -1.1740, -0.6460, 0.6308])
    >>> b = torch.randn(4)
    >>> b
    tensor([-0.1369, 0.1555, 0.4019, -0.1929])
    >>> torch.min(a, b)
    tensor([-0.1369, -1.1740, -0.6460, -0.1929])
""")

add_docstr(torch.mm,
r"""
mm(mat1, mat2, out=None) -> Tensor

Performs a matrix multiplication of the matrices :attr:`mat1` and :attr:`mat2`.

If :attr:`mat1` is a :math:`(n \times m)` tensor, :attr:`mat2` is a
:math:`(m \times p)` tensor, :attr:`out` will be a :math:`(n \times p)` tensor.

.. note:: This function does not :ref:`broadcast <broadcasting-semantics>`.
          For broadcasting matrix products, see :func:`torch.matmul`.

Args:
    mat1 (Tensor): the first matrix to be multiplied
    mat2 (Tensor): the second matrix to be multiplied
    out (Tensor, optional): the output tensor

Example::

    >>> mat1 = torch.randn(2, 3)
    >>> mat2 = torch.randn(3, 3)
    >>> torch.mm(mat1, mat2)
    tensor([[ 0.4851, 0.5037, -0.3633],
            [-0.0760, -3.6705, 2.4784]])
""")

add_docstr(torch.matmul,
r"""
matmul(tensor1, tensor2, out=None) -> Tensor

Matrix product of two tensors.

The behavior depends on the dimensionality of the tensors as follows:

- If both tensors are 1-dimensional, the dot product (scalar) is returned.
- If both arguments are 2-dimensional, the matrix-matrix product is returned.
- If the first argument is 1-dimensional and the second argument is 2-dimensional,
  a 1 is prepended to its dimension for the purpose of the matrix multiply.
  After the matrix multiply, the prepended dimension is removed.
- If the first argument is 2-dimensional and the second argument is 1-dimensional,
  the matrix-vector product is returned.
- If both arguments are at least 1-dimensional and at least one argument is
  N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first
  argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the
  batched matrix multiply and removed after. If the second argument is 1-dimensional, a
  1 is appended to its dimension for the purpose of the batched matrix multiply and removed after.
  The non-matrix (i.e. batch) dimensions are :ref:`broadcasted <broadcasting-semantics>` (and thus
  must be broadcastable). For example, if :attr:`tensor1` is a
  :math:`(j \times 1 \times n \times m)` tensor and :attr:`tensor2` is a :math:`(k \times m \times p)`
  tensor, :attr:`out` will be a :math:`(j \times k \times n \times p)` tensor.

.. note::

    The 1-dimensional dot product version of this function does not support an :attr:`out` parameter.

Arguments:
    tensor1 (Tensor): the first tensor to be multiplied
    tensor2 (Tensor): the second tensor to be multiplied
    out (Tensor, optional): the output tensor

Example::

    >>> # vector x vector
    >>> tensor1 = torch.randn(3)
    >>> tensor2 = torch.randn(3)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([])
    >>> # matrix x vector
    >>> tensor1 = torch.randn(3, 4)
    >>> tensor2 = torch.randn(4)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([3])
    >>> # batched matrix x broadcasted vector
    >>> tensor1 = torch.randn(10, 3, 4)
    >>> tensor2 = torch.randn(4)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([10, 3])
    >>> # batched matrix x batched matrix
    >>> tensor1 = torch.randn(10, 3, 4)
    >>> tensor2 = torch.randn(10, 4, 5)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([10, 3, 5])
    >>> # batched matrix x broadcasted matrix
    >>> tensor1 = torch.randn(10, 3, 4)
    >>> tensor2 = torch.randn(4, 5)
    >>> torch.matmul(tensor1, tensor2).size()
    torch.Size([10, 3, 5])
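
The batch-broadcasting rule described above can be checked on the shapes from
that paragraph, here with the illustrative choice j=2, k=5, n=3, m=4, p=6::

    >>> # broadcasted batch dims: (2, 1) and (5,) -> (2, 5)
    >>> torch.matmul(torch.randn(2, 1, 3, 4), torch.randn(5, 4, 6)).size()
    torch.Size([2, 5, 3, 6])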
""")

add_docstr(torch.mode,
r"""
mode(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)

Returns the mode value of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`. Also returns the index location of the mode value
as a `LongTensor`.

By default, :attr:`dim` is the last dimension of the :attr:`input` tensor.

If :attr:`keepdim` is ``True``, the output tensors are of the same size as
:attr:`input` except in the dimension :attr:`dim` where they are of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting
in the output tensors having 1 fewer dimension than :attr:`input`.

.. note:: This function is not defined for ``torch.cuda.Tensor`` yet.

Args:
    input (Tensor): the input tensor
    dim (int): the dimension to reduce
    keepdim (bool): whether the output tensors have :attr:`dim` retained or not
    values (Tensor, optional): the output tensor
    indices (Tensor, optional): the output index tensor

Example::

    >>> a = torch.randn(4, 5)
    >>> a
    tensor([[-1.2808, -1.0966, -1.5946, -0.1148, 0.3631],
            [ 1.1395, 1.1452, -0.6383, 0.3667, 0.4545],
            [-0.4061, -0.3074, 0.4579, -1.3514, 1.2729],
            [-1.0130, 0.3546, -1.4689, -0.1254, 0.0473]])
    >>> torch.mode(a, 1)
    (tensor([-1.5946, -0.6383, -1.3514, -1.4689]), tensor([ 2, 2, 3, 2]))
""")

add_docstr(torch.mul,
r"""
.. function:: mul(input, value, out=None)

Multiplies each element of the input :attr:`input` with the scalar
:attr:`value` and returns a new resulting tensor.

.. math::
    out_i = value \times input_i

If :attr:`input` is of type `FloatTensor` or `DoubleTensor`, :attr:`value`
should be a real number, otherwise it should be an integer

Args:
    input (Tensor): the input tensor
    value (Number): the number to be multiplied to each element of :attr:`input`
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(3)
    >>> a
    tensor([ 0.2015, -0.4255, 2.6087])
    >>> torch.mul(a, 100)
    tensor([ 20.1494, -42.5491, 260.8663])

.. function:: mul(input, other, out=None)

Each element of the tensor :attr:`input` is multiplied by each element of the
Tensor :attr:`other`. The resulting tensor is returned.

The shapes of :attr:`input` and :attr:`other` must be
:ref:`broadcastable <broadcasting-semantics>`.

.. math::
    out_i = input_i \times other_i

Args:
    input (Tensor): the first multiplicand tensor
    other (Tensor): the second multiplicand tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4, 1)
    >>> a
    tensor([[ 1.1207],
            [-0.3137],
            [ 0.0700],
            [ 0.8378]])
    >>> b = torch.randn(1, 4)
    >>> b
    tensor([[ 0.5146, 0.1216, -0.5244, 2.2382]])
    >>> torch.mul(a, b)
    tensor([[ 0.5767, 0.1363, -0.5877, 2.5083],
            [-0.1614, -0.0382, 0.1645, -0.7021],
            [ 0.0360, 0.0085, -0.0367, 0.1567],
            [ 0.4312, 0.1019, -0.4394, 1.8753]])
""")

add_docstr(torch.multinomial,
r"""
multinomial(input, num_samples, replacement=False, out=None) -> LongTensor

Returns a tensor where each row contains :attr:`num_samples` indices sampled
from the multinomial probability distribution located in the corresponding row
of tensor :attr:`input`.

.. note::
    The rows of :attr:`input` do not need to sum to one (in which case we use
    the values as weights), but must be non-negative, finite and have
    a non-zero sum.

Indices are ordered from left to right according to when each was sampled
(first samples are placed in first column).

If :attr:`input` is a vector, :attr:`out` is a vector of size :attr:`num_samples`.

If :attr:`input` is a matrix with `m` rows, :attr:`out` is a matrix of shape
:math:`(m \times num\_samples)`.

If replacement is ``True``, samples are drawn with replacement.

If not, they are drawn without replacement, which means that when a
sample index is drawn for a row, it cannot be drawn again for that row.

This implies the constraint that :attr:`num_samples` must be lower than
:attr:`input` length (or number of columns of :attr:`input` if it is a matrix).

Args:
    input (Tensor): the input tensor containing probabilities
    num_samples (int): number of samples to draw
    replacement (bool, optional): whether to draw with replacement or not
    out (Tensor, optional): the output tensor

Example::

    >>> weights = torch.tensor([0, 10, 3, 0], dtype=torch.float) # create a tensor of weights
    >>> torch.multinomial(weights, 4)
    tensor([ 1, 2, 0, 0])
    >>> torch.multinomial(weights, 4, replacement=True)
    tensor([ 2, 1, 1, 1])
""")

add_docstr(torch.mv,
r"""
mv(mat, vec, out=None) -> Tensor

Performs a matrix-vector product of the matrix :attr:`mat` and the vector
:attr:`vec`.

If :attr:`mat` is a :math:`(n \times m)` tensor, :attr:`vec` is a 1-D tensor of
size :math:`m`, :attr:`out` will be 1-D of size :math:`n`.

.. note:: This function does not :ref:`broadcast <broadcasting-semantics>`.

Args:
    mat (Tensor): matrix to be multiplied
    vec (Tensor): vector to be multiplied
    out (Tensor, optional): the output tensor

Example::

    >>> mat = torch.randn(2, 3)
    >>> vec = torch.randn(3)
    >>> torch.mv(mat, vec)
    tensor([ 1.0404, -0.6361])
""")

add_docstr(torch.ne,
r"""
ne(input, other, out=None) -> Tensor

Computes :math:`input \neq other` element-wise.

The second argument can be a number or a tensor whose shape is
:ref:`broadcastable <broadcasting-semantics>` with the first argument.

Args:
    input (Tensor): the tensor to compare
    other (Tensor or float): the tensor or value to compare
    out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as `input`

Returns:
    Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true.

Example::

    >>> torch.ne(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
    tensor([[ 0, 1],
            [ 1, 0]], dtype=torch.uint8)
""")

add_docstr(torch.neg,
r"""
neg(input, out=None) -> Tensor

Returns a new tensor with the negative of the elements of :attr:`input`.

.. math::
    out = -1 \times input

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(5)
    >>> a
    tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])
    >>> torch.neg(a)
    tensor([-0.0090, 0.2262, 0.0682, 0.2866, -0.3940])
""")

add_docstr(torch.nonzero,
r"""
nonzero(input, out=None) -> LongTensor

Returns a tensor containing the indices of all non-zero elements of
:attr:`input`. Each row in the result contains the indices of a non-zero
element in :attr:`input`.

If :attr:`input` has `n` dimensions, then the resulting indices tensor
:attr:`out` is of size :math:`(z \times n)`, where :math:`z` is the total number of
non-zero elements in the :attr:`input` tensor.

Args:
    input (Tensor): the input tensor
    out (LongTensor, optional): the output tensor containing indices

Example::

    >>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))
    tensor([[ 0],
            [ 1],
            [ 2],
            [ 4]])
    >>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
                                    [0.0, 0.4, 0.0, 0.0],
                                    [0.0, 0.0, 1.2, 0.0],
                                    [0.0, 0.0, 0.0, -0.4]]))
    tensor([[ 0, 0],
            [ 1, 1],
            [ 2, 2],
            [ 3, 3]])
""")

add_docstr(torch.norm,
r"""
.. function:: norm(input, p=2) -> Tensor

Returns the p-norm of the :attr:`input` tensor.

.. math::
    ||x||_{p} = \sqrt[p]{x_{1}^{p} + x_{2}^{p} + \ldots + x_{N}^{p}}

Args:
    input (Tensor): the input tensor
    p (float, optional): the exponent value in the norm formulation

Example::

    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[-0.5192, -1.0782, -1.0448]])
    >>> torch.norm(a, 3)
    tensor(1.3633)

.. function:: norm(input, p, dim, keepdim=False, out=None) -> Tensor

Returns the p-norm of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`.

If :attr:`keepdim` is ``True``, the output tensor is of the same size as
:attr:`input` except in the dimension :attr:`dim` where it is of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting
in the output tensor having 1 fewer dimension than :attr:`input`.

Args:
    input (Tensor): the input tensor
    p (float): the exponent value in the norm formulation
    dim (int): the dimension to reduce
    keepdim (bool): whether the output tensor has :attr:`dim` retained or not
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4, 2)
    >>> a
    tensor([[ 2.1983, 0.4141],
            [ 0.8734, 1.9710],
            [-0.7778, 0.7938],
            [-0.1342, 0.7347]])
    >>> torch.norm(a, 2, 1)
    tensor([ 2.2369, 2.1558, 1.1113, 0.7469])
    >>> torch.norm(a, 0, 1, True)
    tensor([[ 2.],
            [ 2.],
            [ 2.],
            [ 2.]])
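
A worked value for the default ``p=2`` case (a simple 3-4-5 check)::

    >>> torch.norm(torch.tensor([3., 4.]))
    tensor(5.)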
""")

add_docstr(torch.normal,
r"""
.. function:: normal(mean, std, out=None) -> Tensor

Returns a tensor of random numbers drawn from separate normal distributions
whose mean and standard deviation are given.

The :attr:`mean` is a tensor with the mean of
each output element's normal distribution

The :attr:`std` is a tensor with the standard deviation of
each output element's normal distribution

The shapes of :attr:`mean` and :attr:`std` don't need to match, but the
total number of elements in each tensor needs to be the same.

.. note:: When the shapes do not match, the shape of :attr:`mean`
          is used as the shape for the returned output tensor

Args:
    mean (Tensor): the tensor of per-element means
    std (Tensor): the tensor of per-element standard deviations
    out (Tensor, optional): the output tensor

Example::

    >>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))
    tensor([ 1.0425, 3.5672, 2.7969, 4.2925, 4.7229, 6.2134,
             8.0505, 8.1408, 9.0563, 10.0566])

.. function:: normal(mean=0.0, std, out=None) -> Tensor

Similar to the function above, but the means are shared among all drawn
elements.

Args:
    mean (float, optional): the mean for all distributions
    std (Tensor): the tensor of per-element standard deviations
    out (Tensor, optional): the output tensor

Example::

    >>> torch.normal(mean=0.5, std=torch.arange(1., 6.))
    tensor([-1.2793, -1.0732, -2.0687, 5.1177, -1.2303])

.. function:: normal(mean, std=1.0, out=None) -> Tensor

Similar to the function above, but the standard-deviations are shared among
all drawn elements.

Args:
    mean (Tensor): the tensor of per-element means
    std (float, optional): the standard deviation for all distributions
    out (Tensor, optional): the output tensor

Example::

    >>> torch.normal(mean=torch.arange(1., 6.))
    tensor([ 1.1552, 2.6148, 2.6535, 5.8318, 4.2361])
""")

add_docstr(torch.numel,
r"""
numel(input) -> int

Returns the total number of elements in the :attr:`input` tensor.

Args:
    input (Tensor): the input tensor

Example::

    >>> a = torch.randn(1, 2, 3, 4, 5)
    >>> torch.numel(a)
    120
    >>> a = torch.zeros(4, 4)
    >>> torch.numel(a)
    16
""")

add_docstr(torch.ones,
r"""
ones(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a tensor filled with the scalar value `1`, with the shape defined
by the variable argument :attr:`sizes`.

Args:
    sizes (int...): a sequence of integers defining the shape of the output tensor.
        Can be a variable number of arguments or a collection like a list or tuple.
    {out}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.ones(2, 3)
    tensor([[ 1., 1., 1.],
            [ 1., 1., 1.]])

    >>> torch.ones(5)
    tensor([ 1., 1., 1., 1., 1.])
""".format(**factory_common_args))

add_docstr(torch.ones_like,
r"""
ones_like(input, dtype=None, layout=None, device=None, requires_grad=False) -> Tensor

Returns a tensor filled with the scalar value `1`, with the same size as
:attr:`input`. ``torch.ones_like(input)`` is equivalent to
``torch.ones(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)``.

.. warning::
    As of 0.4, this function does not support an :attr:`out` keyword. As an alternative,
    the old ``torch.ones_like(input, out=output)`` is equivalent to
    ``torch.ones(input.size(), out=output)``.

Args:
    {input}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> input = torch.empty(2, 3)
    >>> torch.ones_like(input)
    tensor([[ 1., 1., 1.],
            [ 1., 1., 1.]])
""".format(**factory_like_common_args))

add_docstr(torch.orgqr,
r"""
orgqr(a, tau) -> Tensor

Computes the orthogonal matrix `Q` of a QR factorization, from the `(a, tau)`
tuple returned by :func:`torch.geqrf`.

This directly calls the underlying LAPACK function `?orgqr`.
See `LAPACK documentation for orgqr`_ for further details.

Args:
    a (Tensor): the `a` from :func:`torch.geqrf`.
    tau (Tensor): the `tau` from :func:`torch.geqrf`.

.. _LAPACK documentation for orgqr:
    https://software.intel.com/en-us/mkl-developer-reference-c-orgqr
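
A minimal usage sketch (random input, so outputs are omitted): pair
:func:`torch.geqrf` with ``orgqr`` to materialize the thin `Q` factor, whose
columns are orthonormal::

    >>> a = torch.randn(4, 3)
    >>> g, tau = torch.geqrf(a)
    >>> q = torch.orgqr(g, tau)   # same shape as `a`
    >>> torch.mm(q.t(), q)        # close to the 3 x 3 identity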
""")

add_docstr(torch.ormqr,
r"""
ormqr(a, tau, mat, left=True, transpose=False) -> (Tensor, Tensor)

Multiplies `mat` by the orthogonal `Q` matrix of the QR factorization
formed by :func:`torch.geqrf` that is represented by `(a, tau)`.

This directly calls the underlying LAPACK function `?ormqr`.
See `LAPACK documentation for ormqr`_ for further details.

Args:
    a (Tensor): the `a` from :func:`torch.geqrf`.
    tau (Tensor): the `tau` from :func:`torch.geqrf`.
    mat (Tensor): the matrix to be multiplied.

.. _LAPACK documentation for ormqr:
    https://software.intel.com/en-us/mkl-developer-reference-c-ormqr
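
A rough usage sketch following the signature above (random input, outputs
omitted): applying the implicit `Q` from :func:`torch.geqrf` to another matrix
without forming `Q` explicitly::

    >>> m = torch.randn(4, 3)
    >>> g, tau = torch.geqrf(m)
    >>> b = torch.randn(4, 2)
    >>> torch.ormqr(g, tau, b)   # with left=True, multiplies `b` by `Q` on the left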
""")

add_docstr(torch.potrf, r"""
potrf(a, upper=True, out=None) -> Tensor

Computes the Cholesky decomposition of a symmetric positive-definite
matrix :math:`A`.

If :attr:`upper` is ``True``, the returned matrix `U` is upper-triangular, and
the decomposition has the form:

.. math::

    A = U^TU

If :attr:`upper` is ``False``, the returned matrix `L` is lower-triangular, and
the decomposition has the form:

.. math::

    A = LL^T

Args:
    a (Tensor): the input 2-D tensor, a symmetric positive-definite matrix
    upper (bool, optional): flag that indicates whether to return the
        upper or lower triangular matrix
    out (Tensor, optional): the output matrix

Example::

    >>> a = torch.randn(3, 3)
    >>> a = torch.mm(a, a.t()) # make symmetric positive definite
    >>> u = torch.potrf(a)
    >>> a
    tensor([[ 2.4112, -0.7486, 1.4551],
            [-0.7486, 1.3544, 0.1294],
            [ 1.4551, 0.1294, 1.6724]])
    >>> u
    tensor([[ 1.5528, -0.4821, 0.9371],
            [ 0.0000, 1.0592, 0.5486],
            [ 0.0000, 0.0000, 0.7023]])
    >>> torch.mm(u.t(), u)
    tensor([[ 2.4112, -0.7486, 1.4551],
            [-0.7486, 1.3544, 0.1294],
            [ 1.4551, 0.1294, 1.6724]])
""")

add_docstr(torch.potri, r"""
|
|
potri(u, upper=True, out=None) -> Tensor
|
|
|
|
Computes the inverse of a positive semidefinite matrix given its
|
|
Cholesky factor :attr:`u`: returns matrix `inv`
|
|
|
|
If :attr:`upper` is ``True`` or not provided, :attr:`u` is upper
|
|
triangular such that:
|
|
|
|
.. math::
|
|
inv = (u^T u)^{-1}
|
|
|
|
If :attr:`upper` is ``False``, :attr:`u` is lower triangular
|
|
such that:
|
|
|
|
.. math::
|
|
inv = (uu^{T})^{-1}
|
|
|
|
Args:
|
|
u (Tensor): the input 2-D tensor, a upper or lower triangular
|
|
Cholesky factor
|
|
upper (bool, optional): whether to return a upper (default) or lower triangular matrix
|
|
out (Tensor, optional): the output tensor for `inv`
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(3, 3)
|
|
>>> a = torch.mm(a, a.t()) # make symmetric positive definite
|
|
>>> u = torch.potrf(a)
|
|
>>> a
|
|
tensor([[ 0.9935, -0.6353, 1.5806],
|
|
[ -0.6353, 0.8769, -1.7183],
|
|
[ 1.5806, -1.7183, 10.6618]])
|
|
>>> torch.potri(u)
|
|
tensor([[ 1.9314, 1.2251, -0.0889],
|
|
[ 1.2251, 2.4439, 0.2122],
|
|
[-0.0889, 0.2122, 0.1412]])
|
|
>>> a.inverse()
|
|
tensor([[ 1.9314, 1.2251, -0.0889],
|
|
[ 1.2251, 2.4439, 0.2122],
|
|
[-0.0889, 0.2122, 0.1412]])
|
|
""")
|
|
|
|
add_docstr(torch.potrs, r"""
|
|
potrs(b, u, upper=True, out=None) -> Tensor
|
|
|
|
Solves a linear system of equations with a positive semidefinite
|
|
matrix to be inverted given its Cholesky factor matrix :attr:`u`.
|
|
|
|
If :attr:`upper` is ``True`` or not provided, :attr:`u` is upper triangular
|
|
and `c` is returned such that:
|
|
|
|
.. math::
|
|
c = (u^T u)^{-1} b
|
|
|
|
If :attr:`upper` is ``False``, :attr:`u` is and lower triangular and `c` is
|
|
returned such that:
|
|
|
|
.. math::
|
|
c = (u u^T)^{-1} b
|
|
|
|
.. note:: :attr:`b` is always a 2-D tensor, use `b.unsqueeze(1)` to convert a vector.
|
|
|
|
Args:
|
|
b (Tensor): the right hand side 2-D tensor
|
|
u (Tensor): the input 2-D tensor, a upper or lower triangular Cholesky factor
|
|
upper (bool, optional): whether to return a upper (default) or lower triangular matrix
|
|
out (Tensor, optional): the output tensor for `c`
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(3, 3)
|
|
>>> a = torch.mm(a, a.t()) # make symmetric positive definite
|
|
>>> u = torch.potrf(a)
|
|
>>> a
|
|
tensor([[ 0.7747, -1.9549, 1.3086],
|
|
[-1.9549, 6.7546, -5.4114],
|
|
[ 1.3086, -5.4114, 4.8733]])
|
|
>>> b = torch.randn(3, 2)
|
|
>>> b
|
|
tensor([[-0.6355, 0.9891],
|
|
[ 0.1974, 1.4706],
|
|
[-0.4115, -0.6225]])
|
|
>>> torch.potrs(b,u)
|
|
tensor([[ -8.1625, 19.6097],
|
|
[ -5.8398, 14.2387],
|
|
[ -4.3771, 10.4173]])
|
|
>>> torch.mm(a.inverse(),b)
|
|
tensor([[ -8.1626, 19.6097],
|
|
[ -5.8398, 14.2387],
|
|
[ -4.3771, 10.4173]])
|
|
""")
|
|
|
|
add_docstr(torch.pow,
r"""
.. function:: pow(input, exponent, out=None) -> Tensor

Takes the power of each element in :attr:`input` with :attr:`exponent` and
returns a tensor with the result.

:attr:`exponent` can be either a single ``float`` number or a `Tensor`
with the same number of elements as :attr:`input`.

When :attr:`exponent` is a scalar value, the operation applied is:

.. math::
    out_i = x_i ^ {exponent}

When :attr:`exponent` is a tensor, the operation applied is:

.. math::
    out_i = x_i ^ {exponent_i}

When :attr:`exponent` is a tensor, the shapes of :attr:`input`
and :attr:`exponent` must be :ref:`broadcastable <broadcasting-semantics>`.

Args:
    input (Tensor): the input tensor
    exponent (float or tensor): the exponent value
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.4331, 1.2475, 0.6834, -0.2791])
    >>> torch.pow(a, 2)
    tensor([ 0.1875, 1.5561, 0.4670, 0.0779])
    >>> exp = torch.arange(1., 5.)

    >>> a = torch.arange(1., 5.)
    >>> a
    tensor([ 1., 2., 3., 4.])
    >>> exp
    tensor([ 1., 2., 3., 4.])
    >>> torch.pow(a, exp)
    tensor([ 1., 4., 27., 256.])

.. function:: pow(base, input, out=None) -> Tensor

:attr:`base` is a scalar ``float`` value, and :attr:`input` is a tensor.
The returned tensor :attr:`out` is of the same shape as :attr:`input`

The operation applied is:

.. math::
    out_i = base ^ {input_i}

Args:
    base (float): the scalar base value for the power operation
    input (Tensor): the exponent tensor
    out (Tensor, optional): the output tensor

Example::

    >>> exp = torch.arange(1., 5.)
    >>> base = 2
    >>> torch.pow(base, exp)
    tensor([ 2., 4., 8., 16.])
""")

add_docstr(torch.prod,
r"""
.. function:: prod(input, dtype=None) -> Tensor

Returns the product of all elements in the :attr:`input` tensor.

Args:
    input (Tensor): the input tensor
    {dtype}

Example::

    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[-0.8020, 0.5428, -1.5854]])
    >>> torch.prod(a)
    tensor(0.6902)

.. function:: prod(input, dim, keepdim=False, dtype=None) -> Tensor

Returns the product of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`.

If :attr:`keepdim` is ``True``, the output tensor is of the same size as
:attr:`input` except in the dimension :attr:`dim` where it is of size 1.
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting
in the output tensor having 1 fewer dimension than :attr:`input`.

Args:
    input (Tensor): the input tensor
    dim (int): the dimension to reduce
    keepdim (bool): whether the output tensor has :attr:`dim` retained or not
    {dtype}

Example::

    >>> a = torch.randn(4, 2)
    >>> a
    tensor([[ 0.5261, -0.3837],
            [ 1.1857, -0.2498],
            [-1.1646, 0.0705],
            [ 1.1131, -1.0629]])
    >>> torch.prod(a, 1)
    tensor([-0.2018, -0.2962, -0.0821, -1.1831])
""".format(**reduceops_common_args))

add_docstr(torch.pstrf, r"""
|
|
pstrf(a, upper=True, out=None) -> (Tensor, Tensor)
|
|
|
|
Computes the pivoted Cholesky decomposition of a positive semidefinite
|
|
matrix :attr:`a`. returns matrices `u` and `piv`.
|
|
|
|
If :attr:`upper` is ``True`` or not provided, `u` is upper triangular
|
|
such that :math:`a = p^T u^T u p`, with `p` the permutation given by `piv`.
|
|
|
|
If :attr:`upper` is ``False``, `u` is lower triangular such that
|
|
:math:`a = p^T u u^T p`.
|
|
|
|
Args:
|
|
a (Tensor): the input 2-D tensor
|
|
upper (bool, optional): whether to return a upper (default) or lower triangular matrix
|
|
out (tuple, optional): tuple of `u` and `piv` tensors
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(3, 3)
|
|
>>> a = torch.mm(a, a.t()) # make symmetric positive definite
|
|
>>> a
|
|
tensor([[ 3.5405, -0.4577, 0.8342],
|
|
[-0.4577, 1.8244, -0.1996],
|
|
[ 0.8342, -0.1996, 3.7493]])
|
|
>>> u,piv = torch.pstrf(a)
|
|
>>> u
|
|
tensor([[ 1.9363, 0.4308, -0.1031],
|
|
[ 0.0000, 1.8316, -0.2256],
|
|
[ 0.0000, 0.0000, 1.3277]])
|
|
>>> piv
|
|
tensor([ 2, 0, 1], dtype=torch.int32)
|
|
>>> p = torch.eye(3).index_select(0,piv.long()).index_select(0,piv.long()).t() # make pivot permutation
|
|
>>> torch.mm(torch.mm(p.t(),torch.mm(u.t(),u)),p) # reconstruct
|
|
tensor([[ 3.5405, -0.4577, 0.8342],
|
|
[-0.4577, 1.8244, -0.1996],
|
|
[ 0.8342, -0.1996, 3.7493]])
|
|
""")
|
|
|
|
add_docstr(torch.qr,
r"""
qr(input, out=None) -> (Tensor, Tensor)

Computes the QR decomposition of a matrix :attr:`input`, and returns matrices
`Q` and `R` such that :math:`\text{input} = Q R`, with :math:`Q` being an
orthogonal matrix and :math:`R` being an upper triangular matrix.

This returns the thin (reduced) QR factorization.

.. note:: precision may be lost if the magnitudes of the elements of :attr:`input`
          are large

.. note:: While it should always give you a valid decomposition, it may not
          give you the same one across platforms - it will depend on your
          LAPACK implementation.

.. note:: Irrespective of the original strides, the returned matrix :math:`Q` will be
          transposed, i.e. with strides `(1, m)` instead of `(m, 1)`.

Args:
    input (Tensor): the input 2-D tensor
    out (tuple, optional): tuple of `Q` and `R` tensors

Example::

    >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
    >>> q, r = torch.qr(a)
    >>> q
    tensor([[-0.8571, 0.3943, 0.3314],
            [-0.4286, -0.9029, -0.0343],
            [ 0.2857, -0.1714, 0.9429]])
    >>> r
    tensor([[ -14.0000, -21.0000, 14.0000],
            [ 0.0000, -175.0000, 70.0000],
            [ 0.0000, 0.0000, -35.0000]])
    >>> torch.mm(q, r).round()
    tensor([[ 12., -51., 4.],
            [ 6., 167., -68.],
            [ -4., 24., -41.]])
    >>> torch.mm(q.t(), q).round()
    tensor([[ 1., 0., 0.],
            [ 0., 1., -0.],
            [ 0., -0., 1.]])
""")

add_docstr(torch.rand,
r"""
rand(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a tensor filled with random numbers from a uniform distribution
on the interval :math:`[0, 1)`

The shape of the tensor is defined by the variable argument :attr:`sizes`.

Args:
    sizes (int...): a sequence of integers defining the shape of the output tensor.
        Can be a variable number of arguments or a collection like a list or tuple.
    {out}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.rand(4)
    tensor([ 0.5204, 0.2503, 0.3525, 0.5673])
    >>> torch.rand(2, 3)
    tensor([[ 0.8237, 0.5781, 0.6879],
            [ 0.3816, 0.7249, 0.0998]])
""".format(**factory_common_args))

add_docstr(torch.rand_like,
r"""
rand_like(input, dtype=None, layout=None, device=None, requires_grad=False) -> Tensor

Returns a tensor with the same size as :attr:`input` that is filled with
random numbers from a uniform distribution on the interval :math:`[0, 1)`.
``torch.rand_like(input)`` is equivalent to
``torch.rand(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)``.

Args:
    {input}
    {dtype}
    {layout}
    {device}
    {requires_grad}
""".format(**factory_like_common_args))

add_docstr(torch.randint,
r"""
randint(low=0, high, size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a tensor filled with random integers generated uniformly
between :attr:`low` (inclusive) and :attr:`high` (exclusive).

The shape of the tensor is defined by the variable argument :attr:`size`.

.. note::
    With the global dtype default (`torch.float32`), this function returns
    a tensor with dtype `torch.float32`, NOT an integer dtype.

Args:
    low (int, optional): Lowest integer to be drawn from the distribution. Default: 0.
    high (int): One above the highest integer to be drawn from the distribution.
    size (tuple): a tuple defining the shape of the output tensor.
    {out}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.randint(3, 5, (3,))
    tensor([ 4., 3., 4.])

    >>> torch.randint(10, (2, 2))
    tensor([[ 0., 2.],
            [ 5., 5.]])

    >>> torch.randint(3, 10, (2, 2))
    tensor([[ 4., 5.],
            [ 6., 7.]])
""".format(**factory_common_args))

add_docstr(torch.randint_like,
r"""
randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a tensor with the same shape as Tensor :attr:`input` filled with
random integers generated uniformly between :attr:`low` (inclusive) and
:attr:`high` (exclusive).

.. note::
    With the global dtype default (`torch.float32`), this function returns
    a tensor with dtype `torch.float32`, NOT an integer dtype.

Args:
    {input}
    low (int, optional): Lowest integer to be drawn from the distribution. Default: 0.
    high (int): One above the highest integer to be drawn from the distribution.
    {dtype}
    {layout}
    {device}
    {requires_grad}
""".format(**factory_like_common_args))

add_docstr(torch.randn,
r"""
randn(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a tensor filled with random numbers from a normal distribution
with mean `0` and variance `1` (also called the standard normal
distribution).

.. math::
    \text{{out}}_{{i}} \sim \mathcal{{N}}(0, 1)

The shape of the tensor is defined by the variable argument :attr:`sizes`.

Args:
    sizes (int...): a sequence of integers defining the shape of the output tensor.
        Can be a variable number of arguments or a collection like a list or tuple.
    {out}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.randn(4)
    tensor([-2.1436, 0.9966, 2.3426, -0.6366])
    >>> torch.randn(2, 3)
    tensor([[ 1.5954, 2.8929, -1.0923],
            [ 1.1719, -0.4709, -0.1996]])
""".format(**factory_common_args))

add_docstr(torch.randn_like,
r"""
randn_like(input, dtype=None, layout=None, device=None, requires_grad=False) -> Tensor

Returns a tensor with the same size as :attr:`input` that is filled with
random numbers from a normal distribution with mean 0 and variance 1.
``torch.randn_like(input)`` is equivalent to
``torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)``.

Args:
    {input}
    {dtype}
    {layout}
    {device}
    {requires_grad}
""".format(**factory_like_common_args))

add_docstr(torch.randperm,
r"""
randperm(n, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False) -> LongTensor

Returns a random permutation of integers from ``0`` to ``n - 1``.

Args:
    n (int): the upper bound (exclusive)
    {out}
    dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
        Default: ``torch.int64``.
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.randperm(4)
    tensor([ 2, 1, 0, 3])
""".format(**factory_common_args))

add_docstr(torch.tensor,
r"""
tensor(data, dtype=None, device=None, requires_grad=False) -> Tensor

Constructs a tensor with :attr:`data`.

.. warning::

    :func:`torch.tensor` always copies :attr:`data`. If you have a Tensor
    ``data`` and want to avoid a copy, use :func:`torch.Tensor.requires_grad_`
    or :func:`torch.Tensor.detach`.
    If you have a NumPy ``ndarray`` and want to avoid a copy, use
    :func:`torch.from_numpy`.

Args:
    {data}
    {dtype}
    {device}
    {requires_grad}

Example::

    >>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
    tensor([[ 0.1000, 1.2000],
            [ 2.2000, 3.1000],
            [ 4.9000, 5.2000]])

    >>> torch.tensor([0, 1]) # Type inference on data
    tensor([ 0, 1])

    >>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
                     dtype=torch.float64,
                     device=torch.device('cuda:0')) # creates a torch.cuda.DoubleTensor
    tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')

    >>> torch.tensor(3.14159) # Create a scalar (zero-dimensional tensor)
    tensor(3.1416)

    >>> torch.tensor([]) # Create an empty tensor (of size (0,))
    tensor([])
""".format(**factory_data_common_args))

add_docstr(torch.range,
r"""
range(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a 1-D tensor of size :math:`\left\lfloor \frac{{end - start}}{{step}} \right\rfloor + 1`
with values from :attr:`start` to :attr:`end` with step :attr:`step`. Step is
the gap between two values in the tensor.

.. math::
    \text{{out}}_{{i+1}} = \text{{out}}_i + step.

.. warning::
    This function is deprecated in favor of :func:`torch.arange`.

Args:
    start (float): the starting value for the set of points. Default: ``0``.
    end (float): the ending value for the set of points
    step (float): the gap between each pair of adjacent points. Default: ``1``.
    {out}
    {dtype}
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.range(1, 4)
    tensor([ 1., 2., 3., 4.])
    >>> torch.range(1, 4, 0.5)
    tensor([ 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000])
""".format(**factory_common_args))

add_docstr(torch.arange,
r"""
arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor

Returns a 1-D tensor of size :math:`\left\lfloor \frac{{end - start}}{{step}} \right\rfloor`
with values from the interval ``[start, end)`` taken with common difference
:attr:`step` beginning from `start`.

Note that non-integer :attr:`step` is subject to floating point rounding errors when
comparing against :attr:`end`; to avoid inconsistency, we advise adding a small epsilon to :attr:`end`
in such cases.

.. math::
    \text{{out}}_{{i+1}} = \text{{out}}_{{i}} + \text{{step}}

Args:
    start (Number): the starting value for the set of points. Default: ``0``.
    end (Number): the ending value for the set of points
    step (Number): the gap between each pair of adjacent points. Default: ``1``.
    {out}
    {dtype} If `dtype` is not given, infer the data type from the other input arguments.
        If any of `start`, `end`, or `step` are floating-point,
        the `dtype` is inferred to be the default dtype, see :meth:`~torch.get_default_dtype`.
        Otherwise, the `dtype` is inferred to be `torch.int64`.
    {layout}
    {device}
    {requires_grad}

Example::

    >>> torch.arange(5)
    tensor([ 0, 1, 2, 3, 4])
    >>> torch.arange(1, 4)
    tensor([ 1, 2, 3])
    >>> torch.arange(1, 2.5, 0.5)
    tensor([ 1.0000, 1.5000, 2.0000])
""".format(**factory_common_args))

add_docstr(torch.remainder,
r"""
remainder(input, divisor, out=None) -> Tensor

Computes the element-wise remainder of division.

The divisor and dividend may contain both integer and floating point
numbers. The remainder has the same sign as the divisor.

When :attr:`divisor` is a tensor, the shapes of :attr:`input` and
:attr:`divisor` must be :ref:`broadcastable <broadcasting-semantics>`.

Args:
    input (Tensor): the dividend
    divisor (Tensor or float): the divisor that may be either a number or a
        Tensor of the same shape as the dividend
    out (Tensor, optional): the output tensor

Example::

    >>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
    tensor([ 1., 0., 1., 1., 0., 1.])
    >>> torch.remainder(torch.tensor([1., 2, 3, 4, 5]), 1.5)
    tensor([ 1.0000, 0.5000, 0.0000, 1.0000, 0.5000])

.. seealso::

    :func:`torch.fmod`, which computes the element-wise remainder of
    division equivalently to the C library function ``fmod()``.
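
A small contrast with :func:`torch.fmod` (the sign of the result follows the
divisor here, but follows the dividend for ``fmod``)::

    >>> torch.remainder(torch.tensor([-3., 3.]), 2)
    tensor([ 1., 1.])
    >>> torch.fmod(torch.tensor([-3., 3.]), 2)
    tensor([-1., 1.])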
""")

add_docstr(torch.renorm,
r"""
renorm(input, p, dim, maxnorm, out=None) -> Tensor

Returns a tensor where each sub-tensor of :attr:`input` along dimension
:attr:`dim` is normalized such that the `p`-norm of the sub-tensor is lower
than the value :attr:`maxnorm`

.. note:: If the norm of a row is lower than `maxnorm`, the row is unchanged

Args:
    input (Tensor): the input tensor
    p (float): the power for the norm computation
    dim (int): the dimension to slice over to get the sub-tensors
    maxnorm (float): the maximum norm to keep each sub-tensor under
    out (Tensor, optional): the output tensor

Example::

    >>> x = torch.ones(3, 3)
    >>> x[1].fill_(2)
    tensor([ 2., 2., 2.])
    >>> x[2].fill_(3)
    tensor([ 3., 3., 3.])
    >>> x
    tensor([[ 1., 1., 1.],
            [ 2., 2., 2.],
            [ 3., 3., 3.]])
    >>> torch.renorm(x, 1, 0, 5)
    tensor([[ 1.0000, 1.0000, 1.0000],
            [ 1.6667, 1.6667, 1.6667],
            [ 1.6667, 1.6667, 1.6667]])
""")

add_docstr(torch.reshape,
r"""
reshape(input, shape) -> Tensor

Returns a tensor with the same data and number of elements as :attr:`input`,
but with the specified shape. When possible, the returned tensor will be a view
of :attr:`input`. Otherwise, it will be a copy. Contiguous inputs and inputs
with compatible strides can be reshaped without copying, but you should not
depend on the copying vs. viewing behavior.

A single dimension may be -1, in which case it's inferred from the remaining
dimensions and the number of elements in :attr:`input`.

Args:
    input (Tensor): the tensor to be reshaped
    shape (tuple of ints): the new shape

Example::

    >>> a = torch.arange(4.)
    >>> torch.reshape(a, (2, 2))
    tensor([[ 0., 1.],
            [ 2., 3.]])
    >>> b = torch.tensor([[0, 1], [2, 3]])
    >>> torch.reshape(b, (-1,))
    tensor([ 0, 1, 2, 3])
""")

add_docstr(torch.round,
r"""
round(input, out=None) -> Tensor

Returns a new tensor with each of the elements of :attr:`input` rounded
to the closest integer.

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.9920, 0.6077, 0.9734, -1.0362])
    >>> torch.round(a)
    tensor([ 1., 1., 1., -1.])
""")

add_docstr(torch.rsqrt,
r"""
rsqrt(input, out=None) -> Tensor

Returns a new tensor with the reciprocal of the square-root of each of
the elements of :attr:`input`.

.. math::
    \text{out}_{i} = \frac{1}{\sqrt{\text{input}_{i}}}

Args:
    input (Tensor): the input tensor
    out (Tensor, optional): the output tensor

Example::

    >>> a = torch.randn(4)
    >>> a
    tensor([-0.0370, 0.2970, 1.5420, -0.9105])
    >>> torch.rsqrt(a)
    tensor([ nan, 1.8351, 0.8053, nan])
""")

add_docstr(torch.set_flush_denormal,
r"""
set_flush_denormal(mode) -> bool

Disables denormal floating numbers on CPU.

Returns ``True`` if your system supports flushing denormal numbers and it
successfully configures flush denormal mode. :meth:`~torch.set_flush_denormal`
is only supported on x86 architectures supporting SSE3.

Args:
    mode (bool): Controls whether to enable flush denormal mode or not

Example::

    >>> torch.set_flush_denormal(True)
    True
    >>> torch.tensor([1e-323], dtype=torch.float64)
    tensor([ 0.], dtype=torch.float64)
    >>> torch.set_flush_denormal(False)
    True
    >>> torch.tensor([1e-323], dtype=torch.float64)
    tensor(9.88131e-324 *
           [ 1.0000], dtype=torch.float64)
""")

add_docstr(torch.set_num_threads,
|
|
r"""
|
|
set_num_threads(int)
|
|
|
|
Sets the number of OpenMP threads used for parallelizing CPU operations
|
|
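Example (illustrative; :func:`torch.get_num_threads` is assumed to be available
to read the setting back)::

>>> torch.set_num_threads(4)
>>> torch.get_num_threads()
4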
""")
|
|
|
|
add_docstr(torch.sigmoid,
|
|
r"""
|
|
sigmoid(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the sigmoid of the elements of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \frac{1}{1 + e^{-\text{input}_{i}}}
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 0.9213, 1.0887, -0.8858, -1.7683])
|
|
>>> torch.sigmoid(a)
|
|
tensor([ 0.7153, 0.7481, 0.2920, 0.1458])
|
|
""")
|
|
|
|
add_docstr(torch.sign,
|
|
r"""
|
|
sign(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the sign of the elements of :attr:`input`.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 1.0382, -1.4526, -0.9709, 0.4542])
|
|
>>> torch.sign(a)
|
|
tensor([ 1., -1., -1., 1.])
|
|
""")
|
|
|
|
add_docstr(torch.sin,
|
|
r"""
|
|
sin(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the sine of the elements of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \sin(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-0.5461, 0.1347, -2.7266, -0.2746])
|
|
>>> torch.sin(a)
|
|
tensor([-0.5194, 0.1343, -0.4032, -0.2711])
|
|
""")
|
|
|
|
add_docstr(torch.sinh,
|
|
r"""
|
|
sinh(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the hyperbolic sine of the elements of
|
|
:attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \sinh(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 0.5380, -0.8632, -0.1265, 0.9399])
|
|
>>> torch.sinh(a)
|
|
tensor([ 0.5644, -0.9744, -0.1268, 1.0845])
|
|
""")
|
|
|
|
add_docstr(torch.sort,
|
|
r"""
|
|
sort(input, dim=None, descending=False, out=None) -> (Tensor, LongTensor)
|
|
|
|
Sorts the elements of the :attr:`input` tensor along a given dimension
|
|
in ascending order by value.
|
|
|
|
If :attr:`dim` is not given, the last dimension of the `input` is chosen.
|
|
|
|
If :attr:`descending` is ``True`` then the elements are sorted in descending
|
|
order by value.
|
|
|
|
A tuple of (sorted_tensor, sorted_indices) is returned, where the
|
|
sorted_indices are the indices of the elements in the original `input` tensor.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int, optional): the dimension to sort along
|
|
descending (bool, optional): controls the sorting order (ascending or descending)
|
|
out (tuple, optional): the output tuple of (`Tensor`, `LongTensor`) that can
|
|
be optionally given to be used as output buffers
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(3, 4)
|
|
>>> sorted, indices = torch.sort(x)
|
|
>>> sorted
|
|
tensor([[-0.2162, 0.0608, 0.6719, 2.3332],
|
|
[-0.5793, 0.0061, 0.6058, 0.9497],
|
|
[-0.5071, 0.3343, 0.9553, 1.0960]])
|
|
>>> indices
|
|
tensor([[ 1, 0, 2, 3],
|
|
[ 3, 1, 0, 2],
|
|
[ 0, 3, 1, 2]])
|
|
|
|
>>> sorted, indices = torch.sort(x, 0)
|
|
>>> sorted
|
|
tensor([[-0.5071, -0.2162, 0.6719, -0.5793],
|
|
[ 0.0608, 0.0061, 0.9497, 0.3343],
|
|
[ 0.6058, 0.9553, 1.0960, 2.3332]])
|
|
>>> indices
|
|
tensor([[ 2, 0, 0, 1],
|
|
[ 0, 1, 1, 2],
|
|
[ 1, 2, 2, 0]])
|
|
""")
|
|
|
|
add_docstr(torch.sparse_coo_tensor,
|
|
r"""
|
|
sparse_coo_tensor(indices, values, size=None, dtype=None, device=None, requires_grad=False) -> Tensor
|
|
|
|
Constructs a sparse_coo_tensor with non-zero elements at the given :attr:`indices` with the given
|
|
:attr:`values`.
|
|
|
|
Args:
|
|
indices (array_like): Initial data for the tensor. Can be a list, tuple,
|
|
NumPy ``ndarray``, scalar, and other types. Will be cast to a :class:`torch.LongTensor`
|
|
internally. The indices are the coordinates of the non-zero values in the matrix, and thus
|
|
should be two-dimensional where the first dimension is the number of tensor dimensions and
|
|
the second dimension is the number of non-zero values.
|
|
values (array_like): Initial values for the tensor. Can be a list, tuple,
|
|
NumPy ``ndarray``, scalar, and other types.
|
|
size (list, tuple, or :class:`torch.Size`, optional): Size of the sparse tensor. If not
|
|
provided the size will be inferred as the minimum size big enough to hold all non-zero
|
|
elements.
|
|
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
|
|
Default: if None, infers data type from :attr:`values`.
|
|
device (:class:`torch.device`, optional): the desired device of returned tensor.
|
|
Default: if None, uses the current device for the default tensor type
|
|
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
|
|
for CPU tensor types and the current CUDA device for CUDA tensor types.
|
|
requires_grad (bool, optional): If autograd should record operations on the
|
|
returned tensor. Default: ``False``.
|
|
|
|
|
|
Example::
|
|
|
|
>>> i = torch.LongTensor([[0, 1, 1],
|
|
[2, 0, 2]])
|
|
>>> v = torch.FloatTensor([3, 4, 5])
|
|
>>> torch.sparse_coo_tensor(i, v, torch.Size([2,4]))
|
|
torch.sparse.FloatTensor of size (2,4) with indices:
|
|
tensor([[ 0, 1, 1],
|
|
[ 2, 0, 2]])
|
|
and values:
|
|
tensor([ 3., 4., 5.])
|
|
|
|
>>> torch.sparse_coo_tensor(i, v) # Shape inference
|
|
torch.sparse.FloatTensor of size (2,3) with indices:
|
|
tensor([[ 0, 1, 1],
|
|
[ 2, 0, 2]])
|
|
and values:
|
|
tensor([ 3., 4., 5.])
|
|
|
|
>>> torch.sparse_coo_tensor(i, v, torch.Size([2,4]), dtype=torch.float64,
|
|
device=torch.device('cuda:0'))
|
|
torch.cuda.sparse.DoubleTensor of size (2,4) with indices:
|
|
tensor([[ 0, 1, 1],
|
|
[ 2, 0, 2]], device='cuda:0')
|
|
and values:
|
|
tensor([ 3., 4., 5.], dtype=torch.float64, device='cuda:0')
|
|
|
|
>>> torch.sparse_coo_tensor([], [], torch.Size([])) # Create an empty sparse tensor (of size ())
|
|
torch.sparse.FloatTensor of size () with indices:
|
|
tensor([], dtype=torch.int64)
|
|
and values:
|
|
tensor([])
|
|
""")
|
|
|
|
add_docstr(torch.sqrt,
|
|
r"""
|
|
sqrt(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the square-root of the elements of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \sqrt{\text{input}_{i}}
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-2.0755, 1.0226, 0.0831, 0.4806])
|
|
>>> torch.sqrt(a)
|
|
tensor([ nan, 1.0112, 0.2883, 0.6933])
|
|
""")
|
|
|
|
add_docstr(torch.squeeze,
|
|
r"""
|
|
squeeze(input, dim=None, out=None) -> Tensor
|
|
|
|
Returns a tensor with all the dimensions of :attr:`input` of size `1` removed.
|
|
|
|
For example, if `input` is of shape:
|
|
:math:`(A \times 1 \times B \times C \times 1 \times D)` then the `out` tensor
|
|
will be of shape: :math:`(A \times B \times C \times D)`.
|
|
|
|
When :attr:`dim` is given, a squeeze operation is done only in the given
|
|
dimension. If `input` is of shape: :math:`(A \times 1 \times B)`,
|
|
`squeeze(input, 0)` leaves the tensor unchanged, but :func:`squeeze(input, 1)` will
|
|
squeeze the tensor to the shape :math:`(A \times B)`.
|
|
|
|
.. note:: As an exception to the above, a 1-dimensional tensor of size 1 will
|
|
not have its dimensions changed.
|
|
|
|
.. note:: The returned tensor shares the storage with the input tensor,
|
|
so changing the contents of one will change the contents of the other.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int, optional): if given, the input will be squeezed only in
|
|
this dimension
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> x = torch.zeros(2, 1, 2, 1, 2)
|
|
>>> x.size()
|
|
torch.Size([2, 1, 2, 1, 2])
|
|
>>> y = torch.squeeze(x)
|
|
>>> y.size()
|
|
torch.Size([2, 2, 2])
|
|
>>> y = torch.squeeze(x, 0)
|
|
>>> y.size()
|
|
torch.Size([2, 1, 2, 1, 2])
|
|
>>> y = torch.squeeze(x, 1)
|
|
>>> y.size()
|
|
torch.Size([2, 2, 1, 2])
|
|
""")
|
|
|
|
add_docstr(torch.std,
|
|
r"""
|
|
.. function:: std(input, unbiased=True) -> Tensor
|
|
|
|
Returns the standard-deviation of all elements in the :attr:`input` tensor.
|
|
|
|
If :attr:`unbiased` is ``False``, then the standard-deviation will be calculated
|
|
via the biased estimator. Otherwise, Bessel's correction will be used.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
unbiased (bool): whether to use the unbiased estimation or not
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(1, 3)
|
|
>>> a
|
|
tensor([[-0.8166, -1.3802, -0.3560]])
|
|
>>> torch.std(a)
|
|
tensor(0.5130)
|
|
|
|
.. function:: std(input, dim, keepdim=False, unbiased=True, out=None) -> Tensor
|
|
|
|
Returns the standard-deviation of each row of the :attr:`input` tensor in the
|
|
given dimension :attr:`dim`.
|
|
|
|
If :attr:`keepdim` is ``True``, the output tensor is of the same size as
|
|
:attr:`input` except in the dimension :attr:`dim` where it is of size 1.
|
|
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting
|
|
in the output tensor having 1 fewer dimension than :attr:`input`.
|
|
|
|
If :attr:`unbiased` is ``False``, then the standard-deviation will be calculated
|
|
via the biased estimator. Otherwise, Bessel's correction will be used.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int): the dimension to reduce
|
|
keepdim (bool): whether the output tensor has :attr:`dim` retained or not
|
|
unbiased (bool): whether to use the unbiased estimation or not
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4, 4)
|
|
>>> a
|
|
tensor([[ 0.2035, 1.2959, 1.8101, -0.4644],
|
|
[ 1.5027, -0.3270, 0.5905, 0.6538],
|
|
[-1.5745, 1.3330, -0.5596, -0.6548],
|
|
[ 0.1264, -0.5080, 1.6420, 0.1992]])
|
|
>>> torch.std(a, dim=1)
|
|
tensor([ 1.0311, 0.7477, 1.2204, 0.9087])
|
|
""")
|
|
|
|
add_docstr(torch.sum,
|
|
r"""
|
|
.. function:: sum(input, dtype=None) -> Tensor
|
|
|
|
Returns the sum of all elements in the :attr:`input` tensor.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
{dtype}
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(1, 3)
|
|
>>> a
|
|
tensor([[ 0.1133, -0.9567, 0.2958]])
|
|
>>> torch.sum(a)
|
|
tensor(-0.5475)
|
|
|
|
.. function:: sum(input, dim, keepdim=False, dtype=None) -> Tensor
|
|
|
|
Returns the sum of each row of the :attr:`input` tensor in the given
|
|
dimension :attr:`dim`. If :attr:`dim` is a list of dimensions,
|
|
reduce over all of them.
|
|
|
|
If :attr:`keepdim` is ``True``, the output tensor is of the same size
|
|
as :attr:`input` except in the dimension :attr:`dim` where it is of size 1.
|
|
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in
|
|
the output tensor having 1 fewer dimension than :attr:`input`.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int or tuple of ints): the dimension or dimensions to reduce
|
|
keepdim (bool): whether the output tensor has :attr:`dim` retained or not
|
|
{dtype}
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4, 4)
|
|
>>> a
|
|
tensor([[ 0.0569, -0.2475, 0.0737, -0.3429],
|
|
[-0.2993, 0.9138, 0.9337, -1.6864],
|
|
[ 0.1132, 0.7892, -0.1003, 0.5688],
|
|
[ 0.3637, -0.9906, -0.4752, -1.5197]])
|
|
>>> torch.sum(a, 1)
|
|
tensor([-0.4598, -0.1381, 1.3708, -2.6217])
|
|
>>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
|
|
>>> torch.sum(b, (2, 1))
|
|
tensor([ 435., 1335., 2235., 3135.])
|
|
""".format(**reduceops_common_args))
|
|
|
|
add_docstr(torch.svd,
|
|
r"""
|
|
svd(input, some=True, out=None) -> (Tensor, Tensor, Tensor)
|
|
|
|
`U, S, V = torch.svd(A)` returns the singular value decomposition of a
|
|
real matrix `A` of size `(n x m)` such that :math:`A = USV^T`.
|
|
|
|
`U` is of shape :math:`(n \times n)`.
|
|
|
|
`S` is a diagonal matrix of shape :math:`(n \times m)`, represented as a vector
|
|
of size :math:`\min(n, m)` containing the non-negative diagonal entries.
|
|
|
|
`V` is of shape :math:`(m \times m)`.
|
|
|
|
If :attr:`some` is ``True`` (default), the returned `U` and `V` matrices will
|
|
contain only :math:`min(n, m)` orthonormal columns.
|
|
|
|
.. note:: Irrespective of the original strides, the returned matrix `U`
|
|
will be transposed, i.e. with strides `(1, n)` instead of `(n, 1)`.
|
|
|
|
.. note:: Extra care needs to be taken when backward through `U` and `V`
|
|
outputs. Such operation is really only stable when :attr:`input` is
|
|
full rank with all distinct singular values. Otherwise, ``NaN`` can
|
|
appear as the gradients are not properly defined. Also, notice that
|
|
double backward will usually do an additional backward through `U` and
|
|
`V` even if the original backward is only on `S`.
|
|
|
|
.. note:: When :attr:`some` = ``False``, the gradients on ``U[:, min(n, m):]``
|
|
and ``V[:, min(n, m):]`` will be ignored in backward as those vectors
|
|
can be arbitrary bases of the subspaces.
|
|
|
|
Args:
|
|
input (Tensor): the input 2-D tensor
|
|
some (bool, optional): controls the shape of returned `U` and `V`
|
|
out (tuple, optional): the output tuple of tensors
|
|
|
|
Example::
|
|
|
|
>>> a = torch.tensor([[8.79, 6.11, -9.15, 9.57, -3.49, 9.84],
|
|
[9.93, 6.91, -7.93, 1.64, 4.02, 0.15],
|
|
[9.83, 5.04, 4.86, 8.83, 9.80, -8.99],
|
|
[5.45, -0.27, 4.85, 0.74, 10.00, -6.02],
|
|
[3.16, 7.98, 3.01, 5.80, 4.27, -5.31]]).t()
|
|
|
|
>>> u, s, v = torch.svd(a)
|
|
>>> u
|
|
tensor([[-0.5911, 0.2632, 0.3554, 0.3143, 0.2299],
|
|
[-0.3976, 0.2438, -0.2224, -0.7535, -0.3636],
|
|
[-0.0335, -0.6003, -0.4508, 0.2334, -0.3055],
|
|
[-0.4297, 0.2362, -0.6859, 0.3319, 0.1649],
|
|
[-0.4697, -0.3509, 0.3874, 0.1587, -0.5183],
|
|
[ 0.2934, 0.5763, -0.0209, 0.3791, -0.6526]])
|
|
>>> s
|
|
tensor([ 27.4687, 22.6432, 8.5584, 5.9857, 2.0149])
|
|
>>> v
|
|
tensor([[-0.2514, 0.8148, -0.2606, 0.3967, -0.2180],
|
|
[-0.3968, 0.3587, 0.7008, -0.4507, 0.1402],
|
|
[-0.6922, -0.2489, -0.2208, 0.2513, 0.5891],
|
|
[-0.3662, -0.3686, 0.3859, 0.4342, -0.6265],
|
|
[-0.4076, -0.0980, -0.4933, -0.6227, -0.4396]])
|
|
>>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t()))
|
|
tensor(1.00000e-06 *
|
|
9.3738)
|
|
""")
|
|
|
|
add_docstr(torch.symeig,
|
|
r"""
|
|
symeig(input, eigenvectors=False, upper=True, out=None) -> (Tensor, Tensor)
|
|
|
|
This function returns eigenvalues and eigenvectors
|
|
of a real symmetric matrix :attr:`input`, represented by a tuple :math:`(e, V)`.
|
|
|
|
:attr:`input` and :math:`V` are :math:`(m \times m)` matrices and :math:`e` is a
|
|
:math:`m`-dimensional vector.
|
|
|
|
This function calculates all eigenvalues (and vectors) of :attr:`input`
|
|
such that :math:`\text{input} = V \text{diag}(e) V^T`.
|
|
|
|
The boolean argument :attr:`eigenvectors` defines computation of
|
|
eigenvectors or eigenvalues only.
|
|
|
|
If it is ``False``, only eigenvalues are computed. If it is ``True``,
|
|
both eigenvalues and eigenvectors are computed.
|
|
|
|
Since the input matrix :attr:`input` is supposed to be symmetric,
|
|
only the upper triangular portion is used by default.
|
|
|
|
If :attr:`upper` is ``False``, then the lower triangular portion is used.
|
|
|
|
Note: Irrespective of the original strides, the returned matrix `V` will
|
|
be transposed, i.e. with strides `(1, m)` instead of `(m, 1)`.
|
|
|
|
Args:
|
|
input (Tensor): the input symmetric matrix
|
|
eigenvectors (bool, optional): controls whether eigenvectors have to be computed
|
|
upper (bool, optional): controls whether to consider upper-triangular or lower-triangular region
|
|
out (tuple, optional): the output tuple of (Tensor, Tensor)
|
|
|
|
Examples::
|
|
|
|
|
|
>>> a = torch.tensor([[ 1.96, 0.00, 0.00, 0.00, 0.00],
|
|
[-6.49, 3.80, 0.00, 0.00, 0.00],
|
|
[-0.47, -6.39, 4.17, 0.00, 0.00],
|
|
[-7.20, 1.50, -1.51, 5.70, 0.00],
|
|
[-0.65, -6.34, 2.67, 1.80, -7.10]]).t()
|
|
>>> e, v = torch.symeig(a, eigenvectors=True)
|
|
>>> e
|
|
tensor([-11.0656, -6.2287, 0.8640, 8.8655, 16.0948])
|
|
>>> v
|
|
tensor([[-0.2981, -0.6075, 0.4026, -0.3745, 0.4896],
|
|
[-0.5078, -0.2880, -0.4066, -0.3572, -0.6053],
|
|
[-0.0816, -0.3843, -0.6600, 0.5008, 0.3991],
|
|
[-0.0036, -0.4467, 0.4553, 0.6204, -0.4564],
|
|
[-0.8041, 0.4480, 0.1725, 0.3108, 0.1622]])
|
|
""")
|
|
|
|
add_docstr(torch.t,
|
|
r"""
|
|
t(input) -> Tensor
|
|
|
|
Expects :attr:`input` to be a matrix (2-D tensor) and transposes dimensions 0
|
|
and 1.
|
|
|
|
Can be seen as a short-hand function for :meth:`transpose(input, 0, 1)`
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(2, 3)
|
|
>>> x
|
|
tensor([[ 0.4875, 0.9158, -0.5872],
|
|
[ 0.3938, -0.6929, 0.6932]])
|
|
>>> torch.t(x)
|
|
tensor([[ 0.4875, 0.3938],
|
|
[ 0.9158, -0.6929],
|
|
[-0.5872, 0.6932]])
|
|
""")
|
|
|
|
add_docstr(torch.flip,
|
|
r"""
|
|
flip(input, dims) -> Tensor
|
|
|
|
Reverses the order of an n-D tensor along the given axes in :attr:`dims`.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dims (a list or tuple): the axes to flip along
|
|
|
|
Example::
|
|
|
|
>>> x = torch.arange(8).view(2, 2, 2)
|
|
>>> x
|
|
tensor([[[ 0, 1],
|
|
[ 2, 3]],
|
|
|
|
[[ 4, 5],
|
|
[ 6, 7]]])
|
|
>>> torch.flip(x, [0, 1])
|
|
tensor([[[ 6, 7],
|
|
[ 4, 5]],
|
|
|
|
[[ 2, 3],
|
|
[ 0, 1]]])
|
|
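Flipping along a single axis works the same way (an additional illustrative
example, reusing the ``x`` defined above)::

>>> torch.flip(x, [0])
tensor([[[ 4, 5],
[ 6, 7]],

[[ 0, 1],
[ 2, 3]]])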
""")
|
|
|
|
add_docstr(torch.take,
|
|
r"""
|
|
take(input, indices) -> Tensor
|
|
|
|
Returns a new tensor with the elements of :attr:`input` at the given indices.
|
|
The input tensor is treated as if it were viewed as a 1-D tensor. The result
|
|
takes the same shape as the indices.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
indices (LongTensor): the indices into tensor
|
|
|
|
Example::
|
|
|
|
>>> src = torch.tensor([[4, 3, 5],
|
|
[6, 7, 8]])
|
|
>>> torch.take(src, torch.tensor([0, 2, 5]))
|
|
tensor([ 4, 5, 8])
|
|
""")
|
|
|
|
add_docstr(torch.tan,
|
|
r"""
|
|
tan(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the tangent of the elements of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \tan(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([-1.2027, -1.7687, 0.4412, -1.3856])
|
|
>>> torch.tan(a)
|
|
tensor([-2.5930, 4.9859, 0.4722, -5.3366])
|
|
""")
|
|
|
|
add_docstr(torch.tanh,
|
|
r"""
|
|
tanh(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the hyperbolic tangent of the elements
|
|
of :attr:`input`.
|
|
|
|
.. math::
|
|
\text{out}_{i} = \tanh(\text{input}_{i})
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 0.8986, -0.7279, 1.1745, 0.2611])
|
|
>>> torch.tanh(a)
|
|
tensor([ 0.7156, -0.6218, 0.8257, 0.2553])
|
|
""")
|
|
|
|
add_docstr(torch.topk,
|
|
r"""
|
|
topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor)
|
|
|
|
Returns the :attr:`k` largest elements of the given :attr:`input` tensor along
|
|
a given dimension.
|
|
|
|
If :attr:`dim` is not given, the last dimension of the `input` is chosen.
|
|
|
|
If :attr:`largest` is ``False`` then the `k` smallest elements are returned.
|
|
|
|
A tuple of `(values, indices)` is returned, where the `indices` are the indices
|
|
of the elements in the original `input` tensor.
|
|
|
|
If the boolean option :attr:`sorted` is ``True``, it ensures that the returned
|
|
`k` elements are themselves sorted
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
k (int): the k in "top-k"
|
|
dim (int, optional): the dimension to sort along
|
|
largest (bool, optional): controls whether to return largest or
|
|
smallest elements
|
|
sorted (bool, optional): controls whether to return the elements
|
|
in sorted order
|
|
out (tuple, optional): the output tuple of (Tensor, LongTensor) that can be
|
|
optionally given to be used as output buffers
|
|
|
|
Example::
|
|
|
|
>>> x = torch.arange(1., 6.)
|
|
>>> x
|
|
tensor([ 1., 2., 3., 4., 5.])
|
|
>>> torch.topk(x, 3)
|
|
(tensor([ 5., 4., 3.]), tensor([ 4, 3, 2]))
|
|
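An additional illustrative example with ``largest=False``, reusing the ``x``
above, returns the `k` smallest elements instead::

>>> torch.topk(x, 3, largest=False)
(tensor([ 1., 2., 3.]), tensor([ 0, 1, 2]))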
""")
|
|
|
|
add_docstr(torch.trace,
|
|
r"""
|
|
trace(input) -> Tensor
|
|
|
|
Returns the sum of the elements of the diagonal of the input 2-D matrix.
|
|
|
|
Example::
|
|
|
|
>>> x = torch.arange(1., 10.).view(3, 3)
|
|
>>> x
|
|
tensor([[ 1., 2., 3.],
|
|
[ 4., 5., 6.],
|
|
[ 7., 8., 9.]])
|
|
>>> torch.trace(x)
|
|
tensor(15.)
|
|
""")
|
|
|
|
add_docstr(torch.transpose,
|
|
r"""
|
|
transpose(input, dim0, dim1) -> Tensor
|
|
|
|
Returns a tensor that is a transposed version of :attr:`input`.
|
|
The given dimensions :attr:`dim0` and :attr:`dim1` are swapped.
|
|
|
|
The resulting :attr:`out` tensor shares its underlying storage with the
|
|
:attr:`input` tensor, so changing the content of one would change the content
|
|
of the other.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim0 (int): the first dimension to be transposed
|
|
dim1 (int): the second dimension to be transposed
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(2, 3)
|
|
>>> x
|
|
tensor([[ 1.0028, -0.9893, 0.5809],
|
|
[-0.1669, 0.7299, 0.4942]])
|
|
>>> torch.transpose(x, 0, 1)
|
|
tensor([[ 1.0028, -0.1669],
|
|
[-0.9893, 0.7299],
|
|
[ 0.5809, 0.4942]])
|
|
""")
|
|
|
|
add_docstr(torch.tril,
|
|
r"""
|
|
tril(input, diagonal=0, out=None) -> Tensor
|
|
|
|
Returns the lower triangular part of the matrix (2-D tensor) :attr:`input`,
|
|
the other elements of the result tensor :attr:`out` are set to 0.
|
|
|
|
The lower triangular part of the matrix is defined as the elements on and
|
|
below the diagonal.
|
|
|
|
The argument :attr:`diagonal` controls which diagonal to consider. If
|
|
:attr:`diagonal` = 0, all elements on and below the main diagonal are
|
|
retained. A positive value includes just as many diagonals above the main
|
|
diagonal, and similarly a negative value excludes just as many diagonals below
|
|
the main diagonal. The main diagonal is the set of indices
|
|
:math:`\lbrace (i, i) \rbrace` for :math:`i \in [0, \min\{d_{1}, d_{2}\} - 1]` where
|
|
:math:`d_{1}, d_{2}` are the dimensions of the matrix.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
diagonal (int, optional): the diagonal to consider
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(3, 3)
|
|
>>> a
|
|
tensor([[-1.0813, -0.8619, 0.7105],
|
|
[ 0.0935, 0.1380, 2.2112],
|
|
[-0.3409, -0.9828, 0.0289]])
|
|
>>> torch.tril(a)
|
|
tensor([[-1.0813, 0.0000, 0.0000],
|
|
[ 0.0935, 0.1380, 0.0000],
|
|
[-0.3409, -0.9828, 0.0289]])
|
|
|
|
>>> b = torch.randn(4, 6)
|
|
>>> b
|
|
tensor([[ 1.2219, 0.5653, -0.2521, -0.2345, 1.2544, 0.3461],
|
|
[ 0.4785, -0.4477, 0.6049, 0.6368, 0.8775, 0.7145],
|
|
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.3615, 0.6864],
|
|
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0978]])
|
|
>>> torch.tril(b, diagonal=1)
|
|
tensor([[ 1.2219, 0.5653, 0.0000, 0.0000, 0.0000, 0.0000],
|
|
[ 0.4785, -0.4477, 0.6049, 0.0000, 0.0000, 0.0000],
|
|
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.0000, 0.0000],
|
|
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0000]])
|
|
>>> torch.tril(b, diagonal=-1)
|
|
tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
|
|
[ 0.4785, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
|
|
[ 1.1502, 3.2716, 0.0000, 0.0000, 0.0000, 0.0000],
|
|
[-0.0614, -0.7344, -1.3164, 0.0000, 0.0000, 0.0000]])
|
|
""")
|
|
|
|
add_docstr(torch.triu,
|
|
r"""
|
|
triu(input, diagonal=0, out=None) -> Tensor
|
|
|
|
Returns the upper triangular part of the matrix (2-D tensor) :attr:`input`,
|
|
the other elements of the result tensor :attr:`out` are set to 0.
|
|
|
|
The upper triangular part of the matrix is defined as the elements on and
|
|
above the diagonal.
|
|
|
|
The argument :attr:`diagonal` controls which diagonal to consider. If
|
|
:attr:`diagonal` = 0, all elements on and above the main diagonal are
|
|
retained. A positive value excludes just as many diagonals above the main
|
|
diagonal, and similarly a negative value includes just as many diagonals below
|
|
the main diagonal. The main diagonal is the set of indices
|
|
:math:`\lbrace (i, i) \rbrace` for :math:`i \in [0, \min\{d_{1}, d_{2}\} - 1]` where
|
|
:math:`d_{1}, d_{2}` are the dimensions of the matrix.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
diagonal (int, optional): the diagonal to consider
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(3, 3)
|
|
>>> a
|
|
tensor([[ 0.2309, 0.5207, 2.0049],
|
|
[ 0.2072, -1.0680, 0.6602],
|
|
[ 0.3480, -0.5211, -0.4573]])
|
|
>>> torch.triu(a)
|
|
tensor([[ 0.2309, 0.5207, 2.0049],
|
|
[ 0.0000, -1.0680, 0.6602],
|
|
[ 0.0000, 0.0000, -0.4573]])
|
|
>>> torch.triu(a, diagonal=1)
|
|
tensor([[ 0.0000, 0.5207, 2.0049],
|
|
[ 0.0000, 0.0000, 0.6602],
|
|
[ 0.0000, 0.0000, 0.0000]])
|
|
>>> torch.triu(a, diagonal=-1)
|
|
tensor([[ 0.2309, 0.5207, 2.0049],
|
|
[ 0.2072, -1.0680, 0.6602],
|
|
[ 0.0000, -0.5211, -0.4573]])
|
|
|
|
>>> b = torch.randn(4, 6)
|
|
>>> b
|
|
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
|
|
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
|
|
[ 0.4333, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
|
|
[-0.9888, 1.0679, -1.3337, -1.6556, 0.4798, 0.2830]])
|
|
>>> torch.triu(b, diagonal=1)
tensor([[ 0.0000, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[ 0.0000, 0.0000, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.0000, 0.0000, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=-1)
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, -1.3337, -1.6556, 0.4798, 0.2830]])
|
|
""")
|
|
|
|
add_docstr(torch.trtrs,
|
|
r"""
|
|
trtrs(b, A, upper=True, transpose=False, unitriangular=False) -> (Tensor, Tensor)
|
|
|
|
Solves a system of equations with a triangular coefficient matrix `A`
|
|
and multiple right-hand sides `b`.
|
|
|
|
In particular, solves :math:`AX = b` and assumes `A` is upper-triangular
|
|
with the default keyword arguments.
|
|
|
|
This method is NOT implemented for CUDA tensors.
|
|
|
|
Args:
|
|
A (Tensor): the input triangular coefficient matrix
|
|
b (Tensor): multiple right-hand sides. Each column of `b` is a
|
|
right-hand side for the system of equations.
|
|
upper (bool, optional): whether to solve the upper-triangular system
|
|
of equations (default) or the lower-triangular system of equations. Default: True.
|
|
transpose (bool, optional): whether `A` should be transposed before
|
|
being sent into the solver. Default: False.
|
|
unitriangular (bool, optional): whether `A` is unit triangular.
|
|
If True, the diagonal elements of `A` are assumed to be
|
|
1 and not referenced from `A`. Default: False.
|
|
|
|
Returns:
|
|
A tuple (X, M) where `M` is a clone of `A` and `X` is the solution to
|
|
`AX = b` (or whatever variant of the system of equations, depending on
|
|
the keyword arguments.)
|
|
|
|
Shape:
|
|
- A: :math:`(N, N)`
|
|
- b: :math:`(N, C)`
|
|
- output[0]: :math:`(N, C)`
|
|
- output[1]: :math:`(N, N)`
|
|
|
|
Examples::
|
|
|
|
>>> A = torch.randn(2, 2).triu()
|
|
>>> A
|
|
tensor([[ 1.1527, -1.0753],
|
|
[ 0.0000, 0.7986]])
|
|
>>> b = torch.randn(2, 3)
|
|
>>> b
|
|
tensor([[-0.0210, 2.3513, -1.5492],
|
|
[ 1.5429, 0.7403, -1.0243]])
|
|
>>> torch.trtrs(b, A)
|
|
(tensor([[ 1.7840, 2.9045, -2.5405],
|
|
[ 1.9319, 0.9269, -1.2826]]), tensor([[ 1.1527, -1.0753],
|
|
[ 0.0000, 0.7986]]))
|
|
""")
|
|
|
|
add_docstr(torch.trunc,
|
|
r"""
|
|
trunc(input, out=None) -> Tensor
|
|
|
|
Returns a new tensor with the truncated integer values of
|
|
the elements of :attr:`input`.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4)
|
|
>>> a
|
|
tensor([ 3.4742, 0.5466, -0.8008, -0.9079])
|
|
>>> torch.trunc(a)
|
|
tensor([ 3., 0., -0., -0.])
|
|
""")
|
|
|
|
add_docstr(torch.unsqueeze,
|
|
r"""
|
|
unsqueeze(input, dim, out=None) -> Tensor
|
|
|
|
Returns a new tensor with a dimension of size one inserted at the
|
|
specified position.
|
|
|
|
The returned tensor shares the same underlying data with this tensor.
|
|
|
|
A negative `dim` value within the range
|
|
[-:attr:`input.dim()`, :attr:`input.dim()`) can be used and
|
|
will correspond to :meth:`unsqueeze` applied at :attr:`dim` = :attr:`dim + input.dim() + 1`
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int): the index at which to insert the singleton dimension
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> x = torch.tensor([1, 2, 3, 4])
|
|
>>> torch.unsqueeze(x, 0)
|
|
tensor([[ 1, 2, 3, 4]])
|
|
>>> torch.unsqueeze(x, 1)
|
|
tensor([[ 1],
|
|
[ 2],
|
|
[ 3],
|
|
[ 4]])
|
|
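A negative :attr:`dim` (illustrative; for this 1-D input, ``dim=-2`` is
equivalent to ``dim=0``)::

>>> torch.unsqueeze(x, -2)
tensor([[ 1, 2, 3, 4]])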
""")
|
|
|
|
add_docstr(torch.var,
|
|
r"""
|
|
.. function:: var(input, unbiased=True) -> Tensor
|
|
|
|
Returns the variance of all elements in the :attr:`input` tensor.
|
|
|
|
If :attr:`unbiased` is ``False``, then the variance will be calculated via the
|
|
biased estimator. Otherwise, Bessel's correction will be used.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
unbiased (bool): whether to use the unbiased estimation or not
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(1, 3)
|
|
>>> a
|
|
tensor([[-0.3425, -1.2636, -0.4864]])
|
|
>>> torch.var(a)
|
|
tensor(0.2455)
|
|
|
|
|
|
.. function:: var(input, dim, keepdim=False, unbiased=True, out=None) -> Tensor
|
|
|
|
Returns the variance of each row of the :attr:`input` tensor in the given
|
|
dimension :attr:`dim`.
|
|
|
|
If :attr:`keepdim` is ``True``, the output tensor is of the same size
as :attr:`input` except in the dimension :attr:`dim` where it is of size 1.
|
|
Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in
|
|
the output tensor having 1 fewer dimension than :attr:`input`.
|
|
|
|
If :attr:`unbiased` is ``False``, then the variance will be calculated via the
|
|
biased estimator. Otherwise, Bessel's correction will be used.
|
|
|
|
Args:
|
|
input (Tensor): the input tensor
|
|
dim (int): the dimension to reduce
|
|
keepdim (bool): whether the output tensor has :attr:`dim` retained or not
|
|
unbiased (bool): whether to use the unbiased estimation or not
|
|
out (Tensor, optional): the output tensor
|
|
|
|
Example::
|
|
|
|
>>> a = torch.randn(4, 4)
|
|
>>> a
|
|
tensor([[-0.3567, 1.7385, -1.3042, 0.7423],
|
|
[ 1.3436, -0.1015, -0.9834, -0.8438],
|
|
[ 0.6056, 0.1089, -0.3112, -1.4085],
|
|
[-0.7700, 0.6074, -0.1469, 0.7777]])
|
|
>>> torch.var(a, 1)
|
|
tensor([ 1.7444, 1.1363, 0.7356, 0.5112])
|
|
""")
|
|
|
|
add_docstr(torch.zeros,
|
|
r"""
|
|
zeros(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
|
|
|
|
Returns a tensor filled with the scalar value `0`, with the shape defined
|
|
by the variable argument :attr:`sizes`.
|
|
|
|
Args:
|
|
sizes (int...): a sequence of integers defining the shape of the output tensor.
|
|
Can be a variable number of arguments or a collection like a list or tuple.
|
|
{out}
|
|
{dtype}
|
|
{layout}
|
|
{device}
|
|
{requires_grad}
|
|
|
|
Example::
|
|
|
|
>>> torch.zeros(2, 3)
|
|
tensor([[ 0., 0., 0.],
|
|
[ 0., 0., 0.]])
|
|
|
|
>>> torch.zeros(5)
|
|
tensor([ 0., 0., 0., 0., 0.])
|
|
""".format(**factory_common_args))
|
|
|
|
add_docstr(torch.zeros_like,
|
|
r"""
|
|
zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False) -> Tensor
|
|
|
|
Returns a tensor filled with the scalar value `0`, with the same size as
|
|
:attr:`input`. ``torch.zeros_like(input)`` is equivalent to
|
|
``torch.zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)``.
|
|
|
|
.. warning::
|
|
As of 0.4, this function does not support an :attr:`out` keyword. As an alternative,
|
|
the old ``torch.zeros_like(input, out=output)`` is equivalent to
|
|
``torch.zeros(input.size(), out=output)``.
|
|
|
|
Args:
|
|
{input}
|
|
{dtype}
|
|
{layout}
|
|
{device}
|
|
{requires_grad}
|
|
|
|
Example::
|
|
|
|
>>> input = torch.empty(2, 3)
|
|
>>> torch.zeros_like(input)
|
|
tensor([[ 0., 0., 0.],
|
|
[ 0., 0., 0.]])
|
|
""".format(**factory_like_common_args))
|
|
|
|
add_docstr(torch.btrifact_with_info,
|
|
r"""
|
|
btrifact_with_info(A, pivot=True) -> (Tensor, IntTensor, IntTensor)
|
|
|
|
Batch LU factorization with additional error information.
|
|
|
|
This is a version of :meth:`torch.btrifact` that always creates an info
|
|
`IntTensor`, and returns it as the third return value.
|
|
|
|
Arguments:
|
|
A (Tensor): the tensor to factor
|
|
pivot (bool, optional): controls whether pivoting is done
|
|
|
|
Returns:
|
|
A tuple containing factorization, pivots, and an `IntTensor` where non-zero
|
|
values indicate whether factorization for each minibatch sample succeeds.
|
|
|
|
Example::
|
|
|
|
>>> A = torch.randn(2, 3, 3)
|
|
>>> A_LU, pivots, info = A.btrifact_with_info()
|
|
>>> if info.nonzero().size(0) == 0:
|
|
>>> print('LU factorization succeeded for all samples!')
|
|
LU factorization succeeded for all samples!
|
|
""")
|
|
|
|
add_docstr(torch.btrisolve,
|
|
r"""
|
|
btrisolve(b, LU_data, LU_pivots) -> Tensor
|
|
|
|
Batch LU solve.
|
|
|
|
Returns the LU solve of the linear system :math:`Ax = b`.
|
|
|
|
Arguments:
|
|
b (Tensor): the RHS tensor
|
|
LU_data (Tensor): the pivoted LU factorization of A from :meth:`btrifact`.
|
|
LU_pivots (IntTensor): the pivots of the LU factorization
|
|
|
|
Example::
|
|
|
|
>>> A = torch.randn(2, 3, 3)
|
|
>>> b = torch.randn(2, 3)
|
|
>>> A_LU = torch.btrifact(A)
|
|
>>> x = torch.btrisolve(b, *A_LU)
|
|
>>> torch.norm(torch.bmm(A, x.unsqueeze(2)) - b.unsqueeze(2))
|
|
tensor(1.00000e-07 *
|
|
2.8312)
|
|
""")
|
|
|
|
add_docstr(torch.empty,
|
|
r"""
|
|
empty(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
|
|
|
|
Returns a tensor filled with uninitialized data. The shape of the tensor is
|
|
defined by the variable argument :attr:`sizes`.
|
|
|
|
Args:
|
|
sizes (int...): a sequence of integers defining the shape of the output tensor.
|
|
Can be a variable number of arguments or a collection like a list or tuple.
|
|
{out}
|
|
{dtype}
|
|
{layout}
|
|
{device}
|
|
{requires_grad}
|
|
|
|
Example::
|
|
|
|
>>> torch.empty(2, 3)
|
|
tensor(1.00000e-08 *
|
|
[[ 6.3984, 0.0000, 0.0000],
|
|
[ 0.0000, 0.0000, 0.0000]])
|
|
|
|
""".format(**factory_common_args))
|
|
|
|
add_docstr(torch.empty_like,
|
|
r"""
|
|
empty_like(input, dtype=None, layout=None, device=None, requires_grad=False) -> Tensor
|
|
|
|
Returns an uninitialized tensor with the same size as :attr:`input`.
|
|
``torch.empty_like(input)`` is equivalent to
|
|
``torch.empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)``.
|
|
|
|
Args:
|
|
{input}
|
|
{dtype}
|
|
{layout}
|
|
{device}
|
|
{requires_grad}
|
|
|
|
Example::
|
|
|
|
>>> input = torch.empty((2,3), dtype=torch.int64)
|
|
>>> torch.empty_like(input)
|
|
tensor([[ 9.4064e+13, 2.8000e+01, 9.3493e+13],
|
|
[ 7.5751e+18, 7.1428e+18, 7.5955e+18]])
|
|
""".format(**factory_like_common_args))
|
|
|
|
add_docstr(torch.full,
|
|
r"""
|
|
full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
|
|
|
|
Returns a tensor of size :attr:`size` filled with :attr:`fill_value`.
|
|
|
|
Args:
|
|
size (int...): a list, tuple, or :class:`torch.Size` of integers defining the
|
|
shape of the output tensor.
|
|
fill_value: the number to fill the output tensor with.
|
|
{out}
|
|
{dtype}
|
|
{layout}
|
|
{device}
|
|
{requires_grad}
|
|
|
|
Example::
|
|
|
|
>>> torch.full((2, 3), 3.141592)
|
|
tensor([[ 3.1416, 3.1416, 3.1416],
|
|
[ 3.1416, 3.1416, 3.1416]])
|
|
|
|
""".format(**factory_common_args))
|
|
|
|
add_docstr(torch.full_like,
|
|
r"""
|
|
full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
|
|
|
|
Returns a tensor with the same size as :attr:`input` filled with :attr:`fill_value`.
|
|
``torch.full_like(input, fill_value)`` is equivalent to
|
|
``torch.full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device)``.
|
|
|
|
Args:
|
|
{input}
|
|
fill_value: the number to fill the output tensor with.
|
|
{dtype}
|
|
{layout}
|
|
{device}
|
|
{requires_grad}
|
|
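Example (an illustrative sketch; the base tensor is created only to supply a
size and dtype)::

>>> base = torch.empty(2, 3)
>>> torch.full_like(base, 5)
tensor([[ 5., 5., 5.],
[ 5., 5., 5.]])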
""".format(**factory_like_common_args))
|
|
|
|
add_docstr(torch.stft,
|
|
r"""
|
|
stft(signal, frame_length, hop, fft_size=None, normalized=False, onesided=True, window=None, pad_end=0) -> Tensor
|
|
|
|
Short-time Fourier transform (STFT).
|
|
|
|
Ignoring the batch dimension, this method computes the following expression:
|
|
|
|
.. math::
|
|
X[m, \omega] = \sum_{k = 0}^{\text{frame_length} - 1}%
|
|
window[k]\ signal[m \times hop + k]\ e^{- j \frac{2 \pi \cdot \omega k}{\text{frame_length}}},
|
|
|
|
where :math:`m` is the index of the sliding window, and :math:`\omega` is
|
|
the frequency such that :math:`0 \leq \omega <` :attr:`fft_size`. When
:attr:`onesided` is the default value ``True``, only values for
|
|
:math:`\omega` in range :math:`\left[0, 1, 2, \dots, \left\lfloor \frac{\text{fft_size}}{2} \right\rfloor\right]`
|
|
are returned because the real-to-complex transform satisfies the Hermitian
|
|
symmetry, i.e., :math:`X[m, \omega] = X[m, \text{fft_size} - \omega]^*`.
|
|
|
|
The input :attr:`signal` must be a 1-D sequence :math:`(T)` or a 2-D batch of
sequences :math:`(N \times T)`. If :attr:`fft_size` is ``None``, it
defaults to the same value as :attr:`frame_length`. :attr:`window` can be a
|
|
1-D tensor of size :attr:`frame_length`, e.g., see
|
|
:meth:`torch.hann_window`. If :attr:`window` is the default value ``None``,
|
|
it is treated as if having :math:`1` everywhere in the frame.
|
|
:attr:`pad_end` indicates the amount of zero padding at the end of
|
|
:attr:`signal` before STFT. If :attr:`normalized` is set to ``True``, the
|
|
function returns the normalized STFT results, i.e., multiplied by
|
|
:math:`(frame\_length)^{-0.5}`.
|
|
|
|
Returns the real and the imaginary parts together as one tensor of size
|
|
:math:`(* \times N \times 2)`, where :math:`*` is the shape of input :attr:`signal`,
|
|
:math:`N` is the number of :math:`\omega` s considered depending on
|
|
:attr:`fft_size` and :attr:`onesided`, and each pair in the last
|
|
dimension represents a complex number as real part and imaginary part.
|
|
|
|
Arguments:
|
|
signal (Tensor): the input tensor
|
|
frame_length (int): the size of window frame and STFT filter
|
|
hop (int): the distance between neighboring sliding window frames
|
|
fft_size (int, optional): size of Fourier transform. Default: ``None``
|
|
normalized (bool, optional): controls whether to return the normalized STFT results
|
|
Default: ``False``
|
|
onesided (bool, optional): controls whether to return half of results to
|
|
avoid redundancy Default: ``True``
|
|
window (Tensor, optional): the optional window function. Default: ``None``
|
|
pad_end (int, optional): implicit zero padding at the end of :attr:`signal`. Default: 0
|
|
|
|
Returns:
|
|
Tensor: A tensor containing the STFT result
|
|
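Example (a minimal usage sketch; the sizes below are hypothetical and the
comment describes the expected layout rather than verified output)::

>>> signal = torch.randn(4000)
>>> window = torch.hann_window(400)
>>> spec = torch.stft(signal, frame_length=400, hop=160, window=window)
>>> # spec stores the real/imaginary pair of each frequency bin in its last dimension (size 2)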
""")
|
|
|
|
add_docstr(torch.det,
|
|
r"""
|
|
det(A) -> Tensor
|
|
|
|
Calculates determinant of a 2D square tensor.
|
|
|
|
.. note::
|
|
Backward through :meth:`det` internally uses SVD results when :attr:`A` is
|
|
not invertible. In this case, double backward through :meth:`det` will be
|
|
unstable when :attr:`A` doesn't have distinct singular values. See
|
|
:meth:`~torch.svd` for details.
|
|
|
|
Arguments:
|
|
A (Tensor): The input 2D square tensor
|
|
|
|
Example::
|
|
|
|
>>> A = torch.randn(3, 3)
|
|
>>> torch.det(A)
|
|
tensor(3.7641)
|
|
""")
|
|
|
|
add_docstr(torch.where,
|
|
r"""
|
|
where(condition, x, y) -> Tensor
|
|
|
|
Return a tensor of elements selected from either :attr:`x` or :attr:`y`, depending on :attr:`condition`.
|
|
|
|
The operation is defined as:
|
|
|
|
.. math::
|
|
out_i = \begin{cases}
|
|
x_i & \text{if } condition_i \\
|
|
y_i & \text{otherwise} \\
|
|
\end{cases}
|
|
|
|
.. note::
|
|
The tensors :attr:`condition`, :attr:`x`, :attr:`y` must be :ref:`broadcastable <broadcasting-semantics>`.
|
|
|
|
Arguments:
|
|
condition (ByteTensor): When True (nonzero), yield x, otherwise yield y
|
|
x (Tensor): values selected at indices where :attr:`condition` is ``True``
|
|
y (Tensor): values selected at indices where :attr:`condition` is ``False``
|
|
|
|
Returns:
|
|
Tensor: A tensor of shape equal to the broadcasted shape of :attr:`condition`, :attr:`x`, :attr:`y`
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(3, 2)
|
|
>>> y = torch.ones(3, 2)
|
|
>>> x
|
|
tensor([[-0.4620, 0.3139],
|
|
[ 0.3898, -0.7197],
|
|
[ 0.0478, -0.1657]])
|
|
>>> torch.where(x > 0, x, y)
|
|
tensor([[ 1.0000, 0.3139],
|
|
[ 0.3898, 1.0000],
|
|
[ 0.0478, 1.0000]])
|
|
""")
|
|
|
|
add_docstr(torch.logdet,
|
|
r"""
|
|
logdet(A) -> Tensor
|
|
|
|
Calculates log determinant of a 2D square tensor.
|
|
|
|
.. note::
|
|
Result is ``-inf`` if :attr:`A` has zero determinant, and is ``nan`` if
|
|
:attr:`A` has negative determinant.
|
|
|
|
.. note::
|
|
Backward through :meth:`logdet` internally uses SVD results when :attr:`A`
|
|
is not invertible. In this case, double backward through :meth:`logdet` will
|
|
be unstable when :attr:`A` doesn't have distinct singular values. See
|
|
:meth:`~torch.svd` for details.
|
|
|
|
Arguments:
|
|
A (Tensor): The input 2D square tensor
|
|
|
|
Example::
|
|
|
|
>>> A = torch.randn(3, 3)
|
|
>>> torch.det(A)
|
|
tensor(0.2611)
|
|
>>> torch.logdet(A)
|
|
tensor(-1.3430)
|
|
""")
|
|
|
|
add_docstr(torch.slogdet,
|
|
r"""
|
|
slogdet(A) -> (Tensor, Tensor)
|
|
|
|
Calculates the sign and log absolute value of a 2D square tensor's determinant.
|
|
|
|
.. note::
|
|
If ``A`` has zero determinant, this returns ``(0, -inf)``.
|
|
|
|
.. note::
|
|
Backward through :meth:`slogdet` internally uses SVD results when :attr:`A`
|
|
is not invertible. In this case, double backward through :meth:`slogdet`
|
|
will be unstable when :attr:`A` doesn't have distinct singular values.
|
|
See :meth:`~torch.svd` for details.
|
|
|
|
Arguments:
|
|
A (Tensor): The input 2D square tensor
|
|
|
|
Returns:
|
|
A tuple containing the sign of the determinant, and the log value of the
|
|
absolute determinant.
|
|
|
|
Example::
|
|
|
|
>>> A = torch.randn(3, 3)
|
|
>>> torch.det(A)
|
|
tensor(-4.8215)
|
|
>>> torch.logdet(A)
|
|
tensor(nan)
|
|
>>> torch.slogdet(A)
|
|
(tensor(-1.), tensor(1.5731))
|
|
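The determinant can be recovered from the two outputs (an illustrative
continuation of the example above)::

>>> sign, logabsdet = torch.slogdet(A)
>>> sign * torch.exp(logabsdet)
tensor(-4.8215)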
""")
|
|
|
|
add_docstr(torch.fft,
|
|
r"""
|
|
fft(input, signal_ndim, normalized=False) -> Tensor
|
|
|
|
Complex-to-complex Discrete Fourier Transform
|
|
|
|
This method computes the complex-to-complex discrete Fourier transform.
|
|
Ignoring the batch dimensions, it computes the following expression:
|
|
|
|
.. math::
|
|
X[\omega_1, \dots, \omega_d] =
|
|
\sum_{n_1=0}^{N_1 - 1} \dots \sum_{n_d=0}^{N_d - 1} x[n_1, \dots, n_d]
e^{-j\ 2 \pi \sum_{i=1}^d \frac{\omega_i n_i}{N_i}},
|
|
|
|
where :math:`d` = :attr:`signal_ndim` is number of dimensions for the
|
|
signal, and :math:`N_i` is the size of signal dimension :math:`i`.
|
|
|
|
This method supports 1D, 2D and 3D complex-to-complex transforms, indicated
|
|
by :attr:`signal_ndim`. :attr:`input` must be a tensor with last dimension
|
|
of size 2, representing the real and imaginary components of complex
|
|
numbers, and should have at least ``signal_ndim + 1`` dimensions with optionally
|
|
arbitrary number of leading batch dimensions. If :attr:`normalized` is set to
|
|
``True``, this normalizes the result by dividing it with
|
|
:math:`\sqrt{\prod_{i=1}^d N_i}` so that the operator is unitary.
|
|
|
|
Returns the real and the imaginary parts together as one tensor of the same
|
|
shape of :attr:`input`.
|
|
|
|
The inverse of this function is :func:`~torch.ifft`.
|
|
|
|
.. warning::
|
|
For CPU tensors, this method is currently only available with MKL. Check
|
|
:func:`torch.backends.mkl.is_available` to check if MKL is installed.
|
|
|
|
Arguments:
|
|
input (Tensor): the input tensor of at least :attr:`signal_ndim` ``+ 1``
|
|
dimensions
|
|
signal_ndim (int): the number of dimensions in each signal.
|
|
:attr:`signal_ndim` can only be 1, 2 or 3
|
|
normalized (bool, optional): controls whether to return normalized results.
|
|
Default: ``False``
|
|
|
|
Returns:
|
|
Tensor: A tensor containing the complex-to-complex Fourier transform result
|
|
|
|
Example::
|
|
|
|
>>> # unbatched 2D FFT
|
|
>>> x = torch.randn(4, 3, 2)
|
|
>>> torch.fft(x, 2)
|
|
tensor([[[-0.0876, 1.7835],
|
|
[-2.0399, -2.9754],
|
|
[ 4.4773, -5.0119]],
|
|
|
|
[[-1.5716, 2.7631],
|
|
[-3.8846, 5.2652],
|
|
[ 0.2046, -0.7088]],
|
|
|
|
[[ 1.9938, -0.5901],
|
|
[ 6.5637, 6.4556],
|
|
[ 2.9865, 4.9318]],
|
|
|
|
[[ 7.0193, 1.1742],
|
|
[-1.3717, -2.1084],
|
|
[ 2.0289, 2.9357]]])
|
|
>>> # batched 1D FFT
|
|
>>> torch.fft(x, 1)
|
|
tensor([[[ 1.8385, 1.2827],
|
|
[-0.1831, 1.6593],
|
|
[ 2.4243, 0.5367]],
|
|
|
|
[[-0.9176, -1.5543],
|
|
[-3.9943, -2.9860],
|
|
[ 1.2838, -2.9420]],
|
|
|
|
[[-0.8854, -0.6860],
|
|
[ 2.4450, 0.0808],
|
|
[ 1.3076, -0.5768]],
|
|
|
|
[[-0.1231, 2.7411],
|
|
[-0.3075, -1.7295],
|
|
[-0.5384, -2.0299]]])
|
|
>>> # arbitrary number of batch dimensions, 2D FFT
|
|
>>> x = torch.randn(3, 3, 5, 5, 2)
|
|
>>> y = torch.fft(x, 2)
|
|
>>> y.shape
|
|
torch.Size([3, 3, 5, 5, 2])
|
|
|
|
""")
|
|
|
|
add_docstr(torch.ifft,
|
|
r"""
|
|
ifft(input, signal_ndim, normalized=False) -> Tensor
|
|
|
|
Complex-to-complex Inverse Discrete Fourier Transform
|
|
|
|
This method computes the complex-to-complex inverse discrete Fourier
|
|
transform. Ignoring the batch dimensions, it computes the following
|
|
expression:
|
|
|
|
.. math::
|
|
X[\omega_1, \dots, \omega_d] =
|
|
\frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1 - 1} \dots \sum_{n_d=0}^{N_d - 1} x[n_1, \dots, n_d]
e^{\ j\ 2 \pi \sum_{i=1}^d \frac{\omega_i n_i}{N_i}},
|
|
|
|
where :math:`d` = :attr:`signal_ndim` is number of dimensions for the
|
|
signal, and :math:`N_i` is the size of signal dimension :math:`i`.
|
|
|
|
The argument specifications are almost identical with :func:`~torch.fft`.
|
|
However, if :attr:`normalized` is set to ``True``, this instead returns the
|
|
results multiplied by :math:`\sqrt{\prod_{i=1}^d N_i}`, to become a unitary
|
|
operator. Therefore, to invert a :func:`~torch.fft`, the :attr:`normalized`
|
|
argument here should match the value used in the original :func:`~torch.fft` call.
|
|
|
|
Returns the real and the imaginary parts together as one tensor of the same
|
|
shape of :attr:`input`.
|
|
|
|
The inverse of this function is :func:`~torch.fft`.
|
|
|
|
.. warning::
|
|
For CPU tensors, this method is currently only available with MKL. Check
|
|
:func:`torch.backends.mkl.is_available` to check if MKL is installed.
|
|
|
|
Arguments:
|
|
input (Tensor): the input tensor of at least :attr:`signal_ndim` ``+ 1``
|
|
dimensions
|
|
signal_ndim (int): the number of dimensions in each signal.
|
|
:attr:`signal_ndim` can only be 1, 2 or 3
|
|
normalized (bool, optional): controls whether to return normalized results.
|
|
Default: ``False``
|
|
|
|
Returns:
|
|
Tensor: A tensor containing the complex-to-complex inverse Fourier transform result
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(3, 3, 2)
|
|
>>> x
|
|
tensor([[[ 1.2766, 1.3680],
|
|
[-0.8337, 2.0251],
|
|
[ 0.9465, -1.4390]],
|
|
|
|
[[-0.1890, 1.6010],
|
|
[ 1.1034, -1.9230],
|
|
[-0.9482, 1.0775]],
|
|
|
|
[[-0.7708, -0.8176],
|
|
[-0.1843, -0.2287],
|
|
[-1.9034, -0.2196]]])
|
|
>>> y = torch.fft(x, 2)
|
|
>>> torch.ifft(y, 2) # recover x
|
|
tensor([[[ 1.2766, 1.3680],
|
|
[-0.8337, 2.0251],
|
|
[ 0.9465, -1.4390]],
|
|
|
|
[[-0.1890, 1.6010],
|
|
[ 1.1034, -1.9230],
|
|
[-0.9482, 1.0775]],
|
|
|
|
[[-0.7708, -0.8176],
|
|
[-0.1843, -0.2287],
|
|
[-1.9034, -0.2196]]])
|
|
|
|
""")
|
|
|
|
add_docstr(torch.rfft,
|
|
r"""
|
|
rfft(input, signal_ndim, normalized=False, onesided=True) -> Tensor
|
|
|
|
Real-to-complex Discrete Fourier Transform
|
|
|
|
This method computes the real-to-complex discrete Fourier transform. It is
|
|
mathematically equivalent to :func:`~torch.fft` with differences only in
|
|
formats of the input and output.
|
|
|
|
This method supports 1D, 2D and 3D real-to-complex transforms, indicated
|
|
by :attr:`signal_ndim`. :attr:`input` must be a tensor with at least
|
|
``signal_ndim`` dimensions with optionally arbitrary number of leading batch
|
|
dimensions. If :attr:`normalized` is set to ``True``, this normalizes the result
|
|
by multiplying it with :math:`\sqrt{\prod_{i=1}^d N_i}` so that the operator is
|
|
unitary, where :math:`N_i` is the size of signal dimension :math:`i`.
|
|
|
|
The real-to-complex Fourier transform results follow conjugate symmetry:
|
|
|
|
.. math::
|
|
X[\omega_1, \dots, \omega_d] = X^*[N_1 - \omega_1, \dots, N_d - \omega_d],
|
|
|
|
where the index arithmetic is computed modulus the size of the corresponding
|
|
dimension, :math:`\ ^*` is the conjugate operator, and
|
|
:math:`d` = :attr:`signal_ndim`. :attr:`onesided` flag controls whether to avoid
|
|
redundancy in the output results. If set to ``True`` (default), the output will
|
|
not be full complex result of shape :math:`(*, 2)`, where :math:`*` is the shape
|
|
of :attr:`input`, but instead the last dimension will be halved to size
|
|
:math:`\lfloor \frac{N_d}{2} \rfloor + 1`.
|
|
|
|
The inverse of this function is :func:`~torch.irfft`.
|
|
|
|
.. warning::
|
|
For CPU tensors, this method is currently only available with MKL. Check
|
|
:func:`torch.backends.mkl.is_available` to check if MKL is installed.
|
|
|
|
Arguments:
|
|
input (Tensor): the input tensor of at least :attr:`signal_ndim` dimensions
|
|
signal_ndim (int): the number of dimensions in each signal.
|
|
:attr:`signal_ndim` can only be 1, 2 or 3
|
|
normalized (bool, optional): controls whether to return normalized results.
|
|
Default: ``False``
|
|
onesided (bool, optional): controls whether to return half of results to
|
|
avoid redundancy Default: ``True``
|
|
|
|
Returns:
|
|
Tensor: A tensor containing the real-to-complex Fourier transform result
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(5, 5)
|
|
>>> torch.rfft(x, 2).shape
|
|
torch.Size([5, 3, 2])
|
|
>>> torch.rfft(x, 2, onesided=False).shape
|
|
torch.Size([5, 5, 2])
|
|
|
|
""")
|
|
|
|
|
|
add_docstr(torch.irfft,
|
|
r"""
|
|
irfft(input, signal_ndim, normalized=False, onesided=True, signal_sizes=None) -> Tensor
|
|
|
|
Complex-to-real Inverse Discrete Fourier Transform
|
|
|
|
This method computes the complex-to-real inverse discrete Fourier transform.
|
|
It is mathematically equivalent to :func:`ifft` with differences only in
|
|
formats of the input and output.
|
|
|
|
The argument specifications are almost identical with :func:`~torch.ifft`.
|
|
Similar to :func:`~torch.ifft`, if :attr:`normalized` is set to ``True``,
|
|
this normalizes the result by multiplying it with
|
|
:math:`\sqrt{\prod_{i=1}^d N_i}` so that the operator is unitary, where
|
|
:math:`N_i` is the size of signal dimension :math:`i`.
|
|
|
|
Due to the conjugate symmetry, :attr:`input` does not need to contain the full
|
|
complex frequency values. Roughly half of the values will be sufficient, as
|
|
is the case when :attr:`input` is given by :func:`~torch.rfft` with
|
|
``rfft(signal, onesided=True)``. In such case, set the :attr:`onesided`
|
|
argument of this method to ``True``. Moreover, the original signal shape
|
|
information can sometimes be lost; optionally set :attr:`signal_sizes` to be
|
|
the size of the original signal (without the batch dimensions if in batched
|
|
mode) to recover it with correct shape.
|
|
|
|
Therefore, to invert an :func:`~torch.rfft`, the :attr:`normalized` and
|
|
:attr:`onesided` arguments should be set identically for :func:`~torch.irfft`,
|
|
and preferably :attr:`signal_sizes` should be given to avoid size mismatch. See the
|
|
example below for a case of size mismatch.
|
|
|
|
See :func:`~torch.rfft` for details on conjugate symmetry.
|
|
|
|
The inverse of this function is :func:`~torch.rfft`.
|
|
|
|
.. warning::
|
|
Generally speaking, the input of this function should contain values
|
|
following conjugate symmetry. Note that even if :attr:`onesided` is
|
|
``True``, often symmetry on some part is still needed. When this
|
|
requirement is not satisfied, the behavior of :func:`~torch.irfft` is
|
|
undefined. Since :func:`torch.autograd.gradcheck` estimates numerical
|
|
Jacobian with point perturbations, :func:`~torch.irfft` will almost
|
|
certainly fail the check.
|
|
|
|
.. warning::
|
|
For CPU tensors, this method is currently only available with MKL. Check
|
|
:func:`torch.backends.mkl.is_available` to check if MKL is installed.
|
|
|
|
Arguments:
|
|
input (Tensor): the input tensor of at least :attr:`signal_ndim` ``+ 1``
|
|
dimensions
|
|
signal_ndim (int): the number of dimensions in each signal.
|
|
:attr:`signal_ndim` can only be 1, 2 or 3
|
|
normalized (bool, optional): controls whether to return normalized results.
|
|
Default: ``False``
|
|
onesided (bool, optional): controls whether :attr:`input` was halfed to avoid
|
|
redundancy, e.g., by :func:`rfft`. Default: ``True``
|
|
signal_sizes (list or :class:`torch.Size`, optional): the size of the original
|
|
signal (without batch dimension). Default: ``None``
|
|
|
|
Returns:
|
|
Tensor: A tensor containing the complex-to-real inverse Fourier transform result
|
|
|
|
Example::
|
|
|
|
>>> x = torch.randn(4, 4)
|
|
>>> torch.rfft(x, 2, onesided=True).shape
|
|
torch.Size([4, 3, 2])
|
|
>>>
|
|
>>> # notice that with onesided=True, output size does not determine the original signal size
|
|
>>> x = torch.randn(4, 5)
|
|
|
|
>>> torch.rfft(x, 2, onesided=True).shape
|
|
torch.Size([4, 3, 2])
|
|
>>>
|
|
>>> # now we use the original shape to recover x
|
|
>>> x
|
|
tensor([[-0.8992, 0.6117, -1.6091, -0.4155, -0.8346],
|
|
[-2.1596, -0.0853, 0.7232, 0.1941, -0.0789],
|
|
[-2.0329, 1.1031, 0.6869, -0.5042, 0.9895],
|
|
[-0.1884, 0.2858, -1.5831, 0.9917, -0.8356]])
|
|
>>> y = torch.rfft(x, 2, onesided=True)
|
|
>>> torch.irfft(y, 2, onesided=True, signal_sizes=x.shape) # recover x
|
|
tensor([[-0.8992, 0.6117, -1.6091, -0.4155, -0.8346],
|
|
[-2.1596, -0.0853, 0.7232, 0.1941, -0.0789],
|
|
[-2.0329, 1.1031, 0.6869, -0.5042, 0.9895],
|
|
[-0.1884, 0.2858, -1.5831, 0.9917, -0.8356]])
|
|
|
|
""")
|
|
|
|
|
|
add_docstr(torch.hann_window,
|
|
"""
|
|
hann_window(window_length, periodic=True, dtype=None, \
|
|
layout=torch.strided, device=None, requires_grad=False) -> Tensor
|
|
""" + r"""
|
|
Hann window function.
|
|
|
|
.. math::
|
|
w[n] = \frac{1}{2}\ \left[1 - \cos \left( \frac{2 \pi n}{N - 1} \right)\right] =
|
|
\sin^2 \left( \frac{\pi n}{N - 1} \right),
|
|
|
|
where :math:`N` is the full window size.
|
|
|
|
The input :attr:`window_length` is a positive integer controlling the
|
|
returned window size. :attr:`periodic` flag determines whether the returned
|
|
window trims off the last duplicate value from the symmetric window and is
|
|
ready to be used as a periodic window with functions like
|
|
:meth:`torch.stft`. Therefore, if :attr:`periodic` is true, the :math:`N` in
|
|
above formula is in fact :math:`\text{window_length} + 1`. Also, we always have
|
|
``torch.hann_window(L, periodic=True)`` equal to
|
|
``torch.hann_window(L + 1, periodic=False)[:-1])``.
|
|
|
|
.. note::
|
|
If :attr:`window_length` :math:`=1`, the returned window contains a single value 1.
|
|
""" + r"""
|
|
Arguments:
|
|
window_length (int): the size of returned window
|
|
periodic (bool, optional): If True, returns a window to be used as periodic
|
|
function. If False, return a symmetric window.
|
|
{dtype} Only floating point types are supported.
|
|
layout (:class:`torch.layout`, optional): the desired layout of returned window tensor. Only
|
|
``torch.strided`` (dense layout) is supported.
|
|
{device}
|
|
{requires_grad}
|
|
|
|
Returns:
|
|
Tensor: A 1-D tensor of size :math:`(\text{{window_length}},)` containing the window
|
|
|
|
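Example (illustrative; with ``periodic=False`` the window is symmetric and
:math:`N` equals the requested length)::

>>> torch.hann_window(5, periodic=False)
tensor([ 0.0000, 0.5000, 1.0000, 0.5000, 0.0000])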
""".format(**factory_common_args))
|
|
|
|
|
|
add_docstr(torch.hamming_window,
           """
hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, dtype=None, \
layout=torch.strided, device=None, requires_grad=False) -> Tensor
""" + r"""
Hamming window function.

.. math::
    w[n] = \alpha - \beta\ \cos \left( \frac{2 \pi n}{N - 1} \right),

where :math:`N` is the full window size.

The input :attr:`window_length` is a positive integer controlling the
returned window size. The :attr:`periodic` flag determines whether the returned
window trims off the last duplicate value from the symmetric window and is
ready to be used as a periodic window with functions like
:meth:`torch.stft`. Therefore, if :attr:`periodic` is true, the :math:`N` in
the above formula is in fact :math:`\text{window_length} + 1`. Also, we always have
``torch.hamming_window(L, periodic=True)`` equal to
``torch.hamming_window(L + 1, periodic=False)[:-1]``.

.. note::
    If :attr:`window_length` :math:`=1`, the returned window contains a single value 1.

.. note::
    This is a generalized version of :meth:`torch.hann_window`.
""" + r"""
Arguments:
    window_length (int): the size of the returned window
    periodic (bool, optional): If True, returns a window to be used as a periodic
        function. If False, returns a symmetric window.
    alpha (float, optional): the coefficient :math:`\alpha` in the equation above.
        Default: ``0.54``
    beta (float, optional): the coefficient :math:`\beta` in the equation above.
        Default: ``0.46``
    {dtype} Only floating point types are supported.
    layout (:class:`torch.layout`, optional): the desired layout of the returned window tensor. Only
        ``torch.strided`` (dense layout) is supported.
    {device}
    {requires_grad}

Returns:
    Tensor: A 1-D tensor of size :math:`(\text{{window_length}},)` containing the window

""".format(**factory_common_args))
add_docstr(torch.bartlett_window,
           """
bartlett_window(window_length, periodic=True, dtype=None, \
layout=torch.strided, device=None, requires_grad=False) -> Tensor
""" + r"""
Bartlett window function.

.. math::
    w[n] = 1 - \left| \frac{2n}{N-1} - 1 \right| = \begin{cases}
        \frac{2n}{N - 1} & \text{if } 0 \leq n \leq \frac{N - 1}{2} \\
        2 - \frac{2n}{N - 1} & \text{if } \frac{N - 1}{2} < n < N \\
    \end{cases},

where :math:`N` is the full window size.

The input :attr:`window_length` is a positive integer controlling the
returned window size. The :attr:`periodic` flag determines whether the returned
window trims off the last duplicate value from the symmetric window and is
ready to be used as a periodic window with functions like
:meth:`torch.stft`. Therefore, if :attr:`periodic` is true, the :math:`N` in
the above formula is in fact :math:`\text{window_length} + 1`. Also, we always have
``torch.bartlett_window(L, periodic=True)`` equal to
``torch.bartlett_window(L + 1, periodic=False)[:-1]``.

.. note::
    If :attr:`window_length` :math:`=1`, the returned window contains a single value 1.
""" + r"""
Arguments:
    window_length (int): the size of the returned window
    periodic (bool, optional): If True, returns a window to be used as a periodic
        function. If False, returns a symmetric window.
    {dtype} Only floating point types are supported.
    layout (:class:`torch.layout`, optional): the desired layout of the returned window tensor. Only
        ``torch.strided`` (dense layout) is supported.
    {device}
    {requires_grad}

Returns:
    Tensor: A 1-D tensor of size :math:`(\text{{window_length}},)` containing the window

""".format(**factory_common_args))
add_docstr(torch.blackman_window,
           """
blackman_window(window_length, periodic=True, dtype=None, \
layout=torch.strided, device=None, requires_grad=False) -> Tensor
""" + r"""
Blackman window function.

.. math::
    w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right),

where :math:`N` is the full window size.

The input :attr:`window_length` is a positive integer controlling the
returned window size. The :attr:`periodic` flag determines whether the returned
window trims off the last duplicate value from the symmetric window and is
ready to be used as a periodic window with functions like
:meth:`torch.stft`. Therefore, if :attr:`periodic` is true, the :math:`N` in
the above formula is in fact :math:`\text{window_length} + 1`. Also, we always have
``torch.blackman_window(L, periodic=True)`` equal to
``torch.blackman_window(L + 1, periodic=False)[:-1]``.

.. note::
    If :attr:`window_length` :math:`=1`, the returned window contains a single value 1.
""" + r"""
Arguments:
    window_length (int): the size of the returned window
    periodic (bool, optional): If True, returns a window to be used as a periodic
        function. If False, returns a symmetric window.
    {dtype} Only floating point types are supported.
    layout (:class:`torch.layout`, optional): the desired layout of the returned window tensor. Only
        ``torch.strided`` (dense layout) is supported.
    {device}
    {requires_grad}

Returns:
    Tensor: A 1-D tensor of size :math:`(\text{{window_length}},)` containing the window

""".format(**factory_common_args))