Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49097

RFC: https://github.com/pytorch/rfcs/pull/11

This PR adds the basic logic to handle forward grads as dual Tensors. It contains the following:
- A mechanism to save dual state on a Tensor and clear it up when the dual level ends
- C++ and Python user-facing APIs
- An updated view system that is able to track both forward and backward views

The current PR has the following limitations:
- Extensive tests are in the next PR in the stack, as formulas are needed to write full tests.
- Only the manual formulas have been audited; no other formula is actually implemented here (they are in the next PR in the stack).
- Only level 0 is allowed for now. This was discussed, and it was agreed that more levels are not needed for the first version of this PR.
- We could save one ViewInfo creation when both the forward and backward views have the same base, by adding a boolean flag to the DifferentiableViewMeta and extra logic in the `as_view` method. This is left out to keep this PR concise.
- We could skip tracking forward views if the base has a forward grad, by adding extra logic in the `as_view` method. This is left out to keep this PR concise.

Reading guide:
- Updated view handling in [gen_variable_type.py](https://github.com/pytorch/pytorch/pull/49097/files#diff-f6553cec68caeaea36f6c8b14ff76a6d39dfd774e0ea9ef2f76e8d81fd9af5df), [VariableTypeUtils.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-ec71cfa45954dece1236c661d170e6341879c5be637f4abf52e826d61b40695a), [variable.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-60e3bfe444e89efc7149f25b38e472710525984789934ab83f1bd5671b8ff285) (skip code below "[Forward Grad View]" for now), [variable.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-1604bcd0e4350ed99ec45e437cee7ac9ebe337392c9ea16a236247aeeb35b02bR266-R542) and [custom_function.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-dd85f452082b5bb6612bbc12adb496f8827defa228509f7b493de1d517522d5d). This introduces the new ViewInfo that holds the view information shared by forward and backward, updates the differentiable view meta to use it, and updates the as_view function to handle both forward and backward views.
- New forward grad class that handles storing gradients and tracking at each level: [forward_grad.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-c6c5b9ab2d7e5dde4102495faa1b6bbbfc23aa3e47deb7359c0bfe1eb004c0cb), [forward_grad.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-de2ab54ade7312701850d71a119a4f4ee4b9fc5a9c42a467cdd4e73c033531dd) and [build_variables.bzl](https://github.com/pytorch/pytorch/pull/49097/files#diff-dfdfa2efb17beddfd9094524f95351fd197db6c8857e96b436fb599870359325). EDIT: These files also contain the new flag to globally disable forward AD, which allows us to reduce performance issues while this is in development.
- Lowest-level API and binding between Tensor and AutogradMeta in [TensorBody.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-7554853205392fa743357bf845ecc350a974ec049383248c12daaf2f4de04911), [TensorImpl.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-052bd9150ef8e09289ddf644b5a6830ede49207201cd41728f6d7cc6d9cead94), [TensorImpl.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-a15aae4cf23da44970db7cece62ff981265575c798c62f7b52d87c8809dfe2e1) and the rest of [variable.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-60e3bfe444e89efc7149f25b38e472710525984789934ab83f1bd5671b8ff285R557-R677)
- API to access the forward primal, which needs to be a differentiable function (and so lives in native_functions.yaml): [native_functions.yaml](https://github.com/pytorch/pytorch/pull/49097/files#diff-2f3dbd85efb9b5172f2264eedd3be47dd765e6ab7cc8bf3ade5e62c28ae35991), [NamedRegistrations.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-69bd3bea510c9b64e1633fa18c3ea63d4b8348dbad3a78ad9de844ab3e43dc1d), [VariableMethodsStub.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-23f5fcb737a2b289811fe0f4b65aef775e7c824b2e629ecd343df51405cd434f), [derivatives.yaml](https://github.com/pytorch/pytorch/pull/49097/files#diff-e4c2f99a2404e98c3586e07425da73008f36b1bada790648a7297af141d37f8c), [gen_python_functions.py](https://github.com/pytorch/pytorch/pull/49097/files#diff-e4c2f99a2404e98c3586e07425da73008f36b1bada790648a7297af141d37f8c), [gen_trace_type.py](https://github.com/pytorch/pytorch/pull/49097/files#diff-54e0b976027bf8debefb959ff360b89ae93466970c843365b1b3a03806d868ce), [TraceTypeManual.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-f34636741ad4a23d018e0c289bc750c3bad887b45660e1d6eaf440d234a78fbf) and [part of VariableTypeManual.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-6e19a1bce8cbdba8714b6e2c794a76bc0864b64a49cfa757cb0b5afdc937d1a4R198-R243)
- C++ API: [autograd.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-349028fbe8291a965a7a263c323b208fe071c35c66179ee997ef84fa81aa4b1e), [autograd.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-a3fe908d67dfec16a1fcde300de68b0701bf68b88db7451f29f2bee255cf30c9)
- Python binding: [init.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-c58a67c85191c22c9b3bb439117d8053edfd9dea839fa010cf967d404c3c630d)
- Python API: [forward_ad.py](https://github.com/pytorch/pytorch/pull/49097/files#diff-a4efad4ba18fffdfb264c21e5475997a24a743089a899f8ec1a5ff962c6738d9), [autograd/__init__.py](https://github.com/pytorch/pytorch/pull/49097/files#diff-743abcafd32ad0e69f39ac5a91df4197b7e1921c135cacee7ef6dc829a8a7af8)
- C++ and Python printing: [Formatting.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-881dba501e71662e2e4818b4b016f739b344c8aed2f5edc6b871eda47a2aced0), [_tensor_str.py](https://github.com/pytorch/pytorch/pull/49097/files#diff-a7911f8d5e73adbff914d99fd7818ace2a7030b6a3748abe06ec6fc6e3df9cc3)
- Utilities for formulas and updated manual functions that respect the new view system as well as forward grads: [FunctionsManual.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-6378bb6dc81a64dab676d61731341fa5d1088418f32a1473a33a0ccfc2357dc1), [FunctionsManual.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-4adbd88239afcd60e8198aab65d4f5e43b62314e34b80551e997a1ea503adea5) and the [rest of VariableTypeManual.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-6e19a1bce8cbdba8714b6e2c794a76bc0864b64a49cfa757cb0b5afdc937d1a4R264-R433)
- Ensure SavedVariable saves the forward grad properly: [saved_variable.h](https://github.com/pytorch/pytorch/pull/49097/files#diff-c1b8039d776241abe177d5aa99b79dd9489a9b3e529da8ab24c2e386c1238ae2), [saved_variable.cpp](https://github.com/pytorch/pytorch/pull/49097/files#diff-cc9fba479b5beae06b2eea2e390d17796e0341c5b037a20b5bcaccbb0c341030)

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D25607503

Pulled By: albanD

fbshipit-source-id: f1396290de1d75760f3d380c43cdd56e86fa6099
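For orientation, here is a minimal sketch of how the user-facing dual-tensor API described above is meant to be used, based on the `torch.autograd.forward_ad` module this stack introduces. It is illustrative only: most op formulas only land in the next PR of the stack, so treat the propagated tangent as the intended behavior rather than what this PR alone enables.

```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.randn(3)
tangent = torch.randn(3)  # direction along which to differentiate

# Entering the dual level enables forward-mode AD (only level 0 exists for now).
with fwAD.dual_level():
    dual = fwAD.make_dual(primal, tangent)  # attach the tangent as dual state
    out = dual * 2                          # ops propagate the forward grad alongside the primal
    p, t = fwAD.unpack_dual(out)            # t is the forward grad of `out`, here 2 * tangent
# Exiting the level clears the dual state saved on the tensors.
```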
216 lines
15 KiB
C++
#pragma once

// NB: Must be at the top of file to avoid including the deprecated "math.h".
// https://stackoverflow.com/questions/6563810/m-pi-works-with-math-h-but-not-with-cmath-in-visual-studio
#ifdef _MSC_VER
#ifndef _USE_MATH_DEFINES
#define _USE_MATH_DEFINES
#endif
#include <cmath>
#endif

#include <torch/csrc/autograd/generated/Functions.h>
#include <ATen/ATen.h>

namespace torch {
namespace autograd {
namespace generated {
namespace details {

// A simple way to imperatively compute index ranges for slots
// that have been flattened
struct IndexRangeGenerator {
  IndexRange range(size_t range_size) {
    i += range_size;
    return {i - range_size, i};
  }
  size_t size() { return i; }
  private:
    size_t i = 0;
};

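// Hypothetical usage sketch (illustrative only, not part of the original
// header): a backward function whose flattened gradient list holds, say,
// three slots for "self" followed by one slot for "other" can carve up
// that list as:
//
//   IndexRangeGenerator gen;
//   auto self_ix  = gen.range(3);  // -> {0, 3}
//   auto other_ix = gen.range(1);  // -> {3, 4}
//   // gen.size() == 4 flattened slots in total
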
bool isFwGradDefined(const c10::optional<Tensor>& t);
Tensor toLegacyFwGrad(const c10::optional<Tensor>& t);
Tensor toLegacyPrimal(const c10::optional<Tensor>& t);

bool any_variable_defined(variable_list& variables);
void copy_range(variable_list& out, IndexRange range, const at::Tensor & t);
void copy_range(variable_list& out, IndexRange range, at::ArrayRef<at::Tensor> t);
at::Tensor copysign_tensor_self_backward(const Tensor & grad, const Tensor & self, const Tensor & result);
at::Tensor not_implemented(const char* name);
at::Tensor handle_r_to_c(ScalarType self_st, Tensor gradient_result);
at::Tensor maybe_multiply(const at::Tensor & t, const at::Scalar & s);
int64_t _safe_size(IntArrayRef sizes, IntArrayRef dim);
Tensor restore_reduced_dims(const Tensor &output, IntArrayRef dims, bool keepdim);
Tensor scale_grad_by_count(const Tensor &grad, const Tensor &mask, IntArrayRef dims);
at::Tensor norm_backward(const at::Tensor & grad, const at::Tensor & self, const optional<at::Scalar> & p_, const at::Tensor & norm);
at::Tensor norm_backward(at::Tensor grad, const at::Tensor & self, const optional<at::Scalar> & p_, at::Tensor norm, at::IntArrayRef dim, bool keepdim);
at::Tensor pow_backward(at::Tensor grad, const at::Tensor & self, const at::Scalar & exponent_);
at::Tensor pow_backward_self(at::Tensor grad, const at::Tensor & self, const at::Tensor & exponent);
at::Tensor pow_backward_exponent(at::Tensor grad, const at::Tensor& self, const at::Tensor& exponent, at::Tensor result);
at::Tensor pow_backward_exponent(at::Tensor grad, const at::Scalar & base, const at::Tensor& exponent, at::Tensor result);
at::Tensor angle_backward(at::Tensor grad, const at::Tensor& self);
at::Tensor mul_tensor_backward(Tensor grad, Tensor other, ScalarType self_st);
at::Tensor div_tensor_self_backward(Tensor grad, Tensor other, ScalarType self_st);
at::Tensor div_tensor_other_backward(Tensor grad, Tensor self, Tensor other);
at::Tensor mvlgamma_backward(at::Tensor grad, const at::Tensor & self, int64_t p);
at::Tensor permute_backwards(const at::Tensor & grad, at::IntArrayRef fwd_dims);
at::Tensor rad2deg_backward(const at::Tensor& grad);
at::Tensor deg2rad_backward(const at::Tensor& grad);
at::Tensor unsqueeze_multiple(const at::Tensor & t, at::IntArrayRef dim, size_t n_dims);
at::Tensor sum_backward(const at::Tensor & grad, at::IntArrayRef sizes, at::IntArrayRef dims, bool keepdim);
at::Tensor nansum_backward(const at::Tensor & grad, const at::Tensor & self, at::IntArrayRef dims, bool keepdim);
std::vector<int64_t> reverse_list(const at::IntArrayRef list);
at::Tensor reverse_dim(const at::Tensor& t, int64_t dim);
at::Tensor prod_safe_zeros_backward(const at::Tensor &grad, const at::Tensor& inp, int64_t dim);
at::Tensor prod_backward(const at::Tensor& grad, const at::Tensor& input, const at::Tensor& result);
at::Tensor prod_backward(at::Tensor grad, const at::Tensor& input, at::Tensor result, int64_t dim, bool keepdim);
at::Tensor solve_backward_self(const at::Tensor & grad, const at::Tensor & self, const at::Tensor & A);
at::Tensor solve_backward_A(const at::Tensor & grad, const at::Tensor & self, const at::Tensor & A, const at::Tensor & solution);
at::Tensor cumsum_backward(const at::Tensor & x, int64_t dim);
at::Tensor logsumexp_backward(at::Tensor grad, const at::Tensor & self, at::Tensor result, at::IntArrayRef dim, bool keepdim);
at::Tensor logcumsumexp_backward(at::Tensor grad, const at::Tensor & self, at::Tensor result, int64_t dim);
at::Tensor unbind_backward(const variable_list& grads, int64_t dim);
at::Tensor unsqueeze_to(const at::Tensor & self, at::IntArrayRef sizes);
at::Tensor unsqueeze_to(const at::Tensor & self, int64_t dim, at::IntArrayRef sizes);
std::vector<at::Tensor> cat_tensors_backward(const at::Tensor & grad, const std::vector<std::vector<int64_t>> &sizes, int64_t dim);
at::Tensor clamp_backward(const at::Tensor & grad, const at::Tensor &self, const optional<at::Scalar> & min, const optional<at::Scalar> & max);
at::IntArrayRef strides_or_error(const Tensor & input, c10::string_view const & input_name);
at::Tensor mm_mat1_backward(const Tensor & grad, const Tensor & mat2, at::IntArrayRef mat1_sizes, at::IntArrayRef mat1_strides, const Scalar & alpha);
at::Tensor mm_mat2_backward(const at::Tensor & grad, const at::Tensor & mat1, at::IntArrayRef sizes, at::IntArrayRef strides, const at::Scalar & alpha);
at::Tensor _sparse_addmm_sparse_backward(const at::Tensor& grad, const at::Tensor& sparse_, const at::Tensor& dense, const at::Scalar& alpha);
at::Tensor sparse_sparse_matmul_backward(const at::Tensor& grad, const at::Tensor& mat1, const at::Tensor& mat2, int64_t grad_order);
at::Tensor renorm_backward(const at::Tensor & grad, const at::Tensor & self, at::Scalar p, int64_t dim, at::Scalar maxnorm);
at::Tensor repeat_backward(at::Tensor grad, at::IntArrayRef repeats, at::IntArrayRef input_shape);
at::Tensor _fused_dropout_backward(at::Tensor grad, at::Tensor mask, double p1m);
at::Tensor evenly_distribute_backward(at::Tensor grad, const at::Tensor & input, const at::Tensor & value);
at::Tensor sgn_backward(Tensor result, Tensor grad, Tensor self);
at::Tensor var_backward(const at::Tensor & grad, const at::Tensor & self, bool unbiased);
at::Tensor var_backward(at::Tensor grad, const at::Tensor & self, at::IntArrayRef dim, bool unbiased, bool keepdim);
at::Tensor std_backward(const at::Tensor & result, const at::Tensor & grad, const at::Tensor & self, bool unbiased);
at::Tensor std_backward(const at::Tensor & result, at::Tensor grad, const at::Tensor & self, at::IntArrayRef dim, bool unbiased, bool keepdim);
at::Tensor mean_backward(at::Tensor grad, const at::IntArrayRef sizes, at::IntArrayRef dim, bool keepdim);
at::Tensor mean_backward(at::Tensor grad, const at::IntArrayRef sizes, int numel);
at::Tensor var_std_mean_backward(const variable_list& grads, const at::Tensor & self, const at::Tensor & r1, const at::Tensor & r2, at::IntArrayRef dim, bool unbiased, bool keepdim, bool is_std);
at::Tensor var_std_mean_backward(const variable_list& grads, const at::Tensor & self, const at::Tensor & r1, const at::Tensor & r2, bool unbiased, bool is_std);
at::Tensor masked_scatter_backward(const at::Tensor & grad, const at::Tensor & mask, at::IntArrayRef sizes);
at::Tensor cholesky_backward(at::Tensor grad, bool upper, at::Tensor L);
at::Tensor cholesky_inverse_backward(at::Tensor grad, at::Tensor L, bool upper, at::Tensor inverse);
at::Tensor split_with_sizes_backward(const std::vector<torch::autograd::Variable> &grads,
    IntArrayRef split_sizes, int64_t dim, IntArrayRef sizes, const at::TensorOptions &options);
at::Tensor split_backward(const std::vector<torch::autograd::Variable> &grads, int64_t split_size, int64_t dim, at::IntArrayRef sizes, const at::TensorOptions &options);
at::Tensor max_pool_double_backward(const at::Tensor & grad, const at::Tensor & indices, int dim);
at::Tensor glu_double_backward(const at::Tensor & grad, const at::Tensor & grad_output, const at::Tensor & input, int64_t dim);
at::Tensor glu_double_backward_grad_output(const at::Tensor & grad, const at::Tensor & input, int64_t dim);
at::Tensor infinitely_differentiable_silu_backward(const at::Tensor& grad_output, const at::Tensor& input);
Tensor infinitely_differentiable_logit_backward(const Tensor& grad, const Tensor& self, c10::optional<double> eps);
at::Tensor kl_div_double_backward_grad_output(const at::Tensor & grad, const at::Tensor & input, const at::Tensor & target, int64_t reduction, bool log_target);
at::Tensor binary_cross_entropy_with_logits_target_backward(const at::Tensor& grad_output, const at::Tensor& self, const at::Tensor& target, const c10::optional<at::Tensor>& weight, const c10::optional<at::Tensor>& pos_weight, int64_t reduction);
at::Tensor log_sigmoid_double_backward(const at::Tensor & grad, const at::Tensor & input);
at::Tensor softmax_double_backward(const at::Tensor & grad, const at::Tensor & grad_output, int dim, const at::Tensor & output);
at::Tensor log_softmax_double_backward(const at::Tensor & grad, const at::Tensor & grad_output, int dim, const at::Tensor & output);
at::Tensor binary_cross_entropy_double_backward(const at::Tensor & grad_output, const at::Tensor & grad, const at::Tensor & input, const at::Tensor & target, const c10::optional<at::Tensor>& weight, int64_t reduction);
at::Tensor binary_cross_entropy_double_backward_grad_output(const at::Tensor & grad, const at::Tensor & input, const at::Tensor & target, const c10::optional<at::Tensor>& weight, int64_t reduction);
at::Tensor l1_loss_double_backward_grad_output(const at::Tensor & grad, const at::Tensor & input, const at::Tensor & target, int64_t reduction);
at::Tensor smooth_l1_loss_double_backward(const at::Tensor & grad, const at::Tensor & input, const at::Tensor & target, int64_t reduction, double beta);
at::Tensor smooth_l1_loss_double_backward_grad_output(const at::Tensor & grad, const at::Tensor & grad_output, const at::Tensor & input, const at::Tensor & target, int64_t reduction, double beta);
at::Tensor mse_loss_double_backward(const at::Tensor & grad, const at::Tensor & input, int64_t reduction);
at::Tensor mse_loss_double_backward_grad_output(const at::Tensor & grad, const at::Tensor & grad_output, const at::Tensor & input, const at::Tensor & target, int64_t reduction);
at::Tensor soft_margin_loss_double_backward(const at::Tensor & grad, const at::Tensor & input, const at::Tensor & target, int64_t reduction);
at::Tensor soft_margin_loss_double_backward_grad_output(const at::Tensor & grad, const at::Tensor & grad_output, const at::Tensor & input, const at::Tensor & target, int64_t reduction);
at::Tensor softplus_double_backward(const at::Tensor & grad, const at::Tensor & input, at::Scalar beta, at::Scalar threshold);
at::Tensor logdet_backward(const at::Tensor & grad, const at::Tensor& self, const at::Tensor& logdet);
at::Tensor slogdet_backward(const at::Tensor& grad_logabsdet, const at::Tensor& self, const at::Tensor& signdet, const at::Tensor& logabsdet);
at::Tensor log1p_backward(const at::Tensor& grad, const at::Tensor& self);
at::Tensor sparse_constructor_values_backward(const at::Tensor& sparse_grad_out, const at::Tensor& indices, at::IntArrayRef values_shape);
at::Tensor embedding_dense_double_backward(const at::Tensor & grad, const at::Tensor & indices, int64_t padding_idx);
at::Tensor index_backward(at::Tensor zeros_like_self, at::TensorList indices, const at::Tensor& grad);
at::Tensor _cudnn_ctc_loss_backward(const at::Tensor& grad_out, const at::Tensor& loss, const at::Tensor& raw_grad, bool zero_infinity);

Tensor svd_backward(const std::vector<torch::autograd::Variable> &grads, const Tensor& self,
    bool some, bool compute_uv, const Tensor& raw_u, const Tensor& sigma, const Tensor& raw_v);
Tensor symeig_backward(const std::vector<torch::autograd::Variable> &grads, const Tensor& self,
    bool eigenvectors, bool upper, const Tensor& lambda, const Tensor& v);
std::tuple<Tensor, Tensor> triangular_solve_backward(
    const Tensor & grad_x, const Tensor & grad_m,
    const Tensor & b, const Tensor & a, const Tensor & x,
    const bool upper, const bool transpose, const bool unitriangular,
    std::array<bool, 2> output_mask);
std::tuple<Tensor, Tensor, Tensor> _trilinear_backward(const Tensor& grad_out, const Tensor& i1, const Tensor& i2, const Tensor& i3,
    IntArrayRef expand1, IntArrayRef expand2, IntArrayRef expand3,
    IntArrayRef sumdim, int64_t unroll_dim, std::array<bool, 3> grad_mask);
Tensor qr_backward(const std::vector<torch::autograd::Variable> &grads, const Tensor& self,
    bool some, const Tensor& Q, const Tensor& R);
Tensor eig_backward(const std::vector<torch::autograd::Variable> &grads, const Tensor& self,
    bool eigenvectors, const Tensor& lambda, const Tensor& v);
Tensor det_backward(const Tensor & grad, const Tensor& self, const Tensor& det);
std::tuple<Tensor, Tensor, Tensor> batchnorm_double_backward(
    const Tensor & input,
    const c10::optional<Tensor> & gamma,
    const Tensor & ggI,
    const Tensor & ggG,
    const Tensor & ggB,
    const Tensor & gO,
    const c10::optional<Tensor> & running_mean,
    const c10::optional<Tensor> & running_var,
    bool training,
    double eps,
    const c10::optional<Tensor> & save_mean,
    const c10::optional<Tensor> & save_invstd,
    std::array<bool,3> output_mask);
std::tuple<Tensor, Tensor> _euclidean_dist_backward(const Tensor & grad, const Tensor & x1, const Tensor & x2, const Tensor & res);
Tensor kl_div_target_backward(Tensor grad_output, Tensor self, Tensor target, int64_t reduction, bool log_target);
Tensor fft_backward(const Tensor& self, const Tensor& grad, int64_t signal_ndim,
    bool complex_input, bool complex_output,
    bool inverse, IntArrayRef checked_signal_sizes,
    int64_t normalization, bool onesided,
    IntArrayRef output_sizes);
Tensor fft_r2c_backward(const Tensor& grad, IntArrayRef dim, int64_t normalization,
    bool onesided, int64_t last_dim_size);
Tensor fft_c2r_backward(const Tensor& grad, IntArrayRef dim, int64_t normalization);
Tensor constant_pad_nd_backward(const Tensor& grad, IntArrayRef pad);
std::tuple<Tensor, Tensor> cholesky_solve_backward(
    const Tensor& grad_x, const Tensor& self,
    const Tensor& input2, const Tensor& result, const bool upper);
std::tuple<Tensor, Tensor, Tensor>
infinitely_differentiable_native_group_norm_backward(
    const Tensor& dY,
    const Tensor& dmean,
    const Tensor& drstd,
    const Tensor& X,
    const Tensor& mean,
    const Tensor& rstd,
    const c10::optional<Tensor>& gamma,
    int64_t N,
    int64_t C,
    int64_t HxW,
    int64_t group,
    double eps,
    std::array<bool, 3> grad_input_mask);
std::tuple<Tensor, Tensor, Tensor> prelu_double_backward(
    const Tensor & grad_grad_input,
    const Tensor & grad_grad_weight,
    const Tensor & grad_out,
    const Tensor & input_,
    const Tensor & weight_);
Tensor as_strided_backward(Tensor grad, TensorGeometry input_geometry, IntArrayRef sizes, IntArrayRef strides, optional<int64_t> storage_offset_);
std::tuple<Tensor, Tensor> atan2_backward(const Tensor& grad, const Tensor& self, const Tensor& other, std::array<bool, 2> output_mask);
std::tuple<Tensor, Tensor, Tensor>
infinitely_differentiable_native_layer_norm_backward(
    const Tensor& dY,
    const Tensor& dmean,
    const Tensor& drstd,
    const Tensor& X,
    const Tensor& mean,
    const Tensor& rstd,
    const c10::optional<Tensor>& gamma,
    IntArrayRef normalized_shape,
    double eps,
    std::array<bool, 3> grad_input_mask);

} // namespace details
} // namespace generated
} // namespace autograd
} // namespace torch