[cuBLASLt] relax addmm cuBLASLt constraint (#153675)
`beta == 1.0` no longer appears to be required (see https://github.com/pytorch/pytorch/issues/153590). The `self.dim() == 1` restriction still seems to hold, but it is unclear whether that is due to a lack of handling on the PyTorch side or on the cuBLASLt side; this will be investigated.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153675
Approved by: https://github.com/Skylion007
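To make the relaxed constraint concrete, here is a minimal sketch (an illustration, not part of the commit; it assumes a CUDA device and half-precision inputs) of an `addmm` call that this change newly makes eligible for the cuBLASLt path: a 1-D contiguous `self` whose length matches `mat2`'s column count, combined with `beta != 1.0`. The backend dispatch itself is internal, so the sketch only verifies that the documented semantics `beta * self + alpha * (mat1 @ mat2)` hold:

```python
import torch

# Shapes chosen to satisfy the Lt-path checks in addmm_out_cuda_impl:
# self is 1-D, contiguous, and self.size(0) == mat2.size(1).
mat1 = torch.randn(64, 128, device="cuda", dtype=torch.half)
mat2 = torch.randn(128, 32, device="cuda", dtype=torch.half)
bias = torch.randn(32, device="cuda", dtype=torch.half)

# Before this change, beta != 1.0 alone disqualified the call from the
# Lt interface; after it, beta is no longer part of the condition.
out = torch.addmm(bias, mat1, mat2, beta=2.0, alpha=1.0)

# addmm semantics are backend-independent: beta * self + alpha * (mat1 @ mat2).
ref = 2.0 * bias + mat1 @ mat2
torch.testing.assert_close(out, ref, rtol=1e-2, atol=1e-2)
```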
parent 7c9d94e9bb
commit f9bb7cf72a
@@ -369,7 +369,7 @@ Tensor& addmm_out_cuda_impl(Tensor& result, const Tensor& self, const Tensor& ma
   // leading dim >> rows when they are sliced from a large tensor
   // see fbcode/caffe2/test/test_linalg.py:test_corner_cases_of_cublasltmatmul
   if (!disable_addmm_cuda_lt_final) {
-    useLtInterface = beta.toComplexDouble() == 1.0 && self.dim() == 1 &&
+    useLtInterface = self.dim() == 1 &&
        result.dim() == 2 && self.sizes()[0] == mat2_sizes[1] &&
        self.is_contiguous() && result.is_contiguous() &&
 #ifdef USE_ROCM