The current logic that sets the HAS_SBGEMM flag is skipped when the BLAS libraries have already been found, i.e., when set via the environment variable BLAS=OpenBLAS. If BLAS_LIBRARIES is already set, the code that checks whether the BLAS library provides sbgemm is never executed. This commit hoists that check out of the conditional so it runs unconditionally.
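A minimal sketch of the reordering, with illustrative variable names rather than the exact FindBLAS.cmake symbols:

    include(CheckFunctionExists)
    if(NOT BLAS_LIBRARIES)
      # ... probe candidate BLAS implementations and set BLAS_LIBRARIES ...
    endif()
    # Previously nested inside the probe above, so a preset BLAS_LIBRARIES
    # (e.g. BLAS=OpenBLAS in the environment) skipped it; now it always runs:
    set(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
    check_function_exists("sbgemm_" BLAS_HAS_SBGEMM)
    set(CMAKE_REQUIRED_LIBRARIES "")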
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125227
Approved by: https://github.com/malfet
When PyTorch is used with an unregistered BLAS, Spack sets BLAS=Generic.
PyTorch then searches only for libblas.
If the BLAS package's library is not named libblas, `spack install py-torch` fails.
This PR lets the BLAS library names be passed via the GENERIC_BLAS_LIBRARIES environment variable so that py-torch can find the BLAS library.
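A sketch of the idea against FindBLAS.cmake's Generic branch (check_fortran_libraries is that file's internal probe macro); the exact variable handling in the PR may differ:

    if(BLAS STREQUAL "Generic")
      set(GENERIC_BLAS "blas")  # default: only libblas is searched
      if(DEFINED ENV{GENERIC_BLAS_LIBRARIES})
        # e.g. Spack exports GENERIC_BLAS_LIBRARIES with the real library name(s)
        set(GENERIC_BLAS $ENV{GENERIC_BLAS_LIBRARIES})
      endif()
      check_fortran_libraries(BLAS_LIBRARIES BLAS sgemm "" "${GENERIC_BLAS}")
    endif()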
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74269
Approved by: https://github.com/kit1980
Summary:
OpenBLAS recently added support for bfloat16 GEMM, so this change has PyTorch call out to OpenBLAS for that, as it already does for single and double precision.
Our goal is to enable PyTorch to make calls to "sbgemm" in OpenBLAS.
We are prepared (if it is your preference) to add fences to the code to limit this change to the Power architecture, but our first instinct is that anyone, on any architecture, whose OpenBLAS library enables access to sbgemm should be able to use this code. (As we are just starting to modify PyTorch, we defer to your guidance here!)
(there is no issue number related to this)
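For context, a compile-time probe of the interface this targets; it assumes an OpenBLAS built with bfloat16 support, where sbgemm takes bfloat16 inputs and accumulates into a float C matrix:

    include(CheckCSourceCompiles)
    set(CMAKE_REQUIRED_LIBRARIES "openblas")
    check_c_source_compiles("
      #include <cblas.h>
      int main(void) {
        bfloat16 a = 0, b = 0;  /* bfloat16 typedef comes via OpenBLAS's headers */
        float c = 0.0f;
        cblas_sbgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                     1, 1, 1, 1.0f, &a, 1, &b, 1, 0.0f, &c, 1);
        return 0;
      }" OPENBLAS_HAS_SBGEMM)
    set(CMAKE_REQUIRED_LIBRARIES "")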
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58831
Reviewed By: albanD
Differential Revision: D29951900
Pulled By: malfet
fbshipit-source-id: 3d0a4a638ac95b2ff2e9f6d08827772e28d397c3
Summary:
This PR provides build-system support for cross compiling on Jetson platforms.
The major change is:
1. Disable try-runs when cross compiling in the `COMPILER_WORKS`, `BLAS`, and `CUDA` checks, since a try-run cannot execute target binaries in a cross-compile setup (see the sketch below).
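A sketch of the guard pattern, with hypothetical variable and file names; when cross compiling, the try_run branch is skipped entirely and the result it would have produced is preseeded:

    if(CMAKE_CROSSCOMPILING)
      # The test binary cannot run on the build host; assume success.
      set(BLAS_TEST_RUN_RESULT 0 CACHE STRING "Assumed exit code when cross compiling")
    else()
      try_run(BLAS_TEST_RUN_RESULT BLAS_TEST_COMPILE_RESULT
              ${CMAKE_BINARY_DIR} ${CMAKE_CURRENT_LIST_DIR}/blas_test.c)
    endif()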
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59764
Reviewed By: soulitzer
Differential Revision: D29524363
Pulled By: malfet
fbshipit-source-id: f06d1ad30b704c9a17d77db686c65c0754db07b8
Summary:
Before this change, only a dynamically linked OpenBLAS compiled with OpenMP could be found.
Also get rid of the hardcoded path to libgfortran.a in FindLAPACK.cmake.
Only affects aarch64 Linux builds.
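A rough sketch of the kind of probe this enables, assuming a static libopenblas.a whose extra link dependencies (gfortran, OpenMP, libm) are resolved by the toolchain rather than through a hardcoded libgfortran.a path:

    include(CheckFunctionExists)
    find_package(OpenMP QUIET)
    set(CMAKE_REQUIRED_LIBRARIES "openblas;gfortran;m;${OpenMP_C_LIB_NAMES}")
    check_function_exists("cheev_" OPENBLAS_PROVIDES_LAPACK)
    set(CMAKE_REQUIRED_LIBRARIES "")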
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59428
Reviewed By: agolynski
Differential Revision: D28891314
Pulled By: malfet
fbshipit-source-id: 5af55a14c85ac66551ad2805c5716bbefe8d55b2
Summary:
These changes provide the user with an additional option to choose the DNNL+BLIS path for PyTorch.
This assumes BLIS has already been downloaded or built from source, with the library available at $BLIS_HOME/lib/libblis.so and the headers at $BLIS_HOME/include/blis/blis.h and $BLIS_HOME/include/blis/cblas.h.
Export the variables below to build PyTorch with MKLDNN+BLIS, then proceed with the regular installation procedure:
$export BLIS_HOME=path-to-BLIS
$export PATH=$BLIS_HOME/include/blis:$PATH LD_LIBRARY_PATH=$BLIS_HOME/lib:$LD_LIBRARY_PATH
$export BLAS=BLIS USE_MKLDNN_CBLAS=ON WITH_BLAS=blis
$python setup.py install
A CPU-only Dockerfile to build PyTorch with AMD BLIS is available at docker/cpu-blis/Dockerfile.
Example command line to build using the Dockerfile:
sudo DOCKER_BUILDKIT=1 docker build . -t docker-image-repo-name
Example command line to run the built docker container:
sudo docker run --name container-name -it docker-image-repo-name
Fixes #{issue number}
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54953
Reviewed By: glaringlee
Differential Revision: D27466799
Pulled By: malfet
fbshipit-source-id: e03bae9561be3a67429df3b1be95a79005c63050
Summary:
Fix an accidental regression introduced by https://github.com/pytorch/pytorch/issues/47940:
`FIND_PACKAGE(OpenBLAS)` does not validate that the discovered library can actually be used, while `check_fortran_libraries` does.
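For reference, check_fortran_libraries (FindBLAS.cmake's internal macro) validates the library by compiling and linking a call to the named routine rather than trusting package metadata; a sketch of the probe:

    # Accept openblas only if a test call to sgemm actually compiles and links
    check_fortran_libraries(
      BLAS_LIBRARIES
      BLAS
      sgemm
      ""
      "openblas")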
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53168
Test Plan: Build PyTorch with static OpenBLAS and check that `torch.svd(torch.ones(3, 3)).S` does not raise an exception.
Reviewed By: walterddr
Differential Revision: D26772345
Pulled By: malfet
fbshipit-source-id: 3e4675c176b30dfe4f0490d7d3dfe4f9a4037134
Summary:
The requested changes support building PyTorch 1.0 on the Jetson Xavier with OpenBLAS. The Jetson Xavier with JetPack 3.3 has a generic LAPACK installed. To pick up the CUDA-accelerated BLAS/LAPACK, I had to build OpenBLAS and build/link PyTorch from source; otherwise, I got a runtime error indicating the LAPACK routines were not CUDA enabled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15660
Differential Revision: D13571324
Pulled By: soumith
fbshipit-source-id: 9b148d081d6e7fa7e1824dfdd93283c67f69e683
Summary:
This is to fix a cmake-time compilation error.
When we changed the script to build Caffe2 with mkldnn, some cmake-time compilation support checks (such as libsleef's) failed due to an incorrect setting of CMAKE_REQUIRED_LIBRARIES. It is a global setting that can interfere with cmake compilation checks if it is not cleaned up properly. FindBLAS.cmake and FindLAPACK.cmake did not clean up this variable, causing libsleef.so to be built incorrectly.
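A minimal sketch of the save/restore discipline such a cleanup typically uses (variable names illustrative):

    # Save, set, use, and restore CMAKE_REQUIRED_LIBRARIES so later
    # checks (e.g. libsleef's) start from a clean state
    include(CheckFunctionExists)
    set(_saved_required_libraries ${CMAKE_REQUIRED_LIBRARIES})
    set(CMAKE_REQUIRED_LIBRARIES ${BLAS_LIBRARIES})
    check_function_exists("sgemm_" BLAS_LIBRARIES_WORK)
    set(CMAKE_REQUIRED_LIBRARIES ${_saved_required_libraries})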
yinghai gujinghui
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12195
Differential Revision: D10159314
Pulled By: yinghai
fbshipit-source-id: 04908738f7d005579605b9c2a58d54f035d3baf4