Commit Graph

143 Commits

Taylor Robie
022c929145 Revert "Revert D25199264: Enable callgrind collection for C++ snippets" (#48720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48720

This reverts commit 6646ff122d.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D25273994

Pulled By: malfet

fbshipit-source-id: 61743176dc650136622e1b8f2384bbfbd7a46294
2020-12-02 11:10:11 -08:00
Taylor Robie
07f038aa9d Add option for cpp_extensions to compile standalone executable (#47862)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47862

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25199265

Pulled By: robieta

fbshipit-source-id: eceb04dea60b82eb10434099639fa3afa61000ca
2020-12-01 20:03:08 -08:00
Nikita Shulga
8af9f2cc23 Revert D24924736: [pytorch][PR] Hipify revamp
Test Plan: revert-hammer

Differential Revision: D24924736 (10b490a3e0)

Original commit changeset: 4af42b8ff4f2

fbshipit-source-id: 7f8f90d55d8a69a2890ec73622fcea559189e381
2020-11-18 11:48:30 -08:00
Jithun Nair
10b490a3e0 Hipify revamp (#45451)
Summary:
This PR revamps the hipify module in PyTorch to overcome a long list of shortcomings in the original implementation. However, these improvements are applied only when using hipify to build PyTorch extensions, **not for PyTorch or Caffe2 itself**.

Correspondingly, changes are made to `cpp_extension.py` to match these improvements.

The list of improvements to hipify is as follows:

1. Hipify files in the same directory as the original file, unless there's a "cuda" subdirectory in the original file path, in which case the hipified file is placed in the corresponding path with a "hip" subdirectory instead of "cuda".
2. Never hipify a file in-place if changes are introduced by hipification, i.e., always ensure the hipified file either resides in a different folder or has a different filename than the original file.
3. Prevent re-hipification of already hipified files. This avoids creation of unnecessary "hip/hip" etc. subdirectories and additional files which have no actual use.
4. Do not write out hipified versions of files if they are identical to the original file. This results in a cleaner output directory, with minimal number of hipified files created.
5. Update header rewrite logic so that it accounts for the previous improvement.
6. Update header rewrite logic so it respects the rules for finding header files depending on whether `""` or `<>` is used.
7. Return a dictionary of mappings of original file paths to hipified file paths from `hipify` function.
8. Introduce a version for hipify module to allow extensions to contain back-compatible code that targets a specific point in PyTorch where the hipify functionality changed.
9. Update `cuda_to_hip_mappings.py` to account for the ROCm component subdirectories inside `/opt/rocm/include`. This also results in cleanup of the `Caffe2_HIP_INCLUDE` path to remove unnecessary additions to the include path.

The list of changes to `cpp_extension.py` is as follows:
1. Call `hipify` when building a CUDAExtension for ROCm.
2. Prune the list of source files passed to CUDAExtension to include only the hipified versions of any source files in the list (if both the original and hipified versions of a source file are in the list).
3. Add subdirectories of /opt/rocm/include to the include path for extensions, so that ROCm headers for subcomponent libraries are found automatically (see the sketch below).
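To make the `cpp_extension.py` changes concrete, here is a minimal, hypothetical `setup.py` sketch for an extension that would go through this path on ROCm (the package name and source paths are made up; on CUDA builds the same script works unchanged):

```
# Hypothetical setup.py; on ROCm, CUDAExtension hipifies the listed sources and
# builds the hipified versions, while /opt/rocm/include subdirectories are added
# to the include path automatically.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="my_rocm_ext",  # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="my_rocm_ext._C",
            sources=["csrc/ops.cpp", "csrc/cuda/ops_kernel.cu"],  # hypothetical paths
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```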

cc jeffdaily sunway513 hgaspar lcskrishna ashishfarmer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45451

Reviewed By: ezyang

Differential Revision: D24924736

Pulled By: malfet

fbshipit-source-id: 4af42b8ff4f21c3782dedb8719b8f9f86b34bd2d
2020-11-18 08:37:49 -08:00
Chester Liu
17a6bc7c1b Cleanup unused code for Python < 3.6 (#47822)
Summary:
I think these can be safely removed since the minimum supported Python version is now 3.6.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47822

Reviewed By: smessmer

Differential Revision: D24954936

Pulled By: ezyang

fbshipit-source-id: 5d4b2aeb78fc97d7ee4abaf5fb2aae21bf765e8b
2020-11-13 21:37:01 -08:00
peter
d73a8db2d2 Use local env for building CUDA extensions on Windows (#47150)
Summary:
Fixes https://github.com/pytorch/vision/pull/2818#issuecomment-719167504
After activating the VC env multiple times, the following error will be raised when building a CUDA extension.
```
FAILED: C:/tools/MINICO~1/CONDA-~2/TORCHV~1/work/build/temp.win-amd64-3.8/Release/tools/MINICO~1/CONDA-~2/TORCHV~1/work/torchvision/csrc/cuda/PSROIAlign_cuda.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\nvcc -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -DWITH_CUDA -Dtorchvision_EXPORTS -IC:\tools\MINICO~1\CONDA-~2\TORCHV~1\work\torchvision\csrc -I%PREFIX%\lib\site-packages\torch\include -I%PREFIX%\lib\site-packages\torch\include\torch\csrc\api\include -I%PREFIX%\lib\site-packages\torch\include\TH -I%PREFIX%\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" -I%PREFIX%\include -I%PREFIX%\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -I%PREFIX%\Library\include -c C:\tools\MINICO~1\CONDA-~2\TORCHV~1\work\torchvision\csrc\cuda\PSROIAlign_cuda.cu -o C:\tools\MINICO~1\CONDA-~2\TORCHV~1\work\build\temp.win-amd64-3.8\Release\tools\MINICO~1\CONDA-~2\TORCHV~1\work\torchvision\csrc\cuda\PSROIAlign_cuda.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_50,code=compute_50 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
'cl.exe' is not recognized as an internal or external command,
operable program or batch file.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47150

Reviewed By: agolynski

Differential Revision: D24706019

Pulled By: ezyang

fbshipit-source-id: c13dc29f62d2d12d6a56f33dd450b467a1bf193b
2020-11-10 20:02:06 -08:00
Yuxin Wu
5cba3cec5a fix extensions build flags on newer GPUs (#47585)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/47352

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47585

Reviewed By: heitorschueroff

Differential Revision: D24833654

Pulled By: ezyang

fbshipit-source-id: eaec5b8db5f35cac0a74d2858cb054a3853b0990
2020-11-10 11:38:18 -08:00
Simon Geisler
abae12ba41 only set ccbin flag if not provided by user (#47404)
Summary:
Avoid an nvcc error if the user specifies a C compiler (as pointed out in https://github.com/pytorch/pytorch/issues/47377)
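For illustration, a hypothetical extension that pins its own host compiler for nvcc might do so via `extra_compile_args`; with this change, cpp_extension should not append a second `-ccbin` on top of it:

```
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="ccbin_example",  # hypothetical
    ext_modules=[
        CUDAExtension(
            name="ccbin_example._C",
            sources=["ext.cpp", "ext_kernel.cu"],  # hypothetical sources
            extra_compile_args={
                "cxx": [],
                # The user picks the host compiler explicitly; cpp_extension
                # should then skip adding its own -ccbin flag.
                "nvcc": ["-ccbin=/usr/bin/gcc-9"],
            },
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```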

Fixes https://github.com/pytorch/pytorch/issues/47377

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47404

Reviewed By: ejguan

Differential Revision: D24748833

Pulled By: malfet

fbshipit-source-id: 1a4ad1f851c8854795f7f98e28f479a0ff458a00
2020-11-10 07:55:57 -08:00
Nikita Shulga
2b6a720eb1 Update pybind to 2.6.0 (#46415)
Summary:
Preserve PYBIND11 configuration options in `torch._C._PYBIND11_COMPILER_TYPE` and use them when building extensions

Also, use f-strings in `torch.utils.cpp_extension`

"Fixes" https://github.com/pytorch/pytorch/issues/46367

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46415

Reviewed By: VitalyFedyunin

Differential Revision: D24605949

Pulled By: malfet

fbshipit-source-id: 87340f2ed5308266a46ef8f0317316227dab9d4d
2020-10-29 10:53:47 -07:00
Nikita Shulga
42a51148c1 Use f-strings in torch.utils.cpp_extension (#47025)
Summary:
Plus two minor fixes to `torch/csrc/Module.cpp`:
 - Use iterator of type `Py_ssize_t` for array indexing in `THPModule_initNames`
 - Fix clang-tidy warning of unneeded defaultGenerator copy by capturing it as `const auto&`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47025

Reviewed By: samestep

Differential Revision: D24605907

Pulled By: malfet

fbshipit-source-id: c276567d320758fa8b6f4bd64ff46d2ea5d40eff
2020-10-28 21:32:33 -07:00
Guilherme Leobas
789e935304 Annotate torch.nn.cpp (#46490)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46489

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46490

Reviewed By: zhangguanheng66

Differential Revision: D24509519

Pulled By: ezyang

fbshipit-source-id: edffd32ab2ac17ae4bbd44826b71f5cb9f1da1c5
2020-10-23 17:40:32 -07:00
Jithun Nair
65da50c099 Apply hip vs hipcc compilation flags correctly for building extensions (#46273)
Summary:
Fixes issues when building certain PyTorch extensions where the cpp files do NOT compile if flags such as `__HIP_NO_HALF_CONVERSIONS__` are defined.
cc jeffdaily

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46273

Reviewed By: zou3519

Differential Revision: D24422463

Pulled By: ezyang

fbshipit-source-id: 7a43d1f7d59c95589963532ef3bd3c68cb8262be
2020-10-21 11:40:40 -07:00
Alexander Grund
5b0f400488 Replace list(map(...)) constructs by list comprehensions (#46461)
Summary:
As discussed in https://github.com/pytorch/pytorch/issues/46392 this makes the code more readable and possibly more performant.

It also fixes a bug found in the process, where the argument order of `map` was confused: 030a24906e (diff-5bb26bd3a23ee3bb540aeadcc0385df2a4e48de39f87ed9ea76b21990738fe98L1537-R1537)

Fixes https://github.com/pytorch/pytorch/issues/46392
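A small, generic example of the kind of rewrite (names are illustrative, not taken from the patch):

```
paths = ["a.cpp", "b.cu", "c.cpp"]

# Before: map() plus a lambda hides the intent and makes it easy to confuse the
# argument order of map(function, iterable).
objects = list(map(lambda p: p.rsplit(".", 1)[0] + ".o", paths))

# After: the list comprehension states the transformation directly.
objects = [p.rsplit(".", 1)[0] + ".o" for p in paths]
assert objects == ["a.o", "b.o", "c.o"]
```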

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46461

Reviewed By: ailzhang

Differential Revision: D24367015

Pulled By: ezyang

fbshipit-source-id: d55a67933cc22346b00544c9671f09982ad920e7
2020-10-19 18:42:49 -07:00
Alexandre Saint
c734961e26 [cpp-extensions] Ensure default extra_compile_args (#45956)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45835

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45956

Reviewed By: ngimel

Differential Revision: D24162289

Pulled By: albanD

fbshipit-source-id: 9ba2ad51e818864f6743270212ed94d86457f4e6
2020-10-09 07:33:28 -07:00
Xiang Gao
2fa062002e CUDA BFloat16 infrastructure (#44925)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44925

Reviewed By: agolynski

Differential Revision: D23783910

Pulled By: ngimel

fbshipit-source-id: dacac2ad87d58056bdc68bfe0b7ab1de5c2af0d8
2020-10-02 16:21:30 -07:00
Xiang Gao
0a15646e15 CUDA RTX30 series support (#45489)
Summary:
I also opened a PR on cmake upstream: https://gitlab.kitware.com/cmake/cmake/-/merge_requests/5292

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45489

Reviewed By: zhangguanheng66

Differential Revision: D23997844

Pulled By: ezyang

fbshipit-source-id: 4e7443dde9e70632ee429184f0d51cb9aa5a98b5
2020-09-29 18:19:23 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Nikita Shulga
4134b7abfa Pass CC env variable as ccbin argument to nvcc (#43931)
Summary:
This is the common behavior when one builds PyTorch (or any other CUDA project) using CMake, so it should hold true for Torch CUDA extensions as well.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43931

Reviewed By: ezyang, seemethere

Differential Revision: D23441793

Pulled By: malfet

fbshipit-source-id: 1af392107a94840331014fda970ef640dc094ae4
2020-09-01 17:26:08 -07:00
Akihiro Nitta
f17d7a5556 Fix exception chaining in torch/ (#43836)
Summary:
## Motivation
Fixes https://github.com/pytorch/pytorch/issues/43770.

## Description of the change
This PR fixes exception chaining only in files under `torch/` where appropriate.
To fix exception chaining, I used either:
1. `raise new_exception from old_exception` where `new_exception` itself seems not descriptive enough to debug or `old_exception` delivers valuable information.
2. `raise new_exception from None` where raising both of `new_exception` and `old_exception` seems a bit noisy and redundant.
I subjectively chose which one to use from the above options.
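
A generic illustration of the two forms (not code from the patch):

```
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError as e:
        # Option 1: chain explicitly so the original error stays visible as context.
        raise RuntimeError(f"could not load config from {path}") from e


def parse_port(text):
    try:
        return int(text)
    except ValueError:
        # Option 2: suppress the original traceback when it adds only noise.
        raise ValueError(f"expected an integer port, got {text!r}") from None
```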

## List of lines containing raise in except clause:
I wrote [this simple script](https://gist.github.com/akihironitta/4223c1b32404b36c1b349d70c4c93b4d) using [ast](https://docs.python.org/3.8/library/ast.html#module-ast) to list lines where `raise`ing in `except` clause.

- [x] 000739c31a/torch/jit/annotations.py (L35)
- [x] 000739c31a/torch/jit/annotations.py (L150)
- [x] 000739c31a/torch/jit/annotations.py (L158)
- [x] 000739c31a/torch/jit/annotations.py (L231)
- [x] 000739c31a/torch/jit/_trace.py (L432)
- [x] 000739c31a/torch/nn/utils/prune.py (L192)
- [x] 000739c31a/torch/cuda/nvtx.py (L7)
- [x] 000739c31a/torch/utils/cpp_extension.py (L1537)
- [x] 000739c31a/torch/utils/tensorboard/_pytorch_graph.py (L292)
- [x] 000739c31a/torch/utils/data/dataloader.py (L835)
- [x] 000739c31a/torch/utils/data/dataloader.py (L849)
- [x] 000739c31a/torch/utils/data/dataloader.py (L856)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L186)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L189)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L424)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1279)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1283)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1356)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1388)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1391)
- [ ] 000739c31a/torch/testing/_internal/common_utils.py (L1412)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L310)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L329)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L332)
- [x] 000739c31a/torch/testing/_internal/jit_utils.py (L183)
- [x] 000739c31a/torch/testing/_internal/common_nn.py (L4789)
- [x] 000739c31a/torch/onnx/utils.py (L367)
- [x] 000739c31a/torch/onnx/utils.py (L659)
- [x] 000739c31a/torch/onnx/utils.py (L892)
- [x] 000739c31a/torch/onnx/utils.py (L897)
- [x] 000739c31a/torch/serialization.py (L108)
- [x] 000739c31a/torch/serialization.py (L754)
- [x] 000739c31a/torch/distributed/rpc/_testing/faulty_agent_backend_registry.py (L76)
- [x] 000739c31a/torch/distributed/rpc/backend_registry.py (L260)
- [x] 000739c31a/torch/distributed/distributed_c10d.py (L184)
- [x] 000739c31a/torch/_utils_internal.py (L57)
- [x] 000739c31a/torch/hub.py (L494)
- [x] 000739c31a/torch/contrib/_tensorboard_vis.py (L16)
- [x] 000739c31a/torch/distributions/lowrank_multivariate_normal.py (L100)
- [x] 000739c31a/torch/distributions/constraint_registry.py (L142)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43836

Reviewed By: ailzhang

Differential Revision: D23431212

Pulled By: malfet

fbshipit-source-id: 5f7f41b391164a5ad0efc06e55cd58c23408a921
2020-08-31 20:26:23 -07:00
Nikita Shulga
6753157c5a Enable torch.utils typechecks (#42960)
Summary:
Fix typos in torch.utils/_benchmark/README.md
Add empty __init__.py to examples folder to make example invocations from README.md correct
Fixed uniform distribution logic generation when minval and maxval are None

Fixes https://github.com/pytorch/pytorch/issues/42984

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42960

Reviewed By: seemethere

Differential Revision: D23095399

Pulled By: malfet

fbshipit-source-id: 0546ce7299b157d9a1f8634340024b10c4b7e7de
2020-08-13 15:24:56 -07:00
Ralf Gommers
bcab2d6848 Add type annotations for cpp_extension, utils.data, signal_handling (#42647)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42647

Reviewed By: ezyang

Differential Revision: D22967041

Pulled By: malfet

fbshipit-source-id: 35e124da0be56934faef56834a93b2b400decf66
2020-08-06 09:42:07 -07:00
Thomas Viehmann
0f78e596ba ROCm: Fix linking of custom ops in load_inline (#41257)
Summary:
Previously we did not link against amdhip64 (roughly equivalent to cudart). Apparently, the recent RTLD_GLOBAL fixes prevent the extensions from finding the symbols needed for launching kernels.
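
For context, a minimal `load_inline` call of the kind affected looks roughly like this (the function body is a trivial placeholder rather than a real kernel launch; on ROCm the CUDA source is hipified and the resulting module needs to link against the HIP runtime):

```
from torch.utils.cpp_extension import load_inline

cpp_source = "at::Tensor add_one(at::Tensor x);"  # declaration, bound below
cuda_source = """
at::Tensor add_one(at::Tensor x) {
  // Placeholder body; a real extension would launch a custom kernel here.
  return x + 1;
}
"""

module = load_inline(
    name="inline_ext",          # hypothetical module name
    cpp_sources=cpp_source,
    cuda_sources=cuda_source,   # compiled by nvcc, or hipified and built on ROCm
    functions=["add_one"],      # generate pybind11 bindings for this function
)
```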

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41257

Reviewed By: zou3519

Differential Revision: D22573288

Pulled By: ezyang

fbshipit-source-id: 89f9329b2097df26785e2f67e236d60984d40fdd
2020-07-17 12:14:50 -07:00
Edward Yang
22c7d183f7 If ninja is being used, force build_ext to run. (#40837)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40837

As ninja has accurate dependency tracking, if there is nothing to do,
then we will very quickly noop.  But this is important for correctness:
if a change was made to a header that is not listed explicitly in
the distutils Extension, then distutils will come to the wrong
conclusion about whether or not recompilation is needed (but Ninja
will work it out.)

This caused https://github.com/pytorch/vision/issues/2367

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D22340930

Pulled By: ezyang

fbshipit-source-id: 481b74f6e2cc78159d2a74d413751cf7cf16f592
2020-07-07 09:49:31 -07:00
Pavel Belevich
95e51bb7f8 change BuildExtension.with_options to return a class not a c-tor (#40121)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40121
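
For context on the API being touched, `BuildExtension.with_options(...)` pre-configures the `build_ext` command class; a hypothetical usage sketch:

```
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="opts_example",  # hypothetical
    ext_modules=[CppExtension("opts_example._C", ["ext.cpp"])],  # hypothetical source
    # with_options returns a BuildExtension subclass preconfigured with these
    # keyword arguments, so it drops straight into cmdclass.
    cmdclass={"build_ext": BuildExtension.with_options(use_ninja=False)},
)
```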

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D22076634

Pulled By: pbelevich

fbshipit-source-id: a89740baf75208065e418d7f972eeb52db9ee3cf
2020-06-17 12:09:09 -07:00
lixinyu
7cb4eae8b1 correct some cpp extension code usages and documents (#39766)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39766

Test Plan: Imported from OSS

Differential Revision: D21967284

Pulled By: glaringlee

fbshipit-source-id: 8597916bee247cb5f8c82ed8297119d2f3a72170
2020-06-10 08:31:22 -07:00
Xiang Gao
b3fac8af6b Initial support for building on Ampere GPU, CUDA 11, cuDNN 8 (#39277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39277

This PR contains initial changes that makes PyTorch build with Ampere GPU, CUDA 11, and cuDNN 8.
TF32 related features will not be included in this PR.

Test Plan: Imported from OSS

Differential Revision: D21832814

Pulled By: malfet

fbshipit-source-id: 37f9c6827e0c26ae3e303580f666584230832d06
2020-06-02 10:03:42 -07:00
ashishfarmer
53b55d8f38 Use ninja build as default for HIPExtensions (#38939)
Summary:
This PR adds the following changes:
1. It sets the default extension build to use ninja
2. Adds HIPCC flags to the host code compile string for ninja builds. This is needed when host code makes HIP API calls

cc: ezyang jeffdaily
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38939

Differential Revision: D21721905

Pulled By: ezyang

fbshipit-source-id: 75206838315a79850ecf86a78391a31ba5ee97cb
2020-05-27 11:35:19 -07:00
Yuxin Wu
0e2a0478af Support paths with spaces when building ninja extension (#38670)
Summary:
Generate the following `build.ninja` file and can successfully build:
```
cflags = -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA '-I/scratch/yuxinwu/space space/detectron2/layers/csrc' -I/private/home/yuxinwu/miniconda3/lib/python3.7
/site-packages/torch/include -I/private/home/yuxinwu/miniconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/private/home/yuxinwu/miniconda3/lib/python3.7/site-packages/torc
h/include/TH -I/private/home/yuxinwu/miniconda3/lib/python3.7/site-packages/torch/include/THC -I/public/apps/cuda/10.1/include -I/private/home/yuxinwu/miniconda3/include/python3.7m -c
post_cflags = -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cuda_cflags = -DWITH_CUDA '-I/scratch/yuxinwu/space space/detectron2/layers/csrc' -I/private/home/yuxinwu/miniconda3/lib/python3.7/site-packages/torch/include -I/private/home/yuxinwu/miniconda3/li
b/python3.7/site-packages/torch/include/torch/csrc/api/include -I/private/home/yuxinwu/miniconda3/lib/python3.7/site-packages/torch/include/TH -I/private/home/yuxinwu/miniconda3/lib/python3.7/site
-packages/torch/include/THC -I/public/apps/cuda/10.1/include -I/private/home/yuxinwu/miniconda3/include/python3.7m -c
cuda_post_cflags = -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_
OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -ccbin=/public/apps/gcc/7.1.0/bin/gcc -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
-gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -std=c++14
ldflags =

rule compile
  command = $cxx -MMD -MF $out.d $cflags -c $in -o $out $post_cflags
  depfile = $out.d
  deps = gcc

rule cuda_compile
  command = $nvcc $cuda_cflags -c $in -o $out $cuda_post_cflags

build /scratch/yuxinwu/space$ space/build/temp.linux-x86_64-3.7/scratch/yuxinwu/space$ space/detectron2/layers/csrc/vision.o: compile /scratch/yuxinwu/space$ space/detectron2/layers/csrc/vision.c$
p
build /scratch/yuxinwu/space$ space/build/temp.linux-x86_64-3.7/scratch/yuxinwu/space$ space/detectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.o: compile /scratch/yuxinwu/space$ space/de$
ectron2/layers/csrc/box_iou_rotated/box_iou_rotated_cpu.cpp
build /scratch/yuxinwu/space$ space/build/temp.linux-x86_64-3.7/scratch/yuxinwu/space$ space/detectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.o: compile /scratch/yuxinwu/space$ space/de$
ectron2/layers/csrc/ROIAlignRotated/ROIAlignRotated_cpu.cpp
build /scratch/yuxinwu/space$ space/build/temp.linux-x86_64-3.7/scratch/yuxinwu/space$ space/detectron2/layers/csrc/nms_rotated/nms_rotated_cpu.o: compile /scratch/yuxinwu/space$ space/detectron2$
layers/csrc/nms_rotated/nms_rotated_cpu.cpp
build /scratch/yuxinwu/space$ space/build/temp.linux-x86_64-3.7/scratch/yuxinwu/space$ space/detectron2/layers/csrc/ROIAlign/ROIAlign_cpu.o: compile /scratch/yuxinwu/space$ space/detectron2/layer$
/csrc/ROIAlign/ROIAlign_cpu.cpp

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38670

Differential Revision: D21689613

Pulled By: ppwwyyxx

fbshipit-source-id: 1f71b12433e18f6b0c6aad5e1b390b4438654563
2020-05-21 14:57:40 -07:00
peter
a40049fd2a Better handling for msvc env when compiling cpp extensions (#38862)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/38861#issuecomment-631934636.
1. Error out if msvc env is activated but `DISTUTILS_USE_SDK` is not set.
2. Attempt to activate msvc env before running ninja build
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38862

Differential Revision: D21686343

Pulled By: ezyang

fbshipit-source-id: 38b366654e2d0376dbdd21276689772b78e9718e
2020-05-21 12:52:22 -07:00
peter
4e46c95826 Fix cpp extension build failure if path contains space (#38860)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38860

Differential Revision: D21686335

Pulled By: ezyang

fbshipit-source-id: 2675f4f70b48ae3b58ea597a2b584b446d03c704
2020-05-21 12:36:27 -07:00
lixinyu
5a979fcb99 allow user passing relative paths in include_dirs within setuptools.setup (#38264)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38264

Test Plan: Imported from OSS

Differential Revision: D21509277

Pulled By: glaringlee

fbshipit-source-id: b0bc17d375a89b96b1bdacde5987b4f4baa9468e
2020-05-13 20:00:12 -07:00
ashish
5a386a0a78 Fix ldflags string for HIPExtensions (#38047)
Summary:
This pull request adds a check for the ROCm environment and skips adding CUDA-specific flags when a PyTorch extension is built on ROCm.

ezyang jeffdaily
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38047

Differential Revision: D21470507

Pulled By: ezyang

fbshipit-source-id: 5af2d7235e306c7aa9a5f7fc8760025417383069
2020-05-07 20:39:01 -07:00
ashishfarmer
402f635bbe Enable ahead of time compilation for HIPExtensions using ninja (#37800)
Summary:
This pull request enables ahead-of-time compilation of HIPExtensions with ninja by setting the appropriate compilation flags for the ROCm environment. It also enables the cuda_extensions unit test on ROCm and removes the test for ahead-of-time compilation of extensions with ninja from ROCM_BLACKLIST.

ezyang jeffdaily
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37800

Differential Revision: D21408148

Pulled By: soumith

fbshipit-source-id: 146f4ffb3418f3534e6ce86805d3fe9c3eae84e1
2020-05-05 20:53:35 -07:00
peter
7c4bda7e6f Eliminate warnings for cpp extensions on Windows (#37400)
Summary:
Improve the readability of the logs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37400

Differential Revision: D21302597

Pulled By: ezyang

fbshipit-source-id: b8cbd33f95b6839ad4c6930bed8750c9b5a2ef7a
2020-04-30 20:28:03 -07:00
SsnL
13013848d5 Fix cpp_ext build dir create permission (#34239)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/34238
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34239

Differential Revision: D21328036

Pulled By: soumith

fbshipit-source-id: dac2735383b1a689139af5a23f61ccbebd1fd6c1
2020-04-30 11:30:07 -07:00
Lukas Koestler
0048243f70 Check compiler -v to determine compiler (fix #33701) (#37293)
Summary:
As described in the issue (https://github.com/pytorch/pytorch/issues/33701), the compiler check for building cpp extensions does not work with ccache. In that case we run `compiler -v` to determine which compiler is actually being used and check that instead.
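A rough sketch of the idea (not the actual patch): when the configured compiler is a ccache wrapper, ask it for its version banner and inspect that instead:

```
import subprocess

def underlying_compiler(compiler: str = "c++") -> str:
    # ccache forwards -v to the real compiler, so the version banner tells us
    # whether gcc or clang is actually doing the compilation. Illustrative only;
    # the real check in cpp_extension differs in detail.
    proc = subprocess.run([compiler, "-v"], capture_output=True, text=True)
    banner = (proc.stdout + proc.stderr).lower()
    if "clang" in banner:
        return "clang"
    if "gcc version" in banner:
        return "gcc"
    return "unknown"

print(underlying_compiler())
```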
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37293

Differential Revision: D21256913

Pulled By: ezyang

fbshipit-source-id: 5483a10cc2dbcff98a7f069ea9dbc0c12b6502dc
2020-04-27 10:49:04 -07:00
David Reiss
e75fb4356b Remove (most) Python 2 support from Python code (#35615)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35615

Python 2 has reached end-of-life and is no longer supported by PyTorch.
Now we can clean up a lot of cruft that we put in place to support it.
These changes were all done manually, and I skipped anything that seemed
like it would take more than a few seconds, so I think it makes sense to
review it manually as well (though using side-by-side view and ignoring
whitespace change might be helpful).

Test Plan: CI

Differential Revision: D20842886

Pulled By: dreiss

fbshipit-source-id: 8cad4e87c45895e7ce3938a88e61157a79504aed
2020-04-22 09:23:14 -07:00
Thomas Viehmann
d070c0bcf0 ROCm: enable cpp_extensions.load/load_inline (#35897)
Summary:
This enables cpp_extensions.load/load_inline. This works by hipify-ing cuda sources.
Also enable tests.
CuDNN/MIOpen extensions aren't yet supported; I propose not to address them in this PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35897

Differential Revision: D20983279

Pulled By: ezyang

fbshipit-source-id: a5d0f5ac592d04488a6a46522c58e2ee0a6fd57c
2020-04-13 11:44:08 -07:00
lizz
5d1205bf02 Suppress output when checking hipcc (#35789)
Summary:
Otherwise, it will print a message when hipcc is not found.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35789

Differential Revision: D20793089

Pulled By: ezyang

fbshipit-source-id: 4b3cb29fb1d74a1931603ee01e669013ccae9685
2020-04-01 13:03:21 -07:00
hainq
a0dc36e501 [Windows] Fix torch_cuda's forced link (#35659)
Summary:
The current config on `master` yields the following errors when building from source on Windows with CMake and Visual Studio 2019.
```
Severity	Code	Description	Project	File	Line	Suppression State
Error	LNK2001	unresolved external symbol \?warp_size@cuda@at@YAHXZ\	torch	D:\AI\pytorch\build_libtorch\caffe2\LINK	1
Severity	Code	Description	Project	File	Line	Suppression State
Error	LNK1120	1 unresolved externals	torch	D:\AI\pytorch\build_libtorch\bin\Release\torch.dll	1
Severity	Code	Description	Project	File	Line	Suppression State
Error	LNK2001	unresolved external symbol \?warp_size@cuda@at@YAHXZ\	caffe2_observers	D:\AI\pytorch\build_libtorch\modules\observers\LINK	1
Severity	Code	Description	Project	File	Line	Suppression State
Error	LNK1120	1 unresolved externals	caffe2_observers	D:\AI\pytorch\build_libtorch\bin\Release\caffe2_observers.dll	1
Severity	Code	Description	Project	File	Line	Suppression State
Error	LNK2001	unresolved external symbol \?warp_size@cuda@at@YAHXZ\	caffe2_detectron_ops_gpu	D:\AI\pytorch\build_libtorch\modules\detectron\LINK	1
Severity	Code	Description	Project	File	Line	Suppression State
Error	LNK1120	1 unresolved externals	caffe2_detectron_ops_gpu	D:\AI\pytorch\build_libtorch\bin\Release\caffe2_detectron_ops_gpu.dll	1
```

This change at least fixes the above errors in that specific setting. Do you think it makes sense to get this merged or will it break other settings?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35659

Differential Revision: D20735907

Pulled By: ezyang

fbshipit-source-id: eb8fa1e69aaaa5af2da3a76963ddc910bb716479
2020-03-30 13:59:31 -07:00
Nikita Shulga
0f0a5b11b8 Disable C4251 when compiling cpp_extensions on Windows (#35272)
Summary:
Otherwise, VC++ will emit a warning for every exposed C++ symbol, for example:
```
include\c10/core/impl/LocalDispatchKeySet.h(53): warning C4251: 'c10::impl::LocalDispatchKeySet::included_': class 'c10::DispatchKeySet' needs to have dll-interface to be used by clients of struct 'c10::impl::LocalDispatchKeySet'
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35272

Test Plan: CI

Differential Revision: D20623005

Pulled By: malfet

fbshipit-source-id: b635b674159bb9654e4e1a1af4394c4f36fe35bd
2020-03-24 11:08:28 -07:00
peterjc123
9e6cd98c3f Ensure torch_cuda is linked against on Windows (#34288)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/31611.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34288

Differential Revision: D20314251

Pulled By: seemethere

fbshipit-source-id: 15ab2d4de665d553a1622a2d366148697deb6c02
2020-03-12 12:16:44 -07:00
Yuxin Wu
20b18a58f1 Update compiler warning about ABI compatibility (#34472)
Summary:
3ac4267763 already forces pytorch to use gcc>=5 everywhere
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34472

Differential Revision: D20345134

Pulled By: ezyang

fbshipit-source-id: 3ce706405e8784cac5c314500466b5f988ad31bf
2020-03-10 08:12:07 -07:00
ashish
616beb1412 [ROCm] Added support for pytorch extensions to use HIP (#32669)
Summary:
This pull request has changes for:
1. Enabling a torch module with HIP code to be compiled by cpp_extensions.py
2. Fixes to the hipify module so that it can be used by a torch extension

cc: ezyang iotamudelta jeffdaily
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32669

Differential Revision: D20033893

Pulled By: zou3519

fbshipit-source-id: fd6ddc8cdcd3930f41008636bb2bc9dd26cdb008
2020-02-21 12:10:02 -08:00
peter
ffe327f7d9 Revert "Disable flaky test TestCppExtensionAOT.test_cuda_extension in… (#33404)
Summary:
… Windows CI (https://github.com/pytorch/pytorch/issues/33282)"

This reverts commit 5b922918d0.

Fixes https://github.com/pytorch/pytorch/issues/33270.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33404

Differential Revision: D19972594

Pulled By: ezyang

fbshipit-source-id: c8f67536fd6e4b7135171d621ad671b1b2a21fd4
2020-02-20 09:08:29 -08:00
Peter Bell
44af8ee6cd Add pybind11 exception translator (#30588)
Summary:
Closes https://github.com/pytorch/pytorch/issues/30027

The idea here is that you can bind a function with `pybind11` in a single line and without modifying the function:
```cpp
m.def("foo", foo, py::call_guard<torch::PyWarningHandler>());
```
Where warnings are handled by the [`call_guard`](https://pybind11.readthedocs.io/en/stable/advanced/functions.html#call-guard) and exceptions are handled by the `pybind11` exception translator. To do this, I have added support for handling C++ exceptions in `torch::PyWarningHandler`'s destructor without setting the Python error state beforehand.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30588

Differential Revision: D19905626

Pulled By: albanD

fbshipit-source-id: 90c0a5e298b123cc0c8ab9c52c91be4e96ea47c6
2020-02-18 11:33:29 -08:00
Richard Zou
28c5213a97 Add mechanism to pass a number of workers to cpp extensions (#33346)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33346

Fixes #33091

This PR lets users control the number of workers that cpp extensions
uses through the environment variable `MAX_JOBS`. If the environment
variable is a non-negative integer we use that many threads; otherwise,
ninja falls back to the default.

I chose to use the name `MAX_JOBS` because we use it in PyTorch already
to control the number of workers PyTorch builds with. There is a risk
that users of cpp extensions already have `MAX_JOBS` set but we are
hoping that that risk is small and/or it means semantically the same
thing.
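
A usage sketch (the environment variable name comes from the summary above; the extension name and source file are hypothetical):

```
import os
from torch.utils.cpp_extension import load

# Cap the number of parallel ninja compile jobs for this build; a non-negative
# integer is honored, anything else falls back to ninja's default.
os.environ["MAX_JOBS"] = "4"

ext = load(
    name="my_jit_ext",    # hypothetical extension name
    sources=["ext.cpp"],  # hypothetical source file
    verbose=True,
)
```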

Test Plan: - tested locally

Differential Revision: D19911645

Pulled By: zou3519

fbshipit-source-id: d20ed42de4f845499ed38f1a1c73e9ccb620f780
2020-02-18 06:48:11 -08:00
peter
769abddfa3 Build ahead-of-time C++ extensions with ninja on windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33084

Differential Revision: D19817361

Pulled By: ezyang

fbshipit-source-id: 95a6d0ffa9beb6885c8a41688621b33da51706ae
2020-02-11 17:50:09 -08:00
Richard Zou
6209412647 Add option to use ninja to compile ahead-of-time cpp_extensions (#32495)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32495

Background
------------------------------
Previously, ninja was used to compile+link inline cpp_extensions and
ahead-of-time cpp_extensions were compiled with distutils. This PR adds
the ability to compile (but not link) ahead-of-time cpp_extensions with ninja.

The main motivation for this is to speed up cpp_extension builds: distutils
does not make use of parallelism. With this PR, using the new option, on my machine,
- torchvision compilation goes from 3m43s to 49s
- nestedtensor compilation goes from 2m0s to 28s.

User-facing changes
------------------------------

I added a `use_ninja` flag to BuildExtension (see the sketch after this list). This defaults to `True`. When `use_ninja` is True:
- it will attempt to use ninja.
- If we cannot use ninja, then this throws a warning and falls back to
distutils.
- Situations we cannot use ninja: Windows (NYI, I'll open a new issue
for this), if ninja cannot be found on the system.
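
A minimal sketch of the user-facing flag described in the list above (package name and source are hypothetical):

```
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="aot_example",  # hypothetical
    ext_modules=[CppExtension("aot_example._C", ["ext.cpp"])],  # hypothetical source
    # use_ninja defaults to True; objects are compiled in parallel with ninja,
    # and the build warns and falls back to distutils if ninja is unavailable.
    cmdclass={"build_ext": BuildExtension.with_options(use_ninja=True)},
)
```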

Implementation Details
------------------------------

This PR makes this change in two steps. Please let me know if it would be
easier to review if I split this up into a stacked diff.
Those changes are:
1) refactor _write_ninja_file to separate the policy (what compiler flags
to pass) from the mechanism (how to write the ninja file and do compilation).
2) call _write_ninja_file and _run_ninja_build while building
ahead-of-time cpp_extensions. These are only used to compile objects;
distutils still handles the linking.

Change 1: refactor _write_ninja_file to separate policy from mechanism
- I split _write_ninja_file into: _write_ninja_file and
_write_ninja_file_to_build_library
- I renamed _build_extension_module to _run_ninja_build

Change 2: Call _write_ninja_file while building ahead-of-time
cpp_extensions
- _write_ninja_file_and_compile_objects calls _write_ninja_file to only
build object files.
- We monkey-patch distutils.CCompiler.compile to call
_write_ninja_files_and_compile_objects
- distutils still handles the linking step. The linking step is not a
bottleneck so it was not a concern.
- This change only works on unix-based systems. Our code for windows
goes down a different codepath and I did not want to mess with that.
- If a system does not support ninja, we raise a warning and fall back
to the original compilation path.

Test Plan
------------------------------

Adhoc testing
- I built torchvision using pytorch master and printed out the build
commands. Next, I used this branch to build torchvision and looked at
the ninja file. I compared the ninja file with the build commands and
asserted that they were functionally the same.
- I repeated the above for pytorch/nestedtensor.

PyTorch test suite
- I split `test_cpp_extensions` into `test_cpp_extensions_aot` and
`test_cpp_extensions_jit`. The AOT (ahead-of-time) version tests
ahead-of-time and the JIT version tests just-in-time (not to be confused
with TorchScript)
- `test_cpp_extensions_aot` gets run TWICE by run_test.py, once with
a module that was built with ninja, and once with a module that was
built without ninja.
- run_test.py asserts that when we are building with use_ninja=True,
ninja is actually available on the system.

Test Plan: Imported from OSS

Differential Revision: D19730432

Pulled By: zou3519

fbshipit-source-id: 819590d01cf65e8da5a1e8019b8b3084792fee90
2020-02-05 18:49:29 -08:00
peter
1e5aead35b Make cuda search process of cpp extension quiet (#32620)
Summary:
Fixes https://discuss.pytorch.org/t/error-with-cpp-extentions/67559.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32620

Differential Revision: D19576164

Pulled By: soumith

fbshipit-source-id: 076229322375774bec03ef2632fc233000c15391
2020-01-26 20:26:43 -08:00