Commit Graph

19 Commits

Author SHA1 Message Date
Roy Li
2a698682e4 Remove Type dispatch (#21964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21964
ghimport-source-id: fdfb555ac4efbf31ae7d2c700a5aa44ad0cc4d7f

Test Plan: Imported from OSS

Differential Revision: D15897424

Pulled By: li-roy

fbshipit-source-id: 3cd6744254e34d70e6875ffde749b5cf959b663c
2019-06-30 04:11:35 -07:00
Sam Gross
b90790ab1b Don't split 256-bit AVX2 load/store intrinsics (#20609)
Summary:
Recent versions of GCC split unaligned load and store intrinsics into
two 128-bit instructions. On old processors (Sandy Bridge) this was a
bit faster for unaligned data, but a bit slower for aligned data. On new
processors (Intel Haswell+, recent AMD) splitting loads is slower on
both aligned and unaligned data.

Clang, MSVC, and ICC do not split unaligned load and store intrinsics.

There's a good explanation here:
https://stackoverflow.com/questions/52626726/why-doesnt-gcc-resolve-mm256-loadu-pd-as-single-vmovupd#tab-top

Splitting load and store intrinsics makes no sense in our AVX2
configuration because the CPUs that support AVX2 instructions are the
same CPUs where splitting is disadvantageous for all data alignments.

Note that this doesn't change the AVX configuration (used by CPUs that
support AVX but not AVX2). It's possible this would be beneficial for
that configuration too (our data is usually 32-byte aligned), but I'd
prefer the conservative change for now.

torch.add generated assembly (hot loop) (GCC 7.3.0)
before:
https://gist.github.com/colesbury/066376537bccd514daf8fe4ab54d8295

after:
https://gist.github.com/colesbury/8b4b948145001d44b225c51d2428bb91

Timing of `torch.add(x, y, out=z)` for size 10240 (1 thread, Broadwell,
no turbo):
before: 7.35 us; after: 6.39 us

(Take the torch.add timings with a grain of salt. The difference in timings
is much larger than I would expect.)
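
For reference, a measurement like the one quoted above can be reproduced with a short Python sketch. The tensor size (10240) and single-thread setting come from the commit message; the dtype and iteration count are assumptions:

```python
# Hedged sketch of the torch.add timing described above.
import timeit
import torch

torch.set_num_threads(1)  # the commit measured with 1 thread

x = torch.randn(10240)
y = torch.randn(10240)
z = torch.empty(10240)

# Time the add into a preallocated output, as in the commit message.
n = 10000
elapsed = timeit.timeit(lambda: torch.add(x, y, out=z), number=n)
print(f"torch.add(x, y, out=z): {elapsed / n * 1e6:.2f} us per call")
```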
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20609

Differential Revision: D15385800

Pulled By: colesbury

fbshipit-source-id: 66415b148a3b19360b9de9881af594ab46547b6f
2019-05-17 09:16:17 -07:00
Jiakai Liu
c7c02724cd CMakeLists changes to enable libtorch for Android (#19762)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19762
ghimport-source-id: 287aa7fea4efd38994e14d794123eb2046b91fc0

Differential Revision: D15087653

Pulled By: ljk53

fbshipit-source-id: 4498ff9f7f7903c3e25541184302b811267958e9
2019-05-03 09:28:53 -07:00
Jiakai Liu
8cd6d2f101 rename BUILD_ATEN_MOBILE to INTERN_BUILD_MOBILE and make it private (#19942)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19942
ghimport-source-id: 6bacc8f5ad7911af8cf5fde9fcb604ade666b862

Reviewed By: dzhulgakov

Differential Revision: D15144325

Pulled By: ljk53

fbshipit-source-id: d63a70f007110d5d1055d6bec1ed09a1a6aafdae
2019-05-01 00:20:24 -07:00
Gemfield
20159c3ffe remove redundant --install_dir parameter in GEN_COMMAND (#18473)
Summary:
Remove the redundant --install_dir parameter in GEN_COMMAND, since the --install_dir parameter is already contained in ${GEN_COMMAND}.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18473

Differential Revision: D14620193

Pulled By: ezyang

fbshipit-source-id: ee9953b5d055f4b8beb3557f95f6539051b0028a
2019-03-26 10:22:00 -07:00
Owen Anderson
fc2d8c6889 Eliminate PYCMD in favor of PYTHON_EXECUTABLE in CMake.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16522

Differential Revision: D13867376

Pulled By: resistor

fbshipit-source-id: 6bce68facea83c5161a31fcdfafe08827999eb2b
2019-01-30 17:13:43 -08:00
andersj
8a5ba577c1 Revert "remove use of tmp_install" (#15847)
Summary:
This reverts commit 04bf528589.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15847

Differential Revision: D13603174

Pulled By: anderspapitto

fbshipit-source-id: ae321434d3345ad94fad67bf71fd027cddeb4588
2019-01-08 16:30:19 -08:00
andersj
04bf528589 remove use of tmp_install
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14553

Differential Revision: D13583335

Pulled By: anderspapitto

fbshipit-source-id: 8711fead9eda877c1037a0bc59f91a3d2e01f3e0
2019-01-04 13:48:12 -08:00
Zachary DeVito
60f02b87be fix an issue where two rules build the same .py files
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15230

Differential Revision: D13471625

Pulled By: zdevito

fbshipit-source-id: a982413a308c7a9bb5b6a82fe96fd3de44f555aa
2018-12-14 14:52:52 -08:00
Edward Yang
b710642969 Make ATen HIPify out-of-place, but still reuse CUDA names. (#14866)
Summary:
```
    This diff changes the HIPification of ATen to be out-of-place.
    We now have the following mappings:

    - ATen/cuda => ATen/hip
    - ATen/native/cuda => ATen/native/hip
    - ATen/native/sparse/cuda => ATen/native/sparse/hip
    - THC => THH
    - THCUNN => THHUNN

    The build system is adjusted to know about these new build paths,
    and HIPify is taught how to adjust include paths and
    THC_GENERIC_FILE appropriately.  HIPified ATen is now built as
    a separate ATen_hip library, rather than reusing ATen_cuda.

    However, despite these new filepaths, none of the identifiers in ATen
    have actually changed.  So, e.g., THHGeneral.h still defines functions
    named THC_blahblah, and HIP still shows up as CUDA in PyTorch itself.
    We'll tackle this in a subsequent PR; this diff is just to get the files
    out-of-place.

    Minor extra improvements:

    - Don't edit tmp_install when hipifying
    - HIP no longer builds native_cudnn_cpp; it was unnecessary
    - Caffe2_HIP_INCLUDES is now Caffe2_HIP_INCLUDE, for consistency
      with all the other variables.
    - HIP build now properly respects ATEN_CUDA_FILES_GEN_LIB (it
      did not previously).
    - You can now override file extension matching in pyHIPIFY
      by explicitly specifying its full name in the matching list.
      This is used so we can HIPify CMakeLists.txt in some situations.

    A little bit of string and sealing wax:

    - gen.py grows a --rocm flag so that it knows to generate CUDA
      files which actually refer to the HIP headers (e.g., THH.h).
      We'll get rid of this eventually and generate real HIP files,
      but not for this PR.
    - Management of HIP dependencies is now completely deleted
      from the ATen CMakeLists.txt.  The old code was dead (because
      it was shoveled in ATen_CUDA_DEPENDENCY_LIBS and promptly
      ignored by the Caffe2 build system) and didn't actually work.
```
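
For illustration, the directory mappings listed in the summary amount to a simple prefix-rewrite table. The sketch below is hypothetical (it is not the actual pyHIPIFY code); the table and function names are invented:

```python
# Hedged sketch: the out-of-place CUDA -> HIP path remapping described
# above, expressed as a prefix-rewrite table. Names are hypothetical.
CUDA_TO_HIP_PATHS = {
    "ATen/cuda": "ATen/hip",
    "ATen/native/cuda": "ATen/native/hip",
    "ATen/native/sparse/cuda": "ATen/native/sparse/hip",
    "THC": "THH",
    "THCUNN": "THHUNN",
}

def hipify_path(path: str) -> str:
    """Rewrite a CUDA source path to its out-of-place HIP location."""
    # Try the longest prefixes first so that, e.g., THCUNN wins over THC.
    for cuda_prefix in sorted(CUDA_TO_HIP_PATHS, key=len, reverse=True):
        if path.startswith(cuda_prefix):
            return CUDA_TO_HIP_PATHS[cuda_prefix] + path[len(cuda_prefix):]
    return path

assert hipify_path("ATen/native/cuda/Foo.cu") == "ATen/native/hip/Foo.cu"
assert hipify_path("THCUNN/Abs.cu") == "THHUNN/Abs.cu"
```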

Stacked on https://github.com/pytorch/pytorch/pull/14849; review the last commit only.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14866

Differential Revision: D13419475

Pulled By: ezyang

fbshipit-source-id: cb4c843df69a1d8369314c9fab1b7719520fa3db
2018-12-11 19:15:27 -08:00
Christian Puhrsch
f564163951 Remove SSE-only code and convolve5x5 (#12109)
Summary:
Performance-oriented code will use AVX/AVX2, so we don't need SSE-specific code anymore. This will also reduce the probability of running into an error on legacy CPUs.

On top of this, convolve is covered by modern libraries such as MKLDNN, which are much more performant and which we now build against by default (even for builds from source).
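
A quick way to check whether a given build actually has the MKLDNN backend is sketched below; torch.backends.mkldnn is a real PyTorch module, though its presence in builds of this vintage is an assumption:

```python
# Hedged sketch: verify the MKLDNN backend that replaces the removed
# hand-written convolve path is available in this build.
import torch

print("MKLDNN available:", torch.backends.mkldnn.is_available())

# CPU convolutions dispatch to MKLDNN when it is available, covering
# the cases the removed convolve5x5 code used to handle.
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 5, 5)
y = torch.nn.functional.conv2d(x, w)
print(y.shape)  # torch.Size([1, 8, 28, 28])
```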
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12109

Differential Revision: D10055134

Pulled By: colesbury

fbshipit-source-id: 789b8a34d5936d9c144bcde410c30f7eb1c776fa
2018-10-09 10:53:50 -07:00
Gregory Chanan
9a7c196040 Move Type, Tensor, TensorMethods to core.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11519

Reviewed By: yf225

Differential Revision: D9771684

Pulled By: gchanan

fbshipit-source-id: a57ee2072af99ce856f895c688b09d750a8606e0
2018-09-12 13:10:54 -07:00
Christian Puhrsch
aeb6094538 Unify opt flag for cmake codegen (#11227)
Summary:
Also enables debug flags for non-MSVC compilers for kernel codegen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11227

Differential Revision: D9656506

Pulled By: cpuhrsch

fbshipit-source-id: 667195cb55de1a1a9042b6b1c4436e9c6c743333
2018-09-05 08:55:49 -07:00
Mingzhe Li
f0d8a36e70 Completely remove build_aten and use_aten (#10469)
Summary:
Breaking out of #8338 to completely remove build_aten and use_aten.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10469

Reviewed By: orionr

Differential Revision: D9413639

Pulled By: mingzhe09088

fbshipit-source-id: b7203aa4f5f2bb95c504c8dc187a3167f2570183
2018-08-20 20:26:42 -07:00
Edward Yang
64a6f17177 Fix ATen/core header installation. (#10463)
Summary:
Fixes #10353 and fixes #10397.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10463

Differential Revision: D9296491

Pulled By: ezyang

fbshipit-source-id: f825c2a21a113e44a6f5c1c5ec17814d9deac366
2018-08-13 09:25:49 -07:00
Edward Yang
37a226de63 When BUILD_ATEN=OFF, use ATen/core directly (#10019)
Summary:
ATenCore.h is a dummy header just to test that this is working at all.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10019

Reviewed By: smessmer

Differential Revision: D9067262

Pulled By: ezyang

fbshipit-source-id: 58bab9c0aa83b56335e36b719b9b6505400d8dee
2018-07-30 21:09:55 -07:00
Christian Puhrsch
e9e47ce8f1 Vectorize sigmoid (#8612)
Summary:
This PR ports the vectorization of sigmoid to also enable better performance for non-contiguous arrays. Detailed timings will follow shortly.
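
A minimal sketch of the case this PR targets, comparing sigmoid on a contiguous tensor and on a non-contiguous (transposed) view; the sizes and iteration count are assumptions:

```python
# Hedged sketch: sigmoid on contiguous vs. non-contiguous input.
import timeit
import torch

x = torch.randn(4096, 4096)
xt = x.t()  # transposed view: same data, non-contiguous strides
assert not xt.is_contiguous()

n = 20
t_contig = timeit.timeit(lambda: torch.sigmoid(x), number=n)
t_strided = timeit.timeit(lambda: torch.sigmoid(xt), number=n)
print(f"contiguous:     {t_contig / n * 1e3:.2f} ms")
print(f"non-contiguous: {t_strided / n * 1e3:.2f} ms")
```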
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8612

Reviewed By: ezyang

Differential Revision: D8712298

Pulled By: cpuhrsch

fbshipit-source-id: 01a3d06af8d04513edd024ab1d01a6b753fc6f6a
2018-07-10 12:40:39 -07:00
anderspapitto
41ef5c2d4b Support for generating ATen during the fbcode build, rather than committing the generated files (#8002)
Paint the internal bikeshed a slightly different color to appease Buck tooling.
2018-06-01 16:04:02 -04:00
Orion Reblitz-Richardson
4bf0202cac [build] Have PyTorch depend on minimal libcaffe2.so instead of libATen.so (#7399)
* Have PyTorch depend on minimal libcaffe2.so instead of libATen.so

* Build ATen tests as a part of Caffe2 build

* Hopefully cufft and nvcc fPIC fixes

* Make ATen install components optional

* Add tests back for ATen and fix TH build

* Fixes for test_install.sh script

* Fixes for cpp_build/build_all.sh

* Fixes for aten/tools/run_tests.sh

* Switch ATen cmake calls to USE_CUDA instead of NO_CUDA

* Attempt at fix for aten/tools/run_tests.sh

* Fix typo in last commit

* Fix valgrind call after pushd

* Be forgiving about USE_CUDA disable like PyTorch

* More fixes on the install side

* Link all libcaffe2 during test run

* Make cuDNN optional for ATen right now

* Potential fix for non-CUDA builds

* Use NCCL_ROOT_DIR environment variable

* Pass -fPIC through nvcc to base compiler/linker

* Remove THCUNN.h requirement for libtorch gen

* Add Mac test for -Wmaybe-uninitialized

* Potential Windows and Mac fixes

* Move MSVC target props to shared function

* Disable cpp_build/libtorch tests on Mac

* Disable sleef for Windows builds

* Move protos under BUILD_CAFFE2

* Remove space from linker flags passed with -Wl

* Remove ATen from Caffe2 dep libs since directly included

* Potential Windows fixes

* Preserve options while sleef builds

* Force BUILD_SHARED_LIBS flag for Caffe2 builds

* Set DYLD_LIBRARY_PATH and LD_LIBRARY_PATH for Mac testing

* Pass TORCH_CUDA_ARCH_LIST directly in cuda.cmake

* Fixes for the last two changes

* Potential fix for Mac build failure

* Switch Caffe2 to build_caffe2 dir to not conflict

* Cleanup FindMKL.cmake

* Another attempt at Mac cpp_build fix

* Clear cpp-build directory for Mac builds

* Disable test in Mac build/test to match cmake
2018-05-24 07:47:27 -07:00