pytorch/tools
Vitaly Fedyunin d39ab0312a Add memory_format support to and type operators (#27107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27107

Adds a memory_format keyword argument (positional for cpp).

'Preserve' behavior now follows these rules:
1) If the tensor is non-overlapping and dense, the output tensor has the same strides as the input tensor.
2) If (1) does not apply and the tensor is stored in the channels-last format, the output tensor will also have the channels-last format.
3) In all other cases, the output tensor is contiguous.

 ---
A dense tensor is one that stores its values in a single contiguous block of memory.
A non-overlapping tensor is one in which each element occupies its own distinct memory location.
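The three rules above can be sketched as a small decision function (a hypothetical helper for illustration only; the actual logic lives in the C++ operators):

```python
def preserved_memory_format(is_non_overlapping_and_dense: bool,
                            is_channels_last: bool) -> str:
    """Sketch of the 'preserve' rules above (illustrative names only)."""
    if is_non_overlapping_and_dense:
        # Rule 1: keep the input tensor's exact strides.
        return "same strides as input"
    if is_channels_last:
        # Rule 2: keep the channels-last layout.
        return "channels_last"
    # Rule 3: fall back to a contiguous output.
    return "contiguous"
```

Note that rule 1 takes priority: a channels-last tensor that is also non-overlapping and dense keeps its exact strides, which already encode the channels-last layout.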

Test Plan: Imported from OSS

Differential Revision: D17931062

Pulled By: VitalyFedyunin

fbshipit-source-id: 2c5dd3dd05bf58a9a29f25562cd45190b009c3f9
2019-10-15 12:55:56 -07:00

This folder contains a number of scripts that are used as part of the PyTorch build process. The directory also doubles as a Python module hierarchy (hence the __init__.py).

Overview

Modern infrastructure:

  • autograd - Code generation for autograd. This includes definitions of all our derivatives.
  • jit - Code generation for the JIT.
  • shared - Generic infrastructure that scripts in tools may find useful.
    • module_loader.py - Makes it easier to import arbitrary Python files in a script, without having to add them to the PYTHONPATH first.
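As a rough sketch, importing a Python file by path (the kind of thing module_loader.py is described as doing) can be done with the standard importlib machinery; the helper below is illustrative and not the actual tools/shared API:

```python
import importlib.util

def import_from_path(module_name, file_path):
    # Build a module spec directly from a file location, bypassing PYTHONPATH.
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    # Execute the file's code in the new module's namespace.
    spec.loader.exec_module(module)
    return module
```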

Legacy infrastructure (we should kill this):

  • cwrap - Implementation of legacy code generation for THNN/THCUNN. This is used by nnwrap.

Build system pieces:

  • setup_helpers - Helper code for searching for third-party dependencies on the user system.
  • build_pytorch_libs.sh - Script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself. We are working on eliminating this script in favor of a unified CMake build.
  • build_pytorch_libs.bat - Same as above, but for Windows.
  • build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.

Developer tools which you might find useful:

Important if you want to run on AMD GPU:

  • amd_build - HIPify scripts for transpiling CUDA into AMD HIP. Right now, PyTorch and Caffe2 share the logic for this transpilation, but have separate entry points for transpiling either PyTorch or Caffe2 code.
    • build_amd.py - Top-level entry point for HIPifying our codebase.
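To illustrate the kind of transpilation involved, here is a toy token-renaming sketch; the mapping and the naive string substitution are assumptions for illustration only, and the real amd_build scripts cover far more of the CUDA API and handle many more cases:

```python
# Toy CUDA-to-HIP renaming table; illustrative entries only.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaStream_t": "hipStream_t",
}

def hipify(source: str) -> str:
    # Naive textual substitution; the real tooling is considerably smarter.
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source
```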

Tools which are only situationally useful: