Commit Graph

294 Commits

Author SHA1 Message Date
Jiong Gong
037615b989 [inductor][cpp] GEMM template (infra and fp32) (#124021)
This PR adds the Cpp template infrastructure and the initial FP32 gemm template. See RFC https://github.com/pytorch/pytorch/issues/125683 for more background info.
1. Cpp template infrastructure
Template abstractions similar to the CUTLASS template, i.e., `CppTemplate`, `CppTemplateKernel`, `CppTemplateBuffer`, plus the micro-kernel abstraction (`CppMicroGemm`) that can be used by Cpp GEMM templates.
2. Initial FP32 gemm template
This involves a GEMM template implementation `CppPackedGemmTemplate` that supports GEMM with a constant weight (`B`), requiring `N` to be a multiple of the register blocking width while allowing static or dynamic sizes for the `M` (batch dim) of `A`. The `B` matrix is prepacked. This is a typical setting for inference workloads. The template handles the thread decomposition (via `thread_blocking`) and cache blocking (via `cache_blocking`). It then invokes `CppMicroGemm`, which handles register blocking, instruction selection, and other CPU architecture-specific optimizations. A `CppMicroGemmFP32Vec` micro-kernel implementation is provided for fp32 matmuls implemented with the ATen vec abstraction.
3. Correctness and performance
The changes have been validated with fp32 inference on the three benchmark suites (torchbench, huggingface and timm_models) with both static and dynamic shapes. Since this is an initial implementation, we are still working on further performance improvements in follow-up PRs, including optimizations in the kernels as well as fusions. Perf gains are observed only on a select number of models compared to the ATen kernels, which are implemented with MKL. The gains are more obvious with dynamic shapes since MKL only supports packed gemm for static shapes. Details are below, followed by a brief usage sketch.

Static shapes
| Benchmark | torchbench | huggingface | timm_models |
|------------|-------------|--------------|--------------|
| Multi-threaded (baseline) | 1.47x | 1.36x | 1.91x |
| Multi-threaded (max-autotune) | 1.47x | 1.36x | 1.92x |
| Single-threaded (baseline) | 1.56x | 1.19x | 1.51x |
| Single-threaded (max-autotune) | 1.56x | 1.19x | 1.52x |

Key models being sped up:
drq: 1.14x
soft_act: 1.12x
cait_m36_384: 1.18x

Dynamic shapes
| Benchmark | torchbench | huggingface | timm_models |
| --- | --- | --- | --- |
| Multi-threaded (baseline) | 1.43x | 1.28x | 1.85x |
| Multi-threaded (max-autotune) | 1.47x | 1.28x | 1.85x |
| Single-threaded (baseline) | 1.55x | 1.20x | 1.51x |
| Single-threaded (max-autotune) | 1.56x | 1.19x | 1.53x |

Key models being sped up:
BERT_pytorch: 1.22x
pyhpc_turbulent: 1.13x
soft_actor_critic: 1.77x
BlenderbotForCausalLM: 1.09x
cait_m36_384: 1.17x
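
Brief usage sketch (the module and shapes are illustrative, not from the PR): compiling a constant-weight linear on CPU with max-autotune lets the new Cpp GEMM template be autotuned against the ATen kernels.
```
# Hedged sketch: a constant-weight linear (GEMM with prepacked B) compiled with max-autotune.
import torch

class TinyLinear(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # N = 1024 is assumed to be a multiple of the register blocking width (for illustration)
        self.fc = torch.nn.Linear(1024, 1024, bias=False)

    def forward(self, x):
        return self.fc(x)

model = TinyLinear().eval()
x = torch.randn(64, 1024)  # M (the batch dim of A) may be static or dynamic

with torch.no_grad():
    compiled = torch.compile(model, mode="max-autotune")
    out = compiled(x)
```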

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124021
Approved by: https://github.com/jansel
2024-05-12 07:46:44 +00:00
lezcano
320af5eaa6 Compute bounds for the variables created during codegen (#123100)
Before we would just bail out on these bounds for all variables that did
not come from the FX graph. Now we propagate the bounds whenever we have
a rule for that op.
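
A minimal sketch of the idea, assuming a per-op rule table (names here are illustrative, not Inductor's actual value-range machinery):
```
# Propagate interval bounds for codegen-created variables when a rule exists,
# otherwise fall back to unbounded (the old "bail out" behavior).
import math

class Bound:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

RULES = {
    "add": lambda a, b: Bound(a.lo + b.lo, a.hi + b.hi),
    "neg": lambda a: Bound(-a.hi, -a.lo),
}

def propagate(op, *args):
    rule = RULES.get(op)
    if rule is None:
        return Bound(-math.inf, math.inf)  # no rule: keep the variable unbounded
    return rule(*args)

print(propagate("add", Bound(0, 4), Bound(1, 2)))  # [1, 6]
```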

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123100
Approved by: https://github.com/jgong5, https://github.com/peterbell10
2024-05-08 08:14:06 +00:00
Jez Ng
7863e04615 Back out "Get cutlass_library import working under fbcode" (#125606)
Summary: Original commit changeset: de79f6bfe348

Differential Revision: D57002294

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125606
Approved by: https://github.com/chenyang78
2024-05-07 16:55:11 +00:00
Oguz Ulgen
22bcfc25ef Initial implementation of Inductor FX Graph Remote Cache (#124669)
This diff implements a remote caching strategy (Memcache for internal and Redis for external) for caching the mapping from an Inductor FX graph to the Inductor-generated wrapper file.

It uses the same idea as the autotuning result cache that is currently live.

This will land turned off; before turning it on by default, I will do more testing, including looking at the dynamic shape guards added by Inductor.
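
A rough sketch of the shape of such a remote cache for the external (Redis) path; the key layout and serialization are assumptions, not the actual implementation:
```
# Map a hash of the FX graph (plus relevant config) to the generated wrapper code.
import hashlib
import pickle

import redis  # assumes a reachable Redis instance

r = redis.Redis(host="localhost", port=6379)

def cache_key(graph_repr: str, config_repr: str) -> str:
    return "inductor:fxgraph:" + hashlib.sha256((graph_repr + config_repr).encode()).hexdigest()

def lookup(key: str):
    blob = r.get(key)
    return pickle.loads(blob) if blob is not None else None  # miss -> compile as usual

def store(key: str, wrapper_code: str) -> None:
    r.set(key, pickle.dumps(wrapper_code))
```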

Differential Revision: [D56441624](https://our.internmc.facebook.com/intern/diff/D56441624/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124669
Approved by: https://github.com/jansel, https://github.com/eellison
2024-05-06 22:10:27 +00:00
PyTorch MergeBot
2a42c40791 Revert "Compute bounds for the variables created during codegen (#123100)"
This reverts commit bb668c6468.

Reverted https://github.com/pytorch/pytorch/pull/123100 on behalf of https://github.com/huydhn due to Sorry for reverting you change but it is failing inductor tests bb668c6468 ([comment](https://github.com/pytorch/pytorch/pull/123100#issuecomment-2096837821))
2024-05-06 20:23:39 +00:00
lezcano
bb668c6468 Compute bounds for the variables created during codegen (#123100)
Before we would just bail out on these bounds for all variables that did
not come from the FX graph. Now we propagate the bounds whenever we have
a rule for that op.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123100
Approved by: https://github.com/jgong5, https://github.com/peterbell10
2024-05-06 18:12:15 +00:00
Xiaodong Wang
52f9128a0d [AMD] Fix cutlass path in inductor (#125463)
Summary: Trunk is broken because fbcode triton-amd doesn't have a cutlass path

Test Plan: It now runs.

Differential Revision: D56923833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125463
Approved by: https://github.com/Skylion007
2024-05-03 18:02:58 +00:00
Jez Ng
fb1bfe1156 Get cutlass_library import working under fbcode (#125257)
Differential Revision: D56764089

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125257
Approved by: https://github.com/chenyang78
2024-05-02 15:17:10 +00:00
David Berard
9022f131b5 [inductor] switch assume_aligned_inputs to False (#124336)
In #123319, we guard some behavior behind the `assume_aligned_inputs` config option. If we set this to `False`, then the behavior added in #123319 becomes the default behavior. See the referenced PR for more details about the behavior affected.

Side effects:
* It's possible that this will hurt performance in some scenarios. For example, if an unaligned input is used in a matmul, it might be better to perform the clone to align it first.
* This will occasionally cause recompiles. Specifically: the check we perform (`(storage_offset * get_dtype_size(dtype)) % ALIGNMENT == 0`) can be guarded on if the storage_offset becomes dynamic. storage_offset becomes dynamic during automatic_dynamic_shapes after a shape or stride changes. Previously, this increased graph breaks in cpu inductor torchbench tests (but this is fixed by more carefully guarding the alignment checks, so that we don't run them and generate guards unless actually needed).
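
A minimal sketch of the alignment check above (the constant and helper are illustrative):
```
ALIGNMENT = 16  # bytes

def is_aligned(storage_offset: int, dtype_size: int) -> bool:
    # the check that becomes a guard once storage_offset is dynamic
    return (storage_offset * dtype_size) % ALIGNMENT == 0

print(is_aligned(4, 4))  # True: the data starts 16 bytes into the storage
print(is_aligned(3, 4))  # False: previously this input would be cloned to realign it
```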

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124336
Approved by: https://github.com/eellison
2024-05-01 23:49:27 +00:00
Kai Londenberg
cd06c73cbd [Inductor Cutlass backend] Improved GEMM template (#124577)
Improves the Cutlass backend GEMM template:

 * Adds code which allows creating stand-alone test runners for Cutlass GEMM Kernels, which allows (manual) debugging of, for example, CUDA IMA errors or similar problems which occur in practice. Includes some utility code and tests to actually compile and run these standalone tests.
 * Cleans up the GEMM template code through various refactorings.
 * Eliminates code sections and options that are unnecessary now that epilogue fusions are being removed.
 * Limits the scope of a workaround for (flaky) Cutlass issues with bias broadcasting to necessary cases.
 * Puts some CPU runtime checks into #if / #endif blocks, such that it's possible to compile CUTLASS Kernels with lower CPU overhead.
 * Adds documentation comments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124577
Approved by: https://github.com/jansel
ghstack dependencies: #124576
2024-04-26 20:03:20 +00:00
Kai Londenberg
9aeeb8e925 [Inductor Cutlass backend] Improve GEMM op filtering (#124576)
Add configurable allowlist / denylist regular expressions to make it possible to exclude certain
CUTLASS GEMM implementations (for example, "pingpong" Kernels due to undesired numerical behavior).

Remove usage of old 2.x Cutlass Kernels entirely.
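
A hedged sketch of the allowlist/denylist filtering; the patterns and op names below are illustrative, not the actual config surface:
```
import re

allow_re = r".*"            # allow everything by default
deny_re = r".*pingpong.*"   # e.g. exclude "pingpong" kernels

def keep_op(op_name: str) -> bool:
    return re.search(allow_re, op_name) is not None and re.search(deny_re, op_name) is None

ops = ["cutlass3x_sm90_tensorop_gemm_128x128", "cutlass3x_sm90_pingpong_gemm_64x64"]
print([op for op in ops if keep_op(op)])  # the pingpong kernel is filtered out
```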

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124576
Approved by: https://github.com/jansel, https://github.com/eellison
2024-04-25 16:33:54 +00:00
Boyuan Feng
b91f83f181 [cudagraph] add config for cudagraph managed input mutation support (#124754)
Summary: [#123231](https://github.com/pytorch/pytorch/pull/123231) adds cudagraph support for more types of functions (i.e., cudagraph managed input mutation). These newly supported functions may have mutated static inputs, leading to assertion errors in some workloads which previously skipped cudagraph. This diff adds a config to opt in to the new feature.

Test Plan: ci

Differential Revision: D56481353

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124754
Approved by: https://github.com/eellison
2024-04-24 04:23:53 +00:00
Kai Londenberg
f8f6c460cd [Inductor max autotune] Make autotuning robust against very slow Kernels (#123932)
If a Kernel does not return in a reasonable amount of time during autotuning, it can delay inductor compilation a lot. This change introduces soft / hard kill timeouts and a mechanism to kill Kernels being profiled in subprocesses if they take too long.

Correspondingly, a few new config options are introduced within _inductor/config.py - all of them with inline docs.
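
A sketch of the subprocess-with-timeouts idea, assuming each candidate can be benchmarked by a standalone command; the timeout values and output protocol are illustrative:
```
import subprocess

SOFT_TIMEOUT_S = 30.0    # past this, warn that the candidate is suspiciously slow
HARD_TIMEOUT_S = 120.0   # past this, kill the benchmarking subprocess outright

def benchmark_candidate(cmd: list[str]) -> float | None:
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=HARD_TIMEOUT_S)
    except subprocess.TimeoutExpired:
        return None  # treat as unusable and drop the candidate from autotuning
    elapsed = float(proc.stdout.decode().strip())  # assume the runner prints its timing in seconds
    if elapsed > SOFT_TIMEOUT_S:
        print(f"warning: slow autotune candidate ({elapsed:.1f}s): {' '.join(cmd)}")
    return elapsed
```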

Test Plan:
Existing tests within test_max_autotune.py and test_cutlass_backend.py ) cover the new codepaths.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123932
Approved by: https://github.com/jansel
ghstack dependencies: #121497, #123930
2024-04-23 11:56:15 +00:00
Jason Ansel
0093735ccd [inductor] Use compile time config values in runtime (#124561)
This removes usage of torch._inductor.config from `torch._inductor.runtime`, fixing two issues:
1) If configs change, we should really use the compile-time ones
2) In compile workers, we want to use the parent process config
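
A minimal sketch of snapshotting config at compile time so the runtime and compile workers see the parent's values; the fields are placeholders, not real Inductor options:
```
class Config:
    benchmark_kernel = True
    coordinate_descent_tuning = False

def snapshot_config(cfg=Config) -> dict:
    # capture plain attributes once, at compile time
    return {k: v for k, v in vars(cfg).items() if not k.startswith("_")}

compile_time_cfg = snapshot_config()

def runtime_autotune(cfg: dict) -> None:
    # the runtime consults the snapshot, never the live global config
    if cfg["benchmark_kernel"]:
        print("benchmarking with compile-time settings")

runtime_autotune(compile_time_cfg)
```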

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124561
Approved by: https://github.com/yanboliang
ghstack dependencies: #124552, #124553, #124557, #124559, #124560, #124569
2024-04-22 18:46:40 +00:00
PyTorch MergeBot
30dec1da84 Revert "[inductor] Use compile time config values in runtime (#124561)"
This reverts commit 3af12447f8.

Reverted https://github.com/pytorch/pytorch/pull/124561 on behalf of https://github.com/jeanschmidt due to There are internal breakages, already discussed with author and he'll FF ([comment](https://github.com/pytorch/pytorch/pull/124561#issuecomment-2070537634))
2024-04-22 18:24:38 +00:00
Jason Ansel
3af12447f8 [inductor] Use compile time config values in runtime (#124561)
This removes usage of torch._inductor.config from `torch._inductor.runtime`, fixing two issues:
1) If configs change, we should really use the compile-time ones
2) In compile workers, we want to use the parent process config

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124561
Approved by: https://github.com/yanboliang
ghstack dependencies: #124552, #124553, #124557, #124559, #124560, #124569
2024-04-22 04:51:30 +00:00
eellison
000d55870a Enable in oss (#124031)
Biggest movement is 4% HF inference, 9% TIMM inference. Note, this is max-autotune mode so we are more tolerant of compilation increases. We could improve compilation time by limiting:
```
# Take how many of the top triton kernels to benchmark epilogue
max_epilogue_benchmarked_choices = 3
```

There is a hf_Whisper failure which you can repro on main without this stack with `TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS=TRITON TORCHINDUCTOR_MAX_AUTOTUNE=1 python benchmarks/dynamo/torchbench.py --backend inductor --amp --accuracy --training --only hf_Whisper`. Turning off epilogue fusion fixes the accuracy. I bisected the failure to an epilogue; however, when you compare the results of that epilogue with the corresponding separate kernels, the outputs are equivalent.

Inference:

<img width="1686" alt="image" src="https://github.com/pytorch/pytorch/assets/11477974/0b240080-cd33-4c08-89d3-583103b1fb0c">

Training:

<img width="1329" alt="Screenshot 2024-04-16 at 6 16 30 PM" src="https://github.com/pytorch/pytorch/assets/11477974/db0afcc9-7288-4c27-84ce-4fc1a5690788">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124031
Approved by: https://github.com/Chillee, https://github.com/shunting314
ghstack dependencies: #124030, #122642, #123229, #122825
2024-04-19 20:28:55 +00:00
eellison
39fc280dce Dont precompile already seen keys, limit epilogue choices (#122642)
Two changes:
- in epilogue benchmark fusion, only take the top 6 choices. There were basically no choices taken after this point in HF.
- Share a single precompilation function among matmuls with the same key.
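
A tiny sketch of sharing one precompilation among matmuls with the same key (the key format and compile step are placeholders):
```
from functools import lru_cache

@lru_cache(maxsize=None)
def precompile_for_key(key: str) -> str:
    print(f"precompiling choices for {key}")  # executed once per distinct key
    return f"compiled<{key}>"

precompile_for_key("mm_f16_1024x1024x1024")
precompile_for_key("mm_f16_1024x1024x1024")  # second matmul with the same key: cache hit
```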

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122642
Approved by: https://github.com/shunting314
ghstack dependencies: #124030
2024-04-19 17:34:22 +00:00
chilli
8e280862ff Add custom joint graph passes (#124443)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124443
Approved by: https://github.com/aorenste, https://github.com/malfet
2024-04-19 11:54:46 +00:00
PyTorch MergeBot
77ad630f5d Revert "Dont precompile already seen keys, limit epilogue choices (#122642)"
This reverts commit 050051f412.

Reverted https://github.com/pytorch/pytorch/pull/122642 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/124030#issuecomment-2061044960))
2024-04-17 11:31:32 +00:00
eellison
050051f412 Dont precompile already seen keys, limit epilogue choices (#122642)
Two changes:
- in epilogue benchmark fusion, only take the top 6 choices. There were basically no choices taken after this point in HF.
- Share a single precompilation function among matmuls with the same key.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122642
Approved by: https://github.com/shunting314
ghstack dependencies: #124030
2024-04-17 03:08:59 +00:00
Shunting Zhang
3e90e93a78 [inductor] disable comprehensive padding in fbcode (#124191)
Comprehensive padding causes a small NE change and fails an internal test. Disable it for the internal use case to mitigate.

Differential Revision: [D56197430](https://our.internmc.facebook.com/intern/diff/D56197430)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124191
Approved by: https://github.com/jansel
2024-04-16 22:44:08 +00:00
Shunting Zhang
fb6f6270d6 [inductor] comprehensive padding (#120758)
This PR adds the ability to pad tensor strides during lowering. The goal is to make sure (if possible) that tensors with bad shapes can have aligned strides so the GPU can access the memory more efficiently.

By testing BlenderbotSmallForConditionalGeneration, I already see a 2.5ms speedup.
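
An illustrative sketch of stride padding (the alignment constant and helper are assumptions, not the lowering code):
```
def pad_strides(size, itemsize, align=128):
    # pad the row stride of a 2D tensor so each row starts on an `align`-byte boundary
    rows, cols = size
    elems_per_align = align // itemsize
    padded_cols = -(-cols // elems_per_align) * elems_per_align  # ceil to a multiple
    return (padded_cols, 1)  # (stride of dim 0, stride of dim 1), in elements

print(pad_strides((64, 100), itemsize=4))  # (128, 1): 100 fp32 elements padded to 128
```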

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120758
Approved by: https://github.com/jansel
2024-04-15 19:05:51 +00:00
Oguz Ulgen
6367eab1a6 Normalize remote/local cache names (#123914)
Differential Revision: D56027380

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123914
Approved by: https://github.com/aakhundov
2024-04-12 18:18:09 +00:00
Nikita Shulga
416f532753 [AOTI] Serialize large weights (#123002)
By appending them to the end of the shared library and mmapping them afterwards.
Disabled by default, but overridable by `config.aot_inductor.force_mmap_weights`.

Implemented by adding a `USE_MMAP_SELF` define to `inductor/aoti_runtime/model.h`, which is defined when weights are appended to the binary. In that case, the shared library name is determined by calling `dladdr`, the weights are mmapped, and finally checked against a random magic number embedded at the end of the weights as well as in the const section of the library in question.

Added unit tests to validate that it works as expected.
TODO:
  - Extend support to CUDA
  - munmap region if the same library is reused
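
A hedged usage sketch of flipping the flag named above; the export/compile call is commented out because the exact API surface may differ across versions:
```
import torch
from torch._inductor import config as inductor_config

# off by default; opt in to appending weights to the .so and mmapping them at load time
# (flag name taken from the description above)
inductor_config.aot_inductor.force_mmap_weights = True

# then AOT-compile as usual, e.g. (illustrative, API may differ):
# so_path = torch._export.aot_compile(model, (example_input,))
```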

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123002
Approved by: https://github.com/jansel, https://github.com/desertfire, https://github.com/mikekgfb
2024-04-11 06:39:58 +00:00
PyTorch MergeBot
a65e9a06f0 Revert "[AOTI] Serialize large weights (#123002)"
This reverts commit 27eb5daee4.

Reverted https://github.com/pytorch/pytorch/pull/123002 on behalf of https://github.com/DanilBaibak due to There is conflict to land the diff internally ([comment](https://github.com/pytorch/pytorch/pull/123002#issuecomment-2048215990))
2024-04-10 18:54:31 +00:00
Jiong Gong
0288fa7cae [inductor][cpp] expose config options via env vars (#123519)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123519
Approved by: https://github.com/leslie-fang-intel, https://github.com/desertfire
2024-04-10 00:11:32 +00:00
Nikita Shulga
27eb5daee4 [AOTI] Serialize large weights (#123002)
By appending them to the end of the shared library and mmapping them afterwards.
Disabled by default, but overridable by `config._force_mmap_aoti_weights`.

Implemented by adding a `USE_MMAP_SELF` define to `inductor/aoti_runtime/model.h`, which is defined when weights are appended to the binary. In that case, the shared library name is determined by calling `dladdr`, the weights are mmapped, and finally checked against a random magic number embedded at the end of the weights as well as in the const section of the library in question.

Added unit tests to validate that it works as expected.
TODO:
  - Extend support to CUDA
  - munmap region if the same library is reused

Co-authored-by: Michael Gschwind <61328285+mikekgfb@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123002
Approved by: https://github.com/jansel, https://github.com/desertfire, https://github.com/mikekgfb
2024-04-09 22:18:57 +00:00
Menglu Yu
63a0ce89a0 [PT2][Inductor][3/n] Customize pre grad and post grad patterns (#121915)
Summary: Currently, we have only enabled the group batch fusion customization; this change also enables the split cat customization.

Test Plan:
```
buck2 run mode/opt //scripts/jackiexu0313/pt2:local_model_with_pt2 -- --test_mode batch-split --model_type "cmf" --flow_id 524546542
```
P1196013839

Differential Revision: D54861682

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121915
Approved by: https://github.com/jackiexu1992
2024-04-03 21:37:21 +00:00
Kai Londenberg
74b3a7920e [Inductor Cutlass backend] GEMM size threshold for Cutlass backend usage (#121491)
* Adds a configurable GEMM size threshold for the usage of Cutlass GEMM Kernels **_inductor.config.cutlass_backend_min_gemm_size**

 * During GEMM algorithm choice generation: **if no viable choices can be generated using the configured backends, the ATen backend will be used as a fallback backend**, even if it is not enabled in **_inductor.config.max_autotune_gemm_backends**
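
A sketch of the two behaviors (the config name comes from the text above; the selection logic itself is an assumption):
```
from dataclasses import dataclass

@dataclass
class Choice:
    backend: str
    name: str

def filter_gemm_choices(m, n, k, choices, min_gemm_size):
    # drop CUTLASS choices for GEMMs below the configured size threshold
    if m * n * k < min_gemm_size:
        choices = [c for c in choices if c.backend != "CUTLASS"]
    # if no viable choice remains, fall back to ATen even if it was not enabled
    if not choices:
        choices = [Choice("ATEN", "aten::mm")]
    return choices

print(filter_gemm_choices(8, 8, 8, [Choice("CUTLASS", "gemm_128x128")], min_gemm_size=4096))
```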

Test plan:
CI
Additional unit test in test_cutlass_backend.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121491
Approved by: https://github.com/jansel
ghstack dependencies: #121490
2024-04-03 13:34:16 +00:00
Menglu Yu
c40f386afd [Inductor][1/n]Split cat customization (#123045)
Summary: Change the config and revise the group batch fusion in order not to reuse the existing pre_grad and post_grad fusion options

Test Plan:
# unit test
```
buck2 test @mode/dev-nosan //caffe2/test/inductor:split_cat_fx_passes
```
Test UI: https://www.internalfb.com/intern/testinfra/testrun/17732923560510096
Network: Up: 15MiB  Down: 155MiB  (reSessionID-6a577a14-1772-42d9-9ae8-bfdc62f406a3)
Jobs completed: 267487. Time elapsed: 2:39.7s.
Cache hits: 99%. Commands: 104465 (cached: 104457, remote: 8, local: 0)
Tests finished: Pass 11. Fail 0. Fatal 0. Skip 0. Build failure 0

```
buck2 test @mode/dev-nosan //caffe2/test/inductor/fb:split_cat_fx_passes_fb
```
Test UI: https://www.internalfb.com/intern/testinfra/testrun/9007199283031382
Network: Up: 28MiB  Down: 177MiB  (reSessionID-a3081518-7cba-4c83-b442-c16655ecb2cd)
Jobs completed: 183164. Time elapsed: 1:41.4s.
Cache hits: 99%. Commands: 75875 (cached: 75862, remote: 12, local: 1)
Tests finished: Pass 1. Fail 0. Fatal 0. Skip 0. Build failure 0

```
buck2 test @mode/dev-nosan //caffe2/test/inductor:group_batch_fusion
```
Test UI: https://www.internalfb.com/intern/testinfra/testrun/10133099189612276
Network: Up: 1.3MiB           Down: 3.1MiB           (reSessionID-0d312a2d-e19e-4ba6-9f96-7eb5863734e7)
Discovered 9. Pass 0. Fail 0. Fatal 0. Skip 0. Timeout 0
Network: Up: 1.4MiB  Down: 3.2MiB  (reSessionID-0d312a2d-e19e-4ba6-9f96-7eb5863734e7)
Jobs completed: 68. Time elapsed: 2:19.9s.
Cache hits: 0%. Commands: 13 (cached: 0, remote: 1, local: 12)
Tests finished: Pass 9. Fail 0. Fatal 0. Skip 0. Build failure 0
```
buck2 test @mode/dev-nosan //caffe2/test/inductor:perf
```
Test UI: https://www.internalfb.com/intern/testinfra/testrun/5066549804623287
Network: Up: 1.5MiB  Down: 1.1MiB  (reSessionID-8d912a20-fceb-4698-89c3-d28e0708831f)
Jobs completed: 164. Time elapsed: 1:42.2s.
Cache hits: 0%. Commands: 13 (cached: 0, remote: 1, local: 12)
Tests finished: Pass 57. Fail 0. Fatal 0. Skip 0. Build failure 0

# local reproduce
case 1: with split cat
```
buck2 run @mode/opt //scripts/jackiexu0313/pt2:local_model_with_pt2 -- --test_mode batch-split --model_type "cmf" --flow_id 524546542
```
optimus parameter sent to the scuba:
```
{'before_recompile_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GLL6RBZb-ssXJYcBAMzw0oaKtp80br0LAAAz', 'BatchLayernormFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GH1LAxcxv0Ae_BkFAHVav3K3oosDbr0LAAAz', 'BatchSigmoidPreGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GNb0jwR-Ukkqns4CAGRmOqucfedDbr0LAAAz', 'normalization_pass_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GHsIQxm-hn3SPrgCAKq1E-HBsoZHbr0LAAAz', 'remove_split_with_size_one_pass_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GOrJORmbMTV_xlQDAOwolqclPsIAbr0LAAAz', 'merge_getitem_cat_pass_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GCqkmRblvVKybGUDACVxkwVIrWxLbr0LAAAz', 'merge_splits_pass_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GCB1QBfko_kVN0wFAKGjSZv4DJULbr0LAAAz', 'after_recompile_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GMwJPRmu4ry88swDAO1gdA5RCKIXbr0LAAAz', 'before_recompile_post_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GLXCORnNiKeQFmoDABR93CRKmP8Sbr0LAAAz', 'BatchMulPostGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GBMIPRnlwQyjSD4BANPuaMhV7MUjbr0LAAAz', 'after_recompile_post_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GJ9KPxkOv4LL8_0DAA65D4kh4JYDbr0LAAAz', 'inductor': Counter({'pattern_matcher_nodes': 2844, 'pattern_matcher_count': 2604, 'normalization_pass': 886, 'remove_split_with_size_one_pass': 748, 'merge_splits_pass': 82, 'merge_getitem_cat_pass': 11, 'scmerge_split_sections_removed': 4, 'batch_aten_mul': 4, 'batch_sigmoid': 2, 'batch_aten_sub': 2, 'batch_layernorm': 1, 'scmerge_split_added': 1, 'scmerge_cat_added': 1, 'scmerge_split_removed': 1, 'scmerge_cat_removed': 1, 'batch_aten_add': 1}), 'BatchAddPostGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GEcvPxmxBj-pd8gCABE1QgB-d6N6br0LAAAz', 'BatchSubPostGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GEvQxhYomJGj2FMBAEXXAI8Vgzhmbr0LAAAz'}
```
P1202819405

case 2: without split cat
```
buck2 run @mode/opt //scripts/jackiexu0313/pt2:local_model_with_pt2 -- --test_mode batch --model_type "cmf" --flow_id 524546542
```
optimus parameter sent to the scuba:
```
{'before_recompile_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GAY7PxmGthuyjSwEAHF_A767YbMkbr0LAAAz', 'BatchLayernormFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GLDPtBacXyybEOICAKaGCPatq5oabr0LAAAz', 'BatchSigmoidPreGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GBu7ORkiDJu42QAEAGmlVTgO_Mpbbr0LAAAz', 'after_recompile_pre_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GC893BZNl99ftY4BAHm5Z8sM4ptSbr0LAAAz', 'before_recompile_post_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GCAeuRYgzPO5RcsCAPO3Z7tdMNMKbr0LAAAz', 'BatchMulPostGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GHBIQxm1jlU-xhsFAONkzhh2mgknbr0LAAAz', 'after_recompile_post_grad': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GDoUPhmZ0noiaGMDAJHYuuiwHEAUbr0LAAAz', 'inductor': Counter({'pattern_matcher_nodes': 1189, 'pattern_matcher_count': 757, 'batch_aten_mul': 9, 'batch_aten_sub': 3, 'batch_sigmoid': 2, 'batch_aten_add': 2, 'batch_layernorm': 1, 'batch_linear_post_grad': 1}), 'BatchAddPostGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GAluthYxi8uxpI4BAIQDzn3OyywUbr0LAAAz', 'BatchSubPostGradFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GDjsJhTK5VAcot4CADIcAixghrYibr0LAAAz', 'PostGradBatchLinearFusion': 'https://www.internalfb.com/intern/everpaste/?color=0&handle=GEPfJxfJwktC7wsEAA0QbkqYNuVAbr0LAAAz'}
```
P1202823734

# e2e
training_platform:fd4f02cd855f5cc0ccb49317a5a6c8bb

with split cat
f546646358

without split cat
f546647159

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123045
Approved by: https://github.com/jackiexu1992
2024-04-02 14:36:22 +00:00
eellison
cd51496f8b add a couple debug options (#121033)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121033
Approved by: https://github.com/ezyang
2024-03-27 17:24:43 +00:00
eellison
df724153c1 Add option to skip cudagraphing on dynamic shape graphs (#122520)
This was requested internally.

Differential Revision: [D55264528](https://our.internmc.facebook.com/intern/diff/D55264528)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122520
Approved by: https://github.com/mlazos, https://github.com/shunting314
2024-03-26 21:49:21 +00:00
PyTorch MergeBot
97d3bf71b9 Revert "[Inductor Cutlass backend] GEMM size threshold for Cutlass backend usage (#121491)"
This reverts commit 700c92e1b9.

Reverted https://github.com/pytorch/pytorch/pull/121491 on behalf of https://github.com/huydhn due to Sorry for reverting you change but I think it is failing on ROCm, i.e. 700c92e1b9 ([comment](https://github.com/pytorch/pytorch/pull/121490#issuecomment-2015829464))
2024-03-22 20:11:47 +00:00
David Berard
8013c4409f [inductor] config to control whether we assume inputs are aligned (#122158)
**Motivation**: https://github.com/pytorch/pytorch/issues/112771

**Summary**: Inductor generates triton code that assumes inputs are going to be 16-byte aligned. If the inputs aren't aligned, Inductor clones the inputs. This PR introduces a config option to not do this: when assume_aligned_inputs=False, Inductor will _not_ pass inputs as being divisible_by_16, and Inductor will not make clones. This can generate code that might be a bit slower, but the tradeoff can be worth it in some scenarios where you might otherwise make a lot of clones.

Ideally, we could do this on a per-tensor basis. But this would be a lot of work, and attempts to add guards on storage offsets to do this automatically have run into issues: recompilations and excessive time to generate/check guards.

**Tests** https://github.com/pytorch/pytorch/pull/122159 flips this to False. It didn't run through all errors, but the ones we see are all expected failures: divisible_by_16 changes; triton kernel caching fails if we call the same triton kernel multiple times (this makes sense because the first call will have unaligned inputs, but subsequent calls have aligned inputs); and some xfailed tests start passing.

**Alternatives/RFC**:
* Is this the right thing to do with cudagraphs?
* Elias and Jason mentioned that we probably still want to make clones if we're dealing with unaligned inputs to matmuls. Is this something we should add in this config option? (In the use case I'm targeting, it seems like we don't need this optimization right now)

Differential Revision: [D55079094](https://our.internmc.facebook.com/intern/diff/D55079094)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122158
Approved by: https://github.com/ezyang
2024-03-22 20:03:38 +00:00
Kai Londenberg
700c92e1b9 [Inductor Cutlass backend] GEMM size threshold for Cutlass backend usage (#121491)
* Adds a configurable GEMM size threshold for the usage of Cutlass GEMM Kernels **_inductor.config.cutlass_backend_min_gemm_size**

 * During GEMM algorithm choice generation: **if no viable choices can be generated using the configured backends, the ATen backend will be used as a fallback backend**, even if it is not enabled in **_inductor.config.max_autotune_gemm_backends**

Test plan:
CI
Additional unit test in test_cutlass_backend.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121491
Approved by: https://github.com/jansel
ghstack dependencies: #121490
2024-03-22 10:58:43 +00:00
eellison
cbbed46377 Defer selection of triton template (#120275)
Our prior approach to epilogue fusion was to select a choice from a set of triton templates and extern calls based on benchmarking inputs, then unconditionally fuse epilogues. This can be sub-optimal in the following ways:

- We select an extern kernel; however, an epilogue like relu() exists such that choosing a triton template + relu would have been faster.
- We select a triton template and fuse the epilogue, but register spilling occurs, causing it to be slower than not epilogue fusing.

In this PR we wait to select either the Triton template or the extern kernel based on benchmarking results from the kernel itself and its epilogue. As soon as a successful fusion occurs where a fused Triton template + epilogue is faster than the unfused choice, we finalize the MultiTemplateBuffer as a specific template. If no fusion occurs, we finalize the MultiTemplateBuffer after the fusion pass.

Note: if there are multiple epilogue fusions (not super likely), even though we select a template after the first fusion, we will still benchmark to see if subsequent epilogues are worth fusing. We could potentially defer choosing the template in this case in a follow-up, at the expense of compile time.

Gives a 4% HF training win and a 10% TIMM inference win. Increases compilation time, which I will be trying to address more in follow-up PRs.
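
A schematic of the decision, with invented timings and names:
```
def finalize_choice(unfused_ms, fused_ms):
    # unfused_ms: timings for each template/extern choice alone
    # fused_ms: timings for template+epilogue fusions that were benchmarked
    best_unfused = min(unfused_ms, key=unfused_ms.get)
    if fused_ms:
        best_fused = min(fused_ms, key=fused_ms.get)
        if fused_ms[best_fused] < unfused_ms[best_unfused]:
            return best_fused, True   # finalize on this template and keep the fusion
    return best_unfused, False        # keep the unfused choice (possibly the extern kernel)

print(finalize_choice({"extern_mm": 1.00, "triton_mm_1": 1.05},
                      {"triton_mm_1 + relu": 0.95}))  # ('triton_mm_1 + relu', True)
```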

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120275
Approved by: https://github.com/jansel
ghstack dependencies: #121996
2024-03-20 01:40:33 +00:00
Chien-Chin Huang
8e6d572b4e [DDP][PT2D] Allreduce fusion fx pass using concat and all_reduce_coalesced (#113209)
Differential Revision: [D49858057](https://our.internmc.facebook.com/intern/diff/D49858057/)

**TL;DR**
This PR implements 2 different DDP all_reduce fusions in Inductor post_grad fx passes. The two fusions are 1) fusion with concat op and 2) fusion with all_reduce_coalesced. When DDP detects that Python reducer is being used, DDP will automatically turn on the fusion.

This PR does not invent any algorithm and simply reflects the bucket size users set to DDP.

**Implementation Details**
*Fusion with concat op*
The idea of this fusion is to use a concat op to concatenate all the gradients into one tensor and perform one `all_reduce`. After the `wait` op of the `all_reduce`, splitting and reshaping will also be performed to get the individual gradients.

Because DDP needs to perform gradient scaling, the benefit of using this fusion is that we could perform the gradient scaling over the concatenated buffer.
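
A minimal sketch of the concat-based fusion (assumes an initialized process group; the split/reshape bookkeeping is simplified, not the actual pass output):
```
import torch
import torch.distributed as dist

def fused_allreduce_concat(grads, world_size):
    # one buffer, one collective for the whole bucket
    flat = torch.cat([g.reshape(-1) for g in grads])
    dist.all_reduce(flat)
    flat.div_(world_size)  # gradient scaling done once, over the concatenated buffer
    out, offset = [], 0
    for g in grads:
        out.append(flat[offset:offset + g.numel()].view_as(g))
        offset += g.numel()
    return out
```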

*Fusion with `all_reduce_coalesced`*
The idea of this fusion is to use the `all_reduce_coalesced` op to directly perform the `all_reduce` over multiple buffers. This avoids the copy overhead but may not achieve the best NCCL performance. In addition, because there are multiple buffers, we could not do one simple gradient scaling but have to rely on `foreach_div` to help with the gradient scaling.

**Limitations**
Current fusions do not distinguish `all_reduce` generated by different DDP modules. This is okay if all DDP instances use the same PG and data type. The support of multiple DDP instances with different PG and data type will come in the later PRs.

**TODOs**
- [x] Implement DDP allreduce fusion algorithm for Inductor post_grad pass.
- [ ] Add unit tests to ensure the fusion doesn't break DDP + TP.
- [ ] Group different PG and data type of `all_reduce`s.
- [ ] Mixed precision supports and tests
- [ ] Implement the fusions with Inductor IR.
- [ ] Add auto bucketing based on Inductor profiling.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113209
Approved by: https://github.com/yf225
2024-03-13 20:37:09 +00:00
Bin Bao
7e598c0053 [Inductor] Enable ABI-compatible mode for cpp-wrapper JIT (#121309)
Differential Revision: [D54617284](https://our.internmc.facebook.com/intern/diff/D54617284)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121309
Approved by: https://github.com/chenyang78
2024-03-07 14:22:06 +00:00
Oguz Ulgen
9e16622397 Move JK check to on-demand (#121182)
Summary: Some tests are failing due to checking JK during forking. Let's move the JK check to on-demand.

Differential Revision: D54518293

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121182
Approved by: https://github.com/aakhundov
2024-03-05 07:03:25 +00:00
Bin Bao
6ddf5cf85e [AOTI] Update cpp wrapper codegen to use v2 C shim (#120714)
Summary: To use the torchgen-ed v2 C shim interface, cpp wrapper codegen needs to update its rule for generating the right parameter and function call. Because changing the emitted code will cause a FC breakage, we add a flag to control the behavior.

Differential Revision: [D54258086](https://our.internmc.facebook.com/intern/diff/D54258086)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120714
Approved by: https://github.com/chenyang78
ghstack dependencies: #120513
2024-03-05 04:32:32 +00:00
Oguz Ulgen
6566b3db67 Add an autotune cache for inductor generated kernels (#120963)
Summary: Inductor currently has a best config cache for kernels that it generates. This is a local cache done via writing to the file system. This diff takes this local cache to remote by reusing the existing triton caching mechanism built via Memcache internally and Redis externally.

Test Plan:
tested locally using `TORCH_INDUCTOR_AUTOTUNE_REMOTE_CACHE=1`

Look at scuba to verify the local testing: https://fburl.com/scuba/triton_remote_cache/z6pypznk

The plan is to land this diff with this turned off and gradually introduce this.

Differential Revision: D54398076

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120963
Approved by: https://github.com/jansel
2024-03-04 16:58:37 +00:00
Kai Londenberg
96eff4ef70 [inductor max autotune] Detailed autotuning result logs ( machine-readable ) (#119004)
This diff introduces a new separate logging of autotuning results,
with the intention of making the results analyzable, specifically
those for the new experimental Cutlass backend.

Results are logged as text files with one JSON document corresponding to a single benchmark result per line.
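
A sketch of the one-JSON-document-per-line format (the record fields are illustrative):
```
import json
import time

def log_autotune_result(path, kernel_name, config, elapsed_ms):
    record = {"ts": time.time(), "kernel": kernel_name, "config": config, "ms": elapsed_ms}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")  # one benchmark result per line

log_autotune_result("autotune_results.jsonl", "cutlass_gemm_128x128x32", {"swizzle": 2}, 0.42)
```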

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119004
Approved by: https://github.com/jansel
ghstack dependencies: #120620
2024-02-29 18:24:13 +00:00
Jackie (Jiaqi) Xu
2ba798df60 [inductor] decompose memory bound mm (#120047)
Summary:
Decompose memory-bound mm/bmm.
Linear decomposition result: D53502768
BMM decomposition result: D53148650
We should only decompose when:
1) bmm: b is large and m, n, k are relatively small;
2) mm/addmm: m is large and n and k are relatively small, e.g. the mm for the input gradient in linear backward should not be decomposed since m is small and n is large.
Need to conduct more experiments to see if we can find a better strategy for decomposition. I have tried using a linear regression model (see the bento results), which does not fit well. For the short term, we use heuristics to determine when to decompose.
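
A hedged sketch of such a size heuristic (the thresholds are invented for illustration, not the ones used):
```
def should_decompose_bmm(b, m, n, k, b_min=32, mnk_max=32):
    # decompose when the batch is large but each matmul is small
    return b >= b_min and max(m, n, k) <= mnk_max

def should_decompose_mm(m, n, k, m_min=10240, nk_max=32):
    # decompose when m is large and n, k are small; e.g. skip the input-gradient
    # mm in linear backward, where m is small and n is large
    return m >= m_min and max(n, k) <= nk_max

print(should_decompose_bmm(b=4096, m=16, n=16, k=16))  # True
print(should_decompose_mm(m=64, n=4096, k=4096))       # False
```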

Test Plan:
```
buck2 test mode/dev-nosan //caffe2/test/inductor:decompose_mem_bound_mm
```

COFFEE APS mc0:
baseline: aps-lsf-0124-bf16-267ccb7a0d
decompose: aps-lsf-0124-bf16-4e3824db40

FIRST AFOC pyper mc1

Differential Revision: D53602514

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120047
Approved by: https://github.com/mengluy0125
2024-02-22 19:29:51 +00:00
Elias Ellison
3278b4c557 be more consrevative until regression is debugged (#119583)
See internal regression: https://www.internalfb.com/diff/D53375778?transaction_fbid=953511712782168

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119583
Approved by: https://github.com/Chillee
2024-02-10 03:06:58 +00:00
Peter Bell
88429a8084 [inductor] Add split scan kernel (#117992)
This PR adds a new type of triton kernel in which data is persistent but the
reduction dimension is split over multiple blocks (up to the entire kernel).
Though this is called a reduction dimension, in actuality we only support scans.
Because of this limitation, I have to be able to block fusions of split scan
operations with reductions, so I chose to add a new `ir.SplitScan` node which
is identical but allows for differentiation in the scheduler.

The split scan kernel is also the first to require an additional workspace buffer
which is used to communicate between CUDA blocks. This is slightly tricky as
the exact scratch space requirement isn't known until the grid size is calculated.
Here I work around the issue by setting a minimum rblock size and always allocating
to the maximum possible grid size for a given input tensor.
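
An illustrative NumPy model of a split scan: each block scans its chunk, the per-block totals go through a shared workspace, and the resulting offsets are added back (this mirrors the structure, not the Triton kernel itself):
```
import numpy as np

def split_cumsum(x, block=4):
    n = len(x)
    out = np.empty_like(x)
    block_totals = []                         # stands in for the cross-block workspace buffer
    for start in range(0, n, block):
        chunk = np.cumsum(x[start:start + block])
        out[start:start + block] = chunk
        block_totals.append(chunk[-1])
    offsets = np.concatenate(([0], np.cumsum(block_totals)[:-1]))
    for i, start in enumerate(range(0, n, block)):
        out[start:start + block] += offsets[i]
    return out

x = np.arange(1, 11)
assert np.array_equal(split_cumsum(x), np.cumsum(x))
```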

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117992
Approved by: https://github.com/jansel
ghstack dependencies: #117991
2024-02-09 01:56:00 +00:00
PyTorch MergeBot
088d538a8d Revert "[Inductor] GEMM shape padding improvements (#118522)"
This reverts commit cc46829f96.

Reverted https://github.com/pytorch/pytorch/pull/118522 on behalf of https://github.com/eellison due to regresses HF ~4/5% ([comment](https://github.com/pytorch/pytorch/pull/118522#issuecomment-1932557670))
2024-02-07 17:42:14 +00:00
Bin Bao
e868a7fedd [AOTI] Rename config.aot_inductor.abi_compatible (#119065)
Summary: Rename config.aot_inductor.abi_compatible to config.abi_compatible, since the cpp_wrapper mode in JIT Inductor will share the same flag.

Differential Revision: [D53478752](https://our.internmc.facebook.com/intern/diff/D53478752)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119065
Approved by: https://github.com/khabinov
2024-02-07 00:14:33 +00:00
Shunting Zhang
fd0bf96c2b [inductor] make multi-kernel work with cpp-wrapper (#117813)
Make multi-kernel work with cpp-wrapper. Multi-kernel generates two equivalent variants for a reduction; at runtime the faster one is picked. But cpp-wrapper needs to save the cubin file during codegen, so the two did not work together at first.

Thanks to Jason for suggesting a neat way to integrate these two. cpp-wrapper does 2-pass codegen right now. For the first pass, we still generate multi-kernel code and run it; for the second pass, we load the cubin file for the faster kernel directly. Multi-kernel Python code is not generated for the second pass since it should not be needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117813
Approved by: https://github.com/jansel
2024-02-05 23:35:41 +00:00
PyTorch MergeBot
b964a1222c Revert "[inductor] make multi-kernel work with cpp-wrapper (#117813)"
This reverts commit c24ffc3f66.

Reverted https://github.com/pytorch/pytorch/pull/117813 on behalf of https://github.com/atalman due to Failing internal tests ([comment](https://github.com/pytorch/pytorch/pull/117813#issuecomment-1927877102))
2024-02-05 19:25:39 +00:00