Commit Graph

177 Commits

Author SHA1 Message Date
cyy
28f6ae2718 [9/N] Replace c10::optional with std::optional (#130674)
Follows #130509

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130674
Approved by: https://github.com/Skylion007
2024-07-15 00:48:43 +00:00
yuqingj
adbf62cd0a Fix layer norm in static runtime when input is non-contiguous (#124789)
Test: The added unit test fails before this fix and passes after it.

The fix is coming from @swolchok in D56087067.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124789
Approved by: https://github.com/davidberard98
2024-04-24 19:49:36 +00:00
Scott Wolchok
98d1529474 [PyTorch] fix mixed int32/int64 indices/offsets for embedding_bag_out (#120752)
This was an oversight in D27482738 (#55189) -- it only patched the regular embedding_bag operator, but static runtime uses the out variant.

Differential Revision: [D54285460](https://our.internmc.facebook.com/intern/diff/D54285460/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120752
Approved by: https://github.com/houseroad
2024-02-28 20:13:30 +00:00
Mike Iovine
7b0650d5cf Back out "[static-runtime] change the backend for permute_copy" (#89463)
Summary: This permute copy change seems to be causing large regressions on machines without AVX512. Revert to mitigate. This shouldn't be problematic since the improvement from the change was very small anyway.

Differential Revision: D41450088

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89463
Approved by: https://github.com/hlu1
2022-11-22 06:26:10 +00:00
Mike Iovine
dd43903fa9 [Static Runtime] Fix tensor_split sections overload (#88113)
Summary:
D40798763 broke this op. Unfortunately, it wasn't caught at land time due to the recent OSS Static Runtime test problems.

The problem is C++ overload resolution. After D40798763, the int that we were passing to `at::native::tensor_split` was getting implicitly converted to `IntArrayRef`. Fix this by converting the int to a `SymInt` and calling the correct overload.
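
For reference, the two overloads that can get confused look like this from the Python API (a sketch for illustration only; the actual fix is in the C++ call site, which now passes a `SymInt` to select the sections overload):

```
import torch

x = torch.arange(8)

# "sections" overload: a plain int asks for that many roughly equal chunks
print([t.tolist() for t in torch.tensor_split(x, 3)])       # [[0, 1, 2], [3, 4, 5], [6, 7]]

# "indices" overload: a list of ints is treated as split positions
print([t.tolist() for t in torch.tensor_split(x, [2, 5])])  # [[0, 1], [2, 3, 4], [5, 6, 7]]
```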

Test Plan:
```
buck2 test caffe2/benchmarks/static_runtime:static_runtime_cpptest -- Tensor_Split --run-disabled
```

Differential Revision: D40862394

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88113
Approved by: https://github.com/hlu1
2022-11-07 14:36:39 +00:00
Mike Iovine
23fe6c8ca1 [Static Runtime] Fix ReplaceWithMaybeCopy test in OSS (#88099)
Summary: `ReplaceWithMaybeCopy` is guarded by `FBCODE_CAFFE` in `OptimizeGraph`. Run the pass manually to ensure it does the replacement.

Test Plan: Existing tests

Differential Revision: D40858743

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88099
Approved by: https://github.com/huydhn
2022-11-01 09:58:26 +00:00
Mike Iovine
81c4049f4d [Static Runtime] Move PrepackWeights to internal-only graph passes (#87799)
Summary:
The pass introduces an `fb::` operator and thus cannot be used in OSS.

The test failure was not exposed because the Static Runtime tests have been disabled in OSS for a while. The Dev Infra folks encountered this failure when re-enabling the tests.

Test Plan: Existing tests

Differential Revision: D40724547

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87799
Approved by: https://github.com/huydhn
2022-10-28 01:28:34 +00:00
Mike Iovine
7d8ee38a5c [Static Runtime] Fix prim::If tuple corner case (#85446)
Summary: We currently assume that a tuple output implies that the prim::If node returns multiple unpacked outputs, but this is not guaranteed to be the case. Add some logic to return the wrapped tuple when necessary.
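
One way a prim::If can end up with a single tuple-typed output rather than unpacked outputs (a sketch; the exact graph shape depends on the TorchScript compiler):

```
import torch
from typing import Tuple

@torch.jit.script
def swap(cond: bool, a: torch.Tensor, b: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
    # Each branch builds a tuple, so the prim::If may carry one tuple output
    # instead of multiple unpacked tensor outputs.
    if cond:
        out = (a, b)
    else:
        out = (b, a)
    return out
```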

Test Plan: New unit test

Differential Revision: D39712050

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85446
Approved by: https://github.com/tenpercent
2022-09-24 01:01:34 +00:00
Mike Iovine
63c1f2fef9 [Static Runtime] Fold linear prepack ops (#85289)
Summary: Split `quantized_linear_unpacked_weight_v2` into `linear_prepack` and `quantized_linear` so that the prepacking operation may be eliminated by constant folding.

Test Plan:
Fixes a huge regression in an internal model:

```
Before
        89.6141 ms.    99.0923%. fb::quantized_linear_unpacked_weight_v2 (12 nodes)
After
       0.806852 ms.    53.5365%. quantized::linear (12 nodes, out variant)
(prepacking eliminated)
```

Differential Revision: D39622530

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85289
Approved by: https://github.com/davidberard98
2022-09-22 20:23:07 +00:00
Mike Iovine
e4899764b2 [Static Runtime] Fix aten::index_put list conversions (#85298)
Summary: Apparently static runtime's list construct return value is always a `GenericList`, so we cannot use the `toOptionalTensorList` method in the general case -- we must convert each item individually.

Test Plan: New unit test

Differential Revision: D39628979

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85298
Approved by: https://github.com/tenpercent
2022-09-22 20:21:52 +00:00
Max Podkorytov
bf62ece536 [static-runtime] add schema checks to most of the ops where these checks are missing (#84163)
Test Plan: existing unit tests; also fix some failing ones along the way

Differential Revision: D39074902

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84163
Approved by: https://github.com/mikeiovine
2022-09-01 17:21:22 +00:00
Mike Iovine
db7784e722 [Static Runtime] Schema checks for index_put (#84152)
Summary:
`index_put` can take a list of tensors, but Static Runtime always tries to convert its argument to a list of optional tensors. This was causing crashes for some users. Add schema checks to prevent this, and add an overload for the plain tensor-list case.
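
For context, both calling conventions are reachable from ordinary PyTorch code (a sketch with made-up shapes):

```
import torch

x = torch.zeros(4)
i = torch.tensor([0, 2])
v = torch.tensor([1.0, 2.0])
y = x.index_put([i], v)   # indices passed as a list of plain tensors

m = torch.zeros(3, 4)
m[:, i] = 5.0             # advanced indexing lowers to index_put_ with a None (optional tensor) entry
```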

Also, I found a clear bug in the JIT interpreter (mutating the argument when it's not supposed to), so I fixed that too.

Test Plan: New unit test

Differential Revision: D39072214

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84152
Approved by: https://github.com/tenpercent
2022-08-31 01:20:14 +00:00
Mike Iovine
09157c76c0 [Static Runtime] Add schema checks for aten::list (#83753)
Summary:
The previous implementation assumed that there was only one overload and unconditionally tried to convert its input into a string. Some users were running into crashes because of this. Added the list overload and schema checks.
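
The two overloads in question, as they can show up from TorchScript (a sketch; whether both forms script exactly like this can vary across PyTorch versions):

```
import torch
from typing import List

@torch.jit.script
def to_chars(s: str) -> List[str]:
    return list(s)    # string overload: splits the string into characters

@torch.jit.script
def copy_ints(xs: List[int]) -> List[int]:
    return list(xs)   # list overload: returns a shallow copy of the input list
```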

Also, I managed to uncover another bug when writing tests for this case (yikes). Returning inputs didn't work because the input cleanup process would destroy the output. Extended `CreateOwnedRefsForSpecialIValues` to fix that.

Test Plan: CI + new unit tests

Differential Revision: D38870803

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83753
Approved by: https://github.com/tenpercent, https://github.com/albanD
2022-08-22 13:42:47 +00:00
Max Podkorytov
68d2d7866d [static-runtime] change the backend for permute_copy (#83532)
Summary: Testing wrappable dims

Differential Revision: D38717563

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83532
Approved by: https://github.com/mikeiovine
2022-08-17 18:10:36 +00:00
Max Podkorytov
bf75708ce4 [static-runtime] add nnc codegen for aten::div (#76903)
Differential Revision: D36151087

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76903
Approved by: https://github.com/mikeiovine
2022-06-22 05:47:44 +00:00
Hui Guo
0545c85f74 [static runtime] Add JIT prim ops: aten::cpu, aten::list, aten::numel, aten::__range_length (#79111)
Summary: This adds the missing JIT prim ops that appear in the non-ads models for the c2->pt mitigation: aten::cpu, aten::list, aten::numel, aten::__range_length

Test Plan: static runtime unit tests

Differential Revision: D36984960

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79111
Approved by: https://github.com/davidberard98
2022-06-18 16:38:58 +00:00
Hui Guo
8d7fcfa8f1 [static runtime] Add native ops: aten::index_put, aten::item, aten::tensor_split (#79065)
Summary: This adds the PyTorch operators that are currently missing in non-ads models for the c2->pt mitigation: aten::index_put, aten::item, aten::tensor_split

Test Plan: buck run mode/opt caffe2/benchmarks/static_runtime:static_runtime_cpptest

Differential Revision: D36984961

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79065
Approved by: https://github.com/davidberard98
2022-06-15 19:15:34 +00:00
Akshay Parashar
28f87b9cf9 [Static Runtime] Fix aten::clone out variant (#78297) (#78322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78297

Clone following expand/expand_as crashes due to the memoryOverlap check in the copy_ native method. Refer to T118519310 for more details.

Crashing test case:
a = tensor(3, 1)        // strides = (1, 1)
b = tensor(3, 2)        // strides = (2, 1)
temp = a.expand_as(b)   // creates temp with shape (3, 2) and strides (1, 0)
temp.clone()            // crashes in copy_ due to memoryOverlap
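
The overlapping strides are easy to see from eager mode (a sketch of the same shapes; native clone materializes a fresh tensor, which is what the fix falls back to):

```
import torch

a = torch.randn(3, 1)
b = torch.randn(3, 2)
tmp = a.expand_as(b)
print(tmp.shape, tmp.stride())   # torch.Size([3, 2]) (1, 0): stride 0 means internal overlap
print(tmp.clone().stride())      # (2, 1): native clone materializes a contiguous copy
```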

Fix: Disable the out variant for the expanded tensor.
- Call native clone instead of the out variant when dealing with expanded tensors
- Added test cases for both clone variants (out and native)
- Increased the tensor size in the memory planner test case to trigger dynamic allocation

Test Plan:
buck test caffe2/benchmarks/static_runtime/fb:test_fb_operators

buck test caffe2/benchmarks/static_runtime:static_runtime_cpptest

Differential Revision: D36672180

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78322
Approved by: https://github.com/mikeiovine
2022-06-02 21:06:59 +00:00
Max Podkorytov
ebfc70f37a [static-runtime] out variant for aten::mean (#78161)
Summary: As subject

Test Plan: Added unit tests

Differential Revision: D36614633

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78161
Approved by: https://github.com/mikeiovine
2022-06-02 20:56:42 +00:00
Max Podkorytov
2679755bdc [static-runtime] out variant for aten::max (#78271)
Summary: Previously the op was auto-generated, but it only covered the pointwise overload of aten::max. This adds support for the reduction overloads, both overall and along a dim.
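
The overloads being distinguished, from the Python API (a sketch for illustration with made-up shapes):

```
import torch

a = torch.randn(3, 4)
b = torch.randn(3, 4)

pointwise = torch.max(a, b)             # elementwise maximum (the overload already covered)
overall = torch.max(a)                  # reduction over the whole tensor
values, indices = torch.max(a, dim=1)   # reduction along a dimension
```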

Test Plan: Added a unit test

Differential Revision: D36656378

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78271
Approved by: https://github.com/mikeiovine
2022-05-26 23:29:27 +00:00
mikeiovine
56c23f5633 [SR] Out variant for embedding_bag_byte_unpack
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77661

Add an out variant and wrapper in static runtime.

I just added the declaration with the others in `qembeddingbag.h` for now (rather than properly adding the out variant to the torch library). This can be fixed in a followup.

Differential Revision: [D36449840](https://our.internmc.facebook.com/intern/diff/D36449840/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36449840/)!

Approved by: https://github.com/tenpercent
2022-05-25 23:24:11 +00:00
mikeiovine
2ae3c59e4b [SR] Remove linear/relu fusion
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77620

Apparently, this is not implemented in fbgemm, so it's strictly worse than using NNC.

Differential Revision: [D36431811](https://our.internmc.facebook.com/intern/diff/D36431811/)

Approved by: https://github.com/hlu1
2022-05-23 21:46:27 +00:00
Hao Lu
c60d2ef4eb [StaticRuntime] Replace Permute with copy version only when it's followed by reshape or flatten (#77832)
Reviewed By: mikeiovine

Differential Revision: D36466622

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77832
Approved by: https://github.com/mikeiovine
2022-05-20 03:14:01 +00:00
mikeiovine
02713221e3 [SR] Fuse clamp/nan_to_num
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77094

Fuse `clamp` and `nan_to_num` in an NNC kernel. This leads to a big speed up on many models. We can avoid comparisons since clamp potentially gets rid of all of the `inf`s in the input tensor.
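
A small eager-mode illustration of why the fused kernel can skip the `inf` checks when the clamp bounds are finite (a sketch, not the NNC kernel itself):

```
import torch

x = torch.tensor([float("inf"), float("-inf"), float("nan"), 3.0])
y = torch.nan_to_num(x.clamp(-10.0, 10.0))
print(y)   # tensor([ 10., -10.,   0.,   3.]): the finite clamp bounds already removed the infs
```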

Differential Revision: [D36220967](https://our.internmc.facebook.com/intern/diff/D36220967/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36220967/)!

Approved by: https://github.com/navahgar
2022-05-10 23:33:59 +00:00
Mike Iovine
cac2733af1 [SR] Codegen for aten::clamp (#76340)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76340

NNC kernel for `clamp` scalar case
ghstack-source-id: 155466507

Reviewed By: navahgar, huiguoo

Differential Revision: D35904019

fbshipit-source-id: e4115757f7e2cbdf364b88be3f599dfc3028750f
(cherry picked from commit bdc4b918bc5a14490f46c79793f764b28c18388f)
2022-05-04 23:08:49 +00:00
Mike Iovine
fc64dbdc01 [SR] Fuse quantized linear/relu (#75775)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75775

fbgemm kernels already implement the fused kernel, no reason not to use it
ghstack-source-id: 155450342

Test Plan: New unit tests

Reviewed By: navahgar

Differential Revision: D35633297

fbshipit-source-id: a744a33a65ce7dbb9ce8900dbe091b6d56dd4e48
(cherry picked from commit b1361b349862715aa17e6318c5e658cd6401a464)
2022-05-04 19:01:14 +00:00
Mike Iovine
3fa77fa51a [SR] Fix quantized linear tests not managing outputs (#75776)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75776

The output was returned directly instead of a clone, so the output of the relevant op would not be managed.
ghstack-source-id: 154935103

Test Plan: CI

Reviewed By: navahgar

Differential Revision: D35633469

fbshipit-source-id: 7b08b7368e0349a12abf8802a4c625ffecdc5abb
(cherry picked from commit 24bed9ba4da39cff7f3b40f5e49dfded2552b373)
2022-04-27 16:38:54 +00:00
Ansha Yu
ee636e2fd1 [sr] remove max_indices argument of embedding_bag when unncessary (#75993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75993

Strobelight shows copy_ in embedding_bag taking up a lot of time in adfinder_story_post_ad_session_exit_model 334827604_0
{F723683014}

More details in https://fb.quip.com/MKumAjz1YD4a#temp:C:FPD3e5a0871ae5d481286b511ef7

The last 3 outputs of embedding_bag are unused in the graph: P495814049. (The four outputs are sketched after the list below.)
* max_indices output isn't necessary for the main output, so remove it when it's not used in the graph.
* offset2bag is used as an intermediate to calculate the main output, so we don't remove this output even though it's unused in the graph.
* bag_size is used as an intermediate to calculate the main output for MODE_MEAN, so we don't remove this for now.
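
A sketch of the four outputs in question, using the eager op with made-up shapes (not the static runtime variant):

```
import torch

weight = torch.randn(10, 3)
indices = torch.tensor([1, 2, 4, 5, 4, 3])
offsets = torch.tensor([0, 3])

# aten::embedding_bag returns four tensors; only `out` is consumed downstream here.
out, offset2bag, bag_size, max_indices = torch.embedding_bag(weight, indices, offsets)
```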

Test Plan:
`./caffe2/caffe2/fb/predictor/scripts/run_disagg_model_benchmarks.sh 334827604 0 /data/users/ansha/tmp/ads_tail sr_only`

Inputs uploaded to `/mnt/persistent-public/ansha/ads_tail/334827604`

Before:
I0414 10:53:12.261133 1070948 PyTorchPredictorBenchLib.cpp:305] PyTorch run finished. Milliseconds per iter: 0.121318. Iters per second: 8242.78
        0.11156 ms.    99.0457%. aten::embedding_bag (52 nodes, out variant)

After:
I0418 13:05:10.837378 2354604 PyTorchPredictorBenchLib.cpp:305] PyTorch run finished. Milliseconds per iter: 0.0881273. Iters per second: 11347.2
      0.0789221 ms.    98.7096%. static_runtime::embedding_bag (52 nodes, out variant)

* Ads prod canary:
https://www.internalfb.com/intern/ads/canary/443002539593035806/
* 4M test: `servicelab create cogwheel_pyper_inference_fullsync_ads_inline_cvr_post_imp -a D35726594`
https://www.internalfb.com/intern/servicelab/602875732/
* 4M test: `servicelab create cogwheel_pyper_inference_fullsync_ads_10x_ctr_mbl_feed_non_mimo -a D35726594`
https://www.internalfb.com/intern/servicelab/1002874745/

Reviewed By: mikeiovine

Differential Revision: D35726594

fbshipit-source-id: 3b71a0822657bf7a23ce37ca899baef9997b011a
(cherry picked from commit fd5e3098c047a1e7d4348e1c97341eecb892536e)
2022-04-22 15:36:35 +00:00
Mike Iovine
2f98fa9147 [SR] Do not manage tensors that escape scope via container (#74966)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74966

It's clear that we don't want to manage tensors that escape their scope. Previously, we handled this by checking whether the tensor aliased the graph outputs. But there's actually another way to escape scope: by aliasing the wildcard set. The following graph demonstrates this:

```
def forward(self, cond: bool, a, b):
    lst = []
    if cond:
        res = a + b # res should not be managed!!!
        lst.append(res)
    return lst
```

The `if cond:` sub-block returns nothing, but `res` escapes the scope through `lst`.

The fix is simple: we simply have to mark values that alias the wildcard set as an `external_alias_` in `ValueGroup`.

This diff also exposed another issue (via unit tests) in `checkOutputTensorMemoryLeaks`: it assumes that, if a node's `Value*` is managed, the underlying `IValue` must be a tensor. But this is not true after the addition of `to_maybe_copy_out`; TMCO does not produce a tensor in its first output slot if it does not copy.
ghstack-source-id: 153288188

Test Plan: New unit tests cover the problematic case

Reviewed By: navahgar

Differential Revision: D35257087

fbshipit-source-id: 853a761dffe51f2c70720759664dd8dfcd56d1d7
(cherry picked from commit 2c7f519354041975f33626eab6b7f16c2494bbf8)
2022-04-07 19:57:57 +00:00
Mike Iovine
4055d1f653 [SR] Fix StaticRuntime move ctor (#74927)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74927

The move ctor was broken because `BlockRunner` stores a reference to `values_`. When moving runtime instances, the pointer to the root block would be moved, but the reference inside it would not be updated.

Pass `BlockRunner` a raw pointer to the heap-allocated IValues instead to avoid this issue.
ghstack-source-id: 153168602

Test Plan: New unit test/CI

Reviewed By: navahgar

Differential Revision: D35228467

fbshipit-source-id: 04e198b39f898b82677a0e41e1cdf00c2b0c09f3
(cherry picked from commit 03e2c591ac3a907d68025eae9500ed7226dec17e)
2022-04-07 02:16:37 +00:00
Don Jang
85e163c56b [Static Runtime] Fix a bug that aten::full_like reuses a tensor that does not match arguments (#74255)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74255

This change fixes a bug where `aten::full_like` reuses a previously allocated tensor that does not match the requested one when the arguments to `aten::full_like` change dynamically.
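
The "dynamically changed arguments" case looks like this from eager mode (a sketch with made-up shapes; the bug was in reusing the cached output across such calls):

```
import torch

x = torch.empty(2, 3)
a = torch.full_like(x, 1.0)
b = torch.full_like(x, 2.0, dtype=torch.float64)  # different fill value and dtype: the cached
                                                  # tensor from the previous call must not be reused
```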

Test Plan: - Enhanced `StaticRuntime.FullLike` to cover the modified code path.

Reviewed By: mikeiovine

Differential Revision: D34863639

fbshipit-source-id: ca6d4ee3c039e263cc3a4f643d949cea59381608
(cherry picked from commit ae7db0af5e7d95d866027abc968afcb162fd2ef8)
2022-04-05 22:30:41 +00:00
Raghavan Raman
60bda4d06b [Static Runtime] Fix handling relu in quantized linear relu dynamic op
Summary:
The implementation of `PackedLinearWeightFp16::apply_dynamic_impl` [here](https://www.internalfb.com/code/fbsource/[b1ef7c31f022]/fbcode/caffe2/aten/src/ATen/native/quantized/cpu/qlinear_dynamic.cpp?lines=393) does not handle `relu`. It completely ignores the `ReluFused` boolean template parameter.

At this point, callers of that function handle `relu` explicitly. While the correct thing to do would be to handle the `ReluFused` parameter in that implementation, it is not clear if that semantics is being followed in this code. So, we are handling this in SR's out-variant implementation, until the owner fixes that issue.

This issue resulted in incorrect results when Static Runtime was enabled for the MRS video model.

Test Plan:
```
buck run mode/opt //caffe2/benchmarks/static_runtime:static_runtime_cpptest -- --gtest_filter=StaticRuntime.QuantizedLinearReluDynamicFp16
```

Reviewed By: mikeiovine

Differential Revision: D35366309

fbshipit-source-id: e60126e3590d52681ceaee5583b81c4c0b5404d9
(cherry picked from commit cabeb96a792339e7dbfd16cb51a3ac9039812137)
2022-04-04 22:16:22 +00:00
Max Podkorytov
11c412a8ec [static-runtime] optimize empty if blocks at runtime (#74987)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74987

Add specializations to the `prim::If` operator at runtime to save resources when some of its sub-blocks are empty.
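
A minimal example of a graph with an empty sub-block (a sketch; an `if` without an `else` scripts to a prim::If whose else block contains no nodes):

```
import torch

@torch.jit.script
def maybe_add(x: torch.Tensor, flag: bool) -> torch.Tensor:
    if flag:          # the else sub-block of the resulting prim::If has no nodes
        x = x + 1
    return x
```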

Test Plan:
`buck build //caffe2:torch-cpp-cpu`
`buck test //caffe2/benchmarks/static_runtime/...`
Add unit test:
`buck test //caffe2/benchmarks/static_runtime:static_runtime_cpptest -- StaticRuntime.EmptyIfBlock`

Reviewed By: mikeiovine

Differential Revision: D35262952

fbshipit-source-id: 324f88471f33f035f4d8a9b212716530d8e59df2
(cherry picked from commit 2db1b1a6833b1376fa376f54791effc8e12fb77f)
2022-04-01 05:43:33 +00:00
Mike Iovine
3f37337ed0 [SR] Native implementation for reshape_as (#74585)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74585

Native static runtime for `aten::reshape_as`
ghstack-source-id: 152340038

Test Plan: New unit test

Reviewed By: hlu1

Differential Revision: D35060895

fbshipit-source-id: c4e6f8a04c7df3821c7e654bfaf584e5a72ea701
(cherry picked from commit 6fa596cd866a024b6653239e0e30ddad42de242f)
2022-03-28 17:02:14 +00:00
Mike Iovine
9f2344aa40 [SR] Native implementation for select (#74568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74568

Native static runtime implementation for `aten::select(Tensor, int, int)` overload
ghstack-source-id: 152340037

Test Plan: New unit test

Reviewed By: hlu1

Differential Revision: D35053900

fbshipit-source-id: c315d4202a4dfca3360325547af805aea33ecc9f
(cherry picked from commit 8683f214dbd8c081365bad727007bbff969b64d0)
2022-03-28 17:02:14 +00:00
Mike Iovine
facdbe6d72 [SR] Native implementation for IntImplicit (#74562)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74562

Add a native implementation for `aten::IntImplicit`, which is similar to `aten::Int` except for a few extra checks it must do
ghstack-source-id: 152340039

Test Plan: New unit tests

Reviewed By: hlu1

Differential Revision: D35052997

fbshipit-source-id: cb2f0faf7c62382e3f13750d8e1280c49c6b9e42
(cherry picked from commit 359c7493f8deaeccebc27e1b6e6e9777850010c1)
2022-03-28 17:02:14 +00:00
Don Jang
6294a2eb7f [Static Runtime] Add out variant wrapper for aten::index_select (#74321)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74321

This change adds an out variant wrapper for aten::index_select.

Test Plan: Added a unittest

Reviewed By: mikeiovine

Differential Revision: D34928012

fbshipit-source-id: d808363d740d79fa25abee4dd33920fbb6ec7283
(cherry picked from commit ba9b3c0cd4ba240c4a2174f3376580a1880b2b4a)
2022-03-16 23:43:21 +00:00
Mike Iovine
f14a0be302 [SR] Avoid allocating rstd/mean in layer_norm (#73606)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73606

The single-output overload of `layer_norm` internally allocates two tensors. As an optimization, we previously added `static_runtime::layer_norm`. This variant of layer norm had two extra outputs to make the memory planner aware of these extra tensors. But these outputs were unused; it's actually better for us to avoid the allocation and associated computations entirely.
ghstack-source-id: 151394116
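
For context, the single-output overload versus the native op that also materializes mean/rstd (a sketch with made-up shapes):

```
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)
w, b = torch.ones(8), torch.zeros(8)

out = F.layer_norm(x, (8,), w, b)                                 # single-output overload
out2, mean, rstd = torch.native_layer_norm(x, (8,), w, b, 1e-5)   # also allocates mean and rstd
```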

Test Plan: Existing unit tests

Reviewed By: hlu1

Differential Revision: D34562131

fbshipit-source-id: c6a6560e60db43b0b100aedc54ea4265acb347de
(cherry picked from commit 3bed52b6f688b93b9b032c3d2b4be68d08d8eb76)
2022-03-15 22:07:11 +00:00
Don Jang
381c0c080f [Static Runtime] Fix a bug that aten::full reuses a tensor that does not match requested one (#73990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73990

This change fixes a bug where `aten::full` reuses a previously allocated tensor that does not match the requested one when the arguments to `aten::full` change dynamically.

The same fix applies to several other out variant wrappers added to Static Runtime; those fixes follow in subsequent changes.

Test Plan: - Added a unittest.

Reviewed By: mikeiovine

Differential Revision: D34768718

fbshipit-source-id: b6958d6601d36253dd5d4f93596fb14055cca9c9
(cherry picked from commit 42acb40d3a1e9359c0f1a3c25481854e5ad344b6)
2022-03-15 16:13:52 +00:00
Don Jang
1b80f609b0 [Static Runtime] Add out variant wrapper for aten::ones_like (#73945)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73945

This change adds an out variant wrapper for aten::ones_like.

Test Plan:
- Added a unittest.
- Checked that the op execution got switched to its added out variant (P485330978).

Reviewed By: hlu1

Differential Revision: D34727057

fbshipit-source-id: 5022a7f547d53b0c00459d3959ad3c6e6a8a62d5
(cherry picked from commit 1bec4680e8173654400b165d720a0902136dba0f)
2022-03-14 20:29:58 +00:00
Don Jang
60f22a40ef [Static Runtime] Add out variant wrapper for aten::zeros (#73946)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73946

This change adds an out variant wrapper for aten::zeros.

Test Plan:
- Added a unittest.

- Confirmed that the added out variant gets executed by the unittest (P485324923).

Reviewed By: mikeiovine

Differential Revision: D34725843

fbshipit-source-id: 3ac02ba1914c4a51969381e610d4243df65071ed
(cherry picked from commit 368836d51709b7f96c79114984a95606b29766b1)
2022-03-11 00:52:30 +00:00
Don Jang
87564a1bd7 [Static Runtime] Add native op support for aten::len (#73899)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73899

This change adds a native op wrapper for aten::len to Static Runtime, mirroring the JIT implementation (https://www.internalfb.com/code/fbsource/[429d233b9beb5e6f60df7304b792e2ff332f6ecd]/fbcode/caffe2/torch/csrc/jit/runtime/register_prim_ops.cpp?lines=613 , search for "aten::len" in that file).

Test Plan: Added unittests, "StaticRuntime.LenWith*", and confirmed they are passing with `V0307 17:39:39.817956 3516654 impl.cpp:1792] Switch to native impl for node: %2 : int = aten::len(%input.1)` per added unittest: P485159811

Reviewed By: mikeiovine

Differential Revision: D34705231

fbshipit-source-id: 916b1f8bdbc92def07bc3f98ce1db22f0f5ce206
(cherry picked from commit 66d2bb9a0a294b55e1bc87ae33f5553b1460e74b)
2022-03-10 02:57:51 +00:00
Mike Iovine
97b20b9b50 [SR][easy] Stack/concat out variants do not segfault on empty inputs (#73704)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73704

Empty inputs are invalid for these ops. But while looking for optimizations, I noticed that these ops just segfault when that happens, which is not helpful for users. Added a check/error message.
ghstack-source-id: 150812721

Test Plan: New unit tests

Reviewed By: hlu1

Differential Revision: D34596954

fbshipit-source-id: 6b22a3a255273920210dcd41f54a9d238bbbcc14
(cherry picked from commit 9e950bfffef36c320638662bdb72f19eb805a228)
2022-03-09 00:55:57 +00:00
Don Jang
71961d37bb [Static Runtime] Add out variant wrapper for aten::ones (#73851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73851

This change adds an out variant wrapper for aten::ones

Test Plan: Added a unittest

Reviewed By: mikeiovine

Differential Revision: D34557095

fbshipit-source-id: 0d2ac8d0ad6f73067e28c2cebd3b4a018a9b17ae
(cherry picked from commit cc1dda957b8c3acd71de3aa6054c11a9aab5cfa6)
2022-03-07 20:33:22 +00:00
Don Jang
fe7e1bd1ce [Static Runtime] Add auto-generated out variant dispatchers (#72603)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72603

This change adds out variant dispatchers generated by the previous diff.

The number of the out variant dispatchers generated by this diff is 133, which increases the out variant coverage by 309% (current: 43, this diff: 133 + 43 = 176). This number is expected to increase a lot as we develop this script further to cover more ops.

Test Plan:
**Unittest**
Confirmed
```
buck run //caffe2/benchmarks/static_runtime:static_runtime_cpptest
```
is passing.

Reviewed By: swolchok

Differential Revision: D33373928

fbshipit-source-id: 4d94d788282f3f313bb36f2f9452edecd9862246
(cherry picked from commit e4ce8b386d1fcc47b86cb9c9016a70e7a31b452c)
2022-02-28 08:39:10 +00:00
Mike Iovine
d398d4d32c [SR] Disable aten::where out variant (#73367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73367

The op is currently bugged w.r.t. a `condition` input that is not the same shape as the others:

```
def forward(self, cond_1d, x, y):
    shape = [-1] + [1] * (x.dim() - 1)
    cond = cond_1d.view(shape)
    return torch.where(cond, x, y).clone()

Condition:
01
00
[ CPUBoolType{2} ]

A:
06 -9
08 -8
[ CPULongType{2,2} ]

B:
-4 05
-5 -2
[ CPULongType{2,2} ]

Actual:
06 05
-5 -2
[ CPULongType{2,2} ]

Expected:
06 -9
-5 -2
[ CPULongType{2,2} ]
```
ghstack-source-id: 149963254
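
An eager-mode reference for the expected broadcasting behavior (a sketch using the same values as above):

```
import torch

cond = torch.tensor([True, False])
x = torch.tensor([[6, -9], [8, -8]])
y = torch.tensor([[-4, 5], [-5, -2]])

print(torch.where(cond.view(-1, 1), x, y))   # tensor([[ 6, -9], [-5, -2]])
```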

Test Plan: Unit tests exercise broadcasting

Reviewed By: d1jang

Differential Revision: D34454770

fbshipit-source-id: 6ad4c4ca6893d2b87852a17d437437d99ca94ab4
(cherry picked from commit 7135bc40e9fd930c08f5291b7d6b4902ec30005b)
2022-02-26 01:08:45 +00:00
Mike Iovine
d1c5f9e439 [JIT][SR] Introduce prim::IfThenElse (#72587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72587

This pattern frequently appears in a few graphs:

```
%result = prim::If(%condition)
  block0():
    -> (%a)
  block1():
    -> (%b)
```

This is slow, particularly in static runtime. Static runtime creates memory planners/block runners for each sub-block, which eats up a lot of memory and introduces a lot of extra overhead for this relatively simple operation.

This diff introduces a new op that replaces nodes like the above with a single op meant to act like a ternary operator:

```
%result = prim::IfThenElse(%condition, %a, %b)
```
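
The kind of trivial select that produces this pattern when scripted (a sketch; the exact graph depends on the compiler):

```
import torch

@torch.jit.script
def pick(cond: bool, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a if cond else b   # a prim::If whose blocks just forward %a and %b
```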

Test Plan: New unit tests

Reviewed By: eellison

Differential Revision: D34091789

fbshipit-source-id: eb6a8c460c39b4c019a1f4ab1f3f1e5b6edc400c
(cherry picked from commit 0f1b335e5b)
2022-02-17 18:22:48 +00:00
Mike Iovine
c975b928ab [SR][easy] CPU fuser uses native control flow (#72544)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72544

Now that static runtime supports control flow, there's no need to fall back to the JIT. We get better performance with the native control flow since we avoid heap allocation/ref count bumps during stack construction.

I've left the old `prim::TensorExprDynamicGroup` around in case we need to support it in the future. I've also added native support for a few scalar ops that are used inside the control flow sub-blocks.
ghstack-source-id: 148825816

Test Plan: New unit tests

Reviewed By: d1jang

Differential Revision: D34083080

fbshipit-source-id: a7ffc0fda39ab3df3ba47e44a03d857131dc1e50
(cherry picked from commit 2ef39e0e54)
2022-02-10 18:40:39 +00:00
Don Jang
84729cef70 [Static Runtime] Fix a bug in aten::slice to honor optional arguments (#72530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72530

This bug was revealed by a failed attempt to run a feed/story model.

Test Plan:
- This fix was tested to successfully run the failed model: P479037453
- Added a unittest

Reviewed By: mikeiovine

Differential Revision: D34055801

fbshipit-source-id: 4a3e06bbb3b9fa78b0514c9c67aa4a0b79f46a8d
(cherry picked from commit bfa2bfb81c)
2022-02-09 17:05:45 +00:00
Mike Iovine
6c0521b919 [SR] Add native implementations for converted prim ops (#71474)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71474

The PyTorch edge team is working on promoting some prim ops to interpreter instructions (see D33398092). Since the JIT fallback ops will be unavailable soon, we need to implement these ops in static runtime.

Ops not included in this diff:
* `aten::__is__` and `aten::__isnot__`: disabled in static runtime for unrelated reasons
* `prim::NumToTensor` and `aten::__get__.Dict` already exist
ghstack-source-id: 148641179

Test Plan: `buck test caffe2/benchmarks/static_runtime:static_runtime_cpptest`

Reviewed By: d1jang

Differential Revision: D33657816

fbshipit-source-id: 6d15244ae1024a56d3b25e51a433fa104ce8ee5e
(cherry picked from commit 33f8f861ff)
2022-02-08 23:25:34 +00:00