Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63578
Added a new op `prim::VarStack` and a pass that transforms instances of `aten::stack(list, dim)` into `prim::VarStack(list[0], ..., list[n], dim)`. Also provided a JIT interpreter implementation.
Most of the implementation/tests are the same as `prim::VarConcat`.
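For illustration, a minimal sketch of such a pass against the JIT IR APIs (the helper name and iteration details are assumptions, not the actual implementation):
```cpp
#include <torch/csrc/jit/ir/ir.h>

#include <memory>
#include <vector>

// Sketch: rewrite aten::stack(list, dim) into prim::VarStack(t0, ..., tn, dim)
// whenever the list comes from a prim::ListConstruct. A real pass would also
// check that the list has no other uses or writers.
void varStackSketch(std::shared_ptr<torch::jit::Graph>& graph) {
  using torch::jit::Node;
  using torch::jit::Value;
  std::vector<Node*> stack_nodes;
  for (Node* n : graph->nodes()) {
    if (n->kind() == c10::aten::stack &&
        n->input(0)->node()->kind() == c10::prim::ListConstruct) {
      stack_nodes.push_back(n);
    }
  }
  for (Node* n : stack_nodes) {
    Node* list = n->input(0)->node();
    // Unpack the list elements and append dim as the trailing input.
    std::vector<Value*> inputs(list->inputs().begin(), list->inputs().end());
    inputs.push_back(n->input(1));
    Node* var_stack =
        graph->create(c10::Symbol::fromQualString("prim::VarStack"), inputs);
    var_stack->insertBefore(n);
    n->output()->replaceAllUsesWith(var_stack->output());
    n->destroy();
  }
}
```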
Test Plan: `buck test caffe2/test/cpp/jit:jit -- TestStackOpt`
Reviewed By: navahgar
Differential Revision: D30426232
fbshipit-source-id: 9829a7db6e0a5038c9b7528c43c25b0c221aa2ce
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63587
Now that there are no classes using KernelArena for memory management, we
can remove it.
Differential Revision: D30429115
Test Plan: Imported from OSS
Reviewed By: navahgar
Pulled By: ZolotukhinM
fbshipit-source-id: 375f6f9294d27790645eeb7cb5a8e87047a57544
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63586
This is another commit in the transition away from KernelArena memory management.
A Tensor is essentially just a pair of <BufPtr, StmtPtr>, and we don't need
to dynamically allocate it at all: it's cheap to pass it by value, and
that's what we're switching to in this commit.
After this change nothing uses KernelScope/KernelArena and they can be
safely removed.
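A minimal sketch of the value-type shape this implies (assuming `BufPtr`/`StmtPtr` are shared-pointer aliases; not the exact class from this commit):
```cpp
#include <memory>
#include <utility>

// Assumed aliases for the sketch; in NNC these are smart-pointer typedefs.
class Buf;
class Stmt;
using BufPtr = std::shared_ptr<Buf>;
using StmtPtr = std::shared_ptr<Stmt>;

// Tensor as a cheap value type: copying it only bumps two refcounts, so
// there is no need for arena allocation or passing by pointer.
class Tensor {
 public:
  Tensor(BufPtr buf, StmtPtr stmt)
      : buf_(std::move(buf)), stmt_(std::move(stmt)) {}
  BufPtr buf() const { return buf_; }
  StmtPtr stmt() const { return stmt_; }

 private:
  BufPtr buf_;
  StmtPtr stmt_;
};
```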
Differential Revision: D30429114
Test Plan: Imported from OSS
Reviewed By: navahgar
Pulled By: ZolotukhinM
fbshipit-source-id: f90b859cfe863692b7beffbe9bd0e4143df1e819
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63195
This helps us to later switch from using KernelArena with raw pointers
to shared pointers without having to change all our source files at
once.
The changes are mechanical and should not affect any functionality.
With this PR, we're changing the following:
* `Add*` --> `AddPtr`
* `new Add(...)` --> `alloc<Add>(...)`
* `dynamic_cast<Add*>` --> `to<Add>`
* `static_cast<Add*>` --> `static_to<Add>`
Due to some complications with argument forwarding, some places became more
verbose, e.g.:
* `new Block({})` --> `new Block(std::vector<ExprPtr>())`
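For reference, the `alloc<>` helper in this scheme can be thought of as a thin wrapper over `std::make_shared` (a sketch, assuming `NodePtr<T>` aliases `std::shared_ptr<T>`):
```cpp
#include <memory>
#include <utility>

template <class T>
using NodePtr = std::shared_ptr<T>;  // assumed alias for the sketch

// alloc<Add>(lhs, rhs) replaces `new Add(lhs, rhs)` and yields a NodePtr<Add>.
template <class T, class... Args>
NodePtr<T> alloc(Args&&... args) {
  return std::make_shared<T>(std::forward<Args>(args)...);
}
```
With perfect forwarding, a braced initializer like `{}` cannot be deduced as a `std::vector<ExprPtr>`, which is why call sites such as `new Block({})` had to spell the type out explicitly.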
Test Plan: Imported from OSS
Reviewed By: navahgar
Differential Revision: D30292779
Pulled By: ZolotukhinM
fbshipit-source-id: 150301c7d2df56b608b035827b6a9a87f5e2d9e9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62921
Added a cache for NNC-generated code across different calls to the same ops.
Before this diff:
```
ProcessedNode time 13402.9 ms
Static Module initialization took 30964.8 ms
```
After this diff:
```
ProcessedNode time 85.4195 ms
Static Module initialization took 4348.42 ms
```
There is one global cache for all the ops, guarded by a reader-writer lock; this is necessary because multiple threads could be loading different models in parallel. Note that this locking does not guarantee that exactly one piece of code is generated for each op: more than one thread could generate code for the same op simultaneously, and all of them will update the cache in some order. But that should be a small number, bounded by the number of threads. There is also no correctness issue, since the generated code is always the same; the code generated by the last thread is retained in the cache and reused later while running the model.
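A minimal sketch of a cache guarded this way (`CodeCache`, `CompiledCode`, and the string key are illustrative stand-ins, not the actual structures in this diff):
```cpp
#include <memory>
#include <shared_mutex>
#include <string>
#include <unordered_map>

struct CompiledCode { /* generated kernel; illustrative placeholder */ };

class CodeCache {
 public:
  // Many threads can look up concurrently under a shared lock.
  std::shared_ptr<CompiledCode> lookup(const std::string& key) const {
    std::shared_lock<std::shared_mutex> lock(mutex_);
    auto it = cache_.find(key);
    return it == cache_.end() ? nullptr : it->second;
  }

  // Writers take the lock exclusively. If two threads generate code for the
  // same op, the last writer wins; this is benign because the generated
  // code is identical.
  void insert(const std::string& key, std::shared_ptr<CompiledCode> code) {
    std::unique_lock<std::shared_mutex> lock(mutex_);
    cache_[key] = std::move(code);
  }

 private:
  mutable std::shared_mutex mutex_;
  std::unordered_map<std::string, std::shared_ptr<CompiledCode>> cache_;
};
```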
Test Plan: Tested inline_cvr model
Reviewed By: hlu1
Differential Revision: D30104017
fbshipit-source-id: 32e9af43d7e724ed54b661dfe58a73a14e443ff7
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61983
Trial #2. The previous PR (https://github.com/pytorch/pytorch/pull/61498) was reverted because it caused a failure in `pytorch_linux_backward_compatibility_check_test`. That is fixed now by adding an entry to the exception list in `check_backward_compatibility.py`.
Test Plan: Imported from OSS
Reviewed By: eellison
Differential Revision: D29828830
Pulled By: navahgar
fbshipit-source-id: 947a7b1622ff6e3e575c051b8f34a789e105bcee
Summary:
Re-land of D29935444
We previously had lots of ops with implementations like this:
```
if (p_node->Output(0).isNone()) {
  p_node->Output(0) = create_empty_like(input_0);
}
...
auto& out = p_node->Output(0);
some_func_out(inputs, out);
```
This would make the output have the correct shape. But it would
also take the dtype of `input_0`, which is not always correct.
This change transforms these blocks to:
```
if (p_node->Output(0).isNone()) {
  // First run: the full op allocates the output with the correct
  // shape and dtype.
  p_node->Output(0) = some_func(inputs);
} else {
  ...
  // Later runs: reuse the preallocated output via the out variant.
  auto& out = p_node->Output(0);
  some_func_out(inputs, out);
}
```
This gives the output the correct shape and dtype.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62267
Reviewed By: ejguan
Differential Revision: D29937253
Pulled By: malfet
fbshipit-source-id: d91ca5d5703490d7d349a1de2ad3bb09b0c33967
Summary:
We previously had lots of ops with implementations like this:
```
if (p_node->Output(0).isNone()) {
  p_node->Output(0) = create_empty_like(input_0);
}
...
auto& out = p_node->Output(0);
some_func_out(inputs, out);
```
This would make the output have the correct shape. But it would
also take the dtype of `input_0`, which is not always correct.
This change transforms these blocks to:
```
if (p_node->Output(0).isNone()) {
  // First run: the full op allocates the output with the correct
  // shape and dtype.
  p_node->Output(0) = some_func(inputs);
} else {
  ...
  // Later runs: reuse the preallocated output via the out variant.
  auto& out = p_node->Output(0);
  some_func_out(inputs, out);
}
```
This gives the output the correct shape and dtype.
Test Plan: `buck test //caffe2/benchmarks/static_runtime:static_runtime_cpptest`
Reviewed By: hlu1
Differential Revision: D29887367
fbshipit-source-id: cef04bfa52ec082ad3a9a32aa27c44e275c6b24c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62067
The wrapper for aten::cat is no longer needed after the variadic cat change in D29565344 (ae58a4c45d).
Also added a simple test for dynamic shapes, i.e., the input tensors in args2 are larger than those in args1.
Reviewed By: navahgar, mikeiovine
Differential Revision: D29864600
fbshipit-source-id: 44a712c2e776815c09e0bf5631412149b81274b2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61361
This PR ports the `clamp` kernel to the structured format. In addition, it introduces `OptionalScalarRef` as a replacement for `c10::optional<Scalar>&`. The latter, although it is a reference type, can still involve copying the contained `Scalar` (e.g. if the actual parameter is a `Scalar`, or if a `c10::optional<Scalar>` is constructed just to call a kernel). `OptionalScalarRef` contains only a `const Scalar&` and stores the flag for whether the instance contains something inside the `Scalar` itself, using a new tag.
For more information, see #55070.
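A self-contained sketch of the idea (the types below are simplified stand-ins, not the actual ATen classes): the empty state is a tag inside the value type, so the wrapper holds a single reference and never copies the `Scalar`.
```cpp
#include <cassert>

// Simplified stand-in for c10::Scalar with a "none" tag added.
struct Scalar {
  enum class Tag { None, Int, Double };
  Tag tag = Tag::None;
  double value = 0;
};

class OptionalScalarRef {
 public:
  // Empty refs all point at a single shared none-tagged sentinel.
  OptionalScalarRef() : ref_(none_sentinel()) {}
  /*implicit*/ OptionalScalarRef(const Scalar& s) : ref_(s) {}

  bool has_value() const { return ref_.tag != Scalar::Tag::None; }
  const Scalar& operator*() const {
    assert(has_value());
    return ref_;
  }

 private:
  static const Scalar& none_sentinel() {
    static const Scalar s{};  // tag defaults to None
    return s;
  }
  const Scalar& ref_;  // never owns, never copies the Scalar
};
```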
Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D29821533
Pulled By: SplitInfinity
fbshipit-source-id: 88d55df5a4b2c14b68a57e4905d90eea1b088d99
Summary:
The GoogleTest `TEST` macro is non-compliant with the `cppcoreguidelines-avoid-non-const-global-variables` clang-tidy check, as is `DEFINE_DISPATCH`.
All changes except the ones to `.clang-tidy` were generated with the following script:
```
for i in `find . -type f -iname "*.c*" -or -iname "*.h" | xargs grep cppcoreguidelines-avoid-non-const-global-variables | cut -f1 -d: | sort | uniq`; do
  sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i
done
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008
Reviewed By: driazati, r-barnes
Differential Revision: D29838584
Pulled By: malfet
fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61595
Add an out variant wrapper for `aten::linear` in the static runtime.
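For context, out-variant wrappers in Static Runtime typically follow the shape below (a sketch under assumed simplifications: bias handling is omitted and the computation is expressed with `at::matmul`; this is not the exact code in the diff):
```cpp
#include <torch/csrc/jit/runtime/static/ops.h>

// Sketch of the Static Runtime out-variant pattern for a linear-like op.
REGISTER_OPERATOR_FUNCTOR(aten::linear, aten_linear, [](Node* n) -> SROperator {
  return [](ProcessedNode* p_node) {
    const auto& input = p_node->Input(0).toTensor();
    const auto& weight = p_node->Input(1).toTensor();
    if (p_node->Output(0).isNone()) {
      // First run: the functional op allocates the output with the right
      // shape and dtype.
      p_node->Output(0) = at::matmul(input, weight.t());
    } else {
      // Later runs: write into the preallocated output, avoiding allocation.
      auto& out = p_node->Output(0).toTensor();
      at::matmul_out(out, input, weight.t());
    }
  };
});
```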
Test Plan: `buck test //caffe2/benchmarks/static_runtime:static_runtime_cpptest`
Reviewed By: hlu1
Differential Revision: D29684236
fbshipit-source-id: 94df6d7267b3f269b2cadf065f207648777147df
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61505
The handling of `self` in the static runtime was previously incorrect. This diff fixes that issue; `self` is essential to prim::GetAttr/SetAttr, since most of the time we're getting and setting attributes on `self`, the TorchScript module.
Reviewed By: ajyu
Differential Revision: D29350173
fbshipit-source-id: 6e62add4cda517ef8cd6c315d4cb0595e7d531fb
Summary:
This PR suppresses clang-tidy warnings in the codebase (for now) so that we can re-enable clang-tidy checks on master.
I ran this script to add the `NOLINTNEXTLINE` comments (on a devserver):
```bash
python3 setup.py develop
# Uses same script that's run on CI and adds the -j (parallel), -s (add comments), -k (continue if diagnostic errors are found) options
python3 tools/clang_tidy.py \
-j \
-s \
-k \
-v \
--paths torch/csrc/ \
-g"-torch/csrc/jit/passes/onnx/helper.cpp" \
-g"-torch/csrc/jit/passes/onnx/shape_type_inference.cpp" \
-g"-torch/csrc/jit/serialization/onnx.cpp" \
-g"-torch/csrc/jit/serialization/export.cpp" \
-g"-torch/csrc/jit/serialization/import.cpp" \
-g"-torch/csrc/jit/serialization/import_legacy.cpp" \
-g"-torch/csrc/onnx/init.cpp" \
-g"-torch/csrc/cuda/nccl.*" \
-g"-torch/csrc/cuda/python_nccl.cpp" \
-g"-torch/csrc/autograd/FunctionsManual.cpp" \
-g"-torch/csrc/generic/*.cpp" \
-g"-torch/csrc/jit/codegen/cuda/runtime/*" \
-g"-torch/csrc/deploy/interpreter/interpreter.cpp" \
-g"-torch/csrc/deploy/interpreter/interpreter.h" \
-g"-torch/csrc/deploy/interpreter/interpreter_impl.h" \
-g"-torch/csrc/deploy/interpreter/test_main.cpp"
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60649
Test Plan: Verified changes by re-running the script (without the `-s` option) and seeing no warnings/errors.
Reviewed By: walterddr, janeyx99
Differential Revision: D29504258
Pulled By: 1ntEgr8
fbshipit-source-id: 78310b30ee8213b73ddb4771ad874665323e7a4e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60364
Tracking issue: #55070
This PR was opened to solve the CI failures on main when merging #59371, #59372, #59373, #59937, and #59938.
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D29265855
Pulled By: ezyang
fbshipit-source-id: ccee3810940542f8b370596105826c96b32231ec
Summary:
The path that has NNC/LLVM disabled still constructs a tensor
expression, even though `supports()` will always return false, so a
`KernelScope` is necessary to manage those memory allocations.
I guess we could avoid building the TEs at all in this case, but it's pretty
clean this way.
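For context, a `KernelScope` acts as a stack-based RAII guard over a `KernelArena`: tensor-expression nodes allocated while it is alive are freed when the scope ends. A usage sketch from the KernelArena era (not code from this diff):
```cpp
using namespace torch::jit::tensorexpr;

void buildTensorExprs() {
  // Every TE node created below is owned by the arena behind this scope
  // and released together when kernel_scope goes out of scope, even on the
  // path where NNC/LLVM is disabled and the TE is never used.
  KernelScope kernel_scope;
  // ... construct tensor expressions here ...
}
```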
Test Plan:
```
scripts/bertrand/static_runtime/run.sh
```
Reviewed By: hlu1
Differential Revision: D29415909
fbshipit-source-id: dde43de8516b9a2cf9f5f7f3699962bf9ccd8c30
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60503
Fixed a few issues in the `static_runtime::to_copy` impl:
- fixed a bug with memory_format
- copied strides when appropriate; this is necessary to make sure that the fbgemm path in the copy kernel gets hit
- fixed the schema in the `ReplaceWithCopy` pass
- added registration of `static_runtime::to_copy.other`
Added more unit tests:
- test dynamic shapes
- test a strided input tensor to `aten::to`
- test the alias case (same input/output)
- test `to.other`
Reviewed By: ajyu
Differential Revision: D26838933
fbshipit-source-id: ec0d1a2deebe998fcfe8858e772e1ef429cb4522
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60229
Fix a bug where we did not resize the output to the input tensor size,
causing the output to be incorrect.
Test Plan:
Test on replayer, rebased on D29217781, with model 278203319_26.
Verify with jit outputs (D28583950)
`./buck-out/gen/admarket/lib/ranking/prediction_replayer/replayer --model_inference_type_target=DISAGG_ACCELERATOR --prediction_replayer_force_model_type=inline_cvr_post_imp_model --prediction_replayer_force_model=278203319_26 --prediction_replayer_target_tier=sigrid.predictor.perf.dianshi_staticruntime_debug_0604.test --prediction_replayer_input_stream_filename=/data/users/ansha/tmp/adfinder/filtered_requests_inline_cvr_100 --ignore_model_id_mismatch --check_performance --fully_remote_sr_connection_options="overall_timeout:10000000,processing_timeout:10000000" --use_new_encoding_for_ads_services --use_new_encoding_from_model_id_to_shard_id --sigrid_force_model_dir=/data/users/ansha/tmp/adfinder/278203319_26/ --sigrid_predictor_model_suffix=.predictor.disagg.local --use_new_encoding_from_model_id_to_shard_id=true --prediction_replayer_force_model_kind=19 --pytorch_predictor_static_runtime_enable=true --prediction_replayer_target_qps=1`
Reviewed By: hlu1, movefast1990
Differential Revision: D29218918
fbshipit-source-id: dab4bbbabeaa8367174ed90edca43d6204c65409
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60001
Fix the aten::to schema to reflect that the output may alias the input.
Test Plan: Added new unit tests.
Reviewed By: ezyang
Differential Revision: D29121620
fbshipit-source-id: c29b6aa22d367ffedf06e47116bc46b3e188c39c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59603
D28698997 (10345010f7) was reverted because I forgot to replace the
```
VLOG(1) << "Found schema mismatch";
n->schema().dump();
```
block in `aten::clamp_min` with `LogAndDumpSchema(n)`, and that caused the bazel build to fail. I don't know why it makes the bazel build fail, though.
Test Plan: OSS CI.
Reviewed By: ajyu
Differential Revision: D28950177
fbshipit-source-id: 9bb1c6619e6b68415a3349f04933c2fcd24cc9a2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58191
There are two clamp overloads: clamp.Scalar and clamp.Tensor. SR needs to support both, or have checks in place to avoid runtime errors. Supporting both is not too hard, so here we are.
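Supporting both overloads can be done by branching on the runtime type of the bounds. A sketch of that idea (the functor shape follows the usual SR pattern, but the names and dispatch details are illustrative, not this diff):
```cpp
#include <torch/csrc/jit/runtime/static/ops.h>

// Sketch: one wrapper covering both clamp overloads by checking whether the
// min/max arguments arrive as Tensors or as (optional) Scalars.
REGISTER_OPERATOR_FUNCTOR(aten::clamp, aten_clamp, [](Node* n) -> SROperator {
  return [](ProcessedNode* p_node) {
    const auto& self = p_node->Input(0).toTensor();
    const auto& min_iv = p_node->Input(1);
    const auto& max_iv = p_node->Input(2);
    if (min_iv.isTensor() || max_iv.isTensor()) {
      // clamp.Tensor overload
      p_node->Output(0) = at::clamp(self,
                                    min_iv.toOptional<at::Tensor>(),
                                    max_iv.toOptional<at::Tensor>());
    } else {
      // clamp.Scalar overload
      p_node->Output(0) = at::clamp(self,
                                    min_iv.toOptional<at::Scalar>(),
                                    max_iv.toOptional<at::Scalar>());
    }
  };
});
```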
Reviewed By: edvgha
Differential Revision: D28371949
fbshipit-source-id: 0ec6b8a0b8c6277e50d8e51e4e7a45aa62211e22
Summary:
Port `addmm` to a structured kernel; a generic sketch of the structured-kernel pattern follows the list below.
Follow-ups:
- migrate `mm` and `addbmm` to structured kernels
- move the TORCH_CHECKs currently in `addmm_cpu_impl_` and `addmm_out_cuda_impl` to the meta function
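A generic sketch of the structured-kernel split (schematic only: structured kernels rely on generated base classes, so this does not compile standalone, and the `set_output` call is an assumed signature). Shape checks and output metadata live in the meta function, while the impl only fills a preallocated output:
```cpp
// Meta function: validate shapes and declare the output; runs for both the
// functional and the out= variants.
TORCH_META_FUNC(addmm)
(const Tensor& self, const Tensor& mat1, const Tensor& mat2,
 const Scalar& beta, const Scalar& alpha) {
  TORCH_CHECK(mat1.dim() == 2 && mat2.dim() == 2, "matrices expected");
  TORCH_CHECK(mat1.sizes()[1] == mat2.sizes()[0],
              "mat1 and mat2 shapes cannot be multiplied");
  set_output(0, {mat1.sizes()[0], mat2.sizes()[1]}, self.options());
}

// Impl function: `result` already exists with the right metadata, so the
// kernel only computes beta * self + alpha * (mat1 @ mat2) into it.
TORCH_IMPL_FUNC(addmm_out_cpu)
(const Tensor& self, const Tensor& mat1, const Tensor& mat2,
 const Scalar& beta, const Scalar& alpha, const Tensor& result) {
  // ... CPU kernel writing into `result` ...
}
```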
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57417
Reviewed By: bdhirsh
Differential Revision: D28291001
Pulled By: walterddr
fbshipit-source-id: 4eafaa30a465e225fbb4d2a69a36f1e037df9122
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58067
- Use `expect_contiguous` in layer_norm to avoid unnecessary refcount bumps when the tensors are already contiguous (see the sketch below)
- Clean up some leftovers from the hacky-wrappers removal: use `c10::MaybeOwned<Tensor>` for bias tensors
- Skip the dispatcher for `at::empty` in the layer_norm impl in Static Runtime
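A small usage sketch of the `expect_contiguous`/`c10::MaybeOwned` pattern referenced above (assuming the standard ATen API; the function here is hypothetical):
```cpp
#include <ATen/ATen.h>

void layer_norm_like(const at::Tensor& input) {
  // Borrows `input` when it is already contiguous (no refcount bump);
  // only materializes an owned contiguous copy when it is not.
  c10::MaybeOwned<at::Tensor> input_ = input.expect_contiguous();
  const at::Tensor& X = *input_;
  // ... use X ...
}
```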
Test Plan: CI
Reviewed By: swolchok
Differential Revision: D28214298
fbshipit-source-id: 73150fa62d5c18f41a2264f8e56bbe5e377ad045