Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61505
The handling of `self` in Static Runtime was previously incorrect. This diff fixes that issue. Handling `self` correctly is essential to `prim::GetAttr`/`prim::SetAttr`: after all, most of the time we're getting and setting attributes on `self`, the TorchScript module.
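For illustration, a minimal sketch (assuming a scripted module saved from Python; not code from this diff) of why `self` matters: attribute accesses compile to `prim::GetAttr`/`prim::SetAttr` nodes that take `%self`, the module itself, as their first input.
```cpp
#include <torch/script.h>
#include <iostream>

int main() {
  // Hypothetical path; any scripted nn.Module that reads attributes works.
  torch::jit::Module m = torch::jit::load("module.pt");
  // forward's graph lists %self as input 0; `self.weight` appears as
  //   %weight = prim::GetAttr[name="weight"](%self)
  std::cout << *m.get_method("forward").graph() << std::endl;
}
```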
Reviewed By: ajyu
Differential Revision: D29350173
fbshipit-source-id: 6e62add4cda517ef8cd6c315d4cb0595e7d531fb
Summary:
This PR suppresses clang-tidy warnings in the codebase (for now) so that we can re-enable clang-tidy checks on master.
I ran this script to add the `NOLINTNEXTLINE` comments (on a devserver):
```bash
python3 setup.py develop
# Uses same script that's run on CI and adds the -j (parallel), -s (add comments), -k (continue if diagnostic errors are found) options
python3 tools/clang_tidy.py \
-j \
-s \
-k \
-v \
--paths torch/csrc/ \
-g"-torch/csrc/jit/passes/onnx/helper.cpp" \
-g"-torch/csrc/jit/passes/onnx/shape_type_inference.cpp" \
-g"-torch/csrc/jit/serialization/onnx.cpp" \
-g"-torch/csrc/jit/serialization/export.cpp" \
-g"-torch/csrc/jit/serialization/import.cpp" \
-g"-torch/csrc/jit/serialization/import_legacy.cpp" \
-g"-torch/csrc/onnx/init.cpp" \
-g"-torch/csrc/cuda/nccl.*" \
-g"-torch/csrc/cuda/python_nccl.cpp" \
-g"-torch/csrc/autograd/FunctionsManual.cpp" \
-g"-torch/csrc/generic/*.cpp" \
-g"-torch/csrc/jit/codegen/cuda/runtime/*" \
-g"-torch/csrc/deploy/interpreter/interpreter.cpp" \
-g"-torch/csrc/deploy/interpreter/interpreter.h" \
-g"-torch/csrc/deploy/interpreter/interpreter_impl.h" \
-g"-torch/csrc/deploy/interpreter/test_main.cpp"
```
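For reference, a hypothetical example of what the `-s` option inserts: a `NOLINTNEXTLINE` comment naming the suppressed check just above the offending line (the check name here is illustrative, not taken from the actual diff).
```cpp
struct Widget {
  // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)
  int capacity = 42;
};
```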
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60649
Test Plan: Verified changes by re-running the script (without the `-s` option) and seeing no warnings/errors.
Reviewed By: walterddr, janeyx99
Differential Revision: D29504258
Pulled By: 1ntEgr8
fbshipit-source-id: 78310b30ee8213b73ddb4771ad874665323e7a4e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60364
Tracking issue: #55070
This PR was opened to fix the CI failures on main when merging #59371, #59372, #59373, #59937, and #59938.
Test Plan: Imported from OSS
Reviewed By: anjali411
Differential Revision: D29265855
Pulled By: ezyang
fbshipit-source-id: ccee3810940542f8b370596105826c96b32231ec
Summary:
The path with NNC/LLVM disabled still constructs a tensor
expression, even though `supports()` will always return false, so a
`KernelScope` is necessary to manage those memory allocations.
I guess we could avoid building the TEs at all in this case, but it's pretty
clean this way.
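For context, a minimal sketch of the pattern described above, assuming the 2021-era tensorexpr API (header paths and names may have changed since): every expression node built while a `KernelScope` is alive is allocated from its arena and freed when the scope is destroyed.
```cpp
#include <torch/csrc/jit/tensorexpr/ir.h>
#include <torch/csrc/jit/tensorexpr/mem_arena.h>

using namespace torch::jit::tensorexpr;

void build_te_without_llvm() {
  KernelScope kernel_scope;  // owns the arena backing the nodes built below
  ExprHandle e = FloatImm::make(1.0f) + FloatImm::make(2.0f);
  (void)e;
}  // the arena (and e's nodes) are freed when kernel_scope goes out of scope
```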
Test Plan:
```
scripts/bertrand/static_runtime/run.sh
```
Reviewed By: hlu1
Differential Revision: D29415909
fbshipit-source-id: dde43de8516b9a2cf9f5f7f3699962bf9ccd8c30
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60503
Fixed a few issues in the `static_runtime::to_copy` impl:
- fixed a bug with `memory_format`
- copied strides when appropriate; this is necessary to make sure that the fbgemm path in the copy kernel gets hit
- fixed the schema in the `ReplaceWithCopy` pass
- added registration of `static_runtime::to_copy.other`
Added more unit tests (illustrated after this list):
- test dynamic shapes
- test strided input tensor to `aten::to`
- test the alias case (same input/output)
- test `to.other`
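A small sketch of the behaviors these tests cover, via the public ATen API (a standalone illustration, not the SR test code itself):
```cpp
#include <ATen/ATen.h>

int main() {
  at::Tensor x = at::rand({4, 4});
  // Alias case: same options and copy=false, so the output aliases the input.
  at::Tensor same = x.to(x.options(), /*non_blocking=*/false, /*copy=*/false);
  TORCH_CHECK(same.data_ptr() == x.data_ptr());
  // Strided input: a transposed (non-contiguous) tensor through to().
  at::Tensor strided = x.t().to(at::kDouble);
  // to.other: take dtype/device from another tensor.
  at::Tensor other = at::zeros({1}, at::kDouble);
  TORCH_CHECK(x.to(other).scalar_type() == at::kDouble);
}
```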
Reviewed By: ajyu
Differential Revision: D26838933
fbshipit-source-id: ec0d1a2deebe998fcfe8858e772e1ef429cb4522
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60229
Fix a bug where we did not resize the output to the input tensor's size, causing the output to be incorrect.
Test Plan:
Test on replayer, rebased on D29217781, with model 278203319_26.
Verify with jit outputs (D28583950)
`./buck-out/gen/admarket/lib/ranking/prediction_replayer/replayer --model_inference_type_target=DISAGG_ACCELERATOR --prediction_replayer_force_model_type=inline_cvr_post_imp_model --prediction_replayer_force_model=278203319_26 --prediction_replayer_target_tier=sigrid.predictor.perf.dianshi_staticruntime_debug_0604.test --prediction_replayer_input_stream_filename=/data/users/ansha/tmp/adfinder/filtered_requests_inline_cvr_100 --ignore_model_id_mismatch --check_performance --fully_remote_sr_connection_options="overall_timeout:10000000,processing_timeout:10000000" --use_new_encoding_for_ads_services --use_new_encoding_from_model_id_to_shard_id --sigrid_force_model_dir=/data/users/ansha/tmp/adfinder/278203319_26/ --sigrid_predictor_model_suffix=.predictor.disagg.local --use_new_encoding_from_model_id_to_shard_id=true --prediction_replayer_force_model_kind=19 --pytorch_predictor_static_runtime_enable=true --prediction_replayer_target_qps=1`
Reviewed By: hlu1, movefast1990
Differential Revision: D29218918
fbshipit-source-id: dab4bbbabeaa8367174ed90edca43d6204c65409
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60001
Fix the aten::to schema to reflect that the output may alias input.
Test Plan: Added new unit tests.
Reviewed By: ezyang
Differential Revision: D29121620
fbshipit-source-id: c29b6aa22d367ffedf06e47116bc46b3e188c39c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59603
D28698997 (10345010f7) was reverted because I forgot to replace the
```
VLOG(1) << "Found schema mismatch";
n->schema().dump();
```
block in `aten::clamp_min` with `LogAndDumpSchema(n)`, and that caused the bazel build to fail. I don't know why it breaks the bazel build, though.
Test Plan: OSS CI.
Reviewed By: ajyu
Differential Revision: D28950177
fbshipit-source-id: 9bb1c6619e6b68415a3349f04933c2fcd24cc9a2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58191
There are two clamp overloads, `clamp.Scalar` and `clamp.Tensor`. SR needs to support both, or have checks in place to avoid runtime errors. Supporting both is not too hard, so here we are.
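For illustration, the two overloads via the public ATen API (not the SR registration code itself):
```cpp
#include <ATen/ATen.h>

int main() {
  at::Tensor x = at::randn({3});
  at::Tensor a = at::clamp(x, 0.0, 1.0);                       // clamp.Scalar
  at::Tensor b = at::clamp(x, at::zeros({3}), at::ones({3}));  // clamp.Tensor
}
```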
Reviewed By: edvgha
Differential Revision: D28371949
fbshipit-source-id: 0ec6b8a0b8c6277e50d8e51e4e7a45aa62211e22
Summary:
Port addmm to structured kernel (see the sketch after this list)
Follow-ups:
- migrate `mm` and `addbmm` to structured kernels
- move the `TORCH_CHECK`s currently in `addmm_cpu_impl_` and `addmm_out_cuda_impl` to meta
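A rough sketch of the structured form (schematic and simplified; it relies on declarations generated from native_functions.yaml, so it is not standalone-compilable, and the real meta function does more checking):
```cpp
// TORCH_META_FUNC holds shape checks and output allocation ("meta");
// TORCH_IMPL_FUNC writes into the pre-allocated output.
TORCH_META_FUNC(addmm)(const Tensor& self, const Tensor& mat1,
                       const Tensor& mat2, const Scalar& beta,
                       const Scalar& alpha) {
  TORCH_CHECK(mat1.size(1) == mat2.size(0),
              "mat1 and mat2 shapes cannot be multiplied");
  set_output(0, {mat1.size(0), mat2.size(1)}, self.options());
}

TORCH_IMPL_FUNC(addmm_out_cpu)(const Tensor& self, const Tensor& mat1,
                               const Tensor& mat2, const Scalar& beta,
                               const Scalar& alpha, const Tensor& result) {
  // compute beta * self + alpha * (mat1 @ mat2) into `result`
}
```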
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57417
Reviewed By: bdhirsh
Differential Revision: D28291001
Pulled By: walterddr
fbshipit-source-id: 4eafaa30a465e225fbb4d2a69a36f1e037df9122
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58067
- Use `expect_contiguous` in layer_norm to avoid unnecessary refcount bumps when the tensors are contiguous (see the sketch after this list)
- Clean up some leftovers from the hacky wrappers removal: use `c10::MaybeOwned<Tensor>` for bias tensors
- Skip the dispatcher for `at::empty` in the layer_norm impl in Static Runtime
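A minimal sketch of the `expect_contiguous` pattern (illustrative, not the layer_norm code itself): when the tensor is already contiguous we borrow it with no refcount bump; otherwise `c10::MaybeOwned` owns the contiguous copy.
```cpp
#include <ATen/ATen.h>

void use_contiguous(const at::Tensor& t) {
  c10::MaybeOwned<at::Tensor> c = t.expect_contiguous();
  // *c is a contiguous Tensor; borrowed if t was already contiguous.
  const float* data = c->data_ptr<float>();
  (void)data;
}
```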
Test Plan: CI
Reviewed By: swolchok
Differential Revision: D28214298
fbshipit-source-id: 73150fa62d5c18f41a2264f8e56bbe5e377ad045
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58100
aten::clone has a second arg, memory_format, which was not previously supported.
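For illustration, the arg in question via the public API:
```cpp
#include <ATen/ATen.h>

int main() {
  at::Tensor x = at::rand({1, 3, 8, 8});
  at::Tensor y = x.clone(at::MemoryFormat::ChannelsLast);
  TORCH_CHECK(y.is_contiguous(at::MemoryFormat::ChannelsLast));
}
```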
Reviewed By: ajyu
Differential Revision: D28347171
fbshipit-source-id: e083cc24c3228048429bba3497326415bc3d1f5a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58018
- Add checks for the number of input args and return nullptr if it doesn't match, as sketched below. This is intended to make Static Runtime more robust, so that an op schema change is less likely to break things. Imagine a new arg is added to an op, or a new overload is added that has the extra arg: SR would simply ignore the extra arg. If that arg has a default value, SR would run the model with the default value and give you wrong results, which can be hard to track down.
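A schematic sketch of the registration pattern this refers to (the op name and body are placeholders; `SROperator` and `ProcessedNode` are Static Runtime types): returning nullptr from the functor rejects the node up front instead of silently running with a mismatched schema.
```cpp
REGISTER_OPERATOR_FUNCTOR(aten::example_op, aten_example_op, [](Node* n) -> SROperator {
  if (n->inputs().size() != 2) {
    return nullptr;  // schema mismatch: let the fallback path handle it
  }
  return [](ProcessedNode* p_node) {
    // ... out-variant implementation ...
  };
});
```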
Reviewed By: ajyu
Differential Revision: D28047955
fbshipit-source-id: 01067059edd5cfea80c4ee121829f7733b11f601
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57578
The original impl in SR assumes that eps is a constant, which is true most of the time. However, it could be a graph input as well. This diff fixes that issue; unit tests are added as well.
Reviewed By: edvgha
Differential Revision: D28207975
fbshipit-source-id: 9a10dec159f3804e43ef74aaa20c3ec6c79548c9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57553
Relanding #57329 (the entire stack), which was reverted because I forgot
to guard a new test with `ifdef LLVM`.
Test Plan: Imported from OSS
Reviewed By: bertmaher
Differential Revision: D28195048
Pulled By: ZolotukhinM
fbshipit-source-id: 50052a2f20f84940b83d1dd1241c8659ff06e014
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57521
When an op is added to Static Runtime, we manually check the schema (not with the JIT schema check, but with `IValue::isTensor()`/`isInt()`, etc.) to make sure it's one we support. If the schema doesn't match, SR throws an exception with `TORCH_CHECK`, which makes the entire graph invalid for SR.
This diff makes ops with unsupported schemas take the fallback path and go through the dispatcher instead:
```
if (node->kind() != prim::ListConstruct &&
    node->kind() != prim::TupleConstruct &&
    node->kind() != prim::DictConstruct &&
    node->kind() != prim::ListUnpack) {
  const Operator& op = node->getOperator();
  TORCH_CHECK(op.hasOperation());
  op_ = op.getOperation(node);
  VLOG(1) << "Fallback interpreter for node: " << PrintNode(node);
}
```
The 2-arg `torch.norm`, which the SR `torch.norm` impl doesn't support (only the 3-, 4-, and 5-arg variants are supported), can now run in Static Runtime in fallback mode.
(Note: this ignores all push blocking failures!)
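For reference, the 2-arg form via the public API, matching the overload that now takes the fallback path:
```cpp
#include <ATen/ATen.h>

int main() {
  at::Tensor x = at::randn({4});
  at::Tensor n = at::norm(x, 2);  // two inputs: self and p
}
```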
Reviewed By: ajyu
Differential Revision: D27531447
fbshipit-source-id: 0a9c2662ac73ed0393a23cc3a2c7df45fdb00fdd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57282
Added support for `fb::expand_dims` for SR.
Test Plan:
buck test caffe2/torch/fb/sparsenn:gpu_test -- test_expand_dims
buck test caffe2/benchmarks/static_runtime/fb:test_fb_operators
Reviewed By: hlu1
Differential Revision: D28043049
fbshipit-source-id: 01f59db7b507f027b220f044d6ff23602adbdb06
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname, "-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit", "--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892
Reviewed By: H-Huang
Differential Revision: D27991944
Pulled By: malfet
fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56444
Added out version for layer_norm
Test Plan:
buck test caffe2/aten:math_kernel_test -- NativeLayerNorm
buck test caffe2/benchmarks/static_runtime:static_runtime_cpptest
Reviewed By: hlu1
Differential Revision: D27873846
fbshipit-source-id: 53ee9fec4ff9a4e78198b031e86b5afd013626dd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56841
- Move arg checks outside the lambda so we can perform them at Static Runtime initialization time
- Use `optional` where possible
- Support the `to.other` overload, the 5-arg form of `torch.to`
Test Plan:
```
buck run //caffe2/benchmarks/static_runtime:static_runtime_cpptest
buck test mode/opt-clang //caffe2/caffe2/fb/predictor:ptvsc2_predictor_bench_test -- --run-disabled
```
Reviewed By: edvgha
Differential Revision: D27933176
fbshipit-source-id: 49d6249c8784c44146461e286e7a301596172d7c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56082
The native_functions.yaml changes were done by codemod using the
following script:
```
import ruamel.yaml
from ruamel.yaml.tokens import CommentToken
from ruamel.yaml.error import CommentMark
from tools.codegen.model import *  # noqa: F403

with open("aten/src/ATen/native/native_functions.yaml", "r") as f:
    contents = f.read()

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.width = 1000
yaml.boolean_representation = ['False', 'True']
r = yaml.load(contents)

convert = '''\
acos
acosh
asin
asinh
atan
atanh
cos
cosh
digamma
erf
erfc
erfinv
exp
expm1
exp2
lgamma
log
log10
log1p
log2
reciprocal
sigmoid
sin
sinc
sinh
special_entr
sqrt
tan
tanh'''.split()

for e in r:
    f = NativeFunction.from_yaml(e, Location("", 0))
    if f.structured or f.structured_delegate is not None:
        continue
    n = f.func.name.name.base
    if n not in convert:
        continue
    # mutate e to make changes
    if f.func.kind() == SchemaKind.out:
        e.insert(1, 'structured', True)
        e.insert(2, 'structured_inherits', 'TensorIteratorBase')
    else:
        # TODO: The .out overload assumption is not sound in general
        e.insert(1, 'structured_delegate', f'{n}.out')
        e['dispatch'].pop('CPU', None)
        e['dispatch'].pop('CUDA', None)
        e['dispatch'].pop('CPU, CUDA', None)
        e['dispatch'].pop('CompositeExplicitAutograd', None)
    *_, last_k = e.keys()
    needs_fixup = False
    if not e['dispatch']:
        if last_k == 'dispatch':
            needs_fixup = True
        del e['dispatch']
    # Manually fix up newlines at the end, because ruamel
    # made some bad life choices about where to associate trailing
    # whitespace for nested dicts; see
    # https://stackoverflow.com/questions/42172399/modifying-yaml-using-ruamel-yaml-adds-extra-new-lines
    if needs_fixup:
        *_, last_k = e.keys()
        # post_key, pre_key, post_value, pre_value
        e.ca.items[last_k] = [None, None, CommentToken('\n\n', CommentMark(0), None), None]

with open("aten/src/ATen/native/native_functions.yaml.new", "w") as f:
    yaml.dump(r, f)
```
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: bhosmer
Differential Revision: D27777769
Pulled By: ezyang
fbshipit-source-id: 1ecbac7cb3e0093167bb61c7d2b1ecb95b8ae17c