Avoid C++ exception and stack trace (#111438)

Summary:
Raising an exception here causes pybind11's dispatcher to kick in, which triggers aiplatform's error-reporting logic (aiplatform::error_reporting::util::printAddressesWithBestEffortLocationInfo), which ultimately uses `folly::symbolizer::Symbolizer::symbolize` to build up the stack trace. In 3.8 this accounts for about 3.62% of CPU time per pyperf (https://fburl.com/scuba/pyperf_experimental/on_demand/oi554uvy). In Cinder 3.8 it is, for some reason, even worse: 5.94% of CPU time.
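
A rough way to observe that overhead (a hedged micro-benchmark sketch, not the cited pyperf methodology; op names are illustrative and absolute numbers will vary by build):

import timeit

import torch

# On builds predating this change, each miss raised and translated a C++
# exception (and, internally, symbolized a stack trace); a hit does not.
miss = timeit.timeit(lambda: hasattr(torch.ops.aten, "no_such_op"), number=10_000)
hit = timeit.timeit(lambda: hasattr(torch.ops.aten, "add"), number=10_000)
print(f"miss: {miss:.4f}s  hit: {hit:.4f}s")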

The exception is raised when calling hasattr() on `prims` for ops like `bitwise_left_shift` that don't exist (sketched below): https://www.internalfb.com/code/fbsource/[2d695f650d00]/fbcode/caffe2/torch/_inductor/lowering.py?lines=590
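
A minimal sketch of that probing pattern (the real code lives at the linked line of torch/_inductor/lowering.py; the fallback branch here is hypothetical):

import torch

# hasattr() swallows the AttributeError, so the caller never observes the
# failure -- it only pays for it. Before this change, every such miss also
# raised (and translated) a C++ exception under the hood.
prims = torch.ops.prims
if hasattr(prims, "bitwise_left_shift"):
    fn = prims.bitwise_left_shift  # hit: use the registered op
else:
    fn = None  # hypothetical fallback; the real lowering code differs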

That exception is ultimately swallowed anyway, so the stack trace has no meaningful value. Moreover, because a missing op is an expected outcome in this code rather than some unexpected C++ failure, a stack trace carries even less information.

This change makes the failure case return (None, None) instead of a valid op/overload list, avoiding the exception entirely and reclaiming the 3.62%-5.94% of CPU time.
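
The observable behavior after the change, assuming the post-change contract shown in the diffs below (the op name is illustrative):

import torch

# The binding now returns (None, None) for a missing op instead of throwing.
op, overload_names = torch._C._jit_get_operation("prims::no_such_op")
assert op is None and overload_names is None

# hasattr() still reports False; the AttributeError is now raised in pure
# Python, so no C++ exception translation or stack symbolization occurs.
assert not hasattr(torch.ops.prims, "no_such_op")

Returning a sentinel instead of throwing keeps the common negative-probe path exception-free on the C++ side, while the Python wrapper still raises AttributeError so getattr()/hasattr() semantics are unchanged.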

Test Plan: Existing CI and perf run: https://fburl.com/scuba/pyperf_experimental/on_demand/oi554uvy

Differential Revision: D50018789

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111438
Approved by: https://github.com/davidberard98

@@ -823,6 +823,10 @@ class _OpNamespace(types.ModuleType):
         qualified_op_name = f"{namespace_name}::{op_name}"
         try:
             op, overload_names = torch._C._jit_get_operation(qualified_op_name)
+            if op is None:
+                raise AttributeError(
+                    f"'_OpNamespace' '{self.name}' object has no attribute '{op_name}'"
+                )
         except RuntimeError as e:
             # Turn this into AttributeError so getattr(obj, key, default)
             # works (this is called by TorchScript with __origin__)


@@ -1628,7 +1628,10 @@ void initJITBindings(PyObject* module) {
       try {
         auto symbol = Symbol::fromQualString(op_name);
         const auto& unsortedOps = getAllOperatorsFor(symbol);
-        TORCH_CHECK(!unsortedOps.empty(), "No such operator ", op_name);
+        if (unsortedOps.empty()) {
+          // No such operator
+          return py::make_tuple(py::none(), py::none());
+        }
         // Depending on the order of registration, aten or jit ops may be
         // registered first. This sorting is helpful in cases where