Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72995
Add ability to specify input dimensions that need to be dynamic.
Example: if the dimension of size 115 in the input sizes "1,115;1" can be dynamic, specify dynamic_dims as "115".
Also recompile and update the CI models and some asm code, as the old ones no longer compile after the compiler changes in context.cpp.
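The spec format above can be sketched as a small parser. This is a hedged illustration only: `DimSpec`, `parseInputSizes`, and the separator conventions (inputs split by `;`, dims by `,`, dynamic dims matched by size value) are assumptions inferred from the example string, not the actual implementation in this PR.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: parse an input-sizes string such as "1,115;1"
// (inputs separated by ';', dims by ',') plus a dynamic_dims string
// such as "115", flagging every dimension whose size matches.
struct DimSpec {
  int64_t size;
  bool dynamic;
};

static std::vector<int64_t> splitInts(const std::string& s, char sep) {
  std::vector<int64_t> out;
  std::stringstream ss(s);
  std::string tok;
  while (std::getline(ss, tok, sep)) {
    if (!tok.empty()) {
      out.push_back(std::stoll(tok));
    }
  }
  return out;
}

std::vector<std::vector<DimSpec>> parseInputSizes(
    const std::string& sizes,
    const std::string& dynamicDims) {
  std::vector<int64_t> dyn = splitInts(dynamicDims, ',');
  std::vector<std::vector<DimSpec>> inputs;
  std::stringstream ss(sizes);
  std::string one;
  while (std::getline(ss, one, ';')) {
    std::vector<DimSpec> dims;
    for (int64_t d : splitInts(one, ',')) {
      bool isDyn = std::find(dyn.begin(), dyn.end(), d) != dyn.end();
      dims.push_back({d, isDyn});
    }
    inputs.push_back(dims);
  }
  return inputs;
}
```

With the example from the summary, `parseInputSizes("1,115;1", "115")` yields two inputs, where only the size-115 dimension of the first input is marked dynamic.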
Test Plan: - Compiles and runs the BI Bytedoc model with and without dynamic inputs.
Reviewed By: ZolotukhinM
Differential Revision: D34233121
fbshipit-source-id: 35095e549ebd6d3bec98b9abb3f0764366a0ff6f
(cherry picked from commit 33166a9f9ac9194b5df0a35280b57708df255ebd)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59466
Change the saved parameter type from at::Tensor to at::IValue to support custom
class parameters, e.g. `__torch__.torch.classes.xnnpack.Conv2dOpContext`.
The NNC-produced kernels don't deal with custom class parameters directly.
They simply pass them through to the external operators that take these custom
class parameters, e.g. `prepacked::conv2d_clamp_run`.
The runtime reuses the `__getstate__` and `__setstate__` methods on the custom class
to persist and restore the state of these parameters.
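The `__getstate__`/`__setstate__` round trip can be sketched as follows. This is a hedged stand-in, not the real PyTorch class: `PackedConvParams` and its `std::vector<float>` state type are invented for illustration; real custom classes register these methods through the TorchScript custom-class API.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical sketch of the persistence scheme: the custom class
// exposes a __getstate__/__setstate__ pair, so the serializer never
// needs to inspect its internals.
struct PackedConvParams {
  std::vector<float> weights;

  // __getstate__: export the state in a serializable form.
  std::vector<float> getstate() const {
    return weights;
  }

  // __setstate__: reconstruct an equivalent object from saved state.
  static PackedConvParams setstate(std::vector<float> state) {
    return PackedConvParams{std::move(state)};
  }
};
```

Persisting writes out `getstate()`; restoring calls `setstate` on the saved state, yielding an object equivalent to the original.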
When calling into the kernel, the runtime passes the untyped raw pointer of each custom
class object to the kernel as `void*`. This is similar to regular tensor parameters,
for which it passes the raw data pointer of the tensor storage. The generated
kernel needs to hardcode the expected type for each parameter and cast it back
before calling the external ops.
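The calling convention above can be sketched in a few lines. Everything here is a stand-in under stated assumptions: `Conv2dContext` and `runConv2dClamp` model `__torch__.torch.classes.xnnpack.Conv2dOpContext` and `prepacked::conv2d_clamp_run`, and the `args` layout is invented for illustration.

```cpp
#include <vector>

// Stand-in for the prepacked custom class state.
struct Conv2dContext {
  float weight_scale;
};

// Stand-in for the external op the kernel passes the context through to.
void runConv2dClamp(const float* input, Conv2dContext* ctx, float* out, int n) {
  for (int i = 0; i < n; ++i) {
    out[i] = input[i] * ctx->weight_scale;
  }
}

// Sketch of a generated kernel: every parameter arrives as void* --
// the raw data pointer for tensors, the raw object pointer for custom
// classes. Here args[0] = input tensor data, args[1] = custom class
// object, args[2] = output tensor data.
void generatedKernel(void** args, int n) {
  const float* input = static_cast<const float*>(args[0]);
  // Hardcoded cast: the kernel "knows" args[1] is a Conv2dContext.
  Conv2dContext* ctx = static_cast<Conv2dContext*>(args[1]);
  float* out = static_cast<float*>(args[2]);
  runConv2dClamp(input, ctx, out, n);
}
```

The key point is the hardcoded `static_cast` on `args[1]`: the kernel carries the expected type for each parameter at code-generation time, since the `void*` calling convention erases it.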
ghstack-source-id: 131897904
Test Plan: - unit tests
Reviewed By: kimishpatel
Differential Revision: D28902496
fbshipit-source-id: 4b2c0895dd28f0b7d344aa08183d42ad6a355dae