Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14143
ConvTranspose has a per-operator attribute rename, which meant that the
global attribute rename kernels => kernel_shape was not applied.
This change makes the global renames always apply, while per-op
renames can override them for specific attributes.
Note: The Python frontend path isn't actually used for ConvTranspose, but I
thought it would be good to make it consistent.
Reviewed By: yinghai
Differential Revision: D13113395
fbshipit-source-id: cd3f124b4b5c753a506d297138b7d002b51bfb38
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13429
Made the SSA transformation idempotent. This ensures that if a caffe2 graph is already in SSA form, the names of the ONNX model's inputs/outputs match those of the caffe2 graph.
Avoid evaluating the model by running it when the shapes of all the blobs are present in the value_info map. This speeds up the conversion and decreases its memory usage for medium to large nets.
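An idempotent SSA rewrite can be sketched as: only append a version suffix when a blob is actually rewritten, so the first write keeps the original name and a second pass over an already-SSA graph is a no-op (function and data shapes here are illustrative, not the caffe2 implementation):

```python
def ssa_rewrite(ops):
    # ops: list of (inputs, outputs) tuples of blob names.
    # An output is renamed only when that blob was written before,
    # so a graph already in SSA form comes back unchanged.
    versions = {}   # blob -> number of rewrites seen so far
    current = {}    # blob -> its current SSA name
    new_ops = []
    for inputs, outputs in ops:
        new_inputs = [current.get(name, name) for name in inputs]
        new_outputs = []
        for name in outputs:
            if name in versions:
                versions[name] += 1
                renamed = f"{name}_{versions[name]}"
            else:
                versions[name] = 0
                renamed = name  # first write keeps the original name
            current[name] = renamed
            new_outputs.append(renamed)
        new_ops.append((new_inputs, new_outputs))
    return new_ops
```

Because first writes are never renamed, external input/output names survive the rewrite, which is what lets the ONNX model's input/output names match the caffe2 graph's.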
Reviewed By: abadams
Differential Revision: D12873354
fbshipit-source-id: d695b28e610562afa9a41c2d4da05be212ccb488
Summary:
- The exhaustive_search attribute will be blacklisted so it
will be discarded from the converted ONNX model. At present
it throws an error while verifying the ONNX model.
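Dropping a caffe2-only attribute during conversion can be sketched as a simple blacklist filter (the `ATTRIBUTE_BLACKLIST` name is illustrative, not the actual frontend symbol):

```python
# Caffe2-only attributes with no ONNX equivalent; dropping them keeps
# the ONNX checker from rejecting the converted model.
ATTRIBUTE_BLACKLIST = {"exhaustive_search"}

def filter_attributes(attrs):
    # attrs: mapping of attribute name -> value from a caffe2 operator.
    return {k: v for k, v in attrs.items() if k not in ATTRIBUTE_BLACKLIST}
```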
Signed-off-by: Parth Raichura <parth.raichura@softnautics.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12805
Differential Revision: D10502374
Pulled By: ezyang
fbshipit-source-id: 0926dfa3237a8a431184e7f7250146e5b0cbfb85
Summary:
Will bump up to opset 8 in another PR to match the current opset version.
Already tested by generating the models in the current model zoo.
Closes https://github.com/pytorch/pytorch/pull/8854
Reviewed By: ezyang
Differential Revision: D8666437
Pulled By: houseroad
fbshipit-source-id: feffdf704dd3136aa59c0f1ff1830c14d1bd20aa
* Skip some tests to unbreak CI
* Pass the opset_version to run_node
* Remove the stale check_graph call, caffe2_net_to_onnx_model will invoke check_model
* Add support to TensorRT
* Removed License header
* Bind input/output by position
* Comments
* More comments
* Add benchmark
* Add warning for performance degradation on large batch
* Address comments
* Comments
* Remove the redundant opset_import in onnx
* Set the default ir version in make_model
* Use the target_opset_version in Caffe2Frontend
* remove make_model from helper in caffe2.python.onnx
* Handle legacy pad in Caffe2==>ONNX converter, also remove fake initializer
* Address the comments: 1) filter fake initializers before the SSA rewrite, 2) polish the legacy padding handling logic
* Add test cases to cover the code just added
* Nit
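The legacy padding handling mentioned above can be sketched as expanding caffe2's single symmetric `pad` value into the explicit `pads` list ONNX expects (a simplification of the real converter logic, which also handles caffe2's legacy_pad enum values):

```python
def legacy_pad_to_onnx_pads(pad, n_spatial_dims=2):
    # caffe2 historically allowed a single scalar `pad` applied
    # symmetrically to every spatial dimension; ONNX Conv/Pool ops
    # expect an explicit `pads` list of begin values then end values.
    return [pad] * n_spatial_dims + [pad] * n_spatial_dims
```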