Bugra Akyildiz
27c7158166
Remove __future__ imports for legacy Python 2 support ( #45033 )
Summary:
There is a tool called `2to3` whose `future` fixer can be targeted specifically to remove these imports; the `caffe2` directory has the most redundant ones:
```2to3 -f future -w caffe2```
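The same fixer can also be driven from Python. A minimal sketch, assuming the stdlib `lib2to3` package that backs the `2to3` CLI (present through Python 3.12) and a made-up source string:

```python
# Minimal sketch: apply only the "future" fixer, as `2to3 -f future` does.
# lib2to3 is the stdlib engine behind the 2to3 CLI (removed in Python 3.13).
from lib2to3.refactor import RefactoringTool

src = (
    "from __future__ import absolute_import, division, print_function, unicode_literals\n"
    "print('hello caffe2')\n"
)

tool = RefactoringTool(["lib2to3.fixes.fix_future"])
fixed = tool.refactor_string(src, "<example>")
print(str(fixed))  # the now-redundant __future__ import is stripped out
```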
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
Christopher Whelan
5cd0f5e8ec
[PyFI] Update hypothesis and switch from tp2 ( #41645 )
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41645
Pull Request resolved: https://github.com/facebookresearch/pytext/pull/1405
Test Plan: buck test
Reviewed By: thatch
Differential Revision: D20323893
fbshipit-source-id: 54665d589568c4198e96a27f0ed8e5b41df7b86b
2020-08-08 12:13:04 -07:00
Nikita Shulga
fd9205e14b
Enable caffe2 tests for ROCm jobs ( #41604 )
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41604
Reviewed By: ezyang
Differential Revision: D22603703
Pulled By: malfet
fbshipit-source-id: 789ccf2bb79668a5a68006bb877b2d88fb569809
2020-07-28 14:21:42 -07:00
Gu, Jinghui
575aebc182
Implement operators for DNNLOWP ( #18656 )
Summary:
Implement operators for DNNLOWP, including int8_conv, int8_FC, int8_pooling, int8_relu, int8_sum, quantize/dequantize, and order_switch operators.
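A hedged sketch of how such int8 operators are typically wired together through Caffe2's `DNNLOWP` engine; the operator choice, blob names, and quantization parameters below are illustrative and not taken from this PR:

```python
# Net construction only; assumes a Caffe2 build with the quantized (DNNLOWP) ops.
# Blob names and Y_scale/Y_zero_point values are made up for illustration.
from caffe2.python import core

net = core.Net("int8_sketch")
net.Int8Quantize(["X"], ["X_q"], engine="DNNLOWP", Y_scale=0.05, Y_zero_point=0)
net.Int8FC(["X_q", "W_q", "b_q"], ["Y_q"], engine="DNNLOWP", Y_scale=0.1, Y_zero_point=0)
net.Int8Relu(["Y_q"], ["Y_q"], engine="DNNLOWP")
net.Int8Dequantize(["Y_q"], ["Y"], engine="DNNLOWP")
print(net.Proto())  # inspect the generated int8 operators
```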
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18656
Differential Revision: D14767092
Pulled By: yinghai
fbshipit-source-id: 1f3e24929a358a42214da333bd304c593ea4468f
2019-04-10 12:04:39 -07:00
PenghuiCheng
939877bf4b
Implement WeightedSum op for MKL-DNN and fix FC op output shape issue.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14407
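For context, a small hedged example of `WeightedSum` semantics on plain CPU; blob names and values are made up, and this is not the MKL-DNN path added by this change:

```python
# WeightedSum takes alternating (tensor, scalar-weight) inputs and computes
# w0*X0 + w1*X1 + ...; names and values here are illustrative.
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("X0", np.ones((2, 3), dtype=np.float32))
workspace.FeedBlob("w0", np.array([0.5], dtype=np.float32))
workspace.FeedBlob("X1", np.full((2, 3), 2.0, dtype=np.float32))
workspace.FeedBlob("w1", np.array([0.25], dtype=np.float32))
workspace.RunOperatorOnce(
    core.CreateOperator("WeightedSum", ["X0", "w0", "X1", "w1"], ["Y"])
)
print(workspace.FetchBlob("Y"))  # 0.5*X0 + 0.25*X1 == 1.0 everywhere
```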
Reviewed By: yinghai
Differential Revision: D13364364
Pulled By: wesolwsk
fbshipit-source-id: e69bcd1bc52e35b2f0e45e5dc40184f1bd66605d
2018-12-07 12:35:19 -08:00
Gu, Jinghui
60963c2ecb
Add "axis" and "axis_w" arguments in FC to support customized axix to reduce dim. ( #12971 )
Summary:
Add "axis" and "axis_w" arguments in FC to support customized axix to reduce dim.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12971
Reviewed By: bddppq
Differential Revision: D12850675
Pulled By: yinghai
fbshipit-source-id: f1cde163201bd7add53b8475329db1f038a73019
2018-11-21 15:44:50 -08:00
Gu, Jinghui
dbab9b73b6
Separate mkl, mklml, and mkldnn ( #12170 )
Summary:
1. Remove avx2 support in mkldnn
2. Separate mkl, mklml, and mkldnn
3. Fix conv fusion test case
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12170
Reviewed By: yinghai
Differential Revision: D10207126
Pulled By: orionr
fbshipit-source-id: 1e62eb47943f426a89d57e2d2606439f2b04fd51
2018-10-29 10:52:55 -07:00
Sebastian Meßmer
b3e87b1066
Fix fbcode compatibility ( #7939 )
2018-05-30 13:35:46 -04:00
Jinghui
769397eb77
[Caffe2] [feature request] Add gradient operators for IDEEP ( #7234 )
* Add gradient operators for IDEEP (see the sketch after this list)
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add gradient test cases for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Upgrade third_party/ideep
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Refine SumOp for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Share input buffer in fallback op if possible
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fallback ConvTranspose op for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fix bug introduced by the patch of sharing input buffer
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Share output buffer in fallback operators
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Remove IDEEP to resolve repo issue
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Refresh IDEEP repo
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Remove redundant lines in IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fallback operators for IDEEP
(Flatten, ResizeLike, Transpose, and Reshape)
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
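The commit list above does not spell out how the new gradients are exercised, so the following is only a hedged sketch: build a small net under the IDEEP device and let Caffe2's gradient pass emit the corresponding `*Gradient` ops. The op chain and blob names are illustrative, and it assumes a Caffe2 build with IDEEP/MKL-DNN enabled:

```python
# Illustrative only: emit gradient ops for a tiny net placed on the IDEEP device.
# Which gradients this PR covers is not stated here, so the op choice is a guess.
from caffe2.proto import caffe2_pb2
from caffe2.python import core

with core.DeviceScope(core.DeviceOption(caffe2_pb2.IDEEP)):
    net = core.Net("ideep_grad_sketch")
    net.Conv(["X", "W", "b"], "Y", kernel=3)
    net.Relu("Y", "Y")
    net.SumElements("Y", "loss", average=True)
    net.AddGradientOperators(["loss"])
print(net.Proto())  # shows ConvGradient / ReluGradient on the IDEEP device
```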
2018-05-09 08:52:24 -07:00
Jinghui
26ddefbda1
[feature request] [Caffe2] Enable MKLDNN support for inference ( #6699 )
* Add operators based-on IDEEP interfaces
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Enable IDEEP as a caffe2 device (see the sketch after this list)
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add test cases for IDEEP ops
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add IDEEP as a caffe2 submodule
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Skip test cases if no IDEEP support
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Correct cmake options for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add dependences on ideep libraries
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fix issues in IDEEP conv ops, etc.
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Move ideep from caffe2/ideep to caffe2/contrib/ideep
Signed-off-by: Gu Jinghui <jinghui.gu@intel.com>
* Update IDEEP to fix cmake issue
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fix cmake issue caused by USE_MKL option
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Correct comments in MKL cmake file
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
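A hedged sketch of what running on the newly added IDEEP device looks like from the Python side; the blob name, shape, and choice of `Relu` are made up, and it assumes a Caffe2 build configured with MKL-DNN/IDEEP:

```python
# Illustrative only; assumes a Caffe2 build with IDEEP enabled.
import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

device = core.DeviceOption(caffe2_pb2.IDEEP)
workspace.FeedBlob("X", np.random.randn(1, 3, 8, 8).astype(np.float32), device)
workspace.RunOperatorOnce(
    core.CreateOperator("Relu", ["X"], ["Y"], device_option=device)
)
print(workspace.FetchBlob("Y").shape)  # (1, 3, 8, 8)
```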
2018-04-22 21:58:14 -07:00