Commit Graph

12 Commits

Author SHA1 Message Date
Andrey Malevich
c8f9072ab6 Fix half-float conversion ops to handle tensors larger than 2B of params (#17952)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17952

As described in the title.

Reviewed By: hyuen

Differential Revision: D14435092

fbshipit-source-id: dc614ba16ad531101d04d01aec8f1fbd534ebec5
2019-03-12 23:03:22 -07:00
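The "larger than 2B of params" limit in the commit above is the classic symptom of carrying an element count in a signed 32-bit index, which overflows past 2^31 elements. A hypothetical pure-Python illustration of the failure mode (the helper name `wraps_int32` is illustrative, not from the codebase):

```python
def wraps_int32(n):
    """Simulate storing n in a signed 32-bit integer, as a narrow
    C++ loop index would."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

# An element count for a tensor with ~3 billion entries.
three_billion = 3_000_000_000

# A 32-bit index wraps negative, breaking the conversion loop;
# a 64-bit size type (size_t / int64_t) holds it fine.
assert wraps_int32(three_billion) < 0
assert three_billion < 2**63
```

The fix, in spirit, is simply to widen the index/count type used by the conversion kernels to 64 bits.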
Hector Yuen
5bf9e41938 move half<->float conversions to oss operators (#17548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17548

Expose the half-float operators to OSS.

common/math/Float16.h is the original implementation; it is substituted here by caffe2/c10/util/Half.h.

From the comments, it seems that neither implementation handles denormals.

Reviewed By: jspark1105

Differential Revision: D14244200

fbshipit-source-id: f90ba28c5bf6a2b451b429cc4925b8cc376ac651
2019-03-07 13:00:13 -08:00
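The denormal caveat in the commit above can be seen in a minimal, hypothetical pure-Python sketch of a truncating fp32→fp16 bit conversion (this is illustrative, not the actual caffe2/c10/util/Half.h code): inputs that would land in the half-precision denormal range are flushed to signed zero.

```python
import struct

def float_to_half_bits(f):
    # Reinterpret the float32 bit pattern as an unsigned 32-bit integer.
    (x,) = struct.unpack('<I', struct.pack('<f', f))
    sign = (x >> 16) & 0x8000
    exp = ((x >> 23) & 0xFF) - 127 + 15      # re-bias exponent (127 -> 15)
    mant = x & 0x7FFFFF
    if exp >= 31:
        return sign | 0x7C00                  # overflow -> +/-infinity
    if exp <= 0:
        return sign                           # half denormal range: flush to zero
    return sign | (exp << 10) | (mant >> 13)  # truncate mantissa 23 -> 10 bits

def half_bits_to_float(h):
    sign = (h & 0x8000) << 16
    exp = (h >> 10) & 0x1F
    mant = h & 0x3FF
    if exp == 0:
        x = sign                                        # zero (denormals flushed)
    elif exp == 31:
        x = sign | 0x7F800000 | (mant << 13)            # infinity / NaN
    else:
        x = sign | ((exp - 15 + 127) << 23) | (mant << 13)
    (f,) = struct.unpack('<f', struct.pack('<I', x))
    return f
```

Values exactly representable in half (powers of two, small integers) round-trip exactly, while anything below the half normal range (about 6.1e-5) comes back as zero, which is the behavior the commit comment flags.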
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Orion Reblitz-Richardson
c18f9b4dea Back out "[codemod] - comment out unused parameters"
Original commit changeset: 8e10b1f1e2ae

@allow-large-files
2018-02-26 10:26:25 -08:00
Orion Reblitz-Richardson
7e9f8af018 [codemod] - comment out unused parameters 2018-02-26 10:26:25 -08:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Simon Layton
22ec2ca968 Add shape inference to fp16<->fp32 ops
Summary:
Added to HalfToFloat and FloatToHalf
Closes https://github.com/caffe2/caffe2/pull/1241

Differential Revision: D5902071

Pulled By: salexspb

fbshipit-source-id: 9c79b0c50990200ca5bd6e00b3e8881d1c784e36
2017-09-26 19:33:08 -07:00
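For an elementwise cast like HalfToFloat/FloatToHalf, shape inference is trivial: the output has exactly the input's shape, and only the element type changes. A hypothetical sketch of that rule (the function name and dict layout are illustrative, not caffe2's actual TensorInferenceFunction API):

```python
def infer_cast_shape(input_shape, output_dtype):
    """Shape inference for an elementwise cast op such as
    HalfToFloat or FloatToHalf: the shape passes through unchanged,
    and only the declared element type differs."""
    return {'shape': list(input_shape), 'dtype': output_dtype}

# FloatToHalf on a (4, 8) float32 tensor yields a (4, 8) float16 tensor.
out = infer_cast_shape([4, 8], 'float16')
```

Registering this rule lets downstream passes (memory planning, validation) know output sizes without running the op.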
Devesh Agrawal
1d83a46b44 Improve float16 support
Summary: The operators were missing some float16 support: extend ScatterAssign to cover float16, and introduce a constant fill for float16. The latter needs to be a separate operator rather than ConstantFill, since ConstantFill is in OSS and hence cannot use the fb-specific Float16 code.

Reviewed By: azzolini

Differential Revision: D5664071

fbshipit-source-id: 5b84f625693b6ddddd8b7a35f1541ae40df49fbe
2017-08-23 16:33:07 -07:00
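The ScatterAssign semantics extended above can be sketched in a few lines (hypothetical helper names; the real operators work on typed tensor memory, which is why the element type, float16 included, has to be explicitly supported):

```python
def scatter_assign(data, indices, slices):
    """data[indices[i]] = slices[i] for each i. The logic is
    element-type-agnostic; in C++ the kernel must be instantiated
    per dtype, which is what the float16 extension adds."""
    for i, idx in enumerate(indices):
        data[idx] = slices[i]
    return data

def constant_fill_fp16(num_elements, value):
    # Hypothetical stand-in for a float16 constant fill: every element
    # gets the same (already fp16-rounded) value.
    return [value] * num_elements
```

For example, scattering rows `[1, 1]` and `[2, 2]` into positions 2 and 0 of a zeroed 3-row buffer leaves row 1 untouched.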
Henry Lu
10667a914e Add linter for enforcing caffe operator documentation
Summary: Add a check that whenever a caffe operator is registered for CPU or GPU, documentation is added for that operator.

Reviewed By: dzhulgakov

Differential Revision: D5443110

fbshipit-source-id: 3793c3d29bea1228078cb30bdf8243ac0ab90664
2017-07-24 15:27:47 -07:00
Aapo Kyrola
95291f0f74 Revert D5348078: Add linter for enforcing caffe operator documentation
Summary: This reverts commit c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e

Differential Revision: D5348078

fbshipit-source-id: f536e647cbd221b26ccbc105a5f5f8bdbcc119ab
2017-07-17 18:36:38 -07:00
Henry Lu
32b13d6243 Add linter for enforcing caffe operator documentation
Summary: Add a lint rule checking that whenever a caffe operator is registered for CPU or GPU, documentation is added for that operator.

Reviewed By: dzhulgakov

Differential Revision: D5348078

fbshipit-source-id: c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e
2017-07-17 08:17:23 -07:00
Artem Volkhin
3e08beb75e implement Float16EncodeOp and Float16DecodeOp
Summary: casting between fp16 and fp32

Reviewed By: dzhulgakov

Differential Revision: D4526415

fbshipit-source-id: ebffb00ae12c6bcba79096b13e84ce55ef3f02bb
2017-02-09 17:03:43 -08:00
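The fp16/fp32 casting pair described in this commit can be sketched as an encode/decode round trip using Python's `struct` `'e'` (IEEE binary16) format code; the function names are illustrative, not the actual operator implementations:

```python
import struct

def float16_encode(values):
    """Pack float values into raw little-endian float16 bytes (2 bytes each)."""
    return struct.pack('<%de' % len(values), *values)

def float16_decode(raw):
    """Unpack raw float16 bytes back into Python floats."""
    return list(struct.unpack('<%de' % (len(raw) // 2), raw))

# Values exactly representable in half-precision survive the round trip.
raw = float16_encode([1.0, -2.0, 0.5])
restored = float16_decode(raw)
```

Encoding halves the storage (2 bytes per element instead of 4), at the cost of reduced precision and range for values that are not exactly representable in half.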