Commit Graph

19 Commits

Author SHA1 Message Date
Nikita Shulga
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
The GoogleTest `TEST` macro is non-compliant with this check, and so is `DEFINE_DISPATCH`.

All changes except the ones to `.clang-tidy` are generated using the following script:
```
for i in $(find . -type f -iname "*.c*" -or -iname "*.h" \
           | xargs grep cppcoreguidelines-avoid-non-const-global-variables \
           | cut -f1 -d: | sort | uniq); do
  sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i
done
```
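
For context, an illustrative example (not part of this diff) of why the check was turned off: macros such as GoogleTest's `TEST` and PyTorch's `DEFINE_DISPATCH` expand to non-const globals, so every use site triggered `cppcoreguidelines-avoid-non-const-global-variables` and needed a NOLINT stub like the ones the script above deletes.
```
// Illustrative only -- not from this commit. The TEST macro registers the test
// through a non-const global object, which is exactly what the disabled check flags.
#include <gtest/gtest.h>

// Before this change, generated stubs like the following sat above each use:
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
TEST(ExampleSuite, Passes) {
  EXPECT_TRUE(true);
}

// A dispatch-registration macro similarly boils down to a mutable global that
// kernels assign into at static-initialization time.
using ExampleKernelFn = void (*)();
ExampleKernelFn example_dispatch_stub = nullptr;  // non-const global -> flagged
```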

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
Nikita Shulga
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    # Run clang-tidy on a single file; if anything was modified, commit the
    # generated NOLINT stubs for that file.
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname, "-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit", "--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
Yan Xie
285ba0d068 Enable fp16 for UniformFill (#44540)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44540

Support fp16 as the output type for UniformFill.
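
For illustration, a minimal sketch (not the actual Caffe2 kernel) of what an fp16 output for a uniform fill amounts to: draw uniform float32 samples and store them rounded to half precision. `c10::Half` is used here purely for demonstration.
```
// Sketch only: draw uniform samples in fp32 and round each to fp16 on store.
#include <c10/util/Half.h>
#include <cstddef>
#include <random>
#include <vector>

std::vector<c10::Half> uniform_fill_fp16(std::size_t n, float lo, float hi) {
  std::mt19937 gen(0);  // fixed seed for the example
  std::uniform_real_distribution<float> dist(lo, hi);
  std::vector<c10::Half> out(n);
  for (auto& v : out) {
    v = c10::Half(dist(gen));  // fp32 sample rounded to half precision
  }
  return out;
}
```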

Reviewed By: jianyuh

Differential Revision: D23558030

fbshipit-source-id: 53a5b2c92cfe78cd11f55e6ee498e1bd682fe4a1
2020-09-15 15:09:18 -07:00
Yinghai Lu
64323ae177 Back out "Use simd version for fp16 conversions" (#32640)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32640

Original commit changeset: 3b1ee0ba756e

Reverting according to https://our.intern.facebook.com/intern/diff/D19291499/?transaction_id=1347995678706116&dest_fbid=465672071047258

Test Plan: unittests.

Reviewed By: jspark1105, jianyuh

Differential Revision: D19576708

fbshipit-source-id: bec92318523498067935234ab702c925ece71da6
2020-01-27 10:01:24 -08:00
Yinghai Lu
8b4feff01d Use simd version for fp16 conversions (#31897)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31897

The previous version only used AVX2. The `_simd` version uses AVX-512 if the CPU supports it.
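
For illustration, a minimal sketch of that dispatch decision (not fbgemm's actual code; GCC/Clang CPU-detection builtins are assumed): take the AVX-512 conversion path only when the CPU reports support, otherwise fall back to AVX2.
```
#include <cstdio>

int main() {
  // Mirrors the _simd selection at a high level: prefer AVX-512 when the CPU
  // supports it, otherwise use the AVX2 kernels.
  if (__builtin_cpu_supports("avx512f")) {
    std::puts("dispatch: AVX-512 fp16<->fp32 kernels");
  } else if (__builtin_cpu_supports("avx2")) {
    std::puts("dispatch: AVX2 fp16<->fp32 kernels");
  } else {
    std::puts("dispatch: scalar fallback");
  }
  return 0;
}
```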

Test Plan: Unittest

Reviewed By: tracelogfb

Differential Revision: D19291499

fbshipit-source-id: 3b1ee0ba756e5c9defbd5caf7f68982d9b2ca06c
2020-01-08 14:36:38 -08:00
Jianyu Huang
199e1fb348 Use AVX2 to increase frequency for FP16<->FP32 Caffe2 ops (#31203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31203

For a multi-instance environment, AVX2 should help increase the clock frequency.
ghstack-source-id: 95502576

Test Plan: buck test //caffe2/caffe2:caffe2_test_cpu -- "Float16"

Reviewed By: jspark1105

Differential Revision: D18962649

fbshipit-source-id: 6532d929a99f41f2f6ad1a1a1962e38ae3ddaecb
2019-12-12 19:42:29 -08:00
Jongsoo Park
6848f9abb8 call fp16<->fp32 routines in fbgemm from Half2Float and Float2Half operators (#30715)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30715

Changed the caffe2/caffe2/TARGETS file to define USE_FBGEMM when the target is x86 and USE_SSE_ONLY is not defined.
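
For illustration, a sketch of the gating this sets up (not the exact Caffe2 source; the fbgemm entry-point name and the scalar helper below are assumptions): the operators call fbgemm's vectorized routines only when `USE_FBGEMM` is defined and `USE_SSE_ONLY` is not, and otherwise fall back to a scalar path.
```
#include <cstddef>
#include <cstdint>

#if defined(USE_FBGEMM) && !defined(USE_SSE_ONLY)
#include <fbgemm/FbgemmConvert.h>  // assumed header for the vectorized routines
#endif

// Hypothetical scalar fp16 decode standing in for the non-fbgemm fallback.
float decode_half_scalar(std::uint16_t bits);

void half_to_float(const std::uint16_t* src, float* dst, std::size_t n) {
#if defined(USE_FBGEMM) && !defined(USE_SSE_ONLY)
  // Vectorized path; fbgemm::Float16ToFloat_simd is assumed here.
  fbgemm::Float16ToFloat_simd(
      reinterpret_cast<const fbgemm::float16*>(src), dst, n);
#else
  for (std::size_t i = 0; i < n; ++i) {
    dst[i] = decode_half_scalar(src[i]);
  }
#endif
}
```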

Test Plan: buck test caffe2/caffe2:caffe2_test_cpu -- Float16

Reviewed By: jianyuh

Differential Revision: D18806067

fbshipit-source-id: 1b44b90a9f6dc3c27f81a46038c0f7542ed2bab3
2019-12-07 19:46:47 -08:00
Andrey Malevich
c8f9072ab6 Fix half-float conversion ops to handle tensors larger than 2B of params (#17952)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17952

As described in the title: make the half<->float conversion ops handle tensors with more than 2B parameters.
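
For illustration, a small self-contained example of the limit involved: a tensor with more than ~2.1B parameters overflows 32-bit size/offset arithmetic, so element counts have to be carried in 64 bits.
```
#include <cstdint>
#include <cstdio>

int main() {
  // 3 billion parameters: the element count no longer fits in a 32-bit int.
  const std::int64_t numel = 3000000000LL;
  const std::int32_t narrowed = static_cast<std::int32_t>(numel);  // wraps modulo 2^32
  std::printf("64-bit count: %lld, 32-bit count: %d\n",
              static_cast<long long>(numel), narrowed);
  return 0;
}
```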

Reviewed By: hyuen

Differential Revision: D14435092

fbshipit-source-id: dc614ba16ad531101d04d01aec8f1fbd534ebec5
2019-03-12 23:03:22 -07:00
Hector Yuen
5bf9e41938 move half<->float conversions to oss operators (#17548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17548

Expose half-float operators to OSS.

common/math/Float16.h is the original implementation;
this is substituted by caffe2/c10/util/Half.h.

From the comments, it seems that neither implementation handles denormals.
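
For reference, a small usage sketch of the replacement header mentioned above: c10/util/Half.h provides a half type with float conversions, which is what the OSS operators build on.
```
#include <c10/util/Half.h>
#include <cstdio>

int main() {
  float x = 0.1f;
  c10::Half h = x;                     // fp32 -> fp16, rounds to nearest
  float back = static_cast<float>(h);  // fp16 -> fp32
  std::printf("%.8f -> %.8f\n", x, back);
  return 0;
}
```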

Reviewed By: jspark1105

Differential Revision: D14244200

fbshipit-source-id: f90ba28c5bf6a2b451b429cc4925b8cc376ac651
2019-03-07 13:00:13 -08:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* The LICENSE file contains the details, so the headers are removed from individual source files.
2018-03-27 13:10:18 -07:00
Orion Reblitz-Richardson
c18f9b4dea Back out "[codemod] - comment out unused parameters"
Original commit changeset: 8e10b1f1e2ae

@allow-large-files
2018-02-26 10:26:25 -08:00
Orion Reblitz-Richardson
7e9f8af018 [codemod] - comment out unused parameters 2018-02-26 10:26:25 -08:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Simon Layton
22ec2ca968 Add shape inference to fp16<->fp32 ops
Summary:
Added to HalfToFloat and FloatToHalf
Closes https://github.com/caffe2/caffe2/pull/1241
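
For illustration, a minimal sketch of attaching shape inference to one of these schemas, assuming Caffe2's standard `TensorInferenceFunction` hook on `OPERATOR_SCHEMA`; the exact body in this change may differ. The output keeps the input's dimensions and only the element type changes.
```
#include "caffe2/core/operator_schema.h"

namespace caffe2 {

OPERATOR_SCHEMA(HalfToFloat)
    .NumInputs(1)
    .NumOutputs(1)
    .TensorInferenceFunction([](const OperatorDef& /* def */,
                                const std::vector<TensorShape>& in) {
      std::vector<TensorShape> out = in;                  // same dims as the input
      out[0].set_data_type(TensorProto_DataType_FLOAT);   // only the dtype changes
      return out;
    });

} // namespace caffe2
```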

Differential Revision: D5902071

Pulled By: salexspb

fbshipit-source-id: 9c79b0c50990200ca5bd6e00b3e8881d1c784e36
2017-09-26 19:33:08 -07:00
Devesh Agrawal
1d83a46b44 Improve float16 support
Summary: The operators were lacking some float16 support: extend ScatterAssign for float16 and, in addition, introduce a constant fill for float16. The latter needs to be a separate operator rather than ConstantFill, since ConstantFill is in OSS and hence cannot use the Float16 code that is FB-specific.

Reviewed By: azzolini

Differential Revision: D5664071

fbshipit-source-id: 5b84f625693b6ddddd8b7a35f1541ae40df49fbe
2017-08-23 16:33:07 -07:00
Henry Lu
10667a914e Add linter for enforcing caffe operator documentation
Summary: Add a check that, every time we register a Caffe operator for CPU or GPU, documentation is added for that particular operator.

Reviewed By: dzhulgakov

Differential Revision: D5443110

fbshipit-source-id: 3793c3d29bea1228078cb30bdf8243ac0ab90664
2017-07-24 15:27:47 -07:00
Aapo Kyrola
95291f0f74 Revert D5348078: Add linter for enforcing caffe operator documentation
Summary: This reverts commit c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e

Differential Revision: D5348078

fbshipit-source-id: f536e647cbd221b26ccbc105a5f5f8bdbcc119ab
2017-07-17 18:36:38 -07:00
Henry Lu
32b13d6243 Add linter for enforcing caffe operator documentation
Summary: Add a lint rule to check that, every time we register a Caffe operator for CPU or GPU, documentation is added for that particular operator.

Reviewed By: dzhulgakov

Differential Revision: D5348078

fbshipit-source-id: c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e
2017-07-17 08:17:23 -07:00
Artem Volkhin
3e08beb75e implement Float16EncodeOp and Float16DecodeOp
Summary: Casting between fp16 and fp32.
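
For illustration, a self-contained sketch of what the fp16 -> fp32 direction of this cast involves at the bit level (zero/denormal, inf/NaN, and normal values); this is not the operator's actual code.
```
#include <cmath>
#include <cstdint>
#include <cstdio>

// Decode a raw IEEE-754 binary16 value into a float.
float half_bits_to_float(std::uint16_t h) {
  const std::uint32_t sign = (h >> 15) & 0x1;
  const std::uint32_t exp  = (h >> 10) & 0x1f;
  const std::uint32_t frac = h & 0x3ff;
  float value;
  if (exp == 0) {
    value = std::ldexp(static_cast<float>(frac), -24);    // zero or denormal
  } else if (exp == 31) {
    value = frac ? NAN : INFINITY;                        // NaN or infinity
  } else {
    value = std::ldexp(static_cast<float>(frac | 0x400),  // implicit leading 1
                       static_cast<int>(exp) - 25);
  }
  return sign ? -value : value;
}

int main() {
  std::printf("%f %f\n", half_bits_to_float(0x3c00), half_bits_to_float(0xc000));  // 1.0 -2.0
  return 0;
}
```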

Reviewed By: dzhulgakov

Differential Revision: D4526415

fbshipit-source-id: ebffb00ae12c6bcba79096b13e84ce55ef3f02bb
2017-02-09 17:03:43 -08:00