Commit Graph

20 Commits

Author SHA1 Message Date
Sergii Dymchenko
c2402a9257 Change caffe2 branch links to main (#100129)
Just a change

pytorch/tree/master -> pytorch/tree/main
pytorch/blob/master -> pytorch/blob/main
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100129
Approved by: https://github.com/huydhn
2023-04-27 10:31:50 +00:00
Nikita Shulga
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
As the GoogleTest `TEST` macro is non-compliant with it, as well as `DEFINE_DISPATCH`

All changes but the ones to `.clang-tidy` are generated using following script:
```
for i in $(find . -type f \( -iname "*.c*" -or -iname "*.h" \) \
         | xargs grep -l cppcoreguidelines-avoid-non-const-global-variables); do
  sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" "$i"
done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
Nikita Shulga
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
Richard Barnes
fa325d7c9f Use sum_integers and multiply_integers (#51146)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51146
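
The c10 helpers named in the title fold an integer sequence with `+` or `*` (e.g. computing a tensor's element count from its shape). A hedged Python sketch of their behavior — the names mirror the C++ helpers, but this is illustrative, not the c10 implementation:
```python
import math

def sum_integers(xs):
    # c10::sum_integers-style helper: sum of an integer sequence
    return sum(xs)

def multiply_integers(xs):
    # c10::multiply_integers-style helper: product of an integer
    # sequence, e.g. numel from a shape; the empty product is 1
    return math.prod(xs)
```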

Test Plan: Sandcastle tests

Reviewed By: ngimel

Differential Revision: D25903430

fbshipit-source-id: 329c14018c9e5192864eed88a8ed0a5068ff1c69
2021-02-10 18:05:45 -08:00
Andrey Malevich
bce4c82f0d [C2] Add TypeAndShape Inference logic for ReduceMean (#51828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51828

As desc.
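
Shape inference for `ReduceMean` is mechanical: each reduced axis either collapses to size 1 (`keepdims=1`) or is dropped (`keepdims=0`). A minimal Python sketch of that rule (an assumption about the semantics, not the actual C2 inference code):
```python
def reduce_mean_output_shape(in_shape, axes, keepdims=True):
    """Infer the output shape of a ReduceMean over `axes`."""
    axes = {a % len(in_shape) for a in axes}  # normalize negative axes
    out = []
    for i, d in enumerate(in_shape):
        if i in axes:
            if keepdims:
                out.append(1)   # reduced axis kept as size 1
        else:
            out.append(d)       # untouched axis passes through
    return tuple(out)
```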

Test Plan: Unit-tests.

Differential Revision: D26293844

fbshipit-source-id: 2eb2a694c439b794ad7c134409e2b8926aabc91f
2021-02-08 00:57:47 -08:00
Nikita Shulga
6f737dd4a3 Fix signed-unsigned warnings (#34791)
Summary:
And a few typos
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34791

Test Plan: CI

Differential Revision: D20524879

Pulled By: malfet

fbshipit-source-id: 58fa03bd6356979e77cd1bffb6370d41a177c409
2020-03-19 00:29:56 -07:00
Yinghai Lu
b4b1b100bd Add a loop test for onnxified net (#32935)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32935

Mock away the content of the onnxified net with some low-cost ops so that we can still mimic the input/output transfer while doing minimal work on the card.

Test Plan:
```
buck run glow/fb/test:sparsenn_test -- --gtest_filter='SparseNNTest.vanillaC2' --onnxifi_debug_mode --onnxifi_loop_test_mode --nocaffe2_predictor_use_memonger
```

Differential Revision: D19631971

fbshipit-source-id: f970c55ccb410702f479255eeb750e01e3f8c2ae
2020-02-03 18:35:41 -08:00
Tongliang Liao
0eee56fff7 Export ReduceMean/ReduceFrontMean/ReduceBackMean (Caffe2) to ReduceMean (ONNX). (#16727)
Summary:
The second input (`lengths`) is not supported.
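
The three Caffe2 reductions differ only in which axes they average, so all map onto ONNX `ReduceMean`. A NumPy sketch of the intended semantics (axis conventions here are assumptions for illustration, not the exporter code):
```python
import numpy as np

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# Caffe2 ReduceMean(axes=[1]) -> ONNX ReduceMean(axes=[1])
mean_axes = x.mean(axis=1)        # shape (2, 4)

# Caffe2 ReduceFrontMean(num_reduce_dim=2) averages the leading axes
mean_front = x.mean(axis=(0, 1))  # shape (4,)

# Caffe2 ReduceBackMean(num_reduce_dim=2) averages the trailing axes
mean_back = x.mean(axis=(1, 2))   # shape (2,)
```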
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16727

Differential Revision: D14054105

Pulled By: houseroad

fbshipit-source-id: 36b8d00460f9623696439e1bd2a6bc60b7bb263c
2019-02-12 13:35:32 -08:00
Mark Richardson
88146484b4 Add support for .norm() pytorch onnx export and ReduceL1/ReduceL2 caffe2 operators (#9299)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9299

ONNX has ReduceL1 and ReduceL2 operators that would facilitate this, so allow PyTorch to export those and allow Caffe2 to run them.

I only implemented this on CPU so far.
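
The two ONNX reduction operators compute exactly the vector norms `.norm()` exposes: ReduceL1 is the sum of absolute values, ReduceL2 the square root of the sum of squares. A small NumPy sketch of the correspondence (assumed semantics, not the exporter code):
```python
import numpy as np

x = np.array([[3.0, -4.0], [1.0, 2.0]], dtype=np.float32)

# ONNX ReduceL1: sum of absolute values over the reduced axes
l1 = np.abs(x).sum(axis=1)          # per-row L1 norm

# ONNX ReduceL2: sqrt of the sum of squares over the reduced axes
l2 = np.sqrt((x * x).sum(axis=1))   # per-row L2 norm
```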

Reviewed By: pjh5

Differential Revision: D8757381

fbshipit-source-id: 68afc9e2f90042a70929b73ace05a499b5c670c7
2018-07-14 10:54:13 -07:00
Xiaomeng Yang
9243b64bff
[Caffe2] Update elementwise ops to support numpy style broadcast (#8070)
* Update elementwise ops to support numpy style broadcast

Update elementwise ops to support numpy style broadcast

* Fix sqrt_op

* Fix compare ops

* Fix gradient test

* Fix optimizer legacy broadcast

* Fix legacy broadcast for elementwise ops

* Skip flaky test

* Fix eigen simple binary op

* Fix attention test

* Fix rnn test

* Fix LSTM test

* Fix tan grad

* Fix schema check
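
Numpy-style broadcasting, which the elementwise ops above adopt, aligns shapes from the trailing dimension and stretches size-1 axes; a short NumPy illustration of the rule:
```python
import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((3, 1))  # aligned against the trailing axes of `a`

# (2, 3, 4) + (3, 1): `b` is treated as (1, 3, 1) and stretched
# along axes 0 and 2, so the result has shape (2, 3, 4)
c = a + b
```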
2018-06-05 15:49:16 -07:00
Nathan Inkawhich
38dbe6e605 Updates to caffe2 operator documentation (#7917)
* Significant updates to the operator docs in prep for merge
2018-05-29 14:38:56 -07:00
Xiaomeng Yang
a61d4a3374
[Caffe2] Refactor reduce ops to take flexible input types (#7164)
* Refactor reduce ops to take flexible input types

* Add DISPATCH_FUNCTION macros in common_gpu.h

* Use macros to reduce switch case in dispatching cuda functions
2018-05-02 12:08:38 -07:00
Xiaomeng Yang
71c644b005
[caffe2] Add ReduceMinOp and ReduceMaxOp (#6744)
* Add gpu check for reduce_max

* Add ReduceMinOp and ReduceMaxOp

* Merge util functions in reduce_ops and math

* Expose math internal functions
2018-04-19 00:22:23 -07:00
Xiaomeng Yang
e47b3018b7
[caffe2] Update EigenTensorMap to use ColMajor (#6735)
* Add gpu check for reduce_max

* Update EigenTensorMap to use ColMajor

* Revert incorrect change on cpu
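
Column-major (Eigen's `ColMajor` default) and row-major layouts differ only in which index varies fastest in memory; a small sketch of the linear-index arithmetic behind the change:
```python
def col_major_index(i, j, rows, cols):
    # ColMajor: each column is contiguous; the row index i varies fastest
    return j * rows + i

def row_major_index(i, j, rows, cols):
    # RowMajor: each row is contiguous; the column index j varies fastest
    return i * cols + j
```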
2018-04-18 18:28:38 -07:00
Xiaomeng Yang
4be34ca0f3 Add broadcast and reduce gradient (#6668)
Add broadcast and reduce gradient
2018-04-17 13:31:13 -07:00
Xiaomeng Yang
cd2112717c
[caffe2] Update math functions with params on host. (#6602)
* Update ReduceMean

Add reduce mean to math

* sync reduce_ops_test

* Update math_gpu.cu
2018-04-14 21:41:41 -07:00
Xiaomeng Yang
8849bea120 [caffe2] Update ReduceOps (#6497)
* Update ReduceMean

* Add reduce mean to math

* Update cuda flag

* Update Eigen::Tensor ctor

* Remove unused variables

* Skip ReduceTensorGPUTest if no gpus

* Add NOMINMAX for windows

* Fix lpnorm_op in windows
2018-04-11 23:36:05 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
James Reed
48c70d2dbd Fix ReduceMean performance by specializing Eigen implementation for common shapes (#2355) 2018-03-21 21:48:54 -07:00
Mohammad Hossain
28eda01809 Reduce Sum and Reduce Mean (#2189)
* Reduce Sum and Reduce Mean

* Handle reductions with empty 'axes'

* Merge codebase and simplify tensor reduction logic

* Restructure code and add comments.

* Fix parameter to scale

* Fix parameter to scale
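
One bullet above handles reductions with an empty `axes` list, which by the ONNX convention means reducing over every dimension; a minimal NumPy illustration (assumed semantics, not the C2 operator code):
```python
import numpy as np

def reduce_sum(x, axes=None):
    # Empty or missing `axes` reduces over all dimensions
    axis = tuple(axes) if axes else None
    return x.sum(axis=axis)

x = np.ones((2, 3, 4), dtype=np.float32)
full = reduce_sum(x)             # scalar: all 24 elements summed
partial = reduce_sum(x, [0, 2])  # shape (3,): 2 * 4 = 8 per entry
```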
2018-03-13 19:13:47 -07:00