Commit Graph

86 Commits

Author SHA1 Message Date
cyy
483f748dd5 [BE] Enforce missing override keyword (#104032)
This PR enables `-Winconsistent-missing-destructor-override` and `-Winconsistent-missing-override`
and fixes violations.

### <samp>🤖 Generated by Copilot at 47e904e</samp>

This pull request updates various classes and operators in the `caffe2` and `aten` subdirectories to use the `override` specifier instead of the `virtual` keyword for destructors and other virtual functions that override a base-class function. This improves readability, code quality, and consistency with C++ best practices. It also modifies the top-level `CMakeLists.txt` to enable warnings for these specifiers, without treating them as errors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104032
Approved by: https://github.com/malfet
2023-06-24 02:34:24 +00:00
PyTorch MergeBot
b5594f7df0 Revert "Use missing-prototypes in torch_cpu (#103725)"
This reverts commit 716b3b893d.

Reverted https://github.com/pytorch/pytorch/pull/103725 on behalf of https://github.com/osalpekar because it broke caffe2 builds. More info at [D46920675](https://www.internalfb.com/diff/D46920675) ([comment](https://github.com/pytorch/pytorch/pull/103725#issuecomment-1603129273))
2023-06-22 18:30:31 +00:00
cyy
716b3b893d Use missing-prototypes in torch_cpu (#103725)
This PR enables `-Wmissing-prototypes` in torch_cpu, except for some generated cpp files and the mps and metal backends.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103725
Approved by: https://github.com/albanD
2023-06-21 13:19:55 +00:00
Sergii Dymchenko
c2402a9257 Change caffe2 branch links to main (#100129)
Just a link change:

pytorch/tree/master -> pytorch/tree/main
pytorch/blob/master -> pytorch/blob/main
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100129
Approved by: https://github.com/huydhn
2023-04-27 10:31:50 +00:00
Kazuaki Ishizaki
601e7dc0bb Fix typos under caffe2/operators directory (#98235)
This PR fixes typos in comments and messages of `.cc` and `.h` files under the `caffe2/operators` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98235
Approved by: https://github.com/kit1980
2023-04-05 06:26:01 +00:00
Will Constable
4f34cd6d1e Replace all CHECK_ and DCHECK_ with TORCH_* macros (#82032)
Avoid exposing defines that conflict with google logging, since this blocks external usage of libtorch in certain cases.

All the 'interesting' changes should be in these two files; the rest are mechanical changes made via sed:
c10/util/logging_is_not_google_glog.h
c10/util/logging_is_google_glog.h

Fixes https://github.com/pytorch/pytorch/issues/81415

cc @miladm @malfet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82032
Approved by: https://github.com/soumith, https://github.com/miladm
2022-07-26 01:20:44 +00:00
Nikita Shulga
f6c275f55d Remove -Wno-unused-variable from utils.cmake (take 2) (#75538)
Summary:
The [comment](https://github.com/pytorch/pytorch/pull/62445/files#r680132022) claims it was added for consistency with the top-level CMakeLists.txt, but `-Wno-unused-variable` is not mentioned there.

Fix violations in the 50+ files that were added in the interim, either by removing unused variables or by decorating the code with `C10_UNUSED` when a local variable is likely used to extend an object's lifetime until the end of the block.

Its absence caused a preventable revert in https://github.com/pytorch/pytorch/pull/72633#issuecomment-1092300787

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75538

Reviewed By: anjali411

Differential Revision: D35747333

Pulled By: malfet

fbshipit-source-id: 3fc5828e44a4c05ba0e89e92613e6ebbdb260626
(cherry picked from commit c179fba21cfa2a0093fad50ccad5a22dd7cff52c)
2022-04-20 17:41:59 +00:00
PyTorch MergeBot
5c56b2286b Revert "Remove -Wno-unused-variable from utils.cmake"
This reverts commit 018cbe1f5c.

Reverted https://github.com/pytorch/pytorch/pull/75538 on behalf of https://github.com/seemethere
2022-04-19 17:19:09 +00:00
Nikita Shulga
018cbe1f5c Remove -Wno-unused-variable from utils.cmake
The [comment](https://github.com/pytorch/pytorch/pull/62445/files#r680132022) claims it was added for consistency with the top-level CMakeLists.txt, but `-Wno-unused-variable` is not mentioned there.

Fix violations in the 50+ files that were added in the interim, either by removing unused variables or by decorating the code with `C10_UNUSED` when a local variable is likely used to extend an object's lifetime until the end of the block.

Its absence caused a preventable revert in https://github.com/pytorch/pytorch/pull/72633#issuecomment-1092300787

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75538
Approved by: https://github.com/cpuhrsch
2022-04-19 15:26:55 +00:00
Nikita Shulga
f6e7a2ab64 Fix sign-compare in caffe2 cpp tests
Prerequisite change for enabling `-Werror=sign-compare` across PyTorch repo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75084

Approved by: https://github.com/ngimel
2022-04-05 00:08:05 +00:00
Xiao Sun
1436507960 fused int8 static (#73452)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73452

Added a fused Int8FC path using PackAWithQuantRowOffset, like the INT8 dynamic path. There are two ways to enable it:
(1) set a positive "X_scale" value in the arg list of the Int8FC op
(2) send both "Qparam" (for output requantization; may hold dummy values) and "in_Qparam" (for fused input quantization)

Differential Revision: D34034681

fbshipit-source-id: f25ca8a2b783ea597389d31c110448d19610218e
(cherry picked from commit 6fa10ba0e3be2d46298b439fba0fe9ae7e329f3a)
2022-02-28 16:34:45 +00:00
Nikita Shulga
269e92669a [c2] Remove unused private fields (#69709)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69709

Fix a logical bug in `caffe2/ideep/operators/conv_op.cc`, which
contained an always-false condition (`fusion_type_ == X && fusion_type_ == Y`)

Test Plan: Imported from OSS

Reviewed By: r-barnes

Differential Revision: D32997006

Pulled By: malfet

fbshipit-source-id: 23e4db1b17cf8a77eae6a8691847ffa484d4736c
2021-12-14 11:31:08 -08:00
Richard Barnes
1433160a36 use irange for loops 6 (#66742)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66742

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for (TYPE var = x0; var < x_max; var++)`

to the format

`for (const auto var : irange(x_max))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable suppressions added by hand.

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D31705366

fbshipit-source-id: be58222426c192406a7f93c21582c3f6f2082401
2021-12-07 16:07:50 -08:00
Xue Li
2f099c7555 Revert D30652629: use irange for loops
Test Plan: revert-hammer

Differential Revision:
D30652629 (687c2267d4)

Original commit changeset: 0ae6c4bbbb55

fbshipit-source-id: 5c4f067b584a021c8c9656454d1ee60999600fb3
2021-10-15 15:23:10 -07:00
Richard Barnes
687c2267d4 use irange for loops (#66234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66234

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for (TYPE var = x0; var < x_max; var++)`

to the format

`for (const auto var : irange(x_max))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable suppressions added by hand.

bypass_size_limit
allow-large-files

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D30652629

fbshipit-source-id: 0ae6c4bbbb554bad42e372792a6430e1acf15e3e
2021-10-15 13:50:33 -07:00
Nikita Shulga
4c4525fa5c Compile without -Wno-unused-variable (take 2) (#66041)
Summary:
Delete `-Wno-unused-variable` from the top-level `CMakeLists.txt`, but still suppress those warnings for tests and `torch_python`.

Delete a number of unused variables from caffe2 code.
Use `(void)var;` to suppress unused-variable warnings in range loops.
Use `C10_UNUSED` for global constructors, and use `constexpr` instead of `static` for global constants.

Do not delete `caffe2::OperatorBase::Output` calls, as they have side effects.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66041

Reviewed By: ngimel

Differential Revision: D31360142

Pulled By: malfet

fbshipit-source-id: 6fdfb9f91efdc49ca984a2f2a17ee377d28210c8
2021-10-04 20:39:39 -07:00
Nikita Shulga
e4ee5ca698 Revert D31326599: [pytorch][PR] Compile without -Wno-unused-variable
Test Plan: revert-hammer

Differential Revision:
D31326599 (a6280ab653)

Original commit changeset: 924155f1257a

fbshipit-source-id: b8ee5bc0298637443232f5ee9ec79e51ed256faf
2021-10-01 20:40:47 -07:00
Nikita Shulga
a6280ab653 Compile without -Wno-unused-variable (#65954)
Summary:
Delete `-Wno-unused-variable` from the top-level `CMakeLists.txt`, but still suppress those warnings for tests and `torch_python`.

Delete a number of unused variables from caffe2 code.
Use `(void)var;` to suppress unused-variable warnings in range loops.
Use `C10_UNUSED` for global constructors, and use `constexpr` instead of `static` for global constants.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65954

Reviewed By: ngimel

Differential Revision: D31326599

Pulled By: malfet

fbshipit-source-id: 924155f1257a2ba1896c50512f615e45ca1f61f3
2021-10-01 17:40:47 -07:00
Nikita Shulga
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
Disabled because the GoogleTest `TEST` macro, as well as `DEFINE_DISPATCH`, is non-compliant with this check.

All changes except the ones to `.clang-tidy` are generated using the following script:
```
for i in `find . -type f -iname "*.c*" -or -iname "*.h"|xargs grep cppcoreguidelines-avoid-non-const-global-variables|cut -f1 -d:|sort|uniq`;  do sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
Nikita Shulga
3a66a1cb99 [clang-tidy] Exclude cppcoreguidelines-avoid-magic-numbers (#57841)
Summary:
Add the cppcoreguidelines-avoid-magic-numbers exclusion to clang-tidy.
Remove the existing NOLINT comments using the following script:
```
for file in `git ls-files | grep -v \.py`; do gsed '/^ *\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)/d' -i  $file; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57841

Reviewed By: samestep

Differential Revision: D28295045

Pulled By: malfet

fbshipit-source-id: 7c6e8d1213c9593f169ed3df6a916498f1a97163
2021-05-07 20:02:33 -07:00
Nikita Shulga
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
Summer Deng
509fb77b70 Adjust bound_shape_inferencer to take 4 inputs for FCs (#41934)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41934

The model exported from the online training workflow with int8 quantization contains FCs with 4 inputs; the extra input is the quant_param blob. This diff adjusts the bound_shape_inferencer and the int8 op schema to get shape info for the quant_param input.

Test Plan:
```
buck test caffe2/caffe2/opt:bound_shape_inference_test
```

Reviewed By: yinghai

Differential Revision: D22683554

fbshipit-source-id: 684d1433212a528120aba1c37d27e26b6a31b403
2020-08-05 18:44:48 -07:00
Ashkan Aliabadi
c8deca8ea8 Update pthreadpool to pthreadpool:029c88620802e1361ccf41d1970bd5b07fd6b7bb. (#40524)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40524

Reviewed By: ezyang

Differential Revision: D22215742

Pulled By: AshkanAliabadi

fbshipit-source-id: ef594e0901337a92b21ddd44e554da66c723eb7c
2020-07-09 10:00:36 -07:00
Shai Szulanski
0ddaaf6a92 [codemod][caffe2] Run clang-format - 5/7
Summary:
This directory is opted-in to clang-format but is not format-clean. This blocks continuous formatting from being enabled on fbcode, and causes hassle for other codemods that leave inconsistent formatting. This diff runs clang-format, which is widely used and considered safe.

If you are unhappy with the formatting of a particular block, please *accept this diff* and then in a stacked commit undo the change and wrap that code in `// clang-format off` and `// clang-format on`, or `/* clang-format off */` and `/* clang-format on */`.

drop-conflicts

Test Plan: sandcastleit

Reviewed By: jerryzh168

Differential Revision: D22311706

fbshipit-source-id: 1ca59a82e96156a4a5dfad70ba3e64d44c5e762a
2020-06-30 15:45:11 -07:00
Summer Deng
597cb04b2f Use Int8QuantParamsBlob to pass the scale and zeropoint params (#40494)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40494

Resubmit the diff, because D22124313 (1ec4337b7d) was reverted due to CI test failures.
Added int8_gen_quant_params.cc to CMakeLists.txt to fix the CI failures.

Test Plan: buck test caffe2/caffe2/quantization/server:

Reviewed By: hx89

Differential Revision: D22204244

fbshipit-source-id: a2c8b668f199cc5b0c5894086f554f7c459b1ad7
2020-06-24 10:20:16 -07:00
Luca Wehrstedt
2acee6dc93 Revert D22124313: Use Int8QuantParamsBlob to pass the scale and zeropoint params
Test Plan: revert-hammer

Differential Revision:
D22124313

Original commit changeset: 6b5c1974c0fc

fbshipit-source-id: 87a9a64c323be40db5d7d584029efa10c779dfa1
2020-06-23 05:54:44 -07:00
Summer Deng
1ec4337b7d Use Int8QuantParamsBlob to pass the scale and zeropoint params (#40390)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40390

Change the Int8FC/Int8Quantize op interface to use Int8QuantParamsBlob as the qparam input blob format when needed.

Test Plan:
```
 buck test caffe2/caffe2/quantization/server:
```

Reviewed By: hx89

Differential Revision: D22124313

fbshipit-source-id: 6b5c1974c0fc5928f72773495f0da8d0eb9b98c9
2020-06-23 00:45:21 -07:00
Haixin Liu
ddd45ae919 Extend int8 FC op to take scale and zero point from input
Summary: Extend int8 FC op to take scale and zero point from input to support int8 PTQ productization of online training models.

Test Plan: buck test caffe2/caffe2/quantization/server:fully_connected_dnnlowp_op_test

Reviewed By: csummersea

Differential Revision: D21944884

fbshipit-source-id: 2094827da903f3993afe4f8cf6e70286b195321d
2020-06-13 02:34:45 -07:00
Haixin Liu
2bab9149cc Extend int8 quantize op to take scale and zero point from input
Summary: Extend int8 quantize op to take scale and zero point from input to support int8 PTQ productization of online training models.

Test Plan: buck test caffe2/caffe2/quantization/server:quantize_dnnlowp_op_test

Reviewed By: csummersea

Differential Revision: D21939660

fbshipit-source-id: 7ce2fbf9cd8a990c270f2187a49b1578ce76bc37
2020-06-12 09:28:51 -07:00
Michael Ranieri
af9c3a3652 uniform_int_distribution does not support uint8_t (#37260)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37260

List of supported types here:
https://en.cppreference.com/w/cpp/numeric/random/uniform_int_distribution

Test Plan: CircleCI green, test compiles and passes on msvc.

Reviewed By: malfet

Differential Revision: D21237280

fbshipit-source-id: 51b09b87511e35bfe8a57ecd48ed772d587dba9b
2020-04-27 13:09:39 -07:00
James Donald
7ad03855dc Fix 'template' keyword warning with clang-cl and clang.exe (#32104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32104

Fixes these warnings:
```
xplat\caffe2\caffe2Windows#header-mode-symlink-tree-only,headers\caffe2\operators\quantized\int8_conv_op.h(96,17): warning: use 'template' keyword to treat 'data' as a dependent template name
            W.t.data<uint8_t>(),
                ^
                template
xplat\caffe2\caffe2Windows#header-mode-symlink-tree-only,headers\caffe2\operators\quantized\int8_conv_op.h(97,17): warning: use 'template' keyword to treat 'data' as a dependent template name
            B.t.data<int32_t>(),
                ^
                template
```

Test Plan: Tested locally with clang-cl and CI for other toolchains

Reviewed By: boguscoder

Differential Revision: D19353563

fbshipit-source-id: c28afb8c1ad72fd77ef82556ba89fcf09100d1f9
2020-01-14 20:09:35 -08:00
Supriya Rao
6d9a9e379d Fix segfault in caffe2 slice test (#31801)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31801

Try to fix issue #30764

Test Plan:
python test/onnx/test_utility_funs.py TestUtilityFuns

Imported from OSS

Differential Revision: D19315046

fbshipit-source-id: de3595969280e4ebe762cb098ff0891f8b5a9a90
2020-01-08 17:13:29 -08:00
Sebastian Messmer
643ca5def2 Replace c10::guts::stuff with std::stuff (#30915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30915

Since we now have C++14, we don't need these c10::guts helpers anymore
ghstack-source-id: 95777609

Test Plan: waitforsandcastle

Differential Revision: D18869639

fbshipit-source-id: 97716f932297c64c6e814410ac47b444c33d4e2e
2019-12-16 13:57:19 -08:00
Supriya Rao
980aead1f8 Add support for quantized slice conversion (#30498)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30498

Updated the Int8SliceOp to accept dim, start, and end indices, similar to PyTorch.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_slice

Imported from OSS

Differential Revision: D18740519

fbshipit-source-id: 2313f37a4936edb150ce04911b241e591e191801
2019-12-03 14:37:59 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Supriya Rao
2599b9b551 Add output_size argument to caffe2 Int8ResizeNearest (#30202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30202

The PyTorch Upsample operator has output_size as an argument.
For quantized tensor inputs we cannot get the input_size to calculate the width and height scale factors,
so we pass the output_size directly to caffe2 to calculate the scale factors.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_upsample

Imported from OSS

Differential Revision: D18631478

fbshipit-source-id: 38a39129bc863f4ecf2293acc068e40ab7edc825
2019-11-26 06:54:02 -08:00
David Reiss
77e8dba620 Disable Int8Transpose test
Summary: It's failing in the FB internal build because we don't enable that op.

Test Plan: buck test //xplat/caffe2:caffe2_testAndroid

Reviewed By: supriyar

Differential Revision: D17139694

fbshipit-source-id: 8091b71ff826466f3e2e1b4d6f87b9b50d1def20
2019-08-30 15:21:23 -07:00
Yanghan Wang
8cd45b4c46 relax roi_width/roi_height check to non-negative
Summary: Pull Request resolved: https://github.com/fairinternal/detectron2/pull/260

Test Plan: sandcastle.

Reviewed By: ppwwyyxx

Differential Revision: D17127067

fbshipit-source-id: ddca51fa0dab1e683f8c3709e105b6cbdf8b78b0
2019-08-29 21:18:40 -07:00
David Reiss
d704097d33 Add Int8Transpose operator (#16382)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16382

Adds an Int8TransposeOp that inherits from TransposeOp, with a small refactoring of the normal TransposeOp to move the main logic into a TransposeImpl
function.

Test Plan: int8_test.cc

Reviewed By: supriyar

Differential Revision: D13822715

fbshipit-source-id: a4d61bdf8e4e1d3f2e30b86d325810ed44c21635
2019-08-29 16:06:25 -07:00
Yanghan Wang
ad64789a1e add aligned option to RoIAlign
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23706

Reviewed By: ppwwyyxx

Differential Revision: D16615823

fbshipit-source-id: fd9152af8bc979cb04044413e66af349b032a99d
2019-08-07 21:22:33 -07:00
Haixin Liu
7f130c8494 Expose the quantized inputs and output of dynamic quantized int8 FC operator for debugging (#23566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23566

Currently, if we use dynamic quantization, we don't have access to the internally quantized inputs and output for debugging.

To make debugging easier, this diff adds a debug feature that exposes the quantized X, W and Y when debug outputs are attached to the operator and the caffe2_dnnlowp_force_slow_path flag is set.

The quantized inputs and output are exposed as the extra outputs.

The example Int8FC op with debug outputs appended looks like:
```
op {
  input: "X"
  input: "W"
  input: "b"
  output: "Y"
  output: "X_q"
  output: "W_q"
  output: "Y_q"
  name: ""
  type: "Int8FC"
  arg {
    name: "axis"
    i: 1
  }
  ...
}
```

Next need to expose the quantization parameters.

Reviewed By: jspark1105

Differential Revision: D16566753

fbshipit-source-id: acd855a172ee7993ddba8808f2af81b628ff9c02
2019-08-02 21:23:43 -07:00
Rui Zhu
19fe2b9db4 Adding quantized tensor shape/type info support for caffe2=>glow in caffe2 side (#18621)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18621

This diff adds caffe2-side support for onnxifi quantization.

Reviewed By: yinghai

Differential Revision: D14648767

fbshipit-source-id: 4ddb492cacbba6142305866e6dbb875880acaea3
2019-03-31 17:42:27 -07:00
Lutz Roeder
195cba500f Fix Caffe2 operator schemas (#15462) (#13229) (#18109)
Summary:
cc Maratyszcza, harouwu, yinghai

This has been broken since #13065: `c_str()` returns a pointer that isn't permanent.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18109

Differential Revision: D14516622

Pulled By: ezyang

fbshipit-source-id: 7113d92eac4f61479c4c7b323cf78cc8aa00b17e
2019-03-18 21:00:43 -07:00
Jongsoo Park
39423fbdd4 add tensor and cost inference functions (#17684)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17684

Adding tensor and cost inference functions to more int8 operators.

Reviewed By: yinghai

Differential Revision: D14174746

fbshipit-source-id: dfad975fa75899565c8fb61f1b7747a9206ebd22
2019-03-06 23:34:02 -08:00
Sebastian Messmer
28b5df1c8f refactor caffe2 operator constructors - 6/9 (#17087)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17087

clangr codemod

Reviewed By: ezyang

Differential Revision: D14078525

fbshipit-source-id: 7cc03b30b0d4eb99818e35406be4119b27bdb1bc
2019-02-28 14:23:57 -08:00
Sebastian Messmer
8db403b9dc refactor caffe2 operator constructors - 7/9 (#17088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17088

clangr codemod

also manually moved the constructor of a class from the .cpp file to the .h file.

Reviewed By: ezyang

Differential Revision: D14078531

fbshipit-source-id: 2adb4ac0ce523742da6cce3bc3b6c177b816c299
2019-02-28 14:23:53 -08:00
Oleg Bogdanov
260facfdea caffe2 | added missing operator source file (#17272)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17272

After the Windows-specific fixes were applied, a new file was left out of CMakeLists.

Reviewed By: orionr

Differential Revision: D14140419

fbshipit-source-id: 6a6c652048ed196ec20241bc2a1d08cbe2a4e155
2019-02-20 09:28:29 -08:00
eyyub.sari@epitech.eu
e661dc27ff Int8GivenTensorFill Operator Schema fix typo (#16204)
Summary:
Hi,
caffe2/operators/quantized/int8_given_tensor_fill_op.cc expects the value array to be named "values", but the operator schema describes "value" (no s). I guess it is a little typo, but it made me lose a bit of time before I understood why I was getting this error when passing "value" instead of "values":
```
[F int8_given_tensor_fill_op.h:95] Check failed: output->t.numel() == values_.numel() output size: 3 given size: 0
Aborted (core dumped)
```

Thanks,
Eyyüb Sari
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16204

Differential Revision: D14020476

Pulled By: ezyang

fbshipit-source-id: a8a46bfc44ec125e7925ce4b7c79fdf99c890a50
2019-02-10 20:08:45 -08:00
Oleg Bogdanov
30a6feda84 caffe2 | MSVS compatibility fixes (#16765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16765

Code changes required to build caffe2 for Windows with the toolchain used by FB.

Reviewed By: orionr

Differential Revision: D13953258

fbshipit-source-id: 651823ec9d81ac70e32d4cce5bc2472434104733
2019-02-06 09:47:01 -08:00
Jerry Zhang
2af95d8e3e Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" (#16516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16516

Original commit changeset: 64abce3dbaed

Reviewed By: dzhulgakov

Differential Revision: D13863715

fbshipit-source-id: f1923fdca4a1a82768d9c280a8493ff15a7eb2ba
2019-01-30 12:50:38 -08:00