Commit Graph

7146 Commits

Antonio Kim
02c4d877b4 Codegen Non-Native IR Nodes (#76535)
Add codegen infrastructure to generate IR nodes for non-native ops.

The proposed change is to add a `non_native` key to the `{backend}_native_functions.yaml` file that contains schema definitions similar to those found in `native_functions.yaml`, e.g.
```
non_native:
    ...
    - func: expand(Tensor input, int[] size, bool is_scalar_expand) -> Tensor
    ...
```
These definitions are parsed into a `LazyIrSchema` that can be used to generate IR nodes via `GenLazyIR`.
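As a rough illustration of the parsing step (a simplified sketch, not the actual torchgen implementation), a `non_native` entry can be split into the name, arguments, and return type that a `LazyIrSchema` captures:

```python
import yaml

# Simplified sketch: split a non_native schema string into its parts.
spec = yaml.safe_load("""
non_native:
  - func: expand(Tensor input, int[] size, bool is_scalar_expand) -> Tensor
""")

for entry in spec["non_native"]:
    name, rest = entry["func"].split("(", 1)
    args, returns = rest.rsplit(") -> ", 1)
    print(name, [a.strip() for a in args.split(",")], returns)
    # expand ['Tensor input', 'int[] size', 'bool is_scalar_expand'] Tensor
```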

Fixes #74628

CC: @wconstab @desertfire @henrytwo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76535
Approved by: https://github.com/wconstab
2022-05-24 19:29:23 +00:00
Max Podkorytov
e4f5203386 print available modules in predictor error message (#78101)
Summary:
Print the available modules when throwing a module-not-found exception.

I believe this improves UX.

Differential Revision: D36580924

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78101
Approved by: https://github.com/mikeiovine
2022-05-24 18:47:06 +00:00
Adam Simpkins
bb7fd1fcfb [caffe2] fix type annotations for workspace.SwitchWorkspace() (#77464)
Summary: The `create_if_missing` parameter is optional, and defaults to `None`.
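A minimal sketch of the corrected stub (the real caffe2 signature may differ in detail; the point is the `Optional` default):

```python
from typing import Optional

# Sketch of the fixed annotation: create_if_missing is optional and
# defaults to None, so a single-string call now type-checks under Pyre.
def SwitchWorkspace(workspace_name: str,
                    create_if_missing: Optional[bool] = None) -> None:
    ...

SwitchWorkspace("my_workspace")  # previously flagged by Pyre, now fine
```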

Test Plan:
Confirmed that Pyre no longer complains about calling `SwitchWorkspace` with a
single string argument.

Differential Revision: D36366987

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77464
Approved by: https://github.com/voznesenskym
2022-05-20 19:22:06 +00:00
Nikolay Korovaiko
df1f9b9840 Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#77756)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77756
Approved by: https://github.com/desertfire
2022-05-20 05:39:03 +00:00
PyTorch MergeBot
e9d660c331 Revert "Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"""
This reverts commit acf7136a52.

Reverted https://github.com/pytorch/pytorch/pull/77719 on behalf of https://github.com/suo
2022-05-18 05:06:50 +00:00
Edward Z. Yang
acf7136a52 Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)""
This reverts commit c35bd8d423.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77719

Approved by: https://github.com/Chillee, https://github.com/malfet
2022-05-18 03:25:43 +00:00
PyTorch MergeBot
c35bd8d423 Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"
This reverts commit fc4c3c9bc7.

Reverted https://github.com/pytorch/pytorch/pull/76836 on behalf of https://github.com/suo
2022-05-18 02:45:25 +00:00
Nikolay Korovaiko
fc4c3c9bc7 Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)
LTC Tensors now create real IR (SizeNode) for sym_sizes() in LTCTensorImpl.cpp.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76836
Approved by: https://github.com/ezyang
2022-05-18 00:40:42 +00:00
David Reiss
f1b16e5f32 Remove c2_aten_srcs.bzl (#77314)
Summary: All uses removed.

Test Plan: CI

Differential Revision: D36330329

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77314
Approved by: https://github.com/linbinyu, https://github.com/dagitses
2022-05-13 20:44:19 +00:00
Kulin Seth
e011a8e18b Enable PyTorch operations on MPS Backend. (#77343)
Add PyTorch operations to MPS backend.

- https://github.com/pytorch/pytorch/issues/77394
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77343
Approved by: https://github.com/albanD
2022-05-13 18:28:53 +00:00
Kulin Seth
f348b1b2b5 Add the Runtime components for MPS backend. (#76725)
The PR adds the runtime components and a few basic operations, like copy and as_strided, for the MPS backend; a usage sketch follows the TODO list below.

The current list of identified TODOs is:

-  https://github.com/pytorch/pytorch/issues/77176
- Unify the logic with CUDACachingAllocator and remove redundant code.
-  https://github.com/pytorch/pytorch/issues/77170
- Look into using C++ smart pointers where possible with ObjC code
- Use empty_strided_generic() to implement the `empty_strided_mps` code
- https://github.com/pytorch/pytorch/issues/77144
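A minimal usage sketch for the new backend (assuming the `torch.backends.mps.is_available()` helper and `"mps"` device string that the public API exposes):

```python
import torch

# Exercise the basic operations this PR adds: as_strided and copy.
if torch.backends.mps.is_available():
    x = torch.randn(4, 4, device="mps")
    y = x.as_strided((2, 2), (4, 1))  # as_strided on an MPS tensor
    print(y.cpu())                    # MPS -> CPU copy
```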
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76725
Approved by: https://github.com/albanD
2022-05-11 17:19:45 +00:00
Nikolay Korovaiko
99339fddd9 move SymInt and SymIntArrayRef to c10/core (#77009)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77009
Approved by: https://github.com/ezyang, https://github.com/malfet
2022-05-11 16:21:31 +00:00
PyTorch MergeBot
7eaf4780ba Revert "[LT] Store OpKind for each IR subclass in a static field"
This reverts commit ac37ddc795.

Reverted https://github.com/pytorch/pytorch/pull/76711 on behalf of https://github.com/malfet
2022-05-09 20:50:09 +00:00
PyTorch MergeBot
2c5bf12584 Revert "stft: remove non-center overload and python functional wrapper"
This reverts commit d23ecbfc9a.

Reverted https://github.com/pytorch/pytorch/pull/73434 on behalf of https://github.com/albanD
2022-05-09 19:59:46 +00:00
Bin Bao
ac37ddc795 [LT] Store OpKind for each IR subclass in a static field
Summary: Currently OpKind is stored as an object field called op_ on each IR
node, and one use of op_ is to avoid a dynamic_cast in NodeCast when we
need to downcast a base-node pointer into a concrete sub-node pointer.
As a result, we need to construct and pass in an op when downcasting
nodes, and this becomes quite annoying when we start to implement
trie-based IR node reuse. More importantly, the op for each subclass
is unique to that subclass, so making it a const static field is the
more logical design.

In this PR, we still keep the object-level op_ for easier XLA adoption. As
future work, we can come back to remove op_, make the op() method
virtual, and get rid of OpKind in all the node constructors.
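A minimal C++ sketch of the pattern described above (all names are hypothetical; the real classes live in torch/csrc/lazy):

```cpp
#include <cassert>

struct OpKind {
  int id;
  bool operator==(OpKind other) const { return id == other.id; }
};

struct Node {
  explicit Node(OpKind op) : op_(op) {}
  virtual ~Node() = default;
  OpKind op() const { return op_; }
  OpKind op_;  // object-level field, kept for easier XLA adoption
};

struct Expand : Node {
  static const OpKind kClassOpKind;  // one OpKind per IR subclass
  Expand() : Node(kClassOpKind) {}
};
const OpKind Expand::kClassOpKind{1};

// NodeCast consults T::kClassOpKind, so callers no longer pass in an op.
template <typename T>
const T* NodeCast(const Node* node) {
  return node->op() == T::kClassOpKind ? static_cast<const T*>(node) : nullptr;
}

int main() {
  Expand e;
  const Node* n = &e;
  assert(NodeCast<Expand>(n) != nullptr);
}
```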

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76711

Approved by: https://github.com/wconstab, https://github.com/JackCaoG
2022-05-06 19:14:46 +00:00
sanchitintel
4ee29d6033 [Reland take-2] Add JIT graph fuser for oneDNN Graph API (v0.5)
Re-landing #68111/#74596

## Description
v0.5 PR of this [RFC](https://github.com/pytorch/pytorch/issues/49444).

On the basis of #50256, the below improvements are included:

 * The [v0.5 release branch](https://github.com/oneapi-src/oneDNN/releases/tag/graph-v0.5) of the oneDNN Graph API is used
 * The fuser now works with the profiling graph executor. We have inserted type check nodes to guard the profiled tensor properties.

 ### User API:
The optimization pass is disabled by default. Users could enable it by:

```
 torch.jit.enable_onednn_fusion(True)
```
`torch.jit.freeze` should be used after tracing (recommended) or scripting a model.
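A minimal end-to-end sketch of that flow (`MyModel` and the input shape are placeholders):

```python
import torch

class MyModel(torch.nn.Module):  # stand-in model for illustration
    def forward(self, x):
        return torch.nn.functional.hardswish(x)

example_input = torch.randn(8, 32)

torch.jit.enable_onednn_fusion(True)                       # off by default
traced = torch.jit.trace(MyModel().eval(), example_input)  # trace first
frozen = torch.jit.freeze(traced)                          # then freeze

with torch.no_grad():
    out = frozen(example_input)
```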

 ### Performance:
 [pytorch/benchmark](https://github.com/pytorch/benchmark) tool is used to compare the performance:

 * SkyLake 8180 (1 socket of 28 cores):
   ![image](https://user-images.githubusercontent.com/65992142/151162305-05e44425-a24e-4d5e-94e1-743b40b87a8c.png)
* SkyLake 8180 (single thread):
   ![image](https://user-images.githubusercontent.com/65992142/151162528-69f90b79-d08d-46b8-8775-d80a6ccbce8a.png)
   * By mapping hardswish to oneDNN Graph, it’s 8% faster than PyTorch JIT (NNC + OFI)
   ** We expect performance gain after mapping transpose, contiguous & view to oneDNN graph ops

 ### Directory structure of the integration code
 Fuser-related code is placed under:

 ```
 torch/csrc/jit/codegen/onednn/
 ```

 Optimization pass registration is done in:

 ```
 torch/csrc/jit/passes/onednn_graph_fuser.h
 ```

 CMake for the integration code is in:

 ```
 caffe2/CMakeLists.txt
 cmake/public/mkldnn.cmake
 cmake/Modules/FindMKLDNN.cmake
 ```

 ## Limitations
 * In this PR, we only support the PyTorch-oneDNN Graph integration on Linux. Support on Windows and macOS will be enabled as a next step.
 * We have only optimized the inference use case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76622
Approved by: https://github.com/eellison
2022-05-05 16:57:03 +00:00
BowenBao
679fc90cdb [ONNX] Support optional type (#68793) (#73284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73284

Some important ops won't support optional type until opset 16,
so we can't fully test things end-to-end, but I believe this should
be all that's needed. Once ONNX Runtime supports opset 16,
we can do more testing and fix any remaining bugs.
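A hedged sketch of the kind of model this enables exporting (end-to-end runs may still hit the opset-16 limitations noted above):

```python
from typing import Optional
import torch

class MaybeRelu(torch.nn.Module):
    # The Optional[Tensor] return type is what previously blocked export.
    def forward(self, x: torch.Tensor, apply: bool) -> Optional[torch.Tensor]:
        if apply:
            return torch.relu(x)
        return None

scripted = torch.jit.script(MaybeRelu())
torch.onnx.export(scripted, (torch.randn(2, 2), True), "maybe_relu.onnx",
                  opset_version=15)
```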

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34625646

Pulled By: malfet

fbshipit-source-id: 537fcbc1e9d87686cc61f5bd66a997e99cec287b

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
(cherry picked from commit 822e79f31ae54d73407f34f166b654f4ba115ea5)
2022-05-04 20:24:30 +00:00
Nikita Shulga
8473173c36 Remove breakpad dependency
This functionality does not seem to be used
and there are some requests to update the dependency.

Add `third_party` to torch_cpu include directories if compiling with
Caffe2 support, as `caffe2/quantization/server/conv_dnnlowp_op.cc` depends on `third_party/fbgemm/src/RefImplementations.h`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75394
Approved by: https://github.com/janeyx99, https://github.com/seemethere
2022-05-03 20:21:55 +00:00
Stephen Jia
ccd5fa506f [iOS][coreml] Add CoreML memory observer Round 2
Summary:
Add an observer to `PTMCoreMLExecutor` so we can inspect OOMs in production to help with T115554493.

The behaviour of the logger is as follows (a rough code sketch appears after this list):

1. Each time a model is compiled, there is a chance we publish all logs to QPL. This is determined by the randomly generated `_model_load_id` and `_sample_thresh`.
2. If we are publishing all logs, then every `_sample_every` inferences will be logged via QPL.
3. Every QPL log will collect memory metrics before and after model compilation/inference
4. If memory pressure is not normal (remaining mem < 400 MB) before or after compilation/inference, then that compilation/inference will be logged to QPL no matter what.
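A minimal sketch of that policy (all thresholds and names here are hypothetical stand-ins; the real observer lives in `PTMCoreMLExecutor`):

```cpp
#include <cstdint>
#include <random>

struct CoreMLObserverSketch {
  static constexpr uint32_t kSampleThresh = 1000;  // assumed value
  static constexpr uint32_t kSampleEvery = 10;     // assumed value
  static constexpr int64_t kLowMemMB = 400;        // from the description

  uint32_t model_load_id = std::random_device{}();  // random per model load
  uint32_t inference_count = 0;

  // One plausible reading of "a chance we publish all logs".
  bool publishAll() const { return model_load_id % kSampleThresh == 0; }

  bool shouldLog(int64_t remaining_mem_mb) {
    if (remaining_mem_mb < kLowMemMB) return true;  // pressure: always log
    if (!publishAll()) return false;
    return inference_count++ % kSampleEvery == 0;   // every kSampleEvery-th
  }
};
```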

Previous diff got reverted due to OSS CI failures. Fixed those failures in this iteration.

Test Plan:
We can test in pytorch playground and inspect the QPL logs through Flipper:

```
arc focus2 -b pp-ios -a ModelRunner -a //xplat/caffe2/c10:c10Apple -a //xplat/caffe2:torch_mobile_coreApple  -a //xplat/caffe2/fb/dynamic_pytorch:dynamic_pytorch_implApple -a //xplat/caffe2:coreml_delegateApple  -a ModelRunnerDevOps -a //xplat/caffe2:torch_mobile_all_opsApple -a coreml_memory_observer -a //xplat/perflogger:perfloggerApple -fd --force-with-wrong-xcode
```

To check results in Hive/Scuba, test in instagram:

```
arc focus2 -b igios-no-extensions -a //fbobjc/Apps/Instagram/AppLibraries/Core/QPL/IGPerformanceLogging:IGPerformanceLogging -a //xplat/caffe2/c10:c10Apple -a //xplat/caffe2:torch_mobile_coreApple  -a //xplat/caffe2/fb/dynamic_pytorch:dynamic_pytorch_implApple -a //xplat/caffe2:coreml_delegateApple -a //xplat/caffe2:torch_mobile_all_opsApple -a //xplat/perflogger:perfloggerApple -a coreml_memory_observerApple -c pt.enable_qpl=1 --force-with-wrong-xcode
```

Note that we need to change `_sample_thresh` to ensure logs show up.

Reviewed By: kimishpatel

Differential Revision: D35970823

fbshipit-source-id: 2cb4d73931f35a0fa7f362b9fb44b9f0a00aeb82
(cherry picked from commit 1e965acf72958fc59d0cd0b4958e54955cc1adf2)
2022-05-03 17:30:55 +00:00
Peter Bell
d23ecbfc9a stft: remove non-center overload and python functional wrapper
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73434

Approved by: https://github.com/anjali411
2022-05-03 14:30:35 +00:00
CodemodService FBSourceClangFormatLinterBot
461cc0a960 [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: adamjernst

Differential Revision: D36061557

fbshipit-source-id: 61ea6017a3550dbbb13b7b288a1d537253c25fe2
(cherry picked from commit 2ea13c57e96763318859159b6441c02f7c72caf2)
2022-05-02 22:07:42 +00:00
Lucian Grijincu
bf3dcb9599 fix anon-struct usage that's a warning/error -Wnon-c-typedef-for-linkage (#137)
Summary:
X-link: https://github.com/facebook/CacheLib/pull/137

Fix
  error: anonymous non-C-compatible type given name for linkage purposes by alias declaration; add a tag name here [-Werror,-Wnon-c-typedef-for-linkage]
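For illustration, a minimal instance of the fix (the struct members are made up; only the tag-name change matters):

```cpp
// Before: `using LogInfo = struct { int count; };` -- an anonymous
// non-C-compatible type named only via an alias, which the warning rejects.
// After: give the struct a tag name of its own.
using LogInfo = struct LogInfo_ {
  int count;
};

int main() {
  LogInfo info{42};
  (void)info;
}
```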

Test Plan:
```name=before
buck-out/v2/gen/fbcode/d839c731f5505c62/datasec/messaging/auth/utils/__MessagingAuthUtils__/headers/datasec/messaging/auth/utils/MessagingAuthLogger.h:83:25: error: anonymous non-C-compatible type given name for linkage purposes by alias declaration; add a tag name here [-Werror,-Wnon-c-typedef-for-linkage]
  using LogInfo = struct {
```

CI

Reviewed By: philippv

Differential Revision: D36043476

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76610
Approved by: https://github.com/philippv, https://github.com/osalpekar
2022-05-02 18:14:30 +00:00
PyTorch MergeBot
3dcd67a1b3 Revert "[Re-landing 68111] Add JIT graph fuser for oneDNN Graph API (Preview4.1)"
This reverts commit 8b11d81058.

Reverted https://github.com/pytorch/pytorch/pull/74596 on behalf of https://github.com/janeyx99
2022-04-29 15:40:17 +00:00
chunyuan
8b11d81058 [Re-landing 68111] Add JIT graph fuser for oneDNN Graph API (Preview4.1)
Re-landing https://github.com/pytorch/pytorch/pull/68111

## Description
Preview4 PR of this [RFC](https://github.com/pytorch/pytorch/issues/49444).

On the basis of https://github.com/pytorch/pytorch/pull/50256, the below improvements are included:

- The [preview4 release branch](https://github.com/oneapi-src/oneDNN/releases/tag/graph-v0.4.1) of the oneDNN Graph API is used
- The fuser now works with the profiling graph executor. We have inserted type check nodes to guard the profiled tensor properties.

### User API:
The optimization pass is disabled by default. Users could enable it by:
```
torch.jit.enable_onednn_fusion(True)
```

### Performance:
[pytorch/benchmark](https://github.com/pytorch/benchmark) tool is used to compare the performance:
- SkyLake 8180 (1 socket of 28 cores):

  ![image](https://user-images.githubusercontent.com/65992142/151162305-05e44425-a24e-4d5e-94e1-743b40b87a8c.png)

- SkyLake 8180 (single thread):

  ![image](https://user-images.githubusercontent.com/65992142/151162528-69f90b79-d08d-46b8-8775-d80a6ccbce8a.png)
 \* By mapping hardswish to oneDNN Graph, it’s 8% faster than PyTorch JIT (NNC + OFI)
  \** We expect performance gain after mapping transpose, contiguous & view to oneDNN graph ops

### Directory structure of the integration code
Fuser-related code is placed under:
```
torch/csrc/jit/codegen/onednn/
```

Optimization pass registration is done in:
```
torch/csrc/jit/passes/onednn_graph_fuser.h
```

CMake for the integration code is in:
```
caffe2/CMakeLists.txt
```

## Limitations

- In this PR, we have only supported the optimization on Linux. Support on Windows and macOS will be enabled as the next step.
- We have only optimized the inference use case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74596
Approved by: https://github.com/malfet
2022-04-29 01:01:33 +00:00
anjali411
b204ad863f Revert "Revert "Allow specifying tags for aten operators in native_functions.yaml""
This reverts commit ea44645c9a.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76456

Approved by: https://github.com/osalpekar
2022-04-28 02:04:57 +00:00
Kulin Seth
54c75e1e8f Add "mps" device to PyTorch framework.
Remove the "mlc" device for Mac platforms.

This commit will be followed up with:

* adding MPS runtime components
* PyTorch ops for MPS device

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76291
Approved by: https://github.com/albanD
2022-04-27 19:21:57 +00:00
Hannes Friederich
5932c37198 [caffe2] drop XROS ports (#76366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76366

caffe2 is not currently being built for XROS.

Test Plan: CI

Reviewed By: kimishpatel

Differential Revision: D35923922

fbshipit-source-id: 260dacadf0bd5b6bab7833a4ce81e896d280b053
(cherry picked from commit 8370b8dd2519d55a79fa8d45e7951ca8dc0b21a8)
2022-04-26 23:54:22 +00:00
Edward Yang
36420b5e8c Rename tools/codegen to torchgen (#76275)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76275

In preparation for addressing
https://github.com/pytorch/pytorch/issues/73212

Diff was generated with:

```
git mv tools/codegen torchgen
git grep -l 'tools.codegen' | xargs sed -i 's/tools.codegen/torchgen/g'
sed -i "s/\${TOOLS_PATH}\/codegen/\${TORCH_ROOT}\/torchgen/g" caffe2/CMakeLists.txt
```

and manual edits to:

* tools/test/test_gen_backend_stubs.py
* torchgen/build.bzl
* torchgen/gen_backend_stubs.py

aka this diff:

```
 diff --git a/tools/test/test_gen_backend_stubs.py b/tools/test/test_gen_backend_stubs.py
index 3dc26c6d2d..104054575e 100644
 --- a/tools/test/test_gen_backend_stubs.py
+++ b/tools/test/test_gen_backend_stubs.py
@@ -9,7 +9,7 @@ from torchgen.gen_backend_stubs import run
 from torchgen.gen import _GLOBAL_PARSE_NATIVE_YAML_CACHE  # noqa: F401

 path = os.path.dirname(os.path.realpath(__file__))
-gen_backend_stubs_path = os.path.join(path, '../torchgen/gen_backend_stubs.py')
+gen_backend_stubs_path = os.path.join(path, '../../torchgen/gen_backend_stubs.py')

 # gen_backend_stubs.py is an integration point that is called directly by external backends.
 # The tests here are to confirm that badly formed inputs result in reasonable error messages.
 diff --git a/torchgen/build.bzl b/torchgen/build.bzl
index ed04e35a43..d00078a3cf 100644
 --- a/torchgen/build.bzl
+++ b/torchgen/build.bzl
@@ -1,6 +1,6 @@
 def define_targets(rules):
     rules.py_library(
-        name = "codegen",
+        name = "torchgen",
         srcs = rules.glob(["**/*.py"]),
         deps = [
             rules.requirement("PyYAML"),
@@ -11,6 +11,6 @@ def define_targets(rules):

     rules.py_binary(
         name = "gen",
-        srcs = [":codegen"],
+        srcs = [":torchgen"],
         visibility = ["//visibility:public"],
     )
 diff --git a/torchgen/gen_backend_stubs.py b/torchgen/gen_backend_stubs.py
index c1a672a655..beee7a15e0 100644
 --- a/torchgen/gen_backend_stubs.py
+++ b/torchgen/gen_backend_stubs.py
@@ -474,7 +474,7 @@ def run(
 ) -> None:

     # Assumes that this file lives at PYTORCH_ROOT/torchgen/gen_backend_stubs.py
-    pytorch_root = pathlib.Path(__file__).parent.parent.parent.absolute()
+    pytorch_root = pathlib.Path(__file__).parent.parent.absolute()
     template_dir = os.path.join(pytorch_root, "aten/src/ATen/templates")

     def make_file_manager(install_dir: str) -> FileManager:
```

run_all_fbandroid_tests

Test Plan: sandcastle

Reviewed By: albanD, ngimel

Differential Revision: D35770317

fbshipit-source-id: 153ac4a7fef15b1e750812a90bfafdbc8f1ebcdf
(cherry picked from commit c6d485d1d4648fa1c8a4c14c5bf3d8e899b9b4dd)
2022-04-25 01:38:06 +00:00
PyTorch MergeBot
77f23d6460 Revert "stft: remove non-center overload and python functional wrapper"
This reverts commit 6b7d89c4f1.

Reverted https://github.com/pytorch/pytorch/pull/73434 on behalf of https://github.com/osalpekar
2022-04-23 23:21:27 +00:00
Peter Bell
6b7d89c4f1 stft: remove non-center overload and python functional wrapper
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73434

Approved by: https://github.com/anjali411
2022-04-23 00:17:01 +00:00
Prem
7557407653 Added directory check before saving in C++ API
Fixes #75177

I couldn't find any utility method in the PyTorch repo to get a directory name, so I created a function for that.
Let me know if a new function is not needed.

I also referred to [this](https://github.com/pytorch/pytorch/blob/master/c10/test/util/tempfile_test.cpp#L15) for the directory check.

Also, I am using TORCH_CHECK to show the error. This is highly verbose, with the entire stack visible. Is there any alternative that is easier to read? This could happen frequently, so a small and concise error would be more helpful here.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75681
Approved by: https://github.com/albanD
2022-04-22 20:04:41 +00:00
Nikita Shulga
ecd5567980 Tentative fix for CUDA-10.2 windows build failures (#76204)
Summary:
`C10_UNUSED` somehow triggers a segfault in some versions of NVCC, which looks something like the following:
```
caffe2\caffe2\operators\piecewise_linear_transform_op.h(65): internal error: assertion failed: gen_variable_decl: declared_type is NULL (cp_gen_be.c, line 22209 in gen_variable_decl)

1 catastrophic error detected in the compilation of "caffe2/caffe2/operators/piecewise_linear_transform_op.cu".
Compilation aborted.
nvcc error   : 'cudafe++' died with status 0xC0000409

```
Fixes regression introduced by  https://github.com/pytorch/pytorch/pull/75538 / D35747333 (f6c275f55d)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76204

Test Plan: CI

Reviewed By: EscapeZero, atalman

Differential Revision: D35831451

fbshipit-source-id: f744d4688c9fd324f8f54b27781a3def97778d1e
(cherry picked from commit fd64655aa4b6890d205273ec76b416ee3b524783)
2022-04-22 15:15:18 +00:00
Nikita Shulga
b3aa2de5be Sync c2_aten_srcs.bzl with fbsync
When
69e048b090 was committed internally
as b5222584e6, changes were added to
`c2_aten_srcs.bzl`, as noted in https://github.com/pytorch/pytorch/pull/75115#issuecomment-1104448637

This commit reconciles the difference between the two branches and is to be
skipped when imported.
2022-04-22 06:45:40 -07:00
Thiago Crepaldi
e07134092f Add warning when importing caffe2 on build without BUILD_CAFFE2=1
Confusing backtraces are issued to users when they run Caffe2 scripts (or tests) on PyTorch builds without Caffe2 enabled through `BUILD_CAFFE2=1`.

This PR adds warnings (in more than one place) to return a friendly message for the user, helping them overcome the problem by themselves.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73770
Approved by: https://github.com/BowenBao, https://github.com/malfet, https://github.com/garymm
2022-04-21 12:28:10 +00:00
Nikita Shulga
f6c275f55d Remove -Wno-unused-variable from utils.cmake (take 2) (#75538)
Summary:
[Comment](https://github.com/pytorch/pytorch/pull/62445/files#r680132022) claims it was added for consistency with the top-level CMakeLists.txt, but `-Wno-unused-variable` is not mentioned there.

Modify violations in 50+ files that were added in the interim by either removing unused variables or decorating the code with `C10_UNUSED` if a local variable is likely used to extend an object's lifetime until the end of the block.
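An illustrative example of the second kind of fix (the guard type is hypothetical; `C10_UNUSED` itself comes from c10/macros/Macros.h, stubbed here so the sketch compiles standalone):

```cpp
#define C10_UNUSED __attribute__((__unused__))  // stand-in for the c10 macro

struct TraceGuard {   // hypothetical RAII guard
  TraceGuard() {}
  ~TraceGuard() {}    // side effects on scope exit
};

void traced_section() {
  // The guard exists only for its constructor/destructor side effects;
  // C10_UNUSED marks the lack of other uses as intentional.
  C10_UNUSED TraceGuard guard;
}

int main() { traced_section(); }
```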

Caused preventable revert in https://github.com/pytorch/pytorch/pull/72633#issuecomment-1092300787

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75538

Reviewed By: anjali411

Differential Revision: D35747333

Pulled By: malfet

fbshipit-source-id: 3fc5828e44a4c05ba0e89e92613e6ebbdb260626
(cherry picked from commit c179fba21cfa2a0093fad50ccad5a22dd7cff52c)
2022-04-20 17:41:59 +00:00
Nikolay Korovaiko
69e048b090 List of SymInt rebase on master
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75115
Approved by: https://github.com/ezyang
2022-04-20 02:09:55 +00:00
PyTorch MergeBot
5c56b2286b Revert "Remove -Wno-unused-variable from utils.cmake"
This reverts commit 018cbe1f5c.

Reverted https://github.com/pytorch/pytorch/pull/75538 on behalf of https://github.com/seemethere
2022-04-19 17:19:09 +00:00
Nikita Shulga
018cbe1f5c Remove -Wno-unused-variable from utils.cmake
[Comment](https://github.com/pytorch/pytorch/pull/62445/files#r680132022) claims it was added for consistency with the top-level CMakeLists.txt, but `-Wno-unused-variable` is not mentioned there.

Modify violations in 50+ files that were added in the interim by either removing unused variables or decorating the code with `C10_UNUSED` if a local variable is likely used to extend an object's lifetime until the end of the block.

Caused preventable revert in https://github.com/pytorch/pytorch/pull/72633#issuecomment-1092300787

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75538
Approved by: https://github.com/cpuhrsch
2022-04-19 15:26:55 +00:00
PyTorch MergeBot
d79d9fa283 Revert "Remove breakpad dependency"
This reverts commit 9aa3c7fd83.

Reverted https://github.com/pytorch/pytorch/pull/75394 on behalf of https://github.com/malfet
2022-04-17 17:58:51 +00:00
Nikita Shulga
9aa3c7fd83 Remove breakpad dependency
This functionality does not seem to be used
and there are some requests to update the dependency.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75394
Approved by: https://github.com/janeyx99, https://github.com/seemethere
2022-04-17 17:43:45 +00:00
Linbin Yu
bc72add504 [2] remove caffe2 math.h from maskrcnn ops
Summary:
caffe2/utils/math.h depends on protobuf, which is not needed for PyTorch, so we want to get rid of it.
It turns out the maskrcnn ops only need the StorageOrder enum from math.h, so we just copy it.
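For reference, the copied enum amounts to something like this (matching the caffe2 definition as I understand it):

```cpp
// Copied locally so maskrcnn ops no longer include caffe2/utils/math.h
// (and, transitively, protobuf).
enum StorageOrder {
  UNKNOWN = 0,
  NHWC = 1,
  NCHW = 2,
};
```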

Test Plan:
buck build //xplat/caffe2/fb/custom_ops/maskrcnn:maskrcnnAndroid#android-armv7,shared
buck build //xplat/caffe2/fb/custom_ops/maskrcnn:maskrcnn_aabbAndroid#android-armv7,shared
buck build //xplat/caffe2/fb/custom_ops/maskrcnn:maskrcnn_int8Android#android-armv7,shared

Differential Revision: D35624743

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75760
Approved by: https://github.com/malfet
2022-04-15 20:22:55 +00:00
PyTorch MergeBot
cc1902a5ed Revert "Add warning when importing caffe2 on build without BUILD_CAFFE2=1"
This reverts commit b142a224c6.

Reverted https://github.com/pytorch/pytorch/pull/73770 on behalf of https://github.com/suo
2022-04-15 01:39:39 +00:00
Thiago Crepaldi
9bbe1d632e Fix ONNX ATen fallback for non-caffe2 engines
This PR introduces 3 BC changes:

First, this PR propagates `BUILD_CAFFE2` flag to `libtorch` and `libtorch_python`, which is necessary for non-caffe2 ONNX runtimes when using `ONNX_ATEN_FALLBACK` operator export type.

Second, as a complement to https://github.com/pytorch/pytorch/pull/68490, this PR refactors Caffe2's ATen op symbolics to consider not only the `operator_export_type` (aka `ONNX_ATEN_FALLBACK`) when emitting Caffe2 ATen ops, but also whether `BUILD_CAFFE2` (exposed as `torch.onnx._CAFFE2_ATEN_FALLBACK` in the Python binding) is set.

Lastly, it renames `onnx::ATen` to `aten::ATen` for ONNX spec consistency in a BC fashion.
ONNX doesn't have an `ATen` op in its spec, but the PyTorch ONNX converter emits them. Non-Caffe2 backend engines would be misled by such an operator's name/domain. A non-ideal workaround would be to handle ATen ops based on their name and ignore the (non-compliant) domain. Moreover, users could incorrectly file bugs against either ONNX or ONNX Runtime when they inspect the model and notice the presence of an unspecified ONNX operator.
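A usage sketch of the export mode in question (the model is a trivial stand-in):

```python
import torch

class TinyModel(torch.nn.Module):  # stand-in model for illustration
    def forward(self, x):
        return torch.relu(x)

# With ATen fallback, ops the exporter cannot translate are emitted as
# ATen ops (renamed onnx::ATen -> aten::ATen by this PR) instead of
# failing the export outright.
torch.onnx.export(
    TinyModel(), torch.randn(2, 3), "tiny.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```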
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73954
Approved by: https://github.com/BowenBao, https://github.com/malfet, https://github.com/garymm, https://github.com/jiafatom
2022-04-14 23:18:45 +00:00
Thiago Crepaldi
b142a224c6 Add warning when importing caffe2 on build without BUILD_CAFFE2=1
Confusing backtraces are issues to user when they try to run tests or actual scripts using Caffe2 on a pytorch build without Caffe2 enabled through BUILD_CAFFE2=1

This PR adds a warning in more than one place to return a friendly message for the user

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73770
Approved by: https://github.com/BowenBao, https://github.com/malfet, https://github.com/garymm
2022-04-14 23:13:40 +00:00
Thiago Crepaldi
950dc1b457 Fix use of ONNX optimizer by Caffe2 backend
Fixes https://github.com/pytorch/pytorch/issues/69674

The fix is backward-compatible with any Caffe2 build. It simply tries to use the `onnxoptimizer` module when `onnx.optimizer` is not available.

`onnx.optimizer` does not exist since ONNX 1.9 (April 2021) as the code was moved to a different [repo](https://github.com/onnx/onnxoptimizer)

If both `onnx<1.9` and `onnxoptimizer` are not found, the current fallback behavior is maintained (no ONNX optimization happens). Otherwise, the ONNX optimization pass will run from whatever module it is found.
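A minimal sketch of that import fallback (the exact handling in the PR may differ):

```python
try:
    from onnx import optimizer as onnx_optimizer  # onnx < 1.9
except ImportError:
    try:
        import onnxoptimizer as onnx_optimizer    # standalone package
    except ImportError:
        onnx_optimizer = None                     # no optimizer available

def maybe_optimize(model, passes):
    if onnx_optimizer is None:
        return model  # preserve the old fallback: skip optimization
    return onnx_optimizer.optimize(model, passes)
```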

This PR does not require or enforce a direct package dependency to work
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75718
Approved by: https://github.com/BowenBao, https://github.com/malfet
2022-04-14 21:48:48 +00:00
John Shahid
4766314de1 Disable GPU tests for the PiecewiseLinearTransform operator. (#75738)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75738

The tests are failing on platform010 and blocking the upgrade.  Skip the tests given that Caffe2 on GPU is no longer supported.

Test Plan: signals

Reviewed By: ezyang

Differential Revision: D35613544

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75771
Approved by: https://github.com/jamesr66a
2022-04-14 12:07:50 +00:00
John Shahid
b311f255d8 Disable GPU tests for the Dropout operator. (#75739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75739

The tests are failing on platform010 and blocking the upgrade.  Skip the tests given that Caffe2 on GPU is no longer supported.

Test Plan: signals

Reviewed By: ezyang

Differential Revision: D35614159

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75767
Approved by: https://github.com/ezyang
2022-04-14 05:43:41 +00:00
Nikita Shulga
bdf5a87714 Extend sign-compare warnings to gcc (take 2)
Remove the `-Wno-sign-compare` option for GCC
Suppress an erroneous sign-compare warning in `c10::greater_than_max` (see https://godbolt.org/z/Tr3Msnz99)
Fix sign-compare in torch/deploy, `caffe2::QTensor::dim32()`, and `generate_proposals_op_test.cc`
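An illustrative instance of the warning class being fixed (not code from this PR):

```cpp
#include <cstddef>
#include <vector>

int count_nonzero(const std::vector<int>& v) {
  int n = 0;
  // `int i < v.size()` compares signed to unsigned and trips
  // -Wsign-compare; size_t matches the container's size_type.
  for (size_t i = 0; i < v.size(); ++i) {
    if (v[i] != 0) ++n;
  }
  return n;
}
```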

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75544
Approved by: https://github.com/osalpekar
2022-04-13 00:06:52 +00:00
Edward Z. Yang
c2124f5c66 Turn on -Wsign-compare
This is enabled on some of our internal builds, is a common source
of fbcode-only errors, and apparently we are relatively clean on it.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74996

Approved by: https://github.com/malfet
2022-04-12 18:58:14 +00:00
Pavithran Ramachandran
6402e62454 Refactor flatbuffer jit code
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75239

Refactor flatbuffer_serializer to move JIT-related code to a separate file.

Differential Revision: [D35301020](https://our.internmc.facebook.com/intern/diff/D35301020/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D35301020/)!

Approved by: https://github.com/iseeyuan
2022-04-11 23:41:48 +00:00