Commit Graph

28 Commits

Richard Zou
5c92777307 Stop checking in VmapGeneratedPlumbing.h (#82351)
This PR changes VmapGeneratedPlumbing.h to be generated by torchgen. The
output file is ATen/VmapGeneratedPlumbing.h.

Why generate this file inside PyTorch codegen instead of a separate step
in functorch?
- I can't figure out how to get functorch's fbcode target to generate the file
- functorch's build system will, in the mid-term, be absorbed into
pytorch's build system, so I don't want to do the extra work of adding
a step to the functorch build process.

Test Plan:
- build pytorch, build functorch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82351
Approved by: https://github.com/ezyang
2022-07-27 20:39:37 +00:00
Edward Z. Yang
6f0c253956 Add sparse, quantized and nested tensor meta support to codegen (#81793)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81793
Approved by: https://github.com/cpuhrsch, https://github.com/bdhirsh
2022-07-21 21:23:56 +00:00
Richard Howell
51cc614cb9 [pytorch] add missing -fexceptions flags (#81394)
Summary:
Add missing `-fexceptions` flags that are currently being passed through `exported_preprocessor_flags`. The exported preprocessor flags will be removed in a subsequent diff.

This is a rediff of D37386802 (3e1ac21c3b) with the changes split out to avoid reverts.

Test Plan:
Check flag is present:
```
$ buck uquery xplat/caffe2:common_core -a 'compiler_flags'
{
  "//xplat/caffe2:common_core" : {
    "compiler_flags" : [
      "-fexceptions",
      "-frtti",
      "-Os",
      "-Wno-unknown-pragmas",
      "-Wno-write-strings",
      "-Wno-unused-variable",
      "-Wno-unused-function",
      "-Wno-deprecated-declarations",
      "-Wno-shadow",
      "-Wno-global-constructors",
      "-Wno-missing-prototypes",
      "-std=gnu++17",
      "/EHsc",
      "/GR",
      "/O1",
      "/wd4101"
    ]
  }
}
```

Differential Revision: D37813869

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81394
Approved by: https://github.com/linbinyu
2022-07-14 20:03:17 +00:00
PyTorch MergeBot
e608befae4 Revert "[c10] move fexceptions to compiler_flags (#80387)"
This reverts commit 3e1ac21c3b.

Reverted https://github.com/pytorch/pytorch/pull/80387 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-12 14:50:55 +00:00
Richard Howell
3e1ac21c3b [c10] move fexceptions to compiler_flags (#80387)
Summary: Move `-fexceptions` out of the exported preprocessor flags and into the library's compiler flags. Apply the same changes to all rdeps of this library in the caffe2 subtree.

Test Plan:
Verify no rdeps are missing `-fexceptions` that have cpp sources:
```
% buck uquery 'kind(cxx*, rdeps(//xplat/caffe2/..., //xplat/caffe2/c10:c10, 1))' > /tmp/rdeps
% buck uquery '%Ss - attrfilter(preprocessor_flags, "-fexceptions", %Ss) - attrfilter(compiler_flags, "-fexceptions", %Ss)' @/tmp/rdeps
//xplat/pytorch_models/build/pytorch_dev_mobilenetv3/v1/nnc:asm
//xplat/pytorch_models/build/aot_test_model/v1/nnc:asm
//xplat/pytorch_models/build/pytorch_dev_linear/v1/nnc:asm
//xplat/pytorch_models/build/bi_bytedoc_nnc/v1/nnc:asm
//xplat/pytorch_models/build/bi_bytedoc_nnc/v2/nnc:asm
```

Differential Revision: D37386802

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80387
Approved by: https://github.com/linbinyu
2022-07-12 14:49:16 +00:00
Md Aamir Raihan
7ea723b8f6 Updating miniz library from version 2.0.8 -> 2.1.0 (#79636)
Summary:
This PR updates the miniz library from version 2.0.8 to 2.1.0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79636
Approved by: https://github.com/albanD
2022-06-22 15:02:16 +00:00
Michael Andreas Dagitses
e21c0ac9a5 use exe/exepath in our genrules
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79626

Buck does not properly handle caching when the executable is
identified with `$(location ...)`. See
https://fb.workplace.com/groups/askbuck/posts/8600146743367198 for
more information.
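The distinction this commit relies on can be sketched with a minimal pair of genrules (target and tool names here are hypothetical, not the actual PyTorch rules):

```
# Hypothetical Buck genrule sketch. With $(location ...), Buck only resolves
# the tool to a path, so the cache key does not treat the tool as an
# executable dependency and stale cached outputs can be served:
genrule(
    name = "generate-code-location",
    out = "generated",
    cmd = "python3 $(location :codegen_tool) --out $OUT",
)

# With $(exe ...), the tool is tracked as an executable, so changes to it
# (or to its runtime dependencies) correctly invalidate the cached output:
genrule(
    name = "generate-code-exe",
    out = "generated",
    cmd = "$(exe :codegen_tool) --out $OUT",
)
```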

Differential Revision: [D37179273](https://our.internmc.facebook.com/intern/diff/D37179273/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D37179273/)!

Approved by: https://github.com/malfet
2022-06-16 02:23:51 +00:00
Michael Andreas Dagitses
86606fbe22 fix generate-code caching by indicating that the binary is an executable
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79625

Per Josiah Gaskin's follow-up on
https://www.internalfb.com/intern/qa/365579, using $(exe ...) instead
of $(location ...) should address the caching behavior.

@override-unit-failures
(Note: this ignores all push blocking failures!)

Differential Revision: [D36970846](https://our.internmc.facebook.com/intern/diff/D36970846/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36970846/)!

Approved by: https://github.com/malfet
2022-06-16 02:21:03 +00:00
Brian Hirsh
adf8060600 add a new alias key for functional to view op decompositions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79615

Approved by: https://github.com/zou3519
2022-06-15 23:18:09 +00:00
Michael Andreas Dagitses
eb5751d84b move gen_aten and gen_aten_hip into shared build structure
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77751

This requires two changes to rule generation:
 * pulling the cpu static dispatch prediction into the rules
 * disabling the Bazel-style generated file aliases

Differential Revision: [D36481918](https://our.internmc.facebook.com/intern/diff/D36481918/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36481918/)!

Approved by: https://github.com/kit1980, https://github.com/seemethere
2022-06-15 18:22:52 +00:00
anjali411
38350acf8f Autogen Tags enum, and allow specifying tags while defining an op
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79322

Approved by: https://github.com/albanD
2022-06-11 00:29:32 +00:00
Michael Andreas Dagitses
7d12eecba1 move GENERATED_CPP_CUDA to caffe2/build.bzl
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77744

This is needed by gen_aten and its immediate downstream libraries. As
such, it can live solely in the shared build structure.

Differential Revision: [D36480812](https://our.internmc.facebook.com/intern/diff/D36480812/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36480812/)!

Approved by: https://github.com/kit1980
2022-06-02 18:38:05 +00:00
Michael Andreas Dagitses
7dc5b5bf10 move generated_srcs_list.bzl into caffe2/build.bzl
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77680

This is only used by ATen code generation and libraries. These are
about to move into the shared build structure, so let's move this
cleanly first.

Differential Revision: [D36455725](https://our.internmc.facebook.com/intern/diff/D36455725/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36455725/)!

Approved by: https://github.com/kit1980
2022-06-01 23:03:54 +00:00
Antonio Kim
02c4d877b4 Codegen Non-Native IR Nodes (#76535)
Add codegen infrastructure to generate IR nodes for non-native ops.

The proposed change is to add a `non_native` key to the `{backend}_native_functions.yaml` file that contains schema definitions similar to what is found in `native_functions.yaml`. e.g.
```
non_native:
    ...
    - func: expand(Tensor input, int[] size, bool is_scalar_expand) -> Tensor
    ...
```
These definitions are parsed into a `LazyIrSchema` that can be used for generating IR nodes using `GenLazyIR`.
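As a rough illustration of the parsing step described above, here is a minimal stand-in, not torchgen's actual `LazyIrSchema` parser (the real one handles defaults, keyword-only arguments, mutability annotations, and much more):

```python
import re

def parse_func_schema(schema: str):
    """Split a `func:` entry like the one above into (name, args, return type).

    Illustrative sketch only; torchgen's real schema parser is far richer.
    """
    m = re.fullmatch(r"(\w+)\((.*)\)\s*->\s*(.+)", schema.strip())
    if m is None:
        raise ValueError(f"unparsable schema: {schema!r}")
    name, arglist, ret = m.groups()
    args = []
    for arg in filter(None, (a.strip() for a in arglist.split(","))):
        type_, arg_name = arg.rsplit(" ", 1)  # "int[] size" -> ("int[]", "size")
        args.append((type_, arg_name))
    return name, args, ret

name, args, ret = parse_func_schema(
    "expand(Tensor input, int[] size, bool is_scalar_expand) -> Tensor"
)
# name == "expand", ret == "Tensor",
# args == [("Tensor", "input"), ("int[]", "size"), ("bool", "is_scalar_expand")]
```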

Fixes #74628

CC: @wconstab @desertfire @henrytwo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76535
Approved by: https://github.com/wconstab
2022-05-24 19:29:23 +00:00
Michael Andreas Dagitses
c2ff413622 move generated-autograd-headers to the shared build structure
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76183

This is a relatively simple target but we have to fix our header
expansion to understand generated files. Next step will be to use this
in Bazel.

Differential Revision: [D35820541](https://our.internmc.facebook.com/intern/diff/D35820541/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D35820541/)!

Approved by: https://github.com/dreiss, https://github.com/malfet
2022-05-19 04:31:56 +00:00
Michael Andreas Dagitses
e517fc8b28 eliminate Bazel's libtorch_cpp_generated_sources
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76179

This list is redundant with the shared build structure.

Differential Revision: [D35818500](https://our.internmc.facebook.com/intern/diff/D35818500/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D35818500/)!

Approved by: https://github.com/dreiss
2022-05-17 03:46:49 +00:00
Michael Andreas Dagitses
a013d83bf9 eliminate Bazel's libtorch_python_generated_sources
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76178

These contents are already identified in the shared build structure.

Differential Revision: [D35817999](https://our.internmc.facebook.com/intern/diff/D35817999/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D35817999/)!

Approved by: https://github.com/dreiss
2022-05-17 03:43:02 +00:00
PyTorch MergeBot
7eaf4780ba Revert "[LT] Store OpKind for each IR subclass in a static field"
This reverts commit ac37ddc795.

Reverted https://github.com/pytorch/pytorch/pull/76711 on behalf of https://github.com/malfet
2022-05-09 20:50:09 +00:00
Bin Bao
ac37ddc795 [LT] Store OpKind for each IR subclass in a static field
Summary: Currently OpKind is stored as an object field called op_ for each IR
node, and one usage of op_ is to avoid dynamic_cast in NodeCast when we
need to downcast a base-node pointer into a concrete sub-node pointer.
As a result, we need to construct and pass in an op when downcasting
nodes, and this becomes quite annoying when we start to implement the
trie-based IR node reuse. More importantly, the op for each subclass
should be unique for that subclass and thus making it a const static field
is a more logical design.

In this PR, we still keep the object-level op_ for easier XLA adoption. As
future work, we can come back to remove op_, make the op() method
virtual, and get rid of OpKind in all the node constructors.
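The design point can be illustrated with a loose Python analogue (the real implementation is C++; the class and op names here are invented for the sketch):

```python
class Node:
    """Base IR node; stores the op it was constructed with."""
    def __init__(self, op):
        self.op = op

class Expand(Node):
    CLASS_OP = "lazy::expand"  # analogue of the const static OpKind field
    def __init__(self):
        # Callers no longer pass an op in; the subclass supplies its own.
        super().__init__(Expand.CLASS_OP)

class Cos(Node):
    CLASS_OP = "lazy::cos"
    def __init__(self):
        super().__init__(Cos.CLASS_OP)

def node_cast(node, cls):
    # Downcast by comparing the stored op against the target subclass's
    # static op, instead of a dynamic_cast-style runtime type check.
    return node if node.op == cls.CLASS_OP else None

assert node_cast(Expand(), Expand) is not None
assert node_cast(Cos(), Expand) is None
```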

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76711

Approved by: https://github.com/wconstab, https://github.com/JackCaoG
2022-05-06 19:14:46 +00:00
mikey dagitses
37fb636b7f fix package violation caused by D35587412 (#76808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76808

This reached into aten/TARGETS in fbcode.
ghstack-source-id: 155484095

Test Plan: Verified manually.

Reviewed By: dreiss, malfet

Differential Revision: D36128458

fbshipit-source-id: c7447b3a40fe905993e799d211241e72183f8acb
(cherry picked from commit b68eb7a45d8973fadab2dfcafcbb0f63801abd40)
2022-05-05 23:39:03 +00:00
mikey dagitses
ac45fb9b93 switch Bazel to the shared generate-code genrule (#75790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75790

We were building it before, but now we use it in downstream
rules. This enables us to eliminate the handwritten genrule.
ghstack-source-id: 155300051

Test Plan: Verified locally and in CI.

Reviewed By: dreiss

Differential Revision: D35645390

fbshipit-source-id: 478bb37a6ec295c232f66383babf46606e83ed5e
(cherry picked from commit 2822d4c5b48c6d9282149b2d43cf72d645237196)
2022-05-04 15:26:25 +00:00
mikey dagitses
096ff0ecca introduce new --gen-dir flag to generate_code and use it in fbcode (#75800)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75800

This leads to more similarities between OSS CMake and eventually OSS
Bazel. We will be able to generate files with the same names and not
have different file lists between the builds.
ghstack-source-id: 155300043

Test Plan: Verified locally and in CI.

Reviewed By: dreiss

Differential Revision: D35648586

fbshipit-source-id: 9f1638b5665ebcc64466883f65ef24a2bfd05228
(cherry picked from commit 7f2acff1baa8dfafddefdc720714f8d39feda436)
2022-05-04 15:26:25 +00:00
mikey dagitses
401179f263 disable the //:generate-code target in Bazel (#76174)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76174

This is about to conflict with the existing Bazel codegen
outputs. Switch to it atomically.
ghstack-source-id: 155029309

Test Plan: Verify manually and rely on CI.

Reviewed By: dreiss

Differential Revision: D35815288

fbshipit-source-id: 8b35e7baeb8572aef13c07cac689ee84dc7335d5
(cherry picked from commit 6dde9831a30fcf664b73fccaa51e30a7049b3251)
2022-05-03 12:13:19 +00:00
mikey dagitses
eb27c85160 move generate-code into shared build structure (#75699)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75699

ghstack-source-id: 155255334

Test Plan: Rely on CI.

Reviewed By: dreiss

Differential Revision: D35587412

fbshipit-source-id: 5ab79c07029de279a1fae36519654a73bb61d430
(cherry picked from commit 4896b72a6c0cc087e36889d21d2d885009d94a6d)
2022-05-03 09:53:37 +00:00
mikey dagitses
8b1cf8ed6b move version_h to shared build structure in Buck (#75964)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75964

This is already in the shared build structure for Bazel, but we need
to implement genrule for fbcode.

There's an xplat target that can't build in fbcode yet because the
dependencies don't line up, so we have to add a tag to exclude it.

ghstack-source-id: 154696020

Test Plan: Rely on CI

Reviewed By: malfet

Differential Revision: D35443900

fbshipit-source-id: 0768b29906c8218d7aebfdc7c18d69f59a0c9384
(cherry picked from commit bff47be441bd142392a07aa177be02e18aa86f1c)
2022-04-26 12:06:09 +00:00
mikey dagitses
f4200600e4 move Bazel version header generation to shared build structure (#75332)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75332

ghstack-source-id: 154678044

Test Plan: Rely on OSS CI.

Reviewed By: malfet

Differential Revision: D35434229

fbshipit-source-id: 7cdd33fa32d0c485f44477e414c24c9bc4b74963
(cherry picked from commit 60285c613e8703c52f36f0bf1178e35c04574ffa)
2022-04-25 17:51:30 +00:00
mikey dagitses
d78dd825ba define the caffe2_serialize target in Bazel (#75942)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75942

This also requires changes to the target definition and the xplat translator to get it working.
ghstack-source-id: 154678046

Test Plan: Verify locally and rely on CI.

Reviewed By: malfet

Differential Revision: D35704597

fbshipit-source-id: 6b0d9f5a044609b24dda656f80233ba6186c097f
(cherry picked from commit 6de43c5ca7a973c9f8b71f4d60d4d5e85cc2ba21)
2022-04-25 16:14:05 +00:00
Sergii Dymchenko
a5b4839f35 Move //xplat/caffe2:caffe2_serialize to shared build structure
Summary: This is a first step to migrate xplat targets to the shared build structure. Eventually both xplat Buck and open-source Bazel targets will be generated from the shared build.bzl.

Test Plan: Should be no-op, rely on CI.

Reviewed By: malfet

Differential Revision: D35270004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75847
Approved by: https://github.com/linbinyu
2022-04-15 17:25:29 +00:00