Commit Graph

32 Commits

Author SHA1 Message Date
Aaron Gokaslan
700941f683 Fixup c10 headers with clang-tidy (#91407)
Clang-tidy was not applied properly to headers in c10, as documented in #91406. These are the easy automated fixes that came out of applying clang-tidy to the c10 part of the codebase. cc @ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91407
Approved by: https://github.com/ezyang
2022-12-28 11:12:22 +00:00
Nikita Shulga
e59120ab51 C++20 compatible changes (#85703)
`std::hash<T>::result_type` has been deprecated since C++17 and is removed in C++20, so use `c10::invoke_result_t` to define it instead.

Fixes https://github.com/pytorch/pytorch/issues/85603
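
A minimal sketch of the kind of replacement (using the standard `std::invoke_result_t` here for self-containment; the PR itself uses the `c10::invoke_result_t` alias mentioned above):

```cpp
#include <cstddef>
#include <functional>
#include <type_traits>

// A hasher that used to expose the deprecated nested typedef
//   using result_type = typename std::hash<Key>::result_type;  // removed in C++20
// and now derives the same type from what the hash call actually returns.
template <typename Key>
struct HashWrapper {
  using result_type = std::invoke_result_t<std::hash<Key>, Key>;

  result_type operator()(const Key& key) const {
    return std::hash<Key>{}(key);
  }
};

static_assert(std::is_same_v<HashWrapper<int>::result_type, std::size_t>,
              "std::hash is specified to return std::size_t");
```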

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85703
Approved by: https://github.com/ezyang
2022-09-27 19:43:14 +00:00
Scott Wolchok
cf3ce329b5 [PyTorch] Avoid initializing storage for empty Optionals
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78947

We don't need to initialize for the non-constexpr case ever, or in the constexpr case after C++20.
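
A rough sketch of the trick (names and the feature-test check are assumptions, not the c10 implementation): the union's dummy member never needs a runtime write, and under C++20 even constexpr construction may leave the union member unset.

```cpp
#include <type_traits>

template <typename T>
struct optional_storage {
  union {
    unsigned char dummy_;  // "empty" state; nothing ever reads it
    T value_;
  };
  bool has_value_;

#if defined(__cpp_constexpr) && __cpp_constexpr >= 201907L
  // C++20 (P1331) lets a constexpr constructor leave the union uninitialized.
  constexpr optional_storage() noexcept : has_value_(false) {}
#else
  // Pre-C++20 constexpr rules require initializing one union member,
  // so only this path pays for the dummy write.
  constexpr optional_storage() noexcept : dummy_(), has_value_(false) {}
#endif

  // Engaged construction and destruction handling are omitted for brevity.
};

static_assert(std::is_trivially_destructible<optional_storage<int>>::value, "");
```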

Differential Revision: [D36519379](https://our.internmc.facebook.com/intern/diff/D36519379/)

Approved by: https://github.com/ezyang, https://github.com/malfet
2022-06-08 03:56:24 +00:00
PyTorch MergeBot
b7caf402aa Revert "[PyTorch] Avoid initializing storage for empty Optionals"
This reverts commit 17bd683aad.

Reverted https://github.com/pytorch/pytorch/pull/77858 on behalf of https://github.com/kit1980 because ASAN builds failed on both the PR and trunk; see 17bd683aad
2022-06-04 06:39:01 +00:00
Scott Wolchok
17bd683aad [PyTorch] Avoid initializing storage for empty Optionals
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77858

We don't need to initialize for the non-constexpr case ever, or in the constexpr case after C++20.

Differential Revision: [D36519379](https://our.internmc.facebook.com/intern/diff/D36519379/)

Approved by: https://github.com/ezyang, https://github.com/malfet
2022-06-04 02:03:49 +00:00
Rohit Goswami
801abc0cdd MAINT, DOC: Trivial spellings and warnings (#72745)
Summary:
Fixes N/A.
Just minor annoyances.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72745

Reviewed By: samdow

Differential Revision: D34216016

Pulled By: albanD

fbshipit-source-id: b65600b50e41a1dd7bf7d076b0dd3e2d1c99caf9
(cherry picked from commit b959392a5f)
2022-02-14 21:55:19 +00:00
Nolan O'Brien
a383d01774 [fbcode][warnings] Suppress warnings in caffe2/c10 (#71356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71356

Suppress remaining header-based warnings in `caffe2/c10` when building with `clang`.

Test Plan: CI pass

Reviewed By: r-barnes

Differential Revision: D33600097

fbshipit-source-id: e1c0d84a0bad768eb03e047d62b5379cf28b48e2
2022-01-15 18:34:08 -08:00
Nolan O'Brien
8f4cec2231 [warnings][Caffe2] Suppress warnings in caffe2 headers (#71196)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71196

`caffe2` headers contain code that can elicit warnings when built with strict compiler flags.  Rather than force downstream/consuming code to weaken their compiler flags, suppress those warnings in the header using `#pragma clang diagnostic` suppressions.
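
For illustration, the pattern looks like the following (the specific warning flag is just an example, not necessarily one touched by this diff):

```cpp
// Inside a header that strict downstream builds include:
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wshorten-64-to-32"

inline int narrow_for_legacy_api(long long value) {
  return value;  // would trigger -Wshorten-64-to-32; suppressed locally instead
}

#pragma clang diagnostic pop
```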

Test Plan: CI Pass

Reviewed By: malfet

Differential Revision: D33536233

fbshipit-source-id: 74404e7a5edaf244f79f7a0addd991a84442a31f
2022-01-12 10:16:35 -08:00
Jane Xu
1ee66a5278 Remove CUDA 9.2 references, conditionals, and workarounds (#65070)
Summary:
Title says it all

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65070

Reviewed By: malfet

Differential Revision: D30966464

Pulled By: janeyx99

fbshipit-source-id: e454906fd5d7d321d390939ba5d237e1d9b150f8
2021-09-17 12:28:23 -07:00
Michael Dagitses
773c8b6440 support optional comparisons with different but comparable types (#62890)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62565
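
Hypothetical usage of the feature in the title (not taken from the PR's tests):

```cpp
#include <string>

#include <c10/util/Optional.h>

bool is_conv1(const c10::optional<std::string>& name) {
  // std::string and const char* are different but comparable types; after this
  // change the mixed comparison compiles and behaves like std::optional's.
  return name == "conv1";
}
```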

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62890

Reviewed By: ejguan

Differential Revision: D30396008

Pulled By: dagitses

fbshipit-source-id: fca02207509f882973d54484f89c4d116505fc66
2021-08-18 21:40:38 -07:00
Erjia Guan
691183bb74 Fix compile failure on CUDA92 (#60017)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60016

For CUDA 9.2:
- OptionalBase did not check `is_arrayref`
- constexpr does not seem to allow raising an exception on CUDA 9.2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60017

Reviewed By: malfet

Differential Revision: D29139515

Pulled By: ejguan

fbshipit-source-id: 4f4f6d9fe6a5f2eadf913de0a9781cc9f2e6ac6f
2021-06-16 12:23:08 -07:00
Scott Wolchok
1798ff02e4 [PyTorch] Optimize c10::optional<ArrayRef<T>> for size (#59333)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59333

Code comment should explain this in sufficient detail. In brief, making it 16 bytes should get it to be passed in registers.
ghstack-source-id: 130631329

Test Plan: Updated optional_test and added static_assert in Optional.cpp.
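
A guess at the shape of that static_assert (the exact assertion in Optional.cpp may be stated differently):

```cpp
#include <c10/util/ArrayRef.h>
#include <c10/util/Optional.h>

// 16 bytes = pointer + length, i.e. no overhead over the ArrayRef itself,
// which keeps the optional small enough to be passed in registers.
static_assert(sizeof(c10::optional<c10::IntArrayRef>) == 16,
              "c10::optional<ArrayRef<T>> should stay register-sized");
```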

Reviewed By: ezyang

Differential Revision: D28843027

fbshipit-source-id: 3029f05e03a9f04ca7337962e7770cdeb9a608d9
2021-06-07 11:35:17 -07:00
Scott Wolchok
44cc873fba [PyTorch] Autoformat c10 (#56830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56830

Opt into formatting on GitHub and format everything. This is a trial run before turning on formatting for more and eventually all of the codebase.

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D27979080

fbshipit-source-id: a80f0c48691c08ae8ca0af06377b87e6a2351151
2021-04-30 21:23:28 -07:00
Ansley Ussery
c619892482 Fix errata (#49903)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49903

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25718411

Pulled By: ansley

fbshipit-source-id: 0cc365c5a53077752dc1c5a5c4a65b873baa3604
2020-12-28 20:40:41 -08:00
Rong Rong
4fe583e248 fix defaulted move constructor not compiling correctly on CUDA 9.2 (#48257)
Summary:
Explicitly define the move constructor when using CUDA version <= 9200 (CUDA 9.2).

this fixes https://github.com/pytorch/csprng/issues/84
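
A sketch of the workaround pattern only (the version check and member here are illustrative, not the exact c10 change):

```cpp
#include <utility>

struct Holder {
  int payload_ = 0;

#if defined(CUDA_VERSION) && CUDA_VERSION <= 9200
  // Old nvcc mishandles the defaulted move constructor in this context,
  // so spell it out explicitly instead of using "= default".
  Holder() = default;
  Holder(Holder&& other) noexcept : payload_(std::move(other.payload_)) {}
#else
  Holder() = default;
  Holder(Holder&&) = default;
#endif
};
```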

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48257

Reviewed By: malfet, mrshenli

Differential Revision: D25123467

Pulled By: walterddr

fbshipit-source-id: 72deff82c421fbaada6f38b2b6288f7f2f833062
2020-12-01 14:23:20 -08:00
Sebastian Messmer
edf751ca2f Make empty c10-full (#46092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46092

Make empty c10-full without using hacky-wrapper, i.e. port the kernel to the new style signature.

This PR also changes the signature of some helpers called by empty to the new style.
ghstack-source-id: 116544203
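
Roughly what the signature migration looks like (the declarations below are illustrative and may not match the exact overloads touched in this diff):

```cpp
#include <ATen/ATen.h>

// Old "hacky-wrapper" style: options bundled into a TensorOptions struct.
at::Tensor empty_old_style(
    at::IntArrayRef size,
    const at::TensorOptions& options,
    c10::optional<at::MemoryFormat> memory_format);

// New c10-full style: each option is passed as its own c10::optional.
at::Tensor empty_new_style(
    at::IntArrayRef size,
    c10::optional<at::ScalarType> dtype,
    c10::optional<at::Layout> layout,
    c10::optional<at::Device> device,
    c10::optional<bool> pin_memory,
    c10::optional<at::MemoryFormat> memory_format);
```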

(Note: this ignores all push blocking failures!)

Test Plan:
vs prev diff (outdated, before c10::optional fix): https://www.internalfb.com/intern/fblearner/details/224735103/

after c10::optional fix:
https://www.internalfb.com/intern/fblearner/details/231391773/

Also, after the c10::optional fix, the instruction-counting benchmark shows a 2% regression for calling empty from Python. We decided this is acceptable and chose not to land D24425836, which would fix the regression.

Reviewed By: ezyang

Differential Revision: D24219944

fbshipit-source-id: e554096e90ce438c75b679131c3151ff8e5c5d50
2020-11-12 17:08:21 -08:00
Scott Wolchok
df5b4696cf [Pytorch] Specialize guts of c10::optional for 32-bit scalars (#47015)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47015

c10::optional always has non-trivial copy and move operations. This change specializes it for 32-bit scalars so that it has trivial copy and move operations in that case. Ideally, we would instead rely on P0602 "variant and optional should propagate copy/move triviality" and use `std::optional` (or implement that functionality ourselves). We can't use `std::optional` because we are stuck with C++14. Implementing the full P0602 ourselves would add even more complexity. We could do it, but this should be a helpful first step.
ghstack-source-id: 115886743
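
A minimal illustration of the direction (assumptions mine; the actual c10 specialization is more involved): a flag-plus-value layout whose copy and move stay implicitly defaulted, and therefore trivial, which is what P0602 asks `std::optional` itself to propagate.

```cpp
#include <cstdint>
#include <type_traits>

template <typename T>
struct small_scalar_optional {
  static_assert(std::is_trivially_copyable<T>::value && sizeof(T) <= 4,
                "intended for 32-bit (or smaller) trivially copyable scalars");

  bool has_value_ = false;
  T value_{};  // meaningful only when has_value_ is true

  small_scalar_optional() = default;
  /* implicit */ small_scalar_optional(T v) : has_value_(true), value_(v) {}

  // Copy/move constructors and assignments stay implicitly defaulted (trivial),
  // unlike a generic optional whose copy/move must branch on has_value_.
};

static_assert(
    std::is_trivially_copy_constructible<small_scalar_optional<int32_t>>::value,
    "copies of the specialized optional are trivial");
```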

Test Plan:
Collect Callgrind instruction counts for `torch.empty(())`. Data:

Make empty c10-ful (https://github.com/pytorch/pytorch/pull/46092):

```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7ffaed1128e0>
torch.empty(())
                           All          Noisy symbols removed
    Instructions:       648005                     632899
    Baseline:             4144                       3736
100 runs per measurement, 1 thread
```

This diff atop #46092:

```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7f943f1dc8e0>
torch.empty(())
                           All          Noisy symbols removed
    Instructions:       602347                     591005
    Baseline:             4106                       3736
100 runs per measurement, 1 thread
```

(6.6% improvement vs #46092)

Pass optionals by const reference (https://github.com/pytorch/pytorch/pull/46598)

```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7f1abb3988e0>
torch.empty(())
                           All          Noisy symbols removed
    Instructions:       601349                     590005
    Baseline:             4162                       3736
100 runs per measurement, 1 thread
```
(6.8% improvement vs #46092)

This diff atop #46598 (i.e., both together)

```
<torch.utils.benchmark.utils.valgrind_wrapper.timer_interface.CallgrindStats object at 0x7f9577c22850>
torch.empty(())
                           All          Noisy symbols removed
    Instructions:       596095                     582451
    Baseline:             4162                       3736
100 runs per measurement, 1 thread
Warning: PyTorch was not built with debug symbols.
         Source information may be limited. Rebuild with
         REL_WITH_DEB_INFO=1 for more detailed results.
```

(another 1.3% savings!)

#46598 outperformed this change slightly, and combining the two leads to further benefits. I guess we should do both! (Though I still don't understand why passing optionals that should fit in a register by const reference would help...)

Reviewed By: smessmer

Differential Revision: D24552280

fbshipit-source-id: 4d93bfcffafebd8c01559398513fa6b9db959d11
2020-11-04 21:08:50 -08:00
Scott Wolchok
17be8ae11a [pytorch] Remove c10::nullopt_t::init (#47013)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47013

It was getting used in client code, and it's not part of `std::optional`.
ghstack-source-id: 115769682
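
For reference, `std::nullopt_t` has no nested `::init` type; a minimal conforming sketch (not the c10 source) looks roughly like this:

```cpp
struct nullopt_t {
  // A non-default constructor keeps nullopt_t from being an aggregate or
  // default-constructible, mirroring std::nullopt_t; no nested ::init needed.
  explicit constexpr nullopt_t(int) {}
};

constexpr nullopt_t nullopt{0};
```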

Test Plan: Existing tests

Reviewed By: smessmer

Differential Revision: D24547710

fbshipit-source-id: a24e0fd03aba1cd996c85b12bb5dcdb3e7af46b5
2020-11-04 14:14:55 -08:00
Edward Yang
c68c5ea0e6 Upgrade cpp docs Sphinx/breathe/exhale to latest version (#41312)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41312

I was hoping that exhale had gotten incremental recompilation
in its latest version, but experimentally this does not seem
to have been the case.  Still, I had gotten the whole shebang
to be working on the latest version of these packages, so might
as well land the upgrade.  There was one bug in Optional.h that
I had to fix; see the cited bug report.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D22526349

Pulled By: ezyang

fbshipit-source-id: d4169c2f48ebd8dfd8a593cc8cd232224d008ae9
2020-07-14 15:35:43 -07:00
Sebastian Messmer
f0072b3af5 Remove C++11 compatibility from c10::optional (#30919)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30919

deletecode
ghstack-source-id: 96383227

Test Plan: waitforsandcastle

Differential Revision: D18869641

fbshipit-source-id: c08345d17a291cea3749af20473b6acddc78ab27
2020-01-08 09:19:59 -08:00
Will Feng
aad5071206 Use torch::variant for enums in C++ API
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26837

Test Plan: Imported from OSS

Differential Revision: D17579438

Pulled By: yf225

fbshipit-source-id: 9ac59df28a317fdb3be2cc02c65962ad99117127
2019-10-16 22:40:57 -07:00
Zachary DeVito
61818b8986 Add interface declarations to JIT (#25258)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25258

This is the first commit in a series to add interfaces to the JIT.
Interfaces allow specifying, via a blank Python class, an abstract interface
that can be used in type annotations for Script functions. If a TorchScript
class implements all the methods in the interface with the appropriate types,
then it is implicitly considered to implement that interface.

Follow-ups required:
* implementation of serialization
* implementation in the parser frontend
* better error reporting for explaining why a class does not meet an
  interface specification.

Test Plan: Imported from OSS

Differential Revision: D17079963

Pulled By: zdevito

fbshipit-source-id: a9986eeba2d4fdedd0064ce7d459c0251480a5a0
2019-08-27 22:54:37 -07:00
peter
95b5718007 Prevent VS from emitting errors when using swap in Optional.h (#22182)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/21706
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22182

Differential Revision: D15981740

Pulled By: ezyang

fbshipit-source-id: d58b3ca3aea8d3d383150208b87fa4bbd4f6fe33
2019-06-26 07:29:35 -07:00
Karl Ostmo
49481d576d Torch rename (#20774)
Summary:
This renames the CMake `caffe2` target to `torch`, as well as renaming `caffe2_gpu` to `torch_gpu` (and likewise for other gpu target variants).  Many intermediate variables that don't manifest as artifacts of the build remain for now with the "caffe2" name; a complete purge of `caffe2` from CMake variable names is beyond the scope of this PR.

The shell `libtorch` library that had been introduced as a stopgap in https://github.com/pytorch/pytorch/issues/17783 is again flattened in this PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20774

Differential Revision: D15769965

Pulled By: kostmo

fbshipit-source-id: b86e8c410099f90be0468e30176207d3ad40c821
2019-06-12 20:12:34 -07:00
peter
35c8f93fd2 Fix CUDA 8 build on Windows (#14665)
Summary:
Fixes #14663.
Test for CUDA 8 is running here: https://dev.azure.com/pytorch/PyTorch/_build/results?buildId=54
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14665

Differential Revision: D13290392

Pulled By: soumith

fbshipit-source-id: 57f0d5b704e5d1fcb4927cbc007327b4ed74f443
2018-12-01 16:50:38 -08:00
Sebastian Messmer
d55b25a633 Remove individual "using c10::xxx" statements (#13168)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13168

We now have a "using namespace c10" in the at and caffe2 namespaces, so we don't need the individual ones anymore.
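
Illustration of the cleanup (the alias names below are representative, not an exhaustive list from the diff):

```cpp
namespace at {

// Before: one alias per symbol.
//   using c10::optional;
//   using c10::nullopt;
//   using c10::Error;

// After: a single blanket using-directive makes them all visible.
using namespace c10;

}  // namespace at
```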

Reviewed By: ezyang

Differential Revision: D11669870

fbshipit-source-id: fc2bb1008e533906914188da4b6eb30e7db6acc1
2018-11-22 11:57:10 -08:00
Peter Goldsborough
8311bbee7f Fix Windows build and test in CI (#11716)
Summary:
This PR adds Windows support for the C++ frontend. A lot of declarations were missing `TORCH_API` macros, and lots of code just did not compile on MSVC.
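
A sketch of the underlying DLL-export issue with a stand-in macro (`MY_API` is hypothetical; the real `TORCH_API` definition also switches to `dllimport` for code that consumes the DLL):

```cpp
#if defined(_WIN32)
#define MY_API __declspec(dllexport)  // dllimport when consuming the DLL
#else
#define MY_API __attribute__((visibility("default")))
#endif

// Without MY_API on the declaration, the library itself links fine, but
// MSVC consumers of the DLL hit unresolved external symbols.
MY_API void run_frontend_example();
```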

ebetica ezyang orionr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11716

Reviewed By: orionr

Differential Revision: D13038253

Pulled By: goldsborough

fbshipit-source-id: c8e5a45efd26117aeb99e768b56fcd5a89fcb9f8
2018-11-13 16:35:54 -08:00
Wanchao Liang
4e1c64caee Add c10::optional to type syntax (#12582)
Summary:
This PR adds the optional type to ATen native, autograd, the JIT schema, and the Python arg parser; closes #9513. It allows us to use optional default values (including None) in function signatures and implementations like clamp, etc., and also lets us remove the python_default_init hack.
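
A hypothetical C++ signature showing the style this enables (not necessarily the exact clamp signature in ATen at the time):

```cpp
#include <ATen/ATen.h>

// min/max can now be omitted entirely (None on the Python side) instead of
// requiring sentinel defaults or the python_default_init workaround.
at::Tensor clamp_example(
    const at::Tensor& self,
    c10::optional<at::Scalar> min = c10::nullopt,
    c10::optional<at::Scalar> max = c10::nullopt);
```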

Follow up:

remove python_default_init completely.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12582

Differential Revision: D10417423

Pulled By: wanchaol

fbshipit-source-id: 1c80f0727bb528188b47c595629e2996be269b89
2018-10-25 16:08:29 -07:00
Dmytro Dzhulgakov
be99eff75a Back out "Revert D10494123: [c10] Remove at::Optional" (#12991)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12991

Remove the file proxying. Until we can land `using namespace c10` everywhere, we keep just the one-off namespace proxy. The follow-up diff is going to replace explicit at::optional usage and keep just `optional`.

Reviewed By: ezyang, Yangqing

Differential Revision: D10511254

fbshipit-source-id: 8297c61d7e9810ae215a18869a6ec9b63f55d202
2018-10-25 15:17:51 -07:00
Gregory Chanan
428300d318 Revert D10494123: [c10] Remove at::Optional
Differential Revision:
D10494123

Original commit changeset: 761bdf7359d6

fbshipit-source-id: 552fb4ab0dc253b95ce87ec6a1c65aba4b07e84a
2018-10-23 07:18:54 -07:00
Yangqing Jia
d401dc4374 Remove at::Optional (#12958)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12958

TSIA - this is an ongoing diff to fully move to the c10 namespace.

Reviewed By: dzhulgakov

Differential Revision: D10494123

fbshipit-source-id: 761bdf7359d62ef4503ecb1b8d0ae1c0762e073c
2018-10-23 00:03:20 -07:00
Yangqing Jia
713e706618 Move exception to C10 (#12354)
Summary:
There is still some work to be done:

- Move logging and unify AT_WARN with LOG(ERROR).
- A few header files are still being plumbed through and need cleaning.
- caffe2::EnforceNotMet aliasing is not done yet.
- Need to unify the macros; see c10/util/Exception.h.

This is mainly a codemod and does not cause functional changes. If you find your job failing and trace it back to this diff, it can usually be fixed by one of the following approaches:

(1) add //caffe2/c10:c10 to your dependency (or transitive dependency).
(2) change objects such as at::Error, at::Optional to the c10 namespace.
(3) change functions to the c10 namespace. Especially, caffe2::MakeString is not overridden by the unified c10::str function. Nothing else changes.

Please kindly consider not reverting this diff - it involves multiple rounds of rebasing and the fix is usually simple. Contact jiayq@ or AI Platform Dev for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12354

Reviewed By: orionr

Differential Revision: D10238910

Pulled By: Yangqing

fbshipit-source-id: 7794d5bf2797ab0ca6ebaccaa2f7ebbd50ff8f32
2018-10-15 13:33:18 -07:00