Commit Graph

42 Commits

Author SHA1 Message Date
cyy
bb2a1e9941 Enable readability-redundant-smartptr-get in clang-tidy (#116381)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116381
Approved by: https://github.com/Skylion007
2023-12-26 06:05:15 +00:00
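For context on what this clang-tidy check does, here is a minimal illustration (names are invented for the example, not taken from the PR): `readability-redundant-smartptr-get` flags call sites that spell out `.get()` where the smart pointer's own `operator->` already suffices.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>

// Both functions behave identically; the second is what the check
// rewrites the first into.
std::size_t len_redundant(const std::shared_ptr<std::string>& p) {
  return p.get()->size();  // flagged: redundant .get()
}

std::size_t len_clean(const std::shared_ptr<std::string>& p) {
  return p->size();  // idiomatic equivalent via operator->
}
```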
cyy
968b94bef2 [8/N] Fixes clang-tidy warnings in c10/{core,util}/*.h (#116082)
This patch enables clang-tidy coverage on c10/**/*.h and contains other fixes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116082
Approved by: https://github.com/Skylion007
2023-12-20 12:22:21 +00:00
cyy
1544c37520 [7/N] Fixes clang-tidy warnings in c10/{core,util}/*.h (#115495)
This PR continues to fix clang-tidy warnings for headers in c10/core and c10/util.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115495
Approved by: https://github.com/malfet
2023-12-19 02:14:30 +00:00
Kurt Mohler
4c5e43574c Reland 2: Add PyObject preservation for UntypedStorage (#109039)
Relands #103907 after it was reverted. This PR makes the new `ignore_hermetic_tls` argument of `check_pyobj` optional to avoid causing a compilation error in torchdistx.

Part of #91395

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109039
Approved by: https://github.com/ezyang
2023-09-12 22:26:05 +00:00
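The actual preservation mechanism lives in PyTorch's CPython binding layer; as a rough, non-PyTorch sketch of the core idea — the C++ storage caches its Python wrapper so repeated C++-to-Python round trips hand back the same object, preserving Python-level identity — one might write:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for a CPython wrapper; the real code manages
// PyObject* and reference counts, which are elided here.
struct FakePyWrapper {
  int id;
};

// Sketch only: the storage remembers the wrapper created for it, so
// asking for the wrapper twice yields the same object.
struct StorageSketch {
  std::shared_ptr<FakePyWrapper> cached;
  int next_id = 0;

  std::shared_ptr<FakePyWrapper> wrapper() {
    if (!cached) {
      cached = std::make_shared<FakePyWrapper>(FakePyWrapper{next_id++});
    }
    return cached;  // same wrapper on every call
  }
};
```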
PyTorch MergeBot
59f605be57 Revert "Reland 2: Add PyObject preservation for UntypedStorage (#109039)"
This reverts commit 419e4e17a2.

Reverted https://github.com/pytorch/pytorch/pull/109039 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing linter job in trunk, probably due to a landrace ([comment](https://github.com/pytorch/pytorch/pull/109039#issuecomment-1715147020))
2023-09-12 07:26:11 +00:00
Kurt Mohler
419e4e17a2 Reland 2: Add PyObject preservation for UntypedStorage (#109039)
Relands #103907 after it was reverted. This PR makes the new `ignore_hermetic_tls` argument of `check_pyobj` optional to avoid causing a compilation error in torchdistx.

Part of #91395

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109039
Approved by: https://github.com/ezyang
2023-09-12 01:19:40 +00:00
PyTorch MergeBot
68238606f3 Revert "Reland: Add PyObject preservation for UntypedStorage (#103907)"
This reverts commit 56b848157c.

Reverted https://github.com/pytorch/pytorch/pull/103907 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it is failing torchdistx build which uses check_pyobj here 9c1b9f5cb2/src/python/torchdistx/_C/deferred_init.cc (L87) ([comment](https://github.com/pytorch/pytorch/pull/103907#issuecomment-1712121158))
2023-09-08 19:27:07 +00:00
Kurt Mohler
56b848157c Reland: Add PyObject preservation for UntypedStorage (#103907)
This relands #97470 after #102553 reverted it. This PR attempts to fix the internal failure by avoiding an unnecessary intermediate storage buffer allocation in `c10::newStorageImplFromRefcountedDataPtr`.

Part of #91395

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103907
Approved by: https://github.com/ezyang
2023-09-07 04:24:11 +00:00
cyy
01fc6466d1 [Reland] [1/N] fix clang-tidy warnings in torch/csrc (#108114)
Reland of PR #107648 with `auto` replaced with `Py_ssize_t` in eval_frame.c. This PR applies fixes to some issues found by clang-tidy in torch/csrc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108114
Approved by: https://github.com/Skylion007
2023-08-30 17:11:16 +00:00
PyTorch MergeBot
8cbf77585d Revert "[1/N] fix clang-tidy warnings in torch/csrc (#107648)"
This reverts commit 49eeca00d1.

Reverted https://github.com/pytorch/pytorch/pull/107648 on behalf of https://github.com/osalpekar due to This causes breakages due to underspecified type ([comment](https://github.com/pytorch/pytorch/pull/107648#issuecomment-1696372588))
2023-08-28 20:35:12 +00:00
cyy
49eeca00d1 [1/N] fix clang-tidy warnings in torch/csrc (#107648)
Apply fixes to some issues found by clang-tidy in torch/csrc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107648
Approved by: https://github.com/Skylion007
2023-08-25 00:30:09 +00:00
Shiyan Deng
685505353a Back out "Add PyObject preservation for UntypedStorage (#97470)" (#102553)
Summary:
Original commit changeset: c24708d18ccb

Original Phabricator Diff: D46159983

Test Plan: SL tests and CI

Differential Revision: D46284986

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102553
Approved by: https://github.com/DanilBaibak
2023-06-01 17:23:43 +00:00
Kurt Mohler
5fe629e314 Add PyObject preservation for UntypedStorage (#97470)
Part of #91395

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97470
Approved by: https://github.com/ezyang
2023-05-23 01:27:30 +00:00
mikey dagitses
387feaa131 add mutable to name of non-const Storage::data_ptr (#97694)
See D44409928.

Differential Revision: [D44432585](https://our.internmc.facebook.com/intern/diff/D44432585/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97694
Approved by: https://github.com/ezyang
2023-04-08 12:44:30 +00:00
mikey dagitses
c68a94c5ea distinguish mutability of untyped Storage::data (#97690)
See D44409928.

Differential Revision: [D44429769](https://our.internmc.facebook.com/intern/diff/D44429769/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97690
Approved by: https://github.com/ezyang
2023-04-08 02:02:28 +00:00
mikey dagitses
49b80c3ea2 [reland] remove typed StorageImpl::data() and StorageImpl::unsafe_data() (#98411)
Original commit changeset: a466b3cb6a0a

Original Phabricator Diff: D44629941

Differential Revision: [D44709004](https://our.internmc.facebook.com/intern/diff/D44709004/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98411
Approved by: https://github.com/ezyang
2023-04-06 17:42:48 +00:00
PyTorch MergeBot
45edc58e4f Revert "remove typed StorageImpl::data() and StorageImpl::unsafe_data() (#98219)"
This reverts commit 144d5268a1.

Reverted https://github.com/pytorch/pytorch/pull/98219 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2023-04-05 09:08:08 +00:00
mikey dagitses
144d5268a1 remove typed StorageImpl::data() and StorageImpl::unsafe_data() (#98219)
Typed data will now only be a tensor level concept.

Differential Revision: [D44629941](https://our.internmc.facebook.com/intern/diff/D44629941/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98219
Approved by: https://github.com/ezyang
2023-04-05 03:32:02 +00:00
mikey dagitses
3af0228338 remove typed StorageImpl::unsafe_data() (#98218)
Typed data will now only be a tensor level concept.

Differential Revision: [D44629939](https://our.internmc.facebook.com/intern/diff/D44629939/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98218
Approved by: https://github.com/ezyang
2023-04-05 00:10:59 +00:00
mikey dagitses
64077ce511 remove redundant typed StorageImpl::data() member (#97650)
This has the same implementation as the unsafe variants, and the unsafe
variants match the original semantics of the code, given that they
don't check that the type matches.

Given that we're updating callsites anyways to address the mutability
aspect, we might as well just drop this method now.

Differential Revision: [D44410210](https://our.internmc.facebook.com/intern/diff/D44410210/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97650
Approved by: https://github.com/ezyang
2023-04-01 08:16:54 +00:00
mikey dagitses
cb8c0be54d add StorageImpl::mutable_unsafe_data (#97648)
See D44409928.

Differential Revision: [D44409945](https://our.internmc.facebook.com/intern/diff/D44409945/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97648
Approved by: https://github.com/ezyang
2023-03-31 16:04:07 +00:00
mikey dagitses
da28af3286 distinguish mutability of StorageImpl::data_ptr() member (#97651)
See D44409928.

Differential Revision: [D44410323](https://our.internmc.facebook.com/intern/diff/D44410323/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97651
Approved by: https://github.com/ezyang
2023-03-30 19:13:56 +00:00
mikey dagitses
428cb3a868 distinguish mutability of untyped StorageImpl::data() member (#97647)
To implement the warning when transitioning reshape to copy-on-write
storage, we want to be able to detect a write to one view family
following by a read or a write to another one that shares the same
copy-on-write storage.

Because we have historically not been strict about the mutability of
our data pointers, any warning we have would likely be far too
aggressive.

Therefore, this is the first PR in a long series to ensure a strict
distinction between mutable and const data accessors in TensorBase,
TensorImpl, Storage, and StorageImpl.

The rough plan is to give the mutable accessor a new name that is
explicit about mutation, this will also force us to rewrite any code
that really needs a mutation.

Differential Revision: [D44409928](https://our.internmc.facebook.com/intern/diff/D44409928/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97647
Approved by: https://github.com/ezyang
2023-03-30 09:45:09 +00:00
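A minimal sketch of the accessor split this series introduces (illustrative names and types; the real PyTorch signatures differ in detail): the const overload returns a const pointer, and mutation requires an explicitly named call, making writes visible at the call site.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class ByteStorage {
 public:
  explicit ByteStorage(std::size_t nbytes) : buf_(nbytes, 0) {}

  const void* data() const { return buf_.data(); }  // read-only view
  void* mutable_data() { return buf_.data(); }      // explicit mutation

 private:
  std::vector<unsigned char> buf_;
};

// Helpers so the distinction is visible in use: writes require the
// mutable accessor; reads go through the const one.
void write_first_byte(ByteStorage& s, unsigned char v) {
  static_cast<unsigned char*>(s.mutable_data())[0] = v;
}

unsigned char read_first_byte(const ByteStorage& s) {
  return static_cast<const unsigned char*>(s.data())[0];
}
```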
Aaron Gokaslan
a34a9c3471 Perf: Apply more clang-tidy fixups to torch headers (#91445)
Applies some more fixes to headers that may have been missed before for performance optimization. cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @EikanWang @ezyang, since this is more in the series of the clang-tidy fixups

This PR fixes 3 main issues:
1. Use emplacement more in headers
2. Avoid unnecessary copies and use const refs when possible
3. Default any special functions when possible to make them potentially trivial and more readable.

There is also one change in this PR that tries to prevent unnecessary math promotion; the rest of those changes are in another PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91445
Approved by: https://github.com/ezyang
2022-12-29 23:43:45 +00:00
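To illustrate the emplacement category of fixes (a generic example, not code from the PR): `push_back(std::string(...))` builds a temporary and then moves it into the vector, while `emplace_back` forwards the constructor arguments and builds the element in place.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

std::vector<std::string> build_with_push(std::size_t n) {
  std::vector<std::string> v;
  v.reserve(n);
  for (std::size_t i = 0; i < n; ++i) {
    v.push_back(std::string(3, 'x'));  // temporary, then a move
  }
  return v;
}

std::vector<std::string> build_with_emplace(std::size_t n) {
  std::vector<std::string> v;
  v.reserve(n);
  for (std::size_t i = 0; i < n; ++i) {
    v.emplace_back(3, 'x');  // constructed directly in place
  }
  return v;
}
```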
Aaron Gokaslan
48dc24ddce Fix: [ATen] Add some missing moves (#88514)
Related to #88512, but for ATen. This should reduce a number of copies and inefficient atomic smart pointer increments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88514
Approved by: https://github.com/jgong5, https://github.com/ezyang
2022-11-13 22:05:41 +00:00
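The cost being avoided here: copying a `shared_ptr` (or an intrusive pointer) bumps an atomic reference count, while moving it does not. A generic sketch of the kind of fix, not code from the PR:

```cpp
#include <cassert>
#include <memory>
#include <utility>

struct Holder {
  std::shared_ptr<int> p;
};

Holder make_copying(std::shared_ptr<int> p) {
  return Holder{p};  // extra atomic increment, then a decrement when p dies
}

Holder make_moving(std::shared_ptr<int> p) {
  return Holder{std::move(p)};  // ownership transferred, no refcount traffic
}
```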
Edward Z. Yang
e1f634753c Setup fake tensor and symbolic shapes once at beginning of AOTAutograd (#85233)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D39662822](https://our.internmc.facebook.com/intern/diff/D39662822)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85233
Approved by: https://github.com/wconstab
2022-09-20 19:11:25 +00:00
Scott Wolchok
6e90572bb9 [PyTorch] Don't create a new Storage in FreeMemory unnecessarily
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79573

No reason to go through an extra heap allocation.

Differential Revision: [D37157595](https://our.internmc.facebook.com/intern/diff/D37157595/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D37157595/)!

Approved by: https://github.com/ezyang
2022-06-17 00:46:16 +00:00
Edward Z. Yang
95f9ca4931 Symbolic storage size
Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79492

Approved by: https://github.com/albanD
2022-06-14 17:54:34 +00:00
Richard Barnes
72e4aab74b Eliminate unused parameters in PyTorch (#73749)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73749

Unused parameters cause compiler warnings which distract from real issues. Let's remove unused parameters!

Test Plan: Sandcastle

Reviewed By: swolchok, ngimel

Differential Revision: D34567731

fbshipit-source-id: 2e42301a29a8e1014ac8ab429588bb773db58850
(cherry picked from commit 3eda4743991328d532194efd0fe3d127a294343d)
2022-03-04 02:31:37 +00:00
Luca Wehrstedt
b5c464d5ef Make Future store weak pointers to storages (#60943)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60943

In https://github.com/pytorch/pytorch/pull/60470 we made Future store Storages rather than store references to their DataPtrs (because these references could go stale...). However this meant that the Future could keep the Storage alive, and thus keep its memory allocated, even after the user was done with it. We fix it here by instead storing a weak ptr to that Storage (well, in fact to the StorageImpl, but it's the same).
ghstack-source-id: 133295799

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D29454104

fbshipit-source-id: d36dee00a4841c087bb7b3f5bc39e0459f209cdb
2021-07-09 11:28:36 -07:00
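The weak-pointer pattern the fix relies on, in standard-library terms (a generic sketch, not the Future code itself): a `weak_ptr` observes the storage without extending its lifetime, and `lock()` yields a usable strong pointer only while the owner is still alive.

```cpp
#include <cassert>
#include <memory>

// Take a weak reference without keeping the pointee alive.
std::weak_ptr<int> observe(const std::shared_ptr<int>& owner) {
  return owner;  // no strong reference retained
}
```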
Scott Wolchok
44cc873fba [PyTorch] Autoformat c10 (#56830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56830

Opt into formatting on GitHub and format everything. This is a trial run before turning on formatting for more and eventually all of the codebase.

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D27979080

fbshipit-source-id: a80f0c48691c08ae8ca0af06377b87e6a2351151
2021-04-30 21:23:28 -07:00
Can Balioglu
9029d0d7d8 Introduce a fluent API to construct tensors from external data. (#54530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54530

This diff introduces the following changes and improvements:

- Introduces a new fluent API to construct tensors from external data as an alternative to `from_blob` overloads. See below for an example.
- Leverages several small-buffer optimizations which result in a 50% reduction in tensor construction times.
- Exposes a new (lightweight) way to construct tensors by passing a naked `context` and `context_deleter` pair as an alternative to the existing `deleter` parameter.
- Updates the existing `from_blob` overloads to internally use the fluent API.

```
// Example 1
at::Tensor tensor = at::for_blob(data, sizes)
  .strides(strides)
  .context(context, [](void *ctx) { delete static_cast<Ctx*>(ctx); })
  .options(...)
  .target_device(...)
  .make_tensor();

// Example 2
at::Tensor tensor = at::for_blob(data, sizes).make_tensor();

// Example 3
at::Tensor tensor = at::for_blob(data, sizes)
  .deleter(...)
  .make_tensor();
```

Test Plan:
Below are the folly Benchmark results for the following two equivalent operations:

```
// The fluent API
at::Tensor tensor = at::for_blob(data, sizes)
  .deleter([buffer](void*) mutable { buffer.reset(); })
  .options(dtype(c10::ScalarType::Float))
  .make_tensor();

// The original `from_blob` overload
at::Tensor tensor = at::from_blob(
  data,
  sizes,
  [buffer](void*) mutable { buffer.reset(); },
  dtype(c10::ScalarType::Float));
```

```
============================================================================
scripts/balioglu/from_blob_exp/main.cpp         relative  time/iter  iters/s
============================================================================
fluent                                                     298.34ns    3.35M
from_blob                                         55.19%   540.51ns    1.85M
============================================================================
```

Various similar experiments show an approximate 50% reduction in tensor construction times.

Reviewed By: ezyang

Differential Revision: D27269344

fbshipit-source-id: e6bd0b78384bf89fd24f22254008180329000363
2021-03-25 06:24:50 -07:00
Lance Ware
fdd25f82c9 Update to replace AT_ERROR with TORCH_CHECK (#52711)
Summary:
Fixes #52699

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52711

Reviewed By: ailzhang

Differential Revision: D26654677

Pulled By: malfet

fbshipit-source-id: 97079250d144c9b1c69028f35e4a23a34481b2a5
2021-02-25 19:51:29 -08:00
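The shape of the change, using stand-in macros rather than the real c10 ones: `AT_ERROR(msg)` threw unconditionally, so call sites wrapped it in an `if`, whereas `TORCH_CHECK(cond, msg)` folds the condition into the macro itself.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Illustrative stand-in for TORCH_CHECK, not the real macro.
#define SKETCH_CHECK(cond, msg)                 \
  do {                                          \
    if (!(cond)) throw std::runtime_error(msg); \
  } while (0)

int checked_div(int a, int b) {
  SKETCH_CHECK(b != 0, "expected a nonzero divisor");
  return a / b;
}

// Helper so the failure path is observable without a try block at the call site.
bool divide_throws(int a, int b) {
  try {
    checked_div(a, b);
    return false;
  } catch (const std::runtime_error&) {
    return true;
  }
}
```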
Scott Wolchok
edf8130e9e [PyTorch] Add set_data_ptr_noswap & use where possible (#52244)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52244

`StorageImpl::set_data_ptr` returns the old pointer and thus has to do extra
work. Found because `std::swap<at::DataPtr>` was showing up in
profiling, although at < 1%.
ghstack-source-id: 121795131

Test Plan:
Run AdIndexer benchmark under `perf stat`.

Before:
```
         17,990.01 msec task-clock                #    0.998 CPUs utilized            ( +-  0.43% )
             6,550      context-switches          #    0.364 K/sec                    ( +- 31.42% )
                 3      cpu-migrations            #    0.000 K/sec                    ( +-  7.14% )
           103,820      page-faults               #    0.006 M/sec                    ( +-  2.47% )
    35,610,511,494      cycles                    #    1.979 GHz                      ( +-  0.40% )  (50.03%)
    71,651,045,779      instructions              #    2.01  insn per cycle           ( +-  0.07% )  (50.02%)
    11,679,947,910      branches                  #  649.246 M/sec                    ( +-  0.10% )  (50.03%)
        69,088,927      branch-misses             #    0.59% of all branches          ( +-  0.24% )  (50.06%)
```

After:
```
         17,896.20 msec task-clock                #    0.999 CPUs utilized            ( +-  0.24% )
             4,011      context-switches          #    0.224 K/sec                    ( +- 27.77% )
                 3      cpu-migrations            #    0.000 K/sec
           100,350      page-faults               #    0.006 M/sec                    ( +-  1.58% )
    35,418,702,208      cycles                    #    1.979 GHz                      ( +-  0.23% )  (50.05%)
    71,449,334,935      instructions              #    2.02  insn per cycle           ( +-  0.09% )  (50.03%)
    11,652,819,899      branches                  #  651.134 M/sec                    ( +-  0.12% )  (50.04%)
        69,744,411      branch-misses             #    0.60% of all branches          ( +-  0.53% )  (50.06%)
```

Cycles difference is within the noise, but it looks like we have an
0.28% instruction count win, which is outside the noise (and fits with
intuition that this should be better).

Reviewed By: hlu1

Differential Revision: D26437297

fbshipit-source-id: bf0fceccf6ad78f1497b03ccb4cdfd1a21c6846c
2021-02-17 12:42:21 -08:00
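Why returning the old pointer costs extra, shown with an illustrative class (not the real `StorageImpl`): a setter that hands back the previous value must swap or move the old pointer out, while a `_noswap` variant simply overwrites, which is cheaper when the old value is discarded anyway.

```cpp
#include <cassert>
#include <utility>

class PtrHolder {
 public:
  int* set(int* p) {
    std::swap(p_, p);  // preserve the old pointer so it can be returned
    return p;
  }
  void set_noswap(int* p) { p_ = p; }  // overwrite only, nothing returned
  int* get() const { return p_; }

 private:
  int* p_ = nullptr;
};
```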
Kurt Mohler
f9eb8824f1 Remove datatype from Storage and StorageImpl (#38870)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38870

* Removed dtype data member from StorageImpl
* Removed any methods or method arguments in Storage/StorageImpl that deal with dtypes
* Update all callers of the changed API

Part of issue https://github.com/pytorch/pytorch/issues/33950
Original PR: https://github.com/pytorch/pytorch/pull/38038

Reviewed By: albanD

Differential Revision: D21549645

Pulled By: ezyang

fbshipit-source-id: 4289b356c55ff6b9530376a79343b99b540ee3de
2020-05-21 15:26:08 -07:00
Edward Yang
fe88806784 Back out "Revert D21171334: [pytorch][PR] Change StorageImpl to track byte count rather than element count" (#37893)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37893

Original commit changeset: 50746043acf3

Test Plan: sandcastle and ossci

Reviewed By: malfet, seemethere, ngimel

Differential Revision: D21416509

fbshipit-source-id: 735ec4e61f9d36d4537f52dd2dc6267751aeb94b
2020-05-05 22:43:15 -07:00
Edward Yang
a2fc7f787a Revert D21171334: [pytorch][PR] Change StorageImpl to track byte count rather than element count
Test Plan: revert-hammer

Differential Revision:
D21171334

Original commit changeset: 37329a379de9

fbshipit-source-id: 50746043acf3c76754688de0fe6f1cc12437ea2f
2020-05-05 16:36:15 -07:00
Kurt Mohler
3706803b60 Change StorageImpl to track byte count rather than element count (#37776)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37776

* Remove type-specific size tracking in favor of byte size tracking in Storage and StorageImpl
* Changed numel() and set_numel() to nbytes() and set_nbytes()
* Added enum argument to Storage/StorageImpl constructor to indicate new meaning of the size parameter
* Update all callers of the changed API

Part of issue https://github.com/pytorch/pytorch/issues/33950
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37028

Differential Revision: D21171334

Pulled By: ezyang

fbshipit-source-id: 37329a379de9a3a83cc5e9007e455a3e1c2d10b8
2020-05-05 14:20:51 -07:00
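After this change the storage tracks bytes, so element-count callers convert explicitly; the conversion is just `nbytes = numel * itemsize`, sketched here with an illustrative helper:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative conversion from element count to byte count.
constexpr std::size_t to_nbytes(std::size_t numel, std::size_t itemsize) {
  return numel * itemsize;
}

static_assert(to_nbytes(10, sizeof(float)) == 40,
              "10 floats occupy 40 bytes on common platforms");
```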
Gregory Chanan
b6ee83a5b4 Materialize a non-default device for C2 legacy storage. (#18605)
Summary:
It's not intended that Storages have 'default' CUDA devices, but this is allowable via the Storage::create_legacy codepath.

This also messages with device_caching, because the initial cache is obtained from the Storage, which may have a 'default' device.

Instead, we materialize a device by allocating 0 bytes via the allocator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18605

Differential Revision: D14680620

Pulled By: gchanan

fbshipit-source-id: 6d43383d836e90beaf12bfe37c3f0506843f5432
2019-04-11 13:50:41 -07:00
Dmytro Dzhulgakov
3408d9de20 Clean up Storage/StorageImpl constructors (#16948)
Summary:
Small cleanup while doing https://github.com/pytorch/pytorch/pull/16857:

- rename C2 constructors as create_legacy
- remove duplicated constructors
- make resizable flag non-default
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16948

Differential Revision: D14062755

Pulled By: dzhulgakov

fbshipit-source-id: 3b7b4ec9cdf67d2628cccc001156e040006b673e
2019-02-13 22:58:32 -08:00
Michael Suo
95e5a5ae0c basic testing of builtin alias annotations (#14588)
Summary:
Check whether the codegen'd alias annotations actually track alias creation and writes correctly. This could be made more exhaustive, but it's good enough for now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14588

Differential Revision: D13312653

Pulled By: suo

fbshipit-source-id: 98de1610ea86deada71957c75c222fff331a0888
2018-12-03 22:31:02 -08:00
Sebastian Messmer
3d4d09fe06 Move Storage and StorageImpl to c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14061

Reviewed By: ezyang

Differential Revision: D13081608

fbshipit-source-id: 1ea2d32e9ec9293b6ffa4b9e76c674cca55d5a1c
2018-11-27 12:59:48 -08:00