Commit Graph

66 Commits

Author SHA1 Message Date
Shashank Chaudhry
06d1be2447 [NOOP][clangformat][codemod] Enable CLANGFORMAT for caffe2/caffe2/* (#67624)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67624

Test Plan: Visual inspection. Sandcastle.

Reviewed By: malfet

Differential Revision: D31986628

fbshipit-source-id: c872bded7325997a2945dbf5d4d052628dcb3659
2021-11-02 22:14:04 -07:00
Jane Xu
71ca600af9 Renaming CAFFE2_API to TORCH_API (#49496)
Summary:
Since caffe2 and torch have been consolidated, CAFFE2_API should be merged with TORCH_API. Addresses a TODO.

Manually edited some references of the removed `CAFFE2_API`:
* `CONTRIBUTING.md`
* `caffe2/proto/CMakeLists.txt`
* `cmake/ProtoBuf.cmake`
* `c10/macros/Export.h`
* `torch/csrc/WindowsTorchApiMacro.h`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49496

Reviewed By: malfet, samestep

Differential Revision: D25600726

Pulled By: janeyx99

fbshipit-source-id: 7e068d959e397ac183c097d7e9a9afeca5ddd782
2020-12-18 10:54:50 -08:00
Shai Szulanski
0ddaaf6a92 [codemod][caffe2] Run clang-format - 5/7
Summary:
This directory is opted-in to clang-format but is not format-clean. This blocks continuous formatting from being enabled on fbcode, and causes hassle for other codemods that leave inconsistent formatting. This diff runs clang-format, which is widely used and considered safe.

If you are unhappy with the formatting of a particular block, please *accept this diff* and then in a stacked commit undo the change and wrap that code in `// clang-format off` and `// clang-format on`, or `/* clang-format off */` and `/* clang-format on */`.
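A minimal sketch of the opt-out markers in use (the surrounding array is a hypothetical example, not code from this diff):

  // clang-format off
  const int kKernel[3][3] = {{1, 2, 1},
                             {2, 4, 2},
                             {1, 2, 1}};  // manual alignment kept as-is
  // clang-format on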

drop-conflicts

Test Plan: Sandcastle it.

Reviewed By: jerryzh168

Differential Revision: D22311706

fbshipit-source-id: 1ca59a82e96156a4a5dfad70ba3e64d44c5e762a
2020-06-30 15:45:11 -07:00
Jerry Zhang
fb8487d708 Tensor construction codemod(ResizeLike) - 3/7 (#15122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15122

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: dzhulgakov

Differential Revision: D13419643

fbshipit-source-id: 65b5a037b94d458b944d51f790ba2829db1fb530
2018-12-14 02:08:37 -08:00
Sebastian Messmer
4b0fc5200b Fix include paths for typeid.h (#13689)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13689

Now that typeid.h lives in c10/util, the include paths should reflect that.
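Illustrative include change (the old path is an assumption; the new c10/util location is taken from the summary above):

  // before (assumed old location):
  // #include "caffe2/core/typeid.h"
  // after:
  #include <c10/util/typeid.h>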

Reviewed By: ezyang

Differential Revision: D12912237

fbshipit-source-id: e54225f049f690de77cb6d5f417994b211a6e1fb
2018-11-14 18:04:09 -08:00
Jerry Zhang
e7242cbaf2 Rename dim(i) -> size(i) - 1/2
Summary:
Codemod generated with clangr shard mode, 50 files per diff,
clangr code(dim->size): diffusion/FBS/browse/master/fbcode/caffe2/caffe2/fb/codemods/TensorMethodRename.cpp
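A minimal before/after sketch of the rename (tensor and variable names are hypothetical):

  // before: const auto batch = X.dim(0);
  // after:
  const auto batch = X.size(0);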

Reviewed By: ezyang

Differential Revision: D12896712

fbshipit-source-id: 909731691fab7799efbcfc3b5dcc9e531831c2d4
2018-11-05 07:27:04 -08:00
Jerry Zhang
dcbca53e58 Renaming size() to numel() - 1/6
Summary: Codemod generated with clangr shard mode, 50 files per diff

Reviewed By: li-roy

Differential Revision: D10866373

fbshipit-source-id: 589194164d4fea93b74d83fa7fc4c59558c41f4a
2018-10-29 11:11:19 -07:00
Jerry Zhang
314d95a5f2 Renaming dims() to sizes() (caffe2/caffe2) - 3/4 (#13096)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13096

Codemod generated with clangr shard mode, 25 files per diff, for renaming dims() to sizes()

Reviewed By: ezyang

Differential Revision: D10842875

fbshipit-source-id: 1784859735ed4d1bd5ccd7ca56e289498374a68f
2018-10-25 12:14:21 -07:00
Edward Yang
54d9823d00 Make caffe2::Tensor::dims() return an IntList instead of a const vector& (#12180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12180

I had to fix a lot of call sites, because a lot of places assume that
you can actually get a const vector&, and if the internal representation
of sizes in a tensor is NOT a vector, it's not possible to fulfill
this API contract.

Framework changes:
- I deleted TensorImpl::dims(); caffe2::Tensor::dims() just forwards to
  sizes() now.
- De-templatized SetDims; now it is an explicit list of ArrayRef and
  variadic overloads.  This makes implicit conversions work again,
  so I don't need to explicitly list the std::vector cases too.
  - As a knock-on effect, this causes Reset() to accept at::IntList as well as
    const std::vector<int64_t>&
- Edited variadic overloads of SetDims to all forward to the underlying
  arbitrary-dim implementation, reducing code duplication. (It's probably
  marginally less efficient in the new world.)
- Replace Tensor constructor accepting const std::vector<int64_t>& with at::IntList
- Make MKLTensor accept ArrayRef along with vector in constructor and
  Reset (unfortunately, no implicit conversions here, since it's templated on
  index type.)
- There are a few other places, like cudnn, where I changed functions
  that previously took const std::vector<int64_t>& to take at::IntList
  instead.

Classification of call site changes:
- 'const std::vector<int64_t>& x_dims = x.dims()' ==>
  'at::IntList x_dims = x.dims()'
- 'std::vector<int64_t> x_dims = x.dims()' ==>
  'std::vector<int64_t> x_dims = x.dims().vec()' (we need a copy!)
  Usually this is because we're about to mutably modify the vector
  to compute some new dimension.  However, it also very commonly occurs in the
  form: 'x_dims_ = x.dims()' because we frequently cache sizes in operators.
- Instead of constructing std::vector<int64_t>{blah, blah}, construct an
  at::IntList directly
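A sketch of the call-site migrations classified above (variable names are hypothetical):

  // read-only use: bind an at::IntList instead of a const std::vector<int64_t>&
  at::IntList x_dims = X.dims();
  // about to mutate: take an explicit copy via .vec()
  std::vector<int64_t> y_dims = Y.dims().vec();
  y_dims[0] *= 2;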

ArrayRef changes:
- cbegin()/cend() iterators; they operate the same as begin()/end() because
  everything on ArrayRef is const.
- Moved operator<< into ArrayRef.h, so that it's always available when
  working with ArrayRef.  I also templated it, so it now works on an
  ArrayRef of any type.
- Add operator== overload for ArrayRef, and also add variants to permit
  comparison of ArrayRef with std::vector, a very common operation.
  (The non-templated version of operator== can get these automatically
  via implicit conversion, but with templates C++ refuses to do
  any explicit conversions.)
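A sketch of the ArrayRef conveniences listed above (the include path and values are assumptions; ArrayRef has moved between directories over time):

  #include <ATen/core/ArrayRef.h>  // assumed location at the time
  #include <cstdint>
  #include <iostream>
  #include <vector>

  void example() {
    std::vector<int64_t> sizes{2, 3};
    at::ArrayRef<int64_t> dims(sizes);
    if (dims == sizes) {               // new operator== against std::vector
      std::cout << dims << std::endl;  // operator<< now lives in ArrayRef.h
    }
    for (auto it = dims.cbegin(); it != dims.cend(); ++it) {
      // cbegin()/cend() behave like begin()/end(); all access is const
    }
  }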

I'm planning to audit all dims() call sites to make sure they don't
expect 'auto x = t.dims()' to give you an x whose lifetime can validly
outlive the tensor.

I opted not to do a dims() to sizes() rename, because dims() also matches
the protobufs accessor.  Bad news!

Reviewed By: jerryzh168

Differential Revision: D10111759

fbshipit-source-id: a2a81dc4b92c22ad4b3b8ef4077a7e97b6479452
2018-10-05 15:57:41 -07:00
Yangqing Jia
38f3d1fc40 move flags to c10 (#12144)
Summary:
still in flux.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12144

Reviewed By: smessmer

Differential Revision: D10140176

Pulled By: Yangqing

fbshipit-source-id: 1a313abed022039333e3925d19f8b3ef2d95306c
2018-10-04 02:09:56 -07:00
Christian Puhrsch
a6630e25af Remove many caffe2::TIndex and replace them with int64_t (#11943)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11943

See title

Reviewed By: ezyang

Differential Revision: D9992645

fbshipit-source-id: e8f80d6ea762971513e5e8072975ceea53e1f11a
2018-09-22 18:11:04 -07:00
Sebastian Messmer
ce6906b051 Narrowing Blob (#11167)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11167

Narrow the Blob API as preparation for merging Blob/IValue

- get rid of templated IsType and Operator::InputIsType / OutputIsType
- Use 'using' instead of 'typedef' for DestroyCall (just for readability)
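The typedef-to-using swap is purely cosmetic; a generic sketch (the alias target shown is illustrative, not necessarily the real DestroyCall signature):

  // before:
  // typedef void(DestroyCall)(void*);
  // after (same meaning, reads left-to-right):
  using DestroyCall = void(void*);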

Reviewed By: ezyang

Differential Revision: D9623916

fbshipit-source-id: 952f0b0cf5a525094b02e8d2798dd57a56a9e1d8
2018-09-10 12:40:16 -07:00
Yangqing Jia
68613cf5a2 Windows DLL build with Caffe2 code (#11266)
Summary:
This is an experimental build on top of what orionr and mingzhe09088 built.

Essentially, the idea is that we will need separate *_API versions for different shared libraries. If this theory is right, I'll try to clean up the design a bit and document it properly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11266

Reviewed By: orionr

Differential Revision: D9682942

Pulled By: Yangqing

fbshipit-source-id: c79653199e67a1500c9174f39f8b0357324763f3
2018-09-06 15:12:20 -07:00
Orion Reblitz-Richardson
535633bddc Export MPI functions (#11037)
Summary:
Potential fix for https://github.com/caffe2/caffe2/issues/2551#issuecomment-417124872

cc Yangqing mingzhe09088
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11037

Reviewed By: mingzhe09088

Differential Revision: D9580937

Pulled By: orionr

fbshipit-source-id: 5e1fbf718728271a5b5af526d8e67cc5b48f0575
2018-08-30 10:42:02 -07:00
Jerry Zhang
3b3aff2ed6 IsType<TensorCPU> -> IsType<Tensor>(CPU) (#10135)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10135

att

Reviewed By: yinghai

Differential Revision: D9121892

fbshipit-source-id: 4a4a3bfc450896b619bf92c92ef218aaaefc3081
2018-08-03 17:24:59 -07:00
Jerry Zhang
aebf3b47ae Remove template parameter from Tensor (#9939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9939

Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This change only affects the templating at call sites; the core implementations will change later.

Previously, the Caffe2 Tensor class was fixed at compile time to bind to a particular device/context. With this change, the device becomes a runtime property (stored inside the tensor), but the semantics are preserved. For example, one still has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra *DeviceType* argument to most of the Tensor constructors, e.g. Tensor(DeviceType type).
2. The semantics of the constructor Tensor(const Tensor<SrcContext>& src, ContextForCopy* context) have changed: the second context is passed in so we can call the templated Copy function. Previously it could be on a different context than the source and target; now we enforce that, if provided, the context has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter Blob::GetMutableTensor that verifies both that the Blob contains a Tensor and that it is of the correct type.
4. The Tensor type is no longer default-constructible (since there are no unknown-device tensors), so some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.
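A minimal sketch of the resulting call pattern described above (the exact enum spelling and blob setup are assumptions):

  // A device type must now be supplied at construction time:
  caffe2::Tensor cpu_tensor(caffe2::CPU);
  // 'Get-or-construct' a Tensor of the right device type from a Blob:
  caffe2::Blob blob;
  caffe2::Tensor* t = blob.GetMutableTensor(caffe2::CPU);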

Reviewed By: ezyang, houseroad

Differential Revision: D9024330

fbshipit-source-id: e0b8295d2dc6ebe2963383ded5af799ad17164ba
2018-07-27 10:56:39 -07:00
Jerry Zhang
969b62f276 Revert D8121878: Remove template parameter from Tensor
Differential Revision:
D8121878

Original commit changeset: 4a5e9a677ba4

fbshipit-source-id: d8e2c0bb145b52fbcca323b22d1d3346f0b3249e
2018-07-26 14:02:04 -07:00
Jerry Zhang
cd5adc7b5f Remove template parameter from Tensor (#13)
Summary:
Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This change only affects the templating at call sites; the core implementations will change later.

Previously, the Caffe2 Tensor class was fixed at compile time to bind to a particular device/context. With this change, the device becomes a runtime property (stored inside the tensor), but the semantics are preserved. For example, one still has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra *DeviceType* argument to most of the Tensor constructors, e.g. Tensor(DeviceType type).
2. The semantics of the constructor Tensor(const Tensor<SrcContext>& src, ContextForCopy* context) have changed: the second context is passed in so we can call the templated Copy function. Previously it could be on a different context than the source and target; now we enforce that, if provided, the context has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter Blob::GetMutableTensor that verifies both that the Blob contains a Tensor and that it is of the correct type.
4. The Tensor type is no longer default-constructible (since there are no unknown-device tensors), so some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.

Reviewed By: xw285cornell

Differential Revision: D8121878

fbshipit-source-id: 4a5e9a677ba4ac82095df959851a054c81eccf81
2018-07-26 10:25:23 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Yangqing Jia
611a89c4b6 Remove more protobuf APIs. (#2348)
* Wrap ShutdownProtobufLibrary

* Remove text_format.h header and only put the function in proto_utils.h

* ParseFromString returns bool
2018-03-21 10:29:45 -07:00
Luke Yeager
25b35a3f62 Fix broken MPI tests
Summary:
Broken since e16871d87d06f3ae1adfc90bd43410c00cc4a330
Closes https://github.com/caffe2/caffe2/pull/1315

Differential Revision: D6026591

Pulled By: Yangqing

fbshipit-source-id: 0569128bb4df6c912d5d00239f6d70cdb72d3a15
2017-10-10 22:32:14 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Henry Lu
10667a914e Add linter for enforcing caffe operator documentation
Summary: Add a check that, every time we register a caffe operator to CPU or GPU, documentation is added for the particular operator.

Reviewed By: dzhulgakov

Differential Revision: D5443110

fbshipit-source-id: 3793c3d29bea1228078cb30bdf8243ac0ab90664
2017-07-24 15:27:47 -07:00
Aapo Kyrola
95291f0f74 Revert D5348078: Add linter for enforcing caffe operator documentation
Summary: This reverts commit c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e

Differential Revision: D5348078

fbshipit-source-id: f536e647cbd221b26ccbc105a5f5f8bdbcc119ab
2017-07-17 18:36:38 -07:00
Henry Lu
32b13d6243 Add linter for enforcing caffe operator documentation
Summary: Add a lint rule to check that, every time we register a caffe operator to CPU or GPU, documentation is added for the particular operator.

Reviewed By: dzhulgakov

Differential Revision: D5348078

fbshipit-source-id: c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e
2017-07-17 08:17:23 -07:00
Luke Yeager
4da9e92d3f MPIConstantFill -> ConstantFill
Summary:
Continuation of https://github.com/caffe2/caffe2/pull/709

Close https://github.com/caffe2/caffe2/issues/706

/cc Yangqing
Closes https://github.com/caffe2/caffe2/pull/711

Differential Revision: D5162486

Pulled By: Yangqing

fbshipit-source-id: 3ff069aa27eecf73c3dc51eacf86a6974f027625
2017-05-31 19:47:49 -07:00
Yangqing Jia
680a00e99a MPIConstantFill -> ConstantFill
Summary:
(this is due to an earlier blind vim find-replace error)
Closes https://github.com/caffe2/caffe2/pull/709

Differential Revision: D5159055

Pulled By: Yangqing

fbshipit-source-id: f188b7bebf79a45825568ba96a71b535fe4e3aad
2017-05-31 16:36:49 -07:00
Andrew Gallagher
9c58341809 codemod: use <> includes for gtest headers
Summary: These are system headers and so should be included via `<>`.
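The change in include style (gtest.h is the standard gtest entry point):

  // before:
  // #include "gtest/gtest.h"
  // after: system headers go through angle brackets
  #include <gtest/gtest.h>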

Reviewed By: yfeldblum

Differential Revision: D4783480

fbshipit-source-id: 979670b594859b45560cead34f615442dfcc9f8b
2017-03-28 00:50:54 -07:00
Yangqing Jia
0e7e9888f7 Explicitly do MPI prefix for ops before it is too late
Summary: Chatted with pietern today, figured it is an easy change.

Reviewed By: pietern

Differential Revision: D4688275

fbshipit-source-id: a2751f1ff9f192ba6f2bd961be6ad1c693c8b5c6
2017-03-10 10:18:34 -08:00
Yangqing Jia
97f95bb247 mpi const cast
Summary: This fixes https://github.com/caffe2/caffe2/issues/160

Reviewed By: pietern

Differential Revision: D4617278

fbshipit-source-id: 6fbc7727d62915cfe0426b528d707756580e7b78
2017-02-27 09:46:31 -08:00
Jim Meyering
c0dd3b9744 caffe2/caffe2/mpi/mpi_test.cc: avoid shadowing warnings
Summary:
Fix warnings exposed by gcc-4.9.x's -Wshadow-compatible-local
I plan to enable this for all of fbcode, soon.
See t13698406 for justification.

Rename outer "rank,size" to "rank0,size0" (to avoid shadowing another "rank" and "size" just below).

This avoids the following errors:

  caffe2/caffe2/mpi/mpi_test.cc:124:9: error: declaration of 'rank' shadows a previous local [-Werror=shadow-compatible-local]
  caffe2/caffe2/mpi/mpi_test.cc:112:7: error: shadowed declaration is here [-Werror=shadow-compatible-local]
  caffe2/caffe2/mpi/mpi_test.cc:126:9: error: declaration of 'size' shadows a previous local [-Werror=shadow-compatible-local]
  caffe2/caffe2/mpi/mpi_test.cc:115:7: error: shadowed declaration is here [-Werror=shadow-compatible-local]
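A minimal sketch of the shadowing pattern and the rename (not the actual test body):

  #include <mpi.h>

  void Example() {
    int rank0 = 0, size0 = 0;                // was: rank, size
    MPI_Comm_rank(MPI_COMM_WORLD, &rank0);
    MPI_Comm_size(MPI_COMM_WORLD, &size0);
    for (int i = 0; i < size0; ++i) {
      int rank = i;                          // inner 'rank' no longer shadows the outer variable
      (void)rank;
    }
  }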

Reviewed By: Yangqing

Differential Revision: D4544808

fbshipit-source-id: fdc53ab8763eb342302b94d82d1ac046f2af7d33
2017-02-10 14:35:51 -08:00
Jim Meyering
b0ff960301 caffe2/caffe2/mpi/mpi_gpu_test.cc: avoid shadowing warnings
Summary:
Fix warnings exposed by gcc-4.9.x's -Wshadow-compatible-local
I plan to enable this for all of fbcode, soon.
See t13698406 for justification.

Rename outer "rank" to "rank0" (to avoid shadowing another "rank" just below).
Also rename outer "size" to "size0" for the same reason.

This avoids the following errors:

  caffe2/caffe2/mpi/mpi_gpu_test.cc:132:9: error: declaration of 'rank' shadows a previous local [-Werror=shadow-compatible-local]
  caffe2/caffe2/mpi/mpi_gpu_test.cc:120:7: error: shadowed declaration is here [-Werror=shadow-compatible-local]
  caffe2/caffe2/mpi/mpi_gpu_test.cc:134:9: error: declaration of 'size' shadows a previous local [-Werror=shadow-compatible-local]
  caffe2/caffe2/mpi/mpi_gpu_test.cc:123:7: error: shadowed declaration is here [-Werror=shadow-compatible-local]

Reviewed By: Yangqing

Differential Revision: D4544806

fbshipit-source-id: 4cfa412dd672919174d487e60aa503a32125da03
2017-02-10 14:19:19 -08:00
Jim Meyering
7721cba906 caffe2/caffe2/mpi/mpi_common.cc: avoid shadowing warnings
Summary:
Fix warnings exposed by gcc-4.9.x's -Wshadow-compatible-local
I plan to enable this for all of fbcode, soon.
See t13698406 for justification.

Rename inner "new_intra_comm" to "comm".

This avoids the following errors:

  caffe2/caffe2/mpi/mpi_common.cc:167:16: error: declaration of 'new_intra_comm' shadows a previous local [-Werror=shadow-compatible-local]
  caffe2/caffe2/mpi/mpi_common.cc:162:14: error: shadowed declaration is here [-Werror=shadow-compatible-local]

Reviewed By: pietern

Differential Revision: D4544805

fbshipit-source-id: c703c3f35c71f08b4daae8491ea2518572fc8013
2017-02-10 13:01:11 -08:00
Yangqing Jia
3732a0044c Move mpi_python.cc to the python folder to be more consistent about source file locations.
Summary: TSIA

Differential Revision: D4386553

fbshipit-source-id: 2c7196171be7d0af90b46b75f68c949ee3980c2e
2017-01-09 10:59:39 -08:00
Yangqing Jia
375c0816b3 goodbye old brewery 2017-01-04 20:58:35 -08:00
Yangqing Jia
1e8659fd89 build files bugfix 2017-01-04 20:36:11 -08:00
Yangqing Jia
3d1bda1f3a cmake: make python dependencies separate from the C++ dependencies 2017-01-04 16:34:56 -08:00
Simon Layton
ae62e15f87 Added MPI operators to cmake 2017-01-04 15:06:20 -05:00
Yangqing Jia
589398950f fbsync at f5a877 2016-11-18 15:41:06 -08:00
Yangqing Jia
238ceab825 fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00
Yangqing Jia
d1e9215184 fbsync 2016-10-07 13:08:53 -07:00
Yangqing Jia
b23e51d467 chunky sync 2016-09-06 15:55:19 -07:00
Yangqing Jia
05512d1e10 sync 2016-08-10 11:02:15 -07:00
Yangqing Jia
1ede7a7ff0 more build updates:
(1) nccl submodule, cnmem submodule
(2) mpi ops fallback test
(3) a bit more blob interface
(4) fixed tests
(5) caffe2.python.io -> caffe2.python.dataio to avoid name conflicts
(6) In the build system autogen __init__.py instead of having manual
rules just to copy over an empty __init__.py.
2016-08-02 23:28:23 -07:00
Yangqing Jia
f09d2b2b35 changes to make c2 build. 2016-07-21 16:39:08 -07:00
Yangqing Jia
6463eebc7b chunky sync - build scripts to be written 2016-07-21 10:16:42 -07:00
Yangqing Jia
79c5275d75 A set of changes to make newest sync build.
(1) build file changes.
(2) removed data/ subfolder - anything involving datasets should probably
be tested separately.
(3) Some new functionalities.

TODOs:

(1) build files for contrib/
(2) cudnn 5.05 compatibility (currently supporting 5.04)
2016-05-15 23:04:32 -07:00
Yangqing Jia
559053d3a8 chunky sync 2016-05-13 14:43:48 -07:00
Yangqing Jia
0521e1d672 notebook rewrite and grammar bugfix 2016-03-10 17:34:31 -08:00
Yangqing Jia
0747a4a7fd move a bunch of things 2016-03-08 15:15:19 -08:00