Commit Graph

76 Commits

Jerry Zhang
9b272c08cf Remove partially initialized Tensor in Deserialization (#14197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14197

Pull Request resolved: https://github.com/pytorch/pytorch/pull/13642

Previously we passed in a partially initialized Tensor to Deserialize and it
filled it with the result of deserializing a tensor proto. Now we want it to
return a Tensor directly, since Tensor is just a shared pointer to TensorImpl.
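
A minimal C++ sketch of the shape of this change (names and types below are hypothetical stand-ins, not the actual caffe2 API):

```
#include <memory>
#include <utility>

// Hypothetical stand-ins for the real caffe2 types.
struct TensorImpl { /* sizes, dtype, storage, ... */ };

class Tensor {
 public:
  explicit Tensor(std::shared_ptr<TensorImpl> impl) : impl_(std::move(impl)) {}

 private:
  std::shared_ptr<TensorImpl> impl_;
};

// After the change: no partially initialized output parameter. The
// deserializer builds a fully initialized impl and returns a Tensor by
// value, which is cheap because it only moves a shared pointer.
Tensor DeserializeToTensor(/* const TensorProto& proto */) {
  auto impl = std::make_shared<TensorImpl>();
  // ... fill impl from the tensor proto ...
  return Tensor(std::move(impl));
}
```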

Reviewed By: dzhulgakov

Differential Revision: D12874357

fbshipit-source-id: 12b80a763375da23cfa64a74d6bc186d8d03b94f
2018-12-10 17:17:29 -08:00
Jerry Zhang
a228a95b94 Rename ndim() -> dim() - 1/6
Summary:
Codemod generated with clangr shard mode, 50 files per diff,
clangr code(ndim()->dim()): diffusion/FBS/browse/master/fbcode/caffe2/caffe2/fb/codemods/TensorMethodRename.cpp

Reviewed By: ezyang

Differential Revision: D12935693

fbshipit-source-id: f24f1c10cd5bbb9e63cda0a0da989e6e3766380a
2018-11-07 07:30:11 -08:00
Jerry Zhang
2e1b7a6f4f Renaming dim() to size() - 1/3 (#13434)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13434

Codemod generated with clangr shard mode, 50 files per diff,
clangr code(dim->size): diffusion/FBS/browse/master/fbcode/caffe2/caffe2/fb/codemods/TensorMethodRename.cpp

Reviewed By: ezyang

Differential Revision: D12867223

fbshipit-source-id: 3e05be1a370ebd1a273bd4c70499d019fd056ac4
2018-10-31 17:43:52 -07:00
Jerry Zhang
edd902594a Renaming meta() to dtype() - 1/2 (#13333)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13333

Codemod generated with clangr shard mode, 50 files per diff,
clangr code(meta->dtype): diffusion/FBS/browse/master/fbcode/caffe2/caffe2/fb/codemods/TensorMethodRename.cpp

Reviewed By: ezyang

Differential Revision: D12845168

fbshipit-source-id: 492091963d2211ea80215200e981965767566135
2018-10-31 17:14:08 -07:00
Jerry Zhang
eea2ee6d29 Renaming size() to numel() - 1/17
Summary: Codemod generated with clangr shard mode, 25 files per diff

Reviewed By: li-roy

Differential Revision: D10866237

fbshipit-source-id: 020fcfdf52083430c5b674eda8e07ad3adfcc838
2018-10-26 15:36:59 -07:00
Dmytro Dzhulgakov
c95fa4b904 fix dtype uninitialized tensor serialization
Summary:
See D10380678 for the discussion.

Caffe2 serialization code was able to handle dtype-uninitialized tensors as long as their numel was 0 O_O.

For safety, to unblock the push, I'm preserving this behavior with a critical. As we fix all occurrences of the old API, we can delete this test.

Reviewed By: kennyhorror

Differential Revision: D10866562

fbshipit-source-id: e172bd045fdfca660ff05b426e001f5f2f03f408
2018-10-26 01:30:47 -07:00
Edward Yang
956e620c64 Eliminate numel == -1 state, delete Storage-only constructor (#12656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12656

I originally wanted to do this in two steps, but deleting the Storage-only
constructor also changes the default numel state (which breaks tests),
so it was easiest to do it all in one go.

- I still need a way to compute the correct TensorTypeId for all of the
  Caffe2 constructors; rather than hard-code it, I wrote a function
  in at::detail::computeTensorTypeId() to do this calculation.  Maybe
  this function could be used more widely, but for now, it's used
  by Caffe2 only.
- Added a pile more TensorTypeId for all of Caffe2's supported DeviceTypes
- Because I still can't put arbitrary TypeMeta in TensorOptions, the
  TensorTypeId() calculation doesn't respect dtype.  For now, this is
  not a problem, but this might block work to split non-POD dtypes
  into their own TensorTypeId.
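
A tiny sketch of the state change described above (field layout simplified and assumed):

```
#include <cstdint>

// Sketch: numel is always a valid element count now; the -1
// "uninitialized" sentinel no longer exists.
class TensorImplSketch {
 public:
  int64_t numel() const { return numel_; }

 private:
  int64_t numel_ = 0;  // previously the Storage-only constructor left a -1 state
};
```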

Reviewed By: li-roy

Differential Revision: D10380678

fbshipit-source-id: 10c5d12020596fc9f27d5579adffad00513af363
2018-10-25 08:44:05 -07:00
Michael Antonov
a6949abb15 Guard all Caffe2 protobuf string serializations with CAFFE_ENFORCE (fixed reverted bug) (#12848)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12848

Updated all non-test uses of protobuf::MessageLite::SerializeAsString to call
SerializeAsString_EnforceCheck so that the return value is checked and an
exception can be thrown on failure.

Most of the affected code was called from classes derived from BlobSerializerBase.
Didn't touch most tests and ENFORCE calls because they usually do checks
anyway.
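
A hedged sketch of what such an enforcing wrapper can look like (the name SerializeAsString_EnforceCheck comes from the diff; the body below is an assumption built on protobuf's documented SerializeToString, which returns false on failure):

```
#include <stdexcept>
#include <string>

// Sketch: check the return value that a plain SerializeAsString() ignores.
template <typename ProtoMessage>
std::string SerializeAsStringEnforceCheck(const ProtoMessage& msg) {
  std::string result;
  if (!msg.SerializeToString(&result)) {
    throw std::runtime_error("protobuf serialization failed");
  }
  return result;
}
```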

Original commit changeset: c0760e73ecc7

Reviewed By: dzhulgakov

Differential Revision: D10453456

fbshipit-source-id: d2f2b7b4578e721924354149f08f627c7e3bf070
2018-10-23 16:21:26 -07:00
Junjie Bai
805f4d5cb8 Revert D10416438: Guard all Caffe2 protobuf string serializations with CAFFE_ENFORCE
Differential Revision: D10416438

Original commit changeset: cb842e3e26b0

fbshipit-source-id: c0760e73ecc76ca9b1b74f6844e243c2df5260a2
2018-10-18 13:46:33 -07:00
Michael Antonov
63cd051867 Guard all Caffe2 protobuf string serializations with CAFFE_ENFORCE (#12799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12799

Updated all non-test uses of protobuf::MessageLite::SerializeAsString to call
SerializeAsString_EnforceCheck so that the return value is checked and an
exception can be thrown on failure.

Most of the affected code was called from classes derived from BlobSerializerBase.
Didn't touch most tests and ENFORCE calls because they usually do checks
anyway.

Reviewed By: ezyang

Differential Revision: D10416438

fbshipit-source-id: cb842e3e26b0918829d71267a375d4dd40600d58
2018-10-18 12:49:01 -07:00
Jerry Zhang
ab1a25aa9b caffe2::empty for Resize+mutable_data refactor (#12407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12407

We want to use a tensor factory to refactor caffe2's old way of initializing a Tensor via Resize and mutable_data,
in order to eliminate uninitialized Tensors.

Previously when we want to create a Tensor in caffe2, we'll do the following
```
Tensor x(CPU); // device type provided
x.Resize({1, 2, 3}); // size provided
x.mutable_data<float>(); // data type provided and memory allocated
```
This leaves the Tensor in a not-fully-initialized state during the process. To eliminate this, we
want to provide all the needed information at the beginning. ATen already has its TensorFactories: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorFactories.cpp, and there is TensorOptions; we want to adopt the same interface to ease future refactoring.

At the call site, we used to have `Output(i)`, which returns a `Blob` containing an uninitialized `Tensor`; we would then call Resize and mutable_data afterwards to provide the dimensions and data type:
```
// uninitialized tensor
auto* Y = Output(0);
// set dimensions
Y->Resize({1, 2, 3});
// actually allocate the data
auto* data = Y->mutable_data<float>();
// After this step, Tensor is fully initialized.
```
We want to change it to the following:
```
// provide dimensions and TensorOptions which include device type and data type.
// This will set all the information of Tensor properly and also allocate memory.
auto* Y = Output(0, {1, 2, 3}, at::device({context_.device_type()}).template dtype<T>());
// Tensor is fully initialized after this step

// following `mutable_data` call won't allocate memory.
auto* data = Y->mutable_data<float>();
```

microbenchmarks
```
============================================================================
caffe2/caffe2/fb/benchmarks/core_overhead_benchmark.cc      relative  time/iter  iters/s
============================================================================
OperatorNewOutputTensorAPI                                   3.27us  306.05K
OperatorOldOutputTensorAPI                                   3.55us  281.54K
============================================================================
```

Reviewed By: ezyang

Differential Revision: D10207890

fbshipit-source-id: f54ddacaa057b7c6bc7d5a8290171f35e9e40e29
2018-10-17 13:03:06 -07:00
Yangqing Jia
7d5f7ed270 Using c10 namespace across caffe2. (#12714)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12714

This is a short change to enable the c10 namespace in caffe2. We did not enable
it before due to gflags global-variable confusion, but that should be mostly
cleaned up now. Right now, the plan of record is that namespace caffe2 and
namespace aten will fully be supersets of namespace c10.

Most of the diff is codemod; the only two non-codemod changes are in caffe2/core/common.h, where

```
using namespace c10;
```

is added, and in Flags.h, where instead of creating aliasing variables in the c10 namespace, we put them directly in the global namespace to match gflags (and the behavior is the same if gflags is not built in).

Reviewed By: dzhulgakov

Differential Revision: D10390486

fbshipit-source-id: 5e2df730e28e29a052f513bddc558d9f78a23b9b
2018-10-17 12:57:19 -07:00
Sebastian Messmer
6cbf1992bd Serialization takes pointers instead of Blob (#11925)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11925

This is step 1 in the refactoring to remove Blob::ShareExternal(), i.e. Blob would then always own its contents.

ShareExternal() is, for example, used to pass non-owning blobs to serialization. This diff prepares for removing that.

Reviewed By: ezyang

Differential Revision: D9884177

fbshipit-source-id: d01df9a613a4fc62e5679fe45bfc47e2c899b818
2018-10-17 11:50:34 -07:00
Yangqing Jia
713e706618 Move exception to C10 (#12354)
Summary:
There is still some work to be done:

- Move logging and unify AT_WARN with LOG(ERROR).
- A few header files are still being plumbed through, need cleaning.
- caffe2::EnforceNotMet aliasing is not done yet.
- need to unify the macros. See c10/util/Exception.h

This is mainly a codemod and does not cause functional changes. If you find your job failing and trace it back to this diff, it can usually be fixed with one of the following approaches:

(1) add //caffe2/c10:c10 to your dependency (or transitive dependency).
(2) change objects such as at::Error, at::Optional to the c10 namespace.
(3) change functions to the c10 namespace. Especially, caffe2::MakeString is not overridden by the unified c10::str function. Nothing else changes.

Please kindly consider not reverting this diff - it involves multiple rounds of rebasing and the fix is usually simple. Contact jiayq@ or AI Platform Dev for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12354

Reviewed By: orionr

Differential Revision: D10238910

Pulled By: Yangqing

fbshipit-source-id: 7794d5bf2797ab0ca6ebaccaa2f7ebbd50ff8f32
2018-10-15 13:33:18 -07:00
Yangqing Jia
38f3d1fc40 move flags to c10 (#12144)
Summary:
still in flux.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12144

Reviewed By: smessmer

Differential Revision: D10140176

Pulled By: Yangqing

fbshipit-source-id: 1a313abed022039333e3925d19f8b3ef2d95306c
2018-10-04 02:09:56 -07:00
Yangqing Jia
9c49bb9ddf Move registry fully to c10 (#12077)
Summary:
This does 6 things:

- add c10/util/Registry.h as the unified registry util
  - cleaned up some APIs such as export condition
- fully remove aten/core/registry.h
- fully remove caffe2/core/registry.h
- remove a bogus aten/registry.h
- unify all macros
- set up registry testing in c10

Also, an important note: we used to mark the templated Registry class as EXPORT - this should not happen, because one should almost never export a template class. This PR fixes that.
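
For context, a minimal sketch of the registry pattern being unified here (simplified; the real c10 registry has more machinery for macros, export annotations, and constructor-argument forwarding):

```
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <utility>

// A string-keyed map from name to factory function.
template <class ObjectType>
class RegistrySketch {
 public:
  using Creator = std::function<std::unique_ptr<ObjectType>()>;

  void Register(const std::string& key, Creator creator) {
    creators_[key] = std::move(creator);
  }

  std::unique_ptr<ObjectType> Create(const std::string& key) const {
    auto it = creators_.find(key);
    return it == creators_.end() ? nullptr : it->second();
  }

 private:
  std::map<std::string, Creator> creators_;
};
```
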
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12077

Reviewed By: ezyang

Differential Revision: D10050771

Pulled By: Yangqing

fbshipit-source-id: 417b249b49fed6a67956e7c6b6d22374bcee24cf
2018-09-27 03:09:54 -07:00
Sebastian Messmer
8f0db9bbbb Removing some dependency edges from Blob to other caffe2 (#12043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12043

Re-trying D9979976, this time with all call sites fixed.

D9979976 got reverted because there was a call site that wasn't covered by sandcastle it seems.
I fixed it and used 'grep' to ensure there aren't any more call sites in fbsource.

Reviewed By: ezyang

Differential Revision: D10026392

fbshipit-source-id: cd341514a8e53a40147ea0ee3e52f63bb6444157
2018-09-25 11:40:24 -07:00
Maciej Bargiel
2cdf98a74d Back out "Removing some dependency edges from Blob to other caffe2"
Summary: Original commit changeset: 2ea17724e223

Differential Revision: D10026321
Ninja: stable broken

fbshipit-source-id: faf87cb7cc0f78c2c10d4aa6fceea279cd27acd6
2018-09-25 01:11:14 -07:00
Sebastian Messmer
17a65bf9b6 Removing some dependency edges from Blob to other caffe2 (#11923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11923

This is pre-work to allow moving Blob to ATen/core, which cannot depend on caffe2 anymore.
(1) Removing the Blob -> Tensor dependency allows us to move Blob to ATen/core and use it inside IValue without having to wait for the Tensor merge to be complete.
(2) In the final Blob design, we want it to be a very small class that doesn't have any special treatment for Tensor (or, to be more correct, doesn't allow storing Tensor anymore), so this is the direction we want to go anyhow.

This changes call sites that will have to be moved to IValue later, but they cannot be moved to IValue directly, because for that, IValue first needs to be able to store Blob, which in turn first needs this diff and some other changes coming up in future diffs.

Codemods:
$ codemod --extensions h,hpp,c,cpp,cc "([a-zA-Z0-9_]+)\\.IsTensorType\\(" "BlobIsTensorType(\\1, "
$ codemod --extensions h,hpp,c,cpp,cc "([a-zA-Z0-9_]+)->IsTensorType\\(" "BlobIsTensorType(*\\1, "
$ codemod --extensions h,hpp,c,cpp,cc "([a-zA-Z0-9_]+)\\.GetMutableTensor\\(" "BlobGetMutableTensor(\\1, "
$ codemod --extensions h,hpp,c,cpp,cc "([a-zA-Z0-9_]+)->GetMutableTensor\\(" "BlobGetMutableTensor(*\\1, "

It is, however, not only these codemods, because regex-based refactoring was only able to match a small fraction of the call sites. To catch more, I would've needed an AST-aware tool like clangr, which I didn't figure out how to use.
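
The target shape of the refactor, as implied by the codemods above (signatures assumed from the replacement patterns, where `->x` becomes `*x`):

```
// Forward declarations standing in for the real caffe2 types.
class Blob;
class Tensor;
enum class DeviceType { CPU, CUDA };

// Free functions replace the Blob member functions, so Blob itself no longer
// depends on Tensor. Call sites change exactly as the codemods above show:
//   blob.IsTensorType(CPU)       ->  BlobIsTensorType(blob, CPU)
//   blob->GetMutableTensor(CPU)  ->  BlobGetMutableTensor(*blob, CPU)
bool BlobIsTensorType(const Blob& blob, DeviceType device_type);
Tensor* BlobGetMutableTensor(Blob& blob, DeviceType device_type);
```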

Reviewed By: ezyang

Differential Revision: D9979976

fbshipit-source-id: 2ea17724e223b5b73b44f99362727759ca689e61
2018-09-24 22:57:05 -07:00
Christian Puhrsch
a6630e25af Remove many caffe2::TIndex and replace them with int64_t (#11943)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11943

See title

Reviewed By: ezyang

Differential Revision: D9992645

fbshipit-source-id: e8f80d6ea762971513e5e8072975ceea53e1f11a
2018-09-22 18:11:04 -07:00
Sebastian Messmer
b2b05b7c20 Move blob serialization to free functions (#11817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11817

Blob::Serialize() and Blob::Deserialize() are now the free functions SerializeBlob() and DeserializeBlob().
This takes away their access to Blob internals and makes future refactorings easier.
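
A sketch of the resulting call-site change (signatures simplified; exact parameters are assumptions):

```
#include <string>

class Blob;  // stand-in forward declaration

// New free-function interface: serialization has no privileged access to
// Blob internals.
std::string SerializeBlob(const Blob& blob, const std::string& name);
void DeserializeBlob(const std::string& content, Blob* result);

// Old member-function style, now gone:
//   blob.Serialize(name)    ->  SerializeBlob(blob, name)
//   blob.Deserialize(data)  ->  DeserializeBlob(data, &blob)
```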

Reviewed By: ezyang

Differential Revision: D9882726

fbshipit-source-id: 3251ebd4b53fc12f5e6924a6e4a8db3846ab3729
2018-09-20 23:27:34 -07:00
Roy Li
30521a37ad codemod: caffe::float16 -> at::Half (#11785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11785

Replace each instance of float16 with Half.

Reviewed By: Yangqing

Differential Revision: D9892158

fbshipit-source-id: b9225ca7bd5c84fd1c04a9d24b026c8b6cbff120
2018-09-20 18:55:19 -07:00
Sebastian Messmer
ce6906b051 Narrowing Blob (#11167)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11167

Narrow the Blob API as preparation for merging Blob/IValue

- get rid of templated IsType and Operator::InputIsType / OutputIsType
- Use 'using' instead of 'typedef' for DestroyCall (just for readability)

Reviewed By: ezyang

Differential Revision: D9623916

fbshipit-source-id: 952f0b0cf5a525094b02e8d2798dd57a56a9e1d8
2018-09-10 12:40:16 -07:00
Jerry Zhang
9f4bcdf075 caffe2::DeviceType -> at::DeviceType (#11254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11254
Previously we used the DeviceType in caffe2.proto directly, but it's an `enum` with an implicit conversion to int, which does not provide type safety; e.g., we have to explicitly check that a device type is valid in event.h:
```
template <int d>
struct EventCreateFunctionRegisterer {
  explicit EventCreateFunctionRegisterer(EventCreateFunction f) {
    static_assert(d < MaxDeviceTypes, "");
    Event::event_creator_[d] = f;
  }
};
```
at::DeviceType is an `enum class`; it has no implicit conversion to int and provides better type-safety guarantees. In this diff we have done the following refactor (taking CPU as an example):

    1. caffe2::DeviceType → caffe2::DeviceTypeProto
    2. caffe2::CPU → caffe2::PROTO_CPU
    3. caffe2::DeviceType = at::DeviceType
    4. caffe2::CPU = at::DeviceType::CPU

codemod -d caffe2/caffe2 --extensions h,cc,cpp 'device_type\(\), ' 'device_type(), PROTO_'
+ some manual changes

In short, after this diff, in C++, caffe2::CPU refers to at::DeviceType::CPU and the old proto caffe2::CPU becomes caffe2::PROTO_CPU.
On the Python side, we have a temporary workaround that aliases `caffe2_pb2.CPU = caffe2_pb2.PROTO_CPU` to make the change easier to review; this will be removed later.
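
A self-contained illustration of the type-safety difference (definitions simplified):

```
#include <cstdint>

enum ProtoDeviceType { PROTO_CPU = 0 };       // plain enum: converts to int
enum class DeviceType : int32_t { CPU = 0 };  // enum class: no implicit int

void TakesInt(int /*device*/) {}

void Example() {
  TakesInt(PROTO_CPU);                          // compiles silently
  // TakesInt(DeviceType::CPU);                 // error: no implicit conversion
  TakesInt(static_cast<int>(DeviceType::CPU));  // conversion must be explicit
}
```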

Reviewed By: ezyang

Differential Revision: D9545704

fbshipit-source-id: 461a28a4ca74e616d3ee183a607078a717fd38a7
2018-09-05 16:28:09 -07:00
Edward Yang
91797c0672 Replace direct include of caffe2.pb.h with an intermediary header caffe2_pb.h (#10946)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10946

```
codemod -d . --extensions cc,cpp,cu,cuh,h caffe2/proto/caffe2.pb.h caffe2/proto/caffe2_pb.h
```

Reviewed By: houseroad

Differential Revision: D9539945

fbshipit-source-id: 497d04720e8e7e61c05ffe1b23733d0cb774de7e
2018-08-28 11:57:08 -07:00
Jerry Zhang
7de830b879 proper sharing in ShareExternalPointer (#10804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10804

Make ShareData and ShareExternalPointer create a new storage when the old one is used by multiple tensors.
When we need to modify a field of the storage, we'll create a new storage instead.
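
A minimal sketch of the rule, assuming hypothetical simplified types:

```
#include <memory>

struct Storage {
  void* data = nullptr;  // plus capacity, deleter, ... in the real thing
};

// If the storage is shared with other tensors, detach to a fresh Storage
// before mutating, so the other tensors keep seeing their old data.
void ShareExternalPointerSketch(std::shared_ptr<Storage>& storage, void* ptr) {
  if (storage.use_count() > 1) {
    storage = std::make_shared<Storage>();  // new storage for this tensor only
  }
  storage->data = ptr;  // safe: no other tensor observes this storage
}
```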

Reviewed By: ezyang

Differential Revision: D9350686

fbshipit-source-id: 68d2b6b886b0367b0fc4fabfd55b9a480e7388ca
2018-08-28 10:52:26 -07:00
Edward Yang
98d60ad43d Replace caffe2::EnforceNotMet with at::Error
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10184

Reviewed By: dzhulgakov

Differential Revision: D9140095

fbshipit-source-id: 3beead825609cec5054347e59903b0b78ef150f8
2018-08-03 19:25:05 -07:00
Jerry Zhang
5d3782b655 Fix IDEEP Copys (#10104)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10104

.

Reviewed By: yinghai

Differential Revision: D9109638

fbshipit-source-id: 319cc5711132314dfba0f09ac403522f21ad532b
2018-08-03 10:31:32 -07:00
Jerry Zhang
aebf3b47ae Remove template parameter from Tensor (#9939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9939

Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This changes the templating at call sites; the core implementations will change later.

Before, the Caffe2 Tensor class was compile-time fixed to bind to a particular device/context. With this change, we're making it a runtime property (stored inside the tensor) while preserving the same semantics. For example, one has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra argument *DeviceType* to most of the constructors of the tensor, e.g. Tensor(DeviceType type).
2. The semantics of the constructor Tensor(const Tensor<SrcContext>& src, ContextForCopy* context) have changed: the second context is passed in to enable us to call the templated Copy function. Previously it could be in a different context than the source and target; now we enforce that the context, if provided, has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter Blob::GetMutableTensor that verifies both that the Blob contains a Tensor and that it is of the correct type.
4. Specifically, the Tensor type is not default-constructible any more (as we don't have unknown-device tensors), and thus some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.
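
A compact sketch of the before/after at call sites (types simplified, names assumed):

```
enum class DeviceType { CPU, CUDA };

// After: the device is a runtime property stored in the tensor, and there
// is no default constructor -- no "unknown device" tensors.
class Tensor {
 public:
  explicit Tensor(DeviceType device) : device_(device) {}
  DeviceType device() const { return device_; }

 private:
  DeviceType device_;
};

// Usage:
//   before:  Tensor<CPUContext> x;        // device fixed at compile time
//   after:   Tensor x(DeviceType::CPU);   // device chosen at run time
```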

Reviewed By: ezyang, houseroad

Differential Revision: D9024330

fbshipit-source-id: e0b8295d2dc6ebe2963383ded5af799ad17164ba
2018-07-27 10:56:39 -07:00
Jerry Zhang
969b62f276 Revert D8121878: Remove template parameter from Tensor
Differential Revision: D8121878

Original commit changeset: 4a5e9a677ba4

fbshipit-source-id: d8e2c0bb145b52fbcca323b22d1d3346f0b3249e
2018-07-26 14:02:04 -07:00
Jerry Zhang
cd5adc7b5f Remove template parameter from Tensor (#13)
Summary:
Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This changes the templating at call sites; the core implementations will change later.

Before, the Caffe2 Tensor class was compile-time fixed to bind to a particular device/context. With this change, we're making it a runtime property (stored inside the tensor) while preserving the same semantics. For example, one has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra argument *DeviceType* to most of the constructors of the tensor, e.g. Tensor(DeviceType type).
2. The semantics of the constructor Tensor(const Tensor<SrcContext>& src, ContextForCopy* context) have changed: the second context is passed in to enable us to call the templated Copy function. Previously it could be in a different context than the source and target; now we enforce that the context, if provided, has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter Blob::GetMutableTensor that verifies both that the Blob contains a Tensor and that it is of the correct type.
4. Specifically, the Tensor type is not default-constructible any more (as we don't have unknown-device tensors), and thus some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.

Reviewed By: xw285cornell

Differential Revision: D8121878

fbshipit-source-id: 4a5e9a677ba4ac82095df959851a054c81eccf81
2018-07-26 10:25:23 -07:00
Orion Reblitz-Richardson
9ec0a2aef4 fbshipit-source-id: ba600fcd2b5cefc7621357bdeb05e24cea02e5af
2018-06-27 04:50:56 -07:00
Dmytro Dzhulgakov
ae1ceef36a Allow TypeMeta hold non-default-constructible types (#8349)
Necessary for Tensor detemplatization (D8121878) - now the tensor won't have a default constructor (as we don't know the device).

Thus this diff makes TypeMeta constructible with non-default-constructible types, in which case ctor() is non-null but always throws.

It's dangerous, however, as we won't catch potential type errors at compile time. Luckily, the only places where ctor() is used are Blob and Tensor, which have templated wrappers there (GetMutable and mutable_data respectively). We can just enforce the necessary type requirements there explicitly as a static_assert.

It also changes the failure behavior to throw() instead of abort(). Aborting the process is not cool for a library :)
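
A sketch of the static_assert guard mentioned above (placement and wording assumed):

```
#include <type_traits>

// Templated wrappers such as Blob::GetMutable<T>() / Tensor::mutable_data<T>()
// can require default-constructibility at compile time, since TypeMeta's
// runtime ctor() would otherwise be the always-throwing placeholder.
template <typename T>
T* GetMutableSketch() {
  static_assert(std::is_default_constructible<T>::value,
                "GetMutable can't be used with non-default-constructible "
                "types; construct the object explicitly instead");
  return new T();  // placeholder for the real blob-backed construction
}
```
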
2018-06-11 15:53:07 -07:00
Dmytro Dzhulgakov
46c0b01234 Revert D3314316 (#8346)
It has been 2 years and we do not seem to have a use case for this one, so for
the sake of clean API design we should remove it. The feature would have
allowed us to pass in arguments to optionally construct an object, although it
is indeed a little unclear how we could reuse existing objects if constructor
arguments are passed in. In any case, we may want to remove this dangling
feature.
2018-06-11 14:23:10 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Dmytro Dzhulgakov
7d141d4243 Changes done internally at Facebook (#2154)
f679c644e332 dzhulgakov [caffe2] Sync script - add ability to handle rebase conflicts
51729b061a15 dzhulgakov [caffe2] Changes done on GitHub
2018-03-06 01:23:54 -08:00
mingzhe
6aef608f10 Fix Out of Memory failure in test TensorTest.Tensor64BitDimension (#2114)
* WIP: Fix Out of Memory failure in test TensorTest.Tensor64BitDimension

* WIP: update warning message and wrap resize inside TensorTest.Tensor64BitDimension

* WIP: only catch exception which is related to out of memory

* WIP: add return in the out of memory exception
2018-03-05 10:26:05 -08:00
Dmytro Dzhulgakov
1b4959e48d Type error message when RTTI is not enabled
Summary:
When RTTI was not enabled, previously we could only print the
"(RTTI not enabled ...)" type error message. This is annoying when developing
in a mobile environment. Adding a registry of type-name strings (gRegistry,
populated via #T in the registration macro) so we have a basic string for each
type and easy type inference.

Reviewed By: Yangqing

Differential Revision: D6849614

fbshipit-source-id: d41417d72fdcfb7b8c9ddc4ded604ea598572b73
2018-01-31 15:06:39 -08:00
Yangqing Jia
fc8532c89d Allow serialization of custom types inside Tensor
Summary:
The use case is that sometimes we need a Tensor of a custom type instead of POD
or string. This diff allows one to delegate to BlobSerializerBase to further
serialize the contents inside the Tensor.

Design choices:
(1) Each element is serialized as a BlobProto string, and stored in the
repeated string field.
(2) UNDEFINED is used as the enum value for the tensor data type, and the exact
type string is stored in the additional field.
(3) BlobSerializer is called on each item to obtain the serialized string.
(4) This requires the custom type to have a copy constructor - otherwise it
will simply not be possible to copy over the deserialized content without an
explicit type.

See blob_test.cc for an example.

Reviewed By: sunnieshang

Differential Revision: D6300196

fbshipit-source-id: 18bf94a22a07337e0fa83d3f1004b3651e38cf27
2017-11-10 13:14:21 -08:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Aapo Kyrola
ec2ee181c1 allow sharing tensor of simple types
Summary: If the blob type switches between fp32 and fp16, for example, we should share the tensor buffer. This kind of switching can happen with memonger and in-place conversions.
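
One plausible reading of the "simple types" condition, as a hedged sketch (the actual predicate in the diff may differ):

```
#include <type_traits>

// Reusing one buffer across a dtype switch (e.g. fp32 <-> fp16) is only
// safe for types that need no per-element constructor or destructor runs.
template <typename T>
constexpr bool IsSimpleType() {
  return std::is_trivially_copyable<T>::value &&
         std::is_trivially_destructible<T>::value;
}

static_assert(IsSimpleType<float>(), "fp32 qualifies");
static_assert(IsSimpleType<unsigned short>(), "fp16 storage type qualifies");
```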

Reviewed By: bddppq

Differential Revision: D5812333

fbshipit-source-id: 44d54bfe52cbda734db8c7f20d6970e4b51ee1e1
2017-09-13 14:35:29 -07:00
Yangqing Jia
5d24a4eeef Early design for a general Event abstraction cross-devices.
Summary:
There are ad-hoc efforts at avoiding excessive device synchronizations, such as
async_dag, singlethread_async, etc. This diff aims to provide an early design
for a general Event class that can achieve the following:

(1) It is device agnostic, essentially using a vtable to do cross-device record,
wait, and synchronization.
(2) Created new functions WaitEvent and Record in the Context class for
interacting with Events.
(3) Exposed the corresponding WaitEvent and Record functions in the OperatorBase
class as well.

An example use case is that, after potential future refactoring, one can
achieve real async execution per operator by running

```
op.WaitEvent(previous_event);
op.RunAsync();
op.RecordEvent(this_op_event);
```

and the next op can do

```
next_op.WaitEvent(this_op_event);
```

Right now, I changed the async_dag net implementation so that it uses the
general event design. The old Event class is assimilated into the general
Event class, and the old Stream class is now essentially taken over by the
Context class itself.
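
A minimal sketch of the device-agnostic dispatch described in (1), assuming function tables indexed by device type (the registration machinery is omitted):

```
constexpr int kMaxDeviceTypes = 8;

struct Event;
using EventRecordFn = void (*)(Event*);
using EventWaitFn = void (*)(const Event*);

// Per-device implementations register themselves into these tables.
static EventRecordFn event_recorder[kMaxDeviceTypes];
static EventWaitFn event_waiter[kMaxDeviceTypes];

struct Event {
  int device_type = 0;
  void Record() { event_recorder[device_type](this); }
  void Wait() const { event_waiter[device_type](this); }
};
```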

Reviewed By: harouwu

Differential Revision: D5648463

fbshipit-source-id: 58bd84d06e4a9977b0b835110ddb2f18be3b7cbc
2017-08-18 15:46:51 -07:00
Dmitrii Podoprikhin
a95f7aa38b Fixed bug with blob_test
Reviewed By: Yangqing

Differential Revision: D5556434

fbshipit-source-id: 4877872e9b1357a5c5a338ef06c67d6ac409f0a6
2017-08-03 17:22:49 -07:00
Dmitrii Podoprikhin
7df859871e Added functionality that allows users to store huge blobs
Summary: Added functionality that allows users to store huge blobs of any type, not only Tensors. The blob has to be divided into chunks in the same way as a Tensor blob.
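
A sketch of the chunking idea (the chunk size and container choice are assumptions):

```
#include <cstddef>
#include <string>
#include <vector>

// Split a huge serialized payload into fixed-size chunks so each piece
// stays within a safe protobuf message size.
std::vector<std::string> ChunkPayload(const std::string& payload,
                                      std::size_t chunk_size) {
  std::vector<std::string> chunks;
  for (std::size_t off = 0; off < payload.size(); off += chunk_size) {
    chunks.push_back(payload.substr(off, chunk_size));
  }
  return chunks;
}
```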

Reviewed By: kennyhorror

Differential Revision: D5432762

fbshipit-source-id: c171faacd99d209bfae6f9707ebde7c4e23ba3b9
2017-08-02 16:08:09 -07:00
Yangqing Jia
b6daf562c4 fix windows build
Summary:
__attribute__((unused)) is not supported on Windows, so we actually need to
substitute it with a macro.

Also changed UNUSED_VARIABLE to CAFFE2_UNUSED because we also use it to mark
functions now.

Reviewed By: ajtulloch

Differential Revision: D5497063

fbshipit-source-id: bcda026e626c41f71c21c36f029a3f871eaea7d4
2017-07-26 03:50:20 -07:00
Victor Gao
34be12353b comment out unused parameters
Summary: This uses `clang-tidy` to comment out unused parameters (in functions, methods and lambdas) in fbcode. Cases that the tool failed to handle are fixed manually.

Reviewed By: igorsugak

Differential Revision: D5454343

fbshipit-source-id: 5dee339b4334e25e963891b519a5aa81fbf627b2
2017-07-21 15:14:43 -07:00
Artem Volkhin
54e8ef14fb add flag caffe2_serialize_fp16_as_bytes
Reviewed By: kennyhorror

Differential Revision: D5403218

fbshipit-source-id: 755e7a709880f54096a6e5e661554614fc2cc585
2017-07-11 22:20:36 -07:00
Artem Volkhin
d9daad509d Serialize float16 tensors as bytes to get rid of 50% overhead
Summary: When we use the int32_data field for float16 tensor serialization, it's possible to end up with a representation up to 50% larger than what can be achieved using byte_data. The reason for this is varints (https://developers.google.com/protocol-buffers/docs/encoding#varints). In the worst case (when the highest bit, the sign bit, is set) it uses three 8-bit blocks, i.e. 24 bits, for each number. Saving in the byte field removes this overhead.
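
A sketch of the fixed-width packing (the field name byte_data is from the summary; the helper below is an assumption):

```
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Two bytes per fp16 value, always, versus a varint-encoded int32 that
// can take up to three bytes per value when high bits are set.
std::string PackFp16AsBytes(const std::vector<uint16_t>& values) {
  std::string bytes(values.size() * sizeof(uint16_t), '\0');
  if (!values.empty()) {
    std::memcpy(&bytes[0], values.data(), bytes.size());
  }
  return bytes;  // destined for the proto's byte_data field
}
```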

Reviewed By: Yangqing

Differential Revision: D5375267

fbshipit-source-id: 0068daed25cd0157ea80a768b6e3899ea2bd8caf
2017-07-10 11:19:09 -07:00
Dmytro Dzhulgakov
03aec5ae53 Add Tensor::FreeMemory
Summary: Similarly to Blob::Reset but keeps dimension information.

Reviewed By: rayleichen

Differential Revision: D5365655

fbshipit-source-id: c0f99c020bbabe93ff2a2ee5d519a5f467fda5ba
2017-07-04 19:31:17 -07:00
Luke Yeager
86594075f7 Third fix for KeepOnShrink tests
Summary:
See https://github.com/caffe2/caffe2/pull/723, https://github.com/caffe2/caffe2/pull/551, https://github.com/caffe2/caffe2/issues/417
```
[ RUN      ] TensorCPUTest/1.KeepOnShrink
/opt/caffe2/caffe2/core/blob_test.cc:362: Failure
Expected: (ptr) != (larger_ptr), actual: 0x24f7640 vs 0x24f7640
[  FAILED  ] TensorCPUTest/1.KeepOnShrink, where TypeParam = int (0 ms)
```
I haven't been able to reproduce locally or on TravisCI - only in our own test infrastructure. The fix is conceptually the same as https://github.com/caffe2/caffe2/pull/723. After reading through the code, I can't see any other checks which should fail this way.
Closes https://github.com/caffe2/caffe2/pull/801

Differential Revision: D5248106

Pulled By: akyrola

fbshipit-source-id: 0cd3e2c11e7ae3d924496843a311530f0e08a9da
2017-06-14 11:48:15 -07:00