Commit Graph

138 Commits

Roy Li
50fbf79451 test basic tensor interop
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12249

Differential Revision: D13469356

Pulled By: li-roy

fbshipit-source-id: b49748462aa44ac34b8ce79783f2c895a537a232
2018-12-27 17:04:00 -08:00
Sebastian Messmer
bb8ee2de0f Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14858

This diff doesn't change logic but just takes the existing code and moves it to caffe2::Tensor

Reviewed By: ezyang

Differential Revision: D13365817

fbshipit-source-id: bc73b27a793602cb14200dcdf357aa63233da43c
2018-12-13 18:41:24 -08:00
Sebastian Messmer
070f33f154 Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14656

This diff doesn't move it yet, but prepares it to be moved, i.e. removes all access to class internals.

dzhulgakov: Please comment on whether you think it still makes sense to land this even though it's not blocking anymore, since we're going to move at::CopyBytes anyhow.

ezyang: There's some changes in the implementation, especially handling undefined dest tensors. Please review carefully.

Reviewed By: ezyang

Differential Revision: D13287688

fbshipit-source-id: 17800ca8a79ab1633f23be58d96f99a160d8ed24
2018-12-13 18:41:23 -08:00
Sebastian Messmer
3fa53da61a Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818

Reviewed By: ezyang

Differential Revision: D13348042

fbshipit-source-id: 11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
2018-12-11 21:01:45 -08:00
Sebastian Messmer
9e9e87c19e Move TensorImpl to c10 (yay!)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14795

Reviewed By: ezyang

Differential Revision: D13336856

fbshipit-source-id: 5375d0e42312ff7564f4df06210a5e49542d59e3
2018-12-11 21:01:38 -08:00
Sebastian Messmer
086a37876b Fix include paths for TensorOptions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14747

Reviewed By: ezyang

Differential Revision: D13318645

fbshipit-source-id: f5ba77a93f6019fbf5faffb47a2837c95fad474d
2018-12-07 16:23:44 -08:00
Bram Wasti
83ad52634a Add FunctionSchema based Operator Registry (#13789)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13789

This enables creation of operators with FunctionSchema and IValue

Reviewed By: smessmer

Differential Revision: D13008791

fbshipit-source-id: 151efc88ac315f4a0ab0171a99774caaf767ef1e
2018-12-05 17:20:24 -08:00
Sebastian Messmer
ff7deb95d7 Back out "Fix include paths for TensorOptions, DefaultTensorOptions, OptionsGuard" (#14744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14744

Original commit changeset: d236d5351ecf

Reviewed By: suo

Differential Revision: D13318596

fbshipit-source-id: 55f1e9472d05fb5a9c47dc82c32e9a66b5e4308c
2018-12-04 08:59:07 -08:00
Sebastian Messmer
d063c9c330 Fix include paths for TensorOptions, DefaultTensorOptions, OptionsGuard
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14647

Reviewed By: ezyang

Differential Revision: D13283497

fbshipit-source-id: d236d5351ecf7ab9712a55e9ef12d8bba48eb53f
2018-12-03 21:53:26 -08:00
Dmytro Dzhulgakov
da9e49e586 Remove Context dependency from Tensor class (#14269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14269

Removes reference to Context proper and instead adds a bool argument for async copy (the same as `copy_`)

For CopyFrom - I haven't tweaked all callsites yet. Instead I rely on a terrible hack that a pointer to a context is implicitly converted to bool when passed, haha :) It's not good code and I propose to fix it in a follow-up diff (maybe using clangr tooling).

Reviewed By: ezyang

Differential Revision: D13117981

fbshipit-source-id: 7cb1dc2ba6a4c50ac26614f45ab8318ea96e3138
2018-11-28 15:45:38 -08:00
Sebastian Messmer
0e93a03a3a Fix include paths for intrusive_ptr (#13692)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13692

This now lives in c10/util, not ATen/core anymore.

Reviewed By: ezyang

Differential Revision: D12937091

fbshipit-source-id: ea2d420a15e7941a38d0b4c75e20ca18437c73f8
2018-11-21 23:08:50 -08:00
Jerry Zhang
1c2ed4eb23 Tensor construction: combine Resize+mutable_data - 1/4 (#13942)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13942

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: smessmer

Differential Revision: D13054770

fbshipit-source-id: a9e86e5dfcb4f7cebf5243e1d359fad064561bed
2018-11-19 15:33:50 -08:00
Lin Yang
17b2d2d373 fix TensorPrinter when tensor have 0 size. (#13986)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13986

if total_count == 0, it crashes on:

  values_stream << tensor_data[total_count - 1];

Reviewed By: jerryzh168

Differential Revision: D13066438

fbshipit-source-id: b7a2d681ca0cf5b68d78872c94fac6de9c5de2dc
2018-11-15 07:51:13 -08:00
Jerry Zhang
0f59dcb317 Remove partially initialized Tensor + CopyFrom (#13629)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13629

Previously we had a Tensor with an initialized storage (and therefore a known device_type), and
we would then call CopyFrom on it to initialize the sizes and data.

We want to eliminate the partially initialized Tensor by replacing the pattern of calling CopyFrom on a partially initialized Tensor with either an undefined Tensor plus an initialization API (1)(3), or by combining all the initialization into the same step (2).

1. member variable initialization + CopyFrom
Previously we had a tensor initialized with a device_type and then used CopyFrom to populate the content; now we remove the partial initialization by making the original member variable an undefined Tensor and using ReinitializeFrom to copy from another Tensor.

2. Output + CopyFrom
Previously, we first got a tensor with a device_type and then called CopyFrom on another Tensor;
we changed this by combining the two operations into OperatorBase::OutputTensor.

3. Output + custom functions
An example can be found in the TransformGPU function.
In this case we move the part that initializes the tensor outside of the function and do it explicitly there, so that we can reuse the Output functions to make a fully initialized Tensor.

Note that to keep the original semantics, both of the APIs have a caching effect based on device_type: we only create a new Tensor object when the device_type does not match or the Tensor is undefined; otherwise, we reuse the original Tensor object.

Reviewed By: dzhulgakov

Differential Revision: D12848855

fbshipit-source-id: 37bb4ddc1698ebea533b73006eeb1218faa8ddf8
2018-11-07 11:31:03 -08:00
Jerry Zhang
ebaabfbbd5 ReinitializeTensor function for refactoring Tensor as member variable (#13147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13147

We want to refactor
```
class A {

void func() {
  x_.Resize(dims);
  auto* data = x_.mutable_data<T>();
}

Tensor x_(CPU);
};
```

to
```
class A {
void func() {
  ReinitializeTensor(&x_, dims, at::dtype<T>().device(CPU));
  auto* data = x_.mutable_data<T>();
}

Tensor x_; // Undefined Tensor
};
```

This diff adds the ReinitializeTensor function.

Reviewed By: dzhulgakov

Differential Revision: D10861298

fbshipit-source-id: 9f432297d07a4890e29bb68436364e0b2e2545e7
2018-11-05 19:13:55 -08:00
Dmytro Dzhulgakov
fdf34c8da8 Kill more weird constructors on Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13433

Reviewed By: jerryzh168

Differential Revision: D12874599

fbshipit-source-id: 0c262fda72cbc4f3ea80df790cc8e95140bdc7e0
2018-11-04 16:54:49 -08:00
Dan Nguyen
77b8aade58 Revert D12809293: Kill more weird constructors on Tensor
Differential Revision:
D12809293

Original commit changeset: 5eb663fe8182

fbshipit-source-id: 709a5378fdbbb3fcfaacef8fc48b6530afbbc28f
2018-10-30 16:01:51 -07:00
Dmytro Dzhulgakov
ec754adb14 Kill more weird constructors on Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13190

Reviewed By: ezyang

Differential Revision: D12809293

fbshipit-source-id: 5eb663fe818276d97cf31d1ed1e7f025d2b69851
2018-10-30 10:25:40 -07:00
Dmytro Dzhulgakov
3c78cc6c2b Remove Tensor(const Tensor&, BaseContext*, type)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13204

Reviewed By: ezyang

Differential Revision: D11915764

fbshipit-source-id: baf883b3095bc9d5adf0b942eb874eaa7c1f45e5
2018-10-29 13:57:43 -07:00
Jerry Zhang
eea2ee6d29 Renaming size() to numel() - 1/17
Summary: Codemod generated with clangr shard mode, 25 files per diff

Reviewed By: li-roy

Differential Revision: D10866237

fbshipit-source-id: 020fcfdf52083430c5b674eda8e07ad3adfcc838
2018-10-26 15:36:59 -07:00
Jerry Zhang
f6ccb6a0f9 bring caffe2::Tensor API closer to aten/pytorch (#13134)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13134

For tensor, we plan to do the following renaming:
```
* t.ndim() → t.dim()
* t.size() → t.numel()
* t.dims() → t.sizes()
* t.meta() → t.dtype()
* t.dim(d) → t.size(d)
```
This diff adds the new APIs to caffe2::Tensor so we can start the codemod;
we'll remove the old APIs after the codemod is done

Reviewed By: ezyang

Differential Revision: D10856028

fbshipit-source-id: 1638997e234d7b3113ef8be65a16246f902273c7
2018-10-25 15:45:09 -07:00
Edward Yang
956e620c64 Eliminate numel == -1 state, delete Storage-only constructor (#12656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12656

I originally wanted to do this in two steps, but deleting the Storage-only
constructor also changes the default numel state (which breaks tests),
so it was easiest to do it all in one go.

- I still need a way to compute the correct TensorTypeId for all of the
  Caffe2 constructors; rather than hard-code it, I wrote a function
  in at::detail::computeTensorTypeId() to do this calculation.  Maybe
  this function could be used more widely, but for now, it's used
  by Caffe2 only.
- Added a pile more TensorTypeId for all of Caffe2's supported DeviceTypes
- Because I still can't put arbitrary TypeMeta in TensorOptions, the
  TensorTypeId() calculation doesn't respect dtype.  For now, this is
  not a problem, but this might block work to split non-POD dtypes
  into their own TensorTypeId.

Reviewed By: li-roy

Differential Revision: D10380678

fbshipit-source-id: 10c5d12020596fc9f27d5579adffad00513af363
2018-10-25 08:44:05 -07:00
Jerry Zhang
dd7c2d4284 Change the function signature for caffe2::empty (#13015)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13015

att

Reviewed By: ezyang

Differential Revision: D10469310

fbshipit-source-id: f4621fe5d17bb4663192860f81effe6bdfe21bea
2018-10-24 13:14:24 -07:00
Jerry Zhang
353fdefdd6 dims() -> sizes() (caffe2/core) (#13014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13014

Tensor method renaming using clangr

Reviewed By: ezyang

Differential Revision: D10467556

fbshipit-source-id: 7d7eaf5fc59bbb493c057d5b8bfdda03b140c97e
2018-10-24 12:49:28 -07:00
Edward Yang
99bc541b5b size_from_dim(0) is like numel() but worse. Don't do it. (#12729)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12729

This may have a dependency on D10380678 if size_from_dim(0)
was required because numel() used to return -1 in some cases.
This is no longer true.

Reviewed By: li-roy, dzhulgakov

Differential Revision: D10415069

fbshipit-source-id: 39f46f56249ecaf3533f62a0205b3a45d519d789
2018-10-18 18:06:37 -07:00
Jerry Zhang
ab1a25aa9b caffe2::empty for Resize+mutable_data refactor (#12407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12407

We want to use a tensor factory to refactor caffe2's old way of initializing a Tensor via Resize and mutable_data,
in order to eliminate uninitialized Tensors.

Previously when we want to create a Tensor in caffe2, we'll do the following
```
Tensor x(CPU); // device type provided
x.Resize({1, 2, 3}); // size provided
x.mutable_data<float>(); // data type provided and memory allocated
```
This leaves the Tensor in a not-fully-initialized state during the process. To eliminate this, we
want to provide all the needed information at the beginning. ATen already has its TensorFactories: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/TensorFactories.cpp, and there is a TensorOptions; we want to adopt the same interface to ease future refactoring.

At the call site, we used to have `Output(i)`, which returns a `Blob` containing an uninitialized `Tensor`; we would then call Resize and mutable_data afterwards to provide the dimensions and data type:
```
// uninitialized tensor
auto* Y = Output(0);
// set dimensions
Y->Resize({1, 2, 3});
// actually allocate the data
auto* data = Y->mutable_data<float>();
// After this step, Tensor is fully initialized.
```
We want to change it to the following:
```
// provide dimensions and TensorOptions which include device type and data type.
// This will set all the information of Tensor properly and also allocate memory.
auto* Y = Output(0, {1, 2, 3}, at::device({context_.device_type()}).template dtype<T>());
// Tensor is fully initialized after this step

// following `mutable_data` call won't allocate memory.
auto* data = Y->mutable_data<float>();
```

microbenchmarks
```
============================================================================
caffe2/caffe2/fb/benchmarks/core_overhead_benchmark.cc    relative  time/iter  iters/s
============================================================================
OperatorNewOutputTensorAPI                                   3.27us  306.05K
OperatorOldOutputTensorAPI                                   3.55us  281.54K
============================================================================
```

Reviewed By: ezyang

Differential Revision: D10207890

fbshipit-source-id: f54ddacaa057b7c6bc7d5a8290171f35e9e40e29
2018-10-17 13:03:06 -07:00
Jerry Zhang
b89a3b50fb Remove StaticContext (#12547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12547

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12305

Remove StaticContext from context_base.h

Reviewed By: dzhulgakov

Differential Revision: D10073519

fbshipit-source-id: 350beec3c54365edef338318ce58229ccb825a98
2018-10-10 19:41:03 -07:00
Jerry Zhang
7724807551 Remove ExtractDeviceOption from StaticContext (#12304)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12304

- Make ExtractDeviceOption a free function.
- Add a Storage(at::Device) constructor in order to preserve the device_id.

Reviewed By: dzhulgakov

Differential Revision: D10069839

fbshipit-source-id: a5f3994a39bdf1b7503b39bb42c228e438b52bfa
2018-10-10 14:12:16 -07:00
Sebastian Messmer
6f664d3917 Improve TypeMeta (#11502)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11502

TypeMeta now is only a pointer to a TypeMetaData structure, of which there is exactly one global instance per type.
This reduces the size of everything storing a TypeMeta (Tensor, Blob, ...) and potentially improves performance.

Also, this diff gets rid of the type name registry in favor of static strings.

Experiments (summary: 1-3% perf gain)
- Service Lab: https://our.intern.facebook.com/intern/servicelab/30712497/
 -> No significant results found.
- Mobile Lab c10bench.json: https://our.intern.facebook.com/intern/fblearner/details/75984908/
 -> 1-3% perf gain
- Mobile Lab c10bench default: https://our.intern.facebook.com/intern/fblearner/details/75984999/
 -> 2-3% perf gain
- adindexer canary: https://our.intern.facebook.com/intern/ads/canary/413002142824203076
 -> no significant changes (benchmark too noisy)
- adfinder canary: https://our.intern.facebook.com/intern/ads/canary/413002166737860362
 -> no significant changes (benchmark too noisy)

Reviewed By: dzhulgakov

Differential Revision: D9763422

fbshipit-source-id: fc08937f114af5ff9f3ddbe7c7e396942868cdf5
2018-10-06 14:09:28 -07:00
Edward Yang
54d9823d00 Make caffe2::Tensor::dims() return an IntList instead of a const vector& (#12180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12180

I had to fix a lot of call sites, because a lot of places assume that
you can actually get a const vector&, and if the internal representation
of sizes in a tensor is NOT a vector, it's not possible to fulfill
this API contract.

Framework changes:
- I deleted TensorImpl::dims(); caffe2::Tensor::dims() just forwards to
  sizes() now.
- De-templatized SetDims; now it is an explicit list of ArrayRef and
  variadic overloads.  This makes implicit conversions work again,
  so I don't need to explicitly list the std::vector cases too.
  - As a knock-on effect, this causes Reset() to accept at::IntList as well as
    const std::vector<int64_t>&
- Edited variadic overloads of SetDims to all forward to the underlying
  arbitrary-dim implementation, reducing code duplication. (It's probably
  marginally less efficient in the new world.)
- Replace Tensor constructor accepting const std::vector<int64_t>& with at::IntList
- Make MKLTensor accept ArrayRef along with vector in constructor and
  Reset (unfortunately, no implicit conversions here, since it's templated on
  index type.)
- There are a few other places, like cudnn, where I changed functions
  that previously took const std::vector<int64_t>& to take at::IntList
  instead.

Classification of call site changes:
- 'const std::vector<int64_t>& x_dims = x.dims()' ==>
  'at::IntList x_dims = x.dims()'
- 'std::vector<int64_t> x_dims = x.dims()' ==>
  'std::vector<int64_t> x_dims = x.dims().vec()' (we need a copy!)
  Usually this is because we're about to mutably modify the vector
  to compute some new dimension.  However, it also very commonly occurs in the
  form: 'x_dims_ = x.dims()' because we frequently cache sizes in operators.
- Instead of constructing std::vector<int64_t>{blah, blah}, construct an
  at::IntList directly

ArrayRef changes:
- Added cbegin()/cend() iterators; they operate the same as begin()/end()
  because everything on ArrayRef is const.
- Moved operator<< into ArrayRef.h, so that it's always available when
  working with ArrayRef.  I also templated it, so it now works on an
  ArrayRef of any type.
- Add operator== overload for ArrayRef, and also add variants to permit
  comparison of ArrayRef with std::vector, a very common operation.
  (The non-templated version of operator== can get these automatically
  via implicit conversion, but with templates C++ refuses to do
  any explicit conversions.)

I'm planning to audit all dims() call sites to make sure they don't
expect 'auto x = t.dims()' to give you an x whose lifetime can validly
outlive the tensor.

I opted not to do a dims() to sizes() rename, because dims() also matches
the protobufs accessor.  Bad news!

Reviewed By: jerryzh168

Differential Revision: D10111759

fbshipit-source-id: a2a81dc4b92c22ad4b3b8ef4077a7e97b6479452
2018-10-05 15:57:41 -07:00
Jerry Zhang
006171fffc Back out "[pytorch][PR] Revert "Move CreateContext to global registry (#11688)"" (#12121)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12121

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12055

Original commit changeset: 6ca9de65b707

Reviewed By: ezyang

Differential Revision: D10033396

fbshipit-source-id: ca9f4b2f7ef0561f619b833415d394a8b9972bf4
2018-10-01 11:10:46 -07:00
Edward Yang
f5a0c337ba Move TensorImpl IsType, meta, dim32, dim, ExtractDeviceOption to caffe2::Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12100

Reviewed By: jerryzh168

Differential Revision: D10051424

fbshipit-source-id: 5986e92ea54e60ec6bfe992015a05e09288c948c
2018-09-27 20:40:03 -07:00
Edward Yang
bbae57d06e Move TensorImpl size_from_dim, size_to_dim, size_between_dim, canonical_axis_index to caffe2::Tensor (#12099)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12099

- Generalize the free functions to accept IntList, not just std::vector<int64_t>

Reviewed By: jerryzh168

Differential Revision: D10051365

fbshipit-source-id: e3d571bf8fead22f6f25c3ca46f0c38c2bb065d2
2018-09-27 20:40:00 -07:00
Edward Yang
149403f849 Move TensorImpl ndim, size, itemsize and nbytes to caffe2::Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12098

Reviewed By: jerryzh168

Differential Revision: D10051298

fbshipit-source-id: a833fad74bbda38c019ec2cb97d4bb6804e09963
2018-09-27 19:56:00 -07:00
Edward Yang
a86a61b004 Implement caffe2::Tensor::raw_data() in terms of data()
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12097

Reviewed By: jerryzh168

Differential Revision: D10051202

fbshipit-source-id: b4b61869363a606ab465d1500558226efae30d06
2018-09-27 18:40:37 -07:00
Edward Yang
2021b26bcb Move TensorImpl::ShareExternalPointer helper overloads to caffe2::Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12096

Reviewed By: jerryzh168

Differential Revision: D10051126

fbshipit-source-id: a9b95d00512a0b4e6339d4f3f0bb180dd0c79247
2018-09-27 18:40:35 -07:00
Edward Yang
976a9e0454 Move TensorImpl::DebugString() to caffe2::Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12095

Reviewed By: jerryzh168

Differential Revision: D10051078

fbshipit-source-id: f56b6fc5d1cb8ae4b636e88efe607fe65cc1d7a0
2018-09-27 18:40:33 -07:00
Edward Yang
b0e48aa197 Move TensorImpl::Reshape(vector<int>) to caffe2::Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12094

Reviewed By: jerryzh168

Differential Revision: D10051079

fbshipit-source-id: 87fb91f31c33ce9b64c4654e79e0131ae391cd78
2018-09-27 18:40:30 -07:00
Edward Yang
d02478e607 Move TensorImpl::ResizeLike to caffe2::Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12091

Reviewed By: jerryzh168

Differential Revision: D10051012

fbshipit-source-id: 772ecd2e377f7d4e1ae510c1f647f6c8b71e5a57
2018-09-27 18:40:25 -07:00
Edward Yang
dd73d57643 Move TensorImpl::ShrinkTo to caffe2::Tensor (#12090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12090

This is a slight pessimization because we need to do a
full recompute of is_contiguous(), even though a modification
of dim-0 is guaranteed to preserve contiguity.

Reviewed By: jerryzh168

Differential Revision: D10050905

fbshipit-source-id: b99233e21c9f4275b0db6e76740462e5430ce152
2018-09-27 18:40:23 -07:00
Edward Yang
00c6fb16e7 Move ExtendTo to caffe2::Tensor from TensorImpl
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12089

Reviewed By: jerryzh168

Differential Revision: D10050859

fbshipit-source-id: 843067aacfa2a519657220bc39a0f499582a48a4
2018-09-27 18:40:21 -07:00
Edward Yang
6a2dbc9808 Rename TensorImpl::GetDeviceType to device_type, and properly test if is_variable
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12087

Reviewed By: jerryzh168

Differential Revision: D10050781

fbshipit-source-id: 0b6c9d7caf3b1000691f86fcc7f2ef203936a29f
2018-09-27 18:40:19 -07:00
Edward Yang
c5fc2f1105 Merge UndefinedTensorImpl.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11972

Reviewed By: gchanan, Yangqing, jerryzh168

Differential Revision: D9995633

fbshipit-source-id: 6b4645c9d4bb0bc4301cd4bcfa76cf85331b8379
2018-09-27 18:40:16 -07:00
Edward Yang
f6abd16a9d Merge TensorImpl. (#11971)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11971

- Switched TensorImpl::data<T>() to use Storage::unsafe_data<T>() to work
  around an outstanding bug in the Storage::data<T>() implementation
  where it only works on Ts which are valid ScalarType
- Qualify a bunch of identifiers which still live in caffe2:: namespace
- strides returns an IntList now
- s/update_strides/update_to_contiguous_strides/
- Correctly compute type_id_ for the Storage only constructor from Caffe2.
  This is special cased to only work for CPU and CUDA dense tensors.
- Fix some signed-unsigned comparisons in Caffe2 code (OSS build for
  ATen/core has more restrictive warning tests.)

Reviewed By: jerryzh168

Differential Revision: D9995559

fbshipit-source-id: 9c74032e011189e1c7e9a98d20f2bd1e25ad2e5c
2018-09-27 17:40:44 -07:00
Edward Yang
d7e11e3aae Revert "Move CreateContext to global registry (#11688)" (#12049)
Summary:
This reverts commit 3ae6ee4ebd.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12049

Differential Revision: D10030954

Pulled By: ezyang

fbshipit-source-id: 6ca9de65b707c5b4c68280fc6f1b8e5ad7251efc
2018-09-25 10:13:43 -07:00
Jerry Zhang
3ae6ee4ebd Move CreateContext to global registry (#11688)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11688

As a first step toward removing the static context (merging it with the allocator), we'll create a
global registry for context constructors, and remove the CreateContext function from the tensor.

Reviewed By: ezyang, dzhulgakov

Differential Revision: D9779821

fbshipit-source-id: 8b239ea50af7a0556fde2382f58f79194f0e3dc1
2018-09-24 17:07:50 -07:00
Christian Puhrsch
a9e6a673ae Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11876

Modern C++ API instead of macros; item() is aligned with the Python frontend. caffe2::Tensor::capacity_nbytes is effectively unused and confusing w.r.t. caffe2::Tensor::nbytes().

codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCByte   "item<uint8_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCLong   "item<int64_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCInt    "item<int32_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCDouble "item<double>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCFloat  "item<float>"

codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toByteData   "data<uint8_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toLongData   "data<int64_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toIntData    "data<int32_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toDoubleData "data<double>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toFloatData  "data<float>"

codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCByte   "item<uint8_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCLong   "item<int64_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCInt    "item<int32_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCDouble "item<double>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCFloat  "item<float>"

codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toByteData   "data<uint8_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toLongData   "data<int64_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toIntData    "data<int32_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toDoubleData "data<double>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toFloatData  "data<float>"

codemod -d caffe2 --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCComplexDouble "item<std::complex<double>>"

codemod -d tc           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCFloat  "item<float>"

Reviewed By: ezyang

Differential Revision: D9948572

fbshipit-source-id: 70c9f5390d92b82c85fdd5f8a5aebca338ab413c
2018-09-24 10:40:10 -07:00
Christian Puhrsch
a6630e25af Remove many caffe2::TIndex and replace them with int64_t (#11943)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11943

See title

Reviewed By: ezyang

Differential Revision: D9992645

fbshipit-source-id: e8f80d6ea762971513e5e8072975ceea53e1f11a
2018-09-22 18:11:04 -07:00
Edward Yang
48c8adfe1b Turn storage on UndefinedTensorImpl into nullptr. (#11738)
Summary:
I also fix a bug that crept in while we had incorrect semantics where UndefinedTensorImpl was a CPU tensor, and thus some moves which shouldn't have been legal didn't crash. Moving out the Tensor* also moved out the Tensor* in the blob, and it's not supported to store an undefined tensor in a blob.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/11738

Reviewed By: gchanan

Differential Revision: D9847859

fbshipit-source-id: db6be0f76a8e6526a89fd0e87b6a23b9cc820c8d
2018-09-21 08:24:57 -07:00
Christian Puhrsch
bd43d64dd5 Add strides to Tensor (#11763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11763

baseline-std vector
```
============================================================================
caffe2/caffe2/fb/benchmarks/core_overhead_benchmark.cc    relative  time/iter  iters/s
============================================================================
TensorConstructionDestruction                                6.74us  148.26K
TensorShareData                                              5.89us  169.78K
TensorShareExternalPointer                                   1.01us  994.35K
TensorReallocation                                           2.46us  405.78K
============================================================================
============================================================================
caffe2/caffe2/fb/benchmarks/core_overhead_benchmark.cc    relative  time/iter  iters/s
============================================================================
TensorConstructionDestruction                                7.50us  133.27K
TensorShareData                                              7.07us  141.38K
TensorShareExternalPointer                                   1.05us  955.19K
TensorReallocation                                           2.55us  391.62K
============================================================================

```

baseline-smallvector
```
============================================================================
caffe2/caffe2/fb/benchmarks/core_overhead_benchmark.cc    relative  time/iter  iters/s
============================================================================
TensorConstructionDestruction                                6.56us  152.34K
TensorShareData                                              5.84us  171.32K
TensorShareExternalPointer                                 962.49ns    1.04M
TensorReallocation                                           2.32us  431.73K
============================================================================
============================================================================
caffe2/caffe2/fb/benchmarks/core_overhead_benchmark.cc    relative  time/iter  iters/s
============================================================================
TensorConstructionDestruction                                6.29us  159.04K
TensorShareData                                              5.73us  174.39K
TensorShareExternalPointer                                 914.90ns    1.09M
TensorReallocation                                           2.29us  435.80K
============================================================================
```

Reviewed By: ezyang

Differential Revision: D9694097

fbshipit-source-id: c462e770a4b40e640d8c9d38e0ae7036a4e6e84a
2018-09-17 22:09:40 -07:00