Commit Graph

163 Commits

Sebastian Messmer
17f05ad5e5 Moving at::Tensor into caffe2::Tensor without bumping refcount (#19388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19388

The old implementation forced a refcount bump when converting at::Tensor to caffe2::Tensor.
Now, it is possible to move it without a refcount bump.

Reviewed By: dzhulgakov

Differential Revision: D14986815

fbshipit-source-id: 92b4b0a6f323ed38376ffad75f960cad250ecd9b
2019-04-18 14:13:26 -07:00
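A minimal sketch of the call-site difference, assuming the explicit interop constructor between at::Tensor and caffe2::Tensor (header path assumed):

```
#include <utility>
#include "caffe2/core/tensor.h"  // caffe2::Tensor and the at::Tensor interop (path assumed)

// Copying conversion: shares the TensorImpl, so the refcount is bumped.
caffe2::Tensor ToCaffe2Copy(const at::Tensor& t) {
  return caffe2::Tensor(t);
}

// After #19388: the at::Tensor handle can be consumed, no refcount bump.
caffe2::Tensor ToCaffe2Move(at::Tensor t) {
  return caffe2::Tensor(std::move(t));
}
```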
Sebastian Messmer
db611b7caf Delete C10Tensor (#19328)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19328

Plans changed and we don't want this class anymore.

Reviewed By: dzhulgakov

Differential Revision: D14966746

fbshipit-source-id: 09ea4c95b352bc1a250834d32f35a94e401f2347
2019-04-17 00:02:27 -07:00
Will Feng
c7b5a8a876 Change is_variable() to check existence of AutogradMeta, and remove is_variable_ (#19139)
Summary:
Currently, a TensorImpl's `is_variable_` is true if and only if the TensorImpl has AutogradMeta. This PR unifies these two concepts by removing `is_variable_` and changing `is_variable()` to check for the existence of AutogradMeta instead.

Removing `is_variable_` is part of the work in Variable/Tensor merge.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19139

Differential Revision: D14893339

Pulled By: yf225

fbshipit-source-id: ceb5e22c3c01f79b5d21d5bdbf4a7d1bc397796a
2019-04-11 14:03:33 -07:00
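A minimal sketch of the idea (simplified names, not the actual TensorImpl source): the separate flag disappears and the query is derived from whether autograd metadata is attached.

```
#include <memory>

struct AutogradMeta {};  // stand-in; the real struct holds grad_fn, requires_grad, etc.

struct TensorImplSketch {
  std::unique_ptr<AutogradMeta> autograd_meta_;

  // Before: returned a separately tracked is_variable_ flag.
  // After:  a tensor "is a variable" exactly when it carries AutogradMeta.
  bool is_variable() const {
    return autograd_meta_ != nullptr;
  }
};
```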
Dmytro Dzhulgakov
3408d9de20 Clean up Storage/StorageImpl constructors (#16948)
Summary:
Small cleanup while doing https://github.com/pytorch/pytorch/pull/16857:

- rename the C2 constructors to create_legacy
- remove duplicated constructors
- make the resizable flag non-default
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16948

Differential Revision: D14062755

Pulled By: dzhulgakov

fbshipit-source-id: 3b7b4ec9cdf67d2628cccc001156e040006b673e
2019-02-13 22:58:32 -08:00
Edward Yang
4404762d7d Rename IntList to IntArrayRef. (#16751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16751

This was made more complicated by the fact that ivalue::IntList
is a thing.  So I had to fix all of the sites where we were referring
to IValue post facto.

The following codemods were run, in this order:

```
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntList IntArrayRef
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntArrayRef::create IntList::create
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in ivalue::IntArrayRef ivalue::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in Tag::IntArrayRef Tag::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in isIntArrayRef isIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in toIntArrayRef toIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'Shared<IntArrayRef>' 'Shared<IntList>'
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'intrusive_ptr<IntArrayRef>' 'intrusive_ptr<IntList>'
```

Some manual fixups were done afterwards; they can be reviewed separately
at https://github.com/pytorch/pytorch/pull/16752

Reviewed By: dzhulgakov

Differential Revision: D13954363

fbshipit-source-id: b5c40aacba042402155a2f5a229fa6db7992ac64
2019-02-05 14:54:34 -08:00
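For context, `IntArrayRef` is the existing non-owning `at::ArrayRef<int64_t>` view; a hypothetical call site after the rename looks like:

```
#include <ATen/ATen.h>

// Hypothetical helper: the renamed type in a signature. The view binds to
// braced lists, std::vector<int64_t>, and other contiguous int64_t spans.
at::Tensor MakeOnes(at::IntArrayRef sizes) {
  return at::ones(sizes);
}

// Usage: auto t = MakeOnes({2, 3});
```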
Jerry Zhang
cb9740a608 Tensor method rename size()->numel() - 1/3
Summary: Codemod generated with clangr shard mode, 25 files per diff.

Reviewed By: dzhulgakov

Differential Revision: D13944296

fbshipit-source-id: 67e97c2cf45889d25f2cb3e2203cecba03c8a3aa
2019-02-04 23:33:17 -08:00
Dmytro Dzhulgakov
a061e3fd77 Back out "Revert D13596031: Improve c2-aten tensor interop and add proper testing" (#16514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16514

Original commit changeset: dc371697f14b
Relanding https://github.com/pytorch/pytorch/pull/15860 - the problem was that layer_norm was using at::empty, which is not yet available on mobile

Reviewed By: ezyang

Differential Revision: D13861480

fbshipit-source-id: e2116da32bc117175c96b9151b1beba9b31eff36
2019-01-31 13:38:55 -08:00
Edward Yang
3b337e7892 Revert D13596031: Improve c2-aten tensor interop and add proper testing
Differential Revision:
D13596031

Original commit changeset: d20b601e06ba

fbshipit-source-id: dc371697f14b3893a9164380a39e7a49d8d68ecf
2019-01-29 07:14:57 -08:00
Dmytro Dzhulgakov
5e21e0fe75 Improve c2-aten tensor interop and add proper testing (#15860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15860

A few changes (which are hard to split into separate diffs, so they're together):
- make conversions explicit (since they can throw, this avoids surprises)
- fix the tensor legacy dispatch not being initialized when a tensor is created on the C2 side
- add a bunch of invariants to enforce

Reviewed By: ezyang

Differential Revision: D13596031

fbshipit-source-id: d20b601e06ba47aeff2f6e8e15769840e2d46108
2019-01-28 23:41:50 -08:00
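A hedged sketch of what "explicit" means at a call site (helper names hypothetical): implicit conversions no longer compile, so cross-framework copies are visible where they happen.

```
// Forward declaration of a hypothetical consumer of caffe2 tensors.
void TakesC2(const caffe2::Tensor& t);

void Example(const at::Tensor& at_t) {
  // TakesC2(at_t);               // would not compile: the conversion is explicit
  TakesC2(caffe2::Tensor(at_t));  // intent is spelled out, and the ctor may throw
}
```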
Jerry Zhang
e866bc7c88 Remove dims() in caffe2::Tensor (#16356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16356

As titled.

Reviewed By: dzhulgakov

Differential Revision: D13813197

fbshipit-source-id: 68c0fb43404536f622422c51949c819d8a037aa5
2019-01-28 12:42:42 -08:00
Jerry Zhang
539894d70a Remove caffe2::ShareData (#16139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16139

Original commit changeset: 4b15a4c62995

Reviewed By: dzhulgakov

Differential Revision: D13677464

fbshipit-source-id: 1a644a88fac02b44feebac48ccc01bc72cc47edb
2019-01-25 15:39:11 -08:00
Edward Yang
45602ce9a2 Delete Tensor::swap(), replace with pointer swap (#12730)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12730

i-am-not-moving-c2-to-c10

Reviewed By: smessmer

Differential Revision: D10415430

fbshipit-source-id: 8a2ce8611c5fa77bbbd73fb6788c1baa3b370f07
2019-01-25 08:25:07 -08:00
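A hedged sketch of the replacement pattern, assuming caffe2::Tensor stays a cheap handle around an intrusive pointer:

```
#include <utility>

// Instead of a dedicated a.swap(b) member, swap the handles themselves;
// only the underlying pointers move, no tensor data is copied.
void SwapTensors(caffe2::Tensor& a, caffe2::Tensor& b) {
  std::swap(a, b);
}
```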
Will Feng
2a70f24cce Add thread-local guard: at::AutoNonVariableTypeMode (#15939)
Summary:
This PR adds thread-local guard (`at::AutoNonVariableTypeMode`) to make sure that in VariableType.cpp the operations on baseType still dispatch to non-Variable type, even if the parameters will become Variables after the Tensor/Variable merge. We achieve this by making `legacyTensorType()` and `getType()` check the `at::AutoNonVariableTypeMode` guard to decide whether to return non-Variable type for a variable.

This is part of the VariableImpl/TensorImpl merge work: https://github.com/pytorch/pytorch/issues/13638.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15939

Reviewed By: ezyang

Differential Revision: D13640980

Pulled By: yf225

fbshipit-source-id: d12c2543822958558d7d70d36c50999a5eb8783f
2019-01-24 14:33:03 -08:00
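A minimal usage sketch of the guard (header location assumed for this era of the codebase):

```
// Within the guard's scope, dispatch resolves to the non-Variable ("base")
// type even when the operands are Variables.
at::Tensor AddWithoutAutograd(const at::Tensor& a, const at::Tensor& b) {
  at::AutoNonVariableTypeMode non_var_guard(true);  // thread-local, RAII
  return a.add(b);  // dispatched to the baseType kernel while the guard is alive
}
```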
Edward Yang
07a090247a Change data() accessor in Caffe2 to return non-const pointer. (#16176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16176

This makes PyTorch's and Caffe2's data() methods line up.
Historically, PyTorch made no distinction between tensors
with const or non-const data, and thus provided a
non-const pointer from its data() member.  Changing that API to
return a const pointer would break all mutable code, whereas
changing the Caffe2 API to return a non-const pointer doesn't break
any code, *except* for code which required an exact match
on const-ness (e.g., in template arguments).  Since the latter
is less disruptive, we've opted for it here.

The few places downstream that broke due to this are fixed
in this patch.

Reviewed By: smessmer

Differential Revision: D13742916

fbshipit-source-id: baa4b4544cfdf7c1f369f4d69a1e0d5953c1bd99
2019-01-23 13:55:24 -08:00
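A hedged sketch using the templated accessor (the const-qualification of the returned pointer is the point of the change):

```
// With this change, data<T>() yields a mutable pointer even through a
// const-qualified handle, matching the PyTorch behavior described above.
void FillOnes(const caffe2::Tensor& t) {
  float* p = t.data<float>();  // previously const float*
  for (int64_t i = 0; i < t.numel(); ++i) {
    p[i] = 1.0f;
  }
}
```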
Jerry Zhang
da578b7dcf Add defined() to caffe2::Tensor (#16125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16125

Add defined() method to check whether the Tensor is defined.

Reviewed By: ezyang

Differential Revision: D13719222

fbshipit-source-id: ff8efef2159ed1026bd16acaea40c768a1e20a47
2019-01-18 11:03:36 -08:00
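A small sketch of how defined() fits with the undefined-Tensor pattern used elsewhere in this series:

```
// A default-constructed handle carries no TensorImpl; defined() tells it
// apart from a real (possibly zero-element) tensor.
bool NeedsInit(const caffe2::Tensor& t) {
  return !t.defined();
}
```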
Dmytro Dzhulgakov
aaff2fecda Remove caffe2::Tensor copy constructor (#15416)
Summary:
Based on offline discussion, it should be less surprising to the users of existing code. Thus caffe2::Tensor is now a move-only class (as it used to be); explicit calls to UnsafeSharedInstance() are necessary to get shared_ptr behavior.

This change also identified a few places that misused the copy constructor; those are now fixed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/15416

Reviewed By: Yangqing

Differential Revision: D13524598

fbshipit-source-id: aea12d6dff77342606fa88ce4ddddbff266245a7
2019-01-18 00:31:56 -08:00
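A hedged sketch of the resulting ownership rules (UnsafeSharedInstance() per the description above):

```
#include <utility>

void Example(caffe2::Tensor t0) {
  caffe2::Tensor moved = std::move(t0);                  // ok: Tensor is move-only again
  // caffe2::Tensor copied = moved;                      // no longer compiles: copy ctor removed
  caffe2::Tensor shared = moved.UnsafeSharedInstance();  // explicit shared handle, same storage
}
```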
Edward Yang
5b2d30ec85 Revert D12812029: [pt1][tensor] Remove deprecated caffe2::Tensor APIs
Differential Revision:
D12812029

Original commit changeset: ea0c3dd882be

fbshipit-source-id: d5bb4cbb1d7c9be08789599a7db0fb3313f3dbc4
2019-01-16 14:53:20 -08:00
Jerry Zhang
5353847b19 Remove deprecated caffe2::Tensor APIs (#15814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15814

The plan is to remove the APIs we want to deprecate one by one and make sure it still builds in Sandcastle and OSS CI.

Reviewed By: ezyang

Differential Revision: D12812029

fbshipit-source-id: ea0c3dd882bec95fcd4507160ebc61f598b6d040
2019-01-15 18:42:04 -08:00
Jerry Zhang
5e72e99c86 Remaining Tensor API fixes - dims() -> sizes() (#15743)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15743

Remaining fixes so that D12812029 will compile

Reviewed By: dzhulgakov

Differential Revision: D13535559

fbshipit-source-id: 2c8b3403570c8c35ac8efe2d827233abc0e6e0d1
2019-01-15 18:42:02 -08:00
Jerry Zhang
6371bc76a9 Back out "[pt1][tensor] Remove caffe2::ShareData" (#15983)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15983

Original commit changeset: 6e4275d02f4c

Reviewed By: supertopher, Yangqing

Differential Revision: D13644123

fbshipit-source-id: 4b15a4c62995c0e68aad58465600409e302e6504
2019-01-12 07:07:22 -08:00
Jerry Zhang
fd0ed2e298 Tensor reinitialization codemod - 4/5 (#15967)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15967

Codemod generated with clangr shard mode, 25 files per diff.
To eliminate partially initialized Tensors, we split the initialization of local Tensor variables into two steps: first declare an uninitialized Tensor, then
call `ReinitializeTensor` to initialize it.
Motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: smessmer

Differential Revision: D13586735

fbshipit-source-id: eae2d79e1107a2e813ce3809e690af4706aaa9ca
2019-01-11 16:41:19 -08:00
Sebastian Messmer
da7468853a caffe2::Tensor::is_same() (#15407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15407

Don't ask the tensor for its intrusive pointer if we just want to check if two tensors are the same.
This mirrors ATen APIs.

Reviewed By: dzhulgakov

Differential Revision: D13520389

fbshipit-source-id: 681317f36f480ab60e532bb08a073f98f39770fd
2019-01-10 16:22:25 -08:00
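A one-liner sketch of the intended use:

```
// Identity check (same TensorImpl), not value equality, and no refcount
// traffic on the underlying intrusive pointer.
bool SameImpl(const caffe2::Tensor& a, const caffe2::Tensor& b) {
  return a.is_same(b);
}
```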
Sebastian Messmer
8ac55a6812 Convert caffe2/aten Tensors to/from c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14820

Reviewed By: dzhulgakov

Differential Revision: D13348044

fbshipit-source-id: 95008e6ead3cfc478696b1c203769241d4cf6ca8
2019-01-08 20:31:42 -08:00
Sebastian Messmer
31d7c933af Implement c10::Tensor (#14819)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14819

This is a minimal wrapper for a c10::TensorImpl,
maybe destined for greatness later when we move caffe2::Tensor or at::Tensor into c10.

Reviewed By: dzhulgakov

Differential Revision: D13348039

fbshipit-source-id: 874f515358e94f35dc7a4c3e55b35fde59c51ff1
2019-01-08 20:31:40 -08:00
Jerry Zhang
ede1f4ad05 Remove caffe2::ShareData (#15418)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15418

Previously we were using Resize + ShareData.
Instead, we create a function on Tensor that clones itself with the same storage.

Suppose we want `t` to `ShareData` with `t0`. Previously:
```
Tensor t(dims, CPU);
t.Resize(t0.sizes());
t.ShareData(t0);
```
Now:
```
Tensor t = t0.Alias();
```

Reviewed By: dzhulgakov

Differential Revision: D13507609

fbshipit-source-id: 6e4275d02f4c3356cbce91127f1b01111dc86b9f
2019-01-08 11:01:56 -08:00
Roy Li
50fbf79451 test basic tensor interop
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12249

Differential Revision: D13469356

Pulled By: li-roy

fbshipit-source-id: b49748462aa44ac34b8ce79783f2c895a537a232
2018-12-27 17:04:00 -08:00
Sebastian Messmer
bb8ee2de0f Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14858

This diff doesn't change logic but just takes the existing code and moves it to caffe2::Tensor

Reviewed By: ezyang

Differential Revision: D13365817

fbshipit-source-id: bc73b27a793602cb14200dcdf357aa63233da43c
2018-12-13 18:41:24 -08:00
Sebastian Messmer
070f33f154 Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14656

This diff doesn't move it yet, but prepares it to be moved, i.e. removes all access to class internals.

dzhulgakov: Please comment on whether you think it still makes sense to land this, even though it's not blocking anymore since we're going to move at::CopyBytes anyhow.

ezyang: There's some changes in the implementation, especially handling undefined dest tensors. Please review carefully.

Reviewed By: ezyang

Differential Revision: D13287688

fbshipit-source-id: 17800ca8a79ab1633f23be58d96f99a160d8ed24
2018-12-13 18:41:23 -08:00
Sebastian Messmer
3fa53da61a Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818

Reviewed By: ezyang

Differential Revision: D13348042

fbshipit-source-id: 11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
2018-12-11 21:01:45 -08:00
Sebastian Messmer
9e9e87c19e Move TensorImpl to c10 (yay!)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14795

Reviewed By: ezyang

Differential Revision: D13336856

fbshipit-source-id: 5375d0e42312ff7564f4df06210a5e49542d59e3
2018-12-11 21:01:38 -08:00
Sebastian Messmer
086a37876b Fix include paths for TensorOptions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14747

Reviewed By: ezyang

Differential Revision: D13318645

fbshipit-source-id: f5ba77a93f6019fbf5faffb47a2837c95fad474d
2018-12-07 16:23:44 -08:00
Bram Wasti
83ad52634a Add FunctionSchema based Operator Registry (#13789)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13789

This enables creation of operators with FunctionSchema and IValue

Reviewed By: smessmer

Differential Revision: D13008791

fbshipit-source-id: 151efc88ac315f4a0ab0171a99774caaf767ef1e
2018-12-05 17:20:24 -08:00
Sebastian Messmer
ff7deb95d7 Back out "Fix include paths for TensorOptions, DefaultTensorOptions, OptionsGuard" (#14744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14744

Original commit changeset: d236d5351ecf

Reviewed By: suo

Differential Revision: D13318596

fbshipit-source-id: 55f1e9472d05fb5a9c47dc82c32e9a66b5e4308c
2018-12-04 08:59:07 -08:00
Sebastian Messmer
d063c9c330 Fix include paths for TensorOptions, DefaultTensorOptions, OptionsGuard
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14647

Reviewed By: ezyang

Differential Revision: D13283497

fbshipit-source-id: d236d5351ecf7ab9712a55e9ef12d8bba48eb53f
2018-12-03 21:53:26 -08:00
Dmytro Dzhulgakov
da9e49e586 Remove Context dependency from Tensor class (#14269)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14269

Removes the reference to Context proper and instead adds a bool argument for async copy (the same as `copy_`).

For CopyFrom, I haven't tweaked all call sites yet. Instead I rely on a terrible hack: a pointer to a context is implicitly converted to bool when passed, haha :) It's not good code, and I propose to fix it in a follow-up diff (maybe using clangr tooling).

Reviewed By: ezyang

Differential Revision: D13117981

fbshipit-source-id: 7cb1dc2ba6a4c50ac26614f45ab8318ea96e3138
2018-11-28 15:45:38 -08:00
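A hedged sketch of the new call shape (the flag's default is assumed):

```
// The Context pointer is gone; the second argument is just "async".
void CopySync(caffe2::Tensor& dst, const caffe2::Tensor& src) {
  dst.CopyFrom(src, /*async=*/false);  // blocking copy, possibly across devices
}
```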
Sebastian Messmer
0e93a03a3a Fix include paths for intrusive_ptr (#13692)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13692

This now lives in c10/util, not ATen/core anymore.

Reviewed By: ezyang

Differential Revision: D12937091

fbshipit-source-id: ea2d420a15e7941a38d0b4c75e20ca18437c73f8
2018-11-21 23:08:50 -08:00
Jerry Zhang
1c2ed4eb23 Tensor construction: combine Resize+mutable_data - 1/4 (#13942)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13942

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: smessmer

Differential Revision: D13054770

fbshipit-source-id: a9e86e5dfcb4f7cebf5243e1d359fad064561bed
2018-11-19 15:33:50 -08:00
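A hedged sketch of the migrated pattern, with `op` standing in for the caffe2 Operator that owns Output() (the sizes-plus-options overload this codemod targets is assumed):

```
#include <ATen/ATen.h>  // at::IntArrayRef, at::dtype (header assumed)

template <class Op>
float* AllocFloatOutput(Op* op, at::IntArrayRef sizes) {
  // Old two-step form left the tensor briefly untyped/unallocated:
  //   auto* y = op->Output(0); y->Resize(sizes); return y->mutable_data<float>();
  auto* y = op->Output(0, sizes, at::dtype<float>());  // sized and typed in one call
  return y->template mutable_data<float>();
}
```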
Lin Yang
17b2d2d373 fix TensorPrinter when tensor has 0 size. (#13986)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13986

If total_count == 0, it crashes on:

  values_stream << tensor_data[total_count - 1];

Reviewed By: jerryzh168

Differential Revision: D13066438

fbshipit-source-id: b7a2d681ca0cf5b68d78872c94fac6de9c5de2dc
2018-11-15 07:51:13 -08:00
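A minimal sketch of the guard implied by the fix:

```
#include <cstdint>
#include <sstream>

// Only print the trailing element when the tensor actually has elements.
void PrintLast(std::ostringstream& values_stream,
               const float* tensor_data,
               int64_t total_count) {
  if (total_count > 0) {
    values_stream << tensor_data[total_count - 1];
  }
}
```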
Jerry Zhang
0f59dcb317 Remove partially initialized Tensor + CopyFrom (#13629)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13629

Previously we had a Tensor with an initialized storage (and therefore a known device_type), and
we would then call CopyFrom on it to initialize the sizes and data.

We want to eliminate partially initialized Tensors by replacing the pattern of calling CopyFrom on a partially initialized Tensor with either an undefined Tensor plus an initialization API (1)(3), or by combining all of the initialization into the same step (2).

1. Member variable initialization + CopyFrom
Previously we had a tensor initialized with a device_type and then used CopyFrom to populate its contents; now we remove the partial initialization by making the original member variable an undefined Tensor and using ReinitializeFrom to copy from another Tensor.

2. Output + CopyFrom
Previously, we first got a tensor with a device_type and then called CopyFrom with another Tensor.
We changed this by combining the two operations into OperatorBase::OutputTensor.

3. Output + custom functions
An example can be found in the TransformGPU function.
In this case we move the part that initializes the tensor out of the function, and do it explicitly outside so that we can reuse the Output functions to make a fully initialized Tensor.

Note that to keep the original semantics, both APIs have a caching effect based on device_type: we only create a new Tensor object when the device_type does not match or the Tensor is undefined; otherwise, we reuse the original Tensor object.

Reviewed By: dzhulgakov

Differential Revision: D12848855

fbshipit-source-id: 37bb4ddc1698ebea533b73006eeb1218faa8ddf8
2018-11-07 11:31:03 -08:00
Jerry Zhang
ebaabfbbd5 ReinitializeTensor function for refactoring Tensor as member variable (#13147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13147

We want to refactor
```
class A {
 public:
  void func() {
    x_.Resize(dims);
    auto* data = x_.mutable_data<T>();
  }

  Tensor x_{CPU};  // always carries a device, even before first use
};
```

to

```
class A {
 public:
  void func() {
    ReinitializeTensor(&x_, dims, at::dtype<T>().device(CPU));
    auto* data = x_.mutable_data<T>();
  }

  Tensor x_;  // undefined Tensor until ReinitializeTensor is called
};
```

This diff adds the ReinitializeTensor function.

Reviewed By: dzhulgakov

Differential Revision: D10861298

fbshipit-source-id: 9f432297d07a4890e29bb68436364e0b2e2545e7
2018-11-05 19:13:55 -08:00
Dmytro Dzhulgakov
fdf34c8da8 Kill more weird constructors on Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13433

Reviewed By: jerryzh168

Differential Revision: D12874599

fbshipit-source-id: 0c262fda72cbc4f3ea80df790cc8e95140bdc7e0
2018-11-04 16:54:49 -08:00
Dan Nguyen
77b8aade58 Revert D12809293: Kill more weird constructors on Tensor
Differential Revision:
D12809293

Original commit changeset: 5eb663fe8182

fbshipit-source-id: 709a5378fdbbb3fcfaacef8fc48b6530afbbc28f
2018-10-30 16:01:51 -07:00
Dmytro Dzhulgakov
ec754adb14 Kill more weird constructors on Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13190

Reviewed By: ezyang

Differential Revision: D12809293

fbshipit-source-id: 5eb663fe818276d97cf31d1ed1e7f025d2b69851
2018-10-30 10:25:40 -07:00
Dmytro Dzhulgakov
3c78cc6c2b Remove Tensor(const Tensor&, BaseContext*, type)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13204

Reviewed By: ezyang

Differential Revision: D11915764

fbshipit-source-id: baf883b3095bc9d5adf0b942eb874eaa7c1f45e5
2018-10-29 13:57:43 -07:00
Jerry Zhang
eea2ee6d29 Renaming size() to numel() - 1/17
Summary: Codemod generated with clangr shard mode, 25 files per diff

Reviewed By: li-roy

Differential Revision: D10866237

fbshipit-source-id: 020fcfdf52083430c5b674eda8e07ad3adfcc838
2018-10-26 15:36:59 -07:00
Jerry Zhang
f6ccb6a0f9 bring caffe2::Tensor API closer to aten/pytorch (#13134)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13134

For tensor, we plan to do the following renaming:
```
* t.ndim() → t.dim()
* t.size() → t.numel()
* t.dims() → t.sizes()
* t.meta() → t.dtype()
* t.dim(d) → t.size(d)
```
This diff adds the new APIs to caffe2::Tensor so we can start the codemod;
we'll remove the old APIs after the codemod.

Reviewed By: ezyang

Differential Revision: D10856028

fbshipit-source-id: 1638997e234d7b3113ef8be65a16246f902273c7
2018-10-25 15:45:09 -07:00
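A hypothetical call site during the codemod window, with the new (ATen-style) spellings in code and the old ones in comments:

```
void Inspect(const caffe2::Tensor& t) {
  auto nd    = t.dim();     // was t.ndim()
  auto n     = t.numel();   // was t.size()
  auto shape = t.sizes();   // was t.dims()
  auto dt    = t.dtype();   // was t.meta()
  auto d0    = t.size(0);   // was t.dim(0)
  (void)nd; (void)n; (void)shape; (void)dt; (void)d0;  // sketch only
}
```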
Edward Yang
956e620c64 Eliminate numel == -1 state, delete Storage-only constructor (#12656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12656

I originally wanted to do this in two steps, but deleting the Storage-only
constructor also changes the default numel state (which breaks tests),
so it was easiest to do it all in one go.

- I still need a way to compute the correct TensorTypeId for all of the
  Caffe2 constructors; rather than hard-code it, I wrote a function
  in at::detail::computeTensorTypeId() to do this calculation.  Maybe
  this function could be used more widely, but for now, it's used
  by Caffe2 only.
- Added a pile more TensorTypeIds for all of Caffe2's supported DeviceTypes
- Because I still can't put arbitrary TypeMeta in TensorOptions, the
  TensorTypeId() calculation doesn't respect dtype.  For now, this is
  not a problem, but this might block work to split non-POD dtypes
  into their own TensorTypeId.

Reviewed By: li-roy

Differential Revision: D10380678

fbshipit-source-id: 10c5d12020596fc9f27d5579adffad00513af363
2018-10-25 08:44:05 -07:00
Jerry Zhang
dd7c2d4284 Change the function signature for caffe2::empty (#13015)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13015

As titled.

Reviewed By: ezyang

Differential Revision: D10469310

fbshipit-source-id: f4621fe5d17bb4663192860f81effe6bdfe21bea
2018-10-24 13:14:24 -07:00
Jerry Zhang
353fdefdd6 dims() -> sizes() (caffe2/core) (#13014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13014

Tensor method renaming using clangr

Reviewed By: ezyang

Differential Revision: D10467556

fbshipit-source-id: 7d7eaf5fc59bbb493c057d5b8bfdda03b140c97e
2018-10-24 12:49:28 -07:00
Edward Yang
99bc541b5b size_from_dim(0) is like numel() but worse. Don't do it. (#12729)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12729

This may have a dependency on D10380678 if size_from_dim(0)
was required because numel() used to return -1 in some cases.
This is no longer true.

Reviewed By: li-roy, dzhulgakov

Differential Revision: D10415069

fbshipit-source-id: 39f46f56249ecaf3533f62a0205b3a45d519d789
2018-10-18 18:06:37 -07:00
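A small sketch of the preferred spelling now that numel() can no longer be -1:

```
#include <cstdint>

int64_t TotalElements(const caffe2::Tensor& t) {
  // return t.size_from_dim(0);  // equivalent, but discouraged by this commit
  return t.numel();
}
```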