Commit Graph

157 Commits

Author SHA1 Message Date
Richard Zou
cedca377bd Re-enable TestNamedTensor.test_big_tensor_repr (#29407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29407

Fixes https://github.com/pytorch/pytorch/issues/27753.

The bug was that random tensors print subtly differently, which causes
the "names=" tag to appear in slightly different places: sometimes it is
on the same line as the data, sometimes on a separate line.

For this test, we wanted to know the following:
- printing a big named tensor's repr doesn't crash
- a big named tensor's repr shows the names

This PR changes the test to check those two things.

Test Plan: - run existing tests

Differential Revision: D18428657

Pulled By: zou3519

fbshipit-source-id: 6bcf247ffba010520878a175e766a496028f87d9
2019-11-11 13:32:32 -08:00
Richard Zou
a248ef7b9c fix autograd support for torch.mean(tensor, dimname) (#29199)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29199

Previously, we called `native::mean_cpu_gpu` inside `mean(Tensor, Dimname)`;
`native::mean_cpu_gpu` is not supported by autograd. This PR replaces
`native::mean_cpu_gpu` with `at::mean(Tensor, int)` so that the dimname
overload can piggyback off of autograd support for `at::mean(Tensor,
int)`.

Also added tests (those didn't exist before) for autograd support for
named tensor reduction functions.
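For illustration, a minimal sketch of what this fix enables (not from the PR; the expected values in the comments follow the description above):

```
import torch

x = torch.randn(2, 3, names=('N', 'C'), requires_grad=True)
y = x.mean('C')        # Dimname overload, now routed through at::mean(Tensor, int)
y.sum().backward()     # autograd works because the overload defers to the int overload
print(y.names)         # expected: ('N',)
print(x.grad.shape)    # expected: torch.Size([2, 3])
```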

Test Plan: - `python test/test_namedtensor.py -v`

Differential Revision: D18334617

Pulled By: zou3519

fbshipit-source-id: 1714eb3fd93714fe860f208831e8d910f01c1c78
2019-11-06 07:21:30 -08:00
Richard Zou
cb6d9deec6 support for cdist (#29129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29129

cdist(x1, x2) does the following:
- assume x1, x2 are 2-dimensional. Then x1, x2 are each considered to be
a list of vectors.
- The operation returns a matrix that is the pairwise distance between
each vector in x1 and each vector in x2. The matrix has first dimension
size equal to the number of vectors in x1 and second dimension size equal
to the number of vectors in x2.
- cdist also supports arbitrary left-hand broadcastable batch
dimensions. In this case, x1 and x2 are each considered to be a batch
of a list of vectors.

The above leads to the following name inference rule for cdist:
- In the 2D case, propagate x1.names[-2] and x2.names[-2] (because
the final result has size (x1.size[-2], x2.size[-2])).
- in the ND case, unify all the batch dimensions together to produce the
output batch dimensions and then apply the rule for the 2D case.

Furthermore, I moved all of the name checking in the implementation to
occur before name inference because name inference assumes that the
shapes are valid.
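A minimal sketch of the 2D rule (hypothetical dimension names; expected outputs in the comments follow the rule as stated, not verified output):

```
import torch

x1 = torch.randn(5, 3, names=('A', 'F'))   # five length-3 vectors
x2 = torch.randn(7, 3, names=('B', 'F'))   # seven length-3 vectors
d = torch.cdist(x1, x2)
print(d.shape)   # torch.Size([5, 7])
print(d.names)   # expected: ('A', 'B'), i.e. x1.names[-2] and x2.names[-2]
```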

Test Plan: - new test: `pytest test/test_namedtensor.py -v -k "cdist"`

Differential Revision: D18311867

Pulled By: zou3519

fbshipit-source-id: 713d7cdda93c8fe92e7f1bd7f7c5c6e20a8138e3
2019-11-05 07:24:23 -08:00
Richard Zou
71be5fe54e add support for {ones,zeros,full,rand,randn}_like ops (#28981)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28981

This PR adds support for calling those functions on named tensors. The
implementation is not the nicest: in the future we have plans to merge
names into TensorOptions, at which point we don't need the extra
branches that check if the tensor has names. Right now, however, these
functions are very useful to have (in particular, ones_like is used by
autograd to generate gradients).
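A minimal sketch of the calls this enables (not from the PR; the prints show whatever names the `_like` functions produce, since the description above does not spell that out):

```
import torch

t = torch.randn(2, 3, names=('N', 'C'))
ones = torch.ones_like(t)     # previously rejected for named tensors
grads = torch.zeros_like(t)
print(ones.shape, ones.names)
print(grads.shape, grads.names)
```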

Test Plan: - Added tests for each of these

Differential Revision: D18270937

Pulled By: zou3519

fbshipit-source-id: 720739ff0474449a960b81728345a4250becbfc3
2019-11-01 11:04:42 -07:00
Richard Zou
0a101bf8d5 Improve name inference API by introducing a TensorName helper struct (#28904)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28904

Motivation
============

Before this PR, a core problem with writing name inference rules was
that each rule needed to handle misalignment by itself. A misaligned
name occurs when we are matching None with a non-None name, but the
non-None name already exists in the first tensor.

For example, `A` is misaligned in `Tensor[A, None] + Tensor[None, A]`.

Each op handled this in a custom way
- align_from_right (used by broadcasting) handles misalignment
- compute_matmul_outnames checks for misalignment across batch and
feature dimensions.

We can actually codify "misalignment" into something more rigorous by
folding it into the definition of `match`, eliminating the special handling
of "misalignment". That is what this PR attempts to do.

Approach
============

Definition: Two names in two tensors *match* if they are equal, or if at
least one of them is a wildcard that can be *refined* to the other name.

With this new definition, to check if two names match, we need to know
about the names list that each name came from to determine if a wildcard
can successfully be *refined* to the other name.

For example, consider the following:
```
tensor: Tensor[A, None]
other: Tensor[None, A]
```
when unifying `tensor.names[-1]` with `other.names[-1]`, we see that
`tensor.names[-1]` is None and `other.names[-1]` is A. Then we check to
see if `tensor.names[-1]` can be refined to `A`; it can't be refined if
there is already an `A` in `tensor.names`.
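To make the misalignment notion concrete, a small runnable sketch (assuming a named-tensor build; the exact error text is not quoted from the PR):

```
import torch

# 'A' is misaligned: matching the wildcard at position -1 of x against the 'A' in y
# would require refining it to 'A', but 'A' already exists elsewhere in x.names.
x = torch.randn(2, 2, names=('A', None))
y = torch.randn(2, 2, names=(None, 'A'))
try:
    x + y
except RuntimeError as err:
    print(err)   # binary-op name inference rejects the misaligned names
```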

Enter `TensorNames`.
A TensorName represents a Dimname associated with some DimnameList
(that came from a Tensor).

`TensorNames` is a list of such TensorName objects with some helper
functions attached.

One can perform the following operations:
- unify two `TensorName` objects
- unify two `TensorNames` objects with right alignment.

Plan
============

This PR changes `compute_matmul_outnames` to use `TensorNames` to
demonstrate how they make writing name inference rules easier. In the
future I'll convert other name inference rules to use `TensorNames` as
well.

Test Plan
- run all tests

Test Plan: Imported from OSS

Differential Revision: D18270666

Pulled By: zou3519

fbshipit-source-id: 3ec96cc957747eb4cfe4ea17fd02ef3d8828a20c
2019-11-01 11:01:48 -07:00
Richard Zou
dd288d3b21 support addcmul, addcdiv (#28975)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28975

TensorIterator supports propagating names, so we just needed to enable
them with `supports_named_tensor: True`.

Test Plan:
- really basic tests to test that each variant (outplace, inplace, out=)
supports named tensors.

Differential Revision: D18252421

Pulled By: zou3519

fbshipit-source-id: ea7fb59dcf8c708b6e45d03b9c2ba27fa6b6ce98
2019-11-01 07:11:58 -07:00
Richard Zou
5da932ad72 Return None correctly from Tensor.names (#28659)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28659

Previously, we would return None from `Tensor.names` without bumping the
refcount. This is a bug; the Python C API requires the developer to
increment the refcount on new references to None. This is because None
is a singleton object and does not automatically have its reference
count bumped when one uses Py_None (which is a pointer to the actual
None singleton object).

See the following for Python documentation on this:
- https://docs.python.org/3/c-api/none.html#c.Py_RETURN_NONE
- https://docs.python.org/3/extending/extending.html#back-to-the-example

Fixes https://github.com/pytorch/pytorch/issues/28646

Test Plan: - New test.

Differential Revision: D18140593

Pulled By: zou3519

fbshipit-source-id: 302a09021b68229e2e7b1b584b3549b30506bdab
2019-10-28 07:01:22 -07:00
Richard Zou
b7b73e43c0 Delete TEST_NAMEDTENSOR; run named tensor tests on all CIs (#27760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27760

There's nothing special about the named tensor tests that requires that
they be run in their own CI job. In this PR we delete the
TEST_NAMEDTENSOR flag that hides named tensor tests from regular jobs.
In the future, we'll delete the named tensor CI job so that we do not
duplicate signals.

Test Plan: - wait for CI

Differential Revision: D17882262

Pulled By: zou3519

fbshipit-source-id: f90c71cb939e53b8ea23f7e2ab95a5c41b8be0e3
2019-10-14 08:01:41 -07:00
Mike Ruberry
f6bda1e07b Removes @default_floating_dtype decorator (#27628)
Summary:
One fewer legacy decorator cluttering the test suite.

Functions relying on this decorator were updated or, in the case of test_sparse, the test suite was put back on double by default.

Note: this PR is blocked on https://github.com/pytorch/pytorch/issues/27599.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27628

Differential Revision: D17896254

Pulled By: mruberry

fbshipit-source-id: 13d460301f50ef4af7a660372432108164c0de1f
2019-10-12 12:39:34 -07:00
Richard Zou
0fbbc7acb4 Allow align_to to take in partially named tensors (#27308)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27308

Currently, `tensor.align_to(*names)` has the restriction that the
`tensor` must be fully named. This doesn't need to be the case: when
using Ellipsis, we "expand the ellipsis to all unmentioned dimensions,
in the order which they appear in the original tensor".

For example, consider `tensor: Tensor[None, None, C]`.

`tensor.align_to(C, None, None)` is ambiguous because the user might
have wanted to switch the order of the None dimensions and there is no
way to specify that using this API. However, `tensor.align_to('C', ...)`
isn't ambiguous: we can select the two unnamed dimensions in the order
in which they appear.
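A minimal sketch of the new behavior (not from the PR; expected outputs in the comments follow the rule described above):

```
import torch

t = torch.randn(2, 3, 4, names=(None, None, 'C'))
out = t.align_to('C', ...)   # '...' grabs the unmentioned (here unnamed) dims in order
print(out.shape)             # expected: torch.Size([4, 2, 3])
print(out.names)             # expected: ('C', None, None)
```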

To actually implement this, we write a brand-new `align_to(names,
ellipsis_idx)` function in c++ that is separate from the regular
`align_to(names)` implementation. Ideally we would support "..." as a
special name in c++ and combine the two implementations; we'll need to
support "..." in c++ in the future but that requires a bit of extra work.
In this PR, Python processes the ellipsis and then calls the correct
overload.

Test Plan: - run tests

Differential Revision: D17745179

Pulled By: zou3519

fbshipit-source-id: 9fed06d224215cfb7efecd8c002604baab3c45e6
2019-10-09 16:28:45 -07:00
Mike Ruberry
7f183a978f Stops common_utils.py from setting the default tensor type (to torch.DoubleTensor) (#27444)
Summary:
This PR stops common_utils.py from setting the default tensor type when it's imported. See issue https://github.com/pytorch/pytorch/issues/27355. This is a frequent source of confusion for test writers.

Many tests relied on this setting (whether they knew it or not), and this PR also updates the test suite to pass without common_utils.py setting the default tensor type. Some larger test files now set the default floating dtype themselves, however. These test files are:

- test_autograd.py
- test_distributions.py
- test_jit.py
- test_nn.py

This is still a significant improvement from today, however. First, these files set the default floating dtype much more clearly than importing it from common_utils. Second, the rest of the test suite no longer sets this globally. Third, this PR is a springboard to updating those tests, too. In particular, as tests are made generic they can be moved away from relying on this global setting.
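For context, the kind of explicit opt-in a test file can make now that the import-time side effect is gone (a sketch, not the PR's exact code):

```
import torch

# A test module that still wants double-precision defaults sets it for itself.
torch.set_default_dtype(torch.double)
print(torch.tensor([1.0]).dtype)   # torch.float64
```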

Notable technical changes in this PR are:

- Significant updates to test_torch.py to make it pass without setting the default floating dtype globally.
- The default_floating_dtype decorator is now defined in common_utils; a couple of versions of this decorator were defined in test files previously.
- test_torch-specific parts of common_utils were refactored into test_torch.
- tensor creation methods in common_utils were updated to accept an optional dtype and device.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27444

Differential Revision: D17795235

Pulled By: mruberry

fbshipit-source-id: 7f77271c0c836e69f183ad9057a2c4b29f09d2e1
2019-10-08 09:52:44 -07:00
Iurii Zdebskyi
293e35a87c Fixed Error message for tensor.align_to (#27221)
Summary:
Fixing this [issue1](https://github.com/pytorch/pytorch/issues/27074) and [issue2](https://github.com/pytorch/pytorch/issues/27073)
Tested via unit tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27221

Differential Revision: D17716235

Pulled By: izdeby

fbshipit-source-id: c7bafd16b469c91924ebc3dba77ca56424d4c93c
2019-10-02 14:19:40 -07:00
iurii zdebskyi
5e776d8a45 Enabled comparison ops with named tensors (#27162)
Summary:
Fixing this [issue](https://github.com/pytorch/pytorch/issues/27077).
Tested via unit tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27162

Differential Revision: D17694187

Pulled By: izdeby

fbshipit-source-id: 939017c91605c89a0e08e0c3f8fe21de93bba95b
2019-10-02 13:35:53 -07:00
Richard Zou
3ad1bbe16a Named tensor support for: index_fill_, index_fill, squeeze, median(Tensor) (#26914)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26914

Also added dimname overloads for index_fill_ and squeeze.
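A minimal sketch of the Dimname overloads added here (expected outputs in the comments, not verified):

```
import torch

t = torch.randn(2, 1, 3, names=('N', 'C', 'H'))
print(t.squeeze('C').names)                            # expected: ('N', 'H')
filled = t.index_fill('H', torch.tensor([0, 2]), 0.0)
print(filled.names)                                    # expected: ('N', 'C', 'H')
```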

Test Plan: - [namedtensor ci]

Differential Revision: D17609136

Pulled By: zou3519

fbshipit-source-id: 29c7ad52ffe24e0b3ad679111fee7a78eca7acdf
2019-09-27 12:28:49 -07:00
Richard Zou
92a2d4232a Named tensor support for: all, any, bitwise_not, cumprod, cumsum, and more (#26815)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26815

This PR adds named tensor support for:
- any, all, `bitwise_not(_)`, cumprod, cumsum, `logical_not`

In addition, it adds smoke tests for a variety of tensor attributes and
fns:
- is_shared, is_signed
- retain_grad, register_hook

Test Plan: - [namedtensor ci]

Differential Revision: D17575905

Pulled By: zou3519

fbshipit-source-id: 37bfa327e68112c5bf0f6bf1f467a527f50fa1c4
2019-09-25 14:56:28 -07:00
Richard Zou
3346759774 Named tensor support for logsumexp, mode, kthvalue, median, min, max (#26563)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26563

This adds name inference rules for pre-existing logsumexp, mode,
kthvalue, and median ops. Also adds overloads so that they can take
`Dimname` dimensions.

There are a lot of min/max overloads. This PR adds name inference to
the following overloads for (both) min and max:
- min(Tensor, int dim)
- min(Tensor, Dimname dim)
- min(Tensor)  (full reduction)
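A minimal sketch of the Dimname and full-reduction overloads (expected outputs in the comments follow the description above):

```
import torch

t = torch.randn(2, 3, names=('N', 'C'))
values, indices = t.min('C')         # Dimname overload
print(values.names, indices.names)   # expected: ('N',) ('N',)
print(t.min().names)                 # full reduction; expected: ()
```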

Test Plan: - new tests and [namedtensor ci]

Differential Revision: D17557050

Pulled By: zou3519

fbshipit-source-id: a099a0ef04ad90d021a38a0668fc44902e1c7171
2019-09-25 07:04:31 -07:00
Richard Zou
60343a82e9 Named tensor support for: atan2, output_nr, detach{_}, requires_grad_ (#26543)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26543

Also adds a test for logical_xor (it already had named tensor support
but there was no test)

Test Plan: - [namedtensor ci]

Differential Revision: D17501403

Pulled By: zou3519

fbshipit-source-id: 49be15580be9fb520e25a8020164e5a599d22d40
2019-09-25 05:23:57 -07:00
Richard Zou
cc4219a799 Wrap dimensions during named inference (#26558)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26558

Previously, name inference was called after dimensions were wrapped.
This PR makes it so that name inference always wraps dimensions so that
it can be called anywhere. Ideally we would only wrap dimensions once,
but many of our operators wrap dimensions in weird places.

Wrapping dimensions in name inference is pretty inexpensive and only
happens for named tensors (name inference does not run on unnamed
tensors.)

Test Plan: - [namedtensor ci]

Differential Revision: D17557049

Pulled By: zou3519

fbshipit-source-id: 68c5636489e233dbf2588ab6ad4e379a6fe4c8ba
2019-09-24 17:47:55 -07:00
Richard Zou
925e51ea7f Add a lot of dimname overloads (#26636)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26636

This PR defines a lot of dimname overloads so that when named tensor
support is added for those operators, we will not have to modify the
autogenerated TensorMethods.h, thereby avoiding potential merge
conflicts in the future.

Overloads were added for the following:
- all
- any
- argmax
- argmin
- cumsum
- cumprod
- index_copy
- kthvalue
- mode
- permute
- squeeze
- index_add
- index_fill
- scatter
- scatter_add
- index_select
- gather
- sort
- argsort

Test Plan: - [namedtensor ci]

Differential Revision: D17522984

Pulled By: zou3519

fbshipit-source-id: eca6dea819ba4e4e43b71b700d5cf09176f00061
2019-09-24 17:03:36 -07:00
Richard Zou
567a1981a7 Fix ellipsis behavior for Tensor.align_to to glob all missing dims (#26648)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26648

Previously:
- `Tensor.align_to(*names)` only works on fully named tensors. In addition, the
desired ordering `names` must not have any None-names.
- `Tensor.align_to(*names)` accepted `...`, but expanded it based on
position, i.e., in `tensor.align_to('N', ..., 'C', 'H')`, `...` expands
to `*tensor.names[1:-2]`. This is wildly incorrect: see the following
concrete example.

```
tensor = tensor.refine_names('N', 'C', 'H', 'W')
tensor.align_to('W', ...) # ... expands to 'C', 'H', 'W'
```

This PR changes it so that `...` in `tensor.align_to` grabs all
unmentioned dimensions from `tensor`, in the order that they appear.
`align_to` is the only function that takes ellipsis that requires this
change. This is because all other functions (`refine_names`) require their
list of names to work in a positional manner, but `align_to` lets the
user reorder dimensions.

This does not add very much overhead to `align_to`, as shown in the
following benchmark. However, in the future, we should resolve to make
these operations faster; align_to should be as fast as view but isn't,
most likely due to Python overhead.

```
[ins] In [2]: import torch
         ...: named = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))
         ...: unnamed = torch.randn(3, 3, 3, 3)
         ...: %timeit unnamed[:]
         ...: %timeit unnamed.view(-1)
         ...: %timeit named.align_to(...)
         ...: %timeit named.align_to('N', 'C', 'H', 'W')

31 µs ± 126 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
43.8 µs ± 146 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
69.6 µs ± 142 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
66.1 µs ± 1.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```

Test Plan:
- new tests [namedtensor ci]

Differential Revision: D17528207

Pulled By: zou3519

fbshipit-source-id: 4efc70329f84058c245202d0b267d0bc5ce42069
2019-09-23 12:16:46 -07:00
Richard Zou
808f4a4d61 Revert D17521607: Name inference for min(Tensor, dim?) / max(Tensor, dim?)
Test Plan: revert-hammer

Differential Revision:
D17521607

Original commit changeset: 303e3cef2291

fbshipit-source-id: a27b99c2c1c8b2e389d34395ba28a74d2946bb5a
2019-09-23 05:43:40 -07:00
Richard Zou
4fada96218 Renames tensor.renamed -> rename, tensor.names_ -> rename_ (#26548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26548

This makes the naming more consistent with PyTorch's API. The original
concern was that `tensor.rename` might make the operation seem like it
is in-place. However, we have many "verb" APIs: `tensor.add(other)`, for
example, doesn't add other to tensor in-place, but `tensor.add_(other)`
does.

`tensor.rename_` does exactly the same thing as `tensor.rename`, but
in-place.
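A minimal usage sketch (expected outputs in the comments):

```
import torch

t = torch.randn(2, 3, names=('N', 'C'))
out = t.rename('batch', 'channel')   # out-of-place
print(out.names)                     # ('batch', 'channel')
print(t.names)                       # ('N', 'C'): unchanged

t.rename_('batch', 'channel')        # in-place counterpart
print(t.names)                       # ('batch', 'channel')
```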

Test Plan: - [namedtensor ci]

Differential Revision: D17502021

Pulled By: zou3519

fbshipit-source-id: 6a5b93136a820075013cd1e30fb8fc6b9d77d7d9
2019-09-22 15:38:26 -07:00
Richard Zou
d3e90bc47d Name inference for min(Tensor, dim?) / max(Tensor, dim?) (#25582)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25582

There are a lot of min/max overloads. This PR adds name inference to
the following overloads for (both) min and max:
- min(Tensor, int dim)
- min(Tensor, Dimname dim)
- min(Tensor)  (full reduction)

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17521607

Pulled By: zou3519

fbshipit-source-id: 303e3cef22916dbc9da6a092d4f23e39e74c39e4
2019-09-22 12:20:51 -07:00
Richard Zou
87f80ff8ea Support torch.pow with named tensors (#26541)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26541

`torch.pow` already supports named tensors; every one of its constituent
codepaths propagates names:
- TensorIterator propagates names
- resize_as_ and fill_ propagate names (exponent == 0 or base == 1)
- resize_as_ and copy_ propagate names (exponent == 1)

This PR adds `supports_named_tensor = True` to the pow overloads,
enabling `pow` to take named tensors.
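A minimal sketch (expected outputs in the comments follow the propagation paths listed above):

```
import torch

base = torch.randn(2, 3, names=('N', 'C'))
print(torch.pow(base, 2).names)   # expected: ('N', 'C'), via the TensorIterator path
print(torch.pow(base, 1).names)   # expected: ('N', 'C'), via the resize_as_/copy_ path
```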

Test Plan: - [namedtensor ci]

Differential Revision: D17501402

Pulled By: zou3519

fbshipit-source-id: 07ee91d685e55dd58bbbb3a3fc9e185de8bb7515
2019-09-20 14:15:03 -07:00
Richard Zou
98b5b6fc13 Implement resize_, resize_as_ for named tensors (#26493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26493

resize_ and resize_as_ are low level functions that are not meant to be
used as a part of the regular PyTorch user's routine. However, they are
used to implement a lot of our operations: `out=` functionality is
implemented by resizing an output to be the correct size.

To keep in line with already implemented `out=` functionality, we do the
following:
- resize_as_(self, other) propagates names according to `out=` functionality.
This means that if self doesn't have names, then we propagate
other.names. If self does have names, they must be equal to other.names.

In addition, resize_ cannot resize a named tensor to anything but the same size.

Test Plan: - [namedtensor ci]

Differential Revision: D17501404

Pulled By: zou3519

fbshipit-source-id: e396e7fba55e1419355933925226d02dccb03012
2019-09-20 14:14:59 -07:00
Richard Zou
858cf76ef7 Disable tagged names (#26479)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26479

This PR doesn't delete the code for them yet because it takes some effort to
determine what to delete. I will send a followup PR fully deleting
tagged names, but this PR disables their creation.

Test Plan: - [namedtensor ci]

Differential Revision: D17484758

Pulled By: zou3519

fbshipit-source-id: 451409e36eac98ffee1b98884d0f675bb5d46c9d
2019-09-20 10:59:41 -07:00
Richard Zou
76fb909beb Change "named_guard" in native_functions to "supports_named_tensor" (#26352)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26352

"named_guard: P" is the same as "supports_named_tensor: !P".
Also changed the error message to be more understandable to users.

Test Plan:
- `TEST_NAMEDTENSOR=1 pytest test/test_namedtensor.py -v`
- [namedtensor ci]

Differential Revision: D17426234

Pulled By: zou3519

fbshipit-source-id: 4cab780e6e29e184e79cdd3690f41df9ebb2ecb5
2019-09-18 12:28:16 -07:00
Richard Zou
bae7528479 Change '*' to '...' and ... for named tensor API functions. (#26350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26350

Python 3 lets us use `...` to perform indexing. Semantically, `...`
means "the rest of the unspecified dimensions". For example, while
indexing, one can do (for 5D `tensor`) `tensor[0, 0, ..., 0]` and
the `...` is expanded into `tensor[0, 0, :, :, 0]`.

Previously, we were using '*' to represent a similar behavior in names.
For example, `tensor.refine_names` supports things like the following:

```
x = torch.randn(2, 3, 4, 5, 6)
x_out = x.refine_names('*', 'H', 'W')  # refine only the last two dimensions
```

This PR changes it so that named tensor API functions recognize `'...'`
(in Python 2 and Python 3) and `...` (in Python 3 exclusively) instead
of `'*'`.
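A minimal sketch of the new spelling (expected outputs in the comments; the string form is the Python 2-compatible spelling this PR describes):

```
import torch

x = torch.randn(2, 3, 4, 5, 6)
print(x.refine_names(..., 'H', 'W').names)     # expected: (None, None, None, 'H', 'W')
print(x.refine_names('...', 'H', 'W').names)   # string form, also recognized per this PR
```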

Test Plan: - [namedtensor ci]

Differential Revision: D17424666

Pulled By: zou3519

fbshipit-source-id: 003182879fd38ced3fea051217572a457cdaf7cf
2019-09-18 05:47:13 -07:00
Richard Zou
0038111019 Implement named tensor unflatten(dim, namedshape). (#25658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25658

This unflattens `dim` according to the shape specified in `namedshape`.
`namedshape` may be either an OrderedDict or an iterable of (name, size)
tuples.
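A minimal sketch using the (name, size)-tuple form of `namedshape` (hypothetical names; expected outputs in the comments):

```
import torch

t = torch.randn(2, 12, names=('N', 'D'))
out = t.unflatten('D', (('C', 3), ('H', 4)))
print(out.shape)   # expected: torch.Size([2, 3, 4])
print(out.names)   # expected: ('N', 'C', 'H')
```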

Future:
- It is possible to make it take a dict in Python >= 3.6 because those are
ordered by default, but I'll leave that task for the future.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17192655

Pulled By: zou3519

fbshipit-source-id: fd9bd2f462c23a4df1c23d66f2aa95076ff1b160
2019-09-17 21:24:25 -07:00
Richard Zou
babaac3e08 Fix bug with named tensors and (no) tracer support (#26106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26106

Previously, in the named tensors build, an operator is marked as
non-traceable if ANY of its overloads are named tensor overloads. This
breaks the tracer for things like torch.full (has a names= overload for
named tensor) and tensor.sum (has a Dimname overload for named tensor).

This PR fixes the problem by putting the "no tracer support" logic into
the location where the tracer attempts to construct a graph by adding a
Dimname/DimnameList argument to a node.

Test Plan:
- new test in test_jit.py to check if torch.full is traceable
- new test in test_namedtensor.py to check what happens when someone
tries to trace a function that uses named tensor APIs.
- [namedtensor ci]

Differential Revision: D17353452

Pulled By: zou3519

fbshipit-source-id: b0b843c8357ffe54baee6e8df86db914f0b1ece4
2019-09-13 06:45:00 -07:00
Richard Zou
5e2d25af34 Implement tensor.align_as(other), change tensor.align_to(names) (#25843)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25843

`tensor.align_to(*names)` permutes the dimensions of `tensor` and adds
additional 1-sized dimensions such that the output tensor has dimensions
in the same order as `names`. All dimensions of `tensor` must be
present in `names`; in addition, this function requires that all dims of
`tensor` be named.

`tensor.align_as(other)` is equivalent to
`tensor.align_to(*other.names)`.
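A minimal sketch (hypothetical names; expected outputs in the comments):

```
import torch

imgs = torch.randn(2, 3, 4, 4, names=('N', 'C', 'H', 'W'))
scale = torch.randn(2, names=('N',))
aligned = scale.align_as(imgs)   # same as scale.align_to('N', 'C', 'H', 'W')
print(aligned.shape)             # expected: torch.Size([2, 1, 1, 1])
print((imgs * aligned).names)    # expected: ('N', 'C', 'H', 'W')
```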

I'm planning on changing `torch.align_tensors(*tensors)` to align closer
to these semantics because there didn't seem to be a clear use case for the old
semantics that preserve unnamed dimensions. That will come in a future
change.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17255549

Pulled By: zou3519

fbshipit-source-id: 1e437ad81e9359b4d5bd0e7e64c3a1be441fc3e3
2019-09-12 22:53:44 -07:00
Richard Zou
e544f88590 Implement tensor.refine_names (#25842)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25842

`tensor.refine_names(*names)` takes `tensor` and attempts to name its
dimensions `names` out-of-place. If a dimension `i` already had a name,
then it cannot be changed (so tensor.names[i] must equal names[i]);
if the original dimension did not have a name, then the new name
(names[i]) can be anything.

`tensor.refine_names(*names)` also accepts a glob '*' that greedily selects
names from `tensor`. Here are some examples:

- `Tensor[None].refine_names('N') -> Tensor[N]`
- `Tensor[N].refine_names('N') -> Tensor[N]`
- `Tensor[N].refine_names('D') -> Error!`
- `Tensor[N].refine_names(None) -> Error!`
- `Tensor[None, None].refine_names('*', 'D') -> Tensor[None, D]`

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17255548

Pulled By: zou3519

fbshipit-source-id: fdbdb3a12f24fbe37ce1e53ed09dc8a42589d928
2019-09-12 22:53:40 -07:00
Richard Zou
4fb5a7c5b8 Experimental warning for named tensors (#26050)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26050

Throws a warning once when someone attempts to attach names to a tensor.
This is guaranteed to happen at the callsite `set_named_tensor_meta`.

Test Plan: - run tests [namedtensor ci]

Differential Revision: D17331634

Pulled By: zou3519

fbshipit-source-id: 44f5e5c95acd9c7ba543c1210a3b1314aab348f0
2019-09-12 06:34:12 -07:00
Richard Zou
ad2ec71695 Add TEST_NAMEDTENSOR flag to namedtensor ci (#25948)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25948

Previously, test/test_namedtensor.py is skipped if pytorch was not
compiled with BUILD_NAMEDTENSOR. Now, we skip test/test_namedtensor.py
if pytorch was not compiled with BUILD_NAMEDTENSOR or if
TEST_NAMEDTENSOR is not set.

This is done in preparation for turning on BUILD_NAMEDTENSOR=1 permanently;
at that point we will use TEST_NAMEDTENSOR to differentiate between the
named tensor ci and the regular ci.

Test Plan:
- [namedtensor ci] (and check that the named tensor tests are actually
running).

Differential Revision: D17300132

Pulled By: zou3519

fbshipit-source-id: 928f71f4d50445680b6ae1aa54b8857bc92e4d08
2019-09-11 14:53:20 -07:00
Richard Zou
4231287504 Add names= argument to torch.tensor ctor (#25424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25424
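A minimal sketch of the new constructor argument (expected outputs in the comments):

```
import torch

t = torch.tensor([[1., 2., 3.], [4., 5., 6.]], names=('N', 'C'))
print(t.names)   # ('N', 'C')
print(t.shape)   # torch.Size([2, 3])
```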

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17120399

Pulled By: zou3519

fbshipit-source-id: 93d7944f2ec4c5a7256f505323b879af706131df
2019-09-10 16:58:01 -07:00
Richard Zou
294cf096bf Name inference for unbind (#25585)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25585

Test Plan:
- new tests [namedtensor ci]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/25585

Differential Revision: D17185070

Pulled By: zou3519

fbshipit-source-id: 85512b194f5b7c62a00aa81d048b5351e098bdb0
2019-09-08 11:35:58 -07:00
Richard Zou
6257c8d634 Add flatten for named tensors. (#25672)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25672

There are three overloads:
1) flatten(tensor, int start_dim, int end_dim, Dimname out_dim)
2) flatten(tensor, Dimname start_dim, Dimname end_dim, Dimname out_dim)
3) flatten(tensor, DimnameList dims, Dimname out_dim)

`flatten` joins all the dimensions between start_dim and end_dim into
one dimension. The name of the output dimension is specified by
`out_dim`.

In the case where flatten takes a list of `dims` to flatten, all the
dimensions in `dims` must be in consecutive order.
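A minimal sketch of overload (3) (hypothetical names; expected outputs in the comments):

```
import torch

imgs = torch.randn(2, 3, 4, 4, names=('N', 'C', 'H', 'W'))
flat = imgs.flatten(['C', 'H', 'W'], 'features')   # DimnameList dims + out_dim
print(flat.shape)   # expected: torch.Size([2, 48])
print(flat.names)   # expected: ('N', 'features')
```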

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17192656

Pulled By: zou3519

fbshipit-source-id: 55d2b23358bd77cbef299f66701a8da8cd194f4f
2019-09-06 21:16:44 -07:00
Richard Zou
7970e5720b Rename tensor.view_names -> tensor.renamed (#25711)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25711

This function renames the dimensions of a tensor out-of-place. Because
of that, I think `tensor.renamed(...)` is a clearer name: `view_names`
has the connotation that we can use names to `view` our tensors with a
"different shape", but what this function really does is let us rename a
tensor no matter the previous names.

`tensor.names_`, the in-place version of this, is unchanged for now.
However, we might delete this or not advertise it if it has no use case
and also because its naming is a little inconsistent with `tensor.renamed`.

Test Plan: - [namedtensor ci]

Differential Revision: D17206515

Pulled By: zou3519

fbshipit-source-id: 67053951fcc8130c84566b5ebbdce35ef619c90d
2019-09-06 11:28:04 -07:00
Richard Zou
50cb48643d Fix named tensor build (#25673)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25673

We recently moved new_empty into ATen. new_empty doesn't support named
tensors (in fact, it was hackily supporting named tensors before). This
fixes the named tensor test by changing all uses of `new_empty` to
`empty`.

Named tensor support for `new_empty` will come eventually, but it might
be a little tricky.

Test Plan: - [namedtensor ci]

Differential Revision: D17206043

Pulled By: zou3519

fbshipit-source-id: 1697bd1d63e7cb344f3d459a29af0fcb9696ea49
2019-09-05 09:18:24 -07:00
Richard Zou
47cee2dd22 Implement initial version of autograd with named tensors (#25604)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25604

In this initial version:
- autograd ignores all names.
- tensor.grad is unnamed, unless the user manually assigns to it.
- if a grad tensor has any names, perhaps the user was hoping for some
alignment-checking behavior that named tensors offer for other ops. We
raise a warning in this case.

Future: do some more extensive checking to see if this actually works in
all cases.
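A minimal sketch of the behavior described above (expected output in the comment follows this PR's description, not verified):

```
import torch

x = torch.randn(2, 3, names=('N', 'C'), requires_grad=True)
out = x.sigmoid().sum()   # sigmoid's backward does not explicitly handle names
out.backward()
print(x.grad.names)       # expected: (None, None); grads are unnamed unless manually assigned
```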

Test Plan:
- [namedtensor ci]
- Check a warning is raised if a grad tensor has names.
- Check tensor.grad field is unnamed.
- Check that we can perform backward on an op that doesn't explicitly
support names in backward. `sigmoid` is one such op.

Differential Revision: D17171788

Pulled By: zou3519

fbshipit-source-id: 64837fde94d8269610b6d3539ac025516dbe1df4
2019-09-04 06:36:54 -07:00
Richard Zou
0ebbcd9541 Name inference rules for relu/relu_/threshold/threshold_ (#25569)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25569

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17159121

Pulled By: zou3519

fbshipit-source-id: c68bdb543155488aa3634f908bd576e5c30c8d77
2019-09-03 20:10:24 -07:00
Richard Zou
9ea6238b07 Fix named tensor printing (#25564)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25564

There are a number of ops that get called while printing tensors
depending on how large the tensors are. This PR makes it so that before
we attempt to format tensor data for printing, we drop the names of the
tensor (if there are any). This is easier than supporting named tensors
for all of those ops (which should happen eventually).

Test Plan: - new test [namedtensor ci]

Differential Revision: D17158872

Pulled By: zou3519

fbshipit-source-id: 282023837645b8cb16a4d93896a843dd598fc738
2019-09-03 19:58:00 -07:00
Richard Zou
67d64ea910 Fix binary op name inference to happen before shape checks (#25563)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25563

Before, for binary ops, name inference occurred after shape checks. This
defeats the purpose of names because the names are supposed to tell
the user that, e.g., their tensors are misaligned or that they are adding
incompatible tensors.

This PR changes TensorIterator so that names are computed before shape checks and
propagated after the binary ops are finished. In order to support this,
this PR makes the following changes:
- adds a `names_` field to TensorIterator, similar to `shape_`. This is
necessary to hold the output names, that are computed in
`compute_names`, until they are used in `propagate_names_to_outputs()`.

Test Plan: Imported from OSS

Differential Revision: D17158869

Pulled By: zou3519

fbshipit-source-id: 0caa90f7a93e4d9bdb2549cd330cc3abd2258868
2019-09-03 18:49:09 -07:00
Richard Zou
9922e09436 Name inference rule for torch.cat (#25568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25568

Test Plan
- new test [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17159069

Pulled By: zou3519

fbshipit-source-id: fbc185ea5865b128508451096b742ac18e467670
2019-09-03 18:43:10 -07:00
Richard Zou
a6ba4f64ac Name inference for masked_fill_ / masked_fill
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25567

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17159070

Pulled By: zou3519

fbshipit-source-id: d177a0847fc592b6b15e3ae59fcea847d4975e12
2019-09-03 17:45:14 -07:00
Richard Zou
2aef60660f Name inference rule for masked select (#25566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25566

masked_select returns a tensor with None names. However, it broadcasts
its inputs so we need to perform a check that they are broadcastable.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D17159071

Pulled By: zou3519

fbshipit-source-id: ad201f3f73bc54163ede1ba3d906d2409ebef475
2019-09-03 17:45:09 -07:00
Richard Zou
938e740241 Name inference rule for mean, std, var, std_mean, var_mean (#25431)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25431

I put the name propagation logic in a central place, `make_reduction`,
that creates a TensorIterator for the reduction. This lets us implement
name inference rules for mean, std, var, std_mean, and var_mean.

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17123577

Pulled By: zou3519

fbshipit-source-id: 2d47080a40da0c4bcabbb3df71ffa8fbeb7a14c6
2019-09-03 11:54:13 -07:00
Richard Zou
2513ca66ca Add guards for using named tensor with serialization and multiprocessing (#25345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25345

Test Plan
- New tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17101486

Pulled By: zou3519

fbshipit-source-id: 58e803b042056ee6abab8551517f74078f2b81d5
2019-08-29 14:10:33 -07:00
Richard Zou
0bb69f6071 Add guard for named tensors in the JIT (#25344)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25344

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17101487

Pulled By: zou3519

fbshipit-source-id: d6170a809dfd98e6a4dba8450433c439962991cc
2019-08-29 14:10:28 -07:00
Richard Zou
6f5fe96c80 Implement name inference for torch.matmul (#25177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25177

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D17051452

Pulled By: zou3519

fbshipit-source-id: 7259cdb7ba7f480035528cf3c60ef6d051e42db5
2019-08-28 13:51:04 -07:00
Richard Zou
d2719b549d Implement name inference for torch.bmm (#25123)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25123

The approach is different for CPU and CUDA. In particular:
- in CPU, I added a name inference rule to bmm_out
- in CUDA, bmm calls THCTensor_(baddbmm) so I added a name inference
rule to that.

When one calls baddbmm on CPU or CUDA, it'll error out with NYI due to
named_guard: True on it in native_functions.yaml. I'm not planning on
implementing baddbmm soon because it's a little tricky to add it to CPU
and bmm is a more commonly used function.
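A minimal sketch of the resulting rule (hypothetical names; expected output in the comment):

```
import torch

x = torch.randn(4, 2, 3, names=('B', 'M', 'K'))
y = torch.randn(4, 3, 5, names=('B', 'K', 'N'))
print(torch.bmm(x, y).names)   # expected: ('B', 'M', 'N')
```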

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16998073

Pulled By: zou3519

fbshipit-source-id: 8dc01898964318717911f28eebd6cdfffc7dfcf2
2019-08-28 13:51:00 -07:00
Richard Zou
2f4f6c2563 Implement name inference for torch.dot (#24474)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24474

torch.dot is a little weird. It ignores the names of its inputs to be
consistent with the rest of our matrix multiplication functions.

I've written the implementation using a helper function that is also
used by other matrix multiplication functions so that it is easy to
change the behavior.

Test Plan
- new tests [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16915802

Pulled By: zou3519

fbshipit-source-id: 628a6de1935357022cc92f4d23222736a70bb070
2019-08-27 06:49:27 -07:00
Richard Zou
088201f95d Implement name inference for addmv, addmv_, mv (#24471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24471

mv(Tensor[M, N], Tensor[O]) ignores the names of N and O and returns a
tensor with names [M].

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915805

Pulled By: zou3519

fbshipit-source-id: d7d47903f249f85ef3be8a188d51993834bf5f55
2019-08-26 15:03:26 -07:00
Richard Zou
78fa8a8ad0 Implement name inference for expand (#24469)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24469

tensor.expand(*sizes) returns a tensor with names equal to tensor.names
plus unnamed padding in the beginning dimensions.

For example, Tensor[H, W].expand(10, 2, 128, 128) -> Tensor[None, None,
H, W].

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915804

Pulled By: zou3519

fbshipit-source-id: 77ac97f42e9959d7f6d358c5286e3dc27488e33d
2019-08-26 15:03:22 -07:00
Richard Zou
0156d02b59 Implement name inference for mm, addmm (#24306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24306

Featuring:
- a new way of writing name inference tests. At some point I'll migrate
the older tests over.
- The out= variants aren't implemented. This is because they are a
little weird: the output gets resized, but I haven't thought through
what semantics that should have.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915801

Pulled By: zou3519

fbshipit-source-id: 29ae2ee414c7d98e042965458c5dccef7ddbd4dd
2019-08-26 12:20:26 -07:00
Richard Zou
6195aee2c6 Fix binary op name inference between unnamed and named tensors. (#24921)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24921

Let `unnamed = torch.randn(1, 1, 1)` and `named = torch.randn(1, 1,
names=('N', 'C'))`.

Previously, there was a bug where `unnamed + named` would error out.
This happened because `unify_from_right(unnamed.opt_names(),
named.opt_names())` would return `named.names()`, which was propagated
to the output tensor. However, the output tensor has dim 3, but
`named.names()` only has 2 elements, so the code would throw an error.

The solution implemented in this PR is to stop trying to do premature
optimization. If none of the inputs to an operation have names, then
don't run name inference. However, if any inputs do, then materialize
the names and run name inference.

It's possible to make this more efficient for the case where some inputs
are named and some aren't, but we should benchmark these cases
and determine if it is necessary for it to be more efficient.
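A minimal sketch of the previously-failing case (expected output in the comment follows right-alignment of names):

```
import torch

unnamed = torch.randn(1, 1, 1)
named = torch.randn(1, 1, names=('N', 'C'))
out = unnamed + named   # used to raise, per the description above
print(out.names)        # expected: (None, 'N', 'C')
```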

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16930710

Pulled By: zou3519

fbshipit-source-id: 0de73c803c8b0f9a1c2d80684b9a47cccba91cbc
2019-08-26 12:20:22 -07:00
Richard Zou
867d8af20f Fix FIXME_default_names by storing static list of 64 none names (#24885)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24885

Store a static pre-allocated vector of names. When one calls
`default_names`, it returns a const reference covering the requested
number of these names.

Also make clearer the maximum number of dimensions we support for named
tensors. Right now it is 64 but that number is easy to change. 64
follows some internal pytorch maximum number of dimensions;
TensorIterator reduce ops have a limit of 64 dims.

Test Plan: - new tests [namedtensor ci]

Differential Revision: D16915803

Pulled By: zou3519

fbshipit-source-id: 931741b199456f8976882b82f25ab5af6dcd108b
2019-08-23 14:32:07 -07:00
Richard Zou
3a59a9b36c Implement name inference for t(), transpose(...) (#24941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24941

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16930707

Pulled By: zou3519

fbshipit-source-id: 833a2bfd27f3bb3b7bc4327ac62a1d02ec526127
2019-08-23 09:01:53 -07:00
Richard Zou
a77cb2ccd1 Revert D16915800: Implement name inference for t(), transpose(...)
Differential Revision:
D16915800

Original commit changeset: d8e5beff3daa

fbshipit-source-id: f8b966fdc485d8250ae74d8bbbda157b45c2d1a0
2019-08-20 14:07:06 -07:00
Richard Zou
acf3b76bf0 Implement name inference for t(), transpose(...) (#24203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24203

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16915800

Pulled By: zou3519

fbshipit-source-id: d8e5beff3daa7e5fd5bfed5b02d8089cac300de8
2019-08-20 13:46:47 -07:00
Richard Zou
4bfd33ed36 Name inference for softmax, log_softmax and Dimname overloads. (#24087)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24087

Added name inference rules for softmax and log_softmax.

Added the overloads for Dimname dim to softmax and log_softmax.

Test Plan: - [namedtensor ci]

Differential Revision: D16763391

Pulled By: zou3519

fbshipit-source-id: 676a14666d42441eb7d3c9babef7461c7b78d290
2019-08-14 12:19:27 -07:00
Richard Zou
5cb8a7b396 Fix out= function semantics for named tensors. (#24028)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24028

Previously, torch.abs(tensor, out=out) would ignore the names of the
`out` tensor and overwrite them with the names of `tensor`.

This patch changes the behavior to the following:
1) If `out` does not have names, then overwrite them with `tensor.names`.
2) If `out` does have names, then check that `out.names` equals
`tensor.names`.
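A minimal sketch of rules (1) and (2) (expected outputs in the comments):

```
import torch

t = torch.randn(2, 3, names=('N', 'C'))

fresh = torch.empty(2, 3)      # out tensor with no names
torch.abs(t, out=fresh)        # rule (1): names come from t
print(fresh.names)             # expected: ('N', 'C')

named_out = torch.empty(2, 3, names=('N', 'C'))
torch.abs(t, out=named_out)    # rule (2): names match, so this is accepted
```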

This patch also includes the following clean ups:
- renamed `default_names` to `FIXME_default_names` because it is
inefficient and needs to be fixed.
- Renamed impl::internal_get_names / impl::internal_has_names to
impl::get_names / impl::set_names. Devs should feel free to use them, so
I removed the internal_ prefix.
- Moved internal_set_names to NamedTensor.{h, cpp}. These functions
still have the internal_ prefix because their use requires caution.

Test Plan: - [namedtensor ci]

Differential Revision: D16763387

Pulled By: zou3519

fbshipit-source-id: 57dcc7c759246def0db2746d1dca8eddd5e90049
2019-08-14 12:19:23 -07:00
Richard Zou
f996f8d61d Update tensor.view_names / tensor.names_ API (#23973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23973

Without loss of generality, I describe the API for `tensor.view_names`.
`tensor.names_` has an analogous API.

`tensor.view_names(*names)` returns a view on tensor with named dims `names`.
`names` must be of length `tensor.dim()` unless '*' is in `names`, in which
case it (known as the "glob") is expanded greedily to be equal to the
corresponding names from `tensor.names`.

For example,
```
>>> x = torch.empty(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> x.view_names('*', 'height', 'width').names
('N', 'C', 'height', 'width')

>>> x.view_names('batch', '*', 'width').names
('batch', 'C', 'H', 'width')
```

`tensor.view_names(**rename_map)` returns a view on `tensor` that has
renamed dims as specified in the mapping `rename_map`.

For example,
```
>>> x = torch.empty(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> x.view_names(W='width', H='height').names
('N', 'C', 'height', 'width')
```

These are different(!!!) from the C++ API, which only allows the
following:
- tensor.view_names(optional<DimnameList>)

C++ API parity for named tensors is not important right now; I am
punting that to the future.

Test Plan: - [namedtensor ci]

Differential Revision: D16710916

Pulled By: zou3519

fbshipit-source-id: 7cb8056c0fb4c97b04c3a2d1dd0f737e0a67ce34
2019-08-14 09:40:35 -07:00
Richard Zou
2fcdb3a1f3 Rename set_names -> view_names, set_names_ -> names_ (#23962)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23962

This change should make the semantics clearer.

`tensor.names_(names)` sets tensor.names to be `names`.

`tensor.view_names(names)` returns a view of the tensor with names
`names`.

Test Plan
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16710915

Pulled By: zou3519

fbshipit-source-id: c82fa9812624d03c86f7be84b0a460e3c047aaa0
2019-08-14 09:40:31 -07:00
Richard Zou
7030f2c623 Implement tensor.align_to(names), torch.align_tensors(*tensors) (#23804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23804

`output = tensor.align_to(names)` returns a view of `tensor` such that
`output.names = names`. Dimensions with the same names in `tensor` and
`output` have the same sizes; dimensions with new names have size 1.

The following must be true for this operation to succeed:
1) tensor.names must be a subsequence (not necessarily contiguous) of `names`
2) Aligning tensor.names to names must not change the absolute position from the
   right of any unnamed dimension.

In practice, these constraints mean that aligning cannot transpose
names.

Some examples:
- Tensor[C].align_to(C) -> Tensor[C]
- Tensor[N].align_to([N, C]) -> Tensor[N, C]
- Tensor[H, W].align_to([N, H, W, C]) -> Tensor[N, H, W, C]
- Tensor[None].align_to([N, None]) -> Tensor[N, None]
- Tensor[N].align_to([N, None, None]) -> Tensor[N, None, None]

Examples of error cases:
- Tensor[W, H].align_to([N, H, W, C]) -> Error (not a subsequence)
- Tensor[None, H].align_to([None, H, W]) -> Error (would change the
absolute position from the right of a None dimension)

`torch.align_tensors(*tensors)` aligns the named dimensions of each
tensor according to the alignment rules so that they can be used in an
operation. More concretely, it aligns each tensor to the
longest names among the names of the tensors in `tensors`.

This allows users to emulate "broadcasting by names", which is one of
the things named tensors tries to enable. Here is an example:

```
imgs: Tensor[N, C, H, W]
scale: Tensor[N]

// Doesn't work because we do broadcasting by alignment by default
imgs * scale

// Does work
imgs, scale = torch.align_tensors(imgs, scale)
imgs * scale
```

Future:
- Consider allowing broadcasting by names by default.

Test Plan:
- The diff looks pretty large but more than half of it is testing.
- new tests [namedtensor ci]

Differential Revision: D16657927

Pulled By: zou3519

fbshipit-source-id: e2f958bf5146c8ee3b694aba57d21b08e928a4e6
2019-08-14 09:40:27 -07:00
Richard Zou
eabfca3577 Named inference for contiguous(), bernoulli variants, and dropout. (#24109)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24109

See title.

Test Plan: - New tests [namedtensor ci]

Differential Revision: D16763389

Pulled By: zou3519

fbshipit-source-id: ea14af0fe812d04ca7127a080e56c273b21c30bc
2019-08-14 06:19:28 -07:00
Richard Zou
ad42c7d0f3 Implement name inference rule for empty_like, clone (#24108)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24108

`torch.empty_like(tensor)` and `tensor.clone()` both propagate names to
the output tensor.

As a part of this change, I fixed the empty(..., names=) overload to
include the `memory_format` argument in the normal `empty` declaration
in native_functions.yaml.

Test Plan: - [namedtensor ci]

Differential Revision: D16763392

Pulled By: zou3519

fbshipit-source-id: c7b2bc058d26a515a5fd8deef22c2acb290c8816
2019-08-14 06:19:24 -07:00
Richard Zou
65fa0233c5 Add names argument to ones, rand, randn, zeros, full; fix empty (#24107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24107

In the short term, we implement this by having overloads for each of
these functions. In the long term, the plan is to move DimnameList to
TensorOptions so that we do not have to duplicate work.

Also fixes the implementation of empty. If there are no names, we should
just return an unnamed tensor instead of telling the user we don't
support their backend/layout.
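A minimal sketch of the new overloads (expected outputs in the comments):

```
import torch

print(torch.zeros(2, 3, names=('N', 'C')).names)         # ('N', 'C')
print(torch.full((2, 3), 1.0, names=('N', 'C')).names)   # ('N', 'C')
print(torch.empty(2, 3).names)                           # (None, None): plain unnamed tensor
```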

Test Plan: - [namedtensor ci]

Differential Revision: D16763393

Pulled By: zou3519

fbshipit-source-id: 7324a6b157187d4f74abc5459052f3323a417412
2019-08-14 06:19:21 -07:00
Richard Zou
98a3b3d565 Add name propagation for at::alias, add tensor.set_names (#24202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24202

tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.

Test Plan: - run tests [namedtensor ci]

Differential Revision: D16773014

Pulled By: zou3519

fbshipit-source-id: 61024303c1a34db631cc4cb2c53757345e40d72c
2019-08-13 17:01:18 -07:00
Richard Zou
75db368031 Revert D16763388: Add name propagation for at::alias, add tensor.set_names
Differential Revision:
D16763388

Original commit changeset: 4b2fb3acc051

fbshipit-source-id: 5be35bdcc2e7c71378af9e34be19305bdd4ba0d1
2019-08-12 13:42:43 -07:00
Richard Zou
6772f537f0 Revert D16763390: Improve test_namedtensor.py with named tensor equality check
Differential Revision:
D16763390

Original commit changeset: 170e27ebc4d7

fbshipit-source-id: dbabe837793d8db6493a221b91e43a065baece75
2019-08-12 13:42:39 -07:00
Richard Zou
90f3f9d9aa Improve test_namedtensor.py with named tensor equality check (#24106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24106

Test Plan
- Code reading. assertTensorDataAndNamesEqual isn't used in this commit
but it'll be used in future commits.
- [namedtensor ci]

Test Plan: Imported from OSS

Differential Revision: D16763390

Pulled By: zou3519

fbshipit-source-id: 170e27ebc4d79aca939c5d101489b20faedc6133
2019-08-12 12:45:00 -07:00
Richard Zou
1108fa1acb Add name propagation for at::alias, add tensor.set_names (#24105)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24105

tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.

Test Plan: - run tests [namedtensor ci]

Differential Revision: D16763388

Pulled By: zou3519

fbshipit-source-id: 4b2fb3acc0514515e7ca805dbc5c3d4a9bd96317
2019-08-12 12:44:56 -07:00
Richard Zou
0bba302da5 Revert D16621830: Add name propagation for at::alias, add tensor.set_names
Differential Revision:
D16621830

Original commit changeset: f8a3837d3a37

fbshipit-source-id: 801ab858a0741d98b0b9d56763fa70a9010fe75e
2019-08-09 10:55:18 -07:00
Richard Zou
71352fbd9a Revert D16667816: Improve test_namedtensor.py with named tensor equality check
Differential Revision:
D16667816

Original commit changeset: 66519cd5d17b

fbshipit-source-id: 51a26cdfb5624695a492d3ac93fb7a402c44e11a
2019-08-09 10:55:14 -07:00
Richard Zou
de97b12dbd Revert D16647820: Add names argument to ones, rand, randn, zeros, full
Differential Revision:
D16647820

Original commit changeset: c6c53c5f26a8

fbshipit-source-id: a341c6eda49f5dd2e1712b65e61fef99791f0668
2019-08-09 10:55:10 -07:00
Richard Zou
177a5c3f41 Revert D16647821: Implement name inference rule for empty_like, clone
Differential Revision:
D16647821

Original commit changeset: 43b261f3456b

fbshipit-source-id: 03caecd6898efd292b4f5c5b7254f7d31d502d6a
2019-08-09 10:55:06 -07:00
Richard Zou
521484eaec Revert D16657926: Named inference for contiguous(), bernoulli variants, and dropout.
Differential Revision:
D16657926

Original commit changeset: 8cd46765b1c7

fbshipit-source-id: fce2202dd101cfc3153f279a0a4651c9b735e044
2019-08-09 10:32:48 -07:00
Richard Zou
4dd2908dd6 Named inference for contiguous(), bernoulli variants, and dropout. (#23808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23808

See title.

Test Plan: - New tests [namedtensor ci]

Differential Revision: D16657926

Pulled By: zou3519

fbshipit-source-id: 8cd46765b1c791b73448ddf4585dae56d635364d
2019-08-09 09:17:47 -07:00
Richard Zou
16b6466e5e Implement name inference rule for empty_like, clone (#23746)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23746

`torch.empty_like(tensor)` and `tensor.clone()` both propagate names to
the output tensor.

As a part of this change, I fixed the empty(..., names=) overload to
include the `memory_format` argument in the normal `empty` declaration
in native_functions.yaml.

Test Plan: - [namedtensor ci]

Differential Revision: D16647821

Pulled By: zou3519

fbshipit-source-id: 43b261f3456b6bf5fca7b6313e659b259a2ba66d
2019-08-09 09:17:43 -07:00
Richard Zou
11cff2981b Add names argument to ones, rand, randn, zeros, full (#23743)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23743

In the short term, we implement this by having overloads for each of
these functions. In the long term, the plan is to move DimnameList to
TensorOptions so that we do not have to duplicate work.

Test Plan: - [namedtensor ci]

Differential Revision: D16647820

Pulled By: zou3519

fbshipit-source-id: c6c53c5f26a86b730cbc4d4eb69907ac0e08fc65
2019-08-09 09:17:39 -07:00
Richard Zou
5fbe824398 Improve test_namedtensor.py with named tensor equality check (#23801)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23801

Test Plan
- Code reading. assertTensorDataAndNamesEqual isn't used in this commit
but it'll be used in future commits.
- [namedtensor ci]

gh-metadata: pytorch pytorch 23801 gh/zou3519/90/head

Test Plan: Imported from OSS

Differential Revision: D16667816

Pulled By: zou3519

fbshipit-source-id: 66519cd5d17bda4c4304a1bc6e2a03ae59d49e39
2019-08-09 09:17:35 -07:00
Richard Zou
78f3b883f0 Add name propagation for at::alias, add tensor.set_names (#23624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23624

tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.

Test Plan:
- run tests [namedtensor ci]

gh-metadata: pytorch pytorch 23624 gh/zou3519/86/head

Differential Revision: D16621830

Pulled By: zou3519

fbshipit-source-id: f8a3837d3a370b41210e938369348dcbb4aee53a
2019-08-09 09:17:31 -07:00
Richard Zou
57fc793650 Add names to repr for named tensors
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23316

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23316 gh/zou3519/80/head

Imported from OSS

Differential Revision: D16494415

Pulled By: zou3519

fbshipit-source-id: e483f57bdb0610d0eadbe70d673e20dc3d3f9502
2019-08-02 11:37:29 -07:00
Richard Zou
8e466b7e21 Add torch._C._BUILD_NAMEDTENSOR() (#23623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23623

This is a quick, non-user-facing check for whether PyTorch was built with BUILD_NAMEDTENSOR=1.
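
A minimal sketch of how such a check could be used to guard experimental code paths (the shape and names below are hypothetical):

```python
import torch

if torch._C._BUILD_NAMEDTENSOR():
    # Named tensor support was compiled in: exercise the experimental API.
    t = torch.empty(2, 3, names=('N', 'C'))
else:
    t = torch.empty(2, 3)
```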

Test Plan:
- run tests [namedtensor ci]

gh-metadata: pytorch pytorch 23623 gh/zou3519/85/head

Differential Revision: D16621829

Pulled By: zou3519

fbshipit-source-id: d7e1161dc176bab2c1f953265722daeba1e63102
2019-08-02 11:37:25 -07:00
Richard Zou
08f7f27c6a Fix named tensor build by enabling tensor.is_pinned and removing support for clone() (#23597)
Summary:
`is_pinned` was moved to native_functions.yaml, disabling it for named
tensors. This PR re-enables its usage for named tensors.

I wrote a named inference rule for torch.clone(), but something broke it.
Disable it for now so we can get the namedtensor CI back to green.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23597

Test Plan: - run tests [namedtensor ci]

Differential Revision: D16581771

Pulled By: zou3519

fbshipit-source-id: 498018cdc55e269bec80634b8c0a63ba5c72914b
2019-07-31 11:48:40 -07:00
Richard Zou
c5482e33e9 Rename tensor.is_named to has_named, expose has_named to python.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23315

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23315 gh/zou3519/79/head

Imported from OSS

Differential Revision: D16494414

Pulled By: zou3519

fbshipit-source-id: d2d6beb45db9288e5df707b68b6046d783ca9f97
2019-07-31 07:14:07 -07:00
Richard Zou
725e41e955 Enable named tensors for arithmetic, clone, and tensor conversion ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23237

Test Plan: Imported from OSS

Differential Revision: D16494416

Pulled By: zou3519

fbshipit-source-id: 29bc390797c99088d50a2b59c3e2402a93562e2c
2019-07-31 07:14:04 -07:00
Richard Zou
437a8b3eed Named inference rule for copy_
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23229

Test Plan: Imported from OSS

Differential Revision: D16494413

Pulled By: zou3519

fbshipit-source-id: 4acb85e5a4ad09bf5f7cbb84cc8d4ceac0cd9967
2019-07-30 07:17:34 -07:00
Richard Zou
505fa83b2f Implement named inference rule for mul
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23193

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23193 gh/zou3519/75/head

Imported from OSS

Differential Revision: D16494401

Pulled By: zou3519

fbshipit-source-id: 0e2395d7de39158ec51feed5da0389715ec52600
2019-07-29 09:58:18 -07:00
Richard Zou
0dcb8755c8 Implement tensor.set_names_, tensor.names setter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23172

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23172 gh/zou3519/74/head

Imported from OSS

Differential Revision: D16494364

Pulled By: zou3519

fbshipit-source-id: 8d0e26b33346d4eadba30b2e76610f6d7be7c373
2019-07-26 08:50:49 -07:00
Richard Zou
c8a50a26d2 Named inference rule for torch.prod
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23106

Test Plan:
- [namedtensor ci]

Imported from OSS

Differential Revision: D16419175

Pulled By: zou3519

fbshipit-source-id: beb9ef838525c1ea7d7839cb9b8d68028fb4917f
2019-07-26 08:50:45 -07:00
Richard Zou
9817d7e16b Implement named inference rule for torch.sum
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23081

Test Plan:
- New tests [namedtensor ci]

Imported from OSS

Differential Revision: D16419174

Pulled By: zou3519

fbshipit-source-id: 8679f77f121664d0398d7f062a53c0fa37482481
2019-07-26 08:50:40 -07:00
Richard Zou
b4b51ed5ec Implement tensor.size(Dimname), tensor.stride(Dimname)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22989
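
A minimal usage sketch of the Dimname overloads named in the title (the tensor shape and names are hypothetical):

```python
import torch

x = torch.randn(2, 3, names=('N', 'C'))
print(x.size('C'))     # 3
print(x.stride('N'))   # 3 for a contiguous (2, 3) tensor
```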

Test Plan: Imported from OSS

Differential Revision: D16364437

Pulled By: zou3519

fbshipit-source-id: 393a93fecac27b5d3b1a7f7692590d8fd5e95a5d
2019-07-22 13:11:59 -07:00
Richard Zou
662fe699c5 Named inference rules for some initializer fns
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22972

Test Plan:
- [namedtensor ci]

Imported from OSS

Differential Revision: D16342782

Pulled By: zou3519

fbshipit-source-id: 25277688ab51e1e98af0e19aeb9c79399171d2fb
2019-07-18 10:04:29 -07:00
Richard Zou
57cec0a720 Named inference rules for split/chunk
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22971

Test Plan: Imported from OSS

Differential Revision: D16342783

Pulled By: zou3519

fbshipit-source-id: 379edc8eb2f45a82ee8a6320f8285f8f81ea0b1b
2019-07-18 10:04:25 -07:00
Hong Xu
693871ded3 Rename macros and build options NAMEDTENSOR_ENABLED to BUILD_NAMEDTENSOR (#22360)
Summary:
Currently the build system accepts USE_NAMEDTENSOR from the environment
and turns it into NAMEDTENSOR_ENABLED when passing it to CMake.
This discrepancy does not seem necessary and complicates the build
system. The naming of this build option is also semantically incorrect
("BUILD_" vis-a-vis "USE_"). This commit eradicates this issue before it
makes it into a stable release.

Support for NO_NAMEDTENSOR is also removed, since PyTorch has been
quite inconsistent about "NO_*" build options.

 ---

Note: All environment variables whose names start with `BUILD_` are currently passed to CMake automatically, with no need for an additional wrapper.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22360

Differential Revision: D16074509

Pulled By: zou3519

fbshipit-source-id: dc316287e26192118f3c99b945454bc50535b2ae
2019-07-02 11:46:13 -07:00
Richard Zou
f894ef7263 Add smoke test for information fn/method/attrs to test_namedtensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22341

Test Plan:
- `python test/test_namedtensor.py -v` [namedtensor ci]

gh-metadata: pytorch pytorch 22341 gh/zou3519/66/head

Imported from OSS

Differential Revision: D16053440

Pulled By: zou3519

fbshipit-source-id: 400f2e1c136cd7db4346a42b58813e42595ca755
2019-07-01 07:24:54 -07:00
Richard Zou
496e35f76b More named inference rules for pointwise unary ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22308

Test Plan:
- `python test/test_namedtensor.py -v` [namedtensor ci]

gh-metadata: pytorch pytorch 22308 gh/zou3519/65/head

Imported from OSS

Differential Revision: D16053441

Pulled By: zou3519

fbshipit-source-id: 2e8d4cc11d7a711d2b789752a316a11fffc0996e
2019-07-01 07:24:51 -07:00
Richard Zou
177b8bf6e7 Named inference rule for more pointwise ops. (#22268)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22268
ghimport-source-id: c722f9fbb3fc529c872dcccbf58ba1a8c5fcda8e

Test Plan:
- `python test/test_namedtensor.py -v` [namedtensor ci]

Imported from OSS

Differential Revision: D16030549

Pulled By: zou3519

fbshipit-source-id: 5cbb2c8626335a32a22ed8079245a5faa7cf553f
2019-06-27 12:49:36 -07:00