Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26543
Also adds a test for logical_xor (it already had named tensor support
but there was no test)
Test Plan: - [namedtensor ci]
Differential Revision: D17501403
Pulled By: zou3519
fbshipit-source-id: 49be15580be9fb520e25a8020164e5a599d22d40
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26558
Previously, name inference gets called after dimensions are wrapped.
This PR makes it so that name inference always wraps dimensions so that
it can be called anywhere. Ideally we would only wrap dimensions once,
but many of our operators wrap dimensions in weird places.
Wrapping dimensions in name inference is pretty inexpensive and only
happens for named tensors (name inference does not run on unnamed
tensors.)
Test Plan: - [namedtensor ci]
Differential Revision: D17557049
Pulled By: zou3519
fbshipit-source-id: 68c5636489e233dbf2588ab6ad4e379a6fe4c8ba
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26636
This PR defines a lot of dimname overloads so that when named tensor
support is added for those operators, we will not have to modify the
autogenerated TensorMethods.h, thereby avoiding potential merge
conflicts in the future.
Overloads were added for the following:
- all
- any
- argmax
- argmin
- cumsum
- cumprod
- index_copy
- kthvalue
- mode
- permute
- squeeze
- index_add
- index_fill
- scatter
- scatter_add
- index_select
- gather
- sort
- argsort
Test Plan: - [namedtensor ci]
Differential Revision: D17522984
Pulled By: zou3519
fbshipit-source-id: eca6dea819ba4e4e43b71b700d5cf09176f00061
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26648
Previously:
- `Tensor.align_to(*names)` only works on fully named tensors. In addition, the
desired ordering `names` must not have any None-names.
- `Tensor.align_to(*names)` accepted `...`, but expanded it based on
position. i.e., in `tensor.align_to('N', ..., 'C', 'H')`, `...` expands
to `*tensor.names[1:-2]`. This is wildly incorrect: see the following
concrete example.
```
tensor = tensor.refine_names('N', 'C', 'H', 'W')
tensor.align_to('W', ...) # ... expands to 'C', 'H', 'W'
```
This PR changes it so that `...` in `tensor.align_to` grabs all
unmentioned dimensions from `tensor`, in the order that they appear.
`align_to` is the only function taking an ellipsis that requires this
change. All other functions that accept `...` (e.g. `refine_names`)
use their list of names positionally, but `align_to` lets the
user reorder dimensions.
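The new expansion rule can be sketched in pure Python (illustrative only; `expand_ellipsis` is a made-up name, not the actual ATen implementation):

```python
def expand_ellipsis(order, tensor_names):
    """Expand `...` in `order` to all dimensions of `tensor_names` that
    are not explicitly mentioned, in the order they appear in the tensor."""
    mentioned = set(n for n in order if n is not Ellipsis)
    unmentioned = [n for n in tensor_names if n not in mentioned]
    result = []
    for n in order:
        if n is Ellipsis:
            result.extend(unmentioned)  # grab all unmentioned dims, in order
        else:
            result.append(n)
    return result
```

Under this rule, `align_to('W', ...)` on a `('N', 'C', 'H', 'W')` tensor expands to `('W', 'N', 'C', 'H')` rather than the old (incorrect) positional expansion.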
This does not add very much overhead to `align_to`, as shown in the
following benchmark. However, in the future we should make
these operations faster; `align_to` should be as fast as `view` but isn't,
most likely due to Python overhead.
```
[ins] In [2]: import torch
...: named = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))
...: unnamed = torch.randn(3, 3, 3, 3)
...: %timeit unnamed[:]
...: %timeit unnamed.view(-1)
...: %timeit named.align_to(...)
...: %timeit named.align_to('N', 'C', 'H', 'W')
31 µs ± 126 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
43.8 µs ± 146 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
69.6 µs ± 142 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
66.1 µs ± 1.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Test Plan:
- new tests [namedtensor ci]
Differential Revision: D17528207
Pulled By: zou3519
fbshipit-source-id: 4efc70329f84058c245202d0b267d0bc5ce42069
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26548
This makes the naming more consistent with PyTorch's API. The original
concern was that `tensor.rename` might make the operation seem like it
is in-place. However, we have many "verb" APIs: `tensor.add(other)`, for
example, doesn't add other to tensor in-place, but `tensor.add_(other)`
does.
`tensor.rename_` does exactly the same thing as `tensor.rename`, but
in-place.
Test Plan: - [namedtensor ci]
Differential Revision: D17502021
Pulled By: zou3519
fbshipit-source-id: 6a5b93136a820075013cd1e30fb8fc6b9d77d7d9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25582
There are a lot of min/max overloads. This PR adds name inference to
the following overloads for (both) min and max:
- min(Tensor, int dim)
- min(Tensor, Dimname dim)
- min(Tensor) (full reduction)
Test Plan: - new tests [namedtensor ci]
Differential Revision: D17521607
Pulled By: zou3519
fbshipit-source-id: 303e3cef22916dbc9da6a092d4f23e39e74c39e4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26541
`torch.pow` already supports named tensors; every one of its constituent
codepaths propagates names:
- TensorIterator propagates names
- resize_as_ and fill_ propagate names (exponent == 0 or base == 1)
- resize_as_ and copy_ propagate names (exponent == 1)
This PR adds `supports_named_tensor = True` to the pow overloads,
enabling `pow` to take named tensors.
Test Plan: - [namedtensor ci]
Differential Revision: D17501402
Pulled By: zou3519
fbshipit-source-id: 07ee91d685e55dd58bbbb3a3fc9e185de8bb7515
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26493
resize_ and resize_as_ are low level functions that are not meant to be
used as a part of the regular PyTorch user's routine. However, they are
used to implement a lot of our operations: `out=` functionality is
implemented by resizing an output to be the correct size.
To keep in line with already implemented `out=` functionality, we do the
following:
- resize_as_(self, other) propagates names according to `out=` functionality.
This means that if self doesn't have names, then we propagate
other.names. If self does have names, they must be equal to other.names.
In addition, resize_ cannot resize a named tensor to anything but the same size.
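The `out=` name-propagation rule described above can be sketched as follows (a hypothetical pure-Python model; the function name is illustrative, not the actual ATen code):

```python
def propagate_names_for_out(out_names, other_names):
    """Model of the out= rule: an unnamed `out` adopts the input's names;
    a named `out` must already have exactly matching names."""
    if all(n is None for n in out_names):
        return list(other_names)
    if list(out_names) != list(other_names):
        raise RuntimeError("out tensor names must match input names")
    return list(out_names)
```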
Test Plan: - [namedtensor ci]
Differential Revision: D17501404
Pulled By: zou3519
fbshipit-source-id: e396e7fba55e1419355933925226d02dccb03012
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26479
This PR doesn't delete the code for them yet because it takes some effort to
determine what to delete. I will send a followup PR fully deleting
tagged names, but this PR disables their creation.
Test Plan: - [namedtensor ci]
Differential Revision: D17484758
Pulled By: zou3519
fbshipit-source-id: 451409e36eac98ffee1b98884d0f675bb5d46c9d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26352
"named_guard: P" is the same as "supports_named_tensor: !P".
Also changed the error message to be more understandable to users.
Test Plan:
- `TEST_NAMEDTENSOR=1 pytest test/test_namedtensor.py -v`
- [namedtensor ci]
Differential Revision: D17426234
Pulled By: zou3519
fbshipit-source-id: 4cab780e6e29e184e79cdd3690f41df9ebb2ecb5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26350
Python 3 lets us use `...` to perform indexing. Semantically, `...`
means "the rest of the unspecified dimensions". For example, while
indexing, one can do (for 5D `tensor`) `tensor[0, 0, ..., 0]` and
the `...` is expanded into `tensor[0, 0, :, :, 0]`.
Previously, we were using '*' to represent a similar behavior in names.
For example, `tensor.refine_names` supports things like the following:
```
x = torch.randn(2, 3, 4, 5, 6)
x_out = x.refine_names('*', 'H', 'W') # refine only the last two dimensions
```
This PR changes it so that named tensor API functions recognize `'...'`
(in Python 2 and Python 3) and `...` (in Python 3 exclusively) instead
of `'*'`.
Test Plan: - [namedtensor ci]
Differential Revision: D17424666
Pulled By: zou3519
fbshipit-source-id: 003182879fd38ced3fea051217572a457cdaf7cf
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25658
This unflattens `dim` according to the shape specified in `namedshape`.
`namedshape` may be either an OrderedDict or an iterable of (name, size)
tuples.
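The name/shape computation can be sketched in pure Python (an illustrative model, not the actual implementation; the function name is made up):

```python
def unflatten_names_and_sizes(names, sizes, dim, namedshape):
    """Split dimension `dim` into the (name, size) pairs of `namedshape`.
    The new sizes must multiply up to the size of the original dim."""
    new_names, new_sizes = zip(*namedshape)
    prod = 1
    for s in new_sizes:
        prod *= s
    if prod != sizes[dim]:
        raise RuntimeError("namedshape sizes do not multiply up to the original size")
    return (list(names[:dim]) + list(new_names) + list(names[dim + 1:]),
            list(sizes[:dim]) + list(new_sizes) + list(sizes[dim + 1:]))
```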
Future:
- It is possible to make it take a dict in Python >= 3.6 because those are
ordered by default, but I'll leave that task for the future.
Test Plan: - new tests [namedtensor ci]
Differential Revision: D17192655
Pulled By: zou3519
fbshipit-source-id: fd9bd2f462c23a4df1c23d66f2aa95076ff1b160
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26106
Previously, in the named tensors build, an operator is marked as
non-traceable if ANY of its overloads are named tensor overloads. This
breaks the tracer for things like torch.full (has a names= overload for
named tensor) and tensor.sum (has a Dimname overload for named tensor).
This PR fixes the problem by putting the "no tracer support" logic into
the location where the tracer attempts to construct a graph by adding a
Dimname/DimnameList argument to a node.
Test Plan:
- new test in test_jit.py to check if torch.full is traceable
- new test in test_namedtensor.py to check what happens when someone
tries to trace a function that uses named tensor APIs.
- [namedtensor ci]
Differential Revision: D17353452
Pulled By: zou3519
fbshipit-source-id: b0b843c8357ffe54baee6e8df86db914f0b1ece4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25843
`tensor.align_to(*names)` permutes the dimensions of `tensor` and adds
additional 1-sized dimensions such that the output tensor has dimensions
in the same order as `names`. All dimensions of `tensor` must be
present in `names`, in addition, this function requires that all dims of
`tensor` be named.
`tensor.align_as(other)` is equivalent to
`tensor.align_to(*other.names)`.
I'm planning on changing `torch.align_tensors(*tensors)` to align closer
to these semantics because there didn't seem to be a clear use case for the old
semantics that preserve unnamed dimensions. That will come in a future
change.
Test Plan: - new tests [namedtensor ci]
Differential Revision: D17255549
Pulled By: zou3519
fbshipit-source-id: 1e437ad81e9359b4d5bd0e7e64c3a1be441fc3e3
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25842
`tensor.refine_names(*names)` takes `tensor` and attempts to name its
dimensions `names` out-of-place. If a dimension `i` already had a name,
then it cannot be changed (so tensor.names[i] must equal names[i]);
if the original dimension did not have a name, then the new name
(names[i]) can be anything.
`tensor.refine_names(*names)` also accepts a glob '*' that greedily selects
names from `tensor`. Here are some examples:
- `Tensor[None].refine_names('N') -> Tensor[N]`
- `Tensor[N].refine_names('N') -> Tensor[N]`
- `Tensor[N].refine_names('D') -> Error!`
- `Tensor[N].refine_names(None) -> Error!`
- `Tensor[None, None].refine_names('*', 'D') -> Tensor[None, D]`
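The refinement rule, including the greedy glob, can be modeled in pure Python (an illustrative sketch, not the actual ATen implementation):

```python
def refine_names(old_names, new_names):
    """Refine `old_names` to `new_names`: a None name may become anything,
    but an existing name must be repeated verbatim. A '*' glob expands
    greedily to the corresponding old names."""
    new_names = list(new_names)
    if '*' in new_names:
        i = new_names.index('*')
        pad = len(old_names) - (len(new_names) - 1)
        new_names = new_names[:i] + list(old_names[i:i + pad]) + new_names[i + 1:]
    if len(new_names) != len(old_names):
        raise RuntimeError("number of names must match number of dims")
    out = []
    for old, new in zip(old_names, new_names):
        if old is not None and old != new:
            raise RuntimeError(f"cannot refine existing name {old!r} to {new!r}")
        out.append(new)
    return out
```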
Test Plan: - new tests [namedtensor ci]
Differential Revision: D17255548
Pulled By: zou3519
fbshipit-source-id: fdbdb3a12f24fbe37ce1e53ed09dc8a42589d928
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26050
Throws a warning once when someone attempts to attach names to a tensor.
This is guaranteed to happen at the callsite `set_named_tensor_meta`.
Test Plan: - run tests [namedtensor ci]
Differential Revision: D17331634
Pulled By: zou3519
fbshipit-source-id: 44f5e5c95acd9c7ba543c1210a3b1314aab348f0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25948
Previously, test/test_namedtensor.py is skipped if pytorch was not
compiled with BUILD_NAMEDTENSOR. Now, we skip test/test_namedtensor.py
if pytorch was not compiled with BUILD_NAMEDTENSOR or if
TEST_NAMEDTENSOR is not set.
This is done in preparation for turning on BUILD_NAMEDTENSOR=1 permanently;
at that point we will use TEST_NAMEDTENSOR to differentiate between the
named tensor ci and the regular ci.
Test Plan:
- [namedtensor ci] (and check that the named tensor tests are actually
running).
Differential Revision: D17300132
Pulled By: zou3519
fbshipit-source-id: 928f71f4d50445680b6ae1aa54b8857bc92e4d08
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25424
Test Plan
- new tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D17120399
Pulled By: zou3519
fbshipit-source-id: 93d7944f2ec4c5a7256f505323b879af706131df
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25672
There are three overloads:
1) flatten(tensor, int start_dim, int end_dim, Dimname out_dim)
2) flatten(tensor, Dimname start_dim, Dimname end_dim, Dimname out_dim)
3) flatten(tensor, DimnameList dims, Dimname out_dim)
`flatten` joins all the dimensions between start_dim and end_dim into
one dimension. The name of the output dimension is specified by
`out_dim`.
In the case where flatten takes a list of `dims` to flatten, all the
dimensions in `dims` must be in consecutive order.
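The consecutiveness check and the resulting names can be sketched as follows (a hypothetical pure-Python model; the function name is made up):

```python
def flatten_names(names, dims, out_dim):
    """Collapse the consecutive run of dimensions named in `dims` into a
    single dimension named `out_dim`."""
    idxs = [names.index(d) for d in dims]
    # The requested dims must form a consecutive, in-order run.
    if idxs != list(range(idxs[0], idxs[0] + len(idxs))):
        raise RuntimeError("dims to flatten must be consecutive and in order")
    return names[:idxs[0]] + [out_dim] + names[idxs[-1] + 1:]
```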
Test Plan: - new tests [namedtensor ci]
Differential Revision: D17192656
Pulled By: zou3519
fbshipit-source-id: 55d2b23358bd77cbef299f66701a8da8cd194f4f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25711
This function renames the dimensions of a tensor out-of-place. Because
of that, I think `tensor.renamed(...)` is a clearer name: `view_names`
has the connotation that we can use names to `view` our tensors with a
"different shape", but what this function really does is let us rename a
tensor no matter the previous names.
`tensor.names_`, the in-place version of this, is unchanged for now.
However, we might delete this or not advertise it if it has no use case
and also because its naming is a little inconsistent with `tensor.renamed`.
Test Plan: - [namedtensor ci]
Differential Revision: D17206515
Pulled By: zou3519
fbshipit-source-id: 67053951fcc8130c84566b5ebbdce35ef619c90d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25673
We recently moved new_empty into ATen. new_empty doesn't support named
tensors (in fact, it was hackily supporting named tensors before). This
fixes the named tensor test by changing all uses of `new_empty` to
`empty`.
Named tensor support for `new_empty` will come eventually, but it might
be a little tricky.
Test Plan: - [namedtensor ci]
Differential Revision: D17206043
Pulled By: zou3519
fbshipit-source-id: 1697bd1d63e7cb344f3d459a29af0fcb9696ea49
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25604
In this initial version:
- autograd ignores all names.
- tensor.grad is unnamed, unless the user manually assigns to it.
- if a grad tensor has any names, perhaps the user was hoping for some
alignment-checking behavior that named tensor offers for other ops. We
raise a warning in this case.
Future: do some more extensive checking to see if this actually works in
all cases.
Test Plan:
- [namedtensor ci]
- Check a warning is raised if a grad tensor has names.
- Check tensor.grad field is unnamed.
- Check that we can perform backward on an op that doesn't explicitly
support names in backward. `sigmoid` is one such op.
Differential Revision: D17171788
Pulled By: zou3519
fbshipit-source-id: 64837fde94d8269610b6d3539ac025516dbe1df4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25569
Test Plan
- new tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D17159121
Pulled By: zou3519
fbshipit-source-id: c68bdb543155488aa3634f908bd576e5c30c8d77
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25564
There are a number of ops that get called while printing tensors
depending on how large the tensors are. This PR makes it so that before
we attempt to format tensor data for printing, we drop the names of the
tensor (if there are any). This is easier than supporting named tensors
for all of those ops (which should happen eventually).
Test Plan: - new test [namedtensor ci]
Differential Revision: D17158872
Pulled By: zou3519
fbshipit-source-id: 282023837645b8cb16a4d93896a843dd598fc738
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25563
Before, for binary ops, name inference occurred after shape checks. This
defeats the purpose of names, because the names are supposed to tell
the user that, e.g., their tensors are misaligned or that they are adding
incompatible tensors.
This PR changes TensorIterator so that names are computed before shape checks and
propagated after the binary ops are finished. In order to support this,
this PR makes the following changes:
- adds a `names_` field to TensorIterator, similar to `shape_`. This is
necessary to hold the output names, that are computed in
`compute_names`, until they are used in `propagate_names_to_outputs()`.
Test Plan: Imported from OSS
Differential Revision: D17158869
Pulled By: zou3519
fbshipit-source-id: 0caa90f7a93e4d9bdb2549cd330cc3abd2258868
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25568
Test Plan
- new test [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D17159069
Pulled By: zou3519
fbshipit-source-id: fbc185ea5865b128508451096b742ac18e467670
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25566
masked_select returns a tensor with None names. However, it broadcasts
its inputs so we need to perform a check that they are broadcastable.
Test Plan: - new tests [namedtensor ci]
Differential Revision: D17159071
Pulled By: zou3519
fbshipit-source-id: ad201f3f73bc54163ede1ba3d906d2409ebef475
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25431
I put the name propagation logic in a central place, `make_reduction`,
that creates a TensorIterator for the reduction. This lets us implement
name inference rules for mean, std, var, std_mean, and var_mean.
Test Plan
- new tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D17123577
Pulled By: zou3519
fbshipit-source-id: 2d47080a40da0c4bcabbb3df71ffa8fbeb7a14c6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25345
Test Plan
- New tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D17101486
Pulled By: zou3519
fbshipit-source-id: 58e803b042056ee6abab8551517f74078f2b81d5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25177
Test Plan
- new tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D17051452
Pulled By: zou3519
fbshipit-source-id: 7259cdb7ba7f480035528cf3c60ef6d051e42db5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25123
The approach is different for CPU and CUDA. In particular:
- in CPU, I added a name inference rule to bmm_out
- in CUDA, bmm calls THCTensor_(baddbmm) so I added a name inference
rule to that.
When one calls baddbmm on CPU or CUDA, it'll error out with NYI due to
`named_guard: True` on it in native_functions.yaml. I'm not planning on
implementing baddbmm soon because it's a little tricky to add to CPU
and bmm is the more commonly used function.
Test Plan
- new tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D16998073
Pulled By: zou3519
fbshipit-source-id: 8dc01898964318717911f28eebd6cdfffc7dfcf2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24474
torch.dot is a little weird. It ignores the names of its inputs to be
consistent with the rest of our matrix multiplication functions.
I've written the implementation using a helper function that is also
used by other matrix multiplication functions so that it is easy to
change the behavior.
Test Plan
- new tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D16915802
Pulled By: zou3519
fbshipit-source-id: 628a6de1935357022cc92f4d23222736a70bb070
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24471
mv(Tensor[M, N], Tensor[O]) ignores the names of N and O and returns a
tensor with names [M].
Test Plan: - new tests [namedtensor ci]
Differential Revision: D16915805
Pulled By: zou3519
fbshipit-source-id: d7d47903f249f85ef3be8a188d51993834bf5f55
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24469
tensor.expand(*sizes) returns a tensor with names equal to tensor.names
plus unnamed padding in the beginning dimensions.
For example, Tensor[H, W].expand(10, 2, 128, 128) -> Tensor[None, None,
H, W].
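The rule is simple enough to sketch in pure Python (illustrative only; not the actual implementation):

```python
def expand_names(names, new_ndim):
    """Names for tensor.expand: existing names keep their positions at the
    end; new leading dimensions are unnamed (None)."""
    return [None] * (new_ndim - len(names)) + list(names)
```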
Test Plan: - new tests [namedtensor ci]
Differential Revision: D16915804
Pulled By: zou3519
fbshipit-source-id: 77ac97f42e9959d7f6d358c5286e3dc27488e33d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24306
Featuring:
- a new way of writing name inference tests. At some point I'll migrate
the older tests over.
- The out= variants aren't implemented. This is because they are a
little weird: the output gets resized, but I haven't thought through
what semantics that should have.
Test Plan: - new tests [namedtensor ci]
Differential Revision: D16915801
Pulled By: zou3519
fbshipit-source-id: 29ae2ee414c7d98e042965458c5dccef7ddbd4dd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24921
Let `unnamed = torch.randn(1, 1, 1)` and `named = torch.randn(1, 1,
names=('N', 'C'))`.
Previously, there was a bug where `unnamed + named` would error out.
This happened because `unify_from_right(unnamed.opt_names(),
named.opt_names())` would return `named.names()`, which was propagated
to the output tensor. However, the output tensor has dim 3, but
`named.names()` only has 2 elements, so the code would throw an error.
The solution implemented in this PR is to stop trying to do premature
optimization. If none of the inputs to an operation have names, then
don't run name inference. However, if any inputs do, then materialize
the names and run name inference.
It's possible to make this more efficient for the case where some inputs
are named and some aren't, but we should benchmark these cases
and determine if it is necessary for it to be more efficient.
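The fixed behavior can be modeled as: materialize the names of every input, then unify from the right, treating None as a wildcard (an illustrative pure-Python sketch; the real `unify_from_right` lives in C++):

```python
import itertools

def unify_from_right(names_a, names_b):
    """Unify two (possibly different-length) name lists from the right.
    At each position the names must match, or one must be None."""
    result = []
    pairs = itertools.zip_longest(reversed(names_a), reversed(names_b))
    for a, b in pairs:
        if a is None:
            result.append(b)
        elif b is None or a == b:
            result.append(a)
        else:
            raise RuntimeError(f"names {a!r} and {b!r} do not match")
    return list(reversed(result))
```

With the materialized 3-dim unnamed input, `unnamed + named` now correctly yields names `[None, 'N', 'C']` instead of erroring.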
Test Plan: - new tests [namedtensor ci]
Differential Revision: D16930710
Pulled By: zou3519
fbshipit-source-id: 0de73c803c8b0f9a1c2d80684b9a47cccba91cbc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24885
Store a static pre-allocated vector of names. When one calls
`default_names`, it gives a const reference to some amount of these
names.
Also make clearer the maximum number of dimensions we support for named
tensors. Right now it is 64 but that number is easy to change. 64
follows some internal pytorch maximum number of dimensions;
TensorIterator reduce ops have a limit of 64 dims.
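The idea can be sketched in pure Python (an illustrative model of the preallocation, not the actual C++ code; the 64-dim cap mirrors the limit mentioned above):

```python
MAX_NAMED_TENSOR_DIM = 64  # assumed cap, following TensorIterator's 64-dim limit
_DEFAULT_NAMES = [None] * MAX_NAMED_TENSOR_DIM  # allocated once, up front

def default_names(ndim):
    """Hand out a slice of the preallocated all-None names instead of
    allocating a fresh list on every call."""
    assert ndim <= MAX_NAMED_TENSOR_DIM
    return _DEFAULT_NAMES[:ndim]
```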
Test Plan: - new tests [namedtensor ci]
Differential Revision: D16915803
Pulled By: zou3519
fbshipit-source-id: 931741b199456f8976882b82f25ab5af6dcd108b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24087
Added name inference rules for softmax and log_softmax.
Added the overloads for Dimname dim to softmax and log_softmax.
Test Plan: - [namedtensor ci]
Differential Revision: D16763391
Pulled By: zou3519
fbshipit-source-id: 676a14666d42441eb7d3c9babef7461c7b78d290
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24028
Previously, torch.abs(tensor, out=out) would ignore the names of the
`out` tensor and overwrite them with the names of `tensor`.
This patch changes the behavior to the following:
1) If `out` does not have names, then overwrite them with `tensor.names`.
2) If `out` does have names, then check that `out.names` equals
`tensor.names`.
This patch also includes the following clean ups:
- renamed `default_names` to `FIXME_default_names` because it is
inefficient and needs to be fixed.
- Renamed impl::internal_get_names / impl::internal_has_names to
impl::get_names / impl::set_names. Devs should feel free to use them, so
I removed the internal_ prefix.
- Moved internal_set_names to NamedTensor.{h, cpp}. These functions
still have the internal_ prefix because their use requires caution.
Test Plan: - [namedtensor ci]
Differential Revision: D16763387
Pulled By: zou3519
fbshipit-source-id: 57dcc7c759246def0db2746d1dca8eddd5e90049
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23973
Without loss of generality, I describe the API for `tensor.view_names`.
`tensor.names_` has an analogous API.
`tensor.view_names(*names)` returns a view on tensor with named dims `names`.
`names` must be of length `tensor.dim()`, unless '*' is in `names`,
in which case it (known as the "glob") is expanded greedily to equal the
corresponding names from `tensor.names`.
For example,
```
>>> x = torch.empty(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> x.view_names('*', 'height', 'width').names
('N', 'C', 'height', 'width')
>>> x.view_names('batch', '*', 'width').names
('batch', 'C', 'H', 'width')
```
tensor.view_names(**rename_map) returns a view on tensor that has
renamed dims as specified in the mapping `rename_map`.
For example,
```
>>> x = torch.empty(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> x.view_names(W='width', H='height').names
('N', 'C', 'height', 'width')
```
These are different(!!!) from the C++ API, which only allows the
following:
- tensor.view_names(optional<DimnameList>)
C++ API parity for named tensors is not important right now; I am
punting that to the future.
Test Plan: - [namedtensor ci]
Differential Revision: D16710916
Pulled By: zou3519
fbshipit-source-id: 7cb8056c0fb4c97b04c3a2d1dd0f737e0a67ce34
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23962
This change should make the semantics clearer.
`tensor.names_(names)` sets tensor.names to be `names`.
`tensor.view_names(names)` returns a view of the tensor with names
`names`.
Test Plan
- [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D16710915
Pulled By: zou3519
fbshipit-source-id: c82fa9812624d03c86f7be84b0a460e3c047aaa0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23804
`output = tensor.align_to(names)` returns a view of `tensor` such that
`output.names = names`. Dimensions with the same names in `tensor` and
`output` have the same sizes; dimensions with new names have size 1.
The following must be true for this operation to succeed:
1) tensor.names must be a subsequence (not necessarily contiguous) of `names`
2) Aligning tensor.names to names must not change the absolute position from the
right of any unnamed dimension.
In practice, these constraints mean that aligning cannot transpose
names.
Some examples:
- Tensor[C].align_to(C) -> Tensor[C]
- Tensor[N].align_to([N, C]) -> Tensor[N, C]
- Tensor[H, W].align_to([N, H, W, C]) -> Tensor[N, H, W, C]
- Tensor[None].align_to([N, None]) -> Tensor[N, None]
- Tensor[N].align_to([N, None, None]) -> Tensor[N, None, None]
Examples of error cases:
- Tensor[W, H].align_to([N, H, W, C]) -> Error (not a subsequence)
- Tensor[None, H].align_to([None, H, W]) -> Error (would change the
absolute position from the right of a None dimension)
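Constraint (1) is an ordinary subsequence check, which can be sketched in pure Python (illustrative only; this models rule (1) and does not handle the unnamed-dimension rule (2)):

```python
def is_subsequence(names, target):
    """True if `names` appears in `target` in order, not necessarily
    contiguously. Models why aligning cannot transpose names."""
    it = iter(target)
    # each name must be found in `target` strictly after the previous one
    return all(any(n == t for t in it) for n in names)
```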
`torch.align_tensors(*tensors)` aligns the named dimensions of each
tensor according to the alignment rules so that they can be used in an
operation. More concretely, it aligns each tensor to the
longest names among the names of the tensors in `tensors`.
This allows users to emulate "broadcasting by names", which is one of
the things named tensors tries to enable. Here is an example:
```
imgs: Tensor[N, C, H, W]
scale: Tensor[N]
// Doesn't work: broadcasting is positional (from the right) by default
imgs * scale
// Does work
imgs, scale = torch.align_tensors(imgs, scale)
imgs * scale
```
Future:
- Consider allowing broadcasting by names by default.
Test Plan:
- The diff looks pretty large but more than half of it is testing.
- new tests [namedtensor ci]
Differential Revision: D16657927
Pulled By: zou3519
fbshipit-source-id: e2f958bf5146c8ee3b694aba57d21b08e928a4e6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24108
`torch.empty_like(tensor)` and `tensor.clone()` both propagate names to
the output tensor.
As a part of this change, I fixed the empty(..., names=) overload to
include the `memory_format` argument in the normal `empty` declaration
in native_functions.yaml.
Test Plan: - [namedtensor ci]
Differential Revision: D16763392
Pulled By: zou3519
fbshipit-source-id: c7b2bc058d26a515a5fd8deef22c2acb290c8816
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24107
In the short term, we implement this by having overloads for each of
these functions. In the long term, the plan is to move DimnameList to
TensorOptions so that we do not have to duplicate work.
Also fixes the implementation of empty. If there are no names, we should
just return an unnamed tensor instead of telling the user we don't
support their backend/layout.
Test Plan: - [namedtensor ci]
Differential Revision: D16763393
Pulled By: zou3519
fbshipit-source-id: 7324a6b157187d4f74abc5459052f3323a417412
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24202
tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.
Test Plan: - run tests [namedtensor ci]
Differential Revision: D16773014
Pulled By: zou3519
fbshipit-source-id: 61024303c1a34db631cc4cb2c53757345e40d72c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24106
Test Plan
- Code reading. assertTensorDataAndNamesEqual isn't used in this commit
but it'll be used in future commits.
- [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D16763390
Pulled By: zou3519
fbshipit-source-id: 170e27ebc4d79aca939c5d101489b20faedc6133
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24105
tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.
Test Plan: - run tests [namedtensor ci]
Differential Revision: D16763388
Pulled By: zou3519
fbshipit-source-id: 4b2fb3acc0514515e7ca805dbc5c3d4a9bd96317
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23746
`torch.empty_like(tensor)` and `tensor.clone()` both propagate names to
the output tensor.
As a part of this change, I fixed the empty(..., names=) overload to
include the `memory_format` argument in the normal `empty` declaration
in native_functions.yaml.
Test Plan: - [namedtensor ci]
Differential Revision: D16647821
Pulled By: zou3519
fbshipit-source-id: 43b261f3456b6bf5fca7b6313e659b259a2ba66d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23743
In the short term, we implement this by having overloads for each of
these functions. In the long term, the plan is to move DimnameList to
TensorOptions so that we do not have to duplicate work.
Test Plan: - [namedtensor ci]
Differential Revision: D16647820
Pulled By: zou3519
fbshipit-source-id: c6c53c5f26a86b730cbc4d4eb69907ac0e08fc65
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23801
Test Plan
- Code reading. assertTensorDataAndNamesEqual isn't used in this commit
but it'll be used in future commits.
- [namedtensor ci]
gh-metadata: pytorch pytorch 23801 gh/zou3519/90/head
Test Plan: Imported from OSS
Differential Revision: D16667816
Pulled By: zou3519
fbshipit-source-id: 66519cd5d17bda4c4304a1bc6e2a03ae59d49e39
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23624
tensor.set_names(names) is the out-of-place variant of
tensor.set_names_(names). This naming is probably confusing so I am
taking any and all suggestions.
Test Plan:
- run tests [namedtensor ci]
gh-metadata: pytorch pytorch 23624 gh/zou3519/86/head
Differential Revision: D16621830
Pulled By: zou3519
fbshipit-source-id: f8a3837d3a370b41210e938369348dcbb4aee53a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23623
This is a quick, not-user-facing check for if pytorch was built with BUILD_NAMEDTENSOR=1.
Test Plan:
- run tests [namedtensor ci]
gh-metadata: pytorch pytorch 23623 gh/zou3519/85/head
Differential Revision: D16621829
Pulled By: zou3519
fbshipit-source-id: d7e1161dc176bab2c1f953265722daeba1e63102
Summary:
`is_pinned` was moved to native_functions.yaml, disabling it for named
tensors. This PR re-enables its usage for named tensors.
I wrote a named inference rule for torch.clone(), but something happened
to it. Disable it for now so we can get the namedtensor ci to be green.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23597
Test Plan: - run tests [namedtensor ci]
Differential Revision: D16581771
Pulled By: zou3519
fbshipit-source-id: 498018cdc55e269bec80634b8c0a63ba5c72914b
Summary:
Currently the build system accepts USE_NAMEDTENSOR from the environment
and turns it into NAMEDTENSOR_ENABLED when passing to CMake.
This discrepancy does not seem necessary and complicates the build
system. The naming of this build option is also semantically incorrect
("BUILD_" vis-a-vis "USE_"). This commit eradicates this issue before it
is made into a stable release.
The support of NO_NAMEDTENSOR is also removed, since PyTorch has been
quite inconsistent about "NO_*" build options.
---
Note: All environment variables with their names starting with `BUILD_` are currently automatically passed to CMake with no need of an additional wrapper.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22360
Differential Revision: D16074509
Pulled By: zou3519
fbshipit-source-id: dc316287e26192118f3c99b945454bc50535b2ae