Right now, if we add a zero-filled sparse tensor to another sparse
tensor, both tensors must have the same "density" (dimI, dimV) and size
(tensor.size()) for the addition to succeed. This relaxes that
constraint: if both tensors have the same tensor.size() and at
least one is zero-filled, they can be added successfully.
Before:
```
i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5]).unsqueeze(1)
sparse_mat = torch.sparse.FloatTensor(i, v, torch.Size([2,3,1]))
zeros = torch.zeros(sparse_mat.size(), layout=torch.sparse_coo)
sparse_mat + zeros
RuntimeError: cadd operands have incompatible sizes or dimension types
at ../src/THS/generic/THSTensorMath.c:126
```
After: no error.
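Continuing the snippet above, a minimal sketch of the expected outcome with the relaxed check (illustrative, using the same values):
```
res = sparse_mat + zeros
print(res.to_dense())
# roughly:
# tensor([[[0.], [0.], [3.]],
#         [[4.], [0.], [5.]]])
```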
* Fix torch.tensor(...) device-type calculation when used with numpy and type inference.
* Fix tensor device type inference as well.
* Better variable type inference: infer cuda-ness only if device is not specified.
Fixes #5748.
Added an unsafe version so that embedding isn't slowed down.
* Create safe and unsafe versions of sparse_coo_tensor
* rename sparse_coo_tensor_unsafe to _sparse_coo_tensor_unsafe
* refactor
* make helper static inline
* add sparse size check test
* fix lint
* Separate cuda-ness from dtype.
There are no longer dtypes like torch.cuda.int64; only torch.int64, etc., which correspond to at::ScalarType.
At the python arg parser level, the corresponding ATen type is selected from the combination of (ScalarType, Layout, Device).
There is also some currently unused code in here for supporting ScalarType in native_functions; this will be used for specifying aggregate types
on reduction functions.
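A minimal sketch of the intended behavior (assuming the current keyword-argument form of the factories; not taken from the PR's tests):
```
import torch

cpu_t = torch.zeros(2, 3, dtype=torch.int64)
print(cpu_t.dtype)  # torch.int64 -- there is no torch.cuda.int64 anymore
if torch.cuda.is_available():
    gpu_t = torch.zeros(2, 3, dtype=torch.int64, device='cuda')
    # cuda-ness is carried by the device, not the dtype
    assert gpu_t.dtype == cpu_t.dtype == torch.int64
```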
* Fix test_autograd.
* Add defaults to randint_like.
* Track is_cuda in py tensor types.
* Fix test_sparse.
* Fix multiprocessing.
* Fix rnn.
* Fix test_nn.
* Fix flake8.
* Introduce torch.layout and split layout from dtypes.
Tensors (and tensor types) now have a 'layout' attribute that returns either 'torch.strided' or 'torch.sparse_coo'.
Previously, dtypes were 1-to-1 with ATen types/PyTensorTypes; the impetus behind this decision was to make things easy in the common case
(i.e. specifying a type in a factory function). But this doesn't really follow for sparsity, which isn't a common case.
It also doesn't properly represent the concept of a dtype, which in numpy is a proper scalar type (i.e. roughly the type returned from indexing the
last dimension of an n-d array). But this should be the same whether the tensor is represented via strides, sparsity, etc.
This is accomplished by:
1) having the dtype of a tensor return the (device-type, scalar-type) combination, i.e. torch.cuda.float32, so both
torch.cuda.FloatTensor and torch.cuda.sparse.FloatTensor have the same dtype
2) Adding a layout parameter to python functions, where the combination of (dtype, layout) maps to an ATen type that is used for dispatch.
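A small illustrative sketch of the layout split (assuming the current torch API; not taken from the PR):
```
import torch

dense = torch.zeros(2, 3)
sparse = torch.zeros(2, 3, layout=torch.sparse_coo)
print(dense.layout)   # torch.strided
print(sparse.layout)  # torch.sparse_coo
# The layout no longer leaks into the dtype: both tensors report the same
# scalar dtype, and (dtype, layout) together pick the type used for dispatch.
assert dense.dtype == sparse.dtype
```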
* Formatting, make init throw python_error.
* Fix cuda not enabled error message.
* Fix test.
This is in preparation for splitting out sparsity (layout) from dtypes; it is complex to maintain these,
and tensor.new(...) is a legacy API in any case.
* Add numpy.array-like type inference to torch.tensor.
* Temporary fix for int/double types.
* Treat python floats as the default (scalar) dtype.
* Also make 0-length sequences the default scalar type and add more tests.
* Add type inference to sparse_coo_tensor.
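A brief sketch of the numpy-style inference described in the bullets above (illustrative; the exact default dtype depends on the configured default tensor type):
```
import torch

torch.tensor([1, 2, 3]).dtype        # integral data -> torch.int64
torch.tensor([1.0, 2.0]).dtype       # Python floats -> default scalar dtype
torch.tensor([]).dtype               # 0-length sequence -> default scalar dtype

i = torch.tensor([[0, 1], [1, 0]])
v = torch.tensor([3.0, 4.0])
torch.sparse_coo_tensor(i, v, (2, 2)).dtype   # inferred from the values
```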
* Fix sparse test.
* Remove allow_variables.
* Check numpy platform bits.
* Address review comments.
* Make suggested changes to constraints.
* More checks for Windows builds.
* Fix test for windows.
* Add torch.sparse_coo_tensor factory.
Notes:
1) I didn't add Tensor.new_sparse_coo_tensor; it didn't seem particularly useful, but it's easy to add
2) This doesn't do the type inference, i.e. torch.sparse_coo_tensor(indices=LongTensor, values=IntTensor)
will return a sparse tensor corresponding to the default type rather than a sparse IntTensor. We can add
type inference later when we add it to other factories.
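A minimal usage sketch of the new factory (mirroring the example earlier in this log; per note 2, the result uses the default type rather than inferring one):
```
import torch

i = torch.LongTensor([[0, 1, 1], [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
s = torch.sparse_coo_tensor(i, v, torch.Size([2, 3]))
print(s.to_dense())
# roughly:
# tensor([[0., 0., 3.],
#         [4., 0., 5.]])
```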
* Fix merge.
* Use type_conversion function from python_variable_methods.
* Add torch.empty, torch.full and new_ size Tensor factory methods.
This adds torch.full and torch.empty, equivalents of np.full and np.empty.
In addition, this adds the size-based Tensor factory methods new_empty, new_ones, new_full, and new_zeros,
which are meant to complete the separation of the legacy "new" method into data-based and size-based
functions.
This also fixes an issue in sparse zeros_like when the given dtype didn't match the argument's dtype.
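A short sketch of the new factories and the size-based new_* methods (illustrative, assuming the current signatures):
```
import torch

torch.empty(2, 3)            # uninitialized values, like np.empty
torch.full((2, 3), 7.0)      # constant fill, like np.full

base = torch.ones(2, 2, dtype=torch.float64)
base.new_zeros(4)            # size-based: inherits dtype/device from `base`
base.new_full((3,), 2.5)     # size + fill value, no data argument
```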
* Get rid of unnecessary zero in sparse tensor zeros_like.
* Fix test if only 1 cuda device.
* Support dtypes in legacy new constructors.
* Add comment about why we don't have dtype for sparse (indices, values).
* separate legacy tensor ctor vs new (new includes dtypes).
* Use TypeError.
* Handle copying empty sparse tensors to/from CPU, GPU.
This is likely not a robust fix because it special-cases the situation where both the indices and values are empty
rather than handling each one separately. But this is currently blocking a change introducing devices to constructors.
* Guard sizes being NULL.
* Add numpy-style dtypes to Variable factories.
1) Add numpy-style dtypes corresponding to torch tensor types. These are:
torch.float16, torch.float32, torch.float64, torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64
as well as torch.cuda, torch.sparse, and torch.cuda.sparse equivalents.
2) Adds "legacy" names for the above dtypes that correspond more closely to existing tensor names. These are:
torch.half, torch.float, torch.double, torch.short, torch.int, torch.long.
torch.byte and torch.char don't exist because they either don't match numpy semantics or differ across architectures.
3) Adds a "dtype" parameter to Variable factories (e.g. zeros, ones) that allows the user to specify the type without changing the default tensor type.
4) Adds a "dtype" getter to Variables that return the canonical dtype from 1)
This PR is missing the following useful features that should be added in the future:
A) We only add the "dtype" parameter to auto-generated factories; hand-written factories like in tensor_new.cpp don't support this yet.
B) We don't allow type conversions to use dtypes; that should be added to type(param) or a new function.
C) We don't yet have a "device" parameter for these factories; right now, they will only create Variables on the default device.
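A small sketch of items 3) and 4) above (illustrative; device handling is deliberately omitted, per C):
```
import torch

x = torch.zeros(2, 3, dtype=torch.float64)   # 3) dtype without changing the default type
print(x.dtype)                               # 4) torch.float64
print(torch.ones(4, dtype=torch.int32).dtype)  # torch.int32
```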
* backend_to_string can be private.
* Define python binding argument indexes in a simpler way.
* add all_declared_types, still need to hook it up to THPDType.
* Fix all_declared_types for missing types (it's Sparse + Half).
* Ensure cuda dtypes are created even if compiled with NO_CUDA=1.
* Fix case where dtype is provided but dispatch is via namespace.
This happens in ones_like, empty_like, randn_like.
There is some question whether we should do:
1) at::ones_like(tensor).toType(dtype)
2) at::ones_like(tensor.toType(dtype))
I did the former because it matches the numpy documentation ("Overrides the data type of the result.") and is easier
to implement; see the short illustration after the list below.
Note that the above causes an extra copy, either of the input or of the output.
Here's a better implementation:
1) Make zeros_like, ones_like native functions that take an optional type (named dtype?).
2) Match the type argument with the dtype, so we don't have two different parameters.
3) Call at::zeros_like(input, type) -> at::native::zeros_like(input, type) -> type.zeros(input.sizes())
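A tiny illustration of the chosen semantics from the user's side (assuming the keyword-argument spelling; the internal ATen dispatch is not shown):
```
import torch

t = torch.ones(3, dtype=torch.int32)
# Like numpy's *_like functions, dtype overrides the data type of the result.
print(torch.ones_like(t, dtype=torch.float64).dtype)   # torch.float64
```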
* Don't return from maybe_initialize_cuda.
* Don't leak DType name.
* Address cpp review comments.
* Share code between sparse and non-sparse test_dtypes.
* Rewrite _like functions as native functions with an explicit type parameter.
* Use type 'Type' instead of 'dtype' for consistency.
* Address review comments.
* Handle arg_idx when there is requires_grad but no dtype in python_binding_arguments.
torch.mm(sparse, dense) -> dense works for tensors. This PR makes it work for variables as well.
I renamed mm to _mm in Declarations.cwrap and wrote a native mm function that wraps _mm for the dense case and addmm for the sparse case.
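A minimal sketch of the call this enables (the Variable wrapper reflects the API of that era; illustrative only):
```
import torch
from torch.autograd import Variable

i = torch.LongTensor([[0, 1], [1, 0]])
v = torch.FloatTensor([2.0, 3.0])
sparse = torch.sparse.FloatTensor(i, v, torch.Size([2, 2]))
dense = torch.randn(2, 3)

torch.mm(sparse, dense)                      # already worked for tensors
torch.mm(Variable(sparse), Variable(dense))  # now also works for Variables
```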
* Remove setting coalesce to 0 in sparse transpose_
* Remove setting coalesced to 0 in THCSTensor transpose_
* Add test for transpose's coalesce invariant
This adds more heavy sanity checking when we run to_dense(); in particular,
we make sure that if it claims to be coalesced, it truly is coalesced, and if
it is not, that the coalesced version densifies to the same thing.
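A hypothetical helper along the lines of the checks described above (not the actual test code; _indices(), is_coalesced(), and coalesce() are the assumed accessors):
```
import torch

def checked_to_dense(x):
    dense = x.to_dense()
    # the coalesced version must densify to the same thing
    assert torch.equal(dense, x.coalesce().to_dense())
    if x.is_coalesced():
        # if it claims to be coalesced, its indices must contain no duplicates
        idx = [tuple(col) for col in x._indices().t().tolist()]
        assert len(idx) == len(set(idx))
    return dense
```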
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Apparently, the algorithm only guarantees the output is coalesced if
the inputs are coalesced.
I'm planning to do another PR that does much more stringent correctness
testing for the 'coalesced' bit shortly, but y'all should merge
this one first.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Fixes #1783.
There is an undocumented invariant in PyTorch that we should
try to avoid having storage == NULL as much as possible (even
though Torch supports it.) This commit properly documents the
invariant, and fixes a bug in sparse where the invariant was
not respected. This now means that sparse tensors now correctly
remember what GPU they are associated with.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Fixes #1782.
The default operation should be cheap: the user can always choose to
make a copy explicitly on the way in. Note that this is a
BACKWARDS COMPATIBILITY BREAKING change. However, we DO create
a new tensor wrapper (so we are not affected by subsequent
size changes, etc.)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Make sparseMask error if mask is uncoalesced.
Fixes #1447.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Add test for sparse adagrad.
Previously, the sparse codepath was not exercised at all; this commit
adds a very simple test case, "sparse Rosenbrock": the idea is to do
Rosenbrock but then knock out one of the dimensions so that the
tensor is sparse.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
As discussed in #1441.
I also added some docs giving clear guidance about coalescing
in sparse tensors.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Simplify _gen_sparse
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Randomly generate an uncoalesced tensor and test with it.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Simpler implementation of cpu_only suggested by @apaszke
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Better implementation of randn, suggested by @soumith
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Lint fix.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Fix CUDA type error.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Refactor test_sparse to reduce boilerplate.
Instead of manually creating a helper function, threading an is_cuda
parameter around, and creating a test method for CUDA and non-CUDA
variants, we take a different approach:
- There are now some new member variables initialized in setUp which
control how we carry out the test; at the moment,
it's just whether or not we are using CUDA. This means
you don't have to pass is_cuda around, or do a conditional to
get the triplet of constructors you need.
I'll note that I am not a big fan of member variables in test
objects, but these are (intended to be) immutable so I think
it should be OK.
- Instead of manually defining test_foo and test_foo_cuda, we now
have a new TestCudaSparse class which overrides setUp (from above)
to swap in the CUDA implementation. Way less boilerplate, and NO
metaprogramming needed.
If you need to opt out of CUDA testing, there is a new cpu_only
decorator you can use.
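A schematic sketch of the structure described above (the names TestCudaSparse and cpu_only come from the description; the member variables and test bodies here are illustrative):
```
import unittest
import torch

def cpu_only(fn):
    # skip the test when the suite is running in its CUDA configuration
    def wrapper(self, *args, **kwargs):
        if self.is_cuda:
            raise unittest.SkipTest('CPU-only test')
        return fn(self, *args, **kwargs)
    return wrapper

class TestSparse(unittest.TestCase):
    def setUp(self):
        # (intended to be) immutable knobs that control how the tests run
        self.is_cuda = False
        self.IndexTensor = torch.LongTensor
        self.ValueTensor = torch.DoubleTensor
        self.SparseTensor = torch.sparse.DoubleTensor

    @cpu_only
    def test_something_cpu_specific(self):
        pass

class TestCudaSparse(TestSparse):
    def setUp(self):
        self.is_cuda = True
        self.IndexTensor = torch.cuda.LongTensor
        self.ValueTensor = torch.cuda.DoubleTensor
        self.SparseTensor = torch.cuda.sparse.DoubleTensor
```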
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
* Fix error in ELU backward
* Add --seed flag for tests
* Add test for BatchNorm eval
* Fix autograd.backward docs
* Support cc flags in cuDNN search
* Fix IndexSelect backward formula