* Revert "Fix wrong argument name (#5366)"
This reverts commit cc9d3b265d.
* Fix wrong argument naming
* Revert "Wrap torch::cuda::lazy_init with WITH_CUDA flag"
This reverts commit a8fa37f8fac5aef09eb7fe54d84de6126618c262.
* Revert "Solves the linking error related to lazy_init for MSVC"
This reverts commit 63913a102f274865a76e7c40ffdf6b40c277d5ff.
* Better solution for the linking error related to lazy_init for MSVC
* Naming changes
* Namespace changes and further comment
* Rebasing onto current master
* Remove useless code
* Fix linting
* Fix bugs introduced while rebasing
* Support dtypes in legacy new constructors.
* Add comment about why we don't have dtype for sparse (indices, values).
* Separate the legacy tensor ctor from new (new includes dtypes); see the sketch below.
* Use TypeError.
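A hedged sketch of the dtype support described in the commits above; the exact
keyword spelling at this intermediate stage is an assumption:
>>> import torch
>>> from torch.autograd import Variable
>>> v = Variable(torch.randn(2))
>>> v.new([1, 2, 3], dtype=torch.float64)  # assumed dtype keyword on new
# An unsupported dtype raises TypeError (per the commit above), and the
# sparse (indices, values) form deliberately takes no dtype.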
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.
To keep the PR to a reasonable size, I've left most of the now-unused tensor
code in place. Subsequent PRs will remove the dead code, clean up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.
There are some breaking changes because Variables and Tensors had
slightly different semantics. There's a list of those changes here:
https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
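For illustration, a minimal REPL sketch of the new behavior in this
intermediate state, where Variable is still a distinct wrapper class:
>>> import torch
>>> from torch.autograd import Variable
>>> x = torch.randn(2, 3)            # factories now return Variables
>>> isinstance(x, Variable)
True
>>> y = Variable(torch.randn(2, 3))  # explicit wrapping still works, now redundant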
* Revert "Fix wrong argument name (#5366)"
This reverts commit cc9d3b265d.
* Solves the linking error related to lazy_init for MSVC
* Fix wrong argument naming
* Wrap torch::cuda::lazy_init with WITH_CUDA flag
* Various dtype improvements.
1) Add dtypes to the new data-based constructors: Variable.new_tensor and torch.autograd.variable.
2) In the Python signatures, use Type instead of Dtype to match the C++ signatures; the error messages still print as dtype.
3) Add a better error message when a dtype is requested that ATen was not compiled with (e.g. CUDA types); see the sketch below.
4) Move cuda_lazy_init to its own file.
A later commit will add support to the legacy constructors as well.
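A hedged sketch of the dtype-aware data constructors from item 1; the dtype
spelling (torch.float64) follows later releases and is an assumption here:
>>> import torch
>>> v = torch.autograd.variable([1.0, 2.0], dtype=torch.float64)
>>> w = v.new_tensor([3, 4], dtype=torch.float64)
# Per item 3, requesting a CUDA dtype on a build without CUDA support
# now raises a clear error instead of failing obscurely.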
* Move implementation of lazy_init to cpp.
* Fix parsed_arg size.
* Improve Variable interface
* Address comments from @apaszke and @colesbury
* std::string::operator= is not noexcept
* Remove ir.h from tracer_state.h to improve build times
* Make Variable a struct and pack SavedVariable fields
* Implement as_variable_ref
* grad_fn_ptr() -> grad_fn_unsafe()
* Reduce hackiness of set_type hack
* Include variable.h and edge.h in tracer_state.h because it uses them
* class Variable -> struct Variable because Windows can't even
* Make Variable::output_nr uint32_t instead of int
* Add comment about tracing state
* Replace more static_cast<Variable&> casts and improve docs
* Remove SavedVariable destructor and construct members in init list
* Clarify docs for Variable
* Variable::set_version -> set_version_counter
This better maintains backwards compatibility when Tensors and Variables
are merged. For example:
>>> loss = var.sum().data[0]
Currently, `var.sum().data` is 1-dim, so indexing works. Once scalars are
enabled and Variable and Tensor are merged, it will be zero-dim. This
change allows that expression to continue working (with a warning). In
the future, the canonical way to compute that expression will be:
>>> loss = float(var.sum())
Or an equivalent alternative:
>>> loss = var.sum().item()
Also fixes a few error cases.
* Add a new_tensor instance method to Variable that takes only data.
This works around the legacy semantics of new, where e.g.
new(5) gives you an uninitialized tensor of size 5 rather than a scalar holding 5.
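A short REPL sketch of the contrast, assuming this intermediate state where
scalars still surface as 1-dim:
>>> import torch
>>> v = torch.autograd.Variable(torch.randn(3))
>>> a = v.new(5)         # legacy: 5 is a size -> uninitialized tensor of 5 elements
>>> b = v.new_tensor(5)  # new: 5 is data -> a scalar (1-dim until WITH_SCALARS lands)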
* Remove double return.
* Fix cuda scalar code path.
* Work around lack of WITH_SCALARS.
* Implement a (data-only) Variable factory.
Implements a function, torch.autograd.variable, that is modeled after np.array. The main difference between it and new() and
the tensor constructors is that it interprets a Python number as data, i.e. as a 0-dimensional tensor (we currently don't expose
that at the PyTorch level, so it will temporarily end up as a 1-dimensional tensor), rather than as a size.
The main difference currently between torch.autograd.variable and np.array is that torch.autograd.variable is stricter, e.g.
passing a PyFloat when an integral type is the default tensor type will result in an error; np.array basically lets anything
through (floating-point / integral mismatch, overflow, etc.). This is to keep it consistent with Variable.new when called with
a sequence, although we can loosen the checks later.
This will be renamed to torch.tensor once we merge Variable and Tensor.
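A hedged usage sketch of the factory as described (it was later renamed
torch.tensor, per the note above):
>>> import torch
>>> t = torch.autograd.variable(5)                # 5 is data, not a size
>>> m = torch.autograd.variable([[1, 2], [3, 4]])
# Stricter than np.array: a PyFloat under an integral default tensor
# type raises a TypeError rather than being silently coerced.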
* Address review comments.