Summary: Printing complex numbers requires converting between `Py_complex` and `std::complex` when loading and storing values. This patch adds that support for the plugin.
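A minimal sketch of the load/store pair in question, assuming only the public CPython `Py_complex` struct and `std::complex`; the helper names are illustrative, not the patch's actual code:

```cpp
#include <Python.h>  // Py_complex: a plain struct with double real/imag fields

#include <complex>

// Illustrative helpers (names are hypothetical, not from the patch).
static inline std::complex<double> load_complex(const Py_complex& pc) {
  return std::complex<double>(pc.real, pc.imag);
}

static inline Py_complex store_complex(const std::complex<double>& c) {
  Py_complex pc;
  pc.real = c.real();
  pc.imag = c.imag();
  return pc;
}
```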
Differential Revision: D9771808
Pulled By: ezyang
fbshipit-source-id: 024865f1945d63ddb5efc775a35438c8ea06408e
Summary:
As discussed with ezyang and slayton58, this might be a nice convenience: it makes it possible to use code in extensions just as in ATen.
This also splits `tracing_state.h` off from `torch/jit/tracer.h` (fixes #11204) to be able to use the utility functions.
pytorchbot: it's not a JIT patch per se.
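A hypothetical extension snippet showing the kind of use this enables; the header path follows the split described above, and the no-arg `isTracing()` query is an assumption about the tracer API, not code from this patch:

```cpp
#include <torch/csrc/jit/tracer.h>  // assumed location after the split

at::Tensor my_extension_op(const at::Tensor& input) {
  // Extensions can now consult the tracing state just as ATen code does
  // (isTracing() is assumed here; check the actual header for the API).
  if (torch::jit::tracer::isTracing()) {
    // e.g. record extra information into the trace
  }
  return input * 2;
}
```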
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11425
Differential Revision: D9735556
Pulled By: ezyang
fbshipit-source-id: 466c92bbdb1d7d7a970eba1c26b7583fe9756139
Summary:
Linting `torch/csrc/` (non-recursive) and `torch/csrc/autograd` (non-recursive).
Fixed things like the following (see the illustrative sketch after this list):
- `typedef` vs `using`
- Use `.empty()` instead of comparing with empty string/using `.size() == 0`
- Use range for loops instead of old style loops (`modernize-`)
- Remove some `virtual` + `override`
- Replace `stdint.h` with `cstdint`
- Replace `return Type(x, y)` with `return {x, y}`
- Use boolean values (`true`/`false`) instead of numbers (1/0)
- More ...
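An illustrative before/after composite of those categories (not an actual diff hunk from the PR):

```cpp
#include <algorithm>
#include <cstdint>   // rather than <stdint.h>
#include <utility>
#include <vector>

// typedef -> using
using ShapeVec = std::vector<int64_t>;  // was: typedef std::vector<int64_t> ShapeVec;

std::pair<int64_t, int64_t> bounds(const ShapeVec& shape) {
  // .size() == 0 -> .empty(); numeric 1/0 -> true/false
  const bool is_empty = shape.empty();
  if (is_empty) {
    return {0, 0};  // was: return std::pair<int64_t, int64_t>(0, 0);
  }
  int64_t lo = shape[0];
  int64_t hi = shape[0];
  // old index-based loop -> range-for (modernize-loop-convert)
  for (const auto dim : shape) {
    lo = std::min(lo, dim);
    hi = std::max(hi, dim);
  }
  return {lo, hi};
}
```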
ezyang apaszke cpuhrsch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11050
Differential Revision: D9597505
Pulled By: goldsborough
fbshipit-source-id: cb0fb4793ade885a8dbf4b10484487b84c64c7f2
Summary:
Allows multiplication of, e.g., numpy.float32 with tensors.
This came up in #9468.
If you want this, I'll add tests once the other patch is done (adding them now would conflict, so I prefer to wait).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9659
Differential Revision: D8948078
Pulled By: weiyangfb
fbshipit-source-id: c7dcc57b63e2f100df837f70e1299395692f1a1b
* Use Index rather than Long for IntList, so floating-point types convertible to ints fail the parsing.
Basically, our unpackLong code accepts floating-point types that are convertible to ints, but that is often not what you want (because of truncation).
What you actually want is to convert to an index, which will usually catch such issues (see the sketch after the numbered points below).
I made this the minimal change I could because:
1) I didn't want to change unpackLong, because the existing code calls checkLong before unpackLong, so this should be a non-issue most of the time. Fixing this properly would require calling checkLong again, which would slow everything down.
2) The exception to the above is IntList, which only checks that 1) it is a tuple or 2) it is a varargs tuple (e.g. torch.ones(1, 2, 3)).
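A sketch of the distinction, assuming only the documented CPython API (the function name is illustrative, not the actual parser code):

```cpp
#include <Python.h>

#include <cstdint>
#include <stdexcept>

// Unpack as an *index*: PyNumber_Index calls __index__, so float-like
// objects raise instead of being silently truncated.
static int64_t unpack_index(PyObject* obj) {
  PyObject* idx = PyNumber_Index(obj);
  if (idx == nullptr) {
    PyErr_Clear();
    throw std::runtime_error("expected an integral index");
  }
  int64_t value = PyLong_AsLongLong(idx);
  Py_DECREF(idx);
  if (value == -1 && PyErr_Occurred()) {
    PyErr_Clear();
    throw std::runtime_error("index out of range");
  }
  return value;
}
```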
* Fix bug.
* Don't conflict tensor and IntList bindings.
* Change function to be consistent between python 2 and 3.
* Check Index.
* Move IntList overloads in legacy new functions to below Tensor overloads.
* Allow zero-dim tensors to be bound to at::Scalar
This relaxes THPUtils_unpackLong and THPUtils_unpackDouble to allow
values convertible to PyLong and PyFloat objects. This includes NumPy
scalars and zero-dim tensors (Variables).
This is important to maintain backwards compatibility in the Tensor
constructors once scalars are enabled and Variable and Tensor are
merged.
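A minimal sketch of the relaxed double unpack, assuming only the documented CPython conversion API; the real THPUtils_unpackDouble has more cases and error handling:

```cpp
#include <Python.h>

#include <stdexcept>

static double unpack_double(PyObject* obj) {
  if (PyFloat_Check(obj)) {
    return PyFloat_AS_DOUBLE(obj);
  }
  // PyFloat_AsDouble falls back to the object's __float__, which is what
  // lets NumPy scalars and zero-dim tensors (Variables) through.
  double value = PyFloat_AsDouble(obj);
  if (value == -1.0 && PyErr_Occurred()) {
    PyErr_Clear();
    throw std::runtime_error("expected a float-convertible value");
  }
  return value;
}
```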
* Add comment and unpack PyInt as int64_t
* Have localScalar work with all 1-element tensors, not just scalars.
Also have toCFloat, etc. call localScalar so 1-element tensors work as well (sketched below).
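A hedged sketch of the behavior, using modern ATen names (`item`, `toFloat`) in place of the `localScalar`/`toCFloat` mentioned above:

```cpp
#include <ATen/ATen.h>

#include <stdexcept>

// Accept any 1-element tensor, not only zero-dim "scalar" tensors.
at::Scalar local_scalar(const at::Tensor& t) {
  if (t.numel() != 1) {
    throw std::runtime_error("only 1-element tensors can be converted to a Scalar");
  }
  return t.item();  // works regardless of dimensionality
}

// The C-type conversions then route through the same path.
float to_c_float(const at::Tensor& t) {
  return local_scalar(t).toFloat();
}
```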
* Implement python number conversions.
* Implement __bool__, __nonzero__ as ATen functions.
* Remove merge artifacts.
* Simplify by dispatching to toCDouble.
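Putting the last few bullets together, a hedged sketch of the truthiness check, dispatching through the double conversion (names are illustrative):

```cpp
#include <ATen/ATen.h>

// __bool__ / __nonzero__ reduce to "scalar value != 0"; item() already
// enforces that the tensor has exactly one element.
static bool tensor_nonzero(const at::Tensor& t) {
  return t.item().toDouble() != 0.0;
}
```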