Commit Graph

42 Commits

Author SHA1 Message Date
Aaron Gokaslan
700941f683 Fixup c10 headers with clang-tidy (#91407)
Clang-tidy was not applied properly to headers in c10, as documented in #91406. These are the easy automated fixes that came out of applying clang-tidy to the c10 part of the code base. cc @ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91407
Approved by: https://github.com/ezyang
2022-12-28 11:12:22 +00:00
Edward Z. Yang
1ff52225f1 Unify SymIntNode and SymFloatNode into SymNode (#87817)
This refactor was prompted by challenges handling mixed int/float
operations in C++.  A previous version of this patch,
https://github.com/pytorch/pytorch/pull/87722/, added overloads for each
permutation of int/float and was unwieldy.  This PR takes a different
approach.

The general outline of the patch is to combine the C++ types SymIntNode
and SymFloatNode into a single type, SymNode.  This type is erased: we
no longer know statically in C++ whether we have an int or a float, and
have to test with the is_int()/is_float() virtual methods (a sketch of the
design follows below).  This has a number of knock-on effects.
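
A minimal sketch of what such a type-erased node could look like (hypothetical names and a much-reduced interface; the real class carries the full set of symbolic operations):

```C++
#include <cstdint>
#include <memory>

// One node interface with runtime is_int()/is_float() queries, instead of
// separate SymIntNode/SymFloatNode class hierarchies (illustrative only).
struct SymNodeImpl {
  virtual ~SymNodeImpl() = default;
  virtual bool is_int() = 0;
  virtual bool is_float() = 0;
};

struct IntNode : SymNodeImpl {
  int64_t value;
  explicit IntNode(int64_t v) : value(v) {}
  bool is_int() override { return true; }
  bool is_float() override { return false; }
};

using SymNode = std::shared_ptr<SymNodeImpl>;

int main() {
  SymNode n = std::make_shared<IntNode>(3);
  return n->is_int() ? 0 : 1;  // dispatch on a runtime query, not a C++ type
}
```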

- We no longer have C++ classes to bind to Python.  Instead, we take an
  entirely new approach to our Python API, where we have a SymInt/SymFloat
  class defined entirely in Python, which holds a SymNode (corresponding
  to the C++ SymNode).  However, SymNode is not pybind11-bound; instead,
  it lives as-is in Python and is wrapped into a C++ SymNode using PythonSymNode
  when it goes into C++.  This implies a userland rename.

  In principle, it is also possible for the canonical implementation of SymNode
  to be written in C++ and then bound to Python with pybind11 (we have
  this code, although it is commented out).  However, I did not implement
  this, as we currently have no C++ implementations of SymNode.

  Because we do return SymInt/SymFloat from C++ bindings, the C++ binding
  code needs to know how to find these classes.  Currently, this is done
  just by manually importing torch and getting the attributes.

- Because SymInt/SymFloat are thin Python wrappers, __sym_dispatch__ now
  takes SymInt/SymFloat rather than SymNode, bringing it in line with how
  __torch_dispatch__ works.

Some miscellaneous improvements:

- SymInt now has a constructor that takes SymNode.  Note that this
  constructor is ambiguous if you pass in a subclass of SymNode,
  so an explicit downcast is necessary.  This means toSymFloat/toSymInt
  are no more.  This is a mild optimization as it means rvalue reference
  works automatically.

- We uniformly use the caster for c10::SymInt/SymFloat, rather than
  going the long way via the SymIntNode/SymFloatNode.

- Removed some unnecessary toSymInt/toSymFloat calls in normalize_*
  functions; I'm pretty sure these were no-ops.

- guard_int is now a free function, since to guard on an int you cannot
  assume the method exists.  A function can handle both int and SymInt
  inputs.

- We clean up the magic method definition code for SymInt/SymFloat/SymNode.
  ONLY the user classes (SymInt/SymFloat) get magic methods; SymNode gets
  plain methods; this is to help avoid confusion between the two types.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

cc @jansel @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87817
Approved by: https://github.com/albanD, https://github.com/anjali411
2022-10-27 20:56:02 +00:00
Edward Z. Yang
0e5a27fb8d Fix horribly double truncation bug in Scalar (#86304)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86304
Approved by: https://github.com/albanD
2022-10-05 22:24:17 +00:00
Edward Z. Yang
9c036aa112 Add SymInt to Scalar (#84958)
This is by no means comprehensive, but adds initial support for SymInt as a Scalar.

Things that don't work yet but need to:
- for some reason `torch.add(tensor, sym_int)` got matched to the `add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor` schema
- `x + sym_int` failed because we tried to turn `x` into a sym int:
```
              "__radd__",
              [](c10::SymIntNode a, py::object b) -> c10::SymIntNode {
                auto snb = toSymIntNode(a, b);
                return a->add(snb);
              })
```
- Many more things I'm sure

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84958
Approved by: https://github.com/ezyang
2022-09-25 23:51:06 +00:00
Kshiteej K
849b08f14b [reland][chalf] where(cpu and cuda), pow(cuda) (#78665)
Reland: https://github.com/pytorch/pytorch/pull/77640
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78665
Approved by: https://github.com/ngimel
2022-06-02 18:04:06 +00:00
PyTorch MergeBot
4bb8db85e9 Revert "[chalf] where(cpu and cuda), pow(cuda) (#77640)"
This reverts commit 3697cf7f76.

Reverted https://github.com/pytorch/pytorch/pull/77640 on behalf of https://github.com/mruberry because it broke ROCm on trunk
2022-06-01 19:39:38 +00:00
kshitij12345
3697cf7f76 [chalf] where(cpu and cuda), pow(cuda) (#77640)
Ref: #74537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77640
Approved by: https://github.com/anjali411, https://github.com/ngimel
2022-06-01 18:35:53 +00:00
Sherlock Huang
8b6a78f39f Python Interface for Jiterator
This PR allows users to author CUDA kernels in Python.

```
import torch
from torch.cuda.jiterator import create_jit_fn

code_string = "template <typename T> T my_kernel(T x, T y, T alpha) { return -x * y + x - y + alpha; }"
jitted_fn = create_jit_fn(code_string, alpha=0)

a = torch.rand(3, device='cuda')
b = torch.rand(3, device='cuda')
result = jitted_fn(a, b, alpha=1.0)
```

Limitations:
- Only supports elementwise kernels
- 1~8 tensor inputs (empty input, e.g. factory methods, is not supported)
- Input tensors must live on a CUDA device
- CPU Scalars are not supported
- kwargs must be pre-declared when calling create_jit_fn
- kwargs must be convertible to at::Scalar, one of float64, int64_t, bool (complex is not supported for now)

TODOs:
- [x] consolidate union and c10::variant implementation
- [x] plug into existing op testing framework
- [ ] rename files, place files in the right folder
- [ ] place util functions in the right file
- [x] enforce assumptions in python interface, e.g. <8 inputs, kwargs types
- [x] Add user-facing documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76394
Approved by: https://github.com/mruberry
2022-05-06 18:44:28 +00:00
kshitij12345
f7ee308dfb [complex-half] support casting (by updating copy_)
Reference https://github.com/pytorch/pytorch/issues/71680

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73847
Approved by: https://github.com/anjali411
2022-03-23 21:42:59 +00:00
Nolan O'Brien
a383d01774 [fbcode][warnings] Suppress warnings in caffe2/c10 (#71356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71356

Suppress remaining header-based warnings in `caffe2/c10` when building with `clang`

Test Plan: CI pass

Reviewed By: r-barnes

Differential Revision: D33600097

fbshipit-source-id: e1c0d84a0bad768eb03e047d62b5379cf28b48e2
2022-01-15 18:34:08 -08:00
Meghan Lele
1d2ea76afb clamp: port to structured kernel (#61361)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61361

This PR ports the `clamp` kernel to the structured format. In addition, it introduces `OptionalScalarRef` as a replacement for `c10::optional<Scalar>&`. The latter, although it is a reference type, can still involve copying the contained `Scalar` (e.g. if the actual parameter is a `Scalar`, or if a `c10::optional<Scalar>` is constructed just to call a kernel). `OptionalScalarRef` contains only a `const Scalar&`, and stores the flag indicating whether the instance contains something inside the `Scalar` itself, using a new tag.
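
A toy model of the `OptionalScalarRef` idea (illustrative names, not the real c10 code), assuming emptiness can be encoded as a tag inside a sentinel `Scalar`:

```C++
#include <cassert>

// Toy tagged scalar: "None" stands in for the new tag mentioned above.
struct Scalar {
  enum class Tag { HasDouble, None } tag;
  double v;
  Scalar(double d) : tag(Tag::HasDouble), v(d) {}
  explicit Scalar(Tag t) : tag(t), v(0) {}
};

// Non-owning optional: only a const Scalar&, so no Scalar is ever copied;
// "empty" points at a shared sentinel whose tag is None.
class OptionalScalarRef {
 public:
  OptionalScalarRef() : ref_(none()) {}
  /*implicit*/ OptionalScalarRef(const Scalar& s) : ref_(s) {}
  bool has_value() const { return ref_.tag != Scalar::Tag::None; }
  const Scalar& get() const { assert(has_value()); return ref_; }
 private:
  static const Scalar& none() { static Scalar s(Scalar::Tag::None); return s; }
  const Scalar& ref_;
};

int main() {
  Scalar one(1.0);
  OptionalScalarRef some(one), nothing;
  assert(some.has_value() && some.get().v == 1.0);
  assert(!nothing.has_value());
}
```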

For more information, see #55070.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D29821533

Pulled By: SplitInfinity

fbshipit-source-id: 88d55df5a4b2c14b68a57e4905d90eea1b088d99
2021-07-23 02:02:07 -07:00
Peter Bell
fb120493b1 Make Scalar.to<> for invalid types a compile-time error (#58726)
Summary:
Currently, calling `scalar.to<std::complex<double>>()`, for example, compiles but throws an error at runtime. Instead, marking the non-specialized cases as `= delete` means the code fails to compile and you catch the error sooner.
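
A minimal sketch of the pattern with a toy `Scalar` (not the real c10 code):

```C++
// The primary template is deleted, so only explicitly specialized types
// compile; everything else becomes a compile-time error.
class Scalar {
  double v_ = 0;
 public:
  Scalar(double v) : v_(v) {}
  template <typename T>
  T to() const = delete;
};

template <>
inline double Scalar::to<double>() const { return v_; }

int main() {
  Scalar s(2.5);
  return s.to<double>() == 2.5 ? 0 : 1;
  // s.to<std::complex<double>>() would now fail to compile
  // instead of throwing at runtime.
}
```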

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58726

Reviewed By: zou3519, seemethere

Differential Revision: D28646057

Pulled By: ezyang

fbshipit-source-id: 9e4e3d1b4586eeecbb73db61bba56560b2657351
2021-05-25 15:34:01 -07:00
Scott Wolchok
44cc873fba [PyTorch] Autoformat c10 (#56830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56830

Opt into formatting on GitHub and format everything. This is a trial run before turning on formatting for more and eventually all of the codebase.

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D27979080

fbshipit-source-id: a80f0c48691c08ae8ca0af06377b87e6a2351151
2021-04-30 21:23:28 -07:00
anjali411
97c17b4772 Fix auto exponent issue for torch.pow (#49809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49809

Fixes https://github.com/pytorch/xla/issues/2688 #46936

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D25724176

Pulled By: anjali411

fbshipit-source-id: 16287a1f481e9475679b99d6fb45de840da225be
2020-12-29 17:02:56 -08:00
Mike Ruberry
013e6a3d9d Revert D24698027: Fix auto exponent issue for torch.pow
Test Plan: revert-hammer

Differential Revision:
D24698027 (8ef7ccd669)

Original commit changeset: f23fdb65c925

fbshipit-source-id: 9a67a2c6310c9e4fdefbb421a8cd4fa41595bc9a
2020-11-15 03:58:44 -08:00
anjali411
8ef7ccd669 Fix auto exponent issue for torch.pow (#47024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47024

Fixes https://github.com/pytorch/pytorch/issues/46936

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#47024 Fix auto exponent issue for torch.pow**

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D24698027

Pulled By: anjali411

fbshipit-source-id: f23fdb65c925166243593036e08214c4f041a63d
2020-11-14 22:50:12 -08:00
anjali411
cedeee2cd4 Add scalar.conj() and update backward formulas for add and sub (#46596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46596

1. Added a `conj` method for scalar, similar to NumPy.
2. Updated backward formulas for add and sub to work correctly for R -> C cases and for the case when alpha is complex.
3. Enabled complex backward for nonzero (no formula update needed).
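
A toy illustration of a `conj()` on a scalar type (the real `c10::Scalar` is a tagged union over several value types, so this only shows the spirit of the change):

```C++
#include <complex>
#include <iostream>

// Single-field stand-in for a scalar; conj() is the identity for real values.
struct Scalar {
  std::complex<double> z;
  Scalar conj() const { return Scalar{std::conj(z)}; }
};

int main() {
  Scalar s{{1.0, 2.0}};
  std::cout << s.conj().z << "\n";  // prints (1,-2)
}
```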

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24529227

Pulled By: anjali411

fbshipit-source-id: da871309a6decf5a4ab5c561d5ab35fc66b5273d
2020-11-02 16:17:00 -08:00
Xiang Gao
263412e536 Rename is_complex_t -> is_complex (#39906)
Summary:
`is_complex_t` is a bad name. For example, in std there is `std::is_same` but no `std::is_same_t`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/39906

Reviewed By: mrshenli

Differential Revision: D22665013

Pulled By: anjali411

fbshipit-source-id: 4b71745f5e2ea2d8cf5845d95ada4556c87e040d
2020-09-01 21:04:19 -07:00
Nikita Shulga
d10056652b Enable torch.half for lt and masked_select (#43704)
Summary:
Enable testing of these options in `TestTorchDeviceTypeCPU.test_logical_cpu` and `TestTorchDeviceTypeCPU.test_masked_select_cpu_float16`.
Add `view_as_real` testing for the `torch.complex32` type.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43704

Reviewed By: albanD

Differential Revision: D23373070

Pulled By: malfet

fbshipit-source-id: 00f17f23b48513379a414227aea91e2d3c0dd5f9
2020-08-29 02:37:26 -07:00
Xiang Gao
c55d8a6f62 Remove std::complex from c10::Scalar (#39831)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39831

Differential Revision: D22018505

Pulled By: ezyang

fbshipit-source-id: 4719c0f1673077598c5866dafc7391d9e074f4eb
2020-07-07 20:31:42 -07:00
Xiang Gao
bdaa78499e Reland Refactor c10::complex and cleanup c10::Scalar (#39306)
Summary:
This reverts commit 8556664d68.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39306

Differential Revision: D21818096

Pulled By: albanD

fbshipit-source-id: ed4396fcad8c7036fb7bfa2f3da6ed63c0eb6625
2020-06-01 11:51:57 -07:00
Alban Desmaison
8556664d68 Revert D21769463: [pytorch][PR] Refactor c10::complex and cleanup c10::Scalar
Test Plan: revert-hammer

Differential Revision:
D21769463

Original commit changeset: 3cb5bcbb0ff3

fbshipit-source-id: 0392e23d7057f90e7b13c9abf19bcca2d84b26fa
2020-05-30 18:02:51 -07:00
Xiang Gao
928ce29ee2 Refactor c10::complex and cleanup c10::Scalar (#38593)
Summary:
**Main:**
- `c10::complex` is refactored: it no longer uses inheritance to specialize constructors but uses SFINAE instead (see the sketch after the cleanup list below). This implementation is cleaner and avoids some compiler bugs.
- `c10::Scalar` is cleaned up: it no longer needs to store complex as `double z[2]`, `c10::complex<double>` will work.

**Other cleanups:**
- `numeric_limits` of `c10::complex` is moved to `complex_utils.h`
- the variable in `c10::complex` storing real and imag is changed from `storage[2]` to `real_` and `imag_`
- the `c10::` qualifier before `complex` is removed when in the `c10` namespace
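
An illustrative sketch of constructor specialization via SFINAE rather than inheritance (not the real `c10::complex`, whose conversion rules are more careful):

```C++
#include <type_traits>

// One template with constructors enabled per value type, instead of a
// specialized base-class hierarchy. The converting constructor participates
// in overload resolution only when the source type is not wider than the
// destination type (a crude stand-in for a real non-narrowing rule).
template <typename T>
struct complex {
  T real_{};
  T imag_{};

  constexpr complex() = default;
  constexpr complex(T re, T im) : real_(re), imag_(im) {}

  template <typename U,
            typename std::enable_if<(sizeof(U) <= sizeof(T)), int>::type = 0>
  constexpr complex(const complex<U>& other)
      : real_(static_cast<T>(other.real_)), imag_(static_cast<T>(other.imag_)) {}
};

int main() {
  complex<float> cf(1.0f, 2.0f);
  complex<double> cd = cf;     // enabled: widening conversion
  // complex<float> bad = cd;  // disabled by SFINAE: would narrow
  return cd.real_ == 1.0 ? 0 : 1;
}
```
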
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38593

Differential Revision: D21769463

Pulled By: anjali411

fbshipit-source-id: 3cb5bcbb0ff304d137221e00fe481a08dba7bc12
2020-05-30 13:33:51 -07:00
Xiang Gao
14fc83ebc7 Add missing c10::complex::value_type (#37677)
Summary:
There is such a member type in `std::complex`, as documented at https://en.cppreference.com/w/cpp/numeric/complex
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37677

Differential Revision: D21410197

Pulled By: anjali411

fbshipit-source-id: 749be1d71190e4afc13513b396da47f33cb990c7
2020-05-06 19:36:20 -07:00
Gao, Xiang
ecf1ea75a7 Make c10::ComplexHalf a template specialization of c10::complex (#37426)
Summary:
This PR basically makes `c10::ComplexHalf` a template specialization of `c10::complex`. Since `c10::ComplexHalf` is not used much, this does not involve much change.

Because `c10::Half` has few `constexpr` methods, it is impossible to keep the same API. Currently, we are just completely reusing the old implementation; only the name changes from `c10::ComplexHalf` to `c10::complex<c10::Half>`. We can always change the implementation in the future when needed, but for now I think this is OK.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37426

Differential Revision: D21300754

Pulled By: anjali411

fbshipit-source-id: fc0f65adccf97025a727735096780ce8078675a1
2020-05-01 10:49:24 -07:00
Xiang Gao
1ef992639d Make c10::complex the C++ type for complex tensors (#37421)
Summary:
# Overview

This PR changes the backing type of complex tensors in `ScalarType` from `std::complex` to `c10::complex`.

Since `c10::complex` and `std::complex` are reinterpret-castable, we can freely use `std::complex *` to access `c10::complex` data and vice versa. The implementation of `c10::complex` is not complete yet, so we are reinterpret-casting all complex data to `std::complex` during dispatch and doing all operations in `std::complex`.
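
A toy demonstration of the layout-compatibility property this relies on, using a stand-in struct rather than the real `c10::complex`:

```C++
#include <complex>
#include <cstring>
#include <iostream>

// Stand-in with the same two-value layout as std::complex<double>.
template <typename T>
struct my_complex { T real_; T imag_; };

static_assert(sizeof(my_complex<double>) == sizeof(std::complex<double>),
              "same size is a prerequisite for casting between the two");

int main() {
  my_complex<double> c{1.0, 2.0};
  std::complex<double> z;
  std::memcpy(&z, &c, sizeof(z));  // byte-identical representations
  std::cout << z << "\n";          // prints (1,2)
}
```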

# `std::complex` and `c10::complex` interoperability

To use `std::complex *` to access  `c10::complex` data, the following specializations are added:
```C++
template <> inline std::complex<float>* Tensor::data_ptr();
template <> inline std::complex<double>* Tensor::data_ptr();
template <> inline std::complex<float> Tensor::item();
template <> inline std::complex<double> Tensor::item();
```

See [`aten/src/ATen/templates/TensorMethods.h`](https://github.com/pytorch/pytorch/pull/37274/files#diff-0e8bf6f5024b32c240a4c1f0b4d8fd71)

And

```C++
template <> inline std::complex<float> Scalar::to();
template <> inline std::complex<double> Scalar::to();
```

is added in [`c10/core/Scalar.h`](https://github.com/pytorch/pytorch/pull/37274/files#diff-aabe1c134055c8dcefad830c1c7ae957)

# Dispatch

Macros in [`Dispatch.h`](https://github.com/pytorch/pytorch/pull/37274/files#diff-737cfdab7707be924da409a98d46cb98) still use `std::complex` as their type. We will add macros such as `AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND3` as needed during the migration, not in this PR.

Note that `AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3` is only used in the CUDA copy kernel, and this PR already changes it to use `c10::complex`, because the CUDA copy kernel has to use its original dtype; otherwise dtypes get cast incorrectly, causing a CUDA unspecified launch failure.

When the migration is complete, the c10 versions of the macros will be removed, and the default versions will use `c10::complex` instead of `std::complex`. This design allows us to migrate incrementally from `std::complex` to `c10::complex`.

# Note

Note that `std::complex` is not yet completely replaced by `c10::complex` in c10; for example, `c10::Scalar` still uses `std::complex`. This will be fixed in later PRs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37421

Differential Revision: D21282161

Pulled By: anjali411

fbshipit-source-id: 635e309e8c8a807c2217723ad250b5ab5a20ce45
2020-04-29 16:42:49 -07:00
Hong Xu
00f685d2d8 Add Scalar::type() (#33603)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33603

This function returns a ScalarType based on the Scalar's value. This is helpful
because it avoids having the code generated in aten_op.h depend on the `self`
argument to determine the type of returned Scalars.
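
A sketch of what such an accessor looks like on a tagged scalar (illustrative tags and names, not the real c10 definitions):

```C++
#include <iostream>

enum class ScalarType { Double, Long, Bool };

// The scalar already knows its own type via its tag, so callers no longer
// need to inspect other arguments to recover it.
struct Scalar {
  enum class Tag { HasDouble, HasLong, HasBool } tag;
  ScalarType type() const {
    switch (tag) {
      case Tag::HasDouble: return ScalarType::Double;
      case Tag::HasLong:   return ScalarType::Long;
      case Tag::HasBool:   return ScalarType::Bool;
    }
    return ScalarType::Double;  // unreachable
  }
};

int main() {
  Scalar s{Scalar::Tag::HasLong};
  std::cout << (s.type() == ScalarType::Long) << "\n";  // 1
}
```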

Test Plan: Imported from OSS

Differential Revision: D20100218

Pulled By: ezyang

fbshipit-source-id: 337729a7559e6abb3a16b2a563a2b92aa96c7016
2020-02-26 22:25:18 -08:00
Xiang Gao
d119de8abd Deduplication of type casting codes (#32730)
Summary:
This code is implemented twice in different places by different people; we should merge the implementations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32730

Differential Revision: D19622023

Pulled By: ezyang

fbshipit-source-id: a9cbda31428b335bf28a7e4050f51f58e787b94f
2020-01-29 10:13:15 -08:00
Mingbo Wan
647569e546 get rid of choco install (#30897)
Summary:
7zip and cmake are part of the base image, so there is no need to re-install them. Removing the install step makes build/test more stable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30897

Differential Revision: D19232961

Pulled By: mingbowan

fbshipit-source-id: fa3bbd1325839a2a977bf13fdbd97fda43793b8d
2019-12-27 13:12:04 -08:00
Sebastian Messmer
f0243ea712 Use [[deprecated]] instead of C10_DEPRECATED (#30918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30918

This is a C++14 feature we can use now
ghstack-source-id: 95811482
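
A minimal example of the attribute (the exact `C10_DEPRECATED` expansion was compiler-specific; this is the portable C++14 form):

```C++
// Marking a function deprecated emits a warning at every call site.
[[deprecated("use new_fn() instead")]]
inline int old_fn() { return 0; }

inline int new_fn() { return 1; }

int main() {
  return new_fn();  // calling old_fn() here would emit a deprecation warning
}
```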

Test Plan: waitforsandcastle

Differential Revision: D18869636

fbshipit-source-id: b5b3d78b61b6ceb2deda509131f8502e95b1d057
2019-12-17 15:21:34 -08:00
Will Feng
4f7848e520 Make c10::Scalar::to<T>() const (#26406)
Summary:
Since `c10::Scalar::to<T>()` is not an in-place operation, we should be able to make it const. This removes the need for a `const_cast` at https://github.com/pytorch/pytorch/pull/26210#discussion_r324880325.
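
A sketch of why the qualifier matters, with a toy `Scalar` (not the real one):

```C++
// A const Scalar& can only call const member functions, so without the
// qualifier, callers holding const references needed a const_cast.
struct Scalar {
  double v;
  template <typename T>
  T to() const { return static_cast<T>(v); }
};

double twice(const Scalar& s) {
  return s.to<double>() * 2;  // legal only because to() is const-qualified
}

int main() {
  return twice(Scalar{1.5}) == 3.0 ? 0 : 1;
}
```
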
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26406

Differential Revision: D17452258

Pulled By: yf225

fbshipit-source-id: 26881e2861f0f1f46cc2d92cc02a467e1f7eaa64
2019-09-19 15:06:14 -07:00
Brian Vaughan
88e4cee3e7 Improve handling of mixed-type tensor operations (#22273)
Summary:
Improve handling of mixed-type tensor operations.

This PR affects the arithmetic (add, sub, mul, and div) operators implemented via TensorIterator (so dense but not sparse tensor ops).

For these operators, we will now promote to reasonable types where possible, following the rules defined in https://github.com/pytorch/pytorch/issues/9515, and error in cases where the cast would require floating point -> integral or non-boolean to boolean downcasts.

The details of the promotion rules are described here:
https://github.com/nairbv/pytorch/blob/promote_types_strict/docs/source/tensor_attributes.rst

Some specific backwards incompatible examples:
* now `int_tensor * float` will result in a float tensor, whereas previously the floating point operand was first cast to an int. Previously `torch.tensor(10) * 1.9` => `tensor(10)` because the 1.9 was downcast to `1`. Now the result will be the more intuitive `tensor(19)`
* Now `int_tensor *= float` will error, since the floating point result of this operation can't be cast into the in-place integral type result.

See more examples/detail in the original issue (https://github.com/pytorch/pytorch/issues/9515), in the above linked tensor_attributes.rst doc, or in the test_type_promotion.py tests added in this PR:
https://github.com/nairbv/pytorch/blob/promote_types_strict/test/test_type_promotion.py
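
A toy encoding of the headline rule (illustrative only; the full rules are in the linked documents):

```C++
#include <iostream>

enum class DType { Int64, Float64 };

// An integral tensor combined with a floating-point scalar now promotes to
// floating point instead of truncating the scalar.
DType result_type(DType tensor, DType scalar) {
  if (tensor == DType::Int64 && scalar == DType::Float64) {
    return DType::Float64;  // was: scalar downcast to int, losing 1.9 -> 1
  }
  return tensor;
}

int main() {
  bool promoted = result_type(DType::Int64, DType::Float64) == DType::Float64;
  std::cout << promoted << "\n";  // 1: tensor(10) * 1.9 -> tensor(19.), not tensor(10)
}
```
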
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22273

Reviewed By: gchanan

Differential Revision: D16582230

Pulled By: nairbv

fbshipit-source-id: 4029cca891908cdbf4253e4513c617bba7306cb3
2019-09-05 18:26:09 -07:00
Gregory Chanan
fe541aab5f Align AT_FORALL macros with DISPATCH macros wrt Half. (#25268)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25268

The AT_FORALL `AND` macros mistakenly already include Half, which differs from the Dispatch macros.

This change shouldn't have any effect.
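
For context, the AT_FORALL family are X-macros that expand a callback over a fixed type list; a minimal illustration of the pattern with a made-up type list:

```C++
#include <cstdint>
#include <iostream>

// Each entry invokes the callback macro with (ctype, Name); which types are
// in the list is exactly what these commits align across macro families.
#define MY_FORALL_SCALAR_TYPES(_) \
  _(float, Float)                 \
  _(double, Double)               \
  _(int64_t, Long)

int main() {
#define PRINT_TYPE(ctype, name) std::cout << #name " = " #ctype "\n";
  MY_FORALL_SCALAR_TYPES(PRINT_TYPE)
#undef PRINT_TYPE
}
```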

Test Plan: Imported from OSS

Differential Revision: D17079747

Pulled By: gchanan

fbshipit-source-id: 635eb167722ce850d6c1949fac652de4dddf32ee
2019-08-28 08:15:40 -07:00
Gregory Chanan
497bc3f283 Remove unused parameter from FORALL macros and rename STUBS to QINTS.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23340

Test Plan: Imported from OSS

Differential Revision: D16467981

Pulled By: gchanan

fbshipit-source-id: f4535c21ea54838d2086b2887a73e02e28b783d9
2019-08-12 14:43:39 -07:00
Gregory Chanan
f5fefd62e2 Align AT_FORALL macros with AT_DISPATCH macros.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23339

Test Plan: Imported from OSS

Differential Revision: D16467983

Pulled By: gchanan

fbshipit-source-id: 84a29a03d3ec9c6416cad254a9ff1005fdc6324f
2019-08-12 14:43:35 -07:00
Gregory Chanan
3e0da2ab8e Rename AT_FORALL_SCALAR_TYPES_WITH_COMPLEX to AT_FORALL_SCALAR_TYPES_WITH_COMPLEX_AND_STUBS
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23336

Test Plan: Imported from OSS

Differential Revision: D16467982

Pulled By: gchanan

fbshipit-source-id: 004bfc179c7bf963e1132c59af692080156808ab
2019-07-31 08:17:17 -07:00
Jerry Zhang
dfcd7b0185 QTensor (#18230)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18230

Implementing a minimal QTensor API to unblock other workstreams in quantization

Changes:
- Added Quantizer which represents different quantization schemes
- Added qint8 as a data type for QTensor
- Added a new ScalarType QInt8
- Added QTensorImpl for QTensor
- Added the following user-facing APIs (see the sketch after this list):
  - quantize_linear(scale, zero_point)
  - dequantize()
  - q_scale()
  - q_zero_point()
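
A toy affine quantizer illustrating how these APIs fit together, using q = round(x / scale) + zero_point (illustrative code, not the real QTensor implementation):

```C++
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// qint8 storage plus the quantization parameters carried by the tensor.
struct QTensor {
  std::vector<int8_t> q;
  double scale;
  int64_t zero_point;

  double q_scale() const { return scale; }
  int64_t q_zero_point() const { return zero_point; }
  std::vector<double> dequantize() const {
    std::vector<double> out;
    for (int8_t v : q) out.push_back((v - zero_point) * scale);
    return out;
  }
};

QTensor quantize_linear(const std::vector<double>& x, double scale, int64_t zp) {
  QTensor t{{}, scale, zp};
  for (double v : x) t.q.push_back(static_cast<int8_t>(std::lround(v / scale) + zp));
  return t;
}

int main() {
  QTensor t = quantize_linear({0.0, 0.1, 0.2}, /*scale=*/0.01, /*zero_point=*/0);
  for (double v : t.dequantize()) std::cout << v << " ";  // 0 0.1 0.2
}
```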

Reviewed By: dzhulgakov

Differential Revision: D14524641

fbshipit-source-id: c1c0ae0978fb500d47cdb23fb15b747773429e6c
2019-04-03 13:17:11 -07:00
Jerry Zhang
ed9724f385 For some files that are touched by the QTensor diff (#18765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18765

att

Reviewed By: ZolotukhinM

Differential Revision: D14733442

fbshipit-source-id: 525002034e6dccc2045da645e1193671fd0474b3
2019-04-03 12:47:31 -07:00
Iurii Zdebskyi
1a742075ee Resolving comments from Bool Tensor for CPU PR (#18165)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18165
ghimport-source-id: 55cb3fb63a25c2faab1725b4ec14c688bf45bd38

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18166 Bool Tensor for CUDA
* **#18165 Resolved comments from Bool Tensor for CPU PR**
-------
This is a follow-up PR that resolves some additional feedback on one of the previous Bool Tensor PRs.

gchanan, here is a list of almost all the comments from the original PR with respective fixes and replies:

**[utils/python_scalars.h]** why is this converting from uint8_t and not bool? (comment?)
When I was adding this, I was testing by creating a tensor and then calling its .tolist(). It worked equally well for bool and uint8_t, so I left uint8_t, as I thought it made more sense since we are calling PyBool_FromLong. Changing it to bool.

**[ATen/Dispatch.h]** better name?
Fixed.

**[test/test_torch.py]** what about other factories, such as full? (and more).
There is a test that goes through the factory methods - test_tensor_factories_empty. I added some bool cases above it and added a comment that once CUDA is done, I will unite them so that it iterates not just between CUDA and CPU but also over all types. Adding all bool cases now. Will unite in the CUDA PR.

**[generic/THTensorMath.h]** any changes in this file actually needed?
Bad merge. Fixed.

**[TH/THTensor.h]** this generates code for random, clampedRandom, and cappedRandom -- do we have tests for all of these with bool?
Added

**[c10/core/ScalarType.h]** I'm not very confident about the lack of Bool here -- can you look at the call sites and see what makes sense to do here?
Added bool to the macro and created a similar one without it for a single case, which otherwise fails the build with errors:

_./torch/csrc/jit/symbolic_variable.h:79:20: error: ambiguous overload for ‘operator*’ (operand types are ‘const torch::jit::SymbolicVariable’ and ‘torch::jit::Value*’)
return (*this) * insertConstant(rhs);_

Differential Revision: D14605105

fbshipit-source-id: abf82d50e8f8c50b386545ac068268651b28496d
2019-03-26 09:59:34 -07:00
Sebastian Messmer
d408324350 Move files to/from c10/core and c10/util (#15316)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15316

This starts cleaning up the files in c10 according to the module structure we decided on.

Move to c10/util:
- Half.h, Half-inl.h, Half.cpp, bitcasts.h

Move to c10/core:
- Device.h, Device.cpp
- DeviceType.h, DeviceType.cpp

i-am-not-moving-c2-to-c10

Reviewed By: dzhulgakov

Differential Revision: D13498493

fbshipit-source-id: dfcf1c490474a12ab950c72ca686b8ad86428f63
2019-01-10 16:22:22 -08:00
Sebastian Messmer
c8a5ec14dd Remove at references from c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14432

Reviewed By: dzhulgakov

Differential Revision: D13223904

fbshipit-source-id: 43b06e33e088e7789ccea6d92267936fe30d8571
2018-12-08 00:28:35 -08:00
Sebastian Messmer
50e9c56830 Move Scalar and ScalarType to c10/core
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14022

Reviewed By: ezyang

Differential Revision: D13015236

fbshipit-source-id: 92aac4e342d85f75a31837b2943fa5b80f0c35c9
2018-11-27 12:59:36 -08:00