Commit Graph

52 Commits

Author SHA1 Message Date
Donna Choi
fdf7a833e7 Address printing inconsistency between float and complex tensors (#35841)
Summary:
See issue [Complex number printing inconsistent with float](https://github.com/pytorch/pytorch/issues/33494).

This change introduces an optional argument in the Formatter's `format` function to discern whether a tensor is a float tensor or not. This way, complex tensors print in the same manner as float tensors:

- Only a decimal point and no zeros for integer values.
- Trailing zeros only if the value is truly a float.
- White space is introduced to fill the gap so that +/- symbols and commas align.

Here are some example outputs.

```
print(torch.zeros((2,2), dtype=torch.float64))
```
yields
```
tensor([[0., 0.],
        [0., 0.]], dtype=torch.float64)
```

```
print(torch.zeros((2,2), dtype=torch.complex64))
```
previously yielded
```
tensor([[(0.0000 + 0.0000j), (0.0000 + 0.0000j)],
        [(0.0000 + 0.0000j), (0.0000 + 0.0000j)]], dtype=torch.complex64)
```
and now yields
```
tensor([[(0 + 0.j), (0 + 0.j)],
        [(0 + 0.j), (0 + 0.j)]], dtype=torch.complex64)
```
This new output is more consistent with the float tensor pretty print.

The following example mixes integers and decimals:
```
print(torch.tensor([[1 + 1.340j, 3 + 4j], [1.2 + 1.340j, 6.5 + 7j]], dtype=torch.complex64))
```
This yields:
```
tensor([[                     (1.0000 + 1.3400j),
                              (3.0000 + 4.0000j)],
        [                     (1.2000 + 1.3400j),
                              (6.5000 + 7.0000j)]], dtype=torch.complex64)
```

The following example
```
torch.tensor([1,2,3,4.5])
```
yields
```
tensor([1.0000, 2.0000, 3.0000, 4.5000])
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35841

Differential Revision: D20893848

Pulled By: anjali411

fbshipit-source-id: f84c533b8957a1563602439c07e60efbc79691bc
2020-04-09 08:54:25 -07:00
anjali411
13e4ee7883 Added tensor.is_complex(), is_complex and dtype.is_complex py binding, tensor printing, and fixed the scalar type returned for complex float (#33268)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33268
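A minimal usage sketch of the complex-dtype checks this adds, assuming current PyTorch behavior:
```python
import torch

z = torch.tensor([1 + 2j, 3 - 4j], dtype=torch.complex64)

print(z.is_complex())              # tensor method -> True
print(torch.is_complex(z))         # free function -> True
print(torch.complex64.is_complex)  # dtype attribute -> True
print(torch.float32.is_complex)    # -> False
```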

Test Plan: Imported from OSS

Differential Revision: D19907698

Pulled By: anjali411

fbshipit-source-id: c3ce2e99fc09da91a90a8fb94e5525a00bb23703
2020-02-20 13:38:01 -08:00
Richard Zou
9047d4df45 Remove all remaining usages of BUILD_NAMEDTENSOR (#31116)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31116

Changelist:
- remove BUILD_NAMEDTENSOR macro
- remove torch._C._BUILD_NAMEDTENSOR
- remove all python behavior that relies on torch._C._BUILD_NAMEDTENSOR

Future:
- In the next diff, I will remove all usages of
ATen/core/EnableNamedTensor.h since that header doesn't do anything
anymore
- After that, we'll be done with the BUILD_NAMEDTENSOR removal.

Test Plan: - run CI

Differential Revision: D18934951

Pulled By: zou3519

fbshipit-source-id: 0a0df0f1f0470d0a01c495579333a2835aac9f5d
2019-12-12 09:53:03 -08:00
Dmytro Dzhulgakov
b93823cb65 Per-channel quantized tensor to have only a single axis (#26675)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26675

Based on an offline poll, we're very unlikely to have multi-axis quantized tensors in the foreseeable future. Let's simplify the API and just return an int instead of a list. It also matches the singular `axis` name.
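A hedged sketch of the resulting API, assuming current PyTorch behavior (`q_per_channel_axis` returns a plain int):
```python
import torch

x = torch.randn(2, 3)
q = torch.quantize_per_channel(x, torch.tensor([0.1, 0.2]),
                               torch.tensor([0, 0]), 0, dtype=torch.qint8)
print(q.q_per_channel_axis())  # a single int (0 here), not a list of axes
```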

Test Plan: Imported from OSS

Differential Revision: D17537052

Pulled By: dzhulgakov

fbshipit-source-id: 676abc3b251d288468aaed467b5e5ca4063b98b0
2019-09-23 22:29:01 -07:00
Richard Zou
4fada96218 Renames tensor.renamed -> rename, tensor.names_ -> rename_ (#26548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26548

This makes the naming more consistent with PyTorch's API. The original
concern was that `tensor.rename` might make the operation seem like it
is in-place. However, we have many "verb" APIs: `tensor.add(other)`, for
example, doesn't add other to tensor in-place, but `tensor.add_(other)`
does.

`tensor.rename_` does exactly the same thing as `tensor.rename`, but
in-place.
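A small sketch of the renamed pair, assuming the named-tensor API as it exists today:
```python
import torch

t = torch.zeros(2, 3, names=('N', 'C'))
u = t.rename(N='batch')    # out-of-place: t keeps its old names
t.rename_(N='batch')       # in-place: t itself now has names ('batch', 'C')
print(u.names, t.names)    # ('batch', 'C') ('batch', 'C')
```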

Test Plan: - [namedtensor ci]

Differential Revision: D17502021

Pulled By: zou3519

fbshipit-source-id: 6a5b93136a820075013cd1e30fb8fc6b9d77d7d9
2019-09-22 15:38:26 -07:00
Dmytro Dzhulgakov
8c1354c31b Implement more support for per-channel quantization (#26240)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26240

In particular, this adds support for empty/empty_like, which is needed for memory layouts to work.
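A hedged sketch of what this enables, assuming current PyTorch behavior:
```python
import torch

x = torch.randn(2, 3)
q = torch.quantize_per_channel(x, torch.tensor([0.1, 0.2]),
                               torch.tensor([0, 0]), 0, dtype=torch.qint8)
like = torch.empty_like(q)   # allocates another per-channel quantized tensor
print(like.is_quantized)     # True
print(like.qscheme())        # expected: torch.per_channel_affine
```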

Test Plan: Imported from OSS

Differential Revision: D17443220

Pulled By: dzhulgakov

fbshipit-source-id: 9c9e25981999c0edaf40be104a5741e9c62a1333
2019-09-19 13:39:17 -07:00
Richard Zou
7970e5720b Rename tensor.view_names -> tensor.renamed (#25711)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25711

This function renames the dimensions of a tensor out-of-place. Because
of that, I think `tensor.renamed(...)` is a clearer name: `view_names`
has the connotation that we can use names to `view` our tensors with a
"different shape", but what this function really does is let us rename a
tensor no matter the previous names.

`tensor.names_`, the in-place version of this, is unchanged for now.
However, we might delete this or not advertise it if it has no use case
and also because its naming is a little inconsistent with `tensor.renamed`.

Test Plan: - [namedtensor ci]

Differential Revision: D17206515

Pulled By: zou3519

fbshipit-source-id: 67053951fcc8130c84566b5ebbdce35ef619c90d
2019-09-06 11:28:04 -07:00
Richard Zou
9ea6238b07 Fix named tensor printing (#25564)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25564

There are a number of ops that get called while printing tensors
depending on how large the tensors are. This PR makes it so that before
we attempt to format tensor data for printing, we drop the names of the
tensor (if there are any). This is easier than supporting named tensors
for all of those ops (which should happen eventually).
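A rough illustration of the approach, assuming today's named-tensor API:
```python
import torch

t = torch.randn(2, 3, names=('N', 'C'))
# Formatting ops run on a nameless view, so they don't need named-tensor support;
# the names are appended to the repr separately.
unnamed = t.rename(None)   # drops all dimension names
print(unnamed)
print(t)                   # repr ends with names=('N', 'C')
```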

Test Plan: - new test [namedtensor ci]

Differential Revision: D17158872

Pulled By: zou3519

fbshipit-source-id: 282023837645b8cb16a4d93896a843dd598fc738
2019-09-03 19:58:00 -07:00
Daya Khudia
12ea1d74f0 Add missing functions and methods for channelwise quantization (#24934)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24934

1) Functions and methods to get scales and zero_points for channelwise quantization were missing. Adding these.
2) Correctly print quantized tensors for channelwise quantization.
ghstack-source-id: 88868339

Test Plan:
buck test mode/dev caffe2/test:quantized -- 'test_qtensor\ \(test_quantized_tensor.TestQuantizedTensor\)'  --print-passing-details

```
Running 1 tests
Started new test run: https://our.intern.facebook.com/intern/testinfra/testrun/1970324844629541
      ✓ caffe2/test:quantized - test_qtensor (test_quantized_tensor.TestQuantizedTensor) 0.161 1/1 (passed)
Test output:
> test_qtensor (test_quantized_tensor.TestQuantizedTensor) ... ok
>
> ----------------------------------------------------------------------
> Ran 1 test in 0.161s
>
> OK
Finished test run: https://our.intern.facebook.com/intern/testinfra/testrun/1970324844629541
Summary (total time 6.61s):
  PASS: 1
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```
To be added in a followup diff.
Current output for printing qtensors:
print(W_q.int_repr())
print(W_q)

```
> tensor([[[[-3,  0,  0],
>           [ 4, -2, -4],
>           [-1, -3, -2]],
>
>          [[-3,  1,  3],
>           [-3, -3,  3],
>           [-3, -5, -1]]],
>
>
>         [[[ 4, -3, -4],
>           [ 4, -3, -3],
>           [ 4, -1, -1]],
>
>          [[ 2, -3,  0],
>           [ 3,  1,  1],
>           [ 2, -4,  0]]]], dtype=torch.int8)
> tensor([[[[-0.9273, -0.2318, -0.2318],
>           [ 0.6955, -0.6955, -1.1592],
>           [-0.4637, -0.9273, -0.6955]],
>
>          [[-0.9273,  0.0000,  0.4637],
>           [-0.9273, -0.9273,  0.4637],
>           [-0.9273, -1.3910, -0.4637]]],
>
>
>         [[[ 0.3938, -0.1575, -0.2363],
>           [ 0.3938, -0.1575, -0.1575],
>           [ 0.3938,  0.0000,  0.0000]],
>
>          [[ 0.2363, -0.1575,  0.0788],
>           [ 0.3150,  0.1575,  0.1575],
>           [ 0.2363, -0.2363,  0.0788]]]], size=(2, 2, 3, 3), dtype=torch.qint8,
>        quantization_scheme=torch.per_channel_affine,
>        scale=tensor([0.2318, 0.0788]), zero_point=tensor([ 1, -1]))
```
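For reference, a hedged sketch of the accessors mentioned in (1), assuming current PyTorch behavior:
```python
import torch

x = torch.randn(2, 2, 3, 3)
W_q = torch.quantize_per_channel(x, torch.tensor([0.2318, 0.0788]),
                                 torch.tensor([1, -1]), 0, dtype=torch.qint8)
print(W_q.q_per_channel_scales())       # per-channel scales, roughly tensor([0.2318, 0.0788])
print(W_q.q_per_channel_zero_points())  # per-channel zero points, tensor([ 1, -1])
```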

Differential Revision: D16659715

fbshipit-source-id: f8d3eeaff8f618aa0cca4fd076db73318e6df946
2019-08-23 15:44:16 -07:00
Richard Zou
57fc793650 Add names to repr for named tensors
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23316

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23316 gh/zou3519/80/head

Imported from OSS

Differential Revision: D16494415

Pulled By: zou3519

fbshipit-source-id: e483f57bdb0610d0eadbe70d673e20dc3d3f9502
2019-08-02 11:37:29 -07:00
Iurii Zdebskyi
10c60b601a Added BFloat16 tensor for CPU with very limited support (#21860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21860
ghimport-source-id: 5290755b63033cdfdeb911a4ecf4aa282b3db02d
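A tiny sketch of the new dtype in use, assuming current CPU support:
```python
import torch

x = torch.zeros(4, dtype=torch.bfloat16)  # CPU bfloat16 tensor
print(x)            # tensor([0., 0., 0., 0.], dtype=torch.bfloat16)
print(x.dtype)      # torch.bfloat16
print(x.float())    # upcast to float32 for ops bfloat16 may not yet support
```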

Test Plan: Imported from OSS

Differential Revision: D15856091

Pulled By: izdeby

fbshipit-source-id: 54e7e17be1b5c5a2e80a41feaeaeba75dbb8108f
2019-07-10 09:08:52 -07:00
iurii zdebskyi
a54acd3755 Update the way boolean tensors are printed (#22238)
Summary:
When a boolean tensor is printed, there is no need to specify the dtype.

Example:
```
>> x = torch.tensor([[True, True, True], [True, True, True]])
>> print(x)
tensor([[True, True, True],
        [True, True, True]])

>> x = torch.tensor([True])
>> print(x)
tensor([True])

>> x = torch.tensor(True)
>> print(x)
tensor(True)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22238

Differential Revision: D15996304

Pulled By: izdeby

fbshipit-source-id: 5699acf3e00abca8a2bbb5384f8271eeb063dce7
2019-07-01 18:04:42 -07:00
Ailing Zhang
e8bc992b03 print device when it's not on default device (#22094)
Summary:
We used to not print the device when a tensor is on XLA. This is sometimes confusing, as it looks the same as a CPU tensor...
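A hedged illustration of the intended behavior (the non-CPU lines assume a CUDA or XLA build):
```python
import torch

print(torch.ones(2))                   # tensor([1., 1.])  -- default device, no suffix
print(torch.ones(2, device='cuda:0'))  # tensor([1., 1.], device='cuda:0')
# With torch_xla installed, an XLA tensor should likewise show its device explicitly.
```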
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22094

Differential Revision: D15975405

Pulled By: ailzhang

fbshipit-source-id: f19ceb9e26f5f2f6e7d659de12716f0dfe065f42
2019-06-25 20:28:50 -07:00
Jerry Zhang
88921feafd change return type for q_scale and q_zero_point (#21709)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21709

Change the return type from Scalar to double/int64_t so we don't need to do a conversion when we call other quantization-related ATen functions.
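A minimal sketch of the effect at the Python level, assuming current behavior:
```python
import torch

q = torch.quantize_per_tensor(torch.randn(3), scale=0.1, zero_point=2,
                              dtype=torch.quint8)
print(type(q.q_scale()))       # <class 'float'>  (plain double, no Scalar unwrapping)
print(type(q.q_zero_point()))  # <class 'int'>
```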

Differential Revision: D15793003

fbshipit-source-id: 510936c69fa17a4d67340a31ebb03415647feb04
2019-06-20 20:30:39 -07:00
Jerry Zhang
17268a9225 Add print function for QTensor (#19513)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19513

Add support for printing a QTensor in the Python frontend.
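A small sketch of printing a quantized tensor from Python, hedged against the exact formatting:
```python
import torch

q = torch.quantize_per_tensor(torch.tensor([1.0, 2.5, 3.0]),
                              scale=0.1, zero_point=0, dtype=torch.quint8)
print(q)             # dequantized values plus size/dtype/scheme/scale/zero_point
print(q.int_repr())  # the underlying uint8 storage
```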

Differential Revision: D15017168

fbshipit-source-id: 312d1f18e6ca3c9eb4a5b8bb1c64f7cc8bc1dcf5
2019-05-06 13:12:43 -07:00
jgong5
3ad710b837 Add MKL-DNN Tensor (#17748)
Summary:
This is a minimalist PR to add an MKL-DNN tensor, per the discussion in GitHub issue https://github.com/pytorch/pytorch/issues/16038.

Ops on MKL-DNN tensors will be supported in follow-up PRs to speed up the imperative path.
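A hedged sketch of round-tripping through the MKL-DNN layout, assuming a build with MKL-DNN enabled:
```python
import torch

x = torch.randn(2, 3)        # dense float32 CPU tensor
m = x.to_mkldnn()            # opaque MKL-DNN tensor (no strides exposed)
print(m.layout)              # an MKL-DNN layout, distinct from torch.strided
y = m.to_dense()             # convert back for ops that need strided tensors
print(torch.allclose(x, y))  # True
```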
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17748

Reviewed By: dzhulgakov

Differential Revision: D14614640

Pulled By: bddppq

fbshipit-source-id: c58de98e244b0c63ae11e10d752a8e8ed920c533
2019-04-08 21:41:38 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.
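For illustration, a generic sketch (not from this PR, hypothetical module names) of the two ways to satisfy F401 mentioned above:
```python
# Inside a package __init__.py:

# Option 1: silence the lint for an intentional re-export.
from ._internal import helper  # noqa: F401

# Option 2: declare the re-export via __all__, which pyflakes treats as a use.
__all__ = ['helper']
```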

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
Ivan Ogasawara
8b4dea3f56 Added scientific notation on set_printoptions (#16876)
Summary:
This PR fixes #15683
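A hedged usage sketch of the option this adds (the `sci_mode` keyword, assuming its current form):
```python
import torch

torch.set_printoptions(sci_mode=True)   # force scientific notation
print(torch.tensor([0.01, 10.0]))       # tensor([1.0000e-02, 1.0000e+01])

torch.set_printoptions(sci_mode=None)   # back to the automatic heuristic
print(torch.tensor([0.01, 10.0]))       # tensor([ 0.0100, 10.0000])
```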
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16876

Differential Revision: D14021703

Pulled By: soumith

fbshipit-source-id: 1f603a7d24e331831d8d389f4a704c6a5b070b0c
2019-02-11 04:55:12 -08:00
David Riazati
1dbc7cff3e Fix tensor printing bug in Python 2 (#12732)
Summary:
`rsplit` doesn't accept keyword arguments in Python 2, so this call raises an error.
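For context, a small sketch of the difference (standard Python, not PyTorch-specific):
```python
s = '1.0000e-02\n1.1000e+01'

s.rsplit('\n', 1)               # works on both Python 2 and 3
s.rsplit(sep='\n', maxsplit=1)  # Python 3 only; raises TypeError on Python 2
```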

Fixes #15135
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12732

Differential Revision: D10458630

Pulled By: driazati

fbshipit-source-id: a63e42fbc0e39e4291480775b516c98122ec05a1
2018-12-17 13:17:51 -08:00
Francisco Massa
68251fb931 Fix half tensor printing plus speedup large tensor printing (#14418)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/14344 and https://github.com/pytorch/pytorch/issues/6863

The slowdown was due to the fact that we were only summarizing the tensor (for computing the number of digits to print) if its first dimension was larger than the threshold. It now goes over all the dimensions.

Some quick runtime analysis:

Before this PR:
```python
In [1]: import torch; a = torch.rand(1, 1700, 34, 50)

In [2]: %timeit str(a)
13.6 s ± 84.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

After this PR

```python
In [1]: import torch; a = torch.rand(1, 1700, 34, 50)

In [2]: %timeit str(a)
2.08 ms ± 395 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [3]: b = a.cuda()

In [4]: %timeit str(b)
8.39 ms ± 45.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14418

Reviewed By: weiyangfb

Differential Revision: D13226950

Pulled By: soumith

fbshipit-source-id: 19eb4b855db4c8f891d0925a9c56ae8a2824bb23
2018-11-28 06:13:06 -08:00
Ailing Zhang
478886be30 Fix print precision and match numpy behavior (#12746)
Summary:
Fixes #12578 #9395.

* Fix and simplify print logic

* Follow the numpy print rule from numpy/core/arrayprint.py (commit eb2bd11870, L859):
> scientific notation is used when absolute value of the smallest number is < 1e-4 or maximum > 1e8 or the ratio of the maximum absolute value to the minimum is > 1e3

I hope I didn't break anything, since there seem to be a lot of edge cases here... Here are some easy sanity checks.
```
In [5]: torch.tensor(1)
Out[5]: tensor(1)
Out[2]: array(1) # numpy

In [6]: torch.tensor(10)
Out[6]: tensor(10)
Out[3]: array(10) # numpy

In [8]: torch.tensor(99000000)
Out[8]: tensor(99000000)
Out[5]: array(99000000) # numpy

In [9]: torch.tensor(100000000)
Out[9]: tensor(100000000)
Out[6]: array(100000000) # numpy

In [10]: torch.tensor(100000001)
Out[10]: tensor(100000001)
Out[7]: array(100000001) # numpy

In [11]: torch.tensor(1000000000)
Out[11]: tensor(1000000000)
Out[8]: array(1000000000) # numpy

In [12]: torch.tensor([1, 1000])
Out[12]: tensor([   1, 1000])
Out[9]: array([   1, 1000]) # numpy

In [13]: torch.tensor([1, 1010])
Out[13]: tensor([   1, 1010])
Out[10]: array([   1, 1010]) # numpy
```
For floating-point values, we use scientific notation when `max/min > 1000 || max > 1e8 || min < 1e-4`.
Lines marked "old" show previous behavior that either had precision issues or was not aligned with numpy.
```
In [14]: torch.tensor(0.01)
Out[14]: tensor(0.0100)
Out[11]: array(0.01) # numpy

In [15]: torch.tensor(0.1)
Out[15]: tensor(0.1000)
Out[12]: array(0.1) # numpy

In [16]: torch.tensor(0.0001)
Out[16]: tensor(0.0001)
Out[14]: array(0.0001) # numpy

In [17]: torch.tensor(0.00002)
Out[17]: tensor(2.0000e-05)
Out[15]: array(2e-05) # numpy
Out[5]: tensor(0.0000) # old

In [18]: torch.tensor(1e8)
Out[18]: tensor(100000000.)
Out[16]: array(100000000.0) # numpy

In [19]: torch.tensor(1.1e8)
Out[19]: tensor(1.1000e+08)
Out[17]: array(1.1e8) # numpy 1.14.5, In <= 1.13 this was not using scientific print
Out[10]: tensor(110000000.) # old

In [20]: torch.tensor([0.01, 10.])
Out[20]: tensor([ 0.0100, 10.0000])
Out[18]: array([  0.01,  10.  ]) # numpy

In [21]: torch.tensor([0.01, 11.])
Out[21]: tensor([1.0000e-02, 1.1000e+01])
Out[19]: array([  1.00000000e-02,   1.10000000e+01]) # numpy
Out[7]: tensor([ 0.0100, 11.0000]) # old
```
When printing floating-point numbers in integer mode, we still need to respect the rules for using scientific mode first:
```
In [22]: torch.tensor([1., 1000.])
Out[22]: tensor([   1., 1000.])
Out[20]: array([    1.,  1000.]) # numpy

In [23]: torch.tensor([1., 1010.])
Out[23]: tensor([1.0000e+00, 1.0100e+03])
Out[21]: array([  1.00000000e+00,   1.01000000e+03]) # numpy
Out[9]: tensor([   1., 1010.]) # old
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12746

Differential Revision: D10443800

Pulled By: ailzhang

fbshipit-source-id: f5e4e3fe9bf0b44af2c64c93a9ed42b73fa613f5
2018-10-24 18:12:51 -07:00
Tongzhou Wang
83a1ab2136 Sparse tensor printing; add NotImplemented autograd fn (#10181)
Summary:
Commits:

1. Add autograd function `NotImplemented` (subclass of `Error`) so the Python `grad_fn` prints nicer. Since `Error` is used in `DelayedError` to implement `once_differentiable`, I can't just change its name. cc colesbury

2. Add printing for sparse tensors. Fixes https://github.com/pytorch/pytorch/issues/9412. cc weiyangfb

3. Add tests for sparse printing

Examples:
```diff
  In [2]: x = torch.sparse.FloatTensor(torch.arange(4).view(2,2), torch.randn(2, 2), [10, 10, 2])

  In [3]: x
  Out[3]:
- torch.sparse.FloatTensor of size (10,10,2) with indices:
- tensor([[0, 1],
-         [2, 3]])
- and values:
- tensor([[-1.1832, -0.5927],
-         [ 0.0831,  0.2511]])
+ tensor(indices=tensor([[0, 1],
+                        [2, 3]]),
+        values=tensor([[ 1.5081,  0.3451],
+                       [-0.0392,  0.4776]]),
+        size=(10, 10, 2), nnz=2, layout=torch.sparse_coo)

  In [4]: x.requires_grad_()
  Out[4]:
- torch.sparse.FloatTensor of size (10,10,2) with indices:
- tensor([[0, 1],
-         [2, 3]], grad_fn=<Error>)
- and values:
- tensor([[-1.1832, -0.5927],
-         [ 0.0831,  0.2511]], grad_fn=<Error>)
+ tensor(indices=tensor([[0, 1],
+                        [2, 3]]),
+        values=tensor([[ 1.5081,  0.3451],
+                       [-0.0392,  0.4776]]),
+        size=(10, 10, 2), nnz=2, layout=torch.sparse_coo, requires_grad=True)

  In [5]: x + x
  Out[5]:
- torch.sparse.FloatTensor of size (10,10,2) with indices:
- tensor([[0, 1],
-         [2, 3]], grad_fn=<Error>)
- and values:
- tensor([[-2.3664, -1.1855],
-         [ 0.1662,  0.5021]], grad_fn=<Error>)
+ tensor(indices=tensor([[0, 1],
+                        [2, 3]]),
+        values=tensor([[ 3.0162,  0.6902],
+                       [-0.0785,  0.9553]]),
+        size=(10, 10, 2), nnz=2, layout=torch.sparse_coo, grad_fn=<AddBackward0>)

  In [6]: x.double()
  Out[6]:
- torch.sparse.DoubleTensor of size (10,10,2) with indices:
- tensor([[0, 1],
-         [2, 3]], grad_fn=<Error>)
- and values:
- tensor([[-1.1832, -0.5927],
-         [ 0.0831,  0.2511]], dtype=torch.float64, grad_fn=<Error>)
+ tensor(indices=tensor([[0, 1],
+                        [2, 3]]),
+        values=tensor([[ 1.5081,  0.3451],
+                       [-0.0392,  0.4776]]),
+        size=(10, 10, 2), nnz=2, dtype=torch.float64, layout=torch.sparse_coo,
+        grad_fn=<NotImplemented>)

  In [7]: x = torch.sparse.FloatTensor(torch.ones(0, 2, dtype=torch.long), torch.randn(2, 0), [0])

  In [8]: x
  Out[8]:
- torch.sparse.FloatTensor of size (0,) with indices:
- tensor([], size=(0, 2), dtype=torch.int64)
- and values:
- tensor([], size=(2, 0))
+ tensor(indices=tensor([], size=(0, 2)),
+        values=tensor([], size=(2, 0)),
+        size=(0,), nnz=2, layout=torch.sparse_coo)

  In [9]: x = torch.sparse.FloatTensor(torch.ones(0, 2, dtype=torch.long), torch.randn(2), [])

  In [10]: x
  Out[10]:
- torch.sparse.FloatTensor of size () with indices:
- tensor([], size=(0, 2), dtype=torch.int64)
- and values:
- tensor([-0.0064,  0.8518])
+ tensor(indices=tensor([], size=(0, 2)),
+        values=tensor([ 0.9800, -0.5978]),
+        size=(), nnz=2, layout=torch.sparse_coo)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10181

Differential Revision: D9139845

Pulled By: SsnL

fbshipit-source-id: 353eebd55fac4049ed9bf85f8b0ee2c1418a744e
2018-09-05 19:41:22 -07:00
Adam Paszke
f0142faab0 Expose arbitrary cpp autograd functions to Python (#11082)
Summary:
This is needed because the JIT declares some custom autograd functions.

colesbury
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11082

Differential Revision: D9580456

Pulled By: apaszke

fbshipit-source-id: 6bf00c1188a20b2ee6ecf60e5a0099f8263ad55a
2018-08-30 14:25:59 -07:00
pbialecki
c6fc3ab557 fixes printing non-contiguous tensors
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10405
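A minimal reproduction sketch, assuming current behavior:
```python
import torch

x = torch.arange(6.).reshape(2, 3).t()  # transpose -> non-contiguous
print(x.is_contiguous())                # False
print(x)                                # printing must not assume contiguous storage
```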

Differential Revision: D9302794

Pulled By: soumith

fbshipit-source-id: e4a7db8d33400a5a050d05fd1679de8bc3cbcf30
2018-08-13 16:26:20 -07:00
Tongzhou Wang
27455e9c78 Use _six for inf and nan (#9500)
Summary:
Things like `float('inf')` are actually quite expensive.
```py
In [1]: import math

In [2]: %timeit -n 200 math.inf
49.3 ns ± 1.42 ns per loop (mean ± std. dev. of 7 runs, 200 loops each)

In [3]: %timeit -n 200 float('inf')
194 ns ± 39.1 ns per loop (mean ± std. dev. of 7 runs, 200 loops each)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9500

Reviewed By: soumith

Differential Revision: D8876229

Pulled By: SsnL

fbshipit-source-id: 78602b76bb53d5588910b58270930c0bd413d2d7
2018-07-18 10:40:29 -07:00
gchanan
b6af5d40bf
Some 0-sized dimension support, port catArray away from resizeLegacy. (#8666)
* Some 0-sized dimension support, port catArray away from resizeLegacy.

The goal of this PR is to port catArray away from resizeLegacy (so we can delete the legacy resize calls), but since catArray has some weird behavior because
we don't have arbitrary 0-sized dimension support, I made some effort to fix these both in one pass.

The major changes here are:
1) catArray uses the new resize API, no longer the old resizeLegacy API.
2) As 1) is the last usage of resizeLegacy, it is deleted.
3) If compiled with USE_TH_SIZE_ZERO_DIM, catArray will work and properly check shapes for n-dimensional empty tensors.
4) However, we retain the old behavior of "ignoring" size [0] tensors in catArray.  We previously allowed this because we didn't have n-dimensional empty tensors.
5) To get the above to work, we also add support for n-dimensional empty tensors for narrow and slice (ifdef USE_TH_SIZE_ZERO_DIM).
6) We change the stride formula for empty tensors to match NumPy; basically, we never multiply by 0 as the size, always at least 1, so the
   strides are monotonically increasing in the empty tensor case.
7) We print the size of empty tensors if size != [0]; this matches NumPy behavior (even in cases where the size could be inferred from the brackets); see the sketch after this list.
8) For test purposes, we add torch._C._use_zero_size_dim() to add tests for the above.
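A hedged sketch of the resulting behavior (points 3, 4, and 7 above), assuming current PyTorch:
```python
import torch

a = torch.randn(0, 3)           # n-dimensional empty tensor
b = torch.randn(2, 3)
print(torch.cat([a, b]).shape)  # torch.Size([2, 3]); shapes are still checked
print(torch.randn(0, 3))        # tensor([], size=(0, 3)) -- size shown since size != [0]
print(torch.randn(0))           # tensor([])               -- plain [0] stays terse
```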

* Fix flake8.

* Address review comments.
2018-06-20 13:26:08 -04:00
li-roy
6a85b133d3
Improve number formatting in tensor print (#7632)
* Improve number formatting in tensor print

* fix bad rebase

* address comments

* fix test

* fix test

* use assertExpected for tests

* address comments

* address comments
2018-06-13 23:57:16 -07:00
Sam Gross
14f5484e0d Print requires_grad and grad_fn in string repr of tensor (#8211)
For example:

  >>> torch.ones(3).requires_grad_()
  tensor([ 1.,  1.,  1.], requires_grad=True)

  >>> torch.ones(3).requires_grad_() * 5
  tensor([ 5.,  5.,  5.], grad_fn=<MulBackward0>)

The suffix (dtype, requires_grad, grad_fn) wraps to a new line if
it would cause the line to exceed the linewidth.

  >>> torch.ones(10).double().requires_grad_()
  tensor([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
         dtype=torch.float64, requires_grad=True)
2018-06-07 14:31:23 -04:00
li-roy
93242d320f
fix scale on some tensors (#7189) 2018-05-02 15:33:02 -07:00
li-roy
242f6c3470
Don't print dots after nonfinite numbers in integral float tensors (#6835)
* Don't print dots after nonfinite numbers in integral float tensors

* get around lint

* support python 2

* refactor

* better refactor
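A quick sketch of the fixed formatting described in the first bullet, assuming current output:
```python
import torch

print(torch.tensor([1.0, float('inf'), float('nan')]))
# tensor([1., inf, nan])   -- integral values get a trailing dot, inf/nan do not
```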
2018-04-26 11:18:12 -07:00
gchanan
90e75c6528
Speed up printing of large tensors. (#6876)
* Speed up printing of large tensors.

Instead of deciding on the format based on all of the elements of the tensor, decide based on the elements that will actually be printed.
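Roughly the idea, sketched with the existing print options (numbers illustrative):
```python
import torch

torch.set_printoptions(threshold=1000, edgeitems=3)
a = torch.rand(1, 1700, 34, 50)
print(a)  # summarized with '...'; only the displayed edge items are scanned
          # to pick the number format, not all ~2.9M elements
```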

* Fix flake8.

* Add else case.
2018-04-24 14:04:29 -04:00
li-roy
a2f2d6b43f
Add special case for printing dtype for empty int64 tensor (#6869)
* add special case for printing dtype for empty int64 tensor

* add comment
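The special case, sketched (assuming current output):
```python
import torch

print(torch.tensor([], dtype=torch.int64))  # tensor([], dtype=torch.int64)
print(torch.tensor([]))                     # tensor([])  -- default float needs no dtype suffix
```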
2018-04-23 12:07:59 -07:00
li-roy
34edd6f12e
fix sparse tensor print (#6829) 2018-04-20 19:39:52 -07:00
gchanan
8a434d9554
Print integral floating point numbers as X. instead of X.0000. (#6812) 2018-04-20 21:26:21 -04:00
li-roy
d1bb75e273
Redo tensor repr to make it less verbose (#6370)
* Redo tensor repr to make it less verbose

* fix empty tensor

* fix scaled scalars

* update for device-dtype split

* address comments

* removed repeated lines

* address comments

* add cuda to device string
2018-04-18 18:25:07 -07:00
Kento NOZAWA
3b58b859b2 Fix typos in docs (#6389) 2018-04-07 12:41:15 -04:00
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs

* Add more changes
1. Modify all torch.Tensor wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clear up prints in the build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
Sam Gross
30ec06c140
Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code. Subsequent PRs will remove the dead code, clean-up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variable and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
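A hedged sketch of what the merge means at the Python level:
```python
import torch

x = torch.randn(3)      # factories now return Tensors that can track gradients
print(type(x))          # <class 'torch.Tensor'>
print(x.requires_grad)  # False until you opt in
x.requires_grad_()      # no separate Variable wrapper needed anymore
y = (x * 2).sum()
y.backward()
print(x.grad)           # tensor([2., 2., 2.])
```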
2018-02-23 18:03:31 -05:00
Sam Gross
720c7b1e2c
Move repeat to torch/_utils.py (#4712)
This moves the implementation of repeat to _utils so that the autograd
function can call it directly instead of relying on forward being called
on tensors.

This also removes _range, which was previously necessary because we
shadowed the built-in range() function.
2018-01-17 17:30:43 -05:00
gchanan
841ce42daf
Fix flake8. (#4644) 2018-01-12 14:38:14 -05:00
gchanan
eb857ec367
Introduce a (non-public) autograd scalar method and improve printing (#4586)
* Specialize Variable printing and always print the device for GPU tensors/Variables.

* Introduce a (non-public) _scalar_sum() method for autograd scalar testing.
2018-01-12 14:26:38 -05:00
SsnL
86d0c24b6a Dynamically find min log scale #3289
* dynamically find min scale

* compute only once per _number_format call
2017-10-27 02:42:16 +05:30
SsnL
38f87cc9c4 Limit print scale by sys.float_info (#3113)
* limit print scale by sys.float_info

* test print tiny/huge values in test_print

* fix lint
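For reference, the bounds involved (standard Python; the clamping itself happens inside the formatter):
```python
import sys

print(sys.float_info.max_10_exp)  # 308  -- largest base-10 exponent a double can hold
print(sys.float_info.min_10_exp)  # -307 -- smallest normalized base-10 exponent
# The print scale is clamped to this range so 10**scale never overflows or underflows.
```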
2017-10-14 08:52:01 +02:00
Gregory Chanan
1ef4cc1591 Incorporate review comments:
1) Line up trailing dimensions in broadcast docs.
2) remove unnecessary expand_as in common_nn test.
3) use view in tensor_str instead of resize_.
4) newExpand remove raiseErrors change.
5) clarify expandedSizes/expandedStrides parameters in inferExpandGeometry.
6) simplify inferSize2/inferSizeN implementations.
7) use new-style classes for warning.
2017-06-11 05:37:59 -04:00
Gregory Chanan
69287250d1 Add a broadcast parameter to copy_, use it in the library in cases where there is non-broadcasting calls exposed by the tests. 2017-06-11 05:37:59 -04:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
4694e4050b Fix printing bug when all values are NaN or inf 2016-12-19 20:35:08 -05:00
Zeming Lin
86e42ba291 Adding truncated tensor printing (#202)
* Adding truncated tensor printing
2016-11-08 10:05:30 -05:00
Adam Paszke
645c913e4f Print GPU id for CUDA tensors 2016-10-30 00:16:06 +02:00
Adam Paszke
deebc1383e Show exponent when printing vectors 2016-10-24 22:30:11 +02:00