Commit Graph

27 Commits

gchanan
b6af5d40bf
Some 0-sized dimension support, port catArray away from resizeLegacy. (#8666)
* Some 0-sized dimension support, port catArray away from resizeLegacy.

The goal of this PR is to port catArray away from resizeLegacy (so we can delete the legacy resize calls), but catArray also has some odd behavior
stemming from the lack of arbitrary 0-sized dimension support, so I made an effort to fix both issues in one pass.

The major changes here are:
1) catArray now uses the new resize API rather than the old resizeLegacy API.
2) As 1) removes the last usage of resizeLegacy, resizeLegacy is deleted.
3) If compiled with USE_TH_SIZE_ZERO_DIM, catArray will work and properly check shapes for n-dimensional empty tensors.
4) However, we retain the old behavior of "ignoring" size [0] tensors in catArray.  We previously allowed this because we didn't have n-dimensional empty tensors.
5) To get the above to work, we also add support for n-dimensional empty tensors for narrow and slice (ifdef USE_TH_SIZE_ZERO_DIM).
6) We change the stride formula for empty tensors to match NumPy: we never multiply a stride by a size of 0, always by at least 1, so the
   strides are monotonically increasing in the empty tensor case.
7) We print the size of empty tensors if size != [0]; this matches NumPy behavior (even in cases where the size could be inferred from the brackets).
8) For test purposes, we add torch._C._use_zero_size_dim() to add tests for the above.
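
A rough sketch of the stride rule from point 6 (hypothetical helper, not the actual TH code): each size is treated as at least 1 when accumulating strides, matching NumPy.

    import numpy as np

    def contiguous_strides(sizes):
        # point 6 above: multiply by max(size, 1), never by 0
        strides = [1] * len(sizes)
        for i in range(len(sizes) - 2, -1, -1):
            strides[i] = strides[i + 1] * max(sizes[i + 1], 1)
        return strides

    print(contiguous_strides([0, 3]))  # [3, 1]
    print(np.empty((0, 3)).strides)    # (24, 8) bytes == (3, 1) float64 elements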

* Fix flake8.

* Address review comments.
2018-06-20 13:26:08 -04:00
li-roy
6a85b133d3
Improve number formatting in tensor print (#7632)
* Improve number formatting in tensor print

* fix bad rebase

* address comments

* fix test

* fix test

* use assertExpected for tests

* address comments

* address comments
2018-06-13 23:57:16 -07:00
Sam Gross
14f5484e0d Print requires_grad and grad_fn in string repr of tensor (#8211)
For example:

  >>> torch.ones(3).requires_grad_()
  tensor([ 1.,  1.,  1.], requires_grad=True)

  >>> torch.ones(3).requires_grad_() * 5
  tensor([ 5.,  5.,  5.], grad_fn=<MulBackward0>)

The suffix (dtype, requires_grad, grad_fn) wraps to a new line if
it would cause the line to exceed the linewidth.

  >>> torch.ones(10).double().requires_grad_()
  tensor([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.],
         dtype=torch.float64, requires_grad=True)
2018-06-07 14:31:23 -04:00
li-roy
93242d320f
fix scale on some tensors (#7189) 2018-05-02 15:33:02 -07:00
li-roy
242f6c3470
Don't print dots after nonfinite numbers in integral float tensors (#6835)
* Don't print dots after nonfinite numbers in integral float tensors

* get around lint

* support python 2

* refactor

* better refactor
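
Roughly the behavior this commit targets, illustrated with current PyTorch output: the trailing dot marks integral float values, but inf/nan entries get no dot.

  >>> torch.tensor([1.0, float('inf'), float('nan')])
  tensor([1., inf, nan])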
2018-04-26 11:18:12 -07:00
gchanan
90e75c6528
Speed up printing of large tensors. (#6876)
* Speed up printing of large tensors.

Instead of deciding on the format based on all of the elements of the tensor, decide based on the elements that will actually be printed.

* Fix flake8.

* Add else case.
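
A hedged sketch of the idea (hypothetical helper, not the actual implementation): when output is summarized, only the edge items that will actually appear need to be inspected to choose the number format. Simplified to 1-D here; the real code walks each dimension.

    import torch

    def visible_elements(t, edgeitems=3):
        # only these items influence the chosen format when summarizing
        flat = t.reshape(-1)
        if flat.numel() <= 2 * edgeitems:
            return flat
        return torch.cat([flat[:edgeitems], flat[-edgeitems:]])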
2018-04-24 14:04:29 -04:00
li-roy
a2f2d6b43f
Add special case for printing dtype for empty int64 tensor (#6869)
* add special case for printing dtype for empty int64 tensor

* add comment
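
The case in question, as I read the title: an empty tensor has no elements from which the dtype could be inferred, so it is printed explicitly even though int64 is the default integer type (current output):

  >>> torch.tensor([], dtype=torch.int64)
  tensor([], dtype=torch.int64)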
2018-04-23 12:07:59 -07:00
li-roy
34edd6f12e
fix sparse tensor print (#6829) 2018-04-20 19:39:52 -07:00
gchanan
8a434d9554
Print integral floating point numbers as X. instead of X.0000. (#6812) 2018-04-20 21:26:21 -04:00
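
Illustration of the behavior from #6812 (current PyTorch output):

  >>> torch.tensor([1.0, 2.0, 3.0])
  tensor([1., 2., 3.])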
li-roy
d1bb75e273
Redo tensor repr to make it less verbose (#6370)
* Redo tensor repr to make it less verbose

* fix empty tensor

* fix scaled scalars

* update for device-dtype split

* address comments

* removed repeated lines

* address comments

* add cuda to device string
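
For instance, the device string from the last bullet shows up in the repr of CUDA tensors (current output; device index may vary):

  >>> torch.ones(2, device='cuda')
  tensor([1., 1.], device='cuda:0')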
2018-04-18 18:25:07 -07:00
Kento NOZAWA
3b58b859b2 Fix typos in docs (#6389) 2018-04-07 12:41:15 -04:00
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv (the standard erf definition is reproduced below)
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan
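
For reference, the standard definition (erfinv is its functional inverse):

    \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,dt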

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Completed a documentation pass up to "Other Operations" in the Tensor docs

* Add more changes
1. Modify all torch.Tensor wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clear up prints in build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
Sam Gross
30ec06c140
Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code in place. Subsequent PRs will remove the dead code, clean up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variables and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
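
A quick illustration of the post-merge behavior (current API; the requires_grad factory argument used here landed in the same release cycle):

  >>> type(torch.randn(3))
  <class 'torch.Tensor'>
  >>> torch.randn(3, requires_grad=True).requires_grad
  True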
2018-02-23 18:03:31 -05:00
Sam Gross
720c7b1e2c
Move repeat to torch/_utils.py (#4712)
This moves the implementation of repeat to _utils so that the autograd
function can call it directly instead of relying on forward being called
on tensors.

This also removes _range, which was previously necessary because we
shadowed the built-in range() function.
2018-01-17 17:30:43 -05:00
gchanan
841ce42daf
Fix flake8. (#4644) 2018-01-12 14:38:14 -05:00
gchanan
eb857ec367
Introduce a (non-public) autograd scalar method and improve printing (#4586)
* Specialize Variable printing and always print the device for GPU tensors/Variables.

* Introduce a (non-public) _scalar_sum() method for autograd scalar testing.
2018-01-12 14:26:38 -05:00
SsnL
86d0c24b6a Dynamically find min log scale #3289
* dynamically find min scale

* compute it only once per _number_format call
2017-10-27 02:42:16 +05:30
SsnL
38f87cc9c4 Limit print scale by sys.float_info (#3113)
* limit print scale by sys.float_info

* test print tiny/huge values in test_print

* fix lint
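
Presumably the clamp in question: sys.float_info exposes the representable double range, so the exponent used for the print scale can be limited to it.

  >>> import sys
  >>> sys.float_info.max_10_exp, sys.float_info.min_10_exp
  (308, -307)
  >>> max(min(400, sys.float_info.max_10_exp), sys.float_info.min_10_exp)
  308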
2017-10-14 08:52:01 +02:00
Gregory Chanan
1ef4cc1591 Incorporate review comments:
1) Line up trailing dimensions in broadcast docs.
2) remove unnecessary expand_as in common_nn test.
3) use view in tensor_str instead of resize_.
4) remove the raiseErrors change from newExpand.
5) clarify expandedSizes/expandedStrides parameters in inferExpandGeometry.
6) simplify inferSize2/inferSizeN implementations.
7) use new-style classes for warning.
2017-06-11 05:37:59 -04:00
Gregory Chanan
69287250d1 Add a broadcast parameter to copy_, use it in the library in cases where there is non-broadcasting calls exposed by the tests. 2017-06-11 05:37:59 -04:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything it can handle safely, and to disable any rules
that are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
4694e4050b Fix printing bug when all values are NaN or inf 2016-12-19 20:35:08 -05:00
Zeming Lin
86e42ba291 Adding truncated tensor printing (#202)
* Adding truncated tensor printing
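
Truncation as it behaves in current PyTorch (output spacing approximate); set_printoptions controls the summarize threshold and the number of edge items shown:

  >>> torch.set_printoptions(threshold=10, edgeitems=2)
  >>> torch.arange(100)
  tensor([ 0,  1, ..., 98, 99])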
2016-11-08 10:05:30 -05:00
Adam Paszke
645c913e4f Print GPU id for CUDA tensors 2016-10-30 00:16:06 +02:00
Adam Paszke
deebc1383e Show exponent when printing vectors 2016-10-24 22:30:11 +02:00
Soumith Chintala
f2cf673d3a fix tensor printing when the tensor is a view into a giant storage 2016-10-07 17:53:37 -04:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00