Commit Graph

752 Commits

Iurii Zdebskyi
5b9f55f33f Enable Add, sub, mul, and div on CPU for bfloat16 type. (#22851)
Summary:
Enable Add, sub, mul, and div on CPU for bfloat16 type.
Tested via unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22851

Differential Revision: D16256757

Pulled By: izdeby

fbshipit-source-id: 8b62f7581fc0ca0d2cff48ab40d877a9fcf70a5b
2019-08-08 12:34:25 -07:00
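A minimal sketch of the enabled ops (not from the commit; written against a recent PyTorch build):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0], dtype=torch.bfloat16)
b = torch.tensor([4.0, 5.0, 6.0], dtype=torch.bfloat16)

# All four basic arithmetic ops now work on CPU bfloat16 tensors.
print(a + b)  # tensor([5., 7., 9.], dtype=torch.bfloat16)
print(a - b)
print(a * b)
print(a / b)
```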
Igor Fedan
341d5934b7 Move addcmul to Aten(CUDA) (#23814)
Summary:
https://github.com/pytorch/pytorch/issues/22797
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23814

Differential Revision: D16712381

Pulled By: ifedan

fbshipit-source-id: aeca4fdb9b10143932f195900b1f424ef6d26c89
2019-08-08 12:34:21 -07:00
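An illustrative use of the ported op (not from the commit), shown on CPU since the CUDA path is numerically identical:

```python
import torch

t = torch.zeros(3)
t1 = torch.tensor([1.0, 2.0, 3.0])
t2 = torch.tensor([2.0, 2.0, 2.0])

# addcmul computes t + value * t1 * t2 element-wise.
out = torch.addcmul(t, t1, t2, value=0.5)
print(out)  # tensor([1., 2., 3.])
```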
Ojas Ahuja
10b1254edd fix crash on torch.Tensor.repeat() for 0 repeats (#23766)
Summary:
This PR fixes https://github.com/pytorch/pytorch/issues/23603
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23766

Differential Revision: D16644866

Pulled By: soumith

fbshipit-source-id: ee7d368afdfe874133d0bd90f4d03a191ee22b13
2019-08-07 09:16:00 -07:00
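The fixed behavior can be sketched as follows (assuming a recent PyTorch build):

```python
import torch

x = torch.tensor([1, 2, 3])

# Zero repeats now yields an empty tensor instead of crashing.
y = x.repeat(0)
print(y.shape)  # torch.Size([0])

# A zero in any repeat dimension zeroes out that dimension.
z = x.repeat(2, 0)
print(z.shape)  # torch.Size([2, 0])
```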
Hong Xu
f90afff3bd Recommend ~ and bitwise_not() when user tries to apply neg (-) on a bool tensor.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23621

Test Plan: Imported from OSS

Differential Revision: D16656729

Pulled By: ezyang

fbshipit-source-id: d107e8caa2ccfa6ff8a1bd8a31b4d79f142d68fb
2019-08-05 16:21:34 -07:00
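A short sketch of the recommended usage (not from the commit): `~` and `bitwise_not()` invert a bool tensor, while `-` raises.

```python
import torch

b = torch.tensor([True, False, True])

# Negation (-) is not defined for bool tensors; use ~ or bitwise_not().
print(~b)               # tensor([False,  True, False])
print(b.bitwise_not())  # same result

try:
    -b
except RuntimeError as e:
    print("neg raised:", e)
```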
Vitaly Fedyunin
6e4a83ab57 Channels last stored in tensor (#23391)
Summary:
Define a 4D tensor as stored in channels-last memory format when the dimension order is NCHW and C-strides < W-strides < H-strides < N-strides (if any dimension has size 1, its strides are not taken into account).

A channels-last contiguous tensor is a channels-last tensor that occupies a contiguous memory block, so x.is_contiguous(memory_format=torch.channels_last) checks whether a tensor is channels-last contiguous.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23391

Differential Revision: D16601414

Pulled By: VitalyFedyunin

fbshipit-source-id: 8d098e7eec2f00fb1d12261bc240b3645d4f5b73
2019-08-05 11:50:29 -07:00
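The stride ordering described above can be checked directly (illustrative sketch, not from the commit):

```python
import torch

# A 4D NCHW tensor laid out in channels-last order.
x = torch.randn(2, 3, 4, 5).contiguous(memory_format=torch.channels_last)

print(x.is_contiguous(memory_format=torch.channels_last))  # True
print(x.is_contiguous())                                   # False
# Strides are reported in (N, C, H, W) order:
# C-stride (1) < W-stride (3) < H-stride (15) < N-stride (60).
print(x.stride())  # (60, 1, 15, 3)
```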
Hong Xu
be7fe1ccb9 Add tests to ensure that both abs(0.0) and abs(-0.0) lead to 0.0 (#23701)
Summary:
As pointed out by colesbury in https://github.com/pytorch/pytorch/pull/23579#discussion_r309798987
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23701

Differential Revision: D16623781

Pulled By: mrshenli

fbshipit-source-id: f48a29499128b08d2ac8bc9e466f2326112ead94
2019-08-05 07:50:06 -07:00
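The property under test can be sketched as (not from the commit's test code):

```python
import math
import torch

pos = torch.abs(torch.tensor(0.0))
neg = torch.abs(torch.tensor(-0.0))

# Both results are +0.0: abs clears the sign bit of -0.0.
print(pos.item(), neg.item())          # 0.0 0.0
print(math.copysign(1.0, neg.item()))  # 1.0, i.e. positive zero
```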
Iurii Zdebskyi
19c675178f Updated docs and added deprecation warnings to acknowledge a bool tensor (#22261)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22261
ghimport-source-id: 1611d62d056a04c0ad15ef662e594a3d206a78e2

Test Plan: Imported from OSS

Differential Revision: D16005990

Pulled By: izdeby

fbshipit-source-id: 2413824aa75a0755719e4df11acd21e6607e5a85
2019-08-05 07:42:34 -07:00
Xiang Gao
520982d1df Zero sized tensor support for repeat_interleave (#23717)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/22753
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23717

Differential Revision: D16623598

Pulled By: mrshenli

fbshipit-source-id: 297a3274fb5a5b2fcc0c3ad601337d7eb29fdca2
2019-08-05 07:36:47 -07:00
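The supported edge case can be sketched as follows (illustrative, assuming a recent PyTorch build):

```python
import torch

x = torch.empty(0, 3)

# Zero-sized inputs no longer error out; the empty shape is preserved.
y = torch.repeat_interleave(x, 2, dim=0)
print(y.shape)  # torch.Size([0, 3])

z = torch.repeat_interleave(torch.tensor([]), 4)
print(z.shape)  # torch.Size([0])
```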
Iurii Zdebskyi
865c7eea48 Changed tensor comparison return type from uint8 to bool (#21113)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21113
ghimport-source-id: 9c4ba63457a72bfc41894387e0b01be3fd9a9baf

Test Plan: Imported from OSS

Differential Revision: D15552204

Pulled By: izdeby

fbshipit-source-id: a608213668649d058e22b510d7755cb99e7d0037
2019-08-01 07:54:53 -07:00
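A quick sketch of the new return type (not from the commit):

```python
import torch

x = torch.tensor([1, 2, 3])

mask = x > 1
print(mask)        # tensor([False,  True,  True])
print(mask.dtype)  # torch.bool (was torch.uint8 before this change)
```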
vishwakftw
5d130e4232 Allowing batching for det/logdet/slogdet operations (#22909)
Summary:
Changelog:
- Add batching for det / logdet / slogdet operations
- Update derivative computation to support batched inputs (and consequently batched outputs)
- Update docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22909

Test Plan:
- Add a `test_det_logdet_slogdet_batched` method in `test_torch.py` to test `torch.det`, `torch.logdet` and `torch.slogdet` on batched inputs. This relies on the correctness of `torch.det` on single matrices (tested by `test_det_logdet_slogdet`). A port of this test is added to `test_cuda.py`
- Add autograd tests for batched inputs

Differential Revision: D16580988

Pulled By: ezyang

fbshipit-source-id: b76c87212fbe621f42a847e3b809b5e60cfcdb7a
2019-07-31 10:01:32 -07:00
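Batched usage can be sketched as follows (illustrative, not taken from the commit's tests):

```python
import torch

# A batch of four 3x3 matrices; det/logdet/slogdet now apply per matrix.
A = torch.eye(3).expand(4, 3, 3) * 2.0

dets = torch.det(A)
print(dets)  # tensor([8., 8., 8., 8.])

sign, logabsdet = torch.slogdet(A)
print(sign.shape, logabsdet.shape)  # torch.Size([4]) torch.Size([4])
```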
Richard Zou
c5482e33e9 Rename tensor.is_named to has_named, expose has_named to python.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23315

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23315 gh/zou3519/79/head

Imported from OSS

Differential Revision: D16494414

Pulled By: zou3519

fbshipit-source-id: d2d6beb45db9288e5df707b68b6046d783ca9f97
2019-07-31 07:14:07 -07:00
Tongzhou Wang
af638ad5d7 pin_memory should not copy on already pinned tensors (#23484)
Summary:
fixes https://github.com/pytorch/pytorch/issues/21076
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23484

Differential Revision: D16546264

Pulled By: ezyang

fbshipit-source-id: 8058e0bbc6336751f36b884d71234feef498a982
2019-07-30 21:16:23 -07:00
Vitaly Fedyunin
401fbb0088 Port resize_as_ and clone from TH to Aten (#23027)
Summary:
API operators are now routed to `at::native::resize_as_*_` and `at::native::clone` accordingly.
The internal `THTensor_(resizeAs)`, `THCTensor_(resizeAs)`, `THTensor_(newClone)` and `THCTensor_(newClone)` remain to support older TH code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23027

Differential Revision: D16362304

Pulled By: VitalyFedyunin

fbshipit-source-id: 4c1e8516da685f3fdea632ff791d143f27aeebeb
2019-07-30 10:40:27 -07:00
vishwakftw
b3a9a7a9b9 Rename gels to lstsq (#23460)
Summary:
Changelog:
- Rename `gels` to `lstsq`
- Fix all callsites
- Rename all tests
- Create a tentative alias for `lstsq` under the name `gels` and add a deprecation warning to discourage its usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23460

Test Plan: - All tests should pass to confirm that the patch is correct

Differential Revision: D16547834

Pulled By: colesbury

fbshipit-source-id: b3bdb8f4c5d14c7716c3d9528e40324cc544e496
2019-07-30 09:56:04 -07:00
Pavel Belevich
fd61cc9ebc Moved at::assert_no_internal_overlap to TensorIterator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22917

Differential Revision: D16521429

Pulled By: pbelevich

fbshipit-source-id: 80ae583c6486d6948431b79e1452902bdf2cfbc3
2019-07-30 07:47:33 -07:00
Will Feng
7b081e5d1e Improve error message for changing tensor metadata after .data or .detach() (#23504)
Summary:
When a user tries to change metadata of a tensor created from `.data` or `.detach()`, we currently show an error message "<function_name> is not allowed on Tensor created from .data or .detach()". However, this error message doesn't suggest what the right fix should look like. This PR improves the error message.

Closes https://github.com/pytorch/pytorch/issues/23393.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23504

Differential Revision: D16547415

Pulled By: yf225

fbshipit-source-id: 37f4a0385442e2b0966386fb14d3d938ecf4230c
2019-07-29 22:25:14 -07:00
Hong Xu
e366af7d87 Add TORCH_CHECK to disable sub for bool tensors (#23519)
Summary:
This resolves two issues in one shot:

- sub shouldn't be available for bool type.
- When sub is applied to an unsupported type, the current error messages
  shows "add_cpu/add_cuda is not implemented for [type]". They should be
  "sub_cpu/sub_cuda" instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23519

Differential Revision: D16548770

Pulled By: izdeby

fbshipit-source-id: fe404a2a97b8d11bd180ec41364bf8e68414fb15
2019-07-29 16:28:35 -07:00
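The corrected behavior can be sketched as (not from the commit):

```python
import torch

a = torch.tensor([True, False])
b = torch.tensor([True, True])

# Subtraction is explicitly disabled for bool tensors via TORCH_CHECK.
try:
    a - b
except RuntimeError as e:
    print("RuntimeError:", e)
```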
Richard Zou
0dcb8755c8 Implement tensor.set_names_, tensor.names setter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23172

Test Plan:
- [namedtensor ci]

gh-metadata: pytorch pytorch 23172 gh/zou3519/74/head

Imported from OSS

Differential Revision: D16494364

Pulled By: zou3519

fbshipit-source-id: 8d0e26b33346d4eadba30b2e76610f6d7be7c373
2019-07-26 08:50:49 -07:00
Pieter Noordhuis
2938299de1 Fix lint failure introduced in #23346
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23371

Differential Revision: D16489985

Pulled By: pietern

fbshipit-source-id: 914048563bbe7bf5ab897c6f12f4a1bb4bff30e1
2019-07-25 05:17:15 -07:00
Iurii Zdebskyi
cf0f3556f6 Enabled cumsum and cumprod for bool tensors (#23346)
Summary:
```
a = torch.tensor([[True, False, True],
                  [False, False, False],
                  [True, True, True]])

>>> torch.cumsum(a, 0)
tensor([[1, 0, 1],
        [1, 0, 1],
        [2, 1, 2]])

>>> torch.cumsum(a, 1)
tensor([[1, 1, 2],
        [0, 0, 0],
        [1, 2, 3]])
```

Tested via unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23346

Differential Revision: D16469393

Pulled By: izdeby

fbshipit-source-id: b55f3ca0588f9961a771def40f6ef58932021e1a
2019-07-24 18:16:01 -07:00
Iurii Zdebskyi
200cb836f1 Enabled 'add_cuda' for bool and fixed alpha scalar bug (#23044)
Summary:
Enabled 'add_cuda' for bool
Tested via unit tests

Fixed https://github.com/pytorch/pytorch/issues/22431 #22430
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23044

Differential Revision: D16368095

Pulled By: izdeby

fbshipit-source-id: 033d28095ff1c5df4078905c52782cf4cf9ed6b0
2019-07-24 10:31:34 -07:00
Johannes M Dieterich
4cd726c7b3 Update ROCm CI to python3.6 (#23088)
Summary:
Rehash of https://github.com/pytorch/pytorch/issues/22322 .

Given that python 2.7 will be EOL'd on Jan 1, 2020 and we have models depending on python3.5+, we'd like to update the ROCm CI across the board to python3.6.

This PR adds the skip tests and some semantic changes for PyTorch.

Added pattern match skip for anything but the ROCm CI compared to #22322 for the python find step in the PyTorch build.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23088

Differential Revision: D16448261

Pulled By: bddppq

fbshipit-source-id: 69ece1a213418d9abf1444c496dce1c190ee07c8
2019-07-23 23:07:45 -07:00
Vishwak Srinivasan
0ab19d66ee Port lu_solve to ATen (#22379)
Summary:
Changelog:
- Port TH implementation to ATen/native/BatchLinearAlgebra.cpp
- Port THC implementation to ATen/native/cuda/BatchLinearAlgebra.cu
- Remove TH/THC implementations
- Update doc strings
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22379

Test Plan: - Added new tests in test_torch.py (port to test_cuda.py exists)

Differential Revision: D16089645

Pulled By: zou3519

fbshipit-source-id: dc8561aadacacb23e80c375b4fec687df2b6bbc8
2019-07-23 19:11:35 -07:00
Kexuan Sun
45d3f495ef Add document of function torch.as_strided (#22842)
Summary:
Documentation of `torch.as_strided` and `Tensor.as_strided` is missing. As mentioned in https://github.com/pytorch/pytorch/issues/9886
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22842

Differential Revision: D16254106

Pulled By: soumith

fbshipit-source-id: dee142483fb9ef7bea84bd44a970b6eccdcdc471
2019-07-23 06:06:00 -07:00
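The newly documented function can be sketched as follows (an illustrative example, not from the added docs):

```python
import torch

x = torch.arange(9.0)

# View the flat 9-element buffer as a 3x3 matrix of overlapping rows:
# size (3, 3), stride (2, 1) -> row i starts at flat offset 2*i.
y = torch.as_strided(x, (3, 3), (2, 1))
print(y)
# tensor([[0., 1., 2.],
#         [2., 3., 4.],
#         [4., 5., 6.]])
```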
Jerry Zhang
3861520603 Verify flatten works for quantized Tensor (#23121)
Summary:
Added a test in `test_torch.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23121
ghstack-source-id: 86983227

Differential Revision: D16391409

fbshipit-source-id: 04e72b2f753a0a6ddbf58d55b794e443b18a2156
2019-07-22 18:34:25 -07:00
Iurii Zdebskyi
22f7c9e31b (#23105)
Summary:
Fixed a [bug](https://github.com/pytorch/pytorch/issues/22992) where passing a result tensor into masked_select won't work with a bool mask.
Tested via unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23105

Differential Revision: D16386676

Pulled By: izdeby

fbshipit-source-id: 93a1e9bfbc916c8a8eaa149a70a5553f3711f53e
2019-07-22 07:49:30 -07:00
vishwakftw
6dfecc7e01 Remove deprecated linear algebra functions (and methods) (#22841)
Summary:
Changelog:
- Removed the following linear algebra functions in PyTorch in favor of the renamed operations
  - `btrifact` (use `lu` instead)
  - `btrifact_with_info` (use `lu` with `get_infos=True` instead)
  - `btrisolve` (use `lu_solve` instead)
  - `btriunpack` (use `lu_unpack` instead)
  - `gesv` (use `solve` instead)
  - `pstrf` (use `cholesky` instead)
  - `potrf` (use `cholesky` instead)
  - `potri` (use `cholesky_inverse` instead)
  - `potrs` (use `cholesky_solve` instead)
  - `trtrs` (use `triangular_solve` instead)

- Removed dead code after the removal of `pstrf`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22841

Test Plan:
- All existing tests should pass to verify that the removal is clean

Closes https://github.com/pytorch/pytorch/issues/22832

Differential Revision: D16346184

Pulled By: zou3519

fbshipit-source-id: f748d16ed7609c028de6adcbc28684d5a1af0678
2019-07-19 11:43:06 -07:00
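As a sketch of the migration, two of the renamed ops are shown below. Note the replacements named above (`solve`, `cholesky`, etc.) were themselves later moved under `torch.linalg` in newer releases, so this example uses the `torch.linalg` names rather than the spellings current at the time of the commit:

```python
import torch

A = torch.tensor([[4.0, 0.0], [0.0, 2.0]])
b = torch.tensor([[8.0], [4.0]])

# gesv's replacement, solve (torch.linalg.solve in recent releases):
x = torch.linalg.solve(A, b)
print(x)  # [[2.], [2.]]

# potrf's replacement, cholesky (torch.linalg.cholesky in recent releases):
L = torch.linalg.cholesky(A)
print(L)  # [[2., 0.], [0., sqrt(2)]]
```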
Junjie Bai
eb76b7a564 Revert D16199862: [pytorch][PR] [ROCm] Update ROCm CI to python3.6
Differential Revision:
D16199862

Original commit changeset: 46ca6029a232

fbshipit-source-id: 2843b919f2655674e39dc764053621994061a12b
2019-07-17 14:26:56 -07:00
iotamudelta
031b406c38 Update ROCm CI to python3.6 (#22322)
Summary:
Given that python 2.7 will be EOL'd on Jan 1, 2020 and we have models depending on python3.5+, we'd like to update the ROCm CI across the board to python3.6.

This PR adds the skip tests and some semantic changes for PyTorch.

Open tasks/questions:
* RoiAlignTest.CheckCPUGPUEqual fails in the Caffe2 unit tests. Is this expected / can it be skipped?
* for testing, I've used update-alternatives on CentOS/Ubuntu to select python == python 3.6. Is this the preferred way?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22322

Differential Revision: D16199862

Pulled By: ezyang

fbshipit-source-id: 46ca6029a232f7d23f3fdb5efc33ae39a379fca8
2019-07-17 13:42:30 -07:00
Pavel Belevich
bcfa023a00 hardshrink_cpu and hardshrink_backward_cpu refactoring with at::native::cpu_kernel
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22459

Differential Revision: D16132625

Pulled By: pbelevich

fbshipit-source-id: d7eb1cd6ed04eba3d0c54feaca1e5ab2836211b5
2019-07-16 18:58:35 -07:00
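The refactored op's behavior is unchanged; for reference, a sketch of what hardshrink computes (not from the commit):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, -0.3, 0.0, 0.3, 1.0])

# hardshrink zeroes values in [-lambd, lambd] and keeps the rest.
y = F.hardshrink(x, lambd=0.5)
print(y)  # tensor([-1.,  0.,  0.,  0.,  1.])
```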
vishwakftw
f8ad65adb1 Fix torch.triu / torch.tril on contiguous tensors with non-default strides (#22730)
Summary:
Changelog:
- Fix behavior of `torch.triu` / `torch.tril` on certain unsqueezed tensors that lead to uninitialized values on CPU
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22730

Test Plan:
- Add tests for these cases in test_triu_tril in test_torch

Fixes https://github.com/pytorch/pytorch/issues/22581

Differential Revision: D16222897

Pulled By: zou3519

fbshipit-source-id: b86b060187797e5cd2a7731421dff1ba2b5c9596
2019-07-16 10:09:03 -07:00
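A sketch of the kind of input the fix covers (illustrative, not the commit's test case):

```python
import torch

# An unsqueezed row vector expanded to a square matrix has
# non-default strides (0, 1); triu must still fill it correctly.
x = torch.arange(1.0, 4.0).unsqueeze(0).expand(3, 3)

print(torch.triu(x))
# tensor([[1., 2., 3.],
#         [0., 2., 3.],
#         [0., 0., 3.]])
```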
vishwakftw
7d055c21b3 Port SVD to ATen, enable batching for matrix inputs (#21588)
Summary:
Changelog:
- Port SVD TH implementation to ATen/native/BatchLinearAlgebra.cpp
- Port SVD THC implementation to ATen/native/cuda/BatchLinearAlgebra.cu
- Allow batches of matrices as arguments to `torch.svd`
- Remove existing implementations in TH and THC
- Update doc string
- Update derivatives to support batching
- Modify nuclear norm implementation to use at::svd instead of _batch_svd
- Remove _batch_svd as it is redundant
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21588

Test Plan:
- Add new test suite for SVD in test_torch.py with port to test_cuda.py
- Add tests in common_methods_invocations.py for derivative testing

Differential Revision: D16266115

Pulled By: nairbv

fbshipit-source-id: e89bb0dbd8f2d58bd758b7830d2389c477aa61fb
2019-07-15 13:34:01 -07:00
Iurii Zdebskyi
bd88fd0793 Added .bfloat16() (#22852)
Summary:
Add conversion method for bfloat16
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22852

Differential Revision: D16256760

Pulled By: izdeby

fbshipit-source-id: 01d75495f9df513a0cdf78791c3eb013ab92bd95
2019-07-15 09:32:18 -07:00
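The added conversion method in use (a sketch, not from the commit):

```python
import torch

x = torch.tensor([1.0, 2.5, 3.0])

# .bfloat16() converts to the bfloat16 dtype, like .half() does for float16.
y = x.bfloat16()
print(y.dtype)  # torch.bfloat16
```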
shihongzhi
45cf33a731 add fill_diagonal function (#21892)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/21796
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21892

Differential Revision: D16164678

Pulled By: colesbury

fbshipit-source-id: 85df8ae9b7a6a91b6023fe7295b3a8124e4526ea
2019-07-11 09:20:44 -07:00
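The function landed as the in-place method `fill_diagonal_`; a sketch of its use (not from the commit):

```python
import torch

m = torch.zeros(3, 3)

# fill_diagonal_ writes the given value along the main diagonal in place.
m.fill_diagonal_(5.0)
print(m)
# tensor([[5., 0., 0.],
#         [0., 5., 0.],
#         [0., 0., 5.]])
```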
Sam Gross
4240220926 Revert D16183577: Delegate Python ~ (invert operator) to Tensor.bitwise_not().
Differential Revision:
D16183577

Original commit changeset: f86838c407db

fbshipit-source-id: bbf53ce52a20b1e90b1fe522d73e558d8044c4ba
2019-07-10 18:29:22 -07:00
Iurii Zdebskyi
fffa7200c1 fixing lint
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22703

Differential Revision: D16188326

Pulled By: izdeby

fbshipit-source-id: 72e6b6f957068c3995010a1b811f24cd2304ff6f
2019-07-10 16:02:21 -07:00
Hong Xu
7750cae722 Refactor and improve randperm tests.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22121

Test Plan: Imported from OSS

Differential Revision: D16153794

Pulled By: li-roy

fbshipit-source-id: 4dbfa6cfcc79f6d431918a6646664215fa9ea0b9
2019-07-10 12:23:33 -07:00
Hong Xu
0f7c3710dd Support Half type in randperm.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22102

Test Plan: Imported from OSS

Differential Revision: D16153586

Pulled By: li-roy

fbshipit-source-id: d58e3dbc5da893005f4eaf521a28b0d752274eff
2019-07-10 12:23:25 -07:00
Hong Xu
9c4c9c3af0 Delegate Python ~ (invert operator) to Tensor.bitwise_not().
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22326

Test Plan: Imported from OSS

Differential Revision: D16183577

Pulled By: colesbury

fbshipit-source-id: f86838c407db4ded9ce70998bf1ab1ffd75b3b58
2019-07-10 12:17:52 -07:00
Hong Xu
574e808680 Add a bitwise NOT operator for integer and Boolean types (CUDA).
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22320

Test Plan: Imported from OSS

Differential Revision: D16183578

Pulled By: colesbury

fbshipit-source-id: 2f72cce5e10fd637be1ac87e1bbfe0937a661034
2019-07-10 12:17:48 -07:00
Hong Xu
e2dc1fc715 Add a bitwise NOT operator for integer and Boolean types (CPU).
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22283

Test Plan: Imported from OSS

Differential Revision: D16183576

Pulled By: colesbury

fbshipit-source-id: 2e539fab8ff885dddb9bff334d1d784b28d65b8f
2019-07-10 12:17:44 -07:00
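The operator added by this pair of commits (CPU here, CUDA above) can be sketched as:

```python
import torch

i = torch.tensor([0, 1, -2], dtype=torch.int32)
b = torch.tensor([True, False])

# Integer tensors get a two's-complement bitwise NOT...
print(torch.bitwise_not(i))  # tensor([-1, -2,  1], dtype=torch.int32)
# ...while bool tensors get a logical NOT.
print(torch.bitwise_not(b))  # tensor([False,  True])
```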
Iurii Zdebskyi
10c60b601a Added Bfloat16 tensor for cpu with very limited support (#21860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21860
ghimport-source-id: 5290755b63033cdfdeb911a4ecf4aa282b3db02d

Test Plan: Imported from OSS

Differential Revision: D15856091

Pulled By: izdeby

fbshipit-source-id: 54e7e17be1b5c5a2e80a41feaeaeba75dbb8108f
2019-07-10 09:08:52 -07:00
Iurii Zdebskyi
3a8d7463bd Enabled BFloat16 storage (#21523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21523
ghimport-source-id: 698b3cbd6b21c09b9ff8bf8011980df8e35c33b0

Test Plan: Imported from OSS

Differential Revision: D15819368

Pulled By: izdeby

fbshipit-source-id: f6b3bba7b3ca8ee677bd80a231dbb3920c07d61c
2019-07-09 21:51:06 -07:00
Brandon Amos
046c4589df lu: When not using pivoting, return the identity permutation instead of zeros (#22242)
Summary:
Some of my qpth users have told me that updating to the latest version of PyTorch and replacing the btrifact/btrisolve calls with the LU ones wasn't working and I didn't believe them until I tried it myself :)

These updates have broken unpivoted LU factorizations/solves on CUDA. The LU factorization code used to return the identity permutation when pivoting wasn't used but now returns all zeros as the pivots. This PR reverts it back to return the identity permutation. I've not yet tested this code as I'm having some trouble compiling PyTorch with this and am hitting https://github.com/pytorch/pytorch/issues/21700 and am not sure how to disable that option.

Here's a MWE to reproduce the broken behavior, and my fix.

```python
torch.manual_seed(0)

n = 4
L = torch.randn(n,n)
A = L.mm(L.t()).unsqueeze(0)
b = torch.randn(1, n)

A_lu_cpu = torch.lu(A)
A_lu_cuda_nopivot = torch.lu(A.cuda(), pivot=False)
A_lu_cuda_pivot = torch.lu(A.cuda(), pivot=True)
print('A_lu_cuda_nopivot\n', A_lu_cuda_nopivot)
print('-----\nA_lu_cuda_pivot\n', A_lu_cuda_nopivot)

x_cpu = b.lu_solve(*A_lu_cpu)
x_cuda_nopivot = b.cuda().lu_solve(*A_lu_cuda_nopivot)
x_cuda_nopivot_fixed = b.cuda().lu_solve(
    A_lu_cuda_nopivot[0], torch.arange(1, n+1, device='cuda:0').int())
x_cuda_pivot = b.cuda().lu_solve(*A_lu_cuda_pivot)

print(x_cpu, x_cuda_nopivot, x_cuda_nopivot_fixed, x_cuda_pivot)
```

Output:

```
A_lu_cuda_nopivot
 (tensor([[[ 2.8465, -0.7560,  0.8716, -1.7337],
         [-0.2656,  5.5724, -1.1316,  0.6678],
         [ 0.3062, -0.2031,  1.4206, -0.5438],
         [-0.6091,  0.1198, -0.3828,  1.5103]]], device='cuda:0'), tensor([[0, 0, 0, 0]], device='cuda:0', dtype=torch.int32))

-----

A_lu_cuda_pivot
 (tensor([[[ 2.8465, -0.7560,  0.8716, -1.7337],
         [-0.2656,  5.5724, -1.1316,  0.6678],
         [ 0.3062, -0.2031,  1.4206, -0.5438],
         [-0.6091,  0.1198, -0.3828,  1.5103]]], device='cuda:0'), tensor([[0, 0, 0, 0]], device='cuda:0', dtype=torch.int32))

(tensor([[-0.3121, -0.1673, -0.4450, -0.2483]]),
 tensor([[-0.1661, -0.1875, -0.5694, -0.4772]], device='cuda:0'),
 tensor([[-0.3121, -0.1673, -0.4450, -0.2483]], device='cuda:0'),
 tensor([[-0.3121, -0.1673, -0.4450, -0.2483]], device='cuda:0'))
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22242

Differential Revision: D16049334

Pulled By: ezyang

fbshipit-source-id: 7eacae810d87ffbdf8e07159bbbc03866dd9979d
2019-07-09 11:16:50 -07:00
Iurii Zdebskyi
304552b61a Enabled masked fill and scatter for bool tensors (#22491)
Summary:
Enabled masked_fill, masked_fill_, masked_scatter_, masked_scatter on bool tensors.
Tested via unit tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22491

Differential Revision: D16108817

Pulled By: izdeby

fbshipit-source-id: 8b1808f41d7a4f65fe6d3797a3c83b2dac3446c7
2019-07-08 10:49:40 -07:00
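A minimal sketch of the enabled in-place op on a bool tensor (not from the commit's tests):

```python
import torch

t = torch.tensor([False, False, False])
mask = torch.tensor([True, False, True])

# masked_fill_ now accepts bool self tensors as well as bool masks.
t.masked_fill_(mask, True)
print(t)  # tensor([ True, False,  True])
```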
vishwakftw
9bafe5d4da Fix torch.normal with CUDA tensors (#22533)
Summary:
`addcmul_out` overwrote the samples, which led to constant values being output by `torch.normal`.

Changelog:
- Replace the `addcmul_out` calls with a combination of in-place `mul` and `add`, and add justification for this change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22533

Test Plan:
- Enable tests for test_normal on all devices

Fixes https://github.com/pytorch/pytorch/issues/22529

Differential Revision: D16141337

Pulled By: ezyang

fbshipit-source-id: 567a399042e0adcd154582f362318ce95a244c62
2019-07-08 08:27:38 -07:00
Brennan Vincent
e210c65097 Add torch.where overload with only condition argument (#21986)
Summary:
Requested in https://github.com/pytorch/pytorch/issues/21798
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21986

Differential Revision: D16081577

Pulled By: zhangguanheng66

fbshipit-source-id: 658c0f451b833aceb1a41ee424c7990eec00bc02
2019-07-02 18:18:15 -07:00
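The single-argument overload can be sketched as follows (illustrative, not from the commit):

```python
import torch

x = torch.tensor([0, 3, 0, 7])

# With only a condition, torch.where returns the indices of nonzero
# (True) elements, like torch.nonzero(as_tuple=True).
idx = torch.where(x > 0)
print(idx)  # (tensor([1, 3]),)
```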
Brennan Vincent
dcd902bdde provide "size" parameter in torch.normal when called with two floats (#20545)
Summary:
This has been requested in https://github.com/pytorch/pytorch/issues/20323

(It is still not exactly the same as NumPy, which allows you to pass tensors as mean/std and broadcast them with size, but the present PR is extremely simple and does the main thing people are asking for.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20545

Differential Revision: D15358736

Pulled By: zhangguanheng66

fbshipit-source-id: 762ea5eab5b8667afbac2df0137df017ba6e413c
2019-07-02 18:18:11 -07:00
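The new overload in use (a sketch, not from the commit):

```python
import torch

torch.manual_seed(0)

# mean and std as plain floats plus an explicit output shape.
samples = torch.normal(0.0, 1.0, size=(2, 3))
print(samples.shape)  # torch.Size([2, 3])
```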
iurii zdebskyi
a54acd3755 Update the way boolean tensor are being printed (#22238)
Summary:
When a boolean tensor gets printed out, there is no need to specify the dtype.

Example:
```
>> x = torch.tensor([[True, True, True], [True, True, True]])
>> print(x)
tensor([[True, True, True],
        [True, True, True]])

>> x = torch.tensor([True])
>> print(x)
tensor([True])

>> x = torch.tensor(True)
>> print(x)
tensor(True)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22238

Differential Revision: D15996304

Pulled By: izdeby

fbshipit-source-id: 5699acf3e00abca8a2bbb5384f8271eeb063dce7
2019-07-01 18:04:42 -07:00
Roy Li
9c8f9f0ecb Remove many usages of Type (#21941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21941
ghimport-source-id: f20cca6229daba9eb8652adb3d959266ae081ef1

Test Plan: Imported from OSS

Differential Revision: D15893331

Pulled By: li-roy

fbshipit-source-id: c988b16008ff0e2725a88c6025afd4aabdaca45a
2019-06-30 04:11:28 -07:00