Commit Graph

152 Commits

Author SHA1 Message Date
nuka137
a68c1e109e C++ API: torch::nn::BatchNorm{2,3}d (#28936)
Summary:
Add torch::nn::BatchNorm{2,3}d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883 #28176
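
A minimal usage sketch of what this adds (shapes are illustrative, and the `BatchNorm2dOptions` name is an assumption based on the options discussion in #28176):

```cpp
#include <torch/torch.h>

torch::nn::BatchNorm2d bn(torch::nn::BatchNorm2dOptions(16));
auto x = torch::randn({4, 16, 28, 28});  // (N, C, H, W)
auto y = bn->forward(x);                 // normalizes over the channel dimension
```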

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28936

Differential Revision: D18274584

Pulled By: yf225

fbshipit-source-id: 3784eee9f8947f6c7c9f1699544a3d36a1a019b7
2019-11-01 17:50:33 -07:00
Pavel Belevich
4a94eaa60b C++ API parity: PoissonNLLLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28755

Test Plan: Imported from OSS

Differential Revision: D18202436

Pulled By: pbelevich

fbshipit-source-id: a7a27d5f3cdbcbbd9bbbffa02b576609d5fdc9b3
2019-11-01 12:35:59 -07:00
Edward Yang
bbea34f283 Revert D18266918: C++ API: torch::nn::BatchNorm{2,3}d
Test Plan: revert-hammer

Differential Revision:
D18266918

Original commit changeset: f432904c7298

fbshipit-source-id: 0e1c596b2e2f13b59082ff422c67ba025df4be07
2019-11-01 10:46:49 -07:00
nuka137
b7c5b3d398 C++ API: torch::nn::BatchNorm{2,3}d (#28936)
Summary:
Add torch::nn::BatchNorm{2,3}d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883 #28176

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28936

Differential Revision: D18266918

Pulled By: yf225

fbshipit-source-id: f432904c72985d52ec52cb992cceb372b6ff0244
2019-11-01 09:28:58 -07:00
Carlos Miranda
72b9bda9e5 Smooth L1 loss (#27661)
Summary:
In accordance with https://github.com/pytorch/pytorch/issues/25883, I added the `SmoothL1Loss` module and `smooth_l1_loss` functional.
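
A minimal usage sketch, assuming the module mirrors the Python API (shapes are illustrative):

```cpp
torch::nn::SmoothL1Loss criterion;
auto input = torch::randn({3, 5}, torch::requires_grad());
auto target = torch::randn({3, 5});
auto loss = criterion(input, target);  // module holders are callable, see #15831
loss.backward();
```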
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27661

Differential Revision: D18002332

Pulled By: yf225

fbshipit-source-id: b382df8becb0de14986ec16ee0dc953d7b10e917
2019-10-31 23:41:35 -07:00
Will Feng
595209bddc Fix bugs in torch::tensor constructor (#28523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28523

New features:
1. Previously, `torch::tensor({true, false, true})` threw `"tensor_cpu" not implemented for 'Bool'`. After this PR, it produces the correct bool tensor, matching the Python API behavior.
2. Tensors with zero-size dimensions are now supported, e.g. `torch::tensor({{}, {}})` produces a tensor with sizes `{2, 0}`, matching the Python API behavior.

BC-breaking bug fixes:
1. Previously, `torch::tensor({{1}, {2}})` produced a tensor of sizes `{2}`. After this PR, it produces a tensor of sizes `{2, 1}`, matching the Python API behavior.
2. Fixed the semantics of `torch::tensor(1.1)`: it now returns a 0-dim tensor instead of a 1-dim tensor, matching the Python API behavior.
3. Previously, when a non-dtype `TensorOptions` was passed to the `torch::tensor` constructor, it always produced a tensor of dtype `float`. After this PR, it produces tensors of different dtypes based on the dtype of the braced-init-list, matching the behavior of the no-options case.
```cpp
// Previously:
torch::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float
torch::tensor({{1, 2, 3}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float
torch::tensor({1., 2., 3.}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float
torch::tensor({{1., 2., 3.}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float

// Now:
torch::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> int
torch::tensor({{1, 2, 3}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> int
torch::tensor({1., 2., 3.}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> double
torch::tensor({{1., 2., 3.}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> double

// For comparison, the behavior without options (unchanged):
torch::tensor({1, 2, 3}).dtype() -> int
torch::tensor({{1, 2, 3}}).dtype() -> int
torch::tensor({1., 2., 3.}).dtype() -> double
torch::tensor({{1., 2., 3.}}).dtype() -> double
```

Notes:
1. From now on, the behavior of `at::tensor(scalar_value)` (which produces a 1-dim tensor) differs from `torch::tensor(scalar_value)` (which produces a 0-dim tensor). I will fix the behavior of `at::tensor(scalar_value)` in a follow-up PR.
2. From now on, the behavior of `at::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/))` (which produces a `float` tensor) differs from `torch::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/))` (which produces an `int` tensor). I will fix this behavior of the `at::tensor` constructor in a follow-up PR.

Context for the changes in this PR:

The motivation comes from fixing the "`torch::tensor({{1}, {2}})` gives a tensor of wrong sizes" bug. In order to fix it, I had to move the handling of `at::ArrayRef` and `std::vector` into `InitListTensor` (see below for why we need to do this) and rename `InitListTensor` to `TensorDataContainer`. After these changes, support for bool values comes out of the box without extra effort, and support for tensors with zero-size dimensions only requires adding a default constructor to `TensorDataContainer`, so I added both in this PR.

For the semantic change of `torch::tensor(1.1)`, it's actually more effort to preserve the original wrong behavior (i.e. we need to check the sizes of the tensor converted from `TensorDataContainer` and reshape any scalar tensor to a 1-D tensor). I think preserving the original wrong behavior doesn't give us much value, and since the above changes naturally fix the problem, we should just start using the right behavior instead.

For the "constructor with non-dtype options behavior" fix, the code looks simpler and easier to reason about with the fix, so I included it in this PR.

--------

Why we need to move the handling of `at::ArrayRef` and `std::vector` into `TensorDataContainer`:

`torch::tensor({{1}, {2}})` can match the function overload
`torch::tensor(at::ArrayRef<int> values)`, because `{1}` and `{2}` can each be treated as
a list-initialization of an `int` value. However, this produces a Tensor with sizes `{2}`,
when we actually want a Tensor with sizes `{2, 1}`. To avoid matching this overload,
we removed it and moved the ability to convert `at::ArrayRef<T>`
(and similarly `std::vector<T>`) into `TensorDataContainer`. Since, for a braced-init-list, the
`TensorDataContainer(std::initializer_list<TensorDataContainer>)` constructor is always preferred over all other constructors, the call takes the `std::initializer_list` path, and all is good.
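
A short sketch of the resulting behavior:

```cpp
#include <cassert>

auto t = torch::tensor({{1}, {2}});
assert(t.size(0) == 2 && t.size(1) == 1);  // previously this produced sizes {2}
```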

Test Plan: Imported from OSS

Differential Revision: D18234625

Pulled By: yf225

fbshipit-source-id: 0f3f6912e82e2117d2103e31b74e7e97baaa8693
2019-10-31 12:53:06 -07:00
Pavel Belevich
d6f1e49c4a C++ API parity: CTCLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28654

Test Plan: Imported from OSS

Differential Revision: D18202437

Pulled By: pbelevich

fbshipit-source-id: a4b80a57e65da84f3988002a026c648fa52a0fde
2019-10-30 14:35:02 -07:00
jon-tow
1d3d9ec7d4 C++ API Parity: functional::fold and Fold::pretty_print (#28732)
Summary:
Adds `torch::nn::functional::fold` support and updates `Fold::pretty_print` in the C++ API for more thorough Python parity.

Note: Small updates in source files to maintain consistency elsewhere.
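
A hedged usage sketch of the new functional (shape values are illustrative; the options type is assumed to take `output_size` then `kernel_size`):

```cpp
namespace F = torch::nn::functional;

auto input = torch::randn({1, 3 * 2 * 2, 12});  // (N, C * prod(kernel_size), L)
auto output = F::fold(input, F::FoldFuncOptions(/*output_size=*/{4, 5},
                                                /*kernel_size=*/{2, 2}));
// output has sizes {1, 3, 4, 5}
```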

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28732

Differential Revision: D18219955

Pulled By: yf225

fbshipit-source-id: fd2e9be8f17db77c1b1f384c0d2e16cc34858c0c
2019-10-30 11:37:39 -07:00
mansoorcheema
a465b033fd Local response norm (#28759)
Summary:
Implemented `LocalResponseNorm` and some initial tests for the module and the functional. Reference: https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28759

Differential Revision: D18219745

Pulled By: yf225

fbshipit-source-id: e6aad568a8b1e81f54752decaefd4f9044029da9
2019-10-30 11:31:00 -07:00
mrsalehi
dfe7b25eaf Add nn::Flatten to C++ Frontend (#28072)
Summary:
Adds torch::nn::Flatten module support for the C++ API.

Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28072

Differential Revision: D18202778

Pulled By: yf225

fbshipit-source-id: 43345dcbdf2f50d75746bf9a0ba293b84df275ab
2019-10-29 17:52:47 -07:00
nuka137
cbc234bceb C++ API: torch::nn::BatchNorm1d (#28176)
Summary:
Add torch::nn::BatchNorm1d function/module support for the C++ API.
torch::nn::BatchNorm{2,3}d will be added after this PR is merged.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225

I would like to discuss the items below.

* Necessity of `num_batches_tracked` in `BatchNormImplBase`
  * `num_batches_tracked` is needed to calculate `momentum` when the `momentum` argument is not supplied in the Python API. But in the C++ API, the `momentum` argument has a default value.
  * `num_batches_tracked` is otherwise only used for counting `BatchNorm1d::forward()` calls. I think it is no longer necessary for users.
* The design of `BatchNorm{1,2,3}dOptions`
  * We already have `BatchNormOptions`, used for the deprecated `BatchNorm` module. However, it is hard to reuse it for `BatchNorm{1,2,3}dOptions` because the modules' arguments disagree.
  * In this PR, I introduce a `BatchNormOptionsv2` template class for `BatchNorm{1,2,3}dOptions`, but I'm not sure whether this design is good.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28176

Differential Revision: D18196843

Pulled By: yf225

fbshipit-source-id: 667e2b5de4150d5776c41b9088c9e6c2ead24cd4
2019-10-29 17:29:42 -07:00
Will Feng
e33b4b6761 Use c10::variant-based enums for Reduction
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27942

Test Plan: Imported from OSS

Differential Revision: D18202857

Pulled By: yf225

fbshipit-source-id: 0303ce2508e3b7665c6a91ae270a7d0ef0e45900
2019-10-29 14:15:48 -07:00
jon-tow
52dd587123 C++ API parity: Upsample (#28413)
Summary:
Adds `interpolate` functional and `Upsample` module support for the C++ API.
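
A hedged usage sketch of the functional form (option names are assumptions based on the Python keyword arguments):

```cpp
namespace F = torch::nn::functional;

auto input = torch::ones({1, 1, 2, 2});
auto output = F::interpolate(
    input,
    F::InterpolateFuncOptions()
        .scale_factor(std::vector<double>({2.0, 2.0}))
        .mode(torch::kNearest));
// output has sizes {1, 1, 4, 4}
```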

**Issue**: https://github.com/pytorch/pytorch/issues/25883

**Reviewer**: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28413

Differential Revision: D18165014

Pulled By: yf225

fbshipit-source-id: ecae2f432a301b1f4afa7c038b2d104cbad139f2
2019-10-28 21:34:44 -07:00
nuka137
648749b203 C++ API: torch::nn::LPPool2d (#28492)
Summary:
Add torch::nn::LPPool2d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883 #27800

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28492

Differential Revision: D18109401

Pulled By: yf225

fbshipit-source-id: 5cedecb895d9d44c2167cdb3f6f758f3426b3497
2019-10-28 12:28:25 -07:00
Will Feng
d04973beda Use c10::variant-based enums for EmbeddingBag mode (#28330)
Summary:
This PR is BC-breaking in the following way:

Previously, we required the use of `std::string` to specify the mode for `EmbeddingBag`. After this PR, we use variant-based enums such as `torch::kSum` / `torch::kMean` / `torch::kMax` instead.
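
A minimal before/after sketch (constructor arguments are illustrative):

```cpp
// After this PR: mode is a variant-based enum.
torch::nn::EmbeddingBag bag(
    torch::nn::EmbeddingBagOptions(10, 3).mode(torch::kMean));
// Before this PR (no longer compiles): .mode("mean")
```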
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28330

Differential Revision: D18127116

Pulled By: yf225

fbshipit-source-id: 15cd86c764777f4d399587be92cda15b6ce8524b
2019-10-24 17:47:42 -07:00
Will Feng
92b39434a2 C++ nn::ConstantPad{1,2,3}d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28541

Test Plan: Imported from OSS

Differential Revision: D18115607

Pulled By: yf225

fbshipit-source-id: 736df791ddc3cd30ad9af89eacfb4a0c6b53f2cd
2019-10-24 15:10:27 -07:00
Will Feng
7f9941c4ea C++ nn::ZeroPad2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28540

Test Plan: Imported from OSS

Differential Revision: D18115610

Pulled By: yf225

fbshipit-source-id: ced7c0917f4712838e753cd2e9fc4fa79fd5d310
2019-10-24 14:23:57 -07:00
Will Feng
303527d733 C++ nn::ReplicationPad{1,2,3}d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28539

Test Plan: Imported from OSS

Differential Revision: D18115609

Pulled By: yf225

fbshipit-source-id: 15f4ab6a114279bb06bf62f1265b62aa12f8700f
2019-10-24 12:49:41 -07:00
Will Feng
78375c02b8 C++ nn::ReflectionPad1d and nn::ReflectionPad2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28538

Test Plan: Imported from OSS

Differential Revision: D18115608

Pulled By: yf225

fbshipit-source-id: 3a48d8c11721f013076db2965f5f75b71662c78e
2019-10-24 12:02:51 -07:00
Pavel Belevich
dd277e9086 C++ API parity: Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27382

Test Plan: Imported from OSS

Differential Revision: D17766735

Pulled By: pbelevich

fbshipit-source-id: c7a66daeb17550eb9a5d26944427723d4ebdc6c8
2019-10-24 07:11:51 -07:00
Anjali Chourdia
7b59174882 torch::nn::LayerNorm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28032

Differential Revision: D18047371

Pulled By: anjali411

fbshipit-source-id: fb61aea52d6622a67ec1d84950e17e85686461ae
2019-10-22 12:50:22 -07:00
nuka137
9ea42f8d7c C++ API: torch::nn::LPPool1d (#27800)
Summary:
Add torch::nn::LPPool1d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27800

Differential Revision: D18045040

Pulled By: yf225

fbshipit-source-id: e61fefe9efec3423f7a93dd1e946f3e380122927
2019-10-21 15:33:51 -07:00
Carlos Miranda
a1e14a6626 PixelShuffle module and functional (#28140)
Summary:
Added `PixelShuffle` module and functional https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28140

Differential Revision: D18008474

Pulled By: yf225

fbshipit-source-id: f482495bb56998701c79a61ef065a121bf5a5154
2019-10-18 15:54:14 -07:00
Shahriar
91a260cef9 Adding MSELoss, KLDivLoss and BCELoss to C++ front-end (#27156)
Summary:
This PR adds `MSELoss`, `KLDivLoss` and `BCELoss`. The tests for `BCELoss` fail with the following error:
```
unknown file: Failure
C++ exception with description "autograd_meta() INTERNAL ASSERT FAILED at /home/shahriar/Contrib/pytorch/c10/core/TensorImpl.h:533, please report a bug to PyTorch. set_requires_grad is not implemented for Tensor (set_requires_grad at /home/shahriar/Contrib/pytorch/c10/core/TensorImpl.h:533)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27156

Differential Revision: D17960323

Pulled By: yf225

fbshipit-source-id: 84b8431064f2f573679c03a8d7994e3e2f81a4d1
2019-10-17 22:07:01 -07:00
Carlos Miranda
7d277b0670 Multi Label Margin loss (#27659)
Summary:
In accordance with https://github.com/pytorch/pytorch/issues/25883, I added the `MultiLabelMarginLoss` module and `multilabel_margin_loss` functional.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27659

Differential Revision: D17931905

Pulled By: yf225

fbshipit-source-id: 3642f75c79843dda55ac38de9f6f970f3e237847
2019-10-16 15:44:38 -07:00
Carlos Miranda
9540f6c3fe Soft Margin loss (#27660)
Summary:
In accordance with https://github.com/pytorch/pytorch/issues/25883, I added the `SoftMarginLoss` module and `soft_margin_loss` functional.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27660

Differential Revision: D17958325

Pulled By: yf225

fbshipit-source-id: c14422765e6e1fdabf6c9687080e6d5ff490d300
2019-10-16 12:04:08 -07:00
Moksh Jain
f38beff800 Add nn.Bilinear to C++ Frontend (#26082)
Summary:
Adds support for the Bilinear layer to the C++ frontend
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26082

Differential Revision: D17954148

Pulled By: yf225

fbshipit-source-id: 5e746bdea29b00e25969cd7a22044b8059b53687
2019-10-16 09:54:01 -07:00
Divyansh Singhvi
3397d41b8a Wrapping namespace Reduction in namespace at (#26606) (#27422)
Summary:
1) Wrapped namespace `Reduction` in namespace `at`
2) Prefixed `at::` wherever `Reduction::` is used
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27422

Differential Revision: D17913759

Pulled By: yf225

fbshipit-source-id: 8f00ca01cad2e7f673d316b128abf59c026e216c
2019-10-15 11:05:40 -07:00
Will Feng
11172c19be codemod at::ArrayRef and torch::IntArrayRef to std::vector in C++ API tests (#27884)
Summary:
`at::ArrayRef` / `torch::IntArrayRef` should be discouraged in user code, because users might not be aware that it doesn't own the underlying data. This has already led to memory-access bugs when users write the following:
```cpp
auto expected_sizes = torch::IntArrayRef({2, 16, 6});  // The memory that represents `{2, 16, 6}` is released after this line
ASSERT_EQ(output.sizes(), expected_sizes);  // `expected_sizes` is pointing to invalid memory region
```
This PR changes all usage of `at::ArrayRef` and `torch::IntArrayRef` to the corresponding `std::vector` version, so that users won't pick up the habit of using `ArrayRef` by looking at the test code.
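
The owning pattern the tests use after this codemod looks roughly like this:

```cpp
std::vector<int64_t> expected_sizes = {2, 16, 6};  // owns its storage
ASSERT_EQ(output.sizes(), expected_sizes);         // no dangling reference
```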
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27884

Differential Revision: D17921646

Pulled By: yf225

fbshipit-source-id: 461e79fc22b598aac230d36cc028085ce6cbe937
2019-10-14 18:00:30 -07:00
Carlos Miranda
2cae3928b0 Multi-Label Soft Margin loss (#27669)
Summary:
In accordance with https://github.com/pytorch/pytorch/issues/25883, I added the `MultiLabelSoftMarginLoss` module and `multilabel_soft_margin_loss` functional.

It looks like there isn't a C++ ATen implementation of `multilabel_soft_margin_loss`, so I translated the Python version, which does not rely on a C/C++ backend either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27669

Differential Revision: D17907608

Pulled By: yf225

fbshipit-source-id: ccb02951e009973c2adbe604593ce929f10c39eb
2019-10-14 13:29:45 -07:00
jon-tow
0003771423 C++ API parity: Unfold (#27809)
Summary:
Adds `unfold` functional and module support for the C++ API.

Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27809

Differential Revision: D17901792

Pulled By: yf225

fbshipit-source-id: ff58a1866bf240f37ebc589463c60593b8931f51
2019-10-14 13:21:59 -07:00
nuka137
07d4374239 C++ API: torch::nn::Softmax2d (#27509)
Summary:
Add torch::nn::Softmax2d module support for the C++ API.
Softmax2d only has module support (no functional) in the Python API, so this PR adds only module support as well.

This PR is WIP because it uses the function from https://github.com/pytorch/pytorch/issues/27446.
After https://github.com/pytorch/pytorch/issues/27446 is merged, I will remove the WIP label.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27509

Differential Revision: D17899715

Pulled By: yf225

fbshipit-source-id: bd891bc995f5a92bf4f5405f8bf07d1bd5de2479
2019-10-13 11:00:56 -07:00
PyExtreme
52528c041a TripletMarginLoss (#27713)
Summary:
Hi yf225, I had to create a new branch to resolve a merge conflict, since I am working in a cloud environment due to some limitations on my PC and therefore don't have full command of it.

Also, I have incorporated the changes you made earlier here:
https://github.com/pytorch/pytorch/pull/27613

Also, it would be great if you could recommend some resources for working smoothly on GCP. :-D

Thank you
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27713

Differential Revision: D17899695

Pulled By: yf225

fbshipit-source-id: eb6643223148774a5cbbd093bdcc5623872e5bba
2019-10-13 10:57:37 -07:00
Pavel Belevich
446a79b959 C++ API parity: Threshold
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27538

Test Plan: Imported from OSS

Differential Revision: D17835415

Pulled By: pbelevich

fbshipit-source-id: 2a887704655be79ee458081c46a7eea31eca51dc
2019-10-13 09:38:31 -07:00
Pavel Belevich
cbdd55c669 C++ API parity: Tanhshrink
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27537

Test Plan: Imported from OSS

Differential Revision: D17835409

Pulled By: pbelevich

fbshipit-source-id: ad4120cfe01ea2508bf3ce1054022a2da649ac74
2019-10-13 08:12:13 -07:00
Pavel Belevich
2750ea25b2 C++ API parity: Tanh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27536

Test Plan: Imported from OSS

Differential Revision: D17835411

Pulled By: pbelevich

fbshipit-source-id: c8984aec2f4bae48ff901fafc8c53a4122192ac5
2019-10-13 06:34:18 -07:00
Pavel Belevich
96aafc3cdc C++ API parity: Softsign
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27535

Test Plan: Imported from OSS

Differential Revision: D17835408

Pulled By: pbelevich

fbshipit-source-id: 8548deab91f6fe0f7285fdd919c25129ed042181
2019-10-12 08:30:10 -07:00
Pavel Belevich
fcb6dd079e C++ API parity: Softshrink
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27534

Test Plan: Imported from OSS

Differential Revision: D17835404

Pulled By: pbelevich

fbshipit-source-id: 7b9f3d3ea793f82840496912f248b0c48bb7463e
2019-10-12 06:36:20 -07:00
nuka137
abaa44122d C++ API: torch::nn::Softmin (#27459)
Summary:
Add torch::nn::Softmin module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27459

Differential Revision: D17892852

Pulled By: yf225

fbshipit-source-id: db15b06e8ad33947e7d65995df700f5e90c3b6a8
2019-10-11 23:03:55 -07:00
Pavel Belevich
c79d3a4a98 C++ API parity: Softplus
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27489

Test Plan: Imported from OSS

Differential Revision: D17835410

Pulled By: pbelevich

fbshipit-source-id: 51a8c4ab2ff4b860c96eda1ed8f073017b8cf9ae
2019-10-11 09:00:32 -07:00
Pavel Belevich
9d448099fd C++ API parity: Sigmoid
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27488

Test Plan: Imported from OSS

Differential Revision: D17835405

Pulled By: pbelevich

fbshipit-source-id: 78e13047a2a1f2776c59e778db7ba120716e93d3
2019-10-11 07:45:31 -07:00
Pavel Belevich
795c913636 C++ API parity: CELU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27487

Test Plan: Imported from OSS

Differential Revision: D17835406

Pulled By: pbelevich

fbshipit-source-id: a8282ae65d8996efcc8b8d846cfa637c3f89eda6
2019-10-11 06:23:57 -07:00
Pavel Belevich
6294a9a877 C++ API parity: RReLU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27437

Test Plan: Imported from OSS

Differential Revision: D17835413

Pulled By: pbelevich

fbshipit-source-id: 5d943fdac4fd2633e7f7ca13db1a7fed5636ca50
2019-10-10 19:14:48 -07:00
Pavel Belevich
352092ca95 C++ API parity: ReLU6
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27436

Test Plan: Imported from OSS

Differential Revision: D17835414

Pulled By: pbelevich

fbshipit-source-id: 77e743d2f6b71fb3ba5643f9d676f2bb8f236cfa
2019-10-10 17:12:17 -07:00
nuka137
6711969dd8 C++ API: torch::nn::LogSoftmax (#27462)
Summary:
Add torch::nn::LogSoftmax module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27462

Differential Revision: D17867121

Pulled By: yf225

fbshipit-source-id: dae8ac981c1c6ccdef013cd2d886ad4a043f6243
2019-10-10 16:18:15 -07:00
Pavel Belevich
8515650c2b C++ API parity: ReLU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27435

Test Plan: Imported from OSS

Differential Revision: D17835407

Pulled By: pbelevich

fbshipit-source-id: b8ee86c7a76674bc88d8e995424dad22d3caab59
2019-10-10 13:34:38 -07:00
Pavel Belevich
1fec1441a1 C++ API parity: PReLU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27429

Test Plan: Imported from OSS

Differential Revision: D17835412

Pulled By: pbelevich

fbshipit-source-id: e678d5920dad1293bb0ba3de28e2da3087d19bde
2019-10-09 16:31:54 -07:00
Carlos Miranda
3246fddfd6 Implement C++ API torch::nn::MultiMarginLoss. (#27424)
Summary:
Hi yf225, here is the C++ frontend API `MultiMarginLoss` implementation and tests for https://github.com/pytorch/pytorch/issues/27198. Could you review it and tell me if it is okay?

I am not entirely sure I used `c10::optional` correctly, but `options.weight()` resulted in a compilation error, so I went with `options.weight().value()` instead of `value_or()` to follow the logic in `torch.nn._WeightedLoss.register_buffer` (where one can pass a `None` value).

Oh, and are the tests supposed to be skipped, or did I do something wrong? I ran `pytest test/test_cpp_api_parity.py -k Loss -v`, and the `L1Loss` test passed but the others were skipped...

Thank you for the review in any case!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27424

Differential Revision: D17839963

Pulled By: yf225

fbshipit-source-id: f4b6012590cf22d56d42751c214df80cce717cb8
2019-10-09 14:44:41 -07:00
jon-tow
0fed4756d0 C++ API parity: SELU (#27434)
Summary:
Adds `SELU` functional and module support for the C++ API.

Issue: https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27434

Differential Revision: D17782762

Pulled By: yf225

fbshipit-source-id: 96c7ce84b9baf9e219a63e631929b8997ba6f3f0
2019-10-09 14:39:28 -07:00
nuka137
28a1806cbc C++ API: torch::nn::Softmax (#27446)
Summary:
Add torch::nn::Softmax module support for the C++ API

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27446

Differential Revision: D17839546

Pulled By: yf225

fbshipit-source-id: 7c7fb55111b261614de7c3a75fa1019fbde93c67
2019-10-09 14:19:47 -07:00
Anjali Chourdia
a37be201c1 Implement torch.nn.Embedding / EmbeddingBag in PyTorch C++ API (#26358)
Summary:
Added more variables to `EmbeddingOptions` and updated the `EmbeddingImpl` `reset` and `forward` functions. Also added `EmbeddingBag`.

-----

This PR is BC-breaking in the following way:

Previously, `EmbeddingOptions` supported `count` and `dimension` as options arguments. After this PR, they are renamed to `num_embeddings` and `embedding_dim` respectively.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26358

Differential Revision: D17714337

Pulled By: yf225

fbshipit-source-id: f9f969c68e4bece106b92f8e2e02ac39c8455fb7
2019-10-08 22:13:39 -07:00
Jonathan Tow
3b5d40c339 Add C++ torch::nn::CosineEmbeddingLoss (#27345)
Summary:
Adds `torch::nn::CosineEmbeddingLoss`  module and functional support for the C++ API.

Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27345

Differential Revision: D17801402

Pulled By: yf225

fbshipit-source-id: 0eabe80d7d36397e6667b331c3fa2f56d7a15962
2019-10-08 10:52:05 -07:00
Pavel Belevich
2cc1e69cc9 C++ API parity: LogSigmoid
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27060

Test Plan: Imported from OSS

Differential Revision: D17682404

Pulled By: pbelevich

fbshipit-source-id: d60d64cd4caf1f56a2e05c516f91321d46ec9624
2019-10-05 06:18:25 -07:00
Pavel Belevich
8b61a220c0 C++ API parity: LeakyReLU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27059

Test Plan: Imported from OSS

Differential Revision: D17682407

Pulled By: pbelevich

fbshipit-source-id: 2a4f42e9438799ba8de7282ac7a6fd3ff97ee048
2019-10-04 14:18:03 -07:00
Pavel Belevich
192ca9730f C++ API parity: Hardtanh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27038

Test Plan: Imported from OSS

Differential Revision: D17682405

Pulled By: pbelevich

fbshipit-source-id: f65e76696e0041c3518f56da94f2e3b800305234
2019-10-04 12:53:33 -07:00
Pavel Belevich
515e3b85da C++ API parity: Hardshrink
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27035

Test Plan: Imported from OSS

Differential Revision: D17682403

Pulled By: pbelevich

fbshipit-source-id: 186377fe577abfdd53acc95751a7ed845b51af95
2019-10-02 08:30:20 -07:00
Pavel Belevich
c864454a8f C++ API parity: ELU
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27028

Test Plan: Imported from OSS

Differential Revision: D17682406

Pulled By: pbelevich

fbshipit-source-id: 9c313237cb93b9870c6fcf8d01b3dbe4af4c6f2a
2019-10-02 07:12:08 -07:00
Pavel Belevich
5005f7bce7 C++ API parity: MaxUnpool3d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27027

Test Plan: Imported from OSS

Differential Revision: D17682402

Pulled By: pbelevich

fbshipit-source-id: 2008ce405176c174cdba88b4f25cd77a82bb13ea
2019-10-02 05:40:42 -07:00
Pavel Belevich
5cac738713 C++ API parity: MaxUnpool2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26915

Test Plan: Imported from OSS

Differential Revision: D17627826

Pulled By: pbelevich

fbshipit-source-id: 04a5a7e7d19b1610cafaaa0bd329d4d228ab4be5
2019-10-01 19:29:15 -07:00
Pavel Belevich
d125a83f98 C++ API parity: MaxUnpool1d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26896

Test Plan: Imported from OSS

Differential Revision: D17627825

Pulled By: pbelevich

fbshipit-source-id: 369d0080412467d0259eb5e692a0778c71b12343
2019-10-01 14:53:40 -07:00
jon-tow
209dc4c4ba Add C++ torch::nn::HingeEmbeddingLoss (#27101)
Summary:
Adds `torch::nn::HingeEmbeddingLoss` module support for the C++ API.

**Issue**: https://github.com/pytorch/pytorch/issues/25883

**Reviewer**: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27101

Differential Revision: D17680489

Pulled By: yf225

fbshipit-source-id: 1f8f41775a9e1272a98232c8f899418b2b907eca
2019-09-30 19:29:24 -07:00
Pavel Belevich
1a3997e0b8 C++ API parity: AdaptiveAvgPool3d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26819

Test Plan: Imported from OSS

Differential Revision: D17627829

Pulled By: pbelevich

fbshipit-source-id: be4d803c7d4ba2c59e54d154eeebc63794465191
2019-09-28 22:32:21 -07:00
Pavel Belevich
a31fd5ea68 C++ API parity: AdaptiveAvgPool2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26818

Test Plan: Imported from OSS

Differential Revision: D17627822

Pulled By: pbelevich

fbshipit-source-id: 0e1dea1c3ff2650dbc7902ce704ac6b47588d0bb
2019-09-28 10:45:03 -07:00
Pavel Belevich
7d58060f49 C++ API parity: AdaptiveAvgPool1d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26808

Test Plan: Imported from OSS

Differential Revision: D17627827

Pulled By: pbelevich

fbshipit-source-id: 13ad1d0414e7b62f4fc2f6573332bb2c07b16b53
2019-09-28 10:23:31 -07:00
Pavel Belevich
5aa01fd89a C++ API parity: AdaptiveMaxPool3d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26775

Test Plan: Imported from OSS

Differential Revision: D17627824

Pulled By: pbelevich

fbshipit-source-id: c4ae077ea5575c5d1df795e74a0dcb74a695ad06
2019-09-27 15:31:37 -07:00
Pavel Belevich
bb7a415bcc C++ API parity: AdaptiveMaxPool2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26772

Test Plan: Imported from OSS

Differential Revision: D17627823

Pulled By: pbelevich

fbshipit-source-id: 195f1edabbbbe245de3568beb0c7925eb347118a
2019-09-27 12:41:38 -07:00
Pavel Belevich
0a393f6ef5 C++ API parity: AdaptiveMaxPool1d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26755

Test Plan: Imported from OSS

Differential Revision: D17627828

Pulled By: pbelevich

fbshipit-source-id: f898a4d2c269b98eb5905291914caa25bca87ce0
2019-09-27 09:10:39 -07:00
Will Feng
b5d15315d8 Improve C++ maxpool and avgpool (#26521)
Summary:
This PR makes the following improvements:
1. Add `forward_with_indices` method to all C++ MaxPool modules, to return the max indices along with the outputs. (We can't make two `forward` methods that return different types based on input, because that will break the type deduction of `torch::detail::return_type_of_forward_t`)
2. Add `max_poolNd_with_indices` to `torch::nn::functional`, to be used when indices of the max values are needed. (We can't merge this with `torch::nn::functional::max_poolNd` because the return type of `max_poolNd` has to be defined statically).
3. Improve `pretty_print` of C++ MaxPoolNd and AvgPoolNd modules to match the Python `extra_repr`.
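
A hedged usage sketch of the two new entry points (shapes and pooling parameters are illustrative; `MaxPool2dFuncOptions` is assumed as the functional options name):

```cpp
namespace F = torch::nn::functional;

torch::nn::MaxPool2d pool(torch::nn::MaxPool2dOptions(2));
auto x = torch::randn({1, 3, 8, 8});

torch::Tensor out, indices;
std::tie(out, indices) = pool->forward_with_indices(x);

// Functional variant, for when the max indices are needed ad hoc:
std::tie(out, indices) = F::max_pool2d_with_indices(x, F::MaxPool2dFuncOptions(2));
```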
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26521

Differential Revision: D17507358

Pulled By: yf225

fbshipit-source-id: b6c0e2b27b38378cdc0c75f4bfc797b3c6b17cd9
2019-09-25 13:52:58 -07:00
jon-tow
5e5b9a9321 Add C++ nn::Identity (#26713)
Summary:
**Summary**:
Adds `torch::nn::Identity` module support for the C++ API.

**Issue**: https://github.com/pytorch/pytorch/issues/25883

**Reviewer**: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26713

Differential Revision: D17550982

Pulled By: yf225

fbshipit-source-id: f24483846e82d5d276d77a1a0c50884f3bc05112
2019-09-24 16:29:49 -07:00
Will Feng
da8fbe5bf0 Minor improvement to C++ nn::Distance tests (#26539)
Summary:
C++ `nn::Distance` tests can take advantage of the newly released multi-dimensional tensor constructor https://github.com/pytorch/pytorch/pull/26210 to simplify the tensor constructions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26539

Differential Revision: D17501041

Pulled By: yf225

fbshipit-source-id: 21d5f95ab3ec02227115c823c581218cee2ce458
2019-09-20 12:40:52 -07:00
jon-tow
872ca919a9 Distance module (#26424)
Summary:
Adds `Distance` module parity.
https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26424

Differential Revision: D17487314

Pulled By: yf225

fbshipit-source-id: c7d124cb4afb08a4733e7212af0bb276bf32d172
2019-09-20 07:28:49 -07:00
Pavel Belevich
98ccae09af C++ API parity: at::Tensor::grad
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26150

Test Plan: Imported from OSS

Differential Revision: D17427579

Pulled By: pbelevich

fbshipit-source-id: 68d012076aa86dee9f23fad71a2d265d75f56d22
2019-09-18 09:20:38 -07:00
Shahriar
28a2dafc15 C++ Average Pool Module (#25800)
Summary:
This PR adds Average Pool module to C++ front-end.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25800

Differential Revision: D17318094

Pulled By: yf225

fbshipit-source-id: c914c0e802bbe5f1d1f0a21a669c28bc956899db
2019-09-11 16:39:56 -07:00
Shahriar
ba9fda14a7 C++ MaxPool Module
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24860

Differential Revision: D17260361

Pulled By: yf225

fbshipit-source-id: 4b8c894d3bdf675cfeb9fc84934fe0339a048c1e
2019-09-11 08:56:57 -07:00
Shahriar
e04836004d L1Loss module (#25902)
Summary:
yf225 This is the L1Loss module. I don't think that `_Loss` and `_WeightedLoss` as base Python classes do anything. The first one sets the reduction type and also takes a `reduce` parameter, which is deprecated. The second one only registers a `weight` parameter. I don't think we should keep this structure. What do you think?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25902

Differential Revision: D17307045

Pulled By: yf225

fbshipit-source-id: ad3eda2ee8dcf4465054b376c1be89b39d11532f
2019-09-11 07:18:17 -07:00
Will Feng
a88f310151 Simplify header inclusion in test/cpp/api/modules.cpp (#25921)
Summary:
This PR simplifies header inclusion in `test/cpp/api/modules.cpp`, so that when we add a new `torch::nn` module and add the test in `modules.cpp`, we can check that the new module's header is included in `torch/torch.h`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25921

Differential Revision: D17303220

Pulled By: yf225

fbshipit-source-id: 327db0ff2f075d52e7b594b3dffc5a59441e0931
2019-09-10 18:37:39 -07:00
Shahriar
3680cef44e C++ Fold nn module
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24160

Differential Revision: D17260740

Pulled By: yf225

fbshipit-source-id: f0c7769316bed330289ca3d948f2e39c72ec928b
2019-09-10 13:19:37 -07:00
Will Feng
be6ad7ddde Rename BatchNorm running_variance to running_var (#17371)
Summary:
Currently there is a mismatch in naming between the Python BatchNorm `running_var` and the C++ BatchNorm `running_variance`, which causes JIT model parameter loading to fail (https://github.com/pytorch/vision/pull/728#issuecomment-466067138):
```
terminate called after throwing an instance of 'c10::Error'
  what():  No such serialized tensor 'running_variance' (read at /home/shahriar/Build/pytorch/torch/csrc/api/src/serialize/input-archive.cpp:27)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x85 (0x7f2d92d32f95 in /usr/local/lib/libc10.so)
frame #1: torch::serialize::InputArchive::read(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, at::Tensor&, bool) + 0xdeb (0x7f2d938551ab in /usr/local/lib/libtorch.so.1)
frame #2: torch::nn::Module::load(torch::serialize::InputArchive&) + 0x98 (0x7f2d9381cd08 in /usr/local/lib/libtorch.so.1)
frame #3: torch::nn::Module::load(torch::serialize::InputArchive&) + 0xf9 (0x7f2d9381cd69 in /usr/local/lib/libtorch.so.1)
frame #4: torch::nn::Module::load(torch::serialize::InputArchive&) + 0xf9 (0x7f2d9381cd69 in /usr/local/lib/libtorch.so.1)
frame #5: torch::nn::operator>>(torch::serialize::InputArchive&, std::shared_ptr<torch::nn::Module> const&) + 0x32 (0x7f2d9381c7b2 in /usr/local/lib/libtorch.so.1)
frame #6: <unknown function> + 0x2b16c (0x5645f4d1916c in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #7: <unknown function> + 0x27a3c (0x5645f4d15a3c in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #8: <unknown function> + 0x2165c (0x5645f4d0f65c in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #9: <unknown function> + 0x1540b (0x5645f4d0340b in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
frame #10: __libc_start_main + 0xf3 (0x7f2d051dd223 in /usr/lib/libc.so.6)
frame #11: <unknown function> + 0x1381e (0x5645f4d0181e in /home/shahriar/Projects/CXX/build-TorchVisionTest-Desktop_Qt_5_12_1_GCC_64bit-Debug/TorchVisionTest)
```
Renaming C++ BatchNorm `running_variance` to `running_var` should fix this problem.

This is a BC-breaking change, but it should be easy for end user to rename `running_variance` to `running_var` in their call sites.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17371

Reviewed By: goldsborough

Differential Revision: D14172775

Pulled By: yf225

fbshipit-source-id: b9d3729ec79272a8084269756f28a8f7c4dd16b6
2019-02-22 08:00:25 -08:00
Edward Yang
4404762d7d Rename IntList to IntArrayRef. (#16751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16751

This was made more complicated by the fact that ivalue::IntList
is a thing. So I had to fix all of the sites where we were referring
to IValue post facto.

The following codemods were run, in this order:

```
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntList IntArrayRef
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntArrayRef::create IntList::create
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in ivalue::IntArrayRef ivalue::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in Tag::IntArrayRef Tag::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in isIntArrayRef isIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in toIntArrayRef toIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'Shared<IntArrayRef>' 'Shared<IntList>'
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'intrusive_ptr<IntArrayRef>' 'intrusive_ptr<IntList>'
```

Some manual fixups were done afterwards; they can be reviewed separately
at https://github.com/pytorch/pytorch/pull/16752

Reviewed By: dzhulgakov

Differential Revision: D13954363

fbshipit-source-id: b5c40aacba042402155a2f5a229fa6db7992ac64
2019-02-05 14:54:34 -08:00
Peter Goldsborough
4bdaca827c Make call operator on module holder call forward (#15831)
Summary:
In Python, you can use the call operator to invoke the `forward()` method of a module. In C++ this was not possible until now, because I couldn't figure out how to deduce the return type of a module's `forward()` method under the constraint that `forward()` may not exist at all (since the base module class in C++ does not mandate a `forward()` method). I have now figured it out, so the call operator can be used.
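
A sketch of what this enables:

```cpp
torch::nn::Linear linear(10, 5);
auto x = torch::randn({2, 10});
auto y1 = linear->forward(x);  // explicit call, as before
auto y2 = linear(x);           // now equivalent, via the call operator
```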

ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15831

Differential Revision: D13652676

Pulled By: goldsborough

fbshipit-source-id: ccab45a15215dda56460e560f0038781b539135f
2019-01-14 14:40:33 -08:00
Peter Goldsborough
eb5d28ecef Pretty printing of C++ modules (#15326)
Summary:
A long outstanding nicety: pretty printing of C++ modules. E.g.
```
  Sequential sequential(
      Linear(10, 3),
      Conv2d(1, 2, 3),
      Dropout(0.5),
      BatchNorm(5),
      Embedding(4, 10),
      LSTM(4, 5));
std::cout << sequential;
```
prints
```
torch::nn::Sequential(
  (0): torch::nn::Linear(in=10, out=3, with_bias=true)
  (1): torch::nn::Conv2d(input_channels=1, output_channels=2, kernel_size=[3, 3], stride=[1, 1])
  (2): torch::nn::Dropout(rate=0.5)
  (3): torch::nn::BatchNorm(features=5, eps=1e-05, momentum=0.1, affine=true, stateful=true)
  (4): torch::nn::Embedding(count=4, dimension=10)
  (5): torch::nn::LSTM(input_size=4, hidden_size=5, layers=1, dropout=0)
)
```

apaszke ebetica ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15326

Differential Revision: D13518986

Pulled By: goldsborough

fbshipit-source-id: 63bf753672f0e348951de3645208f263581de5fb
2018-12-19 21:55:49 -08:00
Peter Goldsborough
ab0c72ab6f Replace cursors with OrderedDict (#13427)
Summary:
This is a precursor diff to Python <-> C++ frontend integration -- I have a follow-up PR coming for that. This PR changes the C++ frontend module interface to replace the custom "cursor"s I introduced some time ago with `OrderedDict`. I introduced cursors at the time as a convenient way of applying functions and query operations to a module's parameters, buffers and modules, allowing things like `module.parameters().map(my_func)`. However, I noticed that (1) this functionality is easily implementable on top of a regular data structure and (2) more importantly, using OrderedDicts is much, much easier for Python integration. This is especially true given that ScriptModule today also uses OrderedDict. Since C++ frontend modules and ScriptModules will soon share as many implementation details as possible, it is overall the best move to ditch the custom cursor data structure and pervasively use OrderedDict everywhere.

For this I did:

1. Changed the C++ frontend module interface to more closely match the Python one by providing `parameters()`, `named_parameters()` and other methods Python provides. This is very important for the following diff which binds these into Python for inter-op with Python modules.
2. In lieu of the `Cursor::apply()` method I added `nn::Module::apply`. This again is one more unifying step between Python and C++, since Python modules have an apply function too.
3. Deleted all uses of Cursor.
4. Tidied and beefed up the `OrderedDict` class. In particular, I made `OrderedDict::Item` store an `std::pair` under the hood, because that is trivial to bind into Python and saved me a lot of headaches. `key` and `value` become methods instead of fields, which they should have been from the very start anyway, because it allows exactly these kinds of changes, as per the usual software-engineering principle of encapsulation.
5. Added many tests for the OrderedDict use in `nn::Module`.
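
A hedged sketch of the resulting `OrderedDict`-based interface (with `key()`/`value()` as methods, per item 4; exact signatures assumed):

```cpp
#include <iostream>

torch::nn::Linear linear(3, 4);
for (const auto& item : linear->named_parameters()) {
  std::cout << item.key() << ": " << item.value().sizes() << std::endl;
}
```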

ebetica ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13427

Differential Revision: D12894092

Pulled By: goldsborough

fbshipit-source-id: 715770c95a9643753a1db26d7f9da9a78619a15d
2018-11-07 11:10:05 -08:00
Peter Goldsborough
393ad6582d Use torch:: instead of at:: in all C++ APIs (#13523)
Summary:
In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and also avoids bugs around creating tensors vs. variables. The same should from now on be true for the TorchScript C++ API (for running and loading models) and all C++ extensions.

Note that since we're just talking about typedefs, this change does not break any existing code.

Once this lands I will update stuff in `pytorch/tutorials` too.

zdevito ezyang gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523

Differential Revision: D12942787

Pulled By: goldsborough

fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
2018-11-06 14:32:25 -08:00
Christian Puhrsch
a9e6a673ae Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11876

Modern C++ API instead of macros; `item()` is aligned with the Python frontend. `caffe2::Tensor::capacity_nbytes` is effectively unused and confusing w.r.t. `caffe2::Tensor::nbytes()`.

codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCByte   "item<uint8_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCLong   "item<int64_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCInt    "item<int32_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCDouble "item<double>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCFloat  "item<float>"

codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toByteData   "data<uint8_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toLongData   "data<int64_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toIntData    "data<int32_t>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toDoubleData "data<double>"
codemod -d caffe2           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toFloatData  "data<float>"

codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCByte   "item<uint8_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCLong   "item<int64_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCInt    "item<int32_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCDouble "item<double>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCFloat  "item<float>"

codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toByteData   "data<uint8_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toLongData   "data<int64_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toIntData    "data<int32_t>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toDoubleData "data<double>"
codemod -d hphp           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toFloatData  "data<float>"

codemod -d caffe2 --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCComplexDouble "item<std::complex<double>>"

codemod -d tc           --extensions cc,cpp,cu,cuh,h,py,hpp,mm toCFloat  "item<float>"

Reviewed By: ezyang

Differential Revision: D9948572

fbshipit-source-id: 70c9f5390d92b82c85fdd5f8a5aebca338ab413c
2018-09-24 10:40:10 -07:00
Peter Goldsborough
825181ea9d Rewrite C++ API tests in gtest (#11953)
Summary:
This PR is a large codemod to rewrite all C++ API tests with GoogleTest (gtest) instead of Catch.

You can largely trust me to have correctly code-modded the tests, so it's not required to review every one of the 2000+ changed lines. However, additional things I changed were:

1. Moved the cmake parts for these tests into their own `CMakeLists.txt` under `test/cpp/api` and called `add_subdirectory` from `torch/CMakeLists.txt`
2. Fixed the DataParallel tests, which weren't being compiled because `USE_CUDA` wasn't being set correctly.
3. Updated the README

ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11953

Differential Revision: D9998883

Pulled By: goldsborough

fbshipit-source-id: affe3f320b0ca63e7e0019926a59076bb943db80
2018-09-21 21:28:16 -07:00
Gregory Chanan
e00fb69b25 Use CATCH prefix to avoid name conflicts with Caffe2.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11780

Differential Revision: D9889925

Pulled By: gchanan

fbshipit-source-id: 5eca849c36ced00b8ae7482b7945b445a3e1687e
2018-09-18 08:12:45 -07:00
Peter Goldsborough
f0a284502a Document BatchNorm and update default behavior (#11484)
Summary:
This PR:

1. Documents `BatchNorm`,
2. Makes a number of API changes after reconsidering some quirks:
    1. The default value for the `stateful` parameter used to be `false`, but the most common usage of `BatchNorm` in the wild is certainly stateful, and the Python default is also stateful. So we change the default to stateful.
    2. The `pure_forward` function used to use the internal running mean and variance variables instead of the ones supplied to that function call when `stateful` was true, which certainly seems odd. When you call `pure_forward` you would certainly expect the values you pass explicitly to be used. This is now fixed.
3. Adds tests for `BatchNorm`, finally.
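
A hedged sketch of change 2.2 above (the option and method names are taken from the summary; exact signatures are assumptions):

```cpp
torch::nn::BatchNorm bn(torch::nn::BatchNormOptions(5).stateful(true));
auto input = torch::randn({2, 5});
auto mean = torch::zeros({5});
auto variance = torch::ones({5});
// The explicitly supplied statistics are now used, even in stateful mode:
auto out = bn->pure_forward(input, mean, variance);
```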

ebetica apaszke ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11484

Reviewed By: pjh5

Differential Revision: D9779618

Pulled By: goldsborough

fbshipit-source-id: 59ba760e085c01454b75644b24b22317b688e459
2018-09-12 09:09:53 -07:00
Peter Goldsborough
dd8defeb3f Document the Functional module (#11460)
Summary:
Document the `Functional` module in the C++ API.

ebetica ezyang soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11460

Differential Revision: D9757555

Pulled By: goldsborough

fbshipit-source-id: 15f8bf6d60bd26f3f4e69fb8e414e186e3c220ee
2018-09-10 19:58:38 -07:00
Peter Goldsborough
2e0dd86903 Make torch::Tensor -> at::Tensor (#10516)
Summary:
This PR removes the `using Tensor = autograd::Variable;` alias from `torch/tensor.h`, which means `torch::Tensor` is now `at::Tensor`. It also fixes up some last uses of `.data()` and tidies up the resulting code. For example, I was able to remove `TensorListView`, such that code like

```
auto loss = torch::stack(torch::TensorListView(policy_loss)).sum() +
    torch::stack(torch::TensorListView(value_loss)).sum();
```

is now

```
auto loss = torch::stack(policy_loss).sum() + torch::stack(value_loss).sum();
```

CC jgehring

ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10516

Differential Revision: D9324691

Pulled By: goldsborough

fbshipit-source-id: a7c1cb779c9c829f89cea55f07ac539b00c78449
2018-08-15 21:25:12 -07:00
Xiang Gao
6fc75eadf0 Add CELU activation to pytorch (#8551)
Summary:
Also fuse input scale multiplication into ELU

Paper:
https://arxiv.org/pdf/1704.07483.pdf
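
For reference, the CELU definition from the linked paper, plus a usage sketch of the fused op added here (the `alpha` argument defaults to 1):

```cpp
// celu(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
auto x = torch::linspace(-3, 3, 7);
auto y = torch::celu(x, /*alpha=*/1.0);
```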
Pull Request resolved: https://github.com/pytorch/pytorch/pull/8551

Differential Revision: D9088477

Pulled By: SsnL

fbshipit-source-id: 877771bee251b27154058f2b67d747c9812c696b
2018-08-01 07:54:44 -07:00
Anders Papitto
620952117e remove unnecessary -Wno= flags
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/9608

Differential Revision: D8946664

Pulled By: anderspapitto

fbshipit-source-id: b05f10af58da25b2a2588f7153f393bb3637f29a
2018-07-24 18:40:42 -07:00
Peter Goldsborough
31ba2f15e1 Rename embedding variable to weight (#9720)
Summary:
I renamed the variable in the `Embedding` module from `weight` to `table` a few months ago, because it seemed like a more meaningful name. It turns out that was not such a good idea, because it deviates from PyTorch and unnecessarily breaks C++->Python translated code.

ebetica ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9720

Differential Revision: D8955647

Pulled By: goldsborough

fbshipit-source-id: 77228b07d2b733866e8cdecaa6d0686eef4cc3ea
2018-07-23 14:55:24 -07:00
Peter Goldsborough
ae44a6b5e3 Fix Sequential::clone() (#9372)
Summary:
I noticed that `Sequential::clone()` does not work. This is because `Sequential` does not use `reset()`, which is normally where modules have to initialize and register their submodules. This, in turn, is because of the way `Sequential` allows its modules to be passed in the constructor, which doesn't work with `reset()` (since it does "late" initialization).

I've added some better error messages inside `Cloneable::clone()` which makes this kind of mistake clearer for other users, and tests for `Sequential::clone()`.

I also had to give `AnyModule` a deep `clone()` method.

ebetica ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9372

Differential Revision: D8865189

Pulled By: goldsborough

fbshipit-source-id: b81586e0d3157cd3c4265b19ac8dd87c5d8dcf94
2018-07-16 21:53:42 -07:00
Peter Goldsborough
153e2e96d4 Make Sequential ref-counted (#9151)
Summary:
In the C++ API, `Sequential` was not itself reference-counted, but stored `shared_ptr<AnyModule>` to get the reference semantics. This is unfortunate because most modules in the API are accessed via `->`, e.g. `Linear l(1, 2); l->forward(...);`. `Sequential` was different in that it had value semantics itself, and thus was accessed via `.`.

This PR makes `Sequential` store `AnyModule` (without extra indirection), and uses the same pImpl mechanism we use for all other modules to make `Sequential` have reference semantics itself. This makes it consistent with the rest of the library. It also removes one level of indirection inside of `Sequential`, which is cool.

One thing I had to change was that the `ModuleHolder` with which the whole pImpl thing is implemented previously did some tricks to make `Linear(3, 4)` actually construct `Linear(LinearOptions(3, 4))`. This doesn't work well with `Sequential` since it takes a variadic parameter pack. Instead, I made `ModuleHolder` forward all arguments to the underlying module, and then further pushed the trick to forward parameters to modules' options types into the actual Modules. This adds one constructor per Module in the library. This is not something user modules have to do (unless they want this nice forwarding themselves). It makes the code simpler overall.
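
A sketch of the resulting, unified access pattern:

```cpp
torch::nn::Sequential seq(
    torch::nn::Linear(3, 4),
    torch::nn::Linear(4, 2));
auto out = seq->forward(torch::ones({1, 3}));  // accessed via '->', like every other module
```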

ezyang ebetica apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9151

Reviewed By: ezyang

Differential Revision: D8809298

Pulled By: goldsborough

fbshipit-source-id: da68452c3de912fbc67af330ba93b5220de6909f
2018-07-11 17:24:59 -07:00
Peter Goldsborough
9ce15173fb Move _cudnn_init_dropout_state to TensorOptions and enable cuDNN dropout in C++ API RNNs (#9012)
Summary:
The goal of this PR was to add support for dropout descriptors in the C++ API's RNN class.
The end result is a 4x-5x speedup for our RNN integration tests since they can now use cuDNN instead of autograd when dropout is set.

To achieve this, I had to move `_cudnn_init_dropout_state` to the `TensorOptions` API.

I also fixed a bug around `RNN::cuda()` not flattening parameters for cuDNN.

ebetica ezyang
Closes https://github.com/pytorch/pytorch/pull/9012

Reviewed By: pjh5

Differential Revision: D8689786

Pulled By: goldsborough

fbshipit-source-id: 44fb191f5a38e41c4ded5417306b5bbc012cd56c
2018-06-29 17:25:23 -07:00
Peter Goldsborough
03d0a70a4d Set random seed at the start of C++ tests (#8903)
Summary:
Sets the random seed at the start of C++ tests so that everything is super deterministic.

I made sure we only generate random values from torch instead of `std::`, so that this seed always applies. I.e. I do:

```
torch::randint(2, {2}, at::kInt64)
```

instead of

```
std::rand() % 2
```

Also got rid of the tests that test the random seeding, since they would interfere here. Those tests are not very useful anyway, since we just use ATen's seeding mechanism, which should work.

Fixes #7288 #7286 #7289

ebetica ezyang
Closes https://github.com/pytorch/pytorch/pull/8903

Differential Revision: D8667269

Pulled By: goldsborough

fbshipit-source-id: a833e86e156d5e68dae8c53a4b1c433cb0608b6c
2018-06-27 20:09:46 -07:00
Peter Goldsborough
fef9a66d08 Use torch:: instead of at:: (#8911)
Summary:
This PR is the final step to making `torch::` the only  namespace users of the C++ API ever see. Basically, I did:

``` cpp

namespace torch {
using namespace at;
}
```

And then changed `at::` to `torch::` almost everywhere. This worked surprisingly well out of the box. So users can now write `torch::relu`, `torch::log_softmax`, and `torch::conv2d` instead of having to know when to use `at::` and when to use `torch::`. This makes users happy!

Another thing I did was to have `using Dtype = at::ScalarType`, which will be the eventual name anyway.

ebetica ezyang apaszke zdevito
Closes https://github.com/pytorch/pytorch/pull/8911

Reviewed By: ezyang

Differential Revision: D8668230

Pulled By: goldsborough

fbshipit-source-id: a72ccb70fca763c396c4b0997d3c4767c8cf4fd3
2018-06-27 14:42:01 -07:00
Peter Goldsborough
55757357b2 [C++ API] Better forward methods (#8739)
* Better forward methods in C++ API
* Capitalize error message in test_torch.test_flatten
* Support for operator()
* Add operator() to Functional
* Get rid of SigmoidLinear
* Add BoundFunction to FunctionalImpl
* Remove macro from conv because it makes errors more nasty
2018-06-26 13:23:16 -07:00
Peter Goldsborough
521f5111ad [C++ API] Use torch::Tensor instead of at::Tensor/Variable mix (#8680)
* Use torch::Tensor instead of at::Tensor/Variable mix
* TensorRange -> TensorListView
2018-06-24 19:03:39 -07:00
Peter Goldsborough
065fdbd500 Created Tensor::to functions (#8643)
* Created Tensor::to functions
* Only have to(dtype) and to(device)
* Ignore requires_grad in TensorOptions(Tensor) constructor
2018-06-20 09:28:08 -07:00