Commit Graph

128 Commits

Author SHA1 Message Date
lixinyu
5246bc4e87 register parameters correctly in c++ MultiheadAttention (#42037)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42037

This is to fix #41951

Test Plan: Imported from OSS

Reviewed By: yf225

Differential Revision: D22764717

Pulled By: glaringlee

fbshipit-source-id: e6da0aeb05a2356f52446e6d5fad391f2cd1cf6f
2020-07-27 13:58:11 -07:00
Will Feng
bbec4520c6 Add inplace tests for several torch::nn modules / functionals (#35147)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35147

Test Plan: Imported from OSS

Differential Revision: D20578217

Pulled By: yf225

fbshipit-source-id: b8bafa49ee94c7dfbbca6e100ee3d9df5b2b621c
2020-03-21 10:02:56 -07:00
Will Feng
a2557970f3 Fix F::interpolate and torch::nn::Upsample implementation (#35025)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35025

This PR fixes the `F::interpolate` and `torch::nn::Upsample` implementations to match the Python API.

**This PR is BC-breaking in the following way:**

There are changes to `UpsampleOptions` and `InterpolateFuncOptions`:
- `size` is changed from `std::vector<int64_t>` to `c10::optional<std::vector<int64_t>>`. If you want to pass a list of `int64_t` to this argument, you must pass it as `std::vector<int64_t>`.
- `scale_factor` is changed from `std::vector<double>` to `c10::optional<std::vector<double>>`. If you want to pass a list of `double` to this argument, you must pass it as `std::vector<double>`.
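For illustration, a minimal sketch of the new call pattern (the option names are as described above; the sizes and values are made up):

```cpp
#include <torch/torch.h>

namespace F = torch::nn::functional;

int main() {
  auto input = torch::randn({1, 1, 4, 4});

  // `size` is now optional; pass an explicit std::vector<int64_t>.
  auto by_size = F::interpolate(
      input,
      F::InterpolateFuncOptions()
          .size(std::vector<int64_t>({8, 8}))
          .mode(torch::kNearest));

  // `scale_factor` is likewise optional; pass an explicit std::vector<double>.
  auto by_scale = F::interpolate(
      input,
      F::InterpolateFuncOptions()
          .scale_factor(std::vector<double>({2.0, 2.0}))
          .mode(torch::kNearest));
}
```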

**TODO**: cherry-pick this PR into v1.5 release branch.

Test Plan: Imported from OSS

Differential Revision: D20559892

Pulled By: yf225

fbshipit-source-id: ac18609e351a9f2931eaeced8966b9491b2995f7
2020-03-20 22:37:13 -07:00
Will Feng
d7462dcea6 Fix AdaptiveAvgPool{2,3}d and AdaptiveMaxPool{2,3}d implementation (#35022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35022

This PR fixes the `AdaptiveAvgPool{2,3}d` and `AdaptiveMaxPool{2,3}d` implementations to match the Python API. In particular, `output_size` is changed to accept `c10::nullopt` in its elements, matching the Python API behavior.
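A minimal sketch of the new behavior (assuming per-element `c10::nullopt` support as described above; the sizes are made up):

```cpp
#include <torch/torch.h>

int main() {
  auto input = torch::randn({1, 3, 8, 10});

  // Fix one output dimension and keep the other equal to the input size,
  // mirroring Python's nn.AdaptiveAvgPool2d((5, None)).
  torch::nn::AdaptiveAvgPool2d pool(
      torch::nn::AdaptiveAvgPool2dOptions({5, c10::nullopt}));

  auto output = pool(input);  // expected sizes: {1, 3, 5, 10}
}
```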

**TODO**: cherry-pick this PR into v1.5 release branch.

Test Plan: Imported from OSS

Differential Revision: D20559890

Pulled By: yf225

fbshipit-source-id: ccddbd278dd39165cf1dda11fc0e49387c76dbef
2020-03-20 22:36:57 -07:00
Will Feng
d041d0784e [C++ API] RNNCell / LSTMCell / GRUCell layers (#34400)
Summary:
This PR adds `RNNCell` / `LSTMCell` / `GRUCell` layers to the C++ frontend, with implementations exactly matching the Python API equivalents.
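A minimal usage sketch (assuming the cell mirrors the Python signature, as the summary states; the sizes are arbitrary):

```cpp
#include <torch/torch.h>

int main() {
  // One step of an LSTMCell: input of shape (batch, input_size),
  // hidden and cell states of shape (batch, hidden_size).
  torch::nn::LSTMCell cell(
      torch::nn::LSTMCellOptions(/*input_size=*/10, /*hidden_size=*/20));

  auto input = torch::randn({3, 10});
  auto hx = torch::zeros({3, 20});
  auto cx = torch::zeros({3, 20});

  auto state = cell(input, std::make_tuple(hx, cx));
  auto h_next = std::get<0>(state);  // next hidden state
  auto c_next = std::get<1>(state);  // next cell state
}
```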
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34400

Differential Revision: D20316859

Pulled By: yf225

fbshipit-source-id: bb7cee092622334043c0d0fd0fcb4e75e707699c
2020-03-13 21:52:24 -07:00
Will Feng
a54416d208 [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508)
Summary:
This PR is BC-breaking in the following way:
- The deprecated `torch::nn::BatchNorm` is removed in favor of `torch::nn::BatchNorm{1,2,3}d`
- The deprecated `torch::nn::FeatureDropout` is removed in favor of `torch::nn::Dropout{2,3}d`
- The deprecated `torch::nn::modules_ordered_dict` is removed. Users should do `Sequential sequential({{"m1", MyModule(1)}, {"m2", MyModule(2)}})` instead (see the sketch after this list).
- The deprecated `torch::nn::init::Nonlinearity` is removed, in favor of the following enums:
    - `torch::kLinear`
    - `torch::kConv1D`
    - `torch::kConv2D`
    - `torch::kConv3D`
    - `torch::kConvTranspose1D`
    - `torch::kConvTranspose2D`
    - `torch::kConvTranspose3D`
    - `torch::kSigmoid`
    - `torch::kTanh`
    - `torch::kReLU`
    - `torch::kLeakyReLU`
- The deprecated `torch::nn::init::FanMode` is removed, in favor of the following enums:
    - `torch::kFanIn`
    - `torch::kFanOut`
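Taken together, a minimal sketch of the replacements (module choices and sizes are arbitrary):

```cpp
#include <torch/torch.h>

int main() {
  // Named submodules go directly into Sequential instead of
  // going through the removed modules_ordered_dict helper.
  torch::nn::Sequential net({
      {"fc1", torch::nn::Linear(3, 4)},
      {"bn1", torch::nn::BatchNorm1d(4)},  // replaces the removed torch::nn::BatchNorm
      {"relu", torch::nn::ReLU()}});

  // Nonlinearity / FanMode are replaced by plain enum values.
  auto w = torch::empty({4, 3});
  torch::nn::init::kaiming_normal_(w, /*a=*/0.0, torch::kFanOut, torch::kReLU);
}
```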
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34508

Differential Revision: D20351601

Pulled By: yf225

fbshipit-source-id: cca0cd112f29a31bb023e348ca8f82780e42bea3
2020-03-12 10:09:58 -07:00
Mansoor
e95657b87e [C++ API] AdaptiveLogSoftmaxWithLoss (#29076)
Summary:
Implemented `AdaptiveLogSoftmaxWithLoss` and some tests for the module. Reference: https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29076

Differential Revision: D20404588

Pulled By: yf225

fbshipit-source-id: edbadf432b8173cbcc6caf83c9c03dd92dc31a37
2020-03-12 09:53:58 -07:00
generatedunixname89002005287564
9482683065 Remove dead includes in caffe2/test
Reviewed By: ezyang

Differential Revision: D19273220

fbshipit-source-id: 3dfc3388914e60611c84472e3fc529f5b5e40534
2020-01-21 11:30:34 -08:00
Pavel Belevich
47766e648f C++ API parity: MultiheadAttention
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27309

Test Plan: Imported from OSS

Differential Revision: D17766736

Pulled By: pbelevich

fbshipit-source-id: 7a5f2399f081945d31d4c13d7a8d248c387fc1a6
2019-12-18 10:13:29 -08:00
Pavel Belevich
f8e7f3fca4 C++ API parity: BCEWithLogitsLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28783

Test Plan: Imported from OSS

Differential Revision: D18202435

Pulled By: pbelevich

fbshipit-source-id: 011b028bbb2a091e98d3548616b99d7b4569c239
2019-11-20 06:46:38 -08:00
Will Feng
bb1d9b238d torch::nn::FractionalMaxPool{2,3}d module and functional
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29933

Test Plan: Imported from OSS

Differential Revision: D18548174

Pulled By: yf225

fbshipit-source-id: 070776db6e8b7ad94d9b7cbd82b3d6966f061a46
2019-11-19 17:24:07 -08:00
Divyansh Singhvi
ec52d911bd InstanceNorm{1,2,3}d (#28790)
Summary:
Hi yf225,

I have a few doubts related to implementation:
1) What tests do I have to write?
2) What does `_load_state_from_dict` do?
3) Do I need to override the `reset()` function, as I cannot see its utility?
4) `InstanceNormOptions` could be replaced with `BatchNormOptions`, but I find that `track_running_status` is not defined; instead, `stateful` is defined.

InstanceNorm{1,2,3}d https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28790

Differential Revision: D18588666

Pulled By: yf225

fbshipit-source-id: bb9b81f01f62c3fc8765fa0ba0716768087ee155
2019-11-19 16:57:01 -08:00
nuka137
a75b669b0f C++ API: torch::nn::ConvTranspose{1,2,3}d (#29721)
Summary:
Add torch::nn::ConvTranspose{1,2,3}d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29721

Differential Revision: D18588943

Pulled By: yf225

fbshipit-source-id: d4dbb091389367e70459399d5cda3778325c2120
2019-11-19 16:04:12 -08:00
Suyash458
e88d096321 C++/Python API Parity: add AlphaDropout (#28424)
Summary:
- add `AlphaDropoutImpl` to `modules/dropout.h` and `modules/dropout.cpp`
- add `functional/dropout.h` containing the `alpha_dropout` function
- include `functional/dropout.h` in `nn/functional.h`
- add functional and module tests
- related issue: https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28424

Differential Revision: D18589162

Pulled By: yf225

fbshipit-source-id: c85734e02431a6c052515e26b11ca30ad7303644
2019-11-19 10:05:51 -08:00
Will Feng
689b4bea7b torch::nn::GLU and F::glu (#29922)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29922

* #29920 [C++ API] torch::nn::GroupNorm and F::group_norm

Test Plan: Imported from OSS

Differential Revision: D18558818

Pulled By: yf225

fbshipit-source-id: ff80d634309fcb55f53db8dcf86eb9cf8161b37e
2019-11-16 21:03:38 -08:00
Will Feng
d5bf51b684 torch::nn::GroupNorm and F::group_norm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29920

Test Plan: Imported from OSS

Differential Revision: D18539314

Pulled By: yf225

fbshipit-source-id: dabbbaac31796fe7bfde02487737971bde699c1c
2019-11-16 19:22:11 -08:00
PyExtreme
e1d13f4f8b C++ API parity: NLLLoss & CrossEntropyLoss (#29812)
Summary:
Hi yf225, I have added **NLLLoss and CrossEntropyLoss**.

Also, while using `log_softmax` in `cross_entropy_loss`, I am getting an error:
```
../caffe2/../torch/csrc/api/include/torch/nn/functional/loss.h:537:63: error: no matching function for call to ‘log_softmax(const at::Tensor&)’
     const Tensor& log_softmax_input = torch::log_softmax(input);

aten/src/ATen/Functions.h:5551:22: note: candidate: at::Tensor at::log_softmax(const at::Tensor&, int64_t, c10::optional<c10::ScalarType>)
 static inline Tensor log_softmax(const Tensor & self, int64_t dim, c10::optional<ScalarType> dtype) {
                      ^~~~~~~~~~~
aten/src/ATen/Functions.h:5551:22: note:   candidate expects 3 arguments, 1 provided
```

I think the other two parameters should be optional, as in the Python frontend (shown in the documentation at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.log_softmax). Otherwise, there were no errors in the build and the tests have passed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29812

Differential Revision: D18548249

Pulled By: yf225

fbshipit-source-id: 2ab350abd2a6f498d4dba2345f51ad87471f3038
2019-11-16 10:49:09 -08:00
Pavel Belevich
27afac2134 C++ API parity: Dropout, Dropout2d, Dropout3d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29761

Test Plan: Imported from OSS

Differential Revision: D18530820

Pulled By: pbelevich

fbshipit-source-id: 9d351561692f7de099d7c6aaf2ecb930b5c867e9
2019-11-15 20:32:06 -08:00
Will Feng
b37c235d86 C++/Python API parity for Conv{1,2,3}d layers, and add F::conv{1,2,3}d functionals (#28917)
Summary:
This PR changes the implementation of the C++ Conv{1,2,3}d layers to exactly match the Python version, and adds F::conv{1,2,3}d functionals. For more thorough testing, I will rely on the parity test mechanism, which uses values from `common_nn.py` to generate the inputs and options that we are interested in testing.

This PR is BC-breaking in the following way:

In `Conv{1,2,3}dOptions`:
- `with_bias` is renamed to `bias`.
- `input_channels` is renamed to `in_channels`.
- `output_channels` is renamed to `out_channels`.
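A minimal sketch of the renamed options and the new functional (the sizes are made up):

```cpp
#include <torch/torch.h>

namespace F = torch::nn::functional;

int main() {
  // Module form with the renamed option fields.
  torch::nn::Conv2d conv(
      torch::nn::Conv2dOptions(/*in_channels=*/3, /*out_channels=*/8, /*kernel_size=*/3)
          .stride(1)
          .bias(false));  // formerly with_bias

  auto x = torch::randn({1, 3, 16, 16});
  auto y = conv(x);

  // Functional form added by this PR; weight shape is
  // {out_channels, in_channels, kernel_h, kernel_w}.
  auto weight = torch::randn({8, 3, 3, 3});
  auto y2 = F::conv2d(x, weight, F::Conv2dFuncOptions().stride(1));
}
```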
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28917

Differential Revision: D18471526

Pulled By: yf225

fbshipit-source-id: 7a33f60654ad93cc2e043245e7ff9e0ef9da15b3
2019-11-13 12:53:31 -08:00
Will Feng
65bfcde05e Use c10::variant-based enums for SmoothL1Loss module and functional
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29536

Test Plan: Imported from OSS

Differential Revision: D18432272

Pulled By: yf225

fbshipit-source-id: fa355145962e93025b7de98b99b0a4fc82e8c871
2019-11-12 16:05:31 -08:00
Will Feng
9f879ef532 Make all non-input arguments to functionals part of its options (#29404)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29404

This PR makes all non-input arguments to functionals part of its options parameters, so that we won't break backward compatibility even if we add or reorder some of the non-input arguments to functionals in the future.
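For example, a sketch using `F::leaky_relu` as a representative functional (the values are arbitrary):

```cpp
#include <torch/torch.h>

namespace F = torch::nn::functional;

int main() {
  auto x = torch::randn({2, 5});

  // Non-input arguments travel through the options object, so adding or
  // reordering them later does not break existing call sites.
  auto y = F::leaky_relu(x, F::LeakyReLUFuncOptions().negative_slope(0.2));

  // Defaults still apply when the options are omitted.
  auto z = F::leaky_relu(x);
}
```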

Test Plan: Imported from OSS

Differential Revision: D18378526

Pulled By: yf225

fbshipit-source-id: f5cf6bdfb844e75bf94fdee58c121e0955631b6e
2019-11-12 16:05:22 -08:00
eellison
e01fc56ecb move type inference for arange into c++ (#27629)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/17662

I'm not sure if `arange` needs to be in python_arg_parser at all, given the schemas in native_functions.yaml. In any case this at least fixes the dtype mismatch.

In follow-up PRs I will try to handle some of the other ops that do type inference at the Python level, like `randint`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27629

Differential Revision: D17885939

Pulled By: eellison

fbshipit-source-id: f97a8bc722b7ab77de1c42a992e49a4a3175ad60
2019-11-11 11:26:21 -08:00
Will Feng
cb74ede59e Pass F::*FuncOptions instead of torch::nn::*Options to functionals, and make F::*FuncOptions a different class when necessary (#29364)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29364

Currently, we use `torch::nn::*Options` both as module options and functional options. However, this makes it very hard to manage the parameters in `torch::nn::*Options`, because a module's constructor can take a different set of arguments than the module's equivalent functional (e.g. `torch.nn.BatchNorm1d` takes `num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True`, while `F::batch_norm` takes `running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-5`).

This PR resolves the above problem by making `F::*FuncOptions` a different class from `torch::nn::*Options` when necessary (i.e. when a module's constructor takes a different set of arguments than the module's equivalent functional). In the rest of the cases where the module constructor takes the same set of arguments as the module's equivalent functional, `F::*FuncOptions` is an alias of `torch::nn::*Options`.

Also as part of this PR, we change all functional options to pass-by-value, to make the semantics consistent across all functionals.
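A minimal sketch of the split between module options and functional options, using batch norm as in the example above (values are arbitrary):

```cpp
#include <torch/torch.h>

namespace F = torch::nn::functional;

int main() {
  // Module options: constructor-style arguments.
  torch::nn::BatchNorm1d bn(
      torch::nn::BatchNorm1dOptions(4).eps(1e-5).momentum(0.1).affine(true));

  // Functional options: a separate class carrying the functional-only arguments.
  auto input = torch::randn({8, 4});
  auto running_mean = torch::zeros({4});
  auto running_var = torch::ones({4});
  auto output = F::batch_norm(
      input, running_mean, running_var,
      F::BatchNormFuncOptions().momentum(0.1).eps(1e-5).training(false));
}
```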

Test Plan: Imported from OSS

Differential Revision: D18376977

Pulled By: yf225

fbshipit-source-id: 8d9c240d93bfd5af0165b6884fdc912476b1d06b
2019-11-08 22:38:21 -08:00
lsrock1
6389c18709 C++ parity, nn::CrossMapLRN2d (#29039)
Summary:
yf225 https://github.com/pytorch/pytorch/issues/25883
Re-submitted the pull request because of a rebase mistake!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29039

Differential Revision: D18326829

Pulled By: yf225

fbshipit-source-id: 5ed737f6275e4463efa4951d9b7f45c6f2723c82
2019-11-05 15:27:08 -08:00
Pavel Belevich
69f845cb77 C++ API parity: MarginRankingLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29000

Test Plan: Imported from OSS

Differential Revision: D18271855

Pulled By: pbelevich

fbshipit-source-id: cbafc7f059173306c83673d7be374c2d3700911f
2019-11-05 05:41:40 -08:00
Xiaomeng Yang
2460dced8f Add torch.nn.GELU for GELU activation (#28944)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28944

Add torch.nn.GELU for GELU activation

Test Plan: buck test mode/dev-nosan //caffe2/test:nn -- "GELU"

Reviewed By: hl475, houseroad

Differential Revision: D18240946

fbshipit-source-id: 6284b30def9bd4c12bf7fb2ed08b1b2f0310bb78
2019-11-03 21:55:05 -08:00
nuka137
a68c1e109e C++ API: torch::nn::BatchNorm{2,3}d (#28936)
Summary:
Add torch::nn::BatchNorm{2,3}d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883 #28176

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28936

Differential Revision: D18274584

Pulled By: yf225

fbshipit-source-id: 3784eee9f8947f6c7c9f1699544a3d36a1a019b7
2019-11-01 17:50:33 -07:00
Pavel Belevich
4a94eaa60b C++ API parity: PoissonNLLLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28755

Test Plan: Imported from OSS

Differential Revision: D18202436

Pulled By: pbelevich

fbshipit-source-id: a7a27d5f3cdbcbbd9bbbffa02b576609d5fdc9b3
2019-11-01 12:35:59 -07:00
Edward Yang
bbea34f283 Revert D18266918: C++ API: torch::nn::BatchNorm{2,3}d
Test Plan: revert-hammer

Differential Revision: D18266918

Original commit changeset: f432904c7298

fbshipit-source-id: 0e1c596b2e2f13b59082ff422c67ba025df4be07
2019-11-01 10:46:49 -07:00
nuka137
b7c5b3d398 C++ API: torch::nn::BatchNorm{2,3}d (#28936)
Summary:
Add torch::nn::BatchNorm{2,3}d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883 #28176

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28936

Differential Revision: D18266918

Pulled By: yf225

fbshipit-source-id: f432904c72985d52ec52cb992cceb372b6ff0244
2019-11-01 09:28:58 -07:00
Carlos Miranda
72b9bda9e5 Smooth L1 loss (#27661)
Summary:
In accordance with https://github.com/pytorch/pytorch/issues/25883, I added the `SmoothL1Loss` module and `smooth_l1_loss` functional.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27661

Differential Revision: D18002332

Pulled By: yf225

fbshipit-source-id: b382df8becb0de14986ec16ee0dc953d7b10e917
2019-10-31 23:41:35 -07:00
Will Feng
595209bddc Fix bugs in torch::tensor constructor (#28523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28523

New features:
1. Previously, `torch::tensor({true, false, true})` threw `"tensor_cpu" not implemented for 'Bool'`. After this PR, it produces the correct bool tensor, matching the Python API behavior.
2. Tensors with zero-size dimensions are now supported, e.g. `torch::tensor({{}, {}})` produces a tensor with sizes `{2, 0}`, matching the Python API behavior.
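A small sketch of the two new behaviors (shapes as described above):

```cpp
#include <torch/torch.h>

int main() {
  // Bool values are now accepted.
  auto b = torch::tensor({true, false, true});  // dtype: Bool, sizes: {3}

  // Zero-size dimensions are now supported.
  auto e = torch::tensor({{}, {}});             // sizes: {2, 0}
}
```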

BC-breaking bug fixes:
1. Previously, `torch::tensor({{1}, {2}})` produced a tensor of sizes `{2}`. After this PR, it produces a tensor of sizes `{2, 1}`, matching the Python API behavior.
2. Fixed semantics of `torch::tensor(1.1)`: it now returns a 0-dim tensor instead of a 1-dim tensor, matching the Python API behavior.
3. Previously, when passed a non-dtype `TensorOptions`, the `torch::tensor` constructor always produced a tensor of dtype `float`. After this PR, it produces tensors of different dtypes based on the dtype of the braced-init-list, matching the behavior of the no-options case.
```cpp
// Previously:
torch::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float
torch::tensor({{1, 2, 3}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float
torch::tensor({1., 2., 3.}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float
torch::tensor({{1., 2., 3.}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> float

// Now:
torch::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> int
torch::tensor({{1, 2, 3}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> int
torch::tensor({1., 2., 3.}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> double
torch::tensor({{1., 2., 3.}}, torch::TensorOptions(/*non-dtype-options*/)).dtype() -> double

// As comparison, currently:
torch::tensor({1, 2, 3}).dtype() -> int
torch::tensor({{1, 2, 3}}).dtype() -> int
torch::tensor({1., 2., 3.}).dtype() -> double
torch::tensor({{1., 2., 3.}}).dtype() -> double
```

Notes:
1. From now on, the behavior of `at::tensor(scalar_value)` (which produces a 1-dim tensor) would be different from `torch::tensor(scalar_value)` (which produces a 0-dim tensor). I will fix the behavior of `at::tensor(scalar_value)` in a follow-up PR.
2. From now on, the behavior of `at::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/))` (which produces a `float` tensor) would be different from `torch::tensor({1, 2, 3}, torch::TensorOptions(/*non-dtype-options*/))` (which produces an `int` tensor). I will fix this behavior of the `at::tensor` constructor in a follow-up PR.

Context for the changes in this PR:

The motivation comes from fixing the "`torch::tensor({{1}, {2}})` gives tensor of wrong sizes" bug - in order to fix it, I had to move the handling of `at::ArrayRef` and `std::vector` into `InitListTensor` (see below on why we need to do this) and rename `InitListTensor` to `TensorDataContainer`. After such changes, support for bool values comes out of the box without extra effort, and support for tensors with zero-size dimensions only requires adding a default constructor for `TensorDataContainer`, so I added those two in this PR.

For the semantic change of `torch::tensor(1.1)`, it's actually more effort to preserve the original wrong behavior (i.e. we need to check the sizes of the tensor converted from `TensorDataContainer` and reshape any scalar tensor to a 1-D tensor). I think preserving the original wrong behavior doesn't give us much value, and since the above changes naturally fix the problem, we should just start using the right behavior instead.

For the "constructor with non-dtype options behavior" fix, the code looks simpler and easier to reason about with the fix, so I included it in this PR.

--------

Why we need to move the handling of `at::ArrayRef` and `std::vector` into `TensorDataContainer`:

`torch::tensor({{1}, {2}})` can match this function overload:
`torch::tensor(at::ArrayRef<int> values)`, because `{1}` and `{2}` can be treated as
a list-initialization of an `int` value. However, this will produce a Tensor with sizes `{2}`,
but we actually want a Tensor with sizes `{2, 1}`. In order to avoid matching this function overload,
we removed the function overload and moved the ability to convert `at::ArrayRef<T>`
(and similarly `std::vector<T>`) into `TensorDataContainer`, and since for braced-init-list the
`TensorDataContainer(std::initializer_list<TensorDataContainer>)` constructor is always preferred over all other constructors, it will take the `std::initializer_list` path, and all is good.

Test Plan: Imported from OSS

Differential Revision: D18234625

Pulled By: yf225

fbshipit-source-id: 0f3f6912e82e2117d2103e31b74e7e97baaa8693
2019-10-31 12:53:06 -07:00
Pavel Belevich
d6f1e49c4a C++ API parity: CTCLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28654

Test Plan: Imported from OSS

Differential Revision: D18202437

Pulled By: pbelevich

fbshipit-source-id: a4b80a57e65da84f3988002a026c648fa52a0fde
2019-10-30 14:35:02 -07:00
jon-tow
1d3d9ec7d4 C++ API Parity: functional::fold and Fold::pretty_print (#28732)
Summary:
Adds `torch::nn::functional::fold` support and updates `Fold::pretty_print` in the C++ API for more thorough Python parity.

Note: small updates elsewhere in the source files to maintain consistency.

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28732

Differential Revision: D18219955

Pulled By: yf225

fbshipit-source-id: fd2e9be8f17db77c1b1f384c0d2e16cc34858c0c
2019-10-30 11:37:39 -07:00
mansoorcheema
a465b033fd Local response norm (#28759)
Summary:
Implemented `LocalResponseNorm` and some initial tests for the module and functional. Reference: https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28759

Differential Revision: D18219745

Pulled By: yf225

fbshipit-source-id: e6aad568a8b1e81f54752decaefd4f9044029da9
2019-10-30 11:31:00 -07:00
mrsalehi
dfe7b25eaf Add nn::Flatten to C++ Frontend (#28072)
Summary:
Adds torch::nn::Flatten module support for the C++ API.

Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28072

Differential Revision: D18202778

Pulled By: yf225

fbshipit-source-id: 43345dcbdf2f50d75746bf9a0ba293b84df275ab
2019-10-29 17:52:47 -07:00
nuka137
cbc234bceb C++ API: torch::nn::BatchNorm1d (#28176)
Summary:
Add torch::nn::BatchNorm1d function/module support for the C++ API.
torch::nn::BatchNorm{2,3}d will be added after this PR is merged.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225

I would like to discuss the items below.

* Necessity of `num_batches_tracked` in `BatchNormImplBase`
  * `num_batches_tracked` is needed to calculate `momentum` when the `momentum` argument is not passed in the Python API. But in the C++ API, the `momentum` argument has a default value.
  * `num_batches_tracked` is only used for counting `BatchNorm1d::forward()` calls. I think it is no longer necessary for the user.
* The design of `BatchNorm{1,2,3}dOptions`
  * We already have `BatchNormOptions`, used for the deprecated `BatchNorm` module. However, it is hard to reuse it for `BatchNorm{1,2,3}dOptions` because the modules' arguments disagree.
  * In this PR, I introduce a `BatchNormOptionsv2` template class for `BatchNorm{1,2,3}dOptions`, but I'm not sure whether this design is good.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28176

Differential Revision: D18196843

Pulled By: yf225

fbshipit-source-id: 667e2b5de4150d5776c41b9088c9e6c2ead24cd4
2019-10-29 17:29:42 -07:00
Will Feng
e33b4b6761 Use c10::variant-based enums for Reduction
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27942

Test Plan: Imported from OSS

Differential Revision: D18202857

Pulled By: yf225

fbshipit-source-id: 0303ce2508e3b7665c6a91ae270a7d0ef0e45900
2019-10-29 14:15:48 -07:00
jon-tow
52dd587123 C++ API parity: Upsample (#28413)
Summary:
Adds `interpolate` functional and `Upsample` module support for the C++ API.

**Issue**: https://github.com/pytorch/pytorch/issues/25883

**Reviewer**: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28413

Differential Revision: D18165014

Pulled By: yf225

fbshipit-source-id: ecae2f432a301b1f4afa7c038b2d104cbad139f2
2019-10-28 21:34:44 -07:00
nuka137
648749b203 C++ API: torch::nn::LPPool2d (#28492)
Summary:
Add torch::nn::LPPool2d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883 #27800

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28492

Differential Revision: D18109401

Pulled By: yf225

fbshipit-source-id: 5cedecb895d9d44c2167cdb3f6f758f3426b3497
2019-10-28 12:28:25 -07:00
Will Feng
d04973beda Use c10::variant-based enums for EmbeddingBag mode (#28330)
Summary:
This PR is BC-breaking in the following way:

Previously, we required the use of `std::string` to specify the mode for `EmbeddingBag`. After this PR, we use variant-based enums such as `torch::kSum` / `torch::kMean` / `torch::kMax` to specify the mode for `EmbeddingBag`.
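A minimal sketch of the new enum-based mode (sizes and values are arbitrary):

```cpp
#include <torch/torch.h>

int main() {
  // The mode is now a variant-based enum rather than a string.
  torch::nn::EmbeddingBag bag(
      torch::nn::EmbeddingBagOptions(/*num_embeddings=*/10, /*embedding_dim=*/3)
          .mode(torch::kSum));

  auto input = torch::tensor({1, 2, 4, 5, 4, 3, 2, 9}, torch::kLong);
  auto offsets = torch::tensor({0, 4}, torch::kLong);
  auto output = bag(input, offsets);  // sizes: {2, 3}
}
```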
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28330

Differential Revision: D18127116

Pulled By: yf225

fbshipit-source-id: 15cd86c764777f4d399587be92cda15b6ce8524b
2019-10-24 17:47:42 -07:00
Will Feng
92b39434a2 C++ nn::ConstantPad{1,2,3}d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28541

Test Plan: Imported from OSS

Differential Revision: D18115607

Pulled By: yf225

fbshipit-source-id: 736df791ddc3cd30ad9af89eacfb4a0c6b53f2cd
2019-10-24 15:10:27 -07:00
Will Feng
7f9941c4ea C++ nn::ZeroPad2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28540

Test Plan: Imported from OSS

Differential Revision: D18115610

Pulled By: yf225

fbshipit-source-id: ced7c0917f4712838e753cd2e9fc4fa79fd5d310
2019-10-24 14:23:57 -07:00
Will Feng
303527d733 C++ nn::ReplicationPad{1,2,3}d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28539

Test Plan: Imported from OSS

Differential Revision: D18115609

Pulled By: yf225

fbshipit-source-id: 15f4ab6a114279bb06bf62f1265b62aa12f8700f
2019-10-24 12:49:41 -07:00
Will Feng
78375c02b8 C++ nn::ReflectionPad1d and nn::ReflectionPad2d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28538

Test Plan: Imported from OSS

Differential Revision: D18115608

Pulled By: yf225

fbshipit-source-id: 3a48d8c11721f013076db2965f5f75b71662c78e
2019-10-24 12:02:51 -07:00
Pavel Belevich
dd277e9086 C++ API parity: Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27382

Test Plan: Imported from OSS

Differential Revision: D17766735

Pulled By: pbelevich

fbshipit-source-id: c7a66daeb17550eb9a5d26944427723d4ebdc6c8
2019-10-24 07:11:51 -07:00
Anjali Chourdia
7b59174882 torch::nn::LayerNorm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28032

Differential Revision: D18047371

Pulled By: anjali411

fbshipit-source-id: fb61aea52d6622a67ec1d84950e17e85686461ae
2019-10-22 12:50:22 -07:00
nuka137
9ea42f8d7c C++ API: torch::nn::LPPool1d (#27800)
Summary:
Add torch::nn::LPPool1d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27800

Differential Revision: D18045040

Pulled By: yf225

fbshipit-source-id: e61fefe9efec3423f7a93dd1e946f3e380122927
2019-10-21 15:33:51 -07:00
Carlos Miranda
a1e14a6626 PixelShuffle module and functional (#28140)
Summary:
Added `PixelShuffle` module and functional https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28140

Differential Revision: D18008474

Pulled By: yf225

fbshipit-source-id: f482495bb56998701c79a61ef065a121bf5a5154
2019-10-18 15:54:14 -07:00
Shahriar
91a260cef9 Adding MSELoss, KLDivLoss and BCELoss to C++ front-end (#27156)
Summary:
This PR adds `MSELoss`, `KLDivLoss` and `BCELoss`. The tests for `BCELoss` fail with the following error:
```
unknown file: Failure
C++ exception with description "autograd_meta() INTERNAL ASSERT FAILED at /home/shahriar/Contrib/pytorch/c10/core/TensorImpl.h:533, please report a bug to PyTorch. set_requires_grad is not implemented for Tensor (set_requires_grad at /home/shahriar/Contrib/pytorch/c10/core/TensorImpl.h:533)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27156

Differential Revision: D17960323

Pulled By: yf225

fbshipit-source-id: 84b8431064f2f573679c03a8d7994e3e2f81a4d1
2019-10-17 22:07:01 -07:00