Commit Graph

409 Commits

Author SHA1 Message Date
Will Feng
5c33d98b0d Add assert_tensor_equal and assert_tensor_not_equal to test/cpp/api/support.h (#30426)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30426

This PR adds `assert_tensor_equal` and `assert_tensor_not_equal` to `test/cpp/api/support.h`, as better functions for testing whether two tensors are equal / not equal.
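
A minimal sketch of what such helpers can look like (hypothetical bodies; the real implementations live in `test/cpp/api/support.h`), assuming GoogleTest and `torch::equal`:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical sketch; the real helpers live in test/cpp/api/support.h.
inline void assert_tensor_equal(const torch::Tensor& a, const torch::Tensor& b) {
  ASSERT_TRUE(torch::equal(a, b)) << "tensors are not equal";
}

inline void assert_tensor_not_equal(const torch::Tensor& a, const torch::Tensor& b) {
  ASSERT_FALSE(torch::equal(a, b)) << "tensors are unexpectedly equal";
}
```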

Test Plan: Imported from OSS

Differential Revision: D18695900

Pulled By: yf225

fbshipit-source-id: c19b9bc4c4e84d9f444015023649d27618fcbdf5
2020-02-26 13:25:25 -08:00
Will Feng
0dded4026e [C++ API] Add PackedSequence / pack_padded_sequence / pad_packed_sequence / pack_sequence (#33652)
Summary:
Most of the function implementation and test code are translated from the Python version.
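
A usage sketch, assuming the C++ API mirrors the Python signatures under `torch::nn::utils::rnn`:

```cpp
#include <torch/torch.h>

int main() {
  namespace rnn = torch::nn::utils::rnn;
  auto input = torch::randn({3, 2, 5});  // (seq_len, batch, features)
  auto lengths = torch::tensor({3, 2});  // per-sequence lengths, sorted descending
  rnn::PackedSequence packed = rnn::pack_padded_sequence(input, lengths);
  auto unpacked = rnn::pad_packed_sequence(packed);  // tuple of (padded data, lengths)
}
```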
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33652

Differential Revision: D20052211

Pulled By: yf225

fbshipit-source-id: ce6767db54364f91ef4f06674239a12278c2752a
2020-02-25 12:53:41 -08:00
Will Feng
36919278cc C++ tensor multi-dim indexing: add index() and index_put_() overloads, simple indexing tests, merge with Python indexing path (#32841)
Summary:
This PR adds the following items:
- **1st item**: `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads for `Tensor::index` and `Tensor::index_put_`, to be used specifically for multi-dim indexing (a usage sketch follows this list).

Design rationale:
* C++ `Tensor::index` and `Tensor::index_put_` are both existing tensor APIs, and they currently (before this PR) only accept a list of tensors (i.e. `ArrayRef<Tensor>`) as indices. If we change their signatures to also accept non-tensors as indices (i.e. `ArrayRef<TensorIndex>`, and `TensorIndex` is convertible from `Tensor` / `Slice` / `None` / `Ellipsis`), it would slow down the original code path (since now it has to go through more steps), which is undesirable.

    To get around this problem, the proposed solution is to keep the original `ArrayRef<Tensor>` overload, and add `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads to `Tensor::index` and `Tensor::index_put_`. This way, the original code path won’t be affected, and the tensor multi-dim indexing API is only used when the user explicitly passes an `ArrayRef<TensorIndex>` or a braced-init-list of `TensorIndex`-convertible types to `Tensor::index` and `Tensor::index_put_`.

    Note that the above proposed solution would still affect perf for the user’s original `Tensor::index` or `Tensor::index_put_` call sites that use a braced-init-list of tensors as input, e.g. `tensor.index({...})` or `tensor.index_put_({...}, value)`, since now such function calls would take the multi-dim indexing path instead of the original advanced indexing path. However, there are only two instances of this in our codebase (one in an ATen cpp test, one in a C++ API nn init function), and they can be easily changed to explicitly use `ArrayRef<Tensor>` as input (I changed them in this PR). For external users' code, since this is part of the C++ frontend, which is still considered experimental, we will only mention this change in the release notes and ask users to switch to `ArrayRef<Tensor>` explicitly if they want to keep using the original advanced indexing code path.

- **2nd item**: Mechanisms for parsing `ArrayRef<TensorIndex>` indices and performing indexing operations (mirroring the functions in `torch/csrc/autograd/python_variable_indexing.cpp`).
- **3rd item**: Simple tests to demonstrate that the `Tensor::index()` and `Tensor::index_put_()` APIs work. I will add more tests after the first few PRs are reviewed.
- **4th item**: Merge Python/C++ indexing code paths, for code simplicity. I tested locally and found that there is no perf regression resulting from the merge. I will get more concrete numbers for common use cases when we settle on the overall design.
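
A sketch of the two paths described above, assuming the `torch::indexing` namespace introduced by this work:

```cpp
#include <torch/torch.h>

int main() {
  using namespace torch::indexing;
  auto t = torch::arange(0., 24.).reshape({2, 3, 4});

  // New multi-dim indexing path: braced-init-list of TensorIndex-convertible values.
  auto a = t.index({0, Slice(None, 2), Ellipsis});
  t.index_put_({0, 1, Slice()}, torch::zeros({4}));

  // Original advanced-indexing path: pass ArrayRef<Tensor> explicitly.
  std::vector<torch::Tensor> indices = {torch::tensor({0, 1})};
  auto b = t.index(indices);
}
```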

This PR supersedes https://github.com/pytorch/pytorch/pull/30425.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32841

Differential Revision: D19919692

Pulled By: yf225

fbshipit-source-id: 7467e64f97fc0e407624809dd183c95ea16b1482
2020-02-24 22:04:00 -08:00
Suyash458
47e90d774e C++/Python API Parity: add pad_sequence (#32387)
Summary:
- add `pad_sequence` and tests
- related issue https://github.com/pytorch/pytorch/issues/25883
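
A usage sketch of the new functional (assuming it lives in `torch::nn::utils::rnn` like its Python counterpart):

```cpp
#include <torch/torch.h>

int main() {
  std::vector<torch::Tensor> sequences = {torch::ones({4, 3}), torch::ones({2, 3})};
  // Shorter sequences are padded with zeros; result shape here is (2, 4, 3).
  auto padded = torch::nn::utils::rnn::pad_sequence(sequences, /*batch_first=*/true);
}
```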
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32387

Differential Revision: D20025421

Pulled By: yf225

fbshipit-source-id: caa9ae2114bece8db387a3a1610f24a3e06b1324
2020-02-21 13:16:09 -08:00
nicolov
e77abb9a5b Normalize reward-to-go in C++ actor-critic (#33550)
Summary:
Compared to the [Python implementation](https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py), it seems that the tensor of normalized reward-to-go is computed but never used. Even if it's just an integration test, this PR switches to the normalized version for better convergence.
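
The normalization in question is the standard zero-mean/unit-variance rescaling; a minimal sketch (the epsilon is an assumed stability constant):

```cpp
#include <torch/torch.h>

int main() {
  auto returns = torch::tensor({3.0, 2.0, 1.0});  // reward-to-go per timestep
  // Normalize to zero mean and unit variance before computing the policy loss.
  auto normalized = (returns - returns.mean()) / (returns.std() + 1e-7);
}
```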
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33550

Differential Revision: D20024393

Pulled By: yf225

fbshipit-source-id: ebcf0fee14ff39f65f6744278fb0cbf1fc92b919
2020-02-21 09:19:39 -08:00
Will Feng
a203dc2e6d [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33027

This PR allows default arguments in a module's forward method to be skipped when the module is used in `torch::nn::Sequential`, by introducing the `FORWARD_HAS_DEFAULT_ARGS` macro and requiring that every module with default arguments in its forward method has a corresponding `FORWARD_HAS_DEFAULT_ARGS` macro call.

Fixes issue mentioned in https://github.com/pytorch/pytorch/issues/30931#issuecomment-564144468.
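
A sketch of how the macro might be used, based on the description above (the `{index, default-value}` pair format is inferred here, not confirmed by this commit message):

```cpp
#include <torch/torch.h>

struct MImpl : torch::nn::Module {
  torch::Tensor forward(torch::Tensor x, int64_t scale = 2) {
    return x * scale;
  }

 protected:
  // Declares that forward's argument at index 1 (`scale`) has default value 2.
  FORWARD_HAS_DEFAULT_ARGS({1, torch::nn::AnyValue(static_cast<int64_t>(2))})
};
TORCH_MODULE(M);

int main() {
  torch::nn::Sequential seq(M{});
  auto y = seq->forward(torch::ones({2}));  // `scale` can now be skipped
}
```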

Test Plan: Imported from OSS

Differential Revision: D19777815

Pulled By: yf225

fbshipit-source-id: 73282fcf63377530063e0092a9d84b6c139d2e32
2020-02-17 20:38:02 -08:00
Will Feng
4724964810 [C++ API] Expose AnyValue and AnyModuleHolder classes (#33026)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33026

This PR contains necessary changes to prepare for https://github.com/pytorch/pytorch/pull/33027. It exposes the following classes to public:
1. `torch::nn::AnyValue`, because if the user has optional arguments in their module's forward method, they must also use the `FORWARD_HAS_DEFAULT_ARGS` macro and pass in the default values for those optional arguments wrapped by `torch::nn::AnyValue`.
2. `torch::nn::AnyModuleHolder`, because `torch::nn::Module` needs to declare it as a friend class for it to be able to access `torch::nn::Module`'s protected methods such as `_forward_has_default_args` / `_forward_num_required_args` / `_forward_populate_default_args`.

Test Plan: Imported from OSS

Differential Revision: D19777814

Pulled By: yf225

fbshipit-source-id: 1c9d5aa24f0689154752c426a83ee98f64c9d02f
2020-02-17 20:35:22 -08:00
Will Feng
5d7f42847c Add at::Tensor::retain_grad API (#33349)
Summary:
This PR adds `at::Tensor::retain_grad`, and its implementation mirrors the Python `torch.Tensor.retain_grad` API:
c6271c63f2/torch/tensor.py (L292-L315)
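
A usage sketch mirroring the Python behavior:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({2, 2}, torch::requires_grad());
  auto y = x * 2;   // non-leaf tensor: its grad is normally not retained
  y.retain_grad();  // ask autograd to keep y's gradient after backward
  y.sum().backward();
  std::cout << y.grad() << std::endl;  // now defined
}
```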
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33349

Differential Revision: D19944524

Pulled By: yf225

fbshipit-source-id: e61d5d761996b6d1b860c04c4b4650c1a49a6a8c
2020-02-17 20:03:48 -08:00
anjali411
91744907d4 SGD: updated step and class design (#32592)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32592

Differential Revision: D19868154

Pulled By: anjali411

fbshipit-source-id: ce888efc68b1531d97e8b0abf2b146198e012d2f
2020-02-12 18:38:55 -08:00
albanD
3655975565 Add allow_rebase_history flag and fix codegen functions for multiple views (#32790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32790

Same as https://github.com/pytorch/pytorch/pull/31990 but without the first commit in the stack that is problematic for a lot of people.

Test Plan: Imported from OSS

Differential Revision: D19814116

Pulled By: albanD

fbshipit-source-id: d104911a5b098a5807b4bc08b69803ebd4f69fa6
2020-02-11 07:16:02 -08:00
albanD
e1c53a5c86 Fix version counter bump in cpp Function (#33068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33068

The version counter is already tracked if we use PyTorch's functions, but not if the user unpacks the Tensor and modifies it by hand or with a third-party library.

Test Plan: Imported from OSS

Differential Revision: D19791564

Pulled By: albanD

fbshipit-source-id: a73c0f73d8fd0c0e5bf838f14bed54fa66937840
2020-02-10 07:22:29 -08:00
Pavel Belevich
d141465713 Fix torch::allclose to handle std::numeric_limits<T>::lowest() for integral types (#32978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32978

Fixes #32946

Test Plan: Imported from OSS

Differential Revision: D19726013

Pulled By: pbelevich

fbshipit-source-id: ada4aeabc8e39016d24f1a40f02fb7c56f069cd3
2020-02-04 19:06:52 -08:00
Will Feng
b564eaf7a8 Bug fixes: torch::tensor(floating-point values) -> default dtype, and torch::tensor(integer values) ->at::kLong (#32367)
Summary:
Some of the `torch::tensor` behavior is updated to better match Python API. Fixes https://github.com/pytorch/pytorch/issues/32234.

This PR is BC-breaking in the following way:
- `torch::tensor({1.0f, 2.0f})`: float -> default dtype
- `torch::tensor(at::ArrayRef<int>({1, 2, 3}))`: int -> at::kLong
- `torch::tensor(std::vector<int>({1, 2, 3}))`: int -> at::kLong
- `torch::tensor(at::ArrayRef<float>({1.f, 2.f, 3.f}))`: float -> default dtype
- `torch::tensor(std::vector<float>({1.f, 2.f, 3.f}))`: float -> default dtype
- `torch::tensor(at::ArrayRef<double>({1., 2., 3.}))`: double -> default dtype
- `torch::tensor(std::vector<double>({1., 2., 3.}))`: double -> default dtype
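
Assuming the default dtype is `torch::kFloat`, the new behavior can be checked like this (a sketch):

```cpp
#include <torch/torch.h>
#include <vector>

int main() {
  // Floating-point values now follow the default dtype.
  TORCH_CHECK(torch::tensor({1.0f, 2.0f}).scalar_type() == torch::kFloat);
  TORCH_CHECK(torch::tensor(std::vector<double>({1., 2., 3.})).scalar_type() == torch::kFloat);
  // Integer values now map to at::kLong, matching the Python API.
  TORCH_CHECK(torch::tensor(std::vector<int>({1, 2, 3})).scalar_type() == torch::kLong);
}
```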
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32367

Differential Revision: D19498484

Pulled By: yf225

fbshipit-source-id: 19c8dc2a56476266153cff4c404e7f84d309eb12
2020-02-01 15:00:07 -08:00
Alban Desmaison
db8ce7ea2d Back out "Make autogen functions correct for multiple outputs and views" (#32681)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32681

Original commit changeset: a2b41c2d231e

Test Plan: fb and oss tests

Reviewed By: hudeven

Differential Revision: D19591864

fbshipit-source-id: 7068b5563e37bc9a5d415fd535c73fd9d71fe131
2020-01-27 19:54:34 -08:00
Charles Hofer
5fd037ce44 Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547)
Summary:
This test case had been using the tensor

```
1  2  3  4
5  6  7  8
9  10 11 12
13 14 15 16
```

which is not invertible and causes the test case to fail even if MAGMA is initialized just fine. This change uses a tensor that is invertible and whose inverse contains no elements close to zero, to avoid floating-point rounding errors.
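
The old matrix is singular: its rows are arithmetic progressions, so it has rank 2 and determinant 0. A quick demonstration of the invertibility check (this is a sketch, not the test's actual replacement matrix):

```cpp
#include <torch/torch.h>
#include <cmath>

int main() {
  // The rows of 1..16 are linearly dependent: rank 2, determinant 0.
  auto singular = torch::arange(1., 17.).reshape({4, 4});
  TORCH_CHECK(std::abs(torch::det(singular).item<double>()) < 1e-4);

  // A scaled identity is trivially invertible; its inverse is 0.5 * eye(4).
  auto invertible = torch::eye(4) * 2.0;
  auto inverse = torch::inverse(invertible);
}
```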
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32547

Differential Revision: D19572316

Pulled By: ngimel

fbshipit-source-id: 1baf3f8601b2ba69fdd6678d7a3d86772d01edbe
2020-01-25 20:00:54 -08:00
albanD
3ab30753e9 Make autogen functions correct for multiple outputs and views (#31990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31990

This PR does three things:
- Add a new `allow_rebase_history` flag to the differentiable views. If set, trying to rebase their history will raise an error.
- Make sure that the codegen functions verify this flag before doing inplace operations so that they fail before doing the inplace modification.
- Make sure the codegen functions set this flag properly when we don't support rebasing the history of the output.

The codegen change can be found [here](4bf180caa0).

Test Plan: Imported from OSS

Differential Revision: D19409649

Pulled By: albanD

fbshipit-source-id: a2b41c2d231e952ecfe162bdb6bad620ac595703
2020-01-24 14:32:28 -08:00
anjali411
be6ffac1b6 Adagrad optimizer - updated step function, added param_groups, state to optimizers
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29335

Differential Revision: D19449382

Pulled By: anjali411

fbshipit-source-id: ee238801ed9cdf15a80f2ce31cc4aab8ba582aea
2020-01-21 14:41:12 -08:00
generatedunixname89002005287564
9482683065 Remove dead includes in caffe2/test
Reviewed By: ezyang

Differential Revision: D19273220

fbshipit-source-id: 3dfc3388914e60611c84472e3fc529f5b5e40534
2020-01-21 11:30:34 -08:00
Brian Wignall
f326045b37 Fix typos, via a Levenshtein-type corrector (#31523)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos, with https://github.com/bwignall/typochecker to help automate the checking.

Uses an updated version of the tool used in https://github.com/pytorch/pytorch/pull/30606 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31523

Differential Revision: D19216749

Pulled By: mrshenli

fbshipit-source-id: 7fd489cb9a77cd7e4950c1046f925d57524960ea
2020-01-17 16:03:19 -08:00
Will Feng
2bd179147a Fix typo in config script to re-enable libtorch build and test in macOS CI (#32072)
Summary:
Currently, libtorch build and test are not running in macOS CI. This PR fixes the issue.

**Test Plan:**
Check that libtorch build and test are running again in macOS CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32072

Differential Revision: D19391909

Pulled By: yf225

fbshipit-source-id: 1ab345b099869f78e1124f1a8bd185fa51371b6a
2020-01-14 16:23:57 -08:00
Will Feng
b6cee03e29 C++ tensor indexing: add Slice / TensorIndex (#30424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30424

`at::indexing::TensorIndex` is used for converting C++ tensor indices such as `{None, "...", Ellipsis, 0, true, {1, None, 2}, torch::tensor({1, 2})}` into its equivalent `std::vector<TensorIndex>`, so that further tensor indexing operations can be performed using the supplied indices.

Test Plan: Imported from OSS

Differential Revision: D18695902

Pulled By: yf225

fbshipit-source-id: d73e14a411cdbec815866b02e75ffd71a9186e89
2020-01-10 17:53:41 -08:00
TH3CHARLie
1296e2d55e C++ API parity: isinf (#31099)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/31021; ports the legacy binding method of `isinf` to C++, thereby supporting JIT.
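
A usage sketch of the ported function:

```cpp
#include <torch/torch.h>
#include <limits>

int main() {
  const float inf = std::numeric_limits<float>::infinity();
  auto t = torch::tensor({1.0f, inf, -inf});
  auto mask = torch::isinf(t);  // mask of infinite entries: [false, true, true]
}
```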
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31099

Differential Revision: D19314733

Pulled By: yf225

fbshipit-source-id: 5725c51d19c33b4fddd0fc9e7034078580bd534e
2020-01-09 13:16:13 -08:00
Jeremy Lilley
114562cf93 For torch::from_blob() add clue when memory is non-owned. (#31222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31222

- When constructing a tensor via torch::from_blob() in the case where the deleter is a nop, switch to using a nullptr context in the DataPtr (with a nop deleter).

- No real extra memory/CPU requirements here; it actually saves a minor alloc.

Why? To get a signal that a Tensor might contain non-owned memory from torch::from_blob(), by detecting the nullptr context.
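
A sketch of the signal this enables, assuming `DataPtr::get_context()` is the accessor used for the check:

```cpp
#include <torch/torch.h>

int main() {
  float data[4] = {1.f, 2.f, 3.f, 4.f};
  // Nop deleter: the resulting tensor does not own `data`.
  auto t = torch::from_blob(data, {4});
  // With this change, a nullptr DataPtr context signals possibly non-owned memory.
  bool maybe_non_owned = t.storage().data_ptr().get_context() == nullptr;
}
```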
ghstack-source-id: 96336078

Test Plan:
buck test mode/dev caffe2/test/cpp/api/...
buck test mode/dev-nosan caffe2/test/...

Differential Revision: D18992119

fbshipit-source-id: 4eea642f82d0858b57fdfc6995364a760c10567d
2020-01-07 13:12:30 -08:00
Jerry Zhang
ebe69236d1 Expose class constant through attr and setattr in object (#29219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29219

We added class constants in previous PRs; this PR allows access to class constants in the object API.

Test Plan:
build/bin/test_jit
python test/test_jit.py

Imported from OSS

Differential Revision: D18846851

fbshipit-source-id: 888a6517d5f747d1f8ced283c0c2c30b2f6c72c6
2020-01-04 11:09:35 -08:00
Jerry Zhang
1bb6c51421 Fix getAttribute (#31011)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31011

`getAttribute` is supposed to throw when the attribute is not found, rather than return a `nullptr`.

Test Plan:
.

Imported from OSS

Differential Revision: D18898417

fbshipit-source-id: 0fe7d824b978ad19bb5ef094d3aa560e9fc57f87
2019-12-18 19:27:39 -08:00
Pavel Belevich
47766e648f C++ API parity: MultiheadAttention
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27309

Test Plan: Imported from OSS

Differential Revision: D17766736

Pulled By: pbelevich

fbshipit-source-id: 7a5f2399f081945d31d4c13d7a8d248c387fc1a6
2019-12-18 10:13:29 -08:00
Nathan Goldbaum
f531815526 Deprecate tensor.type() (#30281)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/29161.

I looked a bit at the code changes related to this and think I have all of the use cases of `DeprecatedTypeProperties` covered in the message, but suggestions from someone with more context on this would be very much appreciated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30281

Differential Revision: D18830818

Pulled By: ezyang

fbshipit-source-id: 1a7fcee15354ae09e6644577e7fa33bd26acfe20
2019-12-05 10:55:34 -08:00
Will Feng
18ec4632b3 Exclude undefined tensors in the result of Module::parameters() / named_parameters() / buffers() / named_buffers() (#30626)
Summary:
PR https://github.com/pytorch/pytorch/pull/30523 attempted to fix https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462, but the fix wasn't complete. This PR makes the following improvements:
1. Fixes https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462 properly by excluding undefined tensors in the result of `Module::parameters()` / `named_parameters()` / `buffers()` / `named_buffers()`, which mirrors the Python API behavior.
2. Audits all use sites of `Module::parameters_` / `buffers_` and change them to `Module::named_parameters(/*recurse=*/false)` / `named_buffers(/*recurse=*/false)` when appropriate, so that use sites of module parameters / buffers never need to worry about undefined tensors.
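
After this change, iterating the results never yields undefined tensors; a usage sketch:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::nn::Linear linear(3, 4);
  // Every returned entry is now a defined tensor; no undefined-tensor checks needed.
  for (const auto& p : linear->named_parameters(/*recurse=*/false)) {
    std::cout << p.key() << ": " << p.value().sizes() << std::endl;
  }
}
```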
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30626

Differential Revision: D18777507

Pulled By: yf225

fbshipit-source-id: 55b64b69779e1186342efd3c44857f416334ed6b
2019-12-02 21:59:58 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Will Feng
7ac8efa689 Skip undefined tensors when moving torch::nn module to a different device (#30523)
Summary:
This fixes high-pri issues such as https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30523

Differential Revision: D18732904

Pulled By: yf225

fbshipit-source-id: fe5a7a43838000f5803bd9c01ecfba0c3f02df5d
2019-11-27 21:21:02 -08:00
Will Feng
3ba1456aee Fix clip_grad_norm_ / clip_grad_value_ to take input by value instead of by non-const ref (#30216)
Summary:
The original design of `torch::nn::utils::clip_grad_norm_` / `clip_grad_value_` takes input by non-const reference, which prevents users from passing an rvalue reference as input into the functions. This PR changes the functions to take input by value, which matches the Python version's semantics and also adheres to the C++ API convention that if a function modifies its input in-place, it should take that input by value.
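
With by-value inputs, an rvalue such as the result of `parameters()` can be passed directly; a sketch:

```cpp
#include <torch/torch.h>

int main() {
  torch::nn::Linear linear(3, 1);
  auto loss = linear(torch::randn({2, 3})).sum();
  loss.backward();
  // parameters() returns a temporary vector; taking it by value makes this legal.
  torch::nn::utils::clip_grad_norm_(linear->parameters(), /*max_norm=*/1.0);
}
```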
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30216

Differential Revision: D18632543

Pulled By: yf225

fbshipit-source-id: 97a09d6467f982fe9c8120f483a9c07fcf13699e
2019-11-21 10:07:00 -08:00
lsrock1
0a77c090d5 C++ parity, convert_parameters (#29267)
Summary:
yf225 https://github.com/pytorch/pytorch/issues/25883
Updates parameters_to_vector and vector_to_parameters.
Please check!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29267

Differential Revision: D18628571

Pulled By: yf225

fbshipit-source-id: 03783e6b0f8183dd97ae48f3da4acb1d07083555
2019-11-20 19:59:11 -08:00
Will Feng
5cbdbddc12 Add test for F::max_unpool3d, and update parity table
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30171

Differential Revision: D18620503

Pulled By: yf225

fbshipit-source-id: 52adf9a6c0238b5cdb2e11e03807fb7dd73880bf
2019-11-20 12:42:24 -08:00
Will Feng
a460c856dd Fix naming for kl_div and binary_cross_entropy functional options (#30146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30146

This PR fixes naming for kl_div and binary_cross_entropy functional options, to be more consistent with the naming scheme of other functional options.

Test Plan: Imported from OSS

Differential Revision: D18618971

Pulled By: yf225

fbshipit-source-id: 2af62c1a0ace2cd0c36c2f1071639bf131d8fe61
2019-11-20 12:23:50 -08:00
Pavel Belevich
f8e7f3fca4 C++ API parity: BCEWithLogitsLoss
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28783

Test Plan: Imported from OSS

Differential Revision: D18202435

Pulled By: pbelevich

fbshipit-source-id: 011b028bbb2a091e98d3548616b99d7b4569c239
2019-11-20 06:46:38 -08:00
Pavel Belevich
cc81769e10 C++ API parity: isfinite
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30083

Test Plan: Imported from OSS

Differential Revision: D18594723

Pulled By: pbelevich

fbshipit-source-id: 5970e0aa6ef8994e9c4a741784fd053383aaceb7
2019-11-19 20:00:05 -08:00
Will Feng
bb1d9b238d torch::nn::FractionalMaxPool{2,3}d module and functional
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29933

Test Plan: Imported from OSS

Differential Revision: D18548174

Pulled By: yf225

fbshipit-source-id: 070776db6e8b7ad94d9b7cbd82b3d6966f061a46
2019-11-19 17:24:07 -08:00
Divyansh Singhvi
ec52d911bd InstanceNorm{1,2,3}d (#28790)
Summary:
Hi yf225,

I have a few doubts related to the implementation:
1) What tests do I have to write?
2) What does _load_state_from_dict do?
3) Do I need to override the reset() function? I cannot see its utility.
4) InstanceNormOptions could be replaced with BatchNormOptions, but I find that
`track_running_status` is not defined; instead, `stateful` is defined.

InstanceNorm{1,2,3}d https://github.com/pytorch/pytorch/issues/25883
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28790

Differential Revision: D18588666

Pulled By: yf225

fbshipit-source-id: bb9b81f01f62c3fc8765fa0ba0716768087ee155
2019-11-19 16:57:01 -08:00
Will Feng
05a7aaa742 Pass Tensor instead of Tensor& to torch::nn functionals that can change input in place (#30112)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30112

Currently, we have torch::nn functionals that take `input` as `Tensor&` in order to be able to change `input`'s value in place. We likely shouldn't do this, because it prevents the following use case:
```cpp
F::elu(torch::tensor(1), F::ELUFuncOptions().inplace(true))
```
The solution is to change the type of `input` to `Tensor`, so that we can pass an rvalue into the functional.

Test Plan: Imported from OSS

Differential Revision: D18601580

Pulled By: yf225

fbshipit-source-id: 639a86eb62f6c986b0f20bf7e201983e83126e73
2019-11-19 16:11:39 -08:00
nuka137
a75b669b0f C++ API: torch::nn::ConvTranspose{1,2,3}d (#29721)
Summary:
Add torch::nn::ConvTranspose{1,2,3}d module and functional support for the C++ API.

Related Issue: https://github.com/pytorch/pytorch/issues/25883
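
A usage sketch of the new module:

```cpp
#include <torch/torch.h>

int main() {
  torch::nn::ConvTranspose2d deconv(
      torch::nn::ConvTranspose2dOptions(/*in_channels=*/16, /*out_channels=*/33,
                                        /*kernel_size=*/3)
          .stride(2));
  auto y = deconv(torch::randn({1, 16, 10, 10}));  // upsamples the spatial dims
}
```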

Reviewer: yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29721

Differential Revision: D18588943

Pulled By: yf225

fbshipit-source-id: d4dbb091389367e70459399d5cda3778325c2120
2019-11-19 16:04:12 -08:00
Suyash458
e88d096321 C++/Python API Parity: add AlphaDropout (#28424)
Summary:
- add `AlphaDropoutImpl` to `modules/dropout.h` and `modules/dropout.cpp`
- add `functional/dropout.h` containing the `alpha_dropout` function
- include `functional/dropout.h` in `nn/functional.h`
- add functional and module tests
- related issue https://github.com/pytorch/pytorch/issues/25883
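
A usage sketch of the functional form (the option names are assumed to follow the usual `*FuncOptions` pattern):

```cpp
#include <torch/torch.h>

int main() {
  namespace F = torch::nn::functional;
  auto x = torch::randn({4, 8});
  // Alpha dropout preserves the self-normalizing property used with SELU.
  auto y = F::alpha_dropout(x, F::AlphaDropoutFuncOptions().p(0.5).training(true));
}
```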
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28424

Differential Revision: D18589162

Pulled By: yf225

fbshipit-source-id: c85734e02431a6c052515e26b11ca30ad7303644
2019-11-19 10:05:51 -08:00
Will Feng
3bd0f476d4 Revert D18233037: C++ API parity: isfinite
Test Plan: revert-hammer

Differential Revision:
D18233037

Original commit changeset: c76b9467bbc1

fbshipit-source-id: 97d2cfa9de767a8c3a0ca919f9d768e959fa484e
2019-11-18 20:26:19 -08:00
Pavel Belevich
8df5e10ee9 C++ API parity: isfinite
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28918

Test Plan: Imported from OSS

Differential Revision: D18233037

Pulled By: pbelevich

fbshipit-source-id: c76b9467bbc1fbb2c9bf49855895c98438b36c12
2019-11-18 19:06:57 -08:00
Will Feng
689b4bea7b torch::nn::GLU and F::glu (#29922)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29922

* #29920 [C++ API] torch::nn::GroupNorm and F::group_norm

Test Plan: Imported from OSS

Differential Revision: D18558818

Pulled By: yf225

fbshipit-source-id: ff80d634309fcb55f53db8dcf86eb9cf8161b37e
2019-11-16 21:03:38 -08:00
Will Feng
d5bf51b684 torch::nn::GroupNorm and F::group_norm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29920

Test Plan: Imported from OSS

Differential Revision: D18539314

Pulled By: yf225

fbshipit-source-id: dabbbaac31796fe7bfde02487737971bde699c1c
2019-11-16 19:22:11 -08:00
PyExtreme
e1d13f4f8b C++ API parity: NLLLoss & CrossEntropyLoss (#29812)
Summary:
Hi yf225, I have added **NLLLoss and CrossEntropyLoss**.

Also, while using log_softmax in cross_entropy_loss, I am getting an error:

```
../caffe2/../torch/csrc/api/include/torch/nn/functional/loss.h:537:63: error: no matching function for call to ‘log_softmax(const at::Tensor&)’
     const Tensor& log_softmax_input = torch::log_softmax(input);

aten/src/ATen/Functions.h:5551:22: note: candidate: at::Tensor at::log_softmax(const at::Tensor&, int64_t, c10::optional<c10::ScalarType>)
 static inline Tensor log_softmax(const Tensor & self, int64_t dim, c10::optional<ScalarType> dtype) {
                      ^~~~~~~~~~~
aten/src/ATen/Functions.h:5551:22: note:   candidate expects 3 arguments, 1 provided
```

I think the other two parameters should be optional, as in the Python frontend (shown in the documentation at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.log_softmax). Other than that, there were no errors in the build and the tests passed.
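
For reference, the ATen signature requires `dim` explicitly; a minimal sketch of the working call:

```cpp
#include <torch/torch.h>

int main() {
  auto input = torch::randn({2, 5});
  // Unlike the Python frontend, `dim` has no default in the ATen C++ signature.
  auto out = torch::log_softmax(input, /*dim=*/1);
}
```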
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29812

Differential Revision: D18548249

Pulled By: yf225

fbshipit-source-id: 2ab350abd2a6f498d4dba2345f51ad87471f3038
2019-11-16 10:49:09 -08:00
Pavel Belevich
27afac2134 C++ API parity: Dropout, Dropout2d, Dropout3d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29761

Test Plan: Imported from OSS

Differential Revision: D18530820

Pulled By: pbelevich

fbshipit-source-id: 9d351561692f7de099d7c6aaf2ecb930b5c867e9
2019-11-15 20:32:06 -08:00
Edward Yang
65bb34d885 Remove TensorImpl::is_variable, deprecate Tensor::is_variable (#29653)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29653

I didn't remove is_variable from Tensor for BC reasons, but I did
remove as many uses as I could from the codebase.
at::impl::variable_excluded_from_dispatch got moved to TensorBody.h
so that it's more widely accessible.

This diff is NOT semantics preserving.  Here are the major differences:

- In a number of native operator implementations, we tested that arguments
  are not variable.  I replaced these with asserts that variable is
  excluded from dispatch.  I actually don't think these asserts are really
  necessary now (they should certainly be true, but it's hard to get
  it wrong), but I've kept them for old time's sake.  At least, they'll detect
  if you call these functions before you've processed variable (indicating
  a bug in your kernel.)

- There are a number of places where we do a per-tensor test for being a
  variable, for better error reporting when someone commits Tensor/Variable
  confusion.  Although these tests are substantively the same as the
  tests above, in these cases I decided to *delete* the test entirely.
  The reasoning is that in these cases, we didn't really care about
  dispatch (also, see above; I'm not too sure we really need the dispatch
  asserts), we cared about Tensor/Variable confusion.  Since Tensor/Variable
  confusion is impossible now, we don't need the tests.  One of the key
  factors which pushed me one way or another was whether or not a function
  was doing per-tensor validation; if I kept the assert in such functions,
  I'd repeatedly access the TLS.  Even if we want to bring back the asserts,
  they would have to go somewhere else.

  Another similar idiom is the number of places we do !x.defined() ||
  x.is_variable(); I treated this equivalently.

- nuclear_norm's computation of compute_uv is a bit weird, but I think
  it's OK to just delete the is_variable case (I *suspect* that it is
  always the case that self.is_variable(), but it doesn't really matter.)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18496168

Pulled By: ezyang

fbshipit-source-id: 5a1ded931e0c10a6b758ba64a8380d34110e0c3e
2019-11-14 11:41:02 -08:00
Will Feng
a68c52494c Use F::*FuncOptions for embedding/embeddingbag functionals (#29673)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29673

Following https://github.com/pytorch/pytorch/pull/29364 and https://github.com/pytorch/pytorch/pull/29404, this PR makes `F::EmbeddingFuncOptions` and `F::EmbeddingBagFuncOptions` separate classes from `torch::nn::EmbeddingOptions` and `torch::nn::EmbeddingBagOptions`, so that it's easier to enforce that arguments such as `num_embeddings` and `embedding_dim` are required for `torch::nn::EmbeddingOptions` and `torch::nn::EmbeddingBagOptions`.
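
A sketch of the resulting functional call (the option value here is illustrative):

```cpp
#include <torch/torch.h>

int main() {
  namespace F = torch::nn::functional;
  auto weight = torch::randn({10, 3});
  auto input = torch::tensor({1, 2, 4, 4, 3, 9}).reshape({2, 3});
  // F::EmbeddingFuncOptions is now a class separate from torch::nn::EmbeddingOptions.
  auto out = F::embedding(input, weight, F::EmbeddingFuncOptions().padding_idx(0));
}
```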

Test Plan: Imported from OSS

Differential Revision: D18462540

Pulled By: yf225

fbshipit-source-id: f2abf431e48675b0a9d7f6f398cdb90ff9037c35
2019-11-13 18:47:22 -08:00
Will Feng
65f691f2c2 Add more tests for torch::arange
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29689

Test Plan: Imported from OSS

Differential Revision: D18465818

Pulled By: yf225

fbshipit-source-id: 0cf0aaa7febcf4318abdaae7d17a43ab3acde017
2019-11-13 15:17:16 -08:00