Commit Graph

431 Commits

Author SHA1 Message Date
anjali411
781f590f33 [C++ API Parity] Add xor_convergence test for lbfgs (#35001)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35001

Differential Revision: D20548983

Pulled By: anjali411

fbshipit-source-id: 1f858635d0680c0109d1ef348b7df4d3844fe0a6
2020-03-20 06:57:24 -07:00
Edward Yang
7c06b86e42 Revert D20518647: [pytorch][PR] [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer
Test Plan: revert-hammer

Differential Revision: D20518647

Original commit changeset: 4760d1d29df1

fbshipit-source-id: b84f1a06c2de27e147716279223a6844ef89f760
2020-03-19 07:53:43 -07:00
Natalia Gimelshein
be82e554fe Revert D20524479: [pytorch][PR] [C++ API Parity] Add xor_convergence test for lbfgs
Test Plan: revert-hammer

Differential Revision: D20524479

Original commit changeset: 3413779676ab

fbshipit-source-id: ef8007ed6c184bc8b8751eb713aac2a891260048
2020-03-18 21:56:17 -07:00
anjali411
b8e043abca [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer (#34957)
Summary:
1. Removed LossClosureOptimizer, and merged Optimizer into OptimizerBase (and renamed the merged class to Optimizer)
2. Merged the LBFGS-specific serialize test function and the generic test_serialize_optimizer function.
3. Added a BC-compatibility serialization test for LBFGS
4. Removed mentions of parameters_ in optimizer.cpp, de-virtualized all functions
5. Made defaults_ an optional argument in all optimizers except SGD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34957

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D20518647

Pulled By: anjali411

fbshipit-source-id: 4760d1d29df1784e2d01e2a476d2a08e9df4ea1c
2020-03-18 17:28:57 -07:00
anjali411
4521477f83 [C++ API Parity] Add xor_convergence test for lbfgs (#35001)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35001

Differential Revision: D20524479

Pulled By: anjali411

fbshipit-source-id: 3413779676ab95c1ee82298f95d3441a89873107
2020-03-18 17:06:53 -07:00
anjali411
d7e4a379a0 [C++ API Parity] LBFGS optimizer step() update and added closure to the Optimizer step() function (#34564)
Summary:
Follow-ups after this PR:

* Remove `LossClosureOptimizer`, and merge `Optimizer` into `OptimizerBase` (and rename the merged class to Optimizer)
* Merge the LBFGS-specific serialize test function and the generic `test_serialize_optimizer` function, possibly by passing a bool `has_only_global_state` flag into the `test_serialize_optimizer` function to denote whether `size()` should be equal to 1 or 2?
    * https://github.com/pytorch/pytorch/pull/34564#discussion_r393780303
* It seems that we don't have the equivalent `XORConvergence_LBFGS` test like the other optimizers, and it would be good to add one
* Remove mentions of `parameters_` in optimizer.cpp, de-virtualize all functions, and remove the `OptimizerBase(std::vector<Tensor> parameters)` constructor from `OptimizerBase`
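
As an illustrative sketch (not part of this diff), the closure-based `step()` introduced here can be driven like so; the model and loss below are made up for the example:

```
#include <torch/torch.h>

int main() {
  torch::nn::Linear model(2, 1);
  auto input = torch::randn({4, 2});
  auto target = torch::randn({4, 1});

  torch::optim::LBFGS optimizer(
      model->parameters(), torch::optim::LBFGSOptions(/*lr=*/0.1));

  // LBFGS may re-evaluate the closure several times within a single step.
  auto closure = [&]() -> torch::Tensor {
    optimizer.zero_grad();
    auto loss = torch::mse_loss(model(input), target);
    loss.backward();
    return loss;
  };

  for (int i = 0; i < 10; ++i) {
    optimizer.step(closure);
  }
}
```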
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34564

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D20495701

Pulled By: anjali411

fbshipit-source-id: 6d35286d2decb6f7dff93d9d3e57515770666622
2020-03-17 22:27:24 -07:00
anjali411
762be86e63 [C++ API Parity] [Optimizers] added closure to optimizers (#34790)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34790

Differential Revision: D20468361

Pulled By: anjali411

fbshipit-source-id: 1c6115d735b211dc2bedf002d58931cb32cf657a
2020-03-16 07:51:44 -07:00
Will Feng
bdd7dbfd4b [C++ API] RNN / GRU / LSTM layer refactoring (#34322)
Summary:
This PR refactors RNN / GRU / LSTM layers in C++ API to exactly match the implementation in Python API.

**BC-breaking changes:**
- Instead of returning `RNNOutput`, RNN / GRU forward method now returns `std::tuple<Tensor, Tensor>`, and LSTM forward method now returns `std::tuple<Tensor, std::tuple<Tensor, Tensor>>`, matching Python API.
- RNN / LSTM / GRU forward method now accepts the same inputs (input tensor and optionally hidden state), matching Python API.
- RNN / LSTM / GRU layers now have a `forward_with_packed_input` method which accepts `PackedSequence` as input and optionally a hidden state, matching the `forward(PackedSequence, ...)` variant in the Python API.
- RNN / LSTM / GRU layers no longer have these fields: `w_ih` / `w_hh` / `b_ih` / `b_hh`. Instead, to access the weights and biases of the gates, users should do e.g. `rnn->named_parameters()["weight_ih_l0"]`, which mirrors the Python API `rnn.weight_ih_l0`.
- In `RNNOptions`
    - `tanh()` / `relu()` / `activation` are removed. Instead, `nonlinearity` is added which takes either `torch::kTanh` or `torch::kReLU`
    - `layers` -> `num_layers`
    - `with_bias` -> `bias`
- In `LSTMOptions`
    - `layers` -> `num_layers`
    - `with_bias` -> `bias`
- In `GRUOptions`
    - `layers` -> `num_layers`
    - `with_bias` -> `bias`

The majority of the changes in this PR focus on refactoring the implementations in `torch/csrc/api/src/nn/modules/rnn.cpp` to match the Python API. RNN tests are then changed to reflect the revised API design.
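
A rough usage sketch of the refactored API (illustrative only; shapes and option values are assumptions):

```
#include <torch/torch.h>
#include <tuple>

int main() {
  torch::nn::LSTM lstm(
      torch::nn::LSTMOptions(/*input_size=*/10, /*hidden_size=*/20)
          .num_layers(2)
          .bias(true));

  auto input = torch::randn({5, 3, 10});  // (seq_len, batch, input_size)

  // forward() now returns std::tuple<Tensor, std::tuple<Tensor, Tensor>>.
  torch::Tensor output;
  std::tuple<torch::Tensor, torch::Tensor> state;
  std::tie(output, state) = lstm->forward(input);

  // Gate weights are reached via named_parameters(), mirroring Python's rnn.weight_ih_l0.
  auto weight_ih_l0 = lstm->named_parameters()["weight_ih_l0"];
}
```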
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34322

Differential Revision: D20458302

Pulled By: yf225

fbshipit-source-id: ffff2ae1ddb1c742c966956f6ad4d7fba03dc54d
2020-03-15 17:48:29 -07:00
Will Feng
6c555e1508 Revert D20311699: [pytorch][PR] [C++ API] RNN / GRU / LSTM layer refactoring
Test Plan: revert-hammer

Differential Revision: D20311699

Original commit changeset: e2b60fc7bac6

fbshipit-source-id: 72f4a762189490998d6b716857eeac053a11742d
2020-03-14 16:18:48 -07:00
Will Feng
e23a9dc140 [C++ API] RNN / GRU / LSTM layer refactoring (#34322)
Summary:
This PR refactors RNN / GRU / LSTM layers in C++ API to exactly match the implementation in Python API.

**BC-breaking changes:**
- Instead of returning `RNNOutput`, RNN / GRU forward method now returns `std::tuple<Tensor, Tensor>`, and LSTM forward method now returns `std::tuple<Tensor, std::tuple<Tensor, Tensor>>`, matching Python API.
- RNN / LSTM / GRU forward method now accepts the same inputs (input tensor and optionally hidden state), matching Python API.
- RNN / LSTM / GRU now have a `forward_with_packed_input` method which accepts `PackedSequence` as input and optionally a hidden state, matching the `forward(PackedSequence, ...)` variant in the Python API.
- In `RNNOptions`
    - `tanh()` / `relu()` / `activation` are removed. Instead, `nonlinearity` is added which takes either `torch::kTanh` or `torch::kReLU`
    - `layers` -> `num_layers`
    - `with_bias` -> `bias`
- In `LSTMOptions`
    - `layers` -> `num_layers`
    - `with_bias` -> `bias`
- In `GRUOptions`
    - `layers` -> `num_layers`
    - `with_bias` -> `bias`

The majority of the changes in this PR focus on refactoring the implementations in `torch/csrc/api/src/nn/modules/rnn.cpp` to match the Python API. RNN tests are then changed to reflect the revised API design.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34322

Differential Revision: D20311699

Pulled By: yf225

fbshipit-source-id: e2b60fc7bac64367a8434647d74c08568a7b28f7
2020-03-14 12:09:04 -07:00
Will Feng
d041d0784e [C++ API] RNNCell / LSTMCell / GRUCell layers (#34400)
Summary:
This PR adds `RNNCell` / `LSTMCell` / `GRUCell` layers to the C++ frontend, with implementations exactly matching the Python API equivalent.
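
For illustration (not from the PR itself), a cell is stepped manually over time, one batch of inputs per call; the sizes here are arbitrary:

```
#include <torch/torch.h>

int main() {
  torch::nn::GRUCell cell(torch::nn::GRUCellOptions(/*input_size=*/10, /*hidden_size=*/20));

  auto input = torch::randn({3, 10});  // (batch, input_size)
  auto hx = torch::randn({3, 20});     // (batch, hidden_size)

  for (int step = 0; step < 4; ++step) {
    hx = cell->forward(input, hx);  // returns the next hidden state
  }
}
```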
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34400

Differential Revision: D20316859

Pulled By: yf225

fbshipit-source-id: bb7cee092622334043c0d0fd0fcb4e75e707699c
2020-03-13 21:52:24 -07:00
Will Feng
a54416d208 [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508)
Summary:
This PR is BC-breaking in the following way:
- The deprecated `torch::nn::BatchNorm` is removed in favor of `torch::nn::BatchNorm{1,2,3}d`
- The deprecated `torch::nn::FeatureDropout` is removed in favor of `torch::nn::Dropout{2,3}d`
- The deprecated `torch::nn::modules_ordered_dict` is removed. Users should do `Sequential sequential({{"m1", MyModule(1)}, {"m2", MyModule(2)}})` instead.
- The deprecated `torch::nn::init::Nonlinearity` is removed, in favor of the following enums:
    - `torch::kLinear`
    - `torch::kConv1D`
    - `torch::kConv2D`
    - `torch::kConv3D`
    - `torch::kConvTranspose1D`
    - `torch::kConvTranspose2D`
    - `torch::kConvTranspose3D`
    - `torch::kSigmoid`
    - `torch::kTanh`
    - `torch::kReLU`
    - `torch::kLeakyReLU`
- The deprecated `torch::nn::init::FanMode` is removed, in favor of the following enums:
    - `torch::kFanIn`
    - `torch::kFanOut`
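
To illustrate the replacements above (a sketch, with assumed tensor shapes and feature counts):

```
#include <torch/torch.h>

int main() {
  // Enum values replace the removed Nonlinearity / FanMode classes:
  auto w = torch::empty({64, 32});
  torch::nn::init::kaiming_uniform_(w, /*a=*/0, torch::kFanIn, torch::kReLU);

  // Dimension-specific module replaces the removed torch::nn::BatchNorm:
  torch::nn::BatchNorm2d bn(torch::nn::BatchNorm2dOptions(16));
}
```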
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34508

Differential Revision: D20351601

Pulled By: yf225

fbshipit-source-id: cca0cd112f29a31bb023e348ca8f82780e42bea3
2020-03-12 10:09:58 -07:00
Mansoor
e95657b87e [C++ API] AdaptiveLogSoftmaxWithLoss (#29076)
Summary:
Implemented `AdaptiveLogSoftmaxWithLoss` and tests for the module. Reference: https://github.com/pytorch/pytorch/issues/25883
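
A minimal usage sketch, assuming the options and `ASMoutput` return type mirror the Python module; sizes and cutoffs are made up:

```
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::nn::AdaptiveLogSoftmaxWithLoss asfm(
      torch::nn::AdaptiveLogSoftmaxWithLossOptions(
          /*in_features=*/8, /*n_classes=*/10, /*cutoffs=*/{4, 8}));

  auto input = torch::randn({4, 8});
  auto target = torch::randint(0, 10, {4});

  // forward() returns an ASMoutput carrying per-sample outputs and the mean loss.
  auto result = asfm->forward(input, target);
  std::cout << result.loss << "\n";
}
```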
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29076

Differential Revision: D20404588

Pulled By: yf225

fbshipit-source-id: edbadf432b8173cbcc6caf83c9c03dd92dc31a37
2020-03-12 09:53:58 -07:00
Lingyi Liu
09296c34a4 Add the build for runtime dispatch for AVX, AVX2 instruction set (#26125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26125

We already had optimized implementations using AVX2 to improve quantized kernel performance. In this diff, we enable runtime dispatch for them.

Test Plan:
Sandcastle build and test

Also test with a python binary calling into vectorized op.

```
torch.__config__.show()
PyTorch built with:
  - GCC 4.2
  - clang 8.0.20181009
  - Intel(R) Math Kernel Library Version 2017.0.3 Product Build 20170413 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.18.1 (Git Hash N/A)
  - OpenMP 1
  - CPU capability usage: AVX2
  - Build settings:
```

Reviewed By: jamesr66a

Differential Revision: D17337251

fbshipit-source-id: 8e22d10011a12a4eaf54cea3485353eb1811d828
2020-03-10 15:32:57 -07:00
anjali411
2d24005d18 [C++ API Parity] rmsprop optimizer update (#33450)
Summary:
**This PR is BC-breaking in the following way:**

In `RMSpropOptions`:
1. `learning_rate` is renamed to `lr`.

**Test plan before 1.5 release:**

Test that in 1.5 we can load a C++ RMSprop optimizer that was serialized in 1.4, and their states are the same.
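
A sketch of constructing the optimizer under the renamed field (the model and hyperparameter values are arbitrary):

```
#include <torch/torch.h>

int main() {
  torch::nn::Linear model(4, 2);

  // `lr` replaces the old `learning_rate` name in RMSpropOptions.
  torch::optim::RMSprop optimizer(
      model->parameters(),
      torch::optim::RMSpropOptions(/*lr=*/0.01).alpha(0.99).eps(1e-8));
}
```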
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33450

Differential Revision: D20366623

Pulled By: anjali411

fbshipit-source-id: 83250be9b583a766927e0e22a4de8b0765379451
2020-03-10 13:30:56 -07:00
Will Feng
baeb359e7a Remove using namespace torch::autograd from header files (#34423)
Summary:
This PR prevents leaking symbols from `torch::autograd` namespace to the root namespace.
Fixes https://github.com/pytorch/pytorch/issues/34371.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34423

Differential Revision: D20338404

Pulled By: yf225

fbshipit-source-id: e7ff3348193667a0cee5d38f9a003ae36cc704ca
2020-03-09 10:31:21 -07:00
Will Feng
739d4609c3 [C++ API] Fix ModuleList compile error: error: 'begin' was not declared in this scope (#34463)
Summary:
One example in the current docs for `torch::nn::ModuleList` doesn't compile, and this PR fixes it.
Fixes https://github.com/pytorch/pytorch/issues/32414.
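
The pattern the docs example exercises, sketched for reference (module choices are arbitrary):

```
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::nn::ModuleList mlist(torch::nn::Linear(3, 4), torch::nn::ReLU());

  // Range-based iteration yields std::shared_ptr<torch::nn::Module>.
  for (const auto& module : *mlist) {
    std::cout << module->name() << "\n";
  }
}
```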
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34463

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D20331120

Pulled By: yf225

fbshipit-source-id: 50bb078fe1a900c9114d5434e92dc40ee13b52bf
2020-03-09 08:15:50 -07:00
Will Feng
415595ace4 [C++ API] Remove init-list form of at::indexing::Slice (#34255)
Summary:
The init-list form of `at::indexing::Slice` (i.e. `tensor.index({{1, None, 2}, ...})` instead of `tensor.index({Slice(1, None, 2), ...})`) in the C++ API can be easily confused with the list-form indexing in the Python API (e.g. `tensor[[1, 3, 2], ...]`), which is not good from a readability perspective. This PR removes the init-list form of `at::indexing::Slice` to make the API less confusing.
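
For reference, a sketch of the surviving explicit form:

```
#include <torch/torch.h>

int main() {
  using namespace torch::indexing;

  auto t = torch::arange(10);

  // Explicit Slice stays; the removed form would have been t.index({{1, None, 2}}).
  auto s = t.index({Slice(1, None, 2)});  // elements 1, 3, 5, 7, 9
}
```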
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34255

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D20290166

Pulled By: yf225

fbshipit-source-id: abbcbeca0b179219e5e1f196a33ef8aec87ebb76
2020-03-06 05:51:53 -08:00
meganset
b8fd88319a C++ make torch::nn::Sequential push_back(AnyModule) methods public (#34208)
Summary:
Issue https://github.com/pytorch/pytorch/issues/33192
Moves the `Sequential::push_back` methods taking `AnyModule` from private to public, allowing an existing `AnyModule` to be added via something like:

```
torch::nn::Sequential q;
auto a = torch::nn::AnyModule(torch::nn::Linear(1, 2));
q->push_back(a);
q->push_back("fc", a);
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34208

Differential Revision: D20300278

Pulled By: yf225

fbshipit-source-id: 4525319bb7fb6667e43a006c9f446a2193781005
2020-03-06 05:47:14 -08:00
anjali411
76035f050b [C++ API Parity] Adam: updated step and class design (#33730)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33730

Differential Revision: D20292073

Pulled By: anjali411

fbshipit-source-id: a7b4a70f29027ab355aebb91873ea55d5cb51783
2020-03-05 19:15:24 -08:00
Wanchao Liang
f909b5535e [autograd] fix allow_unused checking for C++ API (#34035)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34035

Fixes a bug in the condition check from https://github.com/pytorch/pytorch/pull/24342. We realized we had no tests in either Python or C++ to catch this, so tests were added for both.

Thanks hczhu for catching it!
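
A sketch of the condition being tested (shapes are arbitrary; the unused input must yield an undefined grad rather than an error when allow_unused=true):

```
#include <torch/torch.h>

int main() {
  auto x = torch::randn({2, 2}, torch::requires_grad());
  auto y = torch::randn({2, 2}, torch::requires_grad());
  auto out = (x * x).sum();

  // y does not contribute to out, so allow_unused=true is required here.
  auto grads = torch::autograd::grad(
      {out}, {x, y},
      /*grad_outputs=*/{},
      /*retain_graph=*/c10::nullopt,
      /*create_graph=*/false,
      /*allow_unused=*/true);

  // grads[1] is an undefined Tensor for the unused input y.
}
```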

Test Plan: Imported from OSS

Differential Revision: D20198837

Pulled By: wanchaol

fbshipit-source-id: 33846a14c0a8e7aac2e8328189d10c38a0d7e6ee
2020-03-02 17:57:15 -08:00
Will Feng
1494005cfd C++ tensor indexing: more indexing tests (#30427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30427

Test Plan: Imported from OSS

Differential Revision: D18695899

Pulled By: yf225

fbshipit-source-id: 74455fe52ef922556fabe65aefca9ec93fe2346d
2020-02-28 22:07:41 -08:00
Will Feng
5c33d98b0d Add assert_tensor_equal and assert_tensor_not_equal to test/cpp/api/support.h (#30426)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30426

This PR adds `assert_tensor_equal` and `assert_tensor_not_equal` to `test/cpp/api/support.h`, as better functions for testing whether two tensors are equal / not equal.

Test Plan: Imported from OSS

Differential Revision: D18695900

Pulled By: yf225

fbshipit-source-id: c19b9bc4c4e84d9f444015023649d27618fcbdf5
2020-02-26 13:25:25 -08:00
Will Feng
0dded4026e [C++ API] Add PackedSequence / pack_padded_sequence / pad_packed_sequence / pack_sequence (#33652)
Summary:
Most of the function implementation and test code are translated from the Python version.
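
A round-trip sketch (the data and layout are assumptions for the example):

```
#include <torch/torch.h>

int main() {
  namespace rnn_utils = torch::nn::utils::rnn;

  // Two padded sequences of lengths 3 and 2, in batch_first layout.
  auto padded = torch::tensor({{1.f, 2.f, 3.f}, {4.f, 5.f, 0.f}});
  auto lengths = torch::tensor({3, 2});

  auto packed = rnn_utils::pack_padded_sequence(padded, lengths, /*batch_first=*/true);
  auto unpacked = rnn_utils::pad_packed_sequence(packed, /*batch_first=*/true);
  // std::get<0>(unpacked) holds the padded data, std::get<1>(unpacked) the lengths.
}
```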
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33652

Differential Revision: D20052211

Pulled By: yf225

fbshipit-source-id: ce6767db54364f91ef4f06674239a12278c2752a
2020-02-25 12:53:41 -08:00
Will Feng
36919278cc C++ tensor multi-dim indexing: add index() and index_put_() overloads, simple indexing tests, merge with Python indexing path (#32841)
Summary:
This PR adds the following items:
- **1st item**: `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads for `Tensor::index` and `Tensor::index_put_`, to be used specifically for multi-dim indexing purpose.

Design rationale:
* C++ `Tensor::index` and `Tensor::index_put_` are both existing tensor APIs, and they currently (before this PR) only accept a list of tensors (i.e. `ArrayRef<Tensor>`) as indices. If we change their signatures to also accept non-tensors as indices (i.e. `ArrayRef<TensorIndex>`, and `TensorIndex` is convertible from `Tensor` / `Slice` / `None` / `Ellipsis`), it would slow down the original code path (since now it has to go through more steps), which is undesirable.

    To get around this problem, the proposed solution is to keep the original `ArrayRef<Tensor>` overload, and add `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads to `Tensor::index` and `Tensor::index_put_`. This way, the original code path won’t be affected, and the tensor multi-dim indexing API is only used when the user explicitly pass an `ArrayRef<TensorIndex>` or a braced-init-list of `TensorIndex`-convertible types to `Tensor::index` and `Tensor::index_put_` .

    Note that the above proposed solution would still affect perf for the user’s original `Tensor::index` or `Tensor::index_put_` call sites that use a braced-init-list of tensors as input, e.g. `tensor.index({...})` or `tensor.index_put_({...}, value)`, since now such function calls would take the multi-dim indexing path instead of the original advanced indexing path. However, there are only two instances of this in our codebase (one in ATen cpp test, one in a C++ API nn init function), and they can be easily changed to explicitly use `ArrayRef<Tensor>` as input (I changed them in this PR). For external user’s code, since this is part of the C++ frontend which is still considered experimental, we will only talk about this change in the release note, and ask users to switch to using `ArrayRef<Tensor>` explicitly if they want to keep using the original advanced indexing code path.

- **2nd item**: Mechanisms for parsing `ArrayRef<TensorIndex>` indices and performing indexing operations (mirroring the functions in `torch/csrc/autograd/python_variable_indexing.cpp`).
- **3rd item**: Simple tests to demonstrate that the `Tensor::index()` and `Tensor::index_put_()` APIs work. I will add more tests after the first few PRs are reviewed.
- **4th item**: Merge Python/C++ indexing code paths, for code simplicity. I tested locally and found that there is no perf regression resulting from the merge. I will get more concrete numbers for common use cases when we settle on the overall design.

This PR supersedes https://github.com/pytorch/pytorch/pull/30425.
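
A sketch contrasting the two paths described above (values are illustrative):

```
#include <torch/torch.h>

int main() {
  using namespace torch::indexing;

  auto t = torch::arange(16).reshape({4, 4});

  // Multi-dim indexing path: braced-init-list of TensorIndex-convertible values.
  auto row = t.index({0, Slice()});  // t[0, :]
  t.index_put_({Ellipsis, 0}, 100);  // t[..., 0] = 100

  // Original advanced-indexing path, kept intact: explicit list of tensors.
  std::vector<torch::Tensor> indices = {torch::tensor({0, 2})};
  auto picked = t.index(indices);
}
```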
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32841

Differential Revision: D19919692

Pulled By: yf225

fbshipit-source-id: 7467e64f97fc0e407624809dd183c95ea16b1482
2020-02-24 22:04:00 -08:00
Suyash458
47e90d774e C++/Python API Parity: add pad_sequence (#32387)
Summary:
- add `pad_sequence` and tests
- related issue https://github.com/pytorch/pytorch/issues/25883
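
A minimal sketch of the new function (sequence contents are arbitrary):

```
#include <torch/torch.h>

int main() {
  namespace rnn_utils = torch::nn::utils::rnn;

  std::vector<torch::Tensor> sequences = {
      torch::ones({3}), torch::ones({2}), torch::ones({1})};

  // Pads to the longest sequence; result shape is (3, 3) with batch_first=true.
  auto padded = rnn_utils::pad_sequence(
      sequences, /*batch_first=*/true, /*padding_value=*/0);
}
```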
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32387

Differential Revision: D20025421

Pulled By: yf225

fbshipit-source-id: caa9ae2114bece8db387a3a1610f24a3e06b1324
2020-02-21 13:16:09 -08:00
nicolov
e77abb9a5b Normalize reward-to-go in C++ actor-critic (#33550)
Summary:
Compared to the [Python implementation](https://github.com/pytorch/examples/blob/master/reinforcement_learning/actor_critic.py), it seems the tensor of normalized reward-to-go is computed but never used. Even though it's just an integration test, this PR switches to the normalized version for better convergence.
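
The normalization in question, sketched (the hypothetical rewards_to_go buffer stands in for the values the example accumulates per episode):

```
#include <torch/torch.h>
#include <vector>

int main() {
  std::vector<float> rewards_to_go = {1.0f, 0.9f, 0.5f, 0.1f};

  auto returns = torch::tensor(rewards_to_go);
  // Previously computed but unused; now the normalized tensor feeds the loss.
  returns = (returns - returns.mean()) / (returns.std() + 1e-7);
}
```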
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33550

Differential Revision: D20024393

Pulled By: yf225

fbshipit-source-id: ebcf0fee14ff39f65f6744278fb0cbf1fc92b919
2020-02-21 09:19:39 -08:00
Will Feng
a203dc2e6d [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33027

This PR allows default arguments in module's forward method to be skipped when module is used in `torch::nn::Sequential`, by introducing the `FORWARD_HAS_DEFAULT_ARGS` macro and requiring that all modules that have default arguments in its forward method must have a corresponding `FORWARD_HAS_DEFAULT_ARGS` macro call.

Fixes issue mentioned in https://github.com/pytorch/pytorch/issues/30931#issuecomment-564144468.
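
A sketch of the macro as used in the PR's tests (the module here is hypothetical):

```
#include <torch/torch.h>

struct MImpl : torch::nn::Module {
  torch::Tensor forward(const torch::Tensor& x, int64_t scale = 1) {
    return x * scale;
  }

 protected:
  // Argument index 1 ("scale") may be omitted; its default is given as an AnyValue.
  FORWARD_HAS_DEFAULT_ARGS({1, torch::nn::AnyValue(static_cast<int64_t>(1))})
};
TORCH_MODULE(M);

int main() {
  torch::nn::Sequential seq(M{});
  auto out = seq->forward(torch::ones({2}));  // "scale" falls back to its default
}
```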

Test Plan: Imported from OSS

Differential Revision: D19777815

Pulled By: yf225

fbshipit-source-id: 73282fcf63377530063e0092a9d84b6c139d2e32
2020-02-17 20:38:02 -08:00
Will Feng
4724964810 [C++ API] Expose AnyValue and AnyModuleHolder classes (#33026)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33026

This PR contains necessary changes to prepare for https://github.com/pytorch/pytorch/pull/33027. It exposes the following classes to public:
1. `torch::nn::AnyValue`, because if the user has optional arguments in their module's forward method, they must also use the `FORWARD_HAS_DEFAULT_ARGS` macro and pass in the default values for those optional arguments wrapped by `torch::nn::AnyValue`.
2. `torch::nn::AnyModuleHolder`, because `torch::nn::Module` needs to declare it as a friend class for it to be able to access `torch::nn::Module`'s protected methods such as `_forward_has_default_args` / `_forward_num_required_args` / `_forward_populate_default_args`.

Test Plan: Imported from OSS

Differential Revision: D19777814

Pulled By: yf225

fbshipit-source-id: 1c9d5aa24f0689154752c426a83ee98f64c9d02f
2020-02-17 20:35:22 -08:00
Will Feng
5d7f42847c Add at::Tensor::retain_grad API (#33349)
Summary:
This PR adds `at::Tensor::retain_grad`, and its implementation mirrors the Python `torch.Tensor.retain_grad` API:
c6271c63f2/torch/tensor.py (L292-L315)
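
A sketch of the new call on a non-leaf tensor:

```
#include <torch/torch.h>

int main() {
  auto x = torch::randn({2, 2}, torch::requires_grad());
  auto y = x * 2;   // non-leaf: its grad is normally not retained

  y.retain_grad();  // new C++ API mirroring torch.Tensor.retain_grad
  y.sum().backward();

  auto g = y.grad();  // populated thanks to retain_grad()
}
```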
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33349

Differential Revision: D19944524

Pulled By: yf225

fbshipit-source-id: e61d5d761996b6d1b860c04c4b4650c1a49a6a8c
2020-02-17 20:03:48 -08:00
anjali411
91744907d4 SGD: updated step and class design (#32592)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32592

Differential Revision: D19868154

Pulled By: anjali411

fbshipit-source-id: ce888efc68b1531d97e8b0abf2b146198e012d2f
2020-02-12 18:38:55 -08:00
albanD
3655975565 Add allow_rebase_history flag and fix codegen functions for multiple views (#32790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32790

Same as https://github.com/pytorch/pytorch/pull/31990 but without the first commit in the stack that is problematic for a lot of people.

Test Plan: Imported from OSS

Differential Revision: D19814116

Pulled By: albanD

fbshipit-source-id: d104911a5b098a5807b4bc08b69803ebd4f69fa6
2020-02-11 07:16:02 -08:00
albanD
e1c53a5c86 Fix version counter bump in cpp Function (#33068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33068

The version counter is already bumped when PyTorch's own functions are used, but not when the user unpacks the Tensor and modifies it by hand or with a third-party library.

Test Plan: Imported from OSS

Differential Revision: D19791564

Pulled By: albanD

fbshipit-source-id: a73c0f73d8fd0c0e5bf838f14bed54fa66937840
2020-02-10 07:22:29 -08:00
Pavel Belevich
d141465713 Fix torch::allclose to handle std::numeric_limits<T>::lowest() for integral types (#32978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32978

Fixes #32946

Test Plan: Imported from OSS

Differential Revision: D19726013

Pulled By: pbelevich

fbshipit-source-id: ada4aeabc8e39016d24f1a40f02fb7c56f069cd3
2020-02-04 19:06:52 -08:00
Will Feng
b564eaf7a8 Bug fixes: torch::tensor(floating-point values) -> default dtype, and torch::tensor(integer values) ->at::kLong (#32367)
Summary:
Some of the `torch::tensor` behavior is updated to better match the Python API. Fixes https://github.com/pytorch/pytorch/issues/32234.

This PR is BC-breaking in the following way:
- `torch::tensor({1.0f, 2.0f})`: float -> default dtype
- `torch::tensor(at::ArrayRef<int>({1, 2, 3}))`: int -> at::kLong
- `torch::tensor(std::vector<int>({1, 2, 3}))`: int -> at::kLong
- `torch::tensor(at::ArrayRef<float>({1.f, 2.f, 3.f}))`: float -> default dtype
- `torch::tensor(std::vector<float>({1.f, 2.f, 3.f}))`: float -> default dtype
- `torch::tensor(at::ArrayRef<double>({1., 2., 3.}))`: double -> default dtype
- `torch::tensor(std::vector<double>({1., 2., 3.}))`: double -> default dtype
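
A sketch of the updated behavior, assuming the default dtype is left at kFloat:

```
#include <torch/torch.h>

int main() {
  // Integer values now give at::kLong:
  auto a = torch::tensor(std::vector<int>({1, 2, 3}));
  TORCH_CHECK(a.scalar_type() == torch::kLong);

  // Floating-point values now follow the default dtype:
  auto b = torch::tensor(at::ArrayRef<double>({1., 2., 3.}));
  TORCH_CHECK(b.scalar_type() == torch::kFloat);
}
```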
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32367

Differential Revision: D19498484

Pulled By: yf225

fbshipit-source-id: 19c8dc2a56476266153cff4c404e7f84d309eb12
2020-02-01 15:00:07 -08:00
Alban Desmaison
db8ce7ea2d Back out "Make autogen functions correct for multiple outputs and views" (#32681)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32681

Original commit changeset: a2b41c2d231e

Test Plan: fb and oss tests

Reviewed By: hudeven

Differential Revision: D19591864

fbshipit-source-id: 7068b5563e37bc9a5d415fd535c73fd9d71fe131
2020-01-27 19:54:34 -08:00
Charles Hofer
5fd037ce44 Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547)
Summary:
This test case had been using the tensor

```
1  2  3  4
5  6  7  8
9  10 11 12
13 14 15 16
```

which is not invertible and causes the test case to fail even when MAGMA initializes just fine. This change uses an invertible matrix whose inverse has no elements close to zero, avoiding floating-point rounding errors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32547

Differential Revision: D19572316

Pulled By: ngimel

fbshipit-source-id: 1baf3f8601b2ba69fdd6678d7a3d86772d01edbe
2020-01-25 20:00:54 -08:00
albanD
3ab30753e9 Make autogen functions correct for multiple outputs and views (#31990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31990

This PR does three things:
- Add a new `allow_rebase_history` flag to the differentiable views. If set, trying to rebase their history will raise an error.
- Make sure that the codegen functions verify this flag before doing inplace operations so that they fail before doing the inplace modification.
- Make sure the codegen functions set this flag properly when we don't support rebasing the history of the output.

The codegen change can be found [here](4bf180caa0).

Test Plan: Imported from OSS

Differential Revision: D19409649

Pulled By: albanD

fbshipit-source-id: a2b41c2d231e952ecfe162bdb6bad620ac595703
2020-01-24 14:32:28 -08:00
anjali411
be6ffac1b6 Adagrad optimizer - updated step function, added param_groups, state to optimizers
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29335

Differential Revision: D19449382

Pulled By: anjali411

fbshipit-source-id: ee238801ed9cdf15a80f2ce31cc4aab8ba582aea
2020-01-21 14:41:12 -08:00
generatedunixname89002005287564
9482683065 Remove dead includes in caffe2/test
Reviewed By: ezyang

Differential Revision: D19273220

fbshipit-source-id: 3dfc3388914e60611c84472e3fc529f5b5e40534
2020-01-21 11:30:34 -08:00
Brian Wignall
f326045b37 Fix typos, via a Levenshtein-type corrector (#31523)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos, with https://github.com/bwignall/typochecker to help automate the checking.

Uses an updated version of the tool used in https://github.com/pytorch/pytorch/pull/30606 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31523

Differential Revision: D19216749

Pulled By: mrshenli

fbshipit-source-id: 7fd489cb9a77cd7e4950c1046f925d57524960ea
2020-01-17 16:03:19 -08:00
Will Feng
2bd179147a Fix typo in config script to re-enable libtorch build and test in macOS CI (#32072)
Summary:
Currently, libtorch build and test are not running in macOS CI. This PR fixes the issue.

**Test Plan:**
Check that libtorch build and test are running again in macOS CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32072

Differential Revision: D19391909

Pulled By: yf225

fbshipit-source-id: 1ab345b099869f78e1124f1a8bd185fa51371b6a
2020-01-14 16:23:57 -08:00
Will Feng
b6cee03e29 C++ tensor indexing: add Slice / TensorIndex (#30424)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30424

`at::indexing::TensorIndex` is used for converting C++ tensor indices such as `{None, "...", Ellipsis, 0, true, {1, None, 2}, torch::tensor({1, 2})}` into its equivalent `std::vector<TensorIndex>`, so that further tensor indexing operations can be performed using the supplied indices.

Test Plan: Imported from OSS

Differential Revision: D18695902

Pulled By: yf225

fbshipit-source-id: d73e14a411cdbec815866b02e75ffd71a9186e89
2020-01-10 17:53:41 -08:00
TH3CHARLie
1296e2d55e C++ API parity: isinf (#31099)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/31021; ports the legacy binding method of `isinf` to C++, thereby supporting JIT.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31099

Differential Revision: D19314733

Pulled By: yf225

fbshipit-source-id: 5725c51d19c33b4fddd0fc9e7034078580bd534e
2020-01-09 13:16:13 -08:00
Jeremy Lilley
114562cf93 For torch::from_blob() add clue when memory is non-owned. (#31222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31222

 - When constructing torch::from_blob() in the case where the deleter is a nop, switch to using a nullptr context in the DataPtr (with a nop deleter)

 - No real extra memory/cpu requirements here, actually saves a minor alloc.

Why? This provides a signal that a Tensor might contain non-owned memory from
torch::from_blob(), detectable by checking for the nullptr context.
ghstack-source-id: 96336078

Test Plan:
buck test mode/dev caffe2/test/cpp/api/...
buck test mode/dev-nosan caffe2/test/...

Differential Revision: D18992119

fbshipit-source-id: 4eea642f82d0858b57fdfc6995364a760c10567d
2020-01-07 13:12:30 -08:00
Jerry Zhang
ebe69236d1 Expose class constant through attr and setattr in object (#29219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29219

We added class constants in previous PRs; this PR allows access to class constants through the object API.

Test Plan:
build/bin/test_jit
python test/test_jit.py

Imported from OSS

Differential Revision: D18846851

fbshipit-source-id: 888a6517d5f747d1f8ced283c0c2c30b2f6c72c6
2020-01-04 11:09:35 -08:00
Jerry Zhang
1bb6c51421 Fix getAttribute (#31011)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31011

`getAttribute` is supposed to throw when the attribute is not
found rather than return a `nullptr`.

Test Plan:
.

Imported from OSS

Differential Revision: D18898417

fbshipit-source-id: 0fe7d824b978ad19bb5ef094d3aa560e9fc57f87
2019-12-18 19:27:39 -08:00
Pavel Belevich
47766e648f C++ API parity: MultiheadAttention
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27309

Test Plan: Imported from OSS

Differential Revision: D17766736

Pulled By: pbelevich

fbshipit-source-id: 7a5f2399f081945d31d4c13d7a8d248c387fc1a6
2019-12-18 10:13:29 -08:00
Nathan Goldbaum
f531815526 Deprecate tensor.type() (#30281)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/29161.

I looked a bit at the code changes related to this and think I have all of the use cases of `DeprecatedTypeProperties` covered in the message, but suggestions from someone with more context on this would be very much appreciated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30281

Differential Revision: D18830818

Pulled By: ezyang

fbshipit-source-id: 1a7fcee15354ae09e6644577e7fa33bd26acfe20
2019-12-05 10:55:34 -08:00
Will Feng
18ec4632b3 Exclude undefined tensors in the result of Module::parameters() / named_paramters() / buffers() / named_buffers() (#30626)
Summary:
PR https://github.com/pytorch/pytorch/pull/30523 attempted to fix https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462, but the fix wasn't complete. This PR makes the following improvements:
1. Fixes https://github.com/pytorch/pytorch/issues/30508 and https://github.com/pytorch/pytorch/issues/30462 properly by excluding undefined tensors in the result of `Module::parameters()` / `named_parameters()` / `buffers()` / `named_buffers()`, which mirrors the Python API behavior.
2. Audits all use sites of `Module::parameters_` / `buffers_` and changes them to `Module::named_parameters(/*recurse=*/false)` / `named_buffers(/*recurse=*/false)` when appropriate, so that use sites of module parameters / buffers never need to worry about undefined tensors.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30626

Differential Revision: D18777507

Pulled By: yf225

fbshipit-source-id: 55b64b69779e1186342efd3c44857f416334ed6b
2019-12-02 21:59:58 -08:00