Commit Graph

114 Commits

Author SHA1 Message Date
Tugsbayasgalan Manlaibaatar
f59845db40 Symintify pytorch slicing logic (#91340)
Differential Revision: [D42398023](https://our.internmc.facebook.com/intern/diff/D42398023)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91340
Approved by: https://github.com/Skylion007, https://github.com/albanD
2023-01-08 22:51:42 +00:00
PyTorch MergeBot
b3603f8129 Revert "Deduplicate c10 error and PyTorchError hierarchy (#87855)"
This reverts commit 34f2d3e6ae.

Reverted https://github.com/pytorch/pytorch/pull/87855 on behalf of https://github.com/osalpekar due to a perf regression in quantization tests
2023-01-06 19:56:35 +00:00
PyTorch MergeBot
3bb63aa387 Revert "Symintify pytorch slicing logic (#91340)"
This reverts commit 8c172fa98a.

Reverted https://github.com/pytorch/pytorch/pull/91340 on behalf of https://github.com/clee2000 due to breaking Mac builds 8c172fa98a https://github.com/pytorch/pytorch/actions/runs/3845932024/jobs/6550654339; marking this as weird because it was merged via codev?
2023-01-05 17:14:49 +00:00
Tugsbayasgalan Manlaibaatar
8c172fa98a Symintify pytorch slicing logic (#91340)
Differential Revision: [D42223260](https://our.internmc.facebook.com/intern/diff/D42223260)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91340
Approved by: https://github.com/Skylion007, https://github.com/albanD
2023-01-05 10:33:37 +00:00
William Phetsinorath
34f2d3e6ae Deduplicate c10 error and PyTorchError hierarchy (#87855)
Fixes #53370

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87855
Approved by: https://github.com/albanD
2023-01-02 15:53:36 +00:00
albanD
28ceccec21 cleanup old python_compat code (#91162)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91162
Approved by: https://github.com/ezyang
2022-12-20 18:13:19 +00:00
Peter Bell
d5a7e6db38 ATen/native (1/6): Use per-operator headers (#75571)
Differential Revision: [D40126698](https://our.internmc.facebook.com/intern/diff/D40126698)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75571
Approved by: https://github.com/malfet
2022-10-15 01:43:26 +00:00
albanD
be8627827e More symintification of get/set item (#86605)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86605
Approved by: https://github.com/anjali411
2022-10-11 12:00:40 +00:00
albanD
55663b7f81 Reland 3 of Symintify getitem and add the required helper functions (#86207) (#86487)
Note that this might not cover every use of the function (we know it doesn't),
but it is enough to get a few models passing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86487
Approved by: https://github.com/ezyang
2022-10-10 15:54:28 +00:00
PyTorch MergeBot
5b69b87d5a Revert "Symintify getitem and add the required helper functions (#86207)"
This reverts commit fd5085c445.

Reverted https://github.com/pytorch/pytorch/pull/86207 on behalf of https://github.com/seemethere due to failing internal tests; see: https://www.internalfb.com/intern/sandcastle/job/22517998926071860/insights
2022-10-07 16:10:30 +00:00
albanD
fd5085c445 Symintify getitem and add the required helper functions (#86207)
Note that this might not cover every use of the function (we know it doesn't),
but it is enough to get a few models passing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86207
Approved by: https://github.com/ezyang, https://github.com/Chillee, https://github.com/bdhirsh
2022-10-06 04:46:19 +00:00
PyTorch MergeBot
168ba066e3 Revert "Symintify getitem and add the required helper functions (#86207)"
This reverts commit 17addb307e.

Reverted https://github.com/pytorch/pytorch/pull/86207 on behalf of https://github.com/malfet due to breaking lint by double-registering `meta_index_put`; no CI was run during the outage
2022-10-05 22:42:56 +00:00
albanD
17addb307e Symintify getitem and add the required helper functions (#86207)
Note that this might not cover every use of the function (we know it doesn't),
but it is enough to get a few models passing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86207
Approved by: https://github.com/ezyang
2022-10-05 21:19:00 +00:00
Andrew M. James
0e0dfaa057 Add support for select of batch dims for all sparse compressed formats. (#82119)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82119
Approved by: https://github.com/nikitaved, https://github.com/bhosmer
2022-08-06 02:24:20 +00:00
Elias Ellison
452af0bc44 Call lift fresh in valueToTensor (#81927)
`valueToTensor` is invoked by many Python APIs, such as `x[0] = 0.5`, which lift the 0.5 to a tensor. The problem is described similarly in https://github.com/pytorch/pytorch/pull/81609 (s/scalar_to_tensor/valueToTensor):
> scalar_to_tensor is not dispatched and thus there is no interposition point for modes to ensure that the resulting tensor is appropriately wrapped. lift_fresh introduces this interposition point. This prevents FakeTensorMode from erroring
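
A minimal sketch of the case this fixes, assuming `FakeTensorMode` from `torch._subclasses` (illustrative, not the PR's actual test):
```
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

# Assigning a Python scalar into a tensor under a mode: valueToTensor lifts
# 0.5 to a tensor, and lift_fresh gives the mode a dispatch point to wrap
# that freshly created tensor instead of erroring.
with FakeTensorMode():
    x = torch.empty(4)  # a fake tensor under the mode
    x[0] = 0.5          # the lifted scalar is wrapped, so this now works
```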

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81927
Approved by: https://github.com/ezyang
2022-07-26 22:07:59 +00:00
Michael Suo
30fb2c4aba [lint] autoformat test/cpp and torch/csrc
Let's have some fun.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78828

Approved by: https://github.com/ezyang
2022-06-11 21:11:16 +00:00
YifanShenSZ
6ad51c9422 Support indexing of the underlying tensors for nested tensors (#78934)
Fixes #76843

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78934
Approved by: https://github.com/cpuhrsch, https://github.com/jbschlosser
2022-06-08 21:05:04 +00:00
Zachary DeVito
ab6c7b4b3f fix __torch_function__ bug in getitem that caused an "error not set" exception
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78781

Approved by: https://github.com/ezyang
2022-06-06 17:02:57 +00:00
Ivan Yashchuk
407e8eba8c Enable simple indexing into CSR tensor, add torch.select for CSR
This PR implements `torch.select` for CSR tensors. Currently, it's not possible to select rows or columns of a batched CSR tensor; the non-batched case works fine by converting to COO and calling select. Initially, I implemented raw manipulation of indices, but converting to COO is only slightly slower and more readable.

This PR also enables indexing into a batched CSR tensor with `[x, y, z]`. Assigning is disabled.
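
A short sketch of the resulting behavior (shapes and values are illustrative):
```
import torch

crow_indices = torch.tensor([0, 2, 4])
col_indices = torch.tensor([0, 1, 0, 1])
values = torch.tensor([1., 2., 3., 4.])
csr = torch.sparse_csr_tensor(crow_indices, col_indices, values, (2, 2))

row = csr.select(0, 1)  # select a row; implemented by converting to COO
col = csr.select(1, 0)  # column selection works the same way
# csr[0] = ...          # assigning into a CSR tensor remains disabled
```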

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76228
Approved by: https://github.com/cpuhrsch
2022-04-23 02:36:03 +00:00
anjali411
37e0d2e361 Fix segfault while real and imaginary attributes are set to a number (#73867)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73867

Fixes https://github.com/pytorch/pytorch/issues/72947

Test Plan: Imported from OSS

Reviewed By: davidberard98

Differential Revision: D34695956

Pulled By: anjali411

fbshipit-source-id: 2f3eda272a5214335eae506bd387ce8da4d81b8c
(cherry picked from commit fdb07354cac22c30aa047e65fbac9840608db811)
2022-03-08 18:58:26 +00:00
Peter Bell
dba42056d8 Release GIL in Tensor indexing functions (#71728)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71728

Fixes gh-68739

For simple indexing this adds a `gil_scoped_release` before calling
`set_item`. For tuple indexing, the slicing operation is done with the
GIL held, because otherwise we would have to re-acquire the GIL for each
element in the tuple. However, the GIL is released for the final
`copy_to` operation, which is where the actual kernels are called.
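
A rough Python-level sketch of the effect (sizes and the helper are illustrative):
```
import threading
import torch

# With the GIL released around the kernel, this background thread keeps
# running Python bytecode while the large indexed assignment executes.
def worker():
    s = 0
    for i in range(5_000_000):
        s += i

x = torch.zeros(4000, 4000)
v = torch.ones(2000, 4000)

t = threading.Thread(target=worker)
t.start()
x[::2] = v  # set_item drops the GIL before the copy_to kernel runs
t.join()
```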

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D33770047

Pulled By: albanD

fbshipit-source-id: 67304a65e2cbf3b3ba9843687d9c63926d29298f
(cherry picked from commit d0a85046b7)
2022-01-25 22:30:48 +00:00
Peter Bell
cd9da3267c Rationalize API exports in torch_python (#68095)
Summary:
This renames `WindowsTorchApiMacro.h` to `Export.h` to mirror the c10 header `c10/macros/Export.h`, and updates it to use `C10_EXPORT`/`C10_IMPORT`. It also removes the `THP_API` macro from `THP_export.h`, which appears to serve the same purpose.

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68095

Reviewed By: jbschlosser

Differential Revision: D32810881

Pulled By: albanD

fbshipit-source-id: d6949ccd0d80d6c3e5ec1264207611fcfe2503e3
2021-12-07 15:24:37 -08:00
Peter Bell
129e99fbce __getitem__: Ensure Tensor subclasses are not treated as tuples (#67202)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/67027

`torch.Tensor` is considered a Mapping, but not a Sequence, in Python
because it uses `tp_as_mapping` instead of defining `__getitem__` in
Python. However, if you try to overwrite `__getitem__` from Python,
the class is considered a `Sequence`, and so the tensor is treated like
a tuple for indexing purposes (see the sketch below).
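
A minimal sketch of the failure mode, using a hypothetical subclass:
```
import torch

# Defining __getitem__ in Python makes the subclass register as a Sequence,
# so before this fix an instance used as an index was unpacked like a tuple
# of per-dimension indices instead of being treated as one index tensor.
class MyTensor(torch.Tensor):
    def __getitem__(self, item):
        return super().__getitem__(item)

idx = torch.tensor([0, 2]).as_subclass(MyTensor)
base = torch.arange(6)
print(base[idx])  # tensor([0, 2]) with the fix applied
```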

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67202

Reviewed By: VitalyFedyunin

Differential Revision: D31908515

Pulled By: albanD

fbshipit-source-id: 0ca55a36be3421f96428a8eacf5d195646252b38
2021-10-26 08:56:59 -07:00
Eddie Yan
42d6543c7b [bc-breaking] Dispatch index_put with boolean mask argument to masked_fill (#61612)
Summary:
https://github.com/pytorch/pytorch/issues/57515

Based on ngimel's branch, with a few tweaks to determine when to copy value tensors to device memory, plus additional tests.
bc-breaking note: Previously, if `value` in `x[index] = value` was a 0-d tensor on a device different from `x`'s, it resulted in a RuntimeError. Now this case is handled by copying `value` to the correct device, as sketched below.
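
A sketch of the newly handled case (assumes a CUDA device is available):
```
import torch

x = torch.zeros(4, device="cuda")
mask = torch.tensor([True, False, True, False], device="cuda")
value = torch.tensor(1.0)  # 0-d tensor on a different device (CPU)

# Previously a RuntimeError; now value is copied to x's device and the
# assignment dispatches to masked_fill.
x[mask] = value
```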

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61612

Reviewed By: mrshenli

Differential Revision: D29753491

Pulled By: ngimel

fbshipit-source-id: 3fba14f4c2b9b136b50af020f9c1eda88f7373b0
2021-07-19 22:53:14 -07:00
Ailing Zhang
0ecdbfebff s/InplaceOrView/ADInplaceOrView/g (#57372)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57372

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57324

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D28121821

Pulled By: ailzhang

fbshipit-source-id: f568dd2505f6279da9ffb93ce1d22e0f98c606bb
2021-05-01 22:56:18 -07:00
Nikita Shulga
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
Ailing Zhang
be7a943bb8 s/AutoDispatchBelowAutograd/AutoDispatchBelowInplaceOrView. (#56657)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56657

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27931526

Pulled By: ailzhang

fbshipit-source-id: 3af718df3435e2b0b30bc62070dbdc5aeeecdfb4
2021-04-23 15:50:00 -07:00
Ailing Zhang
3d904b56ec s/AutoNonVariableTypeMode/AutoDispatchBelowAutograd/ (#56423)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56423

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D27866606

Pulled By: ailzhang

fbshipit-source-id: e3942356dc3133d1c5722de40ec0d45e6a60f2f1
2021-04-20 17:17:46 -07:00
Edward Yang
6ec71ed4f9 Replace all direct cdata access with THPVariable_Unpack (#55799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55799

I'm going to change the implementation of cdata soon, so I need to
abstract over cdata access with a function.  Additionally, many
call sites manually cast to THPVariable to access the member, so I can
remove those unsafe casts from the client code
(the implementation, of course, still does an unsafe cast).

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D27712130

Pulled By: ezyang

fbshipit-source-id: 95fcc013bf3913d67f2c634068eb5b3aab144cb3
2021-04-15 08:57:04 -07:00
Richard Barnes
d690973295 irange on int64_t (#55148)
Summary:
Converts loops of the form:
```
for (int64_t VAR = 0; VAR < LIMIT; VAR++)
```
to the form
```
for(const auto VAR : c10::irange(LIMIT))
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55148

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D27447811

fbshipit-source-id: 6311a094ec4a81a0b57383aaee0ba1b1dc2445c4
2021-04-05 16:14:00 -07:00
Edward Yang
e0aebe241d Refactor tensor_new.cpp to use TensorOptions instead of DispatchKey (#54034)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54034

Fixes #53544

I had to touch a bunch of lines but the refactoring was fairly
mechanical.  Here's how it works.

The basic concept behind this PR is that tensor_new.cpp was previously
abusing DispatchKey when it actually meant TensorOptions.  The provided
DispatchKey argument to most of the constructor functions typically
comes from torch::tensors::get_default_dispatch_key();  it doesn't
really make sense for people to set the default dispatch key, but
this got grandfathered in due to the old API set_default_tensor_type
(where the "Type" concept got refactored into "DispatchKey" concept
over time).  See also #53124.  But the upshot is that, semantically,
what we refer to as the default dispatch key really is more like
torch.set_default_tensor_type(torch.Tensor) versus
torch.set_default_tensor_type(torch.cuda.Tensor): clearly the user
wants to do something about *construction* of the tensor, and
TensorOptions captures that exactly.

So, how exactly to translate from one to the other?
- Sources (things that used to PRODUCE DispatchKey)
  - Most top level functions take a DispatchKey as their argument.  I
    use the new function dispatchKeyToTensorOptions to convert it into
    a TensorOptions
  - typeIdWithDefault now produces a TensorOptions (probably could do
    with a rename, though I didn't)
- Sinks (things that used to CONSUME DispatchKey)
  - Previously, the function options() was typically used to convert the
    DispatchKey into a TensorOptions.  Now its replacement build_options
    just takes a TensorOptions and sets some extra fields on it.
    Irritatingly, I can't just replace
    `build_options(options, scalar_type, device)` with
    `options.dtype(scalar_type).device(device)` because the semantics
    are slightly different: if device is nullopt, we should preserve
    the device already specified in options, whereas options.device()
    overwrites it unconditionally (if device is nullopt, it unsets the
    device on options); see the sketch after this list.
  - The other major sink for DispatchKey was `internal_new_from_data`,
    but it turns out it only really extracts the device type from
    the dispatch key.  Now it just pulls out the device from
    TensorOptions.
- To actually do the translation of DispatchKey to TensorOptions, I
  introduce new functions dispatchKeyToLayout (replicating
  layout_from_backend--there are still a few uses of this function
  so I couldn't delete it) and dispatchKeyToDeviceType (replacing
  computeDeviceType)
- In all internal functions, whenever DispatchKey is taken as an argument,
  I instead take TensorOptions as an argument, and pass it along.
- Anywhere `legacyExtractDispatchKey(other.key_set())` equality was
  previously used, I now do `other.options().type_equal()`, which
  is the intended BC for doing "backend to backend" comparisons
- There are a few places in the sparse constructors where we allocated
  a tensor for values, and then read out the dispatch key from the
  result to allocate the keys.  As best as I can tell, this is totally
  equivalent to just passing in the options to both values and indices
  (the only difference is dtype, which is captured via a separate
  argument)
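
The build_options semantics noted above, as a hypothetical Python-style pseudocode sketch (the real helper is C++):
```
# A missing device must preserve whatever device is already recorded in
# options, whereas options.device(device) would overwrite it unconditionally.
def build_options(options, scalar_type, device=None):
    options = options.dtype(scalar_type)
    if device is not None:  # only overwrite when a device was actually given
        options = options.device(device)
    return options
```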

This refactor doesn't really go far enough: for example, there are now
functions that take both TensorOptions and ScalarType, when really
TensorOptions can capture all of this.  I kept it strictly to
s/DispatchKey/TensorOptions/ to reduce the number of possible bugs;
also, a lot of this will be mooted by a proper fix to #53124.

Even with this limited refactor, the payoff is sweet.  I can delete:

- backendToCPU
- backendToXPU
- backendToCUDA
- backendToHIP
- backendToBackendOfDeviceType

The reason I can do this is that I can simply overwrite the layout in
TensorOptions to do the conversion, rather than having to type out each
backend case explicitly.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: bhosmer

Differential Revision: D27109509

Pulled By: ezyang

fbshipit-source-id: 91d16cfbc390127770362ac04fb43f7e070077e9
2021-03-19 09:08:32 -07:00
Hameer Abbasi
cf1882adeb Fix indexing for overrides. (#49324)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46277

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49324

Reviewed By: mruberry

Differential Revision: D25959334

Pulled By: ezyang

fbshipit-source-id: bac48b8ffee89d10aa04c004de2b53b4e54a96c2
2021-01-20 11:34:02 -08:00
mattip
f96ce3305c prohibit assignment to a sparse tensor (#50040)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/48225 by prohibiting assignment to a sparse Tensor.
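
A short sketch of the new behavior (the exact error type is illustrative):
```
import torch

s = torch.eye(2).to_sparse()
try:
    s[0] = torch.ones(2)  # assignment into a sparse tensor
except (TypeError, RuntimeError) as e:
    print("prohibited:", e)  # now raises instead of producing bogus results
```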

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50040

Reviewed By: mrshenli

Differential Revision: D25757125

Pulled By: zou3519

fbshipit-source-id: 3db6f48932eb10bf6ca5e97a6091afcabb60e478
2021-01-04 14:38:35 -08:00
Hameer Abbasi
3d46e02ea1 Add __torch_function__ for methods (#37091)
Summary:
According to pytorch/rfcs#3

From the goals in the RFC:

1. Support subclassing `torch.Tensor` in Python (done here)
2. Preserve `torch.Tensor` subclasses when calling `torch` functions on them (done here)
3. Use the PyTorch API with `torch.Tensor`-like objects that are _not_ `torch.Tensor`
   subclasses (done in https://github.com/pytorch/pytorch/issues/30730)
4. Preserve `torch.Tensor` subclasses when calling `torch.Tensor` methods. (done here)
5. Propagating subclass instances correctly also with operators, using
   views/slices/indexing/etc. (done here)
6. Preserve subclass attributes when using methods or views/slices/indexing. (done here)
7. A way to insert code that operates on both functions and methods uniformly
   (so we can write a single function that overrides all operators). (done here)
8. The ability to give external libraries a way to also define
   functions/methods that follow the `__torch_function__` protocol. (will be addressed in a separate PR)

This PR makes the following changes:

1. Adds the `self` argument to the arg parser.
2. Dispatches on `self` as well if `self` is not `nullptr`.
3. Adds a `torch._C.DisableTorchFunction` context manager to disable `__torch_function__`.
4. Adds a `torch::torch_function_enabled()` and `torch._C._torch_function_enabled()` to check the state of `__torch_function__`.
5. Dispatches all `torch._C.TensorBase` and `torch.Tensor` methods via `__torch_function__` (see the sketch below).

TODO:

- [x] Sequence Methods
- [x] Docs
- [x] Tests

Closes https://github.com/pytorch/pytorch/issues/28361

Benchmarks in https://github.com/pytorch/pytorch/pull/37091#issuecomment-633657778
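
A minimal sketch of what this enables, using a hypothetical subclass:
```
import torch

class LoggingTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        with torch._C.DisableTorchFunction():
            ret = func(*args, **kwargs)
        print(f"intercepted {func.__name__}")
        return ret

t = torch.arange(4).as_subclass(LoggingTensor)
t.sum()   # method calls now dispatch through __torch_function__
t[1:3]    # so do operators and indexing on the subclass
```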

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37091

Reviewed By: ngimel

Differential Revision: D22765678

Pulled By: ezyang

fbshipit-source-id: 53f8aa17ddb8b1108c0997f6a7aa13cb5be73de0
2020-08-05 20:44:13 -07:00
Jiakai Liu
72b0447f8d [pytorch] move tracing logic to a separate dispatch backend (#38467)
Summary:
This PR moves tracing logic out of the generated VariableType kernels and associates it with a new dedicated dispatch key, Tracer.
It also toggles the dispatch key set at various places to keep the semantics unchanged - see the inline [Tracing Mode Switches] note.

Sample generated code:
```
Tensor & __ilshift___Tensor(Tensor & self, const Tensor & other) {
  #if !defined(PYTORCH_DISABLE_TRACING)
  torch::jit::Node* node = nullptr;
  std::shared_ptr<jit::tracer::TracingState> tracer_state;
  if (jit::tracer::isTracing()) {
    tracer_state = jit::tracer::getTracingState();
    at::Symbol op_name;
    op_name = jit::Symbol::fromQualString("aten::__ilshift__");
    node = tracer_state->graph->create(op_name, /*num_outputs=*/0);
    jit::tracer::recordSourceLocation(node);
    jit::tracer::addInputs(node, "self", self);
    jit::tracer::addInputs(node, "other", other);
    tracer_state->graph->insertNode(node);

    jit::tracer::setTracingState(nullptr);
  }
  #endif
  static auto op = c10::Dispatcher::singleton().findSchemaOrThrow("aten::__ilshift__", "Tensor");
  c10::Dispatcher::singleton().redispatch<Tensor &, Tensor &, const Tensor &>(op, c10::DispatchKey::Tracer, self, other);
  #if !defined(PYTORCH_DISABLE_TRACING)
  if (tracer_state) {
    jit::tracer::setTracingState(std::move(tracer_state));
    jit::tracer::addOutput(node, self);
  }
  #endif
  return self;
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38467

ghstack-source-id: 105215150

Test Plan: CI

Differential Revision: D21570684

fbshipit-source-id: 1a96761830307f9a934f38bfb9fe8b5b1763e0e0
2020-06-04 01:51:30 -07:00
anjali411
6e92579883 Added autograd support for C->C functions and enabled requires_grad=True for complex (#36932)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36932

Differential Revision: D21181230

Pulled By: anjali411

fbshipit-source-id: 295f2cd1e2b9918a8b2cb88cab0536b2407dc455
2020-04-24 12:30:49 -07:00
Will Feng
415595ace4 [C++ API] Remove init-list form of at::indexing::Slice (#34255)
Summary:
The init-list form of `at::indexing::Slice` (i.e. `tensor.index({{1, None, 2}, ...})` instead of `tensor.index({Slice(1, None, 2), ...})`) in the C++ API can easily be confused with list-form indexing in the Python API (e.g. `tensor[[1, 3, 2], ...]`), which is not good from a readability perspective. This PR removes the init-list form of `at::indexing::Slice` to make the API less confusing.
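
For contrast, a short sketch of the two Python-side forms being confused (illustrative):
```
import torch

t = torch.arange(12).reshape(4, 3)
print(t[[1, 3, 2], ...])  # list index: advanced indexing, picks rows 1, 3, 2
print(t[1:None:2, ...])   # slice: the Python analogue of Slice(1, None, 2)
```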
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34255

Test Plan: Imported from GitHub, without a `Test Plan:` line.

Differential Revision: D20290166

Pulled By: yf225

fbshipit-source-id: abbcbeca0b179219e5e1f196a33ef8aec87ebb76
2020-03-06 05:51:53 -08:00
Will Feng
1494005cfd C++ tensor indexing: more indexing tests (#30427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30427

Test Plan: Imported from OSS

Differential Revision: D18695899

Pulled By: yf225

fbshipit-source-id: 74455fe52ef922556fabe65aefca9ec93fe2346d
2020-02-28 22:07:41 -08:00
Michael Suo
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
Will Feng
36919278cc C++ tensor multi-dim indexing: add index() and index_put_() overloads, simple indexing tests, merge with Python indexing path (#32841)
Summary:
This PR adds the following items:
- **1st item**: `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads for `Tensor::index` and `Tensor::index_put_`, to be used specifically for multi-dim indexing purpose.

Design rationale:
* C++ `Tensor::index` and `Tensor::index_put_` are both existing tensor APIs, and they currently (before this PR) only accept a list of tensors (i.e. `ArrayRef<Tensor>`) as indices. If we change their signatures to also accept non-tensors as indices (i.e. `ArrayRef<TensorIndex>`, and `TensorIndex` is convertible from `Tensor` / `Slice` / `None` / `Ellipsis`), it would slow down the original code path (since now it has to go through more steps), which is undesirable.

    To get around this problem, the proposed solution is to keep the original `ArrayRef<Tensor>` overload, and add `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads to `Tensor::index` and `Tensor::index_put_`. This way, the original code path won't be affected, and the tensor multi-dim indexing API is only used when the user explicitly passes an `ArrayRef<TensorIndex>` or a braced-init-list of `TensorIndex`-convertible types to `Tensor::index` and `Tensor::index_put_`.

    Note that the above proposed solution would still affect perf for the user’s original `Tensor::index` or `Tensor::index_put_` call sites that use a braced-init-list of tensors as input, e.g. `tensor.index({...})` or `tensor.index_put_({...}, value)`, since now such function calls would take the multi-dim indexing path instead of the original advanced indexing path. However, there are only two instances of this in our codebase (one in ATen cpp test, one in a C++ API nn init function), and they can be easily changed to explicitly use `ArrayRef<Tensor>` as input (I changed them in this PR). For external user’s code, since this is part of the C++ frontend which is still considered experimental, we will only talk about this change in the release note, and ask users to switch to using `ArrayRef<Tensor>` explicitly if they want to keep using the original advanced indexing code path.

- **2nd item**: Mechanisms for parsing `ArrayRef<TensorIndex>` indices and performing indexing operations (mirroring the functions in `torch/csrc/autograd/python_variable_indexing.cpp`).
- **3rd item**: Simple tests to demonstrate that the `Tensor::index()` and `Tensor::index_put_()` APIs work. I will add more tests after the first few PRs are reviewed.
- **4th item**: Merge Python/C++ indexing code paths, for code simplicity. I tested locally and found that there is no perf regression resulting from the merge. I will get more concrete numbers for common use cases when we settle on the overall design.

This PR supersedes https://github.com/pytorch/pytorch/pull/30425.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32841

Differential Revision: D19919692

Pulled By: yf225

fbshipit-source-id: 7467e64f97fc0e407624809dd183c95ea16b1482
2020-02-24 22:04:00 -08:00
Pavel Belevich
62b06b9fae Rename TensorTypeId to DispatchKey (#32154)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32154

TensorTypeId -> DispatchKey
	c10/core/TensorTypeId.h -> c10/core/DispatchKey.h
	c10/core/TensorTypeId.cpp -> c10/core/DispatchKey.cpp
	TensorTypeId::* -> DispatchKey::*
	TensorTypeId type_id -> DispatchKey dispatch_key
		type_id -> dispatch_key
	TensorTypeId::NumTensorIds -> DispatchKey::NumDispatchKeys
	RealTensorTypeId -> RealDispatchKey
TensorTypeSet -> DispatchKeySet
	TensorTypeIds -> DispatchKeys
	c10/core/TensorTypeSet.h -> c10/core/DispatchKeySet.h
	c10/core/TensorTypeSet.cpp -> c10/core/DispatchKeySet.cpp
	type_set() -> key_set()
	type_set_ -> key_set_
	typeSet -> keySet
ExcludeTensorTypeIdGuard -> ExcludeDispatchKeyGuard
IncludeTensorTypeIdGuard -> IncludeDispatchKeyGuard
LocalTensorTypeSet -> LocalDispatchKeySet
	c10/core/impl/LocalTensorTypeSet.h -> c10/core/impl/LocalDispatchKeySet.h
	c10/core/impl/LocalTensorTypeSet.cpp -> c10/core/impl/LocalDispatchKeySet.cpp
	tls_local_tensor_type_set -> tls_local_dispatch_key_set
	tls_is_tensor_type_id_excluded -> tls_is_dispatch_key_excluded
	tls_set_tensor_type_id_excluded -> tls_set_dispatch_key_excluded
	tls_is_tensor_type_id_included -> tls_is_dispatch_key_included
	tls_set_tensor_type_id_included -> tls_set_dispatch_key_included
MultiDispatchTensorTypeSet -> MultiDispatchKeySet
	multi_dispatch_tensor_type_set -> multi_dispatch_key_set
tensorTypeIdToBackend -> dispatchKeyToBackend
backendToTensorTypeId -> backendToDispatchKey
initForTensorTypeSet -> initForDispatchKeySet
inferred_type_set -> inferred_key_set
computeTensorTypeId -> computeDispatchKey
PODLocalTensorTypeSet raw_local_tensor_type_set -> PODLocalDispatchKeySet raw_local_dispatch_key_set
get_default_tensor_type_id -> get_default_dispatch_key
inferred_type_id -> inferred_dispatch_key
actual_type_id -> actual_dispatch_key
typeSetToDispatchKey_ -> dispatchKeySetToDispatchKey_
get_type_id() -> get_dispatch_key()
legacyExtractTypeId -> legacyExtractDispatchKey
extractTypeId -> extractDispatchKey

Test Plan: Imported from OSS

Differential Revision: D19398900

Pulled By: pbelevich

fbshipit-source-id: 234ad19f93d33e00201b61e153b740a339035776
2020-01-15 11:16:08 -08:00
Nathan Goldbaum
f531815526 Deprecate tensor.type() (#30281)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/29161.

I looked a bit at the code changes related to this and think I have all of the use cases of `DeprecatedTypeProperties` covered in the message, but suggestions from someone with more context on this would be very much appreciated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30281

Differential Revision: D18830818

Pulled By: ezyang

fbshipit-source-id: 1a7fcee15354ae09e6644577e7fa33bd26acfe20
2019-12-05 10:55:34 -08:00
Edward Yang
1111a6b810 Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL (#30274)
Summary:
Reland of https://github.com/pytorch/pytorch/pull/29095
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30274

Differential Revision: D18762293

Pulled By: ezyang

fbshipit-source-id: d3d50c2dd12bcb678ab25fa708eb6587cc4b66f9
2019-12-02 12:19:58 -08:00
Mike Ruberry
eff4c4d7c1 Revert D18301806: Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL
Test Plan: revert-hammer

Differential Revision:
D18301806

Original commit changeset: 03da6a26c41e

fbshipit-source-id: c1324ee8d154e7e16f5dd4f1cf3625aaa566cd39
2019-11-21 14:50:07 -08:00
Alan Du
f4b9690f2d Use pybind11::gil_scoped_* functions instead of AutoGIL/AutoNoGIL (#29095)
Summary:
Given that pybind11 implements these GIL functions, I don't think it makes sense for PyTorch to have its own bespoke versions.

Fixes https://github.com/pytorch/pytorch/issues/29065
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29095

Differential Revision: D18301806

Pulled By: ezyang

fbshipit-source-id: 03da6a26c41ee65aaadf7b67b9f0b14d2def2a5a
2019-11-21 13:44:40 -08:00
Edward Yang
30092df15e Rename getNonVariableDeprecatedTypeProperties to getDeprecatedTypeProperties (#29203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29203

There is no more Variable/Tensor distinction, so fix the misleading name.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18353505

Pulled By: ezyang

fbshipit-source-id: dadc394d533ab7746f70bc186c6645441a784518
2019-11-13 07:43:32 -08:00
Edward Yang
4e21157e01 Revert "Revert D18171156: Merge Tensor and Variable." (#29299)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29299

This reverts commit 9c43b16df9, but also incorporates the changes
from D18348622.  Comments there:

thpp-compatibility is used by admarket/adreview/service:adreviewservice and
libtorch is too big for the service to deal with.

thpp-compatibility doesn't support autograd, so we hack around dispatching
variables by using AutoNonVariableTypeMode everywhere we call into ATen,
so we never attempt to call into Variable stubs.  If you get it wrong,
you'll get an error like:

```
what():  Could not run 'aten::empty' with arguments from the 'VariableTensorId' backend. 'aten::empty' is only available for these backends: [SparseCPUTensorId, CPUTensorId, MkldnnCPUTensorId]. (lookup_ at caffe2/aten/src/ATen/core/dispatch/DispatchTable.h:298)
```

Test Plan:
Imported from OSS

```
buck test //thpp-compatibility/...
buck build mode/opt-clang admarket/adreview/service:adreviewservice
```

adreviewservice canary: https://our.intern.facebook.com/intern/ads/canary/422290029716387895 (comparing against parent comment due to current breakage) ==> experiment store https://our.intern.facebook.com/intern/experiment_store/experiment/43990006/
adfinder canary: https://our.intern.facebook.com/intern/ads/canary/422268535840333934
adindexer canary: https://our.intern.facebook.com/intern/ads/canary/422268550559034675

adreview second canary:  https://our.intern.facebook.com/intern/ads/canary/422307863515591925

canary without thpp-compat fixups https://our.intern.facebook.com/intern/ads/canary/422308951649168772

Reviewed By: dreiss

Differential Revision: D18353504

Pulled By: ezyang

fbshipit-source-id: 65feaba39fa07bb66762810909aeb38868668a30
2019-11-08 09:11:20 -08:00
Edward Yang
9c43b16df9 Revert D18171156: Merge Tensor and Variable.
Test Plan: revert-hammer

Differential Revision:
D18171156

Original commit changeset: 5b6a045beba3

fbshipit-source-id: f5581d902c2305018ea49f8473592be2a465560b
2019-11-06 10:57:00 -08:00
Edward Yang
25261a4776 Merge Tensor and Variable. (#28620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28620

All Tensors are Variables now; they just happen to have requires_grad=False. Tensors ALWAYS have `VariableTensorId` in their type set.

When constructing this patch, I had to make decisions about what I would fix in this patch, and what I would leave for follow up PRs. Here is the cleanup that happens in this patch:

- The `is_variable` property is removed from TensorOptions. I removed this immediately because, unlike Tensor::is_variable, TensorOptions::is_variable doesn't respect our VariableTensorId thread-local state. This means there were a bunch of places where TensorOptions::is_variable was false, which is obviously bogus in a world where Tensor and Variable are merged. Instead of keeping the method as a function that always returns true, I opted to remove it entirely (it's not public API). All places we set `is_variable` are deleted.
  - Knock on effect: there is no longer a separate DeprecatedTypeProperties for the variable and non-variable versions of type.
  - Knock on effect: instead of asserting on TensorOptions::is_variable, instead we just test `at::impl::variable_is_excluded()`
- There is now only one copy of the cuDNN RNN dropout cache, not two (I'm not sure why we had two to begin with)

Some cleanup that doesn't happen in this patch:
- Eliminating unnecessary uses of `make_variable`
- Eliminating `Tensor::is_variable`

The most subtle part of this patch is retaining tracing behavior: the fact that everything is a Variable means that more code gets routed to VariableType than before; this can change traces. I identified two places where we didn't appropriately turn off VariableType, mostly factory functions:

- `torch.tensor` must turn off VariableType before invoking `at::empty` to construct the tensor, as it subsequently does direct data access
- `tensor_slow` (invoked when you pass a Python scalar to a tensor argument) must turn off VariableType before calling `scalar_to_tensor`, so the scalar gets traced as a constant rather than as a call to `scalar_to_tensor`.

Honestly, these are all giant hacks, and should be replaced with a more specialized guard that just toggles tracing.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D18171156

Pulled By: ezyang

fbshipit-source-id: 5b6a045beba37492647e350190f495114e86504d
2019-11-04 14:59:57 -08:00
neginraoof
d2eb08d17b Fix tracing slice/select with dynamic inputs (#26549)
Summary:
Fix Slice/Select trace arguments. This PR stashes arguments to functions in order to avoid tracing them as constants.
It depends on a fix for the select op in PR:
https://github.com/pytorch/pytorch/pull/25273
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26549

Reviewed By: hl475

Differential Revision: D17623851

Pulled By: houseroad

fbshipit-source-id: ae314004266688d2c25c5bada2dcedbfc4f39c5b
2019-10-22 17:09:40 -07:00