Commit Graph

29 Commits

cyy
168e41009b [structural binding][10/N] Replace std::tie with structural binding (#130784)
Follows #130404
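
A minimal before/after sketch of the pattern this series applies; `minmax` is an invented stand-in function, not code from the PR:

```
#include <iostream>
#include <tuple>

std::tuple<int, double> minmax() { return {1, 2.0}; }

int main() {
  // Before: declare the locals, then std::tie into them.
  //   int lo; double hi;
  //   std::tie(lo, hi) = minmax();
  // After: a C++17 structured binding declares and binds in one step.
  auto [lo, hi] = minmax();
  std::cout << lo << ' ' << hi << '\n';
}
```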

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130784
Approved by: https://github.com/malfet
2024-07-16 10:28:14 +00:00
Kazuaki Ishizaki
deb800ee81 Fix typo under test directory (#111304)
This PR fixes typos in comments under the `test` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111304
Approved by: https://github.com/Skylion007
2023-10-16 23:06:06 +00:00
Michael Suo
30fb2c4aba [lint] autoformat test/cpp and torch/csrc
Let's have some fun.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78828

Approved by: https://github.com/ezyang
2022-06-11 21:11:16 +00:00
Richard Barnes
e0643fa3fc use irange for loops 5 (#66744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66744

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for (TYPE var = x0; var < x_max; var++)`

to the format

`for (const auto var : irange(x0, x_max))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable warning suppressions added by hand.
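
A runnable illustration of the rewrite, using c10's `irange` header; the loop body is a toy:

```
#include <c10/util/irange.h>
#include <cstdint>
#include <iostream>

int main() {
  const int64_t n = 4;
  // Before: for (int64_t i = 0; i < n; i++) { ... }
  // After: the index is const and its range is explicit.
  for (const auto i : c10::irange(n)) {
    std::cout << i << '\n';
  }
}
```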

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D31705358

fbshipit-source-id: d6ea350cbaa8f452fc78f238160e5374be637a48
2021-10-18 21:59:50 -07:00
Xue Li
2f099c7555 Revert D30652629: use irange for loops
Test Plan: revert-hammer

Differential Revision:
D30652629 (687c2267d4)

Original commit changeset: 0ae6c4bbbb55

fbshipit-source-id: 5c4f067b584a021c8c9656454d1ee60999600fb3
2021-10-15 15:23:10 -07:00
Richard Barnes
687c2267d4 use irange for loops (#66234)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66234

Modified loops in files under fbsource/fbcode/caffe2/ from the format

`for (TYPE var = x0; var < x_max; var++)`

to the format

`for (const auto var : irange(x0, x_max))`

This was achieved by running r-barnes's loop upgrader script (D28874212), with some modifications to exclude all files under /torch/jit, plus a number of reversions and unused-variable warning suppressions added by hand.

bypass_size_limit
allow-large-files

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D30652629

fbshipit-source-id: 0ae6c4bbbb554bad42e372792a6430e1acf15e3e
2021-10-15 13:50:33 -07:00
Luca Wehrstedt
3a2149a4ce [reland] Make TP agent use streams from Future when sending response (#59212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59212

Reland of https://github.com/pytorch/pytorch/pull/58428

Until now, the TP agent expected the output of a remote function to be on the same streams as the inputs. In other words, it used the lazy stream context of the inputs to synchronize the output tensors. This was true in the most common case of a synchronous remote function. However, it wasn't true for async functions, for fetching RRefs, and so on. The more generic way is to use the CUDA events held by the Future to perform this synchronization. (These events may be on the input streams, or they may not be!)
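
A minimal sketch of the event-based synchronization, assuming the agent can obtain the CUDA events recorded by the completed Future; the function and its signature are illustrative, not the agent's actual internals:

```
#include <ATen/cuda/CUDAEvent.h>
#include <c10/cuda/CUDAStream.h>
#include <vector>

// Make the stream used to send the response wait on every event the
// Future holds, wherever those events were recorded.
void syncSendStreamOnFutureEvents(
    std::vector<at::cuda::CUDAEvent>& futureEvents,
    const c10::cuda::CUDAStream& sendStream) {
  for (auto& event : futureEvents) {
    event.block(sendStream);
  }
}
```
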
ghstack-source-id: 130202842

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D28623885

fbshipit-source-id: 29333bcb75d077ab801eac92017d0e381e8f5569
2021-06-02 05:46:05 -07:00
Luca Wehrstedt
5ec169b4c3 [reland] Always use intrusive_ptr for Message (1 out of 2) (#59205)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59205

Reland of https://github.com/pytorch/pytorch/pull/58422

Similar to Future (which I tackled recently), Message is an ivalue type (a "custom class" one), and the natural way to represent it is inside an intrusive_ptr. However in the RPC code we had a mix of usages, often passing Message by value. This has undesirable consequences, as it could easily trigger a copy by accident, which I believe is why in many places we accepted _rvalue references_ to Message, in order to force the caller to move. In my experience this is non-idiomatic in C++ (normally a function signature specifies how the function consumes its arguments, and it's up to the caller to then decide whether to copy or move).

By moving to intrusive_ptr everywhere, I think we eliminate or at least simplify many of the problems above.

In this PR I do half of the migration, by updating everything except the `toMessageImpl` methods, which will come in the next PR.
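
A minimal sketch of the ownership pattern, with a stand-in `Message` (the real class lives in torch::distributed::rpc):

```
#include <c10/util/intrusive_ptr.h>
#include <utility>

// A type held in an intrusive_ptr derives from intrusive_ptr_target.
struct Message : c10::intrusive_ptr_target {
  int id = 0;
};

// Before: void send(Message&& message) forced every caller to move.
// After: the pointer is passed by value; copying it only bumps a
// refcount, so an accidental deep copy of the payload is impossible.
void send(c10::intrusive_ptr<Message> message) {
  (void)message;
}

void example() {
  auto m = c10::make_intrusive<Message>();
  send(m);            // cheap: refcount bump
  send(std::move(m)); // or hand over ownership entirely
}
```
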
ghstack-source-id: 130202849

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D28623891

fbshipit-source-id: c9aeea3440679a11741ca78c06b03c57cb815a5e
2021-06-02 05:44:49 -07:00
Xiaodong Wang
4c961beacb Revert D28474878: Always use intrusive_ptr for Message (1 out of 2)
Test Plan: revert-hammer

Differential Revision:
D28474878 (4d704e607d)

Original commit changeset: 5b76d45e05f6

fbshipit-source-id: 677c5bc7f02dca23213f778eb0e626a2f6600f3b
2021-05-21 19:24:22 -07:00
Xiaodong Wang
b8a04e25ec Revert D28474982: Make TP agent use streams from Future when sending response
Test Plan: revert-hammer

Differential Revision:
D28474982 (19a7472702)

Original commit changeset: c0034eb3f2a2

fbshipit-source-id: fb260c71e6c9dd5a2c44121fe4729a4f4418532b
2021-05-21 19:23:01 -07:00
Luca Wehrstedt
19a7472702 Make TP agent use streams from Future when sending response (#58428)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58428

Until now, the TP agent expected the output of a remote function to be on the same streams as the inputs. In other words, it used the lazy stream context of the inputs to synchronize the output tensors. This was true in the most common case of a synchronous remote function. However, it wasn't true for async functions, for fetching RRefs, and so on. The more generic way is to use the CUDA events held by the Future to perform this synchronization. (These events may be on the input streams, or they may not be!)
ghstack-source-id: 129567045

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D28474982

fbshipit-source-id: c0034eb3f2a2ea525efb63a31b839bc086060e7e
2021-05-21 13:15:35 -07:00
Luca Wehrstedt
4d704e607d Always use intrusive_ptr for Message (1 out of 2) (#58422)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58422

Similar to Future (which I tackled recently), Message is an ivalue type (a "custom class" one), and the natural way to represent it is inside an intrusive_ptr. However in the RPC code we had a mix of usages, often passing Message by value. This has undesirable consequences, as it could easily trigger a copy by accident, which I believe is why in many places we accepted _rvalue references_ to Message, in order to force the caller to move. In my experience this is non-idiomatic in C++ (normally a function signature specifies how the function consumes its arguments, and it's up to the caller to then decide whether to copy or move).

By moving to intrusive_ptr everywhere, I think we eliminate or at least simplify many of the problems above.

In this PR I do half of the migration, by updating everything except the `toMessageImpl` methods, which will come in the next PR.
ghstack-source-id: 129567053

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D28474878

fbshipit-source-id: 5b76d45e05f6fa58c831e369c5c964d126187a6c
2021-05-21 13:15:24 -07:00
Shen Li
cf7a0e5af4 Use RPC context streams to cover serde ops (#57926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57926

Test Plan: Imported from OSS

Reviewed By: lw

Differential Revision: D28316526

Pulled By: mrshenli

fbshipit-source-id: 1907ec8f46e40fa5049d810c6ad959263361b6aa
2021-05-11 07:07:51 -07:00
Luca Wehrstedt
0422e67336 Use Devices instead of DeviceIndexes in TensorPipe agent (#57294)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57294

With the advent of CPUs in the device maps, it's easier to use Devices instead of DeviceIndices: it is more generic (e.g., it can support AMD GPUs) and it avoids conversions when passing to Future, RRef, and the like. This started as a migration of just the TensorPipe agent, but the RPC layer is quite intertwined, so I had to migrate a lot of stuff.
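
For context, a tiny example of why a full c10::Device is more expressive than a bare index; this is standard c10 usage, not code from the PR:

```
#include <c10/core/Device.h>
#include <iostream>

int main() {
  // A Device carries both type and index, so CPU and CUDA (and other
  // backends) flow through the same code paths; a DeviceIndex alone
  // cannot distinguish them.
  c10::Device cuda0(c10::DeviceType::CUDA, 0);
  c10::Device cpu(c10::DeviceType::CPU);
  std::cout << cuda0.str() << ' ' << cpu.str() << '\n'; // cuda:0 cpu
}
```
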
ghstack-source-id: 127916562

Test Plan: CI

Reviewed By: mrshenli

Differential Revision: D28092733

fbshipit-source-id: 024dcb3648c5898ab13e770413c43958f04f1a8a
2021-05-01 16:12:55 -07:00
Lucas Hosseini
8868f9c8e3 [TensorPipe] Use targetDevice in tensorpipe_agent. (#56346)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56346

Now that TensorPipe's API has `targetDevice`, use that instead of
manually writing the CUDA device index in `metadata`.
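
A before/after sketch of the idea; the structs are simplified stand-ins for the TensorPipe types, not their real definitions:

```
#include <string>

struct Device { std::string type; int index; };
struct TensorDescriptor {
  std::string metadata; // before: the device index was encoded here as text
  Device targetDevice;  // after: a first-class field carries it
};

TensorDescriptor describe(int cudaIndex) {
  TensorDescriptor t;
  // Before: t.metadata = std::to_string(cudaIndex);
  t.targetDevice = Device{"cuda", cudaIndex};
  return t;
}
```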

Test Plan: CI

Reviewed By: lw

Differential Revision: D27703235

fbshipit-source-id: c5b620e3b3ce619367412efdbe9fa3778f6b8869
2021-04-20 11:54:13 -07:00
Lucas Hosseini
3802e577fb [TensorPipe] Use Descriptor::Tensor::sourceDevice in tensorpipe_agent. (#55821)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55821

Test Plan: CI

Reviewed By: lw

Differential Revision: D27661608

fbshipit-source-id: fd241f073d8928528a749758c7d0f570dfeb677b
2021-04-15 03:21:26 -07:00
Lucas Hosseini
047164437e [TensorPipe] Prepare for new Pipe API. (#55820)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55820

Test Plan: CI

Reviewed By: lw

Differential Revision: D27648291

fbshipit-source-id: e08db6e8c1f5f333ec355de29e25fbe552904b25
2021-04-15 03:20:32 -07:00
Lucas Hosseini
09f1f14569 Transition to new tensorpipe::Pipe API. (#55193)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55193

Test Plan: CI

Reviewed By: lw

Differential Revision: D27466387

fbshipit-source-id: 07b831d699f56874dd45f37e448b8c4244ead5e3
2021-04-02 02:28:07 -07:00
Lucas Hosseini
9d6a81d1a6 Avoid aggregate initialization for tensorpipe::{Cpu,Cuda}Buffer and tensorpipe::Message::Tensor. (#55136)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55136

This will ease the transition to the new API where `Buffer` does not
store a length anymore.
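
A sketch of the initialization change, with a stand-in struct (the point is the initialization style, not the real field list):

```
#include <cstddef>

struct CpuBuffer {
  void* ptr = nullptr;
  size_t length = 0;
};

void example(void* ptr, size_t length) {
  // Aggregate initialization hard-codes the positional field list, so
  // it breaks the moment `length` leaves the struct.
  CpuBuffer before{ptr, length};
  // Member-wise assignment keeps each field on its own line, so
  // dropping `length` later is a one-line deletion at each call site.
  CpuBuffer after;
  after.ptr = ptr;
  after.length = length;
  (void)before; (void)after;
}
```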

Test Plan: CI

Reviewed By: lw

Differential Revision: D27466385

fbshipit-source-id: 9a167f8c501455a3ab49ce75257c69d8b4869925
2021-04-01 06:55:02 -07:00
Lucas Hosseini
a84afb3a7c Use type-erased union for Buffer. (#54251)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54251

Pull Request resolved: https://github.com/pytorch/tensorpipe/pull/324

In order to merge the channel hierarchies, we need a generic `Buffer` type that can wrap either a `CpuBuffer` or a `CudaBuffer`.
The constraint is that, since this type is used by the channels, it cannot explicitly refer to `CudaBuffer`. We propose here a type-erasure-based solution, with a small-buffer optimization to avoid heap-allocating the wrapped concrete buffer.
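
A hedged sketch of the type-erasure idea, without the small-buffer optimization the real implementation adds; all names are illustrative:

```
#include <memory>
#include <utility>

struct CpuBuffer { void* ptr = nullptr; };

// Buffer can wrap any concrete buffer type, so its interface never
// has to name CudaBuffer.
class Buffer {
 public:
  template <typename T>
  /*implicit*/ Buffer(T t) : impl_(std::make_unique<Model<T>>(std::move(t))) {}

  template <typename T>
  T& unwrap() {
    // Unchecked downcast, for the sketch only: the caller must know
    // which concrete type was stored.
    return static_cast<Model<T>&>(*impl_).value;
  }

 private:
  struct Concept { virtual ~Concept() = default; };
  template <typename T>
  struct Model : Concept {
    explicit Model(T t) : value(std::move(t)) {}
    T value;
  };
  std::unique_ptr<Concept> impl_;
};

int main() {
  Buffer b(CpuBuffer{});
  b.unwrap<CpuBuffer>().ptr = nullptr;
}
```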

This is a new version of D27001339 (c618dc13d2) which broke PyTorch OSS build.

Test Plan: CI

Reviewed By: lw, mrshenli

Differential Revision: D27156053

fbshipit-source-id: 4244302af33a3be91dcd06093c0d6045d081d3cc
2021-03-19 04:58:09 -07:00
Mike Ruberry
8caa7889fc Revert D27001339: Use type-erased union for Buffer.
Test Plan: revert-hammer

Differential Revision:
D27001339 (c618dc13d2)

Original commit changeset: 26d7dc19d69d

fbshipit-source-id: 6e036ed7e1f71c9cf20e3361607c4fe4fa2d3d02
2021-03-18 05:27:17 -07:00
Lucas Hosseini
c618dc13d2 Use type-erased union for Buffer. (#322)
Summary:
Pull Request resolved: https://github.com/pytorch/tensorpipe/pull/322

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54145

In order to merge the channel hierarchies, we need a generic `Buffer` type that can wrap either a `CpuBuffer` or a `CudaBuffer`.
The constraint is that, since this type is used by the channels, it cannot explicitly refer to `CudaBuffer`. We propose here a type-erasure-based solution, with a small-buffer optimization to avoid heap-allocating the wrapped concrete buffer.
ghstack-source-id: 124131499

Test Plan: CI

Reviewed By: lw

Differential Revision: D27001339

fbshipit-source-id: 26d7dc19d69d7e3336df6fd4ff6ec118dc17c5b6
2021-03-18 02:23:17 -07:00
Lucas Hosseini
ac8c7c4e9f Make Channel API accept buffer structs rather than raw pointers. (#45014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45014

Pull Request resolved: https://github.com/pytorch/tensorpipe/pull/219

Pull Request resolved: https://github.com/pytorch/tensorpipe/pull/212

+ Introduce buffer.h defining the buffer struct(s). The `CpuBuffer`
struct is always defined, while the `CudaBuffer` struct is defined
only when `TENSORPIPE_SUPPORTS_CUDA` is true.
+ Update all channels to take a `CpuBuffer` or `CudaBuffer` for
`send`/`recv` rather than a raw pointer and a length (a simplified
sketch follows this list).
+ Make the base `Channel`/`Context` classes templated on `TBuffer`,
effectively creating two channel hierarchies (one for CPU channels,
one for CUDA channels).
+ Update the Pipe and the generic channel tests to use the new API. So
far, generic channel tests are CPU only, and tests for the CUDA IPC
channel are (temporarily) disabled. A subsequent PR will take care of
refactoring tests so that generic tests work for CUDA channels. Another
PR will add support for CUDA tensors in the Pipe.
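
A simplified sketch of the resulting shape, using invented minimal structs; the real classes carry more state:

```
#include <cstddef>

struct CpuBuffer { void* ptr = nullptr; size_t length = 0; };
struct CudaBuffer { void* ptr = nullptr; size_t length = 0; /* + stream */ };

template <typename TBuffer>
class Channel {
 public:
  virtual ~Channel() = default;
  // Before: virtual void send(const void* ptr, size_t length) = 0;
  virtual void send(TBuffer buffer) = 0;
  virtual void recv(TBuffer buffer) = 0;
};

// One template, two parallel hierarchies:
using CpuChannel = Channel<CpuBuffer>;
using CudaChannel = Channel<CudaBuffer>;
```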

Differential Revision: D23598033

Test Plan: Imported from OSS

Reviewed By: lw

Pulled By: beauby

fbshipit-source-id: 1d6c3f91e288420858835cd5e7962e8da051b44b
2020-09-21 10:18:45 -07:00
Lucas Hosseini
af3fc9725d Extract rpc/tensorpipe_utils.{cpp,h} from rpc/utils.{cpp,h} (#44803)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44803

Test Plan: CI

Reviewed By: lw

Differential Revision: D23732022

fbshipit-source-id: 5b839c7997bbee162a14d03414ee32baabbc8ece
2020-09-18 13:51:43 -07:00
Shen Li
06aaf8c20d Add set_device_map to TensorPipeOptions to support GPU args (#42637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42637

This commit enables sending non-CPU tensors through RPC using the
TensorPipe backend. Users can configure device mappings by calling
set_device_map on `TensorPipeRpcBackendOptions`. Internally,
the `init_rpc` API verifies the correctness of the device mappings;
it shuts down RPC if the check fails, or proceeds and passes the
global mappings to `TensorPipeAgent` if it succeeds. For serde,
we added a device indices field to the TensorPipe read and write
buffers, which should be either empty (all tensors must be on CPU) or
match the tensors in order and number in the RPC message. This commit
does not yet achieve zero-copy: the tensor is always moved to CPU
on the sender and then moved to the specified device on the receiver.
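
A hedged sketch of the serde invariant just described; the function name and signature are illustrative, not the agent's actual code:

```
#include <c10/core/Device.h>
#include <cstddef>
#include <stdexcept>
#include <vector>

// The device indices attached to the buffers are either empty (all
// tensors on CPU) or line up one-to-one, in order, with the tensors
// in the RPC message.
void checkDeviceIndices(
    const std::vector<c10::DeviceIndex>& deviceIndices,
    size_t numTensors) {
  if (!deviceIndices.empty() && deviceIndices.size() != numTensors) {
    throw std::runtime_error(
        "device indices must be empty or match the message's tensors");
  }
}
```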

Test Plan: Imported from OSS

Reviewed By: izdeby

Differential Revision: D23011572

Pulled By: mrshenli

fbshipit-source-id: 62b617eed91237d4e9926bc8551db78b822a1187
2020-08-14 18:46:55 -07:00
Luca Wehrstedt
72f2ff5950 [TensorPipe] Improve serialization (#39010)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39010

The initial version of the serialization for the TensorPipe RPC agent (i.e., the conversion from rpc::Message to tensorpipe::Message) worked around a limitation of TensorPipe of only allowing one payload per message by pickling each tensor separately and storing the pickles as metadata (which is a less efficient way of sending data over, as it goes through more copies). Having now lifted that limitation, we can improve the way we serialize. We now put the type and the id as their own payloads, we do a single pickling pass for all the tensors of the message (which allows us to deduplicate them), and we store the pickle as a payload. My impression is that pickling is a somewhat costly operation, so reducing the number of times we do it should be beneficial for performance. For this same reason, another change I've done here is to separate the allocation of the buffers from the deserialization. This will allow us (in the future) to perform the allocation on the I/O event loop but perform the unpickling in the worker thread, thus keeping the event loop more responsive.
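
A sketch of the new payload layout described above; the structs are illustrative stand-ins, not tensorpipe's real message types:

```
#include <cstddef>
#include <vector>

struct Payload { const void* data = nullptr; size_t length = 0; };

// Type and id travel as their own payloads, and one pickling pass over
// all tensors (which lets the pickler deduplicate them) produces a
// third payload, instead of one pickle per tensor stashed in metadata.
struct WireMessage {
  std::vector<Payload> payloads; // [0] type, [1] id, [2] combined pickle
  // Tensor storage is attached separately so it can move without
  // extra copies.
};
```
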
ghstack-source-id: 104810740

Test Plan: RPC tests

Differential Revision: D21716067

fbshipit-source-id: c1475cc78afdcf0820a485ffd98c91abb35796c7
2020-05-28 10:48:24 -07:00
Luca Wehrstedt
bc09478a60 [TensorPipe] Use the new multi-payload message API (#37919)
Summary:
In D21209901 TensorPipe added support for a vector of payloads inside each message, instead of a single one, so that users with multiple payloads can send them separately, as they are, instead of having to copy them into a new block of contiguous memory. The PyTorch agent is using the old API, which is preventing us from deleting it. This change has no effect on the over-the-wire format, and thus none on performance.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37919

ghstack-source-id: 103572164

Test Plan:
On both workers
```
import os
import torch
import torch.distributed.rpc as rpc
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "8765"
```
On worker 0
```
rpc.init_rpc(name="foo", rank=0, backend=rpc.backend_registry.BackendType.TENSORPIPE, world_size=2, rpc_backend_options=rpc.TensorPipeRpcBackendOptions(worker_name_to_id={"foo": 0, "bar": 0}))
```
On worker 1
```
rpc.init_rpc(name="bar", rank=1, backend=rpc.backend_registry.BackendType.TENSORPIPE, world_size=2, rpc_backend_options=rpc.TensorPipeRpcBackendOptions(worker_name_to_id={"foo": 0, "bar": 0}))
```
On worker 0
```
In [15]: rpc.rpc_sync("bar", torch.add, args=(torch.full((2,2), 1), torch.full((2,2), 2)))
Out[15]:
tensor([[3., 3.],
        [3., 3.]])

In [16]: rpc.rpc_sync("bar", torch.add, args=(1, 2))
Out[16]: 3
```

Differential Revision: D21425536

fbshipit-source-id: a0ec2be825556b39aff018a2834baf815a6d8fa5
2020-05-07 02:52:30 -07:00
Edward Yang
fe88806784 Back out "Revert D21171334: [pytorch][PR] Change StorageImpl to track byte count rather than element count" (#37893)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37893

Original commit changeset: 50746043acf3

Test Plan: sandcastle and ossci

Reviewed By: malfet, seemethere, ngimel

Differential Revision: D21416509

fbshipit-source-id: 735ec4e61f9d36d4537f52dd2dc6267751aeb94b
2020-05-05 22:43:15 -07:00
Hongyi Jia
3411ec6e32 [TensorPipe/RPC] Serialize and deserialize message (#36197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36197

Create APIs to convert between rpc::Message and tensorpipe::Message (a hedged usage sketch follows this list):
1. tensorpipeSerialize() - converts an rpc::Message to a tensorpipe::Message without copying tensor memory.
2. tensorpipeAllocateMessage() - allocates an rpc::Message based on the received tensorpipe descriptor, to prepare for memory-copy-free receiving.
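
A hedged usage sketch of the flow; all types and signatures below are simplified stand-ins, not the real tensorpipe serialization API:

```
#include <cstddef>
#include <vector>

struct RpcMessage { std::vector<char> payload; };
struct TpDescriptor { size_t payloadLength = 0; };
struct TpMessage { char* payloadPtr = nullptr; size_t payloadLength = 0; };

// Sender side: point the wire message at the rpc message's own buffer,
// so serialization performs no memcpy.
TpMessage tensorpipeSerializeSketch(RpcMessage& rpc) {
  return TpMessage{rpc.payload.data(), rpc.payload.size()};
}

// Receiver side: size the rpc message from the incoming descriptor, so
// the pipe can read the bytes directly into its storage.
RpcMessage tensorpipeAllocateMessageSketch(const TpDescriptor& desc) {
  RpcMessage rpc;
  rpc.payload.resize(desc.payloadLength);
  return rpc;
}
```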

Test Plan: buck test caffe2/test/cpp/rpc:test_tensorpipe_serialization

Reviewed By: lw

Differential Revision: D20084125

fbshipit-source-id: ffbc310f93443e50261aed752be0fe176610dd2a
2020-05-05 05:45:57 -07:00