Bugra Akyildiz
27c7158166
Remove __future__ imports for legacy Python2 supports ( #45033 )
Summary:
There is a tool called `2to3` whose `future` fixer specifically removes these imports; the `caffe2` directory has the most redundant ones:
```
2to3 -f future -w caffe2
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
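The fixer above can delete `__future__` imports because every feature those imports enable is already mandatory on Python 3, so the import lines are no-ops there. A minimal check with the standard-library `__future__` module (nothing Caffe2-specific is assumed):

```python
import __future__

# Each __future__ feature records the release in which it became mandatory.
# All the features commonly imported for Python 2 compatibility are
# mandatory by Python 3.0, which is why `2to3 -f future` can safely
# delete the import lines.
for name in ("absolute_import", "division", "print_function", "unicode_literals"):
    feature = getattr(__future__, name)
    print(name, feature.getMandatoryRelease())
```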
Jerry Zhang
63e77ab6c4
Move numa.{h, cc} to c10/util ( #15024 )
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15024
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
att
Reviewed By: dzhulgakov
Differential Revision: D13380559
fbshipit-source-id: abc3fc7321cf37323f756dfd614c7b41978734e4
2018-12-12 12:21:10 -08:00
Yudong Guang
265b55d028
Revert D13205604: Move numa.{h, cc} to c10/util
Differential Revision: D13205604
Original commit changeset: 54166492d318
fbshipit-source-id: 89b6833518c0b554668c88ae38d97fbc47e2de17
2018-12-07 10:01:25 -08:00
Jerry Zhang
1d111853ae
Move numa.{h, cc} to c10/util ( #14393 )
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
att
Reviewed By: ezyang
Differential Revision: D13205604
fbshipit-source-id: 54166492d31827b0343ed070cc36a825dd86e2ed
2018-12-06 11:30:13 -08:00
Junjie Bai
e290a9d2fd
Back out "Migrate DeviceOption.numa_node_id to DeviceOption.device_id"
Summary: Original commit changeset: 82583d0ad4b8
Reviewed By: enosair, ilia-cher
Differential Revision: D10560741
fbshipit-source-id: e289a37d441bd2243b369810abf451292891d9ee
2018-10-24 17:11:25 -07:00
Junjie Bai
202893fe1a
Migrate DeviceOption.numa_node_id to DeviceOption.device_id
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12717
Reviewed By: ilia-cher
Differential Revision: D10408325
fbshipit-source-id: 82583d0ad4b8db094ee4c5c607b52500826328f7
2018-10-19 12:45:48 -07:00
Junjie Bai
f54ab540af
Rename cuda_gpu_id to device_id in DeviceOption ( #12456 )
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12456
codemod with 'Yes to all'
codemod -d . --extensions h,cc,cpp,cu,py,proto,pbtxt,pb.txt,config cuda_gpu_id device_id
Overload TextFormat::ParseFromString to do string replace when parsing from protobuf format
Reviewed By: Yangqing
Differential Revision: D10240535
fbshipit-source-id: 5e6992bec961214be8dbe26f16f5794154a22b25
2018-10-09 15:54:04 -07:00
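The rename above was done mechanically with `codemod` answering 'Yes to all'. As a rough, non-interactive stand-in for that run, a whole-word find-and-replace over files with the listed extensions can be sketched as follows (`rename_token` is a hypothetical helper, not part of codemod or PyTorch):

```python
import re
from pathlib import Path

def rename_token(root, extensions, old, new):
    """Replace whole-word occurrences of `old` with `new` in files under
    `root` that have one of `extensions` -- a rough stand-in for
    `codemod -d . --extensions ... cuda_gpu_id device_id` answered with
    'Yes to all' (codemod itself prompts per match)."""
    pattern = re.compile(r"\b%s\b" % re.escape(old))
    for ext in extensions:
        for path in Path(root).rglob("*." + ext):
            text = path.read_text()
            replaced = pattern.sub(new, text)
            if replaced != text:
                path.write_text(replaced)
```

The `\b` word boundaries keep the replace from touching identifiers that merely contain `cuda_gpu_id` as a substring.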
Junjie Bai
ff608a9ff3
Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" ( #12232 )
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12232
Original commit changeset: fca91fea58b7
This adds proper modifications to the DeviceType <-> DeviceOption conversion code added in D10033396
Reviewed By: jerryzh168
Differential Revision: D10132473
fbshipit-source-id: 801ef777e2950982cb47b48051b1471a0a91e64b
2018-10-01 21:54:52 -07:00
Rick Ratmansky
3010dc4208
Revert D10123245: Back out "codemod cuda_gpu_id to device_id"
Differential Revision: D10123245
Original commit changeset: d83da8e00a12
fbshipit-source-id: fca91fea58b7df208edc2e218a1d514f9821ec7b
2018-10-01 12:22:36 -07:00
Yang Liu
7d7d336c45
Back out "codemod cuda_gpu_id to device_id"
Summary:
Original commit changeset: f5614a5d2607
D9986213 is causing a [huge performance difference](https://our.intern.facebook.com/intern/ads/analyze_canary/412951953278781781/) in Multifeed Aggregator and has been blocking the aggregator push since last Friday night: https://fburl.com/feedtools/b6izvwjz
We need to land this revert ASAP to unblock aggregator push.
Reviewed By: orionr
Differential Revision: D10123245
fbshipit-source-id: d83da8e00a1250f5d09811a0a587c127e377aab2
2018-10-01 11:31:14 -07:00
Junjie Bai
3eb5940cf5
codemod cuda_gpu_id to device_id ( #12022 )
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12022
codemod -d . --extensions h,cc,cpp,cu,py,proto,pbtxt,pb.txt,config cuda_gpu_id device_id
codemod with 'Yes to all'
Reviewed By: orionr
Differential Revision: D9986213
fbshipit-source-id: f5614a5d26078817aee8caf79a494abfd6a95ff1
2018-09-27 20:24:53 -07:00
Dmytro Dzhulgakov
496c999f7d
[core] NUMA-aware pinned allocator
Using cudaHostRegister/Unregister instead of cudaMallocHost to move memory to a
specific NUMA node
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
9e71de398b
[core] Graph-level NUMA awareness in Caffe2
Adding NUMA awareness through numa_node_id in DeviceOption. Blobs of operators
with numa_node_id are allocated on corr. memory banks, using CPU pools with
NUMA affinity set to run operators.
2018-03-06 00:33:11 -08:00
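The placement scheme described above, routing each blob to the memory bank named by `numa_node_id` in the operator's `DeviceOption`, can be modelled with a toy workspace. Both classes below are hypothetical stand-ins: the real Caffe2 code uses the protobuf `DeviceOption` and binds CPU pools and memory banks via the OS NUMA APIs, which this sketch only simulates with plain dicts:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DeviceOption:
    """Toy stand-in for the numa_node_id field of Caffe2's DeviceOption proto."""
    numa_node_id: int = 0

class NumaAwareWorkspace:
    """Route each blob allocation to the bank named by the operator's
    DeviceOption.numa_node_id (banks modelled as plain dicts, not real
    NUMA memory nodes)."""
    def __init__(self):
        self.banks = defaultdict(dict)  # numa_node_id -> {blob_name: data}

    def create_blob(self, name, data, device_option):
        self.banks[device_option.numa_node_id][name] = data

ws = NumaAwareWorkspace()
ws.create_blob("weights", [0.0] * 4, DeviceOption(numa_node_id=1))
```

In the real system the pay-off is locality: an operator pinned to a CPU pool on node 1 reads its blobs from node 1's memory bank instead of crossing the interconnect.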