Commit Graph

174 Commits

Author SHA1 Message Date
Igor Sugak
93e5065ba0 [CODEMOD][caffe2] replace numpy.bool with bool (#111432)
Test Plan:
`numpy.bool` was deprecated in numpy-1.20.0 [1] and has since been removed. This replaces all references with the equivalent builtin `bool` type using the following one-liner:
```
rg -l 'np\.bool' caffe2 | grep '\.py$' | xargs perl -pi -e 's,\bnp\.bool\b,bool,'
```
1. https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
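
For illustration, a minimal before/after of what the codemod changes (the variable name is hypothetical):
```python
import numpy as np

# Before (breaks on modern NumPy, where the np.bool alias is gone):
#   mask = np.zeros(4, dtype=np.bool)
# After: the builtin bool is exactly what np.bool aliased.
mask = np.zeros(4, dtype=bool)
print(mask.dtype)  # bool
```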

Differential Revision: D50372711

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111432
Approved by: https://github.com/Skylion007
2023-10-18 18:56:40 +00:00
Jeffrey Dunn
25d657c701 Fix possible naming collision issue (#107743)
Summary: As pointed out in https://github.com/pytorch/pytorch/pull/107479, using only counters permits collisions like "a" => "a", "a" => "a_1", "a_1" => "a_1" (which should instead go to "a_1_1"); the old set-based check prevented these. We can combine counters and a set to avoid this problem. This still gets us the performance benefit in the case of collisions, with a very minor penalty in the no-collision case.

Test Plan:
Extract this code and run:
```
# New version
from typing import Dict, Set

class Net:
    _net_names_used_counters: Dict[str, int] = {}
    _net_names_used: Set[str] = set()

    @staticmethod
    def current_prefix():
        return "test_prefix"

    @staticmethod
    def _get_next_net_name(basename):
        basename = "/".join(x for x in [Net.current_prefix(), basename] if x)
        idx = Net._net_names_used_counters.get(basename, 0)
        while (name := basename if idx == 0 else f"{basename}_{idx}") in Net._net_names_used:
            idx += 1
        Net._net_names_used_counters[basename] = idx + 1
        Net._net_names_used.add(name)
        return name

print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("x_basename"))
print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("x_basename"))
print(Net._get_next_net_name("basename_1"))

> test_prefix/basename
> test_prefix/x_basename
> test_prefix/basename_1
> test_prefix/basename_2
> test_prefix/x_basename_1
> test_prefix/basename_1_1
```

Differential Revision: D48576516

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107743
Approved by: https://github.com/zdevito
2023-09-08 17:39:27 +00:00
Jeffrey Dunn
1e9b590df9 Optimize Net._get_next_net_name (#107479)
Summary: This is surprisingly expensive and can be easily optimized.

Differential Revision: D48440000

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107479
Approved by: https://github.com/kit1980
2023-08-22 19:15:11 +00:00
Omkar Salpekar
ae1ed27756 [codemod][numpy] replace np.str with str (#103931)
Summary:
`np.str` was deprecated in numpy 1.20.0 and has since been removed. It was an alias to the builtin `str` and it's safe to do the replacement.

The whole change is mechanical, generated using the following one-liner:
```
fbgr -sl 'np\.str\b' | xargs perl -pi -e 's,\bnp\.str\b,str,g'
```

Test Plan: sandcastle

Differential Revision: D46586144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103931
Approved by: https://github.com/huydhn
2023-06-21 18:16:42 +00:00
Aaron Gokaslan
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: removes the need to inherit from object and removes unused future imports.
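
For illustration, the object-inheritance half of this change (a hypothetical class, not actual PR contents):
```python
# Before (Python-2 style, kept only for 2.x compatibility):
#   class Workspace(object):
#       pass
# After: in Python 3 every class inherits from object implicitly.
class Workspace:
    pass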

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
Nikita Shulga
fde220ca44 [BE] Get rid of six in caffe2 code (#93956)
Mostly `s/string_types/str/`, `s/binary_types/bytes/`, and `s/text_types/str/`.
Also `y.extend([str(x) for x in foo])` -> `y.extend(map(str, foo))`,
as Python 2 is long dead.
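
For illustration, the shape of these rewrites (values are hypothetical):
```python
x, foo, y = "blob_name", [1, 2, 3], []

# Before: from six import string_types; isinstance(x, string_types)
if isinstance(x, str):
    # Before: y.extend([str(v) for v in foo])
    y.extend(map(str, foo))
print(y)  # ['1', '2', '3']
```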

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93956
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-02-02 22:13:37 +00:00
Nikita Shulga
1906eaf22f [BE] Get rid of future (#92596)
PyTorch has been Python-3.X+ for ages, so it's a shame to still rely on `future.utils` even in a deprecated Caffe2 codebase

For the reference:
https://peps.python.org/pep-0469/#migrating-directly-to-python-3
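
For illustration, the PEP 469 style of migration this implies (a hypothetical call site):
```python
d = {"lr": 0.1, "momentum": 0.9}

# Before: from future.utils import viewitems; for k, v in viewitems(d): ...
# After: Python-3 dict views are the default.
for k, v in d.items():
    print(k, v)
```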

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92596
Approved by: https://github.com/kit1980, https://github.com/orionr
2023-01-19 08:46:50 +00:00
Ram Rachum
351d73b97f Fix exception causes all over the codebase (#90271)
This is the continuation to #90134 and hopefully the final PR in this series.
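
The pattern being fixed is re-raising inside an `except` block without `from`, which loses the original cause; a minimal sketch (function and message are hypothetical):
```python
def parse_device(spec: str) -> int:
    try:
        return int(spec.split(":")[1])
    except (IndexError, ValueError) as err:
        # Before: raise ValueError(...)  -- original traceback context lost
        raise ValueError(f"malformed device spec: {spec!r}") from err

print(parse_device("cuda:0"))  # 0
```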

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90271
Approved by: https://github.com/kit1980
2022-12-07 04:29:00 +00:00
Atul Jangra
564905c8e1 [Caffe2] Fix the assert message (#89816)
Summary:
As title: `dev1/2` is invalid; it should be `dev_1/2` instead.

Test Plan: Sandcastle

Differential Revision: D41569982

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89816
Approved by: https://github.com/PaliC
2022-12-05 23:40:08 +00:00
Stephen Macke
3d3ad0a52f [easy] add an inplace argument to MutableNetProto.to_net() and core.Net() constructor (#63068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63068

The caffe2 core.Net constructor can accept a caffe2_pb2.NetDef proto, but it always creates a copy. This is wasteful when we can prove that the proto being passed to it will not be used anywhere else. So we add an "inplace" argument to the `core.Net` constructor that allows clients to give away ownership of the passed proto without copying. We default this argument to `False`, ensuring that behavior does not change unless explicitly requested.
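
A hedged sketch of the intended call site, assuming the argument is exposed directly on the constructor as described (requires a caffe2 build to run):
```python
from caffe2.proto import caffe2_pb2
from caffe2.python import core

proto = caffe2_pb2.NetDef()
proto.name = "my_net"
# Give away ownership of `proto`: core.Net wraps it without the defensive
# copy it would otherwise make. Don't reuse `proto` afterwards.
net = core.Net(proto, inplace=True)
```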

Test Plan: Let CI run.

Differential Revision: D29976510

fbshipit-source-id: 26e13ca76f3431b8ef0de51f08bbf263491d323e
2021-08-11 11:10:52 -07:00
Adam Simpkins
f7aa88b400 [caffe2] Explicitly define all DataTypes in python/core.py (#51768)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51768

This updates python/core.py to explicitly define all of the `DataType`
values rather than dynamically defining them at runtime from the
`caffe2_pb2` values.

This allows type checkers like Pyre and Mypy to see the members of the
`DataType` class.  Otherwise the type checkers report errors such as
`"core.DataType" has no attribute "INT64"`.

This code does keep a run-time check that all of the data types defined
by `caffe2_pb2.proto` are defined correctly in this file.  This way if
someone does add a new type to `caffe2_pb2.proto` it should be very
quickly apparent that this file needs to be updated and kept in sync.
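
A standalone sketch of that pattern, with illustrative member values standing in for the generated `caffe2_pb2` enum:
```python
# Stand-in for the protobuf-generated enum (name -> number).
_GENERATED_ENUM = {"FLOAT": 1, "INT32": 2, "INT64": 10}

class DataType:
    # Declared statically so Pyre/Mypy can see the members.
    FLOAT = 1
    INT32 = 2
    INT64 = 10

# Import-time sync check: adding a value to the proto without updating
# this class fails immediately.
for _name, _number in _GENERATED_ENUM.items():
    assert getattr(DataType, _name, None) == _number, f"DataType.{_name} is out of sync"
```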
ghstack-source-id: 121936201

Test Plan:
Confirmed that various caffe2/python tests still pass.
Verified that this allows many `pyre-fixme` comments to be removed in
downstream projects, and that Pyre is still clean for these projects.

Reviewed By: jeffdunn

Differential Revision: D26271725

Pulled By: simpkins

fbshipit-source-id: f9e95795de60aba67d7d3872d0c141ed82ba8e39
2021-02-17 20:54:17 -08:00
Tristan Rice
6eaf1e358c caffe2/core.Net: is_external_input rebuild lookup tables when necessary
Summary: is_external_input doesn't check whether the lookup tables are valid. Calling .Proto() should invalidate all lookup tables and have them rebuilt on the next call to any method depending on them. This adds that check to is_external_input.
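
A standalone sketch of the invalidate-and-rebuild contract described here (names simplified; not caffe2's real classes):
```python
class Net:
    def __init__(self, external_inputs):
        self._proto = {"external_input": list(external_inputs)}
        self._lookup_valid = False
        self._external_input_set = set()

    def Proto(self):
        # The caller may mutate the returned proto, so any cached
        # lookup tables must be treated as stale.
        self._lookup_valid = False
        return self._proto

    def is_external_input(self, name):
        if not self._lookup_valid:  # the fix: rebuild before answering
            self._external_input_set = set(self._proto["external_input"])
            self._lookup_valid = True
        return name in self._external_input_set

net = Net(["x"])
net.Proto()["external_input"].append("y")
print(net.is_external_input("y"))  # True
```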

Test Plan: internal unit tests

Reviewed By: dzhulgakov, esqu1

Differential Revision: D25100464

fbshipit-source-id: d792dec7e5aa9ffeafda88350e05cb757f4c4831
2020-11-20 10:53:24 -08:00
Tristan Rice
b10d6c6089 [caffe2] cache NextName indexes for faster name generation (#47768)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47768

This stores the next ID for a given NextName(prefix, output_id) so repeated calls to NextName are significantly faster. This accounts for ~65% of the time spent on large models.

Test Plan:
buck test //caffe2/caffe2/python/...

will launch canary job before landing to ensure no regressions + confirm speedup

Reviewed By: dzhulgakov

Differential Revision: D24876961

fbshipit-source-id: 668d73060d800513bc72d7cd405a47d15c4acc34
2020-11-17 12:24:00 -08:00
Gary Zheng
4a58f35bef [caffe2] Fix duplicate name bug in Net.AddExternalInput (#47530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47530

`Net.AddExternalInput` should raise if there are duplicate names. The previous code would only raise if the addition of duplicates was in separate calls, but not if it was in the same call.
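
A standalone sketch of the fixed check, covering duplicates both across calls and within a single call (names are illustrative):
```python
def add_external_inputs(registered, *names):
    seen = set(registered)
    for name in names:
        if name in seen:  # catches same-call duplicates too
            raise ValueError(f"duplicate external input: {name}")
        seen.add(name)
    registered.extend(names)

inputs = []
add_external_inputs(inputs, "a", "b")      # ok
try:
    add_external_inputs(inputs, "c", "c")  # same-call duplicate now raises
except ValueError as err:
    print(err)
```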

Test Plan:
Added two new regression tests

```
    ✓ Pass: caffe2/caffe2/python:core_test - testSetInputRecordWithBlobs (caffe2.caffe2.python.core_test.TestExternalInputs) (9.622)
    ✓ Pass: caffe2/caffe2/python:core_test - testAddExternalInputShouldRaiseIfDuplicate (caffe2.caffe2.python.core_test.TestExternalInputs) (9.639)
    ✓ Pass: caffe2/caffe2/python:core_test - testSetInputRecordWithoutBlobs (caffe2.caffe2.python.core_test.TestExternalInputs) (9.883)
    ✓ Pass: caffe2/caffe2/python:core_test - testAddExternalInputShouldRaiseIfDuplicateInSameCall (caffe2.caffe2.python.core_test.TestExternalInputs) (10.153)
```

Test trained 2 models. No issues

f230755456
f230754926

Reviewed By: dzhulgakov

Differential Revision: D24763586

fbshipit-source-id: c87088441d76f7198f8b07508b2607aec13521ed
2020-11-09 08:30:58 -08:00
Tristan Rice
47198e3208 [caffe2] improve core.Net cloning/init performance (24x for large models!) (#47475)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47475

This improves the core.Net cloning/init performance by quite a bit. It makes set_input_record run in linear time instead of O(n^2) by checking the external_input map instead of regenerating the external inputs on every call and then iterating over them.
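
The gist of the change, as a standalone sketch: maintain a set alongside the proto's input list so the check is an O(1) membership test rather than a per-call regeneration:
```python
external_inputs = []        # ordered list, as kept in the proto
external_input_set = set()  # maintained alongside it

def add_external_input(name):
    if name not in external_input_set:  # O(1) lookup
        external_inputs.append(name)
        external_input_set.add(name)

for field in ["f1", "f2", "f1"]:
    add_external_input(field)
print(external_inputs)  # ['f1', 'f2']
```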

Test Plan: unit tests + canary runs

Reviewed By: dzhulgakov

Differential Revision: D24765346

fbshipit-source-id: 92d9f6dec158512bd50513b78675174686f0f411
2020-11-06 11:34:12 -08:00
Yunfan Zhong
e519fcd1aa Remap net name inside arg.n for AsyncIf operator
Summary: Similar to the If operator, AsyncIf also contains nets in its args and needs the same handling.

Test Plan:
New unit test test_control_op_remap
`buck test caffe2/caffe2/python:core_test`

Also it worked end to end in prototype of dist bulk eval workflow f226680903

Reviewed By: yyetim

Differential Revision: D24451775

fbshipit-source-id: 50594e2ab9bb457329ed8da7b035f7409461b5f6
2020-10-23 10:41:06 -07:00
Alexander Grund
93719440b8 Replace map(lambda constructs (#46462)
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal

Makes them more readable and possibly faster. Care has to be taken because in Python 2 `map` applies the function immediately and returns a list, while `(x for x in xs)` is a generator expression which gets evaluated lazily. This is a benefit in cases where the list of values never needs to be materialized in memory (e.g. when passing to `tuple`, `extend`, or `join`).
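
A minimal example of the rewrite:
```python
xs = [1, 2, 3]

# Before:
doubled = map(lambda x: x * 2, xs)
# After: a generator expression reads better and avoids the lambda call.
doubled = (x * 2 for x in xs)
print(tuple(doubled))  # (2, 4, 6)
```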

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462

Reviewed By: zou3519

Differential Revision: D24422343

Pulled By: ezyang

fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
2020-10-22 09:50:22 -07:00
Bugra Akyildiz
27c7158166 Remove __future__ imports for legacy Python2 supports (#45033)
Summary:
The `2to3` tool has a `future` fixer that removes exactly these imports; the `caffe2` directory has the most redundant ones:

```2to3 -f future -w caffe2```
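
For illustration, what the fixer strips (a hypothetical module):
```python
# Before: boilerplate carried by nearly every caffe2 module.
from __future__ import absolute_import, division, print_function, unicode_literals

def half(x):
    return x / 2  # true division is the Python-3 default anyway

# `2to3 -f future -w caffe2` simply deletes the __future__ import line.
```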

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033

Reviewed By: seemethere

Differential Revision: D23808648

Pulled By: bugra

fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
2020-09-23 17:57:02 -07:00
Xing Wang
27b03d62de [HT] Clear the device placement tag for the auto gen sum so that we could break the component for FC sharing the same input (#42219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42219

Introduce a new extra info entry that is tagged on the forward net for operators sharing the same input. The effect is that the auto-generated gradient Sum for that input will not follow the device tags of the operators in the forward net. This allows more flexible device allocation.

Test Plan:
# unit test
`./buck-out/gen/caffe2/caffe2/python/core_gradients_test#binary.par -r  testMultiUseInputAutoGenSumDevice`

Reviewed By: xianjiec, boryiingsu

Differential Revision: D22609080

fbshipit-source-id: d558145e5eb36295580a70e1ee3a822504dd439a
2020-07-29 15:21:27 -07:00
Jiyan Yang
c062cdbd90 Log the net if blob doesn't exist when setting output record (#41971)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41971

Reviewed By: wx1988

Differential Revision: D22490309

fbshipit-source-id: d967ee211b610f5523a307b5266b9fcb0277a21c
2020-07-27 19:13:50 -07:00
Colin L Reliability Rice
dfa914a90c Modify lazy_dyndep loading to trigger inside workspace. (#41687)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41687

Specifically, this makes a new library (lazy), which can be used from both core
and workspace.

This allows workspace.CreateNet to trigger lazy loading of dyndep dependencies.

Test Plan: Added a unit test specifically for workspace.CreateNet

Reviewed By: dzhulgakov

Differential Revision: D22441877

fbshipit-source-id: 3a9d1af9962585d08ea2566c9c85bec7377d39f2
2020-07-22 15:36:43 -07:00
Colin L Reliability Rice
415ff0bceb Create lazy_dyndeps to avoid caffe2 import costs. (#41343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41343

Currently caffe2.InitOpLibrary does the dll import unilaterally. Instead, if we make a lazy version and use it, then many pieces of code which do not need the caffe2 operators get a lot faster.

On a real test, the import time went from 140s to 68s.

This also cleans up the algorithm slightly (although it makes a very minimal
difference), by parsing the list of operators once, rather than every time a
new operator is added, since we defer the RefreshCall until after we've
imported all the operators.

The key way we maintain safety is that as soon as someone does an operation
which requires an operator (or could), we force importing of all available
operators.

Future work could include trying to identify which code is needed for which
operator and only import the needed ones. There may also be wins available by
playing with dlmopen (which opens within a namespace), or seeing if the dl
flags have an impact (I tried this and didn't see an impact, but dlmopen may
make it better).

Note that this was previously landed and reverted. The issue was that if an import failed and raised an exception, the specific library would not be removed from the lazy imports. This caused tests containing failing libraries to poison all other tests that ran after them. This has been fixed and a unit test has been added for this case (to help make it obvious what failed).
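
A standalone sketch of the scheme (illustrative names, not caffe2's API): registration is deferred, any operator use forces all pending imports, and a failing library is removed from the pending list before its import is attempted, so one bad library cannot poison later calls:
```python
_pending_libs = []

def lazy_init_op_library(path):
    _pending_libs.append(path)  # defer the expensive dlopen

def _do_import(lib):
    print(f"loading {lib}")     # stands in for the real dyndep load

def _import_lazy():
    while _pending_libs:
        lib = _pending_libs.pop(0)  # remove first: a raising import
        _do_import(lib)             # must not stay queued forever

def create_net(name):
    _import_lazy()  # first real use pays the import cost
    return name

lazy_init_op_library("//ops:custom")
create_net("train")  # triggers "loading //ops:custom"
create_net("eval")   # pending list already drained; no extra work
```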

Test Plan:
I added a new test, lazy_dyndep_test.py (copied from all_compare_test.py).
I'm a little concerned that I don't see any explicit tests for dyndep, but this
should provide decent coverage.

I've added a specific test to handle the poisoning issues mentioned above, which caused the previous version to get reverted.

Differential Revision: D22506369

fbshipit-source-id: 7395df4778e8eb0220630c570360b99a7d60eb83
2020-07-16 15:17:41 -07:00
Nikita Shulga
1f1351488e Revert D21870844: Create lazy_dyndeps to avoid caffe2 import costs.
Test Plan: revert-hammer

Differential Revision:
D21870844 (07fd5f8ff9)

Original commit changeset: 3f65fedb65bb

fbshipit-source-id: 4f661072d72486a9c14711e368247b3d30e28af9
2020-07-09 14:18:38 -07:00
Colin L Reliability Rice
07fd5f8ff9 Create lazy_dyndeps to avoid caffe2 import costs. (#39488)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39488

Currently caffe2.InitOpLibrary does the dll import unilaterally. Instead, if we make a lazy version and use it, then many pieces of code which do not need the caffe2 operators get a lot faster.

On a real test, the import time went from 140s to 68s.

This also cleans up the algorithm slightly (although it makes a very minimal
difference), by parsing the list of operators once, rather than every time a
new operator is added, since we defer the RefreshCall until after we've
imported all the operators.

The key way we maintain safety is that as soon as someone does an operation
which requires an operator (or could), we force importing of all available
operators.

Future work could include trying to identify which code is needed for which
operator and only import the needed ones. There may also be wins available by
playing with dlmopen (which opens within a namespace), or seeing if the dl
flags have an impact (I tried this and didn't see an impact, but dlmopen may
make it better).

Test Plan:
I added a new test, lazy_dyndep_test.py (copied from all_compare_test.py).
I'm a little concerned that I don't see any explicit tests for dyndep, but this
should provide decent coverage.

Differential Revision: D21870844

fbshipit-source-id: 3f65fedb65bb48663670349cee5e1d3e22d560ed
2020-07-09 11:34:57 -07:00
Xianjie Chen
0dc0fffca1 [net_transform] only skip ConstantFill for autogen_grad (#34628)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/34628

Differential Revision: D20370564

fbshipit-source-id: 854c8ab44ba262e5020383447ed6bb629064ec33
2020-03-11 19:09:52 -07:00
Tim Gates
0392e8384b Fix simple typo: whos -> whose (#31288)
Summary:
Closes https://github.com/pytorch/pytorch/issues/31287
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31288

Differential Revision: D19166753

Pulled By: zou3519

fbshipit-source-id: da31ad323b8fafa7cbc502fda4e2eb6e02facfb6
2020-01-15 11:47:21 -08:00
Aapo Kyrola
aeb6532e7f BlobReference __getattr__ can only throw AttributeError (#26654)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26654

As per the Python contract, __getattr__ can only throw AttributeError. Throwing something else breaks hasattr() and causes upstream issues.

A similar bug was in PyTorch earlier.
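
A minimal sketch of the contract (simplified class):
```python
class BlobReference:
    def __getattr__(self, op_type):
        # Must be AttributeError: hasattr() treats any other exception
        # as a real error instead of "attribute absent".
        raise AttributeError(f"no attribute or operator named {op_type!r}")

print(hasattr(BlobReference(), "Sum"))  # False, instead of blowing up
```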

Test Plan: builds

Differential Revision: D17529471

fbshipit-source-id: bb6ac6c9e3be8b80fa2967e6a2e293afd1594cf9
2019-09-23 13:01:00 -07:00
Andrey Malevich
28d3eb8156 Back out "Back out "[Caffe2] Fix device_option propagation"" (#25908)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25908

Original commit changeset: f6e961e88c01

device_option propagation is completely broken in Caffe2 for cases when pass-through operators are used. As an example, the Gather operator doesn't have a gradient and passes through its inputs, which results in incorrect detection of the components for sparse parameter aggregation (the component will be empty instead of the real device).
This diff is trying to fix this issue.

The original diff had a problem: Caffe2 does not handle cases when a device option is present but contains only metadata (for example the one for auto-generated reduction ops in the backward pass). This diff addresses that by merging device options during the backward pass.

Test Plan:
1. net_transform is finally working with Gather + FloatToHalf transformed model instead of failing because of incorrect number of components.
2. New unit-test.
3. Verify that previously broken benchmark is now passing

ezyang, do you have suggestions on what else I should test?

Reviewed By: ezyang

Differential Revision: D17281528

fbshipit-source-id: 4a1bc386f29f6a34fbf8008effde9d4890abebfa
2019-09-17 04:01:36 -07:00
Edward Yang
f70ef229ce Back out "[Caffe2] Fix device_option propagation"
Summary: Original commit changeset: 916551b93346

Test Plan: none

Reviewed By: nairbv

Differential Revision: D17259017

fbshipit-source-id: f6e961e88c01126393ed2b6be0adeb6fcc68cb3c
2019-09-09 07:22:42 -07:00
Andrey Malevich
bd0e564d40 Fix device_option propagation (#25203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25203

device_option propagation is completely broken in Caffe2 for cases when
pass-through operators are used. As an example, the Gather operator doesn't
have a gradient and passes through its inputs, which results in incorrect
detection of the components for sparse parameter aggregation (the component
will be empty instead of the real device).

This diff is trying to fix this issue.

Test Plan:
net_transform is finally working with Gather + FloatToHalf transformed model
instead of failing because of incorrect number of components.

Reviewed By: dzhulgakov

Differential Revision: D16936041

fbshipit-source-id: 916551b933469f04e32ddf86ec4b2c07f76c9176
2019-09-06 19:05:04 -07:00
Xianjie Chen
2dd1323379 Fix the GPU trainer for NoneCalibration and RNN
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22385

Reviewed By: Wakeupbuddy

Differential Revision: D16053190

fbshipit-source-id: 6304c5c51f33691c201c78d4c921a9c250d9b4f5
2019-07-01 22:55:18 -07:00
Xianjie Chen
d74b11ce0e add extra info for the auto gen sum ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17934

Reviewed By: iroot900

Differential Revision: D14418689

fbshipit-source-id: 9e11e461001467f0000ea7c355d5b0f0d738fa85
2019-03-27 14:56:32 -07:00
Nikita Shulga
0799a81cb7 Extend Net.RunAllOnGPU() to support RecurrentNetwork op (#15713)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15713

[caffe2] Extend Net.RunAllOnGPU() to support RecurrentNetwork op

Reviewed By: dzhulgakov

Differential Revision: D13576507

fbshipit-source-id: f517127492c9d516ece663d42fef84338c70344e
2019-02-08 15:48:42 -08:00
Jerry Zhang
d5d7718770 fix scope related naming issue in build_quant_conv_bn_relu, and also format function signature
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14885

Reviewed By: harouwu

Differential Revision: D13374077

fbshipit-source-id: 5082c4ea0d2fdc197243b022b9b489f38b04c8e9
2019-01-31 15:53:27 -08:00
Yiming Wu
a1494efdfa fix auto grad summing for IfOp where intermediate output needs renaming (#14772)
Summary:
fix auto grad summing for IfOp where an intermediate output needs renaming.

Bug before this diff:
- we only renamed the output of IfOp without changing the subnet ops' outputs
- this resulted in a blob-not-found error

The unit test provides an example; this diff fixes that for IfOp.
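
A standalone sketch of the fix, with ops as plain dicts rather than caffe2's proto types: renaming an output of a control op must recurse into its subnets.
```python
def rename_output(op, old, new):
    op["output"] = [new if o == old else o for o in op["output"]]
    for subnet in op.get("subnets", []):  # the fix: rename inside subnets too
        for sub_op in subnet:
            rename_output(sub_op, old, new)

if_op = {"output": ["y"], "subnets": [[{"output": ["y"]}]]}
rename_output(if_op, "y", "y_autosplit_0")
print(if_op["subnets"][0][0]["output"])  # ['y_autosplit_0'] -- no dangling blob
```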
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14772

Differential Revision: D13327090

Pulled By: harouwu

fbshipit-source-id: ec40ee88526ace3619c54551e223dd71158a02f8
2018-12-09 08:26:46 -08:00
rohithkrn
0d663cec30 Unify cuda and hip device types in Caffe2 python front end (#14221)
Summary:
The goal of this PR is to unify the cuda and hip device types in the Caffe2 Python front end.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14221

Differential Revision: D13148564

Pulled By: bddppq

fbshipit-source-id: ef9bd2c7d238200165f217097ac5727e686d887b
2018-11-29 14:00:16 -08:00
Junjie Bai
e290a9d2fd Back out "Migrate DeviceOption.numa_node_id to DeviceOption.device_id"
Summary: Original commit changeset: 82583d0ad4b8

Reviewed By: enosair, ilia-cher

Differential Revision: D10560741

fbshipit-source-id: e289a37d441bd2243b369810abf451292891d9ee
2018-10-24 17:11:25 -07:00
Junjie Bai
202893fe1a Migrate DeviceOption.numa_node_id to DeviceOption.device_id
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12717

Reviewed By: ilia-cher

Differential Revision: D10408325

fbshipit-source-id: 82583d0ad4b8db094ee4c5c607b52500826328f7
2018-10-19 12:45:48 -07:00
Junjie Bai
f54ab540af Rename cuda_gpu_id to device_id in DeviceOption (#12456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12456

codemod with 'Yes to all'
codemod -d . --extensions h,cc,cpp,cu,py,proto,pbtxt,pb.txt,config cuda_gpu_id device_id

Overload TextFormat::ParseFromString to do string replace when parsing from protobuf format

Reviewed By: Yangqing

Differential Revision: D10240535

fbshipit-source-id: 5e6992bec961214be8dbe26f16f5794154a22b25
2018-10-09 15:54:04 -07:00
Junjie Bai
ff608a9ff3 Back out "Revert D10123245: Back out "codemod cuda_gpu_id to device_id"" (#12232)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12232

Original commit changeset: fca91fea58b7

This adds proper modifications to the DeviceType <->DeviceOption conversion code added in D10033396

Reviewed By: jerryzh168

Differential Revision: D10132473

fbshipit-source-id: 801ef777e2950982cb47b48051b1471a0a91e64b
2018-10-01 21:54:52 -07:00
Rick Ratmansky
3010dc4208 Revert D10123245: Back out "codemod cuda_gpu_id to device_id"
Differential Revision:
D10123245

Original commit changeset: d83da8e00a12

fbshipit-source-id: fca91fea58b7df208edc2e218a1d514f9821ec7b
2018-10-01 12:22:36 -07:00
Yang Liu
7d7d336c45 Back out "codemod cuda_gpu_id to device_id"
Summary:
Original commit changeset: f5614a5d2607

D9986213 is causing Multifeed Aggregator a [huge performance difference](https://our.intern.facebook.com/intern/ads/analyze_canary/412951953278781781/) and has been blocking the aggregator push since last Friday night: https://fburl.com/feedtools/b6izvwjz
We need to land this revert ASAP to unblock the aggregator push.

Reviewed By: orionr

Differential Revision: D10123245

fbshipit-source-id: d83da8e00a1250f5d09811a0a587c127e377aab2
2018-10-01 11:31:14 -07:00
Junjie Bai
3eb5940cf5 codemod cuda_gpu_id to device_id (#12022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12022

codemod -d . --extensions h,cc,cpp,cu,py,proto,pbtxt,pb.txt,config cuda_gpu_id device_id

codemod with 'Yes to all'

Reviewed By: orionr

Differential Revision: D9986213

fbshipit-source-id: f5614a5d26078817aee8caf79a494abfd6a95ff1
2018-09-27 20:24:53 -07:00
Xianjie Chen
b885dea300 parallize the dense part in event models
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10768

Reviewed By: Wakeupbuddy

Differential Revision: D9445750

fbshipit-source-id: b8c2ddfe3ccb9278506de15a5e43bada016408f7
2018-08-22 22:40:07 -07:00
Yiming Wu
e5e2514f4e fix debug_info arg in createOperator and improve reroute_tensor (#10736)
Summary:
- Fixed C2 core.CreateOperator debug info assignment
- Improved core.Net.reroute_tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10736

Differential Revision: D9426659

Pulled By: harouwu

fbshipit-source-id: 90caf848c88854e17e568d5f6910dc6c81fd000a
2018-08-21 19:40:16 -07:00
Yiming Wu
579962f2a8 reroute tensor feature in core.Net and generate one net feature in model_helper (#10528)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10528

Adding 2 features to core and model_helper:

- reroute_tensor, which supports op insertion at the net level
- model_helper complete net and cut net, used for full-graph analysis

Differential Revision: D9330345

fbshipit-source-id: 56341d3f500e72069ee306e20266c8590ae7985a
2018-08-15 16:40:15 -07:00
Kittipat Virochsiri
8a0fe0a588 set_input_record() should always add external input (#9636)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9636

Make sure that the blobs are registered to the net

Reviewed By: pjh5

Differential Revision: D8924883

fbshipit-source-id: f09422a2d4d5ba8bf6cfbfd00172097b5ab1fcd6
2018-07-20 11:55:37 -07:00
Artem Volkhin
b6b6e1b39f Fix core.Plan.create_from_proto (#9438)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9438

The current implementation of create_from_proto doesn't work as expected: it
duplicates networks and execution steps by copying the original PlanDef first
and then adding each step one-by-one.

Reviewed By: pjh5

Differential Revision: D8850316

fbshipit-source-id: 9b02836d6e6ee1c91cfdd3b4c4804f14137dc22b
2018-07-18 10:55:55 -07:00
Yan Shang
8253947256 Make error message more informative (#9352)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9352

I am debugging a failed workflow f61490672, and found the original error message to be uninformative.

Differential Revision: D8808181

fbshipit-source-id: 3f524ca092881186a492c5c0456124ce31d54751
2018-07-11 15:09:46 -07:00
Duc Ngo
f52c2ca1c6 net_async tracing use enable_profile arg from NetDef (#8927)
Summary:
Closes https://github.com/pytorch/pytorch/pull/8927

Closes https://github.com/pytorch/pytorch/pull/8855

- Add parameter `enable_tracing` to the Arg field of NetDef. `net_async_tracing` will only enable Tracer for Net instances that have this field set (unless the command line argument also includes the net name).
- Append a unique id to the json profiling result file because there could be multiple instances of the same net running.
- Dump the json profiling file regularly instead of just when the Tracer object is destroyed.
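
A hedged sketch of setting that arg from Python, assuming standard caffe2 protobuf usage (requires a caffe2 build to run):
```python
from caffe2.proto import caffe2_pb2

net = caffe2_pb2.NetDef()
net.name = "async_scheduled_net"
arg = net.arg.add()
arg.name = "enable_tracing"  # the NetDef arg this commit introduces
arg.i = 1
```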

Reviewed By: ilia-cher

Differential Revision: D8372378

fbshipit-source-id: 8adc9d59f48b67456beed2e3a88235c298fdfd01
2018-06-27 16:24:57 -07:00