Summary: As pointed out in https://github.com/pytorch/pytorch/pull/107479, tracking used names in a set prevents collisions such as "a" => "a", "a" => "a_1", "a_1" => "a_1" (which should instead go to "a_1_1"). We can combine counters with a set to avoid this problem. This still gets us the performance benefit of counters in the case of collisions, with a very minor penalty in the no-collision case.
Test Plan:
Extract this code and run:
```
# New version
from typing import Dict, Set

class Net:
    _net_names_used_counters: Dict[str, int] = {}
    _net_names_used: Set[str] = set()

    @staticmethod
    def current_prefix():
        return "test_prefix"

    @staticmethod
    def _get_next_net_name(basename):
        basename = "/".join(x for x in [Net.current_prefix(), basename] if x)
        idx = Net._net_names_used_counters.get(basename, 0)
        while (name := basename if idx == 0 else f"{basename}_{idx}") in Net._net_names_used:
            idx += 1
        Net._net_names_used_counters[basename] = idx + 1
        Net._net_names_used.add(name)
        return name

print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("x_basename"))
print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("basename"))
print(Net._get_next_net_name("x_basename"))
print(Net._get_next_net_name("basename_1"))
> test_prefix/basename
> test_prefix/x_basename
> test_prefix/basename_1
> test_prefix/basename_2
> test_prefix/x_basename_1
> test_prefix/basename_1_1
```
Differential Revision: D48576516
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107743
Approved by: https://github.com/zdevito
Summary:
`np.str` is removed from numpy 1.20.0. It was an alias for the builtin `str`, so it's safe to do the replacement.
The whole change is mechanical, generated using the following one-liner:
```
fbgr -sl 'np\.str\b' | xargs perl -pi -e 's,\bnp\.str\b,str,g'
```
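For context, `np.str` was just an alias for the builtin `str`, which is why the replacement is behavior-preserving. An illustrative check (only runs on numpy versions where the alias still exists):
```
# Illustrative only; requires a numpy version where the alias still exists.
import numpy as np

assert np.str is str            # np.str was simply the builtin str
assert np.str("abc") == "abc"   # so replacing np.str with str preserves behavior
```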
Test Plan: sandcastle
Differential Revision: D46586144
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103931
Approved by: https://github.com/huydhn
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63068
The caffe2 core.Net constructor can accept a caffe2_pb2.NetDef proto, but it always creates a copy. This is wasteful when we can prove that the proto being passed to it will not be used anywhere else. So we add an "inplace" argument to the `core.Net` constructor that allows clients to give away ownership of the passed proto without copying. We default this argument to `False`, ensuring that behavior does not change unless explicitly requested.
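A minimal usage sketch of the new argument (the net contents here are illustrative):
```
# Sketch only; the proto contents are illustrative.
from caffe2.proto import caffe2_pb2
from caffe2.python import core

proto = caffe2_pb2.NetDef()
proto.name = "my_net"

copied = core.Net(proto)                # default (inplace=False): proto is copied
owned = core.Net(proto, inplace=True)   # no copy: the caller gives up ownership
# After the inplace construction, `proto` must not be used anywhere else.
```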
Test Plan: Let CI run.
Differential Revision: D29976510
fbshipit-source-id: 26e13ca76f3431b8ef0de51f08bbf263491d323e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51768
This updates python/core.py to explicitly define all of the `DataType`
values rather than dynamically defining them at runtime from the
`caffe2_pb2` values.
This allows type checkers like Pyre and Mypy to see the members of the
`DataType` class. Otherwise the type checkers report errors such as
`"core.DataType" has no attribute "INT64"`.
This code does keep a run-time check that all of the data types defined
by `caffe2_pb2.proto` are defined correctly in this file. This way if
someone does add a new type to `caffe2_pb2.proto` it should be very
quickly apparent that this file needs to be updated and kept in sync.
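A minimal sketch of the approach (the member list here is intentionally abbreviated; see caffe2_pb2.proto for the full set):
```
# Sketch only: explicit members that Pyre/Mypy can see, plus a run-time check
# that they stay in sync with the proto definition.
from caffe2.proto import caffe2_pb2


class DataType:
    UNDEFINED = 0
    FLOAT = 1
    INT32 = 2
    INT64 = 10
    # ... remaining members elided in this sketch ...


def _check_data_type_members():
    for name, value in caffe2_pb2.TensorProto.DataType.items():
        if getattr(DataType, name, None) != value:
            raise AssertionError(
                "caffe2_pb2 defines {}={}, but core.DataType is out of sync; "
                "please update this file".format(name, value)
            )
```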
ghstack-source-id: 121936201
Test Plan:
Confirmed that various caffe2/python tests still pass.
Verified that this allows many `pyre-fixme` comments to be removed in
downstream projects, and that Pyre is still clean for these projects.
Reviewed By: jeffdunn
Differential Revision: D26271725
Pulled By: simpkins
fbshipit-source-id: f9e95795de60aba67d7d3872d0c141ed82ba8e39
Summary: is_external_input doesn't check whether the lookup tables are valid. Calling .Proto() should invalidate all lookup tables and have them rebuilt on the next call to any method depending on them. This adds that check to is_external_input.
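A minimal sketch of the added check (the internal attribute names used here are assumptions, not verified against core.py):
```
# Sketch only; the Net internals named here are assumptions.
def is_external_input(self, blob):
    if self._recreate_lookup_tables:
        # A prior .Proto() call marked the lookup tables stale; rebuild them
        # before consulting the external-input map.
        self._RecreateLookupTables()
    return str(blob) in self._external_input_map
```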
Test Plan: internal unit tests
Reviewed By: dzhulgakov, esqu1
Differential Revision: D25100464
fbshipit-source-id: d792dec7e5aa9ffeafda88350e05cb757f4c4831
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47768
This stores the next ID for a given NextName(prefix, output_id), so repeated calls to NextName are significantly faster; NextName accounts for ~65% of the time spent for large models.
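A minimal sketch of the idea (names are illustrative, not the exact core.py change); note this is the counter-only scheme that the name-deduplication commit at the top of this log later combines with a set:
```
# Sketch only: remember the next free index per (prefix, output_id) key so
# repeated NextName calls are O(1) instead of rescanning existing names.
_next_name_index = {}

def next_name(prefix, output_id=None):
    key = (prefix, output_id)
    idx = _next_name_index.get(key, 0)
    _next_name_index[key] = idx + 1
    base = prefix if output_id is None else "{}/{}".format(prefix, output_id)
    return base if idx == 0 else "{}_{}".format(base, idx)
```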
Test Plan:
buck test //caffe2/caffe2/python/...
will launch canary job before landing to ensure no regressions + confirm speedup
Reviewed By: dzhulgakov
Differential Revision: D24876961
fbshipit-source-id: 668d73060d800513bc72d7cd405a47d15c4acc34
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47530
`Net.AddExternalInput` should raise if there are duplicate names. The previous code would only raise if the addition of duplicates was in separate calls, but not if it was in the same call.
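A minimal sketch of the fix (internal attribute names are assumptions): validate the whole call, including duplicates within the call itself, before mutating any state.
```
# Sketch only; the Net internals named here are assumptions.
def AddExternalInput(self, *inputs):
    seen = set()
    for blob in inputs:
        name = str(blob)
        # Reject duplicates within this call as well as previously added inputs.
        if name in seen or name in self._external_input_map:
            raise RuntimeError("Net already contains an input named %s" % name)
        seen.add(name)
    for blob in inputs:
        self.Proto().external_input.append(str(blob))
        self._external_input_map.add(str(blob))
```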
Test Plan:
Added two new regression tests
```
✓ Pass: caffe2/caffe2/python:core_test - testSetInputRecordWithBlobs (caffe2.caffe2.python.core_test.TestExternalInputs) (9.622)
✓ Pass: caffe2/caffe2/python:core_test - testAddExternalInputShouldRaiseIfDuplicate (caffe2.caffe2.python.core_test.TestExternalInputs) (9.639)
✓ Pass: caffe2/caffe2/python:core_test - testSetInputRecordWithoutBlobs (caffe2.caffe2.python.core_test.TestExternalInputs) (9.883)
✓ Pass: caffe2/caffe2/python:core_test - testAddExternalInputShouldRaiseIfDuplicateInSameCall (caffe2.caffe2.python.core_test.TestExternalInputs) (10.153)
```
Test trained 2 models. No issues
f230755456
f230754926
Reviewed By: dzhulgakov
Differential Revision: D24763586
fbshipit-source-id: c87088441d76f7198f8b07508b2607aec13521ed
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47475
This improves the core.Net cloning/init performance by quite a bit. It makes set_input_record run in linear time instead of quadratic, by checking the external_input map instead of regenerating the external inputs for each blob and then iterating over them.
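A minimal sketch of the idea (schema handling elided; internal names are assumptions): each blob is checked against the maintained external-input map in O(1) instead of rebuilding the external input list per blob.
```
# Sketch only; the Net internals named here are assumptions.
def set_input_record(self, input_record):
    for blob in input_record.field_blobs():
        if str(blob) not in self._external_input_map:   # O(1) membership check
            self.AddExternalInput(blob)
    self._input_record = input_record
    return input_record
```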
Test Plan: unit tests + canary runs
Reviewed By: dzhulgakov
Differential Revision: D24765346
fbshipit-source-id: 92d9f6dec158512bd50513b78675174686f0f411
Summary: Similar to the If operator, AsyncIf also contains nets in its args, so it needs the same handling.
Test Plan:
New unit test test_control_op_remap
`buck test caffe2/caffe2/python:core_test`
Also it worked end to end in prototype of dist bulk eval workflow f226680903
Reviewed By: yyetim
Differential Revision: D24451775
fbshipit-source-id: 50594e2ab9bb457329ed8da7b035f7409461b5f6
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal
Makes them more readable and possibly faster. Care has to be taken because replacing a lazy `map(f, xs)` with a list comprehension `[f(x) for x in xs]` applies the function immediately, whereas a generator expression `(f(x) for x in xs)` stays lazy and is evaluated later. The lazy form is a benefit in cases where the list of values never needs to be materialized in memory (e.g. when the result is passed directly to `tuple`, `extend`, or `join`).
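Illustrative example of the distinction (not code from this PR):
```
names = ["conv1", "conv2", "fc"]

# List comprehension: the function is applied immediately and the whole
# list is materialized in memory.
upper = [n.upper() for n in names]

# Generator expression: evaluated lazily; no intermediate list is built when
# the consumer (join/tuple/extend) iterates over it directly.
joined = "/".join(n.upper() for n in names)
```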
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462
Reviewed By: zou3519
Differential Revision: D24422343
Pulled By: ezyang
fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
Summary:
The `2to3` tool has a `future` fixer that specifically removes these redundant `__future__` imports; the `caffe2` directory has the most of them:
```2to3 -f future -w caffe2```
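Illustrative only: the kind of line this fixer deletes (assuming files still carry Python-2-era imports):
```
# Before: redundant on Python 3; `2to3 -f future` removes this line entirely.
from __future__ import absolute_import, division, print_function, unicode_literals
```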
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42219
Introduce a new extra_info entry that is tagged on the forward net for operators sharing the same input. The effect is that the auto-generated gradient Sum for that input will not follow the device tags of the operators in the forward net. This allows more flexible device allocation.
Test Plan:
# unit test
`./buck-out/gen/caffe2/caffe2/python/core_gradients_test#binary.par -r testMultiUseInputAutoGenSumDevice`
Reviewed By: xianjiec, boryiingsu
Differential Revision: D22609080
fbshipit-source-id: d558145e5eb36295580a70e1ee3a822504dd439a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41687
Specifically, this makes a new library (lazy), which can be used from both core
and workspace.
This allows workspace.CreateNet to trigger lazy loading of dyndep dependencies.
Test Plan: Added a unit test specifically for workspace.CreateNet
Reviewed By: dzhulgakov
Differential Revision: D22441877
fbshipit-source-id: 3a9d1af9962585d08ea2566c9c85bec7377d39f2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41343
Currently caffe2.InitOpLibrary does the dll import unilaterally. If we instead make a lazy version and use it, then many pieces of code which do not need the caffe2 operators get a lot faster.
On a real test, the import time went from 140s to 68s.
This also cleans up the algorithm slightly (although it makes a very minimal
difference), by parsing the list of operators once, rather than every time a
new operator is added, since we defer the RefreshCall until after we've
imported all the operators.
The key way we maintain safety is that as soon as someone does an operation
which requires an operator (or could), we force importing of all available
operators.
Future work could include trying to identify which code is needed for which
operator and only import the needed ones. There may also be wins available by
playing with dlmopen (which opens within a namespace), or seeing if the dl
flags have an impact (I tried this and didn't see an impact, but dlmopen may
make it better).
Note that this was previously landed and reverted. The issue was that if an import failed and raised an exception, the specific library would not be removed from the lazy imports. This caused tests with failing libraries to poison all other tests that ran after them. This has been fixed and a unit test has been added for this case (to help make it obvious what failed).
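A minimal sketch of the scheme (hypothetical names, not the actual caffe2.python.lazy API), including the fix for the poisoning issue described above:
```
# Sketch only: defer dyndep dlopen calls until an operator is actually needed.
_pending_libraries = []

def register_lazy_import(path):
    # Record the library instead of importing it immediately.
    _pending_libraries.append(path)

def trigger_lazy_imports(import_fn):
    # Called before any operation that needs the full operator registry.
    while _pending_libraries:
        # Pop before importing: a library whose import raises is dropped from
        # the pending list, so one bad library cannot poison every later
        # trigger (the bug that got the first version of this change reverted).
        path = _pending_libraries.pop()
        import_fn(path)
```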
Test Plan:
I added a new test, lazy_dyndep_test.py (copied from all_compare_test.py).
I'm a little concerned that I don't see any explicit tests for dyndep, but this
should provide decent coverage.
I've added a specific test to handle the poisoning issues mentioned above, which caused the previous version to get reverted.
Differential Revision: D22506369
fbshipit-source-id: 7395df4778e8eb0220630c570360b99a7d60eb83
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39488
Currently caffe2.InitOpLibrary does the dll import unilaterally. If we instead make a lazy version and use it, then many pieces of code which do not need the caffe2 operators get a lot faster.
On a real test, the import time went from 140s to 68s.
This also cleans up the algorithm slightly (although it makes a very minimal
difference), by parsing the list of operators once, rather than every time a
new operator is added, since we defer the RefreshCall until after we've
imported all the operators.
The key way we maintain safety is that as soon as someone does an operation
which requires an operator (or could), we force importing of all available
operators.
Future work could include trying to identify which code is needed for which
operator and only import the needed ones. There may also be wins available by
playing with dlmopen (which opens within a namespace), or seeing if the dl
flags have an impact (I tried this and didn't see an impact, but dlmopen may
make it better).
Test Plan:
I added a new test, lazy_dyndep_test.py (copied from all_compare_test.py).
I'm a little concerned that I don't see any explicit tests for dyndep, but this
should provide decent coverage.
Differential Revision: D21870844
fbshipit-source-id: 3f65fedb65bb48663670349cee5e1d3e22d560ed
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26654
As per the Python contract, __getattr__ may only raise AttributeError. Raising anything else breaks hasattr() and causes upstream issues.
A similar bug existed in pytorch earlier.
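Illustrative example of the contract (not code from this diff):
```
class Broken:
    def __getattr__(self, name):
        raise RuntimeError("unknown attribute: " + name)

class Correct:
    def __getattr__(self, name):
        raise AttributeError(name)

print(hasattr(Correct(), "missing"))   # False, as expected

try:
    hasattr(Broken(), "missing")
except RuntimeError as e:
    # hasattr() only swallows AttributeError; anything else propagates,
    # breaking callers that expect a bool.
    print("hasattr leaked:", e)
```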
Test Plan: builds
Differential Revision: D17529471
fbshipit-source-id: bb6ac6c9e3be8b80fa2967e6a2e293afd1594cf9
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25908
Original commit changeset: f6e961e88c01
device_option propagation is completely broken in Caffe2 for cases where pass-through operators are used. As an example, the Gather operator doesn't have a gradient and passes through its inputs, which results in incorrect detection of the components for sparse parameter aggregation (the component will be empty instead of the real device).
This diff is trying to fix this issue.
The original diff had a problem: Caffe2 does not handle cases where a device option is present but contains only metadata (for example, the one for auto-generated reduction ops in the backward pass). This diff addresses that by merging device options during the backward pass.
Test Plan:
1. net_transform is finally working with a Gather + FloatToHalf transformed model instead of failing because of an incorrect number of components.
2. New unit test.
3. Verify that the previously broken benchmark is now passing.
ezyang, do you have suggestions on what else I should test?
Reviewed By: ezyang
Differential Revision: D17281528
fbshipit-source-id: 4a1bc386f29f6a34fbf8008effde9d4890abebfa
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25203
device_option propagation is completely broken in Caffe2 for cases where
pass-through operators are used. As an example, the Gather operator doesn't have
a gradient and passes through its inputs, which results in incorrect detection
of the components for sparse parameter aggregation (the component will be empty
instead of the real device).
This diff is trying to fix this issue.
Test Plan:
net_transform is finally working with a Gather + FloatToHalf transformed model
instead of failing because of an incorrect number of components.
Reviewed By: dzhulgakov
Differential Revision: D16936041
fbshipit-source-id: 916551b933469f04e32ddf86ec4b2c07f76c9176
Summary:
Fix auto gradient summing for IfOp where an intermediate output needs renaming.
Bug before this diff:
- we only renamed the output of IfOp without changing the subnet ops' outputs
- this results in a "blob not found" error
The unit test provides an example; this diff fixes that for IfOp.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14772
Differential Revision: D13327090
Pulled By: harouwu
fbshipit-source-id: ec40ee88526ace3619c54551e223dd71158a02f8
Summary:
The goal of this PR is to unify the cuda and hip device types in the caffe2 python front end.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14221
Differential Revision: D13148564
Pulled By: bddppq
fbshipit-source-id: ef9bd2c7d238200165f217097ac5727e686d887b
Summary:
Original commit changeset: f5614a5d2607
D9986213 is causing a [huge performance difference](https://our.intern.facebook.com/intern/ads/analyze_canary/412951953278781781/) for Multifeed Aggregator and has been blocking the aggregator push since last Friday night: https://fburl.com/feedtools/b6izvwjz
We need to land this revert ASAP to unblock the aggregator push.
Reviewed By: orionr
Differential Revision: D10123245
fbshipit-source-id: d83da8e00a1250f5d09811a0a587c127e377aab2
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10528
Adding 2 features to core and model_helper:
- reroute_tensor, which supports op insertion at the net level
- model_helper complete net and cut net, used for full graph analysis
Differential Revision: D9330345
fbshipit-source-id: 56341d3f500e72069ee306e20266c8590ae7985a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9636
Make sure that the blobs are registered to the net
Reviewed By: pjh5
Differential Revision: D8924883
fbshipit-source-id: f09422a2d4d5ba8bf6cfbfd00172097b5ab1fcd6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9438
The current implementation of create_from_proto doesn't work as expected: it
duplicates networks and execution steps because it copies the original PlanDef
first and then adds each step again one-by-one.
Reviewed By: pjh5
Differential Revision: D8850316
fbshipit-source-id: 9b02836d6e6ee1c91cfdd3b4c4804f14137dc22b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9352
I am debugging a failed workflow, f61490672, and found the original error message to be uninformative.
Differential Revision: D8808181
fbshipit-source-id: 3f524ca092881186a492c5c0456124ce31d54751
Summary:
Closes https://github.com/pytorch/pytorch/pull/8927
Closes https://github.com/pytorch/pytorch/pull/8855
- Add parameter `enable_tracing` to the Arg field of NetDef. `net_async_tracing` will only enable Tracer for Net instances that have this field set (unless the command line argument also includes the net name).
- Append a unique id to the json profiling result file because there could be multiple instances of the same net running.
- Dump the json profiling file regularly instead of only when the Tracer object is destroyed.
Reviewed By: ilia-cher
Differential Revision: D8372378
fbshipit-source-id: 8adc9d59f48b67456beed2e3a88235c298fdfd01