Summary:
There is a module called `2to3` which you can target for future specifically to remove these, the directory of `caffe2` has the most redundant imports:
```2to3 -f future -w caffe2```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11048
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10739
I wanted to assert that the blobs in the workspace of the new session after loading checkpoint are exactly the same as the blobs in the workspace of the old session before saving to a checkpoint.
But I found that when calling `task.get_step()`, a dummy task output blob, `task:output/ConstIntFill:0`, is added. Also a dummy net `task:output` was also added along with it. See https://fburl.com/937lf2yk
This makes it hard to assert "Equal", forcing me to assert "LessThan" or "GreaterThan".
This adding a dummy TaskOutput when user specifies no TaskOutput is a hack.
The reason for this is that ZMQ socket can't send empty blob list.
As a result, if the Task on the Worker had no output,
The master would never stop waiting and hang forever. See https://fburl.com/rd7fhy6p and imagine `socket.recv(net, 0)`.
TaskOuput is at user layer. The hack shouldn't be exposed to user layer, polluting user workspaces.
Instead, we should move the creating of the dummy blob to some deeper layer,
and remove the dummy blob in the workspace afterwards to avoid polluting user workspaces.
After this change, the workaround becomes totally transparent and no side-effect to users.
Reviewed By: mraway
Differential Revision: D9566744
fbshipit-source-id: 18292dd64a6d48192c34034200a7c9811d2172af
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10739
I wanted to assert that the blobs in the workspace of the new session after loading checkpoint are exactly the same as the blobs in the workspace of the old session before saving to a checkpoint.
But I found that when calling `task.get_step()`, a dummy task output blob, `task:output/ConstIntFill:0`, is added. Also a dummy net `task:output` was also added along with it. See https://fburl.com/937lf2yk
This makes it hard to assert "Equal", forcing me to assert "LessThan" or "GreaterThan".
This adding a dummy TaskOutput when user specifies no TaskOutput is a hack.
The reason for this is that ZMQ socket can't send empty blob list.
As a result, if the Task on the Worker had no output,
The master would never stop waiting and hang forever. See https://fburl.com/rd7fhy6p and imagine `socket.recv(net, 0)`.
TaskOuput is at user layer. The hack shouldn't be exposed to user layer, polluting user workspaces.
Instead, we should move the creating of the dummy blob to some deeper layer,
and remove the dummy blob in the workspace afterwards to avoid polluting user workspaces.
After this change, the workaround becomes totally transparent and no side-effect to users.
Reviewed By: mraway
Differential Revision: D9413150
fbshipit-source-id: 51aaf3201e26570b4fcf5738e9b9aa17c58777ac
* Track checkpoint performance in scuba
As title.
* [C2/CUDA]: fix cross entropy sigmoid with logits
when adding log_d_trick, I forgot to add it to the cuda impl; this diff fixes
it.
* Back out "[caffe2] Unregister MKL fallbacks for NCHW conversions"
Original commit changeset: 8918dd40205a
Will land after @jongsoo's diff https://phabricator.intern.facebook.com/D7596315 lands
* [Easy][C2] Don't add blob to external outputs from output_record if it's already external output
As desc.
* On Mobile phones, call GlobalInit with no arguments in predictor in case we need to perform initialization
FACEBOOK:
The QPL logger needs the initialization code. In the past, the initialization code is put in the pipeline calling Caffe2. However, those places become obsolete quickly, as the product teams change places to call Caffe2 from time to time. We also need to track which teams use Caffe2 so that we can put the initialization code there.
With this diff, the initialization code is put in the predictor constructor, only enabled for mobile phones. This way, we can always enable QPL logging.
Once we do this, we can check how many times Caffe2 inference is called in production, and which models are more popular in production. This way, we can prioritize our effort supporting those models.
Will clean up the old code calling the init in the product in a separate diff.
* add padding op for sparse length tensor
to pad length-based sparse tensor with padding_value
* Add conv_op with cudaconvnet engine
Add conv_op with cudaconvnet engine
* [numa] Fix simple NUMA copy benchmark
Move XavierFill into init_net and also compute BW
* call roundf (device function) instead of round (host function)
* [caffe2_benchmark][observer] Make caffe2_benchmark use its own observer
1. Add ClearGlobalNetObservers()
2. Make caffe2_benchmark use its own observer and observer_reporter
* [detectron] Use roundf instead of round in the detectron module ops
* allow K larger than number of elements in top k op
one use case is to use this op together with PackSegments for sparse tensors, where the number of elements in each slice is not statistically defined.
* add ChannelShuffle DNNLOWP op
* fixup math_cpu.cc break
This reverts commit 05bd9bec10fad5ff9dc40be88836fd7274d50ce9
@bypass-lint
An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
Summary: Current MultiNodeCheckpointManager return None in this case, yet in JobRunner we assume this function returns a valid task group, i.e. we call session.run(self.checkpoint_manager.init(...)) directly. This will fail the case we use LocalHostScheduler and reuse a MultiNodeCheckpointManager
Reviewed By: azzolini
Differential Revision: D6843450
fbshipit-source-id: a7ec942cfe692f19e8751b0078ae6a6108f29e54
Summary:
At the end of distributed training, trainer needs to download the parameters back from parameter servers for saving the model. Currently, this parameter downloading happens at the end of job's epoch task group, which creates several problems when checkpointing is enabled for distributed training:
1. When checkpointing is enabled, we run multiple training epochs. At the end of each epoch, the model download tasks will run to collect parameters, but we won't save the model until the true end of training, so there is a big waste of resource.
2. After trainer0 downloads the parameters, these parameters take a lot of memory, so trainer0 can easily run out of memory in the next epoch of training.
Our solution is to insert a parameter download task group between the job's training epoch_group and the job's exit_group.
Reviewed By: azzolini
Differential Revision: D6765393
fbshipit-source-id: 5a4f556fc3c1cd7834a7c406a3c0de3fccd50c49
Summary:
Instead of constructing db_name as a member of checkpoint_manager, generalize
this function
Reviewed By: anshulverma
Differential Revision: D6671088
fbshipit-source-id: c528538def66933619f2fdf67820bca5d13571ea
Summary:
If we encounter failures while writing a checkpoint, ensure that the job does
not fail.
A job can make progress even if writing a checkpoint fails
Reviewed By: anshulverma, boryiingsu
Differential Revision: D6615163
fbshipit-source-id: 01f790422e1a81bab1fe73f86750eaf75a72bb77
Summary:
We need to use Cluster to isolate the definition of the nodes.
Otherwise, the contexts are polluted and the run becomes
stateful.
Reviewed By: Yangqing
Differential Revision: D6140404
fbshipit-source-id: 09d1c86ef12bb01eaa16b1dade4d2e1e93be287a
Summary: So far the we format the epoch name with 6 digits, but this is constraining. In order to have consistent naming, we can simply append the epoch to the suffix. Then we will have consistent naming rules for small and for large epoch numbers.
Reviewed By: azzolini
Differential Revision: D5653871
fbshipit-source-id: acdf26a14b731347bb85fe2f33c1b89e2ba83bdd
Summary:
Travis CI is complaining about test_load_model_from_checkpoints in recent PRs.
E: AssertionError: 'trainer:1/task/GivenTensorInt64Fill:0, a C++ native class of type nullptr (uninitialized).' != array([103])
See for example https://travis-ci.org/caffe2/caffe2/jobs/265665119
Reason unkown yet. First disable this then try to fix it
Reviewed By: Yangqing
Differential Revision: D5655068
fbshipit-source-id: 10949339ec92b0a4c2f0e59246040f1b0510be12
Summary:
Since we temporarily disable checkpointing the readers, we need to
rename all the node names in the test to make it pass.
Reviewed By: azzolini
Differential Revision: D5640930
fbshipit-source-id: 1e61be31ddf9b6e28efd2eb8e6e91e63dcd83154
Summary:
The LocalSession does not work with the multi-node definitions.
The test becomes flaky because of that. The fix is to create
different LocalSession for each Node(), and run each node
sequentially.
Differential Revision: D5617857
fbshipit-source-id: a8079a90291b4c8b5aa6b471c33c06d18e59976c
Summary:
1. Adds one more step in the JobRunner class to upload checkpoints.
2. Adds one function to return the name of the checkpoint given
the name of the node.
Reviewed By: andrewwdye
Differential Revision: D5597130
fbshipit-source-id: 570a55785e6227859e1115326d6cab077f0e7f72
Summary:
Advantages of cloning the tasks/execution_steps at runtime:
- Less complexity on the python side: no need to clone nets and add prefixes to blob names
- Faster start-up: we had cases of complex plans that took up to 30min to be created.
- Better isolation: each task cloned at runtime has its own child workspace, preventing false sharing of blobs.
- Opens up possibility for dynamic scheduling: Number of threads per task can be increased on the fly, at runtime.
Reviewed By: dzhulgakov
Differential Revision: D5100730
fbshipit-source-id: 71b83193b135da4e6eaf2536d8fc266528e1fdcc
Summary:
Diff D5224410 initializes the should_stop_blob explicitly. With that, we will
have one more blob when executing the job. Adjusts the check accordingly.
Reviewed By: azzolini
Differential Revision: D5228398
fbshipit-source-id: 439b186c30b0b1d0e41e513babbcccd85e7a1b4a
Summary:
To evaluate on checkpoints, we often need to load from multiple checkpoints.
However, it is inconvenient if we always need to check the existence of
a checkpoint manually. Adds interfaces to check the existence of a DB
so that we can find available checkpoints automatically.
Reviewed By: azzolini
Differential Revision: D4823876
fbshipit-source-id: e5a65b736ac2addd0447c4add81dbd0986f422e7
Summary:
The initialization phase of each checkpoint object simply loads the nanmes of
the blobs in the checkpoints. When we load from the checkpoints, the names of
the blobs are given. We can skip this init step.
Reviewed By: azzolini
Differential Revision: D4808114
fbshipit-source-id: 4c740049c1014f3e93b4b87f43e3937afdefa25a
Summary:
Somehow the stress-runs flag does not work as what I expected.
Now the test finally passes.
Reviewed By: azzolini
Differential Revision: D4797559
fbshipit-source-id: 1e46844e9ae55c331c2e265a59dc550983274213
Summary:
To evaluate from checkpoints, we need to load a model from the checkpoints.
However, the checkpoints store way more blobs than the blobs needed by the
model. This function enables the model builder to load only the blobs
associated with the model to the workspace. After that, the model builder
can evaluate the model from the populated workspace.
Reviewed By: azzolini
Differential Revision: D4751414
fbshipit-source-id: a7a420228d681fc2dcfd8573cf69a97b1abc2ef3
Summary:
We were running into a problem where a Job could not be pickled. It needs to be pickled in order for the master flow operator to execute it using the session.
This creates a concept of "compiled" Job, that pretty much only stores protobufs with the Jobs to be executed, avoiding any issue with pickling.
Reviewed By: dzhulgakov
Differential Revision: D4554799
fbshipit-source-id: 2ee9877ca49a796d51925e5ec917436e3d930984