Summary:
The `2to3` tool has a `future` fixer that you can target specifically to remove these redundant `from __future__` imports; the `caffe2` directory has the most of them:
```shell
2to3 -f future -w caffe2
```
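For intuition, here is a minimal sketch of what the `future` fixer accomplishes. The helper name is hypothetical; the real 2to3 tool rewrites the parse tree rather than filtering text lines:

```python
def strip_future_imports(source: str) -> str:
    # Hypothetical helper mimicking `2to3 -f future`: drop the now-redundant
    # `from __future__ import ...` lines. (2to3 itself edits the parse tree,
    # so it also handles continuations and comments correctly.)
    kept = [line for line in source.splitlines(keepends=True)
            if not line.lstrip().startswith("from __future__ import")]
    return "".join(kept)
```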
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
Summary:
Currently, after performing export, the predict_net proto contains two entries of external_input
for the input data: external_input is extended twice, once separately with the
input blob and once with all the entries of external_input from the proto,
which already include the input blob.
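The fix amounts to extending external_input only once. A deduplicating merge conveys the idea; the names below are illustrative, not the actual exporter API:

```python
def merge_external_inputs(proto_external_inputs, input_blob):
    # Illustrative sketch: keep each blob name once, preserving order,
    # so the input blob is not listed twice in predict_net.
    seen = set()
    merged = []
    for name in list(proto_external_inputs) + [input_blob]:
        if name not in seen:
            seen.add(name)
            merged.append(name)
    return merged
```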
Signed-off-by: Parth Raichura <parth.raichura@softnautics.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12979
Differential Revision: D12916349
Pulled By: soumith
fbshipit-source-id: 4d4a1c68c0936f8de3f4e380aea1393fe193cd2d
* [C2] Don't crash kernel in case of invalid shapes for ConcatOp
Enforce correctness of the shapes of the input tensors so we won't access an invalid index.
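The enforced invariant is that all inputs agree on every dimension except the concat axis. A sketch of the check (the actual op is C++; this Python stand-in is only illustrative):

```python
def check_concat_shapes(shapes, axis):
    # Invariant ConcatOp must enforce before indexing: equal rank, and
    # equal sizes on every dimension except the concatenation axis.
    ref = shapes[0]
    for s in shapes[1:]:
        if len(s) != len(ref):
            raise ValueError("rank mismatch: %r vs %r" % (ref, s))
        for d, (a, b) in enumerate(zip(ref, s)):
            if d != axis and a != b:
                raise ValueError("dim %d mismatch: %d vs %d" % (d, a, b))
```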
* [Caffe2] Add analytical performance counters to Dynolog
Initial diff for counting analytical flops and memory writes for C2 operators.
* BBoxTransform op: Handle RoIs from multiple images per batch
BBoxTransform op, as used during typical Faster-RCNN inference, operates only on
RoIs from a single image (no batching). This adds support for multiple images
per batch via an optional output blob containing the batch splits (i.e., the
number of RoIs belonging to each item in the batch). The change is fully
backward compatible and shouldn't break any existing models.
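The batch-splits output can be thought of as a histogram of RoIs over batch items. An illustrative Python sketch (not the op's C++):

```python
def compute_batch_splits(roi_batch_indices, batch_size):
    # Count the number of RoIs belonging to each image in the batch;
    # the optional output blob described above carries this information.
    splits = [0] * batch_size
    for b in roi_batch_indices:
        splits[b] += 1
    return splits
```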
* [mkl] Make MKL-DNN cooperate with memongered nets
C2's MKL-DNN implementation caches input dims and reuses intermediate and
output buffers across net runs, which prevents memonger from being used. The
caching may not always pay off anyway, since input dims can vary widely in
many cases and we end up reallocating regardless. Added an option to force
reallocation when memonger is used.
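The reuse-versus-reallocate trade-off can be sketched as a shape-keyed buffer cache with an opt-out flag. Names here are illustrative, not the MKL-DNN integration's actual API:

```python
class BufferCache:
    # Illustrative: reuse a buffer when the same shape repeats, unless
    # force_realloc is set (as when memonger manages the memory itself).
    def __init__(self, force_realloc=False):
        self.force_realloc = force_realloc
        self._buffers = {}

    def get(self, shape):
        if self.force_realloc or shape not in self._buffers:
            size = 1
            for d in shape:
                size *= d
            self._buffers[shape] = bytearray(size)
        return self._buffers[shape]
```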
* [oncall] fix batch gather ops for empty input
Still need to bisect for the breaking change, but this fixes the case of empty input.
The error log looks like: https://interncache-ftw.fbcdn.net/t49.3276-7/23938497_293562711176943_6500112636590424064_n.txt?_nc_log=1
@[557759185:raychen] can you help subscribe the oncall from the ads side? This may affect the Sigrid online trainer.
* optimize BatchOneHotOp
We want to iterate in row-major order, as opposed to column-major, for better
locality.
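A minimal sketch of the access pattern, with Python standing in for the C++ op: one pass per row keeps output writes sequential, which is the locality win described above.

```python
def batch_one_hot(data, vals):
    # data: batch x features; vals[j]: candidate values for feature j.
    # Iterating per row (row-major) writes each output row contiguously,
    # instead of striding across rows column by column.
    out = []
    for row in data:
        encoded = []
        for j, x in enumerate(row):
            encoded.extend(1 if x == v else 0 for v in vals[j])
        out.append(encoded)
    return out
```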
* Support exporting models with int blobs
Needed by condensenet.
* BoxWithNMSLimit op: Handle boxes from multiple images per batch
Similar to D7135360. Added support for multiple images per batch in the op.
Takes an optional additional input "batch_splits" as output by BBoxTransform
op, and returns new batch_splits after applying NMS and filtering. Otherwise,
backward compatibility is maintained.
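Conceptually, the op walks the boxes image by image using batch_splits, filters each group, and emits updated splits. A sketch with a generic keep predicate (`keep_fn` stands in for NMS plus score filtering; not the op's real interface):

```python
def filter_with_batch_splits(scores, batch_splits, keep_fn):
    # Process each image's detections separately, then report how many
    # survived per image as the new batch_splits.
    out, new_splits = [], []
    start = 0
    for n in batch_splits:
        kept = [s for s in scores[start:start + n] if keep_fn(s)]
        out.extend(kept)
        new_splits.append(len(kept))
        start += n
    return out, new_splits
```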
Summary: Takes a live net and creates an init_net and a predict_net, which can be written to file and run in Predictor
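At a high level, the export splits the live net into parameter initialization and computation. A toy sketch, with ops modeled as (type, output) tuples; this is not the real exporter or Predictor API:

```python
def export_for_predictor(ops, param_blobs):
    # Toy model of the split: learned parameter blobs become fill ops in
    # init_net; the computation ops themselves go to predict_net.
    init_net = [("GivenTensorFill", p) for p in param_blobs]
    predict_net = list(ops)
    return init_net, predict_net
```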
Reviewed By: salexspb
Differential Revision: D4989425
fbshipit-source-id: 8052065da9ed763d48bd9e1e19f7697ef60a2829