* Caffe2: Enhance test for CollectAndDistributeOp
This also changes the operator and the test to use a stable sort; otherwise the test fails when ROIs share the same score, because the op and the test may order them differently.
* Caffe2: Adjust comparator to make std::nth_element and std::sort stable
Revert the removal of std::nth_element and std::sort and the addition of std::stable_sort; instead, the comparator is adjusted so the ordering stays stable.
* [GanH][Easy]: Add assertion to adaptive weighting layer
A weight of 0 causes numeric instability and exploding NE.
* [Easy] Add cast op before computing norm in diagnose options
Since LpNorm only takes floats, we add a manual cast here.
* Introduce a new caching device allocator
`cudaMalloc` and `cudaFree` calls are slow, and become slower the
more GPUs there are. Essentially, they grab a host-wide (not device-wide) lock
because GPU memory is transparently shared across all GPUs. Normally, this
isn't much of a concern since workloads allocate memory upfront, and reuse it
during later computation.
However, under some computation models (specifically, memory conserving
approaches like checkpoint-and-recompute, see
https://medium.com/@yaroslavvb/fitting-larger-networks-into-memory-583e3c758ff9)
this assumption is no longer true. In these situations, `cudaMalloc` and
`cudaFree` are common and frequent. Furthermore, in data parallel contexts,
these calls happen at nearly the same time from all GPUs worsening lock
contention.
A common solution to this problem is to add a custom allocator. In fact,
NVIDIA provides one out of the box: CUB, which Caffe2 already supports.
Unfortunately, the CUB allocator suffers from very high fragmentation. This is
primarily because it is a "buddy" allocator which neither splits nor merges
free cached blocks. Study
https://github.com/NVlabs/cub/blob/1.8.0/cub/util_allocator.cuh#L357 if you
want to convince yourself.
This diff adapts a caching allocator from the Torch codebase
https://github.com/torch/cutorch/blob/master/lib/THC/THCCachingAllocator.cpp
which does splitting and merging and ends up working really well, at least for
workloads like the checkpoint-and-recompute computation models noted above.
I simplified the implementation a little bit and made it a bit more C++-like. I also removed a bunch of stream synchronization primitives for this diff; I plan to add them back in subsequent diffs.
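For intuition, here is a minimal Python sketch of the split-and-merge idea (device memory is simulated with a bump pointer; the real allocator is the C++ linked above and additionally handles streams, size rounding, and error paths):

```python
import bisect

class ToyCachingAllocator:
    """Toy model of a split/merge caching allocator (illustration only)."""

    def __init__(self):
        self.free_blocks = []  # sorted (size, offset) pairs available for reuse
        self.allocated = {}    # offset -> size of live allocations
        self.next_offset = 0   # bump pointer standing in for cudaMalloc

    def malloc(self, size):
        # Best fit: the smallest cached block that is large enough.
        i = bisect.bisect_left(self.free_blocks, (size, 0))
        if i < len(self.free_blocks):
            bsize, off = self.free_blocks.pop(i)
            if bsize > size:  # split: keep the tail cached
                bisect.insort(self.free_blocks, (bsize - size, off + size))
        else:
            off = self.next_offset  # "cudaMalloc" happens only on a cache miss
            self.next_offset += size
        self.allocated[off] = size
        return off

    def free(self, off):
        size = self.allocated.pop(off)
        # Merge with adjacent cached blocks to fight fragmentation.
        merged = True
        while merged:
            merged = False
            for j, (bsize, boff) in enumerate(self.free_blocks):
                if boff == off + size or boff + bsize == off:
                    self.free_blocks.pop(j)
                    off, size = min(off, boff), size + bsize
                    merged = True
                    break
        bisect.insort(self.free_blocks, (size, off))
```

The split on malloc and merge on free is exactly what keeps fragmentation low under checkpoint-and-recompute-style allocation churn, and what the CUB allocator lacks.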
* Report reader progress in fblearner workflows
Integrate with fblearner progress reporting API and add support to report training progress from reader nodes.
If the reader is constructed with batch limits, report based on finished batches vs. total batches. The finished count may exceed the total because we evaluate whether to stop processing every time we dequeue a split.
If no limit for the reader, report based on finished splits (Hive files) vs total splits. This is fairly accurate.
* [GanH][Diagnose]: fix plotting
1. GanH diagnose needs to set plot options.
2. The modifier's blob name is used for the metric field and needs to be fixed before generating the net.
* Automatic update of fbcode/onnx to 985af3f5a0f7e7d29bc0ee6b13047e7ead9c90c8
* Make CompositeReader stop as soon as one reader finishes
Previously, CompositeReader called all readers before stopping. This resulted in a flaky test, since the last batch may be read by different threads, causing dropped data.
* [dper] make sure loss is not nan
as desc.
* [rosetta2] [mobile-vision] Option to export NHWC order for RoIWarp/RoIAlign
Thanks for finding this @stzpz and @wangyanghan. Looks like NHWC is more optimized. For OCR, though, it doesn't help yet since NHWC uses more memory bandwidth, but it will soon become important.
* Intra-op parallel FC operator
* [C2 Proto] extra info in device option
passing extra information in device option
design doc: https://fb.quip.com/yAiuAXkRXZGx
* Unregister MKL fallbacks for NCHW conversions
* Tracing for more executors
Modified Tracer to work with other executors and added more tracing.
* Remove ShiftActivationDevices()
* Check for blob entry only if it is present
When processing placeholder ops, ignore blobs that are not present in blob_to_device.
* Internalize use of eigen tensor
Move use of eigen tensor out of the header file so we don't get template partial specialization errors when building other libraries.
* feature importance for transformed features.
* - Fix unused parameter warnings
The changes in this diff comment out unused parameters.
This will allow us to enable -Wunused-parameter as error.
#accept2ship
* add opencv dependencies to caffe2
The video input op requires additional OpenCV packages. This adds them to CMake so that it can build.
* Add clip_by_value option in gradient clipping
When a value is bigger than max or smaller than min, clip it to the boundary.
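In NumPy terms, the value-clipping option amounts to this sketch (not the operator's actual code):

```python
import numpy as np

def clip_by_value(grad, min_val, max_val):
    # Every element above max_val or below min_val is snapped to the boundary.
    return np.clip(grad, min_val, max_val)

grad = np.array([-3.0, -0.5, 0.2, 4.0])
print(clip_by_value(grad, -1.0, 1.0))  # [-1.  -0.5  0.2  1. ]
```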
* std::round compat
* Add support to TensorRT
* Removed License header
* Bind input/output by position
* Comments
* More comments
* Add benchmark
* Add warning for performance degradation on large batch
* Address comments
* comments
* fix unit test for sqrt op
From the error logging:
[idx, grad, grad_estimate] are:
[[ 146. 0.5 0.45776367]
[ 147. 0.5 0.45776367]
The gradient == 0.5 is correct, which means the SqrtOp and its gradient are doing the right job. (Because y = sqrt(x), loss = y^2/2 = x/2, and then d(loss)/dx = 1/2 = 0.5.)
The test failed because of a numerical problem in grad_estimate (in the unit test). This can happen because the step_size is small and float precision is limited (when there are multiple elements in the tensor, we compute sum(y^2) for the loss).
This diff
- increases the step size, and also moves the test cases further away from 0 (where sqrt(x) is not well defined), to be safe :)
- also cleans up and merges the test cases for in-place vs. non-in-place
Tested with:
`CAFFE2_HYPOTHESIS_PROFILE=debug ai_bt caffe2/caffe2/python/operator_test:elementwise_ops_test -- "test_sqrt"`
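To see the numerics, here is a standalone sketch of what the gradient checker does, comparing the analytical gradient (0.5) against a central-difference estimate:

```python
import numpy as np

x = np.array([0.9, 1.3, 2.7], dtype=np.float32)  # kept away from 0
step = 1e-2  # a larger step keeps float32 round-off from dominating

def loss(x):
    y = np.sqrt(x)
    return (y ** 2).sum() / 2.0  # = sum(x) / 2, so d(loss)/dx_i = 0.5

for i in range(len(x)):
    xp, xm = x.copy(), x.copy()
    xp[i] += step
    xm[i] -= step
    grad_estimate = (loss(xp) - loss(xm)) / (2 * step)
    print(i, 0.5, grad_estimate)  # analytical vs. estimated, both ~0.5
```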
* CompositeReader & CompositeReaderBuilder
A new type of reader gluing multiple readers together.
* Back out "Revert D7394363: [GanH]: Log D Trick for Cross Entropy with Sigmoid"
Original commit changeset: 9325a4356dbe
* [dai][WIP] convert params to int8 on ps before sending to trainer
Add float->uint8 conversion in addition to float->fp16 conversion in model_saver.
* [easy] improve unit test for sparse length sum ops
as desc.
#accept2ship
* Update GitHub upstream to 771fcb3455
* move sparse hash unique ops to OOS and add unit tests
- Move the SparseHash version to OOS, since 'sparsehash' is already a dependency of Caffe2 OOS: https://fburl.com/arssw4n1
- The 'SparseHash' engine is also being used in OOS, so the SparseHash version should live in OOS to reduce confusion: https://fburl.com/o5ea7ah2
- Fix the CUDA UniqueOp for the case when the batch is empty.
- Add unit tests.
* group_norm_op for caffe2
This is the cuda op for Group Normalization (GN): https://arxiv.org/abs/1803.08494
This code implements GN in one op that computes Y = gamma * (X - mu) / sigma + beta and also its gradients. It is expected to have minimal memory consumption (similar to the BN op), avoiding the new blobs that would be created if GN were implemented as several ops (e.g., reshape, norm mean/std, affine_channel).
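For reference, a NumPy sketch of what the fused GN forward computes (NCHW layout; eps is an assumed stabilizer, as in the BN op):

```python
import numpy as np

def group_norm_ref(X, gamma, beta, num_groups, eps=1e-5):
    N, C, H, W = X.shape
    G = num_groups
    Xg = X.reshape(N, G, C // G, H, W)
    mu = Xg.mean(axis=(2, 3, 4), keepdims=True)        # per (sample, group)
    sigma = np.sqrt(Xg.var(axis=(2, 3, 4), keepdims=True) + eps)
    Y = ((Xg - mu) / sigma).reshape(N, C, H, W)        # normalize
    return Y * gamma.reshape(1, C, 1, 1) + beta.reshape(1, C, 1, 1)
```

The fused op does this (and the corresponding gradients) in a single kernel instead of materializing the intermediate reshape/mean/std blobs.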
* Resubmit D7405233: disappeared in D7464958
The OOS publish caused the op to go missing; the test, however, was still there.
* [c2] add sparse hash engine for cuda unique op
The SparseHash version of UniqueOp copies the input tensor to CPU, uses a sparse hash map to compute the unique output, and then copies the result back to GPU.
* [dper][gpu] enable unit testing gpu trainer for sparse nn
To debug the GPU trainer using mock data in a unit test, and to make it easier to develop the GPU trainer for new models.
* Reuse Gloo context for Synchronize() calls
Previously we were creating (and leaking) the Gloo context on each call to Synchronize(). Now we run the common world op and create the barrier net only once, then run the barrier net on each Synchronize() call. Since the timeout is associated with the Gloo context, assert that the timeout is fixed instead of trying to handle the complexity of multiple timeouts (and associated contexts).
* [GanH/WGAN][1/n]: add FC param clipping
as titled
* [mobile] minimizing changes between caffe2_benchmark and speed_benchmark
* [GanH]: enable diagnose within model
Avoid looking up blob names; instead, enable diagnosis directly inside the model.
* Add `net_transformer_fun` option to DPM
This callback allows for various transformations to be made to the
model after gradient operators have been added. The immediate motivation for
this is to allow transformations such as "checkpoint-and-recompute" which
allow trading off memory for additional compute.
Adding several callbacks like this has made DPM's API less than ideal at this
stage. However, I could not find any reasonable alternative.
* [DT] [33/n] Compile flow task groups
Task groups need to be compiled in order to pickle the object in fblearner. I also changed the Job's compile function, since creating a new object is not necessary.
* Initial commit for sparse_normalize vectorization and benchmark
* [GanH]: LB Calibration for JSD
as titled
* Tracing event in async executor
Adding event tracing through TRACE_EVENT macro in async executor
* [Resubmit] D7409751 Reseting book-keeping blobs when the reservoir is reset
D7409751 got lost in D7464958
* Visualizing realtime weights values
We want to visualize the weights' values as the optimizer iterates. This diff supports visualizing the weights at an assigned index.
Currently, we assume the blob to be 2 dimensional.
* [GanH][Easy]: Fix Homotopy Weighting
Apparently, there was a bug in the homotopy weight (alpha, beta) update.
* [c2] move sparse hash unique op out of oss
So that OSS does not need to depend on the Google hash map.
* Get rid of std::round as it's not supported on Android
* Revert changes on setup.py
* Skip shaky test on Dataio
* fix
* Check mappings ONNX -> Caffe2 bear the same argument names
When adding an extra arg to an input ONNX op, if it's not supported in Caffe2, the exporter would just silently pass it to the NetDef and ignore it in the implementation. That's pretty error-prone. Caffe2 also has an OpSchema description, so we can enforce that all arguments either explicitly appear in the schema or are listed explicitly in Caffe2.
See also https://github.com/caffe2/caffe2/pull/2478
Add test for C2 argument checking
* Some operators do not log arguments, which prevents argument checks.
Invite users to file an issue to fix the schema.
* Change "Same as input" type deduction to work for ops with multiple outputs
* Change the InferBlobShapesAndTypes definition to take a vector of raw pointers instead of unique_ptrs. The function doesn't own the objects, so there is no need to pass smart pointers; requiring them also prevents calling the function with an existing object, since the caller would have to create a unique_ptr, i.e., copy an existing object just to create the pointer.
* Switching order of std::move<unique_ptr> and unique_ptr.get()
* adding comma
* [easy] allow empty tensor in cuda relu op
The diff has not enabled the unit test for empty tensors, because the MKL version of ReluOp needs extra work to support them.
* Make blob norm plotting work with distributed trainer when the old framework is used
This reverts commit d63266ccbc0c1390c58c2a71ae0b562fdec2fbc0
@bypass-lint
@cause_a_sev_many_files
This reverts commit 05bd9bec10fad5ff9dc40be88836fd7274d50ce9
Providing Python API to fetch Int8 tensors.
data, scale, zero_point = workspace.FetchInt8Blob(blob_name)
now returns a tuple if the blob contains an Int8TensorCPU
'data' = int8 data array
'scale' = fake quantization scale
'zero_point' = fake quantization offset
Although FetchBlob shares its back-end implementation with FetchInt8Blob, we raise an error to prevent unexpected behavior of the same method.
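A usage sketch (the blob name is hypothetical; the dequantization formula is the standard fake-quantization inverse, not taken from this diff):

```python
from caffe2.python import workspace

# "conv1_out_int8" is a hypothetical blob holding an Int8TensorCPU.
data, scale, zero_point = workspace.FetchInt8Blob("conv1_out_int8")

# Recover approximate float values from the quantized representation.
dequantized = scale * (data.astype("float32") - zero_point)
```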
Ignore the backward step when there is no loss function.
For some customized models, we can encode the update directly in the forward step, so there is no backward step.
Added a Caffe2 math sum operator that takes integers (only int32).
Changed SumFloatIter to SumGenericIter so that it takes more than one type.
Added a sumElementInt operator.
This code introduces a new class for exporting decoder step (ensemble) models trained with fbtranslate pytorch to Caffe2 models via ONNX, for the purpose of use in "component beam search" being developed concurrently in C++ by @juancarabina.
This is required to support placeholder/decorator ops which do not have an operator schema. Note that the change is made in such a way that it is a no-op if placeholder ops are not used.
Changes:
1. Since the placeholder ops always run on CPU, added a utility to infer placeholder ops' blob devices.
2. Placeholder op's input/output blobs should be on CPU as well. This change takes care of dealing with output blobs - i.e. use blobs on CPU.
3. Added a Unit test - test_inject_copy_placeholder_ops
* Exported AtomicIterOp count
* Add axis to top_k_op. (#2416)
* Revert update on top_k_op
* Add axis to top_k_op
* [auto] Update onnx to a8e4648 - Adjust link flags when built in Windows Debug mode (#647)
a8e4648a7d
* [auto] Update onnx to f4acf28 - Remove allowconsumed enforceconsumed from op schema. (#617)
f4acf281ef
* Exported AtomicIterOp count
* Initialize cpuinfo in the thread pool
The thread pool called cpuinfo_get_processors_count() without initializing cpuinfo. Only by luck did this not make Caffe2 single-threaded: the threadpool is initialized after NNPACK, and NNPACK initializes cpuinfo itself.
This commit also updates cpuinfo to a version that aborts with a fatal error if it's used uninitialized.
* Updated Python Op and Image Pre-Processing Pipeline tutorials && Added CIFAR-10 Part 1 tutorial (#2286)
* Updated Basics tutorial: (1) Added Python 3 support with __future__ statements; (2) Various grammatical/typo fixes and minor refactoring of Markdown
* Added Python 3 support and made minor typo fixes
* Added Python 3 support with future imports, refactored and corrected errors in Markdown, added comments
* Added Python 3 support with future imports, Added use of caffe_translator.py to translate downloaded .caffemodel file to .pb files
* Upgrades to Image Pre-Processing Pipeline tutorial
* Updated Python Op tutorial
* removed markdown with empty links
* Added Part 1 of an end-to-end CIFAR-10 tutorial
* Updated MNIST Dataset and Databases tutorial with python3 support and markdown fixes
* Tweaks to markup, fewer training iterations
* changed permissions of CIFAR10_Part1; typo corrections in Image_Pre-Processing_Pipeline
* Typo corrections in Multi-GPU Training tutorial
* sync Python_Op py_gen with the IPython notebook
* nit typo correction
* [auto] Update onnx to 5cb999d - Minor cleanups to shape inference (#653)
5cb999ddc1
* [auto] Update onnx to ecac1c1 - Merge Rel 1.1.0 branch into master (#657)
ecac1c1624
* Strip down onnx to only pb definitions in mobile build (#2426)
* Exported AtomicIterOp count
* Revert update on top_k_op
* Add axis to top_k_op
* Remove do { ... } while (false)
* Revert top_k op to upstream
* Add argmin and argmax ops
* Revert top_k_test to upstream
* Add argmin and argmax ops
* Revert "Use -DCMAKE_BUILD_TYPE=Release for local build by default"
This reverts commit 035c62081f6420405b9f1380cc5d21b4c6ae78f6.
* Revert "Export number of iterations of AtomicIterOp (#2338)"
This reverts commit 91b7a0cb48c6b079e2ca8fd5c26819a003937d76.
1. Support computing the average LpNorm in the LpNorm operator by adding one more boolean argument, i.e., LpNorm(x; average=true) = LpNorm(x) / size(x).
2. Integrate the average option into the visualization framework.
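In NumPy terms, the new option is roughly this sketch (assumes the sum form of LpNorm with p in {1, 2}, matching the formula above):

```python
import numpy as np

def lp_norm(x, p=2, average=False):
    norm = np.sum(np.abs(x) ** p)  # sum form, no p-th root
    return norm / x.size if average else norm

x = np.array([1.0, -2.0, 3.0])
print(lp_norm(x, p=2))                # 14.0
print(lp_norm(x, p=2, average=True))  # 14.0 / 3
```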
Changes:
=======
1. Added device inference functions for Concat and Split Ops.
2. Added a unit test to validate the change. See, test_device_inference_function in core_test.py
3. Fixed some formatting.
Instead of using hard-coded rules or relying on gpu_strategy to mark fully synchronous data parallel ops, we need generic rules that are applicable to both the single-node and distributed settings.
Make it easier to plug in intermediate steps between preprocessing & trainer by maintaining a stable schema.
I also fixed enqueue() so that we can pass the same blob in multiple locations without causing data corruption.
The way `splits()` is currently used is so convoluted. It's impossible to compose ReaderBuilder. I'm working on a composite reader so this is a prerequisite for it.
The idea is that the ReaderBuilder should maintain the states it needs to create a reader. Any setup is done through the new `setup()` method. Currently, `setup()` should only be called once, but, if needed, it should be safe to call it multiple times.
* Add CollectAndDistributeFpnRpnProposalsOp for FPN support
* Adds a C++ operator equivalent to the Python op in Detectron
* Once some additional GenerateProposalsOp changes are made this will
let us support Detectron FPN models with straight Caffe2 C++ ops
* RetinaNet and segmentation models require additional work
* Remove some uses of conservativeResize
* Add notes about training and inputs/outputs to operator documentation
* Fixing conda
* Adding hypothesis and onnx to conda builds
* Updates but still not working
* Adding required changes to conda_full
* Updates
* Moving to more general build_anaconda script
* Adding check for gcc version
* Adding general ways to add/remove packages from meta.yaml
* Changes for specific packages to build on gcc 5.4
* Fix with glog spec
* Requiring >numpy 1.12 for python 3 to satisfy opencv dependency
* Adding pydot to required testing packages
* Adding script to read conda versions for gcc ABI
* Trying to fix segfault by installing in env instead
* conda activate -> source activate
* Trying adding back leveldb
* Setting locale for ONNX + conda-search changed its format
* read_conda_versions handles libprotobuf
* Conda script updates
* Adding a protobuf-working test
* Removing changes to proto defs b/c they will require internal changes in a separate diff
* Fix useless opset_import in onnx
* Set the default ir version in make_model
* Use the target_opset_version in Caffe2Frontend
* remove make_model from helper in caffe2.python.onnx
* Reduce Sum and Reduce Mean
* Handle reductions with empty 'axes'
* Merge codebase and simplify tensor reduction logic
* Restructure code and add comments.
* Fix parameter to scale
* [GanH]: two_task_discriminator
as titled
and adding label smooth
* [Dper2] Simplified UI options needed for blob magnitude visualization
* [GanH]: fix tags
as titled
* Added type and shape inference for GatherRange operator
This helps with type/shape inference when using this operator in layers.
Also just a nice-to-have in general.
* Demonstrate Caffe2 exception handling with StoreHandlerTimeoutError in Python
We'd like to catch and recover from certain Caffe2 net exceptions. Use this diff to demonstrate a pattern of registering a pybind exception mapping and catching it in Python, using caffe2::StoreHandlerTimeoutException.
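A sketch of the recovery pattern (the Python-side import path of the bound exception is an assumption here, labeled as such):

```python
from caffe2.python import workspace

def run_with_retry(net_name, max_retries=3):
    # StoreHandlerTimeoutError is the pybind-registered mapping of
    # caffe2::StoreHandlerTimeoutException; its import location below
    # is hypothetical, for illustration only.
    from caffe2.python.errors import StoreHandlerTimeoutError
    for attempt in range(max_retries):
        try:
            workspace.RunNet(net_name)
            return
        except StoreHandlerTimeoutError:
            print("store timeout on attempt %d; retrying" % attempt)
    raise RuntimeError("net %s kept timing out" % net_name)
```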
* Bind Gloo IoException to IoError in Python
Allow peer failure handling and recovery using an exception based mechanism. This diff registers gloo::IoException with pybind.
* [GanH]: add label smoothing to softmax with loss
as titled
* [C2] Enable LARS in Adagrad and hook it to DPER
* [DPER] Don't pass LayerModelHelper in create_trainer_nodes
Since we're planning to get rid of it eventually and I want to get access to
NetDef only interface ASAP - I'm looking towards removing all references to
LMH, where we don't really need them.
* fix bugs in LambdaRankNdcgOp
The loss and gradient in LambdaRankNdcgOp are incorrect: the loss should be the negative log of the probabilities instead of the log.
* Restrict thread pool on iOS to only big cores
Historically, iPhones exposed only one type of core, and the Caffe2 thread pool used all of them.
However, the iPhone 8/iPhone X expose 2 big + 4 LITTLE cores. As our thread pool doesn't support work stealing or other forms of load balancing, fast cores end up waiting for the slow ones, and it may be better to restrict execution to only the 2 fast cores, as we do on Android.
* Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine
* make clang happy and get fewer warnings
* [Personalization] Support add_output_schema() in layer_model_helper
Problem:
Currently the output_schema of sparse_nn can only be set once. https://fburl.com/efth5zer.
Solution:
For flexibility, we want to add fields to output_schema incrementally.
Plan:
Wrap the change of `model._output_schema` into a new function `add_output_schema()` for adding additional output_schema.
Callsite:
The add_output_schema() should be called instead at https://fburl.com/efth5zer
Reference:
The newly added `add_output_schema()` will be similar to `add_loss()` in https://fburl.com/t2ii8njh
* [C2] Don't crash kernel in case of invalid shapes for ConcatOp
Enforce correctness of the shapes of input tensors so we won't access an invalid index.
* [Caffe2] Add analytical performance counters to Dynolog
Initial diff for counting analytical flops and memory writes for C2 operators.
* BBoxTransform op: Handle RoIs from multiple images per batch
The BBoxTransform op used during typical Faster-RCNN inference operates only on RoIs from a single image (no batching). This adds support for batches, with an optional output blob containing the batch splits (i.e., the number of RoIs belonging to each item in the batch). The code is perfectly backward compatible and shouldn't break any existing models.
* [mkl] Make MKL-DNN cooperate with memongered nets
C2's MKL-DNN implementation caches input dims and reuses intermediate and
output buffers across net runs, which prevents memonger from being used. This
may not always be useful since input dims may vary widely in many cases and
we'll end up reallocating anyway. Added an option to force reallocation when
memonger is used.
* [oncall] fix batch gather ops for empty input
Still need to bisect for the breaking change, but this shall fix the case of empty input.
The error logging looks like: https://interncache-ftw.fbcdn.net/t49.3276-7/23938497_293562711176943_6500112636590424064_n.txt?_nc_log=1
@[557759185:raychen] can you help subscribe the oncall from the ads side? This may affect the Sigrid online trainer.
* optimize BatchOneHotOp
We want to iterate in row-major as opposed to column-major for better
locality.
* Supported exporting model with int blobs.
Needed by CondenseNet.
* BoxWithNMSLimit op: Handle boxes from mutiple images per batch
Similar to D7135360. Added support for multiple images per batch in the op.
Takes an optional additional input "batch_splits" as output by BBoxTransform
op, and returns new batch_splits after applying NMS and filtering. Otherwise,
backward compatibility is maintained.
Summary:
Executing loop's body in a separate workspace, using WorkspaceStack to
support saving and reusing of workspaces
Test Plan:
python caffe2/python/operator_test/onnx_while_test.py
Reviewers: caffe2-review, jamesreed
This op is used for gradient clipping to take care of exploding / vanishing gradients.
If original_norm is larger than the threshold,
then each element of the tensor is scaled by threshold / original_norm.
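Equivalently, in NumPy (a sketch of the scaling rule, not the op's code):

```python
import numpy as np

def clip_by_norm(grad, threshold):
    original_norm = np.linalg.norm(grad)
    if original_norm > threshold:
        grad = grad * (threshold / original_norm)  # rescale, keep direction
    return grad

g = np.array([3.0, 4.0])     # L2 norm = 5
print(clip_by_norm(g, 1.0))  # [0.6 0.8], norm = 1
```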
Adding NUMA awareness through numa_node_id in DeviceOption. Blobs of operators with numa_node_id are allocated on the corresponding memory banks, using CPU pools with NUMA affinity set to run the operators.
With python3, np.int defaults to int64. This diff should fix it. I don't know if a test already exists for this function; however, the following ASR test was breaking when I switched to py3:
```
buck test caffe2/caffe2/fb/speech/asr_training/:tensor_parser_test
```
After D6953547 some of the blobs were no longer impacted by uint8 quantization, but they would still generate operators expecting uint8 inputs and thus fail. This diff adds a temporary hack to avoid doing this quantization when the layer is not quantized.
Will fix it properly by switching to Net rewriting instead.
* Scope MultiRNN blobs with name as well as layers
Also don't double scope MultiRNN in case of multiple layers.
* Scope input projection of first layer with name
We don't scope it with layers because the projection is done
outside of the layer.
* Avoid scoping input blob in MemongerTest.test_rnn
* Rectify input_blob in prepare_input
Revert change in memonger_test because rectifying input will solve the problem.
* First attempt on sqrt op
* Adding the Sqrt op along with the test cases
* Made changes per @Yangqing's questions re: tensor format and used hypothesis to generate input tensor
In pytorch, after pad_packed_sequence, the "extra" elements (after the
ends of the sequences) are reset. In the equivalent Caffe2 graph
exported via ONNX, they contained some leftover values, which caused
tests to fail. Probably no one depends on these values, but just in
case, set them to zero to mimic pytorch semantics.
* Handle legacy pad in Caffe2==>ONNX converter, also remove fake initializer
* Address the comments, 1) have filtering fake initializer before ssa rewrite, 2) polish the legacy padding handling logic
* Add test cases to cover the code just added
* Nit
Added functionality to GatherRangesToDenseOp such that it supports an optional input KEY, and will sort DATA according to KEY for each example per feature.
* [C2] Implement Layer-wise Adaptive Rate Scaling (LARS)
* add unit test for Lars
* set default value for lars to be None
* remove lars for subclasses of SgdOptimizer
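For reference, the LARS scaling rule from the paper (https://arxiv.org/abs/1708.03888) looks roughly like this sketch (the trust coefficient and the division-by-zero guard are assumed defaults, not necessarily what the Caffe2 op uses):

```python
import numpy as np

def lars_local_lr(w, grad, trust=0.001, offset=1e-5):
    # Layer-wise rate: trust * ||w|| / (||grad|| + offset), per parameter blob.
    return trust * np.linalg.norm(w) / (np.linalg.norm(grad) + offset)

# Each layer's SGD step is then scaled by its own local rate:
#   w -= base_lr * lars_local_lr(w, grad) * grad
```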
The old pow operator has been deleted in math_ops.cc, math_ops.cu and math_ops.h, while the new operator supporting scalar and tensor exponents has been added in pow_op.cc, pow_op.h and elementwise_op.cu.
Summary: as title. This is similar with python pprint utility for nested json data structure. It can be useful for checking schema during debugging.
Reviewed By: kittipatv
Differential Revision: D6710767
fbshipit-source-id: e450aa5477fa1ad4f93c4573f8108a2f49956da8
Summary: The original implementation averaged the momentum across the embedding dimensions, which doesn't make any sense. This meant all the embedding dimensions received the same update, becoming a very memory-expensive one-dimensional embedding.
Differential Revision: D7003135
fbshipit-source-id: ed54e3427bc13895a4e949e96b4b17f6ebfb6d53
Summary:
In some cases we were doing quantization even when we should not. This diff prevents this from happening.
Reviewed By: rayleichen
Differential Revision: D6953547
fbshipit-source-id: 7c65baaf969e5e1bddb68ca8182f4f3b43f2431d
Summary:
Without this enforce it's too easy to export a model overriding its params in the predictor.
Reviewed By: rayleichen
Differential Revision: D6984506
fbshipit-source-id: 9bbf375758686c6ad12ad071723f255363e98ae6
Summary:
This reverts commit 30f614beea6f859fee25ce4f85573142885dde45
bypass-lint
cause_a_sev_many_files
Differential Revision:
D6893040
Original commit changeset: 30f614beea6f
fbshipit-source-id: 5e98a24699088283f864efe31234874bdacbe3c3
Summary: The old pow operator has been deleted in math_ops.cc, math_ops.cu and math_ops.h, while the new operator supporting scalar and tensor exponents has been added in pow_op.cc, pow_op.h and elementwise_op.cu.
Reviewed By: houseroad
Differential Revision: D6893040
fbshipit-source-id: 30f614beea6f859fee25ce4f85573142885dde45
Summary:
Add a function that returns true if the model contains a loss and returns false if it doesn't.
Reviewed By: kittipatv
Differential Revision: D6982444
fbshipit-source-id: 1f63b7a1eaa3077841a0ad5d8d854b471d0aa84c
Summary: Sometimes we need to add some extra schema later
Reviewed By: sunnieshang
Differential Revision: D6951849
fbshipit-source-id: 564eb88f9250eae24869fd10ba3426e00a18af33
Summary:
Modify detect_components to take a list of valid node_name prefixes instead of values. Users can set node_name to e.g. `'sparse_component:0'`, `'sparse_component:1'`, etc.
and pass `'sparse_component:'` as a valid prefix. Also add `Tags.SPARSE_COMPONENT` in addition to `Tags.SPARSE_SHARDED` and `Tags.SPARSE_DONT_SHARD` and update all calls to
`detect_device_components`.
Reviewed By: azzolini
Differential Revision: D6952599
fbshipit-source-id: e1b1e6b146a6bd053b295690016044fd5990c893
Summary:
Change log
- Support rectangle cropping, where the height and width of clip cropping can be set separately. This is useful since most video resolutions are non-square, such as 240p, 360p and 480p, where width is significantly larger than height.
- Comparisons of training on ucf101 between using 112x112 croppings and using 112x144 cropping.
- https://fburl.com/i0rw6y1k
- Support 14 multi-cropping per video clip at testing stage to improve classification accuracy. Take left-top, central-top, right-top, left-bottom, central-bottom, right-bottom and central-central croppings as well as their mirrorings. In total, 14 croppings.
- Comparisons on the same model trained on UCF-101. Use 1 clip per video
- RGB. f41014306, w/o Vs f41014868, w/ multi-cropping: `0.64099 Vs 0.65796`
- OF. f41014889, w/o Vs f41014913, w/ multi-cropping: `0.65796 Vs 0.67624`
- Support color jittering and color lighting on RGB data for training data augmentation.
- Comparisons of training on ucf101 from scratch with and without color jittering and lighting:
- https://fburl.com/k69zatul
Reviewed By: HengCV
Differential Revision: D6962620
fbshipit-source-id: 9b43478945874142727fea351ee04417218e6606
Summary: Copying model_id from metaNetDef_->modelInfo in PredictorContainer for dper models. Since these model_ids are strings of the form <model_id>_<snapshot_id>, changed them to strings in net_observer.
Reviewed By: salexspb
Differential Revision: D6752448
fbshipit-source-id: 93c91950b44c012e57240aaf909bc961449cfd7c
Summary: Step 1 of 3 in adding support for multidevice batch normalization on GPUs. Implements ChannelStatsOp for the GPU. Next steps are to port the backprop stats op and tie things together in DPM.
Reviewed By: rbgirshick
Differential Revision: D6953411
fbshipit-source-id: cd50e53d66ea84fe66021c08b978b28290d9f347
Summary: The interface is not used anywhere AFAICT; cleaning up to make it less confusing.
Reviewed By: kuttas
Differential Revision: D6867040
fbshipit-source-id: 3e8a77df76ef09c6864c308561825777b326f76c
Summary: CompleteInTimeOrDie was added to detect deadlocks and proactively exit. In addition, call os.abort() to generate a core dump so that the error is actionable.
Reviewed By: bmaurer
Differential Revision: D6938343
fbshipit-source-id: 8bd36da4f4bb1195bd3398f25d133a6ebf1c66ad
Summary:
It appears that my initial implementation was not really working when one
starts doing nesting. This diff is fixing this by replacing itertools with
something that is really easy to reason about.
Reviewed By: idning
Differential Revision: D6933763
fbshipit-source-id: f7a1de996d878a41bac2b2acd9d87a7c4b416778
Summary:
There is a long-standing problem of scoping which was introduced in the original Python wrappers early in H1. Basically each RNNCell implementation has to manually scope the outputs of each of the operators. If somebody forgets, then there could be weird bugs with layers, etc.
Approach is the following. User has to explicitly specify current scope when using apply_over_sequence function and others if the function is going to be called several times (like for stacking layers). This way we use Caffe2 native scoping approach instead of inventing one extra API people have to use (i.e. passing scope name as an argument to the RNNCell constructor).
Closes https://github.com/caffe2/caffe2/pull/1681
Differential Revision: D6777536
Pulled By: salexspb
fbshipit-source-id: 73d860b8d4857589e04bdea5a6fcd3080d68427c
Summary: We should not be trying to instantiate this op on GPU at this point
Reviewed By: pietern
Differential Revision: D6915576
fbshipit-source-id: 6bdbc93ad12fc67e3001fce1b506fe2895d7b0ba
Summary: The previous refactor of these four ops changed their input semantics, which makes them backward incompatible with old models. This diff fixes the problem by checking the input and defining the follow-up behavior by case, so that old models can be accommodated.
Reviewed By: dzhulgakov
Differential Revision: D6905840
fbshipit-source-id: fc37baec407fd5eae64fc9c2b61aba3c492a90f3
Summary:
Special While loop operator that follows the semantics of While in ONNX: https://github.com/jamesr66a/onnx/blob/controlflow/docs/Operators.md#experimental-loop
Stuff that's missing:
- Lexical scoping enforced via child workspaces
- Double-buffering on forward
Further possible enhancements:
- Full parallelism when there are no loop-carried dependencies
- Diagonal execution
- More optimized scan_outputs shaping via static shape inference provided in ONNX (coming sometime)
- GPU support (probably just some tensor value management stuff)
- Gradient support (likely low-pri right now)
Closes https://github.com/caffe2/caffe2/pull/1848
Reviewed By: dzhulgakov
Differential Revision: D6907524
Pulled By: jamesr66a
fbshipit-source-id: 4938108733e168b8c027035091104712a18c992a
Summary:
* We now allow subdirectories as well as numbers in the name.
* Also fixed an error case.
Closes https://github.com/caffe2/caffe2/pull/1875
Reviewed By: pjh5
Differential Revision: D6894401
Pulled By: orionr
fbshipit-source-id: 6a9938bc7d2ba6b8f094ed7b8a02664120a10626
Summary: hypothesis_test was introduced in D4508879; add a plain test which is more straightforward.
Reviewed By: kennyhorror
Differential Revision: D6835334
fbshipit-source-id: d05a2cd199b2de56ac0cc0319f19fcd7978647d5
Summary: Enable ModOp to control whether the output sign follows the dividend or the divisor.
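The two conventions differ only for operands of mixed sign; Python and C illustrate them (a sketch of the semantics, not the op's code):

```python
import math

print(-7 % 3)            # 2    -> sign follows the divisor (Python's %)
print(math.fmod(-7, 3))  # -1.0 -> sign follows the dividend (C/C++'s %)
```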
Reviewed By: xianjiec
Differential Revision: D6852457
fbshipit-source-id: 62dbb66cacecb8e0a0f81f63f2b7b378efbd6ee2
Summary: MultiNodeCheckpointManager currently returns None in this case, yet in JobRunner we assume this function returns a valid task group, i.e., we call session.run(self.checkpoint_manager.init(...)) directly. This fails when we use LocalHostScheduler and reuse a MultiNodeCheckpointManager.
Reviewed By: azzolini
Differential Revision: D6843450
fbshipit-source-id: a7ec942cfe692f19e8751b0078ae6a6108f29e54
Summary: To match the semantic in ONNX, change the default value of alpha of LeakyRelu to 0.01
Reviewed By: dzhulgakov
Differential Revision: D6840975
fbshipit-source-id: 08543f80fd86cbe96a0eee8d725ef137a5bf4ab8
Summary:
Commonly, net observers attach operator observers at construction. This diff separates the logic into a base class to inherit from.
Closes https://github.com/caffe2/caffe2/pull/1806
Reviewed By: salexspb
Differential Revision: D6808623
Pulled By: mdschatz
fbshipit-source-id: 75ef0eea913ef30943541c829c0a976965f42736
Summary:
In this case, each sequence is treated as having a length equal to the
first dimension of the input tensor. This matches the semantics of
ONNX when the sequence length input is left out.
Closes https://github.com/caffe2/caffe2/pull/1764
Reviewed By: dzhulgakov
Differential Revision: D6751219
Pulled By: anderspapitto
fbshipit-source-id: 89e0efd12339157627494e2b8c83e952bdd8a9f8
Summary:
Main changes:
1. Move reader creation to Brew in order to be consistent and avoid a wild use of param_init_net
2. Use optimizers for training function, avoid manual optimizer construction
3. Add MLP mode (a default)
4. Fix a bunch of too verbose comments and add a bit of new explanations
Closes https://github.com/caffe2/caffe2/pull/1760
Differential Revision: D6749059
Pulled By: salexspb
fbshipit-source-id: 9dfbbb2d9772a74a0300c2e404a92e791f7cc593
Summary: Updates `sparse_lookup.py` for the new fused 8-bit rowwise quantization. Mostly just changing the same files as the original diffs (D5753626 and D5761202). I know very little about this code here so please let me know if this is safe, also in terms of migration away from the non-fused storage.
Reviewed By: kennyhorror
Differential Revision: D6710784
fbshipit-source-id: 185f147af52a094a937ba631b0351225e660d205
Summary:
as titled
After converting categorical to Ngram keys, use this op to extract eids
Differential Revision: D6794020
fbshipit-source-id: 4f9251a22d7a129da30b92845e312876e6510e7e
Summary: Adds cuda support for LC Op
Reviewed By: QueryConnectionException
Differential Revision: D6803659
fbshipit-source-id: 538bbf6fd202c79154132fda0e90e175eb09d025
Summary: Weighted sampling reader dequeue randomly chooses a hive reader to read a mini-batch. This diff allows dequeue to output the index of the randomly chosen table to a specific blob.
Reviewed By: kennyhorror
Differential Revision: D6621070
fbshipit-source-id: 754b981fc2bcfdb0146d2a0a5b677e7cfe74211b
Summary: Fix the flaky test for ngram from categorical test
Reviewed By: dragonxlwang
Differential Revision: D6801152
fbshipit-source-id: dcbae17b1d3737a41fb2f5c794c1146a02c542bb
Summary:
Every call to the checkpoint_metadata_handler write() API requires us to pass all params like db_prefix, db_type etc.
Introducing an init API in the checkpoint_metadata_handler so that such params can be saved and need not be passed in every API call
Reviewed By: mraway, anshulverma
Differential Revision: D6792651
fbshipit-source-id: 059fa4309e8fce1ee5ab009af3e0570573c24245
Summary: This is the first in a series of diffs to enable batch normalization across multiple devices on the same node with data parallel model. The diff contains the ops for computing the per-channel statistics required to obtain the mean and variance across multiple devices on the same node on the forward pass, and the gradient of the bias and scale during backpropagation. The actual modifications to SpatialBN and SpatialBNGradient to make use of these results will be in a separate diff.
Reviewed By: rbgirshick
Differential Revision: D6697336
fbshipit-source-id: 0de2750fe7e851795f238d9f625aeb4d74023dc2
Summary:
This is a first attempt at completing bootcamp task T24449916. This diff contains 3 major changes:
1) Change LayerModelHelper to allow for exposing the output and parameters of any layer to metrics
2) Added a runner that allows metrics to draw arbitrary plots to a matplotlib axes object
3) Implement a metric that aggregates distributions of values in a blob over the training, and try this out in a notebook
Reviewed By: kennyhorror
Differential Revision: D6671273
fbshipit-source-id: b8961837395e89c957edbf5c7c862bdb845ccf4b
Summary: add Test for SparseLookup with PositionWeighted.
Reviewed By: kennyhorror
Differential Revision: D6771612
fbshipit-source-id: b4b3bfd514f366f579b4192643330ae73843d4f9
Summary:
SqueezeOp supports dropping dims of size 1. MKLMemory now supports Reshape()
if the buffer is in plain layout, in which case just the dims and layouts are
modified similar to caffe2::Tensor. SqueezeOp takes care of converting the
input to plain layout if needed via an intermediate buffer before calling
Reshape().
Differential Revision: D6735656
fbshipit-source-id: 953309498370e1b8986e8c593bc6963f38036255
Summary:
At the end of distributed training, the trainer needs to download the parameters back from the parameter servers for saving the model. Currently, this parameter downloading happens at the end of the job's epoch task group, which creates several problems when checkpointing is enabled for distributed training:
1. When checkpointing is enabled, we run multiple training epochs. At the end of each epoch, the model download tasks run to collect parameters, but we won't save the model until the true end of training, so there is a big waste of resources.
2. After trainer0 downloads the parameters, they take a lot of memory, so trainer0 can easily run out of memory in the next epoch of training.
Our solution is to insert a parameter download task group between the job's training epoch_group and the job's exit_group.
Reviewed By: azzolini
Differential Revision: D6765393
fbshipit-source-id: 5a4f556fc3c1cd7834a7c406a3c0de3fccd50c49
Summary:
This should translate to a 1% error margin. The gradient checker uses a .5% threshold.
Closes https://github.com/caffe2/caffe2/pull/1766
Differential Revision: D6774077
Pulled By: pietern
fbshipit-source-id: f97c7ffb2ef34fdd71d69320a7fdcf4a6a457715
Summary:
Just redirects to MKLSumOp. Doesn't support broadcast though since dnnSumCreate
expects identical dims.
Differential Revision: D6729788
fbshipit-source-id: 3e189465ad9d026bec4954648562ffe4e67fc393
Summary:
As in the name. The LATTE translation team, moving some code from Python 2 to 3, uncovered a case where comparison between unicode and str types leads NameScope('') to prepend a separator to the beginning of blob names. This fixes it.
Thank you so much to dzhulgakov for tracking down the cause of this so quickly!
Reviewed By: dzhulgakov
Differential Revision: D6766866
fbshipit-source-id: fbe46cff581f425ba10e8668400915ea40baab94
Summary: Make test less computationally expensive
Reviewed By: Yangqing, dzhulgakov
Differential Revision: D6766236
fbshipit-source-id: 59e51faa1331d804b11da9f7237ee9ce0cb27df8
Summary:
Reason for this change:
(1) Setting/Getting default gpu id doesn't seem to be used at all.
(2) It actually is confusing compared to the CUDA_VISIBLE_DEVICES options etc.
(3) When setting cuda_gpu_id=-1 in the CUDAContext arg, it used to use the
default gpu id but probably we should use the current gpu - so that the caller
will be able to control the device placement.
One use case is for TensorRT - if we have a custom callback layer, then it would
be easier for TRT or whatever caller to set the running device.
Reviewed By: dzhulgakov
Differential Revision: D6740357
fbshipit-source-id: 2ea710e434b10220d5a198e31c93847304636863
Summary: Building on D6710785 (float <-> fused_8bit_rowwise conversions) and D6710843 (`FusedEmbeddingLookup`), this diff implements the new reduction operations for the fused 8-bit rowwise storage. I mostly followed the [old 8-bit quantized code](diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/lengths_reducer_rowwise_8bit_ops.h) and [full-precision code](diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/lengths_reducer_ops.h).
Reviewed By: kennyhorror
Differential Revision: D6710844
fbshipit-source-id: b9e85db7437bd32dd44d01733c3749f35c00b06e
Summary: This first diff adds the conversion operators that go from float to our fused 8bit rowwise quantized storage and back again. For now I've put the scale and bias in front of each row because it makes the pointer arithmetic nicer here and in the EmebddingLookup perfkernel. If benchmarks or other reasons point out that this is a bad idea we can change it easily.
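A NumPy sketch of the rowwise scheme (the exact packed layout is described above; this just shows the math):

```python
import numpy as np

def quantize_rowwise(X):
    # X: 2D float matrix; each row gets its own scale and bias.
    mins = X.min(axis=1, keepdims=True)
    scales = (X.max(axis=1, keepdims=True) - mins) / 255.0
    scales[scales == 0] = 1.0  # constant rows: avoid division by zero
    q = np.round((X - mins) / scales).astype(np.uint8)
    return q, scales, mins  # fused storage packs scale/bias with each row

def dequantize_rowwise(q, scales, mins):
    return q.astype(np.float32) * scales + mins
```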
Reviewed By: kennyhorror
Differential Revision: D6710785
fbshipit-source-id: 086ab91c12d3b472564a06eff6329be6cb9e680e
Summary:
This brings the video input op in caffe2 up to date.
It adds support for:
1. optical flow and early fusion
2. different ways of sampling clips from the video
3. different ways of resizing the input video
Reviewed By: dutran
Differential Revision: D6752788
fbshipit-source-id: 0cbd4d4bbbe97b0ada4cba7a55adc91a7af60d5f
Summary:
This updates https://github.com/caffe2/caffe2/pull/1096/ to build doxygen docs with cmake and fixes operator catalog generation. See the new README.md for details, but you can run
```
mkdir build && cd build
cmake -DBUILD_DOCS=ON .. && make
```
and
```
python caffe2/python/docs/github.py ~/c2docs/_docs/operators-catalogue.md
```
to generate docs.
There was one weird issue in `generator.py` that we sometimes receive tuples and sometimes objects. I handled this just by testing `isinstance`, but we might want to be more principled in the future.
Closes https://github.com/caffe2/caffe2/pull/1758
Reviewed By: pietern
Differential Revision: D6752127
Pulled By: orionr
fbshipit-source-id: 9ba9ad8efc920b27a57327f8a7d3050f3650d4ce
Summary: Added the RowWise functionality for SparseAdam, which saves roughly 2/3 memory usage by only keeping one first and second moment term for each row of the parameter tensor, rather than one for each individual parameter.
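A sketch of the idea as described above: one first/second moment scalar per parameter row rather than per element (hyperparameter defaults are assumptions; bias correction omitted for brevity):

```python
import numpy as np

def rowwise_sparse_adam_step(param, m, v, grad_rows, grad_vals,
                             lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # m, v: one scalar per row of `param`, not one per element.
    for row, g in zip(grad_rows, grad_vals):
        m[row] = beta1 * m[row] + (1 - beta1) * g.mean()
        v[row] = beta2 * v[row] + (1 - beta2) * (g * g).mean()
        param[row] -= lr * m[row] / (np.sqrt(v[row]) + eps)
```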
Differential Revision: D6679342
fbshipit-source-id: ce6fb27e35ce41a890c66f6089cd2748d10e7a44