Commit Graph

1944 Commits

Lu Fang
c286efb442
Quick patch for the CI (#6802) 2018-04-20 08:58:38 -07:00
Yinghai Lu
d695624efe
More trt tests (#6782) 2018-04-19 21:53:49 -07:00
bddppq
370acdf3bf
Change to use CAFFE2_HOME for specifying caffe2 models path (#6775) 2018-04-19 11:34:52 -07:00
Yinghai Lu
7f587de4bc
[Caffe2] Let TensorRT flow use the generic graph transformer (#6696)
* Refine the transform API

* Let TensorRT flow use the generic graph transformer

* Rebase
2018-04-19 10:07:01 -07:00
Xiaomeng Yang
71c644b005
[caffe2] Add ReduceMinOp and ReduceMaxOp (#6744)
* Add gpu check for reduce_max

* Add ReduceMinOp and ReduceMaxOp

* Merge util functions in reduce_ops and math

* Expose math internal functions
2018-04-19 00:22:23 -07:00
Jongsoo Park
c40eefeef9 ChannelShuffle with NHWC layout (#6667)
* ChannelShuffle with NHWC layout

* ChannelShuffle with NHWC layout
2018-04-18 19:13:45 -07:00
Pooya Davoodi
969251962c [Caffe2] Enhance test for CollectAndDistributeOp (#6693)
* Caffe2: Enhance test for CollectAndDistributeOp

This also changes the operator and the test to use a stable sort;
otherwise the test fails due to differences between the op and the
test when they face ROIs with the same score (see the sketch after this entry).

* Caffe2: Adjust comparator to make std::nth_element and std::sort stable

Revert the removal of std::nth_element and std::sort and the addition of
std::stable_sort.
2018-04-18 13:19:05 -07:00
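A minimal Python sketch of the tie-breaking idea in the entry above (illustrative names only, not the op's actual code): breaking ties by index gives a strict total order, so stable and non-stable sorts agree.

```
import numpy as np

# Hypothetical RoI scores containing duplicates.
scores = np.array([0.9, 0.5, 0.5, 0.7, 0.5])

# Sorting by score alone leaves the relative order of equal scores
# unspecified for non-stable algorithms such as std::sort and
# std::nth_element. Breaking ties by the original index makes the
# comparator a strict total order, so the op and the test select the
# same RoIs.
order = sorted(range(len(scores)), key=lambda i: (-scores[i], i))
print(order)  # [0, 3, 1, 2, 4] -- deterministic for any sort algorithm
```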
Orion Reblitz-Richardson
6223bfdb1d Update from Facebook (#6692)
* [GanH][Easy]: Add assertion to adaptive weighting layer

A zero weight causes numeric instability and exploding NE

* [Easy] Add cast op before computing norm in diagnose options

As LpNorm only takes floats, we add a manual cast here.

* Introduce a new caching device allocator

`cudaMalloc` and `cudaFree` calls are slow, and become slower the
more GPUs there are. Essentially, they grab a host-wide (not device-wide) lock
because GPU memory is transparently shared across all GPUs. Normally, this
isn't much of a concern since workloads allocate memory upfront, and reuse it
during later computation.

However, under some computation models (specifically, memory conserving
approaches like checkpoint-and-recompute, see
https://medium.com/@yaroslavvb/fitting-larger-networks-into-memory-583e3c758ff9)
this assumption is no longer true. In these situations, `cudaMalloc` and
`cudaFree` are common and frequent. Furthermore, in data parallel contexts,
these calls happen at nearly the same time from all GPUs worsening lock
contention.

A common solution to this problem is to add a custom allocator. In fact,
NVIDIA provides one out of the box: CUB, which Caffe2 already supports.
Unfortunately, the CUB allocator suffers from very high fragmentation. This is
primarily because it is a "buddy" allocator which neither splits nor merges
free cached blocks. Study
https://github.com/NVlabs/cub/blob/1.8.0/cub/util_allocator.cuh#L357 if you
want to convince yourself.

This diff adapts a caching allocator from the Torch codebase
https://github.com/torch/cutorch/blob/master/lib/THC/THCCachingAllocator.cpp
which does splitting and merging and ends up working really well, at least for
workloads like the checkpoint-and-recompute computation models noted above.

I simplified the implementation a little bit and made it a bit more C++-like. I
also removed a bunch of stream synchronization primitives for this diff; I
plan to add them back in subsequent diffs. (A minimal sketch of the caching
idea appears after this entry.)

* Report reader progress in fblearner workflows

Integrate with the fblearner progress reporting API and add support for reporting training progress from reader nodes.
If the reader is constructed with batch limits, report based on finished batches vs. total batches. The finished count may exceed the total because we evaluate whether to stop processing every time we dequeue a split.
If the reader has no limit, report based on finished splits (Hive files) vs. total splits. This is fairly accurate.

* [GanH][Diagnose]: fix plotting

1. GanH diagnose needs to set plot options
2. the modifier's blob name is used for the metric field and needs to be fixed
before generating the net

* Automatic update of fbcode/onnx to 985af3f5a0f7e7d29bc0ee6b13047e7ead9c90c8

* Make CompositeReader stop as soon as one reader finishes

Previously, CompositeReader called all readers before stopping. This resulted in a flaky test, since the last batch may be read by different threads, resulting in dropped data.

* [dper] make sure loss is not nan

as desc.

* [rosetta2] [mobile-vision] Option to export NHWC order for RoIWarp/RoIAlign

Thanks for finding this, @stzpz and @wangyanghan. Looks like NHWC is more
optimized. For OCR it doesn't help yet, since NHWC uses more memory bandwidth,
but it will soon become important.

* Intra-op parallel FC operator

Intra-op parallel FC operator

* [C2 Proto] extra info in device option

passing extra information in device option

design doc: https://fb.quip.com/yAiuAXkRXZGx

* Unregister MKL fallbacks for NCHW conversions

* Tracing for more executors

Modified Tracer to work with other executors and add more tracing

* Remove ShiftActivationDevices()

* Check for blob entry iff it is present

When processing placeholder ops, ignore the blob if it is not present in blob_to_device.

* Internalize use of eigen tensor

Move use of eigen tensor out of the header file so we don't get template partial specialization errors when building other libraries.

* feature importance for transformed features.

* - Fix unused parameter warnings

The changes in this diff comment out unused parameters.
This will allow us to enable -Wunused-parameter as an error.

#accept2ship

* add opencv dependencies to caffe2

The video input op requires additional OpenCV packages. This adds them to
CMake so that it can build.

* Add clip_by_value option in gradient clipping

Add clip_by_value option in gradient clipping

When a value is bigger than max or smaller than min, clip it.

* std::round compat
2018-04-17 23:36:40 -07:00
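A minimal Python sketch of the caching idea in the "Introduce a new caching device allocator" entry above, assuming hypothetical raw_malloc/raw_free hooks; the actual allocator is C++, splits and merges cached blocks, and handles streams.

```
from collections import defaultdict

class CachingAllocator:
    """Toy cache keyed by (device, size): free() parks blocks instead of
    returning them to the driver, so later same-size allocations skip the
    slow, lock-heavy cudaMalloc/cudaFree path."""

    def __init__(self, raw_malloc, raw_free):
        self._raw_malloc = raw_malloc  # e.g. a cudaMalloc wrapper
        self._raw_free = raw_free      # only called if the cache is purged
        self._free_blocks = defaultdict(list)

    def malloc(self, device, size):
        cached = self._free_blocks[(device, size)]
        if cached:
            return cached.pop()        # reuse: no driver call, no lock
        return self._raw_malloc(device, size)

    def free(self, device, size, ptr):
        self._free_blocks[(device, size)].append(ptr)  # cache, don't release
```

Under checkpoint-and-recompute workloads the cache absorbs the frequent malloc/free churn; the splitting and merging in the adapted Torch allocator is what keeps fragmentation lower than CUB's buddy scheme.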
Yinghai Lu
6252706feb
[Caffe2] Workspace centric API for TensorRT transformation (#6678)
* Workspace centric API for trt transformation

* Merge SSA rewrite code
2018-04-17 21:23:27 -07:00
anderspapitto
4dd29ac89f fix broken code from rebasing (#6681) 2018-04-17 15:44:56 -07:00
Xiaomeng Yang
4be34ca0f3 Add broadcast and reduce gradient (#6668)
Add broadcast and reduce gradient
2018-04-17 13:31:13 -07:00
anderspapitto
e51e792cef
enable exporting bidirectional rnn with fixed seq len from onnx to caffe2 (#6566) 2018-04-17 12:27:16 -07:00
Yinghai Lu
582d47e986
[Caffe2] Scoped dummy name generator (#6458)
* Scoped dummy name generator

* Fix

* Fix

* Use class variable

* Fix build

* comment
2018-04-16 11:58:02 -07:00
bddppq
7ef14bf04c Follow the change of ONNX Cast operator "to" attribute (#6574)
* Follow the change of ONNX Cast operator "to" attribute

* Update Cast conversion in frontend and backend

* update pytorch onnx frontend
2018-04-16 14:24:42 -04:00
Xiaomeng Yang
cd2112717c
[caffe2] Update math functions with params on host. (#6602)
* Update ReduceMean

Add reduce mean to math

Add reduce mean to math

* sync reduce_ops_test

* Update math_gpu.cu
2018-04-14 21:41:41 -07:00
Yinghai Lu
434f710f3f
[Caffe2] Add support to TensorRT (#6150)
* Add support to TensorRT

* Removed License header

* Bind input/output by position

* Comments

* More comments

* Add benchmark

* Add warning for performance degradation on large batch

* Address comments

* comments
2018-04-11 17:03:54 -07:00
Yinghai Lu
ef8f556212
[Caffe2] Changes done inside Facebook (#6378)
* fix unit test for sqrt op

From the error logging:

[idx, grad, grad_estimate] are:
[[ 146.            0.5           0.45776367]
 [ 147.            0.5           0.45776367]

The gradient == 0.5 is correct, which means the SqrtOp and its gradient are doing the right job. (Because y = sqrt(x), loss = y^2/2 = x/2, and then d(loss)/dx = 1/2 = 0.5.)

The test failed because of a numerical problem in grad_estimate (in the unit test). This can happen because the step_size is small and float precision is not high (when there are multiple elements in the tensor, we do sum(y^2) to compute the loss).

This diff
- increases the step size, and also moves the test cases further away from 0 (where the gradient of sqrt(x) blows up) to be safe :)
- also cleans up and merges the test cases for in-place vs. non-in-place
(See the gradient-check sketch after this entry.)

Tested with:

`CAFFE2_HYPOTHESIS_PROFILE=debug ai_bt caffe2/caffe2/python/operator_test:elementwise_ops_test -- "test_sqrt"`

* CompositeReader & CompositeReaderBuilder

A new type of reader gluing multiple readers together.

* Back out "Revert D7394363: [GanH]: Log D Trick for Cross Entropy with Sigmoid"

Original commit changeset: 9325a4356dbe

* [dai][WIP] convert params to int8 on ps before sending to trainer

Add float->uint8 conversion in addition to float->fp16 conversion in model_saver.

* [easy] improve unit test for sparse length sum ops

as desc.

#accept2ship

* Update GitHub upstream to 771fcb3455

* move sparse hash unique ops to OSS and add unit tests

- move the SparseHash version to OSS, since 'sparsehash' is already a dependency of caffe2 OSS: https://fburl.com/arssw4n1
- The 'SparseHash' engine is also being used in OSS, so the SparseHash version shall be in OSS to reduce confusion: https://fburl.com/o5ea7ah2

- fix the CUDA UniqueOp for the case when batch is empty.
- add unit test

* group_norm_op for caffe2

This is the CUDA op for Group Normalization (GN): https://arxiv.org/abs/1803.08494

This code implements GN in one op that computes Y = gamma * (X - mu) / sigma + beta and also its gradients. It is expected to have minimal memory consumption (similar to the BN op), avoiding the new blobs that would be created if GN were implemented as several ops (e.g., reshape, norm_mean/std, affine_channel). (See the numpy sketch after this entry.)

* Resubmit D7405233: disappeared in D7464958

The OSS publish caused the op to go missing; however, the test was still there.

* [c2] add sparse hash engine for cuda unique op

The SparseHash version of UniqueOp copies the input tensor to CPU, uses a sparse hash map to compute the unique output, and then copies back to GPU.

* [dper][gpu] enable unit testing gpu trainer for sparse nn

To debug the GPU trainer using mock data in a unit test.

This makes it easier to develop the GPU trainer for new models.

* Reuse Gloo context for Synchronize() calls

Previously we were creating (and leaking) the Gloo context on each call to Synchronize(). Now only run the common world op and create the barrier net once, then run the barrier net on each Synchronize() call. Since timeout is associated with the Gloo context, assert that the timeout is fixed instead of trying to handle the complexity of multiple timeouts (and associated contexts).

* [GanH/WGAN][1/n]: add FC param clipping

as titled

* [mobile] minimizing changes between caffe2_benchmark and speed_benchmark

* [GanH]: enable diagnose within model

avoid looking up blob names; instead, enable it directly inside the model

* Add `net_transformer_fun` option to DPM

This callback allows for various transformations to be made to the
model after gradient operators have been added. The immediate motivation for
this is to allow transformations such as "checkpoint-and-recompute" which
allow trading off memory for additional compute.

Adding several callbacks like this has made DPM's API less than ideal at this
stage. However, I could not find any reasonable alternative.

* [DT] [33/n] Compile flow task groups

Task groups need to be compiled in order to pickle the object in fblearner. However, I also changed the Job's compile function, as creating a new object is not necessary.

* Initial commit for sparse_normalize vectorization and benchmark

* [GanH]: LB Calibration for JSD

as titled

* Tracing event in async executor

Adding event tracing through TRACE_EVENT macro in async executor

* [Resubmit] D7409751 Resetting book-keeping blobs when the reservoir is reset

D7409751 got lost in D7464958

* Visualizing realtime weights values

We want to visualize the weight values as the optimizer iterates. This diff supports visualizing the weights at an assigned index.
Currently, we assume the blob is 2-dimensional.

* [GanH][Easy]: Fix Homotopy Weighting

Apparently, there was a bug in the homotopy weight (alpha, beta) update.

* [c2] move sparse hash unique op out of oss

so that OSS does not need to depend on Google's hash map.

* Get rid of std::round as it's not supported on Android

* Revert changes on setup.py

* Skip shaky test on Dataio

* fix
2018-04-10 21:11:43 -07:00
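A minimal numpy sketch of the central-difference gradient check behind the sqrt test fix above: with loss = sum(sqrt(x)^2)/2 the analytic gradient is exactly 0.5, and a larger step size plus inputs away from 0 keeps the float32 estimate close to it.

```
import numpy as np

def loss(x):
    # loss = sum(sqrt(x)**2) / 2 = sum(x) / 2, so d(loss)/dx_i = 0.5 exactly.
    return np.sum(np.sqrt(x) ** 2) / 2.0

def grad_estimate(x, i, step):
    # Central difference. With float32 and a tiny step, summing over many
    # elements loses precision -- which is why the fix increased the step
    # size and moved the test inputs away from 0.
    xp, xm = x.copy(), x.copy()
    xp[i] += step
    xm[i] -= step
    return (loss(xp) - loss(xm)) / (2 * step)

x = np.full(200, 5.0, dtype=np.float32)  # well away from 0
print(grad_estimate(x, 0, step=1e-1))     # ~0.5, matching the analytic value
```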
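A numpy sketch of the single-op computation Y = gamma * (X - mu) / sigma + beta from the group_norm_op entry above, with mu and sigma computed per (sample, group); shapes are illustrative.

```
import numpy as np

def group_norm(x, gamma, beta, num_groups, eps=1e-5):
    # x: (N, C, H, W). Statistics are computed per sample and per channel
    # group, fused into one op instead of reshape + norm + affine_channel.
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mu = g.mean(axis=(2, 3, 4), keepdims=True)
    sigma = np.sqrt(g.var(axis=(2, 3, 4), keepdims=True) + eps)
    y = ((g - mu) / sigma).reshape(n, c, h, w)
    return gamma.reshape(1, c, 1, 1) * y + beta.reshape(1, c, 1, 1)

x = np.random.randn(2, 8, 4, 4).astype(np.float32)
y = group_norm(x, np.ones(8, np.float32), np.zeros(8, np.float32), num_groups=4)
print(y.shape)  # (2, 8, 4, 4)
```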
Bram Wasti
7bd398b3db
Add fuseNNPACKConvRelu (#6439) 2018-04-10 16:51:16 -07:00
Qinqing Zheng
038b66ee07 [caffe2] use dictionary in Printer (#6443) 2018-04-10 10:37:07 -07:00
Qinqing Zheng
66791f54d5 Update the compile function of Job (#6323) 2018-04-09 22:44:23 -07:00
bddppq
df2e1d2962
Disallow using the OOP api workspace as context managers (#6456) 2018-04-09 22:13:54 -07:00
François Garillot
a91c88a348 Check mappings ONNX -> Caffe2 bear the same argument names (#6317)
* Check mappings ONNX -> Caffe2 bear the same argument names

When adding an extra arg to an input ONNX op, if it's not supported in Caffe2, the exporter would just silently pass it to the NetDef and ignore it in the implementation. That's pretty error-prone. Caffe2 also has an OpSchema description, so we can enforce that all arguments explicitly appear in the schema or are listed explicitly in Caffe2.

See also https://github.com/caffe2/caffe2/pull/2478

Add test for C2 argument checking

* Some operators do not log arguments, which prevents argument checks.
Invite users to file an issue to fix the schema.
2018-04-09 09:15:42 -07:00
Svetoslav Kolev
997acfd7fe [Caffe2] Some small changes to InferBlobShapesAndTypes definition and SameAsInput Schema (#6335)
* Change SameAsInput type deduction to work for ops with multiple outputs

* change the InferBlobShapesAndTypes definition to take a vector of pointers instead of unique_ptrs. The function doesn't own the objects, so there is no need to pass smart pointers; requiring them also prevents calling the function with an existing object, since the caller has to create a unique_ptr, i.e., copy an existing object just to create the pointer

* switching the order of std::move(unique_ptr) and unique_ptr.get()

* adding comma
2018-04-06 19:06:46 -07:00
Lu Fang
aab0bd3c13
Change onnx_optimizer API (#6290) 2018-04-06 13:46:53 -07:00
Lu Fang
876ad110af
Skip some unsupported onnx backend tests (#6247) 2018-04-05 21:33:35 -07:00
bddppq
8df2487de9
Properly skip the failing onnx conversion test (#6280) 2018-04-04 14:07:03 -07:00
kuttas
460e8cd376 change print to logger.warning in operator traceback code (#6216) 2018-04-03 08:01:25 -07:00
Qinqing Zheng
fd2e7cb487 Change JobRunner's __call__ function to train (#6205) 2018-04-02 21:04:36 -07:00
Paul Jesse Hellemn
771fcb3455 [caffe2] Fbcode to GitHub sync (#6208)
* [easy] allow empty tensor in cuda relu op

The diff has not enabled the empty-tensor unit test, because the MKL version of ReluOp needs extra work to support it

* Make blob norm plotting work with distributed trainer when the old framework is used
2018-04-02 16:35:27 -07:00
Orion Reblitz-Richardson
a409f959e8
Remove ShuffleNet from model zoo. (#6203)
* No longer supported.
2018-04-02 15:00:06 -07:00
Orion Reblitz-Richardson
cbe92abd7c Disable failing test_lengths_max_gpu 2018-03-30 21:00:45 -07:00
Ellie Wen
3d27095eec [easy] fix comments
nit: fix comments
2018-03-30 21:00:44 -07:00
Qinqing Zheng
365652229d Back out "Revert D7372460: [DT] [28/n] Lift epoch_limiter"
Original commit changeset: b0a986d16c3b
2018-03-30 21:00:44 -07:00
Andrey Malevich
b9d2ba1dbf Revert D7394363: [GanH]: Log D Trick for Cross Entropy with Sigmoid
This reverts commit d63266ccbc0c1390c58c2a71ae0b562fdec2fbc0

@bypass-lint

An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
2018-03-30 21:00:44 -07:00
Ellie Wen
363a227d19 extend bucketize op to support duplicated boundaries
upgrade bucketize op to support duplicated boundaries
2018-03-30 21:00:44 -07:00
Jason Gauci
551d5fbf9a CUDA version of LengthsMax operator
CUDA version of LengthsMax operator

@override-unit-failures
2018-03-30 21:00:44 -07:00
Andrew Tulloch
0df662c67f [Caffe2] [Int8] More exhaustive unit tests for int8 ops (+ bug fix in Int8Add in-place case)
As title. This catches one bug in the Int8Add in-place case,
which wasn't tested in int8_test.cc
2018-03-30 21:00:44 -07:00
Xiaolong Wang
2b0e39f569 [GanH]: Log D Trick for Cross Entropy with Sigmoid
as titled
2018-03-30 21:00:44 -07:00
Andrey Malevich
f8eb8a66e2 Revert D7372460: [DT] [28/n] Lift epoch_limiter
This reverts commit 05bd9bec10fad5ff9dc40be88836fd7274d50ce9

@bypass-lint

An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
@cause_a_sev_many_files
2018-03-30 21:00:44 -07:00
Bram Wasti
ee64200c64 [nomnigraph] Expose transformations to python
Adding a python interface to the transformations
2018-03-30 21:00:44 -07:00
Yiming Wu
03c5198331 [C2 Int8][C2 Core]fetch int8 blob
Providing Python API to fetch Int8 tensors.

  data, scale, zero_point = workspace.FetchInt8Blob(blob_name)

now returns a tuple if the blob contains an Int8TensorCPU

     'data' = int8 data array
     'scale' = fake quantization scale
     'zero_point' = fake quantization offset

Although FetchBlob shares its back-end implementation with FetchInt8Blob, we raise
an error to prevent unexpected behavior of the same method
2018-03-30 21:00:44 -07:00
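A hedged sketch of how the returned tuple maps back to real values, assuming the usual fake-quantization convention real = scale * (int8 - zero_point); the numbers are made up for illustration.

```
import numpy as np

# As returned by workspace.FetchInt8Blob(blob_name) per the entry above;
# the values here are invented for illustration.
data = np.array([-128, 0, 42, 127], dtype=np.int8)
scale, zero_point = 0.05, 10

# Assumed convention: real_value = scale * (int8_value - zero_point).
real = scale * (data.astype(np.float32) - zero_point)
print(real)  # [-6.9  -0.5   1.6   5.85]
```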
Lu Fang
8f3ba30266 Fix a typo
Fix a typo in optimize_onnx_test.py
2018-03-30 21:00:44 -07:00
James Reed
47a1fd208f Quick and dirty raw value substitution from zip file (#2454) 2018-03-29 19:18:58 -07:00
Lu Fang
344fa57680 Adjust the test since the op only has a CPU implementation 2018-03-27 18:10:39 -07:00
Lu Fang
0ac8495165 Fix the CMake issues caused by internal changes 2018-03-27 18:10:39 -07:00
Xiaolong Wang
af3dcdf6ae [D2]: Improve loss weight by allowing omitted weights
as titled
2018-03-27 18:10:39 -07:00
Xiaolong Wang
d6c30ee6af [GanH]: Unifying two discriminators
to improve flexibility and combine different discriminators in one model.
2018-03-27 18:10:39 -07:00
Jongsoo Park
3300e21d52 Add SparseLengthsPositionalWeightedSum operator that fuses SparseLengthsWeightedSum, LengthsRangeFill, and Gather
add SparseLengthsPositionalWeightedSum operator that fuses SparseLengthsWeightedSum, LengthsRangeFill, and Gather
2018-03-27 18:10:39 -07:00
Xianjie Chen
e6b04ba121 fix lengths sum cuda op for empty batch
CUDA does not allow launching an empty kernel
2018-03-27 18:10:39 -07:00
Xianjie Chen
6ed9a0c3f2 fix cuda elementwise ops for empty batch
CUDA will fail to launch an empty kernel
2018-03-27 18:10:39 -07:00
Dehua Cheng
c6587597d8 Ignore backward step when there is no loss function;
Ignore backward step when there is no loss function;

For some customized models, we can encode the update directly in the forward step, so there is no backward step.
2018-03-27 18:10:39 -07:00
Xiaolong Wang
c909abd85f [GanH] Label Smooth: Add Layer and Integrate to SparseNN
as titled
2018-03-27 18:10:39 -07:00
Yan Zhu
107cb670b1 add typecast and assertion for histogram computing
as title
2018-03-27 18:10:39 -07:00
Xianjie Chen
078b6d5ad1 [layer model] remove duplicated init ops
it saves some model init time, and reduces confusion.
2018-03-27 18:10:39 -07:00
Roxie He
d2453afb1e Add SumElementsInt operator
Added a caffe2 math sum operator that takes integers (only int32).
Changed SumFloatIter to SumGenericIter so that it accepts more than one type.
Added a SumElementsInt operator.
2018-03-27 18:10:39 -07:00
James Cross
16312e8123 [fbtranslate/onnx] decoder step (pytorch -> caffe2) exporter for fbtranslate
This code introduces a new class for exporting decoder step (ensemble) models trained with fbtranslate pytorch to Caffe2 models via ONNX, for the purpose of use in "component beam search" being developed concurrently in C++ by @juancarabina.
2018-03-27 18:10:39 -07:00
Manoj Krishnan
a92a6233b5 Enable support for placeholder ops in InjectCrossDeviceCopies
This is required to support placeholder/decorator ops, which do not have an operator schema. Note that the change is made in such a way that it is a no-op if placeholder ops are not used.

Changes:
1. Since placeholder ops always run on CPU, added a utility to infer placeholder ops' blob devices.
2. A placeholder op's input/output blobs should be on CPU as well. This change takes care of output blobs, i.e., it uses blobs on CPU.
3. Added a Unit test - test_inject_copy_placeholder_ops
2018-03-27 18:10:39 -07:00
Jiyan Yang
8fa38f8dce Add gradient clipping (#2452)
As titled.
2018-03-27 15:10:15 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* LICENSE file contains details, so removing from individual source files.
2018-03-27 13:10:18 -07:00
Jason Gauci
f93e820e7d Revert "[C2][GPU]LengthsMax CUDA version (#2209)" (#2444)
This reverts commit 71acc269bb573c8c04343e6d534b2557a456b29a.
2018-03-27 01:15:52 -07:00
harouwu
6740126f5c [C2][GPU]LengthsMax CUDA version (#2209)
lengthsmax CUDA version.

will provide gradient later
2018-03-27 00:19:17 -07:00
Kutta Srinivasan
0e0918cb9a dpm synchronize 2018-03-26 19:54:31 -07:00
mlappelbaum
d11fc90317 Export atomic iter count (#2379)
* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Add axis to top_k_op. (#2416)

* Revert update on top_k_op

* Add axis to top_k_op

Add axis to top_k_op

* [auto] Update onnx to a8e4648 - Adjust link flags when built in Windows Debug mode (#647)
a8e4648a7d

* [auto] Update onnx to f4acf28 - Remove allowconsumed enforceconsumed from op schema. (#617)
f4acf281ef

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Initialize cpuinfo in the thread pool

The thread pool called cpuinfo_get_processors_count() without initializing cpuinfo. Only by luck did it not make Caffe2 single-threaded: the thread pool is initialized after NNPACK, and NNPACK initializes cpuinfo itself.

This commit also updates cpuinfo to a version that aborts with a fatal error if it's used uninitialized.

* Updated Python Op and Image Pre-Processing Pipeline tutorials && Added CIFAR-10 Part 1 tutorial (#2286)

* Updated Basics tutorial: (1) Added Python 3 support with __future__ statements; (2) Various grammatical/typo fixes and minor refactoring of Markdown

* Added Python 3 support and made minor typo fixes

* Added Python 3 support with future imports, refactored and corrected errors in Markdown, added comments

* Added Python 3 support with future imports, Added use of caffe_translator.py to translate downloaded .caffemodel file to .pb files

* Upgrades to Image Pre-Processing Pipeline tutorial

* Updated Python Op tutorial

* removed markdown with empty links

* Added Part 1 of an end-to-end CIFAR-10 tutorial

* Updated MNIST Dataset and Databases tutorial with python3 support and markdown fixes

* Tweaks to markup, fewer training iterations

* changed permissions of CIFAR10_Part1; typo corrections in Image_Pre-Processing_Pipeline

* Typo corrections in Multi-GPU Training tutorial

* sync Python_Op py_gen with the IPython notebook

* nit typo correction

* [auto] Update onnx to 5cb999d - Minor cleanups to shape inference (#653)
5cb999ddc1

* [auto] Update onnx to ecac1c1 - Merge Rel 1.1.0 branch into master (#657)
ecac1c1624

* Strip down onnx to only pb definitions in mobile build (#2426)

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count

* Exported AtomicIterOp count
2018-03-26 19:26:09 -07:00
Yinghai Lu
b6e80a1ec4 Caffe2-onnx exporter (#2248)
* caffe2-onnx frontend

* Remove Python part of the conversion code

* nit

* convert more ops

* Address comments
2018-03-26 19:23:45 -07:00
Xiaomeng Yang
a73f9af5ab Add axis to top_k_op. (#2416)
* Revert update on top_k_op

* Add axis to top_k_op

Add axis to top_k_op
2018-03-23 20:43:43 -07:00
bddppq
425361af6a Bump onnx opset version (#2402) 2018-03-23 10:48:12 -07:00
bddppq
bbb7c722df Remove legacy onnx optimizer tests (#2394) 2018-03-22 21:08:05 -07:00
Qinqing Zheng
1288c4fd79 refactor epoch_limiter (#2389)
* refactor epoch_limiter

* fix test
2018-03-22 20:32:13 -07:00
bddppq
f3b7b2f293 Remove ONNX consumed_inputs (#2278)
* Remove ONNX consumed_inputs

* Bump up opset version to 6 issued by onnx caffe2 frontend
2018-03-22 20:24:35 -07:00
Qinqing Zheng
566a25e1e4 Add keyword argument to PipeReaderBuilder (#2381)
att
2018-03-22 14:17:47 -07:00
Yinghai Lu
45da53f478 Remove Python onnx-caffe2 conversion code (#2362)
* WIP

* Remove Python onnx-caffe2 conversion code

* Fix build

* Comments

* Add comments

* Fix typo in comments
2018-03-22 11:59:03 -07:00
Xiaomeng Yang
3053618624 Add argmax and argmin ops (#2371)
* Revert update on top_k_op

* Add axis to top_k_op

* Remove do { ... } while (false)

* Revert top_k op to upstream

* Add argmin and argmax ops

Add argmin and argmax ops

* Revert top_k_test to upstream

* Add argmin and argmax ops

Add argmin and argmax ops
2018-03-22 00:52:11 -07:00
James Reed
48c70d2dbd Fix ReduceMean performance by specializing Eigen implementation for common shapes (#2355) 2018-03-21 21:48:54 -07:00
Joseph Spisak
b2c56eb219 Removed special handling for onnx sqrt (#2353) 2018-03-21 21:05:25 -07:00
Yangqing Jia
2d03ae2f85 Move ParseProtobufFromLargeString to proto_utils (#2354)
* Move ParseProtobufFromLargeString to proto_utils

* ParseProtobuf -> ParseProto to be consistent in naming
2018-03-21 17:05:14 -07:00
Orion Reblitz-Richardson
0ea8964fd6 Revert "Export number of iterations of AtomicIterOp" (#2359)
* Revert "Use -DCMAKE_BUILD_TYPE=Release for local build by default"

This reverts commit 035c62081f6420405b9f1380cc5d21b4c6ae78f6.

* Revert "Export number of iterations of AtomicIterOp (#2338)"

This reverts commit 91b7a0cb48c6b079e2ca8fd5c26819a003937d76.
2018-03-21 16:11:29 -07:00
mlappelbaum
8346088094 Export number of iterations of AtomicIterOp (#2338)
* Exported AtomicIterOp count

* Exported AtomicIterOp count
2018-03-21 12:39:30 -07:00
Lu Fang
b1684e9a3a Skip DepthToSpace and MaxPool same mode onnx backend tests (#2343) 2018-03-21 09:24:06 -07:00
Lu Fang
6cae6d3841 Update ONNXOpCoverage.md 2018-03-20 15:22:43 -07:00
Lu Fang
1c80ee1c74 Update ONNXOpCoverage.md 2018-03-20 13:56:13 -07:00
Lu Fang
ac1b7b6366 Update ONNXOpCoverage.md 2018-03-20 13:55:33 -07:00
Orion Reblitz-Richardson
42d3bcc189 Only run WeightedMultiSample test on CPU and not GPU. 2018-03-20 13:34:22 -07:00
Orion Reblitz-Richardson
6aa087d902 Revert "export num iterations of AtomicIter"
This reverts commit be9c8e5591f5d38131b9bdc2249542f27dadc221.
2018-03-20 13:34:22 -07:00
Xianjie Chen
22d0828f00 [easy] improve error messages
as desc.

#accept2ship
2018-03-20 13:34:22 -07:00
Yan Shang
69706b2ab4 Add C2 for weighted sampling
A C2 operator with inputs (1) index and (2) cdf, and argument number_samples;
it outputs number_samples samples drawn from the index (see the sketch after this entry).
2018-03-20 13:34:22 -07:00
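A numpy sketch of inverse-CDF sampling matching the (index, cdf, number_samples) interface described above; names and values are illustrative.

```
import numpy as np

index = np.array([7, 3, 9, 1])        # items to sample from
cdf = np.array([0.1, 0.4, 0.9, 1.0])  # cumulative weights over the items
number_samples = 5

# Inverse-transform sampling: draw u ~ U(0, 1) and pick the first CDF
# entry >= u, so items with more probability mass are drawn more often.
u = np.random.rand(number_samples)
samples = index[np.searchsorted(cdf, u)]
print(samples)
```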
Xiaolong Wang
4bb73b8361 [GanH] Weighting Layers: Adaptive/Constant/Homotopy
use case: to weight multiple losses (real values) as a single composite loss for
optimization
2018-03-20 13:34:22 -07:00
Xiaolong Wang
a5279dccd4 [GanH]: homotopy JSD
as titled
2018-03-20 13:34:22 -07:00
Matan Appelbaum
fac306d3c9 export num iterations of AtomicIter
as title.  Useful for tracking number of EASGD updates.
2018-03-20 13:34:22 -07:00
Lukasz Wesolowski
f7f48989ba GPU support for ChannelBackpropStatsOp
Step 2 of 3 in adding support for multidevice batch normalization on GPUs. Implements ChannelBackpropStatsOp. Similar to D6953411.
2018-03-20 13:34:22 -07:00
Chenguang Xi
3940e7f0a7 Support computing averaged norm in blob magnitude visualization
1. extend the LpNorm operator to calculate the averaged LpNorm by adding one more boolean argument, i.e., LpNorm(x; average=true) = LpNorm(x) / size(x)

2. integrate the average option into visualization framework
2018-03-20 13:34:22 -07:00
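A numpy sketch of the averaged norm formula above, assuming LpNorm here means sum(|x|^p):

```
import numpy as np

def lp_norm(x, p=2, average=False):
    # Assuming LpNorm means sum(|x|**p); with average=True the result is
    # divided by the number of elements, per the formula in the entry above.
    norm = np.sum(np.abs(x) ** p)
    return norm / x.size if average else norm

x = np.array([1.0, -2.0, 3.0])
print(lp_norm(x), lp_norm(x, average=True))  # 14.0 4.666...
```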
Manoj Krishnan
c43896732e Added device inference functions for Concat and Split Ops.
Changes:
=======
1. Added device inference functions for Concat and Split Ops.
2. Added a unit test to validate the change. See, test_device_inference_function in core_test.py
3. Fixed some formatting.
2018-03-20 13:34:22 -07:00
Wei Zhang
e0e334793c Revert D7219461: Mark full sync data parallel ops with rules
This reverts commit 79c56ec5859e25c7caec7bb6b79e80dd19307c64
2018-03-20 13:34:22 -07:00
Wei Zhang
9edbafe0de Mark full sync data parallel ops with rules
Instead of using hard-coded rules or relying on gpu_strategy to mark full sync data parallel ops, we need generic rules that are applicable to both the single-machine and distributed settings.
2018-03-20 13:34:22 -07:00
Kittipat Virochsiri
35b6b0747a Fix stop_if()
Making sure that the stop blob is never overridden.
2018-03-20 13:34:22 -07:00
Yan Shang
40683cdf42 Allow calculating average margin rank loss
Similar to LrLoss, we allow averaging the margin rank loss.
2018-03-20 13:34:22 -07:00
Kittipat Virochsiri
72f2cd8bcc Making preproc_output_schema explicit
Make it easier to plug in intermediate steps between preprocessing & trainer by maintaining a stable schema.

I also fixed enqueue() so that we can pass in the same blob in multiple locations without causing data corruption.
2018-03-20 13:34:22 -07:00
Zhanibek Datbayev
7aeda25cfb Add type / shape inference for IndexHash op
just as title says
2018-03-20 13:34:22 -07:00
Edoardo Conti
6af3429f4f Add 2D Row-wise Arg Max Operator
Add operator to return row-wise arg max of 2D matrix.
2018-03-20 13:34:22 -07:00
Kittipat Virochsiri
9be2de507b Cleaning up ReaderBuilder interface
The way `splits()` is currently used is so convoluted. It's impossible to compose ReaderBuilder. I'm working on a composite reader so this is a prerequisite for it.

The idea is that the ReaderBuilder should maintain the state it needs to create a reader. Any setup is done through the new `setup()` method. Currently, `setup()` should only be called once, but, if needed, it should be safe to call it multiple times.
2018-03-20 13:34:22 -07:00
Kittipat Virochsiri
a4d0ef2621 Fix stop blob of processing reader
See inline comment
2018-03-20 13:34:22 -07:00
Yinghai Lu
efe1c2bd13 Hyphen as a valid part of model names (#2312) 2018-03-20 08:52:54 -07:00
Lu Fang
cda2f02f89 Skip the average pool same mode tests (#2324) 2018-03-20 00:13:31 -07:00
Yinghai Lu
b0fe67aca8 Expose more APIs for onnx cpp backend (#2317) 2018-03-19 22:46:26 -07:00
Bram Wasti
aa4af1a5f9 [tiny] make debug info optional, CAFFE2_DEBUG env variable driven 2018-03-19 16:58:04 -07:00
Qinqing Zheng
23631eee5a [C2] Fix the check of current scope in optimizer (#2316)
scope.CurrentDeviceScope() can return None, which was not handled.
2018-03-19 16:38:55 -07:00
Yan Zhu
fb77b423f4 refactor histogram as net modifier (#2314) 2018-03-19 16:04:58 -07:00
Orion Reblitz-Richardson
00603b5e0a Add CollectAndDistributeFpnRpnProposalsOp for FPN support (#2254)
* Add CollectAndDistributeFpnRpnProposalsOp for FPN support

* Adds a C++ operator equivalent to the Python op in Detectron
* Once some additional GenerateProposalsOp changes are made, this will let us support Detectron FPN models with straight Caffe2 C++ ops
* RetinaNet and segmentation models require additional work

* Remove some uses of conservativeResize

* Add notes about training and inputs/outputs to operator documentation
2018-03-19 14:04:43 -07:00
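The "distribute" half of the op assigns each RoI to a pyramid level. A sketch of the standard FPN heuristic from Lin et al. (2017) that Detectron follows; the exact constants (canonical scale 224, canonical level 4, levels 2-5) are the paper's defaults and an assumption here.

```
import numpy as np

def fpn_level(w, h, k_min=2, k_max=5, canonical_scale=224, canonical_level=4):
    # k = floor(k0 + log2(sqrt(w * h) / 224)), clamped to available levels:
    # larger RoIs map to coarser pyramid levels, smaller RoIs to finer ones.
    k = np.floor(canonical_level + np.log2(np.sqrt(w * h) / canonical_scale))
    return int(np.clip(k, k_min, k_max))

print(fpn_level(224, 224))  # 4: a canonically sized RoI stays on level 4
print(fpn_level(56, 56))    # 2: small RoIs go to the finest level
```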
Lu Fang
334fc98fb0 Handle the legacy padding in global pooling case (#2292) 2018-03-18 21:28:15 -07:00
bddppq
c155842cc1 Update onnx frontend to emit new onnx Reshape (with shape as input) (#2287)
* Update onnx frontend to emit new onnx Reshape (with shape as input)

* Address comments and revert submodule change
2018-03-16 16:32:35 -07:00
James Reed
e8f14f5d37 Fix ONNX backend for MatMul (#2273)
* Fix ONNX backend for MatMul

* Update Python implementation

* Address comments
2018-03-15 14:43:52 -07:00
Paul Jesse Hellemn
74f0b270ea Fixing conda (#2123)
* Fixing conda

* Adding hypothesis and onnx to conda builds

* Updates but still not working

* Adding required changes to conda_full

* Updates

* Moving to more general build_anaconda script

* Adding check for gcc version

* Adding general ways to add/remove packages from meta.yaml?

* Changes for specific packages to build on gcc 5.4

* Fix with glog spec

* Requiring >numpy 1.12 for python 3 to satisfy opencv dependency

* Adding pydot to required testing packages

* Adding script to read conda versions for gcc ABI

* Trying to fix segfault by installing in env instead

* conda activate -> source activate

* Trying adding back leveldb

* Setting locale for ONNX + conda-search changed its format

* read_conda_versions handles libprotobuf

* Conda script updates

* Adding a protobuf-working test

* Removing changes to proto defs b/c they will require internal changes in a separate diff
2018-03-14 12:24:37 -07:00
Lu Fang
8a9925f03f Fix useless opset_import in onnx (#2243)
* Fix useless opset_import in onnx

* Set the default ir version in make_model

* Use the target_opset_version in Caffe2Frontend

* remove make_model from helper in caffe2.python.onnx
2018-03-14 10:17:32 -07:00
Mohammad Hossain
28eda01809 Reduce Sum and Reduce Mean (#2189)
* Reduce Sum and Reduce Mean

* Handle reductions with empty 'axes'

* Merge codebase and simplify tensor reduction logic

* Restructure code and add comments.

* Fix parameter to scale

* Fix parameter to scale
2018-03-13 19:13:47 -07:00
Qinqing Zheng
edd138ba00 [C2] Support optional lengths input to ReduceFront/Back operators (#2250) 2018-03-13 13:20:26 -07:00
Yinghai Lu
7e6693991d Onnx caffe2 backend (#2039)
* C++ version of ONNX->Caffe2 backend

* use namespace ONNX_NAMESPACE

* Fix Build

* Comments

* Change namespace from onnx_caffe2 to caffe2::onnx
2018-03-12 15:18:05 -07:00
jmp84
b465bb9a8e fix post eos penalty (#2235) 2018-03-12 12:42:22 -07:00
sf-wind
602a09dde7 Update caffe2 from facebook 4f527ef46abf (#2234)
* [GanH]: two_task_discriminator

as titled

and adds label smoothing

* [Dper2] Simplified UI options needed for blob magnitude visualization

* [GanH]: fix tags

as titled

* Added type and shape inference for GatherRange operator

This helps with type / shape inference when using this operator in layers.
Also just a nice to have in general.

* Demonstrate Caffe2 exception handling with StoreHandlerTimeoutError in Python

We'd like to catch and recover from certain Caffe2 net exceptions. Use this diff to demonstrate a pattern of registering a pybind exception mapping and catching it in Python, using caffe2::StoreHandlerTimeoutException.

* Bind Gloo IoException to IoError in Python

Allow peer failure handling and recovery using an exception based mechanism. This diff registers gloo::IoException with pybind.

* [GanH]: add label smoothing to softmax with loss

as titled

* [C2] Enable LARS in Adagrad and hook it to DPER

* [DPER] Don't pass LayerModelHelper in create_trainer_nodes

Since we're planning to get rid of it eventually, and I want access to a
NetDef-only interface ASAP, I'm looking toward removing all references to
LMH where we don't really need them.

* fix bugs in LambdaRankNdcgOp

The loss and gradient in LambdaRankNdcgOp are incorrect. The loss should be the negative log of the probabilities instead of the log.

* Restrict thread pool on iOS to only big cores

Historically, iPhones exposed only one type of core, and the Caffe2 thread pool used all of them.
However, the iPhone 8/iPhone X expose 2 big + 4 LITTLE cores. As our thread pool doesn't support work stealing or other forms of load balancing, fast cores end up waiting for the slow ones, and it may be better to restrict execution to only the 2 fast cores, as we do on Android.

* Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine

Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine

* make clang happy and get fewer warnings

make clang happy and get fewer warnings

* [Personalization] Support add_output_schema() in layer_model_helper

Problem:
Currently the output_schema of sparse_nn can only be set once. https://fburl.com/efth5zer.

Solution:
For flexibility, we want to add fields to output_schema incrementally.

Plan:
Wrap the change of `model._output_schema` into a new function `add_output_schema()` for adding additional output_schema.

Callsite:
The add_output_schema() should be called instead at https://fburl.com/efth5zer

Reference:
The newly added `add_output_schema()` will be similar to `add_loss()` in https://fburl.com/t2ii8njh
2018-03-12 12:22:59 -07:00
Kutta Srinivasan
0ee53bf7fe Fix one more naming issue in resnet50_trainer.py for PR 2205 2018-03-09 13:51:42 -08:00
Kutta Srinivasan
ed05ca9fec Clean up naming of FP16-related code, add comments 2018-03-09 13:51:42 -08:00
James Reed
60299e03cf Report all errors during ONNX backend translation rather than failing fast (#2210) 2018-03-09 10:58:22 -08:00
Lu Fang
52460a0b30 Add outputs_info as parameter in run_node (#2161) 2018-03-09 10:44:51 -08:00
Jongsoo Park
64b33672af add GatherFused8BitRowwise operator (#2167)
* add GatherFused8BitRowwise operator

* Update gather_fused_8bit_rowwise_op.cc

* Update gather_fused_8bit_rowwise_op.cc
2018-03-09 07:42:17 -08:00
Qinqing Zheng
9acac2a513 Pass in task groups to PipedReaderBuilder (#2182) 2018-03-08 16:16:57 -08:00
Jiyan Yang
f4b1e8b334 [Dper2] Add NetModifier abstraction and support for plotting the norm of blobs (#2201) 2018-03-08 13:41:32 -08:00
Joseph Spisak
cebf44e960 Element-wise tests now use or are seeded with hypothesis (#2181)
* Updated all element-wise tests to use hypothesis testing or at least use hypothesis seeds

* Updated tests to add seed to sqr function
2018-03-08 07:51:45 -08:00
Alexander Sidorov
60aa8c793d Update caffe2 from facebook (#2178)
* [C2] Don't crash kernel in case of invalid shapes for ConcatOp

Enforce correctness of the shapes for input tensors so we won't access an invalid index.

* [Caffe2] Add analytical performance counters to Dynolog

Initial diff for counting analytical flops and memory writes for C2 operators.

* BBoxTransform op: Handle RoIs from multiple images per batch

BBoxTransform op used during typical Faster-RCNN inference operates only on
RoIs from a single image (no batching). Adding support to handle that with an
optional output blob containing the batch splits (i.e., the number of RoIs
belonging to each item in the batch). The code is perfectly backward compatible
and shouldn't break any existing models.

* [mkl] Make MKL-DNN cooperate with memongered nets

C2's MKL-DNN implementation caches input dims and reuses intermediate and
output buffers across net runs, which prevents memonger from being used. This
may not always be useful since input dims may vary widely in many cases and
we'll end up reallocating anyway. Added an option to force reallocation when
memonger is used.

* [oncall] fix batch gather ops for empty input

Still need to bisect for the breaking change, but this shall fix the case of empty input.

The error logging looks like: https://interncache-ftw.fbcdn.net/t49.3276-7/23938497_293562711176943_6500112636590424064_n.txt?_nc_log=1

@[557759185:raychen], can you help subscribe the oncall from the ads side? This may affect the Sigrid online trainer.

* optimize BatchOneHotOp

We want to iterate in row-major rather than column-major order for better
locality (see the loop sketch after this entry).

* Supported exporting models with int blobs.

Supported exporting models with int blobs. Needed by CondenseNet.

* BoxWithNMSLimit op: Handle boxes from multiple images per batch

Similar to D7135360. Added support for multiple images per batch in the op.
Takes an optional additional input "batch_splits" as output by BBoxTransform
op, and returns new batch_splits after applying NMS and filtering. Otherwise,
backward compatibility is maintained.
2018-03-07 16:41:22 -08:00
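A sketch of the locality point in the "optimize BatchOneHotOp" entry above: with rows in the outer loop, every write touches the output in row-major order. Shapes and names are illustrative, not the op's actual code.

```
import numpy as np

lens = np.array([2, 3])              # number of candidate values per feature
vals = np.array([0, 1, 10, 20, 30])  # candidate values, concatenated
data = np.array([[1, 20],
                 [0, 30]])           # (batch, features)

out = np.zeros((data.shape[0], lens.sum()), dtype=np.int32)
for i in range(data.shape[0]):       # rows outer ...
    offset = 0
    for j, n in enumerate(lens):     # ... features inner: all writes to
        for k in range(n):           # out[i, :] stay contiguous (row-major)
            out[i, offset + k] = int(data[i, j] == vals[offset + k])
        offset += n
print(out)  # [[0 1 0 1 0]
            #  [1 0 0 0 1]]
```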
Kutta Srinivasan
0a18608b43 hacks to test exception handling and python operator backtraces
Add exception handling & re-throwing to worker threads of DAGNetBase
2018-03-07 15:09:17 -08:00
ilia-cher
0c6e843028 [caffe2] Add scopes into ONNX While op (#2149)
Summary:
Executing loop's body in a separate workspace, using WorkspaceStack to
support saving and reusing of workspaces

Test Plan:
python caffe2/python/operator_test/onnx_while_test.py

Reviewers: caffe2-review, jamesreed

2018-03-07 12:34:11 -08:00
Dmytro Dzhulgakov
7d141d4243 Changes done internally at Facebook (#2154)
f679c644e332 dzhulgakov [caffe2] Sync script - add ability to handle rebase conflicts
51729b061a15 dzhulgakov [caffe2] Changes done on GitHub
2018-03-06 01:23:54 -08:00
Dmytro Dzhulgakov
bec8923e02 [C2] Adding Clip Tensor by Scaling op
This op is used for gradient clipping to take care of exploding / vanishing gradients.

If original_norm is larger than the threshold,
then each element of the tensor is scaled by threshold / original_norm.
2018-03-06 00:33:11 -08:00
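A numpy sketch of the scaling rule stated above; the threshold and values are made up.

```
import numpy as np

def clip_tensor_by_scaling(x, original_norm, threshold):
    # If the norm exceeds the threshold, scale every element by
    # threshold / original_norm; otherwise return the tensor unchanged.
    if original_norm > threshold:
        return x * (threshold / original_norm)
    return x

g = np.array([3.0, 4.0])  # L2 norm = 5.0
print(clip_tensor_by_scaling(g, np.linalg.norm(g), threshold=1.0))
# [0.6 0.8] -- direction preserved, norm clipped to 1.0
```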
Dmytro Dzhulgakov
6b98315a28 [GanH] Model Test
as titled
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
16ba087b64 [oncall]fix unittest dper/layer_models/tests:utils_test
as titled -- fix offending diff D7091725 due to added debug_info in operator
proto
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
496c999f7d [core] NUMA-aware pinned allocator
Using cudaHostRegister/Unregister instead of cudaMallocHost to move memory to a
specific NUMA node
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
9e71de398b [core] Graph-level NUMA awareness in Caffe2
Adding NUMA awareness through numa_node_id in DeviceOption. Blobs of operators
with numa_node_id are allocated on the corresponding memory banks, using CPU
pools with NUMA affinity set to run the operators.
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
8b0b090ff1 fix Caffe2TensorToNumpyArray for py3
With Python 3, np.int defaults to int64. This diff should fix it. I don't know if a test already exists for this function; however, the following ASR test was breaking when I switched to py3

```
buck test caffe2/caffe2/fb/speech/asr_training/:tensor_parser_test
```
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
968ebb3b82 [GanH]fuse jsd with lr loss/xent
as titled
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
08dbd96642 Add TensorInferenceFunction for PowOp
Add TensorInferenceFunction for PowOp so that we can infer the shape and datatype of Pow output.
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
f2ec5b7b0e [DPER] Fix bug in uint8 quantization shortcut.
After D6953547 some of the blobs were no longer impacted by uint8 quantization,
but they would still generate operators expecting uint8 inputs and thus fail.

This diff adds a temporary hack to avoid doing this quantization when the layer
is not quantized.

Will fix it by switching to Net rewriting instead.
2018-03-06 00:33:11 -08:00
Dmytro Dzhulgakov
1f0a833d8e JSD fwd/bwd op
as titled
2018-03-06 00:33:11 -08:00
Kutta Srinivasan
b4b2f0d2cc Work on fp16 conv op 2018-03-05 21:13:03 -08:00
Pooya Davoodi
37dec493a5 Scope MultiRNN blobs with name as well as layers (#2025)
* Scope MultiRNN blobs with name as well as layers

Also don't double scope MultiRNN in case of multiple layers.

* Scope input projection of first layer with name

We don't scope it with layers because the projection is done
outside of the layer.

* Avoid scoping input blob in MemongerTest.test_rnn

* Rectify input_blob in prepare_input

Revert change in memonger_test because rectifying input will solve the problem.
2018-03-02 22:21:07 -08:00
Qinqing Zheng
d013e16cf4 [C2] Enable LARS on GPU (#2115) 2018-03-02 18:06:19 -08:00
Joseph Spisak
11a736b682 Sqrt op (#2101)
* First attempt on sqrt op

* Adding the Sqrt op along with the test cases

* Made changes per @Yangqing's questions re: tensor format and used hypothesis to generate input tensor
2018-03-02 16:19:45 -08:00
Mohammad Hossain
349238f5bf Mean Op (#2072)
* Mean Op

* Mean Op

* Mean Op

* Fix gradients and include seed for randomized input generation

* Update test strategies parameters
2018-03-02 16:18:17 -08:00
Xiaomeng Yang
558e2a92df Revert update on top_k_op (#2119) 2018-03-02 16:07:45 -08:00
Xiaomeng Yang
c70beed31c Add axis to top_k_op. 2018-03-02 15:21:31 -08:00
Alexander Sidorov
1af7df6e78 fix rnn_cell_test in fbcode (#2107) 2018-03-01 21:02:52 -08:00
Lu Fang
1981557751 Add README and ONNXOpCoverage doc back (#2102)
* Add README and ONNXOpCoverage doc back

* Polish the coverage table again

* Remove onnx-caffe2 from title
2018-03-01 17:05:25 -08:00
Lu Fang
aa5145bf14 Enable onnx backend test on pow, ceil and floor (#2103) 2018-03-01 15:33:58 -08:00
anderspapitto
c0304c83b1 Copy some outputs in order to decouple storage (#2105)
so that mutating one of them does not mutate the others
2018-03-01 13:25:31 -08:00
anderspapitto
749a17661c Introduce padding op to mimic pytorch semantics in ONNX export (#2069)
In pytorch, after pad_packed_sequence, the "extra" elements (after the
ends of the sequences) are reset. In the equivalent Caffe2 graph
exported via ONNX, they contained some leftover values, which caused
tests to fail. Probably no one depends on these values, but just in
case, set them to zero to mimic pytorch semantics.
2018-02-28 15:44:54 -08:00
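A numpy sketch of the semantics described above: positions past each sequence's end are reset to zero using a mask built from the lengths. Shapes are illustrative.

```
import numpy as np

def zero_padding(seq, lengths):
    # seq: (T, B, D) padded batch. Positions t >= lengths[b] are the
    # "extra" elements past the end of sequence b; zero them to mimic
    # pytorch's pad_packed_sequence output.
    t = np.arange(seq.shape[0])[:, None]      # (T, 1)
    mask = (t < lengths[None, :])[..., None]  # (T, B, 1)
    return seq * mask

seq = np.ones((4, 2, 3))  # T=4, batch of 2, hidden size 3
print(zero_padding(seq, np.array([4, 2]))[:, 1, 0])  # [1. 1. 0. 0.]
```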
Orion Reblitz-Richardson
5c381bbc57 Patch cuda-convnet2 from internal Facebook changes.
* Unfortunately this needs to be manually monkey patched.
* This should get it so GitHub and fbcode versions match.
2018-02-28 14:20:48 -08:00
Lu Fang
eb612b09e9 Fix Caffe2 ==> ONNX converter to handle three models (#2058)
* Handle legacy pad in Caffe2==>ONNX converter, also remove fake initializer

* Address the comments, 1) have filtering fake initializer before ssa rewrite, 2) polish the legacy padding handling logic

* Add test cases to cover the code just added

* Nit
2018-02-28 11:55:49 -08:00
anderspapitto
76304300a8 Transpose shape inference (#2057)
* fix variable name

* enhance shape inference to handle transpose

in the case arising from pack_padded(..., batch_first=True)
2018-02-27 11:51:10 -08:00
anderspapitto
ec547ce640 RNN ONNX export: concat hidden/cell states on the right axis (#2055)
Test Plan: existing tests in onnx-fb-universe catch this, modulo a bug
in the tests which I am fixing in a separate diff
2018-02-26 11:04:04 -08:00
Orion Reblitz-Richardson
028bc2f23f [C2 OSS][GPU]exposing totalGlobalMem info to workspace python
exposing totalGlobalMem info to GetDeviceProperties method so that users
can have better understanding
2018-02-26 10:26:25 -08:00
Orion Reblitz-Richardson
c55a642d83 [c2] update SparseFeatureHash layer
The diff makes the following changes for this layer: copy the length blob; add a nameScope for the output schema; add layer tests
2018-02-26 10:26:25 -08:00
Orion Reblitz-Richardson
e397367db0 GatherRangesToDenseOp supporting sorting with keys
Added functionality to GatherRangesToDenseOp such that it supports an optional input KEY, and will sort DATA according to KEY for each example per feature.
2018-02-26 10:26:25 -08:00
Qinqing Zheng
7cafdab69b [C2] Implement Layer-wise Adaptive Rate Scaling (LARS) (#2034)
* [C2] Implement Layer-wise Adaptive Rate Scaling (LARS)

* [C2] Implement Layer-wise Adaptive Rate Scaling (LARS)

* add unit test for Lars

* set default value for lars to be None

* remove lars for subclasses of SgdOptimizer
2018-02-25 14:58:31 -08:00
PengBo
07646e405e no_bias in resnet32x32 (#1817) 2018-02-24 16:58:23 -08:00
Lu Fang
c1919b370b Skip Cast ONNX backend test, which is not supported in Float16 case (#2005) 2018-02-24 03:51:08 -08:00
Qinqing Zheng
b3fdfa7bd6 [DT] [4/n] Make epoch_group explicit for JobRunner (#2018) 2018-02-23 10:41:52 -08:00
Yinghai Lu
c249f49ddd Rename caffe2_ref_test.py to c2_ref_test.py (#2016)
* Rename caffe2_ref_test.py to c2_ref_test.py

* Rename the module name doc too
2018-02-22 20:22:39 -08:00
Bram Wasti
51897e52da fix all the broken tests from adding debug info (#2013) 2018-02-22 17:43:53 -08:00
anderspapitto
38f18c1daa add third output in onnx -> caffe2 lstm conversion (#2011) 2018-02-22 17:43:33 -08:00
Bram Wasti
4e5df5cda6 added debug info to OperatorDef 2018-02-22 15:53:49 -08:00
Orion Reblitz-Richardson
3ee9b5edca [PR] Floor and Ceil Op
Closes https://github.com/caffe2/caffe2/pull/1932
GitHub Author: Mohammad Hossain <zem@devgpu242.prn2.facebook.com>
2018-02-21 18:31:45 -08:00
Orion Reblitz-Richardson
ccea6924a2 Implementing Pow operator (this merges existing pow with a scalar and new pow with a tensor exponent). Second Try.
The old pow operator has been deleted in math_ops.cc, math_ops.cu and math_ops.h, while the new operator supporting scalar and tensor exponents has been added in pow_op.cc, pow_op.h and elementwise_op.cu.
2018-02-21 18:31:45 -08:00
Yinghai Lu
cc7e61c88d Move onnx-caffe2 inside caffe2 (#1921)
* Move onnx-caffe2 inside caffe2

* Update to the lastest onnx-caffe2 and update jenkins env

* Rename onnx_caffe2 to onnx

* Add __init__.py to caffe2/python/onnx

* Change CI check variable to JENKINS_URL

* Cherrypick recent onnx-caffe2 update
2018-02-20 13:56:52 -08:00
Yan Zhu
36c49c9f4a change schema's __repr__() flat output to pprint style indented output
Summary: as title. This is similar to the Python pprint utility for nested JSON data structures. It can be useful for checking the schema during debugging.

Reviewed By: kittipatv

Differential Revision: D6710767

fbshipit-source-id: e450aa5477fa1ad4f93c4573f8108a2f49956da8
2018-02-16 16:26:11 -08:00
Frank Jiang
c809d89810 Fix RowWiseSparseAdam implementation
Summary: The original implementation averaged the momentum across the embedding dimensions, which doesn't make any sense. This meant all the embedding dimensions received the same update, becoming a very memory-expensive one-dimensional embedding (see the sketch after this entry).

Differential Revision: D7003135

fbshipit-source-id: ed54e3427bc13895a4e949e96b4b17f6ebfb6d53
2018-02-16 13:28:26 -08:00
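A numpy sketch of the fixed update under the assumption that only the second moment is shared per row (one scalar per embedding row) while the first moment stays per element; names are illustrative and bias correction is omitted.

```
import numpy as np

def row_wise_sparse_adam(w, m, v, rows, grad, lr=0.01,
                         beta1=0.9, beta2=0.999, eps=1e-8):
    # m: per-element first moment; v: one scalar per embedding row.
    # Sharing only the *second* moment saves memory while each dimension
    # still gets its own update -- averaging the first moment too (the
    # original bug) made every dimension move identically.
    m[rows] = beta1 * m[rows] + (1 - beta1) * grad
    v[rows] = beta2 * v[rows] + \
        (1 - beta2) * np.mean(grad ** 2, axis=1, keepdims=True)
    w[rows] -= lr * m[rows] / (np.sqrt(v[rows]) + eps)

w, m, v = np.zeros((5, 3)), np.zeros((5, 3)), np.zeros((5, 1))
row_wise_sparse_adam(w, m, v, np.array([0, 2]), np.random.randn(2, 3))
print(w[[0, 2]])  # the two touched rows move, each dimension differently
```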
Andrey Malevich
60dc3ca66f Use 8-bit quantization only in cases when it makes sense.
Summary:
In some cases we were doing quantization even when we should not. This diff
prevents that from happening.

Reviewed By: rayleichen

Differential Revision: D6953547

fbshipit-source-id: 7c65baaf969e5e1bddb68ca8182f4f3b43f2431d
2018-02-15 19:33:03 -08:00
Xianjie Chen
c5497a34f6 Add CPU_ONLY tag for sparse_feature_hash layer
Summary: as desc.

Differential Revision: D6997841

fbshipit-source-id: 75a33ea146224979f149a36a063a78d6f18338ee
2018-02-15 19:05:56 -08:00
Andrey Malevich
16cd3f4a9e Don't allow to export models where parameters are inputs/outputs
Summary:
Without this enforcement, it's too easy to export a model that overrides its
params in the predictor.

Reviewed By: rayleichen

Differential Revision: D6984506

fbshipit-source-id: 9bbf375758686c6ad12ad071723f255363e98ae6
2018-02-14 23:54:42 -08:00
Yan Zhu
0a66c76a4c detailed error output for parameter sharing
Reviewed By: xianjiec

Differential Revision: D6986239

fbshipit-source-id: 5b8bb06ea2383ce64318b5322bda7a58469f3eb0
2018-02-14 11:10:51 -08:00
Pieter Noordhuis
52fa742c51 Revert D6893040: Implementing Pow operator (this merges existing pow with a scalar and new pow with a tensor exponent).
Summary:
This reverts commit 30f614beea6f859fee25ce4f85573142885dde45

bypass-lint

An infra SEV is better than not reverting this diff.
If you copy this password, see you in SEV Review!
cause_a_sev_many_files

Differential Revision:
D6893040

Original commit changeset: 30f614beea6f

fbshipit-source-id: 5e98a24699088283f864efe31234874bdacbe3c3
2018-02-14 10:34:08 -08:00
Maxim Naumov
f7cc8e8822 Implementing Pow operator (this merges existing pow with a scalar and new pow with a tensor exponent).
Summary: The old pow operator has been deleted in math_ops.cc, math_ops.cu and math_ops.h, while the new operator supporting scalar and tensor exponents has been added in pow_op.cc, pow_op.h and elementwise_op.cu.

Reviewed By: houseroad

Differential Revision: D6893040

fbshipit-source-id: 30f614beea6f859fee25ce4f85573142885dde45
2018-02-13 17:46:35 -08:00
Yan Shang
fd28e0fa29 Add bool function to return whether a model contains loss
Summary:
Add a function that returns true if the model contains a loss and returns
false if it doesn't.

Reviewed By: kittipatv

Differential Revision: D6982444

fbshipit-source-id: 1f63b7a1eaa3077841a0ad5d8d854b471d0aa84c
2018-02-13 16:38:36 -08:00
Kittipat Virochsiri
83c494787d Allow adding to trainer_extra_schema
Summary: Sometimes we need to add some extra schema later

Reviewed By: sunnieshang

Differential Revision: D6951849

fbshipit-source-id: 564eb88f9250eae24869fd10ba3426e00a18af33
2018-02-13 14:40:36 -08:00
Kittipat Virochsiri
6f533fd8b8 Only overwrite path_prefix & path_type when not None
Summary: This breaks internal functionality

Reviewed By: aartibasant

Differential Revision: D6975222

fbshipit-source-id: ce751950b4b9217d8ea5de703690451e98642f00
2018-02-13 14:40:35 -08:00
Matan Appelbaum
d99d28b3e6 Allow custom component tagging in DeviceOptions.node_name
Summary:
Modify detect_components to take a list of valid node_name prefixes instead of values.  Users can set node_name to e.g. `'sparse_component:0'`, `'sparse_component:1'`, etc.
and pass `'sparse_component:'` as a valid prefix.  Also add `Tags.SPARSE_COMPONENT` in addition to `Tags.SPARSE_SHARDED` and `Tags.SPARSE_DONT_SHARD` and update all calls to
`detect_device_components`.

Reviewed By: azzolini

Differential Revision: D6952599

fbshipit-source-id: e1b1e6b146a6bd053b295690016044fd5990c893
2018-02-13 11:14:41 -08:00
Junjie Bai
b11ba65204 Experimental support for setup.py develop mode install
Summary:
`python setup.py develop` / `pip install -e .`
Closes https://github.com/caffe2/caffe2/pull/1926

Reviewed By: orionr

Differential Revision: D6951780

Pulled By: bddppq

fbshipit-source-id: 01249cbca90ec5326ea4107d4e500ae95a9dbd7b
2018-02-12 23:36:18 -08:00
Zhicheng Yan
d79a31761e rectangle_cropping_multi_cropping_color_jittering_lighting
Summary:
Change log
- Support rectangle cropping, where the height and width of clip cropping can be set separately. This is useful when most video resolutions are non-square, such as 240p, 360p and 480p, where width is significantly larger than height.
  - Comparisons of training on ucf101 between using 112x112 croppings and using 112x144 cropping.
  - https://fburl.com/i0rw6y1k
- Support 14-way multi-cropping per video clip at the testing stage to improve classification accuracy. Take left-top, central-top, right-top, left-bottom, central-bottom, right-bottom and central-central croppings as well as their mirrorings. In total, 14 croppings.
   - Comparisons on the same model trained on UCF-101. Use 1 clip per video
      - RGB. f41014306, w/o Vs f41014868, w/ multi-cropping: `0.64099 Vs 0.65796`
      - OF. f41014889, w/o Vs f41014913, w/ multi-cropping: `0.65796 Vs 0.67624`

- Support color jittering and color lighting on RGB data for training data augmentation.
  - Comparisons of training on ucf101 from scratch with and without color jittering and lighting:
  - https://fburl.com/k69zatul

Reviewed By: HengCV

Differential Revision: D6962620

fbshipit-source-id: 9b43478945874142727fea351ee04417218e6606
2018-02-12 16:39:06 -08:00
Jesse Hellemn
1c005602fc Adding model_id argument to nets in predictor_container when modelInfo exists
Summary: Copying model_id from metaNetDef_->modelInfo in PredictorContainer for dper models. Since these model_id's are strings of <model_id>_<snapshot_id>, changed them to strings in net_observer

Reviewed By: salexspb

Differential Revision: D6752448

fbshipit-source-id: 93c91950b44c012e57240aaf909bc961449cfd7c
2018-02-12 10:38:58 -08:00
Lukasz Wesolowski
78c9a35a84 GPU support for ChannelStatsOp
Summary: Step 1 of 3 in adding support for multidevice batch normalization on GPUs. Implements ChannelStatsOp for the GPU. Next steps are to port the backprop stats op and tie things together in DPM.

Reviewed By: rbgirshick

Differential Revision: D6953411

fbshipit-source-id: cd50e53d66ea84fe66021c08b978b28290d9f347
2018-02-09 19:31:31 -08:00
Kittipat Virochsiri
51267095d5 Remove enqueue_splits() from ReaderBuilder
Summary: The interface is not used anywhere AFAICT; cleaning up to make it less confusing.

Reviewed By: kuttas

Differential Revision: D6867040

fbshipit-source-id: 3e8a77df76ef09c6864c308561825777b326f76c
2018-02-09 12:20:53 -08:00
Zhicheng Yan
06f8fc3f49 extend_operator_CostInferenceFunction
Summary:
- Extend SimpleNet::TEST_Benchmark to report extra FLOPs, feature-map memory, and parameter memory at the operator level
- Add cost inference functions for the 3D conv, sum, relu, spatial_bn, and fc operators (see the FLOP sketch after this entry).

Reviewed By: sf-wind

Differential Revision: D6909893

fbshipit-source-id: 534492ccf2e15860e86f1e7f759ff338bf57753f
2018-02-09 10:56:29 -08:00
Lin Yang
cec7003190 only enable FloatToHalf test for GPU
Reviewed By: bddppq

Differential Revision: D6945312

fbshipit-source-id: 9550a9607c0daec6783ce63d3c9f082ff27b0303
2018-02-08 17:48:47 -08:00
Lin Yang
27b9b7b15a Make TypeInference work for HalfToFloat & FloatToHalf.
Summary: add missing type mapping.

Reviewed By: kennyhorror

Differential Revision: D6940574

fbshipit-source-id: b70cea4ce2e519cb3e72d0482a38f50dbb968b4a
2018-02-08 15:33:43 -08:00
Andrew Dye
6ecaed5021 Generate a core dump when CompleteInTimeOrDie forcefully quits
Summary: CompleteInTimeOrDie was added to detect deadlocks and proactively exit. In addition, call os.abort() to generate a core dump so that the error is actionable.
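
A minimal sketch of the pattern (hypothetical helper, not the CompleteInTimeOrDie implementation): a watchdog timer that calls os.abort() on timeout so the process dies with a core dump instead of hanging.

```
import os
import threading

def complete_in_time_or_die(timeout_sec):
    # os.abort() raises SIGABRT, producing an actionable core dump.
    watchdog = threading.Timer(timeout_sec, os.abort)
    watchdog.daemon = True
    watchdog.start()
    return watchdog  # call .cancel() once the work completes in time
```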

Reviewed By: bmaurer

Differential Revision: D6938343

fbshipit-source-id: 8bd36da4f4bb1195bd3398f25d133a6ebf1c66ad
2018-02-08 14:08:51 -08:00
Andrey Malevich
01de4e40d6 Fix a bug in nested parameter sharing logic.
Summary:
It appears that my initial implementation was not really working once one
starts nesting. This diff fixes that by replacing the itertools-based logic
with something that is much easier to reason about.

Reviewed By: idning

Differential Revision: D6933763

fbshipit-source-id: f7a1de996d878a41bac2b2acd9d87a7c4b416778
2018-02-08 13:32:53 -08:00
Giri Anantharaman
6aaa701c9c Adding ThresholdedRelu Op support.
Summary: Core operator and python operator changes for adding ThresholdedRelu Op support.
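
A numpy reference for the ThresholdedRelu semantics (a sketch, not the op's kernel); alpha defaults to 1.0 as in ONNX:

```
import numpy as np

def thresholded_relu(x, alpha=1.0):
    # Pass values strictly above alpha through unchanged; zero the rest.
    return np.where(x > alpha, x, 0.0)
```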

Reviewed By: houseroad

Differential Revision: D6900660

fbshipit-source-id: 9b17ede13ccb3264286389c7fc633ab9c1a7bbbf
2018-02-08 12:18:40 -08:00
Alexander Sidorov
e0e124e617 Fix RNN scoping situation
Summary:
There is a long-standing scoping problem that was introduced in the original Python wrappers early in H1. Basically, each RNNCell implementation has to manually scope the outputs of each of its operators. If somebody forgets, there can be weird bugs with layers etc.

The approach is the following: the user has to explicitly specify the current scope when using apply_over_sequence and similar functions, if the function is going to be called several times (like for stacking layers). This way we use Caffe2's native scoping approach instead of inventing one extra API people have to use (i.e. passing the scope name as an argument to the RNNCell constructor). A scoping sketch follows below.
Closes https://github.com/caffe2/caffe2/pull/1681
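
A minimal sketch of the Caffe2-native scoping this relies on (layer names illustrative; Relu stands in for the real per-layer apply_over_sequence call):

```
from caffe2.python import core

net = core.Net('stacked_rnn_sketch')
hidden = net.AddExternalInput('hidden')
for layer in range(2):
    # Operators created here get blob names like 'lstm_layer_0/...', so a
    # second (stacked) call with identical internal names cannot collide.
    with core.NameScope('lstm_layer_%d' % layer):
        hidden = net.Relu(hidden, 'hidden')
```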

Differential Revision: D6777536

Pulled By: salexspb

fbshipit-source-id: 73d860b8d4857589e04bdea5a6fcd3080d68427c
2018-02-07 17:35:29 -08:00
James Reed
a68e224219 Fix ONNX While test for CUDA
Summary: We should not be trying to instantiate this op on GPU at this point

Reviewed By: pietern

Differential Revision: D6915576

fbshipit-source-id: 6bdbc93ad12fc67e3001fce1b506fe2895d7b0ba
2018-02-06 14:35:34 -08:00
Qinqing Zheng
c028bcd466 Fix input of Reduce{Front/Back}{Sum/Mean}Gradient ops
Summary: The previous refactor of these four ops changed their input semantics, which made them backward-incompatible with old models. This diff fixes the problem by checking the inputs and defining the follow-up behavior case by case, so that old models can still be accommodated.

Reviewed By: dzhulgakov

Differential Revision: D6905840

fbshipit-source-id: fc37baec407fd5eae64fc9c2b61aba3c492a90f3
2018-02-05 23:33:07 -08:00
James Reed
f383600625 ONNX While Operator
Summary:
Special While loop operator that follows the semantics of While in ONNX: https://github.com/jamesr66a/onnx/blob/controlflow/docs/Operators.md#experimental-loop (a small semantics sketch follows the lists below)

Stuff that's missing:

- Lexical scoping enforced via child workspaces
- Double-buffering on forward

Further possible enhancements:
- Full parallelism when there are no loop-carried dependencies
- Diagonal execution
- More optimized scan_outputs shaping via static shape inference provided in ONNX (coming sometime)
- GPU support (probably just some tensor value management stuff)
- Gradient support (likely low-pri right now)
Closes https://github.com/caffe2/caffe2/pull/1848
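
A plain-Python sketch of the loop-carried/scan-output semantics the op follows (simplified; the linked ONNX doc is the exact contract):

```
def onnx_while(body, cond, *loop_carried):
    # `body` re-produces the condition and the loop-carried state each
    # iteration; per-iteration scan values are stacked into scan outputs.
    state, scan_outputs = list(loop_carried), []
    while cond:
        cond, state, scan = body(*state)
        scan_outputs.append(scan)
    return state, scan_outputs
```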

Reviewed By: dzhulgakov

Differential Revision: D6907524

Pulled By: jamesr66a

fbshipit-source-id: 4938108733e168b8c027035091104712a18c992a
2018-02-05 21:05:52 -08:00
Anders Papitto
6a02cb2844 implement sequence length support for BasicRNN
Summary: Closes https://github.com/caffe2/caffe2/pull/1843

Differential Revision: D6839575

Pulled By: anderspapitto

fbshipit-source-id: efdf00f1c5cfb0d63f1992028a796c8277b76688
2018-02-05 21:05:51 -08:00
Aarti Basant
28f42cc8e7 separating set_params and init() for checkpoint managers.
Summary: separating set_params and init() for checkpoint managers.

Reviewed By: anshulverma

Differential Revision: D6852255

fbshipit-source-id: 061f16ce0c49953ca8a5fe9546af5c9945a3be48
2018-02-05 18:03:21 -08:00
Evgeny Kharitonov
7c7e09fe2d Adding the Percentile op & UT
Reviewed By: MisterTea

Differential Revision: D6879507

fbshipit-source-id: 7ca4165a42c073e384d3a6138ef033ca384afd49
2018-02-05 16:08:00 -08:00
Anders Papitto
d8748a9d53 GRU sequence lengths: allow unspecified sequence lengths
Summary:
modeled after the earlier change for LSTM
Closes https://github.com/caffe2/caffe2/pull/1841

Differential Revision: D6837461

Pulled By: anderspapitto

fbshipit-source-id: de4e787019fa30f813a4b29f14b7000ce9d22d8e
2018-02-05 13:20:05 -08:00
Orion Reblitz-Richardson
d3ea7e260b Allow for all of the names we have in our model zoo.
Summary:
* We now allow subdirectories as well as numbers in the name.
* Also fixed an error case.
Closes https://github.com/caffe2/caffe2/pull/1875

Reviewed By: pjh5

Differential Revision: D6894401

Pulled By: orionr

fbshipit-source-id: 6a9938bc7d2ba6b8f094ed7b8a02664120a10626
2018-02-05 08:52:55 -08:00
Lin Yang
3acce3e4a7 assert global_constant name as string
Reviewed By: kennyhorror

Differential Revision: D6895157

fbshipit-source-id: 9844ab6176d22c6d05a5a0f83b731f734ef9853d
2018-02-04 01:02:30 -08:00
Lin Yang
95626737d0 enforce global_constant name should be a string
Reviewed By: kennyhorror

Differential Revision: D6880114

fbshipit-source-id: 2c9bd27b01cedb469f19843163b04a613fda5904
2018-02-04 01:02:27 -08:00
Yan Shang
e816c777eb Add regularization for sparse features
Reviewed By: xianjiec

Differential Revision: D5767997

fbshipit-source-id: b9b7c47d11417fbe67d861a2a6b4daa38adbe57b
2018-02-02 16:03:32 -08:00
Yan Shang
dabddd65f4 Add sparse normalization operator
Reviewed By: xianjiec

Differential Revision: D6735673

fbshipit-source-id: 870b38d5175cb2d2dcad43c0e9fa4746e4dd15dd
2018-02-02 15:05:59 -08:00
Lin Yang
e138203d8f add sparse_to_dense_test
Summary: A hypothesis test was introduced in D4508879; add a plain test which is more straightforward.

Reviewed By: kennyhorror

Differential Revision: D6835334

fbshipit-source-id: d05a2cd199b2de56ac0cc0319f19fcd7978647d5
2018-02-01 08:14:37 -08:00
Xue Feng
f652f20f73 change ModOp to support output sign configurations
Summary: enable ModOp to control whether the output sign follows the dividend or the divisor (illustrated below).
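
The two conventions, illustrated with plain Python semantics (sketch only):

```
import math

print(7 % -3)            # -2:  Python's % follows the divisor's sign
print(math.fmod(7, -3))  #  1.0: fmod follows the dividend's sign
```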

Reviewed By: xianjiec

Differential Revision: D6852457

fbshipit-source-id: 62dbb66cacecb8e0a0f81f63f2b7b378efbd6ee2
2018-01-31 18:03:16 -08:00
Jerry Pan
eee42748d9 Caffe2: serialize init for parallel workers
Summary: Caffe2: serialize init for parallel workers

Reviewed By: kevinwilfong

Differential Revision: D6862119

fbshipit-source-id: 805b2971eca4501977950420565bd9ea37dc0f6c
2018-01-31 17:50:10 -08:00
Qinqing Zheng
90a3363f29 Return an empty TaskGroup if node managers exist in MultiNodeCheckpointManager
Summary: Currently MultiNodeCheckpointManager returns None in this case, yet in JobRunner we assume this function returns a valid task group, i.e. we call session.run(self.checkpoint_manager.init(...)) directly. This fails when we use LocalHostScheduler and reuse a MultiNodeCheckpointManager.
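
A hedged sketch of the fix (function name illustrative), assuming the caffe2.python.task API:

```
from caffe2.python.task import TaskGroup

def checkpoint_init_tasks(node_managers):
    if node_managers is not None:
        # Nothing to set up, but callers still expect something they can
        # pass to session.run(): an empty TaskGroup, never None.
        return TaskGroup()
    group = TaskGroup()
    # ... populate `group` with the real per-node init tasks (elided) ...
    return group
```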

Reviewed By: azzolini

Differential Revision: D6843450

fbshipit-source-id: a7ec942cfe692f19e8751b0078ae6a6108f29e54
2018-01-30 19:20:50 -08:00
Alexander Sidorov
98a4c3f9b2 Enable rnn_cell_test in jenkins
Summary: Closes https://github.com/caffe2/caffe2/pull/1839

Differential Revision: D6847623

Pulled By: salexspb

fbshipit-source-id: b8a32cb39a8063b8938c89556e5d42606735238d
2018-01-30 11:48:35 -08:00
Lu Fang
560e5c94bd Change default value of LeakyRelu's alpha from 0 to 0.01
Summary: To match the semantics in ONNX, change the default value of LeakyRelu's alpha to 0.01.
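
Numpy reference for the new default (a sketch, not the op's kernel): negative inputs are now scaled by 0.01 rather than zeroed.

```
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x >= 0, x, alpha * x)
```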

Reviewed By: dzhulgakov

Differential Revision: D6840975

fbshipit-source-id: 08543f80fd86cbe96a0eee8d725ef137a5bf4ab8
2018-01-29 22:31:12 -08:00
Xiaomeng Yang
6b1f848df6 Adds gpu implementation for FCTransposed
Summary: Adds gpu implementation for FCTransposed.

Reviewed By: salexspb

Differential Revision: D6572785

fbshipit-source-id: a7cd0f7364ace286942c46b91e0287307cbfea83
2018-01-29 19:03:24 -08:00
mdschatz
3c952426fb Add operator attaching net observer
Summary:
Commonly, net observers attach operator observers at construction. This diff separates the logic into a base class to inherit from.
Closes https://github.com/caffe2/caffe2/pull/1806

Reviewed By: salexspb

Differential Revision: D6808623

Pulled By: mdschatz

fbshipit-source-id: 75ef0eea913ef30943541c829c0a976965f42736
2018-01-29 14:34:34 -08:00
Xiaolong Wang
f8575f6d68 Breakdown Dispatcher
Summary: dispatch by Ngram breakdown

Differential Revision: D6794082

fbshipit-source-id: 7f6e8fa3a0abe0dc6d0d466c95e8c4fc865e3abb
2018-01-26 17:47:54 -08:00
Anders Papitto
33d2212751 LSTM sequence lengths: allow unspecified sequence lengths
Summary:
In this case, each sequence is treated as having a length equal to the
first dimension of the input tensor. This matches the semantics of
ONNX when the sequence length input is left out.
Closes https://github.com/caffe2/caffe2/pull/1764

Reviewed By: dzhulgakov

Differential Revision: D6751219

Pulled By: anderspapitto

fbshipit-source-id: 89e0efd12339157627494e2b8c83e952bdd8a9f8
2018-01-26 16:32:56 -08:00
Lin Yang
252211b001 testPairwiseDotProduct
Summary: as title.

Reviewed By: kennyhorror

Differential Revision: D6793829

fbshipit-source-id: f803e0400635ca37184f1dd5bb711bfe0e4bea21
2018-01-26 11:33:08 -08:00
Alexander Sidorov
a3b8c459d4 Revamp MNIST tutorial
Summary:
Main changes:

1. Move reader creation to Brew in order to be consistent and avoid ad-hoc use of param_init_net
2. Use optimizers for the training function instead of manual optimizer construction (see the sketch below)
3. Add an MLP mode (the default)
4. Fix a bunch of overly verbose comments and add a bit of new explanation
Closes https://github.com/caffe2/caffe2/pull/1760
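
A minimal sketch of change 2, assuming the caffe2.python brew/optimizer APIs (blob names and sizes illustrative):

```
from caffe2.python import brew, model_helper, optimizer

model = model_helper.ModelHelper(name='mnist_mlp')
fc = brew.fc(model, 'data', 'fc1', dim_in=784, dim_out=10)
softmax, loss = model.SoftmaxWithLoss([fc, 'label'], ['softmax', 'loss'])
model.AddGradientOperators([loss])
# One call builds the SGD update ops instead of hand-wiring them.
optimizer.build_sgd(model, base_learning_rate=0.1)
```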

Differential Revision: D6749059

Pulled By: salexspb

fbshipit-source-id: 9dfbbb2d9772a74a0300c2e404a92e791f7cc593
2018-01-26 09:17:31 -08:00
Peter Goldsborough
0fd41a63a1 Integrate Fused8BitRowwise ops with DPER
Summary: Updates `sparse_lookup.py` for the new fused 8-bit rowwise quantization. Mostly just changing the same files as the original diffs (D5753626 and D5761202). I know very little about this code here so please let me know if this is safe, also in terms of migration away from the non-fused storage.

Reviewed By: kennyhorror

Differential Revision: D6710784

fbshipit-source-id: 185f147af52a094a937ba631b0351225e660d205
2018-01-25 15:02:42 -08:00
Frank Jiang
304e607b70 Fix adam test
Reviewed By: pietern

Differential Revision: D6787780

fbshipit-source-id: a2d1428b0e028d6f3d8f7c312c90f3fa411cd0a2
2018-01-25 12:59:54 -08:00
Xiaolong Wang
b2cfc5ea53 add KeySplitOp
Summary:
as titled

After converting categorical to Ngram keys, use this op to extract eids

Differential Revision: D6794020

fbshipit-source-id: 4f9251a22d7a129da30b92845e312876e6510e7e
2018-01-25 10:50:53 -08:00
Xiaomeng Yang
d695027300 Adds cuda support for LC op
Summary: Adds cuda support for LC Op

Reviewed By: QueryConnectionException

Differential Revision: D6803659

fbshipit-source-id: 538bbf6fd202c79154132fda0e90e175eb09d025
2018-01-25 10:19:48 -08:00
Huazhong Ning
90543ff13a weighted sampling reader dequeue outputs table index
Summary: Weighted sampling reader dequeue randomly chooses a hive reader to read a mini-batch. This diff allows dequeue to output the index of the randomly chosen table to a specific blob.

Reviewed By: kennyhorror

Differential Revision: D6621070

fbshipit-source-id: 754b981fc2bcfdb0146d2a0a5b677e7cfe74211b
2018-01-24 19:06:25 -08:00
Huan Gui
c261b9ce70 Fix NGram from categorical test
Summary: Fix the flaky NGramFromCategorical test

Reviewed By: dragonxlwang

Differential Revision: D6801152

fbshipit-source-id: dcbae17b1d3737a41fb2f5c794c1146a02c542bb
2018-01-24 18:51:16 -08:00
Xiaomeng Yang
afafe8a466 Add LC Layer
Summary: Add the 1st version of LC layer.

Reviewed By: Yangqing

Differential Revision: D6788647

fbshipit-source-id: ebee9215a1d6e1e567548a0fef771802851682a3
2018-01-24 16:51:17 -08:00
Aarti Basant
fc56e86c7d Introduce init API for the optional Checkpoint Metadata Handler object
Summary:
Every call to the checkpoint_metadata_handler write() API requires us to pass all params like db_prefix, db_type, etc.
Introduce an init API in the checkpoint_metadata_handler so that such params can be saved once and need not be passed in every API call (sketch below).
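
A hedged sketch of the new shape of the API (class and method bodies illustrative only):

```
class CheckpointMetadataHandler(object):
    def init(self, db_prefix, db_type):
        # Save the common params once instead of threading them through
        # every write() call.
        self._db_prefix = db_prefix
        self._db_type = db_type

    def write(self, epoch):
        path = '%s/epoch_%d' % (self._db_prefix, epoch)
        return path, self._db_type
```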

Reviewed By: mraway, anshulverma

Differential Revision: D6792651

fbshipit-source-id: 059fa4309e8fce1ee5ab009af3e0570573c24245
2018-01-24 15:19:55 -08:00
Lukasz Wesolowski
29a4c942fe Add support for multi-device batch normalization through an option to data_parallel_model
Summary: Stage 3 in stack of diffs for supporting multi-device batch normalization. Adds input parameter to data_parallel_model to enable multi-device batch normalization. Depends on D6699258.

Reviewed By: pietern

Differential Revision: D6700387

fbshipit-source-id: 24ed62915483fa4da9b1760eec0c1ab9a64b94f8
2018-01-24 13:24:06 -08:00
Lukasz Wesolowski
9414072159 Add operators to support batch normalization across multiple devices on the same node
Summary: This is the first in a series of diffs to enable batch normalization across multiple devices on the same node with data parallel model. The diff contains the ops for computing the per-channel statistics required to obtain the mean and variance across multiple devices on the same node on the forward pass, and the gradient of the bias and scale during backpropagation. The actual modifications to SpatialBN and SpatialBNGradient to make use of these results will be in a separate diff.
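
A numpy sketch of why per-channel sum and sum-of-squares are the right quantities to ship between devices: they combine by simple addition, and the global mean and variance fall out afterwards (illustrative only):

```
import numpy as np

def combine_channel_stats(sums, sumsqs, counts):
    # sums/sumsqs: one per-channel array per device; counts: the number of
    # elements (N*H*W) each device reduced over.
    total = float(np.sum(counts))
    mean = np.sum(sums, axis=0) / total
    var = np.sum(sumsqs, axis=0) / total - mean ** 2
    return mean, var
```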

Reviewed By: rbgirshick

Differential Revision: D6697336

fbshipit-source-id: 0de2750fe7e851795f238d9f625aeb4d74023dc2
2018-01-24 13:24:04 -08:00
Pieter Noordhuis
7a232aae49 Add random seed to NGramFromCategorical test
Summary: TSIA

Reviewed By: Yangqing, Maratyszcza, dzhulgakov

Differential Revision: D6797213

fbshipit-source-id: e1132229cda09d1fbde63686aaec81b995989c03
2018-01-24 13:05:28 -08:00
Xiaolong Wang
29c7c682d8 add NGramFromCategorical Op
Summary: as titled

Differential Revision: D6783763

fbshipit-source-id: 78280cf15c2cdc3c308562d3f27a81b61ef8d662
2018-01-23 15:08:25 -08:00
Xue Feng
0e9b0cf779 add error msg in fc input_record
Summary: as titled

Reviewed By: xianjiec

Differential Revision: D6787879

fbshipit-source-id: 4bbdd11455480b25fa18121fa4527a9f0a03addc
2018-01-23 14:48:15 -08:00
Anders Papitto
0aa1a6387e Add a seed to the gru unit test
Summary:
as it calls np.random and sometimes fails unreproducibly
Closes https://github.com/caffe2/caffe2/pull/1779

Reviewed By: pietern

Differential Revision: D6779802

Pulled By: anderspapitto

fbshipit-source-id: 2ad069f8a15f70a8110b1a6bdb06f81577c53ad4
2018-01-23 13:47:43 -08:00
Xianjie Chen
76a141f016 add error msg in get_key
Summary: as title

Differential Revision: D6782896

fbshipit-source-id: bd29f6d085e56f51deb4bf6ad81771787fd85a5a
2018-01-23 11:04:05 -08:00
Dániel Simig
2dd79eb53a Visualize distribution of activation functions
Summary:
This is a first attempt at completing bootcamp task T24449916. This diff contains 3 major changes:
1) Change LayerModelHelper to allow exposing the output and parameters of any layer to metrics
2) Add a runner that allows metrics to draw arbitrary plots to a matplotlib axes object
3) Implement a metric that aggregates distributions of values in a blob over training, and try it out in a notebook

Reviewed By: kennyhorror

Differential Revision: D6671273

fbshipit-source-id: b8961837395e89c957edbf5c7c862bdb845ccf4b
2018-01-23 10:36:40 -08:00
Lin Yang
8e0177255e Test for PositionWeighted
Summary: add Test for SparseLookup with PositionWeighted.

Reviewed By: kennyhorror

Differential Revision: D6771612

fbshipit-source-id: b4b3bfd514f366f579b4192643330ae73843d4f9
2018-01-22 19:20:46 -08:00
Viswanath Sivakumar
231d6f7b09 Add SqueezeOp in MKLDNN
Summary:
SqueezeOp support to drop dims of size 1. MKLMemory now supports Reshape()
if the buffer is in plain layout, in which case just the dims and layout are
modified, similar to caffe2::Tensor. SqueezeOp takes care of converting the
input to plain layout if needed via an intermediate buffer before calling
Reshape().

Differential Revision: D6735656

fbshipit-source-id: 953309498370e1b8986e8c593bc6963f38036255
2018-01-22 18:39:42 -08:00
Wei Zhang
1d4e996b87 Separate parameter downloading tasks from training tasks and run them in a different group
Summary:
At the end of distributed training, trainer needs to download the parameters back from parameter servers for saving the model. Currently, this parameter downloading happens at the end of job's epoch task group, which creates several problems when checkpointing is enabled for distributed training:

1. When checkpointing is enabled, we run multiple training epochs. At the end of each epoch, the model download tasks will run to collect parameters, but we won't save the model until the true end of training, so there is a big waste of resources.
2. After trainer0 downloads the parameters, these parameters take a lot of memory, so trainer0 can easily run out of memory in the next epoch of training.

Our solution is to insert a parameter download task group between the job's training epoch_group and the job's exit_group.

Reviewed By: azzolini

Differential Revision: D6765393

fbshipit-source-id: 5a4f556fc3c1cd7834a7c406a3c0de3fccd50c49
2018-01-22 14:04:12 -08:00
Pieter Noordhuis
d618c05174 Increase lower bound of values for values in div test
Summary:
This should translate to a 1% error margin; the gradient checker uses a 0.5% threshold.
Closes https://github.com/caffe2/caffe2/pull/1766

Differential Revision: D6774077

Pulled By: pietern

fbshipit-source-id: f97c7ffb2ef34fdd71d69320a7fdcf4a6a457715
2018-01-22 09:06:12 -08:00
Viswanath Sivakumar
b5d513b1f9 Add op in MKLDNN
Summary:
Just redirects to MKLSumOp. Doesn't support broadcast though since dnnSumCreate
expects identical dims.

Differential Revision: D6729788

fbshipit-source-id: 3e189465ad9d026bec4954648562ffe4e67fc393
2018-01-21 08:21:43 -08:00
James Cross
91066559a8 truthy check for empty string in NameScope()
Summary:
As in name. LATTE translation team moving some code from Python 2 to 3 uncovered a case where comparison between unicode and str types leads NameScope('') to prepend a separator to the beginning of blob names. This fixes it.

Thank you so much to dzhulgakov for tracking down the cause of this so quickly!
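
The gist of the fix as a sketch (helper name illustrative): test truthiness rather than comparing against a particular string type.

```
def scoped_name(name, prefix, sep='/'):
    # bad:  if prefix != '':   (type-sensitive under Python 2 unicode/str)
    # good: if prefix:         (an empty prefix adds no separator)
    return prefix + sep + name if prefix else name
```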

Reviewed By: dzhulgakov

Differential Revision: D6766866

fbshipit-source-id: fbe46cff581f425ba10e8668400915ea40baab94
2018-01-19 21:34:09 -08:00
Ilia Cherniavskii
4ce4bc5c7f Fix occasional test timeouts
Summary: Make test less computationally expensive

Reviewed By: Yangqing, dzhulgakov

Differential Revision: D6766236

fbshipit-source-id: 59e51faa1331d804b11da9f7237ee9ce0cb27df8
2018-01-19 20:08:58 -08:00
Yangqing Jia
ced2c7e2b2 Remove Set/GetDefaultGPUID and move to use current gpu id instead.
Summary:
Reason for this change:

(1) Setting/Getting default gpu id doesn't seem to be used at all.
(2) It actually is confusing compared to the CUDA_VISIBLE_DEVICES options etc.
(3) When setting cuda_gpu_id=-1 in the CUDAContext arg, it used to use the
default gpu id but probably we should use the current gpu - so that the caller
will be able to control the device placement.

One use case is for TensorRT - if we have a custom callback layer, then it would
be easier for TRT or whatever caller to set the running device.

Reviewed By: dzhulgakov

Differential Revision: D6740357

fbshipit-source-id: 2ea710e434b10220d5a198e31c93847304636863
2018-01-19 18:03:21 -08:00
Peter Goldsborough
cded9683ad Implement fused 8bit rowwise sparse lengths reductions
Summary: Building on D6710785 (float <-> fused_8bit_rowwise conversions) and D6710843 (`FusedEmbeddingLookup`), this diff implements the new reduction operations for the fused 8-bit rowwise storage. I mostly followed the [old 8-bit quantized code](diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/lengths_reducer_rowwise_8bit_ops.h) and [full-precision code](diffusion/FBS/browse/master/fbcode/caffe2/caffe2/operators/lengths_reducer_ops.h).
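
A numpy reference (a sketch, not the perfkernel) for the fused reduction, assuming the row layout from the conversion diff below — a float32 scale and bias stored in the first 8 bytes of each uint8 row:

```
import numpy as np

def fused8bit_sparse_lengths_sum(fused, indices, lengths):
    out, pos = [], 0
    for n in lengths:
        acc = 0.0
        for idx in indices[pos:pos + n]:
            row = fused[idx]
            scale, bias = row[:8].view(np.float32)  # per-row quantization params
            acc = acc + row[8:].astype(np.float32) * scale + bias
        out.append(acc)
        pos += n
    return np.stack(out)
```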

Reviewed By: kennyhorror

Differential Revision: D6710844

fbshipit-source-id: b9e85db7437bd32dd44d01733c3749f35c00b06e
2018-01-19 15:44:35 -08:00
Peter Goldsborough
8dc0702af5 Add float32 <-> fused_rowwise_8bit conversion Caffe2 operators
Summary: This first diff adds the conversion operators that go from float to our fused 8-bit rowwise quantized storage and back again. For now I've put the scale and bias in front of each row because it makes the pointer arithmetic nicer here and in the EmbeddingLookup perfkernel. If benchmarks or other reasons point out that this is a bad idea we can change it easily.
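
A numpy sketch of the rowwise conversion described above (illustrative, not the operator's kernel), with the float32 scale and bias packed in front of each row:

```
import numpy as np

def float_to_fused8bit_rowwise(data):
    rows = []
    for row in data.astype(np.float32):
        minv, maxv = row.min(), row.max()
        scale = np.float32((maxv - minv) / 255.0 or 1.0)  # avoid zero scale
        q = np.round((row - minv) / scale).astype(np.uint8)
        header = np.array([scale, minv], np.float32).view(np.uint8)
        rows.append(np.concatenate([header, q]))
    return np.stack(rows)

def fused8bit_rowwise_to_float(fused):
    sb = fused[:, :8].copy().view(np.float32)  # (rows, 2): scale, bias
    return fused[:, 8:].astype(np.float32) * sb[:, :1] + sb[:, 1:]
```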

Reviewed By: kennyhorror

Differential Revision: D6710785

fbshipit-source-id: 086ab91c12d3b472564a06eff6329be6cb9e680e
2018-01-19 15:44:33 -08:00
Heng Wang
c052eb6bbb update the video input op in caffe2
Summary:
This updates the video input op in Caffe2 so that it is current.
It adds support for:
1. optical flow and early fusion
2. different ways of sampling clips from a video
3. different ways of resizing the input video

Reviewed By: dutran

Differential Revision: D6752788

fbshipit-source-id: 0cbd4d4bbbe97b0ada4cba7a55adc91a7af60d5f
2018-01-19 09:52:25 -08:00
Lin Yang
4ea6e6a556 testSparseLookup
Summary: add basic test for SparseLookup

Reviewed By: kennyhorror

Differential Revision: D6749915

fbshipit-source-id: f97af785e4f89f36788a992843066fd1ec2b75a9
2018-01-19 09:27:20 -08:00
Orion Reblitz-Richardson
b28d5a3586 Build doxygen docs with cmake and fix catalog generation
Summary:
This updates https://github.com/caffe2/caffe2/pull/1096/ to build doxygen docs with cmake and fixes operator catalog generation. See the new README.md for details, but you can run

```
mkdir build && cd build
cmake -DBUILD_DOCS=ON .. && make
```
and

```
python caffe2/python/docs/github.py ~/c2docs/_docs/operators-catalogue.md
```

to generate docs.

There was one weird issue in `generator.py` that we sometimes receive tuples and sometimes objects. I handled this just by testing `isinstance`, but we might want to be more principled in the future.
Closes https://github.com/caffe2/caffe2/pull/1758

Reviewed By: pietern

Differential Revision: D6752127

Pulled By: orionr

fbshipit-source-id: 9ba9ad8efc920b27a57327f8a7d3050f3650d4ce
2018-01-18 18:47:59 -08:00
Anders Papitto
e3e6680b48 Add ElmanCell and ElmanRNN
Summary: Closes https://github.com/caffe2/caffe2/pull/1742

Reviewed By: dzhulgakov

Differential Revision: D6706809

Pulled By: anderspapitto

fbshipit-source-id: 15a05786a26aeb719ea4377f4dbbb62738d9e697
2018-01-18 12:14:02 -08:00
Anirban Roychowdhury
158e001238 Checking for positive epoch size before running epoch
Summary: Checking for positive epoch size before running epoch

Reviewed By: pietern

Differential Revision: D6738966

fbshipit-source-id: 64e1fb461d784786b20a316999e4c037787f3a14
2018-01-18 11:48:35 -08:00
Frank Jiang
6f0bb28afb Stop running RowWiseSparseAdam test on GPU
Reviewed By: pietern

Differential Revision: D6739194

fbshipit-source-id: 0892cdc6a575a84147f86984c67e7b4bf605a197
2018-01-17 15:05:21 -08:00
Frank Jiang
61356cbadc RowWiseSparseAdam operator
Summary: Added row-wise functionality for SparseAdam, which saves roughly 2/3 of the memory usage by keeping only one first- and second-moment term for each row of the parameter tensor rather than one per individual parameter (sketch below).
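
A numpy sketch taking the summary at face value — one scalar first and second moment per row — applied to one looked-up row (illustrative only; the operator's kernel defines the exact update):

```
import numpy as np

def rowwise_adam_row(w_row, g_row, m1, m2, lr,
                     beta1=0.9, beta2=0.999, eps=1e-8):
    # m1 and m2 are single scalars shared by the whole row, which is what
    # cuts the optimizer state from two full tensors to two vectors.
    m1 = beta1 * m1 + (1 - beta1) * np.mean(g_row)
    m2 = beta2 * m2 + (1 - beta2) * np.mean(g_row ** 2)
    return w_row - lr * m1 / (np.sqrt(m2) + eps), m1, m2
```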

Differential Revision: D6679342

fbshipit-source-id: ce6fb27e35ce41a890c66f6089cd2748d10e7a44
2018-01-16 19:39:31 -08:00