Summary:
This was a nasty one to track down. The error message was:
```
E0323 14:47:46.138900 2870 context_gpu.h:126] Encountered CUDA error: an illegal memory access was encountered
F0323 14:47:46.139143 2870 operator.h:176] Computation on device returned error in operator
input: "x_gpu_2" output: "loss" name: "" type: "AveragedLoss" device_option { device_type: 1 cuda_gpu_id: 1 }
```
Closes https://github.com/caffe2/caffe2/pull/220
Differential Revision: D4771086
Pulled By: Yangqing
fbshipit-source-id: f2d0f39f1647c84d97d9745f8a0305a389bfbc41
Summary: This didn't work for a reason specified in the comments. Also some cleanup in the unit tests: inference now uses a custom workspace to run the cell net on.
Reviewed By: urikz
Differential Revision: D4742670
fbshipit-source-id: 04165c029fddec5ae31b20b207faf06d2fa20816
Summary: D4734505 part 2. Remove more instances of the batch_size parameter
Reviewed By: urikz
Differential Revision: D4736906
fbshipit-source-id: fc9d374e9308017d61c427890364c5ab9cec2edf
Summary: Reshape based on tensor shapes in the graph rather than based on a passed-in batch_size parameter
Reviewed By: urikz
Differential Revision: D4734505
fbshipit-source-id: d9c23d85be84f61124106e752ef2b4f6945e2a07
Summary: We don't use this one any more, except in a few tests.
Reviewed By: urikz
Differential Revision: D4731401
fbshipit-source-id: c5c28b7594e3251f501fc28455dfc9bd2093a836
Summary: Adding synchronous optimization on GPUs to the translation training pipeline via data_parallel_model.Parallelize_GPU, which needs to be updated so there is some way of performing sparse parameter updates (e.g., on embedding tables), whether on GPU or CPU.
Reviewed By: urikz
Differential Revision: D4631914
fbshipit-source-id: 9cdd655f7dbda3f9b2733d459228b3e097892441
Summary: This adds a nearest neighbor interpolation resizing operator to caffe2. CPU only, NCHW only, no gradients. Also adds torch2caffe support. This is probably not optimal in terms of performance, but it works.
Reviewed By: ajtulloch
Differential Revision: D4724244
fbshipit-source-id: b8295061141fb513da84acf91fdfd67264119059
Summary: Reshape based on tensor shapes in the graph rather than based on a passed-in batch_size parameter
Reviewed By: urikz
Differential Revision: D4702086
fbshipit-source-id: c4c1d8425cd36c1e86695918eaba2667c27e9601
Summary:
/cc akyrola
I basically just copied all the `ShapeCall` stuff as `TypeCall`. Is there a better way?
Closes https://github.com/caffe2/caffe2/pull/187
Differential Revision: D4699312
Pulled By: Yangqing
fbshipit-source-id: 92f736ffe4127b00b5821acb1eb359771975fdd7
Summary:
These are all essentially no-op changes which allow for nose-style (or pytest-style) test discovery.
With this patch, you can use any of these methods to discover and run tests under `caffe2/python`:
```
python -m unittest discover -p '*test*.py' caffe2/python/
python -m nose caffe2/python/
python -m pytest caffe2/python/
```
Future work:
* Get all of the tests to pass
* Some seem to be testing operations which don't have GPU implementations
* I get a segfault unless I set `CUDA_VISIBLE_DEVICES=0`
* Some tests are flaky
* Allow test discovery throughout the whole project (e.g. the `experiments/` dir)
Closes https://github.com/caffe2/caffe2/pull/199
Reviewed By: pietern
Differential Revision: D4704504
Pulled By: Yangqing
fbshipit-source-id: 8f5687ec9c8aa873dfaff30dbf44272bc38a206b
Summary:
Implement ReduceBackSum & ReduceBackMean with gradients for CPU & GPU contexts.
The reduction happens over the last dimensions; for example, if the input is an
M x N matrix, ReduceBackSum produces a vector of dim M x 1 containing the
row-wise sums.
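A minimal numpy sketch of the intended semantics (this illustrates the reduction, not the operator implementation itself):
```python
import numpy as np

# ReduceBackSum/ReduceBackMean reduce over the trailing dimension:
# an M x N input yields an M-element vector of row-wise sums/means.
x = np.arange(6, dtype=np.float32).reshape(2, 3)  # M=2, N=3
reduce_back_sum = x.sum(axis=-1)    # -> array([ 3., 12.], dtype=float32)
reduce_back_mean = x.mean(axis=-1)  # -> array([1., 4.], dtype=float32)
```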
Differential Revision: D4689768
fbshipit-source-id: 5b0482d4341867ecf23526dc6c4d544420e7d8f7
Summary: Add shape inference for Reshape. Because the output shape cannot be inferred when it depends on runtime tensor data, set `out[0].set_unknown_shape(true)` if no `shape` argument is given.
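For reference, a sketch of the two Reshape forms in the Python API (blob names are illustrative): with a static `shape` argument the output shape is inferable, while with the shape supplied as a runtime input it is not.
```python
from caffe2.python import core

# Static shape argument: the output shape can be inferred.
op_static = core.CreateOperator(
    "Reshape", ["X"], ["Y", "old_shape"], shape=[2, 3])

# Shape passed as a runtime input: inference must mark Y's shape unknown.
op_runtime = core.CreateOperator(
    "Reshape", ["X", "new_shape"], ["Y", "old_shape"])
```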
Differential Revision: D4671125
fbshipit-source-id: 685a9198f9b08e3336014c792f20051b381d8619
Summary: Following krp's suggestion, check if the shape parameter is empty.
Reviewed By: dzhulgakov
Differential Revision: D4686698
fbshipit-source-id: 3f9fb1e3215dd2a4a726442531201eeb18224bc6
Summary:
Created a new function with the specifics of the MI LSTM implementation in caffe2.
See https://arxiv.org/pdf/1606.06630.pdf for details.
See D4478877 for the implementation of the same in tensorflow
Reviewed By: jhcross
Differential Revision: D4669882
fbshipit-source-id: 095bbcf187dbdac2cd79558ff0c8f9f67d8af639
Summary: ReversePackedSegs operator for CUDA. The "lengths" input (static integers) is required to be in CPU memory.
Differential Revision: D4661281
fbshipit-source-id: c800c316c34015ba8e732dcbcaa8c4edaffdfeab
Summary: Super rough implementation of recurrent attention. Planning to factor out the common code between the two functions, as well as between train and eval. I want to get this out and get eyes on it sooner rather than later.
Differential Revision: D4647837
fbshipit-source-id: 54bc4e8ed0df6f04c86c425926decbe89f73b068
Summary: Add gradient support for Caffe2 operator SumElements (for use in Translation RNN training pipeline).
Differential Revision: D4669036
fbshipit-source-id: 502760a2a624b20b3241e83a2f208f450b6ff36f
Summary: Renamed ElementwisePower to Pow for better discoverability. Added CUDA version and Gradient + tests.
Reviewed By: kennyhorror
Differential Revision: D4665550
fbshipit-source-id: dd33d8ad3917d71504e363ab397af50d38a63b1f
Summary: Add a simple op to sum the elements, with optional averaging. This is basically a copy of AveragedLossOp, which we should alias to this. Maybe develop this towards a generic norm op later.
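A minimal usage sketch (assuming the averaging is exposed as an `average` argument, per the description above):
```python
from caffe2.python import core

# Sum of all elements of X; with average=True the op returns the mean instead.
sum_op = core.CreateOperator("SumElements", ["X"], ["total"])
avg_op = core.CreateOperator("SumElements", ["X"], ["mean"], average=True)
```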
Reviewed By: jhcross
Differential Revision: D4664591
fbshipit-source-id: 0e0c0efe9e415e2ad2feecfa42b03db2c83bee70
Summary: Due to popular demand, added an op to compute element-wise square + gradient for it (just for the fun of it).
Reviewed By: Yangqing
Differential Revision: D4664797
fbshipit-source-id: 0a29c7c249fdc72f51412bebd6ae352a7801cf05
Summary: Simple elementwise Max implementation for CUDA. Given N inputs, it will do N-1 pairwise maxes. I am not sure if it would be much better to iterate through all the inputs in the kernel, since this has better locality. We can also optimize later.
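As a sketch, the N-1 pairwise reduction described above is equivalent to folding an element-wise max over the inputs (numpy used for illustration):
```python
import numpy as np

# N inputs reduced by N-1 pairwise maxes; the result equals
# np.max(np.stack(inputs), axis=0).
inputs = [np.random.randn(4, 5).astype(np.float32) for _ in range(3)]
out = inputs[0]
for x in inputs[1:]:          # N-1 pairwise maxes
    out = np.maximum(out, x)
```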
Reviewed By: Yangqing
Differential Revision: D4659953
fbshipit-source-id: 3a23b7fb3dbdf1d43bf3134ece03af4a791844dd
Summary:
To avoid the NumPy warning: "using a non-integer number instead of an integer will result in an error in the future".
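For context, this warning typically comes from indexing or sizing with a float; a sketch of the usual cause and fix (hypothetical example, not the actual patched code):
```python
import numpy as np

x = np.arange(10)
# x[: len(x) / 2]        # float index under true division triggers the
#                        # deprecation warning quoted above
half = x[: len(x) // 2]  # integer (floor) division avoids it
```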
Closes https://github.com/caffe2/caffe2/pull/64
Differential Revision: D4658348
Pulled By: Yangqing
fbshipit-source-id: 3a1b33cbb27849bc167b08147d078e8d487567f4
Summary:
The existing code uses vector<T> to store the given tensor and then copies it to the output.
If T = bool, vector<bool> stores the data as bits, so the copy does not work.
We use TensorCPU for storage instead.
Also add a unit test.
Reviewed By: kennyhorror
Differential Revision: D4622325
fbshipit-source-id: 95c27b5d1cfbc836d2419d01cacde5a3172f4d7e
Summary: The shape inference did not check for spatial mode.
Reviewed By: andrewwdye
Differential Revision: D4638218
fbshipit-source-id: f15419738587013dea39e04a3da086890938c4e2
Summary:
A bit too much stuff in one diff, so sorry:
1. Add inference for gradient types by using the fact that x_grad is the gradient of x and must be of the same shape. Using string matching for this is kind of awkward, but in addition I rely on the operator actually being a gradient op.
2. dzhulgakov was right, scalar shape is () and not (1). Sorry, my claim earlier was #fakenews.
3. Added inference functions for MakeTwoClass, MomentumSGDUpdate and Cross entropy ops.
Reviewed By: dzhulgakov
Differential Revision: D4569758
fbshipit-source-id: 0db13f33819777fdddefe21d4b1ebf906fcaf98c
Summary:
Add cuDNN v6 support, including testing support for dilated convolution.
Add a check to ensure that the versions of cuDNN used to compile and run Caffe2 are compatible.
Closes https://github.com/caffe2/caffe2/pull/85
Reviewed By: bwasti
Differential Revision: D4387690
Pulled By: Yangqing
fbshipit-source-id: 312960134398dd4afe6ee0c01cdc160046c904e8
Summary:
previously the fp16 type was supported only in the SparseLengthsSum operator; now it
works in all the other segment operators as well.
Reviewed By: dzhulgakov
Differential Revision: D4624312
fbshipit-source-id: c9d72110e3762167270bb088405eaf9c56e88493
Summary: Inference function for the Im2ColOp: caffe2/caffe2/operators/im2col_op.cc.
Differential Revision: D4608663
fbshipit-source-id: d26ffb403c2acb7a5ead5f58f044ee3340c8311a
Summary:
Reduce the test input size for the instance norm gradient check. The larger size is currently timing out on stress tests, e.g.:
failed: Timeout: Ran out of time before finding a satisfying example for test_instance_norm_gradients. Only found 2 examples in 125.39s.
Reviewed By: Yangqing
Differential Revision: D4608828
fbshipit-source-id: ce17a3ad28752d808efcbf79f1ea4238e63fb005
Summary: curandGenerateNormal can only generate arrays whose lengths are a multiple of 2. The MSRAFill and GaussianFill operators use the RandGaussian utility method, which in turn uses curandGenerateNormal. This is a test which runs the operators on both devices to generate odd-sized random arrays.
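A hedged sketch of such a test in the Caffe2 Python API (names and arguments are the commonly used ones; the actual test may differ, and the GPU device option is omitted here):
```python
from caffe2.python import core, workspace

# An odd-sized output exercises the padding path around curandGenerateNormal.
op = core.CreateOperator(
    "GaussianFill", [], ["out"],
    shape=[7],  # odd number of elements
    mean=0.0, std=1.0,
)
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob("out").shape)  # (7,)
```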
Differential Revision: D4602819
fbshipit-source-id: e65f5c731e925886cfa14afff482f7053bd020a0
Summary:
Implementation of `LSTMWithAttention`
Still TBD:
1. There are problems with backpropagation, because the gradient is not implemented for ops with broadcasting
2. I need to make initial_recurrent_state be of shape [dim] rather than [1, batch_size, dim], so one doesn't need to provide batch_size to LSTMWithAttention
Differential Revision: D4298735
fbshipit-source-id: 8903fcff4d6a66647ee6d45a6ef28803fc3091e5
Summary:
In-place is a ~30% speedup, but needs a change to torch2caffe
or a graph rewrite on the client.
Differential Revision: D4577582
fbshipit-source-id: c31bf8ba97f4fa4cedf355cf2475eb7bab48b304
Summary: Another part of making DPER compatible with half-floats. This diff adds support for fp16 to the segment reduction operators used in DPER.
Reviewed By: dzhulgakov
Differential Revision: D4587560
fbshipit-source-id: 0ae10648a7286a820bffaee802464dd9464584bc
Summary: This fixes a bug in the Eigen implementation that calculates cross-entropy.
Reviewed By: salexspb
Differential Revision: D4582078
fbshipit-source-id: 4c92047e9dbbe219fcbef618a45c584c2fbfaad5
Summary:
- Key-value store for counters.
- Counters are updated via macros that also export USTD probes.
- Counter values can be exported using caffe2 operators.
- Snapshot mechanism for tracking time-window counter values.
Reviewed By: dzhulgakov, pietern
Differential Revision: D4553761
fbshipit-source-id: 25a1a91a3168dcff2159c6fba7b357d3fd3aa9bf
Summary:
Remove the use of `NextName` in the layer model helper, so that the same function returns a `model_helper` that constructs an identical `Net` when under the same NameScope.
`NextScopedBlob` should only take effect when there is a real name conflict; otherwise it returns a ScopedBlobReference.
This is critical for parameter blobs. In the long run, we need to be able to specify parameter blobs more explicitly (kennyhorror is working on this). This solution works in the short term for, e.g., two-tower sparse nn models.
Reviewed By: kennyhorror
Differential Revision: D4555423
fbshipit-source-id: 2c4b99a61392e5d51aa878f7346466a8f14be187
Summary:
Pass through the h-value recurrent output unchanged at each LSTM step beyond the valid part of a sequence (computed based on seqLengths, allowing batching of sequences of different length). This enables using the final-step output of each sequence as the output when one vector is desired for the entire sequence. Gradient also passed back unchanged.
Also made some cosmetic changes to recurrent_network_test.py (seq_lengths offset corrected, should be in [1, T] rather than [0, T-1]).
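In effect, each sequence's hidden state is held constant past its valid length; a numpy sketch of the forward semantics (illustrative only, not the operator code):
```python
import numpy as np

def passthrough_h(h, seq_lengths):
    """h: (T, batch, dim) per-step LSTM outputs; seq_lengths: (batch,) in [1, T].
    Past each sequence's valid length, the last valid h is passed through."""
    out = h.copy()
    T = h.shape[0]
    for b, length in enumerate(seq_lengths):
        for t in range(length, T):
            out[t, b] = out[length - 1, b]  # hold final valid state unchanged
    return out
```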
Reviewed By: urikz
Differential Revision: D4540307
fbshipit-source-id: 73a9f6326069d713dcb0cdc8d17869317c6dbe96
Summary: This diff adds shape inference for the SoftmaxWithLoss operator.
Differential Revision: D4565835
fbshipit-source-id: 1c2db398524c765977ec4d8a22c9b986bf9faf82
Summary:
One can find the reason why I need a gradient for CopyOp in this post: https://fb.facebook.com/groups/1405155842844877/permalink/1639683782725414/
The gradient for CopyOp is trivial when the device is the same (CPU, or the same GPU), but gets a little harder when the copy was made across two different GPUs.
I introduce a new operator, CopyOnDeviceLike, which has an additional second input. The op copies the first input to the same device as the second one. The default implementation is exactly the same as CopyOp, but I specialize it for CUDAContext.
Please let me know if I'm doing anything wrong here! This is my first caffe2 diff related to operator definitions.
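A rough usage sketch (blob names and the device option are illustrative assumptions, not taken from the diff):
```python
from caffe2.python import core
from caffe2.proto import caffe2_pb2

# Copy "x" to whichever device "y" lives on (here GPU 1); the second
# input only supplies the target device, its data is not read.
op = core.CreateOperator(
    "CopyOnDeviceLike", ["x", "y"], ["x_on_gpu1"],
    device_option=core.DeviceOption(caffe2_pb2.CUDA, 1),
)
```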
Reviewed By: Yangqing
Differential Revision: D4557258
fbshipit-source-id: 9494be589cc1e5696bbbfe25b7622aaa4c9efe4a