Summary:
Two implementations of max pool reducers had different semantics in the case of equal indices. This matters little in real cases, but it breaks tests. Choosing the behavior of LengthMax over SortedSegmentRangeMax, as the former is more widely used.
Also some minor tweaks for the test code.
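A minimal numpy sketch of the chosen tie-breaking behavior (the function name `lengths_max` and its exact signature are illustrative, not the operator's real implementation): `np.argmax` returns the first index on ties, so the gradient flows to the first occurrence of the maximum within a segment.

```python
import numpy as np

def lengths_max(data, lengths):
    """Reference lengths-based max reducer. Segments of `data` are given by
    `lengths`; each segment reduces to its maximum. On ties, np.argmax
    picks the FIRST occurrence, which is the behavior assumed here."""
    out = np.zeros(len(lengths), dtype=data.dtype)
    arg = np.zeros(len(lengths), dtype=np.int64)  # index that receives the gradient
    offset = 0
    for i, l in enumerate(lengths):
        seg = data[offset:offset + l]
        arg[i] = offset + np.argmax(seg)  # first max wins on ties
        out[i] = data[arg[i]]
        offset += l
    return out, arg

data = np.array([3.0, 7.0, 7.0, 1.0, 5.0])
lengths = np.array([3, 2])
out, arg = lengths_max(data, lengths)
# the tie between indices 1 and 2 resolves to index 1
```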
Reviewed By: Yangqing
Differential Revision: D5870386
fbshipit-source-id: 6488cbd5cacaf595ffc07c44084730dd44b3f9dd
Summary: Ported existing ad hoc test code to use python unittests. Small tweak to caffe2.python.hypothesis_test_util.
Reviewed By: kmatzen
Differential Revision: D5837295
fbshipit-source-id: daa2360db3c18c7d4bda7785e7a0b9175f5858af
Summary:
Adding a range operator in the spirit of np.arange. It is an important building block for a lot of manipulation functions.
It accepts parameters with the same meaning and in the same order as python's range or np.arange (e.g. `(stop)`, `(start, stop)`, or `(start, stop, step)`).
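A small sketch of the intended argument semantics (the name `range_op` is just a stand-in for the operator):

```python
import numpy as np

def range_op(*args):
    """Reference for a range operator mirroring python's range / np.arange:
    accepts (stop), (start, stop), or (start, stop, step)."""
    if len(args) == 1:
        start, stop, step = 0, args[0], 1
    elif len(args) == 2:
        (start, stop), step = args, 1
    elif len(args) == 3:
        start, stop, step = args
    else:
        raise ValueError("expected 1 to 3 arguments")
    return np.arange(start, stop, step)

range_op(5)        # -> [0 1 2 3 4]
range_op(2, 8, 2)  # -> [2 4 6]
```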
Differential Revision: D5616861
fbshipit-source-id: 02622b8bd85ebca125cc881c06fae5b54b7c602a
Summary:
Need it for some reference comparisons for c2isl.
There's also an argument that it might be faster on GPU with int32. That doesn't seem to be the case now, but I haven't tested with Jeff's changes yet.
Reviewed By: kennyhorror
Differential Revision: D5405482
fbshipit-source-id: dc1a983dce5f06f1111c5634ec475647c94848cc
Summary:
1. It was easy to pass grad_reference, which was then silently ignored due to a missing output_to_grad.
2. threshold was not passed to the gradient checking logic.
Reviewed By: dzhulgakov
Differential Revision: D5425226
fbshipit-source-id: 2eb41f2601d5e356f7872e57724d08ab2e742329
Summary:
Eliminates failures on overloaded machines, which could only
run a few examples before being timed out.
Reviewed By: tomdz
Differential Revision: D5349555
fbshipit-source-id: 89d1db063f58c72656b37157225a586c9e3f24bc
Summary: This is needed so that we can create blobs that are not numpy arrays, e.g., creating a mutex with the `CreateMutex` op.
Reviewed By: chocjy
Differential Revision: D5303742
fbshipit-source-id: f83cbf67c658a234c1e4a9a114ad943a4e360598
Summary: This would allow us to pin the size of the lengths tensor to the batch size. I'll use this in a follow-up diff.
Reviewed By: kennyhorror
Differential Revision: D4906634
fbshipit-source-id: 8d3d151f33fd99547d9940e7c663779810283eb6
Summary: ReversePackedSegs operator for CUDA. The input "lengths" (static integers) is required to be in CPU memory.
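A numpy reference sketch of the operator's semantics as I understand them (the layout `[max_time, batch, dim]` and the padding behavior are assumptions of this sketch, not taken from the diff): only the valid prefix of each sequence is reversed, and padding stays in place.

```python
import numpy as np

def reverse_packed_segs(data, lengths):
    """Reference ReversePackedSegs: `data` is assumed [max_time, batch, dim];
    `lengths[b]` gives the valid length of batch element b. The valid
    prefix of each sequence is reversed; padding is left untouched."""
    out = data.copy()
    for b, l in enumerate(lengths):
        out[:l, b] = data[:l, b][::-1]
    return out

data = np.arange(8, dtype=np.float32).reshape(4, 2, 1)  # T=4, B=2, D=1
lengths = np.array([4, 2])
rev = reverse_packed_segs(data, lengths)
```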
Differential Revision: D4661281
fbshipit-source-id: c800c316c34015ba8e732dcbcaa8c4edaffdfeab
Summary: Renamed ElementwisePower to Pow for better discoverability. Added a CUDA version and the gradient, plus tests.
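A numpy reference for the forward and the gradient being added (function names here are illustrative; the gradient w.r.t. x of `x ** p` is the standard `dY * p * x ** (p - 1)`):

```python
import numpy as np

def pow_forward(x, exponent):
    """Reference Pow: elementwise x ** exponent."""
    return np.power(x, exponent)

def pow_gradient(x, exponent, dy):
    """Gradient w.r.t. x: dY * exponent * x ** (exponent - 1)."""
    return dy * exponent * np.power(x, exponent - 1)

x = np.array([1.0, 2.0, 3.0])
y = pow_forward(x, 2.0)                      # [1., 4., 9.]
dx = pow_gradient(x, 2.0, np.ones_like(x))   # [2., 4., 6.]
```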
Reviewed By: kennyhorror
Differential Revision: D4665550
fbshipit-source-id: dd33d8ad3917d71504e363ab397af50d38a63b1f
Summary:
Verify shape and type inference in op unittests via assertReferenceChecks(). For now, catch exceptions from InferShapeAndTypes() and log a warning.
TBD: Determine whether there are existing inference/output mismatches, and if so, change the test asserts to warnings until they are resolved.
Differential Revision: D4639343
fbshipit-source-id: 605e72f53198e1a100fe7ba18b72c34c9ddbb727
Summary:
Implementation of ##LSTMWithAttention##
Still TBD:
1. There are problems with backpropagation, because the gradient is not implemented for ops with broadcasting
2. I need to make initial_recurrent_state have shape [dim] rather than [1, batch_size, dim], so one doesn't need to provide batch_size to LSTMWithAttention
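For context on point 1: the gradient of a broadcast elementwise op must sum dY over the broadcast dimensions to recover the smaller input's shape. A numpy sketch of that reduction (the helper name `sum_grad_to_shape` is hypothetical):

```python
import numpy as np

def sum_grad_to_shape(dy, x_shape):
    """Reduce an upstream gradient `dy` back to the shape of an input `x`
    that was broadcast in the forward pass: sum over leading axes added by
    broadcasting, then over axes where x had size 1."""
    while dy.ndim > len(x_shape):
        dy = dy.sum(axis=0)
    for axis, size in enumerate(x_shape):
        if size == 1 and dy.shape[axis] > 1:
            dy = dy.sum(axis=axis, keepdims=True)
    return dy

x = np.ones(3)        # shape (3,)
b = np.ones((2, 3))   # forward: x + b broadcasts x to (2, 3)
dy = np.ones((2, 3))
dx = sum_grad_to_shape(dy, x.shape)  # shape (3,), each element sums 2 rows
```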
Differential Revision: D4298735
fbshipit-source-id: 8903fcff4d6a66647ee6d45a6ef28803fc3091e5
Summary: Another part of making DPER compatible with half-floats. This diff adds support for fp16 to the segment reduction operators used in DPER.
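A numpy sketch of a lengths-based segment sum over fp16 inputs. Accumulating in fp32 and casting back is an assumption of this sketch (a common way to keep half-float reductions accurate), not necessarily what the operator does internally:

```python
import numpy as np

def lengths_sum(data, lengths):
    """Reference lengths-based segment sum. For fp16 inputs, this sketch
    accumulates in fp32 and casts the result back to the input dtype."""
    out = np.zeros(len(lengths), dtype=np.float32)
    offset = 0
    for i, l in enumerate(lengths):
        out[i] = data[offset:offset + l].astype(np.float32).sum()
        offset += l
    return out.astype(data.dtype)

data = np.array([0.5, 1.5, 2.0, 3.0], dtype=np.float16)
lengths = np.array([2, 2])
sums = lengths_sum(data, lengths)  # fp16 result per segment
```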
Reviewed By: dzhulgakov
Differential Revision: D4587560
fbshipit-source-id: 0ae10648a7286a820bffaee802464dd9464584bc
Summary: Looks like we don't do a good job with initial recurrent input gradients yet. Here is a partial fix; the gradient still doesn't check, but the shape is correct now.
Reviewed By: salexspb
Differential Revision: D4475447
fbshipit-source-id: 280f1f59f19e487fd0dce0d440609c50ddce294a
Summary: Fixes segfaults that occur in Eigen and im2col/sgemm backends.
Reviewed By: Yangqing
Differential Revision: D4451772
fbshipit-source-id: 3cf21e5afb2fe300db4228933a82063db5f7091f
Summary:
Let's have a test for this so we don't break existing use cases
while iterating on RecurrentOp's code.
Reviewed By: urikz
Differential Revision: D4456404
fbshipit-source-id: 79f2b88c1eed16106adf5b793b4c74441c7146c6