hip
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
quantized
Adjust bound_shape_inferencer to take 4 inputs for FCs ( #41934 )
2020-08-05 18:44:48 -07:00
rnn
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
abs_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
abs_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
abs_op.h
accumulate_op.cc
accumulate_op.cu
accumulate_op.h
refactor caffe2 operator constructors - 1/9 ( #17082 )
2019-03-04 16:04:01 -08:00
accuracy_op.cc
Tensor construction: combine Resize+mutable_data - 1/4 ( #13942 )
2018-11-19 15:33:50 -08:00
accuracy_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
accuracy_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
acos_op.cc
acos_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
acos_op.h
activation_ops_cudnn.h
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
affine_channel_op.cc
Tensor construction codemod(ResizeLike) - 3/7 ( #15122 )
2018-12-14 02:08:37 -08:00
affine_channel_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
affine_channel_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
alias_with_name.cc
move AliasWithNameOp to caffe2/operators
2019-12-17 02:39:40 -08:00
alias_with_name.cu
move AliasWithNameOp to caffe2/operators
2019-12-17 02:39:40 -08:00
alias_with_name.h
move AliasWithNameOp to caffe2/operators
2019-12-17 02:39:40 -08:00
apmeter_op.cc
Tensor construction: combine Resize+mutable_data - 1/4 ( #13942 )
2018-11-19 15:33:50 -08:00
apmeter_op.h
refactor caffe2 operator constructors - 1/9 ( #17082 )
2019-03-04 16:04:01 -08:00
arg_ops.cc
Remove many caffe2::TIndex and replace them with int64_t ( #11943 )
2018-09-22 18:11:04 -07:00
arg_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
arg_ops.h
refactor caffe2 operator constructors - 1/9 ( #17082 )
2019-03-04 16:04:01 -08:00
asin_op.cc
asin_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
asin_op.h
assert_op.cc
Fix spelling errors
2020-01-28 04:46:15 -08:00
assert_op.cu
assert_op.h
refactor caffe2 operator constructors - 1/9 ( #17082 )
2019-03-04 16:04:01 -08:00
async_net_barrier_op.cc
[DPER] Introduce barrier operation to force synchronization of threads in async execution ( #49322 )
2020-12-15 16:13:42 -08:00
async_net_barrier_op.cu
[DPER] Introduce barrier operation to force synchronization of threads in async execution ( #49322 )
2020-12-15 16:13:42 -08:00
async_net_barrier_op.h
[DPER] Introduce barrier operation to force synchronization of threads in async execution ( #49322 )
2020-12-15 16:13:42 -08:00
atan_op.cc
atan_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
atan_op.h
atomic_ops.cc
Add 64bit atomic fetch add ( #32354 )
2020-01-17 11:43:43 -08:00
batch_box_cox_op.cc
Export box_cox operator in caffe2
2020-06-17 19:28:53 -07:00
batch_box_cox_op.h
Export box_cox operator in caffe2
2020-06-17 19:28:53 -07:00
batch_bucketize_op.cc
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
batch_bucketize_op.h
refactor caffe2 operator constructors - 1/9 ( #17082 )
2019-03-04 16:04:01 -08:00
batch_gather_ops.cc
support Gather different indices for different examples in one batch ( #23813 )
2019-08-07 21:14:30 -07:00
batch_gather_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
batch_gather_ops.h
support Gather different indices for different examples in one batch ( #23813 )
2019-08-07 21:14:30 -07:00
batch_matmul_op_gpu_test.cc
Renaming size() to numel() - 1/6
2018-10-29 11:11:19 -07:00
batch_matmul_op_test.cc
Renaming size() to numel() - 1/6
2018-10-29 11:11:19 -07:00
batch_matmul_op.cc
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
batch_matmul_op.cu
RIP CUDA <9.2: circleci, aten, and caffe2 ( #36846 )
2020-05-18 13:41:05 -07:00
batch_matmul_op.h
Optimize batch mm op when broadcast the second input ( #21556 )
2019-06-09 15:28:03 -07:00
batch_moments_op.cc
Optimize reduce ops for 2d and 3d ( #9992 )
2018-08-04 13:53:58 -07:00
batch_moments_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
batch_moments_op.h
refactor caffe2 operator constructors - 10/9 ( #17659 )
2019-03-06 15:11:47 -08:00
batch_permutation_op_gpu_test.cc
Add zero input support for batch permutation op ( #39851 )
2020-06-13 21:34:24 -07:00
batch_permutation_op.cc
Add zero input support for batch permutation op ( #39851 )
2020-06-13 21:34:24 -07:00
batch_permutation_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
batch_permutation_op.h
move BatchPermutationOp to caffe2/operators
2019-12-17 14:58:27 -08:00
batch_sparse_to_dense_op.cc
feature_segmented_histogram_binning_calibration
2021-03-08 12:47:19 -08:00
batch_sparse_to_dense_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
batch_sparse_to_dense_op.h
Add cuda version for operators BatchSparseToDense and BatchDenseToSparse ( #29166 )
2019-11-05 13:06:23 -08:00
bbox_transform_op.cc
remove ops in the __caffe2 namespace ( #47318 )
2020-11-16 15:30:16 -08:00
bbox_transform_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
bisect_percentile_op.cc
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
bisect_percentile_op.h
refactor caffe2 operator constructors - 10/9 ( #17659 )
2019-03-06 15:11:47 -08:00
boolean_mask_ops.cc
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
boolean_mask_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
boolean_mask_ops.h
Adding gradient to Boolean Mask operator ( #21423 )
2019-06-06 20:48:47 -07:00
boolean_unmask_ops_test.cc
Renaming size() to numel() - 1/6
2018-10-29 11:11:19 -07:00
boolean_unmask_ops.cc
Revert "Tensor construction codemod(raw_mutable_data) ( #16373 )" ( #18680 )
2019-04-01 14:39:13 -07:00
boolean_unmask_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
boolean_unmask_ops.h
box_with_nms_limit_op.cc
remove ops in the __caffe2 namespace ( #47318 )
2020-11-16 15:30:16 -08:00
box_with_nms_limit_op.h
BoxWithNMSLimit support int batch_splits input ( #47504 )
2020-11-07 00:27:51 -08:00
bucketize_op.cc
[C2] Native GPU implementation for bucketize ( #33529 )
2020-02-21 15:47:04 -08:00
bucketize_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
bucketize_op.h
[pyper] export caffe2 bucketize GPU operator to pytorch
2020-09-09 16:08:53 -07:00
byte_weight_dequant_op.cc
Add byte_weight_dequant_op
2018-07-18 16:27:21 -07:00
byte_weight_dequant_op.h
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
cast_op.cc
Tensor construction codemod(ResizeLike) - 4/7 ( #15088 )
2018-12-13 13:39:56 -08:00
cast_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
cast_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
cbrt_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
cbrt_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
cbrt_op.h
cc_bmm_bg_op.cc
Move ConcatBatchMatMulBatchGatherOp to OSS
2019-04-10 15:29:03 -07:00
cc_bmm_bg_op.h
Move ConcatBatchMatMulBatchGatherOp to OSS
2019-04-10 15:29:03 -07:00
ceil_op.cc
ceil_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
ceil_op.h
Tensor construction codemod(ResizeLike) - 4/7 ( #15088 )
2018-12-13 13:39:56 -08:00
channel_backprop_stats_op.cc
Rename ndim() -> dim() - 3/6
2018-11-05 23:21:40 -08:00
channel_backprop_stats_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
channel_backprop_stats_op.h
refactor caffe2 operator constructors - 11/9 ( #17722 )
2019-03-08 12:38:54 -08:00
channel_shuffle_op.cc
batch size 0 support in ChannelShuffle DNNLOWP op ( #26858 )
2019-09-26 00:40:07 -07:00
channel_shuffle_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
channel_shuffle_op.h
refactor caffe2 operator constructors - 11/9 ( #17722 )
2019-03-08 12:38:54 -08:00
channel_stats_op.cc
Optimize channel_stats_op ( #16243 )
2019-03-12 12:08:00 -07:00
channel_stats_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
channel_stats_op.h
Optimize channel_stats_op ( #16243 )
2019-03-12 12:08:00 -07:00
channelwise_conv3d_op_cudnn.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
clip_op.cc
Tensor construction codemod(ResizeLike) - 4/7 ( #15088 )
2018-12-13 13:39:56 -08:00
clip_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
clip_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
CMakeLists.txt
Remove experimental c10 ops ( #36394 )
2020-04-10 16:11:16 -07:00
collect_and_distribute_fpn_rpn_proposals_op.cc
Rename caffe2<->c10 operator wrappers ( #21322 )
2019-06-07 13:48:10 -07:00
collect_and_distribute_fpn_rpn_proposals_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
communicator_op_gpu.cc
communicator_op.cc
concat_split_op_gpu.cc
concat_split_op.cc
Fix illegal memory access issue for CUDA version of SplitByLengths operator.
2020-08-14 01:04:08 -07:00
concat_split_op.h
Fix illegal memory access issue for CUDA version of SplitByLengths operator.
2020-08-14 01:04:08 -07:00
conditional_op.cc
Revert "Tensor construction codemod(raw_mutable_data) ( #16373 )" ( #18680 )
2019-04-01 14:39:13 -07:00
conditional_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
conv_gradient_op.cc
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
conv_op_cache_cudnn_test.cc
move flags to c10 ( #12144 )
2018-10-04 02:09:56 -07:00
conv_op_cache_cudnn.cc
conv_op_cache_cudnn.h
Rename IntList to IntArrayRef. ( #16751 )
2019-02-05 14:54:34 -08:00
conv_op_cudnn.cc
Remove deprecated cuDNN API from caffe2 ( #38680 )
2020-05-20 12:55:58 -07:00
conv_op_eigen.cc
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
conv_op_gpu.cc
conv_op_impl.h
fix some issues found by enabling -Wshorten-64-to-32 ( #18187 )
2019-06-14 16:29:32 -07:00
conv_op_shared_gpu.cc
Removing some dependency edges from Blob to other caffe2 ( #12043 )
2018-09-25 11:40:24 -07:00
conv_op_shared.cc
move flags to c10 ( #12144 )
2018-10-04 02:09:56 -07:00
conv_op_shared.h
Remove template parameter from Tensor ( #9939 )
2018-07-27 10:56:39 -07:00
conv_op.cc
Convert some docstrings from char* to char[] ( #13062 )
2018-10-24 13:48:18 -07:00
conv_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
conv_pool_op_base.h
Fix signed-unsigned warnings (RELAND) ( #36224 )
2020-04-08 16:29:27 -07:00
conv_transpose_gradient_op.cc
conv_transpose_op_cudnn.cc
Remove deprecated cuDNN API from caffe2 ( #38680 )
2020-05-20 12:55:58 -07:00
conv_transpose_op_gpu.cc
conv_transpose_op_impl.h
fix zero-batch handling in convtranspose ( #24341 )
2019-12-18 15:06:36 -08:00
conv_transpose_op_mobile_impl.h
fix zero-batch handling in convtranspose ( #24341 )
2019-12-18 15:06:36 -08:00
conv_transpose_op_mobile_test.cc
[PyTorch] Remove CAFFE2_FB_LIMITED_MOBILE_CAPABILITY ( #50385 )
2021-01-20 10:26:54 -08:00
conv_transpose_op_mobile.cc
Make C10_MOBILE consistent with how feature macros are usually used ( #17481 )
2019-02-27 17:57:51 -08:00
conv_transpose_op_mobile.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
conv_transpose_op.cc
Simplify InheritOnnxSchema registration ( #12696 )
2018-10-16 19:59:49 -07:00
conv_transpose_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
conv_transpose_unpool_op_base.h
fix zero-batch handling in convtranspose ( #24341 )
2019-12-18 15:06:36 -08:00
copy_op.cc
To fix caffe2 model with Copy OP cannot export to onnx model ( #37144 )
2020-05-04 11:34:09 -07:00
copy_op.cu
exposing CPU/GPU Copy ops ( #32248 )
2020-01-17 12:40:43 -08:00
copy_op.h
exposing CPU/GPU Copy ops ( #32248 )
2020-01-17 12:40:43 -08:00
copy_rows_to_tensor_op.cc
Perform weight re-init for embedding table in sparse_lookup.py ( #22348 )
2019-07-03 10:33:40 -07:00
copy_rows_to_tensor_op.h
Perform weight re-init for embedding table in sparse_lookup.py ( #22348 )
2019-07-03 10:33:40 -07:00
cos_op.cc
cos_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
cos_op.h
cosh_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
cosh_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
cosh_op.h
cosine_embedding_criterion_op.cc
Tensor construction codemod(ResizeLike) - 4/7 ( #15088 )
2018-12-13 13:39:56 -08:00
cosine_embedding_criterion_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
cosine_embedding_criterion_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
counter_ops_gpu.cc
HIP Operators Generator--> HipOpG ( #9322 )
2018-07-19 00:26:06 -07:00
counter_ops.cc
Replace c10::guts::stuff with std::stuff ( #30915 )
2019-12-16 13:57:19 -08:00
counter_ops.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
crash_op.cc
Support compilation on gcc-7.4.0 ( #19470 )
2019-04-19 21:41:36 -07:00
create_scope_op.cc
Tensor construction: combine Resize+mutable_data - 1/4 ( #13942 )
2018-11-19 15:33:50 -08:00
create_scope_op.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
crf_viterbi_op.cc
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
cross_entropy_op.cc
[Format] format a few files ( #35187 )
2020-04-17 14:30:01 -07:00
cross_entropy_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
cross_entropy_op.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
ctc_beam_search_decoder_op.cc
Output sequence probability with CTC beam search, optional multiple output sequences ( #21927 )
2019-07-02 17:29:13 -07:00
ctc_beam_search_decoder_op.h
Output sequence probability with CTC beam search, optional multiple output sequences ( #21927 )
2019-07-02 17:29:13 -07:00
ctc_greedy_decoder_op.cc
fix some issues found by enabling -Wshorten-64-to-32 ( #18187 )
2019-06-14 16:29:32 -07:00
ctc_greedy_decoder_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
cube_op.cc
cube_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
cube_op.h
data_couple_gpu.cu
No Op Optimizer ( #12390 )
2018-10-10 18:09:46 -07:00
data_couple.cc
No Op Optimizer ( #12390 )
2018-10-10 18:09:46 -07:00
data_couple.h
No Op Optimizer ( #12390 )
2018-10-10 18:09:46 -07:00
dataset_ops.cc
pass TypeMeta by value ( #45026 )
2020-10-30 10:14:17 -07:00
dataset_ops.h
pass TypeMeta by value ( #45026 )
2020-10-30 10:14:17 -07:00
deform_conv_gradient_op.cc
deform_conv_op_impl.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
deform_conv_op.cc
deform_conv_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
deform_conv_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
dense_vector_to_id_list_op.cc
add dense vector to id_list operator ( #15090 )
2018-12-18 16:27:38 -08:00
dense_vector_to_id_list_op.h
Tensor construction codemod ( #16568 )
2019-02-05 18:51:02 -08:00
depthwise_3x3_conv_op_cudnn.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
distance_op.cc
Tensor construction codemod(ResizeLike) - 4/7 ( #15088 )
2018-12-13 13:39:56 -08:00
distance_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
distance_op.h
Move math::Axpy function to elementwise lib ( #18316 )
2019-03-26 12:19:19 -07:00
do_op_gpu.cc
do_op.cc
do_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
dropout_op_cudnn.cc
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
dropout_op.cc
caffe2: use at::mt19937 instead of std::mt19937 (10x speedup) ( #43987 )
2020-10-16 16:08:35 -07:00
dropout_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
dropout_op.h
refactor caffe2 operator constructors - 2/9 ( #17083 )
2019-02-28 14:23:55 -08:00
elementwise_add_gradient_op.cc
elementwise_add_op_gpu.cc
elementwise_add_op.cc
elementwise_add_op.h
Separate reduce functions from math ( #16929 )
2019-02-13 17:50:47 -08:00
elementwise_div_gradient_op.cc
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
elementwise_div_op.cc
elementwise_div_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
elementwise_div_op.h
elementwise_linear_op.cc
Export ElementwiseLinear to ONNX (Mul + Add). ( #17411 )
2019-02-25 08:11:13 -08:00
elementwise_linear_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
elementwise_linear_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
elementwise_logical_ops.cc
Windows raw string fix ( #10998 )
2018-08-29 11:40:08 -07:00
elementwise_logical_ops.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
elementwise_mul_gradient_op.cc
Use sum_integers and multiply_integers ( #51146 )
2021-02-10 18:05:45 -08:00
elementwise_mul_op.cc
elementwise_mul_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
elementwise_mul_op.h
optimize MulGradient for common shapes ( #19705 )
2019-12-11 11:39:52 -08:00
elementwise_op_gpu_test.cc
move flags to c10 ( #12144 )
2018-10-04 02:09:56 -07:00
elementwise_op_test.cc
move flags to c10 ( #12144 )
2018-10-04 02:09:56 -07:00
elementwise_op_test.h
Renaming size() to numel() - 2/6
2018-10-26 15:21:50 -07:00
elementwise_ops_schema.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
elementwise_ops_utils.cc
[PyPer] Port c2 add to pt ( #54229 )
2021-03-19 12:45:11 -07:00
elementwise_ops_utils.h
[PyPer] Port c2 add to pt ( #54229 )
2021-03-19 12:45:11 -07:00
elementwise_ops.cc
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
elementwise_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
elementwise_ops.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
elementwise_sub_gradient_op.cc
elementwise_sub_op_gpu.cc
elementwise_sub_op.cc
elementwise_sub_op.h
Separate reduce functions from math ( #16929 )
2019-02-13 17:50:47 -08:00
elementwise_sum_op.cc
add tensor and cost inference functions ( #17684 )
2019-03-06 23:34:02 -08:00
elu_op_cudnn.cc
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
elu_op.cc
Use REGISTER_CPU_GRADIENT_OPERATOR for many operators ( #12616 )
2018-10-24 13:48:17 -07:00
elu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
elu_op.h
Add cudnn activation ops ( #9379 )
2018-07-12 23:18:56 -07:00
enforce_finite_op.cc
enforce_finite_op.cu
Remove calls to CopyFrom that can be sync ( #13205 )
2018-10-29 13:57:42 -07:00
enforce_finite_op.h
[caffe2] EnforceFinite: log blobs finiteness in workspace on error ( #52892 )
2021-02-26 16:48:19 -08:00
ensure_clipped_op.cc
Renaming size() to numel() - 2/6
2018-10-26 15:21:50 -07:00
ensure_clipped_op.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
ensure_cpu_output_op.cc
ensure_cpu_output_op.cu
ensure_cpu_output_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
erf_op.cc
Export PyTorch erf to ONNX Erf and add Caffe2 Erf operator
2019-01-17 09:18:08 -08:00
erf_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
erf_op.h
Export PyTorch erf to ONNX Erf and add Caffe2 Erf operator
2019-01-17 09:18:08 -08:00
exp_op_gpu.cc
exp_op.cc
Simplify InheritOnnxSchema registration ( #12696 )
2018-10-16 19:59:49 -07:00
exp_op.h
expand_op_gpu.cc
expand_op.cc
Convert all tabs to spaces, add CI. ( #18959 )
2019-04-09 08:12:26 -07:00
expand_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
expand_squeeze_dims_op_gpu.cc
expand_squeeze_dims_op.cc
Simplify InheritOnnxSchema registration ( #12696 )
2018-10-16 19:59:49 -07:00
expand_squeeze_dims_op.h
fix -Wsign-compare warnings for some files inside c2 ( #18123 )
2019-03-19 10:39:20 -07:00
fc_inference.cc
[caffe2] add cost inference for FusedFakeQuantFC and FusedFakeQuantFCGradient ( #44840 )
2020-09-17 14:07:17 -07:00
fc_inference.h
[caffe2] add cost inference for FusedFakeQuantFC and FusedFakeQuantFCGradient ( #44840 )
2020-09-17 14:07:17 -07:00
feature_maps_ops.cc
Add a new op for converting the dense feature to sparse representation
2020-07-27 12:45:37 -07:00
feature_maps_ops.h
Add a new op for converting the dense feature to sparse representation
2020-07-27 12:45:37 -07:00
feed_blob_op.cc
feed_blob_op.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
filler_op.cc
topk tensor k support ( #39407 )
2020-06-15 13:10:20 -07:00
filler_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
filler_op.h
caffe2: use at::mt19937 instead of std::mt19937 (10x speedup) ( #43987 )
2020-10-16 16:08:35 -07:00
find_duplicate_elements_op.cc
find_duplicate_elements_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
find_op.cc
find_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
find_op.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
flatten_op.cc
add tensor and cost inference functions ( #17684 )
2019-03-06 23:34:02 -08:00
flatten_op.h
Revert "Tensor construction codemod(raw_mutable_data) ( #16373 )" ( #18680 )
2019-04-01 14:39:13 -07:00
flexible_top_k.cc
Tensor construction: combine Resize+mutable_data - 2/4 ( #14205 )
2018-11-30 10:46:58 -08:00
flexible_top_k.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
floor_op.cc
floor_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
floor_op.h
Tensor construction codemod(ResizeLike) - 4/7 ( #15088 )
2018-12-13 13:39:56 -08:00
free_op_gpu.cc
free_op.cc
free_op.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
fully_connected_op_gpu.cc
RIP CUDA <9.2: circleci, aten, and caffe2 ( #36846 )
2020-05-18 13:41:05 -07:00
fully_connected_op.cc
[caffe2] add cost inference for FusedFakeQuantFC and FusedFakeQuantFCGradient ( #44840 )
2020-09-17 14:07:17 -07:00
fully_connected_op.h
[Format] format a few files ( #35187 )
2020-04-17 14:30:01 -07:00
fused_rowwise_8bit_conversion_ops.cc
[caffe2] make fused rowwise quant/dequant op work for N-dim tensors ( #33426 )
2020-02-19 23:29:42 -08:00
fused_rowwise_8bit_conversion_ops.h
[caffe2] optimize 2/4-bit row-wise quantization ( #387 )
2020-06-19 21:28:31 -07:00
fused_rowwise_nbit_conversion_ops.cc
fp16 include not needed ( #35708 )
2020-03-30 17:47:44 -07:00
fused_rowwise_nbit_conversion_ops.h
[caffe2] optimize 2/4-bit row-wise quantization ( #387 )
2020-06-19 21:28:31 -07:00
fused_rowwise_nbitfake_conversion_ops.cc
fp16 include not needed ( #35708 )
2020-03-30 17:47:44 -07:00
fused_rowwise_nbitfake_conversion_ops.h
[caffe2] minor typo fix in fused_rowwise_nbitfake_conversion_ops.h comment ( #39315 )
2020-05-31 23:32:39 -07:00
fused_rowwise_random_quantization_ops.cc
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
fused_rowwise_random_quantization_ops.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
gather_fused_8bit_rowwise_op.cc
gather_fused_8bit_rowwise_op.h
Tensor construction: combine Resize+mutable_data - 2/4 ( #14205 )
2018-11-30 10:46:58 -08:00
gather_op.cc
support Gather different indices for different examples in one batch ( #23813 )
2019-08-07 21:14:30 -07:00
gather_op.cu
support Gather different indices for different examples in one batch ( #23813 )
2019-08-07 21:14:30 -07:00
gather_op.cuh
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
gather_op.h
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
gather_ranges_to_dense_op.cc
Resend diff D23858329 ( #45315 )
2020-09-24 18:41:49 -07:00
gather_ranges_to_dense_op.h
Replace GatherRangesToDense operator in Dper from c2 to pt.
2020-11-20 08:14:32 -08:00
gelu_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
gelu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
gelu_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
generate_proposals_op_gpu_test.cc
handle box plus one for gpu generate_proposals
2019-05-16 18:17:15 -07:00
generate_proposals_op_test.cc
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
generate_proposals_op_util_boxes_test.cc
make box plus one a legacy argument in detection ops
2019-05-16 18:17:12 -07:00
generate_proposals_op_util_boxes.h
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
generate_proposals_op_util_nms_gpu_test.cc
Skips flaky UtilsNMSTest.GPUEqualsCPURotatedCorrectnessTest ( #30053 )
2019-11-21 13:44:44 -08:00
generate_proposals_op_util_nms_gpu.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
generate_proposals_op_util_nms_gpu.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
generate_proposals_op_util_nms_test.cc
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
generate_proposals_op_util_nms.h
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
generate_proposals_op.cc
remove ops in the __caffe2 namespace ( #47318 )
2020-11-16 15:30:16 -08:00
generate_proposals_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
generate_proposals_op.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
given_tensor_byte_string_to_uint8_fill_op.cc
Export uint8 tensors as byte string in mobile_exporter and add GivenTensorByteStringToUInt8FillOp ( #10385 )
2018-08-15 14:26:50 -07:00
given_tensor_byte_string_to_uint8_fill_op.cu
Export uint8 tensors as byte string in mobile_exporter and add GivenTensorByteStringToUInt8FillOp ( #10385 )
2018-08-15 14:26:50 -07:00
given_tensor_byte_string_to_uint8_fill_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
given_tensor_fill_op.cc
Add GivenTensorInt16Fill ( #20515 )
2019-05-15 19:45:15 -07:00
given_tensor_fill_op.cu
Add GivenTensorInt16Fill ( #20515 )
2019-05-15 19:45:15 -07:00
given_tensor_fill_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
glu_op.cc
glu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
glu_op.h
refactor caffe2 operator constructors - 3/9 ( #17084 )
2019-02-28 14:13:17 -08:00
group_norm_op.cc
correct comments in group_norm_op ( #19621 )
2019-04-23 13:31:15 -07:00
group_norm_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
group_norm_op.h
batch size 0 support in norm operators ( #26894 )
2019-09-26 16:08:35 -07:00
gru_unit_op_gpu.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
gru_unit_op.cc
gru_unit_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
h_softmax_op.cc
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
h_softmax_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
half_float_ops_test.cc
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
half_float_ops.cc
Enable fp16 for UniformFill ( #44540 )
2020-09-15 15:09:18 -07:00
half_float_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
half_float_ops.h
Enable fp16 for UniformFill ( #44540 )
2020-09-15 15:09:18 -07:00
hard_sigmoid_op.cc
Simplify InheritOnnxSchema registration ( #12696 )
2018-10-16 19:59:49 -07:00
hard_sigmoid_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
hard_sigmoid_op.h
Add CPU version of hard sigmoid operator to caffe2 ( #10837 )
2018-08-28 14:55:49 -07:00
heatmap_max_keypoint_op.cc
remove ops in the __caffe2 namespace ( #47318 )
2020-11-16 15:30:16 -08:00
heatmap_max_keypoint_op.h
register HeatmapMaxKeypoint with C10 ( #25191 )
2019-08-27 20:13:57 -07:00
histogram_op.cc
[dper][pruning] add histogram op ( #38514 )
2020-05-28 15:45:04 -07:00
histogram_op.h
[dper][pruning] add histogram op ( #38514 )
2020-05-28 15:45:04 -07:00
if_op_gpu.cc
if_op.cc
if_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
im2col_op_gpu.cc
im2col_op.cc
im2col_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
index_hash_ops.cc
[1/3] Bind IndexHash to PyTorch ( #33015 )
2020-02-10 17:47:38 -08:00
index_hash_ops.h
[c10/cuda] Reorganize device_count() and robustly surface ASAN warnings ( #42249 )
2020-08-05 11:39:31 -07:00
index_ops.cc
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
index_ops.h
pass TypeMeta by value ( #45026 )
2020-10-30 10:14:17 -07:00
inference_lstm_op.cc
Rename caffe2<->c10 operator wrappers ( #21322 )
2019-06-07 13:48:10 -07:00
inference_lstm_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
instance_norm_gradient_op.cc
Optimize InstanceNormGradientOp ( #22288 )
2019-07-01 15:10:17 -07:00
instance_norm_op.cc
Optimize InstanceNormGradientOp ( #22288 )
2019-07-01 15:10:17 -07:00
instance_norm_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
instance_norm_op.h
Optimize InstanceNormGradientOp ( #22288 )
2019-07-01 15:10:17 -07:00
integral_image_op.cc
Tensor construction codemod(ResizeLike) - 5/7 ( #15084 )
2018-12-13 12:42:52 -08:00
integral_image_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
integral_image_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
is_empty_op.cc
Remove BUILD_CAFFE2 and build everything ( #8338 )
2018-08-31 13:10:24 -07:00
is_empty_op.h
Tensor construction: combine Resize+mutable_data - 2/4 ( #14205 )
2018-11-30 10:46:58 -08:00
jsd_op.cc
Tensor construction codemod(ResizeLike) - 5/7 ( #15084 )
2018-12-13 12:42:52 -08:00
jsd_op.h
key_split_ops.cc
key_split_ops.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
last_n_window_collector.cc
[mlf][efficiency] add tensor inference function to last-n collector op ( #46693 )
2020-10-22 01:15:00 -07:00
layer_norm_op.cc
Remove unused param in Caffe2 LayerNormGradientOp ( #22282 )
2019-06-27 11:22:44 -07:00
layer_norm_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
layer_norm_op.h
[Caffe2] Fix LayerNormOp when batch_size == 0. ( #45250 )
2020-09-24 12:30:03 -07:00
leaky_relu_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
leaky_relu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
leaky_relu_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
length_split_op.cc
Add new LengthsSplit operator ( #10974 )
2018-09-10 15:40:28 -07:00
length_split_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
lengths_pad_op.cc
lengths_pad_op.cu
lengths_pad_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
lengths_reducer_fused_8bit_rowwise_ops.cc
[caffe] fix input order in SLS op documentation ( #36708 )
2020-04-16 00:55:54 -07:00
lengths_reducer_fused_8bit_rowwise_ops.h
[caffe2] explicitly pass use_offsets=false when calling fbgemm embedding kernels ( #35711 )
2020-03-31 08:35:19 -07:00
lengths_reducer_fused_nbit_rowwise_ops.cc
Expose SparseLengthsSum8BitRowwiseSparse to C10 ( #47306 )
2020-11-03 22:51:12 -08:00
lengths_reducer_fused_nbit_rowwise_ops.h
Expose SparseLengthsSum8BitRowwiseSparse to C10 ( #47306 )
2020-11-03 22:51:12 -08:00
lengths_reducer_ops.cc
Back out "Revert D19987020: [pytorch][PR] Add the sls tensor train op" ( #43938 )
2020-09-01 11:42:12 -07:00
lengths_reducer_ops.h
Add logging for debugging S223170
2021-03-22 08:58:40 -07:00
lengths_reducer_rowwise_8bit_ops.cc
Move registry fully to c10 ( #12077 )
2018-09-27 03:09:54 -07:00
lengths_reducer_rowwise_8bit_ops.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
lengths_tile_op.cc
Revert "Tensor construction codemod(raw_mutable_data) ( #16373 )" ( #18680 )
2019-04-01 14:39:13 -07:00
lengths_tile_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
lengths_tile_op.h
Tensor reinitialization codemod - 3/5 ( #15912 )
2019-01-16 19:49:01 -08:00
lengths_top_k_op.cc
Tensor construction: combine Resize+mutable_data - 2/4 ( #14205 )
2018-11-30 10:46:58 -08:00
lengths_top_k_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
listwise_l2r_op.cc
add LambdaRank DCG Loss Option ( #23679 )
2019-08-02 11:47:46 -07:00
listwise_l2r_op.h
add LambdaRank DCG Loss Option ( #23679 )
2019-08-02 11:47:46 -07:00
load_save_op_gpu.cc
Rename cuda_gpu_id to device_id in DeviceOption ( #12456 )
2018-10-09 15:54:04 -07:00
load_save_op_util.cc
Reduce amount of work done within a global lock within ParallelLoadOp ( #43508 )
2020-08-26 18:19:40 -07:00
load_save_op_util.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
load_save_op.cc
[caffe2] add a SerializationOptions field for the save operator ( #53402 )
2021-03-11 13:02:58 -08:00
load_save_op.h
[caffe2] add a SerializationOptions field for the save operator ( #53402 )
2021-03-11 13:02:58 -08:00
local_response_normalization_op_cudnn.cc
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
local_response_normalization_op.cc
Move math::Axpy function to elementwise lib ( #18316 )
2019-03-26 12:19:19 -07:00
local_response_normalization_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
local_response_normalization_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
locally_connected_op_gpu.cc
locally_connected_op_impl.h
Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" ( #16516 )
2019-01-30 12:50:38 -08:00
locally_connected_op_util.cc
locally_connected_op_util.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
locally_connected_op.cc
Move exception to C10 ( #12354 )
2018-10-15 13:33:18 -07:00
locally_connected_op.h
refactor caffe2 operator constructors - 4/9 ( #17085 )
2019-02-28 14:23:52 -08:00
log_op_gpu.cc
log_op.cc
Revert D10439558: Add cost for non-linear ops
2018-11-16 23:30:05 -08:00
log_op.h
logit_op.cc
Export logic op to pytorch
2020-07-08 02:27:09 -07:00
logit_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
logit_op.h
Export logic op to pytorch
2020-07-08 02:27:09 -07:00
loss_op.cc
loss_op.cu
loss_op.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
lp_pool_op.cc
Remove 4 unused variables in lp_pool_op.cc ( #42329 )
2020-07-30 15:50:17 -07:00
lp_pool_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
lpnorm_op.cc
Back out "Back out "[c2] register cuda op for LpNorm (fallback)"" ( #38566 )
2020-05-19 10:37:25 -07:00
lpnorm_op.cu
Back out "Back out "[c2] register cuda op for LpNorm (fallback)"" ( #38566 )
2020-05-19 10:37:25 -07:00
lpnorm_op.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
lstm_unit_op_gpu.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
lstm_unit_op.cc
Revert D10439558: Add cost for non-linear ops
2018-11-16 23:30:05 -08:00
lstm_unit_op.h
[caffe2] Explicit vectorization of LSTM operator ( #35556 )
2020-04-01 17:19:56 -07:00
lstm_utils.h
Update math::Transpose to support tensor with size > 2G ( #17670 )
2019-03-20 18:22:21 -07:00
map_ops.cc
Add interface to provide blob types to shape&type inference ( #9643 )
2018-07-24 11:58:05 -07:00
map_ops.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
margin_ranking_criterion_op.cc
Tensor construction codemod(ResizeLike) - 5/7 ( #15084 )
2018-12-13 12:42:52 -08:00
margin_ranking_criterion_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
margin_ranking_criterion_op.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
matmul_op_gpu.cc
matmul_op.cc
matmul_op.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
max_pool_with_index_gpu.h
HIP Operators Generator--> HipOpG ( #9322 )
2018-07-19 00:26:06 -07:00
max_pool_with_index.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
mean_op.cc
mean_op.cu
mean_op.h
Adding Type Double to Caffe2 Mean Op
2020-09-28 13:35:29 -07:00
mem_query_op.cu
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
merge_id_lists_op.cc
Export MergeIdLists Caffe2 Operator to PyTorch
2020-08-14 14:46:17 -07:00
merge_id_lists_op.h
Export MergeIdLists Caffe2 Operator to PyTorch
2020-08-14 14:46:17 -07:00
minmax_gradient_ops.cc
Separate elementwise level2 math functions ( #16753 )
2019-02-07 18:38:26 -08:00
minmax_ops.cc
Separate elementwise level2 math functions ( #16753 )
2019-02-07 18:38:26 -08:00
minmax_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
minmax_ops.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
mish_op.cc
[Caffe2] Optimize MishOp on CPU ( #48212 )
2020-11-19 14:17:27 -08:00
mish_op.h
[Caffe2] Optimize MishOp on CPU ( #48212 )
2020-11-19 14:17:27 -08:00
mod_op.cc
[uhm][0/n] add cuda Mod Op ( #46732 )
2020-10-26 11:07:51 -07:00
mod_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
mod_op.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
moments_op.cc
moments_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
moments_op.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
multi_class_accuracy_op.cc
Tensor construction: combine Resize+mutable_data - 3/4 ( #13944 )
2018-11-19 15:28:13 -08:00
multi_class_accuracy_op.cu
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
multi_class_accuracy_op.h
negate_gradient_op_gpu.cc
negate_gradient_op.cc
negate_gradient_op.h
Shut up "address will always evaluate to 'true'" warnings ( #14774 )
2018-12-05 21:18:31 -08:00
negative_op_gpu.cc
negative_op.cc
negative_op.h
ngram_ops.cc
ngram_ops.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
no_default_engine_op.h
norm_planar_yuv_op.cc
Apply modernize-use-override - 2/2
2019-02-13 21:01:28 -08:00
normalize_l1_op.cc
normalize_l1_op.h
Revert D22330340: [C2] Fixed a bug in normalization operator
2020-07-02 16:05:23 -07:00
normalize_op.cc
Fix compilation error ( #17860 )
2019-03-11 10:26:42 -07:00
normalize_op.h
Revert D22330340: [C2] Fixed a bug in normalization operator
2020-07-02 16:05:23 -07:00
normalize_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
numpy_tile_op.cc
numpy_tile_op.h
pass TypeMeta by value ( #45026 )
2020-10-30 10:14:17 -07:00
one_hot_ops.cc
Export BatchBucketOneHot Caffe2 Operator to PyTorch
2020-08-11 14:00:19 -07:00
one_hot_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
one_hot_ops.h
Export BatchBucketOneHot Caffe2 Operator to PyTorch
2020-08-11 14:00:19 -07:00
onnx_while_op.cc
onnx_while_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
op_utils_cudnn.h
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
operator_fallback_gpu_test.cc
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
operator_fallback_gpu.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
order_switch_ops_cudnn.cc
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
order_switch_ops_gpu.cc
Optimize NCHW2NHWC on GPU ( #12910 )
2018-10-22 11:24:29 -07:00
order_switch_ops.cc
Optimize NCHW2NHWC on GPU ( #12910 )
2018-10-22 11:24:29 -07:00
order_switch_ops.h
Tensor construction: combine Resize+mutable_data - 3/4 ( #13944 )
2018-11-19 15:28:13 -08:00
pack_rnn_sequence_op.cc
pack_rnn_sequence_op.h
Reapply D14078519 ( #17596 )
2019-03-06 13:51:00 -08:00
pack_segments.cc
[dper3] Export PackSegments and UnpackSegments to Pytorch
2020-09-11 09:29:24 -07:00
pack_segments.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
pack_segments.h
[dper3] Export PackSegments and UnpackSegments to Pytorch
2020-09-11 09:29:24 -07:00
pad_op_gpu.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
pad_op.cc
Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" ( #16516 )
2019-01-30 12:50:38 -08:00
pad_op.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
partition_ops.cc
Implement gradient operator for GatherByKeys. ( #24348 )
2019-08-15 12:19:22 -07:00
partition_ops.h
Revert "Tensor construction codemod(raw_mutable_data) ( #16373 )" ( #18680 )
2019-04-01 14:39:13 -07:00
percentile_op.cc
Exposing Percentile Caffe2 Operator in PyTorch
2020-08-07 16:22:37 -07:00
percentile_op.h
Exposing Percentile Caffe2 Operator in PyTorch
2020-08-07 16:22:37 -07:00
perplexity_op.cc
Tensor construction: combine Resize+mutable_data - 3/4 ( #13944 )
2018-11-19 15:28:13 -08:00
perplexity_op.cu
Tensor construction codemod ( #16568 )
2019-02-05 18:51:02 -08:00
perplexity_op.h
piecewise_linear_transform_op.cc
Expose PiecewiseLinearTransform to PyTorch
2019-09-27 12:49:04 -07:00
piecewise_linear_transform_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
piecewise_linear_transform_op.h
Expose PiecewiseLinearTransform to PyTorch
2019-09-27 12:49:04 -07:00
pool_gradient_op.cc
Add count_include_pad to average_pool_gradient_op ( #15997 )
2019-01-15 16:56:40 -08:00
pool_op_cudnn.cc
Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize ( #17764 )
2019-03-07 18:38:53 -08:00
pool_op_util.cc
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
pool_op_util.h
Add count_include_pad arg for PoolOpGradient on CPU and fix ARM performance issue. ( #15651 )
2019-01-03 00:18:47 -08:00
pool_op.cc
Separate reduce functions from math ( #16929 )
2019-02-13 17:50:47 -08:00
pool_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
pool_op.h
refactor caffe2 operator constructors - 6/9 ( #17087 )
2019-02-28 14:23:57 -08:00
pow_op.cc
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
pow_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
pow_op.h
Fix signed-unsigned warnings ( #34791 )
2020-03-19 00:29:56 -07:00
prefetch_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
prelu_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
prelu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
prelu_op.h
refactor caffe2 operator constructors - 6/9 ( #17087 )
2019-02-28 14:23:57 -08:00
prepend_dim_op_gpu.cc
prepend_dim_op.cc
Support conversion from Caffe2 MergeDim to ONNX Reshape + Squeeze. ( #16189 )
2019-02-13 15:53:38 -08:00
prepend_dim_op.h
Revert "Tensor construction codemod(raw_mutable_data) ( #16373 )" ( #18680 )
2019-04-01 14:39:13 -07:00
quant_decode_op.cc
Fix include paths for typeid.h ( #13689 )
2018-11-14 18:04:09 -08:00
quant_decode_op.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
quantile_op.cc
[Rowwise Pruning][c2 op] Add Quantile Op ( #32448 )
2020-01-22 16:59:56 -08:00
quantile_op.h
[Rowwise Pruning][c2 op] Add Quantile Op ( #32448 )
2020-01-22 16:59:56 -08:00
rank_loss_op.cc
Tensor construction codemod(ResizeLike) - 6/7 ( #15137 )
2018-12-13 12:47:33 -08:00
rank_loss_op.h
reciprocal_gradient_op.cc
Adding reciprocal operator and a test
2018-07-27 18:24:43 -07:00
reciprocal_op.cc
Adding reciprocal operator and a test
2018-07-27 18:24:43 -07:00
reciprocal_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
reciprocal_op.h
Adding reciprocal operator and a test
2018-07-27 18:24:43 -07:00
reduce_front_back_max_ops.cc
Split reduction_front_backops.[cc|cu] into smaller units to allow build of smaller size ( #12315 )
2018-10-05 16:50:21 -07:00
reduce_front_back_max_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
reduce_front_back_max_ops.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
reduce_front_back_mean_ops.cc
Export ReduceMean/ReduceFrontMean/ReduceBackMean (Caffe2) to ReduceMean (ONNX). ( #16727 )
2019-02-12 13:35:32 -08:00
reduce_front_back_sum_mean_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
reduce_front_back_sum_mean_ops.h
[codemod][caffe2] Run clang-format - 5/7
2020-06-30 15:45:11 -07:00
reduce_front_back_sum_ops.cc
Split reduction_front_backops.[cc|cu] into smaller units to allow build of smaller size ( #12315 )
2018-10-05 16:50:21 -07:00
reduce_ops.cc
Use sum_integers and multiply_integers ( #51146 )
2021-02-10 18:05:45 -08:00
reduce_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
reduce_ops.h
refactor caffe2 operator constructors - 7/9 ( #17088 )
2019-02-28 14:23:53 -08:00
reducer_functors.h
Fix typos ( #30606 )
2019-12-02 20:17:42 -08:00
reduction_ops.cc
[C2] Add shape inference logic for ColwiseMax operator. ( #51914 )
2021-02-09 14:12:07 -08:00
reduction_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
reduction_ops.h
Histogram Binning Calibration
2020-09-06 17:11:16 -07:00
relu_n_op.cc
Remove template parameter from Tensor ( #9939 )
2018-07-27 10:56:39 -07:00
relu_n_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
relu_n_op.h
Add cudnn activation ops ( #9379 )
2018-07-12 23:18:56 -07:00
relu_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
relu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
relu_op.h
Add cudnn activation ops ( #9379 )
2018-07-12 23:18:56 -07:00
remove_data_blocks_op.cc
remove_data_blocks_op.h
Revert "Tensor construction codemod(raw_mutable_data) ( #16373 )" ( #18680 )
2019-04-01 14:39:13 -07:00
replace_nan_op.cc
Remove many caffe2::TIndex and replace them with int64_t ( #11943 )
2018-09-22 18:11:04 -07:00
replace_nan_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
replace_nan_op.h
refactor caffe2 operator constructors - 7/9 ( #17088 )
2019-02-28 14:23:53 -08:00
reservoir_sampling.cc
caffe2: use at::mt19937 instead of std::mt19937 (10x speedup) ( #43987 )
2020-10-16 16:08:35 -07:00
reshape_op_gpu_test.cc
Rename ndim() -> dim() - 5/6
2018-11-06 16:38:35 -08:00
reshape_op_gpu.cc
reshape_op.cc
ReshapeOp supports empty tensor ( #21230 )
2019-06-06 15:02:11 -07:00
reshape_op.h
[C2] Fix slowness of the ReshapeOp. ( #33729 )
2020-03-03 00:44:22 -08:00
resize_3d_op.cc
Migrate the cpu and gpu implementations of resize nearest 3D from vision to caffe2
2019-10-03 16:14:00 -07:00
resize_3d_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
resize_3d_op.h
Migrate the cpu and gpu implementations of resize nearest 3D from vision to caffe2
2019-10-03 16:14:00 -07:00
resize_op.cc
Rename caffe2<->c10 operator wrappers ( #21322 )
2019-06-07 13:48:10 -07:00
resize_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
resize_op.h
Rename caffe2<->c10 operator wrappers ( #21322 )
2019-06-07 13:48:10 -07:00
reverse_packed_segs_op.cc
reverse_packed_segs_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
reverse_packed_segs_op.h
Tensor construction: combine Resize+mutable_data - 3/4 ( #13944 )
2018-11-19 15:28:13 -08:00
rmac_regions_op.cc
Remove Context dependency from Tensor class ( #14269 )
2018-11-28 15:45:38 -08:00
rmac_regions_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
rmac_regions_op.h
refactor caffe2 operator constructors - 7/9 ( #17088 )
2019-02-28 14:23:53 -08:00
rms_norm_op.cc
[Caffe2] Add RMSNormOp ( #44338 )
2020-09-08 23:50:44 -07:00
rms_norm_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
rms_norm_op.h
[Caffe2] Add RMSNormOp ( #44338 )
2020-09-08 23:50:44 -07:00
roi_align_gradient_op.cc
add dllexport before template specialization functions for windows build ( #45477 )
2020-09-30 10:39:23 -07:00
roi_align_gradient_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
roi_align_gradient_op.h
Export roi_align_gradient_op to c10 ( #34776 )
2020-03-15 02:43:39 -07:00
roi_align_op_gpu_test.cc
Re-enable Caffe2 test RoiAlignTest.CheckCPUGPUEqual ( #40901 )
2020-07-02 11:22:23 -07:00
roi_align_op.cc
Lint trailing newlines ( #54737 )
2021-03-30 13:09:52 -07:00
roi_align_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
roi_align_op.h
Refactor RoIAlignOp on CPU ( #34698 )
2020-03-27 07:53:58 -07:00
roi_align_rotated_gradient_op.cc
roi_align_rotated_gradient_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
roi_align_rotated_gradient_op.h
add aligned option to RoIAlign
2019-08-07 21:22:33 -07:00
roi_align_rotated_op.cc
remove ops in the __caffe2 namespace ( #47318 )
2020-11-16 15:30:16 -08:00
roi_align_rotated_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
roi_align_rotated_op.h
Register RoIAlignRotated with C10
2020-01-16 16:32:28 -08:00
roi_pool_op.cc
add dllexport before template specialization functions for windows build ( #45477 )
2020-09-30 10:39:23 -07:00
roi_pool_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
roi_pool_op.h
refactor caffe2 operator constructors - 7/9 ( #17088 )
2019-02-28 14:23:53 -08:00
rowmul_op.cc
rowmul_op.h
Tensor construction codemod(ResizeLike) - 6/7 ( #15137 )
2018-12-13 12:47:33 -08:00
rsqrt_op.cc
rsqrt_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
rsqrt_op.h
scale_blobs_op.cc
ScaleBlobs Operator ( #19660 )
2019-05-08 17:57:33 -07:00
scale_blobs_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
scale_blobs_op.h
ScaleBlobs Operator ( #19660 )
2019-05-08 17:57:33 -07:00
scale_op_gpu.cc
codemod: caffe::float16 -> at::Half ( #11785 )
2018-09-20 18:55:19 -07:00
scale_op.cc
Remove template parameter from Tensor ( #9939 )
2018-07-27 10:56:39 -07:00
scale_op.h
Histogram Binning Calibration
2020-09-06 17:11:16 -07:00
segment_reduction_op_gpu.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
segment_reduction_op_gpu.cuh
Enable fp16 for CUDA SparseLengthsSum/Mean ( #44089 )
2020-09-15 11:10:54 -07:00
segment_reduction_op.cc
Rename caffe2<->c10 operator wrappers ( #21322 )
2019-06-07 13:48:10 -07:00
segment_reduction_op.h
TensorInferenceFunction checks
2020-10-11 16:08:58 -07:00
self_binning_histogram_op.cc
[MLF] Allow for computing prune quantile thresholds on absolute value of indicators in distributed-inference-compatible embedding LUT pruning ( #46789 )
2020-11-02 11:31:31 -08:00
self_binning_histogram_op.h
[MLF] Allow for computing prune quantile thresholds on absolute value of indicators in distributed-inference-compatible embedding LUT pruning ( #46789 )
2020-11-02 11:31:31 -08:00
selu_op.cc
Tensor construction codemod(ResizeLike) - 6/7 ( #15137 )
2018-12-13 12:47:33 -08:00
selu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
selu_op.h
refactor caffe2 operator constructors - 7/9 ( #17088 )
2019-02-28 14:23:53 -08:00
sequence_ops.cc
Undefined behavior with memset of std::string to 0 ( #18703 )
2019-04-02 10:10:11 -07:00
sequence_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
sequence_ops.h
refactor caffe2 operator constructors - 7/9 ( #17088 )
2019-02-28 14:23:53 -08:00
shape_op_gpu.cc
shape_op.cc
shape_op.h
refactor caffe2 operator constructors - 7/9 ( #17088 )
2019-02-28 14:23:53 -08:00
sigmoid_gradient_op.cc
Add cudnn activation ops ( #9379 )
2018-07-12 23:18:56 -07:00
sigmoid_op_cudnn.cc
Add cudnn activation ops ( #9379 )
2018-07-12 23:18:56 -07:00
sigmoid_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
sigmoid_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
sigmoid_op.h
sin_op.cc
sin_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
sin_op.h
sinh_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
sinh_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
sinh_op.h
sinusoid_position_encoding_op.cc
sinusoid_position_encoding_op.h
preprocessor cleanup ( #33957 )
2020-03-02 13:37:19 -08:00
slice_op.cc
[caffe2] SliceOp axes indexing fixes. ( #45432 )
2020-10-06 13:21:08 -07:00
slice_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
slice_op.h
[caffe2] SliceOp axes indexing fixes. ( #45432 )
2020-10-06 13:21:08 -07:00
softmax_op_cudnn.cc
Support softmax with D == 0 ( #29167 )
2019-11-11 00:46:10 -08:00
softmax_op.cc
Support softmax with D == 0 ( #29167 )
2019-11-11 00:46:10 -08:00
softmax_op.h
Optimize SoftmaxOp on CPU ( #18635 )
2019-04-10 18:52:15 -07:00
softmax_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
softmax_utils.cc
Optimize SoftmaxOp on CPU ( #18635 )
2019-04-10 18:52:15 -07:00
softmax_utils.h
Optimize SoftmaxOp on CPU ( #18635 )
2019-04-10 18:52:15 -07:00
softmax_with_loss_op.cc
Linearizable Label: Class Weights, Allow Missing Label, and Average by Batch Size ( #29707 )
2019-11-13 16:52:27 -08:00
softmax_with_loss_op.h
Linearizable Label: Class Weights, Allow Missing Label, and Average by Batch Size ( #29707 )
2019-11-13 16:52:27 -08:00
softplus_op.cc
Tensor construction codemod(ResizeLike) - 6/7 ( #15137 )
2018-12-13 12:47:33 -08:00
softplus_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
softplus_op.h
softsign_op.cc
Use REGISTER_CPU_GRADIENT_OPERATOR for many operators ( #12616 )
2018-10-24 13:48:17 -07:00
softsign_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
softsign_op.h
space_batch_op_gpu.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
space_batch_op.cc
space_batch_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
sparse_dropout_with_replacement_op.cc
caffe2: use at::mt19937 instead of std::mt19937 (10x speedup) ( #43987 )
2020-10-16 16:08:35 -07:00
sparse_dropout_with_replacement_op.h
Implement dropout with replacement for id list features. ( #22880 )
2019-07-23 14:34:21 -07:00
sparse_lp_regularizer_op_gpu.cu
Adding sparse Lp regularization operator to Caffe2 ( #38574 )
2020-06-01 15:21:19 -07:00
sparse_lp_regularizer_op.cc
Adding sparse Lp regularization operator to Caffe2 ( #38574 )
2020-06-01 15:21:19 -07:00
sparse_lp_regularizer_op.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
sparse_normalize_op_gpu.cu
Windows DLL build with Caffe2 code ( #11266 )
2018-09-06 15:12:20 -07:00
sparse_normalize_op.cc
[caffe2] Add operator schema for FP16SparseNorm ( #46300 )
2020-10-13 18:58:23 -07:00
sparse_normalize_op.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
sparse_to_dense_mask_op.cc
Exposes SparseToDenseMask Caffe2 Operator ( #45670 )
2020-10-02 10:05:13 -07:00
sparse_to_dense_mask_op.h
Exposes SparseToDenseMask Caffe2 Operator ( #45670 )
2020-10-02 10:05:13 -07:00
sparse_to_dense_op.cc
Shape inference for SparseToDense in ExpertCombiner
2020-07-15 08:04:48 -07:00
sparse_to_dense_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
sparse_to_dense_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
spatial_batch_norm_gradient_op.cc
Optimize SpatialBNOp on GPU ( #16395 )
2019-01-28 09:36:45 -08:00
spatial_batch_norm_op_cudnn.cu
[caffe2] Use extended versions of cuDNN calls for SpatialBN
2021-03-05 18:18:15 -08:00
spatial_batch_norm_op_impl.cuh
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
spatial_batch_norm_op.cc
Revert D13747581: Optimize SpatialBN on GPU
2019-01-24 15:26:37 -08:00
spatial_batch_norm_op.cu
Optimize SpatialBNOp on GPU ( #16395 )
2019-01-28 09:36:45 -08:00
spatial_batch_norm_op.h
[Caffe2] Remove explicitly divide by zero in SpatialBN training mode ( #42380 )
2020-08-01 11:54:58 -07:00
spatial_softmax_with_loss_op.cc
Optimize SoftmaxOp on CPU ( #18635 )
2019-04-10 18:52:15 -07:00
spatial_softmax_with_loss_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
sqr_op_gpu.cc
sqr_op.cc
sqr_op.h
sqrt_op_gpu.cc
sqrt_op.cc
Histogram Binning Calibration
2020-09-06 17:11:16 -07:00
sqrt_op.h
Separate level1 elementwise functions from math ( #16397 )
2019-01-30 00:04:12 -08:00
square_root_divide_op.cc
square_root_divide_op.h
[caffe2] Fix signed unsigned comparison warning ( #34161 )
2020-03-04 08:02:44 -08:00
stats_ops.cc
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
stats_put_ops.cc
Fix typos ( #30606 )
2019-12-02 20:17:42 -08:00
stats_put_ops.h
Fix typos ( #30606 )
2019-12-02 20:17:42 -08:00
stop_gradient_gpu.cc
stop_gradient.cc
stop_gradient.h
Remove Context dependency from Tensor class ( #14269 )
2018-11-28 15:45:38 -08:00
string_ops_test.cc
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
string_ops.cc
[C2] Add string equality operator ( #45886 )
2020-10-06 12:08:26 -07:00
string_ops.h
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
stump_func_op.cc
[dt] [caffe2] add/fix shape inference for StumpFunc, SliceGradient and ResizeLike ( #35430 )
2020-03-26 17:50:32 -07:00
stump_func_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
stump_func_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
stylizer_ops.cc
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
summarize_op.cc
Tensor construction: combine Resize+mutable_data - 4/4 ( #13856 )
2018-11-27 12:34:25 -08:00
summarize_op.cu
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
summarize_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
swish_op.cc
Tensor construction codemod(ResizeLike) - 7/7 ( #15087 )
2018-12-20 15:33:07 -08:00
swish_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
swish_op.h
tan_op.cc
tan_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
tan_op.h
tanh_gradient_op.cc
Fix TanhGradientOperator linker errors ( #10426 )
2018-08-13 17:57:10 -07:00
tanh_op_cudnn.cc
Add cudnn activation ops ( #9379 )
2018-07-12 23:18:56 -07:00
tanh_op.cc
[caffe2] fix type and shape inference for common gradient ops ( #35857 )
2020-04-02 11:17:04 -07:00
tanh_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
tanh_op.h
tensor_protos_db_input_gpu.cc
tensor_protos_db_input.cc
tensor_protos_db_input.h
Remove partially initialized Tensor in Deserialization ( #14197 )
2018-12-10 17:17:29 -08:00
text_file_reader_utils_test.cc
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
text_file_reader_utils.cc
fix comparison of narrow type with wide type in loop condition ( #53951 )
2021-03-22 16:40:35 -07:00
text_file_reader_utils.h
Renaming CAFFE2_API to TORCH_API ( #49496 )
2020-12-18 10:54:50 -08:00
text_file_reader.cc
Fix typos ( #30606 )
2019-12-02 20:17:42 -08:00
thresholded_relu_op.cc
Tensor construction codemod(ResizeLike) - 7/7 ( #15087 )
2018-12-20 15:33:07 -08:00
thresholded_relu_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
thresholded_relu_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
tile_op.cc
pass TypeMeta by value ( #45026 )
2020-10-30 10:14:17 -07:00
tile_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
tile_op.h
Fix typos, via a Levenshtein-type corrector ( #31523 )
2020-01-17 16:03:19 -08:00
top_k_heap_selection.cuh
RIP CUDA <9.2: circleci, aten, and caffe2 ( #36846 )
2020-05-18 13:41:05 -07:00
top_k_radix_selection.cuh
Followup for cuda assert cleanups ( #39220 )
2020-05-29 11:53:46 -07:00
top_k.cc
Forbid trailing whitespace ( #53406 )
2021-03-05 17:22:55 -08:00
top_k.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
top_k.h
ONNX Export Topk with Dynamic k (+ add test cases)
2019-07-05 23:46:36 -07:00
transpose_op_cudnn.cc
Disable cudnn transpose for int types ( #26934 )
2019-09-27 11:36:10 -07:00
transpose_op.cc
Simplify InheritOnnxSchema registration ( #12696 )
2018-10-16 19:59:49 -07:00
transpose_op.cu
Update math::Transpose to support tensor with size > 2G ( #17670 )
2019-03-20 18:22:21 -07:00
transpose_op.h
Add Int8Transpose operator ( #16382 )
2019-08-29 16:06:25 -07:00
tt_linear_op.cc
tt_linear_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
unique_ops.cc
Tensor construction codemod(ResizeLike) - 7/7 ( #15087 )
2018-12-20 15:33:07 -08:00
unique_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
unique_ops.h
Tensor reinitialization codemod - 4/5 ( #15967 )
2019-01-11 16:41:19 -08:00
unsafe_coalesce.cc
[C2] Revive unsafe CoalesceOp ( #49402 )
2020-12-17 04:31:29 -08:00
unsafe_coalesce.cu
[C2] Revive unsafe CoalesceOp ( #49402 )
2020-12-17 04:31:29 -08:00
unsafe_coalesce.h
[C2] Revive unsafe CoalesceOp ( #49402 )
2020-12-17 04:31:29 -08:00
upsample_op.cc
Tensor construction: combine Resize+mutable_data - 4/4 ( #13856 )
2018-11-27 12:34:25 -08:00
upsample_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
upsample_op.h
refactor caffe2 operator constructors - 8/9 ( #17089 )
2019-02-28 14:45:20 -08:00
utility_ops_gpu_test.cc
Rename ndim() -> dim() - 5/6
2018-11-06 16:38:35 -08:00
utility_ops_test.cc
Rename ndim() -> dim() - 5/6
2018-11-06 16:38:35 -08:00
utility_ops.cc
Implement LengthsToOffsets operator in Caffe2 ( #46590 )
2020-10-29 07:03:34 -07:00
utility_ops.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
utility_ops.h
Directly Return when Numel == 0 for WeightedSum and ScatterWeightedSum
2021-02-14 17:49:34 -08:00
variable_length_sequence_padding.cc
variable_length_sequence_padding.h
refactor caffe2 operator constructors - 9/9 ( #17090 )
2019-02-28 09:53:18 -08:00
weighted_multi_sampling_op.cc
Tensor construction: combine Resize+mutable_data - 4/4 ( #13856 )
2018-11-27 12:34:25 -08:00
weighted_multi_sampling_op.h
refactor caffe2 operator constructors - 9/9 ( #17090 )
2019-02-28 09:53:18 -08:00
weighted_sample_op.cc
Tensor construction: combine Resize+mutable_data - 4/4 ( #13856 )
2018-11-27 12:34:25 -08:00
weighted_sample_op.cu
Check kernel launches in caffe2/operators ( #52240 )
2021-02-16 16:42:05 -08:00
weighted_sample_op.h
refactor caffe2 operator constructors - 9/9 ( #17090 )
2019-02-28 09:53:18 -08:00
while_op_gpu.cc
while_op.cc
while_op.h
refactor caffe2 operator constructors - 9/9 ( #17090 )
2019-02-28 09:53:18 -08:00
workspace_ops.cc
refactor caffe2 operator constructors - 9/9 ( #17090 )
2019-02-28 09:53:18 -08:00
zero_gradient_op_gpu.cc
zero_gradient_op.cc
zero_gradient_op.h