pytorch/caffe2/python
Lu Fang 2752ad8045 Automatic update of fbcode/onnx to f461f7aad9987635b4aff108620ed7918f002d19 (#14568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14568

Previous import was 882c5283c54345d131e8fe5c859e4844dcf7ca8e

Included changes:
- **[f461f7a](https://github.com/onnx/onnx/commit/f461f7a)**: Show the op's type and name when shape inference fails. (#1623) <Jerry>
- **[ab8aaf9](https://github.com/onnx/onnx/commit/ab8aaf9)**: Add scan test case (#1586) <G. Ramalingam>
- **[c95357e](https://github.com/onnx/onnx/commit/c95357e)**: link the tutorial (#1650) <Lu Fang>
- **[d7e2420](https://github.com/onnx/onnx/commit/d7e2420)**: Upgrade label encoder to support more input types (#1596) <Wei-Sheng Chin>
- **[6425108](https://github.com/onnx/onnx/commit/6425108)**: Add Doc about Adding New Operator into ONNX (#1647) <Lu Fang>
- **[295889c](https://github.com/onnx/onnx/commit/295889c)**: use an empty initializer to create map (#1643) <Lu Fang>
- **[e38f3ec](https://github.com/onnx/onnx/commit/e38f3ec)**: Remove redundant const (#1639) <daquexian>
- **[ea694bf](https://github.com/onnx/onnx/commit/ea694bf)**: implement fuse reduce->unsqueeze + fix assumption in nop_dropout pass (#1565) <Armen>
- **[6db386e](https://github.com/onnx/onnx/commit/6db386e)**: make output shape clear enough for Softmax family (#1634) <Lu Fang>
- **[2b67c6e](https://github.com/onnx/onnx/commit/2b67c6e)**: fix batchnorm doc (#1633) <Lu Fang>
- **[c901784](https://github.com/onnx/onnx/commit/c901784)**: remove inappropriate consts (#1632) <Lu Fang>
- **[de82119](https://github.com/onnx/onnx/commit/de82119)**: Shape inference fix for broadcast, concat and scan (#1594) <KeDengMS>
- **[d7ffe3b](https://github.com/onnx/onnx/commit/d7ffe3b)**: Update Optimizer Docs (#1607) <Armen>
- **[d09d139](https://github.com/onnx/onnx/commit/d09d139)**: mark PROTOBUF_INCLUDE_DIRS as BUILD_INTERFACE (#1466) <Yuta Okamoto>
- **[eb4b7c2](https://github.com/onnx/onnx/commit/eb4b7c2)**: allow variadic parameters of different types (#1615) <G. Ramalingam>
- **[4166246](https://github.com/onnx/onnx/commit/4166246)**: Fix onnxifi test (#1617) <Yinghai Lu>
- **[6706a4d](https://github.com/onnx/onnx/commit/6706a4d)**: Fix a bug in vector address access (#1598) <Raymond Yang>
- **[ae39866](https://github.com/onnx/onnx/commit/ae39866)**: Separate types of inputs 1 and 2 in OneHot op. (#1610) <Spandan Tiwari>
- **[45ba661](https://github.com/onnx/onnx/commit/45ba661)**: Handle new types in the switch. (#1608) <Dmitri Smirnov>
- **[14853b6](https://github.com/onnx/onnx/commit/14853b6)**: Bump docker image version to 230 used in CircleCI (#1606) <bddppq>
- **[e0993b8](https://github.com/onnx/onnx/commit/e0993b8)**: [onnxifi] Make sure that backend handles run async. (#1599) <Roman Dzhabarov>
- **[e6965cc](https://github.com/onnx/onnx/commit/e6965cc)**: Introduce SparseTensor ML proto (#1554) <Dmitri Smirnov>
- **[75b782f](https://github.com/onnx/onnx/commit/75b782f)**: In driver test check the return status of onnxGetBackendIDs (#1597) <bddppq>
- **[c05b364](https://github.com/onnx/onnx/commit/c05b364)**: Make CI log less verbose (#1595) <bddppq>
- **[fa568e4](https://github.com/onnx/onnx/commit/fa568e4)**: Loop type shape inferencing (#1591) <Scott McKay>
- **[937e64c](https://github.com/onnx/onnx/commit/937e64c)**: add uint8 (#1590) <Lu Fang>
- **[f86e951](https://github.com/onnx/onnx/commit/f86e951)**: Add domain as an optional parameter for make_node function (#1588) <Young Kim>
- **[ff45588](https://github.com/onnx/onnx/commit/ff45588)**: Remove unreachable code in shape_inference.h (#1585) <Changming Sun>
- **[f7dcad0](https://github.com/onnx/onnx/commit/f7dcad0)**: Add several hyperbolic function ops. (#1499) <Sergii Dymchenko>
- **[a60ac7d](https://github.com/onnx/onnx/commit/a60ac7d)**: Add OneHot op to ONNX. (#1567) <Spandan Tiwari>
- **[f6c3a7e](https://github.com/onnx/onnx/commit/f6c3a7e)**: [compiler flag] Issue a warning if class has virtual method but missing virtual dtor. (#1583) <Roman Dzhabarov>
- **[88d1784](https://github.com/onnx/onnx/commit/88d1784)**: Fix MaxUnpool shape inference when output_shape is provided as input (#1578) <Spandan Tiwari>
- **[20041b7](https://github.com/onnx/onnx/commit/20041b7)**: Add type shape inferencing for the If operator (#1571) <Scott McKay>
- **[d6c4c75](https://github.com/onnx/onnx/commit/d6c4c75)**: Add a virtual destructor to GraphInferencer (#1574) <Changming Sun>
- **[a339598](https://github.com/onnx/onnx/commit/a339598)**: fix ConvTranspose spec (#1566) <Wenhao Hu>
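
The make_node change above (f86e951, onnx/onnx#1588) adds `domain` as an optional argument to `onnx.helper.make_node`. A minimal sketch of the resulting call, not taken from the PR itself; the op type and domain string are illustrative only:

```python
# Minimal sketch (assumes the onnx Python package is installed): with
# onnx/onnx#1588, "domain" can be passed straight to make_node instead of
# being assigned on the NodeProto afterwards.
from onnx import helper

node = helper.make_node(
    "MyCustomOp",          # hypothetical op registered under a custom domain
    inputs=["X"],
    outputs=["Y"],
    domain="com.example",  # optional domain argument added by #1588
)
print(node.domain)  # -> "com.example"
```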

Reviewed By: zrphercule

Differential Revision: D13263831

fbshipit-source-id: a2ff22c6454e2430429e5a7d18d21661a7ffb0cb
2018-11-29 16:31:56 -08:00
docs adapting caffe2 operator docs generator to pytorch url 2018-10-11 12:55:06 -07:00
examples Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
helpers
ideep Add "axis" and "axis_w" arguments in FC to support customized axis to reduce dim. (#12971) 2018-11-21 15:44:50 -08:00
layers Resubmit: Set the correct engine name for position weighted pooling when fp16 is used for training 2018-11-27 14:51:56 -08:00
mint move flags to c10 (#12144) 2018-10-04 02:09:56 -07:00
mkl separate mkl, mklml, and mkldnn (#12170) 2018-10-29 10:52:55 -07:00
modeling diagnose option: get_entry to print a whole row (#11308) 2018-09-06 21:26:30 -07:00
models Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
onnx Automatic update of fbcode/onnx to f461f7aad9987635b4aff108620ed7918f002d19 (#14568) 2018-11-29 16:31:56 -08:00
operator_test Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
predictor Caffe2: Fix for creating entries of external_input in predict_net (#12979) 2018-11-15 22:33:50 -08:00
rnn Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
serialized_test operator serialized test coverage summary document (#13703) 2018-11-09 15:04:08 -08:00
test Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
trt Clean up a couple of items in the C2 test scaffolding (WIP) (#7847) 2018-11-07 09:16:13 -08:00
__init__.py caffe2::DeviceType -> at::DeviceType (#11254) 2018-09-05 16:28:09 -07:00
_import_c_extension.py Completely remove build_aten and use_aten (#10469) 2018-08-20 20:26:42 -07:00
allcompare_test.py
attention.py [Caffe2] Update elementwise ops to support numpy-style broadcast (#8070) 2018-06-05 15:49:16 -07:00
benchmark_generator.py
binarysize.py
brew_test.py Move tanh function to math (#9328) 2018-07-11 13:59:50 -07:00
brew.py
build.py
cached_reader.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
caffe_translator_test.py Fix skip logic in caffe_translator_test.py (#13627) 2018-11-15 16:45:49 -08:00
caffe_translator.py Fix bug in caffe_translator tool (#10056) 2018-10-11 13:13:12 -07:00
checkpoint_test.py Revert D9566744: [New Checkpoint] Kill the dummy TaskOutput when task.get_step() (#11164) 2018-08-31 22:25:57 -07:00
checkpoint.py Create class constant for string literal 'blob_names' 2018-08-24 22:11:43 -07:00
CMakeLists.txt separate mkl, mklml, and mkldnn (#12170) 2018-10-29 10:52:55 -07:00
cnn.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
compatibility.py migrating deprecated calls without abc module for containers (#11515) 2018-09-13 15:09:22 -07:00
context_test.py
context.py Resolve name conflict of ContextManager (#7244) 2018-06-22 00:41:51 -04:00
control_ops_grad.py
control_ops_util.py
control_test.py
control.py
convert_test.py New serialization format (#12384) 2018-10-16 16:36:58 -07:00
convert.py New serialization format (#12384) 2018-10-16 16:36:58 -07:00
convnet_benchmarks_test.py
convnet_benchmarks.py
core_gradients_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
core_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
core.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
crf_predict.py Move crf in caffe2 from fb to oss (#12200) 2018-10-01 18:31:41 -07:00
crf_viterbi_test.py Move crf in caffe2 from fb to oss (#12200) 2018-10-01 18:31:41 -07:00
crf.py Productionize CRF layer in PyText (#10362) 2018-08-22 00:25:26 -07:00
data_parallel_model_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
data_parallel_model.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
data_workers_test.py
data_workers.py Fixed log message (#10874) 2018-09-05 09:55:52 -07:00
dataio_test.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
dataio.py Fixing stop condition on composite reader (#9888) 2018-08-20 03:02:20 -07:00
dataset.py Update from facebook (#7855) 2018-05-29 11:38:02 -07:00
db_file_reader.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
db_test.py
device_checker.py Update from facebook (#7451) 2018-05-10 23:14:27 -07:00
dlpack.h Upgrade DLPack 2018-11-12 15:59:46 -08:00
dyndep.py
embedding_generation_benchmark.py
experiment_util.py
extension_loader.py Completely remove build_aten and use_aten (#10469) 2018-08-20 20:26:42 -07:00
functional_test.py Add support for specifying device_option in Functional (#9619) 2018-07-24 14:41:59 -07:00
functional.py Caffe2 Functional enforcing inplace output (#10797) 2018-08-23 22:42:47 -07:00
fused_8bit_rowwise_conversion_ops_test.py
gradient_check_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
gradient_checker.py make the variable declaration closer to usage 2018-10-12 12:07:08 -07:00
gru_cell.py
hip_test_util.py Make CUDNN an alias of MIOPEN for HIP ops (#12278) 2018-10-24 17:07:31 -07:00
hsm_util.py
hypothesis_test_util.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
hypothesis_test.py Remove unsafecoalesce op (#12897) 2018-10-22 09:42:26 -07:00
ideep_test_util.py
layer_model_helper.py parallize the dense part in event models 2018-08-22 22:40:07 -07:00
layer_model_instantiator.py
layer_parameter_sharing_test.py Clean up a couple of items in the C2 test scaffolding (WIP) (#7847) 2018-11-07 09:16:13 -08:00
layer_test_util.py
layers_test.py Add Recency Weighted into SparseLookup (#14291) 2018-11-24 02:43:31 -08:00
lengths_reducer_fused_8bit_rowwise_ops_test.py
lengths_reducer_rowwise_8bit_ops_test.py
lstm_benchmark.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
memonger_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
memonger.py
mkl_test_util.py
model_device_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
model_helper_test.py keep net type info when generating model complete net (#11032) 2018-09-04 21:10:06 -07:00
model_helper.py Rename cuda_gpu_id to device_id in DeviceOption (#12456) 2018-10-09 15:54:04 -07:00
modifier_context.py
mpi_python.cc
muji_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
muji.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
net_builder_test.py
net_builder.py
net_drawer.py
net_printer_test.py
net_printer.py Rename cuda_gpu_id to device_id in DeviceOption (#12456) 2018-10-09 15:54:04 -07:00
nomnigraph_test.py nomnigraph - support subgraph visualization (#13795) 2018-11-16 08:19:20 -08:00
nomnigraph_transformations_test.py Add transpose network pass (#13437) 2018-11-01 14:27:07 -07:00
nomnigraph_transformations.py Add transpose network pass (#13437) 2018-11-01 14:27:07 -07:00
nomnigraph.py createUniqueDataNode 2018-10-31 11:16:38 -07:00
normalizer_context.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
normalizer_test.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
normalizer.py Enable alternative LayerNorm impl in FisherGan (#12178) 2018-10-11 17:36:11 -07:00
numa_benchmark.py Back out "Migrate DeviceOption.numa_node_id to DeviceOption.device_id" 2018-10-24 17:11:25 -07:00
numa_test.py Back out "Migrate DeviceOption.numa_node_id to DeviceOption.device_id" 2018-10-24 17:11:25 -07:00
observer_test.py
optimizer_context.py
optimizer_test_util.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
optimizer_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
optimizer.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
parallel_workers_test.py
parallel_workers.py Update from facebook (#7696) 2018-05-19 23:10:48 -07:00
parallelize_bmuf_distributed_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
pipeline_test.py
pipeline.py SNNTest with Data Preproc Service (#11707) 2018-09-17 21:25:49 -07:00
predictor_constants.py
pybind_state_dlpack.cc Upgrade DLPack 2018-11-12 15:59:46 -08:00
pybind_state_dlpack.h Upgrade DLPack 2018-11-12 15:59:46 -08:00
pybind_state_gpu.cc Renaming dims() to sizes() (caffe2/caffe2) - 4/4 2018-10-24 16:32:51 -07:00
pybind_state_hip.cc Change hip filename extension to .hip (#14036) 2018-11-16 11:55:59 -08:00
pybind_state_ideep.cc FeedTensor returns a Tensor (#14196) 2018-11-26 13:05:44 -08:00
pybind_state_int8.cc Renaming meta() to dtype() - 2/2 (#13334) 2018-10-30 18:24:30 -07:00
pybind_state_nomni.cc nomnigraph - support subgraph visualization (#13795) 2018-11-16 08:19:20 -08:00
pybind_state_registry.cc Move registry fully to c10 (#12077) 2018-09-27 03:09:54 -07:00
pybind_state_registry.h Move registry fully to c10 (#12077) 2018-09-27 03:09:54 -07:00
pybind_state.cc FeedTensor returns a Tensor (#14196) 2018-11-26 13:05:44 -08:00
pybind_state.h Change Tensor::CopyFrom to a simple double dispatch (#14268) 2018-11-28 15:45:37 -08:00
python_op_test.py Clean up a couple of items in the C2 test scaffolding (WIP) (#7847) 2018-11-07 09:16:13 -08:00
queue_util.py
record_queue.py
recurrent.py
regularizer_context.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
regularizer_test.py Add GroupL1Norm regularizer (#9115) 2018-07-06 13:26:09 -07:00
regularizer.py Add GroupL1Norm regularizer (#9115) 2018-07-06 13:26:09 -07:00
rnn_cell.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
schema_test.py Add util function from core type to dtype (#10716) 2018-08-21 10:55:19 -07:00
schema.py Add util function from core type to dtype (#10716) 2018-08-21 10:55:19 -07:00
scope_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
scope.py Update from Facebook (#8887) 2018-06-26 14:55:48 -07:00
session_test.py
session.py
sparse_to_dense_mask_test.py
sparse_to_dense_test.py
task.py Allowing TaskGroups to carry remote nets (#14342) 2018-11-27 13:34:11 -08:00
test_util.py Enable junk fill for the default CPU allocator (#13377) 2018-11-08 00:02:37 -08:00
text_file_reader.py
timeout_guard.py
toy_regression_test.py Enable junk fill for the default CPU allocator (#13377) 2018-11-08 00:02:37 -08:00
transformations_test.py nomnigraph - easy - some code cleanup for transformations_test (#12101) 2018-10-01 11:31:08 -07:00
transformations.py Enable Conv fusion optimizations in optimizeForIdeep (#9255) 2018-07-16 21:28:50 -07:00
tt_core_test.py
tt_core.py
utils_test.py Convert Arguments to dictionary (#13436) 2018-11-01 14:27:05 -07:00
utils.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
visualize.py
workspace_test.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00
workspace.py Unify cuda and hip device types in Caffe2 python front end (#14221) 2018-11-29 14:00:16 -08:00