Commit Graph

250 Commits

Author SHA1 Message Date
Jerry Zhang
890568a018 Tensor reinitialization codemod - 5/5 (#15884)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15884

Codemod generated with clangr shard mode, 25 files per diff.
To eliminate partially initialized Tensors, we split the initialization of local Tensor variables into two steps: first declare an uninitialized Tensor, then
call `ReinitializeTensor` to initialize it.
motivation: https://github.com/pytorch/pytorch/pull/12407
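
A minimal sketch of the before/after pattern, assuming the `ReinitializeTensor(Tensor*, dims, options)` helper declared in caffe2/core/tensor.h; the shape and dtype here are illustrative:

```cpp
#include "caffe2/core/tensor.h"

void Example() {
  // Before the codemod: constructed and allocated in one step, which can
  // leave a partially initialized Tensor on some code paths:
  //   caffe2::Tensor t({2, 3}, caffe2::CPU);

  // After the codemod: declare an uninitialized Tensor first, then
  // initialize it explicitly.
  caffe2::Tensor t; // uninitialized; no storage allocated yet
  caffe2::ReinitializeTensor(
      &t, {2, 3}, at::dtype<float>().device(caffe2::CPU));
}
```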

Reviewed By: hyuen

Differential Revision: D13586737

fbshipit-source-id: dc8e49e9f29505b8898bb19f84c1a983f2d811ab
2019-01-10 16:32:26 -08:00
Summer Deng
5af9aaa5bb Minor bug fix in dnnlowp (#15841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15841

Fix the bugs in dnnlowp to support int8/int16 quantization for sparsenn.

Reviewed By: jspark1105

Differential Revision: D13600878

fbshipit-source-id: 27f06d7c54a663208320c8f211714220a9b49540
2019-01-09 17:18:30 -08:00
Jongsoo Park
770b5ac42b clean up D13579188 (#15759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15759

Some flags had overly long names; this also includes a few other minor cleanups.

Reviewed By: jianyuh

Differential Revision: D13587353

fbshipit-source-id: f8aee7f167505644f5d8f80fe2eed70201ef1e54
2019-01-07 18:48:25 -08:00
Jongsoo Park
bc328d01e5 simplify conv dnnlowp ops by not allowing fp32 in/out (#15758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15758

DNNLOWP Conv operators became very complex due to the many options. This diff simplifies them by no longer allowing fp32 in/out. This is OK for Conv because Conv operators are usually used in deep networks, where quantizing and dequantizing with separate operators adds little overhead.

Reviewed By: csummersea

Differential Revision: D13587341

fbshipit-source-id: e88c919dae79d1c5b7d787ea539edf5bcb064afc
2019-01-07 15:14:59 -08:00
Jongsoo Park
c68eb5ec44 fix conv unit test for groupwise quantization and pre-packing (#15761)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15761

As title says.

Reviewed By: csummersea

Differential Revision: D13587727

fbshipit-source-id: f0631b8cbb89d65a1d952bc25b463de23de93bec
2019-01-07 11:08:32 -08:00
Jongsoo Park
ad0ef7ae48 remove dependency to fp32 batch permutation op (#15723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15723

As title says.

Reviewed By: jianyuh

Differential Revision: D13578604

fbshipit-source-id: 0da0ac31ae83c1e0daa9077e878feb4deffed6a3
2019-01-04 07:56:05 -08:00
Jongsoo Park
069d894145 make conv_depthwise_dnnlowp_op_test faster (#15725)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15725

As title says.

Reviewed By: jianyuh

Differential Revision: D13579188

fbshipit-source-id: 382072c95929ccf9e189e2338e35b046c4a0650f
2019-01-03 21:46:00 -08:00
Jongsoo Park
a923ea7cf0 disallow nbits_in_non_outlier == 0 in acc16 conv; option to fallback to acc32 (#15708)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15708

nbits_in_non_outlier == 0 doesn't make sense because it means everything is an outlier, so we could just use 32-bit accumulation.
Depending on the architecture, the break-even point between acc16 and acc32 can differ, so this adds thresholds for falling back to acc32.
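
A hedged sketch of the decision being added; `nbits_in_non_outlier` comes from the summary above, but the density-threshold name and the heuristic shape are illustrative, not the operator's exact arguments:

```cpp
#include <stdexcept>

// Illustrative only: returns true if 16-bit accumulation is worthwhile.
bool UseAcc16(int nbits_in_non_outlier, double outlier_density,
              double acc32_fallback_density) {
  if (nbits_in_non_outlier == 0) {
    // Everything would be an outlier, i.e. plain 32-bit accumulation;
    // this diff disallows the degenerate setting outright.
    throw std::invalid_argument("nbits_in_non_outlier must be > 0");
  }
  // Past the architecture-dependent break-even point, acc16 plus the
  // outlier pass is slower than plain acc32: fall back.
  return outlier_density <= acc32_fallback_density;
}
```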

Reviewed By: jianyuh

Differential Revision: D13574832

fbshipit-source-id: b7a37aacbfdc7867e31838dafcdd5f7c2ac282af
2019-01-03 20:31:33 -08:00
Jongsoo Park
1159302ab1 bug fix in 3d group conv (#15625)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15625

3D group conv (in both NCHW and NHWC layouts) was incorrect.
Added group=2 to test_1d_convolution and test_3d_convolution in conv_test.

Reviewed By: protonu

Differential Revision: D13562099

fbshipit-source-id: 586e8a7574a2764f2a3b559db6c2415b3ab90453
2019-01-03 09:46:49 -08:00
Jianyu Huang
3b5a940355 Unify the usage of Dequantize (#15685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15685

The declaration of "Dequantize" is in "fbsource/fbcode/deeplearning/fbgemm2/QuantUtils.h", so it requires the "namespace fbgemm".

<T> is actually optional, since the type can be deduced from the first argument.

In some places we have "Dequantize<T>(...)", while in other places we have "Dequantize(...)". We should unify them. As a reference, all occurrences of "Quantize" use "fbgemm::Quantize<T>(...)".
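
For illustration, the two call styles being unified; this is a sketch assuming the fbgemm::Dequantize declaration in QuantUtils.h as of this commit, and the exact signature may have since changed:

```cpp
#include <cstdint>
#include "fbgemm/QuantUtils.h"

float Example(std::uint8_t q,
              const fbgemm::TensorQuantizationParams& qparams) {
  // Explicit style, matching how Quantize is already called everywhere:
  float a = fbgemm::Dequantize<std::uint8_t>(q, qparams);
  // Deduced style: <T> comes from the first argument. Both compile;
  // this diff standardizes on one spelling.
  float b = fbgemm::Dequantize(q, qparams);
  return a + b;
}
```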

Reviewed By: jspark1105

Differential Revision: D13570847

fbshipit-source-id: 7fca9f7f9e4e0d9e5eb27ac44b8707adc3c80717
2019-01-02 21:32:46 -08:00
Xiaomeng Yang
56d945a1ca Add count_include_pad arg for average_pool_op on CPU (#15593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15593

Add count_include_pad arg for average_pool_op on CPU

Reviewed By: houseroad

Differential Revision: D13558123

fbshipit-source-id: 188879ec3af313105ff66ac0b5a81ea44fca2855
2018-12-30 04:16:47 -08:00
Jongsoo Park
d53012b4fe add NCHW2NHWC and NHWC2NCHW in utils.py (#15588)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15588

Use the NHWC2NCHW or NCHW2NHWC functions, which are easier to understand than code using transpose directly and generalize to non-2D convolutions.

Reviewed By: csummersea

Differential Revision: D13557674

fbshipit-source-id: c4fdb8850503ea58f6b17b188513ae2b29691ec0
2018-12-28 17:34:50 -08:00
Jongsoo Park
6a3e54eda9 append caffe2 prefix to dnnlowp cmd line options (#15582)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15582

Following the convention of having a caffe2_ prefix on command-line options.

Reviewed By: viswanathgs

Differential Revision: D13252055

fbshipit-source-id: 142a6395b832f211f34d0a87ec2d62c1e5fcdc69
2018-12-28 11:51:59 -08:00
Jerry Zhang
ed5b584f65 Tensor construction codemod(ResizeLike) - 7/7 (#15087)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15087

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: ezyang

Differential Revision: D13419765

fbshipit-source-id: 34d695309a66723281429610a12544598c507d74
2018-12-20 15:33:07 -08:00
Jianyu Huang
cd8dd49fba race condition fix of using mutable_data inside OPENMP region for batched matmul (#15371)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15371

Similar to D13387692:

Never call mutable_data from an OpenMP region!!!
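
A minimal sketch of the fix pattern, with illustrative shapes rather than the actual batched-matmul code. The point is that the first `mutable_data<T>()` call may allocate, and that allocation is not thread-safe:

```cpp
#include "caffe2/core/tensor.h"

void BatchedFill(caffe2::Tensor* Y, int batch_size, int per_batch) {
  // Call mutable_data once, before the parallel region: the first call
  // may allocate, and concurrent allocation is the race being fixed.
  float* Ydata = Y->mutable_data<float>();
#pragma omp parallel for
  for (int b = 0; b < batch_size; ++b) {
    // Threads only write through the pre-obtained raw pointer.
    for (int i = 0; i < per_batch; ++i) {
      Ydata[b * per_batch + i] = 0.0f;
    }
  }
}
```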

Reviewed By: jspark1105

Differential Revision: D13511259

fbshipit-source-id: 100812d2a547c0a1d5018749d5fdc88162375673
2018-12-18 23:22:56 -08:00
Jongsoo Park
fab78827d6 don't compile dnnlowp.cc in avx2 option (#15147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15147

A previous diff forgot to take dnnlowp.cc out of the avx2 list.

Reviewed By: dskhudia

Differential Revision: D13440686

fbshipit-source-id: 9ada98b6e885c7d5f22c91a735ff60304480b4cb
2018-12-12 18:57:09 -08:00
Brett Koonce
d8260239a0 docs: minor spelling tweaks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15148

Differential Revision: D13443708

Pulled By: suo

fbshipit-source-id: 5e3ec0afd3416ab8ce207f2d04105c49e1c04611
2018-12-12 18:17:14 -08:00
Zachary DeVito
92314c83fa re-enable copy of python files, but be careful that the copy is only … (#14982)
Summary:
…done once

This allows no-op builds to work correctly even when BUILD_CAFFE2_OPS is on.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14982

Differential Revision: D13413960

Pulled By: zdevito

fbshipit-source-id: 6e5412a8c375af8a47c76f548cdd31cff15f3853
2018-12-11 16:54:08 -08:00
Jerry Zhang
83f32eebd9 Tensor construction codemod - 2/3 (#14836)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14836

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: bddppq

Differential Revision: D13335176

fbshipit-source-id: 8d89510670e2cf70559d2f75e68f7181feb0b6d9
2018-12-10 19:30:56 -08:00
Jongsoo Park
4fcc2fffc3 unit test with multiple omp threads (#14958)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14958

Test with multiple threads

Reviewed By: jianyuh

Differential Revision: D13394791

fbshipit-source-id: 931a6c3bda15ebc816807e537dd0841c383e7a6f
2018-12-10 17:23:44 -08:00
Jongsoo Park
b039a715ce pre-pack operation of dnnlowp conv with 16-bit accumulation (#14881)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14881

This diff allows us to pre-quantize and pre-pack the weight matrix used in DNNLOWP_ACC16.
The intended use pattern is to run Int8ConvPackWeight in the init_net to generate a packed weight, which Int8Conv with the DNNLOWP_ACC16 engine then uses.
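
A hedged sketch of that two-net pattern, built directly on the caffe2 protobuf API; the operator and engine names come from the summary above, but the blob names and exact input layout are assumptions:

```cpp
#include "caffe2/proto/caffe2_pb.h"

void AddPrePackedConv(caffe2::NetDef* init_net, caffe2::NetDef* predict_net) {
  // init_net: quantize and pack the weight once, ahead of time.
  caffe2::OperatorDef* pack = init_net->add_op();
  pack->set_type("Int8ConvPackWeight");
  pack->set_engine("DNNLOWP_ACC16");
  pack->add_input("W");          // original weight
  pack->add_output("W_packed");  // pre-quantized, pre-packed weight

  // predict_net: the conv consumes the packed weight directly.
  caffe2::OperatorDef* conv = predict_net->add_op();
  conv->set_type("Int8Conv");
  conv->set_engine("DNNLOWP_ACC16");
  conv->add_input("X");
  conv->add_input("W_packed");
  conv->add_input("b");
  conv->add_output("Y");
}
```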

Reviewed By: csummersea

Differential Revision: D13374662

fbshipit-source-id: dd02b9a4eb7af1fe208aa857fcd0b445e6e395af
2018-12-10 01:08:21 -08:00
Jongsoo Park
a7b3197b2d race condition fix of calling mutable_data inside a openmp region (#14921)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14921

Fix a race condition introduced in D13188595.
Let's remind ourselves: "never call mutable_data from an OpenMP region!!!"

Reviewed By: jianyuh

Differential Revision: D13387692

fbshipit-source-id: 6a3aeedeeda55a9ede660de8f1f44d4eee76ae2b
2018-12-08 18:17:20 -08:00
Daya S Khudia
ca6311d909 File name change for FbgemmI8Depthwise.h and FbgemmI8Depthwise.cc (#14725)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14725

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/33

Renaming FbgemmI8Depthwise.h to FbgemmI8DepthwiseAvx2.h and FbgemmI8Depthwise.cc to FbgemmI8DepthwiseAvx2.cc, since FbgemmI8DepthwiseAvx2.cc will be compiled with avx2 flags.

Reviewed By: jianyuh

Differential Revision: D13313898

fbshipit-source-id: a8111eacf3d79a466ce0565bfe5f2f0b200a5c33
2018-12-05 13:14:48 -08:00
Daya S Khudia
f6354d903a Unit tests need better compilation flow (#14547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14547

Unit tests used in dnnlowp need a better compilation flow as some of them need avx. Disabling for now so that pytorch builds with fbgemm.

Reviewed By: jianyuh

Differential Revision: D13240933

fbshipit-source-id: e2e187b758c5d89e524470cd261ce35493f427a2
2018-11-30 09:40:29 -08:00
Jongsoo Park
c32debb916 fix build error from D13188595 (#14481)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14481

Fix build error in mode/opt

Reviewed By: dskhudia

Differential Revision: D13234688

fbshipit-source-id: 6c8515c45f75e7b88713a303f22990ad85d68beb
2018-11-28 10:46:33 -08:00
Jongsoo Park
e8754ee017 use fbgemm's im2col fusion and thread partitioning (#14350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14350

acc32 for now. Will have a separate diff for acc16, but that will need another output processing step that does sparse convolution without im2col.

Reviewed By: dskhudia

Differential Revision: D13188595

fbshipit-source-id: e8faee46c7ea43e4a600aecb8b8e93e6c860a8c8
2018-11-28 01:13:11 -08:00
Jongsoo Park
a3cfab2d63 per-group and per-channel quantization (#14340)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14340

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/25

Per-group and per-channel quantization in fbgemm.
This diff also cleans up explicit template instantiation using macro expansion.
It also changes the randFill interface, which previously made it easy to mistakenly generate integer random numbers for floating-point vectors.

Using this in DNNLOWP operators will be done in a separate diff.

Reviewed By: dskhudia

Differential Revision: D13176386

fbshipit-source-id: e46c53e31e21520bded71b8ed86e8b19e010e2dd
2018-11-27 10:17:34 -08:00
Jongsoo Park
80ba65e2f5 remove unnecessary zero_point argument from constructors (#14323)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14323

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/24

As title says.

Reviewed By: dskhudia

Differential Revision: D13167073

fbshipit-source-id: 6d6c526fd6e29a14e97f71a0881f28ada8703107
2018-11-26 11:48:17 -08:00
Jongsoo Park
90ed2f5aca minimize code compiled with avx2 and header includes from them (#14313)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14313

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/22

This diff is an attempt to minimize code compiled with avx2.

Reviewed By: dskhudia

Differential Revision: D13166591

fbshipit-source-id: 2be241141f6d7478b86a422953791e237ff10268
2018-11-26 11:09:21 -08:00
Jongsoo Park
fb8c3d62fe removing quantization utility functions moved to fbgemm (#14301)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14301

This diff removes quantization utility functions copied to fbgemm

Reviewed By: Maratyszcza

Differential Revision: D13159299

fbshipit-source-id: a7f3cd2af0aa241a8578d532a70a157da70d9289
2018-11-21 21:38:23 -08:00
Jongsoo Park
31ba34b73c fix comment on dnnlowp op arguments (#14265)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14265

Fix comment

Reviewed By: hx89

Differential Revision: D13152106

fbshipit-source-id: fbe98906963cbd5cb20a583a737a792fbc38292e
2018-11-21 09:39:57 -08:00
Jongsoo Park
9a281451ed remove unused parameters from caffe2_dnnlowp_utils.cc (#14164)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14164

See title

Reviewed By: csummersea

Differential Revision: D13115470

fbshipit-source-id: d754f558cd06e5f4c1cd00315e912cdb7b50731a
2018-11-20 00:56:06 -08:00
Jongsoo Park
3c2462cf24 use pragma once (#14163)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14163

Some of the names we were using to guard header files were too short (e.g., DYNAMIC_HISTOGRAM_H).
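
For example (a sketch): a short guard name can collide with an unrelated header's guard and silently hide declarations, whereas `#pragma once` keys on the file itself, so there is no name to collide:

```cpp
// Before: a guard name this short can collide with another library's
// dynamic_histogram.h, silently dropping whichever header comes second.
#ifndef DYNAMIC_HISTOGRAM_H
#define DYNAMIC_HISTOGRAM_H
// ... declarations ...
#endif // DYNAMIC_HISTOGRAM_H

// After: no guard name to collide with.
#pragma once
// ... declarations ...
```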

Reviewed By: csummersea

Differential Revision: D13115451

fbshipit-source-id: cef8c84c62922616ceea17effff7bdf8d67302a2
2018-11-20 00:56:04 -08:00
Jongsoo Park
4224ce10a8 format python files (#14161)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14161

Formatting using Nuclide

Reviewed By: hx89

Differential Revision: D13115348

fbshipit-source-id: 7432ce6072a1822d7287b4ebcfcb6309282e15ac
2018-11-20 00:56:02 -08:00
Jongsoo Park
3c0ce51484 clang-format (#14160)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14160

clang-format of C++ files

Reviewed By: hx89

Differential Revision: D13115201

fbshipit-source-id: d2ad65f66209e00578ef90f87f41272de2d24aa9
2018-11-20 00:56:00 -08:00
Daya S Khudia
c96b72d61f OSS build fix (#14192)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14192

We can only use C10_* in OSS. The build is only broken when built with USE_FBGEMM=ON.

Reviewed By: jianyuh

Differential Revision: D13121781

fbshipit-source-id: f0ee9a75997766e63e1da8a53de7ddb98296a171
2018-11-19 22:47:17 -08:00
Jongsoo Park
a036f9a65f Create README.md of caffe2/quantization/server
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14217

Reviewed By: csummersea

Differential Revision: D13135086

Pulled By: jspark1105

fbshipit-source-id: bddf4f1c2dc5ec8ea6ebe9e265956f367e082d52
2018-11-19 21:59:34 -08:00
Summer Deng
55b25365e9 Add ultra low precision options (#14133)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14133

Experiment with ultra low precisions on the Resnext-101 URU trunk model

Reviewed By: jspark1105

Differential Revision: D10108518

fbshipit-source-id: f04d74fbe1c9e75efafcd9845719bdb2efbbfe9c
2018-11-18 12:51:34 -08:00
Jongsoo Park
390bf1e779 remove unnecessary file from avx2 list (#14012)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14012

conv_dnnlowp_op.cc doesn't need avx2 anymore.

Reviewed By: dskhudia

Differential Revision: D13079665

fbshipit-source-id: dbfe8d2213de4969b6334d54de81d51149268cbd
2018-11-17 10:29:25 -08:00
Haixin Liu
bb404e7a32 Update atol scale in dnnlowp test (#14135)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14135

Update the atol scale of the dnnlowp test. Can't reproduce the flaky test error locally even after setting the same seed value, but according to the comments in check_quantized_results_close(), atol_scale should be 1/1.9 = 0.526315789473684, which is larger than the current value of 0.51. So increase atol_scale to 0.53.

Reviewed By: jspark1105

Differential Revision: D13108415

fbshipit-source-id: 1e8840659fdf0092f51b439cf499858795f9706a
2018-11-16 19:18:55 -08:00
Viswanath Sivakumar
037d6b697b Add ResizeNearest DNNLOWP op (#13940)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13940

As in title

Reviewed By: jspark1105

Differential Revision: D13054325

fbshipit-source-id: 81af5f095a1aca92d4b5e1fe0e71ae2f21b43922
2018-11-15 21:03:01 -08:00
Jongsoo Park
53c3a92a50 consistent rounding (#9)
Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/9

Pull Request resolved: https://github.com/pytorch/pytorch/pull/13960

The vectorized code was rounding halfway cases to even with _mm256_round_ps + (_MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC) (see https://software.intel.com/en-us/node/523819 for details), but we were still using std::round in a couple of places, which rounds halfway cases away from zero.
With this diff, we use std::nearbyint in all scalar code (except a few cases where we don't care about the exact rounding mode and use rint, which is the fastest in general) to be more consistent. nearbyint matches the vectorized code only when the current rounding mode is FE_TONEAREST, but in practice this is OK because we almost always use the default rounding mode, FE_TONEAREST.
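
A small self-contained demo of the halfway-case difference, assuming the default FE_TONEAREST rounding mode as the summary notes:

```cpp
#include <cfenv>
#include <cmath>
#include <cstdio>

int main() {
  std::fesetround(FE_TONEAREST); // the default mode in practice
  for (double x : {0.5, 1.5, 2.5, -0.5, -1.5}) {
    // std::round: ties away from zero      ->  1  2  3  -1  -2
    // std::nearbyint (FE_TONEAREST): ties
    // to even, matching _mm256_round_ps
    // with _MM_FROUND_TO_NEAREST_INT       ->  0  2  2  -0  -2
    std::printf("x=%5.1f  round=%5.1f  nearbyint=%5.1f\n",
                x, std::round(x), std::nearbyint(x));
  }
  return 0;
}
```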

This is inspired by Marat's diff for mobile quantization.

Reviewed By: dskhudia

Differential Revision: D13017719

fbshipit-source-id: 6b8f99db7ea2e233aa2e3bd2adf622e03ed6258e
2018-11-14 10:21:42 -08:00
Jongsoo Park
dead6632b3 bug fix for 1D conv in NHWC layout (#13813)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13813

Title says it all.

Reviewed By: hx89

Differential Revision: D13017652

fbshipit-source-id: e3cea6c7dee2878119d154bb9f3efbc329d7c0d5
2018-11-14 09:16:07 -08:00
Jongsoo Park
7f002008f1 remove ShouldFp32FallbackToNCHW (#13814)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13814

D10333829 implemented 3D conv in NHWC for fp32 ops, so int8 ops no longer need special handling.

Reviewed By: hx89

Differential Revision: D13017666

fbshipit-source-id: 41df449f5e21c4c7134cc5c480e559f8c247069b
2018-11-13 00:52:41 -08:00
Jongsoo Park
0bfbdcac89 fix bug in D13017777
Summary:
Mistakenly created an infinite recursive call.

(Note: this ignores all push blocking failures!)

Reviewed By: jianyuh

Differential Revision: D13038053

fbshipit-source-id: 8b760cb73b5369647d8ef651b8c196ac3f7af04d
2018-11-12 21:57:31 -08:00
Max Katsev
8de9564c12 Fix gcc-7 build in caffe2/caffe2/quantization/server/activation_distribution_observer.cc (#13799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13799

Fix broken operator=

Reviewed By: jspark1105

Differential Revision: D13014333

fbshipit-source-id: 6075906ecf0735bd9a74d57108036a33e1575df8
2018-11-12 14:52:51 -08:00
Jongsoo Park
309cc76469 BaseType:: -> this-> (#13817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13817

gcc 7 doesn't like BaseType::func<...>(); use this->func<...>() instead.
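
A minimal illustration with hypothetical names; note that when the member is itself a template, the standard-conforming spelling of the dependent call also needs the `template` keyword, as sketched here:

```cpp
template <typename T>
struct Base {
  template <int N>
  int Func() { return N; }
};

template <typename T>
struct Derived : Base<T> {
  using BaseType = Base<T>;
  int Run() {
    // gcc 7 rejects the qualified spelling of this dependent call:
    //   return BaseType::Func<3>();
    // Going through this-> (with the template keyword, since Func is a
    // member template of a dependent base) compiles:
    return this->template Func<3>();
  }
};

int main() { return Derived<int>().Run(); }
```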

Reviewed By: hx89

Differential Revision: D13017777

fbshipit-source-id: 0cf68d459b44379b1c103cf74382857db9a91bef
2018-11-12 12:51:12 -08:00
Jianyu Huang
2ee4ef5290 Change all namespace fbgemm2 in the new fbgemm2 to namespace fbgemm (#13740)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13740

We would like to rename the old fbgemm to “fbgemm0”, and the new fbgemm2 to “fbgemm”:

This diff changes all namespace fbgemm2 to namespace fbgemm.

The purpose is to avoid confusion over "fbgemm2" when we release FBGEMM as open source.

Reviewed By: jspark1105

Differential Revision: D12850449

fbshipit-source-id: 08cc47864b157e36fbceddb7a10bf26218c67bd8
2018-11-08 19:59:12 -08:00
Jianyu Huang
55964abb11 Change all namespace fbgemm in the old fbgemm to namespace fbgemm0 (#13701)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13701

We would like to rename the old fbgemm to “fbgemm0”, and the new fbgemm2 to “fbgemm”:

This diff changes all namespace fbgemm to namespace fbgemm0.

Reviewed By: jspark1105

Differential Revision: D12848727

fbshipit-source-id: 47935e9e2c4714a7ce1bfc3f7e4d6a334130132e
2018-11-08 19:59:10 -08:00
Jongsoo Park
90ea61800f operators/quantized/server -> quantization/server (#13660)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13660

Any change to a server-side quantized operator was triggering ios-sanity-check, with more than 5 hours of testing time. I suspect this was because the operator code was synced with the xplat directory. This diff moves the server-side quantized operators to caffe2/caffe2/quantization/server to avoid the issue.

Reviewed By: hx89

Differential Revision: D12955420

fbshipit-source-id: b6c824b9de5e2a696f8c748e1b2c77d81d46746b
2018-11-07 22:54:13 -08:00