Commit Graph

42 Commits

Author SHA1 Message Date
James Reed
05a1644ce3 Fix BC for quantized linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30481

Test Plan: Imported from OSS

Differential Revision: D18714602

Pulled By: jamesr66a

fbshipit-source-id: d51206c22cf2446e98053446789c6324c0481321
2019-11-26 17:38:09 -08:00
Xiaomeng Yang
c12f9a12a8 Fix quantized ConvReLU3d test (#30266)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30266

Fix quantized ConvReLU3d test

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: hl475

Differential Revision: D18645717

fbshipit-source-id: bbe93f9daf5046f2aa05363efc7d0e59eaff37bf
2019-11-25 14:52:32 -08:00
James Reed
97fae401f0 Use LinearPackedParams everywhere
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30198

Test Plan: Imported from OSS

Differential Revision: D18628003

Pulled By: jamesr66a

fbshipit-source-id: 76ff0248fd859e805a15cde555d26dd2138636fa
2019-11-22 11:31:17 -08:00
Xiaomeng Yang
510ef4b63a Add nn.quantized.Conv3d (#29813)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29813

Add nn.quantized.Conv3d
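A minimal usage sketch of the new module (assumes an FBGEMM-enabled build, since quantized Conv3d is only implemented for the fbgemm engine; scale/zero_point values are illustrative):

```python
import torch

# Select the fbgemm backend (the only engine with quantized Conv3d kernels).
torch.backends.quantized.engine = 'fbgemm'

m = torch.nn.quantized.Conv3d(3, 8, kernel_size=3)
x = torch.quantize_per_tensor(torch.randn(1, 3, 8, 8, 8),
                              scale=0.1, zero_point=0, dtype=torch.quint8)
y = m(x)  # quantized NCDHW output
```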

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: jianyuh

Differential Revision: D18467749

fbshipit-source-id: 892f708179e9e836ad902851ac1838847009da15
2019-11-15 04:33:40 -08:00
Xiaomeng Yang
bf80664515 Add quantized conv3d function (#29686)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29686

Add quantized conv3d function

Test Plan: buck test mode/dev-nosan //caffe2/test:quantized -- "conv"

Reviewed By: hl475

Differential Revision: D18463090

fbshipit-source-id: f9c3d2920c3fc015bbb2b6a583a582c9f8397b08
2019-11-14 03:04:51 -08:00
Jianyu Huang
bbff06ee96 Convert conv_prepack to conv2d_prepack and conv_unpack to conv2d_unpack (#29529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29529

Pull Request resolved: https://github.com/pytorch/glow/pull/3771

We would like to replace `conv_prepack` with `conv2d_prepack` and `conv_unpack` with `conv2d_unpack`.

This makes the naming consistent between 2D and 3D conv:
```
torch.ops.quantized.conv2d_prepack
torch.ops.quantized.conv2d_unpack
torch.ops.quantized.conv2d
torch.ops.quantized.conv3d_prepack
torch.ops.quantized.conv3d_unpack
torch.ops.quantized.conv3d
```

We should do this sooner rather than later, before the quantized conv2d ops gain more users; it makes for better engineering.

The replacement bash commands are as follows:
```
find ./ -type f -exec sed -i -e 's/quantized::conv_prepack/quantized::conv2d_prepack/g' {} \;
find ./ -type f -exec sed -i -e 's/quantized::conv_unpack/quantized::conv2d_unpack/g' {} \;
find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_prepack/torch.ops.quantized.conv2d_prepack/g' {} \;
find ./ -type f -exec sed -i -e 's/torch.ops.quantized.conv_unpack/torch.ops.quantized.conv2d_unpack/g' {} \;
```
ghstack-source-id: 93661879

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D18421079

fbshipit-source-id: 17ae8b1ee79223bd2c5d4bbccd57af6580c4ab12
2019-11-11 21:54:10 -08:00
James Reed
a423817055 Fix reprs for _intrinsic modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27184

Test Plan: Imported from OSS

Differential Revision: D17717481

Pulled By: jamesr66a

fbshipit-source-id: 4bd72bcd42191d9b21d03f5bb6698198dbffffda
2019-10-02 19:55:49 -07:00
Zafar Takhirov
27dc595215 Rename _intrinsic to intrinsic
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27194

Test Plan: Imported from OSS

Differential Revision: D17704957

Pulled By: zafartahirov

fbshipit-source-id: 46f02d129aa77c3047b2a6c606bfadd831a6b0fc
2019-10-02 18:53:06 -07:00
Supriya Rao
b805b5dab8 Unify quantized conv and linear tests (#26992)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26992

Run the same test for FBGEMM and QNNPACK backends.
Checks that QNNPACK or FBGEMM is supported (via supported_qengines) before running.
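A sketch of the unified pattern (illustrative, not the actual test body from this PR): run the same assertions once per available backend.

```python
import torch

# Iterate over the engines this build supports and run one shared test body.
for engine in torch.backends.quantized.supported_qengines:
    torch.backends.quantized.engine = engine
    x = torch.quantize_per_tensor(torch.randn(4, 8), 0.1, 0, torch.quint8)
    assert x.dequantize().shape == (4, 8)
```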

Test Plan:
python test/test_quantized.py TestQuantizedLinear
python test/test_quantized.py TestQuantizedConv
python test/test_quantized_models.py
python test/test_quantized_nn_mods.py

Imported from OSS

Differential Revision: D17689171

fbshipit-source-id: e11c0a5e41f5f4e6836a614a5b61e4db3c5e384b
2019-10-01 14:07:16 -07:00
James Reed
4d7bec5f3e Improve repr for quantized modules
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27008

Test Plan: Imported from OSS

Differential Revision: D17649174

Pulled By: jamesr66a

fbshipit-source-id: e3e6c4bb31e1ad8ed1ebe27f803f90d564ecfe53
2019-09-28 15:15:14 -07:00
Raghuraman Krishnamoorthi
2ccbdb79c8 Per-channel baseline (#26516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26516

ghstack-source-id: 90982010

Test Plan:
Integrate per-channel support into conv and linear modules (a brief usage sketch follows the test list).
The following tests pass:
buck test caffe2/test:quantized -- 'test_linear_api \(test_quantized_nn_mods\.ModuleAPITest\)' --print-passing-details

buck test caffe2/test:quantized -- 'test_conv_api \(test_quantized_nn_mods\.ModuleAPITest\)' --print-passing-details

buck test caffe2/test:quantized -- 'test_float_quant_compare_per_channel \(test_quantized_models\.ModelNumerics\)' --print-passing-details
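A minimal sketch of per-channel quantization, written with the present-day op name torch.quantize_per_channel (illustrative; not this PR's code):

```python
import torch

# One (scale, zero_point) pair per output channel of the weight (axis 0).
w = torch.randn(4, 8)
scales = (w.abs().max(dim=1).values / 127.0).clamp(min=1e-8)
zero_points = torch.zeros(4, dtype=torch.int64)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0, dtype=torch.qint8)
```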

Differential Revision: D17342622

fbshipit-source-id: f0d618928e3d9348672c589a6b7a47049c372a2e
2019-09-28 14:05:06 -07:00
Dmytro Dzhulgakov
764bf826e3 Remove fbgemm_is_cpu_supported in favor of torch.backends.quantized.supported_qengines (#26840)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26840

Cleaning up the top-level namespace. Also cosmetic changes to torch.backends.quantized.
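A sketch of the replacement check (assumed usage, not code from this PR):

```python
import torch

# Query the available engines instead of calling the removed
# torch.fbgemm_is_cpu_supported().
if 'fbgemm' in torch.backends.quantized.supported_qengines:
    torch.backends.quantized.engine = 'fbgemm'
```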

Test Plan: Imported from OSS

Differential Revision: D17604403

Pulled By: dzhulgakov

fbshipit-source-id: c55af277ea7319d962a82a6120f65ccd47a60abc
2019-09-27 13:45:15 -07:00
Dmytro Dzhulgakov
0a8a779abe Add more inplace arguments to quantization top level API (#26782)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26782

At least we should be consistent across the top-level APIs and prepare/convert/etc.

The logic defaults to inplace=False, but the top-level APIs take care of minimizing copies.

Also renames always-inplace methods like add_observer to end with an underscore.

One fix for MinMaxObserver was triggered by deepcopy, which surfaced that we were accidentally keeping autograd state around.
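A minimal sketch of the inplace argument on the eager-mode top-level API (assumes a build with a quantized engine available; the model and calibration step are illustrative):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 4))
model.qconfig = torch.quantization.default_qconfig

prepared = torch.quantization.prepare(model, inplace=False)   # returns a copy
prepared(torch.randn(2, 8))                                   # calibrate observers
quantized = torch.quantization.convert(prepared, inplace=False)
```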

Test Plan: Imported from OSS

Differential Revision: D17595956

Pulled By: dzhulgakov

fbshipit-source-id: 801f9f5536b553f24c7a660064dd6fce685edd65
2019-09-26 00:07:07 -07:00
James Reed
df16fb9ca1 Throw if someone tries to torch.save() quantized modules (#26828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26828

Pickle serialization for quantized modules is currently broken by https://github.com/pytorch/pytorch/issues/24045, so let's be loud and fail if the user tries to do it.
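A hypothetical illustration of the "be loud and fail" behavior (not the actual change in this PR): make pickling, and hence torch.save(), raise.

```python
# Hypothetical sketch: a module whose pickling fails loudly.
class NoPickleModule:
    def __reduce__(self):
        raise RuntimeError(
            "torch.save() of quantized modules is not supported yet; "
            "see https://github.com/pytorch/pytorch/issues/24045")
```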

Test Plan: Imported from OSS

Differential Revision: D17579127

Pulled By: jamesr66a

fbshipit-source-id: 3deccac7e4590c6f648f22bb79c57badf3bf0487
2019-09-25 19:55:17 -07:00
Jerry Zhang
254122dd4e quantize_linear -> quantize_per_tensor (#26574)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26574

Since we also have `quantized::linear`, `quantize_linear` sounds confusing, so we plan to rename it before the branch cut.
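The renamed op in use (values are illustrative):

```python
import torch

# Per-tensor affine quantization of a float tensor.
x = torch.randn(2, 3)
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=128, dtype=torch.quint8)
print(qx.int_repr())  # underlying uint8 values
```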

Test Plan:
ci

Imported from OSS

Differential Revision: D17514876

fbshipit-source-id: 01d9005e6ec8cb9950b9d8bba122109c389641d3
2019-09-20 21:58:48 -07:00
Dmytro Dzhulgakov
af64789cfa Fold activation permutation inside quantized conv operator (#26242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26242

According to https://github.com/pytorch/pytorch/issues/19092 we always keep NCHW order and do the handling inside the kernels. This PR fixes it for the activations of qconv by using the MemoryLayout mechanism: activations stay logically NCHW but are strided as NHWC.

Note that this version is more aggressive than the eventual MemoryLayout mechanism: the QConv output is always NHWC regardless of the input striding. I think that's ok, as we don't have NCHW quantized kernels anyway (so the very first conv would magically switch the order), but I'm open to suggestions. Btw, it doesn't change behavior; the same happens today in master because of the explicit permute() call.
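"Logically NCHW but strided as NHWC" in miniature, using the present-day memory_format API (which postdates this PR):

```python
import torch

# Sizes keep the NCHW order while the strides make channels the
# fastest-varying dimension.
x = torch.randn(1, 3, 4, 4).contiguous(memory_format=torch.channels_last)
print(x.shape)    # torch.Size([1, 3, 4, 4])
print(x.stride()) # (48, 1, 12, 3)
```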

Test Plan: Imported from OSS

Differential Revision: D17443218

Pulled By: dzhulgakov

fbshipit-source-id: cfd136ae0465acd8d8c26ffad87385dac9c88726
2019-09-19 13:39:26 -07:00
Dmytro Dzhulgakov
d5daac7223 Fold weight permutation inside quantized conv operator (#26241)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26241

According to https://github.com/pytorch/pytorch/issues/19092 we always keep NCHW order and do the handling inside the kernels. This PR fixes it for the weights of qconv by using the MemoryLayout mechanism.

Test Plan: Imported from OSS

Differential Revision: D17443219

Pulled By: dzhulgakov

fbshipit-source-id: ce0eb92034a9977b3303dafab8b0414575171062
2019-09-19 13:39:22 -07:00
Daya Khudia
2b52c1d982 Dynamic quantization for bias. (#26057)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26057

Bias is now unquantized (i.e., floating-point) for qconv and qlinear; it is dynamically quantized by fbgemm, as sketched below.
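The scheme in miniature (a sketch of the math, not the fbgemm code): bias stays float in the packed params and is quantized on the fly to int32 with bias_scale = input_scale * weight_scale and zero_point = 0.

```python
import torch

input_scale, weight_scale = 0.1, 0.05  # illustrative values
bias = torch.randn(8)                  # stored as float
q_bias = torch.round(bias / (input_scale * weight_scale)).to(torch.int32)
```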

TODO: Add some performance numbers.

Tests:

test:quantization
```
Summary (total time 8.41s):
  PASS: 24
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
More details at https://our.intern.facebook.com/intern/buck/build/74d5f6f7-55c9-4350-a618-2013042fffd8
```

test:quantized
```
Summary (total time 13.21s):
  PASS: 43
  FAIL: 0
  SKIP: 5
    caffe2/test:quantized - test_qnnpack_maxpool2d (test_quantized.TestQNNPackOps)
    caffe2/test:quantized - test_compare_tensor_scalar (test_quantized.TestComparatorOps)
    caffe2/test:quantized - test_qnnpack_linear (test_quantized.TestQNNPackOps)
    caffe2/test:quantized - test_qnnpack_relu (test_quantized.TestQNNPackOps)
    caffe2/test:quantized - test_qnnpack_add (test_quantized.TestQNNPackOps)
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0
```
ghstack-source-id: 90166254

Test Plan:
buck test mode/dev caffe2/test:quantization

buck test mode/dev caffe2/test:quantized

Differential Revision: D17328028

fbshipit-source-id: d4a163d730d0f4a03e8e0faf7420710cf36eec09
2019-09-16 14:43:06 -07:00
Jianyu Huang
ead14a6bd4 Use BytesIO instead of tempfile (#25976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25976

As recommended in https://github.com/pytorch/pytorch/pull/25877/files#r322956051:

> We should move more of these toward using BytesIO. Using files in tests is generally considered bad practice because it introduces syscalls and dependencies on the execution environment, and thus can cause test flakiness/instability.
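The recommended pattern looks like this (a generic round-trip, not the PR's test code):

```python
import io

import torch

# Serialize to an in-memory buffer instead of a temp file, avoiding
# filesystem syscalls in tests.
buf = io.BytesIO()
torch.save(torch.randn(3), buf)
buf.seek(0)
restored = torch.load(buf)
```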
ghstack-source-id: 89929947

Test Plan: CI

Differential Revision: D17310441

fbshipit-source-id: ba97cce4224225df45ff44062f1bc8ebefb25922
2019-09-11 19:35:49 -07:00
Supriya Rao
c60dddbb9f Store bias in PackedConvWeight in fbgemm (#25626)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25626

Add bias as an optional parameter in the packed conv weight struct.
ghstack-source-id: 89780639

Test Plan: python test/run_test.py --exclude nn --verbose --bring-to-front quantization quantized quantized_tensor quantized_nn_mods quantizer

Reviewed By: raghuramank100

Differential Revision: D17177723

fbshipit-source-id: e502f2196cb1c002db8b691124db740368944c92
2019-09-10 08:43:55 -07:00
Supriya Rao
9d2d31e626 Store bias in PackedLinearWeight struct in fbgemm (#25428)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25428

Added bias as an optional param to the quantized_linear_prepack function.
Bias is quantized during runtime using input scale and weight scale.
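A sketch of the packed params carrying an optional float bias (assumes an FBGEMM-enabled build; op names per the present-day quantized dispatcher):

```python
import torch

torch.backends.quantized.engine = 'fbgemm'

qw = torch.quantize_per_tensor(torch.randn(8, 16), 0.05, 0, torch.qint8)
bias = torch.randn(8)  # stays float; quantized at runtime from the scales
packed = torch.ops.quantized.linear_prepack(qw, bias)
```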
ghstack-source-id: 89601399

Test Plan: python test/run_test.py --exclude nn --verbose --bring-to-front quantization quantized quantized_tensor quantized_nn_mods quantizer

Differential Revision: D17121304

fbshipit-source-id: 8adb0e55e4aed0a5430aaa2c8639c8ad1639c85a
2019-09-06 08:37:34 -07:00
Supriya Rao
61819260f7 Rename FBGEMM quantized operators to generic quantized ops (#25678)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25678

As an effort to unify fbgemm and qnnpack at the dispatcher level, we need a generic name for the quantized backend ops.
Currently FBGEMM is guarded by the USE_FBGEMM macro and QNNPACK uses USE_QNNPACK.
ghstack-source-id: 89518961

Test Plan: buck test caffe2/test:quantized

Differential Revision: D17194364

fbshipit-source-id: 5960aedff6b8cb89eb3872c39b74caf54c0fbf20
2019-09-05 10:13:08 -07:00
Jerry Zhang
76b6b1b1a6 move no_deadline to hypothesis_utils.py (#25598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25598

att

Test Plan:
CI

Imported from OSS

Differential Revision: D17192467

fbshipit-source-id: 9ee93b02cc293bb71ed114534d92eedda3ddee88
2019-09-04 17:06:33 -07:00
Edward Yang
55da02a86d Revert D17097735: [quantization] Rename fbgemm quantized operators to generic quantized ops
Test Plan: revert-hammer

Differential Revision:
D17097735

Original commit changeset: 447112a7a421

fbshipit-source-id: 78368b6f84d96cea70692fb000cebe99602a08c1
2019-09-04 15:02:32 -07:00
Supriya Rao
c9ba5186d3 Rename fbgemm quantized operators to generic quantized ops (#25338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25338

As an effort to unify fbgemm and qnnpack at the dispatcher level, we need a generic name for the quantized backend ops.
Currently FBGEMM is guarded by the USE_FBGEMM macro and QNNPACK uses USE_QNNPACK.

TBD: use a compile-time macro or runtime dispatch to switch between fbgemm and qnnpack.
ghstack-source-id: 89454244

Test Plan: buck test caffe2/test:quantized

Differential Revision: D17097735

fbshipit-source-id: 447112a7a421387724d3e29b8fd8412dfb1c373a
2019-09-04 14:27:27 -07:00
Zafar Takhirov
e44c09ecae making quant utilities inplace
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25054

Test Plan: Imported from OSS

Differential Revision: D16974198

Pulled By: zafartahirov

fbshipit-source-id: 54befc8429990adafe746d1255d117fca5f12e11
2019-08-29 16:03:13 -07:00
Zafar Takhirov
e8acc2ebb1 Removing future imports from the test fixtures.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25296

Test Plan: Imported from OSS

Differential Revision: D17090201

Pulled By: zafartahirov

fbshipit-source-id: 5a4f6ac0ea475b55d2c610e2f9f4f0cef8690e8f
2019-08-29 01:39:59 -07:00
Raghuraman Krishnamoorthi
9945c0cea6 Work around for bias quantization for conv and linear operators (#25212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25212

In eager mode, all modules need to work with input tensors whose qparams can change dynamically. Issue https://github.com/pytorch/pytorch/issues/23874 will address this via FBGEMM modifications; this is a workaround until then.
ghstack-source-id: 89118038

Test Plan:
buck test caffe2/test:quantized -- 'test_conv_api \(test_quantized_nn_mods\.ModuleAPITest\)' --print-passing-details
Summary (total time 65.86s):
  PASS: 1
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0

Differential Revision: D17064471

fbshipit-source-id: 3c192442b19bf2d9d88d4e52de6c24dc134a846f
2019-08-28 07:24:03 -07:00
Raghuraman Krishnamoorthi
26a438d4fb Revert D16852280: Work around for bias quantization for conv and linear operators
Test Plan: revert-hammer

Differential Revision:
D16852280

Original commit changeset: 988f8ff91616

fbshipit-source-id: e2cf03e13dc8dcf0db22d43740d72fd8b069fd74
2019-08-26 16:25:33 -07:00
Raghuraman Krishnamoorthi
ea601d90d6 Work around for bias quantization for conv and linear operators (#24789)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24789

In eager mode, all modules need to work with input tensors whose qparams can change dynamically. Issue https://github.com/pytorch/pytorch/issues/23874 will address this via FBGEMM modifications; this is a workaround until then.
ghstack-source-id: 89003798

Test Plan:
buck test caffe2/test:quantized -- 'test_conv_api \(test_quantized_nn_mods\.ModuleAPITest\)' --print-passing-details
Summary (total time 65.86s):
  PASS: 1
  FAIL: 0
  SKIP: 0
  FATAL: 0
  TIMEOUT: 0
  OMIT: 0

Differential Revision: D16852280

fbshipit-source-id: 988f8ff91616eddf511e71926aa7d2d0f1938188
2019-08-26 12:16:42 -07:00
Zafar Takhirov
a99a4485fa Added relu6 kernel (#24799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24799

Differential Revision: D16875493

Test Plan: Imported from OSS

Pulled By: zafartahirov

fbshipit-source-id: 0d256db193c6a8e0d37dbdf6cf35dd031fd4ec6c
2019-08-21 13:57:00 -07:00
Jianyu Huang
6cf14361f4 Add the default_weight_observer for the dynamic quantization path (#24231)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24231

As suggested in https://github.com/pytorch/pytorch/pull/23128#discussion_r309528932, we will add a default weight observer for the dynamic quantization path.

We need to move `observer` and `qconfig` to a separate namespace.
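A sketch of the observer being added (names per the torch.quantization namespace; usage illustrative):

```python
import torch

# default_weight_observer records the min/max of a weight tensor and derives
# symmetric qint8 qparams from them.
obs = torch.quantization.default_weight_observer()
obs(torch.randn(16, 8))                      # observe the weight
scale, zero_point = obs.calculate_qparams()  # derived qparams
```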
ghstack-source-id: 88583658

Differential Revision: D16781092

fbshipit-source-id: 5cd59c881a7f98b82704ca318b1e63650d73062a
2019-08-19 14:54:22 -07:00
James Reed
a0b13b4fa5 extra_repr for quantized modules (#24443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24443

This gives us useful information about the Module when we print it, like so:

```
FloatModule(
  (quant): Quantize()
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1), scale=0.08209919929504395, zero_point=128)
  (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1), scale=0.16885940730571747, zero_point=128)
  (fc1): Linear(in_features=800, out_features=500, bias=True, scale=0.12840059399604797, zero_point=128)
  (fc2): Linear(in_features=500, out_features=10, bias=True, scale=0.260015606880188, zero_point=128)
  (dequant): DeQuantize()
)
```
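For reference, a minimal sketch of the `extra_repr` hook that produces output like the above (illustrative; not the exact PyTorch implementation):

```python
import torch

class FakeQuantizedLinear(torch.nn.Module):
    """Toy module showing how scale/zero_point end up in the repr."""
    def __init__(self, in_features, out_features, scale, zero_point):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.scale, self.zero_point = scale, zero_point

    def extra_repr(self):
        return (f"in_features={self.in_features}, out_features={self.out_features}, "
                f"scale={self.scale}, zero_point={self.zero_point}")

print(FakeQuantizedLinear(800, 500, 0.128, 128))
```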

Test Plan: Imported from OSS

Differential Revision: D16847140

Pulled By: jamesr66a

fbshipit-source-id: 8c995108f17ed1b086d1fb30471a41c532c68080
2019-08-16 22:38:45 -07:00
Zafar Takhirov
dd97743de7 Enables inplace in the quantized relu (#24374)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24374

This is a duplicate PR to bring back #23704, with diff revision D16634539.

Test Plan: Imported from OSS

Differential Revision: D16818664

Pulled By: zafartahirov

fbshipit-source-id: c8f7965356555a6a995eaeea6820ea62cbbea6fd
2019-08-16 16:53:09 -07:00
Jianyu Huang
b10a3e916f Remove redundant assignment (#24408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24408

As Title says.
ghstack-source-id: 88388745

Differential Revision: D16830709

fbshipit-source-id: 87eafcd3236abcec94cf87009fc705ad26d87eca
2019-08-15 13:38:33 -07:00
James Reed
de58df4c6f JIT trace testing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23987

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D16744208

Pulled By: jamesr66a

fbshipit-source-id: 8e65898cc8edebcc46b862e3d33f85071d701a04
2019-08-14 22:11:32 -07:00
James Reed
a919fc3704 test {__init__,from_float} on nnq{,d}.Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24364

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D16812543

Pulled By: jamesr66a

fbshipit-source-id: be05a658fa4562f3fcf3548e30b1fe9a77d1151c
2019-08-14 17:42:23 -07:00
James Reed
7afe0a8c6d no_deadline on ModuleAPITests and skip on dynamic quantization test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24307

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D16800749

Pulled By: jamesr66a

fbshipit-source-id: 7ce466794c13d598b4396bd33fcdcffb57bac1cb
2019-08-13 23:27:15 -07:00
James Reed
93d2cd7619 Skip test_quantized_nn_mods tests if there's no FBGEMM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24302

Test Plan: Imported from OSS

Differential Revision: D16800352

Pulled By: jamesr66a

fbshipit-source-id: 56650d8c937afca77005ad39a5bc38ebd6e71414
2019-08-13 21:23:19 -07:00
Jianyu Huang
e94ba742b0 Dynamic Quantized Linear Module (#23128)
Summary:
- ~~Add a unit test for the Dynamic Quantized Linear operator (```torch.fbgemm_linear_quantize_weight```, ```torch.fbgemm_pack_quantized_matrix```, and ```torch.fbgemm_linear_int8_weight```) in ```test_quantized.py```.~~ Move this to D16404027 for a separate review.
- Add the Dynamic Quantized Linear module in ```torch/nn/quantized/modules/linear.py```. ~~This is in a rudimentary stage. Will add more functions later~~. (A usage sketch follows this list.)
- Add the torch.quantize logic (prepare, eval, convert) for dynamic quantization.
- Add a unit test for the Dynamic Quantized Linear module  in ```test_nn_quantized.py```.
- Add a unit test for the Model-level Quantization API
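A usage sketch of the dynamic path, written against the present-day top-level API (torch.quantization.quantize_dynamic postdates this PR; assumes a build with a quantized engine):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(16, 8))
# Quantize Linear weights ahead of time; activations are quantized on the fly.
qmodel = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
out = qmodel(torch.randn(2, 16))
```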

Pull Request resolved: https://github.com/pytorch/pytorch/pull/23128
ghstack-source-id: 88257232

Differential Revision: D16258664

fbshipit-source-id: 4be3ac39ee27c088b341c741d3f09f51d5a23ef0
2019-08-13 21:01:23 -07:00
James Reed
4e0af295c1 Fix and test conv2d constructor and from_float
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24277

Test Plan: Imported from OSS

Differential Revision: D16793043

Pulled By: jamesr66a

fbshipit-source-id: bbf74c87aa11adfe15e31ea8190e7542b8127c65
2019-08-13 17:07:19 -07:00
James Reed
e7f1977bae test_nn_quantized -> test_quantized_nn_mods (#24201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24201

It turns out that the `run_test` script uses a blacklist of "exclude" tests and checks whether a test name [starts with](https://github.com/pytorch/pytorch/blob/master/test/run_test.py#L342) a given blacklist item. `nn` was passed as a blacklist item in CI, which meant that not only test_nn but also test_nn_quantized was skipped. This renames the test to avoid that situation and, imo, puts it in a better position lexicographically, next to the other quantization tests.
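The matching behavior in miniature (a toy reconstruction of the linked logic, not the actual run_test code):

```python
# startswith-based exclusion means blacklisting "nn" also drops "nn_quantized".
exclude = ['nn']
tests = ['nn', 'nn_quantized', 'quantized_nn_mods']
kept = [t for t in tests if not any(t.startswith(e) for e in exclude)]
assert kept == ['quantized_nn_mods']
```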

Test Plan: Imported from OSS

Differential Revision: D16772820

Pulled By: jamesr66a

fbshipit-source-id: 4cde0729b48ae3e36fcedab9c98197831af82dde
2019-08-13 17:07:15 -07:00