pytorch/torch/quantization
davidriazati 0046092178 Reduce special casing around 'training' (#27109)
Summary:
Most of this was old cruft left over from special handling of `training` before we had a `bool` type. This makes all modules have a `training` attribute that is true by default and removes all other special handling.
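The behavior this change standardizes can be illustrated with a minimal sketch. This is not the actual `torch.nn.Module` implementation, just a simplified stand-in showing the convention the diff adopts: `training` is an ordinary `bool` attribute, `True` by default, toggled by `train()`/`eval()` with no special casing.

```python
# Hypothetical simplified sketch of the 'training' convention; the real
# torch.nn.Module has the same observable behavior but more machinery.
class Module:
    def __init__(self):
        self.training = True  # plain bool attribute, True by default

    def train(self, mode=True):
        # Set training mode; returns self for chaining, as in PyTorch.
        self.training = mode
        return self

    def eval(self):
        # Equivalent to train(False).
        return self.train(False)


m = Module()
assert m.training        # True by default
m.eval()
assert not m.training    # eval() flips the flag
m.train()
assert m.training        # train() restores it
```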

Fixes #26884
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27109

Pulled By: driazati

Differential Revision: D17728129

fbshipit-source-id: 8ddc9fbb07a953dd05529538bfdd01ed88b5cb57
2019-10-07 13:52:59 -07:00
__init__.py Factored out the default mappings 2019-10-03 11:52:21 -07:00
_quantize_script.py Reduce special casing around 'training' (#27109) 2019-10-07 13:52:59 -07:00
default_mappings.py Replacing the skip_list with white_list in the qconfig propagation 2019-10-03 20:40:17 -07:00
fake_quantize.py MovingAverage Observer (#27396) 2019-10-04 16:28:59 -07:00
fuse_modules.py Rename _intrinsic to intrinsic 2019-10-02 18:53:06 -07:00
observer.py MovingAverage Observer (#27396) 2019-10-04 16:28:59 -07:00
QConfig.py MovingAverage Observer (#27396) 2019-10-04 16:28:59 -07:00
quantize.py Replacing the skip_list with white_list in the qconfig propagation 2019-10-03 20:40:17 -07:00
stubs.py Factored out the default mappings 2019-10-03 11:52:21 -07:00