Commit Graph

19 Commits

Author SHA1 Message Date
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Xiaodong Wang
eb7a298489 Add resnext model to OSS (#11468)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11468

Add the resnext model to the OSS Caffe2 repo.

Reviewed By: orionr, kuttas

Differential Revision: D9506000

fbshipit-source-id: 236005d5d7dbeb8c2864014b1eea03810618d8e8
2018-09-12 15:59:20 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* The LICENSE file contains the details, so the headers are removed from individual source files.
2018-03-27 13:10:18 -07:00
PengBo
07646e405e no_bias in resnet32x32 (#1817) 2018-02-24 16:58:23 -08:00
Di Yu
82198831e7 Fix pool op custom path issue 2, wrongful routing to global pooling
Summary:
In D5681122, the condition used when routing to global max pooling and average pooling is not correct.
See T24876217 for discussion.
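
For illustration only (the actual condition lives in the C++ pool op, not here): the global-pooling fast path should apply only when it is explicitly requested or when the unpadded kernel spans the whole spatial extent. A minimal Python sketch of such a check:

    def routes_to_global_pooling(global_pooling, kernel_hw, input_hw, pads):
        # Illustrative check only: the global-pooling path is equivalent to
        # regular pooling only when there is no padding and the kernel spans
        # the full input height and width.
        if global_pooling:
            return True
        return all(p == 0 for p in pads) and tuple(kernel_hw) == tuple(input_hw)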

Reviewed By: Yangqing

Differential Revision: D6665466

fbshipit-source-id: dcb5b4686249e6ee8e1e976ab66b003ef09b32fd
2018-01-09 00:54:45 -08:00
Aapo Kyrola
b5c053b1c4 fix fp16 issues with resnet trainer
Summary:
My commit bab5bc broke things with fp16 compute, as I had tested it only with the null input, which actually produced fp32 data (even though dtype was given as float16). I had also confused the concepts of "float16 compute" and fp16 data. Issue #1408.

This fixes those issues, tested with both Volta and M40 GPUs. It basically restores much of the previous code and fixes the null input to do FloatToHalf.
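
As a minimal sketch (not the trainer's actual code) of producing fp16 data from an fp32 blob with Caffe2's FloatToHalf operator; the blob names are hypothetical:

    import numpy as np
    from caffe2.python import core, workspace

    # Hypothetical fp32 input blob that should become fp16 data.
    workspace.FeedBlob("data_fp32",
                       np.random.rand(32, 3, 224, 224).astype(np.float32))

    # FloatToHalf casts a float32 tensor to float16.
    workspace.RunOperatorOnce(
        core.CreateOperator("FloatToHalf", ["data_fp32"], ["data_fp16"]))
    print(workspace.FetchBlob("data_fp16").dtype)  # float16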

Reviewed By: pietern

Differential Revision: D6211849

fbshipit-source-id: 5b41cffdd605f61a438a4c34c56972ede9eee28e
2017-11-01 13:30:08 -07:00
Aapo Kyrola
1b71bf1d36 Updated resnet50_trainer and resnet for more FP16 support
Summary: Added FP16SgdOptimizer to resnet50_trainer
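
As a rough, hedged illustration of what wiring an fp16-aware SGD optimizer into a trainer can look like; the build_fp16_sgd helper and its keyword arguments are assumptions about the caffe2.python.optimizer module, not a quote of the actual trainer change:

    from caffe2.python import optimizer

    def add_optimizer(model, dtype, base_lr):
        # Assumption: build_fp16_sgd keeps fp32 master weights while the
        # forward/backward pass runs on fp16 parameters and gradients.
        if dtype == 'float16':
            return optimizer.build_fp16_sgd(model, base_lr,
                                            momentum=0.9, policy='fixed')
        return optimizer.build_sgd(model, base_lr,
                                   momentum=0.9, policy='fixed')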

Reviewed By: wesolwsk

Differential Revision: D5841408

fbshipit-source-id: 3c8c0709fcd115377c13ee58d5bb35f1f83a7105
2017-10-24 09:19:06 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
haracejacob
2ec294a8bb Fix a few typos and grammatical errors in comments
Summary:
Fix a few typos and grammatical errors in comments, found using language-check, a Python library.

The spell_checker source code is here: https://github.com/17-1-SKKU-OSS/011A/blob/master/spell_checker/spell_checker.py
Here is the text file that indicates what should be fixed: https://github.com/17-1-SKKU-OSS/011A/tree/master/spell_checker/fix/caffe2
Closes https://github.com/caffe2/caffe2/pull/719

Differential Revision: D5165118

Pulled By: aaronmarkham

fbshipit-source-id: 7fb8ef7a99d03cd5fd2f9ebdb01b9865e90fc37b
2017-06-14 18:22:39 -07:00
Yiming Wu
64d43dbb6e new resnet building with brew
Summary: new resnet building with brew
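
A minimal sketch of the brew-style API for assembling layers (blob names and dimensions here are illustrative, not the actual resnet code):

    from caffe2.python import brew, model_helper

    model = model_helper.ModelHelper(name="resnet_sketch")

    # brew creates the parameters and adds the op to the model in one call,
    # replacing the older CNNModelHelper-style methods.
    conv1 = brew.conv(model, "data", "conv1", dim_in=3, dim_out=64,
                      kernel=7, stride=2, pad=3)
    relu1 = brew.relu(model, conv1, "conv1_relu")
    pool1 = brew.max_pool(model, relu1, "pool1", kernel=3, stride=2)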

Reviewed By: akyrola

Differential Revision: D4945418

fbshipit-source-id: d90463834cbba2c35d625053ba8812e192df0adf
2017-05-15 22:47:24 -07:00
Aaron Markham
58f7f2b441 doxygen python block added
Summary: Closes https://github.com/caffe2/caffe2/pull/226

Differential Revision: D4793550

Pulled By: JoelMarcey

fbshipit-source-id: cc33e58186304fa8dcac2ee9115dcc271d785b1e
2017-03-29 06:46:16 -07:00
Sean Snyder
79c04d32dc add an option to use a resnet network instead of alexnet
Summary: add an option to use a resnet network instead of alexnet. Modified the resnet.create_resnet50 function slightly to allow specifying different kernel/stride parameters so we can adapt resnet to our image size.
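
For illustration, a call along these lines would exercise such parameters; the keyword names (conv1_kernel, conv1_stride, final_avg_kernel) and their values are assumptions about the modified signature, not a quote of the actual code:

    from caffe2.python import model_helper
    from caffe2.python.models import resnet

    model = model_helper.ModelHelper(name="resnet50_sketch")

    # Assumed keyword arguments for adapting the first conv and the final
    # average pool to a non-ImageNet image size.
    pred = resnet.create_resnet50(
        model, "data",
        num_input_channels=3,
        num_labels=1000,
        no_loss=True,
        conv1_kernel=5,       # assumption: smaller first kernel
        conv1_stride=1,       # assumption: stride 1 for small images
        final_avg_kernel=4,   # assumption: matches the reduced feature map
    )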

Differential Revision: D4472535

fbshipit-source-id: ed06acf52f6425a1e04d047548eb3c70388d74aa
2017-01-31 16:59:30 -08:00
Aapo Kyrola
e18643f90b More fixes
Summary:
When testing the code, a couple of issues arose:
 - we need a different name for the last layer than in the preprocessed model, otherwise a shape assertion is triggered
 - preprocess_noaugmentation still needs to crop images larger than 227x227, otherwise things fail (see the crop sketch below)
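
A minimal center-crop sketch (a hypothetical helper, not the actual preprocessing code), assuming HWC numpy images and a 227x227 target:

    import numpy as np

    def center_crop(img, crop_size=227):
        # img is assumed to be an HWC numpy array at least crop_size
        # in each spatial dimension.
        h, w = img.shape[:2]
        top = (h - crop_size) // 2
        left = (w - crop_size) // 2
        return img[top:top + crop_size, left:left + crop_size, :]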

Reviewed By: viswanathgs

Differential Revision: D4442700

fbshipit-source-id: 05f54e7f17c266280f5ba5bb57af1721fe30df12
2017-01-20 13:44:24 -08:00
Aapo Kyrola
afe822ebd7 Small tweaks
Summary:
Some tweaks, hopefully getting us to 0.98 MAP:
- no cropping for the test dataset (as per patrick)
- SpatialBN momentum 0.1 instead of the default 0.9 (see the sketch below)

Also added some additional logging and reduced the frequency of running the test net and of logging.
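
A minimal sketch of setting the SpatialBN momentum via brew (blob names and dimensions are illustrative):

    from caffe2.python import brew, model_helper

    model = model_helper.ModelHelper(name="bn_momentum_sketch")

    # In Caffe2's SpatialBN, momentum is the weight given to the existing
    # running statistics, so 0.1 makes the running mean/variance track
    # recent batches much faster than the default of 0.9.
    bn = brew.spatial_bn(model, "conv1", "conv1_spatbn", dim_in=64,
                         epsilon=1e-5, momentum=0.1, is_test=False)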

Reviewed By: viswanathgs

Differential Revision: D4439790

fbshipit-source-id: 700705b811a5fc8c7139a265de96db646605ca5a
2017-01-19 18:44:26 -08:00
Aapo Kyrola
bb928f3cc0 Latest fixes to Xray Flow workflows for Caffe2
Summary:
(Ignore the convolution-op related changes; they will be patched separately later.)

This diff includes work from the last few weeks:
- some refactoring of the flow ops
- no_bias setting
- MAP computation (instead of accuracy) for OC
- adaptive learning rate for Xray concepts
- various small bug fixes

Reviewed By: viswanathgs

Differential Revision: D4329500

fbshipit-source-id: 000d4fd22ec408af5290480c788eb86546bff52e
2017-01-10 12:59:23 -08:00
Aapo Kyrola
d37fffd257 use in-place ReLU to save a lot of memory
Summary: The Torch docs about Resnets, and soumith's comment, mention significant memory savings with in-place ReLU. prigoyal already had this in her code, but I did not. This saves a lot of memory: 9851 MiB -> 7497 MiB.
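
A minimal sketch of in-place ReLU, written with the brew API for brevity: passing the same blob as input and output makes the activation overwrite the conv output instead of allocating a separate tensor (blob names illustrative):

    from caffe2.python import brew, model_helper

    model = model_helper.ModelHelper(name="inplace_relu_sketch")

    conv1 = brew.conv(model, "data", "conv1", dim_in=3, dim_out=64,
                      kernel=3, stride=1, pad=1)

    # Out-of-place would be: brew.relu(model, conv1, "conv1_relu")
    # In-place: output blob == input blob, so no extra activation tensor.
    relu1 = brew.relu(model, conv1, conv1)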

Reviewed By: prigoyal

Differential Revision: D4346100

fbshipit-source-id: e9c5d5e93787f47487fade668b65b9619bfc9741
2016-12-19 09:29:26 -08:00
Aapo Kyrola
eddf23ca0f Handle parameters that are computed but not optimized
Summary:
prigoyal sharply noticed a bug in the Resnet models: we have not been checkpointing, nor synchronizing between GPUs, the moving average and variance computed by the SpatialBN ops. The first problem in particular is serious, since models starting from a checkpoint would have started from a null state for SpatialBN. Not synchronizing in the data parallel model is less tragic, since each GPU should see very similar data.

Thus I propose keeping track of "computed params", i.e. params that are computed from data but not optimized. I don't know if there are other examples, but SpatialBN's moving average and variance definitely qualify.

- I modified the checkpointing for the xray model to store those blobs and also ensure they are synchronized
- I modified data parallel model to broadcast those params from gpu0 (see the sketch below). I first tried averaging, but hit some NCCL deadlocks ... :(
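
A rough sketch of broadcasting such computed blobs from gpu_0 through the workspace; the blob names, the gpu_N/ prefix convention, and the helper itself are assumptions for illustration, and the actual data_parallel_model change works at the net level rather than via Python fetch/feed:

    from caffe2.proto import caffe2_pb2
    from caffe2.python import core, workspace

    def broadcast_computed_params(computed_params, num_gpus):
        # computed_params: unprefixed blob names, e.g. the SpatialBN
        # running mean and variance blobs.
        for name in computed_params:
            master = workspace.FetchBlob("gpu_0/" + name)
            for gpu in range(1, num_gpus):
                device = core.DeviceOption(caffe2_pb2.CUDA, gpu)
                workspace.FeedBlob("gpu_{}/{}".format(gpu, name), master,
                                   device_option=device)

    # Hypothetical usage:
    # broadcast_computed_params(["conv1_spatbn_rm", "conv1_spatbn_riv"], 8)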

Differential Revision: D4281265

fbshipit-source-id: 933311afeec4b7e9344a13cf2d38aa939c50ac31
2016-12-15 12:01:28 -08:00
Yangqing Jia
238ceab825 fbsync. TODO: check if build files need update. 2016-11-15 00:00:46 -08:00
Yangqing Jia
d1e9215184 fbsync 2016-10-07 13:08:53 -07:00