Modify image size and training for Inception Models (#425)
* Merge pytorch 1.3 commits

This PR is a fix for issue #422.

1. ImageNet models usually use an input size of [batch, 3, 224, 224], but all Inception models require an input image size of [batch, 3, 299, 299].
2. Inception models have auxiliary branches that contribute to the loss only during training. The reported classification loss considers only the main classification loss.
3. Inception_V3 normalizes the input inside the network itself.

More details can be found in @soumendukrg's PR #425 [comments](https://github.com/NervanaSystems/distiller/pull/425#issuecomment-557941736).

NOTE: Training with Inception_V3 is currently possible only on a single GPU. I have checked that this problem persists in torch 1.3.0: [inception_v3 of vision 0.3.0 does not fit in DataParallel of torch 1.1.0 #1048](https://github.com/pytorch/vision/issues/1048)

Co-authored-by: Neta Zmora <neta.zmora@intel.com>
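The auxiliary-branch handling described in point 2 can be sketched as follows. This is an illustrative example, not the PR's actual code: the helper name `inception_losses` and the 0.4 auxiliary weight (the value commonly used for Inception's auxiliary classifier) are assumptions, and dummy tensors stand in for a real forward pass of `inception_v3` in `train()` mode, which returns both main and auxiliary logits.

```python
import torch
import torch.nn.functional as F

def inception_losses(logits, aux_logits, target, aux_weight=0.4):
    """Combine the main and auxiliary classification losses.

    The auxiliary loss contributes to the total used for backprop,
    but only the main loss is reported, as described in this PR.
    The 0.4 weight is an illustrative default, not taken from the PR.
    """
    main_loss = F.cross_entropy(logits, target)
    aux_loss = F.cross_entropy(aux_logits, target)
    total_loss = main_loss + aux_weight * aux_loss
    return total_loss, main_loss  # backprop total_loss, report main_loss

# Dummy logits standing in for a forward pass with [batch, 3, 299, 299] input.
batch, num_classes = 4, 10
logits = torch.randn(batch, num_classes)
aux_logits = torch.randn(batch, num_classes)
target = torch.randint(0, num_classes, (batch,))

total, reported = inception_losses(logits, aux_logits, target)
```

Since cross-entropy is non-negative, the total loss is always at least the reported main loss; at evaluation time the auxiliary branch is inactive and only the main logits are produced.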