Commit authored by Neta Zmora
By default, when we create a model we wrap it with DataParallel to benefit from data-parallelism across GPUs (mainly for the convolution layers). Sometimes, however, we don't want the sample application to do this: for example, when we are given a model that was trained serially (i.e., without DataParallel). This commit adds a new command-line argument to the application that prevents the use of DataParallel. A sketch of the idea appears below.
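The following is a minimal sketch of the behavior described in the commit message: wrap the model in `torch.nn.DataParallel` by default, and skip the wrapping when a flag is given. The flag name (`--load-serialized`) and the surrounding script structure are illustrative assumptions, not necessarily the exact names used in `compress_classifier.py`.

```python
# Sketch only: flag name and script layout are assumptions, not the exact
# implementation in compress_classifier.py.
import argparse

import torch
import torchvision.models as models

parser = argparse.ArgumentParser(description='Classifier compression sample (sketch)')
parser.add_argument('--load-serialized', action='store_true',
                    help='load the model without wrapping it in DataParallel')
args = parser.parse_args()

# Create the model (any torchvision model works for the illustration).
model = models.resnet18()

if torch.cuda.is_available() and not args.load_serialized:
    # Default path: use data-parallelism across all visible GPUs
    # (this mainly benefits the convolution layers).
    model = torch.nn.DataParallel(model)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)
```

With this arrangement, a checkpoint that was trained serially can be loaded by running the script with the flag (e.g. `python compress_classifier.py --load-serialized ...`), so the state dict keys are not altered by the `module.` prefix that DataParallel introduces.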
Files in this directory:

- __init__.py
- compress_classifier.py
- inspect_ckpt.py
- logging.conf