Enable compute (training/inference) on the CPU
In compress_classifier.py we added a new application argument, --cpu, which forces compute (training and inference) to run on the CPU when you invoke compress_classifier.py on a machine that has NVIDIA GPUs. If your machine lacks NVIDIA GPUs, compute now runs on the CPU automatically, and you do not need the new flag. Caveat: we have not fully tested CPU support for the code in the Jupyter notebooks. If you find a bug, we apologize and appreciate your feedback.
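A minimal sketch of how such a flag can drive device selection. The `resolve_device` helper and its `cuda_available` parameter are hypothetical, not the actual compress_classifier.py implementation; `cuda_available` stands in for a real GPU probe such as `torch.cuda.is_available()`:

```python
import argparse

def resolve_device(argv, cuda_available):
    """Decide where compute runs (hypothetical helper).

    argv           -- command-line arguments, e.g. ['--cpu']
    cuda_available -- stand-in for a real GPU probe
    """
    parser = argparse.ArgumentParser()
    parser.add_argument('--cpu', action='store_true',
                        help='force training/inference to run on the CPU')
    args = parser.parse_args(argv)
    # --cpu forces the CPU; otherwise use the GPU only if one exists.
    if args.cpu or not cuda_available:
        return 'cpu'
    return 'cuda'

print(resolve_device(['--cpu'], True))  # cpu: flag overrides the GPU
print(resolve_device([], False))        # cpu: no GPU present, no flag needed
```

In a PyTorch application the returned string would typically be wrapped in `torch.device(...)` and passed to `model.to(device)`.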
Showing 7 changed files:
- apputils/checkpoint.py (1 addition, 1 deletion)
- distiller/model_summaries.py (2 additions, 1 deletion)
- distiller/thinning.py (6 additions, 5 deletions)
- distiller/thresholding.py (5 additions, 5 deletions)
- distiller/utils.py (8 additions, 0 deletions)
- examples/classifier_compression/compress_classifier.py (31 additions, 20 deletions)
- models/__init__.py (7 additions, 1 deletion)