Unverified commit 112163eb, authored by Guy Jacob, committed via GitHub

Update post-train quant command line example

parent 43548deb
@@ -8,7 +8,7 @@ Post-training quantization can either be configured straight from the command-li
 **All the examples shown below are using command-line configuration.**
-**For an example of how to use a YAML config file, please see `resnet18_imagenet_post_train.yaml` in this directory. It shows how to override the configuration of specific layers in order to obtain better accuracy.**
+**For an example of how to use a YAML config file, please see `resnet18_imagenet_post_train.yaml` in this directory ([link to view in GitHub repo](https://github.com/NervanaSystems/distiller/blob/master/examples/quantization/post_train_quant/resnet18_imagenet_post_train.yaml)). It shows how to override the configuration of specific layers in order to obtain better accuracy.**

 ## Available Command Line Arguments
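
To make the command-line configuration mentioned above concrete, here is a rough sketch of such an invocation. It is illustrative only: the flag names follow Distiller's `compress_classifier.py` image-classification sample, but the model, dataset path and specific values are placeholders and should be verified against the sample's `--help` output and the runs listed in this README.

```bash
# Illustrative sketch only: post-training quantization configured from the command line.
# Verify flag names and values against compress_classifier.py --help and the runs below.
python compress_classifier.py -a resnet18 --pretrained <path_to_imagenet_dataset> \
    --evaluate \
    --quantize-eval \
    --qe-mode asym_u \
    --qe-bits-acts 8 \
    --qe-bits-wts 8 \
    --qe-per-channel \
    --qe-clip-acts avg
```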
@@ -108,4 +108,4 @@ We can provide Distiller with a list of layers for which not to clip activations

 ## Note 2: Under 8-bits
 Runs (8) - (10) are examples of trying post-training quantization below 8 bits. Notice how with the most basic settings we get a massive accuracy loss of ~53%. Even with asymmetric quantization and all other optimizations enabled, we still get a non-trivial degradation of just under 2% vs. FP32. In many cases, quantizing with fewer than 8 bits requires quantization-aware training. However, if we allow some layers to remain at 8 bits, we can regain some of the accuracy. We can do this by using a YAML configuration file and specifying overrides. As mentioned at the top of this document, check out the `resnet18_imagenet_post_train.yaml` file located in this directory for an example of how to do this.
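
To give a feel for the kind of per-layer override that YAML file specifies, the sketch below keeps the first and last layers of a ResNet-18 (`conv1` and `fc` in torchvision's naming) at 8 bits while quantizing everything else to 6 bits. The key names follow Distiller's quantizer YAML schema, but the chosen layers, bit-widths and clipping settings here are illustrative assumptions; the authoritative configuration is the `resnet18_imagenet_post_train.yaml` file linked above.

```yaml
# Illustrative sketch only -- the layer names, bit-widths and clipping settings below
# are assumptions; see resnet18_imagenet_post_train.yaml for the actual configuration.
quantizers:
  post_train_quantizer:
    class: PostTrainLinearQuantizer
    bits_activations: 6
    bits_parameters: 6
    mode: ASYMMETRIC_UNSIGNED
    per_channel_wts: True
    clip_acts: AVG
    overrides:
      # Keep the first and last layers at 8 bits to recover some of the lost accuracy
      conv1:
        bits_activations: 8
        bits_weights: 8
      fc:
        bits_activations: 8
        bits_weights: 8
        clip_acts: NONE
```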