diff --git a/examples/quantization/post_train_quant/command_line.md b/examples/quantization/post_train_quant/command_line.md
index 2c06b1734ca09402b8703e31e7469fa721149db0..c3f73064e22f35d135372129a253c12072901fe1 100644
--- a/examples/quantization/post_train_quant/command_line.md
+++ b/examples/quantization/post_train_quant/command_line.md
@@ -8,7 +8,7 @@ Post-training quantization can either be configured straight from the command-li
 
 **All the examples shown below are using command-line configuration.**  
 
-**For an example of how to use a YAML config file, please see `resnet18_imagenet_post_train.yaml` in this directory. It shows how to override the configuration of specific layers in order to obtain better accuracy.**  
+**For an example of how to use a YAML config file, please see `resnet18_imagenet_post_train.yaml` in this directory ([view in the GitHub repo](https://github.com/NervanaSystems/distiller/blob/master/examples/quantization/post_train_quant/resnet18_imagenet_post_train.yaml)). It shows how to override the configuration of specific layers in order to obtain better accuracy.**  
 
 ## Available Command Line Arguments
 
@@ -108,4 +108,25 @@ We can provide Distiller with a list of layers for which not to clip activations
 
 ## Note 2: Under 8-bits
 
-Runs (8) - (10) are examples of trying post-training quantization below 8-bits. Notice how with the most basic settings we get a massive accuracy loss of ~53%. Even with asymmetric quantization and all other optimizations enabled, we still get a non-trivial degradation of just under 2% vs. FP32. In many cases, quantizing with less than 8-bits requires quantization-aware training. However, if we allow some layers to remain in 8-bit, we can regain some of the accuracy. We can do this by using a YAML configuration file and specifying overrides. As mentioned at the top of this document, check out the `resnet18_imagenet_post_train.yaml` file located in this directory for an example of how to do this.
\ No newline at end of file
+Runs (8) - (10) are examples of trying post-training quantization below 8-bits. Notice how, with the most basic settings, we get a massive accuracy loss of ~53%. Even with asymmetric quantization and all other optimizations enabled, we still get a non-trivial degradation of just under 2% vs. FP32. In many cases, quantizing to less than 8-bits requires quantization-aware training. However, if we allow some layers to remain at 8-bits, we can regain some of the accuracy. We can do this by using a YAML configuration file and specifying overrides. As mentioned at the top of this document, check out the `resnet18_imagenet_post_train.yaml` file located in this directory for an example of how to do this.
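+
+To illustrate, below is a minimal sketch of what such an overrides section could look like. The layer names (`conv1`, `fc`) and the specific settings shown are assumptions for illustration only; refer to `resnet18_imagenet_post_train.yaml` for the actual configuration.
+
+```yaml
+quantizers:
+  post_train_quantizer:
+    class: PostTrainLinearQuantizer
+    # Default: quantize weights and activations to 6-bits
+    bits_activations: 6
+    bits_parameters: 6
+    mode: ASYMMETRIC_UNSIGNED
+    overrides:
+      # Keep the first conv layer and the final FC layer at 8-bits
+      # (layer names are illustrative; check the model's actual module names)
+      conv1:
+        bits_weights: 8
+        bits_activations: 8
+      fc:
+        bits_weights: 8
+        bits_activations: 8
+```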