diff --git a/hpvm/docs/developerdocs/configuration-format.rst b/hpvm/docs/developerdocs/configuration-format.rst
index 1616e58ed4bdcc8c292f60a0f4de43521998d99a..cc5faadeaaa0bd7f9878a93ce745423c22996a81 100644
--- a/hpvm/docs/developerdocs/configuration-format.rst
+++ b/hpvm/docs/developerdocs/configuration-format.rst
@@ -5,7 +5,12 @@ Approximation Configuration Format
 The HPVM binaries generated from the (Keras and PyTorch) Frontends support loading a configuration file (`HPVM_binary -c ${config_file_path}`) that specifies an approximation knob for each tensor operation in the program. This configuration file is the output of the autotuner (`predtuner`), which selects an approximation knob for each tensor operation while respecting the accuracy degradation budget given for autotuning. The HPVM tensor runtime uses the configuration to dispatch to the corresponding approximate variants with appropriate arguments.
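+
+For illustration only (the configuration file path below is a hypothetical placeholder, not an artifact shipped with HPVM), a frontend-generated binary could be invoked with a tuner-produced configuration as follows::
+
+   ./HPVM_binary -c data/tuner_confs.txt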
 
 
-The format of the configuration is includes one line per fused HPVM node. Note that this often includes multiple Tensor operations in a single fused node. For instance, a Convolution, Add, and Relu, are fused into a single HPVM node since these are semantically a convolution layer. This fusion is done to facilitate code generation to accelerators and libraries that expose higher level abstractions such as "Convolution Layers" or "Dense Layer" as the API.
+The configuration format includes one line per fused HPVM node. Note that
+a fused node often contains multiple tensor operations. For instance, a
+Convolution, Add, and Relu are fused into a single HPVM node since together
+they are semantically a convolution layer. This fusion facilitates code
+generation for accelerators and libraries that expose higher-level
+abstractions such as "Convolution Layer" or "Dense Layer" as the API.
 
 File Format
 --------------
@@ -22,19 +27,19 @@ The delimeters `+++++` and `-----` marked beginning and end of a configuration
 
 The `$config_id` is the configuration ID in the configuration file. A configuration file is a list of multiple configurations; the runtime can select any of these at runtime, and the default behavior is to use the first configuration in the file.
 
-`$predicted_speedup` is the "hardware-agnostic" speedup predicted by the autotuner using a performance heursitic (no performance measurement on a hardware device)
+`$predicted_speedup` is the "hardware-agnostic" speedup predicted by the autotuner using a performance heuristic.
 
 `$predicted_energy`: hardware-agnostic predicted energy metric. Currently, the tuner sets this to 0, since we do not yet support energy estimation.
 
-`$real_accuracy` is the accuracy of the program on the tune set (inputs used for tuning) when no approximations are applied and `$accuracy_degradation` is the drop in accuracy when applying the configuration that follows - the specific knob settings that follow.
+`$real_accuracy` is the accuracy of the program on the tune set (inputs used for tuning) when no approximations are applied, and `$accuracy_degradation` is the drop in accuracy when the configuration (i.e., the specific knob settings) is applied.
 
-`$hpvm_node_id` specifies the node ID to apply the approximation knobs for, `$device` specifies the device to offload to, `$tensor_op_type` specifies the type of tensor operation (conv, mul, add, relu etc.), and `$approximation_knob` is the knob setting corresponding to this tensor operation. The autotuner selects these knobs.
+`$hpvm_node_id` specifies the node ID to apply the approximation knobs to, `$device` specifies the device to offload to (GPU or CPU), `$tensor_op_type` specifies the type of tensor operation (conv, mul, add, relu, etc.), and `$approximation_knob` is the knob setting corresponding to this tensor operation. The autotuner selects these knobs.
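+
+As an illustrative sketch only (the node IDs, knob values, and exact per-line layout below are assumptions based on the field descriptions above, not the output of a real tuning run), a single configuration entry could look like::
+
+   +++++
+   conf1 1.5 0 84.8 0.5
+   1 gpu conv 12 add 12 relu 12
+   2 gpu mul 12 add 12
+   -----
+
+In this sketch, the first line after `+++++` carries the configuration-level fields, and each subsequent line corresponds to one fused HPVM node, pairing every tensor operation in that node with its selected knob.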
 
 Approximation Knobs
 --------------------
 
 The `$approximation_knob` is an integer ID that represents an approximation knob.
-HPVM currently supports `fp16` and `fp32` for all
+HPVM currently supports `fp16` (knob value `12`) and `fp32` (knob value `11`) for all
 types of tensor operations. For convolution operations, "sampling" and "perforation" are the
 supported algorithmic approximations with the following knobs.