From 104b722bd40080393dfd98ff5754c78b00ea7b63 Mon Sep 17 00:00:00 2001
From: Hashim Sharif <hsharif3@miranda.cs.illinois.edu>
Date: Tue, 30 Mar 2021 20:14:12 -0500
Subject: [PATCH] Fixes to config format doc

---
 hpvm/docs/developerdocs/configuration-format.rst | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/hpvm/docs/developerdocs/configuration-format.rst b/hpvm/docs/developerdocs/configuration-format.rst
index 1616e58ed4..cc5faadeaa 100644
--- a/hpvm/docs/developerdocs/configuration-format.rst
+++ b/hpvm/docs/developerdocs/configuration-format.rst
@@ -5,7 +5,12 @@ Approximation Configuration Format
 The HPVM binaries generated from the (Keras and PyTorch) Frontends support loading in a configuration file (`HPVM_binary -c ${config_file_path}`) that loads approximation knobs corresponding to each tensor operation in the program. This configuration file is the output of the autotuner (`predtuner`) that selects an approximation knob for each tensor operation, while respecting the accuracy degradation budget given for autotuning. The HPVM tensor runtime uses the configuration to dispatch to the corresponding approximate variants with appropriate arguments.
 
 
-The format of the configuration is includes one line per fused HPVM node. Note that this often includes multiple Tensor operations in a single fused node. For instance, a Convolution, Add, and Relu, are fused into a single HPVM node since these are semantically a convolution layer. This fusion is done to facilitate code generation to accelerators and libraries that expose higher level abstractions such as "Convolution Layers" or "Dense Layer" as the API.
+The configuration format includes one line per fused HPVM node. Note that a
+fused node often contains multiple tensor operations. For instance, a
+Convolution, an Add, and a ReLU are fused into a single HPVM node, since
+together they are semantically a convolution layer. This fusion facilitates
+code generation for accelerators and libraries that expose higher-level
+abstractions such as "Convolution Layers" or "Dense Layers" as the API.
 
 File Format
 --------------
@@ -22,19 +27,19 @@ The delimiters `+++++` and `-----` mark the beginning and end of a configuration
 
 The `$config_id` is the configuration ID in the configuration file. A configuration file is a list of multiple configurations - the runtime can select from any of these at runtime - default behavior is to use the first configuration in the file.
 
-`$predicted_speedup` is the "hardware-agnostic" speedup predicted by the autotuner using a performance heursitic (no performance measurement on a hardware device)
+`$predicted_speedup` is the "hardware-agnostic" speedup predicted by the autotuner using a performance heuristic.
 
 `$predicted_energy`: hardware-agnostic predicted energy metric. Currently, the tuner sets this to 0 - since we do not yet support energy estimation.
 
-`$real_accuracy` is the accuracy of the program on the tune set (inputs used for tuning) when no approximations are applied and `$accuracy_degradation` is the drop in accuracy when applying the configuration that follows - the specific knob settings that follow.
+`$real_accuracy` is the accuracy of the program on the tune set (inputs used for tuning) when no approximations are applied, and `$accuracy_degradation` is the drop in accuracy when applying the configuration (i.e., the specific knob settings).
 
-`$hpvm_node_id` specifies the node ID to apply the approximation knobs for, `$device` specifies the device to offload to, `$tensor_op_type` specifies the type of tensor operation (conv, mul, add, relu etc.), and `$approximation_knob` is the knob setting corresponding to this tensor operation. The autotuner selects these knobs.
+`$hpvm_node_id` specifies the node ID to which the approximation knobs apply, `$device` specifies the device to offload to (GPU or CPU), `$tensor_op_type` specifies the type of tensor operation (conv, mul, add, relu, etc.), and `$approximation_knob` is the knob setting corresponding to this tensor operation. The autotuner selects these knobs.
 
 Approximation Knobs
 --------------------
 
 The `$approximation_knob` is an integer ID that represents an approximation knob.
-HPVM currently supports `fp16` and `fp32` for all
+HPVM currently supports `fp16` (knob value `12`) and `fp32` (knob value `11`) for all
 types of tensor operations. For convolution operations, "sampling" and "perforation" are the
 supported algorithmic approximations with the following knobs.
 
-- 
GitLab
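
The field descriptions in the patched doc can be exercised with a small parser sketch. This is illustrative only: the exact field ordering within each line (a header of `$config_id $predicted_speedup $predicted_energy $real_accuracy $accuracy_degradation`, then one line per fused node of `$hpvm_node_id $device` followed by `$tensor_op_type $approximation_knob` pairs) and the sample values are assumptions drawn from the descriptions above, not the authoritative HPVM tensor runtime parser.

```python
# Illustrative parser for the approximation configuration format described
# above. Field layout within each line is assumed from the doc's field
# descriptions; consult the HPVM tensor runtime for the authoritative parser.

def parse_configs(text):
    """Split a config file into configurations delimited by +++++ / -----."""
    configs = []
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("+++++"):
            current = {"lines": []}          # start of a new configuration
        elif line.startswith("-----"):
            if current is not None:
                configs.append(current)      # end of the current configuration
            current = None
        elif current is not None and line:
            current["lines"].append(line)
    for conf in configs:
        # Header line: config_id, predicted_speedup, predicted_energy,
        # real_accuracy, accuracy_degradation (assumed ordering).
        header = conf["lines"][0].split()
        conf["config_id"] = header[0]
        conf["predicted_speedup"] = float(header[1])
        conf["predicted_energy"] = float(header[2])
        conf["real_accuracy"] = float(header[3])
        conf["accuracy_degradation"] = float(header[4])
        # Remaining lines: one per fused HPVM node.
        conf["node_settings"] = conf["lines"][1:]
    return conf and configs or configs

# Hypothetical example: one configuration with two fused nodes; knob 12
# (fp16) is taken from the doc, all other values are made up.
example = """\
+++++
conf1 1.5 0 84.8 0.3
1 gpu conv 12 add 12 relu 12
2 gpu mul 12 add 12
-----
"""
parsed = parse_configs(example)
print(parsed[0]["config_id"], parsed[0]["predicted_speedup"])  # → conf1 1.5
```

Note that the delimiters make it cheap for the runtime to scan to any configuration in the file; per the doc, the default is simply the first one.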