From 03ad146ec6d11720e56281af24a1f104c9c156af Mon Sep 17 00:00:00 2001
From: akashk4 <akashk4@illinois.edu>
Date: Thu, 8 Apr 2021 02:17:53 +0000
Subject: [PATCH] Add links to papers

---
 hpvm/docs/developerdocs/approximation-implementation.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hpvm/docs/developerdocs/approximation-implementation.rst b/hpvm/docs/developerdocs/approximation-implementation.rst
index bb9f5de2d9..de3fde127f 100644
--- a/hpvm/docs/developerdocs/approximation-implementation.rst
+++ b/hpvm/docs/developerdocs/approximation-implementation.rst
@@ -15,9 +15,9 @@ Description
 
 The algorithm for perforated convolution can be broken down into three major steps:
 
-* **Patch matrix creation:** Based on indices of the rows/columns to be perforated, the corresponding elements of the input tensor are used to create a new matrix called an input-patch matrix. The input-patch matrix is a matrix laid out in memory in such a way that convolution is then reduced down to a simple matrix multiplication operation. This approach is similar to one described in this paper.
+* **Patch matrix creation:** Based on the indices of the rows/columns to be perforated, the corresponding elements of the input tensor are used to create a new matrix called an input-patch matrix. The input-patch matrix is laid out in memory in such a way that the convolution then reduces to a single matrix multiplication operation. This approach is similar to the one described in this `paper <https://dl.acm.org/doi/abs/10.1145/2964284.2967243>`_.
 
-* **Dense matrix multiplication:** This step involves performing a matrix multiplication in a manner very similar to described in this paper. It is important to note that it is performed on reduced, dense matrices.
+* **Dense matrix multiplication:** This step performs a matrix multiplication in a manner very similar to that described in this `paper <https://arxiv.org/pdf/1704.04428.pdf>`_. Note that the multiplication is performed on the reduced, dense matrices.
 
 * **Interpolation of missing values:** This step entails allocation of a new tensor to which computed elements from the reduced, dense tensor are copied and the elements whose computation was skipped are interpolated by taking the arithmetic mean of the neighboring elements. These neighboring elements constitute the computed elements on the right and the left of the skipped element in case of column perforation;  and the computed elements above and below the skipped element in case of row perforation.
 
-- 
GitLab
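
For reference, the three steps described in the patched documentation (patch-matrix creation, dense matrix multiplication, interpolation of skipped values) can be illustrated with a short sketch. The following is a minimal, illustrative NumPy example, not the HPVM tensor-runtime implementation: it assumes a single-channel input, a single filter, stride 1, column perforation only, and that perforated columns are isolated so that interpolation neighbours are always computed values. All names (perforated_conv2d, kept_cols, etc.) are hypothetical.

import numpy as np

def perforated_conv2d(inp, kernel, perforated_cols):
    """Single-channel, stride-1, 'valid' convolution that skips the given
    output columns and later fills them in by interpolation (sketch only)."""
    H, W = inp.shape
    KH, KW = kernel.shape
    out_h, out_w = H - KH + 1, W - KW + 1

    # Step 1: patch-matrix creation (im2col-style), but only for the output
    # columns that will actually be computed.
    kept_cols = [c for c in range(out_w) if c not in perforated_cols]
    patches = np.empty((out_h * len(kept_cols), KH * KW))
    row = 0
    for i in range(out_h):
        for j in kept_cols:
            patches[row] = inp[i:i + KH, j:j + KW].ravel()
            row += 1

    # Step 2: dense matrix multiplication on the reduced patch matrix.
    dense = (patches @ kernel.ravel()).reshape(out_h, len(kept_cols))

    # Step 3: copy computed columns into the full output tensor, then
    # interpolate each skipped column as the mean of its left/right computed
    # neighbours, clamping at the borders. Assumes perforated columns are
    # isolated (e.g., every third column), so the clamped neighbours are
    # always computed values.
    out = np.zeros((out_h, out_w))
    out[:, kept_cols] = dense
    for c in perforated_cols:
        left = out[:, c - 1] if c > 0 else out[:, c + 1]
        right = out[:, c + 1] if c < out_w - 1 else out[:, c - 1]
        out[:, c] = 0.5 * (left + right)
    return out

if __name__ == "__main__":
    # Compare perforated output against the exact convolution on a toy input.
    x = np.arange(36, dtype=float).reshape(6, 6)
    k = np.ones((3, 3)) / 9.0
    approx = perforated_conv2d(x, k, perforated_cols=[1, 3])
    exact = perforated_conv2d(x, k, perforated_cols=[])
    print("max abs error:", np.abs(approx - exact).max())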