From 3910b5bde1d4246dcf823b467957035cd5e5cc0e Mon Sep 17 00:00:00 2001
From: Neta Zmora <neta.zmora@intel.com>
Date: Thu, 14 Jun 2018 14:49:50 +0300
Subject: [PATCH] Documentation: small fix for RNN pruning image

---
 docs-src/docs/algo_pruning.md | 2 +-
 docs/algo_pruning/index.html  | 2 +-
 docs/index.html               | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs-src/docs/algo_pruning.md b/docs-src/docs/algo_pruning.md
index aac65d5..2038a6a 100755
--- a/docs-src/docs/algo_pruning.md
+++ b/docs-src/docs/algo_pruning.md
@@ -107,7 +107,7 @@ The authors of [Exploring Sparsity in Recurrent Neural Networks](https://arxiv.o
 
 Distiller's distiller.pruning.BaiduRNNPruner class implements this pruning algorithm.
 
-<center>![Gradual Pruning](imgs/baidu_rnn_pruning.png)</center>
+<center>![Baidu RNN Pruning](imgs/baidu_rnn_pruning.png)</center>
 
 # Structure pruners
 Element-wise pruning can create very sparse models, which can be compressed to reduce memory footprint and bandwidth, but without specialized hardware that can compute using the sparse representation of the tensors, we don't gain any speedup of the computation. Structure pruners remove entire "structures", such as kernels, filters, and even entire feature-maps.
diff --git a/docs/algo_pruning/index.html b/docs/algo_pruning/index.html
index 5ac80cf..4560e4d 100644
--- a/docs/algo_pruning/index.html
+++ b/docs/algo_pruning/index.html
@@ -274,7 +274,7 @@ abundant and gradually reduce the number of weights being pruned each time as th
 <h2 id="rnn-pruner">RNN pruner</h2>
 <p>The authors of <a href="https://arxiv.org/abs/1704.05119">Exploring Sparsity in Recurrent Neural Networks</a>, Sharan Narang, Erich Elsen, Gregory Diamos, and Shubho Sengupta, "propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network." They use a gradual pruning schedule, reminiscent of the schedule used in AGP, to perform element-wise pruning of RNNs during training. They show pruning of RNN, GRU, LSTM and embedding layers.</p>
 <p>Distiller's distiller.pruning.BaiduRNNPruner class implements this pruning algorithm.</p>
-<p><center><img alt="Gradual Pruning" src="../imgs/baidu_rnn_pruning.png" /></center></p>
+<p><center><img alt="Baidu RNN Pruning" src="../imgs/baidu_rnn_pruning.png" /></center></p>
 <h1 id="structure-pruners">Structure pruners</h1>
 <p>Element-wise pruning can create very sparse models, which can be compressed to reduce memory footprint and bandwidth, but without specialized hardware that can compute using the sparse representation of the tensors, we don't gain any speedup of the computation. Structure pruners remove entire "structures", such as kernels, filters, and even entire feature-maps.</p>
 <h2 id="ranked-structure-pruner">Ranked structure pruner</h2>
diff --git a/docs/index.html b/docs/index.html
index b5ca080..1a40362 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -246,5 +246,5 @@ And of course, if we used a sparse or compressed representation, then we are red
 
 <!--
 MkDocs version : 0.17.2
-Build Date UTC : 2018-06-14 10:51:56
+Build Date UTC : 2018-06-14 11:48:24
 -->
--
GitLab
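
A note for context: the doc text touched by this patch describes the gradual
pruning schedule from Narang et al., which distiller.pruning.BaiduRNNPruner
implements. Below is a minimal NumPy sketch of that threshold schedule as I
read the paper (start slope theta, ramp slope phi = 1.5 * theta). The function
and parameter names here are illustrative only and are not Distiller's actual
BaiduRNNPruner API.

import numpy as np

def pruning_threshold(itr, start_itr, ramp_itr, end_itr, q, freq=100):
    # Illustrative sketch, not Distiller code. theta is the slope of the
    # first threshold ramp; the paper sets the second-phase slope
    # phi = 1.5 * theta and derives theta from q, a percentile of the
    # absolute weight values, so that pruning completes by end_itr.
    theta = (2.0 * q * freq) / (2 * (ramp_itr - start_itr)
                                + 3 * (end_itr - ramp_itr))
    phi = 1.5 * theta
    if itr < start_itr:
        return 0.0                      # no pruning before start_itr
    if itr < ramp_itr:                  # phase 1: gentle slope
        return theta * (itr - start_itr + 1) / freq
    # phase 2: steeper slope until end_itr
    return (theta * (ramp_itr - start_itr + 1)
            + phi * (itr - ramp_itr + 1)) / freq

def prune_step(weights, itr, **schedule):
    # Zero every weight whose magnitude falls below the current threshold.
    eps = pruning_threshold(itr, **schedule)
    return np.where(np.abs(weights) < eps, 0.0, weights)

For example, taking q as the 90th percentile of the absolute weights (the
value the paper suggests):

w = np.random.randn(512, 512).astype(np.float32)
q = np.percentile(np.abs(w), 90)
w = prune_step(w, itr=20000, start_itr=5000, ramp_itr=15000, end_itr=30000, q=q)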