diff --git a/docs-src/docs/algo_pruning.md b/docs-src/docs/algo_pruning.md
index aac65d506306c7543f4f6b3f79cad1bfff61f97f..2038a6adc96801bd851a833e8bc4ec16dbbbc62c 100755
--- a/docs-src/docs/algo_pruning.md
+++ b/docs-src/docs/algo_pruning.md
@@ -107,7 +107,7 @@ The authors of [Exploring Sparsity in Recurrent Neural Networks](https://arxiv.o
 
 Distiller's distiller.pruning.BaiduRNNPruner class implements this pruning algorithm.
 
-<center>![Gradual Pruning](imgs/baidu_rnn_pruning.png)</center>
+<center>![Baidu RNN Pruning](imgs/baidu_rnn_pruning.png)</center>
 
 # Structure pruners
 Element-wise pruning can create very sparse models which can be compressed to consume less memory footprint and bandwidth, but without specialized hardware that can compute using the sparse representation of the tensors, we don't gain any speedup of the computation. Structure pruners, remove entire "structures", such as kernels, filters, and even entire feature-maps.
diff --git a/docs/algo_pruning/index.html b/docs/algo_pruning/index.html
index 5ac80cfa5aa0745f9fc157b9832d2c4f734e5eca..4560e4db90bf06598d82b3c98dc7597370a00c23 100644
--- a/docs/algo_pruning/index.html
+++ b/docs/algo_pruning/index.html
@@ -274,7 +274,7 @@ abundant and gradually reduce the number of weights being pruned each time as th
 <h2 id="rnn-pruner">RNN pruner</h2>
 <p>The authors of <a href="https://arxiv.org/abs/1704.05119">Exploring Sparsity in Recurrent Neural Networks</a>, Sharan Narang, Erich Elsen, Gregory Diamos, and Shubho Sengupta, "propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network." They use a gradual pruning schedule which is reminiscent of the schedule used in AGP, for element-wise pruning of RNNs, which they also employ during training. They show pruning of RNN, GRU, LSTM and embedding layers.</p>
 <p>Distiller's distiller.pruning.BaiduRNNPruner class implements this pruning algorithm.</p>
-<p><center><img alt="Gradual Pruning" src="../imgs/baidu_rnn_pruning.png" /></center></p>
+<p><center><img alt="Baidu RNN Pruning" src="../imgs/baidu_rnn_pruning.png" /></center></p>
 <h1 id="structure-pruners">Structure pruners</h1>
 <p>Element-wise pruning can create very sparse models which can be compressed to consume less memory footprint and bandwidth, but without specialized hardware that can compute using the sparse representation of the tensors, we don't gain any speedup of the computation. Structure pruners, remove entire "structures", such as kernels, filters, and even entire feature-maps.</p>
 <h2 id="ranked-structure-pruner">Ranked structure pruner</h2>
diff --git a/docs/index.html b/docs/index.html
index b5ca0802c2de721855bfb233ab3ac87936fe155b..1a4036280f7ae675a033b15213926ecbea2009e0 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -246,5 +246,5 @@ And of course, if we used a sparse or compressed representation, then we are red
 
 <!--
 MkDocs version : 0.17.2
-Build Date UTC : 2018-06-14 10:51:56
+Build Date UTC : 2018-06-14 11:48:24
-->
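
For context on the `distiller.pruning.BaiduRNNPruner` class referenced in the patched docs above, here is a minimal sketch of how such a pruner could be attached to a model through Distiller's compression scheduler. The `CompressionScheduler`/`PruningPolicy` plumbing follows Distiller's standard scheduling API; the pruner's hyper-parameter names and values (`q`, `ramp_epoch_offset`, `ramp_slope_mult`) are assumptions for illustration only and are not taken from this diff.

```python
# Sketch only: attach Distiller's BaiduRNNPruner to a small LSTM.
# Assumption: the pruner takes (name, q, ramp_epoch_offset, ramp_slope_mult,
# weights); these hyper-parameter names are illustrative, not confirmed here.
import torch.nn as nn
import distiller
from distiller.pruning import BaiduRNNPruner

model = nn.LSTM(input_size=128, hidden_size=256, num_layers=1)

# Element-wise pruning of the recurrent weight matrices, using the gradual
# schedule described in "Exploring Sparsity in Recurrent Neural Networks".
pruner = BaiduRNNPruner(name='baidu_rnn_pruner',
                        q=0.15,               # assumed: initial threshold slope
                        ramp_epoch_offset=3,  # assumed: epochs before the ramp phase
                        ramp_slope_mult=2,    # assumed: slope multiplier during the ramp
                        weights=['weight_ih_l0', 'weight_hh_l0'])

scheduler = distiller.CompressionScheduler(model)
scheduler.add_policy(distiller.PruningPolicy(pruner, pruner_args=None),
                     starting_epoch=0, ending_epoch=30, frequency=1)

# In a training loop, the scheduler callbacks (on_epoch_begin,
# on_minibatch_begin, ...) would apply and maintain the pruning masks.
```

In practice, Distiller pruning schedules are usually expressed declaratively in a YAML schedule file rather than built programmatically as above.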