Commit ebb89126 authored by Neta Zmora

fix typo: Jupyter spelled as Jupiter

parent 958b361f
@@ -39,7 +39,7 @@ Network compression can reduce the memory footprint of a neural network, increas
 + [Install dependencies](#install-dependencies)
 * [Getting Started](#getting-started)
 + [Example invocations of the sample application](#example-invocations-of-the-sample-application)
-+ [Explore the sample Jupiter notebooks](#explore-the-sample-jupiter-notebooks)
++ [Explore the sample Jupyter notebooks](#explore-the-sample-jupyter-notebooks)
 * [Set up the classification datasets](#set-up-the-classification-datasets)
 * [Running the tests](#running-the-tests)
 * [Generating the HTML documentation site](#generating-the-html-documentation-site)
@@ -137,7 +137,7 @@ You can jump head-first into some limited examples of network compression, to ge
 Distiller comes with a sample application for compressing image classification DNNs, ```compress_classifier.py``` located at ```distiller/examples/classifier_compression```.
-We'll show you how to use it for some simple use-cases, and will point you to some ready-to-go Jupiter notebooks.
+We'll show you how to use it for some simple use-cases, and will point you to some ready-to-go Jupyter notebooks.
 For more details, there are some other resources you can refer to:
 + [Model zoo](https://nervanasystems.github.io/distiller/model_zoo/index.html)
@@ -196,7 +196,7 @@ This example performs 8-bit quantization of ResNet20 for CIFAR10. We've include
 $ python3 compress_classifier.py -a resnet20-cifar ../../../data.cifar10 --resume ../examples/ssl/checkpoints/checkpoint_trained_dense.pth.tar --quantize --evaluate
 ```
-### Explore the sample Jupiter notebooks
+### Explore the sample Jupyter notebooks
 The set of notebooks that come with Distiller is described [here](https://nervanasystems.github.io/distiller/jupyter/index.html#using-the-distiller-notebooks), which also explains the steps to install the Jupyter notebook server.<br>
 After installing and running the server, take a look at the [notebook](https://github.com/NervanaSystems/distiller/blob/master/jupyter/sensitivity_analysis.ipynb) covering pruning sensitivity analysis.
......
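The `--quantize --evaluate` invocation in the hunk above performs 8-bit quantization of the trained model. As a rough, generic sketch of what mapping weights onto 8-bit integers involves (an illustration only, not Distiller's quantizer), symmetric linear quantization of a single tensor can be written as:

```python
# Generic illustration of symmetric linear 8-bit quantization -- not
# Distiller's implementation, just the idea behind the --quantize example.
import torch

def quantize_dequantize_8bit(w: torch.Tensor) -> torch.Tensor:
    scale = 127.0 / w.abs().max().clamp(min=1e-8)   # map max |w| to the int8 range
    w_q = torch.round(w * scale).clamp(-127, 127)   # 8-bit integer representation
    return w_q / scale                              # dequantize to inspect the error

w = torch.randn(64, 16, 3, 3)                       # e.g. one conv layer's weights
w_sim = quantize_dequantize_8bit(w)
print("max quantization error:", (w - w_sim).abs().max().item())
```

The quantization schemes Distiller actually implements are described in the project's quantization documentation (the `/quantization/index.html` page listed in the sitemap below).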
@@ -5,7 +5,7 @@ These instructions will help get Distiller up and running on your local machine.
 You may also want to refer to these resources:
 * [Dataset installation](https://github.com/NervanaSystems/distiller#set-up-the-classification-datasets) instructions.
-* [Jupiter installation](https://nervanasystems.github.io/distiller/jupyter/index.html#installation) instructions.
+* [Jupyter installation](https://nervanasystems.github.io/distiller/jupyter/index.html#installation) instructions.
 Notes:
 - Distiller has only been tested on Ubuntu 16.04 LTS, and with Python 3.5.
......
@@ -50,7 +50,7 @@ Regularization can also be used to induce sparsity. To induce element-wise spar
 \\(l_2\\)-norm regularization reduces overfitting and improves a model's accuracy by shrinking large parameters, but it does not force these parameters to absolute zero. \\(l_1\\)-norm regularization sets some of the parameter elements to zero, therefore limiting the model's capacity while making the model simpler. This is sometimes referred to as *feature selection* and gives us another interpretation of pruning.
-[One](https://github.com/NervanaSystems/distiller/blob/master/jupyter/L1-regularization.ipynb) of Distiller's Jupiter notebooks explains how the \\(l_1\\)-norm regularizer induces sparsity, and how it interacts with \\(l_2\\)-norm regularization.
+[One](https://github.com/NervanaSystems/distiller/blob/master/jupyter/L1-regularization.ipynb) of Distiller's Jupyter notebooks explains how the \\(l_1\\)-norm regularizer induces sparsity, and how it interacts with \\(l_2\\)-norm regularization.
 If we configure ```weight_decay``` to zero and use \\(l_1\\)-norm regularization, then we have:
......
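The hunk above ends mid-formula; the rendered HTML further down in this diff shows the full expression, \(loss(W;x;y) = loss_D(W;x;y) + \lambda_R \lVert W \rVert_1\). A minimal PyTorch sketch of that setup, using placeholder names and synthetic data rather than Distiller's API, looks roughly like this:

```python
# Minimal sketch of l1-norm regularization in a PyTorch training loop.
# The model, data and lambda_r below are placeholders, not Distiller's API.
import torch
import torch.nn as nn

model = nn.Linear(100, 10)
criterion = nn.CrossEntropyLoss()
# weight_decay=0 disables the optimizer's built-in l2 penalty, as in the text
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0)
lambda_r = 1e-4                                          # regularization strength

# synthetic stand-in for a real dataset
dataloader = [(torch.randn(32, 100), torch.randint(0, 10, (32,))) for _ in range(4)]

for inputs, targets in dataloader:
    optimizer.zero_grad()
    data_loss = criterion(model(inputs), targets)        # loss_D(W; x; y)
    l1_norm = sum(p.abs().sum() for p in model.parameters())
    (data_loss + lambda_r * l1_norm).backward()          # loss_D + lambda_R * ||W||_1
    optimizer.step()
```

Because the \(l_1\) penalty pushes each weight toward zero at a constant rate regardless of its magnitude, small weights are driven all the way to zero, which is what produces the element-wise sparsity discussed in the hunk above.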
@@ -236,5 +236,5 @@ And of course, if we used a sparse or compressed representation, then we are red
 <!--
 MkDocs version : 0.17.2
-Build Date UTC : 2018-04-24 23:01:45
+Build Date UTC : 2018-04-28 18:01:09
 -->
@@ -160,7 +160,7 @@
 <p>You may also want to refer to these resources:</p>
 <ul>
 <li><a href="https://github.com/NervanaSystems/distiller#set-up-the-classification-datasets">Dataset installation</a> instructions.</li>
-<li><a href="https://nervanasystems.github.io/distiller/jupyter/index.html#installation">Jupiter installation</a> instructions.</li>
+<li><a href="https://nervanasystems.github.io/distiller/jupyter/index.html#installation">Jupyter installation</a> instructions.</li>
 </ul>
 <p>Notes:
 - Distiller has only been tested on Ubuntu 16.04 LTS, and with Python 3.5.
......
@@ -203,7 +203,7 @@ for input, target in dataset:
 \lVert W \rVert_1 = l_1(W) = \sum_{i=1}^{|W|} |w_i|
 \]</p>
 <p>\(l_2\)-norm regularization reduces overfitting and improves a model's accuracy by shrinking large parameters, but it does not force these parameters to absolute zero. \(l_1\)-norm regularization sets some of the parameter elements to zero, therefore limiting the model's capacity while making the model simpler. This is sometimes referred to as <em>feature selection</em> and gives us another interpretation of pruning.</p>
-<p><a href="https://github.com/NervanaSystems/distiller/blob/master/jupyter/L1-regularization.ipynb">One</a> of Distiller's Jupiter notebooks explains how the \(l_1\)-norm regularizer induces sparsity, and how it interacts with \(l_2\)-norm regularization.</p>
+<p><a href="https://github.com/NervanaSystems/distiller/blob/master/jupyter/L1-regularization.ipynb">One</a> of Distiller's Jupyter notebooks explains how the \(l_1\)-norm regularizer induces sparsity, and how it interacts with \(l_2\)-norm regularization.</p>
 <p>If we configure <code>weight_decay</code> to zero and use \(l_1\)-norm regularization, then we have:
 \[
 loss(W;x;y) = loss_D(W;x;y) + \lambda_R \lVert W \rVert_1
......
@@ -4,7 +4,7 @@
 <url>
 <loc>/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -12,7 +12,7 @@
 <url>
 <loc>/install/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -20,7 +20,7 @@
 <url>
 <loc>/usage/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -28,7 +28,7 @@
 <url>
 <loc>/schedule/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -37,19 +37,19 @@
 <url>
 <loc>/pruning/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
 <url>
 <loc>/regularization/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
 <url>
 <loc>/quantization/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -58,7 +58,7 @@
 <url>
 <loc>/algorithms/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -66,7 +66,7 @@
 <url>
 <loc>/model_zoo/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -74,7 +74,7 @@
 <url>
 <loc>/jupyter/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -82,7 +82,7 @@
 <url>
 <loc>/design/index.html</loc>
-<lastmod>2018-04-25</lastmod>
+<lastmod>2018-04-28</lastmod>
 <changefreq>daily</changefreq>
 </url>
......