Unverified Commit 0c2fcb55, authored by Neta Zmora, committed by GitHub

docs-src/usage: Fix wrong schedule path (#4) (#5)

Two places in the documentation gave the wrong path to the example
Alexnet sensitivity pruning schedule.
parent a7ed8cad
@@ -22,7 +22,7 @@ $ python3 compress_classifier.py --help
 For example:
 ```
-$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../imagenet/alexnet/pruning/alexnet.schedule_sensitivity.yaml
+$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../sensitivity-pruning/alexnet.schedule_sensitivity.yaml
 Parameters:
 +----+---------------------------+------------------+---------------+----------------+------------+------------+----------+----------+----------+------------+---------+----------+------------+
@@ -65,16 +65,17 @@ Parameters:
 Let's look at the command line again:
 ```
-$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../imagenet/alexnet/pruning/alexnet.schedule_sensitivity.yaml
+$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../sensitivity-pruning/alexnet.schedule_sensitivity.yaml
 ```
 In this example, we prune a TorchVision pre-trained AlexNet network, using the following configuration:
 - Learning-rate of 0.005
-- Print progress every 50 mini-batches
-- Use 44 worker threads to load data
-- Run for 90 epochs. Torchvision's pre-trained models did not store the epoch metadata, so pruning starts at epoch 0. When you train and prune your own networks, the last training epoch is saved as a metadata with the model. Therefore, when you load such models, the first epoch is not 0, but the last training epoch.
+- Print progress every 50 mini-batches.
+- Use 44 worker threads to load data (make sure to use something suitable for your machine).
+- Run for 90 epochs. Torchvision's pre-trained models did not store the epoch metadata, so pruning starts at epoch 0. When you train and prune your own networks, the last training epoch is saved as a metadata with the model. Therefore, when you load such models, the first epoch is not 0, but it is the last training epoch.
 - The pruning schedule is provided in ```alexnet.schedule_sensitivity.yaml```
-<br> Log files are written to directory ```logs```.
+- Log files are written to directory ```logs```.
 ## Examples
 Distiller comes with several example schedules which can be used together with ```compress_classifier.py```.
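For readers unfamiliar with the file that `--compress` points to, the sketch below shows the general shape of a Distiller sensitivity-pruning schedule. The layer names, sensitivity values, and epoch range are illustrative placeholders; the actual `alexnet.schedule_sensitivity.yaml` shipped in the repository is the authoritative version.

```yaml
# Illustrative sketch of a sensitivity-pruning schedule (not the repository file).
version: 1
pruners:
  my_pruner:
    class: 'SensitivityPruner'
    # Per-parameter sensitivity values; the names and numbers below are placeholders.
    sensitivities:
      'features.module.0.weight': 0.25
      'features.module.3.weight': 0.35

policies:
  - pruner:
      instance_name: 'my_pruner'
    # Placeholder epoch range: apply the pruner every 2 epochs early in training.
    starting_epoch: 0
    ending_epoch: 38
    frequency: 2
```

Note that the corrected `--compress` argument is a relative path, so it assumes the command is run from the example's own directory, as the surrounding documentation does.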
@@ -246,5 +246,5 @@ And of course, if we used a sparse or compressed representation, then we are red
 <!--
 MkDocs version : 0.17.2
-Build Date UTC : 2018-05-14 13:58:17
+Build Date UTC : 2018-05-22 09:40:34
 -->
Source diff could not be displayed: it is too large.
@@ -4,7 +4,7 @@
 <url>
 <loc>/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -12,7 +12,7 @@
 <url>
 <loc>/install/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -20,7 +20,7 @@
 <url>
 <loc>/usage/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -28,7 +28,7 @@
 <url>
 <loc>/schedule/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -37,19 +37,19 @@
 <url>
 <loc>/pruning/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
 <url>
 <loc>/regularization/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
 <url>
 <loc>/quantization/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -59,13 +59,13 @@
 <url>
 <loc>/algo_pruning/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
 <url>
 <loc>/algo_quantization/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -74,7 +74,7 @@
 <url>
 <loc>/model_zoo/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -82,7 +82,7 @@
 <url>
 <loc>/jupyter/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -90,7 +90,7 @@
 <url>
 <loc>/design/index.html</loc>
-<lastmod>2018-05-14</lastmod>
+<lastmod>2018-05-22</lastmod>
 <changefreq>daily</changefreq>
 </url>
@@ -197,7 +197,7 @@
 </code></pre>
 <p>For example:</p>
-<pre><code>$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../imagenet/alexnet/pruning/alexnet.schedule_sensitivity.yaml
+<pre><code>$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../sensitivity-pruning/alexnet.schedule_sensitivity.yaml
 Parameters:
 +----+---------------------------+------------------+---------------+----------------+------------+------------+----------+----------+----------+------------+---------+----------+------------+
@@ -239,16 +239,18 @@ Parameters:
 </code></pre>
 <p>Let's look at the command line again:</p>
-<pre><code>$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../imagenet/alexnet/pruning/alexnet.schedule_sensitivity.yaml
+<pre><code>$ time python3 compress_classifier.py -a alexnet --lr 0.005 -p 50 ../../../data.imagenet -j 44 --epochs 90 --pretrained --compress=../sensitivity-pruning/alexnet.schedule_sensitivity.yaml
 </code></pre>
-<p>In this example, we prune a TorchVision pre-trained AlexNet network, using the following configuration:
-- Learning-rate of 0.005
-- Print progress every 50 mini-batches
-- Use 44 worker threads to load data
-- Run for 90 epochs. Torchvision's pre-trained models did not store the epoch metadata, so pruning starts at epoch 0. When you train and prune your own networks, the last training epoch is saved as a metadata with the model. Therefore, when you load such models, the first epoch is not 0, but the last training epoch.
-- The pruning schedule is provided in <code>alexnet.schedule_sensitivity.yaml</code>
-<br> Log files are written to directory <code>logs</code>.</p>
+<p>In this example, we prune a TorchVision pre-trained AlexNet network, using the following configuration:</p>
+<ul>
+<li>Learning-rate of 0.005</li>
+<li>Print progress every 50 mini-batches.</li>
+<li>Use 44 worker threads to load data (make sure to use something suitable for your machine).</li>
+<li>Run for 90 epochs. Torchvision's pre-trained models did not store the epoch metadata, so pruning starts at epoch 0. When you train and prune your own networks, the last training epoch is saved as a metadata with the model. Therefore, when you load such models, the first epoch is not 0, but it is the last training epoch.</li>
+<li>The pruning schedule is provided in <code>alexnet.schedule_sensitivity.yaml</code></li>
+<li>Log files are written to directory <code>logs</code>.</li>
+</ul>
 <h2 id="examples">Examples</h2>
 <p>Distiller comes with several example schedules which can be used together with <code>compress_classifier.py</code>.
 These example schedules (YAML) files, contain the command line that is used in order to invoke the schedule (so that you can easily recreate the results in your environment), together with the results of the pruning or regularization. The results usually contain a table showing the sparsity of each of the model parameters, together with the validation and test top1, top5 and loss scores.</p>