From ab35fed73738b497b6ac0c5a1f83a74fac29b54f Mon Sep 17 00:00:00 2001
From: Thomas Fan <thomasjpfan@gmail.com>
Date: Fri, 22 Jun 2018 14:51:59 -0400
Subject: [PATCH] DOC: Fix docs build path and notebook links (#10)

Reviewed and looking good.  We have to set a convention for naming files.
---
 README.md                                        |  2 +-
 ...doc_requirements.txt => doc-requirements.txt} |  0
 docs-src/docs/jupyter.md                         | 16 ++++++++--------
 3 files changed, 9 insertions(+), 9 deletions(-)
 rename docs-src/{doc_requirements.txt => doc-requirements.txt} (100%)

diff --git a/README.md b/README.md
index 27de91b..4081037 100755
--- a/README.md
+++ b/README.md
@@ -263,7 +263,7 @@ $ pip3 install -r doc-requirements.txt
 
 To build the project documentation run:
 ```
-$ cd distiller/docs
+$ cd distiller/docs-src
 $ mkdocs build --clean
 ```
 This will create a folder named 'site' which contains the documentation website.
diff --git a/docs-src/doc_requirements.txt b/docs-src/doc-requirements.txt
similarity index 100%
rename from docs-src/doc_requirements.txt
rename to docs-src/doc-requirements.txt
diff --git a/docs-src/docs/jupyter.md b/docs-src/docs/jupyter.md
index 8de4cef..6e4e082 100755
--- a/docs-src/docs/jupyter.md
+++ b/docs-src/docs/jupyter.md
@@ -32,13 +32,13 @@ We welcome new ideas and implementations of Jupyter.
 
 Roughly, the notebooks can be divided into three categories.
 ### Theory
-- [jupyter/L1-regularization.ipynb](localhost:8888/notebooks/jupyter/L1-regularization.ipynb): Experience hands-on how L1 and L2 regularization affect the solution of a toy loss-minimization problem, to get a better grasp on the interaction between regularization and sparsity.
-- [jupyter/alexnet_insights.ipynb](localhost:8888/notebooks/alexnet_insights.ipynb): This notebook reviews and compares a couple of pruning sessions on Alexnet.  We compare distributions, performance, statistics and show some visualizations of the weights tensors.
+- [jupyter/L1-regularization.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/L1-regularization.ipynb): Experience hands-on how L1 and L2 regularization affect the solution of a toy loss-minimization problem, to get a better grasp on the interaction between regularization and sparsity.
+- [jupyter/alexnet_insights.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/alexnet_insights.ipynb): This notebook reviews and compares a couple of pruning sessions on Alexnet.  We compare distributions, performance, statistics and show some visualizations of the weights tensors.
 ### Preparation for compression
-- [jupyter/model_summary.ipynb](localhost:8888/notebooks/jupyter/model_summary.ipynb): Begin by getting familiar with your model.  Examine the sizes and properties of layers and connections.  Study which layers are compute-bound, and which are bandwidth-bound, and decide how to prune or regularize the model.
-- [jupyter/sensitivity_analysis.ipynb](localhost:8888/notebooks/jupyter/sensitivity_analysis.ipynb): If you performed pruning sensitivity analysis on your model, this notebook can help you load the results and graphically study how the layers behave.
-- [jupyter/interactive_lr_scheduler.ipynb](localhost:8888/notebooks/jupyter/interactive_lr_scheduler.ipynb): The learning rate decay policy affects pruning results, perhaps as much as it affects training results.  Graph a few LR-decay policies to see how they behave.
-- [jupyter/jupyter/agp_schedule.ipynb](localhost:8888/notebooks/jupyter/agp_schedule.ipynb): If you are using the Automated Gradual Pruner, this notebook can help you tune the schedule.
+- [jupyter/model_summary.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/model_summary.ipynb): Begin by getting familiar with your model.  Examine the sizes and properties of layers and connections.  Study which layers are compute-bound, and which are bandwidth-bound, and decide how to prune or regularize the model.
+- [jupyter/sensitivity_analysis.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/sensitivity_analysis.ipynb): If you performed pruning sensitivity analysis on your model, this notebook can help you load the results and graphically study how the layers behave.
+- [jupyter/interactive_lr_scheduler.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/interactive_lr_scheduler.ipynb): The learning rate decay policy affects pruning results, perhaps as much as it affects training results.  Graph a few LR-decay policies to see how they behave.
+- [jupyter/agp_schedule.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/agp_schedule.ipynb): If you are using the Automated Gradual Pruner, this notebook can help you tune the schedule.
 ### Reviewing experiment results
-- [jupyter/compare_executions.ipynb](localhost:8888/notebooks/jupyter/compare_executions.ipynb): This is a simple notebook to help you graphically compare the results of executions of several experiments.
-- [jupyter/compression_insights.ipynb](localhost:8888/notebooks/compression_insights.ipynb): This notebook is packed with code, tables and graphs to us understand the results of a compression session.  Distiller provides *summaries*, which are Pandas dataframes, which contain statistical information about you model.  We chose to use Pandas dataframes because they can be sliced, queried, summarized and graphed with a few lines of code.  
+- [jupyter/compare_executions.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/compare_executions.ipynb): This is a simple notebook to help you graphically compare the results of executions of several experiments.
+- [jupyter/compression_insights.ipynb](https://github.com/NervanaSystems/distiller/blob/master/jupyter/compression_insights.ipynb): This notebook is packed with code, tables and graphs to help us understand the results of a compression session.  Distiller provides *summaries*, which are Pandas dataframes containing statistical information about your model.  We chose to use Pandas dataframes because they can be sliced, queried, summarized and graphed with a few lines of code.
-- 
GitLab