Commit a27aabe3 authored by Neta Zmora, committed by GitHub

Update README.md with November features

Network compression can reduce the memory footprint of a neural network, increase its inference speed and save energy. Distiller provides a [PyTorch](http://pytorch.org/) environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic.
<details><summary><b>What's New in November?</b></summary>
<p>
<a href="https://bizwebcast.intel.cn/aidc/index_en.aspx?utm_source=other">Come see us in AIDC 2018 Beijing!</a>
- Quantization:
  - To improve quantization results: added averaging-based activation clipping to SymmetricLinearQuantizer.
  - For better control over the quantization configuration: added command-line arguments for post-training quantization settings to the image classification sample.
  - Asymmetric post-training quantization (until now only symmetric quantization was supported).
  - Quantization-aware training for range-based (min-max) symmetric and asymmetric quantization.
  - Per-channel quantization support in all of the above scenarios (see the illustrative quantization sketch after this list).
- Added an implementation of [Dynamic Network Surgery for Efficient DNNs](https://arxiv.org/abs/1608.04493) with:
  - A sample implementation on ResNet50 which achieves 82.5% compression at 75.5% Top1 accuracy (-0.6% from the TorchVision baseline).
  - A new SplicingPruner pruning algorithm (see the pruning-and-splicing sketch after this list).
- New features for PruningPolicy:
  1. The pruning policy can use two copies of the weights: one is used during the forward pass, the other during the backward pass. You can control when the mask is frozen and always applied.
  2. Scheduling at the training-iteration granularity (i.e. at mini-batch granularity). Until now, pruning could only be scheduled at epoch granularity.
- New schedules showing AGP (Automated Gradual Pruning) in action, including hybrid schedules that combine structured pruning with element-wise pruning (the AGP sparsity-ramp sketch after this list shows the underlying formula).
- Filter and channel pruning:
  - Fixed problems arising from non-trivial data dependencies.
  - Added [documentation](https://github.com/NervanaSystems/distiller/wiki/Pruning-Filters-&-Channels).
  - Changed the YAML API to express complex dependencies when pruning channels and filters.
  - Fixed several bugs.
- Image classifier compression sample:
  - Added a new command-line argument to report the top N best accuracy scores, instead of just the highest score.
  - Added an option to load a model in serialized mode.
- We've fixed a couple of Early Exit bugs, and improved the [documentation](https://nervanasystems.github.io/distiller/algo_earlyexit/index.html)
- We presented Distiller at [AIDC 2018 Beijing](https://bizwebcast.intel.cn/aidc/index_en.aspx?utm_source=other) and @haim-barad presented his Early Exit research implemented using Distiller.
- We've looked up our star-gazers (that might be you ;-) and where they are located:<br>
<center><img src="imgs/wiki/distiller_star_gazer_Nov11_2018.png"></center>
*The map was generated by [this utility](https://github.com/netaz/github_stars_analytics).*
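
The quantization items above mention symmetric vs. asymmetric range-based quantization and per-channel support. The snippet below is a minimal, self-contained sketch of those concepts in plain PyTorch; it is not Distiller's SymmetricLinearQuantizer, and the function name and arguments (linear_quantize, asymmetric, per_channel) are made up for illustration. The averaging of per-batch min/max statistics (the "averaging-based clipping" mentioned above) is omitted here.

```python
# Illustrative sketch (not Distiller code): range-based linear quantization of a
# weight tensor, symmetric vs. asymmetric, with optional per-channel ranges.
import torch

def linear_quantize(w: torch.Tensor, num_bits: int = 8,
                    asymmetric: bool = False, per_channel: bool = False):
    if per_channel:
        # One (min, max) range per output channel (dim 0) instead of per tensor.
        dims = tuple(range(1, w.dim()))
        w_min = w.amin(dim=dims, keepdim=True)
        w_max = w.amax(dim=dims, keepdim=True)
    else:
        w_min, w_max = w.min(), w.max()

    if asymmetric:
        # Asymmetric: map [min, max] onto [0, 2^n - 1] using a zero-point.
        qmin, qmax = 0, 2 ** num_bits - 1
        scale = (w_max - w_min).clamp(min=1e-8) / (qmax - qmin)
        zero_point = torch.round(qmin - w_min / scale)
    else:
        # Symmetric: map [-max|w|, +max|w|] onto [-(2^(n-1) - 1), 2^(n-1) - 1].
        qmax = 2 ** (num_bits - 1) - 1
        qmin = -qmax
        scale = torch.max(w_min.abs(), w_max.abs()).clamp(min=1e-8) / qmax
        zero_point = torch.zeros_like(scale)

    q = torch.clamp(torch.round(w / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale  # de-quantized ("fake-quantized") tensor
```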
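
The Dynamic Network Surgery and PruningPolicy items refer to pruning with splicing and to keeping two copies of the weights: a masked copy for the forward pass and a dense copy that keeps receiving weight updates. The sketch below illustrates both ideas on a single layer; it is not Distiller's SplicingPruner or PruningPolicy, and the class name and thresholds are invented for the example.

```python
# Illustrative sketch (not Distiller code): pruning-and-splicing in the spirit of
# "Dynamic Network Surgery". The forward pass sees masked weights, while the
# dense weights keep receiving gradient updates, so a pruned connection whose
# magnitude grows back can be spliced back in.
import torch
import torch.nn.functional as F

class MaskedLinear(torch.nn.Linear):
    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.register_buffer('mask', torch.ones_like(self.weight))
        self.mask_frozen = False  # once frozen, the mask is no longer updated

    def update_mask(self, prune_thresh, splice_thresh):
        # Meant to be called by a scheduler at mini-batch granularity:
        # prune small weights, splice back weights that grew past splice_thresh.
        if self.mask_frozen:
            return
        w_abs = self.weight.detach().abs()
        self.mask[w_abs < prune_thresh] = 0.0
        self.mask[w_abs > splice_thresh] = 1.0

    def forward(self, x):
        w_masked = self.weight * self.mask
        # Straight-through trick: the forward pass uses the masked weights, but
        # gradients flow to the dense copy as if the mask were absent, so
        # currently-pruned weights are still updated and may be spliced back.
        w_eff = self.weight + (w_masked - self.weight).detach()
        return F.linear(x, w_eff, self.bias)
```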
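
The AGP schedules mentioned above ramp the target sparsity according to the cubic schedule from Zhu & Gupta's "To prune, or not to prune" paper. A small sketch of that ramp, with ad-hoc names:

```python
# Illustrative sketch: the AGP target-sparsity ramp
#   s_t = s_f + (s_i - s_f) * (1 - (t - t0) / (t_end - t0))**3
# which starts at initial_sparsity and saturates at final_sparsity.
def agp_target_sparsity(step, initial_sparsity, final_sparsity, start_step, end_step):
    if step <= start_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3
```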
</p>
</details>
<details><summary><b>What's New in October?</b></summary>
...