- Aug 11, 2019
Neta Zmora authored

- Aug 06, 2019
Neta Zmora authored

* An implementation of AMC (the previous implementation code has moved to a new location under /distiller/examples/auto_compression/amc). AMC is aligned with the 'master' branch of Coach.
* compress_classifier.py is refactored: the base code moved to /distiller/apputils/image_classifier.py, and further refactoring will follow. We want to provide a simple and small API to the basic features of a classifier-compression application, to help applications that want to use the main features of a classifier-compression application without the standard training regimen. AMC is one example of a stand-alone application that needs to leverage the capabilities of a classifier-compression application, but is currently coupled to `compress_classifier.py`; `multi-finetune.py` is another example.
* ranked_structures_pruner.py:
  * Added support for grouping channels/filters. Sometimes we want to prune a group of structures, e.g. groups of 8 channels. This feature does not force the groups to be adjacent, so it is more like a set of structures: e.g. when pruning channels from a 64-channel convolution, grouped by 8 channels, we will prune exactly one of 0/8/16/24/32/40/48/56 channels, i.e. always a multiple of 8 channels, excluding the set of all 64 channels (see the rounding sketch after this list).
  * Added FMReconstructionChannelPruner: channel pruning that uses L1-magnitude to rank and select channels to remove, and feature-map reconstruction to improve resilience to the pruning.
* Added a script to run multiple instances of an experiment in different processes: examples/classifier_compression/multi-run.py.
* Set the seed value even when it is not specified by the command-line arguments, so that we can try to recreate the session.
* Added pruning ranking noise: ranking noise introduces Gaussian noise when ranking channels/filters using the Lp-norm. The noise is introduced using the epsilon-greedy methodology, where ranking using the exact Lp-norm is considered greedy (a sketch follows below).
* Added configurable rounding of the pruning level: choose whether to round up or down when rounding the number of structures to prune (rounding is always to an integer).
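
As a rough illustration of the grouping and rounding behavior described above, here is a minimal sketch (the helper name and signature are hypothetical, not Distiller's API):

```python
import math

def num_structures_to_prune(n_channels, fraction_to_prune,
                            group_size=1, rounding_fn=math.floor):
    """Quantize the number of channels/filters to prune to a multiple
    of group_size, using a configurable rounding function.
    (Hypothetical helper for illustration only.)"""
    n_groups = n_channels // group_size
    groups_to_prune = int(rounding_fn(n_groups * fraction_to_prune))
    n_to_prune = groups_to_prune * group_size
    # Never prune every structure (e.g. never all 64 channels).
    return min(max(n_to_prune, 0), n_channels - group_size)

# 64 channels grouped by 8: 0.3 * 8 groups = 2.4 groups
assert num_structures_to_prune(64, 0.3, group_size=8) == 16  # round down
assert num_structures_to_prune(64, 0.3, group_size=8,
                               rounding_fn=math.ceil) == 24  # round up
```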
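
And a hedged sketch of the epsilon-greedy ranking noise, as one reading of the description above (the function name is hypothetical; this is not Distiller's implementation):

```python
import torch

def rank_channels(weights, epsilon=0.0, p=1):
    """Rank the input channels of a 4D conv weight tensor
    (out, in, kh, kw) by their Lp-norm, optionally perturbed by
    Gaussian ranking noise.  (Illustrative sketch only.)"""
    n_channels = weights.size(1)
    # Lp-norm of each input channel, taken over (out, kh, kw)
    scores = weights.transpose(0, 1).reshape(n_channels, -1).norm(p=p, dim=1)
    if torch.rand(1).item() < epsilon:
        # Non-greedy step: perturb the scores before ranking
        scores = scores + torch.randn_like(scores) * scores.std()
    return torch.argsort(scores)  # ascending: weakest channels first
```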

- Mar 05, 2019
Neta Zmora authored
amc-ft-frequency: sometimes we may want to fine-tune the weights after every 'n' episode steps (action-steps). This new argument controls how many action-steps pass between fine-tuning (FT). By default, there is no fine-tuning between steps.

amc-reward-frequency: by default, we only provide a non-zero reward at the end of an episode. This argument allows us to provide rewards at a higher frequency (see the episode-loop sketch below).

This commit also reorders the ResNet layer names, so that layers are processed in near-topological order. This is simply to help interpret the data in the AMC Jupyter notebooks.
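
To make the two frequency knobs concrete, here is a schematic episode loop. This is a sketch only: `fine_tune`, `compute_reward`, and the env/agent interfaces are hypothetical stand-ins, not Distiller's AMC code.

```python
def run_amc_episode(env, agent, ft_frequency=None, reward_frequency=None):
    """One AMC episode; each action-step compresses one layer.
    (Schematic; helpers are hypothetical.)"""
    state = env.reset()
    for step in range(env.num_layers):
        action = agent.act(state)  # e.g. a pruning ratio for this layer
        state = env.step(action)
        last_step = (step == env.num_layers - 1)

        # amc-ft-frequency: fine-tune every n action-steps
        # (by default there is no fine-tuning between steps)
        if ft_frequency and (step + 1) % ft_frequency == 0 and not last_step:
            fine_tune(env.model, epochs=1)

        # amc-reward-frequency: by default the reward is non-zero
        # only at the end of the episode
        if last_step or (reward_frequency and (step + 1) % reward_frequency == 0):
            reward = compute_reward(env.model)
        else:
            reward = 0.0
        agent.observe(state, action, reward, done=last_step)
```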

- Feb 17, 2019
Neta Zmora authored
--amc-reward-frequency: computing the reward requires running the evaluated network on the test dataset (or parts of it), and may involve short-term fine-tuning before the evaluation (depending on the configuration). Use this new argument to configure the number of steps/iterations between reward computations.
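
A minimal sketch of a reward computation along these lines (the helpers `fine_tune` and `evaluate` are hypothetical; the real logic lives in Distiller's AMC environment):

```python
def compute_reward(model, test_loader, short_term_ft_epochs=0):
    """Evaluate the compressed network, optionally after a brief
    fine-tuning pass, and use the result as the RL reward.
    (Illustrative sketch; helpers are hypothetical.)"""
    if short_term_ft_epochs:
        # Optional short-term fine-tuning before the evaluation
        fine_tune(model, epochs=short_term_ft_epochs)
    # Run on the test dataset (or part of it); e.g. top-1 accuracy
    return evaluate(model, test_loader)
```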

- Feb 13, 2019
Neta Zmora authored
Merging the 'amc' branch with 'master'. This updates the automated compression code in 'master', and adds a greedy filter-pruning algorithm.
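
The commit names the greedy filter-pruning algorithm without detailing it; as a generic sketch of this kind of greedy selection (not Distiller's implementation), one filter is removed at a time, always the one whose removal hurts the score least:

```python
from typing import Callable, FrozenSet, List, Tuple

FilterId = Tuple[str, int]  # (layer name, filter index)

def greedy_prune(candidates: List[FilterId],
                 score_fn: Callable[[FrozenSet[FilterId]], float],
                 n_to_remove: int) -> FrozenSet[FilterId]:
    """Greedily select filters to remove: at each step, pick the
    filter whose removal yields the best score (e.g. top-1 accuracy).
    (Generic sketch; score_fn evaluates the model with a set removed.)"""
    removed: FrozenSet[FilterId] = frozenset()
    for _ in range(n_to_remove):
        remaining = [c for c in candidates if c not in removed]
        best = max(remaining, key=lambda c: score_fn(removed | {c}))
        removed = removed | {best}
    return removed
```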