- Aug 26, 2019
Neta Zmora authored
Added support for saving the randomly initialized network before starting training, and added an implementation showing how to extract a (winning) lottery ticket from the pristine network and from the pruned network.
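A minimal sketch of the lottery-ticket idea described above (illustrative, not Distiller's API): keep the untrained ("pristine") weights, derive a binary mask from the pruned model, and apply that mask to the untrained weights to obtain the winning ticket.

```python
import copy
import torch
import torch.nn as nn

def extract_winning_ticket(untrained_model: nn.Module, pruned_model: nn.Module) -> nn.Module:
    """Copy the sparsity pattern of `pruned_model` onto a copy of `untrained_model`."""
    ticket = copy.deepcopy(untrained_model)
    pruned_params = dict(pruned_model.named_parameters())
    with torch.no_grad():
        for name, param in ticket.named_parameters():
            # Mask is 1 wherever the pruned model kept a weight, 0 where it zeroed it.
            mask = (pruned_params[name] != 0).to(param.dtype)
            param.mul_(mask)
    return ticket

# Usage: save the model's state_dict before training starts, train and prune as usual,
# then call extract_winning_ticket(initial_model, pruned_model) and retrain the ticket.
```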
- Aug 06, 2019
Neta Zmora authored
* An implementation of AMC (the previous implementation has moved to a new location under /distiller/examples/auto_compression/amc). AMC is aligned with the 'master' branch of Coach.
* compress_classifier.py is refactored. The base code moved to /distiller/apputils/image_classifier.py. Further refactoring will follow. We want to provide a simple and small API to the basic features of a classifier-compression application. This will help applications that want to use the main features of a classifier-compression application without the standard training regimen. AMC is one example of a stand-alone application that needs to leverage the capabilities of a classifier-compression application, but is currently coupled to `compress_classifier.py`. `multi-finetune.py` is another example.
* ranked_structures_pruner.py:
  * Added support for grouping channels/filters. Sometimes we want to prune a group of structures, e.g. groups of 8 channels. This feature does not force the groups to be adjacent, so it is more like a set of structures. E.g., when pruning channels from a 64-channel convolution, grouped by 8 channels, we will prune exactly one of 0/8/16/24/32/40/48/56 channels, i.e. always a multiple of 8 channels, excluding the set of all 64 channels (a sketch of this grouped selection follows the list).
  * Added FMReconstructionChannelPruner: channel pruning that uses L1 magnitude to rank and select channels to remove, and feature-map reconstruction to improve resilience to the pruning.
* Added a script to run multiple instances of an experiment in different processes: examples/classifier_compression/multi-run.py
* Set the seed value even when not specified by the command-line arguments, so that we can try to recreate the session.
* Added pruning ranking noise: ranking noise introduces Gaussian noise when ranking channels/filters using the Lp-norm. The noise is introduced using the epsilon-greedy methodology, where ranking using the exact Lp-norm is considered greedy.
* Added configurable rounding of the pruning level: choose whether to round up or down when rounding the number of structures to prune (rounding is always to an integer).
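A hedged sketch of the grouped channel selection described above (not Distiller's implementation): rank input channels by L1 magnitude and prune a number of channels that is a multiple of the group size, never the whole layer. Rounding here is always down, though the commit makes the direction configurable.

```python
import torch

def channels_to_prune(weight: torch.Tensor, fraction: float, group_size: int = 8) -> torch.Tensor:
    """weight: (out_ch, in_ch, kh, kw). Returns indices of input channels to prune."""
    num_channels = weight.shape[1]
    # L1 magnitude of each input channel, summed across all filters and kernel positions.
    scores = weight.abs().sum(dim=(0, 2, 3))
    # Round the requested count down to a multiple of group_size, never pruning all channels.
    n = int(fraction * num_channels) // group_size * group_size
    n = min(n, num_channels - group_size)
    if n <= 0:
        return torch.tensor([], dtype=torch.long)
    return torch.argsort(scores)[:n]  # the n lowest-magnitude channels (not necessarily adjacent)

# E.g. for a 64-channel conv and fraction=0.55: int(35.2)//8*8 = 32 channels are pruned.
```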
- Jul 01, 2019
Guy Jacob authored
- May 26, 2019
Neta Zmora authored
Added set_seed() to Distiller and added support for seeding the PRNG when setting --deterministic mode (prior to this change, the seed was always set to zero when running in deterministic mode). The PRNGs of PyTorch (CPU and CUDA devices), NumPy, and Python are set. Added support for `--seed` to classifier_compression.py.
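A minimal sketch of what a seeding helper along these lines does (illustrative, not Distiller's exact code): seed all of the PRNGs mentioned above from a single value.

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    random.seed(seed)                  # Python's built-in PRNG
    np.random.seed(seed)               # NumPy
    torch.manual_seed(seed)            # PyTorch CPU
    torch.cuda.manual_seed_all(seed)   # PyTorch CUDA devices (no-op if CUDA is unavailable)
```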
- May 16, 2019
Bar authored
Introduced a new utility function to export image classifiers to ONNX: export_img_classifier_to_onnx. The functionality is not new, just refactored. In the sample application compress_classifier.py, added --export-onnx as a stand-alone command-line flag specifically for exporting ONNX models. This new flag can take an optional argument which is used to name the exported ONNX model file. The option to export models was removed from the --summary argument; multiple --summary options may now be passed together. Added a basic test for exporting ONNX.
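An illustrative sketch of exporting an image classifier to ONNX with plain PyTorch (the model choice and output file name below are assumptions, not the new utility's API):

```python
import torch
import torchvision

model = torchvision.models.resnet18().eval()      # any image classifier
dummy_input = torch.randn(1, 3, 224, 224)         # NCHW dummy batch used for tracing
torch.onnx.export(model, dummy_input, "resnet18.onnx")
```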
- May 15, 2019
Guy Jacob authored
Added a collector for activation histograms (a sub-class of ActivationStatsCollector). It is stats-based, meaning it requires pre-computed min/max stats per tensor. This is done to avoid having to save all of the activation tensors throughout the run (a sketch of this idea follows the list). The stats are expected in the format generated by QuantCalibrationStatsCollector. Details:
* Implemented ActivationHistogramsCollector
* Added a Jupyter notebook showcasing activation histograms
* Implemented a helper function that performs the stats-collection pass and the histograms pass in one go
* Also added a separate helper function just for quantization stats collection
* Integrated in the image classification sample
* data_loaders.py: added an option to use a fixed subset throughout the same session. Using it to keep the same subset between the stats-collection and histograms-collection phases.
* Other changes:
  * Calling assign_layer_fq_names in the base class of the collectors. We do this since the collectors, as implemented so far, assume it has been done, so it makes sense to do it in the base class instead of expecting the user to do it.
  * Enforcing a non-parallel model for the quantization stats and histograms collectors
  * Jupyter notebooks: added a utility function to enable loggers in notebooks. This allows us to see any logging done by Distiller APIs called from notebooks.
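A hedged sketch of the stats-based approach (not the collector's actual code): given per-tensor min/max gathered beforehand, each activation batch can be folded into a fixed-bin histogram on the fly, so the raw tensors never need to be stored.

```python
import torch

def update_histogram(hist: torch.Tensor, act: torch.Tensor, t_min: float, t_max: float) -> torch.Tensor:
    """Accumulate `act` into `hist`, whose bins span [t_min, t_max]."""
    return hist + torch.histc(act.float(), bins=hist.numel(), min=t_min, max=t_max)

hist = torch.zeros(256)
for batch_activation in [torch.randn(32, 64) for _ in range(10)]:   # stand-in activations
    hist = update_histogram(hist, batch_activation, t_min=-4.0, t_max=4.0)
```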
- Apr 18, 2019
Bar authored
Also:
* The single-worker limitation is no longer needed; it has been fixed in PyTorch since v0.4.0 (https://github.com/pytorch/pytorch/pull/4640)
* compress_classifier.py: if run in evaluation mode (--eval), enable deterministic mode
* Call utils.set_deterministic at data-loader creation if the deterministic argument is set (don't assume the user calls it outside)
* Disable CUDNN benchmark mode in utils.set_deterministic (https://pytorch.org/docs/stable/notes/randomness.html#cudnn)
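A rough sketch of what enabling deterministic execution involves (an assumption about the general recipe, not the exact body of utils.set_deterministic): a fixed seed plus the cuDNN settings referenced above.

```python
import torch

torch.manual_seed(0)
torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False      # disable the cuDNN auto-tuner, as noted above
```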
- Apr 01, 2019
Bar authored
Load optimizer from checkpoint (BREAKING - see details) (#182)
* Fixes issues #70, #145 and replaces PR #74
* checkpoint.py:
  * save_checkpoint will now save the optimizer type in addition to its state
  * load_checkpoint will now instantiate an optimizer based on the saved type and load its state (a sketch of this save/restore idea follows the list)
* config.py: file/dict_config now accept the resumed epoch to pass to LR schedulers
* policy.py: LRPolicy now passes the current epoch to the LR scheduler
* Classifier compression sample:
  * New flag '--resume-from' for properly resuming a saved training session, incl. optimizer state and epoch #
  * Flag '--reset-optimizer' added to allow discarding a loaded optimizer
* BREAKING:
  * The previous flag '--resume' is deprecated and is mapped to '--resume-from' + '--reset-optimizer'.
  * However, the old resuming behavior had an inconsistency: the epoch count would continue from the saved epoch, but the LR scheduler was set up as if we were starting from epoch 0.
  * Using '--resume-from' + '--reset-optimizer' now will simply RESET the epoch count to 0 for the whole environment.
  * This means that scheduling configurations (in YAML or code) which assumed use of '--resume' might need to be changed to reflect the fact that the epoch count now starts from 0.
  * All relevant YAML files under 'examples' were modified to reflect this change.
* Initial support for ReduceLROnPlateau (#161):
  * Allow passing **kwargs to policies via the scheduler
  * Image classification now passes the validation loss to the scheduler, to be used by ReduceLROnPlateau
  * The current implementation is experimental and subject to change
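A minimal sketch of the save/restore idea described above (illustrative, not checkpoint.py itself): persist the optimizer's class alongside its state so that resuming can re-instantiate it and reload hyperparameters such as the learning rate and momentum.

```python
import torch

def save_checkpoint(path, epoch, model, optimizer):
    torch.save({
        "epoch": epoch,
        "state_dict": model.state_dict(),
        "optimizer_type": type(optimizer),                 # e.g. torch.optim.SGD
        "optimizer_state_dict": optimizer.state_dict(),
    }, path)

def load_checkpoint(path, model):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["state_dict"])
    # Re-instantiate the saved optimizer type; the placeholder lr is overwritten
    # by load_state_dict, which restores the saved param-group hyperparameters.
    optimizer = ckpt["optimizer_type"](model.parameters(), lr=0.1)
    optimizer.load_state_dict(ckpt["optimizer_state_dict"])
    return model, optimizer, ckpt["epoch"]
```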
- Feb 26, 2019
Lev Zlotnik authored
Not backward compatible - re-installation is required
* Fixes for PyTorch==1.0.0
* Refactoring folder structure
* Update installation section in docs
- Feb 13, 2019
Neta Zmora authored
Merging the 'amc' branch with 'master'. This updates the automated compression code in 'master', and adds a greedy filter-pruning algorithm.
- Feb 11, 2019
Guy Jacob authored
Summary of changes:
(1) Post-train quantization based on pre-collected statistics
(2) Quantized concat, element-wise addition / multiplication, and embeddings
(3) Move post-train quantization command-line args out of sample code
(4) Configure post-train quantization from YAML for more fine-grained control
(See PR #136 for more detailed change descriptions)
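An illustrative sketch of post-training, range-based linear quantization driven by pre-collected min/max statistics (an assumption about the general technique, not the code in PR #136):

```python
import torch

def quantize_dequantize(x: torch.Tensor, t_min: float, t_max: float, num_bits: int = 8) -> torch.Tensor:
    """Asymmetric linear quantization of x to num_bits using pre-collected range stats,
    followed by de-quantization back to float (for simulation)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (t_max - t_min) / (qmax - qmin)
    zero_point = round(qmin - t_min / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

x = torch.randn(4, 8)
x_q = quantize_dequantize(x, t_min=-3.0, t_max=3.0)   # stats would come from calibration
```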
- Feb 10, 2019
Guy Jacob authored
* For CIFAR-10 / ImageNet only
* Refactor data_loaders.py, reduce code duplication
* Implemented custom sampler
* Integrated in image classification sample
* Since we now shuffle the test set, had to update expected results in 2 full_flow_tests that do evaluation
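A rough sketch of drawing a fixed subset of a dataset via a sampler in plain PyTorch (not the custom sampler added in this commit):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, SubsetRandomSampler

# Stand-in dataset of 1000 CIFAR-like samples.
dataset = TensorDataset(torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,)))

subset_indices = torch.randperm(len(dataset))[:100]              # pick the subset once...
loader = DataLoader(dataset, batch_size=32,
                    sampler=SubsetRandomSampler(subset_indices))  # ...and reuse it across phases
```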
- Jan 31, 2019
Neta Zmora authored
- Jan 16, 2019
Bar authored
* Support for multi-phase activation logging: enable logging activations during both training and validation in the same session.
* Refactoring: move the parser to its own file
  * The parser is moved from compress_classifier into its own file.
  * The Torch version check is moved to precede the main() call.
  * The main definition is moved to the top of the file.
  * Parser choices are made case-insensitive.
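A small sketch of one common way to make argparse choices case-insensitive (illustrative; not necessarily how the refactored parser does it): lower-case the value before the choices check.

```python
import argparse

parser = argparse.ArgumentParser()
# type=str.lower converts the value before argparse validates it against choices.
parser.add_argument("--dataset", type=str.lower, choices=["cifar10", "imagenet"])

args = parser.parse_args(["--dataset", "CIFAR10"])  # accepted despite the upper-case input
print(args.dataset)  # -> "cifar10"
```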