- Jul 04, 2019 (Guy Jacob)
  * PyTorch 1.1.0 now required
    - Moved other dependencies to up-to-date versions as well
  * Adapt LR scheduler to PyTorch 1.1 API changes:
    - Change lr_scheduler.step() calls to succeed validate calls during training
    - Pass both loss and top1 to the lr_scheduler.step() caller (resolves issue #240)
  * Adapt thinning to PyTorch 1.1 semantic changes
    - **KNOWN ISSUE**: When a thinning recipe is applied, in certain cases PyTorch displays this warning: "UserWarning: non-inplace resize is deprecated". To be fixed later
  * SummaryGraph: Workaround for new scope-name issue in PyTorch 1.1.0
  * Adapt to updated PyTest version:
    - Stop using the deprecated 'message' parameter of pytest.raises(); use pytest.fail() instead
    - Make sure there is only a single test case per pytest.raises context
  * Move PyTorch version check to root __init__.py
    - The version is now checked when Distiller is first imported; a RuntimeError is raised if the version is wrong
  * Updates to parameter_histograms notebook:
    - Replace the deprecated 'normed' argument with 'density'
    - Add sparsity rate to plot title
    - Load model on CPU
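The LR-scheduler change above follows the call-order convention PyTorch introduced in 1.1: `optimizer.step()` must come before `lr_scheduler.step()`, so the scheduler step moves to the end of the epoch, after validation. A minimal sketch of that ordering (the model, data, and "validate" placeholder here are illustrative, not Distiller's code):

```python
import torch

# Sketch of the PyTorch >= 1.1 call order: weights are updated first,
# and lr_scheduler.step() advances the schedule only afterwards
# (in Distiller's case, after the validate call).
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for epoch in range(3):
    loss = model(torch.randn(8, 4)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()      # 1) update the weights
    # ... validation would run here ...
    scheduler.step()      # 2) only then advance the LR schedule

print(optimizer.param_groups[0]['lr'])  # 0.1 halved 3 times -> 0.0125
```

Calling `scheduler.step()` before `optimizer.step()` under PyTorch 1.1+ skips the first scheduled LR value, which is why the call sites had to move.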
- May 27, 2019 (Lev Zlotnik)
  * Fixed a bug where a shared module that was supposed to be skipped was not skipped on its second reference
  * Added tests for this fix
- May 19, 2019 (Guy Jacob)
- May 02, 2019 (Lev Zlotnik)
- Apr 08, 2019 (Lev Zlotnik)
- Apr 01, 2019 (Lev Zlotnik)
  * Bias handling:
    - Add a 'bits_bias' parameter to explicitly specify the number of bits for bias, similar to weights and activations
    - BREAKING: Remove the now-redundant 'quantize_bias' boolean parameter
  * Custom overrides:
    - Expand the semantics of the overrides dict to allow overriding parameters other than bit-widths
    - Functions registered in the quantizer's 'replacement_factory' can define keyword arguments. Non-bit-width entries in the overrides dict are checked against the function signature and passed through
  * BREAKING:
    - Renamed 'bits_overrides' to simply 'overrides'
    - Bit-width overrides must now be defined using the full parameter names - 'bits_activations/weights/bias' instead of the short-hands 'acts' and 'wts' used until now
  * Added/updated relevant tests
  * Modified all quantization YAMLs under 'examples' to reflect these changes
  * Updated docs
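To illustrate the renamed overrides mapping, here is a hypothetical sketch in Python (the layer-name patterns 'conv1' and 'fc.*' and the 'clip_acts' kwarg are made up for illustration; the authoritative schema is in the updated examples YAMLs). The keys use the new full parameter names, and a non-bit-width entry rides along to the replacement function:

```python
from collections import OrderedDict

# Hypothetical 'overrides' dict after this change: full names
# ('bits_activations', 'bits_weights', 'bits_bias') replace the old
# 'acts'/'wts' short-hands, and extra keys ('clip_acts' here, invented
# for illustration) can be matched against a replacement function's
# keyword arguments and passed through.
overrides = OrderedDict([
    # More specific patterns first - ordering defines match priority
    ('conv1', {'bits_activations': 8, 'bits_weights': 8, 'bits_bias': 32}),
    ('fc.*',  {'bits_activations': 4, 'bits_weights': 4,
               'clip_acts': True}),
])

print(list(overrides))          # ['conv1', 'fc.*']
print(overrides['conv1']['bits_bias'])  # 32
```

Keeping the mapping ordered matters because a layer name can match more than one pattern; the first match wins.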
- Feb 26, 2019 (Lev Zlotnik)
  Not backward compatible - re-installation is required.
  * Fixes for PyTorch==1.0.0
  * Refactored folder structure
  * Updated the installation section in the docs
- Feb 11, 2019 (Guy Jacob)
  Summary of changes:
  1. Post-train quantization based on pre-collected statistics
  2. Quantized concat, element-wise addition/multiplication and embeddings
  3. Moved post-train quantization command-line args out of sample code
  4. Configure post-train quantization from YAML for more fine-grained control
  (See PR #136 for more detailed change descriptions)
- Jan 24, 2019 (Guy Jacob)
- Jan 23, 2019 (Guy Jacob)
- Dec 04, 2018 (Guy Jacob)
  * Asymmetric post-training quantization (only symmetric was supported until now)
  * Quantization-aware training for range-based (min-max) symmetric and asymmetric quantization
  * Per-channel quantization support in both training and post-training
  * Added tests and examples
  * Updated documentation
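The symmetric/asymmetric distinction above can be sketched as follows. This is an illustrative model of range-based linear quantization, not Distiller's actual implementation: asymmetric mode maps the observed [min, max] range onto an unsigned grid with a zero-point, so a one-sided range (e.g. post-ReLU activations) wastes no quantization levels.

```python
import numpy as np

def quant_params(sat_min, sat_max, num_bits, asymmetric):
    """Toy scale/zero-point computation for range-based linear quantization."""
    if asymmetric:
        # Map [sat_min, sat_max] onto [0, 2^b - 1] with a zero-point
        scale = (2 ** num_bits - 1) / (sat_max - sat_min)
        zero_point = round(-sat_min * scale)
    else:
        # Map [-m, m] onto [-(2^(b-1) - 1), 2^(b-1) - 1]; zero-point is 0
        m = max(abs(sat_min), abs(sat_max))
        scale = (2 ** (num_bits - 1) - 1) / m
        zero_point = 0
    return scale, zero_point

def quantize(x, scale, zero_point):
    return np.round(x * scale) + zero_point

x = np.array([0.0, 0.25, 1.0])        # e.g. ReLU output in [0, 1]
s, zp = quant_params(0.0, 1.0, num_bits=8, asymmetric=True)
print(quantize(x, s, zp))             # [  0.  64. 255.] - full 8-bit range used
```

With symmetric mode on the same [0, 1] range, half the signed grid (the negative side) would go unused, which is the motivation for adding the asymmetric mode.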
- Jul 22, 2018 (Gal Novik)
  * Added the PACT quantization method
  * Moved the logic that modifies the optimizer to account for the quantizer's changes into the Quantizer itself
  * Updated documentation and tests
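For context, the core idea of PACT (from the paper the commit implements) is a ReLU clipped at a *learnable* upper bound alpha: y = 0.5 * (|x| - |x - alpha| + alpha), which equals clamp(x, 0, alpha) but written so alpha receives a gradient. A minimal sketch of that activation, not Distiller's actual module:

```python
import torch

class PACT(torch.nn.Module):
    """Toy PACT activation: clamp(x, 0, alpha) with a trainable alpha."""

    def __init__(self, init_alpha=6.0):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x):
        # Equivalent to x.clamp(0, alpha), but alpha gets a gradient
        return 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)

act = PACT(init_alpha=1.0)
x = torch.tensor([-2.0, 0.5, 3.0])
print(act(x))   # clipped to [0, alpha]: tensor([0.0000, 0.5000, 1.0000], ...)
```

In the full method, the clipped output is then linearly quantized over [0, alpha], and alpha is trained jointly with the weights (typically with a regularization term to keep it small).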
- Jul 17, 2018 (Guy Jacob)
  * Add Quantizer unit tests
  * Require 'bits_overrides' to be an OrderedDict to support overlapping patterns in a predictable manner; update documentation to reflect this
  * Quantizer class cleanup:
    - Use "public" nn.Module APIs instead of protected attributes
    - Call the builtins setattr/getattr/delattr instead of the class special methods (__***__)
  * Fix issues reported in #24
  * Fix bug in RangeLinearQuantParamLayerWrapper - add explicit override of pre_quantized_forward accepting a single input (#15)
  * Add DoReFa test to full_flow_tests