- Aug 08, 2019
  Guy Jacob authored
- Aug 04, 2019
  Guy Jacob authored
- Jul 10, 2019
  Guy Jacob authored
* "Net-aware quantization" - using the term coined in https://arxiv.org/abs/1811.09886. (section 3.2.2). Refers to considering sequences of modules when quantizing. This isn't exactly layer fusion - we modify activation stats prior to setting quantization parameters, to make sure that when a module is followed by certain activation functions, only the relevant ranges are quantized. We do this for: * ReLU - Clip all negative values * Tanh / Sigmoid - Clip according to the (approximated) saturation values for these functions. We use [-4, 4] for tanh and [-6, 6] for sigmoid. * Perform batch-norm folding before post-training quantization. Batch-norm parameters are folded into the parameters of the previous layer and the BN layer is replaced with an identity module. * Both BN folding and "net-aware" are now automatically executed in PostTrainLinearQuantizer (details of this change below) * BN folding enabled by new generic mechanism to "fuse" module sequences (at the Python API level) * First module in sequence is replaced/modified by a user-provided function, rest of moudles replaced with nn.Identity * Quantizer changes: * Optionally create adjacency map during prepare_model * Subclasses may enforce adjacency map creation * Refatcoring: Replace _prepare_model_impl with pre and post override-able "callbacks", so core functionality is always executed * PostTrainLinearQuantizer Changes: * Enforce creation of adjacency map. This means users must now pass a dummy input to PostTrainLinearQuantizer.prepare_model * Before module replacement - Apply BN folding and stats updates according to net-aware quantization * Updated the language model quantization tutorial to reflect the new functionality * Updated the image classification post-train quantization samples (command line and YAML) * Other changes: * Distller LSTM implementation: Replace the ModuleList for cells with a plain list. The PyTorch trace mechanism doesn't "see" ModuleList objects, it only sees the contained modules. This means that the "scopeName" of these modules isn't complete, which makes it impossible to match op names in SummaryGraph to modules in the Python model. * ActivationStatsCollector: Ignore nn.Identity modules
- Jul 08, 2019
  Guy Jacob authored
- Mar 29, 2019
  Songyi Blair Han authored
- Feb 26, 2019
  Lev Zlotnik authored
  Not backward compatible - re-installation is required
  * Fixes for PyTorch==1.0.0
  * Refactoring folder structure
  * Update installation section in docs
- Dec 06, 2018
  Neta Zmora authored
  - Moved the Language model and struct pruning tutorials from the Wiki to the HTML documentation. We love the ease of the Wiki, but GitHub doesn't let Google crawl Wiki pages, and users can't open PRs against them.
  - Updated the pruning algorithms documentation
- Nov 07, 2018
  Neta Zmora authored
- Sep 03, 2018
  Guy Jacob authored
  * Implemented as a Policy
  * Integrated in image classification sample
  * Updated docs and README
- Apr 30, 2018
  Guy Jacob authored
- Apr 28, 2018
  Neta Zmora authored
- Apr 24, 2018
  Neta Zmora authored
  Neta Zmora authored
  Neta Zmora authored
  Neta Zmora authored