- Jul 08, 2021
Yifan Zhao authored
Peter Pao-Huang authored
- Feb 26, 2020
Guy Jacob authored
The gitdb versioning issue is resolved internally in gitpython 3.1.0, so we move to that version and remove the explicit gitdb requirement.
- Feb 23, 2020
levzlotnik authored
levzlotnik authored
- Jan 19, 2020
Neta Zmora authored
Temporary patch until we move to torchvision 0.5. See https://github.com/pytorch/vision/issues/1712#issuecomment-575036523
- Nov 14, 2019
Guy Jacob authored
* summary_graph.py:
  * Change ONNX op.uniqueName() to op.debugName()
  * Removed the scope-naming workaround, which isn't needed in PyTorch 1.3
* Tests:
  * Naming of trace entries changed in 1.3; fixed the SummaryGraph unit test that checked this
  * Adjusted expected values in full_flow_tests
  * Adjusted tolerance in test_sim_bn_fold
  * Filter some new warnings
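As a hedged illustration of the rename this commit adapts to (not Distiller's actual code), a small compatibility helper can cope with both the old and the new torch.jit graph APIs; the helper name and the toy model below are illustrative only:

```python
# Sketch: Value.uniqueName() was renamed to Value.debugName() in newer PyTorch
# versions, which is the API change the commit above adapts to.
import torch

def value_name(jit_value):
    """Return a traced graph value's name under both old and new JIT APIs."""
    if hasattr(jit_value, 'debugName'):   # newer PyTorch (1.2/1.3+)
        return jit_value.debugName()
    return jit_value.uniqueName()         # older PyTorch

model = torch.nn.Linear(4, 2)
trace = torch.jit.trace(model, torch.randn(1, 4))
print([value_name(v) for v in trace.graph.inputs()])
```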
- Nov 13, 2019
Bar authored
* Previous implementation:
  * Stats collection required a separate run with `--qe-calibration`.
  * Specifying `--quantize-eval` without `--qe-stats-file` triggered dynamic quantization.
  * Running with `--quantize-eval --qe-calibration <num>` only ran stats collection and ignored `--quantize-eval`.
* New implementation:
  * Running `--quantize-eval --qe-calibration <num>` now performs stats collection according to the calibration flag, and then quantizes the model with the collected stats (and runs evaluation).
  * Specifying `--quantize-eval` without `--qe-stats-file` triggers the same flow as in the bullet above, as if `--qe-calibration 0.05` was used (i.e. 5% of the test set is used for stats).
  * Added a new flag: `--qe-dynamic`. From now on, dynamic quantization must be requested explicitly with `--quantize-eval --qe-dynamic`.
  * As before, `--qe-calibration` can still be run without `--quantize-eval` to perform "stand-alone" stats collection.
  * The following flags, which all represent different ways to control the creation of stats or the use of existing stats, are now mutually exclusive: `--qe-calibration`, `--qe-stats-file`, `--qe-dynamic`, `--qe-config-file`.
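Conceptually, the mutual exclusivity described above maps onto an argparse mutually-exclusive group. The sketch below is illustrative only and does not reproduce Distiller's actual parser; the help strings and metavars are assumptions:

```python
# Minimal sketch of mutually exclusive post-training quantization flags.
import argparse

parser = argparse.ArgumentParser(description='Post-training quantization flags (sketch)')
parser.add_argument('--quantize-eval', action='store_true',
                    help='Apply post-training quantization and evaluate')
group = parser.add_mutually_exclusive_group()
group.add_argument('--qe-calibration', type=float, metavar='PORTION',
                   help='Collect stats on this portion of the test set')
group.add_argument('--qe-stats-file', metavar='PATH',
                   help='Use pre-collected activation stats from this file')
group.add_argument('--qe-dynamic', action='store_true',
                   help='Perform dynamic quantization (no stats required)')
group.add_argument('--qe-config-file', metavar='PATH',
                   help='Build the quantizer from a config file')

args = parser.parse_args(['--quantize-eval', '--qe-calibration', '0.05'])
print(args.qe_calibration)  # 0.05 -> collect stats on 5% of the test set, then quantize
```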
- Oct 06, 2019
Bar authored
Hot-fix for an issue that arises with the FileWriter class on TensorFlow v2: allow only TensorFlow 1.x.
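The message suggests the fix restricts the TensorFlow requirement to 1.x. Purely as an illustration of the incompatibility (not the actual hot-fix), a runtime guard could look like this, assuming the logger relies on the 1.x-only tf.summary.FileWriter API:

```python
# Sketch: tf.summary.FileWriter exists only in the TensorFlow 1.x API,
# so refuse to run under TensorFlow 2.x.
import tensorflow as tf

if int(tf.__version__.split('.')[0]) >= 2:
    raise RuntimeError("TensorBoard logging here uses tf.summary.FileWriter, "
                       "which requires TensorFlow 1.x (found %s)" % tf.__version__)

writer = tf.summary.FileWriter('/tmp/logs')  # 1.x-only API
```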
- Aug 08, 2019
Guy Jacob authored
- Jul 16, 2019
Bar authored
- Jul 04, 2019
Guy Jacob authored
* PyTorch 1.1.0 now required
  - Moved other dependencies to up-to-date versions as well
* Adapt LR scheduler to PyTorch 1.1 API changes:
  - Change lr_scheduler.step() calls to succeed validate calls during training
  - Pass both loss and top1 to the lr_scheduler.step() caller (resolves issue #240)
* Adapt thinning for PyTorch 1.1 semantic changes
  - **KNOWN ISSUE**: When a thinning recipe is applied, in certain cases PyTorch displays this warning: "UserWarning: non-inplace resize is deprecated". To be fixed later
* SummaryGraph: Workaround for the new scope-name issue in PyTorch 1.1.0
* Adapt to updated PyTest version:
  - Stop using the deprecated 'message' parameter of pytest.raises(); use pytest.fail() instead
  - Make sure there is only a single test case per pytest.raises context
* Move the PyTorch version check to the root __init__.py
  - This means the version is checked when Distiller is first imported. A RuntimeError is raised if the version is wrong.
* Updates to the parameter_histograms notebook:
  - Replace the deprecated normed argument with density
  - Add sparsity rate to plot title
  - Load model on CPU
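For context on the LR-scheduler ordering change, here is a minimal sketch of the PyTorch 1.1 call order; the model, data, and loop below are placeholders, not Distiller code:

```python
# Sketch: from PyTorch 1.1, lr_scheduler.step() is called after the optimizer
# has stepped, so the scheduler update moves to the end of each epoch,
# after training and validation.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(2):
    # --- training phase (toy batch) ---
    inputs, targets = torch.randn(4, 10), torch.randint(0, 2, (4,))
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # --- validation would run here, producing e.g. loss and top1 ---

    # Scheduler update comes last; a metric-driven scheduler would receive
    # the validation metric here.
    scheduler.step()
```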
- Jun 18, 2019
Neta Zmora authored
- Replace `sklearn` with `scikit-learn`.
- Freeze the `gym` version.
- Jun 02, 2019
Neta Zmora authored
Added sklearn and gym - both required for automated compression
- Mar 11, 2019
Bar authored
Integrate the Cadene ```pretrainedmodels``` package. This PR integrates a large set of pre-trained PyTorch image-classification and object-detection models originating from https://github.com/Cadene/pretrained-models.pytorch.

PLEASE NOTE: This PR adds a dependency on the ```pretrainedmodels``` package, which you will need to install using ```pip3 install pretrainedmodels```. For new users, we have also updated the ```requirements.txt``` file.

Distiller does not currently support compression of object detectors (a sample application is required - the community is invited to send us a PR). Compression of some of these models may not be fully supported by Distiller due to bugs and/or missing features. If you encounter any issues, please report them to us.

Whenever there is contention on the name of a model passed to the ```compress_classifier.py``` sample application, the Cadene models get the lowest priority (e.g. Torchvision models are used in favor of Cadene models when the same model is supported by both packages).

This PR also:
* Adds documentation to ```create_model```
* Adds tests for ```create_model```
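For readers unfamiliar with the package itself, a short usage sketch of ```pretrainedmodels``` (the Cadene package, not Distiller's wrapper around it) following its documented interface; the chosen architecture is just an example:

```python
# Sketch: instantiate a Cadene pre-trained model and run a dummy forward pass.
import pretrainedmodels
import torch

print(pretrainedmodels.model_names[:5])   # available architectures
model = pretrainedmodels.__dict__['resnet18'](num_classes=1000, pretrained='imagenet')
model.eval()

# The package records the input settings each model expects
print(model.input_size, model.mean, model.std)
with torch.no_grad():
    out = model(torch.randn(1, *model.input_size))
print(out.shape)  # torch.Size([1, 1000])
```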
- Feb 26, 2019
Lev Zlotnik authored
Not backward compatible - re-installation is required
* Fixes for PyTorch==1.0.0
* Refactoring folder structure
* Update installation section in docs
- Oct 22, 2018
Neta Zmora authored
Activation statistics can be leveraged to make pruning and quantization decisions, so we added support for collecting this data.
- Two types of activation statistics are supported: summary statistics, and detailed records per activation. Currently we support the following summaries:
  - Average activation sparsity, per layer
  - Average L1-norm for each activation channel, per layer
  - Average sparsity for each activation channel, per layer
  For the detailed records we collect some statistics per activation and store them in a record. This collection method generates more detailed data, but consumes more time, so beware.
* You can collect activation data for the different training phases: training/validation/test.
* You can access the data directly from each module that you chose to collect stats for.
* You can also create an Excel workbook with the stats.

To demonstrate the use of activation collection we added a sample schedule which prunes weight filters by activation APoZ, following:
"Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures", Hengyuan Hu, Rui Peng, Yu-Wing Tai, Chi-Keung Tang, ICLR 2016, https://arxiv.org/abs/1607.03250

We also refactored the AGP code (AutomatedGradualPruner) to support structured pruning; specifically, we separated the AGP schedule from the filter-pruning criterion. We added examples of ranking filter importance based on activation APoZ (ActivationAPoZRankedFilterPruner), random ranking (RandomRankedFilterPruner), filter gradients (GradientRankedFilterPruner), and filter L1-norm (L1RankedStructureParameterPruner).
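As a hedged sketch of the ranking signal behind the APoZ-based pruner (not Distiller's collector code), APoZ is simply the average fraction of zeros per output channel of a post-ReLU activation; the helper below is illustrative:

```python
# Sketch: compute APoZ (Average Percentage of Zeros) per channel of a
# post-ReLU feature map of shape (N, C, H, W).
import torch

def apoz_per_channel(activation: torch.Tensor) -> torch.Tensor:
    """Return a (C,) tensor with the fraction of zero elements per channel."""
    n, c = activation.shape[0], activation.shape[1]
    zeros = (activation == 0).float()
    # Average over the spatial dims, then over the batch dim
    return zeros.view(n, c, -1).mean(dim=2).mean(dim=0)

# Toy usage: channels with higher APoZ are weaker candidates to keep
act = torch.relu(torch.randn(8, 4, 16, 16))
print(apoz_per_channel(act))
```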
- Oct 03, 2018
Neta Zmora authored
We need AverageValueMeter's support for numpy arrays.
- May 16, 2018
Neta Zmora authored
Neta Zmora authored
Neta Zmora authored
- Apr 25, 2018
Neta Zmora authored
ONNX is not required by the use-cases currently supported.
- Apr 24, 2018
Neta Zmora authored