  1. Nov 13, 2019
    • image_classifier.py: PTQ stats collection and eval in same run (#346) · fb98377e
      Bar authored
      * Previous implementation:
        * Stats collection required a separate run with `--qe-calibration`.
        * Specifying `--quantize-eval` without `--qe-stats-file` triggered
          dynamic quantization.
        * Running with `--quantize-eval --qe-calibration <num>` only ran
          stats collection and ignored `--quantize-eval`.
      
      * New implementation:
        * Running `--quantize-eval --qe-calibration <num>` will now 
          perform stats collection according to the calibration flag,
          and then quantize the model with the collected stats (and
          run evaluation).
        * Specifying `--quantize-eval` without `--qe-stats-file` will
          trigger the same flow as in the bullet above, as if 
          `--qe-calibration 0.05` was used (i.e. 5% of the test set will
          be used for stats).
        * Added a new flag: `--qe-dynamic`. From now on, dynamic
          quantization must be requested explicitly:
          `--quantize-eval --qe-dynamic`
        * As before, `--qe-calibration` can still be run without
          `--quantize-eval` to perform "stand-alone" stats collection
        * The following flags, which all represent different ways to
          control creation of stats or use of existing stats, are now
          mutually exclusive:
          `--qe-calibration`, `--qe-stats-file`, `--qe-dynamic`,
          `--qe-config-file`
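      For example, under the new flow, a single run like the following
      (model/dataset arguments elided) collects stats on 10% of the test
      set, then quantizes the model with those stats and evaluates it:
        python compress_classifier.py <model/dataset args> --quantize-eval --qe-calibration 0.1
      (`compress_classifier.py` is the sample image-classification script;
      the exact arguments here are illustrative.)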
  2. Nov 11, 2019
    • Pruning with virtual Batch-norm statistics folding (#415) · c849a25f
      Neta Zmora authored
      * pruning: add an option to virtually fold BN into Conv2D for ranking
      
      PruningPolicy can be configured using a new control argument, `fold_batchnorm`: when set to `True`, the weights of BatchNorm modules are folded into the weights of Conv2D modules (if Conv2D->BN edges exist in the model graph).  Each weight filter is attenuated using a different pair of (gamma, beta) coefficients, so `fold_batchnorm` is relevant for fine-grained and filter-ranking pruning methods.  We attenuate using the running values of the mean and variance, as is done in quantization.
      This control argument is only supported for Conv2D modules (i.e. other convolution operation variants and Linear operations are not supported).
      For example:
      policies:
        - pruner:
            instance_name: low_pruner
            args:
              fold_batchnorm: True
          starting_epoch: 0
          ending_epoch: 30
          frequency: 2
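
      The attenuation itself amounts to a per-filter scaling by the BN
      coefficients. A minimal PyTorch sketch of the idea (a hypothetical
      helper, not Distiller's actual implementation; folding of beta and
      the running mean into the bias is omitted):

      import torch

      def fold_bn_scale(conv_weight, bn):
          # Scale filter k by gamma[k] / sqrt(running_var[k] + eps),
          # using the BN running statistics, as in quantization-style
          # BN folding. conv_weight shape: (out_ch, in_ch, kH, kW).
          scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
          return conv_weight * scale.view(-1, 1, 1, 1)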
      
      * AGP: non-functional refactoring
      
      distiller/pruning/automated_gradual_pruner.py – renamed `prune_to_target_sparsity`
      to `_set_param_mask_by_sparsity_target`, a more appropriate name since this
      function does not actually prune
      
      * Simplify GEMM weights input-channel ranking logic
      
      Ranking weight matrices by input channels is similar to ranking 4D
      Conv weights by input channels, so there is no need for duplicate logic.
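
      To see why, note that input channel k of a GEMM (Linear) weight is just
      column k, while in a 4D Conv weight it is the slice [:, k]. A small
      sketch (assumed PyTorch layouts: Linear weights are
      (out_features, in_features), Conv weights are (out, in, kH, kW)):

      import torch

      lin_w  = torch.randn(8, 4)        # input channel k is column k
      conv_w = torch.randn(8, 4, 3, 3)  # input channel k is slice [:, k]
      l1_per_in_lin  = lin_w.abs().sum(dim=0)           # L1 norm per column
      l1_per_in_conv = conv_w.abs().sum(dim=(0, 2, 3))  # L1 norm per input channel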
      
      distiller/pruning/ranked_structures_pruner.py:
      - renamed `prune_to_target_sparsity` to `_set_param_mask_by_sparsity_target`,
        a more appropriate name since this function does not actually prune
      - removed the code handling ranking of matrix rows
      
      distiller/norms.py – removed rank_cols.
      
      distiller/thresholding.py – in `expand_binary_map`, treat the `channels`
      group_type the same as the `cols` group_type when dealing with 2D weights
      
      * AGP: add example of ranking filters with virtual BN-folding
      
      Also update resnet20 AGP examples
  3. Oct 07, 2019
    • Post-Train Quant: Greedy Search + Proper mixed-settings handling (#402) · 9e7ef987
      Guy Jacob authored
      
      * Greedy search script for post-training quantization settings
        * Iterates over each layer in the model in order. For each layer,
          checks a user-defined set of quantization settings and chooses
          the best one based on validation accuracy
        * Provided a sample that searches for the best activations-clipping
          mode per layer on image classification models
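
        In outline, the greedy search works as follows (a self-contained
        sketch with toy stand-ins for the real model and evaluation; not
        the actual script's API):

          def greedy_search(layers, candidate_settings, evaluate):
              # For each layer, in model order, try every candidate setting
              # and keep the one with the best validation accuracy.
              chosen = {}
              for layer in layers:
                  chosen[layer] = max(
                      candidate_settings,
                      key=lambda s: evaluate({**chosen, layer: s}),
                  )
              return chosen

          # Toy usage: the "accuracy" here simply favors the larger setting.
          print(greedy_search(["conv1", "conv2"], [0, 1],
                              evaluate=lambda cfg: sum(cfg.values())))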
      
      * Proper handling of mixed-quantization settings in post-train quant:
        * By default, the quantization settings for each layer apply only
          to output quantization
        * Propagate quantization settings for activation tensors through
          the model during execution
        * For non-quantized inputs to layers that require quantized inputs,
          fall back to quantizing according to the settings used for the
          output
        * In addition, provide a mechanism to override input quantization
          settings via the YAML configuration file
        * By default, all modules are now quantized. For module types that
          don't have a dedicated quantized implementation, "fake"
          quantization is performed
      
      * Misc. Changes
        * Fuse ReLU/ReLU6 into the predecessor op during post-training quantization
        * Fixes to ACIQ clipping in the half-range case
      
      Co-authored-by: Lev Zlotnik <lev.zlotnik@intel.com>
      Co-authored-by: Guy Jacob <guy.jacob@intel.com>
    • Remove confusing log message · 9f7f6b14
      Neta Zmora authored
      As noted in issue #382, logging when a parameter does not have
      a mask is unnecessary and may confuse users.  Therefore, it is
      removed.
    • Add logging of `app_cfg` to the logger default · bdab1fa5
      Neta Zmora authored
      `app_cfg` logs the basic execution environment state, and is deemed important
      in most circumstances.
  4. Oct 06, 2019
    • Low-level pruning API refactor (#401) · 05d5592e
      Neta Zmora authored
      Some refactoring of the low-level pruning API
      
      Added distiller/norms.py - for calculating norms of various sub-tensors.
      
      ranked_structures_pruner.py:
      - removed l1_magnitude and l2_magnitude; use distiller.norms.l1_norm instead
      - lots of refactoring
      - replaced LpRankedStructureParameterPruner.ch_binary_map_to_mask with
        distiller.thresholding.expand_binary_map
      - FMReconstructionChannelPruner.rank_and_prune_channels used L2-norm
        by default and now uses L1-norm (i.e. magnitude_fn=l2_magnitude was
        replaced with magnitude_fn=distiller.norms.l1_norm)
      
      thresholding.py:
      - delegated much of the work to the new norms.py
      - removed support for 4D (entire convolution layers), since that has not
        been maintained for a long time. This may break some old scripts that
        remove entire layers.
      - added expand_binary_map() as an explicit API so others can use it;
        it might need to move to a different file
      - removed threshold_policy()
      
      utils.py:
      - use distiller.norms.xxx for sparsity stats
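
      As an illustration of the kind of sub-tensor norms that norms.py
      centralizes (a sketch only; Distiller's actual function signatures
      may differ):

      import torch

      def filters_l1_norm(weight):
          # One L1 norm per output filter of a 4D conv weight with
          # shape (out_channels, in_channels, kH, kW).
          return weight.abs().sum(dim=(1, 2, 3))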
    • Change requirements on Tensorflow version (#400) · bbd6fef9
      Bar authored
      Hot-fix for an issue that arises with the FileWriter class on TF v2.
      Allows only TensorFlow v1.x