  1. Aug 06, 2019
    • AMC and other refactoring - large merge (#339) · 02054da1
      Neta Zmora authored
      * An implementation of AMC (the previous implementation
      code has moved to a new location under
      /distiller/examples/auto_compression/amc).  AMC is aligned
      with the ‘master’ branch of Coach.
      * compress_classifier.py is refactored.  The base code moved
      to /distiller/apputils/image_classifier.py.  Further refactoring
      will follow.
      We want to provide a simple and small API to the basic features of
      a classifier-compression application.
      This will help applications that want to use the main features of a
      classifier-compression application, without the standard training
      regimen.
      AMC is one example of a stand-alone application that needs to leverage
      the capabilities of a classifier-compression application, but is currently
      coupled to `compress_classifier.py`.
      `multi-finetune.py` is another example.
      * ranked_structures_pruner.py:
      ** Added support for grouping channels/filters
      Sometimes we want to prune a group of structures: e.g. groups of
      8 channels.  This feature does not force the groups to be adjacent,
      so it is more like a set of structures.  E.g. when pruning
      channels from a 64-channel convolution, grouped by 8 channels, we
      will prune exactly one of 0/8/16/24/32/40/48/56 channels, i.e.
      always a multiple of 8 channels, excluding the set of all 64
      channels (see the first sketch after this list).
      ** Added FMReconstructionChannelPruner – channel
      pruning that uses L1 magnitude to rank and select the channels to
      remove, and feature-map reconstruction to improve
      resilience to the pruning (see the second sketch below).
      * Added a script to run multiple instances of an
      experiment in different processes:
      examples/classifier_compression/multi-run.py
      * Set the seed value even when it is not specified by the command-line
      arguments, so that we can try to recreate the session.
      * Added pruning ranking noise -
      Ranking noise introduces Gaussian noise when ranking channels/filters
      using an Lp-norm.  The noise is introduced using the epsilon-greedy
      methodology, where ranking using the exact Lp-norm is considered the
      greedy action (see the third sketch after this list).
      * Added configurable rounding of the pruning level: choose whether to
      round up or down when rounding the number of structures to prune
      (rounding is always to an integer; the first sketch below also
      shows this option).
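      To make the grouping and rounding options concrete, here is a
      minimal sketch of how a pruning fraction could be turned into an
      integer count of structures; the function name and signature are
      illustrative assumptions, not Distiller's actual API:

      ```python
      import math

      def structures_to_prune(n_structures: int, fraction: float,
                              group_size: int = 1, round_up: bool = False) -> int:
          # Hypothetical helper: convert a pruning fraction into an integer
          # count of structures that is a multiple of `group_size`.
          n_groups = fraction * n_structures / group_size
          n_groups = math.ceil(n_groups) if round_up else math.floor(n_groups)
          # Never prune every structure in the layer (e.g. all 64 channels).
          return min(n_groups * group_size, n_structures - group_size)

      # Pruning 40% of a 64-channel convolution in groups of 8 channels:
      structures_to_prune(64, 0.4, group_size=8)                 # 24 (round down)
      structures_to_prune(64, 0.4, group_size=8, round_up=True)  # 32 (round up)
      ```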
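      The feature-map reconstruction used by FMReconstructionChannelPruner
      can be pictured roughly as follows: after the weakest input channels
      are selected by L1 magnitude, the surviving weights are re-fit by
      least squares so the layer's output feature maps stay close to the
      originals.  The sketch below treats a 1x1 convolution as a matrix
      multiply; all names and shapes are assumptions, not the pruner's
      real code:

      ```python
      import torch

      def prune_and_reconstruct(W: torch.Tensor, X: torch.Tensor,
                                Y: torch.Tensor, n_keep: int):
          # W: (C_in, C_out) weights of a 1x1 conv viewed as a matmul;
          # X: (N, C_in) sampled input activations; Y: (N, C_out) the
          # original (unpruned) output feature maps.
          # 1) Rank input channels by the L1 magnitude of their weights.
          keep = torch.argsort(W.abs().sum(dim=1), descending=True)[:n_keep]
          # 2) Re-fit the surviving weights by least squares so that
          #    X[:, keep] @ W_new approximates the original outputs Y.
          W_new = torch.linalg.lstsq(X[:, keep], Y).solution
          return keep, W_new
      ```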
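      Similarly, the epsilon-greedy ranking noise can be sketched like
      this (epsilon and the noise scale are illustrative knobs, not the
      commit's actual parameters):

      ```python
      import torch

      def epsilon_greedy_ranking(weights: torch.Tensor, epsilon: float = 0.1,
                                 sigma: float = 0.05) -> torch.Tensor:
          # Score each filter of a conv weight tensor (C_out, C_in, kH, kW)
          # by its L1 norm; ranking by the exact norm is the greedy action.
          scores = weights.abs().sum(dim=(1, 2, 3))
          if torch.rand(1).item() < epsilon:
              # Explore: perturb the scores with Gaussian noise before ranking.
              scores = scores + sigma * scores.mean() * torch.randn_like(scores)
          return torch.argsort(scores)  # weakest filters first
      ```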
  2. Mar 05, 2019
    • AMC - added arguments: amc-ft-frequency, amc-reward-frequency · 7fb41d6f
      Neta Zmora authored
      amc-ft-frequency:
      Sometimes we may want to fine-tune the weights after
      ‘n’ episode steps (action-steps).  This new argument
      controls the frequency of this fine-tuning (FT), i.e. how
      many action-steps pass between fine-tuning sessions.
      By default, there is no fine-tuning between steps.
      
      amc-reward-frequency:
      By default, we only provide a non-zero reward at the end of
      an episode.  This argument allows us to provide rewards at
      a higher frequency.
      
      This commit also reorders the ResNet layer names, so that
      layers are processed in near-topological order.  This simply
      helps interpret the data in the AMC Jupyter notebooks.
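      As a hedged illustration, the two flags might be combined like this
      (only the flag names come from this log; the script name is taken
      from the related commits above, and the values are illustrative):

      ```bash
      # Hypothetical invocation: fine-tune every 10 action-steps and
      # compute a reward every 5 action-steps.
      python compress_classifier.py <the-usual-compression-args> \
          --amc-ft-frequency=10 --amc-reward-frequency=5
      ```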
  3. Feb 17, 2019
    • AMC: added configuration option to set the frequency of computing a reward · 8c07eb1e
      Neta Zmora authored
      --amc-reward-frequency
      Computing the reward requires running the evaluated network on the test
      dataset (or part of it) and may involve short-term fine-tuning before
      the evaluation (depending on the configuration).
      Use this new argument to configure the number of steps/iterations between
      reward computations.