  1. May 26, 2019
    • Added support for setting the PRNG seed (#269) · fe27ab90
      Neta Zmora authored
      Added set_seed() to Distiller, and added support for seeding the PRNG when setting --deterministic mode (prior to this change, the seed was always set to zero when running in deterministic mode).
      The PRNGs of PyTorch (CPU and CUDA devices), numpy and Python are all seeded.
      Added support for `--seed` to compress_classifier.py.
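
      A minimal sketch of what such a seeding helper typically looks like
      (the exact body of Distiller's set_seed() may differ):

      ```python
      import random

      import numpy as np
      import torch

      def set_seed(seed):
          """Seed the PRNGs of Python, numpy and PyTorch (CPU and CUDA devices)."""
          random.seed(seed)                 # Python's built-in PRNG
          np.random.seed(seed)              # numpy's global PRNG
          torch.manual_seed(seed)           # PyTorch CPU PRNG
          torch.cuda.manual_seed_all(seed)  # PyTorch PRNGs on all CUDA devices
      ```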
  2. May 16, 2019
    • Refactor export to ONNX functionality (#258) · 54304810
      Bar authored
      Introduced a new utility function to export image classifiers
      to ONNX: export_img_classifier_to_onnx.
      The functionality is not new, just refactored.
      
      In the sample application compress_classifier.py, added
      --export-onnx as a stand-alone command-line flag specifically for
      exporting ONNX models.
      This new flag can take an optional argument which is used to name the
      exported ONNX model file.
      The option to export models was removed from the --summary argument.
      Multiple --summary options can now be passed together.
      
      Added a basic test for exporting ONNX.
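
      A rough sketch of the refactored export path (the parameter list here
      is an assumption, not the helper's exact signature in Distiller):

      ```python
      import torch

      def export_img_classifier_to_onnx(model, onnx_fname, input_shape,
                                        device='cpu'):
          """Sketch: export an image classifier to ONNX via a dummy input."""
          model = model.to(device).eval()
          dummy_input = torch.randn(input_shape, device=device)  # e.g. (1, 3, 224, 224)
          with torch.no_grad():
              torch.onnx.export(model, dummy_input, onnx_fname)
      ```

      From the command line, per this commit, the export would be triggered
      with something like --export-onnx=model.onnx.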
  3. May 15, 2019
    • Activation Histograms (#254) · 9405679f
      Guy Jacob authored
      Added a collector for activation histograms (a sub-class of
      ActivationStatsCollector). It is stats-based, meaning it requires
      pre-computed min/max stats per tensor. This avoids having to save
      all of the activation tensors throughout the run (a sketch of the
      idea follows the details list below).
      The stats are expected in the format generated by
      QuantCalibrationStatsCollector.
      
      Details:
      
      * Implemented ActivationHistogramsCollector
      * Added Jupyter notebook showcasing activation histograms
      * Implemented helper function that performs the stats collection pass
        and histograms pass in one go
      * Also added separate helper function just for quantization stats
        collection
      * Integrated in image classification sample
      * data_loaders.py: added an option to use a fixed subset throughout
        the same session. This is used to keep the same subset between
        the stats collection and histogram collection phases.
      * Other changes:
        * Calling assign_layer_fq_names in the base class of collectors.
          The collectors, as implemented so far, assume this has been
          done, so it makes sense to do it in the base class rather than
          expect the user to.
        * Enforcing a non-parallel model for quantization stats and
          histograms collectors
        * Jupyter notebooks - add utility function to enable loggers in
          notebooks. This allows us to see any logging done by Distiller
          APIs called from notebooks.
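
      A minimal sketch of the stats-based idea (the class name here is
      hypothetical and the real ActivationHistogramsCollector differs):
      given pre-computed min/max per tensor, each forward pass only
      accumulates a fixed-range histogram, so raw activations never need
      to be stored.

      ```python
      import torch

      class HistogramSketchCollector:
          """Hypothetical sketch of a stats-based activation histograms collector."""
          def __init__(self, model, ranges, nbins=1024):
              # ranges: {module_name: (min, max)} from a prior calibration pass
              self.ranges = ranges
              self.nbins = nbins
              self.hists = {name: torch.zeros(nbins) for name in ranges}
              for name, module in model.named_modules():
                  if name in ranges:
                      module.register_forward_hook(self._hook_for(name))

          def _hook_for(self, name):
              def hook(module, inputs, output):
                  lo, hi = self.ranges[name]
                  # Clamp so out-of-range values land in the edge bins
                  # (torch.histc would otherwise ignore them), then accumulate.
                  vals = output.detach().float().clamp(lo, hi).cpu()
                  self.hists[name] += torch.histc(vals, bins=self.nbins,
                                                  min=lo, max=hi)
              return hook
      ```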
  4. Apr 01, 2019
    • Load optimizer from checkpoint (BREAKING - see details) (#182) · 992291cf
      Bar authored
      
      * Fixes issues #70, #145 and replaces PR #74
      * checkpoint.py
        * save_checkpoint will now save the optimizer type in addition to
          its state
        * load_checkpoint will now instantiate an optimizer based on the
          saved type and load its state (see the sketch after this list)
      * config.py: file/dict_config now accept the resumed epoch to pass to
        LR schedulers
      * policy.py: LRPolicy now passes the current epoch to the LR scheduler
      * Classifier compression sample
        * New flag '--resume-from' for properly resuming a saved training
          session, including the optimizer state and epoch number
        * New flag '--reset-optimizer' to allow discarding of a loaded
          optimizer.
        * BREAKING:
          * The previous flag '--resume' is deprecated and is mapped to
            '--resume-from' + '--reset-optimizer'.
          * However, the old resuming behavior had an inconsistency: the
            epoch count would continue from the saved epoch, but the LR
            scheduler was set up as if we were starting from epoch 0.
          * Using '--resume-from' + '--reset-optimizer' now will simply
            RESET the epoch count to 0 for the whole environment.
          * This means that scheduling configurations (in YAML or code)
            which assumed use of '--resume' might need to be changed to
            reflect the fact that the epoch count now starts from 0.
          * All relevant YAML files under 'examples' were modified to
            reflect this change.
      * Initial support for ReduceLROnPlateau (#161):
        * Allow passing **kwargs to policies via the scheduler
        * Image classification now passes the validation loss to the
          scheduler, to be used by ReduceLROnPlateau
        * The current implementation is experimental and subject to change
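
      A minimal sketch of the save/load scheme described above (the
      function bodies are illustrative, not Distiller's exact code):

      ```python
      import torch

      def save_checkpoint(epoch, model, optimizer, fname='checkpoint.pth.tar'):
          # Persist the optimizer's concrete class alongside its state so it
          # can be re-instantiated on load.
          torch.save({'epoch': epoch,
                      'state_dict': model.state_dict(),
                      'optimizer_type': type(optimizer),
                      'optimizer_state_dict': optimizer.state_dict()}, fname)

      def load_checkpoint(model, fname):
          chkpt = torch.load(fname)
          model.load_state_dict(chkpt['state_dict'])
          # The dummy lr is overwritten when the saved state (including the
          # param-group hyper-parameters) is loaded.
          optimizer = chkpt['optimizer_type'](model.parameters(), lr=0.1)
          optimizer.load_state_dict(chkpt['optimizer_state_dict'])
          return model, optimizer, chkpt['epoch']
      ```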
  5. Feb 11, 2019
    • Post-train quant based on stats + additional modules quantized (#136) · 28a8ee18
      Guy Jacob authored
      Summary of changes:
      (1) Post-train quantization based on pre-collected statistics
      (2) Quantized concat, element-wise addition / multiplication and embeddings
      (3) Move post-train quantization command line args out of sample code
      (4) Configure post-train quantization from YAML for more fine-grained control
      
      (See PR #136 for more detailed change descriptions.)
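
      To illustrate item (1), a hedged sketch of stats-based post-train
      quantization with PostTrainLinearQuantizer (argument names are to the
      best of my knowledge; 'acts_quant_stats.yaml' is a placeholder for a
      stats file pre-collected with QuantCalibrationStatsCollector, and
      model is assumed to be a trained torch.nn.Module):

      ```python
      from distiller.quantization import LinearQuantMode, PostTrainLinearQuantizer

      quantizer = PostTrainLinearQuantizer(
          model,
          bits_activations=8,
          bits_parameters=8,
          mode=LinearQuantMode.ASYMMETRIC_UNSIGNED,
          model_activation_stats='acts_quant_stats.yaml')  # placeholder path
      quantizer.prepare_model()  # swaps supported modules for quantized wrappers
      ```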
  6. Feb 10, 2019
    • Load different random subset of dataset on each epoch (#149) · 4b1d0c89
      Guy Jacob authored
      * For CIFAR-10 / ImageNet only
      * Refactor data_loaders.py, reduce code duplication
      * Implemented a custom sampler (sketched below)
      * Integrated in image classification sample
      * Since we now shuffle the test set, the expected results in two
        full_flow_tests that perform evaluation had to be updated
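
      A minimal sketch of such a per-epoch random-subset sampler (the class
      name and details are hypothetical):

      ```python
      import numpy as np
      from torch.utils.data import Sampler

      class EpochRandomSubsetSampler(Sampler):
          """Hypothetical sketch: yield a different random subset each epoch."""
          def __init__(self, dataset_len, subset_fraction):
              self.dataset_len = dataset_len
              self.subset_len = int(subset_fraction * dataset_len)

          def __iter__(self):
              # A DataLoader calls __iter__ once per epoch, so every epoch
              # sees a freshly drawn subset.
              perm = np.random.permutation(self.dataset_len)[:self.subset_len]
              return iter(perm.tolist())

          def __len__(self):
              return self.subset_len
      ```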
  7. Jan 16, 2019
    • compress_classifier.py refactoring (#126) · cfbc3798
      Bar authored
      * Support for multi-phase activations logging

      Enables logging activations during both training and validation in
      the same session.
      
      * Refactoring: Move parser to its own file

      * The parser is moved from compress_classifier.py into its own file.
      * The Torch version check is moved to precede the main() call.
      * The main definition is moved to the top of the file.
      * Parser choices are made case-insensitive (see the sketch below).
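
      One common way to make argparse choices case-insensitive (a sketch;
      not necessarily the exact approach taken in this commit) is to
      lower-case the value before it is checked:

      ```python
      import argparse

      parser = argparse.ArgumentParser()
      # type= runs before the choices check, so any casing is accepted
      parser.add_argument('--dataset', type=lambda s: s.lower(),
                          choices=['cifar10', 'imagenet'])

      print(parser.parse_args(['--dataset', 'CIFAR10']))  # Namespace(dataset='cifar10')
      ```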