  Feb 06, 2020
      Convert Distiller PTQ models to "native" PyTorch PTQ (#458) · cdc1775f
      Guy Jacob authored
      * New API: distiller.quantization.convert_distiller_ptq_model_to_pytorch()
        (see the usage sketch at the end of this message)
      * Can also be called from a PostTrainLinearQuantizer instance:
          quantizer.convert_to_pytorch()
      * Can also be triggered from the command line in the image classification sample
      * Can save/load converted modules via apputils.load/save_checkpoint
      * Added a Jupyter notebook tutorial
      
      * Converted modules have only the absolutely necessary quant-dequant
        operations. For a fully quantized model, this means just quantization
        of model input and de-quantization of model output. If a user keeps
        specific internal layers in FP32, quant-dequant operations are added
        as needed
      * Can configure either 'fbgemm' or 'qnnpack' backend. For 'fbgemm' we
        take care of preventing overflows (aka "reduce_range" in the PyTorch
        API)
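      
      A minimal usage sketch of the conversion flow described above. Hedged:
      the 'backend' keyword and variable names follow this message rather than
      a verified signature; 'model' holds the quantized model and 'quantizer'
      the PostTrainLinearQuantizer instance:
      
          # Hedged sketch: convert a Distiller PTQ model to native PyTorch PTQ.
          # 'model' and 'quantizer' are assumed to exist; 'backend' follows the
          # 'fbgemm'/'qnnpack' description above.
          from distiller.quantization import convert_distiller_ptq_model_to_pytorch
      
          pytorch_model = convert_distiller_ptq_model_to_pytorch(model, backend='fbgemm')
      
          # Equivalent call from the quantizer instance itself:
          pytorch_model = quantizer.convert_to_pytorch()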
  Sep 23, 2019
      User-registered model (#391) · 0036011d
      Neta Zmora authored
      Add a Jupyter notebook showing how to register a user's (external) image-classification model.
      Contains fixes to the previous model-extension mechanism, and a relaxation of the `args` requirements in apputils/image_classifier.py.
      
      apputils/image_classifier.py:
      * When self.logdir is None:
        - use NullLogger
        - skip save_checkpoint
      * Return the training log from run_training_loop()
      * Don't log if script_dir or output_dir are not set
      * Fix params_nnz_cnt in update_training_scores_history()
      
      data_loggers/logger.py: add NullLogger, which does not log (a minimal
      sketch follows)
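      
      A minimal sketch of the null-object pattern this describes. Hedged: the
      real NullLogger lives in data_loggers/logger.py, and the method names
      below are assumptions modeled on Distiller's logger interface:
      
          # Hedged sketch of a do-nothing logger (null-object pattern).
          # Method names are assumptions, not the verified interface.
          class NullLogger:
              """Stands in for a real logger; every method is a no-op."""
              def log_training_progress(self, *args, **kwargs):
                  pass
      
              def log_activation_statistic(self, *args, **kwargs):
                  pass
      
              def log_weights_sparsity(self, *args, **kwargs):
                  pass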
  Jul 04, 2019
      Switch to PyTorch 1.1.0 (#306) · 032b1f74
      Guy Jacob authored
      * PyTorch 1.1.0 now required
        - Moved other dependencies to up-to-date versions as well
      * Adapt the LR scheduler to PyTorch 1.1 API changes:
        - Call lr_scheduler.step() after the validate calls during
          training (see the sketch at the end of this message)
        - Pass both loss and top1 to the lr_scheduler.step() caller
          (Resolves issue #240)
      * Adapt thinning for PyTorch 1.1 semantic changes
        - **KNOWN ISSUE**: When a thinning recipe is applied, in certain
          cases PyTorch displays this warning:
          "UserWarning: non-inplace resize is deprecated".
          To be fixed later
      * SummaryGraph: Workaround for new scope name issue from PyTorch 1.1.0
      * Adapt to updated PyTest version:
        - Stop using deprecated 'message' parameter of pytest.raises(),
          use pytest.fail() instead
        - Make sure only a single test case per pytest.raises context
      * Move the PyTorch version check to the root __init__.py
        - This means the version is checked when Distiller is first
          imported. A RuntimeError is raised if the version is wrong.
      * Updates to parameter_histograms notebook:
        - Replace deprecated normed argument with density
        - Add sparsity rate to plot title
        - Load the model on the CPU
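      
      A minimal sketch of the new call ordering. ReduceLROnPlateau is used
      here because it consumes a validation metric; the model, optimizer, and
      stand-in loss values are illustrative:
      
          # Sketch of PyTorch 1.1 semantics: the scheduler steps *after*
          # validation, consuming the validation metric.
          import torch
      
          model = torch.nn.Linear(10, 2)
          optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
          scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min')
      
          for epoch in range(3):
              # ... training pass would run here ...
              val_loss = 1.0 / (epoch + 1)  # stand-in for a real validate() result
              scheduler.step(val_loss)      # after validate, per PyTorch 1.1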
  May 15, 2019
      Activation Histograms (#254) · 9405679f
      Guy Jacob authored
      Added a collector for activation histograms (a sub-class of
      ActivationStatsCollector). It is stats-based, meaning it requires
      pre-computed min/max stats per tensor. This avoids having to save
      all of the activation tensors throughout the run.
      The stats are expected in the format generated by
      QuantCalibrationStatsCollector.
      
      Details:
      
      * Implemented ActivationHistogramsCollector (see the usage sketch at
        the end of this message)
      * Added Jupyter notebook showcasing activation histograms
      * Implemented helper function that performs the stats collection pass
        and histograms pass in one go
      * Also added separate helper function just for quantization stats
        collection
      * Integrated in image classification sample
      * data_loaders.py: Added an option to use a fixed data subset within
        the same session. It is used to keep the same subset between the
        stats-collection and histogram-collection phases.
      * Other changes:
        * Call assign_layer_fq_names in the collectors' base class. The
          collectors, as implemented so far, assume this has been done, so
          it makes sense to do it in the base class rather than expect the
          user to do it.
        * Enforcing a non-parallel model for quantization stats and
          histograms collectors
        * Jupyter notebooks - add utility function to enable loggers in
          notebooks. This allows us to see any logging done by Distiller
          APIs called from notebooks.
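      
      A hedged usage sketch of the new collector: the class and helper names
      appear in this message, but the exact constructor parameters, the
      stats-file path, and test_fn are assumptions:
      
          # Hedged sketch: collect activation histograms from pre-computed
          # min/max stats; parameter names and paths are assumptions.
          from distiller.data_loggers import (ActivationHistogramsCollector,
                                              collector_context)
      
          collector = ActivationHistogramsCollector(
              model,                                     # non-parallel model (enforced)
              activation_stats='acts_quant_stats.yaml')  # QuantCalibrationStatsCollector output
          with collector_context(collector):
              test_fn(model)             # forward passes populate the histograms
          collector.save('acts_histograms')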
  Oct 22, 2018
      Activation statistics collection (#61) · 54a5867e
      Neta Zmora authored
      Activation statistics can be leveraged to make pruning and quantization decisions,
      so we added support for collecting them.
      - Two types of activation statistics are supported: summary statistics, and detailed
      records per activation.
      Currently we support the following summaries:
      - Average activation sparsity, per layer
      - Average L1-norm for each activation channel, per layer
      - Average sparsity for each activation channel, per layer
      
      For the detailed records we collect some statistics per activation and store them
      in a record. This collection method generates more detailed data, but consumes
      more time, so beware.
      
      * You can collect activation data for the different training phases: training/validation/test.
      * You can access the data directly from each module that you chose to collect stats for.
      * You can also create an Excel workbook with the stats (see the sketch at the
        end of this message).
      
      To demonstrate use of activation collection we added a sample schedule which prunes 
      weight filters by the activation APoZ according to:
      "Network Trimming: A Data-Driven Neuron Pruning Approach towards 
      Efficient Deep Architectures",
      Hengyuan Hu, Rui Peng, Yu-Wing Tai, Chi-Keung Tang, ICLR 2016
      https://arxiv.org/abs/1607.03250
      
      We also refactored the AGP code (AutomatedGradualPruner) to support structure pruning,
      and specifically we separated the AGP schedule from the filter pruning criterion.  We added
      examples of ranking filter importance based on activation APoZ (ActivationAPoZRankedFilterPruner),
      random (RandomRankedFilterPruner), filter gradients (GradientRankedFilterPruner), 
      and filter L1-norm (L1RankedStructureParameterPruner)
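      
      A hedged sketch of summary-stats collection. SummaryActivationStatsCollector
      and collector_context follow Distiller's collector API, but the exact
      constructor signature and the stand-in model are assumptions:
      
          # Hedged sketch: per-layer average activation sparsity, saved to
          # an Excel workbook. Constructor signature is an assumption.
          import torch
          import torchvision
          import distiller
          from distiller.data_loggers import (SummaryActivationStatsCollector,
                                              collector_context)
      
          model = torchvision.models.resnet18()  # stand-in model
          collector = SummaryActivationStatsCollector(
              model, 'sparsity', lambda t: 100 * distiller.utils.sparsity(t))
          with collector_context(collector):
              model(torch.randn(16, 3, 224, 224))  # any forward passes collect stats
          collector.save('act_sparsity')           # writes the Excel workbook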
  May 16, 2018
      PyTorch 0.4 improvement to SummaryGraph · 32c01c28
      Neta Zmora authored
      PyTorch 0.4 now fully supports the ONNX export features that are needed
      in order to create a SummaryGraph, which is sort of a "shadow graph" for
      PyTorch models.
      
      The big advantage of SummaryGraph is that it gives us information about
      the connectivity of nodes.  With connectivity information we can compute
      per-node MACs (compute) and BW (bandwidth), and better yet, we can remove
      channels, filters, and layers (more on this in future commits).
      
      In this commit we (1) replace the long and overly-verbose ONNX node names
      with PyTorch names; and (2) move the MAC and BW attributes from the Jupyter
      notebook into the SummaryGraph (a short usage sketch follows).
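      
      A hedged sketch of building a SummaryGraph: the constructor (a model
      plus a dummy input) matches the usage described here, but the
      torchvision model and the 'MACs' attribute key are assumptions:
      
          # Hedged sketch: build a SummaryGraph and read per-node attributes.
          # The 'MACs' attribute key is an assumption based on this message.
          import torch
          import torchvision
          import distiller
      
          model = torchvision.models.resnet18()
          sg = distiller.SummaryGraph(model, torch.randn(1, 3, 224, 224))
          for op in sg.ops.values():
              print(op['name'], op['attrs'].get('MACs'))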
      pytorch 0.4: adjustments to API changes · 957e6777
      Neta Zmora authored
      Various small changes due to changes in the semantics and syntax of
      the PyTorch 0.4 API.
      
      Note that currently distiller.model_performance_summary() returns wrong results
      on graphs containing torch.nn.DataParallel layers.