  Mar 21, 2019
    • Added BernoulliFilterPruner_AGP · 90226b1c
      Neta Zmora authored
      This is AGP (automatic gradual pruning) for a pruner which selects
      the filters to prune by sampling a Bernoulli probability distribution.
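
A rough sketch of the sampling idea (not the actual Distiller implementation; the function and argument names are made up for illustration):

```python
import torch

def sample_filters_to_prune(conv_weights, pruning_probability):
    """Sample a keep/prune decision per filter from a Bernoulli distribution.

    conv_weights: 4-D convolution weight tensor (num_filters, in_channels, kH, kW).
    pruning_probability: probability that any given filter is selected for pruning.
    """
    num_filters = conv_weights.size(0)
    # Each filter is pruned independently with probability `pruning_probability`.
    decisions = torch.bernoulli(torch.full((num_filters,), pruning_probability))
    # Return the indices of the filters that were sampled for pruning.
    return decisions.nonzero(as_tuple=True)[0]

# Example: roughly half of the 64 filters are expected to be sampled.
weights = torch.randn(64, 3, 3, 3)
print(sample_filters_to_prune(weights, pruning_probability=0.5))
```

With AGP, the pruning probability would follow the automatic gradual pruning schedule, growing from an initial to a final sparsity level over the pruning epochs.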
  Mar 11, 2019
    • Integrate Cadene pretrained PyTorch models [fixes #142] (#184) · 321abb61
      Bar authored
      Integrate the Cadene ```pretrainedmodels``` package.
      
      This PR integrates a large set of pre-trained PyTorch image-classification and object-detection models which originate from https://github.com/Cadene/pretrained-models.pytorch.
      
      *******************************************************************************************
      PLEASE NOTE: 
      PLEASE NOTE:
      This PR adds a dependency on the ```pretrainedmodels``` package, and you will need to install it using ```pip3 install pretrainedmodels```.  For new users, we have also updated the ```requirements.txt``` file.
      *******************************************************************************************
      
      Distiller does not currently support the compression of object-detectors (a sample application is required - and the community is invited to send us a PR).
      
      Compression of some of these models may not be fully supported by Distiller due to bugs and/or missing features.  If you encounter any issues, please report them to us.
      
      When the same model name is supported by more than one package, the ```compress_classifier.py``` sample application gives the Cadene models the lowest priority (e.g. Torchvision models are used in favor of Cadene models when both packages support the same model).
      
      This PR also:
      * Adds documentation to ```create_model```
      * Adds tests for ```create_model```
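
For reference, usage of the ```pretrainedmodels``` package as documented in the Cadene repository looks roughly like this (the model name below is only an example):

```python
import pretrainedmodels

# The package exposes the list of supported model names.
print(pretrainedmodels.model_names)

# Instantiate a pre-trained model by name.
model_name = 'resnext101_32x4d'
model = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
model.eval()
```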
  Mar 06, 2019
    • Utils: added model_params_stats · 839c433a
      Neta Zmora authored
      This is a utility function that returns some statistics about a
      model's parameters (model_sparsity, params_cnt, params_nnz_cnt).
      
      This file is required by the previous commit (and was accidentally
      left out).
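
A minimal sketch of what such a utility can look like for a PyTorch model (illustrative only, not the actual Distiller code; counting only 2-D and 4-D parameters is an assumption of this sketch):

```python
import torch.nn as nn

def model_params_stats(model, param_dims=(2, 4)):
    """Return (model_sparsity, params_cnt, params_nnz_cnt) for a model's weight tensors."""
    params_cnt = 0
    params_nnz_cnt = 0
    for param in model.parameters():
        # Count only fully-connected (2-D) and convolution (4-D) weights.
        if param.dim() in param_dims:
            params_cnt += param.numel()
            params_nnz_cnt += int((param != 0).sum())
    model_sparsity = 100.0 * (1.0 - params_nnz_cnt / params_cnt) if params_cnt else 0.0
    return model_sparsity, params_cnt, params_nnz_cnt

# Example: a tiny model with one convolution and one linear layer.
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Linear(10, 5))
sparsity, cnt, nnz = model_params_stats(net)
print(sparsity, cnt, nnz)
```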
    • compress_classifier.py: sort best scores by count of NNZ weights · 9cb0dd68
      Neta Zmora authored
      A recent commit changed the sorting of the best performing training
      epochs to be based on the sparsity level of the model, then its
      Top1 and Top5 scores.
      When we create thinned models, the sparsity remains low (even zero),
      while the physical size of the network is smaller.
      This commit changes the sorting criteria to be based on the count
      of non-zero (NNZ) parameters.  This captures both sparsity and
      parameter size objectives:
      - When sparsity is high, the number of NNZ params is low
      (params_nnz_cnt = (1 - sparsity) * params_cnt).
      - When we remove structures (thinning), the sparsity may remain
      constant, but the count of params (params_cnt) is lower, and therefore,
      once again params_nnz_cnt is lower.
      
      Therefore, params_nnz_cnt is a good proxy to capture a sparsity
      objective and/or a thinning objective.
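
A quick numeric illustration of the two cases described above (the numbers are made up):

```python
# Case 1: heavy element-wise pruning, network size unchanged.
params_cnt = 1_000_000                               # parameter count stays the same
sparsity = 0.80                                      # 80% of the weights are zero
params_nnz_cnt = int((1 - sparsity) * params_cnt)    # 200,000 non-zero parameters

# Case 2: thinning removed 30% of the structures; the remaining weights are dense.
params_cnt = 700_000                                 # parameter count shrinks
sparsity = 0.0                                       # no zeros among the remaining weights
params_nnz_cnt = int((1 - sparsity) * params_cnt)    # 700,000 non-zero parameters

# In both cases params_nnz_cnt drops below the original 1,000,000 parameters,
# so ranking checkpoints by NNZ count rewards sparsity and thinning alike.
```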
  Mar 03, 2019
    • compress_classifier.py: Fix best_epoch logic · 87055fed
      Neta Zmora authored
      Based on a commit and ideas from @barrh:
      https://github.com/NervanaSystems/distiller/pull/150/commits/1623db3cdc3a95ab620e2dc6863cff23a91087bd
      
      The sample application compress_classifier.py logs details about
      the best performing epoch(s) and stores the best epoch in a checkpoint
      file named ```best.pth.tar``` by default (if you use the ```--name```
      application argument, the checkpoint name will be prefixed by ```best```).
      
      Until this fix, the performance of a model was judged solely on its
      Top1 accuracy.  This can be a problem when performing gradual pruning
      of a pre-trained model, because many times a model's Top1 accuracy
      increases with light pruning and this is registered as the best performing
      training epoch.  However, we are really interested in the best performing
      trained model _after_ the pruning phase is done.  Even during training, we
      may be interested in the checkpoint of the best performing model with the
      highest sparsity.
      This fix stores a list of the performance results from all the trained
      epochs so far.  This list is sorted using a hierarchical key:
      (sparsity, top1, top5, epoch), so that the list is first sorted by sparsity,
      then top1, followed by top5 and epoch.
      
      But what if you want to sort using a different metric?  For example, when
      quantizing you may want to score the best performance by the total number of
      bits used to represent the model parameters and feature-maps.  In such a case
      you may want to replace ```sparsity``` by this new metric.  Because this is a
      sample application, we don't load it with all possible control logic, and
      anyone can make local changes to this logic.  To keep your code separated from
      the main application logic, we plan to refactor the application code sometime
      in the next few months.
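
As a sketch of that idea (everything here is illustrative, not the sample application's actual code), the hierarchical key can be built around whatever metric you choose to rank by:

```python
import operator
from collections import namedtuple

# Per-epoch record; 'score' is the leading metric (sparsity here, but it could be
# a total-bits estimate when quantizing).
EpochScore = namedtuple('EpochScore', ['score', 'top1', 'top5', 'epoch'])

def best_epochs(history, num_best=1):
    """Sort by (score, top1, top5, epoch), highest first, and return the best entries."""
    return sorted(history,
                  key=operator.attrgetter('score', 'top1', 'top5', 'epoch'),
                  reverse=True)[:num_best]

# Example: with equal sparsity, the epoch with the higher Top1 wins.
history = [EpochScore(score=50.0, top1=75.1, top5=92.3, epoch=30),
           EpochScore(score=50.0, top1=75.6, top5=92.5, epoch=31)]
print(best_epochs(history))
```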
    • compress_classifier.py: fix PNG and ONNX exports broken in new release · 6567ecec
      Neta Zmora authored
      Release 0.3 broke the exports to PNG and ONNX; this commit fixes them.
    • SummaryGraph - warn user when the model uses dynamic input shapes · e62d0a24
      Neta Zmora authored
      See issue #168.
      This is not a fix; it only warns the user that the MAC results may be
      wrong, until the issue is fixed.
  Feb 26, 2019
    • execution_env.py - small fix (#160) · 61c7e19d
      Bar authored
      Function ```log_execution_env_state``` copies a given configuration file to the logs directory to save all of the details of an experiment.  In some distributed environments a file copy may fail, so we wrap the copy of the configuration file in a try/except block.
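
A minimal sketch of the pattern (names are illustrative, not the actual Distiller code):

```python
import logging
import shutil

msglogger = logging.getLogger(__name__)

def copy_config_to_logdir(config_path, logdir):
    """Copy the experiment's configuration file into the logs directory.

    On some distributed / shared filesystems the copy can fail, so the error is
    logged and swallowed instead of aborting the experiment.
    """
    try:
        shutil.copy(config_path, logdir)
    except OSError as e:
        msglogger.debug('Failed to copy %s to the logs directory: %s', config_path, e)
```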
    • Update README.md · cebb140a
      Lev Zlotnik authored