  1. Feb 14, 2019
    • Store config files in logdir/configs directory (#156) · b476d028
      Bar authored
      Modified log_execution_env_state() to store the
      configuration file in the output directory,
      under a 'configs' sub-directory that it creates.

      At this time, the only configuration file is the
      one passed via args.compress
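      A minimal sketch of what this change could look like
      (log_execution_env_state and args.compress come from the commit
      message; everything else here is illustrative):

          import os
          import shutil

          def log_execution_env_state(config_path, logdir):
              # Create <logdir>/configs and copy the run's configuration
              # file (e.g. the schedule passed via args.compress) into it.
              configs_dir = os.path.join(logdir, 'configs')
              os.makedirs(configs_dir, exist_ok=True)
              shutil.copy(config_path, configs_dir)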
    • Fix automated-compression imports · ac9f61c0
      Neta Zmora authored
      To use automated compression you need to install several optional
      packages which are not required for other use-cases.
      This fix hides those imports from users who do not want to install
      the extra packages.
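      The usual pattern for hiding an optional import is to defer the
      failure until the feature is actually used; a sketch with a
      hypothetical package name and API:

          try:
              import coach  # hypothetical optional package
          except ImportError:
              coach = None

          def automated_compression(model):
              # Fail only when the optional feature is actually requested.
              if coach is None:
                  raise ImportError('Automated compression requires '
                                    'optional packages that are not installed.')
              return coach.compress(model)  # hypothetical API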
    • Create directory to hold all YAML schedules for training baseline networks · d7b5a50d
      Neta Zmora authored
      The new directory 'examples/baseline_networks' contains YAML schedules
      that we used to train the baseline networks for our experiments.
      These scripts are currently scattered across the repository, so as a
      first step I'm collecting them using soft-links.  Later we will physically
      move these files to 'examples/baseline_networks' and remove the soft-links.
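      Collecting a scattered file via a soft-link might look like this
      (the target path below is a placeholder, not the real location):

          import os

          os.makedirs('examples/baseline_networks', exist_ok=True)
          target = 'examples/classifier_compression/resnet20_cifar_baseline_training.yaml'
          link = os.path.join('examples/baseline_networks', os.path.basename(target))
          # Make the link relative to its own directory so the repo stays relocatable.
          os.symlink(os.path.relpath(target, start='examples/baseline_networks'), link)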
  2. Feb 12, 2019
    • CPU support: fix the case of loading a thinned GPU-model on the CPU · ba05f6cf
      Neta Zmora authored
      This commit fixes (and adds a test for) the case where we wish to load
      a thinned GPU checkpoint onto the CPU.
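      Loading a GPU-saved checkpoint on a CPU-only machine generally hinges
      on remapping storage locations; a minimal sketch (file name,
      architecture, and checkpoint layout are illustrative):

          import torch
          import torchvision.models as models

          model = models.resnet18()  # placeholder architecture
          # map_location remaps the checkpoint's CUDA tensors to CPU storage.
          checkpoint = torch.load('checkpoint.pth.tar', map_location='cpu')
          model.load_state_dict(checkpoint['state_dict'])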
    • Fix issue #148 + refactor load_checkpoint.py (#153) · 1210f412
      Neta Zmora authored
      The root-cause of issue #148 is that DataParallel modules cannot execute on the CPU,
      on machines that have both CPUs and GPUs.
      Therefore, we don't use DataParallel for models loaded for the CPU, but we do wrap
      the models with DataParallel when loaded on the GPUs (to make them run faster).
      The names of the module keys saved in a checkpoint file depend on whether the modules
      are wrapped by a DataParallel module or not.  So loading a checkpoint that ran on the
      GPU onto a CPU-model (and vice-versa) will fail on the keys.
      This is all PyTorch behavior and, despite the community asking for a fix -
      e.g. https://github.com/pytorch/pytorch/issues/7457 - it is still pending.

      This commit contains code to catch key errors when loading a GPU-generated model
      (i.e. one wrapped with DataParallel) onto a CPU, and to convert the names of the keys.

      This PR also merges refactoring of load_checkpoint.py done by @barrh, who also added
      a test to further exercise checkpoint loading.
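      The key-name conversion typically amounts to stripping the 'module.'
      prefix that DataParallel adds; a minimal sketch:

          from collections import OrderedDict

          def strip_dataparallel_keys(state_dict):
              # 'module.conv1.weight' (DataParallel) -> 'conv1.weight' (plain model)
              return OrderedDict(
                  (k[len('module.'):] if k.startswith('module.') else k, v)
                  for k, v in state_dict.items())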
  3. Jan 16, 2019
    • compress_classifier.py refactoring (#126) · cfbc3798
      Bar authored
      * Support for multi-phase activations logging

        Enable logging activations during both training and validation in
        the same session.

      * Refactoring: move the parser to its own file

        * The parser is moved from compress_classifier into its own file.
        * The torch version check is moved to precede the main() call.
        * The main definition is moved to the top of the file.
        * Parser choices are made case-insensitive (see the sketch below).
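      Case-insensitive argparse choices are usually implemented by
      normalizing the argument value before the choices check, e.g.:

          import argparse

          parser = argparse.ArgumentParser()
          # str.lower normalizes the input, so '--arch RESNET20' and
          # '--arch resnet20' both match the lower-case choices.
          parser.add_argument('--arch', type=str.lower,
                              choices=['resnet20', 'resnet32'])  # illustrative choices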
    • Fix for CPU support · 4cc0e7d6
      Neta Zmora authored