- Feb 26, 2019
Lev Zlotnik authored
Not backward compatible; re-installation is required.
* Fixes for PyTorch==1.0.0
* Refactoring folder structure
* Update installation section in docs
- Feb 20, 2019
Bar authored
This patch corrects an overlap between the magnitude_fn and block_shape arguments during block pruning.
Neta Zmora authored
Add a default value for the new parameter 'normalize_dataparallel_keys'
- Feb 17, 2019
Neta Zmora authored
Sometimes symlinks can't be created, so we fail quietly.
Neta Zmora authored
--amc-reward-frequency: computing the reward requires running the evaluated network on the test dataset (or part of it), and may involve short-term fine-tuning before the evaluation (depending on the configuration). Use this new argument to configure the number of steps/iterations between reward computations.
Neta Zmora authored
A small change to support ranking weight filters by the mean mean-value of the feature-map channels. "Mean mean-value" refers to computing the average, across many input images, of the mean value of each channel.
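As a sketch of this criterion (using NumPy in place of Distiller's actual PyTorch code; the function name is hypothetical), the "mean mean-value" of each channel can be computed like this:

```python
import numpy as np

def mean_mean_value(feature_maps):
    """Hypothetical sketch of the ranking criterion: for activations
    shaped (N, C, H, W), first take each channel's mean over its
    spatial dimensions, then average those means across the N images."""
    per_image_channel_means = feature_maps.mean(axis=(2, 3))  # shape (N, C)
    return per_image_channel_means.mean(axis=0)               # shape (C,)
```

Filters whose channels produce small mean mean-values would then be the candidates for pruning.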
- Feb 14, 2019
Bar authored
Modified log_execution_env_state() to store the configuration file in the output directory, under a 'configs' sub-directory that it creates. At this time, the only configuration file is the one passed via args.compress.
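A minimal sketch of what this storing step amounts to (the helper name below is an assumption for illustration, not Distiller's exact code):

```python
import os
import shutil

def stash_config(config_path, output_dir):
    """Copy a compression-schedule file into a 'configs' sub-directory
    of the experiment output directory, creating it if needed."""
    configs_dir = os.path.join(output_dir, "configs")
    os.makedirs(configs_dir, exist_ok=True)
    dest = os.path.join(configs_dir, os.path.basename(config_path))
    shutil.copyfile(config_path, dest)
    return dest
```

Keeping a copy of the schedule next to the logs makes an experiment reproducible from its output directory alone.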
Neta Zmora authored
To use automated compression you need to install several optional packages that are not required for other use-cases. This fix hides the import requirements from users who do not want to install the extra packages.
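The usual pattern for hiding optional dependencies looks roughly like this (the module name below is a placeholder, not one of Distiller's actual extras):

```python
def load_optional_rl_package():
    """Import an optional dependency only when the feature that needs
    it is requested; return None (instead of failing at import time)
    when the package is not installed."""
    try:
        import hypothetical_rl_lib  # placeholder for an optional extra
    except ImportError:
        return None
    return hypothetical_rl_lib
```

The caller can then raise a helpful "please install the extras" error only when automated compression is actually requested.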
Neta Zmora authored
The new directory 'examples/baseline_networks' contains the YAML schedules that we used to train the baseline networks for our experiments. These scripts are currently scattered across the repository, so as a first step I'm collecting them using soft-links. Later we will physically move these files to 'examples/baseline_networks' and remove the soft-links.
- Feb 13, 2019
Neta Zmora authored
Merging the 'amc' branch with 'master'. This updates the automated compression code in 'master', and adds a greedy filter-pruning algorithm.
- Feb 12, 2019
Neta Zmora authored
This commit fixes (and adds a test for) the case where we wish to load a thinned GPU checkpoint onto the CPU.
Guy Jacob authored
Neta Zmora authored
The root cause of issue #148 is that DataParallel modules cannot execute on the CPU, on machines that have both CPUs and GPUs. Therefore, we don't use DataParallel for models loaded for the CPU, but we do wrap models with DataParallel when they are loaded on the GPU (to make them run faster). The names of the module keys saved in a checkpoint file depend on whether the modules are wrapped by a DataParallel module or not, so loading a checkpoint that ran on the GPU onto a CPU model (and vice versa) will fail on the keys. This is all PyTorch behavior, and although the community has asked for a fix (e.g. https://github.com/pytorch/pytorch/issues/7457), it is still pending. This commit adds code to catch key errors when loading a GPU-generated model (i.e. with DataParallel) onto a CPU, and to convert the names of the keys. This PR also merges refactoring of load_checkpoint.py done by @barrh, who also added a test to further exercise checkpoint loading.
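The key-name conversion amounts to stripping the 'module.' prefix that DataParallel puts on every parameter name. A minimal sketch, using a plain dict in place of a real state_dict (the function name is illustrative, not Distiller's exact implementation):

```python
def strip_dataparallel_prefix(state_dict):
    """Normalize checkpoint keys saved from a DataParallel-wrapped
    model ('module.conv1.weight') so they match a plain CPU model
    ('conv1.weight'); keys without the prefix pass through unchanged."""
    prefix = "module."
    return {
        (key[len(prefix):] if key.startswith(prefix) else key): value
        for key, value in state_dict.items()
    }
```

Loading in the opposite direction (a CPU checkpoint onto a DataParallel model) would add the prefix instead.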
- Feb 11, 2019
Neta Zmora authored
Added recent Distiller citations
Guy Jacob authored
Guy Jacob authored
Summary of changes:
(1) Post-train quantization based on pre-collected statistics
(2) Quantized concat, element-wise addition / multiplication and embeddings
(3) Move post-train quantization command line args out of sample code
(4) Configure post-train quantization from YAML for more fine-grained control
(See PR #136 for more detailed change descriptions)
- Feb 10, 2019
Guy Jacob authored
* For CIFAR-10 / ImageNet only
* Refactor data_loaders.py, reduce code duplication
* Implemented custom sampler
* Integrated in image classification sample
* Since we now shuffle the test set, had to update expected results in 2 full_flow_tests that do evaluation
- Feb 06, 2019
Neta Zmora authored
Neta Zmora authored
Neta Zmora authored
Neta Zmora authored
A parameter was missing from one of the function calls.
Neta Zmora authored
Neta Zmora authored
Expand the command line arguments to recreate the original command line invocation.
Neta Zmora authored
The use of DataParallel is causing various small problems when used in conjunction with SummaryGraph. The best solution is to force SummaryGraph to use a non-data-parallel version of the model and to always normalize node names when accessing SummaryGraph operations.
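Normalizing node names can be as simple as dropping every 'module.' path component that DataParallel wrapping introduces. A sketch under that assumption (Distiller's actual normalization may differ):

```python
def normalize_node_name(name):
    """Remove 'module.' path components so that node names recorded
    from a DataParallel-wrapped model and from a plain model
    compare equal when looking up SummaryGraph operations."""
    return ".".join(part for part in name.split(".") if part != "module")
```

With names normalized on every access, callers never need to know whether the traced model was data-parallel.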
- Jan 31, 2019
Neta Zmora authored
Neta Zmora authored
Specifically, gracefully handle a missing 'epoch' key in a loaded checkpoint file.
- Jan 27, 2019
JoyFreemanYan authored
- Jan 24, 2019
- Jan 23, 2019
- Jan 22, 2019
inner authored
- Jan 21, 2019
- Jan 16, 2019
Bar authored
* Support for multi-phase activations logging: enable logging activations both during training and validation in the same session.
* Refactoring: move the parser to its own file
* The parser is moved from compress_classifier into its own file.
* The Torch version check is moved to precede the main() call.
* Move the main definition to the top of the file.
* Modify parser choices to be case-insensitive.
Neta Zmora authored
- Jan 15, 2019
Neta Zmora authored
Neta Zmora authored
Fix a mismatch between the location of the model and the computation.
- Jan 13, 2019
Neta Zmora authored
When masks are loaded from a checkpoint file, they should use the same device as the model.
Neta Zmora authored