- Nov 25, 2018
Neta Zmora authored
Instead of returning the average percentage of zeros, we returned the average percentage of non-zeros. So we inverted the results, and also multiplied by 100, because the name of the statistic says "percentage", not "fraction" (not very important, but still...).
-
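The fix above can be sketched as follows. This is a hypothetical helper (not Distiller's actual code); it uses NumPy for illustration, whereas Distiller operates on PyTorch tensors. The key point is that `count_nonzero` returns the number of *non*-zeros, so the zeros count must be derived from the total, and the result scaled by 100 to be a percentage rather than a fraction.

```python
import numpy as np

def sparsity_pct(activation):
    """Percentage (0..100) of zero elements in `activation`.

    Hypothetical sketch of the fix: count_nonzero yields the
    NON-zero count, so subtract it from the element total to get
    the zeros count, then scale to a percentage.
    """
    total = activation.size
    zeros = total - np.count_nonzero(activation)
    return 100.0 * zeros / total
```

For example, `sparsity_pct(np.array([0, 1, 0, 2]))` returns `50.0`, not `0.5`.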
- Nov 24, 2018
Neta Zmora authored
Thanks to Dan Alistarh for bringing this issue to my attention. The activations of Linear layers have shape (batch_size, output_size), while those of Convolution layers have shape (batch_size, num_channels, width, height), and this distinction in shape was not handled correctly. This commit also fixes sparsity computation for very large activations (as seen in VGG16), which can exhaust memory. One solution is to use smaller batch sizes, but this commit uses a different one: it counts zeros "manually", using less space. Also in this commit:
- Added a "caveats" section to the documentation.
- Added more tests.
-
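A minimal sketch of the idea, under the assumption that both layer types can be handled uniformly by flattening each sample (the function name and NumPy usage are illustrative, not Distiller's actual implementation): a Linear activation of shape (batch_size, output_size) and a Convolution activation of shape (batch_size, num_channels, width, height) are each reshaped to (batch_size, -1), and zeros are counted sample by sample instead of materializing one large boolean mask over the whole batch.

```python
import numpy as np

def batch_sparsity_pct(activation):
    """Average percentage of zeros per sample in a batch.

    Works for both Linear activations (batch, features) and
    Convolution activations (batch, channels, H, W): each sample
    is viewed as a flat vector, and zeros are counted per sample
    ("manually") to keep the memory footprint small.
    """
    batch_size = activation.shape[0]
    flat = activation.reshape(batch_size, -1)
    pct_sum = 0.0
    for sample in flat:
        zeros = sample.size - np.count_nonzero(sample)
        pct_sum += 100.0 * zeros / sample.size
    return pct_sum / batch_size
```

Iterating per sample trades a little speed for bounded memory, which matters for networks like VGG16 whose early convolution activations are very large.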
- Jun 13, 2018
Neta Zmora authored
-
- Apr 25, 2018
Neta Zmora authored
-
- Apr 24, 2018
Neta Zmora authored
-