Unverified Commit 952028d0 authored by Guy Jacob, committed by GitHub

Enable weights/activations-only PTQ for conv/linear modules (#439)

* Weights-only PTQ:
  * Allow RangeLinearQuantWrapper to accept num_bits_acts = None, in
    which case it'll act as a simple pass-through during forward
  * In RangeLinearQuantParamLayerWrapper, if bits_activations is None
    and num_bits_params > 0, perform quant and de-quant of the
    parameters instead of just quant.
* Activations-only PTQ:
  * Enable activations-only quantization for conv/linear modules. When
    PostTrainLinearQuantizer detects # bits != None for activations
    and # bits == None for weights, a fake-quantization wrapper will
    be used.
* Allow passing 0 in the `--qe-bits-acts` and `--qe-bits-wts` command
  line arguments to invoke weights/activations-only quantization,
  respectively.
* Minor refactoring for clarity in PostTrainLinearQuantizer's replace_*
  functions
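The quant + de-quant of parameters described above can be sketched as a minimal "fake quantization" step. This is a hypothetical illustration, not Distiller's actual code: the weights are rounded onto a symmetric signed-integer grid and immediately scaled back to floats, so the module still computes in floating point but with quantization error baked in.

```python
# Hypothetical sketch (not the RangeLinearQuantParamLayerWrapper source):
# symmetric linear fake quantization of a list of weight values.
def fake_quantize(values, num_bits):
    max_abs = max(abs(v) for v in values)
    if max_abs == 0:
        return list(values)
    n = 2 ** (num_bits - 1) - 1      # e.g. 127 for 8 bits
    scale = n / max_abs
    # quant: round onto the integer grid; de-quant: divide the scale back out
    return [round(v * scale) / scale for v in values]

weights = [0.5, -1.0, 0.3, 0.0]
print(fake_quantize(weights, 8))
```

Because the output stays in float, such a wrapper can quantize weights (or, analogously, activations) in isolation without changing the module's interface.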
parent 326d172f