llmcompressor.modifiers.quantization.gptq.base
Classes:
- GPTQModifier – Implements the GPTQ algorithm from https://arxiv.org/abs/2210.17323. This modifier uses activations to calibrate a hessian matrix, which is then used to determine optimal quantization values and orderings for the model weights.
GPTQModifier
Bases: Modifier, QuantizationMixin
Implements the GPTQ algorithm from https://arxiv.org/abs/2210.17323. This modifier uses activations to calibrate a hessian matrix, which is then used to determine optimal quantization values and orderings for the model weights.
Sample yaml:

```yaml
test_stage:
  obcq_modifiers:
    GPTQModifier:
      block_size: 128
      dampening_frac: 0.001
      offload_hessians: False
      actorder: static
      config_groups:
        group_0:
          targets:
            - "Linear"
          input_activations: null
          output_activations: null
          weights:
            num_bits: 8
            type: "int"
            symmetric: true
            strategy: group
            group_size: 128
```
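The same configuration can also be applied programmatically. A minimal sketch, assuming a recent llmcompressor release where oneshot is importable from the top-level package; the model id and calibration dataset below are placeholders, and exact argument names may vary between versions:

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# mirror the yaml recipe above as a Python object
recipe = GPTQModifier(
    block_size=128,
    dampening_frac=0.001,
    offload_hessians=False,
    actorder="static",
    scheme="W8A8",        # preset: 8-bit weights and activations
    targets="Linear",
    ignore=["lm_head"],
)

# run one-shot calibration + quantization (placeholder model/dataset ids)
oneshot(
    model="meta-llama/Llama-3.2-1B-Instruct",
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)
```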
Lifecycle:

- on_initialize
  - apply config to model
- on_start
  - add activation calibration hooks
  - add gptq weight calibration hooks
- on_sequential_epoch_end
  - quantize_weight
- on_finalize
  - remove_hooks()
  - model.apply(freeze_module_quantization)
Parameters:
- sequential_targets – list of layer names to compress during GPTQ, or 'ALL' to compress every layer in the model
- block_size – Used to determine number of columns to compress in one pass
- dampening_frac – Amount of dampening to apply to H, as a fraction of the diagonal norm
- actorder – order in which weight columns are quantized. Defaults to "static" activation ordering, which achieves best accuracy recovery with no runtime cost. For more information, see https://github.com/vllm-project/vllm/pull/8135
- offload_hessians – Set to True for decreased memory usage but increased runtime.
- config_groups – dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.
- targets – list of layer names to quantize if a scheme is provided. Defaults to Linear layers
- ignore – optional list of module class names or submodule names to not quantize even if they match a target in config_groups. Defaults to empty list.
- scheme – a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format preset_scheme_name: targets, for example W8A8: ['Linear'] for 8-bit weights and activations.
- kv_cache_scheme – optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that:
  - targets the q_proj and k_proj modules of the model. The outputs of those modules are the keys and values that might be cached
  - quantizes the outputs of the aforementioned layers, so that keys and values are compressed before storing them in the cache
  There is an explicit assumption that the model contains modules with k_proj and v_proj in their names. If this is not the case and kv_cache_scheme != None, the quantization of the kv cache will fail. A hedged recipe sketch illustrating scheme and kv_cache_scheme follows this list.
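As an illustration of the scheme and kv_cache_scheme parameters, here is a minimal recipe sketch. The field values are examples rather than defaults, the kv_cache_scheme keys are assumed to follow QuantizationArgs, and kv cache quantization assumes the model exposes k_proj and v_proj modules as noted above:

```yaml
test_stage:
  obcq_modifiers:
    GPTQModifier:
      ignore: ["lm_head"]
      scheme:
        W8A8: ["Linear"]
      kv_cache_scheme:
        num_bits: 8
        type: float
        strategy: tensor
        dynamic: false
        symmetric: true
```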
Methods:
- calibrate_module – Calibration hook used to accumulate the hessian of the input to the module
- compress_modules – Quantize modules which have been calibrated
- on_end – Finish calibrating by removing observers and calibration hooks
- on_finalize – Disable the quantization observers used by the OBCQ algorithm
- on_initialize – Initialize and run the GPTQ algorithm on the current state
calibrate_module
Calibration hook used to accumulate the hessian of the input to the module
Parameters:
- module (Module) – module being calibrated
- args (Tuple[Tensor, ...]) – inputs to the module, the first element of which is the canonical input
- _output (Tensor) – uncompressed module output, unused
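For intuition, here is a rough sketch of the kind of running update such a hook performs, maintaining H ≈ 2/N · XᵀX over calibration batches. This is illustrative only and not the library's exact implementation; the function name and signature are hypothetical:

```python
import torch

def accumulate_hessian_sketch(H: torch.Tensor, inp: torch.Tensor, num_samples: int):
    # flatten batch/sequence dims so each row of x is one observed input vector
    x = inp.reshape(-1, inp.shape[-1]).to(torch.float32)
    new = x.shape[0]
    total = num_samples + new
    # downweight the previous estimate, then add the new outer-product term
    H = H * (num_samples / total) + (2.0 / total) * (x.t() @ x)
    return H, total
```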
compress_modules
Quantize modules which have been calibrated
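To make the role of block_size and dampening_frac concrete, below is a hedged sketch of the GPTQ column-by-column weight update from the paper, using symmetric per-output-channel int8 fake-quantization for simplicity. The actual module-level implementation in the library differs (group sizes, activation ordering, observers, etc.):

```python
import torch

def gptq_quantize_sketch(W: torch.Tensor, H: torch.Tensor,
                         block_size: int = 128, dampening_frac: float = 0.01):
    W = W.clone().float()
    rows, cols = W.shape

    # symmetric per-output-channel int8 scales (simplification for this sketch)
    scale = W.abs().amax(dim=1).clamp(min=1e-8) / 127.0

    # dampen the hessian diagonal for numerical stability
    damp = dampening_frac * torch.mean(torch.diag(H))
    H = H + damp * torch.eye(cols, device=H.device)

    # upper-triangular Cholesky factor of H^-1, as in the GPTQ paper
    Hinv = torch.linalg.cholesky(
        torch.cholesky_inverse(torch.linalg.cholesky(H)), upper=True
    )

    Q = torch.zeros_like(W)
    for i1 in range(0, cols, block_size):          # compress block_size columns per pass
        i2 = min(i1 + block_size, cols)
        W1, Hinv1 = W[:, i1:i2].clone(), Hinv[i1:i2, i1:i2]
        Err1 = torch.zeros_like(W1)

        for i in range(i2 - i1):
            w, d = W1[:, i], Hinv1[i, i]
            q = torch.clamp(torch.round(w / scale), -127, 127) * scale  # fake-quantize column
            Q[:, i1 + i] = q
            err = (w - q) / d
            # compensate the remaining columns in the block for this column's error
            W1[:, i:] -= err.unsqueeze(1) * Hinv1[i, i:].unsqueeze(0)
            Err1[:, i] = err

        # propagate the accumulated block error to all later columns
        W[:, i2:] -= Err1 @ Hinv[i1:i2, i2:]

    return Q
```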
on_end
Finish calibrating by removing observers and calibration hooks
on_finalize
Disable the quantization observers used by the OBCQ algorithm
Parameters:
- state (State) – session state storing input model and calibration data
on_initialize
Initialize and run the GPTQ algorithm on the current state
Parameters:
- state (State) – session state storing input model and calibration data