llmcompressor.modifiers.quantization.gptq.base
Classes:
- GPTQModifier – Implements the GPTQ algorithm from https://arxiv.org/abs/2210.17323.
GPTQModifier
Bases: Modifier, QuantizationMixin
Implements the GPTQ algorithm from https://arxiv.org/abs/2210.17323. This modifier uses activations to calibrate a Hessian matrix, which is then used to determine optimal quantization values and orderings for the model weights.
Sample yaml:

```yaml
test_stage:
  obcq_modifiers:
    GPTQModifier:
      block_size: 128
      dampening_frac: 0.001
      offload_hessians: False
      actorder: static
      config_groups:
        group_0:
          targets:
            - "Linear"
          input_activations: null
          output_activations: null
          weights:
            num_bits: 8
            type: "int"
            symmetric: true
            strategy: group
            group_size: 128
```
Lifecycle:

- on_initialize
  - apply config to model
- on_start
  - add activation calibration hooks
  - add gptq weight calibration hooks
- on_sequential_epoch_end
  - quantize_weight
- on_finalize
  - remove_hooks()
  - model.apply(freeze_module_quantization)
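For context, a minimal Python sketch of applying this modifier through llmcompressor's oneshot entrypoint, assuming a recent release where oneshot is exported at the top level; the model name, dataset, and hyperparameter values below are illustrative placeholders, not part of this API reference:

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Quantize the weights and activations of all Linear layers to 8-bit,
# leaving the output head unquantized. Values here are illustrative.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A8",
    ignore=["lm_head"],
    block_size=128,
    dampening_frac=0.01,
)

# "TinyLlama/TinyLlama-1.1B-Chat-v1.0" and "open_platypus" are placeholder
# model/dataset choices; any causal LM and calibration dataset would do.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)
```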
Parameters:
- sequential_targets – list of layer names to compress during GPTQ, or '__ALL__' to compress every layer in the model
- block_size – number of columns to compress in one pass
- dampening_frac – amount of dampening to apply to H, as a fraction of the diagonal norm (see the sketch under compress_modules below)
- actorder – order in which weight columns are quantized. For more information on actorder options, see https://github.com/vllm-project/vllm/pull/8135
- offload_hessians – set to True for decreased memory usage at the cost of increased runtime
- config_groups – dictionary specifying quantization schemes to apply to target modules. Modules not matching a scheme target will NOT be quantized.
- targets – list of layer names to quantize if a scheme is provided. Defaults to Linear layers
- ignore – optional list of module class names or submodule names not to quantize, even if they match a target in config_groups. Defaults to an empty list.
- scheme – a single quantization scheme to apply to the model. This is a dictionary that supports all keys from QuantizationScheme except targets, which will be set to the targets parameter set at the modifier level. Can also be set to a dictionary of the format `preset_scheme_name: targets`, for example `W8A8: ['Linear']` for 8-bit weights and activations.
- kv_cache_scheme – optional QuantizationArgs that specify the quantization of the kv cache. If None, the kv cache is not quantized. When applying kv cache quantization to a transformer AutoModelForCausalLM, the kv_cache_scheme gets converted into a QuantizationScheme that:
  - targets the k_proj and v_proj modules of the model. The outputs of those modules are the keys and values that might be cached
  - quantizes the outputs of the aforementioned layers, so that keys and values are compressed before storing them in the cache

  There is an explicit assumption that the model contains modules with k_proj and v_proj in their names. If this is not the case and kv_cache_scheme != None, the quantization of the kv cache will fail.
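As a hedged illustration of the kv_cache_scheme parameter, a recipe fragment along the following lines could enable int8 kv cache quantization; the field values are examples built from the QuantizationArgs keys documented elsewhere, not recommendations:

```yaml
test_stage:
  obcq_modifiers:
    GPTQModifier:
      targets: ["Linear"]
      scheme: W8A8
      kv_cache_scheme:
        num_bits: 8
        type: "int"
        symmetric: true
        strategy: tensor
```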
Methods:
- calibrate_module – Calibration hook used to accumulate the Hessian of the input to the module
- compress_modules – Quantize modules which have been calibrated
- on_end – Finish calibrating by removing observers and calibration hooks
- on_finalize – Disable the quantization observers used by the OBCQ algorithm
- on_initialize – Initialize and run the GPTQ algorithm on the current state
calibrate_module
Calibration hook used to accumulate the Hessian of the input to the module

Parameters:

- module (Module) – module being calibrated
- args (Tuple[Tensor, ...]) – inputs to the module, the first element of which is the canonical input
- _output (Tensor) – uncompressed module output, unused
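Conceptually, the hook keeps a running estimate of the Hessian H = 2·E[x xᵀ] of the layer input across calibration batches. The following standalone sketch shows one way such an update can be written; it illustrates the math, not the library's internal implementation:

```python
import torch

def update_hessian(
    H: torch.Tensor, inp: torch.Tensor, num_samples: int
) -> tuple[torch.Tensor, int]:
    """Fold a new calibration batch into the running Hessian H = 2 * E[x x^T].

    `inp` is flattened to (new_samples, num_columns); the previous estimate
    is rescaled so every sample seen so far carries equal weight.
    """
    x = inp.reshape(-1, inp.shape[-1]).to(H.dtype)
    num_new = x.shape[0]
    total = num_samples + num_new
    H *= num_samples / total        # down-weight the old estimate
    x = x * (2.0 / total) ** 0.5    # so x.T @ x contributes (2 / total) * sum of x x^T
    H += x.t() @ x                  # rank-`num_new` update
    return H, total
```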
compress_modules
Quantize modules which have been calibrated
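Before the quantized weights are solved for, GPTQ stabilizes the Hessian by adding dampening to its diagonal, and dampening_frac scales that term. A hedged sketch of the idea, following the reference GPTQ implementation's use of the mean diagonal value (not necessarily this library's exact code):

```python
import torch

def dampen_hessian(H: torch.Tensor, dampening_frac: float = 0.01) -> torch.Tensor:
    """Return a copy of H with dampening_frac * mean(diag(H)) added to its diagonal."""
    damp = dampening_frac * torch.mean(torch.diag(H))
    H = H.clone()
    H.diagonal().add_(damp)  # keeps the Cholesky factorization numerically stable
    return H
```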
on_end
Finish calibrating by removing observers and calibration hooks
on_finalize
Disable the quantization observers used by the OBCQ algorithm

Parameters:

- state (State) – session state storing input model and calibration data
on_initialize
Initialize and run the GPTQ algorithm on the current state
Parameters:

- state (State) – session state storing input model and calibration data