Transformers documentation

Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types such as 8-bit integers (int8). This makes it possible to load larger models that would normally not fit in memory and to speed up inference. Transformers supports the AWQ and GPTQ quantization algorithms, and it supports 8-bit and 4-bit quantization with bitsandbytes. Quantization techniques that are not supported in Transformers can be added with the HfQuantizer class.

Learn how to quantize models in the Quantization guide.
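
As a quick illustration of the general pattern, a quantization config is simply passed to from_pretrained. The sketch below uses 8-bit bitsandbytes quantization; the checkpoint name is only a placeholder for any causal language model on the Hub.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load a model in 8-bit with bitsandbytes; "facebook/opt-350m" is only an example checkpoint
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config, device_map="auto")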

QuantoConfig

class transformers.QuantoConfig

( weights = 'int8' activations = None modules_to_not_convert: typing.Optional[typing.List] = None **kwargs )

Parameters

  • weights (str, optional, defaults to "int8") — The target dtype for the weights after quantization. Supported values are ("float8", "int8", "int4", "int2").
  • activations (str, optional) — The target dtype for the activations after quantization. Supported values are (None, "int8", "float8").
  • modules_to_not_convert (list, optional, defaults to None) — The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers).

This is a wrapper class for all of the possible attributes and features that you can play with for a model that has been loaded using quanto.
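
A minimal usage sketch, assuming the quanto backend is installed; the checkpoint name is only a placeholder:

from transformers import AutoModelForCausalLM, QuantoConfig

# int8 weight-only quantization applied at load time via the quanto backend
quantization_config = QuantoConfig(weights="int8")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config, device_map="auto")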

post_init

( )

Safety checker that arguments are correct

AqlmConfig

class transformers.AqlmConfig

( in_group_size: int = 8 out_group_size: int = 1 num_codebooks: int = 1 nbits_per_codebook: int = 16 linear_weights_not_to_quantize: typing.Optional[typing.List[str]] = None **kwargs )

Parameters

  • in_group_size (int, optional, defaults to 8) — The group size along the input dimension.
  • out_group_size (int, optional, defaults to 1) — The group size along the output dimension. It’s recommended to always use 1.
  • num_codebooks (int, optional, defaults to 1) — Number of codebooks for the Additive Quantization procedure.
  • nbits_per_codebook (int, optional, defaults to 16) — Number of bits encoding a single codebook vector. The codebook size is 2**nbits_per_codebook.
  • linear_weights_not_to_quantize (Optional[List[str]], optional) — List of full paths of nn.Linear weight parameters that shall not be quantized.
  • kwargs (Dict[str, Any], optional) — Additional parameters from which to initialize the configuration object.

This is a wrapper class for the aqlm quantization parameters.
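
AQLM checkpoints on the Hub are usually already quantized, so in practice this config is read from the checkpoint itself; constructing one by hand looks like the sketch below, mirroring the defaults above:

from transformers import AqlmConfig

# A 1x16 AQLM scheme: one 16-bit codebook, groups of 8 along the input dimension
aqlm_config = AqlmConfig(in_group_size=8, out_group_size=1, num_codebooks=1, nbits_per_codebook=16)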

post_init

( )

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.

AwqConfig

class transformers.AwqConfig

( bits: int = 4 group_size: int = 128 zero_point: bool = True version: AWQLinearVersion = <AWQLinearVersion.GEMM: 'gemm'> backend: AwqBackendPackingMethod = <AwqBackendPackingMethod.AUTOAWQ: 'autoawq'> do_fuse: typing.Optional[bool] = None fuse_max_seq_len: typing.Optional[int] = None modules_to_fuse: typing.Optional[dict] = None modules_to_not_convert: typing.Optional[typing.List] = None exllama_config: typing.Optional[typing.Dict[str, int]] = None **kwargs )

Parameters

  • bits (int, optional, defaults to 4) — The number of bits to quantize to.
  • group_size (int, optional, defaults to 128) — The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.
  • zero_point (bool, optional, defaults to True) — Whether to use zero point quantization.
  • version (AWQLinearVersion, optional, defaults to AWQLinearVersion.GEMM) — The version of the quantization algorithm to use. GEMM is better for big batch_size (e.g. >= 8); otherwise GEMV is better (e.g. < 8). GEMM models are compatible with Exllama kernels.
  • backend (AwqBackendPackingMethod, optional, defaults to AwqBackendPackingMethod.AUTOAWQ) — The quantization backend. Some models might be quantized using llm-awq backend. This is useful for users that quantize their own models using llm-awq library.
  • do_fuse (bool, optional, defaults to False) — Whether to fuse attention and mlp layers together for faster inference.
  • fuse_max_seq_len (int, optional) — The maximum sequence length to generate when using fusing.
  • modules_to_fuse (dict, optional, defaults to None) — Overwrite the natively supported fusing scheme with the one specified by the user.
  • modules_to_not_convert (list, optional, defaults to None) — The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision (e.g. Whisper encoder, Llava encoder, Mixtral gate layers). Note that you cannot quantize directly with transformers; please refer to the AutoAWQ documentation for quantizing HF models.
  • exllama_config (Dict[str, Any], optional) — You can specify the version of the exllama kernel through the version key, the maximum sequence length through the max_input_len key, and the maximum batch size through the max_batch_size key. Defaults to {"version": 2, "max_input_len": 2048, "max_batch_size": 8} if unset.

This is a wrapper class for all of the possible attributes and features that you can play with for a model that has been loaded with AWQ quantization using the auto-awq library, which relies on the auto_awq backend.
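
Since AWQ checkpoints are quantized ahead of time with AutoAWQ or llm-awq, the config passed to from_pretrained is mainly used to tweak loading behavior such as module fusing. A hedged sketch, using an example AWQ checkpoint name:

from transformers import AutoModelForCausalLM, AwqConfig

# Enable fused attention/MLP modules when loading a pre-quantized AWQ checkpoint
quantization_config = AwqConfig(bits=4, fuse_max_seq_len=512, do_fuse=True)
model = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-OpenOrca-AWQ", quantization_config=quantization_config)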

post_init

( )

Safety checker that arguments are correct

EetqConfig

class transformers.EetqConfig

( weights: str = 'int8' modules_to_not_convert: typing.Optional[typing.List] = None **kwargs )

Parameters

  • weights (str, optional, defaults to "int8") — The target dtype for the weights. Only "int8" is supported.
  • modules_to_not_convert (list, optional, defaults to None) — The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision.

This is a wrapper class for all of the possible attributes and features that you can play with for a model that has been loaded using eetq.
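
A minimal sketch, assuming the eetq kernels are installed and using a placeholder checkpoint:

from transformers import AutoModelForCausalLM, EetqConfig

# int8 weight-only quantization with EETQ kernels
quantization_config = EetqConfig("int8")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config, device_map="auto")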

post_init

( )

Safety checker that arguments are correct

GPTQConfig

class transformers.GPTQConfig

( bits: int tokenizer: typing.Any = None dataset: typing.Union[typing.List[str], str, NoneType] = None group_size: int = 128 damp_percent: float = 0.1 desc_act: bool = False sym: bool = True true_sequential: bool = True use_cuda_fp16: bool = False model_seqlen: typing.Optional[int] = None block_name_to_quantize: typing.Optional[str] = None module_name_preceding_first_block: typing.Optional[typing.List[str]] = None batch_size: int = 1 pad_token_id: typing.Optional[int] = None use_exllama: typing.Optional[bool] = None max_input_length: typing.Optional[int] = None exllama_config: typing.Optional[typing.Dict[str, typing.Any]] = None cache_block_outputs: bool = True modules_in_block_to_quantize: typing.Optional[typing.List[typing.List[str]]] = None **kwargs )

Parameters

  • bits (int) — The number of bits to quantize to, supported numbers are (2, 3, 4, 8).
  • tokenizer (str or PreTrainedTokenizerBase, optional) — The tokenizer used to process the dataset. You can pass either:
    • A custom tokenizer object.
    • A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co.
    • A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
  • dataset (Union[List[str]], optional) — The dataset used for quantization. You can provide your own dataset in a list of strings, or just use the original datasets used in the GPTQ paper: ['wikitext2', 'c4', 'c4-new'].
  • group_size (int, optional, defaults to 128) — The group size to use for quantization. Recommended value is 128 and -1 uses per-column quantization.
  • damp_percent (float, optional, defaults to 0.1) — The percent of the average Hessian diagonal to use for dampening. Recommended value is 0.1.
  • desc_act (bool, optional, defaults to False) — Whether to quantize columns in order of decreasing activation size. Setting it to False can significantly speed up inference but the perplexity may become slightly worse. Also known as act-order.
  • sym (bool, optional, defaults to True) — Whether to use symmetric quantization.
  • true_sequential (bool, optional, defaults to True) — Whether to perform sequential quantization even within a single Transformer block. Instead of quantizing the entire block at once, we perform layer-wise quantization. As a result, each layer undergoes quantization using inputs that have passed through the previously quantized layers.
  • use_cuda_fp16 (bool, optional, defaults to False) — Whether or not to use optimized cuda kernel for fp16 model. Need to have model in fp16.
  • model_seqlen (int, optional) — The maximum sequence length that the model can take.
  • block_name_to_quantize (str, optional) — The transformers block name to quantize. If None, we will infer the block name using common patterns (e.g. model.layers)
  • module_name_preceding_first_block (List[str], optional) — The layers that are preceding the first Transformer block.
  • batch_size (int, optional, defaults to 1) — The batch size used when processing the dataset
  • pad_token_id (int, optional) — The pad token id. Needed to prepare the dataset when batch_size > 1.
  • use_exllama (bool, optional) — Whether to use exllama backend. Defaults to True if unset. Only works with bits = 4.
  • max_input_length (int, optional) — The maximum input length. This is needed to initialize a buffer that depends on the maximum expected input length. It is specific to the exllama backend with act-order.
  • exllama_config (Dict[str, Any], optional) — The exllama config. You can specify the version of the exllama kernel through the version key. Defaults to {"version": 1} if unset.
  • cache_block_outputs (bool, optional, defaults to True) — Whether to cache block outputs to reuse as inputs for the succeeding block.
  • modules_in_block_to_quantize (List[List[str]], optional) — List of list of module names to quantize in the specified block. This argument is useful to exclude certain linear modules from being quantized. The block to quantize can be specified by setting block_name_to_quantize. We will quantize each list sequentially. If not set, we will quantize all linear layers. Example: modules_in_block_to_quantize =[["self_attn.k_proj", "self_attn.v_proj", "self_attn.q_proj"], ["self_attn.o_proj"]]. In this example, we will first quantize the q,k,v layers simultaneously since they are independent. Then, we will quantize self_attn.o_proj layer with the q,k,v layers quantized. This way, we will get better results since it reflects the real input self_attn.o_proj will get when the model is quantized.

This is a wrapper class for all of the possible attributes and features that you can play with for a model that has been loaded using the optimum API for GPTQ quantization, which relies on the auto_gptq backend.
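
A sketch of on-the-fly GPTQ quantization at load time, assuming optimum and auto-gptq are installed; the model id is a placeholder and "c4" is one of the built-in calibration datasets listed above:

from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Calibrate on the c4 dataset and quantize the linear layers to 4-bit
quantization_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config, device_map="auto")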

from_dict_optimum

( config_dict )

Get a compatible class from an optimum gptq config dict.

post_init

( )

Safety checker that arguments are correct

to_dict_optimum

( )

Get a compatible dict for the optimum gptq config.

BitsAndBytesConfig

class transformers.BitsAndBytesConfig

( load_in_8bit = False load_in_4bit = False llm_int8_threshold = 6.0 llm_int8_skip_modules = None llm_int8_enable_fp32_cpu_offload = False llm_int8_has_fp16_weight = False bnb_4bit_compute_dtype = None bnb_4bit_quant_type = 'fp4' bnb_4bit_use_double_quant = False bnb_4bit_quant_storage = None **kwargs )

Parameters

  • load_in_8bit (bool, optional, defaults to False) — This flag is used to enable 8-bit quantization with LLM.int8().
  • load_in_4bit (bool, optional, defaults to False) — This flag is used to enable 4-bit quantization by replacing the Linear layers with FP4/NF4 layers from bitsandbytes.
  • llm_int8_threshold (float, optional, defaults to 6.0) — This corresponds to the outlier threshold for outlier detection as described in LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale paper: https://arxiv.org/abs/2208.07339 Any hidden states value that is above this threshold will be considered an outlier and the operation on those values will be done in fp16. Values are usually normally distributed, that is, most values are in the range [-3.5, 3.5], but there are some exceptional systematic outliers that are very differently distributed for large models. These outliers are often in the interval [-60, -6] or [6, 60]. Int8 quantization works well for values of magnitude ~5, but beyond that, there is a significant performance penalty. A good default threshold is 6, but a lower threshold might be needed for more unstable models (small models, fine-tuning).
  • llm_int8_skip_modules (List[str], optional) — An explicit list of the modules that we do not want to convert in 8-bit. This is useful for models such as Jukebox that have several heads in different places and not necessarily at the last position. For example, for CausalLM models, the last lm_head is kept in its original dtype.
  • llm_int8_enable_fp32_cpu_offload (bool, optional, defaults to False) — This flag is used for advanced use cases and users that are aware of this feature. If you want to split your model in different parts and run some parts in int8 on GPU and some parts in fp32 on CPU, you can use this flag. This is useful for offloading large models such as google/flan-t5-xxl. Note that the int8 operations will not be run on CPU.
  • llm_int8_has_fp16_weight (bool, optional, defaults to False) — This flag runs LLM.int8() with 16-bit main weights. This is useful for fine-tuning as the weights do not have to be converted back and forth for the backward pass.
  • bnb_4bit_compute_dtype (torch.dtype or str, optional, defaults to torch.float32) — This sets the computational type which might be different than the input type. For example, inputs might be fp32, but computation can be set to bf16 for speedups.
  • bnb_4bit_quant_type (str, optional, defaults to "fp4") — This sets the quantization data type in the bnb.nn.Linear4Bit layers. Options are FP4 and NF4 data types which are specified by fp4 or nf4.
  • bnb_4bit_use_double_quant (bool, optional, defaults to False) — This flag is used for nested quantization where the quantization constants from the first quantization are quantized again.
  • bnb_4bit_quant_storage (torch.dtype or str, optional, defaults to torch.uint8) — This sets the storage type to pack the quantized 4-bit params.
  • kwargs (Dict[str, Any], optional) — Additional parameters from which to initialize the configuration object.

This is a wrapper class for all of the possible attributes and features that you can play with for a model that has been loaded using bitsandbytes.

This replaces load_in_8bit or load_in_4bit; therefore both options are mutually exclusive.

Currently only supports LLM.int8(), FP4, and NF4 quantization. If more methods are added to bitsandbytes, then more arguments will be added to this class.
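
A typical 4-bit NF4 setup, shown as a sketch with a placeholder checkpoint:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization and bfloat16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", quantization_config=quantization_config, device_map="auto")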

is_quantizable

( )

Returns True if the model is quantizable, False otherwise.

post_init

( )

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.

quantization_method

( )

This method returns the quantization method used for the model. If the model is not quantizable, it returns None.

to_diff_dict

( ) → Dict[str, Any]

Returns

Dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.

HfQuantizer

class transformers.quantizers.HfQuantizer

( quantization_config: QuantizationConfigMixin **kwargs )

Abstract class of the HuggingFace quantizer. For now, it supports quantizing HF transformers models for inference and/or quantization. This class is used only for transformers.PreTrainedModel.from_pretrained and cannot easily be used outside the scope of that method yet.

Attributes:

  • quantization_config (transformers.utils.quantization_config.QuantizationConfigMixin) — The quantization config that defines the quantization parameters of your model that you want to quantize.
  • modules_to_not_convert (List[str], optional) — The list of module names to not convert when quantizing the model.
  • required_packages (List[str], optional) — The list of required pip packages to install prior to using the quantizer.
  • requires_calibration (bool) — Whether the quantization method requires to calibrate the model before using it.
  • requires_parameters_quantization (bool) — Whether the quantization method requires to create a new Parameter. For example, for bitsandbytes, it is required to create a new xxxParameter in order to properly quantize the model.
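
As a rough, hedged sketch of what a custom quantizer can look like (the exact set of abstract members can vary between Transformers versions, and MyQuantizer / my_quant_lib are hypothetical names):

from transformers.quantizers import HfQuantizer


class MyQuantizer(HfQuantizer):
    # Hypothetical quantizer skeleton; a real integration also registers a matching
    # QuantizationConfigMixin subclass and is wired into the auto quantizer mapping.
    requires_calibration = False
    required_packages = ["my_quant_lib"]  # hypothetical dependency

    def validate_environment(self, *args, **kwargs):
        # Raise here if required packages or hardware are missing.
        pass

    def _process_model_before_weight_loading(self, model, **kwargs):
        # The model is still on the meta device: swap nn.Linear modules for quantized ones here.
        return model

    def _process_model_after_weight_loading(self, model, **kwargs):
        # Finalize the quantized modules once the weights have been loaded.
        return model

    @property
    def is_trainable(self):
        return False

    @property
    def is_serializable(self):
        return False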

adjust_max_memory

( max_memory: typing.Dict[str, typing.Union[int, str]] )

adjust max_memory argument for infer_auto_device_map() if extra memory is needed for quantization

adjust_target_dtype

( torch_dtype: torch.dtype )

Parameters

  • torch_dtype (torch.dtype, optional) — The torch_dtype that is used to compute the device_map.

Override this method if you want to adjust the target_dtype variable used in from_pretrained to compute the device_map in case the device_map is a str. E.g. for bitsandbytes we force-set target_dtype to torch.int8 and for 4-bit we pass a custom enum accelerate.CustomDtype.int4.

check_quantized_param

( model: PreTrainedModel param_value: torch.Tensor param_name: str state_dict: typing.Dict[str, typing.Any] **kwargs )

Checks if a loaded state_dict component is part of a quantized param, plus some validation; only defined if requires_parameters_quantization == True, for quantization methods that require creating a new parameter for quantization.

create_quantized_param

( *args **kwargs )

Takes the needed components from the state_dict and creates a quantized param; only applicable if requires_parameters_quantization == True.

dequantize

( model )

Potentially dequantize the model to retrieve the original model, with some loss in accuracy / performance. Note that not all quantization schemes support this.

get_special_dtypes_update

( model torch_dtype: torch.dtype )

Parameters

  • model (~transformers.PreTrainedModel) — The model to quantize
  • torch_dtype (torch.dtype) — The dtype passed in from_pretrained method.

Returns dtypes for modules that are not quantized - used for the computation of the device_map in case one passes a str as a device_map. The method will use the modules_to_not_convert that is modified in _process_model_before_weight_loading.

postprocess_model

( model: PreTrainedModel **kwargs )

Parameters

  • model (~transformers.PreTrainedModel) — The model to quantize
  • kwargs (dict, optional) — The keyword arguments that are passed along _process_model_after_weight_loading.

Post-process the model after the weights have been loaded. Make sure to override the abstract method _process_model_after_weight_loading.

preprocess_model

( model: PreTrainedModel **kwargs )

Parameters

  • model (~transformers.PreTrainedModel) — The model to quantize
  • kwargs (dict, optional) — The keyword arguments that are passed along _process_model_before_weight_loading.

Setting model attributes and/or converting model before weights loading. At this point the model should be initialized on the meta device so you can freely manipulate the skeleton of the model in order to replace modules in-place. Make sure to override the abstract method _process_model_before_weight_loading.

update_device_map

( device_map: typing.Optional[typing.Dict[str, typing.Any]] )

Parameters

  • device_map (Union[dict, str], optional) — The device_map that is passed through the from_pretrained method.

Override this method if you want to override the existing device_map with a new one. E.g. for bitsandbytes, since accelerate is a hard requirement, if no device_map is passed, the device_map is set to "auto".

update_expected_keys

( model expected_keys: typing.List[str] loaded_keys: typing.List[str] )

Parameters

  • expected_keys (List[str], optional) — The list of the expected keys in the initialized model.
  • loaded_keys (List[str], optional) — The list of the loaded keys in the checkpoint.

Override this method if you want to adjust the expected_keys.

update_missing_keys

( model missing_keys: typing.List[str] prefix: str )

Parameters

  • missing_keys (List[str], optional) — The list of missing keys in the checkpoint compared to the state dict of the model

Override this method if you want to adjust the missing_keys.

update_torch_dtype

( torch_dtype: torch.dtype )

Parameters

  • torch_dtype (torch.dtype) — The input dtype that is passed in from_pretrained

Some quantization methods require to explicitly set the dtype of the model to a target dtype. You need to override this method in case you want to make sure that behavior is preserved

validate_environment

( *args **kwargs )

This method is used to check for potential conflicts with arguments that are passed in from_pretrained. You need to define it for all future quantizers that are integrated with transformers. If no explicit check is needed, simply return nothing.

HqqConfig

class transformers.HqqConfig

( nbits: int = 4 group_size: int = 64 view_as_float: bool = False axis: typing.Optional[int] = None dynamic_config: typing.Optional[dict] = None skip_modules: typing.List[str] = ['lm_head'] **kwargs )

Parameters

  • nbits (int, optional, defaults to 4) — Number of bits. Supported values are (8, 4, 3, 2, 1).
  • group_size (int, optional, defaults to 64) — Group-size value. Supported values are any value that is divisible by weight.shape[axis].
  • view_as_float (bool, optional, defaults to False) — View the quantized weight as float (used in distributed training) if set to True.
  • axis (Optional[int], optional) — Axis along which grouping is performed. Supported values are 0 or 1.
  • dynamic_config (dict, optional) — Parameters for dynamic configuration. The key is the name tag of the layer and the value is a quantization config. If set, each layer specified by its id will use its dedicated quantization configuration.
  • skip_modules (List[str], optional, defaults to ['lm_head']) — List of nn.Linear layers to skip.
  • kwargs (Dict[str, Any], optional) — Additional parameters from which to initialize the configuration object.

This is a wrapper around hqq's BaseQuantizeConfig.
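
A minimal sketch, assuming the hqq package is installed and a CUDA device is available; the checkpoint name is only a placeholder:

import torch
from transformers import AutoModelForCausalLM, HqqConfig

# 4-bit HQQ quantization applied on the fly when the weights are loaded
quantization_config = HqqConfig(nbits=4, group_size=64)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, device_map="cuda", quantization_config=quantization_config)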

from_dict

( config: typing.Dict[str, typing.Any] )

Override from_dict, used in AutoQuantizationConfig.from_dict in quantizers/auto.py

post_init

( )

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.

to_diff_dict

( ) → Dict[str, Any]

Returns

Dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.

FbgemmFp8Config

class transformers.FbgemmFp8Config

( activation_scale_ub: float = 1200.0 modules_to_not_convert: typing.Optional[typing.List] = None **kwargs )

Parameters

  • activation_scale_ub (float, optional, defaults to 1200.0) — The activation scale upper bound. This is used when quantizing the input activation.
  • modules_to_not_convert (list, optional, defaults to None) — The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision.

This is a wrapper class for all of the possible attributes and features that you can play with for a model that has been loaded using fbgemm fp8 quantization.
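
A hedged sketch, assuming fbgemm-gpu is installed and a GPU with FP8 support is available; the checkpoint is an example placeholder:

from transformers import AutoModelForCausalLM, FbgemmFp8Config

# Quantize the weights to FP8 at load time with FBGEMM kernels
quantization_config = FbgemmFp8Config()
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", device_map="cuda", quantization_config=quantization_config)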

CompressedTensorsConfig

class transformers.CompressedTensorsConfig

( config_groups: typing.Dict[str, typing.Union[ForwardRef('QuantizationScheme'), typing.List[str]]] = None format: str = 'dense' quantization_status: QuantizationStatus = 'initialized' kv_cache_scheme: typing.Optional[ForwardRef('QuantizationArgs')] = None global_compression_ratio: typing.Optional[float] = None ignore: typing.Optional[typing.List[str]] = None sparsity_config: typing.Dict[str, typing.Any] = None quant_method: str = 'compressed-tensors' **kwargs )

Parameters

  • config_groups (typing.Dict[str, typing.Union[ForwardRef('QuantizationScheme'), typing.List[str]]], optional) — Dictionary mapping group name to a quantization scheme definition.
  • format (str, optional, defaults to "dense") — Format the model is represented as.
  • quantization_status (QuantizationStatus, optional, defaults to "initialized") — Status of the model in the quantization lifecycle, i.e. 'initialized', 'calibration', 'frozen'.
  • kv_cache_scheme (typing.Union[QuantizationArgs, NoneType], optional) — Specifies quantization of the kv cache. If None, the kv cache is not quantized.
  • global_compression_ratio (typing.Union[float, NoneType], optional) — 0-1 float percentage of model compression.
  • ignore (typing.Union[typing.List[str], NoneType], optional) — Layer names or types to not quantize; supports regex prefixed by 're:'.
  • sparsity_config (typing.Dict[str, typing.Any], optional) — Configuration for sparsity compression.
  • quant_method (str, optional, defaults to "compressed-tensors") — Do not override; should be "compressed-tensors".

This is a wrapper class that handles compressed-tensors quantization config options. It is a wrapper around compressed_tensors.QuantizationConfig.
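
Checkpoints produced with llm-compressor / compressed-tensors already embed this config in their config.json, so it is usually not constructed by hand; loading such a checkpoint looks like the sketch below (the model id is a hypothetical placeholder):

from transformers import AutoModelForCausalLM

# The CompressedTensorsConfig is read from the checkpoint's config.json (placeholder model id)
model = AutoModelForCausalLM.from_pretrained("organization/llama-w8a8-compressed-tensors", device_map="auto")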

from_dict

( config_dict return_unused_kwargs = False **kwargs ) → QuantizationConfigMixin

Parameters

  • config_dict (Dict[str, Any]) — Dictionary that will be used to instantiate the configuration object.
  • return_unused_kwargs (bool, optional, defaults to False) — Whether or not to return a list of unused keyword arguments. Used for from_pretrained method in PreTrainedModel.
  • kwargs (Dict[str, Any]) — Additional parameters from which to initialize the configuration object.

Returns

QuantizationConfigMixin

The configuration object instantiated from those parameters.

Instantiates a CompressedTensorsConfig from a Python dictionary of parameters. Optionally unwraps any args from the nested quantization_config

to_diff_dict

( ) → Dict[str, Any]

Returns

Dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.

TorchAoConfig

class transformers.TorchAoConfig

( quant_type: str modules_to_not_convert: typing.Optional[typing.List] = None **kwargs )

Parameters

  • quant_type (str) — The type of quantization we want to use, currently supporting: int4_weight_only, int8_weight_only and int8_dynamic_activation_int8_weight.
  • modules_to_not_convert (list, optional, defaults to None) — The list of modules to not quantize, useful for quantizing models that explicitly require to have some modules left in their original precision.
  • kwargs (Dict[str, Any], optional) — The keyword arguments for the chosen type of quantization, for example, int4_weight_only quantization supports two keyword arguments group_size and inner_k_tiles currently. More API examples and documentation of arguments can be found in https://github.com/pytorch/ao/tree/main/torchao/quantization#other-available-quantization-techniques

This is a config class for torchao quantization/sparsity techniques.

Example:

import torch
from transformers import AutoModelForCausalLM, TorchAoConfig
model_id = "facebook/opt-125m"  # example placeholder; use any causal LM checkpoint
quantization_config = TorchAoConfig("int4_weight_only", group_size=32)
# int4_weight_only quant is only working with *torch.bfloat16* dtype right now
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda", torch_dtype=torch.bfloat16, quantization_config=quantization_config)

post_init

( )

Safety checker that arguments are correct - also replaces some NoneType arguments with their default values.
