DeepSpeed utilities
DeepSpeedPlugin
get_active_deepspeed_plugin
accelerate.utils.get_active_deepspeed_plugin
< source >( state )
Returns the currently active DeepSpeedPlugin.

Raises

ValueError — If DeepSpeed was not enabled and this function is called.
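A minimal sketch of using this when multiple DeepSpeed plugins are configured; the plugin names ("student", "teacher") and ZeRO stages are illustrative, and DeepSpeed must be installed:

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin, get_active_deepspeed_plugin

# One plugin per model; the dict keys are arbitrary labels.
deepspeed_plugins = {
    "student": DeepSpeedPlugin(zero_stage=2),
    "teacher": DeepSpeedPlugin(zero_stage=3),
}
accelerator = Accelerator(deepspeed_plugin=deepspeed_plugins)

# Returns whichever plugin is currently selected (the first one by default).
active_plugin = get_active_deepspeed_plugin(accelerator.state)
```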
class accelerate.DeepSpeedPlugin
< source >( hf_ds_config: typing.Any = None gradient_accumulation_steps: int = None gradient_clipping: float = None zero_stage: int = None is_train_batch_min: bool = True offload_optimizer_device: str = None offload_param_device: str = None offload_optimizer_nvme_path: str = None offload_param_nvme_path: str = None zero3_init_flag: bool = None zero3_save_16bit_model: bool = None transformer_moe_cls_names: str = None enable_msamp: bool = None msamp_opt_level: typing.Optional[typing.Literal['O1', 'O2']] = None )
Parameters
- hf_ds_config (Any, defaults to None) — Path to a DeepSpeed config file, a dict, or an object of class accelerate.utils.deepspeed.HfDeepSpeedConfig.
- gradient_accumulation_steps (int, defaults to None) — Number of steps to accumulate gradients before updating optimizer states. If not set, will use the value from the Accelerator directly.
- gradient_clipping (float, defaults to None) — Enable gradient clipping with this value.
- zero_stage (int, defaults to None) — Possible options are 0, 1, 2, 3. The default will be taken from the environment variable.
- is_train_batch_min (bool, defaults to True) — If both train & eval dataloaders are specified, this will decide the train_batch_size.
- offload_optimizer_device (str, defaults to None) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stages 2 and 3.
- offload_param_device (str, defaults to None) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stage 3.
- offload_optimizer_nvme_path (str, defaults to None) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3.
- offload_param_nvme_path (str, defaults to None) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3.
- zero3_init_flag (bool, defaults to None) — Flag to indicate whether to enable deepspeed.zero.Init for constructing massive models. Only applicable with ZeRO Stage-3.
- zero3_save_16bit_model (bool, defaults to None) — Flag to indicate whether to save the 16-bit model. Only applicable with ZeRO Stage-3.
- transformer_moe_cls_names (str, defaults to None) — Comma-separated list of Transformers MoE layer class names (case-sensitive). For example, MixtralSparseMoeBlock, Qwen2MoeSparseMoeBlock, JetMoEAttention, JetMoEBlock, etc.
- enable_msamp (bool, defaults to None) — Flag to indicate whether to enable the MS-AMP backend for FP8 training.
- msamp_opt_level (Optional[Literal["O1", "O2"]], defaults to None) — Optimization level for MS-AMP (defaults to "O1"). Only applicable if enable_msamp is True. Should be one of ["O1", "O2"].
This plugin is used to integrate DeepSpeed.
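A minimal sketch of constructing a plugin from keyword arguments (when no hf_ds_config file is given) and handing it to the Accelerator; all values shown are illustrative:

```python
from accelerate import Accelerator, DeepSpeedPlugin

plugin = DeepSpeedPlugin(
    zero_stage=2,                    # ZeRO optimization stage
    gradient_accumulation_steps=2,
    gradient_clipping=1.0,
    offload_optimizer_device="cpu",  # offload optimizer states to CPU
)
accelerator = Accelerator(deepspeed_plugin=plugin, mixed_precision="fp16")
```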
deepspeed_config_process
< source >( prefix = '' mismatches = None config = None must_match = True **kwargs )
Process the DeepSpeed config with the values from the kwargs.
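A hedged sketch of how this can be called, assuming a ds_config.json whose entries use "auto" placeholders; nested config entries are addressed with dotted keys, which is why the kwargs are passed via dict unpacking:

```python
from accelerate import DeepSpeedPlugin

plugin = DeepSpeedPlugin(hf_ds_config="ds_config.json")

# Fill "auto" placeholders in the loaded config with concrete values.
# With must_match=True, a ValueError is raised if an explicit config
# value conflicts with one of the kwargs.
plugin.deepspeed_config_process(
    **{
        "train_micro_batch_size_per_gpu": 8,
        "gradient_accumulation_steps": 2,
        "zero_optimization.stage": 2,  # dotted key for a nested entry
    }
)
```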
select

< source >( _from_accelerator_state: bool = False )

Sets the HfDeepSpeedWeakref to use the current deepspeed plugin configuration.
class accelerate.utils.DummyScheduler
< source >( optimizer total_num_steps = None warmup_num_steps = 0 lr_scheduler_callable = None **kwargs )
Parameters
- optimizer (torch.optim.optimizer.Optimizer) — The optimizer to wrap.
- total_num_steps (int, optional) — Total number of steps.
- warmup_num_steps (int, optional) — Number of steps for warmup.
- lr_scheduler_callable (callable, optional) — A callable function that creates an LR scheduler. It accepts only one argument, optimizer.
- **kwargs (additional keyword arguments, optional) — Other arguments.
Dummy scheduler that acts as a placeholder; it is primarily used to follow the conventional training loop when the scheduler config is specified in the DeepSpeed config file.
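As a sketch of the intended pattern, assuming a ds_config.json that defines both "optimizer" and "scheduler" sections: pass placeholder objects through prepare() so Accelerate can substitute the ones built from the config file. The model, dataloader, and step counts below are stand-ins:

```python
import torch
from accelerate import Accelerator, DeepSpeedPlugin
from accelerate.utils import DummyOptim, DummyScheduler

accelerator = Accelerator(
    deepspeed_plugin=DeepSpeedPlugin(hf_ds_config="ds_config.json")
)
model = torch.nn.Linear(8, 2)
dataloader = torch.utils.data.DataLoader(torch.randn(64, 8), batch_size=8)

# Placeholders: prepare() replaces them with the optimizer and scheduler
# built from the DeepSpeed config file.
optimizer = DummyOptim(model.parameters(), lr=3e-4)
scheduler = DummyScheduler(optimizer, total_num_steps=1000, warmup_num_steps=100)

model, optimizer, dataloader, scheduler = accelerator.prepare(
    model, optimizer, dataloader, scheduler
)
```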
DeepSpeedEngineWrapper
class accelerate.utils.DeepSpeedEngineWrapper
< source >( engine )
Internal wrapper for deepspeed.runtime.engine.DeepSpeedEngine. This is used to follow the conventional training loop.
DeepSpeedOptimizerWrapper
class accelerate.utils.DeepSpeedOptimizerWrapper
< source >( optimizer )
Internal wrapper around a deepspeed optimizer.
DeepSpeedSchedulerWrapper
class accelerate.utils.DeepSpeedSchedulerWrapper
< source >( scheduler optimizers )
Internal wrapper around a deepspeed scheduler.
DummyOptim
class accelerate.utils.DummyOptim
< source >( params lr = 0.001 weight_decay = 0 **kwargs )
Dummy optimizer that presents model parameters or param groups; it is primarily used to follow the conventional training loop when the optimizer config is specified in the DeepSpeed config file.
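A minimal construction sketch (the model is a stand-in); see the DummyScheduler example above for the full prepare() pattern:

```python
import torch
from accelerate.utils import DummyOptim

model = torch.nn.Linear(8, 2)

# Placeholder: the real optimizer comes from the "optimizer" section of the
# DeepSpeed config file once accelerator.prepare() runs; lr here is only a
# default and is overridden by the config.
optimizer = DummyOptim(model.parameters(), lr=1e-3)
```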