WarmupLRScheduler

class lightning_ir.lightning_utils.lr_schedulers.WarmupLRScheduler(optimizer: Optimizer, num_warmup_steps: int, *args, verbose: bool = False, **kwargs)

Bases: LambdaWarmupScheduler, LambdaLR

__init__(optimizer: Optimizer, num_warmup_steps: int, *args, verbose: bool = False, **kwargs) → None
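A minimal usage sketch, assuming a standard PyTorch training setup (the model, optimizer, and warmup length below are illustrative; the exact warmup shape is determined by the scheduler's value_lambda):

    import torch
    from lightning_ir.lightning_utils.lr_schedulers import WarmupLRScheduler

    model = torch.nn.Linear(16, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    # ramp the learning rate up over the first 1000 optimizer steps
    scheduler = WarmupLRScheduler(optimizer, num_warmup_steps=1000)

    for _ in range(3):
        optimizer.step()                 # update parameters first ...
        scheduler.step()                 # ... then advance the schedule
        print(scheduler.get_last_lr())   # one value per parameter group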

Methods

__init__(optimizer, num_warmup_steps, *args)

check_delay(current_step)

check_warmup(current_step)

get_last_lr()
    Return the last learning rate computed by the current scheduler.

get_lr()

load_state_dict(state_dict)
    Loads the scheduler's state.

print_lr(is_verbose, group, lr[, epoch])
    Display the current learning rate.

state_dict()
    Returns the state of the scheduler as a dict.

step([epoch])

value_lambda(current_step)

get_last_lr() → List[float]

Return the last learning rate computed by the current scheduler.

load_state_dict(state_dict)

Loads the scheduler's state.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.

Parameters:

state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
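A checkpointing sketch that follows this advice, continuing the usage example above (the file name and checkpoint layout are illustrative):

    checkpoint = {
        "optimizer": optimizer.state_dict(),
        "scheduler": scheduler.state_dict(),
    }
    torch.save(checkpoint, "checkpoint.pt")

    # later: restore both before resuming training
    checkpoint = torch.load("checkpoint.pt")
    optimizer.load_state_dict(checkpoint["optimizer"])
    scheduler.load_state_dict(checkpoint["scheduler"])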

print_lr(is_verbose: bool, group: Dict[str, Any], lr: float, epoch: int | None = None)

Display the current learning rate.

Deprecated since version 2.4: print_lr() is deprecated. Please use get_last_lr() to access the learning rate.
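A migration sketch: instead of relying on the scheduler to print, read the value with get_last_lr() and log it yourself:

    # before (deprecated): scheduler.print_lr(...)
    # after: query the last learning rate directly
    for i, lr in enumerate(scheduler.get_last_lr()):
        print(f"parameter group {i}: lr={lr}")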

state_dict()

Returns the state of the scheduler as a dict.

It contains an entry for every variable in self.__dict__ which is not the optimizer. The learning rate lambda functions will only be saved if they are callable objects and not if they are functions or lambdas.

When saving or loading the scheduler, please make sure to also save or load the state of the optimizer.
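To illustrate the note about lambdas, a sketch of the distinction (the LinearWarmup class is hypothetical, not part of lightning_ir): a callable object's attributes can be captured by state_dict(), while a bare function or lambda is skipped:

    class LinearWarmup:
        # callable object: its __dict__ can be saved by state_dict()
        def __init__(self, num_warmup_steps: int):
            self.num_warmup_steps = num_warmup_steps

        def __call__(self, current_step: int) -> float:
            return min(1.0, current_step / max(1, self.num_warmup_steps))

    # an equivalent plain lambda would NOT be saved by state_dict():
    # lr_lambda = lambda step: min(1.0, step / 1000)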