bitorch.quantizations.progressive_sign.ProgressiveSign

class bitorch.quantizations.progressive_sign.ProgressiveSign(use_global_scaling: bool = True, initial_scale: Optional[float] = None, custom_transform: Optional[Callable[[float], float]] = None, alpha: Optional[Union[int, float]] = None, beta: Optional[Union[int, float]] = None)[source]

Module for applying a progressive sign function with an STE (straight-through estimator) during training.

During validation, a regular sign function is used instead, which can lead to a significant accuracy difference during the first epochs. With a temperature of one, this function is essentially equal to a regular sign function.
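
A minimal usage sketch, assuming bitorch and PyTorch are installed and that the module behaves as documented on this page; everything beyond the documented constructor and quantize method is illustrative:

   import torch
   from bitorch.quantizations.progressive_sign import ProgressiveSign

   # Default construction: the scale is taken from the global scaling variable in the config.
   progressive_sign = ProgressiveSign()

   weights = torch.randn(4, 4)

   # During training the output is softened according to the current temperature;
   # during validation a regular sign function is used (see the class description above).
   binarized = progressive_sign.quantize(weights)
   print(binarized.shape)  # same shape as the input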

Methods

  • __init__ – Initialize the progressive sign module (can be used for progressive weight binarization).

  • default_transform – Transform the given scale into the temperature of the progressive sign function with the default function.

  • quantize – Forward the tensor through the sign function.

  • transform – Transform the given scale into a steadily increasing temperature, rising quickly at the start and much more slowly towards the end.

Attributes

  • bit_width

  • current_scale – Return the current scale of this Progressive Sign layer.

  • name

  • scale

  • global_scaling

  • alpha

  • beta

__init__(use_global_scaling: bool = True, initial_scale: Optional[float] = None, custom_transform: Optional[Callable[[float], float]] = None, alpha: Optional[Union[int, float]] = None, beta: Optional[Union[int, float]] = None) None[source]

Initialize the progressive sign module (can be used for progressive weight binarization).

If use_global_scaling is set to False, the scale of this module must be set manually. Otherwise, the value can be set for all progressive sign modules in the config.

Parameters:
  • use_global_scaling – whether to use the global scaling variable stored in the config

  • initial_scale – if not using global scaling you can set an initial scale

  • custom_transform – to use a custom transform function from scale to temperature, add it here

  • alpha – base of the exponential in the default transform function

  • beta – (negative) factor applied to the scale in the exponent of the default transform function
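
A sketch of how these options could be combined; the concrete values (an initial scale of 0.1, an identity transform, alpha=4, beta=10) are arbitrary choices for illustration, not library defaults:

   from bitorch.quantizations.progressive_sign import ProgressiveSign

   # Manage the scale per module instead of through the global config.
   local_sign = ProgressiveSign(use_global_scaling=False, initial_scale=0.1)

   # Replace the default scale-to-temperature mapping with a custom one
   # (here simply the identity, purely for illustration).
   custom_sign = ProgressiveSign(
       use_global_scaling=False,
       initial_scale=0.1,
       custom_transform=lambda scale: scale,
   )

   # Keep the default transform but tune its parameters.
   tuned_sign = ProgressiveSign(alpha=4, beta=10)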

property current_scale: float

Return the current scale of this Progressive Sign layer.

static default_transform(scale: float, alpha: Optional[Union[int, float]] = None, beta: Optional[Union[int, float]] = None) float[source]

Transform the given scale into the temperature of the progressive sign function with the default function.

The formula is as follows: 1 - (alpha ** (-beta * scale))

Parameters:
  • scale – the current scale

  • alpha – base of default exponential function

  • beta – (negative) factor of scale exponent
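
A standalone numeric sketch of this formula; alpha = 4 and beta = 10 are assumed values chosen only to illustrate the shape of the curve, not the library defaults:

   # Re-implements the documented formula 1 - (alpha ** (-beta * scale)) for inspection.
   def default_transform(scale: float, alpha: float = 4.0, beta: float = 10.0) -> float:
       return 1 - alpha ** (-beta * scale)

   for scale in (0.0, 0.1, 0.5, 1.0):
       print(f"scale={scale:.1f} -> temperature={default_transform(scale):.4f}")

   # scale=0.0 -> temperature=0.0000
   # scale=0.1 -> temperature=0.7500
   # scale=0.5 -> temperature=0.9990
   # scale=1.0 -> temperature=1.0000

The temperature rises quickly for small scales and saturates towards one, which matches the behaviour of transform described below.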

quantize(x: Tensor) Tensor[source]

Forward the tensor through the sign function.

Parameters:

x (torch.Tensor) – tensor to be forwarded.

Returns:

sign of tensor x

Return type:

torch.Tensor

transform(scale: float) float[source]

Transform the given scale into a steadily increasing temperature, rising quickly at the start and much more slowly towards the end.

Parameters:

scale – the current scale