
torch_to_nnef.compress

Compression module, mostly provided for demonstration purposes.

It exemplifies how to implement quantization.

dynamic_load_registry

dynamic_load_registry(compression_registry_full_path: str)

Load a registry dynamically from its module path plus the registry dict name.
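
A minimal usage sketch, assuming the registry is a dict named COMPRESSION_REGISTRY defined in a module my_pkg.compression (both names are hypothetical) and that the full path is the dotted module path followed by the dict name:

from torch_to_nnef.compress import dynamic_load_registry

# load the dict `COMPRESSION_REGISTRY` defined in the module `my_pkg.compression`
registry = dynamic_load_registry("my_pkg.compression.COMPRESSION_REGISTRY")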

offloaded_tensor_qtensor

offloaded_tensor_qtensor(q_fn, tensor: torch.Tensor, suffix_name: str) -> torch.Tensor

Keep the resulting QTensor offloaded if the original tensor is offloaded.
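
A hedged sketch of how it might be called: q_fn is expected to turn a plain torch.Tensor into its quantized QTensor counterpart, and suffix_name tags the offloaded copy; the identity stand-in below is only there to keep the example self-contained.

import torch
from torch_to_nnef.compress import offloaded_tensor_qtensor

def my_q_fn(t: torch.Tensor) -> torch.Tensor:
    # placeholder: a real q_fn would build and return a QTensor from `t`
    return t

weight = torch.randn(64, 64)  # may be an offloaded tensor in practice
q_weight = offloaded_tensor_qtensor(my_q_fn, weight, suffix_name="weight")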

quantize_weights_min_max_Q4_0

quantize_weights_min_max_Q4_0(model: nn.Module, **kwargs)

Example quantization function: quantize a model's weights to Q4_0.
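
A hedged usage sketch on a small model; extra keyword arguments are omitted since their names are not documented on this page, and in-place mutation of the model is an assumption:

import torch.nn as nn
from torch_to_nnef.compress import quantize_weights_min_max_Q4_0

model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10))
# quantize the model's weights to Q4_0 with min/max scaling (assumed in place)
quantize_weights_min_max_Q4_0(model)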