export

torch_to_nnef.nemo_tract.export

build_custom_subnet_tract_properties

build_custom_subnet_tract_properties(subnet_name, subnet, *, nemo: InjectedNemoModule = INJECTED)

Build custom tract properties for a NeMo subnet.

build_preprocessor_export_params

build_preprocessor_export_params(asr_model, inference_target, *, nemo: InjectedNemoModule = INJECTED, axis_registry=None) -> T.Iterator[ExportParameters]

Build export parameters for the preprocessor of a NeMo ASR model.

export_nemo_asr_model

export_nemo_asr_model(asr_model, inference_target, export_dir: Path, compress_registry: str, compress_method: T.Optional[str] = None, skip_preprocessor: bool = False, split_joint_decoder: bool = False, extra_cfg: T.Optional[T.Dict[str, T.Any]] = None, float_dtype: T.Optional[torch.dtype] = None, remove_unused_inputs: bool = True, dump_checked_io: bool = False, only_subnets: T.Optional[T.Collection[str]] = None, *, omegaconf: InjectedOmegaConfModule = INJECTED, axis_registry=None, **kwargs)

Export a generic NeMo ASR model to NNEF format using TractNNEF.
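A hedged sketch of how the signature above might be called. Argument values here are placeholders, not defaults from the library (`"my_registry"` in particular is a hypothetical `compress_registry` name), and the import is deferred so the sketch stays loadable without `torch_to_nnef` installed:

```python
from pathlib import Path


def collect_export_kwargs(export_dir: str) -> dict:
    # Mirrors keyword arguments from the documented signature above;
    # the compress_registry value is a placeholder, not a known name.
    return {
        "export_dir": Path(export_dir),
        "compress_registry": "my_registry",
        "compress_method": None,
        "skip_preprocessor": False,
        "split_joint_decoder": False,
        "remove_unused_inputs": True,
        "dump_checked_io": False,
    }


def run_export(asr_model, inference_target, export_dir: str):
    # Deferred import: torch_to_nnef is only needed when actually exporting.
    from torch_to_nnef.nemo_tract.export import export_nemo_asr_model

    return export_nemo_asr_model(
        asr_model, inference_target, **collect_export_kwargs(export_dir)
    )
```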

export_nemo_from_model

export_nemo_from_model(*, model, target: InferenceTarget, export_dir: Path, axis_reg: T.Optional[AxisSymbolRegistry] = None, cfg: NemoExportConfig) -> None

Export a NeMo ASR model using a structured configuration.

This is the public programmatic API. Handles dtype preparation (float16 / mixed) internally so callers don't need to apply model.half() or WrapPreprocessorCast themselves.

exportable_nemo_net

exportable_nemo_net(output_name, model, input_example, use_dynamo=False, batch_size: int = 1, float_dtype: T.Optional[torch.dtype] = None, *, nemo: InjectedNemoModule = INJECTED, pytorch_lightning: InjectedLightningModule = INJECTED)

Context manager that follows the export path of NeMo models.

It prepares the model by switching it to eval mode, disabling type checks, and wrapping the forward method for tracing by PyTorch export tools.

Mostly borrowed from NeMo codebase logic (with more modularity); see: nemo.core.classes.Exportable._export

Yields

ExportContext with input_example, output_example and dynamic_axes ready for export.

iter_decoder_joint_subnets

iter_decoder_joint_subnets(subnet, input_example, ctx_dynamic_axes, *, batch_size: int, remove_unused_inputs: bool, split_joint_decoder: bool)

Yield export tuples for the decoder_joint case.

  • If split_joint_decoder is True: yields separate decoder and joint entries with their own input_examples and dynamic axes.
  • Otherwise: optionally remove unused inputs, fix batch size on the input example, validate arity, and yield a single decoder_joint entry using the context-provided dynamic axes.

iter_export_params_for_generic_nemo_asr_model

iter_export_params_for_generic_nemo_asr_model(asr_model, inference_target, skip_preprocessor: bool = False, split_joint_decoder: bool = False, remove_unused_inputs: bool = True, float_dtype: T.Optional[torch.dtype] = None, only_subnets: T.Optional[T.Collection[str]] = None, axis_registry=None) -> T.Iterator[ExportParameters]

Iterator over export parameters for a generic NeMo ASR model.

iter_nemo_model_subnets

iter_nemo_model_subnets(model, input_example=None, float_dtype: T.Optional[torch.dtype] = None, split_joint_decoder: bool = False, remove_unused_inputs: bool = True, apply_sequential_examples: bool = False, batch_size: int = 3, only_subnets: T.Optional[T.Collection[str]] = None)

Iterator over exportable subnets of a NeMo model.