export
torch_to_nnef.export
export_model_to_nnef
export_model_to_nnef(model: torch.nn.Module, args, file_path_export: T.Union[Path, str], inference_target: InferenceTarget, input_names: T.Optional[T.List[str]] = None, output_names: T.Optional[T.List[str]] = None, compression_level: T.Optional[int] = 0, log_level: int = log.INFO, nnef_variable_naming_scheme: VariableNamingScheme = DEFAULT_VARNAME_SCHEME, check_io_names_qte_match: bool = True, debug_bundle_path: T.Optional[Path] = None, custom_extensions: T.Optional[T.List[str]] = None, allow_same_io_names: bool = False) -> Path
Main entry point of this library.
Export any torch.nn.Module to an NNEF file format archive.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | a `nn.Module` that has a `forward` method accepting `args` | required |
| `args` | | a flat ordered list of tensors, one for each `forward` input of `model` | required |
| `file_path_export` | `Union[Path, str]` | target path for the exported model; a `".../model.nnef"` base path creates a directory when `compression_level` is `None`, and a `".../model.nnef.tgz"` compressed archive otherwise | required |
| `inference_target` | `InferenceTarget` | can be `TractNNEF` or `KhronosNNEF` | required |
| `input_names` | `Optional[List[str]]` | optional list of names for `args`; replaces the input variable names traced from the graph (if set, it must have the same size as the number of args) | `None` |
| `output_names` | `Optional[List[str]]` | optional list of names for the outputs of `model` (if set, it must have the same size as the number of outputs) | `None` |
| `compression_level` | `Optional[int]` | if `None`, writes an uncompressed NNEF directory; otherwise the gzip compression level applied to the exported `.tgz` archive | `0` |
| `log_level` | `int` | logging level used by this library during export | `INFO` |
| `nnef_variable_naming_scheme` | `VariableNamingScheme` | NNEF variable naming scheme; possible choices are `"raw"` (variable names taken directly from the traced graph `debugName`), `"natural_verbose"` (tries to provide naming consistent with the exported `nn.Module`), `"natural_verbose_camel"` (same, but with a more concise camelCase pattern) and `"numeric"` (as concise as possible); see the sketch after this table | `DEFAULT_VARNAME_SCHEME` |
| `check_io_names_qte_match` | `bool` | during tracing of the torch graph, one or more provided inputs may be dropped if they do not contribute to the outputs; while this flag is `True`, we ensure that the input and output quantities remain consistent with the numbers in `input_names` and `output_names` | `True` |
| `debug_bundle_path` | `Optional[Path]` | if set, creates an archive bundle with all the information needed to ease debugging | `None` |
| `custom_extensions` | `Optional[List[str]]` | adds a set of extensions as defined in the [NNEF specification](https://registry.khronos.org/NNEF/specs/1.0/nnef-1.0.5.html); useful for specific extensions such as `'extension tract_assert S >= 0'`, where such assertions add constraints on dynamic shapes that are not expressed in the traced graph (for example, a maximum number of tokens for an LLM) | `None` |
| `allow_same_io_names` | `bool` | by default, input and output names must be different to avoid a graph simplification that would silently merge those tensors; set this flag to `True` if you really want identical input and output names (some libraries like 'nvidia/nemo' use this pattern; note it only makes sense for a no-op) | `False` |
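As a quick illustration of the naming schemes above, here is a minimal sketch (illustrative only; it assumes the scheme can be passed as one of the string values listed above, and the model and names `tiny`, `"x"`, `"y"` are made up for the example):

>>> import tempfile
>>> import torch
>>> from torch import nn
>>> tiny = nn.Linear(2, 2)
>>> _ = export_model_to_nnef(
...     tiny,
...     torch.rand(1, 2),
...     tempfile.mktemp(suffix=".nnef.tgz"),
...     TractNNEF.latest(),
...     input_names=["x"],
...     output_names=["y"],
...     nnef_variable_naming_scheme="natural_verbose",
... )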
Returns:
| Name | Type | Description |
|---|---|---|
| `Path` | `Path` | the path to the exported artifact: a directory if `compression_level` is `None`, a `.tgz` archive otherwise |
Examples:
For example, this function can be used to export a simple perceptron model:
>>> import os
>>> import tarfile
>>> import tempfile
>>> import torch
>>> from torch import nn
>>> mod = nn.Sequential(nn.Linear(1, 5), nn.ReLU())
>>> export_path = tempfile.mktemp(suffix=".nnef.tgz")
>>> inference_target = TractNNEF.latest()
>>> _ = export_model_to_nnef(
... mod,
... torch.rand(3, 1),
... export_path,
... inference_target,
... compression_level=0,
... input_names=["inp"],
... output_names=["out"]
... )
>>> os.chdir(export_path.rsplit("/", maxsplit=1)[0])
>>> tarfile.open(export_path).extract("graph.nnef")
>>> "graph network(inp) -> (out)" in open("graph.nnef").read()
True
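The same export can also produce a plain directory instead of an archive; a minimal follow-up sketch, reusing `mod` and `inference_target` from above and assuming (per the parameter description) that `compression_level=None` writes an uncompressed NNEF directory:
>>> dir_path = tempfile.mktemp(suffix=".nnef")
>>> _ = export_model_to_nnef(
...     mod,
...     torch.rand(3, 1),
...     dir_path,
...     inference_target,
...     compression_level=None,
...     input_names=["inp"],
...     output_names=["out"],
... )
>>> os.path.isdir(dir_path)
True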
export_tensors_from_disk_to_nnef
export_tensors_from_disk_to_nnef(store_filepath: T.Union[Path, str], output_dir: T.Union[Path, str], filter_key: T.Optional[T.Callable[[str], bool]] = None, fn_check_found_tensors: T.Optional[T.Callable[[T.Dict[str, _Tensor]], bool]] = None, map_location: T.Union[str, torch.device] = 'cpu') -> T.Dict[str, _Tensor]
Export torch.Tensors from any state-dict or safetensors file to NNEF .dat files.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `store_filepath` | `Union[Path, str]` | the filepath holding the `.safetensors`, `.pt` or `.bin` file containing the state dict | required |
| `output_dir` | `Union[Path, str]` | directory in which to dump the NNEF tensor `.dat` files | required |
| `filter_key` | `Optional[Callable[[str], bool]]` | an optional function to filter the specific keys to be exported | `None` |
| `fn_check_found_tensors` | `Optional[Callable[[Dict[str, _Tensor]], bool]]` | post-check function to ensure all requested tensors have effectively been dumped | `None` |
| `map_location` | `Union[str, device]` | device mapping used by `torch.load` for `.pt`/`.pth`/`.bin` files | `'cpu'` |
Returns:
| Type | Description |
|---|---|
| `Dict[str, _Tensor]` | a dict with tensor names as keys and `torch.Tensor` values, identical to the tensors dumped in `output_dir` |
Examples:
Simple filtered example
>>> import tempfile
>>> import torch
>>> from torch import nn
>>> class Mod(nn.Module):
... def __init__(self):
... super().__init__()
... self.a = nn.Linear(1, 5)
... self.b = nn.Linear(5, 1)
...
... def forward(self, x):
... return self.b(self.a(x))
>>> mod = Mod()
>>> pt_path = tempfile.mktemp(suffix=".pt")
>>> nnef_dir = tempfile.mkdtemp(suffix="_nnef")
>>> torch.save(mod.state_dict(), pt_path)
>>> def check(ts):
... assert all(_.startswith("a.") for _ in ts)
>>> exported_tensors = export_tensors_from_disk_to_nnef(
... pt_path,
... nnef_dir,
... lambda x: x.startswith("a."),
... check
... )
>>> list(exported_tensors.keys())
['a.weight', 'a.bias']
export_tensors_to_nnef
export_tensors_to_nnef(name_to_torch_tensors: T.Dict[str, _Tensor], output_dir: Path) -> T.Dict[str, _Tensor]
Export any dict of named torch.Tensors to NNEF .dat files.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `name_to_torch_tensors` | `Dict[str, _Tensor]` | a map from name (used to define the `.dat` filename) to tensor value (values can also be special `torch_to_nnef` tensors) | required |
| `output_dir` | `Path` | directory in which to dump the NNEF tensor `.dat` files | required |
Returns:
| Type | Description |
|---|---|
| `Dict[str, _Tensor]` | a dict with tensor names as keys and `torch.Tensor` values, identical to `name_to_torch_tensors` |
Examples:
Simple example
>>> import tempfile
>>> from torch import nn
>>> class Mod(nn.Module):
... def __init__(self):
... super().__init__()
... self.a = nn.Linear(1, 5)
... self.b = nn.Linear(5, 1)
...
... def forward(self, x):
... return self.b(self.a(x))
>>> mod = Mod()
>>> nnef_dir = tempfile.mkdtemp(suffix="_nnef")
>>> exported_tensors = export_tensors_to_nnef(
... {k: v for k, v in mod.named_parameters() if k.startswith("b.")},
... nnef_dir,
... )
>>> list(exported_tensors.keys())
['b.weight', 'b.bias']
iter_torch_tensors_from_disk
iter_torch_tensors_from_disk(store_filepath: Path, filter_key: T.Optional[T.Callable[[str], bool]] = None, map_location: T.Union[str, torch.device] = 'cpu') -> T.Iterator[T.Tuple[str, _Tensor]]
Iterate over torch tensors stored on disk (.safetensors, .pt, .pth, .bin).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `store_filepath` | `Path` | path to the container file holding PyTorch tensors (`.pt`, `.pth`, `.bin` or `.safetensors`) | required |
| `filter_key` | `Optional[Callable[[str], bool]]` | if set, this function filters the stored tensors by name | `None` |
| `map_location` | `Union[str, device]` | device mapping used by `torch.load` for `.pt`/`.pth`/`.bin` files | `'cpu'` |
Yields:
| Type | Description |
|---|---|
str
|
provide each tensor that are validated by filter within store filepath |
_Tensor
|
one at a time as tuple with name first then the torch.Tensor itself |
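Examples:
A simple iteration sketch (illustrative; it assumes tensors are yielded in state-dict order):
>>> import tempfile
>>> import torch
>>> from torch import nn
>>> lin = nn.Linear(1, 5)
>>> pt_path = tempfile.mktemp(suffix=".pt")
>>> torch.save(lin.state_dict(), pt_path)
>>> [name for name, _ in iter_torch_tensors_from_disk(pt_path)]
['weight', 'bias']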