utils
torch_to_nnef.utils
ReactiveNamedItemDict
Named items ordered Dict data structure.
Ensures that no two items are inserted with the same 'name' attribute, supports fast name updates, and adds some collision protections.
Warning: only intended for NamedItem subclasses.
Exposes a list-like interface (with limited index access).
Example
```python
>>> from dataclasses import dataclass
>>> @dataclass
... class DummyItem(NamedItem):
...     name: str
>>> namespace = ReactiveNamedItemDict()
>>> item = DummyItem("hello")
>>> for i in "abc":
...     namespace.append(DummyItem(i))
>>> namespace.append(item)
>>> try:
...     namespace.append(DummyItem("a"))
...     assert False
... except T2NErrorDataNodeValue:
...     pass
>>> item.name = "world"
>>> namespace.append(DummyItem("hello"))
```
SemanticVersion
SemVer 2.0 compatible version class.
Supports
- 1.2.3
- 1.2.3-alpha
- 1.2.3-alpha.1
- 1.2.3-rc.1
- 1.2.3+build.5 (build metadata ignored in ordering)
Allows symmetric comparison with strings
"1.2.0" < SemanticVersion.from_str("1.3.0")
Example
```python
>>> version = SemanticVersion.from_str("1.2.13")
>>> "1.2.12" < version < "1.2.14"
True
>>> "1.3.12" < version
False
>>> version == "1.2.13"
True
```
T2NExtra
Bases: str, Enum
Special extra names used in T2N framework.
check_torch_ecosystem
Check that torch, torchaudio and torchvision versions are compatible.
This is a common source of runtime errors, so we proactively check and raise a clear error message with instructions if we detect a mismatch.
(This avoids the cryptic "symbol not found" errors that can occur with mismatched versions.)
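As a hedged sketch of the idea only (not the library's actual logic), such a check can compare the installed torch/torchvision pair against a partial, illustrative table of known-compatible minor releases:

```python
# Illustrative, incomplete table of known-compatible releases
# (torch 2.0 shipped with torchvision 0.15, 2.1 with 0.16, 2.2 with 0.17).
KNOWN_COMPATIBLE = {
    ("2.0", "0.15"),
    ("2.1", "0.16"),
    ("2.2", "0.17"),
}

def core_minor(version: str) -> str:
    # Drop the local build suffix and patch part: "2.1.0+cu118" -> "2.1"
    return ".".join(version.split("+")[0].split(".")[:2])

def torch_vision_compatible(torch_version: str, vision_version: str) -> bool:
    # Return True when the pair of minor releases is known to be compatible.
    return (core_minor(torch_version), core_minor(vision_version)) in KNOWN_COMPATIBLE
```

The real check presumably inspects the installed packages directly and raises a dedicated error with upgrade instructions.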
dedup_list
Remove duplicates from list while preserving order.
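A minimal sketch of the described behavior (not necessarily the library's exact code): since Python 3.7, dict keys preserve insertion order, so repeats can be dropped while keeping first-seen order.

```python
def dedup_list(items):
    # dict.fromkeys keeps the first occurrence of each element, in order.
    return list(dict.fromkeys(items))
```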
ensure_tuple_io
Normalize inputs/outputs into a tuple.
Behavior:
- If already a tuple, return as-is.
- If a list or other finite sequence, return `tuple(value)`.
- If a single Tensor, number, bool, or mapping-like (has `items` and `__getitem__`), wrap into a 1-tuple.
- Otherwise, return `(value,)`.
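These rules can be sketched as follows. This is a hedged illustration only; the real function also recognizes torch Tensors, which are omitted here to keep the sketch self-contained.

```python
def ensure_tuple_io(value):
    # Already a tuple: pass through unchanged.
    if isinstance(value, tuple):
        return value
    # Scalars and mapping-likes (have .items and __getitem__) wrap in a 1-tuple.
    if isinstance(value, (bool, int, float)) or (
        hasattr(value, "items") and hasattr(value, "__getitem__")
    ):
        return (value,)
    # Lists and other finite sequences become tuples.
    if hasattr(value, "__len__") and not isinstance(value, str):
        return tuple(value)
    # Fallback: wrap anything else.
    return (value,)
```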
flatten_dict
Flatten a nested dictionary.
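An illustrative sketch of the idea (the library's key convention may differ): nested keys are joined into a single flat mapping, here with a "." separator.

```python
def flatten_dict(d, prefix=""):
    # Recursively walk the dict; non-dict leaves become entries keyed by
    # their full dotted path.
    flat = {}
    for key, value in d.items():
        full_key = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten_dict(value, prefix=f"{full_key}."))
        else:
            flat[full_key] = value
    return flat
```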
flatten_dict_tuple_or_list
flatten_dict_tuple_or_list(obj: T.Any, collected_types: T.Optional[T.List[T.Type]] = None, collected_idxes: T.Optional[T.List[int]] = None, current_idx: int = 0) -> T.Tuple[T.Tuple[T.Tuple[T.Type, ...], T.Tuple[T.Union[int, str], ...], T.Any], ...]
Flatten dict/list/tuple recursively, return types, indexes and values.
Flattening happens in depth-first search order.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Any | dict/tuple/list or anything else (the structure can be arbitrarily deep); contains N non-dict/list/tuple elements | required |
| collected_types | Optional[List[Type]] | do not set | None |
| collected_idxes | Optional[List[int]] | do not set | None |
| current_idx | int | do not set | 0 |
Returns:
| Type | Description |
|---|---|
| Tuple[Tuple[Tuple[Type, ...], Tuple[Union[int, str], ...], Any], ...] | tuple of N tuples, each containing a tuple of: types, indexes, and the element |
Example

If the initial obj=[{"a": 1, "b": 3}], it will output:

```python
(
    ((list, dict), (0, "a"), 1),
    ((list, dict), (0, "b"), 3),
)
```
init_empty_weights
A context manager under which models are initialized on the meta device.
Borrowed from accelerate
A context manager under which models are initialized with all parameters on the meta device, therefore creating an empty model. Useful when just initializing the model would blow the available RAM.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_buffers | Optional[bool] | Whether or not to also put all buffers on the meta device while initializing. | None |
Returns:
| Type | Description |
|---|---|
| Iterator[None] | (None) Just a context manager |
Example:
```python
import torch.nn as nn
from torch_to_nnef.utils import init_empty_weights

# Initialize a model with 100 billion parameters in no time and
# without using any RAM.
with init_empty_weights():
    tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
Any model created under this context manager has no weights.
As such you can't do something like model.to(some_device) with it.
To load weights inside an empty model, see [load_checkpoint_and_dispatch].
Make sure to overwrite the default device_map param
for [load_checkpoint_and_dispatch], otherwise dispatch is not called.
init_on_device
Context manager under which models are initialized on the specified device.
Borrowed from accelerate
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| device | device | Device to initialize all parameters on. | required |
| include_buffers | Optional[bool] | Whether or not to also put all buffers on the meta device while initializing. | None |
Example:
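The original example did not survive extraction. As a hedged sketch of how such a context manager can work (mirroring the accelerate implementation idea, not necessarily torch_to_nnef's actual code), `nn.Module.register_parameter` can be temporarily patched so every newly registered parameter is moved onto the requested device; `init_on_device_sketch` is a hypothetical stand-in name:

```python
import contextlib

import torch
import torch.nn as nn

@contextlib.contextmanager
def init_on_device_sketch(device: torch.device):
    old_register_parameter = nn.Module.register_parameter

    def register_parameter(module, name, param):
        # Register normally, then move the parameter onto the target device.
        old_register_parameter(module, name, param)
        if param is not None:
            module._parameters[name] = nn.Parameter(
                module._parameters[name].to(device),
                requires_grad=param.requires_grad,
            )

    try:
        nn.Module.register_parameter = register_parameter
        yield
    finally:
        # Always restore the original method, even on error.
        nn.Module.register_parameter = old_register_parameter

with init_on_device_sketch(torch.device("meta")):
    layer = nn.Linear(8, 8)
```

After the `with` block, `layer`'s parameters live on the meta device, so no real memory was allocated for them.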
normalize_cli_list_option
Normalize repeated/CSV CLI options into a list of unique strings.
Accepts values from argparse patterns like action="append" and also
tolerates a single string. Splits on commas, strips whitespace, removes
empty entries, and de-duplicates while preserving order. Returns None if
the input is falsy.
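A sketch matching the described behavior (not necessarily the exact code): it accepts either a list built by argparse's `action="append"` or a single string, then splits, strips, filters, and de-duplicates.

```python
def normalize_cli_list_option(value):
    # Falsy input (None, "", []) normalizes to None.
    if not value:
        return None
    # Tolerate a single string as well as a list of strings.
    if isinstance(value, str):
        value = [value]
    parts = []
    for item in value:
        parts.extend(chunk.strip() for chunk in item.split(","))
    # Drop empty entries and de-duplicate while preserving order.
    return list(dict.fromkeys(p for p in parts if p))
```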
require_extra
Lazily import an optional dependency and raise an error if it is missing.
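A hedged sketch of the pattern; the real helper likely raises a T2N-specific error naming the extra to install, and the `pip install torch_to_nnef[...]` hint below is an assumption for illustration.

```python
import importlib

def require_extra(module_name: str, extra_name: str):
    # Import only when actually needed, so the base install stays light.
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"Optional dependency '{module_name}' is missing; "
            f"install it with `pip install torch_to_nnef[{extra_name}]`."
        ) from exc
```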