utils

torch_to_nnef.utils

ReactiveNamedItemDict

ReactiveNamedItemDict(items: T.Optional[T.Iterable[NamedItem]] = None)

Ordered dict data structure for named items.

Ensures that no two items are ever inserted with the same 'name' attribute, keeps name updates fast, and adds some extra collision protections.

Warning! Only intended for NamedItem subclasses.

Exposes a 'list'-like interface (with limited index access).

Example

>>> from dataclasses import dataclass
>>> @dataclass
... class DummyItem(NamedItem):
...     name: str
>>> namespace = ReactiveNamedItemDict()
>>> item = DummyItem("hello")
>>> for i in "abc":
...     namespace.append(DummyItem(i))
>>> namespace.append(item)
>>> try:
...     namespace.append(DummyItem("a"))
...     assert False
... except T2NErrorDataNodeValue:
...     pass
>>> item.name = "world"
>>> namespace.append(DummyItem("hello"))

append
append(item: NamedItem)

Append item to ordered set.

WARNING: It is crucial that every added item goes through this function, as it sets the hook that listens for name changes.

SemanticVersion

SemanticVersion(**kwargs)

Helper to check whether a version is higher than another.

Attributes:

Name  Type  Description
TAGS        Each version level; ordering is done from left to right (should not be modified in most cases).

Example

>>> version = SemanticVersion.from_str("1.2.13")
>>> "1.2.12" < version < "1.2.14"
True
>>> "1.3.12" < version
False
>>> version == "1.2.13"
True

Init.

(Parameter names depend on TAGS; the defaults are:)

Name   Type  Description  Default
major  int                required
minor  int                required
patch  int                required

cache

cache(func: T.Callable[..., C]) -> C

LRU cache helper that avoids pylint complaints.
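
A minimal usage sketch, assuming `cache` is applied as a decorator in the same way as functools.lru_cache (the decorated function below is hypothetical):

from torch_to_nnef.utils import cache

@cache
def char_sum(key: str) -> int:
    # computed once per distinct key, served from the cache afterwards
    return sum(ord(c) for c in key)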

cd

cd(path)

Context manager for changing the current working directory.
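
A short usage sketch (the target directory is just an illustration):

import os
from torch_to_nnef.utils import cd

print(os.getcwd())      # original working directory
with cd("/tmp"):
    print(os.getcwd())  # now inside /tmp
# the original working directory is restored on exit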

dedup_list

dedup_list(lst: T.List[T.Any]) -> T.List[T.Any]

Remove duplicates from list while preserving order.
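
For example, assuming the first occurrence of each element is the one kept:

from torch_to_nnef.utils import dedup_list

dedup_list([3, 1, 3, 2, 1])  # -> [3, 1, 2]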

flatten_dict

flatten_dict(d: MutableMapping, parent_key: str = '', sep: str = '.') -> MutableMapping

Flatten a nested dictionary.
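
For example, with the default sep='.' (the input dict is illustrative):

from torch_to_nnef.utils import flatten_dict

flatten_dict({"a": {"b": 1, "c": {"d": 2}}})
# -> {"a.b": 1, "a.c.d": 2}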

flatten_dict_tuple_or_list

flatten_dict_tuple_or_list(obj: T.Any, collected_types: T.Optional[T.List[T.Type]] = None, collected_idxes: T.Optional[T.List[int]] = None, current_idx: int = 0) -> T.Tuple[T.Tuple[T.Tuple[T.Type, ...], T.Tuple[T.Union[int, str], ...], T.Any], ...]

Flatten dict/list/tuple recursively, return types, indexes and values.

Flattening happens in depth-first search order.

Parameters:

Name             Type                  Description                                            Default
obj              Any                   dict/tuple/list (arbitrarily deep) or anything else;  required
                                       contains N leaf elements that are not dict/list/tuple
collected_types  Optional[List[Type]]  do not set                                             None
collected_idxes  Optional[List[int]]   do not set                                             None
current_idx      int                   do not set                                             0

Returns:

Type                                                                    Description
Tuple[Tuple[Tuple[Type, ...], Tuple[Union[int, str], ...], Any], ...]  a tuple of N tuples, each containing: a tuple of types, a tuple of indexes, and the leaf value

Example

If the initial obj = [{"a": 1, "b": 3}], the output is:

(
    ((list, dict), (0, "a"), 1),
    ((list, dict), (0, "b"), 3),
)

fullname

fullname(o) -> str

Full class name with module path from an object.
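
A small sketch of the expected output, assuming the usual module-path plus class-name form:

import torch.nn as nn
from torch_to_nnef.utils import fullname

fullname(nn.Linear(2, 2))  # -> "torch.nn.modules.linear.Linear"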

init_empty_weights

init_empty_weights(include_buffers: T.Optional[bool] = None) -> T.Iterator[None]

A context manager under which models are initialized on the meta device.

Borrowed from accelerate

A context manager under which models are initialized with all parameters on the meta device, therefore creating an empty model. Useful when just initializing the model would blow the available RAM.

Parameters:

Name             Type            Description                                                      Default
include_buffers  Optional[bool]  Whether or not to also put all buffers on the meta device while  None
                                 initializing.

Returns:

Type            Description
Iterator[None]  (None) just a context manager

Example:

import torch.nn as nn
from torch_to_nnef.utils import init_empty_weights

# Initialize a model with 100 billion parameters in no time and
# without using any RAM.
with init_empty_weights():
    tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])

Any model created under this context manager has no weights. As such you can't do something like model.to(some_device) with it. To load weights inside an empty model, see [load_checkpoint_and_dispatch]. Make sure to overwrite the default device_map param for [load_checkpoint_and_dispatch], otherwise dispatch is not called.

init_on_device

init_on_device(device: torch.device, include_buffers: T.Optional[bool] = None) -> T.Iterator[None]

Context manager under which models are initialized on the specified device.

Borrowed from accelerate

Parameters:

Name             Type            Description                                                           Default
device           device          Device to initialize all parameters on.                               required
include_buffers  Optional[bool]  Whether or not to also put all buffers on the specified device while  None
                                 initializing.

Example:

import torch
import torch.nn as nn
from accelerate import init_on_device

with init_on_device(device=torch.device("cuda")):
    tst = nn.Linear(100, 100)  # on `cuda` device

torch_version

torch_version() -> SemanticVersion

Semantic version for torch.
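
A short usage sketch, relying on the string comparisons documented for SemanticVersion above:

from torch_to_nnef.utils import torch_version

version = torch_version()
if "1.13.0" < version:
    # the installed torch is newer than 1.13.0
    ...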