utils
torch_to_nnef.utils
ReactiveNamedItemDict
Named items ordered Dict data structure.
Ensures that no two items are inserted with the same 'name' attribute, while maintaining fast name updates with some additional collision protections.
Warning! Only aimed at NamedItem subclasses.
Exposes a 'list'-like interface (with limited index access).
Example
```python
>>> from dataclasses import dataclass
>>> @dataclass
... class DummyItem(NamedItem):
...     name: str
>>> namespace = ReactiveNamedItemDict()
>>> item = DummyItem("hello")
>>> for i in "abc":
...     namespace.append(DummyItem(i))
>>> namespace.append(item)
>>> try:
...     namespace.append(DummyItem("a"))
...     assert False
... except T2NErrorDataNodeValue:
...     pass
>>> item.name = "world"
>>> namespace.append(DummyItem("hello"))
```
SemanticVersion
Helper to check whether one version is higher than another.
Attributes:

| Name | Type | Description |
|---|---|---|
| TAGS | | each version level (should not be modified in most cases); ordering is done from left to right |
Example
```python
>>> version = SemanticVersion.from_str("1.2.13")
>>> "1.2.12" < version < "1.2.14"
True
>>> "1.3.12" < version
False
>>> version == "1.2.13"
True
```
Init.
(depends on TAGS, but the default is:)

| Name | Type | Description | Default |
|---|---|---|---|
| major | int | | required |
| minor | int | | required |
| patch | int | | required |
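Direct construction is equivalent to parsing; a minimal sketch, assuming the default three-level TAGS shown above:

```python
from torch_to_nnef.utils import SemanticVersion

# Build the same version either way (three int levels by default).
v1 = SemanticVersion(major=1, minor=2, patch=13)
v2 = SemanticVersion.from_str("1.2.13")
assert v1 == "1.2.13" and v2 == "1.2.13"
```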
dedup_list
Remove duplicates from a list while preserving order.
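A minimal usage sketch, assuming the first occurrence of each element is kept:

```python
from torch_to_nnef.utils import dedup_list

# Duplicates are dropped; the order of first occurrences is preserved.
assert dedup_list([3, 1, 3, 2, 1]) == [3, 1, 2]
```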
flatten_dict
Flatten a nested dictionary.
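A hypothetical illustration; the key-joining convention shown here (a "." separator) is an assumption, not confirmed by this documentation:

```python
from torch_to_nnef.utils import flatten_dict

nested = {"a": {"b": 1, "c": {"d": 2}}}
# Assumed output: one flat key per leaf value, e.g. {"a.b": 1, "a.c.d": 2}
print(flatten_dict(nested))
```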
flatten_dict_tuple_or_list
```python
flatten_dict_tuple_or_list(
    obj: T.Any,
    collected_types: T.Optional[T.List[T.Type]] = None,
    collected_idxes: T.Optional[T.List[int]] = None,
    current_idx: int = 0,
) -> T.Tuple[T.Tuple[T.Tuple[T.Type, ...], T.Tuple[T.Union[int, str], ...], T.Any], ...]
```
Flattens dict/list/tuple recursively; returns types, indexes and values.
Flattening happens in depth-first search order.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| obj | Any | dict/tuple/list or anything else (the structure can be arbitrarily deep); it contains N elements that are not dict/list/tuple | required |
| collected_types | Optional[List[Type]] | do not set | None |
| collected_idxes | Optional[List[int]] | do not set | None |
| current_idx | int | do not set | 0 |
Returns:
| Type | Description |
|---|---|
| Tuple[Tuple[Tuple[Type, ...], Tuple[Union[int, str], ...], Any], ...] | tuple of N tuples, each containing a tuple of: types, indexes and the element |
Example
If the initial `obj=[{"a": 1, "b": 3}]`, it will output:

```python
(
    ((list, dict), (0, "a"), 1),
    ((list, dict), (0, "b"), 3),
)
```
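With deeper nesting, the depth-first order becomes visible; a sketch inferred from the signature and the example above (the exact output shown is an assumption):

```python
from torch_to_nnef.utils import flatten_dict_tuple_or_list

# Each row carries the container types along the path,
# the keys/indexes along the path, and the leaf value.
obj = {"a": [1, {"b": 2}], "c": 3}
for types, idxes, value in flatten_dict_tuple_or_list(obj):
    print(types, idxes, value)
# Expected (assumed) depth-first order:
# (dict, list) ('a', 0) 1
# (dict, list, dict) ('a', 1, 'b') 2
# (dict,) ('c',) 3
```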
init_empty_weights
A context manager under which models are initialized on the meta device.
Borrowed from accelerate
A context manager under which models are initialized with all parameters on the meta device, therefore creating an empty model. Useful when just initializing the model would blow the available RAM.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_buffers | Optional[bool] | Whether or not to also put all buffers on the meta device while initializing. | None |
Returns:
| Type | Description |
|---|---|
| Iterator[None] | (None) Just a context manager |
Example:
```python
import torch.nn as nn

from torch_to_nnef.utils import init_empty_weights

# Initialize a model with 100 billion parameters in no time and
# without using any RAM.
with init_empty_weights():
    tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)])
```
Any model created under this context manager has no weights. As such, you can't do something like `model.to(some_device)` with it.
To load weights inside an empty model, see `load_checkpoint_and_dispatch`. Make sure to overwrite the default `device_map` param for `load_checkpoint_and_dispatch`, otherwise dispatch is not called.
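A hedged sketch of that loading step using Hugging Face accelerate's `load_checkpoint_and_dispatch` (the checkpoint path and module choice are hypothetical):

```python
import torch.nn as nn
from accelerate import load_checkpoint_and_dispatch

from torch_to_nnef.utils import init_empty_weights

with init_empty_weights():
    model = nn.Linear(10, 10)

# Pass an explicit device_map: with the default (None), weights are
# loaded but never dispatched to devices.
model = load_checkpoint_and_dispatch(
    model, "path/to/checkpoint", device_map="auto"
)
```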
init_on_device
Context manager under which models are initialized on the specified device.
Borrowed from accelerate
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| device | device | Device to initialize all parameters on. | required |
| include_buffers | Optional[bool] | Whether or not to also put all buffers on the meta device while initializing. | None |
Example:
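A minimal usage sketch (the device and layer size here are illustrative choices; using the meta device mirrors `init_empty_weights` above):

```python
import torch
import torch.nn as nn

from torch_to_nnef.utils import init_on_device

# Parameters of modules built inside the block are created
# directly on the requested device.
with init_on_device(torch.device("meta")):
    tst = nn.Linear(100, 100)
assert tst.weight.device.type == "meta"
```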