
ir_graph

torch_to_nnef.torch_graph.ir_graph

TorchModuleIRGraph

TorchModuleIRGraph(torch_module_tracer: TorchModuleTracer, omit_useless_nodes: bool = True, is_root_module: bool = False)

Torch Graph intermediate representation built from jit.trace, applied recursively.

This is not the raw torch._C.Graph but a simpler abstraction, with:

- a list of data nodes in self.data_nodes
- a list of operation nodes in self.op_nodes

self.inputs is a list of references to some of self.data_nodes, and self.outputs is a list of references to some of self.data_nodes.

This abstraction over the vanilla Torch Graph allows manipulating the graph in order to check and complete missing data information, and to ignore operations that are useless for our transcription needs.

It also makes us less reliant on the base graph in case PyTorch internals change (think Adapter pattern).

Warning! Only non-nested data containers (TupleTensors, FixedTensorList, ...) are supported for now.
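
A minimal sketch of how this abstraction might be inspected. The TorchModuleTracer import path and constructor arguments below are assumptions for illustration; only the TorchModuleIRGraph attributes and methods documented on this page (data_nodes, op_nodes, inputs, outputs, parse, printall) come from the source.

```python
# Hedged sketch: build the IR for a tiny module and look at its pieces.
# NOTE: the TorchModuleTracer import path and constructor arguments are
# assumptions, not a confirmed API.
import torch
from torch import nn

from torch_to_nnef.torch_graph.ir_graph import TorchModuleIRGraph
from torch_to_nnef.torch_graph import TorchModuleTracer  # assumed location

model = nn.Linear(4, 2).eval()
tracer = TorchModuleTracer(model, args=(torch.randn(1, 4),))  # assumed signature

ir_graph = TorchModuleIRGraph(
    tracer, omit_useless_nodes=True, is_root_module=True
)
ir_graph.parse()  # populate the IR from the jit trace

# data_nodes hold tensors/values, op_nodes hold the traced operations;
# inputs and outputs reference entries of data_nodes.
print(len(ir_graph.data_nodes), len(ir_graph.op_nodes))
print(ir_graph.inputs, ir_graph.outputs)
ir_graph.printall()  # dump a human-readable summary to stdout
```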

parse
parse(nnef_variable_naming_scheme: VariableNamingScheme = DEFAULT_VARNAME_SCHEME, provided_inputs=None, provided_outputs=None, forced_inputs_names=None, forced_outputs_names=None)

Core parsing that transforms the nn.Module into the torch_to_nnef IR.
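
A hedged example of calling parse with explicit names, continuing the sketch above. The forced names are purely illustrative and are not values confirmed by this page.

```python
# Hedged sketch of parse() keyword usage; the names below are illustrative
# and assume the traced module has exactly one input and one output.
ir_graph.parse(
    forced_inputs_names=["input_0"],
    forced_outputs_names=["output_0"],
)
```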

printall
printall()

Display helper graph info on the stdout of your tty.

remap_node
remap_node(from_node, to_node)

Remap a data_node to another.
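
A hedged sketch of remap_node, reusing the graph from the earlier example. Selecting nodes by list index is illustrative only; in practice you would pick the specific data nodes you want to rewire.

```python
# Hedged sketch: redirect references from one data node to another.
# Index-based selection is purely illustrative.
old_node = ir_graph.data_nodes[0]
new_node = ir_graph.data_nodes[1]
ir_graph.remap_node(from_node=old_node, to_node=new_node)
```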