helper
torch_to_nnef.op.helper
OpHelper
add_single_output_op_from_ir_datas
add_single_output_op_from_ir_datas(nnef_op_type: str, input_nodes: T.List[Data], output_tensor_name_suffix: str = '', force_full_output_tensor_name: str = '', reuse_if_name_exists: bool = False, **kwargs) -> TorchOp
Use input_nodes Data objects instead of nnef.Tensor.
OpRegistry
add_tensor_variable_node_as_nnef_tensor
add_tensor_variable_node_as_nnef_tensor(g: NGraph, node: TensorVariable, name_to_tensor: T.Dict[str, NTensor], name_suffix: str = '', prevent_variable: bool = False, force_full_output_tensor_name: T.Optional[str] = None) -> NTensor
Create an NNEF tensor from a torch_graph.Data node and register it in the graph.
It automatically adds a variable if a torch tensor is associated with the node (this avoids bloating the NNEF graph file with matrix values).
cast_and_add_nnef_operation
Ensure parameters are cast before adding the operation to the NNEF graph.
cast_inputs_and_attrs
Catch inputs or attributes that would otherwise remain torch_graph values and cast them into NNEF.
cast_to_if_not_dtype_and_variable
cast_to_if_not_dtype_and_variable(g, name_to_tensor, node, nnef_tensor: NTensor, cast_to: np.dtype, suffix: str = '')
Force a cast that is not expressed in the IR graph (in the case of div, for example).
This is necessary since tract, and possibly other inference engines, may not cast implicitly to float during a div operation, leading to rounding issues.
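A minimal NumPy sketch of the problem this cast avoids (this is an illustration, not the `torch_to_nnef` API): an engine that does not implicitly promote integers to float performs integer division and silently drops the fractional part.

```python
import numpy as np

a = np.array([1, 3], dtype=np.int64)
b = np.array([2, 2], dtype=np.int64)

# What a non-promoting engine computes for div on integer tensors:
integer_div = a // b
# What you get once the inputs are explicitly cast to float first:
float_div = a.astype(np.float32) / b.astype(np.float32)

print(integer_div.tolist())  # [0, 1]
print(float_div.tolist())    # [0.5, 1.5]
```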
maybe_align_inputs_ranks
maybe_align_inputs_ranks(g: NGraph, inputs: T.Sequence[NTensor], outputs: T.Sequence[NTensor], op_type: str) -> T.Sequence[NTensor]
Ensure consistent rank between inputs and outputs with regard to spec.
- May unsqueeze at rank 0 n times to align inputs.
This is done at export time rather than inference time because the inference implementation may expand size-1 dimensions from left to right (as Tract or TensorFlow do) instead of PyTorch's expansion, which happens in the opposite direction.
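The alignment rule can be sketched in plain NumPy (a hypothetical sketch of the logic, not the library's implementation): prepend size-1 dimensions until every input has the maximum rank, which pins PyTorch's right-aligned broadcasting down explicitly so a left-to-right engine agrees.

```python
import numpy as np

def align_ranks(inputs):
    """Prepend size-1 dims (unsqueeze at 0) until all inputs share the
    max rank, making PyTorch-style right-aligned broadcasting explicit."""
    max_rank = max(t.ndim for t in inputs)
    return [t.reshape((1,) * (max_rank - t.ndim) + t.shape) for t in inputs]

a = np.zeros((3, 4))
b = np.zeros((4,))
a2, b2 = align_ranks([a, b])
print(a2.shape, b2.shape)  # (3, 4) (1, 4)
```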
pick_axis
Enforce that axis and axes attributes contain only positive values.
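The normalization amounts to mapping a possibly-negative axis onto `[0, rank)`. A hypothetical sketch (the bounds check and error type are assumptions, not the library's exact behavior):

```python
def pick_axis_sketch(rank: int, axis: int) -> int:
    """Normalize a possibly-negative axis to a non-negative index."""
    if not -rank <= axis < rank:
        raise ValueError(f"axis {axis} out of range for rank {rank}")
    return axis % rank  # e.g. -1 on a rank-4 tensor becomes 3

print(pick_axis_sketch(4, -1))  # 3
print(pick_axis_sketch(4, 2))   # 2
```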
pick_index_in_axis
Enforce that an index along an axis contains only values within bounds, because tract does not support out-of-bound indices.
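A hypothetical sketch of that check (rejecting out-of-bound indices with an exception is an assumption; the library may handle them differently): negative indices are wrapped, anything still outside `[0, dim_size)` is refused.

```python
def pick_index_in_axis_sketch(dim_size: int, index: int) -> int:
    """Wrap a possibly-negative index, then reject out-of-bound values,
    since the target engine (tract) does not support them."""
    if index < 0:
        index += dim_size
    if not 0 <= index < dim_size:
        raise ValueError(f"index {index} out of bounds for size {dim_size}")
    return index

print(pick_index_in_axis_sketch(5, -1))  # 4
```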
resolve_attr_axis_size
Resolve input_node.shape[axis] for use in a NNEF op attribute.
When inference_target.has_dynamic_axes, emit (and cache) a
tract_core_shape_of -> slice -> squeeze chain to extract the
runtime size of axis and return an nnef.Identifier referencing
that scalar. Otherwise, return the static int(input_node.shape[axis]).
Use this anywhere a shape-derived value lands in an op attribute
(e.g. reshape(shape=[...]), tile(repeats=[...]), slice(end=...))
so the same emitter works in both static and dyn-axes modes
without baking the trace-time size into the exported graph.
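The static/dynamic dispatch described above can be sketched as follows. This is a hedged illustration of the control flow only: `emit_shape_chain` is a hypothetical stand-in for the `tract_core_shape_of -> slice -> squeeze` emission, and the real helper returns an `nnef.Identifier` rather than a plain string.

```python
def resolve_axis_size_sketch(shape, axis, has_dynamic_axes, emit_shape_chain):
    """With dynamic axes, emit (or reuse) a shape-extraction chain and
    return the identifier of the scalar holding shape[axis]; otherwise
    return the static trace-time size as a plain int."""
    if has_dynamic_axes:
        # emit_shape_chain stands in for graph emission; it returns the
        # name of the scalar tensor holding the runtime size of `axis`.
        return emit_shape_chain(axis)
    return int(shape[axis])

dynamic = resolve_axis_size_sketch((2, 3), 1, True, lambda ax: f"dim_{ax}")
static = resolve_axis_size_sketch((2, 3), 1, False, lambda ax: f"dim_{ax}")
print(dynamic, static)  # dim_1 3
```

Because the same call site yields either an identifier or an int, one emitter covers both static export and dynamic-axes export without baking trace-time sizes into the graph.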