
other

torch_to_nnef.op.aten.other

dropout

dropout(node, torch_graph, **kwargs)

Map PyTorch dropout-family ops (inactive at inference) to NNEF.

All variants share the (input, p, train) signature; at inference train is False and the op is the identity, so we just remap the output to the input.
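The remap can be illustrated with a minimal pure-Python sketch (hypothetical helper name; the actual converter rewires graph nodes rather than calling a function on tensors):

```python
def dropout_inference(x, p, train=False):
    """Inference-time dropout: with train=False the op is the identity,
    so the converter simply remaps the output to the input."""
    if train:
        raise NotImplementedError("training-mode dropout is not exported")
    return x

x = [0.5, -1.0, 2.0]
assert dropout_inference(x, p=0.3) is x  # output is literally the input
```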

external

external(g: NGraph, node: TensorVariable, name_to_tensor: T.Dict[str, NTensor], inference_target: InferenceTarget)

Add an External NNEF operation to the graph.

identity_remap

identity_remap(node, torch_graph, **kwargs)

No-op ATen ops -- just remap output to input.

resolve_conj / resolve_neg flip an internal "is conjugated" / "is negated" view bit and are the identity once those bits are cleared; for real-valued tensors they are unconditionally identity. (conj_physical used to live here too, which silently returned the input unchanged on complex inputs -- it now lives in op/aten/complex.py next to conj, routed through the conjugate fragment.)
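The "view bit" behaviour can be sketched in plain Python (an illustrative model of the semantics, not the torch internals):

```python
# A tensor may carry a lazy is_conj flag; resolve_conj materializes it.
def resolve_conj(value, is_conj=False):
    """Identity once the conjugation bit is cleared; unconditionally
    the identity for real-valued inputs (sketch over Python scalars)."""
    if is_conj and isinstance(value, complex):
        return value.conjugate()
    return value

assert resolve_conj(3.0) == 3.0                      # real: always identity
assert resolve_conj(1 + 2j, is_conj=True) == 1 - 2j  # complex: materialized
```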

isinf

isinf(node, inference_target, op_helper, **kwargs)

Map PyTorch 'aten::isinf' to NNEF.

isnan

isnan(node, inference_target, op_helper, **kwargs)

Map PyTorch 'aten::isnan' to NNEF.

isneginf

isneginf(node, inference_target, op_helper, **kwargs)

Map PyTorch 'aten::isneginf' to NNEF.

isposinf

isposinf(node, inference_target, op_helper, **kwargs)

Map PyTorch 'aten::isposinf' to NNEF.
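The scalar semantics of this predicate family are straightforward; a pure-Python sketch of the elementwise behaviour the converter maps (names mirror the aten ops, but these are illustrative stand-ins, not the converter code):

```python
import math

def isinf(v): return math.isinf(v)
def isnan(v): return math.isnan(v)
def isposinf(v): return math.isinf(v) and v > 0
def isneginf(v): return math.isinf(v) and v < 0

assert isposinf(float("inf")) and not isposinf(float("-inf"))
assert isneginf(float("-inf")) and not isneginf(1.0)
assert isnan(float("nan")) and not isnan(float("inf"))
```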

numel

numel(node, inference_target, op_helper, **kwargs)

Map PyTorch 'aten::numel' to NNEF.

scalar_tensor

scalar_tensor(node, inference_target, op_helper, **kwargs)

Map PyTorch 'aten::scalar_tensor' to NNEF.

size

size(g, node, name_to_tensor, inference_target, torch_graph, op_helper, **kwargs)

Map PyTorch 'aten::size' to NNEF.

We cannot use NNEF's shape_of, which has been deprecated since specification version 1.0.1:

The shape_of function is deprecated and is discouraged from use.
The reason is that it provides syntactic means to access a
property of tensors that is not defined via the syntax itself.

Furthermore, its definition is problematic in cases where the shape
of a tensor is not known in graph compilation time.

These result in problems with custom operations and operations with results
of dynamic shape for a consumer of an NNEF document.

By removing support for the shape_of function from NNEF syntax,
it becomes possible to de-couple parsing
from shape propagation in a consumer of an NNEF document.

Since aten::size is a core component for expressing dynamic networks that use tract symbolic dimensions (for example, using the stream size to compute an average), we map it to tract_core_shape_of.
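The export-time choice can be sketched as follows (a hypothetical helper, assuming symbolic dimensions are represented as non-integer placeholders such as the string "S" for a tract stream axis):

```python
# If every dimension is a known integer the shape could be inlined as a
# constant; any symbolic dimension forces a runtime tract_core_shape_of.
def lower_size(shape):
    if all(isinstance(d, int) for d in shape):
        return ("constant", list(shape))
    return ("tract_core_shape_of", list(shape))

assert lower_size((2, 3)) == ("constant", [2, 3])
assert lower_size(("S", 80)) == ("tract_core_shape_of", ["S", 80])
```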

to

to(g, node, name_to_tensor, inference_target, **kwargs)

Map PyTorch 'aten::to' to NNEF.

type_as

type_as(g, node, name_to_tensor, inference_target, **kwargs)

Map PyTorch 'aten::type_as' to NNEF.
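Semantically, x.type_as(other) casts x to the element type of other, so it lowers like aten::to with the target dtype taken from the second input. An illustrative sketch over plain Python lists (not the converter code):

```python
def type_as(x, other):
    """Cast each element of x to the element type of `other`,
    mimicking x.type_as(other) ~ x.to(other.dtype)."""
    caster = type(other[0])
    return [caster(v) for v in x]

assert type_as([1.9, 2.1], [1, 2]) == [1, 2]    # float -> int (truncates)
assert type_as([1, 2], [0.5]) == [1.0, 2.0]     # int -> float
```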