other
torch_to_nnef.op.aten.other
dropout
Map PyTorch dropout-family ops (inactive at inference) to NNEF.
All variants share the (input, p, train) signature; at inference
train is False and the op is the identity, so we just remap
the output to the input.
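As an illustration, here is a pure-Python sketch of the aten::dropout semantics (a reference model for the docstring above, not torch_to_nnef code):

```python
import random

def dropout(x, p, train):
    # At inference (train=False) dropout is the identity, which is why
    # the converter simply remaps the output tensor to the input.
    if not train:
        return list(x)
    # Training-mode behaviour, shown only for contrast: zero each value
    # with probability p and rescale survivors by 1 / (1 - p).
    scale = 1.0 / (1.0 - p)
    return [v * scale if random.random() >= p else 0.0 for v in x]

x = [1.0, 2.0, 3.0]
assert dropout(x, p=0.5, train=False) == x  # identity at inference
```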
external
external(g: NGraph, node: TensorVariable, name_to_tensor: T.Dict[str, NTensor], inference_target: InferenceTarget)
Add an external NNEF operation to the graph.
identity_remap
No-op ATen ops -- just remap output to input.
resolve_conj / resolve_neg flip an internal "is conjugated"
/ "is negated" view bit and are the identity once those bits are
cleared; for real-valued tensors they are unconditionally the
identity. (conj_physical used to live here too, but it silently
returned the input unchanged on complex inputs -- it now lives in
op/aten/complex.py next to conj, routed through the
conjugate fragment.)
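To see why conj_physical did not belong in an identity remap, here is a pure-Python sketch of conjugation semantics (illustrative only, not the actual conjugate fragment):

```python
def conj_physical(values):
    # Real inputs pass through unchanged, so an identity remap looked
    # harmless -- but complex inputs must have their imaginary part
    # negated, which the old identity remap silently skipped.
    return [v.conjugate() if isinstance(v, complex) else v for v in values]

assert conj_physical([1.0, 2.0]) == [1.0, 2.0]  # identity on real values
assert conj_physical([1 + 2j]) == [1 - 2j]      # actual conjugation
```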
isneginf
Map PyTorch 'aten::isneginf' to NNEF.
isposinf
Map PyTorch 'aten::isposinf' to NNEF.
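NNEF has no dedicated isneginf/isposinf primitives; one natural lowering (an assumption for illustration, not necessarily the exact fragment used) is an equality comparison against the signed infinity:

```python
import math

def isneginf(values):
    # Equivalent to eq(x, -inf). NaN compares unequal to everything,
    # so it is correctly reported as False.
    return [v == -math.inf for v in values]

def isposinf(values):
    # Equivalent to eq(x, +inf).
    return [v == math.inf for v in values]

x = [-math.inf, 0.0, math.inf, math.nan]
assert isneginf(x) == [True, False, False, False]
assert isposinf(x) == [False, False, True, False]
```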
scalar_tensor
Map PyTorch 'aten::scalar_tensor' to NNEF.
size
Map PyTorch 'aten::size' to NNEF.
We cannot use NNEF shape_of, which has been deprecated since version 1.0.1:
The shape_of function is deprecated and is discouraged from use.
The reason is that it provides syntactic means to access a
property of tensors that is not defined via the syntax itself.
Furthermore, its definition is problematic in cases where the shape
of a tensor is not known at graph compilation time.
These result in problems with custom operations and operations with results
of dynamic shape for a consumer of an NNEF document.
By removing support for the shape_of function from NNEF syntax,
it becomes possible to de-couple parsing
from shape propagation in a consumer of an NNEF document.
Since shape access is a core component for expressing dynamic networks
that may use tract symbolic dimensions (for example, using the stream
size to apply an averaging), we map it to tract_core_shape_of.
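A toy example of why runtime shape access matters: averaging over a stream dimension whose length is only known at run time (pure Python, with the NNEF-side mapping noted in comments):

```python
def mean_over_stream(rows):
    # len(rows[0]) plays the role of aten::size(input, -1) in PyTorch;
    # in the NNEF export this becomes tract_core_shape_of plus an index
    # into the resulting shape tensor, so the division stays correct
    # for any (symbolic) stream length.
    n = len(rows[0])
    return [sum(row) / n for row in rows]

assert mean_over_stream([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) == [2.0, 5.0]
```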