math

torch_to_nnef.op.aten.math

add

add(node, op_helper, **kwargs)

Map PyTorch: 'aten:add' to NNEF, honoring the alpha parameter.

addcmul

addcmul(node, op_helper, **kwargs)

Map PyTorch: 'aten:addcmul' to the addcmul fragment.

atan2

atan2(node, op_helper, **kwargs)

aten::atan2.

bitwise_and

bitwise_and(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:bitwise_and', 'aten:bitwise_and_cpu' to NNEF.

bitwise_left_shift

bitwise_left_shift(node, op_helper, inference_target, **kwargs)

Map aten:bitwise_left_shift / << op to NNEF -> tract_shl.

Tract's shift ops live under the tract_core extension despite the bare tract_shl / tract_shr names in the registry.

bitwise_not

bitwise_not(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:bitwise_not', 'aten:bitwise_not_cpu' to NNEF.

On bool inputs, PyTorch's ~ is semantically a logical not, so we emit the standard NNEF not op (keeps the graph portable and self-documenting, rather than relying on tract's bitnot happening to do the right thing on bool). For integer inputs, emit tract_core_bitnot for true bitwise inversion.

bitwise_or

bitwise_or(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:bitwise_or' to NNEF.

bitwise_right_shift

bitwise_right_shift(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:bitwise_right_shift' / >> op -> tract_shr.

bitwise_xor

bitwise_xor(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:bitwise_xor' to NNEF.

cdist

cdist(node, op_helper, **kwargs)

Map PyTorch: 'aten:cdist' to NNEF via a fragment.

cdist(a, b, p) computes the pairwise distance matrix between rows of a (shape (..., M, D)) and b (shape (..., N, D)): out[..., i, j] = (sum(|a[..., i, :] - b[..., j, :]|^p))^(1/p).

The fragment broadcasts via unsqueeze (one new axis on each input) and reduces along the trailing feature axis. Pure stdlib.
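The formula above can be sketched in pure Python for 2-D inputs (illustrative only, not the NNEF fragment itself):

```python
def cdist(a, b, p=2.0):
    """Pairwise p-norm distance between rows of a (M x D) and b (N x D):
    out[i][j] = (sum_k |a[i][k] - b[j][k]| ** p) ** (1 / p)."""
    return [
        [sum(abs(ai - bi) ** p for ai, bi in zip(row_a, row_b)) ** (1.0 / p)
         for row_b in b]
        for row_a in a
    ]

print(cdist([[0.0, 0.0], [3.0, 4.0]], [[0.0, 0.0]]))  # [[0.0], [5.0]]
```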

celu

celu(node, op_helper, **kwargs)

Map PyTorch: 'aten:celu' to the celu fragment.

copysign

copysign(node, op_helper, **kwargs)

aten::copysign -> |a| * sign(b).
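A literal scalar rendering of that formula (note that |a| * sign(b) yields 0 when b == 0, unlike IEEE copysign, which treats zero as signed):

```python
def copysign(a, b):
    """|a| * sign(b): magnitude taken from a, sign taken from b."""
    sign = (b > 0) - (b < 0)  # -1, 0, or 1
    return abs(a) * sign

print(copysign(-3.0, 2.0))  # 3.0
print(copysign(3.0, -2.0))  # -3.0
```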

cosine_similarity

cosine_similarity(node, op_helper, **kwargs)

Map PyTorch: 'aten:cosine_similarity' to NNEF via a fragment.

The fragment lives in op/fragment/cosine_similarity.nnef and is composed only of NNEF stdlib ops, so no tract-side change is needed.

Negative dim is normalized to a non-negative index via pick_axis before reaching the fragment: under dynamic-axes mode tract's reduce path crashes (index out of bounds at core/src/ops/nn/reduce.rs) when handed a negative axis against a symbolic-rank tensor.

cross

cross(node, op_helper, **kwargs)

Map PyTorch: 'aten:cross' to NNEF via a fragment.

cross(a, b, dim) is the 3-D vector cross product along dim. The fragment slices each input along dim into its three components and computes the standard (a1*b2 - a2*b1, a2*b0 - a0*b2, a0*b1 - a1*b0) triplet. Pure stdlib.
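The component triplet above, written out for plain 3-vectors (the fragment does the same with slices along dim):

```python
def cross(a, b):
    """3-D cross product from the sliced components:
    (a1*b2 - a2*b1, a2*b0 - a0*b2, a0*b1 - a1*b0)."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

print(cross([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # [0.0, 0.0, 1.0]
```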

cummax

cummax(node, op_helper, inference_target, **kwargs)

Map PyTorch: aten::cummax(self, dim) -> (values, indices).

cummin

cummin(node, op_helper, inference_target, **kwargs)

Map PyTorch: aten::cummin(self, dim) -> (values, indices).

cumprod

cumprod(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:cumprod' to NNEF using a scan fragment.

Mirror of cumsum with a mul scan body and an init of 1 (built pointwise via mul(first, 0) + 1 to keep init shape-matching).

cumsum

cumsum(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:cumsum' to NNEF using a scan fragment (Tract).

  • Implemented via tract_core_scan inside fragment cumsum (axis=0).
  • For arbitrary dim, transpose input to bring that axis to 0, apply fragment, then transpose back.
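The transpose-sandwich for arbitrary dim can be sketched for the 2-D case in pure Python (illustrative only; the real lowering emits tract_core_scan):

```python
def cumsum_axis0(x):
    """Running sum down axis 0 of a 2-D list (the scan body: acc = acc + row)."""
    out, acc = [], [0.0] * len(x[0])
    for row in x:
        acc = [a + r for a, r in zip(acc, row)]
        out.append(acc)
    return out

def transpose(x):
    return [list(col) for col in zip(*x)]

def cumsum(x, dim):
    """Bring `dim` to axis 0, apply the axis-0 scan, transpose back."""
    if dim == 0:
        return cumsum_axis0(x)
    return transpose(cumsum_axis0(transpose(x)))  # dim == 1 on 2-D input

print(cumsum([[1.0, 2.0], [3.0, 4.0]], dim=1))  # [[1.0, 3.0], [3.0, 7.0]]
```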

div

div(node, op_helper, inference_target, torch_graph, **kwargs)

Map PyTorch: 'aten:div' to NNEF.

exp2

exp2(node, op_helper, **kwargs)

aten::exp2 -> exp2 fragment (exp(x * ln 2)).

expm1

expm1(node, op_helper, **kwargs)

aten::expm1.

floor_divide

floor_divide(node, op_helper, inference_target, torch_graph, **kwargs)

Map PyTorch: 'aten::floor_divide' / 'aten::floordiv' to NNEF.

JIT records aten::floordiv for Python //; upstream normalize_ops.cpp does not bridge it to floor_divide, so we alias it on our side.

fmax

fmax(node, op_helper, **kwargs)

aten::fmax -> NaN-skipping max.

fmin

fmin(node, op_helper, **kwargs)

aten::fmin -> NaN-skipping min.

fmod

fmod(node, op_helper, **kwargs)

aten::fmod.

equivalent

a - a.div(b, rounding_mode="trunc") * b
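The truncated-division identity above, checked against the stdlib's C-style fmod:

```python
import math

def fmod(a, b):
    """a - trunc(a / b) * b: the remainder takes the sign of the dividend a."""
    return a - math.trunc(a / b) * b

print(fmod(-7.0, 3.0))  # -1.0
print(fmod(7.0, -3.0))  # 1.0
```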

hardshrink

hardshrink(node, op_helper, **kwargs)

Map PyTorch: 'aten:hardshrink' to the hardshrink fragment.

heaviside

heaviside(node, op_helper, **kwargs)

aten::heaviside -> heaviside fragment (step with tie-breaker).

hypot

hypot(node, op_helper, **kwargs)

aten::hypot -> hypot fragment (sqrt(a^2 + b^2)).

isclose

isclose(node, op_helper, **kwargs)

aten::isclose -> |a - b| <= atol + rtol * |b|.

Reads optional rtol / atol from the trace (defaults match torch's 1e-5 / 1e-8 via the fragment defaults). equal_nan=True is not yet handled -- rare in real traces; raises if set.
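The tolerance formula above, as a scalar sketch with torch's default rtol/atol:

```python
def isclose(a, b, rtol=1e-5, atol=1e-8):
    """Elementwise |a - b| <= atol + rtol * |b| (torch's default tolerances)."""
    return abs(a - b) <= atol + rtol * abs(b)

print(isclose(1.0, 1.0 + 1e-9))  # True
print(isclose(1.0, 1.1))         # False
```

Note the asymmetry: the relative term scales with |b|, so isclose(a, b) and isclose(b, a) can disagree near the tolerance boundary.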

isfinite

isfinite(node, op_helper, **kwargs)

Map PyTorch: 'aten:isfinite' to the isfinite fragment.

lerp

lerp(node, op_helper, **kwargs)

Map PyTorch: 'aten:lerp' to the lerp fragment.

log10

log10(node, op_helper, **kwargs)

Map PyTorch: 'aten:log10' to NNEF by scaling a natural log by a constant (ln(x) * (1 / ln 10)); the multiplied constant may not be precise enough for every input.

log1p

log1p(node, op_helper, **kwargs)

aten::log1p.

log_sigmoid

log_sigmoid(node, op_helper, **kwargs)

Map PyTorch: 'aten:log_sigmoid' to the log_sigmoid fragment.

logaddexp

logaddexp(node, op_helper, **kwargs)

aten::logaddexp -> numerically-stable log(exp a + exp b).
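The standard stable form factors out the larger argument so the remaining exponential never overflows; a scalar sketch:

```python
import math

def logaddexp(a, b):
    """Stable log(exp(a) + exp(b)): max(a, b) + log1p(exp(-|a - b|))."""
    m = max(a, b)
    return m + math.log1p(math.exp(-abs(a - b)))

print(logaddexp(1000.0, 1000.0))  # ~1000.693, where the naive form overflows to inf
```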

logaddexp2

logaddexp2(node, op_helper, **kwargs)

aten::logaddexp2 -> base-2 variant.

logical_xor

logical_xor(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:logical_xor' to NNEF.

logit

logit(node, op_helper, **kwargs)

Map PyTorch: 'aten:logit' to the logit fragment.

logsumexp

logsumexp(node, op_helper, **kwargs)

Map PyTorch: 'aten:logsumexp' to the logsumexp fragment.

PyTorch signature: logsumexp(input, dim, keepdim=False). The fragment always reduces the named axis (no keepdim); when the user asks for keepdim=True we follow up with an unsqueeze on the same axis to reinstate it.
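The reduce-then-reinstate scheme can be sketched for 2-D inputs in pure Python (the final list re-wrap stands in for the unsqueeze; illustrative only):

```python
import math

def logsumexp(xs):
    """Stable log(sum(exp(x))) over a 1-D sequence."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def logsumexp2d(x, dim, keepdim=False):
    """Reduce `dim` of a 2-D list; keepdim=True re-wraps the reduced axis,
    mirroring the fragment-then-unsqueeze lowering."""
    if dim == 1:
        out = [logsumexp(row) for row in x]
        return [[v] for v in out] if keepdim else out
    out = [logsumexp(col) for col in zip(*x)]
    return [out] if keepdim else out

print(logsumexp2d([[0.0, 0.0]], dim=1, keepdim=True))  # [[0.693...]]
```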

mul

mul(node, op_helper, torch_graph, **kwargs)

Map PyTorch: 'aten:mul' to NNEF.

nan_to_num

nan_to_num(node, op_helper, **kwargs)

Map PyTorch: 'aten:nan_to_num' to NNEF.

Decomposed to pure NNEF stdlib (ne for NaN via the IEEE-754 NaN != NaN invariant, gt/lt against the dtype's finite range for +/-inf, plus select). No tract extension needed.

Defaults match torch: NaN -> 0; +inf -> finfo.max; -inf -> finfo.min.
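A scalar sketch of the same decomposition, with sys.float_info.max standing in for the dtype's finfo.max:

```python
import sys

def nan_to_num(x, nan=0.0, posinf=sys.float_info.max, neginf=-sys.float_info.max):
    """NaN detected via the IEEE-754 x != x invariant; infinities via
    comparisons against the finite range; then a select per case."""
    if x != x:                      # only NaN compares unequal to itself
        return nan
    if x > sys.float_info.max:      # +inf exceeds every finite float
        return posinf
    if x < -sys.float_info.max:     # -inf is below every finite float
        return neginf
    return x

print([nan_to_num(v) for v in [float("nan"), float("inf"), 1.5]])
# NaN -> 0.0, +inf -> float max, 1.5 unchanged
```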

outer

outer(node, op_helper, **kwargs)

Map PyTorch: 'aten:outer' to NNEF.

torch.outer(a, b) over 1-D inputs is a[:, None] * b[None, :]. Lower to two unsqueezes and a broadcasting mul.

Axes are kept positive. Tract's NNEF unsqueeze deserializer (tract_core::ops::change_axes::AxisOp::change_shape) does not normalize negative axes and panics with smallvec: index exceeds length; verified across tract 0.20.22 through 0.23.0-dev.5. This matches the wider t2n convention: the dedicated unsqueeze op handler also normalizes via pick_axis.
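The unsqueeze-then-broadcast lowering collapses, for 1-D lists, to the familiar nested product:

```python
def outer(a, b):
    """a[:, None] * b[None, :]: each element of a scales the whole of b."""
    return [[ai * bj for bj in b] for ai in a]

print(outer([1.0, 2.0], [3.0, 4.0]))  # [[3.0, 4.0], [6.0, 8.0]]
```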

pairwise_distance

pairwise_distance(node, op_helper, **kwargs)

Map PyTorch: 'aten:pairwise_distance' to NNEF via a fragment.

pairwise_distance(a, b, p, eps, keepdim) computes (sum(|a - b + eps|^p, axis=-1))^(1/p). Torch keeps the reduced last axis only when keepdim=True; the fragment squeezes it unconditionally and we re-unsqueeze after the call when needed.

Pure NNEF stdlib (sub / abs / pow / sum_reduce / squeeze).
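The formula above for a single pair of 1-D rows, as a stdlib sketch:

```python
def pairwise_distance(a, b, p=2.0, eps=1e-6):
    """(sum(|a - b + eps| ** p, axis=-1)) ** (1 / p) over matching 1-D inputs."""
    return sum(abs(ai - bi + eps) ** p for ai, bi in zip(a, b)) ** (1.0 / p)

print(pairwise_distance([0.0, 0.0], [3.0, 4.0], eps=0.0))  # 5.0
```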

pow_

pow_(node, op_helper, **kwargs)

Map PyTorch: 'aten:pow' to NNEF.

remainder

remainder(node, op_helper, torch_graph, inference_target, **kwargs)

Map PyTorch: 'aten:remainder' to NNEF.

round_

round_(inference_target, **kwargs)

Map PyTorch: 'aten:round' to NNEF.

rsub

rsub(node, op_helper, torch_graph, **kwargs)

Map PyTorch: 'aten:rsub' to NNEF.

sinc

sinc(node, op_helper, **kwargs)

aten::sinc -> normalized sin(pi x)/(pi x) with 0 -> 1.
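A scalar sketch of the normalized sinc with the singularity patched:

```python
import math

def sinc(x):
    """sin(pi * x) / (pi * x), with the removable singularity at 0 set to 1."""
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

print(sinc(0.0))            # 1.0
print(round(sinc(0.5), 6))  # sin(pi/2)/(pi/2) = 2/pi ~ 0.63662
```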

softshrink

softshrink(node, op_helper, **kwargs)

Map PyTorch: 'aten:softshrink' to the softshrink fragment.

square

square(node, op_helper, **kwargs)

Map PyTorch: 'aten:square' to NNEF.

x.square() is x * x; the dedicated NNEF sqr op exists in some runtimes but the simplest portable form is a mul with the same tensor on both sides.

std

std(node, op_helper, **kwargs)

Map PyTorch: 'aten:std' (sqrt of var) to NNEF.

std_mean

std_mean(node, op_helper, **kwargs)

Map PyTorch: 'aten:std_mean' (returns (std, mean)) to NNEF.

sub

sub(node, op_helper, **kwargs)

Map PyTorch: 'aten:sub' to NNEF, honoring the alpha parameter.

tensordot

tensordot(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten:tensordot' to NNEF via tract_core_einsum.

tensordot(a, b, dims_a, dims_b) contracts the paired axes (same size on each side) and produces an output of rank a.rank + b.rank - 2 * len(dims_a). The einsum expression is built so each contracted axis-pair shares a label.

trunc

trunc(node, op_helper, **kwargs)

Map PyTorch: 'aten:trunc' to NNEF.

var

var(node, op_helper, **kwargs)

Map PyTorch: 'aten:var' to NNEF.

Centered second moment along dim with arbitrary correction (denominator = N - correction). Lowered to mean_reduce + sub + sqr + (sum_reduce / mean_reduce) so any correction value works without relying on NNEF's var fragment (which always squeezes and so can't honor keepdim=True).
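The correction semantics in scalar form (correction=1 is Bessel's sample variance, correction=0 the population variance):

```python
def var(xs, correction=1):
    """Centered second moment with denominator N - correction."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - correction)

print(var([1.0, 2.0, 3.0]))                # 1.0
print(var([1.0, 2.0, 3.0], correction=0))  # 0.666...
```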

var_mean

var_mean(node, op_helper, **kwargs)

Map PyTorch: 'aten:var_mean' (returns (var, mean)) to NNEF.

xlogy

xlogy(node, op_helper, **kwargs)

aten::xlogy -> xlogy fragment (x * log(y) with x==0 -> 0).
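A scalar sketch of the x == 0 special case the fragment encodes (0 * log(0) would otherwise be NaN):

```python
import math

def xlogy(x, y):
    """x * log(y), with x == 0 forced to 0 regardless of y."""
    return 0.0 if x == 0.0 else x * math.log(y)

print(xlogy(0.0, 0.0))  # 0.0
print(xlogy(2.0, 1.0))  # 0.0
```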