math
torch_to_nnef.op.aten.math
bitwise_and
Map PyTorch: 'aten:bitwise_and', 'aten:bitwise_and_cpu' to NNEF.
bitwise_left_shift
Map PyTorch: 'aten:bitwise_left_shift' / << op to NNEF -> tract_shl.
Tract's shift ops live under the tract_core extension despite
the bare tract_shl / tract_shr names in the registry.
bitwise_not
Map PyTorch: 'aten:bitwise_not', 'aten:bitwise_not_cpu' to NNEF.
On bool inputs, PyTorch's ~ is semantically a logical not, so we emit
the standard NNEF not op (keeps the graph portable and self-documenting,
rather than relying on tract's bitnot happening to do the right thing on
bool). For integer inputs, emit tract_core_bitnot for true bitwise
inversion.
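A quick PyTorch check of the two behaviours the exporter distinguishes (illustrative only, not the emitted graph):

```python
import torch

# On bool tensors, ~ is a logical not...
b = torch.tensor([True, False])
assert torch.equal(~b, torch.tensor([False, True]))

# ...while on integers it is a true bitwise inversion (~x == -x - 1).
i = torch.tensor([0, 1, 5], dtype=torch.int32)
assert torch.equal(~i, torch.tensor([-1, -2, -6], dtype=torch.int32))
```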
bitwise_or
Map PyTorch: 'aten:bitwise_or' to NNEF.
bitwise_right_shift
Map PyTorch: 'aten:bitwise_right_shift' / >> op -> tract_shr.
bitwise_xor
Map PyTorch: 'aten:bitwise_xor' to NNEF.
cdist
Map PyTorch: 'aten:cdist' to NNEF via a fragment.
cdist(a, b, p) computes the pairwise distance matrix between
rows of a (shape (..., M, D)) and b (shape (..., N, D)):
out[..., i, j] = (sum(|a[..., i, :] - b[..., j, :]|^p))^(1/p).
The fragment broadcasts via unsqueeze (one new axis on each
input) and reduces along the trailing feature axis. Pure stdlib.
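A PyTorch sketch of the same broadcast-and-reduce decomposition (not the fragment itself; shapes and p are illustrative):

```python
import torch

a = torch.randn(2, 4, 3)   # (..., M=4, D=3)
b = torch.randn(2, 5, 3)   # (..., N=5, D=3)
p = 2.0

# One new axis on each input, then reduce the trailing feature axis.
diff = a.unsqueeze(-2) - b.unsqueeze(-3)          # (..., M, N, D)
ref = diff.abs().pow(p).sum(-1).pow(1.0 / p)      # (..., M, N)
assert torch.allclose(ref, torch.cdist(a, b, p=p), atol=1e-6)
```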
cosine_similarity
Map PyTorch: 'aten:cosine_similarity' to NNEF via a fragment.
The fragment lives in op/fragment/cosine_similarity.nnef and is
composed only of NNEF stdlib ops, so no tract-side change is needed.
Negative dim is normalized to a non-negative index via
pick_axis before reaching the fragment: under dynamic-axes
mode tract's reduce path crashes (index out of bounds at
core/src/ops/nn/reduce.rs) when handed a negative axis against a
symbolic-rank tensor.
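A minimal sketch of the normalization step, assuming only that pick_axis folds a negative dim into its non-negative equivalent (normalize_axis below is a hypothetical stand-in, not the real helper):

```python
def normalize_axis(dim: int, rank: int) -> int:
    # Hypothetical stand-in for pick_axis: fold a negative PyTorch dim
    # into the non-negative axis index the fragment expects.
    return dim if dim >= 0 else dim + rank

assert normalize_axis(-1, 4) == 3   # last axis of a rank-4 tensor
assert normalize_axis(1, 4) == 1    # non-negative dims pass through
```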
cross
Map PyTorch: 'aten:cross' to NNEF via a fragment.
cross(a, b, dim) is the 3-D vector cross product along dim.
The fragment slices each input along dim into its three
components and computes the standard (a1*b2 - a2*b1, a2*b0 -
a0*b2, a0*b1 - a1*b0) triplet. Pure stdlib.
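The same triplet, checked against PyTorch (illustrative; the fragment assembles the components from slices rather than unbind):

```python
import torch

a, b = torch.randn(5, 3), torch.randn(5, 3)
a0, a1, a2 = a.unbind(-1)
b0, b1, b2 = b.unbind(-1)

# The standard cross-product component triplet.
ref = torch.stack([a1 * b2 - a2 * b1,
                   a2 * b0 - a0 * b2,
                   a0 * b1 - a1 * b0], dim=-1)
assert torch.allclose(ref, torch.linalg.cross(a, b, dim=-1))
```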
cummax
Map PyTorch: aten::cummax(self, dim) -> (values, indices).
cummin
Map PyTorch: aten::cummin(self, dim) -> (values, indices).
cumprod
Map PyTorch: 'aten:cumprod' to NNEF using a scan fragment.
Mirror of cumsum with a mul scan body and an init of 1 (built
pointwise as mul(first, 0) + 1 so the init's shape matches the scanned slice).
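A PyTorch sketch of the scan body and the pointwise init trick (illustrative, not the emitted graph):

```python
import torch

x = torch.randn(4, 3)

# Scan body: acc <- acc * x[t], with init = x[0] * 0 + 1, i.e. a ones
# tensor shaped like one slice, built from stdlib ops only.
acc = x[0] * 0 + 1
outs = []
for t in range(x.shape[0]):
    acc = acc * x[t]
    outs.append(acc)
assert torch.allclose(torch.stack(outs), torch.cumprod(x, dim=0))
```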
cumsum
Map PyTorch: 'aten:cumsum' to NNEF using a scan fragment (tract).
- Implemented via tract_core_scan inside a cumsum fragment (axis=0).
- For arbitrary dim, transpose the input to bring that axis to 0, apply
  the fragment, then transpose back.
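A PyTorch sketch of the transpose trick, with torch.cumsum standing in for the axis-0 scan fragment:

```python
import torch

def cumsum_via_axis0(x: torch.Tensor, dim: int) -> torch.Tensor:
    # Bring `dim` to the front, scan along axis 0, then restore the
    # original layout, mirroring what the exporter emits around the fragment.
    x = x.transpose(0, dim)
    out = torch.cumsum(x, dim=0)   # stands in for the axis-0 fragment
    return out.transpose(0, dim)

x = torch.randn(2, 3, 4)
assert torch.equal(cumsum_via_axis0(x, 2), torch.cumsum(x, dim=2))
```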
floor_divide
Map PyTorch: 'aten::floor_divide' / 'aten::floordiv' to NNEF.
JIT records aten::floordiv for Python //; upstream
normalize_ops.cpp does not bridge it to floor_divide, so we
alias it on our side.
hardshrink
Map PyTorch: 'aten:hardshrink' to the hardshrink fragment.
heaviside
aten::heaviside -> heaviside fragment (step with tie-breaker).
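For reference, torch.heaviside is a step function whose second argument supplies the value at exactly zero (the tie-breaker):

```python
import torch

x = torch.tensor([-1.5, 0.0, 2.0])
tie = torch.tensor(0.5)   # value used where x == 0
assert torch.equal(torch.heaviside(x, tie), torch.tensor([0.0, 0.5, 1.0]))
```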
isclose
aten::isclose -> |a - b| <= atol + rtol * |b|.
Reads optional rtol / atol from the trace (defaults match
torch's 1e-5 / 1e-8 via the fragment defaults). equal_nan=True
is not yet handled; it is rare in real traces, and we raise if set.
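A PyTorch check of the same comparison (equal_nan=False path, finite inputs):

```python
import torch

a, b = torch.randn(8), torch.randn(8)
rtol, atol = 1e-5, 1e-8

# The comparison the exporter lowers to.
ref = (a - b).abs() <= atol + rtol * b.abs()
assert torch.equal(ref, torch.isclose(a, b, rtol=rtol, atol=atol))
```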
log_sigmoid
Map PyTorch: 'aten:log_sigmoid' to the log_sigmoid fragment.
logaddexp
aten::logaddexp -> numerically stable log(exp(a) + exp(b)).
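One standard stable formulation, checked against torch.logaddexp; the fragment's exact form may differ:

```python
import torch

# Magnitudes large enough that a naive exp() can overflow float32.
a, b = torch.randn(6) * 50, torch.randn(6) * 50

# max(a, b) + log1p(exp(-|a - b|)) never exponentiates a large value.
m = torch.maximum(a, b)
ref = m + torch.log1p(torch.exp(-(a - b).abs()))
assert torch.allclose(ref, torch.logaddexp(a, b))
```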
logical_xor
Map PyTorch: 'aten:logical_xor' to NNEF.
logsumexp
Map PyTorch: 'aten:logsumexp' to the logsumexp fragment.
PyTorch signature: logsumexp(input, dim, keepdim=False).
The fragment always reduces the named axis (no keepdim); when the
user asks for keepdim=True we follow up with an unsqueeze on
the same axis to reinstate it.
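A PyTorch sketch of the reduce-then-unsqueeze sequence:

```python
import torch

x = torch.randn(2, 3, 4)
dim = 1

# Fragment-style: reduce without keepdim, then unsqueeze the same axis back.
ref = torch.logsumexp(x, dim=dim, keepdim=False).unsqueeze(dim)
assert torch.equal(ref, torch.logsumexp(x, dim=dim, keepdim=True))
```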
nan_to_num
Map PyTorch: 'aten:nan_to_num' to NNEF.
Decomposed to pure NNEF stdlib (ne for NaN via the IEEE-754
NaN != NaN invariant, gt/lt against the dtype's finite
range for +/-inf, plus select). No tract extension needed.
Defaults match torch: NaN -> 0; +inf -> finfo.max; -inf -> finfo.min.
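A PyTorch sketch of the same select chain (default replacement values):

```python
import torch

x = torch.tensor([1.0, float("nan"), float("inf"), -float("inf")])
fmax = torch.finfo(x.dtype).max

# NaN != NaN picks out NaNs; comparisons against the dtype's finite
# range pick out the infinities; three selects apply the replacements.
out = torch.where(x != x, torch.zeros_like(x), x)
out = torch.where(x > fmax, torch.full_like(x, fmax), out)
out = torch.where(x < -fmax, torch.full_like(x, -fmax), out)
assert torch.equal(out, torch.nan_to_num(x))
```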
outer
Map PyTorch: 'aten:outer' to NNEF.
torch.outer(a, b) over 1-D inputs is a[:, None] * b[None, :].
Lower to two unsqueezes and a broadcasting mul.
Axes are kept positive. Tract's NNEF unsqueeze deserializer
(tract_core::ops::change_axes::AxisOp::change_shape) does not
normalize negative axes and panics with "smallvec: index exceeds
length"; verified across tract 0.20.22 through 0.23.0-dev.5. This
matches the wider t2n convention: the dedicated unsqueeze op
handler also normalizes via pick_axis.
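The lowering in PyTorch terms:

```python
import torch

a, b = torch.randn(3), torch.randn(4)

# Two unsqueezes (positive axes) and a broadcasting mul.
ref = a.unsqueeze(1) * b.unsqueeze(0)   # (3, 1) * (1, 4) -> (3, 4)
assert torch.equal(ref, torch.outer(a, b))
```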
pairwise_distance
Map PyTorch: 'aten:pairwise_distance' to NNEF via a fragment.
pairwise_distance(a, b, p, eps, keepdim) computes
(sum(|a - b + eps|^p, axis=-1))^(1/p). Torch keeps the reduced
last axis only when keepdim=True; the fragment squeezes it
unconditionally and we re-unsqueeze after the call when needed.
Pure NNEF stdlib (sub / abs / pow / sum_reduce / squeeze).
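A PyTorch check of the formula (keepdim=False path):

```python
import torch
import torch.nn.functional as F

a, b = torch.randn(5, 3), torch.randn(5, 3)
p, eps = 2.0, 1e-6

# (sum(|a - b + eps|^p, axis=-1))^(1/p), reduced axis squeezed away.
ref = (a - b + eps).abs().pow(p).sum(-1).pow(1.0 / p)
assert torch.allclose(ref, F.pairwise_distance(a, b, p=p, eps=eps))
```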
remainder
Map PyTorch: 'aten:remainder' to NNEF.
softshrink
Map PyTorch: 'aten:softshrink' to the softshrink fragment.
square
Map PyTorch: 'aten:square' to NNEF.
x.square() is x * x; a dedicated NNEF sqr op exists in some
runtimes, but the simplest portable form is a mul with the same
tensor on both sides.
std_mean
Map PyTorch: 'aten:std_mean' (returns (std, mean)) to NNEF.
tensordot
Map PyTorch: 'aten:tensordot' to NNEF via tract_core_einsum.
tensordot(a, b, dims_a, dims_b) contracts the paired axes (same
size on each side) and produces an output of rank
a.rank + b.rank - 2 * len(dims_a). The einsum expression is
built so each contracted axis-pair shares a label.
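A concrete instance of the label construction, checked against torch.tensordot (the axis choices are illustrative):

```python
import torch

a, b = torch.randn(2, 3, 4), torch.randn(4, 3, 5)
dims_a, dims_b = [1, 2], [1, 0]   # contract a's axes 1,2 with b's axes 1,0

# Each contracted axis-pair shares a label; free labels pass through.
# Here: a -> "abc", b -> "cbd", out -> "ad" (b and c are the shared labels).
ref = torch.einsum("abc,cbd->ad", a, b)
assert torch.allclose(ref, torch.tensordot(a, b, dims=(dims_a, dims_b)))
```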
var
Map PyTorch: 'aten:var' to NNEF.
Centered second moment along dim with arbitrary correction
(denominator = N - correction). Lowered to mean_reduce + sub + sqr
+ (sum_reduce / mean_reduce) so any correction value works without
relying on NNEF's var fragment (which always squeezes and so
can't honor keepdim=True).
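A PyTorch sketch of the decomposition with a non-default correction:

```python
import torch

x = torch.randn(4, 6)
dim, correction = 1, 2   # denominator N - correction = 6 - 2

# mean_reduce + sub + sqr + sum_reduce over the named axis.
mean = x.mean(dim=dim, keepdim=True)
sq = (x - mean).pow(2)
n = x.shape[dim]
ref = sq.sum(dim=dim) / (n - correction)
assert torch.allclose(ref, torch.var(x, dim=dim, correction=correction))
```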
var_mean
Map PyTorch: 'aten:var_mean' (returns (var, mean)) to NNEF.