reducer
torch_to_nnef.op.aten.reducer
aminmax
Map PyTorch: 'aten::aminmax' to NNEF.
aminmax(input, dim=None, keepdim=False) returns a (min, max)
tuple. Decomposed into two independent reductions: min_reduce
into outputs[0] and max_reduce into outputs[1]. Squeeze
handling is shared with the rest of the reducer family via
reducer_helper.
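A minimal PyTorch-level sketch of this decomposition, useful as a reference for the expected numerics; the function name `aminmax_decomposed` is illustrative and not part of the library.

```python
import torch

def aminmax_decomposed(x, dim=None, keepdim=False):
    """Illustrative mirror of the export: two independent reductions."""
    if dim is None:
        lo, hi = x.min(), x.max()          # global min_reduce / max_reduce
        if keepdim:
            lo = lo.reshape([1] * x.ndim)
            hi = hi.reshape([1] * x.ndim)
        return lo, hi
    # min_reduce feeds outputs[0], max_reduce feeds outputs[1]
    return x.amin(dim=dim, keepdim=keepdim), x.amax(dim=dim, keepdim=keepdim)

x = torch.tensor([[1.0, -2.0], [3.0, 0.5]])
ref = torch.aminmax(x, dim=1)
lo, hi = aminmax_decomposed(x, dim=1)
assert torch.equal(lo, ref.min) and torch.equal(hi, ref.max)
```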
count_nonzero
Map PyTorch: 'aten::count_nonzero' to NNEF.
count_nonzero(input, dim=None) returns the number of non-zero
elements in input along dim (or over the whole tensor when
dim=None) as an int64 scalar / reduced tensor. Decomposed as
ne(x, 0) -> tract_core_cast(i64) -> sum_reduce(axes) -> squeeze.
Intermediate NTensors are built explicitly with their kept-dim
shapes rather than via add_single_output_op_from_nnef_tensors,
which inherits node.outputs[0].shape. That shared helper would
stamp the rank-0 final shape onto every intermediate, tripping
the rank-align pass: ne(input, scalar_zero) would see both
operands as "scalar-like", squeeze the rank-1 input to a scalar
before evaluating, and make the downstream sum_reduce panic.
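A step-for-step mirror of the decomposition in plain PyTorch, showing the kept-dim intermediate followed by the final squeeze; `count_nonzero_decomposed` is a hypothetical name for illustration.

```python
import torch

def count_nonzero_decomposed(x, dim=None):
    """Illustrative mirror of ne -> cast(i64) -> sum_reduce -> squeeze."""
    axes = tuple(range(x.ndim)) if dim is None else (
        (dim,) if isinstance(dim, int) else tuple(dim))
    mask = x.ne(0)                                 # ne(x, 0)
    counts = mask.to(torch.int64)                  # tract_core_cast(i64)
    kept = counts.sum(dim=axes, keepdim=True)      # sum_reduce keeps the axes
    for ax in sorted(axes, reverse=True):          # squeeze drops them again
        kept = kept.squeeze(ax)
    return kept

x = torch.tensor([[0, 1, 2], [3, 0, 0]])
assert torch.equal(count_nonzero_decomposed(x, dim=1),
                   torch.count_nonzero(x, dim=1))
assert torch.equal(count_nonzero_decomposed(x), torch.count_nonzero(x))
```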
nanmean
Map PyTorch: aten::nanmean -> NaN-skipping mean.
Computed as the sum of the NaN-replaced input divided by the count of non-NaN elements along the reduce axes.
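A sketch of this decomposition in PyTorch terms, relying on the same NaN != NaN trick described for nansum below; the function name is illustrative.

```python
import torch

def nanmean_decomposed(x, dim, keepdim=False):
    """Illustrative: sum of NaN-replaced input / count of non-NaN elements."""
    not_nan = x.eq(x)                                # false only for NaN
    zeroed = torch.where(not_nan, x, torch.zeros_like(x))
    total = zeroed.sum(dim=dim, keepdim=keepdim)
    count = not_nan.to(x.dtype).sum(dim=dim, keepdim=keepdim)
    return total / count

x = torch.tensor([[1.0, float("nan"), 3.0]])
assert torch.equal(nanmean_decomposed(x, dim=1), torch.nanmean(x, dim=1))
```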
nansum
Map PyTorch: aten::nansum -> NaN-skipping sum.
Decomposed as sum_reduce(select(isnan(x), 0, x)). NaN detection
uses ne(x, x) (the IEEE-754 NaN != NaN invariant), so the
decomposition only touches NNEF stdlib ops.
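The same decomposition sketched with PyTorch ops, making the ne(x, x) trick visible; `nansum_decomposed` is a hypothetical name for illustration.

```python
import torch

def nansum_decomposed(x, dim, keepdim=False):
    """Illustrative: sum_reduce(select(isnan(x), 0, x))."""
    is_nan = x.ne(x)                                      # ne(x, x): true only for NaN
    zeroed = torch.where(is_nan, torch.zeros_like(x), x)  # select(isnan(x), 0, x)
    return zeroed.sum(dim=dim, keepdim=keepdim)           # sum_reduce

x = torch.tensor([1.0, float("nan"), 3.0])
assert torch.equal(nansum_decomposed(x, dim=0), torch.nansum(x, dim=0))
```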
reduce_all
Map PyTorch: 'aten::reduce_all', 'aten::all' to NNEF.
reduce_any
Map PyTorch: 'aten::reduce_any', 'aten::any' to NNEF.
reduce_max
Map PyTorch: 'aten::reduce_max', 'aten::amax' to NNEF.
reduce_min
Map PyTorch: 'aten::reduce_min', 'aten::amin' to NNEF.