
reducer

torch_to_nnef.op.aten.reducer

aminmax

aminmax(node, op_helper, **kwargs)

Map PyTorch: 'aten::aminmax' to NNEF.

aminmax(input, dim=None, keepdim=False) returns a (min, max) tuple. Decomposed into two independent reductions: min_reduce into outputs[0] and max_reduce into outputs[1]. Squeeze handling is shared with the rest of the reducer family via reducer_helper.
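The decomposition's semantics can be sketched in NumPy, with `np.min`/`np.max` standing in for the NNEF `min_reduce`/`max_reduce` ops (the function name and shapes here are illustrative, not the library API):

```python
import numpy as np

def aminmax_decomposed(x, dim=None, keepdim=False):
    """Mimic aten::aminmax as two independent reductions,
    one feeding outputs[0] (min) and one feeding outputs[1] (max)."""
    if dim is None:
        # Global reduction: PyTorch returns 0-d tensors here.
        return np.min(x), np.max(x)
    return (np.min(x, axis=dim, keepdims=keepdim),
            np.max(x, axis=dim, keepdims=keepdim))

x = np.array([[1.0, -2.0, 3.0],
              [4.0, 0.5, -6.0]])
mn, mx = aminmax_decomposed(x, dim=1)
# mn -> [-2., -6.]; mx -> [3., 4.]
```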

argmax

argmax(node, op_helper, **kwargs)

Map PyTorch: 'aten::argmax' to NNEF.

argmin

argmin(node, op_helper, **kwargs)

Map PyTorch: 'aten::argmin' to NNEF.

count_nonzero

count_nonzero(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten::count_nonzero' to NNEF.

count_nonzero(input, dim=None) returns the number of non-zero elements in input along dim (or globally when dim=None) as an int64 scalar / reduced tensor. Decomposed as ne(x, 0) -> tract_core_cast(i64) -> sum_reduce(axes) -> squeeze.

Intermediate NTensors are built explicitly with their kept-dim shapes rather than via add_single_output_op_from_nnef_tensors, which inherits node.outputs[0].shape. That shared helper would stamp the rank-0 final shape onto every intermediate, which trips the rank-align pass: ne(input, scalar_zero) would see both operands as "scalar-like", squeeze the rank-1 input to a scalar before evaluating, and panic the downstream sum_reduce.
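The op chain above can be traced in NumPy, keeping each intermediate at its kept-dim shape until the final squeeze (NumPy calls stand in for the NNEF/tract ops named in the decomposition; this is a semantic sketch, not the converter's code):

```python
import numpy as np

def count_nonzero_decomposed(x, dim=None):
    mask = np.not_equal(x, 0)             # ne(x, 0)
    counts = mask.astype(np.int64)        # tract_core_cast(i64)
    axes = tuple(range(x.ndim)) if dim is None else (dim,)
    # sum_reduce(axes): intermediates keep their reduced dims...
    summed = np.sum(counts, axis=axes, keepdims=True)
    # ...and only the final squeeze drops them.
    return np.squeeze(summed, axis=axes)

x = np.array([[0, 1, 2],
              [3, 0, 0]])
# Global: 3 non-zero elements; per-row (dim=1): [2, 1].
```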

max_

max_(node, op_helper, **kwargs)

Map PyTorch: 'aten::max' to NNEF.

mean

mean(node, op_helper, **kwargs)

Map PyTorch: 'aten::mean' to NNEF.

min_

min_(node, op_helper, **kwargs)

Map PyTorch: 'aten::min' to NNEF.

nanmean

nanmean(node, op_helper, **kwargs)

Map PyTorch: 'aten::nanmean' to a NaN-skipping mean.

Computed as the sum of the NaN-replaced input divided by the count of non-NaN elements along the reduce axes.
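A minimal NumPy sketch of that sum-over-count decomposition, reusing the ne(x, x) NaN test from the nansum entry below (illustrative names, not the library API):

```python
import numpy as np

def nanmean_decomposed(x, dim, keepdim=False):
    nan_mask = np.not_equal(x, x)        # ne(x, x): True only for NaN
    zeroed = np.where(nan_mask, 0.0, x)  # NaN-replaced input
    total = np.sum(zeroed, axis=dim, keepdims=keepdim)
    # Count of non-NaN elements along the same reduce axes.
    count = np.sum((~nan_mask).astype(x.dtype), axis=dim, keepdims=keepdim)
    return total / count

x = np.array([[1.0, np.nan, 3.0],
              [np.nan, np.nan, 6.0]])
# nanmean along dim=1 -> [2.0, 6.0]
```

As in torch.nanmean, an all-NaN slice yields a zero count and thus NaN out.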

nansum

nansum(node, op_helper, **kwargs)

Map PyTorch: 'aten::nansum' to a NaN-skipping sum.

Decomposed as sum_reduce(select(isnan(x), 0, x)). NaN detection uses ne(x, x) (an IEEE-754 invariant: NaN is the only value unequal to itself), so the decomposition only touches NNEF stdlib ops.
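The same decomposition sketched in NumPy, with np.where playing the role of NNEF select (a semantic sketch, not the converter's code):

```python
import numpy as np

def nansum_decomposed(x, dim=None, keepdim=False):
    # IEEE-754: NaN is the only value for which x != x holds,
    # so ne(x, x) doubles as isnan without a dedicated op.
    zeroed = np.where(np.not_equal(x, x), 0.0, x)      # select(isnan(x), 0, x)
    return np.sum(zeroed, axis=dim, keepdims=keepdim)  # sum_reduce

x = np.array([1.0, np.nan, 2.5, np.nan])
# nansum -> 3.5
```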

prod

prod(node, op_helper, inference_target, **kwargs)

Map PyTorch: 'aten::prod' to NNEF.

reduce_all

reduce_all(node, op_helper, **kwargs)

Map PyTorch: 'aten::reduce_all', 'aten::all' to NNEF.

reduce_any

reduce_any(node, op_helper, **kwargs)

Map PyTorch: 'aten::reduce_any', 'aten::any' to NNEF.

reduce_max

reduce_max(node, op_helper, **kwargs)

Map PyTorch: 'aten::reduce_max', 'aten::amax' to NNEF.

reduce_min

reduce_min(node, op_helper, **kwargs)

Map PyTorch: 'aten::reduce_min', 'aten::amin' to NNEF.

reduce_sum

reduce_sum(node, op_helper, **kwargs)

Map PyTorch: 'aten::reduce_sum', 'aten::sum' to NNEF.