baddbmm
Map PyTorch: 'aten:baddbmm', 'aten:addmm' to NNEF.
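The semantics being mapped can be sanity-checked directly in PyTorch (a sketch with arbitrary shapes, not the t2n emission itself):

```python
import torch

# baddbmm(input, b1, b2, beta, alpha) == beta * input + alpha * bmm(b1, b2)
inp = torch.randn(4, 3, 5)
b1 = torch.randn(4, 3, 2)
b2 = torch.randn(4, 2, 5)
out = torch.baddbmm(inp, b1, b2, beta=0.5, alpha=2.0)
ref = 0.5 * inp + 2.0 * torch.bmm(b1, b2)

# addmm is the same pattern without the batch dimension:
m = torch.randn(3, 5)
a = torch.randn(3, 2)
c = torch.randn(2, 5)
out2 = torch.addmm(m, a, c, beta=0.5, alpha=2.0)
ref2 = 0.5 * m + 2.0 * (a @ c)
```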
conv_tbc
Map PyTorch: 'aten:conv_tbc' to NNEF.
conv_tbc(input, weight, bias, pad) is a 1-D convolution over a
(T, B, C) input -- time-batch-channel layout -- with weight
(kernel, C_in, C_out). Semantically equivalent to
conv1d(input.permute(1, 2, 0), weight.permute(2, 1, 0), bias,
padding=pad).permute(2, 0, 1), which is exactly the
permute -> conv -> permute chain we emit.
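That identity can be verified numerically (a sketch; shapes and padding are arbitrary):

```python
import torch

T, B, C_in, C_out, K, pad = 7, 2, 3, 4, 3, 1
x = torch.randn(T, B, C_in)          # (T, B, C) time-batch-channel input
w = torch.randn(K, C_in, C_out)      # conv_tbc weight layout
b = torch.randn(C_out)

out_tbc = torch.conv_tbc(x, w, b, pad)

# The permute -> conv1d -> permute chain described above:
# conv1d expects (B, C_in, T) input and (C_out, C_in, K) weight.
ref = torch.conv1d(
    x.permute(1, 2, 0), w.permute(2, 1, 0), b, padding=pad
).permute(2, 0, 1)
```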
conv_transpose_nd
Map PyTorch: 'aten:conv_transpose{1,2,3}d' to NNEF.
Marked CompositeImplicitAutograd upstream, so PyTorch usually
decomposes these to aten::_convolution(transposed=True) before
the trace reaches t2n. Registering them anyway keeps the support
page accurate and gives a working path if PyTorch ever stops
decomposing for some platform.
Signature: (input, weight, bias?, stride, padding, output_padding,
groups, dilation) -- 8 positional args.
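The 8-positional-arg aten signature lines up with the keyword form of `torch.nn.functional.conv_transpose1d`; a quick check of that correspondence (a sketch, assuming the current `torch.ops.aten` dispatch):

```python
import torch

x = torch.randn(1, 4, 8)
w = torch.randn(4, 2, 3)  # transposed-conv weight: (C_in, C_out/groups, K)
b = torch.randn(2)

# (input, weight, bias, stride, padding, output_padding, groups, dilation)
out = torch.ops.aten.conv_transpose1d(x, w, b, [2], [1], [1], 1, [1])

ref = torch.nn.functional.conv_transpose1d(
    x, w, b, stride=2, padding=1, output_padding=1, groups=1, dilation=1
)
```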
dot
Map PyTorch: 'aten:dot' to NNEF.
torch.dot(a, b) is the 1-D x 1-D inner product, returning a
scalar. NNEF's matmul requires rank >= 2, so we unsqueeze the
inputs to (1, K) and (K, 1), matmul, then squeeze the (1, 1) back
to a scalar.
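The unsqueeze/matmul/squeeze emulation described above, checked against `torch.dot` (a sketch):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])

# (K,) x (K,) -> (1, K) @ (K, 1) -> (1, 1) -> scalar
emulated = (a.unsqueeze(0) @ b.unsqueeze(1)).squeeze()
```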
einsum
Map PyTorch: 'aten:einsum' to NNEF.
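For reference, the einsum semantics being translated; the equation here reduces to a plain matmul (a sketch of the op's meaning, not of t2n's lowering strategy):

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

# "ij,jk->ik" is exactly a 2-D matrix product
out = torch.einsum("ij,jk->ik", a, b)
ref = a @ b
```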
linear
Map PyTorch: 'aten:linear' to NNEF.
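`aten:linear` computes `x @ W.T + b` with weight stored as (out_features, in_features); a quick check of that definition (a sketch):

```python
import torch

x = torch.randn(2, 4)
w = torch.randn(3, 4)  # (out_features, in_features)
b = torch.randn(3)

out = torch.nn.functional.linear(x, w, b)
ref = x @ w.t() + b
```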
matmul
Map PyTorch: 'aten:matmul', 'aten:bmm', 'aten:mm' to NNEF.
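The three ops differ only in how strict they are about rank; `matmul` subsumes the other two (a sketch of the PyTorch-side semantics):

```python
import torch

# aten:mm  -- strictly 2-D x 2-D
# aten:bmm -- strictly batched 3-D x 3-D
# aten:matmul -- dispatches on rank and broadcasts batch dims
a2, b2 = torch.randn(3, 4), torch.randn(4, 5)
a3, b3 = torch.randn(6, 3, 4), torch.randn(6, 4, 5)

mm_out = torch.mm(a2, b2)
bmm_out = torch.bmm(a3, b3)
```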