Supported aten operators

Note

This page and its tables are auto-generated by docs/contributing/generate_support_page.py and reflect the PyTorch reference docs at the time of generation. Targeted torch version: 2.11. Generated on 14 May 2026.

Warning

Take these results with a grain of salt: many of the listed operators never appear in the torch IR graph that torch_to_nnef traces (they are remapped to more generic ops upstream), and some operators appear so rarely in real models that they remain unsupported simply because the need never arose. SONOS maintains operators on a per-need basis, and contributions are always welcome (see how).

The 'is core' column refers to this PyTorch IR documentation page (the page for torch 2.11 was emptied upstream, so we fall back to torch 2.9, the last version that still enumerates the core ATen IR).

We filter 'backward' and 'sym' operators out of the listing, since they have no place in an inference engine. In-place operations are merged with their out-of-place counterparts, since that distinction is an inference implementation detail.
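The filtering and merging described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual generator code: ATen spells in-place variants with a trailing underscore (e.g. `add_` vs `add`), so the merge amounts to normalizing that suffix, and the backward/sym filter is a simple name check.

```python
# Hypothetical sketch of the listing's filter/merge pass (illustrative only;
# the real generate_support_page.py may differ).

def normalize_op(name: str) -> "str | None":
    """Map a raw aten op name to its canonical table row, or None to drop it."""
    if "backward" in name or name.startswith("sym_"):
        return None  # training / symbolic-shape ops: irrelevant for inference
    # ATen in-place variants carry a trailing underscore (e.g. "add_"):
    # merge them with their out-of-place counterpart.
    if name.endswith("_") and not name.endswith("__"):
        return name.rstrip("_")
    return name

assert normalize_op("add_") == "add"                  # in-place merged
assert normalize_op("relu") == "relu"                 # unchanged
assert normalize_op("convolution_backward") is None   # filtered out
```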

We also exclude a long tail of identifiers that the aten::* source-grep picks up but that can never surface in an inference JIT trace (see the full list at the bottom of this page). This trims the unsupported column to the names where a torch_to_nnef emitter (or a deliberate no-op map) would actually be meaningful.
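The exclusion step can be thought of as a set subtraction against the curated groups in the appendix. A minimal sketch, using a tiny excerpt of the excluded groups as stand-in data (the full sets live at the bottom of this page):

```python
# Illustrative sketch: drop identifiers that a naive grep for "aten::" surfaces
# but that can never appear in an inference JIT trace. Excerpts only.
PYTHON_BUILTINS = {"capitalize", "join", "upper"}      # string builtins, etc.
TEST_PLACEHOLDERS = {"foo", "your_op", "test_symbol"}  # test-harness names
EXCLUDED = PYTHON_BUILTINS | TEST_PLACEHOLDERS

def traceable(names):
    """Keep only names for which an emitter entry would be meaningful."""
    return sorted(n for n in names if n not in EXCLUDED)

print(traceable(["abs", "foo", "join", "matmul"]))  # ['abs', 'matmul']
```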

Total matched operators in torch_to_nnef compared to:

  • core PyTorch opset: 127/138

  • full aten:: opset: 323/530

(376 operators in total are listed as supported by torch_to_nnef)

translated | aten name | aliases | can in-place | is core
abs (aliases: absolute)
acos (aliases: arccos)
acosh (aliases: arccosh)
adaptive_avg_pool1d
add
addmm
alias
amax
amin
any
arange
argmax
argmin
as_strided
asin (aliases: arcsin)
asinh (aliases: arcsinh)
atan (aliases: arctan)
atan2 (aliases: arctan2)
atanh (aliases: arctanh)
avg_pool1d
avg_pool2d
avg_pool3d
bitwise_and
bitwise_not
bitwise_or
bitwise_xor
bmm
cat (aliases: concat, concatenate)
ceil
clamp (aliases: clip)
clone
col2im
constant_pad_nd
convolution
copy
cos
cosh
cumsum
diagonal
div (aliases: divide, true_divide)
elu
embedding
empty
empty_strided
eq
erf (aliases: special_erf)
exp
expand
expm1 (aliases: special_expm1)
fill
flip
floor
fmod
full
full_like
gather
ge (aliases: greater_equal)
gelu
grid_sampler_2d
gt (aliases: greater)
hardtanh
index
index_put
index_select
isinf
isnan
le (aliases: less_equal)
leaky_relu
log
log10
log1p (aliases: special_log1p)
log2
logical_and
logical_not
logical_or
logical_xor
lt (aliases: less)
masked_scatter
max
max_pool2d_with_indices
max_pool3d_with_indices
maximum
mean
min
minimum
mm
mul (aliases: multiply)
native_dropout
native_group_norm
native_layer_norm
ne (aliases: not_equal)
neg (aliases: negative)
nonzero
permute
pow
prod
rand
randn
randperm
reciprocal
reflection_pad1d
reflection_pad2d
reflection_pad3d
relu
remainder
repeat
replication_pad2d
replication_pad3d
resize_
round (aliases: special_round)
rsqrt
scalar_tensor
scatter
scatter_add
scatter_reduce
select
sigmoid (aliases: special_expit)
sign
sin
sinh
slice
slice_scatter
sort
split_with_sizes
sqrt
squeeze
sub (aliases: subtract)
sum
tan
tanh
topk
trunc (aliases: fix)
unsqueeze
upsample_bilinear2d
upsample_nearest2d
var
view
where
adaptive_avg_pool2d-
adaptive_avg_pool3d-
adaptive_max_pool1d-
adaptive_max_pool2d-
adaptive_max_pool3d-
addbmm-
addcdiv-
addcmul-
addmv-
addr-
affine_grid_generator-
all-
allclose-
alpha_dropout-
aminmax-
angle-
argsort-
argwhere-
as_tensor-
atleast_1d-
atleast_2d-
atleast_3d-
baddbmm-
bartlett_window-
batch_norm-
bernoulli-
bias_addmm-
bilinear-
binary_cross_entropy-
binary_cross_entropy_with_logits-
bincount-
binomial-
bitwise_left_shift-
bitwise_right_shift-
blackman_window-
broadcast_tensors-
broadcast_to-
bucketize-
cauchy-
cdist-
celu-
chain_matmul-
chalf-
channel_shuffle-
cholesky-
cholesky_inverse-
cholesky_solve-
chunk-
clamp_max-
clamp_min-
column_stack-
complex-
conj-
conj_physical-
contiguous-
conv-
conv1d-
conv2d-
conv3d-
conv_tbc-
conv_transpose1d-
conv_transpose2d-
conv_transpose3d-
convolution_overrideable-
copy_sparse_to_sparse_-
copy_to-
copysign-
cosine_embedding_loss-
cosine_similarity-
cross-
cross_entropy_loss-
ctc_loss-
cummax-
cummin-
cumprod-
dequantize-
detach-
diag-
diagflat-
digamma (aliases: special_digamma, special_psi) -
dist-
dot-
dropout-
dstack-
einsum-
embedding_bag-
embedding_renorm_-
empty_like-
empty_permuted-
empty_quantized-
erfc (aliases: special_erfc) -
erfinv (aliases: special_erfinv) -
exp2 (aliases: special_exp2) -
expand_as-
exponential-
eye-
feature_alpha_dropout-
feature_dropout-
fft_fftfreq-
fft_ihfft2-
fft_ihfftn-
fft_rfftfreq-
flatten-
flatten_dense_tensors-
floor_divide-
floordiv-
fmax-
fmin-
frac-
fractional_max_pool2d-
fractional_max_pool3d-
frexp-
frobenius_norm-
gamma-
gcd-
geometric-
geqrf-
glu-
grid_sampler-
grid_sampler_3d-
group_norm-
gru-
gru_cell-
hamming_window-
hann_window-
hardshrink-
hardsigmoid-
hardswish-
hash_tensor-
heaviside-
hinge_embedding_loss-
histc-
histogram-
histogramdd-
hstack-
hypot-
i0 (aliases: special_i0) -
igamma (aliases: special_gammainc) -
igammac (aliases: special_gammaincc) -
im2col-
index_add-
index_copy-
index_fill-
index_reduce-
instance_norm-
isclose-
isfinite-
istft-
kaiser_window-
kl_div-
kthvalue-
l1_loss-
layer_norm-
lcm-
ldexp-
lerp-
lgamma-
linalg__powsum-
linalg_cond-
linalg_cross-
linalg_det (aliases: det) -
linalg_eig-
linalg_eigh-
linalg_eigvals-
linalg_eigvalsh-
linalg_householder_product (aliases: orgqr) -
linalg_inv (aliases: inverse) -
linalg_ldl_solve-
linalg_lstsq-
linalg_lu-
linalg_lu_solve-
linalg_matrix_exp (aliases: matrix_exp) -
linalg_matrix_norm-
linalg_matrix_power (aliases: matrix_power) -
linalg_matrix_rank-
linalg_norm-
linalg_pinv-
linalg_qr-
linalg_slogdet-
linalg_solve-
linalg_solve_triangular-
linalg_svd-
linalg_tensorsolve-
linalg_vector_norm-
linear-
linspace-
log_normal-
log_sigmoid-
log_softmax (aliases: special_log_softmax) -
logcumsumexp-
logdet-
logit (aliases: special_logit) -
logspace-
logsumexp (aliases: special_logsumexp) -
lstm-
lstm_cell-
lu_solve-
lu_unpack-
mH (aliases: adjoint) -
mT-
margin_ranking_loss-
masked_fill-
masked_select-
matmul (aliases: linalg_matmul) -
matrix_H-
max_pool1d-
max_pool1d_with_indices-
max_pool2d-
max_pool3d-
median-
meshgrid-
mish-
mode-
modf-
movedim (aliases: moveaxis) -
mse_loss-
multi_margin_loss-
multilabel_margin_loss-
multinomial-
mv-
mvlgamma (aliases: special_multigammaln) -
nan_to_num-
nanmedian-
nanquantile-
narrow-
native_channel_shuffle-
native_multi_head_self_attention-
new_empty-
new_empty_strided-
new_full-
new_ones-
new_zeros-
nextafter-
nll_loss-
nll_loss2d-
nll_loss_nd-
nonzero_numpy-
nonzero_static-
norm-
normal-
nuclear_norm-
numel-
numpy_T-
one_hot-
ones-
ones_like-
ormqr-
outer (aliases: ger) -
pad-
pad_sequence-
pairwise_distance-
pdist-
pixel_shuffle-
pixel_unshuffle-
poisson-
polar-
polygamma (aliases: special_polygamma) -
prelu-
put-
qr-
quantile-
quantize-
quantize_per_channel-
quantize_per_tensor-
quantize_per_tensor_dynamic-
quantized_batch_norm-
quantized_gru-
quantized_lstm-
quantized_max_pool1d-
quantized_max_pool2d-
quantized_max_pool3d-
rand_like-
randint-
randint_like-
randn_like-
random-
range-
relu6-
renorm-
repeat_interleave-
replication_pad1d-
reshape-
reshape_as-
resize_as_sparse_-
resolve_conj-
resolve_neg-
rnn_relu-
rnn_relu_cell-
rnn_tanh-
rnn_tanh_cell-
roll-
rot90-
rrelu-
rrelu_with_noise-
rsub-
scaled_dot_product_attention-
searchsorted-
segment_reduce-
selu-
set_-
signbit-
silu-
sinc (aliases: special_sinc) -
size-
smooth_l1_loss-
soft_margin_loss-
softmax (aliases: special_softmax) -
softplus-
softshrink-
special_airy_ai-
special_bessel_j0-
special_bessel_j1-
special_bessel_y0-
special_bessel_y1-
special_chebyshev_polynomial_t-
special_chebyshev_polynomial_u-
special_chebyshev_polynomial_v-
special_chebyshev_polynomial_w-
special_erfcx-
special_hermite_polynomial_h-
special_hermite_polynomial_he-
special_i0e-
special_i1-
special_i1e-
special_laguerre_polynomial_l-
special_legendre_polynomial_p-
special_log_ndtr-
special_modified_bessel_i0-
special_modified_bessel_i1-
special_modified_bessel_k0-
special_modified_bessel_k1-
special_ndtri-
special_scaled_modified_bessel_k0-
special_scaled_modified_bessel_k1-
special_shifted_chebyshev_polynomial_t-
special_shifted_chebyshev_polynomial_u-
special_shifted_chebyshev_polynomial_v-
special_shifted_chebyshev_polynomial_w-
special_spherical_bessel_j0-
special_zeta-
split-
square-
stack-
std-
std_mean-
stft-
t-
take-
tanhshrink-
tensor_split-
tensordot-
threshold-
tile-
to-
trace-
transpose (aliases: swapaxes, swapdims) -
triangular_solve-
tril-
tril_indices-
triplet_margin_loss-
triu-
triu_indices-
type_as-
unbind-
unflatten-
unflatten_dense_tensors-
unfold-
uniform-
unique_consecutive-
unique_dim-
unique_dim_consecutive-
unsafe_chunk-
unsafe_split-
unsafe_split_with_sizes-
upsample-
upsample_bicubic2d-
upsample_linear1d-
upsample_nearest1d-
upsample_nearest3d-
upsample_trilinear3d-
vander-
var_mean-
view_as-
view_as_complex-
view_as_real-
vstack (aliases: row_stack) -
wrapped_linear_prepack-
wrapped_quantized_linear_prepacked-
xlogy (aliases: special_xlogy) -
zero-
zeros-
zeros_like-

Total matched operators in PyTorch's built-in ONNX support, based on this page (the listing for torch 2.11 was removed upstream as of torch 2.9; we fall back to torch 2.8, the last version that still ships the TorchScript ONNX support listing), compared to:

  • core PyTorch opset: 123/138

  • full aten:: opset: 306/530

(348 operators in total are listed as supported by PyTorch's TorchScript ONNX exporter)


Appendix: identifiers excluded from this page

These names are filtered out of the support tables above because they cannot surface in an inference JIT trace. Each group has a documented rationale, followed by its full member list.

Python builtin / scripting false positives (52 names)

capitalize, center, chr, clear, dict, endswith, expandtabs, extend, find, format, get, getelem, hash, hex, isalnum, isalpha, isdecimal, isdigit, isidentifier, islower, isnumeric, isprintable, isspace, istitle, isupper, items, join, keys, ljust, lower, lstrip, oct, ord, partition, popitem, rfind, rindex, rjust, rpartition, rsplit, rstrip, setdefault, sorted, splitlines, startswith, strip, swapcase, title, update, upper, values, zfill

Test harness / placeholder identifiers (13 names)

confirmed_by_owner, foo, mathremainder, percentFormat, pointwise_placeholder, symbolic_b, test, test_symbol, test_vartype, test_vartype2, unknown, view_expand_placeholder, your_op

Distributed / RPC primitives (14 names)

all_gather_into_tensor, all_reduce, fork, get_gradients, is_owner, local_value, owner, owner_name, reduce_scatter_tensor, to_here, wait, wait_tensor, warn, warns

Backend-specific dispatch shims (cudnn / miopen / mkldnn / mps) (26 names)

cudnn_affine_grid_generator, cudnn_batch_norm, cudnn_convolution, cudnn_convolution_add_relu, cudnn_convolution_relu, cudnn_convolution_transpose, cudnn_grid_sampler, cudnn_is_acceptable, miopen_batch_norm, miopen_convolution, miopen_convolution_add_relu, miopen_convolution_relu, miopen_convolution_transpose, miopen_ctc_loss, miopen_depthwise_convolution, miopen_rnn, mkldnn_adaptive_avg_pool2d, mkldnn_convolution, mkldnn_linear, mkldnn_max_pool2d, mkldnn_max_pool3d, mkldnn_reorder_conv2d_weight, mkldnn_reorder_conv3d_weight, mkldnn_rnn_layer, mps_linear, to_mkldnn

Tensor metadata accessors (return a Python value, not a tensor) (60 names)

can_cast, data, dense_dim, device, dtype, element_size, enable_grad, get_autocast_dtype, get_device, get_pool_ceil_padding, grad, has_torch_function, iinfo, initial_seed, int_repr, is_autocast_cpu_enabled, is_autocast_enabled, is_coalesced, is_complex, is_conj, is_contiguous, is_cuda, is_grad_enabled, is_leaf, is_non_overlapping_and_dense, is_nonzero, is_pinned, is_same_size, is_scripting, is_set_to, is_signed, is_strides_like_format, manual_seed, node, op, op_name, output_nr, pin_memory, promote_types, q_per_channel_axis, q_per_channel_scales, q_per_channel_zero_points, q_scale, q_zero_point, qscheme, record_stream, refine_names, rename, requires_grad_, result_type, retain_grad, retains_grad, save, seed, set_data, set_grad_enabled, set_source_Tensor_storage_offset, sparse_dim, storage_offset, stride

Named-tensor API (names stripped before JIT trace) (3 names)

align_as, align_tensors, align_to

Sparse-tensor machinery (NNEF / tract are dense-only) (25 names)

ccol_indices, ccol_indices_copy, coalesce, col_indices, col_indices_copy, copy_sparse_to_sparse, crow_indices, crow_indices_copy, hspmm, indices, indices_copy, nested_to_padded_tensor, row_indices, row_indices_copy, smm, sparse_compressed_tensor, sparse_coo_tensor, sparse_mask, sparse_resize, sparse_resize_and_clear, sparse_sampled_addmm, sspaddmm, to_dense, to_padded_tensor, values_copy

Functionalization *_copy / *_scatter variants (20 names)

alias_copy, as_strided_copy, as_strided_scatter, detach_copy, diagonal_copy, diagonal_scatter, lift, lift_fresh, lift_fresh_copy, narrow_copy, permute_copy, select_copy, slice_copy, slice_inverse, split_copy, unbind_copy, unfold_copy, view_as_complex_copy, view_as_real_copy, view_copy

Batch-norm training / backward-only internals (8 names)

batch_norm_elemt, batch_norm_gather_stats, batch_norm_gather_stats_with_counts, batch_norm_stats, batch_norm_update_stats, native_batch_norm, native_norm, norm_except_dim

linalg_*_ex paired-output variants + deprecated linalg wrappers (12 names)

eig, linalg_cholesky_ex, linalg_inv_ex, linalg_ldl_factor_ex, linalg_lu_factor_ex, lstsq, matrix_rank, pinv, pinverse, solve, svd, symeig

QAT fake_quantize_* training-only ops (6 names)

choose_qparams_optimized, fake_quantize_per_channel_affine, fake_quantize_per_channel_affine_cachemask, fake_quantize_per_tensor_affine, fake_quantize_per_tensor_affine_cachemask, fused_moving_avg_obs_fake_quant

slow_conv* / thnn_conv* dispatcher-fallback kernels (8 names)

conv_depthwise3d, slow_conv3d, slow_conv3d_forward, slow_conv_dilated2d, slow_conv_dilated3d, slow_conv_transpose2d, slow_conv_transpose3d, thnn_conv2d

Python / TorchScript scalar builtins (extra) (25 names)

append, bin, count, cpu, cuda, degrees, dim, divmod, equal, fabs, factorial, insert, is_floating_point, item, len, list, list_with_default, neq, pop, radians, remove, replace, reverse, str, tensor

Inplace storage / metadata mutators stripped by JIT (9 names)

fill_diagonal_, float_power_, rename_, resize, resize_as_, resize_as_sparse, set, sparse_resize_, sparse_resize_and_clear_

Backward / dynamo-autograd internals (10 names)

embedding_renorm, from_file, glu_jvp, log_sigmoid_forward, multilabel_margin_loss_forward, nll_loss_forward, normal_functional, rowwise_prune, rrelu_with_noise_functional, sum_to