Priors over Neural Network weights
From Understanding Deep Learning, Chapter 10: a 1D convolution can be represented as the weight matrix of an MLP under a specific prior in which weights are shared along each diagonal and are zero outside the kernel's band, i.e. a banded Toeplitz matrix (d). A sketch of this equivalence is below.
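A minimal sketch of this idea (my own illustration, not code from the book): a 1D convolution is reproduced exactly by a fully connected layer whose weight matrix repeats the kernel along its diagonals and is zero elsewhere.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
length, kernel_size = 8, 3
x = torch.randn(length)
w = torch.randn(kernel_size)  # the shared conv kernel

# Standard 1D convolution (stride 1, zero padding to keep the length).
conv_out = F.conv1d(x.view(1, 1, -1), w.view(1, 1, -1),
                    padding=kernel_size // 2).flatten()

# The same operation as an MLP weight matrix: row i is the kernel shifted to
# position i, so each diagonal holds a single shared weight and everything
# outside the band is zero (a banded Toeplitz matrix).
W = torch.zeros(length, length)
for i in range(length):
    for k in range(kernel_size):
        j = i + k - kernel_size // 2
        if 0 <= j < length:
            W[i, j] = w[k]

mlp_out = W @ x
print(torch.allclose(conv_out, mlp_out, atol=1e-6))  # True
```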
Rotationally equivariant convolutions can be implemented with isotropic filters, i.e. a prior on the conv2d weights that makes each kernel depend only on distance from its centre:
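A minimal sketch of that prior, assuming 3x3 kernels (the class name `IsotropicConv2d` and the two-parameters-per-kernel construction are my own, not the book's): the centre pixel gets one weight and the surrounding ring shares another, so the kernel is unchanged by 90-degree rotations and the layer commutes with rotations of the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IsotropicConv2d(nn.Module):
    """Conv2d whose 3x3 kernels have one free weight per radius (centre + ring)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Two parameters per (out, in) channel pair: centre value and ring value.
        self.centre = nn.Parameter(torch.randn(out_channels, in_channels))
        self.ring = nn.Parameter(torch.randn(out_channels, in_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Build the isotropic kernel: the centre pixel takes `centre`, the 8
        # neighbours all share `ring`, so the kernel is rotation-symmetric.
        ring_mask = torch.ones(3, 3, device=x.device)
        ring_mask[1, 1] = 0.0
        kernel = (self.centre[..., None, None] * (1.0 - ring_mask)
                  + self.ring[..., None, None] * ring_mask)
        return F.conv2d(x, kernel, padding=1)

# Equivariance check: rotating the input and then convolving matches
# convolving first and then rotating the feature maps.
conv = IsotropicConv2d(1, 4)
x = torch.randn(1, 1, 16, 16)
rot_then_conv = conv(torch.rot90(x, k=1, dims=(-2, -1)))
conv_then_rot = torch.rot90(conv(x), k=1, dims=(-2, -1))
print(torch.allclose(rot_then_conv, conv_then_rot, atol=1e-5))  # True
```

The trade-off is expressiveness: an isotropic 3x3 kernel has only 2 free weights instead of 9, so it cannot detect oriented structure (edges at a particular angle), which is exactly what makes its responses equivariant to rotation.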