Learning Partial Equivariances from Data
| Properties | |
| --- | --- |
| authors | David W. Romero, Suhas Lohit |
| year | 2021 |
| url | https://arxiv.org/abs/2110.10211 |
Monte Carlo Approximation of Group Convolutions
We can approximate group convolutions in expectation by uniformly sampling group elements \(v_j\):
$$
(\psi \hat{*} f)(u_i) = \sum_j \psi (v_j^{-1} u_i)f(v_j) \bar{\mu}_{\mathcal{G}} (v_j)
$$
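A minimal numerical sketch of this approximation (a toy example of my own, not the paper's code): take \(\mathcal{G} = SO(2)\) identified with angles in \([0, 2\pi)\), functions `psi` and `f` defined on the group, and approximate the normalized Haar measure by \(\bar{\mu}_{\mathcal{G}}(v_j) = 1/N\).

```python
import numpy as np

def psi(theta):                      # filter: a toy function on the group
    return np.cos(theta)

def f(theta):                        # signal: a toy function on the group
    return np.exp(np.sin(theta))

def mc_group_conv(u, n_samples=1024, rng=np.random.default_rng(0)):
    """Approximate (psi *^ f)(u) = sum_j psi(v_j^{-1} u) f(v_j) mu_bar(v_j)."""
    v = rng.uniform(0.0, 2 * np.pi, size=n_samples)   # v_j sampled uniformly from G
    # For SO(2), v^{-1} u corresponds to the angle difference u - v.
    return np.sum(psi(u - v) * f(v)) / n_samples      # mu_bar(v_j) = 1 / N

print(mc_group_conv(np.pi / 4))
```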
Main idea
- Prioritize sampling of specific group elements during the group convolution by learning a probability distribution over them.
- 1D continuous groups: use the reparametrization trick to learn, on the Lie algebra of the group, a distribution that is uniform over a connected set of group elements and zero elsewhere \(\to\) partial equivariance (see the sketch after this list).
- 1D discrete groups: learn a Bernoulli distribution over all possible combinations of group elements.
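A minimal sketch of the continuous-group case under my own assumptions (SO(2) parametrized by its Lie algebra \(\mathbb{R}\), with a hypothetical `PartialRotationSampler` module): a learnable parameter sets the width of a uniform distribution over rotation angles, and the reparametrization trick keeps sampling differentiable, so the supported subset of the group can be learned from data.

```python
# Sketch only: SO(2) parametrized by angles; a learned window [-theta*pi, theta*pi]
# determines which rotations the group convolution samples. theta near 1 approaches
# full rotation equivariance, theta near 0 removes it (partial equivariance).
import torch

class PartialRotationSampler(torch.nn.Module):
    def __init__(self, init_raw_theta=1.0):
        super().__init__()
        # Unconstrained parameter, squashed to (0, 1) by a sigmoid below.
        self.raw_theta = torch.nn.Parameter(torch.tensor(float(init_raw_theta)))

    def forward(self, n_samples):
        theta = torch.sigmoid(self.raw_theta)   # learned window size in (0, 1)
        eps = 2 * torch.rand(n_samples) - 1     # eps ~ Uniform(-1, 1), gradient-free noise
        angles = eps * theta * torch.pi         # reparametrization: angle = eps * theta * pi
        return angles                           # rotation angles to use in the group conv

sampler = PartialRotationSampler()
angles = sampler(8)          # differentiable w.r.t. sampler.raw_theta
loss = angles.pow(2).mean()  # toy loss: gradients flow back into the window size
loss.backward()
print(sampler.raw_theta.grad)
```

If the learned window covers the whole group, the layer behaves like a standard group convolution; shrinking it towards the identity element leaves only translation equivariance, which is the sense in which equivariance becomes partial.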
Citations:
- Self-Supervised Detection of Perfect and Partial Input-Dependent Symmetries
- Color Equivariant Convolutional Networks
- Equivariance-aware architectural optimization of neural networks
- Approximation-Generalization Trade-offs under (Approximate) Group Equivariance