Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups

Properties
Authors: David M. Knigge, David W. Romero, Erik J. Bekkers

Abstract

In this work, we investigate the properties of representations learned by regular G-CNNs, and show considerable parameter redundancy in group convolution kernels. This finding motivates further weight-tying by sharing convolution kernels over subgroups. To this end, we introduce convolution kernels that are separable over the subgroup and channel dimensions.

Interesting because separating the group convolution kernels over the subgroup and channel dimensions reduces the total parameter count. This extra weight-tying also has a regularising effect.
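As a rough illustration of where the parameter saving comes from, here is a minimal NumPy sketch. It is not the paper's implementation; the exact factorisation, the shapes, and the names `k_H` / `k_spatial` are illustrative assumptions. The idea shown is that a full group convolution kernel assigns a weight to every (output channel, input channel, subgroup element, spatial position) tuple, whereas a separable kernel factors out the subgroup dimension and reconstructs the full kernel as an outer product.

```python
import numpy as np

# Hypothetical example sizes: channels, subgroup samples (e.g. rotations), spatial kernel size.
c_out, c_in, n_h, k = 64, 64, 8, 5

# Non-separable group convolution kernel: one weight per
# (output channel, input channel, subgroup element, spatial position).
full_params = c_out * c_in * n_h * k * k

# Subgroup-separable factorisation (sketch):
#   kernel[o, i, h, x, y] ~= k_H[o, h] * k_spatial[o, i, x, y]
k_H = np.random.randn(c_out, n_h)            # per-output-channel profile over the subgroup
k_spatial = np.random.randn(c_out, c_in, k, k)  # spatial/channel mixing, shared across subgroup elements
separable_params = k_H.size + k_spatial.size

# The full kernel can be reconstructed from the factors via an outer product.
kernel = np.einsum('oh,oixy->oihxy', k_H, k_spatial)
assert kernel.shape == (c_out, c_in, n_h, k, k)

print(full_params, separable_params)  # 819200 vs 102912 for these sizes
```

In practice one would typically apply the factors as successive, cheaper convolutions rather than materialising the full kernel; the outer product above just makes the weight-tying over the subgroup dimension explicit.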

Citations:
- Relaxing Equivariance Constraints with Non-stationary Continuous Filters