TRecViT: A Recurrent Video Transformer

Properties
authors Viorica Pătrăucean, Xu Owen He, Joseph Heyward, Chuhan Zhang, Mehdi S. M. Sajjadi, George-Cristian Muraru, Artem Zholus, Mahdi Karami, Ross Goroshin, Yutian Chen, Simon Osindero, João Carreira, Razvan Pascanu
year 2024
url http://arxiv.org/abs/2412.14294

Abstract

We propose a novel block for video modelling. It relies on a time-space-channel factorisation with dedicated blocks for each dimension: gated linear recurrent units (LRUs) perform information mixing over time, self-attention layers perform mixing over space, and MLPs over channels. The resulting architecture, TRecViT, performs well on sparse and dense tasks, trained in supervised or self-supervised regimes. Notably, our model is causal and outperforms or is on par with a pure attention model, ViViT-L, on large-scale video datasets (SSv2, Kinetics400), while having \(3\times\) fewer parameters, a \(12\times\) smaller memory footprint, and a \(5\times\) lower FLOPs count. Code and checkpoints will be made available online at https://github.com/google-deepmind/trecvit.

Notes

Zotero Link
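
A minimal PyTorch sketch of the time-space-channel factorisation described in the abstract: a causal recurrence mixes information over time for each spatial token, self-attention mixes over space within each frame, and an MLP mixes over channels. The class names, tensor shapes, and the simplified sigmoid-gated recurrence below are illustrative assumptions, not the paper's actual gated LRU; see the linked repository for the official implementation.

```python
import torch
import torch.nn as nn


class SimplifiedGatedRecurrence(nn.Module):
    """Stand-in for the gated LRU: causal, input-gated linear recurrence over time."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, time, dim)
        h = torch.zeros_like(x[:, 0])
        outs = []
        for t in range(x.shape[1]):
            a = torch.sigmoid(self.gate(x[:, t]))     # forget gate in (0, 1)
            h = a * h + (1 - a) * self.proj(x[:, t])  # causal linear recurrence
            outs.append(h)
        return torch.stack(outs, dim=1)


class FactorisedVideoBlock(nn.Module):
    """Illustrative TRecViT-style block: recurrence over time, self-attention
    over space, MLP over channels, each with a pre-norm residual connection."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.time_norm = nn.LayerNorm(dim)
        self.time_mix = SimplifiedGatedRecurrence(dim)
        self.space_norm = nn.LayerNorm(dim)
        self.space_mix = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp_norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (batch, time, tokens, dim)
        b, t, n, d = x.shape
        # Time mixing: treat each spatial token as an independent sequence.
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
        xt = xt + self.time_mix(self.time_norm(xt))
        x = xt.reshape(b, n, t, d).permute(0, 2, 1, 3)
        # Space mixing: self-attention over patch tokens within each frame.
        xs = x.reshape(b * t, n, d)
        xs_n = self.space_norm(xs)
        xs = xs + self.space_mix(xs_n, xs_n, xs_n, need_weights=False)[0]
        x = xs.reshape(b, t, n, d)
        # Channel mixing: position-wise MLP.
        return x + self.mlp(self.mlp_norm(x))


# Usage: a clip of 8 frames, 196 patch tokens per frame, 256-dim embeddings.
block = FactorisedVideoBlock(dim=256)
video_tokens = torch.randn(2, 8, 196, 256)
out = block(video_tokens)  # same shape: (2, 8, 196, 256)
```

Because the time-mixing path is a causal recurrence rather than temporal attention, the per-step state is constant in the sequence length, which is consistent with the abstract's claims of a smaller memory footprint and lower FLOPs than a pure attention model.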