Over the last few years, autoregressive Transformers have delivered a steady stream of breakthroughs in generative modeling. These models generate each element of a sample – the pixels of an image, the characters of a text (typically in "token" chunks), the samples of an audio waveform, and so on – by predicting one element after another. When predicting the next element, the model can look back at the ones that were generated earlier.
However, each of a Transformer's layers grows more expensive as more elements are used as input, and practitioners can only afford to train deep Transformers on sequences of no more than about 2,048 elements. As a result, most Transformer-based models ignore everything beyond the recent past (around 1,500 words or 1/6 of a small image) when making a prediction.
In contrast, our recently developed Perceiver models give excellent results on a variety of real-world tasks with up to around 100,000 elements. Perceivers use cross-attention to encode inputs into a latent space, decoupling the compute required to process the input from model depth. Perceivers also spend a fixed cost, regardless of input size, at nearly every layer.
While latent-space encoding handles all elements in a single pass, autoregressive generation assumes that processing happens one element at a time. To address this problem, Perceiver AR proposes a simple solution: align the latents one by one with the final elements of the input, and carefully mask the input so that each latent sees only earlier elements.
The result’s an structure (proven above) that attends to as a lot as 50x longer inputs as commonplace Transformers, whereas deploying as broadly (and basically as simply) as commonplace decoder-only Transformers.
Perceiver AR scales considerably better with size than both standard Transformers and Transformer-XL models across a range of sequence lengths, in real terms. This property lets us build very effective long-context models. For example, we find that a 60-layer Perceiver AR with context length 8192 outperforms a 42-layer Transformer-XL on a book-length generation task, while running faster in real wall-clock terms.
On standard, long-context image (ImageNet 64×64), language (PG-19), and music (MAESTRO) generation benchmarks, Perceiver AR produces state-of-the-art results. Increasing input context by decoupling input size from compute budget leads to several intriguing results:
- The compute budget can be adapted at eval time, allowing us to spend less and smoothly degrade quality, or to spend more for improved generation (see the sketch after this list).
- A larger context allows Perceiver AR to outperform Transformer-XL, even when spending the same amount of compute. We find that greater context leads to improved model performance even at affordable scale (~1B parameters).
- Perceiver AR's sample quality shows much less sensitivity to the order in which it generates elements. This makes Perceiver AR easy to apply to settings that don't have a natural left-to-right ordering, such as data like images, whose structure spans more than one dimension.
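To illustrate the first point with the sketch above (the shapes and latent counts are purely hypothetical, chosen for illustration):

```python
key = jax.random.PRNGKey(0)
context = jax.random.normal(key, (8192, 512))  # a long embedded input context

# At eval time the number of latents can be chosen independently of training:
cheap  = causal_cross_attend(context, jax.random.normal(key, (512, 512)))   # less compute
better = causal_cross_attend(context, jax.random.normal(key, (2048, 512)))  # more compute
```

The cross-attention cost grows with the number of latents times the context length, and the latent self-attention stack with the number of latents squared, so choosing fewer latents spends less compute at the price of generation quality.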
Using a dataset of piano music, we trained Perceiver AR to generate new pieces of music from scratch. Because each new note is predicted based on the full sequence of notes that came before it, Perceiver AR is able to produce pieces with a high level of melodic, harmonic, and rhythmic coherence:
Learn more about using Perceiver AR:
- Download the JAX code for training Perceiver AR on Github
- Read our paper on arXiv
- Check out our spotlight presentation at ICML 2022
- See the Google Magenta blog post with more music!