Overview
This repository contains the model from *PoNet: Pooling Network for Efficient Token Mixing in Long Sequences* by Chao-Hong Tan, Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, and Zhen-Hua Ling.
We propose a novel Pooling Network (PoNet) for token mixing with linear complexity. It achieves competitive performance on the Long Range Arena benchmark and reaches 95.7% of the accuracy of BERT on the GLUE benchmark, demonstrating its transferability.
The abstract from the paper is the following:
Transformer-based models have achieved great success in various NLP, vision, and speech tasks. However, the core of Transformer, the self-attention mechanism, has a quadratic time and memory complexity with respect to the sequence length, which hinders applications of Transformer-based models to long sequences. Many approaches have been proposed to mitigate this problem, such as sparse attention mechanisms, low-rank matrix approximations and scalable kernels, and token mixing alternatives to self-attention. We propose a novel Pooling Network (PoNet) for token mixing in long sequences with linear complexity. We design multi-granularity pooling and pooling fusion to capture different levels of contextual information and combine their interactions with tokens. On the Long Range Arena benchmark, PoNet significantly outperforms Transformer and achieves competitive accuracy, while being only slightly slower than the fastest model, FNet, across all sequence lengths measured on GPUs. We also conduct systematic studies on the transfer learning capability of PoNet and observe that PoNet achieves 95.7 percent of the accuracy of BERT on the GLUE benchmark, outperforming FNet by 4.5 percent relative. Comprehensive ablation analysis demonstrates effectiveness of the designed multi-granularity pooling and pooling fusion for token mixing in long sequences and efficacy of the designed pre-training tasks for PoNet to learn transferable contextualized language representations.
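To make the idea concrete, here is a deliberately simplified sketch of pooling-based token mixing with linear complexity. It combines a sequence-level pooling and a sliding-window pooling with the original token representations. This is not PoNet's actual multi-granularity pooling and pooling fusion (segment-level pooling and the fusion mechanism are omitted), and the function name `pooling_token_mixing` is illustrative only.

```python
import torch
import torch.nn.functional as F

def pooling_token_mixing(x: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Toy pooling-based token mixing for x of shape (batch, seq_len, hidden)."""
    # Global pooling: one context vector per sequence, broadcast to every token.
    global_pool = x.max(dim=1, keepdim=True).values               # (B, 1, H)
    # Local pooling: max over a sliding window centered on each token.
    local_pool = F.max_pool1d(
        x.transpose(1, 2), kernel_size=window, stride=1, padding=window // 2
    ).transpose(1, 2)                                              # (B, L, H)
    # Fuse the pooled context with the original tokens (O(L) overall).
    return x + global_pool + local_pool
```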
Tips:
- PoNet is an encoder architecture designed for efficient modeling of long sequences.
- For applications with sequences longer than 512 tokens, the position embeddings need to be extended. Please refer to https://github.com/lxchtan/PoNet/blob/main/run_long_classification.py#L297 for the reference implementation; a hedged sketch of one generic approach is shown after this list.
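Below is a minimal sketch of one generic way to extend a position-embedding table by tiling the pre-trained weights up to a longer maximum length. The function name `extend_position_embeddings` and the tiling strategy are illustrative assumptions, not the exact logic of the linked script.

```python
import torch
from torch import nn

def extend_position_embeddings(old_emb: nn.Embedding, new_max_len: int) -> nn.Embedding:
    """Tile a pre-trained position-embedding table to a longer maximum length.

    Generic illustration only; see run_long_classification.py#L297 for the
    approach actually used with PoNet.
    """
    old_max_len, dim = old_emb.weight.shape
    new_emb = nn.Embedding(new_max_len, dim)
    with torch.no_grad():
        for start in range(0, new_max_len, old_max_len):
            end = min(start + old_max_len, new_max_len)
            # Copy (a prefix of) the old table into each tile of the new one.
            new_emb.weight[start:end] = old_emb.weight[: end - start]
    return new_emb
```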
This model was contributed by chtan. The original code can be found at https://github.com/lxchtan/PoNet.
How to Use
Note: this model requires `trust_remote_code=True` to be passed to `from_pretrained`, because it uses a custom model architecture that is not yet part of the `transformers` package.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("chtan/ponet-base-uncased", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("chtan/ponet-base-uncased", trust_remote_code=True)
```
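A short usage sketch (the example sentence is arbitrary). Note that `ponet-base-uncased` is a pre-trained base checkpoint, so the sequence-classification head is newly initialized and the logits are only meaningful after fine-tuning on a downstream task.

```python
# Tokenize a sentence and run a forward pass.
inputs = tokenizer("PoNet mixes tokens with multi-granularity pooling.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```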