Extra Global Attention Designation Using Keyword Detection in Sparse Transformer Architectures
Document Type
Article
Publication Date
10-11-2024
Department
Department of Computer Science; Department of Electrical and Computer Engineering
Abstract
In this paper, we propose an extension to the Longformer Encoder-Decoder (LED), a popular sparse transformer architecture. One common challenge with sparse transformers is that they can struggle to encode long-range context, such as connections between topics discussed at the beginning and end of a document. A method to selectively increase global attention is proposed and demonstrated for abstractive summarization tasks on several benchmark data sets. By prefixing the transcript with additional keywords and encoding global attention on these keywords, improvement in zero-shot, few-shot, and fine-tuned cases is demonstrated on some benchmark data sets.
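The abstract's core idea, prefixing the input with detected keywords and assigning them global attention in LED, can be sketched with the Hugging Face transformers library. This is a minimal illustration, not the authors' code: the checkpoint name, keyword list, and prefix format are assumptions for demonstration only.

```python
# Sketch of keyword-prefixed global attention for LED (illustrative assumptions:
# checkpoint, prefix format, and keywords are not taken from the paper).
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

document = "..."                       # long transcript to summarize
keywords = ["budget", "timeline"]      # hypothetical detected keywords

# Prepend the keywords to the document (assumed prefixing scheme).
prefix = ", ".join(keywords) + ". "
inputs = tokenizer(prefix + document, return_tensors="pt",
                   truncation=True, max_length=16384)

# Global attention on the first token (standard LED practice) plus the keyword prefix.
prefix_len = 1 + len(tokenizer(prefix, add_special_tokens=False)["input_ids"])
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, :prefix_len] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```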
Publication Title
arXiv
Recommended Citation
Lucas, E., Kangas, D. J., & Havens, T. C. (2024). Extra Global Attention Designation Using Keyword Detection in Sparse Transformer Architectures. arXiv. http://doi.org/10.48550/arXiv.2410.08971
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p2/2199