Conference Paper Published
Feng, Z., Ma, J., Chersoni, E., Zhao, X., & Bao, X. (2025). Learning to Look at the Other Side: A Semantic Probing Study of Word Embeddings in LLMs with Enabled Bidirectional Attention. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 23226-23245.

DOI: https://doi.org/10.18653/v1/2025.acl-long.1132
Abstract: Autoregressive Large Language Models (LLMs) demonstrate exceptional performance in language understanding and generation. However, their application in text embedding tasks has been relatively slow, along with the analysis of their semantic representation in probing tasks, due to the constraints of the unidirectional attention mechanism. This paper aims to explore whether such constraints can be overcome by enabling bidirectional attention in LLMs. We tested different variants of the Llama architecture through additional training steps, progressively enabling bidirectional attention and unsupervised/supervised contrastive learning. Our results show that bidirectional attention improves the LLMs' ability to represent subsequent context but weakens their utilization of preceding context, while contrastive learning training can help to maintain both abilities.
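The core idea the abstract describes — switching a decoder-only model from unidirectional to bidirectional attention — amounts to dropping the causal mask in self-attention, so every token can attend to subsequent as well as preceding positions. The paper works with Llama-architecture models; the toy numpy sketch below is not the authors' implementation, just a minimal illustration of what removing the causal mask changes in the attention weights:

```python
import numpy as np

def attention(q, k, v, causal=True):
    """Scaled dot-product self-attention over a single sequence.

    causal=True applies the standard autoregressive mask (each position
    attends only to itself and earlier positions); causal=False yields
    bidirectional attention over the full context.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    if causal:
        t = scores.shape[0]
        # Mask out strictly-upper-triangular entries (future positions).
        future = np.triu(np.ones((t, t), dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, hidden size 8

_, w_causal = attention(x, x, x, causal=True)
_, w_bidir = attention(x, x, x, causal=False)

# Under the causal mask, the first token cannot attend to later tokens;
# with the mask removed, every token attends to the full context.
print(np.allclose(w_causal[0, 1:], 0.0))  # True
print(np.all(w_bidir > 0))                # True
```

In practice this toggle alone is not enough — as the abstract notes, the pretrained model's use of preceding context degrades, which is why the paper adds further training steps (unsupervised/supervised contrastive learning) after enabling bidirectional attention.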