Conference Paper Published
Zhang, Z., Ma, J., Chersoni, E., You, J., & Feng, Z. (2025). From BERT to LLMs: Comparing and Understanding Chinese Classifier Prediction in Language Models. In Proceedings of the BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, 317-329.
DOI: https://doi.org/10.18653/v1/2025.blackboxnlp-1.20
Abstract

Classifiers are an important and defining feature of the Chinese language, and their correct prediction is key to numerous educational applications. Yet, whether the most popular Large Language Models (LLMs) possess proper knowledge of Chinese classifiers is an issue that has largely remained unexplored in the Natural Language Processing (NLP) literature. To address this question, we employ various masking strategies to evaluate the LLMs' intrinsic ability, the contribution of different sentence elements, and the role of the attention mechanism during prediction. In addition, we explore fine-tuning the LLMs to enhance classifier prediction performance. Our findings reveal that LLMs perform worse than BERT, even with fine-tuning. The prediction, as expected, greatly benefits from information about the following noun, which also explains the advantage of models with a bidirectional attention mechanism, such as BERT.
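
The masking-based evaluation described in the abstract can be illustrated with a standard fill-mask query against a Chinese BERT checkpoint. The sketch below shows the general technique only, not the paper's actual evaluation pipeline; the bert-base-chinese checkpoint and the example sentence are assumptions for illustration.

```python
# Minimal sketch of masked classifier prediction with a Chinese BERT model,
# using the Hugging Face fill-mask pipeline. The checkpoint (bert-base-chinese)
# and the example sentence are illustrative assumptions, not the paper's setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

# Mask the classifier slot in "我买了一[MASK]书" ("I bought one [CL] book");
# the expected classifier is 本, the measure word for books.
sentence = "我买了一[MASK]书。"
for prediction in fill_mask(sentence, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 4))
```

Ranking the gold classifier among the model's top predictions for such masked slots gives a simple intrinsic measure of classifier knowledge; the paper additionally varies which sentence elements are visible to the model during prediction.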