
Conference Paper Published

Research

Branching Out: Exploration of Chinese Dependency Parsing with Fine-tuned Large Language Models

Zhou, H., Chersoni, E., & Hsu, Y. Y. (2025). Branching Out: Exploration of Chinese Dependency Parsing with Fine-tuned Large Language Models. In Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing: Natural Language Processing in the Generative AI Era (pp. 1437–1445).
 
URL: https://acl-bg.org/proceedings/2025/RANLP%202025/pdf/2025.ranlp-1.166.pdf

 

Abstract

In this paper, we investigate the effectiveness of large language models (LLMs) for Chinese dependency parsing through fine-tuning. We explore how different dependency representations impact parsing performance when fine-tuning the Chinese Llama-3 model.

Our results demonstrate that while the Stanford typed dependency tuple representation yields the highest number of valid dependency trees, converting the dependency structure into a lexical-centered tree produces parses of significantly higher quality despite generating fewer valid structures. The results further show that fine-tuning enhances LLMs' capability to handle longer dependencies to some extent, though challenges remain. Additionally, we evaluate the effectiveness of DeepSeek in correcting LLM-generated dependency structures, finding that it is effective for fixing index errors and cyclicity issues but still suffers from tokenization mismatches.
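For illustration, here is a minimal sketch, not taken from the paper, of how the two kinds of serialization mentioned in the abstract could be produced for a toy Chinese sentence: Stanford-style typed dependency tuples versus a nested, head-centered bracketing. The arc encoding and helper functions are hypothetical and chosen only to make the contrast concrete.

```python
# Hypothetical sketch: two ways to serialize the same dependency parse as text
# for LLM fine-tuning. Formats are illustrative, not the paper's exact ones.

tokens = ["我", "喜欢", "小", "猫"]  # "I like small cats"
# Arcs as (dependent_index, relation, head_index); index 0 is an artificial ROOT.
arcs = [(1, "nsubj", 2), (2, "root", 0), (3, "amod", 4), (4, "dobj", 2)]

def typed_tuples(tokens, arcs):
    """Render each arc as a Stanford-style relation(head-idx, dependent-idx) string."""
    out = []
    for dep, rel, head in arcs:
        head_str = "ROOT-0" if head == 0 else f"{tokens[head - 1]}-{head}"
        out.append(f"{rel}({head_str}, {tokens[dep - 1]}-{dep})")
    return out

def lexical_tree(tokens, arcs, head=0):
    """Render the parse as a nested bracketing in which each head dominates its dependents."""
    children = [dep for dep, _, h in arcs if h == head]
    parts = [lexical_tree(tokens, arcs, c) for c in children]
    label = "ROOT" if head == 0 else tokens[head - 1]
    return f"({label} {' '.join(parts)})" if parts else f"({label})"

print(typed_tuples(tokens, arcs))
# ['nsubj(喜欢-2, 我-1)', 'root(ROOT-0, 喜欢-2)', 'amod(猫-4, 小-3)', 'dobj(喜欢-2, 猫-4)']
print(lexical_tree(tokens, arcs))
# (ROOT (喜欢 (我) (猫 (小))))
```

The tuple format makes every head index explicit, which is where index and cyclicity errors can creep into generated output, whereas the bracketed tree guarantees a connected, acyclic structure by construction but requires the model to reproduce the tokenization exactly.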

Our analysis across dependency distances and relations reveals that fine-tuned LLMs outperform traditional parsers in specific syntactic structures while struggling with others. These findings contribute to the research on leveraging LLMs for syntactic analysis tasks.
