Project Title
A Universal Generation Model for Multi-Modal Ophthalmic Imaging
Synopsis
Ophthalmic imaging plays a critical role in the diagnosis, monitoring, and treatment of eye diseases. Modalities such as Magnetic Resonance Imaging (MRI), Color Fundus photography (CF), Fundus Fluorescein Angiography (FFA), and Optical Coherence Tomography (OCT) provide complementary structural and functional information. Existing generative models mostly focus on individual modalities, which limits multi-modal integration and cross-modal generation. This project aims to develop a universal generation model for ophthalmic imaging that supports multi-modal generation, translation, and enhancement within a unified framework, as well as text-driven image generation and cross-modal reasoning. The framework is intended to cover the entire spectrum from image generation to diagnostic support. The study will leverage Large Language Models to advance applications including data augmentation, modality completion, and clinical assistance. Expected outcomes include increased availability of multi-modal data, improved robustness of downstream analysis models, and versatile AI tools for clinical practice and biomedical research, thereby contributing to precision ophthalmology and facilitating early detection and personalized treatment planning.
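To make the idea of a single unified framework concrete, the sketch below is a minimal, hypothetical PyTorch example, not the project's actual architecture: one shared encoder-decoder handles any translation direction (e.g. CF to FFA or CF to OCT) by conditioning on learned source/target modality embeddings plus a text embedding. The class name UniversalTranslator, the modality list, the layer sizes, and the FiLM-style conditioning are all illustrative assumptions.

# Hypothetical sketch: a single backbone conditioned on modality identity
# and a text embedding, so one set of weights serves every translation
# direction. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

MODALITIES = ["CF", "FFA", "OCT"]  # assumed modality vocabulary

class UniversalTranslator(nn.Module):
    def __init__(self, channels=32, cond_dim=64, text_dim=64):
        super().__init__()
        self.modality_emb = nn.Embedding(len(MODALITIES), cond_dim)
        self.text_proj = nn.Linear(text_dim, cond_dim)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.GELU(),
        )
        # FiLM-style conditioning: the condition vector scales and shifts
        # the encoder features before decoding.
        self.film = nn.Linear(2 * cond_dim, 2 * channels * 2)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose2d(channels, 1, 4, stride=2, padding=1),
        )

    def forward(self, x, src_id, tgt_id, text_emb):
        # Condition on both source and target modality plus a text embedding.
        cond = torch.cat([
            self.modality_emb(src_id) + self.modality_emb(tgt_id),
            self.text_proj(text_emb),
        ], dim=-1)
        h = self.encoder(x)
        scale, shift = self.film(cond).chunk(2, dim=-1)
        h = h * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.decoder(h)

if __name__ == "__main__":
    model = UniversalTranslator()
    cf = torch.randn(2, 1, 64, 64)   # toy single-channel "CF" batch
    src = torch.tensor([0, 0])       # source: CF
    tgt = torch.tensor([1, 2])       # targets: FFA, OCT
    txt = torch.randn(2, 64)         # stand-in text embedding (e.g. from an LLM encoder)
    out = model(cf, src, tgt, txt)
    print(out.shape)                 # torch.Size([2, 1, 64, 64])

The design choice this sketch illustrates is that modality identity becomes an input to one shared model rather than a reason to train separate per-modality models, which is what allows a single set of weights to cover multi-modal generation, translation, and text-conditioned synthesis.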