AI is reshaping workplaces, yet students often struggle to engage critically with AI-generated content. Our project introduced an AI-driven learning simulation using Classlet, an interactive platform designed to develop AI literacy through structured decision-making, real-time feedback, and multimodal engagement in a VR-based professional office environment.
Grounded in experiential learning theory, this approach emphasises active participation, reflection, and iterative improvement. The simulation follows Kolb’s Experiential Learning Cycle: students engage in concrete AI interactions, reflect on AI-generated responses, conceptualise better prompting strategies, and apply these refinements in real time. Unlike passive AI literacy training, this method ensures that students actively experiment with prompts, analyse AI outputs, and refine their critical thinking skills in AI-assisted workplace decision-making.
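As a rough illustration of this loop, the sketch below walks through the four Kolb stages as they appear in the simulation; the stage descriptions follow the text above, but the loop structure is an illustrative assumption rather than Classlet’s actual implementation.

```python
# Minimal sketch of the simulation's learning loop mapped onto Kolb's cycle.
# Stage wording follows the description above; the loop itself is illustrative.
KOLB_STAGES = [
    "Concrete experience: interact with the AI on a workplace task",
    "Reflective observation: examine the AI-generated response",
    "Abstract conceptualisation: devise a better prompting strategy",
    "Active experimentation: apply the refined prompt in real time",
]

def run_learning_cycle(iterations: int = 2) -> None:
    """Students repeat the four stages, refining their prompts on each pass."""
    for i in range(1, iterations + 1):
        print(f"Iteration {i}:")
        for stage in KOLB_STAGES:
            print(f"  {stage}")

run_learning_cycle()
```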
A key innovation was deliberately exposing students, within the simulation, to both effective and ineffective AI-generated responses, allowing them to recognise patterns in AI decision-making, refine their prompts, and understand how input structure shapes AI-generated insights. Over a 45-minute session, 21 students completed 646 tasks, engaging in real-world problem-solving using videos, images, and 3D object manipulation. AI-powered avatars facilitated scenario-based discussions, instant feedback loops, and adaptive learning pathways, ensuring that students not only interacted with AI but also reflected on the strengths and limitations of its outputs.
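To make that contrast concrete, here is a small sketch of the paired-response idea: the same workplace task answered from a vague prompt and from a structured prompt, so students can compare output quality side by side. The prompts and responses are invented for illustration, not drawn from Classlet’s scenario data.

```python
# Hypothetical example of pairing an ineffective and an effective prompt for
# the same workplace task; all text here is invented for illustration.
paired_responses = {
    "ineffective prompt": {
        "prompt": "Summarise this.",
        "response": "The client is unhappy about something and wants it fixed.",
    },
    "effective prompt": {
        "prompt": (
            "You are an account manager. Summarise the client complaint in "
            "three bullet points: issue, impact, and requested action."
        ),
        "response": (
            "- Issue: late delivery of the March order\n"
            "- Impact: client missed a product launch\n"
            "- Requested action: expedited reshipment and a goodwill discount"
        ),
    },
}

for label, pair in paired_responses.items():
    print(f"[{label}]\nPrompt: {pair['prompt']}\nResponse:\n{pair['response']}\n")
```

Seeing the two outputs side by side is what lets students connect prompt structure to response quality.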
Feedback highlighted the engaging and immersive nature of the experience: students reported an average enjoyment score of 3.55 out of 5 (71% of the scale maximum). Similarly, the perceived usefulness of the AI learning method received a mean score of 3.48 out of 5 (69.6% of the scale maximum), indicating a generally positive reception of the AI-driven simulation. “It’s innovative and fun,” noted one participant, while others valued the ability to experiment with AI prompts and analyse varying response quality. However, some students struggled with AI continuity, pointing out that “the chatbot skips dialogue sometimes” and that responses felt “too general, needing more follow-up questions.”
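For transparency, the percentage figures are simply each mean score expressed as a share of the 5-point scale maximum, as this minimal check shows (values are the survey means reported above):

```python
# Convert a mean Likert score to a percentage of the 5-point scale maximum.
def percent_of_scale(mean_score: float, scale_max: int = 5) -> float:
    return round(mean_score / scale_max * 100, 1)

print(percent_of_scale(3.55))  # enjoyment -> 71.0
print(percent_of_scale(3.48))  # usefulness -> 69.6
```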
The simulation achieved 99.0% input-processing accuracy, correctly recognising and processing 102 of 103 student inputs, so nearly all interactions with the AI were interpreted as intended. In addition, 96.1% of AI-generated responses aligned with the intended learning objectives, meaning that most outputs provided contextually relevant, meaningful feedback in support of students' learning goals. Future iterations will deepen AI responses using Retrieval-Augmented Generation (RAG) and improve the interface design and dialogue continuity.
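As a pointer to what RAG would add, the sketch below shows the core pattern: retrieve the passages most relevant to a student’s question from a small corpus of course material, then ground the model’s prompt in those passages. The corpus, function names, and similarity measure are illustrative assumptions rather than the project’s planned implementation, and the language-model call itself is omitted.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Lower-cased bag-of-words vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(corpus, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved passages."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Use only the context below to answer.\nContext:\n{context}\n\nQuestion: {query}"

# Illustrative corpus of course-material snippets (invented for this sketch).
corpus = [
    "Effective prompts state the role, the task, and the expected output format.",
    "Vague prompts tend to produce generic answers that need follow-up questions.",
    "Workplace scenarios require checking AI output against company policy.",
]

# The assembled prompt would then be sent to the language model (call omitted).
print(build_prompt("Why did my prompt get a generic answer?", corpus))
```

Grounding responses in retrieved course material is what would let the simulation give deeper, more scenario-specific feedback than a general-purpose model alone.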
This project, funded by the Faculty of Humanities Fund for Innovative Technology-in-Education (FITE), is led by Prof. Renia Lopez, with Prof. Christy Qiu as co-investigator.