Media interview: PolyU start-up utilises technology to explore local community stories

Recognising that Hong Kong's fast-paced urban life often overshadows local community stories, Ms Fion Lau, a staff member of the PolyU Research and Innovation Office, and Mr Ken Chau, an alumnus of the PolyU Department of Applied Social Sciences, conceived the idea of combining social issues with puzzle games. In 2022, they launched "Puzzle Weekly," a start-up designed to encourage public engagement with local narratives and to foster stronger community connections through interactive game experiences.

A recent media interview highlighted Puzzle Weekly's innovative approach to community engagement. Leveraging their expertise in visual communication and collaborating with business partners, the team integrates authentic Hong Kong community narratives into engaging puzzle games. These multicultural games give players the opportunity to discover Hong Kong's hidden cultural heritage and strengthen community ties. After rigorous trials and development, the team won the "Best Social Care Award" at the YDC Dare to Change Business Pitch Competition in 2022. The project has also received funding from PolyU's Micro Fund, supporting its continued growth. With plans to expand internationally and forge new collaborations, the team aims to share Hong Kong's unique stories with a global audience, preserving and promoting local culture on a wider stage.

19 Jun, 2025

Research and Innovation

PolyU scholar named “Top 50 Women in Web3 & AI” by CoinDesk

The Hong Kong Polytechnic University (PolyU) is committed to driving innovation and interdisciplinary research by leveraging artificial intelligence (AI) across diverse fields. Prof. YANG Hongxia, Associate Dean (Global Engagement) of the Faculty of Computer and Mathematical Sciences, Professor of the Department of Computing, and Executive Director of the PolyU Academy for Artificial Intelligence, has been named in CoinDesk's “Top 50 Women in Web3 & AI” for her impactful contributions to pioneering cutting-edge technologies. This inaugural list highlights 50 of the most influential women worldwide who are shaping the future of crypto and AI.

Prof. YANG’s inclusion recognises her groundbreaking work in AI development, particularly her efforts to bridge advanced technology with practical applications across industries, from healthcare to finance. She is a distinguished AI scientist with over 15 years of experience, specialising in large-scale machine learning, data mining, deep learning, and the practical deployment of large language model (LLM) systems in real-world settings. Throughout her illustrious career, she has developed ten significant algorithmic systems that have enhanced the operations of various enterprises.

Advancing AI in Healthcare and Beyond

As a strong advocate for decentralised AI, Prof. YANG has championed the "Model-over-Models" approach, which builds a foundational model from smaller, stackable domain-specific models. This approach, called InfiFusion, offers an efficient and scalable route to high-performance LLMs and aims to empower more industries with advanced AI capabilities. Currently, Prof. YANG and her team are developing foundation models in cutting-edge fields, including healthcare, manufacturing, energy and finance.

Focused on bridging statistical innovation with healthcare and actuarial science, which is crucial for the future well-being of humanity, Prof. YANG has been invited to host a high-level course themed “AI and Statistics” at the prestigious Croucher Advanced Study Institutes. Prof. YANG said, “By integrating statistical principles with the capabilities of generative AI, we aim to develop more robust models that can generate realistic data, improve predictive accuracy, and provide deeper insights into complex datasets.” The objective of the Croucher Advanced Study Institutes is to explore the frontiers of generative AI and statistics, delving into the intersection of these two dynamic fields to uncover new methodologies and applications that can enhance data-driven decision-making and innovation.

CoinDesk's selection process involved a diverse panel of judges and over 300 nominations from around the world, emphasising a balance of innovation, relevancy and influence. The final list showcases leaders in crypto and AI with diverse areas of expertise, ranging from product development and business strategy to regulatory compliance and ethical frameworks.
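In broad strokes, a "Model-over-Models" design composes smaller domain experts under a top-level model. The sketch below is a minimal, hypothetical illustration of that composition pattern using a toy keyword router; it is not InfiFusion's actual architecture, and every function and name in it is invented for illustration.

```python
# Hypothetical sketch of a "model-over-models" composition: a toy router
# dispatches each query to a smaller domain-specific model, and a top-level
# model could fuse several expert outputs. Not InfiFusion's actual design.

DOMAIN_EXPERTS = {
    "healthcare": lambda q: f"[healthcare expert] {q}",
    "finance": lambda q: f"[finance expert] {q}",
    "general": lambda q: f"[general model] {q}",
}

def route(query: str) -> str:
    # A real router would be learned; keyword matching stands in here.
    for domain in ("healthcare", "finance"):
        if domain in query.lower():
            return domain
    return "general"

def answer(query: str) -> str:
    # Delegate to the selected expert; a fuller system might fuse the
    # outputs of several stacked experts instead of picking just one.
    return DOMAIN_EXPERTS[route(query)](query)

print(answer("Summarise the finance risks in this portfolio."))
```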

13 Jun, 2025

Awards and Achievements

Media coverage: PolyU and AELIS Couture partner on innovative materials for sustainable fashion

Technological innovation is revolutionising fashion design, driving continuous and dramatic changes across the industry. The Hong Kong Polytechnic University (PolyU) and the esteemed Paris fashion house AELIS Couture (AELIS) have collaborated to transform cutting-edge scientific research into sustainable luxury fashion materials for the Fall/Winter 2024/25 Couture Collection, breathing new life into the fashion industry.

The collection features a precious gold- and silver-coated sustainable silk organza with a metallic pearly sheen. This innovative material was meticulously developed and designed by a team led by Prof. Kinor JIANG, Professor of the School of Fashion and Textiles at PolyU. The research team employed a novel metallising technology to deposit ultra-thin, nano-scale metal films onto textiles, creating a material that retains the comfort and flexibility of traditional fabrics while exhibiting a striking visual effect.

PolyU is committed to researching and developing environmentally friendly materials, which aligns perfectly with AELIS's brand philosophy. This partnership effectively integrates scientific and technological advancements with fashion design, highlighting a shared commitment to environmental sustainability and propelling the fashion industry towards a more eco-conscious future. In response to the growing demand for eco-friendly materials in the industry, the initiative combines innovation with sustainable development concepts, bridging the realms of fashion and research. It serves as a model for future collaborations that foster material innovation and expand the creative horizons of fashion designers.

11 Jun, 2025

Research and Innovation

PolyU develops novel multi-modal agent to facilitate long video understanding by AI, accelerating development of generative AI-assisted video analysis

While artificial intelligence (AI) technology is evolving rapidly, AI models still struggle with understanding long videos. A research team from The Hong Kong Polytechnic University (PolyU) has developed a novel video-language agent, VideoMind, that enables AI models to perform long video reasoning and question-answering tasks by emulating humans’ way of thinking. The VideoMind framework incorporates an innovative Chain-of-Low-Rank Adaptation (LoRA) strategy to reduce the demand for computational resources and power, advancing the application of generative AI in video analysis. The findings have been submitted to world-leading AI conferences.

Videos, especially those longer than 15 minutes, carry information that unfolds over time, such as the sequence of events, causality, coherence and scene transitions. To understand the video content, AI models therefore need not only to identify the objects present, but also to take into account how they change throughout the video. As visuals in videos occupy a large number of tokens, video understanding requires vast amounts of computing capacity and memory, making it difficult for AI models to process long videos.

Prof. Changwen CHEN, Interim Dean of the PolyU Faculty of Computer and Mathematical Sciences and Chair Professor of Visual Computing, and his team have achieved a breakthrough in research on long video reasoning by AI. In designing VideoMind, they made reference to a human-like process of video understanding and introduced a role-based workflow. The four roles in the framework are: the Planner, to coordinate all other roles for each query; the Grounder, to localise and retrieve relevant moments; the Verifier, to validate the information accuracy of the retrieved moments and select the most reliable one; and the Answerer, to generate the query-aware answer. This progressive approach to video understanding helps address the challenge of temporal-grounded reasoning that most AI models face.

Another core innovation of the VideoMind framework lies in its adoption of a Chain-of-LoRA strategy. LoRA is a fine-tuning technique that has emerged in recent years; it adapts AI models for specific uses without performing full-parameter retraining. The Chain-of-LoRA strategy pioneered by the team applies four lightweight LoRA adapters in a unified model, each designed for calling a specific role. With this strategy, the model can dynamically activate role-specific LoRA adapters during inference via self-calling, seamlessly switching among the roles. This eliminates the need and cost of deploying multiple models while enhancing the efficiency and flexibility of a single model.

VideoMind is open source on GitHub and Hugging Face, along with details of the experiments conducted to evaluate its effectiveness in temporal-grounded video understanding across 14 diverse benchmarks. Comparing VideoMind with state-of-the-art AI models, including GPT-4o and Gemini 1.5 Pro, the researchers found that the grounding accuracy of VideoMind outperformed all competitors in challenging tasks involving videos with an average duration of 27 minutes. Notably, the team included two versions of VideoMind in the experiments: one with a smaller, 2 billion (2B) parameter model, and another with a bigger, 7 billion (7B) parameter model. The results showed that, even at the 2B size, VideoMind still yielded performance comparable to that of many of the other 7B models.

Prof. Chen said, “Humans switch among different thinking modes when understanding videos: breaking down tasks, identifying relevant moments, revisiting these to confirm details and synthesising their observations into coherent answers. The process is very efficient, with the human brain using only about 25 watts of power, about a million times lower than that of a supercomputer with equivalent computing power. Inspired by this, we designed the role-based workflow that allows AI to understand videos like humans, while leveraging the Chain-of-LoRA strategy to minimise the need for computing power and memory in this process.”

AI is at the core of global technological development, yet the advancement of AI models is constrained by insufficient computing power and excessive power consumption. Built upon the unified, open-source model Qwen2-VL and augmented with additional optimisation tools, the VideoMind framework lowers the technological cost and the threshold for deployment, offering a feasible solution to the bottleneck of reducing power consumption in AI models.

Prof. Chen added, “VideoMind not only overcomes the performance limitations of AI models in video processing, but also serves as a modular, scalable and interpretable multimodal reasoning framework. We envision that it will expand the application of generative AI to various areas, such as intelligent surveillance, sports and entertainment video analysis, video search engines and more.”
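As a rough illustration of the role-based, adapter-switching workflow described above, the following toy sketch runs the four roles over one shared model, with adapter activation standing in for LoRA switching. It is a hypothetical sketch, not the open-source VideoMind code; all class and function names are invented.

```python
# Toy sketch of a role-based workflow over a single shared model, in the
# spirit of VideoMind's Chain-of-LoRA self-calling. Hypothetical names;
# see the team's GitHub/Hugging Face release for the real implementation.

class DummyBaseModel:
    """Stands in for a frozen video-language model."""
    def generate(self, prompt, adapter=None):
        return f"[{adapter}] response to: {prompt[:48]}"

class RoleSwitchingAgent:
    def __init__(self, base_model, roles):
        # One base model; one lightweight LoRA adapter per role, so no
        # separate full model needs to be deployed for each role.
        self.model = base_model
        self.roles = set(roles)

    def call(self, role, prompt):
        assert role in self.roles
        # Activating a role = switching the active adapter at inference.
        return self.model.generate(prompt, adapter=role)

    def answer(self, question):
        plan = self.call("planner", f"Break down the query: {question}")
        moments = self.call("grounder", f"Localise moments for: {plan}")
        best = self.call("verifier", f"Validate and select from: {moments}")
        return self.call("answerer", f"Answer '{question}' using: {best}")

agent = RoleSwitchingAgent(
    DummyBaseModel(), ["planner", "grounder", "verifier", "answerer"])
print(agent.answer("When does the goal happen in this match video?"))
```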

10 Jun, 2025

Research and Innovation

PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts

Can one truly understand what “flower” means without smelling a rose, touching a daisy or walking through a field of wildflowers? This question is at the core of a rich debate in philosophy and cognitive science. While embodied cognition theorists argue that physical, sensory experience is essential to concept formation, studies of rapidly evolving large language models (LLMs) suggest that language alone can build deep, meaningful representations of the world.

By exploring the similarities between LLM and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge. Their findings also revealed how the use of sensory input for grounding or embodiment (connecting abstract with concrete concepts during learning) affects the ability of LLMs to understand complex concepts and form human-like representations. The study, conducted in collaboration with scholars from Ohio State University, Princeton University and the City University of New York, was recently published in Nature Human Behaviour.

Led by Prof. LI Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini). They compared them with human-generated ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor (e.g., foot/leg, mouth/throat) domains from the highly reliable and validated Glasgow Norms and Lancaster Norms datasets.

The research team first compared pairs of data from individual humans and individual LLM runs to measure the similarity between word ratings across each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight to what extent humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, the word pair “pasta” and “roses” might receive equally high olfactory ratings, but “pasta” is in fact more similar to “noodles” than to “roses” when considering appearance and taste. The team therefore conducted representational similarity analysis, treating each word as a vector along multiple attributes of the non-sensorimotor, sensory and motor dimensions, for a more complete comparison between humans and LLMs.

The representational similarity analyses revealed that word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain and most dissimilar for words in the motor domain. This highlights LLM limitations in fully capturing humans’ conceptual understanding. Non-sensorimotor concepts are understood well, but LLMs fall short when representing concepts involving sensory information, such as visual appearance and taste, and body movement. Motor concepts, which are less described in language and rely heavily on embodied experiences, are even more challenging for LLMs than sensory concepts such as colour, which can be learned from textual data.

In light of the findings, the researchers examined whether grounding would improve the LLMs’ performance. They compared the performance of more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with that of LLMs trained on language alone (GPT-3.5, PaLM). They discovered that the more grounded models incorporating visual input exhibited a much higher similarity with human representations.

Prof. Li Ping said, “The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to a more human-like representation and more efficient human-like performance in LLMs in the future.”

Interestingly, this finding is also consistent with those of previous human studies indicating representational transfer. Humans acquire object-shape knowledge through both visual and tactile experiences, with seeing and touching objects activating the same regions of the human brain. The researchers pointed out that, as in humans, multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space. Prof. Li added, “The smooth, continuous structure of embedding space in LLMs may underlie our observation that knowledge derived from one modality could transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect.”

Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, “These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM’s representation will then be indistinguishable from that of humans.”
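For readers unfamiliar with representational similarity analysis, the sketch below illustrates the general pattern on synthetic stand-in data: treat each word as a vector over rating dimensions, build a pairwise dissimilarity structure for each source, then correlate the two structures. This is a generic, textbook-style illustration, not the study's published pipeline or data.

```python
# Generic sketch of representational similarity analysis (RSA) between
# human and LLM word ratings, on synthetic stand-in data. Illustrative
# only; not the study's pipeline or the Glasgow/Lancaster norm data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, n_dims = 100, 9  # e.g. valence, imageability, visual, olfactory...
human = rng.random((n_words, n_dims))              # stand-in human ratings
llm = human + 0.3 * rng.random((n_words, n_dims))  # noisier stand-in LLM ratings

# Each word is a vector over rating dimensions; compute the pairwise
# dissimilarity structure (one value per word pair) within each source.
human_rdm = pdist(human, metric="correlation")
llm_rdm = pdist(llm, metric="correlation")

# RSA: correlate the two representational structures. A higher rho means
# the LLM organises the words more like humans do.
rho, p = spearmanr(human_rdm, llm_rdm)
print(f"representational similarity (Spearman rho) = {rho:.3f}")
```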

10 Jun, 2025

Research and Innovation

Smart Adaptation: The Fusion of AI and Robotics for Dynamic Environments

The advancement of artificial intelligence (AI) has ushered in a new era of automated robotics that are adaptive to their environments. The field of robotics has made remarkable strides over the past few decades, yet it continues to face challenges that hinder the full utilisation of its potential. Traditional robots often rely on pre-programmed instructions and restricted configurations, limiting their ability to respond to unforeseen circumstances. AI technologies, encompassing cognition, analysis, inference and decision-making, enable robots to operate intelligently, significantly enhancing their capabilities to assist and support humans.

By augmenting robots with AI technologies within engineering systems, we can expect increasingly pervasive applications in industry, agriculture, logistics, medicine and beyond, allowing robots to perform complex tasks with greater autonomy and efficiency. This technological enhancement unleashes the potential of robotics in real-world applications, offering solutions to pressing medical and environmental problems and facilitating a paradigm shift towards intelligent manufacturing in the context of Industry 4.0.

With the application of AI, a research team led by Prof. Dan ZHANG, Chair Professor of Intelligent Robotics and Automation in the Department of Mechanical Engineering and Director of the PolyU-Nanjing Technology and Innovation Research Institute at The Hong Kong Polytechnic University (PolyU), has developed a number of novel robotic systems with high dynamic performance.

Prof. ZHANG’s research team has recently proposed a grasp pose detection framework that applies deep neural networks to generate a rich set of omnidirectional, six-degrees-of-freedom (6-DoF) grasp poses with high precision. To detect the objects to be grasped, convolutional neural networks (CNNs) are applied in a multi-scale cylinder with varying radii, providing detailed geometric information about each object’s location and size. Multiple multi-layer perceptrons (MLPs) then optimise the parameters the robotic manipulator needs to grasp objects, including the gripper width, the grasp score (for specific in-plane rotation angles and gripper depths) and collision detection. These parameters are fed into an algorithm within the framework, extending grasps from pre-set configurations to comprehensive grasp poses tailored for the scene. Experiments reveal that the proposed method consistently outperforms the benchmark method in laboratory simulations, achieving an average success rate of 84.46% in real-world experiments, compared with 78.31% for the benchmark method.

In addition, the research team leverages AI technologies to enhance the functionality and user experience of a novel robotic knee exoskeleton for the gait rehabilitation of patients with knee joint impairment. The structure of the exoskeleton includes an actuator powered by an electric motor to actively assist knee flexion and extension, an ankle joint that transfers the weight of the exoskeleton to the ground, and a stiffness adjustment mechanism powered by another electric motor. A Long Short-Term Memory (LSTM) network is applied in a machine learning algorithm to provide real-time nonlinear stiffness and torque adjustments, mimicking the biomechanical characteristics of the human knee joint. The network is trained on a large dataset of electromyography (EMG) signals and knee joint movement data, enabling real-time adjustments of the exoskeleton’s stiffness and torque based on the user’s physiological signals and movement conditions. By predicting necessary adjustments, the system adapts to various gait requirements, enhancing the user’s walking stability and comfort.

The integration of an adaptive admittance control algorithm based on Radial Basis Function (RBF) networks enables the robotic knee exoskeleton to automatically adjust joint angles and stiffness parameters without the need for force or torque sensors. This enhances the accuracy of position control and improves the exoskeleton’s responsiveness to different walking postures. This data-driven approach refines the model’s predictions and improves overall performance over time. Experimental results demonstrate that the model outperforms traditional fixed control methods in terms of accuracy and real-time responsiveness, generating the desired reference joint trajectory for users at different walking speeds.

Prof. ZHANG’s innovations show that AI techniques, particularly deep learning, have improved the ability of robots to perceive and understand their environments. This advancement contributes to more effective and flexible solutions for handling tasks beyond fixed configurations in standard settings. The melding of AI and robotics not only enhances precision and accuracy but also introduces new capabilities for robotic automation, enabling real-time decision-making and continuous learning. As a result, robots can improve their performance over time, leading to extended utilisation of robotics in society for future endeavours.

Source: Innovation Digest, Issue 1
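To make the learning-based controller concrete, here is a minimal PyTorch sketch of an LSTM that maps a short window of EMG and joint-angle samples to stiffness and torque adjustments. The channel counts, window length and network size are assumptions for illustration, not the team's published design.

```python
# Minimal sketch of an LSTM mapping a window of EMG + kinematic samples to
# stiffness/torque adjustments, in the spirit of the exoskeleton controller
# described above. Shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class StiffnessTorqueLSTM(nn.Module):
    def __init__(self, n_channels=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # [stiffness, torque] adjustments

    def forward(self, x):
        # x: (batch, time, channels), a sliding window of sensor samples
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the last time step

model = StiffnessTorqueLSTM()
window = torch.randn(1, 50, 10)  # e.g. 50 samples of 8 EMG + 2 joint channels
stiffness, torque = model(window)[0]
print(f"stiffness adj: {stiffness.item():+.3f}, torque adj: {torque.item():+.3f}")
```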

9 Jun, 2025

Research and Innovation

PolyU shines at the 4th Nanchang Healthcare Expo, showcasing innovations in medicine-engineering integration and AI

The 4th China (Nanchang) International Healthcare Industry Conference and Expo 2025 opened on 6 June at the Nanchang Greenland International Expo Centre in Jiangxi. This prominent event serves as a key platform for exchange and collaboration within Mainland China’s healthcare industry. As the sole representative of Hong Kong’s higher education institutions, PolyU invited 11 research teams to the expo to showcase cutting-edge research and technological achievements in healthcare, demonstrating the University’s innovative strengths in medicine-engineering integration and artificial intelligence.

Held under the theme “Technology-Driven Health and Reshaping New Business Models”, the expo covered the entire healthcare value chain, including biomedicine, medical devices, the silver economy, smart healthcare, traditional Chinese medicine, health consumption and smart living. PolyU’s exhibition included:

- “Portable, self-testing retinal fundus camera integrated with AI system”, led by Prof. Mingguang HE, Chair Professor of Experimental Ophthalmology of the PolyU School of Optometry and Henry G. Leong Professor in Elderly Vision Health
- “An intelligent ankle rehabilitation robot”, developed by Prof. ZHANG Dan, Chair Professor of the PolyU Department of Mechanical Engineering
- “An AI-Assisted Pharmaceutical Product Development Platform”, led by Prof. MA Cong, Associate Professor of the PolyU Department of Applied Biology and Chemical Technology
- “A wearable Smart Navigation and Interaction System for the Visually Impaired”, developed by Prof. Weisong WEN, Assistant Professor of the PolyU Department of Aeronautical and Aviation Engineering
- “Liverscan”, a palm-sized ultrasound device for fatty liver and liver fibrosis assessment, and “Scolioscan”, a 3D ultrasound imaging device providing radiation-free assessment of scoliosis, led by Prof. ZHENG Yongping, Chair Professor of the PolyU Department of Biomedical Engineering and Henry G. Leong Professor in Biomedical Engineering
- “Digital Strolling”, for alleviating depression in mobility-impaired individuals, led by Prof. Yan LI, Assistant Professor (Presidential Young Scholar) of the PolyU School of Nursing
- “Virtual MRI Contrast Enhancement System”, led by Prof. CAI Jing, Head and Professor of the PolyU Department of Health Technology and Informatics
- “The Mobile Ankle-foot Exoneuromusculoskeleton”, led by Prof. HU Xiaoling, Associate Professor of the PolyU Department of Biomedical Engineering
- “E-bibliotherapy App for Caregivers of People with Dementia”, led by Prof. Shanshan WANG, Assistant Professor of the PolyU School of Nursing
- “Dementia Simulation Game Kit”, led by the PolyU Jockey Club Design Institute for Social Innovation

Mr Victor ZHAO, Associate Director of the PolyU Research and Innovation Office, attended the opening ceremony alongside industry leaders, including representatives from the Jiangxi Provincial Government, academicians from the Chinese Academy of Engineering, representatives from the Nanchang Municipal People’s Government and other distinguished guests. The event fostered vibrant academic exchanges and business discussions. Local media also interviewed the PolyU research teams, further broadening public awareness of PolyU’s cutting-edge scientific research and its contributions to the field.

PolyU has a strong foundation in medical education and innovation. By integrating healthcare with artificial intelligence, engineering and data science through interdisciplinary strategies, PolyU is committed to addressing diverse medical challenges, enhancing healthcare services in Hong Kong and the Greater Bay Area, and advancing Hong Kong’s development as an international hub for health and medical innovation.

7 Jun, 2025

Events

PolyU and Peking University Third Hospital join forces to establish Joint Research Laboratory on Musculoskeletal and Sports Rehabilitation

The Hong Kong Polytechnic University (PolyU) and Peking University Third Hospital (PUTH) signed a collaboration agreement last month to officially establish the “Joint Research Laboratory on Musculoskeletal and Sports Rehabilitation”. This partnership aims to advance cutting-edge research and innovation in musculoskeletal health and sports rehabilitation by leveraging the clinical expertise, medical engineering and translational research strengths of both institutions, and to promote the translation and application of research outcomes.

The agreement signing ceremony and the plaque unveiling ceremony of the joint research laboratory were held at PUTH. The agreement was signed by Prof. DONG Cheng, Associate Vice President (Mainland Research Advancement) of PolyU, and Prof. FU Wei, President of PUTH. Following this, and witnessed by Prof. Dong and Prof. Fu, the plaque unveiling ceremony for the “Joint Research Laboratory on Musculoskeletal and Sports Rehabilitation” was jointly officiated by Prof. Marco PANG, Shun Hing Education and Charity Fund Professor in Rehabilitation Sciences, Chair Professor of Neurorehabilitation and Head of the Department of Rehabilitation Sciences (RS) of PolyU, and Prof. LI Rong, Vice President of PUTH and Director of the Reproductive Medicine Department, symbolising the official launch of the joint research laboratory and marking a significant step forward in academic and medical research collaboration between Hong Kong and mainland China.

Prof. Dong Cheng remarked, “In the face of global population ageing and the increasing burden of chronic diseases, sports rehabilitation plays an increasingly vital role in improving quality of life and alleviating healthcare pressures. This collaboration between PolyU and PUTH will not only deepen academic exchange between the two institutions but also inject new momentum into the field of sports rehabilitation in the Greater China region. We look forward to working closely with PUTH to develop the joint laboratory into a hub for research innovation and talent development serving the Asia-Pacific region.”

Prof. Fu Wei stated, “We are delighted to collaborate with PolyU to advance impactful research projects and promote the translation of innovative achievements in musculoskeletal and sports rehabilitation, benefiting patients and fostering the further development of rehabilitation medicine.”

PolyU and PUTH will leverage their respective strengths to establish an integrated and interdisciplinary joint research laboratory. The partnership will promote scientific research, technological innovation and talent cultivation in the field of sports rehabilitation, with the goal of improving patient recovery outcomes and contributing to healthy ageing.

6 Jun, 2025

Partnership

PolyU study uncovering Hong Kong’s hidden history with cutting-edge geospatial technologies receives Innovation and Technology Fund grant

Hong Kong’s rich history is interwoven with layers of untold stories, many buried beneath the surface of its bustling modern landscape. A project led by researchers from The Hong Kong Polytechnic University (PolyU) Department of Land Surveying and Geo-informatics seeks to reveal and record the city’s lost history hidden underground by utilising cutting-edge geospatial technologies, and to launch public education programmes that promote the conservation and better understanding of the city’s cultural heritage. The project has received funding of HK$3.22 million from the General Support Programme under the Innovation and Technology Fund (ITF-GSP) of the Innovation and Technology Commission.

The two-year project, “Antiquity and Heritage Lost, Found and Revealed: Promotion of 21st Century Geo-spatial Technologies”, led by Prof. Wallace Wai Lok LAI, Associate Head (Teaching) and Professor of the Department of Land Surveying and Geo-informatics, aims to identify and capture images of hidden and buried wartime relics, cultural antiquities and heritage sites in Hong Kong using advanced geospatial technologies. These include geo-referencing and mapping techniques, airborne and terrestrial laser scanning, and geophysical technologies, enhancing the understanding of Hong Kong’s battlefields and cultural heritage sites. The research is being conducted in collaboration with Prof. Chi-Man KWONG, Associate Professor of the Department of History at Hong Kong Baptist University, and local amateur war historians.

The PolyU research team has collaborated with the Government, universities and industry partners, while also working closely with National Geographic Magazine, Scientific American, and Pokfulam Farm, a collaboration between an NGO and the community in Pokfulam Village, to promote public engagement and the use of advanced geospatial technologies in uncovering Hong Kong’s hidden stories. Utilising geospatial and geophysical technologies to reconstruct and revive Hong Kong’s history, the team has recently uncovered “lost and found” stories from five cultural and wartime heritage sites. These include the Gin Drinkers Line; Mount Davis Battery, the East Brigade Headquarters in Tai Tam; Pokfulam Village and the Old Dairy Farm; Fan Lau Fort on Lantau Island, and Tung Chung Fort.

To promote technology-driven historical interpretation, geo-spatial mapping and conservation, and STEAM education, the PolyU team will provide a range of education programmes, including field visits to cultural and wartime heritage sites in Hong Kong, STEAM-focused seminars and talks, interactive workshops, and immersive learning exhibitions. The project is supported by advanced facilities at the PolyU Industrial Centre, including its Hybrid Immersive Virtual Environment (HiVE) and 3D printing facilities, enabling an immersive learning experience for secondary and tertiary students that combines art-tech with history to depict Hong Kong’s hidden stories. The initiative is expected to leverage 21st-century geospatial technologies to enrich STEAM education, deepen public appreciation of cultural heritage, foster widespread community participation and promote effective knowledge sharing.

Prof. Wallace Lai said, “The project combines cutting-edge technology with historical investigation, uncovering and preserving cultural legacies. More than just an educational platform, it also serves as a vital reminder to safeguard our collective memories in the humanities. Through advanced technologies, interactive education, and innovative approaches that blend art, technology and historical interpretation, we aim to ignite a passion for learning in the next generation. Our mission is to preserve and revitalise Hong Kong’s rich history, ensuring it remains a vibrant and enduring presence in their hearts and minds.”

With support from the PolyU Research Institute for Land and Space, Prof. Lai is expanding the study to Southeast Asia, building on Hong Kong’s experience in applying geo-spatial technologies. In May of this year, the team embarked on its first expedition to Malacca, where they conducted 3D scanning and mapping of the iconic fortification gate and St. Paul’s Church, both dating back to the Portuguese and Dutch colonial periods. Using digitised old maps alongside advanced scanning and mapping techniques, they also uncovered traces of lost and buried colonial fortification walls. Further expeditions are planned for other regions of Malaysia.

To raise public awareness and deepen understanding of the importance of innovation and technology, the ITF-GSP supports non-research and development projects that contribute to the upgrading and development of Hong Kong’s industries, the fostering of an innovation and technology culture in Hong Kong, and the promotion of popular science.

4 Jun, 2025

Awards and Achievements

PolyU young researcher to lead carbon-free energy conference with support from NSFC/RGC Joint Research Scheme

The Hong Kong Polytechnic University (PolyU) is committed to fostering the development of young scholars by promoting their collaborative networks and the pursuit of research excellence. A project led by a young PolyU scholar has received support from the National Natural Science Foundation of China (NSFC)/RGC Joint Research Scheme (Conference Grant) 2025/26 to host a conference on carbon-free energy.

Led by Prof. Yu GUAN, Assistant Professor of the Department of Aeronautical and Aviation Engineering, the project “Carbon-free energy utilization empowered by AI” has been awarded a grant of HK$249,800 for a duration of 12 months under the NSFC/RGC Joint Research Scheme (Conference Grant). The project is conducted in collaboration with Prof. Xi Xia from Shanghai Jiao Tong University.

Conference proposals under the NSFC/RGC Joint Research Scheme are assessed on the research standing of the main organisers and co-organisers, the guest speakers, the level of participation by local and Mainland researchers and students, and the prospects for new or longer-term research collaboration between Hong Kong and Mainland researchers. The RGC provides applicants with funding for a two- or three-day conference in Hong Kong, with a maximum grant of HK$250,000 per conference.

4 Jun, 2025

Awards and Achievements
