
News


PolyU research projects win funding support from RAISe+ Scheme

The Innovation and Technology Commission of the HKSAR Government has recently announced the second batch of projects selected for funding under the Research, Academic and Industry Sectors One-plus (RAISe+) Scheme. Among the successful projects, four from The Hong Kong Polytechnic University (PolyU) showcase the University’s research excellence and strong commitment to the commercialisation of its research outcomes.

Prof. Christopher CHAO, PolyU Vice President (Research and Innovation), congratulated the PolyU research teams, stating, “We are delighted that four PolyU research projects have been named among the second batch of those funded under the RAISe+ Scheme. This achievement not only underscores the University’s robust research capabilities, but also the strong confidence that government, industry and community stakeholders have placed in our ability and efforts in driving innovation and the translation of research outcomes. Moving forward, PolyU will continue to foster effective collaboration among the Government, industry, academia and research sectors, injecting new momentum into its research projects and accelerating the translation of research outcomes into real-world impact, which in turn contributes to the development of Hong Kong, our Nation and the world.”

The funded projects cover a wide range of innovation and technology fields, including AI and robotics, Chinese medicine, computer science/information technology, and electrical and electronic engineering. Details of the projects are listed below.

Project Title: High-Speed 3D Stacked AI Vision Sensors
Project Leader: Prof. Yang CHAI, Associate Dean, Faculty of Science; Chair Professor of Semiconductor Physics, Department of Applied Physics; and Director, Joint Research Centre for Microelectronics
Project Description: This project focuses on the development and commercialisation of an advanced AI vision sensor with high-speed operation, high dynamic range and ultra-low power consumption. The sensor overcomes key limitations of conventional image sensors, particularly motion blur in high-speed scenarios. Key applications include security surveillance systems, autonomous navigation, and motion analysis in extended reality devices and smartphones. The sensor achieves high-speed, high-dynamic-range, low-power imaging by integrating conventional image sensors with dedicated visual information processing chips.

Project Title: Novel Nutraceuticals for Neurodegenerative Diseases
Project Leader: Prof. Simon Ming-yuen LEE, Cally Kwong Mei Wan Professor in Biomedical Sciences and Chinese Medicine Innovation; Chair Professor of Biomedical Sciences, Department of Food Science and Nutrition; and Director, PolyU-BGI Joint Research Centre for Genomics and Synthetic Biology in Global Ocean Resources
Project Description: This project develops novel nutraceuticals and drugs derived from natural resources for treating neurodegenerative diseases. It establishes the LifeChip technology platform, combining next-generation DNA sequencing, AI-driven discovery, advanced chemical separation, high-throughput in vivo screening and synthetic biology. Focusing on neurodegenerative diseases such as Alzheimer’s and Parkinson’s, as well as neurological disorders including insomnia, depression and anxiety, this integrated approach delivers a comprehensive solution for both prevention and treatment using innovative nutraceuticals with unique mechanisms of action. For instance, the development candidate Oxyphylla® is a first-in-class drug targeting α-synuclein, an emerging therapeutic target for Parkinson’s disease. Oxyphylla® is anticipated to become a disease-modifying therapy, offering a breakthrough in neurological health.

Project Title: Reallm: World-leading Enterprise GenAI Infrastructure Solution
Project Leader: Prof. YANG Hongxia, Executive Director, PolyU Academy for Artificial Intelligence; Associate Dean (Global Engagement), Faculty of Computer and Mathematical Sciences; and Professor, Department of Computing
Project Description: The project aims to develop a comprehensive Generative Artificial Intelligence (GenAI) infrastructure solution by: establishing a decentralised pretraining and post-training system architecture to support distributed model training frameworks; developing a domain-adaptive continual pretraining and post-training system that continuously optimises large language models using domain-specific unlabelled data, enabling adaptation to target domain distributions; and designing a low-bit training framework that requires only half the computational and storage resources of traditional training while still achieving high-quality, end-to-end training from pretraining to post-training, significantly lowering the entry barrier for enterprises. Ultimately, the project will launch a platform specifically designed to enhance cross-domain collaboration through enterprise-grade GenAI services, including Software-as-a-Service, Platform-as-a-Service and Infrastructure-as-a-Service.

Project Title: Tunable Laser Chip Based on Metasurface Structure and its Application
Project Leader: Prof. YU Changyuan, Director, PolyU-Jinjiang Technology and Innovation Research Institute; and Professor, Department of Electrical and Electronic Engineering
Project Description: This project pioneers a novel broadband tunable laser chip that, for the first time, integrates both a metasurface reflector and phase-change materials within a vertical-cavity surface-emitting laser. This enables an ultra-high quality factor resonant cavity and dynamic, continuous tuning of the output wavelength over an exceptionally wide bandwidth (40 nm). Compared with traditional laser structures, the chip not only features a more compact design but also achieves the same kHz-level tuning speed as leading international competitors. With a cost just one-twentieth that of existing market solutions, the chip is expected to achieve widespread adoption in battery monitoring systems, industrial production processes, autonomous driving technologies and high-speed optical communication modules.

Inaugurated in 2023, the RAISe+ Scheme aims to provide funding, on a matching basis, for at least 100 research teams from universities funded by the University Grants Committee which demonstrate strong potential to evolve into successful startups. Each approved project will receive funding support ranging from HK$10 million to HK$100 million.

20 Jun, 2025

Awards and Achievements


PolyU showcases innovative research in AI and medicine-engineering integration at BIO International Convention 2025

The Hong Kong Polytechnic University (PolyU) participated in the BIO International Convention 2025 (BIO 2025) held in Boston from 16 to 19 June. Showcasing its groundbreaking innovations and translational research across the fields of artificial intelligence (AI), medicine and engineering for global industry leaders, the University highlighted its interdisciplinary excellence and leading position in medical and health research.

The world’s largest biotechnology convention, BIO 2025 brought together over 20,000 industry leaders and professionals from about 70 countries and regions. It embraced all key aspects of the biotechnology ecosystem, spanning research and development, clinical trials and manufacturing, investment, business development and commercialisation. Ten cutting-edge PolyU innovations in drug discovery, medical devices and diagnostics, biomedical engineering, rehabilitation technologies, optometry and food science were showcased at the Convention. Apart from presentations, the research teams also engaged in business forums and thematic discussions during the event to cultivate meaningful connections, forge strategic partnerships and explore new avenues for collaboration with global industry leaders.

Prof. Christopher CHAO, PolyU Vice President (Research and Innovation), stated, “PolyU has demonstrated excellence in translational research by leveraging its academic strengths and innovative capabilities, particularly in medicine-engineering integration and AI-powered medical advancements. We have achieved significant technological breakthroughs with strong support from the Government and industrial partners, along with consistent global recognition through prestigious awards. For over a decade, the University has participated in the BIO International Convention, fully leveraging this global platform to showcase PolyU’s research and innovation strengths while also forging valuable partnerships worldwide.”

PolyU innovations exhibited at BIO 2025:
- “PocNova™: Ultra-Fast Nucleic Acid Testing System”, led by Prof. Thomas LEE, Associate Professor of the Department of Biomedical Engineering
- “Hybrid Robotic IoT for Telerehabilitation after Stroke”, led by Prof. Xiaoling HU, Associate Professor of the Department of Biomedical Engineering
- “Vcare: Vision Training VR Device”, led by Ir Dr Yu Ming TANG, Senior Lecturer of the Department of Industrial and Systems Engineering
- “Innovative Hormones for the Treatment of Diabetes and Related Metabolic Complications”, led by Prof. WONG Chi Ming, Associate Professor of the Department of Health Technology and Informatics
- “HAND-HEART: An AI-Based Hand Hygiene Augmented Reality Tool”, led by Prof. Lin YANG, Associate Professor of the School of Nursing
- “ABarginase: First-in-class Drug for Treatment of Multiple Obesity-related Metabolic Diseases”, led by Prof. LEUNG Yun-chung, Thomas, Professor of the Department of Applied Biology and Chemical Technology
- “First-in-Class Antibiotic Therapeutics”, led by Prof. MA Cong, Associate Professor of the Department of Applied Biology and Chemical Technology
- “Novel Nutraceuticals for Neurodegenerative Diseases”, led by Prof. LEE Ming-yuen, Simon, Chair Professor of Biomedical Sciences of the Department of Food Science and Nutrition
- “AkkMore™: a Fungus- and Plant-based Supplement against Obesity and Prediabetes”, led by Dr Gail CHANG Jinhui, Research Assistant Professor of the Department of Food Science and Nutrition
- “Safe & Eco-friendly Antimicrobial Materials”, led by Dr Gavin ZHANG, Research Fellow of the School of Fashion and Textiles

A special highlight of PolyU’s participation this year was Prof. MA Cong sharing his latest breakthrough in antibiotic therapeutics in the business presentation session. Prof. Ma has led a research team to make a first-in-class antimicrobial drug discovery with a unique mechanism of action to tackle antimicrobial resistance. His innovative approach focuses on disrupting protein–protein interactions (PPIs) within the bacterial transcription complex, igniting hope for the development of new antimicrobial agents.

Building on its robust foundation in medical and health research, PolyU is dedicated to advancing interdisciplinary research at the convergence of medicine, AI, engineering and data science, pioneering a new era of healthcare innovation, while also contributing to Hong Kong’s development into an international health and medical innovation hub.

19 Jun, 2025

Events


Media interview: PolyU start-up utilises technology to explore local community stories

Recognising that Hong Kong's fast-paced urban life often overshadows local community stories, Ms Fion Lau, a staff member of the PolyU Research and Innovation Office, and Mr Ken Chau, an alumnus of the PolyU Department of Applied Social Sciences, conceived the idea of combining social issues with puzzle games. In 2022, they launched "Puzzle Weekly", a start-up designed to encourage public engagement with local narratives and to foster stronger community connections through interactive game experiences.

A recent media interview highlighted Puzzle Weekly's innovative approach to community engagement. By leveraging their expertise in visual communication and collaborating with business partners, the team integrates authentic Hong Kong community narratives into engaging puzzle games. These multicultural games provide players with the opportunity to discover Hong Kong's hidden cultural heritage and strengthen community ties.

After rigorous trials and development, the team won the "Best Social Care Award" at the YDC Dare to Change Business Pitch Competition in 2022. The project also received funding from PolyU’s Micro Fund, supporting its continued growth and maturation. With plans to expand internationally and forge new collaborations, the team aims to share Hong Kong's unique stories with a global audience, preserving and promoting local culture on a wider stage.

19 Jun, 2025

Research and Innovation


PolyU scholar named “Top 50 Women in Web3 & AI” by CoinDesk

The Hong Kong Polytechnic University (PolyU) is committed to driving innovation and interdisciplinary research by leveraging artificial intelligence (AI) across diverse fields. Prof. YANG Hongxia, Associate Dean (Global Engagement) of the Faculty of Computer and Mathematical Sciences, Professor of the Department of Computing, and Executive Director of the PolyU Academy for Artificial Intelligence, has been named in CoinDesk's “Top 50 Women in Web3 & AI” for her impactful contributions to pioneering cutting-edge technologies. This inaugural list highlights 50 of the most influential women worldwide who are shaping the future of crypto and AI. Prof. YANG’s inclusion recognises her groundbreaking work in AI development, particularly her efforts to bridge advanced technology with practical applications across industries, from healthcare to finance.

Prof. YANG is a distinguished AI scientist with over 15 years of experience, specialising in large-scale machine learning, data mining, deep learning, and the practical deployment of large language model (LLM) systems in real-world settings. Throughout her illustrious career, she has developed ten significant algorithmic systems that have enhanced the operations of various enterprises.

Advancing AI in Healthcare and Beyond

As a strong advocate for decentralised AI, Prof. YANG has championed the "Model-over-Models" approach, which builds a foundational model from smaller, stackable domain-specific models. This approach, called InfiFusion, offers an efficient and scalable route to high-performance LLMs and aims to empower more industries with advanced AI capabilities. Currently, Prof. YANG and her team are developing foundation models in cutting-edge fields, including healthcare, manufacturing, energy and finance.

Focused on bridging statistical innovation with healthcare and actuarial science, which is crucial for the future well-being of humanity, Prof. YANG has been invited to host a high-level course themed “AI and Statistics” at the prestigious Croucher Advanced Study Institutes. Prof. YANG said, “By integrating statistical principles with the capabilities of generative AI, we aim to develop more robust models that can generate realistic data, improve predictive accuracy, and provide deeper insights into complex datasets.” The Croucher Advanced Study Institutes aim to explore the frontiers of generative AI and statistics, delving into the intersection of these two dynamic fields and uncovering new methodologies and applications that can enhance data-driven decision-making and innovation.

CoinDesk's selection process involved a diverse panel of judges and over 300 nominations from around the world, emphasising a balance of innovation, relevancy and influence. The final list showcases leaders in crypto and AI with diverse areas of expertise, ranging from product development and business strategy to regulatory compliance and ethical frameworks.
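The “Model-over-Models” idea above can be pictured with a toy sketch: route each query to the most relevant small domain model, falling back to a general model otherwise. This is only a generic stand-in for composing stackable domain experts behind one entry point; the keyword router, domain list and model IDs are hypothetical placeholders, not InfiFusion's actual fusion method.

```python
# A toy sketch of composing smaller domain-specific models behind one entry
# point, in the spirit of the "Model-over-Models" idea. NOT InfiFusion's
# actual algorithm: router, domains and model IDs are placeholders.
from functools import lru_cache
from transformers import pipeline

DOMAIN_MODELS = {
    "healthcare": "org/health-llm-small",  # hypothetical model IDs
    "finance": "org/finance-llm-small",
}
GENERAL_MODEL = "org/general-llm-base"

@lru_cache(maxsize=None)
def load(model_id: str):
    # Cache each loaded model so repeated queries reuse it.
    return pipeline("text-generation", model=model_id)

def route(query: str) -> str:
    """Naive keyword router; a real system would learn this mapping."""
    q = query.lower()
    if any(w in q for w in ("diagnosis", "patient", "drug")):
        return "healthcare"
    if any(w in q for w in ("loan", "portfolio", "risk")):
        return "finance"
    return "general"

def answer(query: str) -> str:
    model_id = DOMAIN_MODELS.get(route(query), GENERAL_MODEL)
    return load(model_id)(query, max_new_tokens=64)[0]["generated_text"]
```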

13 Jun, 2025

Awards and Achievements


Media coverage: PolyU and AELIS Couture partner on innovative materials for sustainable fashion

Technological innovation is revolutionising fashion design, driving continuous and dramatic changes across the industry. The Hong Kong Polytechnic University (PolyU) and the esteemed Paris fashion house AELIS Couture (AELIS) have collaborated to transform cutting-edge scientific research into sustainable luxury fashion materials for the Fall/Winter 2024/25 Couture Collection, breathing new life into the fashion industry.

The collection features a precious gold- and silver-coated sustainable silk organza with a metallic pearly sheen. This innovative material was meticulously developed and designed by a team led by Prof. Kinor JIANG, Professor of the School of Fashion and Textiles at PolyU. The research team employed a novel metallising technology to deposit ultra-thin, nano-scale metal films onto textiles, creating a material that retains the comfort and flexibility of traditional fabrics while exhibiting a striking visual effect.

PolyU is committed to researching and developing environmentally friendly materials, which aligns perfectly with AELIS's brand philosophy. This partnership effectively integrates scientific and technological advancements with fashion design, highlighting a shared commitment to environmental sustainability and propelling the fashion industry towards a more eco-conscious future.

In response to the industry's growing demand for eco-friendly materials, the collaborative initiative between AELIS and PolyU integrates innovation with sustainable development concepts, bridging the realms of fashion and research. This partnership serves as a model for future collaborations, effectively bridging fashion design and scientific research to foster material innovation and expand the creative horizons of fashion designers.

11 Jun, 2025

Research and Innovation


PolyU develops novel multi-modal agent to facilitate long video understanding by AI, accelerating development of generative AI-assisted video analysis

While Artificial Intelligence (AI) technology is evolving rapidly, AI models still struggle with understanding long videos. A research team from The Hong Kong Polytechnic University (PolyU) has developed a novel video-language agent, VideoMind, that enables AI models to perform long video reasoning and question-answering tasks by emulating humans’ way of thinking. The VideoMind framework incorporates an innovative Chain-of-Low-Rank Adaptation (LoRA) strategy to reduce the demand for computational resources and power, advancing the application of generative AI in video analysis. The findings have been submitted to world-leading AI conferences.

Videos, especially those longer than 15 minutes, carry information that unfolds over time, such as the sequence of events, causality, coherence and scene transitions. To understand the video content, AI models therefore need not only to identify the objects present, but also to take into account how they change throughout the video. As visuals in videos occupy a large number of tokens, video understanding requires vast amounts of computing capacity and memory, making it difficult for AI models to process long videos.

Prof. Changwen CHEN, Interim Dean of the PolyU Faculty of Computer and Mathematical Sciences and Chair Professor of Visual Computing, and his team have achieved a breakthrough in research on long video reasoning by AI. In designing VideoMind, they made reference to a human-like process of video understanding and introduced a role-based workflow. The four roles in the framework are: the Planner, to coordinate all other roles for each query; the Grounder, to localise and retrieve relevant moments; the Verifier, to validate the information accuracy of the retrieved moments and select the most reliable one; and the Answerer, to generate the query-aware answer. This progressive approach to video understanding helps address the challenge of temporal-grounded reasoning that most AI models face.

Another core innovation of the VideoMind framework lies in its adoption of a Chain-of-LoRA strategy. LoRA is a fine-tuning technique that has emerged in recent years; it adapts AI models for specific uses without full-parameter retraining. The innovative Chain-of-LoRA strategy pioneered by the team applies four lightweight LoRA adapters in a unified model, each designed for calling a specific role. With this strategy, the model can dynamically activate role-specific LoRA adapters during inference via self-calling, seamlessly switching among the roles. This eliminates the need and cost of deploying multiple models while enhancing the efficiency and flexibility of the single model.

VideoMind is open source on GitHub and Hugging Face. Details of the experiments conducted to evaluate its effectiveness in temporal-grounded video understanding across 14 diverse benchmarks are also available. Comparing VideoMind with some state-of-the-art AI models, including GPT-4o and Gemini 1.5 Pro, the researchers found that the grounding accuracy of VideoMind outperformed all competitors in challenging tasks involving videos with an average duration of 27 minutes. Notably, the team included two versions of VideoMind in the experiments: one with a smaller, 2 billion (2B) parameter model, and another with a bigger, 7 billion (7B) parameter model. The results showed that, even at the 2B size, VideoMind still yielded performance comparable to many of the other 7B models.

Prof. Chen said, “Humans switch among different thinking modes when understanding videos: breaking down tasks, identifying relevant moments, revisiting these to confirm details and synthesising their observations into coherent answers. The process is very efficient, with the human brain using only about 25 watts of power, about a million times less than a supercomputer of equivalent computing power. Inspired by this, we designed the role-based workflow that allows AI to understand videos like humans, while leveraging the Chain-of-LoRA strategy to minimise the need for computing power and memory in this process.”

AI is at the core of global technological development, yet the advancement of AI models is constrained by insufficient computing power and excessive power consumption. Built upon the unified, open-source model Qwen2-VL and augmented with additional optimisation tools, the VideoMind framework lowers the technological cost and the threshold for deployment, offering a feasible solution to the bottleneck of reducing power consumption in AI models. Prof. Chen added, “VideoMind not only overcomes the performance limitations of AI models in video processing, but also serves as a modular, scalable and interpretable multimodal reasoning framework. We envision that it will expand the application of generative AI to various areas, such as intelligent surveillance, sports and entertainment video analysis, video search engines and more.”
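To make the Chain-of-LoRA role switching concrete, here is a minimal sketch assuming the Hugging Face "peft" adapter API; the base model ID, adapter paths and prompts are hypothetical placeholders rather than VideoMind's released checkpoints (which are on GitHub and Hugging Face).

```python
# A minimal sketch of Chain-of-LoRA role switching with the "peft" library.
# Model ID, adapter paths and prompts are hypothetical placeholders; the
# team's real checkpoints and code are released separately.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen/Qwen2-7B-Instruct"  # stand-in; VideoMind builds on Qwen2-VL
tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE)

# One unified model, four lightweight LoRA adapters: one per role.
agent = PeftModel.from_pretrained(base, "adapters/planner", adapter_name="planner")
for role in ("grounder", "verifier", "answerer"):
    agent.load_adapter(f"adapters/{role}", adapter_name=role)

def ask(prompt: str) -> str:
    ids = tok(prompt, return_tensors="pt")
    out = agent.generate(**ids, max_new_tokens=128)
    return tok.decode(out[0], skip_special_tokens=True)

def answer(query: str, video_context: str) -> str:
    agent.set_adapter("planner")   # coordinate which roles to invoke
    plan = ask(f"Plan how to answer: {query}")
    agent.set_adapter("grounder")  # localise and retrieve relevant moments
    moments = ask(f"{video_context}\nFind moments for: {plan}")
    agent.set_adapter("verifier")  # validate moments, keep the most reliable
    best = ask(f"Verify these candidate moments: {moments}")
    agent.set_adapter("answerer")  # generate the query-aware answer
    return ask(f"Using moment {best}, answer: {query}")
```

Because switching adapters reuses the frozen base weights, a single deployed model can play all four roles, which is the cost saving the article describes.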

10 Jun, 2025

Research and Innovation


PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts

Can one truly understand what “flower” means without smelling a rose, touching a daisy or walking through a field of wildflowers? This question is at the core of a rich debate in philosophy and cognitive science. While embodied cognition theorists argue that physical, sensory experience is essential to concept formation, studies of the rapidly evolving large language models (LLMs) suggest that language alone can build deep, meaningful representations of the world.

By exploring the similarities between LLM and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge. Their findings also revealed how the use of sensory input for grounding or embodiment (connecting abstract with concrete concepts during learning) affects the ability of LLMs to understand complex concepts and form human-like representations. The study, in collaboration with scholars from Ohio State University, Princeton University and the City University of New York, was recently published in Nature Human Behaviour.

Led by Prof. LI Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini). They compared them with human-generated ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor (e.g., foot/leg, mouth/throat) domains from the highly reliable and validated Glasgow Norms and Lancaster Norms datasets.

The research team first compared pairs of data from individual humans and individual LLM runs to discover the similarity between word ratings across each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight to what extent humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, the word pair “pasta” and “roses” might receive equally high olfactory ratings, but “pasta” is in fact more similar to “noodles” than to “roses” when considering appearance and taste. The team therefore conducted representational similarity analysis, treating each word as a vector along multiple non-sensorimotor, sensory and motor dimensions, for a more complete comparison between humans and LLMs.

The representational similarity analyses revealed that word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain, and most dissimilar for words in the motor domain. This highlights the limitations of LLMs in fully capturing humans’ conceptual understanding. Non-sensorimotor concepts are understood well, but LLMs fall short when representing concepts that involve sensory information, such as visual appearance and taste, or body movement. Motor concepts, which are less fully described in language and rely heavily on embodied experiences, are even more challenging for LLMs than sensory concepts such as colour, which can be learned from textual data.

In light of the findings, the researchers examined whether grounding would improve the LLMs’ performance. They compared the performance of more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with that of LLMs trained on language alone (GPT-3.5, PaLM). They discovered that the more grounded models incorporating visual input exhibited a much higher similarity with human representations.

Prof. Li Ping said, “The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to more human-like representations and more efficient, human-like performance in LLMs in the future.”

Interestingly, this finding is also consistent with those of previous human studies indicating representational transfer. Humans acquire object-shape knowledge through both visual and tactile experiences, with seeing and touching objects activating the same regions of the human brain. The researchers pointed out that, as in humans, multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space. Prof. Li added, “The smooth, continuous structure of the embedding space in LLMs may underlie our observation that knowledge derived from one modality can transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect.”

Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, “These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM’s representation will then be indistinguishable from that of humans.”
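As an illustration of how such a representational similarity analysis can be set up, here is a minimal sketch; the rating matrices are random placeholders standing in for the Glasgow/Lancaster norm data and the LLM-generated ratings, not the study's actual data.

```python
# A minimal sketch of representational similarity analysis (RSA): each word
# is a vector of ratings across non-sensorimotor, sensory and motor
# dimensions; we compare the pairwise word-similarity structure of human
# ratings against LLM ratings. Random placeholder data, not the real norms.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, n_dims = 500, 20  # the study rated ~4,500 words on many dimensions

human = rng.random((n_words, n_dims))                        # placeholder human ratings
llm = human + 0.3 * rng.standard_normal((n_words, n_dims))   # noisier "LLM" ratings

# First-order structure: pairwise dissimilarity between word vectors.
rdm_human = pdist(human, metric="correlation")
rdm_llm = pdist(llm, metric="correlation")

# Second-order comparison: how alike are the two similarity structures?
rho, _ = spearmanr(rdm_human, rdm_llm)
print(f"RSA agreement (Spearman rho): {rho:.3f}")
```

Running the same comparison separately on the non-sensorimotor, sensory and motor columns is what allows the domains to be ranked by human-LLM similarity, as the study does.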

10 Jun, 2025

Research and Innovation


Smart Adaptation: The Fusion of AI and Robotics for Dynamic Environments

The advancement of artificial intelligence (AI) has ushered in a new era of automated robotics that are adaptive to their environments.

The field of robotics has made remarkable strides over the past few decades, yet it continues to face challenges that hinder the full utilisation of its potential. Traditional robots often rely on pre-programmed instructions and restricted configurations, limiting their ability to respond to unforeseen circumstances. AI technologies, encompassing cognition, analysis, inference and decision-making, enable robots to operate intelligently, significantly enhancing their capabilities to assist and support humans. By augmenting robots with AI technologies within engineering systems, we can expect ever more widespread applications in industry, agriculture, logistics, medicine and beyond, allowing robots to perform complex tasks with greater autonomy and efficiency. This technological enhancement unleashes the potential of robotics in real-world applications, offering solutions to pressing medical and environmental problems and facilitating a paradigm shift towards intelligent manufacturing in the context of Industry 4.0.

With the application of AI, a research team led by Prof. Dan ZHANG, Chair Professor of Intelligent Robotics and Automation in the Department of Mechanical Engineering and Director of the PolyU-Nanjing Technology and Innovation Research Institute at The Hong Kong Polytechnic University (PolyU), has fabricated a number of novel robotic systems with high dynamic performance.

Prof. Zhang’s research team has recently proposed a grasp pose detection framework that applies deep neural networks to generate a rich set of omnidirectional (six degrees of freedom, “6-DoF”) grasp poses with high precision. To detect the objects to be grasped, convolutional neural networks (CNNs) are applied in a multi-scale cylinder with varying radii, providing detailed geometric information about each object’s location and size. Multiple multi-layer perceptrons (MLPs) optimise the precision parameters of the robotic manipulator for grasping objects, including the gripper width, the grasp score (for specific in-plane rotation angles and gripper depths) and collision detection. These parameters are fed into an algorithm within the framework, extending grasps from pre-set configurations to generate comprehensive grasp poses tailored to the scene. Experiments reveal that the proposed method consistently outperforms the benchmark method in laboratory simulations, achieving an average success rate of 84.46% compared to 78.31% for the benchmark in real-world experiments.

In addition, the research team leverages AI technologies to enhance the functionality and user experience of a novel robotic knee exoskeleton for the gait rehabilitation of patients with knee joint impairment. The exoskeleton’s structure includes an actuator powered by an electric motor to actively assist knee flexion/extension, an ankle joint that transfers the weight of the exoskeleton to the ground, and a stiffness adjustment mechanism powered by another electric motor. A Long Short-Term Memory (LSTM) network in a machine learning algorithm provides real-time nonlinear stiffness and torque adjustments, mimicking the biomechanical characteristics of the human knee joint. The network is trained on a large dataset of electromyography (EMG) signals and knee joint movement data, enabling real-time adjustments of the exoskeleton’s stiffness and torque based on the user’s physiological signals and movement conditions (see the sketch at the end of this article). By predicting the necessary adjustments, the system adapts to various gait requirements, enhancing the user’s walking stability and comfort.

The integration of an adaptive admittance control algorithm based on Radial Basis Function (RBF) networks enables the robotic knee exoskeleton to automatically adjust joint angles and stiffness parameters without the need for force or torque sensors. This enhances the accuracy of position control and improves the exoskeleton’s responsiveness to different walking postures. This data-driven approach refines the model’s predictions and improves overall performance over time. Experimental results demonstrate that the model outperforms traditional fixed control methods in terms of accuracy and real-time responsiveness, generating the desired reference joint trajectory for users at different walking speeds.

Prof. Zhang’s innovations reveal that AI techniques, particularly deep learning, have improved the ability of robots to perceive and understand their environments. This advancement contributes to more effective and flexible solutions for handling tasks beyond fixed configurations in standard settings. The melding of AI and robotics not only enhances precision and accuracy but also introduces new capabilities for robotic automation, enabling real-time decision-making and continuous learning. As a result, robots can improve their performance over time, leading to extended utilisation of robotics in society for future endeavours.

Source: Innovation Digest, Issue 1
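To make the LSTM signal-to-adjustment mapping described above concrete, here is a minimal sketch; the channel count, window length and two-value output are hypothetical placeholders, not the team's actual network.

```python
# A minimal sketch of an LSTM mapping a window of EMG + knee-kinematics
# samples to real-time stiffness/torque commands. Channel counts, window
# length and outputs are hypothetical placeholders, not the team's model.
import torch
import torch.nn as nn

class StiffnessTorqueLSTM(nn.Module):
    def __init__(self, n_channels: int = 10, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # outputs: [stiffness, torque]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)             # x: (batch, time, channels)
        return self.head(out[:, -1])      # predict from the latest hidden state

model = StiffnessTorqueLSTM()
window = torch.randn(1, 200, 10)          # e.g., 200 samples of EMG + joint angles
stiffness_cmd, torque_cmd = model(window)[0]
print(float(stiffness_cmd), float(torque_cmd))
```

In a deployed controller, such a network would be trained on the EMG and joint-movement dataset and queried every control cycle to update the exoskeleton's stiffness and torque.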

9 Jun, 2025

Research and Innovation


PolyU shines at the 4th Nanchang Healthcare Expo, showcasing innovations in medicine-engineering integration and AI

The 4th China (Nanchang) International Healthcare Industry Conference and Expo 2025 was successfully launched on 6 June at the Nanchang Greenland International Expo Centre in Jiangxi. This prominent event serves as a key platform for exchange and collaboration within Mainland China’s healthcare industry. As the sole representative of Hong Kong's higher education institutions, PolyU brought 11 research teams to the expo to showcase cutting-edge research and technological achievements in healthcare. The teams demonstrated PolyU’s innovative strengths in medicine-engineering integration and artificial intelligence.

Held under the theme “Technology-Driven Health and Reshaping New Business Models”, the expo covers the entire healthcare value chain, including biomedicine, medical devices, the silver economy, smart healthcare, traditional Chinese medicine, health consumption and smart living.

PolyU’s exhibition included:
- “Portable, self-testing retinal fundus camera integrated with AI system”, led by Prof. Mingguang HE, Chair Professor of Experimental Ophthalmology of the PolyU School of Optometry and Henry G. Leong Professor in Elderly Vision Health
- “An intelligent ankle rehabilitation robot”, developed by Prof. ZHANG Dan, Chair Professor of the PolyU Department of Mechanical Engineering
- “An AI-Assisted Pharmaceutical Product Development Platform”, led by Prof. MA Cong, Associate Professor of the PolyU Department of Applied Biology and Chemical Technology
- “A wearable Smart Navigation and Interaction System for the Visually Impaired”, developed by Prof. Weisong WEN, Assistant Professor of the PolyU Department of Aeronautical and Aviation Engineering
- “Liverscan”, a palm-sized ultrasound device for fatty liver and liver fibrosis assessment, and “Scolioscan”, a 3D ultrasound imaging device providing radiation-free assessment of scoliosis, led by Prof. ZHENG Yongping, Chair Professor of the PolyU Department of Biomedical Engineering and Henry G. Leong Professor in Biomedical Engineering
- “Digital Strolling”, for alleviating depression in mobility-impaired individuals, led by Prof. Yan LI, Assistant Professor (Presidential Young Scholar) of the PolyU School of Nursing
- “Virtual MRI Contrast Enhancement System”, led by Prof. CAI Jing, Head and Professor of the PolyU Department of Health Technology and Informatics
- “The Mobile Ankle-foot Exoneuromusculoskeleton”, led by Prof. HU Xiaoling, Associate Professor of the PolyU Department of Biomedical Engineering
- “E-bibliotherapy App for Caregivers of People with Dementia”, led by Prof. Shanshan WANG, Assistant Professor of the PolyU School of Nursing
- “Dementia Simulation Game Kit”, led by the PolyU Jockey Club Design Institute for Social Innovation

Mr Victor ZHAO, Associate Director of the PolyU Research and Innovation Office, attended the opening ceremony alongside industry leaders, including representatives from the Jiangxi Provincial Government, academicians from the Chinese Academy of Engineering, representatives from the Nanchang Municipal People's Government, and other distinguished guests. The event fostered vibrant academic exchanges and business discussions. Local media also conducted interviews with the PolyU research teams, further broadening public awareness of PolyU's cutting-edge scientific research and its contributions to the field.

PolyU has a strong foundation in medical education and innovation.
By integrating healthcare with artificial intelligence, engineering, and data science through interdisciplinary strategies, PolyU is committed to addressing diverse medical challenges, enhancing healthcare services in Hong Kong and the Greater Bay Area, and advancing Hong Kong’s development as an international hub for health and medical innovation.  

7 Jun, 2025

Events


PolyU and Peking University Third Hospital join forces to establish Joint Research Laboratory on Musculoskeletal and Sports Rehabilitation

The Hong Kong Polytechnic University (PolyU) and Peking University Third Hospital (PUTH) signed a collaboration agreement last month to officially establish the “Joint Research Laboratory on Musculoskeletal and Sports Rehabilitation”. This partnership aims to advance cutting-edge research and innovation in musculoskeletal health and sports rehabilitation by leveraging the clinical expertise, medical engineering and translational research strengths of both institutions, and to promote the translation and application of research outcomes.

The agreement signing ceremony and the plaque unveiling ceremony of the joint research laboratory were held at PUTH. The agreement was signed by Prof. DONG Cheng, Associate Vice President (Mainland Research Advancement) of PolyU, and Prof. FU Wei, President of PUTH. Following this, witnessed by Prof. Dong Cheng and Prof. Fu Wei, the plaque unveiling ceremony for the “Joint Research Laboratory on Musculoskeletal and Sports Rehabilitation” was jointly officiated by Prof. Marco PANG, Shun Hing Education and Charity Fund Professor in Rehabilitation Sciences, Chair Professor of Neurorehabilitation and Head of the Department of Rehabilitation Sciences (RS) of PolyU, and Prof. LI Rong, Vice President of PUTH and Director of the Reproductive Medicine Department, symbolising the official launch of the joint research laboratory and marking a significant step forward in academic and medical research collaboration between Hong Kong and mainland China.

Prof. Dong Cheng remarked, “In the face of global population ageing and the increasing burden of chronic diseases, sports rehabilitation plays an increasingly vital role in improving quality of life and alleviating healthcare pressures. This collaboration between PolyU and PUTH will not only deepen academic exchange between the two institutions but also inject new momentum into the field of sports rehabilitation in the Greater China region. We look forward to working closely with PUTH to develop the joint laboratory into a hub for research innovation and talent development serving the Asia-Pacific region.”

Prof. Fu Wei stated, “We are delighted to collaborate with PolyU to advance impactful research projects and promote the translation of innovative achievements in musculoskeletal and sports rehabilitation, benefiting patients and fostering the further development of rehabilitation medicine.”

PolyU and PUTH will leverage their respective strengths to establish an integrated, interdisciplinary joint research laboratory. The partnership will promote scientific research, technological innovation and talent cultivation in the field of sports rehabilitation, with the goal of improving patient recovery outcomes and contributing to healthy ageing.

6 Jun, 2025

Partnership
