Welcome to our research group. We are dedicated to discovering, understanding, and engineering low-dimensional materials and devices with new functionalities and unprecedented performance.
Chair Professor & Associate Dean (FS)
Department of Applied Physics

Prof. Yang Chai



Welcome to ISMC2024!

The 4th International Symposium on Emerging Memory and Computing (ISMC) is scheduled to take place in Hong Kong on January 9-11, 2024! Our journey began with the 1st Symposium, held at the Hong Kong Polytechnic University from September 22-24, 2017. Building on its success, the 2nd Symposium was hosted in Ji'an City, Jiangxi Province, from August 31 to September 1, 2018, followed by the 3rd Symposium, held online and hosted by the Hong Kong Polytechnic University from May 26-29, 2021. This highly anticipated international symposium will bring together distinguished researchers, engineers, and experts from across the globe to exchange their latest findings and insights in the rapidly evolving field of emerging memory and computing.

Computational event-driven vision sensors for in-sensor spiking neural networks

Neuromorphic event-based image sensors capture only the dynamic motion in a scene, which is then transferred to computation units for motion recognition. This approach, however, introduces time latency and can be power-consuming. Here we report computational event-driven vision sensors that capture dynamic motion and directly convert it into programmable, sparse and informative spiking signals. The sensors can be used to form a spiking neural network for motion recognition. Each individual vision sensor consists of two parallel photodiodes with opposite polarities and has a temporal resolution of 5 μs. In response to changes in light intensity, the sensors generate spiking signals with different amplitudes and polarities by electrically programming their individual photoresponsivity. The non-volatile and multilevel photoresponsivity of the vision sensors can emulate synaptic weights and can be used to create an in-sensor spiking neural network. Our computational event-driven vision sensor approach eliminates redundant data during the sensing process, as well as the need for data transfer between sensors and computation units.
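The sensing scheme above can be sketched in a few lines of code: each pixel pair emits a spike only when the light intensity changes, with polarity set by the sign of the change and amplitude set by the programmed photoresponsivity, which plays the role of a synaptic weight. The sketch below is a minimal illustration under these assumptions; the function names, pixel values, and threshold are hypothetical and do not come from the paper.

```python
# Minimal sketch of the in-sensor spiking scheme (illustrative, not the paper's code).
# A static pixel produces no event, so redundant data is never generated;
# the weighted spikes are summed directly at the sensor, forming one layer
# of a spiking neural network.

def event_spikes(prev_frame, curr_frame, weights):
    """Convert per-pixel intensity changes into weighted spike amplitudes."""
    spikes = []
    for p0, p1, w in zip(prev_frame, curr_frame, weights):
        delta = p1 - p0
        if delta > 0:        # positive-polarity photodiode responds
            spikes.append(+w)
        elif delta < 0:      # opposite-polarity photodiode responds
            spikes.append(-w)
        else:                # static pixel: no event emitted
            spikes.append(0.0)
    return spikes

def output_neuron(spikes, threshold=1.0):
    """Sum the spike amplitudes; fire if the potential crosses the threshold."""
    return sum(spikes) >= threshold

# Two frames of a 4-pixel scene; only the moving pixels emit events.
prev = [0.2, 0.8, 0.5, 0.1]
curr = [0.9, 0.8, 0.1, 0.1]
weights = [0.7, 0.3, 0.2, 0.5]   # programmed photoresponsivities (synaptic weights)

s = event_spikes(prev, curr, weights)    # [0.7, 0.0, -0.2, 0.0]
fired = output_neuron(s, threshold=0.4)  # True: net weighted input exceeds threshold
```

Only two of the four pixels change between frames, so only two spikes carry information; recognition then reduces to checking whether the weighted sum crosses a firing threshold.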

Research Directions


Near-sensor and In-sensor Computing


Emerging Memories
