From Single Modality to Modality Fusion: The First Step Towards Robust Multisensory-Based Localisation Powered by AI
27 Sep 2022
Department of Aeronautical and Aviation Engineering
16:00 - 17:00
General Office firstname.lastname@example.org
Meeting ID: 976 5053 2247 | Passcode: 491169
We interact with the world through multiple senses, including vision, hearing, and physical touch. In the era of Artificial Intelligence (AI), we aim to equip AI with the ability to understand and interact with the real world from a multimodal perspective, towards delivering robust machine-learning-based solutions for specific tasks.
In this talk, Mr Wei will share a series of works from his group, progressing from a single-modality location-tracking system to a sensor-fusion-based localisation solution, all achieved with end-to-end machine learning techniques without reliance on human intuition.
Mr Xijia Wei is currently a PhD student at University College London (UCL) under the supervision of Prof. Nadia Berthouze and Dr Youngjun Cho. He focuses on sensor-fusion-based ubiquitous computing. He specialises in multimodal machine learning that allows models to automatically learn informative features from multisensory data, without human intervention, and to make robust inferences in various real-life scenarios. Prior to joining UCL, Mr Wei studied Artificial Intelligence (MSc) under the supervision of Dr Valentin Radu and Electronics and Electrical Engineering (BEng) under the supervision of Prof. Tughrul Arslan, both at the University of Edinburgh, Scotland. Beyond academia, over the past four years Mr Wei has worked in industry and at research institutes as a Tech Lead developing machine learning systems in the fields of localisation, healthcare and FinTech.