Vibrant@FENG - Issue 2 (December 2021)

 Research Achievements

Best Paper Awards at the International Workshop on Advanced Image Technology 2021

The International Workshop on Advanced Image Technology (IWAIT) 2021 is a well-known international event that gathers researchers, professors, students and other interested parties in the field of advanced image technology. Two teams of EIE students, led by Professor Kenneth Lam and Dr Haibo Hu respectively, received Best Paper Awards at this workshop:

Paper 1: Attention-based Cross-modality Interaction for Multispectral Pedestrian Detection
Staff/Students: Professor Kenneth Lam; Mr Tianshan LIU (PhD student); Mr Rui ZHAO (PhD student)

Paper 2: An Extraction Attack on Image Recognition Model using VAE-kdtree Model
Staff/Students: Dr Haibo Hu; Miss Tianqi WEN (BEng in EIE student); Mr Huadi ZHENG (PhD student)

Multispectral pedestrian detection has attracted extensive attention, as paired RGB-thermal images provide complementary patterns for dealing with illumination changes in realistic scenarios. However, most existing deep-learning-based multispectral detectors extract features from the RGB and thermal inputs separately and fuse them with a simple concatenation operation. To address this limitation, Professor Lam's team proposed an attention-based cross-modality interaction (ACI) module, which adaptively highlights and aggregates the discriminative regions and channels of the feature maps from the RGB and thermal images. The ACI module is deployed at multiple layers of a two-branch deep architecture to capture cross-modal interactions at diverse semantic levels, yielding illumination-invariant pedestrian detection.
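The core idea, using one modality's features to reweight the other's instead of simply concatenating them, can be illustrated with a toy channel-attention sketch. This is NumPy pseudocode for the general cross-modal gating pattern, not the team's actual ACI module, whose learned architecture is described in the paper; all names here are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_modality_channel_attention(rgb_feat, thermal_feat):
    """Toy sketch: each modality's global descriptor gates the other
    modality's channels, and the gated maps are then fused by addition.

    rgb_feat, thermal_feat: feature maps of shape (C, H, W).
    """
    # Global average pooling -> one descriptor value per channel, shape (C,)
    rgb_desc = rgb_feat.mean(axis=(1, 2))
    thermal_desc = thermal_feat.mean(axis=(1, 2))

    # Cross-modal gating: the RGB descriptor produces weights for the
    # thermal channels and vice versa. A real module would pass these
    # descriptors through learned layers before the sigmoid.
    w_for_thermal = sigmoid(rgb_desc)[:, None, None]
    w_for_rgb = sigmoid(thermal_desc)[:, None, None]

    # Weighted fusion replaces plain concatenation.
    return w_for_rgb * rgb_feat + w_for_thermal * thermal_feat

rgb = np.random.rand(8, 16, 16)
thermal = np.random.rand(8, 16, 16)
fused = cross_modality_channel_attention(rgb, thermal)
print(fused.shape)  # (8, 16, 16)
```

In the paper's design such interactions are computed at several depths of the two-branch network, so that both low-level texture cues and high-level semantic cues are fused adaptively.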

Dr Hu's team, in their paper, proposed a black-box extraction attack on pre-trained image classifiers that rebuilds a functionally equivalent model with high similarity. Common model extraction attacks feed a large number of training samples to the target classifier, which is time-consuming and redundant; the results depend heavily on the selected training samples and on the target model, and the extracted model may capture only part of the crucial features if the samples are chosen poorly. To eliminate these uncertainties, the team proposed the VAE-kdtree attack model, which removes the strong dependency between the selected training samples and the target model. It not only saves redundant computation but also extracts the critical classification boundaries more accurately.
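The general black-box extraction setting can be sketched with a toy example: query an opaque classifier on chosen inputs and fit a substitute model to its answers. This illustrates the generic threat model only, not the VAE-kdtree method itself; the hidden linear target and the least-squares substitute below are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden target model: a linear classifier the attacker can only query.
_secret_w = rng.normal(size=3)

def target_predict(x):
    """Black-box oracle: returns class labels in {-1, +1}."""
    return np.sign(x @ _secret_w)

# Attacker step 1: query the oracle on a set of chosen inputs.
queries = rng.normal(size=(500, 3))
labels = target_predict(queries)

# Attacker step 2: fit a substitute model to the returned labels
# (here, a simple least-squares fit to the label values).
w_sub, *_ = np.linalg.lstsq(queries, labels, rcond=None)

# Measure functional agreement between substitute and target on fresh data.
test_points = rng.normal(size=(1000, 3))
agreement = np.mean(np.sign(test_points @ w_sub) == target_predict(test_points))
print(f"agreement: {agreement:.2f}")
```

The weakness the VAE-kdtree model targets is visible even here: the quality of the substitute hinges entirely on which query points are chosen, which motivates selecting queries systematically near the decision boundary rather than at random.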
