Recently, the paper "All-Day Multi-Camera Multi-Target Tracking" by the Machine Intelligence Research Group at the Robotics Research Department of the Shenyang Institute of Automation (SIA), Chinese Academy of Sciences (CAS), was officially accepted by IEEE CVPR 2025 (IEEE/CVF Conference on Computer Vision and Pattern Recognition), a flagship international conference in computer vision and pattern recognition.
To address the low tracking accuracy of multi-camera multi-target tracking under low-light conditions, the researchers developed a novel fusion module named ADMF (Adaptive Dual-Modal Fusion) based on a Mamba network, which adaptively integrates the visible-light and infrared modalities according to illumination intensity. Building on ADMF, the team established the first all-day multi-camera multi-target tracking framework, ADMCMT (All-Day Multi-Camera Multi-Target Tracking).
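The paper's implementation details are not given here, but the core idea of illumination-adaptive dual-modal fusion can be sketched in a few lines. The sketch below is a hypothetical illustration only, not the authors' ADMF module (which uses a Mamba state-space network rather than a scalar gate); all function names and the simple luminance-based weighting are assumptions made for clarity.

```python
import numpy as np

def illumination_gate(rgb_image):
    # Crude illumination cue: mean luminance mapped into [0, 1].
    # (Assumption for illustration; the real module learns this adaptively.)
    return float(np.clip(rgb_image.mean(), 0.0, 1.0))

def adaptive_fuse(feat_visible, feat_infrared, illum):
    # Brighter scenes weight the visible-light features more heavily;
    # dark scenes lean on the infrared features.
    w = illum
    return w * feat_visible + (1.0 - w) * feat_infrared

# Example: a dark scene (illumination 0.1) relies mostly on infrared features.
fused = adaptive_fuse(np.ones(4), np.zeros(4), illum=0.1)
```

In the actual framework, the fusion weights would be produced per-feature by the learned Mamba-based module rather than by a single scalar, but the gating principle is the same.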
Additionally, the team created M3Track, the first RGBT multi-camera multi-target tracking dataset (to be released; it contains 88 sequences collected from 19 real-world scenes, totaling 118K×2 paired infrared and visible-light frames). Experiments validated the effectiveness of the ADMCMT method in improving tracking accuracy under low-light conditions, and the M3Track dataset will provide data support for future research on all-day tracking methods.
The paper’s first author is Researcher Fan Huijie from the Robotics Research Department, with Dr. Wang Qiang as the corresponding author.
In other recent results, the team's papers "GLM: Global-Local Variation Awareness in Mamba-based World Model" and "FMambaIR: A Hybrid State Space Model and Frequency Domain for Image Restoration" were published at the AAAI 2025 conference and in IEEE Transactions on Geoscience and Remote Sensing, respectively.
The team's research was supported by grants from the National Natural Science Foundation of China, the SIA Fundamental Research Program, and the Independent Research Projects of the State Key Laboratory of Robotics.