Multi-sensor fusion improves performance in both localization and scene understanding. Once multiple sensors are calibrated against one another, their data can be fused at the data level, feature level, or model level, allowing tasks to be carried out with higher accuracy.
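As an illustration of data-level fusion after calibration, the sketch below projects 3D lidar points into a camera image using extrinsic and intrinsic parameters. The calibration values (identity extrinsics, a simple pinhole intrinsic matrix) are hypothetical placeholders, not parameters from the original text.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3D lidar points (N, 3) into pixel coordinates (N, 2).

    R, t: extrinsic rotation (3x3) and translation (3,) mapping the
    lidar frame into the camera frame, obtained from calibration.
    K: camera intrinsic matrix (3x3).
    """
    pts_cam = points_lidar @ R.T + t       # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep points in front of the camera
    uvw = pts_cam @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# Hypothetical calibration: identity extrinsics, pinhole intrinsics.
R = np.eye(3)
t = np.zeros(3)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

points = np.array([[0.0, 0.0, 10.0],   # straight ahead
                   [1.0, 0.0, 10.0]])  # 1 m to the right
pixels = project_lidar_to_image(points, R, t, K)
print(pixels)  # the first point lands at the principal point (320, 240)
```

Once lidar points carry pixel coordinates, per-point image features (color, semantics) can be attached to them, which is the essence of data-level fusion.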
Simultaneous localization and mapping (SLAM) is the principal method for navigating unmanned vehicles in unknown environments. By combining local SLAM with a global map, an unmanned vehicle can quickly determine its position and orientation.
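A minimal sketch of the localization step, under simplifying assumptions not stated in the text: the vehicle observes known landmarks from a global 2D map in its own frame, and its pose (rotation and translation) is recovered by least-squares rigid alignment (the Kabsch method). The landmark coordinates and the 30-degree heading are invented for the demo.

```python
import numpy as np

def estimate_pose_2d(obs_local, map_global):
    """Find (R, t) such that map_global ≈ obs_local @ R.T + t,
    i.e. the vehicle pose, via least-squares rigid alignment."""
    mu_l = obs_local.mean(axis=0)
    mu_g = map_global.mean(axis=0)
    H = (obs_local - mu_l).T @ (map_global - mu_g)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_g - R @ mu_l
    return R, t

# Hypothetical global map and true vehicle pose (heading 30 deg, at (5, 2)).
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, 2.0])
landmarks = np.array([[10.0, 0.0], [0.0, 10.0], [-5.0, 5.0], [3.0, -4.0]])

# What local SLAM would observe: landmarks expressed in the vehicle frame.
obs_local = (landmarks - t_true) @ R_true

R_est, t_est = estimate_pose_2d(obs_local, landmarks)
heading = np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0]))
print(heading, t_est)  # recovers the 30-degree heading and position (5, 2)
```

Real SLAM front ends solve this same alignment repeatedly (scan matching, feature registration); the global map then anchors the local estimates and removes drift.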
Compared with existing lidars and cameras, the intelligent camera array developed specifically for autonomous driving offers better vibration resistance, a wider field of view, faster computation, and stronger scene-recognition performance.
The multi-sensor data generated in autonomous driving, especially 3D data, is enormous in volume and places a heavy load on storage and communication. Using compressive sensing and compressive neural networks, compressive sampling can be implemented directly in the sensor hardware, greatly reducing the data volume while preserving signal quality.
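A toy numpy sketch of the compressive sensing idea: a sparse signal of length 256 is reduced to 64 random projections (the kind of measurement a compressive sampler would take in hardware), then reconstructed by orthogonal matching pursuit. The dimensions, sparsity level, and random seed are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 5           # signal length, measurements, sparsity
x = np.zeros(n)                # a k-sparse test signal
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Compressive sampling: only m << n random projections are stored/sent.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x                    # 64 numbers instead of 256

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse signal."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
err = np.linalg.norm(x - x_hat)
print(err)  # near zero: the full signal is recovered from 1/4 of the samples
```

The 4x reduction here is only a demonstration; the same principle lets a sensor front end transmit far fewer samples than the raw 3D data stream while the signal remains recoverable.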