The fusion of dynamic 2D/3D multi-sensor data has not been extensively explored in the vision community. The MSF 2014 workshop will encourage interdisciplinary interaction and collaboration among the computer vision, remote sensing, robotics, and photogrammetry communities, and will serve as a forum for research groups from academia and industry. There is an ever-increasing amount of multi-sensor data collection, e.g. the KITTI benchmark (stereo + laser), from platforms such as autonomous vehicles, surveillance cameras, UAVs, aircraft, and satellites. With its emphasis on multi-sensor data, we hope this workshop will foster new research directions in the computer vision community.
The event is organized jointly with ISPRS WG III/3 "Image Sequence Analysis".
Submissions are invited from all areas of computer vision and image analysis relevant to, or applied to, scene understanding. The workshop will focus on the fusion of multi-sensor dynamic spatial information from stereo sequences, visual and infrared sequences, video and lidar sequences, stereo and laser sequences, etc. Submissions on indoor applications are also welcome.
Topics of interest include, but are not limited to:
- Object detection and tracking
- Motion segmentation
- Image sequence registration
- Dynamic scene understanding
- Security/surveillance
- Vision-based robot/drone navigation
- Multi-modal fusion of sensory information
- Multi-scale fusion
- Low-level processing of different sensors
- 3D scanning sensors, laser and lidar systems
- 3D object recognition and classification
- Large-scale issues
- Simultaneous localization and mapping
All manuscripts will be subject to a double-blind review process. The proceedings will be published by IEEE as part of the CVPR 2014 DVD proceedings and will be available in IEEE Xplore.