Video Inertial Multiple People Tracking Dataset (VIMPT2019)


Video-based multiple people tracking has been a very active research area for decades. Yet (partial) occlusions, motion model assumptions, and image ambiguities make it very hard to accurately track all persons. Recent approaches have shown that adding local motion measurements from Inertial Measurement Units (IMUs) helps to disambiguate different persons and to improve tracking accuracy. As a complementary data source, inertial sensors allow persons to be tracked accurately even under fast and abrupt motions, or in case of full occlusion.

Simultaneously performing multiple people tracking and assigning each trajectory to the corresponding body-worn IMU device is termed Video Inertial Multiple People Tracking (VIMPT).


We anticipate that the release of the first available VIMPT dataset will initiate a new and fruitful research direction, which might also deliver insights into video-based multiple people tracking.

The VIMPT2019 dataset consists of 7 sequences (6 soccer sequences, 1 outdoor sequence).

For each sequence we provide:

  • video data: sequences obtained from a static, calibrated RGB camera.
  • IMU data: orientation and acceleration data of 8 IMUs.
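As an illustration of how such per-sensor recordings might be consumed, here is a minimal loading sketch in Python. The CSV layout below (timestamp, orientation quaternion, acceleration) is an assumption for demonstration only; the actual file format of VIMPT2019 may differ, so check the files included in the download.

```python
import csv
import io

def load_imu_csv(text):
    """Parse a hypothetical IMU CSV (the real VIMPT2019 layout may differ):
    timestamp, qw, qx, qy, qz, ax, ay, az
    i.e. an orientation quaternion plus a 3D acceleration per sample.
    Returns a list of dicts, one per sample."""
    samples = []
    reader = csv.DictReader(io.StringIO(text))
    for row in reader:
        samples.append({
            "t": float(row["timestamp"]),
            "quat": tuple(float(row[k]) for k in ("qw", "qx", "qy", "qz")),
            "acc": tuple(float(row[k]) for k in ("ax", "ay", "az")),
        })
    return samples

# Two synthetic samples (not real dataset values):
demo = """timestamp,qw,qx,qy,qz,ax,ay,az
0.00,1,0,0,0,0.0,0.0,9.81
0.01,1,0,0,0,0.1,0.0,9.80
"""
imu = load_imu_csv(demo)
print(len(imu), imu[0]["acc"])  # → 2 (0.0, 0.0, 9.81)
```

With 8 IMUs per sequence, one such list would be loaded per device and associated with a person trajectory during tracking.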


The VIMPT2019 dataset used in the TIP article is freely available for your own tests and experiments. However, it is restricted to research purposes only. If you use this data, please acknowledge the effort that went into data collection by citing the corresponding article Accurate Long-Term Multiple People Tracking using Video and Body-Worn IMUs (BibTeX).

The full dataset can be downloaded as a zip file: VIMPT2019_V1.zip.