SFU-Store-Nav

Updates

5/24/2021 - Added the SFU-Store-Nav 3D Virtual Human Platform

9/21/2020 - Initial data uploaded! Total video time: 28.08 hrs.

8/13/2020 - SFU-Store-Nav website up!

About the SFU-Store-Nav dataset

SFU-Store-Nav is a dataset collected in a set of experiments involving human participants and a robot. The experiments were conducted in the Computing Science robotics lab at Simon Fraser University, Burnaby, BC, Canada, with the aim of gathering data containing common gestures, movements, and other behaviours that may indicate humans' navigational intent, which is relevant to autonomous robot navigation.


Experiment Setup

The experiment simulates a shopping scenario in which human participants come in to pick up items from their shopping lists and interact with a Pepper robot programmed to help them. Four webcams were placed at the corners of the lab to capture visual data of the room. A Pepper robot was placed in the lab to interact with the participants and record visual data through its own camera. The participants were asked to wear a helmet with motion capture markers so that the Vicon motion capture system could track their position and head orientation.


Data Description

Type of data:

  • Video data, in AVI file format (see the loading sketch after this list).

  • Vicon motion tracking data, in CSV file format.
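
As a rough sketch of how the video files can be read (assuming OpenCV is installed; the file name webcam_1.avi is a placeholder, since actual names depend on the camera and recording session):

import cv2  # assumes the opencv-python package is installed

# Placeholder file name; actual AVI names depend on the camera and session.
cap = cv2.VideoCapture("webcam_1.avi")
frame_count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the recording
    frame_count += 1
cap.release()
print(f"Read {frame_count} frames")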


Sample Data

Our motion capture data consists of two parts:

  1. Head position

  2. Head orientation

For example, head position data are stored in vicon_hat_4_hat_4_translation.csv:

Time | X | Y | Z
0 | 0.3928178498 | 0.7167165493 | 1.847175367

Head orientation data are stored in vicon_hat_4_hat_4_orientation.csv:

Time | Roll | Pitch | Yaw
0 | -0.12083437270945468 | 0.02716215219414442 | 0.7919105034594937
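
As a minimal sketch of how the two motion capture files can be loaded and aligned (assuming pandas is installed and that both files share the Time column shown above; the column comments only restate what the samples show):

import pandas as pd

# Head position samples: Time, X, Y, Z (one row per motion capture timestamp).
position = pd.read_csv("vicon_hat_4_hat_4_translation.csv")

# Head orientation samples: Time, Roll, Pitch, Yaw.
orientation = pd.read_csv("vicon_hat_4_hat_4_orientation.csv")

# Join the two streams on the shared Time column to get one pose row per timestamp.
pose = position.merge(orientation, on="Time")
print(pose.head())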

About the SFU-Store-Nav 3D Virtual Human Platform

The original SFU-Store-Nav dataset scene was re-created in Blender, and the 3D body shape and pose estimations were combined with motion capture data to create virtual humans that interact with the environment as in the original experiment. This virtual human platform aims to provide a safe, quick, and inexpensive way to test human intent inference and robot navigation systems.


For more information, please refer to the GitHub repo.

Download

The SFU-Store-Nav Dataset

Our data is publicly available for research purposes. We kindly ask you to fill out an access form before downloading the data: click the 'Download Dataset Here' button to fill out the form, and you will then be automatically redirected to the data download page.

The SFU-Store-Nav 3D Virtual Human Platform

The animations can be downloaded here.

Citation

@article{zhang2020sfu,
  title={SFU-store-nav: A multimodal dataset for indoor human navigation},
  author={Zhang, Zhitian and Rhim, Jimin and TaherAhmadi, Mahdi and Yang, Kefan and Lim, Angelica and Chen, Mo},
  journal={Data in Brief},
  volume={33},
  pages={106539},
  year={2020},
  publisher={Elsevier}
}

Acknowledgement

This work is supported by the SFU Rosie Lab, the SFU Mars Lab, and the SFU-Huawei Visual Computing Joint Lab.

Contact

If you have any questions about the dataset, or any suggestions, please contact zhitianz at sfu dot ca.