Visual attention for point clouds in VR

Modelling human visual attention is of great importance in the field of computer vision and has been widely explored for 3D imaging. To assess whether model predictions align with actual human viewing behavior, ground truth data are required; for this purpose, eye-tracking experiments are typically conducted. In this study [1], we extend the state of the art by tracking the visual attention of observers in an immersive virtual reality experience with 6 degrees of freedom, using a head-mounted display. In this environment, users interacted with 3D point cloud models following a task-dependent protocol, and their gaze and head trajectories were recorded. The logged information was processed to obtain fixation density maps for the models under inspection.
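
To illustrate the kind of post-processing this involves, the sketch below shows one way per-point fixation density could be accumulated from logged gaze hit points using a Gaussian kernel. This is not the exact pipeline of [1]; the file names, data layout, and kernel bandwidth are assumptions for illustration only.

```python
# Minimal sketch (not the authors' exact pipeline): accumulate per-point
# fixation density from logged gaze fixation locations with a Gaussian kernel.
# File names, array layouts, and the bandwidth below are assumptions.
import numpy as np

def fixation_density(points, fixations, sigma=0.02):
    """points: (N, 3) point cloud coordinates; fixations: (M, 3) gaze hit
    points in the same coordinate frame; sigma: Gaussian bandwidth in model
    units. Returns per-point weights normalised to [0, 1]."""
    density = np.zeros(len(points))
    for f in fixations:
        d2 = np.sum((points - f) ** 2, axis=1)       # squared distances to f
        density += np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian splat
    if density.max() > 0:
        density /= density.max()                     # normalise to [0, 1]
    return density

# Hypothetical usage with plain-text dumps of the model and the fixations:
# points = np.loadtxt("model_xyz.txt")
# fixations = np.loadtxt("fixations_xyz.txt")
# weights = fixation_density(points, fixations)
```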

In this dataset, we make publicly available the tracked behavioural data, post-processing results, saliency maps in the form of importance weights, a redistribution of a subset of the contents, scripts to generate the exact versions of the point clouds used in the study, and usage examples.
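
For a first look at the importance weights, a minimal sketch is given below. It assumes one scalar weight per point, stored in the same order as the corresponding point cloud, and uses placeholder file names; the actual layout is described in the README.

```python
# Minimal sketch for inspecting per-point importance weights, assuming one
# scalar per point in the same order as the point cloud file; the file names
# are placeholders (see the README for the actual structure).
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("model.ply")
weights = np.loadtxt("model_importance.txt")

# Map weights to a simple blue (low) to red (high) colour ramp.
rng = weights.max() - weights.min()
w = (weights - weights.min()) / rng if rng > 0 else np.zeros_like(weights)
colors = np.stack([w, np.zeros_like(w), 1.0 - w], axis=1)
pcd.colors = o3d.utility.Vector3dVector(colors)
o3d.visualization.draw_geometries([pcd])
```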

Download

You can download the material from the following FTP address, using the provided credentials. Please use a dedicated FTP client, such as FileZilla or FireFTP:

FTP address: tremplin.epfl.ch
User name: pc_visual_attention@grebvm2.epfl.ch
Password: pc_visual_attention

The total size of the dataset is about 750 MB.

Please read the README file carefully for a detailed description of the structure of the provided material and how to use it. For further information regarding the experiment and the post-processing methodologies applied to the acquired data, please refer to [1].

Conditions of use

If you wish to use any of the provided material in your research, we kindly ask you to cite [1].

Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute the data provided and its documentation for research purposes only. The data provided may not be commercially distributed. In no event shall the École Polytechnique Fédérale de Lausanne (EPFL) be liable to any party for direct, indirect, special, incidental, or consequential damages arising out of the use of the data and its documentation. The École Polytechnique Fédérale de Lausanne (EPFL) specifically disclaims any warranties. The data provided hereunder is on an “as is” basis and the École Polytechnique Fédérale de Lausanne (EPFL) has no obligation to provide maintenance, support, updates, enhancements, or modifications.

References

[1] Evangelos Alexiou, Peisen Xu, and Touradj Ebrahimi, “Towards modelling of visual saliency in point clouds for immersive applications,” in 26th IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, September 2019.