EyeC3D: 3D video eye tracking dataset

Introduction

Understanding visual attention in 3DTV is essential for many applications, e.g., capture, coding, visual comfort enhancement, 2D-to-3D conversion, retargeting, and subtitling. Such research requires public datasets of 3D content with associated ground truth eye tracking data. To overcome the lack of publicly available 3D video eye tracking datasets, we created the EyeC3D dataset. Eye movement data was recorded for eight stereoscopic video sequences in a set of subjective experiments, and fixation density maps (FDMs) were computed from the eye movement data for each frame of the sequences.

Stereoscopic video sequences

Eight stereoscopic video sequences were used in the eye tracking experiments. Five sequences (Boxers, Hall, Lab, News report, and Phone call) were obtained from the NAMA3DS1 database [1]. Two sequences (Musicians and Poker) were obtained from the European FP7 research project MUSCADE [2]. The Poznan Hall2 sequence was obtained from the Poznan multiview video database [3].

[1] M. Urvoy, M. Barkowsky, R. Cousseau, Y. Koudota, V. Ricorde, P. Le Callet, J. Gutierrez, and N. Garcia, “NAMA3DS1-COSPAD1: Subjective video quality assessment database on coding conditions introducing freely available high quality 3D stereoscopic sequences,” in Fourth International Workshop on Quality of Multimedia Experience (QoMEX), July 2012, pp. 109–114.

[2] ISO/IEC JTC1/SC29/WG11, “Proposed Stereo Test Sequences for 3D Video Coding,” Doc. M23703, February 2012.

[3] ISO/IEC JTC1/SC29/WG11, “Poznan Multiview Video Test Sequences and Camera Parameters,” Doc. M17050, October 2009.

Eye tracking experiments

The eye tracking experiments were conducted at the MMSPG test laboratory. The laboratory environment and viewing conditions were set up to fulfill Recommendation ITU-R BT.2021 [4]. The following table summarizes the experimental conditions.

Category             Details                 Specification
Participants         Number (M/F)            21 (16/5)
                     Age range (Ave.)        18–31 (21.8)
                     Screening               Snellen chart, Ishihara chart, and Randot test
Viewing conditions   Environment             Laboratory
                     Illumination            Low
                     Color temperature       6500 [K]
                     Viewing distance        1.8 [m]
                     Task                    Free-viewing
Display              Manufacturer            Hyundai
                     Model                   S465D
                     Type                    Polarized LCD
                     Size                    46 [inch]
                     Resolution              1920 × 1080 [pixels]
                     Angular resolution      60 [pixel/degree]
Display calibration  Probe                   X-Rite i1Display Pro
                     Profile                 D65 white point, 120 [cd/m²] brightness, minimum black level
Eye tracker          Manufacturer            Smart Eye
                     Model                   Smart Eye Pro 5.8
                     Mounting position       1.28 [m] from the display
                     Sampling frequency      60 [Hz]
                     Accuracy                < 0.5 [degree]
                     Calibration points      4 points on screen
Video presentation   Presentation order      Random
                     Presentation time       10 [s]
                     Repetitions             2
                     Grey-screen duration    2 [s]
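
As a sanity check, the angular resolution of 60 pixels per degree follows from the display geometry and viewing distance above. A minimal computation, assuming a 16:9 panel:

    import math

    # 46-inch 16:9 panel at 1920 x 1080 pixels
    diagonal_m = 46 * 0.0254
    width_m = diagonal_m * 16 / math.hypot(16, 9)   # ~1.018 m
    pixel_pitch_m = width_m / 1920                  # ~0.53 mm per pixel

    # Length subtended by 1 degree of visual angle at 1.8 m viewing distance
    one_degree_m = 2 * 1.8 * math.tan(math.radians(0.5))  # ~31.4 mm

    print(one_degree_m / pixel_pitch_m)             # ~59.3, i.e. ~60 pixels/degree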

[4] ITU-R BT.2021, “Subjective methods for the assessment of stereoscopic 3DTV systems,” ITU, August 2012.

Fixation density maps

All detected saccades and blinks were excluded from the eye movement data, and only the gaze points classified as fixation points were used. For each frame of a video sequence, the corresponding fixation points were processed as follows. First, the right-eye fixation points were shifted horizontally according to the right-to-left disparity map. Then, these points were combined with the left-eye fixation points and filtered with a Gaussian kernel to account for eye tracking inaccuracies and for the falloff of visual sensitivity with distance from the fovea. The standard deviation of the Gaussian filter was set to 1 degree of visual angle, which corresponds to 60 pixels in our experiments, based on the assumption that the fovea of the human eye covers approximately 2 degrees of visual angle [5]. As a result, for each frame of a stereoscopic video sequence, a single FDM, corresponding to the left view, was produced from both the left and right eye movements.

[5] U. Engelke, A. Maeder, and H. Zepernick, “Visual attention modelling for subjective image quality databases,” in IEEE International Workshop on Multimedia Signal Processing (MMSP), October 2009, pp. 1–6.
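
For reference, the following is a minimal sketch of this procedure in Python (NumPy/SciPy). The function name, the fixation point format, the sign convention of the disparity shift, and the final normalization are illustrative assumptions, not the exact tool used to produce the released FDMs:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def compute_fdm(left_fix, right_fix, disparity, width=1920, height=1080,
                    sigma_px=60.0):
        """Sketch of an FDM for the left view of one frame.

        left_fix, right_fix: iterables of (x, y) fixation points in pixels.
        disparity: right-to-left disparity map, shape (height, width).
        sigma_px: 1 degree of visual angle, i.e. 60 pixels in this setup.
        """
        fdm = np.zeros((height, width))
        # Accumulate the left-eye fixation points directly.
        for x, y in left_fix:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < width and 0 <= yi < height:
                fdm[yi, xi] += 1.0
        # Shift the right-eye fixation points horizontally into the left
        # view (the sign of the shift depends on the disparity convention).
        for x, y in right_fix:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < width and 0 <= yi < height:
                xs = xi + int(round(disparity[yi, xi]))
                if 0 <= xs < width:
                    fdm[yi, xs] += 1.0
        # Gaussian smoothing accounts for eye tracker inaccuracy and the
        # falloff of visual sensitivity with distance from the fovea.
        fdm = gaussian_filter(fdm, sigma=sigma_px)
        # Normalizing to [0, 1] is an assumption made here for convenience.
        if fdm.max() > 0:
            fdm /= fdm.max()
        return fdm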

Download

You can download all fixation point lists and fixation density maps from the following FTP server (please use a dedicated FTP client, such as FileZilla or FireFTP):

FTP address: tremplin.epfl.ch
Username: [email protected]
Password: ohsh9jah4T
FTP port: 21

After you connect, choose the EyeC3D folder on the remote site and download the relevant material. The total size of the provided data is ~3.9 GB.
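
For scripted access, a minimal sketch using Python's standard ftplib, with the connection details listed above. The file layout inside the EyeC3D folder is not listed on this page, so the sketch only lists the directory; the file name in the download example is hypothetical:

    from ftplib import FTP

    # Connect with the credentials listed above (port 21 is the default).
    ftp = FTP("tremplin.epfl.ch")
    ftp.login(user="[email protected]", passwd="ohsh9jah4T")
    ftp.cwd("EyeC3D")

    # Inspect the remote folder to see the available files.
    for name in ftp.nlst():
        print(name)

    # To download an entry (directories would need recursion), e.g.:
    # with open("some_file.zip", "wb") as f:
    #     ftp.retrbinary("RETR some_file.zip", f.write)

    ftp.quit()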

If you use the EyeC3D dataset in your research, we kindly ask you to cite the following paper and the URL of this website:

P. Hanhart and T. Ebrahimi, “EyeC3D: 3D video eye tracking dataset,” in Sixth International Workshop on Quality of Multimedia Experience (QoMEX), Singapore, September 2014.

URL: http://www.epfl.ch/labs/mmspg/eyec3d

The paper above also provides further details on the dataset and the experiments.

In case of any problems or questions, please send an email to [email protected]

Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute the data provided and its documentation for research purposes only. The data provided may not be commercially distributed. In no event shall the Ecole Polytechnique Fédérale de Lausanne (EPFL) be liable to any party for direct, indirect, special, incidental, or consequential damages arising out of the use of the data and its documentation. The Ecole Polytechnique Fédérale de Lausanne (EPFL) specifically disclaims any warranties. The data provided hereunder is on an “as is” basis and the Ecole Polytechnique Fédérale de Lausanne (EPFL) has no obligation to provide maintenance, support, updates, enhancements, or modifications.