Geometry-only point cloud data set

Point cloud imaging is one of the most promising technologies for 3D content representation. However, subjective and objective quality assessment for this type of visual data is still an open challenge. To tackle the problem, a series of studies [1-5] was conducted to investigate the performance of state-of-the-art objective quality metrics and to propose new subjective and objective evaluation methodologies. In these efforts, a representative set of five geometry-only point clouds was used. The selected models were degraded using two radically different types of distortions, and the generated stimuli were assessed in two experimental setups [1,2].

In this dataset, we make publicly available the reference models, the degraded stimuli, and the subjective quality scores that were collected in our experiments.

Content selection

Bunny and dragon were selected from the Stanford 3D Scanning Repository to represent contents with regular geometry and low levels of noise. Cube and sphere were artificially generated using mathematical formulas and represent synthetic contents with highly regular geometry. Finally, vase is a 3D model manually captured using the Intel RealSense R200 camera, and constitutes a representative point cloud with the irregular structure that can be acquired by low-cost depth sensors.

The target application of such contents involves scenarios where users may visualize point clouds from the outside and interact by either rotating or moving around them. These use cases typically occur when simple objects are scanned by sensors that provide, either directly or indirectly, a cloud of points to represent their 3D shapes. To form a representative dataset, the contents were selected considering the following properties:

  1. Simplicity, as it would have been difficult for subjects to clearly perceive a complex scene in the absence of texture. Although all contents are simple, their complexity still covers a reasonable range.
  2. Diversity of geometric structure, as different artifacts may be observed when different types of degradations are applied. Thus, the testing contents were generated by different means.
  3. Similarity of point density, as the visual quality of point clouds is affected by the number of points used to represent an object. The contents were also scaled to fit in a bounding box of size 1 (a minimal normalization sketch follows this list).
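
For concreteness, the following is a minimal sketch of this unit-bounding-box normalization, assuming an (N, 3) numpy array of coordinates; the function name is ours, and the exact scaling applied to the dataset contents may differ in detail (e.g., centering instead of min-corner alignment).

    import numpy as np

    def fit_to_unit_bbox(points: np.ndarray) -> np.ndarray:
        """Translate and uniformly scale an (N, 3) point array so that
        it fits inside an axis-aligned bounding box of size 1."""
        mins = points.min(axis=0)
        extent = (points.max(axis=0) - mins).max()  # longest side of the original bbox
        return (points - mins) / extent             # uniform scaling preserves proportions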

Types of degradation

Two different types of degradations were considered: (a) Octree-pruning and (b) Gaussian noise. The first type of degradation is obtained by removing points after setting a desirable Level of Details (LoD) value in the Octree that encloses the content; thus, a structural loss with point removal and displacement is observed. The LoD is set appropriately for each content to achieve target percentages (ρ) of remaining points with respect to the original number of points, allowing an acceptable deviation of ±2% (ρ = {90%, 70%, 50%, 30%}). The second type of distortion is used to simulate position errors, with the coordinates of every point of the content being modified in every dimension following a target standard deviation (σ = {0.0005, 0.002, 0.008, 0.016}). For more details, the reader can refer to [1].
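
As an illustration, the following minimal sketch shows one plausible implementation of the two degradations, assuming contents already scaled to a unit bounding box and approximating Octree-pruning by collapsing all points inside each occupied leaf at the chosen LoD to the leaf center; the function names are ours, and the exact procedure of [1] may differ.

    import numpy as np

    def gaussian_noise(points: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
        """Position errors: shift every coordinate of every point by
        zero-mean Gaussian noise with standard deviation sigma."""
        rng = np.random.default_rng(seed)
        return points + rng.normal(0.0, sigma, size=points.shape)

    def octree_prune(points: np.ndarray, depth: int) -> np.ndarray:
        """Structural loss: partition the unit bounding box into an octree
        of the given depth (LoD) and keep one representative point per
        occupied leaf (its center), removing and displacing points."""
        n = 2 ** depth                                 # leaf cells per axis
        cells = np.clip((points * n).astype(np.int64), 0, n - 1)
        occupied = np.unique(cells, axis=0)            # one entry per occupied leaf
        return (occupied + 0.5) / n                    # leaf centers

In this sketch, the depth would be tuned per content so that the number of remaining points reaches the target ρ within the allowed ±2%.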

Download

At the provided URL, the set of point cloud models that has been extensively used in our studies can be found, along with the subjective scores that were collected in our experimental setups [1,2].

Contents

These contents were used in [1-5]. Both the reference and the distorted contents are given in PLY format using ASCII representation. The naming convention for the reference models is: contentName.ply, while for the distorted models it is: contentName_degradationType_degradationLevel.ply (a small parsing sketch is given after the list). Specifically:

  • contentName = {bunny, cube, dragon, sphere, vase}
  • degradationType = {D01, D02}, with D01 indicating Octree-pruning and D02 indicating Gaussian noise
  • degradationLevel = {L01, L02, L03, L04}, with increasing numbers indicating higher levels of degradation. For instance, L01 and L04 for D01 correspond to 90% and 30% of remaining points, respectively, while L01 and L04 for D02 correspond to standard deviations of 0.0005 and 0.016, respectively
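
Since the convention is mechanical, file names can be decoded programmatically. Below is a small, hypothetical Python helper for parsing them and mapping levels to the distortion parameters listed above; the regular expression and names are ours, not part of the dataset (the _hidden suffix used for reference models in the subjective-score files, described below, is also handled).

    import re

    # Degradation parameters per level, as listed above.
    RHO   = {"L01": 0.90, "L02": 0.70, "L03": 0.50, "L04": 0.30}       # D01: remaining points
    SIGMA = {"L01": 0.0005, "L02": 0.002, "L03": 0.008, "L04": 0.016}  # D02: noise std

    # Matches reference files (bunny.ply) as well as degraded and
    # hidden-reference files (bunny_D01_L02.ply, bunny_D01_hidden.ply).
    NAME = re.compile(r"(?P<content>bunny|cube|dragon|sphere|vase)"
                      r"(?:_(?P<dtype>D0[12])_(?P<dlevel>L0[1-4]|hidden))?\.ply")

    def parse_name(filename: str) -> dict:
        match = NAME.fullmatch(filename)
        if match is None:
            raise ValueError(f"unrecognized file name: {filename}")
        return match.groupdict()

For example, parse_name("vase_D02_L04.ply") returns {'content': 'vase', 'dtype': 'D02', 'dlevel': 'L04'}, which maps to σ = 0.016 via SIGMA.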

Subjective scores

  • Desktop setup: These scores were collected from subjective evaluations in a desktop setup, conducted in the framework of [1]. In this experiment, both the ACR and DSIS subjective evaluation methodologies were adopted. The two sets of scores can be distinguished by the suffix of the corresponding files. The naming convention for the reference models in these files is: contentName_degradationType_hidden.ply, while for the distorted models it is: contentName_degradationType_degradationLevel.ply. For further details, the reader can refer to [1]. It should be noted that the same set of scores was additionally used in [3-5].
  • HMD AR setup: These scores were collected from subjective evaluations in an Augmented Reality (AR) scenario using a head-mounted display (HMD), conducted in the framework of [2]. In this experiment, the DSIS subjective evaluation methodology was adopted. The same naming convention as in the desktop setup is followed for the models. For further details, the reader can refer to [2]. It should be noted that the same set of scores was additionally used in [3].

Conditions of use

  • If you wish to use the provided contents or the subjective scores from the desktop setup in your research, we kindly ask you to cite [1].
  • If you wish to use the subjective scores from the HMD AR setup in your research, we kindly ask you to cite [2].

Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute the data provided and its documentation for research purposes only. The data provided may not be commercially distributed. In no event shall the École Polytechnique Fédérale de Lausanne (EPFL) be liable to any party for direct, indirect, special, incidental, or consequential damages arising out of the use of the data and its documentation. The École Polytechnique Fédérale de Lausanne (EPFL) specifically disclaims any warranties. The data provided hereunder is on an “as is” basis, and the École Polytechnique Fédérale de Lausanne (EPFL) has no obligation to provide maintenance, support, updates, enhancements, or modifications.

References

  1. E. Alexiou and T. Ebrahimi, “On the performance of metrics to predict quality in point cloud representations,” SPIE Optical Engineering + Applications, Applications of Digital Image Processing XL, San Diego, USA, 2017
  2. E. Alexiou, E. Upenik and T. Ebrahimi, “Towards subjective quality assessment of point cloud imaging in augmented reality,” 19th International Workshop on Multimedia Signal Processing (MMSP), Luton, United Kingdom, 2017
  3. E. Alexiou and T. Ebrahimi, “Impact of visualization strategy for subjective quality assessment of point clouds,” International Conference on Multimedia and Expo Workshops (ICMEW), San Diego, USA, 2018
  4. E. Alexiou and T. Ebrahimi, “Benchmarking of objective quality metrics for colorless point clouds,” Picture Coding Symposium (PCS), San Francisco, USA, 2018
  5. E. Alexiou and T. Ebrahimi, “Point cloud quality assessment metric based on angular similarity,” International Conference on Multimedia and Expo (ICME), San Diego, USA, 2018