Geometry-only point cloud data set

Point cloud imaging is one of the most promising technologies for 3D content representation. Despite the significant interest it has recently drawn, subjective and objective quality assessment for this type of media content is still an open challenge. Aiming to investigate the performance of state-of-the-art objective tools and to propose various subjective evaluation protocols, a representative data set of five geometry-only point cloud contents was selected for use in our studies.


Bunny and dragon are selected from the Stanford 3D Scanning Repository to represent contents with regular geometry and a low amount of noise. Cube and sphere are artificially generated from mathematical formulas and represent synthetic contents with highly regular geometry. Finally, vase is a 3D model manually captured using the Intel RealSense R200 camera, and constitutes a representative point cloud with the irregular structure that can be acquired by low-cost depth sensors.

The target application of such contents involves scenarios where the users may visualize point clouds from the outside and interact by either rotating or moving around them. These use cases typically occur when simple objects are scanned by sensors that provide, either directly or indirectly, a cloud of points to represent their 3D shapes. To form a representative data set, the contents were selected considering the following properties:

  1. Simplicity, as it would have been difficult for subjects to clearly perceive a complex scene in the absence of texture. Although the contents are simple, their complexity still covers a reasonable range.
  2. Diversity of geometric structure, as different artifacts may be observed when applying different types of degradations. Thus, the test contents were generated by different means.
  3. Similarity of point density, as the visual quality of point clouds is directly affected by the number of points used to represent an object. The contents were also scaled to fit in a bounding box of size 1.
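The scaling mentioned in the last property can be sketched as follows. This is a minimal illustration, not part of the data set's own tooling; it assumes the point cloud is held as an N×3 numpy array, and the helper name is hypothetical.

```python
import numpy as np

def scale_to_unit_bbox(points):
    """Uniformly scale a point cloud so its bounding box fits in a unit
    cube, preserving the aspect ratio (hypothetical helper, shown only
    to illustrate the normalization described above)."""
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins
    scale = 1.0 / extent.max()          # the largest side becomes 1
    return (points - mins) * scale

# Example: arbitrary points end up inside the unit bounding box
pts = np.random.rand(1000, 3) * 7.0
scaled = scale_to_unit_bbox(pts)
assert scaled.min() >= 0.0 and scaled.max() <= 1.0
```

Scaling all contents to the same bounding box keeps the degradation parameters (e.g. a noise standard deviation) comparable across contents.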

Types of degradation

In these studies, two different types of degradations were considered: (a) Octree-pruning and (b) Gaussian noise. The first type of degradation is obtained by removing points after setting a desirable Level of Details (LoD) value in the Octree that encloses the content; thus, a structural loss with point removal and displacement is observed. The LoD is set appropriately for each content to achieve target percentages (ρ) of the original number of points, allowing an acceptable deviation of ±2% (ρ = {90%, 70%, 50%, 30%}). The second type of distortion is used to simulate position errors, with the coordinates of every point of the content being modified in every dimension following a target standard deviation (σ = {0.0005, 0.002, 0.008, 0.016}). For more information, the reader may refer to [1].
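The two degradations can be sketched as below, assuming points in an N×3 numpy array. The Gaussian-noise step follows the description above directly; the pruning function, however, is only a uniform random decimation used as a stand-in for the LoD-based Octree pruning, which additionally displaces the surviving points. Both helper names are hypothetical.

```python
import numpy as np

def add_gaussian_noise(points, sigma, rng=None):
    """Simulate position errors: displace every coordinate of every
    point by zero-mean Gaussian noise with standard deviation sigma
    (e.g. sigma = 0.0005 for the lowest level)."""
    rng = np.random.default_rng() if rng is None else rng
    return points + rng.normal(0.0, sigma, size=points.shape)

def prune_to_percentage(points, rho, rng=None):
    """Keep roughly rho percent of the points. NOTE: this uniform
    random decimation only approximates the target point count; the
    actual data set uses LoD-based Octree pruning, which removes and
    displaces points structurally."""
    rng = np.random.default_rng() if rng is None else rng
    keep = int(round(len(points) * rho / 100.0))
    idx = rng.choice(len(points), size=keep, replace=False)
    return points[idx]
```

For example, `prune_to_percentage(pts, 30)` keeps about 30% of the points, matching the most severe pruning level, while `add_gaussian_noise(pts, 0.016)` matches the most severe noise level.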


In the provided URL link you can find the whole data set of point clouds used in [1-4]; part of this data set was also used in other studies, such as [5]. Both the reference and the distorted contents are given in PLY format using ASCII representation. The naming convention for the reference contents is contentName.ply, while for the distorted models it is contentName_degradationType_degradationLevel.ply. Specifically:

  • contentName = {bunny, cube, dragon, sphere, vase}
  • degradationType = {D01, D02}, with D01 indicating Octree-pruning and D02 indicating Gaussian noise
  • degradationLevel = {L01, L02, L03, L04}, with increasing numbers indicating higher levels of degradation. For instance, L01 and L04 for D01 correspond to 90% and 30% of remaining points, respectively. L01 and L04 for D02 correspond to 0.0005 and 0.016 standard deviation, respectively
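The naming convention above can be decoded mechanically. The sketch below is a hypothetical helper, not part of the released data set; the level-to-parameter tables mirror the values listed in the bullets.

```python
import re

# Degradation levels per the convention above (hypothetical lookup tables)
PRUNING_RHO = {"L01": 90, "L02": 70, "L03": 50, "L04": 30}       # D01, % points
NOISE_SIGMA = {"L01": 0.0005, "L02": 0.002, "L03": 0.008, "L04": 0.016}  # D02

def parse_name(filename):
    """Split e.g. 'bunny_D01_L03.ply' into its components; a bare
    'bunny.ply' is treated as a reference content."""
    m = re.fullmatch(r"(\w+?)(?:_(D0[12])_(L0[1-4]))?\.ply", filename)
    if m is None:
        raise ValueError(f"unexpected file name: {filename}")
    content, dtype, level = m.groups()
    if dtype is None:
        return {"content": content, "reference": True}
    param = PRUNING_RHO[level] if dtype == "D01" else NOISE_SIGMA[level]
    return {"content": content, "reference": False,
            "degradation": dtype, "level": level, "parameter": param}
```

For example, `parse_name("vase_D02_L04.ply")` reports Gaussian noise with σ = 0.016, and `parse_name("bunny.ply")` flags the file as a reference content.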


If you use our data set in your research, please cite [1].