Optimized Dynamic Point Cloud Compression OPT-PCC: Report on experimental results




Technical Report

Peer reviewed



Point clouds are representations of three-dimensional (3D) objects in the form of a sample of points on their surface. Point clouds are receiving increased attention from academia and industry due to their potential for many important applications, such as real-time 3D immersive telepresence, automotive and robotic navigation, and medical imaging. Compared to traditional video technology, point cloud systems allow free-viewpoint rendering, as well as the mixing of natural and synthetic objects. However, this improved user experience comes at the cost of increased storage and bandwidth requirements, as point clouds are typically represented by the geometry and colour (texture) of millions to billions of 3D points. For this reason, major efforts are being made to develop efficient point cloud compression schemes. The task is very challenging, however, especially for dynamic point clouds (sequences of point clouds), due to the irregular structure of point clouds: the number of 3D points may change from frame to frame, and the points within each frame are not uniformly distributed in 3D space.

To standardize point cloud compression (PCC) technologies, the Moving Picture Experts Group (MPEG) launched a call for proposals in 2017. As a result, three point cloud compression technologies were developed: surface point cloud compression (S-PCC) for static point cloud data, video-based point cloud compression (V-PCC) for dynamic content, and LIDAR point cloud compression (L-PCC) for dynamically acquired point clouds. Later, L-PCC and S-PCC were merged under the name geometry-based point cloud compression (G-PCC).

The aim of the OPT-PCC project is to develop algorithms that optimise the rate-distortion performance of V-PCC, that is, minimize the reconstruction error (distortion) for a given bit budget. The objectives of the project are to:

O1: build analytical models that accurately describe the effect of the geometry and colour quantization of a point cloud on the bit rate and distortion; 
O2: use O1 to develop fast search algorithms that optimise the allocation of the available bit budget between the geometry information and colour information; 
O3: implement a compression scheme for dynamic point clouds that exploits O2 to outperform the state-of-the-art in terms of rate-distortion performance. The target is to reduce the bit rate by at least 20% for the same reconstruction quality;  
O4: provide multi-disciplinary training to the researcher in algorithm design, metaheuristic optimisation, computer graphics, media production, and leadership and management skills. 
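As an illustration of the bit allocation problem behind objectives O1 and O2, the sketch below searches for geometry and colour quantization steps with a small differential evolution (DE/rand/1/bin) loop. The rate and distortion models, their parameters, the bounds, and all DE settings here are hypothetical placeholders for illustration only, not the analytical models or configuration developed in the project.

```python
import random

# Illustrative analytical rate/distortion models. The functional forms and
# parameters are HYPOTHETICAL placeholders, not the project's fitted models:
# rate falls and distortion grows as the geometry (qg) and colour (qc)
# quantization steps increase.
def rate(qg, qc):
    return 8.0 / qg + 12.0 / qc                   # toy bitrate model (Mbps)

def distortion(qg, qc):
    return 0.5 * qg ** 1.2 + 0.8 * qc ** 1.1      # toy distortion model

def cost(x, r_target, penalty=1e3):
    qg, qc = x
    excess = max(0.0, rate(qg, qc) - r_target)
    return distortion(qg, qc) + penalty * excess  # penalise budget overruns

def de_bit_allocation(r_target, bounds=((1.0, 64.0), (1.0, 64.0)),
                      pop_size=20, generations=200, f=0.7, cr=0.9, seed=1):
    """DE/rand/1/bin search for quantization steps (qg, qc)."""
    rng = random.Random(seed)
    lo, hi = [b[0] for b in bounds], [b[1] for b in bounds]
    pop = [[rng.uniform(lo[j], hi[j]) for j in range(2)]
           for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
            j_rand = rng.randrange(2)
            trial = []
            for j in range(2):
                if rng.random() < cr or j == j_rand:
                    v = a[j] + f * (b[j] - c[j])    # mutation
                else:
                    v = pop[i][j]                   # keep parent gene
                trial.append(min(max(v, lo[j]), hi[j]))
            if cost(trial, r_target) < cost(pop[i], r_target):
                pop[i] = trial                      # greedy selection
    return min(pop, key=lambda x: cost(x, r_target))

qg, qc = de_bit_allocation(r_target=5.0)
```

Because the unconstrained distortion minimum would exceed the rate budget, the penalty term drives the returned steps towards a solution whose rate sits close to the target. An encoding-based variant would replace the two model functions with calls to the actual V-PCC encoder.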

This deliverable reports on the work undertaken in this project to achieve objective O3. The bitrates and distortions were computed for the quantization steps obtained as solutions of the optimization problem for a given target bitrate. Section 1 evaluates the rate-distortion performance of the optimization algorithms developed to achieve objective O2 when the dynamic point cloud consists of one group of frames. Section 2 considers the case where the dynamic point cloud consists of two groups of frames. In each case, two algorithms are evaluated: one where the optimization is carried out with differential evolution (DE) applied to analytical models of the rate and distortion functions (the model-based DE solution) and one where the optimization is carried out with DE applied to the actual rate and distortion functions (the encoding-based DE solution).

To assess the performance of a solution, we compute the Bjøntegaard delta (BD) rate and BD distortion with respect to the state-of-the-art method. For the colour distortion, we considered only the luminance component. Moreover, we evaluate the bit allocation accuracy by calculating the bitrate error BE = |R_a − R_T| / R_a × 100%, where R_a and R_T are the actual bitrate computed by the method and the target bitrate, respectively.

Results are reported for six dynamic point clouds (longdress, redandblack, loot, soldier, queen, basketballplayer) and for V-PCC Test Model TMC2 v12.0, which relies on the High Efficiency Video Coding Test Model version 16. The computer codes used to generate the results are available at http://doi.org/10.5281/zenodo.5034575 and https://doi.org/10.5281/zenodo.5211174 for the one-group-of-frames case and at https://doi.org/10.5281/zenodo.5552760 for the two-groups-of-frames case.
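The two evaluation measures mentioned above can be sketched as follows. The bitrate error is the formula given in the text; the BD-rate function follows Bjøntegaard's standard cubic-fit method (fit log-rate against distortion for each codec, integrate over the overlapping distortion range, and convert the average log-rate difference to a percentage). The sample rate/PSNR points are invented for illustration and are not results from this report.

```python
import numpy as np

def bitrate_error(r_actual, r_target):
    """Bit allocation accuracy: BE = |R_a - R_T| / R_a * 100%."""
    return abs(r_actual - r_target) / r_actual * 100.0

def bd_rate(r_anchor, d_anchor, r_test, d_test):
    """Average bitrate change (%) of the test codec versus the anchor at
    equal distortion (Bjoentegaard's cubic-fit method)."""
    p_a = np.polyfit(d_anchor, np.log10(r_anchor), 3)
    p_t = np.polyfit(d_test, np.log10(r_test), 3)
    lo = max(min(d_anchor), min(d_test))   # overlapping distortion range
    hi = min(max(d_anchor), max(d_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10.0 ** avg_diff - 1.0) * 100.0

# Invented example: halving the bitrate at every quality level
# corresponds to a BD-rate of -50%.
psnr = [30.0, 34.0, 38.0, 42.0]
print(bitrate_error(4.9, 5.0))                            # ~ 2.04
print(bd_rate([1, 2, 4, 8], psnr, [0.5, 1, 2, 4], psnr))  # ~ -50
```

A negative BD-rate means the test codec needs less bitrate than the anchor for the same quality, which is how the project's target of at least a 20% bitrate reduction (objective O3) would be expressed.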



Point cloud compression, Video-based point cloud compression, Rate-distortion optimization, Differential evolution


Yuan, H., Hamzaoui, R., Neri, F. and Yang, S. (2021) Optimized Dynamic Point Cloud Compression (OPT-PCC): Report on experimental results. Deliverable D4 of the Optimized Dynamic Point Cloud Compression (OPT-PCC) project.


Research Institute

Institute of Engineering Sciences (IES)