Browsing by Author "Liu, Qi"
Now showing 1 - 7 of 7
Item Embargo
Coarse to fine rate control for region-based 3D point cloud compression (IEEE, 2020-06-09)
Liu, Qi; Yuan, Hui; Hamzaoui, Raouf; Su, Honglei
We modify the video-based point cloud compression standard (V-PCC) by mapping the patches to seven regions and encoding the geometry and color video sequences of each region. We then propose a coarse to fine rate control algorithm for this scheme. The algorithm consists of two major steps. First, we allocate the target bitrate between the geometry and color information. Then, we optimize in turn the geometry and color quantization steps for the video sequences of each region using analytical models for the rate and distortion. Experimental results for eight point clouds showed that the average percent bitrate error of our algorithm is only 3.7%, and its perceptual reconstruction quality is better than that of V-PCC.

Item Open Access
Model-based encoding parameter optimization for 3D point cloud compression (IEEE, 2018-11)
Liu, Qi; Yuan, Hui; Hou, Junhui; Liu, Hao; Hamzaoui, Raouf
Rate-distortion optimal 3D point cloud compression is very challenging due to the irregular structure of 3D point clouds. For a popular 3D point cloud codec that uses octrees for geometry compression and JPEG for color compression, we first find analytical models that describe the relationship between the encoding parameters and the bitrate and distortion, respectively. We then use our models to formulate the rate-distortion optimization problem as a constrained convex optimization problem and apply an interior point method to solve it. Experimental results for six 3D point clouds show that our technique gives similar results to exhaustive search at only about 1.57% of its computational cost.

Item Open Access
Model-based joint bit allocation between geometry and color for video-based 3D point cloud compression (IEEE, 2020)
Liu, Qi; Yuan, Hui; Hou, Junhui; Hamzaoui, Raouf; Su, Honglei
In video-based 3D point cloud compression, the quality of the reconstructed 3D point cloud depends on both the geometry and color distortions. Finding an optimal allocation of the total bitrate between the geometry coder and the color coder is a challenging task due to the large number of possible solutions. To solve this bit allocation problem, we first propose analytical distortion and rate models for the geometry and color information. Using these models, we formulate the joint bit allocation problem as a constrained convex optimization problem and solve it with an interior point method. Experimental results show that the rate-distortion performance of the proposed solution is close to that obtained with exhaustive search but at only 0.66% of its time complexity.
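The two model-based papers above cast encoding-parameter selection as a constrained convex program solved with an interior point method. The sketch below illustrates that formulation with SciPy; the power-law rate and distortion models, their coefficients, and the bit budget are illustrative assumptions, not the models fitted in the papers.

```python
# Minimal sketch of model-based joint bit allocation as a constrained
# convex program, in the spirit of the abstracts above. The power-law
# rate/distortion models, their coefficients, and the bit budget are
# illustrative assumptions, not the models fitted in the papers.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def distortion(q, a, b):   # hypothetical D(q) = a * q**b (convex, increasing)
    return a * q ** b

def rate(q, c, d):         # hypothetical R(q) = c * q**(-d) (convex, decreasing)
    return c * q ** (-d)

R_TARGET = 2.0             # assumed total bit budget (bits per point)

def total_distortion(x):   # x = [geometry step, color step]
    return distortion(x[0], 1.0, 1.2) + distortion(x[1], 0.5, 1.1)

def total_rate(x):
    return rate(x[0], 1.5, 0.9) + rate(x[1], 1.0, 0.8)

# Keep the combined rate under the budget; solve with an interior-point
# style solver (trust-constr), mirroring the papers' solution strategy.
budget = NonlinearConstraint(total_rate, -np.inf, R_TARGET)
res = minimize(total_distortion, x0=[4.0, 4.0], method="trust-constr",
               bounds=[(1.0, 64.0), (1.0, 64.0)], constraints=[budget])
print("optimal (geometry, color) quantization steps:", res.x)
```

The optimum lands on the rate-budget boundary: shrinking either quantization step lowers distortion but raises rate, so the solver trades the two steps off against each other until the budget binds.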
Item Open Access
No-reference Bitstream-layer Model for Perceptual Quality Assessment of V-PCC Encoded Point Clouds (IEEE, 2022-05)
Liu, Qi; Su, Honglei; Chen, Tianxin; Yuan, Hui; Hamzaoui, Raouf
No-reference bitstream-layer models for point cloud quality assessment (PCQA) use the information extracted from a bitstream for real-time and nonintrusive quality monitoring. We propose a no-reference bitstream-layer model for the perceptual quality assessment of video-based point cloud compression (V-PCC) encoded point clouds. First, we describe the fundamental relationship between perceptual coding distortion and the texture quantization parameter (TQP) when geometry encoding is lossless. Then, we incorporate the texture complexity (TC) into the proposed model to account for the fact that the perceptual coding distortion of a point cloud depends on its texture characteristics. TC is estimated from TQP and the texture bitrate per pixel (TBPP), both of which are extracted from the compressed bitstream without resorting to complete decoding. Next, we construct a texture distortion assessment model upon TQP and TBPP. By combining this texture distortion model with the geometry quantization parameter (GQP), we obtain an overall no-reference bitstream-layer PCQA model that we call bitstreamPCQ. Experimental results show that the proposed model markedly outperforms existing models in terms of widely used performance criteria, including the Pearson linear correlation coefficient (PLCC), the Spearman rank order correlation coefficient (SRCC) and the root mean square error (RMSE). The dataset developed in this study is publicly available at https://github.com/qdushl/Waterloo-Point-Cloud-Database-3.0.
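To make the bitstream-layer idea concrete, here is a minimal sketch of a predictor whose inputs (TQP, GQP, TBPP) are all parsed from the bitstream, so no decoding is needed. The functional forms, constants, and the texture-complexity proxy are invented placeholders, not the fitted bitstreamPCQ model.

```python
# Hypothetical sketch of a bitstream-layer quality predictor: all three
# inputs are parsed from the V-PCC bitstream, so no decoding is needed.
# The functional forms and constants below are invented placeholders,
# not the fitted bitstreamPCQ model.
import math

def texture_complexity(tqp: float, tbpp: float) -> float:
    # Assumed proxy: at a fixed TQP, more bits per pixel suggest richer texture.
    return tbpp * math.exp(0.1 * tqp)

def predict_quality(tqp: float, gqp: float, tbpp: float) -> float:
    """Map (TQP, GQP, TBPP) to a 1-5 score with made-up logistic terms."""
    tc = texture_complexity(tqp, tbpp)
    texture_term = 1.0 / (1.0 + math.exp(0.15 * (tqp - 32.0) - 0.2 * tc))
    geometry_term = 1.0 / (1.0 + math.exp(0.10 * (gqp - 28.0)))
    return 1.0 + 4.0 * texture_term * geometry_term

# Example call with features read from a bitstream header (values assumed).
print(predict_quality(tqp=32.0, gqp=28.0, tbpp=0.15))
```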
Item Embargo
PU-Mask: 3D Point Cloud Upsampling via an Implicit Virtual Mask (IEEE, 2024-02-26)
Liu, Hao; Yuan, Hui; Hamzaoui, Raouf; Liu, Qi; Li, Shuai
We present PU-Mask, a virtual mask-based network for 3D point cloud upsampling. Unlike existing upsampling methods, which treat point cloud upsampling as an "unconstrained generative" problem, we propose to address it from the perspective of "local filling", i.e., we assume that the sparse input point cloud (i.e., the unmasked point set) is obtained by locally masking the original dense point cloud with virtual masks. Therefore, given the unmasked point set and the virtual masks, our goal is to fill in the point set hidden by the virtual masks. Specifically, because the masks do not actually exist, we first locate and form each virtual mask with a virtual mask generation module. Then, we propose a mask-guided transformer-style asymmetric autoencoder (MTAA) to restore the upsampled features. Moreover, we introduce a second-order unfolding attention mechanism to enhance the interaction between the feature channels of MTAA. Next, we generate a coarse upsampled point cloud using a pooling technique that is specific to the virtual masks. Finally, we design a learnable pseudo-Laplacian operator to calibrate the coarse upsampled point cloud and generate a refined upsampled point cloud. Extensive experiments demonstrate that PU-Mask is superior to state-of-the-art methods. Our code will be made available at: https://github.com/liuhaoyun/PU-Mask

Item Open Access
Reduced Reference Perceptual Quality Model with Application to Rate Control for Video-based Point Cloud Compression (IEEE, 2021-07)
Liu, Qi; Yuan, Hui; Hamzaoui, Raouf; Su, Honglei; Hou, Junhui; Yang, Huan
In rate-distortion optimization, the encoder settings are determined by maximizing a reconstruction quality measure subject to a constraint on the bitrate. One of the main challenges of this approach is to define a quality measure that can be computed with low computational cost and that correlates well with perceptual quality. While several quality measures that fulfil these two criteria have been developed for images and videos, no such measure exists for point clouds. We address this limitation for the video-based point cloud compression (V-PCC) standard by proposing a linear perceptual quality model whose variables are the V-PCC geometry and color quantization step sizes and whose coefficients can easily be computed from two features extracted from the original point cloud. Subjective quality tests with 400 compressed point clouds show that the proposed model correlates well with the mean opinion score, outperforming state-of-the-art full-reference objective measures in terms of the Spearman rank-order and Pearson linear correlation coefficients. Moreover, we show that for the same target bitrate, rate-distortion optimization based on the proposed model offers higher perceptual quality than rate-distortion optimization based on exhaustive search with a point-to-point objective quality metric. Our datasets are publicly available at https://github.com/qdushl/Waterloo-Point-Cloud-Database-2.0.

Item Embargo
Support vector regression-based reduced-reference perceptual quality model for compressed point clouds (IEEE, 2023-12-27)
Su, Honglei; Liu, Qi; Yuan, Hui; Cheng, Qiang; Hamzaoui, Raouf
Video-based point cloud compression (V-PCC) is a state-of-the-art moving picture experts group (MPEG) standard for point cloud compression. V-PCC can be used to compress both static and dynamic point clouds in a lossless, near-lossless, or lossy way. Many objective quality metrics have been proposed for distorted point clouds. Most of these metrics are full-reference metrics that require both the original point cloud and the distorted one. However, in some real-time applications, the original point cloud is not available, and no-reference or reduced-reference quality metrics are needed. Three main challenges in the design of a reduced-reference quality metric are how to build a set of features that characterize the visual quality of the distorted point cloud, how to select the most effective features from this set, and how to map the selected features to a perceptual quality score. We address the first challenge by proposing a comprehensive set of features consisting of compression, geometry, normal, curvature, and luminance features. To deal with the second challenge, we use the least absolute shrinkage and selection operator (LASSO) method, a variable selection method for regression problems. Finally, we map the selected features to the mean opinion score in a nonlinear space. Although we have used only 19 features in our current implementation, our metric is flexible enough to accommodate any number of features, including more effective future ones. Experimental results on the Waterloo point cloud dataset version 2 (WPC2.0) and the MPEG point cloud compression dataset (M-PCCD) show that our method, namely PCQAML, outperforms state-of-the-art full-reference and reduced-reference quality metrics in terms of the Pearson linear correlation coefficient, Spearman rank-order correlation coefficient, Kendall's rank-order correlation coefficient, and root mean squared error.
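The two-stage pipeline the last abstract describes (LASSO feature selection followed by a nonlinear mapping to the mean opinion score) can be sketched with scikit-learn as follows; the synthetic data, the RBF-kernel support vector regressor, and all hyperparameters are assumptions for illustration, not the trained PCQAML model.

```python
# Sketch of the two-stage pipeline the abstract describes: LASSO picks
# informative features, then a nonlinear regressor maps them to MOS.
# Synthetic data, RBF-kernel SVR, and hyperparameters are assumptions.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 19))      # 19 candidate features, as in the paper
mos = 3.0 + X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=200)  # fake MOS

# Stage 1: LASSO drives the weights of uninformative features to zero.
lasso = LassoCV(cv=5).fit(X, mos)
selected = np.flatnonzero(lasso.coef_)
print("selected feature indices:", selected)

# Stage 2: map the surviving features to MOS in a nonlinear space
# (an RBF-kernel support vector regressor is one plausible choice).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:, selected], mos)
print("predicted MOS for first two samples:", model.predict(X[:2, selected]))
```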