Browsing by Author "Li, Shuai"
Now showing 1 - 3 of 3
Item Embargo
Enhancing Context Models for Point Cloud Geometry Compression with Context Feature Residuals and Multi-Loss (IEEE, 2024-02-20)
Sun, Chang; Yuan, Hui; Li, Shuai; Lu, Xin; Hamzaoui, Raouf

In point cloud geometry compression, context models usually use the one-hot encoding of node occupancy as the label, and the cross-entropy between this one-hot encoding and the probability distribution predicted by the context model as the loss function. However, this approach has two main weaknesses. First, the differences between the contexts of different nodes are not significant, making it difficult for the context model to accurately predict the probability distribution of node occupancy. Second, as the one-hot encoding is not the actual probability distribution of node occupancy, the cross-entropy loss function is inaccurate. To address these problems, we propose a general structure that can enhance existing context models. We introduce context feature residuals into the context model to amplify the differences between contexts. We also add a multi-layer perceptron branch that uses the mean squared error between its output and the node occupancy as a loss function, providing accurate gradients in backpropagation. We validate our method by showing that it improves the performance of an octree-based model (OctAttention) and a voxel-based model (VoxelDNN) on the object point cloud datasets MPEG 8i and MVUB, as well as on the LiDAR point cloud dataset SemanticKITTI.
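The abstract above describes the multi-loss training scheme only in prose. The PyTorch sketch below illustrates that idea under stated assumptions: the class name, layer sizes, the concatenation used to inject the context feature residual, and the weight alpha are hypothetical and are not taken from the paper.

```python
# Illustrative sketch with assumed names and sizes, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnhancedContextHead(nn.Module):
    """Context-model head with a feature residual and an auxiliary MLP branch."""

    def __init__(self, feat_dim=128, num_symbols=256):
        super().__init__()
        # main head: distribution over the 2^8 = 256 child-occupancy symbols of an octree node
        self.occupancy_head = nn.Linear(2 * feat_dim, num_symbols)
        # auxiliary branch: regresses the 8 occupancy bits directly, trained with MSE
        self.mlp_branch = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 8)
        )

    def forward(self, context_feat, prev_context_feat):
        # context feature residual, meant to amplify the differences between similar contexts
        residual = context_feat - prev_context_feat
        fused = torch.cat([context_feat, residual], dim=-1)
        return self.occupancy_head(fused), self.mlp_branch(fused)

def multi_loss(logits, bit_pred, symbol_label, bit_label, alpha=1.0):
    ce = F.cross_entropy(logits, symbol_label)   # standard cross-entropy against the one-hot label
    mse = F.mse_loss(bit_pred, bit_label)        # auxiliary MSE term from the MLP branch
    return ce + alpha * mse
```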
Item Embargo
PU-Mask: 3D Point Cloud Upsampling via an Implicit Virtual Mask (IEEE, 2024-02-26)
Liu, Hao; Yuan, Hui; Hamzaoui, Raouf; Liu, Qi; Li, Shuai

We present PU-Mask, a virtual mask-based network for 3D point cloud upsampling. Unlike existing upsampling methods, which treat point cloud upsampling as an "unconstrained generative" problem, we address it from the perspective of "local filling": we assume that the sparse input point cloud (the unmasked point set) is obtained by locally masking the original dense point cloud with virtual masks. Given the unmasked point set and the virtual masks, our goal is therefore to fill in the points hidden by the virtual masks. Because the masks do not actually exist, we first locate and form each virtual mask with a virtual mask generation module. We then propose a mask-guided transformer-style asymmetric autoencoder (MTAA) to restore the upsampled features, and introduce a second-order unfolding attention mechanism to enhance the interaction between the feature channels of MTAA. Next, we generate a coarse upsampled point cloud using a pooling technique specific to the virtual masks. Finally, we design a learnable pseudo-Laplacian operator that calibrates the coarse upsampled point cloud to produce the refined upsampled point cloud. Extensive experiments demonstrate that PU-Mask is superior to state-of-the-art methods. Our code will be made available at: https://github.com/liuhaoyun/PU-Mask

Item Open Access
PU-Refiner: A Geometry Refiner with Adversarial Learning for Point Cloud Upsampling (IEEE, 2022-05)
Liu, Hao; Yuan, Hui; Hamzaoui, Raouf; Gao, Wei; Li, Shuai

We present PU-Refiner, a generative adversarial network for point cloud upsampling. The generator of our network includes a coarse feature expansion module to create coarse upsampled features, a geometry generation module to regress a coarse point cloud from those features, and a progressive geometry refinement module to restore the dense point cloud in a coarse-to-fine fashion based on the coarse point cloud. The discriminator of our network helps the generator produce point clouds closer to the target distribution, making full use of multi-level features to improve its classification performance. Extensive experimental results show that PU-Refiner is superior to five state-of-the-art point cloud upsampling methods. Code: https://github.com/liuhaoyun/PU-Refiner
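As a loose illustration of the adversarial setup described in this record, the following PyTorch sketch shows one way a generator/discriminator training step for point cloud upsampling could be wired. It is not PU-Refiner's implementation: the module interfaces, the simple dense Chamfer distance, and the loss weight lam are assumptions made for the sketch.

```python
# Generic adversarial upsampling step with assumed interfaces, not the authors' code.
import torch
import torch.nn.functional as F

def chamfer_distance(pred, gt):
    """Simple dense Chamfer distance; pred is (B, N, 3), gt is (B, M, 3)."""
    d = torch.cdist(pred, gt)  # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def train_step(generator, discriminator, g_opt, d_opt, sparse_pc, dense_gt, lam=0.01):
    # --- discriminator: separate real dense clouds from generated ones ---
    with torch.no_grad():
        fake = generator(sparse_pc)            # upsampled point cloud, e.g. (B, r*N, 3)
    real_logits = discriminator(dense_gt)
    fake_logits = discriminator(fake)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- generator: reconstruct the geometry and fool the discriminator ---
    fake = generator(sparse_pc)
    adv_logits = discriminator(fake)
    g_loss = (chamfer_distance(fake, dense_gt)
              + lam * F.binary_cross_entropy_with_logits(adv_logits, torch.ones_like(adv_logits)))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

The sketch collapses PU-Refiner's coarse feature expansion, progressive geometry refinement, and multi-level discriminator features into the placeholder generator and discriminator modules.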