
dc.contributor.author: Chen, Jie
dc.contributor.author: Yang, Shengxiang
dc.contributor.author: Wang, Zhu
dc.contributor.author: Mao, Hua
dc.date.accessioned: 2021-10-12T08:41:06Z
dc.date.available: 2021-10-12T08:41:06Z
dc.date.issued: 2021-10
dc.identifier.citation: Chen, J., Yang, S., Wang, Z. and Mao, H. (2021) Efficient sparse representation for learning in high-dimensional data. IEEE Transactions on Neural Networks and Learning Systems.
dc.identifier.issn: 2162-2388
dc.identifier.uri: https://dora.dmu.ac.uk/handle/2086/21353
dc.description: The file attached to this record is the author's final peer-reviewed version. The publisher's final version can be found by following the DOI link.
dc.description.abstract: Owing to their ability to learn intrinsic structures from high-dimensional data, techniques based on sparse representation have begun to have an impressive impact in several fields, such as image processing, computer vision, and pattern recognition. Learning sparse representations is often computationally expensive because of the iterative computations needed to solve convex optimization problems, in which the number of iterations is unknown before convergence. Moreover, most sparse representation algorithms focus only on the final representation results and ignore how the sparsity ratio changes during the iterative computations. In this paper, two algorithms are proposed to learn sparse representations based on locality-constrained linear representation learning with probabilistic simplex constraints. Specifically, the first algorithm, called approximated local linear representation (ALLR), obtains a closed-form solution from individual locality-constrained sparse representations. The second algorithm, called approximated local linear representation with symmetric constraints (ALLRSC), further obtains all symmetric sparse representation results within a limited number of computations; notably, the sparsity and convergence of the representations are guaranteed by theoretical analysis. The steady decline in the sparsity ratio during the iterative computations is a critical factor in practical applications. Experimental results on public datasets demonstrate that the proposed algorithms outperform several state-of-the-art algorithms for learning with high-dimensional data. (An illustrative code sketch of the locality-constrained, simplex-projected coding idea follows this record.)
dc.language.iso: en_US
dc.publisher: IEEE Press
dc.subject: sparse representation
dc.subject: linear representation
dc.subject: low-dimensional structures
dc.subject: probabilistic simplex
dc.title: Efficient sparse representation for learning in high-dimensional data
dc.type: Article
dc.identifier.doi: https://doi.org/10.1109/TNNLS.2021.3119278
dc.peerreviewed: Yes
dc.funder: Other external funder (please detail below)
dc.projectid: 61303015 and 61673331
dc.cclicence: N/A
dc.date.acceptance: 2021-10-04
dc.researchinstitute: Institute of Artificial Intelligence (IAI)
dc.funder.other: National Natural Science Foundation of China
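The following is a minimal Python sketch of the idea the abstract describes: coding each sample over its nearest neighbours with coefficients constrained to the probabilistic simplex, then symmetrising the resulting affinity. It assumes a standard locality-constrained formulation; the helper names (project_simplex, local_simplex_codes), the neighbour count k, the ridge term reg, and the ridge-solve-then-project strategy are illustrative assumptions, not the paper's actual ALLR/ALLRSC derivations or guarantees.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex
    # {c : c >= 0, sum(c) = 1}; standard algorithm of Duchi et al. (2008).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * j > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def local_simplex_codes(X, k=10, reg=1e-4):
    # Code each row of X (one sample per row) over its k nearest
    # neighbours, with coefficients constrained to the probability
    # simplex.  The ridge solve followed by a simplex projection is a
    # simplification for illustration, not the paper's closed form.
    n = X.shape[0]
    k = min(k, n - 1)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)              # exclude self-representation
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[:k]           # k nearest neighbours of x_i
        N = X[idx]                            # (k, dim) neighbour matrix
        G = N @ N.T + reg * np.eye(k)         # regularised Gram matrix
        c = np.linalg.solve(G, N @ X[i])      # ridge coding of x_i
        W[i, idx] = project_simplex(c)        # enforce simplex constraint
    return W

# Example: each row of W has at most k nonzero coefficients, so the
# matrix is sparse by construction, loosely echoing the bounded
# sparsity ratio the abstract emphasises.  A symmetric affinity in the
# spirit of ALLRSC's symmetric constraints can be formed by averaging.
X = np.random.rand(100, 50)                   # 100 samples, 50 dimensions
W = local_simplex_codes(X, k=10)
A = 0.5 * (W + W.T)                           # symmetric affinity matrix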

