Comparing CNN and Human Crafted Features for Human Activity Recognition

dc.cclicence: CC-BY-NC
dc.contributor.author: Cruciani, Federico
dc.contributor.author: Vafeiadis, Anastasios
dc.contributor.author: Nugent, Chris
dc.contributor.author: Cleland, Ian
dc.contributor.author: McCullagh, Paul
dc.contributor.author: Votis, Konstantinos
dc.contributor.author: Giakoumis, Dimitrios
dc.contributor.author: Tzovaras, Dimitrios
dc.contributor.author: Chen, Liming
dc.contributor.author: Hamzaoui, Raouf
dc.date.acceptance: 2019-04-29
dc.date.accessioned: 2019-06-24T08:54:11Z
dc.date.available: 2019-06-24T08:54:11Z
dc.date.issued: 2019-08
dc.description.abstract: Deep learning techniques such as Convolutional Neural Networks (CNNs) have shown good results in activity recognition. One of the advantages of these methods is their ability to generate features automatically. This ability greatly simplifies the task of feature extraction, which usually requires domain-specific knowledge, especially when using big data, where data-driven approaches can lead to anti-patterns. Despite this advantage, very little work has been undertaken on analyzing the quality of extracted features, and more specifically on how model architecture and parameters affect the ability of those features to separate activity classes in the final feature space. This work focuses on identifying the optimal parameters for recognition of simple activities, applying this approach to signals from both inertial and audio sensors. The paper provides the following contributions: (i) a comparison of automatically extracted CNN features with gold standard Human Crafted Features (HCF); (ii) a comprehensive analysis of how architecture and model parameters affect the separation of target classes in the feature space. Results are evaluated using publicly available datasets. In particular, we achieved a 93.38% F-Score on the UCI-HAR dataset, using 1D CNNs with 3 convolutional layers and a kernel size of 32, and a 90.5% F-Score on the DCASE 2017 development dataset, simplified to three classes (indoor, outdoor, and vehicle), using 2D CNNs with 2 convolutional layers and a 2x2 kernel size.
dc.funder: European Union (EU) Horizon 2020
dc.identifier.citation: F. Cruciani, A. Vafeiadis, C. Nugent, I. Cleland, P. McCullagh, K. Votis, D. Giakoumis, D. Tzovaras, L. Chen, R. Hamzaoui, Comparing CNN and human crafted features for human activity recognition. In: Proc. 16th IEEE International Conference on Ubiquitous Intelligence and Computing (UIC 2019), Leicester, Aug. 2019.
dc.identifier.uri: https://www.dora.dmu.ac.uk/handle/2086/18111
dc.language.iso: en
dc.peerreviewed: Yes
dc.projectid: ACROSSING project, Marie Skłodowska-Curie EU Framework for Research and Innovation Horizon 2020, Grant Agreement No. 676157.
dc.publisher: IEEE
dc.researchinstitute: Cyber Technology Institute (CTI)
dc.subject: Human Activity Recognition
dc.subject: Deep Learning
dc.subject: Convolutional Neural Networks
dc.title: Comparing CNN and Human Crafted Features for Human Activity Recognition
dc.type: Conference
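
The abstract's best inertial-sensor result uses a 1D CNN with 3 convolutional layers and a kernel size of 32 on UCI-HAR. The following is a minimal Keras sketch of such an architecture: only the depth (3 layers) and kernel size (32) come from the abstract; the filter counts, padding, pooling, and dense head are illustrative assumptions, not the authors' exact configuration. UCI-HAR windows are 128 samples across 9 inertial channels, with 6 activity classes.

```python
# Minimal sketch of a 1D CNN in the spirit of the architecture described
# in the abstract (3 convolutional layers, kernel size 32). Filter counts,
# padding, pooling, and the dense head are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_1d_cnn(timesteps=128, channels=9, num_classes=6):
    """UCI-HAR windows: 128 samples x 9 inertial channels, 6 classes."""
    return tf.keras.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        # The pooled activations of the last convolution act as the
        # automatically extracted feature vector analyzed in the paper.
        layers.GlobalAveragePooling1D(name="cnn_features"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_1d_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The audio counterpart reported in the abstract would analogously use Conv2D layers (2 layers, 2x2 kernels) over spectrogram-like inputs for the three-class DCASE 2017 task.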

Files

Original bundle
Name: UCI_Leicester (1).pdf
Size: 4.75 MB
Format: Adobe Portable Document Format
Description: Main article

License bundle
Name: license.txt
Size: 4.2 KB
Description: Item-specific license agreed upon to submission