Motion Pattern-Based Scene Classification Using Adaptive Synthetic Oversampling and Fully Connected Deep Neural Network

dc.contributor.author: Mohammed, Sultan Mohammed
dc.contributor.author: Al-Dhamari, Ahlam
dc.contributor.author: Saeed, Waddah
dc.contributor.author: Al-Aswadi, Fatima N.
dc.contributor.author: Saleh, Sami Abdulla Mohsen
dc.contributor.author: Marsono, M. N.
dc.date.acceptance: 2023-10-16
dc.date.accessioned: 2023-10-31T10:25:42Z
dc.date.available: 2023-10-31T10:25:42Z
dc.date.issued: 2023-10-25
dc.description: open access article
dc.description.abstract: Analyzing crowded environments has become an increasingly researched topic, largely due to its myriad practical applications, including enhanced video surveillance systems and the estimation of crowd density in specific settings. This paper presents a comprehensive method for advancing the study of crowd dynamics and behavioral analysis, specifically focusing on the classification of movement patterns. We introduce a specialized neural network-based classifier explicitly designed for the accurate categorization of various crowd scenes. This classifier fills a unique niche in the existing literature by offering robust and adaptive classification capabilities. To optimize the performance of our model, we conduct an in-depth analysis of loss functions commonly employed in multi-class classification tasks. Our study encompasses four widely used loss functions: Focal Loss, Huber Loss, Cross-Entropy Loss, and Multi-Margin Loss. Based on empirical findings, we introduce a Joint Loss function that combines the strengths of Cross-Entropy and Multi-Margin Loss, outperforming existing methods across key performance metrics such as accuracy, precision, recall, and F1-score. Furthermore, we address the critical challenge of class imbalance in motion patterns within crowd scenes. To this end, we perform a comprehensive comparative study of two leading oversampling techniques: the synthetic minority oversampling technique (SMOTE) and adaptive synthetic sampling (ADASYN). Our results indicate that ADASYN is superior at enhancing classification performance. This approach not only mitigates the issue of class imbalance but also provides robust empirical validation for our proposed method. Finally, we subject our model to a rigorous evaluation using the Collective Motion Database, facilitating a comprehensive comparison with existing state-of-the-art techniques. This evaluation confirms the effectiveness of our model and aligns it with established paradigms in the field.
dc.funder: Other external funder (please detail below)
dc.funder.other: Research Management Centre (RMC) at Universiti Teknologi Malaysia (UTM) under the Professional Development Research University grant (R.J130000.7113.06E45)
dc.identifier.citation: Mohammed, S. M. et al. (2023) Motion Pattern-Based Scene Classification Using Adaptive Synthetic Oversampling and Fully Connected Deep Neural Network. IEEE Access,
dc.identifier.doi: https://doi.org/10.1109/ACCESS.2023.3327463
dc.identifier.uri: https://hdl.handle.net/2086/23310
dc.language.iso: en
dc.peerreviewed: Yes
dc.publisher: IEEE
dc.title: Motion Pattern-Based Scene Classification Using Adaptive Synthetic Oversampling and Fully Connected Deep Neural Network
dc.type: Article
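The abstract describes a Joint Loss that combines Cross-Entropy and Multi-Margin Loss. A minimal per-example sketch of such a combination is below, assuming a simple weighted sum; the paper's actual weighting scheme is not stated in this record, and the function names and the `lam` parameter are hypothetical illustrations, not the authors' implementation:

```python
import math

def cross_entropy(logits, y):
    # Negative log-softmax probability of the true class y,
    # computed with the max-shift trick for numerical stability.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_z - logits[y]

def multi_margin(logits, y, margin=1.0):
    # Multi-class hinge (margin) loss, p=1, averaged over the
    # number of classes, in the style of PyTorch's MultiMarginLoss.
    return sum(max(0.0, margin - logits[y] + logits[j])
               for j in range(len(logits)) if j != y) / len(logits)

def joint_loss(logits, y, lam=0.5):
    # Hypothetical combination: cross-entropy plus a weighted
    # margin term. The real paper may weight or combine differently.
    return cross_entropy(logits, y) + lam * multi_margin(logits, y)

# Example: a confident correct prediction has zero margin penalty,
# so the joint loss reduces to the cross-entropy term.
print(joint_loss([2.0, 0.5, 0.1], 0))
```

The intuition behind pairing the two terms: cross-entropy pushes the true-class probability up, while the margin term additionally enforces a gap between the true-class score and every other class score.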
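The abstract also reports that ADASYN outperformed SMOTE for the imbalanced motion-pattern classes. For orientation, here is a minimal pure-Python sketch of the ADASYN idea: each minority point spawns a number of synthetic samples proportional to how many of its k nearest neighbours belong to the majority class, so harder-to-learn border regions receive more synthetics. This is a simplified illustration with hypothetical names, not the paper's pipeline (which would typically use a library implementation such as imbalanced-learn's `ADASYN`):

```python
import math
import random

def adasyn_sketch(X_min, X_maj, k=3, seed=0):
    """Simplified ADASYN: oversample the minority class toward the
    majority class size. Points are tuples of floats."""
    rng = random.Random(seed)
    G = len(X_maj) - len(X_min)  # total synthetics needed

    # Density ratio r_i: fraction of majority points among the
    # k nearest neighbours of each minority point.
    ratios = []
    for x in X_min:
        neigh = sorted(X_min + X_maj, key=lambda p: math.dist(p, x))[1:k + 1]
        ratios.append(sum(1 for p in neigh if p in X_maj) / k)
    total = sum(ratios) or 1.0

    synthetic = []
    for x, r in zip(X_min, ratios):
        g = round(r / total * G)  # synthetics allotted to this point
        # SMOTE-style interpolation toward random minority neighbours.
        min_neigh = sorted(X_min, key=lambda p: math.dist(p, x))[1:k + 1]
        for _ in range(g):
            nb = rng.choice(min_neigh)
            lam = rng.random()
            synthetic.append(tuple(xi + lam * (ni - xi)
                                   for xi, ni in zip(x, nb)))
    return synthetic
```

The key difference from plain SMOTE is the per-point allocation `g`: SMOTE spreads synthetics uniformly across minority points, while ADASYN concentrates them where minority points are surrounded by majority neighbours.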
