TY - GEN
T1 - Deep Learning Model with GA-based Visual Feature Selection and Context Integration
AU - Mandal, Ranju
AU - Azam, Basim
AU - Verma, Brijesh
AU - Zhang, Mengjie
N1 - Funding Information:
This research was supported under Australian Research Council’s Discovery Projects funding scheme (project number DP200102252).
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Deep learning models have been very successful in computer vision and image processing applications. Since their inception, Convolutional Neural Network (CNN)-based deep learning models have consistently outperformed other machine learning methods on many significant image processing benchmarks. Many top-performing methods for image segmentation are also based on deep CNN models. However, deep CNN models fail to integrate global and local context alongside visual features despite having complex multi-layer architectures. We propose a novel three-layered deep learning model that independently learns global and local contextual information alongside visual features and performs visual feature selection using a genetic algorithm. The novelty of the proposed model is that One-vs-All binary class-based learners are introduced to learn Genetic Algorithm (GA)-optimized features in the visual layer, followed by a contextual layer that learns the global and local contexts of an image, and finally a third layer that optimally integrates all the information to obtain the final class label. The Stanford Background and CamVid benchmark image parsing datasets were used to evaluate our model, which shows promising results. The empirical analysis reveals that optimized visual features combined with global and local contextual information play a significant role in improving accuracy and producing stable predictions comparable to those of state-of-the-art deep CNN models.
AB - Deep learning models have been very successful in computer vision and image processing applications. Since their inception, Convolutional Neural Network (CNN)-based deep learning models have consistently outperformed other machine learning methods on many significant image processing benchmarks. Many top-performing methods for image segmentation are also based on deep CNN models. However, deep CNN models fail to integrate global and local context alongside visual features despite having complex multi-layer architectures. We propose a novel three-layered deep learning model that independently learns global and local contextual information alongside visual features and performs visual feature selection using a genetic algorithm. The novelty of the proposed model is that One-vs-All binary class-based learners are introduced to learn Genetic Algorithm (GA)-optimized features in the visual layer, followed by a contextual layer that learns the global and local contexts of an image, and finally a third layer that optimally integrates all the information to obtain the final class label. The Stanford Background and CamVid benchmark image parsing datasets were used to evaluate our model, which shows promising results. The empirical analysis reveals that optimized visual features combined with global and local contextual information play a significant role in improving accuracy and producing stable predictions comparable to those of state-of-the-art deep CNN models.
KW - Deep learning
KW - Genetic algorithm
KW - Image parsing
KW - Scene understanding
KW - Semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85124583628&partnerID=8YFLogxK
U2 - 10.1109/CEC45853.2021.9504753
DO - 10.1109/CEC45853.2021.9504753
M3 - Conference contribution
AN - SCOPUS:85124583628
T3 - 2021 IEEE Congress on Evolutionary Computation, CEC 2021 - Proceedings
SP - 288
EP - 295
BT - 2021 IEEE Congress on Evolutionary Computation, CEC 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2021 IEEE Congress on Evolutionary Computation, CEC 2021
Y2 - 28 June 2021 through 1 July 2021
ER -