Educational Quality Assessment System based on Emotions using Facial Images Applying Deep Learning Techniques

© 2024 by IJETT Journal
Volume-72 Issue-3
Year of Publication : 2024
Authors : Victor Romero-Alva, Sebastian Ramos-Cosi, Avid Roman-Gonzalez
DOI : 10.14445/22315381/IJETT-V72I3P122

How to Cite?

Victor Romero-Alva, Sebastian Ramos-Cosi, Avid Roman-Gonzalez, "Educational Quality Assessment System based on Emotions using Facial Images Applying Deep Learning Techniques," International Journal of Engineering Trends and Technology, vol. 72, no. 3, pp. 249-259, 2024. Crossref, https://doi.org/10.14445/22315381/IJETT-V72I3P122

Abstract
In response to the shift toward online education prompted by the COVID-19 pandemic, this project analyzes students’ emotional responses in a virtual classroom by capturing their facial expressions. Leveraging facial imagery, the system objectively assesses levels of engagement and interest, or the lack thereof, during online sessions in a discreet, non-intrusive manner. The resulting insights are intended to refine and enhance future educational sessions, thereby optimizing the learning experience. Facial images are captured in real time from students who have their cameras enabled during live class sessions; these images are then processed to identify and evaluate the students’ facial expressions. In its preliminary phase, the model identifies and categorizes four primary emotions (happiness, sadness, anger, and surprise) with an accuracy rate exceeding 90%, according to the system’s evaluation criteria; accuracy is gauged against the prevalence of each emotion throughout the class. The findings are compiled and shared with educators as constructive feedback to inform and improve the planning and execution of subsequent sessions. In addition, a Chrome browser extension has been developed to deploy the system on the Google Meet platform.
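The abstract describes the processing pipeline only at a high level. The following Python sketch illustrates one plausible realization of that pipeline, pairing OpenCV face detection with a generic pre-trained convolutional classifier; the Haar cascade, the emotion_cnn.h5 model file, the 48x48 grayscale input size, and the helper names are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch of a per-session emotion-prevalence pipeline (assumptions:
# OpenCV Haar cascade for face detection, a hypothetical pre-trained Keras CNN
# "emotion_cnn.h5" producing softmax scores over the four emotions, and
# 48x48 grayscale face crops as model input).
from collections import Counter

import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["happiness", "sadness", "anger", "surprise"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = tf.keras.models.load_model("emotion_cnn.h5")  # hypothetical model file


def classify_frame(frame_bgr):
    """Return the predicted emotion for each face detected in one video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    labels = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        scores = model.predict(crop.reshape(1, 48, 48, 1), verbose=0)
        labels.append(EMOTIONS[int(np.argmax(scores))])
    return labels


def session_report(frames):
    """Aggregate per-frame predictions into the prevalence of each emotion."""
    counts = Counter(label for frame in frames for label in classify_frame(frame))
    total = sum(counts.values()) or 1
    return {emotion: counts[emotion] / total for emotion in EMOTIONS}
```

In a deployment along the lines described in the abstract, the per-emotion prevalence returned by something like session_report would be the feedback shared with the instructor after each class, with the Chrome extension supplying the frames captured from Google Meet.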

Keywords
Emotion detection, Recognition system, Educational quality, COVID-19, Virtual classes.
