Mobile-based Human Emotion Recognition based on Speech and Heart rate

Huda Majed Swadi
Hamid Mohammed Ali

Abstract

Mobile-based human emotion recognition is a very challenging subject. Most of the approaches suggested and built in this field utilize various contexts derived from external sensors and the smartphone, but they suffer from different obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. It is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone records the user's speech and sends it to a server. A smartwatch, worn on the user's wrist, measures the user's heart rate while the user is speaking and sends it, via Bluetooth, to the smartphone, which in turn forwards it to the server. At the server side, speech features are extracted from the speech signal and classified by a neural network. To minimize misclassification, the heart rate measurement directs the extracted speech features to either the excited (angry and happy) neural network or the calm (sad and normal) neural network. Despite the challenges associated with the system, it achieved an accuracy of 96.49% for known speakers and 79.05% for unknown speakers.
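The server-side routing step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the heart-rate threshold value, the feature representation, and the classifier interfaces are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of the routing step: the measured heart rate
# selects which of two emotion classifiers receives the extracted
# speech features. The threshold and classifiers are illustrative
# assumptions, not values from the paper.

HR_THRESHOLD_BPM = 90  # assumed cut-off between "excited" and "calm"

def classify_emotion(speech_features, heart_rate_bpm, excited_net, calm_net):
    """Route speech features to the excited (angry/happy) network or
    the calm (sad/normal) network based on the user's heart rate."""
    if heart_rate_bpm >= HR_THRESHOLD_BPM:
        return excited_net(speech_features)  # yields "angry" or "happy"
    return calm_net(speech_features)         # yields "sad" or "normal"

# Stand-in classifiers for demonstration (a real system would use
# trained neural networks over the extracted speech features):
excited = lambda f: "angry" if f["energy"] > 0.5 else "happy"
calm = lambda f: "sad" if f["energy"] > 0.5 else "normal"

print(classify_emotion({"energy": 0.7}, 110, excited, calm))  # angry
print(classify_emotion({"energy": 0.2}, 65, excited, calm))   # normal
```

The design choice here is that the heart rate acts as a coarse arousal gate, so each network only has to separate two emotions with similar arousal levels, which is what the abstract credits for reducing misclassification.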

Article Details

How to Cite
“Mobile-based Human Emotion Recognition based on Speech and Heart rate” (2019) Journal of Engineering, 25(11), pp. 55–66. doi:10.31026/j.eng.2019.11.05.
Section
Articles
