Mobile-based Human Emotion Recognition based on Speech and Heart rate

  • Huda Majed Swadi College of Engineering - University of Baghdad
  • Hamid Mohammed Ali College of Engineering - University of Baghdad
Keywords: smartphone, neural network, smartwatch, speech signal, heart rate


Mobile-based human emotion recognition is a very challenging subject. Most of the approaches proposed and built in this field utilize various contexts derived from external sensors and the smartphone, but these approaches suffer from a range of obstacles and challenges. The proposed system integrates the human speech signal and heart rate in one system to improve the accuracy of human emotion recognition. The system is designed to recognize four human emotions: angry, happy, sad, and normal. In this system, the smartphone records the user's speech and sends it to a server. A smartwatch, worn on the user's wrist, measures the user's heart rate while the user is speaking and sends it via Bluetooth to the smartphone, which in turn forwards it to the server. At the server side, speech features are extracted from the speech signal and classified by a neural network. To minimize the misclassification of the neural network, the heart rate measurement is used to direct the extracted speech features to either the excited (angry and happy) neural network or the calm (sad and normal) neural network. Despite the challenges associated with the system, it achieved a recognition accuracy of 96.49% for known speakers and 79.05% for unknown speakers.
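The server-side routing step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the heart-rate threshold and the stub classifiers standing in for the two trained neural networks are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the heart-rate gating described in the abstract:
# the measured heart rate selects which of two specialized classifiers
# (excited vs. calm) receives the extracted speech features.

EXCITED_THRESHOLD_BPM = 100  # assumed cut-off; the paper does not state one


def classify_excited(features):
    # Stand-in for the neural network trained to separate "angry" and "happy".
    return "angry" if sum(features) < 0 else "happy"


def classify_calm(features):
    # Stand-in for the neural network trained to separate "sad" and "normal".
    return "sad" if sum(features) < 0 else "normal"


def recognize_emotion(speech_features, heart_rate_bpm):
    """Route the speech features to the excited or calm branch by heart rate."""
    if heart_rate_bpm >= EXCITED_THRESHOLD_BPM:
        return classify_excited(speech_features)
    return classify_calm(speech_features)
```

Splitting one four-way classifier into two binary ones gated by heart rate is what lets the physiological signal compensate for acoustically similar emotion pairs (e.g. angry vs. sad intensity cues).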


How to Cite
Swadi, H. and Ali, H. (2019) “Mobile-based Human Emotion Recognition based on Speech and Heart rate”, Journal of Engineering, 25(11), pp. 55-66. doi: 10.31026/j.eng.2019.11.05.