In the journal Multimodal Technologies and Interaction, on April 14, 2022, Maria Chiara Caschera, Patrizia Grifoni, and Fernando Ferri published an article entitled "Emotion Classification from Speech and Text in Videos Using a Multimodal Approach". Its abstract reads:
"Emotion classification is a research area in which there has been very intensive literature production concerning natural language processing, multimedia data, semantic knowledge discovery, social network mining, and text and multimedia data mining. This paper addresses the issue of emotion classification and proposes a method for classifying the emotions expressed in multimodal data extracted from videos. The proposed method models multimodal data as a sequence of features extracted from facial expressions, speech, gestures, and text, using a linguistic approach. Each sequence of multimodal data is correctly associated with the emotion by a method that models each emotion using a hidden Markov model. The trained model is evaluated on samples of multimodal sentences associated with seven basic emotions. The experimental results demonstrate a good classification rate for emotions."
The article is published as Open Access.
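To make the approach described in the abstract concrete, the following is a minimal sketch of the one-HMM-per-emotion classification scheme: train a separate hidden Markov model on the feature sequences of each emotion, then assign a new multimodal sentence to the emotion whose model gives the highest likelihood. This is not the authors' code; hmmlearn is used as a stand-in HMM library, feature extraction from face, speech, gestures, and text is assumed to have already produced a (T, D) array per sentence, the number of hidden states and other hyperparameters are illustrative, and the seven emotion labels assume the common Ekman set plus neutral, which the paper may define differently.

```python
# Sketch: one Gaussian HMM per emotion, classification by max log-likelihood.
# Assumes each multimodal sentence is already encoded as a (T, D) NumPy array
# of per-frame feature vectors (face, speech, gesture, text features fused).
import numpy as np
from hmmlearn import hmm

# Assumed label set: Ekman's six basic emotions plus neutral.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def train_emotion_models(train_data, n_states=4):
    """Fit one Gaussian HMM per emotion.

    train_data: dict mapping emotion label -> list of (T, D) feature sequences.
    n_states: illustrative number of hidden states (not from the paper).
    Returns a dict mapping emotion label -> fitted GaussianHMM.
    """
    models = {}
    for emotion, sequences in train_data.items():
        X = np.vstack(sequences)                   # stack all frames into one array
        lengths = [len(seq) for seq in sequences]  # sequence boundaries for hmmlearn
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=100)
        model.fit(X, lengths)
        models[emotion] = model
    return models

def classify(models, sequence):
    """Return the emotion whose HMM scores the sequence with the
    highest log-likelihood."""
    return max(models, key=lambda emotion: models[emotion].score(sequence))
```

The design choice this illustrates is the standard generative recipe for sequence classification: because each emotion gets its own model, adding or removing an emotion class only requires training or discarding one HMM, and classification reduces to comparing per-model likelihoods.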