Automated real-time analytics for multi-party dialogues
Date of Issue: 2015
School of Electrical and Electronic Engineering
This project explores detecting high-level features, such as human personality and emotion, from the low-level prosodic cues displayed in a conversation. After listening to audio recordings, each two minutes long, participants rated the high-level features displayed by each speaker in the conversation. The high-level features were selected because they are relatively easy for human listeners to identify in a conversation. Analysis of the participants' annotations showed that not all high-level features can be identified reliably. In the two-party dialogues, politeness, confusion and hostility were the most easily identifiable features, according to both the annotations and the classification results. The same analysis was then extended to multi-party dialogues, represented in this project by groups of four speakers. The annotations gave a slightly different outcome from the two-party dialogues: interest, disagreement, likeability, politeness and respect were the more easily identifiable features. However, the classifiers built from the annotations detected likeability, friendliness, respect, confusion and hostility most accurately.
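The pipeline the abstract describes, mapping low-level prosodic cues to annotated high-level labels via a trained classifier, can be sketched as follows. This is a minimal illustration only: the prosodic features (mean pitch, speaking rate, energy), the labels, the training data, and the nearest-centroid classifier are all hypothetical stand-ins, not the project's actual feature set or model.

```python
# Minimal sketch: classifying a high-level feature (e.g. politeness vs.
# hostility) from low-level prosodic cues. All names and data below are
# hypothetical illustrations, not the project's actual setup.
from statistics import mean

def train_centroids(samples):
    """samples: list of (prosodic_vector, label) pairs.
    Returns one centroid (mean vector) per label."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {label: [mean(dim) for dim in zip(*vecs)]
            for label, vecs in by_label.items()}

def classify(centroids, vec):
    """Assign the label whose centroid is nearest in squared
    Euclidean distance."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(vec, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical prosodic vectors: (mean pitch in Hz, speaking rate
# in syllables/s, normalized energy), labelled by annotators.
training = [
    ((180.0, 3.2, 0.40), "polite"),
    ((175.0, 3.0, 0.50), "polite"),
    ((220.0, 5.1, 0.90), "hostile"),
    ((230.0, 5.4, 0.80), "hostile"),
]
model = train_centroids(training)
print(classify(model, (178.0, 3.1, 0.45)))  # prints "polite"
```

A real system would extract such cues automatically from the audio (e.g. with a pitch tracker) and would use a stronger classifier, but the structure, annotated examples in, per-feature predictions out, is the same.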
DRNTU::Engineering::Electrical and electronic engineering
Final Year Project (FYP)
Nanyang Technological University