Non-verbal speech analysis of parent-child dialog
Date of Issue: 2017-12-29
School of Electrical and Electronic Engineering
In recent years, human emotion recognition has been a top priority for researchers across various domains. Although various physio-psychological parameters have been used as indices of human emotion, the speech signal is considered an important parameter that reflects a person's emotional state. The importance of automated emotion recognition models can be attributed to the growing demand for socially intelligent systems. This dissertation focuses on analysing speech signals by extracting non-verbal speech features in order to recognize emotions and classify them accordingly. The research was carried out using audio data recorded from different parent-child conversations, with pictures provided to the participants as visual stimuli. Features were extracted from the audio data using MATLAB and the openSMILE toolbox. The extracted features were classified into the five classes labelled in the experiment using the WEKA tool. To achieve higher classification accuracy, different pairs of classes were chosen based on the K-means clustering algorithm and binary classification was performed on each pair. Scatter plots are presented for visual inspection, and the classification accuracy of the various classifier algorithms is tabulated. The classifiers were then ranked by their classification accuracy.
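The pair-selection step described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: it assumes per-utterance acoustic feature vectors have already been extracted (e.g. by openSMILE), and it substitutes synthetic data for the five labelled emotion classes; the class names, the use of K-means cluster agreement as a separability score, and the logistic-regression classifier are all illustrative choices, not the dissertation's actual pipeline (which used WEKA).

```python
# Hypothetical sketch of K-means-based class-pair selection followed by
# binary classification. Synthetic vectors stand in for extracted features.
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in for acoustic feature vectors of five labelled classes
classes = {name: rng.normal(loc=i * 2.0, scale=1.0, size=(40, 6))
           for i, name in enumerate(["neutral", "happy", "sad",
                                     "angry", "surprised"])}

def pair_separation(a, b):
    """Score how cleanly K-means (k=2) separates the two classes:
    1.0 means the clusters coincide with the class labels."""
    X = np.vstack([classes[a], classes[b]])
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    y = np.array([0] * len(classes[a]) + [1] * len(classes[b]))
    agree = (pred == y).mean()
    return max(agree, 1.0 - agree)  # cluster ids are arbitrary

# Choose the class pair that K-means separates most cleanly
best_pair = max(combinations(classes, 2), key=lambda p: pair_separation(*p))

# Binary classification on the chosen pair, scored by 5-fold cross-validation
X = np.vstack([classes[best_pair[0]], classes[best_pair[1]]])
y = np.array([0] * 40 + [1] * 40)
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(best_pair, round(acc, 3))
```

In this toy setting, a well-separated pair yields a near-perfect binary accuracy; on real acoustic features the same ranking-by-separability idea identifies which emotion pairs are easiest to discriminate.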
DRNTU::Engineering::Electrical and electronic engineering