Motive classification using deep learning approaches
Date of Issue: 2019
School of Electrical and Electronic Engineering
Automated implicit motive classification can be formulated as a natural language processing (NLP) task. With the rapid growth of computational capabilities, increasingly complex NLP models have been developed, improving classification accuracy. In this dissertation, we study several deep learning models for implicit motive classification: the Long Short-Term Memory (LSTM) model, the Gated Recurrent Unit (GRU) model, the Bidirectional GRU model, the Transformer model, and the Bidirectional Encoder Representations from Transformers (BERT) model. The architecture of each of these models is reviewed and illustrated, and several motive classification models are implemented based on them. The performance of these models is evaluated and compared on benchmark datasets, measured by precision, recall, and F1 score. From the experimental studies, we conclude that the base-BERT model achieves the best performance on the dataset. The large-BERT model achieves the second-best performance; however, it has the largest number of trainable parameters among these models and requires the most training time. The Bidirectional GRU model ranks third, while requiring considerably less computing power and server training time. The simple GRU model performs worst, which is consistent with the theoretical analysis: it has the simplest structure, with only a single GRU layer and no use of reverse-time information. This report states the methodologies and implementation details used in the experiments, followed by discussion and analysis of the obtained results.
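As a hedged illustration (not drawn from the dissertation itself), the evaluation metrics named above can be computed from gold and predicted labels as sketched below; the function name, label encoding, and choice of positive class are assumptions made for this example only.

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one class of a classification task.

    y_true and y_pred are equal-length sequences of labels; `positive`
    selects which label counts as the positive class (an assumption here).
    """
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)

    # Guard against division by zero when a class is never predicted or never present.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For a multi-class motive classification task, these per-class values would typically be averaged (macro or weighted) across the motive classes.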
Engineering::Electrical and electronic engineering