Deep interactive learning for fine-grained opinion mining: single-domain, cross-domain & cross-lingual
Date of Issue: 2018
School of Computer Science and Engineering
Centre for Computational Intelligence
Opinion mining, or sentiment analysis, provides a way to analyze the attitude or emotion of an opinion holder towards an entity or a feature of a product. The emergence of opinion mining techniques accompanies the enormous volume of text data from social media made available to the public, such as product reviews, blog posts, forum discussions and Twitter posts. Sentiment analysis has been actively studied by the Natural Language Processing (NLP) community since 2000. At this early stage, the main focus of sentiment analysis was the overall sentiment polarity of a given sentence or document, which we call coarse-grained sentiment analysis. For example, given the review sentence "The sound is clean and clear", the desired output is positive, without identifying the exact object the reviewer is talking about. For this task, various machine learning classifiers and deep learning models were developed to learn opinion-bearing features over the whole sentence or document to make predictions. However, a single sentiment score is far from enough to provide all the information necessary for decision making, which is why fine-grained opinion mining later gained popularity. Fine-grained opinion mining, or aspect-based sentiment analysis, mainly focuses on finding the opinion targets (aspects) in a given sentence and the emotions expressed towards them; this is closer to information extraction. For the example review sentence above, the model should extract "sound" as the aspect and "clean and clear" as the opinion. The extracted information can be further used for aspect categorization, aspect sentiment classification and opinion summarization, which together provide a thorough analysis of the input. This task is much more difficult than coarse-grained sentiment classification because more fine-grained features are needed to make token-level predictions.
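The token-level output described above is commonly represented as a BIO tagging over the sentence. The following sketch illustrates that target representation on the example sentence; the tag set (`ASP`/`OPN`) and the helper function are illustrative assumptions, not the thesis's actual data format.

```python
# Hypothetical illustration of the token-level (BIO) labeling scheme that
# fine-grained opinion mining targets: B-/I- prefixes mark the beginning
# and inside of aspect (ASP) and opinion (OPN) spans, O marks other tokens.
def bio_spans(tokens, tags):
    """Collect labeled (type, text) spans from a BIO tag sequence."""
    spans, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((label, " ".join(current)))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)
        else:
            if current:
                spans.append((label, " ".join(current)))
            current, label = [], None
    if current:
        spans.append((label, " ".join(current)))
    return spans

tokens = ["The", "sound", "is", "clean", "and", "clear"]
tags   = ["O", "B-ASP", "O", "B-OPN", "I-OPN", "I-OPN"]
print(bio_spans(tokens, tags))  # [('ASP', 'sound'), ('OPN', 'clean and clear')]
```

A sequence model trained for this task predicts the tag sequence; the span-collection step above then recovers the extracted aspects and opinions.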
This thesis addresses aspect-based sentiment analysis: extracting all the explicit aspects and opinions that appear in each review sentence. There have been active studies using machine learning models with manually defined rules or extensive human-engineered features. However, this line of work requires domain-specific knowledge and pipelined processes, which are inflexible for changing and informal texts. Another line of work applies deep learning to automatically learn a high-level representation for each word, but fails to model the important interactions among the words within a sentence. It has been shown that syntactic interactions and contextual relations among words are especially crucial for information propagation. Bearing this in mind, this thesis designs deep learning models that automatically exploit the relationships among the input tokens in order to extract more accurate information. The models are flexible enough to learn the desired patterns without manual construction. Unlike traditional deep learning models, the proposed approaches capture fine-grained correlations among the input tokens and produce token-level representations that inherit information from their correlated words. Specifically, we begin with the task of supervised single-domain aspect and opinion term extraction. The first model is a dependency-tree-based recursive neural network, which exploits the interactions of syntactically related word pairs given the dependency tree of a sentence. This approach, however, still requires a dependency parser as a linguistic tool and is prone to the parser's inevitable errors. This motivates the second method, which applies an attention mechanism to model the interactions among the words automatically.
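The core idea behind the attention-based second model, producing each token's representation as a weighted mix of all tokens, with weights derived from pairwise interactions, can be sketched as a minimal self-attention layer. The dimensions and random embeddings below are illustrative assumptions; this is not the thesis's actual architecture.

```python
# Minimal self-attention sketch: pairwise interaction scores between token
# embeddings are normalized by softmax and used to mix the embeddings, so
# each output row inherits information from its correlated words.
import numpy as np

def self_attention(X):
    """X: (n_tokens, d) embeddings -> (n_tokens, d) context-aware vectors."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                      # pairwise interactions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # softmax over tokens
    return weights @ X                                 # weighted mix per token

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))   # 6 tokens, 8-dim embeddings (illustrative)
H = self_attention(X)
print(H.shape)  # (6, 8): one context-aware vector per token
```

Because the weights are computed from the input itself, no external parser is required, which is exactly the advantage over the dependency-tree-based first model.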
We further extend from the single-domain setting to cross-domain and cross-lingual settings, where labeled data exists only in the source domain (or language), leading to an unsupervised adaptation problem. We explore deep models that learn shared spaces across different product domains within the same language, as well as across different languages. The shared spaces are built through common syntactic relations. We evaluate all the above methods on benchmark datasets to demonstrate the state-of-the-art performance of the proposed models.
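To make the idea of a shared space concrete, the sketch below aligns two embedding spaces with an orthogonal Procrustes mapping learned from a small seed lexicon. This is a standard alignment technique used purely to illustrate the cross-lingual shared-space idea; it is not the thesis's model, and the data is synthetic.

```python
# Illustrative cross-lingual alignment: find an orthogonal map W such that
# source embeddings S projected by W land near their target counterparts T,
# placing both languages in one shared space.
import numpy as np

def procrustes_align(S, T):
    """S, T: (n_pairs, d) paired embeddings -> orthogonal W with S @ W ~ T."""
    U, _, Vt = np.linalg.svd(S.T @ T)
    return U @ Vt

rng = np.random.default_rng(1)
T = rng.normal(size=(20, 5))                       # "target-language" vectors
R_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # hidden rotation
S = T @ R_true.T                                   # "source" = rotated target
W = procrustes_align(S, T)
print(np.allclose(S @ W, T, atol=1e-6))            # rotation recovered
```

In the fully unsupervised setting of the thesis, no seed pairs are available, so the shared spaces are instead learned from common structure such as syntactic relations.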
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence