dc.contributor.author: Seng, Jeremy Jie Min
dc.date.accessioned: 2016-11-11T06:40:25Z
dc.date.available: 2016-11-11T06:40:25Z
dc.date.issued: 2016
dc.identifier.uri: http://hdl.handle.net/10356/69145
dc.description.abstract: Word embedding has been a popular research topic since 2013, when Mikolov and his colleagues proposed several new algorithms. These algorithms, adapted from existing machine learning architectures, allow a machine to learn the meaning behind words in an unsupervised manner. They can determine how close two words are in a vector space by measuring the cosine similarity between their vectors. However, much work remains to determine whether these methods can be extended to capture the context of a sentence or a paragraph using these cosine distances. As the proposed algorithms require a large dictionary of words, commonly referred to as a corpus in this report, the author wishes to find out whether a corpus built from Wikipedia articles can show the closeness of two words in different contexts.
dc.format.extent: 49 p.
dc.language.iso: en
dc.rights: Nanyang Technological University
dc.subject: DRNTU::Engineering
dc.title: The study of word embedding representations in different domains
dc.type: Final Year Project (FYP)
dc.contributor.supervisor: Chng Eng Siong
dc.contributor.school: School of Computer Engineering
dc.description.degree: Bachelor of Engineering (Computer Science)
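
The abstract above describes measuring the closeness of two words as the cosine similarity between their embedding vectors. A minimal sketch of that measure, using hypothetical toy vectors (not embeddings from the report's Wikipedia corpus):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two word vectors: u.v / (|u| |v|)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional toy embeddings for illustration only;
# real word2vec-style vectors typically have hundreds of dimensions.
king = np.array([0.8, 0.6, 0.1])
queen = np.array([0.7, 0.7, 0.2])
apple = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # near 1.0 for similar words
print(cosine_similarity(king, apple))  # noticeably smaller for unrelated words
```

The measure depends only on the angle between vectors, not their lengths, which is why it is preferred over raw Euclidean distance for comparing embeddings.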

