From an image to a text description of an image
Date of Issue: 2017-04-17
School of Computer Science and Engineering
This project presents an implementation of a search feature that allows a user to look for a particular object of interest in a video. The main idea is to train a deep neural network architecture that outputs a sequence of words describing an image. The network consists of a convolutional neural network (CNN) that learns features of an image, and a long short-term memory (LSTM) unit that predicts the sequence of words from the learnt image features. This project is not about real-time object detection; instead, a video must be preprocessed before a user may search for an object that appears visually in the video.
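The search stage described above can be sketched as follows. This is a minimal, illustrative example assuming that the CNN+LSTM model has already produced one caption per sampled video frame; the function names (`build_caption_index`, `search`) and the sample captions are hypothetical, not taken from the project itself:

```python
from collections import defaultdict

def build_caption_index(frame_captions):
    """Map each caption word to the frame timestamps where it occurs.

    frame_captions: list of (timestamp_seconds, caption) pairs, assumed to
    come from the CNN+LSTM captioning model during video preprocessing.
    """
    index = defaultdict(list)
    for timestamp, caption in frame_captions:
        # Deduplicate words within one caption before indexing.
        for word in set(caption.lower().split()):
            index[word].append(timestamp)
    return index

def search(index, query):
    """Return sorted timestamps whose caption mentions the query word."""
    return sorted(index.get(query.lower(), []))

# Illustrative captions for three preprocessed frames.
captions = [
    (0.0, "a dog running on grass"),
    (5.0, "a person riding a bicycle"),
    (10.0, "a dog catching a frisbee"),
]
index = build_caption_index(captions)
print(search(index, "dog"))  # -> [0.0, 10.0]
```

Indexing captions ahead of time is what makes the lookup fast at query time, which is why the video is preprocessed rather than analysed on the fly.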
Final Year Project (FYP)
Nanyang Technological University