dc.contributor.author: Chen, Liyang
dc.date.accessioned: 2016-05-04T01:11:23Z
dc.date.available: 2016-05-04T01:11:23Z
dc.date.issued: 2016
dc.identifier.uri: http://hdl.handle.net/10356/66892
dc.description.abstract: This project is a joint Final Year Project conducted by Zhou Xinzi and me. It aims to build a visual-based tourism assistant system. The system is designed for mobile phones and wearable devices such as see-through glasses, and its core functions are localizing the user from the device camera's video stream and marking surrounding objects. We investigated several types of image features and image-based localization algorithms. Building on research from the Rapid-Rich Object Search (ROSE) Lab and on image-based indoor localization work by Tao Qingyi in her final year project, the system adopts a client-server model and the Compact Descriptors for Visual Search (CDVS) solution from the ROSE Lab. We explored various system architectures and object recognition strategies. We captured 1326 images on the NTU campus as our test dataset and achieved good accuracy for both localization and object recognition. We also profiled the system's speed, memory usage, and network usage, and the results show that it is well suited to the mobile platform. In addition, the system provides a good user experience.
dc.format.extent: 47 p.
dc.language.iso: en
dc.rights: Nanyang Technological University
dc.subject: DRNTU::Engineering
dc.title: A visual-based tourism assistant system (Server and algorithm part)
dc.type: Final Year Project (FYP)
dc.contributor.supervisor: Cai Jianfei
dc.contributor.school: School of Computer Engineering
dc.description.degree: COMPUTER ENGINEERING