dc.contributor.author      Ng, Jing Nee
dc.date.accessioned        2017-04-17T08:44:09Z
dc.date.available          2017-04-17T08:44:09Z
dc.date.issued             2017
dc.identifier.uri          http://hdl.handle.net/10356/70231
dc.description.abstract    This paper examines the use of several popular object detection frameworks, namely Fast-RCNN, Faster-RCNN, and the more recent real-time object detection system, YOLO. The data used in this paper was collected from Flickr to better represent images that could be found on the electronic devices of potential suspects. A total of 90,000 images were used across four experiments of 10,000, 20,000, 40,000, and 90,000 images. The VGG_CNN_M_1024 model achieved average precisions (AP) of 51.02% and 61.03% for Fast-RCNN and Faster-RCNN respectively. The PVANet model achieved an AP of 69.15% on Faster-RCNN. Lastly, the YOLO model achieved an AP of 60.60%. The best AP for each model was attained on the largest dataset, Flickr90k. The trained models were then tested on the NIST database of 2,212 images from the tattoo similarity use case (original, uncropped version), achieving an AP of 97.34% using the PVANet model trained on Flickr90k. Another set of 3,847 images was acquired from NIST’s background tattoo images (original, uncropped version); on this set an AP of 85.07% was achieved.    en_US
dc.format.extent           52 p.    en_US
dc.language.iso            en    en_US
dc.rights                  Nanyang Technological University
dc.subject                 DRNTU::Engineering::Computer science and engineering    en_US
dc.title                   Large scale tattoo localization    en_US
dc.type                    Final Year Project (FYP)    en_US
dc.contributor.supervisor  Kong Wai-Kin Adams    en_US
dc.contributor.school      School of Computer Science and Engineering    en_US
dc.description.degree      COMPUTER SCIENCE    en_US

