dc.contributor.author	Ge, ZongYuan
dc.contributor.author	McCool, Chris
dc.contributor.author	Sanderson, Conrad
dc.contributor.author	Wang, Peng
dc.contributor.author	Liu, Lingqiao
dc.contributor.author	Reid, Ian
dc.contributor.author	Corke, Peter
dc.date.accessioned	2020-07-30T01:46:02Z
dc.date.available	2020-07-30T01:46:02Z
dc.date.issued	2016
dc.identifier.doi	10.1109/dicta.2016.7797039
dc.identifier.uri	http://hdl.handle.net/10072/395907
dc.description.abstract	Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification. In this work we present the novel task of video-based fine-grained object classification, propose a corresponding new video dataset, and perform a systematic study of several recent deep convolutional neural network (DCNN) based approaches, which we specifically adapt to the task. We evaluate three-dimensional DCNNs, two-stream DCNNs, and bilinear DCNNs. Two forms of the two-stream approach are used, where spatial and temporal data from two independent DCNNs are fused either via early fusion (combination of the fully-connected layers) or via late fusion (concatenation of the softmax outputs of the DCNNs). For bilinear DCNNs, information from the convolutional layers of the spatial and temporal DCNNs is combined via local co-occurrences. We then fuse the bilinear DCNN and the early fusion of the two-stream approach to combine the spatial and temporal information at the local and global level (Spatio-Temporal Co-occurrence). Using the new and challenging video dataset of birds, classification performance is improved from 23.1% (using single images) to 41.1% when using the Spatio-Temporal Co-occurrence system. Incorporating automatically detected bounding box location further improves the classification accuracy to 53.6%.
dc.description.peerreviewed	Yes
dc.publisher	IEEE
dc.relation.ispartofconferencename	International Conference on Digital Image Computing: Techniques and Applications (DICTA 2016)
dc.relation.ispartofconferencetitle	2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
dc.relation.ispartofdatefrom	2016-11-30
dc.relation.ispartofdateto	2016-12-02
dc.relation.ispartoflocation	Gold Coast, Australia
dc.subject.fieldofresearch	Artificial intelligence
dc.subject.fieldofresearchcode	4602
dc.title	Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification
dc.type	Conference output
dc.type.description	E1 - Conferences
dcterms.bibliographicCitation	Ge, Z; McCool, C; Sanderson, C; Wang, P; Liu, L; Reid, I; Corke, P, Exploiting Temporal Information for DCNN-Based Fine-Grained Object Classification, 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016
dc.date.updated	2020-07-29T04:27:31Z
dc.description.version	Accepted Manuscript (AM)
gro.rights.copyright	© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
gro.hasfulltext	Full Text
gro.griffith.author	Sanderson, Conrad
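The fusion schemes named in the abstract (early fusion of fully-connected layers, late fusion of softmax outputs, and bilinear combination of convolutional features via co-occurrences) can be sketched as follows. This is a minimal NumPy illustration with made-up dimensions and random stand-in features, not the authors' trained DCNN pipeline; all array sizes and the shared classifier weights are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_classes = 5  # hypothetical; the dataset's actual class count differs

# Stand-ins for the outputs of two independent streams for one video clip.
spatial_fc = rng.standard_normal(128)     # fully-connected features, spatial stream
temporal_fc = rng.standard_normal(128)    # fully-connected features, temporal stream
spatial_logits = rng.standard_normal(n_classes)
temporal_logits = rng.standard_normal(n_classes)

# Early fusion: combine the fully-connected layers of the two streams
# and classify the joint vector (classifier weights W are hypothetical).
joint_features = np.concatenate([spatial_fc, temporal_fc])      # shape (256,)
W = rng.standard_normal((n_classes, joint_features.size)) * 0.01
early_scores = softmax(W @ joint_features)

# Late fusion: combine the softmax outputs of the two independent streams
# (here by averaging the two probability vectors).
late_scores = (softmax(spatial_logits) + softmax(temporal_logits)) / 2

# Bilinear combination: outer products of spatial and temporal convolutional
# features at corresponding locations, pooled (summed) over locations.
spatial_conv = rng.standard_normal((10, 8))   # (locations, channels)
temporal_conv = rng.standard_normal((10, 8))  # (locations, channels)
bilinear = np.einsum('lc,ld->cd', spatial_conv, temporal_conv)  # shape (8, 8)
```

Early fusion lets a single classifier model interactions between the two feature vectors at the global level, whereas late fusion only mixes per-stream decisions; the bilinear term captures local spatial-temporal co-occurrences, which the paper's Spatio-Temporal Co-occurrence system then combines with the early-fused representation.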


This item appears in the following Collection(s)

  • Conference outputs
    Contains papers delivered by Griffith authors at national and international conferences.
