Show simple item record

dc.contributor.author: Zhang, Ligang
dc.contributor.author: Tjondronegoro, Dian
dc.contributor.author: Chandran, Vinod
dc.contributor.author: Eggink, Jana
dc.date.accessioned: 2020-01-14T05:35:12Z
dc.date.available: 2020-01-14T05:35:12Z
dc.date.issued: 2016
dc.identifier.issn: 1380-7501
dc.identifier.doi: 10.1007/s11042-015-2497-5
dc.identifier.uri: http://hdl.handle.net/10072/390247
dc.description.abstract: Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies for affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their incapacity to bridge the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but have seldom been investigated for affective classification of facial images towards practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (or non-laboratory) data using facial expressions, where many "real-world" challenges are present, including pose, illumination, and size variations. The proposed method is novel, with its framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors, and a combination of point-based texture and geometry features. Performance comparisons over several key parameters of the relevant algorithms are conducted to find the optimum settings for high accuracy and fast computation. A comprehensive set of experiments with existing and new datasets shows that the method is robust to pose variations, fast enough for large-scale data, and as accurate on laboratory-based data as the method with state-of-the-art performance. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights.
dc.description.peerreviewed: Yes
dc.language: English
dc.publisher: Springer
dc.relation.ispartofpagefrom: 4669
dc.relation.ispartofpageto: 4695
dc.relation.ispartofissue: 8
dc.relation.ispartofjournal: Multimedia Tools and Applications
dc.relation.ispartofvolume: 75
dc.subject.fieldofresearch: Computer Software
dc.subject.fieldofresearch: Distributed Computing
dc.subject.fieldofresearch: Information Systems
dc.subject.fieldofresearch: Artificial Intelligence and Image Processing
dc.subject.fieldofresearchcode: 0803
dc.subject.fieldofresearchcode: 0805
dc.subject.fieldofresearchcode: 0806
dc.subject.fieldofresearchcode: 0801
dc.subject.keywords: Science & Technology
dc.subject.keywords: Computer Science, Software Engineering
dc.subject.keywords: Computer Science, Theory & Methods
dc.title: Towards robust automatic affective classification of images using facial expressions for practical applications
dc.type: Journal article
dc.type.description: C1 - Articles
dcterms.bibliographicCitation: Zhang, L; Tjondronegoro, D; Chandran, V; Eggink, J, Towards robust automatic affective classification of images using facial expressions for practical applications, Multimedia Tools and Applications, 2016, 75 (8), pp. 4669-4695
dc.date.updated: 2020-01-14T05:31:37Z
dc.description.version: Post-print
gro.rights.copyright: © 2016 Springer. This is an electronic version of an article published in Multimedia Tools and Applications, 75, 4669–4695 (2016). Multimedia Tools and Applications is available online at: http://link.springer.com/ with the open URL of your article.
gro.hasfulltext: Full Text
gro.griffith.author: Tjondronegoro, Dian W.
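The abstract describes a pipeline of face detection, fiducial (landmark) point detection, and point-based texture plus geometry features feeding a classifier. The following is a minimal sketch of that kind of pipeline, not the authors' implementation: the choice of dlib's frontal face detector and 68-point shape predictor, uniform-LBP texture patches, and a linear SVM are all illustrative stand-ins, and the model path is hypothetical.

```python
# Sketch of a landmark-based facial-expression classification pipeline:
# face detection -> fiducial points -> geometry + point-based texture
# features -> supervised emotion classification. Assumptions are noted inline.
import numpy as np
import dlib
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

detector = dlib.get_frontal_face_detector()
# Hypothetical model path; the standard 68-point predictor is distributed by dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_features(gray_img):
    """Return one feature vector for the first detected face, or None."""
    faces = detector(gray_img, 1)
    if not faces:
        return None
    shape = predictor(gray_img, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float64)

    # Geometry: landmark coordinates centred on the face and normalized
    # by the inter-ocular distance (outer eye corners: points 36 and 45).
    eye_dist = np.linalg.norm(pts[36] - pts[45]) + 1e-8
    geom = ((pts - pts.mean(axis=0)) / eye_dist).ravel()

    # Texture: a uniform-LBP histogram from a small patch around each landmark.
    h, w = gray_img.shape
    tex = []
    for x, y in pts.astype(int):
        x0, x1 = max(x - 8, 0), min(x + 8, w)
        y0, y1 = max(y - 8, 0), min(y + 8, h)
        patch = gray_img[y0:y1, x0:x1]
        if patch.size == 0:  # landmark fell outside the image
            tex.append(np.zeros(10))
            continue
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        tex.append(hist)

    return np.concatenate([geom, np.concatenate(tex)])

# Training on labelled face images, given feature matrix X and emotion labels y:
# clf = SVC(kernel="linear").fit(X, y); prediction = clf.predict([features])
```

Handling the "real-world" challenges the abstract emphasizes (pose, illumination, size variation) would require multi-view detectors and further normalization beyond this single-view sketch.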


This item appears in the following Collection(s)

  • Journal articles
    Contains articles published by Griffith authors in scholarly journals.
