Show simple item record

dc.contributor.author: Wang, S. [en_US]
dc.contributor.author: Liew, Alan Wee-Chung [en_US]
dc.contributor.editor: Simon Lucey, Roland Göcke, Patrick Lucey [en_US]
dc.date.accessioned: 2017-04-24T12:52:51Z
dc.date.available: 2017-04-24T12:52:51Z
dc.date.issued: 2008 [en_US]
dc.date.modified: 2013-05-28T23:20:39Z
dc.identifier.uri: http://hdl.handle.net/10072/23567
dc.description.abstract: As is well known, every speaker has their own talking style. Hence, lip shape and its movement can serve as a new biometric from which to infer a speaker's identity. Compared with traditional biometrics such as the human face and fingerprints, person verification based on lip features has the advantage of containing both static and dynamic information. Many researchers have demonstrated that incorporating dynamic information such as lip movement helps improve verification performance. However, the question of which is more discriminative, the static features or the dynamic features, has remained open. In this paper, a discriminative power analysis of static and dynamic lip features is performed. For the static lip features, a new feature representation comprising geometric features, contour descriptors and texture features is proposed, and a Gaussian Mixture Model (GMM) is employed as the classifier. For the dynamic features, a Hidden Markov Model (HMM) is employed as the classifier owing to its superiority in handling time-series data. Experiments are carried out on a database of 40 speakers collected in our lab. A detailed evaluation of the various static/dynamic lip feature representations is given, along with a discussion of their discriminative ability. The experimental results show that the dynamic lip shape information and the static lip texture information contain substantial identity-relevant information. [en_US]
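The GMM-based verification scheme the abstract describes for static lip features can be sketched as follows. This is a minimal illustration using scikit-learn, not the paper's implementation: the feature dimensionality, the toy data, the competing-model scoring, and the zero threshold are all assumptions for demonstration.

```python
# Hedged sketch of GMM-based speaker verification on static lip features.
# All data here is synthetic; dimensions and parameters are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy "static lip feature" vectors (e.g. geometric + texture descriptors);
# each speaker's features are drawn from a different distribution.
speaker_a = rng.normal(0.0, 1.0, size=(200, 8))
speaker_b = rng.normal(3.0, 1.0, size=(200, 8))

# Enrolment: fit one GMM per speaker on that speaker's feature vectors.
gmm_a = GaussianMixture(n_components=4, random_state=0).fit(speaker_a)
gmm_b = GaussianMixture(n_components=4, random_state=0).fit(speaker_b)

# Verification: score test features under the claimed speaker's model and
# under a competing model, then accept if the average log-likelihood ratio
# exceeds a threshold (0.0 here, purely for illustration).
test = rng.normal(0.0, 1.0, size=(50, 8))  # actually from speaker A
llr = gmm_a.score(test) - gmm_b.score(test)
accepted = llr > 0.0
print(accepted)
```

In practice the competing model would typically be a universal background model trained on many speakers, and the threshold would be tuned on held-out data; the dynamic (HMM) branch would score feature sequences instead of independent frames.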
dc.description.peerreviewed: Yes [en_US]
dc.description.publicationstatus: Yes [en_US]
dc.format.extent: 248131 bytes
dc.format.mimetype: application/pdf
dc.language: English [en_US]
dc.language.iso: en_US
dc.publisher: International Speech Communication Association, ISCA [en_US]
dc.publisher.place: Australia [en_US]
dc.publisher.uri: http://www.isca-speech.org/archive_open/avsp08/index.html [en_US]
dc.relation.ispartofstudentpublication: N [en_US]
dc.relation.ispartofconferencename: International Conference on Auditory-Visual Speech Processing, AVSP2008 [en_US]
dc.relation.ispartofconferencetitle: Proceedings of the International Conference on Auditory-Visual Speech Processing 2008 [en_US]
dc.relation.ispartofdatefrom: 2008-09-26 [en_US]
dc.relation.ispartofdateto: 2008-09-29 [en_US]
dc.relation.ispartoflocation: Moreton Island, Australia [en_US]
dc.rights.retention: Y [en_US]
dc.subject.fieldofresearch: Pattern Recognition and Data Mining [en_US]
dc.subject.fieldofresearchcode: 080109 [en_US]
dc.title: Static and Dynamic Lip Feature Analysis for Speaker Verification [en_US]
dc.type: Conference output [en_US]
dc.type.description: E1 - Conference Publications (HERDC) [en_US]
dc.type.code: E - Conference Publications [en_US]
gro.rights.copyright: Copyright 2008 ISCA and the Authors. The attached file is reproduced here in accordance with the copyright policy of the publisher. For information about this conference, please refer to the conference’s website or contact the authors. [en_US]
gro.date.issued: 2008
gro.hasfulltext: Full Text


This item appears in the following Collection(s)

  • Conference outputs
    Contains papers delivered by Griffith authors at national and international conferences.