ICA-Based Lip Feature Representation for Speaker Authentication
Compared with "static" biometrics such as the face and fingerprint, person authentication based on lip movement has the advantage of incorporating "dynamic" features that carry rich information about speaker identity. This paper proposes a new lip feature representation and analyzes its discriminative power for person authentication. Since the original lip features are usually high-dimensional, independent component analysis (ICA) is adopted for dimension reduction and discriminative feature extraction. A hidden Markov model (HMM) is then employed as the classifier because of its strength in modeling time-series data. Experiments are carried out on a database of 40 speakers collected in our lab. From the experimental results, a detailed evaluation of various lip feature representations is made; the proposed representation achieves a 98.07% accuracy rate in speaker recognition and a 2.31% equal error rate (EER) in speaker authentication.
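The abstract's dimension-reduction step can be illustrated with a minimal FastICA-style sketch in NumPy. This is not the authors' implementation: the deflation scheme, tanh nonlinearity, iteration count, and the assumption that each row is one frame-level lip feature vector are all illustrative choices; the lip database and the downstream HMM classifier are not shown.

```python
import numpy as np

def fastica(X, n_components, n_iter=200, seed=0):
    """Minimal FastICA sketch (deflation, tanh nonlinearity).

    X: (n_samples, n_features) matrix, e.g. one lip feature vector per frame.
    Returns the estimated independent components, (n_samples, n_components).
    """
    rng = np.random.default_rng(seed)
    # Center, then whiten via SVD (a PCA-style preprocessing step).
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    K = (Vt[:n_components] / S[:n_components, None]) * np.sqrt(X.shape[0])
    Z = Xc @ K.T  # whitened data: unit covariance, reduced dimension

    W = np.zeros((n_components, n_components))
    for i in range(n_components):
        w = rng.normal(size=n_components)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            # FastICA fixed-point update: w+ = E[Z g(w.Z)] - E[g'(w.Z)] w
            g = np.tanh(Z @ w)
            g_prime = 1.0 - g ** 2
            w_new = (Z * g[:, None]).mean(axis=0) - g_prime.mean() * w
            # Deflation: decorrelate from previously extracted components.
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            w = w_new
        W[i] = w
    return Z @ W.T
```

Because the unmixing rows are kept orthonormal on the whitened data, the returned components are approximately uncorrelated with unit variance; in the paper's pipeline, such low-dimensional sequences would then be fed to the HMM classifier.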
Proceedings of Third International IEEE Conference on Signal-Image Technologies and Internet-Based System
Copyright 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Communication Technology and Digital Media Studies