Dynamic Texture Comparison Using Derivative Sparse Representation: Application to Video-Based Face Recognition
Author(s)
Hajati, Farshid
Tavakolian, Mohammad
Gheisari, Soheila
Gao, Yongsheng
Mian, Ajmal S
Griffith University Author(s)
Year published
2017
Metadata
Abstract
Video-based face, expression, and scene recognition are fundamental problems in human–machine interaction, especially when only short-length videos are available. In this paper, we present a new derivative sparse representation approach for face and texture recognition using short-length videos. First, it builds local linear subspaces of dynamic texture segments by computing spatiotemporal directional derivatives in a cylinder neighborhood within dynamic textures. Unlike traditional methods, a nonbinary texture coding technique is proposed to extract high-order derivatives using continuous circular and cylinder regions to avoid aliasing effects. Then, these local linear subspaces of texture segments are mapped onto a Grassmann manifold via sparse representation. A new joint sparse representation algorithm is developed to establish correspondences between subspace points on the manifold for measuring the similarity between two dynamic textures. Extensive experiments on the Honda/UCSD, CMU Motion of Body, YouTube, and DynTex datasets show that the proposed method consistently outperforms state-of-the-art methods in dynamic texture recognition and achieves the highest accuracy reported to date on the challenging YouTube face dataset. These results demonstrate the effectiveness of the proposed method for video-based face recognition in human–machine system applications.
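The core geometric step described in the abstract — representing a dynamic texture segment as a local linear subspace and comparing subspaces as points on a Grassmann manifold — can be illustrated with a minimal sketch. This is not the authors' implementation (their method additionally uses derivative-based texture coding and joint sparse representation); it only shows the standard building blocks: extracting an orthonormal subspace basis via SVD, and measuring a Grassmann geodesic distance through principal angles. The function names and the choice of subspace dimension `k` are illustrative assumptions.

```python
import numpy as np

def subspace_basis(X, k):
    # X: d x n data matrix (e.g. vectorized frames of a texture segment).
    # Returns an orthonormal basis (d x k) of the top-k left singular
    # subspace -- a point on the Grassmann manifold G(k, d).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def grassmann_distance(U1, U2):
    # Geodesic distance between two subspaces via principal angles:
    # the singular values of U1^T U2 are the cosines of those angles.
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))
    return float(np.sqrt(np.sum(theta ** 2)))
```

For example, two segments producing the same subspace have distance 0, while segments spanning mutually orthogonal directions attain the maximal distance for the given `k`. A recognition pipeline in this spirit would assign a query segment to the gallery class minimizing this distance (or, as in the paper, solve a sparse representation over gallery subspace points instead of taking a nearest neighbor).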
Journal Title
IEEE Transactions on Human-Machine Systems
Volume
PP
Issue
99
Subject
Electrical engineering not elsewhere classified
Electronics, sensors and digital hardware not elsewhere classified