Weakly Supervised Video Object Segmentation

Author(s)
Wang, Y
Hu, Y
Liew, Wee-Chung
Wang, J
Date
2019
Location

Jeju, Korea (South)

Abstract

This paper proposes a novel approach to weakly supervised video object segmentation that requires only a single pixel to guide the segmentation. We use two deep neural networks to obtain instance-level semantic segmentation masks and optical flow maps for each frame. An object probability map for the first frame of the video is generated by combining the semantic masks, the optical flow maps, and the guiding pixel. The object probability map is then propagated forward and backward through the video, becoming more accurate at each frame. Finally, the pixel-wise object segmentation mask of each frame is obtained by minimizing an energy function consisting of a unary term of object probability and pairwise terms of label smoothness potentials. We evaluate our method on a benchmark dataset, and the experimental results show that the proposed approach achieves impressive performance in comparison with state-of-the-art methods.
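The final step described in the abstract, minimizing an energy with a unary term from the object probability map and pairwise label-smoothness potentials, can be illustrated with a toy example. The sketch below is not the authors' implementation: the tiny probability grid, the Potts pairwise penalty, the `lam` weight, and the iterated-conditional-modes (ICM) solver are all illustrative assumptions standing in for whatever energy-minimization scheme the paper actually uses.

```python
import math

def segment(prob, lam=1.0, iters=10):
    """Binary labeling of a toy 'frame' by approximate energy minimization.

    prob: 2D list of object probabilities in (0, 1).
    lam:  weight of the pairwise (smoothness) term (illustrative choice).
    """
    h, w = len(prob), len(prob[0])

    def unary(i, j, label):
        # Unary cost: negative log-likelihood under the probability map.
        p = prob[i][j]
        return -math.log(p if label == 1 else 1.0 - p)

    # Initialize from the unary term alone (threshold at 0.5).
    labels = [[1 if prob[i][j] > 0.5 else 0 for j in range(w)]
              for i in range(h)]

    # ICM: repeatedly set each pixel to the label of lowest local energy.
    for _ in range(iters):
        changed = False
        for i in range(h):
            for j in range(w):
                costs = []
                for label in (0, 1):
                    c = unary(i, j, label)
                    # Potts pairwise term: penalize disagreement with
                    # each 4-connected neighbour.
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            c += lam * (label != labels[ni][nj])
                    costs.append(c)
                best = 0 if costs[0] <= costs[1] else 1
                if best != labels[i][j]:
                    labels[i][j] = best
                    changed = True
        if not changed:
            break
    return labels

# A noisy probability map: the isolated high value in the corner is
# smoothed away by the pairwise term.
prob = [[0.9, 0.8, 0.1],
        [0.8, 0.9, 0.1],
        [0.1, 0.1, 0.7]]
mask = segment(prob, lam=1.5)
```

With `lam=0` this reduces to simple thresholding of the probability map; increasing `lam` trades fidelity to the unary term for spatial coherence of the mask, which is the role the smoothness potentials play in the paper's formulation.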

Conference Title

IEEE Region 10 Annual International Conference, Proceedings/TENCON

Volume

2018-October

Subject

Artificial intelligence
