Left/right hand segmentation in egocentric videos

Citation data:

Computer Vision and Image Understanding, ISSN: 1077-3142, Vol. 154, pp. 73-81

Publication Year:
2017
DOI:
10.1016/j.cviu.2016.09.005
Author(s):
Alejandro Betancourt, Pietro Morerio, Emilia Barakova, Lucio Marcenaro, Matthias Rauterberg, Carlo Regazzoni
Publisher(s):
Elsevier BV
Tags:
Computer Science
Article description:
Wearable cameras allow people to record their daily activities from a user-centered (First Person Vision) perspective. Due to their favorable location, wearable cameras frequently capture the hands of the user, and may thus represent a promising user-machine interaction tool for different applications. Existing First Person Vision methods handle hand segmentation as a background-foreground problem, ignoring two important facts: i) hands are not a single “skin-like” moving element, but a pair of interacting cooperative entities; ii) close hand interactions may lead to hand-to-hand occlusions and, as a consequence, create a single hand-like segment. These facts complicate a proper understanding of hand movements and interactions. Our approach extends traditional background-foreground strategies by including a hand-identification step (left/right) based on a Maxwell distribution of angle and position. Hand-to-hand occlusions are addressed by exploiting temporal superpixels. The experimental results show that, in addition to a reliable left/right hand segmentation, our approach considerably improves the traditional background-foreground hand segmentation.
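To make the left/right identification step more concrete, the sketch below shows one plausible way such a classifier could be set up. It is only an illustration of the general idea, not the authors' implementation: the choice of features (major-axis orientation of the segment and normalised centroid x-position), the mirroring of the right-hand model, and the Maxwell scale parameters are all assumptions introduced here; the real parameters would have to be fitted on labelled training data. The example uses NumPy and scipy.stats.maxwell.

```python
# Hypothetical sketch of a left/right hand-identification step based on
# Maxwell-distributed angle and position features. Scale values are placeholders.
import numpy as np
from scipy.stats import maxwell


def segment_features(mask):
    """Major-axis angle (radians) and normalised centroid x of a binary hand mask."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Second-order central moments give the orientation of the segment.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # in [-pi/2, pi/2]
    return angle, cx / mask.shape[1]


def classify_left_right(mask, scale_angle=0.6, scale_x=0.3):
    """Label a hand segment 'left' or 'right' by comparing Maxwell log-likelihoods."""
    angle, x_norm = segment_features(mask)
    # Left-hand model: angle and position measured from the left image border;
    # the right-hand model simply mirrors both features.
    left_ll = (maxwell.logpdf(angle + np.pi / 2.0, scale=scale_angle)
               + maxwell.logpdf(x_norm, scale=scale_x))
    right_ll = (maxwell.logpdf(np.pi / 2.0 - angle, scale=scale_angle)
                + maxwell.logpdf(1.0 - x_norm, scale=scale_x))
    return "left" if left_ll > right_ll else "right"


if __name__ == "__main__":
    # Toy example: a blob in the lower-right of the frame is labelled "right".
    mask = np.zeros((120, 160), dtype=bool)
    mask[80:115, 110:150] = True
    print(classify_left_right(mask))
```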
