Visual question answering: Datasets, algorithms, and future challenges

Citation data:

Computer Vision and Image Understanding, ISSN: 1077-3142, Vol: 163, Pages: 3-20

Publication Year:
2017
DOI:
10.1016/j.cviu.2017.06.005
Author(s):
Kushal Kafle; Christopher Kanan
Publisher(s):
Elsevier BV
Tags:
Computer Science
Article description:
Visual Question Answering (VQA) is a recent problem in computer vision and natural language processing that has garnered a large amount of interest from the deep learning, computer vision, and natural language processing communities. In VQA, an algorithm needs to answer text-based questions about images. Since the release of the first VQA dataset in 2014, additional datasets have been released and many algorithms have been proposed. In this review, we critically examine the current state of VQA in terms of problem formulation, existing datasets, evaluation metrics, and algorithms. In particular, we discuss the limitations of current datasets with regard to their ability to properly train and assess VQA algorithms. We then exhaustively review existing algorithms for VQA. Finally, we discuss possible future directions for VQA and image understanding research.
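
The task described in the abstract, answering a free-form text question about an image, is most commonly operationalized as classification over a fixed set of frequent answers given fused image and question features. The sketch below illustrates that general formulation only; the architecture, dimensions, and fusion-by-concatenation choice are illustrative assumptions, not the specific method of Kafle and Kanan or of any particular system covered in the review.

# Minimal sketch of the common VQA formulation: classify over a fixed
# answer vocabulary given pre-extracted image features and an encoded
# question. All names and hyperparameters below are illustrative.
import torch
import torch.nn as nn


class SimpleVQABaseline(nn.Module):
    def __init__(self, image_dim=2048, vocab_size=10000,
                 embed_dim=300, hidden_dim=512, num_answers=1000):
        super().__init__()
        # Question encoder: word embeddings followed by an LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Classifier over the top-K most frequent answers, applied to the
        # concatenated image and question representations.
        self.classifier = nn.Sequential(
            nn.Linear(image_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, image_features, question_tokens):
        # image_features: (batch, image_dim) from a pretrained CNN
        # question_tokens: (batch, seq_len) integer word indices
        _, (h, _) = self.lstm(self.embed(question_tokens))
        question_features = h[-1]                       # (batch, hidden_dim)
        fused = torch.cat([image_features, question_features], dim=1)
        return self.classifier(fused)                   # answer logits


# Toy usage with random inputs.
model = SimpleVQABaseline()
logits = model(torch.randn(4, 2048), torch.randint(1, 10000, (4, 12)))
predicted_answers = logits.argmax(dim=1)                # indices into the answer list

Treating VQA as classification over a closed answer set is one of the design decisions the review examines, alongside the datasets and evaluation metrics used to train and score such models.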