Advancing Vietnamese Visual Question Answering with Transformer and Convolutional Integration
Computers and Electrical Engineering, ISSN: 0045-7906, Vol: 119, Page: 109474
2024
- Captures: 5
- Mentions: 1
Metric Options: Counts
Selecting the 1-year or 3-year option changes the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews published in the same journal within the selected time period. The 1-year option compares the metrics against other articles/reviews published in the same calendar year; the 3-year option compares against those published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. With the 3-year option, a 90% reading means it is performing better than 90% of the articles/reviews published in that journal in 2019, 2018, and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
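The percentile logic described above is easy to state precisely. The following is a minimal Python sketch of the idea, not PlumX's or SciVal's actual implementation: it ranks one article's citation count against a cohort of articles from the same journal and time window. The function name and the sample cohort values are illustrative assumptions.

```python
# A minimal sketch (not PlumX's actual method) of the citation-percentile
# benchmark described above: rank one article's citation count against all
# articles/reviews published in the same journal within the selected
# 1-year or 3-year window.

def citation_percentile(article_citations: int, cohort_citations: list[int]) -> float:
    """Percentage of cohort articles this article outperforms."""
    if not cohort_citations:
        return 0.0
    beaten = sum(1 for c in cohort_citations if c < article_citations)
    return 100.0 * beaten / len(cohort_citations)

# Example: an article with 42 citations against a hypothetical 2019 cohort;
# a result of 90.0 reads as "better than 90% of the cohort".
cohort_2019 = [3, 7, 12, 18, 25, 30, 35, 40, 41, 50]
print(f"{citation_percentile(42, cohort_2019):.1f}%")  # -> 90.0%
```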
Metrics Details
- Captures: 5
- Readers: 5
- Mentions: 1
- News Mentions: 1
- News: 1
Most Recent News
Researchers from University of the Sciences Discuss Findings in Computers and Electrical Engineering (Advancing Vietnamese Visual Question Answering With Transformer and Convolutional)
2024 OCT 03 (NewsRx) -- By a News Reporter-Staff News Editor at Engineering Daily News -- New research on Engineering - Computers and Electrical Engineering
Article Description
Visual Question Answering (VQA) has recently emerged as a promising research domain, captivating the interest of many in the fields of artificial intelligence and computer vision. Despite the prevalence of approaches in English, there is a notable lack of systems developed specifically for certain languages, particularly Vietnamese. This study aims to bridge this gap by conducting comprehensive experiments on the Vietnamese Visual Question Answering (ViVQA) dataset and demonstrating the effectiveness of our proposed model. In response to community interest, we have developed a model that enhances image representation capabilities, thereby improving overall performance in the ViVQA system. We therefore propose AViVQA-TranConI (Advancing Vietnamese Visual Question Answering with Transformer and Convolutional Integration). AViVQA-TranConI integrates Bootstrapping Language-Image Pre-training with frozen unimodal models (BLIP-2) and the convolutional neural network EfficientNet to extract and process both local and global features from images. This integration leverages the strengths of transformer-based architectures for capturing comprehensive contextual information and of convolutional networks for capturing detailed local features. By freezing the parameters of these pre-trained models, we significantly reduce computational cost and training time while maintaining high performance; this approach improves image representation and enhances the performance of existing VQA systems. We then leverage a multi-modal fusion module based on a general-purpose multi-modal foundation model (BEiT-3) to fuse the visual and textual features. Our experimental findings demonstrate that AViVQA-TranConI surpasses competing baselines, achieving promising performance, most notably an accuracy of 71.04% on the test set of the ViVQA dataset, marking a significant advancement in our research area. The code is available at https://github.com/nngocson2002/ViVQA.
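The description above maps to a simple architectural pattern: freeze two pretrained image encoders, combine their local and global features, and fuse the result with the question. The PyTorch sketch below is an illustrative reconstruction, not the authors' implementation (that is in the linked repository). The transformer image encoder is passed in abstractly (e.g., a BLIP-2 vision tower), cross-attention stands in for the paper's BEiT-3-based fusion module, and all dimensions, module names, and the fusion choice are assumptions.

```python
# A minimal PyTorch sketch of the AViVQA-TranConI pattern, NOT the authors'
# code (see https://github.com/nngocson2002/ViVQA): a frozen transformer
# image encoder supplies global features, a frozen CNN supplies local
# features, and a fusion step combines them with the question features.

import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

class TranConIVQA(nn.Module):
    def __init__(self, global_encoder: nn.Module, global_dim: int,
                 text_dim: int, hidden: int, num_answers: int):
        super().__init__()
        # Frozen pretrained encoders: no gradient updates, so training is
        # cheaper, as the abstract describes.
        self.cnn = efficientnet_b0(weights=EfficientNet_B0_Weights.DEFAULT).features
        self.global_encoder = global_encoder  # e.g. a BLIP-2 vision tower (assumed)
        for p in self.cnn.parameters():
            p.requires_grad = False
        for p in self.global_encoder.parameters():
            p.requires_grad = False

        self.local_proj = nn.Linear(1280, hidden)   # EfficientNet-B0 output channels
        self.global_proj = nn.Linear(global_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        # Stand-in for the BEiT-3-based multi-modal fusion module in the paper.
        self.fusion = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(hidden, num_answers)

    def forward(self, image, text_feats):
        # Local features: CNN feature map flattened into a token sequence.
        local = self.cnn(image)                     # (B, 1280, H', W')
        local = local.flatten(2).transpose(1, 2)    # (B, H'*W', 1280)
        local = self.local_proj(local)
        with torch.no_grad():
            glob = self.global_encoder(image)       # (B, N, global_dim), assumed shape
        visual = torch.cat([self.global_proj(glob), local], dim=1)
        # Question tokens attend over the combined visual tokens.
        q = self.text_proj(text_feats)              # (B, T, hidden)
        fused, _ = self.fusion(q, visual, visual)
        return self.classifier(fused.mean(dim=1))   # answer logits
```

Treating the answer as a classification over a fixed answer vocabulary, as the final linear layer does here, is a common VQA design choice; whether the paper does exactly this is not stated in the abstract.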
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0045790624004014
- http://dx.doi.org/10.1016/j.compeleceng.2024.109474
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85199751273&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S0045790624004014
- https://dx.doi.org/10.1016/j.compeleceng.2024.109474
Elsevier BV