A unified cycle-consistent neural model for text and image retrieval

Multimedia Tools and Applications, ISSN: 1573-7721, Vol: 79, Issue: 35-36, Page: 25697-25721
2020
  • Citations: 14
  • Usage: 0
  • Captures: 5
  • Mentions: 0
  • Social Media: 0


Article Description

Text-image retrieval has recently become a very active research field, thanks to the development of deep learnable architectures that can retrieve visual items given textual queries and vice versa. The key idea of many state-of-the-art approaches has been to learn a joint multi-modal embedding space in which text and images can be projected and compared. Here we take a different approach and reformulate the problem of text-image retrieval as that of learning a translation between the textual and visual domains. Our proposal leverages an end-to-end trainable architecture that can translate text into image features and vice versa, and regularizes this mapping with a cycle-consistency criterion. Experimental evaluations for text-to-image and image-to-text retrieval, conducted on small-, medium- and large-scale datasets, show consistent improvements over the baselines, confirming the appropriateness of a cycle-consistency constraint for the text-image matching task.
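To make the described idea concrete, the following is a minimal PyTorch sketch of a cycle-consistency regularizer for text-image feature translation. It is not the authors' implementation: the Translator module, the feature dimensions, the matching term, and the loss weighting are illustrative assumptions chosen only to show how a round-trip (text → image features → text) penalty can be combined with a matching objective.

```python
# Minimal sketch (assumption: PyTorch; not the paper's actual architecture) of
# cycle-consistent translation between text and image feature spaces.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Translator(nn.Module):
    """Maps features from one modality's space to the other's (hypothetical sizes)."""
    def __init__(self, in_dim, out_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical dimensions for pre-extracted text and image embeddings.
txt_dim, img_dim = 300, 2048
t2i = Translator(txt_dim, img_dim)   # text  -> image features
i2t = Translator(img_dim, txt_dim)   # image -> text  features

def cycle_consistent_loss(txt_feats, img_feats, lambda_cyc=1.0):
    """Matching loss on translated features plus a cycle-consistency regularizer."""
    pred_img = t2i(txt_feats)          # text translated into the image space
    pred_txt = i2t(img_feats)          # image translated into the text space

    # Matching term: translated features of a paired (text, image) sample
    # should land close to the ground-truth features of the other modality.
    match = F.mse_loss(pred_img, img_feats) + F.mse_loss(pred_txt, txt_feats)

    # Cycle term: translating forth and back should recover the original features.
    cyc = F.l1_loss(i2t(pred_img), txt_feats) + F.l1_loss(t2i(pred_txt), img_feats)
    return match + lambda_cyc * cyc

# Example usage with random stand-in features for a batch of 32 text-image pairs.
loss = cycle_consistent_loss(torch.randn(32, txt_dim), torch.randn(32, img_dim))
loss.backward()
```

At retrieval time, a query from one modality would be translated into the other feature space and ranked against the gallery by a similarity measure (e.g. cosine similarity); the cycle term acts purely as a regularizer during training.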
