
Vertical Search Engine Integrating Text and Image Analysis

The iLike image search method combines analysis of text-based and content-based image features to deliver results that match user queries

Background

When searching for multimedia content online or in large-scale repositories (e.g., the Library of Congress Prints and Photographs Catalog), results are typically retrieved using text-based searching of surrounding text or manually annotated metadata. Content-based image retrieval (CBIR) has been intensively studied in the research community, but presents a challenging problem in real-world applications. This is primarily due to the “semantic gap” between low-level visual features and high-level content (i.e., when comparing multiple images, visual feature similarities are not necessarily correlated with content similarities). The iLike method has been developed to bridge this gap for “vertical search” applications that focus on visual content.
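The idea of blending textual and visual evidence can be illustrated with a small sketch. This is not the actual iLike implementation; the feature vectors, the `combined_score` function, and the weighting parameter `alpha` are hypothetical, shown only to make the "semantic gap" concrete: two images with nearly identical low-level visual features can still differ sharply once surrounding text is taken into account.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

def combined_score(q_text, d_text, q_vis, d_vis, alpha=0.5):
    """Hypothetical weighted blend of textual and visual similarity.

    alpha controls the trade-off: 1.0 = text only, 0.0 = visual only.
    """
    text_sim = cosine_similarity(q_text, d_text)
    visual_sim = cosine_similarity(q_vis, d_vis)
    return alpha * text_sim + (1.0 - alpha) * visual_sim

# Two candidate images with identical low-level visual features but
# different surrounding text. Visual similarity alone ranks them
# equally; the blended score separates them.
q_text, q_vis = [1.0, 0.0, 1.0], [0.9, 0.1]
d1_text, d1_vis = [1.0, 0.0, 0.9], [0.9, 0.1]   # matching text
d2_text, d2_vis = [0.0, 1.0, 0.0], [0.9, 0.1]   # unrelated text

print(combined_score(q_text, d1_text, q_vis, d1_vis))  # high: text agrees
print(combined_score(q_text, d2_text, q_vis, d2_vis))  # lower: text disagrees
```

Under this toy weighting, the first candidate scores higher despite both sharing identical visual features, which is precisely the distinction a purely content-based ranker cannot make.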
