approach has a limit, as only the annotations of images found during the interaction are updated. In this paper we introduce a novel method of semi-automatic annotation. The method uses visual feature representations of keywords, which are improved during region-based relevance feedback. The experiments show that this
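The snippet above describes keyword models that are refined through region-based relevance feedback. As a rough, hedged illustration only (the paper's actual formulation is not given here), the sketch below models each keyword as a mean visual-feature vector updated with a Rocchio-style rule from regions the user marks as relevant or irrelevant; the class name, method names, and weighting parameters are hypothetical.

```python
# Hypothetical sketch: one keyword = one visual prototype vector, refined by feedback.
import numpy as np

class KeywordVisualModel:
    def __init__(self, dim):
        # Prototype feature vector representing the keyword's visual appearance.
        self.prototype = np.zeros(dim)

    def update(self, relevant_regions, irrelevant_regions,
               alpha=1.0, beta=0.75, gamma=0.15):
        """Rocchio-style refinement from region-level feedback.

        relevant_regions / irrelevant_regions: arrays of shape (n, dim) with
        feature vectors (e.g. colour/texture descriptors) of image regions.
        """
        rel = relevant_regions.mean(axis=0) if len(relevant_regions) else 0.0
        irr = irrelevant_regions.mean(axis=0) if len(irrelevant_regions) else 0.0
        self.prototype = alpha * self.prototype + beta * rel - gamma * irr

    def score(self, region_features):
        # Cosine similarity between candidate regions and the keyword prototype,
        # used to propagate the keyword to not-yet-annotated regions.
        num = region_features @ self.prototype
        den = (np.linalg.norm(region_features, axis=1)
               * np.linalg.norm(self.prototype) + 1e-9)
        return num / den
```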
Content-based image retrieval (CBIR) has been adopted as a complementary technique to keyword-based image search. Relevance feedback (RFB) is considered an effective means to bridge the gap between the designated features and the run-time semantics in a CBIR system. Like many other interactive systems, a good
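Relevance feedback in CBIR can take several classic forms; as a generic, hedged illustration (not the specific RFB scheme of the cited paper), the sketch below shows feature re-weighting, where dimensions on which the user-marked relevant images agree receive larger weights in the distance function for the next ranking round.

```python
# Generic feature re-weighting round of relevance feedback (illustrative only).
import numpy as np

def reweight(relevant, eps=1e-6):
    """Weight each feature dimension by the inverse spread of the relevant examples."""
    return 1.0 / (np.std(relevant, axis=0) + eps)

def rank(database, query, weights):
    """Order images by weighted Euclidean distance to the query."""
    dists = np.sqrt(((database - query) ** 2 * weights).sum(axis=1))
    return np.argsort(dists)

# One feedback round on synthetic data: uniform weights first, then re-weighted.
db = np.random.rand(500, 32)                   # low-level features of the collection
query = np.random.rand(32)
weights = np.ones(32)
relevant = db[rank(db, query, weights)[:5]]    # suppose the user marks the top 5 as relevant
weights = reweight(relevant)
results = rank(db, query, weights)             # re-ranking with the adapted metric
```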
Methods of retrieving images that incorporate human-generated metadata, such as keyword annotation and collaborative filtering, are less vulnerable to the semantic gap than content-based image retrieval. However, generating such metadata is time-consuming, expensive, and difficult to evaluate. This paper discusses an
Content-based image retrieval (CBIR) is one of the most popular and rapidly rising research areas in digital image processing. Most of the available image search tools, such as Google Images and Yahoo! Image Search, are based on textual annotation of images. In these tools, images are manually annotated with keywords and
Managing photos by using visual features (e.g., color and texture) is known to be a powerful, yet imprecise, retrieval paradigm because of the semantic gap problem. The same is true if search relies only on keywords (or tags), derived from either the image context or user-provided annotations. In this paper we present
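The abstract stresses that visual features alone and tags alone are both imprecise; a common way to illustrate their complementarity is a fused retrieval score. The sketch below is purely an assumption-laden example (linear fusion with a hand-set weight, cosine similarity for features, Jaccard overlap for tags), not the model the paper presents.

```python
# Illustrative fusion of visual similarity and tag overlap into one ranking score.
import numpy as np

def visual_score(query_feat, photo_feats):
    """Cosine similarity between the query image features and each photo."""
    qn = query_feat / (np.linalg.norm(query_feat) + 1e-9)
    pn = photo_feats / (np.linalg.norm(photo_feats, axis=1, keepdims=True) + 1e-9)
    return pn @ qn

def tag_score(query_tags, photo_tags):
    """Jaccard overlap between the query tags and each photo's tag set."""
    return np.array([len(query_tags & t) / (len(query_tags | t) or 1) for t in photo_tags])

def fused_rank(query_feat, query_tags, photo_feats, photo_tags, lam=0.5):
    # Blend the two signals; lam balances visual similarity against tag overlap.
    score = (lam * visual_score(query_feat, photo_feats)
             + (1 - lam) * tag_score(query_tags, photo_tags))
    return np.argsort(-score)

# Example: three photos with colour/texture features and user tags.
feats = np.random.rand(3, 16)
tags = [{"beach", "sunset"}, {"dog"}, {"beach", "dog"}]
order = fused_rank(np.random.rand(16), {"beach"}, feats, tags, lam=0.6)
```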
In this paper, we present the AI Goggles system, which can instantly describe objects and scenes in the real world and retrieve visual memories about them using keywords input by the user. It is a stand-alone wearable system running on a tiny mobile computer. The system can also quickly learn unknown objects and