effective in terms of precision. The proposed method uses keyword clusters for query expansion, and visual features to detect duplicate images. Removing duplicates further improves precision and recall in the retrieval results
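The abstract does not specify how duplicates are detected, but one common approach is to compare image feature vectors and drop any image that lies within a small distance of an already-kept one. A minimal sketch, assuming Euclidean distance over some precomputed feature vectors (the feature choice and threshold here are illustrative, not the paper's actual method):

```python
from math import dist  # Euclidean distance, Python 3.8+

def remove_duplicates(features, threshold=0.1):
    """Keep one representative per group of near-identical images.

    `features` maps image id -> feature vector (e.g. a color histogram;
    the concrete features and threshold are assumptions for illustration).
    An image is dropped if its vector is closer than `threshold` to any
    image already kept.
    """
    kept = {}
    for img_id, vec in features.items():
        if all(dist(vec, kept_vec) >= threshold for kept_vec in kept.values()):
            kept[img_id] = vec
    return list(kept)

# Two near-identical images collapse to a single result.
feats = {"a.jpg": [0.9, 0.1], "a_copy.jpg": [0.9, 0.1001], "b.jpg": [0.1, 0.8]}
print(remove_duplicates(feats))  # → ['a.jpg', 'b.jpg']
```

Pruning near-duplicates this way raises precision (fewer redundant hits in the top ranks) and leaves room for distinct relevant images, which is the improvement the abstract reports.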
using feature vectors. We perform static analysis over the computed features to obtain distinguishing feature descriptors. Maximum similarity, i.e. minimum distance, allows us to find the combined pictures relevant to the query and their associated relevant words. For the textual part of the query we compute the concepts (keywords as well as synonyms of
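The "maximum similarity = minimum distance" matching described above is nearest-neighbor ranking over feature vectors. A minimal sketch, assuming Euclidean distance and illustrative image names (the actual descriptors in the paper are not specified here):

```python
from math import dist  # Euclidean distance, Python 3.8+

def retrieve(query_vec, database):
    """Rank database images by relevance to the query feature vector.

    Maximum similarity corresponds to minimum distance, so sorting by
    distance ascending puts the most relevant image first. `database`
    maps image id -> feature vector (names here are hypothetical).
    """
    return sorted(database, key=lambda img_id: dist(query_vec, database[img_id]))

db = {"cat.jpg": [1.0, 0.0], "dog.jpg": [0.0, 1.0], "lion.jpg": [0.9, 0.1]}
print(retrieve([1.0, 0.0], db))  # → ['cat.jpg', 'lion.jpg', 'dog.jpg']
```

In a combined system, the top-ranked images' associated keywords can then feed the textual side of the query, as the abstract suggests.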
system to assist users in easily accessing the information and having an enjoyable experience browsing Kotenseki images. There are two main functions: keyword-based and image-based queries. We also provide automatic detection of objects within the original images to create a database of feature vectors. Our
this module, and early results of CBIR enabled the combination of content-based retrieval and keyword retrieval. This improved retrieval performance and narrowed the semantic gap. Experimental results demonstrated that this project can, to a certain extent, help users more precisely retrieve their
In the field of digital image processing, content-based image retrieval is becoming very popular. Google and Yahoo offer image search tools, known as Google Images and Yahoo! Image Search, that are based on textual annotation of images. In textual annotation, with the help of keywords, images
Multi-label image annotation has received significant attention in the research community over the past few years. Multi-label automatic image annotation automatically assigns keywords to an image based on low-level features. In this paper, we present an extensive survey of the research work carried out in the area
Content-based means that the search makes use of the contents of the images themselves, rather than relying on manually entered metadata such as captions or keywords. With content-based techniques, a user can specify contents of interest in a query. The contents may be colors, textures, shapes, or the spatial layout of
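Color is the simplest of the content types listed above. A minimal sketch of a color-content query, assuming a coarse quantized color histogram as the feature and L1 distance for comparison (pixel data and bin count are illustrative):

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` levels and count occurrences,
    giving a coarse, normalized color signature for the image."""
    hist = Counter((r * bins // 256, g * bins // 256, b * bins // 256)
                   for r, g, b in pixels)
    total = len(pixels)
    return {bucket: n / total for bucket, n in hist.items()}

def histogram_distance(h1, h2):
    """L1 distance between two histograms; smaller means more similar color content."""
    buckets = set(h1) | set(h2)
    return sum(abs(h1.get(b, 0) - h2.get(b, 0)) for b in buckets)

red_image    = [(250, 10, 10)] * 8                           # all red pixels
mostly_red   = [(250, 10, 10)] * 4 + [(250, 120, 120)] * 4   # half red, half pink
blue_image   = [(10, 10, 250)] * 8                           # all blue pixels

q = color_histogram(red_image)
# A red query image matches the mostly-red image better than the blue one.
print(histogram_distance(q, color_histogram(mostly_red)) <
      histogram_distance(q, color_histogram(blue_image)))  # → True
```

Texture, shape, and spatial-layout features work the same way in principle: each is reduced to a vector or histogram that can be compared by a distance measure, no captions required.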
The existence of countless digital images has given rise to image retrieval in many applications. Conventional, text-annotated image databases pose two major problems: assigning keywords to images and complexity. Hence, retrieval systems based on an image's visual content are more desirable [1]. The content-based image
With the rapid development of multimedia technology, traditional keyword-based information retrieval techniques are no longer sufficient, and content-based image retrieval (CBIR) has become an active research topic. A new content-based image retrieval method using the feature analysis of edge extraction and median
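The abstract pairs edge extraction with median filtering (a common denoising step before edge detection). A minimal sketch of both, on a grayscale image stored as a list of rows; the 3x3 window, the horizontal-difference edge test, and the threshold are simplifying assumptions, not the paper's actual operators:

```python
from statistics import median

def median_filter(img):
    """3x3 median filter for denoising a grayscale image (list of rows).
    Border pixels are left unchanged in this minimal sketch."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median(img[y + dy][x + dx]
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

def edge_map(img, threshold=50):
    """Mark pixels whose horizontal intensity jump exceeds `threshold`;
    a crude stand-in for a full edge detector such as Sobel or Canny."""
    return [[1 if x + 1 < len(row) and abs(row[x + 1] - row[x]) > threshold else 0
             for x in range(len(row))]
            for row in img]

# A dark-to-bright step produces an edge at the boundary column.
img = [[0, 0, 200, 200]] * 3
print(edge_map(img))  # → [[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0]]
```

Filtering first matters because the median suppresses isolated noise pixels that a difference-based edge test would otherwise flag as spurious edges; the resulting edge map can then serve as a shape feature for retrieval.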
the actual content of the image. The term 'content' in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. Without the ability to examine image content, search must rely on metadata such as captions or keywords, which may be laborious or