In this paper, we propose a novel strategy at an abstract level by combining textual and visual clustering results to retrieve images using semantic keywords and auto-annotate images based on similarity with existing keywords. Our main hypothesis is that images that fall into the same text-cluster can be described
Does there exist a compact set of visual topics, in the form of keyword clusters, capable of representing the visual content of all images within an acceptable error? In this paper, we answer this question by analyzing distribution laws for keywords from image descriptions and comparing them with traditional techniques in NLP, thereby
In classical image classification approaches, low-level features have been used, but the high dimensionality of feature spaces poses a challenge for feature selection and distance measurement during clustering. In this paper, we propose an approach to generate visual keywords and combine both visual
keywords from the Web pages. The system first identifies the section of the Web page that contains the multimedia file to be extracted, and then extracts it using clustering techniques and other statistical tools. Experimental results on real-world image-sharing Web sites are presented and discussed in this paper
The amount of multimedia information is rapidly increasing due to digital cameras and camera-equipped mobile telephones. To interpret the semantics of an image, many researchers use keywords as textual annotations. However, current state-of-the-art annotators produce too many irrelevant keywords for images. They
The associations between different modalities of Web images could be very useful for Web image retrieval. In this paper, we investigate the multi-modal associations between two basic modalities of Web images, i.e. keyword and visual feature clusters, using data mining techniques. The association rule crosses two
keywords in common, then the image is added to an image repository. Additional meta-information is now associated with each image, such as the caption, cluster features, names of people in the news article, etc. A very large repository containing more than 983k images from 12 million news articles was built using this approach
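The repository-building step this snippet describes (keeping an image when its caption shares keywords with the article's cluster, then attaching metadata to it) can be sketched roughly as follows; the record fields, function name, and threshold are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    # Hypothetical metadata attached to each stored image:
    # caption, cluster features, names of people in the news article.
    url: str
    caption: str
    cluster_features: list
    people: list = field(default_factory=list)

def shares_keywords(caption, article_keywords, min_common=1):
    # Keep an image only if its caption has at least `min_common`
    # keywords in common with the article's keyword cluster.
    caption_words = set(caption.lower().split())
    return len(caption_words & set(article_keywords)) >= min_common

repo = []
article_keywords = {"election", "senate", "vote"}
img = ImageRecord(url="http://example.com/a.jpg",
                  caption="Senate vote on election reform",
                  cluster_features=[0.1, 0.9],
                  people=["J. Doe"])
if shares_keywords(img.caption, article_keywords):
    repo.append(img)
```

Scaled over millions of news articles, this kind of filter-then-annotate loop is what would accumulate a large repository of captioned, metadata-rich images.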
A multinet system, comprising SOMs linked via Hebbian connections, has been designed and implemented for automatically annotating and retrieving cell migration images. The collateral compound keywords used in image captions and elsewhere in the text were used to train one SOM, and colour moments of the image were used
the document tags is considered the cluster name. In short, web search results fetched from the prevailing web search engines are grouped under phrases that contain one or more search keywords. This paper aims at organizing web search results into clusters, facilitating quick browsing options for the user
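The grouping idea in this snippet (clustering results under short phrases that contain a search keyword, with the shared phrase serving as the cluster name) can be sketched as follows; the function name, the fixed context window, and the sample titles are illustrative assumptions, not the paper's method:

```python
from collections import defaultdict

def group_by_phrase(results, keyword, window=1):
    """Group search-result titles under short phrases containing the
    query keyword; the shared phrase serves as the cluster name."""
    clusters = defaultdict(list)
    for title in results:
        words = title.lower().split()
        for i, w in enumerate(words):
            if w == keyword:
                # Take a small window of words around the keyword
                # as the candidate cluster-naming phrase.
                lo, hi = max(0, i - window), i + window + 1
                phrase = " ".join(words[lo:hi])
                clusters[phrase].append(title)
    return dict(clusters)

results = ["web image clustering survey",
           "fast image clustering survey",
           "clustering search results"]
groups = group_by_phrase(results, "clustering")
```

Titles sharing the same keyword context fall into the same cluster, so a user can skim cluster names instead of scanning every individual result.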
In our earlier works on the VAST (visuAl & semantic image search) system, the semantic network effectively associated keywords and visual feature clusters. However, we previously considered only the construction of the semantic network, not its updating. In this paper, an