the age of Big Data, where Velocity, Variety and Volume are the challenges, variety of data, which includes structured as well as unstructured data, is the most important issue. Image mining in Big Data is a challenge that needs to be addressed, so the proposed work composes an image query object, aspect of use, keywords to
details, we introduce a comprehensive list of tags with which each word is labeled. These tags can be used for research on specific issues, such as dealing with text in different colors. For the comparison of different word spotters, a fixed set of 25 keywords with different properties is included. Furthermore, some specifics of
images are to be re-ranked using visual features after the initial text-based search. Here, the query keywords are first used to separate the dataset images into two groups, relevant and irrelevant; then all the images are ranked based on different modalities of image features, as the similar images need
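A minimal sketch of such a two-stage retrieval pipeline, assuming pre-computed keyword tags and color histograms per image (all names and values below are illustrative, not taken from the paper):

```python
import math

# Toy dataset: each image carries keyword tags and a pre-computed color
# histogram. The entries are invented for the example.
images = {
    "img1": {"tags": {"beach", "sunset"}, "hist": [0.7, 0.2, 0.1]},
    "img2": {"tags": {"beach", "palm"},   "hist": [0.6, 0.3, 0.1]},
    "img3": {"tags": {"city", "night"},   "hist": [0.1, 0.2, 0.7]},
}

def retrieve(query_tags, query_hist):
    # Stage 1: split the dataset into relevant / irrelevant by keyword overlap.
    relevant = {k: v for k, v in images.items() if query_tags & v["tags"]}
    # Stage 2: rank the relevant images by Euclidean distance between histograms.
    def dist(h):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query_hist, h)))
    return sorted(relevant, key=lambda k: dist(images[k]["hist"]))
```

A query such as `retrieve({"beach"}, [0.68, 0.22, 0.10])` first discards `img3` on keywords, then orders the remaining images by visual similarity.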
Content-based image retrieval (CBIR) is one of the major, fast-growing research areas of digital image processing. In these tools, images are manually annotated with keywords and then retrieved using text-based search methods. The aim of CBIR is to extract visual features of an image
The color histograms, texture, GIST and invariant moments, used as feature extraction methods, are combined with multiclass support vector machines, Bayesian networks, neural networks and nearest-neighbour classifiers in order to annotate the image content with the appropriate keywords. The accuracy of the
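As a rough illustration of annotation by combined features and a classifier, here is a sketch using a 1-nearest-neighbour classifier over concatenated feature vectors; the training vectors and keywords are invented for the example:

```python
import math

# Hypothetical training set: concatenated feature vectors (e.g. color
# histogram + texture statistics) paired with annotation keywords.
train = [
    ([0.9, 0.1, 0.40], "sky"),
    ([0.1, 0.8, 0.30], "grass"),
    ([0.2, 0.2, 0.95], "water"),
]

def annotate(features):
    # 1-nearest-neighbour over the combined feature space: return the
    # keyword of the closest training vector.
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, v)))
    return min(train, key=lambda pair: dist(pair[0]))[1]
```

In the cited work this role is filled by SVMs, Bayesian networks or neural networks; the nearest-neighbour rule is just the simplest stand-in for the same feature-to-keyword mapping.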
This paper proposes an ingenious and fast method to classify videos into fixed broad classes, which would assist searching and indexing using semantic keywords. The model extracts constituent frames from videos and maps low-level features extracted from these frames to high-level semantics. We use color, structure and
the objects effectively. In addition, how to tag the segmented objects with keywords is also a challenge for researchers. In this study, we propose a color-differentiated fuzzy c-means (CDFCM) framework for effective image segmentation, to obtain segmented objects within an image, which is useful for
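For reference, the standard fuzzy c-means updates that a CDFCM-style framework builds on can be sketched on 1-D intensity data as follows; this is plain FCM, not the paper's color-differentiated variant, and the pixel values are toy data:

```python
def fuzzy_c_means(data, m=2.0, iters=50):
    """Plain two-cluster fuzzy c-means on 1-D values."""
    centers = [min(data), max(data)]          # deterministic initialisation
    c = len(centers)
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - ck) or 1e-9 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
                      for i in range(c)])
        # Center update: weighted mean of the data with weights u^m
        centers = [sum((u[k][i] ** m) * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return sorted(centers)

pixels = [10, 11, 12, 200, 205, 210]   # two clear intensity clusters (toy data)
centers = fuzzy_c_means(pixels)
```

On this data the two centers settle near the two intensity groups; a color-differentiated variant would replace the scalar distance with one computed in a color space.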
The intention of image retrieval systems is to provide retrieved results as close to users' expectations as possible. However, users' requirements differ from one another across application scenarios, even for the same concept and keywords. In this paper, we introduce a personalized image retrieval model driven by users
system considering artifacts using the self-organizing map with refractoriness makes use of this property in order to retrieve multiple similar images. In this image retrieval system, not only color information but also spectrum and keywords are employed as image features. Moreover, the original image is divided into some
keywords to determine the Basic Expansion Terms (BET) using a number of semantic measures, including the Betweenness Measure (BM) and the Semantic Similarity Measure (SSM). We propose a Map/Reduce distributed algorithm for calculating all the shortest paths in the ontology graph. The Map/Reduce algorithm will considerably improve the
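The Map/Reduce decomposition of the all-pairs shortest-path computation might look like the following single-process sketch, where each map task runs a BFS from one source node and the reduce step merges the per-source results (the toy ontology graph is invented; a real job would distribute the map calls across workers):

```python
from collections import deque
from itertools import chain

# Toy undirected ontology graph as an adjacency list (illustrative only).
graph = {
    "animal": ["dog", "cat"],
    "dog": ["animal", "puppy"],
    "cat": ["animal"],
    "puppy": ["dog"],
}

def bfs_lengths(source):
    """Map step: shortest-path lengths from one source (unit edge weights)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [((source, v), d) for v, d in dist.items()]

# Reduce step: merge every per-source result into one all-pairs table.
all_pairs = dict(chain.from_iterable(map(bfs_lengths, graph)))
```

Because each source's BFS is independent, the map phase parallelizes trivially, which is what makes the distributed formulation attractive for large ontology graphs.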
feelings or emotions. To convey the author's feelings, we suggest enhancing a text tweet with an appropriate image, with or without the text. To generate an image from the text, we first analyze the text tweet. The morpheme analyzer detects the keywords, and then the thumbnail images related to those keywords are retrieved
Image identification of plant leaves based on human vision is a difficult task, as is plant identification based on keyword retrieval. It requires domain knowledge in the field of botany. This work proposes image texture analysis using the Discrete Wavelet Transform (DWT) combined with an entropy
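A minimal sketch of a DWT-plus-entropy texture feature, using a one-level Haar transform on a single row of intensities; the data and helper names are illustrative, and a full system would apply this over 2-D image blocks:

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal), 2)]
    return a, d

def entropy(coeffs):
    """Shannon entropy of the normalized coefficient energies."""
    energy = [c * c for c in coeffs]
    total = sum(energy) or 1.0
    p = [e / total for e in energy if e > 0]
    return -sum(pi * math.log2(pi) for pi in p)

row = [4, 4, 8, 8, 2, 2, 6, 6]   # one row of leaf-image intensities (made up)
approx, detail = haar_dwt(row)
texture_feature = entropy(approx)
```

The entropy of each sub-band's coefficients becomes one element of the texture descriptor; smooth regions yield low-entropy detail bands, while vein-rich leaf textures raise it.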
Content-based image retrieval (CBIR) is one of the most popular, rising research areas of digital image processing. Most of the available image search tools, such as Google Images and Yahoo! Image Search, are based on textual annotation of images. In these tools, images are manually annotated with keywords and
use of this property in order to retrieve multiple similar images. In this image retrieval system, not only color information but also spectrum and keywords are employed as image features. We carried out a series of computer experiments and confirmed the effectiveness of the proposed system. Moreover, in the
A neural network model with adaptive structure for image annotation is proposed in this paper. The adaptive structure enables the proposed model to utilize both global and regional visual features, as well as correlative information of annotated keywords for annotation. In order to achieve an approximate global
Online video archives provide a large amount of multimedia presentation content through the Internet. However, it takes users a long time to find what they really want to watch among many presentation videos. We have been developing a system to summarize multiple presentation contents that match given keywords. In this
Content-based means that the search makes use of the contents of the images themselves, rather than relying on human-entered metadata such as captions or keywords. With content-based techniques, a user can specify the contents of interest in a query. The contents may be colors, textures, shapes, or the spatial layout of
more easily. This method is faster than a traditional GIS system when searching data from the database, because it no longer needs search keywords and zone location messages, simplifying the data structure, cutting down the complexity and computing resources, and improving the efficiency of search. The analysis of this method
The LIGVID system is designed for online interactive video shots retrieval and annotation. It uses a user-controlled combination of multiple criteria: keywords, phonetic string, similarity to example images, semantic categories, and relevance feedback strategies: visual and temporal similarity to already identified
Finding information based on an object's profile is very useful when exact keywords for the object are unknown. Current image retrieval systems all ignore color information; for example, we may want to find a super-star with a piece of red petticoat, or a red flower with a white background. They all cannot