Automatic image annotation is crucial for keyword-based image retrieval. A growing trend focuses on machine learning techniques, which learn statistical models from annotated images and apply them to generate annotations for unseen images. In this paper we propose MAGMA, a new image auto-annotation
neighbor search of videos from the Internet. The fundamental problem lies in the scalability of a search technique, given the intractable volume of videos continually appearing on the Web. In this paper, we investigate the scalability of several well-known features, including color signatures and visual keywords, for Web-based
This paper presents an integrated approach to automatically providing an overview of content on Thai websites based on tag clouds. This approach is intended to address the information overload issue by presenting the overview to users so that they can assess whether the information meets their needs. The approach incorporates Web content extraction, Thai word segmentation, and information...
Recently, with the increasing number of users and activities in social network services, image sentiment analysis has become an important topic for psychological study and commercial marketing. To accurately recognize a user's sentiments from an image, it is essential to identify discriminative visual features and then
There are a huge number of videos with text tags on the Web nowadays. In this paper, we propose a method of automatically extracting from Web videos the video shots corresponding to specific actions, given only action keywords such as "walking" and "eating". The proposed method
topic, object, and attribute dictionaries. Eight kinds of text are extracted as image semantic sources from Web pages. Combined with the semantic dictionaries, image semantic keywords can be extracted from these eight kinds of text. The proposed strategy for extracting image semantics outperforms the existing technique, which is better than
, emotion keywords whose distributions exhibit a 3D structure can be projected into the emotion space. Emotion distributions were transformed into an emotion matrix. By analyzing the emotion matrix, not only binary classification of texts but also multi-emotion attributes can be investigated. The best precision of 91% for a binary