This paper describes experiments in audio clip comparison based on spoken content. The spoken content is obtained using automatic speech recognition. The social tags that are available for most of the audio clips are used as keywords. These keywords are mapped to the spoken transcriptions representing the audio clips.
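A minimal sketch of this idea, assuming transcripts are plain whitespace-separated text and that clip comparison reduces to a keyword-overlap (Jaccard) score over the social tags matched in each transcript; the function names and the scoring choice are illustrative, not the paper's actual method.

```python
def keyword_profile(transcript: str, tags: set) -> set:
    """Return the social tags that actually occur in the ASR transcript."""
    words = set(transcript.lower().split())
    return {t for t in tags if t.lower() in words}

def clip_similarity(transcript_a: str, transcript_b: str, tags: set) -> float:
    """Jaccard overlap of the tag keywords matched in each clip's transcript."""
    a = keyword_profile(transcript_a, tags)
    b = keyword_profile(transcript_b, tags)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

tags = {"jazz", "guitar", "live", "piano"}
print(clip_similarity("live jazz guitar solo", "a jazz piano trio live", tags))
```

In practice the mapping step would use a weighted scheme (e.g. TF-IDF over the transcript) rather than exact word matching, but the overlap structure is the same.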
In an effort to develop effective multimedia learning objects (MLO), we propose a framework to extract and associate semantic tags with temporally segmented instructional videos. These tags serve as the basis for an efficient indexing and retrieval system. We create these semantic tags from potential keywords extracted
multimedia user-generated content. An authorized person or body filters it before it is published. Once user-generated multimedia tourist contents are accepted, they are published to a web page using a tool. This GUI allows browsing content provided by individual users or content that includes a given tag or keyword. Finally, the
videos and generate corresponding MPEG-7 description files. Subsequently, it establishes a distributed index of the MPEG-7 files and distributed storage of the video files separately. The system provides numerous web query interfaces, including keyword semantic-expansion queries, semantic graph queries, and natural-language queries
documents based on keywords, users normally have a more abstract perception of what information they require. The semantic gap, i.e. the disparity between a user's request and the query results, has been identified as a challenging issue. In this paper, we are interested in scientific document indexing for retrieval. Knowing the
This work identifies relevant songs from a user's personal music collection to accompany pictures of an event. The event's pictures are analyzed to extract aggregated semantic concepts in a variety of dimensions, including scene type, geospatial information, and event type, along with user-provided keywords. These
region of initial retrieved results. Both the keywords and the image content of the Web images are processed by local LSI (LLSI) to re-rank the initial retrieval results automatically. PRF-LLSI contributes the following: (1) local LSI avoids the heavy computational cost of global LSI; (2) pseudo-relevance feedback doesn't need the user's
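A hedged sketch of the local-LSI re-ranking step, assuming a small term-document matrix built only from the top initially retrieved results (the "local" part, which is what keeps the SVD cheap) and cosine similarity to the query in the truncated latent space; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def lsi_rerank(doc_term: np.ndarray, query_vec: np.ndarray, k: int = 2) -> np.ndarray:
    """Re-rank documents by cosine similarity to the query in a k-dim LSI space.

    doc_term: (n_docs, n_terms) matrix built from the top initial results only,
    so the SVD stays small (the 'local' in Local LSI).
    query_vec: (n_terms,) query vector, e.g. formed by pseudo-relevance feedback.
    Returns document indices ordered best-first.
    """
    U, s, Vt = np.linalg.svd(doc_term, full_matrices=False)
    Vk = Vt[:k].T                       # (n_terms, k): term -> latent map
    docs_latent = doc_term @ Vk         # project documents into latent space
    q_latent = query_vec @ Vk           # project the query the same way
    sims = docs_latent @ q_latent / (
        np.linalg.norm(docs_latent, axis=1) * np.linalg.norm(q_latent) + 1e-12
    )
    return np.argsort(-sims)            # indices of docs, best match first
```

The pseudo-relevance-feedback part would replace `query_vec` with a vector aggregated from the top-ranked documents themselves, which is why no explicit user feedback is needed.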