The majority of computational work on emotion in music concentrates on developing machine learning methodologies to build new, more accurate prediction systems, and usually relies on generic acoustic features. Comparatively less effort has been devoted to the development and analysis of features particularly suited to the task. The contribution of this paper is twofold. First, the paper proposes...
A robust algorithm to model the harmony structure of a music piece is proposed. The harmony structure is extracted directly from a music audio signal using a second-order statistic of chroma feature vectors. The method is experimentally shown to be robust against the degradation of chroma feature vectors due to noisy pitch estimation in our classical music opus identification evaluation. To analyze...
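The abstract above does not specify which second-order statistic of the chroma vectors is used; a natural candidate is the covariance matrix of the per-frame chroma vectors over time, which summarizes which pitch classes co-occur and is less sensitive to noise in any single frame. The following is a minimal sketch under that assumption, not the paper's exact method:

```python
def chroma_covariance(chroma_frames):
    """Covariance matrix of chroma feature vectors across frames.

    chroma_frames: list of equal-length chroma vectors (typically 12-dim,
    one per analysis frame). Returns a dim x dim covariance matrix, a
    second-order statistic that captures pitch-class co-occurrence
    (assumed here as the harmony-structure summary; hypothetical).
    """
    n = len(chroma_frames)
    dim = len(chroma_frames[0])
    # Per-dimension mean over all frames.
    mean = [sum(f[k] for f in chroma_frames) / n for k in range(dim)]
    # Averaged outer product of mean-centered frames.
    return [
        [
            sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in chroma_frames) / n
            for j in range(dim)
        ]
        for i in range(dim)
    ]
```

Because the covariance averages over all frames, a few frames with noisy pitch estimates perturb it only slightly, which is consistent with the robustness claim in the abstract.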
A new chroma-based dynamic feature vector is proposed, inspired by psychophysical observations that the human auditory system detects relative pitch changes rather than absolute pitch values. The proposed chroma-based dynamic feature vector describes the relative pitch change intervals. The utility of the proposed feature vector, incorporated into a music fingerprint extraction algorithm, is experimentally...
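One simple way to realize a relative-pitch-change feature from chroma vectors is to take the dominant pitch class per frame and encode successive differences modulo 12; this is a sketch of the idea, not necessarily the paper's construction. A key property such a feature should have is transposition invariance (shifting every pitch by the same interval leaves the feature unchanged):

```python
def pitch_change_intervals(chroma_frames):
    """Relative pitch-change intervals from a chroma sequence (sketch).

    For each frame, take the index of the strongest pitch class, then
    encode the interval between successive dominant pitch classes
    modulo 12. The absolute pitch level cancels out, leaving only
    relative changes (hypothetical realization of the abstract's idea).
    """
    peaks = [max(range(len(f)), key=f.__getitem__) for f in chroma_frames]
    return [(b - a) % 12 for a, b in zip(peaks, peaks[1:])]


def one_hot(pitch_class, dim=12):
    """Helper: chroma frame with all energy in one pitch class."""
    frame = [0.0] * dim
    frame[pitch_class] = 1.0
    return frame
```

For example, a melody on pitch classes 0, 4, 7 and its transposition 2, 6, 9 yield the same interval sequence [4, 3], illustrating why a relative feature is attractive for fingerprinting.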
An algorithm for extracting music fingerprints directly from an audio signal is proposed in this paper. The proposed music fingerprint aims to encapsulate various aspects of musical information, such as overall note distribution, harmony structure, and their temporal changes, all in a compact representation. The utility of the proposed music fingerprint to the task of automatic classical music cover...
Using the recently proposed framework for latent perceptual indexing of audio clips, we present classification of whole clips categorized by two schemes: high-level semantic labels and mid-level, perceptually motivated onomatopoeia labels. First, feature vectors extracted from the clips in the database are grouped into reference clusters using an unsupervised clustering technique. A unit-document...
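The clustering step described above can be illustrated with a minimal sketch: once reference clusters (centroids) have been obtained by any unsupervised method such as k-means, each clip can be represented by the histogram of its feature vectors' nearest-cluster assignments. The centroids and the histogram representation here are assumptions for illustration, not the framework's exact formulation:

```python
def cluster_histogram(features, centroids):
    """Normalized histogram of nearest-reference-cluster assignments.

    features:  list of feature vectors from one audio clip.
    centroids: list of reference-cluster centers (assumed precomputed
               by an unsupervised method such as k-means; hypothetical).
    Returns per-cluster assignment frequencies summing to 1.
    """
    def dist2(a, b):
        # Squared Euclidean distance between two vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    counts = [0] * len(centroids)
    for f in features:
        nearest = min(range(len(centroids)), key=lambda i: dist2(f, centroids[i]))
        counts[nearest] += 1
    total = len(features)
    return [c / total for c in counts]
```

Such a histogram gives each clip a fixed-length representation regardless of its duration, which is what makes whole-clip classification straightforward downstream.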
Measuring the activity rate of an audio clip in terms of three defined attributes yields a generic, quantitative measure of the various acoustic sources present in it. The objective of this work is to verify whether the acoustic structure measured in terms of these three attributes can be used for genre classification of music tracks. For this, we experiment on classification of full-length music tracks by using...