We present a system that realistically models the sound of bass guitars, together with a method to estimate the corresponding parameters from the sound of a bass guitar alone, without other physical measurements. Our model includes the musician's plucking and expression styles, such as vibrato or bending, and the string number, enabling realistic modeling and reproduction of the sound. We show that we can estimate the playing...
In this work, Classical Turkish Music songs are classified into six makams. A makam is a modal framework for melodic development in Classical Turkish Music. The effect of the sound-clip length on system performance was also evaluated. Mel Frequency Cepstral Coefficients (MFCC) were used as features, and the extracted features were classified using a Probabilistic Neural Network. The best correct recognition...
This paper presents classification and recognition of monophonic isolated musical instrument sounds using higher-order spectra such as the Bispectrum and Trispectrum. Experimental results on a widely used dataset show that features based on higher-order spectra improve recognition accuracy when combined with conventional features such as Mel Frequency Cepstral Coefficients (MFCC), Cepstral, Spectral...
In recent times, conventional ways of listening to music and methods for discovering it, such as radio broadcasts and record stores, are being replaced by personalized ways to hear and learn about music. Abundant research and experimentation has been done on Western music. However, only a moderate amount of work has addressed Indian music and related fields such as computational musicology and artificial...
This paper proposes a method for musical instrument identification and melody and bass line estimation in mixture sound signals, aimed at automatic music transcription. The method is based on sound-feature verification against a database of musical sound features such as tone (timbre) and pitch. First, the musical instrument is identified using MFCCs. Then, melody and bass lines are estimated using time-frequency power...
Musical instrument recognition has become an important aspect of music information retrieval, useful to musicians and laymen alike. In this paper, the Dynamic Time Warping (DTW) technique is used to recognize Indian musical instruments from 39 MFCC features. Six Indian musical instruments from different families are considered in this work. A large audio database was collected and recorded...
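The snippet above compares sequences of MFCC frames with Dynamic Time Warping. As a rough illustration of the core idea (not the paper's implementation; toy 2-dimensional frames stand in for real 39-dimensional MFCC vectors), the classic DTW recursion looks like this:

```python
import math

def dtw_distance(seq_a, seq_b):
    """Cumulative dynamic-time-warping cost between two feature sequences."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = minimal cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])  # local Euclidean distance
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][m]

# Toy "MFCC" sequences; b is a time-stretched copy of a, so the cost is 0.
a = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
b = [(0.0, 1.0), (0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
print(dtw_distance(a, b))  # → 0.0
```

Because DTW aligns frames non-linearly in time, a note played slightly faster or slower on the same instrument still yields a small distance, which is what makes it attractive for template-based recognition.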
The main goal of our work is to develop a framework that can train students on musical instruments. The student's instrument can be connected to the framework through a MIDI connection. The framework receives input both from the MIDI instrument and from a source file containing the exact notes, compares them to evaluate the student's performance, and grades it. It also has a mechanism...
The classification of musical instruments originated in musicology. Thus, automatic identification of instrument families not only benefits the study of musicology but is also worthy of attention in MIR. Based on a database consisting of 2177 clips of Chinese and Western instrumental music, the experiments in this paper evaluate the ability to automatically identify Chinese and Western instruments...
A bass line is an instrumental melody that encapsulates rhythmic, melodic, and harmonic features, and arguably contains sufficient information for accurate genre classification. In this paper a bass-line-based automatic music genre classification system is described. "Melodic Interval Histograms" are used as features, and k-nearest-neighbor classifiers are utilized and compared with SVMs...
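The snippet above names "Melodic Interval Histograms" as features and k-nearest-neighbor classification. A minimal sketch of that pipeline, with hypothetical toy bass lines and a 1-NN vote (not the paper's actual feature set or data):

```python
from collections import Counter

def interval_histogram(midi_notes, max_interval=12):
    """Histogram of successive melodic intervals, clipped to +/- one octave
    and normalised to sum to 1 so clips of different length are comparable."""
    intervals = [max(-max_interval, min(max_interval, b - a))
                 for a, b in zip(midi_notes, midi_notes[1:])]
    counts = Counter(intervals)
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}

def histogram_distance(h1, h2):
    """L1 distance between two sparse histograms."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

def nearest_neighbour_genre(query, labelled):
    """1-nearest-neighbour over (histogram, genre) pairs."""
    return min(labelled, key=lambda hg: histogram_distance(query, hg[0]))[1]

# Hypothetical bass lines as MIDI note numbers:
walking = interval_histogram([40, 42, 43, 45, 47, 45, 43, 42])  # stepwise motion
rock    = interval_histogram([40, 40, 40, 47, 40, 40, 40, 45])  # root-fifth jumps
query   = interval_histogram([38, 40, 41, 43, 45, 43, 41, 40])  # also stepwise
print(nearest_neighbour_genre(query, [(walking, "jazz"), (rock, "rock")]))  # → jazz
```

Because the histogram discards absolute pitch and keeps only interval structure, a transposed bass line maps to the same feature vector, which is exactly the invariance a genre classifier wants.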
A prerequisite for identifying the singers in popular music recordings is to reduce the interference of the background accompaniment when characterizing the singer's voice. This study proposes a background-music removal approach for singer identification (SID) that exploits the underlying relationships between solo voices and their accompanied versions in the cepstrum. The relationships are characterized...
In this paper, an approach that estimates the times at which musical beats occur is presented. The system uses a hybrid multi-band decomposition to estimate the music tempo. Beat events are then tracked with a dynamic programming approach, which is updated using short-time tempo estimates. The hybrid decomposition is used to calculate the tempo using different...
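The snippet above tracks beats with dynamic programming given a tempo estimate. A simplified sketch of that idea in the style of Ellis's well-known DP beat tracker (toy code, not this paper's system; the onset envelope, period, and tightness parameter are illustrative assumptions):

```python
import math

def track_beats(onset_strength, period, tightness=100.0):
    """Simplified dynamic-programming beat tracker.
    onset_strength: per-frame onset envelope; period: expected beat spacing in frames.
    Each frame's score is its onset strength plus the best predecessor score,
    penalised for deviating from the expected inter-beat interval."""
    n = len(onset_strength)
    score = list(onset_strength)
    backlink = [-1] * n
    for t in range(n):
        lo, hi = max(0, t - 2 * period), t - period // 2
        best, best_prev = 0.0, -1
        for p in range(lo, hi):
            # Log-squared penalty for intervals away from one beat period.
            penalty = -tightness * math.log((t - p) / period) ** 2
            s = score[p] + penalty
            if s > best:
                best, best_prev = s, p
        score[t] += best
        backlink[t] = best_prev
    # Backtrace from the best-scoring frame to recover the beat sequence.
    t = max(range(n), key=lambda i: score[i])
    beats = []
    while t >= 0:
        beats.append(t)
        t = backlink[t]
    return beats[::-1]

# Onsets every 4 frames are recovered as beats at frames 0, 4, 8, ...
env = [1.0 if i % 4 == 0 else 0.0 for i in range(32)]
print(track_beats(env, 4))  # → [0, 4, 8, 12, 16, 20, 24, 28]
```

Updating the period from short-time tempo estimates, as the abstract describes, would amount to letting `period` vary with `t` rather than staying fixed.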
Instrumental music is often classified or retrieved in terms of the instruments played in it. Using a large database consisting of Chinese traditional music and Western classical music, this paper extracts several features to automatically classify Chinese and Western instruments with an SVM classifier, and analyzes the classification results.
This work describes the design and realization of a laser sensor dedicated to measuring finger position on a musical instrument such as a guitar. The sensor's working principle is optical triangulation: the system is realized with six lasers, multiplexed in time, and four Position Sensing Detectors. The speed and accuracy of the proposed sensor satisfy the application requirements well.
We present a novel music signal processing task of classifying the tuning of a harpsichord from audio recordings of standard musical works. We report the results of a classification experiment involving six different temperaments, using real harpsichord recordings as well as synthesised audio data. We introduce the concept of conservative transcription, and show that existing high-precision pitch...
In this paper, we present a feature-based approach for the classification of different playing techniques in bass guitar recordings. The applied audio features are chosen to capture the characteristic instrument sounds induced by 10 different playing techniques. A novel database consisting of approx. 4300 isolated bass notes was assembled for the purpose of evaluation. The usage of domain-specific features...
3D graphical interaction offers a wide range of possibilities for musical applications. However, it also carries several limitations that prevent it from being used as an efficient musical instrument. For example, input devices for 3D interaction and new gaming devices are usually based on 3- or 6-degrees-of-freedom tracking combined with push-buttons or joysticks. While buttons and joysticks do not...
A new approach to instrument identification based on individual partials is presented. It makes identification possible even when the concurrently played instrument sounds have a high degree of spectral overlap. A pairwise comparison scheme that emphasizes the specific differences between each pair of instruments is used for classification. Finally, the proposed method only requires a single...
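The snippet above uses a pairwise (one-vs-one) comparison scheme: one binary decision per instrument pair, combined by majority vote. A toy sketch of that voting structure (the prototype-distance decider and instrument names are hypothetical stand-ins for the paper's actual pairwise classifiers):

```python
from collections import Counter
from itertools import combinations

def one_vs_one_classify(features, pairwise_decide, labels):
    """Run one binary decision per label pair; return the majority-vote winner."""
    votes = Counter()
    for a, b in combinations(labels, 2):
        votes[pairwise_decide(features, a, b)] += 1
    return votes.most_common(1)[0][0]

# Hypothetical 1-D feature prototypes per instrument:
prototypes = {"violin": 2.0, "flute": 5.0, "piano": 9.0}

def decide(x, a, b):
    """Toy pairwise decider: pick whichever prototype is closer to x."""
    return a if abs(x - prototypes[a]) < abs(x - prototypes[b]) else b

print(one_vs_one_classify(4.6, decide, list(prototypes)))  # → flute
```

The advantage the abstract alludes to is that each binary decider only has to model the differences between two specific instruments, which is easier than one global multi-class boundary.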
A flexible sound rendering system is presented. Multi-channel music is played in a manner adaptive to the locations of the speakers and the listener. In addition, a user can specify positions for the sound sources in a music ensemble that differ from their original positions at the time of recording.
Melody extraction algorithms for single-channel polyphonic music typically rely on the salience of the lead melodic instrument, considered here to be the singing voice. However, the simultaneous presence of one or more pitched instruments in the polyphony can cause such a predominant-F0 tracker to switch between tracking the pitch of the voice and that of an instrument of comparable strength, resulting...