CiteSeerX

Results 1 - 10 of 202

1 A Mid-Level Representation for Melody-based Retrieval in Audio Collections

by Matija Marolt
"... Abstract — Searching audio collections using high-level musical descriptors is a difficult problem, due to the lack of reliable methods for extracting melody, harmony, rhythm, and other such descriptors from unstructured audio signals. In the paper, we present a novel approach to melody-based retrie ..."
Abstract
-based retrieval in audio collections. Our approach supports audio, as well as symbolic queries and ranks results according to melodic similarity to the query. We introduce a beat-synchronous melodic representation consisting of salient melodic lines, which are extracted from the analyzed audio signal. We propose

2 CROSS-CORRELATION OF BEAT-SYNCHRONOUS REPRESENTATIONS FOR MUSIC SIMILARITY

by Daniel P. W. Ellis, Courtenay V. Cotton, Michael I. M
"... Systems to predict human judgments of music similarity directly from the audio have generally been based on the global statistics of spectral feature vectors i.e. collapsing any large-scale temporal structure in the data. Based on our work in identifying alternative (“cover”) versions of pieces, we ..."
Abstract - Cited by 11 (1 self)
investigate using direct correlation of beat-synchronous representations of music audio to find segments that are similar not only in feature statistics, but in the relative positioning of those features in tempo-normalized time. Given a large enough search database, good matches by this metric should have

3 AUTOMATIC BEAT-SYNCHRONOUS GENERATION OF MUSIC LEAD SHEETS

by unknown authors
"... Most of the popular music scores are written in a specific format, the lead sheet format. It sums up a song by representing the notes of the main melody, along with the chord sequence together with other cues such as style, tempo and time signature. This sort of representation is very common in jazz ..."
Abstract
Most of the popular music scores are written in a specific format, the lead sheet format. It sums up a song by representing the notes of the main melody, along with the chord sequence together with other cues such as style, tempo and time signature. This sort of representation is very common

4 Identifying ‘Cover Songs’ with Beat-Synchronous Chroma Features

by unknown authors
"... Large music collections, ranging from thousands to millions of tracks, are unsuited to manual searching, motivating the development of automatic search methods. When two musical groups perform the same underlying song or piece, these are known as ‘cover ’ versions. We describe a system that attempts ..."
Abstract
supporting each semitone of the octave. To compare two recordings, we simply cross-correlate the entire beat-by-chroma representation for two tracks and look for sharp peaks indicating good local alignment between the pieces. Evaluation on a small set of 15 pairs of pop music cover versions identified within
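The matching step described in this snippet — cross-correlating whole beat-by-chroma matrices and looking for sharp peaks — can be illustrated with a minimal numpy sketch. The function name, the per-beat normalization, and the explicit search over all 12 chroma rotations (transpositions) are our own additions, not the authors' exact formulation:

```python
import numpy as np

def cover_song_score(chroma_a, chroma_b):
    """Peak cross-correlation between two beat-synchronous chroma
    matrices (shape 12 x n_beats), maximized over all 12 chroma
    rotations (transpositions) and all beat lags.

    Hypothetical simplification of the approach in the snippet above.
    """
    # Normalize each beat's chroma vector so correlation is scale-invariant.
    a = chroma_a / (np.linalg.norm(chroma_a, axis=0, keepdims=True) + 1e-9)
    b = chroma_b / (np.linalg.norm(chroma_b, axis=0, keepdims=True) + 1e-9)
    best = 0.0
    for rot in range(12):                    # try every transposition
        b_rot = np.roll(b, rot, axis=0)
        # Correlate along the beat axis, summed over the 12 chroma bins.
        corr = sum(np.correlate(a[i], b_rot[i], mode="full")
                   for i in range(12))
        corr /= min(a.shape[1], b.shape[1])  # rough overlap normalization
        best = max(best, float(corr.max()))
    return best
```

Because every chroma rotation is tried, a transposed copy of a track scores the same as the track itself; the sharpness of the peak, rather than a fixed threshold, is what signals a good local alignment.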

5 MUSIC STRUCTURE ANALYSIS BY RIDGE REGRESSION OF BEAT-SYNCHRONOUS AUDIO FEATURES

by Yannis Panagakis, Constantine Kotropoulos
"... A novel unsupervised method for automatic music structure analysis is proposed. Three types of audio features, namely the mel-frequency cepstral coefficients, the chroma features, and the auditory temporal modulations are employed in order to form beat-synchronous feature sequences modeling the audi ..."
Abstract
A novel unsupervised method for automatic music structure analysis is proposed. Three types of audio features, namely the mel-frequency cepstral coefficients, the chroma features, and the auditory temporal modulations are employed in order to form beat-synchronous feature sequences modeling

6 Representation and Synthesis of Melodic Expression

by Christopher Raphael
"... A method for expressive melody synthesis is presented seeking to capture the prosodic (stress and directional) element of musical interpretation. An expressive performance is represented as a notelevel annotation, classifying each note according to a small alphabet of symbols describing the role of ..."
Abstract
A method for expressive melody synthesis is presented seeking to capture the prosodic (stress and directional) element of musical interpretation. An expressive performance is represented as a note-level annotation, classifying each note according to a small alphabet of symbols describing the role of the note within a larger context. An audio performance of the melody is represented in terms of two time-varying functions describing the evolving frequency and intensity. A method is presented that transforms the expressive annotation into the frequency and intensity functions, thus giving the audio performance. The problem of expressive rendering is then cast as estimation of the most likely sequence of hidden variables corresponding to the prosodic annotation. Examples are presented on a dataset of around 50 folk-like melodies, realized both from hand-marked and estimated annotations.

7 Symbolic and Structural Representation of Melodic Expression

by Christopher Raphael - in ISMIR, 2009
"... A method for expressive melody synthesis is presented seeking to capture the structural and prosodic (stress, direction, and grouping) elements of musical interpretation. The interpretation of melody is represented through a hierarchical structural decomposition and a note-level prosodic annotation. ..."
Abstract - Cited by 1 (0 self)
A method for expressive melody synthesis is presented seeking to capture the structural and prosodic (stress, direction, and grouping) elements of musical interpretation. The interpretation of melody is represented through a hierarchical structural decomposition and a note-level prosodic annotation. An audio performance of the melody is constructed using the time-evolving frequency and intensity functions. A method is presented that transforms the expressive annotation into the frequency and intensity functions, thus giving the audio performance. In this framework, the problem of expressive rendering is cast as estimation of structural decomposition and the prosodic annotation. Examples are presented on a dataset of around 50 folk-like melodies, realized both from hand-marked and estimated annotations.

8 Melodic analysis with segment classes

by Darrell Conklin, 2006
"... This paper presents a representation for melodic segment classes and applies it to music data mining. Melody is modeled as a sequence of segments, each segment being a sequence of notes. These segments are assigned to classes through a knowledge representation scheme which allows the flexible constr ..."
Abstract - Cited by 19 (8 self)
This paper presents a representation for melodic segment classes and applies it to music data mining. Melody is modeled as a sequence of segments, each segment being a sequence of notes. These segments are assigned to classes through a knowledge representation scheme which allows the flexible

9 Probabilistic Models for Melodic Prediction

by Jean-François Paiement, Samy Bengio, Douglas Eck, 2008
"... Chord progressions are the building blocks from which tonal music is constructed. The choice of a particular representation for chords has a strong impact on statistical modeling of the dependence between chord symbols and the actual sequences of notes in polyphonic music. Melodic prediction is use ..."
Abstract - Cited by 4 (0 self)
Chord progressions are the building blocks from which tonal music is constructed. The choice of a particular representation for chords has a strong impact on statistical modeling of the dependence between chord symbols and the actual sequences of notes in polyphonic music. Melodic prediction

10 Using Transportation Distances for Measuring Melodic Similarity

by Rainer Typke, Panos Giannopoulos, Remco C. Veltkamp, Frans Wiering, René Van Oostrum - in ISMIR Proceedings, 2003
"... Most of the existing methods for measuring melodic similarity use one-dimensional textual representations of music notation, so that melodic similarity can be measured by calculating editing distances. We view notes as weighted points in a two-dimensional space, with the coordinates of the poin ..."
Abstract - Cited by 54 (10 self)
Most of the existing methods for measuring melodic similarity use one-dimensional textual representations of music notation, so that melodic similarity can be measured by calculating editing distances. We view notes as weighted points in a two-dimensional space, with the coordinates
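The transportation-distance idea in this snippet — notes as weighted points in a two-dimensional (time, pitch) plane — can be sketched as a small Earth Mover's Distance solved by linear programming. This is an illustrative simplification using scipy; the function name, the Euclidean ground distance, and the weight normalization are assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

def melody_emd(notes_a, notes_b):
    """Earth Mover's Distance between two melodies.

    Each melody is a list of (onset_time, pitch, weight) triples --
    notes as weighted points in a 2-D (time, pitch) plane, with weight
    e.g. the note duration. Weights are normalized so both melodies
    carry equal total mass. Hypothetical sketch, not the paper's method.
    """
    a = np.asarray(notes_a, dtype=float)
    b = np.asarray(notes_b, dtype=float)
    w = a[:, 2] / a[:, 2].sum()            # normalized supply weights
    u = b[:, 2] / b[:, 2].sum()            # normalized demand weights
    # Ground distance: Euclidean distance in the (time, pitch) plane.
    d = np.linalg.norm(a[:, None, :2] - b[None, :, :2], axis=2)
    n, m = len(w), len(u)
    # One flow variable per (i, j) pair; minimize total transport cost.
    c = d.ravel()
    A_eq, b_eq = [], []
    for i in range(n):                      # each source ships all its mass
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row); b_eq.append(w[i])
    for j in range(m):                      # each sink receives all its mass
        row = np.zeros(n * m)
        row[j::m] = 1.0
        A_eq.append(row); b_eq.append(u[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return res.fun
```

Because mass can be split and shifted, the distance degrades gracefully under small local deviations in onset time or pitch — the property that makes transportation distances attractive for melodic similarity compared with rigid edit distances on textual encodings.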

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University