Results 1 - 10 of 60,786

Table 7: Recognition accuracy of music

in Affective Expressions of Machines
by Christoph Bartneck, 2001
Cited by 6

TABLE 6. DURATION OF MUSICAL NOTE

in unknown title
by unknown authors
"... In PAGE 7: ... MTERA COMMON SENSE 33 TABLE 5. MEASURING INSTRUMENT COMPONENTS 43 TABLE6 . DURATION OF MUSICAL NOTE 48 TABLE 7.... ..."

Table 1 State structure model for the Manage One User Music role and the Alloy code.

in Precise Graphical Representation of Roles in Requirements Engineering
by Pavel Balabko, Alain Wegmann
"... In PAGE 5: ... This means that this example, generated by the Alloy Constraint Analyzer, could have maximum two elements of each type (except the UserMusic type) and one element of the UserMusic type. Having only one element of the UserMusic type corresponds to the fact that the Manage One User Music role includes one UserMusic attribute (see diagram in Table1 ). The Alloy Constraint Analyzer allows for the generation of several different examples for the same model and the same scope.... ..."

Table 2: Testing music pieces for rhythm recognition

in Automatic Rhythm Transcription from Multiphonic MIDI Signals
by Haruto Takeda, Takuya Nishimoto, Shigeki Sagayama
"... In PAGE 2: ... After estimating the tempo, note values are estimated again. 5 Experimental Evaluation The proposed method was evaluated by using 3 classical mu- sic pieces listed in Table2 recorded in the MIDI format, which were performed 2 times by 5 players for each piece. 19 kinds 0.... ..."

Table 8 State Structure Composition for the Manage Multiple User Music and Manage Multiple Artist roles.

in Doctoral thesis, École Polytechnique Fédérale de Lausanne, supervised by Prof. Alain Wegmann
by Pavel Balabko; jury members: Prof. Colin Atkinson, Prof. Christopher Tucci, Frederic Bouchet
"... In PAGE 66: ... All these models give us the state structure model of a Simple Music Management System. At this point we can create an Alloy code (see Table8 ) that reflects the composition of the two roles and ... In PAGE 66: ... We have to reflect in the Alloy code new relations between the AtristTracks and UserTrack types and between the Album and UserAlbumTrack types. To make a new relation between two already specified attributes in the current version of Alloy, we have to extend existing types: UserTrack and UserAlbum (lines 7-8 in Table8 ). To ensure that new attributes (UserTrack1 and UserAlbum1) and their predecessors participate in the same relations we created two facts (line 9-10 in Table 8).... In PAGE 66: ... To make a new relation between two already specified attributes in the current version of Alloy, we have to extend existing types: UserTrack and UserAlbum (lines 7-8 in Table 8). To ensure that new attributes (UserTrack1 and UserAlbum1) and their predecessors participate in the same relations we created two facts (line 9-10 in Table8 ). Line 11 in Table 8 specifies the multiplicity invariant.... In PAGE 66: ... Line 11 in Table 8 specifies the multiplicity invariant. Note that this multiplicity is specified in the context of one user (as it is shown in a diagram from Table8 ): we require that for any artist album there is only one user album in the context of a given user music. When we create an Alloy code, we have to take into account possible conceptual cycles .... In PAGE 67: ... 5.4 Analysis of the Composed Model Based on the Alloy code from Table8 , several instance diagrams can be generated. One of these diagrams is shown in Figure 29.... ..."

Table 3.3: File Structure (columns: MUSICAL STRUCTURE, FILE STRUCTURE)

in Contents
by unknown authors

Table 2: Music parameter settings for several emotions

in Affective Expressions of Machines
by Christoph Bartneck, 2001
Cited by 6

Table 2: Features and classifiers used in some speech/music and general audio classifiers

in Computational Auditory Scene Recognition
by Vesa Peltonen, 2002
"... In PAGE 13: ...summarized in Table2 . The recognition rates listed in the table are not comparable as such, due to the lack of a common test bench, differences in the duration of the classified audio excerpts (frame-based vs.... In PAGE 13: ... Speech and music have quite different temporal characteristics and probability density functions (pdf). Therefore, it is not very difficult to reach a relatively high discrimination accuracy, as can be noticed from Table2 , only a few approaches reported discrimination accuracy below 90 %. Figure 2 illustrates the difference between speech and music: the waveforms and sample histograms of speech and music signals are plotted side-by-side.... In PAGE 64: ...2 %. Table2 0: Recognition rates for six meta-classes Class Number of samples Recognition rate Example Private 33 66.7 % home, office Public 25 72 % restaurant, shop Car 18 88.... ..."
Cited by 23

Table 1: Recognition results for the music score of Figure 1.

in An Optical Notation Recognition System for Printed Music based on template matching and high level reasoning
by S.-E. Fotinea, G. Giakoupis, A. Liveris, S. Bakamidis, G. Carayannis
"... In PAGE 5: ... An example is presented below, for a music score seen in Figure 1 (scanned image at 240 dpi). In Table1 recognition results for both methodologies are presented in tabular form, where #_ appear corresponds to the number of times the symbols appeared in the original music score, #_recogn_i corresponds to how many times the corresponding symbol has been correctly recognized, %_recogn_i denotes the recognition results and i=1,2 depending on the methodology. Total recognition percentages for both methodologies are weighted over the frequency of symbols appearance.... ..."

Table 2. Recognition rate of the speech/music discriminator

in Rhythm Detection For Speech-Music Discrimination In MPEG Compressed Domain
by Roman Jarina, Noel O'Connor, Sen Marlow, Noel Murphy, 2002
"... In PAGE 3: ... TEST RESULTS The discriminator was tested separately for all signal groups from the audio database. The results are shown in Table2 . The results taken from [12] are in the third column in Table 2, where just two features Lm (max.... In PAGE 3: ... The results are shown in Table 2. The results taken from [12] are in the third column in Table2 , where just two features Lm (max. peak width) and R (rate of peaks) were used for discrimination.... ..."
Cited by 5