Results 1 - 10 of 60,786
TABLE 6. DURATION OF MUSICAL NOTE
"... In PAGE 7: ... MTERA COMMON SENSE 33 TABLE 5. MEASURING INSTRUMENT COMPONENTS 43 TABLE6 . DURATION OF MUSICAL NOTE 48 TABLE 7.... ..."
Table 1 State structure model for the Manage One User Music role and the Alloy code.
"... In PAGE 5: ... This means that this example, generated by the Alloy Constraint Analyzer, could have maximum two elements of each type (except the UserMusic type) and one element of the UserMusic type. Having only one element of the UserMusic type corresponds to the fact that the Manage One User Music role includes one UserMusic attribute (see diagram in Table1 ). The Alloy Constraint Analyzer allows for the generation of several different examples for the same model and the same scope.... ..."
Table 2: Testing music pieces for rhythm recognition
"... In PAGE 2: ... After estimating the tempo, note values are estimated again. 5 Experimental Evaluation The proposed method was evaluated by using 3 classical mu- sic pieces listed in Table2 recorded in the MIDI format, which were performed 2 times by 5 players for each piece. 19 kinds 0.... ..."
Table 8 State Structure Composition for the Manage Multiple User Music and Manage Multiple Artist roles.
"... In PAGE 66: ... All these models give us the state structure model of a Simple Music Management System. At this point we can create an Alloy code (see Table8 ) that reflects the composition of the two roles and ... In PAGE 66: ... We have to reflect in the Alloy code new relations between the AtristTracks and UserTrack types and between the Album and UserAlbumTrack types. To make a new relation between two already specified attributes in the current version of Alloy, we have to extend existing types: UserTrack and UserAlbum (lines 7-8 in Table8 ). To ensure that new attributes (UserTrack1 and UserAlbum1) and their predecessors participate in the same relations we created two facts (line 9-10 in Table 8).... In PAGE 66: ... To make a new relation between two already specified attributes in the current version of Alloy, we have to extend existing types: UserTrack and UserAlbum (lines 7-8 in Table 8). To ensure that new attributes (UserTrack1 and UserAlbum1) and their predecessors participate in the same relations we created two facts (line 9-10 in Table8 ). Line 11 in Table 8 specifies the multiplicity invariant.... In PAGE 66: ... Line 11 in Table 8 specifies the multiplicity invariant. Note that this multiplicity is specified in the context of one user (as it is shown in a diagram from Table8 ): we require that for any artist album there is only one user album in the context of a given user music. When we create an Alloy code, we have to take into account possible conceptual cycles .... In PAGE 67: ... 5.4 Analysis of the Composed Model Based on the Alloy code from Table8 , several instance diagrams can be generated. One of these diagrams is shown in Figure 29.... ..."
Table 2: Music parameter settings for several emotions
2001
Cited by 6
Table 2: Features and classifiers used in some speech/music and general audio classifiers
2002
"... In PAGE 13: ...summarized in Table2 . The recognition rates listed in the table are not comparable as such, due to the lack of a common test bench, differences in the duration of the classified audio excerpts (frame-based vs.... In PAGE 13: ... Speech and music have quite different temporal characteristics and probability density functions (pdf). Therefore, it is not very difficult to reach a relatively high discrimination accuracy, as can be noticed from Table2 , only a few approaches reported discrimination accuracy below 90 %. Figure 2 illustrates the difference between speech and music: the waveforms and sample histograms of speech and music signals are plotted side-by-side.... In PAGE 64: ...2 %. Table2 0: Recognition rates for six meta-classes Class Number of samples Recognition rate Example Private 33 66.7 % home, office Public 25 72 % restaurant, shop Car 18 88.... ..."
Cited by 23
Table 1: Recognition results for the music score of Figure 1.
"... In PAGE 5: ... An example is presented below, for a music score seen in Figure 1 (scanned image at 240 dpi). In Table1 recognition results for both methodologies are presented in tabular form, where #_ appear corresponds to the number of times the symbols appeared in the original music score, #_recogn_i corresponds to how many times the corresponding symbol has been correctly recognized, %_recogn_i denotes the recognition results and i=1,2 depending on the methodology. Total recognition percentages for both methodologies are weighted over the frequency of symbols appearance.... ..."
Table 2. Recognition rate of the speech/music discriminator
2002
"... In PAGE 3: ... TEST RESULTS The discriminator was tested separately for all signal groups from the audio database. The results are shown in Table2 . The results taken from [12] are in the third column in Table 2, where just two features Lm (max.... In PAGE 3: ... The results are shown in Table 2. The results taken from [12] are in the third column in Table2 , where just two features Lm (max. peak width) and R (rate of peaks) were used for discrimination.... ..."
Cited by 5