Results 1 - 10 of 40
Universal Recognition of Three Basic Emotions in Music
Cited by 27 (2 self)
It has long been debated which aspects of music perception are universal and which are developed only after exposure to a specific musical culture [1-5]. Here, we report a cross-cultural study with participants from a native African population (Mafa) and Western participants, with both groups being naive to the music of the other respective culture. Experiment 1 investigated the ability to recognize three basic emotions (happy, sad, scared/fearful) expressed in Western music. Results show that the Mafas recognized happy, sad, and scared/fearful Western music excerpts above chance, indicating that the expression of these basic emotions in Western music can be recognized universally. Experiment 2 examined how a spectral manipulation of original, naturalistic music affects the perceived pleasantness of music in Western as well as in Mafa listeners. The spectral manipulation ...
Collaboration Perspectives for Folk Song Research and Music Information Retrieval: The Indispensable Role of Computational Musicology
- Journal of Interdisciplinary Music Studies, season 200x, volume x, issue x, art. #xxxxxx, pp. xx-xx
Cited by 7 (1 self)
Aesthetic judgments of music in experts and laypersons – an ERP study
- Int. J. Psychophysiol., 2010
Cited by 5 (2 self)
We investigated whether music experts and laypersons differ with regard to the aesthetic evaluation of musical sequences. 16 music experts and 16 music laypersons judged the aesthetic value (beauty judgment task) as well as the harmonic correctness (correctness judgment task) of chord sequences. The sequences consisted of five chords, with the final chord sounding congruous, ambiguous or incongruous relative to the harmonic context established by the preceding four chords. On behavioural measures, few differences were observed between experts and laypersons. However, several differences in event-related potential (ERP) parameters were observed between experts and laypersons in the auditory, cognitive and aesthetic processing of chord cadences. First, established ERP effects known to reflect the processing of harmonic rule violations were investigated. Here, differences between the groups were observed in the processing of the mild violation; experts and laypersons also differed in their early brain responses to the beginning of the chord sequence. Furthermore, ERP data indicated distinctions between experts and laypersons in aesthetic evaluation at three different stages. Firstly, during the interval of task-cue presentation, a stronger contingent negative variation (CNV) to the beauty judgment task was observed for experts, indicating that experts invest more effort into preparation for aesthetic processes than into correctness judgments. Secondly, during the first four chords, preparation for the correctness judgment required more exertion on the laypersons' side. Thirdly, during the last chord, laypersons showed a larger late and widespread positivity for the beauty compared to the correctness judgment, indicating a stronger reliance on internal affective states while forming a judgment.
Scale-free music of the brain
- PLoS ONE
Cited by 4 (1 self)
Background: There is growing interest in the relation between the brain and music. The appealing similarity between brainwaves and the rhythms of music has motivated many scientists to seek a connection between them. A variety of transfer rules have been utilized to convert brainwaves into music, most of them based mainly on spectral features of the EEG. Methodology/Principal Findings: In this study, audibly recognizable scale-free music was deduced from individual electroencephalogram (EEG) waveforms. The translation rules include a direct mapping from the period of an EEG waveform to the duration of a note, a logarithmic mapping of the change in average power of the EEG to music intensity according to Fechner's law, and a scale-free mapping from the amplitude of the EEG to music pitch according to the power law. To show the actual effect, we applied the deduced sonification rules to EEG segments recorded during rapid-eye-movement (REM) sleep and slow-wave sleep (SWS). The resulting music is vivid and differs between the two mental states: the melody during REM sleep sounds fast and lively, whereas that in SWS sounds slow and tranquil. 60 volunteers evaluated 25 music pieces, 10 from REM, 10 from SWS and 5 from white noise (WN); 74.3% experienced a happy emotion from REM and felt bored and drowsy when listening to SWS, and the average identification accuracy across all the music pieces was 86.8% (k = 0.800, P < 0.001). We also applied the method to EEG data from eyes closed, eyes open and epileptic EEG ...
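The three mapping rules in this abstract (waveform period to note duration, log of average power to intensity per Fechner's law, amplitude to pitch per a power law) can be illustrated in a few lines of Python. This is a minimal sketch under stated assumptions, not the authors' implementation: the zero-crossing segmentation, the MIDI-style pitch range, and the power-law exponent are all hypothetical choices made for the example.

```python
import numpy as np

def eeg_to_notes(eeg, fs, pitch_min=36, pitch_max=96, exponent=1.0):
    """Sketch of EEG sonification rules (parameters hypothetical).

    eeg: 1-D array of EEG samples; fs: sampling rate in Hz.
    Returns a list of (pitch, duration_s, intensity) tuples,
    one per quasi-periodic segment between zero crossings.
    """
    # Delimit segments of the waveform at its zero crossings.
    crossings = np.where(np.diff(np.signbit(eeg).astype(int)))[0]
    notes = []
    for start, end in zip(crossings[:-1], crossings[1:]):
        segment = eeg[start:end]
        if len(segment) < 2:
            continue
        # Direct mapping: period of the waveform -> note duration.
        duration = (end - start) / fs
        # Fechner's law: perceived intensity grows with the log of power.
        power = np.mean(segment ** 2)
        intensity = np.log10(power + 1e-12)
        # Power-law mapping: segment amplitude -> pitch in a fixed range.
        amplitude = np.max(np.abs(segment))
        pitch = pitch_min + (pitch_max - pitch_min) * min(amplitude, 1.0) ** exponent
        notes.append((pitch, duration, intensity))
    return notes
```

The resulting (pitch, duration, intensity) triples could then be rendered as MIDI notes; the actual amplitude normalization and scale quantization used by the authors are not specified in this snippet.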
Fear across the senses: brain responses to music, vocalizations and facial expressions
- Soc. Cogn. Affect. Neurosci., 2014
Cited by 2 (0 self)
Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means to express emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing biologically relevant emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related functional magnetic resonance imaging study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness and happiness, as well as neutral) expressed through faces, non-linguistic vocalizations and short novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant blood oxygen level-dependent signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in the processing of musical emotions might be shared with the one that has evolved for vocalizations. Overall, our results show that processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information.
Musicality: Instinct or Acquired Skill?
2012
Cited by 2 (0 self)
Is the human tendency toward musicality better thought of as the product of a specific, evolved instinct or an acquired skill? Developmental and evolutionary arguments are considered, along with issues of domain-specificity. The article also considers the question of why humans might be consistently and intensely drawn to music if musicality is not in fact the product of a specifically evolved instinct.
A computational approach to the modeling and employment of cognitive units of folk song melodies using audio recordings
- In Proceedings of the 11th International Conference on Music Perception and Cognition, 2010
Cited by 1 (0 self)
We present a method to classify audio recordings of folk songs into tune families. For this, we segment both the query recording and the recordings in the collection. The segments can be used to relate recordings to each other by evaluating the recurrence of similar melodic patterns. We compare a segmentation that results in what can be considered cognitive units to a segmentation into segments of fixed length. It appears that the use of 'cognitive' segments results in higher classification accuracy. BACKGROUND: Large collections of monophonic folk song recordings are interesting from a music cognition perspective since they represent musical performances of common people. Most people share a 'common core of musical knowledge' (Peretz 2006: section 2). Since recorded folk songs were sung from memory, knowledge about the process of remembering and reproducing melodies can be used to employ these recordings in the context of folk song research, music information retrieval or music cognition studies. This study combines ideas and approaches from ethnomusicology, music cognition and computer science. One of the research questions of ethnomusicology is how melodies in an oral tradition relate to each other (Nettl 2005, chapter 9). Understanding the way melodies change in oral transmission involves understanding the encoding of melodies in, and the reproduction of melodies from, human memory. Cognitive studies indicate that melodies are not reproduced note by note, but as a sequence of higher-level musical units, or chunks. The computational methods we use enable a data-rich, empirical approach to the study of segmentation and similarity of melodies (Clarke and Cook 2004).
The current study has been performed in the context of a music information retrieval project that aims to design a search engine for folk song melodies. The two main questions in this paper are whether the recurrence of audio segments can be exploited to classify a folk song recording into the correct tune family, and whether the use of cognitively and musically meaningful audio segments yields better classification performance than the use of fixed-length audio segments. Our classification method consists of four stages: pitch extraction, segmentation, selection of representative segments for each tune family, and classification using these representative segments. These four stages are described in the next sections. The main idea for segmentation we employ in this paper is to take breathing and pauses during singing as segment boundaries, which can be conceived of as chunk boundaries. Thus, segmentation results in musically and cognitively meaningful units. We do not assume a one-to-one relation between these breathing and pause boundaries on the one hand and chunk boundaries on the other, but we do assume a strong relationship. Contribution: We widen the scope of automatic folk song classification by using audio recordings rather than symbolic data. To our knowledge, this is the first study in which aspects of folk song performance (breathing and pauses) are used to mark segment boundaries, and the first computational study that models a tune family by its most representative recurring segments.
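The segmentation-and-matching idea described in this abstract can be sketched as follows: split a frame-wise pitch track at pauses, then assign a query to the tune family whose representative segments it matches best. All function names, the pause threshold, and the crude mean-pitch distance below are hypothetical stand-ins; the paper's actual pitch extraction and melodic similarity measures are not specified in this snippet.

```python
import numpy as np

def segment_at_pauses(pitch, min_pause=5):
    """Split a frame-wise pitch track (0 = unvoiced/pause) into sung phrases.
    A pause of at least `min_pause` frames marks a segment boundary
    (the threshold is a hypothetical value for this sketch)."""
    segments, current, silence = [], [], 0
    for p in pitch:
        if p > 0:
            current.append(p)
            silence = 0
        else:
            silence += 1
            if silence >= min_pause and current:
                segments.append(np.array(current))
                current = []
    if current:
        segments.append(np.array(current))
    return segments

def classify(query_segments, representatives):
    """Assign the query to the tune family whose representative segments
    it matches best, using a crude truncated mean-pitch distance as a
    stand-in for the melodic-pattern similarity described in the abstract."""
    def dist(a, b):
        n = min(len(a), len(b))
        return np.mean(np.abs(a[:n] - b[:n]))
    best_family, best_score = None, float("inf")
    for family, segs in representatives.items():
        # Average, over query segments, of the best match among representatives.
        score = np.mean([min(dist(q, s) for s in segs) for q in query_segments])
        if score < best_score:
            best_family, best_score = family, score
    return best_family
```

In the study itself the representative segments per tune family are selected from the collection by their recurrence; here `representatives` is simply assumed to be given as a dict mapping family names to lists of pitch arrays.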
Brain disorders and the biological role of music
- Soc. Cogn. Affect. Neurosci., 2014
Cited by 1 (1 self)
Despite its evident universality and high social value, the ultimate biological role of music and its connection to brain disorders remain poorly understood. Recent findings from basic neuroscience have shed fresh light on these old problems. New insights provided by clinical neuroscience concerning the effects of brain disorders promise to be particularly valuable in uncovering the underlying cognitive and neural architecture of music and for assessing candidate accounts of the biological role of music. Here we advance a new model of the biological role of music in human evolution and the link to brain disorders, drawing on diverse lines of evidence derived from comparative ethology, cognitive neuropsychology and neuroimaging studies in the normal and the disordered brain. We propose that music evolved from the call signals of our hominid ancestors as a means mentally to rehearse and predict potentially costly, affectively laden social routines in surrogate, coded, low-cost form: essentially, a mechanism for transforming emotional mental states efficiently and adaptively into social signals. This biological role of music has its legacy today in the disordered processing of music and mental states that characterizes certain developmental and acquired clinical syndromes of brain network disintegration.
Interpreting the fossil evidence for the evolutionary origins of music
- South. Afr. Humanit., 2010
Cited by 1 (0 self)
The adaptive history of two components of music, rhythmic entrained movement and complex learned vocalization, is examined. The development of habitual bipedal locomotion around 1.6 million years ago made running possible and coincided with distinct changes in the vestibular canal dimensions. The vestibular system of the inner ear clearly plays a role in determining rhythm, and therefore bipedalism not only made refined dancing movements possible but also changed rhythmic capabilities. In current scenarios for the evolution of musicality, the descent of the larynx is regarded as pivotal to enabling complex vocalization. However, the larynx descends in chimpanzees as well, for reasons unrelated to vocalization or bipedalism. A new perspective discussed in this paper is that vocal learning capabilities could have evolved from a simple laryngeal vocalization, or grunt. The burgeoning literature on the neuroscience of musical functions is of limited use for investigating the origins of rhythmic and vocalization capabilities, but the out-of-proportion evolution of the cerebellum and prefrontal cortex may be relevant. It is suggested that protomusic was a behavioural feature of Homo ergaster 1.6 million years ago. Protomusic consisted of entrained rhythmical whole-body movements, initially combined with grunts. Homo heidelbergensis, 350 000 years ago, had a brain approaching modern size, an enlarged thoracic canal indicating modern-style breathing control essential for singing, and modern auditory capability, as is evident from the modern configuration of the middle ear. The members of this group may have been capable of producing complex learned vocalizations and thus modern music, in which voluntary synchronized movements are combined with consciously manipulated melodies.
Effects of Culture on Musical Pitch Perception
"... The strong association between music and speech has been supported by recent research focusing on musicians ’ superior abilities in second language learning and neural encoding of foreign speech sounds. However, evidence for a double association—the influence of linguistic background on music pitch ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
The strong association between music and speech has been supported by recent research focusing on musicians' superior abilities in second language learning and neural encoding of foreign speech sounds. However, evidence for a double association, the influence of linguistic background on music pitch processing and disorders, remains elusive. Because languages differ in their usage of elements (e.g., pitch) that are also essential for music, a unique opportunity for examining such language-to-music associations comes from a cross-cultural (linguistic) comparison of congenital amusia, a neurogenetic disorder affecting the music (pitch and rhythm) processing of about 5% of the Western population. In the present study, two populations (Hong Kong and Canada) were compared. One spoke a tone language in which differences in voice pitch correspond to differences in word meaning (in Hong Kong Cantonese, /si/ means 'teacher' and 'to try' when spoken in a high and mid pitch pattern, respectively). Using the On-line Identification Test of Congenital Amusia, we found that Cantonese speakers as a group tend to show enhanced pitch perception ability compared to speakers of Canadian French and English (non-tone languages). This enhanced ability occurs in the absence of differences in rhythmic perception and persists even after relevant factors such as musical background and age were controlled. Following a common definition of amusia (5% of the population), we found that Hong Kong pitch amusics also show enhanced pitch abilities relative to their ...