Unlikely Neuropsychological Explanations for Musical Agnosia

Recent research attempts to establish how different regions of brain anatomy are implicated in “musical agnosia,” that is, the loss of the ability to recognize music that once was familiar to the patient.  The basic theory is that musical cognition is not mediated by a single mechanism or by a combination of independent processes.  Rather, it is a specialized function localized to different anatomical regions of the brain: the left hemisphere for rhythm (temporally-patterned) processing and the right hemisphere (in particular, the right superior temporal gyrus) for melody (non-temporal, holistic) processing (Alossa & Castelli, 2009).

Alossa & Castelli cite four studies in support of this account.  The research protocols used in those studies are dubious.  Furthermore, the studies are overwhelmed by at least four confounding variables, and their conclusions cannot be supported absent further research.  Before explaining the confounds, I will briefly look at each study.

1.  Peretz (1990) used musical sequences that “were tonally structured and made up of two phrases,” following compositional principles developed by the fin de siècle Viennese composer Arnold Schoenberg.  In half of the trials the first phrase was played in double-time and the second phrase in triple-time.  In the other half only the second phrase was played; it was 4 bars long, lasted 4 s and comprised from 8 to 19 tones.

2.  Zatorre & Samson (1991) measured differential response to “target” tones and “comparison” tones, all constructed from sawtooth waveforms.  The target tones were randomly chosen notes between middle C and B.  The comparison tones were randomly chosen notes from the next-higher or next-lower octave; none was repeated within any one series.  The target tones lasted 325 ms and the comparison tones 162.5 ms.  There were 72 trials, presented in random order.  In 36 of them the comparison tone was the same as the target tone; in the other 36 it was different, varying by 1, 2 or 3 notes.
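The trial structure described above can be made concrete with a small simulation.  This is an illustrative sketch only: the function and field names are hypothetical, not taken from the original paper, and it reproduces only the counts and constraints reported (72 trials, half same, half differing by 1 – 3 notes, random presentation order).

```python
import random

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
TARGET_MS, COMPARISON_MS = 325, 162.5  # durations reported in the study

def build_trials(seed=0):
    """Build a hypothetical 72-trial same/different series."""
    rng = random.Random(seed)
    trials = []
    for i in range(72):
        target = rng.choice(NOTES)       # randomly chosen target note
        if i < 36:
            offset = 0                   # "same" trials
        else:
            offset = rng.choice([-3, -2, -1, 1, 2, 3])  # off by 1-3 notes
        trials.append({"target": target, "offset": offset})
    rng.shuffle(trials)                  # presented in random order
    return trials

trials = build_trials()
same = sum(1 for t in trials if t["offset"] == 0)
print(same, len(trials) - same)          # prints "36 36"
```

Even in sketch form, this makes the confound discussed below visible: the only stimulus dimension that varies is pitch; rhythm, timbre and envelope are held constant by construction.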

3.  Zatorre et al. (1994) used “melodies” and “noise bursts.”  They prepared 16 different 8-note melodies, all of which had the same rhythmic configuration and timbre, but used different notes.  The “noise bursts” were constructed so as to approximate the characteristics of the melodies (variables such as number, duration and volume of notes, and inter-stimulus presentation rate).  Each noise burst was matched to the notes of the corresponding melody by shaping its onsets and offsets to approximate the amplitude envelopes of the musical tones.  Everything was played back at the same volume.
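The envelope-matching manipulation — shaping each noise burst’s onset and offset to approximate the amplitude envelope of the corresponding musical tone — can be sketched as follows.  This assumes a simplified linear attack/release envelope purely for illustration; the actual stimuli would have been matched far more carefully.

```python
import random

def linear_envelope(n_samples, attack_frac=0.1, release_frac=0.2):
    """Simple attack/release amplitude envelope, values in 0..1."""
    attack = int(n_samples * attack_frac)
    release = int(n_samples * release_frac)
    env = []
    for i in range(n_samples):
        if i < attack:
            env.append(i / attack)                    # ramp up (onset)
        elif i >= n_samples - release:
            env.append((n_samples - i) / release)     # ramp down (offset)
        else:
            env.append(1.0)                           # sustain
    return env

def shaped_noise(n_samples, seed=0):
    """White noise given the same envelope a matched tone would have."""
    rng = random.Random(seed)
    env = linear_envelope(n_samples)
    return [rng.uniform(-1.0, 1.0) * e for e in env]

burst = shaped_noise(1000)
```

The point of the manipulation is that the noise burst then shares the tone’s gross temporal profile (onset, sustain, offset) while carrying no pitch information.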

4.  Liégeois-Chauvel et al. (1998) created a suite of different rhythmic and melodic tests.  The first used “familiar musical excerpts,” taken from “pre-existing vocal and instrumental pieces.”  The rest used “novel musical sequences” that were “tonally structured,” again following the compositional principles devised by Schoenberg.  They “approximated familiar stimuli structures while failing to evoke a sense of familiarity.”  The melodies were slightly altered as they were presented to the experimental subjects over different trials.  Each was performed at a slightly different tempo, manipulated by “interchanging the time values of two adjacent notes” while “keeping the metre and the total number of sounds identical.”

Here are the confounding variables:

(1)  There is a profound difference between musical compositions and sound recordings.  A musical composition is the underlying song (in the case of pop music), comprising the music and lyrics.  In the case of jazz it may be the theme; in the case of chamber or orchestral music, the score.  A sound recording, on the other hand, is a performance of the composition in a particular instance.  It is one iteration of the composition; there could be, and frequently are, many others.

For example, Lennon & McCartney wrote the song “Yesterday,” which was performed by the Beatles.  The proprietor of the master sound recording – that is, the Beatles performing “Yesterday” – is EMI Records.  Rights to the underlying composition, however, are owned by the music publisher (a peculiar joint venture between Sony Music and the Estate of Michael Jackson).  Every time somebody performs the composition, the music publisher collects a royalty.  Thus, when the 101 Strings Orchestra covers “Yesterday,” the music publisher gets a royalty, whereas EMI Records gets nothing.  Many different versions of “Yesterday” have been performed over the years, all of which earn royalties for the music publisher, but none for EMI Records.

It stands to reason that the nature, manner and style of the performance of the underlying musical composition will significantly influence the way in which the listener perceives it.  Conversely, the elements of the underlying musical composition will significantly constrain the manner in which it is performed.  There potentially could be thousands of different arrangements of these variables.  This significantly undermines the experimental validity of Peretz (1990) and Liégeois-Chauvel et al. (1998), both of which peculiarly relied on stimuli modeled on the already-idiosyncratic compositional style of Schoenberg.

(2)  Zatorre & Samson (1991) studied only melody.  Musical performances, however, comprise many elements in addition to melody, all of which the brain considers simultaneously.  At a minimum these include rhythm, tempo, the shape of the envelopes of individual notes (attack, decay, sustain and release), variations in tonal texture and timbre, and other variations in performance style.  A musical performance results in a complex sound whose information is scattered across the perceptual spectrum; the study ignored all of these factors.  It also used only one kind of waveform (a sawtooth).  There are other waveforms which, together or in isolation, form the basic elements of sounds, such as sine waves or square waves (so called because of the way they appear on an oscilloscope).  The researchers might have obtained different results had they used these instead.
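The waveform distinction is easy to make concrete.  A minimal sketch, using only the Python standard library, of one cycle each of the three waveforms mentioned (phase expressed as a fraction of a cycle, 0..1):

```python
import math

def sine(phase):
    """Smooth sinusoid: a single pure harmonic."""
    return math.sin(2 * math.pi * phase)

def square(phase):
    # Jumps between +1 and -1, hence the rectangular trace on an oscilloscope
    return 1.0 if phase % 1.0 < 0.5 else -1.0

def sawtooth(phase):
    # Ramps linearly from -1 to +1 once per cycle, then snaps back
    return 2.0 * (phase % 1.0) - 1.0

# Sampling the same phase shows how differently the three waves behave
for p in (0.25, 0.75):
    print(round(sine(p), 3), square(p), sawtooth(p))
```

Because these waveforms have very different harmonic content (a sine has one harmonic, a square only odd harmonics, a sawtooth all harmonics), stimuli built from one waveform alone sample only a narrow slice of timbral space, which is the point of the criticism above.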

(3)  Zatorre et al. (1994) attempted to improve on the earlier experimental design by also incorporating rhythmic elements.  While they made some attempt to match the envelope and other characteristics of the noise bursts to those of the notes, they did not consider any of the other experimental variables identified above.  In particular, they used only one timbre (a “guitar” tone), failing to account for the possible influence of other tones and timbres, which may have yielded different experimental results.

(4)  It is now generally accepted that the brain is more analogous to a parallel processor of information than to one in which information is relayed serially through a neural network (Moors & De Houwer, 2006).  The story is completely different, though, if a behavioral response is required, such as intentional or goal-directed behavior.  Then the brain must determine what information is relevant and signal the motor cortex to execute an appropriate action.  A mediating process most likely takes place in the basal ganglia, which act as a gating threshold: the signal is amplified and the motor cortex activated if the task is relevant to the objective, or inhibited if not (Szücs et al., 2009).  All four studies got this backwards.  They were premised on the assumption that the experimental subjects had to act.  While this may be true, e.g., in the case of a musician playing an instrument, it was not the case here.  All the experimental subjects had to do was listen.  This makes it far less likely that any hypothesis based on localization is valid.

References

Alossa, N. & Castelli, L. (2009).  “Amusia and Musical Functioning.”  Eur. Neurol. 61, 269 – 277.

Liégeois-Chauvel, C., Peretz, I., Babaï, M., Laguitton, V. & Chauvel, P. (1998).  “Contribution of different cortical areas in the temporal lobes to music processing.”  Brain 121, 1853 – 1867.

Moors, A. & De Houwer, J. (2006).  “Automaticity: A theoretical and conceptual analysis.”  Psychological Bulletin 132 (2), 297 – 326.

Peretz, I. (1990).  “Processing of local and global musical information by unilateral brain-damaged patients.”  Brain 113, 1185 – 1205.

Szücs, D., Soltész, F., Bryce, D. & Whitebread, D. (2009).  “Activation and Response Competition in a Stroop Task in Young Children: A Lateralized Readiness Potential Study.”  J. Cognitive Neurosci. 21 (11), 2195 – 2206.

Zatorre, R. & Samson, S. (1991).  “Role of the right temporal neocortex in retention of pitch in auditory short-term memory.”  Brain 114, 2403 – 2417.

Zatorre, R., Evans, A. & Meyer, E. (1994).  “Neural mechanisms underlying melodic perception and memory for pitch.”  J. Neurosci. 14, 1908 – 1919.


David Kronemyer