Abstract

Like language, music engagement is universal, complex and present early in life. However, ∼4% of the general population experiences a lifelong deficit in music perception that cannot be explained by hearing loss, brain damage, intellectual deficiencies or lack of exposure. This musical disorder, commonly known as tone-deafness and now termed congenital amusia, affects mostly the melodic pitch dimension. Congenital amusia is hereditary and is associated with abnormal grey and white matter in the auditory cortex and the inferior frontal cortex. In order to relate these anatomical anomalies to the behavioural expression of the disorder, we measured the electrical brain activity of amusic subjects and matched controls while they monitored melodies for the presence of pitch anomalies. In contrast to previous reports, we show that the amusic brain can track quarter-tone pitch differences, exhibiting an early right-lateralized negative brain response. This suggests near-normal neural processing of musical pitch incongruities in congenital amusia. The finding is important because it reveals that the amusic brain is equipped with the essential neural circuitry to perceive fine-grained pitch differences. What distinguishes the amusic from the normal brain is the limited awareness of this ability and the lack of responsiveness to the semitone changes that violate musical keys. These findings suggest that, in the amusic brain, the neural pitch representation cannot make contact with musical pitch knowledge along the auditory-frontal neural pathway.

Introduction

Humans are born with the potential to both speak and make music. For the majority of individuals who are musically untrained, this fundamental human trait is expressed by avid music listening, and occasional dancing and singing. The propensity to engage in music ultimately gives rise to a sophisticated music processing system that is largely acquired implicitly through experience (Peretz, 2006). However, a minority of individuals never acquire this core musical system, either in part or at all. This condition affects approximately 4% of the general population (Kalmus and Fry, 1980) and is termed congenital amusia (Peretz, 2001). The disorder is akin to other developmental disorders, such as congenital prosopagnosia, dyscalculia, dysphasia and dyslexia, and is thought to arise from a deficit in processing musical pitch. In many laboratories, including our own, the test that is most diagnostic of amusia requires participants to detect out-of-key notes in conventional but unfamiliar melodies. A behavioural failure on this test is diagnostic because there is typically no overlap between the distributions of the scores of amusics and controls (Ayotte et al., 2002; Hyde and Peretz, 2005). This musical pitch disorder represents a clear-cut phenotype that serves to identify the associated neuro-genetic factors (Peretz et al., 2007). The goal of the present study was to specify the neural dynamics of this pitch disorder in a musical context.

Various methods can be employed to uncover a possible brain anomaly in amusics. Among these, we have used MRI-based brain structural analyses and electroencephalography (EEG). Both methods were very informative. First, voxel-based morphometry (VBM) was used to detect anatomical differences in amusic brains relative to musically intact brains, by analysing MRIs from two independent samples of subjects (13 amusics from Montreal and 8 amusics from the UK). The results were consistent across samples in highlighting a reduction in white-matter concentration in the right inferior frontal gyrus of amusic individuals (Hyde et al., 2006). However, this VBM study also revealed associated increases in grey matter in the same right inferior frontal gyrus (IFG) region of amusics. The objective of a second study was to better understand this morphological brain anomaly by way of cortical thickness measures that provide a more specific measure of cortical morphology relative to VBM. We found that the same amusic subjects have thicker cortex in the right IFG and the right auditory cortex relative to musically intact controls (Hyde et al., 2007). These cortical thickness differences suggest the presence of cortical malformations in the amusic brain that may have compromised the normal development of a right fronto-temporal pathway.

There is, however, an inherent uncertainty with regard to the conclusions that can be drawn from anatomical–functional relationships. Therefore, it is important that anatomical measurements be supported by functional investigations. To this aim, we measured event-related brain potentials (ERPs) in amusic individuals while they were monitoring five-tone sequences for the presence of a pitch change (Peretz et al., 2005). We found that the amusic brain did not respond to pitch deviances smaller than one semitone, whereas a normal brain did so reliably. In contrast, the amusic brain ‘over-reacted’ to large pitch changes, eliciting an N2 (that was not present in normals) and a P3 that was almost twice as large as that observed in controls’ brains. This altered pattern of electrical activity did not seem to arise from anomalous functioning of the auditory cortex, because the N1 component appeared normal; the altered electrical brain potentials might instead reveal difficulties that occur at later stages along the auditory pathway, such as in the frontal regions. These electrophysiological data are consistent with the anatomical findings in highlighting a possible dysfunction of the right temporo-frontal pathway. Note that the normal N1 did not provide relevant evidence regarding pitch processing. This is because the N1 observed in our 2005 study (as well as here) was only sensitive to the large pitch changes (over one semitone), not to the small pitch changes (such as the quarter-tone distance used here), even in controls. This limited sensitivity of the N1 to pitch distance is not specific to our studies. The observation of tonotopic effects on the N1 requires large (over an octave) pitch differences between tones with relatively long inter-stimulus intervals (over 1 s). In general, the major part of the N1 component reflects the detection of sound transitions, providing information about stimulus onsets and, for long-duration stimuli, also about offsets (Näätänen and Winkler, 1999).

The goal of the present study was to extend these findings, obtained in an acoustical context, to a musical context, using the same event-related potential (ERP) technique. To this aim, we presented the unfamiliar melodies used previously for the diagnosis of amusia (Peretz et al., 2003, 2007, 2008) and inserted anomalous pitches to be detected. The anomaly consisted of altering one tone in the melody so that it was either mistuned by a quarter tone or fell outside the key. In such a musical context, pitch deviations do not depend so much on absolute pitch height as on membership in the key of the melody. Even listeners with little or no formal training in music are quite sensitive to this key structure. Unless one is born amusic, as mentioned previously, everyone can recognize the mistakes of an inexperienced pianist, especially when the erroneous tones are outside the original key, hence violating the expectations provided by the tonal structure (Cuddy et al., 1979; Trainor and Trehub, 1992). There is nothing inherently wrong with these out-of-key violations; they would sound perfectly in tune if heard in a different context, that is, in a different key. The only reason the tones sound incongruous is the listener's expectation of in-key (diatonic) tones, which arises from the activation of tonal key schemas typically used in Western music. However, tones can also deviate from these tonal schemas by departing from the semitone distance, which is the building block of musical keys. One frequently used violation in studies of music cognition is the quarter tone. A quarter-tone deviation from the equal-tempered chromatic tone (or mistuning) is judged to be out of tune (or anomalous) in both tonal and atonal music by musicians and non-musicians alike (Umemoto, 1990) and can be detected even in the absence of attentive listening (Brattico et al., 2006).

Hence, in a normal brain, the out-of-key as well as the out-of-tune tone inserted in a typical (hence tonal) melody of the Western tradition should sound incongruous. Such pitch incongruities elicited an early frontal negativity that was followed by a parietal positivity (P600) in normal students who were rating the melodies for incongruency (Brattico et al., 2006). Both the negative and positive deflections were larger for the more salient out-of-tune pitch than for the less salient out-of-key pitch. We predicted that amusics would be unable to detect the out-of-key notes, which in turn would not elicit any brain response, negative or positive. For the mistuned pitches, predictions were less straightforward, hence potentially more interesting. In our prior study, amusics were able to detect quarter-tone deviations in repeating five-tone sequences with an accuracy of 70% (Peretz et al., 2005). Yet, we found no evidence of brain responses to these small pitch changes. It is possible that the richer melodic material used here facilitates the detection of mistuned pitches, increasing the chances of obtaining brain responses to pitch deviations.

Methods

Participants

Eight amusic adults and nine matched controls (with no musical education and no musical impairment), who had participated in the prior ERP study (Peretz et al., 2005), were tested (see Table 1 for their background information). They were considered as amusic (or not) on the basis of their scores on the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003). Informed consent was obtained from all participants. The battery involves six tests (180 trials) that assess various music processing components. Three tests assess the ability to discriminate pitch changes in melodies (melodic tests) and three tests assess the ability to discriminate rhythmic changes, meter and memory. Each amusic tested here obtained a global score as well as a melodic score that were 2 SDs below the control means (Table 1). In addition, all amusic participants but one were unable to detect out-of-tune (quarter-tone) and out-of-key tones in the melodies, whereas they obtained scores in the normal range in the detection of time violations in the same melodies (Table 1). The amusic person who obtained a normal score on this diagnostic test can still be considered amusic because her melodic score (55.6%) on the MBEA was three standard deviations below that of normals. Her pattern of performance on the MBEA suggested that she had a pitch memory problem. The perceptual test that assesses on-line detection of pitch and time incongruities in melodies has been used as an on-line screening test for the presence of amusia in a family aggregation study (Peretz et al., 2007) and has been validated in a sample of more than 200 subjects (Peretz et al., 2008). Note that both the MBEA and the on-line test use the same melodies as used here. Furthermore, the on-line test also requires subjects to detect whether there is an ‘incongruent note’, as done here. However, in the on-line test, a simple yes/no response is required, whereas here, for the first time, a rating scale was used. The use of a rating scale was motivated by the possibility that ratings might reflect degrees of sensitivity to the presence of pitch incongruities. These graded behavioural measures can be more easily compared with brain responses, which are typically more fine-grained.
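To make the group criterion concrete, the classification rule just described (a score falling 2 SDs or more below the control mean) amounts to a simple cutoff. The sketch below only illustrates that rule, re-using the group values reported in Table 1; it is not part of the MBEA itself.

```python
# Minimal sketch of the group criterion described above: a score is flagged as
# amusic-range when it falls at least 2 SDs below the control mean.
# The numbers re-use the control and amusic group values from Table 1.

def below_control_cutoff(score, control_mean, control_sd, n_sd=2.0):
    """True when `score` lies at least `n_sd` SDs below the control mean."""
    return score <= control_mean - n_sd * control_sd

# MBEA global score: control mean 88.3 (SD 6.6) -> cutoff 75.1
print(below_control_cutoff(61.9, control_mean=88.3, control_sd=6.6))  # True
# MBEA melodic score: control mean 86.8 (SD 8.8) -> cutoff 69.2
print(below_control_cutoff(57.2, control_mean=86.8, control_sd=8.8))  # True
```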

Table 1

Mean age, education and gender are presented for each group

                                    Amusic (n = 8)    Control (n = 9)    t-test
Age (years)                         58.0 (4.4)        59.0 (11.9)        NS
Gender                              2 M, 6 F          2 M, 7 F
Education (years)                   17 (1.8)          17 (1.7)           NS
MBEA (Peretz et al., 2003)
  Global score (percent correct)    61.9 (7.7)        88.3 (6.6)         P < 0.001
  Melodic score                     57.2 (5.7)        86.8 (8.8)         P < 0.001
On-line test (Peretz et al., 2008)
  Out-of-tune                       61.3 (11.9)       93.3 (4.3)         P < 0.001
  Out-of-scale                      51.2 (16.5)       87.5 (11.2)        P < 0.001
  Out-of-time                       67.9 (19.8)       76.0 (8.7)         NS

The mean percentages of correct responses (SDs) obtained on the full MBEA (global score) and the three MBEA melodic tests are presented for the eight amusics and their nine matched controls, along with scores on the new on-line perceptual test (see text for more information). Chance level is 50% in all tests. Control data for the on-line test are from 13 subjects who did not participate in the present study (mean age: 57 years; 6 M, 9 F; mean education: 16.5 years). Group differences were assessed by two-tailed t-tests.


Materials and procedure

Amusic and control subjects were presented with the same melodies as used in a prior study with students (Brattico et al., 2006). There were 160 melodies, all computer-synthesized with a piano timbre, using 26 different pitches of the equal-tempered scale and nine different semitone intervals. The melodies were derived from a set of 40 unfamiliar melodies that sound typically Western. All had a metrical rhythm, contained four bars and used notes from only one major key, with no silence between tones. The melodies lasted from 3 to 7 s (mean: 5 s). These melodies were considered congruous and were each presented twice. They were randomly intermixed with 80 modified versions of the same melodies, in which the target tone was either played out-of-key (40 melodies) or mistuned by a quarter tone (40 melodies; see Fig. 1 for an example). The target tone was the first beat of the third bar; its duration was always 500 ms across stimuli.
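For readers less familiar with equal temperament, the two manipulations can be expressed as simple frequency ratios: a semitone corresponds to a factor of 2^(1/12), and a quarter tone (50 cents) to 2^(1/24). The short sketch below illustrates this arithmetic on an arbitrary example tone; it is not the stimulus-synthesis code, and the 440 Hz reference is chosen only for illustration.

```python
# Illustrative arithmetic for the pitch manipulations (not the actual stimulus
# synthesis): an equal-tempered semitone is a frequency ratio of 2**(1/12),
# and a quarter tone (50 cents) is 2**(1/24).

SEMITONE = 2 ** (1 / 12)

def shift(freq_hz: float, semitones: float) -> float:
    """Shift a frequency by a (possibly fractional) number of semitones."""
    return freq_hz * SEMITONE ** semitones

target = 440.0                    # example target tone (A4), illustrative only
mistuned = shift(target, 0.5)     # quarter-tone mistuning: ~452.9 Hz
out_of_key = shift(target, 1.0)   # a semitone shift (~466.2 Hz, A#), which would
                                  # fall outside the key of, e.g., a C major melody
print(round(mistuned, 1), round(out_of_key, 1))
```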

Figure 1

Example of a congruous melody (A), an incongruous melody with an out-of-key tone (B) and an incongruous melody with a tone mistuned by a quarter tone (C). Target tones were 500 ms long. The mean ratings given by each amusic and each control subject, as a function of the target tone condition, are presented below. Bars indicate standard errors.

The 160 stimuli were presented to the subjects through Sennheiser HD450 headphones in a quiet room, at an intensity level of 70 dB SPL-A, with an interstimulus interval of 4 s. During this interval, subjects were required to judge whether there was an anomalous note (a ‘wrong’ note) by using a rating scale from 1 (highly incongruous) to 7 (highly congruous). They were provided with four examples and were tested individually. Subjects were further requested to blink at the end of each sequence, to view their answer sheet, and to remain relaxed.

While the subjects listened to the melodies, the EEG was continuously sampled at 256 Hz from 60 electrodes (0.5–50 Hz online filter and 0.5–25 Hz offline bandpass filter, 24 dB/octave; nose reference) via an InstEP amplifier. Bipolar electrode pairs monitored the horizontal and vertical electro-oculograms. Offline, EEG segments contaminated by eye and movement artefacts were corrected: independent component analysis was used to isolate the components due to non-encephalic artefacts (Jung et al., 2000), and the signals were reconstructed without these components, filtered and baseline corrected. The number of artefact-free responses was 76.0 ± 6.6 for the congruous, 38.1 ± 3.8 for the out-of-key and 38.5 ± 4.2 for the out-of-tune condition in amusics, and 78.9 ± 1.5, 39.2 ± 1.0 and 39.6 ± 0.7, respectively, in controls. The EEG was divided into epochs of 1000 ms, including 100 ms of baseline activity before the onset of the target note.
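The clean-up and epoching steps just described (offline band-pass filtering, ICA-based removal of ocular components, and epoching around the target tone with a 100 ms pre-stimulus baseline) can be sketched with a current open-source toolbox such as MNE-Python. This is a schematic reconstruction under assumed file names, channel types and event codes, not the original InstEP processing pipeline.

```python
# Schematic reconstruction of the preprocessing steps described above, using
# MNE-Python. File name, trigger channel and event codes are assumptions.
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=0.5, h_freq=25.0)           # offline band-pass, as in the text

# ICA to isolate and remove ocular/movement components (Jung et al., 2000)
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)          # requires EOG channels in the recording
ica.exclude = eog_inds
ica.apply(raw)                                # reconstruct the signal without artefacts

# Epochs time-locked to the target tone: 100 ms baseline, 900 ms post-onset
events = mne.find_events(raw)                 # assumes a stimulus trigger channel
event_id = {"congruous": 1, "out_of_key": 2, "out_of_tune": 3}   # assumed codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.9,
                    baseline=(None, 0), preload=True)
evoked = {cond: epochs[cond].average() for cond in event_id}     # per-condition ERPs
```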

The resulting ERPs time-locked to the onset of the target tone from the individual waveforms were analysed, irrespective of performance, at electrodes F3, Fz, F4, FC3, FCz, FC4, C3, Cz, C4, P3, Pz, P4, PO3, POz, PO4. ERPs were quantified as mean amplitudes over 40 ms around the individual peaks obtained within 180–300 ms for the N200 and 600–780 ms for the P600, relative to the 100 ms baseline activity before target tone onset. The mean amplitudes were subjected to analyses of variance with repeated measures. The mean latencies were similarly analysed. Since latency measures differed neither between groups nor across conditions (F ≤ 1), their analyses are not reported. The original degrees of freedom for all analyses are reported in the article. Type I errors associated with inhomogeneity of variance were controlled by decreasing the degrees of freedom using the Greenhouse–Geisser epsilon, and the probability estimates are based on these reduced degrees of freedom. Post hoc tests were conducted with Fisher's least-significant difference (LSD) comparisons.
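The amplitude measure used here (the mean voltage over a 40 ms window centred on each participant's peak, found within the component's search window and taken relative to the pre-target baseline) is easy to state in code. The sketch below is an illustration with a synthetic waveform, not the analysis scripts used for the study; electrode selection, baseline correction and the ANOVAs are assumed to have been carried out separately.

```python
# Illustrative quantification of a component: mean amplitude over a 40 ms
# window centred on the individual peak found inside a search window.
import numpy as np

def peak_window_amplitude(erp, times, search=(0.18, 0.30), half_width=0.02,
                          polarity=-1):
    """Mean amplitude of a 40 ms window around the individual peak.

    erp      : 1-D array, baseline-corrected ERP at one electrode
    times    : 1-D array of time points in seconds (same length as erp)
    search   : window in which the peak is sought (e.g. N200: 180-300 ms)
    polarity : -1 for negative components (N200), +1 for positive ones (P600)
    """
    in_search = (times >= search[0]) & (times <= search[1])
    segment = erp[in_search]
    peak_idx = np.argmax(polarity * segment)           # most negative/positive sample
    peak_time = times[in_search][peak_idx]
    around = (times >= peak_time - half_width) & (times <= peak_time + half_width)
    return erp[around].mean()

# Example with a synthetic ERP sampled at 256 Hz
times = np.arange(-0.1, 0.9, 1 / 256)
erp = -3e-6 * np.exp(-((times - 0.23) ** 2) / (2 * 0.02 ** 2))    # fake N200-like dip
print(peak_window_amplitude(erp, times))                  # negative value near the dip
print(peak_window_amplitude(erp, times, search=(0.60, 0.78), polarity=+1))
# ~0: no late positivity in this synthetic signal
```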

Results and comments

The behavioural results confirm that five out of eight amusics have difficulties detecting pitch anomalies in melodies, while control subjects could easily distinguish conventional melodies from those containing a mistuned or an out-of-key altered tone (Fig. 1). This was supported by an ANOVA that yielded an interaction between pitch Category and Group, with F(2,30) = 8.88, P < 0.005. The effect of pitch Category was highly significant in controls, with F(2,16) = 41.39, P < 0.001. Controls found both the out-of-key and out-of-tune changes less congruent than the diatonic changes (both P < 0.001 by two-tailed t-tests), and the mistuning more incongruent than the out-of-key changes (P < 0.001). In contrast, five amusics judged all melodic stimuli congruent irrespective of their category, whereas the other three were able to distinguish the melodies with pitch violations (Fig. 1). Among these ‘pitch-sensitive’ amusics, two had failed on the on-line test, which used the same melodies and the same pitch violations (with 62.5% correct in both cases). The surprisingly high score observed here in three out of eight amusics suggests that the use of a graded scale and/or a larger number of stimuli (160 here versus 48 in the on-line test) might be well suited to revealing melodic pitch discrimination abilities in amusia. Indeed, as discussed in the Participants section above, there is no reason to consider these three ‘pitch-sensitive’ amusics as representing a distinct type of amusia. Moreover, they represent a minority of amusics; most amusics remain largely deaf to melodic pitch incongruities.
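The Group by pitch Category analysis of the ratings is a standard mixed-design ANOVA (one between-subject factor, one within-subject factor). A minimal sketch of such an analysis is given below using the pingouin package and a synthetic long-format table whose values merely mimic the qualitative pattern in Fig. 1; it is not the actual data nor the original analysis script.

```python
# Sketch of a mixed-design ANOVA (Group between subjects, pitch Category within
# subjects) on mean congruency ratings, using synthetic illustrative values.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
group_means = {                       # illustrative only, not the reported data
    "amusic":  {"congruous": 6.0, "out_of_key": 5.8, "out_of_tune": 5.7},
    "control": {"congruous": 6.2, "out_of_key": 3.9, "out_of_tune": 2.3},
}
rows = []
for group, n_subjects in [("amusic", 8), ("control", 9)]:
    for s in range(n_subjects):
        for condition, mu in group_means[group].items():
            rows.append({"subject": f"{group}_{s}", "group": group,
                         "condition": condition,
                         "rating": mu + rng.normal(0, 0.5)})
ratings = pd.DataFrame(rows)

aov = pg.mixed_anova(data=ratings, dv="rating", within="condition",
                     subject="subject", between="group")
print(aov)   # the 'Interaction' row plays the role of the Group x Category
             # interaction, F(2, 30), reported in the text
```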

As predicted, the behavioural difficulty in detecting pitch incongruities in melodies between amusics and normals is mirrored by the late positive components of the event-related potentials (Fig. 2). This was confirmed by a four-way ANOVA considering Group (amusics, controls), pitch Category (congruous, out-of-tune and out-of-key), Frontality (frontal, fronto-central, central, parietal and parieto-occipital electrodes) and Laterality (left, middle, right electrodes). In controls, both the out-of-tune and out-of-key tones elicited a large positivity relative to the congruous tone, F(2,16) = 8.8, P = 0.01 by the four-way ANOVA, the positivity being larger for the mistuned pitch (P < 0.05). In amusics, including the three who showed sensitivity to pitch anomalies in their ratings, there was no such late positivity [F(2,14) = 2.4, NS; see Fig. 4 for individual data]. This difference in P600 between controls and amusics was significant, F(2,30) = 3.7, P = 0.04 for the Group by pitch Category interaction. The time-window, morphology, and scalp distribution of the observed late positivity correspond to the P600 component of the event-related potentials, indexing the attentive, possibly conscious, processes of integration and re-parsing of the incongruous event into the musical context (Besson and Faïta, 1995; Patel et al., 1998; Brattico et al., 2006). Our results, thus, indicate that this process occurs in controls and not in amusics.

Figure 2

Grand-average ERPs for amusics and controls, time-locked to the onset of the target tone, recorded at the 15 electrodes used in the statistical analysis.

Interestingly, in both controls and amusics, the out-of-tune tone elicited a larger early negativity as compared to the congruous tone (see the peaks marked as N200 as recorded at FC4 in Fig. 3). This was confirmed by the ANOVA considering Group (amusics, controls), pitch Category (congruous, out-of-tune and out-of-key), Frontality (frontal, fronto-central, central, parietal and parieto-occipital electrodes) and Laterality (left, middle, right electrodes). The early negativity, here termed N200, was significantly larger for the mistuned pitches, F(2,30) = 9.9, P = 0.001, in both amusics and controls (the interaction with Group being F < 1). This N200 response to mistuning also reached significance in the five amusics who showed no pitch sensitivity in behaviour (with t(4) = 4, P = 0.02 and Z = −2, P = 0.04, by paired t-test and Wilcoxon rank test, respectively, as measured at FC4). This early negativity was also larger for the out-of-key tone in controls (P < 0.05) but not in amusics (P > 0.05 by post hoc least-significant difference comparisons). As can be seen in Fig. 4, the N200 for the mistuned tone was present in all but one amusic and was unrelated to the congruency ratings, with r = 0.09 and 0.07 for Pearson and Spearman correlation, respectively. The N200 was also larger on the right than the left side of the brain, F(4,60) = 3.1, P < 0.05, for the interaction between Laterality and pitch Category in both controls and amusics. The N200 was also slightly more anterior in the amusic than in the control brain, F(8,120) = 2.3, P < 0.05 for the interaction between Group, pitch Category and Frontality. The different scalp distribution, summarized in Fig. 3, most likely reflects the modulation of the early event-related potential deflections by the late positivities seen in controls but not in amusics.

Figure 3

Grand-average ERPs at the FC4 and Pz electrodes in amusics (A) and in controls (B). The N200 and P600 are indicated by an arrow. Below (C), the voltage maps from grand-average difference waves obtained at 235–242 ms for the out-of-tune condition minus the congruous condition.

Figure 4

Individual N200 and P600 peaks observed in amusics and controls for the out-of-key and the mistuned tone conditions, after subtraction of the congruous condition. Amusics and controls did not differ in the congruous condition [t(15) = 1.75, NS].

The time-window, morphology and laterality of the N200 wave are typical of the family of negative change-related ERPs, which includes, among others, the mismatch negativity, the N2b, and the Early Right Anterior Negativity (ERAN). All can index the neural detection of an unexpected pitch in melodic contexts in normal subjects (Brattico et al., 2006; Miranda and Ullman, 2007).

Note that the N200 was slightly larger for the out-of-key tone than for the congruous tone at F4 in amusics (P = 0.06 by post hoc LSD comparison). This N200 was only marginally significant at F4, whereas in controls the N200 for out-of-key tones reached significance at several electrodes (F3, FC3, FCz, FC4, C3, C4, P3, Pz, P4, with P < 0.05 by post hoc LSD comparisons). Therefore, this indication of a brain response to out-of-key notes in amusics was considered weak evidence for emerging processing of musical pitch in terms of keys and is not discussed further. Future research should aim at replicating this finding.

In order to further statistically assess the major ERP findings of the present study, an additional ANOVA was conducted by considering only Group (controls, amusics), a subset of Pitch category (out-of-tune, congruous), and Time window (N200 at FC4, P600 at Pz) as factors. This analysis confirmed the presence of a normal N200 for the mistuned tone in the absence of a normal P600 in amusics by yielding a significant interaction between the three factors considered, F(1,15) = 5.4, P = 0.03.

Note that the first negative component, which peaked at around 110 ms from stimulus onset with maximal amplitudes at central regions, did not differ between pitch categories. Thus, we conclude that it corresponds to the N1 obligatory response to tone onset. A second N1 peak occurred at 620 ms, as a neural response to the onset of the next sound (always following the target tone by 500 ms). This second N1 was modulated in amplitude by the previous neural events, and thus was not studied separately.

Discussion and conclusion

Results show that the amusic brain can track quarter-tone pitch differences by exhibiting an early right-lateralized negative brain response, here termed N200. This is the first demonstration of near-normal neural processing of musical pitch incongruities in congenital amusia. It is important because it reveals that the amusic brain is equipped with the essential neural circuitry to perceive fine-grained pitch differences. What distinguishes the amusic from the normal brain is the absence of awareness of this ability. Thus, the amusic brain appears to be more in tune than conscious perception reveals.

The capacity of the amusic brain to respond to quarter-tone changes, as indexed by the early negative brain responses elicited by mistuned tones in all but one amusic participant, can be related to frequency of occurrence. Indeed, the mistuned pitches were rare events within and across the presented melodies (with a probability of occurrence of 2.4 and 5.4% in terms of pitch heights and intervals, respectively). In contrast, the out-of-key tones, which did not elicit clear brain responses in amusics, occurred frequently (76% in terms of both pitch height and intervals) because the melodies were constructed with the same pitch heights and scale steps (even though the melodic tones used 26 different pitches and 9 interval sizes across the entire set). Thus, auditory sensory memory could have established a trace for the most frequent intervals, defined in terms of semitones rather than quarter tones. In this melodic context, the mistuned pitches, by introducing quarter-tone intervals, may have created a frequency mismatch negativity (e.g. Saarinen et al., 1992). Accordingly, the N200 observed in both amusics and controls for the mistuned pitches may correspond to the mismatch negativity elicited by low-probability events in complex auditory patterns (e.g. Bonte et al., 2005). Thus, the presence of a mismatch negativity for mistuned pitches provides evidence that the amusic brain is sensitive to the frequency of occurrence of fine-grained pitch differences in a musical context.
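The occurrence probabilities invoked in this argument are simple relative frequencies computed over the whole stimulus set. The sketch below shows how such estimates could be obtained from melodies encoded as MIDI-like pitch numbers in semitone units (with .5 steps marking quarter-tone mistunings); the two toy melodies stand in for the actual 160 stimuli, so the printed values do not reproduce the reported 2.4%, 5.4% and 76% figures.

```python
# Sketch of the frequency-of-occurrence computation underlying the mismatch-
# negativity argument: relative frequency of each pitch height and of each
# interval size across a melody set. Toy melodies stand in for the real stimuli.
from collections import Counter

def occurrence_probabilities(melodies):
    """Return (pitch_probs, interval_probs) as dicts of relative frequencies.

    melodies: list of pitch sequences in semitone units (fractions allowed,
              e.g. 60.5 for a quarter-tone mistuning of middle C).
    """
    pitch_counts, interval_counts = Counter(), Counter()
    for melody in melodies:
        pitch_counts.update(melody)
        interval_counts.update(round(b - a, 2) for a, b in zip(melody, melody[1:]))
    n_pitch = sum(pitch_counts.values())
    n_int = sum(interval_counts.values())
    return ({p: c / n_pitch for p, c in pitch_counts.items()},
            {i: c / n_int for i, c in interval_counts.items()})

toy_melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62],        # congruous (C major scale steps)
    [60, 62, 64, 65, 67.5, 65, 64, 62],      # one quarter-tone mistuned target
]
pitch_p, interval_p = occurrence_probabilities(toy_melodies)
print(pitch_p.get(67.5))     # rare pitch height in this toy set (1/16)
print(interval_p.get(2.5))   # rare interval containing a quarter tone (1/14)
```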

This sensitivity of the amusic brain to compute fine-grained pitch differences seems to occur without awareness. Indeed, the N200 to mistuned tones was visible in the individual waveforms of all but one amusic. This brain response was not followed by a late positive component (P600) as observed in normal brains. The P600 was also missing in the three amusics who showed sensitivity to the pitch violations in their ratings. Informal interactions with these three ‘pitch-sensitive’ amusics indicated that they were not aware of anything wrong in the melodies; they just felt sometimes that the melody was ‘weird’ without being able to go beyond such statements. Thus, their ratings might also reflect (implicit) perception without awareness.

Implicit perception without awareness may be related to a reduction of attentional resources in amusia. However, as observed in our earlier study (Brattico et al., 2006), the kind of pitch incongruities examined here do not seem to require attention. Moreover, if amusics were paying less attention to the melodies, the amplitude of their N200 to the mistuned tones should have been attenuated as compared with normal controls (as found in Loui et al., 2005). This was not the case. Future research should determine the extent to which attention and musical context modulate pitch perceptual abilities in amusia. In particular, implicit or indirect behavioural measures should be used in order to track the graded nature of conscious experience, as recently done by Tillmann and collaborators in a brain-damaged case (Tillmann et al., 2007) and discussed by Seth and collaborators (2008). Nevertheless, the N200 indicates that the auditory cortex in the amusic brain is able to compute very fine-grained pitch differences and use the statistical regularities of these basic acoustical properties to monitor melodies on-line. However, this normal, early process does not seem to reach higher, conscious stages of processing in amusic individuals.

The finding of an early N200 response to fine-grained pitch differences in amusics raises the question of the nature of the neural representation that supports it. One possibility is that pitch sensitivity is normal in congenital amusia and that a lack of confidence is at the origin of the observed behavioural failures. Amusic subjects are often under-confident about their perceptual experiences. They may treat uncertainty as a lack of perception and report no awareness even when more objective measures show that they can detect the presence of a change to some extent. Such differences in confidence might have contributed to the dissociation that has recently been observed between impaired pitch discrimination and preserved pitch singing in amusic cases (Loui et al., 2008). Another possibility is that the neural representation of pitch is accurate but too weak to support reliable discrimination. Our prior event-related potential findings in the same amusic participants (Peretz et al., 2005) are compatible with both accounts. In that study, brain responses to quarter-tone pitch differences may have been masked by the abnormally large N2-P3 responses associated with the conscious detection of large pitch changes. Based on the present findings, one can predict that a mismatch negativity should be observed for quarter-tone changes when amusics are required to ignore the stimuli. Nevertheless, the observation of a mismatch negativity will not distinguish between a sensitivity and a confidence account of the brain responses to pitch deviance in amusia. What is needed is evidence from other neuroimaging methods, such as fMRI.

As demonstrated here, neuroimaging methods can provide insights that would be unattainable using strictly behavioural measures (Hannula et al., 2005). For example, patients with neglect syndrome show automatic amygdala activation to neglected fearful faces, stimuli that they cannot report (Vuilleumier et al., 2001). Similarly, the mismatch negativity can index implicit auditory learning before people can actually perform the task behaviourally (Tremblay et al., 1998; van Zuijen et al., 2006). Hence, neuroimaging can reveal distinct neural signatures, corresponding to different brain regions or different time courses of activation, for unconscious perception. Based on the present findings, one can predict that the amusic brain can normally pick up acoustical pitch differences in the auditory cortex, but that this acoustical information cannot be integrated into a conscious percept, either because the neural representation is too weak or because the percept is not supported by long-term memory representations of musical key schemas.

In effect, there was little evidence that the amusic brain could detect the presence of an out-of-key note, although these pitch changes were at least twice as large in terms of pitch (semitone) distance. Contrary to normal brains, the amusic brain showed no evidence of an early negativity or a late positivity in response to tones that lay outside the key of the melody. It should be emphasized that tonal pitch distance could not be derived here from frequency of occurrence. Key violations recruit a higher-level, more abstract brain mechanism. Tonal pitch distance can only be computed by reference to the key of the melody, that is, by mapping the incoming pitches onto scale steps. This knowledge is apparently missing in amusia.

It has been shown in a number of studies that tonal encoding of pitch recruits fronto-temporal neural networks. These regions have been implicated in the detection of chords that lie outside the harmony of the context (e.g. Tillmann et al., 2006; see Koelsch and Siebel, 2005 for a review). The brain regions involved in the perception of tonal violations in melodies, as used here, are likely to be the same as those involved in harmony, because perception of out-of-key violations requires consideration of the rules of the same tonal system. Recent event-related potential studies support this notion. Out-of-key tones elicit early frontal negativities in both familiar and unfamiliar melodies (Brattico et al., 2006; Miranda and Ullman, 2007), and these negativities have been localized to the secondary auditory cortex with an additional source in the frontal cortex (Brattico et al., 2006). The present N200 observed for tonal violations in melodies is also comparable in polarity, scalp distribution and latency to the ERAN reported repeatedly by Koelsch and collaborators in the context of harmonic violations (see Koelsch, 2009, for a recent review and a distinction between the ERAN and the mismatch negativity that is consistent with the present results). The ERAN is partially automatic, linked to violations of tonal rules, and depends on frontal brain structures. In contrast, the late positivities that follow the ERAN have a central-posterior distribution, depend on attention, and are generated in temporo-parietal structures. All these characteristics were observed in the brain potentials of the normal participants here, and were essentially missing in the brain responses of the amusic participants. This dissociation in amusia between a missing ERAN and a normal mismatch negativity provides further support for the distinction between the ERAN, as reflecting the processing of tonal structure, and the mismatch negativity, as reflecting perceptual aspects of auditory processing in a melodic context (Koelsch, 2009). Furthermore, the absence of an ERAN-P600 complex in response to key violations in the amusic brain is consistent with the notion that the core deficit in amusia concerns tonal encoding of pitch (e.g. Peretz, 2008) due to cortical malformations in the temporo-frontal network (Hyde et al., 2006, 2007).

The discovery of unsuspected brain responses to fine-grained pitch differences has a number of theoretical and clinical implications. Theoretically, as mentioned above, it would be important to specify whether this pitch-tracking mechanism could affect not only perception, such as emotional responses, but also production, such as singing, without conscious awareness. Clinically, this automatic pitch-tracking mechanism could be the target of therapeutic strategies. The cortical tuning of this early brain response to the frequency distribution of scale steps in melodies might serve to build knowledge about musical keys. Such knowledge is acquired through exposure to the statistical regularities of pitch intervals in normal brains (Krumhansl, 1990). Although preliminary findings suggest that probabilities of pitch transitions might be difficult for adult amusics to learn (Peretz and Saffran, 2006), the present results are encouraging, and interventions at this level during childhood might compensate for the neurogenetic vulnerability.

In conclusion, the recording of electrical activity in the amusic brain revealed the existence of a near-normal ability to detect pitch irregularities in a musical context at an early processing stage. This early neural information, typically computed in the right auditory cortex (Deouell, 2007), appears, however, insufficient to make contact with, or to build, schemas about musical keys that would support the conscious detection of violations in the right inferior frontal region (Maess et al., 2001). This is consistent with the anatomical anomalies observed in the auditory and inferior frontal cortex of amusic individuals (Hyde et al., 2006, 2007; Mandell et al., 2007).

Funding

Canadian Institutes of Health Research, the Canada Research Chair in neurocognition of music and the European Commission (-2004-NEST-PATH-028570 BrainTuning project).

Acknowledgements

We thank Barbara Tillmann, Pierre Jolicoeur, Nina Kraus, Jessica Phillips-Silver and Nathalie Gosselin for insightful comments.

References

Alho K, Winkler I, Escera C, Huotilainen M, Virtanen J, Jääskeläinen IP, et al. Processing of novel sounds and frequency changes in the human auditory cortex: magnetoencephalographic recordings. Psychophysiology 1998; 35: 211–24.

Ayotte J, Peretz I, Hyde KL. Congenital amusia: a group study of adults afflicted with a music-specific disorder. Brain 2002; 125: 238–51.

Besson M, Faïta F. An event-related potential (ERP) study of musical expectancy: comparison of musicians with nonmusicians. J Exp Psychol Hum Percept Perform 1995; 21: 1278–96.

Bonte ML, Mitterer H, Zellagui N, Poelmans H, Blomert L. Auditory cortical tuning to statistical regularities in phonology. Clin Neurophysiol 2005; 116: 2765–74.

Brattico E, Tervaniemi M, Näätänen R, Peretz I. Musical scale properties are automatically processed in the human auditory cortex. Brain Res 2006; 1117: 162–74.

Cuddy LL, Cohen AJ, Miller J. Melody recognition: the experimental application of musical rules. Can J Psychol 1979; 33: 148–57.

Deouell LY. The frontal generator of the mismatch negativity revisited. J Psychophysiol 2007; 21: 147–60.

Hannula DE, Simons DJ, Cohen NJ. Imaging implicit perception: promise and pitfalls. Nat Rev Neurosci 2005; 6: 247–55.

Hyde KL, Lerch JP, Zatorre RJ, Griffiths TD, Evans A, Peretz I. Cortical thickness in congenital amusia: when less is better than more. J Neurosci 2007; 27: 13028–32.

Hyde KL, Peretz I. Congenital amusia: impaired musical pitch but intact musical time. In: Syka J, Merzenich M, editors. Plasticity and signal representation in the auditory system. New York: Springer; 2005. p. 291–6.

Hyde KL, Zatorre RJ, Griffiths TD, Lerch JP, Peretz I. Morphometry of the amusic brain: a two-site study. Brain 2006; 129: 2562–70.

Jung TP, Makeig S, Westerfield M, Townsend J, Courchesne E, Sejnowski TJ. Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clin Neurophysiol 2000; 111: 1745–58.

Kalmus H, Fry DB. On tune deafness (dysmelodia): frequency, development, genetics and musical background. Ann Hum Genet 1980; 43: 369–82.

Koelsch S. Music-syntactic processing and auditory memory: similarities and differences between ERAN and MMN. Psychophysiology 2009; 46: 179–90.

Koelsch S, Siebel WA. Towards a neural basis of music perception. Trends Cogn Sci 2005; 9: 578–84.

Krumhansl CL. Cognitive foundations of musical pitch. New York: Oxford University Press; 1990.

Loui P, Guenther FH, Mathys C, Schlaug G. Action-perception mismatch in tone-deafness. Curr Biol 2008; 18: R331–32.

Loui P, Grent-'t-Jong, Torpey D, Woldorff M. Effects of attention on the neural processing of harmonic syntax in Western music. Cogn Brain Res 2005; 25: 678–87.

Maess B, Koelsch S, Gunter TC, Friederici AD. Musical syntax is processed in Broca's area: an MEG study. Nat Neurosci 2001; 4: 540–45.

Mandell J, Schulze K, Schlaug G. Congenital amusia: an auditory-motor feedback disorder? Restor Neurol Neurosci 2007; 25: 323–34.

Miranda R, Ullman M. Double dissociation between rules and memory in music: an event-related potential study. NeuroImage 2007; 38: 331–45.

Näätänen R, Winkler I. The concept of auditory representation in cognitive neuroscience. Psychol Bull 1999; 125: 826–59.

Patel AD, Gibson E, Ratner J, Besson M, Holcomb PJ. Processing syntactic relations in language and music: an event-related potential study. J Cogn Neurosci 1998; 10: 717–33.

Peretz I. Brain specialization for music. New evidence from congenital amusia. Ann N Y Acad Sci 2001; 930: 153–65.

Peretz I. The nature of music from a biological perspective. Cognition 2006; 100: 1–32.

Peretz I. Musical disorders: from behavior to genes. Curr Dir Psychol Sci 2008; 17: 329–33.

Peretz I, Brattico E, Tervaniemi M. Abnormal electrical brain responses to pitch in congenital amusia. Ann Neurol 2005; 58: 478–82.

Peretz I, Champod AS, Hyde KL. Varieties of musical disorders. The Montreal Battery of Evaluation of Amusia. Ann N Y Acad Sci 2003; 999: 58–75.

Peretz I, Cummings S, Dubé MP. The genetics of congenital amusia (tone deafness): a family-aggregation study. Am J Hum Genet 2007; 81: 582–88.

Peretz I, Gosselin N, Tillmann B, Cuddy LL, Gagnon B, Trimmer CG, et al. On-line identification of congenital amusia. Music Percept 2008; 25: 331–43.

Peretz I, Saffran J. Dissociation of music and speech: evidence from statistical learning in congenital amusia. J Acoust Soc Am 2006; 120: 3167.

Saarinen J, Paavilainen P, Schröger E, Tervaniemi M, Näätänen R. Representation of abstract attributes of auditory stimuli in the human brain. Neuroreport 1992; 3: 1149–51.

Seth A, Dienes Z, Cleeremans A, Overgaard M, Pessoa L. Measuring consciousness: relating behavioural and neurophysiological approaches. Trends Cogn Sci 2008; 12: 314–21.

Tillmann B, Koelsch S, Escoffier N, Bigand E, Lalitte P, Friederici AD, et al. Cognitive priming in sung and instrumental music: activation of inferior frontal cortex. NeuroImage 2006; 31: 1771–82.

Tillmann B, Peretz I, Bigand E, Gosselin N. Harmonic priming in an amusic patient: the power of implicit tasks. Cogn Neuropsychol 2007; 24: 603–22.

Trainor LJ, Trehub SE. A comparison of infants' and adults' sensitivity to Western musical structure. J Exp Psychol Hum Percept Perform 1992; 18: 394–402.

Tremblay K, Kraus N, McGee T. The time course of auditory perceptual learning: neurophysiological changes during speech-sound training. Neuroreport 1998; 9: 3557–60.

Umemoto T. The psychological structure of music. Music Percept 1990; 8: 115–28.

van Zuijen TL, Simoens VL, Paavilainen P, Näätänen R, Tervaniemi M. Implicit, intuitive, and explicit knowledge of abstract regularities in a sound sequence: an event-related brain potential study. J Cogn Neurosci 2006; 18: 1292–303.

Vuilleumier P, Valenza N, Landis T. Explicit and implicit perception of illusory contours in unilateral spatial neglect: behavioural and anatomical correlates of preattentive grouping mechanisms. Neuropsychologia 2001; 39: 597–610.

Abbreviations

ERAN = early right anterior negativity
ERPs = event-related potentials
IFG = inferior frontal gyrus
MBEA = Montreal Battery of Evaluation of Amusia
MRI = magnetic resonance imaging
VBM = voxel-based morphometry

Author notes

*These authors contributed equally to this work.