Abstract

Unilateral aural stimulation has been shown to cause massive cortical reorganization in the brain of individuals with congenital deafness, particularly during the sensitive period of brain development. However, it is unclear which side of stimulation provides the most advantages for auditory development. The left-hemisphere dominance of speech and linguistic processing in the normal-hearing adult brain has led to the assumption of functional and developmental advantages of right over left implantation, but existing evidence is controversial. To test this assumption and provide evidence for clinical choice, we examined 34 prelingually deaf children with unilateral cochlear implants using near-infrared spectroscopy. While controlling for age of implantation, residual hearing, and dominant hand, cortical processing of speech showed neither developmental progress nor influence of implantation side weeks to months after implant activation. In sharp contrast, for nonspeech (music signal vs. noise) processing, left implantation showed functional advantages over right implantation that were not yet discernable using clinical, questionnaire-based outcome measures. These findings support the notion that the right hemisphere develops earlier and is better preserved from adverse environmental influences than its left counterpart. This study thus provides, to our knowledge, the first evidence for differential influences of left and right auditory peripheral stimulation on early cortical development of the human brain.

Introduction

Cochlear implants (CI) provide individuals with severe-to-profound hearing loss the opportunity to partially restore hearing and, in the case of prelingually deaf children, to acquire speech and language. The past forty years have witnessed rapid progress in coding strategies, speech processing strategies, and surgical techniques (Nie et al. 2005; Cullen et al. 2008; Srinivasan et al. 2013; Snels et al. 2019; Lamping et al. 2020; Luo et al. 2021). While some CI users can develop age-appropriate speech and language abilities, allowing them to successfully integrate into society (Zeng et al. 2008), one of the main obstacles in CI practice is the huge and unpredictable individual difference in outcomes (Szagun 2001). Many factors contribute to this individual variability (Blamey et al. 1996, 2013; Lazard et al. 2012; Zhao et al. 2020). For young children with prelingual sensorineural hearing loss, factors that have been shown to affect CI outcomes include side of implantation (Kraaijenga et al. 2018), age of implantation (AoI) (Kirk et al. 2002; Gaurav et al. 2020), duration of CI use (Fryauf-Bertschy et al. 1997; Zhang et al. 2019), residual hearing (Chiossi and Hyppolito 2017), and social factors (Punch and Hyde 2011).

Among these factors, side of implantation is of particular interest both from the standpoint of clinical practice and from the perspective of brain functional development. From the standpoint of clinical practice, most CI users are implanted unilaterally due to clinical, economic, and technological concerns, particularly in developing countries. For example, in China, the vast majority of governmental and private CI aid programs only support unilateral implantation, making side of implantation a choice that clinicians have to make regularly in their practice. Existing research on the effect of implantation side is controversial (for review, see Kraaijenga et al. 2018). While some studies found no influence of implantation side on CI outcomes (Morris et al. 2007; Surmelioglu et al. 2014), others demonstrated right-CI advantages for speech perception or production (Flipsen Jr. 2008; Henkin et al. 2008; Kamal et al. 2014). In the face of this lack of consensus, it has been argued that the CI should be implanted on the same side as the patient's dominant hand to facilitate handling of the device (Deguine et al. 1995).

From the perspective of auditory and language neuroscience, the choice of implantation side may have critical consequences for cortical functional development. In normal-hearing (NH) adults, many language functions are lateralized to the left hemisphere, especially in right-handed individuals (Taylor 1990; McGettigan and Scott 2012; Poeppel 2014). The left lateralization of speech processing appears as early as birth, sometimes even in preterm infants, while processing of nonspeech sounds is dominated by the right hemisphere (Bisiacchi and Cainelli 2021). For prelingual CI users who depend on restored hearing to acquire speech and verbal language, the asymmetric auditory input can have far-reaching repercussions for auditory cortical development and reorganization (Gordon et al. 2013; Kral et al. 2013). Indeed, such interhemispheric asymmetry in brain function has led many to assume and search for right-implantation advantages in CI users, but with controversial results in children as well as adults (Kraaijenga et al. 2018).

To fill this gap and provide neurophysiological evidence for the choice of implantation side in clinical practice, we examined the impact of implantation side on the functional development of the auditory cortex in children with prelingual sensorineural hearing loss in a longitudinal study. Children with bilateral severe-to-profound hearing loss were recruited before CI surgery and received two laboratory tests starting shortly after the CI was activated. At each test, functional near-infrared spectroscopy (fNIRS), a noninvasive, CI-compatible brain imaging method (Pollonini et al. 2014; Olds et al. 2015; Saliba et al. 2016; Lawrence et al. 2021), was used to assess cortical responses to sounds in the auditory cortex and adjacent areas. fNIRS measures hemodynamic changes on the cortical surface via the optical absorption properties of blood hemoglobin (Jobsis 1977). Compared with fMRI, fNIRS is suitable for neonates (Franceschini et al. 2007; Mahmoudzadeh et al. 2013) and individuals with autism (Liu et al. 2017), and is well suited to CI recipients (Olds et al. 2015) and repeated testing (Bulgarelli et al. 2020; Yeung 2021). To probe the different stages of auditory functional development after the onset of CI hearing, both natural speech and nonspeech sounds were used for fNIRS recordings, with the nonspeech sounds further divided into signal (instrumental music) and noise (multispeaker babble simulating cocktail party–like background noise). According to the maturation order of the NH brain (Schnupp et al. 2011; Vasung et al. 2019), basic auditory processing such as signal–noise discrimination should develop before speech perception. Questionnaires for parents/guardians, including the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) (Zimmerman-Phillips et al. 1997), Categories of Auditory Performance (CAP) (Al-shawi et al. 2020), and Speech Intelligibility Rating (SIR) (Allen et al. 1998), were used to assess postimplantation auditory and speech performance.
For prelingually deaf children, these questionnaires represent current common practice in hearing clinics. Based on the interhemispheric asymmetry of auditory functions in NH populations, we hypothesized that left implantation would confer a right-hemisphere processing advantage for nonspeech sounds and that right implantation would confer a left-hemisphere processing advantage for speech.

Materials and methods

Participants

Thirty-four children (23 boys and 11 girls, mean ± standard deviation of age: 36.82 ± 17.65 months) with severe-to-profound prelingual bilateral sensorineural hearing loss were recruited from the Peking University First Hospital before cochlear implantation surgery for a prospective longitudinal developmental study (see Table 1 for their demographic information). All participants complied with the cochlear implantation criteria set out by the China National Guidelines for Cochlear Implantation and received unilateral implantation (Editorial Board of Chinese Journal of Otorhinolaryngology Head and Neck Surgery 2014). According to the implantation criteria, each participant's binaural click auditory brainstem response threshold was above 90 dB nHL, auditory steady-state response thresholds at 2 kHz and higher frequencies were above 90 dB nHL, otoacoustic emissions could not be elicited in either ear, and unaided pure-tone air-conduction hearing thresholds of both ears were above 90 dB HL at 2 kHz and higher frequencies and above 75 dB HL from 500 Hz to 2 kHz. The implantation ear for each participant was determined according to the following conventional clinical procedure: when there was residual hearing in both ears, the ear with poorer residual hearing was chosen for implantation to preserve the advantage of residual hearing in the better ear; when there was residual hearing in only one ear, that ear was chosen to maximize the probability of hearing restoration after implantation; when residual hearing was symmetrical, the ear on the same side as the dominant hand was chosen to facilitate handling of the device.
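The ear-selection procedure above amounts to a small branching rule. The following Python sketch is purely illustrative (not a clinical tool); the numeric encoding of residual hearing (higher value = better residual hearing, None = no measurable residual hearing) is our assumption for the example:

```python
def choose_implant_ear(left_residual, right_residual, dominant_hand):
    """Illustrative encoding of the clinical ear-selection rule.

    left_residual / right_residual: residual hearing per ear
    (higher = better; None = no measurable residual hearing).
    dominant_hand: "left" or "right".
    Returns the ear chosen for implantation.
    """
    # Residual hearing in only one ear: implant that ear.
    if left_residual is None and right_residual is not None:
        return "right"
    if right_residual is None and left_residual is not None:
        return "left"
    # Residual hearing in both ears and asymmetric: implant the poorer ear,
    # preserving the better-hearing ear unaided.
    if left_residual is not None and right_residual is not None \
            and left_residual != right_residual:
        return "left" if left_residual < right_residual else "right"
    # Symmetric residual hearing: side of the dominant hand.
    return "left" if dominant_hand == "left" else "right"
```

For instance, a child with better residual hearing on the left and a right dominant hand would, under this rule, be implanted on the right.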

Table 1

Demographic information of the participants.

ID | T1 (mo.) | T2 (mo.) | Age (mo.) | G | Handedness | HT (dB) CI ear | HT (dB) non-CI ear | HA time (mo.) | CI side | IT-MAIS T1 | IT-MAIS T2 | SIR T1 | SIR T2
1 | 0.83 | 1.6 | 42 | F | Right | 100 | 90 | 12 | Right | 62.5 | 77.5 | 2 | 3
2a | 0.23 | 0.83 | 36 | F | Right | 100 | 100 | 12 | Right | 32.5 | 47.5 | 2 | 2
3 | 0.4 | 0.9 | 61 | M | Right | 100 | 100 | 36 | Left | 7.5 | 52.5 | 2 | 5
4 | 0.2 | 0.83 | 31 | F | Right | 100 | 100 | 12 | Left | 7.5 | 57.5 | 1 | 1
5b | 0.3 | 0.93 | 30 | M | Right | 100 | 100 | 12 | Right | 7.5 | 55 | 1 | 1
6 | 0.3 | 1.2 | 55 | F | Right | 100 | 100 | 24 | Right | 22.5 | 37.5 | 1 | 2
7 | 0.2 | 1.6 | 40 | F | Right | 100 | 100 | 18 | Left | 7.5 | 22.5 | 1 | 1
8 | 0.23 | 1.63 | 81 | M | Right | 100 | 100 | 48 | Right | 55 | 85 | 2 | 3
9b | 0.13 | 0.93 | 29 | M | Right | 90 | 90 | 12 | Right | 27.5 | 47.5 | 2 | 2
10 | 0.23 | 1.13 | 15 | F | Both | 100 | 100 | 9 | Right | 12.5 | 25 | 1 | 1
11 | 0.23 | 1.13 | 66 | M | Right | 100 | 100 | 48 | Left | 55 | 82.5 | 3 | 4
12 | 0.4 | 1.1 | 24 | M | Left | 120 | 120 | 12 | Right | 12.5 | 17.5 | 1 | 1
13 | 0 | 0.57 | 38 | M | Right | 100 | 100 | 9 | Right | 22.5 | 37.5 | 2 | 2
14a | 0 | 0.73 | 37 | M | Right | 100 | 100 | 7 | Left | 12.5 | 37.5 | 1 | 2
15a | 0.7 | 2.67 | 23 | M | Right | 100 | 100 | 4 | Left | 35 | 62.5 | 3 | 3
16a | 0.7 | 5.83 | 20 | M | Left | 95 | 80 | 18 | Left | 77.5 | 90 | 2 | 3
17b | 0.2 | 1.13 | 36 | M | Right | 100 | 90 | 4 | Left | 2.5 | 10 | 1 | 1
18 | 0.23 | 0.9 | 73 | M | Right | 100 | 100 | 48 | Right | 50 | 50 | 2 | 2
19 | 0.3 | 0.8 | 41 | M | Right | 100 | 100 | 8 | Right | 50 | 77.5 | 2 | 5
20 | 0.2 | 1.1 | 21 | M | Right | 100 | 100 | 12 | Right | 15 | 37.5 | 1 | 1
21 | 0 | 1.9 | 35 | M | Right | 90 | 90 | 12 | Right | 22.5 | 55 | 2 | 2
22c | 0.2 | 3.17 | 20 | M | Right | 95 | 85 | 11 | Left | 23.5 | 67.5 | 2 | 2
23a | 0 | 0.63 | 78 | M | Right | 100 | 90 | 22 | Right | 50 | 100 | 2 | 5
24 | 0.2 | 1.8 | 16 | F | Both | 89 | 85 | 8 | Left | 25 | 65 | 1 | 1
25 | 0 | 1.13 | 48 | F | Right | 100 | 100 | 22 | Right | 62.5 | 90 | 2 | 3
26a | 0.2 | 1.1 | 25 | F | Left | 100 | 100 | 16 | Right | 57.5 | 85 | 2 | 2
27a | 0.2 | 1.8 | 28 | M | Right | 100 | 100 | 6 | Left | 22.5 | 50 | 1 | 1
28 | 0 | 1.6 | 21 | M | Both | 95 | 95 | 6 | Right | 40 | 52.5 | 1 | 1
29 | 0.67 | 1.4 | 26 | M | Both | 100 | 100 | 6 | Right | 22.5 | 32.5 | 1 | 1
30 | 0.13 | 0.8 | 29 | M | Left | 100 | 100 | 4 | Left | 30 | 32.5 | 1 | 1
31 | 0.77 | 1.77 | 43 | F | Both | 100 | 100 | 24 | Right | 22.5 | 50 | 2 | 3
32b | 0.23 | 1.1 | 47 | F | Right | 87.5 | 91.25 | 12 | Left | 12.5 | 37.5 | 2 | 2
33 | 0.2 | 0.9 | 19 | M | Both | 100 | 100 | 5 | Left | 7.5 | 40 | 1 | 2
34 | 0.9 | 1.97 | 18 | M | Both | 100 | 100 | 6 | Left | 37.5 | 57.5 | 1 | 1
M (SD) | 0.29 (0.25) | 1.43 (0.95) | 36.82 (17.65) | | | 97.76 (5.17) | 96.95 (7.35) | 15.95 (13.02) | | | | |

aParticipants with large vestibular aqueduct syndrome.

bParticipants wearing a hearing aid (HA) on the nonimplanted side at the time of testing.

cParticipants with large vestibular aqueduct syndrome and Mondini deformity.

Handedness was assessed using the Edinburgh scale. HA time = length of HA-wearing experience before cochlear implantation; ID = participant identification number; G = gender; F = female; M = male; HT = hearing threshold before implantation; CI = cochlear implant; IT-MAIS = Infant-Toddler Meaningful Auditory Integration Scale; SIR = Speech Intelligibility Rating; T1 = Test 1; T2 = Test 2; SD = standard deviation.


All participants were brought up in Mandarin-speaking environments. Recruitment criteria included (i) no known developmental, cognitive, or movement disorders apart from hearing loss, (ii) a difference of less than 15 dB HL between the two ears, (iii) no brain abnormalities in the presurgery T1-weighted structural whole-brain magnetic resonance imaging scan, and (iv) no complications from the surgery, accurate placement of all implanted electrodes in the cochlea according to postoperative X-ray examination, and appropriate impedance of all electrodes after surgery.

Among the 34 participants recruited, 15 were implanted in the left ear and 19 in the right ear (Table 2). Seven participants had large vestibular aqueduct syndrome, and one had both large vestibular aqueduct syndrome and Mondini deformity (Table 1). The remaining participants showed no inner-ear malformations on presurgery imaging examination.

Table 2

Demographic information of the left and right CI groups.

Group | Left CI (N = 15) | Right CI (N = 19) | t-value | P-value
Mean age at implantation, mo. (SD) | 32.73 (15.43) | 40.05 (19.45) | −1.190 | 0.243
Mean time of Test 1, mo. (SD) | 0.31 (0.25) | 0.26 (0.25) | 0.557 | 0.581
Mean time of Test 2, mo. (SD) | 1.76 (1.34) | 1.17 (0.38) | 1.83 | 0.077
Female, N (%) | 4 (26%) | 7 (37%) | −0.615 | 0.543
Right-handed, N | 10 | 13 | |
Left-handed, N | 2 | 2 | −0.193 | 0.848
Balance of hands, N | 3 | 4 | |
Mean HT of CI ear (SD) | 97.77 (4.25) | 98.68 (3.27) | −0.713 | 0.481
Mean HT of non-CI ear (SD) | 94.75 (8.08) | 97.89 (7.13) | −0.378 | 0.708

SD = Standard deviation; HT = hearing threshold before implantation; CI = cochlear implant.


NH young adults (n = 35, 20 females; mean ± standard deviation of age: 22.8 ± 1.78 years) were recruited as controls for the testing equipment and stimulation paradigm. All adult participants were right-handed, had NH (pure-tone thresholds ≤ 20 dB HL from 0.25 to 16 kHz in both ears), and had no cognitive or language disorders.

Ethics

The study was approved by the Ethical Committee of Peking University and the Peking University First Hospital (Reference: 2018 research No. 239). The conduct of the study was governed by the ethical standards of the Declaration of Helsinki of 1975 (as revised in 1983). Informed consent was obtained from the parent(s) or guardian(s) of all child participants and from all adult participants.

Experimental design

One month after implantation, the participants were scheduled to return to the hospital to have their cochlear implants activated and mapped. Thereafter, they were advised to return following a recommended schedule for CI mapping and outcome evaluation according to the standard clinical protocol; the actual frequency and dates of the return visits were decided by the parents or guardians, with considerable variation across individual cases. After the clinical evaluations on each subsequent hospital visit, auditory cortical functions were evaluated using fNIRS and electroencephalography (not reported here) in the Auditory Cognitive Neuroscience Laboratory at Beijing Normal University. The NH young adults completed a single fNIRS session.

As most participants were prelingually deaf toddlers with bilateral sensorineural hearing loss between 2 and 5 years of age (88%) and without effective means of communication, experimental assessment was stopped whenever a participant cried, showed signs of agitation, emotional disturbance, or frequent head movements, or whenever the guardian so desired. Consequently, brain function assessment could not always be completed on each laboratory visit. To be included in the current study, a participant had to complete at least two fNIRS sessions with valid data within the first 6 months after CI activation. Among the 34 participants included, the first laboratory test (Test 1) took place on average 8.7 ± 7.5 days after CI activation (range 0–27 days), and the second test (Test 2) 42.9 ± 28.5 days after CI activation (range 17–175 days), that is, 34.3 ± 25.9 days after Test 1 (range 15–153.9 days).

CI outcome assessment

On each hospital visit, the participants' postimplantation speech and language development was measured using three questionnaires: the IT-MAIS/MAIS (Zimmerman-Phillips et al. 1997), CAP (Al-shawi et al. 2020), and SIR (Allen et al. 1998). These questionnaires were designed for participants who cannot cooperate with or complete word and sentence audiometry, and were appropriate for the biological and auditory ages of the participants. Scores from the questionnaires were rescaled to 0–100 to provide a unified measure of the participants' auditory/speech ability.
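The 0–100 rescaling is a simple linear map from each instrument's raw range. A minimal sketch (ours, not the authors' code; the raw ranges shown in the comment are assumptions that should be checked against each instrument's manual):

```python
def rescale(raw, lo, hi):
    """Linearly map a raw questionnaire score from [lo, hi] onto 0-100."""
    return 100.0 * (raw - lo) / (hi - lo)

# Hypothetical raw ranges (verify against each instrument's manual):
# IT-MAIS: 0-40, CAP: 0-7, SIR: 1-5
print(rescale(25, 0, 40))  # IT-MAIS raw score 25 -> 62.5
```

Note that an IT-MAIS raw score of 25 out of 40 maps to 62.5, matching the granularity of the rescaled IT-MAIS values reported in Table 1.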

fNIRS assessment

After each hospital visit, auditory cortical functions of the CI children were assessed with fNIRS in a sound-attenuated chamber at Beijing Normal University. The participants were seated in a child chair or on the lap of their guardian, watching a muted video or playing with silent toys of their choice while an elastic fNIRS cap was placed on their head. The same testing environment and experimental procedure were applied to the NH adult participants.

The acoustic stimuli, in the form of 20-s segments of soundtracks, were adapted from audiobooks and music recordings using Praat (Boersma 2002; Praat 2017). For the current study, only the speech (segments of audiobooks of children's stories), music (segments of classical instrumental music recordings), and noise (mixed audio tracks of 6 stories by different tellers) conditions were analyzed (Fig. 1a). A fourth condition, speech-in-noise, was designed to probe a later stage of auditory perceptual development after speech acquisition, which most of the CI children had not yet reached by the end of our observation period; it was therefore not included in the current analysis.
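As an illustration of how a multispeaker-babble noise track can be built, the sketch below sums several equal-length speech tracks and normalizes the mixture's level. This NumPy version is hypothetical and does not reproduce the authors' Praat processing; the target RMS value is an arbitrary placeholder:

```python
import numpy as np

def make_babble(tracks, target_rms=0.05):
    """Mix several equal-length speech tracks into multispeaker babble,
    then normalize the mixture to a target RMS level."""
    n = min(len(t) for t in tracks)  # truncate to the shortest track
    mix = np.sum([np.asarray(t[:n], dtype=float) for t in tracks], axis=0)
    rms = np.sqrt(np.mean(mix ** 2))
    return mix * (target_rms / rms)

# Six synthetic "talkers" stand in for the six story recordings
rng = np.random.default_rng(0)
babble = make_babble([rng.standard_normal(16000) for _ in range(6)])
print(round(float(np.sqrt(np.mean(babble ** 2))), 3))  # -> 0.05
```

Summing independent talkers flattens the amplitude envelope, which is what makes babble a useful "noise" contrast to the strongly modulated speech and music conditions.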

Fig. 1

fNIRS experimental design and representative hemodynamic responses. (a) Stimuli and presentation design for fNIRS assessments. The sound conditions are illustrated by their sample waveforms (black lines) with amplitude envelopes outlined (red lines). (b) Placement of optodes on the cortical regions and the relative sensitivity to hypothetical cortical activation (in logarithmic scale by color) for cortical regions of interest. The photon emitters (red dots) and detectors (blue dots) were placed above the bilateral temporal lobes and adjacent areas, yielding 20 channels of light transmission (denoted in white text) for cortical activity assessment. On each hemisphere, two regions of interest were identified and their relative sensitivity depicted: the anterior temporal lobe (ATL, channels in yellow lines) and the Sylvian parietotemporal area (Spt, channels in green lines). (c) The waveforms of HbO and Hb responses to speech for CI children and NH adults in two channels. The solid lines represent the mean waveforms and the shaded areas represent bootstrapped 95% confidence intervals (10,000 resamples over participants). (d) T-statistic maps of each sound condition compared with silence for NH adults (n = 35).

During fNIRS recording, optical signals were obtained using a continuous-wave NIRSport2 optical topography system (NIRx Medical Technologies, NY, USA) at wavelengths of 760 and 850 nm with a sampling rate of 7.81 Hz. The optode probes consisted of two symmetric arrays of 4 light-emitting-diode illuminators and 4 photodiode detectors (3-cm optode separation), yielding 10 measurement channels per hemisphere covering the temporal lobe and adjacent language-processing areas (Fig. 1b) (Dronkers et al. 2004; Turken and Dronkers 2011). Optode placement was in accordance with the international 10–10 system (see also Oostenveld and Praamstra 2001; Long et al. 2021; Zhao et al. 2021).

Each fNIRS session started with 5 min of resting in silence, during which the CI (and hearing aid, if worn) was turned off. The CI (and hearing aid, if worn) was then turned on, starting a 17-min passive-listening session in which four conditions of acoustic stimuli (speech, music, noise, and speech-in-noise) were presented in a block design, with 20-s sound blocks interleaved with silence blocks of varying durations between 20 and 25 s. Each condition was presented for 5 blocks, in a pseudorandomized order such that the same condition never occurred in consecutive blocks. To approximate the children's everyday listening conditions as closely as possible, a habitually worn hearing aid was turned on and off together with the CI.
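The constraint that the same condition never occupies consecutive blocks can be met with simple rejection sampling. A sketch under that assumption (the authors' actual randomization procedure is not described):

```python
import random

def block_order(conditions, n_blocks, seed=None):
    """Pseudorandomized block sequence: each condition appears n_blocks
    times and the same condition never occupies consecutive blocks."""
    rng = random.Random(seed)
    while True:  # rejection sampling; terminates quickly for small designs
        seq = [c for c in conditions for _ in range(n_blocks)]
        rng.shuffle(seq)
        if all(a != b for a, b in zip(seq, seq[1:])):
            return seq

order = block_order(["speech", "music", "noise", "speech_in_noise"], 5, seed=1)
print(len(order))  # -> 20
```

With 4 conditions of 5 blocks each, valid orderings are plentiful, so rejection sampling rarely needs more than a handful of attempts.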

The sounds were played from two cube speakers (BOSE, USA) placed at 45° to the left and right of the direction the participant was facing, 1 m away from the participant’s center of head. All sounds were presented at 65 dB SPL measured at the listening position by a handheld sound level meter (Type 2270, Brüel and Kjær, Denmark).

fNIRS data processing

Raw fNIRS recordings were loaded into MATLAB (R2017b, The MathWorks, USA) for further analyses. The sensitivity of the fNIRS measurement to the targeted cortical regions was evaluated using the HOMER2 toolbox (version 1.5, http://homer-fnirs.org/) by simulating the probabilistic path of photon transmission (Boas et al. 2002; Huppert et al. 2009). Probe positions determined by the international 10–10 system were projected onto the brain atlas (see Table 3 for the MNI coordinates of the channels). Monte Carlo simulations (100 million photons) were used to generate sensitivity profiles for each channel (Fig. 1b) (Aasted et al. 2015). Based on the sensitivity profiles and the dual-stream framework of auditory and speech processing, the anterior temporal lobe (ATL) and the Sylvian parietotemporal area (Spt) of each hemisphere, corresponding to the initial stages of the ventral and dorsal auditory processing streams, respectively, were identified as regions of interest (ROIs) for further analyses (Fig. 1b) (Hickok and Poeppel 2007; Rauschecker and Scott 2009; Friederici 2011; Poeppel 2014).

Table 3

Spatial registration for NIRS channels.

Channel | MNI coordinates (X, Y, Z) | BA (percentage of overlap) | Brain region
Left anterior temporal lobe
2 | −68, −17, 1 | 22 (0.65) | Superior temporal gyrus
5 | −67, −5, −16 | 21 (1) | Middle temporal gyrus
8 | −70, −28, −10 | 21 (1) | Middle temporal gyrus
Left Sylvian parietotemporal area
3 | −68, −26, 26 | 40 (0.65) | Supramarginal gyrus
6 | −67, −41, 11 | 22 (0.92) | Superior temporal gyrus
9 | −63, −48, 37 | 40 (0.54) | Supramarginal gyrus
Right anterior temporal lobe
12 | 65, −3, −19 | 21 (0.88) | Middle temporal gyrus
14 | 69, −12, −1 | 22 (0.73) | Superior temporal gyrus
16 | 70, −27, −11 | 21 (1) | Middle temporal gyrus
Right Sylvian parietotemporal area
15 | 68, −23, 27 | 40 (0.85) | Supramarginal gyrus
17 | 69, −37, 11 | 22 (0.96) | Superior temporal gyrus
19 | 61, −44, 38 | 40 (0.85) | Supramarginal gyrus
Other channels
1 | −65, −4, 21 | 4 (0.5) | Primary motor cortex
4 | −58, 17, 9 | 44 (0.62) | Pars opercularis
7 | −64, −57, −1 | 37 (0.58) | Fusiform gyrus
10 | −57, −63, 21 | 39 (0.54) | Angular gyrus
11 | 58, 16, 11 | 9 (0.46) | Dorsolateral prefrontal cortex
13 | 63, −1, 21 | 6 (0.81) | Premotor cortex and supplementary motor cortex
18 | 62, −54, −1 | 37 (0.5) | Fusiform gyrus
20 | 56, −59, 23 | 39 (0.85) | Angular gyrus

MNI = Montreal Neurological Institute; BA = Brodmann area.
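The ROI membership of the channels in Table 3 can be captured in a small lookup for aggregating channel-level statistics. A Python sketch (the study's analyses were run in MATLAB; `ROIS` and `roi_mean` are illustrative names, and the example beta values are hypothetical):

```python
import numpy as np

# Channel-to-ROI assignment transcribed from Table 3
ROIS = {
    ("left", "ATL"): [2, 5, 8],
    ("left", "Spt"): [3, 6, 9],
    ("right", "ATL"): [12, 14, 16],
    ("right", "Spt"): [15, 17, 19],
}

def roi_mean(channel_values, roi):
    """Average a per-channel statistic (dict: channel -> value) over one ROI,
    skipping channels that were excluded by quality screening."""
    vals = [channel_values[ch] for ch in ROIS[roi] if ch in channel_values]
    return float(np.mean(vals)) if vals else float("nan")

betas = {2: 0.1, 5: 0.3, 8: 0.2}          # hypothetical HbO betas
print(roi_mean(betas, ("left", "ATL")))   # -> 0.2
```

Skipping missing channels rather than failing mirrors the fact that individual channels can be dropped during quality screening while the ROI average remains defined.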


Data quality screening was performed using the FC_NIRS toolbox and custom scripts in MATLAB (Xu et al. 2015). Recordings from a given channel were regarded as invalid and excluded from further analysis if the heart rate (1 ~ 1.5 Hz) was not detectable or the coefficient of variation exceeded 20 (Piper et al. 2014). Valid recordings were then preprocessed using the HOMER2 toolbox, including motion artifact correction, bandpass filtering, hemodynamic signal calculation, and baseline correction. Motion artifacts were removed using principal component analysis (PCA; Zhang et al. 2005). A given fNIRS session was included in further analyses only if more than 66.7% of the channels in our ROIs were valid and more than 60% of the blocks were valid for each condition. A bandpass filter of 0.01 ~ 0.2 Hz was then applied to exclude physiological noise such as breathing (0.2 ∼ 0.5 Hz) and slow drifts (Xu et al. 2020). Relative concentration changes of oxyhemoglobin (HbO) and deoxyhemoglobin (Hb) were calculated using the modified Beer–Lambert law (MBLL), with the differential path length factor (DPF) set at 6 for both wavelengths. Each block was baseline-corrected using the 5-s silence preceding stimulus onset.
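For concreteness, the band-pass filtering and MBLL conversion described above can be sketched as follows. This is an illustrative Python re-implementation, not the MATLAB/HOMER2 code used in the study; the function names are hypothetical, and the extinction-coefficient matrix passed to `mbll_hbo_hb` is a placeholder that a caller would fill with system-specific values for the two wavelengths.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mbll_hbo_hb(od_wl1, od_wl2, ext, dpf=6.0, distance=3.0):
    """Modified Beer-Lambert law: convert optical-density changes at two
    wavelengths into relative HbO/Hb concentration changes.

    ext      : 2x2 extinction-coefficient matrix
               [[e_HbO(wl1), e_Hb(wl1)], [e_HbO(wl2), e_Hb(wl2)]]
    dpf      : differential path length factor (set to 6 in the study)
    distance : source-detector separation in cm (3 cm is a typical value)
    """
    od = np.vstack([od_wl1, od_wl2])            # 2 x n_samples
    # delta-OD = distance * DPF * (E @ delta-conc)  =>  invert E
    conc = np.linalg.solve(ext, od) / (distance * dpf)
    return conc[0], conc[1]                     # HbO, Hb

def bandpass(signal, fs, low=0.01, high=0.2, order=3):
    """Zero-phase 0.01-0.2 Hz band-pass, excluding drifts and respiration."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)
```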

This study focused on the HbO signal, which has been shown to be the most sensitive to regional cerebral blood flow changes (Hoshi 2007). To quantify sound-evoked responses, beta weights (βs) were calculated using general linear models (GLMs) in the NIRS-SPM toolbox (Ye et al. 2009). The GLM design matrix was derived from the block design of the stimulus conditions convolved with the canonical hemodynamic response function (HRF) (e.g. Minagawa-Kawai et al. 2011). The beta weights reflect the strength of cortical activation for a given stimulus condition compared with silence, with positive values indicating increased and negative values indicating decreased hemodynamic responses relative to baseline (see also Wilf et al. 2016; Holmes and Johnsrude 2021; Zhao et al. 2021).
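The GLM step can be illustrated with a minimal Python sketch (the study used the NIRS-SPM toolbox in MATLAB): a boxcar for each stimulus condition is convolved with a canonical double-gamma HRF, and beta weights are obtained by ordinary least squares against the HbO time course. The HRF parameters below are conventional defaults, not values reported in the paper.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak=6.0, undershoot=16.0, ratio=1 / 6.0):
    """Simplified double-gamma canonical HRF (peak ~5-6 s, late undershoot)."""
    h = gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)
    return h / h.max()

def glm_betas(hbo, onsets, duration, fs):
    """Least-squares beta weights for each condition vs. silent baseline.

    hbo      : HbO time course, shape (n_samples,)
    onsets   : dict mapping condition name -> list of onset times (s)
    duration : block duration (s); fs : sampling rate (Hz)
    """
    n = len(hbo)
    hrf = canonical_hrf(np.arange(0, 32, 1 / fs))
    columns = [np.ones(n)]                         # intercept = baseline
    for times in onsets.values():
        box = np.zeros(n)
        for on in times:                           # block-design boxcar
            box[int(on * fs):min(int((on + duration) * fs), n)] = 1.0
        columns.append(np.convolve(box, hrf)[:n])  # convolve with the HRF
    X = np.column_stack(columns)
    betas, *_ = np.linalg.lstsq(X, hbo, rcond=None)
    return betas[1:]                               # per-condition betas
```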

Statistical analyses

Cortical functional activity (averaged beta weights in each ROI), CI outcome measures, and relevant clinical variables were statistically analyzed using IBM SPSS Statistics (version 21.0, IBM Corp., USA). Outlying beta values (>3 standard deviations from the group mean) were removed before statistical analyses. Equal variance of the left and right implantation groups for each variable in Table 2 was confirmed using Levene’s test (P > 0.05 in all cases). Analyses of variance (ANOVAs) were then conducted to compare the two groups across conditions and hemispheres for each ROI. Analyses of covariance (ANCOVAs) were used to evaluate the covariation of clinical or CI outcome measures with brain activation. Pearson correlations were used to evaluate neurobehavioral relations.
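The outlier screening and equal-variance check can be expressed compactly; the following is an illustrative Python/SciPy sketch of the same logic, not the SPSS procedure itself.

```python
import numpy as np
from scipy import stats

def drop_outliers(betas, k=3.0):
    """Drop beta values more than k standard deviations from the group mean."""
    betas = np.asarray(betas, dtype=float)
    z = (betas - betas.mean()) / betas.std(ddof=1)
    return betas[np.abs(z) <= k]

def equal_variance(group_a, group_b, alpha=0.05):
    """Levene's test; True means the equal-variance assumption holds."""
    _, p = stats.levene(group_a, group_b)
    return p > alpha
```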

Data and code availability statement

All original behavioral and fNIRS brain imaging data are part of a larger study and dataset. All data presented in this paper are available as tables. fNIRS recordings were made using NIRSport2 optical topography systems (NIRx Medical Technologies, NY, USA). Raw fNIRS recordings were loaded to MATLAB (R2017b, The MathWorks, USA) for further analyses. Sensitivity of fNIRS measurement to the targeted cortical regions was evaluated using the HOMER2 toolbox (version 1.5, http://homer-fnirs.org/). Data quality screening was performed using the FC_NIRS toolbox and custom scripts in MATLAB (Xu et al. 2015).

Results

At the first fNIRS test, both groups of CI children demonstrated typical hemodynamic responses to sounds in their auditory cortical areas, with HbO concentration changing on a larger scale than in NH adults (Fig. 1c), consistent with the higher level of synaptic activity in the developing than in the mature brain. Note that for NH adults (Fig. 1d), noise-evoked responses were greater than those evoked by speech (ANOVA, ATL: main effect of condition: F1,34 = 5.24, P = 0.028, partial η2 = 0.134; Spt: F1,34 = 3.55, P = 0.068, partial η2 = 0.095) and by music (ANOVA, ATL: main effect of condition: F1,34 = 4.51, P = 0.041, partial η2 = 0.117; Spt: F1,34 = 0.34, P = 0.565, partial η2 = 0.010), particularly in the anterior temporal cortex, inconsistent with and sometimes opposite to the response and developmental patterns of CI children (Figs 2–6). To examine the influence of implantation side on auditory cortical development, the following analyses focused on CI children, with the influence of implantation side indexed by group differences, development indexed by within-subject changes in cortical responses, and auditory functionality indexed by signal–noise discrimination.

Fig. 2

Cortical processing of speech sounds for the left and right CI children. a) T statistic maps for cortical activation by speech shortly after cochlear implant (CI) activation (Test 1, left column), after weeks to months of CI experiences (Test 2, middle column), and difference between the two tests (right column) for the left (top row, n = 15) and right (bottom row, n = 19) CI children. b) Functional activation measured with beta weights in the anterior temporal lobe (ATL) of each hemisphere for the left (red boxes) and right (blue boxes) CI children. c) Functional activation in the Sylvian parietotemporal area (Spt). d) Developmental changes between the two tests for the ATLs and Spts. In the abscissa, LH denotes the left hemisphere, and RH denotes the right hemisphere.

Fig. 3

Cortical processing of speech–noise discrimination for the left and right CI children. a) T statistic maps for cortical activation difference between the speech and noise conditions shortly after cochlear implant (CI) activation (Test 1, left column), after weeks to months of CI experiences (Test 2, middle column), and difference between the two tests (right column) for the left (top row, n = 15) and right (bottom row, n = 19) CI children. b) Functional activation measured with beta weights in the anterior temporal lobe (ATL) of each hemisphere for the left (red boxes) and right (blue boxes) CI children. c) Functional activation in the Sylvian parietotemporal area (Spt). d) Developmental changes between the two tests for the ATLs and Spts. In the abscissa, LH denotes the left hemisphere, and RH denotes the right hemisphere.

Fig. 4

Cortical processing of nonspeech sounds shortly after CI activation (Test 1) for the left and right CI children. a) T statistic maps for cortical activation by signal (music, left column) and noise (middle column), and difference between the two conditions (right column) at Test 1 for the left (top row) and right (bottom row) CI children. b and c) Functional activation for anterior temporal lobe (ATL, b) and Sylvian parietotemporal area (Spt, c). d) Signal–noise differences in the cortical regions of interest. In the abscissa, LH denotes the left hemisphere, and RH denotes the right hemisphere.

Fig. 5

Cortical processing of nonspeech sounds after a period of CI use (Test 2) for the left and right CI children. a) T statistic maps for cortical activation by signal (music, left column) and noise (middle column), and difference between the two conditions (right column) at Test 2 for the left (top row) and right (bottom row) CI children. b and c) Functional activation for anterior temporal lobe (ATL, b) and Sylvian parietotemporal area (Spt, c). d) Relationship between cortical activation in the right ATL and age of implantation for the left (red) and right (blue) CI children. In the abscissa, LH denotes the left hemisphere, and RH denotes the right hemisphere.

Fig. 6

Cortical development of nonspeech sound processing with weeks to months of CI experiences. a) T statistic maps for difference in cortical activity between Test 1 and Test 2 for signal (left column) and noise (right column) for the left (top row) and right (bottom row) CI children. b and c) Developmental changes in the anterior temporal lobe (ATL, b) and the Sylvian parietotemporal area (Spt, c) for the left (red) and right (blue) CI children. In the abscissa, LH denotes the left hemisphere, and RH denotes the right hemisphere.

Cortical processing of speech sounds after unilateral cochlear implantation

To test the hypothesis of a left hemisphere advantage for speech processing, speech-evoked responses were compared between the left and right CI children (Fig. 2) shortly after CI activation (Test 1) and again after weeks to months of CI use (Test 2). A group (left vs. right CI) by hemisphere (left vs. right) repeated-measures ANOVA was conducted separately for each ROI (ATL and Spt) at each test. For both regions and both tests, speech-evoked responses were similar between the hemispheres and between the CI groups (Fig. 2b and c; P > 0.1 for all effects; see Table 4 for detailed statistics). Further, no changes were observed between the two tests for either group (Fig. 2d; P > 0.1; see Table 4 for detailed statistics), indicating that cortical processing of speech sounds, as well as its hypothesized left hemisphere advantage, had yet to develop in newly implanted young children with prelingual deafness.

Table 4

Statistical results for cortical responses to speech stimulation.

Region                         Time                    Left CI vs. right CI    Interhemispheric asymmetry   Interaction
                                                       F      P      η2       F      P      η2            F      P      η2
Anterior temporal lobe         Test 1                  0.051  0.823  0.002    3.683  0.064  0.106         0.854  0.362  0.027
                               Test 2                  0.380  0.542  0.012    0.273  0.605  0.008         0.506  0.482  0.016
                               Developmental changes   0.424  0.520  0.014    1.360  0.252  0.042         2.638  0.114  0.078
Sylvian parietotemporal area   Test 1                  0.001  0.979  0.001    0.140  0.711  0.004         2.108  0.156  0.062
                               Test 2                  0.285  0.597  0.009    0.096  0.758  0.003         0.078  0.782  0.002
                               Developmental changes   0.071  0.792  0.002    0.036  0.851  0.001         0.558  0.460  0.017

Speech perception depends on auditory processing of acoustic features of the speech sounds as well as on linguistic processing. To probe into the development of auditory processing involved in speech perception for CI children, we next examined their cortical discrimination of speech and noise sounds. Shortly after CI activation (Test 1, Fig. 3a–c), no influence of implantation side or differences between speech- and noise-evoked responses were observed in either ATL (Fig. 3b; main effect of group: F1,29 = 0.031, P = 0.862, partial η2 = 0.001; condition: F1,29 = 0.135, P = 0.716, partial η2 = 0.005; group by condition interaction: F1,29 = 0.027, P = 0.872, partial η2 = 0.001) or Spt (Fig. 3c; main effect of group: F1,30 = 0.564, P = 0.459, partial η2 = 0.018; condition: F1,30 = 3.196, P = 0.084, partial η2 = 0.096; group by condition interaction: F1,30 = 0.880, P = 0.356, partial η2 = 0.028). After a period of CI use (Test 2, Fig. 3a–c), speech evoked greater responses than noise in ATL (Fig. 3b; main effect of condition: F1,31 = 9.380, P = 0.005, partial η2 = 0.232) but not in Spt (Fig. 3c; main effect of condition: F1,31 = 2.467, P = 0.126, partial η2 = 0.074), without any difference between the two CI groups (main effect of group: F < 0.438, P > 0.513, partial η2 < 0.014; group-related interactions: F < 0.819, P > 0.372, partial η2 < 0.026). Between the two tests, the two CI groups showed similar developmental improvement in speech-to-noise discrimination in ATL (Fig. 3d; main effect of condition: F1,30 = 9.381, P = 0.005, partial η2 = 0.238; main effect of group: F1,30 = 1.319, P = 0.260, partial η2 = 0.042; group by condition interaction: F1,30 = 1.383, P = 0.249, partial η2 = 0.044), and no improvement in Spt (main effect of condition: F1,31 = 2.174, P = 0.150, partial η2 = 0.066; main effect of group: F1,30 = 0.333, P = 0.568, partial η2 = 0.011; group by condition interaction: F1,30 = 1.404, P = 0.245, partial η2 = 0.043). 
Thus, cortical discrimination of speech from noise improved with increasing hearing experience in the ATLs, regardless of implantation side. Given the lack of developmental changes in speech-evoked responses, this improvement was probably mediated by the development of nonspeech auditory processing, which is examined in further detail below.

Cortical processing of nonspeech sounds after unilateral cochlear implantation

To test the hypothesis of a right hemisphere advantage for nonspeech processing, hemodynamic responses to music (signal) and multispeaker babble (noise) (Fig. 1a) were compared between the left- and right-implanted children. In parallel with the speech analyses, we first examined cortical response patterns shortly after CI activation (Test 1) and after a period of CI use (Test 2), and then analyzed the developmental changes between the two tests.

Shortly after CI activation (Test 1)

For ATLs, functional activity did not vary with implantation side, stimulus condition, or hemisphere (Fig. 4b; main effect of group: F1,32 = 0.315, P = 0.579, η2 = 0.010; condition: F1,32 = 1.409, P = 0.244, η2 = 0.043; hemisphere: F1,32 = 2.581, P = 0.118, η2 = 0.077; interactions: F1,32 < 0.5, P > 0.05, η2 < 0.02). For Sylvian parietotemporal areas (Spts), the left and right implantation groups showed different activation patterns across conditions (Fig. 4c; condition by group interaction: F1,32 = 4.456, P = 0.043, partial η2 = 0.122; main effect of group: F1,32 = 1.071, P = 0.309, partial η2 = 0.032; main effect of hemisphere: F1,32 = 0.039, P = 0.844, partial η2 = 0.001). Follow-up separate ANOVAs for each condition revealed that the two groups responded similarly to signal (main effect of group: F1,32 = 0.430, P = 0.517, partial η2 = 0.013; hemisphere F1,32 = 0.108, P = 0.744, partial η2 = 0.003; interaction: hemisphere by group: F1,32 = 0.206, P = 0.653, partial η2 = 0.006), but differently for noise (main effect of group: F1,32 = 5.248, P = 0.029, partial η2 = 0.141; hemisphere F1,32 = 0.013, P = 0.911, partial η2 = 0.000; interaction hemisphere by group F1,32 = 0.180, P = 0.674, partial η2 = 0.006), with smaller noise-induced responses for the left implantation group. Further, we examined whether such group differences were attributable to other patient-specific factors including AoI, residual hearing, time lapse since implantation, duration of hearing aid wearing, and dominant hand using ANCOVAs with these factors as covariates. None of these factors showed significant covariance or interaction with implantation side (F < 2.8, P > 0.1, partial η2 < 0.23).

Functional differentiation of the brain regions was indexed by a t-test comparison between the conditions, i.e. the signal–noise discriminability (Fig. 4a, right column; Fig. 4d). While the two groups were similar in ATLs (main effect of group: F1,32 = 0.005, P = 0.944, partial η2 < 0.001; main effect of hemisphere F1,32 = 0.037, P = 0.848, partial η2 = 0.001; interaction: F1,32 = 0.489, P = 0.489, partial η2 = 0.016), the left implantation group showed greater signal–noise discrimination in Spts (main effect of group: F1,32 = 4.456, P = 0.043, partial η2 = 0.122; hemisphere: F1,32 = 0.122, P = 0.729, partial η2 = 0.004). ANCOVAs revealed no influence of AoI, residual hearing, time lapse since implantation, duration of hearing aid wearing, or dominant hand (F < 2.2, P > 0.15, partial η2 < 0.24).

After a period of CI use (Test 2)

The fNIRS measure of auditory functional activity was repeated after a period of CI use (Fig. 5a). For ATLs, functional responses were greater to music than noise (Fig. 5b; main effect of condition: F1,31 = 4.680, P = 0.038, partial η2 = 0.131; main effect of hemisphere F1,31 = 3.961, P = 0.055, partial η2 = 0.113), without difference between the two CI groups (main effect of group: F1,31 = 0.076, P = 0.785, partial η2 = 0.002; interaction: hemisphere by condition by group, F1,31 = 1.203, P = 0.281, partial η2 = 0.037; hemisphere by group, F1,31 = 2.609, P = 0.116, partial η2 = 0.078; condition by group, F1,31 = 2.941, P = 0.096, partial η2 = 0.087).

ANCOVAs revealed that the signal responses of the right ATL for left and right CI children were associated with differential influences of AoI (Fig. 5d; group by age interaction: F1,24 = 4.914, P = 0.036, partial η2 = 0.170): The signal response level decreased with increasing age of implantation only when the left ear was implanted. No influence was observed for residual hearing (F1,24 = 1.617, P = 0.216, partial η2 = 0.063), time lapse since implantation (F1,24 = 0.302, P = 0.587, partial η2 = 0.012), duration of hearing aid wearing (F1,24 = 0.111, P = 0.742, partial η2 = 0.005), or the dominant hand (F1,24 = 1.196, P = 0.285, partial η2 = 0.047).

For Spts (Fig. 5c), sound-evoked hemodynamic responses manifested marked interhemispheric asymmetry favoring the right side (main effect of hemisphere: F1,31 = 16.552, P < 0.001, partial η2 = 0.348) and a tendency of greater signal than noise activity (main effect of condition: F1,31 = 4.150, P = 0.050, partial η2 = 0.118), but without differences between the two CI groups (main effect of group: F1,31 = 0.117, P = 0.735, partial η2 = 0.004; hemisphere by group interaction: F1,31 = 0.001, P = 0.975, partial η2 = 0.000; condition by group interaction: F1,31 = 0.001, P = 0.975, partial η2 = 0.000; condition by hemisphere by group interaction: F1,31 = 0.243, P = 0.625, partial η2 = 0.008). Signal–noise discriminability during the second test was similar between the two CI groups in overall cortical responses (all group effects of ANOVA: F < 2.6, P > 0.09, partial η2 < 0.08).

Developmental changes

Within-subject developmental changes in cortical functional responses between the two tests were illustrated in Fig. 6a. For ATLs (Fig. 6b), developmental changes varied with stimulus condition (main effect of condition: F1,30 = 8.174, P = 0.008, partial η2 = 0.214) and did so differentially between the two CI groups (condition by group interaction: F1,30 = 5.104, P = 0.031, partial η2 = 0.145; main effect of group: F1,30 = 0.164, P = 0.688, partial η2 = 0.005; other interactions: F < 0.5, P > 0.5, partial η2 < 0.02), with greater development in signal–noise discrimination for the left than the right CI group. Follow-up analyses revealed that this left CI advantage resulted from greater noise reduction (main effect of group: F1,30 = 4.579, P = 0.036, partial η2 = 0.069) and similar signal enhancement (F1,30 = 1.829, P = 0.181, partial η2 = 0.028). This group difference bore no relations with AoI, residual hearing, time lapse since implantation, duration of hearing aid wearing, or dominant hand (F < 2.8, P > 0.1, partial η2 < 0.1).

For Spts (Fig. 6c), functional development was asymmetric between the hemispheres (main effect of hemisphere: F1,30 = 4.871, P = 0.035, partial η2 = 0.136), with greater increase in signal responses and smaller reduction in noise response on the right hemisphere. No differences were observed between the CI groups (main effect of group: F1,30 = 0.252, P = 0.619, partial η2 = 0.008; condition by group interaction: F1,30 = 1.797, P = 0.190, partial η2 = 0.055; hemisphere by group interaction: F1,30 = 0.425, P = 0.519, partial η2 = 0.014).

Neurobehavioral relations

CI outcomes were assessed at the two tests using three questionnaires completed with parents/guardians (IT-MAIS/MAIS, CAP, and SIR). The sum score of the three questionnaires increased between the two tests (Fig. 7a; F1,32 = 64.727, P < 0.001, partial η2 = 0.669), without any difference between the two CI groups (see Table 5 for statistical details).

Fig. 7

Behavioral development and neurobehavioral relations for the left and right CI children. a) Auditory and speech performance of CI children evaluated by three questionnaires with parents/guardians at Test 1 and Test 2. b) Shortly after CI activation, behavioral performance measured by the sum score of the three questionnaires was correlated with cortical activity in the right anterior temporal lobe (ATL) elicited by signal sound (music) for all children regardless of CI side (r = −0.367, P = 0.033). c) After weeks to months of CI experiences, behavioral performance of the CI children was predicted by developmental changes in noise processing of the left ATL regardless of CI side (r = 0.445, P = 0.011). Abbreviations: cochlear implant (CI); Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS); Categories of Auditory Performance (CAP); Speech Intelligibility Rate (SIR).

Table 5

Behavioral results.

                             Time      IT-MAIS/MAIS       CAP             SIR
Left CI group, mean (SD)     Test 1    24.233 (20.669)    3.400 (0.986)   1.533 (0.743)
                             Test 2    51.000 (21.585)    4.200 (1.146)   2.000 (1.254)
Right CI group, mean (SD)    Test 1    34.079 (18.431)    3.632 (1.116)   1.632 (0.496)
                             Test 2    55.790 (23.512)    4.500 (1.295)   2.211 (1.228)
Main effect of group         Test 1    F1,31 = 2.150,     F1,31 = 0.399,  F1,31 = 0.213,
                                       P = 0.152          P = 0.532       P = 0.648
                             Test 2    F1,31 = 0.374,     F1,31 = 0.487,  F1,31 = 0.242,
                                       P = 0.545          P = 0.491       P = 0.626

CI = Cochlear implant; IT-MAIS = Infant-Toddler Meaningful Auditory Integration Scale; CAP = Categories of Auditory Performance; SIR = Speech Intelligibility Rate; SD = standard deviation.

Multivariate regression analyses were conducted to reveal the cortical processes contributing to behavioral outcomes. For behavior shortly after CI activation (the sum scores of the three questionnaires at Test 1), cortical responses to signal (music) and noise in all ROIs at Test 1 were entered. Behavior was predicted only by signal-evoked responses in the right ATL (Fig. 7b; coefficient = −41.74, P = 0.023; other predictors: P > 0.05), with better outcomes associated with a lower (hence closer to adult, Fig. 1c and d) level of cortical responses. For behavior after a period of CI use (Test 2), cortical responses to signal (music) and noise in the two hemispheres at Test 2 and their developmental changes between the two tests were entered. Behavior was associated only with developmental changes in noise-evoked responses in the left ATL (Fig. 7c; coefficient = 38.28, P = 0.049; other predictors: P > 0.05), i.e. a greater increase in cortical responses to noise (hence changing toward the adult pattern, Fig. 1c and d) predicted better speech and communication skills as observed by parents/guardians. Consistent with the lack of group differences in behavioral measures, no effect of implantation side (F < 0.157, P > 0.695, partial η2 < 0.005) or clinical factors (AoI, residual hearing, time lapse since implantation, duration of hearing aid wearing: P > 0.05) was found for the neurobehavioral relations.
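This regression step can be sketched as an ordinary least-squares fit of the questionnaire sum scores on candidate cortical predictors. The study performed these analyses in SPSS; the Python function and variable names below are illustrative only.

```python
import numpy as np

def fit_outcome_model(scores, predictors):
    """OLS regression of behavioral sum scores on ROI responses.

    scores     : (n_children,) questionnaire sum scores
    predictors : (n_children, n_rois) matrix of beta weights
    Returns coefficients with the intercept first.
    """
    X = np.column_stack([np.ones(len(scores)), predictors])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return coef
```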

Discussion

This study examined the impact of implantation side on the early development of auditory cortical functions in CI children with prelingual deafness (Fig. 8). Based on the interhemispheric asymmetry of auditory functions in NH populations, we hypothesized a right implantation advantage for speech processing and a left implantation advantage for nonspeech processing. Inconsistent with the right implantation advantage hypothesis, cortical hemodynamic responses of CI children to speech sounds showed neither an influence of implantation side nor developmental changes over the observation period. Despite the lack of developmental changes in speech responses per se, speech–noise discrimination improved with CI experience in the anterior temporal lobes (ATLs), probably mediated by development in the nonspeech aspects of auditory processing. In support of a left implantation advantage, the left CI children demonstrated better differentiation of nonspeech signal (music) and noise than the right CI children, first by reduced noise responses in the bilateral Spts shortly after implant activation and then by greater developmental improvement in noise processing in the bilateral ATLs with weeks to months of CI use. The right ATL responses to signal were correlated with age of implantation (AoI), providing a possible neural mechanism for the advantage of early implantation (Kirk et al. 2002; Mitchell et al. 2020), but only in the left CI group. In addition, the results revealed different developmental patterns across auditory temporal areas regardless of implantation side: with increasing CI experience, the anterior regions (ATLs) showed enhanced signal–noise differentiation, while the posterior regions (Spts) showed greater interhemispheric asymmetry. Consistent with this developmental distinction, postoperative outcomes on auditory and speech performance measured by questionnaires with parents/guardians (Fig. 7) were associated primarily with the ATLs: predicted shortly after CI activation by signal-evoked responses in the right ATL, and during subsequent CI experience by the reduction of noise responses in the left ATL.

Fig. 8

Summary of auditory cortical development in CI children. The emergence of signal–noise discrimination (brown arrows) and interhemispheric asymmetry (blue arrows), as well as the left-over-right implantation difference (red dots), are illustrated in the corresponding brain regions along a developmental axis after cochlear implant (CI) activation.

Our findings provide, to the best of our knowledge, the first neurodevelopmental evidence bearing on the choice of implantation side, a decision that has to be made for the majority of CI users in current clinical practice in developing countries. The early developmental advantages of left implantation for nonspeech processing revealed by repeated fNIRS assessments stand in stark contrast to the right-over-left priority of unilateral implantation practiced by some CI clinicians. Other factors that may impact postoperative outcomes, including AoI, residual hearing, and dominant hand (Kirk et al. 2002; Henkin et al. 2008; Chiossi and Hyppolito 2017; Kraaijenga et al. 2018), were matched between the CI groups and controlled for as covariates in the statistical analyses. Interestingly, though AoI did not differ between the groups (Table 2), a cortical developmental advantage of early implantation was evident only for left implantation (Fig. 5d). Thus, AoI and implantation side appear to act in synergy in their influence on postoperative cortical development, pointing to a maximized effect of early implantation of the left ear.

Though implantation side has been considered among the factors that may influence postoperative performance (Kraaijenga et al. 2018), previous studies of this factor have been judged to be of poor quality, with heterogeneous populations and outcome measures (Kraaijenga et al. 2018; Zhao et al. 2020). Performance measurement is particularly problematic for young children with prelingual hearing loss, consisting primarily of indirect and coarse evaluations such as questionnaires with parents/guardians (Henkin et al. 2008). Indeed, in the current study, the questionnaire measures failed to distinguish the left and right implantation groups (Fig. 7a; Table 5), and some of the questionnaires did not even register performance improvement over the observation period. In contrast, the fNIRS recordings used in the current study provide an objective measure of auditory cortical functions in CI users, one that proved sensitive to the influences of implantation side and AoI. Further, the current study employed a longitudinal design, with two valid fNIRS sessions completed within a period after CI activation to provide a within-subject measure of developmental changes in cortical functions. Given the remarkable individual variability among CI children (Szagun 2001), the longitudinal design should offer a considerable increase in reliability over developmental studies with cross-sectional designs. Additionally, from the perspective of the sensitive time window for language development (Lazard et al. 2011; Rouger et al. 2012; Lazard et al. 2013), early differences in cortical development would have far-reaching impacts for young CI users who depend on restored hearing to acquire speech. Given these considerations, we conclude that left implantation has neurodevelopmental advantages over right implantation for young children, at least in the early rehabilitation stage, advantages that may not be discernable using conventional, questionnaire-based evaluations. Given the potential clinical relevance, the developmental influence of implantation side should be further investigated in larger CI populations and over longer observation periods.

The current results also bear significant implications for theories of brain development and auditory processing. First, in NH populations, there is conspicuous interhemispheric asymmetry in auditory and speech processing that can be detected as early as the beginning of life (McGettigan and Scott 2012; Poeppel 2014; Adibpour et al. 2018; Bisiacchi and Cainelli 2021). Specifically, according to the dual-stream theory of auditory and speech processing (Hickok and Poeppel 2007; Rauschecker and Scott 2009; Friederici 2011; Poeppel 2014), the dorsal stream, which starts from the Spts and is involved in sensory integration and sensorimotor transformations, is dominated by the left hemisphere, while the ventral stream, which involves the ATLs and is suggested to underlie sound-to-meaning transformations, is more bilateral. However, it remains unclear how such interhemispheric specializations emerge in the developing brain and how they interact with auditory experience (Trainor et al. 2003). The current results with CI children demonstrate that interhemispheric asymmetry first appears in the Spts, but not in the ATLs, within weeks to months of restored hearing experience (Fig. 8), echoing the interhemispheric differentiation patterns of the mature NH brain. The left CI advantages also appeared first in the Spts and then extended to the ATLs with increasing auditory experience, pointing to the same maturation order for the two streams of auditory processing.

Furthermore, the findings that CI children showed right hemisphere advantages for left ear implantation, but not left hemisphere advantages for right ear implantation, and that the advantages were evident only for nonspeech sounds, lend coherent and novel support to an early hypothesis of hemispheric specialization (Geschwind and Galaburda 1985) and its recent developments (Bisiacchi and Cainelli 2021; Bourke and Todd 2021; Hartwigsen et al. 2021). This right-hemisphere conservatism theory states that the right hemisphere, which dominates tasks essential for survival, develops earlier to reduce the chance of impairment by external factors, while the left hemisphere develops later to accommodate the learning of more complex functions from interactions with the environment. Insofar as the auditory deprivation caused by hearing loss can be viewed as an adverse environment for the auditory cortex, the current data indicate that the functionality of the right temporal lobe is indeed less affected, resulting in an earlier and greater contribution to nonspeech processing than that of its counterpart in the left hemisphere. Indeed, the advantage of early implantation was observed only for the right ATL (Fig. 5d), suggesting that the high level of neural plasticity underlying the sensitive period for auditory functional development in young children may be better preserved in the right hemisphere (Sharma et al. 2002). The lack of development and of influence of implantation side on speech processing during the first months of CI experience also supports the view that the left hemisphere undertakes more complex and later-developing functions.

The current study employed fNIRS for repeated brain imaging of CI children because optical recording is compatible with the implants, tolerant of the head motion of young children, and well suited to repeated testing (Bulgarelli et al. 2020; Yeung 2021). Compared with fMRI, the most widely used brain imaging method, fNIRS has lower spatial resolution (5–10 mm vs. 1 mm; Cui et al. 2011) and is limited to superficial regions of the cortex. As applying fMRI to young CI children is likely to remain difficult for some time, a better understanding of auditory and speech development with CI-enabled hearing experience could be gained by applying fNIRS to larger typical and atypical developmental cohorts, possibly in combination with other CI-compatible brain-recording methods.

In summary, using repeated fNIRS neuroimaging of young CI children with prelingual deafness, the current study demonstrated early and progressive developmental advantages in nonspeech sound discrimination for left over right implantation, while speech processing and clinical CI outcome measures were not affected by implantation side. Considered in light of major theories of auditory processing and brain development, these findings suggest that the left implantation advantages may result from the interhemispheric specialization of auditory processing combined with earlier development and better-preserved neural plasticity in the right than in the left temporal lobe. Though how such early differences influence later development remains to be examined in future studies, there is reason to believe that the acquisition of speech and verbal communication skills should benefit from earlier restoration of basic, nonspeech aspects of auditory cortical processing.

Acknowledgments

We are sincerely and deeply grateful to the child participants and their parents/guardians for taking part in this study.

Funding

This work was supported by the National Natural Science Foundation of China (grant number 82071070) and the Beijing Municipal Science and Technology Commission (grant numbers Z191100006619027 and Z181100001518003). The funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript.

Conflict of interest statement. The authors declare that they have no known competing financial interest or personal relationships that could have appeared to influence the work reported in this paper.

Authors’ contributions

YW: Conceptualization, Validation, Formal analysis, Investigation, Data collection, Writing—original draft, Visualization. MW: Software, Data analysis, Data collection, Visualization. KW, HL, SW, ZZ, and ML: Data collection. CW: Conceptualization, Data collection. YZ and YL: Conceptualization, Methodology, Resources, Writing—review and editing, Supervision.

References

Aasted CM, Yucel MA, Cooper RJ, Dubb J, Tsuzuki D, Becerra L, Petkov MP, Borsook D, Dan I, Boas DA. Anatomical guidance for functional near-infrared spectroscopy: AtlasViewer tutorial. Neurophotonics. 2015:2(2):020801.

Adibpour P, Dubois J, Moutard ML, Dehaene-Lambertz G. Early asymmetric inter-hemispheric transfer in the auditory network: insights from infants with corpus callosum agenesis. Brain Struct Funct. 2018:223(6):2893–2905.

Allen MC, Nikolopoulos TP, O'Donoghue GM. Speech intelligibility in children after cochlear implantation. Otol Neurotol. 1998:19(6):742–746.

Al-shawi Y, Mesallam TA, Alfallaj R, Aldrees T, Albakheet N, Alshawi M, Alotaibi T, Algahtani A. Inter-rater reliability and validity of the Arabic version of categories of auditory performance-II (CAP-II) among children with cochlear implant. Otol Neurotol. 2020:41(5):e597–e602.

Bisiacchi P, Cainelli E. Structural and functional brain asymmetries in the early phases of life: a scoping review. Brain Struct Funct. 2021. https://doi.org/10.1007/s00429-021-02256-1.

Blamey P, Arndt P, Bergeron F, Bredberg G, Brimacombe J, Facer G, Larky J, Lindström B, Nedzelski J, Peterson A, et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants. Audiol Neurotol. 1996:1(5):293–306.

Blamey P, Artieres F, Baskent D, Bergeron F, Beynon A, Burke E, Dillier N, Dowell R, Fraysse B, Gallego S, et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants: an update with 2251 patients. Audiol Neurotol. 2013:18(1):36–47.

Boas D, Culver J, Stott J, Dunn A. Three dimensional Monte Carlo code for photon migration through complex heterogeneous media including the adult human head. Opt Express. 2002:10(3):159–170.

Boersma P. Praat, a system for doing phonetics by computer. 2002.

Boersma P, Weenink D. Praat: doing phonetics by computer. 2017. https://www.fon.hum.uva.nl/praat/.

Bourke JD, Todd J. Acoustics versus linguistics? Context is part and parcel to lateralized processing of the parts and parcels of speech. Laterality. 2021:26(6):725–765.

Bulgarelli C, Klerk CCJM, Richards JE, Southgate V, Hamilton A, Blasi A. The developmental trajectory of fronto-temporoparietal connectivity as a proxy of the default mode network: a longitudinal fNIRS investigation. Hum Brain Mapp. 2020:41(10):2717–2740.

Chiossi JSC, Hyppolito MA. Effects of residual hearing on cochlear implant outcomes in children: a systematic-review. Int J Pediatr Otorhinolaryngol. 2017:100:119–127.

Cui X, Bray S, Bryant DM, Glover GH, Reiss AL. A quantitative comparison of NIRS and fMRI across multiple cognitive tasks. NeuroImage. 2011:54(4):2808–2821.

Cullen RD, Fayad JN, Luxford WM, Buchman CA. Revision cochlear implant surgery in children. Otol Neurotol. 2008:29(2):214–220.

Deguine O, Garcia de Quevedo S, Fraysse B, Cormary X, Uziel A, Demonet JF. Criteria for selecting the side for cochlear implantation. Ann Otol Rhinol Laryngol Suppl. 1995:166:403–406.

Dronkers NF, Wilkins DP, van Valin RD Jr, Redfern BB, Jaeger JJ. Lesion analysis of the brain areas involved in language comprehension. Cognition. 2004:92(1–2):145–177.

Editorial Board of Chinese Journal of Otorhinolaryngology Head and Neck Surgery, Chinese Medical Association Otorhinolaryngology Head and Neck Surgery Branch, Professional Committee of Hearing and Speech Rehabilitation of China Disabled Rehabilitation Association. Guidelines for cochlear implant work (2013). Chinese Journal of Otorhinolaryngology Head and Neck Surgery. 2014:49(02):89–95. https://doi.org/10.3760/cma.j.issn.1673-0860.2014.02.001.

Flipsen P Jr. Ear selection and pediatric cochlear implants: a preliminary examination of speech production outcomes. Int J Pediatr Otorhinolaryngol. 2008:72(11):1663–1670.

Franceschini MA, Thaker S, Themelis G, Krishnamoorthy KK, Bortfeld H, Diamond SG, Boas DA, Arvin K, Grant PE. Assessment of infant brain development with frequency-domain near-infrared spectroscopy. Pediatr Res. 2007:61(5 Pt 1):546–551.

Friederici AD. The brain basis of language processing: from structure to function. Physiol Rev. 2011:91(4):1357–1392.

Fryauf-Bertschy H, Tyler RS, Kelsay DM, Gantz BJ, Woodworth GG. Cochlear implant use by prelingually deafened children: the influences of age at implant and length of device use. J Speech Lang Hear Res. 1997:40(1):183–199.

Gaurav V, Sharma S, Singh S. Effects of age at cochlear implantation on auditory outcomes in cochlear implant recipient children. Indian J Otolaryngol Head Neck Surg. 2020:72(1):79–85.

Geschwind N, Galaburda AM. Cerebral lateralization. Biological mechanisms, associations, and pathology: I. A hypothesis and a program for research. Arch Neurol. 1985:42(6):428–459.

Gordon KA, Wong DD, Papsin BC. Bilateral input protects the cortex from unilaterally-driven reorganization in children who are deaf. Brain. 2013:136(Pt 5):1609–1625.

Hartwigsen G, Bengio Y, Bzdok D. How does hemispheric specialization contribute to human-defining cognition? Neuron. 2021:109(13):2075–2090.

Henkin Y, Taitelbaum-Swead R, Hildesheimer M, Migirov L, Kronenberg J, Kishon-Rabin L. Is there a right cochlear implant advantage? Otol Neurotol. 2008:29(4):489–494.

Hickok G, Poeppel D. The cortical organization of speech processing. Nat Rev Neurosci. 2007:8(5):393–402.

Holmes E, Johnsrude IS. Speech-evoked brain activity is more robust to competing speech when it is spoken by someone familiar. NeuroImage. 2021:237:118107.

Hoshi Y. Functional near-infrared spectroscopy: current status and future prospects. J Biomed Opt. 2007:12(6):062106.

Huppert TJ, Diamond SG, Franceschini MA, Boas DA. HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain. Appl Opt. 2009:48(10):D280–D298.

Jobsis FF. Noninvasive, infrared monitoring of cerebral and myocardial oxygen sufficiency and circulatory parameters. Science. 1977:198(4323):1264–1267.

Kamal N, Tawfik S, Mahrous M. Impact of auditory cortical asymmetry in cochlear implantation. Cochlear Implants Int. 2014:15(Suppl 1):S75–S77.

Kirk KI, Miyamoto RT, Lento CL, Ying E, O'Neill T, Fears B. Effects of age at implantation in young children. Ann Otol Rhinol Laryngol Suppl. 2002:189:69–73.

Kraaijenga VJC, Derksen TC, Stegeman I, Smit AL. The effect of side of implantation on unilateral cochlear implant performance in patients with prelingual and postlingual sensorineural hearing loss: a systematic review. Clin Otolaryngol. 2018:43(2):440–449.

Kral A, Hubka P, Heid S, Tillein J. Single-sided deafness leads to unilateral aural preference within an early sensitive period. Brain. 2013:137(Pt 1):180–193.

Lamping W, Goehring T, Marozeau J, Carlyon RP. The effect of a coding strategy that removes temporally masked pulses on speech perception by cochlear implant users. Hear Res. 2020:391:107969.

Lawrence RJ, Wiggins IM, Hodgson JC, Hartley DEH. Evaluating cortical responses to speech in children: a functional near-infrared spectroscopy (fNIRS) study. Hear Res. 2021:401:108155.

Lazard DS, Giraud AL, Truy E, Lee HJ. Evolution of non-speech sound memory in postlingual deafness: implications for cochlear implant rehabilitation. Neuropsychologia. 2011:49(9):2475–2482.

Lazard DS, Vincent C, Venail F, Van de Heyning P, Truy E, Sterkers O, Skarzynski PH, Skarzynski H, Schauwers K, O'Leary S, et al. Pre-, per- and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: a new conceptual model over time. PLoS One. 2012:7(11):e48739.

Lazard DS, Lee HJ, Truy E, Giraud AL. Bilateral reorganization of posterior temporal cortices in post-lingual deafness and its relation to cochlear implant outcome. Hum Brain Mapp. 2013:34(5):1208–1219.

Liu T, Liu X, Li Y, Zhu C, Markey PS, Pelowski M. Assessing autism at its social and developmental roots: a review of autism spectrum disorder studies using functional near-infrared spectroscopy. NeuroImage. 2019:185:955–967.

Long Y, Chen C, Wu K, Zhou S, Zhou F, Zheng L, Zhao H, Zhai Y, Lu C. Interpersonal conflict increases interpersonal neural synchronization in romantic couples. Cereb Cortex. 2021:31(3):1647–1659.

Luo X, Wu CC, Pulling K. Combining current focusing and steering in a cochlear implant processing strategy. Int J Audiol. 2021:60(3):232–237.

Mahmoudzadeh M, Dehaene-Lambertz G, Fournier M, Kongolo G, Goudjil S, Dubois J, Grebe R, Wallois F. Syllabic discrimination in premature human infants prior to complete formation of cortical layers. Proc Natl Acad Sci. 2013:110(12):4846–4851.

McGettigan C, Scott SK. Cortical asymmetries in speech perception: what's wrong, what's right and what's left? Trends Cogn Sci. 2012:16(5):269–276.

Minagawa-Kawai Y, Cristià A, Vendelin I, Cabrol D, Dupoux E. Assessing signal-driven mechanisms in neonates: brain responses to temporally and spectrally different sounds. Front Psychol. 2011:2:135.

Mitchell RM, Christianson E, Ramirez R, Onchiri FM, Horn DL, Pontis L, Miller C, Norton S, Sie KCY. Auditory comprehension outcomes in children who receive a cochlear implant before 12 months of age. Laryngoscope. 2020:130(3):776–781.

Morris LG, Mallur PS, Roland JT Jr, Waltzman SB, Lalwani AK. Implication of central asymmetry in speech processing on selecting the ear for cochlear implantation. Otol Neurotol. 2007:28(1):25–30.

Nie K, Stickney G, Zeng FG. Encoding frequency modulation to improve cochlear implant performance in noise. IEEE Trans Biomed Eng. 2005:52(1):64–73.

Olds C, Pollonini L, Abaya H, Larky J, Loy M, Bortfeld H, Beauchamp MS, Oghalai JS. Cortical activation patterns correlate with speech understanding after cochlear implantation. Ear Hear. 2015:37(3):e160.

Oostenveld R, Praamstra P. The five percent electrode system for high-resolution EEG and ERP measurements. Clin Neurophysiol. 2001:112(4):713–719.

Piper SK, Krueger A, Koch SP, Mehnert J, Habermehl C, Steinbrink J, Obrig H, Schmitz CH. A wearable multi-channel fNIRS system for brain imaging in freely moving subjects. NeuroImage. 2014:85(Pt 1):64–71.

Poeppel D. The neuroanatomic and neurophysiological infrastructure for speech and language. Curr Opin Neurobiol. 2014:28:142–149.

Pollonini L, Olds C, Abaya H, Bortfeld H, Beauchamp MS, Oghalai JS. Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy. Hear Res. 2014:309:84–93.

Punch R, Hyde M. Social participation of children and adolescents with cochlear implants: a qualitative analysis of parent, teacher, and child interviews. J Deaf Stud Deaf Educ. 2011:16(4):474–493.

Rauschecker JP, Scott SK. Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci. 2009:12(6):718–724.

Rouger J, Lagleyre S, Démonet J-F, Fraysse B, Deguine O, Barone P. Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients. Hum Brain Mapp. 2012:33(8):1929–1940.

Saliba J, Bortfeld H, Levitin DJ, Oghalai JS. Functional near-infrared spectroscopy for neuroimaging in cochlear implant recipients. Hear Res. 2016:338:64–75.

Schnupp J, Nelken I, King A. Auditory neuroscience: making sense of sound. Cambridge (MA): The MIT Press; 2011.

Sharma A, Dorman MF, Spahr AJ. A sensitive period for the development of the central auditory system in children with cochlear implants: implications for age of implantation. Ear Hear. 2002:23(6):532–539.

Snels C, IntHout J, Mylanus E, Huinck W, Dhooge I. Hearing preservation in cochlear implant surgery: a meta-analysis. Otol Neurotol. 2019:40(2):145–153.

Srinivasan AG, Padilla M, Shannon RV, Landsberger DM. Improving speech perception in noise with current focusing in cochlear implant users. Hear Res. 2013:299:29–36.

Surmelioglu O, Cetik F, Tarkan O, Ozdemir S, Tuncer U, Kiroglu M, Sahin R. Choice of cochlear implant side in a paediatric population. J Laryngol Otol. 2014:128(6):504–507.

Szagun G. Language acquisition in young German-speaking children with cochlear implants: individual differences and implications for conceptions of a 'sensitive phase'. Audiol Neurootol. 2001:6(5):288–297.

Taylor I. Psycholinguistics: learning and using language. Upper Saddle River (NJ): Prentice Hall; 1990.

Trainor LJ, Shahin A, Roberts LE. Effects of musical training on the auditory cortex in children. Ann N Y Acad Sci. 2003:999(1):506–513.

Turken AU, Dronkers NF. The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses. Front Syst Neurosci. 2011:5:1.

Vasung L, Abaci Turk E, Ferradal SL, Sutin J, Stout JN, Ahtam B, Lin PY, Grant PE. Exploring early human brain development with structural and physiological neuroimaging. NeuroImage. 2019:187:226–254.

Wilf M, Ramot M, Furman-Haran E, Arzi A, Levkovitz Y, Malach R. Diminished auditory responses during NREM sleep correlate with the hierarchy of language processing. PLoS One. 2016:11(6):e0157143.

Xu J, Liu X, Zhang J, Li Z, Wang X, Fang F, Niu H. FC-NIRS: a functional connectivity analysis tool for near-infrared spectroscopy data. Biomed Res Int. 2015:2015:248724.

Xu SY, Lu FM, Wang MY, Hu ZS, Zhang J, Chen ZY, Armada-da-Silva PAS, Yuan Z. Altered functional connectivity in the motor and prefrontal cortex for children with Down's syndrome: an fNIRS study. Front Hum Neurosci. 2020:14:6.

Ye JC, Tak S, Jang KE, Jung J, Jang J. NIRS-SPM: statistical parametric mapping for near-infrared spectroscopy. NeuroImage. 2009:44(2):428–447.

Yeung MK. An optical window into brain function in children and adolescents: a systematic review of functional near-infrared spectroscopy studies. NeuroImage. 2021:227:117672.

Zeng FG, Rebscher S, Harrison W, Sun X, Feng H. Cochlear implants: system design, integration, and evaluation. IEEE Rev Biomed Eng. 2008:1:115–142.

Zhang Y, Brooks D, Franceschini M, Boas D. Eigenvector-based spatial filtering for reduction of physiological interference in diffuse optical imaging. J Biomed Opt. 2005:10(1):011014.

Zhang F, Underwood G, McGuire K, Liang C, Moore DR, Fu QJ. Frequency change detection and speech perception in cochlear implant users. Hear Res. 2019:379:12–20.

Zhao EE, Dornhoffer JR, Loftus C, Nguyen SA, Meyer TA, Dubno JR, McRackan TR. Association of patient-related factors with adult cochlear implant speech recognition outcomes: a meta-analysis. JAMA Otolaryngol Head Neck Surg. 2020:146(7):613–620.

Zhao H, Cheng T, Zhai Y, Long Y, Wang Z, Lu C. How mother-child interactions are associated with a child's compliance. Cereb Cortex. 2021:31(9):4398–4410.

Zimmerman-Phillips S, Osberger MJ, Robbins AM. Infant-toddler: meaningful auditory integration scale (IT-MAIS). Sylmar (CA): Advanced Bionics Corporation; 1997.
Author notes

Yu-Xuan Zhang and Yuhe Liu contributed equally to this work.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model)