Abstract

Nothing provides as strong a sense of self as seeing one's face. Nevertheless, it remains unknown how the brain processes the sense of self during the multisensory experience of looking at one's face in a mirror. Synchronized visuo-tactile stimulation on one's own and another's face, an experience that is akin to looking in the mirror but seeing another's face, causes the illusory experience of ownership over the other person's face and changes in self-recognition. Here, we investigate the neural correlates of this enfacement illusion using fMRI. We examine activity in the human brain as participants experience tactile stimulation delivered to their face, while observing either temporally synchronous or asynchronous tactile stimulation delivered to another's face on either a specularly congruent or incongruent location. Activity in the multisensory right temporo-parietal junction, intraparietal sulcus, and the unimodal inferior occipital gyrus showed an interaction between the synchronicity and the congruency of the stimulation and varied with the self-reported strength of the illusory experience, which was recorded after each stimulation block. Our results highlight the important interplay between unimodal and multimodal information processing for self-face recognition, and elucidate the neurobiological basis for the plasticity required for identifying with our continuously changing visual appearance.

Introduction

The ability to represent the visual properties of one's face as distinct from others is a fundamental aspect of human self-awareness. Recognizing one's face in a mirror is a key behavioral marker of self-awareness (Gallup 1970), an ability expressed by only a small number of species, including humans (Gallup 1970; Anderson and Gallup Jr. 2011). Neuroimaging studies of self-face recognition suggest that representations of one's own facial appearance may be stored in a specialized network of areas, which is engaged when viewing images of one's own face (Kircher et al. 2001; Uddin et al. 2005, 2006, 2008; Platek et al. 2006, 2008; Devue et al. 2007; Kaplan et al. 2008; Sugiura et al. 2008; Platek and Kemp 2009; Devue and Bredart 2011; Ramasubbu et al. 2011; Apps et al. 2012; Ma and Han 2012). Of these areas, the inferior occipital gyrus (IOG), the inferior temporal gyrus (ITG), and the temporo-parietal junction (TPJ) respond only to images of one's current, but not one's past face, suggesting that they continuously update a visual representation of one's facial appearance (Apps et al. 2012). These studies have used static face stimuli to investigate self-recognition. However, the way in which infants and nonhuman primates come to recognize their specular image suggests that processes other than mere visual perception are engaged in self-identification. The normal experience of looking at one's face in the mirror is accompanied by a continuous integration of tactile and proprioceptive events perceived on one's face with visual events perceived on the mirror-reflection. Such processes putatively underlie responses on the classic "rouge" task of mirror self-recognition. In this task, a red spot is placed on the face of infants or primates while they are looking into a mirror; detection of the spot at that location on one's own body is evidenced by a goal-directed movement towards the red spot (Gallup 1970; Bertenthal and Fischer 1978; Suarez and Gallup 1981; Suddendorf et al. 2007; Anderson and Gallup Jr. 2011). Such updating of the representation of one's visual appearance during multisensory stimulation may also underlie the assimilation of changes, and provide a sense of continuity of one's self over time (Tsakiris 2010; Apps et al. 2012). Therefore, the static stimuli used in the vast majority of self-face recognition studies do not capture the dynamic, multisensory conditions present when looking in a mirror, and thus lack sensitivity for identifying activity related to mirror self-recognition during multisensory stimulation.

While research investigating self-face recognition has used static stimuli, studies of body-ownership and bodily illusions have used multisensory stimulation to investigate where and how visuo-tactile stimulation is integrated in the brain. Neuroimaging studies highlight that the premotor cortex (PMC), intraparietal sulcus (IPS), cerebellum, and TPJ are activated when visuo-tactile stimulation causes an illusory sense of ownership over the whole, or parts, of a body (Ehrsson et al. 2004; Ionta et al. 2011; Petkova et al. 2011). More recently, it has been suggested that representations of one's self change through the integration of visual and tactile information when looking in a mirror (Tajadura-Jiménez et al. 2012b). The matching between felt and observed sensorimotor signals purportedly leads to the formation, and the updating, of a mental representation of one's visual appearance. Seeing an unfamiliar face being touched at the same time as one's own face, a situation akin to looking in a mirror but seeing another person's face, changes one's ability to self-recognize and creates the illusory experience of looking at one's self in the mirror (Tsakiris 2008; Sforza et al. 2010; Tajadura-Jimenez et al. 2012a). This "enfacement" illusion arises due to the congruency between felt and seen sensory events and does not arise from asynchronous visuo-tactile stimulation. Plasticity in self-recognition may therefore occur through the integration of visuo-tactile information in multimodal areas, which leads to a sense of ownership over one's face. However, little is understood about the neuronal processes that underpin this type of visuo-tactile integration and create a sense that I am looking at "me." Understanding the neurobiological mechanisms that underpin the multisensory experience of looking at one's face in a mirror may therefore be crucial for understanding the plasticity of self-recognition. However, no previous study has investigated activity in the brain during the multisensory driven process of experiencing a face as "me."

Does the multisensory experience of mirror-self recognition engage areas previously known for their role in integrating multisensory information and creating a sense of ownership over the body, or alternatively, does it recruit a distinct network which is activated when recognizing static images of the self-face? Here, we used block-design fMRI to examine brain activity during the enfacement illusion, as a corollary of mirror self-recognition. Participants observed movies of an unfamiliar face receiving tactile stimulation to the face while receiving tactile stimulation themselves from an air puff system. The visuo-tactile stimulation could be either synchronous or asynchronous, and delivered to either a specularly congruent or incongruent location on the two faces. After each block, participants rated the extent to which they experienced the illusion on a 7-point Likert scale. This design enabled us to examine where in the brain activity varies with the extent to which participants experience the face that they see being touched in synchrony with their own face as "self." We predict that activity in brain areas that have previously been implicated in self-face recognition and multisensory bodily illusions will fluctuate with the experience of enfacement.

Methods

Participants

Fifteen female right-handed paid volunteers (mean age = 25.8 years, SD = 3.88) gave their informed consent to participate. Only participants who experienced the enfacement illusion in a preliminary behavioral session were invited to take part in this experiment, as explained below. The study was approved by the Royal Holloway Psychology Departmental Ethics Committee and conformed to regulations set out in the Birkbeck-UCL Neuroimaging Centre (BUCNI) MRI Rules of Operation (http://bucni.psychol.ucl.ac.uk/index.shtml).

Apparatus and Materials

Two different female "models" (∼20 years old), who were unfamiliar to the participants, were recorded being touched with a tap of a cotton bud on the right cheek or on the right-hand side of the chin, at a random frequency ranging from 0.33 to 0.76 Hz, while they maintained a neutral facial expression. This allowed four 40 s "induction" movies to be produced, which differed in the unfamiliar face displayed and the part of the face being touched. In a pilot experiment, the 2 models were rated on scales of trustworthiness and physical attractiveness, along with 8 other faces, by 11 participants who did not take part in the subsequent parts of the study. Previous research has found a bidirectional link between the physical attractiveness that participants attribute to another person's face and the strength of the enfacement illusion felt for that face (Paladino et al. 2010; Sforza et al. 2010). In addition, trustworthy faces are more likely to be identified as looking like the self (Verosky and Todorov 2010). We therefore ensured that the faces were evaluated equally in terms of trustworthiness and physical attractiveness, to avoid potential influences of the seen face on the pattern of the enfacement illusion. The 2 faces were not significantly different on either measure (trustworthiness: t(10) = 0.65, P > 0.53; physical attractiveness: t(10) = 1.26, P > 0.23). In addition, the models viewed in the synchronous and asynchronous conditions were counterbalanced across participants. Tactile stimulation was delivered to the left cheek of participants through puffs of air while they were inside the MRI scanner. To deliver the stimulation, we used the arrangement of Huang and Sereno (2007). The system consisted of an air compressor in the scanner control room, which provided input to a solenoid valve (Numatics) that was controlled by TTL pulses from a data acquisition and control card (National Instruments USB-6800). Plastic air tubes from the valve were connected to a block fixed in the magnet behind the head coil. From this base, a tube with a flexible nozzle (Loc-Line) that could be freely positioned was used to direct air puffs to the participant's cheek. The input air pressure (30–40 psi) was adjusted so that a 100-ms air puff was delivered, which felt akin to the level of tactile sensation experienced from a touch of the cotton bud, and had a similar duration. The system delivers pure, uncontaminated air at a temperature comfortable for the participant. To ensure that participants experienced the illusion even though the methods of stimulation differed between the participant and the model in the video, we performed a pilot study on participants who had previously reported experiencing the illusion. The pilot study confirmed that we could evoke the enfacement illusion using this paradigm. Thus, as in a previous study (Mazzurega et al. 2011), we show that the enfacement illusion can be induced even when the stimulation applied to the 2 faces is not identical. The tactile stimulation the participants received was therefore akin to the touch being delivered on the faces in the movies described above.
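As an illustration of how such a stimulation schedule can be generated, the following sketch (Python; our own illustration, not the study's Presentation/NI control code) draws random inter-puff intervals corresponding to the 0.33–0.76 Hz tap frequency for a 40 s block; triggering the solenoid valve via TTL pulses would be handled by the data acquisition hardware and is not shown.

import numpy as np

def make_puff_schedule(block_dur=40.0, min_rate=0.33, max_rate=0.76,
                       puff_dur=0.100, seed=None):
    """Generate air-puff onset times (s) for one 40-s stimulation block.

    Successive inter-puff intervals are drawn uniformly from the range
    implied by the 0.33-0.76 Hz tap frequency used in the study.
    """
    rng = np.random.default_rng(seed)
    onsets, t = [], 0.0
    while True:
        # draw the next inter-puff interval (between 1/0.76 s and 1/0.33 s)
        t += rng.uniform(1.0 / max_rate, 1.0 / min_rate)
        if t + puff_dur > block_dur:
            break
        onsets.append(t)
    return np.array(onsets)

onsets = make_puff_schedule(seed=1)
print(len(onsets), "puffs; first onsets:", onsets[:3].round(2))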

Visual stimuli were projected onto a screen which participants viewed via a mirror positioned above their face. Participants made responses (see below) using their right hand on a 4-button MRI-compatible response box. Brain images were acquired with a Siemens 1.5 Tesla Avanto MRI scanner at the Birkbeck-University College London Centre for Neuroimaging (BUCNI). Presentation software (Neurobehavioral Systems, Inc., USA) was used to deliver stimuli and record responses. NI Measurement and Automation Explorer (Version 5.0.0f1) provided access to the data acquisition and control card (National Instruments USB-6800). Behavioral and fMRI data were analyzed in MATLAB R2006a, SPSS 19, SPM8, and MRIcron.

Procedure

Pre-MRI Screening Session

Thirty-six female right-handed paid volunteers were tested on the enfacement illusion paradigm in a separate behavioral session, which took place a few weeks before the MRI session. Participants were exposed to 2 repetitions of 3 different visuo-tactile stimulation conditions, each lasting for 40 s and with their order randomized: synchronous congruent, synchronous incongruent, and asynchronous congruent, which are described in the subsequent section. The subjective experience of participants during each visuo-tactile stimulation condition was assessed with a statement ("I felt I was looking at my face"), for which participants rated their level of agreement using a 7-point Likert scale, ranging from "strongly agree" (+3) to "strongly disagree" (−3). This statement was adopted from previous studies on the enfacement illusion (Tajadura-Jimenez et al. 2012a). A keypad was used for this purpose. The 15 participants who experienced the illusion most strongly in the synchronous congruent condition relative to the other 2 conditions (i.e., who agreed most strongly with the statement; mean synchronous congruent = 1.73, SD = 1.13; mean synchronous incongruent = −0.68, SD = 1.57; mean asynchronous congruent = −1.89, SD = 1.16) were invited back to participate in the MRI session.

At the end of this pre-MRI screening session, participants were presented with 2 additional synchronous congruent conditions and were required to indicate the onset of the enfacement illusion by pressing a key on the keypad. The interpersonal multisensory stimulation (IMS) lasted for 40 s, independently of the participant's key press. The average onset of the illusion across participants was 13.31 ± 8.56 s (M ± SD). This task revealed variability in the onset of the illusion both within and across participants. Responses across participants were highly variable (range = 5.56–32.65 s), whereas responses within participants were comparatively consistent, although some variability remained, with the average difference between the key presses on trial 1 and trial 2 being 5.32 s (SD = 3.40, range = 0.20–10.28 s).

Scanning Session

Fifteen participants lay supine in an MRI scanner with the fingers of the right hand positioned on the response box. The nozzle connected to the solenoid valve was placed over the left cheek of the participant's face, through which the air puffs were delivered at a frequency ranging from 0.33 to 0.76 Hz during 40 s periods while they watched the movies of tactile stimulation being delivered to the other's face.

Experimental Design

We used a 2 × 2 factorial block design. The first factor was the specular congruency of the visuo-tactile stimulation. While receiving tactile stimulation on the left cheek, participants viewed tactile stimulation being delivered to the other's face at either a specularly congruent location (i.e., the right cheek of the other person), as if the participant were looking in a mirror, or at an incongruent location (i.e., the chin). This incongruent condition controlled for the possibility that the mere synchrony of a seen touch on another's face and a felt touch on one's own face, even when the seen and felt touches are on different locations of the face, could drive activity in multisensory areas. In the congruent conditions, the touch stimulated the same portion of the cheek of the face in the movie and of the participant's face. The second factor was the temporal synchronicity of the visuo-tactile stimulation. This could either be synchronous, where touch on the participant's face and on the face in the movie was temporally synchronized, or asynchronous, with a lag of 1 s separating the stimulation on the participant's face from that on the other's face in the movie (see Fig. 1a). This created a design with 4 conditions: synchronous congruent (Sync-Cong), synchronous incongruent (Sync-Incong), asynchronous congruent (Async-Cong), and asynchronous incongruent (Async-Incong). The incongruent and the asynchronous conditions served as control conditions in which no enfacement illusion was expected (Tsakiris 2008; Sforza et al. 2010; Tajadura-Jimenez et al. 2012a).
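To make the 2 × 2 structure concrete, the sketch below (Python; variable and function names are our own, not from the study) derives the seen-touch schedule for each of the 4 conditions from a common felt-touch schedule, applying the 1 s lag in the asynchronous conditions and switching the touched location in the incongruent conditions.

import numpy as np

def condition_schedule(felt_onsets, synchronous=True, congruent=True, lag=1.0):
    """Return (seen_onsets, seen_location) for one 40-s block.

    felt_onsets : onsets (s) of air puffs on the participant's left cheek.
    synchronous : if False, the seen touch lags the felt touch by `lag` seconds.
    congruent   : if True, the model is touched on the specularly congruent
                  location (right cheek); otherwise on the chin.
    """
    seen_onsets = felt_onsets if synchronous else felt_onsets + lag
    seen_location = "right cheek" if congruent else "chin"
    return seen_onsets, seen_location

felt = np.cumsum(np.random.uniform(1 / 0.76, 1 / 0.33, size=20))  # toy felt-touch onsets
conditions = {
    "Sync-Cong":    condition_schedule(felt, True,  True),
    "Sync-Incong":  condition_schedule(felt, True,  False),
    "Async-Cong":   condition_schedule(felt, False, True),
    "Async-Incong": condition_schedule(felt, False, False),
}
for name, (onsets, location) in conditions.items():
    print(name, location, onsets[:2].round(2))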

Figure 1.

Experimental design, timeline, and behavioral results. (a) While in the scanner and in 40 s blocks, participants received tactile stimulation to their left cheek from puffs of air. The stimulation was akin to the cotton bud that was seen touching the face of another unfamiliar person in a movie played to the participant at the same time. The tactile stimulation on the 2 faces could be either synchronous or asynchronous and on either specularly congruent or incongruent locations. For the incongruent stimulation, participants observed the other person being touched on the chin. After each block of stimulation, participants rated the strength of the illusory experience on a 7-point Likert scale, ranging from "strongly agree" (+3) to "strongly disagree" (−3). (b) The block interval was 50 s, during which there was a movie (40 s), followed by a blank screen presented for a variable interstimulus interval (0–4 s), followed by the question and Likert scale (maximum 6 s), followed by a blank screen presented for the remaining time to complete 50 s. (c) The mean Likert scale responses across participants for the 4 conditions are shown. As can be seen, participants showed a stronger illusory experience in the synchronous, congruent condition, as predicted. Error bars depict the standard error of the mean.

As in the behavioral session, after each block, participants were asked to report their level of agreement with the statement "I felt I was looking at my face," using a 7-point Likert scale displayed on the screen. The participants used the response box for this task. A maximum of 6 s was allowed to answer the statement, which was sufficient for all participants. The onset of the question was jittered randomly and uniformly over the 4 s period after the offset of the movies. Thus, the block interval was 50 s, during which there was a movie (40 s), followed by a blank screen presented for a variable duration (0–4 s), followed by the question and Likert scale (maximum 6 s), and followed by a blank screen presented for the remaining time to complete 50 s (see Fig. 1b). Participants used one button to move up the scale, one to move down, and a third to indicate that they had chosen their response. This question also afforded us the opportunity to analyze the data parametrically, with the responses to this question being used as a predictor of activity during the corresponding movie. This was beneficial as it allowed us to examine the extent to which participants were experiencing the illusion in each block, rather than following the approach used in previous designs (Ehrsson et al. 2004; Ionta et al. 2011; Petkova et al. 2011), where "off-line" reports of the strength of the experience after the experiment are regressed against the BOLD response in different conditions. Thus, our design afforded us the opportunity to examine activity that related to the experience of the illusion on-line. To analyze the behavioral responses, we performed pairwise comparisons between conditions and corrected for multiple comparisons using a Bonferroni correction (P < 0.05).
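The 50 s block timing described above can be summarized with a small sketch (Python; a simplified illustration under the stated timings, not the actual Presentation script), which lays out the movie, the jittered blank interval, the question period, and the trailing blank for each block of a run.

import numpy as np

rng = np.random.default_rng(0)

def block_timeline(block_start, movie_dur=40.0, max_jitter=4.0,
                   question_dur=6.0, block_len=50.0):
    """Event onsets and durations (s) within one 50-s block interval."""
    jitter = rng.uniform(0.0, max_jitter)          # blank screen after the movie
    q_on = block_start + movie_dur + jitter        # question + Likert scale onset
    return {
        "movie": (block_start, movie_dur),
        "question": (q_on, question_dur),          # up to 6 s to respond
        "blank_until_next_block": (q_on + question_dur, block_start + block_len),
    }

# Timings for the 12 blocks of one ~10-min run
run = [block_timeline(i * 50.0) for i in range(12)]
print(run[0])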

There were 3 experimental runs, each lasting ∼10 min. In each run, the 4 conditions were repeated 3 times each, their order randomized, resulting in 12 trials completed in each run and a total of 36 trials.

Image Acquisition

For each participant, T2* weighted echo planar images (EPI) were acquired. Thirty-five slices were acquired in an interleaved manner, at an oblique angle to the AC-PC line. A voxel size of 3 × 3 × 3 mm was used; TR = 3 s, TE = 50 ms, flip angle = 90°. Prior to the functional scans, high-resolution T1-weighted structural images were acquired at a resolution of 1 × 1 × 1 mm using an MPRAGE sequence.
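For orientation, these parameters imply roughly 200 EPI volumes per run, a back-of-the-envelope check (our calculation, not a figure reported in the paper):

# 12 blocks of 50 s = 600 s per run; at TR = 3 s this is ~200 volumes (~10 min),
# matching the reported run duration.
TR = 3.0                     # s
run_duration = 12 * 50.0     # s
n_volumes = run_duration / TR
print(n_volumes)             # 200.0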

Image Analysis

All preprocessing and statistical analyses were conducted using SPM8 (www.fil.ion.ucl.ac.uk/spm). The EPI images were first realigned and co-registered to the subject's own anatomical image. The structural image was processed using a unified segmentation procedure combining segmentation, bias correction, and spatial normalization to the MNI template (Ashburner and Friston 2005); the same normalization parameters were then used to normalize the EPI images. Lastly, a Gaussian kernel of 8 mm FWHM was applied to spatially smooth the images in order to conform to the assumptions of the GLM implemented in SPM8 (see below).
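Smoothing in SPM is specified as a FWHM in mm; the sketch below (Python with SciPy, an illustration of the standard FWHM-to-sigma conversion rather than SPM8's own code) shows how an 8 mm FWHM kernel translates to a Gaussian sigma for 3 mm isotropic voxels.

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_volume(vol, fwhm_mm=8.0, voxel_size_mm=3.0):
    """Apply isotropic Gaussian smoothing to a 3D volume.

    SPM specifies smoothing as FWHM in mm; scipy expects the Gaussian
    sigma in voxels, so convert: sigma = FWHM / (2 * sqrt(2 * ln 2)).
    """
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sigma_vox = sigma_mm / voxel_size_mm
    return gaussian_filter(vol, sigma=sigma_vox)

smoothed = smooth_volume(np.random.rand(64, 64, 35))  # toy EPI-sized volume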

Statistical Analysis

Event Definition and Modeling

The data were analyzed using 2 different approaches. First, we analyzed the data within the factorial design outlined above, with 2 factors: Synchronicity (synchronous or asynchronous) and specular Congruency (congruent or incongruent). For each subject, we created a GLM in which there were 5 regressors for each of the 3 scanning sessions. Four regressors in each session corresponded to the 4 conditions (Sync-Cong, Sync-Incong, Async-Cong, Async-Incong). These were modeled as 40 s events, which were convolved with the canonical hemodynamic response function (HRF). The fifth regressor in each session corresponded to the question periods after every block, which were modeled as 6 s blocks and convolved with the canonical HRF.
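The following sketch (Python; a simplified illustration using the conventional double-gamma approximation to the canonical HRF, with parameters that are standard defaults rather than values taken from the paper) shows how a 40 s block regressor of this kind can be constructed.

import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=3.0, duration=32.0):
    """Double-gamma HRF sampled at the TR (an approximation of a canonical
    HRF: response peak shape 6, undershoot shape 16, undershoot ratio 1/6)."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def block_regressor(onsets, block_dur, n_scans, tr=3.0):
    """Boxcar (1 during each 40-s block, 0 elsewhere) convolved with the HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets:
        start, stop = int(onset // tr), int((onset + block_dur) // tr)
        boxcar[start:stop] = 1.0
    return np.convolve(boxcar, canonical_hrf(tr))[:n_scans]

# e.g., three toy Sync-Cong block onsets within one 200-scan (600 s) run
reg_sync_cong = block_regressor(onsets=[0.0, 250.0, 500.0], block_dur=40.0, n_scans=200)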

The second analysis we performed was a parametric analysis, which looked for activity that was scaled with the subjective experience of the illusion, regardless of the condition in the factorial design. For this analysis, the regressors used were similar to those outlined for the factorial analysis. However, the regressors for the 4 experimental conditions in each session were collapsed into one regressor which corresponded to all conditions in that session. A first-order parametric modulator of that regressor was then created, using the responses on the question at the end of each block of IMS, to scale the canonical HRF. As such, this parametric modulator acted as a predictor of the level of activity based on the extent to which the participants self-reported the experience of the illusion “on-line.” That is, we used the responses to the question “I felt like I was looking at my face,” which were collected after the offset of stimulation in every block, regardless of the condition in the factorial design to which the block belonged. This approach allowed us to look block by block at the strength of the illusory experience, and not the presumed strength based on “off-line” questions before or after the scanning session, as in previous studies (Ehrsson et al. 2004; Ionta et al. 2011; Petkova et al. 2011). In both the factorial and parametric analyses, the residual effects of head motion were modeled in the analysis by including the 6 parameters of head motion acquired from the realignment stage of the preprocessing as covariates of no interest. Prior to the study, a set of planned experimental timings were carefully checked so that they resulted in an estimable GLM in which the statistical independence of the different event types was preserved.
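The logic of this parametric modulator can be sketched as follows (Python; an illustration of the approach, not SPM8's internal implementation). Each block's HRF-convolved regressor is scaled by the mean-centered Likert rating given after that block, yielding a regressor that tracks the block-by-block strength of the illusion alongside the main effect of stimulation.

import numpy as np

def add_parametric_modulator(design, block_regressors, ratings):
    """Append a rating-modulated regressor to a design matrix.

    block_regressors : (n_scans, n_blocks) HRF-convolved regressors, one per block.
    ratings          : Likert responses (-3..+3), one per block, collected after
                       each block. Mean-centering makes the modulator orthogonal
                       to the average block response.
    """
    ratings = np.asarray(ratings, dtype=float)
    ratings -= ratings.mean()
    all_blocks = block_regressors.sum(axis=1, keepdims=True)   # main effect of stimulation
    modulated = block_regressors @ ratings[:, None]            # illusion-strength modulation
    return np.hstack([design, all_blocks, modulated])

# Toy example; a full design would also hold the question regressor and 6 motion covariates
n_scans, n_blocks = 200, 12
design = np.ones((n_scans, 1))                  # constant term
block_regs = np.random.rand(n_scans, n_blocks)  # placeholder HRF-convolved block regressors
ratings = np.random.randint(-3, 4, size=n_blocks)
X = add_parametric_modulator(design, block_regs, ratings)
print(X.shape)  # (200, 3)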

First-Level Analysis

For the 2 analyses, SPM{t} contrast images were computed for each regressor at the first level.

Second-Level Analysis

SPM{t} contrast images from the first level were input into a second-level full factorial random effects ANOVA with pooled variance. In the factorial analysis, an F-contrast was performed to look for voxels in which activity showed an interaction between synchronicity and congruency; we defined the contrast as [1, −1, −1, 1], with the Sync-Cong and Async-Incong conditions corresponding to the 2 positive contrast weights, applied to a linear combination of the betas across the 3 sessions. In the parametric analysis, F-contrasts were applied at the second level to look for areas in which activity varied statistically with a linear combination of the betas corresponding to the parametric modulator across the sessions.
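For clarity, the interaction contrast amounts to comparing the synchrony effect across the 2 congruency levels; a toy worked example with made-up betas:

import numpy as np

# Per-condition parameter estimates (betas) for one voxel, in the order
# [Sync-Cong, Sync-Incong, Async-Cong, Async-Incong] (toy numbers).
betas = np.array([1.2, 0.4, 0.5, 0.9])

# Interaction: (Sync-Cong - Sync-Incong) - (Async-Cong - Async-Incong),
# i.e. the weights [1, -1, -1, 1] used in the second-level F-contrast.
c = np.array([1, -1, -1, 1])
interaction_effect = c @ betas
print(interaction_effect)  # 1.2 - 0.4 - 0.5 + 0.9 = 1.2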

To correct for multiple comparisons we used 2 approaches. First, we corrected using familywise error rate (FWE, P < 0.05) correction across the whole brain. Second, to avoid false-negative results from the deployment of this conservative statistical threshold in areas previously implicated in static self-face recognition and in areas involved in bodily illusions, we applied small volume corrections of an 8 mm sphere around the MNI coordinates in the IPS, occipital face area (OFA), and the ITG from Apps et al. (2012), the premotor and cerebellar coordinates from Ehrsson et al. (2004), and the TPJ coordinate of Ionta et al. (2011) (see Table 1). To ensure that areas that others have previously implicated in self-face recognition with static images were not engaged during the synchronous, congruent touch, we performed additional small volume corrections in 3 areas: the right insula (38, 22, 16; Talairach coordinates) and the right inferior frontal gyrus (48, 32, 14) from Devue et al. (2007), as well as the left fusiform gyrus (−42, −56, 16) from Sugiura et al. (2005). These coordinates were reported in the original papers in Talairach space and were converted into MNI coordinates (Calder et al. 2001) for the small volume corrections. However, we would not predict an effect in these areas, as we previously did not find them to be involved in current self-face recognition (Apps et al. 2012). We used the coordinates of these other studies to avoid the effects of circularity that occur when the results of one analysis are used to inform an additional nonorthogonal analysis applied to the same dataset (Kriegeskorte et al. 2009) (i.e., we did not use coordinates from the factorial analysis as corrections for the parametric analysis or vice versa). In addition, this enabled us to make important inferences about whether areas previously implicated in face recognition, or areas involved in multisensory processing, are engaged during mirror self-recognition.
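A small volume correction of this kind restricts inference to voxels inside a sphere around an a priori coordinate. The sketch below (Python; a generic illustration with a toy affine, not SPM8's implementation) builds such an 8 mm spherical mask around the right IPS coordinate used here.

import numpy as np

def sphere_mask(shape, affine, centre_mm, radius_mm=8.0):
    """Boolean mask of voxels within `radius_mm` of an MNI coordinate.

    shape  : 3D image dimensions (voxels).
    affine : 4x4 voxel-to-MNI(mm) matrix of the normalized images.
    """
    i, j, k = np.indices(shape)
    vox = np.stack([i, j, k, np.ones(shape)], axis=-1)  # homogeneous voxel coordinates
    mm = vox @ affine.T                                 # map to mm space
    dist = np.linalg.norm(mm[..., :3] - np.asarray(centre_mm), axis=-1)
    return dist <= radius_mm

# e.g., an 8-mm sphere around the right IPS coordinate from Apps et al. (2012)
affine = np.diag([3.0, 3.0, 3.0, 1.0]); affine[:3, 3] = [-78, -112, -70]  # toy 3-mm MNI affine
mask = sphere_mask((53, 63, 46), affine, centre_mm=(28, -62, 48))
print(mask.sum(), "voxels in the small-volume sphere")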

Table 1

Coordinates used for small volume corrections

Anatomical region: MNI coordinates (x, y, z)

Occipital
 Right inferior occipital gyrus (BA 19) (a): 48, −62, −8
Temporal
 Right posterior superior temporal gyrus (in the temporo-parietal junction region) (b): 54, −52, 26
 Inferior temporal gyrus (BA 21) (a): 62, −12, −16
Parietal
 Right intraparietal sulcus (BA 7) (a): 28, −62, 48
Frontal
 Precentral gyrus (BA 6) (c): 51, 0, 48
Cerebellum
 Lobule VI (c): 48, −57, −27

Coordinates taken from the areas responding to the current self-face in (a) Apps et al. (2012); to the mislocation of the body in space (b) in Ionta et al. (2011); and when experiencing the rubber hand illusion (c) in Ehrsson et al. (2004).

Results

Behavioral Results

A repeated-measures ANOVA on the responses to the statement "I felt like I was looking at my face," presented after each stimulation block, revealed an interaction effect between Synchronicity and Congruency (F(1,14) = 16.84, P < 0.001; see Fig. 1c). Planned pairwise comparisons between the Sync-Cong condition and the other 3 control conditions (Async-Cong, t(14) = 10.65, P < 0.001; Async-Incong, t(14) = 11.41, P < 0.001; and Sync-Incong, t(14) = 6.72, P < 0.001) showed significantly higher responses on the Likert scale for the Sync-Cong than each of the other conditions. Thus, participants experienced the illusory effect more strongly in the Sync-Cong condition than in any other condition, as predicted.
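The pairwise part of this analysis can be illustrated as follows (Python with SciPy; toy data, and the repeated-measures ANOVA itself is not reproduced here): paired t-tests of the Sync-Cong condition against each control condition, Bonferroni-corrected across the 3 comparisons.

import numpy as np
from scipy import stats

def pairwise_vs_synccong(ratings, alpha=0.05):
    """Paired t-tests of Sync-Cong against each control condition,
    Bonferroni-corrected for the 3 comparisons.

    ratings : (n_participants, 4) mean Likert responses per condition,
              columns ordered [Sync-Cong, Sync-Incong, Async-Cong, Async-Incong].
    """
    results = {}
    for idx, name in [(1, "Sync-Incong"), (2, "Async-Cong"), (3, "Async-Incong")]:
        t, p = stats.ttest_rel(ratings[:, 0], ratings[:, idx])
        results[name] = (t, p, p < alpha / 3)  # Bonferroni-corrected significance
    return results

# Toy data for 15 participants, roughly mimicking the reported pattern of means
toy = np.random.default_rng(0).normal(size=(15, 4)) + np.array([1.5, -0.5, -1.5, -1.5])
print(pairwise_vs_synccong(toy))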

fMRI Results

To analyze the fMRI data, we employed 2 approaches. First, we performed a factorial analysis to look for voxels in which activity showed an interaction effect between Synchronicity and Congruency. Second, we performed a parametric analysis that looked for voxels in which activity in each block, regardless of the condition, scaled with the responses on the Likert scale to the enfacement question at the end of each block of stimulation. The factorial analysis revealed activity in several areas (see Table 2) that survived whole-brain correction for multiple comparisons. Small volume corrections around the coordinates of previous studies (Ehrsson et al. 2004; Ionta et al. 2011; Apps et al. 2012) revealed interaction effects in the right IPS (MNI coordinates: 28, −58, 52; Z = 5.21, P < 0.05 svc), the right IOG (putatively in the OFA; 50, −68, −4; Z = 5.45, P < 0.05 svc), and the posterior portion of the superior temporal gyrus in the right TPJ (54, −48, 20; Z = 5.45, P < 0.05 svc); these regions were also identified as parts of clusters within the whole-brain analysis (P < 0.05 FWE). Similar corrections around the ITG, premotor cortex, and cerebellar coordinates did not reveal any interaction effects (P > 0.05 uncorrected). We found no voxels that showed a main effect of congruency or synchronicity, even at a reduced threshold (P < 0.005 uncorrected).

Table 2

Full table of results for the congruency × synchronicity interaction

Anatomical region: MNI coordinates in mm (x, y, z); z-value

Occipital
 Left lingual gyrus (BA 19): −14, −64, −2; z = 8.90
 Right inferior occipital gyrus (BA 19): 30, −96, −8; z = 7.90
 Left inferior occipital gyrus (BA 19): −22, −96, 0; z = 7.83
Insula
 Right short insula gyrus: 36, 24, 14; z = 6.44
 Left short insula gyrus: −36, 16, 8; z = 5.36
 Right long insula gyrus: 38, −12, 10; z = 5.35
Temporal
 Right posterior superior temporal gyrus (in the TPJ region; BA 39/7): 58, −44, 18; z = 6.08
Parietal
 Left intraparietal sulcus (BA 7): −24, −62, 54; z = 5.91
 Right parietal operculum (secondary somatosensory cortex): 66, −10, 18; z = 5.80
 Right intraparietal sulcus (BA 7): 28, −58, 52; z = 5.21
Frontal
 Middle frontal gyrus (BA 46): 48, 8, 30; z = 5.44

All results are whole-brain corrected (P < 0.05 FWE). The atlas of Duvernoy (1999) was used for anatomical localization.

The parametric analysis did not find any voxels in which activity co-varied with the Likert scale responses when correcting for multiple comparisons across the whole brain. Nevertheless, when applying small volume corrections around the coordinates from previous studies, we found a negative correlation between the illusory experience and the magnitude of BOLD activity in the right TPJ (50, −52, 26; Z = 3.11, P < 0.05 svc). Examination of the beta coefficients shown in Figure 2 suggests that this effect may be driven by a decrease in the negative BOLD response (a higher absolute response) found in all conditions of the experiment. This is in line with previous studies that have shown experimentally induced negative BOLD responses in the TPJ (Corbetta et al. 2008; Geng and Mangun 2011), particularly in tasks that require subjects to attend to differences between self and other, or during perspective-taking tasks (Lombardo et al. 2011; Schnell et al. 2011). In addition, activity in the right IPS (28, −56, 50; Z = 2.97, P < 0.05 svc) and the right IOG (putatively in the OFA; 42, −62, −10; Z = 3.07, P < 0.05 svc) was found to positively co-vary with the experience of the illusion reported on the enfacement question. Small volume corrections around the ITG, premotor, and cerebellar coordinates did not reveal any voxels that showed a parametric effect. Thus, we find that activity in the same locations as reported in previous studies of self-face recognition (Apps et al. 2012) and bodily illusions (Ionta et al. 2011) varies parametrically with the extent to which the illusion was experienced. Moreover, these locations coincided with the activations identified by the factorial analysis.

Figure 2.

fMRI results. Voxels that showed a significant interaction between Synchronicity and Congruency and in which activity also varied parametrically with the illusory experience. Such voxels were found in the right TPJ (a), the right IOG (b), and the right IPS (c), and are displayed in the upper panels (P < 0.001 uncorrected is used for display purposes). Plots of the beta coefficients from the peak voxels of the factorial analysis are displayed in the lower panels.

Activity in the insula, the right inferior frontal gyrus, and the left fusiform gyrus was not found to vary parametrically with the strength of the illusion and did not show an interaction between synchronicity and congruency, even at a lowered threshold (P < 0.005).

Our results show that activity in a network of areas is modulated by the synchronicity and specular congruency of visuo-tactile stimulation. Activity in 3 areas that show such an interaction effect, the rTPJ, rIOG, and the rIPS, fluctuates parametrically with the extent to which multisensory stimulation leads to the illusory experience of another's face being one's own.

Discussion

We used fMRI to examine brain activity during the illusory experience of identification with another's face that occurs following synchronized visuo-tactile stimulation (Tsakiris 2008; Sforza et al. 2010; Tajadura-Jimenez et al. 2012a). Activity in the right TPJ, IOG, and the IPS was modulated by synchronous, congruent visuo-tactile stimulation between one's own and another person's face, and activity in these areas varied parametrically with the extent to which participants were experiencing the illusion. We suggest that the interplay between the unimodal IOG and the multimodal TPJ and IPS drives the dynamic process of self-identification.

Our results support the notion that dynamic changes in self-recognition involve plasticity in unimodal self-face representations. Lateral portions of the IOG contain patches which respond selectively to particular categories of stimuli, including the face selective OFA. Theories of face recognition, supported by neuroimaging studies, suggest that the OFA processes individual facial features but does not process configural information that leads to the representation of an identity (Barton 2008; Kanwisher and Barton 2011). This would suggest that synchronous congruent visuo-tactile stimulation to self and other leads to changes in the unimodal representations of the low-level visual features of the seen unfamiliar face stimulus. Such plasticity in the face perception system may account for the changes in the perceptual experience of the face during the enfacement illusion, such as the assimilation of features of the other's face in the mental representation of one's own face, as has been documented in behavioral tasks (Tajadura-Jiménez et al. 2012a). This is also similar to the findings of imaging studies investigating multisensory stimulation to the body, which reported plasticity in the extrastriate body area during synchronous stimulation (Ionta et al. 2011).

Importantly, our findings suggest that the experience of self-identification involves integration in multisensory brain areas. The ventral IPS receives projections from portions of the inferior and superior temporal sulci and the IOG (Seltzer and Pandya 1978, 1980, 1986, 1994; Petrides and Pandya 2009), which contain face-selective patches (Allison et al. 2000; Haxby et al. 2000; Barraclough and Perrett 2011; Kanwisher and Barton 2011). The IPS also receives somatosensory and vestibular input, suggesting involvement in integrating body-related multisensory information (Seltzer and Pandya 1980, 1986; Lopez and Blanke 2011). Neurophysiological studies in monkeys and neuroimaging investigations in humans have identified bimodal neurons with topographically aligned somatosensory and visual receptive fields in the IPS (Duhamel et al. 1998; Avillac et al. 2005; Sereno and Huang 2006; Huang et al. 2012). In addition, the IPS is activated during illusions in which body ownership is modulated, such as the rubber-hand illusion and whole-body illusions (Ehrsson et al. 2004, 2005; Petkova et al. 2011), and also when seeing touch on another's face (Cardini et al. 2011). These findings suggest that the IPS integrates visual and somatosensory information to create a coherent representation of one's body and its peripersonal space, which results in predictions being formed about the likelihood of upcoming somatosensory input (Blanke 2012).

It is suggested that synchronized visuo-tactile stimulation leads to an updating of the near-space representation of one's face and hand that is processed in the IPS (Brozzoli et al. 2011, 2012a, 2012b; Cardini et al. 2013). Interestingly, it has also been shown that a rapid, plastic re-mapping of the visuo-tactile peripersonal space around one's body occurs when the body is seen in a mirror (Maravita et al. 2002). The process of identifying with a body seen in extrapersonal space, that is, in the space behind the mirror, alters the processing of the visual stimuli applied to the reflected body: even though these stimuli are seen in extrapersonal space, they are remapped as peripersonal stimuli through the mirror reflection (Holmes et al. 2004). Predictions about the body are therefore rapidly updated during multisensory experience while exposed to a mirror reflection. Our finding is consistent with this view. In this study, as the face was not experienced as "me" during the control conditions, the approaching cotton bud was not predictive of an impending tactile stimulation at the same location on the participant's face. Thus, only during the illusory condition would the experience be akin to looking in a mirror, and only then would plastic updating of the peripersonal space occur. This finding suggests that the conditions that elicit the enfacement illusion result in multisensory-driven predictions about upcoming somatosensory input, which are processed in the IPS. Such an effect may be central to the experience of ownership of one's face when looking in a mirror.

The TPJ is known for its role in integrating multisensory information and in the processing of the first-person perspective. The portion of the TPJ in the upper bank of the posterior superior temporal sulcus (STS) and the adjacent portion of the angular gyrus are connected to multisensory areas including the ventral IPS, the anterior insula (AI) and the premotor cortex, but also to visual areas including the lateral occipital areas, inferior temporal cortex and additionally to the primary and secondary somatosensory areas (Seltzer and Pandya 1978, 1989, 1994; Barnes and Pandya 1992; Augustine 1996; Cipolloni and Pandya 1999; Petrides and Pandya 2009; Mars et al. 2012). Notably, the cluster we identified is distinct from the portion of the TPJ often referred to as being part of the default-mode network (Mars et al. 2012).

A recent study by Ionta et al. (2011) showed that stimulation to the trunk, which causes the illusory experience of one's body being located above its actual position, modulates activity in the same portion of the TPJ that was activated in our study. Lesions and transcranial magnetic stimulation (TMS)-induced disruptions to this region elicit out of body experiences (Blanke and Mohr 2005; Ehrsson 2007). Neuroimaging studies also show that the same portion of the TPJ is engaged during self-face recognition (Uddin et al. 2005; Kaplan et al. 2008; Apps et al. 2012), and some have suggested the same portion is activated when processing others' mental states (Saxe and Kanwisher 2003; Frith and Frith 2006; Hampton et al. 2008; Aichhorn et al. 2009). Interestingly, this area is also engaged when processing the level of trust that one should have with another and the level of similarity of another's face based on how trustworthy it is (Behrens et al. 2008; Hampton et al. 2008). Increasing trust with another also increases the level of perceived similarity between one's own and another's face (Farmer, Mckay and Tsakiris, in press). This seems to indicate that the magnitude of the TPJ response is a function of the extent to which perspectives, self or other, are being processed. We found a reduction in the magnitude of the BOLD response in the TPJ that was scaled with the experience of enfacement. This result suggests that during synchronous congruent stimulation participants represented and experienced the seen face as self, while in the control conditions, they represented 2 individuals, the self and the other seen face. This effect may be an important neural marker of visual self-recognition, as seeing one's face in a mirror reflects a rare instance in which a face is seen but is experienced as mine. In addition, given the important role that this region has in processing social information (Behrens et al. 2008; Zaitchik et al. 2010; Carter et al. 2012; Mars et al. 2012; Santiesteban et al. in press), it is possible that plasticity in the representation of the self-face in the TPJ may underpin changes in sociocognitive processing that occur following the experience of enfacement (Maister et al. 2013).

Neuroimaging studies have identified regions that are engaged during self-face recognition when viewing static visual stimuli (Platek et al. 2008; Devue and Bredart 2011). Such studies have reported activity in many regions, including the right TPJ, right IOG, right inferior/middle frontal gyrus (IFG/MFG), the bilateral IPS, the right ITG, the posterior cingulate gyrus, the precuneus, the AI, the fusiform gyrus, and the temporal poles. Many have argued that these areas therefore reflect the neural basis of mirror self-recognition (Kircher et al. 2000; Uddin et al. 2005; Platek et al. 2006; Devue et al. 2007; Kaplan et al. 2008; Sugiura et al. 2008; Platek and Kemp 2009; Heinisch et al. 2011; Apps et al. 2012; Ma and Han 2012). Our study, by using multisensory stimulation, shows that a small subset of these regions, the TPJ, the IPS, and the IOG, is involved in the process of experiencing a visually observed face as "me" and in the multisensory process of mirror self-identification. This result therefore suggests that not all of the regions previously implicated in self-face recognition may actually be engaged when identifying one's self with an image during online multisensory input.

While the question of the maintenance of a self-face representation has been addressed in several studies with adults (see Devue and Brédart (2011) for a review), the neurocognitive mechanisms that allow us to acquire and update, as opposed to simply maintain, a representation of our own face remain poorly understood. To frame this problem, consider how we first come to form a mental representation of what we look like at the ontogenetic level. Infants cannot have a priori knowledge of their appearance. Thus, the initial acquisition of a mental self-face representation cannot be explained by a process of comparing an external stimulus to a stored mental representation, because a mental representation of what we look like does not exist a priori. An infant encountering a mirror for the first time must instead succeed in matching their sensorimotor experience with the observed sensorimotor behavior of the object seen inside the mirror (Apps and Tsakiris in press). This matching between felt and observed sensorimotor signals over time will lead to the formation of a mental representation of visual appearance (i.e., "that is my body reflected in the mirror; therefore, that is what I look like"). It is this ability to integrate online sensorimotor signals with visual feedback during mirror exposure that allows infants to realize that the face with the rouge spot that they see in the mirror is their own, and this process of self-identification allows successful performance in the classic rouge task of mirror self-recognition (Gallup 1970). Furthermore, as our physical appearance changes over time, the mental representation of what we look like should possess sufficient plasticity to ensure both the assimilation of changes and a sense of continuity over time (Apps et al. 2012). A similar process of assimilating dynamic multisensory input seems to underpin the updating of self-face representations. It is therefore important to distinguish between 3 key processes: 1) self-identification, which allows for the construction and acquisition of a mental representation of appearance; 2) self-recognition, which allows for the maintenance of a stored mental representation; and 3) self-updating, which allows for the assimilation of physical changes that will eventually be reflected in the mental representation. While most studies have focused on the second process, for which mnemonic representations seem to be crucial, recent studies on the enfacement illusion have successfully demonstrated how multisensory integration can be used to understand the processes of self-identification and self-updating.

We here expand this view by highlighting a set of unimodal and multimodal brain areas that underpin the process of self-identification in response to current multisensory input. We argue that the processes of self-identification, self-recognition, and self-updating may conform to a core component of the principles of predictive coding within the free-energy principle (Friston 2005, 2009, 2010; Hesselmann et al. 2010; Apps and Tsakiris in press). This principle, a unifying theory of cortical function, states that the brain generates a model of the world through its sensory systems, which leads to predictions about upcoming sensory input. Sensory input that is not predicted causes surprise (or "entropy") in sensory systems. The brain tries to reduce the average level of surprise across all sensory systems. This reduction can occur in 2 ways. First, actions can be performed with predictable outcomes to remove and avoid surprise. Second, representations of the causes of sensory events can be updated, to optimize predictions about future sensory input. Our results and the effect of enfacement can be explained within this framework. Before synchronous, congruent stimulation, the other's face is not processed as "me." During stimulation, surprise is induced by the congruency of the seen and felt events. Participants are instructed to remain motionless during stimulation, and therefore they cannot avoid surprise by performing actions. Thus, the only way for the brain to minimize the surprise is by updating representations of the self-face, with multimodal areas explaining away surprise in unimodal sensory areas (Apps and Tsakiris in press). Interestingly, this account argues that when stimuli become predictable, the BOLD response in areas involved in processing contextually relevant information is attenuated. In our study, when the illusion is experienced and subjects are processing the face as if it were their own in a mirror, the tactile stimulation becomes more predictable (i.e., as the cotton bud approaches the face, a tactile stimulation can be predicted at the same location on one's own face). The free-energy principle would therefore predict an attenuated response in areas that process both visual and tactile information about one's own face during the illusory experience, as opposed to conditions where separate visual and tactile information needs to be processed about one's own and another's face (Apps and Tsakiris in press).
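As a purely illustrative toy model (our own construction, not a model reported by Apps and Tsakiris), the sketch below captures the core idea invoked here: a single multimodal "self-face" cause generates predictions for a visual and a tactile channel, and prediction errors that cannot be resolved through action are explained away by updating the cause, after which the same input evokes a smaller (attenuated) error response.

import numpy as np

def update_cause(cause, visual_in, tactile_in, lr=0.2, n_steps=50):
    """Minimal predictive-coding loop: gradient descent on squared prediction error.

    The cause predicts both channels directly (identity generative model);
    updating the cause reduces the summed prediction error over iterations.
    """
    errors = []
    for _ in range(n_steps):
        err_v = visual_in - cause          # visual prediction error
        err_t = tactile_in - cause         # tactile prediction error
        errors.append(err_v**2 + err_t**2)
        cause += lr * (err_v + err_t)      # update the representation ("explain away")
    return cause, np.array(errors)

# Synchronous congruent stimulation: both channels signal "touch on my face"
cause, errors = update_cause(cause=0.0, visual_in=1.0, tactile_in=1.0)
print(round(cause, 2), errors[0].round(2), errors[-1].round(2))  # error shrinks as the cause updates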

Tentatively, our results support this claim. We showed the involvement of both the unimodal IOG and the multimodal TPJ and IPS in processing the multisensory-driven changes in the representation of a face, supporting the free-energy claim that interactions between unimodal and multimodal areas explain incoming sensory input. Also, we found evidence of an attenuation of the BOLD response in the TPJ during the illusion condition. Thus, our data support 2 key tenets of a predictive coding account of self-recognition, which may offer an improvement on past theoretical perspectives (Legrand and Ruby 2009) that have focused on the role of motor efference for self-awareness. Here, we show that non-motor multisensory information can also update representations and predictions about the self. Future studies should therefore examine whether predictive coding and the free-energy principle may be fruitful frameworks for explaining the neural basis of self-recognition.

In conclusion, our study shows that plasticity in both unimodal and multisensory areas during visuo-tactile stimulation leads to another's face being perceived as one's own. We argue that such processes underpin mirror self-recognition and the ontogeny of representations of one's visual appearance. These findings may be crucial for understanding the neurobiological processes that underpin our maintenance of a continuous sense of self as we age, and also the accommodation of the extensive changes that may occur as a result of ageing, reconstructive surgery, or traumatic events.

Funding

ESRC First Grant (RES-061-25-0233) and a European Research Council (ERC-2010-StG-262853) grant to M.T., Bial Foundation Bursary for Scientific Research 2010/2011 to A.T.-J. and M.T.

Notes

Conflict of Interest: None declared.

References

Aichhorn
M
Perner
J
Weiss
B
Kronbichler
M
Staffen
W
Ladurner
G
Temporo-parietal junction activity in theory-of-mind tasks: falseness, beliefs, or attention
J Cogn Neurosci
 , 
2009
, vol. 
21
 (pg. 
1179
-
1192
)
Allison
T
Puce
A
McCarthy
G
Social perception from visual cues: role of the STS region
Trends Cogn Sci
 , 
2000
, vol. 
4
 (pg. 
267
-
278
)
Anderson
JR
Gallup
GG
Jr
Which Primates Recognize Themselves in Mirrors?
Plos Biol
 , 
2011
, vol. 
9
 
Apps
MAJ
Tajadura-Jimenez
A
Turley
G
Tsakiris
M
The different faces of one's self: an fMRI study into the recognition of current and past self-facial appearances
Neuroimage
 , 
2012
, vol. 
63
 (pg. 
1720
-
1729
)
Apps
MAJ
Tsakiris
M
The free-energy self: a predictive coding account of self-recognition
Neurosci Biobehav Rev
  
In press
Ashburner
J
Friston
KJ
Unified segmentation
Neuroimage
 , 
2005
, vol. 
26
 (pg. 
839
-
851
)
Augustine
JR
Circuitry and functional aspects of the insular lobe in primates including humans
Brain Res Rev
 , 
1996
, vol. 
22
 (pg. 
229
-
244
)
Avillac
M
Deneve
S
Olivier
E
Pouget
A
Duhamel
JR
Reference frames for representing visual and tactile locations in parietal cortex
Nat Neurosci
 , 
2005
, vol. 
8
 (pg. 
941
-
949
)
Barnes
CL
Pandya
DN
Efferent cortical connections of multimodal cortex of the superior temporal sulcus in the rhesus-monkey
J Comp Neurol
 , 
1992
, vol. 
318
 (pg. 
222
-
244
)
Barraclough
NE
Perrett
DI
From single cells to social perception
Philos Trans R Soc B Biol Sci
 , 
2011
, vol. 
366
 (pg. 
1739
-
1752
)
Barton
JJS
Structure and function in acquired prosopagnosia: lessons from a series of 10 patients with brain damage
J Neuropsychol
 , 
2008
, vol. 
2
 (pg. 
197
-
225
)
Behrens
TEJ
Hunt
LT
Woolrich
MW
Rushworth
MFS
Associative learning of social value
Nature
 , 
2008
, vol. 
456
 (pg. 
245
-
U245
)
Bertenthal
BI
Fischer
KW
Development of self-recognition in infant
Dev Psychol
 , 
1978
, vol. 
14
 (pg. 
44
-
50
)
Blanke
O
Multisensory brain mechanisms of bodily self-consciousness
Nat Rev Neurosci
 , 
2012
, vol. 
13
 (pg. 
556
-
571
)
Blanke
O
Mohr
C
Out-of-body experience, heautoscopy, hallucination of neurological and autoscopic origin implications for neurocognitive mechanisms of corporeal awareness and self consciousness
Brain Res Rev
 , 
2005
, vol. 
50
 (pg. 
184
-
199
)
Brozzoli
C
Gentile
G
Ehrsson
HH
Neural bases of peripersonal space in humans revealed by fMRI-adaptation
Cogn Process
 , 
2012a
, vol. 
13
 (pg. 
S23
-
S24
)
Brozzoli
C
Gentile
G
Ehrsson
HH
That's near my hand! Parietal and premotor coding of hand-centered space contributes to localization and self-attribution of the hand
J Neurosci
 , 
2012b
, vol. 
32
 (pg. 
14573
-
14582
)
Brozzoli
C
Gentile
G
Petkova
VI
Ehrsson
HH
fMRI Adaptation Reveals a Cortical Mechanism for the Coding of Space Near the Hand
J Neurosci
 , 
2011
, vol. 
31
 (pg. 
9023
-
9031
)
Calder
AJ
Lawrence
AD
Young
AW
Neuropsychology of fear and loathing
Nat Rev Neurosci
 , 
2001
, vol. 
2(5)
 (pg. 
352
-
363
)
Cardini
F
Costantini
M
Galati
G
Romani
GL
Ladavas
E
Serino
A
Viewing One's Own Face Being Touched Modulates Tactile Perception: an fMRI Study
J Cogn Neurosci
 , 
2011
, vol. 
23
 (pg. 
503
-
513
)
Cardini
F
Tajadura-Jiménez
A
Serino
A
Tsakiris
It feels like it's me: interpersonal multisensory stimulation enhances visual remapping of touch from other to self
J Exp Psychol Hum Percpect Perform
 , 
2013
, vol. 
23(3)
 (pg. 
630
-
637
)
Carter
RM
Bowling
DL
Reeck
C
Huettel
SA
A distinct role of the temporal-parietal junction in predicting socially guided decisions
Science
 , 
2012
, vol. 
337
 (pg. 
109
-
111
)
Cipolloni
PB
Pandya
DN
Cortical connections of the frontoparietal opercular areas in the rhesus monkey
J Comp Neurol
 , 
1999
, vol. 
403
 (pg. 
431
-
458
)
Corbetta
M
Patel
G
Shulman
GL
The reorienting system of the human brain: from environment to theory of mind
Neuron
 , 
2008
, vol. 
58
 (pg. 
306
-
324
)
Devue
C
Bredart
S
The neural correlates of visual self-recognition
Conscious Cogn
 , 
2011
, vol. 
20
 (pg. 
40
-
51
)
Devue
C
Collette
F
Balteau
E
Dequeldre
C
Luxen
A
Maquet
P
Bredart
S
Here I am: the cortical correlates of visual self-recognition
Brain Res
 , 
2007
, vol. 
1143
 (pg. 
169
-
182
)
Duhamel
JR
Colby
CL
Goldberg
ME
Ventral intraparietal area of the macaque: congruent visual and somatic response properties
J Neurophysiol
 , 
1998
, vol. 
79
 (pg. 
126
-
136
)
Duvernoy
HM
The human brain: surface, three-dimensional sectional anatomy with MRI, and vascularization
 , 
1999
Wein
Springer-Verlag
Ehrsson
HH
The experimental induction of out-of-body experiences
Science
 , 
2007
, vol. 
317
 pg. 
1048
 
Ehrsson
HH
Holmes
NP
Passingham
RE
Touching a rubber hand: feeling of body ownership is associated with activity in multisensory brain areas
J Neurosci
 , 
2005
, vol. 
25
 (pg. 
10564
-
10573
)
Ehrsson
HH
Spence
C
Passingham
RE
That's my hand! Activity in premotor cortex reflects feeling of ownership of a limb
Science
 , 
2004
, vol. 
305
 (pg. 
857
-
877
)
Farmer H, McKay R, Tsakiris M. In press. Trust in me: trustworthy others are seen as more physically similar to the self. Psychol Sci.
Friston K. 2005. A theory of cortical responses. Philos Trans R Soc B Biol Sci. 360:815–836.
Friston K. 2009. The free-energy principle: a rough guide to the brain? Trends Cogn Sci. 13:293–301.
Friston K. 2010. The free-energy principle: a unified brain theory? Nat Rev Neurosci. 11:127–138.
Frith CD, Frith U. 2006. The neural basis of mentalizing. Neuron. 50:531–534.
Gallup GG. 1970. Chimpanzees: self-recognition. Science. 167:86–87.
Geng JJ, Mangun GR. 2011. Right temporoparietal junction activation by a salient contextual cue facilitates target discrimination. Neuroimage. 54:594–601.
Hampton AN, Bossaerts P, O'Doherty JP. 2008. Neural correlates of mentalizing-related computations during strategic interactions in humans. Proc Natl Acad Sci USA. 105:6741–6746.
Haxby JV, Hoffman EA, Gobbini MI. 2000. The distributed human neural system for face perception. Trends Cogn Sci. 4:223–233.
Heinisch C, Dinse HR, Tegenthoff M, Juckel G, Bruene M. 2011. An rTMS study into self-face recognition using video-morphing technique. Soc Cogn Affect Neurosci. 6:442–449.
Hesselmann G, Sadaghiani S, Friston KJ, Kleinschmidt A. 2010. Predictive coding or evidence accumulation? False inference and neuronal fluctuations. PLoS One. 5.
Holmes NP, Crozier G, Spence C. 2004. When mirrors lie: "visual capture" of arm position impairs reaching performance. Cogn Affect Behav Neurosci. 4:193–200.
Huang R-S, Chen C-F, Tran AT, Holstein KL, Sereno MI. 2012. Mapping multisensory parietal face and body areas in humans. Proc Natl Acad Sci USA. 109:18114–18119.
Huang R-S, Sereno MI. 2007. Dodecapus: an MR-compatible system for somatosensory stimulation. Neuroimage. 34:1060–1073.
Ionta S, Heydrich L, Lenggenhager B, Mouthon M, Fornari E, Chapuis D, Gassert R, Blanke O. 2011. Multisensory mechanisms in temporo-parietal cortex support self-location and first-person perspective. Neuron. 70:363–374.
Kanwisher N, Barton J. 2011. The functional architecture of the face system: integrating evidence from fMRI and patient studies. In: Haxby J, Johnson M, Rhodes G, Calder A, editors. Handbook of face perception. Oxford: Oxford University Press. p. 111–130.
Kaplan JT, Aziz-Zadeh L, Uddin LQ, Iacoboni M. 2008. The self across the senses: an fMRI study of self-face and self-voice recognition. Soc Cogn Affect Neurosci. 3:218–223.
Kircher TTJ, Senior C, Phillips ML, Benson PJ, Bullmore ET, Brammer M, Simmons A, Williams SCR, Bartels M, David AS. 2000. Towards a functional neuroanatomy of self processing: effects of faces and words. Cogn Brain Res. 10:133–144.
Kircher TTJ, Senior C, Phillips ML, Rabe-Hesketh S, Benson PJ, Bullmore ET, Brammer M, Simmons A, Bartels M, David AS. 2001. Recognizing one's own face. Cognition. 78:1–15.
Kriegeskorte N, Simmons WK, Bellgowan PSF, Baker CI. 2009. Circular analysis in systems neuroscience: the dangers of double dipping. Nat Neurosci. 12:535–540.
Legrand D, Ruby P. 2009. What is self-specific? Theoretical investigation and critical review of neuroimaging results. Psychol Rev. 116:252–282.
Lombardo MV, Chakrabarti B, Bullmore ET, Baron-Cohen S, MRC AIMS Consortium. 2011. Specialization of right temporo-parietal junction for mentalizing and its relation to social impairments in autism. Neuroimage. 56:1832–1838.
Lopez C, Blanke O. 2011. The thalamocortical vestibular system in animals and humans. Brain Res Rev. 67:119–146.
Ma Y, Han S. 2012. Functional dissociation of the left and right fusiform gyrus in self-face recognition. Hum Brain Mapp. 33(10):2255–2267.
Maister L, Tsiakkas E, Tsakiris M. 2013. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions. Emotion. 13:7–13.
Maravita A, Spence C, Sergent C, Driver J. 2002. Seeing your own touched hands in a mirror modulates cross-modal interactions. Psychol Sci. 13:350–355.
Mars RB, Sallet J, Schueffelgen U, Jbabdi S, Toni I, Rushworth MFS. 2012. Connectivity-based subdivisions of the human right "temporoparietal junction area": evidence for different areas participating in different cortical networks. Cereb Cortex. 22:1894–1903.
Mazzurega M, Pavani F, Paladino MP, Schubert TW. 2011. Self-other bodily merging in the context of synchronous but arbitrary-related multisensory inputs. Exp Brain Res. 213(2–3).
Paladino M-P, Mazzurega M, Pavani F, Schubert TW. 2010. Synchronous multisensory stimulation blurs self-other boundaries. Psychol Sci. 21:1202–1207.
Petkova VI, Bjornsdotter M, Gentile G, Jonsson T, Li T-Q, Ehrsson HH. 2011. From part- to whole-body ownership in the multisensory brain. Curr Biol. 21:1118–1122.
Petrides M, Pandya DN. 2009. Distinct parietal and temporal pathways to the homologues of Broca's area in the monkey. PLoS Biol. 7.
Platek SM, Kemp SM. 2009. Is family special to the brain? An event-related fMRI study of familiar, familial, and self-face recognition. Neuropsychologia. 47:849–858.
Platek SM, Loughead JW, Gur RC, Busch S, Ruparel K, Phend N, Panyavin IS, Langleben DD. 2006. Neural substrates for functionally discriminating self-face from personally familiar faces. Hum Brain Mapp. 27:91–98.
Platek SM, Wathne K, Tierney NG, Thomson JW. 2008. Neural correlates of self-face recognition: an effect-location meta-analysis. Brain Res. 1232:173–184.
Ramasubbu R, Masalovich S, Gaxiola I, Peltier S, Holtzheimer PE, Heim C, Goodyear B, MacQueen G, Mayberg HS. 2011. Differential neural activity and connectivity for processing one's own face: a preliminary report. Psychiatry Res. 194:130–140.
Santiesteban I, Banissy MJ, Catmur C, Bird G. 2012. Enhancing social ability by stimulating right temporoparietal junction. Curr Biol. 22:2274–2277.
Saxe R, Kanwisher N. 2003. People thinking about thinking people: the role of the temporo-parietal junction in "theory of mind". Neuroimage. 19:1835–1842.
Schnell K, Bluschke S, Konradt B, Walter H. 2011. Functional relations of empathy and mentalizing: an fMRI study on the neural basis of cognitive empathy. Neuroimage. 54:1743–1754.
Seltzer B, Pandya DN. 1978. Afferent cortical connections and architectonics of superior temporal sulcus and surrounding cortex in the rhesus monkey. Brain Res. 149:1–24.
Seltzer B, Pandya DN. 1980. Converging visual and somatic sensory cortical input to the intraparietal sulcus of the rhesus monkey. Brain Res. 192:339–351.
Seltzer B, Pandya DN. 1989. Frontal lobe connections of the superior temporal sulcus in the rhesus monkey. J Comp Neurol. 281:97–113.
Seltzer B, Pandya DN. 1994. Parietal, temporal, and occipital projections to cortex of the superior temporal sulcus in the rhesus monkey: a retrograde tracer study. J Comp Neurol. 343:445–463.
Seltzer B, Pandya DN. 1986. Posterior parietal projections to the intraparietal sulcus of the rhesus monkey. Exp Brain Res. 62:459–469.
Sereno MI, Huang RS. 2006. A human parietal face area contains aligned head-centered visual and tactile maps. Nat Neurosci. 9:1337–1343.
Sforza A, Bufalari I, Haggard P, Aglioti SM. 2010. My face in yours: visuo-tactile facial stimulation influences sense of identity. Soc Neurosci. 5:148–162.
Suarez SD, Gallup GG. 1981. Self-recognition in chimpanzees and orangutans, but not gorillas. J Hum Evol. 10:175–188.
Suddendorf T, Simcock G, Nielsen M. 2007. Visual self-recognition in mirrors and live videos: evidence for a developmental asynchrony. Cogn Dev. 22:185–196.
Sugiura M, Sassa Y, Jeong H, Horie K, Sato S, Kawashima R. 2008. Face-specific and domain-general characteristics of cortical responses during self-recognition. Neuroimage. 42:414–422.
Tajadura-Jiménez A, Grehl S, Tsakiris M. 2012a. The other in me: interpersonal multisensory stimulation changes the mental representation of the self. PLoS One. 7.
Tajadura-Jiménez A, Longo MR, Coleman R, Tsakiris M. 2012b. The person in the mirror: using the enfacement illusion to investigate the experiential structure of self-identification. Conscious Cogn. 21:1725–1738.
Tsakiris M. 2008. Looking for myself: current multisensory input alters self-face recognition. PLoS One. 3.
Tsakiris M. 2010. My body in the brain: a neurocognitive model of body-ownership. Neuropsychologia. 48:703–712.
Uddin LQ, Davies MS, Scott AA, Zaidel E, Bookheimer SY, Iacoboni M, Dapretto M. 2008. Neural basis of self and other representation in autism: an fMRI study of self-face recognition. PLoS One. 3.
Uddin LQ, Kaplan JT, Molnar-Szakacs I, Zaidel E, Iacoboni M. 2005. Self-face recognition activates a frontoparietal "mirror" network in the right hemisphere: an event-related fMRI study. Neuroimage. 25:926–935.
Uddin LQ, Molnar-Szakacs I, Zaidel E, Iacoboni M. 2006. rTMS to the right inferior parietal lobule disrupts self-other discrimination. Soc Cogn Affect Neurosci. 1:65–71.
Verosky SC, Todorov A. 2010. Differential neural responses to faces physically similar to the self as a function of their valence. Neuroimage. 49:1690–1698.
Zaitchik D, Walker C, Miller S, LaViolette P, Feczko E, Dickerson BC. 2010. Mental state attribution and the temporoparietal junction: an fMRI study comparing belief, emotion, and perception. Neuropsychologia. 48:2528–2536.

Author notes

M.A.J. Apps and A. Tajadura-Jiménez declare equal contribution.