Abstract

Humans are highly adept at multisensory processing of object shape in both vision and touch. Previous studies have mostly focused on where visually perceived object-shape information can be decoded, with haptic shape processing receiving less attention. Here, we investigate visuo-haptic shape processing in the human brain using multivoxel correlation analyses. Importantly, we use tangible, parametrically defined novel objects as stimuli. Two groups of participants first performed either a visual or haptic similarity-judgment task. The resulting perceptual object-shape spaces were highly similar and matched the physical parameter space. In a subsequent fMRI experiment, objects were first compared within the learned modality and then in the other modality in a one-back task. When correlating neural similarity spaces with perceptual spaces, visually perceived shape was decoded well in the occipital lobe and along the ventral pathway, whereas haptically perceived shape information was mainly found in the parietal lobe as well as in frontal cortex. Interestingly, ventrolateral occipito-temporal cortex decoded shape in both modalities, highlighting this as an area capable of detailed visuo-haptic shape processing. Finally, we found haptic shape representations in early visual cortex (in the absence of visual input) when participants switched from visual to haptic exploration, suggesting top-down involvement of visual imagery in haptic shape processing.

Introduction

Shape is one of the most important object properties: Updating and manipulating shape information is necessary for humans to efficiently interact with the world. Accordingly, humans are experts in shape processing—from simple to complex, and from familiar to unfamiliar objects (Rosch 1999). Although shape information is in general acquired by multiple sensory systems, previous studies have focused almost exclusively on the visual modality. Nevertheless, haptic identification of a wide range of objects is remarkably fast and accurate, making us haptic experts as much as we are visual experts (Klatzky et al. 1985). Although much is known about the behavioral and neural processing of shape in vision (Malach et al. 1995; Grill-Spector et al. 2001; Kourtzi and Kanwisher 2001), comparatively little is known about the haptic modality (Bodegård et al. 2001; Reed et al. 2004), and even less about how the 2 modalities may interact in object-shape processing (Amedi et al. 2001, 2002; James et al. 2002; Kassuba et al. 2013). The goal of the present study is therefore to provide a detailed investigation of the neural mechanisms of shape processing in vision and touch, focusing on unisensory and multisensory pathways.

For this, we use the framework of perceptual spaces, first proposed by Shepard (1987), in which internal representations are described as a space with distances between entities defined by their perceptual similarity. Importantly, the topology of this perceptual space conforms to the real-world, physical properties of these objects. Cutzu and Edelman (1996) demonstrated in an elegant experiment that visual perceptual space—as measured by similarity ratings and reconstructed using multidimensional scaling (MDS)—indeed conformed to a complex mathematical parameterization of object shape. Our previous studies have demonstrated that such physical spaces can be reconstructed well not only in vision but also in touch. In addition, the perceptual spaces acquired from the different sensory modalities were highly congruent (Cooke et al. 2007; Gaißert et al. 2010, 2011; Gaissert and Wallraven 2012; Wallraven and Dopjans 2013). Hence, the perceptual space framework has already proven useful for understanding shape representations in both vision and touch at the behavioral level.

In the visual modality, this framework has also been used to relate behavioral results with neuroimaging data analyzed through multivoxel methods to investigate how the visual perceptual space is created in the brain using familiar objects (Eger et al. 2008; Peelen and Caramazza 2012; Mur et al. 2013; Peelen et al. 2014; Snow et al. 2014; Smith and Goodale 2015) and unfamiliar objects (Op de Beeck et al. 2008; Drucker and Aguirre 2009). These studies have provided evidence that shape representations can be decoded in lateral occipital cortex (including regions BA19 to BA37 in the ventral visual pathway) regardless of object familiarity. Furthermore, Smith and Goodale (2015) showed involvement of the parietal lobe (including area BA7, or intraparietal sulcus) in shape decoding. Given that the previously mentioned perceptual studies have shown that “haptic” processing is capable of reconstructing physical shape spaces with high accuracy as well, one major goal of this study is to extend this research to determine the brain areas involved in creating this haptic perceptual space.

In addition, these imaging experiments have reliably identified one region in the occipital cortex—the lateral occipital complex (LOC)—for visual object processing in general (for example, Malach et al. 1995; Kourtzi and Kanwisher 2001) and processing of shape in particular (for example, Op de Beeck et al. 2008; Drucker and Aguirre 2009). A seminal study by Amedi et al. (2001) has further shown that a subpart of this region [the so-called lateral occipital tactile-visual region (LOtv)] becomes active not only for visual processing but also for haptic processing of objects. More recently, Snow et al. (2014) provided evidence that haptic shape sensitivity is not limited to LOtv but covers the entire LOC [including the lateral occipital (LO) area and posterior fusiform gyrus]—in addition to V1 and V4. We therefore hypothesize that LOC may be a candidate region that is able to encode visual as well as haptic properties of shape, and that it may be possible to find both visual and haptic representations of the perceptual spaces in this area.

In summary, in the present study, we investigate the neural correlates of visual and haptic perceptual shape spaces as well as their interaction. We first created a parametrically defined space of three-dimensional, novel objects as our physical space. Two groups of participants performed similarity ratings to yield visual or haptic perceptual spaces. Neural correlates of these perceptual spaces were then derived from multivoxel analyses of fMRI data.

Materials and Methods

Participants

Twenty-five (male = 14: mean age = 25, age range = 18–34; female = 11: mean age = 23.45, age range = 18–27) healthy, right-handed adults with no prior diagnosis of neurological or perceptual deficits were recruited as participants for a behavioral and an fMRI experiment. Participants were randomly assigned to one of the 2 groups, the visual group (N = 11, male = 6) and the haptic group (N = 14, male = 8). All participants provided written informed consent, and the experiment received prior approval by the Korea University Institutional Review Board [1040548-KU-IRB-12-30-A-1(E-A-1)].

Stimuli

The stimuli used in this experiment were generated by a parametric shape model (“superformula”; Gielis 2003). This model allowed us to create a parameter space of novel, three-dimensional objects with complex shape characteristics [see Cooke et al. (2007); Gaißert et al. (2010, 2011); Gaissert and Wallraven (2012) and Wallraven and Dopjans (2013) for similar approaches with different parameter spaces]. Three-dimensional shapes are generated by a complex combination of different trigonometric functions (Fig. 1A). In the present study, we used 3 parameters (n1, m1, and m2) that were varied to create a two-dimensional cross within a three-dimensional cube of the parameter space [see also Cutzu and Edelman (1996) for a similar configuration]. The resulting 9 stimuli in the parameter subspace are shown in Figure 1A.
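For illustration, the following minimal MATLAB sketch shows the 2D superformula of Gielis (2003) together with the standard spherical-product construction of a three-dimensional surface. The specific parameter values and the fixed exponents below are illustrative assumptions, not the values used to generate the 9 stimuli.

```matlab
% 2D superformula (Gielis 2003): radius as a function of angle phi,
% with symmetry parameter m and shape exponents n1, n2, n3.
sf = @(phi, m, n1, n2, n3) ...
    (abs(cos(m .* phi ./ 4)).^n2 + abs(sin(m .* phi ./ 4)).^n3).^(-1 ./ n1);

% Illustrative parameter values (the study varied n1, m1, and m2).
n1 = 2; n2 = 2; n3 = 2; m1 = 4; m2 = 4;

% Spherical product of two superformula curves evaluated over
% longitude (theta) and latitude (phi) yields a 3D surface.
theta = linspace(-pi, pi, 120);
phi   = linspace(-pi/2, pi/2, 120);
[T, P] = meshgrid(theta, phi);
r1 = sf(T, m1, n1, n2, n3);
r2 = sf(P, m2, n1, n2, n3);
X = r1 .* cos(T) .* r2 .* cos(P);
Y = r1 .* sin(T) .* r2 .* cos(P);
Z = r2 .* sin(P);
surf(X, Y, Z, 'EdgeColor', 'none'); axis equal vis3d
```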

Figure 1.

(A) Cross-like physical parameter space showing the 9 objects used in the experiments at the location defined by the parameter values. The equations for creating three-dimensional objects are shown below. (B) Design of the similarity-rating task. (C) Design of the visual and haptic runs in the fMRI experiment.

The stimuli were then printed out as MR-compatible, tangible objects (average measurements: 7.59 × 8.56 × 7.56 cm, average weight: 196.66 g) on a 3D printer (Zprinter650, 3DSystems, USA) for use in both visual and haptic experiments.

Behavioral Experiments

Participants of either group first performed a behavioral experiment to determine their visual or haptic perceptual space based on a standard similarity-rating task (Fig. 1B). This experiment was conducted exactly 2 days before the fMRI experiment and gave participants unimodal experience with the stimuli. During the experiment, the experimenter and the participant sat on opposite sides of a table. For the visual group, the experimenter placed objects at random orientations on the table (visual angle: ∼10°). Participants were instructed to keep head movements to a minimum and not to touch the objects. Since random orientations were used, participants were not able to use simple image-based or accidental cues to make the similarity judgment [see Cooke et al. (2007) and Gaißert et al. (2010) for similar protocols]. For the haptic group, the experimenter placed the object into the participant's right hand (at random orientation) for haptic exploration. The hand was extended through a curtain, thus blocking any visual input.

First, as an introduction, participants of either group were asked to look at (up to 10 s) or to haptically explore (up to 10 s) all 9 objects presented in a random order. This served to anchor their scales and to acquaint them with shape variability across objects. After this, participants performed a pair-wise similarity-rating task. Similarities were rated between all possible object pairs, including identical-object pairs, with pairs being presented consecutively in a random order. Each object was presented for 6 s, ensuring ample time for haptic (and visual) exploration. Timing and object presentation were controlled via spoken commands presented to the experimenter via headphones. There were 4 repetitions of all object pairs, yielding a total of 180 trials [((9 × 8/2) + 9) pairs × 4 repetitions; note that this includes presentations of identical objects in each repetition]. After each pair, participants verbally reported the perceived similarity on a Likert-type scale (1 = fully dissimilar to 7 = identical). There was one mandatory break after 2 repetitions. Overall, the experiment took approximately 1 h to finish. Participants were encouraged to focus on shape properties when making their judgment and to use the full range of the scale.
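The trial structure can be sketched in MATLAB as follows; this is a sketch of the design logic (enumerating the 45 unordered pairs, including identical pairs, and randomizing their order within each repetition), not the original experimental code.

```matlab
% Enumerate all 45 unordered object pairs, including identical pairs.
pairs = zeros(45, 2);
k = 0;
for i = 1:9
    for j = i:9
        k = k + 1;
        pairs(k, :) = [i j];
    end
end

% Randomize pair order independently within each of the 4 repetitions,
% yielding the 180 trials of the similarity-rating task.
trials = [];
for rep = 1:4
    trials = [trials; pairs(randperm(45), :)]; %#ok<AGROW>
end
```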

Scanning

MRI data were acquired on a SIEMENS Trio 3-T scanner (Siemens Medical Systems, Erlangen, Germany) with a 32-channel SENSE head coil (Brain Imaging Center, Korea University, Seoul, South Korea).

Structural MR images of all participants were collected using a T1-weighted sagittal high-resolution MPRAGE sequence [repetition time (TR) = 2250 ms, echo time (TE) = 3.65 ms, flip angle (FA) = 9°, field of view (FOV) = 256 × 256 mm, in-plane matrix = 256 × 256, voxel size = 1 × 1 × 1 mm, 192 axial slices]. Functional imaging was performed with a gapless, echo planar imaging sequence (TR = 3000 ms, TE = 30 ms, FA = 60°, FOV = 250 × 250 mm, in-plane matrix = 128 × 128, voxel size = 2 × 2 × 4 mm, 30 slices). The first 9 s of each functional run consisted of dummy scans to allow for steady-state magnetization.

Localizer Run for Visual Object-Selective Cortex

Before the main visual and haptic runs, participants completed a standard localizer task for determining object-selective cortical areas. For this, intact and scrambled images of familiar objects (including pictures of vehicles, fruits, animals, daily objects, furniture, etc.) and unfamiliar objects [including Greebles (Gauthier and Tarr 1997), three-dimensional computer graphics-generated objects (Wallraven et al. 2013), and random line-drawing figures] were shown in a rapid serial visual presentation paradigm. The localizer consisted of 16 randomized blocks (4 repetitions × familiar/unfamiliar × scrambled/intact) with each block starting with a 15-s fixation cross baseline followed by rapid presentation of 15 images for 1 s each. During the stimulus presentation, participants performed a one-back task to ensure attention. Whenever the current image was the same as the previous one, participants had to press a button on an fMRI-compatible button box that was held in their left hand. Participants were instructed to fixate the middle of the screen during the task. Images were shown at a visual angle of 10° on an MRI-compatible LCD display (BOLDscreen; Cambridge Research Systems, UK) that was viewed through a mirror mounted on the head coil. The scanning parameters were the same as for the main visual and haptic runs. The total length of the visual localizer was 480 s [16 blocks × (15 s baseline + 15 images × 1 s)].

Visual and Haptic Runs During fMRI

The main experiment consisted of a visual run and a haptic run in which participants performed a one-back task to ensure sustained attention (Fig. 1C). Participants from the visual group performed the visual run first, followed by the haptic run, and vice versa for participants from the haptic group. This specific order was selected so as to allow investigation of the effects of prior visual or haptic exposure from the preceding behavioral experiment. Each run contained a randomized set of trials with 6 repetitions of the 9 objects, yielding 54 trials. Each trial consisted of 6 s of presentation time, followed by a 9-s pause to allow for BOLD relaxation and a 9-s response period. The pauses after object presentation and the long response period were employed to let the BOLD response from object exploration and button pressing relax, since both involved hand movement and touch. Baseline periods were inserted before trial 1 and after trials 27 and 54. A run took 1341 s to finish [15 s baseline × 3 + 54 trials × (6 s exploration time + 9 s BOLD relaxation time + 9 s response time = 24 s)]. After a full run, there was a break of 120 s in which the experimenter prepared for the next presentation mode.

Baselines in the visual run consisted of a 15-s fixation cross in the middle of the screen. Baselines in the haptic run consisted of a 15-s texture stimulation in which participants palpated a piece of cloth with their right hand.

In the visual run, the experimenter placed stimuli on top of a table such that participants viewed them via a mirror mounted on the head coil (visual angle: ∼10°). Objects were placed at random orientations by the experimenter so as to vary both object orientation and the experimenter's hand posture. This was done to prevent simple image-based or hand-posture-based cues to object identity (Kaplan and Meyer 2012).

In the haptic run, participants were instructed to close their eyes to restrict visual input and to explore the object in their “right” hand. In this condition, the experimenter placed the object into the participant's hand for palpation similar to the behavioral experiment.

During the runs, participants had to perform a one-back task and were instructed to report whether the previous object was the same as the current object or a different object by pressing either of 2 buttons with the index and middle fingers of their left hand. Timing of each trial section was ensured by short, spoken audio cues synchronized to the beginning of each section ("Start" for the experimenter to present the object, "Stop" for the experimenter to remove the object, and "Answer" for the participant to press the button). Cues were presented via loudspeaker to both experimenter and participant with the sound volume set so as to be clearly audible over the scanning noise. Since the audio cues did not contain information about object identity, the experimenter followed a printed list of trial orders to ensure proper trial randomization.

Since the duration of the runs was quite long, care was taken to avoid excessive head movements and to make participants comfortable: Participants were first instructed to limit their head and shoulder movements during the scanning. In addition, participants' heads were comfortably fixed by fitting foam padding in the head coil to limit head movement. Finally, the elbow was supported by a foam pad during the runs in order to minimize arm fatigue and to reduce movement of the elbow and shoulder.

Localizer Run for Haptic Object-Selective Cortex

A haptic localizer run to determine haptic object-selective cortical areas was performed by 12 additional participants. The haptic localizer consisted of shape and texture blocks. The shape blocks contained either familiar objects (selected from 8 everyday, hand-held objects, such as spoon, cup, comb, etc.) or unfamiliar objects (selected from 8 three-dimensional novel objects, Lee and Wallraven 2013). The texture blocks contained stimuli selected from 16 flat textures spanning a wide variety of materials and texture granularity mounted on a piece of cardboard. There were a total of 4 shape and 4 texture blocks presented in a random order with a 15-s break in between blocks. The block type (shape or texture) was announced before the start of each block via an audio cue to both the experimenter and the participant. During the task, participants were handed a shape or a texture by the experimenter and asked to explore it for 6 s using their right (dominant) hand. Similar to the main experiment (see above), participants had to perform a one-back task in which they were required to answer for each trial whether it contained the same or a different object than before. Answers were given on a button box held in the left hand of the participant. Each block started with a 15-s baseline, followed by 5 trials and a 15-s break. The total length of the localizer run was 585 s [8 blocks × (15 s baseline + 5 × (6 s exploration time + 3 s response time)) + 7 × 15 s break].

Analysis of Results From the Similarity-Judgment Task

Individual ratings were first correlated across participants to analyze rating consistency. Ratings were then averaged across participants to obtain group similarity matrices for the visual or haptic group. MDS was applied to these matrices to reconstruct perceptual spaces using the MATLAB (R2014a, The Mathworks, Natick, MA, USA) built-in function mdscale. We determined the optimal number of dimensions for the fit based on standard criteria for the stress value of the MDS solution (e.g., Gaißert et al. 2010). The resulting spaces were compared using “Procrustes analysis,” which maps spaces onto each other without distorting interobject distances of the data. The standardized average Euclidean distances between corresponding points after Procrustes transformation were used to determine the correlation between 2 spaces as a goodness-of-fit measure (Gaißert et al. 2010).
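A minimal MATLAB sketch of this pipeline is given below. It assumes a 9 × 9 group similarity matrix simMat (1 = fully dissimilar, 7 = identical) and a 9 × 2 physical parameter space physSpace; variable names are illustrative, and treating 1 − d from procrustes as the goodness-of-fit measure is an assumption about the exact implementation.

```matlab
% Convert similarities to dissimilarities and make the matrix a valid
% input for mdscale (symmetric, zero diagonal).
dissim = max(simMat(:)) - simMat;
dissim = (dissim + dissim') / 2;
dissim(1:10:end) = 0;              % linear indices of the 9x9 diagonal

% Stress values for 1- to 3-dimensional solutions to locate the "elbow".
stress = zeros(1, 3);
for d = 1:3
    [~, stress(d)] = mdscale(dissim, d);
end

% Two-dimensional perceptual space, aligned to the physical space by
% Procrustes analysis; 1 - d serves as the goodness-of-fit measure.
percSpace = mdscale(dissim, 2);
[d, alignedSpace] = procrustes(physSpace, percSpace);
r2 = 1 - d;
```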

Analysis of Imaging Data

Imaging data were analyzed using the Statistical Parametric Mapping software package (SPM8, Wellcome Department of Cognitive Neurology, London, UK) as well as custom MATLAB code for selecting regions of interest (ROIs) and for conducting the ROI-based correlational analyses and whole-brain correlational searchlight analyses (Op de Beeck et al. 2008; Bulthé et al. 2014).

Preprocessing

Participants' head movements were checked for excessive values in both translation and rotation, and none of the data had to be discarded. MR images were corrected for slice-timing differences, followed by realignment to the mean of the images, and functional images were normalized to the Montreal Neurological Institute (MNI) template with a re-sampling size of 2 × 2 × 2 mm. Images were then spatially smoothed with a 5-mm full-width at half-maximum Gaussian kernel.

Univariate Analysis for Visual and Haptic Runs

A standard general linear model (GLM) was used to obtain participants' object-specific brain activation in the one-back task during the visual and haptic runs. Since there were 9 different objects, the GLM contained 9 object regressors as well as 6 standard head-motion covariates. We fitted one GLM for the visual run and one GLM for the haptic run. The resulting beta-estimates were later used as the basis for correlational analyses to yield visual and haptic "neural" spaces. It should be noted that the purpose of this analysis was solely to obtain these beta-estimates (e.g., Op de Beeck et al. 2008) and not to determine object-baseline contrasts. All whole-brain analyses used thresholds of P < 0.05 with family-wise error correction.
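As a sketch of how the resulting beta-estimates might be loaded for the subsequent multivoxel analyses: SPM8 writes one beta image per regressor, and the assumption that the first 9 beta files correspond to the 9 object regressors is illustrative.

```matlab
% Read the 9 object beta images of one run into a 4-D array [X x Y x Z x 9].
nObj = 9;
for i = nObj:-1:1                  % reverse loop preallocates betaMaps
    V = spm_vol(sprintf('beta_%04d.img', i));
    betaMaps(:, :, :, i) = spm_read_vols(V);
end
```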

ROI Selection

To enable ROI-based analysis, we first selected standard visual-processing areas along the ventral stream from the occipital lobe to the temporal lobe: Brodmann area (BA) 17 as early visual cortex, and BA18, 19, and 37 as higher-level visual-processing areas. ROIs were then defined based on group brain activation obtained from the contrast of all 4 kinds of images versus baseline in the visual localizer run (using a 2 × 2 factorial design), masked by anatomically defined BA images generated from the PickAtlas software (Maldjian et al. 2003). Since previous studies showed no differences in visual processing across hemispheres (Op de Beeck et al. 2008; Peelen et al. 2014), ROIs were combined (except for BA37, see below). In addition, an object-selective ROI was defined individually by the contrast of intact versus scrambled images from the visual localizer task (for corresponding results using the group-based object-selective ROI, see Supplementary Material). The resulting visual object-selective ROI was located in the LO and occipito-temporal areas (commonly referred to as the LOC; Malach et al. 1995; Kourtzi and Kanwisher 2001). Since previous studies have implied functional differences between left and right LOC (Zhang et al. 2004), we used separate ROIs in the following. Resulting ROI sizes in voxels were as follows: BA17 = 245; BA18 = 628; BA19 = 253; BA37(left) = 333; BA37(right) = 369; LOC(left) = 233 ± 44; LOC(right) = 259 ± 41 (there was partial overlap of LOC with areas BA19 and BA37).

For haptic processing, we selected early somatosensory areas (BA1, BA2, and BA3), associative high-level somatosensory areas (BA5, BA7, BA39, and BA40), and BA6 as premotor cortex as ROIs. All ROIs were defined based on group brain activation obtained from the contrast of object shape versus baseline in the haptic localizer run (except for BA6 using an object shape versus texture contrast), masked by anatomically defined BA images generated from the PickAtlas software. To check for lateralization effects, we split all areas into left- and right-hemispheric regions. ROI sizes were as follows: BA1(left) = 86; BA1(right) = 96; BA2(left) = 237; BA2(right) = 177; BA3(left) = 289; BA3(right) = 160; BA5(left) = 102; BA5(right) = 84; BA6(left) = 294; BA6(right) = 255; BA7(left) = 204; BA7(right) = 230; BA39(left) = 201; BA39(right) = 176; BA40(left) = 311; BA40(right) = 229. For an illustration of the selected ROIs, see Supplementary Figure 1.
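The masking step used for both the visual and haptic ROI definitions can be sketched as follows, assuming the thresholded group contrast and the PickAtlas-generated Brodmann mask have been saved as binary volumes in the same (MNI) space; the file names are hypothetical.

```matlab
% Intersect functional group activation with an anatomical Brodmann mask.
funcMask = spm_read_vols(spm_vol('group_contrast_thresholded.nii')) > 0;
anatMask = spm_read_vols(spm_vol('BA17_pickatlas.nii')) > 0;
roiMask  = funcMask & anatMask;    % voxels that are active AND lie in BA17
nVoxels  = nnz(roiMask);           % ROI size in voxels, as reported above
```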

Generating Neural Similarity Matrices

Individual visual and haptic neural matrices were determined from extracted beta-values for object contrasts in the visual and haptic tasks. Beta-values were normalized for each voxel in each ROI by subtracting the average beta-value across all objects (note that by doing so, many extraneous effects such as experimenter's hand movement as well as different visual and haptic baseline structures were eliminated since the contrast consisted of one object vs. the other objects in that run). Next, the multivoxel pattern of normalized beta-values for each object was Pearson-correlated with the multivoxel pattern for each other object, resulting in 9 × 9-element similarity matrices. This was done separately for each participant, yielding individual visual and haptic neural similarity matrices. Finally, we averaged all individual neural matrices to create one group neural matrix for each participant group in each ROI. These group neural matrices were correlated with group behavioral matrices to determine which ROI corresponded to the behavioral results.
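In MATLAB, this step reduces to a few lines. The sketch below assumes the 4-D beta array betaMaps and the binary mask roiMask from the sketches above, and that the matrix correlation is computed over the unique off-diagonal entries (an assumption about the exact implementation).

```matlab
% Extract the ROI pattern (voxels x objects) and normalize each voxel by
% subtracting its mean beta across all 9 objects.
pat       = reshape(betaMaps, [], 9);
betas     = pat(roiMask(:), :);
betasNorm = bsxfun(@minus, betas, mean(betas, 2));

% 9 x 9 neural similarity matrix: Pearson correlations between the
% multivoxel patterns of all object pairs.
neuralSim = corr(betasNorm);

% Correlate neural and behavioral similarity spaces over the unique
% off-diagonal entries (behavSim: 9 x 9 group behavioral matrix).
offDiag = triu(true(9), 1);
rNeuralBehav = corr(neuralSim(offDiag), behavSim(offDiag));
```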

Permutation analyses (e.g., Op de Beeck et al. 2008) were used to determine the statistical validity of correlations between a group neural matrix and a group behavioral matrix (based on permuting the object index in the neural matrix) and for determining differences between the 2 groups (based on permuting the group membership of participants).
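A sketch of the first of these permutation tests, continuing from the variables above; the number of permutations and the one-sided evaluation are assumptions, and permuting group membership proceeds analogously at the level of participants.

```matlab
% Null distribution: permute the object indices of the neural matrix and
% recompute its correlation with the behavioral matrix.
nPerm = 10000;
rNull = zeros(nPerm, 1);
for k = 1:nPerm
    p = randperm(9);
    permSim  = neuralSim(p, p);            % relabel objects consistently
    rNull(k) = corr(permSim(offDiag), behavSim(offDiag));
end
pValue = mean(rNull >= rNeuralBehav);      % one-sided permutation P-value
```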

Searchlight Analysis

We also performed a whole-brain searchlight analysis to augment the ROI-based correlational analysis (Kriegeskorte et al. 2006; Bulthé et al. 2014). For this, we computed voxel-wise correlations with the behavioral matrices for beta-values averaged within a 2-mm³ cube. This was done for each participant, followed by a standard random-effects group analysis to identify significant voxels at the group level. Since these analyses were used to verify to what degree the ROI analyses captured the cortical distribution of shape representations, the threshold for significance was set to P < 0.001 (uncorrected) to ensure broader coverage.
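One common implementation of such a correlational searchlight is sketched below, reading the procedure as a pattern-based analysis over small cubes; the cube half-width, the handling of volume borders, and the variable names (betaMaps, behavSim) are assumptions.

```matlab
% For each voxel, take the multivoxel pattern within a small cube, compute
% the 9 x 9 neural similarity matrix, and correlate it with the behavioral
% matrix; rMap stores the resulting correlation per center voxel.
[sx, sy, sz, nObj] = size(betaMaps);
rMap = nan(sx, sy, sz);
mask = triu(true(nObj), 1);
r = 1;                                     % cube half-width in voxels
for x = 1+r : sx-r
  for y = 1+r : sy-r
    for z = 1+r : sz-r
      cube = betaMaps(x-r:x+r, y-r:y+r, z-r:z+r, :);
      pat  = reshape(cube, [], nObj);      % voxels-in-cube x objects
      pat  = bsxfun(@minus, pat, mean(pat, 2));
      nSim = corr(pat);
      rMap(x, y, z) = corr(nSim(mask), behavSim(mask));
    end
  end
end
```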

Results

Visual Perceptual Space

Correlation analyses of individual ratings across participants showed high consistency with a mean correlation of r = 0.897 (minimum = 0.818, maximum = 0.953), already indicating that participants represented the 9 object shapes in a similar fashion. The average group similarity matrix is shown in Figure 2A. Stress values of the MDS on the group matrix yielded a clear drop (or "elbow") for two-dimensional solutions [stress(1) = 0.220, stress(2) = 0.067, and stress(3) = 0.004], showing that 2 dimensions were sufficient to explain the data. The resulting two-dimensional perceptual space after Procrustes transformation is shown in Figure 2C and conforms well to the underlying topology of the superformula parameter space (which should show a cross-like shape—cf. Fig. 1A). The average fit quality with the physical parameter space was high with a correlation of r² = 0.739 [see also Gaißert et al. (2010) for comparable reconstruction values with different novel objects].

Figure 2.

(A and B) Similarity matrices for visual (A) and haptic (B) similarity judgments. Numbers on axes refer to objects in Figure 1 (blue = dissimilar and red = similar). (C) MDS results for visual (red solid line) and haptic (blue solid line) similarity judgments after Procrustes alignment. Black, thin lines connect corresponding object locations of the 2 perceptual spaces. The physical parameter space is shown in gray and the objects aligned in the cross-shape are shown as inset. Note the high topological similarity between the visual and haptic perceptual spaces.

Haptic Perceptual Space and Comparison to Visual Space

Participants' haptic similarity ratings were also highly consistent (mean r = 0.903, minimum = 0.721, maximum = 0.965). The underlying topology of the superformula parameter space was likewise recovered well for the haptic group in 2 dimensions as shown by MDS [stress(1) = 0.112, stress(2) = 0.044, and stress(3) = 0.001]: Results of the Procrustes analysis showed a similar fit quality to the physical parameter space with r² = 0.701 (Fig. 2C).

Importantly, we observe that the 2 perceptual spaces not only capture the physical space, but that they also match each other very well: Average fit quality between the visual and haptic perceptual spaces indicates a good match with r² = 0.938. This confirms earlier results for 2 different sets of novel objects, for which good fits of perceptual spaces to physical spaces were observed and for which fits between the 2 perceptual spaces were also better than between the perceptual spaces and the physical space (Cooke et al. 2007; Gaißert et al. 2010, 2011). These results show that the brain is capable of extracting complex shape parameterizations [see also Edelman et al. (1998)], but that there are some biases in how the shape information is extracted—more importantly for the present study, however, these biases occur in the same fashion in both vision and haptics, indicating highly similar shape representations across the 2 modalities.

Behavioral Results from Localizer Runs and fMRI Runs

For the visual localizer, accuracy was 91.22% (SD = 5.1). For the haptic localizer, the overall accuracy was 89.84% (SD = 6.5). Both results confirm that participants were focused during the localizer tasks.

For the functional runs, mean accuracy for the one-back task was high (>94% on average), indicating that all participants concentrated on the task. There were no differences in accuracy between the 2 groups in the visual run (t(21) = 1.285, P = 0.213) or the haptic run (t(21) = 0.154, P = 0.879). Furthermore, task performance between vision and haptics was not significantly different within groups [visual group: 96.80% for visual task vs. 96.13% for haptic task (t(10) = 0.256, P = 0.803); haptic group: 94.60% for visual task vs. 96.45% for haptic task (t(11) = −2.105, P = 0.059); due to potential ceiling effects, all tests use arcsin-transformed accuracy values].
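For completeness, the arcsine transform and a between-group comparison can be sketched in MATLAB as follows; the variable names are assumptions.

```matlab
% Arcsine-square-root transform of accuracy proportions (0-1) to stabilize
% variance near ceiling, followed by a two-sample t-test between groups.
accV = asin(sqrt(accVisualGroup));   % per-participant accuracies, visual group
accH = asin(sqrt(accHapticGroup));   % per-participant accuracies, haptic group
[~, p, ~, stats] = ttest2(accV, accH);
```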

Neural Representation of Visual Perceptual Space

As expected from earlier work, results from the correlation analysis between averaged group neural matrices and visual behavioral matrices show good correlations along the ventral pathway (for the full list of results for each ROI, see Table 1). Specifically, early visual cortex (BA17) captured visual shape information in both groups: in the visual group, for which the fMRI experiment started with this visual presentation of the objects, as well as in the haptic group, for which the haptic data acquisition preceded this visual presentation. Further along the ventral pathway, BA18, BA19, and BA37 also showed high correlations. Moreover, both groups also had significant correlations in individually defined bilateral LOC (note that, in addition, we found similar results when using LOC-ROIs defined by both the group visual and haptic localizers; see Supplementary Table 1).

Table 1

Results of correlations of the perceptual space with the neural space for all ROIs

ROI          Visual task                   Haptic task
             Visual group  Haptic group    Visual group  Haptic group
Left BA1       −0.017        0.179           0.525***      0.252
Right BA1       0.243       −0.004           0.570**       0.258
Left BA2        0.096        0.210           0.390*        0.318*
Right BA2       0.150       −0.142           0.447*        0.431*
Left BA3       −0.026        0.042           0.535***      0.431**
Right BA3       0.342*       0.173           0.470*        0.356*
Left BA5        0.224        0.343*          0.201         0.458**
Right BA5       0.175        0.210           0.462**       0.210
Left BA6       −0.068        0.224           0.584**       0.518**
Right BA6       0.080        0.365*          0.430*        0.448**
Left BA7        0.394*       0.430**         0.632***      0.614***
Right BA7       0.193        0.052           0.600***      0.358*
BA17            0.726***     0.679***        0.482**       0.055
BA18            0.701***     0.702***        0.496*        0.175
BA19            0.648***     0.573***        0.440*        0.383*
Left BA37       0.579***     0.518**         0.625***      0.191
Right BA37      0.583***     0.322*          0.096         0.027
Left BA39       0.316*       0.180           0.364*        0.660***
Right BA39      0.529***     0.216           0.271         0.354*
Left BA40       0.275        0.499**         0.211         0.514**
Right BA40      0.206       −0.031           0.695***      0.248
Left LOC        0.614***     0.726***        0.481**       0.497**
Right LOC       0.596***     0.467**         0.412*        0.079

Note: Asterisks denote P-values as determined in permutation analyses (*P < 0.05; **P < 0.01; ***P < 0.001). Results are reported in detail in the main text for the visual and haptic tasks only if an area correlates significantly in both groups; in addition, group differences are reported whenever the correlations across groups differ significantly.


The searchlight analysis results shown in Figure 3A support this ROI-based analysis, charting the reconstruction of the perceptual space in the human occipital lobe along the ventral pathway (see also Supplementary Fig. 1, showing the overlap between the functionally selected ROIs and the searchlight results). As a further illustration, Figure 3C shows the reconstructed neural space from area BA18 (as reconstructed by MDS) together with the behavioral space for the visual group and the visual task.

Figure 3.

Whole-brain searchlight analysis for the visual (A) and haptic (B) task mapped on inflated cortices using the CARET software (Van Essen et al. 2001). (A) Significant clusters of positive correlation are visible along the ventral pathway up to inferior temporal cortex (P < 0.001, uncorrected). The labels are defined as follows: IFG, inferior frontal gyrus; PreCG, precentral gyrus; LOC, lateral occipital cortex; ITG, inferior temporal gyrus; SFG, superior frontal gyrus; MFG, medial frontal gyrus; SMA, supplementary motor area; PoCG, postcentral gyrus; SPL, superior parietal lobe; MOG, middle occipital gyrus; AG, angular gyrus. (B) Significant clusters of positive correlation are visible in both early and associative somatosensory areas, as well as in ITG, premotor cortex, and IFG (P < 0.001, uncorrected). The labels are defined as follows: LOC, lateral occipital cortex; ITG, inferior temporal gyrus; OL, occipital lobe. (C) MDS reconstruction comparing the visual perceptual space (red, white numbers, solid line) and the neural space in the visual task (pink, black numbers, broken line) for area BA18 for the visual group—the area with the highest correlations in both visual tasks. VG, visual group; VT, visual task. Black, thin lines connect corresponding object locations of the neural and perceptual spaces. The original physical space is shown in gray (white numbers), and the original objects are shown in their cross-pattern as an inset. Goodness-of-fit values: physical to neural space, r² = 0.623; perceptual to neural space, r² = 0.765. (D) MDS reconstruction comparing the haptic perceptual space (blue, white numbers, solid line) and the neural space in the haptic task (sky blue, black numbers, broken line) for left BA7 for the haptic group—the area with the highest correlations in both haptic tasks. HG, haptic group; HT, haptic task. Black, thin lines connect corresponding object locations of the neural and perceptual spaces. The original physical space is shown in gray (white numbers), and the original objects are shown in their cross-pattern as an inset. Goodness-of-fit values: physical to neural space, r² = 0.575; perceptual to neural space, r² = 0.652.

Neural Representation of Haptic Perceptual Space

The correlational analyses in the somatosensory areas indicated that several areas represented the haptic perceptual space. This included early somatosensory areas of bilateral BA3 and bilateral BA2. In addition, shape representations were found in higher-level haptic shape-processing areas in bilateral BA7 and left BA39 in both groups. Finally, bilateral BA6 as a premotor area also had positive correlations for both groups for the haptic task. Results from the whole-brain searchlight analysis confirming the ROI-based analysis are shown in Figure 3B (see Supplementary Fig. 1, showing the overlap between the functionally selected ROIs and the searchlight results). Note that the searchlight results revealed bilateral correlations in the parietal lobe similar to the ROI-based analysis. In addition, left inferior frontal gyrus (IFG) was shown to represent haptic shape well. As an illustration, Figure 3D shows the reconstruction of the neural similarity space of the haptic group for the haptic task from left BA7 in comparison with the behavioral data.

Neural Representation of Haptic Perceptual Similarity Space in the Human Ventral Pathway

Interestingly, in the ventral “visual” stream, left LOC also showed significant correlation with haptic perceptual space in both groups in the haptic task (again, results were observed not only for individually defined, but also for group-level LOC based on both visual and haptic localizer results, see Supplementary Table 1). In addition, bilateral BA19 showed weak, yet significant correlations with the haptic perceptual space. This indicates that a visually defined, object-selective area as well as BA19 not only encode the perceptual topology of “visually presented” shapes (see above), but also that of “haptically presented” objects. Figure 4 illustrates the similar topology of the averaged neural space (across both groups and tasks) for left LOC and the averaged perceptual space (across both groups).

Figure 4.

MDS reconstruction of the averaged neural space for left LOC (averaged across both groups and both tasks, light green, black numbers, broken line) compared with the averaged perceptual space (averaged across both groups, dark green, white numbers, solid line). Black, thin lines connect corresponding object locations of the neural and perceptual spaces. The original physical space is shown in gray (white numbers), and the original objects are shown in their cross-pattern as an inset.

Group Differences

Since participants were tested with both modalities in the scanner but only had prior experience in one of the 2 modalities, we can also investigate how previous experience modulates this cross-modal information transfer. Permutation analyses on the correlations compared across groups revealed that haptically explored objects were better reconstructed for the visual group than for the haptic group in early visual cortex (BA17, visual group: r = 0.482, P = 0.007; haptic group: r = 0.055, P = 0.374; group differences: P = 0.01) and left BA37 (visual group: r = 0.625, P < 0.001; haptic group: r = 0.191, P = 0.148; group differences: P = 0.04). A similar trend was observed in right LOC (visual group: r = 0.412, P = 0.020; haptic group: r = 0.079, P = 0.317; group differences: P = 0.059) for the haptic task. Overall, these results indicate that prior visual experience enhances the representations in visual cortex for the shape of haptically presented objects.

Interestingly, prior haptic experience did not lead to significant modulations of correlations across groups in somatosensory cortex for either visual or haptic stimuli (group differences for the visual task in somatosensory areas: all P > 0.09; group differences for the haptic task in somatosensory areas: all P > 0.11). Whole-brain searchlight analyses conducted separately for each group (Fig. 5A) found that correlations of the neural similarity data of visually presented stimuli with the visual perceptual space peaked at (x, y, z = −10, −92, −14) for the visual group and at (x, y, z = −10, −94, −4) for the haptic group, both of which are located in the occipital lobe (BA17). The extent of activation tended to differ between the 2 groups (red and blue colors in Fig. 5A), but its general location and peak activity were similar. In contrast, correlations of the neural similarity data of haptically presented stimuli with the haptic perceptual space peaked for the visual group at (x, y, z = −46, −62, −12) in the inferior occipital gyrus (nearest gray matter: BA37) in the ventral pathway, whereas for the haptic group they peaked at (x, y, z = −20, −56, 66) in the superior parietal lobe (nearest gray matter: BA7), located in associative somatosensory areas (Fig. 5B). These results also confirm that previous experience modulates which brain regions are involved in representing haptic information.

Figure 5.

(A and B) Results of the whole-brain searchlight analysis comparing the visual (red) and the haptic (blue) group mapped on inflated cortices using the CARET software (Van Essen et al. 2001) for the visual task (A) and the haptic task (B) (P < 0.001, uncorrected). VG refers to the visual group, whereas HG refers to the haptic group. Labels are defined as follows: (A) LOC, lateral occipital cortex; IOG, inferior occipital gyrus; MOG, middle occipital gyrus; CG, calcarine gyrus and (B) PreCG, precentral gyrus; LOC, lateral occipital cortex; ITG, inferior temporal gyrus; PoCG, postcentral gyrus; SPL, superior parietal lobe; MOG, middle occipital gyrus; AG, angular gyrus.

Discussion

In this study, we investigated visual and haptic shape representations in the brain using parametrically defined, novel shapes. We first showed that both visual and haptic perceptual spaces conform well to the physical parameter space. Importantly, the 2 perceptual spaces are also highly similar across modalities. This finding extends our previous studies with different types of stimuli in different task environments and highlights the ability of the brain to analyze and faithfully represent even complex shape spaces (Cooke et al. 2007; Gaißert et al. 2010, 2011; Gaissert and Wallraven 2012; Wallraven et al. 2013).

We then analyzed fMRI data on visual and haptic shape processing using a multivoxel approach similar to representational similarity analysis (Kriegeskorte et al. 2006; Peelen and Caramazza 2012; Mur et al. 2013; Peelen et al. 2014) as well as ROI-based analyses and were able to pinpoint areas in the brain that represent this perceptual space.

Along the ventral stream, we found strong positive correlations between neural patterns and the visual perceptual space in bilateral areas BA17, 18, 19, and 37, as well as in LOC. In terms of visual processing, these findings confirm previous studies highlighting the role of the occipital lobe in object-shape processing (Eger et al. 2008; Haushofer et al. 2008; Op de Beeck et al. 2008; Drucker and Aguirre 2009; Peelen and Caramazza 2012; Mur et al. 2013). Interestingly, whereas most of these studies implicated higher-level, object-selective brain areas such as LOC in processing fine details of object-shape properties, our results also show high correlations already in early visual cortex. Although Peelen and Caramazza (2012) have shown previously that pixel-wise information about shape is reflected in BA17, 18, and 19, the present study used visual presentation of stimuli rotated randomly in depth, which cannot be modeled well by direct pixel-based comparisons of "images." Indeed, going further than simple pixel-wise similarity, a recent study has implicated V1 in the processing of fine details of visually presented shape properties, suggesting that the involvement of early visual cortex may be caused by top-down influence (Smith and Goodale 2015). Similarly, a recent computational model of visual shape processing (Tschechne and Neumann 2014) suggests that feedback from higher areas can, for example, enhance task-relevant contour integration and curvature representation in early visual cortex, thus creating a rich shape representation.

In the case of haptic processing, our results for the first time map out how a perceptual shape space is constructed throughout the brain: We found veridical shape representations in several somatosensory areas, including bilateral early somatosensory cortex (BA3 and BA2), as well as higher-level areas, including contralateral (left) BA39 and bilateral BA7 and BA6. Very few, if any, studies so far have investigated higher-level, haptic shape processing of complex shapes in the brain—our results extend those of previous studies based on univariate analyses of haptic processing of non-parametric objects that indicated the involvement of several somatosensory areas (Bodegård et al. 2001; Reed et al. 2004; Snow et al. 2014). Interestingly, our results did not implicate SII in shape processing, confirming earlier results that show SII to be mostly involved in texture, hardness, and material processing (James et al. 2007).

Importantly, in addition to somatosensory areas, haptic object-shape properties were also well represented in the occipital lobe in left LOC and BA19 (Amedi et al. 2002; James et al. 2002; Lacey et al. 2009). The ROI results highlight both LOC and BA19 as multisensory convergence areas, whereas the searchlight result shows correlations in the inferior temporal gyrus mostly for the haptic task. In addition, visual shape was also represented to some degree in the superior parietal lobe in left BA7 (Zhang et al. 2004; James et al. 2007; Lacey et al. 2009; Smith and Goodale 2015)—an area that the searchlight analysis does not highlight for the visual task. Note, however, that searchlight analyses are in general prone to issues of parameter selection in terms of their sensitivity (such as searchlight radius; Op de Beeck 2010; Etzel et al. 2013). Additionally, the ROI-based results are consistent with earlier studies that localize visual representations in broad regions of LO [see, e.g., Op de Beeck et al. (2008)], whereas haptic representations are confined to a smaller area in LO [LOtv; see Amedi et al. (2001, 2002) and James et al. (2002)] and, as shown in our study, even extend into the inferior temporal lobe. Adding to this, a recent study provided evidence that, in addition to visual and haptic information, LOC and ITG may also encode auditory information related to shape as well as mental imagery of shape representations (Peelen et al. 2014). These results highlight the importance of these areas as multisensory convergence areas for "detailed" shape processing. Note that since this study used separate, unimodal runs, the results so far demonstrate clear spatial convergence of visual and haptic processing. The degree to which this generalizes to combined visuo-haptic processing, and in how far neurons in these areas possess true multisensory response characteristics, remains to be studied [using, for example, cross-modal matching paradigms as in Tal and Amedi (2009) and Kassuba et al. (2013)]. Also, it should be noted that in the present study, haptic perceptual shape space was represented well only in left LOC, whereas correlations were found bilaterally for visual processing. The question of lateralization in visual and haptic object-shape processing, therefore, needs further investigation (Crutch et al. 2005; Hömke et al. 2009; Deshpande et al. 2010).

The whole-brain searchlight analysis also revealed significant correlations for haptic shape processing in IFG—a region that extends over BA44, 45, and 46 and hence cannot be captured well using a standard ROI-based analysis. A few other studies have also implicated IFG in haptic object tasks (Binkofski et al. 1999; Stoeckel et al. 2003; Reed et al. 2004; Lacey et al. 2010). One explanation for the involvement of IFG in these and in our results comes from a recent meta-analysis, which has shown that IFG is often involved in the sequential ordering of motor execution, especially when the task requires selective attention (Liakakis et al. 2011). This finding fits well with our shape-processing task, which requires participants to explore the object in a controlled fashion so as to extract discriminative information efficiently.

Our results also highlight some differences in cross-modal information transfer: Whereas the visual group also employed early visual cortex to represent the haptic perceptual space, the haptic group did not recruit the occipital lobe (except for left LOC), suggesting that previous visual exposure engages early visual cortex "even in the absence of visual input" in the form of top-down activation. Similarly, although the haptic group also activated early visual cortex in the visual task, correlations were weaker overall than for the visual group. In addition, prior visual experience also recruited right LOC for the haptic perceptual space, whereas only left LOC was activated for the haptic group. One potential explanation for this dissociation may be that right LOC is associated more with visual imagery than left LOC (Zhang et al. 2004).

Interestingly, Snow et al. (2014) also found visual cortex (including primary visual areas reported here) to be activated during haptic processing of highly “familiar” objects. Although our objects are comparatively simple conceptually and hence participants may have acquired a certain stimulus familiarity during the short exposure time before the fMRI experiment (1.5 h), the objects do not share the same familiarity level as the everyday objects used in the study of Snow and colleagues. Overall, this may suggest that such top-down activation is indicative of more general shape processing in early visual areas. Further studies are needed, however, to track the acquisition of expertise and the accompanying changes in neural representations.

Recently, Vetter et al. (2014) demonstrated decoding of auditory patterns from early visual cortex and suggested that the visual activation may be due to visual imagery when fine details of shape information are required. Since the present study used a one-back task that explicitly required participants to keep one stimulus in mind for comparison, visual imagery may be able to explain these results at least in part, although more evidence is needed to resolve the issue.

In contrast to the effect of previous visual experience on the neural processing of haptic stimuli, prior "touch" experience of shape showed no significant modulation. Even though Smith and Goodale (2015) provided evidence that early somatosensory cortex can carry some visually obtained information about familiar objects with rich touch associations, in the present study only left BA5 and left BA40 in the parietal lobe and right BA6 in premotor cortex showed mild correlations with the visual perceptual space after prior haptic experience. These correlations, however, were not strong enough to result in significant group differences. There may be several reasons for this result: First, there may have been differences in task difficulty between the 2 cross-modal runs in the scanner. Since performance in the cross-modal run was equally good (and at a high level) for both groups, however, we conjecture that this cannot be the reason for the differences. Second, as stated above, our training time may not have been enough to "deeply" familiarize participants with the novel objects—perhaps haptic processing requires much longer periods of exposure to prime visually obtained object-shape information in somatosensory areas, which may only be activated for familiar objects. In this context, a recent review discussed cross-modal transfer between vision and haptics, suggesting that performance may be better when visual encoding is followed by haptic retrieval than for the reverse (Lacey and Sathian 2014; see also Dopjans et al. 2009). In addition, Kassuba et al. (2013) used cross-modal matching of familiar objects in fMRI and found that activation in the lateral occipital gyrus and the fusiform gyrus was higher for visual presentation followed by haptic presentation than for the reverse, indicating asymmetric information transfer. Such asymmetries, however, seem to depend on the task and the stimuli, since other behavioral studies have demonstrated symmetric cross-modal transfer for the categorization of novel, unfamiliar objects (Wallraven et al. 2013). Further investigations are necessary to better understand the role of training effects and familiarization—especially for haptic processing in the brain.

In summary, our study provides evidence that the human brain is able to reconstruct complex shape spaces remarkably well using both visual and haptic processing. Furthermore, these 2 different sensory modalities create highly similar perceptual spaces. Both visual and haptic object-shape information are reconstructed well already in early sensory areas (V1 for visual input and S1 for haptic input), as well as in higher-level areas. In addition, visual and haptic perceptual spaces are represented well in ventrolateral occipito-temporal cortex (LOC), suggesting this area as a candidate for a multisensory convergence area, or even a supramodal shape representation. Moreover, we were able to demonstrate that prior visual experience activates early visual cortex during haptic processing even in the absence of visual input. The framework of representational spaces—originating in Shepard's program—hence provides a powerful instrument to investigate the full processing pipeline that underlies our capability to understand and represent the rich world of shapes around us.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2013R1A1A1011768) and by the Brain Korea 21plus program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education. We gratefully acknowledge Nicky Daniels for assistance with the searchlight analysis, funded by European Research Council grant ERC-2011-Stg-284101 and federal research action grant IUAP/PAI P7/11.

Notes

The authors thank the anonymous reviewers for providing helpful comments. Conflict of Interest: None declared.

References

Amedi A, Jacobson G, Hendler T, Malach R, Zohary E. 2002. Convergence of visual and tactile shape processing in the human lateral occipital complex. Cereb Cortex. 12:1202–1212.
Amedi A, Malach R, Hendler T, Peled S, Zohary E. 2001. Visuo-haptic object-related activation in the ventral visual pathway. Nat Neurosci. 4:324–330.
Binkofski F, Buccino G, Posse S, Seitz RJ, Rizzolatti G, Freund HJ. 1999. A fronto-parietal circuit for object manipulation in man: evidence from an fMRI-study. Eur J Neurosci. 11:3276–3286.
Bodegård A, Geyer S, Grefkes C, Zilles K, Roland PE. 2001. Hierarchical processing of tactile shape in the human brain. Neuron. 31:317–328.
Bulthé J, De Smedt B, Op de Beeck HP. 2014. Format-dependent representations of symbolic and non-symbolic numbers in the human cortex as revealed by multi-voxel pattern analyses. Neuroimage. 87:311–322.
Cooke T, Jäkel F, Wallraven C, Bülthoff HH. 2007. Multimodal similarity and categorization of novel, three-dimensional objects. Neuropsychologia. 45:484–495.
Crutch SJ, Warren JD, Harding L, Warrington EK. 2005. Computation of tactile object properties requires the integrity of praxic skills. Neuropsychologia. 43:1792–1800.
Cutzu F, Edelman S. 1996. Faithful representation of similarities among three-dimensional shapes in human vision. Proc Natl Acad Sci USA. 93:12046–12050.
Deshpande G, Hu X, Lacey S, Stilla R, Sathian K. 2010. Object familiarity modulates effective connectivity during haptic shape perception. Neuroimage. 49:1991–2000.
Dopjans L, Wallraven C, Bülthoff HH. 2009. Cross-modal transfer in visual and haptic face recognition. IEEE Trans Haptics. 2:236–240.
Drucker DM, Aguirre GK. 2009. Different spatial scales of shape similarity representation in lateral and ventral LOC. Cereb Cortex. 19:2269–2280.
Edelman S, Grill-Spector K, Kushnir T, Malach R. 1998. Toward direct visualization of the internal shape representation space by fMRI. Psychobiology. 26:309–321.
Eger E, Ashburner J, Haynes J-D, Dolan RJ, Rees G. 2008. fMRI activity patterns in human LOC carry information about object exemplars within category. J Cogn Neurosci. 20:356–370.
Etzel JA, Zacks JM, Braver TS. 2013. Searchlight analysis: promise, pitfalls, and potential. Neuroimage. 78:261–269.
Gaißert N, Bülthoff HH, Wallraven C. 2011. Similarity and categorization: from vision to touch. Acta Psychol (Amst). 138:219–230.
Gaißert N, Wallraven C, Bülthoff HH. 2010. Visual and haptic perceptual spaces show high similarity in humans. J Vis. 10:2.
Gaissert N, Wallraven C. 2012. Categorizing natural objects: a comparison of the visual and the haptic modalities. Exp Brain Res. 216:123–134.
Gauthier I, Tarr MJ. 1997. Becoming a "Greeble" expert: exploring mechanisms for face recognition. Vision Res. 37:1673–1682.
Gielis J. 2003. A generic geometric transformation that unifies a wide range of natural and abstract shapes. Am J Bot. 90:333–338.
Grill-Spector K, Kourtzi Z, Kanwisher N. 2001. The lateral occipital complex and its role in object recognition. Vision Res. 41:1409–1422.
Haushofer J, Livingstone MS, Kanwisher N. 2008. Multivariate patterns in object-selective cortex dissociate perceptual and physical shape similarity. PLoS Biol. 6:e187.
Hömke L, Amunts K, Bönig L, Fretz C, Binkofski F, Zilles K, Weder B. 2009. Analysis of lesions in patients with unilateral tactile agnosia using cytoarchitectonic probabilistic maps. Hum Brain Mapp. 30:1444–1456.
James TW, Humphrey GK, Gati JS, Servos P, Menon RS, Goodale MA. 2002. Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia. 40:1706–1714.
James TW, Kim S, Fisher JS. 2007. The neural basis of haptic object processing. Can J Exp Psychol. 61:219–229.
Kaplan JT, Meyer K. 2012. Multivariate pattern analysis reveals common neural patterns across individuals during touch observation. Neuroimage. 60:204–212.
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. 2013. Vision holds a greater share in visuo-haptic object recognition than touch. Neuroimage. 65:59–68.
Klatzky RL, Lederman SJ, Metzger VA. 1985. Identifying objects by touch: an "expert system". Percept Psychophys. 37:299–302.
Kourtzi Z, Kanwisher N. 2001. Representation of perceived object shape by the human lateral occipital complex. Science. 293:1506–1509.
Kriegeskorte N, Goebel R, Bandettini P. 2006. Information-based functional brain mapping. Proc Natl Acad Sci USA. 103:3863–3868.
Lacey S, Flueckiger P, Stilla R, Lava M, Sathian K. 2010. Object familiarity modulates the relationship between visual object imagery and haptic shape perception. Neuroimage. 49:1977–1990.
Lacey S, Sathian K. 2014. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol. 5:730.
Lacey S, Tal N, Amedi A, Sathian K. 2009. A putative model of multisensory object representation. Brain Topogr. 21:269–274.
Lee H, Wallraven C. 2013. The brain's "superformula": perceptual reconstruction of complex shape spaces. J Vis. 13:445.
Liakakis G, Nickel J, Seitz RJ. 2011. Diversity of the inferior frontal gyrus—a meta-analysis of neuroimaging studies. Behav Brain Res. 225:341–347.
Malach R, Reppas J, Benson R, Kwong K, Jiang H, Kennedy W, Ledden P, Brady T, Rosen B, Tootell R. 1995. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci USA. 92:8135–8139.
Maldjian JA, Laurienti PJ, Kraft RA, Burdette JH. 2003. An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. Neuroimage. 19:1233–1239.
Mur M, Meys M, Bodurka J, Goebel R, Bandettini PA, Kriegeskorte N. 2013. Human object-similarity judgments reflect and transcend the primate-IT object representation. Front Psychol. 4:128.
Op de Beeck HP. 2010. Against hyperacuity in brain reading: spatial smoothing does not hurt multivariate fMRI analyses? Neuroimage. 49:1943–1948.
Op de Beeck HP, Torfs K, Wagemans J. 2008. Perceived shape similarity among unfamiliar objects and the organization of the human object vision pathway. J Neurosci. 28:10111–10123.
Peelen MV, Caramazza A. 2012. Conceptual object representations in human anterior temporal cortex. J Neurosci. 32:15728–15736.
Peelen MV, He C, Han Z, Caramazza A, Bi Y. 2014. Nonvisual and visual object shape representations in occipitotemporal cortex: evidence from congenitally blind and sighted adults. J Neurosci. 34:163–170.
Reed CL, Shoham S, Halgren E. 2004. Neural substrates of tactile object recognition: an fMRI study. Hum Brain Mapp. 21:236–246.
Rosch E. 1999. Principles of categorization. In: Margolis E, Laurence S, editors. Concepts: core readings. p. 189–206.
Shepard RN. 1987. Toward a universal law of generalization for psychological science. Science. 237:1317–1323.
Smith FW, Goodale MA. 2015. Decoding visual object categories in early somatosensory cortex. Cereb Cortex. 25:1020–1031.
Snow JC, Strother L, Humphreys GW. 2014. Haptic shape processing in visual cortex. J Cogn Neurosci. 26:1154–1167.
Stoeckel MC, Weder B, Binkofski F, Buccino G, Shah NJ, Seitz RJ. 2003. A fronto-parietal circuit for tactile object discrimination: an event-related fMRI study. Neuroimage. 19:1103–1114.
Tal N, Amedi A. 2009. Multisensory visual–tactile object related network in humans: insights gained using a novel crossmodal adaptation approach. Exp Brain Res. 198:165–182.
Tschechne S, Neumann H. 2014. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation. Front Comput Neurosci. 8:93.
Van Essen DC, Drury HA, Dickson J, Harwell J, Hanlon D, Anderson CH. 2001. An integrated software suite for surface-based analyses of cerebral cortex. J Am Med Inform Assoc. 8:443–459.
Vetter P, Smith FW, Muckli L. 2014. Decoding sound and imagery content in early visual cortex. Curr Biol. 24:1256–1262.
Wallraven C, Bülthoff HH, Waterkamp S, van Dam L, Gaißert N. 2013. The eyes grasp, the hands see: metric category knowledge transfers between vision and touch. Psychon Bull Rev. 21:976–985.
Wallraven C, Dopjans L. 2013. Visual experience is necessary for efficient haptic face recognition. NeuroReport. 24:254–258.
Zhang M, Weisser VD, Stilla R, Prather S, Sathian K. 2004. Multisensory cortical processing of object shape and its relation to mental imagery. Cogn Affect Behav Neurosci. 4:251–259.
