Original Article
 
Face and emotional expression processing and event-related potentials in a case study of impaired face perception
Lucy J. Troup1, Stephanie Bastidas2, Jason S. Nomi3, Maia T. Nguyen2, Tien Tong4
1PhD, Department of Psychology, Colorado State University, Fort Collins, CO, United States.
2MS, Department of Psychology, Colorado State University, Fort Collins, CO, United States.
3PhD, Department of Psychology, University of Miami, Miami, Florida, United States.
4BSc, Department of Psychology, Colorado State University, Fort Collins, CO, United States.

Article ID: 100002P13LT2015
doi:10.5348/P13-2015-2-OA-2

Address correspondence to:
Lucy Jane Troup
Department of Psychology, Colorado State University
Fort Collins, Colorado
United States 80523-1876
Phone: 1 970 491 6820


How to cite this article
Troup LJ, Bastidas S, Nomi JS, Nguyen MT, Tong T. Face and emotional expression processing and event-related potentials in a case study of impaired face perception. Edorium J Psychol 2015;1:9–17.


Abstract
Aims: To evaluate face and emotional expression recognition in a single-participant event-related potential (ERP) case study.
Methods: We compared an individual with impaired face perception (participant G.O.) to 29 normal controls on behavioral tests of object, face, and expression recognition, and recorded ERPs in response to houses, faces, and faces with emotional expressions.
Results: Participant G.O. performed normally on behavioral tests of object and emotional expression recognition but was significantly impaired on tests of face recognition. Unlike controls, G.O. did not show a difference in P1 mean amplitude to houses compared with upright faces. Additionally, G.O. presented a marked decrease in temporal-occipital N170 amplitude in response to faces compared to controls, and a decrease in N170 and P300 amplitude in response to emotional expressions compared to controls. G.O. thus showed a pattern of face and emotion processing clearly distinct from that of control participants.
Conclusion: The behavioral deficits were not directly reflected in the ERP responses found for G.O. and controls. However, G.O. presented a distinctive pattern of scalp electrical activity for faces, both neutral and emotional, highlighting the importance of using multiple measures in the examination of face perception deficits in individuals.

Keywords: Event related potentials, Face recognition impairment, Emotion


Introduction

Human face recognition is a vastly complex process that on initial inspection appears seamless. Within the literature on human face perception there is consensus that brain structures, both independent and overlapping, are necessary for face processing. Imaging studies using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) identify the fusiform gyrus, in the ventral stream, as a structure central to face perception [1] [2] [3] [4] [5] [6]. Electrophysiological techniques such as the event-related potential (ERP) have focused attention on temporal markers of face perception [7].

Explanations of face perception, including those which account for recognition in terms of identity and expression, capture both the serial and parallel nature of these structural accounts. The Bruce and Young (1986) model of face recognition and the Haxby, Hoffman, and Gobbini (2000) distributed model of face perception hypothesize independent mechanisms for the processing of identity and expression; Bruce and Young propose independent but parallel processing of identity and expression, while Haxby et al. propose separate but interacting mechanisms for identity and expression recognition [5] [8]. The two models are supported by evidence that identity recognition and expression recognition rely on independent and overlapping cortical networks [9] [10] [11] [12] [13].

Individuals with face perception deficits such as prosopagnosia, the inability to recognize faces, demonstrate impairments on behavioral face recognition tasks while performing normally on behavioral tasks of emotional expression recognition [13] [14] [15] [16] [17] [18]. Additionally, individuals with these deficits demonstrate a marked decrease in the amplitude of the ERP component most associated with face perception, the temporal-occipital N170. In normal individuals the N170 amplitude is much larger in response to faces compared to other objects such as cars or houses, whereas there is typically no difference between faces and objects in individuals with prosopagnosia [7] [19]. Therefore, the poor performance of individuals with prosopagnosia on behavioral tests of face recognition could be attributed to their similar N170 amplitudes in response to faces and houses. However, it is unclear in the literature how emotional expressions influence ERPs in individuals with developmental prosopagnosia compared to normal controls. Although individuals with prosopagnosia perform normally on behavioral tests of expression recognition, it is still not clear whether their ERPs during emotional expression recognition match those of normal controls. Some studies have found that emotional expression modulates the amplitude of the N170, while others have found no influence of expression on N170 amplitude [20] [21]. Additionally, emotional images have been shown to influence the Late Positive Potential (LPP), or P300, in normal individuals, but the literature is lacking regarding this effect in individuals with prosopagnosia [22]. Therefore, it is unclear how emotional expressions influence the N170 and P300 in individuals with developmental prosopagnosia relative to normal controls.

Aims
The current study had two purposes: first, to determine whether participant G.O. presents with a face perception deficit, marked by impaired face recognition with preserved object and expression recognition and decreased N170 amplitude in response to faces; and second, to compare ERPs between G.O. and normal controls in response to emotional facial expressions.


Materials and Methods

Case Description
G.O. is an 83-year-old right-handed male. He holds a Ph.D. in Psychology and is semi-retired. G.O. contacted our laboratory in November 2012 reporting difficulties with face recognition present since childhood. G.O. did not report any instances of brain insults that could have led to these difficulties. He reported no other cognitive or neurological deficits apart from the inability to recognize faces.

Neurologically Intact Controls: Twenty-nine undergraduate students (19 females; ages 19–27) with no history of neurological illness and no significant symptoms of depression or anxiety were included in the control group. Although our controls were not age-matched, the literature is conflicted as to the effects of age on ERPs; differences are mostly seen in component latency, appear to be task specific, and do not have a significant effect on sensory processing [23] [24]. The N170 component, for example, is considered relatively stable across adulthood [25]. The P300 does exhibit some age- and sex-related differences indicative of normal aging and cognitive decline, although the decrease in amplitude and increase in latency are gradual [26]. The number of participants who fully completed individual tasks varied from 21 to 29; descriptive statistics for each sub-group are given in Table 1. All students received credit in a psychology course for their participation and provided written consent.

Face Perception Tasks
All participants were screened for possible face recognition deficits using widely accepted measures of face processing [9] [27]. Each behavioral test, described below, assesses memory for faces validated as widely familiar (Famous Faces Test), recognition memory for novel faces (Cambridge Memory for Faces Test), and general face perception (Cambridge Face Perception Test). In addition to face perception, all participants were evaluated for emotion perception using the widely accepted measures described below.

Cambridge Memory for Faces Test: Developed by Duchaine and Nakayama (2006), this test examines recognition memory for faces in a forced-choice paradigm [27]. Grayscale photographs of 52 males in their 20s and 30s from three different views (frontal, left 1/3 profile, and right 1/3 profile) with cropped hair were used. Pictures of six individuals served as target items and the remaining 46 individuals served as distractors, grouped as one target and two distractors per test item. Practice stimuli consisted of three views (frontal, left 1/3 profile, and right 1/3 profile) of Bart Simpson. Participants completed an upright version and an inverted version, each with three stages of testing: an introductory stage in which each target item was tested separately, and two novel-image stages, one with Gaussian noise added. During practice and introduction, the three views of each target were presented for 3000 ms each, followed by three test items in which participants pressed a key to identify the target, for a total of three practice and 18 introduction trials. In the novel images condition, all six target faces were presented simultaneously in a frontal view for 20 s of study, followed by 30 test items presented at different angles (6 targets x 5 tests). The novel images with noise condition had Gaussian noise added to the test images and comprised 24 test items (6 targets x 4 tests). Reaction time and accuracy were recorded for each stage.
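For reference, the scored items across the three stages sum to the 72-point maximum per orientation used in the Results; a minimal sketch of that arithmetic (variable names are ours, chosen for illustration):

    # Scored item counts per stage of the Cambridge Memory for Faces Test (per orientation).
    # Names are illustrative; totals follow the stage descriptions above.
    N_TARGETS = 6
    introduction = N_TARGETS * 3   # 3 test items per target          = 18
    novel_images = N_TARGETS * 5   # 6 targets x 5 tests              = 30
    novel_noise = N_TARGETS * 4    # 6 targets x 4 tests (with noise) = 24
    total_items = introduction + novel_images + novel_noise
    assert total_items == 72       # maximum score per orientation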

Cambridge Face Perception Test: The CFPT screens for impairments in the perception of upright and inverted faces; participants are presented with morphed faces that they must order by similarity to a target face [28]. Grayscale images of six male subjects at a ¾ profile view served as target items; comparison items consisted of frontal views morphed toward images of other subjects by 28%, 40%, 52%, 64%, 76%, and 88%. Participants completed two practice trials (one inverted) followed by eight upright and eight inverted trials, presented in randomized order. Scores represented the average number and percent of errors for the upright and inverted conditions, calculated as the deviation of each face from its correct position.
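The deviation scoring described above can be illustrated with a short sketch: for each trial, the positions in which the participant placed the six morphs are compared with their correct similarity order and the absolute positional deviations are summed. This is our reading of the scoring rule, not the official scoring script; the labels are hypothetical.

    def cfpt_trial_error(response_order, correct_order):
        """Sum of absolute deviations between the position each face was placed in
        and its correct position, for one CFPT trial (six morphed faces)."""
        correct_pos = {face: i for i, face in enumerate(correct_order)}
        return sum(abs(i - correct_pos[face]) for i, face in enumerate(response_order))

    # Example: faces labelled by morph percentage; one adjacent swap gives an error of 2.
    correct = [88, 76, 64, 52, 40, 28]
    response = [88, 76, 52, 64, 40, 28]
    print(cfpt_trial_error(response, correct))  # 2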

Famous Faces Test: Thirty photographs of celebrities and politicians with cropped hair were presented on a computer screen at a viewing distance of 30 cm [9]. Each face was presented for 3000 ms after which participants were allowed unlimited time to provide the name of the celebrity or, if unknown, to write down any information unique to the depicted individual. Correct responses entailed providing the name of or information unique to the individual (e.g., "Captain Jean-Luc Picard" for Sir Patrick Stewart).

EEG Face Task: Stimuli consisted of grayscale images of 16 faces (8 female) with cropped hair measuring 239x276 pixels, presented upright and inverted, and 16 houses measuring 368x276 pixels, shown on a black background. As an attention check, participants were instructed to press a key every time a white box (276x276 pixels) appeared on the screen. Each block contained one presentation of each stimulus for a total of 48 trials presented in random order. Each trial consisted of a 1300 ms delay during which a black screen was presented, followed by a 100 ms presentation of an upright face, an inverted face, or a house. Controls completed 8 blocks and G.O. completed 16 blocks of the task.
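A compact sketch of how one 48-trial block of this passive-viewing sequence could be assembled from the counts and timings above; the stimulus identifiers and the block-building loop are placeholders, not the Stim2 scripts used in the study.

    import random

    BLANK_MS, STIM_MS = 1300, 100                    # trial timing from the task description

    faces = [f"face_{i:02d}" for i in range(16)]     # placeholder identifiers (8 female, 8 male)
    houses = [f"house_{i:02d}" for i in range(16)]

    def make_block():
        trials = ([(f, "upright") for f in faces] +
                  [(f, "inverted") for f in faces] +
                  [(h, "house") for h in houses])
        random.shuffle(trials)                       # random order within each block
        return trials

    block = make_block()
    assert len(block) == 48                          # controls saw 8 such blocks, G.O. saw 16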

Emotion Perception Tasks
Mind in the Eyes Test: Developed by Baron-Cohen, Wheelwright, Hill, Raste, and Plumb, the Mind Test assesses emotion attributions in adults in a forced-choice paradigm [29]. Thirty-six photographs of faces cropped to show only the eye region were presented, along with four adjectives describing emotional states. Participants completed this task on paper and were given unlimited time to circle the option most representative of the emotional state depicted by each image.

EEG Emotion Task: Stimuli were grayscale images of 20 individuals (10 female) depicting three facial expressions (Happy, Angry, Neutral; NimStim face database), presented with hair obscured by a black oval mask and resized to 210x270 pixels [30]. Based on the conditions of Rellecke, Sommer, and Schacht (gender decision, emotional decision), the explicit emotion processing condition consisted of explicit decisions about the emotional expression of the portrayed person, while the implicit emotion processing condition consisted of decisions about the sex of the person [31]. Stimuli were separated into two sets of 30, counterbalanced by task assignment: half of the participants viewed set 1 during implicit processing and set 2 during explicit processing, and vice versa. Each trial started with an instruction prompt to attend to either the Sex or the Emotion depicted by the individual, corresponding to implicit and explicit processing of the facial expressions. The instruction prompt (2000 ms) was followed by a 1500 ms interval to internalize the instructions, then a fixation cross (1000 ms) preceded the expression image, shown for 2000 ms. Finally, the participant was allowed up to 2000 ms to press a key with the appropriate response (implicit: male or female; explicit: neutral, happy, or angry).
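The trial timeline and the counterbalancing of the two stimulus sets can be summarized in a short sketch; the set labels and assignment rule are illustrative assumptions, since only the half-and-half split is stated in the text.

    # Trial timeline taken from the task description (durations in ms).
    TIMELINE_MS = [
        ("instruction prompt (attend to Sex or Emotion)", 2000),
        ("interval to internalize the instructions",      1500),
        ("fixation cross",                                1000),
        ("expression image",                              2000),
        ("response window",                               2000),
    ]

    def assign_sets(participant_index):
        """Alternate which stimulus set is seen implicitly vs. explicitly
        (an assumed rule; only the 50/50 counterbalancing is stated)."""
        if participant_index % 2 == 0:
            return {"implicit": "set1", "explicit": "set2"}
        return {"implicit": "set2", "explicit": "set1"}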

General Procedure
Participants provided written consent and were fitted with an EEG cap before completing the experiment, which was presented on a Dell desktop computer (Dell Inc., Round Rock, Texas, USA) at a viewing distance of 30 cm. All tasks were presented using Stim2 software (Compumedics NeuroScan, Charlotte, NC, USA) except for the questionnaires and the Mind in the Eyes test, which were completed on paper. All participants, including G.O., completed questionnaires recording demographic information and screening for symptoms of depression and anxiety using the Center for Epidemiological Studies Depression Scale (CES-D) and the State-Trait Anxiety Inventory (STAI) [32] [33]. Consistent with the norms for these instruments, cutoffs were scores of 16 or more on the CES-D and 35 or more on the STAI; participants at or above either cutoff were excluded from analysis.
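The screening rule can be written as a simple inclusion filter; a minimal sketch, with the cutoffs taken from the procedure above and hypothetical argument names.

    def include_participant(ces_d_score, stai_score):
        """Retain only participants below both clinical screening cutoffs
        (excluded if CES-D >= 16 or STAI >= 35)."""
        return ces_d_score < 16 and stai_score < 35

    print(include_participant(ces_d_score=10, stai_score=30))  # True: retained
    print(include_participant(ces_d_score=18, stai_score=30))  # False: excluded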

EEG Acquisition: Electroencephalography (EEG) was recorded from 21 Ag/AgCl electrodes (midline: Fz, Cz, Pz; left: Fp1, F3, F7, C3, T7, P3, P7, PO7, O1; and corresponding right electrodes) mounted on a SynAmps2 64-channel QuikCap (Compumedics NeuroScan, Charlotte, NC, USA) according to the 10-20 system. The vertex was used as the online reference. Horizontal electrooculogram was monitored with electrodes placed on the outer canthi of the left and right eyes. Impedance was kept below 11 kΩ. Signals were recorded at a sampling rate of 500 Hz and amplified with a band pass of 0.10–50 Hz, in epochs from -200 to 400 ms for the face processing task and -200 to 1000 ms for the emotion processing task.
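These acquisition parameters map onto a conventional preprocessing pipeline; the following is a minimal sketch for the face task using MNE-Python, with a hypothetical Neuroscan file name and event labels (the study's own analysis software is not specified here, so this is an illustration rather than the authors' pipeline).

    import mne

    # Hypothetical continuous recording; 500 Hz sampling, vertex online reference.
    raw = mne.io.read_raw_cnt("go_face_task.cnt", preload=True)
    raw.filter(l_freq=0.1, h_freq=50.0)                  # 0.10-50 Hz band pass

    events, event_id = mne.events_from_annotations(raw)  # assumes coded stimulus triggers
    face_ids = {k: v for k, v in event_id.items()
                if k in ("upright", "inverted", "house")}  # placeholder condition labels

    # Face task epochs: -200 to 400 ms, baseline corrected to the pre-stimulus period.
    epochs = mne.Epochs(raw, events, face_ids, tmin=-0.2, tmax=0.4,
                        baseline=(None, 0), preload=True)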

Data Analysis: Average reaction times and percent correct responses for each task were obtained for all participants. EEG data were re-referenced offline to the common average and baseline corrected to the pre-stimulus period. Artifact rejection was applied to trials with amplitudes exceeding ±50 µV at HEO channels and ±100 µV at the remaining electrodes. Grand averages for the face processing task were examined for controls in Type (Face-Upright, Face-Inverted, House) x Electrode (parietal, parieto-occipital, occipital) x Hemisphere (left, right) ANOVAs on the mean amplitudes and peak latencies of the P1 (80–140 ms), N170 (140–200 ms), and P3a (200–400 ms) components. The emotion processing task was similarly analyzed by Task (Implicit, Explicit), Emotion (Angry, Neutral, Happy), Electrode (temporal, parieto-occipital, occipital), and Hemisphere (left, right), with the addition of P3b amplitude and latency (400–600 ms). Latencies were the time in milliseconds corresponding to the peak amplitude of each component. Significant differences were further investigated using paired-sample t-tests. Differences between G.O. and controls were examined using the Crawford and Garthwaite modified t-test for single case studies [34]. Alpha levels were set at α = 0.05 with a Bonferroni correction for post-hoc tests where appropriate. Participants with >90% "no responses" or >75% rejected EEG trials were excluded, resulting in variations in Control group composition for each task (Table 1).
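The single-case comparisons reported below rely on the Crawford and Garthwaite modified t-test, which compares an individual's score with the mean and standard deviation of a small control sample using df = n - 1. A minimal implementation of the published formula follows (our own sketch, not the authors' analysis code).

    from math import sqrt
    from scipy import stats

    def crawford_t(case_score, control_mean, control_sd, n_controls, tails=2):
        """Crawford & Garthwaite modified t-test comparing a single case to a
        small control sample; returns t, degrees of freedom, and p."""
        t = (case_score - control_mean) / (control_sd * sqrt((n_controls + 1) / n_controls))
        df = n_controls - 1
        p = stats.t.sf(abs(t), df) * (2 if tails == 2 else 1)
        return t, df, p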


Table 1: Descriptive statistics for demographic information of participants included within each individual task




Results

Face Perception Tasks
Cambridge Memory for Faces Test: During upright trials, G.O. scored 33 out of 72 correct responses, significantly lower than controls' mean of 55.33 correct responses (SD = 9.452), t(20) = -2.309, p < 0.01. G.O. correctly identified fewer faces than controls during the introduction, t(20) = -3.795, p < 0.01; novel images, t(20) = -1.737, p < 0.05; and Gaussian noise conditions, t(20) = -2.317, p < 0.05 (Table 2). Similarly, during inverted trials G.O.'s score of 22 out of 72 correct was lower than controls' mean score of 41.57 correct (SD = 4.95), t(20) = -3.866, p < 0.01, and remained so throughout the introduction, t(20) = -2.946, p < 0.01, novel images, t(20) = -2.518, p < 0.01, and Gaussian noise conditions, t(20) = -1.926, p < 0.05 (Table 2).
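As a check on the reporting format, the upright comparison can be reproduced from the summary statistics alone; since the modified t-test uses df = n - 1, the reported df of 20 implies 21 controls for this task.

    from math import sqrt

    # Upright CMFT: G.O. = 33; controls M = 55.33, SD = 9.452; df = 20 implies n = 21.
    case, m, sd, n = 33, 55.33, 9.452, 21
    t = (case - m) / (sd * sqrt((n + 1) / n))
    print(round(t, 3))  # -2.308, matching the reported t(20) = -2.309 up to rounding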

Cambridge Face Perception Test: Scores on the CFPT reflect deviations from the correct position for each face, summed across all items for each orientation, with higher scores representing a greater number and percent of errors. Overall, controls showed an inversion effect, with more errors in the inverted condition (M = 66.48, SD = 12.429) than in the upright condition (M = 39.52, SD = 9.527), t(28) = -8.954, p < 0.01. G.O.'s error scores did not differ from controls' during upright trials (54 vs. M = 39.52, SD = 9.527), t(28) = 1.495, p > 0.05, or inverted trials (56 vs. M = 66.48, SD = 12.429), t(28) = -.829, p > 0.05.

Famous Faces Test: G.O.'s score of 4 correct out of 30 presented faces was lower than control participants' mean score of 19.78 out of 30, t(22) = -2.32, p < 0.05.

EEG Task: All participants obtained over 90% correct detections on the attention check and averaged 37.49% rejected ERP trials, with one participant being excluded for excessive rejected trials (100%). The remaining control participants were compared to G.O. (Figure 1).

P1: A main effect of electrode was found, F(2, 46) = 6.644, p < 0.05, such that P1 mean amplitude was greater at parieto-occipital than occipital, t(143) = 2.996, p < 0.01, and temporal sites, t(143) = -4.36, p < 0.01. A significant type by electrode interaction, F(4, 92) = 8.012, p < 0.05, suggested controls showed smaller mean amplitude for houses compared to upright, t(47) = 3.244, p < 0.01, and to inverted faces, t(47) = -4.13, p < 0.01 at parieto-occipital electrodes but not at other sites. This pattern was not observed in G.O. No P1 latency effects were identified.

N170 mean amplitude: Controls presented an effect of Type, F(2, 46) = 24.057, p < 0.05, with smaller N170 amplitude for houses than for upright, t(143) = -9.787, p < 0.01, and inverted faces, t(143) = 12.468, p < 0.01, and for upright than inverted faces, t(143) = 5.436, p < 0.01. A main effect of electrode, F(2, 46) = 14.642, p < 0.05, showed greater mean N170 amplitude at temporal than parieto-occipital, t(143) = -6.888, p < 0.01, and occipital sites, t(143) = -8.175, p < 0.01, and at parieto-occipital than occipital sites, t(143) = -3.357, p < 0.01. A significant Type by Electrode interaction, F(4, 92) = 3.224, p < 0.05, suggested these differences were largest at parietal sites, followed by parieto-occipital, and smallest at occipital sites. G.O. showed no difference from controls for houses and inverted faces, but a decrease in amplitude for upright faces that was most noticeable at parieto-occipital sites.

N170 Latency: A main effect of Type was found, F(2, 46) = 22.942, p < 0.05, showing greater N170 latency for inverted faces than for upright faces, t(143) = -8.253, p < 0.01, and houses, t(143) = -10.383, p < 0.01, and for upright faces than houses, t(143) = 5.688, p < 0.01. A main effect of Electrode, F(2, 46) = 8.253, p < 0.05, showed shorter latency at occipital than temporal, t(143) = 4.135, p < 0.01, and parieto-occipital sites, t(143) = 4.565, p < 0.01, while G.O. showed the largest latency at parieto-occipital sites. A significant Type by Electrode interaction was found, F(2.194, 50.456) = 9.457, p < 0.01, with greater latency for inverted faces than for upright faces and houses, and for upright faces than houses, at all electrode sites except temporal, t(47) = 0.0588, p > 0.05; G.O. showed a similar pattern over occipital sites, with no difference between inverted and upright faces. Overall, G.O. presented a later N170 for upright, t(24) = 1.924, p < 0.05, and inverted faces, t(24) = 1.918, p < 0.05, but not houses, t(24) = 1.595, p > 0.05, compared to controls.

P3a Maximum Amplitude: A significant effect of Type, F(2, 46) = 18.503, p < 0.05, suggested reduced P3a for houses compared to upright and inverted faces, both in Controls and G.O. An effect of Electrode, F(2, 46) = 8.144, p < 0.05, with smallest P3a over temporal electrodes, was modulated by an Electrode by Hemisphere interaction, F(2, 46) = 3.194, p < 0.05, in which P3a was reduced over left temporal compared to left parieto-occipital and left occipital electrodes, with no differences over right sites or between hemispheres. G.O. presented a similar pattern with the addition of reduced P3a also over parieto-occipital left sites.

P3a Latency: An effect of electrode, F(2, 46) = 5.961, p < 0.05, suggested faster P3a over occipital sites in Controls, which G.O. also presented. Further, a significant effect of Type, F(2, 46) = 3.306, p < 0.05, showed faster P3a for upright faces compared to houses and to inverted faces, while G.O. presented a slower P3a only for inverted faces.

Emotion Processing Tasks
Mind Test: G.O. correctly identified the emotion in 20 out of 36 presented items. This score was not significantly different from controls' (M = 22.58, SD = 6.094), t(25) = -.415, p > 0.05.

EEG Emotion Processing Task: Participants averaged 49.32% rejected trials, with three participants excluded for excessive rejected trials (99.56–100%). Data from the remaining controls were compared to G.O.'s (Figure 2).

P1 amplitude: A main effect of electrode, F(2, 42) = 7.687, p < 0.05, showed smaller P1 mean amplitude at temporal than parieto-occipital and occipital sites in controls. G.O. presented a similar distribution with no differences in amplitude from controls at any electrode sites (temporal: t(24) = -0.202, p > 0.05; parieto-occipital: t(24) = 0.408, p > 0.05; occipital: t(24) = 0.097, p > 0.05). No effects of P1 latency were found.

N170 amplitude: A main effect of electrode, F(2, 42) = 7.545, p < 0.05, reflected greater mean amplitude at temporal than parieto-occipital, t(263) = -5.425, p < 0.01, and occipital sites, t(263) = -6.311, p < 0.01. G.O. presented a different pattern in which N170 amplitude was largest at parieto-occipital rather than temporal sites. A Task by Emotion by Hemisphere effect, F(2, 42) = 4.31, p < 0.05, highlighted greater right than left N170 for Happy stimuli overall, and for Neutral stimuli only during implicit processing. G.O. presented reversed patterns based on task: larger N170 over right electrodes for explicit processing of neutral and implicit processing of happy faces; this was also present for angry faces overall.

N170 latency: A significant effect of emotion, F(2, 42) = 3.499, p < 0.05, suggested a slower N170 for neutral than for happy and angry faces in controls, whereas G.O. showed the greatest latency for angry faces, t(263) = 3.919, p < 0.01, followed by neutral, t(263) = 1.392, p < 0.01, then happy faces, t(263) = -2.685, p < 0.01. G.O.'s N170 was numerically slower overall compared to controls', although this difference was not significant, t(24) = 1.709, p > 0.05.

P3a Amplitude: Effects for Electrode were found, F(2, 42) = 3.485, p < 0.05, with smallest maximum amplitude over temporal sites, followed by occipital, and maximum amplitude over parieto-occipital sites. G.O. presented no differences from controls.

P3a Latency: A Task by Emotion interaction, F(2, 42) = 3.492, p < 0.05, showed a shorter latency for Happy than Angry faces during explicit processing that was reversed during implicit processing, while G.O. showed a slower P3a for Happy faces independent of task, reflected in an overall slower P3a compared to controls, t(24) = -2.441, p < 0.05.

P3b Amplitude: In controls, a Task by Emotion by Electrode interaction trended towards significance, F(4, 84) = 2.328, p = 0.061, suggesting greater amplitude for explicit processing of Angry faces over temporal electrodes and of Happy faces over parieto-occipital electrodes. G.O. did not present differences by emotion and electrode site. No latency effects were identified.

Table 2: Mean scores for upright and inverted sub-tests of Cambridge Memory Test for Faces (CMTF). Control standard deviations shown in parentheses


Figure 1: Comparison of G.O. and control group ERP waveforms in response to upright faces, inverted faces, and houses.


Figure 2: Comparison of G.O. and control group ERP waveforms during implicit and explicit processing of angry, neutral, and happy facial expressions.



Figure 3: Individual scores from Control group and G.O. in face processing and expression processing tasks.



Discussion

Participant G.O. showed clear deficits in face recognition in both behavioral and ERP responses, accompanied by differential patterns in emotion processing that were more marked in ERP than in behavioral measures. G.O. scored significantly lower than controls on measures of recognition for studied new faces (Cambridge Memory for Faces Test) and for unstudied famous faces. His behavioral deficits appear to be limited to face recognition (Figure 3, left), as he presented no deficits in face perception (Cambridge Face Perception Test) or in forced-choice measures of emotion identification (Mind in the Eyes) and discrimination (Figure 3, right). This pattern of response is not necessarily indicative of prosopagnosia but is consistent with prosopamnesia, a condition in which memory for faces is selectively impaired: he is able to perceive faces but has difficulties encoding and retrieving specific faces [35]. Although G.O. was able to perceive faces normally, there was a clear deficit in his memory for faces and his ability to maintain recognition over time. This is consistent with a face recognition impairment and similar to performance on face memory tasks by individuals with a diagnosis of prosopagnosia or prosopamnesia; it is noted in the literature that it is often difficult to distinguish clinically between the two conditions [36]. His ability to recognize and process emotional expression remained intact despite his face recognition deficits.

ERP responses
During passive viewing of face and non-face stimuli, the time course of processing in G.O. reflected differences from controls consistent with his behavioral deficits. Compared to controls, G.O. showed a reduced N170 difference between faces and houses. This lack of differentiation in N170 amplitude and latency between face and non-face stimuli has been previously reported in individuals with developmental and acquired prosopagnosic deficits [11] [19]. However, while G.O.'s behavioral performance in emotion identification and discrimination appeared intact, he presented differential patterns of processing during both emotion tasks compared to controls. There were significant differences in G.O.'s early and late components in the emotion processing task. G.O.'s P300 latency differed significantly from controls' during both implicit and explicit emotional processing, and he presented differences from controls in laterality during each type of task and in overall processing of angry and happy expressions.

The results suggest an interesting pattern in which G.O. does not present behavioral deficits when engaging in emotional expression processing, but possibly relies on different mechanisms from controls to process emotional expression. This differs from the intact emotional expression recognition reported in developmental prosopagnosia [9] [11]. G.O.'s performance supports aspects of both dominant models of face and emotional expression recognition [5] [8]. In the Bruce and Young account, expression recognition and face recognition proceed in parallel within separate, serially organized systems. Haxby et al.'s more recent model suggests that, although emotional expression and identity recognition are independent, the two processes also interact. G.O. clearly has deficits in one system (face recognition) while another (emotional expression recognition) remains intact, consistent with the Bruce and Young account, but the subtleties of his differences in emotional expression processing suggest that the independence of these systems is not as clear-cut and favor the Haxby et al. account [5] [8].

Differences in scalp distribution during familiarity and recollection have suggested the use of "abnormal routes" to face recognition in developmental prosopagnosics [37]; the extent of such deficits may contribute differently to the routes used for expression processing later in life. While it remains to be established how common dissociations between behavioral and neurophysiological deficits are in face processing impairments, this topic might provide further insight into the relationship between impaired identity and expression recognition [10].

Neither the Haxby et al. model nor the Bruce and Young model provides a complete explanation for the differences between behavioral and neural responses that we see in participant G.O. [5] [8]. The Bruce and Young model can explain the behavioral results of impaired identity recognition with intact expression recognition, whereas the Haxby model would allow for the differences observed in both the N170 and P300 ERPs during expression processing. Previous research has demonstrated that emotional expression influences the N170; therefore, G.O.'s marked decrease in the N170 could result in the noted decrease in the P300 [20]. That is, G.O.'s deficit in face recognition, as reflected in his decreased N170, could be driving the ERP differences during emotion recognition reflected in the P300. This still does not explain his normal behavioral responses to emotional expression. Our results support a model of face and emotional expression recognition in which these processes are somewhat independent but highly related [5]: face and expression processing are modular, yet there is interaction between the processing of a given face and its expression.


Conclusion

It is important to note that there is variance in deficits among individuals with face perception impairments, whether developmental or acquired. Further, while these ERP data might reflect the processing mechanisms of face and emotional expression recognition, it does not necessarily follow that those processing differences reflect similarities in the underlying functional architecture. As discussed, the same brain areas might not be responsible for the deficits observed in different individuals. We are currently following up this study with two additional prosopagnosic participants to compare their results with those of G.O., and with more closely age-matched controls. Although we were able to recruit controls in a closer age range to G.O., an age gap remains that we will address in future work.


Acknowledgements

Sponsored by the National Science Foundation (NSF) Research Experiences for Undergraduates (REU) Program, NSF Grant SMA-1005199 to Edward L. DeLosh.


References
1. Haxby JV, Grady CL, Horwitz B, et al. Dissociation of object and spatial visual processing pathways in human extrastriate cortex. Proc Natl Acad Sci U S A 1991 Mar 1;88(5):1621–5.
2. Sergent J, Ohta S, MacDonald B. Functional neuroanatomy of face and object processing. A positron emission tomography study. Brain 1992 Feb;115(Pt 1):15–36.
3. Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, Grady CL. The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations. J Neurosci 1994 Nov;14(11 Pt 1):6336–53.
4. Clark VP, Keil K, Maisog JM, Courtney S, Ungerleider LG, Haxby JV. Functional magnetic resonance imaging of human visual cortex during face matching: a comparison with positron emission tomography. Neuroimage 1996 Aug;4(1):1–15.
5. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends Cogn Sci 2000 Jun;4(6):223–33.
6. McCarthy G, Puce A, Gore JC, Allison T. Face-specific processing in the human fusiform gyrus. J Cogn Neurosci 1997 Fall;9(5):605–10.
7. Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. J Cogn Neurosci 1996 Nov;8(6):551–65.
8. Bruce V, Young A. Understanding face recognition. Br J Psychol 1986 Aug;77(Pt 3):305–27.
9. Duchaine BC, Parker H, Nakayama K. Normal recognition of emotion in a prosopagnosic. Perception 2003;32(7):827–38.
10. Calder AJ, Young AW. Understanding the recognition of facial identity and facial expression. Nat Rev Neurosci 2005 Aug;6(8):641–51.
11. Humphreys K, Avidan G, Behrmann M. A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia. Exp Brain Res 2007 Jan;176(2):356–73.
12. Nomi JS, Scherfeld D, Friederichs S, et al. On the neural networks of empathy: A principal component analysis of an fMRI study. Behav Brain Funct 2008 Sep 17;4:41.
13. Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia 2007 Jan 7;45(1):174–94.
14. Damasio AR, Damasio H, Van Hoesen GW. Prosopagnosia: anatomic basis and behavioral mechanisms. Neurology 1982 Apr;32(4):331–41.
15. Damasio AR, Tranel D, Damasio H. Face agnosia and the neural substrates of memory. Annu Rev Neurosci 1990;13:89–109.
16. Meadows JC. The anatomical basis of prosopagnosia. J Neurol Neurosurg Psychiatry 1974 May;37(5):489–501.
17. Whiteley AM, Warrington EK. Prosopagnosia: a clinical, psychological, and anatomical study of three patients. J Neurol Neurosurg Psychiatry 1977 Apr;40(4):395–403.
18. Eimer M, Holmes A. An ERP study on the time course of emotional face processing. Neuroreport 2002 Mar 25;13(4):427–31.
19. Harris AM, Duchaine BC, Nakayama K. Normal and abnormal face selectivity of the M170 response in developmental prosopagnosics. Neuropsychologia 2005;43(14):2125–36.
20. Ashley V, Vuilleumier P, Swick D. Time course and specificity of event-related potentials to emotional expressions. Neuroreport 2004 Jan 19;15(1):211–6.
21. Sprengelmeyer R, Jentzsch I. Event related potentials and the perception of intensity in facial expressions. Neuropsychologia 2006;44(14):2899–906.
22. Johnston VS, Miller DR, Burleson MH. Multiple P3s to emotional stimuli and their theoretical significance. Psychophysiology 1986 Nov;23(6):684–94.
23. Zurrón M, Lindín M, Galdo-Alvarez S, Díaz F. Age-related effects on event-related brain potentials in a congruence/incongruence judgment color-word Stroop task. Front Aging Neurosci 2014 Jun 17;6:128.
24. Pfütze EM, Sommer W, Schweinberger SR. Age-related slowing in face and name recognition: evidence from event-related brain potentials. Psychol Aging 2002 Mar;17(1):140–60.
25. Kuefner D, de Heering A, Jacques C, Palmero-Soler E, Rossion B. Early visually evoked electrophysiological responses over the human brain (P1, N170) show stable patterns of face-sensitivity from 4 years to adulthood. Front Hum Neurosci 2010 Jan 6;3:67.
26. van Dinteren R, Arns M, Jongsma MLA, Kessels RPC. P300 development across the lifespan: a systematic review and meta-analysis. PLoS One 2014 Feb 13;9(2):e87347.
27. Duchaine B, Nakayama K. The Cambridge Face Memory Test: results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia 2006;44(4):576–85.
28. Duchaine B, Yovel G, Nakayama K. No global processing deficit in the Navon task in 14 developmental prosopagnosics. Soc Cogn Affect Neurosci 2007 Jun;2(2):104–13.
29. Baron-Cohen S, Wheelwright S, Hill J, Raste Y, Plumb I. The "Reading the Mind in the Eyes" Test revised version: a study with normal adults, and adults with Asperger syndrome or high-functioning autism. J Child Psychol Psychiatry 2001 Feb;42(2):241–51.
30. Tottenham N, Tanaka JW, Leon AC, et al. The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Res 2009 Aug 15;168(3):242–9.
31. Rellecke J, Sommer W, Schacht A. Does processing of emotional facial expressions depend on intention? Time-resolved evidence from event-related brain potentials. Biol Psychol 2012 Apr;90(1):23–32.
32. Radloff LS. The CES-D Scale: A self-report depression scale for research in the general population. Appl Psychol Meas 1977;1(3):385–401.
33. Spielberger CD. Manual for the State-Trait Anxiety Inventory: STAI (Form Y). 1992.
34. Crawford JR, Garthwaite PH. Investigation of the single case in neuropsychology: confidence limits on the abnormality of test scores and test score differences. Neuropsychologia 2002;40(8):1196–208.
35. Tippett LJ, Miller LA, Farah MJ. Prosopamnesia: a selective impairment in face learning. Cogn Neuropsychol 2000 Feb 1;17(1):241–55.
36. Barton JJ. Disorders of face perception and recognition. Neurol Clin 2003 May;21(2):521–48.
37. Burns E, Tree J, Weidemann C. Recognition memory in developmental prosopagnosia: Behavioural and electrophysiological evidence for an impairment of recollection of faces. Perception 2013;42:78–79.

Author Contributions:
Lucy J. Troup – Substantial contributions to conception and design, Acquisition of data, Drafting the article, Revising it critically for important intellectual content, Final approval of the version to be published
Stephanie Bastidas – Substantial contributions to conception and design, Acquisition of data, Drafting the article, Revising it critically for important intellectual content, Final approval of the version to be published
Jason S. Nomi – Substantial contributions to conception and design, Acquisition of data, Drafting the article, Revising it critically for important intellectual content, Final approval of the version to be published
Maia T. Nguyen – Substantial contributions to conception and design, Acquisition of data, Drafting the article, Revising it critically for important intellectual content, Final approval of the version to be published
Tien Tong – Substantial contributions to conception and design, Acquisition of data, Revising it critically for important intellectual content, Final approval of the version to be published
Guarantor of submission
The corresponding author is the guarantor of submission.
Source of support
None
Conflict of interest
Authors declare no conflict of interest.
Copyright
© 2015 Lucy J. Troup et al. This article is distributed under the terms of Creative Commons Attribution License which permits unrestricted use, distribution and reproduction in any medium provided the original author(s) and original publisher are properly credited. Please see the copyright policy on the journal website for more information.