Large inversion effects are not specific to faces and do not vary with object expertise

Constantin Rezlescu1,2, Angus Chapman3, Tirta Susilo3, & Alfonso Caramazza1

1Department of Psychology, Harvard University, USA
2Institute of Cognitive Neuroscience, University College London, UK
3School of Psychology, Victoria University of Wellington, NZ

Abstract

Visual object recognition is impaired when stimuli are shown upside-down. This phenomenon is known as the inversion effect, and a substantial body of evidence suggests it is much larger for faces than non-face objects. The large inversion effect for faces has been widely used as key evidence that face processing is special, and hundreds of studies have used it as a tool to investigate face-specific processes. Here we show that large inversion effects are not specific to faces. We developed two car tasks that tap basic object recognition and within-class recognition. Both car tasks generated large inversion effects (~25% on a three-choice format), which were identical to those produced by parallel face tasks. Additional analyses showed that the large car inversion effects did not vary with expertise. Our findings demonstrate that non-face object recognition can depend on processes that are highly orientation-specific, challenging a critical behavioral marker of face-specific processes.

1. Introduction

Inversion effects – the reduced ability to visually recognize objects shown upside-down – are observed for most objects that have a canonical orientation, but the effects are much larger for faces than for other objects. This finding led Yin (1969) to argue that face processing relies on special mechanisms, and his initial results have been widely replicated during the past 50 years.
In a meta-analysis of 77 studies exploring perceptual hallmarks of face-specificity (the inversion, composite, part-whole, and contrast negation effects), Bruyer (2011) found that the inversion effect was, by far, the most investigated (appearing in 63% of all studies), the largest, and the most reliable. The effect size differences between faces and objects are most evident in studies comparing them directly. In their review, Robbins and McKone (2007) noted that inversion effects for faces are typically 15-25%, compared with 0-8% for objects.

Large inversion effects for faces are theoretically important for two reasons. First, they have been used as an experimental method to characterize face processing in hundreds of studies across psychology and neuroscience. These studies span the cognitive (Rhodes, Brake, & Atkinson, 1993; Yin, 1969), developmental (Carey & Diamond, 1977; Fagan, 1972), comparative (Kendrick, Atkins, Hinton, Heavens, & Keverne, 1996; Parr, Dove, & Hopkins, 1998), neuroscience (Freiwald, Tsao, & Livingstone, 2009; Yovel & Kanwisher, 2005), and clinical (Farah, Wilson, Drain, & Tanaka, 1995; Hobson, Ouston, & Lee, 1988) literatures. Second, large inversion effects for faces have been used as key evidence for the face-specific hypothesis (Robbins & McKone, 2007) and the expertise hypothesis (Diamond & Carey, 1986; Gauthier, Skudlarski, Gore, & Anderson, 2000). The face-specific hypothesis asserts that larger inversion effects for faces than for objects indicate that upright face processing involves special mechanisms that are not used in object processing. The expertise hypothesis claims that face processing seems special only because we have much more experience with upright faces than we do with other objects.
According to this view, faces produce large inversion effects because almost everyone is an expert upright face recognizer, but other objects of expertise would engage the same mechanisms and thus produce similar effects (Diamond & Carey, 1986; Gauthier et al., 2000). The face-specific hypothesis and the expertise hypothesis differ in how they account for the emergence of mechanisms involved in face processing, but they agree that inversion effects for most objects should be smaller than inversion effects for faces or for objects with which we have expertise.

Here we show that large, face-like inversion effects can be obtained for non-face objects and that these effects are not linked to expertise. Using cars as model stimuli, we developed two car tasks that reliably produce inversion effects as large as the largest effects reported with faces. One task taps basic object recognition (discriminating cars from non-cars); the other taps within-class recognition (discriminating one particular car from other cars).

2. Method

2.1. Participants.

According to a recent meta-analysis (Bruyer, 2011), the difference between the average inversion effect sizes reported for faces and objects was 0.5. A power analysis revealed that a sample of 54 participants was required to achieve 95% power to detect this difference (significance threshold .05; two-tailed repeated-measures t-test).

We tested two groups of participants, over the web and in the laboratory. Our original web sample consisted of 71 participants from Amazon Mechanical Turk, of whom we excluded eight based on abnormally fast response times suggesting lack of engagement with the task. Our final web sample consisted of 63 participants (36 male) with a mean age of 36.1 years (SD = 10.1). Our lab sample consisted of 57 undergraduate students (11 male) from Victoria University of Wellington. Their mean age was 18.8 years (SD = 1.6).

2.2. Procedure.

Participants completed the following four tests in random order.
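The sample size reported in the power analysis of Section 2.1 can be checked numerically. Below is a minimal sketch (not the authors' code; the function name is ours) that computes the power of a two-tailed paired t-test from the exact noncentral t-distribution and searches for the smallest sample reaching 95% power at d = 0.5:

```python
from math import sqrt
from scipy.stats import t as t_dist, nct

def paired_ttest_power(d, n, alpha=0.05):
    """Power of a two-tailed paired (repeated-measures) t-test for
    standardized effect size d and sample size n, computed from the
    exact noncentral t-distribution."""
    df = n - 1
    nc = d * sqrt(n)                        # noncentrality parameter
    t_crit = t_dist.ppf(1 - alpha / 2, df)  # two-tailed critical value
    return (1 - nct.cdf(t_crit, df, nc)) + nct.cdf(-t_crit, df, nc)

# Smallest n reaching 95% power for the face-object difference of 0.5
# reported by Bruyer (2011), at a .05 two-tailed threshold
n = 2
while paired_ttest_power(0.5, n) < 0.95:
    n += 1
print(n)  # 54, matching the sample size reported above
```

The search treats the repeated-measures design as a one-sample t-test on the difference scores, which is the standard formulation behind such power calculations.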
Basic-level recognition of faces (Verhallen et al., 2014). This task is a modified version of the classic Mooney task (Mooney, 1957). Each trial presented three images: a target image – one of the original Mooney faces – and two distractor images, created by rearranging parts of Mooney faces in systematic ways (see Verhallen et al., 2014, for details). The three images were shown side by side for 400ms. Participants had to indicate as quickly as possible the image containing the Mooney face by pressing a key. Participants completed one block with upright images and another with upside-down images. Block order was randomized. Each block started with three practice trials and continued with 39 test trials. A demo version of the experiment can be found at: testable.org/t/493353c.

Basic-level recognition of cars. We developed an identical basic-level recognition task with cars instead of faces. Forty images of SUVs were taken from the Fine-Grained Categorization Car Dataset (Krause, Deng, Stark, & Fei-Fei, 2013) and were carefully edited as in Verhallen et al. (2014) to create a Mooney car target and two Mooney distractors for each car target. All task parameters were identical to the face task. Pilot testing ensured performance with upright cars was comparable to performance with upright faces. A demo version can be found here: testable.org/t/4b14caa.

Within-class recognition of faces (Rezlescu, Pitcher, & Duchaine, 2012). In this task, participants are asked to match one of three test faces (side view) to a target face (frontal view). All stimuli are grayscale male faces with a standard black cap to cover the hair. Participants first saw the target face for 400ms, followed immediately by the three simultaneously presented test faces for 2000ms. Participants had unlimited time to respond. There were 40 upright trials and 40 inverted trials, presented in random order. A demo version can be found here: testable.org/t/43a13cc.

Within-class recognition of cars.
We developed a similar within-class recognition task with cars instead of faces. The car stimuli consisted of 20 BMW car models, downloaded from the official BMW website (www.bmw.com). All cars were black. The target car was presented at a 20-degree angle, while the test cars were presented at 50-degree angles. The design and presentation times were identical to the face task. Pilot testing ensured performance with upright cars was matched to performance with upright faces. A demo version can be found here: testable.org/t/4536cd0.

Example images from all tasks are presented in Figure 1.

3. Results

3.1. Inversion effects for basic-level recognition

Web sample. For faces, average performance in the upright condition was 80.0% (SD = 8.8%) and in the inverted condition 51.9% (SD = 10.9%). The inversion effect for faces was 28.1% (SD = 12.9%; Cohen’s d = 2.84). For cars, average performance was 75.1% (SD = 11.7%) in the upright condition and 46.7% (SD = 9.7%) in the inverted condition. The inversion effect for cars was 28.4% (SD = 11.7%; Cohen’s d = 2.64). There was no difference between the inversion effects for faces and cars [t(62) = 0.115; p = .909].

Lab sample. Average performance with upright faces was 75.9% (SD = 8.1%) and with inverted faces 49.9% (SD = 11.8%). The face inversion effect was 26.0% (SD = 10.6%; Cohen’s d = 2.57). For cars, average performance was 76.1% (SD = 11.1%) in the upright condition and 48.9% (SD = 10.7%) in the inverted condition. The car inversion effect was 27.2% (SD = 12.6%; Cohen’s d = 2.50). Again, there was no difference between the face inversion effect and the car inversion effect [t(56) = 0.58; p = .565].

Web vs. lab samples. The size of the face inversion effect in the web sample was not different from that in the lab sample [t(118) = 0.99; p = .325]. The size of the car inversion effect was likewise comparable across the two samples [t(118) = 0.55; p = .587].
When data from both samples were collapsed, the face inversion effect (M = 27.1%, SD = 11.9%) was not significantly different from the car inversion effect (M = 27.8%, SD = 12.1%) [t(119) = 0.47; p = .639].

3.2. Inversion effects for within-class recognition

Web sample. For faces, average performance was 78.7% (SD = 11.6%) in the upright condition and 53.6% (SD = 13.5%) in the inverted condition. The face inversion effect was 25.0% (SD = 12.7%; Cohen’s d = 2.00).