Peripheral Vision: a New Killer App for Smart Glasses

Total Pages: 16

File Type: pdf, Size: 1020 KB

Peripheral Vision: A New Killer App for Smart Glasses

Isha Chaturvedi, The Hong Kong University of Science and Technology, Hong Kong ([email protected])
Farshid Hassani Bijarbooneh, The Hong Kong University of Science and Technology, Hong Kong ([email protected])
Tristan Braud, The Hong Kong University of Science and Technology, Hong Kong ([email protected])
Pan Hui, University of Helsinki, Helsinki, Finland, and The Hong Kong University of Science and Technology, Hong Kong ([email protected])

ABSTRACT

Most smart glasses have a small and limited field of view. The head-mounted display often spreads between the human central and peripheral vision. In this paper, we exploit this characteristic to display information in the peripheral vision of the user. We introduce a mobile peripheral vision model, which can be used on any smart glasses with a head-mounted display without any additional hardware requirement. This model taps into the blocked peripheral vision of a user and simplifies multi-tasking when using smart glasses. To display the potential applications of this model, we implement an application for indoor and outdoor navigation. We conduct an experiment with 20 people on both a smartphone and smart glasses to evaluate our model in indoor and outdoor conditions. Users report spending at least 50% less time looking at the screen by exploiting their peripheral vision with smart glasses, and 90% of the users agree that using the model for navigation is more practical than standard navigation applications.

CCS CONCEPTS

• Human-centered computing → User studies; Empirical studies in HCI; • Computing methodologies → Perception.

KEYWORDS

Human perception; field of view; peripheral vision; smart glasses; head-mounted display; information input

ACM Reference Format:
Isha Chaturvedi, Farshid Hassani Bijarbooneh, Tristan Braud, and Pan Hui. 2019. Peripheral Vision: A New Killer App for Smart Glasses. In 24th International Conference on Intelligent User Interfaces (IUI '19), March 17–20, 2019, Marina del Ray, CA, USA. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3301275.3302263

1 INTRODUCTION

Smartglasses have become increasingly popular in recent years. They provide various applications in information visualization [49], education [16], gaming [41], medicine [36] and other commercial industries [2, 15]. Nowadays, most smartglasses embed a small head-mounted screen which spreads over the eye of the user. The angular field of view (AFOV or AOV) measures the angular extent of a 360-degree circle that is visible to the human eye [6]. Figure 1 shows the AFOV of the human eye. The foveal system, responsible for the foveal vision, lies within the central and paracentral area. The area outside the foveal system is responsible for the peripheral vision [39, 42]. The term field of view (FOV) is often used interchangeably with AFOV. Most smartglasses have a small and limited FOV, which restricts their potential applications [34, 46]. The AFOV of Google Glass¹ is approximately 30 degrees (as represented in Figure 2), which is significantly smaller than the AFOV of the human eye. This is the case for most smartglasses, including MadGaze Glass².

[Figure 1: Angular Field of View of the Human Eye (central, paracentral, near-peripheral, mid-peripheral, and far-peripheral zones at 30°, 60°, and 90°)]

[Figure 2: Angular Field of View of Google Glass (AFOV ≈ 30°)]

¹ Google Inc, https://en.wikipedia.org/wiki/Google_Glass
² MadGaze Group, http://madgaze.com/x5/specs
This limited FOV forces the user to direct his central eye gaze towards the small screen of the glass to extract meaningful information. Additionally, focusing the eyes on a display screen at close focal distances causes visual fatigue [37, 43], which immensely affects the usability of smartglasses. As the user focuses his central eye gaze on the screen of the smartglass at a close focal point, his multitasking ability is strongly affected. This temporary shift of focus may have deadly consequences. For instance, a user driving a car on the highway at 100 km/h who takes his eyes off the road for one second to look at a map screen is effectively blind for 28 meters. Using mobile devices also limits cognitive ability and restricts peripheral vision [20]. There were about 5,984 pedestrian traffic fatalities in 2017, and one of the main causes of these accidents is the extensive use of mobile devices³.

³ Pedestrian Traffic Fatalities by State, https://www.ghsa.org/resources/spotlight-pedestrians18
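The 28-meter figure in the driving example is a direct unit conversion; as a quick check:

\[
v = 100~\text{km/h} = \frac{100\,000~\text{m}}{3600~\text{s}} \approx 27.8~\text{m/s},
\qquad
d = v \cdot t \approx 27.8~\text{m/s} \times 1~\text{s} \approx 28~\text{m}.
\]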
Smartglasses with a head-mounted display, like Google Glass or even Microsoft HoloLens, partially cover the user's peripheral vision⁴. The peripheral visual field is an important part of the human vision and is useful for daily locomotive activities such as walking, driving, and sports [40]. Visual cues from the periphery can help to detect obstacles, avoid accidents and ensure proper foot placement while walking [21].

In this paper, we present a Mobile Peripheral Vision (MPV) model. Any smartglass with a head-mounted display overlapping with the peripheral vision can run this model, which does not require any additional hardware. Our model taps into the peripheral vision of the user by using the screen of the head-mounted display of the smartglass to present visual cues. The model simplifies multi-tasking for the mobile user by removing the need to focus on the screen of smartglasses. This paper contributes to the state-of-the-art by developing a model that combines two theories, motion detection through peripheral vision [8] and the color sensitivity of the human eye [26], and demonstrates its application for navigation on smartglasses with a head-mounted display. Existing works mainly focus on exploring peripheral vision by changing the hardware of the smartglasses, while we propose a pure software solution. Using our model, we develop a high-fidelity peripheral-vision-based navigation application for both indoor and outdoor environment scenarios. To the best of our knowledge, this paper presents the first use of peripheral vision in a mobile context, using standard smartglasses in both indoor and outdoor environments without additional hardware. This paper presents the following contributions:

• We present an MPV model using color and motion to display visual cues in the peripheral vision of the user.
• We implement the MPV model within a navigation application. This application is then compared to a standard navigation application on smartglasses, as well as to the same application on a smartphone. As such, we are able to isolate both the impact of peripheral vision and the use of smartglasses. Thanks to our model, users spend on average 50% less time looking at the screen of the smartglasses. Furthermore, 90% of the users agree that the application was beneficial.
• We further discuss two specific cases, namely strabismus and color-blindness, for which our MPV model does not apply: color-blindness changes the color sensitivity of the eye, while strabismus impacts eye mobility. We propose modifications to our model to account for these specific cases.

The rest of this paper is organized as follows. We first discuss research studies related to ways of increasing the field of view, the use of peripheral vision for providing notifications to the user, and navigation using smartglasses. In Section 2, we explain our MPV model and its applications for mobile users. In Section 3, we discuss our demo application and the user study built around it. Finally, we discuss the results of the experiments to evaluate the applicability of our model.

Related Work

In this section, we present the main related studies. These studies spread across three main fields: enhancing the FOV of smartglasses, displaying information in peripheral vision, and navigation on smartglasses.

Enhancing the FOV of smartglasses

Augmenting the field of view has previously been studied by changing the hardware of the smartglasses [7, 28]. SparseLightAR increases the field of view of head-mounted displays by adding an array of Light Emitting Diodes (LEDs) around the central display [47]. Similarly, AmbiGlasses illuminates the periphery of the human visual field by adding 12 LEDs in the frame of the glasses [31]. Matviienko et al. [22] discuss the possibility of employing ambient light in the car to keep the focus of the user on the road. […] In this non-exhaustive list of studies, gaze detection and guiding has been a very active field to target the user's attention towards specific details of a scene and improve global recollection. However, none of these studies exploit peripheral vision to send subtle cues to the user without altering his focus on the main task. Few studies have explored the possibility of using animations in peripheral vision displays for enhancing visual interest without distracting the user [30]. The study in [18] […]

⁴ Google Glass Blocks Peripheral Vision, https://www.livescience.com/48608-google-glass-blocks-peripheral-vision.html
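The excerpt describes the MPV model only at a high level: color and motion cues rendered where the display overlaps the wearer's periphery. As a rough, hypothetical illustration of such a software-only cue, the Java sketch below maps a navigation instruction to a colored patch whose drift speed grows as the maneuver approaches. Every name, color choice, and threshold here is our own assumption, not the authors' implementation.

```java
// Minimal sketch (not the paper's code): peripheral vision is weak on
// detail but sensitive to motion and strong color contrast, so a
// navigation cue is encoded as a large moving color patch, never as text.
public class MpvCueDemo {
    enum Turn { LEFT, RIGHT }

    // Hypothetical cue description: which screen edge, which hue,
    // and how fast the patch drifts (motion attracts peripheral attention).
    record PeripheralCue(Turn edge, String color, double driftDegPerSec) {}

    static PeripheralCue cueFor(Turn turn, double distanceToTurnMeters) {
        // Assumption: closer maneuvers get faster motion so they are
        // noticed sooner; capped to avoid distracting flicker.
        double drift = Math.min(20.0, 200.0 / Math.max(distanceToTurnMeters, 1.0));
        // Assumption: saturated, easily discriminated hues per direction.
        String color = (turn == Turn.LEFT) ? "GREEN" : "ORANGE";
        return new PeripheralCue(turn, color, drift);
    }

    public static void main(String[] args) {
        System.out.println(cueFor(Turn.LEFT, 50.0));  // drift = 4.0 deg/s
        System.out.println(cueFor(Turn.RIGHT, 5.0));  // drift = 20.0 deg/s (capped)
    }
}
```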
Recommended publications
  • Specific Eye Conditions with Corresponding Adaptations/Considerations
Specific Eye Conditions with Corresponding Adaptations/Considerations

1. Achromatopsia
   Effect on vision: colors are seen as shades of grey; nystagmus and photophobia (improve with age).
   Adaptations/considerations: tinted lenses; reduced lighting; alternative techniques for teaching colors will be required.
2. Albinism
   Effect on vision: decreased visual acuity; photophobia; nystagmus; central scotomas; strabismus.
   Adaptations/considerations: sunglasses; visor or cap with a brim; reduced depth perception; moving close to objects.
3. Aniridia
   Effect on vision: photophobia; field loss; vision may fluctuate depending on lighting conditions and glare.
   Adaptations/considerations: tinted lenses; sunglasses; visor or cap with brim; dim lighting; extra time required to adapt to lighting changes.
4. Aphakia
   Effect on vision: reduced depth perception; inability to accommodate to lighting changes.
   Adaptations/considerations: sunglasses, visor or cap with a brim may be worn indoors; extra time required to adapt to lighting changes.
5. Cataracts
   Effect on vision: poor color vision; photophobia; visual acuity fluctuates according to light.
   Adaptations/considerations: bright lighting may be a problem; low lighting may be preferred; extra time required to adapt to lighting changes.
6. Colobomas
   Effect on vision: photophobia; nystagmus; field loss; reduced depth perception.
   Adaptations/considerations: sunglasses; visor or cap with a brim; good contrast required.
7. Color Blindness
   Effect on vision: difficulty or inability to see colors and detail; photophobia; central field scotomas (spotty vision); normal peripheral fields.
   Adaptations/considerations: sunglasses; visor or cap with a brim; reduced depth perception; good contrast required; low lighting may be preferred; alternative techniques for teaching colors.
  • Interaction Methods for Smart Glasses: A Survey
Interaction Methods for Smart Glasses: A Survey
Lik-Hang Lee¹ and Pan Hui¹,² (Fellow, IEEE)
¹ The Hong Kong University of Science and Technology, Department of Computer Science and Engineering
² The University of Helsinki, Department of Computer Science
Corresponding author: Pan Hui (e-mail: panhui@cse.ust.hk)
DOI: 10.1109/ACCESS.2018.2831081, IEEE Access

ABSTRACT Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal for them to become an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics images onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. The survey first studies the smart glasses available in the market and afterwards investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input. This paper mainly focuses on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Next, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated by a total of eight interaction goals.
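The hand-held / touch / touchless classification above maps naturally onto a small type hierarchy. The Java sketch below encodes it for clarity; the category names follow the survey's terms, while the example devices in the comments are our own illustrations, not drawn from the survey.

```java
// The survey's input taxonomy, encoded as plain Java types.
public class InteractionTaxonomy {
    sealed interface Input permits HandHeld, Touch, Touchless {}
    record HandHeld() implements Input {}            // e.g., a paired controller

    sealed interface Touch extends Input permits OnDevice, OnBody {}
    record OnDevice() implements Touch {}            // e.g., touchpad on the frame
    record OnBody() implements Touch {}              // e.g., skin used as a surface

    sealed interface Touchless extends Input permits HandsFree, Freehand {}
    record HandsFree() implements Touchless {}       // e.g., voice or head gaze
    record Freehand() implements Touchless {}        // e.g., mid-air hand gestures

    public static void main(String[] args) {
        Input i = new Freehand();
        System.out.println(i instanceof Touchless);  // true
    }
}
```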
  • Photography and Photomontage in Landscape and Visual Impact Assessment
Photography and Photomontage in Landscape and Visual Impact Assessment
Landscape Institute Technical Guidance Note
Public Consultation Draft, 2018-06-01

To the recipient of this draft guidance: The Landscape Institute is keen to hear the views of LI members and non-members alike. We are happy to receive your comments in any form (e.g. annotated PDF, email with paragraph references) via email to [email protected], which will be forwarded to the Chair of the working group. Alternatively, members may make comments on Talking Landscape: Topic "Photography and Photomontage Update". You may provide any comments you consider would be useful, but may wish to use the following as a guide.

1) Do you expect to be able to use this guidance? If not, why not?
2) Please identify anything you consider to be unclear, or needing further explanation or justification.
3) Please identify anything you disagree with and state why.
4) Could the information be better organised? If so, how?
5) Are there any important points that should be added?
6) Is there anything in the guidance which is not required?
7) Is there any unnecessary duplication?
8) Any other suggestions?

Responses to be returned by 29 June 2018. Incidentally, the ##'s are to aid a final check of cross-references before publication.

Contents
1 Introduction
2 Background Methodology
3 Photography: equipment and approaches needed to capture suitable images
Appendices
App 01 Site equipment
App 02 Camera settings
App 03 Dealing with panoramas
App 04 Technical methodology template
  • An Augmented Reality Social Communication Aid for Children and Adults with Autism: User and Caregiver Report of Safety and Lack of Negative Effects
An Augmented Reality Social Communication Aid for Children and Adults with Autism: User and Caregiver Report of Safety and Lack of Negative Effects
Ned T. Sahin¹,²*, Neha U. Keshav¹, Joseph P. Salisbury¹, Arshya Vahabzadeh¹,³
¹ Brain Power, 1 Broadway 14th Fl, Cambridge MA 02142, United States
² Department of Psychology, Harvard University, United States
³ Department of Psychiatry, Massachusetts General Hospital, Boston
* Corresponding author. Ned T. Sahin, PhD, Brain Power, 1 Broadway 14th Fl, Cambridge, MA 02142, USA. Email: [email protected].
bioRxiv preprint doi: https://doi.org/10.1101/164335; this version posted July 19, 2017.

Abstract
Background: Interest has been growing in the use of augmented reality (AR) based social communication interventions in autism spectrum disorders (ASD), yet little is known about their safety or negative effects, particularly in head-worn digital smartglasses. Research to understand the safety of smartglasses in people with ASD is crucial given that these individuals may have altered sensory sensitivity, impaired verbal and non-verbal communication, and may experience extreme distress in response to changes in routine or environment. Objective: The objective of this report was to assess the safety and negative effects of the Brain Power Autism System (BPAS), a novel AR smartglasses-based social communication aid for children and adults with ASD. BPAS uses emotion-based artificial intelligence and a smartglasses hardware platform that keeps users engaged in the social world by encouraging "heads-up" interaction, unlike tablet- or phone-based apps.
  • The Use of Smartglasses in Everyday Life
University of Erfurt, Faculty of Philosophy
The Use of Smartglasses in Everyday Life: A Grounded Theory Study
Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Dr. phil.) at the Faculty of Philosophy of the University of Erfurt
Submitted by Timothy Christoph Kessler, from Munich, November 2015
URN: urn:nbn:de:gbv:547-201600175
First Assessment: Prof. Dr. Joachim R. Höflich
Second Assessment: Prof. Dr. Dr. Castulus Kolo
Date of publication: 18th of April 2016

Abstract
We live in a mobile world. Laptops, tablets and smartphones have never been as ubiquitous as they are today. New technologies are invented on a daily basis, leading to the altering of society on a macro level, and to the change of everyday life on a micro level. Through the introduction of a new category of devices, wearable computers, we might experience a shift away from the traditional smartphone. This dissertation aims to examine the topic of smartglasses, especially Google Glass, and how these wearable devices are embedded into everyday life and, consequently, into society at large. The current research models which are concerned with mobile communication are only partly applicable due to the distinctive character of smartglasses. Furthermore, new legal and privacy challenges for smartglasses arise, which are not taken into account by existing theories. Since the literature on smartglasses is close to non-existent, it is argued that new models need to be developed in order to fully understand the impact of smartglasses on everyday life and society as a whole.
  • Field of View - Wikipedia, the Free Encyclopedia
Field of view - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Field_of_view

The field of view (also field of vision, abbreviated FOV) is the extent of the observable world that is seen at any given moment. In the case of optical instruments or sensors, it is a solid angle through which a detector is sensitive to electromagnetic radiation.

Contents: 1 Humans and animals; 2 Conversions; 3 Machine vision; 4 Remote sensing; 5 Astronomy; 6 Photography; 7 Video games; 8 See also; 9 References

[Figures: Horizontal Field of View; Vertical Field of View; Angle of view can be measured horizontally, vertically, or diagonally.]

In the context of human vision, the term "field of view" is typically used in the sense of a restriction to what is visible by external apparatus, like spectacles [2] or virtual reality goggles. Note that eye movements do not change the field of view. If the analogy of the eye's retina working as a sensor is drawn upon, the corresponding concept in human (and much of animal) vision is the visual field [3]. It is defined as "the number of degrees of visual angle during stable fixation of the eyes" [4]. Note that eye movements are excluded in the definition. Different animals have different visual fields, depending, among others, on the placement of the eyes. Humans have an almost 180-degree forward-facing horizontal diameter of their visual field, while some birds have a complete or nearly complete 360-degree visual field. The vertical range of the visual field in humans is typically
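The excerpt's "Conversions" section is cut off here, but the standard rectilinear relation behind horizontal, vertical, and diagonal angle-of-view figures is AOV = 2·arctan(d / 2f) for a sensor dimension d and focal length f. A small sketch (the full-frame example values are ours, not from the excerpt):

```java
// Angle of view for a rectilinear lens: AOV = 2 * atan(d / (2f)).
// Works for a horizontal, vertical, or diagonal sensor dimension d (mm).
public class AngleOfView {
    static double aovDegrees(double sensorDimMm, double focalLengthMm) {
        return Math.toDegrees(2.0 * Math.atan(sensorDimMm / (2.0 * focalLengthMm)));
    }

    public static void main(String[] args) {
        // Full-frame sensor (36 mm x 24 mm) with a 50 mm lens:
        System.out.printf("horizontal: %.1f deg%n", aovDegrees(36, 50)); // ~39.6
        System.out.printf("vertical:   %.1f deg%n", aovDegrees(24, 50)); // ~27.0
    }
}
```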
  • Factors Influencing Consumer Attitudes Towards M-Commerce AR Apps
"I see myself, therefore I purchase": factors influencing consumer attitudes towards m-commerce AR apps
Mafalda Teles Roxo and Pedro Quelhas Brito
Faculdade de Economia da Universidade do Porto and LIAAD-INESC TEC, Portugal
[email protected]; [email protected]

Abstract
Mobile commerce (m-commerce) is starting to represent a significant share of e-commerce. The use of Augmented Reality (AR) by brands to convey information about their products, within the store and mainly as mobile apps, makes it possible for researchers and managers to understand consumer reactions. Although attitudes towards AR have been studied, the overall effect of distinct aspects such as the influence of others, imagery, projection and perceived presence has not been tackled, as far as we know. Therefore, we conducted a study on 218 undergraduate students, using a pre-test post-test experimental design, to address the following questions: (1) Do AR media characteristics affect consumer attitudes towards the medium in a mobile shopping context? And (2) do the opinion and physical presence of other people influence the attitude towards an m-commerce AR app? The study found that AR characteristics such as projection and imagery positively influence attitudes towards m-commerce AR apps, whereas social variables did not have any influence.
Keywords: MAR; m-commerce; consumer psychology; AR-consumer relationship.

1 Introduction
Simultaneously with the increasing percentage of e-commerce sales resulting from mobile retail commerce (m-commerce), it is estimated that in the U.S., by 2020, 49.2% of online sales will be made using mobile apps (Statista, 2019b). Also, in 2018, approximately 57% of internet users purchased fashion-related products online (Statista, 2019a).
  • Investigation of Driver's FOV and Related Ergonomics Using Laser Shadowgraphy from Automotive Interior
Investigation of Driver's FOV and Related Ergonomics Using Laser Shadowgraphy from Automotive Interior
Wessam Hussein¹*, Mohamed Nazeeh¹ and Mahmoud MA Sayed²
¹ Military Technical College, KobryElkobbah, Cairo, Egypt
² Canadian International College, New Cairo, Cairo, Egypt
* Corresponding author: Wessam Hussein, Military Technical College, KobryElkobbah, 11766, Cairo, Egypt, Tel: +20222621908; E-mail: [email protected]
Hussein et al., J Ergonomics 2017, 7:4. DOI: 10.4172/2165-7556.1000207. ISSN: 2165-7556. Research Article, Open Access.
Received: June 07, 2017; Accepted: June 26, 2017; Published: June 30, 2017

Abstract
A new application of laser shadowgraphy in automotive design and driver's ergonomics investigation is described. The technique is based on generating a characterizing plot for the vehicle's Field of View (FOV). This plot is obtained by projecting a high-divergence laser beam from the driver's eyes' cyclopean point onto a cylindrical screen installed around the tested vehicle. The resultant shadowgram is photographed in several shots by a narrow-field camera to form a complete panoramic scene of the screen. The panorama is then printed as a plane-sheet FOV plot. The obtained plot is used to measure and analyse the areal visual field, the eye and neck movement ranges in correlation with FOV, the horizontal visual blind zones, the maximum vertical visual angle and other related ergonomic parameters.
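The abstract does not spell out how the printed shadowgram maps back to viewing angles, but for a cylindrical screen centered on the cyclopean point it is elementary geometry. The sketch below is our assumption of how such a conversion could look, not the paper's method:

```java
// Geometry sketch (our assumption): a point on a cylindrical screen of
// radius R, centered on the driver's cyclopean eye point, maps to a
// horizontal angle (position along the arc) and a vertical angle.
public class ShadowgramAngles {
    // arcLengthM: distance along the screen circumference from straight ahead
    // heightM: height of the shadow boundary relative to eye level
    static double[] toAngles(double arcLengthM, double heightM, double radiusM) {
        double azimuthDeg = Math.toDegrees(arcLengthM / radiusM);        // s = R * theta
        double elevationDeg = Math.toDegrees(Math.atan2(heightM, radiusM));
        return new double[] { azimuthDeg, elevationDeg };
    }

    public static void main(String[] args) {
        double[] a = toAngles(1.0, 0.3, 2.0); // hypothetical 2 m screen radius
        System.out.printf("azimuth %.1f deg, elevation %.1f deg%n", a[0], a[1]);
        // azimuth 28.6 deg, elevation 8.5 deg
    }
}
```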
  • Light Engines for XR Smartglasses by Jonathan Waldern, Ph.D
Light Engines for XR Smartglasses
By Jonathan Waldern, Ph.D.
August 28, 2020

The near-term obstacle to meeting an elegant form factor for Extended Reality¹ (XR) glasses is the size of the light engine² that projects an image into the waveguide, providing a daylight-bright, wide field-of-view mobile display.

For original equipment manufacturers (OEMs) developing XR smartglasses that employ diffractive waveguide lenses, there are several light engine architectures contending for the throne. The highly transmissive, daylight-bright glasses demanded by early-adopting customers require a combination of display efficiency, 2k-by-2k-and-up resolution, and high contrast that simply does not exist today in the required package size of less than ~2 cc (cubic centimeters). This thought piece examines both laser and LED contenders. It becomes clear that even if MicroLED (µLED) solutions do actually emerge as forecast in the next five years, diffractive waveguides are fundamentally not ideally paired with broadband LED illumination, and so laser-based light engines are the realistic option over the next 5+ years.

Bottom Line Up Front
• µLED, a new emissive panel technology causing considerable excitement in the XR community, does dispense with some bulky refractive illumination optics and beam splitters, but still requires a bulky projection lens. Yet an even greater fundamental problem of µLEDs is that, while bright compared with OLED, the technology falls short of the maximum and focused brightness needed for diffractive and holographic waveguides due to the fundamental inefficiencies of LED divergence.
• A laser diode (LD) based light engine has a pencil-like beam of light which permits higher efficiency at a higher F#.
  • Advanced Assistive Maintenance Based on Augmented Reality and 5G Networking
sensors, Article
Advanced Assistive Maintenance Based on Augmented Reality and 5G Networking
Sebastiano Verde¹, Marco Marcon²*, Simone Milani¹ and Stefano Tubaro²
¹ Department of Information Engineering, University of Padova, 35131 Padua, Italy; [email protected] (S.V.); [email protected] (S.M.)
² Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy; [email protected]
* Correspondence: [email protected]
Received: 13 November 2020; Accepted: 9 December 2020; Published: 14 December 2020

Abstract: Internet of Things (IoT) applications play a relevant role in today's industry in sharing diagnostic data with off-site service teams, as well as in enabling reliable predictive maintenance systems. Several intervention scenarios, however, require the physical presence of a human operator: Augmented Reality (AR), together with a broad-band connection, represents a major opportunity to integrate diagnostic data with real-time in-situ acquisitions. Diagnostic information can be shared with remote specialists that are able to monitor and guide maintenance operations from a control room as if they were in place. Furthermore, integrating heterogeneous sensors with AR visualization displays could largely improve operators' safety in complex and dangerous industrial plants. In this paper, we present a complete setup for a remote assistive maintenance intervention based on 5G networking and tested at a Vodafone Base Transceiver Station (BTS) within the Vodafone 5G Program. Technicians' safety was improved by means of a lightweight AR Head-Mounted Display (HMD) equipped with a thermal camera and a depth sensor to foresee possible collisions with hot surfaces and dangerous objects, by leveraging the processing power of remote computing paired with the low latency of 5G connection.
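A minimal sketch of the kind of safety check the abstract describes, fusing one depth reading and one thermal reading per region of interest; the class name and both thresholds are our assumptions, not taken from the paper:

```java
// Hedged sketch: warn the technician when something close is also hot.
// A real system would work on registered depth/thermal images; here a
// single fused sample stands in for one pixel or region of interest.
public class HotSurfaceGuard {
    static final double MIN_SAFE_DISTANCE_M = 0.5; // assumed threshold
    static final double MAX_SAFE_TEMP_C = 60.0;    // assumed threshold

    enum Alert { NONE, COLLISION, HOT_SURFACE, HOT_AND_CLOSE }

    static Alert classify(double distanceM, double temperatureC) {
        boolean close = distanceM < MIN_SAFE_DISTANCE_M;
        boolean hot = temperatureC > MAX_SAFE_TEMP_C;
        if (close && hot) return Alert.HOT_AND_CLOSE;
        if (close) return Alert.COLLISION;
        if (hot) return Alert.HOT_SURFACE;
        return Alert.NONE;
    }

    public static void main(String[] args) {
        System.out.println(classify(0.3, 80.0)); // HOT_AND_CLOSE
        System.out.println(classify(1.2, 25.0)); // NONE
    }
}
```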
  • Two Eyes See More Than One (Human beings have two eyes located about 6 cm, about 2.4 in., apart)
Activity 2: Two Eyes See More Than One

OBJECTIVES
Students discover how having two eyes helps us see in three dimensions and increases our field of vision. The students:
• discover that each eye sees objects from a slightly different viewpoint to give us depth perception
• observe that depth perception decreases with the use of just one eye
• measure their field of vision
• observe that their field of vision decreases with the use of just one eye

SCHEDULE
Session I: about 30 minutes
Session II: about 40 minutes

VOCABULARY
depth perception
field of vision
peripheral vision

MATERIALS
For each student:
1 Activity Sheet 2, Parts A and B
1 straw
1 index card (cut in half widthwise)
3 pennies*
1 metric ruler*
For the class:
1 roll string
1 roll masking tape
1 pair scissors*
*provided by the teacher

PREPARATION
Session I
1. Make a copy of Activity Sheet 2, Part A, for each student.
2. Each team of two will need a metric ruler, three paper cups, and three pennies. Students will either close or cover their eyes, or you may use blindfolds. (Students should use their own blindfold, a bandanna or long strip of cloth brought from home and stored in their science journals, for use in this and other activities.)
Session II
1. Make a copy of Activity Sheet 2, Part B, for each student.
2. For each team, cut a length of string 50 cm (about 20 in.) long. Cut enough index cards in half (widthwise) to give each team half a card. Snip the corners of the cards to eliminate sharp edges.
  • Binocular Vision
BINOCULAR VISION
Rahul Bhola, MD
Pediatric Ophthalmology Fellow, The University of Iowa, Department of Ophthalmology & Visual Sciences
Posted Jan. 18, 2006; updated Jan. 23, 2006

Binocular vision is one of the hallmarks of the human race that has bestowed on it supremacy in the hierarchy of the animal kingdom. It is an asset with normal alignment of the two eyes, but becomes a liability when the alignment is lost. Binocular Single Vision may be defined as the state of simultaneous vision which is achieved by the coordinated use of both eyes, so that separate and slightly dissimilar images arising in each eye are appreciated as a single image by the process of fusion. Thus binocular vision implies fusion, the blending of sight from the two eyes to form a single percept.

Binocular Single Vision can be:
1. Normal: Binocular Single Vision can be classified as normal when it is bifoveal and there is no manifest deviation.
2. Anomalous: Binocular Single Vision is anomalous when the images of the fixated object are projected from the fovea of one eye and an extrafoveal area of the other eye, i.e. when the visual direction of the retinal elements has changed. A small manifest strabismus is therefore always present in anomalous Binocular Single Vision.

Normal Binocular Single Vision requires:
1. A clear visual axis leading to reasonably clear vision in both eyes.
2. The ability of the retino-cortical elements to function in association with each other to promote the fusion of two slightly dissimilar images, i.e. sensory fusion.
3. The precise co-ordination of the two eyes for all directions of gaze, so that corresponding retino-cortical elements are placed in a position to deal with two images, i.e.