An Overview of Computer Vision


NBSIR 82-2582

AN OVERVIEW OF COMPUTER VISION

William B. Gevarter*

U.S. Department of Commerce
National Bureau of Standards
National Engineering Laboratory
Center for Manufacturing Engineering
Industrial Systems Division
Metrology Building, Room A127
Washington, DC 20234

September 1982

Prepared for:
National Aeronautics and Space Administration Headquarters
Washington, DC 20546

U.S. DEPARTMENT OF COMMERCE, Malcolm Baldrige, Secretary
NATIONAL BUREAU OF STANDARDS, Ernest Ambler, Director

* Research Associate at the National Bureau of Standards. Sponsored by NASA Headquarters.

Preface

Computer Vision* -- visual perception employing computers -- shares with "Expert Systems" the role of being one of the most popular topics in Artificial Intelligence today. Commercial vision systems have already begun to be used in manufacturing and robotic systems for inspection and guidance tasks. Other systems, at various stages of development, are beginning to be employed in military, cartographic, and image interpretation applications.

This report reviews the basic approaches to such systems, the techniques utilized, applications, the currently existing systems, the state of the art of the technology, issues and research requirements, who is doing it and who is funding it, and, finally, future trends and expectations.

The computer vision field is multifaceted, having many participants with diverse viewpoints, and many papers have been written. However, the field is still in the early stages of development -- organizing principles have not yet crystallized, and the associated technology has not yet been rationalized.
Thus, this report is not as smooth and even as would be desirable. Nevertheless, this overview should prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.

* This report is in support of the more general NBS/NASA report, An Overview of Artificial Intelligence and Robotics.

Acknowledgments

I wish to thank the many individuals and organizations who have contributed to this report by furnishing information and suggestions. I particularly would like to thank Drs. T. Binford of Stanford, M. Brady of M.I.T., M. Tenenbaum and H. Barrow of Fairchild, and Gennery et al. of JPL for furnishing source material that was essential to the development of this report. In addition, I would like to thank the staff of the NBS Industrial Systems Division, CDR R. Ohlander of DARPA, Drs. A. Rosenfeld and M. Schneer of U. of MD., M. Fischler of SRI, E. Sacerdoti of M.I.C., and J. Wilder of Object Recognition Systems for reviewing this report and suggesting corrections and modifications. However, any remaining errors or omissions must remain the responsibility of the author. I would also like to thank Margie Johnson for typing and facilitating the publication of this NBS series of reports on Artificial Intelligence and Robotics.

It is not the intent of the National Bureau of Standards to recommend or endorse any of the manufacturers or organizations named in this report, but simply to attempt to provide an overview of the computer vision field. However, in such a diverse and rapidly changing field, important activities and products may not have been mentioned. Lack of such mention does not in any way imply that they are not also worthwhile. The author would appreciate having any such omissions or oversights called to his attention so that they can be considered for future reports.

Table of Contents

Preface ... i
I. Introduction ... 1
II. Definition ... 5
III. Origins of Computer Vision ... 7
IV. Relation to Human Vision ... 11
V. Applications ... 16
VI. Basis for a General Purpose Image Understanding System ... 19
VII. Basic Paradigms for Computer Vision ... 22
    A. Hierarchical -- Bottom-up Approach ... 22
    B. Hierarchical -- Top-down Approach ... 24
    C. Heterarchical Approach ... 26
    D. Blackboard Approach ... 26
VIII. Levels of Representation ... 28
IX. Research in Model-Based Vision Systems ... 34
X. Industrial Vision Systems ... 38
    A. General Characteristics ... 38
    B. Examples of Efforts in Industrial Visual Inspection Systems ... 40
    C. Examples of Efforts in Industrial Visual Recognition and Location Systems ... 42
    D. Commercially Available Industrial Vision Systems ... 44
XI. Who is Doing It ... 46
    A. Research Oriented ... 46
    B. Commercial Vision Manufacturers ... 46
XII. Who is Funding It ... 49
XIII. Summary of the State-of-the-Art ... 52
    A. Human Vision ... 52
    B. Low and Intermediate Levels of Processing ... 52
    C. Industrial Vision Systems ... 54
    D. General Vision Systems ... 56
        1. Introduction ... 56
        2. Difficulties ... 57
        3. Techniques ... 59
        4. Conclusions ... 60
    E. Visual Tracking ... 62
    F. Overview ... 62
XIV. Current Problems and Issues ... 64
    A. General ... 64
    B. Techniques ... 65
        1. Low Level Processing ... 65
        2. Middle Level Processing ... 65
        3. High Level Processing ... 66
    C. Representation and Modeling ... 66
    D. System Paradigms and Design ... 66
    E. Knowledge Acquisition -- Teaching and Programming ... 67
    F. Sensing ... 67
    G. Planning ... 67
XV. Research Needed ... 68
    A. General ... 68
    B. Techniques ... 68
        1. Low Level Processing ... 68
        2. Middle Level Processing ... 68
        3. High Level Processing ... 69
    C. Representation and Modeling ... 69
    D. System Paradigms and Design ... 69
    E. Knowledge Acquisition -- Teaching and Programming ... 70
    F. Sensing ... 70
    G. Planning ... 70
XVI. Future Trends ... 71
    A. Techniques ... 71
    B. Hardware and Architecture ... 71
    C. AI and General Vision Systems ... 72
    D. Modeling and Programming ... 73
    E. Knowledge Acquisition ... 73
    F. Sensing ... 73
    G. Industrial Vision Systems ... 74
    H. Future Applications ... 75
    I. Conclusion ... 77

References ... 78

Appendices

A. Low Level Features and Representations ... 86
    A. Pixels ... 86
    B. Texture ... 86
    C. Regions ... 86
    D. Edges and Lines ... 87
    E. Corners ... 87
B. Extracting Edges and Areas ... 88
    A. Extracting Edges ... 88
        1. Linear Matched Filtering ... 88
        2. Non-Linear Filtering ... 88
        3. Local Thresholding ... 92
        4. Surface Fitting ... 92
        5. Rotationally Insensitive Operators ... 92
        6. Line Following ... 92
        7. Global Methods ... 92
    B. Edge Finding Variations ... 94
    C. Linking Edge Elements and Thinning Resultant Lines ... 94
    D. Remarks on Edge Finding ... 95
    E. Extracting Regions ... 96
C. Segmentation and Interpretation ... 98
    A. The Computer Vision Paradigm ... 98
    B. An Early Bottom-up System ... 98
    C. Problems with Bottom-up Systems ... 99
    D. Interpretation-Guided Segmentation ... 99
    E. Use of General World Knowledge to Guide Segmentation ... 100
D. 2-D Representation, Description and Recognition ... 103
    A. Pyramids ... 103
    B. Quadtrees ... 103
    C. Statistical Features of a Region ... 103
    D. Boundary Curves ... 104
    E. Run-Length Encoding ... 104
    F. Skeleton Representations and Generalized Ribbons ... 104
    G. Representation by a Concatenation of Primitive Forms ... 105
    H. Relational Graphs ... 105
    I. Recognition ... 105
E. Recovery of Intrinsic Image Characteristics ... 106
    A. Basic Approach ... 106
    B. Shape from Shading ... 109
    C. Stereoscopic Approach ... 110
    D. Photometric Stereo ... 110
    E. Shape from Texture ... 111
    F. Shape from Contour ... 111
    G. Shape and Velocity from Motion ... 111
F. Higher Levels of Representation ... 113
    A. Volumetric Models ... 113
        1. Generalized Cones ... 113
        2. Wire Frame Models ... 113
        3. Polyhedral Models ... 113
        4. Combining 1D, 2D, and 3D Primitives ... 114
        5. Planes and Ellipsoids ... 114
        6. Sets of Prototype Volumes ... 114
    B. Symbolic Descriptions ... 114
    C. Procedural Models ... 115
G. Higher Levels of Interpretation ... 116
H. Tracking ... 118
I. Additional Tables of Model-Based Vision ... 122
J. Tables of Commercially Available Systems ... 135
K. Glossary ... 146
L. Publication Sources for Further Information ... 154

Figures

1. A Framework for Early and Intermediate States in a Theory of Visual Information Processing ... 12
2. An Example of a 2-1/2D Sketch ... 13
3. Examples of Applications of Computer Vision Now Underway ... 17
4. Model-Based Interpretation of Images ... 20
5. Basic Image Understanding Paradigms ... 23
6. Computational Architecture for a General-Purpose Vision System ... 30
7. Organization of a Visual System ... 32
8. Intensity Variations at Step Edges ... 89
9. A Set of Intrinsic Images Derived from a Single Monochrome Intensity Image ... 108

Tables

I. Examples of Non-Linear Filtering for Extracting Edge and Line Elements ... 91
II-1. Model-Based Vision Systems -- ACRONYM ... 36
II-2. Model-Based Vision Systems -- VISIONS ... 37
III. [Additional] Model-Based Vision Systems ... 123-134
IV. Example Efforts in Industrial Visual Inspection Systems ... 41
V. Example Efforts in Industrial Visual Recognition and Location Systems ... 43-44
VI. Commercially Available Industrial Vision Systems ... 136-145
VII. University Organizations Engaged in Computer Vision Research ... 47
VIII. Commercial Vision System Developers ... 48
IX. Visual Tracking Approaches ... 119-120
X. Example Future Applications for Computer Vision Systems ... 76

Computer Vision

I. Introduction

Following the lead of Cohen and Feigenbaum (1982, p. 127), we may consider computer vision to be the information-processing task of understanding a scene from its projected images. Other fields, such as image processing and pattern recognition, also utilize computers in vision tasks. However, we can distinguish the fields by categorizing them as follows:

Image processing is a signal processing task that transforms an input image into a more desirable output image through processes such as noise reduction, contrast enhancement, and registration.

Pattern recognition is a classification task that classifies images into predetermined categories.

Computer vision is an image understanding task that automatically builds a description not only of the image itself, but of the three-dimensional scene that it depicts.
The term scene analysis has been used in the past to emphasize the distinction between processing two-dimensional images, as in pattern classification, and seeking information about three-dimensional scenes.
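The distinction between these task categories can be made concrete with a small sketch (an illustration added for this overview, not an example from the report): an image-processing operation maps an image to another image, while a pattern-recognition operation maps an image to a category label. The function names and the tiny test image here are purely hypothetical.

```python
# Illustrative sketch: image processing (image -> image) versus
# pattern recognition (image -> category label). A grayscale image is
# represented as a nested list of pixel intensities in 0..255.

def mean_filter(img):
    """Image processing: 3x3 mean smoothing. Input is an image; output
    is another image of the same size (a noise-reduction transform)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average over the pixel's neighborhood, clipped at the borders.
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

def classify_brightness(img, threshold=128):
    """Pattern recognition: map the whole image to one of two
    predetermined categories based on a simple feature (mean intensity)."""
    pixels = [v for row in img for v in row]
    return "light" if sum(pixels) / len(pixels) >= threshold else "dark"

image = [[0, 0, 255],
         [0, 255, 255],
         [255, 255, 255]]

smoothed = mean_filter(image)       # still an image (image processing)
label = classify_brightness(image)  # a category label (pattern recognition)
```

Computer vision, by contrast, would go beyond both steps: it would use such low-level results to build a description of the three-dimensional scene the image depicts, which no single function of this kind captures.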