CAOS Smart Camera-Based Robust Low Contrast Image Recovery Over 90 dB Scene Linear Dynamic Range


CAOS Smart Camera-based Robust Low Contrast Image Recovery over 90 dB Scene Linear Dynamic Range
https://doi.org/10.2352/ISSN.2470-1173.2020.7.ISS-226
© 2020, Society for Imaging Science and Technology

Nabeel A. Riza and Mohsin A. Mazhar; School of Engineering, University College Cork; Cork, Ireland

Abstract
Experimentally demonstrated for the first time is Coded Access Optical Sensor (CAOS) camera empowered robust and true white light High Dynamic Range (HDR) scene low contrast target image recovery over the full linear dynamic range. The 90 dB linear HDR scene uses a 16 element custom designed test target with low contrast 6 dB step scaled irradiances. Such camera performance is highly sought after in catastrophic failure avoidance mission critical HDR scenarios with embedded low contrast targets.

Introduction
It is well known that both natural and human-made scenes can have critical optical information distributed across different irradiance bands within the full HDR of the irradiance data [1]. Specifically, these critical irradiance values within a specific sub-range of the full irradiance map can display low image contrast. For robust and true image contrast recovery over the full HDR of a camera imaged scene, a linear Camera Response Function (CRF) is required, i.e., a linear input-output system response over the entire HDR. Considering linear HDR values of ≥ 90 dB, widely deployed commercial CMOS [2-7], CCD [8-9] and FPA (Focal Plane Array) [10] based image sensors are hard pressed to deliver robust and true irradiance maps under low contrast conditions over a full HDR. A fundamental reason for this limitation is the difficulty of designing and physically realizing in hardware a multi-pixel optical sensor opto-electronic array device with high linearity, uniform sensitivity, and adequate Signal-to-Noise Ratio (SNR) over the full HDR [11]. Indeed, recent experiments conducted with an up-to 87 dB HDR commercial CMOS sensor camera from Thorlabs have shown limitations in recovery of robust true linear HDR image data [12]. To provide an update on HDR CMOS sensor technology, Table 1 lists recent CMOS sensor HDR numbers and the HDR generation methods reported by some vendors.

Table 1: Example Commercial HDR Silicon CMOS Sensors.

Company | HDR Method | Range
New Imaging Technologies (France), 2016 | Log Compression | 140 dB
Analog Devices (USA), 2017 | Log Compression | 130 dB
Photonfocus (Swiss), 2016 | Linear; Log Compression | 60 dB; 60 to 120 dB
Omnivision (USA), 2016 | Linear (Deep QW); Double Exposure | 94 dB; 94 to 120 dB
ON-Semi (USA), 2018 | Linear (1 exposure); 4 Exposures | 95 dB; 140 dB
Melexis-Cypress-Sensata (USA), 2017 | Piece-wise Linear (Multi-Slope using Pixel Resets) | 150 dB

As proposed recently, CAOS is an alternate way to design an optical camera based on the principles of multiple access RF/optical wireless networks [12-14]. The origins of the CAOS camera and its forerunner, the agile pixel imager, in the context of prior art are described in refs. 15 and 16 [15-16]. The seeds of the CAOS invention were laid in 1985 when N. A. Riza, as an MS/PhD student at Caltech, attended the class of the legendary physics Nobel laureate Professor Richard P. Feynman (see Fig. 1). Professor Feynman basically said: there is Radio Moscow in this room, there is Radio Beijing in this room, there is Radio Mexico City in this room; then he paused and said: aren't we humans fortunate that we can't sense all these signals; if we did, we would surely go mad with the massive overload of electromagnetic radiation (radio signals) around us!

Figure 1. Shown is Prof. Richard P. Feynman (centre), 1965 Physics Nobel Prize, in his 1985 graduate class at Caltech, where Prof. Feynman's words many years later inspired N. A. Riza (2nd from left) to invent the CAOS camera.

These words of wisdom led to the CAOS invention, as even today radio signals are all around us, each encoded with its unique Radio Frequency (RF) signature. So when one desires, one can "hear" these RF signals if one uses the correct time-frequency codes to decode these super weak signals using coherent electronic signal processing via sensitive RF receivers. In fact, this encode-decode operation happens in today's RF mobile cellular phone network, where each mobile user has a specific assigned code, i.e., a telephone number. Many users simultaneously use the RF spectrum, and multiple access techniques are used to acquire the desired signals. One can apply the same principles of a multi-access multiple user RF network to the optical imaging task, treating optical pixels in the image space as independent mobile users assigned specific telephone codes using a 2-D time-frequency optical array modulating device. Next, an optical antenna (i.e., lens) can be used to catch all these time-frequency encoded optical signals/pixel data, which are converted to an AC signal by a sensitive optical-to-electrical converter, e.g., a high speed point photo-detector with an amplifier. This AC signal is subjected to sensitive AC-style (in the Hz domain vs. the DC domain) electronic Digital Signal Processing (DSP) for decoding pixel data, including using multiple access decoding methods for simultaneous pixels detection. By doing so, many optical pixels can be observed with extremely high linear Dynamic Range (DR) and SNR control, realizing the CAOS camera. Combined with classic multi-pixel optical sensors that detect DC (i.e., unmodulated) light maps, one forms the CAOS smart camera, which inherently also makes a fault-tolerant robust camera system with full spectrum capabilities and extremely secure operations.
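To make this multiple-access encode-decode idea concrete, below is a minimal numerical sketch, not the authors' implementation: each pixel of interest is assigned an orthogonal Walsh time code, a single point detector sums all coded pixel signals into one AC series, and matched-code correlation decoding simultaneously recovers pixel irradiances spaced 6 dB apart over a ~90 dB span. The code family, pixel count, and noise level are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of CAOS-style CDMA pixel encoding/decoding.
# Illustrative assumptions only (Walsh code family, pixel count,
# noise level); this is not the authors' implementation.

def walsh_codes(order: int) -> np.ndarray:
    """Return a 2**order x 2**order Hadamard matrix of +/-1 Walsh codes."""
    h = np.array([[1]])
    for _ in range(order):
        h = np.block([[h, h], [h, -h]])
    return h

n_pixels = 16                                # pixels of interest (POI)
codes = walsh_codes(5)[1:n_pixels + 1]       # skip the all-ones (DC) row

# Hypothetical POI irradiances in 6 dB (2:1) steps spanning ~90 dB:
irradiances = 10.0 ** (-6.0 * np.arange(n_pixels) / 20.0)

# The DMD time-modulates each POI with its code; the point photo-detector
# sums all coded pixel signals into a single AC time series (plus noise).
rng = np.random.default_rng(7)
detector = codes.T @ irradiances + rng.normal(0.0, 1e-7, codes.shape[1])

# Matched-code correlation decoding recovers every POI irradiance at once.
recovered = codes @ detector / codes.shape[1]

for true, est in zip(irradiances, recovered):
    print(f"true {true:.3e}  recovered {est:.3e}")
```

The all-ones Hadamard row is skipped because it corresponds to an unmodulated (DC) channel, which an AC-style (Hz domain) receiver of the kind described above would reject.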
There is also another similarity between CAOS and the RF wireless multi-access mobile phone network, i.e., both operate with an intrinsic HDR design. Fig. 2 shows key design parameters of one cell zone of the RF network with a radius of d1. Mobile users are labelled as M1, M2, M3, …, Mn, where n is the mobile user's number. The nth mobile user Mn is located at a distance dn from the cell base station antenna, which transmits PT dBm of radiated RF power with all user communication channels within its coded bandwidth. The nth mobile RF receiver harvests Pn RF power, which drops as dn increases for users farther from the base station antenna. There are various standards to which the RF network is designed. For example, for the GSM network standard, the minimum detectable RF signal power at the mobile receiver is -102 dBm [17]. For the example in Fig. 2, the M1 user is farthest from the base station antenna, hence P1 = -102 dBm. The transmitted RF power PT from the base antenna can be as high as 43 dBm for suburban or rural areas [17], which implies that the ratio between the transmitted RF power and the minimum received RF power is 43 - (-102) = 145 dB, indeed a HDR.
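A quick arithmetic check of this link budget, using the GSM values quoted above:

```python
# Link-budget arithmetic for the GSM cell example quoted above.
p_tx_dbm = 43.0     # maximum base station transmit power (suburban/rural)
p_min_dbm = -102.0  # minimum detectable RF power at the mobile receiver

# Dynamic range between strongest transmitted and weakest received signal:
print(f"RF network dynamic range: {p_tx_dbm - p_min_dbm:.0f} dB")  # 145 dB
```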
In other words, the RF network is designed for HDR operations within a cell, as users can be very close to the base station antenna as well as on the boundary of the cell zone. Hence, the received RF power radiation map of all users covers a HDR, much like the HDR optical image map captured by CAOS using the principles of a RF multi-user wireless phone network. Therein is showcased the similarity between an HDR optical image pixel caught by the CAOS camera and the HDR RF radiation power caught by the mobile user's RF receiver.

Figure 2. Shown is the RF Wireless Multi-Access Mobile Phone Network operations within one cell zone. Both CAOS and the RF Network operate with an intrinsic HDR design.

Also, the CAOS smart camera design in one sense follows nature's human eye rods and cones dual photo-cells design, as the CAOS smart camera features image fusion to produce optimized (i.e., linear response and adequate SNR) imaging using CAOS pixels and CMOS/CCD/FPA pixels. It has also been shown that the CAOS camera is intrinsically a highly programmable smart camera with a linear camera response that so far has reached a 177 dB linear Extreme Dynamic Range (EDR) [12]. Given CAOS's linear CRF feature, it is naturally suited for low contrast image recovery over an EDR. Given this low contrast image detection capability, which can be critical for many applications such as search and rescue, medical imaging, industrial parts inspection and still photography, the present paper for the first time experimentally demonstrates the low contrast image recovery capability of the CAOS camera over a scene instantaneous EDR.

Figure 3. Shown is the top view of the CAOS smart camera [12].

Fig. 3 highlights the design of the CAOS smart camera using key components such as the TI Digital Micromirror Device (DMD), point Photo-Detector Module (PD-M), point Photo-Multiplier Tube Module (PMT-M), lenses (L), shutter (S), aperture (A1), CMOS sensor, Mirror-Motion Module (MM-M), and Variable Optical Attenuators (VOAs). The DMD acts as the required space-time-frequency optical coding device to generate the wireless style CAOS signals required for linear HDR imaging via high speed Digital Signal Processing (DSP). For example, Pixels of Interest (POI) in the image incident on the DMD plane in the CAOS camera can be simultaneously encoded with their own time-frequency codes.

To test low contrast image recovery over a HDR, one requires a low contrast resolution HDR chart (e.g., see Fig. 4) that has pairs of image patches in the HDR scene with 2:1 relative irradiance values across the entire test HDR [19]. A 2:1 relative irradiance step between two adjacent patches gives a 6 dB DR step between the two test patches. In other words, for effective low contrast image recovery of an HDR scene, the camera must be able to correctly measure all irradiance value 6 dB DR steps across the full HDR.
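To illustrate this chart design, here is a small sketch, assuming the 20·log10 dynamic range convention implied by the 2:1-step = 6 dB statement and a hypothetical 16-patch target like the one in the Abstract; it generates the patch irradiances and verifies the step and span values.

```python
import numpy as np

# Sketch of a 16-element, 2:1-step low contrast HDR test target, following
# the description above (illustrative values, not the authors' chart data).
n_patches = 16
irradiance = 0.5 ** np.arange(n_patches)   # 2:1 step between adjacent patches

step_db = 20.0 * np.log10(irradiance[0] / irradiance[1])
span_db = 20.0 * np.log10(irradiance[0] / irradiance[-1])

print(f"adjacent-patch DR step: {step_db:.2f} dB")  # ~6.02 dB
print(f"full-chart linear span: {span_db:.2f} dB")  # 15 steps -> ~90.3 dB
```

With 16 patches there are 15 such 2:1 steps, so the chart spans roughly 90 dB, matching the linear HDR figure in the paper's title.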