DreamCam: A modular FPGA-based smart camera architecture

Merwan Birem, François Berry


Institut Pascal – UMR 6602 UBP/CNRS – Campus des Cézeaux, 24 Avenue des Landais, 63177 Aubiere Cedex, France

Journal of Systems Architecture, Elsevier, 2014, 60 (6), pp. 519–527. doi:10.1016/j.sysarc.2014.01.006
HAL Id: hal-01625648 (https://hal.archives-ouvertes.fr/hal-01625648), submitted on 28 Oct 2017.

Article history: Received 9 July 2012; received in revised form 9 October 2013; accepted 21 January 2014; available online 31 January 2014.

Corresponding author: M. Birem. Tel.: +33 760517676. E-mail address: [email protected]

Abstract

DreamCam is a modular smart camera built around an FPGA-based main processing board. The core of the camera is an Altera Cyclone III FPGA associated with a CMOS imager and six private RAM blocks. The main novel feature of our work consists in proposing a new smart camera architecture and several modules (IPs) to efficiently extract and sort visual features in real time. In this paper, extraction is performed by a Harris and Stephens filter associated with customized modules. These modules extract, select and sort visual features in real time. As a result, DreamCam (with such a configuration) provides a description of each visual feature in the form of its position and the grey-level template around it.

© 2014 Elsevier B.V. All rights reserved.

Keywords: Smart camera; Image processing; Interest points; VHDL; Harris and Stephens algorithm; Field Programmable Gate Array (FPGA); Hardware implementation; Real-time system

1. Introduction

Intelligent robots are becoming increasingly important. One of the key components of an intelligent robot is its ability to understand its environment and recognize its position. In the robot community, most researchers use information from sources such as odometry, laser range-finders, and sonar sensors. In contrast, in the vision community, new methods using information from camera sequences are being developed; see [1,2].

Using an entire image as an observation is difficult or impossible owing to the high resolution, typically of the order of a hundred thousand pixels. Thus, to identify interest points, feature extraction, which is the first crucial step, should be used [3].

The algorithms that extract features are time-consuming, which is a huge drawback when developing real-time applications. One solution to this problem is the use of dedicated hardware for the algorithms, such as the Field Programmable Gate Array (FPGA), which can provide dedicated functional blocks that perform complex image processing operations in parallel. In addition to the parallel properties of FPGAs, which lead to a high throughput, an FPGA has a small footprint and low power consumption, which makes it ideal for mobile applications.

FPGAs have achieved rapid acceptance and growth over the past decade because they can be used in a very wide range of applications [4]. Although they are slower than traditional Application Specific Integrated Circuits (ASICs), their design flexibility is a major advantage: users can change the program as desired at any stage of the experiment, thereby saving time and cost.

Throughout this paper, we propose a customized smart sensor based on a CMOS imager. Its main original feature is system management by a System-on-Chip integrated in an FPGA. Our approach allows most early perception processes to be performed in the main sensing unit (the FPGA) and sends only the main sensing features to the host computer, so as to reduce the classic communication bottleneck. Another advantage of this method is real-time feedback on the sensor: the different parameters can be actively tuned to optimize perception and render it similar to primate perception. For instance, in strong light the pupil contracts and becomes small, but still allows light to be cast over a large part of the retina [5]. This embedded sensor can be considered as a reactive architecture and, above all, as a research platform for the smart sensor.

To highlight the novel elements in our work, we present in the following section previous research carried out in this field. In Section 3, we give a broad overview of the smart sensor. An application for the extraction of visual features based on the Harris and Stephens algorithm is presented in Section 4. In this work, we consider feature extraction as a combination of feature detection followed by description: feature detection consists in finding the interest points (features) in the image, whereas feature description consists in representing them. The final goal is to compare them with other interest points for applications such as navigation, object recognition, etc. In Section 5 we include results that support the relevance of our approach, and in Section 6 we give a conclusion.

2. Previous work

Comparison of our work with previous works can be done on two levels.

System level: At this level, we propose to study the most popular smart cameras developed in the last decade. As a reminder, a smart camera is defined as a vision system whose fundamental function is the production of a high-level understanding of the imaged scene. A camera is called a "smart camera" when it performs application-specific information processing (ASIP); the output of such cameras is either the features extracted from the captured images or a high-level description of the scene. More details about such systems can be found in the articles by Wolf [6] and Shi and Lichman [7]. Table 1 presents an overview of the most common smart camera platforms found in the literature. Other works have been based on silicon-integrated smart cameras, in which image sensing and processing are integrated on a single silicon die. This kind of device is called a "vision chip"; the interested reader can find a detailed description in [14]. Among these works, the SCAMP project by P. Dudek is one of the best known [15]. Other contributions in the same vein can be found in [16,17].

Algorithm level: As explained below, we implemented a "Harris and Stephens"-based algorithm to extract visual features in our smart camera. At the output of this extraction step, several modules (filtering, sorting, description) were added to provide high-level features from the image. Thus, in this part, we propose a short overview of published papers about the implementation of the Harris and Stephens detector on FPGAs. To our knowledge, there is no work proposing a full approach with detection, filtering, sorting and description steps: selecting only the most relevant features and describing each one by a grey-level template. These last steps are fundamental in computer vision, in which the input is not images but semantic features. Consequently, the works presented here represent a fragmented bibliography mainly focused on the Harris detection (see Table 2).

Table 2
Hardware implementations of the H&S algorithm.

  Author(s)   Sensor                      Processor                 Resol. @ Fps
  [19]        EyeBot M6, OV6630           Xilinx Spartan-3E FPGA    352 × 288 @ 7.37
  [18]        Micron MT9P031 CMOS         Xilinx Virtex-II FPGA     320 × 480 @ 27
  [20]        Video frame from a memory   Xilinx Virtex-5 FPGA      640 × 480 @ 266

The work presented in [18] implements a Harris detection on an FPGA connected to a stereo rig.

3. Hardware description of the "DreamCam"

The goal of artificial vision research is to exploit images or image sequences generated by sensors in order to effectively translate an environment. From this translation, different processes can be used to identify or inspect an object, control robots, etc. The first way to treat the vision problem is to carry out passive vision approaches, which is the classic way to analyze images. In contrast, another approach exists, known as "active vision", which is the result of an attempt to simulate the human visual system.

Based on this concept, our approach consists in integrating the control of the imager in the perception loop, especially in the early perception processes. By integrating early processing close to the imager, a reactive sensor can be designed. With such a smart sensor, it is possible to perform basic processing and the selection of relevant features [4]. For example, FPGAs have already been used to accelerate real-time point tracking [21], stereo vision [22], color-based object detection [23], and video and image compression [24]. In our case, the notion of a system on programmable chip (SOPC) describes the whole system.

Most vision applications are focused on several small image areas, and consequently acquisition of the whole image is not always necessary. It is evident, therefore, that one of the main goals of an efficient vision sensor is to select regions of interest (ROI) in the image.
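The detector at the heart of this pipeline can be illustrated in software before being committed to hardware. The following Python sketch is an illustrative software model only, not the authors' VHDL implementation; the window size and the constant k = 0.04 are conventional choices for the Harris and Stephens detector, not values taken from the paper. It computes the response R = det(M) − k·trace(M)² per pixel, where M is the gradient covariance matrix accumulated over a local window:

```python
import numpy as np

def box_sum(a, w):
    """Unnormalized w x w box filter ('same' output size, zero padding)."""
    pad = w // 2
    ap = np.pad(a, pad)
    out = np.zeros_like(a)
    for dy in range(w):
        for dx in range(w):
            out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_response(img, k=0.04, win=5):
    """Harris & Stephens response R = det(M) - k * trace(M)^2, where M is
    the gradient covariance matrix summed over a win x win window."""
    iy, ix = np.gradient(img.astype(np.float64))  # row and column gradients
    ixx = box_sum(ix * ix, win)
    iyy = box_sum(iy * iy, win)
    ixy = box_sum(ix * iy, win)
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace * trace
```

Corners score high (both eigenvalues of M are large), edges score negative, and flat regions score near zero, which is why a thresholding and sorting stage naturally follows the detector.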
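The post-detection stages discussed above (filtering, sorting, and description of each feature by the grey-level template around its position) can likewise be sketched in software. The function below is a hypothetical illustration of that pipeline under simple assumptions, not a model of the paper's hardware modules; the function name, the threshold, and the patch size are all choices made for the example:

```python
import numpy as np

def extract_features(img, response, n_best=8, patch=7, thresh=0.0):
    """Keep the n_best strongest responses above thresh and describe each
    feature by the grey-level template (patch x patch) around its position."""
    half = patch // 2
    r = response.astype(np.float64).copy()
    # Ignore borders where a full patch would not fit.
    r[:half, :] = r[-half:, :] = -np.inf
    r[:, :half] = r[:, -half:] = -np.inf
    ys, xs = np.where(r > thresh)                   # filtering step
    order = np.argsort(r[ys, xs])[::-1][:n_best]    # sorting step (strongest first)
    features = []
    for i in order:
        y, x = ys[i], xs[i]
        template = img[y - half:y + half + 1, x - half:x + half + 1]
        features.append(((y, x), template))         # position + grey-level template
    return features
```

The output matches the form the paper attributes to DreamCam: a list of (position, template) pairs rather than a full image, which is what keeps the camera-to-host bandwidth low.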