CAOS Smart Camera-based Robust Low Contrast Image Recovery over 90 dB Scene Linear Dynamic Range
https://doi.org/10.2352/ISSN.2470-1173.2020.7.ISS-226 © 2020, Society for Imaging Science and Technology

Nabeel A. Riza and Mohsin A. Mazhar; School of Engineering, University College Cork; Cork, Ireland

Abstract

Experimentally demonstrated for the first time is Coded Access Optical Sensor (CAOS) camera empowered robust and true white light High Dynamic Range (HDR) scene low contrast target image recovery over the full linear dynamic range. The 90 dB linear HDR scene uses a 16 element custom designed test target with low contrast 6 dB step scaled irradiances. Such camera performance is highly sought after in catastrophic failure avoidance mission critical HDR scenarios with embedded low contrast targets.

Introduction

It is well known that both natural and human-made scenes can have critical optical information distributed across different irradiance bands within the full HDR of the irradiance data [1]. Specifically, these critical irradiance values within a specific sub-range of the full irradiance map can display low image contrast. For robust and true image contrast recovery over the full HDR of a camera imaged scene, a linear Camera Response Function (CRF) is required, i.e., a linear input-output system response over the entire HDR. Considering linear HDR values of ≥ 90 dB, highly deployed commercial CMOS [2-7], CCD [8-9] and FPA (Focal Plane Array) [10] based image sensors are hard pressed to deliver robust and true irradiance maps under low contrast conditions over a full HDR. A fundamental reason for this limitation is the difficulty in designing and physically realizing in hardware a high linearity, uniform sensitivity, and adequate Signal-to-Noise Ratio (SNR) multi-pixel optical sensor opto-electronic array device over the full HDR [11]. Indeed, recent experiments conducted with an up-to 87 dB HDR commercial CMOS sensor camera from Thorlabs have shown limitations in the recovery of robust true linear HDR image data [12]. To provide an update on HDR CMOS sensor technology, Table 1 lists recent CMOS sensor HDR numbers and the HDR generation methods reported by some vendors.

Table 1: Example Commercial HDR Silicon CMOS Sensors.

Company | HDR Method | Range
New Imaging Technologies (France), 2016 | Log Compression | 140 dB
Analog Devices (USA), 2017 | Log Compression | 130 dB
Photonfocus (Swiss), 2016 | Linear | 60 dB
Photonfocus (Swiss), 2016 | Log Compression | 60 to 120 dB
Omnivision (USA), 2016 | Linear (Deep QW) | 94 dB
Omnivision (USA), 2016 | Double Exposure | 94 to 120 dB
ON-Semi (USA), 2018 | Linear (1 exposure) | 95 dB
ON-Semi (USA), 2018 | 4 Exposures | 140 dB
Melexis-Cypress-Sensata (USA), 2017 | Piece-wise Linear (Non-linear, Multi-Slope using Pixel Resets) | 150 dB
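As a rough numerical illustration of why a linear CRF matters for low contrast recovery (and of how the log-compression approaches listed in Table 1 trade linearity for range), the toy sketch below digitizes a 2:1 (about 6 dB) patch pair at the dim end and at the bright end of a 90 dB scene. The 8-bit quantization and the exact response curves are assumptions chosen only for illustration; they do not model any specific sensor in the table.

    # Illustrative toy model (not the authors' measurement): how a 2:1 (~6 dB)
    # low-contrast irradiance step fares over a 90 dB scene when the camera
    # response is linear versus log-compressed, after 8-bit digitization.
    # Uses the paper's convention that 2:1 irradiance ~ 6 dB, i.e., dB = 20*log10(ratio).
    import numpy as np

    i_min, i_max = 1.0, 10 ** (90 / 20)        # 90 dB scene: ~31623:1 irradiance range
    pairs = {"dim pair": (i_min, 2 * i_min), "bright pair": (i_max / 2, i_max)}

    def linear_counts(i, bits=8):
        # Ideal linear CRF, digitized to 'bits' levels over the full scene range.
        return np.round((2 ** bits - 1) * i / i_max)

    def log_counts(i, bits=8):
        # Toy log-compressed CRF spanning the same range, digitized identically.
        return np.round((2 ** bits - 1) * np.log10(i / i_min) / np.log10(i_max / i_min))

    for name, (a, b) in pairs.items():
        print(f"{name}: linear CRF counts {linear_counts(a):.0f} vs {linear_counts(b):.0f}, "
              f"log CRF counts {log_counts(a):.0f} vs {log_counts(b):.0f}")

    # Typical output: the dim 2:1 pair collapses to identical counts under the 8-bit
    # linear response (0 vs 0), while the log response separates both pairs by the
    # same ~17 counts but no longer reports true linear irradiance, so recovering the
    # 2:1 ratio depends on an accurately calibrated non-linear CRF.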
As proposed recently, CAOS is an alternate way to design an optical camera based on the principles of multiple access RF/optical wireless networks [12-14]. The origins of the CAOS camera and its forerunner, the agile pixel imager, in the context of prior art are described in ref. 15 and ref. 16 [15-16]. The seeds of the CAOS invention were laid in 1985 when N. A. Riza, as an MS/PhD student at Caltech, attended the class of the legendary physics Nobel laureate Professor Richard P. Feynman (see Fig. 1). Professor Feynman basically said: there is Radio Moscow in this room, there is Radio Beijing in this room, there is Radio Mexico City in this room; then he paused and said: aren't we humans fortunate that we can't sense all these signals; if we did, we would surely go mad with the massive overload of electromagnetic radiation (radio signals) around us! These words of wisdom led to the CAOS invention, as even today radio signals are all around us, each encoded with its unique Radio Frequency (RF) signature. So when one desires, one can "hear" these RF signals if one uses the correct time-frequency codes to decode these super weak signals using coherent electronic signal processing via sensitive RF receivers. In fact, this encode-decode operation happens in today's RF mobile cellular phone network, where each mobile user has a specific assigned code, i.e., a telephone number. Many users simultaneously use the RF spectrum, and multiple access techniques are used to acquire the desired signals. One can apply the same principles of a multi-access multiple user RF network to the optical imaging task, where one can treat optical pixels in the image space as independent mobile users assigned specific telephone codes using a 2-D time-frequency optical array modulating device. Next, an optical antenna (i.e., lens) can be used to catch all these time-frequency encoded optical signals (the pixel data set), which are converted to an AC signal by a sensitive optical-to-electrical converter, e.g., a high speed point photo-detector with an amplifier. This AC signal is subjected to sensitive AC-style (in the Hz domain vs the DC domain) electronic Digital Signal Processing (DSP) for decoding pixel data, including using multiple access decoding methods for simultaneous pixel detection. By doing so, many optical pixels can be observed with extremely high linear Dynamic Range (DR) and SNR control, realizing the CAOS camera. Combined with classic multi-pixel optical sensors that detect DC (i.e., unmodulated) light maps, one forms the CAOS smart camera that inherently also makes a fault-tolerant robust camera system with full spectrum capabilities and extremely secure operations.

Figure 1. Shown is Prof. Richard P. Feynman (centre), 1965 Physics Nobel Prize, in his 1985 graduate class at Caltech, where Prof. Feynman's words many years later inspired N. A. Riza (2nd from left) to invent the CAOS camera.
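The encode-decode chain described above can be pictured with a small numerical sketch. The code below is an illustrative model only, not the authors' DMD drive or DSP software: it uses a frequency-division flavor of the time-frequency coding in which each pixel of interest is toggled on/off at its own assigned rate, the point detector records the summed AC signal, and FFT-based Hz-domain processing recovers each pixel's irradiance from its frequency bin. The sampling rate, toggle frequencies, irradiances and noise level are arbitrary assumptions.

    # Minimal numerical sketch of CAOS-style multiple-access pixel encoding/decoding
    # (illustrative assumptions throughout; not the authors' implementation).
    import numpy as np

    fs = 2000.0                       # point detector sampling rate in Hz (assumed)
    n = 4000                          # samples in one capture window (2 s here)
    t = np.arange(n) / fs

    # Three pixels of interest spanning a wide irradiance range (arbitrary units).
    irradiance = {"P1": 1.0, "P2": 1e-2, "P3": 3e-4}
    carrier_hz = {"P1": 100.0, "P2": 170.0, "P3": 260.0}   # assigned DMD toggle rates

    # The DMD toggles each pixel's micromirror on/off (a 0/1 square-wave code);
    # the single point detector sees the sum of all coded pixels plus noise.
    detector = np.zeros(n)
    for p in irradiance:
        code = 0.5 * (1.0 + np.sign(np.sin(2 * np.pi * carrier_hz[p] * t)))
        detector += irradiance[p] * code
    detector += 1e-5 * np.random.randn(n)                  # additive detector noise (assumed)

    # Hz-domain decoding: read each pixel at its assigned frequency bin. The factor
    # pi undoes the single-sided FFT scaling (x2) and the 2/pi fundamental amplitude
    # of a 50% duty square wave, giving an approximate irradiance estimate per pixel.
    spectrum = np.fft.rfft(detector) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    for p, f in carrier_hz.items():
        k = int(np.argmin(np.abs(freqs - f)))
        est = np.pi * np.abs(spectrum[k])
        print(f"{p}: true {irradiance[p]:.4g}, recovered ~{est:.4g}")

A code-division variant would replace the fixed toggle rates with orthogonal on/off code sequences and correlation decoding; in either flavor the pixel data are extracted from the AC (Hz-domain) signal rather than from a DC light level.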
There is also another similarity between CAOS and the RF wireless multi-access mobile phone network, i.e., both operate with an intrinsic HDR design. Fig. 2 shows key design parameters of one cell zone of the RF network with a radius of d1. Mobile users are labelled as M1, M2, M3, …, Mn, where n is the mobile user number. The nth mobile user Mn is located at a distance dn from the cell base station antenna, which transmits PT dBm of radiated RF power with all user communication channels within its coded bandwidth. The nth mobile RF receiver harvests Pn RF power, which drops as dn increases for users farther from the base station antenna. There are various standards to which the RF network is designed. For example, for the GSM network standard, the minimum detectable RF signal power at the mobile receiver is -102 dBm [17]. For the example in Fig. 2, the M1 user is farthest from the base station antenna, hence P1 = -102 dBm. The transmitted RF power PT from the base antenna can be as high as 43 dBm for suburban or rural areas [17], which implies that the ratio between the transmitted RF power and the minimum received RF power is 102 + 43 = 145 dB, indeed a HDR. In other words, the RF network is designed for HDR operations within a cell, as users can be very close to the base station antenna as well as on the boundary of the cell zone. Hence, the received RF power radiation map of all users covers a HDR, much like the HDR optical image map captured by CAOS using the principles of an RF multi-user wireless phone network. Therein is showcased the similarity between an HDR optical image pixel caught by the CAOS camera and the HDR RF radiation power caught by the mobile user's RF receiver.

Figure 2. Shown is the RF Wireless Multi-Access Mobile Phone Network operations within one cell zone. Both CAOS and the RF Network operate with an intrinsic HDR design.

Also, the CAOS smart camera design in one sense follows nature's human eye rods and cones dual photo-cells design, as the CAOS smart camera features image fusion to produce optimized (i.e., linear response and adequate SNR) imaging using CAOS pixels and CMOS/CCD/FPA pixels. It has also been shown that the CAOS camera is intrinsically a linear-response, highly programmable smart camera that so far has reached a 177 dB linear Extreme Dynamic Range (EDR) [12]. Given CAOS's linear CRF feature, it is naturally suited for low contrast image recovery over an EDR. Given this low contrast image detection capability, which can be critical for many applications such as search and rescue, medical imaging, industrial parts inspection and still photography, the present paper for the first time experimentally demonstrates the low contrast image recovery capability of the CAOS camera over a scene instantaneous EDR.

Figure 3. Shown is the top view of the CAOS smart camera [12].

Fig. 3 highlights the design of the CAOS smart camera using key components such as the TI Digital Micromirror Device (DMD), point Photo-Detector Module (PD-M), point Photo-Multiplier Tube Module (PMT-M), lenses (L), shutter (S), aperture (A1), CMOS sensor, Mirror-Motion Module (MM-M), and Variable Optical Attenuators (VOAs). The DMD acts as the required space-time-frequency optical coding device to generate the wireless style CAOS signals required for linear HDR imaging via high speed Digital Signal Processing (DSP). For example, Pixels of Interest (POI) in the image incident on the DMD plane in the CAOS camera can be simultaneously encoded with the

To test low contrast image recovery over a HDR, one requires a low contrast resolution HDR chart (e.g., see Fig. 4) that has pairs of image patches in the HDR scene with 2:1 relative irradiance values across the entire test HDR [19]. A 2:1 relative irradiance step between two adjacent patches gives a 6 dB DR step between the two test patches. In other words, for effective low contrast image recovery of an HDR scene, the camera must be able to correctly measure all irradiance value 6 dB DR steps across the full HDR.
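The 2:1 per-step irradiance ratio and the 16 element target quoted in the abstract can be tied together with a quick arithmetic check. The sketch below is my own check, assuming the paper's convention that DR in dB equals 20*log10(irradiance ratio), under which 2:1 is about 6 dB, and assuming the 16 patches form a simple 2x ladder, which is one plausible realization of the described target rather than its confirmed layout.

    # Quick arithmetic check (illustrative assumption: a 2x ladder of 16 patches).
    import numpy as np

    patches = 2.0 ** np.arange(16)                      # relative irradiances 1, 2, 4, ..., 2^15
    step_db = 20 * np.log10(patches[1] / patches[0])    # adjacent 2:1 step
    span_db = 20 * np.log10(patches[-1] / patches[0])   # brightest vs dimmest patch
    print(f"adjacent-patch step: {step_db:.2f} dB")     # ~6.02 dB
    print(f"full target span:    {span_db:.2f} dB")     # ~90.3 dB across 16 patches

With 15 intervals of roughly 6 dB each, such a ladder spans roughly 90 dB, consistent with the 90 dB linear HDR test scene used in this work.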