
A study on how phones process images using multiple lenses

Sheetal Chavan, Pragati A. Rajiwade
Department of MSc-IT, SIES (Nerul) College of Arts, Science and Commerce
[email protected], [email protected]

I. Abstract

In a multi-camera phone, each camera captures an image at the same time, with each camera having a different lens or sensor that captures the scene in a different way. Image processing software running on the phone then integrates them all together, creating an image that accentuates the best aspect of each photo. The interest of this paper is to study how images are rendered using multiple lenses, based on various images being captured with different camera lenses, and how the image processing engine handles them.

II. Introduction

For a prolonged phase, phones had a single camera unit that took ordinary pictures. With each passing year, manufacturers allocated additional megapixels to give higher resolutions, but there comes a point where adding more megapixels isn't sufficient to set one camera apart from another. Multi-camera setups in mobile phones came into existence because a conventional single camera cannot suffice the need for optical zoom, portrait modes, better HDR, an ultra-wide-angle perspective, etc.

One thing multi-camera phones are used for is to enhance the phone's zoom capabilities. In a true optical zoom, a mechanical assembly shifts the focal length of the lens. A lens that shifts from 100 millimetres to 400 millimetres is referred to as a 4x zoom.
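The "4x" figure quoted above is simply the ratio of the longest to the shortest focal length. A tiny helper (the function name is ours, for illustration) makes the arithmetic explicit:

```python
def optical_zoom(min_focal_mm, max_focal_mm):
    """Optical zoom factor = longest focal length / shortest focal length."""
    return max_focal_mm / min_focal_mm

# The 100 mm to 400 mm lens mentioned above:
zoom = optical_zoom(100, 400)  # 4.0, i.e. a "4x zoom"
```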

Multi-camera phones use different sensors to achieve a much sharper image. One dual-camera configuration uses one camera to take colour photos and another to take monochrome photos. A monochrome sensor doesn't use a colour filter array; this enables more photons to reach the sensor, making it more sensitive to light, which implies that the monochrome image sensor has better low-light performance. Additionally, not having a colour filter array (CFA) yields a sharper image with a higher effective resolution. Using these two sensors, monochrome and CFA, together, the phone's image processing software combines the best of each snap to extensively improve the end product, rendering an image much sharper, brighter and clearer than either camera can accomplish on its own.
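One simple way such a fusion can work (a sketch of the general idea, not any particular vendor's pipeline) is to keep the colour ratios from the CFA sensor while taking the luminance from the cleaner monochrome measurement. A minimal per-pixel version, assuming both frames are already aligned and using the standard Rec. 601 luma weights:

```python
def luma(r, g, b):
    """Rec. 601 luma of an RGB pixel (values in the 0-255 range)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def fuse_pixel(rgb, mono):
    """Rescale the colour pixel so its luminance matches the (less noisy)
    monochrome measurement, preserving the colour ratios."""
    r, g, b = rgb
    y = luma(r, g, b)
    if y == 0:
        return (mono, mono, mono)  # pure black: fall back to the mono value
    scale = mono / y
    return tuple(min(255.0, c * scale) for c in (r, g, b))

# Colour pixel (120, 80, 60) whose mono sensor measured a luminance of 100:
fused = fuse_pixel((120, 80, 60), 100.0)
```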

III. Literature Review

Chen CH, Zi-Hong LI, inventors, et al. (2015) describe an image-capturing system with a dual-lens camera. A further purpose is accomplished by providing an image-capturing system for gauging the position and distance of subjects close to the system.

Andrew Ensor and Seth Hall described different GPUs in the mobile market for performing image processing in "GPU-Based Image Analysis on Mobile Devices". The authors conducted a study to measure processing rates for image capture and edge detection.

In "Dual Aperture Photography: Image and Depth from a Mobile Camera", Manuel Martinello et al. (2015) introduced a novel camera: the dual-aperture camera is capable of capturing an all-in-focus image and the depth of the scene in a single shot. They showed that it is feasible to modify the existing image processing chain to handle the dual-aperture image data, and that this can be performed within a mobile device.

Satya Mallick studied how to create a high dynamic range (HDR) image using multiple images taken with different exposure settings, giving a brief description of how high dynamic range imaging works. From this, we learn about automatic exposure bracketing (AEB) on the phone, which is responsible for taking multiple shots of the same scene with different exposures.

IV. Image Processing Engine

An image processing engine, also called an image processor, image processing unit (IPU), or image signal processor (ISP), is categorized as a media processor or specialized digital signal processor (DSP) and is used extensively for image processing in digital cameras and other devices. Image processors frequently require parallel computing, with SIMD (single instruction, multiple data) or MIMD (multiple instructions, multiple data) technologies, to gain speed and efficiency. A range of tasks can be accomplished by a processing engine. To enhance system integration on embedded devices, it is often a system on a chip with a multi-core processor architecture.

An image processing engine has also been proposed that performs halftoning and compensates for the nonlinear optical response of electrophoretic displays (EPDs). The engine processes images according to the temperature-dependent characteristics of EPDs to achieve better visual quality.

The development and generation of a digital image processing engine are affected by three factors:
1. The development of computers.
2. The development of mathematics (particularly the creation and growth of discrete mathematics theory).
3. The increased demand for a wide range of applications in environment, agriculture, industry, the military and medical science.

V. Types of lens

A multiple-camera setup always consists of the main camera lens along with two or three additional lenses, each of the individual cameras having a different task.

1. Main lens: The main lens is a standard camera lens that captures an ordinary image. The quality of the image depends on the megapixels provided. This lens helps in capturing the comprehensive picture in both bright and low light; the megapixels offered can be 24 MP, 48 MP or 64 MP.

2. Telephoto lens: Having a telephoto lens provides a more comprehensive, high-quality image even when zoomed in, without losing resolution. With a telephoto lens, crisp details of an image can be achieved even when the object is far away. A telephoto lens encompasses 2x, 3x or 5x optical zoom; the higher the optical zoom, the more one can zoom in to the picture. The only disadvantage of this lens is that it does not work well in low light: when a picture is captured in low light, the mobile phone camera automatically shifts to the main camera.

3. Ultra-wide-angle lens: The ultra-wide-angle lens is effective in covering a wide space in a single photo, and such lenses have come a long way in terms of quality. When photographing an object that is huge and doesn't fit into a single frame, the ultra-wide-angle lens takes in a lot of the surroundings to fit more objects into the single picture.

4. Depth sensor: It differentiates the object from the background: the main subject is in focus and the background scene gets blurred. The main camera is used to focus on the object while the background is captured by the depth sensor, so the object and the background get segregated from one another. The camera software helps to render the image by recognizing the subject that needs to be in focus and the background that needs to be blurred; this is possible because of the depth sensor, and hence the image turns out with a portrait effect.
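The segregation described above can be sketched as a per-pixel choice driven by the depth map: pixels whose depth is close to the focused subject keep the sharp main-camera value, while distant pixels take a pre-blurred value. A toy one-row example (all names and numbers are ours, for illustration):

```python
def portrait_pixel(sharp, blurred, depth, subject_depth, tolerance):
    """Keep the sharp value when the pixel's depth is within `tolerance`
    metres of the focused subject; otherwise use the blurred value."""
    return sharp if abs(depth - subject_depth) <= tolerance else blurred

# One row of pixels: subject at ~1.2 m, background wall at ~4 m.
row_sharp   = [200, 180, 90, 85]    # sharp main-camera values
row_blurred = [150, 150, 100, 100]  # same row after a background blur
row_depth   = [1.2, 1.3, 4.0, 4.1]  # per-pixel depth from the depth sensor

row_out = [portrait_pixel(s, b, d, subject_depth=1.2, tolerance=0.3)
           for s, b, d in zip(row_sharp, row_blurred, row_depth)]
# Subject pixels stay sharp, background pixels come from the blurred row.
```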

5. Monochrome lens: Black-and-white photos can be captured using a monochrome lens. The main difference between the main camera and the monochrome lens is the RGB filter: the monochrome lens does not capture colour, but it captures an adequate amount of light, which leads to a better-quality image. The main camera and the monochrome lens integrate their pictures to give a quality image, and the resulting noise is also lower.

6. Macro lens: The lens captures an object from a very close distance; the shooting distance is between 10 and 25 mm. A macro lens is ideal for shooting tiny objects and uncovering every detail of the object in a picture. When taking an image of a tiny object at close range and expecting all the details to be captured, macro lenses are adopted.

VI. Working of a phone camera

An image captured with a dual-lens camera involves an actuating module with a first motor and a second motor, responsible for rotating the first and second lenses. The system also comprises a control unit and an image processing unit. The control unit regulates the actuating module to rotate the lenses in such a manner that the visibility area of the first lens coincides with the visibility area of the second lens.

Fig 1.1 Working of a dual phone camera

The image processing unit integrates the image captured from the first lens with the image captured from the second lens, producing an ultra-wide-angle image in the first mode; in the second mode, the image processing unit processes the images as a stereoscopic image.

The camera has a set of lenses with a motor that allows the camera to change its focus. These lenses take in a wide angle of light and direct it to establish a precise image. Input is needed to tell the smartphone to load the camera application and capture the image; this input is examined through a touch screen that gauges transitions in capacitance and outputs the X and Y coordinates of one or multiple touches. The incoming signal is fed into the central processing unit (CPU) and random access memory (RAM). Here the CPU acts as the brain and thinking power of the smartphone, while the RAM is its working memory. Software, including the camera app, is moved from the smartphone's storage location, which in this case is solid-state storage, into RAM. Once the software is loaded, the camera is triggered. A light sensor calculates the illumination of the surroundings, and a laser range finder estimates the range to the object in front of the camera. Based on these readings, the CPU and software set the electronic shutter to restrict the quantity of incoming light, while a miniature motor shifts the camera lens forward or backward in order to bring the subject into focus. The active image from the camera is carried back to the display and, depending on the surroundings, an LED light is used to brighten the scene.

Fig 1.2 Block diagram of a phone camera

Eventually, when the camera is triggered, a picture is taken and sent to the display for review and to solid-state storage. However, there are still two further significant portions of the puzzle: the power supply and the wires. All of these components use electricity procured from the battery pack and power regulator. Wires carry power to each component, while separate wires carry electrical signals that enable the components to communicate with one another. The printed circuit board (PCB) is where components such as the CPU, RAM and solid-state storage are mounted; it is nothing more than a multi-layered, complex set of wires used to connect each of the components mounted on it. There is an electronic shutter that regulates the portion of the light that hits the sensor. At the rear of the camera is an extensive grid of microscopic light-sensitive squares. The grid is called an image sensor, while every single light-sensitive square in the grid is called a pixel.

VII. High Dynamic Range

HDR is a method that attempts to render an image on your phone that looks more true to life, letting operators capture images that mix bright light and shadows without losing much detail. The camera captures the scene multiple times in rapid succession at differing exposures; software techniques, applied either automatically or manually, combine them into one image, borrowing the details in the shadows from the brightest image and the details in the bright light from the darkest image. The steps (following the OpenCV pipeline) are:

Step 0: Capture multiple images with different exposures (darker image, regular image, brighter image).
Step 1: Read the exposure time assigned to each image.
Step 2: Align the input images using the AlignMTB algorithm, which converts all the images to median threshold bitmaps (MTB). Ptr<AlignMTB> alignMTB = createAlignMTB();
Step 3: Obtain the camera response function used to merge the images into one HDR image. Ptr<CalibrateDebevec> calibrateDebevec = createCalibrateDebevec();
Step 4: Merge the images into an HDR linear image. Ptr<MergeDebevec> mergeDebevec = createMergeDebevec();
Step 5: Save the HDR image. imwrite("hdrDebevec.hdr", hdrDebevec);
Step 6: Tone mapping using some of its common parameters: gamma, saturation and contrast.
a) Gamma: used to compress the dynamic range by applying gamma correction. Gamma = 1 (no correction applied); gamma < 1 (darkens the image); gamma > 1 (brightens the image).
b) Saturation: the amount of saturation is increased or decreased. When the saturation is high, colours are richer and more intense; when the saturation value is closer to zero, colours fade away towards grayscale.
c) Contrast: controls the contrast, log(maxPixelValue / minPixelValue).
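The merge in Steps 3 and 4 above can be sketched in plain Python for a single pixel. This is a simplified illustration that assumes a linear camera response (the real Debevec calibration recovers the response curve from the data); the triangular "hat" weighting, which trusts mid-range pixel values over nearly black or nearly saturated ones, is the standard choice:

```python
import math

def hat_weight(z, z_min=0.0, z_max=255.0):
    """Triangular weight: highest for mid-range values, lowest for
    values near black (underexposed) or white (saturated)."""
    mid = 0.5 * (z_min + z_max)
    return z - z_min if z <= mid else z_max - z

def merge_pixel(values, exposure_times):
    """Debevec-style merge of one pixel observed at several exposures.
    With a linear response, radiance ~ value / exposure_time; we return
    the weighted average of log-radiance, mapped back with exp()."""
    num = 0.0
    den = 0.0
    for z, t in zip(values, exposure_times):
        w = hat_weight(z)
        num += w * (math.log(max(z, 1e-6)) - math.log(t))
        den += w
    return math.exp(num / den)

# The same pixel shot at 1/30 s, 1/8 s and 1/2 s:
radiance = merge_pixel([40, 150, 250], [1 / 30, 1 / 8, 1 / 2])
```

A useful sanity check on this scheme: a pixel that records twice the value under twice the exposure yields the same estimated radiance, which is exactly what lets the darkest shot recover highlights and the brightest shot recover shadows.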

Fig 1.3 Normal image Fig 1.4 HDR image

VIII. Conclusion

This paper explains the working principle of cameras capturing images in a multi-camera setup and the algorithms used to enhance image quality. It is observed that the most significant role is played by the camera sensors and the number of lenses working in combination with each other to get the best aspect of every image captured by a multi-camera setup in a mobile phone.

IX. References

1. Chen CH, Zi-Hong LI, inventors; National Central University, assignee. Image-capturing system with dual lens camera. United States patent application US 14/133,084. 2015 Mar 26.

2. Martinello, M., Wajs, A., Quan, S., Lee, H., Lim, C., Woo, T., Lee, W., Kim, S.S. and Lee, D., 2015, April. Dual aperture photography: Image and depth from a mobile camera. In 2015 IEEE International Conference on Computational Photography (ICCP)(pp. 1-10). IEEE.

3. Thabet, R., Mahmoudi, R. and Bedoui, M.H., 2014, November. Image processing on mobile devices: An overview. In International Image Processing, Applications and Systems Conference (pp. 1-8). IEEE.

4. Evans, V.D.J., Jiang, X., Rubin, A.E., Hershenson, M. and Miao, X., Essential Products Inc, 2017. Wide field of view camera for integration with a mobile device. U.S. Patent 9,813,623.

5. Reinhard, E., Heidrich, W., Debevec, P., Pattanaik, S., Ward, G. and Myszkowski, K., 2010. High dynamic range imaging: acquisition, display, and image-based lighting. Morgan Kaufmann.

6. Ensor, A. and Hall, S., 2011. GPU-based image analysis on mobile devices. arXiv preprint arXiv:1112.3110.

7. Mentzer, R.A., Aptina Imaging Corp, 2010. Multi-camera system and method having a common processing block. U.S. Patent 7,724,284.

8. Kozko, D., LAB LLC., 2016. Enabling manually triggered multiple fields of view image capture within a surround image mode for multi-lens mobile devices. U.S. Patent 9,521,321.

9. https://www.dxomark.com/multi-camera-smartphones-benefits-and-challenges/

10. https://www.howtogeek.com/349408/why-do-some-smartphones-use-multiple-cameras/

11. https://trendinfocus.com/quad-camera-rear-benefits-details/

12. https://youtu.be/B7Dopv6kzJA

13. https://www.learnopencv.com/high-dynamic-range-hdr-imaging-using-opencv-cpp-python/