

Article

Performance Analysis and Improvement of Optical Camera Communication

Moh. Khalid Hasan, Mostafa Zaman Chowdhury, Md. Shahjalal, Van Thang Nguyen and Yeong Min Jang *

Department of Electronics Engineering, Kookmin University, Seoul 02707, Korea; [email protected] (M.K.H.); [email protected] (M.Z.C.); [email protected] (M.S.); [email protected] (V.T.N.) * Correspondence: [email protected]; Tel.: +82-2-910-5068

Received: 2 November 2018; Accepted: 3 December 2018; Published: 6 December 2018

Abstract: Optical camera communication (OCC) is a technology in which a camera image sensor is employed to receive data bits sent from a light source. OCC has attracted a lot of research interest in the area of mobile optical communication due to the popularity of devices with embedded cameras. Moreover, OCC offers high-performance characteristics, including an excellent signal-to-interference-plus-noise ratio (SINR), high security, low interference, and high stability with respect to varying communication distances. Despite these advantages, OCC suffers from several limitations, the primary one being the low data rate. In this paper, we provide a comprehensive analysis of the parameters that influence the OCC performance. These parameters include the camera sampling rate, the exposure time, the focal length, the pixel edge length, the transmitter configurations, and the optical flickering rate. In particular, the focus is on enhancing the data rate, SINR, and communication distance, which are the principal factors determining the quality of service experienced by a user. The paper also provides a short survey of schemes used in OCC on the basis of the achieved data rate, communication distance, and possible application scenarios. A theoretical analysis of user satisfaction using OCC is also presented. Furthermore, we present simulation results demonstrating OCC performance with respect to variations in the parameters mentioned above, including an outage probability analysis for OCC.

Keywords: optical camera communication (OCC); performance improvement; camera sampling rate; SINR; shutter speed synchronization; outage probability; user satisfaction

1. Introduction

Optical wireless communication (OWC) has recently become a congruent complement of radio-frequency (RF) technologies due to its vast unregulated spectrum, as well as the dense formation of lighting infrastructures in both indoor and outdoor scenarios [1]. It has introduced a new venture for the Internet of Things (IoT) by increasing connectivity options. The optical spectrum is entirely cost-free and is a significant option to assist RF in handling the massive data traffic envisaged for the future. Optical camera communication (OCC) is a subsystem of OWC that uses visible or infrared light to communicate with a camera sensor [1]. Complementary metal-oxide-semiconductor (CMOS) cameras integrated into smartphones or mobile robots have become very common in recent years, making OCC immensely promising. Current commercial light-emitting diodes (LEDs) offer low costs, a low power consumption, and high efficiency, and are installed almost everywhere. Exploiting cameras to receive data sent from LED transmitters adds a significant dimension in the area of OWC systems. Accordingly, IEEE has developed a task group regarding OCC, called IEEE 802.15.7m [2]. OCC is

a directional technology that provides the users with a high level of security. Moreover, due to the nature of camera pixels, an OCC system is less affected by interference than any other OWC technologies. These characteristics open up possibilities for the system's usability in a variety of applications, including vehicle-to-vehicle and vehicle-to-infrastructure (V2X) communications [3,4], digital signage, augmented reality (AR), and location-based services (LBS) [5–7]. However, several challenges have been observed while communicating using image sensors. First, cameras cannot detect LEDs blinking at a high frequency [8]. Second, very low frequencies lead to flickering observed by the human eye, which is prohibited. Third, conventional cameras offer low sampling rates, resulting in a low-rate data reception. Fourth, synchronization between the LED modulation frequency and the camera shutter speed is also an important challenge for OCC. Without proper synchronization, the bit-error rate (BER) increases. Although OCC is very stable in terms of changing the communication distance, the limited angle-of-view (AOV) leads to the system not exploiting the full LED coverage area. Moreover, an image should appear in several pixels to decode the information sent from the LED. Therefore, the fifth factor that significantly affects OCC performance is the pixel edge length. Different methods can be employed to improve OCC performance. Deep-learning-based algorithms can be utilized to increase the data rate and communication distance. A convolutional neural network was proposed in [2] in conjunction with OCC to receive data in outdoor environments. Communication in these scenarios is affected by sunlight and bad weather conditions (fog, rain, or snow). Computer vision techniques can also be utilized for accurate LED pattern classification and recognition to increase the signal-to-interference-plus-noise ratio (SINR), communication distance, and data rate for OCC. In recent times, red/green/blue (RGB) LEDs have been used in OCC to improve the performance. Deep learning is a significant option to design the transceiver for RGB-LED-based communications [9]; however, this technique faces some limitations, such as its computational expense and configuration complexity. Researchers have been investigating how to enhance the OCC performance for years. Enhancement of the signal performance of mobile-phone-based OCC using histogram equalization was reported in Reference [10]. An improved decoding method for OCC for vehicular applications using high-speed cameras was presented in Reference [11]. Recently, researchers have been exploring how to increase the maximum communication distance and data rate by introducing new modulation schemes. In Reference [12], a combination of modulation schemes has been proposed to achieve an improved data rate. On the other hand, RGB LEDs are utilized to enhance the communication distance for OCC [13]. Nonetheless, to the best of our knowledge, no previous work has examined OCC performance with an emphasis on the effects of the LED and camera parameters. In this study, we provide a comprehensive discussion of these parameters that affect OCC performance. We emphasize cameras using the rolling shutter technique, as these types are very popular, cost-effective, and widely available. Our main contributions in this paper are summarized as follows:

• We briefly explore the role of LED and camera parameters in determining OCC performance. Variations in these parameters provide insights into variations in OCC performance. We focus on key issues related to the quality of service, including data-rate enhancement, SINR improvement, increasing the communication distance, and decreasing the BER.
• We present a channel model for OCC based on Lambertian radiant intensity [14]. The channel model is also used as the basis for our analysis of pixel SINR.
• We provide a theoretical representation of OCC users' satisfaction.
• We conduct a short survey of existing modulation schemes and categorize them according to communication distance and data-rate characteristics in order to enable us to select the appropriate scheme for a specific application.

• We present simulation results (including an outage probability analysis for OCC) that demonstrate which parameter values are optimal to achieve the required OCC output for performance improvement.

The remainder of this paper is organized as follows. Section 2 provides an overview of the OCC data decoding principle and system parameters that include the OCC channel model and representation of pixel SINR. In Section 3, we outline the analysis of parameters that determine the performance of an OCC system. Section 4 presents a short survey of existing techniques to modulate the OCC transmitter. An analysis of the OCC user satisfaction is presented in Section 5. Simulation results involving OCC performance variations regarding changing LED and camera parameters are evaluated in Section 6. Finally, a brief summary of our work and possible future research are provided in Section 7.

2. OCC System Overview

Figure 1 shows a generalized block diagram of the OCC system. The data is modulated using a modulation technique, and an LED driver is used to activate the LED. The projected image of the LED is then processed to demodulate the data bits. The optical channel characteristics of OCC are similar to those of other OWC technologies except that the system can achieve higher SINRs as it is less susceptible to interference.


Figure 1. The schematic block diagram of an OCC system.

2.1. Data Decoding Principle

An image sensor is built with pixel arrays and a built-in read-out circuit. Each pixel of an image sensor acts as a photodetector. The transmitted bits are decoded through several image-processing techniques. At the beginning of the image processing, the LED is tracked and detected using a camera that utilizes computer vision algorithms. In general, the image frames are examined when the pixels receive optical pulses. Furthermore, each image is converted to grayscale with each pixel having a certain value called the pixel value. Thereafter, a threshold is set to binarize the images. Finally, the data is extracted as binary bits.

There are two techniques that can be used to receive optical pulses sent from an LED: global shutter and rolling shutter. All the pixels of the image sensor are exposed to light at the same time for cameras with the global-shutter technique. However, this technique suffers from a high level of pixel noise [15]. In addition, it is not useful for CMOS image sensors. With the rolling-shutter technique, the image pixels are scanned on a sequential basis. This results in a sequential read-out of pixels that leads to dark and bright strips on the image sensor. The data can be decoded by measuring the strip configurations, which will be discussed further in Section 3.2.
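The decoding steps above can be illustrated with a short sketch. The following Python code is not the authors' implementation; it is a minimal illustration, under assumed values for the binarization threshold and the number of image rows per bit, of converting a rolling-shutter frame into on-off keying bits.

```python
import numpy as np

def decode_rolling_shutter(gray_frame, threshold=128, rows_per_bit=8):
    # Binarize the grayscale frame and map groups of bright/dark rows to OOK bits.
    # The threshold and the number of rows per bit are assumed example values.
    binary = (gray_frame >= threshold).astype(np.uint8)
    row_levels = (binary.mean(axis=1) >= 0.5).astype(np.uint8)  # one level per image row
    n_bits = len(row_levels) // rows_per_bit
    return [int(row_levels[i * rows_per_bit:(i + 1) * rows_per_bit].mean() >= 0.5)
            for i in range(n_bits)]

# Example: a synthetic 64 x 32 frame whose strip pattern encodes 1 0 1 1 0 0 1 0.
strip_levels = np.array([255, 0, 255, 255, 0, 0, 255, 0], dtype=np.uint8)
frame = np.tile(np.repeat(strip_levels, 8)[:, None], (1, 32))
print(decode_rolling_shutter(frame))
```

In a real receiver, the LED region would first be located with computer vision, and the binarization threshold would be adapted to the scene rather than fixed.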

2.2. Illumination Model for Camera Pixels

An OCC data transmission model is illustrated in Figure 2. The channel for optical signal transmission has two components: line-of-sight (LOS) and non-line-of-sight (NLOS). The asperity of the chip faces and the geometric conditions of the encapsulating lens affect the radiation pattern of the LED. The luminous intensity model is represented by the Lambertian radiant intensity [14,16], which is expressed as

R_0(\psi_{ir}) = \dfrac{(1 + m_l)\cos^{m_l}(\psi_{ir})}{2\pi},  (1)

where \psi_{ir} denotes the angle of irradiance of the LED, and m_l is the Lambertian emission index originating from the radiation angle \varsigma_{1/2}, at which the radiation intensity is half of that in the main beam's direction. m_l is defined as

m_l = -\dfrac{\ln 2}{\ln(\cos \varsigma_{1/2})}.  (2)

Figure 2. The system model.

Assuming that the Euclidean distance between α and β is d_{α,β}, the overall DC gain per pixel can be expressed as

H_{\alpha,\beta} = g_{op} \dfrac{R_0(\psi) A_c}{\sigma d_{\alpha,\beta}^2} \cos(\psi_{in})\, \Delta,  (3)

d_{\alpha,\beta} = \sqrt{d_{\alpha,x}^2 + d_{\alpha,h}^2},  (4)

where ψ_in signifies the corresponding angle of incidence, g_op represents the gain of the optical filter, A_c is the entire area of the image projected onto the image sensor, σ is the total number of camera pixels illuminated by the LED, and Δ is a rectangular function whose value is implied as

\Delta = \begin{cases} 0, & \psi_{in} \ge \partial_{aov},\\ 1, & \psi_{in} < \partial_{aov}, \end{cases}  (5)

where ∂_aov is the AOV of the camera.
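As a numerical illustration of Equations (1)-(5), the short Python sketch below evaluates the Lambertian order and the per-pixel DC gain. The optical-filter gain of 1 and the 60° half-intensity angle follow Table 2, while the geometry in the example (2 m distance, 10° irradiance and incidence angles, a projected area of 1e-8 m² over 900 pixels) is an assumption for illustration only.

```python
import numpy as np

def lambertian_order(half_angle_deg):
    # Equation (2): m_l = -ln(2) / ln(cos(half-intensity radiation angle)).
    return -np.log(2.0) / np.log(np.cos(np.radians(half_angle_deg)))

def radiant_intensity(psi_ir_deg, m_l):
    # Equation (1): Lambertian radiant intensity R_0(psi_ir).
    return (1.0 + m_l) * np.cos(np.radians(psi_ir_deg)) ** m_l / (2.0 * np.pi)

def pixel_dc_gain(psi_ir_deg, psi_in_deg, d_ab, a_c, sigma, aov_deg,
                  g_op=1.0, half_angle_deg=60.0):
    # Equations (3)-(5): per-pixel DC gain; the rectangular function Delta is 1
    # only when the incidence angle lies inside the camera AOV.
    if psi_in_deg >= aov_deg:
        return 0.0
    r0 = radiant_intensity(psi_ir_deg, lambertian_order(half_angle_deg))
    return g_op * r0 * a_c * np.cos(np.radians(psi_in_deg)) / (sigma * d_ab ** 2)

# Example with assumed geometry: LED 2 m away and 10 degrees off axis, projected
# image of 1e-8 m^2 spread over 900 pixels, 60-degree AOV.
print(pixel_dc_gain(10.0, 10.0, 2.0, 1e-8, 900, 60.0))
```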


2.3. Pixel SINR

There are two factors that significantly affect the SINR for OCC. The first is the interference generated by neighboring light sources. The second is the image distortion originating from the motion of the users, unfocused images, bad weather (e.g., rain, snowfall, fog), and a long distance between the transmitter and the receiver. Considering these effects, the pixel SINR is represented as

\mathrm{SINR} = \dfrac{(1 - \chi)^2 (\xi P_{av} H_{\alpha,\beta} \sigma)^2}{\sum_{j=0}^{M} (\xi P_{av} H_{j,\beta} \sigma_j)^2 + q_e \xi P_{avn} A_c f_r}, \quad [\chi \in \mathbb{R} : 0 \le \chi \le 1],  (6)

where P_av is the average transmitted pixel intensity, ξ denotes the optical-to-electrical conversion efficiency per pixel, M is the total number of neighboring OCC transmitters, j is the sequence of a light source, H_{j,β} is the corresponding DC gain received from the specific light source, σ_j is the total number of pixels illuminated by that light source, P_avn is the average noise power, q_e is the electron charge, ℝ is the set of all real numbers, and f_r is the sampling rate of the camera. χ is defined as the image distortion factor that depends on the perspective distortion and lens blur in the camera imaging process. Distortions in the image can be reduced to a great extent by accurately focusing the camera using a high-resolution image sensor. The interference generated from the neighboring light sources can be mitigated significantly by applying region-of-interest signaling [2]. In this technique, the LED light source is detected at a low frame rate. When a region of the LED source is detected, the frame rate is accelerated and the data is demodulated at a high speed. Another method, called selective capturing, can also be used to minimize interference [17]. In this approach, the LED region is captured selectively within the frame. By reducing the captured area, the probability of interference is minimized. The OCC system's capacity can be derived from Shannon's capacity formula. According to Ashok et al. [18], the capacity of camera-based communication depends on the deterministic nature of the perspective distortions and the additive white Gaussian noise (AWGN) characteristics of the communication channel and is expressed as

C_{OCC} = f_r W_s \log_2(1 + \mathrm{SINR}),  (7)

where W_s represents the spatial bandwidth, denoting the number of information-carrying pixels per camera image frame. The parameters that affect the capacity of an OCC system include the camera resolution, sampling rate, image distortion factor, and rapidity of LED detection and classification. Among them, the camera sampling rate is considered the principal factor affecting the data-rate performance. Most of the conventional commercial cameras offer low sampling rates (around 30 frames per second). Hence, previous OCC demonstrations using rolling-shutter cameras with low frame rates and conventional intensity modulation schemes showed considerably low data rates [10,19–21]. However, the OCC capacity can be significantly improved using cameras with high sampling rates. In addition, the effects of motion blur, background noise, and other interfering elements also limit the overall capacity of an OCC system. These problems can be mitigated by ensuring accurate object focus by the image sensor and by using high-resolution cameras with enhanced focal length.
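The sketch below evaluates Equations (6) and (7) numerically. The conversion efficiency ξ = 0.51 is taken from Table 2 and the electron charge is a physical constant; the remaining values (distortion factor, gains, pixel counts, noise power, and a single interferer) are assumed purely for illustration.

```python
import numpy as np

Q_E = 1.602e-19  # electron charge in coulombs

def pixel_sinr(chi, xi, p_av, h_signal, sigma, interferers, p_noise, a_c, f_r):
    # Equation (6): the desired term uses the DC gain and pixel count of the target
    # LED; `interferers` is a list of (H_j, sigma_j) pairs for neighbouring sources.
    signal = ((1.0 - chi) * xi * p_av * h_signal * sigma) ** 2
    interference = sum((xi * p_av * h_j * s_j) ** 2 for h_j, s_j in interferers)
    noise = Q_E * xi * p_noise * a_c * f_r
    return signal / (interference + noise)

def occ_capacity(f_r, w_s, sinr):
    # Equation (7): C_OCC = f_r * W_s * log2(1 + SINR).
    return f_r * w_s * np.log2(1.0 + sinr)

# Example with assumed values: one interferer, xi = 0.51 (Table 2), a 30 fps camera,
# and 900 information-carrying pixels.
sinr = pixel_sinr(chi=0.1, xi=0.51, p_av=10.0, h_signal=1e-6, sigma=900,
                  interferers=[(2e-7, 400)], p_noise=1e-9, a_c=1e-8, f_r=30.0)
print(sinr, occ_capacity(30.0, 900.0, sinr))
```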

3. OCC Performance Improvement

The performance of an OCC system depends on the characteristics of the transmitter and the parameters of the camera. This section discusses the effects of these parameters and their appropriate selection based on service requirements and scenarios. In particular, we consider rolling-shutter based cameras for our analysis.

3.1. Focal Length and Pixel Edge Length

The focal length indicates how much of a scene is captured by a camera. The relationship between d_{α,β} and the distance behind the lens at which the focused image is formed (d_im) can be expressed according to the Newtonian form of the lens law

\left(1 - \dfrac{d_{\alpha,\beta}}{f_o}\right)\left(1 - \dfrac{d_{im}}{f_o}\right) = 1,  (8)

where f_o is the focal length of the camera. The object, i.e., the LED light source, needs to be focused appropriately to avoid any kind of image distortion. The area of the projected image depends on the focal length and the pixel edge length of the image sensor as expressed in

A_c = \dfrac{A_l f_o^2}{\rho^2 d_{\alpha,\beta}^2},  (9)

where ρ is the pixel edge length and A_l is the effective area of the light source exposed to illumination. It can be noticed that a large focal length effectively contributes to high-distance communication. Here, it is worth mentioning that ρ² signifies the size of the image sensor per pixel. The focal length determines the total AOV of the camera. From Figure 3, the AOV can be measured horizontally, vertically, or diagonally. The AOV of a camera with sensor dimensions of p × q can be expressed as

\partial_{aov} = 2\cos^{-1}\!\left(\dfrac{2 f_o}{\sqrt{L^2 + 4 f_o^2}}\right), \quad \text{here, } \partial_{aov} = \begin{cases} \partial_{vert}, & L = p,\\ \partial_{horz}, & L = q,\\ \partial_{diag}, & L = \sqrt{p^2 + q^2}, \end{cases}  (10)

where ∂_vert, ∂_horz, and ∂_diag represent the vertical, horizontal, and diagonal AOVs, respectively.

Figure 3. The illustration of the camera coverage.


To increase the communication distance, the entire coverage area of the camera should be correspondingly increased. The total area of the camera coverage is illustrated in Figure 3 and can be represented as

d_{ct} = 4 d_b^2 \tan\!\left(\dfrac{\partial_{vert}}{2}\right) \tan\!\left(\dfrac{\partial_{horz}}{2}\right),  (11)

where d_b is the distance between the center of the image sensor and the coverage area. The coverage area is very important in OCC as it determines how much of the area can be utilized to set the LED for data transmission.
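A small numerical sketch of Equations (8)-(11) is given below. The parameter values in the example are assumptions (a 5 cm-radius LED at 2 m, a 26 mm focal length, 1.8 µm pixels, and a sensor taken to measure 6 mm by 4 mm); they are not claimed to reproduce the paper's exact simulation settings.

```python
import numpy as np

def image_distance(d_ab, f_o):
    # Solves Equation (8), (1 - d_ab/f_o)(1 - d_im/f_o) = 1, for the image distance d_im.
    return f_o * d_ab / (d_ab - f_o)

def projected_area_pixels(a_l, f_o, rho, d_ab):
    # Equation (9): area of the LED image on the sensor, expressed in pixels.
    return a_l * f_o ** 2 / (rho ** 2 * d_ab ** 2)

def angle_of_view_deg(length, f_o):
    # Equation (10): AOV for a sensor dimension (vertical, horizontal or diagonal).
    return np.degrees(2.0 * np.arccos(2.0 * f_o / np.sqrt(length ** 2 + 4.0 * f_o ** 2)))

def coverage_area(d_b, aov_vert_deg, aov_horz_deg):
    # Equation (11): total camera coverage area at distance d_b.
    return (4.0 * d_b ** 2
            * np.tan(np.radians(aov_vert_deg) / 2.0)
            * np.tan(np.radians(aov_horz_deg) / 2.0))

# Example with assumed values.
a_l = np.pi * 0.05 ** 2                      # effective LED area (5 cm radius)
print(image_distance(2.0, 0.026))            # image forms just behind the focal point
print(projected_area_pixels(a_l, 0.026, 1.8e-6, 2.0))
aov_v = angle_of_view_deg(4e-3, 0.026)
aov_h = angle_of_view_deg(6e-3, 0.026)
print(aov_v, aov_h, coverage_area(2.0, aov_v, aov_h))
```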

3.2. Strip Configurations

The LED transmitter has two states: ON and OFF. These result in dark and bright strips in the LED image on the rolling-shutter image sensor, as depicted in Figure 4. In the course of image processing, the number of strips and the width of the strips are employed to recover data streams. The number of strips varies proportionally with the size of the LED. However, the width of the strips depends strictly on the ON and OFF frequencies of the LED. Different frequencies result in different strip widths. The width of the bright and dark strips can be theoretically expressed as

S = \dfrac{1}{f t_r}, \quad \text{here, } f = \begin{cases} f_{on}, & \text{if LED ON},\\ f_{off}, & \text{if LED OFF}, \end{cases}  (12)

where t_r signifies the time needed to read out a single pixel of the image, and f_on and f_off indicate the ON and OFF frequencies of the LED light source, respectively. It can be noticed that high-speed cameras with fast read-out architectures can provide an excellent communication speed (e.g., 55 Mbps [22]).
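Equation (12) can be evaluated directly, as in the sketch below. The per-pixel read-out time of roughly 9.5 µs is an assumed value chosen so that a 1.5 kHz flicker yields about 70 pixels per strip, consistent with the figure quoted later in Section 6.

```python
def strip_width_pixels(f_hz, t_r):
    # Equation (12): S = 1 / (f * t_r), the strip width in pixels for an ON/OFF
    # frequency f and a per-pixel read-out time t_r.
    return 1.0 / (f_hz * t_r)

# Assumed read-out time of ~9.5 us, so a 1.5 kHz flicker gives roughly 70 pixels per strip.
print(strip_width_pixels(1.5e3, 9.5e-6))
```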

Figure 4. The strip configurations for OCC.

3.3. Camera Sampling Rate and Shutter Speed

The performance of OCC is greatly influenced by the sampling rate of the camera. High-frame-rate cameras contribute to high-data-rate communication. However, the majority of the currently available commercial cameras have low frame rates, which led researchers to introduce undersampling techniques to communicate with these low-frame-rate cameras [8,23]. Recalling Equation (6), it can be noted that the frame rate has a significant effect on the overall SINR and channel capacity.

Using high-frame-rate cameras leads to higher data rates. This is because using the on-off keying (OOK) scheme requires the sampling rate to be at least double the LED flickering frequency to satisfy the Nyquist criterion [15]. For the undersampled frequency-shift OOK (UFSOOK), the modulation frequency is a multiple of the frame rate. Therefore, high data rates can be achieved for the same multiples using cameras with high frame rates.

The shutter speed indicates the length of time for which a frame is exposed to light. The shutter speed plays an important role in OCC. A low shutter speed leads to blurring as the LED switches at a high speed. This increases the BER significantly. The shutter speed should be more than twice the frame rate for secure communication. Moreover, the shutter speed should be synchronized with the LED flickering rate as the synchronization establishes how many bits are transmitted during one exposure. As shown in Figure 5, the total number of detectable bits during one exposure is n_b. We assume that every pixel of the image sensor is exposed to light for the same amount of time. The opaque portions represent the closed periods of the camera shutter. For simplicity, we consider that the on and off periods of the camera shutter are the same and the total number of exposures per second is m. After synchronization, the number of detectable transmitted bits per exposure can be represented as

n_b = 2 f t_e, \quad [t_s \ge 2 f_r],  (13)

where t_e denotes the time of a single exposure for a particular frame, and t_s indicates the shutter speed.
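A minimal sketch of Equation (13) follows; the flicker frequency, exposure time, shutter speed, and frame rate in the example are assumed values used only to show the calculation.

```python
def detectable_bits_per_exposure(f_hz, t_e, shutter_speed, frame_rate):
    # Equation (13): n_b = 2 * f * t_e, subject to the shutter speed being at
    # least twice the frame rate (both expressed in 1/s here).
    if shutter_speed < 2.0 * frame_rate:
        raise ValueError("shutter speed should be at least twice the frame rate")
    return 2.0 * f_hz * t_e

# Example with assumed values: 2.5 kHz flicker, 10 ms exposure, shutter speed of
# 60 (1/s) at 30 fps -> 50 detectable bits in that exposure.
print(detectable_bits_per_exposure(2.5e3, 0.01, 60.0, 30.0))
```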


Figure 5. The relationship between camera exposure time and the number of detectable bits.

The exposure time also affects the power received by the image sensor. Data decoding depends on the brightness level of the strips. The exposure time determines how much signal power will be allocated to each pixel. When the exposure time is increased, a higher intensity of light will illuminate the pixels, eventually increasing the signal strength. However, as the image is binarized, a short exposure time will increase the signal strength of the pixels containing the projected image of the LED as compared to the other pixels, which results in LED detection with reduced complexity. In addition, a longer exposure time will also increase the noise elements. On the contrary, there is a minimum received signal power below which the data cannot be decoded. The number of strips corresponding to that power is analyzed in the next section. Therefore, the exposure time should be set such that the signal strength of the pixels remains above the minimum extent.

3.4. LED Size

The LED size significantly determines the communication distance. The number of strips is reduced when the LED is too small. The number of strips for a circular-shaped LED can be expressed as

n_s = \dfrac{A_l f_o^2 (f_{on} + f_{off}) t_r}{2 \rho^2 d_{\alpha,\beta}^2}.  (14)

For the OCC system, there is a minimum number of strips below which the data bits cannot be extracted. The full LED does not need to appear inside the image sensor. The minimum area that should appear depends on the LED size. As shown in Figure 6, the minimum area and the corresponding number of strips considering a circular LED can be expressed as

A_m = \begin{cases} 2\int_{r_l - r_m}^{r_l} \sqrt{r_l^2 - x^2}\, dx, & r_l > r_m,\\ \frac{1}{2} A_l, & r_m = r_l,\\ 2\int_{r_m - r_l}^{r_m} \sqrt{r_l^2 - x^2}\, dx, & r_m > r_l,\\ A_l, & r_m = 2 r_l, \end{cases}  (15)

n_{s_{min}} = \dfrac{A_m f_o^2 (f_{on} + f_{off}) t_r}{\rho^2 d_{\alpha,\beta}^2},  (16)

where r_l and r_m represent the LED radius and the minimum portion of it that should appear inside the image sensor, respectively. It is worth noting here that a low n_{s_{min}} decreases the overall received power of the image sensor that can contribute to an increase in the BER.
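The sketch below numerically evaluates the first case of Equation (15) (a partially visible circular LED) and feeds the result into Equation (16). All example numbers (LED radius, visible segment depth, focal length, flicker split, read-out time, pixel size, and distance) are assumptions for illustration.

```python
import numpy as np

def visible_led_area(r_l, r_m, n=10000):
    # First case of Equation (15): visible area of a circular LED of radius r_l when
    # only a segment of depth r_m (r_m <= r_l) appears inside the frame; r_m = r_l
    # corresponds to half of the LED area. Evaluated with a trapezoidal sum.
    x = np.linspace(r_l - r_m, r_l, n)
    y = np.sqrt(np.clip(r_l ** 2 - x ** 2, 0.0, None))
    return 2.0 * np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(x))

def min_strip_count(a_m, f_o, f_on, f_off, t_r, rho, d_ab):
    # Equation (16): number of strips produced by the visible LED area A_m.
    return a_m * f_o ** 2 * (f_on + f_off) * t_r / (rho ** 2 * d_ab ** 2)

# Example with assumed values: a 5 cm LED of which a 2 cm-deep segment is visible,
# a 26 mm focal length, a 1.5 kHz flicker split evenly between ON and OFF states,
# 1.8 um pixels, and a 2 m Euclidean distance.
a_m = visible_led_area(0.05, 0.02)
print(a_m, min_strip_count(a_m, 0.026, 750.0, 750.0, 9.5e-6, 1.8e-6, 2.0))
```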


Figure 6. An illustration of the minimum LED area required to decode data bits.

3.5. MIMO Functionality

Each pixel in an image sensor acts as a photodetector. Due to the nature of the imaging lens, optical signals coming from different directions are imaged at different locations on the image sensor, which can therefore be utilized as a receiver of multiple optical signals. This particularly facilitates the utilization of multiple-input multiple-output (MIMO) functionalities in OCC. An OCC-MIMO system is depicted in Figure 7. As illustrated, an array of LEDs can transmit signals simultaneously. As long as the LEDs are positioned inside the AOV of the camera, the projected LED images can be spatially separated from each other. The pixels in the projected images for each LED can be considered as groups that are then separately extracted using image processing and computer vision algorithms.

The performance of an OCC system can be improved significantly using MIMO functionalities. MIMO enhances data rates through spatial multiplexing and provides multiple access conveniences [15]. Luo et al. [13] demonstrated that the communication distance can be significantly increased by employing MIMO using RGB LEDs. Furthermore, by introducing spatial redundancy, the BER can be minimized to a great extent.
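One possible way to form the per-LED pixel groups mentioned above is connected-component labelling, sketched below with SciPy. The threshold and minimum-region size are assumed values, and this is only an illustration of the grouping step, not the authors' processing chain.

```python
import numpy as np
from scipy import ndimage  # one possible tool for labelling connected pixel groups

def split_led_regions(gray_frame, threshold=128, min_pixels=20):
    # Label connected bright regions so that the pixel group of each LED in the
    # array can be demodulated independently; threshold and min_pixels are
    # illustrative values, not taken from the paper.
    binary = gray_frame >= threshold
    labels, n_regions = ndimage.label(binary)
    groups = []
    for region_id in range(1, n_regions + 1):
        mask = labels == region_id
        if mask.sum() >= min_pixels:  # drop small noise blobs
            groups.append(mask)
    return groups

# Example: a synthetic frame containing two well-separated LED images.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[10:40, 10:40] = 255
frame[60:90, 60:90] = 255
print(len(split_led_regions(frame)))  # -> 2
```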


Figure 7. The depiction of the OCC-MIMO system.

An OCC-MIMO tandem is exploited to reduce interference. The images of the LED transmitter, neighboring light sources, and other noise elements are projected at different pixels that are separated afterward, resulting in an excellent SINR for OCC.


4. Modulation Schemes

The most conventional modulation scheme for OCC is OOK. However, a high flickering rate is required for the LEDs to achieve high data rates. Moreover, the camera should be able to receive the data stream, leading to the necessity for cameras to have high sampling rates. The currently available commercial cameras are mostly built to operate at 30 frames per second (fps). To satisfy the Nyquist criterion, the LED flickering rate must not exceed 15 Hz; such a low frequency produces flicker that is perceptible to the human eye (the cut-off frequency for the human eye is 100 Hz [24]). Therefore, an appropriate modulation scheme should be chosen for OCC.

The modulation schemes are selected on the basis of service scenarios. This section provides a short description of existing OCC modulation schemes categorized according to the data rate and communication distance. Table 1 summarizes the proposed and implemented modulation techniques for OCC and their features. It is worth mentioning that the implementation of different modulation schemes will have different requirements. The survey was performed to categorize the implemented modulation schemes on the basis of their required features and possible application scenarios.

Table 1. The features of the OCC modulation schemes and their applications.

Scheme | RGB LED | Undersampling | Frame Rate (fps) | Data Rate | Distance | Possible Application Scenarios
OOK [10] | × | × | 28 | 896 bps | 25 cm | LBS
UFSOOK [21,23,25] | × | √ | 30 [21] | 15 bps [21] | 40 cm [21] | V2X, LBS
CSK [26–28] | √ | × | 30 [26,27] | 240 bps [26]; 5.2 kbps [27] | 50 cm [26]; 3 cm [27] | LBS
UPSOOK [13] | √ | √ | 50 | 150 bps | 60 m | V2X
UQAMSM [29] | × | √ | 50 | 500 bps | 1.5 m | LBS
m-IM [30] | √ | × | 30 | >10 kbps | 2 m | Digital signage, LBS
UPWM [31] | × | √ | 30 | 150 bps | 1 m | LBS
DCO-OFDM [22] | √ | × | 30 | 55 Mbps | 1.5 m | V2X, AR
CSK and multilevel PAM [12] | √ | × | 330 | 95 kbps | 1.2 m | V2X and large data transfer

4.1. Communication Distance

To overcome the limitations of the traditional OOK scheme, several studies [13,23,31] have investigated undersampling modulation schemes for OCC, including UFSOOK and undersampled phase-shift OOK (UPSOOK). UFSOOK can be used for both rolling-shutter and global-shutter cameras; however, this technique suffers from considerably low data rates [15]. RGB LEDs can be used to communicate over long distances using UPSOOK [13]. Several researchers have developed the color-shift keying (CSK) [26–28], the undersampled quadrature-amplitude-modulation subcarrier modulation (UQAMSM) [29], and the undersampled pulse-width modulation (UPWM) [31] for OCC. However, these modulation techniques are not useful for long distances, although excellent BERs can be achieved. Another modulation technique called spatial-2-phase-shift keying (S2-PSK) was proposed specifically for vehicular applications [2]. However, most of the techniques mentioned above do not solve the data rate issue. These techniques can be used in scenarios such as LBS and electronic healthcare (eHealth), where a low BER, rather than the data rate, is the prime concern. All the above-mentioned schemes can also be employed in other environments, such as localized advertising, digital signage, and outdoor applications where other visible light communication technologies suffer from severe interference.

4.2. Data Rate

Data-rate enhancement is a major issue in OCC performance improvements. The majority of existing modulation techniques do not support high-data-rate communication. OOK, the most conventional scheme for OCC, was utilized in [10], achieving a data rate of 896 bps. However, several studies have already demonstrated healthy data rates by introducing new modulation techniques [12,22]. A data rate of 10 kbps was achieved using a multilevel intensity-modulation (m-IM) technique [30]. The DC-biased optical orthogonal frequency-division multiplexing technique employed by Goto et al. attained a data rate of 55 Mbps [22]. The proposed color intensity modulation combining CSK and multilevel pulse amplitude modulation (PAM) in Reference [12] was able to achieve a data rate of 95 kbps.

5. OCC User Satisfaction

As discussed earlier, the data rate is the main concern in OCC performance. However, users do not need to achieve a high data rate in every case. Sometimes, an excellent SINR is more significant than a high data rate. For example, if a user wants to localize its position using an LED access point, it will definitely need to achieve a high SINR and an accurate detection of the LED to minimize the localization resolution. On the other hand, both the SINR and the data rate are important for a user who wants to make a video call. User satisfaction in OCC can be represented as the measurement of the communication effectiveness considering the service requirements of the users. It is measured from the required and achieved data rates and SINRs for OCC users. As we discussed in the previous sections, these factors depend on the camera specifications. The user satisfaction factor in OCC can be expressed as

S = p_d (1 - p_b) \dfrac{\varphi_a \kappa_a}{\varphi_r \kappa_r}, \quad [S \in \mathbb{R} : 0 \le S \le 1],  (17)

where φ_a and φ_r denote the achievable and required data rates, whereas κ_a and κ_r indicate the achievable and required SINRs, respectively. The term p_b is the bit-error probability, and p_d signifies the detection probability of the LED, which depends on the communication distance and on whether or not the LED is inside the AOV of the camera.
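Equation (17) can be computed directly, as in the sketch below. Clipping the result to [0, 1] is an interpretation added here because S is defined on that interval, and the example requirements (an LBS-type user needing 1 kbps and a linear SINR of 100) are assumed values.

```python
def satisfaction_factor(rate_achieved, rate_required, sinr_achieved, sinr_required,
                        p_detect, p_bit_error):
    # Equation (17): S = p_d * (1 - p_b) * (phi_a * kappa_a) / (phi_r * kappa_r),
    # clipped here to [0, 1] because S is defined on that interval.
    s = (p_detect * (1.0 - p_bit_error)
         * (rate_achieved * sinr_achieved) / (rate_required * sinr_required))
    return min(max(s, 0.0), 1.0)

# Example with assumed requirements: a user needing 1 kbps and a linear SINR of 100,
# served with 5 kbps and an SINR of 150 -> fully satisfied (S = 1).
print(satisfaction_factor(5e3, 1e3, 150.0, 100.0, p_detect=0.95, p_bit_error=1e-3))
```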

6. Performance Evaluation

This section presents the simulation results on the OCC performance using different values of the LED and camera parameters. Some parameters were kept unchanged, as in Table 2, throughout the simulations. It is worth noting that any variation in the luminaire characteristics will change the simulation results. All the simulations were performed in MATLAB.

Table 2. The system parameters for the simulations.

LED Parameters
Gain of the optical filter, g_op | 1
Transmitted optical power | 10 W
Half-intensity radiation angle, ς_1/2 | 60°
Camera Parameters
Image sensor size, p × q | 6 × 4 (3:2 aspect ratio)
Pixel edge length, ρ | 1.8 µm
Camera optical-to-electrical conversion efficiency, ξ | 0.51

As discussed in Section 3.3, the maximum number of detectable bits depends on the modulation frequency and the shutter speed of the camera. Figure 8 shows how the number of detectable bits varies with the camera exposure time for different LED modulation frequencies. The simulation was performed assuming a communication distance of 2 m between the camera and the LED. The exposure time should be controlled to avoid unexpected motion blur that leads to increased BERs. A high exposure time also contributes to an increase in the projection of noise elements onto the image sensor, which affects the overall SINR. Subsequently, variations in SINR affect the BER performance. Recalling Equation (6), it can be seen that the image distortion factor (χ) has an immediate impact on the SINR. The SINR variations for OCC with increasing d_{α,x} for different values of χ are illustrated in Figure 9. The effective f_o, ∂_aov, and r_l were considered as 26 mm, 60°, and 5 cm, respectively. The distance d_{α,h} was set at 2 m and was kept constant throughout. At d_{α,x} = 0, the LED appears in the center of the image sensor. With increasing d_{α,x}, at a certain point, the projected image of the LED occupies an area corresponding to the minimum number of strips. Note that a significant drop in SINR can be seen in Figure 9. This is because only a part of the LED appears inside the image sensor as the horizontal distance is increased. The OCC achieves an excellent SINR when the full LED appears inside the image sensor. It can be noted that the SINR decreases with higher values of χ. For d_{α,x} = 1 m, the SINR is reduced by approximately 18 dB when χ is increased from 0.1 to 0.4. Hence, lower values of χ show significant SINR improvement. The SINR performance can also be improved using cameras with high pixel densities. This is because the spatial separation of the interfering element from the image sensor is facilitated with high-resolution cameras.

Figure 8. The variation of the maximum number of detectable bits with respect to the exposure time.

Figure 9. The variation of the SINR by changing the horizontal distance between the camera and the LED.

The LED size has a profound impact on OCC performance. A large LED will occupy a large number of pixels inside the image sensor, resulting in a large number of strips. Consequently, the maximum communication distance will be comparatively high. The size of the strips depends on the read-out time and the LED modulation frequency. As discussed in Section 3, the minimum number of strips below which the camera cannot extract the information of the projected image depends on the LED size and the focal length of the camera. Figure 10 shows the number of pixels occupied by the projected image inside the image sensor at different distances from the LED to the camera with

different LED sizes and a constant effective focal length of 26 mm. We kept d_{α,x} = 0, and d_{α,h} was varied while performing the simulation to keep the LED at the center. In addition, the LED flickering rate was set to 1.5 kHz, resulting in 70 pixels to construct one strip. The minimum number of strips corresponds to the maximum communication distance, which also depends on the focal length.

d,x = 1 m, an SINR of approximately 18 dB is reduced when  is increased from 0.1 to 0.4. Hence, lower values of show significant SINR improvement. The SINR performance can also be improved using cameras with high pixel densities. This is because the spatial separation of the interfering element from the image sensor is facilitated with high-resolution cameras.

1200

1000 f = 500 Hz f = 1.5 kHz 800 f = 2.5 kHz

600

400

200 Maximum number of detectable bits detectable of number Maximum 0 0.00 0.05 0.10 0.15 0.20 0.25 Exposure time (s) Appl. Sci. 2018, 8, 2527 13 of 18 Figure 8. The variation of the maximum number of detectable bits with respect to the exposure time.

85

80 Appearance of full LED 75 Appearance of a part of LED  =  70  = 

SINR (dB) SINR 65

60

55 0.0 0.2 0.4 0.6 0.8 1.0 1.2 d (m) x FigureFigure 9.9. TheThe v variationariation of of the the SINR SINR by by changing changing the the horizontal horizontal distance distance between between the the camera camera and and the theLED. LED.

TheThe LEDLED size size has has a profounda profound impact impact on OCCon OCC performance. performance. A large A LEDlarge will LED occupy will occupy a high amount a high ofamount pixels of inside pixels the inside image the sensor image resulting sensor inresulting a large numberin a large of number strips. Consequently, of strips. Consequently, the maximum the communicationmaximum communication distance will distance be comparatively will be comparatively high. The sizehigh. of The the size strips of dependsthe strips ondepends the read-out on the timeread- andout time the LED and modulationthe LED modulation frequency. frequ As discussedency. As discussed in Section in3, Section the minimum 3, the minimum number of number strips belowof strips which below the which camera the cannot camera extract cannot the extract information the information of the projected of the projected image depends image depends on the LED on sizethe LED and thesize focal and length the focal of the length camera. of the Figure camera. 10 shows Figure the 10 number shows ofthe pixels number occupied of pixels by theoccupied projected by imagethe proj insideected theimage image inside sensor the atimage different sensor distances at different from distances the LED tofrom the the camera LED withto the different camera LEDwith sizes and a constant effective focal length of 26 mm. We kept dα,x = 0 and dα,h was varied during different LED sizes and a constant effective focal length of 26 mm. We kept d0,x = and d,h was performing the simulation to keep the LED at the center. In addition, the LED flickering rate was set to varied during performing the simulation to keep the LED at the center. In addition, the LED flickering 1.5 kHz resulting in 70 pixels to construct one strip. The minimum number of strips corresponds to rate was set to 1.5 kHz resulting in 70 pixels to construct one strip. The minimum number of strips the maximum communication distance, which also depends on the focal length. Figure 11 illustrates corresponds to the maximum communication distance, which also depends on the focal length. how the maximum communication distance varies with respect to the LED size and the effective focal length. It can be seen that the maximum communication distance can be enhanced using larger LEDs. Compared to an LED size of 1 cm, the communication-distance improvement of 2.5 and 4 times can be noticed for LEDs with 5 cm and 10 cm, respectively. Both simulations in Figures 10 and 11 were performed at a constant noise spectral density of 10−21. It is worth stating that effective communication will be achieved if the LED is clearly focused on the camera; otherwise, it will result in a significant increase in the BER. Moreover, there is a minimum SINR below which data bits cannot be decoded. Appl. Sci. 2019, 9 FOR PEER REVIEW 14 Appl. Sci. 2019, 9 FOR PEER REVIEW 14 Figure 11 illustrates how the maximum communication distance varies with respect to the LED size andFigure the 11 effective illustrates focal how length. the maximum It can be communication seen that the maximumdistance varies communication with respect distanceto the LED can size be enhancedand the effective using larger focal length. LEDs. 
ComparedIt can be seen to an that LED the size maximum of 1 cm, communication the communication distance-distance can be improvementenhanced using of 2.5 larger and LEDs.4 times Compared can be noticed to an for LED LEDs size with of 15 cm cm, and the 10 communication cm, respectively.-distance Both improvement of 2.5 and 4 times can be noticed for LEDs with 5 cm and 10 cm, respectively. Both simulations in Figures 10 and 11 were performed at a constant noise spectral density of 10−21 . It is −21 worthsimulations stating in thatFigures effective 10 and communication 11 were performed will beat a achieved constant if noise the LEDspectral is clearly density focused of 10 on. It the is camera;worth stating otherwise, that effectiveit will result communication in a significant will increase be achieved in the ifBER. the Moreover, LED is clearly there focused is a minimum on the Appl.SINRcamera; Sci. below2018 otherwise,, 8whi, 2527ch datait will bits result cannot in abe significant decoded. increase in the BER. Moreover, there is a minimum14 of 18 SINR below which data bits cannot be decoded. 1x104 1x104 9x103 9x103 3 r = 10 cm 8x10 l 3 rl = 10 cm 8x103 r = 5 cm 7x10 rl = 5 cm 7x103 rl = 1 cm 6x103 rl = 1 cm 6x103 l 5x103 5x103

Figure 10. The variation of the projected image size onto the image sensor with changing vertical communication distances.


Figure 11. The maximum communication distance vs. focal length for different LED sizes.

Figure 12 illustrates the outage probability for OCC for different AOVs of the camera. The effective focal length and LED radius were the same for the second simulation. It can be seen that the probability of outage increases as the AOV is reduced. In addition, the outage probability increases as the LOS distance between the LED and the camera increases. Therefore, it can be outlined that the outage probability can be improved by increasing the camera AOV, which can be achieved by enhancing the resolution of the image sensor.
Figure 12. The outage probability for OCC for different LOS distances between the LED and the camera (AOV = 45° and 60°).
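The geometric trends behind Figures 11 and 12 can be approximated with a pinhole-projection model, as in the hedged sketch below. It assumes that the LED image must span a minimum number of pixels (N_MIN) to be decodable and that the incidence angle is uniformly distributed within the LED semi-angle; the pixel edge length, N_MIN, and LED semi-angle are hypothetical placeholders rather than the paper's parameters, and this simplified outage criterion yields a sharp transition with distance instead of the smooth curves of Figure 12.

```python
import numpy as np

# Hypothetical camera/LED parameters (placeholders, not the paper's values).
PIXEL_EDGE = 20e-6     # pixel edge length (m)
N_MIN = 10             # minimum pixels the LED image must span to be decodable
LED_SEMI_ANGLE = 70.0  # assumed LED emission semi-angle (degrees)

def image_size_px(d, focal_len, led_radius):
    """Pinhole projection: pixels spanned by the LED image at LOS distance d (lengths in m)."""
    return (2.0 * led_radius * focal_len) / (d * PIXEL_EDGE)

def max_distance(focal_len, led_radius):
    """Largest distance at which the image still spans N_MIN pixels.
    This reproduces the Figure 11 trend: linear in focal length and LED radius."""
    return (2.0 * led_radius * focal_len) / (N_MIN * PIXEL_EDGE)

def outage_probability(d, aov_deg, focal_len=0.030, led_radius=0.01,
                       trials=200_000, seed=0):
    """Monte Carlo estimate: outage when the LED lies outside the half-AOV
    or its image is too small to decode (simplified illustrative criterion)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, LED_SEMI_ANGLE, trials)   # incidence angle, assumed uniform
    outside_aov = theta > aov_deg / 2.0
    too_small = image_size_px(d, focal_len, led_radius) < N_MIN
    return float(np.mean(outside_aov | too_small))

print("d_max at f = 30 mm:", [round(max_distance(0.030, r), 2) for r in (0.01, 0.05, 0.10)], "m")
for aov in (45, 60):
    print(f"AOV = {aov} deg:",
          [round(outage_probability(d, aov), 3) for d in (1, 2, 4, 6, 8, 10)])
```

Even with these crude assumptions, the two qualitative observations above are reproduced: a wider AOV lowers the outage probability, and the outage probability rises once the LOS distance pushes the LED image below the decodable size.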

Figure 13. The cumulative distribution function of the OCC capacity (f_r = 30 fps and 60 fps).

As discussed in Section 5, user satisfaction with OCC depends on the service types and requirements. A user making a voice call requires a lower data rate than a user engaged in a video call. However, a user who uses OCC for LBS requires a good SINR rather than a high data rate. Figure 14 illustrates the variation in the user satisfaction factor with respect to SINR, evaluated for different data rates. The evaluation is generalized such that a satisfaction factor above 0.5 is considered good and a factor above 0.8 excellent.
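A simple way to reproduce the shape of such satisfaction curves is a logistic mapping from SINR to a satisfaction factor, with the midpoint shifted upward for services that demand higher data rates. The sketch below is purely illustrative: the logistic form, slope, and midpoints are assumptions, not the Section 5 formulation, which should be substituted for any real evaluation; only the 0.5/0.8 thresholds follow the text above.

```python
import numpy as np

def satisfaction(sinr_db, midpoint_db, slope=0.3):
    """Logistic mapping from SINR (dB) to a satisfaction factor in [0, 1] (illustrative)."""
    return 1.0 / (1.0 + np.exp(-slope * (sinr_db - midpoint_db)))

# Hypothetical midpoints: higher-rate services need a higher SINR before the
# user is satisfied (placeholder values, not taken from the paper).
services = {"r < 1 kbps": 50.0, "1 kbps < r < 100 kbps": 58.0, "r > 100 kbps": 66.0}

for sinr in (45, 55, 65, 75):
    for name, mid in services.items():
        s = satisfaction(sinr, mid)
        grade = "excellent" if s > 0.8 else "good" if s > 0.5 else "below good"
        print(f"SINR = {sinr} dB | {name:>22s}: factor = {s:.2f} ({grade})")
```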

Figure 14. The illustration of user satisfaction with OCC for different data rates (r < 1 kbps, 1 kbps < r < 100 kbps, and r > 100 kbps).

7. Conclusions and Future Research

OWC has already been proved to be a congruent complementary technology to RF, with the potential of being integrated into next-generation wireless technologies. OCC is a promising OWC system that can be employed to receive data from existing LED infrastructures using cameras. Despite its advantages, OCC suffers from some limitations that degrade its overall performance. In this study, several performance-improvement techniques for OCC, focusing mainly on the transmitter and receiver parameters, were discussed. A great deal of attention was given to enhancing the user data rate and SINR. Existing modulation techniques for OCC and their application environments were also discussed. In addition, an analysis of user satisfaction was performed to demonstrate the optimal OCC settings for users in different service scenarios. Our future work includes implementing and testing optimized OCC systems to achieve high data rates over long communication distances. Moreover, very few studies have investigated the effects of the communication-channel parameters on OCC data reception, which could be another significant research topic for OCC performance improvement. Future research will also include the integration of OCC into hybrid optical infrastructures by utilizing the useful features offered by current OCC systems.

Author Contributions: All authors contributed to this paper. Specifically, M.K.H. and M.Z.C. proposed the idea and wrote the paper; M.K.H. and M.S. performed the simulations; M.Z.C. and V.T.N. analyzed and verified the simulation data; and Y.M.J. supervised the work and provided funding support.

Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01396) supervised by the IITP (Institute for Information & communications Technology Promotion).
Conflicts of Interest: The authors declare no conflict of interest.

References

1. Chowdhury, M.Z.; Hossan, M.T.; Islam, A.; Jang, Y.M. A comparative survey of optical wireless technologies: Architectures and applications. IEEE Access 2018, 6, 9819–9840. [CrossRef]
2. Islam, A.; Hossan, M.T.; Jang, Y.M. Convolutional neural network scheme-based optical camera communication system for intelligent Internet of vehicles. Int. J. Distrib. Sens. Netw. 2018, 14. [CrossRef]
3. Yamazato, T.; Kinoshita, M.; Arai, S.; Souke, E.; Yendo, T.; Fujii, T.; Kamakura, K.; Okada, H. Vehicle motion and pixel illumination modeling for image sensor based visible light communication. IEEE J. Sel. Areas Commun. 2015, 33, 1793–1805. [CrossRef]
4. Takai, I.; Harada, T.; Andoh, M.; Yasutomi, K.; Kagawa, K.; Kawahito, S. Optical vehicle-to-vehicle communication system using LED transmitter and camera receiver. IEEE Photonics J. 2014, 6, 1–14. [CrossRef]

5. Lin, B.; Ghassemlooy, Z.; Lin, C.; Tang, X.; Li, Y.; Zhang, S. An indoor visible light positioning system based on optical camera communications. IEEE Photonics Technol. Lett. 2017, 29, 579–582. [CrossRef]
6. Armstrong, J.; Sekercioglu, Y.A.; Neild, A. Visible light positioning: A roadmap for international standardization. IEEE Commun. Mag. 2013, 51, 68–73. [CrossRef]
7. Shahjalal, M.; Hossan, M.T.; Hasan, M.K.; Chowdhury, M.Z.; Le, N.T.; Jang, Y.M. An implementation approach and performance analysis of image sensor based multilateral indoor localization and navigation system. Wirel. Commun. Mob. Comput. 2018, 2018, 7680780. [CrossRef]
8. Luo, P.; Ghassemlooy, Z.; Minh, H.L.; Tang, X.; Tsai, H.M. Undersampled phase shift ON-OFF keying for camera communication. In Proceedings of the International Conference on Wireless Communications and Signal Processing (WCSP), Hefei, China, 23–24 October 2014.
9. Lee, H.; Lee, I.; Lee, S.H. Deep learning based transceiver design for multi-colored VLC systems. Opt. Express 2018, 26, 6222–6238. [CrossRef]
10. Chow, C.; Chen, C.; Chen, S. Enhancement of signal performance in LED visible light communications using mobile phone camera. IEEE Photonics J. 2015, 7, 1–7. [CrossRef]
11. Nagura, T.; Yamazato, T.; Katayama, M.; Yendo, T.; Fujii, T.; Okada, H. Improved decoding methods of visible light communication system for ITS using LED array and high-speed camera. In Proceedings of the IEEE Vehicular Technology Conference, Taipei, Taiwan, 16–19 May 2010.
12. Tian, P.; Huang, W.; Xu, Z. Design and experimental demonstration of a real-time 95 kbps optical camera communication system. In Proceedings of the International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Prague, Czech Republic, 20–22 July 2016.
13. Luo, P.; Zhang, M.; Ghassemlooy, Z.; le Minh, H.; Tsai, H.; Tang, X.; Png, L.C.; Han, D. Experimental demonstration of RGB LED-based optical camera communications. IEEE Photonics J. 2015, 7, 1–12. [CrossRef]
14. Kahn, J.M.; Barry, J.R. Wireless infrared communications. Proc. IEEE 1997, 85, 265–298. [CrossRef]
15. Saha, N.; Ifthekhar, M.S.; Le, N.T.; Jang, Y.M. Survey on optical camera communications: Challenges and opportunities. IET Optoelectron. 2015, 9, 172–183. [CrossRef]
16. Hasan, M.K.; Chowdhury, M.Z.; Shahjalal, M.; Jang, Y.M. Fuzzy based network assignment and link-switching analysis in hybrid OCC/LiFi system. Wirel. Commun. Mob. Comput. 2018, 2018, 2870518. [CrossRef]
17. Teli, S.; Chung, Y. Selective capture based high-speed optical vehicular signaling system. Signal Process. Image Commun. 2018, 68, 241–248. [CrossRef]
18. Ashok, A.; Jain, S.; Gruteser, M.; Mandayam, N.; Yuan, W.; Dana, K. Capacity of screen-camera communications under perspective distortions. Pervasive Mob. Comput. 2015, 16, 239–250. [CrossRef]
19. Liang, K.; Chow, C.; Liu, Y. Mobile-phone based visible light communication using region-grow light source tracking for unstable light source. Opt. Express 2016, 24, 1–6. [CrossRef] [PubMed]
20. Chow, C.; Chen, C.Y.; Chen, S.H. Visible light communication using mobile-phone camera with data rate higher than frame rate. Opt. Express 2015, 23, 26080–26085. [CrossRef]
21. Ji, P.; Tsai, H.; Wang, C.; Liu, F. Vehicular visible light communications with LED taillight and rolling shutter camera. In Proceedings of the IEEE 79th Vehicular Technology Conference, Seoul, Korea, 18–21 May 2014.
22. Goto, Y.; Takai, I.; Yamazato, T.; Okada, H.; Fujii, T.; Kawahito, S.; Arai, S.; Yendo, T.; Kamakura, K. A new automotive VLC system using image sensor. IEEE Photonics J. 2016, 8, 1–17. [CrossRef]
23. Roberts, R.D. Undersampled frequency shift ON-OFF keying (UFSOOK) for camera communications (CamCom). In Proceedings of the Wireless and Optical Communication Conference, Chongqing, China, 16–18 May 2013.
24. Kim, B.W.; Jung, S. Novel flicker-free optical camera communications based on compressed sensing. IEEE Commun. Lett. 2016, 20, 1104–1107. [CrossRef]
25. Roberts, R.D. MIMO protocol for camera communication (CamCom) using undersampled frequency shift ON-OFF keying (UFSOOK). In Proceedings of the Optical Wireless Communications GlobeCom Workshop, Atlanta, GA, USA, 9–13 December 2013.
26. Chen, S.; Chow, C. Color-shift keying and code-division multiple-access transmission for RGB-LED visible light communications using mobile phone camera. IEEE Photonics J. 2014, 6, 1–6. [CrossRef]

27. Hu, P.; Pathak, P.H.; Feng, X.; Fu, H.; Mohapatra, P. Colorbars: Increasing data rate of LED-to-camera communication using color shift keying. In Proceedings of the ACM International Conference on Emerging Networking and Experiments and Technologies, Heidelberg, Germany, 1–4 December 2015.
28. Singh, R.; O'Farrell, T.; David, J.P.R. An enhanced color shift keying modulation scheme for high-speed wireless visible light communications. J. Lightw. Technol. 2014, 32, 2582–2592. [CrossRef]
29. Luo, P.; Zhang, M.; Ghassemlooy, Z.; le Minh, H.; Tsai, H.; Tang, X.; Han, D. Experimental demonstration of a 1024-QAM optical camera communication system. IEEE Photonics Technol. Lett. 2014, 28, 139–142. [CrossRef]
30. Rachim, V.P.; Chung, W. Multilevel intensity-modulation for rolling shutter-based optical camera communication. IEEE Photonics Technol. Lett. 2018, 30, 903–906. [CrossRef]
31. Luo, P.; Jiang, T.; Haigh, P.A.; Ghassemlooy, Z.; Zvanovec, S. Undersampled pulse width modulation for optical camera communications. In Proceedings of the IEEE International Conference on Communications Workshops (ICC Workshops), Kansas City, MO, USA, 20–24 May 2018.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).