Evaluation of Roadway Lighting Systems Designed by Small Target Visibility (STV) Methods


<p>Development of an Affordable Field Instrument Correlating Small Target Visibility (STV) with Digital Video Images of Road Lighting Scenes</p><p>(Research Proposal)</p><p>Applicants' Names: Roman Stemprok, Bobby Green</p><p>Ranks: Assistant Professor, Professor</p><p>Departments: Engineering Technology, Engineering Technology</p><p>Universities: University of North Texas, Texas Tech</p><p>Date: October 3, 2001</p><p>NARRATIVE:</p><p>1. Objective</p><p>2. The Rationale and Significance of the Project</p><p>3. Small Target Visibility (STV)</p><p>4. Illuminance/Luminance (ILL/L)</p><p>5. Spatial-Frequency Content Method</p><p>6. Research Plan</p><p>7. Deliverables</p><p>8. Tasks</p><p>9. Project Summary</p><p>10. Scope of Project</p><p>11. Working Personnel</p><p>12. Research Budget</p><p>1. Objective</p><p>To develop a low-cost field instrument, using existing digital technology with a modified software package, that is correlated to STV and verified through field experiments.</p><p>2. The Rationale and Significance of the Project</p><p>With the advent of computer-designed systems and computer modeling of lighting systems, it is feasible to calculate the Visibility Level (VL) of a target. The VL is based on the light adaptation of the eye, the age of the average observer, the average background luminance of a scene, and the contrast of a target under observation. Small target visibility (STV) is being proposed as the recommended design practice of the Illuminating Engineering Society of North America (IESNA) as well as the American National Standards Institute (ANSI).</p><p>Small target visibility concepts and assumptions need to be verified with a readily available and straightforward measurement technique. The design method needs to be examined to determine whether it may be easily measured.
Both of these goals require the development of a measuring device capable of measuring visibility levels in the field.</p><p>We propose to develop a field visibility-level measuring device using a CCD device coupled with a Fourier spatial-frequency calculation to determine the relative average background luminance, and a second-order imaging calculation to determine the absolute average background luminance. The second-order imaging calculation was developed to calculate e-beam and laser-beam absolute intensities using a CCD array.</p><p>3. Small Target Visibility (STV)</p><p>The Visibility Level (VL) is a metric used to combine the effects of the factors listed in RP-8-1990 on a 2-D sample target 18 cm square with a diffuse reflectivity of 20% or, now, 50% [ANSI, 1990]. The target is perpendicular to the road surface and 83 meters from an observer. The Small Target Visibility (STV) concept, as defined in the proposed ANSI/IES RP-8-1990 (3), (4), and (5), is a calculated measure of the visibility of this arbitrary two-dimensional (2-D) target. STV is calculated based on the surface reflectivities and orientations of the target and its background with respect to an observer. The result of this calculation reveals the visibility of the small target seen against the level of the background luminance surrounding the target. The calculations follow a typical ray-tracing model, summing reflectance from the roadway light sources around the target.</p><p>The STV model is a static, or steady-state, model and does not take into account the dynamics of changes in contrast due to the relative motion of a small fixed target, the background luminance, and a dynamic observer. This can, however, be achieved by an instrument monitoring STV.</p><p>Illuminance, E, in lux or fc, is a lighting measurement, and Luminance, L, in cd/m2 or fL, is a measurement based on the incident illuminance and the Reflectivity, R, of a surface. Contrast is a calculation, and visibility is also a calculation.
Contrast is a calculation based on background and target luminances. Visibility is a calculation based on background luminance, target luminance, and eye adaptation.</p><p>Contrast is a luminance ratio [Stein, et al, 1986], a dimensionless number, defined as</p><p>C = (Lt - Lb) / Lb.</p><p>Contrast modulation is also a luminance ratio [IESNA Lighting Ready Reference, 1996] and is a dimensionless number, defined as</p><p>C = (Lt - Lb) / (Lt + Lb),</p><p>where: Lt = luminance of the target</p><p>Lb = luminance of the background, with the value of C being 0 < C < 1.</p><p>Luminance, L, is a product of Illuminance, E, and reflectance, R:</p><p>L = RE/π, with L in cd/m2.</p><p>Visibility [RP8] is a number based on the eye's adaptation to a particular luminance level:</p><p>VL = (Lt - Lb) / ΔLth,</p><p>where: Lt = luminance of the target</p><p>Lb = luminance of the background</p><p>ΔLth = the threshold luminance difference of the target related to the adaptation level of the eye.</p><p>When the illumination of the task and the background are the same, and since luminance is the product of illumination and reflectance, contrast may also be expressed as</p><p>C = (Rt - Rb) / Rb,</p><p>where: Rt = reflectance of the target</p><p>Rb = reflectance of the background, both in the direction of the observer.</p><p>Eye adaptation is usually considered as photopic, mesopic, or scotopic. The range of vision extends over about eight orders of magnitude of luminance. The photopic, or cone-vision, region is the eye adaptation level for the highest luminance magnitudes, and the scotopic, or rod-vision, region is for the lowest luminance regions. Mesopic vision falls in the region between bright-light-adapted eyes and night-vision-adapted eyes. Roadway lighting situations fall in the mesopic region of vision adaptation. Daytime and nighttime vision also change the color spectral response of the eye.
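To make these definitions concrete, here is a minimal numerical sketch (in Python rather than the Matlab used elsewhere in this proposal; all luminance values are hypothetical):

```python
# Hypothetical luminances (cd/m^2) for a small target and its background,
# and an assumed threshold luminance difference for the adapted eye.
L_t = 1.2     # target luminance
L_b = 0.8     # background luminance
dL_th = 0.05  # threshold luminance difference (assumed value)

contrast = (L_t - L_b) / L_b            # C  = (Lt - Lb) / Lb
modulation = (L_t - L_b) / (L_t + L_b)  # Cm = (Lt - Lb) / (Lt + Lb)
VL = (L_t - L_b) / dL_th                # VL = (Lt - Lb) / dLth

print(f"C = {contrast:.2f}, Cm = {modulation:.2f}, VL = {VL:.1f}")
# C = 0.50, Cm = 0.20, VL = 8.0
```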
The photopic, cone-vision, region has a peak spectral response around 550 nm, and the scotopic, rod-vision, region has a peak spectral response around 510 nm.</p><p>Eye adaptation also changes with the age of an individual. The spatial-frequency response of the eye decreases with age; the eye acts as a low-pass spatial-frequency filter, and with age the pass band decreases.</p><p>However, when the luminance of a target and its background are not the same, such as in a roadway lighting situation, luminance values are necessary to calculate contrast values and visibility values. Neglecting specularity, contrast is generally independent of illumination. Typically, a flat plate will exhibit some specularity in its reflection pattern; even if a flat plate is very diffuse, it usually does not exhibit a Lambertian reflection pattern [Green, et al, 1987]. The plate will have a specular component plus a Lambertian reflection component.</p><p>4. Illuminance/Luminance (ILL/L)</p><p>Illuminance, a light flux density, is usually measured in lux, or lumens per square meter, and luminance is a combined measure depending on the brightness of an object and the condition of the observer's eye, usually measured in candela per square meter. Brightness and luminance are different measures [O'Hair, et al, 1990].</p><p>Luminance is an engineering measure, whereas brightness is a subjective impression of an object. Brightness is also known as subjective brightness or apparent brightness. Luminance is expressed in terms of luminous flux from a surface; the surface may be reflecting, transmitting, or emitting one candela per square meter. In a luminance measure, much like an STV measure, the source of radiation is not an issue. However, in the ILL/L measures the incident radiation intensity is measured and taken into account.</p><p>5. 
Spatial-Frequency Content Method</p><p>Fourier Series</p><p>Fourier Analysis of Digital Images</p><p>The Fourier transform, in essence, decomposes or separates a waveform or function into sinusoids of different frequencies which sum to the original waveform. It identifies or distinguishes the different frequency sinusoids and their respective amplitudes (Brigham, 1988).</p><p>Frequency Domain Representation of Digital Images</p><p>Using Fourier analysis, the image of an object may always be represented by a Fourier series or by a simple or multiple Fourier integral. The amplitudes and phase angles of the terms of the series, or of the integrand, may be regarded as describing the spatial frequencies of the image, which leads to a complete representation of the same object in a domain other than the spatial one.</p><p>It is often useful to think of functions and their transforms as occupying two domains. These domains are referred to as the upper and the lower domains in older texts, "as if functions circulated at ground level and their transforms in the underworld" (Bracewell, 1965). They are also referred to as the function and transform domains, but in most physics applications they are called the time and frequency domains, respectively. For optics applications the two domains are the image intensity domain and the spatial frequency amplitude domain. Operations performed in one domain have corresponding operations in the other. For example, the convolution operation in the time domain becomes a multiplication operation in the frequency domain. The reverse is also true. Such theorems allow one to move between domains so that operations can be performed where they are easiest or most advantageous.</p><p>The most often used transform in our research is the two-dimensional fast Fourier transform on a discrete matrix (the digital image).
The result of this transform is a discrete matrix whose elements represent the frequency domain amplitudes.</p><p>Using Matlab</p><p>As a powerful mathematics software package, Matlab provides very convenient functions to perform the two-dimensional discrete Fourier transform and its inverse, respectively FFT2 and IFFT2 [11,12,15]. The following example, Figures 1, 2, and 3, shows the transform of a 128-by-128-pixel picture at the center of which there is a white square hole in a black background. The FFT2 converts the square hole into its frequency-space counterpart.</p><p>Figure 1. 3-D view of a square hole</p><p>Using the command FFT2 transforms the real-space 2-D matrix into a complex 2-D matrix that is not as easily understood. Figure 1 shows the amplitude of each element of the image matrix, where white = 1 and black = 0, while Figure 2 shows the amplitudes of the Fourier transform matrix.</p><p>Figure 2. 3-D view of the real part of the square hole's FFT.</p><p>The magnitude of the modulus is equivalent to the power spectrum, or spatial energy content, of the image. The modulus is shown in Figure 3. There are two parts of the FFT: a magnitude portion and a phase-shift portion. The magnitude of the modulus is the most often used representation of the frequency spectrum of an image. The phase portion is necessary for accurate image reconstruction but not for spatial-frequency analysis.</p><p>Figure 3. 3-D view of the absolute value of the square hole's FFT.</p><p>Let us look at a picture of the earth.</p><p>Figure 4. Earth (64 x 64, Gray Level)</p><p>The Fourier transform of the gray-scale earth image in Figure 4 is shown in the frequency domain in Figure 5. The large central amplitude is equivalent to the average background luminance of the image. The surrounding low-frequency amplitudes are large, and the amplitudes of the higher frequencies of the Fourier transform drop off rapidly.</p><p>Figure 5. 
Absolute FFT result</p><p>Looking at the Volume Under the FFT Curve</p><p>Initially, we calculated the FFT of an image, taking the absolute value of each element of the FFT matrix and drawing it in a 3-D view. The spectral results are similar. First, there is an extremely tall spike representing the zeroth term of the Discrete Fourier Transform (the DC component) at the center of the frequency plane. Then there is the low-frequency and mid-frequency range, in which lower-amplitude spikes can be found, and the high-frequency range, with very low-amplitude spikes.</p><p>Usually, in the high-frequency range, the amplitudes of the frequency components are relatively flat. However, for images with highly periodic features, there are features shown in the high-frequency range. </p><p>The Idea</p><p>We can assume that the volume under a certain frequency range will tell us the amount of information in that particular spatial-frequency band. We used the following steps to reduce the complexity of viewing the Fourier transforms. </p><p>1. Prepare black-and-white images in 256 gray levels for luminance evaluations.</p><p>2. Determine the size of the image.</p><p>3. Take the FFT of the image.</p><p>4. Divide the footprint area of the 2-D FFT into bands starting from the zeroth term.</p><p>5. Sum the magnitudes of the FFT pixels inside each ring to get a volume for the ring. </p><p>6. Import the sums to Microsoft® Excel 97.</p><p>7. Graph the volume of each ring to analyze the relation between the sub-volumes under each ring footprint of the FFT partial-volume curve. </p><p>Choice of Division</p><p>There are several footprint choices: rectangular divisions, square divisions, circular divisions, and many others. The DFT given by the software is in the form of a matrix, usually a two-dimensional array for a two-dimensional image. From this point of view, the rectangular or square division is sort of 'intrinsic', and thus easy to compute.
On the contrary, to make a circular division, we have to locate the center of the matrix and calculate the distance between this center and each element of the matrix. With the distance falling into one of our pre-determined grids, we then say this element belongs to group n instead of the next group, l, or the previous group, m. Obviously, circular division involves much more computing.</p><p>Initially, rectangular divisions were chosen. Rectangular bands can be easily found according to the columns and rows (as shown in Figure 6). Several calculations and curves were generated in this way.</p><p>As the study progressed, symmetry about the axis x = 0 and the axis y = 0, with equal x and y extents, was desired. Some of the columns of the Fourier transform matrix were not included in the symmetrical square footprint (as shown in Figure 7) and could be neglected. The outer columns of the Fourier transform matrix contain the amplitudes of the highest Fourier frequency components in the image, which are beyond the spatial-frequency cutoff of the human eye. Thus, the division becomes a square footprint instead of the rectangular-shape footprint brought by the camera. The extra columns represent frequency components that are outside the human visual bandwidth and may be neglected without significant changes in the image quality.</p><p>One may also find the distribution of the Fourier spectral amplitudes along the modulus frequency without regard to any direction. By this standard, circular cross sections would be the shape of choice. At this point, it is only natural to bring the symmetry to the next step. Can the division be symmetrical in any direction (circular symmetry, as shown in Figure 8)? We know that rectangular or square divisions are easy to calculate. However, when the matrix becomes very large in size, for example 720 by 480 pixels, it is found that circular division can do as well (Figure 8). Figure 9 compares the circular versus the square footprint calculation methods. 
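The band-volume procedure above (steps 1 through 7, with the square-versus-circular division choice) can be sketched as follows. This is an illustrative re-implementation in Python/NumPy rather than the project's Matlab, using a synthetic white-square image in place of a captured road scene:

```python
import numpy as np

def band_volumes(image, n_bands, division="square"):
    """Sum |FFT| magnitudes inside concentric bands around the DC term.

    division="square" uses square (Chebyshev-distance) bands;
    division="circular" uses circular (Euclidean-distance) bands.
    Returns one partial 'volume' per band, band 0 being the innermost.
    """
    F = np.fft.fftshift(np.fft.fft2(image))   # center the zeroth (DC) term
    mag = np.abs(F)
    h, w = mag.shape
    y, x = np.indices(mag.shape)
    if division == "circular":
        dist = np.hypot(y - h // 2, x - w // 2)
    else:
        dist = np.maximum(np.abs(y - h // 2), np.abs(x - w // 2))
    step = dist.max() / n_bands
    bands = np.minimum((dist / step).astype(int), n_bands - 1)
    return np.array([mag[bands == k].sum() for k in range(n_bands)])

# Synthetic test image: a white square "hole" in a black background.
img = np.zeros((128, 128))
img[56:72, 56:72] = 1.0

vols_sq = band_volumes(img, n_bands=16, division="square")
vols_ci = band_volumes(img, n_bands=16, division="circular")
print(vols_sq[0], vols_ci[0])  # band 0 contains the DC term, |F(0,0)| = 256
```

Either division assigns every FFT element to exactly one band, so the band volumes always sum to the total spectral volume; only how that total is partitioned differs.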
</p><p> y </p><p>-  x x </p><p>(0,0)</p><p>-  Figure 6. Rectangular band division y</p><p>Frequency Space</p><p>Figure 7: Square band division</p><p>9 Balu WDOT Frequency Space</p><p>Figure 8: Circular band division</p><p>6.0E+07 y t i s n</p><p> e 5.0E+07 D</p><p> t circular n</p><p> e 4.0E+07 square n o p</p><p> m 3.0E+07 o C</p><p> y 2.0E+07 c n e u</p><p> q 1.0E+07 e r F 0.0E+00 1 5 9 3 7 1 5 9 3 7 1 5 9 3 7 1 1 2 2 2 3 3 4 4 4 5 5 Band Number</p><p>Figure 9: Comparison between circular and square division</p><p>For a large size image with more than 600 by 400 pixels, we may use the circular division as the better option. However, in situations where the images’ sizes are less than 100 pixels, we choose to use square division because of the difficulty in dividing the pixels and the deviation brought by the circular division in such small matrices. </p><p>Separating the Zeroth Term (the DC component)</p><p>After examining the equations of 2-D discrete Fourier transform, it is obvious that the center of the frequency domain, (x,y) = (0,0), should be the result the average background luminance. It describes how bright the picture looks from a far distance rather than describing any feature of the </p><p>10 Balu WDOT picture that is detailed. The electrical term, 'DC component', is used to refer to this value. for a CCD image the "DC" value corresponds to the average background luminance in the scene. the average background luminance is directly related to the eye adaption to the lighting situation and also directly related to the visibility level, VL, of the lighting situation.</p><p>The initial calculation used the center footprint and puts the DC component along with some low frequency components in the center block (the first band) and finds the partial volume above the center footprint. 
In later calculations the DC component, the relative average background luminance, was separated from the low-frequency components and shown as a stand-alone portion of the characteristic curve of spatial-frequency partial volumes. Figure 10 shows the comparison of the volume under the center footprint with the DC value included versus the volume of the center footprint with the DC value separated. The DC term alone becomes the first footprint, and the low frequencies are moved to the second footprint.</p><p>Figure 10: Comparison between separating and not separating the DC value (square footprint; frequency component density versus band number)</p><p>Defining the Frequency Range</p><p>In the high-frequency and very-high-frequency range of a CCD image, above the low-pass cutoff frequency of the eye, noise contributes more and more to the frequency spectrum of the image captured by a CCD. The CCD noise is a random thermal-noise contribution characteristic of a CCD imaging system. In order to reduce the thermal-noise contribution, the temperature of the CCD must be reduced, or the high-frequency noise components of the Fourier spectrum may be spatially filtered out.</p><p>Sampling Theorem</p><p>A bandlimited signal is a signal, f(t), which has no spectral components beyond a frequency B Hz; that is, F(s) = 0 for |s| > B. The sampling theorem states that a real signal, f(t), which is bandlimited to B Hz can be reconstructed without error from samples taken uniformly at a rate R > 2B samples per second. This minimum sampling frequency, RN = 2B Hz, is called the Nyquist rate or the Nyquist frequency. The corresponding sampling interval, T = 1/(2B) (where t = nT), is called the Nyquist interval. 
A signal bandlimited to B Hz which is sampled at less than the Nyquist frequency of 2B, i.e., sampled at an interval T > 1/(2B), is said to be undersampled. Figure 11 shows cases of undersampled signals and a properly sampled signal, with their respective reconstructions.</p><p>Figure 11. f(x) sampled a) below the Nyquist rate, b) slightly below the Nyquist rate, and c) at or above the Nyquist rate, with the reconstructed fR(x)'s from the samples</p><p>Human Eye Resolution and Cutoff Frequency</p><p>An interesting question emerged from examining the above images. What does the driver see, as opposed to what the video camera records? To answer this question, we first need to find out what resolution the human eye has and determine what sort of filter the human eye appears to be. The resolution of the eye is related to the upper and lower cutoff frequencies of the eye. In other words, we need to be able to decide what frequency range is effective in the frequency-domain representation of a digital image. Figure 12 shows representations of the typical types of frequency filters. The eye acts like a low-pass spatial filter. When the upper cutoff frequency of the eye is lower than the maximum frequency of an image, we can remove all of the higher-frequency components in the image without loss of image resolution with respect to the eye. Furthermore, we can use the inverse FFT to reconstruct a digital image that shows what a human eye would see. This also makes it possible to eliminate the noise.</p><p>Figure 12. The standard filter types (low-pass, high-pass, band-pass, and band-reject)</p><p>The resolution of human eyes varies from individual to individual. However, most people have roughly a bar-gap resolution of a minute of arc.
</p><p> The resolution of the video camera is greater than that of the human eye.</p><p> The images may be filtered to determine the cutoff spatial frequency intrinsic to an individual human eye. Suppose the images' frequency components are filtered one ring at a time, starting from the highest ring. When the observer first detects a change of content, judging only by eye, we say the observer has a cutoff frequency related to that ring number. </p><p> For most people, filtering the components above the 10th or 15th ring does not reduce the image resolution for their eyes.</p><p>The Road Images</p><p>Matlab was used to evaluate several identical images of the same scene under varying lighting conditions, placing all the characteristic graphs on the same scale, as shown in Figure 13. We can see the changes in spatial-frequency content from a roadway-lights-off situation with some off-road lighting, curves C and D, to a roadway-lights-on situation, curve A. The changes represent the differences in the information available to an observer as more light is added to the scene. As light is added to the scene, the average background luminance of the scene increases, as shown in the region of range 1. From ranges 2 to 38, as more light is added to the scene, the amplitudes of the higher frequencies increase, reflecting the added detail available to an observer. Curve E shows camera black, a no-light situation.</p><p>Curve A in Figure 13 has the largest partial volumes above each partial area. The total volume, the sum of all the partial volumes, is a measure of the total information available to the observer. An observer has a visual transfer function with a cutoff frequency of about one minute-of-arc (1') or less, so an observer cannot resolve the higher frequencies past about range 12 to 13. 
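The ring-at-a-time low-pass filtering described above can be sketched as follows (again an illustrative Python/NumPy version rather than the project's Matlab; the cutoff at ring 12 is the approximate eye cutoff suggested in the text, and the scene is synthetic):

```python
import numpy as np

def lowpass_rings(image, cutoff_ring, n_rings=40):
    """Zero all FFT components at or beyond a cutoff ring, then reconstruct.

    Mimics the low-pass behavior of the eye: concentric square bands
    around the centered DC term; rings past the cutoff are discarded.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = F.shape
    y, x = np.indices(F.shape)
    dist = np.maximum(np.abs(y - h // 2), np.abs(x - w // 2))
    step = dist.max() / n_rings
    F[dist >= cutoff_ring * step] = 0
    return np.fft.ifft2(np.fft.ifftshift(F)).real

# Synthetic "scene": a smooth low-frequency pattern plus fine-grained noise
# standing in for CCD thermal noise.
rng = np.random.default_rng(1)
y, x = np.indices((128, 128))
scene = np.cos(2 * np.pi * 2 * x / 128) * np.cos(2 * np.pi * 2 * y / 128)
noisy = scene + 0.05 * rng.standard_normal(scene.shape)

seen = lowpass_rings(noisy, cutoff_ring=12)  # assumed eye cutoff ring
print(np.abs(seen - scene).mean() < np.abs(noisy - scene).mean())  # True
```

Because the underlying pattern lives entirely below the cutoff ring while the noise is spread across all rings, the reconstructed image is closer to the clean scene than the noisy capture, which is the sense in which the high-frequency camera content can be discarded without visible loss.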
</p><p>It should be noted that in order to change the scene noticeably to an observer, the modulation transfer function (MTF) must change by at least 10%, or the changes in the scene are not noticeable to an observer (Williams, 1989). From Figure 13, it is clear that there is a greater than 10% change from curve E, no light, to curve B, some lights, and a greater than 10% change from curve B, some light, to curve A, full roadway lighting. The fully lighted roadway, curve A, has a large background luminance, large low-frequency components, and a rich higher-frequency spectrum below the eye's spatial cutoff frequencies. The "camera black" image of curve E represents the "no light" condition of a CCD imaging chip. It is clearly seen that the camera-black image consists of high-frequency Fourier spectrum components. The high-frequency spectrum is due to thermal noise in the CCD chip.</p><p>Figure 13: Frequency component distribution (comparison of variable lighting situations): Curve A, lights on; Curve B, lights partially on; Curves C & D, lights off with off-road lighting; Curve E, CCD "camera black"; relative magnitudes versus footprint number for rectangles</p><p>The "camera black" noise is above the frequency spectrum resolvable by the eye, so the noise is not noticed by an observer. The average background luminance may be recovered from the CCD in the lower frequency spectrum, in range 1.</p><p>6. Research Plan</p><p>Software:</p><p>An image will be captured with a CCD-type digital camera; the latest digital video technology will be used to take a single frame of a roadway lighting situation. Each pixel of an image has a specific value on a gray scale; thus the point luminances of the entire contents of the road can be captured in one frame. 
The image is evaluated using computer software (Matlab), and the frame information content is plotted versus the road distance. In order for results to be comparable, a standardized image acquisition method will be carefully designed and carried out. We are interested in maintaining a fixed focal length, a fixed camera tilt, a standard image width, etc. This frequency analysis method will be used as an alternative visibility calculation method in this project. The following areas shall be investigated:</p><p> Establish the image camera resolution</p><p> Calibrate the spectral response of the digital camera</p><p> Determine the thermal noise level of the dark CCD, "camera black"</p><p> Investigate the stray light sources versus the information contents</p><p> Comparison studies between the meter reading and the actual STV measured on the road</p><p> Summary of the actual accuracy</p><p>Hardware:</p><p>The team will be working on developing a "Visibility Meter". The device shall contain a single-board computer with a small screen and a small CCD camera. The final product should be usable in the field, easily transportable, and relatively user-friendly.</p><p>7. Deliverables</p><p>1) Provide a report on proof of concept that information theory (or another digital analysis technique) can be used to achieve the stated objective. This will be the go or no-go phase of the work. 2) Arrange three meetings with a monitoring team selected by WDOT as follows: a. Initial kick-off meeting to discuss the project b. Progress meeting 1 c. Progress meeting 2 3) Provide written monthly progress reports and conference calls on progress 4) Provide a proof-of-concept report with an updated literature review on a. The theoretical application of information theory and/or other digital analysis methods to meet the low-luminance road scene analysis requirements. b. The specifications of the off-the-shelf CCD equipment and the modifications needed to develop a field instrument. c. 
The field instrument could be manufactured for under $7,000. d. Present the results in progress meeting 1 e. Provide a proof of concept that information theory (or other digital analysis techniques) can be used to achieve the stated objectives. f. This is the go or no-go phase of the work. 5) Provide a report and a demonstration of a modified field instrument based on an information-theory software upgrade to an off-the-shelf CCD scene photometer from task (4) a. Demonstrate how this modified instrument correlates to measured and calculated VL in the road scene. b. Present the results in progress meeting 2. 6) Modify the demonstration instrument based on feedback from the monitoring team. 7) Provide a report on how existing nighttime accident rates on the road(s) in task (5) above could be correlated to the actual road scene visibility levels as recorded with the modified STV CCD system a. The nighttime accident rate will be provided. 8) Incorporate feedback from the monitoring team. 9) Present final results to WDOT and the project monitors at WDOT headquarters. a. Division of Transportation, Infrastructure Development, Bureau of Highway Operations, 4802 Sheboygan Av. Rm 201, PO Box 7986, Madison, WI 53707-7986, ph 608-266-8370, fax 608-267-7856 10) Prepare a detailed final report with an appropriate literature review. 11) Prepare and present journal-quality papers at: a. the IESNA Annual Conference, and b. the Transportation Research Board meeting, and c. present a paper on the results to the IESNA Roadway Lighting Committee 12) Provide travel support within the budget for a. one WDOT employee to be present at the three presentations in Task (11) 13) Deliver 1 (one) fully functional instrument to WDOT which measures validated STV in real-world nighttime roadway scenes using CCD technology, and a. show how it could be related to nighttime traffic accident rates. 
14) Provide a report with notification of any patentable inventions, with the right to mutually develop royalty or licensing rights assigned to WDOT and/or its assignees. 15) Provide travel support for one WDOT employee to attend a. the IESNA meeting, b. the TRB meeting, and c. the IESNA RLC meeting, at the time of presentation 16) Travel to one final meeting to present results to WDOT at a location selected by WDOT 17) Provide a final report with a literature review a. Detail the theoretical derivation of the information theory b. Detail the application to the roadway lighting problem 18) Provide the computer algorithms and programs developed in pursuit of this research, which can be published in the public domain 19) Project to be completed in 24 months (or less) from signing of the contract.</p><p>8. Tasks</p><p>A) Provide a short (two or three sentences) description of each step or task required to accomplish the project's goals and objectives. B) Indicate who will perform each task.</p><p>Visibility is currently an ideal theoretical one-dimensional (1-D) calculation or a 1-D calculation from data gathered at a roadway lighting site. Spatial frequency analysis is either a two-dimensional (2-D) ideal calculation or a calculation from data gathered at a roadway lighting site. The information content in the roadway lighting scene is related to the amplitude of the spatial-frequency content calculated from the roadway lighting scene. The method of choice to gather data from a roadway lighting scene for spatial-frequency analysis is an image capture with a CCD device. The image from the CCD device contains thermal noise and usually does not provide an absolute average background luminance figure, but it does provide relative average background luminance data. 
The spatial-frequency data calculated from the CCD image must be filtered to remove thermal noise, filtered to remove low-frequency spatial components, and an absolute average background luminance figure must be used to normalize the relative average background luminance figure provided by the CCD device.</p><p>Dr Roman Stemprok and Bobby Green will develop the theoretical spatial-frequency models based on idealized visibility targets. Dr Werner Adrian will be consulted on the characteristics of a variety of proper idealized visibility targets. </p><p>Dr Werner Adrian will assist with the theoretical calculations of the 1-D visibility level for each set of idealized visibility targets and with correlating the theoretical VL calculations to a theoretical set of spatial-frequency components for each of the idealized visibility targets.</p><p>Dr Roman Stemprok and Bobby Green will develop spatial-frequency filtering algorithms to remove thermal noise from the CCD spatial-frequency calculations and bandpass filter algorithms to recover the relative VL. An average background luminance will be input into the calculation to recover a theoretical VL value.</p><p>This should be the first go-no-go stage of the project, "Concept Verification": to determine the level of correlation between VL and a filtered, re-normalized ideal spatial-frequency calculation. It is assumed the theoretical calculation verification stage of the project will take at least five (5) months.</p><p>Upon completion of the theoretical model for the spatial frequency-to-VL calculation, hardware will be purchased to measure average background luminances and to capture CCD images to input into the theoretical spatial frequency-to-VL algorithm. The CCD image capture-to-VL algorithm will be roughly tuned before being sent to Dr. Ron Gibbons.</p><p>This should be the second go-no-go stage of the project, "Hardware Verification". 
The goal is to determine the level of correlation between VL and a filtered, re-normalized CCD spatial frequency calculation. It is assumed that "Hardware Verification" will take at least six (6) months.</p><p>Dr. Ron Gibbons will receive the CCD device, related components, and the spatial frequency filtering algorithm after "Hardware Verification" for complete field tuning and VL measurement verification. The measurement phase of the project is the "Experimental Tune-Up" phase. The field experiments will be performed by Dr. Gibbons at the "Smart Road" facility.</p><p>After "Hardware Verification" and "Experimental Tune-Up", work will begin on "Hardware Consolidation". It is assumed that "Experimental Tune-Up" will take at least three (3) months and that "Hardware Consolidation" will take at least four (4) months.</p><p>Dr. Stemprok will conduct "Hardware Consolidation", in which the software and hardware will be incorporated into a single unit for measurement of field VL. After "Hardware Consolidation" the device will again be sent to Dr. Gibbons for field tune-up and calibration. The cost of the prototype devices is unknown at this point, so it is unknown whether the $7000 target price for the measurement tool is a reasonable expectation; every effort will be made to meet or beat the $7000 target price for a VL meter.</p><p>It is assumed that "Consolidated Hardware Calibration" will take at least three (3) months: two (2) months at UNT for Dr. Stemprok to complete pre-calibration tests and one (1) month with Dr. Gibbons for field calibrations.</p><p>The total project is assumed to take at least twenty-one (21) but less than twenty-four (24) months to complete.</p><p>Deliverable (1), "report on proof of concept", will be provided by Dr. Stemprok and Bobby Green upon completion of the first go/no-go stage of the project, "correlation of theoretical spatial frequency-to-VL calculations".
Deliverable (2) will be settled after the contract has been awarded. Deliverable (3) is self-explanatory in the WDOT RFP. Deliverable (4) will be completed by Dr. Stemprok, Dr. Adrian, and Bobby Green. Deliverable (5) will be completed by Dr. Stemprok and Dr. Gibbons. Deliverable (6), modifications to the instrument, will be conducted by Dr. Stemprok and Bobby Green. Deliverable (7) will be completed by Bobby Green and Karl Burkett of TxDOT. Deliverable (8) is self-explanatory in the WDOT RFP. Deliverable (9), a report on final results to WDOT at WDOT HQ, will be performed by Dr. Stemprok and Dr. Adrian. Deliverable (10), the final report with literature review, will be completed by Dr. Stemprok and Bobby Green. Deliverable (11), journal papers on the several phases of the project, will be prepared and authored or co-authored by Dr. Stemprok, Dr. Adrian, Dr. Gibbons, Karl Burkett, and Bobby Green. Deliverable (12): travel support for WDOT personnel has been accounted for in this budget. Deliverable (13): one fully functional instrument will be provided by Dr. Stemprok and Bobby Green, and nighttime traffic accident rates will be evaluated by Karl Burkett. Deliverable (14): a report with notification of patentable inventions will be prepared by Dr. Stemprok. Deliverable (15): travel support for WDOT personnel has been accounted for in this budget. Deliverable (16): travel to the final WDOT meeting by Dr. Stemprok and other research personnel as necessary. Deliverable (17): provide the final report with literature review, Dr. Stemprok et al. Deliverable (18): provide algorithms and programs, Dr. Stemprok. Deliverable (19): complete the VL meter project in 24 months or less.</p><p>9. Project Summary</p><p>The VL meter project consists of software development, hardware development, seven (7) major reports with at least three (3) complete literature reviews, twenty (20) to twenty-four (24) monthly reports, and twelve (12) to fifteen (15) man-trips to various locations.</p><p>10. 
Scope of Project</p><p>The scope of the VL meter project is quite extensive; the estimated cost of the entire project is approximately $230,000. The cost is negotiable based on a reduction in the scope of work. The items necessary for the successful completion of the project are: the correlation between theoretical VL calculations and theoretical spatial frequency analysis, selection of off-the-shelf hardware for initial verification of spatial frequency-to-VL conversions, hardware verification at the "Smart Road", "Hardware Consolidation", and final calibration verification at the "Smart Road". Any items beyond these critical path items are open to negotiation; a reduction in the scope of work would reduce the cost of this project.</p><p>Schedule showing major milestones and final project completion:</p><p>Concept Verification: 5 months. Hardware Verification: 6 months. Experimental Tune-Up: 3 months. Hardware Consolidation: 4 months. Consolidated Hardware Calibration: 3 months.</p><p>Virginia Tech Transportation Institute: The contribution of VTTI to the project will take approximately 4 months. This includes the preliminary test scenario selection, preliminary photometric verifications, device testing and report preparation. The estimated project plan is:</p><p>Preliminary Testing and Scenario Selection: A. Select Testing Scenarios (1. Luminaire Selection: type, spacing, mounting height, lamp output; 2. Pavement Section Selection: concrete, asphalt 1, asphalt 2; 3. Target Location Selection); B. Photometric Verification (1. Perform STV calculations for the Smart Road; 2. Verify photometric characteristics of the test setup).</p><p>Preliminary Device Testing: A. Test all scenarios, approximately 8 test setups; B. Measure STV with photometer; C. Compare results with the proposed instrument.</p><p>Final Report: A. Prepare a report on the measured results.</p><p>Project Organization: A. Participate in monthly teleconferences (months 2-24); B. Participate in the progress meeting; C. Present a paper at the IESNA Annual Conference and Roadway Lighting Meeting; D. Present results at WDOT.</p><p>11. Working Personnel</p><p>Dr. Roman Stemprok, P.E., is a prime investigator in this proposal. Currently he works as an associate professor at the University of North Texas. He worked on visibility projects at Texas Tech as a research associate in 1998; now he collaborates with personnel at the Department of Engineering Technology at Texas Tech. Dr. Stemprok is a committee member of the Illuminating Engineering Society of North America (IESNA) and also a committee member on the TC1-19 committee of the International Commission on Illumination (CIE), Division 4. He attends the Illuminating Engineering Society (IES) meetings where different illumination standards are engineered. Dr. Stemprok is a member of the Research Management Committee 5 (RMC5) Transportation Advisory Panel (TAP) at the Texas Department of Transportation (TxDOT).</p><p>Relevant Projects Funding History:</p><p> Received a $3000 Faculty Research Grant for the Small Target Visibility (STV) study for the year September 2001 to August 2002, UNT, June 2001.</p><p> Received a $5482.50 Gpby3 University of North Texas grant related to TxDOT research, June 2001.</p><p> $3500 Faculty Research Grant for the Small Target Visibility (STV) study for the year September 2000 to December 2001 through the University of North Texas, June 2000.</p><p> Received a $5000 Gpby3 University of North Texas grant related to STV research, April 2000.</p><p> $2000 Faculty Research Grant for the Small Target Visibility (STV) study for the year September 1999 to December 2000 through the University of North Texas.</p><p> $5000 TxDOT, $5000 travel grant (joint with Dr. Lynn Johnson), for 2001-2002.</p><p> TxDOT research grant: while Roman Stemprok was at Texas Tech, the funding was from the Texas Department of Transportation (TxDOT) (before September 1998).</p><p>Relevant Publications:</p><p> Roman Stemprok, Bobby Green, and Zhen Tang, "Measurement of Visibility with CCD Devices," Proceedings of the Symposium of the Commission Internationale de l'Eclairage (CIE), Division 4 and 5 meeting, Toronto, Canada, September 3-8, 2000.</p><p> Khan, H.K., S.P. Senadheera, D.D. Gransberg, and R. Stemprok, "Influence of Pavement Surface Characteristics on Nighttime Visibility of Objects," Transportation Research Record, Journal of the Transportation Research Board, No. 1692, National Research Council, Washington, D.C., Paper No. 99-0728, pp. 39-48, November 1999.</p><p>Bobby Green, P.E., Associate Professor, Engineering Technology, will serve as Co-Principal Investigator in the proposed research study. Mr. Green received his BSEE and MSEE degrees from Texas Tech University in 1975 and 1979, respectively. 
After receiving his BS he worked for the Federal Aviation Administration, a branch of the US Department of Transportation, as a facilities and equipment engineer installing and modifying electronics equipment for air traffic control centers in the FAA Southern Region. After receiving his MS degree he worked on a US Department of Energy solar power project as a research associate with Texas Tech University, worked as a consulting engineer in the electric power industry for a short time, and then joined Texas Tech as an Engineering Technology professor. Since returning to Texas Tech he has worked on several research projects, including two research projects with the US Air Force at Tyndall AFB and a TxDOT roadway lighting project, and is currently participating in a Texas fire ant initiative project. His teaching experience includes basic undergraduate electrical engineering and engineering technology courses; junior- and senior-level courses in engineering economics, engineering control systems, digital and analog electronics, and digital and analog communications; and a graduate refresher course in advanced mathematics.</p><p>Dr. Ron Gibbons obtained his Ph.D. from the University of Waterloo, Canada, in 1998. His field of research was the reflection properties of pavement surfaces and their impact on target visibility. He also worked on several projects studying visual performance, visual acuity, and peripheral vision. Dr. Gibbons joined the Philips Lighting Company in 1995 as the Manager of the Corporate Calibration and Standards Laboratory, where he was responsible for the measurement and calibration of light sources for the entire North American region. Dr. Gibbons has also worked as a luminaire designer, operations manager and project engineer. He joined the Virginia Tech Transportation Institute as a Research Scientist in August 2001; at Virginia Tech, Dr. Gibbons is responsible for lighting- and visibility-associated research projects. Dr. 
Gibbons has published several papers on roadway lighting, photometry and target visibility.</p><p>Research Student Personnel:</p><p>Roman Stemprok, Bobby Green and Ron Gibbons will hire graduate and undergraduate students during the research investigation at UNT, Texas Tech and Virginia Tech. Student personnel shall be screened for research ability (for example, knowledge of Matlab) and for reliability and willingness to work in a productive and timely manner. Preference will be given to short work assignments. The students will be paid on a monthly basis, and faculty shall sign them off each month, verifying each student's tasks. If no qualified personnel are found, the research money shall roll over to the next year and/or no research money (to WDOT) will be spent.</p><p>Roman Stemprok, Bobby Green and Ron Gibbons will contract consulting help based on the needs of the research investigation.</p><p>12. Research Budget</p><p>Total Direct Costs (UNT): $67,500</p><p>Total Personnel Costs (TTU): $70,000</p><p>Equipment and Supplies: $7,500</p><p>Travel and Fees for working personnel and WDOT employees: $25,000</p><p>Smart Road Usage (with Lighting): 48 hrs @ $57/hour</p><p>Total Modified Direct Costs (VTTI): $33,600</p><p>Total Consultant Costs: $24,000</p><p>TOTAL COST: $230,850</p>
