
MASTER'S THESIS

Design and Testing of the Star Trackers for the SEAM Nanosatellite

Nikola Shterev 2015

Master of Science (120 credits)

Luleå University of Technology
Department of Computer Science, Electrical and Space Engineering
Space Technology Division

Abstract

Star trackers are instruments for attitude determination and are a must for spacecraft where precise attitude knowledge is needed. Star trackers take accurate images of the stars and then perform image processing and pattern recognition in order to accurately determine the attitude. This requires high processing power and drives the size and mass of the instrument, all of which are limited on cubesats. In this thesis work the sensor of a star tracker has been characterized, and calibration and image correction procedures that require low processing capabilities have been developed. The thesis work also includes the design of the star tracker in CAD software, with special attention being paid to the baffle due to the limited size available and the importance of stray radiation suppression. Stray radiation suppression is especially important due to the limited ability for corrections on the image. Furthermore, a method for processing the used star catalog has been developed so that the stellar brightness in the catalog corresponds to the brightness detected by the star tracker.

List of Figures

1.1 SEAM solid model (boom stowed). Taken from [1]

3.1 Master bias frame
3.2 Columns of the master bias frame with their elements averaged
3.3 Thermal frame
3.4 Histogram of the hot spots
3.5 Histogram when hot pixels are removed
3.6 Flat frame taken by pointing at a monitor
3.7 Middle columns and row radial values and their curve fit
3.8 Middle columns and row radial values and their curve fit for the lens assembly to be used

4.1 Johnson's visual passband and visual flux density of a G5V star with magnitude 1
4.2 Flux density of a G5 star with magnitude 2
4.3 Flux density and the corresponding number of photons of a G5 star with magnitude 2
4.4 Quantum efficiency of the sensor
4.5 Photons and generated photoelectrons
4.6 Read intensities vs predicted photoelectron count

5.1 Histogram of RMSE of each pixel for 49 bias frames
5.2 Box in which the sensor was mounted
5.3 Box and light source used in the setup
5.4 Mean value of the gain 1 image set for different exposure times
5.5 PTC determined by both methods and linear fit to the data from the first method
5.6 σ for different mean signal values per pixel. The plot is for analog gain of 1
5.7 SNR for different mean signal values per pixel. The plot is for analog gain of 1

6.1 Placing vanes in a baffle
6.2 Derivation of the equations for determining vane position and height
6.3 Baffle vanes designed with tolerance margins (n=2 on the drawing)
6.4 Bevel edge on the object side (a) and on the objective side (b)
6.5 Rectangle and entrance pupil circle used to find the area viewed by the system at a certain distance
6.6 The star tracker viewed from the front, showing the area viewed by the system at the distance of the vanes
6.7 Design of the star tracker
6.8 Design of the star tracker, viewed from the rear
6.9 V shape of the lens and mount support

Contents

1 Background
  1.1 SEAM
  1.2 Star Trackers
    1.2.1 First-Generation Star Trackers
    1.2.2 Second-Generation Star Trackers
    1.2.3 Modes of Operation
    1.2.4 Field of View, Resolution, Update Rate
    1.2.5 Typical Star Tracker and the One for SEAM
  1.3 Stars, stellar spectra and stellar classes

2 Imaging
  2.1 Different Signals and Noise
  2.2 Calibration

3 Calibration
  3.1 Initial Calibration
    3.1.1 Bias Frames
    3.1.2 Dark Frames
    3.1.3 Flat Frames
  3.2 Final Calibration

4 Estimating the Received Photoelectrons
  4.1 Estimating the Intensity to Be Read by the Sensor
  4.2 Comparison of Predicted and Read Intensities

5 Sensor Characterization
  5.1 Determining the Threshold
  5.2 Noise
  5.3 Photon Transfer Curve
  5.4 Constructing the PTC and Results
    5.4.1 Test Setup
    5.4.2 Creation of the PTC

6 Optomechanical Design
  6.1 Definitions
  6.2 Baffle
    6.2.1 Critical Surfaces
    6.2.2 Power Transferred
    6.2.3 Vane Placement
    6.2.4 Vane Edges
  6.3 Body Star Tracker
    6.3.1 Star Tracker Configuration

7 Conclusion
  7.1 Work Done, Conclusions and Recommendations for Future Work
  7.2 Environmental, Social and Ethical Aspects

Abbreviations

AC    Alternating Current
ADU   Analog-to-Digital Units
APS   Active Pixel Sensor
BFL   Back Focal Length
CAD   Computer-Aided Design
CCD   Charge-Coupled Device
CMOS  Complementary Metal-Oxide Semiconductor
DC    Direct Current
EFL   Effective Focal Length
ELF   Extremely Low Frequency
ET    Edge Thickness
FPGA  Field Programmable Gate Array
FPS   Frames per Second
FOV   Field of View
KTH   Kungliga Tekniska Högskolan - Royal Institute of Technology
LED   Light Emitting Diode
OD    Outer Diameter
PCB   Printed Circuit Board
PTC   Photon Transfer Curve
QE    Quantum Efficiency
SEAM  Small Explorer for Advanced Missions
SNR   Signal-to-Noise Ratio
S/C   Spacecraft
VLF   Very Low Frequency

Chapter 1

Background

The background includes a short description of the SEAM (Small Explorer for Advanced Missions) nanosatellite, a description of star trackers, stars and spectral classes, and the theoretical knowledge needed for understanding most of the work described in this document.

1.1 SEAM

SEAM is a project that aims to design, build and operate a nanosatellite for observation of the magnetic field around Earth and to improve the understanding of the geospace environment. The project is part of the 7th Framework Programme funded by the European Union, involves a consortium of 8 partners and is coordinated by the Royal Institute of Technology (KTH) in Sweden. A solid model of the SEAM satellite can be seen in figure 1.1.

The SEAM satellite is to contribute to the research in three areas: the auroral current system, natural ELF/VLF waves in the ionosphere, and man-made ELF/VLF emissions. This is accomplished by performing high-quality measurements of the three-axis DC magnetic field, the three-axis AC magnetic field and a single component of the AC electric field. These high-quality measurements require attitude knowledge of 1 arcminute, which necessitates the use of a star tracker. The attitude sensors to be used on the SEAM satellite are sun sensors located on the exterior solar panels, a 3-axis Honeywell HMC5843 magnetometer, a gyroscope and two star trackers. Attitude determination will be performed by an unscented Kalman filter which uses the information from all sensors. [1]

One of the star trackers will be placed on the tip of a boom carrying a fluxgate sensor and the other one will be in the main body. These star trackers and the preliminary work on them are the focus of this thesis. The star trackers are being developed by KTH and are based on a 5-megapixel CMOS monochrome image sensor operated at multiple frames per second (FPS). Both star trackers are functionally the same, but the one in the main body will have a larger optical aperture and all of its electronics integrated in a single unit, while the boom-mounted one will only have its optics and pre-processing hardware on the boom and the rest of the electronics in the satellite. The optics have to be mounted on the boom in order to accurately determine the pointing of the boom, and the rest of the electronics are kept inside the satellite to reduce the magnetic disturbance they cause to the magnetometer mounted on the boom. The boom-mounted star tracker will have a smaller optical aperture, so it will require longer exposure times and will thus have a lower frame rate.

Figure 1.1: SEAM solid model (boom stowed). Taken from [1]

1.2 Star Trackers

The most accurate instruments for attitude determination are the star trackers. Star trackers take images of the sky and compare them to star catalogs. The positions of the stars in the catalog are well known, and the angles of rotation needed to match the observed stars with those in the catalog give information about the attitude of the spacecraft. Star trackers are most suitable for three-axis stabilized spacecraft.

A star tracker is basically a digital camera where the sensor at the focal plane is either a CCD (charge-coupled device) or a CMOS (complementary metal-oxide semiconductor) one. CCD sensors usually have lower noise, but CMOS sensors are more resistant to radiation and can read different pixels at different rates. Sensors with pixels that have data processing capabilities are called active pixel sensors (APS).

A star image usually covers several pixels, and the location of the centroid of a star image is the "center of mass" of the photoelectrons collected by each pixel in an n by n block; a minimal sketch of this computation is given below. The position of the centroid can be determined to a subpixel level. The accuracy of the centroid determination depends on the star brightness, the exposure time, and various noise sources. [2]

The first star trackers, developed in the early 1970s, differ significantly from the ones that are currently developed. Therefore, they are often referred to as first-generation and second-generation star trackers. [3][4]
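As an illustration of the centroiding just described, here is a minimal sketch of a subpixel center-of-mass computation over a pixel block. This is not the thesis implementation (which runs on the camera FPGA with integer arithmetic); the array values are invented.

```python
import numpy as np

def centroid(block):
    """Subpixel 'center of mass' of the pixel values in an n x n block.

    block: 2-D array of background-subtracted pixel values.
    Returns (row, col) of the centroid in block coordinates.
    """
    rows, cols = np.indices(block.shape)
    total = block.sum()
    return (rows * block).sum() / total, (cols * block).sum() / total

# Example: a faint star covering a 3x3 block
star = np.array([[0, 2, 0],
                 [1, 9, 4],
                 [0, 3, 1]], dtype=float)
print(centroid(star))  # (1.1, 1.2): slightly below and right of the block center
```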

1.2.1 First-Generation Star Trackers

First-generation star trackers were only able to give the position of a few stars in a coordinate system referenced to the sensor and thus required extensive external processing.

These instruments consisted of a CCD sensor, optics and dedicated electronics. They detected two to six stars in a data frame and then output the CCD coordinates of the stars. The coordinates were then processed in the main computer of the satellite or on the ground to determine the attitude of the satellite. Additional information, such as a sun vector, was often required for the process. [4]

1.2.2 Second-Generation Star Trackers

Second-generation star trackers are fully autonomous, have internal star catalogs, use many stars, can determine the attitude in a lost-in-space situation, have higher accuracy, and offer smoother and more robust operation. They perform star pattern recognition autonomously using internal catalogs and output inertial-space referenced quaternions with no need for external processing or additional attitude information. They can utilize a large number of stars (25 to 65) in the field of view (FOV), and their internal catalogs can contain more than 20 000 stars. This improves the acquisition properties, the accuracy and the percentage of the sky for which the tracker can perform attitude determination. Because these star trackers are autonomous, their integration with the spacecraft is simplified, which leads to a reduction in cost and an increase in reliability. [4]

1.2.3 Modes of Operation

Star trackers can operate in two modes. The first mode is an initial attitude acquisition mode which is used when the spacecraft (S/C) attitude is unknown. For this reason this mode is often called lost-in-space mode. The second mode, which is used most of the time during the instrument operation, is the tracking mode.

In the lost-in-space mode the star tracker scans the entire FOV for stars and determines the centroids of the brightest stars. At least three stellar centroids have to be determined if the tracker is to be able to determine the attitude. After the centroids are found, the tracker calculates the arc lengths between the stars, the angles that they form and/or their brightness, and uses this information to find the stars in the star catalog.

When the star tracker is in tracking mode, it follows (tracks) a couple of stars that have already been recognized. A region of interest is formed around each of the tracked stars, and the centroid of the bright pixels in every region is determined to find the new position of the stars. The size and position of the regions depend on the previous position of the stars, the estimated change in attitude during the integration time, and the accuracy of the attitude knowledge. When a star moves out of the FOV, the star tracker searches for a new star to replace it with. Bright stars that are well separated from other stars are preferred as replacements. These stars are easily found because the S/C attitude has already been determined and the star catalog can be used to give the approximate position of the new stars. [2] This mode has a higher update rate than the lost-in-space one because only the pixels in the regions of interest are read and the tracker knows which star pattern to look for.

1.2.4 Field of View, Resolution, Update Rate

Parameters which are commonly used to assess star tracker performance are the field of view, the resolution and the update rate.

Star trackers with good resolution are able to more accurately determine the position of the observed stars and thus the attitude. The accuracy of the attitude determination depends on the resolution of the sensor and its size, the size of the FOV, and the centroiding accuracy. Having a sensor with small dimensions and a large number of pixels leads to better resolution, as does

having a smaller FOV and better centroiding. The resolution of the output attitude is also affected by the star tracker software, which may use a couple of star patterns and determine the attitude using each of them. It can then evaluate the attitude for the different star patterns and choose the most accurate one.

The centroiding accuracy is reduced by optical aberrations and noise. Optical aberrations are reduced with better optics and by correcting the recorded image for the aberrations of the used optics. The effect of the different noise sources is reduced by acquiring a stronger stellar signal (more photoelectrons). This is achieved by using a bigger aperture for the optics or by increasing the integration time. However, an increase in the aperture leads to an increase in size and weight, and an increase in the integration time leads to a lower update rate and smearing of the image.

1.2.5 Typical Star Tracker and the One for SEAM

Typical star trackers have an update rate in the range of 0.5 to 10 Hz, an accuracy of a few arcseconds in the boresight pointing direction (less accurate for rotation about the boresight), a mass of about 3 kg and a power requirement of about 10 W. [2]

SEAM is a nanosatellite, so the star tracker designed for it has to be significantly smaller and lighter. Thus, its design is simplified and its attitude determination accuracy is lower. The star tracker consists of a baffle to limit the unwanted radiation, a lens assembly, a sensor to record the received light, a camera FPGA that performs image processing and star detection, a Smartfusion2 board for performing the lost-in-space and tracking modes, and a support structure.

The star tracker designed for the SEAM satellite uses a CMOS sensor, chosen because of its good radiation resistance. An update rate of about 4 Hz will be used. Centroid determination is to be performed at the hardware level (camera FPGA), and only the centroid position, the number of pixels in the star and the stellar intensity are provided to the software for the tracking and lost-in-space modes. Performing the centroiding at the hardware level means that no image processing is required at the software level, and as a result the requirements on the Smartfusion2 board are lowered. The camera FPGA works with integer numbers, so subpixel centroid determination cannot be performed, which limits the accuracy of the star tracker. A small FOV (9.32 x 7 degrees) was chosen to compensate for that and to simplify the baffling of unwanted radiation. A big FOV would require a bigger baffle, which is not feasible considering the limited space available for the star tracker. The disadvantages of the small FOV are that more stars are needed in the used star catalog and that the minimum stellar magnitude that needs to be detected is increased.
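To put the chosen FOV in perspective, a rough plate-scale calculation is sketched below. The 2592-pixel sensor width is an assumption for a typical 5-megapixel sensor, not a value taken from the thesis.

```python
# Rough plate scale of the SEAM tracker along the long FOV axis.
FOV_DEG = 9.32      # long side of the FOV, from the text
WIDTH_PX = 2592     # assumed pixel count of a 5-megapixel sensor (hypothetical)

arcsec_per_px = FOV_DEG * 3600 / WIDTH_PX
print(f"{arcsec_per_px:.1f} arcsec per pixel")  # ~12.9 arcsec, without subpixel centroiding
```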

1.3 Stars, stellar spectra and stellar classes

Star trackers use stars for reference because the stars are the brightest objects in the sky whose apparent position remains relatively the same with time.

To find which of two stars is brighter, one compares their fluxes. However, the received flux can differ significantly between stars - more than a thousand times. For this reason direct flux comparison is avoided and the stellar magnitude is introduced instead. The difference in magnitude between two stars is equal to 2.5 times the base-10 logarithm of their flux ratio:

m_2 - m_1 = 2.5 \cdot \log_{10}\left(\frac{F_1}{F_2}\right)

where m_2 and m_1 are the magnitudes of star 2 and star 1, respectively, and F_2 and F_1 are their fluxes. For instance, a hundred times difference in flux corresponds to a magnitude difference of just 5. Usually the flux of the star Vega is taken to correspond to a magnitude of zero, and the magnitudes of all other stars are determined with respect to that of Vega. It should be noted that the brighter the star, the smaller its magnitude is.
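A quick numeric check of the magnitude-flux relation (plain Python, not code from the thesis):

```python
def flux_ratio(m1, m2):
    """Flux ratio F1/F2 of two stars with magnitudes m1 and m2."""
    # Rearranging m2 - m1 = 2.5 * log10(F1/F2)
    return 10 ** ((m2 - m1) / 2.5)

print(flux_ratio(0.0, 5.0))  # 100.0: five magnitudes correspond to a 100x flux ratio
print(flux_ratio(0.0, 1.0))  # ~2.512: one magnitude is a factor of about 2.5
```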

There are two types of magnitude in use: apparent magnitude and absolute magnitude. Apparent magnitude is the magnitude of the star as seen by an observer on Earth. This is the magnitude of interest for star trackers, as currently most operate in the vicinity of Earth. Clearly the apparent magnitude of a star will increase with its distance from Earth, since the star appears dimmer. In order to characterize the radiation emitted from a star, astronomers use the absolute magnitude of the star, which is unaffected by its distance from Earth.

How much flux is detected from a given star depends also on the wavelength region in which the sensor operates. Therefore, magnitudes for different wavelength bands also exist, for example the visual magnitude, m_V, the ultraviolet magnitude, m_U, and the blue magnitude, m_B.

Irradiance is the power of electromagnetic radiation received per unit area of a surface. The SI unit for irradiance is W/m^2; the cgs units, erg·cm^-2·s^-1, are the ones mostly used in astronomy. In astronomy the irradiance is also often called intensity. Irradiance (intensity) gives the total radiation received at all frequencies. A sensor that is able to detect radiation at all frequencies with the same efficiency does not exist; therefore, a more practical quantity is the spectral irradiance, which is the received radiation per unit area at a given frequency. The SI unit for spectral irradiance is W/m^3, but W·m^-2·Å^-1 is more commonly used. It should be noted that the stellar flux is measured in erg·s^-1·m^-2 and the flux density in erg·s^-1·m^-2·Hz^-1, so they correspond to the intensity and the spectral intensity, respectively.

Chapter 2

Imaging

This chapter provides information on the different signals and noise in the star tracker sensor, how to measure them and how to calibrate the tracker so that only the wanted signal is left. The Handbook of Astronomical Image Processing [5] is recommended to anyone interested in learning more about astronomical imaging.

2.1 Different Signals and Noise

The signal is the information that one looks for when taking an image. In the case of star trackers, the brightness (intensity) of the stars in the FOV is sought, so that pattern recognition can be performed. However, the intensity cannot be measured directly; instead the measured signal is the number of photons received from a star, which is related to the intensity.

Unfortunately, photons do not arrive in a steady flow, and their number deviates from the average number from sample to sample. This deviation in the signal is called noise and is due to the fact that an image is not a precise indicator of the photon arrival rate, but just a sample of the photon flux - a record of the photons that arrived during that particular exposure time. Fortunately, the photon samples always have the same distribution, which is called the normal distribution.

The terms mean value and standard deviation are needed to explain the normal distribution. The mean value is simply the average of a large number of samples, and the standard deviation, σ, gives information on the variation of the values around the mean value. The formula for the standard deviation depends on whether it is applied to the whole population studied or just to a sample of this population. The formula for the whole population is given in equation (2.1) and the one for a sample in equation (2.2):

\sigma = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n}} \qquad (2.1)

\sigma = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}} \qquad (2.2)

where x_i is the value of the i-th element in the population or sample data set, and \bar{x} is the mean value. In a normal (Gaussian) distribution 68.3% of the samples will lie within one standard deviation from the mean value, 95.4% within two standard deviations and 99.7% within three.
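Equations (2.1) and (2.2) map directly onto NumPy's ddof parameter; a minimal check with invented pixel samples:

```python
import numpy as np

samples = np.array([167.0, 169.0, 166.0, 168.0, 170.0, 167.0, 168.0])  # invented values
n = samples.size
mean = samples.mean()

sigma_population = np.sqrt(((samples - mean) ** 2).sum() / n)     # equation (2.1)
sigma_sample = np.sqrt(((samples - mean) ** 2).sum() / (n - 1))   # equation (2.2)

# NumPy equivalents: ddof selects the denominator n (ddof=0) or n-1 (ddof=1)
assert np.isclose(sigma_population, samples.std(ddof=0))
assert np.isclose(sigma_sample, samples.std(ddof=1))
```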

For photons the standard deviation is equal to the square root of the mean value. This means that for very large photon fluxes the deviation will be negligible compared to the signal.

The raw image that is obtained from a star tracker camera contains not only the signal from the observed objects, but also dark current and an offset from the zero signal, called bias. Furthermore, the image is affected by contamination, optical vignetting and differences in the sensitivity of the pixels of the sensor.

As mentioned, the bias is an offset from the zero signal. Bias is used so that the pixel values cannot become negative due to noise. The value of the bias should be constant, but it is affected by the readout noise. The readout noise comes from the detection and amplification electronics and is independent of the signal and bias level. Due to imperfections, the bias of some pixels differs from that of others, and some of them form a fixed-pattern bias.

Dark current consists of electrons generated in the crystal lattice of a silicon detector. Those electrons arise due to defects in the lattice. The number of generated electrons follows a Poisson distribution, so the dark current has noise associated with it, and the standard deviation of the dark current, σ_d, is equal to the square root of the number of dark current electrons, \bar{x}_d [5]. It should be noted that dark current increases with temperature, and because of this dependence dark current electrons are often called thermal electrons.

Vignetting is caused by objects that obscure the lens or lenses in the system. Usually such objects are the retaining rings used to keep the lenses in place. Vignetting leads to less light reaching the sensor and is more pronounced for pixels further away from the center.

In order to extract useful information out of an image, one requires small noise relative to the signal value. Therefore, the signal-to-noise ratio, SNR, is used to quantify the signal quality. As the name indicates, the SNR is simply the signal divided by the noise, or the mean value divided by the standard deviation when only photon flux is considered. The SNR is given by the formula:

SNR = \frac{S}{\sigma} \qquad (2.3)

where S is the signal and σ is the noise. It should be noted that when the SNR of an image is mentioned, it often refers to the SNR of the background.

2.2 Calibration

As mentioned earlier, a raw image contains the detected photon count, x, dark current, x_d, and bias, b. The raw image is in digital units, but the photon count and the dark current are measured in electrons (photons generate photoelectrons at the sensor), which are converted to a voltage which is, in turn, digitized to analog-to-digital units, ADU. The conversion applies a gain which is measured in e^-/ADU and is denoted by g. This means that the photon and dark current signals have to be divided by the gain in order to express them in ADU. The bias is already in ADU, so the signal of the raw image can be written as:

S_{raw} = \frac{x}{g} + \frac{x_d}{g} + b \qquad (2.4)

Calibration is performed to remove the bias and the dark current so that only the photon flux signal remains. Furthermore, calibration compensates for vignetting, contamination and differences in quantum efficiency (QE) over the sensor. QE is the ratio of the generated photoelectrons to the photons incident on the sensor. The calibration involves making bias frames, dark frames, flat frames and the flat dark frames that accompany them.

Bias frames are taken with the lens cover on and zero or close to zero integration time. This way the only recorded signal is the bias one and the only noise is the readout noise.

The bias frames are later subtracted from the image to remove the bias, the value of which is independent of the integration time. A bias frame often reveals a fixed-pattern bias and differences in the pixel values depending on their location. The latter is due to the fact that during the readout time some pixels accumulate thermal electrons, and because pixels cannot be read simultaneously, some accumulate more thermal electrons than others.

Dark frames are taken in order to study the dark current and the noise associated with it. The dark current can differ from pixel to pixel. Some pixels may form a pattern on the sensor due to the dark current signal, and some single pixels, known as hot pixels, may have a dark current which is hundreds or even thousands of times larger than the average one. Dark frames are taken with the cover on and with a large integration time. The dark current is linearly dependent on the integration time, so the dark current for a different exposure time can be easily calculated. A large integration time is used because with it the dark current signal increases, which, in turn, increases the SNR. This ensures that the found accumulation rate of thermal electrons (dark current electrons) has a small uncertainty. A dark frame contains the dark current signal plus the bias. Therefore, the bias has to be subtracted from it in order to obtain a scalable dark frame which can be used to subtract the dark current from images with different integration times. This scalable dark frame is also called a thermal frame.

Flat frames are made to compensate for vignetting, differences in QE and contamination (usually dust donuts). A flat frame is taken by imaging an object with a uniform light distribution. Such an object can be the twilight sky, a white screen or a light box constructed for this sole purpose. The integration time should be such that the pixel values of the image are roughly half the maximum value. Flat frames require that flat dark frames are also taken. Those dark frames are taken with the same configuration but with the cover on and are needed to remove the bias and the dark current from the flat frame.
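The frame calibration described above can be summarized in a short sketch. The function name and arguments are illustrative, not the thesis's actual code:

```python
import numpy as np

def calibrate_frame(raw, master_bias, thermal, flat_norm, t_exp, t_thermal):
    """Full calibration: subtract bias, subtract scaled dark current, divide by flat.

    raw         - raw image in ADU
    master_bias - averaged bias frames
    thermal     - scalable dark frame (master dark minus master bias), taken at t_thermal
    flat_norm   - normalized master flat (bias- and dark-corrected)
    """
    dark = thermal * (t_exp / t_thermal)  # dark current scales linearly with exposure time
    return (raw - master_bias - dark) / flat_norm
```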

The described calibration procedure requires too much processing power and is, therefore, not applicable to the star tracker designed for the SEAM satellite. To simplify the procedure, the bias and dark frames are reduced to single values, equal to the mean of the respective frame, which are subtracted from every pixel of the raw image. The small pixel-to-pixel deviations of the bias and dark frames from their respective single values will act as extra noise once the image has been calibrated. The flat frame will be replaced with a second order polynomial, which is a good approximation as will be seen later. This way a flat frame does not need to be stored in the star tracker. The only disadvantage is that correction for contamination and for differences in QE between the pixels becomes impossible. However, due to the high cleanliness standards for space components, the effects of contamination should be negligible. The QE of the pixels for the chosen sensor is also very uniform.
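The simplified on-board scheme then reduces to scalar operations plus a radial polynomial applied per star. A sketch, using the single values found later in chapter 3 (assumed here for illustration):

```python
import numpy as np

BIAS_ADU = 167.36        # single-value bias frame (chapter 3)
DARK_ADU_PER_S = 0.30    # dark current accumulation rate, ADU/s (chapter 3)

def simple_correct(raw, t_exp):
    """Subtract a single bias + dark value; vignetting is corrected per star later."""
    corrected = raw - (BIAS_ADU + DARK_ADU_PER_S * t_exp)
    return np.clip(corrected, 0.0, None)  # negative values would destabilize centroiding
```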

Chapter 3

Calibration

This chapter is about characterizing the image sensor and the optical system of the star tracker so that the most suitable method for correction of the taken images can be chosen. Initial and final calibrations were performed. The initial calibration was used to study the different calibration frames for the chosen sensor and to decide if those frames can be replaced by simpler ones. A simpler calibration frame may, for example, be just the average of the bias, which is to be subtracted from the taken image, instead of a full bias frame. This is important, as saving and subtracting a bias frame from every taken image would increase the memory and processing demands on the tracker. The final calibration was performed after the calibration method had been chosen and was used to correct the star images used to evaluate the star tracker performance.

3.1 Initial Calibration

The initial calibration of the image sensor involved 1 set of bias frames, 2 sets of dark frames and 2 sets of flat frames. One of the dark frame sets has a smaller integration time, which was chosen so that the value of the hot pixels is close to but less than the maximum pixel value. The other dark frame set has the longest integration time for which the images could be saved (setting the exposure time register to too high values led to saving problems). Dark frames with a long integration time are needed for studying the dark current when the hot pixels are ignored. The flat frames were taken by pointing the camera towards a paper-covered monitor which was displaying a white screen. Covering with paper is necessary because without it the recorded image is not uniform: it has horizontal lines, the position of which changes from image to image.

3.1.1 Bias Frames

The bias frames were taken with the integration time register set to 0x0001, which corresponds to 3.83·10^-4 seconds. Bias frames are ideally taken with zero integration time, but that is not possible for the used imager. Nevertheless, the dark current signal for the used time is negligibly small, and if any is present, it will be considered part of the bias.

Ten bias frames have been averaged to produce a master bias frame, which is shown in figure 3.1. As can be seen, the deviations from the mean bias level are evenly distributed; however, there seems to be a fixed-pattern bias in the form of horizontal lines. That pattern seems to be more pronounced at the left corner of the image. To better visualize this, the values in every column have been averaged and then plotted (figure 3.2). It can be seen from figure 3.2 that there indeed is a fixed-pattern bias. However, the average value differs by only about one digital unit, so if a single bias value is to be subtracted from the images of the star tracker, the accuracy of the star intensity determination will be only slightly reduced.

Figure 3.1: Master bias frame

The standard deviation for the master bias frame was calculated using equation (2.1), where x_i was the value of each pixel of the master frame and \bar{x} was the mean value of all the pixels in the same frame. The standard deviation was found to be 1.3280 ADU. When the same calculation was performed for the tenth bias frame alone, the value of σ was 3.6850 ADU. By comparing the two values one can clearly see the advantage of using more than one bias frame to create a master bias frame. The mean pixel value of the master bias frame is 167.3595 ADU. Due to the small deviation in pixel values and the complications involved in using a bias frame on board, it was decided that for the final calibration the bias will be corrected by subtracting the average value of the master bias frame. That average value can be referred to as a single-value bias frame.

Figure 3.2: Columns of the master bias frame with their elements averaged

3.1.2 Dark Frames

The first set of dark frames consists of 10 captures with an integration time of 1.2020675 seconds. The integration time was chosen so that none of the hot pixels reaches saturation but is close to that value. The 10 frames were averaged to produce the master dark frame, which has a mean pixel value of 167.7724 ADU. Then the master bias frame was subtracted from the master dark frame to produce the thermal frame. That frame can be seen in figure 3.3, and it is obvious that there is no pattern in the observed noise. The thermal frame has a mean pixel value of 0.4129 ADU, which is expected considering how close the mean pixel values of the master dark and bias frames are. Some of the pixels of the thermal frame have negative values, which is actually not surprising, as the difference in the mean values is smaller than the noise. If a very large number of frames had been used to create the master bias and dark frames, there would have been no negative pixels in the thermal frame.

As there is no obvious pattern in the dark current, a single-value thermal frame can be used, and individual pixel corrections can be applied only to the hot pixels. In order to use a single value as the thermal frame, its mean value has to be calculated without the hot pixels. To do that, all hot pixels, with an ADU value greater than 150, have been set to the previous mean value (0.4129). The mean dark current that was accumulated in all but the hot pixels in 1.2016845 seconds (1.2020675 − 3.83·10^-4) corresponds to 0.363297 ADU. This means that the dark current accumulation rate for those pixels is 0.302323 ADU per second. The accumulation rate was also studied at different temperatures using a thermal chamber, and its increase due to higher temperature was negligible. The slow accumulation rate suggests that the single-value thermal frame can be ignored, or that a single value for both the bias and the thermal frame can be used, the latter being preferable. This value can be taken as the mean of the dark frames, which also contains the bias level.

A histogram of the hot pixels (figure 3.4) was produced to better understand the influence of hot pixels and to decide whether to apply a separate correction for them. While the number of hot pixels is relatively small, their values are not. The energy recorded from a star with magnitude 6, for the exposure time to be used (0.2405 s), corresponds to just a couple of hundred ADU. The centroiding algorithm ignores single pixels; however, if a hot pixel is one of the pixels of a star, the recorded energy will be greatly increased and the pattern recognition of the stars may fail. Therefore, hot pixel correction is necessary for the actual sensor to be used on the star tracker. The histogram starts from a pixel value of 500 ADU to better show the high-value hot pixels and to ignore pixels whose value is large due to statistical probability. From the histogram of the rest of the pixels (figure 3.5) it is clear that most high-value pixels with values up to 500 are the "tail" of the normal distribution curve.
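A minimal sketch of the hot-pixel handling described above, using the 150 ADU cut from the text (array names are illustrative):

```python
import numpy as np

def clean_hot_pixels(thermal_frame, cut_adu=150.0):
    """Replace hot pixels in a thermal frame with the mean of the 'cold' pixels."""
    hot = thermal_frame > cut_adu
    cold_mean = thermal_frame[~hot].mean()
    return np.where(hot, cold_mean, thermal_frame), hot
```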

Figure 3.3: Thermal frame

The second set of dark frames consists of 5 captures with an integration time of 2.681366 seconds. That integration time was the maximum one for which the captures could be saved. A long integration time was chosen so that the dark current in all but the hot pixels could be well studied. The dark frames were averaged to produce a master dark frame, from which the bias frame was subtracted to produce a thermal frame for the longer integration time. The mean dark current of the "cold" pixels in that thermal frame is 0.794407 ADU. The difference in integration time between the bias and the dark frames is 2.680983 seconds, so the dark current accumulation rate in the "cold" pixels is 0.29631 ADU per second. This value is very close to that of the thermal frame with the shorter integration time (0.302323 ADU per second) and is considered a better approximation, as it comes from a bigger dark current signal. If the single-value bias and thermal frames are to be subtracted separately, the mean value of the obtained thermal frame will first be scaled to the integration time used for the star tracker images.

Figure 3.4: Histogram of the hot spots

Figure 3.5: Histogram when hot pixels are removed (mean pixel values for 10 dark frames, exposure time 1.4421 s)

3.1.3 Flat Frames

The flat frames were made by pointing the camera at a white screen. An alternative of pointing the camera at a cloudy sky was also tried, but it turned out to be unusable due to inhomogeneity of the received light. The inhomogeneity was caused by observing different layers of clouds; it was impossible to avoid having different layers due to the large area viewed with the used FOV.

The flat frame was created by taking 100 images of a white monitor and then 100 dark frames with the same settings. Both sets were averaged to produce a master flat frame and an accompanying master dark frame. The dark frame was subtracted from the flat frame to remove the bias and the dark current. The resulting flat frame was normalized and can be seen in figure 3.6. The normalized frame was then used to find a second order polynomial that approximates the vignetting. As vignetting is a function of the radial distance from the center of the image, the middle column and row were used for the curve fitting. Perfect alignment of the camera and the monitor is hard to achieve, so there were slight differences between the values of the middle column and row before they reach the center and after that. To compensate for this, elements at the same distance from the center were averaged to produce mean column and mean row radial values. Those were then used for the curve fitting, and the results can be seen in figure 3.7. From the figure it can be seen that, as the radial distance increases, the vignetting is stronger in the column direction. This could be due to the camera not being normal to the monitor, which would result in a decreased signal towards the edges and a shift of the intensity peak away from the center of the FOV. When the values of the whole middle column were plotted, it was observed that the intensity peak was indeed shifted. For the final calibration the camera was aligned better with the screen normal.

The flat field correction of the raw image could involve calculating the radial distance of each pixel from the center of the image and the value of the polynomial for that distance, and then dividing the value of each pixel by that of the polynomial to compensate for the vignetting. However, in that case the pixel value would be divided by a number smaller than 1 and its noise would increase. As the vignetted pixels receive less light, their SNR is smaller. This means that if they are divided by a number smaller than 1, the threshold of the centroiding algorithm will have to be increased, as the minimum threshold depends on the noise level. A better alternative is to first perform the centroiding, which also involves reading the stellar radiation, and then apply the vignetting correction to the recorded radiation. Furthermore, this way the vignetting correction has to be applied only to the stars rather than to every pixel of the image.
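A sketch of the radial curve fit just described: mirror-average the middle row of the normalized flat about the center, then fit a second order polynomial. The array layout is an assumption, not the thesis's code:

```python
import numpy as np

def fit_vignetting_polynomial(flat_norm):
    """Fit p1*r^2 + p2*r + p3 to the radial profile of a normalized flat frame."""
    n_rows, n_cols = flat_norm.shape
    cy, cx = n_rows // 2, n_cols // 2
    half = min(cx, n_cols - cx - 1)
    r = np.arange(half)
    # Average elements at the same distance on both sides of the center,
    # as done in the text to compensate for imperfect alignment.
    profile = 0.5 * (flat_norm[cy, cx + r] + flat_norm[cy, cx - r])
    return np.polyfit(r, profile, 2)  # [p1, p2, p3], highest power first
```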

3.2 Final Calibration

The final calibration involved determining the single values to be subtracted to remove the bias offset and the dark current signal, and finding the second degree polynomial used to correct the detected stellar radiation for vignetting. The bias offset was determined as the average pixel value of 200 bias frames and is 167.3602 ADU. The dark current value in ADU was calculated as the average pixel value of 200 thermal frames (dark frames minus bias frames) and is 0.3303 ADU. Hot pixels were ignored when this value was calculated, by setting every thermal-frame pixel with a value greater than 60 to the average pixel value. The polynomial for vignetting correction was determined in the same way as during the initial calibration, but for the lens assembly to be used on the body-mounted star tracker.

Figure 3.6: Flat frame taken by pointing at a monitor

The middle column and row radial values and their curve fits can be seen in figure 3.8. The polynomial that was chosen is the one for the column values, because the alignment of the camera with the monitor was not as good for the rows. The polynomial is:

p_1 \cdot x^2 + p_2 \cdot x + p_3 \qquad (3.1)

where x is the distance from the center of the FOV in pixels, p_1 is -1.4238·10^-7, p_2 is 2.1167·10^-6 and p_3 is 1.

A few pixels of the sensor have a bias value smaller than the average bias value by more than 10 ADU. With the short exposure time used, they will almost always have an output smaller than the single-value bias frame. Therefore, care should be taken that the subtraction results in zero instead of a negative number. Having negative values may lead to instability in the centroiding process.

Figure 3.7: Middle columns and row radial values and their curve fit
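Applying polynomial (3.1) to a star's recorded intensity after centroiding might look like the following sketch (function and variable names are illustrative):

```python
import numpy as np

P = [-1.4238e-07, 2.1167e-06, 1.0]  # p1, p2, p3 from equation (3.1)

def devignette_star_intensity(intensity_adu, x, y, cx, cy):
    """Correct a star's recorded intensity for vignetting after centroiding.

    (x, y) is the star centroid and (cx, cy) the FOV center, all in pixels.
    """
    r = np.hypot(x - cx, y - cy)   # radial distance in pixels
    throughput = np.polyval(P, r)  # relative brightness, <= 1 away from the center
    return intensity_adu / throughput
```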

Figure 3.8: Middle columns and row radial values and their curve fit for the lens assembly to be used

Chapter 4

Estimating the Received Photoelectrons

The stellar magnitudes in star catalogs are given for passbands which differ from the QE curve of the image sensor. For this reason the star catalog needs to be processed so that it contains information about the intensity of the stars as it would be read by the used image sensor. This chapter describes the method used to calculate this expected stellar intensity and gives a comparison of the calculated intensities with intensities taken from actual star images.

4.1 Estimating the Intensity to Be Read by the Sensor

The stellar intensity recorded by the sensor is directly related to the photoelectrons generated in it. The number of photoelectrons, in turn, depends on the spectral characteristic of the influx from the observed star, the magnitude of the star, the photon energy and the absolute quantum efficiency.

The spectral characteristic of an observed star is similar to that of other stars of the same spectral class. Therefore, a star with a well-recorded spectrum was taken as representative of each spectral class. The star representatives were taken from the Pickles atlas [6], which offers stellar spectra for different spectral types and luminosity classes. In order to scale the spectrum of the representative star to that of the observed one, one needs to know the difference in their magnitudes. The visual magnitude of the observed star is taken from the used star catalog (SKY2000 [7]), while the one of the Pickles atlas star is calculated using equation (4.1), taken from [8]:

m_{V,0} = -2.5 \cdot \log_{10}\left(\int V_\lambda \cdot f_\lambda \, d\lambda\right) - 13.74 \qquad (4.1)

where m_{V,0} is the visual magnitude of the star, V_\lambda is the visual sensitivity curve (visual filter/passband), \lambda is the wavelength in Å, and f_\lambda is the flux density of the star, measured in erg/s/cm^2/Å. The visual magnitudes recorded in SKY2000 were observed in the visual passband of Johnson and Morgan (1953), so the visual magnitude of the representative stars was calculated in the same passband. Tabulated values for Johnson's filter were taken from [9] and were interpolated to produce the filter curve. The passband curve and the calculated visual flux density can be seen in figure 4.1. As a sanity check the results were compared to others, calculated using the sensitivity curves given in [8] and [10]. The three sensitivity curves gave similar results.
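Equation (4.1) is a single numerical integration; a sketch assuming the passband and spectrum are already interpolated onto a common wavelength grid:

```python
import numpy as np

def synthetic_visual_magnitude(wavelength_A, flux_cgs, v_passband):
    """Visual magnitude of a template spectrum per equation (4.1).

    wavelength_A - wavelength grid in Angstrom
    flux_cgs     - flux density f_lambda in erg/s/cm^2/A
    v_passband   - Johnson V response V_lambda on the same grid
    """
    integral = np.trapz(v_passband * flux_cgs, wavelength_A)
    return -2.5 * np.log10(integral) - 13.74
```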


Figure 4.1: Johnson’s visual passband and visual flux density of a G5V star with magnitude 1

The magnitude of the representative star, m_{V,0}, and the magnitude of the observed star, m_V, known from the star catalog, are used to find the ratio between the fluxes of the two stars, which is the same as their flux density ratio. The ratio is given by:

m_V - m_{V,0} = -2.5 \cdot \log_{10}\left(\frac{F_V}{F_{V,0}}\right)
\quad\Rightarrow\quad
\frac{F_V}{F_{V,0}} = 10^{(m_{V,0} - m_V)/2.5} \qquad (4.2)

where F_V is the visual flux of the observed star and F_{V,0} is the visual flux of the representative star. The visual flux of a star is F_V = \int V_\lambda \cdot f_\lambda \, d\lambda. By multiplying the spectrum of the representative star by the flux ratio of the two stars, the spectrum of the observed star is obtained. The spectrum of a magnitude 2 star with spectral class G5 is shown in figure 4.2. It is worth noting that the values for wavelengths larger than 11000 Å are not important, as the QE is zero for those wavelengths.

The number of incoming photons per second on a square centimeter for a given wavelength is found by dividing the flux density, expressed in J/s/cm^2/Å, by the photon energy for that wavelength. The photon energy is given by:

E = \frac{hc}{\lambda} \qquad (4.3)


Figure 4.2: Flux density of a G5 star with magnitude 2

where E is the photon energy in Joules, \lambda is the wavelength in Å, h = 6.626·10^-34 J·s is Planck's constant and c is the speed of light, expressed in Å/s. The photon influx for the same magnitude 2 star is shown in figure 4.3, where it is compared to the flux density of the star.

The number of photons which are converted into photoelectrons per second on 1 cm^2 of aperture area is found by multiplying the influx of photons with the quantum efficiency of the sensor. The QE was taken from the datasheet, but as it was available only as an image, it had to be extracted using a plot digitizer and then interpolated so that the data was per integer values of the wavelength. The sensor QE can be seen in figure 4.4, and a plot of the incoming photons and detected photons (equal to the number of photoelectrons) can be seen in figure 4.5. For a G2 star with a visual magnitude of 0, it was calculated that the used sensor will generate 8.0194·10^5 photoelectrons in 1 second for a lens aperture area of 1 cm^2, or 8019.4 photoelectrons for an area of 1 mm^2. This is comparable to the 19100 photoelectrons calculated in [11] for the same star but for a different sensor with a different QE. It should also be noted that in [11] the stellar spectrum is calculated by approximating the star as a black body, while here a representative spectrum of a real star is used. The real stellar spectrum has absorption lines, which also account for the smaller photoelectron count. As can be seen in figure 4.4, no information is provided about the QE of the sensor for wavelengths shorter than 4180 Å, and the QE curve is almost linear around that wavelength. Therefore, a linear extension of the curve was calculated in order to extend the range of the QE down to 3000 Å, and the number of photoelectrons generated was calculated for both the extended and the non-extended QE.
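The whole estimation chain of this section fits in a few lines; a sketch under the assumption that all curves share one wavelength grid:

```python
import numpy as np

H_PLANCK = 6.626e-34  # Planck's constant, J*s
C_ANGSTROM = 3.0e18   # speed of light in Angstrom/s

def photoelectrons_per_s_cm2(wl_A, template_flux, m_v0, m_v, qe):
    """Predicted photoelectron rate per cm^2 of aperture for an observed star.

    template_flux - representative-class flux density in J/s/cm^2/A
    m_v0, m_v     - visual magnitudes of the template and the observed star
    qe            - sensor quantum efficiency on the same grid
    """
    flux = template_flux * 10 ** ((m_v0 - m_v) / 2.5)  # scale template, eq. (4.2)
    photon_energy = H_PLANCK * C_ANGSTROM / wl_A       # eq. (4.3), in Joules
    photons = flux / photon_energy                     # photons/s/cm^2/A
    return np.trapz(qe * photons, wl_A)                # photoelectrons/s/cm^2
```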


Figure 4.3: Flux density and the corresponding number of photons of a G5 star with magnitude 2

4.2 Comparison of Predicted and Read Intensities

The intensity that is read by the sensor is in digital units and is for the used exposure time, which is not equal to 1 s. The expected intensity is in photoelectrons per second per unit area. To transform the photoelectrons to digital units, one needs to multiply them by the area of the aperture, the exposure time and the conversion factor from electrons to digital units. This was not done when the two intensities were compared, as the conversion factor is the same for the camera board, the exposure time is the same for stars on the same image, and if the camera is not changed, the aperture also stays the same. Therefore, the expected intensity in digital units is just a scaled version of that in photoelectrons, and the relationship between them does not change if the same hardware is used.

It should be noted that what is sought here is to see whether the ratio of the two intensities stays the same for different stars. If it does, then the method for calculating the number of photoelectrons works well, and the star catalog to be used can be processed so that it uses new magnitudes that correspond to the QE of the sensor. The new magnitudes are to be calculated based on the predicted photoelectron count to be recorded by the used sensor. A plot of the read intensity versus the predicted photoelectron count for the used optical system and the same exposure time can be seen in figure 4.6. Information about the stars used for the plot is presented in table 4.1. The plot also includes a second degree polynomial fit to the data. As can be seen, for large photoelectron values, corresponding to bright stars, the relationship is linear. The relationship is not linear for dim stars, as for them the SNR is smaller, which leads to a less accurate determination of their intensity.
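The second degree fit of figure 4.6 can be reproduced from table 4.1; the sketch below uses only the first five stars:

```python
import numpy as np

predicted = np.array([3.6640, 1.6347, 1.0249, 0.9760, 1.0238]) * 1e6  # photoelectrons
read_adu = np.array([1.6725, 0.4870, 0.2589, 0.2150, 0.2443]) * 1e5   # recorded ADU

coeffs = np.polyfit(predicted, read_adu, 2)  # 2nd-degree polynomial, as in figure 4.6
print(np.polyval(coeffs, 2.0e6))             # expected read intensity at 2e6 photoelectrons
```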


Figure 4.4: Quantum efficiency of the sensor


Figure 4.5: Photons and generated photoelectrons

Table 4.1: Stars used to compare the read intensity to the predicted photoelectron count

Star     Predicted Photoelectrons   Read Intensity (ADU)   Magnitude   Sp. Class
Alhena   3.6640·10^6                1.6725·10^5            2.20        A0
Mebsuta  1.6347·10^6                0.4870·10^5            3.01        G5
Wasat    1.0249·10^6                0.2589·10^5            3.54        F0
k Gem    0.9760·10^6                0.2150·10^5            3.57        G5
λ Gem    1.0238·10^6                0.2443·10^5            3.58        A2
e Gem    0.3521·10^6                0.0546·10^5            4.70        F0
g Gem    0.3023·10^6                0.0415·10^5            4.87        K2
f Gem    0.2857·10^6                0.0295·10^5            5.04        K5
56 Gem   0.2469·10^6                0.0249·10^5            5.09        K2
63 Gem   0.2112·10^6                0.0229·10^5            5.24        F5
26 Gem   0.2128·10^6                0.0229·10^5            5.29        A0
d Gem    0.2167·10^6                0.0204·10^5            5.27        A0
68 Gem   0.2159·10^6                0.0274·10^5            5.27        A2

Figure 4.6: Read intensities vs predicted photoelectron count

Chapter 5

Sensor Characterization

In this chapter information is provided about the noise in the system and the photoelectron-to-ADU conversion factor of the sensor.

Investigating the different noise sources, which were discussed in chapter 2, is important because knowledge of the noise level in the system is needed when determining the threshold level to be used during star detection (the centroiding process). The shot noise does not depend on the sensor, but the readout noise and the thermal noise do. Therefore, if the contributions of the readout and thermal noise are significant, an alternative sensor should be sought.

Knowledge of the photoelectron-to-ADU conversion factor is useful for characterizing the system. Furthermore, the number of generated photoelectrons can be calculated for a given stellar spectrum, exposure time, sensor QE and effective diameter of the optical system. Therefore, the radiation recorded in ADU for a given star can be calculated for the designed star tracker.

5.1 Determining the Threshold

The star tracker detects a potential star when it finds a group of adjacent pixels with values above a given threshold. The threshold value is needed to distinguish the star from the background. Setting the threshold value is extremely important for the performance of the star tracker: if it is set too high, stars with large magnitudes will not be detected, and if it is set too low, too many noisy background pixels will be detected as potential stars and the output of the detection algorithm will be corrupted, or there will be no output at all due to overload.

When choosing a value for the threshold, one has to consider the background signal and the noise level of the background. As the noise σ varies, the threshold has to be set to the sum of the background signal and x times the noise of the background, where x has to be determined so that most background pixels are below the threshold. In [11] a value of 5 for x is recommended. The value of x, however, should differ from star tracker to star tracker, as it depends on the sensor, the chosen FOV and the optics. For this reason a value of 5 might not have been applicable for the designed star tracker. In the end, the threshold was experimentally determined as the minimum one for which the centroiding algorithm works, plus a safety margin to take into account a potential increase in the noise when the tracker is operated in space.

Once the threshold has been experimentally determined, it is usually a good idea to use that knowledge to find x. Knowing x, one can have a dynamic threshold that depends on the background. This was, however, not applicable for the designed star tracker, as information on the background is not provided by the centroiding process performed by the camera FPGA.

For an exposure time of 0.2405 seconds and a threshold of 30 ADU, the star detection algorithm runs without errors and is able to detect stars with a visual magnitude of about 6.5 and higher, depending on the stellar class. The problem with that threshold is that sometimes a single star can be detected as two stars. It should be mentioned that this occurs for stars close to the edge of the FOV, where vignetting is more pronounced. This double detection can be fixed if the threshold is increased to at least 50 ADU, but in that case the maximum detectable magnitude is about 5.7. The erroneous detection is not a major problem, as the star is still detected and the two centroids that correspond to it are located very close to each other, so the pattern recognition software can be modified to recognize them as a single star. Even if it is not able to, in the next frame, which comes in about 0.25 seconds, the star would most likely be detected properly. Furthermore, such double detections occur for dim stars at the edge of the FOV, and such stars are rarely used for pattern recognition. From all of the above it follows that if a high maximum detectable magnitude is required for the normal operation of the star tracker, then the threshold used could be lowered down to 30 ADU. This value may, however, be changed once the effect of radiation on the used sensor has been studied.
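The background-plus-x-sigma rule discussed above, as a one-line helper (a sketch; the flight threshold was fixed experimentally instead):

```python
def detection_threshold(background_adu, background_sigma_adu, x=5.0):
    """Star detection threshold: background level plus x times the background noise.

    x = 5 is the value recommended in [11]; the designed tracker instead uses an
    experimentally determined fixed threshold (30-50 ADU at 0.2405 s exposure).
    """
    return background_adu + x * background_sigma_adu
```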

5.2 Noise

The total noise in the system is the combination of the shot noise expressed in ADU, σ_shot, the readout noise in ADU, σ_readout, and the thermal (dark current) noise, σ_thermal, also expressed in ADU:

\sigma = \sqrt{\sigma_{shot}^2 + \sigma_{readout}^2 + \sigma_{thermal}^2} \qquad (5.1)

The bias frames contain only the bias offset and no other signal, so the readout noise can be determined from them. The noise of every pixel is different, so the noise has been determined per pixel and plotted in a histogram: from 49 bias frames the standard deviation of every pixel has been calculated, and a histogram of the σ values can be seen in figure 5.1. The total noise in a dark frame, σ_df, comes from the readout noise and the thermal noise, so the thermal noise can be expressed as:

\sigma_{thermal} = \sqrt{\sigma_{df}^2 - \sigma_{readout}^2} \qquad (5.2)

The value of σdf for each pixel has been calculated using 49 dark frames and then averaged to find the mean pixel σdf (3.7935 ADU). This mean pixel dark frame noise and the one for the readout (3.5902 ADU) were used in equation (5.2) to find the mean thermal noise - 1.2252 ADU. This time the noise was not calculated for each pixel as the σdf value of some pixels was smaller than their σreadout value, which resulted in complex values for σthermal. This is because of the small value of the dark current and the relatively small number of frames used to determine the noise. A larger number of frames was not used as it becomes too computationally heavy to determine the standard deviation of 5038848 pixels for more frames. The shot noise is equal to the square root of the generated photoelectrons times the photoelectron- to-ADU conversion factor of the sensor. The conversion factor is determined from the Photon Transfer Curve discussed below. The number of generated photoelectrons can be estimated for a given stellar spectrum, exposure time and effective diameter so an estimation of the shot noise is also possible.


Figure 5.1: Histogram of RMSE of each pixel for 49 bias frames

5.3 Photon Transfer Curve

In order to characterize the performance of the chosen sensor, its photon transfer curve (PTC) was determined. A photon transfer curve is a plot of the variance for different mean values of the sensor pixels. It provides data on readout noise, dark current, full well capacity, sensitivity, dynamic range, gain and linearity. The characteristics of the sensor are determined from the sensor output for a given input. An image sensor has photons as input and digital units as output. The only noise at the input is the photon shot noise, which is easily calculated for a given photon flux, so the difference in noise between the input and the output is due to noise in the sensor electronics [12]. In a camera the incoming photons generate photoelectrons, whose charge is converted to a voltage, which, in turn, is amplified and then digitized. The output signal, S_ADU, is equal to the number of received photons, N_photons, divided by a conversion factor J, which indicates how many of the incoming (incident) photons correspond to one ADU. S_ADU can also be expressed as the number of generated photoelectrons, N_e−, divided by a conversion factor K, which indicates how many photoelectrons correspond to one ADU. J and K depend on the sensitivity of the circuitry, S_v (V/e−), the analog gain of the camera, A_1 (V/V), the transfer function of the analog to digital converter, A_2 (ADU/V), and the quantum efficiency [13]. It should be noted that in [13] J is in interacting photons/DN, while here it is in incident photons/DN. The formulas for J

and K can be seen in equation (5.3) and equation (5.4), respectively.

J = (QE \cdot S_v \cdot A_1 \cdot A_2)^{-1}   (5.3)

K = (S_v \cdot A_1 \cdot A_2)^{-1}   (5.4)
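
Equations (5.3) and (5.4) translate directly into code. A small sketch with parameter names following the notation above:

def conversion_factors(qe, s_v, a1, a2):
    """Equations (5.3) and (5.4).
    qe  : quantum efficiency (dimensionless)
    s_v : sensitivity of the circuitry (V per electron)
    a1  : analog gain (V/V)
    a2  : transfer function of the ADC (ADU/V)
    Returns J (incident photons per ADU) and K (photoelectrons per ADU)."""
    k = 1.0 / (s_v * a1 * a2)
    j = k / qe  # J = (QE * Sv * A1 * A2)^-1 = K / QE
    return j, k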

The sensitivity of the circuitry, the analog gain and the transfer function of the converter are constant, but the QE is different for different wavelengths. Therefore, J will be different for different spectra, while K is linearly dependent on the analog gain and constant when the gain is not changed. In chapter 4 it was shown that the generated photoelectrons for every star in the star catalog can be predicted, so K and 1/K provide important information on the correlation between the input signal and the output. The PTC can be used to determine 1/K, denoted by x for simplicity; an explanation of how the PTC provides this information and how to produce a PTC is given below. The method used for constructing the PTC is a modified version of the one given in [14].

First, the photoelectron signal, S_e−, and noise, σ_e−, are determined. The photon shot noise, σ_shot,ph, is equal to the square root of the average number of incoming photons, \sqrt{N_{photons}}. The photoelectron signal is equal to the number of generated electrons, which, in turn, is equal to the number of photons times the QE (S_e− = N_e− = QE · N_photons). The noise in photoelectrons is due to the input photon shot noise and to fluctuations introduced by binomial statistics, since the QE is smaller than 1. The two noises are not correlated, so the variance of the photoelectron signal will be:

\sigma_{e^-}^2 = (\sigma_{shot,ph})_{e^-}^2 + \sigma_{QE}^2   (5.5)

where (σ_shot,ph)²_e− is the variance σ²_shot,ph represented in photoelectrons and σ²_QE is the variance introduced by the binomial fluctuations. (σ_shot,ph)²_e− depends on the QE and the input noise σ_shot,ph, and σ²_QE depends on the QE and the average input signal N_photons:

(\sigma_{shot,ph})_{e^-}^2 = QE^2 \cdot \sigma_{shot,ph}^2 = QE^2 \cdot N_{photons}   (5.6)

\sigma_{QE}^2 = QE \cdot (1 - QE) \cdot N_{photons}   (5.7)

When equations (5.6) and (5.7) are substituted into equation (5.5), one obtains equation (5.8), which shows that the photoelectron variance is equal to the average photoelectron signal. It therefore matches the variance due to Poisson fluctuations, and the noise can be treated as shot noise [15]:

\sigma_{e^-}^2 = QE^2 \cdot N_{photons} + QE \cdot (1 - QE) \cdot N_{photons} = QE \cdot N_{photons} = N_{e^-} = \sigma_{shot,e^-}^2   (5.8)

The output signal in ADU, S_ADU, is equal to the photoelectron signal times x, where x represents how many ADU correspond to one photoelectron. The photoelectron signal is simply the number of generated photoelectrons, N_e−. The noise at the output due to the photoelectron shot noise is equal to σ_shot,e− times x, has units of ADU and is denoted by σ_shot,ADU:

S_{ADU} = x \cdot N_{e^-}   (5.9)

\sigma_{shot,ADU} = x \cdot \sigma_{shot,e^-} = x \cdot \sqrt{N_{e^-}}   (5.10)

If equation (5.10) is squared, equation (5.11), transformed to express N_e−, equation (5.12), and this expression is then substituted into equation (5.9), one obtains equation (5.13), which can be transformed to express the variance, equation (5.14):

\sigma_{shot,ADU}^2 = x^2 \cdot N_{e^-}   (5.11)

N_{e^-} = \frac{\sigma_{shot,ADU}^2}{x^2}   (5.12)

S_{ADU} = \frac{x \cdot \sigma_{shot,ADU}^2}{x^2}   (5.13)

\sigma_{shot,ADU}^2 = x \cdot S_{ADU}   (5.14)

The total noise at the output also involves readout noise, σRO, and dark current noise. The dark current noise is negligible so the total variance is:

\sigma_{ADU}^2 = \sigma_{shot,ADU}^2 + \sigma_{RO}^2 = x \cdot S_{ADU} + \sigma_{RO}^2   (5.15)

The readout noise and the variance associated with it are independent of the signal, so a plot of the total variance, σ²_ADU, for different signal levels, S_ADU, will have an offset equal to σ²_RO and a slope equal to x. This plot is the photon transfer curve. From it, x can be read directly, or equation (5.15) can be transformed to obtain an expression for x (valid only when the offset (bias signal) is subtracted from S_ADU):

x = \frac{\sigma_{ADU}^2 - \sigma_{RO}^2}{S_{ADU}}   (5.16)

If the readout noise is not known, one can make a linear curve fit to the PTC and read the fitted value at the signal corresponding to zero exposure time (which contains only the bias offset and no signal). The fitted value at this point corresponds to the readout noise variance. The full well capacity can also be easily determined from the PTC: when low gain is used, the signal value at which the data curve begins to deviate from the linear fit, divided by the gain used, gives the full well capacity in electrons.
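
The extraction of x and the readout noise from a measured PTC can be sketched as follows; mean_signal and variance are assumed to be arrays holding the per-set mean values (bias included) and variances, and bias_offset is the bias level in ADU:

import numpy as np

def ptc_parameters(mean_signal, variance, bias_offset):
    """Linear fit to the PTC, equation (5.15): variance = x*signal + const.
    Returns the slope x (ADU per photoelectron) and the readout noise in
    ADU, taken from the fitted variance at the bias (zero-exposure) level."""
    x, intercept = np.polyfit(mean_signal, variance, 1)
    sigma_ro = np.sqrt(x * bias_offset + intercept)
    return x, sigma_ro

# The photoelectron-to-ADU factor follows from the slope,
# e.g. 1 / 0.555043 = 1.80166 electrons per ADU for gain 1.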

5.4 Constructing the PTC and Results

5.4.1 Test Setup

The sensor, without any lenses, and its board were mounted in a box. Three of the box walls did not allow any light to pass through them. The fourth wall, towards which the camera was looking, was a sheet of paper illuminated from about 1-2 m by a flashlight; there were no other light sources in the room. Photos of the box when open and when closed can be seen in figure 5.2 and figure 5.3, respectively. No lenses were used in order to avoid vignetting, which would have resulted in non-uniform illumination across the pixels. The box ensured that the received light came only from the flashlight, and the paper sheet provided a uniform light distribution from different angles. Using the described setup, sets of 36 images were taken for different exposure times and gains. The position of the light source and the box were kept the same, so the received signal depends only on the exposure time, and the dependence is linear. It can be seen from figure 5.4 that the dependence is, indeed, linear. It should also be mentioned that the sensor board has LEDs that were affecting the results, so they were covered.

Figure 5.2: Box in which the sensor was mounted

5.4.2 Creation of the PTC

The PTC was determined in two ways. Both follow the logic described in section 5.3, but the noise and variance are determined in slightly different ways. The first method uses all 36 available images per set to determine the noise and variance of every pixel and then averages over the pixels to determine the mean noise and variance for each set. The second method is taken from [12] and calculates the variance and the noise using the following formula:

\sigma^2 = \frac{\sum_{i=1}^{N_p} ((X_{1i} - M_1) - (X_{2i} - M_2))^2}{2 \cdot N_p}   (5.17)

where X_1i is the value of the i-th pixel of the first frame in ADU; X_2i is the value of the i-th pixel of the second frame in ADU; M_1 and M_2 are the means of the pixel values of all pixels in the first and second frame, respectively; and N_p is the number of pixels in the sample. In all of the references used, the variance is determined using just two frames; therefore, the second method will be referred to here as the standard method. It should be noted that in the second (standard) method the variance is calculated using the difference between each pixel and the mean pixel value of the frame, and the second frame is subtracted in order to remove variations of the pixel values due to differences in their sensitivity. Using this method one might expect the noise not to be calculated accurately, because only two images are used and each pixel, due to statistical variations, will contribute to the noise differently from its actual noise. However, this variation is statistical, and because the sensor has more than 5 million pixels, the individual pixel variations cancel each other out. A plot of the PTC determined by both methods and a linear fit to the data from the first method can be seen in figure 5.5. In section 3.2 the bias offset was determined using bias frames and has a value of 167.3602 ADU.
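
A direct transcription of equation (5.17), assuming two identically illuminated frames stored as NumPy arrays:

import numpy as np

def two_frame_variance(frame1, frame2):
    """Equation (5.17): noise variance from a pair of frames.
    Subtracting the mean-corrected second frame removes the fixed
    pixel-to-pixel sensitivity differences; the factor of 2 accounts
    for the doubled variance of the difference of two frames."""
    diff = (frame1 - frame1.mean()) - (frame2 - frame2.mean())
    return np.sum(diff ** 2) / (2 * diff.size)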

Figure 5.3: Box and light source used in the setup

When the value of the linear fit at a signal level of 167 ADU is taken and then square-rooted, an approximation of the readout noise is obtained. The readout noise approximated this way from the PTC is 4.16257 ADU for the first method and 4.98561 ADU for the standard one, while the value determined using the bias frames is 3.59016 ADU. The readout noise determined with the bias frames provides the most accurate information. The noise from the first PTC method is closer to the bias value, so its PTC is assumed to give more accurate information and will be used for further analyses. It should be mentioned that the data for the curve fit come from 36 images, while those for the bias come from 49, so the bias-calculated noise is expected to be slightly smaller. The slope of the linear fit to the PTC for the first method is 0.555043, which means that 1.80166 electrons correspond to 1 ADU. The slope for the standard method is 0.551841 DN/e−. For a gain of 2, the slopes for the two methods are 1.12853 and 1.1384, respectively, and for a gain of 4 they are 2.20862 and 2.23122. Looking at the slope values it becomes clear that the slope is linearly dependent on the gain, as one would expect from equation (5.4). The readout noise determined using the PTC is also dependent on the gain: for the first method it is 4.16257 ADU for a gain of 1, 4.55828 ADU for a gain of 2 and 7.6706 ADU for a gain of 4. The offset value was subtracted from the mean image value in order to obtain the mean signal received by every pixel in ADU. The dark current effect on the mean pixel value is negligible, so it was not taken into account. The mean signal value was used to produce plots of σ and the SNR for different mean signal values, figure 5.6 and figure 5.7, respectively. If the minimum SNR for which the star detection algorithm works is known, the maximum detectable stellar magnitude can be determined: using the slope of the PTC, the recorded signal in ADU can be expressed in photoelectrons, which in turn are related to the incoming photon flux and thus to the stellar magnitude.


Figure 5.4: Mean value of the gain 1 image set for different exposure time


Figure 5.5: PTC determined by both methods and linear fit to the data from the first method


Figure 5.6: σ for different mean signal values per pixel. The plot is for analog gain of 1


Figure 5.7: SNR for different mean signal values per pixel. The plot is for analog gain of 1

Chapter 6

Optomechanical Design

This chapter focuses on the design of the body-mounted star tracker for the SEAM satellite. It also goes into detail about the baffle design of the star tracker and provides the theory needed to understand the design choices and the methods used.

6.1 Definitions

The clear diameters of the lenses and diaphragms in an optical system are the system apertures. One of these apertures determines the cone of energy accepted by the system from a point of the observed object that lies on the optical axis. This aperture determines the illumination received at the image and is called the aperture stop. The aperture that determines the angular size of the object that the system will image is called the field stop. The image of the aperture stop as viewed from a point on the object that is also on the optical axis is the entrance pupil. The exit pupil is the image of the aperture stop as viewed from the image plane. The location and size of the entrance and exit pupils are important because they determine the amount of radiation that is accepted and emitted by the system [16]. The positions of the entrance and exit pupils are determined by the principal ray, and their sizes by the marginal axial ray. The principal ray is a ray that starts at the edge of the object and crosses the optical axis at the entrance pupil, the aperture stop and the exit pupil. Therefore, once the aperture stop has been determined and its size is known, the principal ray can be traced to find the entrance and exit pupils. The marginal axial ray starts from a point of the object that is on the optical axis and passes through the edge of the aperture stop. The distances from this ray to the optical axis in the planes of the entrance and exit pupils determine the size of the respective pupil.

The position of the entrance pupil and its diameter (effective diameter) were taken from the lens assembly datasheet [17], and were used to calculate the FOV cone which, in turn, was used when designing the baffle.

6.2 Baffle

In the ideal case the star tracker receives light only from sources within its FOV; however, due to reflections, radiation is also received from sources outside the FOV. Baffles are used to limit this unwanted radiation. A baffle is a mechanical system that forces light from outside the FOV to undergo multiple reflections on its surfaces before reaching the optical system, thus minimizing the intensity of light from unwanted sources. The baffle consists of a tube with vanes on its internal walls; the vanes ensure that the light has to make multiple reflections before reaching the optical system [18].

6.2.1 Critical Surfaces

The surfaces that can be "seen" from the detector are very important because they contribute directly to the detected power. Such surfaces are called critical surfaces, and they are easily determined by visualizing what one would see when looking out of the system from the image. The light that reaches such surfaces should be minimized as much as possible, or the surfaces should be removed from the view of the detector [19]. Because the baffle in this thesis work is placed before the objective lenses, the critical surfaces here are the ones "seen" by the first lens of the objective, not by the detector.

6.2.2 Power Transferred

The equation giving the power transferred from one section to another is:

d\Phi_c = L_s(\Theta_o, \Psi_o) \cdot dA_s \cdot \frac{\cos(\Theta_s) \cdot dA_c \cdot \cos(\Theta_c)}{R_{sc}^2}   (6.1)

where dΦ_c is the differential power transferred; L_s(Θ_o, Ψ_o) is the bidirectional radiance of the source section; dA_s and dA_c are the elemental areas of the source and collector, respectively; and Θ_s and Θ_c are the angles that the line of sight from the source to the collector makes with the source and collector normals, respectively [19]. Equation (6.1) can be rewritten as:

d\Phi_c = BRDF(\Theta_i, \Psi_i; \Theta_o, \Psi_o) \cdot d\Phi_s(\Theta_i, \Psi_i) \cdot d\Omega_{sc}   (6.2)

where dΦ_s(Θ_i, Ψ_i) is the incident power; BRDF(Θ_i, Ψ_i; Θ_o, Ψ_o) is the bidirectional reflection distribution function, which is determined by the surface characteristics; and dΩ_sc is the projected solid angle from the source to the collector [19]. Reducing any of the three terms in equation (6.2) to zero leads to no power being transferred between the source and the collector. Minimizing the power transfer between all surfaces simultaneously is impossible, so one starts by minimizing the most important contributions - those between the critical surfaces and the objective. dΦ_s(Θ_i, Ψ_i) is minimized by reducing the power incident on the critical surfaces, and BRDF(Θ_i, Ψ_i; Θ_o, Ψ_o) is minimized by choosing an appropriate coating or paint. The last term, dΩ_sc, is the easiest to reduce to zero and is given by the expression:

d\Omega_{sc} = \frac{\cos(\Theta_s) \cdot dA_c \cdot \cos(\Theta_c)}{R_{sc}^2}   (6.3)

Looking at equation (6.3), the obvious ways of reducing dΩ_sc are increasing R_sc, Θ_s or Θ_c, or reducing dA_c. What is not apparent is that the value of dΩ_sc is limited by apertures and obstructions. The approach should therefore be to block as many direct paths from unwanted sources to the objective as possible, and then to minimize dΩ_sc for the remaining paths [19].
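
Equation (6.3) in code form can be convenient for quickly comparing candidate geometries. A minimal sketch, assuming angles in radians and consistent length units:

import math

def projected_solid_angle(theta_s, theta_c, d_area_c, r_sc):
    """Equation (6.3): projected solid angle from a source element to a
    collector element of area d_area_c at distance r_sc; theta_s and
    theta_c are the angles to the source and collector normals."""
    return math.cos(theta_s) * d_area_c * math.cos(theta_c) / r_sc ** 2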

6.2.3 Vane Placement

Sources outside of the FOV are blocked by the entrance aperture of the baffle (outermost vane), and critical surfaces are removed by placing the vanes using the logic described in [16] (here the logic is adapted for a baffle placed before the objective, while in [16] it is for a baffle between the lens and the detector). The procedure for placing the vanes is illustrated in figure 6.1 and is as follows. The two symmetrical lines from the first lens of the objective (the detector in [16]) show the FOV, so the vanes should not intrude into them, as that would limit the radiation from wanted sources. The baffle walls can be assumed to be critical surfaces, as almost the whole wall area can be "seen" from the objective lens. The dashed line AA′ is then the "line of sight" from the edge of the first lens of the objective to the point where the critical area begins. By placing a vane at the intersection of AA′ with the line indicating the FOV, the detector cannot "see" the baffle wall from point A′ to the first vane. Line BB′, from the edge of the baffle aperture to the baffle wall, shows the area of the baffle wall that is shadowed by the first vane and thus cannot directly receive radiation from unwanted sources. This is the area from the first vane to point B′, and it is "safe" for the objective to "see". The dashed line AB′ is thus a "safe" line of sight, and placing a vane at the intersection of AB′ with the FOV line will prevent the objective from directly receiving reflected radiation from the wall section between B′ and this vane. Line BC′ is drawn from the edge of the baffle aperture through the edge of the last vane to find the point (C′) up to which shadowing is provided by the second vane. The procedure is repeated until the entire wall is protected. Note that vane 3 shields the baffle wall to point D, which is behind the detector/objective, so the vanes can be shifted forward. Being able to move the vanes is important, as they can then be manufactured slightly smaller without noticeably influencing the performance of the baffle. This increases the manufacturing tolerances; if the vanes had to be manufactured to the exact determined height, the manufacturing cost would increase significantly.

After the vanes have been positioned using the described logic, the only critical surfaces left are the edges of the vanes. The vane edges cannot be removed from the view of the objective, so the number of vanes is kept as small as possible while still preventing the objective from "seeing" the baffle walls. For the trade-offs between reducing the vane spacing (which introduces more secondary reflections) and increasing it (which reduces the number of vane edges), the reader is referred to [19], where information is given about edge scatter versus secondary vane scatter as the spacing between the vanes is changed.

The described method for determining the position and height of the vanes can be time-consuming if many iterations of the baffle design are needed. Therefore, equations were needed for the calculation of the vane placement and height. Equations are available, for instance the ones mentioned in [18]; however, not all of the variables used are explained, and information about the outermost and first vane is required to use them. For this reason, new equations for the placement and height of the vanes were derived using the logic described before. Figure 6.2 illustrates the derivation, which is given below. First, the half-width, y_0, and the height, h_0, of the outermost vane are found:

y_0 = \tan(\beta/2) \cdot s + a   (6.4)

h_0 = r - y_0   (6.5)


Figure 6.1: Placing vanes in a baffle

where β is the half angle of the FOV, s is the distance from the objective lens to the outermost vane, a is the effective radius of the lens, and r is the distance from the optical axis to the baffle wall. Then a system of equations for the n-th vane is derived and solved for the position, x_n, and height, h_n, of that vane. To simplify the expressions, the distance from the outer edge to the point up to which the previous vane edge provides shadowing, z_n, is also found. The angle ψ_n is the angle at which the "safe to see" part of the baffle wall is seen from the edge of the outermost vane at the opposite side of the baffle. ψ_n is expressed in two ways, and the equations are then combined to find the unknown z_n:

\tan(\psi_n) = \frac{x_{n-1}}{y_0 + r - h_{n-1}}, \qquad \tan(\psi_n) = \frac{z_n}{y_0 + r}

z_n = \frac{x_{n-1} \cdot (y_0 + r)}{y_0 + r - h_{n-1}}   (6.6)

Using similar logic, two equations for α_n and one for β are derived and then solved for the position and height of the n-th vane. α_n is the angle formed by the baffle wall and the line connecting the last shadowed point and the furthermost point of the lens.


Figure 6.2: Derivation of the equations for determining vane position and height

\tan(\alpha_n) = \frac{h_n}{x_n - z_n}   (6.7)

\tan(\alpha_n) = \frac{r + a}{s - z_n}   (6.8)

\tan(\beta) = \frac{h_n - h_{n-1}}{x_n - x_{n-1}} \;\Rightarrow\; x_n = \frac{h_n - h_{n-1}}{\tan(\beta)} + x_{n-1}   (6.9)

Equation (6.7) and equation (6.8) are combined to give:

h_n = \frac{(x_n - z_n) \cdot (r + a)}{s - z_n}   (6.10)

Then equation (6.9) is substituted into equation (6.10) to obtain an equation for h_n:

h_n = \frac{(h_n - h_{n-1} + x_{n-1} \cdot \tan(\beta) - z_n \cdot \tan(\beta)) \cdot (r + a)}{(s - z_n) \cdot \tan(\beta)}

h_n \cdot (s - z_n) \cdot \tan(\beta) - h_n \cdot (r + a) = -(h_{n-1} - x_{n-1} \cdot \tan(\beta) + z_n \cdot \tan(\beta)) \cdot (r + a)

h_n = \frac{(h_{n-1} - x_{n-1} \cdot \tan(\beta) + z_n \cdot \tan(\beta)) \cdot (r + a)}{r + a - (s - z_n) \cdot \tan(\beta)}   (6.11)

The height of the outermost vane is given by equation (6.5), and z_0, z_1 and x_0 are equal to zero, so the required input for the first vane is available. After the position and height of the first vane are calculated, the second vane can be determined, then the third, and so on. The process is repeated until x_n becomes larger than s. Using these equations, the calculated vanes end exactly at the FOV cone. This could be problematic: if the vanes are manufactured shorter, the objective lens will "see" directly illuminated surfaces, and if they are longer, the vanes will obscure the FOV. For this reason, in [20] safety margins of 1.5° and 0.25° are added to the FOV when calculating the vanes and the outermost vane, respectively. This is a good idea, but the safety margins are small for vanes close to the objective lens, so the manufacturing demands would be high for those vanes. To solve this problem, the safety margins chosen here are not angular but linear. To introduce the margins, the lens radius is considered bigger by a certain offset when calculating the cone of the FOV. This way the cone of the FOV has a radius one offset value bigger than the original FOV cone, and this difference stays constant over the length of the baffle. This bigger FOV cone is used instead of the smaller one, but for everything else the normal dimensions of the baffle and lens are used. When the positions and heights of the vanes have been determined, a value of half the offset is added to the vane heights. Such a baffle can be seen in figure 6.3. Using a bigger FOV cone ensures that the vanes do not obscure the actual FOV, and increasing the height by half the offset prevents the lens from "seeing" directly illuminated surfaces in case the vanes are manufactured slightly shorter. Even after the increase, the vane heights are still half an offset away from the actual FOV, so no extra vignetting is introduced. Adding this offset introduces only a very small change in the equations. The only affected equation is the one for the half-width of the entrance vane, y_0, which becomes:

y_0 = \tan(\beta/2) \cdot s + a + \frac{\mathrm{offset}}{2}   (6.12)
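
The recursion defined by equations (6.5), (6.6), (6.9) and (6.11), together with the modified entrance half-width of equation (6.12), is straightforward to implement. The thesis implementation was written in Matlab; the following Python sketch restates the same logic and is illustrative only: β is taken as the half angle of the FOV (the tan(β/2) of equations (6.4) and (6.12) is read with β as the full FOV angle), positions are measured from the outermost vane, and half the offset is added to the returned heights as described above. The commented call uses the values quoted in section 6.3, but exact outputs depend on the conventions of the original code, so the sketch should not be expected to reproduce table 6.1 digit for digit.

import math

def vane_layout(s, r, a, beta_deg, offset, baffle_length):
    """Vane positions and heights from equations (6.5), (6.6), (6.9),
    (6.11) and (6.12). s: entrance pupil to outermost vane distance,
    r: optical axis to baffle wall, a: effective lens radius,
    beta_deg: FOV half angle in degrees; all lengths in mm."""
    t = math.tan(math.radians(beta_deg))   # slope of the FOV clearance line
    y0 = t * s + a + offset / 2.0          # eq. (6.12): entrance half-width
    h0 = r - y0                            # eq. (6.5): outermost vane height
    vanes = []
    x_prev, h_prev = 0.0, h0               # x0 = 0; start at the outermost vane
    while True:
        # eq. (6.6): point up to which the previous vane provides shadowing
        z = x_prev * (y0 + r) / (y0 + r - h_prev)
        # eq. (6.11): height of the next vane
        h = (h_prev - x_prev * t + z * t) * (r + a) / (r + a - (s - z) * t)
        # eq. (6.9): position of the next vane
        x = (h - h_prev) / t + x_prev
        if x > baffle_length:              # vane would fall outside the baffle
            break
        vanes.append((x, h + offset / 2.0))  # half the offset added to the height
        x_prev, h_prev = x, h
    return vanes

# Illustrative call with the values quoted in section 6.3:
# vane_layout(s=68.774, r=37.6, a=10.9344, beta_deg=4.655,
#             offset=0.5, baffle_length=41.374)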

6.2.4 Vane Edges

Vanes are usually designed with some angle of bevel to minimize edge scatter. The bevel edge can be placed either on the object side of the vane (figure 6.4 (a)) or on the objective side (figure 6.4 (b)). The optimal placement depends on the position of the vane. When the bevel edge is on the object side, the radiation striking it will scatter to angles of up to 110° (90° + 20°). If the bevel edge is on the objective side, the scatter from the bevel is limited to 90° for light incoming at angles smaller than 70°. Therefore, the bevel edge is placed on the object side of the vanes close to the aperture, because for those vanes the incoming radiation comes from a large range of angles. For vanes deeper in the baffle, the bevel edge is on the objective side, because the direct radiation comes in at smaller angles and cannot reach the bevel edge. Hence, it is reflected to angles of up to 90 degrees and cannot reach the objective lens [19] (the round edge of the vane will still scatter light to greater angles, but its area is significantly smaller, and only a very small portion of the scattered radiation will go towards the objective). Furthermore, positioning the bevel edges in this way reduces the vane edge surface of the front vanes seen by the objective, and the vane edge surface of the deeper vanes seen by the radiation sources.


Figure 6.3: Baffle vanes designed with tolerance margins (n=2 on the drawing)

6.3 Body Star Tracker

The body-mounted star tracker consists of a lens assembly, a C mount for the assembly, a headerboard with a CMOS sensor for recording the stellar intensities, an electronics PCB for processing the recorded data to determine the attitude, a support plate with baffle, and two holders for securing the lens assembly and the C mount. The design of the support plate with the baffle and the holders, as well as the arrangement of all the components, was part of this thesis work. The lens assembly, a Fujinon HF35HA-1 [17], was ordered; the C mount was already available from another KTH project; the headerboard was developed by Marcus Lindh [21]; the sensor used is the 1/2.5-inch 5Mp CMOS digital image sensor MT9P031 [22]; and the electronics PCB was yet to be designed at the time of writing of this thesis. The baffle of the star tracker was designed using the method described in section 6.2.3. The space available for the star tracker was optimized to leave as much space as possible for the baffle length.


Figure 6.4: Bevel edge on the object side (a) and on the objective side (b)

This resulted in 41.374 mm being available for the baffle length. The inner baffle surface consists of four flat walls instead of a cylinder, so that all of the available space can be used as baffle depth. The distances from the optical axis to the two pairs of inner walls are 40 and 37.6 mm, and the smaller one is used for the calculations of the baffle vanes, r = 37.6 mm. The entrance pupil of the lens assembly was used instead of the objective lens in the procedure from section 6.2.3 for calculating the vane positions and heights. The distance from the entrance pupil to the baffle entrance aperture (outermost vane), s, is 68.774 mm. The available baffle length, as already mentioned, is 41.374 mm, so vanes with a calculated position bigger than that value are discarded. The lens assembly prevents the entrance pupil from "seeing" walls that can be directly illuminated, so those vanes can safely be discarded. The offset value used in equation (6.12) is 0.5 mm. The effective radius, a, was taken from the datasheet and is 10.9344 mm. The FOV was calculated using the formula:

\Theta = 2 \cdot \arctan\left(\frac{Y'}{2 \cdot f}\right)   (6.13)

where Θ is the angle of view, Y' is the image size and f is the focal length. The focal length is 35 mm and the image sensor dimensions are 5.7 x 4.28 mm. This gives a rectangular FOV of 9.31 x 7 degrees. The 9.31° angle was used when calculating the vane placement and height, as using the smaller one would introduce vignetting. This means that the half angle of the FOV, β, is 4.655 degrees. Matlab code was written for the described method of vane determination and run with the mentioned input. The results are given in table 6.1.

Table 6.1: Vane position and height

Vane                                       1         2         3         4
Position from the outermost vane [mm]   9.1999   15.9649   23.8586   32.9168
Vane height [mm]                         2.7649    3.3157    3.9584    4.6960
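
As a quick arithmetic check of equation (6.13) and the quoted FOV:

import math

# Equation (6.13) applied to the 5.7 x 4.28 mm sensor and f = 35 mm:
fov_x = 2 * math.degrees(math.atan(5.7 / (2 * 35.0)))   # about 9.31 degrees
fov_y = 2 * math.degrees(math.atan(4.28 / (2 * 35.0)))  # about 7.00 degrees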

It was already mentioned that the FOV is rectangular. This means that a conical baffle will have vane-opening areas larger than the areas actually visible from the system. The area visible from the system at a particular distance is the area swept by a rectangle, characterized by the FOV, around a circle with a diameter equal to that of the entrance pupil, plus the non-overlapping area of the circle. The mentioned rectangle has sides

equal to the product of the tangent of the respective half FOV angle, β (4.655 and 3.5 degrees for the compound lens used), and the distance from the entrance pupil, d: side = tan(β) · d. A simple illustration of the rectangle and the entrance pupil circle is given in figure 6.5. The swept area is easy to see from the actual vanes shown in figure 6.6.

Figure 6.5: Rectangle and entrance pupil circle used to find the area viewed by the system at certain distance

Figure 6.6: The star tracker viewed from the front, showing the area viewed by the system at the distance of the vanes

When the vanes were being designed, half of the previously mentioned offset value was added to the side of the rectangle to include a safety margin. This way the vane height, at points where the vane opening is largest, is the same as that in table 6.1.

6.3.1 Star Tracker Configuration

The whole star tracker can be seen in figure 6.7 and figure 6.8. The part in black is the support plate with the baffle. It will be milled from a single piece of aluminum in order to reduce the number of elements and thus the complexity of the star tracker. The big and the small green

parts are the electronics PCB and the headerboard, respectively. The dark grey part is the lens assembly, the light grey one is the C mount, and the two parts above them are the holders used to secure them. The top (-X) and bottom (X) sides of the baffle will be covered by a metal sheet. Walls of the baffle on these two sides were not included, in order to make the milling of the baffle vanes possible.

Figure 6.7: Design of the star tracker

The star tracker has an envelope with dimensions of 94 x 52.6 x 40 mm. The length, 94 mm, is the maximum available length for tracker placement in the satellite; 100% of the available length was utilized in order to have as good baffling as possible. The width, 52.6 mm, was set by the required baffle width plus the baffle walls and the electronics PCB with its mounting. The height of 40 mm was set by the C mount and the headerboard; a cutout in the support plate was made for them so that the height could be kept minimal. The star tracker is attached to the satellite using four M2.5 screws, screwed into the support plate and the baffle. The electronics PCB is attached with four M1.8 screws, also to the support plate and the baffle; the two holders are likewise tightened with M1.8 screws; and the headerboard is attached to the C mount with M2 screws. The support for the lens assembly and the C mount is V-shaped (figure 6.9), so when the upper holders are tightened, the normal force centers the C mount and the lens assembly.

Figure 6.8: Design of the star tracker, viewed from the rear

Figure 6.9: V shape of the lens and mount support

Chapter 7

Conclusion

7.1 Work Done, Conclusions and Recommendations for Future Work

A method for estimating the photoelectrons to be generated in a given sensor by a known star has been developed. The results obtained from the method have been compared with actual measurements to verify its accuracy. Furthermore, the star catalog has been updated to include the photoelectrons to be generated for each star. The number of photoelectrons is important as it is directly related to the stellar brightness as "seen" by the star tracker. The sensor and the optics of the body-mounted star tracker have been characterized, and a method for calibrating the taken images has been developed. Star images were taken, calibrated and processed with the centroiding algorithm [21] to evaluate the performance of the star tracker. Based on the results, it has been concluded that single values can be used instead of the bias and thermal calibration frames, and that a second degree polynomial can be used instead of the flat frame. Using those simplifications, the requirements on the star tracker hardware can be greatly reduced. It was also recommended that, if possible, flat frame correction should be applied after centroiding. This way a lower threshold level can be used, and thus higher magnitude stars will be detectable. The mean value of the bias was found to be 167.3595 ADU, with a standard deviation among the pixels of 1.3280 ADU. The value of the number replacing the thermal frame is negligible (0.302323 ADU for 1 s of exposure), so that correction can be skipped; however, the most pronounced hot pixels of the actual sensor to be used have to be determined, and a separate correction for them should be implemented. It was determined that if an exposure time of 0.2405 seconds and a threshold of 30 ADU are used, the maximum detectable visual magnitude is about 6.7 (this value changes depending on the stellar class of the star). The detectable magnitude can be further increased by decreasing the threshold, but this will increase the chance of erroneous centroiding due to the reduced difference between the threshold level and the background. It is recommended that the possibility of implementing a dynamic threshold be investigated. If information about the image background cannot be acquired, the threshold could be changed in accordance with the number of stars detected: if not enough stars are detected, the threshold can be decreased, down to about 18 ADU. In cases of increased stray radiation, for instance from the sun, the background level will also be greater, and the centroiding may fail due to too many objects, treated as stars, being

detected. This could also be solved by changing the threshold, but this time by increasing it. The body-mounted star tracker has been designed in NX (CAD software). The envelope dimensions of the designed star tracker are 94 x 52.6 x 40 mm and its mass is 190 g. Special attention was paid to the baffle design, where an improvement to one of the commonly used methods for vane placement has been made. Single-lens optical systems were also designed and manufactured. Some test images were taken with them, but due to insufficient time they were not analyzed and are therefore not included in the thesis report. The single-lens optical systems were of interest for the boom-mounted star tracker, but its dimensions were not known at the time of the thesis work, so the single-lens systems had a reduced priority. Future work related to this thesis will include implementing the calibration procedure in the star tracker; determining the hot pixels of the actual sensor to be used and introducing corrections for them; and designing the boom-mounted star tracker once the available dimensions are known. Furthermore, bevel edges have to be added to the vanes, and an absorptive coating has to be chosen.

7.2 Environmental, Social and Ethical Aspects

Star trackers considerably improve the attitude determination of a S/C. With better attitude determination one can also achieve better pointing, which means that pointing-dependent instruments do not have to be as complex and heavy, and that more focused signals can be used for telemetry. Having a focused signal for telemetry is important, as the power requirements for the transmission are reduced, which, in turn, leads to smaller solar panels and batteries, and thus reduced mass. Therefore, star trackers have an indirect positive environmental influence by reducing the S/C launch mass and thus the propellant required for the launch. Nowadays much Earth-related research is carried out from space. All of the instruments involved in this type of research require good attitude knowledge and pointing accuracy. The goals of Earth-related research vary greatly, but most aim to improve life on Earth. The implementation of star trackers leads to better research results, thus improving the quality of life on Earth. Star trackers provide accurate attitude information, which makes them applicable in many fields other than the intended one. Military applications also seem possible, especially for missiles. However, their use for missiles is limited, because star trackers are only reliable in space. In the atmosphere of Earth, reflected sunlight or cloudy weather hinders the accurate detection of stars and thus the performance of the star tracker.

References

[1] GomSpace. Preliminary Design Report: Small Explorer for Advanced Missions. Technical report, 2014.

[2] F. Landis Markley and John L. Crassidis. Fundamentals of Spacecraft Attitude Determination and Control. Springer, 2014.

[3] Allan R. Eisenman, Carl C. Liebe, and John L. Joergensen. New generation of autonomous star trackers. In Aerospace Remote Sensing '97, pages 524-535. International Society for Optics and Photonics, 1997.

[4] Allan R. Eisenman and Carl Christian Liebe. The advancing state-of-the-art in second generation star trackers. In Aerospace Conference, 1998 IEEE, volume 1, pages 111-118. IEEE, 1998.

[5] Richard Berry and James Burnell. The Handbook of Astronomical Image Processing. Willmann-Bell, Inc., 2005.

[6] A. J. Pickles. Pickles atlas, a stellar spectral flux library. http://www.stsci.edu/hst/observatory/crds/pickles_atlas.html.

[7] SKYMAP Requirements, Functional, and Mathematical Specifications.

[8] C. W. Allen. Astrophysical Quantities. The Athlone Press, 1976.

[9] Johnson's V filter. http://obswww.unige.ch/gcpd/filters/fil01.html.

[10] M. S. Bessell. UBVRI passbands. Publications of the Astronomical Society of the Pacific, 102:1181-1199, October 1990.

[11] Carl Christian Liebe. Accuracy performance of star trackers - a tutorial. IEEE Transactions on Aerospace and Electronic Systems, 38(2):587-599, 2002.

[12] David Gardner. Characterizing digital cameras with the photon transfer curve. Summit Imaging (undated document supplied by Jake Beverage).

[13] James R. Janesick, Kenneth P. Klaasen, and Tom Elliott. Charge-coupled-device charge-collection efficiency and the photon-transfer technique. Optical Engineering, 26(10):260972, 1987.

[14] Carl Christian Liebe, Edwin W. Dennison, Bruce Hancock, Robert C. Stirbl, and Bedabrata Pain. Active pixel sensor (APS) based star tracker. In Aerospace Conference, 1998 IEEE, volume 1, pages 119-127. IEEE, 1998.

[15] Giovanni Zanella. DQE as quantum efficiency of imaging detectors. arXiv preprint physics/0211112, 2002.

[16] W. J. Smith. Modern Optical Engineering. McGraw-Hill, 1990.

[17] FUJIFILM. FUJINON CCTV LENS.

[18] Lucimara C. N. Scaduto, Érica G. Carvalho, Lucas F. Santos, Fátima M. M. Yasuoka, Mário A. Stefani, and Jarbas C. Castro. Baffle design and analysis of stray-light in multispectral camera of a Brazilian satellite. Annals of Optics, XXIX ENFMC, 2006.

[19] Robert P. Breault. Problems and techniques in stray radiation suppression. In 1977 SPIE/SPSE Technical Symposium East, pages 2-23. International Society for Optics and Photonics, 1977.

[20] Young Sun Lee, Yong Ha Kim, Yu Yi, and Jhoon Kim. A baffle design for an airglow photometer onboard the Korea Sounding Rocket-III. Journal of Korean Astronomical Society, 33:165-172, 2000.

[21] Marcus Lindh. Development and implementation of star tracker electronics. Master's thesis, Kungliga Tekniska Högskolan, 2014.

[22] Aptina Imaging. MT9P031 CMOS Digital Image Sensor Data Sheet (Rev.F).
