The Applicability of a Mathematical Morphology Algorithm for Tropical Cyclone Eye and Water Body Boundary Extraction in SAR Data

By Ke Wang

A thesis in fulfilment of the requirements for the degree of Master of Engineering

Surveying and Geospatial Engineering
School of Civil & Environmental Engineering
Faculty of Engineering
The University of New South Wales

September 2014

THE UNIVERSITY OF NEW SOUTH WALES
Thesis/Dissertation Sheet

Surname or Family name: Wang

First name: Ke Other name/s: Isabella

Abbreviation for degree as given in the University calendar: ME (Master of Engineering)

School: School of Civil & Environmental Engineering (CVEN) Faculty: Faculty of Engineering

Title: The applicability of mathematical morphology algorithm for tropical cyclone eye and water body boundary extraction in SAR data

Abstract 350 words maximum: (PLEASE TYPE)

Tropical cyclones (TC) and flooding are devastating natural disasters of global scale. Synthetic aperture radar (SAR) satellite images offer the unique capability of cloud penetration, deriving information from the signal response of sea/ground surface backscatter. The availability of high spatial resolution SAR satellite imagery shows potential for new meteorological and environmental applications. This thesis presents two major case studies detailing efficient approaches to TC eye extraction and water body detection, using data from spaceborne Radarsat-1, Envisat ASAR and airborne Interferometric Synthetic Aperture Radar (IFSAR).

In the case study of TCs, using the relationship between the normalized radar cross section (NRCS) and the backscatter arising from sea surface roughness, SAR images enable the measurement of the areas of TC eyes as an identifiable result. The size and shape of TC eyes strongly correspond with the cyclone's evolution and strength, so introducing mathematical morphology methods can play a vital role in monitoring and forecasting the behaviour of TCs. Skeleton pruning based on discrete curve evolution (DCE) was used to ensure global and local preservation of the TC eye shape, by reducing redundant skeletons caused by speckle noise along the edges of the TC eye. These morphology-based analyses have been applied to six representative ocean SAR images with different TC patterns. The results demonstrate a high degree of agreement with the area of coverage derived from reference data based on NOAA's manual analysis.

The second case study, for water body detection, involves pattern recognition with respect to segmentation. The morphological watershed algorithm is applicable to image segmentation, albeit excessively sensitive to speckle noise in SAR images, leading to over-segmentation and thus a reduction in efficiency. This thesis presents a novel approach for water body extraction from SAR images by applying a marker-controlled watershed combined with the top-hat transformation. The purpose of this case study is to improve the efficiency of watershed techniques compared with Canny edge detection results for fine resolution IFSAR images, and to yield an intuitive and well segmented image for mapping of water body boundaries.

Declaration relating to disposition of project thesis/dissertation

I hereby grant to the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or in part in the University libraries in all forms of media, now or hereafter known, subject to the provisions of the Copyright Act 1968. I retain all property rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.

I also authorise University Microfilms to use the 350 word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral theses only).

Signature ........................ Date ........................

The University recognises that there may be exceptional circumstances requiring restrictions on copying or conditions on use. Requests for restriction for a period of up to 2 years must be made in writing. Requests for a longer period of restriction may be considered in exceptional circumstances and require the approval of the Dean of Graduate Research.

FOR OFFICE USE ONLY Date of completion of requirements for Award:

THIS SHEET IS TO BE GLUED TO THE INSIDE FRONT COVER OF THE THESIS

COPYRIGHT STATEMENT

'I hereby grant the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or part in the University libraries in all forms of media, now or hereafter known, subject to the provisions of the Copyright Act 1968. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation. I also authorise University Microfilms to use the 350 word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral theses only). I have either used no substantial portions of copyright material in my thesis or I have obtained permission to use copyright material; where permission has not been granted I have applied/will apply for a partial restriction of the digital copy of my thesis or dissertation.'

Signed ........................

Date ........................

AUTHENTICITY STATEMENT

'I certify that the Library deposit digital copy is a direct equivalent of the final officially approved version of my thesis. No emendation of content has occurred and if there are any minor variations in formatting, they are the result of the conversion to digital format.'

Signed ........................

Date ........................

ORIGINALITY STATEMENT

'I hereby declare that this submission is my own work and to the best of my knowledge it contains no materials previously published or written by another person, or substantial proportions of material which have been accepted for the award of any other degree or diploma at UNSW or any other educational institution, except where due acknowledgement is made in the thesis. Any contribution made to the research by others, with whom I have worked at UNSW or elsewhere, is explicitly acknowledged in the thesis. I also declare that the intellectual content of this thesis is the product of my own work, except to the extent that assistance from others in the project's design and conception or in style, presentation and linguistic expression is acknowledged.'

Signed ........................

Date ........................

Abstract

Tropical cyclones (TC) and flooding are devastating natural disasters of global scale.

Synthetic aperture radar (SAR) satellite images offer the unique capability of cloud penetration, deriving information from the signal response of sea/ground surface backscatter. The availability of high spatial resolution SAR satellite imagery shows potential for new meteorological and environmental applications. This thesis presents two major case studies detailing efficient approaches to TC eye extraction and water body detection, using data from spaceborne Radarsat-1, Envisat ASAR and airborne Interferometric Synthetic Aperture Radar (IFSAR).

In the case study of TCs, using the relationship between the normalized radar cross section (NRCS) and the backscatter arising from sea surface roughness, SAR images enable the measurement of the areas of TC eyes as an identifiable result. The size and shape of TC eyes strongly correspond with the cyclone's evolution and strength, so introducing mathematical morphology methods can play a vital role in monitoring and forecasting the behaviour of TCs. Skeleton pruning based on discrete curve evolution (DCE) was used to ensure global and local preservation of the TC eye shape, by reducing redundant skeletons caused by speckle noise along the edges of the TC eye. These morphology-based analyses have been applied to six representative ocean SAR images with different TC patterns. The results demonstrate a high degree of agreement with the area of coverage derived from reference data based on NOAA's manual analysis.
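The morphological idea summarised above (filter speckle, threshold, clean the binary mask with opening and closing, then measure the eye area) can be sketched in a few lines. The snippet below is a minimal illustration only, not the thesis pipeline: it omits the UAF speckle filter, image enhancement and the DCE-based skeleton pruning, uses a simple fixed threshold, and runs on an invented synthetic scene.

```python
import numpy as np
from scipy import ndimage

def extract_eye_area(img, threshold):
    """Segment the dark (low-backscatter) TC eye region and return its pixel area.

    Illustrative sketch only: a fixed threshold stands in for the thesis's
    enhancement step, and skeletonization/pruning are omitted.
    """
    dark = img < threshold                                    # low backscatter = candidate eye
    # Opening removes small speckle-induced blobs; closing fills pinholes in the eye
    clean = ndimage.binary_opening(dark, structure=np.ones((3, 3)))
    clean = ndimage.binary_closing(clean, structure=np.ones((3, 3)))
    labels, n = ndimage.label(clean)                          # connected components
    if n == 0:
        return 0
    sizes = ndimage.sum(clean, labels, index=range(1, n + 1)) # area per component
    return int(max(sizes))                                    # largest region = the eye

# Synthetic test scene: bright "sea" with a dark circular "eye" plus mild noise
rng = np.random.default_rng(0)
img = np.full((100, 100), 200.0)
yy, xx = np.mgrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2] = 20.0        # eye of radius 15 px
img += rng.normal(0, 5, img.shape)
area = extract_eye_area(img, threshold=100)
print(area)  # roughly pi * 15^2, i.e. around 700 pixels
```

The same area measurement, applied to real denoised ocean SAR scenes, is what gets compared against the NOAA-derived reference coverage in Chapter 5.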

The second case study, for water body detection, involves pattern recognition with respect to digital image segmentation. The morphological watershed algorithm is applicable to image segmentation, albeit excessively sensitive to speckle noise in SAR images, leading to over-segmentation and thus a reduction in efficiency. This thesis presents a novel approach for water body extraction from SAR images by applying a marker-controlled watershed combined with the top-hat transformation. The purpose of this case study is to improve the efficiency of watershed techniques compared with Canny edge detection results for fine resolution IFSAR images, and to yield an intuitive and well segmented image for mapping of water body boundaries.
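A marker-controlled watershed of the kind described above can be sketched with standard tools. The snippet below is a hedged illustration, not the thesis implementation: it substitutes Gaussian smoothing and plain intensity thresholds for the top-hat transformation, uses scipy's IFT-based watershed rather than the algorithm evaluated in Chapter 5, and the scene and threshold values are invented for the demo.

```python
import numpy as np
from scipy import ndimage

def marker_watershed(img, water_thresh, land_thresh):
    """Marker-controlled watershed sketch for separating water from land.

    Confident water and land pixels are marked first; flooding from those
    markers only suppresses the over-segmentation that an unseeded
    watershed would produce on a noisy gradient image.
    """
    smooth = ndimage.gaussian_filter(img, sigma=3)            # suppress speckle-like noise
    # Gradient magnitude: flooding proceeds through low-gradient regions first
    grad = np.hypot(ndimage.sobel(smooth, axis=0), ndimage.sobel(smooth, axis=1))
    grad = (255 * grad / grad.max()).astype(np.uint8)

    markers = np.zeros(img.shape, dtype=np.int16)
    markers[smooth < water_thresh] = 1    # confident water (dark, smooth returns)
    markers[smooth > land_thresh] = 2     # confident land (bright returns)
    return ndimage.watershed_ift(grad, markers)

# Synthetic scene: a dark horizontal "river" through bright "land"
img = np.full((80, 80), 180.0)
img[30:50, :] = 30.0
labels = marker_watershed(img, water_thresh=60, land_thresh=150)
print((labels == 1).sum())  # number of pixels assigned to the water marker
```

The boundary between the two labels is the watershed line; on real IFSAR data this is the candidate water body edge that the thesis compares against Canny edge detection results.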


Acknowledgement

Foremost, I would like to express my deepest thanks to my supervisor, Professor John C. Trinder, who has been a true mentor to me. This spirit of mentorship has pervaded every step of my research, and I thank him for offering thorough and excellent feedback on every version of this thesis, even during his holidays. I am truly grateful for his selfless dedication to both my personal and academic development, and I consider myself extremely fortunate to have had him as my supervisor at UNSW; his maddening attention to detail even drove me to finally learn to punctuate my winding sentences. Most importantly, his immense academic knowledge and guidance have made this a thoughtful and rewarding journey.

I am eternally grateful to all my family for their love and support in all my pursuits.

I would also like to thank:

Dr. Xiaofeng Li, without your strong support and expertise I would not have had the chance to complete the research of the tropical cyclone case study; I truly appreciate your patience and help in resolving the problems I encountered during my study;

Dr. Scott Hensley, I really appreciate the marvellous lecture you gave and your great patience in answering all my questions on radar principles;

Dr. Xianwen Ding, I truly thank you for your great help and consideration during the IGARSS 2014 conference;

A/Prof. Samsung Lim, as my co-supervisor, the useful advice you provided through consultation and coordination is highly appreciated;

Dr. Hossein Aghighi, your diligence is a role model for me to learn from, and I am grateful for my inclusion in your research.

Last but not least, I would like to thank the “Thesis Posse” I have formed with numerous individuals over the past few years, who have inspired, supported and encouraged me along the way; without them I would not have enjoyed this thesis as much as I have.

Mr. Allan Hong Zhang & Ms. Xue Hua Zhang, Dr. Bo Jin, Ms. Boya Wang, Mr. Chao Zhou, Dr. Hwan Choi, Dr. Ke Xu, Dr. Ligaya Leah Figueroa, Dr. Shi Zhan, Dr. Siyuan Chen, Dr. Yaobin Sheng, Dr. Yiping Jiang, Mr. Yuqiang Chu, Dr. Zeyu Li.


Glossary

ALOS : Advanced Land Observing Satellite

AVHRR : Advanced Very High Resolution Radiometer

ASAR : Advanced Synthetic Aperture Radar (Envisat)

ASI : Italian Space Agency

ASTER : Advanced Spaceborne Thermal Emission and Reflection Radiometer

Aqua (EOS PM) : is a multi-national NASA scientific research satellite in orbit around the Earth, studying the precipitation, evaporation, and cycling of water.

BCS : Backscatter Cross Section

Tropical Cyclone Category : TCs are rated in five categories according to their strongest wind speeds; Categories 4-5 have very destructive winds, with typical gusts over open flat land of 225-279 km/h or more than 280 km/h.

CCRS : Canadian Centre for Remote Sensing

CMOD4 : C-Band Model Function

CNES : Centre National d'Études Spatiales

COSMO-SkyMed : (COnstellation of small Satellites for the Mediterranean basin Observation) is an Earth observation satellite system funded by the Italian Ministry of Research and Ministry of Defence and conducted by the Italian Space Agency (ASI), intended for both military and civilian use.

CPHC : Central Pacific Hurricane Centre

DCE : Discrete Curve Evolution

DEM : Digital Elevation Models

DLR : German Aerospace Centre

DoE : Difference of Estimates

EMR : Electromagnetic Radiation

ESA : European Space Agency

ESRIN : ESA Centre for Earth Observation


ERS : European Remote Sensing Satellite (ESA)

FIR : Finite Impulse Response

GIS : Geographic Information System

GMES : Global Monitoring For Environment and Security

GMF : Geophysical Model Function

GNSS : Global Navigation Satellite System

HF : High Frequency

IFSAR : Interferometric Synthetic Aperture Radar

IIR : Infinite Impulse Response

INS : Inertial Navigation System

JTWC : Joint Typhoon Warning Center

LOG : Laplacian-of-Gaussian

MAT : Medial Axis Transform

MM : Mathematical Morphology

MODIS : Moderate-Resolution Imaging Spectroradiometer

MSS : Multi-Spectral Scanner

NIR : Near Infrared

NOAA : National Oceanic and Atmospheric Administration

NRCS : Normalized Radar Cross Section

OLI : Operational Land Imager

PASAR : Phased Array Synthetic-Aperture Radar

PAZ (Hisdesat) : is the first Spanish Synthetic Aperture Radar in X-band, designed as a dual use (military and civilian) mission to meet operational requirements in the field of high resolution (up to 1 metre) observation.

POES : Polar Operational Environmental Satellites

PRF : Pulse Repetition Frequency

Radarsat-1 : is Canada's first commercial Earth observation satellite, developed under the management of the Canadian Space Agency (CSA) in co-operation with Canadian provincial governments and the private sector.

RAR : Real Aperture Radar

RCM : Radarsat Constellation Mission

RCS : Radar Cross-Section

RDP : Relative Dielectric Permittivity

RISAT-1 : is an Indian remote sensing satellite which was built and is operated by the Indian Space Research Organisation (ISRO).

RPI : Repeat Pass Interferometry

SAR : Synthetic Aperture Radar

SAT : Symmetry Axis Transform

SE : Structuring Element

SLAR : Side-Looking Airborne Radar

SNR : Signal to Noise Ratio

SPI : Single Pass Interferometry

SPOT : Satellite Pour l'Observation de la Terre

TanDEM-X : TerraSAR-X add-on for Digital Elevation Measurement, is the name of TerraSAR-X's twin satellite.

TerraSAR-X : A radar Earth observation satellite, a joint venture carried out under a public-private partnership between the German Aerospace Center (DLR) and EADS Astrium; its twin satellite TanDEM-X was launched on June 21, 2010.

Terra : Formerly EOS (Earth Observing System) AM-1, is a multi-national NASA scientific research satellite in a Sun-synchronous orbit around the Earth.

TC : Tropical Cyclones

TIRS : Thermal Infrared Sensor

TM : Thematic Mapper

TRMM : Tropical Rainfall Measuring Mission

TPC/NHC : Tropical Prediction Center/National Hurricane Center

UAF : UNSW (University of New South Wales) Adaptive Filter

UAV : Unmanned Aerial Vehicle

UHF : Ultra High Frequency


VHF : Very High Frequency

WFO : Weather Forecast Office

WMO : World Meteorological Organization


Table of Contents

Abstract i
Acknowledgement iii
Glossary iv
Table of Contents viii
List of Figures xii
List of Tables xvi
List of Publications xvii

Chapter 1. Introduction

1.1 Background Information 1
1.2 Problem Statement and Justification 2
1.3 Objectives and Contributions of Research 4
1.4 Thesis Organization 6

Chapter 2. Principles of Synthetic Aperture Radar (SAR) System

2.1 Basic Principles of Imaging Radar Systems 9
2.1.1 SAR Wavelength 13
2.1.2 Spatial Resolution 16
2.1.2.1 Resolution in Range (Cross-track) 16
2.1.2.2 Resolution in Azimuth (Along-track) 17
2.1.3 Polarization 18
2.1.4 Scattering Mechanisms 19
2.2 SAR System Parameters and Properties 20
2.2.1 Radar Equation 20
2.2.2 Backscatter (Surface Roughness, Dielectric Properties) 22
2.2.3 Incidence Angle, Azimuth and Look Direction 24
2.2.4 Geometric Effects 25
2.2.4.1 Radar Shadows 26
2.2.4.2 Radar Layover 27
2.2.4.3 Foreshortening 27
2.2.5 Speckle Effect and Filtering 28
2.3 Interferometric Synthetic Aperture Radar (IFSAR) 29


Chapter 3. Image Processing for the Extraction of Features in Images

3.1 Introduction 32
3.2 Brief History of Mathematical Morphology 32
3.3 Morphological Structuring Element 33
3.4 Binary Erosion and Dilation 34
3.5 Binary Opening and Closing 37
3.6 Top-hat Transform 38
3.7 Skeletonization 41
3.7.1 Skeleton Based on Medial Axis 41
3.7.2 Skeleton Representation 42
3.7.3 Shape Reconstruction 44
3.8 Pruning with Discrete Curve Evolution (DCE) 46
3.9 Watershed Segmentation 49
3.9.1 Implementation of Traditional Watershed Segmentation 51
3.9.2 Oversegmentation and Its Remedy 54
3.10 Classic Edge Detectors 55
3.10.1 Introduction 55
3.10.2 Gradient 57
3.10.3 Edge Detection Using Classical Edge Detectors 59
3.10.3.1 Roberts Edge Detectors 59
3.10.3.2 Sobel Edge Detectors 60
3.10.3.3 Prewitt Edge Detectors 60
3.10.3.4 Laplacian-of-Gaussian (LOG) Edge Detectors 61
3.10.3.5 Canny Edge Detector 63
3.11 Speckle Noise Filtering 65
3.11.1 UNSW (University of New South Wales) Adaptive Filter (UAF) 65
3.11.2 Median Filter 66

Chapter 4. Applications of Remote Sensing Data for Tropical Cyclone Studies and

Flood Mapping

4.1 Introduction 69
4.2 Approaches of Tropical Cyclone Study 70
4.2.1 Introduction 70
4.2.2 Identification of Tropical Cyclone Characteristics Using the Dvorak Technique 71
4.2.3 Data Acquisition and Analysis 72


4.2.4 Ocean Surface Wind Speed Retrieval Model 74
4.2.4.1 Multisource for Wind Speed Retrieval 74
4.2.4.2 SAR Wind Speed Retrieval 75
4.2.5 Tropical Cyclone Eyes Detection and Location 77
4.3 Approaches of Water Body/Flood Mapping 78
4.3.1 Introduction 78
4.3.2 Flood Monitoring with Passive Remote Sensing System 79
4.3.3 Flood Mapping and Monitoring from Radar Images 82
4.3.4 Flood Mapping and Monitoring Using Radar Coherence 86
4.3.5 A Combination of Optical and Radar Data 89
4.3.6 Flood Detection Using GIS and Remote Sensing 89
4.3.6.1 Digital Elevation Models (DEM) 89
4.3.6.2 Light Detecting and Ranging (LIDAR) 90
4.4 Summary 90

Chapter 5. Processing of SAR Images for Case Studies

5.1 Tropical Cyclone Eyes 93
5.1.1 Introduction 93
5.1.2 Extraction of Tropical Cyclone Eyes 95
5.1.2.1 Using Adaptive Filters UAF for SAR Speckle Reduction 95
5.1.2.2 Image Enhancement 96
5.1.2.3 Morphological Skeleton Extraction 98
5.1.2.4 Skeleton Pruning with Discrete Curve Evolution (DCE) 99
5.1.2.5 Reconstruction Algorithm 100
5.1.3 Study Area and Dataset 100
5.1.4 Experimental Results and Discussion 104
5.1.5 Analysis of Results of Extraction of TC Eyes 107
5.2 Water Body Boundary Detection 111
5.2.1 Introduction 111
5.2.2 Water Body Extraction 112
5.2.2.1 Creating Gradient Image 112
5.2.2.2 Top-hat Transformation 113
5.2.3 Watershed Algorithm 114
5.2.4 Study Area and Dataset 115
5.2.5 Experimental Results and Discussion 116
5.2.6 Analysis of Results of Water Body Detection 121


Chapter 6. Concluding Remarks and Future Research

6.1 Concluding Remarks 124
6.1.1 Conclusion of Tropical Cyclone Case Study 124
6.1.2 Conclusion of Water Body Detection Case Study 126
6.2 Recommendations for Future Research 127

References 129


List of Figures

Figure 1.1 : Demonstration of different types and numbers of natural disasters: 2013 versus 2003-2012 1
Figure 1.2 : Percent share of reported occurrence by disaster sub-group and continent in 2013 2
Figure 2.1 : (a) Illustration of a spaceborne SAR viewing geometry; (b) principle of SAR systems. Object points on the ground are viewed less frequently at near range than at far range; hence a near-range point has a proportionally shorter effective antenna length 10
Figure 2.2 : Illustration of marginal atmospheric effects in Advanced Synthetic Aperture Radar (ASAR) 13
Figure 2.3 : Impulse response to a point target 17
Figure 2.4 : Polarization of electromagnetic signal 19
Figure 2.5 : Scattering mechanisms of surface scattering 20
Figure 2.6 : Demonstration of surface and volume scattering 20
Figure 2.7 : Demonstration of corner reflection 20
Figure 2.8 : Scattering mechanisms of water and land surfaces under different conditions 21
Figure 2.9 : Typical radar backscatter as a function of incidence angle for representative surfaces 23
Figure 2.10 : Geometry of different radar angles 24
Figure 2.11 : Relationship of radar incidence angle and corresponding backscatter intensity 25
Figure 2.12 : Foreshortening, layover, and shadow; the three-dimensional world is collapsed to two dimensions in conventional SAR imaging 26
Figure 2.13 : Geometric distortions in SLAR imaging radar system 26
Figure 2.14 : Radar layover effect of terrain feature 27
Figure 2.15 : Radar foreshortening effect of terrain feature 28
Figure 2.16 : Illustration of single pass interferometry (left) and repeat pass interferometry (right) 29
Figure 3.1 : A non-flat structuring element (left) and a flat structuring element (right) used in a 1D image for dilation. The dashed line is the result of dilating the original shape, shown as a solid line (the dots shown in the SEs mark the centre of the structuring element) 34
Figure 3.2 : Dilation of shape X by a circular structuring element; the result is the expansion of X 34
Figure 3.3 : Erosion of shape X by a circular structuring element; the result is the reduction of X 35
Figure 3.4 : Demonstration of dilation results using different structuring elements. The first column shows the original images and the first row shows 5 types of structuring elements (SE). Each of the second to fourth rows shows the result dilated by each SE from the first row, which varies with the shape of the SE. The orange squares mark the centres of the SEs 36
Figure 3.5 : Demonstration of erosion results using different structuring elements. The first column shows the original images and the first row shows 5 types of structuring elements (SE). Each of the second to fourth rows shows the result eroded by each SE from the first row, which varies with the shape of the SE. The orange squares mark the centres of the SEs 37
Figure 3.6 : Opening of a discrete binary image X (13 × 10) shown in (a) by a structuring element B (2 × 2) shown in (b), resulting in the grey pixels against the white background pixels (c) 37
Figure 3.7 : Closing of a discrete binary image X (13 × 10) shown in (a) by a structuring element B (2 × 2) shown in (b), resulting in the grey pixels against the white background pixels (c) 37
Figure 3.8 : Demonstration of opening or white top-hat transformation 40
Figure 3.9 : Demonstration of closing or black top-hat/bottom-hat 40
Figure 3.10 : Geometric relationship between a point on a 1D medial axis and its corresponding boundary points. The tangent circle shrinks along with the boundary, so the boundary is perpendicular to its radius vector, and the medial axis bisects the two radius vectors. The medial axis indicated in the rectangle defines the centres of maximal discs 42
Figure 3.11 : A robust skeleton extraction and a rectangle with boundary disturbance 46
Figure 3.12 : Illustration of immersion analogy 49
Figure 3.13 : Illustration of watershed segmentation in topographic surface 52
Figure 3.14 : Two examples of the watershed transform applied to a one-dimensional signal. Chart A) when three markers or labels are assigned at the three local minima, three segmented areas are produced, with watershed lines as the boundaries separated at the local maxima between each two basins; Chart B) when only two markers are located, segment 2 is flooded over a small peak and into the adjacent minima until a watershed line is formed with segment 1 55
Figure 3.15 : The 2-D Laplacian of Gaussian (LOG) function, with axes marked in standard deviations (σ) 62
Figure 3.16 : (Left) Demonstration of the robust skeleton for a rectangle; (Middle, Right) the sub-branches in the skeletons generated by noise on the boundary of the rectangle 66
Figure 4.1 : Illustration of T-Numbers of the Dvorak technique 71
Figure 4.2 : Illustration of pattern strength based on the Dvorak technique 71
Figure 4.3 : Illustration of tropical cyclones observation 74
Figure 4.4 : Illustration of TC structure in a vertical slice through the centre of a mature TC 78
Figure 4.5 : Scattering mechanisms of a non-flooded forest (left column) and flooded forest (middle and right columns) 84
Figure 5.1 : ENVISAT satellite imagery of hurricane Katrina collected on 28 August 2005 from MERIS (UTC 15:50) 94
Figure 5.2 : ENVISAT satellite imagery of hurricane Katrina collected on 28 August 2005 from ASAR (UTC 17:00) 94
Figure 5.3 : ASAR image overlaid on the Terra/MODIS optical image (Hurricane Katrina 2005 UTC 17:00) 94
Figure 5.4 : Demonstration of original SAR image (left) of hurricane Dean acquired on 08/19/2007 with speckle noise; (right) the denoised image after applying the UAF adaptive filter 96
Figure 5.5 : Illustration of denoised SAR image (left) and image enhancement based on Otsu's method (right) 98
Figure 5.6 : Illustration of original ocean SAR images (time series, hurricane Katrina, 27/08/2005 and 28/08/2005) 113
Figure 5.7 : Illustration of original ocean SAR images (time series, hurricane Dean, 17/08/2007 and 19/08/2007) 113
Figure 5.8 : Illustration of original ocean SAR images (hurricanes Earl and Rita, 02/09/2010 and 22/09/2005) 114
Figure 5.9 : (a) Original SAR image I, (b) pre-processed SAR image I 114
Figure 5.10 : (a) Original SAR image II, (b) pre-processed SAR image II 114
Figure 5.11 : Demonstration of IFSAR image (acquisition time: UTC 21:51, 22 Oct 2013) 116
Figure 5.12 : (a) Oversegmentation of SAR image I, (b) watershed of processed SAR image I, (c) marker-controlled watershed of processed SAR image I 117
Figure 5.13 : (a) Oversegmentation of SAR image II, (b) watershed of SAR image II, (c) marker-controlled watershed of SAR image II 117
Figure 5.14 : (a) Original SAR image III overlaid with reference map, with a yellow solid line as the river edge; (b) Canny edge detection example 1; (c) Canny edge detection outcome example 1 overlaid with the watershed transformation result 119
Figure 5.15 : (a) Original SAR image IV overlaid with reference map, with a yellow solid line as the river edge; (b) Canny edge detection example 2; (c) Canny edge detection outcome example 2 overlaid with the watershed transformation result 119
Figure 5.16 : (a) Original SAR image IV overlaid with reference map, with a yellow solid line as the river edge; (b) Canny edge detection example 3; (c) Canny edge detection outcome example 3 overlaid with the watershed transformation result 120
Figure 5.17 : (a) Original SAR image V overlaid with reference map, with a yellow solid line as the river edge; (b) Canny edge detection example 4; (c) Canny edge detection outcome example 4 overlaid with the watershed transformation result 120

List of Tables

Table 2.1 : Frequency and wavelength relationships 11
Table 2.2 : Electromagnetic (EMR) spectrum of interest to remote sensing 12
Table 2.3 : Common radar remote sensing bands and their characteristics 14
Table 2.4 : The major commercial radar remote sensing satellites 15
Table 2.5 : The typical values of backscatter coefficients and corresponding ground features 23
Table 5.1 : Imagery information of hurricane event 101
Table 5.2 : Wind speed information of TC area 101
Table 5.3 : Illustration of morphological reconstruction compared to the skeleton pruning result 105
Table 5.4 : Demonstration of denoised TC images for morphological reconstruction after skeleton and pruning 106
Table 5.5 : Evaluation of shape extraction for tropical cyclone coverage 108
Table 5.6 : Estimation of area extraction for tropical cyclone by wavelet analysis 109
Table 5.7 : Analysis of initial watershed segmentation (oversegmentation) and marker-controlled watershed result 118


List of Publications

Conference Papers:

Ke Wang, Xiaojing Li, Linlin Ge, “Locating Tropical Cyclones with Integrated SAR and Optical Satellite Imagery”, IEEE International Geoscience and Remote Sensing Symposium, TUP.P15, July 2013 (in press)

Ke Wang, John C. Trinder, “Applied Watershed Segmentation Algorithm for Water Body Extraction in Airborne SAR Image”, EUSAR 2014: 10th European Conference on Synthetic Aperture Radar, 2014 (in press)

Ke Wang, Xiaofeng Li, John C. Trinder, “Mathematical Morphological Analysis of Tropical Cyclone Eyes on Ocean SAR”, IEEE International Geoscience and Remote Sensing Symposium, 2014 (in press)

Journal Paper:

Ke Wang, Ali Shamsoddini, Xiaofeng Li, John C. Trinder, “Extracting Hurricane Eyes in Spaceborne SAR Images Using Morphological Analysis”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014 (submitted)

Peer-reviewed Paper:

Hossein Aghighi, John C. Trinder, Ke Wang, Yuliya Tarabalka and Samsung Lim, “Smoothing Parameter Estimation for MRF Classification of Non-Gaussian Distribution Image”, ISPRS - International Society for Photogrammetry and Remote Sensing, Commission VII, WG VII/4, 2014 (in press)



Chapter 1 Introduction


1.1 Background Information

Reports of the occurrence of natural disasters over the past few decades are illustrated below (Figures 1.1 and 1.2). Earthquakes, floods and tropical cyclones (TC) are showing an upward trend, giving rise to an increased public consciousness of the impacts of various disastrous events. Most of the time, the impacts of such catastrophic events are inevitable even with immediate rescue activities, whereas establishing systematic climate change monitoring institutions, as well as post-disaster management or reconstruction, may have significant long term benefits. In addition, in order to understand and possibly alleviate the impacts of some of these disastrous events on human beings and their environment, research in terms of meteorological and hydrological monitoring is being undertaken for each of the characteristic stages of such events, i.e. prior to the event (early warning systems, disaster preparedness), the instant when the event happens (disaster alert systems, disaster assistance), and after the event (emergency response and risk assessment).

Figure 1.1 has been removed due to Copyright restrictions.

Figure 1.1 Demonstration of different types and numbers of natural disasters: 2013 versus 2003-2012 (Guha-Sapir et al., 2014)

Natural disasters are defined as events not brought about by human activity that have significant negative impacts on people, infrastructure, and the environment. From 2003 to 2013, 718 natural disasters worldwide were registered to have affected more than 2,256 million victims, causing US$118.6 billion of economic damage (Guha-Sapir et al., 2014). Amongst the various types of disasters, hydrological and meteorological events (such as flooding and tropical cyclones) account for the largest share. Inundation disasters were the most frequently occurring disasters on average annually for the years 2003 to 2013 (Figure 1.2).



Figure 1.2 has been removed due to Copyright restrictions.

Figure 1.2 Percent share of reported occurrence by disaster sub-group and continent in 2013 (Guha-Sapir et al., 2014)

At present, Tropical Cyclones (TC) (also referred to as hurricanes and typhoons) have received close attention from policymakers and the scientific community. For instance, in America TCs are monitored by several federal governmental organizations under the leadership of the National Oceanic and Atmospheric Administration (NOAA), including the Tropical Prediction Center/National Hurricane Center (TPC/NHC), Central Pacific Hurricane Center (CPHC), Weather Forecast Office (WFO), and Joint Typhoon Warning Center (JTWC). These authorities are all trying to predict cyclone or hurricane paths and intensities to issue accurate and timely warnings to the public in harm’s way.

Rapid damage assessment after catastrophes is vital for launching effective emergency rescue actions. Crucial and prompt geospatial information about affected areas can be provided by remote sensing satellites featuring optical and SAR imaging sensors. With the development of satellite mission technology, satellite images can be used for rapid mapping of regions of interest with high geometric positional accuracy. Some operational aspects of the use of earth observation information, and of the integration of data for disaster preparedness and risk assessment, are emphasized in this research. The demand for emergency response crisis information on civil endangerment has increased considerably in recent years on a global scale.

1.2 Problem Statement and Justification

The casualties and property losses caused by TCs make them the most damaging natural hazards for coastal residents. The difficulties of locating and monitoring the eyes of TCs in previous research are caused by thick cloud obstructions in optical images, especially during the formation stage of a TC, which is important in determining the evolution process. Simple

sketchy information makes it difficult to know where TCs are likely to make landfall. The public should be able to be warned of the significance of the potential danger of a TC. Detailed descriptions of the TC eye, such as location, size and shape, are the crucial information that provides meteorologists with adequate evidence to interpret and predict its intensity and strength variation, and thus to determine where a TC might head next, as well as to predict its category.

If a TC attains Category 5, the highest state of weather alert should be issued. Extracting TC eye areas automatically is challenging yet critical to determining the trend in the TC wavenumber or category. The reason for this is that the evolution of a TC's category is highly correlated with its intensity, which corresponds to its destructive effect (Li et al., 2013, Li et al., 2012).

In the TC case study of this thesis, exploiting mathematical morphological methods to efficiently improve the accuracy of TC eye extraction is a major objective, for comparison with wavelet analysis in characterizing TC eyes (Du and Vachon, 2003) in Chapter 5.

Although the proposed morphological method that aims at extracting TC eye areas has been widely employed in image recognition and analysis for calculating the geometric features of shape and structure, its limitations cannot be avoided because of its sensitivity to the speckle noise in SAR images. The noise along the edges of TC eyes generates redundant skeleton branches, resulting in extra erroneous areas in the shape reconstruction step. Therefore, an effective technique is required to overcome this instability of the skeleton, with the purpose of drastically reducing these redundant segments while preserving the significant skeleton information, in order to present an accurate final shape outcome.

Floods are among the most devastating, widespread and frequent disasters, and hence are responsible for enormous financial losses and casualties. With its long coastline, Australia is susceptible to the influences of extreme weather, such as tropical cyclones developed over the oceans. Coastal residents suffer huge destruction from cyclones; indeed, a large portion of tropical cyclones give rise to flooding as well. This thesis will investigate the application of satellite image data for defining the boundaries of water bodies that would be suitable for investigating the extent of flood disasters.


Remote sensing is a technology that is potentially well suited to the timely provision of spatial information and knowledge, assisting disaster risk management and assessment. Nevertheless, in the past, spatial resolution constraints have limited the usefulness of SAR imagery for monitoring water bodies and extracting their edges. First of all, a number of studies have been conducted into the use of SAR data for mapping water bodies and the extent of flood events; however, those studies have in general been based on relatively low resolution SAR imagery compared with fine resolution optical data.

Secondly, image enhancement is another problem that needs to be addressed, because poor contrast will produce poor results when the boundaries of flood areas are extracted based on the proposed watershed transformation. When image contrast is low, detailed edge information cannot be maintained, and oversegmentation into a great number of trivial regions may also result. By contrast, an overly high contrast ratio will yield appropriate segmentation at the expense of edge detail loss. Thus, a suitable image enhancement approach needs to be implemented in the pre-processing step.

Thirdly, a common issue with the proposed application of watershed segmentation is oversegmentation, owing to the inevitable SAR speckle noise. Moreover, the oversegmentation, which presents as an enormous number of redundant classified regions, may also result from the irregularities of the gradient image generated from the SAR image.

1.3 Objectives and Contributions of Research

In this research, novel techniques are presented to support emergency response after catastrophic events in an automatic way, using imagery from different SAR sensors to investigate visible water bodies over the ocean surface and inland areas.

The expected benefits of the research and major contributions are listed as follows:


1. For the TC case study, the two satellite imaging radar sensors used to detect TC eye areas are Radarsat-1 (launched in November 1995, operated by the Canadian Space Agency) and Envisat ASAR (Advanced Synthetic Aperture Radar, launched on 1 March 2002). Medium spatial resolution data products from these sensors have been studied to assess, and in particular to establish, the most appropriate system characteristics for their potential application in representing and extracting the characteristics of TC eye areas on the ocean surface.

In Chapter 5, an adaptive filter (UAF) is used to suppress the speckle noise in the SAR data while the textural information is well maintained. Classic Otsu automatic threshold selection is implemented to separate the TC eye and non-eye areas, followed by noise reduction, as a simple but effective image enhancement approach. The redundant branches of the morphological skeletons are resolved by applying skeleton pruning based on discrete curve evolution (DCE) to the ocean SAR images, with meaningful branches preserved. The key contribution of this TC case study is a better estimate of TC eye extraction, with a relative accuracy of 92% across six ocean SAR hurricane events, compared against previous methods of extracting TC eye areas.

2. For the water body detection case study, the airborne IFSAR system consists of a SAR sensor with two radar antennas and GNSS/INS components; its high spatial resolution of 63 cm is used primarily for improving the accuracy of the detected river edge. The water body areas of interest are located in Mildura, Victoria, Australia. In the pre-processing, a median filter reduces the high-frequency noise in the IFSAR image without losing the corresponding textural information of the water body boundary. In this case study, the morphological top-hat transformation is exploited as a pre-processing technique to improve segmentation quality in the following steps.

Furthermore, a marker-controlled morphological watershed segmentation method is introduced to address the oversegmentation problem by labelling the inner and outer marking constraints, providing an automatic extraction method under an unsupervised situation with better accuracy of extraction of water body boundaries. The major contribution of this case study is an efficiency comparison of the marker-controlled watershed algorithm and Canny edge detection, with the purpose of generating an appropriate segmented map associated with the detection of water body edges.

1.4 Thesis Organization

By identifying key problems in TC eye extraction and water body detection based on different mathematical morphology methods, this thesis presents corresponding solutions in each chapter.

The structure of this thesis is organized as follows:

Chapter 2

This chapter is an overview of the basic principles of active remote sensing systems, with regard to spatial resolution, characteristics of radar backscattering and speckle noise reduction. It is essential to review these fundamentals, since the data pre-processing depends mainly on understanding the characteristics of SAR images. Based on this knowledge of the SAR system, the chapter examines SAR data carefully by analysing texture characteristics, grey-scale contrast and coherent speckle noise.

Chapter 3

Fundamentally, mathematical morphology has been used extensively in remote sensing for computing the geometric shape and structure of features. Mathematical morphology focuses on the geometric characteristics of an object, by setting a proper structuring element. The chapter introduces the mathematical morphology methods consisting of skeletonization, pruning, shape reconstruction and watershed transformation. A comparison of different classic edge detection methods is also given in this chapter.

Chapter 4

The literature review concentrates on previous research methods of water body detection over ocean tropical cyclones and on inland flood mapping methodologies. By reviewing the concerns of previous studies, their advantages and limitations are presented with the aim of identifying knowledge gaps for the current work.

Chapter 5

Optimization of pattern recognition is the major issue that has to be addressed in order to increase the accuracy rate of conventional classification approaches. The methodologies of the two case studies, TC eye extraction and water body detection, are examined in detail, and the experimental results of each case study are discussed in this chapter. The proposed methods achieve higher accuracies compared with general image classification approaches.

Chapter 6

This chapter draws the conclusions of this study by carefully examining the experimental results and validation. The advantages of the methodology are discussed in detail with statistical estimates derived from the previous chapters. The main idea is to demonstrate the advantages of the pattern recognition methods for extracting areas of interest from active and passive remote sensing images. Some limitations of this study are also described, to provide research directions for future work.

Chapter 2 Principles of Synthetic Aperture Radar (SAR) System

2.1 Basic Principles of Imaging Radar Systems

Radar is an abbreviation for Radio Detection and Ranging, an active remote sensing technique that transmits microwave pulses of electromagnetic energy at a high pulse repetition frequency (PRF), records the echoes of the pulses returned from a target by an antenna, and displays the echoes as an image (Fernandes et al., 2013). Remote sensing radars can be divided into imaging and non-imaging radars (e.g. microwave radiometers, microwave altimeters and lasers). In this chapter, imaging radar systems are the main focus.

Imaging radar systems, being active sensors, transmit their own microwave illumination and measure the energy returned from a target, whilst optical sensors are passive and depend on illumination from the sun or thermal radiation sources. The measurement of the round trip travel time of the microwave pulses from and back to the antenna is used to determine the distance to the target (Figure 2.1). The scanning of the Earth's surface is accomplished to the side of the platform, either an aircraft or satellite, at right angles to the flight line.

Imaging radar systems provide a two-dimensional representation of the returned signal from the ground surface, which differs from data acquired by the radar altimeters and scatterometers of non-imaging radar systems. For instance, radar altimeters look straight down at nadir below the platform and measure altitude or elevation by transmitting short microwave pulses and calculating the round trip time delay to objects. They are typically carried on aircraft for altitude determination, or on aircraft and satellites for topographic mapping and sea surface height estimation. Scatterometers record one-dimensional measurements of the radar energy backscatter cross section (BCS) of an object on the earth's surface, with precise quantitative measurements of the energy backscattered from objects. The amount of energy backscattered is dependent on the surface properties, such as surface roughness, and the angle at which the microwave energy strikes the object.

Figure 2.1 has been removed due to Copyright restrictions.

Figure 2.1 (a) Illustration of a spaceborne SAR viewing geometry; (b) Principle of SAR systems. Object points on the ground are viewed less often at near range than at far range; hence, a near-range point has a proportionally shorter effective antenna length. (image credit: NASA/JPL)

Initially, imaging radar systems were based on so-called real aperture side-looking airborne radar (SLAR) systems, but the physical size of the antenna was the determining factor of azimuth (in flight direction) resolution in such systems, so a very long antenna was required to achieve high azimuth resolution. Synthetic aperture radar (SAR) systems were therefore designed to overcome the azimuth resolution problem. SAR systems synthesize a very long antenna by continuously receiving the reflected signals from the same target as the platform progresses over the terrain. Using the Doppler shifts of the received signals, the azimuth resolution of SAR images can be shown to be approximately equal to L/2, where L is the actual length of the antenna on the platform. This means that, counter-intuitively, the azimuth resolution becomes finer as the physical antenna becomes shorter. In addition, the resolution is independent of the elevation of the platform and the frequency of propagation. The azimuth resolution is constrained by the complexity of the radar system. For spaceborne SAR sensors the footprint is a constrained area or swath scanned by the signal. Successive swaths enable imaging of the area of interest.

The range of wavelengths covered by radar remote sensing systems varies from about 0.75 cm to 1 m. Radar systems are divided into several categories according to their wavelengths (shown in Table 2.1). Each band has its own characteristics with respect to ground feature detection. X-, C- and L-bands are typically used on spaceborne radar systems. Owing to the fact that longer wavelength microwave signals can be transmitted more effectively through the atmosphere and especially clouds, radar sensors with C- and L-band are suitable for earth observation purposes (Aguirre-Salado et al., 2012). This property also applies to X-band radar, except when precipitation occurs, such as heavy rain or hailstorms (Bueno et al., 2012).


Table 2.1 Frequency and wavelength relationships (Hensley and Madsen, 2007).

Frequency Band (MHz)    Wavelength Range (cm)    Band Identification
26,500 – 40,000         1.13 – 0.75              Ka
18,000 – 26,500         1.66 – 1.13              K
12,500 – 18,000         2.4 – 1.66               Ku
8,000 – 12,500          3.75 – 2.4               X
4,000 – 8,000           7.5 – 3.75               C
2,000 – 4,000           15 – 7.5                 S
1,000 – 2,000           30 – 15                  L
300 – 900               100 – 33                 P or UHF *
30 – 300                1,000 – 100              VHF *
3 – 30                  10,000 – 1,000           HF *

* UHF (Ultra High Frequency); VHF (Very High Frequency); HF (High Frequency)
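The band boundaries of Table 2.1 can be encoded in a small lookup, which is convenient when working programmatically with sensors at different frequencies. The following is an illustrative sketch only; the `band_of` helper and its name are not part of the thesis methodology.

```python
# Illustrative sketch: map a radar frequency (in MHz) to the band letters of
# Table 2.1. The boundaries below are transcribed directly from the table.

BANDS = [  # (f_min_MHz, f_max_MHz, band identification)
    (26_500, 40_000, "Ka"),
    (18_000, 26_500, "K"),
    (12_500, 18_000, "Ku"),
    (8_000, 12_500, "X"),
    (4_000, 8_000, "C"),
    (2_000, 4_000, "S"),
    (1_000, 2_000, "L"),
    (300, 900, "P or UHF"),
    (30, 300, "VHF"),
    (3, 30, "HF"),
]

def band_of(freq_mhz: float) -> str:
    """Return the band identification for a radar frequency given in MHz."""
    for f_min, f_max, name in BANDS:
        if f_min <= freq_mhz <= f_max:
            return name
    raise ValueError(f"{freq_mhz} MHz is outside the tabulated bands")

# Radarsat-1 and Envisat ASAR operate at 5,300 MHz (5.3 GHz):
print(band_of(5_300))  # C
```

For example, the 5.3 GHz centre frequency of the C-band sensors used in this study falls in the 4,000 – 8,000 MHz row, and a typical 9.6 GHz airborne X-band system falls in the 8,000 – 12,500 MHz row.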

It is relevant to compare imaging radar systems with optical remote sensing sensors, namely passive sensors, which detect reflected or emitted electromagnetic radiation from natural sources. Passive sensors acquire high resolution optical images in the visible and near infrared regions of the electromagnetic spectrum, which can be used for many remote sensing applications including the detection of water bodies and flood extents, one of the focusses of this thesis. The extraction of surface characteristics depends on the detection of images covering different ranges of wavelengths, as summarized briefly in Table 2.2. Studies have verified the effectiveness of radar images for mapping flood areas, as described in Chapter 4.


Table 2.2 Electromagnetic (EMR) spectrum of interest to remote sensing

Visible, 0.4–0.7 µm
  Principal use and characteristics: Blue (0.455–0.492 µm): water body penetration and forest-type mapping. Green (0.492–0.577 µm): green reflectance peak in vegetation; plant vigour assessment. Red (0.62–0.78 µm): chlorophyll absorption; discrimination of vegetation types and man-made features such as buildings and roads. High atmospheric scattering effect. Most EMR is reflected solar radiation, therefore only usable in daylight.
  Imaging systems: Optical imagery; panchromatic, colour and false colour images.

Near Infrared (NIR), 0.7–3.0 µm
  Principal use and characteristics: High reflectance of vegetation. Discrimination of vegetation type, density and biomass content; water absorption and degree of soil moisture; NIR images can detect minerals in the 2.2 to 2.4 µm region.
  Imaging systems: Infrared imagery.

Medium Infrared (MIR), 3.0–8.0 µm
  Principal use and characteristics: Vegetation moisture content and soil moisture.
  Imaging systems: As above.

Thermal, 8.0–15.0 µm
  Principal use and characteristics: Vegetation stress analysis and soil moisture discrimination. Predominantly radiation emitted by the earth and atmosphere.
  Imaging systems: As above.

Microwave, 1 mm–100 cm
  Principal use and characteristics: Inland, microwave sensors can be used for studying water bodies, snow and ice, crops, forest cover, soil moisture and soil types. For ocean applications, radar SAR images can be used for determination of ocean waves, wind vector and direction, sea ice extent and motion, and sea surface temperature, etc. The major areas of application in this thesis use sensors at different frequencies for land and ocean: a C-band (5.3 GHz, wavelength 5.6 cm) ocean SAR image and an X-band (wavelength 3 cm) IFSAR image.
  Imaging systems: Radar, Side Looking Airborne Radar (SLAR), radiometer.

The capabilities of SAR sensors therefore surpass those of passive sensor-based imagery in certain conditions because of:

(1) Their all-weather capability (small sensitivity to clouds and light rain) as well as day and night operations (independence of sun illumination), and the increasing availability of satellite missions, enabling mapping of large areas for multi-purpose tasks.

(2) The response in the different radar wavelengths corresponds to different surface roughness (i.e. biomass, water bodies and land) and sensitivity to dielectric constants, thus enabling the interpretation of terrain cover characteristics.

(3) Different polarizations of SAR images, which are a function of the design of the antenna, can enable the acquisition of images with different polarizations, thus providing more information about the earth's surface characteristics with accurate measurements of distance.

Although there are unique capabilities of SAR sensors as mentioned above, there are still drawbacks, listed below, that will be discussed in the following sections:

• Complex interactions (marginal atmospheric effects, Figure 2.2)

• Speckle effects (difficulty in visual interpretation)

• Topographic effects (side-looking SAR is extremely sensitive to relief)

• Effects of surface roughness

Figure 2.2 has been removed due to Copyright restrictions.

Figure 2.2 Illustration of marginal atmospheric effects in Advanced Synthetic Aperture Radar (ASAR) (Agent, 2000)

2.1.1 SAR Wavelength

The wavelength of a SAR system is the determining factor for the backscattering characteristics of the transmitted pulsed signal (Lillesand et al., 2004). The relationship between wavelength λ (cm) and frequency f (Hz), the primary parameters in radar imaging systems (Table 2.1), is defined by:

λ = c / f  (Eq 2.1)

where c is the speed of light (3 × 10⁸ m/s), λ is the wavelength in centimetres and the radar frequency f is in units of Hertz. Generally, lower frequency radar signals have a higher capacity to penetrate media during imaging, while the higher frequencies are characterised by weak penetrability.
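Eq 2.1 can be checked numerically. The snippet below is a minimal sketch (the function name `wavelength_cm` is mine); it converts the 5.3 GHz C-band centre frequency of Radarsat-1/Envisat ASAR into the 5.6 cm wavelength quoted in Table 2.2.

```python
C_LIGHT = 3.0e8  # speed of light in m/s

def wavelength_cm(freq_hz: float) -> float:
    """Eq 2.1: lambda = c / f, converted from metres to centimetres."""
    return (C_LIGHT / freq_hz) * 100.0

# C-band at 5.3 GHz -> about 5.66 cm (commonly rounded to 5.6 cm)
print(round(wavelength_cm(5.3e9), 2))  # 5.66
# An X-band system at an assumed 9.6 GHz -> about 3.1 cm
print(round(wavelength_cm(9.6e9), 2))  # 3.12
```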


Therefore, to detect ground surfaces, C-band (wavelength 4–8 cm) and X-band (wavelength 2.5–4 cm) are preferable for imaging open water or flood areas, whereas L-band SAR, with its low frequency, is preferred for the analysis of vegetation and forests (Table 2.3) (Rees and Rees, 2012).

Table 2.3 Common radar remote sensing bands and their characteristics (Parker, 2013).

Band     Frequency    Wavelength   Key Characteristics
X-Band   12.5–8 GHz   2.4–3.75 cm  Widely used for military reconnaissance, mapping and surveillance (TerraSAR-X, TanDEM-X, COSMO-SkyMed)
C-Band   8–4 GHz      3.75–7.5 cm  Penetration capability of solid vegetation is limited and restricted to the top layers. Useful for sea-ice surveillance (Radarsat, ERS)
S-Band   4–2 GHz      7.5–15 cm    Used for medium-range meteorological applications, e.g. rainfall measurement, airport surveillance
L-Band   2–1 GHz      15–30 cm     Penetrates vegetation to support observation applications over vegetated surfaces and for monitoring ice sheet and glacier dynamics (ALOS PALSAR)
P-Band   1–0.3 GHz    30–100 cm    To date only used for research and experimental applications. Significant penetration capabilities regarding vegetation canopy (key element for estimating vegetation biomass), sea ice, soil, glaciers

The current major satellite imaging radar systems, including their descriptions, are listed in Table 2.4. Data derived from a C-band SAR sensor was used in this study. Since the characteristics of a SAR image are determined by the magnitude of the backscatter, which is a function of the surface and feature characteristics, bright areas in a SAR image usually indicate that a high level of energy is reflected back to the radar sensor, while dark features indicate that a low level of energy is returned. Ground surfaces that are rough relative to the emitted wavelength will appear bright, while surfaces that are smooth or flat relative to the emitted wavelength will appear dark in SAR images. Additionally, if a surface is inclined towards the radar sensor, a larger proportion of signal or energy will be returned than if the surface is inclined away from the sensor. Another important factor is the reflection strength, which relies on the dielectric constant (discussed in Section 2.2.2) of various surface structures. Theoretically, wetter objects will appear bright, while drier surfaces will appear dark. However, water bodies are the exception, since they act as flat surfaces and reflect the signals away from the antenna, appearing dark in the image (Hensley and Madsen, 2007). To be more specific, land surfaces may appear rough with short wavelength SAR signals, yet the same region will appear relatively smooth in images derived from long wavelength radar. Short and Robinson (1998) state that with an L-band sensor, a ground surface with a roughness of the order of 5 cm will appear dark in the image due to low backscatter. By contrast, the same area will appear bright in X-band radar, owing to high backscatter of the radar signals (Short and Robinson, 1998).

Table 2.4 The major commercial radar remote sensing satellites.

Satellite Mission / Instrument | Launch Date | Band | Resolution (m) | Swath Width (km) | Repeat Rate (days) | Space Agency

ERS-1/-2 | 1995/2002 | C | 25 m | 100 km | 35 d | European Space Agency (ESA)
ALOS Phased Array Synthetic Aperture Radar (PALSAR) / ALOS-3 | 2006/2015 | L | 7–100 m / 1–100 m / 0.8 m | 20–350 km / 25–490 km / 50 km | 46 d / 60 d / 14 d | National Space Development Agency of Japan
TerraSAR-X / TanDEM-X | 2007/2010 | X | 1–18 m | 5–150 km | 11 d | German mission carried out under a public-private partnership between the German Aerospace Center (DLR) and EADS Astrium (merged with Cassidian in 2013, now known as Airbus); both satellites are used as a radar interferometer for WorldDEM
COSMO-SkyMed | 2007/2008 | X | 1–100 m | 10–200 km | 16 d | Italian constellation of four satellites, belonging to the Italian Space Agency (ASI)
Radarsat-1/-2 | 1995/2007 | C | 3–100 m | 20–500 km | 24 d | Canadian commercial mission (Canada Centre for Remote Sensing, CCRS)
PAZ | 2013 | X | 1–18 m | 5–150 km | 11 d | Spanish dual-use mission, HISDESAT Spain; constellation with TerraSAR-X and TanDEM-X
Sentinel-1A/B | 2014/2016 | C | 5–100 m | 80–400 km | 12 d | European Space Agency (ESA), Global Monitoring for Environment and Security (GMES)
Radarsat Constellation Mission (RCM) | 2018 | C | 3–100 m | 20–500 km | 4 d | CCRS (three satellites)


2.1.2 Spatial Resolution

2.1.2.1 Resolution in Range (Cross-track)

Range and azimuth are the two major components of the spatial resolution of a SAR sensor. The range (cross-track) resolution is determined by the length of the radar pulse: the shorter the microwave pulse, the higher the range resolution.

The slant range resolution can be expressed as:

R_r = c τ / 2  (Eq 2.2)

where R_r is the slant range resolution in metres, which is independent of the platform's height; τ denotes the transmitted pulse length in seconds; and c is the speed of light in m/s.

The ground range resolution R_g is determined by the incidence angle θ, the angle between the radar pulse of EMR and a line perpendicular to the Earth's surface where it makes contact (shown in Figure 2.10, which will be presented in detail in Section 2.2.3), and the slant range resolution R_r:

R_g = c τ / (2 sin θ)  (Eq 2.3)

We can see from Equation 2.3 that the ground range resolution also varies inversely with the sine of the incidence angle of the radar range to the ground.

Finer range resolution can be achieved by shortening the pulse length or through pulse compression techniques. Two adjacent targets that are very close to each other can only be distinguished effectively when their separation in slant range exceeds cτ/2. Yet this is challenging: in order to maintain a certain signal to noise ratio (SNR), increasing the amplitude of a shortened pulse is required. This can instead be implemented by applying a long frequency-modulated pulse, thus enabling higher range resolution (Leberl, 1990); meanwhile, the long pulse has to be compressed using an appropriate filter.
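Equations 2.2 and 2.3 can be sketched numerically. The function names and the 0.1 µs pulse length below are illustrative assumptions, not parameters of the sensors used in this thesis.

```python
import math

C_LIGHT = 3.0e8  # speed of light in m/s

def slant_range_resolution_m(tau_s: float) -> float:
    """Eq 2.2: R_r = c * tau / 2, in metres."""
    return C_LIGHT * tau_s / 2.0

def ground_range_resolution_m(tau_s: float, incidence_deg: float) -> float:
    """Eq 2.3: R_g = c * tau / (2 * sin(theta)), in metres."""
    return slant_range_resolution_m(tau_s) / math.sin(math.radians(incidence_deg))

tau = 0.1e-6  # assumed 0.1 microsecond pulse
print(slant_range_resolution_m(tau))                   # 15.0 m
print(round(ground_range_resolution_m(tau, 30.0), 1))  # 30.0 m at 30 deg incidence
```

The example reflects the inverse dependence on sin θ noted above: the same pulse gives a coarser ground range resolution at steeper (near-nadir) viewing angles.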


2.1.2.2 Resolution in Azimuth (Along-track)

The resolution in azimuth depicts how well a radar sensor can identify two contiguous objects on the ground surface in the along-track direction. For real aperture radar (RAR), the azimuth resolution R_a relies on the beamwidth Θ of the antenna footprint. The beamwidth of an antenna measures the angular extent of the most powerful portion of the radiated energy, which is defined by the range of angles in the main lobe of the emitted radiation (Agent, 2000).

For radar antennas, the beamwidth is described as:

Θ = λ / L  (Eq 2.4)

where Θ is in radians, λ is the wavelength of the radar, and L is the length of the physical antenna. Since the beam transmitted from the antenna is fan shaped, widening with growing distance between the antenna and the ground objects (Figure 2.3), the azimuth resolution is defined by:

R_a = Θ R = λ R / L  (Eq 2.5)

where R is the slant range distance from the antenna to the target.

Figure 2.3 has been removed due to Copyright restrictions.

Figure 2.3 Impulse response to a point target (Agent, 2000)

As stated above, the improvement in the spatial resolution of SAR images is achieved by synthesizing a large antenna using a short physical antenna as the sensor moves forward. SAR systems receive signal echoes, recorded coherently, from sequential antenna positions over a long distance in the flight direction (Elachi and Van Zyl, 2006). Thus, the maximum resolution in azimuth, R_a,SAR, is defined by:

R_a,SAR = L / 2  (Eq 2.6)


The azimuth resolution in a SAR system depends only on the antenna length L, and is independent of the orbital height of the sensor platform. The effective antenna length increases with slant range distance, as an object on the ground is observed more often at far range than at near range (see Figure 2.1 in Section 2.1). So, from Equation 2.5, the longer the radar antenna, the narrower the beamwidth and the higher the azimuth resolution.
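The contrast between Eq 2.5 (real aperture) and Eq 2.6 (synthetic aperture) can be sketched numerically. The geometry below (5.6 cm wavelength, 850 km slant range, 10 m antenna) is an illustrative assumption, not a description of a particular mission, and the function names are mine.

```python
def rar_azimuth_resolution_m(wavelength_m: float, slant_range_m: float,
                             antenna_len_m: float) -> float:
    """Eqs 2.4 and 2.5: R_a = Theta * R = (lambda / L) * R for a real aperture radar."""
    return (wavelength_m / antenna_len_m) * slant_range_m

def sar_azimuth_resolution_m(antenna_len_m: float) -> float:
    """Eq 2.6: R_a = L / 2, independent of range, platform height and wavelength."""
    return antenna_len_m / 2.0

wavelength = 0.056   # C-band, 5.6 cm
slant_range = 850e3  # assumed 850 km slant range
antenna = 10.0       # assumed 10 m physical antenna

print(round(rar_azimuth_resolution_m(wavelength, slant_range, antenna), 1))  # 4760.0 m
print(sar_azimuth_resolution_m(antenna))                                     # 5.0 m
```

The kilometres-scale figure for the real aperture case shows why aperture synthesis, rather than a physically longer antenna, is what makes metre-level azimuth resolution feasible from orbit.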

2.1.3 Polarization

Polarization is defined as the orientation of the electric field vector of the transmitted and received radar signals propagated in a certain plane, in various forms such as like-polarization (HH, VV) and cross-polarization (HV), as shown in Figure 2.4 (Yamazaki, 2001).

There are four main combination options for polarization. Firstly, single polarization: the transmitted and received signals have the same polarization. Secondly, cross polarization: a different polarization is used to transmit and to receive the signal. Thirdly, dual polarization: one polarization is used to transmit the signal, and both polarizations receive it simultaneously. Fourthly, quad polarization: H and V polarizations are used on alternate pulses to transmit the signal, with both polarizations receiving simultaneously.

The characteristics of backscatter will differ for different polarizations. Since the transmitted pulses from a SAR satellite can be polarized in two directions, horizontally (H) or vertically (V), the transmitted and received pulses can be in the four possible combinations of HH, HV, VV and VH, where the first letter stands for the polarization transmitted and the second letter refers to the polarization received. The responses to each polarization vary with the different characteristics and properties of the ground surface. Hence, these responses play a key role in improving the identification of surface features and their discrimination. In the examples in Figure 2.4, single-polarization and multi-polarization, together with various radar frequencies and incidence angles, provide different scattering mechanisms (as will be introduced in Section 2.1.4) from the tree canopy.

Figure 2.4 has been removed due to Copyright restrictions.

Figure 2.4 Polarization of electromagnetic signal (JAXA)

2.1.4 Scattering Mechanisms

The common scattering mechanisms that occur when radar signals interact with the land surface include surface scattering, volume scattering and target scattering; these determine the strength of the measured signal and the resulting grey value and texture in a SAR image.

If the ground surface is smooth and flat, surface scattering will occur; water, for example, will scatter the wave away from the antenna. This will be represented as a dark area in the SAR image, as no signal returns to the antenna. When the water surface is relatively rough, the wave will mostly be scattered away from the antenna, while a small part of the radiation will be scattered back to the antenna. In this case, it will appear as a grey area in the SAR image, as shown in Figure 2.5. Volume scattering happens when there is no identifiable single scattering site; instead, the reflections come from a countless number of scattering elements, such as the tree canopy (shown in Figure 2.6). Both corner reflection and facet scattering return strong wave components to the radar antenna and thus yield larger effective power, represented as bright areas in the SAR image (shown in Figure 2.7) (Campbell, 2002, Livingstone et al., 1995).



Figure 2.5 Scattering mechanisms of surface scattering

Figure 2.6 Demonstration of surface and volume scattering

Figure 2.7 has been removed due to Copyright restrictions.

Figure 2.7 Demonstration of corner reflection (Livingstone et al., 1995)

As stated earlier, this thesis is mainly focused on water body detection using SAR images, so the major interactions between the transmitted SAR signal and a water surface under different conditions are briefly described with respect to systems and objects in Figure 2.8. The detectability of water in SAR data relies on the contrast between water areas and the surrounding land, which is strongly influenced by the roughness characteristics of the water surface and by system-specific parameters such as wavelength, incidence angle and polarization. In Figure 2.8 the following reflection and scattering types can be observed: specular (mirror) reflection, corner reflection, diffuse surface scattering, diffuse volume scattering and Bragg scattering. These effects occur when the SAR signal interacts with smooth still water, rough open water surfaces, flooded vegetation or flooded residential areas. Bragg scattering occurs when the scattered radar electromagnetic waves add coherently, producing a strong return of energy at a very precise wavelength from a rough surface.
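The Bragg resonance condition can be stated compactly. The following standard formulation, added here for reference rather than taken from the original text, relates the resonant surface wavelength $\lambda_s$ to the radar wavelength $\lambda$ and the local incidence angle $\theta_i$:

```latex
\lambda_s = \frac{n\,\lambda}{2\sin\theta_i}, \qquad n = 1, 2, \ldots
```

For first-order Bragg scattering ($n = 1$) at C-band ($\lambda \approx 5.6$ cm) and a $23^\circ$ incidence angle, the resonant ocean waves are centimetre-scale ripples, which is why wind-roughened water returns measurable backscatter while still water appears dark.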

Figure 2.8 has been removed due to Copyright restrictions.

Figure 2.8 Scattering mechanisms of water and land surfaces under different conditions (Agent, 2000)

2.2 SAR System Parameters and Properties

Fundamentally, SAR imagery, which represents microwave backscattering from the terrain surface (also referred to as the radar backscatter cross-section), can be used to delineate surface features such as vegetation, water bodies and soil. SAR system parameters have a significant influence on the identification of ground information. A brief discussion of these important factors is given in the following sections, which describe wavelength, polarization, spatial resolution, incidence angle, look direction, backscattering and speckle.

2.2.1 Radar Equation

The radar range equation represents the physical dependencies of the received power, i.e. the backscatter power from the illuminated regions on the ground returned to the receiving antenna. It is a function of the transmitted power $P_t$, the slant range $R$, the antenna gain $G$, and the reflection characteristics of the terrain surface, described by the radar cross-section $\sigma$ (RCS) [m$^2$]. The RCS is independent of the strength of the emitter and the distance from it, because it is a property of the object's reflectivity. It is given by the radar equation below (Ulaby et al., 1982).

$$P_r = \frac{P_t\, G^2\, \lambda^2\, \sigma}{(4\pi)^3 R^4} \qquad \text{(Eq 2.7)}$$
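As a minimal numeric sketch of the radar range equation (Eq 2.7), the snippet below evaluates the received power for illustrative, hypothetical system values; the constants are not taken from any sensor described in this thesis.

```python
import math

def received_power(pt, gain, wavelength, rcs, slant_range):
    """Radar range equation: Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)."""
    return (pt * gain ** 2 * wavelength ** 2 * rcs) / ((4 * math.pi) ** 3 * slant_range ** 4)

# Hypothetical C-band values: 5 kW peak power, 35 dB antenna gain,
# 5.6 cm wavelength, 1 m^2 RCS, 800 km slant range.
pr = received_power(pt=5e3, gain=10 ** (35 / 10), wavelength=0.056, rcs=1.0, slant_range=8e5)
print(f"received power: {pr:.3e} W")

# The R^-4 dependence: doubling the slant range cuts the return by a factor of 16.
print(received_power(5e3, 1.0, 0.056, 1.0, 8e5) / received_power(5e3, 1.0, 0.056, 1.0, 1.6e6))
```

The steep $R^{-4}$ fall-off is why spaceborne SAR needs high transmit power and antenna gain compared with airborne systems.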

2.2.2 Backscatter (Surface Roughness, Dielectric Properties)

The RCS is related to the size of the resolution cell, i.e. the azimuth resolution $\delta_a$ and the ground range resolution $\delta_{rg}$, which is given by:

$$\sigma = \sigma^0\, \delta_a\, \delta_{rg} \qquad \text{(Eq 2.8)}$$

where $\sigma^0$ denotes the normalized backscattering coefficient, which is usually defined per unit area on the ground and expressed logarithmically in decibels [dB]:

$$\sigma^0\,[\text{dB}] = 10\log_{10}\langle I \rangle + K + 10\log_{10}(\sin\theta) \qquad \text{(Eq 2.9)}$$

where $\langle I \rangle$ represents the average pixel intensity, $K$ denotes the absolute calibration constant, and $\theta$ denotes the incidence angle at the distributed object for a ground-range projected SAR image, which will be introduced in Section 2.2.3. In addition, surface roughness is considered the main factor affecting radar backscattering. The interaction between radar signals and the ground surface depends on the incidence angle (Lillesand et al., 2004).
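To make the calibration relation of Eq 2.9 concrete, the sketch below converts a mean pixel intensity to a backscatter coefficient in dB. The intensity, calibration constant and incidence angle are hypothetical illustrative values, not taken from the Radarsat-1 or Envisat products used later.

```python
import math

def sigma0_db(mean_intensity, k_cal_db, incidence_deg):
    """Normalized backscatter coefficient (Eq 2.9):
    sigma0[dB] = 10*log10(<I>) + K + 10*log10(sin(theta))."""
    return (10 * math.log10(mean_intensity)
            + k_cal_db
            + 10 * math.log10(math.sin(math.radians(incidence_deg))))

# Hypothetical values: mean intensity 250, calibration constant -32 dB, 35 deg incidence.
s0 = sigma0_db(250.0, -32.0, 35.0)
print(f"sigma0 = {s0:.2f} dB")  # falls in the 'moderate backscatter' band of Table 2.5
```

A value in the $-20$ to $-10$ dB band would be classified as moderate backscatter (e.g. moderately rough surfaces) according to Table 2.5.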

Backscatter variations may result from the interaction of ground roughness (see the brief summary in Table 2.5) and incidence angle (Hensley and Madsen, 2007). As mentioned in Section 2.1.1, the bright and dark areas of a SAR image represent varying degrees of energy reflected back to the sensor.



Table 2.5 The typical values of backscatter coefficients and corresponding ground features

Very high backscatter (above -5 dB): man-made objects (urban), terrain slopes towards the radar, very rough surfaces, very steep radar look angle (incidence angle less than 20 degrees)

High backscatter (-10 dB to 0 dB): rough surfaces, dense vegetation (forest)

Moderate backscatter (-20 dB to -10 dB): medium-density vegetation, agricultural crops, moderately rough surfaces

Low backscatter (below -20 dB): smooth surfaces, calm water, roads, sand, very dry terrain

Figure 2.9 shows that the backscatter of three representative surfaces depends on the incidence angle. At small incidence angles, smooth surfaces (roughness much smaller than the radar wavelength) tend to produce mirror-like, or specular, reflection, and their radar backscatter, measured in dB, decreases significantly at incidence angles greater than 20 degrees, because at shallow angles the radar signal bounces off the surface away from the antenna. At incidence angles of less than 20 degrees, a rough surface (roughness much larger than the radar wavelength) scatters most of the energy in different directions, so the total backscatter reflected back is lower than from a smooth surface. On the other hand, as the incidence angle increases from 40 to 50 degrees, the backscatter of the rough surface becomes higher than that of the smooth surface, because the rough surface generates more random scattering.

Figure 2.9 has been removed due to Copyright restrictions.

Figure 2.9 Typical radar backscatter as a function of incidence angle for representative surfaces (Farr, 1993)

The complex dielectric constant is a measure of the electrical properties, or reflectivity, of surface materials, and is controlled by the amount of moisture content. Dry soil has a low dielectric constant and low radar reflectivity, while saturated soil is a strong reflector; moist and partially frozen soils have intermediate values. The dielectric constant is also referred to as the relative dielectric permittivity (RDP), and largely governs the strength of the reflected radar signal. For most natural materials the dielectric constant ranges from 3 to 8 in dry conditions (Martinez and Byrnes, 2001). The dielectric constant of water is approximately 80, roughly ten times higher than that of dry soil.

2.2.3 Incidence Angle, Azimuth and Look Direction

The incidence angle is defined as the angle at the ground surface between the incident radar beam and the vertical direction, and is usually used to describe the angular relationship between the radar signal and the ground, surface layers or a target (Figure 2.10).

Figure 2.10 has been removed due to Copyright restrictions.

Figure 2.10 Geometry of different radar angles (Farr, 1993)

Basically, reflectivity from the scattering surface decreases with increasing incidence angle, while the contrast between a water body and land is enhanced as the incidence angle increases; hence, a high incidence angle is more suitable for the purpose of water detection (Solbo et al., 2003). The incidence angle is also affected by topography: the local incidence angle refers to the angle between the incident radar beam and the direction perpendicular to the local terrain surface. The local incidence angle is deemed a key element affecting the image brightness and tone of terrain features in SAR data (Figure 2.11).



Figure 2.11 Relationship of radar incidence angle and corresponding backscatter intensity

2.2.4 Geometric Effects

The scan direction of a SAR image is the direction of the radar beam illumination from the aircraft or spacecraft, and is generally an important parameter for analysing the characteristics of landscape features. The geometric distortions in SAR data are mainly caused by terrain slope and incidence angle. Since the radar system measures the difference in signal travel times from targets on the ground to the antenna, positions of equal range lie on circles centred on the SAR sensor: terrain features at different locations may therefore have the same slant range to the antenna. The side-looking geometry of an imaging radar system produces some characteristic distortions, known as layover, foreshortening and shadow (Figure 2.12).



Figure 2.12 has been removed due to Copyright restrictions.

Figure 2.12 Foreshortening, layover, and shadow; the three-dimensional world is collapsed to two dimensions in conventional SAR imaging (Agent, 2000)

2.2.4.1 Radar Shadows

Terrain slope may obstruct areas of the imaged region from illumination by the radar beam, causing shadows, which become more severe with increasing incidence angle (described in Section 2.2.3), especially when the terrain is steeper than the incidence angle of the beam. When the scan direction is perpendicular to the topographic relief, radar shadows in an image tend to be maximized, whereas when the scan direction is parallel to the terrain they tend to be minimized. Consequently, linear features that are displayed dark in one SAR image, and whose contents may thus be visually imperceptible, may appear as bright features in another image acquired with a different scan direction.

Moreover, the extent of shadows depends not only on local relief, but also on the position of features with respect to the flight direction (Lin, 2013). For instance, ground features in the near-range portion of an image cast smaller shadows, while those at the far-range edge of a radar image cast larger shadows. Shadows caused by large elevation differences are undesirable, because they mask large areas from observation.

Figure 2.13 has been removed due to Copyright restrictions.

Figure 2.13 Geometric distortions in SLAR imaging radar system (Jensen, 2009)



2.2.4.2 Radar Layover

Radar layover is generated when the signal from the top of a target is returned to the sensor before the signal from its base; it occurs when the incidence angle is smaller than the slope of the terrain facing the antenna. As radar measures distances by timing the transmission and reception of an echo, and the top of a hill may be closer to the sensor than its base, the radar signal that hits the upper part of the object is returned earlier than the signal from its base. This effect is referred to as layover (Figure 2.13). Likewise, if the top and the base of an object occur at the same range from the antenna, they will appear in the same position on the radar image.

In Figure 2.14, since radar measures slant range, the beam reaches point B on the slope earlier than point A. In other words, point B is closer to the antenna than point A, and the points are projected onto the slant range as the line bac. As shown in the figure, these three points are not in the same order as on the terrain, causing the layover effect.

Figure 2.14 has been removed due to Copyright restrictions.

Figure 2.14 Radar Layover effect of terrain feature (Agent, 2000)

2.2.4.3 Foreshortening

Another form of image distortion in radar images is called foreshortening, which is caused by slopes facing the antenna that are less steep than the incidence angle. In this case, the terrain points A, B and C appear in the same order as points a, b and c in the image, but the distances ab and bc in Figure 2.15 are distorted. This is because the sloping surface compresses distances in the SAR image, on account of the shorter time delay between echoes from the top and base of the target.


Figure 2.15 has been removed due to Copyright restrictions.

Figure 2.15 Radar foreshortening effect of terrain feature (Agent, 2000)

2.2.5 Speckle Effect and Filtering

Since radar signals are coherent, the energy scattered from the ground surface produces constructive and destructive interference in the image. This gives rise to random bright and dark specks in radar images, known as radar speckle. Bright returns are caused by high-amplitude peaks of the radar waveform adding in phase; conversely, when high peaks coincide with low values they tend to offset each other, leading to dark returns in the image (Campbell, 2002). This apparent noise in SAR images not only reduces the interpretability of fine details in the image, but also hinders the automatic processing of SAR data. Reducing speckle noise improves radiometric resolution; however, this is inevitably at the expense of spatial resolution.

In order to reduce radar image speckle, multi-look processing is usually applied, which averages N samples of the same targets (Lillesand et al., 2004). The speckle noise still inherent in the actual SAR data can be reduced further through processing tasks such as filtering. There are two alternative filtering strategies, non-adaptive and adaptive filters (Lillesand et al., 2004). Non-adaptive filters apply a single filter homogeneously to the entire image, such as the UAF, which will be described in more detail in Section 3.11.1. Adaptive filters, such as the Frost, Lee, Gamma, mean and median filters (described in Section 3.11.2), require more computation and apply more complicated filters locally to different ground features, thus improving the preservation of features and edge information.
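The two ideas above can be sketched in a few lines. The following toy NumPy example, which is an illustration rather than the UAF or any of the named filters, shows multi-look averaging over azimuth rows and a plain 3x3 median filter suppressing an isolated speckle-like spike.

```python
import numpy as np

def multilook(img, n):
    """Multi-look processing: average every n consecutive azimuth rows,
    trading spatial resolution for reduced speckle variance."""
    rows = (img.shape[0] // n) * n
    return img[:rows].reshape(-1, n, img.shape[1]).mean(axis=1)

def median_filter3(img):
    """Plain 3x3 median filter (edge-replicated padding), a simple
    smoothing filter that suppresses isolated speckle spikes."""
    padded = np.pad(img, 1, mode="edge")
    windows = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

noisy = np.array([[10., 10., 90.],
                  [10., 10., 10.],
                  [10., 10., 10.]])
print(median_filter3(noisy))  # the isolated bright speck at (0, 2) is suppressed
```

The median filter replaces each pixel with the median of its 3x3 neighbourhood, so a single bright speck cannot survive, while a genuine edge (where roughly half the window is bright) is preserved better than with a mean filter.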



2.3 Interferometric Synthetic Aperture Radar (IFSAR)

Radar interferometry systems can be divided into single-pass interferometry (SPI), used for topography measurement, and repeat-pass interferometry (RPI), used for topography and deformation detection (shown in Figure 2.16 and Eq 2.10-2.11). In SPI mode, the interferometric data are collected by two antennas carried on the same platform: one antenna transmits and both receive the returned signals. SPI systems have little or no temporal decorrelation, which represents scene changes between observations. In the case of RPI, the two SAR observations are acquired at separate times but from spatially close positions (Rosen et al., 2000). The time intervals vary from seconds to years. Temporal decorrelation is then a major issue limiting observations, related to the wavelength, the temporal baseline and the spatial baseline between the acquisitions of the two datasets.

Figure 2.16 has been removed due to Copyright restrictions.

Figure 2.16 Illustration of single pass interferometry (left) and repeat pass interferometry (right) (Rosen et al., 2000)

SPI phase equation: $\phi = \dfrac{2\pi}{\lambda}\,\Delta\rho$ (Eq 2.10)

RPI phase equation: $\phi = \dfrac{4\pi}{\lambda}\,\Delta\rho$ (Eq 2.11)

where $\Delta\rho$ denotes the differential range between corresponding pixels in the two SAR images. Radar interferometry exploits the difference in phase measurements between the two complex SAR observations, acquired by two antennas either at different orbital positions or at different times. The interferometric phase represents the measured difference in range to the two antennas, which enables the determination of terrain elevations.

RPI allows the detection and mapping of changes in spatial and dielectric properties of the land surface by studying the temporal and spatial coherence characteristics between the two images.

The information obtained from these interferometric datasets can be used to map the topography or deformation of the ground. Earth scientists commonly use radar interferometry for a wide range of applications, including measuring surface changes caused by earthquakes, volcanoes, glacier flow, ocean currents, landslides, vegetation change and ground subsidence.


Chapter 3 Image Processing for the Extraction of Features in Images

3.1 Introduction

This chapter describes the image processing approaches applied in this thesis. The main focus is mathematical morphology, since morphological algorithms will be applied extensively for tropical cyclone (TC) eye extraction and water body delineation in Chapter 5. The aspects of mathematical morphology covered include the selection of an appropriate structuring element, elementary binary erosion and dilation, opening and closing operations, the top-hat transformation, skeleton representation, skeleton pruning based on Discrete Curve Evolution (DCE), and watershed segmentation. The main types of edge extraction methods are also presented, including the classic Roberts, Sobel, Prewitt, LoG and Canny edge operators for locating and orientating discontinuities in images acquired under different conditions. Finally, the speckle noise filters, i.e. the UNSW (University of New South Wales) Adaptive Filter (UAF) and the median filter, which will be utilized in the two case studies in Chapter 5 for smoothing noise along target edges, are reviewed.

3.2 Brief History of Mathematical Morphology

A brief summary of the history of the development of mathematical morphology is given in this section. Mathematical morphology was initially introduced in the 1960s by Georges Matheron and Jean Serra, and has been developed further since (Serra, 1982, Matheron and Serra, 2002). It is a method of nonlinear image processing and analysis that focuses on the forms and geometric structure of objects in an image. The original approach was restricted to binary images; it was then extended to grayscale morphology by Sternberg (1983), Rosenfeld (described in Gonzales and Woods, 2002) and Shih and Mitchell (1989), who provided new insight into greyscale morphological processing as a technique of threshold decomposition. Many related studies have been conducted in different fields since 1985, in which numerous sophisticated morphological algorithms and widespread applications have been successfully developed by researchers in the field.

It is widely acknowledged that mathematical morphology provides a wide range of nonlinear, generally lossy, image processing and pattern analysis approaches, including enhancement, segmentation, the hit-or-miss transform, edge detection, texture analysis, skeletonization, grey-scale morphology, granulometry, distance transformation, curve filling and pruning for image pre- and post-processing. These methods have been successfully applied in diverse fields such as robot vision, inspection, microscopy, biology, medical imaging, geoscience and remote sensing (Soille, 2003, Maragos and Ziff, 1990).

3.3 Morphological Structuring Element

Fundamentally, in binary morphology geometric feature filtering underpins the development of morphological operations. The idea is to probe an image using a simple, pre-defined shape called the structuring element. Quantifying whether the structuring element fits an object within the image, and which structuring element is suitable for the image, was emphasized by Matheron (1989). The choice of structuring element is highly dependent on the specific size and shape of the targets in the images, because a proper choice of shape governs the accuracy of extraction (as shown in Figure 3.1). Typical uses are to eliminate a few pixels, fill gaps between objects or features, or connect separate blobs of pixels to improve an object's shape. When the size of a feature is larger than the structuring element, or its shape differs from the structuring element, the feature will be dilated; on the other hand, if the feature within the image is smaller than the structuring element, it will tend to be eliminated (more details on the practice of dilation and erosion are presented in Section 3.4).

Additionally, morphological filters built from the geometric filtering operations of erosion and dilation (Section 3.4) can be used to detect edges. Since a complex structuring element can be decomposed and applied iteratively, morphological operations have been strongly promoted in the last few decades. They can simplify targets within an image while preserving the crucial shape characteristics and removing irrelevancies. These morphological operations can act on cracks, holes, corners, fillets and wedges by working with structuring elements (SE) of diverse shapes and sizes (Serra, 1982).

Figure 3.1 has been removed due to Copyright restrictions.

Figure 3.1 A non-flat structuring element (left) and a flat structuring element (right) used for dilation of a 1D image. The dashed line is the result of dilating the original shape, shown as a solid line (the dots shown in the SEs are used as the centre of the structuring element) (Beucher and Meyer, 1992)

3.4 Binary Erosion and Dilation

Dilation can be described as adding pixels to the edges of objects in the image, as a function of the defined size and shape of the structuring element. In other words, the structuring element is dragged around the edges of the image to determine whether additional pixels will be added beyond the edge. The minimum requirement in dilation processing is that at least one pixel of the structuring element remains inside the original shape; the remaining parts of the structuring element, as it is dragged around the image, are then added to the image (as shown in Figure 3.2). If square A is dilated by structuring element B, the original shape A is expanded, or elongated, by the shape of B, as illustrated in Eq 3.1 and Figure 3.2.

Figure 3.2 has been removed due to Copyright restrictions.

Figure 3.2 Dilation of shape X by a circular structuring element B; the result is an expansion of X (Soille, 2003)

Erosion, on the other hand, subtracts pixels from the original image. The minimum requirement is that the whole of the structuring element must fit inside the original image. Likewise, a reference point on the structuring element is needed: if the whole structuring element fits inside the image, the reference point is shaded, meaning that it remains part of the image. Note the difference between dilation and erosion: erosion retains the reference point only if the whole structuring element fits inside the image, while dilation adds all the pixels of the structuring element to the image even if only one pixel of the structuring element overlaps the image. If square A is eroded by structuring element B, the original shape A is reduced, or shrunk, by the shape of B, as illustrated in Eq 3.2 and Figure 3.3.

Figure 3.3 has been removed due to Copyright restrictions.

Figure 3.3 Erosion of shape X by a circular structuring element B; the result is a reduction of X (Soille, 2003)

We define a binary discrete image as a set A, and a structuring element as a set B, where both A and B are subsets of the discrete Euclidean space $\mathbb{Z}^2$. A represents the foreground of the image and its complement the background, with default values of 1 and 0, respectively. $\hat{B}$ is the symmetrical set of B with regard to the origin, called the reflection or transposition of B. The basic morphological set operations are defined below as:

$$\text{Dilation:}\quad A \oplus B = \{x \mid (\hat{B})_x \cap A \neq \emptyset\} \qquad \text{(Eq 3.1)}$$

$$\text{Erosion:}\quad A \ominus B = \{x \mid B_x \subseteq A\} \qquad \text{(Eq 3.2)}$$

where the set subtraction $\ominus$ and set addition $\oplus$ of Minkowski set algebra denote erosion and dilation, respectively; Minkowski algebra describes the principles of mathematical morphology in terms of algebraic relations involving erosion, dilation and other basic set-theoretic operations. The symbols $\oplus$ and $\ominus$ can denote dilation and erosion in either binary or grayscale images. Two basic relationships are associated with dilation, commutativity and associativity, given in Eq 3.3 and Eq 3.4, respectively:

$$A \oplus B = B \oplus A \qquad \text{(Eq 3.3)}$$

$$(A \oplus B) \oplus C = A \oplus (B \oplus C) \qquad \text{(Eq 3.4)}$$

$$A \ominus B \neq B \ominus A \qquad \text{(Eq 3.5)}$$


Note that the erosion operation is non-commutative (Eq 3.5). Associativity allows us to perform iterated dilations without having to consider which dilation should be performed first. For erosions, a chain rule on the structuring element allows a large erosion to be decomposed, denoted as Eq 3.6:

$$A \ominus (B \oplus C) = (A \ominus B) \ominus C \qquad \text{(Eq 3.6)}$$

This means that if the result of $B \oplus C$ is too large to apply directly, we can erode iteratively by B and then by C. Scalar multiplication also plays an important role in sizing and particle analysis, as defined in Eq 3.7 and Eq 3.8, where $\tau$ can be any real number:

$$\tau(A \oplus B) = \tau A \oplus \tau B \qquad \text{(Eq 3.7)}$$

$$\tau(A \ominus B) = \tau A \ominus \tau B \qquad \text{(Eq 3.8)}$$

The following identities demonstrate how iterated morphological operations work. Since $2B = B \oplus B$, we can derive the operations shown in Eq 3.9 and Eq 3.10:

$$A \ominus 2B = A \ominus (B \oplus B) = (A \ominus B) \ominus B \qquad \text{(Eq 3.9)}$$

$$A \ominus 3B = ((A \ominus B) \ominus B) \ominus B \qquad \text{(Eq 3.10)}$$

Furthermore, the iterative evaluation can be applied for any positive integer $n$ (Eq 3.11):

$$nB = \underbrace{B \oplus B \oplus \cdots \oplus B}_{n\ \text{times}} \qquad \text{(Eq 3.11)}$$
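The set definitions of Eq 3.1 and Eq 3.2 translate directly into code. The following is a minimal NumPy sketch of binary dilation and erosion for a symmetric structuring element (so that the reflection of B equals B); it is for illustration only and ignores the origin and reflection subtleties of a non-symmetric SE.

```python
import numpy as np

def dilate(a, b):
    """Binary dilation A (+) B: set a pixel if b, centred there, hits the foreground."""
    h, w = b.shape
    padded = np.pad(a, ((h // 2,) * 2, (w // 2,) * 2))
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.any(padded[i:i + h, j:j + w] & b)
    return out

def erode(a, b):
    """Binary erosion A (-) B: keep a pixel only if b fits entirely inside the foreground."""
    h, w = b.shape
    padded = np.pad(a, ((h // 2,) * 2, (w // 2,) * 2))
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.all(padded[i:i + h, j:j + w][b == 1])
    return out

square = np.zeros((7, 7), dtype=int)
square[2:5, 2:5] = 1                                  # 3x3 foreground square
cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])   # 3x3 cross-shaped SE
print(dilate(square, cross).sum())  # 21: the square grows by its 4-neighbours
print(erode(square, cross).sum())   # 1: only the centre pixel survives
```

Dilating the 3x3 square by the cross adds the twelve 4-adjacent border pixels, while eroding it leaves only the centre, where the whole cross fits inside the square.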

Figure 3.4 has been removed due to Copyright restrictions.

Figure 3.4 Demonstration of dilation results using different structuring elements (Soriano, 2012). The first column shows the original shapes, and the first row shows five types of structuring elements (SE). The second to fourth rows show each shape dilated by each SE from the first row; the result varies with the shape of the SE. The orange squares mark the centres of the SEs



Figure 3.5 has been removed due to Copyright restrictions.

Figure 3.5 Demonstration of erosion results using different structuring elements (Soriano, 2012). The first column shows the original shapes, and the first row shows five types of structuring elements (SE). The second to fourth rows show each shape eroded by each SE from the first row; the result varies with the shape of the SE. The orange squares mark the centres of the SEs

3.5 Binary Opening and Closing

Besides the two primary operations of erosion and dilation, two secondary operations play significant roles in morphological image analysis: opening and its dual, closing. Both are derived from the fundamental operations of erosion and dilation (Section 3.4), yet they possess more geometric formulations for examining how well the structuring element suits its application.

Figure 3.6 has been removed due to Copyright restrictions.

Figure 3.6 Opening of a discrete binary image X (13 x 10), shown in (a), by a 2 x 2 structuring element B, shown in (b), resulting in the grey pixels with the white background pixels in (c) (Soille, 2003)

Figure 3.7 has been removed due to Copyright restrictions.

Figure 3.7 Closing of a discrete binary image X (13 x 10), shown in (a), by a 2 x 2 structuring element B, shown in (b), resulting in the grey pixels with the white background pixels in (c) (Soille, 2003)

The opening and closing operations can be construed as follows. The opening of image A by structuring element B is defined as a composition of erosion and then dilation; it is obtained by first eroding A by B and then dilating the resulting image by B (Eq 3.12). The basic concept of opening is to remove all pixels in areas that are too small to be covered by the structuring element (probe). In closing, the reverse order of operations takes place: holes and concavities that are smaller than the structuring element are filled in, obtained by first dilating A by B and then eroding the resulting image by B (Eq 3.13). Such filters can be applied to suppress object characteristics or to discriminate objects based on their shapes or sizes.

$$\text{Opening:}\quad A \circ B = (A \ominus B) \oplus B \qquad \text{(Eq 3.12)}$$

$$\text{Closing:}\quad A \bullet B = (A \oplus B) \ominus B \qquad \text{(Eq 3.13)}$$

where, following the Minkowski notation used earlier, $\circ$ denotes opening and $\bullet$ denotes closing.
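As a small, hypothetical illustration of Eq 3.12 and Eq 3.13, the sketch below uses SciPy's `binary_opening` and `binary_closing` on a toy image: opening removes a speck smaller than the SE while keeping a larger block, and closing fills a hole smaller than the SE.

```python
import numpy as np
from scipy import ndimage

se = np.ones((3, 3), dtype=int)          # 3x3 square structuring element

# Opening removes features smaller than the SE but keeps the larger block.
img = np.zeros((7, 7), dtype=int)
img[1:4, 1:4] = 1                        # 3x3 block: survives opening
img[5, 5] = 1                            # isolated speck: removed
opened = ndimage.binary_opening(img, structure=se)
print(opened.sum())                      # 9: block kept, speck gone

# Closing fills holes and concavities smaller than the SE.
img2 = np.zeros((7, 7), dtype=int)
img2[1:6, 1:6] = 1                       # 5x5 block
img2[3, 3] = 0                           # one-pixel hole
closed = ndimage.binary_closing(img2, structure=se)
print(closed.sum())                      # 25: the hole is filled
```

This filtering behaviour is exactly what the later case studies rely on: opening suppresses speckle-like "salt" specks, while closing fills small dark gaps inside a detected region.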

Opening is a very effective approach for separating objects of a particular shape from the background, although it is still some way from being a universal 2D object recognizer/segmenter. The choice of structuring element should be made carefully; otherwise many desirable objects will be wrongly removed while undesirable ones remain, because a delicate balance is sometimes required to suit the particular situation. Opening can eliminate small features and texture fluctuations, giving the image a more matt appearance, and it can also be applied as a filter to remove 'salt' noise in images.

Closing is similar to dilation to some extent, in that it tends to expand the edges of the foreground areas (grey in a discrete binary image) and likewise shrink the background holes (white pixels) in such areas, but it is less damaging to the original edge shape. The reason is that the precise operation is determined by the structuring element, as with other morphological operators. The effect of the operation is thus to preserve background areas that have a shape analogous to the structuring element, or that can completely contain the structuring element, while eliminating other background pixels.

3.6 Top-hat Transform

Image enhancement is another challenge in producing well-delineated watershed transformation results for SAR images, as will be described in Chapters 4 and 5. Low contrast maintains detailed edge information but may lead to over-segmentation with an enormous number of trivial regions; in contrast, an excessively high contrast ratio can produce appropriate region segmentation at the expense of edge detail. Therefore, in order to improve segmentation quality, Lee et al. (1987) proposed an extension of morphological operators called the top-hat transformation.

As the closing of an image contains the input image, the set subtraction of the input image from the image after closing gives the closing top-hat operator (Eq 3.14):

$$\text{Closing top-hat:}\quad (A \bullet B) - A \qquad \text{(Eq 3.14)}$$

Theoretically, the top-hat transform is applied for extracting small or thin, bright or dark features in an image. Setting a simple threshold is not an effective method, because of variations in the background. If the intensity values in a grey-level image are considered as altitudes or elevations, then a scene usually contains mountain tops (brightest points) and valley bottoms (darkest regions). Hence, with poor contrast in an image, thresholding often fails to isolate adjacent mountain tops separated by narrow valleys. To solve this problem, an opening top-hat filter, also named the white top-hat transformation, is applied to enhance bright points (maxima), and a closing top-hat filter, also named the black top-hat or bottom-hat transformation, is applied to enhance the valleys (minima) (Beucher and Lantuéjoul, 1979).

The mathematical definition of the top-hat transformation denotes f as a greyscale two-dimensional image and b as a greyscale structuring element. The white top-hat transformation of f is defined as f minus its opening by b; similarly, the black (bottom-hat) top-hat transformation of f is defined as the closing of f by b minus f. The difference in each case yields an image comprising only the removed detail. The white (opening) top-hat transformation (Eq 3.15) is used for light objects on a dark background, and the black top-hat (closing, or bottom-hat) transformation for dark objects on a light background (Eq 3.16) (Soriano, 2012):

(Eq 3.15)   WTH(f) = f − (f ∘ b) = f − ((f ⊖ b) ⊕ b)

(Eq 3.16)   BTH(f) = (f • b) − f = ((f ⊕ b) ⊖ b) − f
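The two transforms can be sketched with SciPy's greyscale morphology routines. This is a minimal illustration, not the implementation used in this thesis; the test image and the flat 3×3 structuring element are chosen only for demonstration:

```python
import numpy as np
from scipy import ndimage as ndi

# small greyscale image: a bright 1-pixel peak sitting on an uneven background
f = np.array([[10, 10, 10, 10, 10],
              [10, 20, 20, 20, 10],
              [10, 20, 90, 20, 10],
              [10, 20, 20, 20, 10],
              [10, 10, 10, 10, 10]], dtype=float)

size = (3, 3)  # flat 3x3 structuring element

# Eq 3.15: white top-hat = f - (f opened by b)  -> keeps small bright details
wth = f - ndi.grey_opening(f, size=size)

# Eq 3.16: black top-hat = (f closed by b) - f  -> keeps small dark details
bth = ndi.grey_closing(f, size=size) - f

# the isolated bright peak survives the white top-hat (90 - 20 = 70 here),
# while the slowly varying background is suppressed
```

SciPy also provides `ndi.white_tophat` and `ndi.black_tophat`, which compute the same quantities directly.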

To summarize, a significant benefit of the top-hat transformation is its ability to correct areas affected by uneven illumination. One important application of these transforms is in eliminating objects from tropical cyclone (TC) images following opening (Soriano, 2012). The example in Figure 3.8 demonstrates how a top-hat filter preserves sharp peaks and removes other poorly contrasted features. The image consists of a series of mountains with poor contrast between the peaks on the left and the valleys on the right. When the structuring element is slightly wider than the sharp peaks but much narrower than the mountains, erosion first trims the sharp peaks away; dilation then reconstructs the mountains without them. Subtracting this opened result from the original image yields the top-hat filtered image, in which only the sharp peaks appear, eliminating the poor contrast caused by non-uniform illumination (bottom of Figure 3.8).

Figure 3.8 has been removed due to Copyright restrictions.

Figure 3.8 Demonstration of opening or white top -hat transformation (Soriano, 2012)

Figure 3.9 has been removed due to Copyright restrictions.

Figure 3.9 Demonstration of closing or black top -hat/bottom -hat transformation (Soriano, 2012)

Figure 3.9 illustrates how a closing top-hat filter emphasizes sharp valleys. For convenience, opening top-hat filters will hereafter be referred to as 'top-hat filters' and closing top-hat filters as 'bottom-hat filters'. In Figure 3.9, the image is a sequence of valleys with uneven contrast. Valleys with sharp bottoms are trimmed off by morphological dilation, the smoother bottoms of the valleys are reconstructed by erosion, and subtracting the original image from this closed result yields only the sharp valleys, inverted.

Since the top-hat transformation is susceptible to the high-frequency noise typical of SAR images, it is suggested that a noise filter be applied before implementing the morphological operator (Soille, 2003). The top-hat transform enables researchers to extract small or narrow, bright or dark objects from a varying background, and it can enhance the input image by sharpening small grey-level differences along boundaries.

3.7 Skeletonization

3.7.1 Skeleton Based on Medial Axis

Before introducing the principle of skeletonization, another morphological approach that commonly causes confusion because of its similarity, morphological thinning, should be mentioned. Thinning is an erosion-based process that does not break connected objects, and is used to separate objects that touch at parts of each other; nonetheless, thinning does not alter an object's shape or topology. Skeletonization, on the other hand, tends to extract more structural information about an object for shape analysis, and can be thought of as a thinning process that produces the skeleton of an object. The concept of skeletonization was initially proposed by Blum (Blum, 1967, Blum, 1973), who emphasized certain properties of images resulting from the Medial Axis Transform (MAT) or Symmetry Axis Transform (SAT). Its definition has subsequently been studied comprehensively, with a series of applications developed based on its mathematical properties and its relation to the boundaries of the object, in both 2D and 3D (Siddiqi and Pizer, 2008, De Floriani and Spagnuolo, 2007). Combining the medial axis with an associated radius function of the maximal inscribed discs is called the medial axis transform (MAT). This is an alternative but slightly different concept to skeletonization: the latter is often used on a binary image to present the simple skeleton of an object, whereas in the MAT each point of the skeleton of a grey-level image has an intensity value describing its distance to an edge of the original object.

Skeletonization reduces the foreground regions of a binary image by means of morphological operators, producing a skeletal residue that preserves the extent and connectivity of the original object while discarding most of the original foreground pixels. The skeleton is defined by the boundary points nearest to each inner point of the object: every inner point having at least two closest boundary points belongs to the skeleton, as shown in Figure 3.10. It is noted that the skeleton is sometimes called the medial axis, because its pixels are located at mid-points along the local symmetry axes of the region, forming the set of points with more than one equidistant closest point on the object boundary. The topological skeleton was developed by Blum (Blum, 1967) as a useful tool for biological shape recognition.

Figure 3.10 has been removed due to Copyright restrictions.

Figure 3.10 Indication of the geometric relationship between a point on a 1D medial axis and its corresponding boundary points. The tangent circle shrinks along with the boundary, so the boundary is perpendicular to its radius vector, and the medial axis bisects the angle between the two radius vectors. The medial axis indicated in the rectangle defines the centres of the maximal discs. (Fisher, 2014)

3.7.2 Skeleton Representation

The morphological skeleton operations tend to unify a large number of skeletonization algorithms proposed by different researchers. Many of the algorithms in the literature use different definitions, parameters and thresholds to demonstrate their performance. This section is aimed at improving skeleton algorithms in terms of decomposition and reconstruction.

A homotopy is a continuous deformation of one function into another within a topological space. First, the identification of points whose removal would change the homotopy is critical in shape representation; with a few restrictions placed on the structuring elements, a simple search approach can find such points. Secondly, the identification of the further points necessary to preserve homotopy is a key step for reconstruction. With a more constructive technique, the algorithm can find the points essential for preserving homotopy, although this limits it to a more restricted set of structuring elements than the simple search of the first step. This approach was proposed for the preservation of skeletons in mathematical morphology, homotopy being an important invariant in algebraic topology (Ji and Piper, 1992).

Another method to obtain the skeleton is the Euclidean distance transformation with maximum value tracking, proposed by Shih (Sternberg, 1986). The Euclidean distance is a nonlinear measure which ensures that the localization of skeleton points in n-dimensional objects is well defined and robust enough to preserve homotopy with the original object.
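As a small illustration of this distance-based view, SciPy's exact Euclidean distance transform assigns every foreground pixel its distance to the nearest background pixel, which is exactly the intensity a MAT skeleton point carries. The toy square below is only a demonstration, not thesis data:

```python
import numpy as np
from scipy import ndimage as ndi

# binary object: a 5x5 foreground square inside a 7x7 image
obj = np.zeros((7, 7), dtype=bool)
obj[1:6, 1:6] = True

# Euclidean distance of each foreground pixel to the nearest background pixel;
# the ridge of this map carries the MAT intensities described above
dist = ndi.distance_transform_edt(obj)

# the centre pixel is 3 pixels from the nearest background, edge pixels only 1
```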

Under the definitions given above for opening and closing, the morphological skeleton is defined by the following formulas (Serra, 1982):

(Eq 3.17)   S(A) = ⋃_{n=0}^{N} S_n(A)

(Eq 3.18)   S_n(A) = (A ⊖ nB) − (A ⊖ nB) ∘ B,   n ≥ 0

(Eq 3.19)   A ⊖ nB = ((…(A ⊖ B) ⊖ B) ⊖ … ⊖ B)   (n successive erosions),   n ≥ 0

(Eq 3.20)   N = max{n | A ⊖ nB ≠ ∅}

where S_n(A) is the skeleton subset obtained after A has been eroded n times by the structuring element B and the result of opening that eroded set by B has been subtracted, and N is the maximum number of erosions that can be performed before the set A ⊖ NB becomes empty. In order to reconstruct the original discrete image, each skeleton subset S_n(A) has to be dilated n times by the corresponding structuring element and the results combined by set union.
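Eqs 3.17-3.20 can be sketched directly with binary morphology. The following is a minimal NumPy/SciPy illustration; the 3×3 cross used as B and the toy 5×5 square used as A are assumptions for demonstration, not the thesis's configuration:

```python
import numpy as np
from scipy import ndimage as ndi

def skeleton_subsets(A, B):
    """Return the subsets S_n(A) = (A erode nB) - (A erode nB) opened by B (Eq 3.18)."""
    subsets = []
    eroded = A.copy()
    while eroded.any():                          # N = largest n with A erode nB non-empty (Eq 3.20)
        opened = ndi.binary_opening(eroded, structure=B)
        subsets.append(eroded & ~opened)         # set difference of Eq 3.18
        eroded = ndi.binary_erosion(eroded, structure=B)  # one more erosion by B (Eq 3.19)
    return subsets

B = ndi.generate_binary_structure(2, 1)          # 3x3 cross-shaped structuring element
A = np.zeros((9, 9), dtype=bool)
A[2:7, 2:7] = True                               # a 5x5 square

S = skeleton_subsets(A, B)
skeleton = np.logical_or.reduce(S)               # Eq 3.17: union of all subsets
```

For this square, three subsets are produced (N = 2): the four outer corners, the four corners of the eroded 3×3 interior, and the centre pixel.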


3.7.3 Shape Reconstruction

Reconstruction can be explained as an iteration of dilations, with the purpose of recovering the original binary image after skeletonization. The original image X can be reconstructed precisely by Serra's (Serra, 1982) formulation:

(Eq 3.21)   X = ⋃_{n=0}^{N} (S_n(X) ⊕ nB)

where S_n(X) ⊕ nB denotes n successive dilations of the skeleton subset S_n(X) by B. However, if only a subset of the skeleton is available, this algorithm yields a partial reconstruction of the original image X. Hence, Maragos and Schafer (Maragos and Schafer, 1986) advanced a more globally aware reconstruction method, exploiting the associative and distributive properties of set union, as follows:

(Eq 3.22)   X = ((…(((S_N(X) ⊕ B) ∪ S_{N−1}(X)) ⊕ B) ∪ …) ⊕ B) ∪ S_0(X)

The nested index ensures that all the sub-skeletons are incorporated during the iteration, which is sufficient to derive a global and thoroughly reconstructed object.
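The exactness of Eq 3.21 can be checked numerically: decomposing a shape into its skeleton subsets and re-dilating each subset n times recovers the original set. The toy square and cross-shaped B below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage as ndi

B = ndi.generate_binary_structure(2, 1)      # 3x3 cross
A = np.zeros((9, 9), dtype=bool)
A[2:7, 2:7] = True                           # a 5x5 square

# decompose into skeleton subsets S_n (Eq 3.18)
subsets, eroded = [], A.copy()
while eroded.any():
    subsets.append(eroded & ~ndi.binary_opening(eroded, structure=B))
    eroded = ndi.binary_erosion(eroded, structure=B)

# Eq 3.21: X is the union over n of S_n dilated n times by B
recon = np.zeros_like(A)
for n, S_n in enumerate(subsets):
    recon |= ndi.binary_dilation(S_n, structure=B, iterations=n) if n else S_n

assert np.array_equal(recon, A)              # reconstruction is exact
```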

In pattern analysis, the skeleton (medial or symmetry axis) has been broadly applied to characterize objects by structures consisting of line or arc patterns. Storing only this essential structural information decreases memory requirements and simplifies the data structure. In addition, the original shape of an object can be reconstructed from the skeleton points and their distances to the object's boundaries. By contrast, the original shape cannot be reconstructed from a thinned image obtained by a thinning algorithm.

The skeleton provides a compact and simple representation of a shape, which is a very useful means of preserving a wide variety of the topological and size characteristics of the original shape. Lantuéjoul (1980) noted that since a discrete skeleton cannot be infinitely thin, its thickness must be at least one pixel. Fundamentally, mathematical morphology has been used extensively in remote sensing image analysis for computing the geometric shape and structure of features, focusing on the geometric characteristics of an object through the choice of a proper structuring element (Maragos and Schafer, 1986, Goutsias and Schonfeld, 1991, Pai and Hansen, 1994, Kresch and Malah, 1994). Skeletons contribute to a wide range of applications including the recognition and representation of handwritten characters, ridge patterns, biological cell structures, circuit diagrams, engineering drawings and robot path planning.

Another illustrative description of the skeleton is the 'prairie-fire analogy', in which the edge of an object is set on fire and the loci where the fire fronts meet and quench each other define the skeleton. Once this theory was formalized by Serra (Serra, 1982) and Lantuéjoul (Lantuejoul, 1980), skeletons became a very effective and influential descriptor for shape and pattern analysis. The skeleton of a shape is a line representation of the object: theoretically, it must preserve the object's topology, be one pixel thick and be located in the middle of the object. However, such ideal skeletons are not always realizable in discrete images. In Figure 3.11, a robust skeleton of a rectangle is compared with an abnormal skeleton disturbed by random errors along the edges, of the kind that normally appear as noise in real images.

The skeleton method has also been criticized for its sensitivity to noise over the shape boundary because:

(1) Even slight noise or small disturbances around the target edge can lead to the generation of redundant skeleton branches, which affect the derivation of a correct global signature (Katz and Pizer, 2003), as shown in the example in Figure 3.11.

(2) To maximize the functionality of the skeletonization while maintaining the connectivity and integrity of objects, significant skeletons should be considered during the erosion process in order to achieve lossless morphological representation.


Figure 3.11 A robust skeleton extraction and a rectangle with boundary disturbance

Several research trials have demonstrated the significance of morphological skeleton variants, which can eliminate redundant segments in shape representation at relatively low cost and with more efficient programming practice (Pai and Hansen, 1994, Shoji, 1992, Kresch and Malah, 1994, Goutsias and Schonfeld, 1991). Maragos (Maragos, 1989) introduced entropy measures to provide a practical method for structuring element selection that can improve the efficacy of the expected outcomes. Pai and Hansen (1994) proposed a novel boundary-constrained skeleton reconstruction algorithm aimed at reducing iteration-based memory requirements and computational complexity.

Because the skeleton preserves the topological properties and morphology of targets, it supports object identification, pattern recognition and shape extraction (Shah, 2005, Pizer et al., 2003, Goh, 2008). Hence, most applications of this technique are in the fields of bio-medical imaging, surface model reconstruction and model retrieval.

3.8 Pruning with Discrete Curve Evolution (DCE)

To effectively remove unwanted branches of a skeleton while keeping its connectivity, a pruning algorithm based on contour approximation integrated with the medial axis is required. Such a method conserves the topological properties of the targets, is robust to noise along the edges, and is practical and relatively fast. It is therefore more likely to produce an accurate result when the shape is reconstructed from the robust skeleton.


Without skeleton pruning, the accuracy of reconstruction from the initially extracted skeleton is reduced, a shortcoming responsible for the diminished effectiveness of pattern analysis in several applications (Bai et al., 2007). Unfortunately, skeleton algorithms for discrete shapes are particularly vulnerable to noise and topological variations, because quantized pixel locations result in redundant branching, as in Figure 3.11. Some existing pruning algorithms produce large geometrical and topological changes to the skeleton's curves, for instance creating additional points due to noise, shrinking the final skeleton, disconnecting the skeleton's curves, or eliminating significant branches. In addition, a questionable skeleton may result, either generating extra areas of the object through spurious branches, or incomplete areas through missing branches. Hence, pruning branches from a complex skeleton is an effective approach to shape simplification.

A sound skeleton pruning scheme should provide preservation of topological information, accurate extraction of the skeletons and stability of the transformation (Solís Montero and Lang, 2012). These requirements contribute both globally and locally to the pruning step, and the integrity and accuracy of the pruned skeletons must be clearly defined (Liu et al., 2012, Telea, 2012). Bai et al. (2007) proposed a contour-partitioning method based on Discrete Curve Evolution (DCE), which yields outstanding results consistent with human visual perception by maximizing similarity to the original skeleton topology while suppressing spurious branches and biased skeleton points. In addition, pruning with DCE is stable, yielding a hierarchical skeleton structure without displacing the skeleton points, which mark the centres of the maximal discs in the object; this cannot be achieved by other combined pruning methods (Bai et al., 2007).

DCE (Latecki and Lakämper, 1999) produces a sequence of polygons P = P^0, P^1, …, P^n from the input closed polygon P^0, where each vertex v of a polygon P is assigned a relevance measure K(v, P):

(Eq 3.23)   K(v, P) = β(v) l_1(v) l_2(v) / (l_1(v) + l_2(v)) ≥ 0

where β(v) is the turn angle at vertex v and l_1(v), l_2(v) are the lengths of the two polygon segments incident to v, normalized with respect to the total length of the polygon. At each iterative step i, running from 0 to n − 1, the polygon is simplified by deleting a single vertex v^i:

(Eq 3.24)   P^{i+1} = P^i \ {v^i},   v^i ∈ Vertices(P^i),   i = 0, 1, …, n − 1

where the deleted vertex v^i is the one that minimizes the relevance measure over the current polygon:

(Eq 3.25)   K(v^i, P^i) = min{K(u, P^i) | u ∈ Vertices(P^i)},   i = 0, 1, …, n − 1,   |Vertices(P^n)| ≥ 3

Since homotopic contours are typically distorted by digitization in a discrete image, the DCE method eliminates such distortions and simplifies the skeleton shape while preserving the significant visual segments. Additionally, it produces a satisfactory result even under noise and shape variations, without dislocating the centres of the maximal disks defined by the Euclidean distance function of the skeleton points (Calabi, 1965). This is of importance for the later process of reconstructing the original object.
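The evolution of Eqs 3.23-3.25 can be sketched compactly: at each step the vertex with the smallest relevance is deleted. The relevance measure below follows the standard DCE form (turn angle weighted by the incident segment lengths); the square-with-noise polygon and the stopping size are illustrative assumptions:

```python
import numpy as np

def relevance(p, v, q):
    """K(v, P) = beta(v) * l1 * l2 / (l1 + l2) for vertex v between neighbours p and q."""
    s1, s2 = np.subtract(v, p), np.subtract(q, v)
    l1, l2 = np.hypot(*s1), np.hypot(*s2)
    beta = abs(np.arctan2(s2[1], s2[0]) - np.arctan2(s1[1], s1[0]))
    beta = min(beta, 2 * np.pi - beta)       # turn angle folded into [0, pi]
    return beta * l1 * l2 / (l1 + l2)

def dce(poly, keep):
    """Eqs 3.24-3.25: repeatedly delete the vertex of minimal relevance."""
    poly = [tuple(pt) for pt in poly]
    while len(poly) > max(keep, 3):          # evolution stops at >= 3 vertices
        n = len(poly)
        scores = [relevance(poly[i - 1], poly[i], poly[(i + 1) % n]) for i in range(n)]
        del poly[int(np.argmin(scores))]
    return poly

# a square contour carrying one near-collinear "noise" vertex on its lower edge
square = [(0, 0), (2, 0.05), (4, 0), (4, 4), (0, 4)]
simplified = dce(square, keep=4)             # the low-relevance noise vertex is deleted first
```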

The skeletons derived by DCE can then be post-processed by pruning, to remove unnecessary skeleton points and redundant sub-branches resulting from image disturbances (Möller et al., 2009, Attali et al., 2009). Skeleton pruning is analogous to a pre-processing step for the subsequent reconstruction stage. As mentioned in Section 3.7.1, a skeleton is one pixel wide; branch points are pixels along the major skeleton that have more than two neighbouring pixels, and end points of skeletons have only one neighbouring pixel (Hesselink and Roerdink, 2008). The principle of pruning can be expressed by Eq 3.26 (Solís Montero and Lang, 2012):

(Eq 3.26)   ‖E − B‖ ≤ s · f(B)

Assume a branch generated by skeleton points, with beginning point B and endpoint E, and let Y be the point on the edge of the shape nearest to B, so that f(B) = ‖B − Y‖ is the Euclidean distance from B to the boundary. If the endpoint E lies inside the circle centred at B whose radius is equal to s‖B − Y‖, the branch is removed as spurious and separated from the main skeleton. The coefficient s is a scale factor that can be adjusted to control the pruning (Solís Montero and Lang, 2012). After the redundant sub-branches are trimmed by skeleton pruning, the robust skeleton can be used to reconstruct the targeted shape more accurately; the application of these methods is demonstrated in detail in Chapter 5 for TC eye extraction.
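The criterion of Eq 3.26 amounts to a single distance comparison per branch. A minimal sketch follows; the branch coordinates and the default scale factor s are illustrative assumptions:

```python
import numpy as np

def is_spurious(branch_pt, end_pt, nearest_edge_pt, s=1.5):
    """Eq 3.26: prune the branch if its endpoint E falls inside the circle centred at
    the branch point B with radius s * f(B), where f(B) = ||B - Y|| is the distance
    from B to its nearest boundary point Y."""
    f_B = np.linalg.norm(np.subtract(branch_pt, nearest_edge_pt))
    return np.linalg.norm(np.subtract(end_pt, branch_pt)) <= s * f_B

# B = (0, 0) lies 5 units from its nearest edge point Y = (3, 4)
pruned = is_spurious((0, 0), (6, 0), (3, 4))      # 6 <= 1.5 * 5: spurious, removed
kept = not is_spurious((0, 0), (10, 0), (3, 4))   # 10 > 7.5: significant branch, kept
```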

3.9 Watershed Segmentation

Watershed segmentation is applied to segment objects in images on the basis of mathematical morphology (MM), in which the intensity or greyscale values of pixels are treated as a topographic relief. This powerful analysis approach provides topographers and cartographers with a better appreciation of image transformation.

The concept of watersheds and 'the great divide' for dealing with segmentation problems was initially introduced (Beucher and Lantuéjoul, 1979, Serra, 1982, Soille, 2003) as a "watershed by flooding" model analogous to topography. Beucher and Lantuéjoul proposed a non-parametric contour detection method based on the watersheds of a variation function derived from the gradient modulus of the relief surface (Beucher and Lantuéjoul, 1979).

Figure 3.12 has been removed due to Copyright restrictions.

Figure 3.12 Illustration of immersion analogy (Soille, 2003)

Subsequently, Vincent and Soille (Vincent and Soille, 1991, Vincent, 1993) proposed the classical watershed method based on immersion simulation, successfully reducing the execution time from hours to seconds. Through their development of concepts such as minima, catchment basins and watersheds, the method can be applied to greyscale images. The image is assumed to represent land relief, which is submerged pixel by pixel. The watershed procedure finds the regional minimum of each basin (see Figure 3.12) and then floods the relief progressively from the various minima. A dike or dam is established as soon as the rising water would connect two different basins. When the water has submerged everything, the locations of these dikes are the watershed lines that separate the different regions of the image. The boundaries resulting from the watershed transformation define continuous and complete regions, and the extracted edges correspond to clearly detectable contours.

As mentioned before in Section 3.5, an opening operation tends to remove some peaks and ridge lines, while a closing operation can fill in valleys and basins. Therefore, these concepts can be employed to determine the minima in the catchment basins so that watershed segmentation can be well defined for grey scale images.

Notwithstanding these merits, most conventional watershed algorithms are either too time-consuming or insufficiently accurate, and for more complex images the major drawback of the algorithm is severe over-segmentation. In the early 1990s a solution was developed to overcome this difficulty (Meyer, 1982), using Meyer's flooding algorithm (Meyer and Beucher, 1990). Over-segmentation is avoided by merging irrelevant fragmented regions, achieved by setting markers or labels for the objects to be segmented in both the foreground and background pixels of an image.

As watersheds can be defined in both continuous and discrete domains, this algorithm has been extensively used in various fields for segmentation purposes, such as biology, medicine, computer vision, remote sensing, scene analysis, 3D images, moving objects detection and colour images segmentation.


3.9.1 Implementation of Traditional Watershed Segmentation

To prepare the input for the watershed transformation, a gradient magnitude image is extracted from the original image by the morphological gradient operator (Eq 3.27).

(Eq 3.27)   g = (f ⊕ b) − (f ⊖ b)

Rather than being suited only to greyscale images, the algorithm of Vincent and Soille (1991) can also handle colour images and 3-dimensional objects, through its First-In-First-Out (FIFO) queue structure.
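A minimal sketch of Eq 3.27 with SciPy's greyscale morphology (the test image and the 3×3 element are illustrative): the morphological gradient is large only where the intensity changes within the span of the structuring element.

```python
import numpy as np
from scipy import ndimage as ndi

# a bright 3x3 plateau on a dark background
f = np.array([[0, 0, 0, 0, 0],
              [0, 9, 9, 9, 0],
              [0, 9, 9, 9, 0],
              [0, 9, 9, 9, 0],
              [0, 0, 0, 0, 0]], dtype=float)

# Eq 3.27: g = (f dilated by b) - (f eroded by b)
grad = ndi.grey_dilation(f, size=(3, 3)) - ndi.grey_erosion(f, size=(3, 3))

# flat regions give zero gradient; the plateau boundary gives the full step height
```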

Pixels whose gradient magnitude does not exceed the gradient threshold are marked as water-area pixels associated with a regional minimum. Initially, every such connected component within the gradient image is treated as a catchment basin with a unique label. Other pixels, including local minima above the gradient threshold, are taken as non-water-area pixels.

When the algorithm starts, the water level grows step by step from the gradient threshold. Unmarked pixels surrounding the catchment basins are merged into the adjacent basin and marked with the corresponding label. As the water level increases, unmarked local minima are eventually marked. Generally, the watershed algorithm can be divided into four steps, as proposed by Vincent and Soille (1991):

Step 1. Sorting

Let f: D → ℕ be a greyscale image, where f(p) represents the elevation of pixel p and f is considered as a topographic surface. Each pixel is sorted by increasing grey value, from the minimum value h_min to the maximum value h_max of f; a plateau is a connected pixel component of constant grey value. The threshold set of f at level h is denoted (Sohn and Jezek, 1999, Roerdink and Meijster, 2000):

(Eq 3.28)   T_h = {p ∈ D | f(p) ≤ h}

Flooding is governed by geodesic distances and geodesic influence zones, which are computed for all hitherto determined catchment basins (Vincent and Soille, 1991). The geodesic distance d_A(a, b) between points a and b within a set A is the minimum length over all paths joining a and b that lie entirely inside A. Assume now that B is a subset of A comprising k connected components B_1, B_2, …, B_k. The geodesic influence zone of the connected component B_i within A is defined as:

(Eq 3.29)   iz_A(B_i) = {p ∈ A | ∀ j ∈ [1, k] \ {i}: d_A(p, B_i) < d_A(p, B_j)}

The set IZ_A(B) is the union of the geodesic influence zones of the connected components of B, as shown in the following equation:

(Eq 3.30)   IZ_A(B) = ⋃_{i=1}^{k} iz_A(B_i)

The complement of the set IZ_A(B) within A determines the skeleton by influence zones (SKIZ):

(Eq 3.31)   SKIZ_A(B) = A \ IZ_A(B)

Step 2. Marking or labelling

Marking starts from the pixels at the lowest grey value. Either the four-neighbourhood or the eight-neighbourhood of such a pixel is considered connected to it if the adjacent pixel values are also at or near the minimum value, after comparison with the other neighbourhood pixels of constant grey value. These adjacent points are then assigned to the same object. The process is repeated until all pixels of minimum value have been processed.

Figure 3.13 has been removed due to Copyright restrictions.

Figure 3.13 Illustration of minima and maxima of a watershed segmentation function (Beucher and Meyer, 1992)


Step 3. Region merging

To simulate the immersion, a recursive process is undertaken with the grey level growing from h_min to h_max, in which the basins or valleys associated with the minima of f are consecutively expanded. The watershed by immersion can be defined by the recursion (Roerdink and Meijster, 2000):

(Eq 3.32)   X_{h_min} = T_{h_min},   X_{h+1} = MIN_{h+1} ∪ IZ_{T_{h+1}}(X_h),   h ∈ [h_min, h_max)

where X_h represents the union of the set of basins computed at level h and MIN_h denotes the union of the regional minima at level h. A connected component of the threshold set T_{h+1} at level h + 1 can be treated either as a new minimum or as an extension of a basin in X_h; in the latter case, computing the geodesic influence zone of X_h within T_{h+1} leads to the update of X_{h+1}.

Step 4. Generating watersheds

After Step 3, if different markers appear in the neighbourhood of a pixel, that pixel must lie on a dividing line, and the image can therefore be segmented by watersheds. The watershed Wshed(f) of f is the complement of X_{h_max}, the union of all catchment basins after flooding reaches the maximum elevation h_max, in the connected domain D of the discrete greyscale image (Roerdink and Meijster, 2000):

(Eq 3.33)   Wshed(f) = D \ X_{h_max}

According to the watershed theory described in Section 3.9, the result of this transformation is a segmentation of the image into catchment basins, one for every regional minimum of the image. A regional minimum is a connected set of points at a specific grey level surrounded only by points of higher grey levels. Because digital images lack continuous gradients, watershed regions are difficult to compute directly. However, the watershed transform can be simulated by assuming that the flow from a point of a given grey level is directed toward the closest point of a lower grey level, and continues from there to the next closest point of lower grey level until it reaches a regional minimum. In implementation, the grey levels are ordered from small to large. When exactly one labelled pixel, or several pixels sharing the same marker, exist in the neighbourhood of a pixel, the pixel is given that marker; if no marker lies within its neighbourhood, a new marker is assigned to it. Processing continues until the maximum grey value is reached.
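Steps 1-4 can be condensed, for a one-dimensional signal, into a short flooding simulation: samples are processed in increasing order of grey value; an unlabelled sample touching no basin starts a new basin (a regional minimum), a sample touching one basin joins it, and a sample touching two basins becomes a watershed line. This is a simplified sketch (plateaus are handled only by stable ordering), not Vincent and Soille's FIFO implementation:

```python
import numpy as np

def watershed_1d(signal):
    """Label each sample with its catchment basin; watershed lines get -1."""
    n = len(signal)
    labels = np.zeros(n, dtype=int)
    next_label = 1
    for i in np.argsort(signal, kind="stable"):      # immersion: lowest values first
        left = labels[i - 1] if i > 0 else 0
        right = labels[i + 1] if i < n - 1 else 0
        basins = {b for b in (left, right) if b > 0}
        if not basins:
            labels[i] = next_label                   # new regional minimum -> new basin
            next_label += 1
        elif len(basins) == 1:
            labels[i] = basins.pop()                 # extension of an existing basin
        else:
            labels[i] = -1                           # two basins meet: watershed line
    return labels

# two valleys separated by a peak: the peak sample becomes the watershed line
labels = watershed_1d(np.array([3, 1, 2, 4, 2, 1, 3]))
```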

However, a common issue with this technique is over-segmentation, owing to the unavoidable residual speckle noise in SAR images: the raw watershed algorithm produces a huge mass of small catchment basins as redundant classified regions in the image. Over-segmentation also results from regional irregularities of the gradient image; its remedy is introduced in Section 3.9.2.

3.9.2 Oversegmentation and Its Remedy

The purpose of segmentation is to locate edge points appropriately according to the boundary information. Over-segmentation, the major drawback of this technique, is caused by the sensitivity of the gradient image to noise, which not only disturbs the actual image segmentation but also adds unnecessary computational complexity through the calculation of trivial segments. There are two common remedies for over-segmentation:

(1) ensuring that each extracted basin corresponds only to an expected target, by modifying the gradient function; and

(2) removing or eliminating irrelevant contour elements.

A practical solution to this problem is to place inner and outer control markers (Beucher and Meyer, 1992, Vincent, 1993) at the minima and maxima of the different regions before immersing the relief (shown in Figure 3.14). To be more specific, many regional minima exist in a gradient image; these are denoted as inner markers (Meyer and Beucher, 1990). Each minimum is surrounded by a divide line, labelled as an outer marker. The morphological minima-marker technique is then used to impose these markers as the new regional minima of the gradient image, replacing all the original ones.

In addition, in a greyscale image, a set of markers or labels is chosen to define the pixels from which flooding commences, each receiving a different label (Meyer and Beucher, 1990). In Meyer's flooding algorithm, the neighbouring pixels of each marked region are inserted into a priority queue, with priority corresponding to the grey level of the pixel. The pixel with the highest priority is extracted from the queue; if all of its already-labelled neighbours carry the same label, the pixel receives that label. All of its non-labelled neighbours that are not yet in the priority queue are then placed into the queue. The pixels that never receive a label in this process constitute the watershed lines.

As these control markers are extracted, a better delineation of the edges can be achieved while preserving the integrity and continuity of the boundary information, so the watershed algorithm can realise its potential more efficiently. The watershed method will be implemented in Chapter 5 for the extraction of flood areas in SAR images.
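Marker-controlled flooding is available in SciPy as `ndimage.watershed_ift` (image foresting transform). The sketch below is illustrative rather than the thesis's flood-mapping pipeline: two inner markers are placed inside two dark regions and a negative background marker outside them, and each dark region is then claimed by its own marker instead of being over-segmented:

```python
import numpy as np
from scipy import ndimage as ndi

# synthetic greyscale image: two dark 2x2 regions on a bright background
img = np.full((7, 7), 200, dtype=np.uint8)
img[1:3, 1:3] = 50                   # region 1
img[4:6, 4:6] = 50                   # region 2

markers = np.zeros(img.shape, dtype=np.int16)
markers[2, 2] = 1                    # inner marker for region 1
markers[5, 5] = 2                    # inner marker for region 2
markers[0, 0] = -1                   # negative value: background (outer) marker

labels = ndi.watershed_ift(img, markers)
# each dark region floods from its own inner marker
```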

Figure 3.14 has been removed due to Copyright restrictions.

Figure 3.14 Two examples of the watershed transform applied to a one-dimensional signal. Chart A) When three markers or labels are assigned at the three local minima, three segmented areas are produced, with watershed lines as the boundaries, separated at the local maxima between each pair of basins; Chart B) When only two markers are placed, segment 2 floods over a small peak and into the adjacent minimum until a watershed line is formed with segment 1. (Fisher, 2014)

3.10 Classic Edge Detectors

3.10.1 Introduction

Over the past few decades, edge detection techniques have been developed because of the important role edges play in image processing and machine vision applications. Edges in an image result from abrupt changes in the brightness of objects and may be created by shadows, differences in object contrast due to object shape, texture and 2D or 3D object geometry. Edges occur at various image resolutions or scales and represent transitions or gradients in image grey-scale values. Edge detection denotes the process of identifying and locating sharp discontinuities in images. There are three types of edge detectors:

• Linear edge detectors consist of a process of linear filtering. Linear filters, which are very common in digital signal processing, are divided into two classes: infinite impulse response (IIR) and finite impulse response (FIR) filters. Linear filtering replaces each pixel with a linear combination of its neighbours; this linear combination is defined by a convolution kernel. Linear filters are often used to eliminate undesirable frequencies from a time-varying input signal, subject to the constraint of linearity, or to select a required frequency among many others. The outputs of such systems are constituted solely of components with a linear response.

A lowpass filter is a typical linear filter performing a smoothing operation. In a linear edge detector, first or second order differential operators, which are linear equations, are commonly used to compute the differences of image intensities and locate estimated edge points in the image. First order differential operators include the Roberts, Prewitt, Sobel and Kirsch operators. These simple derivative filters respond strongly to noise. Other filters, such as the Laplacian, LOG and Canny edge detectors, use second order differential operators (Gonzales and Woods, 2002, Canny, 1986b, Marr, 1982).

• In contrast to linear edge detectors, which are likely to blur edges and other details and fail to perform well with non-Gaussian noise, non-linear edge detectors are statistical estimators that estimate the value of a pixel in the actual image in the presence of noise. Non-linear detectors are generally divided into two types: traditional non-linear detectors based on Taylor's series expansion, and density-based non-linear detectors, which target the removal of certain types of noise and can be extremely useful for the design of sophisticated filters.

Non-linear edge detectors use non-linear filtering, which can preserve edges and remove noise effectively, despite its design complexity. The Difference of Estimates (DoE) is one such operator, developed by Yoo (Yoo et al., 1993), which applies optimum stack filtering to find the dilation and erosion within a local kernel; their difference is the edge strength.

• The third type of edge detector uses hybrid techniques, combining linear and non-linear operators. One hybrid approach applies a non-linear filter first and then a differential or gradient operator for edge detection (Pitas and Venetsanopoulos, 1990), while another applies linear filtering first and then a non-linear detector.

Edge detection applies certain algorithms in order to extract the boundary between an object and its background. Most classic edge detection techniques are based on the computation of derivatives. First, noise in the image is suppressed by smoothing; then, using first or second order differential operators, the maxima of the gradient or the zero crossings of the second derivative are obtained. Finally, an appropriate threshold enables the selection of the edges of the object.

3.10.2 Gradient

As stated above, edge detection essentially detects significant local changes in an image, which can be considered as a discrete array of samples of a continuous function of image intensity; the gradient is a measure of variations or changes in this function. In grey value images, significant changes can be identified by using a discrete approximation to the gradient, which is assumed to be analogous to the continuous gradient. The gradient is the two-dimensional equivalent of the first derivative, associated with a local peak, and is defined as the vector:


(Eq 3.34)  ∇f = [∂f/∂x, ∂f/∂y]^T = [G_x, G_y]^T

where the vector ∇f is determined at each pixel. The direction of the gradient is defined by the maximum change of the function, and the magnitude of this maximum change is given by (Eq. 3.35):

(Eq 3.35)  |∇f| = (G_x² + G_y²)^(1/2)

(Eq 3.36)  |∇f| ≈ |G_x| + |G_y|

In order to approximate the gradient magnitude by absolute values, equation (Eq. 3.36) is derived from equation (Eq. 3.35). The direction of the gradient at each point is then defined as:

(Eq 3.37)  θ = arctan(G_y / G_x)

The angle θ is measured with respect to the x axis. Providing an accurate estimate of an edge's location or orientation remains a challenge, because detection only indicates that an edge exists close to a pixel in the image. The misclassifications in the location or orientation of edges fall into two main classes: false edges and missing edges. Distinguishing between edge detection and edge estimation plays a key role in edge studies, since these steps are performed by different calculations with different error models.

For instance, probability distributions of location and orientation can be estimated to check the accuracy of edge extraction, and the location error distribution can be obtained from a histogram. Over the past two decades many edge detectors have been developed; some commonly used edge detectors are discussed in the following sections.
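The quantities in Eqs 3.34–3.37 can be computed directly with first-order finite differences. The following is a minimal numpy sketch; the function name is illustrative and not from any cited implementation.

```python
import numpy as np

def gradient_mag_dir(f):
    """First-order finite-difference gradient of a 2-D intensity array.

    Returns (magnitude, direction), where direction is the angle of
    maximum change measured from the x axis, as in Eqs 3.34-3.37.
    """
    f = np.asarray(f, dtype=float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]     # df/dx: differences along columns
    gy[:-1, :] = f[1:, :] - f[:-1, :]     # df/dy: differences along rows
    magnitude = np.hypot(gx, gy)          # Eq 3.35
    direction = np.arctan2(gy, gx)        # Eq 3.37
    return magnitude, direction
```

For a vertical intensity step, the magnitude peaks at the step and the direction is zero (gradient along the x axis), as expected from Eq 3.37.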


3.10.3 Edge Detection Using Classical Edge Detectors

3.10.3.1 Roberts Edge Detectors

The Roberts edge detector is a simple and fast operator for computing the 2D spatial gradient of an image. It contains a pair of 2×2 convolution kernels or masks:

G_x = |  1   0 |        G_y = |  0   1 |
      |  0  -1 |              | -1   0 |

By applying these kernels, the 2D spatial gradients are computed at an interpolated point in the diagonal directions of the image. G_y can be obtained by rotating G_x through 90°. These masks are designed to respond maximally to edges at 45° to the pixel grid, which makes the operator more sensitive to noise.

(Eq 3.38)  |∇f| ≈ |G_x| + |G_y|

In the output image, the pixel value at each point represents the estimated absolute magnitude of the spatial gradient, since the gradient magnitude at a pixel is obtained from the partial derivatives G_x and G_y at that pixel (Jain et al., 1995). Therefore:

(Eq 3.39)  G_x = f(i, j) - f(i+1, j+1);   G_y = f(i+1, j) - f(i, j+1)

The primary advantages of the Roberts edge detector are its relatively high positional accuracy and its small computational cost, since the kernel is small and contains only integers; however, partial edge information may easily be lost.


3.10.3.2 Sobel Edge Detectors

The Sobel operator uses two 3×3 convolution kernels:

G_x = | -1   0   1 |        G_y = |  1   2   1 |
      | -2   0   2 |              |  0   0   0 |
      | -1   0   1 |              | -1  -2  -1 |

which weight the integer values to determine approximations to the first derivatives in the horizontal and vertical directions. The Sobel operator is a non-directional edge detector, which smooths the image while preserving edge positions close to the centre of the kernel (Roushdy, 2006). The kernel responses are given by:

(Eq 3.40)  G_x(i, j) = [f(i-1, j+1) + 2f(i, j+1) + f(i+1, j+1)] - [f(i-1, j-1) + 2f(i, j-1) + f(i+1, j-1)]

(Eq 3.41)  G_y(i, j) = [f(i-1, j-1) + 2f(i-1, j) + f(i-1, j+1)] - [f(i+1, j-1) + 2f(i+1, j) + f(i+1, j+1)]
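A direct sketch of the Sobel computation in numpy, using the standard Sobel kernels and the absolute-value approximation of the gradient magnitude (Eq. 3.36); the helper names are illustrative and not from any cited implementation.

```python
import numpy as np

# Standard Sobel kernels (weights 1-2-1 across the centre row/column)
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float)

def convolve3x3(image, kernel):
    """Direct 3x3 correlation over the valid interior of the image."""
    img = np.asarray(image, dtype=float)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return out

def sobel_magnitude(image):
    """Edge strength |Gx| + |Gy| (the Eq. 3.36 approximation)."""
    gx = convolve3x3(image, SOBEL_X)
    gy = convolve3x3(image, SOBEL_Y)
    return np.abs(gx) + np.abs(gy)
```

Applied to a vertical step image, the response is zero in the flat regions and strong (weight sum 1+2+1 times the step height) at the step.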

3.10.3.3 Prewitt Edge Detectors

The Prewitt edge operator is superior to the Roberts operator because its 3×3 masks give better noise suppression. The Prewitt edge detector has a similar effect to the Sobel, but because it does not use weighted values it is more sensitive to noise. The Prewitt kernels can be defined as (Prewitt, 1970):

G_x = | -1   0   1 |        G_y = |  1   1   1 |
      | -1   0   1 |              |  0   0   0 |
      | -1   0   1 |              | -1  -1  -1 |

The results of the Prewitt operator correspond to the gradient at each pixel in the image (Noorullah and Damodaram, 2009). In the next sections second derivative operators will be presented.


3.10.3.4 Laplacian-of-Gaussian (LOG) Edge Detectors

In the context of image processing, an isotropic operator responds equally in all directions. A zero crossing edge detector combined with Gaussian smoothing is a typical example of an isotropic operator's response to edges in all directions. Even though this operator is isotropic in theory, it fails to perform perfectly in practice; for instance, when a very small standard deviation (less than one pixel) is used for the Gaussian smoothing function, the smoothing has very little effect.

The LOG operator combines a Gaussian smoothing filter and a Laplacian sharpening filter (Nalwa and Binford, 1986), rather than using the Laplacian operator alone. To obtain a better result, edge detection should be applied after suppressing the noise along the edges. The LOG operator takes the second derivative of the image; hence it will be zero where the image is uniform, and where a change occurs it gives a positive response on the darker side and a negative response on the lighter side. The response of the LOG detector has the following characteristics:

(1) It locates the detected edge as close as possible to the real edge.

(2) It appears on the darker side of the edge for a positive value of LOG, and on the lighter side for a negative value.

(3) It enhances and sharpens the edge, yet is affected by increases in noise; if a LOG filter with a very narrow Gaussian function is used (σ = 0.5), the result will be rather noisy.

(4) It is affected by noise even when the result of the LOG filter is integrated with the original image; on the other hand, increasing the Gaussian σ can reduce the noise drastically, but the sharpening effect will then be suppressed.

The zero values of the second derivative correspond to the maxima of the first derivative. In order to reduce noise, a Gaussian low pass kernel with standard deviation σ is applied, as defined by equation (Eq. 3.42) (Marr, 1982):

(Eq 3.42)  G(x, y) = (1 / (√(2π) σ)) exp(−(x² + y²) / (2σ²))

To achieve better detection results, it is essential that the standard deviation σ is chosen carefully. The value of σ is very important for the LOG operator because it controls the smoothing: the greater the value, the greater the size of the kernel. A large σ performs well in eliminating high frequency noise, with a strong capability of noise suppression; conversely, edge detection will be more accurate when σ is small, but the operator is then less able to cope with noise. The LOG combines the Laplacian and the Gaussian as given by (Eq 3.44) below:

(Eq 3.43)  ∇²f = ∂²f/∂x² + ∂²f/∂y²

(Eq 3.44)  LOG(x, y) = ∇²G(x, y) = −(1 / (πσ⁴)) [1 − (x² + y²) / (2σ²)] exp(−(x² + y²) / (2σ²))

The two-dimensional LOG function centred at (0, 0) with Gaussian standard deviation σ is a continuous function, as shown in Figure 3.15 below (Marr and Hildreth, 1980):

Figure 3.15 The 2-D Laplacian of Gaussian (LOG) function, where the x and y axes are marked in standard deviations (σ).

There are two common approximate discrete convolution kernels:

| 0   1   0 |        | 1   1   1 |
| 1  -4   1 |        | 1  -8   1 |
| 0   1   0 |        | 1   1   1 |
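These discrete kernels can be applied by direct convolution. The following minimal numpy sketch (names illustrative) demonstrates the behaviour described above: the response is zero on uniform regions, positive on the darker side of a step and negative on the lighter side.

```python
import numpy as np

# The two discrete Laplacian approximations given above
LAP4 = np.array([[0,  1, 0],
                 [1, -4, 1],
                 [0,  1, 0]], dtype=float)
LAP8 = np.array([[1,  1, 1],
                 [1, -8, 1],
                 [1,  1, 1]], dtype=float)

def laplacian(image, kernel=LAP4):
    """Direct 3x3 correlation over the valid interior of the image."""
    img = np.asarray(image, dtype=float)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return out
```

On a uniform image the result is identically zero; at an intensity step the response changes sign, and the edge lies at the zero crossing between the positive and negative responses.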


Tests for spurious responses can be carried out by thresholding the magnitude of the gradients. Sharp contrast in the image enables the positive maximum and negative minimum to be distinguished using suitable thresholds. The location of an edge can be estimated with sub-pixel resolution by applying linear interpolation.
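The linear interpolation of a zero crossing between two adjacent LOG responses of opposite sign can be sketched as follows; this is a hypothetical helper for illustration, not part of any cited implementation.

```python
def subpixel_zero_crossing(v0, v1):
    """Sub-pixel position of the zero crossing between two adjacent
    LOG responses v0 and v1 of opposite sign.

    Returns a fraction in [0, 1]: 0 means the crossing is at the first
    sample, 1 at the second, found by linear interpolation.
    """
    if v0 * v1 > 0:
        raise ValueError("no sign change between the two samples")
    return v0 / (v0 - v1)
```

For example, responses of +10 and -10 on either side of an edge place the zero crossing exactly halfway between the two pixels.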

3.10.3.5 Canny Edge Detector

The Canny edge detector is the most rigorously defined operator and is one of the most extensively used edge detection algorithms. By applying an optimization process, Canny proposed an approximation to the optimal detector as the maxima of the gradient magnitude of a Gaussian-smoothed image. The popularity of the Canny detector can be attributed to its optimality according to three criteria: strong detection capability, good localization, and a single accurate response to each edge (Canny, 1986b).

(1) Detection: to achieve good detection and maximize the probability of detecting real edge points, the signal-to-noise ratio (SNR) should be high, so as to minimize the probability of wrongly marking non-edge points even in noisy conditions. The processed image then has a low error rate, with unwanted edge information filtered out while the useful information is preserved.

(2) Localization: good localization means that detected edges are as close as possible to the true location of edges, i.e. with minimum variation between the actual edge and the detected edge.

(3) Minimal response: a single detected edge should give rise to only one response, eliminating the possibility of multiple responses to a single edge point.

Following the criteria above, the first step of the Canny edge detector is to smooth the image to eliminate noise along the edges before locating and detecting any edges. A Gaussian filter is introduced to blur the image and remove unwanted details and noise. The Gaussian smoothing can be achieved using the standard convolution method with a suitable kernel (e.g. 5×5), which is much smaller than the extent of the actual edges across the whole image. The larger the width of the Gaussian filter, the lower the sensitivity of the edge detector to noise; however, if the Gaussian width is increased, the localization errors along the targeted edges also increase slightly. Increasing the standard deviation of the Gaussian filter reduces the intensity of the noise. Because the kernel is centre-weighted, noise towards the borders of the kernel is attenuated, as the weight declines outward from the centre value. Based on the 2D isotropic Gaussian (Eq. 3.42), the image gradient can then be computed to emphasize regions with high spatial derivatives. Non-maximum suppression, an algorithm which converts the blurred edges of the gradient magnitude image into sharp edges by retaining only the local maxima of the gradient image, allows the detector to track along these regions and suppress any pixel that is not at a maximum.

The next step is to determine the edge strength by finding large gradient magnitudes in the smoothed, noise-suppressed image. The Sobel operator is used as a 2D spatial gradient measurement: the image is convolved in the horizontal and vertical directions with the pair of 3×3 convolution masks for G_x and G_y detection respectively, as in Section 3.10.3.2, and the operator then calculates the intensity gradient at each point of the image. Where there are high frequency changes in the image, the gradient is easier to detect. For each point of the image, the resulting gradient approximations can be combined to produce the gradient magnitude, and the approximate absolute gradient magnitude (edge strength) can be computed from the smoothed image using equations (Eq. 3.36) and (Eq. 3.45) below:

(Eq 3.45)  g(x, y) = G_σ(x, y) ∗ f(x, y)

where ∗ denotes convolution of the image f with the Gaussian G_σ. The non-maximum suppression step is then used to trace along the gradient in the edge direction and compare each value with those perpendicular to the edge. The operator retains only the local maxima and marks them as edges, and then applies a threshold to keep only strong edges.
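The non-maximum suppression step can be sketched as follows, quantising the gradient direction into four sectors (0°, 45°, 90°, 135°) and keeping a pixel only if it is the local maximum along its gradient direction. This is a minimal numpy sketch with illustrative names, not the exact formulation used by Canny.

```python
import numpy as np

def non_maximum_suppression(magnitude, direction):
    """Thin gradient ridges: keep a pixel only if it is the local maximum
    along the gradient direction (directions quantised to 4 sectors)."""
    m = np.asarray(magnitude, dtype=float)
    out = np.zeros_like(m)
    # fold angles into [0, 180) degrees
    angle = (np.rad2deg(np.asarray(direction)) + 180.0) % 180.0
    for i in range(1, m.shape[0] - 1):
        for j in range(1, m.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:         # horizontal gradient
                n1, n2 = m[i, j - 1], m[i, j + 1]
            elif a < 67.5:                      # 45 degree gradient
                n1, n2 = m[i - 1, j + 1], m[i + 1, j - 1]
            elif a < 112.5:                     # vertical gradient
                n1, n2 = m[i - 1, j], m[i + 1, j]
            else:                               # 135 degree gradient
                n1, n2 = m[i - 1, j - 1], m[i + 1, j + 1]
            if m[i, j] >= n1 and m[i, j] >= n2:
                out[i, j] = m[i, j]
    return out
```

Applied to a blurred vertical ridge in the gradient magnitude, only the central column of the ridge survives, producing a one-pixel-wide edge ready for thresholding.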


As a conventional detection scheme, the Canny edge detection method marks edges at gradient maxima using a directional operator. The main principle of the method is its optimization for identifying target boundaries. Additionally, owing to the localization criterion, the outputs of the Canny method approach the centre of the true edge, while noise along the edges is eliminated (Canny, 1986a). However, as with other common edge-based detectors, the Canny method has some limitations. The operation relies strongly on the gradient magnitude, computed over discrete pixels, to search for potential edges, and hence can give rise to incomplete and discontinuous edge results. This method will be used for extracting the boundaries of water bodies in Chapter 5.

3.11 Speckle Noise Filtering

3.11.1 UNSW (University of New South Wales) Adaptive Filter (UAF)

Generally, speckle noise is common to all SAR imaging systems, and it reduces the potential for recognition of objects/targets (Bamler, 2000). The accuracy of classification and segmentation, and even of edge detection, can be dramatically affected by speckle noise, so de-noising becomes one of the imperatives of pre-processing (Ali et al., 2008). Adaptive filters such as the Kuan (Kuan et al., 1987), Lee minimum mean square error (MMSE) (Lee, 1981) and Frost (Frost et al., 1982) filters use a priori statistical knowledge in the model to suppress speckle noise, at the expense of detail degradation. By employing the gamma maximum a posteriori (MAP) (Lopes et al., 1990a) and enhanced Lee (Lopes et al., 1990b) filters, which use the homogeneity level, meaningful information can be sufficiently preserved, yet residual noise remains.

These common statistical filters pay little attention to the homogeneity of intensity levels of the SAR image, leading to a degradation of spatial resolution due to loss of edge detail; these are the main deficiencies of the speckle reduction filters listed. Furthermore, it is also important to suppress noise in SAR images, particularly at edge regions, where it causes redundant sub-branches during the skeletonizing process (Soille, 2003), as shown in Figure 3.11 and Figure 3.16.

Figure 3.16 has been removed due to Copyright restrictions.

Figure 3.16 (Left) Demonstration of the robust skeleton of a rectangle; (Middle, Right) sub-branches in the skeletons generated by noise along the boundary of the rectangle. (Beucher and Meyer, 1992)

Considering the deficiencies of these common speckle noise filters, a novel noise reduction approach, the UNSW (University of New South Wales) Adaptive Filter (UAF) (Shamsoddini and Trinder, 2012), will be employed in Section 5.1; it is able to preserve edges and textural information while simultaneously reducing the noise to an acceptable level.

Due to the complex texture of hurricane-affected areas, including bare islands, noisy inland areas, the relatively calm ocean surface in the TC eye area and the rough ocean surface caused by rainbands, UAF is expected to perform better than the traditional filters. The principle of UAF is that it not only focuses on the level of homogeneity, but also pays close attention to the presence of textural features. As textural information is well maintained by UAF while speckle noise is suppressed, more effective and robust interpretation of the images should be achievable in the subsequent procedures. UAF plays a crucial role as an adaptive and robust discriminator between speckle and textural information.

3.11.2 Median Filter

Median filtering was initially introduced by Tukey as a nonlinear smoother for time series data (Tukey, 1977). Nowadays it is widely used in image processing as a nonlinear digital filtering technique (Gallagher Jr and Wise, 1981, Pratt, 1975). One of the useful characteristics of median filtering is its capacity to preserve edges while filtering out noise. The main idea of the median filter is to run a square kernel of size (2k+1) × (2k+1), k = 1, 2, 3, …, over the entire image, replacing the central pixel at each position of the kernel by the median value of its neighbourhood. For an image f, the median filtering operation using the square window W centred at (m, n) is:

(Eq 3.46)  g(m, n) = median{ f(i, j) : (i, j) ∈ W(m, n) }

where g(m, n) denotes the output of the filter. Linear filters tend to smooth sharp edges and degrade other delicate image details, and perform poorly in the presence of signal-dependent noise; with nonlinear filters, however, the noise can be removed without any attempt to explicitly identify it. The median filter will be used in Section 5.2 for automatic processing of SAR data; the filtered image shows improved speckle suppression at various noise densities while preserving the water body edges and other corresponding textural information at appropriate levels.
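A minimal numpy sketch of the median filtering operation of Eq 3.46, with edge-replication padding so the output has the same size as the input; the function name and padding choice are illustrative.

```python
import numpy as np

def median_filter(image, k=1):
    """(2k+1) x (2k+1) median filter with edge-replication padding.

    Each output pixel is the median of its square neighbourhood, which
    removes impulse (speckle-like) noise while preserving step edges.
    """
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, k, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + 2*k + 1, j:j + 2*k + 1])
    return out
```

An isolated impulse is removed entirely, while an intensity step passes through unchanged, illustrating the edge-preserving property described above.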


Chapter 4 Applications of Remote Sensing Data for Tropical Cyclone Studies and Flood Mapping


4.1 Introduction

In this chapter, the methods reviewed in Chapters 2 and 3 are examined to determine appropriate approaches for tropical cyclone monitoring, flood mapping and general water body detection. The analysis of past research will assist the development of methods for processing the case studies in Chapter 5. For instance, visible (Vis) or infrared (IR) images acquired by optical remote sensing satellites are used for identifying cloud atlases in the classic Dvorak technique. Temperature information, influenced by cloud type and thickness, can help locate and estimate the eye area of a tropical cyclone after appropriate processing. Moreover, ocean SAR data plays an increasingly important role, because its range of polarizations, extensive coverage and high resolution allow more detailed observation. For example, ocean surface wind vector fields can be extracted from the C-band Radarsat-1, launched by the Canadian Space Agency (CSA), using radar backscatter together with wavelength and incidence angle. Other techniques, such as wind speed retrieval models based on ocean surface studies using upper-air balloons, buoys and reconnaissance aircraft, can also aid the location and forecasting of TC movement, as discussed in Section 4.2.

The capability of active remote sensing systems for water body detection and flood mapping studies derives from their ability to penetrate cloud, operate day and night, and acquire multi-temporal imagery. The characteristics of inundation boundaries extracted from SAR data in X and L bands with multiple polarizations are reviewed in Section 4.3.3. In Section 4.3.2, passive remote sensing systems, such as visible and infrared imagers, are reviewed as providers of fine temporal and spatial resolution images for flood mapping. In addition, DEM data and airborne or terrestrial laser scanning techniques have also been extensively applied in recent years for depicting the boundaries of inundation with high positional accuracy, as described in detail in Section 4.3.


4.2 Approaches to Tropical Cyclone Study

4.2.1 Introduction

The potential of Radarsat imagery has been demonstrated operationally for many years for detecting ships, ice movement and ocean surface oil spills, and for coastal flood mapping. Synthetic aperture radar (SAR) data has a unique capability for monitoring marine meteorology and ocean related activities.

Hurricanes and tropical cyclones are among the most damaging natural disasters worldwide, especially for coastal residents; typically they cause flooding, resulting in financial loss and casualties. The benefit of satellite images is that they provide large coverage and frequent repeat observations for Tropical Cyclone (TC) research over extensive periods. Many satellite observations for cyclone monitoring, particularly using optical satellites, have been conducted since the 1960s using data from operational polar-orbiting and geostationary satellites (Bell and Montgomery, 2008, Hasler et al., 1998, Li et al., 2013, Zhu et al., 2004). For instance, cloud changes, the area of the centre or eye of a cyclone, and the temperature of the cyclone eye can be estimated and tracked from visible (Vis) or infrared (IR) images obtained by optical meteorological satellites. Recently, the application of Radarsat SAR images has been recognised for hurricane and cyclone monitoring. SAR provides high resolution images of the ocean surface that cannot be acquired by infrared and visible sensors when there is thick cloud cover. SAR ocean images offer a complementary view of hurricane and cyclone eyes, eyewalls and rainbands, as well as wind-related ocean features such as surface waves, as a result of the signal response from sea surface backscatter (Lee and Liu, 1999, Suhn et al., 1999, Friedman and Li, 2000b, Sikora et al., 2000). The characteristics of TC eyes are closely related to their behaviour, which can play a key role in establishing the relationship between forecasting and monitoring. Therefore, the study of the features of a TC may enable us to interpret a hurricane's evolution and variation; this is part of the subject matter of this research.


4.2.2 Identification of Tropical Cyclone Characteristics Using the Dvorak Technique

Satellite technology plays a pivotal role in TC, hurricane and typhoon interpretation, in which particular characteristics, such as the spiral shape and central eye, are the most important features of this meteorological phenomenon, which affects the daily lives of many people in tropical areas. Over the past few decades, extensive research has been conducted to estimate the intensity and movement of TCs from images acquired by multiple sensors on different satellite missions. The Dvorak technique (Dvorak, 1972, Dvorak, 1975, Dvorak, 1984) is acknowledged as one of the best known methods; in it, the TC is assigned a T-number and wind intensity value according to the size, shape and vorticity of the dense cloud shield near the centre of the cyclone.

According to classic Dvorak theory, certain types of tropical cyclones can be classified against corresponding "templates" through their life cycles, as shown in Figure 4.1. Additionally, as can be seen in Figure 4.2, cyclones of different strengths can also be classified visually based on their spiral shape.

Figure 4.1 has been removed due to Copyright restrictions.

Figure 4.1 Illustration of T-Numbers of the Dvorak technique (Jarvinen et al., 1988)

Figure 4.2 has been removed due to Copyright restrictions.

Figure 4.2 illustration of pattern strength based on Dvorak technique (Jarvinen et al., 1988)

Nevertheless, cloud pattern characteristics vary greatly, and scene analysis techniques are inefficient in identifying and extracting cloud systems. These problems add to the difficulty of TC pattern matching using the Dvorak technique, which is predominantly based on subjective human judgement. An elastic graph dynamic link model (EGDLM) was introduced by Lee and Lin (2001) to provide an automated graph-matching solution based on Dvorak analysis. This solution uses an extension of dynamic link architecture (DLA) to automatically extract the contours of TC patterns using active contour models (ACM), as described in Chapter 3 (Cohen, 1991, Xu and Prince, 1998, Velden et al., 1989). Previous visual TC pattern recognition can therefore be transformed into an adaptable pattern-matching solution based on TC contour patterns. The principle of the DLA model was initially proposed by Von Der Malsburg (1994); it compares stored patterns in the analysis dataset with the input cyclone pattern by establishing internal connections for each pattern, and then interprets and matches them. The significant advantage of the DLA algorithm is its inherent invariance properties. Based on this matching structure, numerous transformations such as morphological dilation and erosion, reflection, translation, rotation and distortion have also been widely used in many research fields, such as invariant character recognition (Liu and Lee, 1997) and face recognition (Wiskott and Von Der Malsburg, 1996, Würtz, 1997). As mentioned above, the most widely utilized satellite-based method for estimating TC intensity is the Dvorak technique, which has been employed operationally in TC forecasting for over 30 years. The accuracy of Dvorak-based intensity estimates for determining TC category, in terms of the root-mean-square error (RMSE) of wind intensity, is in the range of 46–64 m/s (Knaff et al., 2010).

4.2.3 Data Acquisition and Analysis

Radarsat-1 was launched in November 1995 into a Sun-synchronous polar orbit for the Canadian Space Agency (Vachon et al., 2001). It carries a C-band SAR (5.6 cm wavelength) with HH polarization, looks to the right side of the satellite, and crosses the equator at 6 pm local time in ascending mode. The resolution of the ScanSAR mode is 100 m, the pixel spacing is 50 m and the swath width is 500 km (Suhn et al., 1999, Reppucci et al., 2010).

In the hurricane season, the Atlantic Ocean along the southeast coast of the United States is the priority zone for hurricane monitoring by the National Oceanic and Atmospheric Administration (NOAA) and the National Environmental Satellite, Data, and Information Service (NESDIS) (Dvorak, 1984); during this time the Radarsat-1 trajectory is scheduled for wide area coverage, which provides sufficient data for observation of developing hurricanes.


The advantages of SAR images for detecting TCs were demonstrated in the cases of hurricanes Bonnie and Danielle in 1998 (Schwendike and Kepert, 2008), for which two images and one image respectively were captured, with a clear, detectable hurricane eye in each case. A time series of SAR ocean images is indispensable for understanding hurricane progression; however, it is also a challenge to obtain more than one image of a single hurricane event. Availability of SAR time series is rare; NOAA did manage to acquire images of Hurricane Bonnie twice (25 and 28 August 1998) (Zhu et al., 2004), and although the images failed to cover the entire storm area in ScanSAR wide mode, they are still useful for studying part of the hurricane's development.

The significant advantage of SAR images is that they enable the observation of variations in ocean surface roughness resulting from variations in wind strength. SAR images present alternating patterns of light and dark recorded through the radar cross section (RCS) (Knott et al., 2004): higher wind speeds enhance the backscatter signal, so the wind direction vector can be inferred from the magnitude of the ocean surface roughness signature. For instance, comparison with buoy observations on the ocean surface indicates wind speed errors of about 1.9 m/s RMS for ERS SAR and 2.4 m/s for Radarsat-1 SAR wind field retrievals (Vachon and Dobson, 2000). The relative wind direction can be difficult to obtain because, unlike a scatterometer, the SAR image provides only one look direction with respect to the wind field. Therefore, additional wind direction information from other sources is needed, such as ocean surface wind fields measured by estimation models or scatterometer (SCAT) data. Such multisource data would enable the detection and estimation from SAR images of a broader range of correlated meteorological information, including precipitation variability, squall lines, rainbands and storm cells. Areas of relatively low backscatter on SAR images are probably caused by rain-induced atmospheric attenuation, or because rain dampens the capillary waves on the sea surface, giving rise to dark regions (low backscatter) on the SAR image.


Chapter 4: Applications of Remote Sensing Data for Tropical Cyclone Studies and Flood Mapping

4.2.4 Ocean Surface Wind Speed Retrieval Model

4.2.4.1 Multisource for Wind Speed Retrieval

Many other techniques have been applied to TC studies (see Figure 4.3), including radiosonde balloons for upper-air measurements, ships and buoys, surface meteorological stations, and weather radar, which is particularly useful for examining cyclone structure and wind patterns once the cyclone is near or over land.

Figure 4.3 has been removed due to Copyright restrictions.

Figure 4.3 Illustration of tropical cyclones observation (World Meteorological Organization)

For instance, in the case study of Hurricane Bonnie (27/08/1998), a reconnaissance aircraft was sent into the storm to provide measurements important for forecasts, e.g. the central barometric pressure. However, reconnaissance aircraft observations are restricted and cannot be regularly committed to enter a cyclone because of the danger to the crew. Thus, satellites are the key technique for observing hurricanes indirectly, and they have greatly improved the ability to monitor and understand hurricanes; here, interest is focussed on wind speed retrieval from the RCS.

In the Hurricane Georges study (26/08/1998) (Friedman and Li, 2000a), the SAR image structure was found to be highly correlated with consecutive surface wind fields obtained from NOAA P-3 flights over the hurricane (Vachon et al., 1999). The Ocean Monitoring Workstation (OMW) automatically derived the wind direction from the relatively low-wavenumber image energy trend (Vachon and Dobson, 2000), whilst the wind speed field was derived from the RCS.

When the sea surface wind field is compared with SAR observations, the wind retrieval model derived from a hybrid C-band HH model function can reach an accuracy of 15-25 m/s (Vachon and Dobson, 2000, Vachon et al., 2000), which indicates lower accuracy than the buoy observations noted above.


Radarsat SAR data, combined with precipitation imagery acquired by the Special Sensor Microwave Imager (SSM/I) on the Defense Meteorological Satellite Program polar-orbiting satellites, make it possible to develop a three-dimensional model for studying hurricanes, and SAR also provides an adaptive means of validating the wind speed retrieval model. Vachon and colleagues (Vachon and Dobson, 2000, Vachon et al., 2000, Vachon et al., 2001) proposed two main methods for estimating wind vectors from SAR ocean images. The first approach uses the RCS together with the known imaging geometry and wind direction of the SAR image to estimate the wind speed. It requires the sea surface wind vector derived by a wind retrieval model developed from ocean wind scatterometry, the local incidence angle, the observed radar cross section from the ocean SAR image, and the relative wind direction (Scoon et al., 1996, Johannessen et al., 1996).

The other method uses the azimuth cutoff of the SAR spectrum to calculate the ocean wind speed. Azimuth cutoff refers to the distortion of the SAR image spectrum resulting from SAR Doppler misregistration along the azimuth direction. The azimuth cutoff is determined by the ocean surface SAR imaging configuration (i.e. sensor parameters, platform altitude and velocity, etc.) and the sea state conditions (Chapron et al., 1995, Kerbaol et al., 1998).

In addition, measurement of the spectral width plays a prerequisite role in this process, and models relating the cutoff wavelength, the sea surface wave spectrum and the wind speed have been proposed for this task (Vachon and Dobson, 2000, Vachon et al., 2000, Vachon et al., 2001).
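The Gaussian-fit idea behind azimuth cutoff estimation can be sketched as follows. This is a simplified illustration (the function and variable names are hypothetical), assuming a one-sided, normalized azimuth autocorrelation function (ACF) has already been computed from the image spectrum; it fits C(x) = exp(-(πx/λc)²) by linear regression on -ln C versus x², rather than implementing any particular published estimator in full:

```python
import numpy as np

def azimuth_cutoff(acf, dx):
    """Estimate the azimuth cutoff wavelength lambda_c by fitting a
    Gaussian C(x) = exp(-(pi * x / lambda_c)**2) to the normalized
    azimuth autocorrelation function (simplified sketch).

    acf : 1-D normalized one-sided autocorrelation (acf[0] == 1)
    dx  : azimuth sample spacing in metres
    """
    x = np.arange(len(acf)) * dx
    # Use only clearly positive ACF samples so the logarithm is defined
    mask = acf > 0.05
    # Least-squares slope of -ln C(x) against x^2 gives (pi / lambda_c)^2
    slope = np.sum(-np.log(acf[mask]) * x[mask] ** 2) / np.sum(x[mask] ** 4)
    return np.pi / np.sqrt(slope)
```

For a perfectly Gaussian ACF the fit recovers the cutoff exactly; on real data the 0.05 floor and the fitting range would need tuning.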

4.2.4.2 SAR Wind Speed Retrieval

The retrieval of ocean surface wind from SAR data can be divided into two major steps. In the initial step, wind directions must be retrieved, as they are an essential input for the following step. For instance, retrieving a high-resolution wind vector field has been shown to be feasible using ScanSAR data acquired by the Canadian C-band Radarsat-1 at moderate incidence angles (discussed in Section 2.2.3) between 20° and 50° with horizontal (HH) polarization (discussed in Section 2.1.3) (Horstmann et al., 2000). For this range of incidence angles and electromagnetic wavelength, the radar backscatter from the sea surface is principally produced by the small-scale surface roughness, which is largely governed by the local wind field, thus allowing the backscatter to be empirically associated with wind speed (Donelan and Pierson, 1987). The wind direction can also be retrieved from the direction of wind-induced stripes, which are approximately aligned with the mean wind direction near the sea surface and are caused by wind shadowing or atmospheric boundary layer rolls. The direction of these stripes can be retrieved by different approaches: one uses spectral methods (Vachon and Dobson, 1996, Horstmann et al., 2000, Fetterer et al., 1998), while the other applies locally derived gradients (Horstmann et al., 2002, Koch, 2004) in the spatial domain to extract the stripe directions.

In the following step, wind speeds are derived from the normalized radar cross section (NRCS) of the sea surface by using a geophysical model function (GMF), together with the local wind direction retrieved in the initial step. The GMF describes the dependence of the NRCS on the radar imaging geometry and the wind (Horstmann and Koch, 2005). The CMOD4 (C-band Model Function) (Lecomte, 1993) describes this dependence of the NRCS on wind and imaging geometry, and was originally developed for the scatterometers of the European remote sensing satellites ERS-1 and ERS-2 operating at C-band with vertical polarization. To retrieve wind speed from ocean SAR data, the semi-empirical C-band model CMOD4 for vertical (VV) polarization (Vachon and Dobson, 1996, Lehner et al., 1998, Fetterer et al., 1998, Horstmann et al., 2002), based on the NRCS, can also be extended to the HH polarization of Radarsat-1 by applying a polarization ratio (Vachon and Dobson, 2000, Horstmann et al., 2000, Horstmann et al., 2001). In addition, a newer SAR wind retrieval method that estimates quantities such as the 2-D divergence or vorticity (Schulz-Stellenfleth et al., 2007), based on the GMF CMOD5 (Hersbach et al., 2007), can also provide a consistent wind field with a good wind speed estimate. The sources of error in the ScanSAR wind retrieval model for estimating wind speed include uncertainties in the incidence angles, the NRCS and the wind direction (Horstmann et al., 2002, Horstmann et al., 2000).
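The two-step retrieval can be illustrated with a toy example. The model function below is a deliberately simplified, monotonic placeholder, not the actual CMOD4/CMOD5 coefficient set; only the overall structure (NRCS as a function of wind speed, relative wind direction and incidence angle, inverted numerically for wind speed once the direction is known) reflects the method described above:

```python
import numpy as np

def toy_gmf(u10, phi_rel, theta):
    """Placeholder C-band-like GMF (NOT the real CMOD4/CMOD5
    coefficients): NRCS in linear units as a function of 10-m wind
    speed u10 [m/s], relative wind direction phi_rel [rad] and
    incidence angle theta [rad]. Monotonic in u10 by construction."""
    a = 0.02 * np.cos(theta) ** 3                      # incidence-angle roll-off
    upwind = 1.0 + 0.4 * np.cos(phi_rel) + 0.2 * np.cos(2 * phi_rel)
    return a * upwind * u10 ** 1.5

def invert_wind_speed(nrcs, phi_rel, theta, lo=0.1, hi=60.0):
    """Invert the GMF for wind speed by bisection, assuming the wind
    direction (from stripes or scatterometer data) is already known."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if toy_gmf(mid, phi_rel, theta) < nrcs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the placeholder GMF is monotonic in wind speed, bisection is sufficient; a real CMOD inversion additionally handles the polarization ratio for HH data and calibration of the NRCS.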


4.2.5 Tropical Cyclone Eyes Detection and Location

Normally, hurricanes/cyclones are defined as storms with sustained wind speeds of 33 m/s or more; a relatively calm region called the "eye" lies at the centre of the spiral cloud pattern, typically 30-50 km in diameter, while the storm as a whole may span more than 500 km for extreme events. Due to the unpredictable trajectories of hurricanes, establishing their probable landfall regions early in the forecasting process is imperative, because they are likely to bring heavy rain or even inland inundation that may last several weeks. The value and practicality of this research is therefore clear. The Hurricane Danielle C-band SAR image was acquired on 31 August 1998 in Radarsat ScanSAR wide mode, with an identifiable central eye approximately 40 km in diameter; the entire extent of Danielle covered 500 km, the full ScanSAR swath width. The wind direction is presented as a bright area around the hurricane eye, and the storm's structure is also clearly imprinted on the ocean surface as described by the backscatter cross section (BCS).

To sum up, the TC intensity category is determined by the Dvorak model (T number) (Dvorak, 1972, Dvorak, 1975, Dvorak, 1984) and, in most cases, by the maximum sustained winds (Holliday, 1969, Atkinson and Holliday, 1977) of TCs over open flat land or water. These are the common indicators of storm intensity defined by the World Meteorological Organization (WMO). Specifically, the maximum sustained wind (Olander et al., 2004) is the highest wind at a height of 10 metres averaged over either a one-minute (U.S. standard) or a ten-minute time span (Tropical Cyclone Weather Services Program, 2006). The maximum sustained wind typically occurs some distance from the centre of the TC, at what is termed the radius of maximum wind, within a complete eyewall (shown in Figure 4.4 below) (Blanchard and Hsu, 2005). Between the TC eye and the eyewall there is a discontinuous wind boundary (Du and Vachon, 2003); together with the TC eye itself (the area that appears dark in SAR images, shown in Section 5.1.3, Figures 5.6-5.8), these are the two significant characteristics of a clear eye in SAR data. Since the category is determined by indirect methods, there is no adequate indication of its accuracy.


For a sufficiently strong TC, i.e. hurricane Category 4-5, the air may sink over a layer deep enough to suppress cloud formation, thereby creating a clear TC eye in the SAR data. A stronger TC demonstrates spiral banding and increased centralization, while the strongest develop an eye. The maximum sustained wind is experienced around the eyewall of the cyclone (Shapiro and Willoughby, 1982, Willoughby, 1990). Consequently, the TC eye location and the wind speed/direction field derived from SAR images can help meteorologists study wind speed, supporting category identification and the study of TC morphology and dynamics.
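As a minimal sketch of how a dark TC eye might be isolated from a SAR intensity image with basic mathematical morphology (thresholding, opening/closing to suppress speckle, then keeping the largest connected component), the following illustrative function can be used. It deliberately omits the skeleton-pruning/DCE stage used in this thesis, and all thresholds and names are assumptions:

```python
import numpy as np
from scipy import ndimage

def extract_eye_mask(img, dark_quantile=0.05, struct_size=3):
    """Sketch of dark-region TC eye extraction from a SAR intensity
    image: threshold the lowest-backscatter pixels, clean the binary
    mask with a morphological opening and closing, and keep the
    largest connected blob as the candidate eye."""
    dark = img < np.quantile(img, dark_quantile)
    st = np.ones((struct_size, struct_size), bool)
    mask = ndimage.binary_closing(ndimage.binary_opening(dark, st), st)
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

The opening removes isolated speckle-induced dark pixels while approximately preserving a compact eye region; the eye area (in pixels) then follows directly from the mask sum.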

Figure 4.4 has been removed due to Copyright restrictions.

Figure 4.4 Illustration of TC structure in vertical slice through the centre of a mature TC (Knutson et al., 2010)

4.3 Approaches of Water Body/Flood Mapping

4.3.1 Introduction

Inundations are among the most hazardous disasters globally, responsible for enormous casualties and economic losses almost every year. Detection and mapping of floods is becoming increasingly valuable, not only to assist with the settlement of insurance claims, but also for real-time and near-real-time flood monitoring in government applications.

The challenge for the SES (State Emergency Service) and other governmental agencies is that they currently make limited use of multi-source data, such as spaceborne and Unmanned Aerial Vehicle (UAV) data. Vast coverage and multi-temporal images also require significant levels of image processing. Optical images, although they offer true colour and fine resolution, are of limited use for detecting flood extent because of their inability to penetrate cloud and vegetation cover.

There are a range of reasons why SAR images are suitable for flood mapping, including their capability to penetrate thick cloud and their insensitivity to daylight. As stated in Section 2.2.3, the multiple combinations of radar polarization represent the different scattering behaviours of the ground surface. Combining different polarizations with different wavelengths and incidence angles plays an important role in increasing the competence of interpreters, particularly in distinguishing flooded from non-flooded extents for inundation hazard mapping. SAR images are thus able to image land features even under the forest canopy. For example, at the same wavelength and incidence angle, the backscatter value is greater in a horizontally polarized image than in a vertically polarized one. This demonstrates the identifiable characteristics of images with different polarizations, supporting detailed segmentation based on the different radar signals.

The purpose of this section is to briefly review recent efforts and developments of flood hazard mapping and monitoring using SAR data. This literature review includes the following aspects:

• The development of flood delineation by using SAR imagery

• The evaluation of water bodies and boundary detection

• The validation with multisource remote sensing data

4.3.2 Flood Monitoring with Passive Remote Sensing System

Natural radiation that is emitted or reflected from an object or its surrounding areas can be measured by passive sensors. Reflected sunlight, comprising visible and infrared radiation, is the most common source of radiation detected by passive sensors. Several optical sensors have been used in the past: the Landsat Thematic Mapper (TM) and Multi-Spectral Scanner (MSS), medium spatial resolution sensors that have been extensively used; the Advanced Very High Resolution Radiometer (AVHRR), a low spatial resolution sensor on NOAA's polar orbiting environmental satellites (POES), launched by the National Oceanic and Atmospheric Administration; the Satellite Pour l'Observation de la Terre (SPOT), designed by the French government institution CNES (Centre National d'Études Spatiales); the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER); and the Moderate-Resolution Imaging Spectroradiometer (MODIS) on board the Terra (EOS AM) and Aqua (EOS PM) satellites. The recently launched Landsat-8 satellite carries instruments referred to as the Operational Land Imager (OLI) and an additional Thermal Infrared Sensor (TIRS). All of these sensors are affected by the clouds that commonly occur during floods.

In the early stages of spaceborne remote sensing, images obtained by the Multi-Spectral Scanner (MSS) sensor carried on the Landsat missions, with a spatial resolution of 80 m, were the major data source for flood mapping projects. Groundbreaking applications of satellite remote sensing in the USA mainly focused on areas that were frequently flooded, with the aim of reducing the loss of life and property. Landsat data were applied to generating and monitoring flood extent in Iowa (Hallberg et al., 1973, Rango and Salomonson, 1977), Arizona (Morrison and Cooley, 1973), and the Mississippi River basin (Deutsch and Ruggles, 1974a, Deutsch and Ruggles, 1974b, McGinnis, 1975, Morrison and Cooley, 1973, Rango and Salomonson, 1977). The major finding was that flood region boundaries estimated from MSS data matched results from aerial photography reasonably well (Hallberg et al., 1973, Morrison and Cooley, 1973).

In these investigations, Landsat MSS band 7 (wavelength about 0.8-1.1 µm) was the most appropriate band for delineating water bodies against other surfaces such as dry and moist soil and vegetated areas (Hallberg et al., 1973, Smith, 1997). This is because band 7 exhibits stronger water absorption than the other bands in the near-infrared spectral region, meaning that less near-infrared energy is reflected by water bodies, so such areas appear as relatively dark objects in the near-infrared image. These dark areas are very similar to true-colour water bodies, which could pose a challenge for interpretation (Wang et al., 1994). However, this problem can be addressed by combining Landsat bands 4 and 7. Landsat TM band 4 has a similar appearance to band 5 for discriminating between normal surfaces and flooded areas.


This result was confirmed by later research comparing panchromatic and infrared aerial images (Hallberg et al., 1973). The strandline neighbourhood was used as the area of interest for the analysis of band 5 (0.6-0.7 µm) and band 7, combined with radiometer statistics (Gupta and Banerji, 1985). The results indicated that the error in delineating water areas can be reduced to less than 5% by suitable assessment (Rango and Salomonson, 1977). In addition, Gupta and Banerji (1985) reported almost the same level of accuracy as the previous research.

Nevertheless, in Bangladesh, 80 m spatial resolution was far from sufficient to separate flooded from non-flooded regions; for instance, paddy fields (vegetation mixed with water) also influenced the classification results to some extent (Imhoff et al., 1987). Moreover, high reflectance over the visible and NIR range affects boundary extraction, which can make a great difference in defining the extent of flooded vegetated areas such as forests and pastures (Hallberg et al., 1973, Moore and North, 1974). The uncertainty in classifying flooded forests has been addressed with approaches such as GIS hydrological models (Dasilva and Kux, 1992a, Dasilva and Kux, 1992b) and probabilistic maps of inundation boundaries that integrate model uncertainty and natural variability (Merwade et al., 2008).

The limitation of visible and near-infrared images for flood mapping is that the sensors are incapable of penetrating cloud, as shown for example in the attempts to use Landsat images by Rango and Salomonson (1977), Paul and Rasid (1993), Mallik and Rasid (1993) and Lowry et al. (1981). In fact, locating inundated areas is also more difficult when estimating the extent of receding floodwaters, because these areas remain wet and covered by biomass. Thus, it was necessary to evaluate errors in flood estimation during the recession period of less than approximately two weeks (Deutsch and Ruggles, 1974b, Anderson, 1974, Morrison and Cooley, 1973, Rango and Salomonson, 1977).

However, while the use of visible/infrared images was promoted in the early 1980s, and despite some achievements with satellite optical data, SAR images provide a more direct methodology for this problem, with the advantage of also penetrating the forest canopy in all weather conditions.

4.3.3 Flood Mapping and Monitoring from Radar Images

SAR sensors can image the Earth's surface at any time and in any weather conditions, and hence offer a unique and complementary technique for flood mapping and monitoring. An early experiment in mapping flood inundation extent with radar images was carried out by Lowry et al. (1981), using an airborne side-looking radar (SLAR) that could operate in both X and L bands. In Brazil in the late 1970s, X-band SLAR images were also used to monitor the floodplain lakes of the Amazon River (Sippel et al., 1992). At the First ERS Thematic Working Group Meeting on Flood Monitoring (ESA/ESRIN, 1995), many researchers reported flood mapping results obtained with European remote sensing satellite (ERS)-1 C-band SAR images. It was widely accepted that for a smooth open water surface without vegetation such as bushes and trees, the radar intensity is very low because of specular reflection from the mirror-like water surface. Due to this characteristic, flood boundaries could easily be determined from radar intensity images with high accuracy.

Some of the former and current SAR satellites used for water body detection/flood mapping are:

• RISAT-1 (SAR, ISRO India, 2012)
• RADARSAT-1 (SAR, Canada, 1995)
• RADARSAT-2 (SAR, Canada, 2007)
• TerraSAR-X (SAR, Germany, 2007)
• TanDEM-X (SAR, Germany, 2010)
• COSMO-SkyMed (SAR, Italy, 2007)
• TecSAR (SAR, Israel, 2008)
• Shuttle Imaging Radar (see Shuttle Radar Topography Mission) (SAR)
• JERS-1 (SAR)
• ALOS (PALSAR)
• ERS-1 & ERS-2 (European Remote-Sensing Satellite) (altimeter, combined SAR/scatterometer)
• Envisat (ASAR, altimeter)
• Tropical Rainfall Measuring Mission (TRMM, Precipitation Radar)

When a radar transmits a microwave signal to the ground surface, small particles (roughly the same size as the radar wavelength) of the imaged target will reflect the microwave energy both away from and towards the radar antenna. The amount of backscatter (mentioned in Section 2.2.2) is a function of surface roughness, incidence angle and soil moisture, the latter acting through the dielectric constant, and of wavelength. The complex dielectric constant, which varies with the different materials on the ground surface, is defined by two terms, permittivity and conductivity, which depend strongly on the water content (humidity) of the material, as presented in detail in Section 2.2.2. For a comparatively large area (such as a pixel of tens of metres resolution), the radiation pattern of the microwave energy reflected by the imaged surfaces also depends on the terrain topography, especially the local slope, which gives a very complex radiation pattern.

For a smooth open water surface, the slope and roughness can be assumed to be approximately zero; hence, the majority of the transmitted radar energy will be obliquely incident on the surface and reflected away from the radar, resulting in an extremely low backscattered signal. Strong winds will reduce this specular reflection because they increase the roughness of the water surface, and hence increase the microwave energy reflected back to the receiver. For the unsaturated soils of flooded areas, the radar intensity will be higher than before the flood event, because the water content of the soil increases the dielectric constant (Elachi and Van Zyl, 2006); this effect is also influenced by the wavelength: the longer the wavelength, the greater the sensitivity of the dielectric constant to soil moisture content. Thus, L-band SAR data tend to be more sensitive to soil moisture than shorter-wavelength bands (Oberstadler et al., 1997).

Radar with multiple polarizations and frequencies can provide much more information about the imaged ground features than a single-frequency, single-polarization system. For instance, X-band (2.4-3.8 cm) and C-band (3.8-7.5 cm) radar signals are strongly backscattered by the leaves and branches of vegetation and trees, while L-band (15.0-30.0 cm) and P-band (30.0-100.0 cm) signals have stronger penetration ability and therefore interact more with trunks and large branches. Comparison of signals from multi-frequency and multi-polarization data can help to identify or classify the imaged ground surface. The intensity of an L-band radar image will normally be higher in flooded vegetated areas because of the double-bounce effect between the water surface and the vegetation or trees (Figure 4.5) (Hoffer et al., 1985, Hess et al., 1994, Melack et al., 1994).

Figure 4.5 has been removed due to Copyright restrictions.

Figure 4.5 Scattering mechanisms of a non-flooded forest (left column) and flooded forest (middle and right column) (Bourgeau-Chavez et al., 2009)

For polarimetric data, Durden et al. (1989) utilized an airborne radar to examine the phase difference between polarizations before and after a flood in their modelling process. A 180-degree phase difference between the vertical and horizontal polarizations in the received data is taken as evidence of double-bounce scattering. In addition, the magnitude of the intensity change was also strongly affected by vegetation type, which has been confirmed in many applications (Krohn et al., 1983, Evans et al., 1986, Harris et al., 1986, Sheffner, 1994). Many studies using multi-polarization airborne radar imagery (Wedler and Kessler, 1981, Evans et al., 1986, Hess et al., 1994) have suggested that the best choice for mapping floods under trees is co-polarization (HH and VV) rather than cross-polarization (HV and VH). At the same time, some researchers, such as Wu and Sader (1987), suggested that the ratio of co-polarization to cross-polarization is more suitable and efficient for detecting flooding under trees.
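The 180-degree phase criterion can be expressed compactly. The following is an illustrative sketch, assuming co-registered complex HH and VV channels; the tolerance parameter is a hypothetical choice, not a value from the cited studies:

```python
import numpy as np

def double_bounce_mask(hh, vv, tol_deg=45.0):
    """Flag pixels whose HH-VV phase difference lies within tol_deg of
    180 degrees -- the signature commonly attributed to double-bounce
    scattering (e.g. trunk-water interaction in flooded forest).

    hh, vv : co-registered complex (single-look) images."""
    phase = np.angle(hh * np.conj(vv))            # phase difference in (-pi, pi]
    return np.abs(np.abs(phase) - np.pi) < np.deg2rad(tol_deg)
```

In practice the phase difference would be estimated over a multi-look window to suppress speckle before applying any such threshold.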

Apart from polarization, the incidence angle is also a very important parameter in radar flood mapping. Hess and Melack (1994) acquired radar images over the Altamaha River in Georgia using an L-band HH polarization radar. Image processing and analysis showed that the intensity of the L-band image is significantly related to the radar incidence angle: the flooded forest could easily be identified in radar images acquired at incidence angles from 18° to 45°, but at an incidence angle of 58° the flooded forest areas could no longer be separated from un-flooded dry forest. To compare the differences between bands, JPL (Jet Propulsion Laboratory) imaged the same area using HH-polarized radar transmitting in the C, L and P bands. Over marsh areas, the intensity of the C-band images increased after flooding, induced by the double-bounce effect where both bushes and a flooded water surface were present. At the same time, the L- and P-band images did not show any large differences before and after flooding, because the low bushes did not cause double-bounce at these two wavelengths. For flooded areas with a dense, high forest canopy, the responses changed: under these circumstances the L- and P-band images showed large intensity differences before and after flooding, because radar signals at these wavelengths can penetrate the tree canopy and form double bounces between the branches and the flooded water surface, while the C-band signal was only reflected by leaves and branches, so there was no significant change in its intensity values (Hess and Melack, 1994). The ways in which different types of vegetation or forest interact with different radars have been studied by Hess et al. (1990) and Hess et al. (1994).

It can be concluded that there are two major approaches to flood mapping and monitoring using radar data. The first classifies the radar image into different groups according to intensity values; flooding is then identified from time-series changes of the water body. This method has been widely tested in recent applications (Townsend, 2001, Bonn and Dixon, 2005, Kwoun and Lu, 2009). The second targets each pixel of a study object and identifies flooding from a comprehensive consideration of parameters such as wavelength, incidence angle, polarization and vegetation coverage; sometimes weather information is also introduced to assist the decision making (Kiage et al., 2005).
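The first category of methods can be sketched as a simple two-condition rule on calibrated backscatter (in dB) from a pre-flood and a post-flood image. The threshold values below are illustrative assumptions, not calibrated figures from any cited study:

```python
import numpy as np

def flood_change_mask(pre_db, post_db, water_thresh_db=-15.0, drop_db=3.0):
    """Sketch of intensity-based flood change detection: a pixel is
    flagged as newly flooded open water when its post-event
    backscatter falls below an open-water threshold AND has dropped
    markedly relative to the pre-event image (so permanent water,
    which is dark in both images, is excluded)."""
    is_dark_now = post_db < water_thresh_db
    got_darker = (pre_db - post_db) > drop_db
    return is_dark_now & got_darker
```

A speckle filter (Section 2.2.5) would normally be applied to both images before thresholding, and the thresholds tuned per scene.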

Several recently launched SAR satellites, such as Radarsat-2, TerraSAR-X and ALOS PALSAR, are capable of simultaneously providing high-resolution radar imagery in various polarizations. Because of the high sensitivity of flood detection to radar signal change, another parameter that has to be taken into account in flood mapping is speckle noise (discussed in Section 2.2.5). Several researchers (Martinis et al., 2009, Matgen et al., 2011) have studied the removal of speckle noise, which is distributed randomly over the whole image and reduces the interpretability of the radar image, hence also degrading the performance of flood mapping using these images.

4.3.4 Flood Mapping and Monitoring Using Radar Coherence

As mentioned above, SAR intensity has already been used successfully for flood mapping and monitoring, but in many cases the intensity is affected by wind, vegetation, incidence angle and polarization. Hence, flood mapping and monitoring with SAR intensity values alone is not always reliable, and sometimes the weather conditions must be considered before a final interpretation can be made.

Another approach using radar images is based on the coherence derived in radar interferometry. This technique consists of the coherent combination of two radar images over the same region, acquired before and after a flood. A lack of interferometric coherence between the two images is caused by decorrelation effects, and such areas are normally identified as water. The total coherence can be written as the product of the individual decorrelation factors, comprising geometric, temporal, rotational, scattering, registration and noise decorrelation (Eq 4.1):

γ_total = γ_geom · γ_temp · γ_rot · γ_scat · γ_reg · γ_noise        (Eq 4.1)

where γ_geom represents the geometric decorrelation, caused by the multi-pass radar imaging the ground surface from different locations, directions and orbits; γ_temp denotes the temporal decorrelation, caused by ground surface features changing during the repeat interval between the image acquisitions; γ_rot is the rotational decorrelation (rotation effects), since the repeat orbits are designed to be parallel at the different imaging times but are not perfectly controlled; γ_scat stands for the backscattering decorrelation; γ_reg represents the registration decorrelation introduced when co-registering the two radar images in the interferometric process; and γ_noise is the noise decorrelation, caused by thermal noise in the hardware, and therefore sometimes called thermal decorrelation instead.

Temporal and scattering decorrelation can provide useful information about the imaged ground surface, which can be extracted by segmentation or classification of the coherence map. On the other hand, for simplicity, registration, thermal noise and orbit decorrelation can be neglected, because only a small percentage of selected pixels should be affected by decorrelation noise. Moreover, since the ERS German precise orbital data have an error contribution of less than 1 metre, the geometric decorrelation can be partially removed using a filtering process (Gatelli et al., 1994).

In the calculation of interferometric coherence, any change in the scatterers on the ground surface larger than the wavelength will reduce the coherence. As a result, in an interferometric coherence map generated from two radar images obtained before and after a flood, the flooded areas should theoretically show lower coherence than the unaffected areas, because of differences in dielectric constant, backscattering pattern and surface characteristics.
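The coherence magnitude referred to here is conventionally estimated over a small moving window from the two co-registered complex images. A minimal sketch (window size and names are arbitrary choices) is:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Sample estimate of the interferometric coherence magnitude
    |gamma| over a win x win moving window, from two co-registered
    single-look complex SAR images s1 and s2."""
    cross = s1 * np.conj(s2)
    # Window-average the complex cross product (real/imag separately)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                  * uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```

Identical scenes yield coherence near 1 even under a constant phase offset, whereas fully decorrelated (e.g. flooded open-water) pixels yield low values that are then segmented into the water class.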

Interferometric coherence (defined in Section 2.3) has been utilized (Marinelli et al., 1997) as a unique parameter for identifying flooded areas (Laugier et al., 1997, Fellah et al., 1997) since the coherence is theoretically low in water areas regardless of different weather conditions. The

Yangtze River flood that occurred in China during the summer season of 1998 was mapped using interferometric coherence (Dellepiane et al., 2000a). The SAR imagery involved in this study included a pair of December 1995 by ERS-l/2 and an ERS-1 pair that were acquired in summer 1993; these reference images were compared with a pair of images that were taken during the flood period by the ERS-l/ERS-2 Tandem Mission. From the validation of the classification results, and then the final flood map product, it has been proved that various classes of ground features could be detected from the SAR images and these classes included open water, flooded areas and non-flooded areas. Further discussions suggested that in order to


Chapter Applications of Remote Sensing Data for Tropical Cyclone Studies and Flood Mapping

visualize the processed results in a false-colour RGB image, the best combination of classes is the backscatter intensity images taken during the reference period (1993) and during the Yangtze River flooding (summer 1998), together with the coherence change between two images taken during the flooding period (Dellepiane et al., 2000b).

Nico et al. (2000) compared intensity and coherence from interferometric images for flood detection. The case studied was a flood event that occurred in southern France (28-29 January 1996), using SAR images acquired by the ERS-1/2 Tandem Mission; two images were acquired exactly on those two dates. At the time, this was the only known flood event covered by available SAR tandem data with only one day of separation. The reference image without flooding was taken on 7 August 1995. The final outcomes indicate a satisfactory flood contour map whose shape is consistent with the ESA qualitative image. The intensity results were generated using a simple threshold, aided by a few carefully selected averaging filters. It was concluded that combining intensity and coherence made it easier to generate a high-accuracy flood detection and monitoring map, although determining the best threshold value is extremely difficult.

Several studies (Alexandra, 2012, Forghani et al., 2007, Sharon et al., 2011) have used coherence results generated from the SAR sensors mounted on Cosmo-SkyMed satellites to map floods in Queensland and Victoria, Australia. It was concluded that flood mapping results generated from a coherence map are much more reliable than those from traditional SAR intensity analysis. This method will be applied more in future, as there are now more high-resolution spaceborne SAR sensors available, such as TerraSAR-X, Radarsat-2 and Cosmo-SkyMed, which have more flexible revisit intervals and operate in multiple frequencies and polarizations. Coherence images have not been used in this thesis.


4.3.5 A Combination of Optical and Radar Data

Recently, the combination of optical and radar remote sensing images has emerged as a promising method for water body identification and inundated area mapping. Optical images have commonly been used to estimate flood fields a few days after the event. Case studies have been conducted using a sediment recognition approach (Mertes et al., 1993) and vegetation stress detection (Michener and Houhoulis, 1997). However, most floods occur in bad weather, with cloud obstruction that can last for several days, and hence cannot be observed with optical data; SAR data can remedy this weakness. With both high spatial resolution visible or infrared data and high-penetration radar images, flood detection is much more efficient than when only one of them is used.

4.3.6 Flood Detection Using GIS and Remote Sensing

4.3.6.1 Digital Elevation Models (DEM)

Since floods are natural disasters that occur frequently, adequate data has often been collected for predicting and modelling inundation. In particular, some floods are defined as once-in-a-lifetime events, meaning they are rarely seen in normal years. As flood hazards are one type of hydrological condition, some studies have introduced GIS and digital elevation models (DEMs) to identify inundation risk and generate visualizations of flood areas. Because DEMs are in raster format, inundation depths have been computed by subtracting the elevation model cell by cell in ArcGIS software (Townsend and Walsh, 1998, Ali et al., 2001, Islam and Sado, 2001).
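The cell-by-cell subtraction described above can be sketched in a few lines; the DEM values and the flat 5 m water surface below are hypothetical, chosen only to illustrate the raster operation:

```python
import numpy as np

def inundation_depth(dem, water_level):
    """Cell-by-cell inundation depth: water surface elevation minus
    terrain elevation, clipped to zero for dry cells."""
    depth = water_level - dem
    return np.where(depth > 0.0, depth, 0.0)

# Hypothetical 3x3 DEM (metres) and a flat flood surface at 5 m.
dem = np.array([[4.0, 6.0, 7.0],
                [3.5, 5.0, 6.5],
                [2.0, 4.5, 8.0]])
depth = inundation_depth(dem, 5.0)
print(depth.max())  # deepest cell: 5.0 - 2.0 = 3.0 m
```

A real water surface would itself be a raster (e.g. interpolated flood stage), but the per-cell arithmetic is identical.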

A simulation algorithm has been used with DEM modelling to estimate the inundation depth caused by water release. The outcome is usually compared with the flooded extent obtained from spaceborne data. The accuracy of flood areas has improved when supplementary DEMs are used, compared with hydrology models alone. However, this methodology relies heavily on the precision of the DEM; otherwise, new errors will enter the flood hazard estimation (Dasilva and Kux,


1992a, Jones, 1998). Due to the vast extent of plains in Asia, it is difficult to achieve high-accuracy DEMs there. In flood mapping assessment, the important task is to identify the elevation errors, because a 1 m vertical error in a DEM may result in an error in the estimated flood extent of hundreds of square kilometres (Hunter and Goodchild, 1995, Lee et al., 1992). Hence, high-resolution DEM modelling is the key to flood mapping. SAR data is a very effective substitute for low-accuracy DEMs.

4.3.6.2 Light Detecting and Ranging (LIDAR)

Lidar is another methodology that is becoming increasingly useful for mapping flood areas. A Lidar sensor has the capability to measure elevation changes in the terrain, which is particularly useful for forecasting possible inundation ranges. Lidar sensors can achieve elevation accuracies of the order of 10-20 cm, generally determined by the accuracy of the GNSS system. This vertical precision is adequate for identifying flood zones in most cases, and it is usually the best elevation data available. However, high-density vegetation also has a great influence on the final result when determining the terrain elevation (Hodgson et al., 2003). In the case of smooth plains with flooded areas, Lidar data is more competitive than microwave remote sensing SAR data, even though the acquisition is expensive.

4.4 Summary

To summarise the review in Section 4.2: recent studies on the observation of tropical cyclone eyes have acquired information from multiple sources, including wind speed retrieval from SAR images and wind field direction from estimation models or scatterometer (SCAT) data. The difficulties of observing TC eyes with optical satellite data, together with the unique characteristics of SAR for the detection of cyclone eyes, are reviewed in Section 4.2.3, and other useful approaches for analysing the eye area are examined in Section 4.2.4.

As discussed in Section 4.3, optical imagery has been widely used for inundation mapping and water body detection because of its high spatial resolution and because it can be efficiently


interpreted. Integrated with various classification algorithms and multi-data fusion methods, the extent of water bodies and the status of floods can be measured and extracted by analysing specific optical bands (Lacava et al., 2010, Khan et al., 2011). However, as discussed in Section 4.3.2, the weakness of optical data is that it is highly dependent on solar illumination, which is only available during daytime, and it cannot image the ground surface through cloud cover.

Thus, SAR images, as a complement to optical sensors, can penetrate clouds in all weather conditions, day and night. By applying a series of processes, such as noise reduction and edge detection, to a single SAR intensity image, the boundaries of water bodies can be depicted clearly. IFSAR coherence images, derived from two or more SAR images acquired on different occasions, have shown potential in water-related research such as flood mapping and water body classification and extraction. Because of the superiority of SAR data for the extraction of cyclone eyes, SAR data will be processed by mathematical morphological methods in the following chapter to extract cyclone eyes for a number of events in the USA.


Chapter 5 Processing of SAR Images for Case Studies

5.1 Tropical Cyclone Eyes

5.1.1 Introduction

Tropical cyclones (TC), referred to as hurricanes in the Atlantic and typhoons in the Pacific, are capable of generating damaging storm surges and heavy rain, and therefore coastal residents are particularly vulnerable to harm due to extensive coastal and inland flooding during such events.

They are among the most terrifying and destructive weather phenomena on Earth, resulting in casualties and huge property damage. The difficulty of recognizing and locating the eyes of TCs, as described in published research, is caused by cloud obstruction in optical satellite images, especially during the formation stage of a TC. For example, as shown in Figure 5.1, the TC eye area cannot be identified because of the thick cloud. Thus, TC studies using ocean SAR images have the potential to play a vital role in monitoring their evolution and trajectory.

While limited spaceborne synthetic aperture radar (SAR) images have been applied to studies of the structure of TC eyes (Figure 5.2), rainbands, wind rolls and wind speed over recent decades (Katsaros et al., 2000), SAR images that display the interaction of TCs with the ocean surface can provide a striking and effective demonstration of the behaviour of TCs. In contrast to optical sensors, active microwave radar sensors are capable of penetrating the intensive clouds surrounding the eye. The normalized radar cross section (NRCS) associated with radar backscatter from the sea surface carries adequate information about the sea surface roughness, which is affected by ocean swell, wind speed, rain and other atmospheric or oceanic phenomena. A comparison of SAR and optical images of a TC is shown in Figure 5.3, where the optical image is overlaid on the SAR image. The benefits of using SAR images for studying TCs are clearly visible in this figure, since cloud obscures the TC in the optical image.


Figure 5.1 has been removed due to Copyright restrictions.

Figure 5.1 ENVISAT satellite imagery of hurricane Katrina collected on 28 August 2005 from MERIS (UTC 15:50) (ESA)

Figure 5.2 has been removed due to Copyright restrictions.

Figure 5.2 ENVISAT satellite imagery of hurricane Katrina collected on 28 August 2005 from ASAR (UTC 17:00) (ESA)

Figure 5.3 ASAR image overlaid on the Terra/MODIS optical image (Hurricane Katrina 2005 UTC 17:00) (ESA)

With their unique capability of imaging under all weather conditions, day or night, as well as their high spatial resolution, SAR images are becoming increasingly important for TC studies.

In addition, the 450 km swath width of the ScanSAR mode of Radarsat-1 is sufficient for covering the entire area of a TC (Friedman and Li, 2000). It is possible to identify and extract the patterns of TCs, including their shape, size and area, from microwave C-band SAR images by measuring the radar backscatter response at the sea surface.

The weather in the TC eye is normally calm and free of wind and clouds. The varying sea surface roughness away from the TC eye causes different responses in SAR images, revealed in the level of radar cross section (RCS). The TC eye can be easily identified because of the relatively smooth ocean surface corresponding to minimum RCS. Extracting TC eye areas automatically is challenging, and critical to determining the trend in the TC wavenumber or category, which varies directly with its intensity, corresponding to its destructive effect (Li et al., 2012, Li et al., 2013).

Fundamentally, mathematical morphology has been used extensively in image analysis for computing the geometric characteristics of the shape and structure of features (Serra, 1982). One popular approach to morphological shape representation is the morphology-based skeleton (Calabi, 1965). Several studies have demonstrated the significance of the morphological skeleton and its pruning variants. During skeleton transformation, the outcomes usually include redundant skeleton branches caused by noise along the edges of the TC eye.

Introducing a pruning technique overcomes the instability of the skeleton by drastically eliminating these redundant segments, resulting in shape representation at relatively low cost and more efficient programming (Bai et al., 2007). Hence, appropriate skeleton pruning greatly helps to reconstruct the targeted pattern, so that the final extraction result is closer to the real shape and size of the TC eye. The fitting and evaluation are demonstrated in this chapter.

5.1.2 Extraction of Tropical Cyclone Eyes

5.1.2.1 Using the Adaptive Filter UAF for SAR Speckle Reduction

As described in Section 3.11.1, due to the complex texture of TC-affected areas, including bare islands, noisy inland areas, the relatively calm ocean surface in TC eye areas and the rough ocean surface away from the eye caused by rainbands, the UAF filter is expected to perform better than traditional filters in suppressing the effects of SAR speckle noise. As textural information is well maintained by UAF while speckle noise is suppressed, more effective and robust interpretation should be possible in the later procedures. The results of applying UAF to a SAR image of Hurricane Dean are demonstrated in Figure 5.4.

Figure 5.4 Demonstration of the original SAR image of Hurricane Dean acquired on 19/08/2007 with speckle noise (Left), and the denoised image after applying the UAF adaptive filter (Right)
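The UAF filter itself is defined in Section 3.11.1 and is not reproduced here. As a hedged illustration of the same family of local-statistics adaptive filters, the sketch below implements a simple Lee-style filter, which blends each pixel between its local mean and its observed value according to how much the local variance exceeds an estimated noise variance; the window size, noise model and test scene are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7, noise_var=None):
    """Lee-style adaptive speckle filter: homogeneous areas tend to
    the local mean, textured areas keep the observed value."""
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img ** 2, win)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    if noise_var is None:
        noise_var = np.mean(var)          # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

rng = np.random.default_rng(1)
clean = np.ones((64, 64)); clean[:, 32:] = 4.0              # two-tone scene
speckled = clean * rng.gamma(4.0, 1.0 / 4.0, clean.shape)   # multiplicative speckle
filtered = lee_filter(speckled)
# Filtering should reduce the variance within the homogeneous left half.
print(speckled[:, :28].var(), filtered[:, :28].var())
```

UAF adapts its window to local texture, which this fixed-window sketch does not attempt; the sketch only conveys the local-statistics principle shared by adaptive speckle filters.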

5.1.2.2 Image Enhancement

Image enhancement plays an important part in extracting a well-defined skeleton, and requires special attention for SAR images. Enhancement is undertaken by thresholding, as it is essential to improve the quality of extraction in the targeted area. Lowering the threshold maintains the global representation of the object at the expense of increasing the noise effects on the skeleton definition, while a relatively high threshold can produce a more effective skeleton at the cost of reduced homotopy.

The classic Otsu automatic threshold selection method is practical and efficient for image thresholding (Otsu, 1975). Specifically, the automatic determination of an optimal threshold minimizes the variance within classes for subsequent segmentation, without the need for prior knowledge. Otsu's technique aims to find the threshold value at which the combined spread of the foreground (the TC eye) and the background (the non-eye area) is at its minimum, by iterating over possible threshold values and calculating a measure of spread for

the pixel levels on each side of the threshold. In order to evaluate the optimal threshold in the image, a discriminant criterion to measure the class separability was introduced by Fukunaga (1990):

$\sigma_w^2(t) = \omega_0(t)\,\sigma_0^2(t) + \omega_1(t)\,\sigma_1^2(t)$ (Eq 5.1)

$\sigma_b^2(t) = \sigma^2 - \sigma_w^2(t)$ (Eq 5.2)

$\sigma_b^2(t) = \omega_0(t)\,\omega_1(t)\,[\mu_0(t) - \mu_1(t)]^2$ (Eq 5.3)

where $\sigma_w^2(t)$ is defined as the within-class variance, the weighted sum of the variances of the background and foreground classes, and $\sigma_b^2(t)$ is defined as the between-class variance, derived by subtracting the within-class variance from the total variance $\sigma^2$ of the combined distribution. Here $\omega_0(t)$ and $\omega_1(t)$ are the weights, i.e. the probabilities of the two classes separated by a threshold $t$; $L$ is the range of intensity levels; $\sigma_0^2(t)$ is the variance of the pixels in the background (below the threshold); $\sigma_1^2(t)$ is the variance of the pixels in the foreground (above the threshold); and $\mu_0(t)$ and $\mu_1(t)$ are the corresponding class means. According to Otsu's method, the threshold giving the maximum inter-class variance (and, equivalently, the minimum intra-class variance) is:

$t^{*} = \arg\max_{1 \le t < L} \sigma_b^2(t)$ (Eq 5.4)

Hence, the criterion can also be used to find the best threshold to efficiently separate the foreground and background based on their variances. For each potential threshold $t$, the pixels are first separated into two clusters according to that threshold. Next, the mean of each cluster is found and the squared difference from the overall mean is computed. Thirdly, the result from the previous step is multiplied by the number of pixels in each cluster. The results of the processing are displayed in Figure 5.5.

Figure 5.5 Illustration of the denoised SAR image (Left) and image enhancement based on Otsu's method (Right) (Hurricane Dean 19/08/2007 SAR image)
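The iteration described above can be sketched directly; this is a minimal, unoptimized implementation of Otsu's search over candidate thresholds, with a hypothetical bimodal image standing in for the enhanced SAR scene:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Exhaustively search the threshold t that maximizes the
    between-class variance w0*w1*(mu0 - mu1)^2 (Otsu, 1975)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_sb = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: separability undefined
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, levels) * p[t:]).sum() / w1
        sb = w0 * w1 * (mu0 - mu1) ** 2
        if sb > best_sb:
            best_t, best_sb = t, sb
    return best_t

# Bimodal toy "image": dark eye pixels near 40, bright background near 200.
rng = np.random.default_rng(2)
img = np.clip(np.concatenate([rng.normal(40, 5, 500),
                              rng.normal(200, 10, 500)]), 0, 255)
t = otsu_threshold(img)
print(t)  # falls between the two modes
```

Library implementations (e.g. `skimage.filters.threshold_otsu`) compute the same optimum more efficiently; the loop form above mirrors the step-by-step description in the text.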

5.1.2.3 Morphological Skeleton Extraction

After applying the adaptive UAF filter to remove the SAR speckle noise around the TC eye area, and Otsu's thresholding to separate the foreground (the TC eye area) from the background (the non-eye area), a closing morphological operator is applied to fill small holes that might appear during image enhancement, because such holes would give rise to new complex skeleton structures in the following step. Finally, in order to eliminate parts of the shape that may have been wrongly connected by the earlier procedures, an additional opening morphological operation is applied. As a result, a binary image is obtained for the skeleton extraction.
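The closing-then-opening cleanup can be illustrated on a small hypothetical binary mask; the 3x3 structuring element and the mask contents are assumptions for demonstration:

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

# Hypothetical mask after thresholding: an eye region with a small
# hole (from enhancement) and a stray isolated foreground pixel.
mask = np.zeros((12, 12), dtype=bool)
mask[3:9, 3:9] = True
mask[5, 5] = False          # spurious hole inside the eye
mask[1, 10] = True          # spurious isolated foreground pixel

se = np.ones((3, 3), dtype=bool)                 # structuring element (assumed size)
closed = binary_closing(mask, structure=se)      # fills the small hole
cleaned = binary_opening(closed, structure=se)   # removes the stray pixel

print(closed[5, 5], cleaned[1, 10])  # True False
```

The order matters: closing first repairs holes inside the eye, then opening removes small artefacts without re-opening the repaired holes.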

The skeleton approach is designed to simplify patterns into a series of thin lines, called Euclidean skeletons. For shape recognition, extracting a robust skeleton is essential for identifying and detecting endpoints and closed loops, based on the Euclidean distance functions (Calabi, 1965) presented in Chapter 3. This technique retains the homotopy and connectivity of the initial objects during the skeletonization process.

According to Lantuejoul (1980), in order to ensure the skeleton is only one pixel thick, equations (Eq 3.17 - 3.20) given in Chapter 3 are required to extract the skeleton. Euclidean skeletonization is a critical step in extracting an effective shape description before implementing pruning.
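As an illustration of Euclidean skeletonization (here via the medial axis transform, a related formulation rather than the exact equations Eq 3.17 - 3.20), a one-pixel-thin skeleton and the underlying distance values can be obtained as follows; the rectangular test shape is an assumption standing in for a segmented TC eye:

```python
import numpy as np
from skimage.morphology import medial_axis

# Hypothetical elongated binary mask standing in for a segmented TC eye.
shape = np.zeros((20, 40), dtype=bool)
shape[5:15, 5:35] = True

# medial_axis returns a one-pixel-thin, connected skeleton and the
# Euclidean distance transform (the maximal inscribed-disk radii).
skel, dist = medial_axis(shape, return_distance=True)

print(skel.sum(), shape.sum())  # far fewer skeleton pixels than shape pixels
```

Keeping the distance values alongside the skeleton is what later makes shape reconstruction possible (Section 5.1.2.5).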

5.1.2.4 Skeleton Pruning with Discrete Curve Evolution (DCE)

The skeletonizing algorithm (described in Section 3.7) contains four parts: contour approximation (discrete curve evolution, DCE), skeleton extraction, skeleton pruning, and shape reconstruction (presented in Section 5.1.2.5 below). The solution is based on Bai et al. (2007), described in Section 3.8. They stated that each skeleton point is represented by a maximal circle whose tangent points at the edges belong to different sides of the object shape.

For the first step of the solution, discrete curve evolution (DCE) is used for edge partitioning (Bai et al., 2007). This contour segmentation method simplifies the object's shape and eliminates incorrect skeletons, while preserving the major skeleton segments for subsequent shape reconstruction. The second step of the algorithm is to compute the skeleton associated with the medial axis transform (Hesselink and Roerdink, 2008) using Equations 3.23 - 3.26 in Chapter 3, and to prune the skeleton based on the DCE result from the former step. This process iterates until no more pixels can be removed from the skeleton and the desired shape remains. The benefit of this step is the conservation of the connectivity of the final skeleton representation and the meaningful topological properties of the object's shape. Using skeleton extraction together with DCE results in almost every erroneous skeleton branch at every partition point being removed.
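The contour-simplification step of DCE can be sketched as iterative removal of the least relevant polygon vertex, using the standard relevance measure based on the turning angle and the lengths of the two incident segments; the polygon below and the stopping criterion (`keep`) are illustrative assumptions:

```python
import numpy as np

def relevance(prev, v, nxt):
    """Relevance of vertex v: turning angle at v weighted by the
    normalized lengths of its two incident segments."""
    a, b = v - prev, nxt - v
    l1, l2 = np.linalg.norm(a), np.linalg.norm(b)
    cosang = np.clip(np.dot(a, b) / (l1 * l2), -1.0, 1.0)
    beta = np.arccos(cosang)              # turning angle at v
    return beta * l1 * l2 / (l1 + l2)

def dce(points, keep=4):
    """Discrete curve evolution: repeatedly delete the least relevant
    vertex of a closed polygon until `keep` vertices remain."""
    pts = [np.asarray(p, float) for p in points]
    while len(pts) > keep:
        n = len(pts)
        scores = [relevance(pts[i - 1], pts[i], pts[(i + 1) % n])
                  for i in range(n)]
        del pts[int(np.argmin(scores))]
    return np.array(pts)

# A square with one slightly perturbed (noisy) edge midpoint: DCE
# drops the low-relevance midpoint and keeps the four corners.
poly = [(0, 0), (2, 0.1), (4, 0), (4, 4), (0, 4)]
simplified = dce(poly, keep=4)
print(simplified)
```

In the pruning stage, skeleton branches whose generating contour points all fall within a single simplified edge are the ones removed; the sketch covers only the contour-evolution half of that pairing.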


5.1.2.5 Reconstruction Algorithm

After pruning, the unwanted skeleton branches have been trimmed. Shape reconstruction is then possible from the output of the thinning algorithm and the medial axis transform. Based on the skeleton calculation, the original shape can be reconstructed with only minor errors remaining in the complete dataset. The reconstruction algorithm is based on a series of iterative computations filling the original binary image using Equations 3.21 and 3.22 in Section 3.7. Since some skeleton branches are removed during pruning, the parts of the shape corresponding to the removed branches will not exist in the reconstruction. Thus, a more accurate shape is eventually preserved, based on the contributions of only the significant branches.
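The reconstruction idea, recovering the shape as the union of the maximal disks stored with the (possibly pruned) skeleton, can be sketched as follows; this uses the medial axis transform as a stand-in for Equations 3.21 and 3.22, and the rectangular shape is an assumption:

```python
import numpy as np
from skimage.morphology import medial_axis

def reconstruct(skel, dist):
    """Union of the maximal disks of the medial axis transform:
    one disk per skeleton pixel, radius given by its distance value."""
    rows, cols = np.indices(skel.shape)
    out = np.zeros(skel.shape, dtype=bool)
    for r, c in zip(*np.nonzero(skel)):
        out |= (rows - r) ** 2 + (cols - c) ** 2 < dist[r, c] ** 2
    return out

shape = np.zeros((20, 30), dtype=bool)
shape[4:16, 4:26] = True
skel, dist = medial_axis(shape, return_distance=True)
rebuilt = reconstruct(skel, dist)

coverage = (rebuilt & shape).sum() / shape.sum()
leakage = (rebuilt & ~shape).sum()
print(coverage, leakage)  # high coverage, no spill into the background
```

If branches are deleted from `skel` before calling `reconstruct`, the corresponding protrusions of the shape disappear from the rebuilt mask, which is exactly the behaviour exploited by pruning.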

5.1.3 Study Area and Dataset

Multiple hurricane events, comprising Hurricanes Katrina, Rita, Dean and Earl, were considered in order to obtain an understanding of ocean SAR images for the shape analysis of TC eyes. A further aim was to prove that the morphological skeleton is capable of precise eye extraction in different situations. Overviews of the data for the different hurricane events are given in Table 5.1:


Table 5.1 Imagery information of hurricane event

* SCW (ScanSAR wide beam), WSM (Wide swath mode)

* Wavenumber is analyzed by quantifying the azimuthal wavenumber of the TC with respect to intensity, and is classified as Category number (Li et al., 2013). The trend is for an increasing wavenumber as the hurricane intensity increases.

Table 5.2 Wind speed information of TC area

The National Hurricane Centre (NHC) in Miami, Florida, issued the first advisory at 5 a.m. on 23 August 2005 regarding the tropical system that would become Hurricane Katrina. Over the next 3 days Katrina passed into the Gulf of Mexico and headed for Louisiana and Mississippi, becoming a Category 2 hurricane on 26 August. Katrina was upgraded to Category 3 on 27 August, and at 7:00 a.m. on 28 August Katrina was declared a Category 5 storm with 280 km/h winds, the highest intensity rating on the Saffir-Simpson hurricane wind scale. It was also the strongest Atlantic hurricane ever recorded in the Gulf of Mexico at that time; however, this record was later broken by Hurricane Rita.


Hurricane Rita (Figure 5.8) was first observed in the Tropical Weather Outlook (TWO) of the National Hurricane Centre (NHC) on 15 September 2005, while situated northeast of Puerto Rico; it then strengthened into a Category 1 hurricane on 20 September. By 18:00 UTC on 21 September, Rita had reached Category 5 intensity, the highest rating. At 03:00 UTC on 22 September, Rita attained its peak intensity, with maximum sustained wind speeds of 180 mph (285 km/h), located 310 mi (500 km) south of the Mississippi River Delta, thus surpassing Hurricane Katrina as the strongest tropical cyclone ever recorded in the Gulf of Mexico. Rita maintained Category 5 intensity for 18 hours, weakening to Category 4 by 18:00 UTC on 22 September, owing to the presence of wind shear and cooler continental shelf waters.

Hurricane Dean (Figure 5.7) was upgraded to hurricane status at 09:00 UTC on 16 August 2007, when a reconnaissance aircraft discovered that Dean had strengthened from a Category 2 to a Category 3 hurricane. Later in the night of 17 August it was upgraded to Category 4, remaining so until 18 August. Dean weakened on the morning of 19 August and then began to intensify again, reaching Category 4 that night. Dean further strengthened to a Category 5 hurricane at 00:35 UTC on 21 August.

Around 12:30 UTC on 29 August 2010, Hurricane Earl (Figure 5.8) strengthened into a hurricane east of the northern Leeward Islands; it steadily increased in intensity through 30 August as a Category 2 storm. Just a few hours later, Earl intensified further and became a Category 4. Early on 31 August, Earl's eye dissipated and was replaced by a larger one; the system maintained its intensity through 1 September, when it briefly weakened to a Category 3 system. Hurricane Earl later reached its peak intensity of Category 4 during the morning of 2 September. Around 14:00 UTC on 4 September, Earl approached Queens County near Western Head, Nova Scotia, as a Category 1 hurricane. The changes in intensity of these TCs demonstrate the need to be able to remotely monitor their behaviour, such as with SAR images, on a continual basis.


Figure 5.6 Illustration of original ocean SAR image (time series hurricane Katrina 27/08/2005 and 28/08/2005)

Figure 5.7 Illustration of original ocean SAR image (time series of hurricane Dean, acquired 17/08/2007 and 19/08/2007)


Figure 5.8 Illustration of original ocean SAR image (hurricane Earl and Rita, 02/09/2010 and 22/09/2005)

5.1.4 Experimental Results and Discussion

To assess the robustness of these methods for determining the shape of the TC, two components were evaluated:

i. the effectiveness and completeness of the skeleton pruning, by comparison of pre- and post-pruning areas;

ii. the accuracy of the extracted areas of the TC eye compared with the manually extracted area (Li et al., 2013).

In attempting to guarantee an acceptable final outcome for delineation of the TC eye areas, two potential difficulties must be overcome: redundant bias which may be generated by the skeleton, and the effects of the noise remaining after noise suppression filtering.

A comparison of the morphological skeleton and skeleton pruning based on DCE for the morphological development of TC eyes is shown in Table 5.3 for all hurricane event images. The ridged edges of the TC eyes usually yield redundant information that affects the local contributions to the skeletons. The skeleton pruning of the image of Rita 22/09/2005 presents relatively lower similarity compared with that determined by the morphological skeleton; it is believed that this is because of the complexity of the structural distribution of Hurricane Rita, as well as redundant spurs in the shape of the boundary.

Table 5.3 Illustration of morphological reconstruction compared to the skeleton pruning result

The majority of skeleton pruning results are very reasonable, for instance those for Katrina 27/08/2005, Katrina 28/08/2005, Dean 17/08/2007, Dean 19/08/2007 and Earl 02/09/2010, which are simplified appropriately based on the DCE algorithm. Only the significant branches remain, instead of chaotic sub-skeletons near the boundaries.


Comparing column B in Table 5.3, which shows the morphological skeleton only, with column B in Table 5.4, which presents the evaluation of the reconstruction results, it can be seen that a higher number of sub-branches in the skeleton leads to larger areas being wrongly identified as the TC eye in the later reconstruction step. Thus, skeleton pruning not only trims the bias along the boundary and simplifies the skeleton, but also retains the important skeleton branches for the subsequent reconstruction.

Table 5.4 Demonstration of denoised TC images for morphological reconstruction after skeleton and pruning


5.1.5 Analysis of Results of Extraction of TC Eyes

The objective of this case study is a skeletonization and pruning solution that preserves the TC eye’s shape and topological properties. The contribution of the study is a pruning technique based on the DCE algorithm that conserves connectivity of the pruned skeleton, and yields a subset of the medial axis of the input shape that is suitable for TC eye study applications.

In experimental validation, the pruning algorithm together with DCE outperformed simple skeleton and reconstruction solutions in terms of stability. In addition, the results are more stable than those of simple morphological solutions even when the shape undergoes rigid image transformations. The DCE-based pruning algorithm, as a simple contour approximation, can remove excessive branches from the skeletons of noisy two-dimensional shapes. This combination proved to be stronger and more flexible than using skeleton pruning only. The combined algorithm calculates a skeleton that is a discrete approximation of the medial axis, allowing reconstruction of the simplified shape from the robust skeleton.

SAR images present unique capabilities of measuring the emitted microwave signal response from the sea surface backscatter for characterizing tropical cyclones, hurricanes and typhoons.

The distinct advantages of ocean SAR imagery enable marine meteorologists to achieve a deeper understanding of catastrophic TCs. This study focused on shape extraction of TC eye areas by introducing mathematical morphology approaches, morphological skeletonization and pruning, with the aim of ensuring global and local preservation of the precise shape of the TC eye. These morphological-based analyses were employed for six representative ocean SAR images with different TC patterns, consisting of two pairs of SAR images of Hurricanes Katrina (2005) and Dean (2007) and one image each of Rita (2005) and Earl (2010), acquired from Radarsat-1 and Envisat ASAR. The morphological Euclidean skeleton and pruning is based on discrete curve evolution (DCE), resulting in a relative accuracy after pruning that reaches good agreement with the area of coverage derived from reference data (shown in Table 5.4).


The differences between the manually derived TC eye areas and those extracted by mathematical morphological processing, before and after pruning based on the DCE method, are shown in Table 5.5. The average relative difference for the six TC eye areas after pruning, expressed as a percentage of the area of the eye, is estimated to be approximately 7.8%, compared with an average relative difference before skeleton pruning of 17.9%. In addition, the large relative differences for Hurricanes Katrina (27/08/2005) and Rita (22/09/2005) of about 38.0% and 22.2% respectively before pruning are much worse than the corresponding relative differences after pruning based on the DCE method of 7.9% and 11.4%, respectively.
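The relative-difference figures quoted above follow directly from the extracted and reference areas. The sketch below shows the calculation; the area values are hypothetical, chosen only so the percentages match the 38.0% and 7.9% quoted for Katrina (27/08/2005):

```python
def relative_difference(extracted_km2, reference_km2):
    """Relative difference (%) between an extracted eye area and the
    manually derived reference area."""
    return abs(extracted_km2 - reference_km2) / reference_km2 * 100.0

# Hypothetical areas (km^2), not the thesis figures, chosen so the
# percentages reproduce those quoted for Katrina (27/08/2005).
reference = 1000.0          # manual (NOAA-based) eye area
before_pruning = 1380.0     # morphological result before pruning
after_pruning = 1079.0      # result after DCE-based pruning

print(relative_difference(before_pruning, reference))  # ~38.0
print(relative_difference(after_pruning, reference))   # ~7.9
```

Relative accuracy, as used in Table 5.5, is simply 100% minus this relative difference.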

It can therefore be concluded that with daily observations of WSM (wide swath mode) a time series of sea SAR imagery is suitable for a broad range of applications in the future with the aim of tracking or forecasting TC/hurricane events. The extracted skeletons together with their pruning are important steps; otherwise the results derived for specific TC areas will be distorted or subject to errors caused by redundant noise, which according to the outcomes of this evaluation would significantly affect the processing of the TC eye areas.

Table 5.5 Evaluation of shape extraction for tropical cyclone coverage

MM* Mathematical Morphology Method


In Du and Vachon's study (Du and Vachon, 2003), wavelet analysis was used to estimate the scale and area of TC eyes for eight well-defined eye areas between 1998 and 2001. Four TC eye areas extracted by Du and Vachon from three hurricane events were compared with the manual determinations by NOAA (Li et al., 2013), based on a NOAA reference dataset (2001-2010). These hurricane events include Hurricane Erin (11/09/2001 and 13/09/2001), Hurricane Humberto (26/09/2001) and Hurricane Olga (28/11/2001). As shown in Table 5.6, the mean relative discrepancy between wavelet analysis and the manually extracted areas was approximately 19.6%, and the mean relative accuracy was 80.3%. Compared with the wavelet algorithm, the proposed mathematical morphological method has an average relative discrepancy of only 7.8%, which is significantly better than the results derived from the wavelet approach.

Table 5.6 Estimation of area extraction for tropical cyclone by wavelet analysis

For the first entry in Table 5.6, Hurricane Erin (11/09/2001), the relative discrepancy reaches 86.8%, which is high compared with the second entry for the same event (13/09/2001). After critical analysis, the first entry should be considered an outlier. Thus, after excluding this first entry from the calculation, the mean relative discrepancy is about 19.6%, while the mean relative accuracy for all hurricane events using wavelet analysis reaches approximately 80.3%.

To conclude, while only a limited number of cyclone events were tested, the relative accuracy derived by employing the mathematical morphology algorithms is estimated to be about 92.2%, as shown in the right-hand column of Table 5.5, while the relative accuracy of the wavelet-based algorithms is about 80.3%. Accordingly, the morphology algorithms can be considered a superior method for monitoring and identifying the areas of TC eyes.

As presented in Section 4.2.5 (Figure 4.4), the TC eye is surrounded by the eyewall, which is associated with the maximum sustained wind speed in the TC. Because the locations of TC eyes largely determine the actual trend of a TC in terms of movement and prediction, TC location derived from TC eye analysis plays an important role in TC studies. In principle, once the shape and size of the TC eye have been well extracted from the SAR image, the evolution and movement of the TC can be estimated objectively from time series ocean SAR data, where available from advanced satellite missions. TC eye position, together with a wind speed retrieval model, could not only provide a promising approach to improving the estimation accuracy of the direction and velocity of TCs, but also assist in generating a trajectory diagram to improve prediction, together with historical TC datasets, thus preventing major personnel and property losses during severe TC seasons.


Chapter 5 Processing of SAR Images for Case Studies

5.2 Water Body Boundary Detection

5.2.1 Introduction

Detecting water bodies on remotely sensed images is a challenging task involving aspects of computer vision, pattern recognition and digital image segmentation. Water body boundaries can only be detected with sufficient accuracy on medium and high resolution optical imagery. However, as explained in Chapter 4, clouds and other weather conditions directly limit the use of optical images for this purpose, and hence SAR systems provide a unique ability to work in all-weather conditions. The radar backscatter cross section (BCS) can be used to delineate water body regions: a relatively calm and smooth water surface corresponds with minimum BCS values, whereas land surface features produce a different response. Smooth water areas therefore appear relatively dark in SAR images and can be identified clearly. Various studies (Nico et al., 2000, Mason et al., 2010, Lee and Jurkevich, 1990, Karvonen et al., 2005, Henderson, 1995) have evaluated the potential of SAR images and verified their effectiveness in water body detection.

By segmenting image pixels using appropriate algorithms, it is possible to locate edges which may define boundaries of specific features in images. Segmentation algorithms are generally divided into three categories: object-based, edge-based and region-based; hybrid methods combine elements of these. As stated in Section 3.2, a conventional edge detection scheme such as the Canny method marks edges at the maxima of a gradient image using a directional operator. The main principle of the method is to optimize segmentation for identifying the ridges and roofs of intensity that describe the boundaries of the target. Additionally, owing to its localization criteria, the edges captured by the Canny method approach the centre of the true edge while noise along the edges is eliminated (Canny, 1986).

However, as with other common edge-based detectors, the Canny method has some limitations. The operation depends strongly on the gradient magnitude to search for potential edges, which comprise discrete pixels, and hence it can give rise to incomplete and discontinuous edge results.

As described in Section 3.8, the morphological watershed segmentation method combines edge-based and region-based segmentation techniques, providing an automatic, unsupervised method that extracts water body boundaries with satisfactory accuracy. The watershed transformation is widely used in image segmentation and processing because of its simplicity and objectivity. Although the watershed technique is accurate and performs well on particular pixel configurations at the edges of water bodies, a few disadvantages constrain its implementation: its low tolerance to SAR image noise and its sensitivity to poor contrast are the main challenges to proper river edge identification. In this study, an improved approach is used, combining effective pre-processing with a marker-controlled watershed algorithm in the post-processing, to minimize disturbances caused by speckle noise and to enhance the image contrast by applying a top-hat transformation.

The rest of the chapter is organized as follows. First, a median filter is applied instead of the UAF, to avoid the need for manual extraction of training pixels and to reduce noise in the SAR image automatically. Then, a morphological top-hat transformation is briefly introduced, with the aim of preventing oversegmentation when the watershed algorithm is implemented. The following sections discuss the experimental results and their significance.
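As an illustrative sketch of this automatic noise-reduction step (assuming a NumPy/SciPy environment; the 5×5 window and the simulated multiplicative-speckle tile are assumptions for demonstration, not thesis parameters):

```python
import numpy as np
from scipy.ndimage import median_filter

# Simulated speckled SAR amplitude tile (a stand-in for the real image):
# a bright "land" half and a darker "water" half, with gamma-distributed speckle.
rng = np.random.default_rng(0)
sar = np.full((64, 64), 100.0)
sar[:, 32:] = 40.0
sar *= rng.gamma(4.0, 1.0 / 4.0, sar.shape)   # multiplicative speckle, mean 1

# Median filtering suppresses speckle spikes while largely preserving the
# step edge between the two regions -- the property the text relies on.
filtered = median_filter(sar, size=5)
```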

5.2.2 Water Body Extraction

5.2.2.1 Creating Gradient Image

In this case study a watershed algorithm based on gradient transformation, as described in Section 3.3, is presented. The traditional watershed transformation was initially designed for optical images and cannot be applied directly to SAR images, so producing a gradient image is an essential step before implementing the watershed algorithm, adapting it for SAR image segmentation. The aim of this approach is to transform the edge information of a water body into grey levels and then segment the image based on the watersheds. The performance of the watershed transformation largely depends on the quality of the generated gradient image: if an image contains weak edges, the gradient map will not be suitable for the watershed algorithm and the results will not meet the requirements for extracting water bodies.

An edge detection operator can be used to create the gradient image. Different edge detection operators produce different gradient images, leading to different segmentation results. For instance, the Roberts, Laplace, LoG and Canny operators are commonly used to generate a gradient image, as described earlier in Section 3.3. In the following sections, the Canny results will be the focus of comparison with the watershed result.
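One possible sketch of this step uses the Sobel operator (one common choice alongside the operators listed above; the operator selection and the toy step-edge tile here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_image(img):
    """Gradient-magnitude image from Sobel derivatives along rows and columns."""
    gx = sobel(img, axis=1)   # horizontal derivative
    gy = sobel(img, axis=0)   # vertical derivative
    return np.hypot(gx, gy)

# A vertical step edge: the gradient magnitude peaks along the boundary columns.
tile = np.zeros((8, 8))
tile[:, 4:] = 1.0
grad = gradient_image(tile)
```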

5.2.2.2 Top-hat Transformation

Image enhancement is another challenge in producing a well-defined watershed result for SAR images. Lowering the contrast maintains detailed edge information but may lead to over-segmentation with an enormous number of trivial regions. Conversely, an excessively high contrast ratio can produce appropriate region segmentation at the expense of losing edge detail.

Therefore, in order to improve segmentation quality, a top-hat transformation, an extension of the morphological operators described in Section 3.6, is used for processing images in this chapter. This method enhances the input image by sharpening the small grey-level differences along boundaries.

The mathematical definition of the top-hat transformation is given by Eq 3.15 and Eq 3.16 in Section 3.6. In this case study, the white top-hat transformation detects the peaks of a gradient image, while the black top-hat transformation identifies the valleys. Hence, the transformed image is enhanced and the watershed result becomes more apparent, yielding better segmentation outcomes after the white/black top-hat transformations.
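The white/black top-hat pair can be sketched with standard morphological operators (SciPy is used here for illustration; the 3×3 window and the toy peak/valley image are assumptions, not thesis parameters):

```python
import numpy as np
from scipy.ndimage import white_tophat, black_tophat

# Small bright peak and dark valley on a flat background (illustrative values).
img = np.full((7, 7), 10.0)
img[2, 2] = 15.0   # bright peak
img[4, 4] = 5.0    # dark valley

peaks = white_tophat(img, size=3)    # bright detail smaller than the window
valleys = black_tophat(img, size=3)  # dark detail smaller than the window

# Contrast enhancement: adding peaks and subtracting valleys sharpens
# both kinds of small-scale detail relative to the background.
enhanced = img + peaks - valleys
```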



Since the top-hat transformation is susceptible to the high-frequency noise present in SAR images, a noise filter was applied in advance (shown in Figures 5.9 and 5.10).

Figure 5.9 (a) Original SAR image I, (b) pre-processed SAR image I

Figure 5.10 (a) Original SAR image II, (b) pre-processed SAR image II

5.2.3 Watershed Algorithm

Watershed segmentation has been applied to rapidly locate weak edges between adjacent regions with high accuracy. As a result of the segmentation, the edges from the watershed algorithm are continuous and complete, and the extracted edges can be detected clearly. However, in SAR images this normally leads to the unavoidable problem of oversegmentation, in which homogeneous regions are separated into several parts. The solution to oversegmentation is the marker-controlled watershed transformation introduced in Section 3.9. The entire process is undertaken by developing controlled markers for segmentation of the proposed watershed areas, based on existing morphological operations in MathWorks (MATLAB) functions.

5.2.4 Study Area and Dataset

In this case study, Intermap’s airborne Interferometric SAR (IFSAR)-generated 63 cm high resolution orthorectified X-band magnitude radar images (ORIs) in HH polarization were acquired in October 2013 (Figure 5.11) with minimal atmospheric effects (Li et al., 2002). The image covers about 406.42 km² near Mildura, Victoria, Australia, between 34.67°S, 141.97°E (upper left) and 34.25°S, 142.19°E (lower right) on the GDA94 datum. The current ORIs from the IFSAR system provide the highest-resolution images available from either airborne or satellite platforms, ideally suited for interpretation and for automatic and semi-automatic extraction of water bodies, roads, man-made features and hydrology, enabling more accurate topographic line mapping and GIS data compilation. The main features on the ground include rivers and lakes, residential areas, farmland and bare soil.



Figure 5.11 Demonstration of IFSAR image (acquisition time: 21:51 UTC, 22 Oct 2013)

5.2.5 Experimental Results and Discussion

In this section, the results of the watershed algorithm and the Canny edge operator are compared with the aim of improving the efficiency of water body extraction. The watershed results for the original SAR image, for the pre-processed SAR image, and for the pre-processed image treated with the marker-controlled watershed algorithm are compared in Figure 5.12. The original SAR image processed by the watershed algorithm, shown in Figure 5.12(a), reveals oversegmentation: the regions are split into thousands of blocks that contain insignificant information. The watershed result after pre-processing of the SAR image, shown in Figure 5.12(b), is slightly improved, yet trivial blocks still remain. The acceptable result, based on the marker-controlled watershed algorithm, is shown in Figure 5.12(c). Here, the initial zones of the image have been identified subject to the constraints imposed by the inner and outer markers. Therefore, while the results depend on the effectiveness of the control markers, the marker-controlled watershed algorithm has considerably improved the segmentation by merging the mass of trivial blocks. The outcome can also be appreciated by examining Figure 5.13: the number of blocks decreased significantly after noise reduction and top-hat transformation of the original image.

Figure 5.12 (a) Oversegmentation of SAR image I, (b) watershed of processed SAR image I, (c) marker-controlled watershed of processed SAR image I

Figure 5.13 (a) Oversegmentation of SAR image II, (b) watershed of SAR image II, (c) marker-controlled watershed of SAR image II

As presented in Section 3.9, the watershed has the advantage of a region-growing algorithm: the segments are spatially consistent, with the different elements connected. Although the gradient image commonly suffers from over-segmentation because of local irregularities of the gradient and coherent speckle noise, with the marker-controlled watershed algorithm the influence of over-segmentation is effectively controlled, as can be seen in Table 5.7. The table indicates that, compared with the traditional watershed segmentation, the total number of blocks in the improved marker-controlled watershed result is considerably reduced, from a large number of tiny blocks to a few merged blocks, leading to good performance in marking the foreground and background of the gradient image. Further processing to extract the edges would involve a vectorization process; this would further demonstrate that the vectors derived by the watershed method are more comprehensive than those derived by the Canny edge extraction process.

Table 5.7 Analysis of initial watershed segmentation (oversegmentation) and marker-controlled watershed result

In this work, the Canny edge detector and the watershed transform have been compared as segmentation methods. The Canny method was chosen from among the edge detectors because, while most detectors consider only the relative maxima of the gradient magnitude, the Canny method locates those maxima along the gradient direction. Comparing the watershed transformation and the Canny edge operator overlaid on the reference map edge (in yellow) in Figures 5.14(a, b and c), 5.15(a, b and c) and 5.16(a, b and c) reveals that the marker-controlled watershed transformation preserves the river edge accurately with identifiable lines. By contrast, the Canny method occasionally fails to preserve sections of the river edge: for instance, the Canny results in Figures 5.14(b) and 5.15(b) generate redundant edges and islands along the river edge, whereas the watershed algorithm produces fewer mistakes.

The blue areas in Figure 5.14(c) and Figure 5.15(c) illustrate the water body automatically extracted by the watershed transformation, while the red areas indicate the river bank or land.



Figure 5.14 (a) Original SAR image III overlaid with reference map in yellow solid line as the river edge (b) Canny edge detection example 1 (c) Canny edge detection outcome example 1 overlaid with the watershed transformation result

Figure 5.15 (a) Original SAR image IV overlaid with reference map in yellow solid line as the river edge (b) Canny edge detection example 2 (c) Canny edge detection outcome example 2 overlaid with the watershed transformation result

In addition, while the Canny method created edges in the areas of interest, they are disconnected and some edge information is missing. The watershed algorithm performs well in comparison, since it smooths the edges and maintains their continuity, whether there is a clear river bank or there are blurred and relatively complicated regions, as in Figure 5.16(b) and (c).



Figure 5.16 (a) Original SAR image IV overlaid with reference map in yellow solid line as the river edge (b) Canny edge detection example 3 (c) Canny edge detection outcome example 3 overlaid with the watershed transformation result

Compared with the Canny method, the watershed algorithm can not only extract the main water bodies correctly, since it neglects the trestle bridge and small boat in the middle of the river in Figure 5.17(b) and (c), but also present the edge continuously and completely by delineating the main river boundaries, particularly in some topographic areas where the edge is hardly visible.

After setting marker controls on both the background and foreground of the SAR image, over-segmentation by the watershed algorithm is limited to the land areas, and the water body is segmented properly. Because the Canny edge detector computes the image gradient to emphasize regions with high spatial derivatives, while suppressing any pixel that is not a local maximum, it may give rise to incomplete or discontinuous edges. Moreover, compared with the watershed algorithm, Canny is computationally expensive.
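The suppression step mentioned here, keeping only gradient-magnitude maxima along the gradient direction, can be sketched as follows. This is a generic illustration of the Canny principle, not the thesis implementation; the four-direction quantisation of the gradient angle is a common simplification:

```python
import numpy as np
from scipy.ndimage import sobel

def nonmax_suppress(img):
    """Keep only pixels that are local maxima of the gradient magnitude
    along the (quantised) gradient direction, as in the Canny detector."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    out = np.zeros_like(mag)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            a = ang[r, c]
            if a < 22.5 or a >= 157.5:      # gradient roughly horizontal
                n1, n2 = mag[r, c - 1], mag[r, c + 1]
            elif a < 67.5:                  # diagonal
                n1, n2 = mag[r - 1, c + 1], mag[r + 1, c - 1]
            elif a < 112.5:                 # gradient roughly vertical
                n1, n2 = mag[r - 1, c], mag[r + 1, c]
            else:                           # other diagonal
                n1, n2 = mag[r - 1, c - 1], mag[r + 1, c + 1]
            if mag[r, c] >= n1 and mag[r, c] >= n2:
                out[r, c] = mag[r, c]
    return out

# A soft step edge: only the column with the strongest gradient survives.
img = np.tile(np.array([0.0, 0.0, 2.0, 3.0, 3.0, 3.0]), (6, 1))
thin = nonmax_suppress(img)
```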

Figure 5.17 (a) Original SAR image V overlaid with reference map in yellow solid line as the river edge (b) Canny edge detection example 4 (c) Canny edge detection outcome example 4 overlaid with the watershed transformation result


In order to evaluate the accuracy of the edge maps extracted by the watershed algorithm and the Canny edge detector, a reference map was generated manually. Then, 800 edge pixels were selected randomly along the river edge from the reference map, and the corresponding edge pixels derived from the watershed and Canny results were extracted. It is assumed that an extracted pixel lies within a radius of 3 pixels of its reference pixel (approximately equal to 3 times the RMSE of the edge position); otherwise the pixel position is considered a blunder and discarded. The Euclidean distance between each reference pixel and the corresponding extracted edge pixel from the watershed and Canny results was computed as:

d_i = √((x_ref,i − x_i)² + (y_ref,i − y_i)²) (Eq 5.5)

where (x_ref,i, y_ref,i) are the coordinates of the i-th reference pixel, and (x_i, y_i) are the corresponding pixel coordinates (row and column) from the watershed or Canny result. The computed Euclidean distances were used to estimate the standard deviation of each approach using Eq 5.6:

σ = √((1/(n − 1)) Σ d_i²) (Eq 5.6)

The spatial pixel size (63 cm) is applied to convert the pixel-based Euclidean distances into ground distances in metres. The estimated σ for the watershed segmentation, derived from the 800 samples along the water body, is 1.1 m, and for the Canny result it is 2.1 m. The standard deviation is an estimate of the accuracy of the experimental results and shows the reliability of the watershed transformation compared with the Canny edge operator.
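Eq 5.5 and Eq 5.6, together with the 3-pixel blunder rejection and the 63 cm pixel-to-metre conversion, can be sketched as follows (the coordinate pairs below are hypothetical, not the 800 thesis samples):

```python
import math

def edge_sigma(ref_pts, ext_pts, pixel_size_m=0.63, max_radius_px=3.0):
    """Edge-position standard deviation from matched (row, col) pixel pairs.

    Pairs farther apart than max_radius_px are treated as blunders and
    discarded, following the evaluation procedure described in the text.
    """
    dists = []
    for (xr, yr), (x, y) in zip(ref_pts, ext_pts):
        d = math.hypot(xr - x, yr - y)          # Eq 5.5
        if d <= max_radius_px:
            dists.append(d)
    n = len(dists)
    sigma_px = math.sqrt(sum(d * d for d in dists) / (n - 1))  # Eq 5.6
    return sigma_px * pixel_size_m              # pixels -> metres on the ground

# Hypothetical matched edge pixels; the last pair is a deliberate blunder.
ref = [(10, 10), (20, 15), (30, 20), (40, 25)]
ext = [(10, 11), (21, 15), (30, 22), (49, 25)]
sigma_m = edge_sigma(ref, ext)
```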

5.2.6 Analysis of Results of Water Body Detection

The purpose of this study was to compare the efficiency of the watershed technique with Canny edge detection and to yield an intuitive, well-segmented image for mapping water body boundaries. In Section 5.2, the watershed transformation algorithm was applied together with an adaptive median filter and a top-hat transformation. In the pre-processing, the median filter was used as an automatic method to limit the effects of speckle noise in the SAR image; it reduces the sharp changes caused by speckle noise while effectively preserving the edges in the SAR data, simply and rapidly.

By using the gradient image as the input to the watershed method, and by labelling internal and external markers, the oversegmented regions were properly constrained and redundant trivial regions merged into large blocks. The water body edges extracted by the watershed algorithm were identified more efficiently than with the Canny edge operator, because in certain cases Canny fails to define the actual river edge, producing wrong contours that may be caused by coherent speckle noise in the SAR image. On the other hand, the results of the marker-controlled watershed algorithm were intuitive on visual comparison with the original SAR image, based on well-prepared input data. Future work will focus on different techniques for classifying markers and on a sub-pixel watershed algorithm; other potential constraints could also be targeted to further improve the effectiveness of watershed algorithms.



Chapter 6 Concluding Remarks and Future Research


6. Concluding Remarks and Future Research

Nowadays, extreme weather events such as tropical cyclones and floods appear to be becoming more frequent and destructive in many regions of the globe. Rising awareness of the geospatial information available from SAR remotely sensed data has led to an increase in requests for SAR data by mapping and insurance services, to support civil protection and relief organizations with disaster-related planning and analysis activities.

Since a large number of satellite missions with high revisit frequencies and multiple polarizations have been launched, far more SAR data is now available than previously for studying the impacts of natural hazards. These data are proving far more beneficial for observing the large-scale extent of the adverse effects of disaster events at fine temporal and spatial resolutions. However, using the data also calls for automatic and accurate detection approaches, which should drastically reduce biases in its interpretation.

6.1 Concluding Remarks

In this thesis, several difficulties were targeted and corresponding solutions are proposed to achieve the main objectives that were stated in Chapter 1. The contributions of this thesis are summarized as follows:

6.1.1 Conclusion of Tropical Cyclone Case Study

For the case study of TC eye extraction presented in Section 5.1, an automatic method of extracting the areas of tropical cyclone (TC) eyes was developed using a mathematical morphology algorithm on SAR images for different hurricane events. The UAF filtering algorithm performs better than the other noise reduction filters because it suppresses the noise in the image without smoothing the edges, ensuring that robust interpretation can be conducted in the subsequent procedures. Otsu’s automatic threshold selection method efficiently separated a clear and identifiable TC eye area for the following procedures, despite the disruptive effects of the surrounding texture. In the morphological process, redundant skeletons caused by noise along the edges of the object are eliminated by applying skeleton pruning based on Discrete Curve Evolution (DCE). Repeated iterations trim undesirable pixels, assisting in generating an accurately reconstructed result with a meaningful pruned skeleton.
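Otsu’s criterion, maximising the between-class variance of the grey-level histogram, can be sketched as follows (a standard textbook formulation rather than the thesis code; the bimodal test image is illustrative):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's automatic threshold: choose the grey level that maximises
    the between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist.astype(float) / hist.sum()
    w0 = np.cumsum(p)                   # class-0 (background) probability
    w1 = 1.0 - w0                       # class-1 (foreground) probability
    mu = np.cumsum(p * centers)         # cumulative mean grey level
    mu_t = mu[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between[~np.isfinite(var_between)] = 0.0
    return centers[np.argmax(var_between)]

# Bimodal test image: dark "eye" pixels against brighter surroundings.
img = np.concatenate([np.full(500, 30.0), np.full(500, 200.0)])
t = otsu_threshold(img)
```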

The assessment was implemented for four major tropical cyclone events (also called hurricanes, but the term tropical cyclone is used in this context) whose eye areas exhibit various complex patterns, with different sizes and shapes: in the Gulf of Mexico (Hurricane Katrina, 2005, and Hurricane Rita, 2005), the island of Jamaica (Hurricane Dean, 2007), and Queens County near Western Head, Nova Scotia (Hurricane Earl, 2010).

By applying a novel morphological skeleton with DCE-based pruning to ocean SAR images for marine meteorology, with the purpose of locating and extracting TC eye areas, close agreement with the manually extracted reference data has been achieved, with a relative accuracy of 92%. The previous technique, using the edge detection properties of wavelets and evaluated against the same reference data as the mathematical morphology approach, achieved an average relative accuracy of only about 64%. The TC evolution, together with the maximum sustained wind, largely determines the TC category classification and its destructiveness for inland residents. Moreover, by extracting and locating TC eye areas in SAR data, this method should enable meteorologists to predict or evaluate with high accuracy the movement of a TC and its landfall, based on time series ocean SAR data. A path diagram of the TC eye should assist researchers to compare it with that derived from the Dvorak tropical cyclone intensity estimation technique, providing a better understanding of TC evolution and behaviour patterns for the purposes of natural disaster prevention.



6.1.2 Conclusion of Water Body Detection Case Study

For the case study of water body detection described in Section 5.2, the presence of speckle noise and a variety of ground features make SAR image segmentation difficult, giving rise to oversegmentation and errors in automated edge detection. Unlike the TC eye case study (Section 5.1), which used the UAF filter, a median filter was used in Section 5.2 as a simple and efficient method of reducing the noise that would otherwise produce a number of false edges.

The top-hat transformation, together with the median filter, is necessary to help reduce the over-segmentation caused by noise and poor image contrast. The over-segmentation was addressed by the improved, marker-controlled watershed algorithm used in Section 5.2 (presented in detail in Chapter 3). By this method irrelevant contour elements are removed, and the resulting catchment basins demark only the water bodies. According to the experimental results discussed in Section 5.2, the enormous number of trivial elements was significantly decreased by the marker-controlled watershed algorithm, so that over-segmentation no longer hindered the extraction of the different objects present in the SAR image.

As an edge-based method, the Canny detector failed to maintain the completeness of the river edge with clear contours. Compared with the Canny method, the advantage of the marker-controlled watershed algorithm, a region-based approach, is that the real river edge can be measured accurately, while Canny wrongly detects the contours of irrelevant objects as the actual edge. Owing to noise effects in some areas, the Canny method produced redundant edges that are less accurate than the watershed-extracted river edges. Furthermore, edges detected by Canny might lose some useful information, whereas the watershed performance is smoother, with continuous edges represented in the SAR data.

In particular, after implementing the improved watershed algorithm, over-segmentation is restricted to inland regions, so the extent of the water bodies is segmented appropriately, ignoring disturbances along the river bank such as trestles and small boats. The 800 manually extracted edge pixels were used to evaluate the accuracy of the edge maps produced by the watershed algorithm and the Canny edge operator. The estimated standard deviations for the watershed and Canny results are 1.1 m and 2.1 m, respectively, indicating that the watershed segmentation is relatively reliable and more accurate than the edge-based Canny method.

6.2 Recommendations for Future Research

The major advantages of passive optical remote sensing are true colour and superior resolution for examining detailed information. SAR data is extensively used because of its ability to penetrate cloud in all weather conditions. In addition, various polarizations and composite intensity/coherence images are beneficial for identifying the decorrelation factors behind variations in the radiation reflected from water bodies. Many methodologies and researchers tend to use multi-source data fusion, which has gained popularity in terms of accuracy and availability: for instance, combining SAR, optical remote sensing data and other GIS information for tropical cyclone monitoring, storm surge prediction, and inland floodplain detection with advanced satellite missions. Moreover, for environmental and meteorological monitoring, high temporal resolution is a more important requirement than high spatial resolution, as devastating events (tropical cyclones or floods) are usually highly dynamic and have relatively short lifespans. A complete time series track providing high temporal resolution during a flood or TC would better support researchers and governments in further analysis.

A wide range of possible applications could be researched to optimize the morphological skeleton algorithm by introducing different implementation methods. In this thesis, the DCE algorithm associated with the integer medial axis produces a connected skeleton from noisy shape data, and the output shape is then suitably reconstructed from the skeleton using the feature transform. Furthermore, as the principle of the integer medial axis can be applied efficiently to three-dimensional feature reconstruction, it is more versatile than planar feature extraction.

Future work can focus on different techniques for classifying markers and on a sub-pixel watershed algorithm. Other potential constraints could also be explored to further improve the effectiveness of watershed algorithms.


References

AGENT, E. S. 2000. RADAR and SAR Glossary [Online]. Available from : .

AGUIRRE-SALADO, C. A., TREVINO-GARZA, E. J., AGUIRRE-CALDERON, O. A., JIMENEZ-PEREZ, J., GONZALEZ-TAGLE, M. A., MIRANDA-ARAGON, L., VALDEZ-LAZALDE, J. R., AGUIRRE- SALADO, A. I. & SANCHEZ-DIAZ, G. 2012. Forest Cover Mapping in North-Central Mexico: A Comparison of Methods. GIScience & Remote Sensing, 49 , 895-914.

ALEXANDRA, J. 2012. Australia's landscapes in a changing climate-caution, hope, inspiration, and transformation. Crop & Pasture Science, 63 , 215-231.

ALI, S. M., JAVED, M. Y., KHATTAK, N. S., MOHSIN, A. & FAROOQ, U. 2008. Despeckling of Synthetic Aperture Radar Images Using Inner Product Spaces in Undecimated Wavelet Domain. Proceedings of World Academy of Science, Engineering and Technology, 27, 167-172.

ALI, S., HASSAN, A., MARTIN, T. C. & HASSAN, Q. H. 2001. Geo-spatial tools for monitoring floodplain water dynamics. Remote Sensing and Hydrology 2000 , 465-468.

ANDERSON, D. R. 1974. National Flood Insurance Program - Problems and Potential. Journal of Risk and Insurance, 41 , 579-599.

ATKINSON, G. D. & HOLLIDAY, C. R. 1977. Tropical cyclone minimum sea level pressure/maximum sustained wind relationship for the western north Pacific. Monthly Weather Review, 105 , 421-427.

ATTALI, D., BOISSONNAT, J.-D. & EDELSBRUNNER, H. 2009. Stability and computation of medial axes-a state-of-the-art report. Mathematical foundations of scientific visualization, computer graphics, and massive data exploration. Springer.

BAI, X., LATECKI, L. J. & LIU, W.-Y. 2007. Skeleton pruning by contour partitioning with discrete curve evolution. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29 , 449-462.

BAMLER, R. 2000. Principles of synthetic aperture radar. Surveys in Geophysics, 21 , 147-157.

BELL, M. M. & MONTGOMERY, M. T. 2008. Observed structure, evolution, and potential intensity of category 5 Hurricane Isabel (2003) from 12 to 14 September. Monthly Weather Review, 136 , 2023- 2046.

BEUCHER, S. & LANTUÉJOUL, C. 1979. Use of watersheds in contour detection.

BEUCHER, S. & MEYER, F. 1992. The morphological approach to segmentation: the watershed transformation. Optical Engineering, 34, 433-433.

BLUM, H. 1967. A transformation for extracting new descriptors of shape. Models for the perception of speech and visual form, 19 , 362-380.

BLUM, H. 1973. Biological shape and visual science (Part I). Journal of theoretical Biology, 38 , 205-287.

BONN, F. & DIXON, R. 2005. Monitoring flood extent and forecasting excess runoff risk with RADARSAT-1 data. Natural Hazards, 35 , 377-393.

BOURGEAU-CHAVEZ, L. L., RIORDAN, K., POWELL, R. B., MILLER, N. & BARADA, H. 2009. Improving Wetland Characterization with Multi-Sensor, Multi-Temporal SAR and Optical/Infrared Data Fusion .

129

BUENO, G., GONZALEZ, R., DENIZ, O., GARCIA-ROJO, M., GONZALEZ-GARCIA, J., FERNANDEZ-CARROBLES, M. M., VALLEZ, N. & SALIDO, J. 2012. A parallel solution for high resolution histological image analysis. Computer Methods and Programs in Biomedicine, 108, 388-401.

CALABI, L. 1965. A study of the skeleton of plane figures, Parke Mathematical Laboratories.

CAMPBELL, B. A. 2002. Radar remote sensing of planetary surfaces, Cambridge University Press.

CANNY, J. 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8, 679-698.

CHAPRON, B., FOUHAILY, T. & KERBAOL, V. 1995. Calibration and validation of ERS wave mode products. Doc. DRO/OS/95, 2.

COHEN, L. D. 1991. On active contour models and balloons. CVGIP: Image Understanding, 53, 211-218.

DASILVA, J. D. & KUX, H. J. H. 1992a. Thematic Mapper and GIS Data Integration to Evaluate the Flooding Dynamics within the Pantanal - Mato-Grosso-Do-Sul State, Brazil. International Space Year: Space Remote Sensing, Vols 1 and 2, 1478-1480.

DASILVA, J. D. V. & KUX, H. J. H. 1992b. Remote-Sensing Techniques to the Detection and Mapping of Flooding Dynamics within the Pantanal, Mato-Grosso-Do-Sul State, Brazil - Preliminary Results. Proceedings of the 24th International Symposium on Remote Sensing of Environment, Vols 1 and 2, 353-365.

DE FLORIANI, L. & SPAGNUOLO, M. 2007. Shape analysis and structuring, Springer.

DELLEPIANE, S., BO, G., MONNI, S. & BUCK, C. 2000a. Improvements in flood monitoring by means of interferometric coherence. SAR Image Analysis, Modeling, and Techniques III, 4173, 219-229.

DEUTSCH, M. & RUGGLES, F. 1974a. Optical data processing and projected applications of the ERTS-1 imagery covering the 1973 Mississippi River Valley floods. JAWRA Journal of the American Water Resources Association, 10, 1023-1039.

DEUTSCH, M. & RUGGLES, F. H. 1974b. Optical data processing and projected applications of the ERTS-1 imagery covering the 1973 Mississippi River Valley floods. NASA Special Publication, 351, 1167-1188.

DONELAN, M. A. & PIERSON, W. J. 1987. Radar scattering and equilibrium ranges in wind-generated waves with application to scatterometry. Journal of Geophysical Research: Oceans (1978–2012), 92, 4971-5029.

DU, Y. & VACHON, P. W. 2003. Characterization of hurricane eyes in RADARSAT-1 images with wavelet analysis. Canadian Journal of Remote Sensing, 29, 491-498.

DURDEN, S. L., VANZYL, J. J. & ZEBKER, H. A. 1989. Modeling and Observation of the Radar Polarization Signature of Forested Areas. IEEE Transactions on Geoscience and Remote Sensing, 27, 290-301.

DVORAK, V. F. 1972. A technique for the analysis and forecasting of tropical cyclone intensities from satellite pictures, US Department of Commerce, National Oceanic and Atmospheric Administration, National Environmental Satellite Service.

DVORAK, V. F. 1975. Tropical Cyclone Intensity Analysis and Forecasting from Satellite Imagery. Monthly Weather Review, 103, 420-430.

DVORAK, V. F. 1984. Tropical cyclone intensity analysis using satellite data, US Department of Commerce, National Oceanic and Atmospheric Administration, National Environmental Satellite, Data, and Information Service.

European Space Agency (ESA). 2005. Hurricane Katrina off southern Florida [Online]. Available from: <http://www.esa.int/ESA>.

ELACHI, C. & VAN ZYL, J. J. 2006. Introduction to the physics and techniques of remote sensing, John Wiley & Sons.

EVANS, D. L., FARR, T. G., FORD, J. P., THOMPSON, T. W. & WERNER, C. L. 1986. Multipolarization Radar Images for Geologic Mapping and Vegetation Discrimination. IEEE Transactions on Geoscience and Remote Sensing, 24, 246-257.

FARR, T. G. 1993. Guide to Magellan image interpretation.

FELLAH, K., MEYER, C., LAUGIER, O., CLANDILLON, S. & DE FRAIPONT, P. 1997. Potential and limitations of multi-temporal SAR data in a quantitative approach for multi-scalar hydrological applications. Synthesis of ERS Alsace/Camargue pilot project. Third ERS Symposium on Space at the Service of Our Environment, Vol 1, 414, 61-70.

FERNANDES, R. C., PEREZ, E., BENITO, R. G., DELGADO, R. M. G., DE AMORIM, A. L., SANCHEZ, S. F., HUSEMANN, B., BARROSO, J. F., SANCHEZ-BLAZQUEZ, P., WALCHER, C. J. & MAST, D. 2013. Resolving galaxies in time and space I. Applying starlight to CALIFA datacubes. Astronomy & Astrophysics, 557.

FETTERER, F., GINERIS, D. & WACKERMAN, C. C. 1998. Validating a scatterometer wind algorithm for ERS-1 SAR. IEEE Transactions on Geoscience and Remote Sensing, 36, 479-492.

FISHER, A. 2014. Cloud and Cloud-Shadow Detection in SPOT5 HRG Imagery with Automated Morphological Feature Extraction. Remote Sensing, 6, 776-800.

FORGHANI, A., CECHET, B. & NADIMPALLI, K. 2007. Object-based classification of multi-sensor optical imagery to generate terrain surface roughness information for input to wind risk simulation. IGARSS: 2007 IEEE International Geoscience and Remote Sensing Symposium, Vols 1-12, 3090-3095.

FRIEDMAN, K. S. & LI, X. 2000. Monitoring hurricanes over the ocean with wide swath SAR. Johns Hopkins APL Technical Digest, 21, 80-85.

FROST, V. S., STILES, J. A., SHANMUGAN, K. S. & HOLTZMAN, J. C. 1982. A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4, 157-166.

FUKUNAGA, K. 1990. Introduction to statistical pattern recognition, Academic Press.

GALLAGHER JR, N. C. & WISE, G. L. 1981. A theoretical analysis of the properties of median filters. IEEE Transactions on Acoustics, Speech and Signal Processing, 29, 1136-1141.

GATELLI, F., GUARNIERI, A. M., PARIZZI, F., PASQUALI, P., PRATI, C. & ROCCA, F. 1994. The Wave-Number Shift in SAR Interferometry. IEEE Transactions on Geoscience and Remote Sensing, 32, 855-865.

GOH, W. B. 2008. Strategies for shape matching using skeletons. Computer Vision and Image Understanding, 110, 326-345.

GONZALES, R. C. & WOODS, R. E. 2002. Digital Image Processing, 2nd edn. Prentice Hall.

GOUTSIAS, J. & SCHONFELD, D. 1991. Morphological Representation of Discrete and Binary Images. IEEE Transactions on Signal Processing, 39, 1369-1379.

GUHA-SAPIR, D., HOYOIS, P. & BELOW, R. 2014. Annual Disaster Statistical Review 2012: The Numbers and Trends. Centre for Research on the Epidemiology of Disasters (CRED), Institute of Health and Society (IRSS) and Université catholique de Louvain: Louvain-la-Neuve, Belgium.

GUPTA, R. P. & BANERJI, S. 1985. Monitoring of reservoir volume using LANDSAT data. Journal of Hydrology, 77, 159-170.

HALLBERG, G. R., HOYER, B. E. & RANGO, A. 1973. Application of ERTS-1 imagery to flood inundation mapping. NASA Special Publication, 327, 745-753.

HARRIS, P. T., ASHLEY, G. M., COLLINS, M. B. & JAMES, A. E. 1986. Topographic Features of the Bristol Channel Sea-Bed - a Comparison of SEASAT (Synthetic Aperture Radar) and Side-Scan Sonar Images. International Journal of Remote Sensing, 7, 119-136.

HASLER, A., PALANIAPPAN, K., KAMBHAMMETU, C., BLACK, P., UHLHORN, E. & CHESTERS, D. 1998. High-resolution wind fields within the inner core and eye of a mature tropical cyclone from GOES 1-min images. Bulletin of the American Meteorological Society, 79, 2483-2496.

HENDERSON, F. 1995. Environmental factors and the detection of open surface water areas with X-band radar imagery. International Journal of Remote Sensing, 16, 2423-2437.

HENSLEY, S. & MADSEN, S. N. 2007. Interferometric radar waveform design and the effective interferometric wavelength. 2007 International Waveform Diversity & Design Conference, 287-291.

HERSBACH, H., STOFFELEN, A. & DE HAAN, S. 2007. An improved C-band scatterometer ocean geophysical model function: CMOD5. Journal of Geophysical Research: Oceans (1978–2012), 112.

HESS, L. L., MELACK, J. M. & DAVIS, F. W. 1994. Mapping of Floodplain Inundation with Multi-Frequency Polarimetric SAR - Use of a Tree-Based Model. IGARSS '94 - 1994 International Geoscience and Remote Sensing Symposium Volumes 1-4, 1072-1073.

HESS, L. L., MELACK, J. M. & SIMONETT, D. S. 1990. Radar Detection of Flooding beneath the Forest Canopy - a Review. International Journal of Remote Sensing, 11, 1313-1325.

HESSELINK, W. H. & ROERDINK, J. B. 2008. Euclidean skeletons of digital image and volume data in linear time by the integer medial axis transform. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30, 2204-2217.

HODGSON, M. E., JENSEN, J. R., SCHMIDT, L., SCHILL, S. & DAVIS, B. 2003. An evaluation of LIDAR- and IFSAR-derived digital elevation models in leaf-on conditions with USGS Level 1 and Level 2 DEMs. Remote Sensing of Environment, 84, 295-308.

HOFFER, R. M., MUELLER, P. W. & LOZANO-GARCIA, D. F. 1985. Assessing forest resources using multiple incidence angle SIR-B data. Digest - 1985 International Geoscience and Remote Sensing Symposium (IGARSS '85), Amherst, MA, USA. IEEE, 572.

HOLLIDAY, C. 1969. On the maximum sustained winds occurring in Atlantic hurricanes, US Department of Commerce, Environmental Science Services Administration, Weather Bureau, Southern Region Headquarters, Scientific Services Division.

HORSTMANN, J. & KOCH, W. 2005. Measurement of ocean surface winds using synthetic aperture radars. IEEE Journal of Oceanic Engineering, 30, 508-515.

HORSTMANN, J., KOCH, W., LEHNER, S. & TONBOE, R. 2000. Wind retrieval over the ocean using synthetic aperture radar with C-band HH polarization. IEEE Transactions on Geoscience and Remote Sensing, 38, 2122-2131.

HORSTMANN, J., KOCH, W., LEHNER, S. & TONBOE, R. 2002. Ocean winds from RADARSAT-1 ScanSAR. Canadian Journal of Remote Sensing, 28, 524-533.

HORSTMANN, J., LEHNER, S. & SCHILLER, H. 2001. Global wind speed retrieval from complex SAR data using scatterometer models and neural networks. IGARSS'01, IEEE 2001 International Geoscience and Remote Sensing Symposium. IEEE, 1553-1555.

HUNTER, G. J. & GOODCHILD, M. F. 1995. Dealing with Error in Spatial Databases - a Simple Case-Study. Photogrammetric Engineering and Remote Sensing, 61, 529-537.

IMHOFF, M. L., VERMILLION, C., STORY, M. H., CHOUDHURY, A. M., GAFOOR, A. & POLCYN, F. 1987. Monsoon flood boundary delineation and damage assessment using space borne imaging radar and Landsat data. Photogrammetric Engineering and Remote Sensing, 53, 405-413.

ISLAM, M. M. & SADO, K. 2001. Flood damage and management modelling using satellite remote sensing data with GIS: case study of Bangladesh. Remote Sensing and Hydrology 2000, 455-457.

JAIN, R., KASTURI, R. & SCHUNCK, B. G. 1995. Machine vision, McGraw-Hill, New York.

JARVINEN, B. R., NEUMAN, C. & DAVIS, M. 1988. A tropical cyclone data tape for the North Atlantic basin. NOAA Tech. Memo. NWS NHC-22.

JAXA. Polarimetric observation by PALSAR.

JENSEN, J. R. 2009. Remote Sensing of the Environment: An Earth Resource Perspective, 2nd edn, Pearson Education India.

JI, L. A. & PIPER, J. 1992. Fast Homotopy-Preserving Skeletons Using Mathematical Morphology. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 653-664.

JOHANNESSEN, J. A., SHUCHMAN, R. A., DIGRANES, G., LYZENGA, D., WACKERMAN, C., JOHANNESSEN, O. M. & VACHON, P. 1996. Coastal ocean fronts and eddies imaged with ERS 1 synthetic aperture radar. Journal of Geophysical Research: Oceans (1978–2012), 101, 6651-6667.

JONES, K. H. 1998. A comparison of algorithms used to compute hill slope as a property of the DEM. Computers & Geosciences, 24, 315-323.

KARVONEN, J., SIMILA, M. & MAKYNEN, M. 2005. Open water detection from Baltic Sea ice Radarsat-1 SAR imagery. IEEE Geoscience and Remote Sensing Letters, 2, 275-279.

KATSAROS, K. B., VACHON, P. W., BLACK, P. G., DODGE, P. P. & UHLHORN, E. W. 2000. Wind fields from SAR: Could they improve our understanding of storm dynamics? Johns Hopkins APL Technical Digest, 21, 86-93.

KATZ, R. A. & PIZER, S. M. 2003. Untangling the Blum Medial Axis Transform. International Journal of Computer Vision, 55, 139-153.

KERBAOL, V., CHAPRON, B. & VACHON, P. W. 1998. Analysis of ERS-1/2 synthetic aperture radar wave mode imagettes. Journal of Geophysical Research: Oceans (1978–2012), 103, 7833-7846.

KHAN, S. I., HONG, Y., WANG, J., YILMAZ, K. K., GOURLEY, J. J., ADLER, R. F., BRAKENRIDGE, G. R., POLICELLI, F., HABIB, S. & IRWIN, D. 2011. Satellite remote sensing and hydrologic modeling for flood inundation mapping in Lake Victoria Basin: Implications for hydrologic prediction in ungauged basins. IEEE Transactions on Geoscience and Remote Sensing, 49, 85-95.

KIAGE, L. M., WALKER, N. D., BALASUBRAMANIAN, S., BABIN, A. & BARRAS, J. 2005. Applications of Radarsat-1 synthetic aperture radar imagery to assess hurricane-related flooding of coastal Louisiana. International Journal of Remote Sensing, 26, 5359-5380.

KNAFF, J. A., BROWN, D. P., COURTNEY, J., GALLINA, G. M. & BEVEN, J. L. 2010. An evaluation of Dvorak technique-based tropical cyclone intensity estimates. Weather and Forecasting, 25, 1362-1379.

KNOTT, E. F., SHAEFFER, J. & TULEY, M. 2004. Radar cross section, SciTech Publishing.

KNUTSON, T. R., MCBRIDE, J. L., CHAN, J., EMANUEL, K., HOLLAND, G., LANDSEA, C., HELD, I., KOSSIN, J. P., SRIVASTAVA, A. & SUGI, M. 2010. Tropical cyclones and climate change. Nature Geoscience, 3, 157-163.

KOCH, W. 2004. Directional analysis of SAR images aiming at wind direction. IEEE Transactions on Geoscience and Remote Sensing, 42, 702-710.

KRESCH, R. & MALAH, D. 1994. Morphological Reduction of Skeleton Redundancy. Signal Processing, 38, 143-151.

KROHN, M. D., MILTON, N. M. & SEGAL, D. B. 1983. SEASAT Synthetic Aperture Radar (SAR) Response to Lowland Vegetation Types in Eastern Maryland and Virginia. Journal of Geophysical Research: Oceans and Atmospheres, 88, 1937-1952.

KUAN, D. T., SAWCHUK, A. A., STRAND, T. C. & CHAVEL, P. 1987. Adaptive Restoration of Images with Speckle. IEEE Transactions on Acoustics, Speech and Signal Processing, 35, 373-383.

KWOUN, O. I. & LU, Z. 2009. Multi-temporal RADARSAT-1 and ERS Backscattering Signatures of Coastal Wetlands in Southeastern Louisiana. Photogrammetric Engineering and Remote Sensing, 75, 607-617.

LACAVA, T., FILIZZOLA, C., PERGOLA, N., SANNAZZARO, F. & TRAMUTOLI, V. 2010. Improving flood monitoring by the Robust AVHRR Technique (RAT) approach: the case of the April 2000 Hungary flood. International Journal of Remote Sensing, 31, 2043-2062.

LANTUEJOUL, C. 1980. Skeletonization in quantitative metallography. Issues of Digital Image Processing, 34, 109.

LATECKI, L. J. & LAKÄMPER, R. 1999. Polygon evolution by vertex deletion. Scale-Space Theories in Computer Vision. Springer.

LAUGIER, O., FELLAH, K., THOLEY, N., MEYER, C. & DE FRAIPONT, P. 1997. High temporal detection and monitoring of flood zone dynamics using ERS data around catastrophic natural events: The 1993 and 1994 Camargue flood events. Third ERS Symposium on Space at the Service of Our Environment, Vol 1, 414, 559-564.

LEBERL, F. W. 1990. Radargrammetric image processing.

LECOMTE, P. 1993. CMOD4 model description. ESA technical note.

LEE, J. S. 1981. Speckle Analysis and Smoothing of Synthetic Aperture Radar Images. Computer Graphics and Image Processing, 17, 24-32.

LEE, J. S. J., HARALICK, R. M. & SHAPIRO, L. G. 1987. Morphological Edge-Detection. IEEE Journal of Robotics and Automation, 3, 142-156.

LEE, J., SNYDER, P. K. & FISHER, P. F. 1992. Modeling the Effect of Data Errors on Feature-Extraction from Digital Elevation Models. Photogrammetric Engineering and Remote Sensing, 58, 1461-1467.

LEE, J.-S. & JURKEVICH, I. 1990. Coastline detection and tracing in SAR images. IEEE Transactions on Geoscience and Remote Sensing, 28, 662-668.

LEE, R. S. & LIN, J. 2001. An elastic contour matching model for tropical cyclone pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 31, 413-417.

LEE, R. S. & LIU, J. N. 1999. An automatic satellite interpretation of tropical cyclone patterns using elastic graph dynamic link model. International Journal of Pattern Recognition and Artificial Intelligence, 13, 1251-1270.

LEHNER, S., HORSTMANN, J., KOCH, W. & ROSENTHAL, W. 1998. Mesoscale wind measurements using recalibrated ERS SAR images. Journal of Geophysical Research: Oceans (1978–2012), 103, 7847-7856.

LI, X. F., ZHANG, J. A., YANG, X. F., PICHEL, W. G., DEMARIA, M., LONG, D. & LI, Z. W. 2012. Ocean Surface Response to Hurricanes Observed by SAR. 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 5180-5183.

LI, X. F., ZHANG, J. A., YANG, X. F., PICHEL, W. G., DEMARIA, M., LONG, D. & LI, Z. W. 2013. Tropical Cyclone Morphology from Spaceborne Synthetic Aperture Radar. Bulletin of the American Meteorological Society, 94, 215-230.

LI, X., BAKER, A. B. & HUTT, T. 2002. Accuracy of airborne IFSAR mapping. Proceedings of the American Society of Photogrammetry and Remote Sensing, XXII International Congress, Washington, USA.

LILLESAND, T. M., KIEFER, R. W. & CHIPMAN, J. W. 2004. Remote sensing and image interpretation, John Wiley & Sons Ltd.

LIN, S. K. 2013. Introduction to Remote Sensing. By James B. Campbell and Randolph H. Wynne, The Guilford Press, 2011; 662 pages. Price: £80.75, ISBN 978-1-60918-176-5. Remote Sensing, 5, 282-283.

LIU, H., WU, Z., FRANK HSU, D., PETERSON, B. S. & XU, D. 2012. On the generation and pruning of skeletons using generalized Voronoi diagrams. Pattern Recognition Letters, 33, 2113-2119.

LIU, J. N. & LEE, R. S. 1997. Invariant character recognition in dynamic link architecture. Knowledge and Data Engineering Exchange Workshop, Proceedings. IEEE, 188-195.

LIVINGSTONE, C., GRAY, A., HAWKINS, R., VACHON, P., LUKOWSKI, T. & LALONDE, M. 1995. The CCRS airborne SAR systems: Radar for remote sensing research. Canadian Journal of Remote Sensing, 21, 468-491.

LOPES, A., NEZRY, E., TOUZI, R. & LAUR, H. 1990a. Maximum a Posteriori Speckle Filtering and 1st Order Texture Models in SAR Images. Remote Sensing Science for the Nineties, Vols 1-3, 2409-2412.

LOPES, A., TOUZI, R. & NEZRY, E. 1990b. Adaptive Speckle Filters and Scene Heterogeneity. IEEE Transactions on Geoscience and Remote Sensing, 28, 992-1000.

LOWRY, R. T., LANGHAM, E. J. & MUDRY, N. 1981. A preliminary analysis of SAR mapping of the Manitoba flood, May 1979. Proceedings of the Annual William T. Pecora Memorial Symposium on Remote Sensing, 316-323.

MALLIK, A. U. & RASID, H. 1993. Root Shoot Characteristics of Riparian Plants in a Flood-Control Channel - Implications for Bank Stabilization. Ecological Engineering, 2, 149-158.

MARAGOS, P. & ZIFF, R. D. 1990. Threshold superposition in morphological image analysis systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, 498-504.

MARAGOS, P. 1989. Pattern Spectrum and Multiscale Shape Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 701-716.

MARAGOS, P. A. & SCHAFER, R. W. 1986. Morphological Skeleton Representation and Coding of Binary Images. IEEE Transactions on Acoustics, Speech and Signal Processing, 34, 1228-1244.

MARINELLI, L., MICHEL, R., BEAUDOIN, A. & ASTIER, J. 1997. Flood mapping using ERS tandem coherence image: A case in southern France. Third ERS Symposium on Space at the Service of Our Environment, Vol 1, 414, 531-536.

MARR, D. & HILDRETH, E. 1980. Theory of edge detection. Proceedings of the Royal Society of London. Series B. Biological Sciences, 207, 187-217.

MARR, D. 1982. Vision. San Francisco: W. H. Freeman and Company.

MARTINEZ, A. & BYRNES, A. P. 2001. Modeling dielectric-constant values of geologic materials: An aid to ground-penetrating radar data collection and interpretation, Kansas Geological Survey, University of Kansas.

MARTINIS, S., TWELE, A. & VOIGT, S. 2009. Towards operational near real-time flood detection using a split-based automatic thresholding procedure on high resolution TerraSAR-X data. Natural Hazards and Earth System Sciences, 9, 303-314.

MASON, D. C., SPECK, R., DEVEREUX, B., SCHUMANN, G.-P., NEAL, J. C. & BATES, P. D. 2010. Flood detection in urban areas using TerraSAR-X. IEEE Transactions on Geoscience and Remote Sensing, 48, 882-894.

MATGEN, P., HOSTACHE, R., SCHUMANN, G., PFISTER, L., HOFFMANN, L. & SAVENIJE, H. H. G. 2011. Towards an automated SAR-based flood monitoring system: Lessons learned from two case studies. Physics and Chemistry of the Earth, 36, 241-252.

MATHERON, G. & SERRA, J. 2002. The birth of mathematical morphology. Proc. 6th Intl. Symp. Mathematical Morphology, Sydney, Australia, 1-16.

MATHERON, G. 1989. Estimating and choosing: An essay on probability in practice (A. M. Hasofer, Trans.). Berlin: Springer-Verlag. (Original work published 1978).

MCGINNIS, D. F. 1975. Earth Resources Satellite systems for flood monitoring. Geophysical Research Letters, 2, 132-135.

MELACK, J. M., HESS, L. L. & SIPPEL, S. 1994. Remote sensing of lakes and floodplains in the Amazon Basin. Remote Sensing Reviews, 10, 127-142.

MERTES, L. A., SMITH, M. O. & ADAMS, J. B. 1993. Estimating suspended sediment concentrations in surface waters of the Amazon River wetlands from Landsat images. Remote Sensing of Environment, 43, 281-301.

MERWADE, V., OLIVERA, F., ARABI, M. & EDLEMAN, S. 2008. Uncertainty in flood inundation mapping: current issues and future directions. Journal of Hydrologic Engineering, 13, 608-620.

MEYER, F. & BEUCHER, S. 1990. Morphological segmentation. Journal of Visual Communication and Image Representation, 1, 21-46.

MEYER, F. 1982. The perceptual graph: a new algorithm. ICASSP'82, IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1932-1935.

MICHENER, W. K. & HOUHOULIS, P. F. 1997. Detection of vegetation changes associated with extensive flooding in a forested ecosystem. Photogrammetric Engineering and Remote Sensing, 63, 1363-1374.

MÖLLER, T., HAMANN, B. & RUSSELL, R. D. 2009. Mathematical Foundations of Scientific Visualization, Computer Graphics, and Massive Data Exploration, Springer.

MOORE, G. K. & NORTH, G. W. 1974. Flood inundation in the southeastern United States from aircraft and satellite imagery. Ninth International Symposium on Remote Sensing of Environment, Vol I. Environmental Research Institute of Michigan, Ann Arbor, Michigan, 607-620.

MORRISON, R. B. & COOLEY, M. E. 1973. Assessment of flood damage in Arizona by means of ERTS-1 imagery. NASA Special Publication, 327, 755-760.

NALWA, V. S. & BINFORD, T. O. 1986. On detecting edges. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8, 699-714.

NASA/JPL. Mission to Earth: Seasat [Online]. Available from: <www.jpl.nasa.gov/missions/seasat/>.

NICO, G., PAPPALEPORE, M., PASQUARIELLO, G., REFICE, A. & SAMARELLI, S. 2000. Comparison of SAR amplitude vs. coherence flood detection methods - a GIS application. International Journal of Remote Sensing, 21, 1619-1631.

NOORULLAH, R. & DAMODARAM, A. 2009. Innovative Thinning and Gradient Algorithm for Edge Field and Categorization Skeleton Analysis of Binary and Grey Tone Images. Journal of Theoretical & Applied Information Technology, 5.

OBERSTADLER, R., HÖNSCH, H. & HUTH, D. 1997. Assessment of the mapping capabilities of ERS-1 SAR data for flood mapping: a case study in Germany. Hydrological Processes, 11, 1415-1425.

OLANDER, T. L., VELDEN, C. & KOSSIN, J. 2004. The Advanced Objective Dvorak Technique (AODT): Latest upgrades and future directions. 26th Conference on Hurricanes and Tropical Meteorology, 294-295.

OTSU, N. 1975. A threshold selection method from gray-level histograms. Automatica, 11, 23-27.

PAI, T. W. & HANSEN, J. H. L. 1994. Boundary-Constrained Morphological Skeleton Minimization and Skeleton Reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16, 201-208.

PAN, Y., LIU, A. K., HE, S., YANG, J. & HE, M.-X. 2013. Comparison of typhoon locations over ocean surface observed by various satellite sensors. Remote Sensing, 5, 3172-3189.

PARKER, W. V. 2013. Discover the Benefits of Radar Imaging. EIJ Earth Imaging Journal.

PAUL, B. K. & RASID, H. 1993. Flood Damage to Rice Crop in Bangladesh. Geographical Review, 83, 150-159.

PITAS, I. & VENETSANOPOULOS, A. N. 1990. Nonlinear digital filters, Springer.

PIZER, S. M., GERIG, G., JOSHI, S. & AYLWARD, S. R. 2003. Multiscale medial shape-based analysis of image objects. Proceedings of the IEEE, 91, 1670-1679.

PRATT, W. K. 1975. Median filtering. Image Process. Inst., Univ. Southern California, Los Angeles.

PREWITT, J. M. 1970. Object enhancement and extraction. Picture Processing and Psychopictorics, 10, 15-19.

RANGO, A. & SALOMONSON, V. 1977. Utility of short wavelength (less than 1 mm) remote sensing techniques for the monitoring and assessment of hydrologic parameters. 55-66.

REES, G. & REES, W. 2012. Physical principles of remote sensing, Cambridge University Press.

REPPUCCI, A., LEHNER, S., SCHULZ-STELLENFLETH, J. & BRUSCH, S. 2010. Tropical Cyclone Intensity Estimated From Wide-Swath SAR Images. IEEE Transactions on Geoscience and Remote Sensing, 48, 1639-1649.

ROERDINK, J. B. & MEIJSTER, A. 2000. The watershed transform: Definitions, algorithms and parallelization strategies. Fundamenta Informaticae, 41, 187-228.

ROSEN, P. A., HENSLEY, S., JOUGHIN, I. R., LI, F. K., MADSEN, S. N., RODRIGUEZ, E. & GOLDSTEIN, R. M. 2000. Synthetic aperture radar interferometry. Proceedings of the IEEE, 88, 333-382.

ROUSHDY, M. 2006. Comparative study of edge detection algorithms applying on the grayscale noisy image using morphological filter. GVIP Journal, 6, 17-23.

SCHULZ-STELLENFLETH, J., LEHNER, S., REPPUCCI, A., BRUSCH, S. & KÖNIG, T. 2007. On the divergence and vorticity of SAR derived wind fields.

SCHWENDIKE, J. & KEPERT, J. D. 2008. The boundary layer winds in Hurricanes Danielle (1998) and Isabel (2003). Monthly Weather Review, 136, 3168-3192.

SCOON, A., ROBINSON, I. & MEADOWS, P. 1996. Demonstration of an improved calibration scheme for ERS-1 SAR imagery using a scatterometer wind model. International Journal of Remote Sensing, 17, 413-418.

SERRA, J. 1982. Image analysis and mathematical morphology, London: Academic Press.

SHAH, J. 2005. Gray skeletons and segmentation of shapes. Computer Vision and Image Understanding, 99, 96-109.

SHAMSODDINI, A. & TRINDER, J. C. 2012. Edge-detection-based filter for SAR speckle noise reduction. International Journal of Remote Sensing, 33, 2296-2320.

SHAPIRO, L. J. & WILLOUGHBY, H. E. 1982. The response of balanced hurricanes to local sources of heat and momentum. Journal of the Atmospheric Sciences, 39, 378-394.

SHARON, M., FREEMAN, S. & SNEH, B. 2011. Assessment of Resistance Pathways Induced in Arabidopsis thaliana by Hypovirulent Rhizoctonia spp. Isolates. Phytopathology, 101, 828-838.

SHEFFNER, E. J. 1994. The Landsat Program - Recent History and Prospects. Photogrammetric Engineering and Remote Sensing, 60, 735-744.

SHIH, F. Y. & MITCHELL, O. R. 1989. Threshold decomposition of gray-scale morphology into binary morphology. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 31-42.

SHOJI, K. 1992. Generalized Skeleton Representation and Adaptive Rectangular Decomposition of Binary Images. Image Algebra and Morphological Image Processing III, 1769, 404-415.

SHORT, N. M. & ROBINSON, J. 1998. The remote sensing tutorial, Goddard Space Flight Center, NASA.

SIDDIQI, K. & PIZER, S. M. 2008. Medial representations: mathematics, algorithms and applications, Springer.

SIKORA, T. D., FRIEDMAN, K. S., PICHEL, W. G. & CLEMENTE-COLÓN, P. 2000. Synthetic aperture radar as a tool for investigating polar mesoscale cyclones. Weather and Forecasting, 15, 745-758.

SIPPEL, S. J., HAMILTON, S. K. & MELACK, J. M. 1992. Inundation Area and Morphometry of Lakes on the Amazon River Floodplain, Brazil. Archiv für Hydrobiologie, 123, 385-400.

SMITH, L. C. 1997. Satellite remote sensing of river inundation area, stage, and discharge: a review. Hydrological Processes, 11, 1427-1439.

SOHN, H.-G. & JEZEK, K. 1999. Mapping ice sheet margins from ERS-1 SAR and SPOT imagery. International Journal of Remote Sensing, 20, 3201-3216.

SOILLE, P. 2003. Morphological image analysis: principles and applications, Springer-Verlag New York, Inc.

SOLBO, S., MALNES, E., GUNERIUSSEN, T., SOLHEIM, I. & ELTOFT, T. 2003. Mapping surface-water with Radarsat at arbitrary incidence angles. International Geoscience and Remote Sensing Symposium, IV, 2517-2519.

SOLÍS MONTERO, A. & LANG, J. 2012. Skeleton pruning by contour approximation and the integer medial axis transform. Computers & Graphics, 36, 477-487.

SORIANO, M. 2012. Applications of Morphological Operations. Applied Physics 186 Manual Handouts, Instrumentation Physics II.

STERNBERG, S. R. 1983. Biomedical image processing. Computer, 16, 22-34.

STERNBERG, S. R. 1986. Grayscale morphology. Computer Vision, Graphics, and Image Processing, 35, 333-355.

SUHN, H.-G., JEZEK, K., BAUMGARTNER, F., FORSTER, R. & MOSLEY-THOMPSON, E. 1999. Radar backscatter measurements from RADARSAT SAR imagery of South Pole Station, Antarctica. IGARSS'99, IEEE 1999 International Geoscience and Remote Sensing Symposium. IEEE, 2360-2362.

TELEA, A. 2012. Feature preserving smoothing of shapes using saliency skeletons. Visualization in Medicine and Life Sciences II. Springer.

TOWNSEND, P. A. & WALSH, S. J. 1998. Modeling floodplain inundation using an integrated GIS with radar and optical remote sensing. Geomorphology, 21, 295-312.

TOWNSEND, P. A. 2001. Mapping seasonal flooding in forested wetlands using multi-temporal Radarsat SAR. Photogrammetric Engineering and Remote Sensing, 67, 857-864.

TROPICAL CYCLONE WEATHER SERVICES PROGRAM 2006. Tropical Cyclone Definitions [Online]. National Oceanic & Atmospheric Administration. Available: http://www.nws.noaa.gov/directives/.

TUKEY, J. W. 1977. Exploratory data analysis.

ULABY, F., MOORE, R. & FUNG, A. 1982. Microwave remote sensing: Active and passive. Volume 2 - Radar remote sensing and surface scattering and emission theory.

VACHON, P. & DOBSON, F. 1996. Validation of wind vector retrieval from ERS-1 SAR images over the ocean. The Global Atmosphere and Ocean System, 5, 177-187.

VACHON, P. W. & DOBSON, F. W. 2000. Wind retrieval from RADARSAT SAR images: Selection of a suitable C-band HH polarization wind retrieval model. Canadian Journal of Remote Sensing, 26, 306-313.

VACHON, P. W., ADLAKHA, P., EDEL, H., HENSCHEL, M., RAMSAY, B., FLETT, D., REY, M., STAPLES, G. & THOMAS, S. 2000. Canadian progress toward marine and coastal applications of synthetic aperture radar. Johns Hopkins APL Technical Digest, 21, 33-40.

VACHON, P. W., KATSAROS, K., BLACK, P. & DODGE, P. 1999. RADARSAT synthetic aperture radar measurements of some 1998 Hurricanes. Digest - International Geoscience and Remote Sensing Symposium (IGARSS), 1631-1633.

VACHON, P., CLEMENTE-COLON, P., PICHEL, W., BLACK, P., DODGE, P., KATSAROS, K. & MACDONELL, K. 2001. RADARSAT-1 Hurricane watch. IGARSS'01, IEEE 2001 International Geoscience and Remote Sensing Symposium. IEEE, 471-473.

VELDEN, C., OLSON, W. & ROTH, B. 1989. Tropical cyclone center-fixing using DMSP SSM/I data. 4th Conference on Satellite Meteorology and Oceanography, San Diego, CA.

VINCENT, L. & SOILLE, P. 1991. Watersheds in Digital Spaces - an Efficient Algorithm Based on Immersion Simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 583-598.

VINCENT, L. 1993. Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms. IEEE Transactions on Image Processing, 2, 176-201.

VON DER MALSBURG, C. 1994. The correlation theory of brain function, Springer.

WANG, Y., HESS, L. L., FILOSO, S. & MELACK, J. M. 1994. Canopy Penetration Studies - Modeled Radar Backscatter from Amazon Floodplain Forests at C-Band, L-Band, and P-Band. IGARSS '94 - 1994 International Geoscience and Remote Sensing Symposium Volumes 1-4, 1060-1062.

WEDLER, E. & KESSLER, R. 1981. Interpretation of vegetative cover in wetlands using four-channel SAR imagery. Technical Papers of the American Society of Photogrammetry 47th Annual Meeting, ASP-ACSM Convention, Washington, DC, USA. American Society of Photogrammetry, 111-124.

WILLOUGHBY, H. E. 1990. Gradient balance in tropical cyclones. Journal of the Atmospheric Sciences, 47, 265-274.

WISKOTT, L. & VON DER MALSBURG, C. 1996. Recognizing faces by dynamic link matching. Neuroimage, 4, S14-S18.

World Meteorological Organization. Guidelines on the Role, Operations and Management of the National Meteorological or Hydrometeorological Services (NMSs) [Online].

WU, S. T. & SADER, S. A. 1987. Multipolarization SAR Data for Surface-Feature Delineation and Forest Vegetation Characterization. IEEE Transactions on Geoscience and Remote Sensing, 25, 67-76.

WÜRTZ, R. P. 1997. Object recognition robust under translations, deformations, and changes in background. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 769-775.

XU, C. & PRINCE, J. L. 1998. Snakes, shapes, and gradient vector flow. IEEE Transactions on Image Processing, 7, 359-369.

YAMAZAKI, F. 2001. Applications of remote sensing and GIS for damage assessment. Structural Safety and Reliability.

YOO, J., BOUMAN, C. A., DELP, E. J. & COYLE, E. J. 1993. The nonlinear pre-filtering and difference of estimates approaches to edge detection: applications of stack filters. CVGIP: Graphical Models and Image Processing, 55, 140-159.

ZHU, T., ZHANG, D.-L. & WENG, F. 2004. Numerical simulation of Hurricane Bonnie (1998). Part I: Eyewall evolution and intensity changes. Monthly Weather Review, 132, 225-241.
