
Table of Contents

General

● Overview

● Definitions

● Supported Products

● Abstracted Metadata

DAT

● Display and Analysis Tool

Tool Views

❍ Tool Windows

❍ Project View

❍ Product View

❍ Pixel Info View

❍ Image View

❍ Navigation Window

❍ Colour Manipulation

❍ Layer Manager

❍ World Map

❍ WorldWind View

❍ Product Library

❍ Preferences Dialog

❍ Settings Dialog

❍ Wave Mode Polar View

Product Readers

❍ Open SENTINEL-1 Product

❍ Open ENVISAT Product

❍ Open ERS (.E1, .E2) Product

❍ Open ERS1/2 CEOS Product

❍ Open JERS CEOS Product

❍ Open Radarsat-1 CEOS Product

❍ Open Radarsat-2 Product

❍ Open TerraSarX Product

❍ Open Cosmo-Skymed Product

❍ Open ENVI Product

❍ Open PolsarPro Product

❍ Open ALOS PALSAR CEOS Product

❍ Open ALOS AVNIR2 CEOS Product

❍ Open ALOS PRISM CEOS Product

❍ Open GeoTIFF Product

❍ Open ImageIO Product

❍ Open BEAM-DIMAP Product

❍ Open TM Product

❍ Open NetCDF Product

❍ Open HDF Product

❍ Open Generic Binary

❍ Open Complex Generic Binary

❍ Open GETASSE30 Tile

❍ Open GTOPO30 Tile

❍ Open ACE DEM Tile

❍ Open SRTM DEM Tile

❍ Import Geometry/Shape

Product Writers

❍ Export BEAM-DIMAP Product

❍ Export GeoTIFF Product

❍ Export HDF-5 Product

❍ Export NetCDF Product

❍ Export ENVI GCP File

❍ Export Displayed Image

❍ Export Google KMZ

❍ Export ROI Pixels

❍ Export Transect Pixels

❍ Export Color Palette

Imaging Tools

❍ RGB-Image Profile

❍ No Data Overlay

❍ Bitmask Overlay

❍ Bitmask Editor

Analysis Tools

❍ Geometry Management

❍ Mask and ROI Management

❍ Product GCP Manager

❍ Product Pin Manager

❍ Product/Band Information

❍ Product/Band Geo-Coding Information

❍ Band Statistics

❍ Band Histogram

❍ Band Scatter Plot

❍ Band Transect Profile Plot

❍ Band Transect Co-ordinate List

❍ Compute ROI-Mask Area

Product Generation Tools

❍ Product Subset Dialog

❍ Band Arithmetic

❍ Arithmetic Expression Editor

❍ Reprojection

■ Reprojection Dialog

■ Resampling Methods

❍ Data Flip

❍ Attach Pixel-based Geo-Coding

❍ Create Elevation Band

❍ Create Filtered Band

❍ Complex to Detected Ground Range

Graph Processing

● Introduction

Graph Tools

❍ Graph Processing Tool

❍ Command Line Reference

❍ Graph Builder Tool

❍ Batch Processing

Analysis Operators

❍ Principal Component Analysis

❍ EM Cluster Analysis

❍ K-Means Cluster Analysis

❍ Data Analysis

Utility Operators

❍ Create Stack

❍ Create Subset

❍ Band Arithmetic

❍ Convert Datatype

❍ Over Sample

❍ Under Sample

❍ Fill DEM Hole

❍ Image Filtering

SAR Operators

❍ Apply Correction

❍ Calibration

❍ Remove Antenna Pattern

❍ GCP Selection

❍ Multilook

❍ Speckle Filter

❍ Multi-Temporal Speckle Filter

❍ Warp

❍ WSS Deburst

❍ WSS Mosaic

❍ S1 TOPSAR Deburst and Merge

Geometry Operators

❍ Create Elevation

❍ Range Doppler Terrain Correction

❍ SAR Simulation Terrain Correction

❍ SAR Simulation

❍ Geolocation Grid Ellipsoid Correction

❍ Average Height RD Ellipsoid Correction

❍ Map Projection

❍ Slant Range to Ground Range

❍ Mosaic

InSAR

❍ Interferometric functionality

❍ Computation and subtraction of 'flat earth' phase

❍ Estimation and subtraction of topographic phase

❍ Coherence estimation

❍ Azimuth filtering

❍ Range filtering

❍ Phase filtering

❍ Slant to height

❍ Differential InSAR

❍ Unwrapping

❍ Snaphu Data Export

❍ Snaphu Data Import

❍ InSAR Stack Overview

Ocean Tools

❍ Object Detection

❍ Oil Spill Detection

❍ Create Land Mask

❍ Wind Field Estimation

Development

● General Design

● Detailed Design

● Open Source Development

● Building the Source

● Developing a Reader

● Developing a Writer

● Developing an Operator

Tutorials

● Remote Sensing

● Quick Start With The DAT

● Coregistration

● Orthorectification

● Command Line Processing

F.A.Q.

● Installation

● General NEST Overview

Next ESA SAR Toolbox

Overview

The Next ESA SAR Toolbox (NEST) is an open source (GNU GPL) toolbox for reading, post-processing, analysing and visualising the large archive of data (from Level 1) from ESA SAR missions, including ERS-1 & 2, ENVISAT and, in the future, Sentinel-1. In addition, NEST supports handling of products from third party missions including JERS-1, ALOS PALSAR, TerraSAR-X, RADARSAT-1 & 2 and Cosmo-Skymed. NEST has been built using the BEAM Toolbox and Development Platform.

Architecture Highlights

● Display and Analysis Tool (DAT): integrated graphical user-friendly interface

● Graph Processing Framework (GPF): user-defined processing chains

● Graphical or command-line execution

● Tiled memory management for working with very large data products

● Data abstraction models to handle all SAR missions in a common way

● Modular design for easy modifications and upgrades

● Users are able to add their own plug-in modules

● Multithreading and Multi-core processor support

● Integrated WorldWind visualisation

Main Features

● Statistics & Data Analysis

● Metadata handling

● Subset, Resample and Band Arithmetic

● Export to GeoTIFF, HDF 4 & 5, NetCDF, Binary, ENVI and KMZ formats

● LUT and Layer Management

● ROI tools, layer stacking

● Absolute calibration (Envisat ASAR, ERS 1&2, ALOS, RADARSAT-2, TerraSAR-X, Cosmo-SkyMed)

● Multilooking & speckle (single and multitemporal) filtering

● ERS/ASAR precise orbit handling (DORIS, PRARE and Delft orbits)

● Coregistration of detected and complex products

● Debursting of ASAR WSS

● Range-Doppler Terrain Correction

● Radiometric normalization during Terrain Correction

● SAR simulation

● Layover and shadow masks

● Simulated SAR Terrain Correction

● Ellipsoid correction, Map Reprojection, Mosaicking

● Automatic SRTM DEM download and tile selection

● Product library for scanning and cataloguing large archives efficiently

● Ocean tools: basic routines for oil spill detection, ship detection and wind field estimation from SAR data

● Fully integrated and featured InSAR processor (JLinda) for Stripmap and Zero-Doppler focused data

● Compatibility with PolSARpro Toolbox (Reader, Writer)

NEST is being developed by Array Systems Computing Inc. of Toronto Canada under ESA Contract number 20698/07/I-LG. InSAR functionalities are being developed by PPO.labs and Delft University of Technology.

Supported Product Formats:

SAR Data Products

❍ ENVISAT ASAR, MERIS, AATSR

❍ ERS AMI 1 & 2 (CEOS & Envisat format from PGS and VMP)

❍ JERS SAR

❍ ALOS PALSAR

❍ ASAR Wave Products

❍ RADARSAT 1

❍ RADARSAT 2

❍ TerraSar-X

❍ Cosmo-Skymed

Common EO Data Formats

❍ DIMAP

❍ GeoTIFF

❍ HDF 4 & HDF 5

❍ NetCDF

❍ ENVI

❍ PolsarPro

❍ Generic Binary

Digital Elevation Models

❍ SRTM

❍ ASTER Global DEM

❍ ACE

❍ GETASSE30

❍ GTOPO30 tiles

For further details on which products are supported, please see the Supported_Mission-Product_vs_Operators_table.xls

Source Code

The complete NEST software has been developed under the GNU public license and comes with full source code in Java™. An Application Programming Interface (API) is provided with NEST to allow easy extension by users to add new data readers/writers for other formats and to support data formats of future missions. Plug-in modules can be developed separately and shared by the user community. Processors can easily be extended without needing to know about the complexities of the whole software.

Supported Platforms

NEST is programmed in Java™ to allow maximum portability. The NEST software has been successfully tested under MS Windows™ XP®, Vista and 7 as well as under Linux, Solaris® and Mac OS X operating systems.

Definitions, Acronyms, Abbreviations

Definition of Terms

Product: An object that contains remote sensing data for a scene on the Earth. A product can contain metadata, geo-coding information, tie-point grids and bands. All band raster datasets within a product have the same pixel resolution and share the same geo-coding.

Band: A raster dataset of a product. The band's sample values are usually the measurements of a sensor.

Tie-point grid: An auxiliary (geophysical) raster dataset of a product. Tie-point grids usually provide fewer sample values than a band. Missing values, with respect to the full pixel resolution of a product, are obtained by linear interpolation. The tie-point grid data normally does not originate from the sensor which provided the product's measurement data.

Geo-Coding: Provides the geodetic co-ordinates for a given pixel of a product. An image is geo-coded if it is somehow possible to find the geographical latitude and longitude values for any pixel. ENVISAT products store their geo-coding information in the tie-point grids named latitude and longitude. If a product has been map transformed, it is known to be geo-referenced and, as such, tie-point information is no longer required.

Geo-Reference: An image is geo-referenced if any point in the image can be found in a corresponding reference map by a linear transformation. Every pixel in the image has the same size if expressed in map units. Geo-referencing a geo-coded image includes image warping, applying a well known map projection and pixel re-sampling.

Map: Graphic representation of the physical features (natural, artificial, or both) of a part or the whole of the Earth's surface, by means of signs and symbols or photographic imagery, at an established scale, on a specified projection, and with the means of orientation indicated.

Map Projection: Orderly system of lines on a plane representing a corresponding system of imaginary lines on an adopted terrestrial or celestial datum surface. Also, the mathematical concept for such a system. For maps of the Earth, a projection consists of a graticule of lines representing parallels of latitude and meridians of longitude, or a grid.

Graticule: Network of parallels and meridians on a map or chart. A geographic graticule is a system of coordinates of latitude and longitude used to define the position of a point on the surface of the Earth with respect to the reference ellipsoid.

Grid: In connection with maps, a network of uniformly spaced parallel lines intersecting at right angles. When superimposed on a map, it usually carries the name of the projection used for the map – that is, Lambert grid, transverse Mercator grid, universal transverse Mercator grid.

Pixel coordinates: Pixel coordinates always refer to the upper left corner of the pixel. Pixel co-ordinates are always zero based; the pixel at X=0, Y=0 refers to the upper left pixel of an image and the upper left corner of that pixel.

Pixel value: In general, a composite of red, green and blue sample values resulting in a colour as part of an image.

Geodetic co-ordinates: Geodetic co-ordinates are given as latitude and longitude values and always refer, if not otherwise stated, to the WGS-84 ellipsoid. The geodetic co-ordinates of a pixel always refer to the upper left corner of the pixel.

Acronyms, Abbreviations

AATSR - Advanced Along Track Scanning Radiometer
ADS - Annotation Data Set in an ENVISAT data product
ASAR - Advanced Synthetic Aperture Radar
BEAM - Acronym for Basic ENVISAT Toolbox for (A)ATSR and MERIS
COTS - Commercial Off-The-Shelf Software
ECSS - European Co-operation for Space Standardisation (documents available at ESTEC at the Requirements and Standards Division)
EnviView - Software developed at ESTEC to visualise and analyse Envisat data. See the User Services section at http://envisat.esa.int/
EO - Earth Observation
ESA - European Space Agency (see http://www.esa.it/export/esaCP/index.html)
ESRIN - European Space Research Institute (see http://www.esa.it/export/esaCP/index.html)
ESTEC - European Space Research and Technology Centre (see http://www.esa.it/export/esaCP/index.html)
ENVISAT - ESA satellite (see http://envisat.esa.int/)
GADS - Global Annotation Data Set in an ENVISAT data product
HDF - Hierarchical Data Format (see http://www.hdfinfo.com/)
HDF-EOS - Extended HDF format (see http://ecsinfo.gsfc.nasa.gov/iteams/HDF-EOS/HDF-EOS.html)
MDS - Measurement Data Set in an ENVISAT data product
MERIS - Medium Resolution Imaging Spectrometer Instrument (see http://envisat.esa.int/)
MODIS - Moderate Resolution Imaging Spectroradiometer (see http://modis.gsfc.nasa.gov/MODIS/)
MPH - Main Product Header in an ENVISAT data product
NEST - Next ESA SAR Toolbox
OSSD - Open Source Software Development
SAR - Synthetic Aperture Radar (http://earth.esa.int)
SPH - Specific Product Header in an ENVISAT data product
SW - Software

NEST Supported Products

Product Readers

NEST is able to ingest a variety of data products into a common internal representation.

NEST includes readers for the following data products:

● ENVISAT ASAR (IODD 3H to 4B)

❍ ASA_APG_1P, ASA_APM_1P, ASA_APP_1P, ASA_APS_1P, ASA_AP__BP

❍ ASA_IMG_1P, ASA_IMM_1P, ASA_IMP_1P, ASA_IMS_1P, ASA_IM__BP

❍ ASA_WSS_1P, ASA_WSM_1P, ASA_WS__BP

❍ ASA_WVS_1P, ASA_WVW_2P, ASA_WVI_1P

❍ ASA_GM1_1P, ASA_GM__BP

❍ ASA_XCA_AX

● ERS AMI 1 & 2 (PGS in ENVISAT Format)

❍ SAR_IMG_1P, SAR_IMM_1P, SAR_IMP_1P, SAR_IMS_1P, SAR_IM__BP

● ERS AMI 1 & 2 (VMP & PGS in CEOS Format)

❍ SLC, PRI, GEC

● JERS SAR CEOS

❍ SLC, PRI, GEC

● ALOS PALSAR CEOS

❍ FBS, FBD, PLR, WB1 at levels 1.1 and 1.5

● Radarsat 1 CEOS

❍ SLC, SGF, SGX, SSG, SCN, SCW

● Radarsat 2

❍ SLC, SGF, SGX, SSG, SPG

● TerraSAR X

❍ MGD, GEC, EEC, SSC

● Cosmo-Skymed

❍ SCS, DGM, GEC

● Beam DIMAP

● ENVI

● Generic Binary

● PolsarPro

● GeoTiff

● NetCDF

● HDF 4 & 5

● ImageIO

❍ JPG, BMP, PNG, GIF

● Digital Elevation Maps

❍ GETASSE30, ACE, GTOPO30, SRTM, ASTER

● Orbit Files

❍ DORIS DOR_VOR_AX, DOR_POR_AX (ENVISAT)

❍ DELFT Precise (ERS 1, ERS 2, ENVISAT)

❍ PRARE Precise Orbits (ERS 1, ERS 2)

● Optical

❍ ENVISAT MERIS (1b/L2 RR/FR/FRG/FSG)

❍ ERS ATSR and ATSR-2

❍ ALOS AVNIR-2 (1A, 1B1, 1B2)

❍ ALOS PRISM (1A, 1B1, 1B2)

❍ Landsat 5 TM in FAST format

An API is provided with NEST to allow users to add data ingestion of other formats and to support data formats of future missions.
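The same API can be used to open any supported product programmatically. The following is a minimal sketch assuming the BEAM Java API on which NEST is built (the ProductIO class and the Product/Band model); the input file name is only a placeholder, and class or method names may vary slightly between releases:

    import java.io.File;
    import org.esa.beam.framework.dataio.ProductIO;
    import org.esa.beam.framework.datamodel.Band;
    import org.esa.beam.framework.datamodel.Product;

    public class OpenProductExample {
        public static void main(String[] args) throws Exception {
            // ProductIO selects the matching reader plug-in for the given file automatically.
            Product product = ProductIO.readProduct(new File("ASA_IMP_1P_sample.N1"));
            System.out.println("Opened " + product.getName() + " ("
                    + product.getSceneRasterWidth() + " x "
                    + product.getSceneRasterHeight() + " pixels)");
            // Every reader maps its input to the same Product/Band representation.
            for (Band band : product.getBands()) {
                System.out.println("  band: " + band.getName());
            }
            product.dispose();
        }
    }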

Supported Products Table

A complete list of the currently supported products with respect to each operation can be found here.

Product Writers

NEST supports various data conversion options and output to common georeferenced data formats for import into 3rd party software.

The following data format writers are available:

● Beam DIMAP

● GeoTiff

● HDF 5

● NetCDF

● ENVI

● JPG

● BMP

● PNG

● KMZ

New Writer Plugins can also be developed with the easy-to-use API.
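As a rough illustration of how the existing writers can be driven from the API (a minimal sketch assuming the BEAM ProductIO class; the file names are placeholders and the format name strings, e.g. "BEAM-DIMAP" or "GeoTIFF", may differ between releases), a product can be converted from one format to another like this:

    import java.io.File;
    import org.esa.beam.framework.dataio.ProductIO;
    import org.esa.beam.framework.datamodel.Product;

    public class ConvertProductExample {
        public static void main(String[] args) throws Exception {
            // Read a product in any supported input format ...
            Product product = ProductIO.readProduct(new File("ASA_IMP_1P_sample.N1"));
            // ... and write it out through one of the registered writer plug-ins.
            ProductIO.writeProduct(product, "converted_product.dim", "BEAM-DIMAP");
            product.dispose();
        }
    }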

Abstracted Metadata

A variety of data products can be ingested into a common internal representation. For metadata, this is done using the Abstracted Metadata. The Abstracted Metadata is an extract of information and parameters from the actual metadata of the product. The idea behind this is firstly to list the parameters needed to run tools and algorithms and secondly to modify these in line with the processing applied to the product. In fact, the parameters read from the abstracted metadata can be changed as the result of any processing. According to this concept, the abstracted metadata can be considered a dynamic header. Each Product Reader knows how to read a particular file format and map the metadata to the Abstracted Metadata. For any fields that do not exist in a product, a default dummy value of 99999 is used. The abstracted metadata can be edited and changed after the product has been saved in the internal BEAM-DIMAP format. Therefore, if external information is available, the user can, for instance, modify and update the dummy values in the abstracted metadata.
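For illustration, the abstracted metadata can also be read programmatically. The sketch below assumes that the abstracted metadata is stored as a metadata element named "Abstracted_Metadata" under the product's metadata root and that attributes such as MISSION and PASS exist; the exact element and attribute names may differ between NEST versions and product types:

    import java.io.File;
    import org.esa.beam.framework.dataio.ProductIO;
    import org.esa.beam.framework.datamodel.MetadataElement;
    import org.esa.beam.framework.datamodel.Product;

    public class ReadAbstractedMetadata {
        public static void main(String[] args) throws Exception {
            Product product = ProductIO.readProduct(new File("ASA_IMP_1P_sample.N1"));
            // The abstracted metadata is held in the product's metadata tree.
            MetadataElement absRoot = product.getMetadataRoot().getElement("Abstracted_Metadata");
            if (absRoot != null) {
                // Fields missing from the original product carry the dummy value 99999.
                System.out.println("MISSION = " + absRoot.getAttributeString("MISSION", "99999"));
                System.out.println("PASS    = " + absRoot.getAttributeString("PASS", "99999"));
            }
            product.dispose();
        }
    }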

You may search through the metadata by clicking on the Search Metadata menu item in the SAR Tools Menu. Enter a partial string of a metadata field name and all entries will be shown in a metadata table.

Importing Metadata If you are importing a file with no metadata, for example a bitmap or JPEG with the ImageIO reader, or extra metadata for an ENVI product, you can create an XML file that will be read in as the Abstracted Metadata and be used within the processing. The metadata file should be named either metadata. or filename.xml when importing filename.hdr or filename.jpg. The XML file has the following characteristics:

Only name and value are required for each attribute; the others are optional. Also, not all attributes are required: if any are missing, defaults will be used. Any elements placed in tie-point-grids will be imported as a tie-point grid band and interpolated to the image raster dimensions at run time. The latitude and longitude tie points will be used to create the product geocoding. You can export the metadata from a currently opened product by clicking on Export Metadata in the Product Writers menu.

DAT Application

Display and Analysis Tool Overview

The DAT is the visualisation, analysis and processing application. It has a clear and intuitive user interface allowing new users to get started quickly. The DAT lets you organise your product datasets using Projects and allows you to manage multiple products within a tree view. A Products View manages all open products and lets you examine their raster data and metadata, and a comprehensive pixel information view displays geophysical value readouts. The Tool Windows of the DAT provide access to tools. These windows can be floating, docked or tabbed. All changes you apply to the layout will be saved for the next start of the DAT. Note: The open command is used to open any supported product format; the save command stores data products in the BEAM-DIMAP format. You can also import any data product by using the corresponding product reader in the file menu, which allows you to open a subset of a data product.

A new image view is simply created by double-clicking on a tie-point grid or spectral/geophysical band. You can open as many images as your computer's RAM allows. After you have opened an image view you can inspect the images with the Navigation Window.

Tool Windows

DAT's Tool Windows

Tool windows are used in DAT to display information and properties of the currently selected data product or a component of the currently selected data product. They are also used to manage and edit the properties of the current view, such as pins, ROIs or the colours of the current image view. In contrast to image or metadata views, the DAT has only a single instance of each particular tool window. It can be shown, hidden, docked or floating. Tool windows can also be grouped together. You can also save and reload the current layout of tool windows and tool bars. You will find all available tool windows in the View/Tool Windows menu. Many tool windows also have corresponding tool bar icons.

Docking, Floating and Tab Mode You can drag each tool window by clicking and holding its title bar. The outline of the window is shown while you are dragging around the tool window. The outline shows you where it will be dropped if you release the mouse button. If a tool window is docked to one side, press the pin button in its title bar and it will switch to the tab mode. In tab mode, the tool window collapses and on the side a tab appears representing the window. If you move the mouse pointer over this tab, the tool window will expand. When leaving the tool window area, the window slides automatically away. Note: To switch a tabbed window back into the floating mode, press the pin icon in the title bar in order to dock the window first. If it is docked, you can put it into floating mode by dragging its title bar.

Grouping To group multiple tool windows in one window, you have to move the mouse pointer, while dragging one window, onto the title bar of another tool window while holding down the CTRL key. You will get one window with one tab for each grouped tool window. To remove a tool window from a group, you have to drag its tab out of the grouped window stack.

Layout Management The view menu contains the entry 'Manage Layout'. This menu consists of items to manage the current layout.

● Load User Layout - Load the stored user layout.

● Save as User Layout - Saves the current layout of tool windows.

● Reset to Default Layout - Resets the layout to the default layout of DAT.

Context Menu The context menu is invoked by clicking with the right mouse button on the title bar of a tool window.

The context menu allows you to:

● switch between floating, docked and auto-hide modes.

● disable the dockable functionality of a tool window.

● close a tool window.

List of available Tool Windows

● The Product View

● The Project View

● The Product Library

● The Pixel Info View

● The Pin Manager

● The GCP Manager

● The Colour Manipulation Window

● The

● The Navigation Window

● The Bitmask Overlay Window

● The Statistics Window

Project View

Project View

The Project View is a convenient tool for managing your data products. A Project will help organize your data by storing all related work in one folder. To create a project, select New Project from the File menu or Toolbar. A dialog will prompt you for a project folder location and project file name. By default a Project will be created with folders for ProductSets, Graphs, External Product Links, Imported Products and Processed Products.

Whenever you open a new product, a link to that product will show up under External Product Links. When processing data, the output folder will default to your project's Processed Products folder.

Within ProductSets, Graphs, Imported Products, Processed Products and any other user created folder, the project folders mirror the file structure of the physical hard disk. Therefore any change you make to the physical project folders on disk will be reflected in your project.

Right click on a folder to create a new sub-folder, rename or remove a folder. Right click on a file to open or remove the file.

External Product Links A Project can store links to other data related to your work including data projects, orbit files, auxiliary data, and DEMs. Files can be dragged and dropped from a Project into an Operator dialog for processing.

Importing Products Products can be imported from the External Product Links or by using the Product Library tool. Imported products are converted to DIMAP format and saved within your Project folder.

ProductSets

A Product Set is a list of products you would like to group to apply the same processing to them in a graph.

To create a product set, right click on the ProductSet folder and create a new ProductSet. Drag and drop products from a project onto the table in the ProductSet dialog. From the GraphBuilder use the ProductSet by adding a ProductSetReader and dragging the ProductSet into its dialog. The dialog table should be populated with the list of products in the ProductSet.

You may drag and drop a ProductSet into the Batch Processing tool to apply a graph to each of the products in the ProductSet.

DAT's Products View

Products View All products when opened are added to the Products View's open product list. The product list is a tree view with up to five root nodes for each open product:

● Bands - contains all measured spectral/geophysical and quality raster datasets of a product (mandatory)

● Tie-point grids - contains all tie-point grid raster datasets (optional)

● Flag Codings - contains flag coding metadata for quality flags datasets (optional)

● Index Codings - contains index coding metadata for classification bands (optional)

● Metadata - contains additional metadata (optional)

You can quickly open an image view for a band or tie-point grid by double-clicking on an item of the expanded product root nodes. A Metadata View is opened if you click on a metadata node.

The information concerning each pixel can be analysed interactively in the pixel information view.

Pixel Info View The Pixel View displays pixel information while you move the mouse over the band image view.

You can un-dock each section within the Pixel View using the floating button, and dock it back by using the docking button in the header bar. The information displayed belongs to the current image pixel beneath the mouse pointer:

● Geo-location: Displays the image position, the geographic-location and also the map co-ordinates if the current product is map-projected.

● Tie Point Grids: Shows the values of the tie-point grids.

● Time Info: The time information associated with the current line.

● Bands: The value of the pixel beneath the mouse pointer.

● Flags: Displays the state of the flags at the current pixel.

If a pin is selected in the current image view, you can select Snap to selected pin in order to "freeze" the pixel information to the position of the currently selected pin. Note: Flag values are only displayed if a corresponding flag dataset has been loaded. Use the right mouse button over a flag dataset in the product view in order to load a flag dataset's sample values. Note: In the preferences dialog you can deselect the option that only pixel values of displayed or loaded bands are shown.

Image View

The image view displays the sample values of raster datasets such as bands and tie-point grids as an image.

Slider Bar By default the horizontal and vertical slider bars are disabled for the image view. You can activate the sliders again in the preferences dialog in the section Layer Properties. If the slider bars are visible, there is also a small button in the lower right corner of the image view which zooms to the full image bounds when you click on it.

Navigation Control The navigation control, which is located in the upper left corner of the image view, can be used to pan, zoom and rotate the image. The control is dimmed while the mouse pointer is not located over it and becomes visible when the mouse gets near the control. If you rotate the image, you can use the CTRL key to change the stepping of the rotation angle from continuous values to discrete steps of a quarter of 90°. The left image shows the dimmed version of the control; the right one shows the active control with a rotation. You can deactivate the navigation control in the preferences dialog in the section Layer Properties.

Multiple Windows of the same band It is possible to open more than one window of the same band in order to have several views on the band. This can be done by right-clicking on the band name in the Product Scene View. The windows will be numbered according to their order of appearance. Context Menu When you right-click the mouse over the image view, a popup menu comes up:

Entries:

● Copy Pixel Info to Clipboard - copies all sample values at the current pixel position, their names and physical units (which you also can see in the pixel view) to the clipboard

● Show ROI Overlay - toggles the visibility state of the ROI overlay

● Show Graticule Overlay - toggles the visibility state of the graticule overlay

● Show Pin Overlay - toggles the visibility state of the pin overlay

● Create Subset from View - opens a new product subset dialog, with a predefined spatial subset scene from the current image view

Note: The Copy Pixel Info to Clipboard command copies information as tabulator-separated text into the clipboard and may therefore be pasted directly into a spreadsheet application (e.g. MS Excel).

RGB Image Support On creating an RGB image view you are prompted to select or create an RGB-Profile. You also have the possibility to create new profiles and change existing ones in the user preferences. Note: The band to be used for each of the channels in the current RGB image view can be changed at any time in the Contrast Stretch / Color Palette window.

The Navigation Window

The Navigation Window is used to move the viewport of an image view, to zoom in and out of it and to rotate the image in steps of 5 degrees using the spinner control below the image preview. The current viewport is depicted by a semi-transparent rectangle which can be dragged in order to move the viewport to another location. It also provides a slider used to zoom in and out of the view:

The text box at the left side of the slider can be used to adjust the zoom factor manually. You can enter a decimal value, which sets the zoom factor of the view to this value, or you can enter the zoom factor in the same format as it is displayed. The Navigation window additionally provides the following features via its tool buttons:

Zoom In A click on the Zoom-In-Tool will increase the magnification of the image in discrete steps, centered on the image view. The result will be displayed instantly and the magnification value will be refreshed.

Zoom Out A click on the Zoom-Out-Tool will decrease the magnification of the image in discrete steps, centered on the image view. The result will be displayed instantly and the magnification value will be refreshed.

Zoom Actual Pixel Sets the zoom factor to the default value so that an image pixel has the same size as a display pixel.

Zoom All A click on the Zoom-All-Tool will adjust the magnification so that the whole image fits into the image view. The result will be displayed instantly and the magnification value in the editor will be refreshed. The same effect can be achieved by clicking on the icon in the lower right corner of the image view.

Synchronise The following image shows DAT with six image views that have been arranged with the "Tile Evenly" command in the Window Menu. When the Synchronise-Button is pressed, all available tools of the Navigation Window operate on all open image views. As a result,

● All open image views show the same section of the image,

● Dragging the highlighted area around results in simultaneous scrolling of all open image views,

● Moving the slider or applying any of the zooming tools -including the value field- will be reflected instantly in all image view windows.

The Colour Manipulation Window

Overview

If you are opening an image view of a data product's band or tie-point grid, DAT either loads image settings from the product itself (BEAM-DIMAP format only) or uses default colour settings. The colour manipulation window is used to modify the colours used for the image. Depending on the type of the source data used for the images, the colour manipulation window offers four different editors:

A.1: Editor for images of a single spectral/geophysical band in Sliders mode

A.2: Editor for images of a single spectral/geophysical band in Table mode

B: Editor for images of a single index-coded band

C: Editor for images using separate R,G,B channels

To open the colour manipulation window, use the corresponding icon in the main toolbar or select View / Tool Windows / Colour Manipulation from the main menu. Changes in the colour manipulation window will become effective only if the Apply button is pressed.

A. Editor for images of a single, spectral/geophysical band

Images of a single, spectral/geophysical band use a colour palette to assign a colour to a sample value in the source band. By default, the editor is in a mode where sliders are used to modify the colour palette and thereby change the assignment of sample values to colours: Colour Palette Modifications As the white diagonal line above the histogram indicates, the colour palette will be linearly applied to the samples of the source band of the current image view. The label below the slider indicates the sample value assigned to the position of the slider in the histogram. By moving a slider with the mouse you can easily change its sample value. You may also double-click the label; then a text field appears where you can enter a sample value. When you move a slider, the colours in the palette will change accordingly. You can also click between two sliders in order to move the nearest slider of both under the mouse cursor. Slider colours are changed by clicking a slider. A popup window appears, where you can select the new colour. If you select None, a fully transparent slider results. In the More Options panels you can adjust the No-Data Colour and apply a Histogram Matching to enhance the contrast of the final image. If you right-click on a slider, a context menu will pop up. The available actions are Add slider, Remove slider, Center slider sample and Center slider colour. Some actions may be disabled when the action cannot be performed.

Table Mode By changing the Editor option from Sliders to Table, the sample value to colour assignment can be done in a table: Here you can enter the colour and sample values directly by clicking into a table cell.

B. Editor for images of a single, index-coded band

Images of a single, index-coded band, look-up discrete colours from a fixed-size table:

Labels and colours are simply changed by clicking into the corresponding table cell. In the More Options panels you can adjust the No-Data Colour.

Editor for images using separate R,G,B channels

Images using separate R,G,B channels obtain their colours from the samples of three arbitrary bands. In the editor, users can switch between the channels in order to edit the contrast stretch range and gamma value of each channel separately. In this mode the sliders are used for contrast stretching in each of the R,G,B channels. In the More Options panels, you can adjust the No-Data Colour and Histogram Matching for the final image. The Source Band and Gamma options apply to each channel. The gamma value is used to adapt the transfer function which quantises the band's sample values to colour values. A gamma value of 0.7 for the blue channel enhances most RGB images.

Common Functions

No-Data Colour In the More Options panel of all editors you can adjust the No-Data Colour. This colour will be used for no-data pixels in the source band(s). If you select None, no-data samples will be transparent in the image.

Histogram Matching It is sometimes desirable to transform an image so that its histogram matches that of a specified functional form. It is possible to apply an equalized or normalized histogram matching to images which can often improve image quality.

Multiple Assignment of the current Settings

A click on the icon opens a dialog where you can select the bands to which you can assign the current colour palette. If the destination band has a similar pixel value range, the slider positions are exactly preserved; otherwise, they are proportionally distributed over the valid range of the destination band.

Import and Export of the Colour Palette

Click the import icon to import colour palette definition files and the export icon to export the current colour manipulation settings. The colour palette information used for the current image can also be exported into an image file. Click the context menu item Export Color Legend over an open image view in order to export the colour legend. The colour palette can also be exported as a Color Palette Table. Choose Color Palette from the File menu to export the table as a *.csv or *.txt file.

Slider Auto-Adjustment

A click on the icon adjusts the sliders to cover 95% of all pixels in the band.

A click on the icon adjusts the sliders to cover 100% (the full range) of all pixels in the band.

A click on the icon distributes the inner sliders evenly between the first and the last slider.

Zooming into the histogram

Click on the icon to zoom in vertically or on the icon to zoom into the histogram horizontally.

Zooming out of the histogram

Click on the icon to zoom out vertically or on the icon to zoom out of the histogram horizontally.

Reset

The reset icon is used to revert the window to use default values.

Context help

The help icon opens the Help for the current context.

The Layer Manager Window

Overview

The layer manager is used to control what content is shown in the current image view and how it is displayed. You can open the layer manager tool window from the View / Tool Windows / Layer Manager menu or by clicking the icon in the tool bar. A layer can be shown or hidden by using the check box left of the layer name. Any number of layers may be open at one time. Each layer has a rendering order and, much like in a painter's algorithm, layers are drawn on top of each other. The user will have control over the layer ordering and the translucency of the images. Finer visibility control can be achieved by selecting a layer and using the Transparency slider. Visibility changes will take effect directly in the image view.

Layer manager tool window

Functions

Add layer: Opens the Add Layer assistant window which lets you add a new layer to the current view.

Remove layer: Removes the selected layer. Note that not all layers can be removed.

Edit layer: Opens the Layer Editor tool window which lets you alter the display properties of the selected layer.

Move up: Moves the selected layer up so that it is displayed on top of all layers following it in the list.

Move down: Moves the selected layer down so that it is displayed underneath of all overlying layers in the list.

Move left: Moves the layer into the parent group (if any).

Move right: Moves the layer into a child group (if any).

Layer Editors

Layer editors let you alter layer properties in order to control the display of layer data. Different layer types have different layer editors. The following screenshot shows the layer editor of the pin layer:

Layer editor tool window

Changes to the properties in this window are directly propagated to the layer selected in the layer manager and display updates will occur immediately in the current image view.

Adding Layers

The Add Layer assistant window shows a number of layer sources. Which layer sources are present in this list depends on the type of the current image view. In addition to the raster data products, the DAT will be able to load and display vector data as layers. This will prove helpful for overlaying shorelines, political boundaries, navigational charts, etc., and using this data to mask land or water areas. The following standard layers are always usable:

● ESRI Shapefile - Overlays an ESRI shapefile read from a file.

● Image from File - Overlays any image read from a file.

● Image of Band / Tie-Point Grid - Overlays images of bands or tie-point grids of other products currently opened.

● Layer Group - Creates a group in which you can freely organise other layers and groups.

Layer sources which are restricted to special image views are:

● Image from Web Map Service (WMS) - Downloads a thematic image map layer from a dedicated internet WMS. Applicable to map-projected (geo-referenced) views only. A good list of public services is provided at http://www.skylab-mobilesystems.com/en/wms_serverlist.html.

● Wind Speed from MERIS ECMWF annotation data - Displays wind speed vectors. Applicable to MERIS L1b/L2 band views only.

The 'Add Layer' dialog

Select a layer source and press Next. Depending on the type of layer selected, the assistant window will guide you through one or more option pages. Once the Finish button is enabled, you can add the new layer to the current image view.

Layer manager tool view after adding an ESRI shapefile layer

The World Map

When working with satellite data, it is often not obvious at first sight which region of the world is covered by the data product. To facilitate finding the location of the product on the globe, DAT has a built-in world map that shows the projection of the product boundaries on a map of the world. To invoke the World Map, simply click on the globe icon in the main toolbar or select World Map from the 'View' menu. This will open a window similar to the one below.

The World Map is a flat Earth representation of the 3D WorldWind View.

To navigate around the World Map, left click on the map and drag the mouse to pan. Use the middle mouse wheel to zoom in and out.

Place names, NASA's Blue Marble imagery and aerial optical data will automatically be downloaded as long as your computer is connected to the internet.

The World Map is intended to show you a quick reference of where your product is on the globe. To overlay the actual data and view it in 3D, use the 3D WorldWind View.

The WorldWind View

The WorldWind View allows you to view the world in 3D, automatically download and view imagery and elevation data via Web Map Services (WMS) and overlay your SAR data. The WorldWind View has been created using NASA's WorldWind Java SDK. A 3D video card with updated drivers is necessary. World Wind has been tested on Nvidia, ATI/AMD, and Intel platforms using Windows and Ubuntu Linux. Note: Update your video card drivers (ATI/AMD, Nvidia, Intel).

Navigate using the left mouse button to pan by clicking and dragging and the mouse wheel to zoom in and out. Click and drag the right mouse button to tilt and rotate the camera angle.

In the layer menu, you can select the layers to be shown, including:

● Place Names

● Opened Products

● Open Street Map

● MS Virtual Earth

● BlueMarble

Product Library

The Product Library tool optimizes the identification of data products in a database for fast retrieval of the metadata of locally stored products. Search results are displayed in a table listing product name, path, mission, product type, acquisition date, pass, pixel spacing, etc. without actually opening the original products. The footprint of each image is outlined on the world map on top of Blue Marble imagery with a place names vector layer. Multilooked quicklooks of the images are also generated and stored in the database for quick previewing. The Product Library can optionally add new metadata to the database automatically whenever a product is manually opened. The user may also define a list of repository folders that will be scanned recursively for new or modified products. The product readers of the Toolbox are able to abstract the metadata from each product into the Generic Product Model of the Toolbox. Thereby, the Product Library and all processing tools of the Toolbox are able to work with the metadata in this common form without requiring the user to manually input any metadata. Products may be searched in terms of mission, product type, beam mode, ground location, date and time of acquisition, processing history, previously defined AOIs, and suggested image pairing. Products may also be searched by graphically drawing an area of interest on the world map and querying the database for products that cover the AOI. The user may then select from the resulting table which products to open, add to a project, or batch process directly from the Product Library.

Importing into a Project After having selected the products you wish to import, press the Import to Project button to convert the products into DIMAP format and add them to the currently opened project. If a project is not currently opened, you will be prompted to create a new project. If you would like to simply open the products without converting them into DIMAP format, press the Open Selected button to add them to the DAT Product View list.

Batch Processing

To Batch Process a list of selected products press the Batch Process button. This will open the Batch Processing dialog and add your products to the input list. If you right click on the Batch Processing button, a popup menu will appear with all the graphs from the User Graphs menu. You may select one and the Batch Processing dialog will then default to the selected graph.

The Preferences Dialog

On the left side of the Preferences dialog window you can see a thematic tree where you can select the context of the settings you want to change. In the following example, a screen shot is shown where the settings for the user interface behavior can be edited.

UI Behavior This preferences page contains general user interface behavior and memory management settings.

● Show navigation window when image views are opened If an image view is opened the navigation window will also be opened.

● Open image view for new (virtual) bands Currently, this option only affects the band arithmetic tool. If this option is selected, a new image view is automatically created and opened for any new bands created in the band arithmetic tool.

● Show only pixel values of loaded or displayed bands If selected, the pixel info view will only show pixel values of bands which have their raster data completely loaded into memory or which are currently displayed as an image. If this option is not selected, DAT displays all bands contained in a product. For those bands which are not loaded, DAT reads the sample values directly from the product file. Depending on your computer's I/O performance and the location of the product file (e.g., CD-ROM, hard drive, network), this can have an impact on DAT's runtime performance.

● On product open, warn if product size exceeds If a product being opened has a file size greater than the size put in here, then DAT asks you whether or not you want to create a spectral or spatial subset of this product in order to reduce the required memory.

● On low memory, warn if free RAM falls below Here you can enter a value in megabytes, which DAT uses to decide when to pop up a low-memory message box. Set this value to zero if you don't want to get any warnings.

● On image open, load raster data only if size is below The value entered here is the number of megabytes a band's raster can have so that it is completely loaded into memory before the image is displayed. If raster data is completely loaded into memory, DAT can perform many computations much faster. If you set this value to zero, DAT will never load and store raster data in memory for image creation and will reload missing raster data from the product file any time it needs it.

● Un-load raster data when closing images In order to display images, DAT holds a raster data buffer which contains the geophysical measurement values for the tie point grid or band being displayed. Depending on the product size, these buffers require huge amounts of memory. After these buffers have been un-loaded and an image view is opened again for the same band, DAT needs to reallocate the buffer memory and to read the data from disk again.

● Check for new version on DAT start Checks on application start, if a new version of DAT is available.

● Show all suppressed tips and messages again Some dialogs of DAT can be disabled by checking the option "Don't show this message again". You can select this option to enable all messages and tip dialog boxes again.

UI Appearance

● UI Font: Sets DAT user interface font name and size.

● UI Look and Feel: Lets you select the appearance (Look and Feel) of the user interface of DAT.

Product Settings

● Consider products as spatially compatible, if their geo-locations differ less than This value affects the product compatibility check performed by the band arithmetic tool. The check is performed in order to decide which bands from which products can be used as data source in a band arithmetic expression. The value must be given in degrees. Use small values, e.g., 0.0001 degrees, to perform a high accuracy check. This will ensure that products completely overlap each other in space.

Geo-location Display DAT uses an image coordinate system whose origin (x=0, y=0) is the upper left corner of the upper left pixel. Image X-coordinates increase to the right, Y-values increase downwards. The center of the pixel in the origin is then located at (x=0.5, y=0.5).

● Relative pixel-X/Y offset Defines a relative offset used to display image coordinates and associated geo-location information. Note that the offset is exclusively used for coordinate value display; it does not affect the geo-coding information associated with a data product.

● Show floating-point image coordinates Select this option to display image coordinates as floating point values, e.g., in the geo-location section of the pixel info view. If not selected, image coordinates are displayed as integer values.

Data Input/Output

● Save product header (MPH, SPH) This option allows you to include/exclude the main product header (MPH) and specific product header (SPH) of ENVISAT products in/from the file being saved. If this option is selected, DAT stores the MPH and SPH as metadata in the BEAM-DIMAP (XML) header. It is recommended to always include this metadata.

● Save product history (History) This option switches the saving of the processing history of the product on or off. It is recommended to always include this metadata to be able to track the processing stages and the original product.

● Save product annotation data sets (ADS) This option allows you to include/exclude all annotation data sets (ADS) of ENVISAT products in/from the file being saved. If this option is selected, DAT stores ADS as metadata in the BEAM-DIMAP (XML) header. For ENVISAT products, location and annotation data sets (provided at tie-points) need not be stored as metadata, since DAT automatically converts them into tie point grids, which are saved separately.

● Use incremental Save If this option is selected (recommended), DAT will only save the modifications applied to a product, such as bands modified or created with the band arithmetic tool or the removal of bands. Otherwise, a save operation will always write the entire product.

Layer Properties

● Use anti-aliasing for rendering text and vector graphics If this option is selected, DAT uses anti-aliasing to smooth vector graphics within image views.

Image Display

● Interpolation method Here you can choose how neighbouring pixel colours are interpolated. Choose

❍ Nearest Neighbour to prevent colours from being mixed,

❍ Bi-linear or Bi-Cubic to smooth the image.

Note: choosing an option other than Nearest Neighbour may slow down image handling. On Mac OS X, however, Bi-linear is the system default and therefore the fastest option.

● Tile cache capacity The amount of memory the image tile cache can allocate. Allocating more memory can improve the speed of the image processing.

● Background colour Choose a background colour for the image displayed in the image view.

● Show image border Here you can specify if an image border should be visible in the image view. If yes, you can also set the border size and colour.

● Show pixel border in magnified views Define whether a border should be drawn around a pixel under magnification when the mouse cursor points at it.

No-data overlay This preferences page provides options to customize DAT's no-data overlay.

● Color: Sets the fill colour of a ROI.

● Transparency: Sets the transparency of a filled ROI.

Graticule Overlay This preferences page provides options to customize DAT's graticule overlay.

● Grid behaviour:

❍ Compute latitude and longitude steps The step size of the grid lines will be computed automatically.

❍ Average grid size in pixels Defines the size in pixels of the grid cells.

❍ Latitude step (dec. degree): Sets the grid latitude step in decimal degrees.

❍ Longitude step (dec. degree): Sets the grid longitude step in decimal degrees.

● Line appearance:

❍ Line colour: Sets the colour of the grid lines.

❍ Line width: Sets the width of the grid lines.

❍ line transparency: Sets the transparency of the grid lines.

● Text appearance:

❍ Show text labels: Sets the visibility of the text labels.

❍ Text foreground colour: Sets the colour of the graticule text (latitude and longitude values).

❍ Text background colour: Sets the background colour of the graticule text (latitude and longitude values).

❍ Text background transparency: Sets the transparency of the background colour of the graticule text.

Pin Overlay This preferences page provides options to customize DAT's pin overlay.

● Show text labels: Shows the labels of the displayed pins.

● Text foreground layer: Sets the foreground colour of the labels.

● Text background layer: Sets the background colour for the labels.

● Text background transparency: Sets the transparency of the background for the labels.

GCP Overlay This preferences page provides options to customize DAT's GCP overlay.

● Show text labels: Shows the labels of the displayed GCPs.

● Text foreground layer: Sets the foreground colour of the labels.

● Text background layer: Sets the background colour for the labels.

● Text background transparency: Sets the transparency of the background for the labels.

Shape Figure Overlay This preferences page provides options to customize DAT's shape overlay.

● Outline appearance:

❍ Outline shape If selected, the outline of a ROI is also drawn.

❍ Shape outline colour: Sets the colour of a shape's outline.

❍ Shape outline transparency: Sets the transparency of a shape's outline. Low values produce high coverage.

❍ Shape outline width: Sets the width of a ROI's outline.

● Fill appearance:

❍ Fill shape If selected, the shape area is also drawn.

❍ Shape fill colour: Sets the colour of a shape's area.

❍ Shape fill transparency: Sets the transparency of a shape's area. Low values produce high coverage.

ROI Overlay This preferences page provides options to customize DAT's ROI overlay.

● Color: Sets the fill colour of a ROI.

● Transparency: Sets the transparency of a filled ROI.

RGB Profiles This preference page is used to edit the RGB profiles used for RGB image creation from various product types. An RGB-Profile defines the arithmetical band expressions to be used for the red, green and blue components of an RGB image. For detailed information about RGB-Profiles please refer to the chapter RGB-Image Profile located at DAT/Tools/Imaging Tools. Profile Lets you select one of the currently stored RGB-Profiles to use for the creation of a new image view.

● Use the open icon to open a stored RGB-Profile file.

● Use the save icon to save the currently displayed RGB-Profile.

● Use the delete icon to delete the currently displayed RGB-Profile.

RGB Channels

● Red - Defines the arithmetical expression for the red channel.

● Green - Defines the arithmetical expression for the green channel.

● Blue - Defines the arithmetical expression for the blue channel.

Use the edit icon to edit the expression for a specific channel using the expression editor. Note: The arithmetical expressions are not validated by DAT; be careful to use the correct syntax. Please refer to the Arithmetic Expression Editor documentation for the syntax and capabilities of expressions.

Logging This preferences page provides options to customize DAT's logging behavior.

● Enable logging If this option is selected, DAT writes a log file which can be used to reconstruct user interactions and to trace system failures.

● Log filename prefix: Here you can enter a prefix for the log file name. The disk file name will be assembled using this prefix plus a log file version number and an identification number. Log files are always written in the folder log located in the NEST user directory. Under Windows, this folder would be C:\Documents and Settings\user\.nest\. Under Linux, this folder would be /home/user/.nest/.

● Echo log output (effective only with console) If DAT is started with a text console window using the %NEST_HOME%/DAT.bat (Windows) or $NEST_HOME/DAT.sh (Linux) scripts, the log file entries are also printed out to the console window.

● Log extra debugging information Sets DAT into the debugging mode which can be helpful to find software bugs.

The Settings Dialog

Settings

The Settings Dialog allows you to customize the default data path directories. From this window it is possible to choose the root directory in which the Toolbox looks for the default Digital Elevation Models and the supported orbit files. String values specified here can use variables pointing to other entries or environment variables. For example, the root demPath can hold the path to all your DEM files and the aceDEMDataPath could be ${demPATH}\ACE to specify the location of the ACE DEM files. Changes made via the Settings dialog will be saved in a settings.xml file.

Wave Mode Polar View

ASAR Wave Mode Products

The ENVISAT ASAR sensor in Wave Mode acquires small imagettes (5 km by 5 km to 10 km by 5 km size, high-resolution complex images of ocean scenes), allowing data acquisition to occur periodically (every 100km) around a full orbit. This constitutes the level 0 product, which is systematically produced for all data acquired within this mode. These are processed to derive the spectra of the surface backscatter, and consequently the wavelength and the direction of the ocean waves.

ENVISAT ASAR has two level 1 products and one level 2 product: ASA_WVI_1P: The product can contain up to 400 SLC imagettes, each 10 km x 10 km in size, which are acquired every 100 km. The imagettes are 1 look in azimuth and 1 look in range. The WVI product also contains the Cross Spectra of the imagettes.

ASA_WVS_1P: This product contains the Cross Spectra of the imagettes.

ASA_WVW_2P: This product contains the Wave Spectra.

The Cross Spectra is a polar grid of complex data with 24 bins in wavelength and 36 bins in direction (each 10 degrees wide).

Polar View

The Polar View is used for displaying Cross Spectra and Wave Spectra from ENVISAT ASAR Wave mode products.

The Polar View shows one record at a time. You may use the buttons and slider at the bottom of the view to change the currently viewed record. The Animate button allows the view to cycle through records automatically.

Readouts for Peak Direction and Wavelength, Min/Max Spectrum and Wind Speed and Direction can be viewed on the right hand side of the polar plot.

Cursor readouts for Wavelength, Direction and Value can be viewed on the left hand side of the polar plot as you move your mouse over each bin.

For more information on ASAR Wave mode products: ASAR Product Handbook – Chapter 2 - ASAR Products and Algorithms http://envisat.esa.int/handbooks/asar/CNTR2-6.htm#eph.asar.prodalg.levb

Import SENTINEL-1

Import SENTINEL-1 Product

This option allows you to import SENTINEL-1 products into DAT.

The SENTINEL-1 mission is the European Radar Observatory for the Copernicus joint initiative of the European Commission (EC) and the European Space Agency (ESA). The mission is composed of a constellation of two satellites, SENTINEL-1A and SENTINEL-1B, sharing the same orbital plane with a 180° orbital phasing difference. The mission provides an independent operational capability for continuous radar mapping of the Earth with enhanced revisit frequency, coverage, timeliness and reliability for operational services and applications requiring long time series. SENTINEL-1 is designed to work in a pre-programmed, conflict-free operation mode, imaging all global landmasses, coastal zones and shipping routes at high resolution and covering the global ocean with vignettes. This ensures the reliability of service required by operational services and a consistent long term data archive built for applications based on long time series.

SENTINEL-1 carries a single C-band synthetic aperture radar instrument operating at a centre frequency of 5.405 GHz. It includes an active phased array antenna providing fast scanning in elevation and azimuth. The C-SAR instrument supports operation in dual polarisation (HH+HV, VV+VH) implemented through one transmit chain (switchable to H or V) and two parallel receive chains for H and V polarisation.

SENTINEL-1 product files have the extension .safe. SENTINEL-1 operates in four exclusive acquisition modes:

● Stripmap (SM)

● Interferometric Wide Swath (IW)

● Extra-Wide Swath (EW)

● Wave Mode (WV).

Supported SENTINEL-1 Products

Product Type Description

S1A_S1_SLC S1 Stripmap Single Look Complex (SLC) with incidence angles S1 to S6

S1A_S1_GRD S1 Stripmap Ground Range Detected (GRD) with incidence angles S1 to S6 in medium, high and full resolutions

S1A_IW_SLC S1 Interferometric Wide SLC

S1A_IW_GRD S1 Interferometric Wide GRD in medium, high and full resolutions

S1A_EW_SLC S1 Extra Wide SLC

S1A_EW_GRD S1 Extra Wide GRD in medium, high and full resolutions

S1A_WV_OCN S1 Wave mode Ocean level-2

Import ENVISAT

Import ENVISAT (ASAR, MERIS, AATSR) Product

This option allows you to import ENVISAT products into DAT. The supported ENVISAT product types are:

● ASAR products

● MERIS L1b/L2 RR/FR/FRG/FSG products

● AATSR TOA L1b and NR L2

ENVISAT product files have the extension .N1.

Supported ENVISAT ASAR Products

Product Type Description

ASA_APG_1P ASAR Alternating Polarization Ellipsoid Geocoded Image

ASA_APM_1P ASAR Alternating Polarization Medium Resolution Image product

ASA_APP_1P ASAR Alternating Polarization Mode Precision Image

ASA_APS_1P ASAR Alternating Polarization Mode Single Look Complex

ASA_AP__BP ASAR Alternating Polarization Browse Product

ASA_GM1_1P ASAR Global Monitoring Mode Image

ASA_GM__BP ASAR Global Monitoring Mode Browse Product

ASA_IMG_1P ASAR Image Mode Ellipsoid Geocoded Image

ASA_IMM_1P ASAR Image Mode Medium Resolution Image

ASA_IMP_1P ASAR Image Mode Precision Image

ASA_IMS_1P ASAR Image Mode Single Look Complex

ASA_IM__BP ASAR Image Mode Browse Product

ASA_WSS_1P ASAR Wide Swath Single Look Complex

ASA_WSM_1P ASAR Wide Swath Medium Resolution Image

ASA_WS__BP ASAR Wide Swath Mode Browse Image

ASA_WVI_1P ASAR Wave Mode SLC Imagettes

ASA_WVS_1P ASAR Wave Mode Cross Spectra

ASA_WVW_2P ASAR Wave Mode Wave Spectra

ASA_XCA_AX ASAR External Calibration File

Supported ENVISAT MERIS Products

Product Type Description

MER_RR__1P MERIS Reduced Resolution Geolocated and Calibrated TOA Radiance Product

MER_RR__1P MERIS Reduced Resolution Geolocated and Calibrated TOA Radiance Product Valid until 2002-12

MER_FR__1P MERIS Full Resolution Geolocated and Calibrated TOA Radiance Product

MER_FR__1P MERIS Full Resolution Geolocated and Calibrated TOA Radiance Product Valid until 2002-12

MER_FRG__1P MERIS Full Resolution Geo/Ortho-corrected TOA Radiance

MER_FSG__1P MERIS Full Resolution Full Swath Geo/Ortho-corrected TOA Radiance

MER_RR__2P MERIS Reduced Resolution Geophysical Product

MER_RR__2P MERIS Reduced Resolution Geophysical Product Valid until 2004-07

MER_FR__2P MERIS Full Resolution Geophysical Product

MER_FR__2P MERIS Full Resolution Geophysical Product Valid until 2004-07

MER_LRC_2P MERIS Extracted Cloud Thickness and Water Vapour Product for Meteo Users

MER_RRC_2P MERIS Extracted Cloud Thickness and Water Vapour Product

MER_RRV_2P MERIS Extracted Vegetation Indices Product

Supported ENVISAT AATSR Products

Product Type Description

ATS_AR__2P AATSR averaged geophysical product

ATS_MET_2P AATSR Spatially Averaged Sea Surface Temperature for Meteo Users

ATS_NR__2P AATSR geophysical product (full resolution)

ATS_TOA_1P AATSR Gridded brightness temperature and reflectance

Import ERS1/2 SAR or ATSR

Import ERS Product in ENVISAT format

This option allows you to import ERS1/2 products stored in the ENVISAT format into DAT. The supported ERS1/2 product types are:

● SAR_IMG_1P, SAR_IMM_1P, SAR_IMP_1P, SAR_IMS_1P, SAR_IM__BP products

● ATSR and ATSR-2 TOA products

ERS products in ENVISAT format have the extension .E1 for ERS-1 and .E2 for ERS-2.

Import ERS1/2 SAR in CEOS Format

Import ERS Product

This option allows you to import ERS1/2 products stored in the CEOS format, produced by the PGS or VMP software, into DAT. The supported ERS1/2 product types are:

● GEC Geocoded products

● PRI Precision Image products

● SLC Single Look Complex products

Import JERS SAR

Import JERS Product

This option allows you to import JERS products stored in the CEOS format into DAT. The supported JERS product types are:

● GEC Geocoded products

● PRI Precision Image products

● SLC Single Look Complex products

Import RADARSAT-1 SAR

Import RADARSAT-1 Product

This option allows you to import RADARSAT-1 products stored in the CEOS format into DAT. The supported RADARSAT-1 product types are:

● SLC,

● SGF,

● SGX,

● SSG,

● SCW,

● SCN

Import RADARSAT-2 SAR

Import RADARSAT-2 Product

This option allows you to import RADARSAT-2 SLC, SGF, SGX, SSG and SPG products into DAT for all modes.

Import TerraSarX SAR

Import TerraSarX Product

This option allows you to import TerraSarX products into DAT. The supported TerraSarX product types are:

● MGD Multi Look Ground Range, Detected representation

● GEC Geocoded Ellipsoid Corrected, Detected representation

● EEC Enhanced Ellipsoid Corrected, Detected representation

● SSC Single Look Slant Range Complex representation

Import Cosmo-Skymed

Import Cosmo-Skymed Product

This option allows you to import Cosmo-Skymed products into DAT. The Cosmo-Skymed product reader supports SCS, DGM and GEC products for the following modes:

● ScanSAR Huge Region (DGM, GEC only)

● ScanSAR Wide Region

● Spotlight Enhanced

● Stripmap Himage

● Stripmap PingPong

Currently the SCS ScanSAR Huge Region is not supported.

Import ENVI

Import ENVI Product

This option allows you to import ENVI products into DAT. Abstracted Metadata can be imported by creating a metadata.xml file in the same folder as the ENVI product.

Import PolsarPro

Import PolsarPro Product

This option allows you to import PolsarPro products stored in the ENVI format into DAT. If you select the ENVI .hdr file, only that one band will be imported into a product. If you select the folder, all bands within the folder will be imported as one product. Metadata from PolsarPro will be imported if a metadata.xml file or a PolSARPro_NEST_metadata.xml file exists.

Replacing Metadata

You may also want to replace the metadata in the newly imported PolsarPro output with the metadata from the original product. To do so, both products must have the same dimensions and both must be currently opened in DAT. Select the PolsarPro product and, from the Utilities -> Metadata menu, select Replace Metadata. Then, in the dialog that pops up, select the original product that the metadata will come from. This will replace the PolsarPro product's empty metadata with that of the original product. This can also be done from a graph using the ReplaceMetadata operator. The operator takes two products as input. The first product connected should be the original product that you want the metadata to come from.

Import ALOS PALSAR

Import ALOS PALSAR CEOS

The CEOS reader enables DAT to import CEOS formatted ALOS PALSAR data products. Levels 1.1 and 1.5 of FBS, FBD, PLR and WB1 products are supported. Depending on where they have been generated, some ALOS Level 1.1 products do not have geolocation information available. Some products contain a workreport.txt file from which the software is able to produce a proper tie-point geocoding. Level 1.5 products cannot be Terrain Corrected by the software because of missing SRGR coefficients in the product. A brief description of the sensor characteristics can be found at http://www.eorc.jaxa.jp/ALOS/en/about/palsar.htm

Import AVNIR-2

Import AVNIR-2

The AVNIR-2 reader enables DAT to import CEOS formatted AVNIR-2 Level-1 data products. Note: The AVNIR-2 reader is implemented in line with the "ALOS AVNIR-2 Level 1 Product Format Descriptions Rev.G" (http://www.eorc.jaxa.jp/ALOS/doc/format.htm) and is capable of reading data products of Level 1A, 1B1 and 1B2. A brief description about the sensor characteristics can be found at http://www.eorc.jaxa.jp/ALOS/about/avnir2.htm

Import PRISM

Import PRISM

The PRISM reader enables DAT to import CEOS formatted PRISM Level-1 data products. Note: The PRISM reader is implemented in line with the "ALOS PRISM Level 1 Product Format Descriptions Rev.G" (http://www.eorc.jaxa.jp/ALOS/doc/format.htm) and is capable of reading data products of Level 1A, 1B1 and 1B2. A brief description about the sensor characteristics can be found at http://www.eorc.jaxa.jp/ALOS/about/prism.htm

Import GeoTIFF

Import GeoTIFF

This command allows you to import the data of a GeoTIFF file. GeoTIFF is an extension to the TIFF 6.0 specification. These image files contain additional information about their georeference and their spatial resolution. A wide range of projected coordinate systems is supported, including UTM, US State Plane and National Grids. For further information have a look at the following documentation:

● GeoTIFF Homepage

● TIFF Specification

Constraints of reading GeoTIFF product files

If a GeoTiff file was exported by the toolbox, the reader can recover information from a special tag used by the GeoTIFF writer. This includes:

● Name and type of the product.

● Name, description and unit of each band.

● The properties of no-data and scaling for each band.

● The index coding of a band.

● Virtual and filtered bands.

Features, limitations and constraints of the GeoTIFF import are given in the following list:

● Only the first image of a GeoTIFF file is considered.

● The size of images is limited to a pixel count of 2^32.

● The GeoTIFF reader supports the projections available in the toolbox, model transformations and tie-points.

● Color maps are considered and converted to an index coding.

Import ImageIO

Import ImageIO Product

This option allows you to import any format supported by Java ImageIO plug-ins. The built-in ImageIO readers/writers include:

● BMP

● PNG

● GIF

● JPEG

● PBM

● PPM

● PGM

● RAW

Abstracted Metadata can be imported by creating a metadata.xml file in the same folder as the image.

The BEAM-DIMAP Data Format

Introduction

The DIMAP format has been developed by SPOT-Image, France. The software uses a special DIMAP profile called BEAM-DIMAP. The BEAM-DIMAP is the standard I/O product format for the software.

Overview

A data product stored in this format is composed of

● a single product header file with the suffix .dim in XML format containing the product meta-data and

● an additional directory with the same name plus the suffix .data containing ENVI®-compatible images for each band.

The following diagram shows the structure:

The geo-coding of data products in satellite co-ordinates is stored in tie-point grid datasets. Tie-point datasets are stored in the data directory and have exactly the same format as the geophysical bands of a product. Because of its simplicity, the product components can be accessed with nearly every image processing system or programming language. The product's metadata can be viewed directly in a text editor or an XML viewer. The following contains a closer look at the components of the product format.

XML Product Header

XML stands for eXtensible Markup Language and is a mark-up language much like the better known HTML. The most significant difference is that HTML is about displaying information, while XML is about describing information. XML can be stored in plain text files and provides a data structuring scheme composed of elements and attributes. An element is enclosed by tags, but the tags are not predefined in XML. In order to use XML for data storage, an application or standard must define its own tags. In this respect, XML is very similar to HDF because the interpretation of the content is left to the application or, respectively, to the user. The product header for a BEAM-DIMAP data product contains two types of information:

● meta-data describing the image data contained in the product

● the geophysical bands and tie point grids of the product as references to the actual image data files, since the image data is not stored in XML format

Image Data Format

One geophysical band in the data product is represented by a single image. The image data - the data product's geophysical samples - are stored in flat binary files in big endian order. What makes this format compatible with the simple ENVI image format is that an extra image header for each image is also stored: the ENVI header.

ENVI Header File Format

The header files also have plain text format and comprise key-value pairs describing the storage layout of the raw data in terms of raster width, height and the sample data type. In addition to ENVI, multiple other imaging applications are capable of importing image files having a flat binary format. Here is an example of a header file:

ENVI
samples = 1100
lines = 561
bands = 1
header offset = 0
file type = ENVI Standard
data type = 4
interleave = bsq
byte order = 1

An ENVI header file starts with the text string ENVI to be recognized by ENVI as a native file header. Keywords within the file are used to indicate critical file information. The following keywords are used by the BEAM-DIMAP format:

description - a character string describing the image or processing performed.

samples - number of samples (pixels) per image line for each band.

lines - number of lines per image for each band.

bands - number of bands per image file. For BEAM-DIMAP the value is always 1 (one).

header offset - refers to the number of bytes of imbedded header information present in the file. These bytes are skipped when the ENVI file is read. For BEAM-DIMAP the value is always 0.

file type - refers to specific ENVI defined file types such as certain data formats and processing results. For BEAM-DIMAP the value is always the string "ENVI Standard".

data type - parameter identifying the type of data representation, where 1=8-bit byte; 2=16-bit signed integer; 3=32-bit signed long integer; 4=32-bit floating point; 5=64-bit double precision floating point; 6=2x32-bit complex, real-imaginary pair of double precision; 9=2x64-bit double precision complex, real-imaginary pair of double precision; 12=16-bit unsigned integer; 13=32-bit unsigned long integer; 14=64-bit unsigned integer; and 15=64-bit unsigned long integer.

interleave - refers to whether the data are band sequential (BSQ), band interleaved by pixel (BIP), or band interleaved by line (BIL). For BEAM-DIMAP the value is always "bsq".

byte order - describes the order of the bytes in integer, long integer, 64-bit integer, unsigned 64-bit integer, floating point, double precision, and complex data types; byte order=0 is Least Significant Byte First (LSF) data (DEC and MS-DOS systems) and byte order=1 is Most Significant Byte First (MSF) data (all others - Sun, SGI, IBM, HP, DG). For BEAM-DIMAP the value is always 1 (Most Significant Byte First = Big Endian Order).

x-start and y-start - parameters defining the image coordinates for the upper left hand pixel in the image. The values in the header file are specified in "file coordinates", which is a zero-based number.

map info - lists geographic coordinates information in the order of projection name (UTM), reference pixel x location in file coordinates, pixel y, pixel easting, pixel northing, x pixel size, y pixel size, Projection Zone, "North" or "South" for UTM only.

projection info - parameters that describe user-defined projection information. This keyword is added to the ENVI header file if a user-defined projection is used instead of a standard projection.

band names - allows entry of specific names for each band of an image.

wavelength - lists the center wavelength values of each band in an image.
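Because each band image is a plain flat binary file accompanied by an ENVI header, it can also be read outside of the toolbox with only a few lines of code. The following Python/NumPy sketch is an illustration only (not part of DAT); the file names are hypothetical and the data type mapping follows the header description above.

import numpy as np

# ENVI 'data type' codes (see above) mapped to big endian NumPy dtypes,
# since BEAM-DIMAP always uses byte order = 1. Complex and 64-bit integer
# codes are omitted for brevity.
ENVI_DTYPES = {1: '>u1', 2: '>i2', 3: '>i4', 4: '>f4', 5: '>f8',
               12: '>u2', 13: '>u4'}

def read_envi_header(hdr_path):
    """Parse the simple 'key = value' pairs of an ENVI header file."""
    header = {}
    with open(hdr_path) as f:
        for line in f:
            if '=' in line:
                key, value = line.split('=', 1)
                header[key.strip().lower()] = value.strip()
    return header

def read_beam_dimap_band(hdr_path, img_path):
    """Read one BEAM-DIMAP band image (flat binary, big endian, BSQ)."""
    hdr = read_envi_header(hdr_path)
    width, height = int(hdr['samples']), int(hdr['lines'])
    dtype = ENVI_DTYPES[int(hdr['data type'])]
    offset = int(hdr.get('header offset', 0))
    data = np.fromfile(img_path, dtype=dtype, offset=offset, count=width * height)
    return data.reshape(height, width)

# Example with hypothetical file names inside a *.data directory:
# band = read_beam_dimap_band('product.data/amplitude.hdr', 'product.data/amplitude.img')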

Import Landsat TM

Import Landsat TM

The Landsat TM reader enables DAT to import Fast formatted Landsat TM data products. More about Landsat TM characteristics can be found at http://landsat.usgs.gov/resources/project_documentation.php.

Import NetCDF

Import NetCDF

The NetCDF reader enables DAT to import NetCDF data products. The NetCDF reader supports any image-like NetCDF file structure in NetCDF version 3 or 4. Note: You can find additional information about NetCDF at http://www.unidata.ucar.edu/software/netcdf/.

Import HDF

Import HDF Product

This option allows you to import HDF 4 and HDF 5 products into DAT.

Import Generic Binary

Import Generic Binary Product

This option allows you to import Generic Binary products into DAT. By selecting the Generic Binary Reader from the Product Readers menu, a dialog will prompt you for the input file. A second dialog will then come up asking you to enter the dimensions of the image, the data type, the number of bands, the byte order and the number of bytes to skip for the header.
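As an illustration of what these parameters mean, the following Python/NumPy sketch reads a BSQ file with the same kind of parameters the dialog asks for. It is not part of DAT, and the file name and parameter values are hypothetical examples.

import numpy as np

# Hypothetical parameters, corresponding to the fields of the Generic Binary dialog.
width, height = 1000, 2000   # image dimensions in pixels
num_bands = 1                # number of bands (BSQ: one band stored after another)
header_bytes = 0             # number of bytes to skip for the header
dtype = '>i2'                # 16-bit signed integers, big endian ('<i2' for little endian)

raw = np.fromfile('image.bin', dtype=dtype, offset=header_bytes,
                  count=width * height * num_bands)
bands = raw.reshape(num_bands, height, width)   # band-sequential layout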

Note: The Generic Binary reader currently only supports BSQ formatted data.

Import Complex Generic Binary

Import Complex Generic Binary Product

This option allows you to import Complex Generic Binary products into DAT. The Complex Generic Binary reader is an extension of the Generic Binary reader for importing complex data into DAT. The expected input format is that the real (Re) and imaginary (Im) parts of each complex number are stored in sequential order. In other words, given a complex matrix, the real value of the first element is stored first, then the imaginary value of the first element, followed by the real and imaginary parts of the second element, and so on. The input parameters for the Complex Generic Binary reader are the same as for the Generic Binary reader. First a dialog will prompt for the input file. Then a second dialog will show up asking for the dimensions of the image, the data type, the number of bands, the byte order and the number of bytes to skip for the header.
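The interleaving described above (real value, then imaginary value, pixel by pixel) can be illustrated with a small Python/NumPy sketch; it is not part of DAT, and the file name, dimensions and data type are hypothetical.

import numpy as np

width, height = 1000, 2000
dtype = '>f4'   # 32-bit floating point samples, big endian

# Each pixel stores its real part immediately followed by its imaginary part,
# so the file holds 2 * width * height samples per band.
raw = np.fromfile('complex_image.bin', dtype=dtype, count=2 * width * height)
i_band = raw[0::2].reshape(height, width)    # real (in-phase) part
q_band = raw[1::2].reshape(height, width)    # imaginary (quadrature) part
complex_image = i_band + 1j * q_band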

Note: The Complex Generic Binary reader currently only supports BSQ (Band-Sequential) formatted data.

GETASSE30 Elevation Model

GETASSE30 Elevation Model

GETASSE30 stands for Global Earth Topography And Sea Surface Elevation at 30 arc second resolution. This elevation model has been provided by Marc Bouvet ([email protected]) of ESA/ESRIN. This documentation is also by courtesy of Marc Bouvet. The DEM tiles will be downloaded automatically by the software as needed.

The GETASSE30 Data Set

GETASSE30 is a composite of four other datasets. It uses the SRTM30 dataset, the ACE dataset, Mean Sea Surface (MSS) data and the EGM96 geoid as sources. The resulting GETASSE30 dataset represents the Earth Topography And Sea Surface Elevation with respect to the WGS84 ellipsoid at 30 arc second resolution. The dataset has no missing values, but at the junction of the Arctic and Antarctic there are some tens of strange negative values (down to -700 m) inherent to the ACE dataset. All latitude/longitude values refer to the center of a pixel, not to one of its corners. The GETASSE30 dataset is organised as multiple tiles.

Example of a GETASSE30 tile

The GETASSE30 Data

Mean sea surface height over sea and height over land, both referenced to the WGS84 ellipsoid.

Resolution: 30 arc seconds in latitude and longitude
Unit: meter
File name example: 45S045W.GETASSE30, where the first number is the latitude of the most south-west pixel and the second number its longitude
Data format: binary, 1800*1800 signed 16-bit integer values, big endian order
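Given this fixed layout, a tile can be loaded directly. A minimal Python/NumPy sketch (using the tile name from the example above; not part of DAT):

import numpy as np

# One GETASSE30 tile: 1800 x 1800 signed 16-bit integers, big endian order.
tile = np.fromfile('45S045W.GETASSE30', dtype='>i2').reshape(1800, 1800)

# At 30 arc seconds per pixel, 1800 pixels span 15 degrees, so this tile
# covers latitudes 45S to 30S and longitudes 45W to 30W.
print(tile.min(), tile.max())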

The GETASSE30 elevation data

There are 5 types of values:

1. Over land, between 60N and -60S, where SRTM30 DEM data are available, the output value is the sum of the SRTM30 elevation and the EGM96 geoid height.

2. Over land, between 60N and -60S, where SRTM30 DEM data are not available, the output value is the sum of the ACE elevation and the EGM96 geoid height.

3. Over land, above 60N and below -60S, where ACE data are available, the output value is the sum of the ACE elevation and the EGM96 geoid height.

4. Over sea, where neither ACE DEM nor SRTM30 data are available, if MSS data are available, the output value is the MSS.

5. Over sea, where neither ACE DEM nor SRTM30 data are available, if MSS data are not available, the output value is the EGM96 value.

● Flag

● Pixels described as in 2) and 3) are flagged with the value 0

● Pixels described as in 3) are flagged with the value 1

● Pixels described as in 4) are flagged with the value 2

● Pixels described as in 1) are flagged with the value 3

Resolution: 30 arc seconds in latitude and longitude
Unit: none
File name example: 45S045W.GETASSE30_flag, where the first number is the latitude of the most south-west pixel and the second number its longitude
Data format: binary, 1800*1800 unsigned 8-bit integer values, big endian order

The flag associated to the GETASSE30 data

Input Data used in GETASSE30 Generation

● ACE DEM at 30 arc second resolution referenced to the EGM96 geoid See ACE Report at http://www.cse.dmu.ac.uk/geomatics/products_ace_overview.html

● SRTM30 data are referenced to the EGM96 geoid. See http://www2.jpl.nasa.gov/srtm/

● Mean Sea Surface (MSS) height at 2 minute resolution referenced to the WGS84 ellipsoid. From the RA2 auxiliary file RA2_MSS_AXVIEC20031208_145545_20020101_000000_20200101_000000. See the RA2 product handbook at http://www.ENVISAT.esa.int

Import GTOPO30 DEM

Import GTOPO30 DEM Tile

This option allows you to import GTOPO30 tiles into DAT.

Import ACE30 DEM

Import ACE30 DEM Tile

Imports a tile of the ACE elevation model at 30 arc second resolution referenced to the EGM96 geoid. See the ACE Report at http://www.cse.dmu.ac.uk/geomatics/products_ace_overview.html Since the ACE height information is referenced to the EGM96 geoid, not the WGS84 ellipsoid, a correction is applied to obtain heights relative to the WGS84 ellipsoid.

Import SRTM DEM

Import SRTM DEM

Imports a tile of the Shuttle Radar Topography Mission (SRTM) elevation model at 3 arc second resolution. The SRTM DEM definition is intended to work with the SRTM 90m Version 4 in GeoTiff format. If a requested tile is not found in the SRTM folder specified in the Settings, it will be downloaded from the Consultative Group for International Agriculture Research (CGIAR) - Consortium for Spatial Information (CSI). Since the SRTM height information is referenced to the EGM96 geoid, not the WGS84 ellipsoid, a correction is applied to obtain heights relative to the WGS84 ellipsoid.

Import Geometry

Import Geometry

You can either import transect data or an ESRI Shapefile.

Import ESRI Shapefile

When importing an ESRI Shapefile for which no associated *.prj file is found, you will be asked to define the Coordinate Reference System (CRS) in which the coordinates of the shapefile are defined.

You can choose between three options to define the CRS:

● Use target CRS This option can be chosen if the shapefile is defined in the same CRS as the product into which you want to import the shapefile. The transformation used by the projection can be selected. Also the geodetic datum and transformation parameters can be set, if possible for the selected transformation.

● Custom CRS Here you can freely define the CRS of the shapefile.

● Predefined CRS By clicking on the Select... button a new dialog is shown where a predefined CRS can be selected.

Import Transect Data (Geometric Shape)

If transect data is imported from a text file, it is converted into a geometric shape defined either in pixel or geodetic (WGS-84) co-ordinates.

Transect data files are text files and have the default extension .txt. Each line in the file corresponds to a single vertex point of the geometric shape. Co-ordinates are stored in columns separated by whitespace characters (space or tabulator). All co-ordinates refer to a pixel's upper left corner. A file can have a header line in order to specify column names.

Transect Data Files without Header Line

Columns 1 and 2 are mandatory and contain the point's pixel X co-ordinate and Y co-ordinate, respectively. Columns 3 and 4 are optional and contain the point's geodetic latitude and longitude co-ordinates defined on the WGS-84 ellipsoid in decimal degree. Other columns are ignored:

column 1: mandatory X co-ordinate given in pixels
column 2: mandatory Y co-ordinate given in pixels
column 3: optional latitude in decimal degree
column 4: optional longitude in decimal degree
column 4 + i: optional additional data

If the current product is geo-coded, which is always true for ENVISAT products or map projected products, and the geodetic co-ordinates are given, the pixel co-ordinates are rejected. In this case, VISAT computes the actual pixel co-ordinates for the current product. For example (note, 5th column is ignored):

207 390 43.167255 -7.7339296 38.732
208 389 43.175785 -7.718502 48.529
208 388 43.18614 -7.715622 50.889
209 387 43.19465 -7.7002006 38.709
...

Transect Data Files with Header Line

The file can optionally contain a header line. The header line contains column names which are also expected to be separated by whitespace characters. Some column names have a special meaning for the import command, since they specify the alternate column for the recognized co-ordinates. Special names can appear in any order and are letter case insensitive. They are:

x, pixel_x or pixel-x: X co-ordinate given in pixels
y, pixel_y or pixel-y: Y co-ordinate given in pixels
lat or latitude: latitude in decimal degree
lon, long or longitude: longitude in decimal degree
other names: ignored

Either pixel or geodetic co-ordinates must be given. Again, if geodetic co-ordinates are present, they override the point's pixel co-ordinate if present, since they are recomputed by VISAT. The columns can appear in any order. For example (note, columns named "Index" and "radiance_11" are ignored):

Index Pixel-X Pixel-Y Lat Lon radiance_11
0 205 393 43.139843 -7.767645 35.16
1 206 392 43.148373 -7.752228 32.764
2 206 391 43.158722 -7.749352 35.751
...
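A sketch of how such files could be parsed outside of DAT (Python, header-line variant as shown above; this is an illustration, not the toolbox implementation):

def read_transect(path):
    """Parse a transect data file with a header line into (type, a, b) tuples."""
    with open(path) as f:
        rows = [line.split() for line in f if line.strip()]
    header = [name.lower() for name in rows[0]]   # column names are case insensitive
    points = []
    for row in rows[1:]:
        rec = dict(zip(header, row))
        lat = next((rec[k] for k in ('lat', 'latitude') if k in rec), None)
        lon = next((rec[k] for k in ('lon', 'long', 'longitude') if k in rec), None)
        if lat is not None and lon is not None:
            # Geodetic co-ordinates take precedence over pixel co-ordinates.
            points.append(('geo', float(lat), float(lon)))
        else:
            x = next(rec[k] for k in ('x', 'pixel_x', 'pixel-x') if k in rec)
            y = next(rec[k] for k in ('y', 'pixel_y', 'pixel-y') if k in rec)
            points.append(('pixel', float(x), float(y)))
    return points

# points = read_transect('transect.txt')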

Export BEAM-DIMAP

Export BEAM-DIMAP Product

This option allows you to export loaded bands to DAT's standard BEAM-DIMAP format.

Export GeoTIFF

Export GeoTIFF Product

This command allows you to export the data of the current product to a GeoTIFF file. GeoTIFF is an extension to the TIFF 6.0 specification. These image files contain additional information about their georeference and their spatial resolution. A wide range of projected coordinate systems is supported, including UTM, US State Plane and National Grids. For further information have a look at the following documentation:

● GeoTIFF Homepage

● TIFF Specification

Constraints on created GeoTIFF product files

Exported GeoTIFF files contain additional information besides the tags defined by the GeoTIFF specification. The BEAM-DIMAP header is used to hold this additional information. This allows better support for several features, including:

● Name and type of the product.

● Name, description and unit of each band.

● The properties of no-data and scaling for each band.

● The index coding of a band.

● Virtual and filtered bands.

● etc.

Features and limitations of the GeoTIFF export and constraints on interaction with other GIS tools are given in the following list:

● All bands of a GeoTiff file must have the same data type. Therefore a common data type is identified which is suitable for all bands. Double precision data type is not supported.

● The data of a band is always written raw and unscaled.

● Virtual and filtered bands can only be reimported by the Toolbox.

● All projections available in Reprojection are also supported by the GeoTIFF writer. As a fallback, if a map projection or other geo-coding is not directly supported, a tie-point grid is written to the GeoTiff file. The exported GeoTiff files were tested for compatibility with ENVI and ArcGIS. While the files are fully compatible with ENVI, ArcGIS has problems understanding stereographic projections.

● An index coded band can only be correctly decoded by other GIS tools if it is written as an image with one band.

Export HDF5

Export HDF5 Product

This command allows you to export the data of the current product to HDF5 format.

Export NetCDF

Export NetCDF Product

This command allows you to export the data of the current product to NetCDF 3 format.

Export GCP

Export Envi GCP File

This command allows you to store the geo-coding information of the selected product as an ENVI GCP (Ground Control Points) file.

ENVI GCP File Format

Text taken from the ENVI User's Guide.

The ENVI Ground Control Point (GCP) files are ASCII files that contain the pixel coordinates of tie points selected from a base and warp image using ENVI’s registration utilities. They are assigned the file extension .pts by default and begin with the keywords "ENVI Registration GCP File." For image-to-image registration, pixel coordinates are listed in the order Base image X, Y, Warp Image X, Y. An example of a typical image-to-image .pts file is shown here:

ENVI Registration GCP File
921.00000 2538.0000 2098.0000 111.00000
1381.0000 2291.0000 2145.0000 85.000000
1669.0000 2116.0000 2174.0000 66.000000
119.00000 3032.0000 2017.0000 164.00000
1286.0000 3820.0000 2140.0000 240.00000
1294.0000 1754.0000 2132.0000 28.000000
1783.0000 1814.0000 2184.0000 34.000000
715.00000 4504.0000 2083.0000 312.00000
351.00000 1708.0000 2039.0000 27.000000
207.00000 2357.0000 2025.0000 98.000000

For image-to-map registration, coordinates are listed as map X, map Y, image X, image Y. An example of a typical image-to-map .pts file is shown here:

ENVI Registration GCP File
359459.810 4143531.300 288.000 496.000
367681.530 4141772.000 232.000 23.000
366343.470 4138660.500 458.000 35.000
362337.840 4145969.500 71.000 388.000
361339.910 4138479.300 569.000 301.000
354457.530 4140550.300 591.000 714.000
354352.590 4145685.000 261.000 819.000
359918.310 4142412.000 351.000 448.000
364900.910 4141752.300 290.000 172.000
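For reference, a small Python sketch that reads the numeric records of such a .pts file; whether the four columns mean base/warp image coordinates or map/image coordinates depends on the registration type described above. This is an illustration only, with a hypothetical file name.

def read_envi_gcp_file(path):
    """Read an ENVI Registration GCP (.pts) file into a list of 4-tuples."""
    points = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip empty lines and the "ENVI Registration GCP File" keyword line.
            if not line or not (line[0].isdigit() or line[0] == '-'):
                continue
            values = [float(v) for v in line.split()]
            points.append(tuple(values[:4]))
    return points

# gcps = read_envi_gcp_file('registration.pts')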

Export Image

Export Displayed Image

This command allows the export of a loaded image to an image file. The save image dialog offers the possibility to choose whether the visible view (Clipping only) or the entire image with all annotations (e.g. graticule, ROI, pin ... ) should be exported. The (presently) supported formats are:

● BMP: Windows Bitmap (*.bmp)

● PNG: Portable Network Graphics (*.png)

● JPEG: Joint Photographic Experts Group (*.jpg, *.jpeg)

● TIFF: Tagged Image File Format (*.tif, *.tiff)

● GeoTIFF: TIFF with geo-location (*.tif, *.tiff)

Export KML

Export Displayed Image as KML

This command allows the export of the loaded image to a KML file. Only images from loaded bands that are in a lat/lon projection can be exported. The following features of the product are exported:

● raster data

● colour legend

● pins

The exported image is saved in the KML format (http://earth.google.com/kml/). To view this file, a version of Google Earth (http://earth.google.com/) is necessary.

Export ROI Pixels

Export ROI pixels

This command allows the export of pixel values as tab-separated values. This is convenient for further use in spreadsheets, e.g., Microsoft® Excel. When selected, a dialog box pops up:

The command is similar to Export Transect Pixels and the export format is identical. The "Copy to Clipboard" option will copy the ROI pixel values to the clipboard. This may take a few moments, depending on the number of pixels. The "Write to File" option brings up a file selection dialog to select the destination directory and to assign a name to the text file. The written file may easily be imported with any text editor or spreadsheet software. Note: In low memory situations, it may be better to export to a file instead of copying to the clipboard.

Export Transect Pixels

Export Transect Pixels

This command allows the export of pixel values along the current transect as tab-separated values. This is convenient for further use in spreadsheets, e.g. Microsoft® Excel. A transect can be defined by arbitrary shape figures drawn into an image. You can also import a transect using the Import Transect Data command. The command is similar to Export ROI Pixels and the export format is identical. The "Copy to Clipboard" option will copy the transect pixel values to the clipboard. This may take a few moments, depending on the number of pixels. The "Write to File" option brings up a file selection dialog to select the destination directory and to assign a name to the text file. The written file may easily be imported with any text editor or spreadsheet software. Note: In low memory situations, it may be better to export to a file instead of copying to the clipboard.

Export Color Palette

Exporting a Color Palette Table

To export the Color Palette Table, click the menu item Color Palette... in the File Menu. The Color Palette Table used by the current image view can be exported to a *.txt or a *.csv file. When exporting the colour palette table of an RGB image, the red channel is used.

File Format

The exported file is separated into two sections. The Header Section contains general information about the exported colour palette. The Data Section contains the Color Palette Table. When exporting the colour palette, the variables in curly braces are replaced by their current value.

Header Section
# Band: {BAND_NAME}
# Sample unit: {SAMPLE_UNIT}
# Minimum sample value: {MIN_VALUE}
# Maximum sample value: {MAX_VALUE}
# Number of colors: {COLOR_COUNT}

Data Section
ID;Sample;RGB
{COLOR_INDEX};{SAMPLE_VALUE};{RED}, {GREEN}, {BLUE}

RGB-Image Profile

RGB-Image Profile

In this window you are asked to define the RGB channels for a new RGB image view. You are able to load defined RGB-Profiles or to create and store new profiles.

Profile - Selects one of the stored RGB-Profiles to use for creation of the new image view.

● Use the open button to load a stored RGB-Profile file.

● Use the save button to save the currently displayed RGB-Profile.

● Use the delete button to delete the currently displayed RGB-Profile.

RGB Channels Red - Defines the arithmetical expression for the red channel. Green - Defines the arithmetical expression for the green channel. Blue - Defines the arithmetical expression for the blue channel.

Use the edit button to edit the expression for the specific channel using the expression editor.

Store RGB Channels - Stores the RGB channels into the current product as virtual bands.

RGB-Profile File

RGB-Profile files must have the extension ".rgb". Multiple default profiles are provided in the $NEST_HOME/auxdata/rgb_profiles folder. An RGB-Profile file contains several entries. The syntax of an entry is 'EntryName = EntryValue'. Normally one entry is written on one line, but you can use the '\' character to indicate that the next line also belongs to the value. Empty lines and lines beginning with the '#' character are ignored. The possible entries of an RGB-Profile are listed in the following table:

name (String) - The name of the RGB-Profile; if given, it is displayed instead of the file name.

internal (Boolean) - The default value is false; if given and set to true, it indicates that this RGB-Profile cannot be deleted from the user interface (but does not prevent overwriting the file).

red or r (String) - The arithmetic expression used to create the red channel. This entry is mandatory.

green or g (String) - The arithmetic expression used to create the green channel. This entry is mandatory.

blue or b (String) - The arithmetic expression used to create the blue channel. This entry is mandatory.
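A hypothetical example of a small RGB-Profile file using these entries (the profile name and band names are placeholders, not taken from a real product):

# example_profile.rgb
name = Example profile
internal = false
red = radiance_7
green = radiance_5
blue = radiance_2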

No-Data Overlay

No-Data Overlay toggles the overlay of a no-data mask within a band's image view. The overlay properties can be modified using the no-data overlay page in the preferences dialog. The no-data mask is similar to a bitmask overlay and masks out all the no-data pixels in the selected image view. If the no-data value property of a band or the valid pixel expression of a band is set, a no-data overlay can be displayed within the current image view.

Bitmask Overlay Window

In this window you may activate the overlay of flags, and combinations of them, on a loaded band image view. Simply activate the checkboxes in the visibility column to toggle the overlay of that specific flag.

Use the create icon to create new bitmask expressions. This will also use the Bitmask Expression Editor.

Use the copy icon to copy bitmask expressions. This will also use the Bitmask Expression Editor.

Use the edit icon to edit (change name, colour, transparency etc.) a selected bitmask. Double-clicking on a row has the same effect.

Use the remove icon to remove a selected bitmask.

Use the import/export icons to import bitmask expressions from files and to save them.

Similarly, you can use the import and export icons to import and export multiple bitmask expressions at the same time.

Use the up and down icons to order the overlay sequence of the bitmasks.

Bitmask Editor

If you choose to edit a bitmask, the Bitmask Editor appears as in the following screenshot:

In the upper area of the window, the properties name, description, colour and transparency can be changed. In the lower part of the window, the expression can be defined with a few mouse clicks or typed into the text field on the right.

Building a bitmask expression

Creating an expression for a bitmask is done like defining expressions with the Arithmetic Expression Editor.

Geometry Management

The Geometry Concept

As opposed to the bands and tie-point grids contained in satellite data products - which are all kinds of raster data - a geometry refers to vector data. Thus a geometry can be a point, line or polygon. Geometries are part of a product's data model and are stored to (and restored from) BEAM-DIMAP product files. A product can comprise a number of named geometry containers. Each container may comprise virtually any number of geometries. New geometries are always added to the geometry container selected in the Geometries node of the Products View tool window, as shown in the screenshot on the right. If there isn't a single geometry container yet, a default container named geometry will be created.

Relationship with Pins and Ground Control Points (GCP)

Pins and Ground Control Points are placemarks treated as point geometries. Thus, geometry editing such as moving or deleting is the same as for other geometries as described below in chapter 'Working with Geometries'. Despite this, the pins and GCPs of a data product are still managed as described in Pin Management and GCP Management, respectively.

Relationship with Masks and Regions of Interest (ROI)

The geometries in a geometry container can be directly used as a ROI for raster data analysis. The way this is established is closely related to the new Mask and ROI Management. Once a new geometry container has been added to the data product, an associated geometry mask is created by rendering the geometry onto the product's intrinsic raster data grid.

Geometry ROI

Polygon geometry (vector data) Resulting mask (raster data)

The associated mask will always have the same name as the geometry container which created it and can serve as possible ROI for the selected band or tie-point grid without any additional user interaction. Once the geometry is created (e.g. simply by drawing it, see below), its associated geometry mask can be used as ROI in the various analysis tools, such as the Statistics, Histogram, and Scatter Plot tool windows. Multiple geometry ROIs can be defined by creating new geometry containers as described below.

Working with Geometries

Creating a new geometry

Once you have opened an image view, new geometries are created by using the various drawing tools provided through the Interactions Toolbar:

Line: Press left mouse button for the start point, drag line to end point and release left mouse button.

Polyline: Single-click (press and release) left mouse button for the start point, move line segment and click to add a vertex point, move to end point and double-click to finalize the polyline.

Rectangle: Press left mouse button for the start point, drag line to end point and release left mouse button.

Ellipse: Similar to rectangle; Press left mouse button for the start point, drag line to end point and release left mouse button.

Polygon: Similar to polyline; Single-click (press and release) left mouse button for the start point, move line segment and click to add a vertex point, move to end point and double-click to close the polygon.

Creating a new geometry container

Use the menu item Tools / Create Geometry Container or the corresponding tool button from the Interaction Toolbar to create a new geometry container.

Clicking the button opens a dialog for creating a new geometry container. You will be prompted to enter a unique name for the new container.

Editing geometries

Geometries may be edited in a number of ways once they have been selected. Note that editing or deleting a geometry will automatically affect the mask associated with the geometry's container. Use the Select tool to select geometries which shall be edited:

Select a single geometry by clicking it. Select one or more geometries by dragging a selection rectangle around them. Hold down the control key while selecting in order to add or remove geometries from the current selection set.

Clicking selected geometries multiple times lets them step through a number of selection modes allowing for different editing modes which are further described below.

Selection mode 1 Selection mode 2 Selection mode 3

Move: Selected shapes can be moved to another location simply by dragging them with the mouse.

Move vertex: If single selected geometries are clicked once again, the selection mode changes depending on the geometry type. The first mode lets you move the vertexes of lineal and polygonal geometries by dragging the appearing vertex handles.

Scale: The next selection mode (click again) lets you scale the size of a geometry by dragging the appearing size handles.

Cut, Copy, Paste: Use these commands from the Edit menu or use the keys Control X, Control C, Control V to cut or copy geometries to the clipboard and to paste them into the same or another view.

Delete: Use the command from the Edit menu or use the Delete key.

Importing geometry data

You can use the Import Geometry command to import geometries from plain text files or from ESRI Shapefiles.

Plain text (*.txt): Imports a single geometry (polyline or polygon) from a plain text file having the format described in Import Geometry.

ESRI Shapefile (*.shp): Multiple geometries can be imported from a Shapefile. The geometry coordinates used in the Shapefile will be converted to the coordinate reference system used by the current data product. Note that there is only limited support for the various style settings which may be attached to a specific Shapefile. The import also currently ignores all the attribute data that usually comes with a Shapefile.

Mask and ROI Management

The Mask Concept

Masks are useful to mask out image pixels either solely for image display or for image analysis. In the latter case we refer to a mask in the role of a region of interest (ROI). Currently three basic types of masks are supported differing in the way the mask is defined:

1. Masks defined by a band math expression
2. Masks defined by a simple sample value range
3. Masks defined by geometry such as lines and polygons

In data modelling terms, a mask is a product node similar to a band or tie-point grid. It has a unique name and comprises an image (raster data) whose sample data type is Boolean. Each data product may comprise virtually any number of masks. Not only the mask definitions but also their use in conjunction with a raster data set such as a band or tie-point grid are part of the data model stored within the product. A product "remembers" for a certain band or tie-point grid

● which masks have been switched visible and

● which masks are in use as ROI.

A number of product formats define a default mask set. E.g. the Envisat MERIS L1 and L2 product types define a mask for each of their quality flags.

Working with Masks

Masks are managed by the Mask and ROI Manager tool window. To bring up this tool window, click the tool button in the Tool Windows tool bar or select the corresponding menu item in View / Tool Windows / Mask and ROI Manager.

The manager allows creating new masks, editing mask properties and deleting existing masks. It also allows for creating new masks based on logical combinations of existing masks. Furthermore, masks may be imported and exported. If an image view is selected, the manager tool window can also be used to control a mask's visibility and its role as a possible ROI for the currently displayed band. When the mask's ROI role is selected, it becomes available in the various raster data analysis tools, such as the Statistics, Histogram, and Scatter Plot tool windows.

Mask and ROI Manager tool window

Band math expression: Adds a new mask to the product which is based on a band math expression. The expression can reference any bands, tie-point grids, single flags or other masks defined in the data product. For more information on band math please refer to Band Maths.

Value range: Adds a new mask to the product which is based on a value range of the selected band. All pixels whose sample values fall within the value range are part of the mask.

Geometry: Adds a new mask to the product which is based on the geometries contained in an associated geometry container. The command effectively creates a new geometry container which in turn causes the creation of an associated geometry mask. For more information about geometries and geometry containers have a look at Geometry Management.

Union: Creates the logical union of two or more selected masks.

Intersection: Creates the logical intersection of two or more selected masks.

Difference: Creates the logical difference of two or more selected masks (in top-down order).

Inv. Difference: Creates the logical difference of two or more selected masks (in bottom-up order).

Complement: Creates the logical complement of one or more selected masks.

Copy: Creates a copy of the selected mask.

Edit: Edits the definition of the selected mask. Double-clicking a mask entry in the table has the same effect.

Delete: Deletes the selected mask.

Import: Imports a mask from a plain text file.

Export: Exports the selected masks to a plain text file.

GCP Management

GCP Management in DAT

A GCP is a marker for a certain geographical position within a geo-referenced image. The properties of a GCP are

● the geographical co-ordinate,

● a graphical symbol,

● the name (label),

● and a textual description.

In contrast to a pin, a GCP is fixed to a geographical position while the pin is not. GCPs can be used to create a GCP geo-coding for a product or to improve an existing geo-coding. GCPs are displayed as symbols at their geographical positions in image views associated with the current product. GCPs are stored in the current product and available again if the product is re-opened. New GCPs can be created with the GCP tool. It is also possible to create and remove GCPs by using the GCP manager.

The GCP Tool

The GCP tool is used to create a new GCP and also to select an existing GCP. If the GCP tool is active, a click into an image view creates a new GCP at the current cursor position. If you click on an existing GCP, it becomes the selected GCP. If you double-click on an existing GCP, the Edit GCP dialog appears and lets you edit the properties of the selected GCP.

The GCP Manager

The GCP manager is used to display all GCPs stored in the current product within a table and provides some GCP related operations. If you click on a GCP within the table, it becomes the selected GCP. Double-clicking on a GCP in the table opens the Edit GCP dialog. You can also change the values for X, Y, Lat and Lon of the GCP by editing the table cells directly. In the following, the tool buttons of the GCP manager are explained.

Creates a new GCP and adds it to the product.

Creates a new GCP by cloning the selected GCP.

Opens the edit dialog for the selected GCP.

Removes the selected GCP from the current product.

Imports a single GCP from a flat text or XML file.

Exports a single GCP to an XML file.

Enables the expansion of the table with pixel values.

Exports all values of the displayed table to a flat text file. The exported text is tabulator-separated and may therefore be imported directly into a spreadsheet application (e.g. MS Excel).

Centers the image view on the selected GCP.

The Edit GCP Dialog

The GCP edit dialog is used to edit the properties of a GCP after a new GCP has been created or after a selected GCP has been double-clicked. This dialog is also available in the GCP manager.

The editable GCP properties are

● the geographical co-ordinate, which can also be given as pixel position

● the name for the GCP

● and a textual description.

GCP Geo-Coding

This geo-coding is calculated by using the defined GCPs as reference points and the chosen polynomial method. You can choose between the linear, quadratic or cubic polynomial method. The status field below the polynomial method chooser shows whether you have already defined enough GCPs for the currently chosen method and whether the geo-coding is already attached to the product. On the lower left hand side, information about the currently attached GCP geo-coding is displayed. It shows the used polynomial method and the overall RMSE for the latitudes and longitudes. In the table you can see the deviation (Delta Lat and Delta Lon) for each GCP between the given and the calculated geo-position. If you attach a GCP geo-coding to the product, the former geo-coding is preserved and will be restored if you detach the GCP geo-coding.

Pin Management

Pin Management in DAT

A pin is a marker for a certain geographical position within a geo-referenced image. The properties of a pin are

● the geographical co-ordinate,

● a graphical symbol,

● the name (label),

● and a textual description.

Pins are displayed as symbols at their geographical positions in image views associated with the current product. Pins are stored in the current product and available again if the product is re-opened. In DAT, pins can be used to "freeze" the pixel info view to the selected pin in order to display the values of the pixel associated with the selected pin. New pins can be created with the pin tool and removed using the delete pin command in the Edit Menu. It is also possible to create and remove pins by using the pin manager.

The Pin Tool

The pin tool is used to create a new pin and also to select an existing pin. If the pin tool is active, a click into an image view creates a new pin at the current cursor position. If you click on an existing pin, it becomes the selected pin. If you double-click on an existing pin, the Edit Pin dialog appears and lets you edit the properties of the selected pin.

The Pin Manager

The pin manager is used to display all pins stored in the current product within a table and provides some pin related operations. If you click on a pin within the table, it becomes the selected pin. Double-clicking on a pin in the table opens the Edit Pin dialog. You can also change the values for X, Y, Lat and Lon of the pin by editing the table cells directly. In the following, the tool buttons of the pin manager are explained.

Creates a new pin and adds it to the product.

Creates a new pin by cloning the selected pin.

Opens the edit dialog for the selected pin.

Removes the selected pin from the current product.

Imports multiple pins from a flat text or XML file.

Exports the selected pins to an XML file.

Enables the expansion of the table with pixel values.

Exports the selected part of the displayed table to a flat text file. The exported text is tabulator-separated and may therefore be imported directly into a spreadsheet application (e.g. MS Excel).

Centers the image view on the selected pin.

The Edit Pin Dialog

The pin edit dialog is used to edit the properties of a pin after a new pin has been created or after a selected pin has been double-clicked. This dialog is also available in the pin manager. The editable pin properties are

● the geographical co-ordinate, which can also be given as pixel position

● the outline and fill colour for the graphical symbol,

● the name for the pin

● and a textual description.

Product/Band Information

Product/Band Information

This dialog shows general properties of a loaded product, band or tie-point grid and their parent product. Note: A mouse right-click within the properties data area brings up a context menu with the item Copy data to clipboard. This will copy the diagram data as tabulated text to the system clipboard. The copied text can then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).

Geo-Coding Information

Geo-Coding Information

This dialog shows the geo-coding information for the selected data product. Geo-coding enables DAT to transform pixel co-ordinates into geographical co-ordinates (WGS-84 datum) and vice versa. Geo-coding can be based either on a map projection (the product is geo-referenced) or on tie-point grids (the product is geo-coded). If a product is not geo-referenced, DAT uses the tie-point grids "latitude" and "longitude" for geo-coding. For tie-point grid based geo-coding, the transformation of a geographical co-ordinate into a pixel position is more complicated than the other way round. DAT uses either an iterative algorithm or a polynomial approximation, depending on the root mean square error (RMSE) of the approximation. If the RMSE is below half a pixel, the approximation is used instead of the iteration, because the latter can sometimes have no clear attraction point and would lead to infinite looping. Note: A mouse right-click within the geo-coding information area brings up a context menu with the item Copy data to clipboard. This will copy the diagram data as tabulated text to the system clipboard. The copied text can then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).
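As an illustration of the forward transformation (pixel position to geographical position) for tie-point based geo-coding, the following Python sketch bilinearly interpolates the latitude and longitude tie-point grids. It is a simplified outline under assumed grid layouts and sub-sampling factors, not the actual DAT implementation, and it does not handle the +/-180 degree longitude wrap.

import numpy as np

def pixel_to_geo(lat_grid, lon_grid, x, y, sub_x, sub_y):
    """Bilinear interpolation of 2-D lat/lon tie-point grids at pixel (x, y).

    sub_x, sub_y: sub-sampling of the tie-point grid in image pixels.
    """
    gx = min(x / sub_x, lat_grid.shape[1] - 1.001)   # grid coordinates
    gy = min(y / sub_y, lat_grid.shape[0] - 1.001)
    j, i = int(gx), int(gy)
    fx, fy = gx - j, gy - i

    def interp(grid):
        return ((1 - fy) * ((1 - fx) * grid[i, j] + fx * grid[i, j + 1]) +
                fy * ((1 - fx) * grid[i + 1, j] + fx * grid[i + 1, j + 1]))

    return interp(lat_grid), interp(lon_grid)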

Statistics

Statistics Display

The Statistics Display shows statistical information about the band or an active ROI. For the whole band, the following statistical information is given (see Figure 1):

Figure 1. Statistical information for a given band

For user selected homogeneous ROI, the following statistics are computed (see Figure 2):

Figure 2. Statistical information for user selected ROI

Note: A mouse right-click within the statistics data area brings up a context menu with the item Copy data to clipboard. This will copy the diagram data as tabulated text to the system clipboard. The copied text can then be pasted directly into a spreadsheet application (e.g., Microsoft® Excel).

Histogram

Histogram Display

This dialog displays a histogram for the selected band. If a ROI is defined, you may restrict the computation to the pixels within that ROI. You can also set the number of bins used for the histogram creation, and set the range manually or let it compute automatically.

Context Menu

A click with the right mouse button on the diagram brings up a context menu which consists of the following menu items:

● Properties... - Edit several properties (colors, axes, etc.) of the diagram. You can also use the corresponding button on the 'Plot' panel on the right.

● Save As... - Save the diagram as an image (PNG). You can also use the corresponding button on the 'Plot' panel on the right.

● Print... - Print the diagram. You can also use the corresponding button on the 'Plot' panel on the right.

● Zoom In

❍ Both Axes - Zoom in on both axes. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Zoom in only on the domain axis.

❍ Range Axes - Zoom in only on the range axis.

● Zoom Out

❍ Both Axes - Zoom out on both axes. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Zoom out only on the domain axis.

❍ Range Axes - Zoom out only on the range axis.

● Auto Range

❍ Both Axes - Adjusts both axes to the full data range. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Adjusts the domain axis to the full data range.

❍ Range Axes - Adjusts the range axis to the full data range.

● Copy Data to Clipboard - This will copy the diagram data as tabulated text to the system clipboard. The copied text can then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).

Scatter Plot

Scatter Plot Display

This dialog allows you to plot one variable against another. In order to plot them, these variables must already be displayed as bands; otherwise the values would not be available to DAT, except for the tie-point data (latitude, longitude etc.). If you consider the following figure, you see the plot of longitude against latitude of a MERIS product. The variables to be used as abscissa (X-Band) and ordinate (Y-Band) can be selected from the two drop-down menus (red ellipses).

Context Menu

A click with the right mouse button on the diagram brings up a context menu which consists of the following menu items:

● Properties... - Edit several properties (colors, axes, etc.) of the diagram. You can also use the corresponding button on the 'Plot' panel on the right.

● Save As... - Save the diagram as an image (PNG). You can also use the corresponding button on the 'Plot' panel on the right.

● Print... - Print the diagram. You can also use the corresponding button on the 'Plot' panel on the right.

● Zoom In

❍ Both Axes - Zoom in on both axes. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Zoom in only on the domain axis.

❍ Range Axes - Zoom in only on the range axis.

● Zoom Out

❍ Both Axes - Zoom out on both axes. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Zoom out only on the domain axis.

❍ Range Axes - Zoom out only on the range axis.

● Auto Range

❍ Both Axes - Adjusts both axes to the full data range. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Adjusts the domain axis to the full data range.

❍ Range Axes - Adjusts the range axis to the full data range.

● Copy Data to Clipboard - This will copy the diagram data as tabulated text to the system clipboard. The copied text can then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).

Profile Plot

Profile Plot Display

This option is only available if a geometric shape has been defined in the current image view. The profile plot displays the sample values of the current geophysical band along the current geometric shape (the transect profile in this case).

Context Menu

A click with the right mouse button on the diagram brings up a context menu which consists of the following menu items:

● Properties... - Edit several properties (colors, axes, etc.) of the diagram. You can also use the corresponding button on the 'Plot' panel on the right.

● Save As... - Save the diagram as an image (PNG). You can also use the corresponding button on the 'Plot' panel on the right.

● Print... - Print the diagram. You can also use the corresponding button on the 'Plot' panel on the right.

● Zoom In

❍ Both Axes - Zoom in on both axes. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Zoom in only on the domain axis.

❍ Range Axes - Zoom in only on the range axis.

● Zoom Out

❍ Both Axes - Zoom out on both axes. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Zoom out only on the domain axis.

❍ Range Axes - Zoom out only on the range axis.

● Auto Range

❍ Both Axes - Adjusts both axes to the full data range. You can also use the corresponding buttons on the 'Plot' panel on the right.

❍ Domain Axes - Adjusts the domain axis to the full data range.

❍ Range Axes - Adjusts the range axis to the full data range.

● Copy Data to Clipboard - This will copy the diagram data as tabulated text to the system clipboard. The copied text can then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).

Co-ordinate List

Co-ordinate List Display

This option is only available if a geometric shape has been defined in the current image view. The co-ordinate list shows information not only for each vertex but for each pixel along the current geometric shape (the transect profile in this case). Note: A mouse right-click within the co-ordinate list area brings up a context menu with the item Copy data to clipboard. This will copy the displayed data as tabulated text to the system clipboard. The copied text can then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).

Compute ROI-Mask Area

Compute ROI-Mask Area

● Number of ROI pixels - The number of pixels contained in the ROI.

● ROI area - The computed area of the ROI in km².

● Mean pixel area - The computed mean pixel area of the ROI in km².

● Minimum pixel area - The computed minimum pixel area of the ROI in km².

● Maximum pixel area - The computed maximum pixel area of the ROI in km².

● Mean earth radius - The mean earth radius in km on which the computation is based.

Product Subset

Product Subset Definition

This dialog appears from various import and export file selection dialogs, and whenever a new product has to be created from a base product, such as in the map projection or band arithmetic dialogs.

The dialog is divided into four tabs, each providing specific subset options.

Spatial Subset

If you are not interested in the whole image, you may specify an area of the product to be loaded. You can select the area either by dragging the blue surrounding rectangle in the preview (see figure above) or by editing the appropriate fields. If you drag the rectangle, the field values change simultaneously. You can also specify a sub-sampling by setting the values of Scene step X and Scene step Y.

Band Subset

This tab is used to select the bands you want to have in your product subset.

Tie-Point Grid Subset

As with bands, this tab is used to select the tie-point grids you want to have in your product subset. The tie-point grids named latitude and longitude cannot be deselected because they provide essential geo-coding information.

Metadata Subset

This tab lets you select/deselect metadata records and tables. By default, all metadata tables are deselected, because they can be very large for ENVISAT products and may reduce DAT's performance.

Band Arithmetic

Band Arithmetic

The band arithmetic tool is used to create new image sample values derived from existing bands, tie-point grids and flags. The source data can originate from all currently open and spatially compatible input products. The source data is combined by an arithmetic expression to generate the target data. By default, a new image view is automatically opened for the new sample values; you can disable this behaviour in the preferences dialog. Please refer to the expression editor documentation for the syntax and capabilities of expressions. After the new band has been created (or an existing one has been overwritten), you can change to DAT's Product View and open an image view to inspect the resulting samples.

Parameters

Target Product: Selects the target product to which the new band will be added.

Name: Specifies the name of the new band. The name must not be empty, and the target product must not already contain a band with the same name.

Description: An optional description can be entered here.

Unit: An optional unit can be entered here.

Virtual (save expression only, don't write data): If this option is checked, a virtual band is created; only the expression is stored and the data is re-computed whenever needed. If this option is unchecked, the data is computed once and stored in the product.

Replace NaN and infinity results by: Sometimes an expression can result in a NaN (Not a Number) or infinity value; in these cases the value is replaced by the value specified here. This value is also used as the no-data value of the band.

Expression: This field takes the arithmetic expression which is used to create the new data samples. Please refer to the expression editor documentation for the syntax and capabilities of expressions.

Edit Expression... button: Opens the expression editor, which provides a convenient way to create valid arithmetic expressions.
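For example, a band-ratio expression could look as follows (a minimal sketch; the band names Amplitude_VV and Amplitude_VH are hypothetical placeholders for bands of the selected products):

(Amplitude_VV > 0 && Amplitude_VH > 0) ? Amplitude_VV / Amplitude_VH : NaN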

OK, Cancel and Help

OK: Computes the new sample data and closes the dialog. Refer to the preferences dialog for an option to specify whether or not to automatically open an image view for the new data samples.

Cancel: Closes the dialog; all changes are reverted.

Help: Opens this help page.

Expression Editor

The Arithmetic Expression Editor

DAT's expression editor provides a convenient way to construct arithmetic expressions with C-like syntax from various data sources, such as bands, tie-point grids and flag values. You can combine these data sources with a number of comparison, arithmetic, logical and binary operators, or use them as arguments for mathematical functions.

Product: Selects the current input product providing source bands, tie-point grids and flags.

Data Sources: The list of available data sources provided by the selected input product. Click on a data source to move it into the expression text field.

Show Bands checkbox: Determines whether or not the bands of the product are shown in the list of available data sources.

Show Tie Point Grids checkbox: Determines whether or not the tie-point grids of the product are shown in the list of available data sources.

Show single Flags checkbox: Determines whether or not the flags of the product are shown in the list of available data sources.

Expression: The expression text field. You can also edit the expression directly here.

Select All button: Selects the entire text in the expression text field.

Clear button: Clears the entire text in the expression text field.

Undo button: Undoes the most recent edits in the expression text field.

OK Button Accepts the expression.

Expression Syntax

The syntax for valid expressions used in DAT is almost the same as that of the C, C++ or Java programming languages. However, type conversions, type casts and object-access operations are currently not supported.

Supported Operators and Functions

Important note: The operators listed here are enumerated in order of increasing precedence. If not otherwise stated, binary operators bind (and evaluate) from left to right, so that A - B - C - D is equivalent to ((A - B) - C) - D.

Ternary Conditional Operator

This operator returns one of two values depending on a Boolean expression A.

A ? B : C - if A then B, else C

Binary Logical Operators

These operators are meant to be used in conjunction with a data product's quality flags. The arguments must always be Boolean.

X || Y - Logical OR (alternative syntax: X or Y)
X && Y - Logical AND (alternative syntax: X and Y)

Binary Comparison Operators

These operators return the Boolean values true or false. You can use the conditional operator to convert the Boolean return value into a real number, for example (radiance_13 <= 0) ? 0 : 10 * sqrt(radiance_13)

X == Y - Equal to
X != Y - Not equal to
X < Y - Less than
X <= Y - Less than or equal to
X > Y - Greater than
X >= Y - Greater than or equal to

Binary Bitwise Operators

X | Y - Bitwise OR
X ^ Y - Bitwise XOR
X & Y - Bitwise AND

Arithmetic Operators

X + Y - Plus
X - Y - Minus
X * Y - Multiplication
X / Y - Division
X % Y - Modulo (remainder)

Unary Operators

+X - Arithmetic positive sign; no actual operation, equivalent to 1 * X
-X - Arithmetic negation, equivalent to -1 * X
!X - Logical NOT of Boolean argument X (alternative syntax: not X)
~X - Bitwise NOT of integer argument X

Mathematical Constants

PI - The double value that is closer than any other to π; PI = 3.14159265358979323846
E - The double value that is closer than any other to e, the base of the natural logarithms; E = 2.7182818284590452354
NaN - A constant holding a Not-a-Number (NaN) value; NaN = 0.0 / 0.0
X - The X-position of the current pixel
Y - The Y-position of the current pixel
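As an illustration of the X and Y constants (a minimal sketch that uses no product-specific band names), the following expression produces a synthetic test pattern that varies with the pixel position:

sin(X / 100.0) * sin(Y / 100.0)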

Mathematical Functions

sqrt(X) - Returns the square root of X
pow(X, Y) - X raised to the power of Y
exp(X) - Returns Euler's number e raised to the power of X
exp10(X) - Returns the value of 10 raised to the power of X
log(X) - Returns the natural logarithm (base e) of X
log10(X) - Returns the common logarithm (base 10) of X
sin(X) - Returns the trigonometric sine of an angle X in radians
cos(X) - Returns the trigonometric cosine of an angle X in radians
tan(X) - Returns the trigonometric tangent of an angle X in radians
asin(X) - Returns the trigonometric arc-sine of X
acos(X) - Returns the trigonometric arc-cosine of X
atan(X) - Returns the trigonometric arc-tangent of X
atan2(Y, X) - Returns the angle of the polar co-ordinate of X, Y
ampl(R, I) - Returns the amplitude function of a complex argument, same as sqrt(R * R + I * I)
phase(R, I) - Returns the phase function of a complex argument, same as atan2(I, R)
rad(X) - Converts X from decimal degrees to radians
deg(X) - Converts X from radians to decimal degrees
abs(X) - Returns the absolute value of X
sign(X) - Returns the sign of X, always one of -1, 0, +1
min(X, Y) - Returns the smaller value of X and Y
max(X, Y) - Returns the greater value of X and Y
floor(X) - Returns the largest (closest to positive infinity) double value that is less than or equal to X and is equal to a mathematical integer
round(X) - Returns the closest long to X. The result is rounded to an integer by adding 1/2, taking the floor of the result, and casting the result to type long
ceil(X) - Returns the smallest (closest to negative infinity) double value that is greater than or equal to X and is equal to a mathematical integer
rint(X) - Returns the double value that is closest in value to X and is equal to a mathematical integer. If two double values that are mathematical integers are equally close, the result is the integer value that is even
feq(X, Y) - Performs a fuzzy equal operation for the X and Y arguments
feq(X, Y, EPS) - Performs a fuzzy equal operation for the X and Y arguments, using EPS as the maximum deviation
fneq(X, Y) - Performs a fuzzy not-equal operation for the X and Y arguments
fneq(X, Y, EPS) - Performs a fuzzy not-equal operation for the X and Y arguments, using EPS as the maximum deviation
nan(X) - Returns true if X is a Not-a-Number (NaN) value, false otherwise
inf(X) - Returns true if X is infinitely large in magnitude, false otherwise

Reprojection

Reprojection Dialog

Use Reprojection to create a new product with a projected Coordinate Reference System (CRS).

I/O Parameters

Source Product Name: Here the user specifies the name of the source product. The combo box presents a list of all products opened. The user may select one of these or, by clicking on the button next to the combo box, choose a product from the file system.

Target Product Name: Used to specify the name of the target product.

Save as: Used to specify whether the target product should be saved to the file system. The combo box presents a list of file formats, currently BEAM-DIMAP, GeoTIFF and HDF5. The text field allows you to specify a target directory.

Open in DAT: Used to specify whether the target product should be opened in DAT. When the target product is not saved, it is opened automatically.

Projection Parameters

Coordinate Reference System (CRS)

Custom CRS: The transformation used by the projection can be selected. The geodetic datum and transformation parameters can also be set, if possible for the selected transformation.

Predefined CRS: Clicking the Select... button opens a new dialog where a predefined CRS can be selected.

Use CRS of: A product can be selected in order to use its projected Coordinate Reference System. The source product will then cover the same geographic region on the same CRS, which means that both products are collocated.

Output Settings

Preserve resolution: If unchecked, the Output Parameters... button is enabled, and the upcoming dialog lets you edit output parameters such as the easting and northing of the reference pixel, the pixel size, and the scene height and width.

Reproject tie-point grids: Specifies whether or not the tie-point grids shall be included. If they are reprojected, they appear as bands in the target product and no longer as tie-point grids.

No-data value: The default no-data value is used for output pixels in the projected band which either have no corresponding pixel in the source product or whose source pixel is invalid.

Resampling Method: You can select one resampling method for the projection. For a brief description have a look at Resampling Methods.

Output Information

Displays some information about the output, such as the scene width and height, the geographic coordinate of the scene center and a short description of the selected CRS. Clicking the Show WKT... button shows the corresponding Well-Known Text of the currently defined CRS.

Resampling Methods

Resampling Methods

When a product is reprojected, the pixel centers of the target product generally do not coincide with the pixel centers of the input product. Resampling is the process of determining and interpolating pixel values in the source product in order to compute the pixel values of the target product. The effects of resampling are especially visible if the pixels in the target product are larger than the source pixels. Three different resampling methods are available for this computation.

Nearest Neighbour

Every pixel value in the output product is set to the nearest input pixel value.

Pros: Very simple and fast; no new values are calculated by interpolation; fast compared to Cubic Convolution resampling.

Cons: Some pixels get lost and others are duplicated; loss of sharpness.

The following figure demonstrates the calculation of a new resampled pixel value.

Bi-linear Interpolation

Calculation of the new pixel value is performed by weighting the four surrounding pixels.

Pros: Extremes are balanced.

Cons: Less contrast compared to Nearest Neighbour; the image loses sharpness compared to Nearest Neighbour; new values are calculated which are not present in the input product.

The following figure demonstrates the calculation of the new pixel value.

The bilinear interpolation is performed by the following equation:
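As a sketch in standard notation (the symbols are not taken from the original figure): if P(1)..P(4) are the four surrounding source pixel values and dx, dy are the fractional distances of the target pixel position from P(1) in the x and y directions, the bilinearly interpolated value is

P' = (1 - dx)·(1 - dy)·P(1) + dx·(1 - dy)·P(2) + (1 - dx)·dy·P(3) + dx·dy·P(4)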

Cubic Convolution

Calculation of the new pixel value is performed by weighting the 16 surrounding pixels.

Pros: Extremes are balanced; the image is sharper compared to Bi-linear Interpolation.

Cons: Less contrast compared to Nearest Neighbour; new values are calculated which are not present in the input product; slow compared to Nearest Neighbour resampling.

The following figure demonstrates the calculation of the new pixel value.

The cubic convolution is performed by the following equation:
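As a sketch (the notation is not taken from the original figure, and the exact kernel used by the toolbox is not stated here), a common form of one-dimensional cubic convolution over the four pixel values P(1)..P(4) of a line, with fractional offset d measured from P(2), is the Keys kernel with a = -0.5:

P'(line) = P(2) + 0.5·d·(P(3) - P(1)) + d²·(P(1) - 2.5·P(2) + 2·P(3) - 0.5·P(4)) + d³·(-0.5·P(1) + 1.5·P(2) - 1.5·P(3) + 0.5·P(4))

Applied once per line, this yields the four intermediate values P'(1) - P'(4) mentioned below, which are then interpolated once more in the other direction.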

In the first step the average value for each line is calculated, afterwards the new pixel value is calculated with the four new average values P'(1) - P'(4) similar to the preceding calculation.

Visual Comparison of the Resampling Methods (example images): Nearest Neighbour, Bi-linear Interpolation, Cubic Convolution

Data Flip

Data Flip (Product Transposition)

The data flip command applies a flip operation to an input product and creates a new, transposed output product. After the new product has been created, you can change to DAT's product browser in order to open an image view for a band of the new product.

Input Product: You must specify an input product here. Note that this command operates on entire products, so you might want to create a product subset first.

Output Product Name: You can specify the output product's name here. The name must be unique within DAT's open product list.

Description: You can enter a short description text for the new product here.

Flip Data

horizontally radio button: Mirrors the tie-point grids and bands of the product along the central vertical axis.

vertically radio button: Mirrors the tie-point grids and bands of the product along the central horizontal axis.

horizontally & vertically radio button: Mirrors the tie-point grids and bands of the product along both the central vertical and horizontal axes.

Output product Information Displays properties for the resulting output product.

OK button: Applies the flip transposition to the input product, creates the specified output product and closes the dialog.

Pixel Geo-Coding

Pixel Geo-Coding

The Pixel Geo-Coding can be used if the user has two bands filled with accurate latitude and longitude values for each pixel. For example the FSG and FRG products have corrected latitude ("corr_lat") and longitude ("corr_lon") bands. These bands can be used to replace the current geo-coding associated with the product.

Attach Pixel Geo-Coding

● Longitude Band The band which keeps the longitude values for each pixel.

● Latitude Band The band which keeps the latitude values for each pixel.

● Valid Mask This mask is used by the search algorithm when trying to find the pixel position for a given lat/lon. Only pixels which meet the expression are included in the search.

● Search Radius This value is used by the search algorithm when trying to find the pixel position for a given lat/lon. A higher value means that a larger area is searched for the proper pixel position, so the risk of not finding the searched value is smaller. However, the greater the value, the longer the search takes.

Detach Pixel Geo-Coding

Detaches the current pixel geo-coding and restores the previous geo-coding.

Create Elevation Band

Create Elevation Band

This command adds a new elevation band to the current product. The altitudes are computed by looking up each pixel in a selected high-resolution DEM. The Toolbox is able to download the SRTM 90m DEM and the GETASSE30 DEM as needed. For the ACE and ASTER DEMs, you will need to download the datasets yourself and have the respective path in the Settings dialog point to the location of the datasets.

● Elevation Model Lets you select an elevation model.

● Band name The name of the newly created elevation band.

Image Filtering

Image Filtering

The operator creates filtered image bands by applying convolution or non-linear filters to the selected band. Filters are used to perform common image processing operations, e.g. sharpening, blurring or edge enhancement. Note: When storing a product containing a filtered band, the data of the band is not stored with the product; only the information on how to compute the data is stored. This behaviour is similar to virtual bands.

DAT supports the following filters

● Detect Lines Horizontal Edges, Vertical Edges, Left Diagonal Edges, Right Diagonal Edges, Compass Edge Detector, Roberts Cross North-West, Roberts Cross North-East

● Detect Gradients (Emboss) Sobel North, Sobel South, Sobel West, Sobel East, Sobel North East

● Smooth and Blur Arithmetic 3x3 Mean, Arithmetic 4x4 Mean, Arithmetic 5x5 Mean, Low-Pass 3x3, Low-Pass 5x5

● Sharpen High-Pass 3x3 #1, High-Pass 3x3 #2, High-Pass 5x5

● Enhance Discontinuities Laplace 3x3, Laplace 5x5

● Non-Linear Filters Minimum 3x3, Minimum 5x5, Maximum 3x3, Maximum 5x5, Mean 3x3, Mean 5x5, Median 3x3, Median 5x5, Standard Deviation 3x3, Standard Deviation 5x5, Root-Mean-Square 3x3, Root-Mean-Square 5x5

DAT also supports user defined filters

● The operator also supports user-defined convolution filter kernels. A user-defined kernel can be browsed and selected via the User Defined Kernel File field in the UI.

● The user-defined kernel must be saved in an ASCII file in a matrix format. The first line of the file contains two integers indicating the dimensions (rows and columns) of the matrix. For example, a user-defined 3x3 low-pass filter can be saved in the file lop_3_3.txt in the following format:

3 3
1 1 1
1 1 1
1 1 1

Example images

Compass Edge Detector Filter, Low-Pass 5x5 Filter, High-Pass 3x3 #2 Filter, Laplace 3x3 Filter


1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands for producing filtered images. If no bands are selected, all bands are selected by default.

2. Filters: Pre-defined filters.

3. User Defined Kernel File: User-defined filter kernel file.

Convert SLC to Detected GR

Convert Complex to Detected Ground Range

This tool converts a Single Look Complex product in slant range into a detected product in ground range.

Graph Processing

Graphs

Graph Processing is used to apply operations to the data, allowing you to create your own processing chains. A graph is a set of nodes connected by edges. In this case, the nodes are the processing steps. The edges show the direction in which the data is passed between nodes; the graph is therefore a directed graph. The graph has no loops or cycles, so it is a Directed Acyclic Graph (DAG). The sources of the graph are the data product readers, and the sinks can be either a product writer or an image displayed in the DAT. The GPF uses a Pull Model, wherein a request is made from the sink backwards to the source to process the graph. This request could be to create a new product file or to update a displayed image. Once the request reaches a source, the image is pulled through the nodes to the sink. Each time an image passes through an operator, the operator transforms the image, and it is passed down to the next node until it reaches the sink. The graph processor will not introduce any intermediate files unless a writer is optionally added somewhere in the sequence.
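As an illustration, a minimal graph consisting of a reader source and a writer sink could look like the sketch below (the node identifiers and file names are hypothetical; see the Graph Builder and Command Line Reference sections for the full XML format):

<graph id="ExampleGraph">
  <version>1.0</version>
  <node id="ReadNode">
    <operator>Read</operator>
    <parameters>
      <file>input_product.dim</file>
    </parameters>
  </node>
  <node id="WriteNode">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="ReadNode"/>
    </sources>
    <parameters>
      <file>output_product.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>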

Tiling

The memory management allows very large data products, which cannot be stored entirely in the available memory, to be handled by the processing tools and visualization. To do so, a tiled approach is used. The dataset is divided into workable areas called tiles, each consisting of a subset of the data read from disk in one piece. Only the data for the tiles being visualized is read in, and in some cases the data may be down-sampled to view the desired area at the expense of resolution. Depending on the tool, data is ingested for a tile or a set of tiles, and processing is applied only to the current set of tiles. The data is then written to a file and released from memory. The process is then repeated on a new set of tiled data from the large data product. In the DAT, a pyramid of tiled images at different resolutions is used in order to allow zooming out and viewing of the entire image. Tiling is generally transparent to the user.

Operators

In order to provide the greatest flexibility to the end user, processing algorithms such as orthorectification and co-registration are broken down into unit processing steps called Operators. These operators may be reused to create new processing graphs for other purposes.

The Toolbox includes various processing modules that can be run either from the DAT GUI or from the command line, such as:

● Data Conversion

● Band Arithmetic

● Image Filtering

● Statistics & Data Analysis

● Ellipsoid Correction

● Terrain Correction

● Co-Registration

● Reprojection

● Subset

● Calibration

● Multilooking

● Apply Orbit Correction

● Create Stack

● Create Elevation

● Resampling

● Interferometry

● Plus many more

Graph Processing Tool

GPT

The Graph Processing Tool (GPT) is the command-line interface for executing graphs created using the Graph Builder. Data sources and parameters found in the graph file can be replaced at the command line using arguments passed to the GPT. The GPT can also be used to read a product, execute a single operator and produce output in the specified format without the use of a graph.

Usage:

  {0} <op>|<graph-file> [options] [<source-file-1> <source-file-2> ...]

Description:

  This tool is used to execute raster data operators in batch-mode. The operators can be used stand-alone or combined as a directed acyclic graph (DAG). Processing graphs are represented using XML. More info about processing graphs, the operator API, and the graph XML format can be found in the documentation.

Arguments:

  <op>               Name of an operator. See below for the list of <op>s.
  <graph-file>       Operator graph file (XML format).
  <source-file-i>    The <i>th source product file. The actual number of source file arguments is specified by <op>. May be optional for operators which use the '-S' option.

Options:

  -h                 Displays command usage. If <op> is given, the specific operator usage is displayed.
  -e                 Displays more detailed error messages. Displays a stack trace, if an exception occurs.
  -t <file>          The target file. Default value is ''{1}''.
  -f <format>        Output file format, e.g. ''GeoTIFF'', ''HDF5'', ''BEAM-DIMAP''. If not specified, the format will be derived from the target filename extension, if any; otherwise the default format is ''{2}''. Only used if the graph in <graph-file> does not specify its own ''Write'' operator.
  -p <file>          A (Java Properties) file containing processing parameters in the form <name>=<value>. Entries in this file are overwritten by the -P<name>=<value> command-line option (see below).
  -c <cache-size>    Sets the tile cache size in bytes. Value can be suffixed with ''K'', ''M'' and ''G''. Must be less than the maximum available heap space. If equal to or less than zero, tile caching will be completely disabled. The default tile cache size is ''{3}M''.
  -q <parallelism>   Sets the maximum parallelism used for the computation, i.e. the maximum number of parallel (native) threads. The default parallelism is ''{4}''.
  -x                 Clears the internal tile cache after writing a complete row of tiles to the target product file. This option may be useful if you run into memory problems.
  -T<target>=<file>  Defines a target product. Valid for graphs only. <target> must be the identifier of a node in the graph. The node's output will be written to <file>.
  -S<source>=<file>  Defines a source product. <source> is specified by the operator or the graph. In an XML graph, all occurrences of ${<source>} will be replaced with references to a source product located at <file>.
  -P<name>=<value>   Defines a processing parameter; <name> is specific for the used operator or graph. In an XML graph, all occurrences of ${<name>} will be replaced with <value>. Overwrites parameter values specified by the ''-p'' option.
  -inFolder          For graphs with ProductSetReaders such as coregistration, all products found in the specified folder and subfolders will be used as input to the ProductSetReader.
  -printHelp         Prints the usage help for all operators.

Running from the Command Line

To be able to properly run the GPT from the command line, make sure the environment variable NEST_HOME is set to the installation folder. Also, add the installation folder to the PATH so you can call the GPT from any folder. Some example graphs can be found in $NEST_HOME/commandline.

● asar-caibrate.bat - an example Windows batch file that calibrates all ASAR products in a given folder

● ers-cailbrate.bat - an example Windows batch file that calibrates all ERS products in a given folder

● coregister.bat - an example Windows batch file that runs multiple graphs together, in this case GCPSelectionGraph and WarpGraph

● MapProjGraph.xml - a graph for applying a map projection to a product

● TC_Graph.xml - a graph for applying Terrain Correction on a product

● write.xml - a graph to write products in another format, in this case, it will write to GeoTIFF
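For example, a single operator can be executed directly on a product, or a graph can be run with its source product substituted on the command line (a sketch with hypothetical file names; MyGraph.xml stands for any graph whose reader node references ${mySource}):

  gpt Calibration -Ssource=ASA_IMP_product.dim -t calibrated_product.dim -f BEAM-DIMAP

  gpt MyGraph.xml -SmySource=input_product.dim -t output_product.dim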

Command Line Reference

Usage: gpt <op>|<graph-file> [options] [<source-file-1> <source-file-2> ...]

Description: This tool is used to execute raster data operators in batch-mode. The operators can be used stand-alone or combined as a directed acyclic graph (DAG). Processing graphs are represented using XML. More info about processing graphs, the operator API, and the graph XML format can be found in the documentation.

Arguments:

  <op>              Name of an operator. See below for the list of <op>s.
  <graph-file>      Operator graph file (XML format).
  <source-file-i>   The <i>th source product file. The actual number of source file arguments is specified by <op>. May be optional for operators which use the -S option.

Options:

  -h                 Displays command usage. If <op> is given, the specific operator usage is displayed.
  -e                 Displays more detailed error messages. Displays a stack trace, if an exception occurs.
  -t <file>          The target file. Default value is './target.dim'.
  -f <format>        Output file format, e.g. 'GeoTIFF', 'HDF5', 'BEAM-DIMAP'. If not specified, the format will be derived from the target filename extension, if any; otherwise the default format is 'BEAM-DIMAP'. Only used if the graph in <graph-file> does not specify its own 'Write' operator.
  -p <file>          A (Java Properties) file containing processing parameters in the form <name>=<value>. Entries in this file are overwritten by the -P<name>=<value> command-line option (see below).
  -c <cache-size>    Sets the tile cache size in bytes. Value can be suffixed with 'K', 'M' and 'G'. Must be less than the maximum available heap space. If equal to or less than zero, tile caching will be completely disabled. The default tile cache size is '512M'.
  -q <parallelism>   Sets the maximum parallelism used for the computation, i.e. the maximum number of parallel (native) threads. The default parallelism is '2'.
  -x                 Clears the internal tile cache after writing a complete row of tiles to the target product file. This option may be useful if you run into memory problems.
  -T<target>=<file>  Defines a target product. Valid for graphs only. <target> must be the identifier of a node in the graph. The node's output will be written to <file>.
  -S<source>=<file>  Defines a source product. <source> is specified by the operator or the graph. In an XML graph, all occurrences of ${<source>} will be replaced with references to a source product located at <file>.
  -P<name>=<value>   Defines a processing parameter; <name> is specific for the used operator or graph. In an XML graph, all occurrences of ${<name>} will be replaced with <value>. Overwrites parameter values specified by the '-p' option.
  -inFolder          For graphs with ProductSetReaders such as coregistration, all products found in the specified folder and subfolders will be used as input to the ProductSetReader.
  -printHelp         Prints the usage help for all operators.

Operators:

  AdaptiveThresholding            Detect ships using Constant False Alarm Rate detector.
  Apply-Orbit-File                Apply orbit file
  BandMaths                       Create a product with one or more bands using mathematical expressions.
  Calibration                     Calibration of products
  Interferogram                   Compute interferograms from stack of coregistered images : JBLAS implementation
  Convert-Datatype                Convert product data type
  Coherence                       Estimate coherence from stack of coregistered images
  Create-LandMask                 Creates a bitmask defining land vs ocean.
  CreateElevation                 Creates a DEM band
  CreateStack                     Collocates two or more products based on their geo-codings.
  Data-Analysis                   Computes statistics
  DeburstWSS                      Debursts an ASAR WSS product
  EMClusterAnalysis               Performs an expectation-maximization (EM) cluster analysis.
  Ellipsoid-Correction-GG         GG method for orthorectification
  Ellipsoid-Correction-RD         Ellipsoid correction with RD method and average scene height
  Fill-Hole                       Fill holes in given product
  GCP-Selection2                  Automatic Selection of Ground Control Points
  Image-Filter                    Common Image Processing Filters
  KMeansClusterAnalysis           Performs a K-Means cluster analysis.
  LinearTodB                      Converts bands to dB
  Mosaic                          Mosaics two or more products based on their geo-codings.
  Multi-Temporal-Speckle-Filter   Speckle Reduction using Multitemporal Filtering
  Multilook                       Averages the power across a number of lines in both the azimuth and range directions
  Object-Discrimination           Remove false alarms from the detected objects.
  Oil-Spill-Clustering            Remove small clusters from detected area.
  Oil-Spill-Detection             Detect oil spill.
  Oversample                      Oversample the dataset
  ProductSet-Reader               Adds a list of sources
  Read                            Reads a product from disk.
  RemoveAntennaPattern            Remove Antenna Pattern
  ReplaceMetadata                 Replace the metadata of the first product with that of the second
  Reprojection                    Applies a map projection
  SAR-Simulation                  Rigorous SAR Simulation
  SARSim-Terrain-Correction       Orthorectification with SAR simulation
  SRGR                            Converts Slant Range to Ground Range
  Speckle-Filter                  Speckle Reduction
  SubsetOp                        Create a spatial subset of the source product.
  Terrain-Correction              RD method for orthorectification
  Undersample                     Undersample the dataset
  Warp2                           Create Warp Function And Get Co-registrated Images
  Wind-Field-Estimation           Estimate wind speed and direction
  Write                           Writes a data product to a file.
  WriteRGB                        Creates an RGB image from three source bands.

------ Terrain-Correction ------

Usage: gpt Terrain-Correction [options]

Description: RD method for orthorectification

Source Options:
  -Ssource=<file>   Sets source 'source' to <file>. This is a mandatory source.

Parameter Options:
  -PapplyRadiometricNormalization=<boolean>   Sets parameter 'applyRadiometricNormalization' to <boolean>. Default value is 'false'.
  -PauxFile=<string>   The auxiliary file. Value must be one of 'Latest Auxiliary File', 'Product Auxiliary File', 'External Auxiliary File'. Default value is 'Latest Auxiliary File'.
  -PdemName=<string>   The digital elevation model. Value must be one of 'ACE', 'GETASSE30', 'SRTM 3Sec GeoTiff'. Default value is 'SRTM 3Sec GeoTiff'.
  -PdemResamplingMethod=<string>   Sets parameter 'demResamplingMethod' to <string>. Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'BILINEAR_INTERPOLATION'.
  -PexternalAuxFile=<file>   The antenna elevation pattern gain auxiliary data file.
  -PexternalDEMFile=<file>   Sets parameter 'externalDEMFile' to <file>.
  -PexternalDEMNoDataValue=<double>   Sets parameter 'externalDEMNoDataValue' to <double>. Default value is '0'.
  -PimgResamplingMethod=<string>   Sets parameter 'imgResamplingMethod' to <string>. Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'BILINEAR_INTERPOLATION'.
  -PincidenceAngleForGamma0=<string>   Sets parameter 'incidenceAngleForGamma0' to <string>. Value must be one of 'Use incidence angle from Ellipsoid', 'Use projected local incidence angle from DEM'. Default value is 'Use projected local incidence angle from DEM'.
  -PincidenceAngleForSigma0=<string>   Sets parameter 'incidenceAngleForSigma0' to <string>. Value must be one of 'Use incidence angle from Ellipsoid', 'Use projected local incidence angle from DEM'. Default value is 'Use projected local incidence angle from DEM'.
  -PpixelSpacingInDegree=<double>   The pixel spacing in degrees. Default value is '0'.
  -PpixelSpacingInMeter=<double>   The pixel spacing in meters. Default value is '0'.
  -PprojectionName=<string>   The projection name. Default value is 'Geographic Lat/Lon'.
  -PsaveBetaNought=<boolean>   Sets parameter 'saveBetaNought' to <boolean>. Default value is 'false'.
  -PsaveDEM=<boolean>   Sets parameter 'saveDEM' to <boolean>. Default value is 'false'.
  -PsaveGammaNought=<boolean>   Sets parameter 'saveGammaNought' to <boolean>. Default value is 'false'.
  -PsaveLocalIncidenceAngle=<boolean>   Sets parameter 'saveLocalIncidenceAngle' to <boolean>. Default value is 'false'.
  -PsaveProjectedLocalIncidenceAngle=<boolean>   Sets parameter 'saveProjectedLocalIncidenceAngle' to <boolean>. Default value is 'false'.
  -PsaveSelectedSourceBand=<boolean>   Sets parameter 'saveSelectedSourceBand' to <boolean>. Default value is 'true'.
  -PsaveSigmaNought=<boolean>   Sets parameter 'saveSigmaNought' to <boolean>. Default value is 'false'.
  -PsourceBands=<string,string,...>   The list of source bands.
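For instance, a terrain correction could be run from the command line as follows (a sketch with hypothetical file names, using only the options and parameters listed above):

  gpt Terrain-Correction -Ssource=calibrated_product.dim -PdemName="SRTM 3Sec GeoTiff" -PimgResamplingMethod=BILINEAR_INTERPOLATION -t terrain_corrected.dim -f BEAM-DIMAP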

------Multi-Temporal-Speckle-Filter ------Usage: gpt Multi-Temporal-Speckle-Filter [options] Description: Speckle Reduction using Multitemporal Filtering Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PsourceBands= The list of source bands. -PwindowSize= Sets parameter 'windowSize' to . Value must be one of '3x3', '5x5', '7x7', '9x9', '11x11'. Default value is '3x3'. Graph XML Format: 1.0 Multi-Temporal-Speckle-Filter ${source} string <.../> string ------Ellipsoid-Correction-RD ------Usage: gpt Ellipsoid-Correction-RD [options] Description: Ellipsoid correction with RD method and average scene height Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. -Ssource= Sets source 'source' to . This is a mandatory source.

Parameter Options: -PapplyRadiometricNormalization= Sets parameter 'applyRadiometricNormalization' to . Default value is 'false'. -PauxFile= The auxiliary file Value must be one of 'Latest Auxiliary File', 'Product Auxiliary File', 'External Auxiliary File'. Default value is 'Latest Auxiliary File'. -PdemName= The digital elevation model. Value must be one of 'ACE', 'GETASSE30', 'SRTM 3Sec GeoTiff'. Default value is 'SRTM 3Sec GeoTiff'. -PdemResamplingMethod= Sets parameter 'demResamplingMethod' to . Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'BILINEAR_INTERPOLATION'. -PexternalAuxFile= The antenne elevation pattern gain auxiliary data file. -PexternalDEMFile= Sets parameter 'externalDEMFile' to . -PexternalDEMNoDataValue= Sets parameter 'externalDEMNoDataValue' to . Default value is '0'. -PimgResamplingMethod= Sets parameter 'imgResamplingMethod' to . Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'BILINEAR_INTERPOLATION'. -PincidenceAngleForGamma0= Sets parameter 'incidenceAngleForGamma0' to . Value must be one of 'Use incidence angle from Ellipsoid', 'Use projected local incidence angle from DEM'. Default value is 'Use projected local incidence angle from DEM'. -PincidenceAngleForSigma0= Sets parameter 'incidenceAngleForSigma0' to . Value must be one of 'Use incidence angle from Ellipsoid', 'Use projected local incidence angle from DEM'. Default value is 'Use projected local incidence angle from DEM'. -PpixelSpacingInDegree= The pixel spacing in degrees Default value is '0'. -PpixelSpacingInMeter= The pixel spacing in meters Default value is '0'. -PprojectionName= The projection name Default value is 'Geographic Lat/Lon'. -PsaveBetaNought= Sets parameter 'saveBetaNought' to . Default value is 'false'. -PsaveDEM= Sets parameter 'saveDEM' to . Default value is 'false'. -PsaveGammaNought= Sets parameter 'saveGammaNought' to . Default value is 'false'. -PsaveLocalIncidenceAngle= Sets parameter 'saveLocalIncidenceAngle' to . Default value is 'false'. -PsaveProjectedLocalIncidenceAngle= Sets parameter 'saveProjectedLocalIncidenceAngle' to . Default value is 'false'. -PsaveSelectedSourceBand= Sets parameter 'saveSelectedSourceBand' to . Default value is 'true'. -PsaveSigmaNought= Sets parameter 'saveSigmaNought' to . Default value is 'false'. -PsourceBands= The list of source bands. Graph XML Format: 1.0 Ellipsoid-Correction-RD ${source} ${source} string <.../> string file double string string double double string boolean boolean boolean boolean boolean boolean boolean boolean string string string file ------PCA-Image ------Usage: gpt PCA-Image [options] Description: Computes PCA Images Source Options: -SsourceProduct= Sets source 'sourceProduct' to . This is a mandatory source. ------Merge ------Usage: gpt Merge [options] Description: Merges an arbitrary number of source bands into the target product. Parameter Options: -PbaseGeoInfo= The ID of the source product providing the geo-coding. -PproductName= The name of the target product. Default value is 'mergedProduct'. -PproductType= The type of the target product. Default value is 'UNKNOWN'. Graph XML Format: 1.0 Merge string string string string string string string <.../> ------Speckle-Filter ------Usage: gpt Speckle-Filter [options]

Description: Speckle Reduction Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PdampingFactor= The damping factor (Frost filter only) Valid interval is (0, 100]. Default value is '2'. -PedgeThreshold= The edge threshold (Refined Lee filter only) Valid interval is (0, *). Default value is '5000'. -Penl= The number of looks Valid interval is (0, *). Default value is '1.0'. -PestimateENL= Sets parameter 'estimateENL' to . Default value is 'false'. -Pfilter= Sets parameter 'filter' to . Value must be one of 'Mean', 'Median', 'Frost', 'Gamma Map', 'Lee', 'Refined Lee'. Default value is 'Mean'. -PfilterSizeX= The kernel x dimension Valid interval is (1, 100]. Default value is '3'. -PfilterSizeY= The kernel y dimension Valid interval is (1, 100]. Default value is '3'. -PsourceBands= The list of source bands. Graph XML Format: 1.0 Speckle-Filter ${source} string <.../> string int int int double boolean double ------Calibration ------Usage: gpt Calibration [options]

Description: Calibration of products Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PauxFile= The auxiliary file Value must be one of 'Latest Auxiliary File', 'Product Auxiliary File', 'External Auxiliary File'. Default value is 'Latest Auxiliary File'. -PcreateBetaBand= Create beta0 virtual band Default value is 'false'. -PcreateGammaBand= Create gamma0 virtual band Default value is 'false'. -PexternalAuxFile= The antenne elevation pattern gain auxiliary data file. -PoutputImageScaleInDb= Output image scale Default value is 'false'. -PsourceBands= The list of source bands. Graph XML Format: 1.0 Calibration ${source} string <.../> string file boolean boolean boolean ------Ellipsoid-Correction-GG ------Usage: gpt Ellipsoid-Correction-GG [options] Description: GG method for orthorectification Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PimgResamplingMethod= Sets parameter 'imgResamplingMethod' to . Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'BILINEAR_INTERPOLATION'. -PprojectionName= The projection name Default value is 'Geographic Lat/Lon'. -PsourceBands= The list of source bands. Graph XML Format: 1.0 Ellipsoid-Correction-GG ${source} string <.../> string string

------CreateElevation ------Usage: gpt CreateElevation [options] Description: Creates a DEM band Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PdemName= The digital elevation model. Value must be one of 'ACE', 'GETASSE30', 'SRTM 3Sec GeoTiff'. Default value is 'SRTM 3Sec GeoTiff'. -PelevationBandName= The elevation band name. Default value is 'elevation'. -PexternalDEM= The external DEM file. Default value is ' '. -PresamplingMethod= Sets parameter 'resamplingMethod' to . Value must be one of 'Nearest Neighbour', 'Bilinear Interpolation', 'Cubic Convolution'. Default value is 'Bilinear Interpolation'. Graph XML Format: 1.0 CreateElevation ${source} string string string string ------Reproject ------Usage: gpt Reproject [options]

Description: Reprojection of a source product to a target Coordinate Reference System. Source Options: -ScollocateWith= The source product will be collocated with this product. This is an optional source. -Ssource= The product which will be reprojected. This is a mandatory source. Parameter Options: -Pcrs= A text specifying the target Coordinate Reference System, either in WKT or as an authority code. For appropriate EPSG authority codes see (www. epsg-registry.org). AUTO authority can be used with code 42001 (UTM), and 42002 (Transverse Mercator) where the scene center is used as reference. Examples: EPSG:4326, AUTO:42001 -Peasting= The easting of the reference pixel. -PelevationModelName= The name of the elevation model for the orthorectification. If not given tie-point data is used. -Pheight= The height of the target product. -PincludeTiePointGrids= Whether tie-point grids should be included in the output product. Default value is 'true'. -PnoDataValue= The value used to indicate no-data. -Pnorthing= The northing of the reference pixel. -Porientation= The orientation of the output product (in degree). Valid interval is [-360,360]. Default value is '0'. -Porthorectify= Whether the source product should be orthorectified. (Not applicable to all products) Default value is 'false'. -PpixelSizeX= The pixel size in X direction given in CRS units. -PpixelSizeY= The pixel size in Y direction given in CRS units. -PreferencePixelX= The X-position of the reference pixel. -PreferencePixelY= The Y-position of the reference pixel. -Presampling= The method used for resampling of floating-point raster data. Value must be one of 'Nearest', 'Bilinear', 'Bicubic'. Default value is 'Nearest'. -Pwidth= The width of the target product. -PwktFile= A file which contains the target Coordinate Reference System in WKT format. Graph XML Format: 1.0 Reproject ${source} ${collocateWith} file string string boolean double double double double double double double integer integer boolean string double ------Fill-Hole ------Usage: gpt Fill-Hole [options]

Description: Fill holes in given product Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PNoDataValue= Sets parameter 'NoDataValue' to . Default value is '0.0'. -PsourceBands= The list of source bands.

Graph XML Format: 1.0 Fill-Hole ${source} string <.../> double ------Data-Analysis ------Usage: gpt Data-Analysis [options] Description: Computes statistics Source Options: -SsourceProduct= Sets source 'sourceProduct' to . This is a mandatory source. ------GCP-Selection2 ------Usage: gpt GCP-Selection2 [options] Description: Automatic Selection of Ground Control Points Source Options: -SsourceProduct= Sets source 'sourceProduct' to . This is a mandatory source. Parameter Options: -PapplyFineRegistration= Sets parameter 'applyFineRegistration' to . Default value is 'true'. -PcoarseRegistrationWindowHeight= Sets parameter 'coarseRegistrationWindowHeight' to . Value must be one of '32', '64', '128', '256', '512', '1024'. Default value is '128'. -PcoarseRegistrationWindowWidth= Sets parameter 'coarseRegistrationWindowWidth' to . Value must be one of '32', '64', '128', '256', '512', '1024'. Default value is '128'. -PcoherenceThreshold= The coherence threshold Valid interval is (0, *). Default value is '0.6'. -PcoherenceWindowSize= The coherence window size Valid interval is (1, 10]. Default value is '3'. -PcolumnInterpFactor= Sets parameter 'columnInterpFactor' to . Value must be one of '2', '4', '8', '16'. Default value is '2'. -PfineRegistrationWindowHeight= Sets parameter 'fineRegistrationWindowHeight' to . Value must be one of '32', '64', '128', '256', '512', '1024'. Default value is '128'. -PfineRegistrationWindowWidth= Sets parameter 'fineRegistrationWindowWidth' to . Value must be one of '32', '64', '128', '256', '512', '1024'. Default value is '128'. -PgcpTolerance= Tolerance in slave GCP validation check Valid interval is (0, *). Default value is '0.5'. -PmaxIteration= The maximum number of iterations Valid interval is (1, 10]. Default value is '2'. -PnumGCPtoGenerate= The number of GCPs to use in a grid Valid interval is (10, 10000]. Default value is '200'. -ProwInterpFactor= Sets parameter 'rowInterpFactor' to . Value must be one of '2', '4', '8', '16'. Default value is '2'. -PuseSlidingWindow= Use sliding window for coherence calculation Default value is 'false'. Graph XML Format: 1.0 GCP-Selection2 ${sourceProduct} int string string string string int double boolean string string int double boolean ------CplxIfg ------Usage: gpt CplxIfg [options]

Description: Compute interferograms from stack of coregistered images

Source Options: -SsourceProduct=<file> Sets source 'sourceProduct' to <file>. This is a mandatory source.

------PassThrough ------

Usage: gpt PassThrough [options] Description: Sets target product to source product.

Source Options: -SsourceProduct=<file> Sets source 'sourceProduct' to <file>. This is a mandatory source.

------Coherence ------

Usage: gpt Coherence [options] Description: Estimate coherence from stack of coregistered images

Source Options: -SsourceProduct= Sets source 'sourceProduct' to . This is a mandatory source. Parameter Options: -PcoherenceWindowSizeAzimuth= Size of coherence estimation window in Azimuth direction Valid interval is (1, 20]. Default value is '10'. -PcoherenceWindowSizeRange= Size of coherence estimation window in Range direction Valid interval is (1, 20]. Default value is '2'. Graph XML Format: 1.0 Coherence ${sourceProduct} int int ------Interferogram ------Usage: gpt Interferogram [options] Description: Compute interferograms from stack of coregistered images : JBLAS implementation Source Options: -SsourceProduct= Sets source 'sourceProduct' to . This is a mandatory source. Parameter Options: -PorbitPolynomialDegree= Degree of orbit (polynomial) interpolator Value must be one of '1', '2', '3', '4', '5'. Default value is '3'. -PsrpNumberPoints= Number of points for the 'flat earth phase' polynomial estimation Value must be one of '301', '401', '501', '601', '701', '801', '901', '1001'. Default value is '501'. -PsrpPolynomialDegree= Order of 'Flat earth phase' polynomial Value must be one of '1', '2', '3', '4', '5', '6', '7', '8'. Default value is '5'.

Graph XML Format: 1.0 Interferogram ${sourceProduct} int int int

------Create-LandMask ------

Usage: gpt Create-LandMask [options] Description: Creates a bitmask defining land vs ocean. Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PbyPass= Sets parameter 'byPass' to . Default value is 'false'. -Pgeometry= Sets parameter 'geometry' to . -PinvertGeometry= Sets parameter 'invertGeometry' to . Default value is 'false'. -PlandMask= Sets parameter 'landMask' to . Default value is 'true'. -PsourceBands= The list of source bands. -PuseSRTM= Sets parameter 'useSRTM' to . Default value is 'true'. Graph XML Format: 1.0 Create-LandMask ${source} string <.../> boolean boolean string boolean boolean ------Apply-Orbit-File ------Usage: gpt Apply-Orbit-File [options]

Description: Apply orbit file

Source Options: -Ssource=<file> Sets source 'source' to <file>. This is a mandatory source.

Parameter Options: -PorbitType=<string> Sets parameter 'orbitType' to <string>. Value must be one of 'DORIS Precise (ENVISAT)', 'DORIS Verified (ENVISAT)', 'DELFT Precise (ENVISAT, ERS1&2)', 'PRARE Precise (ERS1&2)'. Default value is 'DORIS Verified (ENVISAT)'.

Graph XML Format: 1.0 Apply-Orbit-File ${source} string ------Oversample ------Usage: gpt Oversample [options] Description: Oversample the datset Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PazimuthSpacing= The azimuth pixel spacing Default value is '12.5'. -PheightRatio= The height ratio of the output/input images Default value is '2.0'. -PoutputImageBy= Sets parameter 'outputImageBy' to . Value must be one of 'Image Size', 'Ratio', 'Pixel Spacing'. Default value is 'Image Size'. -PrangeSpacing= The range pixel spacing Default value is '12.5'. -PsourceBands= The list of source bands. -PtargetImageHeight= The row dimension of the output image Default value is '1000'. -PtargetImageWidth= The col dimension of the output image Default value is '1000'. -PwidthRatio= The width ratio of the output/input images Default value is '2.0'. Graph XML Format: 1.0 Oversample ${source} string <.../> string int int float float float float ------CreateStack ------Usage: gpt CreateStack [options]

Description: Collocates two or more products based on their geo-codings. Parameter Options: -Pextent= The output image extents. Value must be one of 'Master', 'Minimum', 'Maximum'. Default value is 'Master'. -PmasterBands= The list of source bands. -PresamplingType= The method to be used when resampling the slave grid onto the master grid. Value must be one of 'NONE', 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'NONE'. -PsourceBands= The list of source bands. Graph XML Format: 1.0 CreateStack ${sourceProducts} string <.../> string <.../> string string ------WriteRGB ------Usage: gpt WriteRGB [options] Description: Creates an RGB image from three source bands. Source Options: -Sinput= Sets source 'input' to . This is a mandatory source. Parameter Options: -Pblue= The zero-based index of the blue band. -Pfile= The file to which the image is written. -PformatName= Sets parameter 'formatName' to . Default value is 'png'. -Pgreen= The zero-based index of the green band. -Pred= The zero-based index of the red band. Graph XML Format: 1.0 WriteRGB ${input} int int int string file ------RemoveAntennaPattern ------Usage: gpt RemoveAntennaPattern [options] Description: Remove Antenna Pattern Source Options: -Ssource= Sets source 'source' to . This is a mandatory source. Parameter Options: -PsourceBands= The list of source bands. Graph XML Format: 1.0 RemoveAntennaPattern ${source} string <.../> ------Image-Filter ------Usage: gpt Image-Filter [options] Description: Common Image Processing Filters Source Options: -SsourceProduct= Sets source 'sourceProduct' to . This is a mandatory source. Parameter Options: -PselectedFilterName= Sets parameter 'selectedFilterName' to . -PsourceBands= The list of source bands. -PuserDefinedKernelFile= The kernel file Graph XML Format: 1.0 Image-Filter ${sourceProduct} string <.../> string file

------Mosaic ------

Usage: gpt Mosaic [options]

Description: Mosaics two or more products based on their geo-codings.

Parameter Options:
-Paverage= Average the overlapping areas. Default value is 'false'.
-PnormalizeByMean= Normalize by Mean. Default value is 'false'.
-PpixelSize= Pixel Size (m). Default value is '0'.
-PresamplingMethod= The method to be used when resampling the slave grid onto the master grid. Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'NEAREST_NEIGHBOUR'.
-PsceneHeight= Target height. Default value is '0'.
-PsceneWidth= Target width. Default value is '0'.
-PsourceBands= The list of source bands.

------Create-Coherence-Image ------

Usage: gpt Create-Coherence-Image [options]

Description: Create Coherence Image

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PcoherenceWindowSize= The coherence window size. Valid interval is (1, 10]. Default value is '5'.

------AdaptiveThresholding ------

Usage: gpt AdaptiveThresholding [options]

Description: Detect ships using Constant False Alarm Rate detector.

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PbackgroundWindowSizeInMeter= Background window size. Default value is '1000.0'.
-PguardWindowSizeInMeter= Guard window size. Default value is '400.0'.
-Ppfa= Probability of false alarm. Default value is '6.5'.
-PtargetWindowSizeInMeter= Target window size. Default value is '75'.

------ReplaceMetadata ------

Usage: gpt ReplaceMetadata [options]

Description: Replace the metadata of the first product with that of the second

Parameter Options:
-Pnote= Sets parameter 'note'. Default value is 'Replace the metadata of the first product with that of the second'.

------BandMaths ------

Usage: gpt BandMaths [options]

Description: Create a product with one or more bands using mathematical expressions.

Parameter Options:
-PbandExpression= Sets parameter 'bandExpression'.
-PbandName= Sets parameter 'bandName'.
-PbandNodataValue= Sets parameter 'bandNodataValue'.
-PbandUnit= Sets parameter 'bandUnit'.

------Oil-Spill-Detection ------

Usage: gpt Oil-Spill-Detection [options]

Description: Detect oil spill.

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PbackgroundWindowSize= Background window size. Default value is '13'.
-Pk= Threshold shift from background mean. Default value is '2.0'.
-PsourceBands= The list of source bands.

------LinearTodB ------

Usage: gpt LinearTodB [options]

Description: Converts bands to dB

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PsourceBands= The list of source bands.

------KMeansClusterAnalysis ------

Usage: gpt KMeansClusterAnalysis [options]

Description: Performs a K-Means cluster analysis.

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PclusterCount= Sets parameter 'clusterCount'. Valid interval is (0,100]. Default value is '14'.
-PiterationCount= Sets parameter 'iterationCount'. Valid interval is (0,10000]. Default value is '30'.
-PrandomSeed= Seed for the random generator, used for initialising the algorithm. Default value is '31415'.
-ProiMaskName= The name of the ROI-Mask that should be used.
-PsourceBandNames= The names of the bands being used for the cluster analysis.

------Warp2 ------

Usage: gpt Warp2 [options]

Description: Create Warp Function And Get Co-registered Images

Source Options:
-SsourceProduct= Sets source 'sourceProduct'. This is a mandatory source.

Parameter Options:
-PinterpolationMethod= Sets parameter 'interpolationMethod'. Value must be one of 'Nearest-neighbor interpolation', 'Bilinear interpolation', 'Linear interpolation', 'Cubic convolution (4 points)', 'Cubic convolution (6 points)', 'Truncated sinc (6 points)', 'Truncated sinc (8 points)', 'Truncated sinc (16 points)'. Default value is 'Bilinear interpolation'.
-PopenResidualsFile= Show the Residuals file in a text viewer. Default value is 'false'.
-PrmsThreshold= The RMS threshold for eliminating invalid GCPs. Valid interval is (0, *). Default value is '1.0'.
-PwarpPolynomialOrder= The order of WARP polynomial function. Value must be one of '1', '2', '3'. Default value is '2'.

------Object-Discrimination ------

Usage: gpt Object-Discrimination [options]

Description: Remove false alarms from the detected objects.

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PmaxTargetSizeInMeter= Maximum target size. Default value is '400.0'.
-PminTargetSizeInMeter= Minimum target size. Default value is '80.0'.

------PCA-Min ------

Usage: gpt PCA-Min [options]

Description: Computes minimum for PCA

Source Options:
-SsourceProduct= Sets source 'sourceProduct'. This is a mandatory source.

------Reprojection ------

Usage: gpt Reprojection [options]

Description: Applies a map projection

Source Options:
-ScollocateWith= The source product will be collocated with this product. This is an optional source.
-Ssource= The product which will be reprojected. This is a mandatory source.

Parameter Options:
-Pcrs= A text specifying the target Coordinate Reference System, either in WKT or as an authority code. For appropriate EPSG authority codes see (www.epsg-registry.org). AUTO authority can be used with code 42001 (UTM) and 42002 (Transverse Mercator), where the scene center is used as reference. Examples: EPSG:4326, AUTO:42001
-Peasting= The easting of the reference pixel.
-PelevationModelName= The name of the elevation model for the orthorectification. If not given, tie-point data is used.
-Pheight= The height of the target product.
-PincludeTiePointGrids= Whether tie-point grids should be included in the output product. Default value is 'true'.
-PnoDataValue= The value used to indicate no-data.
-Pnorthing= The northing of the reference pixel.
-Porientation= The orientation of the output product (in degree). Valid interval is [-360,360]. Default value is '0'.
-Porthorectify= Whether the source product should be orthorectified. (Not applicable to all products.) Default value is 'false'.
-PpixelSizeX= The pixel size in X direction given in CRS units.
-PpixelSizeY= The pixel size in Y direction given in CRS units.
-PpreserveResolution= Whether to keep original or use custom resolution. Default value is 'true'.
-PreferencePixelX= The X-position of the reference pixel.
-PreferencePixelY= The Y-position of the reference pixel.
-Presampling= The method used for resampling of floating-point raster data. Value must be one of 'Nearest', 'Bilinear', 'Bicubic'. Default value is 'Nearest'.
-PsourceBands= The list of source bands.
-Pwidth= The width of the target product.
-PwktFile= A file which contains the target Coordinate Reference System in WKT format.

------Undersample ------

Usage: gpt Undersample [options]

Description: Undersample the dataset

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PazimuthSpacing= The azimuth pixel spacing. Default value is '12.5'.
-PfilterSize= Sets parameter 'filterSize'. Value must be one of '3x3', '5x5', '7x7'. Default value is '3x3'.
-PheightRatio= The height ratio of the output/input images. Default value is '0.5'.
-Pmethod= Sets parameter 'method'. Value must be one of 'Sub-Sampling', 'LowPass Filtering'. Default value is 'LowPass Filtering'.
-PoutputImageBy= Sets parameter 'outputImageBy'. Value must be one of 'Image Size', 'Ratio', 'Pixel Spacing'. Default value is 'Image Size'.
-PrangeSpacing= The range pixel spacing. Default value is '12.5'.
-PsourceBands= The list of source bands.
-PsubSamplingX= Sets parameter 'subSamplingX'. Default value is '2'.
-PsubSamplingY= Sets parameter 'subSamplingY'. Default value is '2'.
-PtargetImageHeight= The row dimension of the output image. Default value is '1000'.
-PtargetImageWidth= The col dimension of the output image. Default value is '1000'.
-PwidthRatio= The width ratio of the output/input images. Default value is '0.5'.

------Multilook ------

Usage: gpt Multilook [options]

Description: Averages the power across a number of lines in both the azimuth and range directions

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PnAzLooks= The user defined number of azimuth looks. Valid interval is [1, *). Default value is '1'.
-PnRgLooks= The user defined number of range looks. Valid interval is [1, *). Default value is '1'.
-Pnote= Sets parameter 'note'. Default value is 'Currently, detection for complex data is performed without any resampling'.
-PoutputIntensity= For complex product output intensity or i and q. Default value is 'true'.
-PsourceBands= The list of source bands.

------Read ------

Usage: gpt Read [options]

Description: Reads a product from disk.

Parameter Options:
-Pfile= The file from which the data product is read.

------PCA-Statistic ------

Usage: gpt PCA-Statistic [options]

Description: Computes statistics for PCA

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PeigenvalueThreshold= The threshold for selecting eigenvalues. Valid interval is (0, 100]. Default value is '100'.
-PnumPCA= The number of PCA images output. Valid interval is (0, 100]. Default value is '1'.
-PselectEigenvaluesBy= Sets parameter 'selectEigenvaluesBy'. Value must be one of 'Eigenvalue Threshold', 'Number of Eigenvalues'. Default value is 'Eigenvalue Threshold'.
-PshowEigenvalues= Show the eigenvalues. Default value is '1'.
-PsourceBands= The list of source bands.
-PsubtractMeanImage= Subtract mean image. Default value is '1'.

------Convert-Datatype ------

Usage: gpt Convert-Datatype [options]

Description: Convert product data type

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PsourceBands= The list of source bands.
-PtargetDataType= Sets parameter 'targetDataType'. Value must be one of 'int8', 'int16', 'int32', 'uint8', 'uint16', 'uint32', 'float32', 'float64'. Default value is 'float32'.
-PtargetScalingStr= Sets parameter 'targetScalingStr'. Value must be one of 'Truncate', 'Linear (slope and intercept)', 'Linear (between 95% clipped Histogram)', 'Logarithmic'. Default value is 'Linear (slope and intercept)'.

------TileStackOp ------

Usage: gpt TileStackOp [options]

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PsourceBands= The list of source bands.

------EMClusterAnalysis ------

Usage: gpt EMClusterAnalysis [options]

Description: Performs an expectation-maximization (EM) cluster analysis.

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PclusterCount= Sets parameter 'clusterCount'. Valid interval is (0,100]. Default value is '14'.
-PincludeProbabilityBands= Determines whether the posterior probabilities are included as band data. Default value is 'false'.
-PiterationCount= Sets parameter 'iterationCount'. Valid interval is (0,10000]. Default value is '30'.
-PrandomSeed= Seed for the random generator, used for initialising the algorithm. Default value is '31415'.
-ProiMaskName= The name of the ROI-Mask that should be used.
-PsourceBandNames= The names of the bands being used for the cluster analysis.

------Write ------

Usage: gpt Write [options]

Description: Writes a data product to a file.

Source Options:
-Ssource= The source product to be written. This is a mandatory source.

Parameter Options:
-PclearCacheAfterRowWrite= If true, the internal tile cache is cleared after a tile row has been written. Ignored if writeEntireTileRows=false. Default value is 'false'.
-PdeleteOutputOnFailure= If true, all output files are deleted after a failed write operation. Default value is 'true'.
-Pfile= The output file to which the data product is written.
-PformatName= The name of the output file format. Default value is 'BEAM-DIMAP'.
-PwriteEntireTileRows= If true, the write operation waits until an entire tile row is computed. Default value is 'true'.

------Subset ------

Usage: gpt Subset [options]

Description: Create a spatial and/or spectral subset of a data product.

Source Options: -Ssource= The source product to create the subset from. This is a mandatory source.

Parameter Options:
-PbandNames= Sets parameter 'bandNames'.
-PcopyMetadata= Sets parameter 'copyMetadata'. Default value is 'false'.
-PfullSwath= Forces the operator to extend the subset region to the full swath. Default value is 'false'.
-PgeoRegion= The region in geographical coordinates using WKT-format, e.g. POLYGON((lon1 lat1, lon2 lat2, ..., lon1 lat1)) (make sure to quote the option due to spaces in the value).
-Pheight= Sets parameter 'height'. Default value is '1000'.
-PregionX= Sets parameter 'regionX'. Default value is '0'.
-PregionY= Sets parameter 'regionY'. Default value is '0'.
-PsubSamplingX= Sets parameter 'subSamplingX'. Default value is '1'.
-PsubSamplingY= Sets parameter 'subSamplingY'. Default value is '1'.
-PtiePointGridNames= Sets parameter 'tiePointGridNames'.
-Pwidth= Sets parameter 'width'. Default value is '1000'.

------Oil-Spill-Clustering ------

Usage: gpt Oil-Spill-Clustering [options]

Description: Remove small clusters from detected area.

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PminClusterSizeInKm2= Minimum cluster size. Default value is '0.1'.

------SubsetOp ------

Usage: gpt SubsetOp [options]

Description: Create a spatial subset of the source product.

Source Options:
-SsourceProduct= Sets source 'sourceProduct'. This is a mandatory source.

Parameter Options:
-PgeoRegion= WKT-format, e.g. POLYGON((lon1 lat1, lon2 lat2, ..., lon1 lat1)) (make sure to quote the option due to spaces in the value).
-Pheight= Sets parameter 'height'. Default value is '1000'.
-PregionX= Sets parameter 'regionX'. Default value is '0'.
-PregionY= Sets parameter 'regionY'. Default value is '0'.
-PsourceBands= The list of source bands.
-PsubSamplingX= Sets parameter 'subSamplingX'. Default value is '1'.
-PsubSamplingY= Sets parameter 'subSamplingY'. Default value is '1'.
-Pwidth= Sets parameter 'width'. Default value is '1000'.

------SRGR ------

Usage: gpt SRGR [options]

Description: Converts Slant Range to Ground Range

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PinterpolationMethod= Sets parameter 'interpolationMethod'. Value must be one of 'Nearest-neighbor interpolation', 'Linear interpolation', 'Cubic interpolation', 'Cubic2 interpolation', 'Sinc interpolation'. Default value is 'Linear interpolation'.
-PsourceBands= The list of source bands.
-PwarpPolynomialOrder= The order of WARP polynomial function. Valid interval is [1, *). Default value is '4'.

------SAR-Simulation ------

Usage: gpt SAR-Simulation [options]

Description: Rigorous SAR Simulation

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PdemName= The digital elevation model. Value must be one of 'ACE', 'GETASSE30', 'SRTM 3Sec GeoTiff'. Default value is 'SRTM 3Sec GeoTiff'.
-PdemResamplingMethod= Sets parameter 'demResamplingMethod'. Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'BILINEAR_INTERPOLATION'.
-PexternalDEMFile= Sets parameter 'externalDEMFile'.
-PexternalDEMNoDataValue= Sets parameter 'externalDEMNoDataValue'. Default value is '0'.
-PsaveLayoverShadowMask= Sets parameter 'saveLayoverShadowMask'. Default value is 'false'.
-PsourceBands= The list of source bands.

------DeburstWSS ------

Usage: gpt DeburstWSS [options]

Description: Debursts an ASAR WSS product

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-Paverage= Sets parameter 'average'. Default value is 'false'.
-PproduceIntensitiesOnly= Sets parameter 'produceIntensitiesOnly'. Default value is 'false'.
-PsubSwath= Sets parameter 'subSwath'. Value must be one of 'SS1', 'SS2', 'SS3', 'SS4', 'SS5'. Default value is 'SS1'.

------SARSim-Terrain-Correction ------

Usage: gpt SARSim-Terrain-Correction [options]

Description: Orthorectification with SAR simulation

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PapplyRadiometricNormalization= Sets parameter 'applyRadiometricNormalization'. Default value is 'false'.
-PauxFile= The auxiliary file. Value must be one of 'Latest Auxiliary File', 'Product Auxiliary File', 'External Auxiliary File'. Default value is 'Latest Auxiliary File'.
-PexternalAuxFile= The antenna elevation pattern gain auxiliary data file.
-PimgResamplingMethod= Sets parameter 'imgResamplingMethod'. Value must be one of 'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'. Default value is 'BILINEAR_INTERPOLATION'.
-PincidenceAngleForGamma0= Sets parameter 'incidenceAngleForGamma0'. Value must be one of 'Use incidence angle from Ellipsoid', 'Use projected local incidence angle from DEM'. Default value is 'Use projected local incidence angle from DEM'.
-PincidenceAngleForSigma0= Sets parameter 'incidenceAngleForSigma0'. Value must be one of 'Use incidence angle from Ellipsoid', 'Use projected local incidence angle from DEM'. Default value is 'Use projected local incidence angle from DEM'.
-PopenShiftsFile= Show range and azimuth shifts file in a text viewer. Default value is 'false'.
-PpixelSpacingInDegree= The pixel spacing in degrees. Default value is '0'.
-PpixelSpacingInMeter= The pixel spacing in meters. Default value is '0'.
-PprojectionName= The projection name. Default value is 'Geographic Lat/Lon'.
-PrmsThreshold= The RMS threshold for eliminating invalid GCPs. Valid interval is (0, *). Default value is '1.0'.
-PsaveBetaNought= Sets parameter 'saveBetaNought'. Default value is 'false'.
-PsaveDEM= Sets parameter 'saveDEM'. Default value is 'false'.
-PsaveGammaNought= Sets parameter 'saveGammaNought'. Default value is 'false'.
-PsaveLocalIncidenceAngle= Sets parameter 'saveLocalIncidenceAngle'. Default value is 'false'.
-PsaveProjectedLocalIncidenceAngle= Sets parameter 'saveProjectedLocalIncidenceAngle'. Default value is 'false'.
-PsaveSelectedSourceBand= Sets parameter 'saveSelectedSourceBand'. Default value is 'true'.
-PsaveSigmaNought= Sets parameter 'saveSigmaNought'. Default value is 'false'.
-PwarpPolynomialOrder= The order of WARP polynomial function. Value must be one of '1', '2', '3'. Default value is '1'.

------Wind-Field-Estimation ------

Usage: gpt Wind-Field-Estimation [options]

Description: Estimate wind speed and direction

Source Options:
-Ssource= Sets source 'source'. This is a mandatory source.

Parameter Options:
-PsourceBands= The list of source bands.
-PwindowSizeInKm= Window size. Default value is '20.0'.

------ProductSet-Reader ------

Usage: gpt ProductSet-Reader [options]

Description: Adds a list of sources

Parameter Options:
-PfileList= Sets parameter 'fileList'.

Graph Builder

Graphs can be created visually with the Graph Builder, processed directly from the DAT and then saved as XML files. These saved graphs can then also be used as the input for the command line Graph Processing Tool (GPT) with a different set of input data products. The Graph Builder allows the user to assemble graphs from a list of available operators and connect operator nodes to their sources. A minimal example of such a saved graph is shown below.
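The following sketch illustrates the kind of graph XML the Graph Builder saves and gpt consumes: a simple Read, Multilook, Write chain. The node ids, file names and parameter values are illustrative only and should be adapted to your own products and operators.

    <graph id="ExampleGraph">
      <version>1.0</version>
      <node id="Read">
        <operator>Read</operator>
        <sources/>
        <parameters>
          <file>input_product.dim</file>
        </parameters>
      </node>
      <node id="Multilook">
        <operator>Multilook</operator>
        <sources>
          <sourceProduct refid="Read"/>
        </sources>
        <parameters>
          <nRgLooks>4</nRgLooks>
          <nAzLooks>4</nAzLooks>
        </parameters>
      </node>
      <node id="Write">
        <operator>Write</operator>
        <sources>
          <sourceProduct refid="Multilook"/>
        </sources>
        <parameters>
          <file>output_product.dim</file>
          <formatName>BEAM-DIMAP</formatName>
        </parameters>
      </node>
    </graph>

A file like this can then be run from the command line, for example with "gpt ExampleGraph.xml", substituting a different input product in the Read node for each run.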

Right click on the top panel to add an Operator. As Operators are added, their corresponding OperatorUIs are created and added as tabs to a property sheet. The OperatorUIs accept user input for the Operator's parameters. Connect Operator graph nodes by moving the mouse over the left edge of an Operator node until a circle appears. Then drag the mouse over the Operator node you wish to designate as the source. Every node except for a Reader node will require a source. Before saving or processing a graph, the Graph Builder calls each operator to validate its parameters. Validation also occurs at the graph level to ensure there are no cycles and that each operator has the appropriate connections.

Graph Batch Processing

Batch Processing

The Batch Processing tool available via the DAT allows you to execute a single reader/writer graph for a set of products. Select the Batch Processing tool from the Graphs menu and then press the "Load" button to browse for a previously saved graph. Next, add products in the tab by pressing the "Add" button or dragging and dropping a ProductSet or Products from the Project or Products views. Set the target folder where the output will be written to and then press "Run".

Batch Processing can also be called from within the Product Library. In the Product Library, select the products you would like to import and then press the Batch Processing button. In this way you may pre-process a list of products before working with them in the DAT.

Principal Component Analysis

Principal Component Analysis Operator

This operator generates the principal component images from a stack of co-registered detected images. The Principal Component Analysis (PCA) consists of a remapping of the information of the input co-registered images into a new set of images. The output images are scaled to prevent negative pixel values.

Major Processing Steps

The PCA operator consists of the following major steps:

1. Average the pixels across the input images to compute a mean image. Optionally subtract the computed mean image from each input image.
2. Subtract the mean value of each input image (or image from step 1) from itself to produce zero-mean images.
3. Compute the covariance matrix from the zero-mean images given in step 2.
4. Perform eigenvalue decomposition of the covariance matrix.
5. Compute the PCA images by multiplying the eigenvector matrix by the zero-mean images given in step 2. Here the user can select a subset of the eigenvectors instead of using all of them. The selection is made with a user input threshold on the eigenvalues, given in percent. For example, in the case of three input images with eigenvalues a1, a2 and a3 (where a1 ≥ a2 ≥ a3), if the threshold is 80% and a1+a2 already accounts for at least 80% of the total, then a3 will not be used in computing the PCA images and only two PCA images will be produced.
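A minimal numpy sketch of these steps is given below, under the assumption that the co-registered detected bands have been stacked into a hypothetical array band_stack of shape (nBands, nPixels); it is not the toolbox's actual implementation.

    import numpy as np

    def pca_images(band_stack, eigenvalue_threshold=100.0, subtract_mean_image=True):
        data = band_stack.astype(np.float64)
        if subtract_mean_image:
            data = data - data.mean(axis=0)                     # step 1: subtract the mean image
        zero_mean = data - data.mean(axis=1, keepdims=True)     # step 2: zero-mean images
        cov = np.cov(zero_mean)                                 # step 3: nBands x nBands covariance
        eigenvalues, eigenvectors = np.linalg.eigh(cov)         # step 4: eigen-decomposition
        order = np.argsort(eigenvalues)[::-1]                   # largest eigenvalue first
        eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
        # keep the leading eigenvectors whose eigenvalues reach the percentage threshold
        explained = 100.0 * np.cumsum(eigenvalues) / eigenvalues.sum()
        n_kept = int(np.searchsorted(explained, eigenvalue_threshold) + 1)
        pca = eigenvectors[:, :n_kept].T @ zero_mean            # step 5: project onto eigenvectors
        return pca - pca.min(axis=1, keepdims=True)             # shift so pixel values are non-negative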

Parameters Used

The following parameters are used by the operator:

1. Source Bands: All bands (real or virtual) of the source product. You may select one or more bands for performing PCA. If no bands are selected, then by default all bands will be selected.
2. Eigenvalue Threshold: The threshold used in the eigenvalue selection for producing the final PCA images.
3. Show Eigenvalues: Checkbox indicating that eigenvalues are displayed automatically.
4. Subtract Mean Image: Checkbox indicating that the mean image of the user selected input images will be subtracted from each input image before Principal Component Analysis is applied.

Expectation Maximization (EM) Cluster Analysis

Introduction

Cluster analysis (or clustering) is the classification of objects into different groups, or more precisely, the partitioning of a data set into subsets (clusters or classes), so that the data in each subset (ideally) share some common trait - often proximity according to some defined distance measure. Data clustering is a common technique for statistical data analysis, which is used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. The computational task of classifying the data set into k clusters is often referred to as k-clustering.

Algorithm

The EM algorithm can be regarded as a generalization of the k-means algorithm. The main differences are:

1. Pixels are not assigned to clusters. The membership of each pixel to a cluster is defined by a (posterior) probability. For each pixel, there are as many (posterior) probability values as there are clusters, and for each pixel the sum of (posterior) probability values is equal to unity.
2. Clusters are defined by a prior probability, a cluster center, and a cluster covariance matrix. Cluster centers and covariance matrices determine a Mahalanobis distance between a cluster center and a pixel.
3. For each cluster, a pixel likelihood function is defined as a normalized Gaussian function of the Mahalanobis distance between the cluster center and the pixels.
4. Posterior cluster probabilities as well as cluster centers and covariance matrices are recalculated iteratively. In the E-step, for each cluster, the cluster prior and posterior probabilities are recalculated. In the M-step, all cluster centers and covariance matrices are recalculated from the updated posteriors, so that the resulting data likelihood function is maximized.
5. When the iteration is completed, each pixel is assigned to the cluster for which the posterior probability is maximal.
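A compact numpy sketch of one such iteration for a Gaussian mixture is given below. It assumes a hypothetical array samples of shape (nPixels, nBands) and current cluster parameters priors, means and covariances; it illustrates the E- and M-steps only and is not the toolbox's implementation.

    import numpy as np

    def em_iteration(samples, priors, means, covariances):
        n_clusters = len(priors)
        likelihoods = np.empty((samples.shape[0], n_clusters))
        for k in range(n_clusters):                              # Gaussian pixel likelihood per cluster
            diff = samples - means[k]
            inv_cov = np.linalg.inv(covariances[k])
            mahalanobis_sq = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
            norm = np.sqrt(np.linalg.det(2.0 * np.pi * covariances[k]))
            likelihoods[:, k] = np.exp(-0.5 * mahalanobis_sq) / norm
        # E-step: posterior probability of each cluster for each pixel (rows sum to 1)
        posteriors = likelihoods * priors
        posteriors /= posteriors.sum(axis=1, keepdims=True)
        # M-step: update priors, centers and covariance matrices from the posteriors
        weights = posteriors.sum(axis=0)
        priors = weights / samples.shape[0]
        means = (posteriors.T @ samples) / weights[:, None]
        covariances = np.array([np.cov(samples.T, aweights=posteriors[:, k], bias=True)
                                for k in range(n_clusters)])
        return priors, means, covariances, posteriors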

The algorithm is described in detail on the Wikipedia entry on Expectation maximization. Use this algorithm when you want to perform a cluster analysis of a small scene or region-of-interest and are not satisfied with the results obtained from the k-means algorithm. The result of the cluster analysis is written to a band named class_indices. The values in this band indicate the class indices, where a value '0' refers to the first cluster, a value of '1' refers to the second cluster, etc. The class indices are sorted according to the prior probability associated with the cluster, i.e. a class index of '0' refers to the cluster with the highest probability. Note that an index coding is attached to the class_indices band, which can be edited in the Color Manipulation Window. It is possible to change the label and the color associated with a class index. The last columns of the color manipulation window list the location of the cluster centers. Further information on the clusters is listed in the Cluster-Analysis group of the product metadata.

User Interface

The EM cluster analysis tool can be invoked by selecting the EM Cluster Analysis command in the Image Analysis submenu. In the command line it is available by means of the Graph Processing Tool gpt. Please type gpt EMClusterAnalysis -h for further information. Selecting the EM Cluster Analysis command from the Image Analysis menu pops up the following dialog:

Source Product Group
Name: Here the user specifies the name of the source product. The combo box presents a list of all products open in DAT. The user may select one of these or, by clicking on the button next to the combo box, choose a product from the file system.

Target Product Group
Name: Used to specify the name of the target product.
Save as: Used to specify whether the target product should be saved to the file system. The combo box presents a list of file formats, currently BEAM-DIMAP, GeoTIFF, and HDF5. The text field allows you to specify a target directory.
Open in DAT: Used to specify whether the target product should be opened in DAT. When the target product is not saved, it is opened in DAT automatically.

Processing Parameters Panel
Number of clusters: Use this field to specify the number of clusters. The default is 14 clusters.
Number of iterations: Use this field to specify the maximum number of iterations to be carried out. The default is 30 iterations. The cluster analysis stops when the maximum number of iterations is exceeded.
Random seed: The EM algorithm starts with a pseudo-random distribution of initial clusters. The random seed initializes the pseudo-random number generator used to generate the initial clusters. By changing the random seed, you can start with different initial clusters. Any positive integer is a valid random seed. The default seed is 31415.
Source band names: Use this field to specify the names of the source bands. Press the control key while selecting or deselecting individual bands.
Region of interest: Use this field to restrict the cluster analysis to a region-of-interest (ROI). The combo box allows you to select the band which provides the ROI.
Include probability bands: Check this box if you want the cluster posterior probabilities to be included in the target product. The target will then contain a single probability band for each cluster.

Button Group
Run: Creates the target product. The cluster analysis is actually deferred until its band data are accessed, either by writing the product to a file or by viewing its band data. When the Save as option is checked, the cluster analysis is triggered automatically.
Close: Closes the dialog.
Help: Displays this page in the Help.

Further information

A good starting point for obtaining further information on cluster analysis terms and algorithms is the Wikipedia entry on data clustering.

K-Means Cluster Analysis

Introduction

Cluster analysis (or clustering) is the classification of objects into different groups, or more precisely, the partitioning of a data set into subsets (clusters or classes), so that the data in each subset (ideally) share some common trait - often proximity according to some defined distance measure. Data clustering is a common technique for statistical data analysis, which is used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. The computational task of classifying the data set into k clusters is often referred to as k-clustering.

Algorithm

The k-means clustering tool is capable of working with arbitrarily large scenes. Given the number of clusters k, the basic algorithm implemented is:

1. Randomly choose k pixels whose samples define the initial cluster centers.
2. Assign each pixel to the nearest cluster center as defined by the Euclidean distance.
3. Recalculate the cluster centers as the arithmetic means of all samples from all pixels in a cluster.
4. Repeat steps 2 and 3 until the convergence criterion is met.
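A minimal numpy sketch of these steps is given below; samples is a hypothetical (nPixels, nBands) array built from the selected source bands, and the sketch is not the toolbox's implementation.

    import numpy as np

    def kmeans(samples, k, max_iterations=30, seed=31415):
        rng = np.random.default_rng(seed)
        centers = samples[rng.choice(samples.shape[0], size=k, replace=False)]   # step 1
        for _ in range(max_iterations):
            distances = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
            labels = distances.argmin(axis=1)                                    # step 2
            new_centers = np.array([samples[labels == j].mean(axis=0)            # step 3
                                    if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            if np.allclose(new_centers, centers):                                # step 4
                break
            centers = new_centers
        return labels, centers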

The convergence criterion is met when the maximum number of iterations specified by the user is exceeded or when the cluster centers do not change between two iterations. This algorithm should be your primary choice for performing a cluster analysis. For the analysis of large scenes, this algorithm is strongly recommended. The result of the cluster analysis is written to a band named class_indices. The values in this band indicate the class indices, where a value '0' refers to the first cluster, a value of '1' refers to the second cluster, etc. The class indices are sorted according to the number of members in the corresponding cluster, i.e. a class index of '0' refers to the cluster with the most members. Note that an index coding is attached to the class_indices band, which can be edited in the Color Manipulation Window. It is possible to change the label and the color associated with a class index. The last columns of the color manipulation window list the location of the cluster centers. The cluster centers are also listed in the Cluster-Analysis group of the product metadata.

User Interface

The k-means cluster analysis tool can be invoked by selecting the K-Means Cluster Analysis command in the Image Analysis submenu. In the command line it is available by means of the Graph Processing Tool gpt. Please type gpt KMeansClusterAnalysis -h for further information. Selecting the K-Means Cluster Analysis command from the Image Analysis menu pops up the following dialog:

Source Product Group
Name: Here the user specifies the name of the source product. The combo box presents a list of all products open in DAT. The user may select one of these or, by clicking on the button next to the combo box, choose a product from the file system.

Target Product Group
Name: Used to specify the name of the target product.
Save as: Used to specify whether the target product should be saved to the file system. The combo box presents a list of file formats, currently BEAM-DIMAP, GeoTIFF, and HDF5. The text field allows you to specify a target directory.
Open in DAT: Used to specify whether the target product should be opened in DAT. When the target product is not saved, it is opened in DAT automatically.

Processing Parameters Panel
Number of clusters: Use this field to specify the number of clusters. The default is 14 clusters.
Number of iterations: Use this field to specify the maximum number of iterations to be carried out. The default is 30 iterations. The cluster analysis stops when the maximum number of iterations is exceeded.
Random seed: The KM algorithm starts with a pseudo-random distribution of initial clusters. The random seed initializes the pseudo-random number generator used to generate the initial clusters. By changing the random seed, you can start with different initial clusters. Any positive integer is a valid random seed. The default seed is 31415.
Source band names: Use this field to specify the names of the source bands. Press the control key while selecting or deselecting individual bands.
Region of interest: Use this field to restrict the cluster analysis to a region-of-interest (ROI). The combo box allows you to select the band which provides the ROI.

Button Group
Run: Creates the target product. The cluster analysis is actually deferred until its band data are accessed, either by writing the product to a file or by viewing its band data. When the Save as option is checked, the cluster analysis is triggered automatically.
Close: Closes the dialog.
Help: Displays this page in the Help.

Further information

A good starting point for obtaining further information on cluster analysis terms and algorithms is the Wikipedia entry on data clustering.

Data Analysis

Data Analysis Operator

The operator evaluates the following local statistics for the user selected area of the image.

The following statistics are calculated:

1. Mean
2. Standard Deviation
3. Coefficient of Variation
4. Equivalent Number of Looks
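As an illustration, the following numpy sketch computes these statistics for a hypothetical array pixels taken from the selected area of an intensity image (the Equivalent Number of Looks is given here in its usual form for intensity data).

    import numpy as np

    def local_statistics(pixels):
        mean = pixels.mean()
        std = pixels.std()
        coefficient_of_variation = std / mean
        equivalent_number_of_looks = (mean / std) ** 2   # ENL for intensity data
        return mean, std, coefficient_of_variation, equivalent_number_of_looks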

Create Stack

Create Stack Operator

Create Stack is a component of coregistration. The Create Stack Operator allows collocating two spatially overlapping products. Collocating two products implies that the pixel values of one product (the slave) are resampled into the geographical raster of the other (the master).

When two products are collocated, the band data of the slave product is resampled into the geographical raster of the master product. In order to establish a mapping between the samples in the master and the slave rasters, the geographical position of a master sample is used to find the corresponding sample in the slave raster. If there is no sample for a requested geographical position, the master sample is set to the no-data value which was defined for the slave band. The collocation algorithm requires accurate geopositioning information for both master and slave products. When necessary, accurate geopositioning information may be provided by ground control points. The metadata for the stack product is copied from the metadata of the master product.
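The master-driven resampling described above can be illustrated with the following sketch using nearest-neighbour lookup. The helpers geo_position_of() and pixel_position_of(), which stand in for the master and slave geo-codings, and the value no_data_value are hypothetical; the actual operator works tile-wise and supports the other resampling methods listed below.

    import numpy as np

    def collocate_nearest_neighbour(master_shape, slave_band, no_data_value,
                                    geo_position_of, pixel_position_of):
        target = np.full(master_shape, no_data_value, dtype=slave_band.dtype)
        rows, cols = master_shape
        for r in range(rows):
            for c in range(cols):
                lat, lon = geo_position_of(r, c)        # geographic position of the master sample
                sr, sc = pixel_position_of(lat, lon)    # fractional slave pixel position (NaN if outside)
                if np.isfinite(sr) and np.isfinite(sc):
                    ri, ci = int(round(sr)), int(round(sc))
                    if 0 <= ri < slave_band.shape[0] and 0 <= ci < slave_band.shape[1]:
                        target[r, c] = slave_band[ri, ci]
        return target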

Resampling Methods Supported

The user may choose between the following resampling options:

● None (For interferometric processing, no resampling should be used here)

● Nearest Neighbour,

● Bilinear Interpolation, and

● Cubic Convolution.

Output Extents

The user can select one of the following three extents for the collocated images:

● Master: the master image extents (Master extents are always used with "None" resampling)

● Maximum: both the common coverage and the non overlapping areas

● Minimum: only the common coverage

Parameters Used

The following parameters are used by this operator:

1. Master Band: All bands (real or virtual) of the selected product. The user can select one band (for a real image) or two bands (i and q bands for a complex image) as the master band for co-registration.
2. Slave Band: All bands (real or virtual) of the selected product. The user can select one band (for a real image) or two bands (i and q bands for a complex image) as the slave band for co-registration.
3. Resampling Type: Specifies the resampling method.
4. Output Extents: The output image extent.

Product Subset

Subset Operator

If you are not interested in the whole image of a product, you may specify an area of the product to be loaded. You can select the area by entering the top left corner and the width and height. You can also specify a sub-sampling in the X or Y directions.

Band Arithmetic

Band Arithmetic Operator

The Band Arithmetic Operator is used to create new image sample values derived from existing bands, tie-point grids and flags. The source data is combined by an arithmetic expression to generate the target data. Please refer to the expression editor documentation for the syntax and capabilities of expressions.
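For example, assuming a product with two bands named band_1 and band_2 (illustrative names only), an expression such as (band_1 + band_2) / 2 would create a new band holding their per-pixel average.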

Convert Datatype

Convert Datatype Operator

The Toolbox is able to read in various supported data products, abstract the complete dataset and meta-data internally and write the data to various file formats using the plug-in writer modules. Internally, the data is stored in a Generic Product Model (GPM) and organized in a structure similar to its external representation, the BEAM-DIMAP file format. The BEAM-DIMAP format consists of an XML file containing meta-data and a file folder containing image bands in flat binary files.

The Convert Datatype Operator converts the data type of a product. To make use of the dynamic range of the target data type, the data can be rescaled in the following ways:

● no value scaling (truncation)

● linear scaling using slope and intercept: new value = slope * old value + intercept

● linear scaling between the minimum and maximum pixel values, or between the limits of a 'clipped' (%) histogram

● logarithmic scaling using slope and intercept: new value = 10**(slope * old value + intercept)

● Look-Up Table defined by the user containing the mapping
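A small numpy sketch of the linear slope/intercept mapping applied between the limits of a clipped histogram is given below; it maps a hypothetical floating-point band to unsigned 8-bit, with clip_percent=2.5 corresponding roughly to the '95% clipped histogram' option.

    import numpy as np

    def linear_to_uint8(band, clip_percent=2.5):
        # clipped-histogram limits, then: new value = slope * old value + intercept
        low, high = np.percentile(band, [clip_percent, 100.0 - clip_percent])
        slope = 255.0 / (high - low)
        intercept = -slope * low
        scaled = slope * band + intercept
        return np.clip(scaled, 0, 255).astype(np.uint8)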

This makes it possible to convert between the following data types:

● 8-bit integer

● 16-bit integer

● 32-bit integer

● 32-bit float

● complex integer (16 bits + 16 bits)

● complex float (32 bits + 32 bits)

Oversample

Oversample Operator

This operator upsamples a real or complex image through frequency domain zero-padding. The algorithm takes into account the value of the Doppler Centroid Frequency when padding the azimuth spectrum. For a real input image the upsampled image is also real, and for a complex input image the upsampled image is complex.
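A one-dimensional numpy illustration of upsampling by frequency-domain zero-padding is given below. Note that the operator additionally centres the azimuth spectrum on the Doppler centroid before padding, which this simple sketch ignores.

    import numpy as np

    def oversample_1d(signal, ratio=2):
        n = signal.size
        spectrum = np.fft.fftshift(np.fft.fft(signal))
        pad = (ratio * n - n) // 2
        padded = np.pad(spectrum, (pad, ratio * n - n - pad))    # zero-pad the spectrum
        upsampled = np.fft.ifft(np.fft.ifftshift(padded)) * ratio
        return upsampled  # complex; take np.real() for a real input signal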

Oversampled Image Size

The user can specify the output image size by selecting

● the user specified output image dimension, or

● the row, column ratios by which the input image should be multiplied, or

● in terms of output image range and azimuth pixel spacings.

Parameters Used

If the upsampled image is output by image size, then the following parameters are used by the operator:

1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands for producing upsampled images. If no bands are selected, then by default all bands are selected.
2. Output Image By: The method for determining the upsampled image dimension.
3. Output Image Rows: The row size of the upsampled image.
4. Output Image Columns: The column size of the upsampled image.
5. Use PRF Tile Size: Checkbox indicating that the tile size used in processing is PRF by image width. If not checked, the system computed tile size is used. In case the image is large, the system computed tile size should be used to avoid memory problems.

If the upsampled image is output by image dimension ratio, then the following parameters are used by the operator:

1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands for producing upsampled images. If no bands are selected, then by default all bands are selected.
2. Output Image By: The method for determining the upsampled image dimension.
3. Width Ratio: The ratio of the upsampled image width and the source image width.
4. Height Ratio: The ratio of the upsampled image height and the source image height.
5. Use PRF Tile Size: Checkbox indicating that the tile size used in processing is PRF by image width. If not checked, the system computed tile size is used. In case the image is large, the system computed tile size should be used to avoid memory problems.

If the upsampled image is output by pixel spacing, then the following parameters are used by the operator:

1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands for producing upsampled images. If no bands are selected, then by default all bands are selected.
2. Output Image By: The method for determining the upsampled image dimension.
3. Range Spacing: The range pixel spacing of the upsampled image.
4. Azimuth Spacing: The azimuth pixel spacing of the upsampled image.
5. Use PRF Tile Size: Checkbox indicating that the tile size used in processing is PRF by image width. If not checked, the system computed tile size is used. In case the image is large, the system computed tile size should be used to avoid memory problems.

Undersample

Undersample Operator

This operator downsamples a real or complex image using either a sub-sampling method or a low-pass filtering method.

Undersampling Method

● Sub-sampling method: The image is downsampled with user specified sub-sampling rates in both range and azimuth directions. Here the sub-sampling rate is a positive integer giving the step size used when reading row/column pixels from the product file. For a complex image, the i and q bands are downsampled separately, and the downsampled image is still a complex image.

● Low-pass filtering method: The image is downsampled with a pre-defined low-pass kernel moving across the image with a step size determined by the size of the required output image. For a complex image, an intensity image is computed from the i and q bands before the low-pass filtering is applied. The downsampled image is always a real image.

Undersampled Image Size

The user can determine the output image size by specifying

● the output image size, or

● the pixel spacings, or

● the downsampling ratios.

Low-Pass Kernel

● The pre-defined low-pass kernel is available in three sizes: 3x3, 5x5 and 7x7.

● The elements of the low-pass kernel are all 1's.
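A numpy sketch of the low-pass filtering method is given below: a kernel of ones is averaged at each output position, with the step size set by the desired output image size. The sub-sampling method, by contrast, simply reads every n-th row and column (image[::subSamplingY, ::subSamplingX]). The function and argument names are illustrative, not the toolbox's implementation.

    import numpy as np

    def undersample_lowpass(image, out_rows, out_cols, kernel_size=3):
        step_r = image.shape[0] / out_rows
        step_c = image.shape[1] / out_cols
        half = kernel_size // 2
        output = np.empty((out_rows, out_cols), dtype=np.float64)
        for i in range(out_rows):
            for j in range(out_cols):
                r, c = int(i * step_r), int(j * step_c)
                window = image[max(r - half, 0): r + half + 1,
                               max(c - half, 0): c + half + 1]
                output[i, j] = window.mean()          # kernel elements are all 1's
        return output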

Parameters Used

If the sub-sampling method is selected for the downsampling, then the following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing downsampled images. If no bands are selected, then by default all bands are selected.
2. Under-Sampling Method: Sub-Sampling method.
3. Sub-Sampling in X: User provided sub-sampling rate in range.
4. Sub-Sampling in Y: User provided sub-sampling rate in azimuth.

If the Lowpass Filtering method is selected for the downsampling, and the downsampled image is output by image size, then the following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing downsampled images. If no bands are selected, then by default all bands are selected.
2. Under-Sampling Method: Kernel Filtering method.
3. Filter Size: The low-pass filter size.
4. Output Image By: The method for determining the output image dimension.
5. Output Image Rows: The row size of the downsampled image.
6. Output Image Columns: The column size of the downsampled image.

If the Lowpass Filtering method is selected for the downsampling, and the downsampled image is output by image dimension ratio, then the following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing downsampled images. If no bands are selected, then by default all bands are selected.
2. Under-Sampling Method: Kernel Filtering method.
3. Filter Size: The low-pass filter size.
4. Output Image By: The method for determining the output image dimension.
5. Width Ratio: The ratio of the downsampled image width and the source image width.
6. Height Ratio: The ratio of the downsampled image height and the source image height.

If the Lowpass Filtering method is selected for the downsampling, and the downsampled image is output by pixel spacing, then the following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing downsampled images. If no bands are selected, then by default all bands are selected.
2. Under-Sampling Method: Low-Pass Filtering method.
3. Filter Size: The low-pass filter size.
4. Output Image By: The method for determining the output image dimension.
5. Range Spacing: The range pixel spacing of the downsampled image in meters.
6. Azimuth Spacing: The azimuth pixel spacing of the downsampled image in meters.

Fill DEM Hole

Fill DEM Hole Operator

The operator fills holes in a DEM product with linear interpolations in both the row and column directions. For a given DEM pixel, if its value is the same as the No Data Value (which is input by the user from the UI), then it is identified as a hole pixel. For a hole pixel, a linear interpolation in the row direction is performed first using the nearest two valid pixel values in the row. Then a similar linear interpolation is performed in the column direction. Finally, the average of the two interpolated pixel values is assigned to the hole pixel.
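A small numpy sketch of this rule for a single hole pixel is shown below (illustrative only; the operator applies it to every hole pixel of the DEM band).

    import numpy as np

    def fill_hole_pixel(dem, row, col, no_data_value):
        def interpolate(line, index):
            valid = np.where(line != no_data_value)[0]
            left, right = valid[valid < index], valid[valid > index]
            if left.size == 0 or right.size == 0:
                return None
            x0, x1 = left[-1], right[0]                 # nearest valid neighbours
            w = (index - x0) / (x1 - x0)
            return (1 - w) * line[x0] + w * line[x1]
        row_value = interpolate(dem[row, :], col)       # interpolate along the row
        col_value = interpolate(dem[:, col], row)       # interpolate along the column
        values = [v for v in (row_value, col_value) if v is not None]
        return np.mean(values) if values else no_data_value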

The following parameters are used by the operator:

1. Source Bands: Band of the DEM product.
2. No Data Value: Value for invalid pixels in the DEM.

Apply Orbit Correction

Apply Orbit File Operator

The orbit state vectors provided in the metadata of a SAR product are generally not accurate and can be refined with the precise orbit files which are available days-to-weeks after the generation of the product. The orbit file provides accurate satellite position and velocity information. Based on this information, the orbit state vectors in the abstract metadata of the product are updated.

Orbit Files Supported

The operator currently only supports ASAR and ERS products.

● For ASAR products, the DORIS precise orbit files generated by the Centre de Traitement Doris Poseidon (CTDP) and Delft University can be applied. They provide the satellite positions and velocities in ECEF coordinates every 60 seconds.

● For ERS products, the DELFT precise orbit files generated by the Delft Institute for Earth-Oriented Space Research (DEOS) can be applied. They provide the satellite ephemeris information (latitude, longitude, height) every 60 seconds. The operator first converts the satellite position from (latitude, longitude, height) to ECEF coordinates, then computes the velocity information numerically.

● Also for ERS products, the PRARE precise orbit files generated by Delft University can be applied. They provide the same information every 30 seconds.

Get Orbit Files

● DELFT orbit files can be downloaded automatically from the DELFT FTP server.

● DORIS and PRARE orbit files must be manually downloaded and placed in the folders specified in the settings dialog.

Major Processing Steps

To refine the orbit state vectors, the following steps are performed:

1. Get the start time of the source product;
2. Find the orbit file with the user specified type and the product start time;
3. For each orbit state vector, get its zero Doppler time;
4. Compute the new orbit state vector with an 8th order Lagrange interpolation using data for the 9 nearest orbit positions around the zero Doppler time.

Important update and note on the orbit interpolation

Since version v5.0.13, the interpolator used in the computation of the orbital state vectors was changed from the 3rd order to the 8th order Lagrange interpolator. Also, instead of 4, the 9 nearest orbit positions are used in the interpolation. As a result, the orbit refinement is more reliable and consistent. This update is especially important for applications that are sensitive to and dependent on accurate SAR and orbital geometry, e.g. interferometry, orthorectification, terrain correction, etc. It is strongly recommended that all users interested in these types of applications do not use versions of the software older than v5.0.13 for the orbital refinement, and upgrade to the latest version.
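The interpolation used in step 4 can be sketched as follows for one component of a state vector (e.g. the x position); times and values are hypothetical arrays read from the orbit file, and the zero Doppler time t is assumed to lie within the covered interval.

    import numpy as np

    def lagrange_interpolate(times, values, t):
        # pick the 9 samples nearest to t (an 8th order polynomial)
        nearest = np.argsort(np.abs(times - t))[:9]
        ts, vs = times[nearest], values[nearest]
        result = 0.0
        for i in range(ts.size):
            weight = 1.0
            for j in range(ts.size):
                if i != j:
                    weight *= (t - ts[j]) / (ts[i] - ts[j])
            result += weight * vs[i]
        return result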

Parameters Used

The following parameters are used by the operator:

1. Orbit Type: User can select the type of orbit file for the application. Currently the following orbit file types are supported:

❍ DORIS_VOR

❍ DORIS_POR

❍ DELFT_PRECISE_ENVISAT

❍ DELFT_PRECISE_ERS_1

❍ DELFT_PRECISE_ERS_2

❍ PRARE_PRECISE_ERS_1

❍ PRARE_PRECISE_ERS_2

Calibration

Calibration Operator

The objective of SAR calibration is to provide imagery in which the pixel values can be directly related to the radar backscatter of the scene. Though uncalibrated SAR imagery is sufficient for qualitative use, calibrated SAR images are essential to quantitative use of SAR data. Typical SAR data processing, which produces level 1 images, does not include radiometric corrections and significant radiometric bias remains. Therefore, it is necessary to apply the radiometric correction to SAR images so that the pixel values of the SAR images truly represent the radar backscatter of the reflecting surface. The radiometric correction is also necessary for the comparison of SAR images acquired with different sensors, or acquired from the same sensor but at different times, in different modes, or processed by different processors. This Operator performs different calibrations for ASAR, ERS, ALOS and Radarsat-2 products deriving the sigma nought images. Optionally gamma nought and beta nought images can also be created.

Products Supported

● ASAR (IMS, IMP, IMM, APP, APS, APM, WSM) and ERS products (SLC, IMP) are fully supported

● Third-party SAR missions: not fully supported. Please refer to the Supported_Products_4A.xls file available from http://liferay.array.ca:8080/web/nest/documentation

ASAR Calibration

For ground range detected products, the following corrections are applied:

● incidence angle

● absolute calibration constant

In the event that the antenna pattern used to process an ASAR product is superseded, the operator removes the original antenna pattern and applies a new, updated one. The old antenna pattern gain data is obtained from the external XCA file specified in the metadata of the source product. The new antenna pattern gain data and the calibration constant are obtained from the user specified XCA file. For XCA file selection, the user has the following options:

● latest auxiliary file (the most recent XCA file available in the local repository)

● product auxiliary file (the XCA file specified in the product metadata)

● external auxiliary file (user provided XCA file)

If "product auxiliary file" is selected, then no retro-calibration is performed, i.e. no antenna pattern gain is removed or applied. By default the latest XCA file available for the product is used. For slant range complex products, the following corrections are applied:

● incidence angle

● absolute calibration constant

● range spreading loss

● antenna pattern gain

The antenna pattern gain data and the calibration constant are obtained from the user specified XCA file. For XCA file selection, the user has the following options:

● latest auxiliary file (the most recent XCA file available in the local repository)

● external auxiliary file (user provided XCA file)

By default, the latest auxiliary file available for the product will be used for the calibration. The default output of the calibration is the sigma0 image. The user can also choose to output gamma0 and beta0 images as virtual bands in the target product.

In the following, the calibration process is described for the different types of ASAR products.

IMS Products

The sigma nought image can be derived from ESA's ASAR level 1 IMS products as follows (adapted from [1]):

where

● DN²i,j is the pixel intensity for pixel i, j

● K is the absolute calibration constant

● αi,j is incidence angle

● Ri,j is the slant range distance

● Rref is the reference slant range distance

● θi,j is the look angle

● G is the antenna pattern gain
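In indicative form, with the symbols listed above, the IMS expression has the general shape

    sigma0_i,j = (DN²_i,j / K) · sin(alpha_i,j) · (R_i,j / R_ref)³ · 1 / G²(theta_i,j)

i.e. the pixel intensity is scaled by the calibration constant, the local incidence angle, the range spreading loss relative to the reference slant range, and the two-way antenna pattern gain; the exact expression and constants are given in [1].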

APS Products

The methodology to derive the sigma nought is the same as for the IMS data, but an additional factor (R / Rref) must be taken into account:

IMP, APP, IMM, APM, WSM Products

In contrast to IMS and APS products, ASAR ground range imagery (IMP, APP, IMM, APM, WSM) has already had antenna gain pattern compensation (based on ellipsoid approximations) and range spreading loss correction applied during the formation of the images. Therefore the antenna pattern correction applied to the image must be removed when an updated external XCA file is available for the product. The sigma nought image can be derived from ESA's ASAR level 1 IMP, APP, IMM, APM, WSM products as follows (adapted from [1]):

For the detailed ASAR calibration algorithm, the reader is referred to [1].

ERS Calibration

The operator is able to calibrate ERS VMP, ERS PGS CEOS and ERS PGS ENVISAT ESA standard products generated by different ESA Processing and Archiving Facilities, such as the German PAF (D-PAF), the Italian PAF (I-PAF) and the United-Kingdom PAF (UK-PAF), and at acquisition stations such as PDHS-K (Kiruna) and PDHS-E (Esrin). For ERS-1 ground range products, the following corrections are applied:

● incidence angle

● calibration constant

● replica pulse power variations

● analogue to digital converter non-linearity

For ERS-1 slant range product, the following corrections are applied:

● incidence angle

● calibration constant

● analogue to digital converter non-linearity

● antenna elevation pattern

● range spreading loss

For ERS-2 ground range product, the following corrections are applied:

● incidence angle

● calibration constant

● analogue to digital converter non-linearity

For ERS-2 slant range product, the following corrections are applied:

● incidence angle

● calibration constant

● analogue to digital converter non-linearity

● antenna elevation pattern

● range spreading loss

For the detailed ERS product calibration algorithm, the reader is referred to [2].

ALOS PALSAR Calibration

The operator performs absolute radiometric calibration for ALOS PALSAR level 1.1 or 1.5 products. For ALOS PALSAR L1.1 and L1.5 products, the following corrections have already been applied:

● range spreading loss correction

● antenna pattern gain correction

● incidence angle correction

Therefore the operator applies only the absolute calibration constant correction to the products. For the detailed ALOS PALSAR product calibration algorithm, the reader is referred to [3].

Radarsat-2 Calibration

The operator performs absolute radiometric calibration for Radarsat-2 products by applying the sigma0, beta0 and gamma0 look-up tables provided in the product. For the detailed Radarsat-2 product calibration algorithm, the reader is referred to [4].

TerraSAR-X

The operator performs absolute radiometric calibration for TerraSAR-X products by applying the simplified approach in which Noise Equivalent Beta Naught is neglected. Only the calibration constant correction and the incidence angle correction are applied. For the detailed TerraSAR-X product calibration algorithm, the reader is referred to [5].

Cosmo-SkyMed

The operator performs absolute radiometric calibration for Cosmo-SkyMed products by applying a few product factor corrections. For the detailed Cosmo-SkyMed product calibration algorithm, the reader is referred to [6].

Parameters Used

The parameters used by the operator are as follows:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for calibration. If no bands are selected, then by default all bands are used for calibration. The operator is able to detect the right input band.

2. Auxiliary File: User selected XCA file for antenna pattern correction. The following options are available: Latest Auxiliary File, Product Auxiliary File (for detected products only) and External Auxiliary File. By default, the Latest Auxiliary File is used.

3. Scale in dB: Checkbox indicating that the calibrated product is saved in dB scale. If not checkmarked, the product is saved in linear scale.

4. Create gamma0 virtual band: Checkbox indicating that a gamma0 image is created as a virtual band. If not checkmarked, no gamma0 image is created.

5. Create beta0 virtual band: Checkbox indicating that a beta0 image is created as a virtual band. If not checkmarked, no beta0 image is created.

Reference:

[1] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN, ENVI-CLVL-EOPG-TN-03-0010, Issue 1, Rev. 5, October 2004

[2] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS SAR Calibration: Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-HL09, Issue 2, Rev. 5f, November 2004

[3] Lavalle M., Absolute Radiometric and Polarimetric Calibration of ALOS PALSAR Products, Issue 1, Revision 2, 01/04/2008

[4] RADARSAT Data Product Specification, RSI-GS-026, Revision 3, May 8, 2000

[5] Radiometric Calibration of TerraSAR-X data, TSXX-ITD-TN-0049-radiometric_calculations_I1.00.doc, 2008

[6] For further details about Cosmo-SkyMed calibration please contact the Cosmo-SkyMed Help Desk at info.cosmo@e-geos.it

Remove Antenna Pattern

Remove Antenna Pattern Operator

This operator removes the antenna pattern and range spreading loss corrections applied to the original ASAR and ERS products. For ERS products, it also removes the replica pulse power correction and applies the analogue to digital converter (ADC) power loss correction. This operator cannot be applied to multilooked products. Details of the functions of the operator are given below.

ASAR Products

For ground range detected products, the following corrections are removed:

● antenna pattern gain

● range spreading loss

For slant range complex products, such as ASAR IMS and APS products, no antenna pattern or range spreading loss correction has been applied, therefore the operator is not applicable to these products.

ERS Products

For ground range products, the following operations are performed:

● remove antenna pattern gain

● remove range spreading loss

● remove replica pulse power (ERS-2 only)

● apply ADC correction

For slant range products, the following operations are performed:

● remove replica pulse power (ERS-2 only)

● apply ADC correction

Other Products

For other products, such as ALOS PALSAR and RADARSAT-2 products, the operator is not applicable.

The parameter used by the operator is as follows:

1. Source Band: All bands (amplitude or intensity) of the source product. The user can select one or more bands. If no bands are selected, then by default all bands are used for the operation. The operator is able to detect the right input band.

Reference:

[1] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN, ENVI-CLVL-EOPG-TN-03-0010, Issue 1, Rev. 5, October 2004

[2] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS SAR Calibration: Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-HL09, Issue 2, Rev. 5f, November 2004

GCP Selection

GCP Selection Operator

The GCP Selection operator is a component of coregistration. Image co-registration is an essential step for Interferometric SAR (InSAR) imaging. It aligns one or more slave images with a master image in such a way that each pixel of the co-registered slave image represents the same point on the Earth surface as its corresponding pixel in the master image.

The co-registration is accomplished through two major processing steps: GCP selection and WARP. In the GCP selection step, a set of uniformly spaced Ground Control Points (GCPs) is first generated in the master image, then their corresponding GCPs in the slave image are computed. In the WARP processing step, these GCP pairs are used to construct a WARP distortion function, which establishes a mapping between pixels in the master and slave images. Once the WARP function has been computed, the co-registered image is generated by mapping the slave image pixels onto the master image.

This operator computes the slave GCPs by coarse registration, or by coarse and fine registration, depending on whether the input images are real or complex. For real input images, only coarse registration is performed, while for complex images both coarse and fine registration are performed. The fine registration uses an image coherence technique to further increase the precision of the GCPs.

Coarse Registration

The coarse registration is achieved using a cross-correlation operation between the images on a series of imagettes defined across the images. The major processing steps are listed as follows:

1. For a given master GCP, find the initial slave GCP using the geographical position information of the GCP.

2. Determine the imagettes surrounding the master and slave GCPs using the user selected coarse registration window size.

3. Compute the new slave GCP position by performing cross-correlation of the master and slave imagettes.

4. If the row or column shift of the new slave GCP from the previous position is no less than the user selected GCP tolerance and the maximum number of iterations has not been reached, then move the slave imagette to the new GCP position and go back to step 3. Otherwise, save the new slave GCP and stop.

Those GCPs for which the maximum number of iterations has been reached, or whose final GCP shift is still greater than the tolerance, are eliminated as invalid GCPs.
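As an illustration of step 3 of the coarse registration above (a minimal sketch, not the operator's implementation; the function name and the use of NumPy FFT-based circular correlation are assumptions):

    import numpy as np

    def estimate_offset(master, slave):
        """Estimate the (row, col) shift that re-aligns 'slave' with 'master'
        by locating the peak of their FFT-based cross-correlation."""
        m = master - master.mean()          # remove means so the peak reflects structure
        s = slave - slave.mean()
        cc = np.fft.ifft2(np.fft.fft2(m) * np.conj(np.fft.fft2(s)))
        peak = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
        # Convert peak indices to signed shifts.
        return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, cc.shape))

    # Example: a slave imagette offset from the master by (-3, +2) rows/columns.
    rng = np.random.default_rng(0)
    master = rng.standard_normal((64, 64))
    slave = np.roll(master, (-3, 2), axis=(0, 1))
    print(estimate_offset(master, slave))   # (3, -2): the shift that re-aligns the slave

In the operator itself this estimate is refined iteratively by moving the slave imagette and repeating the correlation, as described in step 4.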

Fine Registration

The additional fine registration for complex images is achieved by maximizing the complex coherence between the images on a series of imagettes defined across the images. It is assumed that the coarse registration has been performed before this operation. Some major processing steps are given below:

1. For each given master-slave GCP pair, get the complex imagettes surrounding the master and slave GCPs using the user selected coarse registration window size.

2. Compute the initial coherence of the two imagettes.

3. Starting from the initial slave GCP position, the best sub-pixel shift of the slave GCP is computed such that the slave imagette at the new GCP position gives the maximum coherence with the master imagette. Powell's method is used in the optimization [1].

This processing step is optional for complex image co-registration and the user can skip it by uncheckmarking the "Apply fine Registration" box in the dialog box.

Coherence Computation

Given a master imagette I1 and a slave imagette I2, there are two ways to compute the coherence of the two complex imagettes.

● Method 1: Let I1 and I2 be R x C imagettes and denote by I2* the complex conjugate of I2. Then the coherence is computed by the equation below (see the sketch after this list).

● Method 2: The coherence is computed with a 3x3 sliding window (the user can change the size) in two steps:

1. First, for each pixel in the imagette, a 3x3 window centered at the pixel is determined for both master and slave imagettes, and the coherence is computed for the two windows using the equation above.

2. The coherences computed for all pixels in the imagette are averaged to get the final coherence for the imagette.
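A hedged LaTeX sketch of the coherence equation referenced by both methods (the standard sample coherence of two complex imagettes; verify against the operator's documentation):

    \gamma = \frac{\left| \sum_{r=1}^{R} \sum_{c=1}^{C} I_1(r,c)\, I_2^{*}(r,c) \right|}{\sqrt{\sum_{r=1}^{R} \sum_{c=1}^{C} \left| I_1(r,c) \right|^{2} \, \sum_{r=1}^{R} \sum_{c=1}^{C} \left| I_2(r,c) \right|^{2}}}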

The user can select the method to use by selecting the "Compute Coherence with Sliding Window" radio button.

Parameters Used

The parameters used by the operator are as follows:

1. Number of GCPs: The total number of GCPs used for the co-registration.

2. Coarse Registration Window Width: The window width for cross-correlation in coarse GCP selection. It must be a power of 2.

3. Coarse Registration Window Height: The window height for cross-correlation in coarse GCP selection. It must be a power of 2.

4. Row Interpolation Factor: The row upsampling factor used in the cross-correlation operation. It must be a power of 2.

5. Column Interpolation Factor: The column upsampling factor used in the cross-correlation operation. It must be a power of 2.

6. Max Iterations: The maximum number of iterations for computing the coarse slave GCP position.

7. GCP Tolerance: The stopping criterion for slave GCP selection.

8. Apply fine Registration: Checkbox indicating that fine registration is applied for complex image co-registration.

9. Coherence Window Size: The dimension of the sliding window used in coherence computation.

10. Coherence Threshold: Only GCPs with coherence above this threshold will be used in co-registration.

11. Fine Registration Window Width: The window width for coherence calculation in fine GCP selection. It must be a power of 2.

12. Fine Registration Window Height: The window height for coherence calculation in fine GCP selection. It must be a power of 2.

13. Compute Coherence with Sliding Window: If selected, a sliding window with the dimension given in 9 will be used in coherence computation. Otherwise, the coherence will be computed directly from all pixels in the Fine Registration Window without using a sliding window.

Reference:

[1] William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, Second Edition, 1992

Multilook

Multilook Operator

Generally, an original SAR image appears speckled due to inherent speckle noise. To reduce this speckled appearance, several images are incoherently combined as if they corresponded to different looks of the same scene. This processing is generally known as multilook processing. The resulting multilooked image improves image interpretability. Additionally, multilook processing can be used to produce an application product with a nominal image pixel size.

Multilook Method

There are two ways to implement the multilook processing:

● The multilooked image can be produced by spatial-domain averaging of a single look image, either by direct averaging or by convolution with a specific 2D kernel.

● The multilooked image can be produced by a frequency-domain method using sub-spectral bandwidths.

This operator implements the spatial-domain multilook method by averaging the single look image with a small sliding window.

Selecting Range and Azimuth Looks

In selecting the number of range looks and the number of azimuth looks, the user has two options:

● GR square pixel: the user specifies the number of range looks while the number of azimuth looks is computed from the ground range spacing and the azimuth spacing (see the sketch after this list). The window size is then determined by the number of range looks and the number of azimuth looks. As a result, an image with approximately square pixel spacing on the ground is produced.

● Independent looks: the number of looks in range and azimuth can be selected independently. The window size is then determined by the number of range looks and the number of azimuth looks.
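As an illustration of the 'GR square pixel' option (a minimal sketch under the assumption that the azimuth look count is chosen to approximately equalize the output spacings; the actual operator may round or constrain the values differently):

    def looks_for_square_pixel(n_range_looks, ground_range_spacing_m, azimuth_spacing_m):
        """Pick the number of azimuth looks so that the multilooked range and
        azimuth pixel spacings are approximately equal (assumed rule)."""
        target_spacing = n_range_looks * ground_range_spacing_m
        n_azimuth_looks = max(1, round(target_spacing / azimuth_spacing_m))
        # 'Mean GR Square Pixel' as the average of the two resulting spacings.
        mean_gr_square_pixel = 0.5 * (target_spacing + n_azimuth_looks * azimuth_spacing_m)
        return n_azimuth_looks, mean_gr_square_pixel

    # Example: 12.5 m ground range spacing, 3.2 m azimuth spacing, 2 range looks.
    print(looks_for_square_pixel(2, 12.5, 3.2))   # (8, 25.3)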

Parameters Used

The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing multilooked images. If no bands are selected, then by default all bands are selected.

2. GR Square Pixel: If selected, the number of azimuth looks is computed based on the user selected number of range looks, and the range and azimuth spacings are approximately the same in the multilooked image.

3. Independent Looks: If selected, the number of range looks and the number of azimuth looks are selected independently by the user.

4. Number of Range Looks: The number of range looks.

5. Number of Azimuth Looks: The number of azimuth looks.

6. Mean GR Square Pixel: The average of the range and azimuth pixel spacings in the multilooked image. It is computed based on the number of range looks, the number of azimuth looks and the source image pixel spacings, and is available only when 'GR Square Pixel' is selected.

7. Output Intensity: This checkbox is for complex products only. If not checked, any user selected bands (I, Q, intensity or phase) are multilooked and output individually. If checked, the user can only select I/Q or intensity bands and the output is a multilooked intensity band.

Reference:

Small D., Schubert A., Guide to ASAR Geocoding, RSL-ASAR-GC-AD, Issue 1.0, March 2008

Speckle Filter

Speckle Filter Operator

SAR images have inherent salt-and-pepper-like texturing called speckle, which degrades the quality of the image and makes interpretation of features more difficult. Speckle is caused by random constructive and destructive interference of the de-phased but coherent return waves scattered by the elementary scatterers within each resolution cell. Speckle noise reduction can be applied either by spatial filtering or by multilook processing.

Filters Supported

The operator supports the following speckle filters for handling speckle noise with different distributions (Gaussian, multiplicative or Gamma); a minimal sketch of the basic Lee filter follows the list:

● Mean

● Median

● Frost

● Lee

● Refined Lee

● Gamma-MAP
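For orientation only, here is a minimal sketch of the basic Lee (local statistics) filter. It assumes a multiplicative speckle model with a user-supplied number of looks and is not the operator's implementation; the SciPy-based helper is an assumption for illustration.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(image, window_size=7, num_looks=1.0):
        """Basic Lee filter sketch: pull each pixel towards the local mean by a
        weight derived from the local variance and an assumed speckle variance."""
        img = np.asarray(image, dtype=np.float64)
        mean = uniform_filter(img, window_size)
        mean_sq = uniform_filter(img ** 2, window_size)
        var = np.maximum(mean_sq - mean ** 2, 0.0)      # local variance
        noise_var = (mean ** 2) / num_looks             # multiplicative speckle model (assumption)
        k = np.divide(np.maximum(var - noise_var, 0.0), var,
                      out=np.zeros_like(var), where=var > 0)
        return mean + k * (img - mean)

    # Example on a synthetic speckled scene (single-look gamma speckle).
    rng = np.random.default_rng(0)
    scene = np.full((128, 128), 100.0)
    speckled = scene * rng.gamma(shape=1.0, scale=1.0, size=scene.shape)
    print(round(float(speckled.std()), 1), round(float(lee_filter(speckled).std()), 1))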

Parameters Used

For most filters, the following parameters should be selected (see Figure 1 for an example):

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing filtered images. If no bands are selected, then by default all bands will be selected. For complex products, only the intensity band can be selected.

2. Filter: The speckle filter.

3. Size X: The filtering kernel width.

4. Size Y: The filtering kernel height.

5. Frost Damping Factor: The damping factor for the Frost filter.

Figure 1. Dialog box for Mean filter.

For Frost filter, one extra parameter should be selected (see figure 2):

1. Frost Damping Factor: The damping factor for Frost filter.

Figure 2. Dialog box for Frost filter

For Refined Lee filter, the following parameter should be selected (see Figure 3):

1. Edge Threshold: A threshold for detecting edges. An area of 7x7 pixels with local variance lower than this threshold is considered flat, and the normal Local Statistics Filter is used for the filtering. If the local variance is greater than the threshold, then the area is considered an edge area and the Refined Lee filter will be used for the filtering.

Figure 3. Dialog box for Refined Lee filter.

Reference:

[1] J. S. Lee, E. Pottier, Polarimetric SAR Radar Imaging: From Basics to Applications, CRC Press, Taylor & Francis Group, 2009.

[2] G. S. Robinson, "Edge Detection by Compass Gradient Masks", Computer Graphics and Image Processing, vol. 6, no. 5, Oct. 1977, pp. 492-502.

[3] V. S. Frost, J. A. Stiles, K. S. Shanmugan, J. C. Holtzman, "A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-4, pp. 157-166, 1982.

[4] Mansourpour M., Rajabi M.A., Blais J.A.R., "Effects and Performance of Speckle Noise Reduction Filters on Active Radar and SAR Images", http://people.ucalgary.ca/~blais/Mansourpour2006.pdf

Multi-Temporal Speckle Filter

Multi-Temporal Speckle Filter Operator

SAR images have inherent salt-and-pepper-like texturing called speckle, which degrades the quality of the image and makes interpretation of features more difficult. Speckle is caused by random constructive and destructive interference of the de-phased but coherent return waves scattered by the elementary scatterers within each resolution cell. Multi-temporal filtering is one of the commonly used speckle noise reduction techniques.

Multi-Temporal Speckle Filtering

For a sequence of N registered multitemporal images, with intensity at position (x, y) in the kth image denoted by Ik(x, y), the temporally filtered images are given by:

for k = 1, ..., N, where E[I] is the local mean value of pixels in a window centered at (x, y) in image I.
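A hedged LaTeX reconstruction of this relation (the standard multi-temporal filter form; verify against the reference below):

    J_k(x,y) = \frac{E[I_k(x,y)]}{N} \sum_{j=1}^{N} \frac{I_j(x,y)}{E[I_j(x,y)]}, \qquad k = 1,\ldots,N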

Pre-Processing Steps

The operator has the following two pre-processing steps:

1. The first step is calibration, in which σ0 is derived from the digital number at each pixel. This ensures that σ0 values from different times and in different parts of the image are comparable.

2. The second step is registration of the multitemporal images.

Here it is assumed that the pre-processing has been performed before applying this operator. The input to the operator is assumed to be a product with multiple calibrated and co-registered bands.

Parameters Used

The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing the filtered image. If no bands are selected, then by default all bands will be selected.

2. Window Size: Dimension of the sliding window that is used in computing the spatial average in each image of the temporal sequence. The supported window sizes are 3x3, 5x5, 7x7, 9x9 and 11x11.

Reference:

S. Quegan, T. L. Toan, J. J. Yu, F. Ribbes and N. Floury, "Multitemporal ERS SAR Analysis Applied to Forest Mapping", IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 2, March 2000.

Warp

Warp Operator

The Warp operator is a component of coregistration. This operator computes a warp function from the master-slave ground control point (GCP) pairs produced by the GCP Selection operator, and generates the final co-registered image.

Compute Warp Polynomial

Once the valid master-slave GCP pairs are known, a polynomial of a certain order is computed using a least squares method, which maps the master GCPs onto the slave GCPs. This function is known as the warp function and is used to perform the co-registration. Generally the warp function computed from the initial master-slave GCPs is not the final warp function used in co-registration, because some GCPs introduce large errors. These GCPs must be removed. Therefore the warp function is determined in an iterative manner:

1. First a warp function is computed using the initial master-slave GCP pairs.

2. Then the master GCPs are mapped to the slave image with the warp function, and the residuals between the mapped master GCPs and their corresponding slave GCPs are computed. The root mean square (RMS) and the standard deviation of the residuals are also computed.

3. Next, the master-slave GCP pairs are filtered with the mean RMS. GCP pairs with RMS greater than the mean RMS are eliminated.

4. The same procedure (steps 1 to 3) is repeated up to 2 times if needed, and each time the remaining master-slave GCP pairs from the previous elimination are used.

5. Finally the master-slave GCP pairs are filtered with the user selected RMS threshold and the final warp function is computed with the remaining master-slave GCP pairs.
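As an illustration of steps 1 and 2 (a minimal sketch, not the operator's implementation; a first-order polynomial and NumPy least squares are assumed):

    import numpy as np

    def fit_linear_warp(master_xy, slave_xy):
        """Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y by least squares,
        mapping master GCPs (x, y) onto slave GCPs (x', y'), and return the
        per-GCP RMS residuals used in the elimination steps."""
        x, y = master_xy[:, 0], master_xy[:, 1]
        A = np.column_stack([np.ones_like(x), x, y])            # design matrix
        coeffs, *_ = np.linalg.lstsq(A, slave_xy, rcond=None)   # (3, 2) coefficients
        residuals = slave_xy - A @ coeffs                       # per-GCP row/col residuals
        rms = np.sqrt((residuals ** 2).sum(axis=1))
        return coeffs, rms

    # Example with four GCP pairs related by a pure shift (coordinates are illustrative).
    master = np.array([[10.0, 10.0], [500.0, 12.0], [15.0, 480.0], [505.0, 490.0]])
    slave = master + np.array([3.2, -1.5])
    coeffs, rms = fit_linear_warp(master, slave)
    print(coeffs)   # approximately [[3.2, -1.5], [1, 0], [0, 1]]
    print(rms)      # near zero, since the shift is consistent for all GCPs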

The WARP polynomial order is specified by user in the dialog box.

● The valid values for the polynomial order are 1, 2 and 3.

● For most cases where the input images do not suffer from a high level of distortion, linear warp is generally enough and is recommended as default.

● Higher order warp should be used only when image suffers from a high level distortion and a very good co-registration accuracy is required.

● Higher order warp requires more GCPs and can introduce large distortions in image regions containing only a few GCPs.

Generate Co-Registered Image

Once the warp function which maps pixels in the master image to the slave image has been determined, the co-registered image can be obtained by interpolation. Currently the following interpolation methods are supported:

1. Nearest-neighbour interpolation

2. Bilinear interpolation

3. Bicubic interpolation

4. Cubic interpolation (4 and 6 points)

5. Truncated sinc interpolation (6, 8 and 16 points)

Interpolation for InSAR

For interferometric applications, Cubic or Truncated sinc kernels are recommended. These kernels assure the optimal interpolation in terms of signal-to-noise ratio.

Residual File

The residual file is a text file containing information about the master and slave GCPs before and after each elimination. The residual for a GCP pair is the error introduced by the warp function and is a good indicator of the quality of the warp function. It is often very useful to check the information contained in the residual file to see whether the co-registration process can be considered successful. For example, the "RMS mean" value can be used as an approximate figure of merit for the co-registration. The user can view the residual file by checkmarking the "Show Residuals" box in the dialog box. The detailed information contained in the residual file is listed below:

● Band name

● Warp coefficients

● Master GCP coordinates

● Slave GCP coordinates

● Row and column residuals

● Root mean square errors (RMS)

● Row residual mean

● Row residual standard deviation

● Column residual mean

● Column residual standard deviation

● RMS mean

● RMS standard deviation

Parameters Used

The following parameters are used by the operator:

1. RMS Threshold: The criterion for eliminating invalid GCPs. In general, the smaller the threshold, the better the GCP quality, but the lower the number of GCPs.

2. Warp Polynomial Order: The degree of the warp polynomial.

3. Interpolation Method: The interpolation method used to compute the co-registered slave image pixel values.

4. Show Residuals: Display the GCP residual file if selected.

WSS Deburst

ASAR WSS Deburst Operator

For each subswath, the WSS products contain many overlapping 'bursts' along the flight direction, with a zero Doppler time associated with each range line.

The first step in performing the azimuth debursting is to find all range lines belonging to the same zero-Doppler time.

The Operator provides the following options:

● Produce intensities only: The complex data is converted to intensity and the peak of each burst is used.

● Average intensities: When producing intensities, the corresponding burst lines are averaged.

If "Product intensities only" is not selected, then complex data will be produced along with virtual bands for intensity and phase.

WSS Mosaic

ASAR WSS Mosaicing

ASAR WSS products differ from conventional image products in that the data from the five subswaths acquired by the five antenna beams SS1 through SS5 are stored in separate image records. The five WSS beams acquire data with a substantial overlap (typically several hundred range samples, ~9 km). The incidence angle variation of 16 to 43 degrees across beams SS1 through SS5 creates large differences in the nominal near and far range backscatter intensities. An ASAR WSS product is delivered as a single data file containing the subswath data records arranged sequentially.

The Deburst, Calibrate, Detect and Mosaic graph can be used to process an ASAR WSS product. It will deburst and split the WSS product into five subswath products, apply calibration and multilooking, and then mosaic them back into one product.

SENTINEL-1 TOPSAR Deburst and Merge

For the TOPSAR IW and EW SLC products, each product consists of one image per swath per polarization. IW products have 3 swaths and EW products have 5 swaths. Each sub-swath image consists of a series of bursts, where each burst was processed as a separate SLC image. The individually focused complex burst images are included, in azimuth-time order, into a single sub-swath image, with black-fill demarcation in between, similar to the ENVISAT ASAR Wide ScanSAR SLC products.

For IW, a focused burst has a duration of 2.75 s and a burst overlap of ~50-100 samples. For EW, a focused burst has a duration of 3.19 s. Overlap increases in range within a sub-swath. Images for all bursts in all sub-swaths of an IW SLC product are re-sampled to a common pixel spacing grid in range and azimuth. Burst synchronisation is ensured for both IW and EW products. Unlike ASAR WSS, which contains a large overlap between beams, for S-1 TOPSAR the imaged ground area of adjacent bursts only marginally overlaps in azimuth, just enough to provide contiguous coverage of the ground. This is due to the one natural azimuth look inherent in the data. For GRD products, the bursts are concatenated and sub-swaths are merged to form one image. Bursts overlap minimally in azimuth and sub-swaths overlap minimally in range. Bursts for all beams have been resampled to a common grid during azimuth post-processing.

In the range direction, for each line in all sub-swaths with the same time tag, adjacent sub-swaths are merged. For the overlapping region in range, the merge is done along the optimal sub-swath cut. The optimal cut is defined from the Noise Equivalent Sigma Zero (NESZ) profiles between two sub-swaths. The NESZ is provided in the product. If the two NESZ profiles intersect inside the overlapping region, the position of the intersection point is the optimal cut. If the two profiles do not intersect, all the points in the overlapping region are taken from the sub-swath that has the lowest NESZ over the overlap region.

In the azimuth direction, bursts are merged according to their zero Doppler time. Note that the black-fill demarcation is not distinctly zero at the end or start of a burst; due to resampling, the data fades into and out of zero. The merge time is determined by the average of the last line of the first burst and the first line of the next burst. For each range cell, the merging time is quantised to the nearest output azimuth cell to eliminate any data fading to zero.
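As an illustrative sketch of the azimuth merge-time rule just described (the function name and parameters are assumptions, not the product's API):

    def azimuth_merge_time(last_line_time_burst1, first_line_time_burst2,
                           first_output_line_time, azimuth_time_interval):
        """Average the two burst edge times and quantise the result to the
        nearest output azimuth cell (sketch of the rule described above)."""
        t_merge = 0.5 * (last_line_time_burst1 + first_line_time_burst2)
        line_index = round((t_merge - first_output_line_time) / azimuth_time_interval)
        return first_output_line_time + line_index * azimuth_time_interval

    # Example with made-up zero Doppler times (seconds of day) and a 2 ms line interval.
    print(azimuth_merge_time(36000.50, 36000.70, 36000.00, 0.002))   # 36000.6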

Create Elevation

Create Elevation Operator

The Create Elevation Operator allows you to add an elevation band to a product from within the Graph Processing framework. The altitudes are computed by looking up each pixel in a selected high-resolution DEM. The SRTM 90m DEM and the GETASSE30 DEM will be automatically downloaded in tiles as needed. For the ACE and ASTER DEMs, you will need to download the datasets yourself and point the respective path in the Settings dialog to the location of the datasets. Note: The Terrain Correction Operator uses DEMs directly, so you do not need to create an elevation band before Terrain Correction.

Orthorectification

Range Doppler Terrain Correction Operator

Due to topographical variations of a scene and the tilt of the satellite sensor, distances can be distorted in SAR images. Image data not directly at the sensor’s nadir location will have some distortion. Terrain corrections are intended to compensate for these distortions so that the geometric representation of the image will be as close as possible to the real world. The geometry of topographical distortions in SAR imagery is shown below. Here we can see that point B with elevation h above the ellipsoid is imaged at position B’ in the SAR image, though its real position is B". The offset Δr between B’ and B" exhibits the effect of topographic distortions.

Terrain Correction allows geometric overlays of data from different sensors and/or geometries.

Orthorectification Algorithm

The Range Doppler Terrain Correction Operator implements the Range Doppler orthorectification method [1] for geocoding SAR images from a single 2D raster radar geometry. It uses available orbit state vector information in the metadata or external precise orbits (only for ERS and ASAR), the radar timing annotations, and the slant to ground range conversion parameters together with the reference DEM data to derive the precise geolocation information.

Products Supported

● ASAR (IMS, IMP, IMM, APP, APM, WSM), ERS products (SLC, IMP), RADARSAT-2, TerraSAR-X are fully supported.

● Some third party missions are not fully supported. Please refer to the Supported_Mission-Product_vs_Operators_table.xls

DEM Supported

Currently, only DEMs with geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84 (and height in meters) are properly supported. Various types of Digital Elevation Models can be used (ACE, GETASSE30, ASTER, SRTM 3Sec GeoTiff).

By default the directory C:\AuxData and two sub-directories, DEMs and Orbits, are used to store the DEMs. However, the AuxData directory and the DEMs sub-directory are customizable from the Settings dialog (which can be found under the Edit tab in the main menu bar). The location of the default DEMs must be specified in the dataPath field of the Settings dialog in order to be properly used by the Terrain Correction Operator. The SRTM v.4 (3” tiles) from the Joint Research Center FTP (xftp.jrc.it) will automatically be downloaded in tiles for the area covered by the image to be orthorectified. The tiles will be downloaded to the folder C:\AuxData\DEMs\SRTM_DEM\tiff. The Test Connectivity functionality under the Help tab in the main menu bar allows the user to verify whether the SRTM downloading is working properly.

Please note that for ACE and SRTM, the height information (referred to the geoid EGM96) is automatically corrected to obtain heights relative to the WGS84 ellipsoid. For the ASTER DEM, the height correction is not yet applied. Note also that the SRTM DEM covers the area between -60 and 60 degrees latitude. Therefore, for orthorectification of products over high latitude areas, a different DEM should be used.

The user can also use an external DEM file in GeoTiff format which, as specified above, must be in geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84 (and height in meters).

Pixel Spacing

Besides the default suggested pixel spacing computed with parameters in the metadata, the user can specify an output pixel spacing for the orthorectified image. The pixel spacing can be entered in both meters and degrees. If the pixel spacing in one unit is entered, then the pixel spacing in the other unit is computed automatically. The calculations of the pixel spacing in meters and in degrees are given by the following equations:

pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI;

pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180;

where EquatorialEarthRadius = 6378137.0 m and PolarEarthRadius = 6356752.314245 m as given in WGS84.
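A minimal sketch of these two conversions, using the WGS84 constants quoted above:

    import math

    EQUATORIAL_EARTH_RADIUS = 6378137.0      # m, WGS84
    POLAR_EARTH_RADIUS = 6356752.314245      # m, WGS84

    def meters_to_degrees(pixel_spacing_in_meter):
        return pixel_spacing_in_meter / EQUATORIAL_EARTH_RADIUS * 180.0 / math.pi

    def degrees_to_meters(pixel_spacing_in_degree):
        return pixel_spacing_in_degree * POLAR_EARTH_RADIUS * math.pi / 180.0

    # Example: a 12.5 m pixel spacing corresponds to roughly 1.12e-4 degrees.
    print(meters_to_degrees(12.5))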

Projection Supported

Right now the following projections are supported:

● Geographic Lat/Lon

● Lambert Conformal Conic

● Stereographic

● Transverse Mercator

● UTM

● Universal Polar Stereographic North

● Universal Polar Stereographic South

Radiometric Normalization

This option implements a radiometric normalization based on the approach proposed by Kellndorfer et al., TGRS, Sept. 1998, where

In the current implementation θDEM is the local incidence angle projected into the range plane, defined as the angle between the incoming radiation vector and the surface normal vector projected into the range plane [2]. The range plane is the plane formed by the satellite position, the backscattering element position and the earth centre.
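A hedged LaTeX sketch of this normalization (consistent with the sin θ ratio quoted for the individual missions below; verify against the Kellndorfer et al. reference):

    \sigma^{0}_{NORM} = \sigma^{0}_{Ellipsoid}\,\frac{\sin\theta_{DEM}}{\sin\theta_{el}}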

Note that among the σ0, γ0 and β0 bands output in the target product, only σ0 is a real band, while γ0 and β0 are virtual bands expressed in terms of σ0 and the incidence angle. Therefore, σ0 and the incidence angle are automatically saved and output if γ0 or β0 is selected.

For the σ0 and γ0 calculation, by default the projected local incidence angle from DEM [2] option (local incidence angle projected into the range plane) is selected, but the option of incidence angle from ellipsoid correction (incidence angle from the tie points of the source product) is also available.

ENVISAT ASAR

The correction factors [3] applied to the original image depend on whether the product is complex or detected and on the selection of the Auxiliary file (ASAR XCA file).

Complex Product (IMS, APS)

● Latest AUX File (& use projected local incidence angle computed from DEM): The most recent ASAR XCA file available from C:\Program Files\NEST4A\auxdata\envisat compatible with the product date is automatically selected. According to this XCA file, the calibration constant, range spreading loss and antenna pattern gain are obtained.

❍ Applied factors:

1. apply projected local incidence angle into the range plane correction

2. apply calibration constant correction based on the XCA file

3. apply range spreading loss correction based on the XCA file and DEM geometry

4. apply antenna pattern gain correction based on the XCA file and DEM geometry

● External AUX File (& use projected local incidence angle computed from DEM): User can select a specific ASAR XCA file available from the installation folder or from another repository. According to this selected XCA file, calibration constant, range spreading loss and antenna pattern gain are computed.

❍ Applied factors:

1. apply projected local incidence angle into the range plane correction

2. apply calibration constant correction based on the selected XCA file

3. apply range spreading loss correction based on the selected XCA file and DEM geometry

4. apply antenna pattern gain correction based on the selected XCA file and DEM geometry

Detected Product (IMP, IMM, APP, APM, WSM)

● Latest AUX File (& use projected local incidence angle computed from DEM): The most recent ASAR XCA available from the installation folder compatible with product date is automatically selected. Basically with this option all the correction factors applied to the original SAR image based on product XCA file used during the focusing, such as antenna pattern gain and range spreading loss, are removed first. Then new factors computed according to the new ASAR XCA file together with calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:

1. remove antenna pattern gain correction based on product XCA file

2. remove range spreading loss correction based on product XCA file

3. apply projected local incidence angle into the range plane correction

4. apply calibration constant correction based on new XCA file

5. apply range spreading loss correction based on new XCA file and DEM geometry

6. apply new antenna pattern gain correction based on new XCA file and DEM geometry

● Product AUX File (& use projected local incidence angle computed from DEM): The product ASAR XCA file employed during the focusing is used. With this option the antenna pattern gain and range spreading loss are kept from the original product and only the calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:

1. apply projected local incidence angle into the range plane correction

2. apply calibration constant correction based on product XCA file

● External AUX File (& use projected local incidence angle computed from DEM): User can select a specific ASAR XCA file available from the installation folder or from another repository. Basically with this option all the correction factors applied to the original SAR image based on product XCA file used during the focusing, such as antenna pattern gain and range spreading loss, are removed first. Then new factors computed according to the new selected ASAR XCA file together with calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:

1. remove antenna pattern gain correction based on product XCA file

2. remove range spreading loss correction based on product XCA file

3. apply projected local incidence angle into the range plane correction

4. apply calibration constant correction based on new selected XCA file

5. apply range spreading loss correction based on new selected XCA file and DEM geometry

6. apply new antenna pattern gain correction based on new selected XCA file and DEM geometry

Please note that if the product has previously been multilooked, then the radiometric normalization does not correct the antenna pattern and range spreading loss, and only the constant and incidence angle corrections are applied. This is because the original antenna pattern and range spreading loss corrections cannot be properly removed due to the pixel averaging performed by multilooking. If the user needs to apply radiometric normalization, multilooking and terrain correction to a product, then the user graph “RemoveAntPat_Multilook_Orthorectify” can be used.

ERS 1&2 For ERS 1&2 the radiometric normalization cannot be applied directly to original ERS product. Because of the Analogue to Digital Converter (ADC) power loss correction , a step before is required to properly handle the data. It is necessary to employ the Remove Antenna Pattern Operator which performs the following operations: For Single look complex (SLC, IMS) products

● apply ADC correction

For Ground range (PRI, IMP) products:

● remove antenna pattern gain

● remove range spreading loss

● apply ADC correction

After applying the Remove Antenna Pattern Operator to the ERS data, the radiometric normalisation can be performed during the Terrain Correction. The applied factors in the case of the "USE projected angle from the DEM" selection are:

1. apply projected local incidence angle into the range plane correction

2. apply absolute calibration constant correction

3. apply range spreading loss correction based on product metadata and DEM geometry

4. apply new antenna pattern gain correction based on product metadata and DEM geometry

To apply radiometric normalization and terrain correction for ERS, user can also use one of the following user graphs:

● RemoveAntPat_Orthorectify

● RemoveAntPat_Multilook_Orthorectify

RADARSAT-2

● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed applying the product LUTs and multiplying by (sin •DEM/sin •el), where •DEM is projected local incidence angle into the range plane and •el is the incidence angle computed from the tie point grid respect to ellipsoid.

● In case of selection of "USE incidence angle from Ellipsoid", the radiometric normalisation is performed applying the product LUT.

These LUTs allow one to convert the digital numbers found in the output product to sigma-nought, beta-nought, or gamma-nought values (depending on which LUT is used).

TerraSAR-X

● In the case of the "USE projected angle from the DEM" selection, the radiometric normalisation is performed by applying:

1. projected local incidence angle into the range plane correction

2. absolute calibration constant correction

● In the case of the "USE incidence angle from Ellipsoid" selection, the radiometric normalisation is performed by applying:

1. projected local incidence angle into the range plane correction

2. absolute calibration constant correction

Please note that the simplified approach where Noise Equivalent Beta Naught is neglected has been implemented.

Cosmo-SkyMed

● In the case of the "USE projected angle from the DEM" selection, the radiometric normalisation is performed by deriving σ0Ellipsoid [7] and then multiplying by (sin θDEM / sin θel), where θDEM is the projected local incidence angle into the range plane and θel is the incidence angle computed from the tie point grid with respect to the ellipsoid.

● In the case of the "USE incidence angle from Ellipsoid" selection, the radiometric normalisation is performed by deriving σ0Ellipsoid [7].

Definitions:

1. The local incidence angle is defined as the angle between the normal vector of the backscattering element (i.e. the vector perpendicular to the ground surface) and the incoming radiation vector (i.e. the vector formed by the satellite position and the backscattering element position) [2].

2. The projected local incidence angle from DEM is defined as the angle between the incoming radiation vector (as defined above) and the surface normal vector projected into the range plane. Here the range plane is the plane formed by the satellite position, the backscattering element position and the earth centre [2].

Parameters Used

The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands. If no bands are selected, then by default all bands are used.

2. Digital Elevation Model: DEM type. Please refer to the DEM Supported section above.

3. External DEM: User specified external DEM file. Currently only DEMs in GeoTiff format with geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84 (and height in meters) are accepted.

4. DEM Resampling Method: Interpolation method for obtaining elevation values from the original DEM file. The following interpolation methods are available: nearest neighbour, bi-linear, cubic convolution, bi-sinc and bi-cubic interpolation.

5. Image Resampling Method: Interpolation methods for obtaining pixel values from the source image. The following interpolation methods are available: nearest neighbour, bi-linear, cubic and bi-sinc interpolation.

6. Pixel Spacing (m): The user can specify the pixel spacing in meters for the orthorectified image. If no pixel spacing is specified, then the default pixel spacing computed from the source SAR image is used. For details, the reader is referred to the Pixel Spacing section above.

7. Pixel Spacing (deg): The user can also specify the pixel spacing in degrees. If either of the two pixel spacing values is changed, the other one is updated automatically. For details, the reader is referred to the Pixel Spacing section above.

8. Map Projection: The map projection type. By default the output image is expressed in WGS84 lat/lon geographic coordinates.

9. Save DEM as a band: Checkbox indicating that the DEM will be saved as a band in the target product.

10. Save local incidence angle as a band: Checkbox indicating that the local incidence angle will be saved as a band in the target product.

11. Save projected (into the range plane) local incidence angle as a band: Checkbox indicating that the projected local incidence angle will be saved as a band in the target product.

12. Save selected source band: Checkbox indicating that the orthorectified images of the user selected bands will be saved without applying radiometric normalization.

13. Apply radiometric normalization: Checkbox indicating that radiometric normalization will be applied to the orthorectified image.

14. Save Sigma0 as a band: Checkbox indicating that sigma0 will be saved as a band in the target product. The sigma0 can be generated using the projected local incidence angle, the local incidence angle or the incidence angle from the ellipsoid.

15. Save Gamma0 as a band: Checkbox indicating that gamma0 will be saved as a band in the target product. The gamma0 can be generated using the projected local incidence angle, the local incidence angle or the incidence angle from the ellipsoid.

16. Save Beta0 as a band: Checkbox indicating that beta0 will be saved as a band in the target product.

17. Auxiliary File: Available only for ASAR. User selected ASAR XCA file for radiometric normalization. The following options are available: Latest Auxiliary File, Product Auxiliary File (for detected products only) and External Auxiliary File. By default, the Latest Auxiliary File is used. Details about the corrections applied according to the XCA selection are provided in the Radiometric Normalisation – ENVISAT ASAR section above.

Reference:

[1] Small D., Schubert A., Guide to ASAR Geocoding, RSL-ASAR-GC-AD, Issue 1.0, March 2008

[2] Schreier G., SAR Geocoding: Data and Systems, Wichmann, 1993

[3] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN, ENVI-CLVL-EOPG-TN-03-0010, Issue 1, Rev. 5, October 2004

[4] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS SAR Calibration: Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-HL09, Issue 2, Rev. 5f, November 2004

[5] RADARSAT-2 PRODUCT FORMAT DEFINITION, RN-RP-51-2713, Issue 1/7, March 14, 2008

[6] Radiometric Calibration of TerraSAR-X data, TSXX-ITD-TN-0049-radiometric_calculations_I1.00.doc, 2008

[7] For further details about Cosmo-SkyMed calibration please contact the Cosmo-SkyMed Help Desk at info.cosmo@e-geos.it

Orthorectification

SAR Simulation Terrain Correction Operator

The operator generates an orthorectified image using rigorous SAR simulation.

Major Processing Steps

Some major steps of the procedure are listed below:

1. SAR simulation: Generate a simulated SAR image using the DEM, the geocoding and orbit state vectors from the original SAR image, and mathematical modeling of the SAR imaging geometry. The simulated SAR image will have the same dimensions and resolution as the original image. For detailed steps and parameters used in SAR simulation, please refer to the SAR Simulation Operator.

2. Co-registration: The simulated SAR image (master) and the original SAR image (slave) are co-registered and a WARP function is produced. The WARP function maps each pixel in the simulated SAR image to its corresponding position in the original SAR image. For detailed steps and parameters used in co-registration, please refer to the GCP Selection Operator.

3. Terrain correction: Traverse the DEM grid that covers the imaging area. For each cell in the DEM grid, compute its corresponding pixel position in the simulated SAR image using the SAR model. Its corresponding pixel position in the original SAR image can then be found with the help of the WARP function. Finally, the pixel value for the orthorectified image can be obtained from the original SAR image using interpolation.

Products Supported

● ASAR (IMS, IMP, IMM, APP, APM, WSM), ERS products (SLC, IMP), RADARSAT-2, TerraSAR-X are fully supported.

● Some third party missions are not fully supported. Please refer to the Supported_Mission-Product_vs_Operators_table.xls

DEM Supported

Currently, only DEMs with geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84 (and height in meters) are properly supported. Various types of Digital Elevation Models can be used (ACE, GETASSE30, ASTER, SRTM 3Sec GeoTiff).

By default the directory C:\AuxData and two sub-directories, DEMs and Orbits, are used to store the DEMs. However, the AuxData directory and the DEMs sub-directory are customizable from the Settings dialog (which can be found under the Edit tab in the main menu bar). The location of the default DEMs must be specified in the dataPath field of the Settings dialog in order to be properly used by the Terrain Correction Operator. The SRTM v.4 (3” tiles) from the Joint Research Center FTP (xftp.jrc.it) will automatically be downloaded in tiles for the area covered by the image to be orthorectified. The tiles will be downloaded to the folder C:\AuxData\DEMs\SRTM_DEM\tiff. The Test Connectivity functionality under the Help tab in the main menu bar allows the user to verify whether the SRTM downloading is working properly.

Please note that for ACE and SRTM, the height information (referred to the geoid EGM96) is automatically corrected to obtain heights relative to the WGS84 ellipsoid. For the ASTER DEM, the height correction is not yet applied. Note also that the SRTM DEM covers the area between -60 and 60 degrees latitude. Therefore, for orthorectification of products over high latitude areas, a different DEM should be used.

The user can also use an external DEM file in GeoTiff format which, as specified above, must be in geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84 (and height in meters).

Note that the same DEM is used by both SAR simulation and Terrain correction. The DEM is selected through SAR Simulation UI.

Pixel Spacing

Besides the default suggested pixel spacing computed with parameters in the metadata, the user can specify an output pixel spacing for the orthorectified image. The pixel spacing can be entered in both meters and degrees. If the pixel spacing in one unit is entered, then the pixel spacing in the other unit is computed automatically. The calculations of the pixel spacing in meters and in degrees are given by the following equations:

pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI;

pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180;

where EquatorialEarthRadius = 6378137.0 m and PolarEarthRadius = 6356752.314245 m as given in WGS84.

Projection Supported

Right now the following projections are supported by NEST:

● Geographic Lat/Lon

● Lambert Conformal Conic

● Stereographic

● Transverse Mercator

● UTM

● Universal Polar Stereographic North

● Universal Polar Stereographic South

Radiometric Normalization

This option implements a radiometric normalization based on the approach proposed by Kellndorfer et al., TGRS, Sept. 1998, where

In the current implementation θDEM is the local incidence angle projected into the range plane, defined as the angle between the incoming radiation vector and the surface normal vector projected into the range plane [2]. The range plane is the plane formed by the satellite position, the backscattering element position and the earth centre.

Note that among the σ0, γ0 and β0 bands output in the target product, only σ0 is a real band, while γ0 and β0 are virtual bands expressed in terms of σ0 and the incidence angle. Therefore, σ0 and the incidence angle are automatically saved and output if γ0 or β0 is selected. For the σ0 and γ0 calculation, by default the projected local incidence angle from DEM [2] option (local incidence angle projected into the range plane) is selected, but the option of incidence angle from ellipsoid correction (incidence angle from the tie points of the source product) is also available.

Products Supported

● ASAR (IMS, IMP, IMM, APP, APM, WSM), ERS products (SLC, IMP), RADARSAT-2, TerraSAR-X are fully supported.

● Some third party missions are not fully supported. Please refer to the Supported_Mission-Product_vs_Operators_table.xls

ENVISAT ASAR

The correction factors [3] applied to the original image depend on whether the product is complex or detected and on the selection of the Auxiliary file (ASAR XCA file).

Complex Product (IMS, APS)

● Latest AUX File (& use projected local incidence angle computed from DEM): The most recent ASAR XCA available from C:\Program Files\NEST4A\auxdata\envisat compatible with product date is automatically selected. According to this XCA file, calibration constant, range spreading loss and antenna pattern gain are obtained.

❍ Applied factors:

1. apply projected local incidence angle into the range plane correction

2. apply calibration constant correction based on the XCA file

3. apply range spreading loss correction based on the XCA file and DEM geometry

4. apply antenna pattern gain correction based on the XCA file and DEM geometry

● External AUX File (& use projected local incidence angle computed from DEM): User can select a specific ASAR XCA file available from C:\Program Files\NEST4A\auxdata\envisat or from another repository. According to this selected XCA file, calibration constant, range spreading loss and antenna pattern gain are computed.

❍ Applied factors:

1. apply projected local incidence angle into the range plane correction

2. apply calibration constant correction based on the selected XCA file

3. apply range spreading loss correction based on the selected XCA file and DEM geometry

4. apply antenna pattern gain correction based on the selected XCA file and DEM geometry

Detected Product (IMP, IMM, APP, APM, WSM)

● Latest AUX File (& use projected local incidence angle computed from DEM): The most recent ASAR XCA available from C:\Program Files\NEST4A\auxdata\envisat compatible with product date is automatically selected. Basically with this option all the correction factors applied to the original SAR image based on product XCA file used during the focusing, such as antenna pattern gain and range spreading loss, are removed first. Then new factors computed according to the new ASAR XCA file together with calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:

1. remove antenna pattern gain correction based on product XCA file

2. remove range spreading loss correction based on product XCA file

3. apply projected local incidence angle into the range plane correction

4. apply calibration constant correction based on new XCA file

5. apply range spreading loss correction based on new XCA file and DEM geometry

6. apply new antenna pattern gain correction based on new XCA file and DEM geometry

● Product AUX File (& use projected local incidence angle computed from DEM): The product ASAR XCA file employed during the focusing is used. With this option the antenna pattern gain and range spreading loss are kept from the original product and only the calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:

1. apply projected local incidence angle into the range plane correction

2. apply calibration constant correction based on product XCA file

● External AUX File (& use projected local incidence angle computed from DEM): User can select a specific ASAR XCA file available from the installation folder or from another repository. Basically with this option all the correction factors applied to the original SAR image based on product XCA file used during the focusing, such as antenna pattern gain and range spreading loss, are removed first. Then new factors computed according to the new selected ASAR XCA file together with calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:

1. remove antenna pattern gain correction based on product XCA file

2. remove range spreading loss correction based on product XCA file

3. apply projected local incidence angle into the range plane correction

4. apply calibration constant correction based on new selected XCA file

5. apply range spreading loss correction based on new selected XCA file and DEM geometry

6. apply new antenna pattern gain correction based on new selected XCA file and DEM geometry

Please note that if the product has previously been multilooked, then the radiometric normalization does not correct the antenna pattern and range spreading loss, and only the constant and incidence angle corrections are applied. This is because the original antenna pattern and range spreading loss corrections cannot be properly removed due to the pixel averaging performed by multilooking. If the user needs to apply radiometric normalization, multilooking and terrain correction to a product, then the user graph “RemoveAntPat_Multilook_Orthorectify” can be used.

ERS 1&2 For ERS 1&2 the radiometric normalization cannot be applied directly to original ERS product. Because of the Analogue to Digital Converter (ADC) power loss correction , a step before is required to properly handle the data. It is necessary to employ the Remove Antenna Pattern Operator which performs the following operations: For Single look complex (SLC, IMS) products

● apply ADC correction

For Ground range (PRI, IMP) products:

● remove antenna pattern gain

● remove range spreading loss

● apply ADC correction

After applying the Remove Antenna Pattern Operator to the ERS data, the radiometric normalisation can be performed during the Terrain Correction. The applied factors in the case of the "USE projected angle from the DEM" selection are:

1. apply projected local incidence angle into the range plane correction

2. apply absolute calibration constant correction

3. apply range spreading loss correction based on product metadata and DEM geometry

4. apply new antenna pattern gain correction based on product metadata and DEM geometry

To apply radiometric normalization and terrain correction for ERS, user can also use one of the following user graphs:

● RemoveAntPat_Orthorectify

● RemoveAntPat_Multilook_Orthorectify

RADARSAT-2

● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed applying the product LUTs and multiplying by (sin •DEM/sin •el), where •DEM is projected local incidence angle into the range plane and •el is the incidence angle computed from the tie point grid respect to ellipsoid.

● In case of selection of "USE incidence angle from Ellipsoid", the radiometric normalisation is performed applying the product LUT.

These LUTs allow one to convert the digital numbers found in the output product to sigma-nought, beta-nought, or gamma-nought values (depending on which LUT is used).

TerraSAR-X

● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed applying 1. projected local incidence angle into the range plane correction 2. absolute calibration constant correction

● In case of " USE incidence angle from Ellipsoid " selection, the radiometric normalisation is performed applying 1. projected local incidence angle into the range plane correction 2. absolute calibration constant correction Please note that the simplified approach where Noise Equivalent Beta Naught is neglected has been implemented.

Cosmo-SkyMed

● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed deriving σ0Ellipsoid [7] and then multiplying by (sinθDEM / sinθel), where θDEM is the projected local

incidence angle into the range plane and θel is the incidence angle computed from the tie point grid respect to ellipsoid.

● In case of the "USE incidence angle from Ellipsoid" selection, the radiometric normalisation is performed by deriving σ0Ellipsoid [7].

Definitions:

1. The local incidence angle is defined as the angle between the normal vector of the backscattering element (i.e. vector perpendicular to the ground surface) and the incoming radiation vector (i.e. vector formed by the satellite position and the backscattering element position) [2].

2. The projected local incidence angle from DEM is defined as the angle between the incoming radiation vector (as defined above) and the projected surface normal vector into range plane. Here range plane is the plane formed by the satellite position, backscattering element position and the earth centre [2].

Layover-Shadow Mask Generation

This operator can also generate a layover-shadow mask for the orthorectified image. The layover effect is caused by the fact that the signal backscattered from the top of a mountain is received earlier than the signal from the bottom, i.e. the fore slope is reversed. The shadow effect is caused by the fact that no information is received from the back slope. The operator generates the layover-shadow mask as a separate band using the 2-pass algorithm given in section 7.4 in [2]. The value coding for the layover-shadow mask is defined as follows:

● 0 - corresponding image pixel is not in layover, nor shadow

● 1 - corresponding image pixel is in layover

● 2 - corresponding image pixel is in shadow

● 3 - corresponding image pixel is in layover and shadow

The user can choose to output the layover-shadow mask by checking the "Save Layover-Shadow Mask as band" box in the SAR-Simulation tab. To visualize the layover-shadow mask, first bring up the orthorectified image, then go to the Layer Manager and add the layover-shadow mask band as a layer.
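For users who export the orthorectified product and inspect the mask band outside DAT, the value coding above can be translated into separate boolean layers. The snippet below is a minimal, hypothetical illustration in Python/NumPy; the array name `mask` and the way the band is read are assumptions, not part of NEST.

```python
import numpy as np

# Hypothetical example: 'mask' holds the layover-shadow band as a 2-D integer
# array with the value coding 0-3 listed above.
mask = np.array([[0, 1],
                 [2, 3]], dtype=np.uint8)

in_layover = (mask == 1) | (mask == 3)   # pixel imaged in layover
in_shadow  = (mask == 2) | (mask == 3)   # pixel lies in radar shadow
clear      = (mask == 0)                 # neither layover nor shadow

print(in_layover)
print(in_shadow)
print(clear)
```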

Parameters Used

The following parameters are used by the Terrain Correction step:

1. RMS Threshold: The criterion for eliminating invalid GCPs (see the help for the Warp Operator for details).
2. WARP Polynomial Order: The degree of the WARP polynomial. The valid values are 1, 2 and 3 (see the help for the Warp Operator for details).
3. DEM Resampling Method: Interpolation method for obtaining elevation values from the original DEM file. The following interpolation methods are available: nearest neighbour, bi-linear, cubic convolution, bi-sinc and bi-cubic interpolations.
4. Image Resampling Method: Interpolation methods for obtaining pixel values from the source image. The following interpolation methods are available: nearest neighbour, bi-linear, cubic and bi-sinc interpolations.
5. Pixel Spacing (m): The pixel spacing in meters for the orthorectified image. If no pixel spacing is specified, the default pixel spacing computed from the source SAR image is used. For details, the reader is referred to the Pixel Spacing section above.
6. Pixel Spacing (deg): The pixel spacing in degrees. If the value of either pixel spacing is changed, the other one is updated automatically. For details, the reader is referred to the Pixel Spacing section above.
7. Save DEM as band: Checkbox indicating that the DEM will be saved as a band in the target product.
8. Save local incidence angle as band: Checkbox indicating that the local incidence angle will be saved as a band in the target product.
9. Save projected local incidence angle as band: Checkbox indicating that the projected local incidence angle will be saved as a band in the target product.
10. Save selected source band: Checkbox indicating that orthorectified images of the user-selected bands will be saved without applying radiometric normalization.
11. Apply radiometric normalization: Checkbox indicating that radiometric normalization will be applied to the orthorectified image.
12. Save Sigma0 as a band: Checkbox indicating that Sigma0 will be saved as a band in the target product. The Sigma0 can be generated using the projected local incidence angle, the local incidence angle or the incidence angle from the ellipsoid.
13. Save Gamma0 as a band: Checkbox indicating that Gamma0 will be saved as a band in the target product. The Gamma0 can be generated using the projected local incidence angle, the local incidence angle or the incidence angle from the ellipsoid.
14. Save Beta0 as a band: Checkbox indicating that Beta0 will be saved as a band in the target product.
15. Auxiliary File: Available only for ASAR. The user-selected ASAR XCA file for radiometric normalization. The following options are available: Latest Auxiliary File, Product Auxiliary File (for detected products only) and External Auxiliary File. By default, the Latest Auxiliary File is used. Details about the corrections applied according to the XCA selection are provided in the Radiometric Normalisation – Envisat ASAR section above.
16. Show Range and Azimuth Shifts: Checkbox indicating that range and azimuth shifts (in m) for all valid GCPs will be displayed. The row and column shifts of each slave GCP away from its initial position are output to a text file.

Figure 1. SAR Sim Terrain Correction dialog box

Detailed Algorithm for Layover-Shadow Mask Generation

1. First a DEM image is created by the SAR Simulation operator using the geocoding of the original SAR image. The DEM image has the same dimensions as the original SAR image, with each pixel value being the elevation of the corresponding pixel in the original SAR image.
2. Then the 2-pass method (see section 7.4 in [2]) is applied to each range line in the DEM image to generate the layover and shadow mask for the DEM image. The 2-pass method compares the slant range of a DEM cell to the slant ranges of other cells in the same range line to determine whether the DEM cell will be imaged in a layover or shadow area (a simplified sketch of this test follows this list).
3. Next the layover-shadow mask for the DEM image is mapped to the simulated image to create the mask for the simulated image. The mapping is done using the SAR simulation.
4. The layover-shadow mask for the simulated SAR image is then mapped to the original SAR image using the WARP function, which was created during co-registration of the simulated SAR image and the original SAR image.
5. Finally the mask for the original SAR image is mapped to the orthorectified image domain to produce the mask for the orthorectified image.
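The sketch below illustrates the kind of monotonicity test a two-pass method applies to one range line (step 2). It is a simplified stand-in for the algorithm of section 7.4 in [2], not the NEST implementation; it assumes that the slant range and the look (off-nadir) angle of every DEM cell along the line have already been computed and are ordered from near to far range.

```python
import numpy as np

def layover_shadow_line(slant_range, look_angle):
    """Simplified two-pass test for one range line, ordered near to far range.

    slant_range : slant range of each DEM cell (m)
    look_angle  : look (off-nadir) angle of each DEM cell (rad)
    Returns an array coded 0 (clear), 1 (layover), 2 (shadow), 3 (both).
    """
    n = len(slant_range)
    mask = np.zeros(n, dtype=np.uint8)

    # Pass 1 (layover): slant range should increase monotonically with ground
    # range; a cell closer to the sensor than a previously seen cell is folded
    # back into layover.
    max_r = -np.inf
    for i in range(n):
        if slant_range[i] < max_r:
            mask[i] |= 1
        else:
            max_r = slant_range[i]

    # Pass 2 (shadow): the look angle should also increase monotonically; a
    # cell seen under a smaller look angle than an earlier cell is hidden
    # behind that cell's back slope.
    max_a = -np.inf
    for i in range(n):
        if look_angle[i] < max_a:
            mask[i] |= 2
        else:
            max_a = look_angle[i]

    return mask
```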

The algorithm is summarized in the figure below.

Figure 2. Layover-shadow mask generation

Reference:
[1] Small D., Schubert A., Guide to ASAR Geocoding, RSL-ASAR-GC-AD, Issue 1.0, March 2008
[2] Schreier G., SAR geocoding: data and systems, Wichmann-Verlag, Karlsruhe, Germany, 1993
[3] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN, ENVI-CLVL-EOPG-TN-03-0010, Issue 1, Rev. 5, October 2004
[4] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS SAR Calibration: Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-HL09, Issue 2, Rev. 5f, November 2004
[5] RADARSAT-2 Product Format Definition, RN-RP-51-2713, Issue 1/7, March 14, 2008
[6] Radiometric Calibration of TerraSAR-X data, TSXX-ITD-TN-0049-radiometric_calculations_I1.00.doc, 2008
[7] For further details about Cosmo-SkyMed calibration please contact the Cosmo-SkyMed Help Desk at info. [email protected]

SAR Simulation

SAR Simulation Operator

The operator generates a simulated SAR image using a DEM, the geocoding and orbit state vectors from a given SAR image, and mathematical modeling of the SAR imaging geometry. The simulated SAR image will have the same dimensions and resolution as the original SAR image.

Major Processing Steps

Some major steps of the simulation procedure are listed below:

1. First a DEM image is created from the original SAR image. The DEM image has the same dimensions as the original SAR image. The pixel value of the DEM image is the elevation of the corresponding pixel in the original SAR image.
2. Then, for each cell in the DEM image, its pixel position (row/column indices) in the simulated SAR image is computed based on the SAR model.
3. Finally, the backscattered power σ0 for the pixel is computed using a backscattering model.

DEM Supported

Right now only DEMs with geographic coordinates (Pφ, Pλ, Ph) referred to the global geodetic ellipsoid reference WGS84, in meters, are properly supported. By default the following DEMs are available:

● ACE

● GETASSE30

● SRTM 3Sec GeoTiff

● ASTER GDEM

Since the height information in ACE and SRTM is referenced to the EGM96 geoid, not the WGS84 ellipsoid, a correction is applied to obtain heights relative to the WGS84 ellipsoid.

The user can also use an external DEM file which, as specified above, must be a WGS84 (Pφ, Pλ, Ph) DEM in meters.

Layover-Shadow Mask Generation

Besides producing the simulated image, this operator can also generate a layover-shadow mask for the simulated image using the 2-pass algorithm given in section 7.4 in [2]. For details of the layover-shadow mask generation, the reader is referred to the SAR Simulation Terrain Correction operator.

Parameters Used

The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing the simulated image. If no bands are selected, then by default all bands are selected. The selected bands are output in the target product together with the simulated image.
2. Digital Elevation Model: DEM type. Choose from the automatically tiled DEMs or specify a single external DEM file by selecting "External DEM".
3. DEM Resampling Method: Interpolation method for obtaining elevation values from the original DEM file. The following interpolation methods are available: nearest neighbour, bi-linear, cubic convolution, bi-sinc and bi-cubic interpolations.
4. External DEM: User-specified external DEM file. Currently only WGS84 lat/lon DEMs in meters are accepted.
5. Save Layover-Shadow Mask as band: Checkbox indicating that the layover-shadow mask is saved as a band in the target product.

Detailed Simulation Algorithm

The detailed procedure is as follows:

1. Get data for the following parameters from the metadata of the SAR image product:

❍ radar wave length

❍ range spacing

❍ first_line_time

❍ line_time_interval

❍ slant range to 1st pixel

❍ orbit state vectors

❍ slant range to ground range conversion coefficients

2. Compute the satellite position and velocity for each azimuth time by interpolating the orbit state vectors;
3. Repeat the following steps for each cell in the DEM image (a simplified sketch of sub-steps 3 to 9 is given after this list):
   1. Get the latitude, longitude and elevation for the cell;
   2. Convert (latitude, longitude, elevation) to Cartesian coordinates P(X, Y, Z);
   3. Compute the zero Doppler time t for point P(X, Y, Z) using the Doppler frequency function;
   4. Compute the SAR sensor position S(X, Y, Z) at time t;
   5. Compute the slant range r = |S - P|;
   6. Compute the bias-corrected zero Doppler time tc = t + r*2/c, where c is the speed of light;
   7. Update the satellite position S(tc) and slant range r(tc) = |S(tc) - P| for the bias-corrected zero Doppler time tc;
   8. Compute the azimuth index Ia in the source image using the zero Doppler time tc;
   9. Compute the range index Ir in the source image using the slant range r(tc);
   10. Compute the local incidence angle;
   11. Compute the backscattered power and save it as the value for pixel ((int)Ia, (int)Ir).
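The sketch below mirrors sub-steps 3 to 9 for a single DEM cell. It is illustrative only: the orbit interpolators, the bisection bracket and the assumption of a slant range product (so that the range index follows directly from the range spacing) are simplifications, not the NEST implementation.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def zero_doppler_time(sat_pos, sat_vel, P, t_min, t_max, tol=1e-8):
    """Solve dot(V(t), S(t) - P) = 0 for the zero Doppler time by bisection.

    sat_pos, sat_vel : callables returning the satellite position/velocity
                       (ECEF, m and m/s) at azimuth time t, interpolated from
                       the orbit state vectors.
    P                : target point in Cartesian coordinates (m).
    The bracket [t_min, t_max] is assumed to contain a sign change.
    """
    def doppler(t):
        return np.dot(sat_vel(t), sat_pos(t) - P)

    a, b = t_min, t_max
    fa = doppler(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * doppler(m) <= 0.0:
            b = m
        else:
            a, fa = m, doppler(m)
    return 0.5 * (a + b)

def image_indices(P, sat_pos, sat_vel, t_min, t_max,
                  first_line_time, line_time_interval,
                  slant_range_to_first_pixel, range_spacing):
    """Sub-steps 3 to 9 above for one DEM cell (sketch)."""
    t = zero_doppler_time(sat_pos, sat_vel, P, t_min, t_max)
    r = np.linalg.norm(sat_pos(t) - P)          # slant range
    tc = t + 2.0 * r / C                        # bias-corrected zero Doppler time
    r = np.linalg.norm(sat_pos(tc) - P)         # updated slant range r(tc)
    ia = (tc - first_line_time) / line_time_interval        # azimuth index
    ir = (r - slant_range_to_first_pixel) / range_spacing   # range index
    return ia, ir
```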

Reference:
[1] Liu H., Zhao Z., Lezek K. C., Correction of Positional Errors and Geometric Distortions in Topographic Maps and DEMs Using a Rigorous SAR Simulation Technique, Photogrammetric Engineering & Remote Sensing, Vol. 70, No. 9, Sep. 2004
[2] Schreier G., SAR geocoding: data and systems, Wichmann-Verlag, Karlsruhe, Germany, 1993

Ellipsoid Correction

Ellipsoid Correction GG Operator

The operator implements the Geolocation-Grid (GG) method [1]. The implementation is exactly the same as for the Range Doppler Terrain Correction operator except that the slant range is computed from the slant range time tie points of the source product instead of using a DEM.

Major Processing Steps

Some major steps of the algorithm are given below:

1. Get the latitudes and longitudes for the four corners of the source image;
2. Determine the target image boundaries based on the scene corner latitudes and longitudes;
3. Get the range and azimuth pixel spacings from the metadata of the source image;
4. Compute the target image traversal intervals based on the source image pixel spacing;
5. Compute the target image dimension;
6. Get the tie points (latitude, longitude and slant range time) from the geolocation LADS of the source image;
7. Repeat the following steps for each cell in the target image raster:
   a. Get the latitude and longitude for the current cell;
   b. Determine the corresponding position of the current cell in the source image and the 4 pixels that are immediately adjacent to it;
   c. Compute the slant range R for the cell using the slant range time and bi-quadratic interpolation;
   d. Compute the zero Doppler time T for the cell;
   e. Compute the bias-corrected zero Doppler time Tc = T + R*2/C, where C is the speed of light;
   f. Compute the azimuth index Ia using the zero Doppler time Tc;
   g. Compute the range index Ir using the slant range R;
   h. Compute the pixel value x(Ia, Ir) using bi-linear interpolation and set it for the current sample in the target image.

Resampling Method Supported

● nearest_neighbour

● bilinear_interpolation

● cubic_convolution

Map Projection Supported

Right now the following projections are supported by NEST:

● Geographic Lat/Lon

● Lambert Conformal Conic

● Stereographic

● Transverse Mercator

● UTM

● Universal Polar Stereographic North

● Universal Polar Stereographic South

Parameters Used

The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands. For a complex product, the i and q bands must be selected together. If no bands are selected, then by default all bands are selected.
2. Image resampling method: Interpolation methods for obtaining pixel values from the source image. Three interpolation methods are available: nearest neighbour, bi-linear and cubic interpolations.
3. Map Projection: The map projection type. The orthorectified image will be presented in the user-selected map projection.

The output will be expressed in the WGS84 lat/lon geographic coordinate system.

[1] Small D., Schubert A., Guide to ASAR Geocoding, Issue 1.0, 19.03.2008

Ellipsoid Correction

Ellipsoid Correction RD Operator

The operator implements the Range Doppler orthorectification method [1]. The implementation is exactly the same as for the Range Doppler Terrain Correction operator except that the averaged scene height is used instead of DEM.

Resampling Method Supported

● nearest_neighbour

● bilinear_interpolation

● cubic_convolution

Map Projection Supported

Right now the following projections are supported by NEST:

● Geographic Lat/Lon

● Lambert Conformal Conic

● Stereographic

● Transverse Mercator

● UTM

● Universal Polar Stereographic North

● Universal Polar Stereographic South

Parameters Used

The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands. For a complex product, the i and q bands must be selected together. If no bands are selected, then by default all bands are selected.
2. Image resampling method: Interpolation methods for obtaining pixel values from the source image. Three interpolation methods are available: nearest neighbour, bi-linear and cubic interpolations.
3. Map Projection: The map projection type. The orthorectified image will be presented in the user-selected map projection.

[1] Small D., Schubert A., Guide to ASAR Geocoding, Issue 1.0, 19.03.2008

Map Reprojection

Reprojection Operator

The Map Projection Operator applies a selected map projection to the input product and creates a new, reprojected output product.

The following parameters are used by the Operator:

Coordinate Reference System (CRS)

Custom CRS: The transformation used by the projection can be selected. The geodetic datum and transformation parameters can also be set, if possible for the selected transformation.
Predefined CRS: By clicking the Select... button a new dialog is shown where a predefined CRS can be selected.
Use CRS of: A product can be selected in order to use its projected Coordinate Reference System. This has the effect that the source product will cover the same geographic region on the same CRS, which means that both products are collocated.

Output Settings

Preserve resolution: If unchecked, the Output Parameters... button is enabled and the upcoming dialog lets you edit output parameters like the easting and northing of the reference pixel, the pixel size and the scene height and width.
Reproject tie-point grids: Specifies whether or not the tie-point grids shall be included. If they are reprojected they will appear as bands in the target product and no longer as tie-point grids.
No-data value: The default no-data value is used for output pixels in the projected band which either have no corresponding pixel in the source product or whose source pixel is invalid.
Resampling Method: You can select one resampling method for the projection. For a brief description have a look at Resampling Methods.

Output Information

Displays some information about the output, like the scene width and height, the geographic coordinates of the scene center and a short description of the selected CRS. When clicking the Show WKT... button the corresponding Well-Known Text of the currently defined CRS is shown.

SRGR Operator

Slant Range to Ground Range Operator

The operator re-projects images from slant range (range spacing proportional to echo delay) to ground range (range spacing proportional to distance from nadir along a predetermined ellipsoid). The operator works on complex or real slant range products. Note: The SRGR operator is not required before terrain correction since terrain corrected results are always in ground range.

Major Processing Steps

The slant range to ground range conversion consists of the following major steps:

1. Create a warp polynomial of a given order that maps ground range pixels to slant range pixels.
2. For each ground range pixel, compute its corresponding pixel position in the slant range image using the warp polynomial.
3. Compute the pixel value using the user-selected interpolation method (a simplified sketch of steps 2 and 3 is given after this list).
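A minimal sketch of steps 2 and 3, assuming the warp polynomial coefficients are already known and using linear interpolation; the coefficient layout and function names are illustrative, not NEST's API.

```python
import numpy as np

def ground_to_slant_index(j_ground, warp_coeffs):
    """Evaluate the warp polynomial that maps a ground range pixel index to a
    (generally fractional) slant range pixel index.  warp_coeffs holds the
    polynomial coefficients c0, c1, c2, ... (illustrative layout)."""
    return sum(c * j_ground ** k for k, c in enumerate(warp_coeffs))

def srgr_line(slant_line, warp_coeffs, n_ground):
    """Resample one image line from slant range to ground range, interpolating
    linearly between the two neighbouring slant range pixels."""
    out = np.zeros(n_ground)
    for j in range(n_ground):
        x = ground_to_slant_index(j, warp_coeffs)
        i0 = int(np.floor(x))
        if i0 < 0 or i0 + 1 >= len(slant_line):
            continue                    # outside the slant range swath
        w = x - i0
        out[j] = (1.0 - w) * slant_line[i0] + w * slant_line[i0 + 1]
    return out
```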

Interpolation Methods Supported

The operator supports the following interpolation methods:

● Nearest-Neighbour interpolation

● Linear interpolation

● Cubic interpolation

● Cubic2 interpolation

● Sinc interpolation

Parameters Used

The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands for producing ground range images. If no bands are selected, then by default all bands are selected.
2. Warp Polynomial Order: The degree of the WARP polynomial. It should be a positive integer.
3. Interpolation Method: The interpolation method used in the SRGR conversion.

Mosaic

Mosaic Operator

The Mosaic Operator combines overlapping products into a single composite product. The mosaicking is based on the geocoding of the source products; therefore the geocoding needs to be very accurate. It is recommended that the source products be Terrain Corrected and Radiometrically Corrected first.

The following parameters are used by the Operator:

1. Source Bands: All bands (real or virtual) of the source product. You may select one or more bands. If no bands are selected, then by default all bands will be processed.
2. Resampling Method: Choice of Nearest Neighbour, Bilinear, Cubic, Bi-Sinc or Bi-Cubic resampling.
3. Pixel Size (m): The output scene pixel spacing in meters.
4. Scene Width (pixels): The output scene width in pixels.
5. Scene Height (pixels): The output scene height in pixels.
6. Feather (pixels): The number of pixels skipped on the boundary of the source images.
7. Weight Average of Overlap: Averaging option to blend overlapping pixels (a simplified blending sketch is given after this list). To achieve a better mosaic result, the Normalizer checkbox below should be selected as well.
8. Normalizer: Normalization option to remove the mean and normalize the standard deviation.
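The following sketch shows the idea behind the weight average and normalization options in a simplified form. It assumes the inputs have already been terrain corrected and resampled onto the common output grid, with NaN marking pixels outside each image's footprint; feathering and the operator's actual blending weights are not reproduced.

```python
import numpy as np

def mosaic_average(images, normalize=False):
    """Blend co-gridded images by averaging the valid (non-NaN) pixels.

    images    : list of 2-D arrays already resampled onto the output grid,
                with NaN where an image has no data.
    normalize : remove each image's mean and scale to unit standard deviation
                before averaging (rough analogue of the Normalizer option).
    """
    acc = np.zeros(images[0].shape, dtype=float)
    cnt = np.zeros(images[0].shape, dtype=float)
    for img in images:
        data = img.astype(float)
        if normalize:
            data = (data - np.nanmean(data)) / np.nanstd(data)
        valid = ~np.isnan(data)
        acc[valid] += data[valid]
        cnt[valid] += 1.0
    out = np.full(acc.shape, np.nan)
    covered = cnt > 0
    out[covered] = acc[covered] / cnt[covered]
    return out
```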

Below is an example of a mosaic of ASA_GM1 products that have been Terrain Corrected and Radiometrically Corrected. Overlapping areas have been normalized and averaged.

Interferometry

Interferometric Functionality of NEST

Since the 4B release, NEST supports interferometric processing. The complete interferometric processing chain is implemented. The processing setup for InSAR is similar to that of the other functionalities of NEST: the set of interferometric operators is designed and chained into a graph via the Graph Processing Framework (GPF). For optimal interferometric processing, many of the core functionalities of NEST have been improved and/or reimplemented to accommodate the InSAR requirements. For example, the warping function has been extended with phase-preserving and more accurate interpolation kernels, while the coregistration has been fully reimplemented and optimized for interferometric applications.

Implementation Overview

Initially, the DORIS (Delft object-oriented radar interferometric software) package was used as an algorithmic prototype for the InSAR functionality. However, in the course of development many of these algorithmic prototypes have been further extended and completely reimplemented, and most implementations deviate significantly from the original. Because of these developments, and in order to streamline and simplify further development, a dedicated library and application programming interface (API) for interferometric applications has been designed and developed: jLinda (Java Library for Interferometric Data Analysis).

InSAR Library Highlights

Some of jLinda's highlights, in the context of the InSAR functionality of NEST:

● It provides ALL InSAR functionality to NEST.

● The library is independent from NEST/BEAM core.

● The NEST/BEAM libraries are only used in the construction of the operators and in forming the interferometric graph.

The architecture of interferometric modules of NEST is visualized in Figure 1.

Figure 1. Flowchart of implementation strategy of interferometric modules of NEST.

Algorithmic Base

All implemented algorithms are fully documented, described in the literature, and follow generally accepted best practices.

Links and Further Resources

● Web page and code repository of jLinda - https://github.com/ppolabs/jlinda/

Interferogram formation (InSAR operator)

Computation of the interferogram and computation/removal of the flat-earth phase

This operator computes the (complex) interferogram, with or without subtraction of the flat-earth (reference) phase. The reference phase is subtracted using a 2D polynomial that is also estimated by this operator. If the orbits for the interferometric pair are known, the flat-earth phase is estimated using the orbital and metadata information and subtracted from the complex interferogram. The flat-earth phase is the phase present in the interferometric signal due to the curvature of the reference surface. The geometric reference system of the reference surface is defined by the reference system of the satellite orbits (for now only WGS84 is supported, which is the reference system used by all space-borne SAR systems).

The flat-earth phase is computed in a number of points distributed over the total image, after which a 2D polynomial is estimated (using least squares) to fit these 'observations' (e.g. a plane can be fitted by setting the degree to 1). A polynomial of degree 5 is normally sufficient to model the reference phase for a full SAR scene (approx. 100x100 km). A lower degree might be selected for smaller images, and a higher degree for 'long-swath' scenes. Note that the higher order terms of the flat-earth polynomial are usually small, because the polynomial describes a smooth, long-wave body (the ellipsoid). The recommended polynomial degree, which should ensure a smooth surface for most image sizes and areas of the world, is 5.

In order to reduce the noise, you can perform multilooking as a post-processing step (with the Multilook Operator). Multilooking has to be performed separately on the 'virtual' phase or intensity bands. In future releases a complex Multilook operator will be released. Note that in the case of ESA's ERS and Envisat sensors, a factor of 5:1 (azimuth:range), or a similar ratio between the factors, is chosen to obtain approximately square pixels (20x20 m^2 for factors 5 and 1). Of course the resolution decreases if multilooking is applied.

Note: The reference phase polynomial is neither estimated nor subtracted from the computed interferogram if the flag for NOT performing the reference-phase estimation/subtraction is set. Note that if the subtraction of the flat-earth phase is skipped, the formed interferogram will still contain the fringes caused by the earth curvature, which could hamper further interferometric processing and analysis. The intention of computing an interferogram without the reference phase subtracted is only for demonstration and educational purposes.
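The least-squares estimation of the 2D reference-phase polynomial can be pictured with the short sketch below. It fits a polynomial phi(l, p) with all terms l^i * p^j, i + j <= degree, to flat-earth phase values computed at scattered (line, pixel) positions; the design-matrix layout and coordinate scaling used by jLinda may differ (in practice the coordinates would be normalised, e.g. to [-1, 1], to keep the system well conditioned).

```python
import numpy as np

def fit_flat_earth_poly(lines, pixels, phase, degree=5):
    """Least-squares fit of a 2-D polynomial phi(l, p) = sum a_ij * l^i * p^j,
    with i + j <= degree, to reference-phase 'observations' at scattered
    (line, pixel) positions.  Returns the coefficients and exponent pairs."""
    lines = np.asarray(lines, dtype=float)
    pixels = np.asarray(pixels, dtype=float)
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([lines ** i * pixels ** j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(phase, dtype=float), rcond=None)
    return coeffs, terms

def eval_flat_earth_poly(coeffs, terms, lines, pixels):
    """Evaluate the fitted polynomial, e.g. on the (flattened) interferogram
    grid, so that it can be subtracted from the complex interferogram."""
    lines = np.asarray(lines, dtype=float)
    pixels = np.asarray(pixels, dtype=float)
    A = np.column_stack([lines ** i * pixels ** j for i, j in terms])
    return A @ coeffs
```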

Operator parameters: The following parameters are used by this operator:

1. Number of points used to compute the reference phase for the least squares estimation. The default value is 501, which is sufficient for 100x100 km SAR scenes. For smaller or larger scenes this number can be adapted.
2. The degree of the 2D flat-earth polynomial. The recommended degree, appropriate for most use cases, is 5.
3. Orbit interpolation method. Defaults to a polynomial of degree (number of state vectors)-1, but smaller than degree 5. Optionally, the degree for the orbit interpolation can be declared. The specified degree has to be smaller than or equal to (number of state vectors)-1. The positions of the state vectors (x, y, z) are independently interpolated; the velocities are estimated from the positions. The annotated velocities, if available, will be used in the interpolation and not computed from the positions. NOTE: It is not recommended to use a degree smaller than the maximum possible, except if it becomes too large, in order to avoid oscillations. However, depending on the temporal posting of the state vectors and their accuracy, different interpolation strategies can give better results.
4. Flag for skipping the estimation and subtraction of the reference phase.

Source bands: The source bands are the set of, usually coregistered, bands of the complex product.

Estimation and Subtraction of Topographic Phase (InSAR operator)

Estimation and Subtraction of Topographic Phase

This operator estimates and subtracts the topographic phase from the interferogram. More specifically, this operator first "radarcodes" the Digital Elevation Model (DEM) of the area of the interferogram, and then subtracts it from the complex interferogram. This operator has to be applied after the interferogram generation. It also requires an input DEM; SRTM can be used, or any other DEM supported by NEST. The DEM handling for most elevation models (selection and download from the internet of tiles covering the area of interest, interpolation, accounting for geoid undulation, etc.) is performed automatically by the operator itself. A set of digital elevation models that are freely available for scientific use is supported by NEST. The elevation models for which tiles are automatically handled by NEST are: SRTM, ACE, and GETASSE. For all others, some administration and data manipulation work might be needed: a manual selection and download of the tiles covering the area of interest has to be performed, while the interpolation and pre-processing of the DEM for interferometric applications is still performed automatically by the operator.

Implementation Details

Note that the interpolation of the DEM is conceptually different than in other geometry operators of NEST (e.g. the Range Doppler Terrain Correction Operator). The DEM reference phase is computed in two steps:

● In the first step, the DEM is radarcoded to the coordinate system of the master image. Per DEM point the master coordinate (real valued) and the computed reference phase are saved to a file.

● Then, the reference phase is interpolated to the integer grid of master coordinates. A linear interpolation based on a Delaunay triangulation is used (as sketched below); a Delaunay triangulation library developed specifically for NEST and SAR applications is used.
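The second step can be illustrated with SciPy's Delaunay-based linear interpolation of scattered data. This is only a conceptual stand-in: NEST/jLinda uses its own triangulation library, and the function below (including its name and arguments) is an assumption for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def reference_phase_on_master_grid(master_lines, master_pixels, ref_phase,
                                    n_lines, n_pixels):
    """Interpolate reference-phase values given at real-valued master
    coordinates onto the integer master grid, using linear interpolation
    based on a Delaunay triangulation (scipy.interpolate.griddata)."""
    grid_l, grid_p = np.mgrid[0:n_lines, 0:n_pixels]
    points = np.column_stack([master_lines, master_pixels])
    return griddata(points, ref_phase, (grid_l, grid_p), method='linear')
```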

Operator parameters:

1. Orbit Interpolation Degree: Degree of the orbit interpolation polynomial.
2. Digital Elevation Model: Elevation model to be used for the processing.
3. Elevation Band Name: Name of the elevation band, where the interpolated and radarcoded height of the area of interest will be saved.

Input/Output bands: The source bands are a stack of flat-earth subtracted ("flattened") interferograms. The output bands are a stack of interferograms with the DEM reference phase subtracted, plus other optional bands, for example the radarcoded elevation of the area or the topographic phase.

Known issues:

When using an 'External DEM' as the reference surface, the user has to make sure that the input DEM sufficiently covers, and extends beyond, the area of the input interferogram. In practice, this means that the input DEM should be 10-15% bigger in its extent (wider and higher) than the interferogram. This issue will be addressed in the NEST 5A-FINAL release.

Coherence estimation (InSAR operator)

Coherence estimation

This operator computes/estimates the coherence image, with or without subtraction of the reference phase. The reference phase is subtracted if a 2D polynomial has been computed -- the result of the "Compute Interferogram" operator. It is not subtracted if this information is not in the metadata, or if the number of polynomial coefficients in the "Compute Interferogram" operator is set to 0. Note that this is a "general" coherence estimation operator and not exclusive to InSAR applications: it can be utilized to estimate the coherence information from any stack of coregistered complex images. In order to reduce the noise, you can perform multilooking as a post-processing step (with the Multilook Operator). In the case of ESA's ERS and Envisat sensors, a factor of 5:1 (azimuth:range), or a similar ratio between the factors, is chosen to obtain approximately square pixels (20x20 m^2 for factors 5 and 1). Of course the resolution decreases if multilooking is applied.
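For reference, the windowed coherence magnitude that such an estimator computes can be sketched as follows. The expression gamma = |&lt;m s*&gt;| / sqrt(&lt;|m|^2&gt;&lt;|s|^2&gt;) over a small moving window is the standard estimator; the window handling and any reference-phase subtraction inside NEST may differ, and the reference phase is assumed here to be already removed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(master, slave, win_az=5, win_rg=5):
    """Windowed coherence magnitude of two coregistered complex images
    (any reference phase is assumed to be subtracted beforehand):

        gamma = |<m s*>| / sqrt(<|m|^2> <|s|^2>)  over a win_az x win_rg window
    """
    size = (win_az, win_rg)

    def box(x):
        # moving-window mean, applied to real and imaginary parts separately
        return uniform_filter(x.real, size) + 1j * uniform_filter(x.imag, size)

    num = box(master * np.conj(slave))
    den = np.sqrt(uniform_filter(np.abs(master) ** 2, size) *
                  uniform_filter(np.abs(slave) ** 2, size))
    return np.abs(num) / np.maximum(den, 1e-12)
```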

Operator parameters:

The input parameters are the size of the shifting window for the coherence estimation. The window size is defined in both the azimuth and range directions.

Source bands:

Source Bands are a set of, usually coregistered, bands of the complex product.

Azimuth Filtering (InSAR operator)

Azimuth Filtering

This operator filters the spectra of a stack of SLC images in the azimuth direction. This is an optional step in the interferometric processing chain. The part of the spectra that does not overlap between master and slave is filtered out. This non-overlap is due to the selection of a Doppler centroid frequency in the SAR processing, which normally is not equal for the master and slave image. This step can in general best be performed after the coregistration. (The offset in range direction is used to evaluate the polynomial for the Doppler centroid frequency.) The operator performs filtering of all images in the stack at the same time. However, if multiple slave images are present in the stack, only the slave images will be filtered, and the master image will remain unfiltered (all slaves are coregistered to the same master image). This approach has the advantage that no separate filtered copy of the master has to be created for each interferogram of the stack, effectively saving disk storage space and making the processing more efficient. The disadvantage of not filtering the master is that a small part of the spectrum of the master is not shared with the slave spectrum, yielding a minor loss of coherence in the interferogram.

Operator parameters:

● FFT Window Length: Length of the FFT estimation window per tile in the azimuth direction. In general, the larger the better. However, note that if the value for the FFT Window Length is larger than the size of the tile, the length of the window will be reduced to the maximum possible length.

● Azimuth Filter Overlap: Half of the overlap between consecutive tiles in the azimuth direction. Partially the same data is used to estimate the spectrum, to avoid border effects. However, the exact influence of this parameter on the end results scales with the Doppler centroid frequency variability between the master and slave images. The optimum ratio between the overlap parameter and the Doppler centroid frequency difference has not yet been studied. Setting this parameter to 0 gives the fastest results.

● Hamming Alpha: The weighting of the spectrum in the azimuth direction. The filtered output spectrum is first de-weighted with the specified Hamming filter, then re-weighted with a (newly centered) one. If this parameter is set to 1, no weighting is performed.

Source bands:

Source Bands are a set of coregistered bands of the complex product.

Output bands:

Output Bands are a set of bands with spectra filtered in the azimuth direction.

Range Filtering (InSAR operator)

Range Filtering

This operator filters the spectra of a stack of SLC images in the range direction. This is an optional step in the interferometric processing chain. The filtering of the master and slave image in the range direction increases the Signal-to-Noise Ratio (SNR) in the interferogram. This noise reduction results from filtering out the non-overlapping parts of the spectra. This spectral non-overlap in range between master and slave is caused by a slightly different viewing angle of the two sensors. The longer the perpendicular baseline, the smaller the overlapping part. Eventually a baseline of about 1100 m results in no overlap at all (that is also the critical baseline for ERS). Assuming no local terrain slope, a reduction of typically 10-20% in the number of residues can be achieved. The range filtering should be performed after coregistration (after the slave images are resampled/warped to the master grid), because the fringe frequency is estimated from the interferogram (which is temporarily computed). It is performed simultaneously for the master and slave image, unless there are multiple slave images in the stack. If the latter is the case, as with the Azimuth Filtering Operator, only the slave images will be filtered while the master image will be left in its original state.

Implementation Details

Currently, only a so-called "adaptive" filtering is implemented; a method based on orbital data and terrain slope will be implemented in coming releases. The adaptive range filtering algorithm builds on the local fringe frequency estimated from a locally computed interferogram. After the warping/resampling of the slave onto the master grid, the local fringe frequency is estimated using peak analysis of the power of the spectrum of the complex interferogram. The warping/resampling is required since the local fringe frequency is estimated from the interferogram. The fringe frequency is directly related to the spectral shift in the range direction. (Note that this shift is not an actual shift, but an indication that the different frequencies are mapped to places with this shift.)

Input parameters

● FFT Window Length: Length of the estimation window; a peak is estimated for parts of this length.

● Hamming Alpha: Weight for the Hamming filter. (Note that if alpha is set to 1, the weighting window function will be of rectangular type.)

● Walking Mean Window: Number of lines over which the (walking) mean will be computed. This parameter reduces noise for the peak estimation. The parameter has to be an odd number. Logically, the walking mean can be compared with the principles of periodogram estimation.

● SNR Threshold: In the peak estimation, weight values to bias higher frequencies. The reasoning for this parameter is that the low frequencies are (for small oversample factors) aliased after interferogram generation. The de-weighting is done by dividing by a triangle function (the convolution of 2 rect window functions, the shape of the range spectrum). The effect of this parameter may be negligible on the overall results.

● Oversampling factor: Oversample the master and slave(s) with this factor before computing the complex interferogram for the peak estimation. This factor has to be a power of 2. The default is 2, and with this factor the filter is able to estimate the peak for frequency shifts larger than half the bandwidth. A factor of 4, for example, might give a better estimate, since the interval between shifts that can be estimated is in that case halved (for a fixed FFT Window Length).

Source bands:

Source Bands are a set of coregistered bands of the complex product.

Output bands:

Output Bands are a set of bands with spectra filtered in the range direction.

Phase Filtering of stacks of interferograms (InSAR operator)

Phase filtering of stacks of interferograms

This operator can be used to optionally filter stacks of complex interferograms. The filtering is performed in order to reduce noise, e.g., for visualization or to aid the phase unwrapping. It is probably best run after the Interferogram operator. However, the optimal place for the phase filtering in the processing chain depends on the specific application. The following filtering methods are implemented: the Goldstein method ("goldstein") and spatial convolution ("spatialconv"). Functionality to load and support user-defined filters will be implemented soon. The basic principles of the implemented complex phase filters are as follows: in the case of the Goldstein filtering, the interferometric fringes become sharper because the peak in the spectrum (caused by the fringes) is given a higher relative weight. The "spatial convolution" method is a simple spatial convolution with a certain function acting as a kernel, e.g., a 3-point moving average. For more details refer to the implementation section and the listed references.

Operator parameters: The following input parameters are used by this operator:

1. Filtering Method: Select the filtering method. Choose between the Goldstein method ("goldstein") and spatial convolution ("spatialconv"). Note that different methods have different parameters and corresponding levels of fine tuning.
2. Alpha: (Input parameter for the Goldstein method only.) The Alpha parameter can be understood as a "smoothness coefficient" of the filter, defining the effective level of filtering. The value for alpha must be in the range [0, 1]. The value 0 means no filtering, while 1 results in the most filtering. The Alpha parameter is connected to, and indirectly influenced by, the input parameters for the Filtering Kernel: a higher smoothing gives a relative decrease of the peak, and thus of the effect of alpha.
3. Blocksize: (Input parameter for the "goldstein" method only.) It defines the size of the blocks that are filtered. The parameter must be a power of 2. The value for the block size should be large enough so that the spectrum can be estimated, and small enough that it contains a single peak frequency (one trend in phase). The recommended value for the block size is 32 pixels.
4. Overlap: (Input for the "goldstein" method only.) The overlap value defines half of the size of the overlap between consecutive filtering blocks and tiles, so that partially the same data is used for filtering. The total overlap should be smaller than the Blocksize value. If the parameter is set to Blocksize/2-1 (the maximum value for this parameter) then each output pixel is filtered based on the spectrum that is centered around it. Note that this is probably the optimal way of filtering, but it may well be the most time-consuming one.
5. Filtering kernel: (Input parameter for the "goldstein" and "spatialconv" methods only.) It defines the one-dimensional kernel function used to perform the convolution. A number of pre-defined kernels are offered, while future releases will allow users to define their own 1D filtering kernels. For the "goldstein" method the default kernel is [1 2 3 2 1]; this kernel is used to smooth the amplitude of the spectrum of the complex interferogram, and the spectrum is later scaled by the smoothed spectrum raised to the power alpha. For the "spatialconv" method the default is a 3-point moving average [1 1 1] convolution; the real and imaginary parts are averaged separately this way. For more info see the implementation section.

Source bands:

Source Bands are a stack of interferograms.

Output bands:

Output Bands are a stack of phase-filtered interferograms.

Implementation notes:

(More details on algorithmic implementation COMING SOON!)

1. Spatial Convolution Method: The input complex interferogram is convolved with a 2D kernel by FFTs. The 2D kernel is computed from the 1D kernel defined as an input parameter of the operator. The block size for the convolution is chosen as high as possible. In future releases, it will also be possible to load a 2D kernel from an external file. Note that only odd-sized kernels can be used, so if you want to use a kernel of even size, simply add a zero to make the kernel size odd.

2. Goldstein Method: The algorithm is implemented as follows (a code sketch is given after the step list):

❍ Read a data tile (T);

❍ Get a data block (B) from input tile;

❍ B = fft2d(B) (obtain complex spectrum);

❍ A = abs(B) (compute magnitude of spectrum);

❍ S = smooth(A) (perform convolution with kernel);

❍ S = S/max(S) (scale S between 0 and 1);

❍ B = B.S^alpha (weight complex spectrum);

❍ B = ifft2d(B) (result in space domain);

❍ If all blocks of tile done, write to disk.
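The step list above translates almost directly into the following sketch for a single block. It is a simplified illustration with NumPy/SciPy, not the jLinda code; block extraction, overlap handling and tile I/O are omitted.

```python
import numpy as np
from scipy.ndimage import convolve

def goldstein_block(block, alpha=0.5, kernel1d=(1, 2, 3, 2, 1)):
    """Goldstein-style filtering of one complex interferogram block,
    following the steps listed above (sketch only)."""
    B = np.fft.fft2(block)                 # complex spectrum
    A = np.abs(B)                          # magnitude of the spectrum
    k = np.asarray(kernel1d, dtype=float)
    K = np.outer(k, k)                     # 2-D smoothing kernel from 1-D kernel
    S = convolve(A, K, mode='wrap')        # smooth the magnitude
    S = S / S.max()                        # scale S between 0 and 1
    B = B * S ** alpha                     # weight the complex spectrum
    return np.fft.ifft2(B)                 # back to the space domain
```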

Phase to Height conversion (InSAR operator)

Phase to Height conversion

This operator converts the unwrapped interferometric phase to heights in the radar-coded system. This functionality is sometimes referred to as "slant-to-height" conversion. The implementation follows the Schwabisch method, which uses polynomials to compare the actual phase with the flat-earth (reference) phase. The Schwabisch method is a fast method that yields radar-coded heights. It builds on the idea of first computing the reference phase at a number of discrete heights and then comparing the actual phase from the interferogram with these pre-computed values to determine the height. An obstacle with this method is that the interferograms used as input do not contain the reference phase anymore, so it has to be pre-computed and included in the height-estimation procedure.

Operator parameters:

1. Number of estimation points: The number of locations to compute the reference phase at different altitudes.
2. Number of height samples: Number of height samples in the range [0, 5000) at which the reference phase will be estimated.
3. Degree of 1D polynomial: Degree of the one-dimensional polynomial to "fit" the reference phase through.
4. Degree of 2D polynomial: Degree of the two-dimensional polynomial to fit the reference phase through.
5. Orbit interpolation degree: Defaults to a polynomial of degree (number of state vectors)-1, but smaller than degree 5.

Source Products:

Unwrapped interferometric product.

Output bands:

The height estimated from the unwrapped interferogram. The heights are stored in meters, while a height of 0.0 indicates a problem with the unwrapping.

Three-Pass DInSAR (InSAR operator)

Three-Pass Differential Interferometry

Three-pass DInSAR stands for three-pass differential interferometry. It is a method to remove the topography-induced phase from an interferogram that contains topographic, deformation, and atmospheric components. This step can be performed only if an unwrapped topography interferogram (topo pair) and a complex deformation interferogram (defo pair) are present. Both the defo and topo pair have to be referenced to a common master. This can be achieved by processing a stack of interferograms in NEST-DORIS. Both the defo and topo interferograms have to be corrected for the reference phase and sampled on the same grid (see the Coregistration and Warp operators). Also, both defo and topo interferograms must have the same multilook factors and the same dimensions (i.e. overlap exactly). The recommendation is that the perpendicular baseline of the topo-pair should be larger than that of the defo-pair; this ratio is recommended in order to prevent noise from being "blown up". Of course, this geometry cannot always be controlled, and it rather depends on the available data. This operation is performed in the stack processing tree. First create a stack of interferograms that are coregistered to the same master and processed up to the subtraction of the reference phase. Then the topo-pair has to be unwrapped: the topography pair is selected and extracted for external unwrapping in the Snaphu software. After the unwrapping is performed externally, the unwrapped results are imported back into NEST. Finally, for the DInSAR operator, both the defo-pair and the topo-pair are listed as source products, and the output of the operator is a differential interferogram (see the relation sketched below). To geocode the differential phase values, the standard geocoding modules of NEST can be applied.
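As a reminder of the relation being applied (a standard textbook form, given here for orientation; the actual scaling used by the operator is derived from the product metadata and orbits), the unwrapped topo-pair phase is scaled by the ratio of perpendicular baselines before being subtracted from the defo-pair phase:

```latex
\phi_{\mathrm{diff}}
  = \phi_{\mathrm{defo}}
  - \frac{B_{\perp,\mathrm{defo}}}{B_{\perp,\mathrm{topo}}}\,
    \phi_{\mathrm{topo,\,unwrapped}}
```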

Operator parameters:

This operator requires essentially no parameters; all the necessary processing information is constructed from the product metadata. The only input parameter is the degree of the orbit interpolator.

1. Orbit interpolation degree: Defaults to a polynomial of degree (number of state vectors)-1, but smaller than degree 5.

Source Products:

Source Products are a stack of:

1. Defo-pair: Interferometric product containing the bands of the so-called defo-pair. See the operator description for more details.
2. Topo-pair: Interferometric product containing the bands of the so-called topo-pair. The topo-pair interferogram has to be unwrapped and stored as a 'real' data band. See the operator description for more details.

Output bands:

Output Bands are a stack of differential interferograms. The amplitude is the same as that of the original 'deformation' interferogram. A complex value of (0,0) indicates that unwrapping was not performed correctly for that pixel.

Phase Unwrapping in NEST

Introduction

The principal observation in radar interferometry is the two-dimensional relative phase signal, which is the 2pi-modulus of the (unknown) absolute phase signal. The forward problem, the wrapping of the absolute phase to the [-pi, pi) interval, is straightforward and trivial. The inverse problem, the so-called phase unwrapping, is, due to its inherent non-uniqueness and non-linearity, one of the main difficulties and challenges in the application of radar interferometry. Many techniques have been proposed to deal with the phase unwrapping problem. The variable phase noise, as well as the geometric problems, i.e., foreshortening and layover, are the main reasons why many of the proposed techniques do not perform as desired. Furthermore, none of the given phase unwrapping techniques gives a unique solution, and without additional a-priori information, or strong assumptions on the data behaviour, it is impossible to assess the reliability of the solution.

Phase Unwrapping in NEST

There are two methods to perform the phase unwrapping in NEST:

1. Using integrated phase unwrapping functionality. 2. Using 3rd party software.

Integrated phase unwrapping

NEST version 5 brings an internal implementation of the phase unwrapping functionality. As with all other interferometric functionality, the unwrapping support is provided via the jLinda library. This unwrapping implementation follows the concept introduced by Costantini [1], where the unwrapping problem is formulated and solved as a minimum-cost flow problem on a network. Because of framework restrictions, the unwrapping in NEST is performed as a two-stage process. First, the initial unwrapping is performed with the unwrap operator and the results need to be saved; then, in a second stage, the unwrapping result is stitched into a smooth result. Specifically:

Uwrap: First an independent unwrapping of tiles is performed using the Uwrap operator; importantly, the results need to be saved.

Stitch: As a second step, the independently unwrapped tiles are integrated using the Stitch operator. This operator stitches the unwrapped phases of all tiles to form a complete, smooth image of unwrapped phase.

Interface to 3rd party software: SNAPHU

To obtain the unwrapped interferogram, 3rd party software should be used. The recommended tool to perform the phase unwrapping is the "Statistical-Cost, Network-Flow Algorithm for Phase Unwrapping" (SNAPHU), which can be downloaded from the SNAPHU project web page. The restricted distribution license of this software prevented a direct integration into NEST, so the user is expected to download and install this, or other, 3rd party software for the phase unwrapping individually.

SNAPHU support tools

To make the integration between NEST and SNAPHU as easy as possible, we developed a set of tools for exporting and importing data to/from SNAPHU. It should be noted that these tools are developed for the user's convenience and that the same functionality can be achieved by chaining the Product Generation Tools of NEST (e.g. Band Arithmetic, Replace Metadata, and other operations) and manually constructing the SNAPHU configuration files.

SNAPHU export: The graph for exporting NEST InSAR data for processing with SNAPHU, building the SNAPHU configuration file, and creating a "phase" product. The phase product serves as a container and an interface with SNAPHU. In the phase product the wrapped phase is saved, together with the corresponding metadata.

SNAPHU import: The importing (ingestion) of data into the previously created "phase unwrapping" container product. When importing the unwrapped phase, the existing wrapped phase data in the phase product is replaced with the unwrapped phase, while preserving the metadata, or the phase product is extended with the unwrapped phase band.

Recommendations, guidelines and notes

Multilooking and Phase Filtering: In order to obtain optimal unwrapping results it is recommended to multi-look (i.e., square the pixels of) and phase-filter (i.e., increase the signal-to-noise ratio and smooth) the input interferogram.

Quality of unwrapping: The quality and reliability of the unwrapped results very much depends on the input coherence. If the input coherence is low, the user should NOT expect reliable unwrapping results. For example, if an input interferogram is coherent only in some areas (e.g., "coherence islands"), reliable results can be expected only in those areas. Also see the next note, on InSAR being a relative technique.

Interpretation of unwrapped results: Interferometry is a relative technique, i.e., it gives differences between pixels rather than absolute values. Thus the unwrapped results should also be interpreted as relative. For example, the height/displacement between two pixels is a relative height/displacement between those two pixels. To obtain absolute estimates, a tie point can be used, or assumptions on the signal. For correcting the results of unwrapping and making them absolute, the BandMaths operator can be used.

Further information

Integrated unwrapper

Implementation Reference:
[1] Costantini, M. (1998). A novel phase unwrapping method based on network programming. IEEE Transactions on Geoscience and Remote Sensing, 36, 813-821.

SNAPHU

Phase unwrapping: For a general reference on phase unwrapping see the book of Ghiglia and Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software.

Building and running SNAPHU: A good starting point for obtaining further information on the SNAPHU software and algorithms is the project web page. SNAPHU is software developed for the UNIX environment and as such building it on Linux and MacOS systems is straightforward. On Windows operating systems, SNAPHU can be built and executed in any of the Unix-like environments and command-line interfaces for Windows (Cygwin, MinGW, etc.).

Snaphu Data Export

Exporting NEST data for SNAPHU processing

Important note: Before executing the graph for exporting NEST data for SNAPHU processing, the user is strongly advised to become familiar with the general principles of phase unwrapping in NEST.

SNAPHU Data Export Graph

The main purpose of the SNAPHU data export functionality is three-fold:

1. To export NEST data (bands) in a format compatible with SNAPHU processing,
2. To build a SNAPHU configuration file (snaphu.conf), the file where the processing parameters for SNAPHU are stored,
3. To construct a container NEST product that will store the metadata and bands to be used when the SNAPHU results are ingested back into NEST.

The export graph is visualized in the figure below.

In the export graph, there are two readers and two writers. The reason for two readers is that both the interferometric and the coherence product are needed as input for the SNAPHU export operation. The interferometric product is needed for the metadata and the (complex) phase, while the coherence product is required for the coherence information that is used for weighting in the process of unwrapping. The reason for two writers is that one writer saves the phase product that, as already introduced, serves as a container for the data exchange with SNAPHU, especially in the data ingestion step, while the other writer saves (exports) the data and builds a configuration file for the SNAPHU software.

Phase product part of the export graph

The part indicated by the blue box in the figure below performs the following:

1-Read: Reads the interferometric product.
3-BandSelect: Selects the bands that are to be stored in the "phase" product. It is recommended that only the phase band is selected.
5-Write: Writes the "phase product" in the standard NEST/BEAM DIMAP format.

SNAPHU product part of the export graph

The branches grouped by the red box in the figure below perform the following:

1-Read: Reads the interferometric product.
2-Read: Reads the coherence product.
4-SnaphuExport: Selects the bands that are to be stored in the SNAPHU product; the required bands are phase and coherence. In this step the parameters for SNAPHU are also defined. For more details about the SNAPHU processing parameters please refer to the SNAPHU manual.
6-Write: Writes the SNAPHU product, using a SNAPHU writer. Note that the "Snaphu" output format is predefined.

External processing with SNAPHU

Given that the SNAPHU software is properly installed and configured, unwrapping of the exported NEST product is quite straightforward. In the directory where the SNAPHU product has been saved, the following command is to be executed:

snaphu -f snaphu.conf YOUR_PHASE_BAND.img 99999

where YOUR_PHASE_BAND.img stands for the name of the phase band that is to be unwrapped, and 99999 represents the number of lines of YOUR_PHASE_BAND. Note that the command to be externally called for the phase unwrapping is listed in the header of the snaphu.conf file that is created with the SNAPHU writer. Again, it is strongly recommended that before doing any processing with SNAPHU the user becomes familiar with the software and its process control flags.

Snaphu Data Import

Importing results from SNAPHU processing in NEST

Important note: Before executing this graph, the user is strongly advised to become familiar with the general principles of phase unwrapping in NEST.

SNAPHU Data Import Graph

The main purpose of the SNAPHU data import function is two-fold:

1. To import the results of SNAPHU processing into NEST,
2. To construct a NEST interferometric product that will contain the unwrapped phase band and the metadata of the source interferometric product.

The import graph is visualized in the figure below.

In the import graph, there are two readers and one writer. One reader reads the phase product constructed during the export of data for SNAPHU processing, and the other reader reads the unwrapped result of the SNAPHU processing, imported as "Generic Binary". The Snaphu Import operator arranges the metadata and bands, and constructs an unwrapped phase product compatible with further utilization within NEST. Finally, the writer saves the product in the NEST/BEAM DIMAP format. Specifically, the import graph performs the following:

1-Read-Phase: Reads the "phase-only" interferometric product constructed during the Snaphu Data Export step.
2-Read-Unwrapped-Phase: Reads the "unwrapped-phase-only" product ingested into NEST using the Generic Binary reader. Note that, due to restrictions of the framework, it is currently not possible to chain the generic binary reader in the graph, and hence it is not possible to ingest the unwrapped data directly; this has to be done outside the Snaphu Import graph.
3-SnaphuImport: Arranges the metadata and merges the bands of the source products into an unwrapped phase product. In this step the metadata and bands are arranged in a form compatible with further NEST InSAR processing.
4-Write: Writes the "unwrapped phase product" in the standard NEST/BEAM DIMAP format.

InSAR Stack Overview

InSAR Stack Overview and Master Selection

This NEST function gives general information about an interferometric stack. The acquisition date, sensor and mode, as well as the perpendicular and temporal baselines, are listed for each image. An estimate of the modelled (expected) coherence is also computed and used to select the optimal master image for the InSAR stack. The master image is selected by maximizing the modelled (expected) coherence of the interferometric stack, which keeps the dispersion of the perpendicular baselines as low as possible. The "optimal" master improves the visual interpretation of the interferograms and aids quality assessment.

Implementation details for selection of master image

The stack coherence for a stack with master image m is defined as:

where the symbol B represents the perpendicular baseline between images m and k at the centre of the image, the symbol T the temporal baseline, and fDC the Doppler baseline (the mean Doppler centroid frequency difference). The divisor c in the second equation can be regarded as a critical baseline for which total de-correlation is expected for targets with a distributed scattering mechanism. The values given in the first equation are typical for ERS and Envisat.
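The two equations referred to above are not reproduced here. A commonly used form of such a coherence model, given purely as an illustration and as an assumption (it follows the standard linear decorrelation model used for optimal master selection; the exact expression and constants used by NEST should be taken from the original help page), is:

\rho_m = \frac{1}{K} \sum_{k \neq m} g\!\left(\frac{B_{m,k}}{B_c}\right) g\!\left(\frac{T_{m,k}}{T_c}\right) g\!\left(\frac{f_{DC,\,m,k}}{f_{DC,c}}\right),
\qquad
g(x) = \max\left(0,\; 1 - |x|\right)

where K is the number of slave images and B_c, T_c and f_{DC,c} are the critical perpendicular baseline, temporal baseline and Doppler centroid difference at which total decorrelation occurs.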

Notes and known issues

Reliability of the modeled coherence: The coherence modeled by this function, and the other information it provides, is based purely on the metadata of the input products. The complex data (and any other information from the source product bands) is not used at all in modeling the coherence and the other parameters. Thus this function can also be applied to detected products; however, the interpretation of the modeled coherence and the acceptance of the recommended master image have to be performed with great care.

Known issues: The model used in the computation of the coherence will severely underestimate the coherence in ERS-2 / Envisat cross-interferometry applications. For this and similar applications, a more robust model that integrates the principles of the wave-number shift should be used.

Object Detection

Object Detection

The operator detects objects, such as ships, on the sea surface from SAR imagery.

Major Processing Steps The object detection operation consists of the following four major operations:

1. Pre-processing: Calibration is applied to the source image to make the subsequent pre-screening easier and more accurate.
2. Land-sea masking: A land-sea mask is generated to ensure that detection is focused only on the area of interest.
3. Pre-screening: Objects are detected with a Constant False Alarm Rate (CFAR) detector.
4. Discrimination: False alarms are rejected based on object dimensions.

For details of calibration, the reader is referred to the Calibration operator. Here it is assumed that the calibration pre-processing step has been performed before applying object detection. For details of land-sea mask generation, the reader is referred to the Create Land Mask operator.

Two-Parameter Constant False Alarm Rate (CFAR) Detector The detector used in the pre-screening operation is the two-parameter constant false alarm rate (CFAR) detector. The basic idea is to search for pixels that are unusually bright compared to the pixels in the surrounding area.

Let xt be the pixel under test and T be a given threshold, then the detection criterion can be expressed as

Let f(x) be the ocean clutter probability density function and x range through the possible pixel values, then the probability of false alarm (PFA) is given by

and the above detection criterion is equivalent to the criterion below

If Gaussian distribution is assumed for the ocean clutter, the above detection criterion can be further expressed as

where μb is the background mean, σb is the background standard deviation and t is a detector design parameter which is computed from PFA by the following equation
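The equations referenced in the preceding paragraphs are not reproduced here. Based on the definitions given in the text, the standard two-parameter CFAR relations take the following form (a reconstruction offered as an assumption, to be checked against the original figures):

x_t > T

\mathrm{PFA} = \int_{T}^{\infty} f(x)\,dx

\int_{x_t}^{\infty} f(x)\,dx < \mathrm{PFA}

x_t > \mu_b + t\,\sigma_b

\mathrm{PFA} = \frac{1}{\sqrt{2\pi}} \int_{t}^{\infty} e^{-x^{2}/2}\,dx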

The valid PFA value is in the range [0, 1]. In the actual implementation of the two-parameter CFAR detector, the setup shown in Figure 1 is employed. The target window contains the pixel under test, the background "ring" contains pixels for estimating the underlying background statistics, while the guard "ring" separates the target window from the background ring so that no pixels of an extended target are included in the background ring. The background mean μb and the standard deviation σb used in the criterion are estimated from the pixels in the background ring.

In case the target window contains more than one pixel, this operator uses the following detection criterion

where μt is the mean value of the pixels in the target window. In this case, t should be replaced by t√n (where n is the number of pixels in the target window) in the PFA calculation.

Adaptive Threshold Algorithm The object detection is performed in an adaptive manner by the Adaptive Thresholding operator. For each pixel under test there are three windows, namely the target window, the guard window and the background window, surrounding it (see Figure 1). Normally the target window size should be about the size of the smallest object to detect, the guard window size should be about the size of the largest object, and the background window size should be large enough to estimate the local statistics accurately. The operator proceeds as follows (a sketch of the per-pixel test is given after the list):

● First computes detector design parameter t from user selected PFA using equation above.

● Then computes background mean μb and standard deviation σb using pixels in the background ring.

● Next computes the mean value μt of the target window.

● If μt > μb + σb*t, then the center pixel is detected as part of an object, otherwise not an object.

● Move all windows by one pixel to detect the next pixel.
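As an illustration of the per-pixel decision described above, a minimal Java sketch is given here. It is not the NEST implementation: the class and method names are hypothetical, and the design parameter t is assumed to have already been derived from the user-selected PFA (scaled as described in the text when the target window contains more than one pixel).

// CfarSketch.java (hypothetical illustration of the two-parameter CFAR test)
public class CfarSketch {

    /**
     * Returns true if the target window is detected as (part of) an object.
     *
     * @param targetMean     mean of the pixels in the target window (mu_t)
     * @param backgroundMean mean of the pixels in the background ring (mu_b)
     * @param backgroundStd  standard deviation of the background ring (sigma_b)
     * @param t              detector design parameter derived from the PFA
     */
    public static boolean isObject(double targetMean, double backgroundMean,
                                   double backgroundStd, double t) {
        return targetMean > backgroundMean + backgroundStd * t;
    }

    public static void main(String[] args) {
        // Example values only: a target five standard deviations above the
        // background compared against a threshold parameter of t = 4.
        System.out.println(isObject(1.5, 1.0, 0.1, 4.0)); // prints "true"
    }
}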

Figure 1. Window setup for adaptive thresholding algorithm.

Discrimination The discrimination operation is conducted by the Object Discrimination operator. During this operation, false detections are eliminated based on simple target measurements.

1. The operator first clusters contiguous detected pixels into a single cluster.
2. Then the width and length information of each cluster is extracted.
3. Finally, based on these measurements and the user-specified discrimination criteria, clusters that are too big or too small are eliminated.

Parameters Used For Adaptive Thresholding operator, the following parameters are used (see Figure 2):

1. Target Window Size (m): The target window size in meters. It should be set to the size of the smallest target to detect.
2. Guard Window Size (m): The guard window size in meters. It should be set to the size of the largest target to detect.
3. Background Window Size (m): The background window size in meters. It should be far larger than the guard window size to ensure an accurate calculation of the background statistics.
4. PFA (10^(-x)): The user enters a positive number for the parameter x, and the PFA value is computed as 10^(-x). For example, if the user enters x = 6, then PFA = 10^(-6) = 0.000001.

Figure 2. Adaptive Thresholding Operator dialog box.

For Object Discrimination operator, the following parameters are used (see Figure 3):

1. Minimum Target Size (m): Targets with a dimension smaller than this threshold are eliminated.
2. Maximum Target Size (m): Targets with a dimension larger than this threshold are eliminated.

Figure 3. Object Discrimination Operator dialog box.

Visualize Detected Objects To view the object detection results, the following steps should be followed:

1. Bring up the image.
2. Go to the Layer Manager and add the layer called "Object Detection Results".

The detected objects will be circled on top of the image view (see the example in the figure below). An Object Detection Report will also be produced in XML format in the .nest/log folder.

Figure 4. Object Detection Results overlaid on the image.

Reference:
[1] D. J. Crisp, "The State-of-the-Art in Ship Detection in Synthetic Aperture Radar Imagery", DSTO-RR-0272, 2004-05.

Oil Spill Detection

The operator detects dark spots, such as oil spills, on the sea surface from SAR imagery.

Major Processing Steps The oil spill detection operation consists of the following four operations:

1. Pre-processing: Calibration and speckle filtering are applied to the source image in this step.
2. Land-sea masking: A land-sea mask is created in this step to ensure that detection is focused only on the area of interest.
3. Dark spot detection: Dark spots are detected in this step with an adaptive thresholding method.
4. Clustering and discrimination: Pixels detected as part of a dark spot are clustered and then eliminated based on the dimension of the cluster and the user-selected minimum cluster size.

For details of calibration and speckle filtering operations, the readers are referred to the Calibration operator and the Speckle Filter operator. Here it is assumed that the calibration and speckle filtering have been performed before applying the oil spill detection operator. For details of land-sea mask generation, the readers are referred to the Create Land Mask operator.

Adaptive Threshold Algorithm The dark spots are detected using an adaptive thresholding method.

1. First the local mean backscatter level is estimated using pixels in a large window.
2. Then the detection threshold is set k decibels below the estimated local mean backscatter level. Pixels within the window with values lower than the threshold are detected as dark spots. k is a user-selected parameter (see the Threshold Shift parameter below and the sketch after this list).
3. Shift the window to the next position and repeat steps 1 and 2.
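As a minimal illustration of step 2, assuming the band values are in linear intensity units (an assumption; the class and method names below are hypothetical and not part of NEST):

// DarkSpotSketch.java (hypothetical illustration of the dark-spot threshold)
public class DarkSpotSketch {

    /** Returns the detection threshold, k decibels below the local mean backscatter. */
    public static double threshold(double localMean, double thresholdShiftDb) {
        return localMean * Math.pow(10.0, -thresholdShiftDb / 10.0);
    }

    public static void main(String[] args) {
        double localMean = 0.05;       // local mean backscatter (linear units)
        double k = 2.0;                // Threshold Shift (dB)
        double pixel = 0.02;
        // Pixels darker than the threshold are flagged as part of a dark spot.
        System.out.println(pixel < threshold(localMean, k)); // prints "true"
    }
}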

Discrimination

1. First the contiguous detected pixels are clustered into a single cluster.
2. Then clusters with sizes smaller than the user-selected Minimum Cluster Size are eliminated.

Visualize Detected Oil Spill The oil spill detection bit mask is output as a separate band. To view the oil spill detection results, the following steps should be followed:

1. Bring up the image.
2. Go to the Layer Manager and add the oil spill bit mask band as a layer.

Parameters Used For dark spot detection, the following parameters are used (see figure 1):

1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands for processing. If no bands are selected, then by default all bands are selected.
2. Background Window Size: The window size in pixels for computing the local mean backscatter level.
3. Threshold Shift (dB): The detection threshold is lower than the local mean backscatter level by this amount.

Figure 1. Oil Spill Detection Operator dialog box.

For clustering and discrimination, the following parameters are used (see Figure 2):

1. Minimum Cluster Size: The minimum cluster size in square kilometers. Clusters with a size smaller than this value are eliminated.

Figure 2. Oil Spill Clustering operator dialog box.

Reference:
[1] A. S. Solberg, C. Brekke and R. Solberg, "Algorithms for oil spill detection in Radarsat and ENVISAT SAR images", Geoscience and Remote Sensing Symposium, 2004 (IGARSS '04), Proceedings, 2004 IEEE International, 20-24 Sept. 2004, pp. 4909-4912, vol. 7.

Create Land Mask

The Create Land Mask operation will turn any pixels on land into the no-data value. If the "preserve land" check box is set to true then all land pixels will be preserved and all ocean pixels will be set to the no-data value. The operator will automatically download a coarse SRTM 5-minute DEM the first time it is used. This DEM is used to very quickly determine whether a pixel is on land or in the ocean. Alternatively, a geometry from the product could also be used. This could be a user-created ROI or an imported shapefile. The following parameters are used by the operator:

1. Source Band: All bands (real or virtual) of the source product. The user can select one or more bands.
2. Mask the Land: Checkbox indicating that land pixels will become the no-data value.
3. Mask the Sea: Checkbox indicating that sea pixels will become the no-data value.
4. Use Geometry as Mask: Select a geometry or ROI from the product to use as the mask. Anything outside the area will be the no-data value.
5. Invert Geometry: Anything inside the ROI or geometry will be the no-data value.
6. Bypass: Skip any land masking.

Wind Field Estimation

Wind Field Estimation

As the wind blows across the ocean surface, it generates surface roughness generally aligned with the wind direction. Consequently the radar backscatter from this roughened surface is related to the wind speed and direction. This operator retrieves wind speed and direction from C-band SAR imagery.

Major Processing Steps The general approach for the wind field retrieval is as follows:

1. First a land-sea mask is generated to ensure that the estimation is focused only on the sea surface.
2. Then the SAR image is divided into a grid using the user-specified window size.
3. For each grid cell, a wind direction (with a 180° ambiguity) is estimated from features in the SAR image using a frequency domain method.
4. With the wind direction estimated for the grid cell, the wind speed is finally estimated by applying the CMOD5 model to the Normalized Radar Cross Section (NRCS).

For details of land-sea mask generation, the reader is referred to the Create Land Mask operator.

Wind Direction Estimation The wind direction is estimated from the features in the SAR image. Detailed steps for the estimation are given below:

1. For each window within which a wind direction will be estimated, a local FFT size is determined. The FFT size is 2/3 of the window size, so that four spectra can be computed in the window, with each spectral region having a 50% overlap with the neighboring region.
2. Each window is flattened by applying a large average filter and then dividing by the filtered image.
3. The FFTs are applied and the four resulting spectra are averaged.
4. An annulus is applied to the spectrum to zero out any energy outside a wavenumber region. The limits of the annulus are set to wavelengths of 3 km to 15 km.
5. A 3x3 median filter is then applied to the spectrum to remove noise.
6. A 2D polynomial is fitted to the resulting spectral samples and the direction through the origin which has the largest quadratic term (i.e. the widest extent) is determined. The wind direction is then assumed to be 90 degrees from this direction.

Wind Speed Estimation

● The wind speed is estimated using the CMOD5 model for NRCS developed by Hersbach et al. [1] for VV-polarized C-band scatterometry.

● For ENVISAT HH-polarized products, where the CMOD5 model is not directly applicable, the operator first converts the NRCS at HH polarization into a corresponding NRCS for VV polarization with the following equation, and then applies the CMOD5 model to the converted NRCS:

where θ is the incidence angle and α is set to 1.
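The conversion equation itself is not reproduced here. A commonly used polarization-ratio form that is consistent with the θ and α defined above, stated as an assumption (the exact expression used by the operator should be checked against the original help page), is:

\sigma^{0}_{VV} = \sigma^{0}_{HH}\, \frac{\left(1 + 2\tan^{2}\theta\right)^{2}}{\left(1 + \alpha\tan^{2}\theta\right)^{2}}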

For details of the CMOD5 model, the readers are referred to [1].

Products Supported

● The operator currently supports only ERS and ENVISAT (VV- and HH-polarized) products. The source product is assumed to have been calibrated before applying the operator.

Parameters Used The following parameters are used by the operator:

1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands for processing. If no bands are selected, then by default all bands are selected.
2. Window Size: The dimension of the window for which wind direction and speed are estimated.

Figure 1. Wind Field Estimation dialog box

Visualize Estimated Wind Direction To view the estimated wind directions, the following steps should be followed:

1. Bring up the image.
2. Go to the Layer Manager and add the layer called "Wind Field Estimation Results".

Then wind directions will be displayed as shown in the example below. Note that the wind direction is indicated by double-headed arrows because a 180° ambiguity exists in the estimated wind direction. Also, for those grid cells in which land pixels are found, the wind directions are not estimated and hence not displayed.

Figure 2. Example of wind direction display

Wind Field Retrieval Result Report The wind field estimation results are saved into an XML file, .nest/log/wind_field_report.xml, with the following information given for each window in which a wind estimate is made:

1. lat: Latitude of the central point of the window.
2. lon: Longitude of the central point of the window.
3. speed: Estimated wind speed in m/s.
4. dx: X component of the estimated wind vector.
5. dy: Y component of the estimated wind vector.
6. ratio: In estimating the wind direction, the spectrum of a given window is fitted with a 2D polynomial (of the form f(x,y) = ax^2 + bxy + cy^2 + dx + ey + f). The ratio reported is the ratio of the minor semi-axis to the major semi-axis of the fitted polynomial. Generally speaking, the smaller the ratio value, the more reliable the estimated wind direction.

Reference:
[1] H. Hersbach, "CMOD5: An Improved Geophysical Model Function for ERS C-Band Scatterometry", Report of the European Centre for Medium-Range Weather Forecasts (ECMWF), 2003.
[2] C. C. Wackerman, W. G. Pichel, P. Clemente-Colon, "Automated Estimation of Wind Vectors from SAR", 12th Conference on Interactions of the Sea and Atmosphere, 2003.

General Design

NEST Architecture

NEST consists of a collection of processing modules and data product readers and writers. All modules are centered on the Generic Product Model (GPM) of the BEAM-NEST Core. The GPM is a common, unified data model designed so that all data readers convert data into this data model and all analysis and processing tools use this data model exclusively. Data product reader modules ingest all the data of a particular product, including metadata, and transform it into the GPM data structure. The GPM is abstract enough to handle all types of data products without losing any information. The NEST tools have a single interface to the GPM through which they work with the data. Data product writers are able to take the data from the GPM and produce an external file format. With a GPM, file conversions from one file format to another are easily achieved with the appropriate reader and writer modules. Furthermore, the DAT, the tools and future plug-ins are not dependent on which data products are supported or on any specific complexities of the file formats. NEST's primary goal is to read ESA and third-party SAR data products, provide tools for calibration, orthorectification, co-registration, interferometry and data manipulation, and then convert the data into common file formats used by other third-party SAR software.

Built on Beam

The NEST architecture is built upon the proven and extensible architecture of BEAM. BEAM features an Application Programming Interface (API) that was designed and developed from the beginning to be used by third parties to extend BEAM or re-use it in their own applications. BEAM is an application suite which facilitates the utilization, viewing and processing of MERIS, AATSR and ASAR data products of the ESA ENVISAT environmental satellite. It provides a multitude of tools and functions to visualize, analyze, manipulate and modify earth observation data. Although BEAM was intended specifically for optical data, its architecture has been designed and developed to facilitate the extension and re-use of its code base, such that it is feasible to use BEAM's core framework as the building blocks for developing a Synthetic Aperture Radar (SAR) toolbox. NEST and BEAM share a common core that enables the exchange of modules between the two toolboxes. This common core is maintained cooperatively by both Array Systems Computing Inc., the developers of NEST, and Brockmann Consult, the developers of BEAM. The majority of the NEST functionality is encapsulated in BEAM plug-in modules. As such, some modules are interchangeable between the two systems.

Detailed Design

BEAM-NEST Core

The common BEAM-NEST Core architecture consists of various plug-in modules which are independently developed, modified and versioned. The core framework is made up of the beam-core, beam-gpf, beam-processing, beam-ui and beam-visat modules.

The core modules make use of ceres-core, ceres-ui and ceres-binding. These modules package utility classes for module registration, versioning, application building and swing user interface helper functions. The beam-core contains most of the managers and data model including the data IO for the product readers and format writers. The beam-gpf is the Graph Processing Framework (GPF), which implements the new processing framework introduced in version 4 of the software. Processing tasks are implemented as Operators. The GPF provides a way to execute a chain of sequential processing steps on an image. The beam-processing is the old version 3 processing framework, which will not be used by NEST directly but may still have dependencies for some BEAM operators. The beam-ui provides the user interface framework for creating applications, windows and dialogs. Beam-visat is the primary application and user interface to the tools in BEAM. VISAT supports extensions for views for displaying data, actions to add menu items and toolbar buttons to trigger user initiated events. For more information, please refer to the Detailed NEST API JavaDoc.

Open Source NEST

Open Source Development

The NEST software and full source code are freely distributed under the GNU GPL license.

Open source makes software inherently independent of specific vendors, and suppliers. The software can be freely distributed and shared by large communities and includes the source code and the right to modify it. This not only ensures that there isn’t a single entity on which the future of the software depends, but also allows for unlimited improvements and tuning of the quality and functionality of the software.

By making NEST open source, future evolution and growth of the toolbox will be made possible by the community of users and developers that contribute to the project.

Building NEST

Building NEST from the source

1. Download and install the required build tools

● Install J2SE 1.6 JDK and set JAVA_HOME accordingly.

● Install Maven and set MAVEN_HOME accordingly.

● Install Java Advanced Imaging JDK & JRE

● Install JAI Image I/O JDK & JRE

2. Add $JAVA_HOME/bin and $MAVEN_HOME/bin to your PATH.
3. Download the NEST source code and unpack it into $MY_PROJECTS/nest.
4. Copy the 3rd-party folder found in nest-developer-tools\-dependencies into your Maven repository, which by default is ~/.m2/repository on Linux and C:\Documents and Settings\user_name\.m2\repository on Windows.
5. Open a Command Prompt or Terminal and cd into the nest source code folder.
6. To build NEST from source, type mvn compile or mvn package.
7. To build a release of the software, type mvn package assembly:assembly.

Using the NEST source with an Integrated Development Environment (IDE)

IntelliJ IDEA users:

1. Build IDEA project files for NEST: type mvn compile idea:idea
2. In IDEA, go to Main Menu/File/Open Project and open the created project file $MY_PROJECTS/nest/nest.ipr

Eclipse users:

1. Build Eclipse project files for NEST: type mvn compile eclipse:eclipse
2. From Eclipse, click on Main Menu/File/Import
3. Select General/Existing Project into Workspace
4. Select Root Directory $MY_PROJECTS/nest
5. Set the M2_REPO classpath variable:

❍ Open Window/Preferences..., then select Java/Build Path/Classpath Variables

❍ Select New... and add variable M2_REPO

❍ Select Folder... and choose the location of your Maven local repository, e.g. ~/.m2/repository

Note: In Eclipse, some purposely malformed XML files used in a unit test may prevent you from building the source. Simply delete the files module-no-xml.xml and module-malformed-xml.xml in $MY_PROJECTS/nest/beam/ceres-0.x/ceres-core/src/test/resources/com/bc/ceres/core/runtime/internal/xml

For both IDEA and Eclipse, use the following configuration to run DAT as an Application:

● Main class: com.bc.ceres.launcher.Launcher

● VM parameters: -Xmx1024M -Dceres.context=nest

● Program parameters: none

● Working directory: $MY_PROJECTS/nest/beam

● Use the classpath of module nest-bootstrap (for Eclipse, click on the user entry, press the Add Project button and select nest-bootstrap)

Developing A Reader

Readers

To create a reader plugin implement the ProductReaderPlugIn interface.

public interface ProductReaderPlugIn extends ProductIOPlugIn {

    /**
     * Gets the qualification of the product reader to decode a given input object.
     *
     * @param input the input object
     * @return the decode qualification
     */
    DecodeQualification getDecodeQualification(Object input);

    /**
     * Returns an array containing the classes that represent valid input types for this reader.
     * Instances of the classes returned in this array are valid objects for the setInput method of the
     * ProductReader interface (the method will not throw an InvalidArgumentException in this case).
     *
     * @return an array containing valid input types, never null
     */
    Class[] getInputTypes();

    /**
     * Creates an instance of the actual product reader class. This method should never return null.
     *
     * @return a new reader instance, never null
     */
    ProductReader createReaderInstance();
}

The reader plugin should create a new instance of your reader in createReaderInstance().

To create a reader implementation, extend the AbstractProductReader. In readProductNodesImpl(), read your data files and create a new Product object.

In readBandRasterDataImpl(), fill the destination buffer with band data for the requested rectangular area.
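For orientation, a skeletal reader plug-in and reader pair is sketched below. This is an illustrative sketch only, not code from NEST: MyFormatReaderPlugIn, MyFormatReader and the ".myf" extension are hypothetical, the additional methods declared by ProductIOPlugIn (format names, default file extensions, description) are omitted for brevity, and the readBandRasterDataImpl signature shown is the one from the BEAM 4.x AbstractProductReader and should be verified against the NEST API JavaDoc.

// MyFormatReaderPlugIn.java (sketch)
import org.esa.beam.framework.dataio.DecodeQualification;
import org.esa.beam.framework.dataio.ProductReader;
import org.esa.beam.framework.dataio.ProductReaderPlugIn;

public class MyFormatReaderPlugIn implements ProductReaderPlugIn {

    // Decide whether the given input (e.g. a file path) looks like our format.
    public DecodeQualification getDecodeQualification(Object input) {
        return input.toString().toLowerCase().endsWith(".myf")
                ? DecodeQualification.INTENDED : DecodeQualification.UNABLE;
    }

    // Input types accepted by the setInput() method of the reader.
    public Class[] getInputTypes() {
        return new Class[] { String.class, java.io.File.class };
    }

    // Called by the framework to obtain a new reader instance.
    public ProductReader createReaderInstance() {
        return new MyFormatReader(this);
    }

    // Note: ProductIOPlugIn declares further methods (format names, default file
    // extensions, description, file filter) that a real plug-in must also implement.
}

// MyFormatReader.java (sketch)
import java.io.IOException;
import com.bc.ceres.core.ProgressMonitor;
import org.esa.beam.framework.dataio.AbstractProductReader;
import org.esa.beam.framework.dataio.ProductReaderPlugIn;
import org.esa.beam.framework.datamodel.Band;
import org.esa.beam.framework.datamodel.Product;
import org.esa.beam.framework.datamodel.ProductData;

public class MyFormatReader extends AbstractProductReader {

    public MyFormatReader(ProductReaderPlugIn plugIn) {
        super(plugIn);
    }

    // Open the data files, read the metadata and build the Product structure.
    protected Product readProductNodesImpl() throws IOException {
        Product product = new Product("myProduct", "MyFormatType", 1000, 1000);
        // ... add bands, tie-point grids, geo-coding and metadata here ...
        return product;
    }

    // Fill destBuffer with band data for the requested rectangular area.
    protected void readBandRasterDataImpl(int sourceOffsetX, int sourceOffsetY,
                                          int sourceWidth, int sourceHeight,
                                          int sourceStepX, int sourceStepY,
                                          Band destBand,
                                          int destOffsetX, int destOffsetY,
                                          int destWidth, int destHeight,
                                          ProductData destBuffer,
                                          ProgressMonitor pm) throws IOException {
        // ... read the requested pixels from the data file into destBuffer ...
    }
}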

Maven DataIO Archetype

The Maven 2 Archetype Plugin for NEST data I/O modules is used to generate the skeleton of a new NEST data I/O module.

A Maven Archetype is a template toolkit for generating a new module package. By using the Maven Archetype you can create a module structure easily and get started adding your code to the module.

A DataIO Archetype will generate a product reader and writer within the same package. Before beginning, make sure that you have built the NEST source code and do a maven install to ensure that all dependencies are in the repository.

From the command line, type the following from the NEST source code root folder:

mvn archetype:create -DarchetypeGroupId=org.esa.nest.maven -DarchetypeArtifactId=maven-nest-dataio-archetype -DarchetypeVersion=1.0 -DgroupId=myGroupId -DartifactId=myArtifactId -Dversion=myVersion -DpackageName=myPackageName

where

● myGroupId will become the groupId of the generated POM.
● myArtifactId will become the artifactId of the generated POM and the NEST module's symbolicName.
● myVersion will become the version of the generated POM and NEST module. Defaults to 1.0-SNAPSHOT.

● myPackageName will become the source package name. Defaults to the value of myGroupId .

Please also refer to the documentation of the Maven 2 Archetype Plugin.

Example

mvn archetype:create -DarchetypeGroupId=org.esa.nest.maven -DarchetypeArtifactId=maven-nest-dataio-archetype -DarchetypeVersion=1.0 -DgroupId=org.esa.nest -DartifactId=nest-sar-io

Publishing a Reader

Reader implementations are published via the Java service provider interface (SPI). A JAR publishes its readers in the resource file META-INF/services/org.esa.beam.framework.dataio.ProductReaderPlugIn. In this file add your reader SPI, e.g.: org.esa.nest.dataio.radarsat2.Radarsat2ProductReaderPlugIn

Adding Menu Item Actions

In the modules.xml file found in the resources folder of the package, add an Action to create a menu item in the DAT. State the class of the Action to be called and the text to show in the menu item.

importRadarsat2Product org.esa.beam.visat.actions.ProductImportAction Radarsat 2 Import a Radarsat2 data product or product subset. Import a Radarsat2 data product or product subset. icons/Import24.gif importRadarsatProduct importRadarsat2Product

Developing A Writer

Writers

To create a writer plugin implement the ProductWriterPlugIn interface.

public interface ProductWriterPlugIn extends ProductIOPlugIn {

    /**
     * Returns an array containing the classes that represent valid output types for this writer.
     * Instances of the classes returned in this array are valid objects for the setOutput method of the
     * ProductWriter interface (the method will not throw an InvalidArgumentException in this case).
     *
     * @return an array containing valid output types, never null
     * @see ProductWriter#writeProductNodes
     */
    Class[] getOutputTypes();

    /**
     * Creates an instance of the actual product writer class. This method should never return null.
     *
     * @return a new writer instance, never null
     */
    ProductWriter createWriterInstance();
}

The writer plugin should create a new instance of your writer in createWriterInstance().

To create a writer implementation, extend the AbstractProductWriter. In writeProductNodesImpl(), write your data files.

writeBandRasterData() writes raster data from the given in-memory source buffer into the data sink specified by the given source band and region.
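By analogy with the reader example above, a minimal writer plug-in skeleton might look as follows. This is again a hypothetical sketch: MyFormatWriterPlugIn and MyFormatWriter are illustrative names, MyFormatWriter would extend AbstractProductWriter, and the remaining ProductIOPlugIn methods are omitted.

// MyFormatWriterPlugIn.java (sketch)
import org.esa.beam.framework.dataio.ProductWriter;
import org.esa.beam.framework.dataio.ProductWriterPlugIn;

public class MyFormatWriterPlugIn implements ProductWriterPlugIn {

    // Output types accepted by the setOutput() method of the writer.
    public Class[] getOutputTypes() {
        return new Class[] { String.class, java.io.File.class };
    }

    // Called by the framework to obtain a new writer instance.
    public ProductWriter createWriterInstance() {
        return new MyFormatWriter(this);  // MyFormatWriter extends AbstractProductWriter
    }

    // Note: ProductIOPlugIn declares further methods (format names, default file
    // extensions, description) that a real plug-in must also implement.
}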

Maven DataIO Archetype

The Maven 2 Archetype Plugin for NEST data I/O modules is used to generate the skeleton of a new NEST data I/O module.

A Maven Archetype is a template toolkit for generating a new module package. By using the Maven Archetype you can create a module structure easily and get started adding your code to the module.

A DataIO Archetype will generate a product reader and writer within the same package. Before beginning, make sure that you have built the NEST source code and do a maven install to ensure that all dependencies are in the repository.

From the command line, type the following from the NEST source code root folder:

mvn archetype:create -DarchetypeGroupId=org.esa.nest.maven -DarchetypeArtifactId=maven-nest-dataio-archetype -DarchetypeVersion=1.0 -DgroupId=myGroupId -DartifactId=myArtifactId -Dversion=myVersion -DpackageName=myPackageName

where

● myGroupId will become the groupId of the generated POM.
● myArtifactId will become the artifactId of the generated POM and the NEST module's symbolicName.
● myVersion will become the version of the generated POM and NEST module. Defaults to 1.0-SNAPSHOT.

● myPackageName will become the source package name. Defaults to the value of myGroupId .

Please also refer to the documentation of the Maven 2 Archetype Plugin.

Example

mvn archetype:create -DarchetypeGroupId=org.esa.nest.maven -DarchetypeArtifactId=maven-nest-dataio-archetype -DarchetypeVersion=1.0 -DgroupId=org.esa.nest -DartifactId=nest-sar-io

Publishing a Writer

Writer implementations are published via the Java service provider interface (SPI). A JAR publishes its writers in the resource file META-INF/services/org.esa.beam.framework.dataio.ProductWriterPlugIn. In this file add your writer SPI, e.g.: org.esa.beam.dataio.geotiff.GeoTiffProductWriterPlugIn

Adding Menu Item Actions

In the modules.xml file found in the resources folder of the package, add an Action to create a menu item in the DAT. State the class of the Action to be called and the text to show in the menu item.

exportGeoTIFFProduct org.esa.beam.dataio.geotiff.GeoTiffExportAction GeoTIFF true O Export GeoTIFF Product... Export a GeoTIFF data product or subset. Export a GeoTIFF data product or product subset. exportGeoTIFFProduct

Developing An Operator

Operators

The Operator interface is simple to extend. An Operator basically takes a source product as input and creates a new target product within initialize(). The algorithm implementation for what your operator does goes inside computeTile() or computeTileStack(). Operators work on the data tile by tile. The size of the tile may depend on the requests of other Operators in the graph.

public interface Operator {
    OperatorSpi getSpi();
    Product initialize(OperatorContext context);
    void computeTile(Tile targetTile, ProgressMonitor pm);
    void computeTileStack(Rectangle targetTileRectangle, ProgressMonitor pm);
    void dispose();
}

The computeTile and computeTileStack methods express different application requirements. Clients may implement either computeTile or computeTileStack, or both. In general, the algorithm dictates which of the methods will be implemented. Some algorithms can compute their output bands independently (band arithmetic, radiance to reflectance conversion), others cannot. The GPF selects the method which best fits the application requirements:

● In order to display an image of a band, the GPF is asked to compute tiles of single bands. The GPF therefore will prefer calling the computeTile method, if implemented. Otherwise it has to call computeTileStack, which might not be the best choice in this case.

● In order to process in batch mode or to save a product to disk, the GPF is asked to compute the tiles of all bands of a product. The GPF will therefore prefer calling the computeTileStack method, if implemented. Otherwise it will consecutively call computeTile for each output band. (A skeletal operator is sketched below.)
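To make the shape of an operator concrete, a skeletal operator following the interface as listed above is sketched here. It is not taken from NEST: MyOp is a hypothetical name, the GPF types live in the org.esa.beam.framework.gpf packages, and the base class in your NEST/BEAM version may expose slightly different signatures, so check the Operator JavaDoc or generate a template with the Maven GPF archetype described next.

// MyOp.java (sketch following the Operator interface listed above)
import java.awt.Rectangle;
import com.bc.ceres.core.ProgressMonitor;
import org.esa.beam.framework.datamodel.Product;
// The GPF types (Operator, OperatorSpi, OperatorContext, Tile) are assumed to be
// imported from the org.esa.beam.framework.gpf packages of your NEST/BEAM version.

public class MyOp implements Operator {

    private OperatorSpi spi;          // typically provided by the operator's Spi class
    private Product targetProduct;

    public OperatorSpi getSpi() {
        return spi;
    }

    // Create the target product here: copy raster dimensions, geo-coding and
    // metadata from the source product and declare the output bands.
    public Product initialize(OperatorContext context) {
        // targetProduct = new Product(...); add bands, copy metadata, ...
        return targetProduct;
    }

    // Compute one tile of a single target band (preferred when displaying an image).
    public void computeTile(Tile targetTile, ProgressMonitor pm) {
        // Loop over the tile rectangle, read source samples, write target samples.
    }

    // Compute the tiles of all target bands for one rectangle (preferred for batch/writing).
    public void computeTileStack(Rectangle targetTileRectangle, ProgressMonitor pm) {
        // Compute all output bands for the given rectangle in one pass.
    }

    public void dispose() {
        // Release any resources held by the operator.
    }
}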

Maven GPF Archetype

The Maven 2 Archetype Plugin for NEST GPF modules is used to generate the skeleton of a new NEST GPF module.

A Maven Archetype is a template toolkit for generating a new module package. By using the Maven Archetype you can create a module structure easily and get started adding your code to the module.

A GPF Archetype will generate a single tile and a multi tile Operator within the same package. Before beginning, make sure that you have built the NEST source code and do a maven install to ensure that all dependencies are in the repository.

From the command line type the following from the NEST source code root folder:

mvn archetype:create -DarchetypeGroupId=org.esa.nest.maven -DarchetypeArtifactId=maven-nest-gpf-archetype -DarchetypeVersion=1.0 -DgroupId=myGroupId -DartifactId=myArtifactId -Dversion=myVersion -DpackageName=myPackageName

where

● myGroupId will become the groupId of the generated POM.
● myArtifactId will become the artifactId of the generated POM and the NEST module's symbolicName.
● myVersion will become the version of the generated POM and NEST module. Defaults to 1.0-SNAPSHOT.

● myPackageName will become the source package name. Defaults to the value of myGroupId .

Please also refer to the documentation of the Maven 2 Archetype Plugin.

Example

mvn archetype:create -DarchetypeGroupId=org.esa.nest.maven -DarchetypeArtifactId=maven-nest-gpf-archetype -DarchetypeVersion=1.0 -DgroupId=org.esa.nest -DartifactId=nest-sar-op

Publishing an Operator

Operator implementations are published via the Java service provider interface (SPI). A JAR publishes its operators in the resource file META-INF/services/org.esa.beam.framework.gpf.OperatorSpi. In this file add your operator SPI, e.g.: org.esa.nest.gpf.MultilookOp$Spi

In your Operator package add a class that extends OperatorSpi. This class may also serve as a factory for new operator instances.

public static class Spi extends OperatorSpi {
    public Spi() {
        super(MultilookOp.class);
    }
}

Adding Menu Item Actions

In the modules.xml file found in the resources folder of the package, add an Action to create a menu item in the DAT. State the class of the Action to be called and the text to show in the menu item.

SlantRangeGroundRangeOp org.esa.nest.dat.SRGROpAction Slant Range to Ground Range Converts a product to/from slant range to/from ground range geometry SRGROp

The Action class should extend AbstractVisatAction and override the actionPerformed handler:

public class SRGROpAction extends AbstractVisatAction {

    private DefaultSingleTargetProductDialog dialog;

    @Override
    public void actionPerformed(CommandEvent event) {
        if (dialog == null) {
            dialog = new DefaultSingleTargetProductDialog("SRGR", getAppContext(),
                    "Slant Range to Ground Range", getHelpId());
            dialog.setTargetProductNameSuffix("_GR");
        }
        dialog.show();
    }
}

Remote Sensing Tutorials

Remote Sensing Links

Eduspace Principles of Remote Sensing from ESA http://www.eduspace.esa.int/eduspace

Fundamentals of Remote Sensing from CCRS http://ccrs.nrcan.gc.ca/resource/tutor/fundam/index_e.php

Remote Sensing Tutorial from NASA http://rst.gsfc.nasa.gov/

Principles of Remote Sensing from the National University of Singapore http://www.crisp.nus.edu.sg/~research/tutorial/rsmain.htm

Virtual Library on Remote Sensing from VTT Technical Research Centre of Finland http://virtual.vtt.fi/virtual/space/rsvlib/

Synthetic Aperture Radar Links

SAR Courses and Applications from ESA http://earth.esa.int/applications/data_util/SARDOCS/spaceborne/

SAR Land Applications from ESA http://www.tiger.esa.int/training/SAR_LA1_th.pdf

Polarimetry Tutorial from ESA http://envisat.esa.int/polsarpro/tutorial.html

Radar Remote Sensing from CCRS http://www.ccrs.nrcan.gc.ca/resource/tutor/gsarcd/index_e.php

CCRS Image Interpretation Quiz http://www.ccrs.nrcan.gc.ca/resource/tutor/iquiz/intro_e.php

SAR Users Manual from NOAA http://www.sarusersmanual.com/

Basic Concepts and Image Formation http://epsilon.nought.de/tutorials/processing/index.php

Interferometric SAR Training Manual from ESA http://www.esa.int/esaMI/ESA_Publications/SEM867MJC0F_0.html

SAR Interferometry http://epsilon.nought.de/tutorials/insar_tmr/index.php

2009 International GEO Workshop on SAR to Support Agricultural Monitoring http://www.cgeo.gc.ca/events-evenements/sar-ros/training-formation-man-eng.pdf

Radar Remote Sensing Overview

ESA 2010 Warsaw SAR Course part I

ESA 2010 Warsaw SAR Course part II

Quick Start With The DAT

Opening Products

Image Products can be opened by either the Open Raster Dataset menu item in the File menu

or by selecting the reader directly from the Product Readers menu. The Open Raster Dataset menu item will try to determine automatically which reader to use, and it can open one or multiple data products stored on disk in the DAT. You can select more than one product, e.g. by pressing CTRL and selecting the desired products with the mouse. By selecting a specific Product Reader, the path to datasets of that type can be stored and reused the next time you open a similar product. After a product has been opened, it is listed in the Products View, where you can examine the product structure and metadata. A Product consists of the following:

● Identification: Basic information on the product (Mission, Product type, Acquisition time, Track and Orbit)

● Metadata: This includes all the original metadata within the product, the Processing_graph history recording the processing that was done, and the Abstracted Metadata, which holds the important metadata fields used by the Operators in a common format.

● Tie Point Grids: Raster grids created by interpolating the tie-point information within the product. The interpolation is done on the fly according to the product.

● Bands: The actual bands inside the product and virtual bands created from expressions. Different icons are used to distinguish these bands.

The information in the metadata and the bands created can vary according to the product. Below is an example of an ENVISAT ASAR IMP product opened in the Products View. Double click on the Abstracted Metadata to view all critical fields.

Double click on a band to open it in an Image View.

After you have opened an Image View you can modify the colors of the image using the colour manipulation window or overlay an opaque or semi-transparent bitmask with the bitmask overlay window. Both windows operate in non-modal mode, which means they float over DAT's main frame, and you can place them somewhere on your desktop. To see what region of the world is covered by the data product, select the World Map View from the View/ ToolViews menu.

Changing Default Paths

Open the Settings dialog from the Edit menu. The Settings dialog allows you to customize the default data path directories for your Digital Elevation Models and orbit files. For DELFT Precise Orbits and the SRTM 3 sec DEM, the Toolbox will automatically download the required files as needed and place them in the folders specified in the Settings dialog. For the ACE 30 sec DEM, you will need to download the files manually and place them in the folder specified in the Settings dialog.

Creating A Project

With Projects, you can organise and store complex processing over multiple datasets. The main advantage of using Projects is that the processed images are automatically organized in separate directories. Start by creating a New Project from the File menu. From the New Project dialog, browse for a folder and enter the project name. A new Project folder with the given name will be created, along with a Project XML file which stores information about your Project. Next, use the Product Library to browse and select data products to import, or double click on them to open them directly. From the Project View you can double click on a product to open it in the Product View. From the Product View you can examine a product's metadata or double click on a band to open it in an Image View.

Coregistration Tutorial

Coregistering Products

Image co-registration is fundamental for Interferometry SAR (InSAR) imaging and its applications, such as DEM map generation and analysis. To obtain a high quality InSAR image, the individual complex images need to be co-registered to sub-pixel accuracy.

The toolbox will accurately co-register one or more slave images with respect to a master image. The co-registration function is fully automatic, in the sense that it does not require the user to manually select ground control points (GCPs) for the master and slave images.

Images may be fully or only partly overlapping and may be from acquisitions taken at different times using multiple sensors or from multiple passages of the same satellite.

The achievable co-registration accuracy for images in the same acquisition geometry and over flat areas will be better than 0.2 pixels for two real images and better than 0.05 pixels for two complex images.

The image co-registration is accomplished in three major processing steps (see Figure 1) with three operators: Create Stack operator, GCP Selection operator and Warp operator.

Figure 1. Image co-registration

Input Images The input images for the co-registration function can be complex or real. But all images must belong to the same type (i.e. they must all be complex or all real) and have the same projection system (all slant range or all ground range projected or all geocoded). If the images are not in the same projection, the slave image(s) should be reprojected into the same projection system as that of the master image.

Create Stack The Create Stack operator collocates the master and slave images. Basically the slave image data is resampled into the geographical raster of the master image. By doing so, the master and slave images share the same geopositioning information and have the same dimensions. For details, readers are referred to the Create Stack operator. For coregistration of detected products it is acceptable to use one of the resampling methods. For coregistration of complex products for interferometry, the option not to do any resampling in CreateStack should be used.

GCP Selection The GCP Selection operator then creates an alignment between the master and slave images by matching the user-selected master GCPs to their corresponding slave GCPs. There are two stages in the operation: coarse registration and fine registration. For real image co-registration, the coarse registration is applied. The registration is achieved by maximizing the cross-correlation between the master and slave images on a series of imagettes defined across the images. For complex image co-registration, the additional fine registration is applied. The registration is achieved by maximizing the coherence between the master and slave images at a series of imagettes defined across the images (the coherence measure is recalled below). For details, readers are referred to the GCP Selection operator.
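For reference, the coherence measure that is typically maximized in such fine registration, given here as an assumption (the exact estimator used by the GCP Selection operator is described in its own help page), is the magnitude of the normalized complex cross-correlation between the master samples M_i and slave samples S_i within an estimation window:

\gamma = \frac{\left|\sum_{i} M_i S_i^{*}\right|}{\sqrt{\sum_{i} \left|M_i\right|^{2} \, \sum_{i} \left|S_i\right|^{2}}}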

Warp With the master-slave GCP pairs selected, a warp function is created by the Warp operator, which maps pixels in the slave image into pixels in the master image. For details of the Warp operator, readers are referred to Warp operator.

Co-Registration from the DAT Detailed steps of using this operator are given below:

1. From the SAR Tools menu, select Coregistration.
2. Add a master product to the ProductSet Reader.
3. Add any slave products to the ProductSet Reader, or drag and drop products from a Project.
4. In the CreateStack tab, select one source band from the master and one source band from the slave for detected products. If you are using complex data products, select bands i and q for the master and bands i and q from the slaves. For interferometric processing, select no resampling and Master extents in the CreateStack.
5. In the GCP Selection tab, enter the number of GCPs to create.
6. In the Warp tab, select the RMS threshold and warp polynomial order.
7. In the Target tab, enter the output product name and location.
8. Press the Run button to begin processing.

Orthorectification Tutorial

Terrain Correction

The Terrain Correction Operator will produce an orthorectified product in the WGS 84 geographic coordinates. The Range Doppler orthorectification method [1] is implemented for geocoding SAR images from a single 2D raster radar geometry. It uses available orbit state vector information in the metadata or external precise orbit, the radar timing annotations, the slant to ground range conversion parameters together with the reference DEM data to derive the precise geolocation information. Optionally radiometric normalisation can be applied to the orthorectified image to produce σ0, γ0 or β0 output.

The Ellipsoid Correction RD and Ellipsoid Correction GG Operators will produce ellipsoid corrected products in the WGS 84 geographic coordinates. The Terrain Correction Operator should be used whenever DEM is available. The Ellipsoid Correction (RD and GG) should be used only when DEM is not available.

Orthorectification Algorithm The Range Doppler Terrain Correction Operator implements the Range Doppler orthorectification method [1] for geocoding SAR images from single 2D raster radar geometry. It uses available orbit state vector information in the metadata or external precise orbit (only for ERS and ASAR), the radar timing annotations, the slant to ground range conversion parameters together with the reference DEM data to derive the precise geolocation information.

Products Supported

● ASAR (IMS, IMP, IMM, APP, APM, WSM) and ERS products (SLC, IMP) are fully supported.

● RADARSAT-2 (all products)

● TerraSAR-X (SSC only)

● Cosmo-Skymed

DEMs Supported

Right now only DEMs with geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84 (and height in meters) are properly supported. SRTM v.4 (3" tiles) from the Joint Research Center FTP (xftp.jrc.it) are downloaded automatically for the area covered by the image to be orthorectified. The tiles will be downloaded to the folder C:\AuxData\DEMs\SRTM_DEM\tiff or the folder specified in the Settings. The Test Connectivity functionality under the Help tab in the main menu bar allows the user to verify that the SRTM downloading is working properly. Please note that for ACE and SRTM, the height information (which is referred to the EGM96 geoid) is automatically corrected to obtain the height relative to the WGS84 ellipsoid. For the ASTER DEM the height correction is already applied. Note also that the SRTM DEM covers the area between -60 and 60 degrees latitude; therefore, for orthorectification of products over high-latitude areas, a different DEM should be used. The user can also use an external DEM file in GeoTIFF format which, as specified above, must have geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84 (and height in meters).

Pixel Spacing Besides the default suggested pixel spacing computed with parameters in the metadata, the user can specify the output pixel spacing for the orthorectified image. The pixel spacing can be entered in both meters and degrees. If the pixel spacing in one unit is entered, then the pixel spacing in the other unit is computed automatically. The calculations of the pixel spacing in meters and in degrees are given by the following equations:

pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI
pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180

where EquatorialEarthRadius = 6378137.0 m and PolarEarthRadius = 6356752.314245 m as given in WGS84.
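As a worked example of these equations: a pixel spacing of 12.5 m gives pixelSpacingInDegree = 12.5 / 6378137.0 * 180 / PI ≈ 0.000112 degrees, and converting that (unrounded) value back with the second equation gives approximately 12.46 m; the small difference arises because the equatorial radius is used in one direction and the polar radius in the other.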

Radiometric Normalization This option implements a radiometric normalization based on the approach proposed by Kellndorfer et al., TGRS, Sept. 1998 where

In current implementation θDEM is the local incidence angle projected into the range plane and defined as the angle between the incoming radiation vector and the projected surface normal vector into range plane[2]. The range plane is the plane formed by the satellite position, backscattering element position and the earth centre.

Note that among σ0, γ0 and β0 bands output in the target product, only σ0 is real band while γ0 and β0 are virtual bands expressed in terms of σ0 and incidence angle. Therefore, σ0 and incidence angle are automatically saved and output if γ0 or β0 is selected.

For σ0 and γ0 calculation, by default the projected local incidence angle from DEM [2] (local incidence angle projected into range plane) option is selected, but the option of incidence angle from ellipsoid correction (incidence angle from tie points of the source product) is also available.

ENVISAT ASAR The correction factors [3] applied to the original image depend on the product being complex or detected and the selection of Auxiliary file (ASAR XCA file).

Complex Product (IMS, APS)

● Latest AUX File (& use projected local incidence angle computed from DEM): The most recent ASAR XCA available from the installation folder \auxdata\envisat compatible with product date is automatically selected. According to this XCA file, calibration constant, range spreading loss and antenna pattern gain are obtained.

❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the XCA file
3. apply range spreading loss correction based on the XCA file and DEM geometry
4. apply antenna pattern gain correction based on the XCA file and DEM geometry

● External AUX File (& use projected local incidence angle computed from DEM): User can select a specific ASAR XCA file available from the installation folder \auxdata\envisat or from another repository. According to this selected XCA file, calibration constant, range spreading loss and antenna pattern gain are computed.

❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the selected XCA file
3. apply range spreading loss correction based on the selected XCA file and DEM geometry
4. apply antenna pattern gain correction based on the selected XCA file and DEM geometry

Detected Product (IMP, IMM, APP, APM, WSM)

● Latest AUX File (& use projected local incidence angle computed from DEM): The most recent ASAR XCA available compatible with product date is automatically selected. Basically with this option all the correction factors applied to the original SAR image based on product XCA file used during the focusing, such as antenna pattern gain and range spreading loss, are removed first. Then new factors computed according to the new ASAR XCA file together with calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new XCA file
5. apply range spreading loss correction based on new XCA file and DEM geometry
6. apply new antenna pattern gain correction based on new XCA file and DEM geometry

● Product AUX File (& use projected local incidence angle computed from DEM): The product ASAR XCA file employed during the focusing is used. With this option the antenna pattern gain and range spreading loss are kept from the original product and only the calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on product XCA file

● External AUX File (& use projected local incidence angle computed from DEM): The User can select a specific ASAR XCA file available from the installation folder \auxdata\envisat or from another repository. Basically with this option all the correction factors applied to the original SAR image based on product XCA file used during the focusing, such as antenna pattern gain and range spreading loss, are removed first. Then new factors computed according to the new selected ASAR XCA file together with calibration constant and local incidence angle correction factors are applied during the radiometric normalisation process.

❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new selected XCA file
5. apply range spreading loss correction based on new selected XCA file and DEM geometry
6. apply new antenna pattern gain correction based on new selected XCA file and DEM geometry

Please note that if the product has been previously multilooked, then the radiometric normalization does not correct for the antenna pattern and range spreading loss, and only the constant and incidence angle corrections are applied. This is because the original antenna pattern and range spreading loss corrections cannot be properly removed due to the pixel averaging performed by multilooking. If the user needs to apply radiometric normalization, multilooking and terrain correction to a product, then the user graph "RemoveAntPat_Multilook_Orthorectify" can be used.

ERS 1&2 For ERS 1&2 the radiometric normalization cannot be applied directly to the original ERS product. Because of the Analogue to Digital Converter (ADC) power loss correction, a preliminary step is required to properly handle the data. It is necessary to employ the Remove Antenna Pattern Operator, which performs the following operations:

For Single Look Complex (SLC, IMS) products:

● apply ADC correction

For Ground range (PRI, IMP) products:

● remove antenna pattern gain

● remove range spreading loss

● apply ADC correction

After having applied the Remove Antenna Pattern Operator to ERS data, the radiometric normalisation can be performed during the Terrain Correction. The applied factors in case of "USE projected angle from the DEM" selection are:

1. apply projected local incidence angle into the range plane correction
2. apply absolute calibration constant correction
3. apply range spreading loss correction based on product metadata and DEM geometry
4. apply new antenna pattern gain correction based on product metadata and DEM geometry

To apply radiometric normalization and terrain correction for ERS, user can also use one of the following user graphs:

● RemoveAntPat_Orthorectify

● RemoveAntPat_Multilook_Orthorectify

RADARSAT-2

● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed by applying the product LUTs and multiplying by (sinθDEM / sinθel), where θDEM is the projected local incidence angle into the range plane and θel is the incidence angle computed from the tie point grid with respect to the ellipsoid.

● In case of selection of "USE incidence angle from Ellipsoid", the radiometric normalisation is performed applying the product LUT.

These LUTs allow one to convert the digital numbers found in the output product to sigma-nought, beta-nought, or gamma-nought values (depending on which LUT is used).

TerraSAR-X

● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed applying 1. projected local incidence angle into the range plane correction 2. absolute calibration constant correction

● In case of " USE incidence angle from Ellipsoid " selection, the radiometric normalisation is performed applying 1. projected local incidence angle into the range plane correction 2. absolute calibration constant correction

Please note that a simplified approach has been implemented in which the Noise Equivalent Beta Naught is neglected.

Cosmo-SkyMed

● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed deriving σ0Ellipsoid [7] and then multiplying by (sinθDEM / sinθel), where θDEM is the projected local incidence angle into the

range plane and θel is the incidence angle computed from the tie point grid respect to ellipsoid.

● In case of the "USE incidence angle from Ellipsoid" selection, the radiometric normalisation is performed by deriving σ0Ellipsoid [7].

Definitions:

1. The local incidence angle is defined as the angle between the normal vector of the backscattering element (i.e. the vector perpendicular to the ground surface) and the incoming radiation vector (i.e. the vector formed by the satellite position and the backscattering element position) [2].

2. The projected local incidence angle from DEM is defined as the angle between the incoming radiation vector (as defined above) and the surface normal vector projected into the range plane. Here the range plane is the plane formed by the satellite position, the backscattering element position and the earth centre [2].
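Using the definitions above, and writing \(\hat{n}\) for the unit surface normal of the backscattering element and \(\hat{s}\) for the unit vector pointing from the backscattering element towards the satellite (the symbols here are only illustrative), the local incidence angle can be sketched as

\[ \theta_{\mathrm{loc}} = \arccos\left(\hat{n} \cdot \hat{s}\right) \]

The projected local incidence angle is obtained in the same way after first projecting \(\hat{n}\) into the range plane and re-normalising it.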

Steps to Produce Orthorectified Image

The following steps should be followed to produce an orthorectified image:

1. From the Geometry menu select Terrain Correction. This will call up the dialog for the Terrain Correction Operator (Figure 1).
2. Select your source bands.
3. Select the Digital Elevation Model (DEM) to use. You can select the 30 arc second GETASSE30 or ACE DEMs if they are installed on your computer. Preferably, select the SRTM 3 arc second DEM, which has much better resolution and can be downloaded as needed automatically if you have an internet connection. Alternatively, you can browse for an external DEM tile. Currently only a DEM in GeoTIFF format with geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid WGS84 (and height in meters) is accepted.
4. Select the interpolation methods to use for the DEM resampling and the target image resampling.
5. Optionally select the Pixel Spacing in meters for the orthorectified image. By default the pixel spacing computed from the original SAR image is used. For details, the reader is referred to the Pixel Spacing section above.
6. Optionally select the Pixel Spacing in degrees for the orthorectified image. By default it is computed from the pixel spacing in meters. If either pixel spacing is changed, the other one is updated accordingly. For details, the reader is referred to the Pixel Spacing section above.
7. Optionally select a Map Projection. The orthorectified image will be presented in the user-selected map projection. By default the output image will be expressed in WGS84 lat/long geographic coordinates.
8. Optionally select to save the DEM as a band and the local incidence angle.
9. Optionally select to apply Radiometric Normalization to output σ0, γ0 or β0 of the orthorectified image.
10. Press Run to process the data.
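The same orthorectification can also be run from the command line with the Graph Processing Tool, using the standard Orthorectify graph described in the Command Line section below; the input and output paths here are only examples:

gpt "graphs\Standard Graphs\Orthorectify.xml" "c:\data\input.N1" -t "c:\output\input_orthorectified.dim"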

Figure 1. Terrain Correction operator dialog box.

Below are some sample images showing the Terrain Correction result of the ASAR IMS product ASA_IMS_1PNUPA20081003_092731_000000162072_00351_34473_2366.N1, acquired on October 3, 2008, imaging the area around Rome in Central Italy. The ASAR IMS image has been multi-looked with 2 range looks and 10 azimuth looks before being orthorectified.

The DEM employed is the SRTM 3 arc second Version 4. Since the SRTM height information is relative to the EGM96 geoid, not the WGS84 ellipsoid, a correction has been applied to obtain heights relative to the WGS84 ellipsoid (this is done automatically).

Figure 2 shows the image in the original SAR geometry after multi-looking (2 range looks, 10 azimuth looks).

The orthorectified image and its radiometrically normalised (σ0) counterpart are shown in Figure 3 and Figure 4, respectively.

Figures 5 and 6 are zoomed-in views of Figures 3 and 4.

The radiometric scale is in dB/m^2.

Figure 2. Original SAR geometry after multi-looking (2 range looks, 10 azimuth looks).

Figure 3. Orthorectified image. Figure 4. Radiometrically normalized image.

Figure 5. Zoom-in of the orthorectified image. Figure 6. Zoom-in of the radiometrically normalized image.

After Terrain Correction your SAR data will be closer to the real world geometry and you will be able to overlay layers from other sources correctly.

References:

[1] Small D., Schubert A., Guide to ASAR Geocoding, RSL-ASAR-GC-AD, Issue 1.0, March 2008
[2] Schreier G., SAR Geocoding: Data and Systems, Wichmann, 1993
[3] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN, ENVI-CLVL-EOPG-TN-03-0010, Issue 1, Rev. 5, October 2004
[4] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS SAR Calibration: Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-HL09, Issue 2, Rev. 5f, November 2004
[5] RADARSAT-2 Product Format Definition, RN-RP-51-2713, Issue 1/7, March 14, 2008
[6] Radiometric Calibration of TerraSAR-X Data, TSXX-ITD-TN-0049-radiometric_calculations_I1.00.doc, 2008
[7] For further details about Cosmo-SkyMed calibration please contact the Cosmo-SkyMed Help Desk at info. [email protected]

Processing from the Command Line

Running a Graph

First, create a graph using the Graph Builder in the DAT. The graph does not need any particular source products in its readers, as you will be specifying the input products from the command line. Save the graph to a file.

Next, from the command line (DOS prompt in Windows or terminal in Linux) type gpt.bat -h or gpt.sh -h to display the help message. This tutorial will use the command gpt, but you may need to specify gpt.bat in Windows or gpt.sh in Linux. If the help does not appear, your environment variables may not be set up correctly; please refer to the installation FAQ.

In order to run a single-reader graph, type:

gpt graphFile.xml inputProduct

This will process the graph graphFile.xml and use inputProduct as input. The output will be created in the current directory in a file called target.dim.

To specify the output target product and location, type:

gpt graphFile.xml inputProduct -t c:\output\targetProduct.dim
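For illustration, a minimal single-reader graph file has roughly the following structure. This is only a sketch: generate the actual XML with the Graph Builder, since node, operator and parameter names must match those of your installed operators, and the reader's file entry is normally overridden by the input product given on the command line.

<graph id="myGraph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <parameters>
      <!-- placeholder; replaced by the input product from the command line -->
      <file>input.dim</file>
    </parameters>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="Read"/>
    </sources>
    <parameters>
      <file>target.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>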

Terrain Correction

To terrain correct using the graph in the graphs\Standard Graphs folder, type:

gpt "graphs\Standard Graphs\Orthorectify.xml" "c:\data\input.N1"

Coregistration

To coregister you will need a graph with a ProductSet Reader, which lets you specify a list of input products. Using the Standard Graphs\Coregister.xml graph and an input folder containing all the products we wish to coregister together, type:

gpt "graphs\Standard Graphs\Coregister.xml" -inFolder c:\data

Assuming all products cover roughly the same geographic area, this will coregister all products in the folder c:\data into a coregistered stack.
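To also choose the name and location of the output stack, the -t option shown earlier can be combined with -inFolder; the output path here is only an example:

gpt "graphs\Standard Graphs\Coregister.xml" -inFolder c:\data -t c:\output\coregistered_stack.dim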

Frequently Asked Questions

Installation F.A.Q.

What is the minimum required hardware to run NEST? It is recommended to have at least 2 GB of memory. NEST can work with only 1 GB but may run slower. To run the 3D WorldWind View, it is recommended to have a 3D graphics card with up-to-date drivers. NEST has been tested on 32 and 64 bit Windows XP, Vista, Windows 7 and Linux.

I can't start up the DAT. Try running from the command line to see any errors, using dat.bat on Windows or dat.sh on Linux. If the error mentions WorldWind, a native library, JOGL, gluegen or OpenGL, then it may be a problem with WorldWind. First try to update your drivers (How to update Windows drivers: ATI/AMD, Nvidia, Intel). If that doesn't work, try removing WorldWind by deleting the file NEST4C/modules/nest-world-wind-view-3C.jar from your NEST installation folder.

When I start the DAT it says it could not create the Java virtual machine. In dat.bat or dat.sh, try reducing the Java maximum heap size by changing the -Xmx1024M value to something smaller, such as -Xmx800M.

How can I get WorldWind to work? I see a blank window in the World View. To use the WorldWind World Map and 3D View, a 3D video card with up-to-date drivers is necessary. Update your video card drivers (How to update Windows drivers: ATI/AMD, Nvidia, Intel).

The installer is not working for me. Try downloading the Java binary version of NEST. It is simply a zipped folder without an installer. If you would like to run gpt from any folder, you will need to set the NEST_HOME path to wherever you unzip it. For example, on Windows go to Control Panel -> System -> Advanced -> Environment Variables.

Add a variable NEST_HOME pointing to the installation folder and also include %NEST_HOME% in the PATH variable. On Linux, set these environment variables in .profile or .bash_profile in your home directory:

export NEST_HOME=<installation folder>
export PATH=$PATH:$NEST_HOME

Then either restart your terminal or source the file you edited (e.g. source ~/.bash_profile). Also, make sure the paths in NEST4B/config/settings.xml are appropriate for your file system.

How can I improve performance? Performance in NEST drops considerably when using virtual memory. If you have more than 2 GB of RAM and wish to use all of it, on a 64-bit OS you should be using a 64-bit version of Java. You may then set the heap size for the Java virtual machine to something larger, such as -Xmx4096M. To do so, open gpt.bat on Windows or gpt.sh on Linux (found in your NEST installation folder) in a text editor and replace the -Xmx1024M value with something larger like -Xmx4096M. You may do the same with dat.bat or dat.sh. On Windows you may set these for the exe in nest-dat.l4j.ini. The 64-bit Windows version of NEST should already come with 64-bit Java and be ready for use.

How can I move the .nest folder somewhere else? By default NEST creates a .nest folder in your user home directory (user_home_folder\.nest on Windows, ~/.nest on Linux).

The .nest folder is used to store temporary data such as preferences, logs, etc.

If you would like to change the location of this folder:

Edit the file NESTInstallFolder\config\nest.config and change the line

#nest.application_tmp_folder = c:\\AuxData

to

nest.application_tmp_folder = your_new_path

How to change the AuxData folder. Edit the file NESTInstallFolder\config\settings_win.xml or settings_unix.xml, change the AuxDataPath value to your new path and restart the Toolbox.

Frequently Asked Questions

General F.A.Q.

What SAR products are supported? NEST supports ENVISAT, ERS-1, ERS-2, and RADARSAT-2 fully. NEST also supports TerraSAR-X, Cosmo-Skymed, ALOS PALSAR, RADARSAT-1 and JERS with the exception of some special product types. For more details on what product types are supported, please refer to the supported products spreadsheet.

How is NEST licenced? NEST is licensed under the GNU GPL

If I write an application using the NEST API, am I forced to distribute that application? No. The license gives you the option to distribute your application if you want to. You do not have to exercise this option in the license.

If I wanted to distribute an application using NEST what license would I need to use? The GNU GPL

I am a commercial user. Is there any restriction on the use of NEST? NEST can be used internally ("in-house") without restriction, but only redistributed in other software that is under the GNU GPL unless permission is granted from all copyright holders.

I can't open an ENVISAT product. First, try to open the product with EnviView (http://earth.esa.int/enviview/). If that works then it could be a version not supported by NEST. Let us know and we'll try to get it supported as soon as possible. If neither EnviView nor NEST is able to read the product, then the file may be corrupt or compressed. Some users have had difficulty when decompressing files using WinZip. Deactivate the CR/LF translation option in WinZip (this option is activated by default).

When I open a RADARSAT-2 image I don't get any metadata. A RADARSAT-2 product should be opened through its product.xml file to ensure that the RADARSAT-2 product reader will recognize all files and import the whole product.

How do I import a PolSARPro result? Use the PolSARPro reader and select the folder containing all the individual band files. When imported, all individual bands will show up within one product.

What method is used for Oil Spill Detection? The method currently implemented in NEST for oil spill detection is an adaptive thresholding dark-spot detection algorithm. It first estimates the mean backscatter level in a large window, then sets a threshold which is k decibels below the estimated mean backscatter level. Pixels with backscatter values lower than the threshold are considered potential oil spill pixels. Finally, the detected pixels are clustered and discriminated; only those clusters that are large enough are saved as detection results. The user has full control of the window size for estimating the mean backscatter level, the parameter k for setting the threshold, and the minimum cluster size.
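Written as a formula, the detection criterion described above is simply (a sketch, with backscatter expressed in dB):

\[ \sigma^0_{\mathrm{dB}}(x, y) < \bar{\sigma}^0_{\mathrm{dB}} - k \]

where \(\bar{\sigma}^0_{\mathrm{dB}}\) is the mean backscatter level estimated in the large window and k is the user-defined shift in dB; pixels satisfying the inequality are flagged as potential oil spill pixels before clustering.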

The method is based on the following reference, in which a more elaborate pyramid approach was proposed; in NEST, however, only a single-level detection is implemented.

A. S. Solberg, C. Brekke and R. Solberg, "Algorithms for oil spill detection in Radarsat and ENVISAT SAR images", Geoscience and Remote Sensing Symposium, 2004. IGARSS '04. Proceedings. 2004 IEEE International, 20-24 Sept. 2004, page 4909-4912, vol.7.

Can I use the Apply-Orbit-File Operator on JERS-1, Radarsat-1, ALOS, etc.? No. The orbit corrections are only available for ENVISAT, ERS-1 and ERS-2. If the orbit state vectors in your products are not accurate, this will be a problem when orthorectifying. In that case you should use the SAR Simulation Terrain Correction, which coregisters your image with a simulated SAR image generated from the DEM before terrain correcting.

What external DEM does NEST support?

Currently only a WGS84 lat/long (Plat, Plon, Ph) DEM with height in meters is accepted as the geographic system. The user can specify one tile (i.e. one file) of the DEM.

What are the definitions of local incidence angle and projected local incidence angle? The local incidence angle is defined as the angle between the normal vector of the backscattering element (i.e. vector perpendicular to the ground surface) and the incoming radiation vector (i.e. vector formed by the satellite position and the backscattering element position). The projected local incidence angle is defined as the angle between the incoming radiation vector (as defined above) and the projected surface normal vector into range plane. Here range plane is the plane formed by the satellite position, backscattering element position and the earth center. For more information on the definitions, derivations and calculations of the angles please see section 14.3.4 in "SAR Geocoding: Data and Systems" by Gunter Schreier.

In coregistration I get "Not enough valid GCPs in slave". How can I fix this? Some products can have a significant error in their georeferencing. You may need to set the coarse registration window in the GCP Selection tab to something larger, such as 512 or 1024. Also, increasing the number of GCPs will give you a better chance of more of them surviving. For fine registration, you could reduce the coherence threshold. In the warp, check 'show residuals' to see a summary of where GCPs are being shifted to and what their RMS values are. Set the RMS threshold to a value that keeps plenty of representative GCPs with the smallest RMS.

I've closed my tool views (world map, layer manager, navigation, etc). How can I get them back? In the top menu go to View -> Manage Layout and select Reset Layout or Load Tabbed Layout.

Why can I not orthorectify my image over the Arctic? It will depend on the DEM and output map projection. The default SRTM DEM is only available from +60 to -60 deg latitude. In the polar regions you will need to use either the low-resolution GETASSE30 DEM, the ASTER GDEM (available up to 83 deg latitude) or your own external DEM. Also, for the polar regions, using a polar map projection will give a better result.

Is training and support available for NEST? ESA provides introductory training at some conference workshops and support for technical problems through the NEST website forum. Personalized training courses and technical support subscriptions are available by contacting Array.

Import ATSR

Import ATSR Product

This option allows you to import ERS-1 and ERS-2 ATSR data into the DAT application.

Graticule Overlay

If you select 'Show Graticule Overlay' from the View Menu, graticule lines will be drawn over the band image view, similar to the following figure:

The step sizes for both latitude and longitude (in °) can be defined in the Preferences dialog.

Export Color Legend

Exporting a Color Legend Image

For non-RGB images it is possible to export the current colour palette setting as a colour legend image. This function is available from the context menu over non-RGB images. The menu item Export Color Legend brings up a file selection dialog allowing you to select the file name for the exported legend image. The legend appearance can be modified by clicking the Properties... button in the dialog:

Note that the transparency mode is only enabled for the image types TIFF and PNG. A preview dialog for the legend image can be opened by clicking the Preview... button. Within the dialog, it is also possible to copy the colour legend image to the system clipboard by using the context menu over the image area:

Select CRS Dialog

Select CRS Dialog

One of the listed Coordinate Reference Systems (CRS) can be selected. The list is defined by the European Petroleum Survey Group (EPSG).

Filter: In the text box you can type a word to filter the list of available Coordinate Reference Systems; the list beneath is updated immediately.

Well-Known Text (WKT): This text area displays the definition of the CRS as Well-Known Text.

No-Data Value

No-Data Value

Bands or tie-point grids of a data product determine by two means whether or not a pixel at a certain position contains data:

● a no-data value and/or

● a valid pixel expression is set.

These properties can be adjusted using the property editor. If valid pixel detection is enabled, invalid pixels are excluded from range, histogram and other statistical computations. In order to visualise the invalid pixel positions, the no-data overlay is used. The no-data property of a band is also set by the band arithmetic and map projection tools.
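As an illustration only, a valid pixel expression uses the same syntax as band arithmetic expressions; the band name and the saturation value below are hypothetical:

Amplitude > 0 && Amplitude != 32767

With such an expression, only pixels whose amplitude is positive and different from the assumed saturation value are treated as valid; all other pixels are shown by the no-data overlay and excluded from statistics.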

Reprojection Output Parameters

Output Parameters

This dialog lets you specify the position of the reference pixel and the easting and northing at this pixel of the output product. You can also set the orientation angle and the pixel size. The orientation angle is the angle between geographic north and map grid north (in degrees), in other words, the convergence angle of the projection's vertical axis from true north. Easting, northing and the pixel size are given in the units of the underlying map (e.g. decimal degrees for a geographic projection and meters for the UTM projection). In order to force a certain width and height in pixels for the output product, you must deselect the fit product size option. Otherwise the size is adjusted automatically so that the entire source region is included in the new region.

Import ASTER DEM

Import ASTER DEM Tile

Imports a tile of the ASTER global elevation model at 3 arc second resolution. The GDEM covers the planet from 83 degrees North to 83 degrees South. The current version of the GDEM product is "research grade" as it may contain artifacts. http://www.gdem.aster.ersdac.or.jp/

Unwrapping

Unwrapping Operator

This operator performs phase unwrapping based on the concept introduced by Costantini [1]. The unwrapping problem is formulated and solved as a minimum cost flow problem on a network.

Input and Output

● The input is the interferometric phase.

● The output is the per-tile unwrapped interferogram, which has to be stitched (see the Stitch Operator) to obtain a smooth result.

Processing Parameters

No processing parameter is required.

Reference: [1] Costantini, M. (1998) A novel phase unwrapping method based on network programming. IEEE Tran. on Geoscience and Remote Sensing, 36, 813-821.

Stitching for Unwrapping

Stitch Operator

When working with tiles, the Unwrapping operator stores the unwrapped phase for each tile individually. Therefore, between unwrapped tiles the unwrapped phases can differ by a constant offset that is a multiple of 2π. This operator stitches the unwrapped phases of all tiles to form a complete, smooth image of unwrapped phase.

Input and Output

● The input is the output of Unwrapping operator.

● The output is the final smooth, integrated image of unwrapped phase.

Processing Parameters

No processing parameter is required.

Usage Notes

Because of restrictions of the NEST/BEAM framework, the unwrapping functionality is implemented as a two-step process. First, the initial unwrapping has to be performed with the Unwrapping operator and the results saved; then, as a second step, the unwrapping result is stitched into a smooth result with this operator.

Property Editor

Property Editor

The property editor can be used to edit the properties of a product, a tie-point grid or a (virtual) band. Depending on the kind of object for which the property editor is invoked, different kinds of properties can be edited. The name and the description can be changed in all cases. Additionally, the following properties can be edited:

● Products - the product type.

● Tie-point grids and (virtual) bands - the geophysical unit, the no-data value, the valid pixel expression, the spectral wavelength and spectral bandwidth.

● Virtual bands - the virtual band expression.

The property editor can be invoked from the context menu, where it is the top item. The context menu is activated with a click of the right mouse button over the object that should be edited.