ADA V Book of Abstracts Crete, 7-9 May 2008

Astronomical Data Analysis V

Book of Abstracts


FOREWORD

Held regularly since 2001, the Astronomical Data Analysis conference series is focused on algorithms and on signal and data processing. The 5th Conference (ADA V) took place on the island of Crete (Greece) from 7 to 9 May 2008.

The program includes invited and contributed talks as well as posters. The conference series has been characterized by a range of innovative themes, including curvelet transforms and clustering, while at the same time remaining closely linked to front-line open problems in astrophysics and cosmology. With Herschel and Planck to be launched in 2008, ADA V focuses in particular on:
- inverse problems such as map-making and component separation
- multi-wavelength data analysis
Recent developments in harmonic analysis, especially "compressed sensing" theory, may have a major impact on the way we collect, transfer and analyze data. The ADA V conference therefore includes an invited expert from this field, in order to diffuse these new ideas in the astronomical community.

ADA V KEYNOTE SPEAKERS
Emmanuel Candes, Caltech, USA
Rien van de Weygaert, Kapteyn Institute, The Netherlands
Angelica de Oliveira-Costa, MIT, USA
Alexandre Refregier, CEA, France
Max Tegmark, MIT, USA

ADA V INVITED SPEAKERS
Albert Bijaoui, Observatory of Cote d'Azur, France
Peter Coles, Cardiff University, UK
Jalal Fadili, University of Caen, France
Vicent Martinez, Valencia Observatory, Spain
Yves Wiaux, EPFL, Switzerland

SCIENTIFIC ORGANIZING COMMITTEE
Jogesh Babu, Center for Astrostatistics, USA
Vassilis Charmandaris, FORTH/IESL & University of Crete, Greece
George Djorgovski, Caltech, USA
Jean-Francois Hochedez, Royal Observatory of Belgium, Belgium
Antoine Llebaria, LAM, France
Giuseppe Longo, University Federico II, Naples, Italy
Enrique Martinez-Gonzalez, IFCA Santander, Spain
Vicent Martinez, Valencia Observatory, Spain
Fionn Murtagh (Chair), Royal Holloway, University of London, UK
Jean-Luc Starck (Chair), CEA Saclay, France
Christian Surace, LAM, France
Alex Szalay, The Johns Hopkins University, USA
Panagiotis Tsakalides, FORTH/ICS & University of Crete, Greece
Ivan Valtchanov, ESA

LOCAL ORGANIZING COMMITTEE
Yiannis Askoxylakis, FORTH/ICS, Greece
Vassilis Charmandaris, FORTH/IESL & University of Crete, Greece
Panagiotis Tsakalides, FORTH/ICS & University of Crete, Greece
Mina Papadaki, FORTH/CA, Greece


CONFERENCE PROGRAM
Note: [KS] = Keynote Speaker (50 min), [IS] = Invited Speaker (30 min)
Final version: 7 May 2008

Tuesday 6 May 2008
18:00-20:00  Registration
20:00-20:45  Welcome cocktail

Wednesday 7 May 2008
Session 1: Cosmic Microwave Background
Session Chair: Albert Bijaoui
08:00-08:50  Registration
08:50-09:00  Welcome
09:00-09:50  Angelica de Oliveira-Costa [KS]  Cosmic Microwave Background
09:50-10:20  Yves Wiaux [IS]  Steerable and scale-discretized wavelet analyses of the cosmic microwave background
10:20-10:40  Jason McEwen  Wavelet-based data compression on the sphere
10:40-11:00  Jean-Luc Starck  Wavelet and Polarized Data
11:00-11:30  Coffee Break
Session Chair: Vicent Martinez
11:30-12:00  Peter Coles [IS]  The Axis of Evil and other Cosmic Quirks
12:00-12:20  Jose Luis Sanz  Filter design: application to the detection of compact sources in CMB maps
12:20-12:40  Laurence Perotto  CMB lensing
12:40-13:00  Marcos Cruz  Bayesian template fit and the WMAP cold spot

13:00-16:30  Afternoon break

Session Chair: Jean-Luc Starck
16:30-17:30  Emmanuel Candes [KS]  Compressed Sensing
17:30-18:15  Poster Presentations (2 min each poster)
18:15-18:30  Coffee Break
18:30-20:00  Poster Session


Thursday 8 May 2008
Session 2: Inverse Problems in Astronomical Data
Session Chair: Stephan Ott
09:00-09:30  Panos Tsakalides [IS]  Bayesian estimation, classification, and denoising based on alpha-stable statistical modeling
09:30-10:00  Albert Bijaoui [IS]  Multispectral Analysis based on Wavelet Fusion and a Sparse Decomposition
10:00-10:20  Jerome Bobin  Sparsity and morphological diversity in Blind Source Separation
10:20-10:40  Jonathan Aumont  POLA-SMIC
10:40-11:10  Coffee Break
Session Chair: Angelica de Oliveira-Costa
11:10-11:40  Jalal Fadili [IS]  Autocorrelation & Power spectrum estimation with missing data
11:40-12:00  Pierre Chanial  Herschel map-making using maximum likelihood estimator
12:00-12:20  Xavier Rondeau  Wavefront sensing from speckle images with polychromatic phase diversity
12:20-12:40  Ferréol Soulez  Deconvolution of (x, y, wavelength) images

13:00-17:00  Trip to Minoan Palace of Knossos

Session 3: Weak Lensing & Cosmological Parameters
Session Chair: Peter Coles
17:00-17:50  Rien van de Weygaert [KS]  Dynamics & Multiscale Morphology of the Cosmic Web
17:50-18:10  Coffee Break
18:10-18:30  Farhan Feroz  Multimodal Nested Sampling: A robust & efficient technique for astronomical data analysis
18:30-18:50  Anastasia Niarchou  CMB observables and likelihood functions for multi-connected topologies
18:50-19:10  Peter Freeman  Nonparametric Estimation of CMB Foreground Emission

20:30-02:00  Conference Dinner


Friday 9 May 2008
Session 4: Galaxies, Surveys, Distributions
Session Chair: Rien van de Weygaert
09:00-09:50  Alexandre Refregier [KS]  Review of Weak Gravitational Lensing
09:50-10:10  Sebastien Bongard  Multi-epoch extraction of Supernovae from a structured background in the 3D data of the SNfactory
10:10-10:30  Fionn Murtagh  Fast Clusterwise m-Adic Regression: Application to Redshift Calibration
10:30-10:50  Joseph Richards  Sparse Representation and Efficient Inference for SDSS Spectra
10:50-11:10  Stephane Paulin-Henriksson  Optimal PSF modelling for weak lensing: complexity and sparsity
11:10-11:30  Coffee Break
Session Chair: Fionn Murtagh
11:30-12:00  Vicent Martinez [IS]  Reliability of the detection of the acoustic peaks in the galaxy distribution
12:00-12:20  Antoine Llebaria  Background estimation in far UV images from GALEX Survey
12:20-12:40  Raffaele D'Abrusco  Mining the SDSS archives for AGN: the problem of partitioning the parameter space

12:40-14:30  Afternoon break

Session Chair: Vassilis Charmandaris
14:30-15:00  Andrew Jaffe [IS]  The Grand Unified Power Spectrum
15:00-15:20  Rafael Molina  Super resolution of multispectral images using TV image models
15:20-15:40  Julian Borrill  Making Maps of the Planck Cosmic Microwave Background Data
15:40-16:40  Max Tegmark [KS]  Cosmic conundrums and data analysis challenges

16:40-17:00  Jean-Luc Starck  Closing Remarks


POSTER PRESENTATIONS

The following 28 posters will be displayed throughout the duration of the conference.

Presenter - Poster Title
Arnaud Woiselle - 3D Data Restoration with the Curvelet Transform
Ioannis Bellas-Velidis - Unresolved Galaxy Classifier for ESA's GAIA mission
Olivier Berne - Studying the interstellar medium with Blind Signal Separation methods
Albert Bijaoui - Parameter Estimation by Optimal Projections in a Local Environment
Jerome Bobin - Compressed sensing in astronomy
Emmanuel Bratsolis - The region N83-84-85 of the SMC: automated classification and study of a possible triggered star formation
Mark Butala - Dynamic Tomographic Imaging of the Solar Corona
Sandra Castro - Pattern matching applied to automatic astronomical data reduction: the ESO FORS pipeline
Veronique Delouille - Super-resolution of EUV images using small scale offpoints
David Friedenberg - Galaxy Cluster Detection using the Sunyaev-Zeldovich Effect: A Multiple Testing Approach with Error Control
Marco Fumana - EZ: a VO compliant tool for automatic spectral parameters measurement
Susan Hojnacki - Multi-wavelength data fusion for astronomical data classification and analysis
Yves Jung - The CPL Wavelength Calibration method using Iterative Cross-Correlation
Martin Kuemmel - The slitless spectroscopy package aXe
Alexis Lavabre - Analysis of the CMB gravitational lensing effect with PLANCK: from sky to patches
Francesco Lazzarotto - Data Processing software for the SuperAGILE experiment
Yan Li - Reconstruction of extended objects from their speckle images
Henning Lorch - Circular Object Detection with the Vector Gradient Intersection Transform
David Mary - Sparsity-constrained Image Reconstruction in Astronomical Interferometry
Felipe Menanteau - The Rutgers/BCS Pipeline: automated analysis of ~50 deg^2 of optical griz imaging
Yassir Moudden - An iterative thresholding algorithm for multichannel deconvolution
Stephan Ott - The Herschel Data Processing System: an advanced framework for advanced signal processing algorithms
Eric Pantin - Mid-infrared data reduction and analysis
Aurélie Penin - Correlated anisotropies in the Cosmic Infrared Background: new insight into structure evolution
Sandrine Pires - Weak Lensing data analysis: Mask interpolation using Inpainting
Laerte Sodre - Photometric with Locally Weighted Regression
Roland Vavrek - Multiresolution deglitching algorithm for Herschel/PACS data
Pierre Royer - A q-test based glitch detection algorithm


CONFERENCE PARTICIPANTS (77)

Name, Institution, Country, e-mail
Arnaud Woiselle, CEA, France, [email protected]
Yiannis Askoxylakis, FORTH/ICS, Greece, [email protected]
Jonathan Aumont, CESR, France, [email protected]
Ioannis Bellas-Velidis, NOAO, Greece, [email protected]
Olivier Berne, LATT-CESR, France, [email protected]
Albert Bijaoui, Obs. Cote d'Azur, France, [email protected]
Jerome Bobin, CEA, France, [email protected]
Sebastien Bongard, LBNL - UC Berkeley, USA, [email protected]
Julian Borrill, LBNL - UC Berkeley, USA, [email protected]
Emmanuel Bratsolis, Univ. of Athens, Greece, [email protected]
Michel Broekaert, SAGEM, France, [email protected]
Mark Butala, Univ. of Illinois, USA, [email protected]
Emmanuel Candes, Caltech, USA, [email protected]
Sandra Castro, ESO, Germany, [email protected]
Pierre Chanial, Imperial College, UK, [email protected]
Vassilis Charmandaris, Univ. of Crete, Greece, [email protected]
Peter Coles, Cardiff Univ., UK, [email protected]
Pedro Contreras, Univ. of London, UK, [email protected]
Marcos Cruz, Inst. de Fisica de Cantabria, Spain, [email protected]
Raffaele D'Abrusco, Univ. of Napoli, Italy, [email protected]
Angelica de Oliveira-Costa, MIT, USA, [email protected]
Veronique Delouille, Royal Obs. Belgium, Belgium, [email protected]
Jalal Fadili, Univ. of Caen, France, [email protected]
Farhan Feroz, Univ. of Cambridge, UK, [email protected]
Sonia Fornasier, Obs. de Paris, France, [email protected]
Peter Freeman, Carnegie Mellon Univ., USA, [email protected]
David Friedenberg, Carnegie Mellon Univ., USA, [email protected]
Marco Fumana, INAF-IASF Milano, Italy, [email protected]
René Gastaud, CEA, France, [email protected]
Tin Kam Ho, Bell Labs, Alcatel-Lucent, USA, [email protected]
Susan Hojnacki, JPL/NASA, USA, [email protected]
Wolfgang Hovest, MPA, Germany, [email protected]
Rik Huygen, K.U. Leuven, Belgium, [email protected]
Carlo Izzo, ESO, Germany, [email protected]
Andrew Jaffe, Imperial College, UK, [email protected]
Yves Jung, ESO, Germany, [email protected]
Farzad Kamalabadi, Univ. of Illinois, USA, [email protected]
Martin Kuemmel, ST-ECF, Germany, [email protected]
Nick Kylafis, Univ. of Crete, Greece, [email protected]
Alexis Lavabre, LAL, France, [email protected]
Francesco Lazzarotto, INAF-IASF Rome, Italy, [email protected]
Yan Li, Shanghai Astron. Obs., China, [email protected]
Antoine Llebaria, LAM/CNRS, France, [email protected]
Giuseppe Longo, Univ. of Napoli, Italy, [email protected]
Henning Lorch, ESO, Germany, [email protected]
Vicent Martinez, Valencia Obs., Spain, [email protected]
David Mary, Univ. of Nice, France, [email protected]
Jason McEwen, Univ. of Cambridge, UK, [email protected]
Felipe Menanteau, Rutgers Univ., USA, felipe@physics.rutgers.edu
Rafael Molina, Univ. de Granada, Spain, [email protected]
Yassir Moudden, CEA, France, [email protected]
Fionn Murtagh, Univ. of London, UK, [email protected]
Anastasia Niarchou, Imperial College, UK, [email protected]
Stephan Ott, ESA/ESTEC, Spain, [email protected]


Eric Pantin, CEA, France, [email protected]
Pasquale Panuzzo, CEA, France, [email protected]
Mina Papadaki, FORTH, Greece, [email protected]
Stephane Paulin-Henriksson, CEA, France, [email protected]
Aurélie Penin, Inst. Astro. Spatiale, France, [email protected]
Laurence Perotto, LAL, Univ. Paris-Sud, France, [email protected]
Sandrine Pires, CEA, France, [email protected]
Stephane Plaszczynski, LAL, France, [email protected]
Ludovic Poupard, CEA, France, [email protected]
Alexandre Refregier, CEA, France, [email protected]
Joseph Richards, Carnegie Mellon Univ., USA, [email protected]
Xavier Rondeau, CRAL, France, [email protected]
Pierre Royer, University of Liege, Belgium, [email protected]
Jose Luis Sanz, IFCA, Spain, [email protected]
Laerte Sodré, Univ. of Sao Paulo, Brazil, [email protected]
Ferréol Soulez, CRAL, France, [email protected]
Jean-Luc Starck, CEA, France, [email protected]
Max Tegmark, MIT, USA, [email protected]
Panos Tsakalides, Univ. of Crete, Greece, [email protected]
Rien van de Weygaert, Kapteyn Institute, The Netherlands, [email protected]
Roland Vavrek, ESA/ESAC, Spain, [email protected]
Didier Vibert, LAM/CNRS, France, [email protected]
Yves Wiaux, EPFL, Switzerland, [email protected]


ABSTRACTS – ORAL CONTRIBUTIONS

Wednesday 7 May 2008

Cosmic Microwave Background Angelica de Oliveira-Costa (MIT, USA)

Steerable and scale-discretized wavelet analyses of the cosmic microwave background Yves Wiaux (EPFL, Switzerland)

In recent years, scale-space signal processing techniques have provided results of interest for cosmology. In this talk, we discuss steerable and scale-discretized wavelets on the sphere and on the plane, and their application to the analysis of the cosmic microwave background (CMB). The steerability of wavelets allows one to probe in detail the local morphology of a signal at each analysis scale, giving access to local measures of signed intensity, orientation, elongation, etc. This approach has already provided essential results in probing the statistical anisotropy and non-Gaussianity of the CMB, as well as in identifying the Integrated Sachs-Wolfe effect. The scale discretization of the wavelets allows the reconstruction of the analyzed signal, opening the door to denoising and deconvolution applications. In this context, local directional features can be identified at any scale in wavelet space and reconstructed after their separation from other signal components. This approach proves to be of great interest for the search for topological defects in the CMB, such as cosmic strings.

Wavelet-based data compression on the sphere Jason McEwen (Univ. of Cambridge, UK)

Large data-sets defined on the sphere arise in many fields. In particular, recent and forthcoming observations of the cosmic microwave background (CMB) made on the celestial sphere by WMAP and Planck contain approximately three and fifty megapixels respectively. The compression of such data is therefore becoming increasingly important. We develop algorithms to compress data defined on the sphere. A Haar wavelet transform on the sphere is used as an entropy compression stage, followed by Huffman and run-length encoding stages. Lossless and lossy compression algorithms are developed. We evaluate compression performance and decompression error on simulated CMB data, Earth topography data and global illumination maps used in computer graphics. The CMB data can be compressed to approximately 40% of its original size for essentially no loss to the cosmological information content of the data, and to approximately 30% if a small cosmological information loss is tolerated. The topographic data and illumination maps can be compressed to approximately 2% of their original size when a very small degradation in map quality is allowed. We intend to make our implementation of these compression algorithms publicly available.
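The pipeline described above (a wavelet transform followed by entropy coding) can be sketched in one dimension. The toy below, with made-up data and a simple (value, count) run-length coder standing in for the paper's Huffman stage and the spherical transform replaced by a flat 1D Haar step, is illustrative only:

```python
import numpy as np

def haar_1d(x):
    """One level of the 1D Haar transform: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2.0
    diff = (x[0::2] - x[1::2]) / 2.0
    return avg, diff

def run_length_encode(symbols):
    """Run-length encode a sequence as (value, count) pairs."""
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1] = (s, out[-1][1] + 1)
        else:
            out.append((s, 1))
    return out

# Lossy stage: quantize small Haar detail coefficients to zero, then
# run-length encode the resulting zero runs.
signal = np.array([10.0, 10.0, 10.0, 10.0, 20.0, 20.0, 20.0, 20.0])
avg, diff = haar_1d(signal)
quantized = np.where(np.abs(diff) < 1e-3, 0.0, diff)
encoded = run_length_encode(quantized.tolist())
```

Because the signal is piecewise constant, all detail coefficients vanish and the detail band collapses to a single run, which is where the compression gain comes from.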


Wavelet and Polarized Data Jean-Luc Starck (CEA/Saclay, France)

We present a new multiscale transform on the sphere for polarized data. We show how it can be used for different applications such as denoising and component separation.

The Axis of Evil and other Cosmic Quirks Peter Coles (Cardiff University, UK)

The Wilkinson Microwave Anisotropy Probe has provided the cosmological community with unprecedented quantitative information that has helped to establish the standard "concordance" model. It has also revealed a number of features that we do not understand so well, such as unexpected alignments and an anomalously cold spot, prompting suggestions of radical departures from the standard framework. In this talk I will review a few of these interesting anomalies and discuss some of the suggested explanations, focussing in particular on how they might be tested with future observational data.

Filter design: application to the detection of compact sources in CMB maps Jose Luis Sanz (IFCA, Spain)

We design matrix filters that maximize the SNR of compact sources of unknown intensity and known profile, embedded in a set of images. The technique outperforms the standard matched filter applied to individual images. As an application, we consider simulations of the sky as it will be observed by the instruments of ESA's upcoming Planck mission.
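The baseline the abstract compares against, the single-image matched filter, can be sketched for the white-noise case, where it reduces to a normalized cross-correlation with the known profile; the paper's matrix filters generalize this across a set of images. The function name and the toy sky below are illustrative assumptions:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def matched_filter_snr(image, profile, noise_sigma=1.0):
    """Matched filter for white noise: cross-correlate the image with the
    known source profile and normalize so that the output map is in units
    of the noise standard deviation."""
    corr = np.real(ifft2(fft2(image) * np.conj(fft2(profile))))
    return corr / (noise_sigma * np.sqrt(np.sum(profile ** 2)))

# Toy sky: a known circular Gaussian profile planted at pixel (10, 7).
n = 32
r = (np.arange(n) + n // 2) % n - n // 2        # wrapped pixel coordinate
g = np.exp(-0.5 * r ** 2 / 2.0 ** 2)
profile = np.outer(g, g)                        # centred at (0, 0), wrapped
image = 5.0 * np.roll(np.roll(profile, 10, axis=0), 7, axis=1)
snr = matched_filter_snr(image, profile)
peak = np.unravel_index(np.argmax(snr), snr.shape)
```

With coloured noise the filter would additionally be whitened by the noise power spectrum in Fourier space.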

CMB Lensing Laurence Perotto (Laboratoire d’Accelerateur Lineaire, France)

The Large Scale Structure of the Universe is imprinted in the CMB through the gravitational lensing effect. This secondary effect results in a remapping of the CMB observables, which introduces some departure from Gaussianity in the CMB maps. Based on such non-Gaussianity, it is in principle possible to reconstruct the underlying gravitational potential directly from the CMB maps. PLANCK, an ESA satellite scheduled to be launched next autumn, will be the first CMB experiment with the angular resolution and sensitivity required for such a reconstruction. However, CMB lensing is not the only source of non-Gaussianity in the CMB maps: astrophysical foreground residuals left after component separation, as well as systematics, are highly non-Gaussian, and even the CMB data processing algorithms themselves are not expected to preserve exactly the statistical properties of the CMB. Here we address this problem. First, within a demonstration model, we show that the measurement of the gravitational potential is still achievable after component separation with the Generalized MCA algorithm. Then, we confront our reconstruction method with more realistic PLANCK synthetic data, with the aim of assessing the feasibility of CMB lensing reconstruction with PLANCK.


Bayesian template fit and the WMAP cold spot Marcos Cruz (Instituto de Fisica de Cantabria, Spain)

A non-Gaussian cold spot, found in the WMAP data with spherical wavelets, is analysed in a Bayesian framework. Topological defects, such as textures, and gravitational effects are considered as possible candidates to explain the spot. We find high relative evidence for a texture-like template versus the hypothesis of no texture. The amplitude, size and number of expected textures are consistent with that interpretation.

Compressed Sensing Emmanuel Candes (Caltech, USA)


Thursday 8 May 2008

Bayesian estimation, classification, and denoising based on alpha-stable statistical modeling Panos Tsakalides (FORTH/ICS, Univ. of Crete, Greece)

Remote imaging systems are inherently affected by additive or multiplicative noise. We present a family of novel Bayesian algorithms within the framework of wavelet analysis, which reduce noise in images while preserving the structural features and textural information of the scene. First, we show that the subband decompositions of transformed images can be accurately modeled by alpha-stable distributions, a family of heavy-tailed densities. Consequently, we exploit this a priori information by designing maximum a posteriori estimators and classifiers. We use the alpha-stable model to develop noise-suppression processors that perform a nonlinear operation on the data, and we relate this nonlinearity to the degree of non-Gaussianity of the data. We compare our proposed methodology to current state-of-the-art soft thresholding techniques applied to real remote imagery data and we quantify the achieved performance improvement.
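The soft-thresholding baseline that the authors compare against can be sketched in one dimension; the alpha-stable MAP estimators themselves are more involved. Function names and the toy signal below are assumptions for illustration:

```python
import numpy as np

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_denoise(x, t):
    """One-level Haar decomposition, soft-threshold the detail
    coefficients, reconstruct."""
    avg = (x[0::2] + x[1::2]) / 2.0
    diff = soft_threshold((x[0::2] - x[1::2]) / 2.0, t)
    y = np.empty_like(x)
    y[0::2] = avg + diff
    y[1::2] = avg - diff
    return y

# Piecewise-constant signal plus a small alternating perturbation: the
# perturbation lives entirely in the detail band and is thresholded away.
clean = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0])
noise = np.array([0.1, -0.1, 0.1, -0.1, 0.1, -0.1, 0.1, -0.1])
denoised = haar_denoise(clean + noise, t=0.2)
```

The abstract's point is that replacing the generic threshold with a nonlinearity derived from an alpha-stable prior adapts the shrinkage to the heavy-tailed statistics of the coefficients.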

Multispectral Analysis based on Wavelet Fusion and Sparse Decomposition Albert Bijaoui (Observatoire Côte d'Azur, France)

The joint analysis of a set of images obtained in different spectral bands is an important challenge for today's astronomy. The main goal is the identification of astrophysical sources and their characterization in all bands. Chi-2 images allow one to optimize detection for multiband images in the case of similar resolutions, while wavelet fusion takes into account the PSF variations. Identification of low surface brightness objects is also improved. Different wavelet transforms were tested; pyramids of Laplacians led to the simplest efficient tool. By use of a matching-pursuit-like algorithm, the resulting fusion image is then decomposed into a set of elements called pyrels because of their pyramidal construction. Even if many different pyrel types exist, in the present presentation only the scalet type is used. Two decompositions were examined: for the à trous algorithm the pyrels are distributed on a grid without decimation, contrary to the pyramidal one. The resulting sparsity is quite similar, but the pyramidal algorithm allows one to process large images in a reasonable time. The object segmentation then merges pyrels into different clusters, taking into account their pixel coordinates and their scale. The individual image of each object is then easily restored. The considered pyrels can be interpreted as a specific non-orthogonal basis on which the image of each band can be decomposed. New pyrels could be added taking into account the residuals for each band. Spectral decompositions can be performed on pyrels taking into account specific spectral energy distributions. A toolbox is being built for the exploitation of this scheme in the context of the virtual observatory.

Sparsity and morphological diversity in Blind Source Separation Jerome Bobin (CEA/Saclay, France)

Over the last few years, the development of multi-channel sensors motivated interest in methods for the coherent processing of multivariate data. Some specific issues have already been addressed, as testified by the wide literature on the so-called blind source separation (BSS) problem. In this context, as clearly emphasized by previous work, it is fundamental that the sources to be retrieved present some quantitatively measurable diversity. Recently, sparsity and morphological diversity have emerged as a novel and effective source of diversity for BSS. We give here some essential insights into the use of sparsity in source separation and we outline the essential role of morphological diversity as a source of contrast between the sources. We present a new sparsity-based BSS method coined Generalized Morphological Component Analysis (GMCA) that takes advantage of both morphological diversity and sparsity, using recent sparse overcomplete or redundant signal representations. GMCA is a fast and efficient blind source separation method. Applied to simulated ESA/Planck data, it is shown to give effective astrophysical component separation.
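The idea of morphological diversity can be illustrated with the single-channel cousin of GMCA, Morphological Component Analysis: components that are sparse in different dictionaries are separated by alternating thresholding with a decreasing threshold. The sketch below (a spiky part sparse in the direct domain versus an oscillatory part sparse in the DCT domain) is a simplified illustration, not the multichannel GMCA algorithm itself:

```python
import numpy as np
from scipy.fft import dct, idct

def mca(signal, n_iter=50):
    """Separate a 1D signal into a spiky component (sparse in the direct
    domain) and an oscillatory component (sparse in the DCT domain) by
    alternating hard thresholding with a linearly decreasing threshold."""
    spikes = np.zeros_like(signal)
    smooth = np.zeros_like(signal)
    lam = np.abs(signal).max()
    for i in range(n_iter):
        t = lam * (1.0 - (i + 1) / n_iter)
        # Update the spiky component from the current residual.
        r = signal - smooth
        spikes = np.where(np.abs(r) > t, r, 0.0)
        # Update the oscillatory component in DCT space.
        c = dct(signal - spikes, norm='ortho')
        smooth = idct(np.where(np.abs(c) > t, c, 0.0), norm='ortho')
    return spikes, smooth

# One DCT atom plus one isolated spike: morphologically diverse content.
n = 64
c_true = np.zeros(n)
c_true[3] = 10.0
signal = idct(c_true, norm='ortho')
signal[17] += 5.0
spikes, smooth = mca(signal)
```

Each dictionary captures "its" component efficiently and the other poorly, which is exactly the contrast GMCA exploits, jointly with the mixing matrix, across channels.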

POLA-SMIC Jonathan Aumont (CESR, France)

The measurement of the CMB (Cosmic Microwave Background) temperature and polarization power spectra constitutes a fundamental tool for modern cosmology. This measurement is limited by contamination from foreground components such as the Galactic diffuse emission, point-source emissions and the SZ effect. We present a method to separate diffuse polarized components in CMB experiments, the PolEMICA (Polarized Expectation-Maximization Independent Component Analysis) algorithm, which is an extension to polarization of the SMICA (Spectral Matching Independent Component Analysis) temperature multi-detector multi-component (MD-MC) separation method (Delabrouille et al. 2003). This algorithm allows us to estimate blindly, with a Gaussian prior and in spherical harmonic space, multiple physical components from multi-detector polarized sky maps, using an EM (Expectation-Maximization) algorithm. Assuming a linear noisy mixture of components, we are able to reconstruct jointly the electromagnetic spectra of the anisotropies for each mode T, E and B, as well as the temperature and polarization spatial power spectra TT, EE, BB, TE, TB and EB, for each of the physical components and for the noise on each of the detectors. PolEMICA is specially developed to estimate the CMB temperature and polarization power spectra from sky observations including both CMB and foreground emissions. It has been tested intensively using, as a first approach, full-sky simulations of the Planck satellite polarized channels for a 14-month nominal mission, assuming a "toy" linear sky model including CMB, synchrotron, dust and free-free emissions.

Autocorrelation and Power spectrum estimation with missing data Jalal Fadili (ENSICAEN CNRS, France)

This talk will focus on two-point autocorrelation function and power spectrum estimation when samples are physically missing. This problem is of paramount importance in astronomical data analysis, for instance for CMB data. We will attack it from signal processing and statistical perspectives, reviewing estimation methods borrowed from these theories and characterizing and comparing them from a theoretical point of view. Their advantages and drawbacks will also be highlighted, and we will discuss the conditions under which such methods are potentially applicable to astronomical image analysis.
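The simplest estimator in this family only averages products over pairs of samples that are both observed. The sketch below, with a made-up masked series, shows that idea in one dimension (the talk's estimators and their theoretical comparison go well beyond this):

```python
import numpy as np

def autocorr_missing(x, mask, max_lag):
    """Autocorrelation of a regularly sampled series with missing samples
    (mask == False): average products only over pairs of lags where both
    samples are observed."""
    x = np.where(mask, x, 0.0)          # zero out missing samples
    n = len(x)
    ac = []
    for lag in range(max_lag + 1):
        cnt = np.sum(mask[:n - lag] & mask[lag:])   # observed pairs
        num = np.sum(x[:n - lag] * x[lag:])          # missing terms are 0
        ac.append(num / cnt if cnt > 0 else 0.0)
    return np.array(ac)

# Alternating series (-1)^n with a gap of missing samples: the pairwise
# estimator still recovers the exact correlation structure.
n = 40
x = np.cos(np.pi * np.arange(n))
mask = np.ones(n, dtype=bool)
mask[5:11] = False
ac = autocorr_missing(x, mask, max_lag=2)
```

Power spectrum estimation from masked data is harder, because the mask couples Fourier modes; that coupling is one of the issues the talk addresses.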


Herschel map-making using maximum likelihood estimator Pierre Chanial (Imperial College, UK)

The SPIRE and PACS instruments on-board Herschel are designed to carry out large area imaging surveys. The detector timelines must be combined to produce maps of the sky that are free of artifacts, particularly 1/f noise which can significantly degrade the images. I will present the challenges we are facing and how the PACS and SPIRE Instrument Control Centre plan to cope with them, by using a maximum likelihood estimator.
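The maximum likelihood map-maker solves (A^T N^-1 A) m = A^T N^-1 d, where A is the pointing matrix, N the noise covariance and d the detector timeline. For uncorrelated noise this reduces to inverse-variance-weighted binning, sketched below; handling the 1/f noise mentioned in the abstract requires the full (correlated) N and iterative solvers, which this toy does not attempt. Names and data are illustrative assumptions:

```python
import numpy as np

def ml_map(pointings, tod, n_pix, noise_var):
    """Maximum-likelihood map for uncorrelated noise: solve
    (A^T N^-1 A) m = A^T N^-1 d, where A[t, p] = 1 if time sample t
    observed pixel p. With diagonal N this is weighted binning."""
    w = 1.0 / np.asarray(noise_var, dtype=float)
    ata = np.zeros(n_pix)   # diagonal of A^T N^-1 A
    atd = np.zeros(n_pix)   # A^T N^-1 d
    for t, p in enumerate(pointings):
        ata[p] += w[t]
        atd[p] += w[t] * tod[t]
    return atd / ata

# Four samples hitting two pixels, with one noisier sample down-weighted.
m = ml_map(pointings=[0, 0, 1, 1],
           tod=[1.0, 3.0, 2.0, 2.0],
           noise_var=[1.0, 3.0, 1.0, 1.0],
           n_pix=2)
```

Pixel 0 is the weighted mean (1*1 + 3/3) / (1 + 1/3) = 1.5, illustrating how noisy samples contribute less to the map.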

Wavefront sensing from speckle images with polychromatic phase diversity Xavier Rondeau (IRCCYN, France)

Phase retrieval from monochromatic speckle images is a means of sensing the phase aberrations due to the telescope and to the Earth's atmosphere. Coupled with real-time adaptive correction or with a post-processing approach such as blind or myopic deconvolution, phase retrieval makes it possible to obtain diffraction-limited long-exposure images. It is however a difficult non-linear inverse problem, which is ill-posed (many local minima and degeneracies), ill-conditioned, and which involves a large number of unknowns. To solve this inference problem we have developed a dedicated global optimization strategy. Our algorithm makes use of the turbulence statistics (e.g. the Kolmogorov law) to restrict the global search to consistent phases and to speed up the local convergence by preconditioning. We show that the unwrapped wavefront can be estimated from a single image for D/r0 as high as 11 and under low light conditions (about 1500 photons/image). We have also extended our algorithm to reach D/r0 up to 80 thanks to polychromatic diversity, that is, wavefront retrieval from simultaneous images at two different wavelengths. As a direct application we derive the benefit of polychromatic phase retrieval for the optimal estimation of the differential atmospheric tip-tilt in the framework of the polychromatic laser guide star.

Deconvolution of (x, y, wavelength) images Ferréol Soulez (CRAL, France)

Currently, image deconvolution receives increasing attention from the academic world. However, little work has been done on the deconvolution of data with heterogeneous dimensions, for example (x, y, depth, wavelength, time...). Following an inverse problem approach, we propose to use physical correlations along the wavelength and time axes to constrain the deconvolution problem. This leads to a faster and better reconstruction than successive image deconvolutions. Moreover, in some cases, it leads to a very effective blind deconvolution scheme (deconvolution of observations blurred by an unknown process). We present the deconvolution of (x, y, wavelength) data cubes from the SuperNova factory. (The SuperNova factory is a survey using an integral field spectrograph to observe spectro-photometrically Type Ia supernovae (SNeIa) in the redshift range 0.03


Dynamics & Multiscale Morphology of the Cosmic Web Rien van de Weygaert (Kapteyn Astronomical Institute, The Netherlands)

Major characteristics of the gravitational formation and evolution of cosmic structure and the emergence of the cosmic web are 1) its hierarchical nature, 2) the anisotropic morphology of forming structures and 3) the emergence of large near-empty void regions. Each of these three aspects is successfully analyzed by means of DTFE-based structure identification techniques. The Delaunay Tessellation Field Estimator (DTFE) is a fully self-adaptive method for processing and extracting information on the structure and dynamics of discretely sampled structures, exploiting the natural adaptivity of Voronoi and Delaunay tessellations. DTFE forms the basis for the Multiscale Morphology Filter (MMF) formalism for identifying filamentary and sheetlike matter concentrations, and for the Watershed Void Finder (WVF) technique for analysis of the hierarchy of voids in the cosmic matter distribution. In the presentation we will describe these formalisms, along with recent results enabled by their application. The MMF has led to the identification and classification of filaments in cosmological computer simulations. Analysis of the galaxies in filaments identified from the SDSS DR5 galaxy dataset reveals a strong alignment signal between galaxy spins and filaments, a key toward understanding their formation history. WVF analysis of the void population in LCDM simulations has shed light on the nature of Megaparsec alignments: the strong alignment of voids is the result of large-scale tidal forces.
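The first step of the DTFE can be sketched in two dimensions: the density estimate at each sample point is (d + 1) divided by the total area of the Delaunay simplices sharing that point. The full DTFE then interpolates these values linearly across each simplex, which this minimal sketch (with a made-up point set) omits:

```python
import numpy as np
from scipy.spatial import Delaunay

def dtfe_density(points):
    """DTFE vertex densities in d = 2: density at each sample point is
    (d + 1) / (total area of the Delaunay triangles touching it)."""
    tri = Delaunay(points)
    area_sum = np.zeros(len(points))
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # Triangle area from the cross product of two edge vectors.
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        area_sum[simplex] += area
    return np.where(area_sum > 0, 3.0 / area_sum, 0.0)

# Three clustered points plus one distant point: the local triangle areas
# make the estimator self-adaptive, assigning high density to the cluster.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [10.0, 10.0]])
dens = dtfe_density(pts)
```

The adaptivity comes entirely from the tessellation: no smoothing scale is chosen by hand, which is what makes DTFE suitable for the hierarchical, anisotropic structures described above.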

Multimodal Nested Sampling: A robust & efficient technique for astronomical data analysis Farhan Feroz (Univ. of Cambridge, UK)

In performing the Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional MCMC sampling methods. Second, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive. I present a newly developed `Multimodal Nested Sampling' technique for parameter estimation and evidence evaluation from distributions that may contain multiple modes and significant degeneracies. I also present some results from the application of this technique to the analysis of beyond-the-Standard-Model theories of particle physics and the extraction of galaxy clusters from Planck data.
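The evidence computation that nested sampling performs can be sketched in its basic single-mode form: repeatedly discard the lowest-likelihood live point, credit it with the shrinking prior volume, and replace it by a constrained prior draw. The multimodal technique in the talk replaces the naive rejection step below with clustered ellipsoidal sampling; everything here (function names, the toy problem) is an illustrative assumption:

```python
import numpy as np

def nested_sampling(log_like, prior_draw, n_live=100, n_iter=300, seed=0):
    """Basic nested sampling: accumulate the evidence Z = sum L_i dX_i
    using the expected prior-volume shrinkage X_i = exp(-i / n_live)."""
    rng = np.random.default_rng(seed)
    live = prior_draw(rng, n_live)
    logl = np.array([log_like(t) for t in live])
    z, x_prev = 0.0, 1.0
    for i in range(n_iter):
        worst = np.argmin(logl)
        x_i = np.exp(-(i + 1) / n_live)
        z += np.exp(logl[worst]) * (x_prev - x_i)
        x_prev = x_i
        # Replace the worst point: naive rejection sampling of the
        # likelihood-constrained prior (the step MultiNest accelerates).
        while True:
            theta = prior_draw(rng, 1)[0]
            if log_like(theta) > logl[worst]:
                break
        live[worst], logl[worst] = theta, log_like(theta)
    # Credit the remaining live points with the leftover prior volume.
    return z + x_prev * np.mean(np.exp(logl))

# Toy problem: uniform prior on [0, 1], L(t) = t + 0.5, so Z = 1 exactly.
z = nested_sampling(lambda t: np.log(t + 0.5),
                    lambda rng, n: rng.uniform(0.0, 1.0, n))
```

Unlike MCMC, the same run yields both the evidence (for model selection) and posterior samples (the discarded points, suitably weighted), which is why the technique addresses both problems raised in the abstract.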

CMB observables and likelihood functions for multi-connected topologies Anastasia Niarchou (Imperial College, UK)

Motivated by the deficit of power at large scales in the CMB power spectrum derived from the WMAP satellite, we explore CMB anisotropies in multi-connected spaces with spherical geometry. These spaces are finite and anisotropic, which translates into reduced power and possible apparent non-Gaussianity in CMB observables. We simulate the power spectrum and the correlation matrix of temperature anisotropies in these topologies and show how they can be compared with the data to infer the statistical significance of such models. We present likelihood functions based on the


WMAP data for both observables, leaving curvature and orientation as free parameters, and use Bayesian methods to determine whether they are supported by the observations. Finally, we briefly discuss how these methods can be used to explore more general anisotropic models.

Nonparametric Estimation of CMB Foreground Emission Peter Freeman (Carnegie Mellon Univ., USA)

A critical aspect of the analysis of the CMB temperature and polarization fields is estimating foreground emission. Parametric methods of foreground estimation (based on, e.g., template fitting) are complex and problematic in a number of ways: the models may be incomplete, and the resulting estimates may be biased (necessitating sky cuts for further CMB analysis) or may lack useful estimates of uncertainty. Such problems motivate our use of nonparametric statistical procedures for foreground estimation. We apply an iterative, simultaneous inference scheme to estimate both the foreground emission and the values of cosmological parameters. We represent the diffuse foreground using orthogonal basis functions on the sphere, while treating the sum of the temperature (or polarization) fields and the pixel noise as a Gaussian random field with covariance computable directly from assumed cosmological parameters. Simultaneous inference improves the accuracy of the estimators and accounts properly for the uncertainty in both foreground and CMB components; it also obviates the need for sky cuts. We demonstrate our scheme using WMAP-3 data.


Friday 9 May 2008

Review of Weak Gravitational Lensing Alexandre Refregier (CEA/Saclay, France)

Multiepoch extraction of Supernovae from a structured background Sebastien Bongard (Lawrence Berkeley National Lab, USA)

The SuperNova Factory is a survey using an integral field spectrograph to observe Type Ia supernovae (SNeIa) spectro-photometrically in the redshift range 0.03

Fast Clusterwise m-Adic Regression: Application to Redshift Calibration Fionn Murtagh & Pedro Contreras (Royal Holloway, University of London, UK)

Nonlinear regression is used for mapping photometric and astrometric redshifts (d'Abrusco et al., 2006). We investigate the use of a new regression approach that directly exploits number-encoding properties. Through use of a natural metric, termed the Baire metric in Murtagh et al. (2008), there is an immediate association with an ultrametric topology. Our motivation includes: (i) computational advantage; (ii) clusterwise or local regression; (iii) support for very large or high-precision numbers.
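The Baire metric itself is simple to state: the distance between two numbers written as digit strings is base^(-k), where k is the length of their longest common prefix. A small sketch (our illustration, with made-up redshift values):

```python
def baire_distance(x, y, base=10):
    """Longest-common-prefix (Baire) distance on digit strings: base**-k
    where the first k characters agree, and 0.0 for identical strings."""
    if x == y:
        return 0.0
    k = 0
    for a, b in zip(x, y):
        if a != b:
            break
        k += 1
    return float(base) ** (-k)

# redshifts written to fixed precision: close values share long prefixes
d_close = baire_distance("0.3752", "0.3759")
d_far = baire_distance("0.3752", "0.4001")
```

Because the distance depends only on shared prefixes, it satisfies the strong triangle inequality d(x, z) <= max(d(x, y), d(y, z)), which is what yields the ultrametric topology and makes prefix-based clusterwise regression cheap: grouping by prefix is a single pass over the data.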

References
- R. d'Abrusco, G. Longo, M. Paolillo, E. de Filippis, M. Brescia, A. Staiano, R. Tagliaferri, The use of neural networks to probe the structure of the nearby universe, astro-ph/0701137, ADA IV, 2006.
- F. Murtagh, G. Downs and P. Contreras, Hierarchical clustering of massive, high dimensional data sets by exploiting ultrametric embedding, SIAM Journal on Scientific Computing, in press, 2008.


Sparse Representation and Efficient Inference for SDSS Spectra Joseph Richards (Carnegie Mellon, USA)

New methods for dimension reduction hold great promise for mining valuable information from the SDSS spectroscopic catalog. These data probe the physical conditions of an unprecedented number of galaxies and QSOs. Yet, the complexity and high dimensionality of the data make it challenging for astronomers and statisticians to estimate parameters of interest (e.g. redshift), classify the spectra (e.g. galaxies vs. QSOs), and detect groupings in the data set. We apply local multidimensional scaling (MDS) techniques to search for lower-dimensional representations of the spectra in SDSS Data Release 6, and then use this lower-dimensional projection to estimate redshift and classify spectra. We investigate a set of dissimilarity measures to determine the optimal procedure to estimate redshift from the spectra. Our techniques, unlike the methods currently used by SDSS to estimate redshift, rely neither on identification of emission lines nor on comparison to template spectra. Our methods can easily be extended to multi-wavelength datasets.
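As a hedged sketch of the embedding step (not the authors' pipeline), the scikit-learn MDS implementation can be applied to synthetic stand-in "spectra" generated from a single latent degree of freedom; the two classes separate cleanly in the low-dimensional embedding:

```python
import numpy as np
from sklearn.manifold import MDS

# hypothetical stand-in for SDSS spectra: 200 objects of two classes that
# differ only through one latent parameter mapped into 300 "wavelength" bins
rng = np.random.default_rng(0)
latent = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])
basis = rng.standard_normal((1, 300))
spectra = latent[:, None] * basis + 0.1 * rng.standard_normal((200, 300))

# metric MDS on Euclidean dissimilarities between spectra
emb = MDS(n_components=2, dissimilarity="euclidean", random_state=0).fit_transform(spectra)
sep = np.linalg.norm(emb[:100].mean(axis=0) - emb[100:].mean(axis=0))
```

The abstract's local MDS variant restricts the stress function to nearby pairs, and the interesting statistical question is the choice of dissimilarity measure fed into it.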

Optimal PSF modelling for weak lensing: complexity and sparsity Stephane Paulin-Henriksson (CEA/Saclay, France)

Our Universe appears to be dominated by dark matter and dark energy. Their study will ultimately lead to the discovery of new classes of fundamental particles or to a theory of gravity that supersedes Einstein's General Relativity. In this framework, gravitational weak lensing is a promising tool to probe the dark universe, via the study of statistical correlations among the shapes of distant galaxies that are sheared by foreground clumps of dark matter. To get significant results, it is necessary to measure the distortions of millions of galaxies to an extremely high accuracy in the presence of observational constraints. The main constraint is that galaxies appear convolved with the ``Point Spread Function'' (PSF) of the instrument. This instrumental effect has an amplitude one order of magnitude larger than the gravitational shear signal and, if not subtracted correctly, can mimic it. We present our work on building sparse dictionaries of stars and galaxies to optimize the measurement of galaxy shapes.

Reliability of the detection of the acoustic peaks in the galaxy distribution Vicent Martinez (Valencia Observatory, Spain)

We study the reliability of the detection of the baryonic acoustic peak in the galaxy two-point correlation function at large scales. We have found additional peaks at very large pair distances (> 200 Mpc/h) in the SDSS LRG data. In order to estimate the statistical confidence of these peaks, we simulate isotropic Gaussian fields with an exactly known oscillating correlation function and test the available estimation methods to see if we can recover the oscillations. We use the turning-band method to generate the realisations, the usual Landy-Szalay estimator for the correlation function, and block jackknife-after-bootstrap to describe its sample distribution. We apply the same methods to the SDSS DR6 LRG data and to the 2dFGRS.
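The Landy-Szalay estimator named above has a compact form, xi = (DD - 2DR + RR)/RR with normalised pair counts. A 1-D toy version (our sketch, not the authors' code) shows that for unclustered data the estimate scatters around zero:

```python
import numpy as np

def pair_counts(a, b, edges, auto=False):
    """Histogram of pairwise separations (1-D toy; auto=True counts each
    pair once and drops self-pairs)."""
    d = np.abs(a[:, None] - b[None, :])
    d = d[np.triu_indices(len(a), k=1)] if auto else d.ravel()
    return np.histogram(d, bins=edges)[0].astype(float)

def landy_szalay(data, rand, edges):
    """xi = (DD - 2DR + RR) / RR, each pair count normalised by its
    total number of pairs."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges, auto=True) / (nd * (nd - 1) / 2.0)
    rr = pair_counts(rand, rand, edges, auto=True) / (nr * (nr - 1) / 2.0)
    dr = pair_counts(data, rand, edges) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr

# unclustered data should give xi consistent with zero at all scales
rng = np.random.default_rng(0)
edges = np.linspace(1.0, 20.0, 8)
xi = landy_szalay(rng.uniform(0, 100, 500), rng.uniform(0, 100, 2000), edges)
```

The reliability question of the abstract is precisely how large the residual scatter and estimator covariance are at very large separations, where the normalising RR counts become sparse.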


Background estimation in far UV images from GALEX Survey Antoine Llebaria (LAM/CNRS, France) & D. Vibert

Deep far-UV images are very dense fields of galaxies. Background is defined as the residual image brightness once all detected objects are removed. This generic definition needs further precision: the background can contain patterns of starlight and diffuse UV radiation from different sources on the sky (dust, interstellar and intergalactic medium, unresolved background objects, etc.). In our method the background is defined by the inferior hull of the image. This hull is found through a multiresolution (nonlinear) top-down approach in which a sparse characterization of the physical background is crucial in order to restore it as precisely as possible.

Mining the SDSS archives for AGN: the problem of partitioning the parameter space Raffaele d'Abrusco (Univ. of Napoli, Italy) G. Longo, M. Brescia, S. Cavuoti, G. d’Angelo, N. Deniskina, M. Paolillo

In the framework of the Euro-VO we have implemented a series of tools capable of performing data mining on massive data sets of high dimensionality in a distributed computing environment. We briefly describe the tools (MLP, Support Vector Machines and Probabilistic Principal Surfaces) and two applications to real data. The first application is supervised and concerns the derivation of photometric redshifts, while the second makes use of unsupervised clustering to disentangle normal galaxies from candidate AGNs and QSOs.

The Grand Unified Power Spectrum Andrew Jaffe (Imperial College, UK)

We combine current CMB power-spectrum data into a single "Grand Unified Spectrum", calculated from the global likelihood function for all relevant experiments. This spectrum is an excellent summary of the data, and (combined with error information) is nearly a sufficient statistic for present data. We discuss implications for parameterizing the likelihood function in terms of a small number of measured quantities with some physical meaning, and extensions to other sorts of data.

Super resolution of multispectral images using TV image models Rafael Molina (Univ. de Granada, Spain)

In our work we propose novel algorithms for pansharpening of multispectral images. The algorithms are based on the use of a Total Variation image prior for the high-resolution multispectral image. The restored images and the parameters are estimated utilizing variational distribution approximations. Within the hierarchical Bayesian formulation, the proposed methodology incorporates prior knowledge on the expected characteristics of the multispectral images, uses the sensor characteristics to model the observation process of both panchromatic and multispectral images, and includes information on the unknown parameters in the model in the form of hyperprior distributions. The reconstructed multispectral image

and the unknown hyperparameters for the image prior and the noise are simultaneously estimated. The proposed algorithms provide approximations to the posterior distributions of the latent variables using variational methods. We show that some of the current approaches to TV-based image restoration are special cases of our framework. Using real and synthetic data, the pansharpened multispectral images are compared with the images obtained by other pansharpening methods and their quality is assessed both qualitatively and quantitatively.

Work in collaboration with: Miguel Vega and Javier Mateos (both at the University of Granada, Spain) and A.K. Katsaggelos (Northwestern University, Evanston, Illinois)

Making Maps of the Planck Cosmic Microwave Background Data Julian Borrill (Berkeley Lab/UC Berkeley, USA)

The Planck satellite, scheduled to launch this autumn, will survey the entire microwave sky at 9 frequencies for upwards of 2 years. The resulting data set will provide the definitive measurement of the CMB temperature and a new state-of-the-art measurement of the CMB polarization. For almost a decade now, members of the Planck collaboration have been developing, testing and comparing map-making algorithms and implementations on the supercomputers at the US Department of Energy's National Energy Research Scientific Computing (NERSC) Center. In this talk I will describe the main classes of algorithm under investigation and compare their performance (both fidelity and computational resource requirements) on a series of increasingly realistic simulated Planck data sets. I will also consider the implications of these results for next-generation suborbital CMB polarization missions such as EBEx and PolarBear.

Cosmic conundrums and data analysis challenges Max Tegmark (MIT, USA)

I first summarize how the recent avalanche of precision measurements involving the cosmic microwave background, galaxy clustering, the Lyman alpha forest, gravitational lensing, supernovae Ia and other tools has transformed our understanding of our universe. I then discuss key open problems such as the nature of dark matter, dark energy and the early universe, with emphasis on interesting data analysis challenges posed by upcoming observations, in particular 21 cm tomography.


ABSTRACTS – POSTER CONTRIBUTIONS

3D Data Restoration with the Curvelet Transform A. Woiselle (CEA/Saclay, France), J.L. Starck & J. Fadili

We present a new 3D curvelet decomposition which is especially well designed for representing 1D filaments in a 3D space. We show that the 3D curvelet transform can be built using two existing 3D transforms: the 3D undecimated isotropic wavelet transform and the 3D beamlet transform. For Gaussian or Poisson noise, curvelet coefficients can be thresholded in a similar way as in the 2D case. A denoising algorithm is presented and a range of examples illustrates the results.

Unresolved Galaxy Classifier for ESA's GAIA mission I. Bellas-Velidis1, M. Kontizas2, P. Tsalmantza2,3, E. Livanou2, E. Kontizas1, R. Korakitis4, A. Dapergolas1, A. Karampelas2, C.A.L. Bailer-Jones3, B. Rocca-Volmerange5,6, A. Vallenari7, M. Fioc5,8 (1) National Observatory of Athens, Greece, (2) Univ. of Athens, Greece, (3) Max-Planck-Institut für Astronomie, Heidelberg, Germany, (4) National Technical University of Athens, Greece, (5) IAP, France, (6) Univ. de Paris-Sud XI, France, (7) INAF, Padova Observatory, Italy, (8) Universite Pierre et Marie Curie, France

We present the Unresolved Galaxy Classifier (UGC), an algorithm for galaxy spectra classification and astrophysical parameter extraction, and its implementation. It is being implemented in Java as part of the GAIA mission's ground-based pipeline software and is developed in the frame of the Data Processing and Analysis Consortium (DPAC). It is provided to analyze low-dispersion spectra of unresolved galaxies that will be observed with Gaia's BP/RP instrument. UGC is based on Support Vector Machines (SVM), a supervised learning technique. The system is trained with "labeled" galaxy spectra whose galaxy type and astrophysical parameter values are known a priori. The SVM in classification mode is used for the galaxy type, whereas the regression mode is used for the parameters. The UGC training function includes three modules: tuning of the SVM parameters cost and gamma, learning of the SVM models, and testing their performance; a separate labeled set of spectra is used for the latter. The result of the training is a set of SVM models, one for each single-parameter regression and one for the classification. The SVM models are then used by the UGC application module, which is applied to unlabelled galaxy spectra and estimates the galaxy type and the parameter values. The data necessary to train and test the UGC are provided by the Gaia Object Generator library of simulated galaxy BP/RP spectra. The library is based on a set of synthetic galaxy spectra that we produced using the Pegase software, covering a grid of predefined galaxy types and parameters. In the first stage of development only a subset of the library spectra is used: rest-frame spectra without interstellar reddening. The tests showed very good performance of the algorithm and its applicability to the task. UGC has already been delivered and approved by Gaia DPAC. Development of the next version is in progress.
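The dual use of SVMs in classification and regression mode can be sketched with scikit-learn on a hypothetical stand-in for the labelled library: two synthetic "galaxy types" whose spectra differ through one latent parameter (a slope), which also serves as the astrophysical parameter for the regression (all data here are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC, SVR

# hypothetical stand-in for the labelled BP/RP training library
rng = np.random.default_rng(1)
wave = np.linspace(0.0, 1.0, 60)
slope = np.concatenate([rng.uniform(-1.0, -0.2, 100), rng.uniform(0.2, 1.0, 100)])
gtype = (slope > 0).astype(int)
spectra = slope[:, None] * wave[None, :] + 0.05 * rng.standard_normal((200, 60))

# even-indexed spectra train, odd-indexed spectra test
clf = SVC(C=10.0, gamma="scale").fit(spectra[::2], gtype[::2])   # classification mode
reg = SVR(C=10.0, gamma="scale").fit(spectra[::2], slope[::2])   # regression mode
acc = clf.score(spectra[1::2], gtype[1::2])
rmse = float(np.sqrt(np.mean((reg.predict(spectra[1::2]) - slope[1::2]) ** 2)))
```

The C (cost) and gamma values here are arbitrary; the tuning module described in the abstract corresponds to a grid search over exactly these two hyperparameters.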


Studying the interstellar medium with Blind Signal Separation methods Olivier Berne (CESR-LATT, France)

In this contribution, we will present how we have used different classes of Blind Signal Separation (BSS) methods (ICA, NMF) to study the emission of interstellar dust. Using mid-infrared spectro-imagery data from the Spitzer Space Telescope and ISO (Infrared Space Observatory), we were able to disentangle the emission spectra of different types of populations of carbonaceous nanograins in photon-dominated regions of the Milky Way. We found that the presence of each population is linked to the local physical conditions, e.g. UV field and density. We applied the same methods to the spectro-imagery of galaxies and could map the distribution of the various populations of nanograins, thus indirectly providing a map of the physical conditions (e.g. UV field density) inside these galaxies. We will show an application to the Antennae Galaxies, where a burst of star formation is revealed by these maps. Finally, we have applied BSS methods to optical images of the NGC 7023 reflection nebula, taken from the Hubble Space Telescope archive, and to new images obtained at the Canada France Hawaii Telescope. We separated the fluorescent emission of dust in the visible from the scattered stellar light. This separation enabled us to study the spatial distribution of the fluorescent emission, providing new insights into the nature of its carrier.

Parameter Estimation by Optimal Projections in a Local Environment A. Bijaoui (UNSA/CNRS/OCA, France), A. Recio-Blanco, P. De Laverny, G. Kordopakis

In order to determine astrophysical parameters from spectra obtained during the Gaia mission, a method called MATISSE was developed to extract the information from a model grid. The parameters are estimated by projections on relevant functions derived from a multilinear regression. A first function set is built considering the whole parameter range of the grid; the values are then refined using the function set computed on the environment associated with the first parameter estimate. This procedure is adapted to fitting huge sets of data with the same model grid: the projection function sets are computed beforehand for the given grid, and the parameters are then estimated by only two projections on known vectors. This procedure was applied to spectrophotometric data (stars or galaxies) with up to 9 parameters. An improvement based on SIR (sliced inverse regression) is under examination for reducing the number of projection sets by better accounting for non-linearities.

Compressed Sensing in Astronomy Jerome Bobin (CEA/Saclay, France)

Recent advances in signal processing have focused on the use of sparse representations in various applications. A new field of interest based on sparsity has recently emerged: Compressed Sensing. This theory provides a new sampling framework that goes beyond the well-known Shannon sampling theorem in particular cases. In this paper we show how Compressed Sensing can provide new insights into astronomical data compression and, more generally, how it paves the way for new conceptions in astronomical remote sensing. The attractiveness of Compressed Sensing theory stems from its ability to provide a very simple (low computational cost) coding process, thus favouring its use for

onboard applications; all the complexity is then carried by the decoding step. Compressed Sensing provides a new framework in which data acquisition and data processing are merged. In this context, we introduce a new practical thresholding-based algorithm that aims at solving the decoding step. We further show that CS is more than classical compression, as the whole process can handle physical priors, thus providing enhanced compression performance. Numerical results are given in the scope of the Herschel/PACS space mission. Experiments illustrate the reliability of such a method to compress Herschel/PACS data.
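The flavour of a thresholding-based decoder can be conveyed with plain iterative soft-thresholding (ISTA) — our generic sketch, not the specific algorithm of the talk — recovering a sparse signal from a small number of random projections:

```python
import numpy as np

def ista(y, A, lam=0.05, n_iter=400):
    """Iterative soft-thresholding for min_x ||y - Ax||^2/2 + lam*||x||_1:
    alternate a gradient step on the data term with a soft threshold."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L                           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

# recover a 5-sparse signal of length 200 from 80 random projections
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A @ x_true, A)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The coding side is just the matrix product A @ x, which is what makes the onboard half of the scheme so cheap; all the iterations above live in the ground-segment decoder.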

The region N83-84-85 of the SMC: Automated classification and study of a possible triggered star formation Emmanuel Bratsolis (Univ. of Athens, Greece)

The region N83-84-85 belongs to the inner wing of the SMC and is of interest because of its OB associations and nebulae. It is evident that there is a correlation between associations like NGC 456, 460 and 465 and the nebulae of ionized gas. We focus on this region because it seems to show a feedback between OB star formation and the physical properties of the interstellar medium (Bratsolis et al. 2004, A&A, 423, 919). Different molecular clouds (Bolatto et al. 2003, ApJ, 595, 167) and HI shells (Hatzidimitriou et al. 2005, MNRAS, 360, 1171) have been detected in the same region. Stellar spectral classification is not only a tool for labeling individual stars but is also useful in studies of stellar population synthesis. Low-dispersion objective prism images have been used, and automated methods of detection, extraction and classification have been developed. We present a classification method based on an artificial neural network. We make a brief presentation of the entire automated system and compare the new classification method with the method of maximum correlation coefficient (Bratsolis 2005, JASP, 15, 2536).

Dynamic Tomographic Imaging of the Solar Corona Mark Butala1, Richard A. Frazin2, Yuguo Chen1, Farzad Kamalabadi1 (1) Univ. of Illinois at Urbana-Champaign, USA, (2) Univ. of Michigan, USA

Empirical 3D maps of the electron density in the solar corona may be tomographically reconstructed from white-light images. A combination of differential emission measure and tomographic analyses of images observed at multiple wavelengths in the extreme ultraviolet yields 3D maps of the temperature in the corona. In either case, fairly standard computational methods may be used to solve the resultant inverse problem and provide reliable reconstructions of persistent, large-scale structures. New developments in data assimilation and statistical estimation theory are necessary to characterize the transient phenomena responsible for space weather. We present a state-space model for the time-varying corona and measurements. State estimation methods may then be used to solve the resultant time-dependent inverse problems. However, such a formulation demands new state-estimation algorithms that scale well with problem size. We will describe a Monte Carlo state estimation algorithm that results in dramatic reductions in computational burden for tomographic inverse problems, thus enabling global data-assimilative models of the stellar atmosphere. We will present reconstructions from SOHO-LASCO, SOHO-EIT, and MLSO-Mk4 measurements and discuss ramifications of the newly launched STEREO mission.


Pattern matching applied to automatic astronomical data reduction: the ESO FORS pipeline Sandra Castro (ESO, Germany)

Robustness and flexibility are key requirements for an automatic data-reduction pipeline. Robustness is the capability to manage unexpected situations due to hardware failures, supplying precise information about what went wrong and thereby granting thorough and safe monitoring of the instrument health. Flexibility, on the other hand, is obtained by the utilisation of algorithms which are general enough to withstand any hardware upgrade, ideally leading to instrument-independent data reduction systems. Pattern-matching techniques extend the palette of tools available to solve calibration problems, including instrument modeling and correlation methods, and are especially useful in the critical context of automatic data reduction. In this presentation a qualitative overview of the automatic pipeline for the reduction of ESO VLT FORS data is given, in order to provide direct experience of the advantages of pattern-matching as a general approach to instrument health monitoring and calibration.

Super-resolution of EUV images using small-scale offpoints Veronique Delouille (Royal Observatory of Belgium, Belgium)

The quality of the images recorded by space telescopes is always limited by the instrument's resolution. However, post-processing techniques exist that make it possible to reconstruct a higher-resolution (HR) image from a set of low-resolution images. Such 'super-resolution' methods have the advantages of being low-cost and easily applicable to any telescope images. Several algorithms have been proposed in the literature; we consider in particular algorithms based on the solution of an inverse problem. In this presentation, a set of EUVI/STEREO images obtained with small-scale artificial offpoints is used as input to the super-resolution algorithm. The shift between the images is first estimated to subpixel precision with an optical flow procedure. This shows that subpixel resolution can be achieved on a regular basis provided that small-scale offpoints, spread and known appropriately, are available.
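The registration step can be illustrated with phase correlation, a simple stand-in for the optical-flow procedure of the abstract (this toy recovers only the integer part of the shift; the real method refines to subpixel precision):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer shift between two frames via phase correlation: whiten the
    cross-power spectrum and locate the resulting delta-function peak."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
    dy, dx = np.unravel_index(int(np.argmax(r)), r.shape)
    n, m = a.shape
    # map peak coordinates to signed shifts
    return (dy if dy <= n // 2 else dy - n, dx if dx <= m // 2 else dx - m)

# a frame circularly shifted by (3, -5) pixels relative to the reference
rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
shift = phase_correlation_shift(np.roll(ref, (3, -5), axis=(0, 1)), ref)
```

Subpixel refinement then typically fits the correlation peak (or upsamples it locally), which is where the "spread and known appropriately" condition on the offpoints enters.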

Galaxy Cluster Detection using the Sunyaev-Zeldovich Effect: A Multiple Testing Approach with Error Control David Friedenberg (Carnegie Mellon Univ., USA)

Understanding the distribution of matter in the universe is important for understanding the universe's early history and evolution. Galaxy clusters, in particular, serve as an important tracer of the matter distribution in the universe. An accurate accounting of galaxy clusters can provide sharp estimates of fundamental cosmological parameters. A new and exciting problem in cosmology and astronomy is detecting distant galaxy clusters via their Sunyaev-Zeldovich (SZ) signature. This signal is weak but is, to first order, distance independent. This property makes SZ data a unique tool for detecting distant galaxy clusters. The Atacama Cosmology Telescope (ACT) is designed and dedicated to creating multi-frequency maps of the SZ effect across large patches of sky. Developing techniques for analyzing these images is an active area of research.


Most of the effort to date has focused on filters which isolate the SZ signal and eliminate confounding cosmological signals. Significantly less effort has gone into identifying the clusters after filtering. This task is commonly accomplished using simple peak finding algorithms. Such procedures can find clusters; however, they make no guarantees about purity or completeness, and they produce no measure of error. We propose a new statistical approach to the problem that provides a rigorous probabilistic bound on purity that the peak finding algorithms lack. This is a more efficient use of the data, since we not only identify the locations of the clusters but also get an error rate. This error rate can then be easily incorporated into downstream inferences that will be made using the clusters. For instance, we can use the results to derive estimates and confidence intervals for Hubble's constant and the mass density of the universe that are independent of estimates from other cosmological data sources.
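One standard way to attach the kind of purity guarantee described above to a list of candidate detections is false-discovery-rate control; the sketch below uses the Benjamini-Hochberg step-up rule on a toy filtered map (our illustration, not necessarily the authors' exact procedure):

```python
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up rule: find the largest k with
    p_(k) <= alpha*k/m and accept everything at or below p_(k),
    controlling the expected fraction of false detections at alpha."""
    p = np.sort(pvals)
    m = len(p)
    passed = np.nonzero(p <= alpha * np.arange(1, m + 1) / m)[0]
    thresh = p[passed[-1]] if passed.size else -1.0
    return pvals <= thresh

# toy filtered map: 1000 noise pixels plus 20 strong "cluster" pixels
rng = np.random.default_rng(0)
z = np.concatenate([rng.standard_normal(1000), rng.normal(7.0, 1.0, 20)])
detected = benjamini_hochberg(norm.sf(z), alpha=0.05)   # one-sided p-values
```

Unlike a fixed SNR cut, the resulting detection list carries an explicit error rate (here, an expected false-discovery proportion of at most 5%) that can be propagated into downstream parameter estimates.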

EZ: a VO compliant tool for automatic spectral parameter measurement Marco Fumana (INAF-IASF Milano, Italy)

We present EZ, an open-source software package for the automatic measurement of spectroscopic parameters, with special attention to galaxy redshifts and their reliability. Its main ingredients are a custom spectrum template set, a combination of cross-correlation and fitting algorithms, and a decisional tree which drives EZ computations. EZ has been tested in fully automatic mode on VVDS and zCOSMOS data, giving a redshift measurement success rate of 80% (with peaks of 95%). We are currently adding new functionalities to measure the main spectral features, such as line equivalent widths and fluxes. The software supports the PLASTIC messaging system, and it is also able to retrieve data from the Virtual Observatory, accessing SSA services through the VO registries.

Multi-wavelength data fusion for astronomical data classification and analysis Susan Hojnacki (JPL / NASA, USA)

The nature and origin of the often variable emission from young, low-mass stars is an area of intense research. A large fraction of observing time has been devoted to the study of star formation regions and, as a result, datasets across a wide range of wavelengths exist from both ground-based and space-based observations. However, many studies of these observations involve analysis of data from only one source and typically only one modality (e.g., only infrared or only optical). To take advantage of this increasing amount of multiwavelength astronomical data, we are using data fusion techniques coupled with multivariate statistics on a large number of astronomical objects. This simultaneous analysis of emission across different wavelength regimes will help to provide a composite understanding of young stellar objects. Our existing technique coupled with fused multiwavelength data has the potential to provide insight into the physical mechanisms responsible for the intense emission from young stars in different wavelength regimes and to uncover trends and correlations between those regimes. We present the data fusion technique along with the drawbacks and hazards of these methods.


The CPL Wavelength Calibration method using Iterative Cross-Correlation Yves Jung (ESO, Germany)

Most of the ESO VLT data reduction pipelines need to calibrate the instruments in wavelength. The data format and size, the calibration units or strategies, and the required accuracy can vary a lot from one instrument to the next, making it impossible to provide a single wavelength calibration module that could support the different needs. To cope with these differences, we instead provide a set of generic wavelength calibration functions that can be called in different ways depending on the context, achieving both goals of supporting several instruments and providing robust and well-tested algorithms for fast development. In order to lower maintenance costs, the Common Pipeline Library implements many of the functionalities needed by the VLT/VLTI data reduction pipelines. It contains mostly low-level functions, as they are easier to share. Efforts have also been made to provide higher-level data reduction algorithms reusable between the different pipelines. The CPL Data Reduction Software (CPLDRS) sub-library contains functions for object detection, image recombination, point pattern matching, fitting, photometry, and wavelength calibration. This poster gives a detailed description of the cross-correlation based wavelength calibration routines offered in the CPLDRS library, with applications to many VLT/VLTI instruments like SOFI, CRIRES, ISAAC, NACO and VISIR.
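A single step of such a cross-correlation calibration can be sketched as follows (a toy illustration, not the CPL implementation): the lag maximising the correlation of the observed arc spectrum with a spectrum synthesised from the line catalogue gives the zero-point offset in pixels, which the iterative scheme then refines with an updated dispersion solution.

```python
import numpy as np

def xcorr_shift(obs, model):
    """Pixel offset between an observed arc spectrum and a synthetic
    catalogue spectrum, from the peak of their cross-correlation."""
    n = len(obs)
    cc = np.correlate(obs - obs.mean(), model - model.mean(), mode="full")
    return int(np.argmax(cc)) - (n - 1)   # map 'full' index to signed lag

# synthetic arc lamp: four emission lines, observed shifted by 7 pixels
pix = np.arange(500)
model = sum(np.exp(-0.5 * ((pix - c) / 2.0) ** 2) for c in (60, 180, 300, 430))
obs = np.roll(model, 7) + 0.01 * np.random.default_rng(0).standard_normal(500)
shift = xcorr_shift(obs, model)
```

Making the routine generic in exactly this way — the instrument specifics live entirely in the synthesised model spectrum — is what lets one cross-correlation core serve many instruments.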

The slitless spectroscopy package aXe Martin Kuemmel (Space Telescope - European Coordinating Facility, Germany)

Slitless spectroscopy is a rather unusual way to obtain spectral information on celestial objects. The method unfolds its main advantage of delivering many spectra in a single image particularly in the low-background environment of space, and many satellites, e.g. HST, GALEX and GAIA, deliver slitless spectroscopic data. The software package aXe was built specifically for slitless spectroscopy. Its main extraction package aXe, originally designed for the grism and prism data of the Advanced Camera for Surveys on board Hubble, is supplemented by the visualization module aXe2html and the simulation software aXeSIM. In this contribution we present the aXe package and show how aXe contributes to observation planning, data reduction and data distribution. Contamination, the mutual overlap of object spectra, is a ubiquitous phenomenon in slitless spectroscopy that originates in the degeneracy of one spatial coordinate and the spectral coordinate. Rather than solving this degeneracy with inverse techniques, aXe uses the information from direct images to model the spectral contribution from contamination, thus providing an important tool for quality control.

Analysis of the CMB gravitational lensing effect with PLANCK: from sky to patches Alexis Lavabre (Laboratoire de l'Accélérateur Linéaire, France)

We focus on the study of the gravitational lensing of CMB data from the Planck satellite, which will provide high-resolution temperature maps of the full sky. The estimation of the deflection power spectrum over the whole sphere is not possible as soon as large regions (such as the galactic plane) are masked out. Therefore, using a

quadratic estimator, we perform the calculations on small patches of the sky, where the flat approximation holds. This allows us to use a 2D Fourier analysis. However, for a given pixelisation scheme (here HEALPix), the projected points are irregularly scattered, which involves a step of interpolation in order to use a classical FFT tool. We show that such an interpolation has a non-negligible effect on the estimator. Instead we propose an algorithm to compute the 2D Fourier coefficients directly from the irregularly gridded data.

Data Processing software for the SuperAGILE experiment Francesco Lazzarotto (IASF INAF Rome, Italy)

The SuperAGILE (SA) instrument is an X-ray detector for astrophysical measurements, part of the Italian AGILE satellite for X-ray and gamma-ray astronomy. Launched on 23 April 2007 from India, SuperAGILE is now surveying the sky in the 18-60 keV energy band, detecting sources with advanced imaging and timing capabilities as well as good spectral capabilities. We will show the software used for the on-ground data processing.

Reconstruction of extended objects from their speckle images Yan Li (Shanghai Astron. Observatory, Chinese Academy of Sciences, China)

As a post-processing method to overcome atmospheric turbulence effects, speckle imaging reconstruction has been shown to mitigate atmospheric resolution limits effectively, allowing near diffraction-limited images to be recovered. Because it is easy to implement, speckle imaging reconstruction has been widely used in observational research on binary stars, with good results. But few reconstructed images of extended objects obtained with these techniques have been published. In this paper we compare the results from various speckle imaging reconstruction methods and discuss how to choose an optimal method for the reconstruction of extended objects.

Circular Object Detection with the Vector Gradient Intersection Transform Henning Lorch (ESO, Germany)

In astronomical data processing, circular object characterisation is required to locate and describe, for instance, projections of pinhole masks or the ends of optical fibres. An automatic detection procedure must cope with major disturbances such as multiple objects in the image, noise and detector defects. We present in this poster an end-to-end detection process for circular objects in images, using the Vector Gradient Intersection Transform (VGIT). The VGIT combines the locations of intersections of gradient vectors into peak distributions at the centres of circles. This paper summarises the workflow, consisting of input preparation, transformation, peak detection, object modeling and fitting. Tuning possibilities, strengths and weaknesses are shown as well. The suitability for different types of input images is compared to that of the Hough Transform.
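The gradient-intersection idea can be sketched as a voting accumulator: each strong edge pixel casts votes along its gradient direction over a range of radii, and circle centres emerge as accumulator peaks. This is an illustrative Hough-style stand-in for the VGIT principle; the parameters (`rmin`, `rmax`, `thresh`) are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def gradient_vote_centres(img, rmin, rmax, thresh=0.1):
    """Accumulate votes where gradient vectors, cast over radii
    rmin..rmax, intersect; circle centres appear as accumulator peaks."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(img, dtype=float)
    ys, xs = np.nonzero(mag > thresh * mag.max())   # strong-gradient pixels
    for y, x in zip(ys, xs):
        ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
        for r in range(rmin, rmax + 1):
            for s in (1, -1):            # vote on both sides of the edge
                cy = int(round(y + s * r * uy))
                cx = int(round(x + s * r * ux))
                if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                    acc[cy, cx] += mag[y, x]
    return acc

# Synthetic ring of radius 10 centred at (32, 32).
yy, xx = np.mgrid[0:64, 0:64]
d = np.hypot(yy - 32, xx - 32)
ring = np.exp(-0.5 * (d - 10.0) ** 2)
acc = gradient_vote_centres(ring, 8, 12)
py, px = np.unravel_index(np.argmax(acc), acc.shape)
```

Radial gradient vectors of the ring all intersect at the centre, so the peak of `acc` lands on (or very near) the true centre even though no explicit circle model is fitted.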


Sparsity-constrained Image Reconstruction in Astronomical Interferometry David Mary (Univ. Nice Sophia Antipolis, France)

In astronomical (near-optical) interferometry, the data acquisition process delivers partial information about the Fourier transform of the observed astronomical image. The incompleteness of this information is twofold. First, the interferometer provides only an undersampled spectrum: spectral measures are not available on a regular grid, but rather appear as a random set of points in the 2D Fourier domain. This distribution corresponds to the autocorrelation function of the total optical aperture (the telescopes being limited in number). Second, for those points selected in the 2D spectrum, only the moduli are known (up to the noise): the phases can presently only be constrained by relationships involving sums of subsets of phases ("phase closures"). A full knowledge of all relative phases might only be reachable in a few years. In this underdetermined framework, the goal is to reconstruct an image of the object whose undersampled, noisy, partially phase-constrained Fourier spectrum is in agreement with the data. Here we take advantage of the sparsity property extensively discussed and exploited in recent decades, especially within the Compressed Sensing framework: like most physical signals, celestial objects of interest are naturally sparse in some basis. Objects that are not sparse directly in image space (i.e., not stars or star fields, but extended objects like galaxies, nebulae, etc.) are sparse in some other basis (DCT, wavelets, etc.). The method consists in seeking an image with minimal lp norm (the lp norm, p in (0, 1], being a sparsity measure) in some "sparsity" basis, subject to the constraint of being in good agreement with the data. The algorithm used in these simulations is a slightly modified version of an epsilon-regularized iteratively reweighted least squares algorithm, as recently proposed by Chartrand, and a variable stepsize is introduced to speed up convergence.
The results include a detailed study of the influence of noise and of imperfect knowledge of the phases, and a comparison of the computation time between the fixed- and variable-stepsize algorithms for various p. Simulations of realistic interferometric data for various astronomical sites show that the method could provide an efficient alternative to traditional reconstruction techniques when the phases are sufficiently constrained. In this interferometric framework, our results confirm the advantages already noted by Chartrand (for real data and Gaussian measurement matrices) of using p < 1: both the number of measurements required to recover the image and the computational burden are lowered.
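The epsilon-regularised IRLS scheme can be sketched for the simplest setting of an exact underdetermined data constraint A x = b, with sparsity sought directly on the coefficients. The epsilon schedule, p value and iteration count below are hypothetical; the authors' version additionally handles noise, phase constraints and a variable stepsize.

```python
import numpy as np

def irls_lp(A, b, p=0.8, eps0=1.0, n_iter=50):
    """Epsilon-regularised iteratively reweighted least squares
    (Chartrand-style sketch) for the underdetermined system A x = b,
    promoting a small lp 'norm' with p <= 1."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # minimum-l2 start
    eps = eps0
    for _ in range(n_iter):
        q = (x**2 + eps) ** (1.0 - p / 2.0)         # inverse IRLS weights
        AQ = A * q                                  # A @ diag(q)
        # Weighted minimum-norm solution satisfying A x = b exactly.
        x = q * (A.T @ np.linalg.solve(AQ @ A.T, b))
        eps = max(eps * 0.5, 1e-12)                 # shrink regularisation
    return x

# Recover a 2-sparse vector from 12 random measurements of dimension 24.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 24))
x0 = np.zeros(24)
x0[[3, 17]] = [2.0, -1.5]
b = A @ x0
x = irls_lp(A, b, p=0.8)
```

Each iteration solves a weighted least-squares problem whose solution satisfies the data constraint exactly; as epsilon shrinks, the weights increasingly penalise small coefficients, driving the iterate toward the sparse solution.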

The Rutgers/BCS Pipeline: automated analysis of ~50deg2 of optical griz imaging Felipe Menanteau (Rutgers University, USA)

We have designed and developed a complete and automated imaging pipeline to independently process and analyse the public data from the Blanco Cosmology Survey (BCS). The BCS is a large-area survey carried out in 45 nights (2005-2007) on the CTIO Blanco 4-m telescope in northern Chile, over two 50 square-degree patches of the southern sky in four bands (griz) that are being targeted by both Sunyaev-Zel'dovich (SZ) submillimeter experiments (ACT and SPT). We have used the pipeline to process and analyse the publicly available data from the first two years of the BCS project, covering close to 57 square degrees and 3 million galaxies to i < 23 mag. The pipeline is written in Python with a scalable object-oriented design based on existing public astronomical software. The standard image processing steps include: bias correction, overscan trim, CCD cross-talk correction, dome flats, super sky-flat creation and correction, fringe pattern removal, cosmic-ray rejection, saturated-star bleed-trail removal and bad pixel masks. Astrometric and photometric calibration, image combination, alignment and stacking, as well as detection, photometry and catalog generation, are also handled automatically by the pipeline. Photometric redshift estimates are computed on-the-fly from the g, r, i, z magnitudes using BPZ with a custom empirical prior. The multiband imaging, photometric redshift probabilities and sky positions are used to generate density maps to locate clusters of galaxies over the SZ-targeted area.
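An ordered chain of reduction steps of the kind listed above can be organised as a minimal step-registry in Python. This is a hypothetical toy design with stand-in steps, not the actual Rutgers/BCS code.

```python
import numpy as np

class Pipeline:
    """Toy step-chained imaging pipeline: steps run in registration order."""
    def __init__(self):
        self.steps = []

    def step(self, func):
        """Decorator registering a processing step."""
        self.steps.append(func)
        return func

    def run(self, image):
        for func in self.steps:
            image = func(image)
        return image

pipe = Pipeline()

@pipe.step
def subtract_overscan(img):
    # Toy overscan: last 8 columns hold the bias level to subtract.
    return img - img[:, -8:].mean()

@pipe.step
def flat_field(img):
    flat = np.ones_like(img)          # stand-in dome flat
    return img / flat

# A uniform frame at the overscan level reduces to zero after the chain.
out = pipe.run(np.full((4, 16), 10.0))
```

Real pipelines add per-step logging, provenance and configuration, but the control flow is essentially this: an ordered list of array-to-array transforms.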

An iterative thresholding algorithm for multichannel deconvolution Yassir Moudden (CEA/Saclay, France)

In preparation for the challenging task of analyzing the data soon available from ESA's Planck experiment, dedicated multichannel data processing methods are being actively developed. A major concern that has attracted much attention is the unmixing of the different astrophysical components contributing to the multichannel observations. Several blind source separation algorithms have been applied and tailored to account for the specific prior knowledge of the physics at play. Another issue, very often left out of the former problem thanks to simplifying assumptions, is multichannel deconvolution: the data collected in the different channels of practical instruments are not expected to be at the same resolution, since different channels have different point spread functions. Hence, a correct fusion of the multichannel observations for the purpose of estimating the different astrophysical component maps requires a simultaneous inversion of the spectral mixing (i.e., each component contributes to each channel according to its emission law) and of the spatial mixing (i.e., convolution by the instrument PSF in each channel). We focus here on the latter problem, assuming the emission laws of the various components as well as the PSFs in each channel are known a priori, and propose an algorithm for multichannel deconvolution. The devised method relies on the sparsity of the mixed components in a proper representation (e.g. Fourier, wavelets, curvelets): it seeks components which are maximally sparse in a given representation while still able to correctly reproduce the multichannel data. The proposed multichannel deconvolution method is an iterative thresholding algorithm with a progressively decreasing threshold. This leads to a salient-to-fine estimation process which has proven successful in related algorithms. Results of numerical experiments with simulated data will be presented. An important application of the proposed method would be the analysis of the Planck data.
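The salient-to-fine iteration can be sketched for a single channel, with hard thresholding applied directly in the coefficient domain. The step size and linear threshold schedule below are illustrative simplifications of the multichannel method.

```python
import numpy as np

def iterative_thresholding(H, y, n_iter=50, lam_max=None):
    """Iterative hard thresholding with a linearly decreasing threshold:
    the largest (most salient) coefficients are fixed first, and fainter
    structure is recovered as the threshold drops toward zero."""
    Ht = H.T
    step = 1.0 / np.linalg.norm(H, 2) ** 2          # gradient step size
    x = np.zeros(H.shape[1])
    if lam_max is None:
        lam_max = np.abs(Ht @ y).max()              # start above everything
    for t in range(n_iter):
        lam = lam_max * (1.0 - (t + 1) / n_iter)    # decreasing threshold
        x = x + step * (Ht @ (y - H @ x))           # gradient step on data fit
        x[np.abs(x) < lam] = 0.0                    # hard threshold
    return x

# Trivial operator (identity) and a sparse signal: the schedule first
# recovers the large coefficient, then the small one.
H = np.eye(20)
x0 = np.zeros(20)
x0[[2, 9]] = [3.0, -1.0]
x = iterative_thresholding(H, x0.copy())
```

With a convolution operator in place of `H`, the same loop performs sparse deconvolution; stacking several channel operators yields the multichannel variant described in the abstract.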

The Herschel Data Processing System --- an advanced framework for advanced signal processing algorithms Stephan Ott (ESA/ESTEC, Spain) on behalf of the Herschel Data Processing Development Team

The Herschel Space Observatory is the fourth cornerstone mission in the ESA science programme. It will perform photometry and spectroscopy in approximately the 57-670 micron range. It will have a radiatively cooled 3.5 m diameter telescope and a science payload complement of three instruments (HIFI, PACS and SPIRE) housed inside a superfluid helium cryostat. Herschel will be operated as an observatory facility offering at least three years of routine observations, with observing time available on a competitive basis to the whole astronomical community. Herschel is being implemented together with the Planck mission as a single programme sharing a common Ariane 5 launcher. The launch towards the operational orbit around L2 is planned for October 2008. The development of the Herschel Data Processing System started six years ago to support the data analysis for Instrument Level Tests. To fulfil the expectations of the astronomical community, additional resources were made available to implement a freely distributable, user-friendly Data Processing System capable of interactively and automatically reducing Herschel data at different processing levels. The Herschel Data Processing System is jointly developed by ESA, the PI teams and the NHSC. The software is coded in Java and Jython to be platform independent and to avoid the need for commercial licenses. Building on a wealth of low-level functionality, the Herschel Interactive Processing Environment (HIPE) is implemented as the user-friendly face of Herschel Data Processing. This platform is more data-centric than language-centric, providing non-Java-versant astronomers a state-of-the-art interface to process Herschel data. HIPE is rich in features: drag and drop, command-line echoing, and access to functionality by menu, toolbar, pop-up menu and keyboard shortcuts. It permits looking at the same data in different ways at the same time, and at similar data the same way at the same time. It features a user-controllable layout, permitting users to organise views as they wish, add and remove them, and dock and undock them. We will summarise the current state of the Herschel Data Processing System and show some of the tools that were used for the data analysis of the Instrument Level Tests, as well as some higher-level interactive tools.

Mid-infrared data reduction and analysis Eric Pantin (CEA/Saclay, France)

VISIR is the VLT mid-infrared imager and spectrometer. Since 2004 it has delivered mid-infrared data at the diffraction limit of the VLT telescope. We present in this paper the data reduction methods developed in the framework of building the VISIR pipeline, in order to efficiently correct the data for the instrument signatures and to calibrate them accurately. In particular, original methods to correct striping effects in images and excess noise at medium/low spatial frequencies are shown. An original method to reconstruct an image from the multiple beams is also presented.
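As a point of reference for the striping correction mentioned above, a common baseline approach subtracts a robust per-row offset; this generic sketch is not necessarily the original VISIR method.

```python
import numpy as np

def destripe_rows(img):
    """Baseline horizontal destriping: subtract the median of each
    detector row, removing row-correlated offsets while leaving
    compact sources (which barely move the median) mostly intact."""
    return img - np.median(img, axis=1, keepdims=True)

# Rows with distinct additive offsets are flattened to zero.
striped = np.outer(np.array([1.0, 2.0, 3.0]), np.ones(5)) + 0.5
flat = destripe_rows(striped)
```

More elaborate schemes fit and subtract the stripe pattern in a transform domain to avoid biasing extended emission, which is the regime where original methods such as VISIR's are needed.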

Correlated anisotropies in the Cosmic Infrared Background: new insight into structure evolution Aurélie Penin (Institut d’Astrophysique Spatiale, France)

Cosmic Infrared Background anisotropies probe the physics of galaxy clustering, with the bulk of the signal contribution originating from sources below the point-source confusion limit in the far-IR. On small angular scales, anisotropies measure the non-linear clustering within a dark matter halo, and the physics governing how IR galaxies form within a halo. On large angular scales, background anisotropies measure the linear clustering bias of IR galaxies, thus probing the dark matter halo mass scale. We will present in the poster the promise of detecting such anisotropies with Planck and Herschel, and a first attempt to separate the different redshift contributions to the anisotropies using standard component separation methods.


Weak Lensing data analysis: Mask interpolation using Inpainting Sandrine Pires (CEA/Saclay, France)

With increasingly large data sets, weak lensing measurements are able to measure cosmological parameters with ever greater precision. However, this increased accuracy also places greater demands on the statistical tools used to extract the available information. At present, the majority of lensing analyses use the two-point statistics of the cosmic shear field. These can be studied either directly, using the two-point correlation function, or in Fourier space, using the power spectrum. Direct measurement of the correlation function, through pair counting, is widely used since this method is not biased by missing data, for instance the masking of bright stars. However, it is computationally intensive, requiring O(N^2) operations, and is therefore not feasible for future large lensing surveys. Measuring the power spectrum is significantly less demanding computationally, requiring O(N log N) operations, but is strongly affected by missing data. In order to lower the impact of the gaps on the power spectrum measurement, we propose to use an inpainting algorithm based on a sparse representation of the data to fill in and interpolate across the masked regions. This requires approximately O(N log N) operations, which will enable us to keep up with the expected increase in survey area.
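The gap-filling idea can be sketched as iterative thresholding in a transform domain where the map is assumed sparse. The toy version below uses the Fourier domain and a linearly decreasing hard threshold; the parameters are illustrative and the authors' inpainting relies on sparse representations better suited to lensing maps.

```python
import numpy as np

def inpaint_fft(y, mask, n_iter=200):
    """Sparsity-based inpainting sketch: keep the observed pixels, fill
    the masked ones with the inverse transform of thresholded Fourier
    coefficients, with the threshold decreasing toward zero."""
    x = y * mask                                   # gaps start at zero
    lam_max = np.abs(np.fft.fft2(x)).max()
    for t in range(n_iter):
        lam = lam_max * (1.0 - (t + 1) / n_iter)   # decreasing threshold
        c = np.fft.fft2(y * mask + x * (1 - mask)) # enforce observed data
        c[np.abs(c) < lam] = 0.0                   # keep salient modes
        x = np.fft.ifft2(c).real
    return y * mask + x * (1 - mask)

# A map made of a single Fourier mode, with a small masked block.
n = 32
col = np.cos(2 * np.pi * 3 * np.arange(n) / n)
y = np.ones((n, 1)) * col[None, :]
mask = np.ones((n, n))
mask[14:18, 14:18] = 0.0                           # masked region
rec = inpaint_fft(y, mask)
```

Because the map is sparse in Fourier space, the few dominant modes survive the early, aggressive thresholding and extrapolate smoothly into the masked block; each FFT costs O(N log N), consistent with the scaling quoted above.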

Photometric Redshifts with Locally Weighted Regression Laerte Sodre (Univ. de Sao Paulo, Brazil) and Walter A. dos Santos

We have implemented a photometric redshift estimator using a Locally Weighted Regression (LWR) algorithm. This empirical method allows, like neural networks, a local mapping between photometric quantities (magnitudes, colors, and even other parameters like galaxy diameter or light concentration) and redshifts, using a training set containing objects with known spectroscopic redshifts. The algorithm has been tested with a sample of 10,000 galaxies from the Sloan Digital Sky Survey (SDSS). The algorithm is very competitive: the rms deviation of the photometric redshifts is 0.021 when the 5 SDSS bands are used, and about 3% of the results have catastrophic errors. We have also successfully used the LWR algorithm to estimate equivalent widths of emission lines from broad-band photometry, with the aim of speeding up future galaxy redshift surveys targeting emission-line objects.
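A minimal locally weighted regression predictor can be sketched as follows: for each query galaxy, training objects are weighted by a Gaussian kernel on their distance in magnitude space and a weighted linear fit is evaluated at the query point. The kernel width `tau` is a hypothetical parameter, and this is a sketch of the generic LWR idea, not the authors' estimator.

```python
import numpy as np

def lwr_predict(X_train, z_train, x_query, tau=0.5):
    """Locally weighted linear regression: fit a weighted least-squares
    plane around the query point in magnitude space and read off the
    predicted redshift there."""
    d2 = ((X_train - x_query) ** 2).sum(axis=1)
    sw = np.sqrt(np.exp(-d2 / (2.0 * tau**2)))     # sqrt of Gaussian weights
    Xa = np.hstack([X_train, np.ones((len(X_train), 1))])  # intercept column
    beta, *_ = np.linalg.lstsq(Xa * sw[:, None], z_train * sw, rcond=None)
    return float(np.append(x_query, 1.0) @ beta)

# Training set with an exactly linear magnitude-redshift relation.
rng = np.random.default_rng(2)
X = rng.uniform(18, 24, size=(50, 2))
z = 0.1 * X[:, 0] + 0.2 * X[:, 1] + 0.05
pred = lwr_predict(X, z, np.array([20.0, 21.0]), tau=1.0)
```

Unlike a single global fit, the kernel weighting lets the local fit track curvature in the colour-redshift relation; with more features (colours, concentration) only the design matrix grows.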

Multiresolution deglitching algorithm for Herschel/PACS data Vavrek Roland (ESA, Spain)

We present a deglitching routine for PACS data that is based on the work of Starck and Murtagh (1998) for ISOCAM. The algorithm uses a Multiresolution Median Transform (MMT) to decompose the signal. The transform consists mainly of a simple median filter applied to the time series of every detector pixel readout. This algorithm separates spikes from the useful part of the signal in multiresolution space. Adaptive thresholding based on the noise distribution of the MMT coefficients allows the efficient reconstruction of a cleaned, glitch-free signal. Applications to test measurements in which the PACS photometer arrays were exposed to alpha irradiation show very promising results. We also present preliminary investigations focusing on improved noise characterization of the bolometer and Ge:Ga photoconductor signals. Further work will be done to tune the algorithm to the specific case of modulated signals resulting from chopped measurements.
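The MMT decompose-threshold-reconstruct cycle can be sketched for a 1D readout time series; the number of scales and the k-sigma factor below are illustrative, not the tuned PACS values.

```python
import numpy as np

def median_filter_1d(x, size):
    """Running median with edge padding (odd window size)."""
    half = size // 2
    padded = np.pad(x, half, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, size)
    return np.median(windows, axis=1)

def mmt_deglitch(signal, n_scales=3, k=5.0):
    """Sketch of MMT-based deglitching (after Starck & Murtagh 1998):
    decompose with median filters of growing window, zero detail
    coefficients above k * sigma (sigma from the MAD), reconstruct."""
    smooth = signal.astype(float)
    details = []
    for j in range(n_scales):
        next_smooth = median_filter_1d(smooth, 2 * 2**j + 1)
        details.append(smooth - next_smooth)       # detail at scale j
        smooth = next_smooth
    clean = smooth.copy()
    for d in details:
        sigma = 1.4826 * np.median(np.abs(d - np.median(d)))  # MAD noise
        clean += np.where(np.abs(d) > k * sigma, 0.0, d)      # drop glitches
    return clean

# A slow sine with one large spike: the spike lives in the fine detail
# scales and is removed, while the underlying signal is preserved.
t = np.linspace(0.0, 1.0, 200)
s_true = np.sin(2 * np.pi * t)
readout = s_true.copy()
readout[100] += 10.0
clean = mmt_deglitch(readout)
```

The median (rather than linear) smoothing is what keeps glitch energy confined to the detail scales instead of smearing it into the reconstruction.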


A q-test based glitch detection algorithm Pierre Royer (K.U. Leuven, Belgium)

The Herschel-PACS spectrometer consists of two arrays of 16 x 25 Ge:Ga detectors. In order to cover the whole spectral band from 57 to 200 micron, the short- and long-wavelength arrays are put under very different mechanical stress. Irradiation of sample detectors has shown very different reactions of the low- and high-stress detectors to particle hits. Deglitching methods applied to data from previous missions involving Ge:Ga detectors mostly rely on a multiresolution decomposition of the signal. We present here a new glitch detection method which does not belong to that classical framework, since it is based on a set of locally applied statistical q-tests. We then show how we tuned it to the PACS spectrometer data, along with sample results.
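The locally applied Q-test idea can be sketched as follows, using Dixon's ratio of the gap to the range within a sliding window. The window size and critical value are illustrative stand-ins, not the tuned PACS parameters.

```python
import numpy as np

def q_test_glitches(signal, window=7, q_crit=0.57):
    """Flag glitches with a locally applied Dixon Q test: within each
    window, Q = (gap to nearest neighbour) / (window range) for the most
    extreme sample; the centre sample is flagged if it is that extreme
    sample and Q exceeds the critical value."""
    flags = np.zeros(len(signal), dtype=bool)
    half = window // 2
    for i in range(half, len(signal) - half):
        w = np.sort(signal[i - half:i + half + 1])
        rng = w[-1] - w[0]
        if rng == 0:
            continue                       # flat window: nothing to test
        q_high = (w[-1] - w[-2]) / rng     # suspect high outlier
        q_low = (w[1] - w[0]) / rng        # suspect low outlier
        s = signal[i]
        if (s == w[-1] and q_high > q_crit) or (s == w[0] and q_low > q_crit):
            flags[i] = True
    return flags

# A slow ramp with one spike: only the spike sample is flagged.
sig = np.linspace(0.0, 1.0, 50)
sig[25] += 5.0
flags = q_test_glitches(sig)
```

Because Q is a ratio of local differences, the test needs no global noise model, which is what distinguishes it from the multiresolution k-sigma approaches mentioned above.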
