Otto-von-Guericke-Universität Magdeburg

Abstracts
Magdeburger Stochastik-Tage 2002, 19.–22. März 2002
German Open Conference on Probability and Statistics, March 19 to 22, 2002

Contents

Abstracts of the Talks
  Plenary Lectures
  Prize Winning Lecture: Prize of the Fachgruppe Stochastik
  1 Asymptotic Statistics, Nonparametrics and Resampling
  2 Computer Intensive Methods and Stochastic Algorithms
  3 Limit Theorems, Large Deviations and Statistics of Extremes
  4 Quality Control, Reliability Theory and Survival Analysis
  5 Stochastic Analysis
  6 Spatial Statistics, Stochastic Geometry and Image Processing
  7 Stochastic Methods in Biometry, Genetics and Bioinformatics
  8 Stochastic Models in Biology and Physics
  9 Stochastic Methods in Optimization and Operations Research
  10 Stochastic Processes, Time Series and their Statistics
  11 Generalized Linear Models and Multivariate Statistics
  12 Insurance and Finance
  13 Open Section
Teachers’ Day
List of Authors

Abstracts of the Talks

Plenary Lectures

Robert Ineichen (Université de Fribourg)
Würfel, Zufall und Wahrscheinlichkeit — ein Blick auf die Vorgeschichte der Stochastik
(Dice, chance and probability: a look at the prehistory of stochastics)

The roots of probability theory are usually attributed to the 17th century. One may ask, however, whether some notions related to stochastics were not developed earlier. Our lecture is a tentative answer to this question. It intends to cast some light on the notions of probability in antiquity, in the Middle Ages and in early modern times, on the evaluation of chances by counting the number of favorable cases, and on notions of statistical regularity.

Contents: Introduction — Games of chance with astragali (heel bones of hooved animals), dice and coins — Contingency and probability in antiquity; epistemic probabilities and aleatory probabilities — First steps toward quantification; “favorable” cases and “unfavorable” cases — Christiaan Huygens and Jakob Bernoulli.

P. R. Kumar (University of Illinois)
Spatial Communication Networks

Over the past two decades the world has seen the proliferation of wired networks. In the coming years some anticipate that we will see a rapid growth of wireless networks. These can be thought of as networks of computers connected by radios; they can be used even when users are mobile or connect only sporadically.

Several mathematical questions arise in the analysis and design of wireless networks. How much traffic can they carry? What are the scaling laws as the number of nodes in the network increases? How should nodes choose their range so that the network is adequately connected? How should the network be operated?

At one level, a wireless network can be regarded as a random graph created by randomly located nodes in a domain, with edges connecting every node to the other nodes within its transmission range. At another level, one wishes to develop an information theory for wireless networks, à la the work of Shannon for point-to-point communication.

We will present an overview of results for wireless networks that involve ideas from graph theory, geometry, probability theory, percolation theory, and information theory.
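The random-graph view of a wireless network described in the preceding abstract is easy to experiment with. The following Python sketch (an illustration added here, not part of the lecture) scatters n nodes uniformly in the unit square, joins any two nodes within transmission range r, and estimates by Monte Carlo the probability that the resulting geometric graph is connected. The range scaling of order sqrt(log(n) / (pi * n)) and the constants used below are assumptions chosen only to illustrate the range-versus-connectivity trade-off raised in the abstract.

```python
# Illustrative simulation of the random geometric graph model sketched above:
# n nodes uniform in the unit square, an edge whenever two nodes are within
# transmission range r.  All constants below are arbitrary choices.
import numpy as np


def is_connected(points: np.ndarray, r: float) -> bool:
    """Depth-first search over the geometric graph with connection radius r."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adj = dist <= r
    seen = np.zeros(n, dtype=bool)
    seen[0] = True
    stack = [0]
    while stack:
        v = stack.pop()
        for u in np.nonzero(adj[v] & ~seen)[0]:
            seen[u] = True
            stack.append(u)
    return bool(seen.all())


def connectivity_probability(n: int, r: float, trials: int = 200, seed: int = 0) -> float:
    """Monte Carlo estimate of the probability that the network is connected."""
    rng = np.random.default_rng(seed)
    return sum(is_connected(rng.random((n, 2)), r) for _ in range(trials)) / trials


if __name__ == "__main__":
    n = 100
    # Ranges of order sqrt(log(n) / (pi * n)); varying the constant c shows
    # the transition from a mostly disconnected to a mostly connected network.
    for c in (0.5, 1.0, 2.0):
        r = np.sqrt(c * np.log(n) / (np.pi * n))
        print(f"c = {c:.1f}, r = {r:.3f}, estimated P(connected) = "
              f"{connectivity_probability(n, r):.2f}")
```

Increasing the range can only increase the connectivity probability; how the range must shrink or grow with the number of nodes so that the network remains adequately connected is exactly the kind of scaling question the lecture addresses.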
Terry Speed (University of California, Berkeley)
Finding spatial patterns in gene expression: the design and analysis of a cDNA microarray experiment

In this talk I will describe joint work with my student Yee Hwa Yang and my biology colleague at Berkeley, John Ngai. John’s aim was to find genes with interesting spatial patterns of expression across the mouse olfactory bulb using cDNA microarray experiments. The bulb was dissected into several pieces, and the expression of many thousands of genes was compared across these pieces. We then tried to identify genes with interesting expression patterns. The approach had some success, though how much we helped was not obvious.

I will begin by giving a brief outline of cDNA microarrays, and then turn to the design questions this study raised. After that, I’ll describe our analysis and some of the problems that arose.

Prize Winning Lecture
Prize of the Fachgruppe Stochastik

Peter Ruckdeschel (Universität Bayreuth)
Robust Recursive Kalman-Filtering

We consider robust recursive filtering in the case of a linear, finite-dimensional and time-discrete state-space model with Euclidean state space.

Insisting on recursivity for computational reasons, we arrive at a new procedure, the rLS-filter, which uses a Huberized correction step in the Kalman-filter recursions. Simulation results for ideal and contaminated data indicate that this procedure achieves robustness with respect to AO-contamination while still behaving well in the ideal model compared to the classically optimal procedure, the Kalman-filter.

To study the properties of this procedure theoretically, we consider the state-space model in innovation form. In this reduced setup, it is possible to derive optimal robust filters under SO-contamination, both in a “Lemma 5” approach (cf. [3]) and in a minimax approach, the latter generalizing a result of [1]. As in the location case, both solutions coincide and yield the rLS-filter, provided all inputs from the past are Gaussian. However, once treated by the rLS-filter, normality of the past is actually lost. But if the SO-contamination neighborhood is extended a little, the minimax and “Lemma 5” solutions of the original SO-neighborhood remain valid, and we are able to show numerically that the process of filters/predictions generated by the rLS-filter stays in this extended [e]SO-neighborhood about a fictive Gaussian ideal process, which we base on the second moments of the classical Kalman-filter.

We thus obtain the first robust optimality result for recursive procedures referring to distributional neighborhoods about the ideal state-space model.

References
[1] Birmiwal, K. and Shen, J. (1993). Optimal robust filtering. Stat. Decis. 11(2), 101–119.
[2] Fox, A. J. (1972). Outliers in time series. J. R. Stat. Soc., Ser. B 34, 350–363.
[3] Hampel, F. R. (1968). Contributions to the theory of robust estimation. PhD thesis, University of California, Berkeley, CA.
[4] Huber, P. J. (1981). Robust Statistics. Wiley & Sons, New York.
[5] Rieder, H. (1994). Robust Asymptotic Statistics. Springer, New York.
[6] Ruckdeschel, P. (2001). Ansätze zur Robustifizierung des Kalman-Filters. PhD thesis, Bayreuther Mathematische Schriften, Bayreuth.
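As a companion to the abstract above, the following Python sketch shows what a Huberized correction step could look like inside a Kalman-type recursion. It is a hedged illustration, not the rLS-filter itself: the clipping of the correction term, the function names, and the choice of the bound b are assumptions made here for concreteness and are not taken from the talk or from [6].

```python
# Sketch of a Kalman-type predict/correct cycle with a Huberized (clipped)
# correction term, in the spirit of the robust recursive filtering described
# above.  Matrices F, Q, H, R and the clipping bound b are placeholders.
import numpy as np


def huberize(delta: np.ndarray, b: float) -> np.ndarray:
    """Project the correction vector onto the ball of radius b (Huber-type weight)."""
    norm = np.linalg.norm(delta)
    return delta if norm <= b else delta * (b / norm)


def robust_filter_step(x, P, y, F, Q, H, R, b):
    """One recursion step; only the correction term is bounded."""
    # Prediction step, identical to the classical Kalman filter.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Classical gain and raw correction.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    delta = K @ (y - H @ x_pred)
    # Huberized correction: an outlying observation (AO) has bounded influence.
    x_new = x_pred + huberize(delta, b)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```

With a large bound b the clipping is essentially never active and the step reduces to the classical Kalman filter; a small b caps the influence of a single outlying observation at the price of some efficiency in the ideal Gaussian model. How such a bound should be calibrated, and in what sense such a procedure is optimal over contamination neighborhoods, is the subject of the abstract.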
Sec. 1. Asymptotic Statistics, Nonparametrics and Resampling
Organizer: Axel Munk (Paderborn)

Invited Lecture

Sara van de Geer (University of Leiden)
Adaptive regression function estimation under nonstandard conditions

Consider a random variable $X \in \mathcal{X}$ and the parameter
$$\theta_0 = \arg\min_{\theta \in \Lambda} E\,\gamma_\theta(X),$$
where $(\Lambda, \|\cdot\|)$ is a given (subset of a) normed vector space and, for each $\theta \in \Lambda$, the function $\gamma_\theta \colon \mathcal{X} \to \mathbb{R}$ is a given loss function. We observe independent copies $X_1, \ldots, X_n$ of $X$ and use the penalized M-estimator
$$\hat\theta_n = \arg\min_{\theta \in \Lambda} \Bigl( \frac{1}{n} \sum_{i=1}^n \gamma_\theta(X_i) + \mathrm{pen}(\theta) \Bigr).$$
For suitable penalties $\mathrm{pen}(\theta)$ and under appropriate regularity conditions, the estimator $\hat\theta_n$ adapts to the unknown smoothness of $\theta_0$. We will briefly explain what we mean by smoothness and adaptation.

Our main topic will be a relaxation of a commonly used regularity condition. Suppose that for some $\kappa \ge 2$ and some constant $C$ we have
$$E\bigl[\gamma_\theta(X) - \gamma_{\theta_0}(X)\bigr] \ge C\,\|\theta - \theta_0\|^{\kappa}.$$
The usual assumption $\kappa = 2$ is motivated by the idea of a two-term Taylor expansion at $\theta = \theta_0$ (the first term being zero because $\theta_0$ minimizes $E\gamma_\theta(X)$). More generally, the parameter $\kappa$ can be thought of as an identifiability parameter, large values implying that $\theta_0$ is hard to identify. Inspired by problems in classification theory, we refer to $\kappa$ as the margin parameter.

The case $\kappa > 2$ comes up naturally in classification problems. As a simpler example, consider the (nonparametric) regression model and least absolute deviations estimation: when the density of the measurement error vanishes at its median, then (typically) $\kappa > 2$.

We will show that for a large class of estimation methods in regression, one may choose a soft-thresholding type penalty $\mathrm{pen}(\theta)$ to arrive at an estimator $\hat\theta_n$ which adapts (up to logarithmic factors) to the unknown smoothness as well as to the margin parameter $\kappa$.

Contributed Lectures (in alphabetic order)

Wolfgang Bischoff (Universität Karlsruhe)
Asymptotic Regression Models

We establish the localized asymptotic models of parametric and nonparametric regression models with respect to Le Cam's notion of weak convergence of statistical experiments (LAN) and with respect to the partial sums process. We can compare these asymptotic models if the LAN model is a Gaussian process. These results and models will be applied to practical problems of interest such as testing a linear hypothesis, testing for change-points (a non-linear problem), and model checking (testing for constant variance). It is worth mentioning that for some of these problems we need regression models with correlated observations.

Boris Buchmann, Rudolf Grübel (Technische Universität München, Universität Hannover)
Decompounding: An estimation problem for Poisson random sums

Given observations of a compound Poisson process on a fixed unit time grid, we consider the problem of nonparametric estimation of the rate $\lambda > 0$ and the claim distribution $P$, where $P$ is some probability distribution on the positive reals. In the first part we discuss plug-in estimators