Otto-von-Guericke-Universität Magdeburg

Abstracts

Magdeburger Stochastik-Tage 2002

19.–22. März 2002

German Open Conference on Probability and Statistics

March 19 to 22, 2002

Contents

Abstracts of the Talks  3
Plenary Lectures  5
Prize Winning Lecture: Prize of the Fachgruppe Stochastik  7
1 Asymptotic Statistics, Nonparametrics and Resampling  9
2 Computer Intensive Methods and Stochastic Algorithms  27
3 Limit Theorems, Large Deviations and Statistics of Extremes  35
4 Quality Control, Reliability Theory and Survival Analysis  45
5 Stochastic Analysis  57
6 Spatial Statistics, Stochastic Geometry and Image Processing  71
7 Stochastic Methods in Biometry, Genetics and Bioinformatics  85
8 Stochastic Models in Biology and Physics  95
9 Stochastic Methods in Optimization and Operations Research  105
10 Stochastic Processes, Series and their Statistics  121
11 Generalized Linear Models and Multivariate Statistics  133
12 Insurance and Finance  139
13 Open Section  147
Teachers’  153

List of Authors 155

Abstracts of the Talks


Plenary Lectures

Robert Ineichen (Université de Fribourg) Würfel, Zufall und Wahrscheinlichkeit — ein Blick auf die Vorgeschichte der Stochastik (Dice, chance and probability — a look at the prehistory of stochastics)

The roots of probability theory are usually attributed to the 17th century. However, one can wonder whether some notions related to stochastics were not developed before. Our lecture is a tentative answer to this question. It intends to cast some light on the notions of probability in Antiquity, in the Middle Ages and in the early modern period, on the evaluation of chances by counting the number of favorable cases, and on the notions of statistical regularity. Contents: Introduction — Games of chance with astragali (heel bones of hooved animals), dice, coins — Contingency and probability in Antiquity; epistemic probabilities and aleatory probabilities — First steps toward quantification; “favorable” cases and “unfavorable” cases — and Jakob Bernoulli.

P. R. Kumar (University of Illinois) Spatial Communication Networks

Over the past two decades the world has seen the proliferation of wired networks. In the coming years some anticipate that we will see the rapid growth of wireless networks. These can be thought of as networks of computers connected by radios. They can be used even when users are mobile or sporadic. Several mathematical questions arise in the analysis and design of wireless networks. How much traffic can they carry? What are the scaling laws as the number of nodes in the network increases? How should nodes choose their range so that the network is adequately connected? How should the network be operated? At one level, a wireless network can be regarded as a random graph created by randomly located nodes in a domain, with edges connecting every node to other nodes within its transmission range. At another level, one wishes to develop an information theory for wireless networks, à la the work of Shannon for point-to-point communication. We will give an overview of results for wireless networks that involve ideas from graph theory, geometry, probability theory, percolation theory, and information theory.

Terry Speed (University of California, Berkeley) Finding spatial patterns in gene expression: the design and analysis of a cDNA microarray experiment

In this talk I will describe joint work with my student Yee Hwa Yang and my biology colleague at Berkeley, John Ngai. John’s aim was to find genes with interesting spatial patterns of expression across the mouse olfactory bulb using cDNA microarray experiments. The bulb was dissected into several pieces and the expression of many thousands of genes was compared across these pieces. We then tried to identify genes with interesting expression patterns. The approach has some success, though how much we helped was not obvious. I will begin by giving a brief outline of cDNA microarrays, and then turn to the design questions this study raised. After that, I’ll describe our analysis and some of the problems that arose.

Prize Winning Lecture: Prize of the Fachgruppe Stochastik

Peter Ruckdeschel (Universität Bayreuth) Robust Recursive Kalman-Filtering

We consider robust recursive filtering in the case of a linear, finite-dimensional and time-discrete state-space model with Euclidean state space. Insisting on recursivity for computational reasons, we come up with a new procedure, the rLS-filter, using a Huberized correction-step in the Kalman-filter recursions. Simulation results for ideal and contaminated data indicate that this procedure achieves robustness with respect to AO-contamination, still behaving well in the ideal model compared to the classically optimal procedure, the Kalman-filter. To attack the properties of this procedure theoretically, we consider the state-space model in innovation form. In this reduced setup, it is possible to derive optimal robust filters under SO-contamination—both in a “Lemma 5” approach—cf. [3]—and in a minimax approach, the latter generalizing a result of [1]. As in the location case, both solutions coincide and yield the rLS-filter, provided all inputs from the past are Gaussian. However, treated by the rLS-filter, normality of the past is actually lost. But, extending the SO-contamination neighborhood a little, the minimax and “Lemma 5” solution of the original SO-neighborhood remain valid, and we are able to show [numerically] that the process of filters/predictions generated by the rLS-filter stays in this extended [e]SO-neighborhood about some fictive Gaussian ideal process, which we base on the moments of the classical Kalman-filter. We thus obtain the first robust optimality result for recursive procedures referring to distributional neighborhoods about the ideal state-space model.
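To make the Huberized correction step concrete, here is a minimal numerical sketch in Python; the function names, the choice of the clipping bound b and the covariance update are illustrative assumptions and not the authors' rLS implementation, which in particular calibrates the clipping constant.

    import numpy as np

    def huber_clip(v, b):
        # Huber-type clipping: leave v unchanged if its norm is at most b,
        # otherwise shrink it back onto the ball of radius b
        nrm = np.linalg.norm(v)
        return v if nrm <= b else v * (b / nrm)

    def robust_correction_step(x_pred, P_pred, y, H, R, b):
        # one correction step of a Kalman-type filter with a Huberized update
        S = H @ P_pred @ H.T + R                    # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)         # classical Kalman gain
        x_filt = x_pred + huber_clip(K @ (y - H @ x_pred), b)
        P_filt = (np.eye(len(x_pred)) - K @ H) @ P_pred
        return x_filt, P_filt

Classically the correction K(y − H x_pred) is added without clipping; bounding its norm is what limits the influence of AO-type outliers on the filtered state.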

References

[1] Birmiwal, K. and Shen, J. (1993). Optimal robust filtering. Stat. Decis. 11(2), 101–119.

[2] Fox, A. J. (1972). Outliers in time series. J. R. Stat. Soc., Ser. B 34, 350–363.

[3] Hampel, F. R. (1968). Contributions to the theory of robust estimation. PhD Thesis, University of California, Berkeley, CA.

[4] Huber, P. J. (1981). Robust Statistics. Wiley & Sons, New York.

[5] Rieder, H. (1994). Robust Asymptotic Statistics. Springer, New York.

[6] Ruckdeschel, P. (2001). Ansätze zur Robustifizierung des Kalman-Filters. PhD Thesis, Bayreuther Mathematische Schriften, Bayreuth.

Sec. 1. Asymptotic Statistics, Nonparametrics and Resampling

Organizer: Axel Munk (Paderborn)

Invited Lecture

Sara van de Geer (University of Leiden) Adaptive regression function estimation under nonstandard conditions

Consider a random variable X ∈ 𝒳 and the parameter

    θ_0 = arg min_{θ∈Λ} E γ_θ(X) ,

where (Λ, ‖·‖) is a given (subset of a) normed vector space and where, for each θ ∈ Λ, the function γ_θ : 𝒳 → R is a given loss function. We observe independent copies X_1, ..., X_n of X and use the penalized M-estimator

    θ̂_n = arg min_{θ∈Λ} { (1/n) Σ_{i=1}^n γ_θ(X_i) + pen(θ) } .

For suitable penalties pen(θ) and under appropriate regularity conditions, the estimator θ̂_n adapts to the unknown smoothness of θ_0. We will briefly explain what we mean by smoothness and adaptation. Our main topic will be a relaxation of a commonly used regularity condition. Suppose that for some κ ≥ 2 and constant C, we have

    E[γ_θ(X) − γ_{θ_0}(X)] ≥ C ‖θ − θ_0‖^κ .

The usual assumption κ = 2 is motivated by the idea of a two-term Taylor expansion at θ = θ_0 (the first term being zero because θ_0 minimizes E γ_θ(X)). More generally, the parameter κ can be thought of as an identifiability parameter, large values implying that θ_0 is hard to identify. Inspired by problems in classification theory, we refer to κ as the margin parameter. The case κ > 2 comes up naturally in classification problems. But as a simpler example, let us mention here the (nonparametric) regression model and least absolute deviations estimation. When the density of the measurement error vanishes at its median, then (typically) κ > 2. We will show that for a large class of estimation methods in regression, one may choose a soft-thresholding type penalty pen(θ) to arrive at an estimator θ̂_n which adapts (up to logarithmic factors) to the unknown smoothness as well as the margin parameter κ.

Contributed Lectures (in alphabetic order)

Wolfgang Bischoff (Universität Karlsruhe) Asymptotic Regression Models

We establish the localized asymptotic models of parametric and nonparametric regression models with respect to LeCam’s notion of weak convergence of statistical experiments (LAN) and with respect to the partial sums process. We can compare these asymptotic models if the LAN model is a Gaussian process. These results and models will be applied to interesting practical problems such as testing linear hypotheses, testing for change-points (a non-linear problem) or model checking (testing constant variance). It is worth mentioning that for some problems we need regression models with correlated observations.

Boris Buchmann, Rudolf Grübel (Technische Universität München, Universität Hannover) Decompounding: An estimation problem for Poisson random sums

Given observations from a compound Poisson process at a fixed unit time grid, we consider the problem of nonparametric estimation of the rate λ > 0 and the claim distribution P, where P is some probability distribution on the positive reals. In the first part we discuss plug-in estimators for λ and P and derive strong consistency and asymptotic normality by local analysis in a scaled Banach function space (similar to Grübel/Pitts (1993), Ann. Stat. 21, Politis/Pitts (2000), Ann. Stat. 28). Typically, the plug-in principle leads to estimated values “outside the natural parameter space”. This can be avoided by a nonparametric maximum likelihood approach. Some simulation studies are provided.
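As a minimal illustration of the plug-in idea for the rate λ (not the estimators analyzed in the talk; the exponential claim sizes and all names below are assumptions made only for the simulation), one can use that a unit-time increment of a compound Poisson process with claims on the positive reals is zero exactly when no claim occurs, so P(increment = 0) = e^(−λ).

    import numpy as np

    rng = np.random.default_rng(0)
    lam_true, n = 0.7, 5000

    # simulate unit-time increments of a compound Poisson process
    counts = rng.poisson(lam_true, size=n)
    increments = np.array([rng.exponential(2.0, size=k).sum() for k in counts])

    # plug-in estimator of the rate from the fraction of zero increments
    lam_hat = -np.log(np.mean(increments == 0))
    print(lam_hat)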

Claudia Czado (Technische Universität München) Bootstrap Methods for the Nonparametric Assessment of Population Bioequivalence and Similarity of Distributions

A completely nonparametric approach to population bioequivalence in crossover trials will be presented. It is based on the Mallows (1972) metric as a nonparametric distance measure which allows the comparison between the entire distribution functions of test and reference formulations. We show that a separation between carry-over and period effects is not possible in the nonparametric setting. However, when carry-over effects can be excluded, treatment effects can be assessed whether or not period effects are present. We prove bootstrap limit laws of the corresponding test statistics because estimation of the limiting variance of the test statistic is very cumbersome. The small sample behavior of various bootstrap methods is investigated using simulation. The percentile (PC) and bias corrected and accelerated (BCA) bootstrap were compared for multivariate normal and nonnormal populations. From the simulation results presented, the BCA bootstrap is found to be less conservative and provides higher power compared to the PC bootstrap, especially when skewed multivariate populations are present.
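The two basic ingredients, the Mallows distance between two univariate samples and a percentile (PC) bootstrap interval for it, can be sketched as follows; the equal sample sizes and all names are illustrative assumptions, and the BCA variant as well as the crossover-trial structure are omitted.

    import numpy as np

    def mallows_distance(x, y):
        # L2 Mallows (Wasserstein) distance between two empirical distributions
        # of equal sample size, via the quantile coupling of the sorted samples
        x, y = np.sort(x), np.sort(y)
        return np.sqrt(np.mean((x - y) ** 2))

    def percentile_bootstrap_ci(x, y, b=2000, alpha=0.05, rng=None):
        # percentile (PC) bootstrap confidence interval for the Mallows distance
        rng = np.random.default_rng() if rng is None else rng
        stats = []
        for _ in range(b):
            xb = rng.choice(x, size=len(x), replace=True)
            yb = rng.choice(y, size=len(y), replace=True)
            stats.append(mallows_distance(xb, yb))
        return np.quantile(stats, [alpha / 2, 1 - alpha / 2])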

Peter Dencker (Universität Rostock) Optimality properties of tests based on quadratic statistics

We consider tests in Gaussian shift experiments {P_θ, θ ∈ H}, where H is a finite- or infinite-dimensional Hilbert space. A representative experiment is {P_θ = ⊗_{j=1}^∞ N(θ_j, 1), θ = (θ_j) ∈ l²} on the infinite product space of the real line, where l² is the Hilbert sequence space. Let X_m be the projection onto the m-th component. In the case of a finite codimension of a linear null hypothesis H_0, the χ²-statistic provides a maximin α-test for H_0 versus the alternative parameter set {θ ∈ H_0^⊥ : ‖θ‖ = δ}, δ > 0 fixed (see e. g. Strasser, H. (1985)). If the null hypothesis has infinite codimension, a χ²-test does not exist. A way out are weighted quadratic statistics. For η_j ≥ 0, Σ_{j=1}^∞ η_j < ∞, we introduce test statistics Σ_{j=1}^∞ η_j (X_j² − 1). Such statistics assign different weights to different directions. Statistics of this type are often used in goodness-of-fit testing. To describe optimality properties we use the concept of the local maximin property introduced and studied in Giri, N., Kiefer, J. (1964). Given a family of alternatives K_δ, δ > 0, this concept compares a given α-test to all other α-tests locally at the null hypothesis in a maximin sense. We show that the unbiased α-test ϕ = 1{T > q_{1−α}} is a locally maximin test for the null hypothesis {0} against a family of alternatives K_δ = {(θ_j) ∈ l² : Σ_{j=1}^∞ λ_j θ_j² ≥ δ²}, δ ↓ 0. We explicitly give the dependence of the λ_j on the η_j and a positive factor and show the surjectivity of the corresponding function. Furthermore, in the case of a finite-dimensional Hilbert space we get a bijection between tests based on quadratic statistics and collections of alternative sets based on quadratic forms, such that the test is locally maximin against a corresponding family of alternatives. We give the connection of our results to the local decomposition of power functions in the sense of Janssen, A. (1995).

References

[1] Giri, N. and Kiefer, J. (1964). Local and asymptotic minimax properties of multivariate tests. Ann. Math. Stat. 35, 21–35.

[2] Janssen, A. (1995). Principal component decomposition of non-parametric tests. Probab. Theory Relat. Fields 101(2), 193–209.

[3] Strasser, H. (1985). Mathematical theory of statistics: statistical experiments and asymptotic decision theory. de Gruyter, Berlin.

Lutz Dümbgen (Universität Bern) Confidence bounds for quantiles based on interval-censored data

This talk presents nonparametric confidence bounds for specific quantiles of a distribution function in case of current-status data. The method is based on suitable multiscale tests and valid without further regularity conditions on the distribution of the times or inspection times.

Dietmar Ferger (Technische Universität Dresden) On the minimizing point of the incorrect centered empirical process and its limit distribution in non-regular experiments

Let F_n be the empirical distribution function pertaining to independent random variables with common distribution function (df) F. Since Birnbaum and Pyke (1958) and Dwass (1958) it has been known that the minimizing point arg min(F_n − F) of the empirical process F_n − F again has df F. In this talk we focus on the asymptotic behavior of arg min(F_n − G), where G is another df different from F (incorrect centering). We derive distributional convergence, where the limit variable is the (almost surely unique) minimizing point T of a transformed two-sided Poisson process with positive drift. As a statistical application we consider the location model F(x) = H(x − a) with positive shift parameter a and known df H. The choice G = H yields an estimator for a. If H has a density with a (possible) jump, then besides T the minimizing point of a two-sided Brownian motion with parabolic drift may also appear as a limit variable (cube root asymptotics).

Carsten Franz, Ludwig Baringhaus (Universität Hannover) On a new multivariate two-sample test

In this talk a new test for the multivariate two-sample problem is proposed. The test statistic is the difference of the sum of all the Euclidean interpoint distances between the random variables from the two different samples and one-half of the two corresponding sums of distances of the variables within the same sample. The asymptotic null distribution of the test statistic is derived using the projection method and shown to be the limit of the bootstrap distribution. Simulation studies comparing univariate and multivariate normal distributions for location and dispersion alternatives are provided. For normal location alternatives the new test is shown to have power similar to that of the t- and T²-test.
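A sketch of the interpoint-distance statistic described above is given below; the normalizing constants of the actual test statistic are omitted, and x, y are assumed to be arrays of shape (n, d) and (m, d).

    import numpy as np
    from scipy.spatial.distance import cdist

    def interpoint_statistic(x, y):
        # sum of between-sample Euclidean distances minus one-half of each
        # within-sample (double) sum of distances
        between = cdist(x, y).sum()
        within_x = cdist(x, x).sum() / 2.0
        within_y = cdist(y, y).sum() / 2.0
        return between - within_x - within_y

In practice the critical value would be obtained by resampling, for instance from the bootstrap distribution studied in the talk.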

Sandra Freitag, Lutz Dümbgen (Universitätsklinikum Kiel, Universität Bern) Nonparametric estimation of survival curves from interval-censored data

The well-known nonparametric maximum likelihood estimator of a distribution function from interval-censored data has a very slow rate of convergence (O_p(n^{−1/3})). In this talk we describe an intermediate, nonparametric model: we assume the distribution function is unimodal, i. e. convex-concave. This additional but natural constraint leads to considerably better estimators. Moreover, explicit algorithms for their computation will be proposed.

Norbert Henze (Universität Karlsruhe) Invariant tests for symmetry about an unspecified point

We present a flexible class of omnibus affine invariant tests of the hypothesis that a multivariate distribution is symmetric about an unspecified point. The test statistics are weighted integrals involving the imaginary part of the empirical characteristic function of suitably standardized data, and they have an alternative interpretation in terms of a quadratic measure of distance of nonparametric kernel density estimators. Moreover, there is a connection with two measures of multivariate skewness. The tests are performed via a permutational procedure that conditions on the data.

Arnold Janssen (Heinrich-Heine-Universität Düsseldorf) A nonparametric Cramér-Rao inequality for estimators of statistical functionals

The Cramér-Rao bound for estimators of Euclidean parameters is typically used to judge the quality of estimators. It is of local type and uses only information about the model and the estimator locally around the distribution under consideration. In nonparametrics the parameters are substituted by statistical functionals. The local geometry of models is nowadays typically studied via tangent spaces in the sense of Pfanzagl and Wefelmeyer. In the present talk the Cramér-Rao inequality is extended to estimators of statistical functionals. The results are new for real functionals but they can also be proved for functionals with values in some vector space. A specific example is the estimation of the distribution function for some large model. Again the lower bound is of local type. It is discussed when the present bound is asymptotically attained. This discussion leads to an extended concept of Fisher-efficiency for asymptotic estimators of statistical functionals.

Matthias Kohl (Universität Bayreuth) Robust regression and scale based on infinitesimal neighborhoods: M- vs. AL-estimators

While Huber’s (1981) approach based on M-estimates and minimum Fisher information has not been extended from location to the simultaneous estimation of location and scale (and encounters serious limitations already with the estimation of a scale parameter alone), the approach based on infinitesimal neighborhoods covers simultaneous estimation of regression and scale naturally. In the context of regressor distributions which converge weakly together with their second moments, we first derive the influence curve (IC) that minimaxes the asymptotic MSE of the corresponding asymptotically linear (AL) estimator, by specializing results in Rieder (1994). Its regression coordinate, for example, is redescending. The same minimax MSE optimization problem is then solved in the smaller class of M-estimators. We calculate both AL- and M-solutions numerically. We determine the efficiency loss of M relative to AL, which amounts to only a few per mille in the case of simultaneous estimation. If regression and scale are estimated separately, MSE-minimax AL- and M-estimators may again be derived. Due to a coupling condition between regression and scale scores in the M-class, the efficiency loss of M relative to AL increases up to 29.5%. The same holds for particular M-estimates invented by Bednarski and Mueller (2001), provided the supnorm bounds are taken from the best M-estimate. As M-estimates turn out to be inferior to AL-estimators in all respects (theoretical, numerical, . . . ) in the local setup, and global properties (breakdown, . . . ) could be preserved by a suitable one-step construction, the M-principle seems statistically unjustified in this context.

Michael Kohler (Universität Stuttgart) Nonasymptotic bounds on the L2 error of neural network regression estimates

Estimation of multivariate regression functions from bounded i. i. d. data is considered. The L2 error with integration with respect to the design measure is used as an error criterion. It is assumed that the distribution of the design is concentrated on a finite set. Neural network estimates are defined by minimizing the empirical L2 risk over various sets of feed-forward neural networks. Nonasymptotic bounds on the L2 error of these estimates are presented. The results imply that neural networks are able to adapt to additive regression functions and to regression functions which are a sum of ridge functions, and hence are able to circumvent the curse of dimensionality.
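As an illustration of the type of estimator considered, namely a least squares neural network fit obtained by (approximately) minimizing the empirical L2 risk over a small feed-forward network, here is a sketch using scikit-learn; the data-generating model and all tuning parameters are arbitrary choices for the example.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n = 500
    X = rng.uniform(-1, 1, size=(n, 2))            # bounded design
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

    # fit a small feed-forward network by minimizing the empirical L2 risk
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       max_iter=5000, random_state=0).fit(X, y)
    print(np.mean((net.predict(X) - y) ** 2))      # empirical L2 risk of the fit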

Arne Kovac (Universität Essen) Robust nonparametric regression and modality

Non-parametric regression is a topic of much current interest in statistics. The problem is to estimate a function f on the basis of observations y1, . . . , yn at time points t1, . . . , tn where

    y_i = f(t_i) + ε_i   (1)

and the ε_1, ..., ε_n are noise. Davies (1995) and Mammen and van de Geer (1997) introduce a method which employs a taut string constrained to lie in a tube centred around the integrated data. Its derivative is used as a piecewise constant approximation for the given data. Davies and Kovac (2001) use an automatic procedure to determine a locally adaptive tube width. They show that their procedure attains asymptotically the correct modality. Moreover the taut string approximation to the data can be regarded as the solution of the minimization problem:

    Σ_{i=1}^n (y_i − f(t_i))² + Σ_{i=2}^n λ_i |f(t_i) − f(t_{i−1})| = min .   (2)

We consider a method similar to the taut string method, but where the L2-norm in (2) is replaced by the L1-norm:

    Σ_{i=1}^n |y_i − f(t_i)| + Σ_{i=2}^n λ_i |f(t_i) − f(t_{i−1})| = min .   (3)

The solutions of this minimization problem are again piecewise constant. It is possible to compute the solution with a fast algorithm of order O(n log n). The weights λ_i (i = 1, ..., n) can be determined by an automatic procedure that again attains asymptotically the correct modality. The method is robust in the sense that it can withstand outlier patches consisting of up to 2 min_{i=1,...,n}(λ_i) measurements. An application of the robust taut string method to deviation measurements from a weather balloon is shown in Figure ??.
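One way to make the minimization problem (3) concrete is to solve it as a generic convex program. The sketch below uses the cvxpy package with a constant weight λ_i = lam and simulated data; it does not reproduce the fast O(n log n) algorithm or the automatic choice of the weights mentioned above.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    y = np.where(t < 0.5, 0.0, 1.0) + 0.2 * rng.standard_normal(t.size)
    y[50:53] = 5.0                     # a short outlier patch

    lam = 4.0                          # constant weights lambda_i = lam for simplicity
    f = cp.Variable(t.size)
    objective = cp.Minimize(cp.sum(cp.abs(y - f)) + lam * cp.sum(cp.abs(cp.diff(f))))
    cp.Problem(objective).solve()
    f_hat = f.value                    # essentially piecewise constant; the 3-point
                                       # outlier patch (shorter than 2*lam) is ignored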

References

[1] Davies, P. L. (1995). Data features. Statistica Neerlandica 49, 183–245.

[2] Davies, P. L. and Gather, U. (1993). The identification of multiple outliers (with discussion). Journal of the American Statistical Association 88, 782–801.

[3] Davies, P. L. and Kovac, A. (2001). Local Extremes, Runs, Strings and Multiresolution (with discussion). Annals of Statistics 29, 1–65.

[4] Mammen, E. and van de Geer, S. (1997). Locally adaptive regression splines. Annals of Statistics 25, 387–413.

Eckhard Liebscher, Petr Lachout, Silvia Vogel (Technische Universität Ilmenau, Charles University Prague, Technische Universität Ilmenau) Consistency of estimators as solutions of optimisation problems

Let {f_n} be a sequence of random functions. Assume that for each n, θ̂_n is an approximate minimizer of f_n on Θ, i. e. more precisely, θ̂_n satisfies

    f_n(θ̂_n) < inf_{θ∈Θ} f_n(θ) + ε_n   for n ∈ N,

where {ε_n} is a sequence of positive numbers tending to zero. Here Θ is the set of parameters. Suppose that f_n approximates in a certain sense a limit function f as n → ∞ and f has a unique minimum at θ_0, which is the true parameter of the underlying model. So θ̂_n can be regarded as an estimator for θ_0. We provide rather weak and general conditions under which the estimator θ̂_n is strongly consistent. In the proofs of the statements, ideas from stochastic optimization theory are utilized. Moreover, we consider the case of a random parameter set depending on n. In the second part of the talk we discuss several applications. For example, convergence theorems for M-estimators in fixed-design regression models and in autoregressive models are provided.

References

[1] Berlinet, A., Liese, F. and Vajda, I. (1994). Necessary and sufficient conditions for consistency of M-estimates in regression models with general errors. J. Stat. Plann. Inference 89, 243–267.

[2] Dupaˇcov´a,J. and Wets, R. (1988). Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems. Ann. Stat. 16, 1517–1549.

[3] Vogel, S. (1994). A stochastic approach to stability in stochastic program- ming. J. Comput. Appl. Math. 56, 65–96.

Ali Majidi (Universität Essen) Smooth Nonparametric Regression subject to Extreme Values

Given a dataset {y(ti), i = 1, . . . , n} which we denote by y, we look for a decomposition

y(ti) = f(ti) + r(ti) , (ti = i/n, i = 1, . . . , n) where f is a simple function and the {r(ti), (i = 1, . . . , n)} are the resulting residuals which are taken to approximate white noise as described in Davies and Kovac [1]. We use two different concepts of simplicity. The first is the number of local extreme values. The second is the smoothness of the function as measured by the standard smoothness functional

    S(f) := ∫_0^1 (f^(2)(t))² dt ,

where f^(2) is the second derivative of f. The number of local extremes is taken to have priority over smoothness. The number of local extremes and their locations are determined by the taut string method developed in [1]. Given the number and location of the extreme values, we look for the smoothest function which approximates our dataset and has the fixed modality. The quality of the approximation is measured by noise-characterizing criteria, e. g. the multiresolution criterion described in [1]. We introduce a method which solves the resulting quadratic program efficiently using the special nature of the problem. Convergence rates in the supremum norm are proved. Figure 1 shows the result of the method applied to a noisy Doppler function as in Donoho and Johnstone [2].


Figure 1: The left panel shows the data and the smooth approximation. The right panel shows the resulting residuals.

References

[1] Davies, P. L. and Kovac, A. (2001). Modality, Runs, Strings and Multiresolution. Annals of Statistics 29(1), 1–65.

[2] Donoho, D. L. and Johnstone, I. M. (1994). Ideal Spatial Adaptation by Wavelet Shrinkage. Biometrika 81, 425–455.

Monika Meise (Universität Essen) Residual Based Bandwidth Choice

We propose a locally adaptive, residual-based technique for determining the local bandwidth for local polynomial regression. Given data (t_i, y_i), i = 1, ..., n, with n = 2^m, the observations are to be decomposed as y(t_i) = f_n(t_i) + r_n(t_i) in such a way that the r_n(t_i) ‘look like’ white noise and the function f_n is as smooth as possible subject to this constraint. To achieve this, it is checked whether the residuals satisfy the multiresolution conditions

    |s_{j,k}| = | 2^{−j/2} Σ_{i = k·2^j + 1}^{(k+1)·2^j} r_n(t_i) | ≤ σ_n √(2 log n)

with σ_n = (1.48/√2) · Median{|y(t_2) − y(t_1)|, ..., |y(t_n) − y(t_{n−1})|} (see Davies and Kovac, 2001). This criterion is used in an iterated procedure to locally reduce the bandwidth until all conditions are fulfilled.
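A sketch of the multiresolution check, assuming n is a power of two (the function and variable names are illustrative):

    import numpy as np

    def multiresolution_ok(y, residuals):
        # every normalized sum of residuals over a dyadic block must stay below
        # sigma_n * sqrt(2 log n), with sigma_n estimated from first differences
        r = np.asarray(residuals, dtype=float)
        n = r.size                                   # assumed to be a power of two
        sigma_n = 1.48 / np.sqrt(2.0) * np.median(np.abs(np.diff(y)))
        bound = sigma_n * np.sqrt(2.0 * np.log(n))
        length = 1
        while length <= n:
            block_sums = r.reshape(-1, length).sum(axis=1)
            if np.any(np.abs(block_sums) / np.sqrt(length) > bound):
                return False
            length *= 2
        return True

In the proposed procedure this check drives an iteration that reduces the local bandwidth until the residuals pass on all dyadic blocks.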

Christine Müller (Carl von Ossietzky Universität Oldenburg) Robust Estimators for Estimating Discontinuous Functions

We study the asymptotic behavior of a wide class of kernel estimators for estimating an unknown regression function. In particular we derive the asymptotic behavior at discontinuity points of the regression function. It turns out that some kernel estimators based on outlier robust estimators are consistent at jumps.

Georg Neuhaus, Marie Hušková (Universität Hamburg, Charles University of Prague) Some Simple Unconditional and Conditional Tests for Testing Tumour Onset Times

A class of optimal tests for testing tumour onset times in a two-sample testing problem has been developed in a recent paper by Neuhaus (1999). These tests need estimators of the densities of the time-to-death distribution and are thereby somewhat complicated in nature. If such densities do not exist, which occurs often in practice since the times to death may be sacrifice times at certain fixed time points, this method cannot work. For this more general situation we introduce a simplified version that is approximately optimal and show how to perform these tests as conditional tests, which are not only finite sample distribution free under the null hypothesis of equal onset time distributions with equal time-to-death distributions in both samples but also under many null hypothesis situations with unequal censoring. The conditional tests are asymptotically equivalent to their unconditional counterparts. As an example, our considerations yield certain optimality properties of the well-known Hoel & Walburg (1972) test as well as of its exact counterpart. Crucial for these results is a conditional, asymptotic limit theorem for the test statistics under local alternatives.

Michael H. Neumann (Universität zu Köln) Tests of time series models

We propose a new method of testing time series models which is based on the difference between a model-based and a fully nonparametric estimate of the transition probabilities. An appropriate critical value is found by the bootstrap. (This talk is based on joint work with E. Paparoditis, University of Cyprus, Nicosia.)

Natalie Neumeyer (Ruhr-Universität Bochum) Testing the equality of nonparametric regression functions — an empirical process approach

The talk is concerned with the comparison of nonparametric regression functions by using independent samples. A new test for equality is proposed which is based on the difference of two marked empirical processes. The large sample behaviour of the corresponding test statistic is studied. The developed method is applicable in the case of different design points in each sample, unequal sample sizes and heteroscedastic errors.

Robert Offinger (Universität Magdeburg) The asymptotic distribution of the Wald statistic at singular parameter points

We consider the large sample Wald test for a null hypothesis H_0 : R(θ) = 0 about the parameter vector in a statistical model, where R is a given smooth multivariate function. The standard asymptotics for the null-distribution of the Wald statistic applies when the Jacobian J(θ) of R has full row rank. However, there may be points with a rank defect in the null hypothesis, and we will describe the asymptotic distribution at such points under a second order regularity condition. The results are illustrated for the null hypothesis of unconfoundedness of a regression of Y on X with a potential confounder W, where Y, X and W are dichotomous random variables. Surprisingly, the Wald test using the standard critical χ²-value remains asymptotically conservative for a particular formulation, while for another formulation of the null hypothesis this is not true.

Christian Rau (Australian National University Canberra) Likelihood-based confidence bands for fault line estimators

The problem of estimating a smooth fault line in a bivariate regression, or density surface, is of considerable importance in various applications, such as computerised edge detection or the biological and geological sciences. In this talk, we study properties of a new estimator for a fault line in both settings. This estimator is constructed by maximising a likelihood in a locally-linear model for the edge. The approach offers a unifying thread to the two problems, which are usually considered separately from each other. The convergence rate of the estimator comes within the known minimax-optimal convergence rate by an arbitrarily small power of the design intensity. Our main focus is on investigation of the local behaviour of this estimator, through which we obtain asymptotic confidence bands for the fault line, both pointwise and simultaneous. The pointwise distance between the fault line and its bias-corrected estimator has a distribution which equals that of the location of the maximum of a Gaussian process with quadratic drift, and thus resembles a commonly encountered limit of M-estimators. Finite-sample performance is studied through experiments with artificially generated data.

Markus Roters, Helmut Finner (Universität Potsdam, Deutsches Diabetes-Forschungsinstitut Düsseldorf) Multiple hypotheses testing and expected number of type I errors

The performance of multiple test procedures with respect to error control is an old issue. Assuming that all hypotheses are true, we investigate the behaviour of the expected number of type I errors (ENE) as a characteristic of certain multiple tests controlling the familywise error rate (FWER) or the false discovery rate (FDR) at a prespecified level. We derive explicit formulae for the distribution of the number of false rejections as well as for the ENE for single-step, step-down and step-up procedures based on independent p-values. Moreover, we determine the corresponding asymptotic distributions of the number of false rejections as well as explicit formulae for the ENE if the number of hypotheses tends to infinity. In the case of FWER-control we mostly obtain Poisson distributions and in one case a geometric distribution as limiting distributions; in the case of FDR-control we obtain limiting distributions which are apparently not named in the literature. Surprisingly, the ENE is bounded by a small number regardless of the number of hypotheses under consideration. Finally, it turns out that in the case of dependent test statistics the ENE behaves completely differently compared to the case of independent test statistics.

Ingo Steinke (Universität Rostock) On uniform convergence rates in local polynomial regression

We consider the nonparametric regression model m(x) = E[Y_i | X_i = x] for independent (X_i, Y_i) ∈ R^d × R^1. Local polynomials are used to construct estimators for the regression function m(x) and the partial derivatives D^γ m(x). The optimal uniform stochastic rate of convergence of the estimators is established. The covariables are allowed to be random or non-random. The smoothness of the experimental design is expressed by the asymptotic properties of the sequence of distributions of the covariables, and the boundary behaviour is discussed.

Jean-Pierre Stockis, Jürgen Franke, Michael Neumann (Universität Kaiserslautern, Universität Kaiserslautern, Universität zu Köln) Bootstrapping nonparametric estimators of the volatility function

We prove that the autoregressive bootstrap works in a strong sense for nonparametric estimators of the trend and volatility functions in nonlinear AR models. We illustrate some implications of this result by constructing uniform confidence bands for those functions.

Vyacheslav Vasil’iev (Tomsk State University) Nonparametric Estimation of Errors Distribution in Linear Stochastic Control Systems

We consider the nonparametric estimation problem for the noise distribution function and its characteristics in a first-order autoregressive process with a linear control. The dynamic parameter of the process is unknown. The adaptive control procedure is constructed on the basis of a quadratic-type criterion, which takes into account the rate of convergence of the object variance to the noise variance. Self-tuning regulators based on sequential least squares parameter estimators are proposed. These regulators are shown to be optimal in a broad class of control sequences and under mild conditions on the admissible values of the object parameter and the noise distribution. The proposed nonparametric estimators of the unknown distribution, its density function and functionals of it are constructed on the basis of dependent autoregressive observations. It is shown that they have the same asymptotic properties as in the case of independent observations. For example, the estimator of the variance is optimal in the mean square sense; the kernel estimators of the distribution function and its derivatives have an improved rate of convergence in the L_p metrics, p ≥ 2. These estimators may be applied to the construction of an adaptive control procedure with the best possible rate of convergence of the object variance to its minimal value.

Research was supported by RFFI 00-01-00880 and RFFI – DFG 02-01-04001 Grants

Klaus Ziegler (Universität München) On local bootstrap bandwidth choice in kernel density estimation

Under mild regularity conditions, it is shown that bandwidth selection by minimizing the bootstrapped mean squared error (at a point x) leads to a bandwidth of the same form as that obtained by a consistent plug-in procedure. The consequences of this observation for the construction of confidence intervals are also discussed.

Andreas Zöllner (Universität Magdeburg) A resampling method for constructing a lower confidence bound for a finite population total from a censored sample

Suppose a finite population of N objects, each of which has an unknown value µ_i ≥ 0, i = 1, ..., N, of a nonnegative characteristic of interest. The aim is to obtain a reliable lower bound for the population total (the sum of all µ_i). To this end, a random sample has been drawn, but the µ-values have been observed only for a selected subset of the sample. The selection procedure has been somewhat obscure, and thus the subsample is censored rather than random. We propose a resampling procedure to construct an under-estimate of the population total. We also consider the case when the objects of the population have unequal probabilities, in particular when the population is divided into a small number of strata with constant probabilities within each stratum. A real data example illustrates the method. (This paper is joint work with Norbert Gaffke.)

Sec. 2. Computer Intensive Methods and Stochastic Algorithms

Organizer: Werner Stützle (Seattle, USA)

Invited Lecture

Peter Bühlmann (ETH Zürich) Bagging and Boosting: Asymptotic Aspects and New Insights

Bagging (Breiman, 1996) and Boosting (Freund and Schapire, 1996) are techniques for generating and aggregating multiple predictors. In a number of interesting cases, both have empirically been found to impressively improve the predictive performance, mainly in conjunction with tree-structured predictors such as CART. We provide here new insights into what the methods actually do:

(1) We present asymptotic results saying that Bagging pays off for hard-threshold estimators such as a discontinuous decision tree predictor (Bühlmann and Yu, 2000). For smoother procedures, bagging has no effect on the leading first order term (Buja and Stuetzle, 2000).

(2) As pointed out first by Breiman (1999), Boosting is a functional gradient descent algorithm. This view turns out to be fruitful to derive that (a version of classical) Boosting is asymptotically optimal in simple, one-dimensional curve fitting and to gain additional insights into how Boosting acts in high-dimensional settings (Bühlmann and Yu, 2001).
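A minimal sketch of the bagging idea for regression, with bootstrap resamples of the training data, one tree per resample and averaged predictions; the tree depth and the number of resamples are arbitrary illustrative choices.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def bagged_predict(X, y, X_new, n_boot=100, rng=None):
        # fit a tree to each bootstrap resample and average the predictors
        rng = np.random.default_rng() if rng is None else rng
        n = len(y)
        preds = np.zeros((n_boot, len(X_new)))
        for b in range(n_boot):
            idx = rng.integers(0, n, size=n)        # bootstrap sample (with replacement)
            tree = DecisionTreeRegressor(max_depth=3).fit(X[idx], y[idx])
            preds[b] = tree.predict(X_new)
        return preds.mean(axis=0)                   # aggregated (averaged) prediction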

References

[1] Breiman, L. (1996). Bagging predictors. Machine Learning 24, 123–140.

[2] Breiman, L. (1999). Prediction games & arcing algorithms. Neural Computation 11, 1493–1517.

[3] Bühlmann, P. and Yu, B. (2000). Analyzing bagging. To appear in Annals of Statistics.

[4] Bühlmann, P. and Yu, B. (2001). Boosting with the L2-loss: regression and classification. Preprint, ETH Zürich.

[5] Buja, A. and Stuetzle, W. (2000). The effect of bagging on variance, bias, and mean squared error. Preprint, AT&T Labs-Research.

[6] Freund, Y. and Schapire, R. E. (1996). Experiments with a new boosting algorithm. In: Machine Learning: Proc. Thirteenth International Conference, 148–156. Morgan Kaufmann, San Francisco.

Contributed Lectures (in alphabetic order)

Alexander Andronov (Riga Technical University) Resampling Approach and its Modifications

The resampling approach has its origin in the bootstrap method suggested by Bradley Efron in 1977. The bootstrap method is used widely for statistical inference on small samples (Efron and Tibshirani, 1983; Hall, 1992; Davison and Hinkley, 1997; DiCiccio and Efron, 1996). Statistical applications of resampling were considered, for example, in (Belyaev, 2000; Chepurin, 1999). The use of the resampling approach for system simulation was evidently first considered in the paper (Ivnitski, 1967). In this paper we present the author's results in this respect obtained since 1995. We consider a known function φ of m independent random variables X_1, X_2, ..., X_m: φ(X_1, X_2, ..., X_m). It is assumed that the distribution function F_i(.) of the random variable X_i is unknown, but the sample population H_i = {X_{i1}, X_{i2}, ..., X_{i n_i}} is available for each X_i, i = 1, ..., m. The problem consists in the estimation of the mathematical expectation

    θ = E φ(X_1, X_2, ..., X_m) .   (1)

The resampling approach proposes that the values of the arguments are extracted randomly from the corresponding sample populations {H_i}. In other words, j(l) = (j_1(l), j_2(l), ..., j_m(l)), l = 1, 2, ..., are random samples from {N_i}. Usually j(l) and X(l) = (X_{j_1(l)}, X_{j_2(l)}, ..., X_{j_m(l)}) are said to be the l-th resample. For r resamples we have the following formula:

    θ* = (1/r) Σ_{l=1}^r φ(X(l)) .   (2)

Our main aim is to calculate the variance of this estimator. For this, the following approaches are considered: hierarchical resampling, controlled resampling and other modifications.
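A minimal sketch of the resampling estimator (2): in each of r rounds one value is drawn at random from every sample population H_i and φ is averaged over the rounds (the example function and all names are illustrative).

    import numpy as np

    def resampling_estimate(phi, samples, r=1000, rng=None):
        # estimate theta = E phi(X_1, ..., X_m) from the sample populations H_i
        rng = np.random.default_rng() if rng is None else rng
        vals = []
        for _ in range(r):
            x = [rng.choice(h) for h in samples]    # one draw per population
            vals.append(phi(*x))
        return np.mean(vals)

    # example: phi(x1, x2) = x1 * x2 with two small sample populations
    h1, h2 = np.array([1.0, 2.0, 3.0]), np.array([0.5, 1.5])
    print(resampling_estimate(lambda a, b: a * b, [h1, h2], r=5000))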

References

[1] Andronov, A. and Merkuryev, Y. (2000). Optimization of statistical sample sizes in simulation. Journal of Statistical Planning and Inference 85, 93–102.

[2] Andronov, A. and Fioshin, M. (2000). Algorithm for Calculation of Joint Distribution of Bootstrap Sample Elements. In: Probability Theory and Mathematical Statistics. TEV-VSP, Vilnius–Utrecht, 15–22.

[3] Andronov, A. (2000). Resampling-Estimator of the Renewal Function. In: Proceedings of the 12th European Simulation Symposium “Simulation in Industry”, September 28–30, Hamburg, Germany. SCS, Delft, The Netherlands, 593–597.

We are very thankful to the Latvian Council of Science for Grants N 97.0784 and N 01.0842, within which the present investigation was worked out.

Alexander G. Kolpakov (Siberian State University, Novosibirsk) Numerical Experiments on the Effective Dielectric Constant of a High-Contrast Random Medium

Consider the domain Q filled with non-overlapping disks D_i, i = 1, ..., N. One must solve the equation

    Δφ = 0   in Q \ ∪_{i=1}^N D_i

with the condition

    φ = C_i on D_i (C_i are unknowns), i = 1, ..., N,

and the boundary conditions

    φ(x, ±1) = ±1 ,   ∂φ/∂n (±1, y) = 0 .

The disks are assumed to be randomly distributed in the domain Q.

The problem above is a model of an electrical capacitor having a composite dielectric layer. Numerical solution of the problem is possible, but it takes too much time to repeat it enough times to collect good statistics. The problem was reduced to a finite-dimensional problem [1] for a given configuration ω of the disks (the convergence theorem was proved for closely packed disks). The net problem can be solved more quickly than the initial problem, and statistics can be collected. The net problem was solved for different random configurations of the disks for a fixed total volume fraction of the disks. From 100 to 1000 iterations of the program were done. The collected data demonstrate that the dependence of the effective dielectric constant on the volume fraction of the disks is a percolation-type function.

References

[1] Berlyand, L. V. and Kolpakov, A. G. (2001). Network Approximation in the Limit of Small Interparticle Distance of the Effective Properties of a High Contrast Random Dispersed Composite. Arch. Rational Mech. Anal. 159(3), 179–227.

Gernot Müller (Technische Universität München) A regression approach to modelling high frequency financial data

In financial time series the transaction price changes mostly occur in discrete increments, for example in eighths of a dollar. We consider these price changes as discrete random variables which are assumed to be generated by a latent process which incorporates both exogenous variables and autoregressive components. An initial Gibbs sampling algorithm has been developed to estimate the parameters of the model. However, this algorithm exhibits poor convergence properties. To improve the initial Gibbs sampler we utilize methods proposed by Liu and Sabatti (2000, Biometrika 87), based on transformation groups on the sample space. A simulation study will be given to demonstrate the substantial improvement by this new algorithm.

Anatoly Naumov (Novosibirsk State Technical University) Why is the Classical Design of Experiments Problem Non-effective?

The classical optimum regression designs of experiments are investigated. Extending these results, methods are obtained for investigating optimum designs in various settings. Many examples are given in which a D-optimum design and a non-optimum design are compared. The attainment of equality in the equivalence theorem is considered and investigated. The main result is as follows: the distance between a prior dispersion function and a posterior one can be very substantial, and the equality in Kiefer's theorem may fail to hold for posterior data.

Let g(x) = Σ_{i=1}^k θ_i f_i(x) + ε(x), where ε(x) is a normal random variable with E(ε(x)) = 0 and E(ε(x)²) = σ². Independent measurements of the function g(x) are made at the points of the design of experiments

    ξ = (x_1, x_2, ..., x_m ; p_1, p_2, ..., p_m),  p_i = n_i/N,  i = 1, 2, ..., m.

Set N = Σ_{i=1}^m n_i, M(ξ) = Σ_{i=1}^m (n_i/N) f(x_i) f(x_i)^T, b = Σ_{i=1}^m (n_i/N) f(x_i) ḡ_i, and ḡ_i = (1/n_i) Σ_{j=1}^{n_i} g_{ij}. The BLUE of θ is

    θ̂ = M^{-1}(ξ) b = (F^T F)^{-1} F^T ḡ,

where F is the matrix with rows √(n_i/N) f(x_i)^T and ḡ is the vector with components √(n_i/N) ḡ_i, i = 1, ..., m. If ξ* is D-optimal, then ξ* is equivalent to a design of experiments solving the problem min_ξ max_x d(x, ξ), where d(x, ξ) = f(x)^T M^{-1}(ξ) f(x), and max_x d(x, ξ*) = k. The prior variance of the BLUE θ̂^T f(x) is

    d̂(x, ξ) = (1/N) σ² f(x)^T M^{-1}(ξ) f(x) = (1/N) σ² d(x, ξ).

The purpose of this paper is to investigate the behaviour of the posterior dispersion function of the regression model estimate and the distance between the prior and posterior dispersion functions. For ξ* define the posterior dispersion function (the variance of θ̂^T f(x)) by

    d̃(x, ξ) = (1/N) f(x)^T M^{-1}(ξ) F^T D F M^{-1}(ξ) f(x),

where D = diag( (1/(n_i − 1)) Σ_{j=1}^{n_i} (ḡ_i − g_{ij})² ), i = 1, 2, ..., m. We investigated a few problems concerning the prior and posterior variances of the BLUE θ̂^T f(x). For the distance Δd(x, ξ*) = |d̂(x, ξ*) − d̃(x, ξ*)| it holds that max_x (Δd(x, ξ*)/d̂(x, ξ*) × 100%) ≈ 75%. We show that for a non-D-optimal design ξ0 it holds that max_x d̃(x, ξ*) ≥ max_x d̃(x, ξ0). So,

1) a classical optimal design of experiments may be non-effective in reality, and for a non-optimal design ξ0 we have max_x d̃(x, ξ*) ≥ max_x d̃(x, ξ0);

2) in the situation when the basis vector f(x) of the regression model is not known, the classical optimal design will be even less effective than a non-optimal one;

3) it is shown that a non-optimum design can be more effective than an optimum one, and to correct this situation we must treat the experimental design problem as, for example, an adaptive experimenting procedure on simplex structures.

Werner Stützle (University of Washington) Bagging with or without replacement?

Bagging is a machine learning technique designed to reduce the variability of prediction rules. The idea, due to Leo Breiman, is to resample from the training set, construct a prediction rule for each resample, and then average the rules. Resampling can be done with or without replacement. We present theoretical results comparing the two alternatives and some empirical evidence backing up the theory. This is joint work with Andreas Buja (University of Pennsylvania).

Winfried Theis (Universität Dortmund) Clustering of Business Cycles in Optimal Directions found by SIR and DAME

We consider the problem of classifying economic data into business cycle phases. The long-term objective is to find a method by which we can predict the current business cycle phase based on actual economic data. The investigation is based on a data set of quarterly observations of selected economic variables, which also contains an expert's judgement about the business cycle phase corresponding to each quarter. We investigate whether the given data contain appropriate information about the business cycle phases and whether the expert's classification is congruent with the information in the data. To do this, we first perform a dimension reduction by SIR or DAME which includes the expert's information, followed by fuzzy clustering of the reduced data. If the expert's classification coincides with the information contained in the data, we expect to rediscover the business cycle phases in the clusters.

Sec. 3. Limit Theorems, Large Deviations and Statistics of Extremes

Organizer: Herold Dehling (Bochum)

Invited Lecture

Sándor Csörgő (University of Szeged) Resolution of the St. Petersburg Paradox

Peter tosses a fair coin repeatedly until it first lands heads and pays Paul 2^k ducats if this happens on the kth toss, k = 1, 2, ... . What is the price for Paul to pay to make the game “equal and fair”? It is an infinite number of ducats but, as Nicolaus Bernoulli (who posed the problem in 1713) wrote, “there ought not to exist any even halfway sensible person who would not sell the right to the game for 40 ducats”. This is the St. Petersburg paradox. A great number of possible resolutions have been suggested in the past 288 years, and almost all of these have become the target of instant and vehement criticism. The most notable early ideas belong to Cramer (1728), Euler (17??) and Daniel Bernoulli (1738), while Feller’s (1945) solution can be traced back to Buffon (1777). Feller’s weak law of large numbers suggests that Paul’s ‘fair’ price for n games should be n Log n ducats, where Log stands for the logarithm to the base 2, while a uniquely interesting approach of Steinhaus (1949) would demand somewhat more. The talk will venture to propose yet one more resolution. It is based on a whole family of asymptotic distributions that has the cardinality of the continuum. There is no equity, but fairness is still possible.
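A small simulation illustrating Feller's scaling (the code and the seed are purely illustrative): the ratio of the total winnings in n games to Feller's 'fair' price n Log n hovers around 1, although the convergence is slow and the ratio keeps fluctuating.

    import numpy as np

    rng = np.random.default_rng(0)

    def st_petersburg_payoffs(n):
        # payoff 2^K, where K is the toss on which the first head appears
        return 2.0 ** rng.geometric(0.5, size=n)

    for n in (10**3, 10**5, 10**7):
        s_n = st_petersburg_payoffs(n).sum()
        print(n, s_n / (n * np.log2(n)))            # Feller: tends to 1 in probability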

Contributed Lectures (in alphabetic order)

Gerold Alsmeyer (Universität Münster) Limit Theorems for Iterated Random Lipschitz Functions by Regenerative Methods

We will present a number of limit theorems for iterated random Lipschitz functions which are based on an analysis of a random walk of logarithmic Lipschitz constants and an embedded ladder height process. This is joint work with C. D. Fuh (Academia Sinica, Taipei, R. O. C.).

Peter Becker-Kern, Mark M. Meerschaert, Hans-Peter Scheffler (Universität Dortmund, University of Nevada, Universität Dortmund) Limit theorems for coupled Continuous Time Random Walks

Let (Y_n, J_n) be i. i. d. random vectors on R^d × R_+ which model the n-th jump of a particle together with its waiting time. Further let S_n = Σ_{k=1}^n Y_k and T_n = Σ_{k=1}^n J_k be the cumulative jumps and waiting times, respectively, and N_t = max{n ≥ 0 : T_n ≤ t} the total number of jumps up to time t. This talk discusses the limiting behaviour of the coupled continuous time random walk (CTRW) process X(t) = S_{N_t} without any assumptions on the interdependence between jumps and waiting times. Assume the existence of invertible linear operators A_n and constants b_n > 0 such that (A_n S_n, b_n T_n) converges in distribution to (A, D), where the distribution of A is some full operator stable law and D is a β-stable subordinator, 0 < β < 1. Under these conditions we show that operator scaling limits M(t) of the CTRW exist and depend on the corresponding operator Lévy motion (A(t), D(t)) of (A, D). The limit M(t) can be represented as a subordination of A(t) to the hitting time process of D(t). A detailed analysis of the limit is given in H.-P. Scheffler’s talk On the distribution of Continuous Time Random Walk limits [abstract on page 43].
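A simulation sketch of a coupled CTRW X(t) = S_{N_t}; the particular coupling of the jumps to the heavy-tailed waiting times below is just one illustrative choice, not the general setting of the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    def ctrw_position(t, n_max=100000):
        # X(t) = S_{N_t}, where N_t counts the jumps whose cumulative waiting
        # times do not exceed t
        waits = rng.pareto(0.7, size=n_max) + 1.0    # heavy-tailed waiting times
        jumps = waits * rng.standard_normal(n_max)   # jumps coupled to the waiting times
        T = np.cumsum(waits)
        N_t = np.searchsorted(T, t, side="right")    # number of completed jumps by time t
        return jumps[:N_t].sum()

    print([ctrw_position(t) for t in (10.0, 100.0, 1000.0)])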

Amke Caliebe, Uwe Rösler (Christian-Albrechts-Universität Kiel) Representation of Solutions of a Distributional Fixed Point Equation

Let T_1, T_2, ..., T_N be real random variables. We investigate the following equation (in distribution) for distributions µ:

    W =_d Σ_{j=1}^N T_j W_j ,   (1)

where W, W_1, ..., W_N have distribution µ and W_1, ..., W_N, (T_1, ..., T_N) are independent. Applications of this equation occur in the context of branching processes, infinite particle systems and turbulence models.

For positive coefficients T_i and positive solutions (i. e. distributions µ on [0, ∞) satisfying (1)), Eq. (1) has been studied by Durrett and Liggett, who obtained central results. These have been improved by the work of Liu. Here we consider the case of general (i. e. not necessarily positive) coefficients and solutions. This case differs to a high degree from the case of positive solutions, since the latter can be treated by using the Laplace transform. For the general case new methods have to be developed. We employ here the classical theory of convergence of infinitesimal independent triangular schemes to infinitely divisible distributions as e. g. developed by Gnedenko and Kolmogorov. This yields a representation of characteristic functions of solutions as the integral over certain characteristic functions of infinitely divisible distributions. As special cases we consider positive solutions, solutions with finite variance and symmetric solutions.

Gerd Christoph (Otto-von-Guericke-Universität Magdeburg) On rates of convergence to discrete stable limit laws

Discrete stable laws as discrete analogues of (continuous) strictly stable laws were introduced in Steutel and van Harn (1979). In the present paper rates of convergence of sums of non-negative integer valued random variables to discrete stable laws under pseudomoment conditions are discussed. Since the discrete stable random variables are infinitely divisible, rates of convergence to some infinitely divisible limit laws with infinite expectation are obtained, too.

Steffen Dereich (Technische Universität Berlin) Small ball probabilities around random centers for Gaussian measures

Let µ be a centered Gaussian measure on a separable Banach space (E, ‖·‖). The talk Small ball probabilities and the quantization problem for Gaussian measures of Michael Scheutzow establishes a link between the asymptotic quantization error and small ball probabilities [abstract on page 43]. The aim of this talk is to make the previous link more intuitive by introducing small ball probabilities around random centers. These are defined by the random variables φ_X(ε) = − log P(‖X′ − X‖ ≤ ε | X) (ε > 0), where X and X′ denote two independent µ-distributed random elements. We give general properties of the asymptotic behavior of the random function φ_X(ε) as ε → 0+. In the case that E is a Hilbert space, one obtains a. s. asymptotic equivalence between the (random) small ball function φ_X(ε) and a deterministic function φ(ε) as ε → 0+. The function φ can be determined explicitly by the eigenvalues of the covariance operator of the measure µ. This allows us to improve some of the asymptotic estimates for the quantization error.

Holger Drees, Vladimir Piterbarg (Universität zu Köln, Lomonosov Moscow State University)
On maximal occupation time estimators of the extreme value index

Many estimators of the extreme value index γ of i. i. d. observations are based on the k largest order statistics. It is well known that the performance of these estimators strongly depends on the value of k. In the last couple of years some procedures for the adaptive choice of the number of order statistics were introduced. Alternatively, the estimator is plotted against k or log k, and the number k is chosen by eye in a region where this plot seems stable. In a new approach we propose to determine, for each possible value of γ, the percentage of 'time' the plot spends in certain neighborhoods of that value and then to define γ̂_n as the maximizer of this occupation time. It is shown that γ̂_n automatically converges at the optimal rate towards the true extreme value index. Moreover, its limit distribution is established under suitable second order conditions.
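The occupation-time idea can be sketched numerically: compute the Hill plot and, over a grid of candidate values of γ, count how often the plot falls into a small neighbourhood of each candidate. The neighbourhood half-width and the grid below are arbitrary illustrative choices, since the abstract does not specify the exact tuning; the sample is drawn from a Pareto law with true index γ = 0.5.

    import numpy as np

    rng = np.random.default_rng(1)

    def hill_estimates(x):
        """Hill estimator gamma_hat(k), k = 1, ..., n - 1, based on the k largest order statistics."""
        logs = np.log(np.sort(x)[::-1])                  # log of the descending order statistics
        k = np.arange(1, x.size)
        return np.cumsum(logs[:-1])[k - 1] / k - logs[k]

    def occupation_time_choice(gammas, grid, half_width=0.05):
        """Candidate gamma whose neighbourhood the Hill plot visits for the largest fraction of k."""
        occupation = [(np.abs(gammas - g) <= half_width).mean() for g in grid]
        return grid[int(np.argmax(occupation))]

    x = rng.pareto(2.0, size=5000) + 1.0                 # Pareto sample with true gamma = 1/2
    gammas = hill_estimates(x)
    grid = np.linspace(0.1, 1.0, 181)
    print("occupation-time choice of gamma:", occupation_time_choice(gammas, grid))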

Peter Eichelsbacher (Ruhr-Universität Bochum)
Stein's method and Gibbs measures

Stein's method provides a way of finding approximations to a distribution µ of a random variable and gives at the same time estimates of the approximation error involved. We provide Stein's method for the class of so-called discrete Gibbs measures with a density e^V, where V is the energy function. Moreover we show that the characterization of a class of Gibbsian point processes using generalized Mecke equations parallels the development of Stein's method. The developments happened independently of each other. This is joint work with Gesine Reinert, Oxford, UK.

Uwe Einmahl (Vrije Universiteit Brussel) Moderate deviation probabilities for open convex sets

Let X_1, X_2, ... be i. i. d. Banach space valued random variables with mean zero. Set $S_n = \sum_{j=1}^{n} X_j$, n ≥ 1. Under appropriate assumptions we can determine large deviation probabilities at a logarithmic level, i. e. we can specify the order of $\log P\{S_n/n \in A\}/n$ for suitable Borel subsets. Much more precise results, however, can be obtained for open convex sets where so-called dominating points play a decisive role. The purpose of the present talk is to look at this problem for moderate deviation probabilities. We show that it is relatively easy to get upper bounds, but obtaining the corresponding lower bounds even in Hilbert space seems to be much more difficult.

Armelle Guillou (Université Paris VI)
On exponential representations of log-spacings of extreme order statistics

In Beirlant et al. (1999) and Feuerverger and Hall (1999) an exponential regression model (ERM) was introduced on the basis of scaled log-spacings between subsequent extreme order statistics from a Pareto-type distribution. This led to the construction of new bias-corrected estimators for the tail index. In this paper asymptotic justification of this regression model is given under quite general conditions. Asymptotic results for the resulting estimators of the regression parameters are given. Also, we discuss diagnostic methods for adaptive selection of the threshold when using the Hill (1975) estimator which follow from the ERM approach. We show how the diagnostic presented in Guillou and Hall (2001) is linked to the ERM, while a new proposal is suggested. (This paper is in collaboration with Jan Beirlant, Goedele Dierckx and Catalin Stărică.)

Lothar Heinrich (Universität Augsburg)
Rates of Convergence in Stable Limit Theorems for Power Sums of Continued Fraction Expansions

We present exact uniform rates of convergence of the normalised power sum $S_n^{(\alpha)} = n^{-1/\alpha}\bigl(a_1^{1/\alpha} + \cdots + a_n^{1/\alpha}\bigr)$ towards an α-stable limit law $G_\alpha(\cdot)$ (0 < α ≤ 1) with skewness parameter β = 1, where the non-negative integers a_1, a_2, ... are defined by the continued fraction expansion $a_1(\omega) = [1/\omega]$ and $a_k(\omega) = a_1(T^{k-1}\omega)$ for k ≥ 2 (with the mapping $T\omega = 1/\omega - [1/\omega]$) of a real number ω ∈ (0, 1) which is chosen according to some probability measure P. Under the basic assumption that P possesses a strictly positive, Lipschitz continuous Lebesgue density, we obtain, e. g. for 1/4 < α < 1/2, the estimate
$$ \sup_{x \ge 0} \Bigl| P\bigl( (\log 2)^{1/\alpha} S_n^{(\alpha)} \le x \bigr) - G_\alpha(x) \Bigr| \le \frac{c(\alpha)}{n} . $$
To prove this and the other results we need appropriate techniques to overcome two difficulties: (i) the rv's a_k are (weakly) dependent and (ii) the discrete rv $a_1^{1/\alpha}$ has a rather irregular non-lattice structure. Our approach also works in the more general context of (inhomogeneous) f-expansions, see Heinrich, L. (2001). Rates of convergence for sums of f-expansion digits with stable limit law, Monatshefte f. Math. (submitted).
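The quantities entering the estimate are easy to generate. The sketch below computes continued fraction digits via the Gauss map and evaluates the normalised power sum S_n^(α); taking P to be Lebesgue measure (which has a strictly positive Lipschitz density) and α = 0.4 are illustrative choices, not prescriptions from the abstract.

    import random
    from fractions import Fraction

    def cf_digits(omega, n):
        """First n continued fraction digits a_1, ..., a_n of omega in (0, 1)."""
        digits = []
        for _ in range(n):
            if omega == 0:                 # rational omega: expansion has terminated
                break
            a = int(1 / omega)             # a_k = [1 / omega]
            digits.append(a)
            omega = 1 / omega - a          # Gauss map T(omega) = 1/omega - [1/omega]
        return digits

    def power_sum(omega, n, alpha):
        """Normalised power sum S_n^(alpha) = n^(-1/alpha) * (a_1^(1/alpha) + ... + a_n^(1/alpha))."""
        a = cf_digits(omega, n)
        return sum(ak ** (1.0 / alpha) for ak in a) / len(a) ** (1.0 / alpha)

    # A random dyadic rational with many bits stands in for omega drawn from Lebesgue measure;
    # exact rational arithmetic keeps the digits correct (floats degrade after a few dozen digits).
    random.seed(0)
    omega = Fraction(random.getrandbits(4096), 2 ** 4096)
    print(cf_digits(omega, 10))
    print(power_sum(omega, 2000, alpha=0.4))    # 1/4 < alpha < 1/2, as in the estimate above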

Oleg Klesov (National Technical University of Ukraine, Kiev) The strong law of large numbers for “subsequences” on the plane

We consider the strong law of large numbers for multiple sums of independent identically distributed random variables normalized by nonrandom numbers. The limit is considered in the case where the indices of sums "tend" to infinity and belong to a certain subset. N. Wiener was the first to consider such limit theorems; he treated the case of the "diagonal", that is, the case of equal coordinates. G. Gabriel and A. Gut studied the case of sectors with linear boundaries. Our consideration corresponds to "sectors with non-linear boundaries".

Ulrich Krengel (Universität Göttingen)
Relatives of the Lévy continuity theorem

If distributions F (n) converge weakly to a distribution F , the corresponding characteristic functions converge to the characteristic function of F . But for the converse, one needs a single additional condition, the continuity at 0 of the limit function. In this talk, it shall be shown that the analogous results hold if we consider other characterizations of distributions. We look at the failure rate, the Hardy Littlewood function, the function E(min(X, t)), the Dubins characterization of distributions with finite mean, and several others. In each case, we identify the additional condition needed to derive the convergence of the distributions from the convergence of the functions under consideration. (Joint work with T. P. Hill)

Bero Roos (Universität Hamburg)
Improvements in the Poisson approximation of mixed Poisson distributions

Mixed Poisson distributions are widely used in probability theory and statistics. For instance, the following distributions are mixed Poisson: the negative binomial distribution, the Delaporte distribution in actuarial sciences, the truncated-gamma mixture of Poisson distributions in the context of limited collective risk theory, the Neyman Type A distribution in biology and ecology, the Poisson-Pascal and Pólya-Aeppli distributions in biology, the Poisson-Lindley distribution with applications to errors and accidents. Mixed Poisson distributions can be very involved. This is the case when the distribution of the mixing random variable has a complicated structure. For example, the convolution of mixed Poisson distributions is again mixed Poisson with mixing random variable $X = \sum_{i=1}^{n} X_i$, where we suppose that the X_1, ..., X_n are independent mixing random variables of the corresponding components. Hence, it is often necessary to use an approximation of mixed Poisson distributions. If the mixing random variable is almost constant, it is reasonable to apply a simple Poisson distribution. In the present paper, we consider the approximation of mixed Poisson distributions by Poisson distributions and also by related finite signed measures of higher order. In Pfeifer (1987) and Barbour et al. (1992, pp. 12–13 and 68–69), one can find some results concerning this problem. The most remarkable inequalities came from Barbour et al. (1992, Theorem 1.C, Remark 1.1.2). In particular they used the Stein–Chen method to prove that the total variation distance between the mixed Poisson distribution with mixing random variable X ≥ 0 and the Poisson distribution with mean µ = E(X) is bounded by σ² min{µ⁻¹, 1}, where σ² = Var(X). According to this result, Poisson approximation is good if the variance σ² or the quotient σ²/µ is small. Since, in the Poisson approximation of the Poisson binomial distribution, the Stein–Chen method led to upper bounds of the correct order with sharp constants (see Barbour et al. (1992, Corollary 3.D.1)), one may ask whether the estimate above is best possible. The results of this paper imply that the answer is no. Indeed, we present an upper bound of simple form, which has a better order and contains a sharp constant < 1. It is noteworthy that, in contrast to the bound of Barbour et al., our bound remains finite if σ² = ∞ and E(X ln X) < ∞.

References [1] Barbour, A. D., Holst, L. and Janson, S. (1992). Poisson Approximation. Clarendon Press, Oxford. [2] Pfeifer, D. (1987). On the distance between mixed Poisson and Poisson distributions. Statistics & Decisions 5, 367–379.
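The Stein–Chen bound quoted above is easy to check numerically on a standard example: the negative binomial law is a gamma-mixed Poisson distribution, so its total variation distance to the Poisson law with the same mean can be compared with σ² min{µ⁻¹, 1}. The parameter values in the following sketch are arbitrary illustration values.

    import numpy as np
    from scipy import stats

    # Gamma-mixed Poisson = negative binomial: a standard example of a mixed Poisson law.
    r, p = 20.0, 0.7                       # scipy parameterisation: number of successes r, prob p
    nb = stats.nbinom(r, p)
    mu = nb.mean()                         # mean of the mixing variable, E(X)
    sigma2 = nb.var() - mu                 # Var(X) of the mixing variable: Var(NB) = mu + Var(X)
    po = stats.poisson(mu)

    k = np.arange(0, 200)                  # grid long enough to capture essentially all mass
    tv = 0.5 * np.abs(nb.pmf(k) - po.pmf(k)).sum()

    print("total variation distance               :", tv)
    print("Stein-Chen bound sigma^2 * min(1/mu, 1):", sigma2 * min(1.0 / mu, 1.0))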

Ludger Rüschendorf (Universität Freiburg)
A general limit theorem for recursive combinatorial structures

A general limit theorem is proved for random vectors of recursive nature as they arise as parameters of combinatorial structures such as random trees or recursive algorithms. Based on the recursive structure, this result allows one to infer the complete limit law from the asymptotics of the first two moments. As an application we obtain quite automatically many asymptotic normality results, ranging from the size of tries or m-ary search trees and path lengths in digital structures to mergesort and parameters of random recursive trees, which were previously established by other methods one by one. The proof is based on a variant of the contraction method based on properties of the Zolotarev metric. The lecture is based on joint work with Ralph Neininger.

Hans-Peter Scheffler, Peter Becker-Kern, Mark M. Meerschaert (Universität Dortmund, Universität Dortmund, University of Nevada)
On the distribution of Continuous Time Random Walk limits

In this talk we analyze the limiting distribution P_{M(t)} of a continuous time random walk (CTRW) process (also called a renewal reward process) for infinite mean inter-renewal times and no assumption on the interdependence of the jumps and the waiting times. The corresponding limit theorem is presented in P. Becker-Kern's talk Limit theorems for coupled Continuous Time Random Walks [abstract on page 36].

If f_t(y, s) denotes the joint density of (A(t), D(t)), where A(t) is an operator Lévy motion and D(t) is a β-stable subordinator, then the CTRW limit M(t) has the density
$$ h(y, t) = \int_0^\infty \frac{\partial^{\beta-1}}{\partial t^{\beta-1}} f_u(y, t) \, du , $$
where ∂^{β−1}/∂t^{β−1} denotes the fractional integral of order −1 < β − 1 < 0. Equivalent formulas for the characteristic function of M(t), the corresponding governing pseudo-differential equation for h, as well as some examples are also presented.
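For intuition, a CTRW with infinite-mean waiting times is straightforward to simulate. The sketch below uses an uncoupled walk with Pareto waiting times of index β ∈ (0, 1) and standard normal jumps, which is only a special case of the coupled, operator-normed setting of the talk; it returns X(t) = S_{N_t} at a fixed time.

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_ctrw(t_max, beta=0.7, n_steps=100_000):
        """Value X(t_max) = S_{N(t_max)} of an uncoupled continuous time random walk.

        Waiting times J_i are Pareto with tail index beta in (0, 1) (infinite mean),
        jumps Y_i are standard normal; N(t) counts the renewals up to time t.
        """
        waits = rng.pareto(beta, n_steps) + 1.0      # J_i, heavy-tailed waiting times
        jumps = rng.standard_normal(n_steps)         # Y_i, i.i.d. jumps
        arrival = np.cumsum(waits)                   # renewal epochs T_n = J_1 + ... + J_n
        n_t = np.searchsorted(arrival, t_max)        # N(t_max): number of renewals by t_max
        return jumps[:n_t].sum()                     # X(t_max) = S_{N(t_max)}

    samples = np.array([simulate_ctrw(1e4) for _ in range(2000)])
    print("empirical mean and st.dev. of X(t):", samples.mean(), samples.std())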

Michael Scheutzow (Technische Universität Berlin)
Small ball probabilities and the quantization problem for Gaussian measures

Let µ be a centered Gaussian measure on a separable Banach space E and N a positive integer. We study the asymptotics as N → ∞ of the quantization error, i. e. the infimum over all subsets C of E of cardinality N of the average distance w. r. t. µ to the closest point in the set C. We compare the quantization error with the average distance which is obtained when the set C is chosen by taking N i. i. d. copies of random elements with law µ. Our approach is based on the study of the asymptotics of the measure of a small ball around 0. Under slight conditions on the regular variation of the small ball function, we get upper and lower bounds of the deterministic and random quantization error and are able to show that both are of the same order. Our conditions are typically satisfied in case the Banach space is infinite dimensional. This is joint work with Steffen Dereich, Franz Fehringer and Anis Matoussi.

Alexander Zaigraev (N. Copernicus University Toruń)
Multivariate large deviations with compactly supported distributions and stable exponential families

The basic goal of the communication is to give the strong form of the local limit theorem for large deviations proved in [1]. More precisely, let ξ_1, ξ_2, ... be i. i. d. random vectors in R^d having the distribution P_0. Suppose that for n ≥ n_0 ≥ 1 there exists the bounded density p_n(x) of the sum ξ_1 + ··· + ξ_n and, moreover, the generating function f(s) corresponding to the distribution P_0 is finite in a set S having the origin as its interior point. Then P_0 generates an exponential family of distributions {P_s, s ∈ S}. Let γ(s) and B(s) be the gradient and the Hessian of ln f(s), respectively. As is known, the function γ(s) establishes a one-to-one correspondence between S and γ(S) ⊂ R^d. Let s(x) be the inverse function with respect to γ(s). Denote by F a closed bounded subset of int S. Then from the local limit theorem for large deviations proved in [1] it follows that
$$ \sup_{x \in \gamma(F)} \Bigl| \frac{p_n(nx)}{\psi_n(x)\, \rho^n(x)} - 1 \Bigr| = o(1) , \qquad n \to \infty , $$
where $\rho(x) = \inf_{s \in S} f(s) \exp(-(s, x))$ and $\psi_n(x) = (2\pi n)^{-d/2} (\det B(s(x)))^{-1/2}$.
We consider the case when P_0 is absolutely continuous and compactly supported by a convex set X. The conditions under which the sup can be taken over the set γ(R^d) = X are discussed. Our method is based on establishing a stability property for the exponential family of distributions that is of independent interest. The exponential family of distributions is called stable if all members of the family are of the same type (see [2]).

References
[1] Borovkov, A. A. and Rogozin, B. A. (1965). On the central limit theorem in the higher-dimensional case. Teor. Verojatnost. i Primenen. 10(1), 61–69.
[2] Balkema, A. A., Klüppelberg, C. and Resnick, S. I. (2001). Stability for multivariate exponential families. J. Math. Sci. 106(2), 2777–2790.

Sec. 4. Quality Control, Reliability Theory and Survival Analysis

Organizer: Wolfgang Schmid (Frankfurt/Oder)

Invited Lecture

Marion R. Reynolds, Jr. (Virginia Polytechnic Institute and State University Blacksburg) CUSUM Charts for Detecting Changes in a Proportion

In many quality control applications items being produced are classified as defective or non-defective, and this generates a sequence of Bernoulli observations where the parameter p is the probability that an item is defective. In these applications it is important to detect any increase in p corresponding to lower process quality. With the emphasis today on high quality products, the in-control value, p_0, for p may be close to zero in many situations. When efforts are being made to improve process quality it will also be desirable to detect decreases in p. Traditional Shewhart control charts for monitoring the process are based on grouping items into groups of size n and plotting the number of defectives in each group. In general, Shewhart charts are inefficient for detecting changes in p, and are particularly ineffective when p_0 is very small because the required value of n is very large. In this talk, cumulative sum (CUSUM) control charts based on the individual Bernoulli observations are considered. Exact expressions are derived for the expected number of items required to signal. These expressions provide a method for designing these CUSUM charts to achieve a specified false alarm rate. It is shown that these CUSUM charts are much more effective than Shewhart charts for detecting increases or decreases in p.
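A minimal sketch of an upward Bernoulli CUSUM on individual items is given below: the chart accumulates log-likelihood-ratio increments between the in-control value p_0 and a target out-of-control value p_1, and signals when the cumulative sum exceeds a limit h. The particular values of p_0, p_1 and h are illustrative assumptions, not the design studied in the talk.

    import math
    import random

    def bernoulli_cusum_run_length(p, p0, p1, h, max_items=10**6, seed=0):
        """Number of items observed until the upward Bernoulli CUSUM signals.

        Each item contributes the log-likelihood ratio of p1 against p0;
        the chart (reset at 0 from below) signals when the sum exceeds h.
        """
        rnd = random.Random(seed)
        inc1 = math.log(p1 / p0)                 # increment for a defective item
        inc0 = math.log((1 - p1) / (1 - p0))     # increment for a conforming item
        c = 0.0
        for n in range(1, max_items + 1):
            defective = rnd.random() < p
            c = max(0.0, c + (inc1 if defective else inc0))
            if c > h:
                return n
        return max_items

    # In-control run length (p = p0) versus out-of-control run length (p = 2 * p0),
    # with illustrative values p0 = 0.01, p1 = 0.02 and h = 3.
    print(bernoulli_cusum_run_length(p=0.01, p0=0.01, p1=0.02, h=3.0, seed=1))
    print(bernoulli_cusum_run_length(p=0.02, p0=0.01, p1=0.02, h=3.0, seed=1))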

Contributed Lectures (in alphabetic order)

Doğan Argaç, Joachim Hartung (Universität Dortmund)
Confidence intervals on the among group variance component in an unbalanced and heteroscedastic one-way random effects model

In an unbalanced and heteroscedastic one-way random effects model of analysis of variance, a confidence interval on the among group variance component is proposed, based on a Satterthwaite-Patnaik moment matching approach. Several refinements of the derived basic confidence interval are obtained. By way of simulation, the confidence coefficients and lengths of these confidence intervals are compared. The refined versions of the initial confidence interval give acceptable empirical confidence coefficients and lengths. Finally, a real world example from quality control and engineering is given using data on uranium concentration from an atomic fabrication plant.

Manuel Cabral Morais, António Pacheco (Instituto Superior Técnico Lisboa)
On the Alarm Rates of Quality Control Schemes

The performance of quality control schemes is usually assessed in terms of characteristics of the run length (RL) — the number of samples required for the scheme to trigger a signal. The average run length (ARL) is by far the most popular of those characteristics and has been — thoroughly, and often with overstated claims — used to describe the likely performance of a control scheme.

The hazard rate function of the RL, λ_RL(m) = P(RL = m)/P(RL ≥ m), was proposed by Margavio et al. (1995) as the alarm rate at sample m. It represents the probability of a signal at sample m, given that the previous m − 1 samples were not responsible for triggering a signal, and provides an insightful and conditional snapshot of the scheme performance. We focus on the alarm rate function of several Markov-type control schemes. We also bring the issue of stochastically monotone matrices into focus and investigate their influence on the ageing character of RLs. In addition, we provide numerical and stochastic ordering results concerning some alarm rate functions; the latter results provide a qualitative assessment of the impact of the adoption of a head start. Keywords: SPC; Run Length; Alarm Rates; Stochastic Ordering.

References [1] Margavio, T. M., Conerly, M. D., Woodall, W. H. and Drake, L. G. (1995). Alarm rates for quality control charts. Statistics & Probability Letters 24, 219–224.
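The alarm rate λ_RL(m) defined above is easy to estimate from simulated run lengths. The sketch below does this for a one-sided EWMA chart on i.i.d. standard normal data started at zero, an arbitrary illustrative design rather than one of the schemes in the talk; the low alarm rate at the first few samples reflects the fact that the chart starts at its in-control target value, which is exactly the kind of conditional snapshot discussed above.

    import numpy as np

    rng = np.random.default_rng(3)

    def alarm_rate(run_lengths, m_max=30):
        """Empirical alarm rate lambda_RL(m) = P(RL = m) / P(RL >= m), m = 1, ..., m_max."""
        rl = np.asarray(run_lengths)
        return np.array([(rl == m).sum() / max((rl >= m).sum(), 1) for m in range(1, m_max + 1)])

    def ewma_run_length(lam=0.2, limit=0.8, max_n=10_000):
        """Run length of a one-sided EWMA chart on i.i.d. N(0,1) observations (illustrative design)."""
        z = 0.0
        for n in range(1, max_n + 1):
            z = (1 - lam) * z + lam * rng.standard_normal()
            if z > limit:
                return n
        return max_n

    rls = [ewma_run_length() for _ in range(5000)]
    print(np.round(alarm_rate(rls, m_max=10), 4))   # low alarm rate at the start, then stabilising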

Elart von Collani (University of Würzburg)
Prediction and Tolerance Regions in Quality Control

Prediction procedures constitute in a certain sense the basis of any statistical method. Tolerance regions, which were proposed for use in quality control by Shewhart in 1931 and by S. S. Wilks in 1941, are closely related to prediction regions, but are basically estimation procedures. In this paper the use of tolerance regions in industry is investigated and some further proposals are made.

Erhard Cramer, Udo Kamps (University of Oldenburg) Progressive censoring: model, inference, and experimental design

The scheme of progressive type II censoring is of importance in life-testing experiments. Units may be removed at various stages during the experiment, possibly resulting in a saving of costs and of time. In such an experiment, N identical units are placed on a life-time test. After the i-th failure, r(i) surviving items are randomly withdrawn from the experiment, 1 ≤ i ≤ n. Thus, n failures are observed, and r(1) + ··· + r(n) items are progressively censored; hence N = n + r(1) + ··· + r(n). The withdrawal of elements may be seen as a model describing drop-outs of units due to failures which have causes other than the specific one under study. Progressive censoring schemes are also applied in clinical trials. The drop-outs of patients may be caused, e. g., by personal or ethical decisions, and they are regarded as random withdrawals. In the model of progressive type II censoring with underlying two-parameter exponential distributions, maximum likelihood estimators, uniformly minimum variance unbiased estimators and best linear unbiased estimators are derived for both location and scale parameters. Several properties of these estimators are shown. Finally, the design of experiments and the structure of optimal censoring schemes is discussed for underlying generalized Pareto distributions.
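The sampling scheme itself is easy to simulate. The sketch below generates a progressively type II censored sample from a one-parameter exponential distribution (the two-parameter case of the talk is analogous) and computes the maximum likelihood estimator of the mean from the total time on test; the censoring scheme r is an arbitrary illustration.

    import numpy as np

    rng = np.random.default_rng(4)

    def progressive_type2_sample(lifetimes, r):
        """Observed failure times under progressive type II censoring.

        lifetimes : array of the N = n + sum(r) true lifetimes placed on test
        r         : censoring scheme; r[i] units are withdrawn after the (i+1)-th failure
        """
        alive = list(lifetimes)
        observed = []
        for ri in r:
            i = int(np.argmin(alive))            # next failure among the units still on test
            observed.append(alive.pop(i))
            for _ in range(ri):                  # withdraw ri surviving units at random
                alive.pop(rng.integers(len(alive)))
        return np.array(observed)

    theta = 10.0                                 # true exponential mean
    r = [2, 0, 3, 1, 0, 2, 0, 1]                 # scheme with n = 8 observed failures
    N = len(r) + sum(r)
    x = progressive_type2_sample(rng.exponential(theta, N), r)

    # MLE of the mean: total time on test divided by the number of observed failures;
    # each of the r[i] units withdrawn at the i-th failure contributes the censoring time x[i].
    ttt = np.sum((1 + np.array(r)) * x)
    print("MLE of theta:", ttt / len(r))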

Alessandro Fassò (University of Bergamo)
Non-symmetric multivariate monitoring of time correlated data

Standard multivariate control charts use quadratic forms based on the multivariate Gaussian distribution. These charts have the same power for process shifts along the Gaussian elliptic contours in any direction. In many applications quality deteriorates as one or more measures increase. For example, in braking disk manufacturing, geometrical deformation and disk thickness variation have to be kept small. Other concurrently measured quantities, possibly correlated, may have symmetrical tolerance limits. One-sided multivariate MEWMA control charts have been introduced both for environmental monitoring and industrial quality control (see e. g. Fassò 1998 and 1999). In this paper we extend this approach to cope simultaneously with symmetrical and non-symmetrical specifications. Moreover, two applications are discussed. The first one is based on brake disk production testing. The second one is related to environmental monitoring and is concerned with particulate space-time monitoring and alerting.

References
[1] Fassò, A. (1997). On some control charts for nonlinear ruptures. Italian J. Appl. Statist. 9(1), 123–141.
[2] Fassò, A. (1998). One-sided Multivariate Testing and Environmental Monitoring. Austrian Journal of Statistics 27(1&2), 17–37.
[3] Fassò, A. (1999). One-Sided MEWMA Control Charts. Communications in Statistics: Simulation and Computation 28(2), 381–401.

Dietmar Ferger (Technische Universität Dresden)
Boundary detection based on set-indexed empirical processes

We observe random variables with values in an arbitrary measurable space and indexed by gridpoints of the d-dimensional unit cube. The cube is decomposed into two disjoint "regions" F and G with common boundary B. The random variables have different distributions P or Q, respectively, according as the index lies in F or G. The problem is to detect the unknown boundary B. We derive a completely nonparametric estimator for B, which is defined as a maximizer of a certain set-indexed empirical process. It turns out to be an extension (in many respects) of the estimators of E. Carlstein and C. Krishnamoorthy (1992, 1994) and of L. Dümbgen (1991). Under weaker assumptions we prove upper bounds for the error probabilities of our estimator, which yield significantly better rates of convergence than Carlstein and Krishnamoorthy obtained. They coincide with the rates of Dümbgen, who however only considered the simple one-dimensional case.

Jürgen Franz (Technische Universität Dresden)
Stress-dependent repair models and Bayes parameter estimation

In survival time studies or in operating time considerations of repairable systems we often observe that stress quantities have influence. Let us describe the failure-repair process of repairable systems by a marked point process (τ_n, ξ_n)_{n≥1}, where τ_n ∈ R_+ are time points at which events occur, and the marks ξ_n ∈ E ⊂ R^l contain additional information (stress values, repair times, damage values, ...) recorded at τ_n. Additionally, random censoring effects may be included. The failure intensity corresponding to a counting process is assumed to depend on stress quantities (similarly to covariates in the Cox proportional hazards model). In the contribution, Bayes estimators for certain parametric models are investigated and compared with maximum likelihood estimators. For special cases Γ-minimax (sequential) estimation procedures are derived.

Rainer Göb (Universität Würzburg)
Portfolio Selection Based on an EWMA Performance Indicator and SPC Intervention Methods

Strategies of portfolio selection based on quantitative analysis of market data are discussed in mathematical finance. However, most of the strategies suggested are based on relatively complicated models, and thus they are difficult to implement in investment practice. A simple approach to portfolio selection based on long term estimation of stock returns was discussed by financial analysts in the 1990s. We suggest a more refined and more flexible version of this approach based on an EWMA performance indicator of stocks. An empirical study considering stock data from 1990 until 2000 shows that portfolios selected by this method outperform market indices. Further refinements using additional intervention strategies adopted from statistical process control are discussed. It is interesting to see that so far none of the additional intervention strategies was able to improve upon the performance of the portfolio. Here, further research is necessary, in particular on the classification of possible interventions into a portfolio.
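A minimal sketch of an EWMA performance indicator for stock selection follows: past returns are exponentially smoothed and, at each period, the stocks with the highest indicator are held with equal weights. The smoothing constant, the selection rule and the synthetic data are illustrative assumptions, not the authors' calibration or their data set.

    import numpy as np

    rng = np.random.default_rng(5)

    def ewma_indicator(returns, lam=0.1):
        """EWMA of past returns; one indicator value per stock and per period."""
        ind = np.zeros_like(returns)
        for t in range(1, returns.shape[0]):
            ind[t] = (1 - lam) * ind[t - 1] + lam * returns[t - 1]   # uses information up to t - 1
        return ind

    def select_portfolio(returns, lam=0.1, top=5):
        """Equally weighted portfolio of the `top` stocks with the largest indicator."""
        ind = ewma_indicator(returns, lam)
        port = np.empty(returns.shape[0])
        for t in range(returns.shape[0]):
            held = np.argsort(ind[t])[-top:]
            port[t] = returns[t, held].mean()
        return port

    # Synthetic monthly returns for 20 stocks, a few of which have a higher drift.
    drift = np.concatenate([np.full(15, 0.002), np.full(5, 0.01)])
    rets = drift + 0.05 * rng.standard_normal((120, 20))
    print("portfolio mean return:", select_portfolio(rets).mean())
    print("index mean return    :", rets.mean())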

Peter Hackl (Wirtschaftsuniversität Wien)
Duration of studies and drop-out rates of Austrian students

The analysis of individual study histories at the Wirtschaftsuniversität Wien allows important conclusions about the study behaviour of the students of this university, including the duration of studies and the drop-out rate. Using retrospective analyses and methods of event-history analysis, important information could be extracted from the data of the university administration and from the results of a survey of members of two entry cohorts. Besides descriptive characteristics, for instance on the distribution of the duration of studies or on the academic achievements of drop-outs, the methods used make it possible to identify explanatory factors. Among other things, the effect of school performance in key subjects, of employment alongside the studies, and of academic performance in the first year of study could be shown to be decisive for the chance of a successful degree and for the risk of dropping out. The results have entered into the draft of a system of indicators for assessing teaching and learning activity (performance indicators). They are also used to identify potential for improvement.

Waltraud Kahle, Charles E. Love (Hochschule Magdeburg–Stendal (FH), Simon Fraser University Burnaby) Modelling the Influence of Maintenance Actions

An operating system is observed to undergo failures. On failure, one of three actions is taken: the failure is minimally repaired, or the system is given a minor repair or a major repair. Furthermore, periodically the machine is stopped for either a minor or a major maintenance action. Whether on failure or at a maintenance stoppage, both minor and major repairs are assumed to impact the failure intensity. The issue in this research is to identify not only the virtual aging process associated with repairs but also the form of the failure intensity associated with the system. A series of models appropriate for such an operating/maintenance environment are developed and estimated in order to identify the most appropriate statistical structure. Field data from an industrial setting are used to fit the models.

Bernhard Klar, Alfred Müller (Universität Karlsruhe, Universität Karlsruhe)
The L-class and the M-class of life distributions

Klefsjö (1983) introduced the L-class of life distributions as a class of distributions exhibiting a weak notion of aging. In this talk, we discuss properties of the L-class; in particular, we give an example of a distribution within that class with an infinite third moment and having the property that the hazard rate goes to zero as time approaches infinity. These results lead to serious doubts whether the L-class should be considered as a class of aging distributions. Furthermore, we present a new class of life distributions, called the M-class, the definition of which is similar to that of the L-class. Only the ordering of the Laplace transforms is replaced by the ordering of the moment generating functions. It is shown that the M-class seems to be a more reasonable notion of aging, which does not have the undesirable property of the L-class mentioned above.

Sven Knoth (Europa-Universität Viadrina Frankfurt/O.)
CUSUMs and EWMAs: Favorites and Falsities

CUSUM and EWMA charts are more or less well established instruments in Statistical Process Control. Both schemes belong to a larger class of change point detection schemes. There are different performance measures which should give guidance for choosing the right scheme. But one had, or has, to observe that, on the one hand, the differences between the schemes are small, while, on the other hand, the performance measures are falsely applied. Thus, each scheme and each paper has its own measure. Further, links between CUSUM and

EWMA are constructed which are not valid. The talk will address the problems and give — hopefully — some more insights.

Axel Lehmann (Otto-von-Guericke-University Magdeburg) A degradation based reliability model for repairable items

The reliability of items frequently depends on dynamically changing external and internal covariates which provide additional information to failure time observations. External covariates, such as experimental factors, stress levels, and usage measures, characterize the operating environment of an item. Internal covariates such as degradation data represent the level of deterioration of an item. We present a reliability model that is based on failure and degradation data. An item is regarded as failed when degradation first crosses a critical threshold level or when a censoring traumatic event occurs. To model the influence on failure of an item's dynamic operating environment, the degradation process, the threshold level, and the intensity of traumatic events may depend on possibly time-varying external covariates, for instance on different stress levels. The degradation process is modeled by a univariate process with independent increments with a time scale transformation to cover non-linear degradation behavior. Degradation is related to the external covariates through a random time scale describing slowed or accelerated degradation in real time. For suitably chosen time scales, the resulting degradation-caused failure time possesses, conditionally on the covariates, a bathtub-shaped failure rate. To extend the threshold model to repairable items we use a marked point process approach. An item will be repaired in negligible time upon each failure and possibly preventively at regular inspection times depending on the degradation level. Each repair action sets back the degradation level to a value between its previous level and the level of a new item. The failure-repair process is described by a marked point process with failure and inspection times as events and degradation level, repair level and censoring information as marks. Statistical inference for the model is based on event times and marker measurements. The covariate process is assumed to be continuously observable whereas the degradation process can be observed only at event times. We consider maximum likelihood and semi-parametric estimates of reliability and degradation characteristics in the model.

Yarema Okhrin (Europa-Universität Viadrina Frankfurt/O.)
Tail behaviour of a general family of control charts

Recently statistical process control was extended to deal with dependent data. The problem of interest is the distribution of the run length with respect to the autocorrelation structure of the observed process. In this paper we consider a general control scheme. The control statistic Z_t is equal to an arbitrary weighted sum of the past observations X_t, ..., X_1. This approach covers most of the applied control schemes, for example moving average, EWMA and ARMA(1,1) charts. The process {X_t} is assumed to be a stationary Gaussian process. The aim of the work is to analyze the behaviour of the tail probability of the run length $N = \inf\{t \in \mathbb{N} : Z_t - E(Z_t) > c \sqrt{\mathrm{Var}(Z_t)}\}$ with respect to the autocorrelation of {X_t}. It is shown under which conditions on the weights and on the autocorrelations of {X_t} the correlation between Z_t and Z_{t−i} is a nondecreasing function in the autocorrelations of the observed process. Using this result it can be proved that the probability of a false alarm is a nondecreasing function of the autocorrelations of {X_t}, too. The weight conditions are verified for several well-known charts.

Wolfgang Schmid (Europa-Universität Viadrina Frankfurt/O.)
EWMA charts for monitoring the mean and the autocovariances of stationary Gaussian processes

In this paper simultaneous individual control charts for the mean and the autocovariances of a stationary process are introduced. All control schemes are EWMA (exponentially weighted moving average) charts. A multivariate quality characteristic is considered. It describes the behaviour of the mean and the autocovariances. This quantity is transformed to a one-dimensional variable by using the Mahalanobis distance. The test statistic is obtained by exponentially smoothing these variables. Another control chart is based on a multivariate EWMA approach which is directly applied to our quality characteristic. After that the resulting statistic is transformed to a univariate random variable. Besides modified control charts we consider residual charts. For the residual charts the same procedure is used but the original observations are replaced by the residuals. In an extensive simulation study all control schemes are compared with each other. The target process is assumed to be an ARMA(1,1) process.
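To illustrate the role of autocorrelation in such run-length questions, the sketch below simulates the in-control run length of a simple modified EWMA mean chart applied to an AR(1) target process, a special case of the ARMA(1,1) process considered above. The smoothing constant, control limit and autocorrelation values are illustrative assumptions; the point is only that the false-alarm behaviour changes with the autocorrelation of {X_t}.

    import numpy as np

    rng = np.random.default_rng(6)

    def ar1(n, phi):
        """Stationary AR(1) path with unit innovation variance."""
        x = np.empty(n)
        x[0] = rng.standard_normal() / np.sqrt(1 - phi ** 2)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()
        return x

    def ewma_run_length(x, lam=0.1, c=2.7):
        """First time the EWMA of the observations leaves +/- c times a crude plug-in spread."""
        sd = np.sqrt(lam / (2 - lam)) * x.std()       # spread estimate ignoring the autocorrelation of Z_t
        z = 0.0
        for t, xt in enumerate(x, start=1):
            z = (1 - lam) * z + lam * xt
            if abs(z) > c * sd:
                return t
        return len(x)

    for phi in (0.0, 0.5):
        rls = [ewma_run_length(ar1(5000, phi)) for _ in range(300)]
        print(f"phi = {phi}: average in-control run length ~ {np.mean(rls):.0f}")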

Ansgar Steland (Ruhr-Universität Bochum)
Sequential kernel smoothers under local alternatives, optimality, and applications

In many applications, e. g., in quality control, sequential econometrics, credit risk control, or the natural sciences, one is interested in detecting certain patterns in the mean of a dependent process with the smallest possible delay. We provide an asymptotic framework to capture this feature. Asymptotic results for the normed delay are given. Further, we discuss the problem of optimal kernel choice for a given local alternative. The application of our results is illustrated by real credit risk data.

Winfried Stute (University of Giessen) Nonparametric Analysis of Survival Data Under Reporting Delays

In this paper we discuss an extension of the Kaplan-Meier estimator to a situation where the data are subject to reporting delays. As a main result we establish asymptotic normality of a large class of linear statistics. Generalizations to nonlinear but smooth statistical functionals are obvious. The methodology is applied to analyze the purchase behavior of customers in a panel designed by A. C. Nielsen.

Jürgen Tiedge (Hochschule Magdeburg–Stendal (FH))
A two-dimensional damage model for dependent elements

Damage of a system with two elements is described by a two-dimensional homogeneous process with independent increments. Lifetimes are first-passage times. Asymptotic results in the sense of high reliability are discussed, where the influence of the correlation between the damage processes on system reliability is of special interest.

Lev V. Utkin (Munich University) Interval software reliability models as generalization of probabilistic and fuzzy models

Software error occurrence phenomena have been studied extensively in the literature with the objective of improving software performance. In the last decades various software reliability models have been developed based on testing or debugging processes. All models can be divided into two types: probabilistic software reliability models (PSRMs) and fuzzy or possibilistic software reliability models (FSRMs). In this paper a new type of model is proposed, called imprecise or interval software reliability models (ISRMs), based on applying the theory of imprecise probabilities. It is also proved that PSRMs and FSRMs can be regarded as special cases of ISRMs.

Let X_i be the random time interval between the (i − 1)-th and i-th software failures. PSRMs assume that the variable X_i is governed by a pdf g_i(x) with a vector of parameters θ_i. It is assumed that θ_i = f(i, θ), where f is any function characterizing the software reliability growth. The main aim of the software modelling is to find the function f(i) and its parameters. Let {x_1, ..., x_n} be the successive intervals between failures. Then the likelihood function is of the form $L(X|\theta) = \prod_{i=1}^{n} g_i(x_i)$. FSRMs assume that the variable X_i is governed by a possibility distribution µ_i(x) such that µ_i(a_i) = 1, and the likelihood function is of the form $L(X|\theta) = \min_{i=1,\dots,n} \mu_i(x_i)$. ISRMs assume that there exists a large number of distributions of the random variable X_i and these distributions belong to a set R_i. By assuming that the set R_i is convex, there exist lower $\underline{P}_i(x)$ and upper $\overline{P}_i(x)$ distributions with parameters

θ such that $\underline{P}_i(x) = \min_{R_i} P_i(x)$ and $\overline{P}_i(x) = \max_{R_i} P_i(x)$. It is proved that if the variables X_i, i = 1, ..., n, are independent, then
$$ \max_{\theta} L(X|\theta) = \lim_{\Delta_1 \to 0, \dots, \Delta_n \to 0} \max_{\theta} \prod_{i=1}^{n} \bigl\{ \overline{P}_i(x_i + \Delta_i) - \underline{P}_i(x_i) \bigr\} . $$

If there is no information about the independence of X_i, i = 1, ..., n, then
$$ \max_{\theta} L(X|\theta) = \lim_{\Delta_1 \to 0, \dots, \Delta_n \to 0} \max_{\theta} \min_{i=1,\dots,n} \bigl\{ \overline{P}_i(x_i + \Delta_i) - \underline{P}_i(x_i) \bigr\} . $$

In particular, if $\overline{P}_i(x) = \underline{P}_i(x) = P(x)$ and the X_i, i = 1, ..., n, are independent, then we obtain the likelihood function corresponding to the PSRM. If there is no information about the independence of the times to failure and
$$ \underline{P}_i(x) = \begin{cases} 0, & x \le a_i \\ 1 - \mu_i(x), & x > a_i \end{cases} , \qquad \overline{P}_i(x) = \begin{cases} \mu_i(x), & x \le a_i \\ 1, & x > a_i \end{cases} , $$
then we obtain the likelihood function corresponding to the FSRM. So, the ISRM is a more general model and can be used for a more realistic reliability analysis of software.

Christoph Weigand (Fachhochschule Aachen)
Adaptive Inspection Policy in the Case of Unknown Failure Intensity

A process is considered whose quality deteriorates according to a constant failure intensity λ. As in practice it can be difficult to estimate the true value of λ, the purpose of this paper is to present a strategy which can be applied without knowing λ. In order to maximize the number of conforming items per time unit, perfect inspections and renewals are performed. The length of the inspection interval is described by an arithmetical sequence and changes over time depending on perceived assignable causes. Optimal adaptive control plans provide nearly the same performance as in the case when λ is known.

Rudolf Zaidmann (Magdeburg)
Optimal stop times for processes with restorations

V is a certain system (e. g. technical, economic, biological, etc.). The functioning of V consists of n cycles; one cycle consists of an activity period and a restoration period. The i-th activity period of V, i = 1, ..., n (e. g. its efficacy or inefficacy), is described by processes x_i(t), y_i(t) and z_i(t) = x_i(t)/y_i(t). The i-th restoration period of V is described by restoration values X_{r,i}, Y_{r,i}. The problem is: at which value z_i should z_i(t) be stopped so that the mean value z(t) over the whole time period becomes extreme (maximal or minimal). It is supposed that X_{r,i}, Y_{r,i} are independent of the stop time. It is shown that a particular case of this problem is the problem of optimising the mean value x(t). In the report the following theorem is stated: z_i = z is independent of i and equals the sought extreme mean value z(t); if z(t) is an ergodic stochastic process and n goes to infinity, z converges with probability 1 to a limit, which is a V-invariant. A way to find this limit is indicated.

Sec. 5. Stochastic Analysis

Organizer: Peter Imkeller (Berlin)

Invited Lecture

Wendelin Werner (Université Paris-Sud)
Random planar curves

It is a fairly natural problem to try to understand random long paths in the plane that are defined under a measure that incorporates some geometric constraints. For instance, if one looks at the interface between two macroscopic phases (at the phase transition) in chemical or physical 2d systems, or at level lines of random surfaces, then these random paths are self-avoiding. One way to understand such systems is to start with a discrete model (on a lattice for instance), and see if one can say something when the mesh of the lattice goes to zero. Theoretical physics, using mathematically non-rigorous arguments such as conformal field theory or quantum gravity, was able to predict various striking results that are supposed to describe the behaviour of critical two-dimensional systems. One of their starting points is the fact that in the scaling limit, these systems become invariant under conformal transformations. I will review some of the recent mathematical progress that has been made on this subject. In particular, I will address the following items,

i) Scaling limit of random Peano curves ii) Scaling limits of loop-erased random walks iii) Smirnov’s proof for the scaling limits of critical percolation interfaces and its consequences for critical exponents for percolation iv) Self-avoiding walks, the geometry of the Brownian frontier

This talk will be based on joint work with Greg Lawler and Oded Schramm.

Contributed Lectures (in alphabetic order)

Ludwig Arnold (Universität Bremen)
Periodicity and Sharkovsky's theorem for random dynamical systems

We introduce randomized versions of a deterministic fixed point and, more generally, of a periodic orbit of a mapping. It turns out that there are several non-equivalent notions of periodicity for the iteration of random mappings. We also present a random version of the celebrated Sharkovsky’s theorem stating that for a one-dimensional continuous mapping “period three implies chaos”.

Peter Bank (Humboldt-Universität zu Berlin)
A stochastic representation theorem with applications to optimization and obstacle processes

We study a new kind of representation problem for optional processes with connections to singular control, optimal stopping and dynamic allocation prob- lems. As an application, we show how to solve a variant of Skorohod’s obstacle problem in the context of backward stochastic differential equations. (This is joint work with N. El Karoui.)

Dirk Blömker, Christoph Gugg, Martin Raible (RWTH Aachen, Universität Augsburg, Universität Augsburg)
On thin film growth

We consider the following stochastic partial differential equation arising as a model for surface growth of amorphous thin films (cf. [3] and the references therein)
$$ \partial_t h = -a_1 \partial_x^4 h - a_2 \partial_x^2 h - \bigl(a_3 \partial_x^2 - a_4\bigr) |\partial_x h|^2 + \sigma \xi . $$
h(t, x) denotes the height of the surface in a moving frame at time t > 0 over x ∈ [0, L]. We suppose periodic boundary conditions for simplicity. The noise ξ is Gaussian space-time white or spatially correlated, and it is given as the generalized derivative of some Q-Wiener process. The noise strength σ > 0 and the coefficients a_i > 0 are usually determined from experimental data. In [1] and [2] we apply a spectral Galerkin method and establish a-priori bounds to verify the tightness of the distribution of the approximations as measures on $L^2([0,T], H^1_{\mathrm{per}}(0,L))$. This is used to prove the existence of a (not necessarily unique) martingale solution (or weak solution) of the corresponding mild formulation (i. e., variation of constants formula). Moreover, we address the question whether the spectral Galerkin method is useful for numerical calculations of the mean energy and the mean correlation functions (cf. [2]). This is an important question, as these statistical quantities are used to compare the model with the experiment (cf. [3]).

References
[1] Blömker, D. and Gugg, C. (2002). On the existence of solutions for amorphous molecular beam epitaxy. Journal of Nonlinear Analysis: Series B Real World Applications 3(1), 61–37.
[2] Blömker, D., Gugg, C. and Raible, M. (2001). Thin-film-growth-models: Roughness and correlation functions. Submitted for publication.
[3] Raible, M., Mayr, S. G., Linz, S. J., Moske, M., Hänggi, P. and Samwer, K. (2000). Amorphous thin film growth: Theory compared with experiment. Europhysics Letters 50, 61–67.

Evelyn Buckwar, C. T. H. Baker (Humboldt-Universität zu Berlin, University of Manchester)
Stability in p-th Mean of Stochastic Delay Differential Equations and their Numerical Solutions

Results are presented on the stability of solutions of stochastic delay differential equations with multiplicative noise, and of convergent numerical solutions obtained by a method of Euler-Maruyama type. A basic concept of the stability of a solution of an evolutionary stochastic delay differential equation is concerned with the sensitivity of the solution to perturbations in the initial function. We recall the stability definitions to be considered and show that an inequality of Halanay type (derivable via comparison theory), and deterministic results, can be employed to derive stability conditions for solutions of suitable equations.

In practice, closed form solutions of stochastic delay differential equations are unlikely to be available. In the second part of the talk a stability theory for numerical solutions (solutions of Euler type) is considered and new stability results are obtained using a discrete analogue of the continuous Halanay-type inequality and results for a deterministic recurrence relation.
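A minimal sketch of the Euler-Maruyama scheme for a scalar linear stochastic delay differential equation with multiplicative noise, dX(t) = (a X(t) + b X(t − τ)) dt + σ X(t) dW(t) with a constant initial function, is given below; averaging |X(T)|² over many paths gives a crude Monte Carlo impression of mean-square stability. The test equation and the parameter values are illustrative assumptions, not the ones analysed in the talk.

    import numpy as np

    rng = np.random.default_rng(7)

    def euler_maruyama_sdde(a, b, sigma, tau, T, h, x0=1.0):
        """Euler-Maruyama path of dX = (a X(t) + b X(t - tau)) dt + sigma X(t) dW.

        The initial function is constant, X(t) = x0 for t in [-tau, 0];
        the step size h is assumed to divide the delay tau.
        """
        n_lag = int(round(tau / h))
        n = int(round(T / h))
        x = np.full(n + n_lag + 1, x0)               # indices 0..n_lag hold the initial segment
        for k in range(n_lag, n_lag + n):
            dw = np.sqrt(h) * rng.standard_normal()
            drift = a * x[k] + b * x[k - n_lag]
            x[k + 1] = x[k] + drift * h + sigma * x[k] * dw
        return x[n_lag:]                             # path on [0, T]

    # Monte Carlo estimate of E|X(T)|^2 as a crude indicator of mean-square stability
    # (a = -2, b = 0.5, sigma = 0.5, tau = 1 are illustrative values).
    second_moments = [euler_maruyama_sdde(-2.0, 0.5, 0.5, 1.0, 10.0, 0.01)[-1] ** 2
                      for _ in range(2000)]
    print("estimated E|X(T)|^2:", np.mean(second_moments))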

Hans Crauel (Technische Universität Ilmenau)
Noise-assisted high-gain stabilisation

For a linear control system with multiplicative white noise, we develop (asymp- totic) formulas for the dependence of almost sure and second mean exponential growth rates on a high gain parameter k. We show that if the diffusion matrix is skew-symmetric so that the noise enters in a purely skew-symmetric way then the function g, where g(p)/p denotes the exponential growth rate of the pm-th mean, converges to a straight line, uniformly for p ∈ [0, 2], as k → ∞. This degeneracy in g(p) is a little surprising. We use these formulas to show that the feedback control system in Stratonovich form is high-gain stabilizable even if the zero dynamics are unstable, provided that the noise is strong enough. This contrasts with the noise free case where we need the zero dynamics to be exponentially stable. We then consider a class of systems where the diffu- sion matrix is not skew-symmetric, and show that almost sure and pm-th mean growth rates have different limiting behaviour as k → ∞.

Erika Hausenblas (University Salzburg) Error Analysis for Approximation of Stochastic Differential Equations driven by Poisson Random Measures

Let X_t be the solution of a stochastic differential equation (SDE) with starting point x_0 driven by a Poisson random measure. Additive functionals are of interest in various applications. Nevertheless they are often unknown and can only be found by simulation on computers. We investigate the quality of the Euler approximation. Our main emphasis is on SDEs driven by an α-stable process, 0 < α < 2, where we study the approximation of the Monte Carlo error $E[f(X_T)]$, f belonging to $L^\infty$. Moreover, we treat the case where the time equals T ∧ τ, where τ is the first exit time of some interval.

Samuel Herrmann (Technische Universität Berlin)
Relation between stochastic resonance and jumps in a two-state Markov chain

A particular characterization of stochastic resonance in a double well potential is to determine the law of the residence time of the process in a given well. Our approach to the problem is to study some properties of the jumps of a time-continuous two-state Markov chain with periodic infinitesimal matrix.

Martin Hesse (Universität Bonn)
An Algorithm to solve a Nonlinear Dirichlet Problem

We study the nonlinear Dirichlet problem for maps f : M → N from Euclidean domains M into graphs N. Such a map f will be harmonic if and only if it is a minimizer of an appropriate nonlinear energy or, equivalently, if the N-valued process Yt := f(Xt) is a martingale for each M-valued Brownian motion Xt. In the special case where the target is a so-called spider we will show, that it is possible to approximate the solution of the nonlinear Dirichlet problem by mappings obtained from the solutions of suitable discrete nonlinear Dirichlet problems. Furthermore we will provide an algorithm for numerical simulations and give some examples of visualizations.

Ulrich Hirth (Universität der Bundeswehr München)
Viability of stochastic differential inclusions in Hilbert space

We consider stochastic inclusions of the form
$$ X_t \in X_0 + \int_0^t A(X)_s \, ds + \int_0^t B(X)_s \, dW_s , $$
where A and B are set-valued, and generalise the state space from R^n (as in the paper [1] of Motyl) to a Hilbert space H. We show, under monotonicity-type assumptions on A and B, that there exist solutions which are viable in a random closed convex subset K = K(ω) ⊆ H. Apart from generalising from R^n to H, we have, among others, worked out a proof of Motyl's concise statement that operators represented by Lebesgue and Itô integrals are weakly continuous in L².

References [1] Motyl, Jerzy. (2000). Viable solutions of set-valued stochastic equation. Optimization 48, 157–176.

Reinhard Höpfner (Johannes Gutenberg Universität Mainz)
Nummelin splitting in continuous time via 'accompanying sequences of Harris processes'

Let X = (X_t)_{t≥0} be strong Markov and recurrent in the sense of Harris, with values in a general Polish space. We construct for X an accompanying sequence of Harris processes $\tilde X^m = (\tilde X^m_t)_{t\ge 0}$ with two properties: i) for large m the paths of $\tilde X^m$ lie 'very close' to the paths of X; ii) for every m a recurrent atom $\tilde A^m$ can be constructed in $\tilde X^m$ by Nummelin splitting; the trajectories of $\tilde X^m$ thus decompose, for every m, into i.i.d. life cycles. In the limit m → ∞ this technique allows one to prove limit theorems for martingales and additive functionals of general Harris processes X by falling back on the artificially introduced regenerative structures in the accompanying processes $\tilde X^m$.

Anne Kandler (Technische Universität Chemnitz)
Approximation of weakly correlated processes using moving-average processes

In investigations of differential equations containing random parameters, Monte Carlo simulations are used as an easily applicable tool for deriving approximate distribution characteristics of the solution as well as for the validation of corresponding analytical methods. The application of this method requires an appropriate representation or at least approximation of the random processes involved in the equations in terms of sequences of random variables. The paper considers this problem for the case of stationary and weakly correlated random functions possessing a non-zero but short-range correlation. The approximation idea consists in assigning the values of a time-discrete moving-average (MA) process to points of a properly chosen grid and an interpolation of the values between the grid points. We study the computation of the coefficients of the MA-process for a prescribed correlation function, the goodness of approximation of the resulting interpolated process, the limiting behaviour of the interpolated process if the order of the underlying MA-process tends to infinity, and an extension of this technique to the approximation of random fields. Further, a modified approximation method which uses randomly shifted grid points is considered. This ensures the stationarity of the interpolated process, while in the standard case the interpolated process is periodically distributed but not stationary. Numerical results are presented.

Moritz Kassmann (Universität Bonn)
Green Functions of Stable-Type Processes: An Analytic Approach

We consider non-homogeneous stable processes with so called jumping coeffi- cients. The corresponding generator of such a process is given by an integro- differential operator of order α ∈ (0, 2). In recent years Green functions for operators of this kind have been studied via analytical methods by Z.-Q. Chen, R. Song, K. Bogdan, T. Kulczycki and others. Probabilistic methods have been applied by R. F. Bass and D. A. Levin. In the talk we recall these results and extend them within the general framework of Dirichlet forms and weak Hα-solutions.

Andreas Kunz (Technische Universität München)
Extremes of Multidimensional Stationary Diffusion Processes

Consider a stationary reversible diffusion process on R^n. We are interested in the fine asymptotics of large deviations of the process. By M_T we denote the maximum of the process in an appropriate norm up to time T. The aim is to characterize the tail behavior of M_T for fixed T > 0 as well as its long time behavior. This is related to spectral asymptotics of the generator of the process subjected to Dirichlet boundary conditions on some bounded domain of R^n when the domain grows to R^n. We give results for diffusion processes of gradient field type and present some examples including highly non-symmetric processes.

Hannelore Lisei (Technische Universität Berlin)
Flows and attractors for a stochastic Navier-Stokes equation

We present the flow property, perfect cocycle property and the existence of global random attractors for a stochastic equation of Navier-Stokes type with multiplicative noise.

Andreas Martin (GSF-National Research Center for Environment and Health) Small Ball Asymptotics for Stochastic Wave Equations

Let X be a random variable taking values in a real Banach space (B, ‖ · ‖). The small ball asymptotic is the order of $P\{\|X\| < \varepsilon\}$ for small ε. We will examine small ball asymptotics for processes which are given as solutions of stochastic partial differential equations. In other words, we will analyse the small ball asymptotics for the measurable family $X = \{X(t, x);\ (t, x) \in \mathbb{R}_+ \times \mathbb{R}\}$ which is the weak solution of the stochastic partial differential equation
$$ \frac{\partial^2 X}{\partial t^2}(t, x) - \frac{\partial^2 X}{\partial x^2}(t, x) = g(X(t, x)) + f(t, x) \, dW(t, x) , \qquad (1) $$
with initial conditions (F, µ). The process X is almost surely continuous. Thus if we restrict X to a compact set A ⊂ R_+ × R, we can consider it as a random variable with values in the Banach space of continuous functions on A endowed with the supremum norm (C(A), ‖ · ‖_∞). In the case g = 0 and f = 1 the solution X of (1) is similar to a Brownian sheet. M. Talagrand proved a precise small ball asymptotic for the Brownian sheet regarded as a random variable with values in the Banach space (C([0,1]²), ‖ · ‖_∞). This will be our starting point and as a matter of fact the small ball asymptotic for the solution of (1) is just the same.

Ilya Pavlyukevitch (Technische Universität Berlin)
Stochastic Resonance. Optimal Tuning of Diffusions and Markov Chains.

We consider an overdamped motion of a Brownian particle in a double-well potential in the presence of small periodic forcing. The dynamics of the particle is described by the one-dimensional SDE
$$ \dot X^{\varepsilon,T}_t = -U'\Bigl(X^{\varepsilon,T}_t, \frac{t}{T}\Bigr) + \sqrt{\varepsilon}\, \dot W_t , \quad t \ge 0 , \qquad (1) $$
where ε, T > 0 and $\dot W$ is a white noise. The potential U is periodic in time with period 1, and antisymmetric, i. e. U(x, t) = U(x, t + 1) and U(x, t) = U(−x, t + 1/2) for x ∈ R, t ≥ 0. For T > 0 there exists ε(T) > 0 such that the Brownian particle is located with high probability near the global minimum of U, and since this minimum changes its coordinate periodically in time, the trajectories of X^{ε,T} have a strong spectral component of period T. In other words, X^{ε,T} acquires certain deterministic periodic properties at some 'optimal' noise level. This phenomenon is called stochastic resonance. For small ε the diffusion resembles a two-state process living in the minima of U. Is it possible to find the optimal tuning for the diffusion by studying the appropriate two-state Markov process? We discuss this question from the point of view of the so-called spectral power amplification coefficient, which is one of the measures of periodicity of trajectories.
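Equation (1) is easy to simulate with an Euler scheme, which makes the resonance effect visible: near the 'optimal' noise level the particle hops between the wells roughly once per half-period. The concrete potential U(x, s) = x⁴/4 − x²/2 − A x cos(2πs) and all parameter values below are illustrative assumptions, not necessarily the potential considered in the talk.

    import numpy as np

    rng = np.random.default_rng(8)

    def double_well_path(eps, T, A=0.25, t_max=None, h=0.01):
        """Euler scheme for dX = -U'(X, t/T) dt + sqrt(eps) dW with a modulated double well.

        Illustrative potential: U(x, s) = x^4/4 - x^2/2 - A * x * cos(2*pi*s),
        so that -U'(x, s) = -x^3 + x + A * cos(2*pi*s).
        """
        if t_max is None:
            t_max = 4 * T                              # simulate a few forcing periods
        n = int(t_max / h)
        x = np.empty(n + 1)
        x[0] = -1.0                                    # start in the left well
        for k in range(n):
            t = k * h
            drift = -x[k] ** 3 + x[k] + A * np.cos(2 * np.pi * t / T)
            x[k + 1] = x[k] + drift * h + np.sqrt(eps * h) * rng.standard_normal()
        return x

    # Count transitions between the wells for two noise levels; a noise level tuned to the
    # forcing period produces markedly more regular hopping than a very small one.
    for eps in (0.05, 0.25):
        path = double_well_path(eps, T=100.0)
        crossings = np.sum(np.diff(np.sign(path)) != 0)
        print(f"eps = {eps}: sign changes = {crossings}")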

Max von Renesse (Universität Bonn) Intrinsic coupling on Riemannian manifolds

Kendall (1986) and Cranston (1991) showed that the coupling method for Euclidean diffusion processes can be transferred to the Riemannian case. This yields a stochastic proof of Yau-type estimates for harmonic functions in the presence of lower Ricci curvature bounds. In this talk we present an alternative approach to the construction of the coupling process which involves a central limit theorem for coupled geodesic random walks instead of SDE theory on manifolds. Apart from its simplicity, our construction gives the crucial coupling time estimate regardless of whether the manifold has a non-empty cut-locus or not. Moreover, it can be applied to certain non-smooth spaces, as we indicate by the example of Riemannian polyhedra.

Christian Roth (Martin-Luther-Universität Halle-Wittenberg) Initial-boundary-value problems for hyperbolic stochastic differential equations

We consider the stochastic initial-boundary-value problem

v(t, x) + v(0, x) + a ∫₀ᵗ v_x(s, x) ds + b ∫₀ᵗ v(s, x) dW(s) = 0   (1)

with the initial function

v(0, x) = f(x) ∈ L²[0, ∞)   (2)

and the boundary condition

v(t, 0) = g(t) . (3)

We approximate equation (1) by introducing a piecewise differentiable approximation W^δ of the Wiener process W. For the approximation v_δ(t, x), which is defined by

v_δ(t, x) + v_δ(0, x) + a ∫₀ᵗ v_{x,δ}(s, x) ds + b ∫₀ᵗ v_δ(s, x) dW^δ(s) = 0 ,   (4)

we prove the convergence in probability. By introducing stochastic difference schemes, which converge to the solution of (4) in mean square, we get an approximation for problem (1) in probability.

Michael Röckner (Universität Bielefeld) Singular dissipative stochastic equations on Hilbert spaces

Existence of solutions to martingale problems corresponding to singular dissipative stochastic equations in Hilbert spaces is proved for any initial condition. The solutions for the single starting points form a conservative diffusion process whose transition semigroup is shown to be strong Feller. Uniqueness in a generalized sense is also proved, and a number of applications are presented.

Björn Schmalfuß (FH Merseburg) Stochastic partial differential equations driven by fractional Brownian motion

We consider a stochastic partial differential equation driven by a fractional Brownian motion. At first we show that such a fractional Brownian motion generates a metric dynamical system with stationary but non-independent increments. Hence we can use the theory of random dynamical systems to investigate the qualitative behavior of such a stochastic differential equation. In particular, we construct a random fixed point for a stochastic heat equation. We can conclude that this equation has a unique stationary solution. Using this stationary solution and the well-known pullback technique for random dynamical systems, we can also find stationary solutions for nonlinear partial differential equations with monotone coefficients.

Henri Schurz (Southern Illinois University) Lax-Richtmyer Approximation Principle for Stochastic Processes on Weak Hilbert Spaces H²([0,T]) with Application to SDEs

In deterministic numerical analysis the principle of Lax-Richtmyer is widely accepted as the key principle for the approximation of solutions of differential systems. This principle says that consistency and stability together imply convergence for well-posed initial value problems. We are going to establish a similar principle on the Hilbert space H²([0,T]) of càdlàg F_t-adapted stochastic processes X = (X_t)_{0≤t≤T} with uniformly bounded second moments and values in a domain D ⊂ R^d. The role of consistency, contractivity and stability constants, and of a smoothness parameter of the martingale parts of X for the control of H²([0,T])-error estimates is explicitly seen. To underline its large range of applicability to F_t-adapted stochastic differential systems, we discuss the example of ordinary stochastic differential equations (SDEs) with white noise, showing even new results with respect to rates of convergence on infinite time intervals [0, +∞). This presentation is based on recent works, among them Henri Schurz: “Numerical Analysis of SDEs without Tears”, invited chapter in: Kannan, D. and Lakshmikantham, V. (eds.), Handbook of Stochastic Analysis and Applications, Marcel Dekker, Basel, 2001, 239–361.

Nadejda Sidorova, Oleg G. Smolyanov, Heinrich von Weizsäcker, Olaf Wittich (Universität Kaiserslautern, Moscow State University, Universität Kaiserslautern, GSF-Forschungszentrum für Umwelt und Gesundheit) Surface limits of Brownian motion in tubular neighbourhoods of a Riemannian manifold

We construct two surface measures on the path space C([0, 1], M) in a compact orientable Riemannian manifold M embedded into R^n. The first measure is generated by the n-dimensional Brownian motion with reflection on the boundary of the tubular ε-neighbourhood of M. The second one is constructed by conditioning the usual Wiener measure on C([0, 1], R^n) on the paths which do not leave the tubular ε-neighbourhood of M up to time 1. We prove that in both cases the limits as ε → 0 exist. The limit measure in the first case is just the Wiener measure on C([0, 1], M). In the second case, the limit measure is absolutely continuous with respect to the Wiener measure on C([0, 1], M), and we compute the corresponding density explicitly.

Hans-Jörg Starkloff (Technische Universität Chemnitz) Random differential equations with weakly correlated parameters

In the talk, methods for the determination of characteristics of solutions to random differential equations are presented. Special attention is paid to asymptotic results in the case of input parameter functions possessing the property of short-range dependence. These input parameters are modeled by so-called weakly or ε-correlated random functions or by integral functionals of such functions. The theory of asymptotic expansions for integral functionals of weakly correlated random functions is used to derive approximate characteristics of the random solution functions.

Karl-Theodor Sturm (Universität Bonn) Martingales in Metric Spaces

We develop a nonlinear martingale theory for time-discrete processes (Y_k)_k. These processes are defined on any filtered probability space (Ω, F, F_k, P)_k and have values in a metric space (N, d) of nonpositive curvature (in the sense of A. D. Alexandrov), e. g. in a tree. The defining martingale property for such processes is

E(Y_{k+1} | F_k) = Y_k   P-a.s.,

where the conditional expectation on the left-hand side is defined as the minimizer of the functional Z ↦ E d²(Z, Y_{k+1}) within the space of F_k-measurable maps Z : Ω → N. We give equivalent characterizations of N-valued martingales, derive fundamental properties of these martingales, e. g. a martingale convergence theorem, and we exploit the relation with harmonic maps. It turns out that a map f : M → N is harmonic w. r. t. a given Markov chain (X_k)_k on M if and only if (f(X_k))_k is a martingale on N. Finally, we outline the time-continuous case. Our theory is an extension of the classical linear martingale theory and of the nonlinear theory of martingales with values in manifolds as developed e. g. by Emery (1989) and Kendall (1990). It provides a stochastic approach to the theory of (generalized) harmonic maps with values in such “singular” spaces initiated by Jost (1994) and Korevaar, Schoen (1993).

Michael Voit (Universität Tübingen) Martingale characterizations of Lévy processes

A classical result of P. Lévy states that an a.s. continuous process (B_t)_{t≥0} on R is a Brownian motion if and only if (B_t)_{t≥0} and (B_t² − t)_{t≥0} are martingales. The nowadays preferred proof of the if-part of this very useful fact consists of two parts: First, Itô calculus for continuous semimartingales yields that the processes (exp(y²t/2 + iyB_t))_{t≥0} are martingales for all y ∈ R. An argument using injectivity of the Fourier transform then implies that (B_t)_{t≥0} is a BM. In the talk we generalize these arguments and give martingale characterizations of Lévy processes on matrix groups in terms of finite-dimensional representations of these groups. In particular, for Brownian motions on compact Lie groups we obtain nice characterizations in terms of matrix-valued martingales, even without assuming continuity of the processes.

Matthias Weber (Technische Universität Dresden) The Averaging Principle and Diffusion Processes on Graphs

We consider diffusion processes X_t^ε in R^n governed by differential operators

L^ε = L_1 + εL_2 , where both L_1 and L_2 govern processes too, and ε > 0 is a small parameter. Further, the process X_t governed by L_1 is assumed to have a smooth first integral H:

P_x(H(X_t) = H(x) for all t ≥ 0) = 1 ,   x ∈ R^n.

Typical examples are Hamiltonian systems perturbed by a small noise. Under suitable conditions, the asymptotic behavior of X_t^ε as t → ∞ and ε → 0 can be described by a diffusion process on the edges of a graph that reflects the structure of the level sets of H. We present recent results that are joint work with Mark Freidlin from the University of Maryland, U. S. A.

Ralf Wunderlich (Technische Universität Chemnitz) Finite element approximations of random heat equations

Modeling real-world phenomena such as heat propagation in heterogeneous media leads to parabolic PDEs containing random parameters as coefficients of the differential operator, inhomogeneous terms as well as initial and boundary conditions. Our main interest is the description of the long-term behaviour of solutions for a class of such equations and the computation of stochastic characteristics (e. g. moment functions) for given characteristics of the random processes and fields involved in the model parameters. The PDE is discretized with respect to the spatial variables using finite element techniques. This results in a usually large-scale system of random ODEs. Perturbation as well as dimension reduction techniques are applied to find approximations of the desired stochastic characteristics.
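As an illustration of the pipeline described above (spatial discretization of a random PDE followed by computation of moment functions), here is a minimal Python sketch. It uses a finite-difference semi-discretization as a simple stand-in for finite elements, a single lognormal random conductivity, and Monte Carlo instead of perturbation or dimension reduction techniques; all of these choices are assumptions made only for the example.

    import numpy as np

    def solve_random_heat(n_x=50, t_end=0.1, n_mc=500, seed=0):
        """u_t = kappa * u_xx on [0,1] with u(0,t)=u(1,t)=0, u(x,0)=sin(pi*x),
        kappa a lognormal random variable.  Each draw of kappa gives a system of
        ODEs for the nodal values; mean and variance functions of u(., t_end)
        are estimated by Monte Carlo."""
        rng = np.random.default_rng(seed)
        dx = 1.0 / (n_x + 1)
        x = np.linspace(dx, 1.0 - dx, n_x)
        u0 = np.sin(np.pi * x)
        sols = np.empty((n_mc, n_x))
        for m in range(n_mc):
            kappa = rng.lognormal(mean=0.0, sigma=0.3)
            n_steps = int(np.ceil(t_end * kappa / (0.4 * dx ** 2)))
            dt = t_end / n_steps
            r = kappa * dt / dx ** 2            # <= 0.4 by construction: stable explicit scheme
            u = np.concatenate(([0.0], u0, [0.0]))       # include boundary nodes
            for _ in range(n_steps):
                u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            sols[m] = u[1:-1]
        return x, sols.mean(axis=0), sols.var(axis=0)

    if __name__ == "__main__":
        x, mean_u, var_u = solve_random_heat()
        print("max mean:", mean_u.max(), " max variance:", var_u.max())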

Sec. 6. Spatial Statistics, Stochastic Geometry and Image Processing

Organizer: Hans Wackernagel (Paris)

Invited Lecture

Michael L. Stein, Zhiyi Chi, Leah Welty (University of Chicago) Approximating the Likelihood for Irregularly Observed Gaussian Random Fields

For a Gaussian random field observed irregularly at n sites, calculating a single value of the likelihood generally requires O(n³) computations, which is prohibitive for n much greater than 10,000. In a 1988 paper, A. Vecchia described how to approximate such likelihoods by ordering the observations in some manner, writing the joint density of the observations as the product of the conditional densities of each observation given the previous observations, and then approximating these conditional densities by only conditioning on some small subset of the previous observations. Vecchia suggested conditioning on the nearest m observations among the previous observations, with m fairly small, generally at most 10. Such small conditioning sets were adequate for the models Vecchia considered, but we show they can provide poor approximations for models with dependence over longer ranges. We describe a number of innovations based on this basic approach. First, we show how to modify this method to calculate the restricted likelihood of the parameters of the covariance structure of the random field when the mean is linear in the unknown parameters. Second, we show that the approximation can often be greatly improved by using some distant observations in the conditioning sets. Third, we show how using common conditioning sets for nearby observations can lead to large computational savings when m is not so small. Fourth, we demonstrate how this approximation can be viewed as an example of an estimating function, providing both theoretical insight into the quality of the approximation and more defensible inferences based on it. Finally, we consider implementations of this approximation on both a single processor and a parallel system and present an application to indirect measurements of chlorophyll levels in Lake Michigan.
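A minimal sketch of Vecchia's basic approximation (each observation conditioned only on its m nearest predecessors) may make the idea concrete; the exponential covariance, the ordering by index and all parameter values below are assumptions chosen only for illustration, not part of the talk.

    import numpy as np

    def exp_cov(pts, range_=0.3, sill=1.0):
        """Exponential covariance matrix for a set of 2-d locations."""
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        return sill * np.exp(-d / range_)

    def vecchia_loglik(z, pts, m=5, range_=0.3, sill=1.0):
        """Approximate Gaussian log-likelihood: product of conditional densities,
        each conditioning set restricted to the m nearest previous observations."""
        n = len(z)
        C = exp_cov(pts, range_, sill)
        ll = 0.0
        for i in range(n):
            if i == 0:
                mu, var = 0.0, C[0, 0]
            else:
                prev = np.arange(i)
                d = np.linalg.norm(pts[prev] - pts[i], axis=1)
                S = prev[np.argsort(d)[:m]]              # nearest m predecessors
                K = C[np.ix_(S, S)]
                k = C[S, i]
                w = np.linalg.solve(K, k)
                mu = w @ z[S]
                var = C[i, i] - k @ w
            ll += -0.5 * (np.log(2 * np.pi * var) + (z[i] - mu) ** 2 / var)
        return ll

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pts = rng.random((300, 2))
        C = exp_cov(pts)
        z = np.linalg.cholesky(C) @ rng.standard_normal(300)
        exact = -0.5 * (np.linalg.slogdet(2 * np.pi * C)[1] + z @ np.linalg.solve(C, z))
        print("exact:", exact, " Vecchia (m=5):", vecchia_loglik(z, pts, m=5))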

Contributed Lectures (in alphabetic order)

Rouben V. Ambartzumian (Yerevan State University) Wicksell problem for planar particles of random shape

The classical planar Wicksell problem requires reconstruction of the size probability distribution for random particles (planar convex domains) uniformly and isotropically scattered in the plane, based upon the probability distribution of lengths of intersections with individual particles observed on a test line through the collection of particles. For the classical version, where all particles have the same shape, no stable solution algorithm has been proposed. Things change if not only the size, but also the shape of the particles is allowed to be random. Randomization of shape may lead to a stable solution algorithm (a system of Volterra equations). This result is obtained by application of the Pleijel-type identity for the so-called equitangent subdomains. By now, Pleijel-type identities have proved to be useful in many problems of Stochastic Geometry.

Bartłomiej Błaszczyszyn, François Baccelli (INRIA-ENS & Mathematical Institute, University of Wrocław; INRIA-ENS Paris) Some stochastic geometry models of mobile communication

A common feature of various issues stemming from the analysis of communication networks is the necessity of treating geometrical objects, such as e. g. patterns of antennas, cells (of cellular communication), or broadcasting trees. These objects are often best represented by random processes. Randomness comes at various levels. Besides parameters that are “traditionally” assumed random (such as various noises), the very objects can be random, as e. g. patterns of mobile antennas in ad-hoc networks. Even in cases of fixed objects, as e. g. cellular base station patterns, their shapes and locations are often irregular and thus can be seen as particular realizations (snapshots) of a random process. All this makes stochastic geometry a convenient tool within this new setting.

The goal of the talk is to show some stochastic geometry models of mobile communication. We concentrate on the presently important cellular communication technology CDMA/UMTS, where the application of stochastic geometry has already proven to give applicable results. Of primary interest are macroscopic capacity models, which are especially important for economic planning.

Stephan Böhm, Volker Schmidt (University of Ulm) Palm Representation and Approximation of the Covariance of Random Closed Sets

The covariance C(r), r ≥ 0, of a stationary isotropic random closed set Ξ is typically complicated to evaluate, even for simple models like the Boolean model with isotropic compact convex grains. This is the reason that an exponential approximation formula for C(r) has been widely used in the literature, which matches C(0), C^{(1)}(0), and in many cases also lim_{r→∞} C(r). However, for 0 < r < ∞, the accuracy of this approximation is not very high in general. In this talk, representation formulae are derived for the covariance C(r) and its derivative C^{(1)}(r) using Palm calculus, where r ≥ 0 is arbitrary. As a consequence, an explicit expression is obtained for the second derivative C^{(2)}(0) in terms of quantities which can be estimated from a single realization of Ξ. These results are then used to get a refined exponential approximation for C(r), which additionally matches the second derivative C^{(2)}(0).

Christian Hennig, Bernhard Hausdorf (ETH Zürich, Universität Hamburg) Testing homogeneity of distribution areas of land snails

In the framework of biogeographical modeling of the diversification of species, the question arises whether the distribution areas of different species of a taxon (the European land snails in our study) are significantly clustered. For every species, we have a 0-1-vector of regional occurrences (“distribution area”). We discuss the choice of a null model for the homogeneity of such distribution areas, taking into account the spatial autocorrelation of the occurrences, and a test statistic against clustering, which is used to perform a Monte Carlo test.

Martin Hillebrand (Universität Oldenburg) Nonparametric Regression in Image Analysis: On Corner-preserving M-Kernel Smoothing

In 1998, Chu, Glad, Godtliebsen and Marron proposed a new method for denoising 2-dimensional data. It is based on a robust kernel estimator introduced by Härdle and Gasser (1984) and gives very good results if the noise level is not too high. Although the special feature of the new method is a 2-dimensional one, namely to preserve edges and even sharp corners, they only analysed the 1-dimensional case. In this talk, an approach based on differential geometry will be used to give a suitable definition of the property of an estimator being “corner-preserving” in a 2-dimensional sense. Then it will be shown that the estimator has this property.

Daniel Hug (Universität Freiburg) Contact distributions of geometric Poisson processes

We describe recent results for various extensions of classical contact distributions of Boolean models and Poisson networks. Such random structures provide a sufficiently general framework to cover many situations which arise in practical applications of stochastic geometry. In particular, we pursue the question (inverse problem) to what extent the underlying geometric Poisson processes are determined by generalized contact distributions. Analytically, this leads to the study of injectivity properties of certain integral transforms. (Joint work with Günter Last and Wolfgang Weil.)

Thomas Kämpke (Forschungsinstitut für Anwendungsorientierte Wissensverarbeitung Ulm) Markov fields and superresolution images

Multiple images are often used for stereo imaging, yielding depth information by triangulation. This requires solving the so-called correspondence or registration problem. Here, we take the opposite view and assume that the registration problem is solved.

Several images with known registration are fused into a single image of increased resolution (pixels are smaller than the original size). A deterministic, algebraic fusion approach can be reformulated by directed as well as by undirected Markov fields. This allows for local iteration procedures for image computation. In addition to the Gibbs sampler, which updates one image element per iteration, the update of several pixels per iteration can be facilitated by a ‘macro sampler’. The results are illustrated by landscape images gathered from an autonomous airship, which is one example of an unmanned aerial vehicle (UAV).
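For readers unfamiliar with single-site Gibbs sampling on Markov fields, the following sketch shows the mechanics on a generic Gaussian Markov random field posterior; the denoising model, the parameters and the 4-neighbourhood are assumptions, and this is not the fusion model of the talk.

    import numpy as np

    def gibbs_denoise(y, beta=4.0, sigma=0.2, n_sweeps=50, seed=0):
        """Single-site Gibbs sampler for the posterior of x given y = x + N(0, sigma^2)
        noise under the pairwise prior pi(x) ~ exp(-beta/2 * sum_{i~j} (x_i - x_j)^2).
        The full conditional of each pixel is Gaussian, so every update is exact."""
        rng = np.random.default_rng(seed)
        x = y.copy()
        h, w = y.shape
        for _ in range(n_sweeps):
            for i in range(h):
                for j in range(w):
                    nb = []
                    if i > 0: nb.append(x[i - 1, j])
                    if i < h - 1: nb.append(x[i + 1, j])
                    if j > 0: nb.append(x[i, j - 1])
                    if j < w - 1: nb.append(x[i, j + 1])
                    prec = 1.0 / sigma ** 2 + beta * len(nb)
                    mean = (y[i, j] / sigma ** 2 + beta * sum(nb)) / prec
                    x[i, j] = rng.normal(mean, 1.0 / np.sqrt(prec))
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
        noisy = truth + 0.2 * rng.standard_normal(truth.shape)
        sample = gibbs_denoise(noisy)
        print("noisy RMSE:", np.sqrt(((noisy - truth) ** 2).mean()),
              " posterior-sample RMSE:", np.sqrt(((sample - truth) ** 2).mean()))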

Marie-Colette van Lieshout (Centrum voor Wiskunde en Informatica, Amsterdam) Markov sequential point processes

A simple sequential inhibition point process (SSI) is defined as the output of an algorithm that repeatedly introduces particles at random into a bounded window, discarding those that would overlap a previously introduced particle, until some stopping criterion is satisfied. In this talk, we discuss ‘Markov sequential point processes’, i. e. point patterns that — as SSI — can be imagined to arise as the output of a sequential algorithm and that satisfy a local dependence property. We outline the measure theoretic foundations and establish a factorisation in terms of cliques of interacting particles.
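A minimal sketch of the SSI algorithm itself, with an assumed fixed disc radius and a stopping rule based on consecutive rejections (both chosen here only for illustration):

    import numpy as np

    def ssi(radius=0.03, window=1.0, max_failures=2000, seed=0):
        """Simple sequential inhibition: propose uniform centres in the window and keep
        a proposal only if its disc does not overlap any disc accepted so far; stop
        after a fixed number of consecutive failures."""
        rng = np.random.default_rng(seed)
        centres = []
        failures = 0
        while failures < max_failures:
            p = rng.random(2) * window
            if all(np.hypot(*(p - q)) >= 2 * radius for q in centres):
                centres.append(p)
                failures = 0
            else:
                failures += 1
        return np.array(centres)

    if __name__ == "__main__":
        pts = ssi()
        covered = len(pts) * np.pi * 0.03 ** 2    # area fraction occupied by the discs
        print(len(pts), "discs accepted, covering about", round(covered, 3), "of the window")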

Roland Maier, Johannes Mayer, Volker Schmidt (University of Ulm) Fast sampling of the typical cell of stationary Poisson-type tessellations in R^d

The simulation of the typical cell of several stationary random tessellations in R^d of Poisson type is considered. A representation of a polytope containing the origin as a monotone sequence of convex bodies is used to construct a stopping rule for radial simulation of the zero cell. This provides a method for perfect simulation of the typical cell, for example, for stationary Poisson hyperplane and Poisson-Voronoi tessellations. In comparison to related algorithms published in the literature, our algorithm works for an arbitrary dimension d ≥ 2, whereas the algorithm proposed by George [1] is tailored to the planar case only. Furthermore, our algorithm is faster than the radial simulation procedure proposed in Quine and Watson [2] in the sense that, for example, for d = 2 we need on average 12.7 lines to fully determine a sample of the typical cell of a stationary Poisson-Voronoi tessellation, whereas the algorithm proposed in [2] requires between 15 and 20 lines. Besides this, using Campbell’s theorem, our algorithm can also be applied to fast sampling of the typical cell of iterated tessellations, where each cell of an initial tessellation is further subdivided into smaller cells by so-called component tessellations; see [3]. As an example, it will be shown how this extended simulation algorithm can be used to sample the typical cell of an iterated tessellation with a stationary Poisson-Voronoi tessellation as initial tessellation, which is iterated by stationary Poisson hyperplane tessellations.

References

[1] George, E. I. (1987). Sampling random polygons. Journal of Applied Probability 24, 557–573.

[2] Quine, M. P. and Watson, D. F. (1984). Radial generation of n-dimensional Poisson process. Journal of Applied Probability 21, 548–557.

[3] Maier, R. and Schmidt, V. (2001). Stationary iterated tessellations. Preprint, University of Ulm.
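For orientation, a minimal sketch of radial generation of a planar Poisson process in the spirit of [2]; the intensity, the number of points and the restriction to the planar case are assumptions for the example.

    import numpy as np

    def radial_poisson(intensity=100.0, n_points=50, seed=0):
        """Generate the n_points closest points (to the origin) of a stationary planar
        Poisson process, in order of increasing distance: the areas pi*(r_k^2 - r_{k-1}^2)
        between consecutive radii are i.i.d. Exp(intensity)."""
        rng = np.random.default_rng(seed)
        areas = rng.exponential(1.0 / intensity, size=n_points)
        radii = np.sqrt(np.cumsum(areas) / np.pi)
        angles = rng.uniform(0.0, 2 * np.pi, size=n_points)
        return np.column_stack((radii * np.cos(angles), radii * np.sin(angles)))

    if __name__ == "__main__":
        pts = radial_poisson()
        r_max = np.linalg.norm(pts, axis=1).max()
        # sanity check: lambda * pi * r_max^2 should be close to the number of points
        print("points:", len(pts), " lambda*pi*r_max^2 =", 100.0 * np.pi * r_max ** 2)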

Rudolf Mathar, Daniel Catrein (RWTH Aachen) Interference Calculation for CDMA Mobile Networks Using Stochastic Geometrical Models

The capacity of CDMA (code division multiple access) mobile networks is interference limited. Hence, an important problem in the design and optimization of such networks is to determine the amount of interference radiated to a base transmitter station (BTS) from other cells under varying traffic load. Solving this problem opens a way to describe the so-called cell breathing effect analytically. In this presentation, we assume a distance-dependent power law for radio wave attenuation and, moreover, log-normal shadow fading. A second source of randomness arises from a planar Poisson point process, which models the positions of mobiles. Typical features of CDMA systems like power control and macroscopic diversity selection are also taken into account, making the full model rather complicated.

As a main result we obtain an analytical formula for the total received power at an arbitrary base station, which allows for segregating the usable signal power from pure interference. A decomposition of the corresponding domain is applied to reduce the complexity of numerical integration and to avoid instabilities.
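A brute-force Monte Carlo version of the total received power, ignoring power control and macroscopic diversity, may help fix ideas; the path-loss exponent, shadowing variance and window size below are assumptions, and the talk derives an analytical formula instead of simulating.

    import numpy as np

    def total_received_power(intensity=5.0, window=10.0, p_tx=1.0, gamma=3.5,
                             sigma_db=8.0, n_mc=10000, seed=0):
        """Mean total power received at a base station at the origin from mobiles that
        form a Poisson process in a square window, with power-law path loss r^(-gamma)
        and log-normal shadowing (standard deviation sigma_db in dB)."""
        rng = np.random.default_rng(seed)
        ln_scale = np.log(10.0) / 10.0               # convert dB to a natural-log factor
        totals = np.empty(n_mc)
        for k in range(n_mc):
            n = rng.poisson(intensity * window ** 2)
            xy = (rng.random((n, 2)) - 0.5) * window
            r = np.maximum(np.linalg.norm(xy, axis=1), 0.01)   # avoid the singularity at 0
            shadow = np.exp(ln_scale * rng.normal(0.0, sigma_db, size=n))
            totals[k] = np.sum(p_tx * shadow * r ** (-gamma))
        return totals.mean(), totals.std()

    if __name__ == "__main__":
        mean_p, sd_p = total_received_power()
        print("mean total received power:", mean_p, " empirical sd:", sd_p)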

Torsten Mattfeldt (University of Ulm) Texture analysis using first-order parameters and stochastic-geometric functions

Introduction. In the material sciences and in biology, it is often attempted to characterize binary structures geometrically. Such structures arise usually as partitions of multiphase structures into a single interesting component and the remainder, which is considered as background. Often the units constituting the phase of interest are denoted as grains, and the background is called pore space. Images of binary structures can be obtained as planar samples from a truly two-dimensional process, but can also be sampled from two-dimensional sections of a binary structure that is three-dimensional in reality. Stereological examples for binary structures in material science are porous media, powders, and cermets, whereas many biological tissues are also mainly built up of two phases, e. g. of epithelium and stroma. Using elementary stereological tools, first-order properties of the grains, such as volume fraction and surface area per unit volume, can be easily estimated. Moreover, the texture of the grains can be characterized by stochastic-geometric functions. The pair correlation function characterizes the spatial texture of the grains in terms of clustering and repulsion as a function of distance. The centred contact density function shows in which manner the texture differs from the Boolean model of isotropic random convex sets.

Materials and methods. Using the aforementioned methods, it is possible to characterize materials or tissues by a battery of first-order parameters and stochastic-geometric functions. Whether classification of binary textures is more accurate on the basis of first-order parameters or using stochastic-geometric functions is not known in general. In the present investigation, we used estimates of these functions as well as of first-order properties for the classification of three sets of biological tissue specimens. Recently we presented an improved, distance-based estimator of the pair correlation function of random closed sets. This estimator will be also presented in the lecture and illustrated by simulations (T. Mattfeldt, D. Stoyan: J. Microsc. 200, 158–173 (2000)). The case series consisted of a number of benign and malignant alterations of various organs (mammary gland, prostate, pancreas); in some cases genetic data were available. Geometrically, these samples can be considered essentially as stationary and isotropic random closed sets. Classifications were performed by linear discriminant analysis and by artificial neural networks. Specifically, learning vector quantization and multilayer feedforward networks with backpropagation were applied as neural paradigms (T. Mattfeldt et al.: J. Microsc. 198, 143–158 (2000)).

Results and conclusions. In the group of mammary lesions the accuracy of classification was very high regardless of the selection of input variables. In the group of prostatic lesions, useful classifications were only achieved when first-order parameters were included as input variables. In the set of pancreatic lesions, however, classification on the basis of first-order parameters was distinctly inferior to classification on the basis of stochastic-geometric functions. The new estimator of the pair correlation function was preferable to the usual estimator in terms of bias and variance, both in simulations and in the application to real data. Using learning vector quantization, higher classification accuracies could often be obtained than by multilayer feedforward networks with backpropagation. Summarizing, the question whether classification is better on the basis of first-order parameters or by using stochastic-geometric functions cannot be answered globally — the optimal selection of input variables depends on the specific application.

Werner Nagel, Viola Weiss (Friedrich-Schiller-Universität Jena, Fachhochschule Jena) Limits of sequences of stationary planar tessellations, generated by superposition and nesting

In Stochastic Geometry, in particular regarding random tessellations, the generation of new models as a result of an operation applied to well known models is a standard method. For random tessellations the most important operations are superposition and nesting (also referred to as iteration). The superposition of two tessellations means the superposition of the edges of the cells of both tessellations. This generates a new tessellation where the cells are intersections of pairs of cells of the original tessellations. The iteration of tessellations is a more sophisticated operation. It means that one tessellation is chosen as a ‘frame’ tessellation. The single cells of this ‘frame’ tessellation are consecutively and independently subdivided by cut-outs of tessellations of an i. i. d. sequence of tessellations. Thus different cells of the ‘frame’ tessellation are intersected with different realisations of other tessellations. In the lecture, limits of sequences are investigated which are generated by successive repetitions of superposition or of iteration of i. i. d. tessellations, respectively — with an appropriate normalisation such that the limiting tessellations are not degenerate. Regarding the superposition of tessellations it is shown that these sequences converge to Poisson line tessellations. For iterations the notion of stability of a distribution is adapted and necessary conditions are formulated for those tessellations which may occur as limits of such sequences.

Norbert Patzschke (Friedrich-Schiller-Universität Jena) Tangent measure distributions of self-conformal measures

Tangent measure distributions are introduced as a means to describe the local behavior of fractal measures. They are defined by magnifying the measure around a point. We show that there is a unique tangent measure distribution for almost all points of a self-conformal measure.

Sergiy Prokopenko (Technische Universität München) Inference for binary regression data with spatial and cluster effects

Binary response data with spatial covariate information occur for example in medical, biological, socio-economic and mobility investigations. Such a data structure was also observed in a mobility study for the city of Munich. The goal was, besides identifying spatial effects, to discover factors which determine the mode of mobility, such as the use of the public transport system or the use of a car. In addition to spatial effects, random effects modeling person or household effects have to be taken into consideration. One well-known approach for modeling spatial effects is Markov Random Fields (MRF) (see for example

Besag and Green, 1993, JRSSB). Pettitt (2001, preprint) introduced a special family of MRFs. As usual, Markov Chain Monte Carlo (MCMC) methods are needed for parameter estimation. One advantage of the Pettitt model is that it allows fast and efficient MCMC updates of the parameters. We present a modification of Pettitt’s model which has the desirable property of containing the intrinsic MRF in the limit. Our modification still allows for fast and efficient parameter updates. We show the behavior of this MCMC algorithm in a simulation study designed after the Munich mobility study with spatial and random effects.

Vadim V. Scherbakov, I. S. Molchanov, S. A. Zuyev (University of Glasgow, University of Glasgow, University of Strathclyde) Coverage problem for the whole space

We consider the following coverage problem. Let Π_λ = {x_1, x_2, ...} be the stationary Poisson point process in R^d with intensity λ, and let B(x, r), x ∈ R^d, r > 0, be the open ball centered at the point x with radius r. Consider a Boolean model of the following type

Ξ = ⋃_{x_i ∈ Π_λ} B(x_i, η_{x_i}),

where η_x, x ∈ R^d, is a random field with independent values such that the distribution of η_x depends only on the distance |x| between the points x and 0. Our coverage problem is given by the following question: when does the model Ξ cover the whole space with positive probability,

P(R^d ⊆ Ξ) > 0 ?

We give the complete answer to this question. We found a critical asymptotic behavior of the field values η_x as x → ∞ separating two different classes of the considered Boolean models. If the values of the field η_x increase more rapidly than this critical rate as x → ∞, then Ξ covers the whole space with positive probability. In the opposite case, when the growth of the field values is majorized from above by the critical one, the Boolean model covers the whole space with probability 0.
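The mechanics of the model can be illustrated by a small simulation; the deterministic radius function (a degenerate special case of the independent random field η), the planar case and the grid-based coverage check are assumptions made only for the example.

    import numpy as np

    def covered_fraction(rate_fn, lam=1.0, half_width=15.0, grid=150, margin=5.0, seed=0):
        """Simulate a planar Boolean model with centre-dependent radii eta_x = rate_fn(|x|)
        and return the fraction of a square observation window covered, checked on a grid."""
        rng = np.random.default_rng(seed)
        half = half_width + margin               # also simulate centres outside the window
        n = rng.poisson(lam * (2 * half) ** 2)
        centres = (rng.random((n, 2)) - 0.5) * 2 * half
        radii = rate_fn(np.linalg.norm(centres, axis=1))
        xs = np.linspace(-half_width, half_width, grid)
        xx, yy = np.meshgrid(xs, xs)
        covered = np.zeros_like(xx, dtype=bool)
        for (cx, cy), r in zip(centres, radii):
            covered |= (xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2
        return covered.mean()

    if __name__ == "__main__":
        growing = lambda d: 2.0 + 0.3 * np.sqrt(np.log1p(d))      # radii growing with |x|
        small = lambda d: np.full_like(d, 0.3, dtype=float)       # small constant radii
        print("growing radii, covered fraction:", covered_fraction(growing))
        print("constant small radii, covered fraction:", covered_fraction(small))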

Martin Schlather (Universität Bayreuth) A dependence measure for spatial extreme values

The extremal coefficient is a dependence measure for multivariate extreme values. The concept can be extended to the extremal coefficient function ϑ(h), which characterises stationary max-stable random fields. Since ϑ − 1 is a conditionally negative definite function, ϑ may be regarded as a characteristic analogous to the variogram for stationary random fields with not necessarily finite variances. We present the definition, some properties and estimators of ϑ, and give an application to a real data set.

Dominic Schuhmacher (Universität Zürich) Distance estimates for spatial Poisson process approximations

We consider a point process ξ on R² which satisfies certain mild conditions — essentially an orderliness condition with respect to the first coordinate and a mixing condition with respect to the second coordinate — and subject it to a transformation θ_T which stretches the first coordinate by some factor ψ(T) and compresses the second coordinate by T. The main objective of this talk is to study the proximity of the law of the transformed process ξθ_T^{−1} to a Poisson process law in terms of a Wasserstein distance. Useful upper bounds for this distance are derived by application of Stein’s method to discretized versions of the point processes. These estimates can further be applied to obtain results about the accuracy of the kernel estimation procedure for the expectation measure density of ξ.

Eugene Spodarev (Universität Ulm) Roses of neighborhood for stationary processes of flats

Consider a stochastic process of lines Φ in R³ which is, roughly speaking, a random ensemble of at most countably many lines in the three-dimensional space. It can serve as a model for many technical phenomena such as collections of fibers in textiles and biological tissues, etc. Suppose the process to be stationary, i. e., its distribution is invariant with respect to all translations in R³. An important characteristic of such processes is the so-called rose of directions θ, i. e., the directional distribution of the “typical” line of the process. The main problem here is to compute θ from empirical data (which are often obtained by intersecting the observed pattern with a test plane η). Thus, θ can be computed from the rose of intersections T θ, where (T θ)(η) is the mean number of intersection points of Φ with the test flat η lying within a unit observation window in η (cf. [1], [3]). The subject of this talk is to provide analogous results for the case when test lines η are used instead of test planes. This means that there are almost surely no intersections of Φ and η. Thus, the notion of the rose of intersections is irrelevant in this case. Instead, we introduce the new notion of the rose of neighborhood Nθ which helps us to solve the above problem by reducing it to the known results for the rose of intersections. In detail, consider all lines of Φ that intersect the unit cylinder with axis in η. Mark all points on η that minimize the distance to these lines. They form a stationary point process on η whose intensity (i. e., the mean number of points per unit length) is the value (Nθ)(η) of the rose of neighborhood at η. Surprisingly, Nθ is equal to the rose of intersections of the specially constructed dual process of planes Φ⊥ with test planes η⊥. The above method can be used for arbitrary dimension k of the “lines” of the process Φ in R^d and dimension r of the “test line” η with k + r < d (in the above example d = 3, k = r = 1).

References

[1] Mecke, J. and Nagel, W. (1980). Stationäre räumliche Faserprozesse und ihre Schnittzahlrosen. Elektron. Informationsverarb. Kybernet. 16, 475–483.

[2] Spodarev, E. (2000). Isoperimetric problems and roses of intersections for stationary flat processes. In: Jenaer Schriften zur Mathematik und Informatik Math/Inf/00/31. Friedrich-Schiller University Jena. Submitted to Math. Nachr.

[3] Spodarev, E. (2001). On the rose of intersections for stationary flat processes. Adv. Appl. Probab. 33, 584–599.

Hans Wackernagel (Ecole des Mines de Paris) Geostatistics with station data and numerical model output

The geostatistical approach contributes actively to the spreading of stochastic techniques in problems of natural science or engineering with spatially correlated data. In the field of environmental science, with expensive monitoring points at a few locations and cheap numerical model output available over the whole spatial domain, the question of how to merge both types of data in order to estimate quantities of interest is a crucial one. The talk, intended as an overview, will discuss geostatistical formulations of a few normalization and data assimilation problems in the areas of air pollution and water quality monitoring.

Sec. 7. Stochastic Methods in Biometry, Genetics and Bioinformatics

Organizer: Iris Pigeot-Kübler (Bremen)

Invited Lecture

Nuala A. Sheehan (University of Leicester) A Graphical Modelling Approach to Complex Applications in Genetics

Analyses of genetic data on groups of related individuals, or pedigrees, frequently require the calculation of probabilities and likelihoods. Exact computational methods such as peeling (Cannings et al. 1978) become intractable when either the pedigree or the genetic model under consideration is too complex. The fact that a pedigree can be represented as a graph naturally leads to an exploitation of graphical models (Lauritzen 1996) for these applications in genetics. The idea behind these models is to reduce a complex problem into small manageable components, thereby facilitating understanding of the computational issues involved and informing the development of more efficient algorithms. In particular, under the usual assumptions of the genetic model, a pedigree problem can be represented as a directed acyclic graph for which the local Markov property (Lauritzen et al. 1990) is satisfied. Such structures are sometimes called Bayesian Networks (Jensen 1996). Algorithms for performing calculations on general Bayesian networks which fully exploit all the conditional independence structures of the problem (e. g. Lauritzen & Spiegelhalter, 1988) can then be applied. These algorithms are essentially the same as the peeling method but are a little more efficient computationally. They all break down when the relevant graph has too many interconnecting undirected cycles, or loops, and the probabilities and likelihoods of interest must then be estimated either by Markov chain Monte Carlo (MCMC) methods (Hastings 1970, Thompson 2001) or by simplifying some aspects of the problem. However, MCMC methods have not really been tested extensively on these large problems and tend to be viewed with some suspicion in practice, due to the unreliability of the resulting estimates.

An application to a very simple quantitative trait locus mapping problem will be discussed by way of illustration, but the methods are entirely general and are relevant to all areas of genetic application. In particular, it will be shown how different graphical representations for the same problem have different structural and inferential properties and present quite different computational challenges. It will be argued that the formal setting of these genetics problems into a more general framework creates the potential to provide a flexible modelling environment which should enable efficient development and testing of methods for tackling a wide range of complex problems in this area.

References

[1] Cannings, C., Thompson, E. A. and Skolnick, M. (1978). Probability functions on complex pedigrees. Advances in Applied Probability 10, 26–61.

[2] Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97–109.

[3] Jensen, F. V. (1996). An introduction to Bayesian networks. UCL Press.

[4] Lauritzen, S. L. (1996). Graphical Models. Clarendon Press, Oxford UK.

[5] Lauritzen, S. L. and Spiegelhalter, D. J. (1988). Local computations with probabilities on graphical structures and their applications to expert systems. Journal of the Royal Statistical Society, Ser. B 50, 157–224.

[6] Lauritzen, S. L., Dawid, A. P., Larsen, B. N. and Leimer, H. G. (1990). Independence properties of directed Markov fields. Networks 20, 491–505.

[7] Thompson, E. A. (2001). Monte Carlo methods on genetic structures. In: Barndorff-Nielsen, O. E., Cox, D. R. and Klüppelberg, C. (eds.) Complex Stochastic Systems, 176–218.

Contributed Lectures (in alphabetic order)

Helmut Finner, Klaus Strassburger (Deutsches Diabetes-Forschungsinstitut Düsseldorf) The Partitioning Principle: A Powerful Tool in Multiple Decision Theory

A first general principle, and nowadays state of the art, for the construction of powerful multiple test procedures controlling a multiple level α is the so-called closure principle. In this talk we introduce another powerful tool for the construction of multiple decision procedures, especially for the construction of multiple test procedures and selection procedures. This tool is based on a partition of the parameter space and will be called the partitioning principle (PP). We discuss various variants of the PP, namely a general PP (GPP), a weak PP (WPP) and a strong PP (SPP). It will be shown that — depending on the underlying decision problem — a PP may lead to more powerful test procedures than a formal application of the closure principle (FCP). Moreover, the more complex SPP may be more powerful than the WPP. Based on a duality between testing and selecting, PPs can also be applied for the construction of more powerful selection procedures. FCP, WPP and SPP will be applied and compared in some examples.

References

[1] Finner, H. and Strassburger, K. (2002). The partitioning principle: A powerful tool in multiple decision theory. The Annals of Statistics, to appear.

Roland Fried, Ursula Gather (University of Dortmund) PCA and Graphical Models for Monitoring Vital Signs

In intensive care, the vital signs of critically ill patients are reported in short time intervals. Systematic changes in these data have to be detected quickly and distinguished from clinically irrelevant short-term fluctuations and measurement artifacts. In view of the high dimension of the data, a statistical method for dimension reduction like dynamic principal component analysis

(PCA) is useful as it allows one to find those directions within a high-dimensional space which capture most of the variability in the observed dynamic data. An important problem is the suitable choice of the number of principal components. Further practical problems arise from the fact that principal components are usually difficult to interpret and they do not necessarily match the course of any single variable well. We relate graphical models for multivariate time series to dynamic principal component analysis and show how the former can be used to enhance the results of dimension reduction. Graphical models provide information on the number of principal components needed to retain the essential information on all variables, and they also allow us to find blocks of highly correlated variables. The results are illustrated by applications to real and simulated data.
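A common way to carry out dynamic PCA is to apply ordinary PCA to lag-augmented observation vectors. The following sketch, with an assumed lag order and simulated data and not necessarily the authors' exact procedure, shows this.

    import numpy as np

    def dynamic_pca(X, n_lags=2):
        """Dynamic PCA of a multivariate time series: augment each observation with
        n_lags lagged copies of all variables, standardise, and compute ordinary
        principal components of the augmented vectors via the SVD."""
        T, p = X.shape
        rows = [X[l:T - n_lags + l] for l in range(n_lags + 1)]
        Z = np.hstack(rows)                      # (T - n_lags) x p*(n_lags + 1)
        Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        explained = s ** 2 / np.sum(s ** 2)
        scores = U * s                           # principal component scores
        return explained, scores, Vt

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(500)
        common = np.sin(2 * np.pi * t / 50)      # one slow latent "vital sign" factor
        X = np.column_stack([common + 0.3 * rng.standard_normal(500) for _ in range(5)])
        expl, scores, _ = dynamic_pca(X, n_lags=2)
        print("variance explained by the first three components:", np.round(expl[:3], 3))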

Göran Kauermann (University of Glasgow) Gene Classification using Penalised Mixture Models

We consider microarray experiments for gene classification in which the objective is to distinguish between active and non-active genes. We assume that expression levels for a large number of genes are observed and that the microarray data considered stem from two different groups of tissues or individuals. For instance, gene expressions are recorded for a number of patients suffering from a particular disease, which are compared with expressions from non-diseased patients. The focus of interest is to classify those genes with different levels of expression in the two groups. This resembles the scenario investigated in Dudoit, Fridlyand and Speed (200) or Pan, Lin and Le (2001) or Efron, Tibshirani, Storey & Tusher (2001). The question of classification is tackled by assuming a multivariate mixture model. In particular, the gene effect is modelled as a random effect arising from a two-component mixture distribution. If the gene is not active, the gene effect is assumed to be zero, while for active genes the effect is penalised by assuming a normally distributed random effect. We show how this modelling approach gives a special form of shrinkage, where weak effects are basically shrunk to zero and strong effects are left unchanged. The mixture approach then allows one to obtain an “activity probability” for each gene, that is, the posterior probability that the gene is differentially expressed given the data of the experiment. The model is extended by introducing individual random effects and slide effects, respectively. Estimation is carried out by backfitting Laplace approximations. This proves to be numerically handy and fast at the same time. Besides the modelling approach we also discuss possible inference arguments which are based on bootstrapping. Special emphasis is put on the calculation of global thresholds or significance levels to avoid problems occurring with multiple testing.
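A simplified stand-in for such a penalised mixture is a two-component scale mixture with a point mass at zero for the gene effect. The EM sketch below (with the per-gene noise level sigma0 assumed known and simulated data, both assumptions made only for illustration) returns the posterior activity probabilities described above.

    import numpy as np

    def em_activity(x, sigma0=1.0, n_iter=200):
        """EM for per-gene statistics x under the mixture
        non-active: x ~ N(0, sigma0^2)        (effect exactly zero)
        active:     x ~ N(0, sigma0^2 + tau^2) (normal random effect).
        Returns the activity proportion p, tau^2 and the posterior activity probabilities."""
        def normal_pdf(z, var):
            return np.exp(-0.5 * z ** 2 / var) / np.sqrt(2 * np.pi * var)

        p, tau2 = 0.1, 1.0
        for _ in range(n_iter):
            num = p * normal_pdf(x, sigma0 ** 2 + tau2)
            den = num + (1 - p) * normal_pdf(x, sigma0 ** 2)
            r = num / den                                  # E-step: responsibilities
            p = r.mean()                                   # M-step
            tau2 = max(np.sum(r * x ** 2) / np.sum(r) - sigma0 ** 2, 1e-8)
        return p, tau2, r

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_genes, true_p, true_tau = 5000, 0.05, 3.0
        active = rng.random(n_genes) < true_p
        effects = np.where(active, rng.normal(0, true_tau, n_genes), 0.0)
        x = effects + rng.standard_normal(n_genes)         # sigma0 = 1
        p_hat, tau2_hat, post = em_activity(x)
        print("estimated activity proportion:", round(p_hat, 3),
              " estimated tau^2:", round(tau2_hat, 2),
              " mean posterior for truly active genes:", round(post[active].mean(), 2))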

S. Kropf, J. Läuter, M. Eszlinger, K. Krohn (Otto von Guericke University Magdeburg, Otto von Guericke University Magdeburg, University of Leipzig, University of Leipzig) Statistical Analysis of Gene Expression Array Data Based on Spherically Distributed Scores

The striking peculiarity of statistical analyses of gene expression array data is the extremely large number of variables (investigated genes) for only small samples. As a consequence, univariate tests for the complete set of genes have to take into consideration the problem of multiple testing with a large number of included hypotheses. Multivariate classification techniques run into the risks connected with overfitting, and conventional multivariate tests have to cope with singular empirical covariance matrices. In the nineties, Läuter and co-workers developed parametric score-based tests (cf., e. g., Läuter, Glimm and Kropf, 1998). The high-dimensional data are compressed into univariate or low-dimensional scores using data-dependent weight vectors. The scores can be interpreted in a similar way as in factor analysis. Furthermore, procedures for variable selection, multiple testing and model choice can be carried out. Despite the pre-processing, the tests with the scores exactly maintain the type I error level. The scores can also be used for classification procedures. As the scores are calculated without utilizing any knowledge on the membership of sample elements to subgroups, the same scores can be applied in discriminant procedures for different subdivisions. The procedures are demonstrated in the analysis of cDNA array data from patients with nodules in the thyroids. This includes the detection of over- and underexpressed genes as well as the support of diagnostics. Proposals are given for algebraic considerations reducing the size of the included matrices, which is crucial for the processing of such extremely high-dimensional data.

References

[1] Läuter, J., Glimm, E. and Kropf, S. (1998). Multivariate tests based on left-spherically distributed linear scores. Annals of Statistics 26, 1972–1988. Erratum: Annals of Statistics 27, 1441.

Mei-Ling Ting Lee, George Alex Whitmore (Brigham and Women’s Hospital, Harvard Medical School, Harvard School of Public Health; McGill University) Power and sample size for DNA microarray studies

In this paper, we discuss conceptual issues and present computational methods for statistical power and sample size in microarray studies, taking account of the simultaneous inferences that are generic to these studies. The discussion encompasses choices of experimental design and replication for a study. Practical examples are used to demonstrate the methods. The examples show forcefully that replication of a microarray experiment can yield large increases in statistical power. Our analytical approach avoids the use of the observed mean square error and, hence, makes no use of t or F statistics at the level of the individual gene. The paper refers to cDNA arrays in the discussion and illustrations but the proposed methodology is equally applicable to expression data from oligonucleotide arrays.
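In the same spirit, an elementary power calculation for one gene under a Bonferroni-type adjustment for the number of simultaneous tests already shows the effect of replication; the z-test approximation and all numbers below are assumptions for illustration, not the authors' method.

    from statistics import NormalDist

    def per_gene_power(delta, sigma, n_per_group, n_genes, alpha=0.05):
        """Power of a two-sided two-sample z-test for one gene when the per-gene
        significance level is Bonferroni-adjusted for n_genes simultaneous tests.
        delta: true log-expression difference, sigma: per-array standard deviation,
        n_per_group: number of replicate arrays per group."""
        nd = NormalDist()
        alpha_gene = alpha / n_genes                         # simple multiplicity adjustment
        z_crit = nd.inv_cdf(1 - alpha_gene / 2)
        ncp = delta / (sigma * (2.0 / n_per_group) ** 0.5)   # noncentrality of the z statistic
        return nd.cdf(-z_crit + ncp) + nd.cdf(-z_crit - ncp)

    if __name__ == "__main__":
        for n in (2, 4, 8, 16):
            print(n, "replicates per group -> power",
                  round(per_gene_power(delta=1.0, sigma=0.5, n_per_group=n, n_genes=10000), 3))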

Martin Möhle (University of Mainz) A Markov Chain Monte Carlo Algorithm for the Ewens Sampling Distribution and Applications to Neutrality Tests

The Ewens sampling distribution q is one of the fundamental contributions in population genetics to describe the behavior of allele frequencies. Even for moderate sample sizes the state space S of all allele configurations is quite large. Hence it is usually time consuming to calculate the integral E(f) = ∫_S f dq numerically for a given function f on S. An efficient Metropolis-Hastings algorithm for the Ewens sampling probabilities of allele configurations is provided to approximate any functional E(f). The standard deviation is discussed and approximate confidence intervals are provided. The method is applied to test the hypothesis of selective neutrality using the homozygosity test and the Ewens-Watterson-Slatkin neutrality test.
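A minimal Metropolis-type sampler targeting the Ewens distribution (on labelled allele configurations, with a symmetric single-individual reassignment proposal) can be written in a few lines; this sketch is only an illustration of the idea and not necessarily the algorithm of the talk. The recorded functional is the sample homozygosity used in the neutrality tests mentioned above.

    import math
    import random
    from collections import Counter

    def log_weight(sizes, theta):
        """Unnormalised log-probability of a labelled partition under the Ewens
        distribution: theta^(#blocks) * prod over blocks (|block| - 1)!."""
        return len(sizes) * math.log(theta) + sum(math.lgamma(s) for s in sizes.values())

    def mh_ewens(n=30, theta=1.5, n_steps=30000, seed=0):
        """Pick an individual uniformly and move it into one of the blocks of the
        remaining partition or into a new block; this proposal is symmetric, so the
        acceptance probability is just the ratio of the target weights."""
        rng = random.Random(seed)
        labels = list(range(n))                  # start from n singleton blocks
        sizes = Counter(labels)
        f_values = []
        for _ in range(n_steps):
            i = rng.randrange(n)
            old_block = labels[i]
            sizes[old_block] -= 1
            if sizes[old_block] == 0:
                del sizes[old_block]
            new_label = max(sizes.keys(), default=-1) + 1
            new_block = rng.choice(list(sizes.keys()) + [new_label])
            old_sizes = sizes.copy(); old_sizes[old_block] += 1
            new_sizes = sizes.copy(); new_sizes[new_block] += 1
            if math.log(rng.random()) < log_weight(new_sizes, theta) - log_weight(old_sizes, theta):
                labels[i] = new_block
                sizes = new_sizes
            else:
                sizes = old_sizes
            f_values.append(sum((s / n) ** 2 for s in sizes.values()))   # homozygosity
        return f_values

    if __name__ == "__main__":
        f = mh_ewens()
        burn = len(f) // 5
        print("estimated E[homozygosity] under neutrality:", sum(f[burn:]) / len(f[burn:]))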

Marcus Reich (Universität Hannover) On the waiting time for patterns in Markov chains

We consider the distribution of the first occurrence of a given pattern in a random sequence of letters. This topic has received a lot of attention in recent years due to its importance in DNA sequencing. Some classical results by Gerber and Li are generalized from the i. i. d. to the Markov chain case. Mainly standard Markov chain methods are used, especially the fundamental matrix method, and a so-called “letter counting process” is introduced.
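For the i.i.d. baseline case, the fundamental matrix method mentioned above reduces to solving one linear system for the prefix automaton of the pattern. The sketch below handles only this baseline (the Markov chain generalization of the talk would additionally keep track of the last letter); the letters, probabilities and patterns are assumptions for the example.

    import numpy as np

    def expected_waiting_time(pattern, probs):
        """Expected waiting time until `pattern` first appears in an i.i.d. letter
        sequence, via the fundamental matrix N = (I - Q)^{-1} of the absorbing
        prefix-automaton Markov chain.  probs: dict letter -> probability."""
        m = len(pattern)

        def next_state(state, letter):
            # longest prefix of `pattern` that is a suffix of pattern[:state] + letter
            s = pattern[:state] + letter
            for k in range(min(m, len(s)), -1, -1):
                if s.endswith(pattern[:k]):
                    return k
            return 0

        Q = np.zeros((m, m))                     # transitions among transient states 0..m-1
        for state in range(m):
            for letter, p in probs.items():
                nxt = next_state(state, letter)
                if nxt < m:
                    Q[state, nxt] += p
        t = np.linalg.solve(np.eye(m) - Q, np.ones(m))   # expected steps to absorption
        return t[0]

    if __name__ == "__main__":
        fair_coin = {"H": 0.5, "T": 0.5}
        print("HH :", expected_waiting_time("HH", fair_coin))   # classical value 6
        print("HT :", expected_waiting_time("HT", fair_coin))   # classical value 4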

Konrad Wälder (Technische Universität Bergakademie Freiberg) Modelling the fruit and root dispersion of forest trees

In order to explore forest dynamics and regeneration processes of forests, it is of interest to study the fruit and root dispersion of trees. At first, the single-tree case is discussed. It is assumed that the fruits of a tree form a Poisson point process. The intensity of this point process depends on the fruit probability density and the number of fruits. On the basis of such an approach the Fisher information matrix is deduced. The design problem is then discussed for different optimization criteria and distribution models of the dispersion. The results are extended to anisotropic point processes. In the case of several trees a cluster point process is assumed. Especially for root data it is of interest to model the interaction between the individual trees. The most often used aggregation operators are weighted arithmetic means, but they are not appropriate for the aggregation of interacting criteria. With the (discrete) Choquet integral a general aggregation operator is available. It is shown how, by the choice of a suitable fuzzy measure, interaction in regeneration processes of forests can be modelled. The integration of this model into the experimental design problem is presented.
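A minimal sketch of the discrete Choquet integral as an aggregation operator; the criteria names and the sub-additive fuzzy measure below are purely hypothetical and serve only to illustrate how interaction between criteria can be expressed.

    def choquet(values, mu):
        """Discrete Choquet integral of non-negative scores `values` (dict criterion -> value)
        with respect to a fuzzy measure `mu` (dict frozenset of criteria -> weight, monotone,
        mu(empty set) = 0, mu(all criteria) = 1):
        C = sum_i (x_(i) - x_(i-1)) * mu({criteria with value >= x_(i)})."""
        items = sorted(values.items(), key=lambda kv: kv[1])     # ascending by value
        total, prev = 0.0, 0.0
        for idx, (crit, val) in enumerate(items):
            coalition = frozenset(c for c, _ in items[idx:])     # criteria with value >= val
            total += (val - prev) * mu[coalition]
            prev = val
        return total

    if __name__ == "__main__":
        # two interacting (partly substitutable) site criteria, hypothetical names
        mu = {frozenset(): 0.0,
              frozenset({"light"}): 0.7,
              frozenset({"water"}): 0.7,
              frozenset({"light", "water"}): 1.0}
        print(choquet({"light": 0.9, "water": 0.2}, mu))   # 0.2*1.0 + 0.7*0.7 = 0.69
        print(choquet({"light": 0.6, "water": 0.6}, mu))   # 0.6*1.0 = 0.6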

George Alex Whitmore, Mei-Ling Ting Lee (McGill University; Brigham and Women’s Hospital, Harvard Medical School, Harvard School of Public Health) A model relating quality of life to latent health status and survival

QOL assessments measure a subject’s state of health and well-being in a global sense or with reference to particular domains of function, symptoms and living. In this research, we define a QOL process for a subject as a continuous-time stochastic process that is periodically assessed by means of a survey instrument. The QOL process is assumed to have three components: a survival component that is correlated with the survival time of the subject, a palliative component that reflects a subject’s comfort, freedom from pain and other aspects of well-being that are not correlated with survival, and a noise component that represents a combination of measurement errors and extraneous effects. We present a statistical model that can be used to distinguish between the survival component and the combined palliative and noise components. Our model also provides a way of incorporating markers of health status in the analysis and evaluating their importance. As a framework for health status and survival, we adopt the model described in Lee, DeGruttola and Schoenfeld (2000), which itself is an extension of one described in Whitmore, Crowder and Lawless (1998). We use the model structure of Lee et al. but, initially, do not use their distributional assumptions. Their model assumes that health status and related health markers follow a joint stochastic process. The markers are assumed to be observable whereas the health status process is assumed to be latent or unobservable. The primary endpoint, which we take to be death, is assumed to be triggered when this latent process first crosses a failure threshold level. Inferences for the model are based on censored survival data and marker measurements. Covariates, such as treatment variables, risk factors and other baseline variables, are related to the model parameters through generalized linear regression functions. We build on a suggestion by Cox (1999) and interpret the Lee et al. model as a joint process for QOL, latent health status and, possibly, markers of health status. The part of the QOL process that is correlated with the latent health status process forms the survival component of QOL. The proposed model provides a rich conceptual framework for the study of QOL issues and offers a flexible and tractable methodology for associated statistical inferences. The model and methods are illustrated by a small case demonstration.

Andreas Wienke, Kaare Christensen, Axel Skytthe, Anatoli I. Yashin (Max Planck Institute for Demographic Research Rostock) A genetic analysis of cause-specific mortality data of Danish twins

Multivariate survival data arise for example when lifetimes of related individuals are observed. A new approach to combining information about cause of death and age at death in a multivariate survival model is presented, where heterogeneity induces dispersion in an individual’s risk of experiencing an event as well as associations between survival times. Additionally, dependence between different causes of death is included in the model and can be tested. This method is based on an extension of the bivariate correlated gamma-frailty model, which allows one to overcome the identifiability problems in the competing risk model of univariate lifetimes. The class of multivariate distributions presented is characterized by the association parameters, using arbitrary marginal distributions. The multivariate distribution is specified in full by the association and variance parameters and the marginal distribution functions. The new model is applied to cause-specific mortality data from Danish twins to analyse the influence of genetic and environmental factors on frailty. Five genetic models are applied to mortality data of 4200 female Danish twin pairs with focus on coronary heart disease (CHD). Using the best fitting biometric model, the heritability of frailty to mortality from CHD was 0.39 (0.13). Comparing this figure with the result of a former analysis in a restricted model with independent competing risks, it turns out that the heritability of frailty on mortality due to CHD changes substantially. Despite the inclusion of dependence, the analysis confirms the significant genetic component to an individual’s risk of mortality from CHD. Whether dependence or independence is assumed, the best model for analysis with regard to CHD mortality risks is an AE model, implying that additive factors are responsible for heritability in susceptibility to CHD. The paper ends with a discussion of limitations and possible further extensions to the model presented.

References

[1] Wienke, A., Christensen, K., Holm, N. and Yashin, A. (2000). Heritability of death from respiratory diseases: an analysis of Danish twin survival data using a correlated frailty model. In: Medical Infobahn for Europe. A. Hasman et al. (Eds.), IOS Press, Amsterdam, 407–411.

[2] Wienke, A., Holm, N., Skytthe, A. and Yashin, A. (2001). The heritability of mortality due to heart diseases: a correlated frailty model applied to Danish twins. Twin Research 4, 266–274.

[3] Yashin, A. I. and Iachine, I. A. (1995). Genetic analysis of durations: Correlated frailty model applied to survival of Danish twins. Genetic Epidemiology 12, 529–538.

Herbert Ziezold (Universit¨at Kassel) Statistical Shape Analysis in Biology

A shape with $k$ landmarks in $m$-dimensional Euclidean space $\mathbb{R}^m$ is the equivalence class $[x]$ of a $k$-ad $x = (x_1, \dots, x_k)$ of points $x_i \in \mathbb{R}^m$ with respect to similarity transformations $x \mapsto \alpha B x + b$ of points, $\alpha > 0$, $b \in \mathbb{R}^m$, $B \in SO(m)$. With the Procrustes distance $\rho$ we get Kendall’s shape space $\Sigma^k_m$. In investigations of biological shapes the points $x_i$ are typical points, i.e. ‘landmarks’, of the biological objects under consideration. The statistical theory of shapes has been developing over the last three decades. The books [1] to [3] present the state of research concerning the topological and differential geometric structure of shape spaces as well as their statistical analysis. In this talk we confine ourselves to a non-parametric discrimination test allowing classifications of biological objects by their shapes, to an estimation of missing landmarks and to a test of independence of landmarks.
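A minimal numerical sketch (not from the talk; the landmark configurations below are hypothetical) of how the full Procrustes distance between two centred, unit-size configurations can be computed via the singular value decomposition:

import numpy as np

def procrustes_distance(x, y):
    """Full Procrustes distance between two k x m landmark configurations."""
    # Remove location: centre both configurations at the origin.
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    # Remove scale: normalise to unit centroid size.
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    # Optimal rotation via the singular values of x^T y.
    # (Restricting to SO(m), i.e. excluding reflections, would require
    # adjusting the sign of the smallest singular value.)
    _, s, _ = np.linalg.svd(x.T @ y)
    return np.sqrt(max(0.0, 1.0 - s.sum() ** 2))

# Two hypothetical triangles (k = 3 landmarks in m = 2 dimensions).
a = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
b = np.array([[0.1, 0.0], [1.1, 0.1], [0.4, 0.9]])
print(procrustes_distance(a, b))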

References

[1] Dryden, I. L. and Mardia, K. V. (1998). Statistical Shape Analysis. Wiley, Chichester.

[2] Kendall, D. G., Barden, D., Carne, T. K. and Le, H. (1999). Shape and Shape Theory. Wiley, Chichester.

[3] Small, C. G. (1996). The Statistical Theory of Shape. Springer-Verlag, New York.

Sec. 8. Stochastic Models in Biology and Physics

Organizer: Anton Wakolbinger (Frankfurt/Main)

Invited Lecture

Alison Etheridge (University of Oxford) Evolution in spatially continuous populations

Many biological populations evolve in continuous two-dimensional space. A natural starting point for modelling such organisms is the branching Brownian motion or its diffusion approximation, super-Brownian motion. However, both these models predict either unbounded population growth or ultimate extinction. Worse, if not extinct, at large times the processes form ‘clumps’ of arbitrarily large density and extent. The traditional course is to suppose that the population is subdivided into discrete ‘demes’, each of a fixed size. However, for populations evolving in continua, this restriction to discrete demes is unnatural, and the fixed size of the demes may disguise important effects arising from random fluctuations in the local population density. We discuss some of the mathematical challenges arising from our attempts to model evolution for spatially continuous populations and report some modest progress in meeting those challenges.

Contributed Lectures (in alphabetic order)

Stefan Adams (Technische Universit¨at Berlin) About large deviations for the field of gradients and their thermodynamic properties

In our talk we consider continuous spin lattice models with massless interactions (the so-called anharmonic crystals). These Gibbsian models emerge in various branches of physics and mathematics, with particular frequency in quantum field theory. However, our attention is mostly devoted to interfaces, of which a massless field is an effective modelization, the spin $\varphi_x$ at site $x \in \mathbb{Z}^d$ denoting the distance of an interface. The formal Hamiltonian $H(\varphi)$ is given by the sum of $V(\varphi_x - \varphi_y)$ over nearest neighbours $x$ and $y$, for a given convex function $V$. The Hamiltonian depends only on the increments of the field $\varphi$. Therefore we consider the field of gradients itself, i.e.

\[ \eta^i_x = (\nabla_i \varphi)_x = \varphi_{x+e_i} - \varphi_x =: \eta_b\,, \qquad i = 1, \dots, d\,, \; x \in \mathbb{Z}^d\,, \; b = (x, x+e_i)\,. \]
As a vector field, $\eta$ has zero curl. This means $\sum_{b \in C} \eta_b = 0$ for every closed loop $C$ and its nearest-neighbour bonds $b$, and therefore the field of gradients exhibits long-range dependence. We show the large deviation property for the empirical field for a given Gaussian measure, where we start with the vector field $\eta \in (\mathbb{R}^d)^{\mathbb{Z}^d}$, i.e. also with non-zero curl, and condition the independent Gaussian measure on the right $\sigma$-algebras. With this result we get the droplet construction (Wulff construction) for a strictly convex interaction with some bounded continuous perturbation. More precisely, it enables us to treat the large deviations for the linear profile $X_N : D \to \mathbb{R}$ for a bounded domain $D \subset \mathbb{R}^d$, defined by $X_N(\theta) = \frac{1}{N}\,\varphi_{[\theta N]}$, for the strictly convex $V$ disturbed by a bounded continuous interaction.

Sigurd Assing (University of Edinburgh) A new view of a particle system approach to Burgers equation

We consider the formal Burgers equation

\[ u_t + \lambda u u_x = \nu\, u_{xx} + \eta_x(t, x)\,, \qquad t \ge 0\,, \; x \in \mathbb{R}\,, \]
where $\eta_x(t, x)$ denotes the space-time white noise with covariance
\[ \langle \eta(t, x)\, \eta(t', x') \rangle = \gamma\, \delta(t - t')\, \delta(x - x')\,, \qquad \gamma > 0 \text{ a constant.} \]
The expected invariant measure for this equation is the Gaussian measure $\mu$ with characteristic functional
\[ \int \exp\{ i \phi(l) \}\, \mu(d\phi) = \exp\Bigl\{ -\tfrac{\gamma}{4\nu}\, \| l \|^2_{L^2(\mathbb{R})} \Bigr\}\,, \qquad l \in \mathcal{S}(\mathbb{R})\,, \]
which only carries singular distributions in $\mathcal{S}'(\mathbb{R})$. As a consequence, the non-linearity $\lambda u u_x$ cannot be defined by applying the usual product.

On the other hand, L. Bertini and G. Giacomin could associate a possible solution process to this kind of equation, given by the hydrodynamical scaling limit of a weakly asymmetric exclusion process. Though they could not find a meaningful version of a limiting equation, their limit process turned out to be invariant with respect to the above Gaussian measure. Of course, the difficulty was to find a meaningful expression for the non-linearity. We now identify the limiting equation’s non-linearity as a slightly perturbed Wick product
\[ \lambda\, \frac{\partial}{\partial x}\, ( u \diamond_{\mu} u ) \]
on the state space. The corresponding equation has to be defined in terms of a generalized martingale problem.

Ellen Baake (Universit¨at Greifswald) Mutation-selection models: A large deviations approach

In joint work with Hans-Otto Georgii, we reconsider the multi-type branching process that describes mutation and reproduction of (haploid) individuals in continuous time. For the single-step mutation model, we analyze large deviations of the empirical measure of the backward process. The rate function for the empirical mean genotype is given by a variational formula which can be solved explicitly and has already shown up before in this context (see the abstract by Oliver Redner below).

Michael Baake (Universit¨at Greifswald) Aspects of recombination

A mutation–recombination model in continuous time is considered, with independent mutation events at the sites of the genetic sequence and single crossover events between sequences of equal length. It admits a closed solution in the infinite population size limit and also an extension to a model with selection of additive type across sites (joint work with Ellen Baake). Furthermore, first results on the dynamics of recombination for sequences of unequal length are presented (joint work with Oliver Redner).

Jochen Blath, Peter M¨orters (Universit¨at Kaiserslautern, University of Bath) Thick points of super-Brownian motion

We determine the dimension spectrum of thick points for the state of a super-Brownian motion in dimension d ≥ 3. Our result involves a constant which can be characterized in terms of the upper tails of the associated equilibrium Palm distribution. The proof relies on a localization phenomenon in supercritical dimensions and percolation methods to deal with subfractals of Brownian level sets.

Jean-Dominique Deuschel (Technische Universit¨at Berlin) Dynamical entropic repulsion

We study the effect of a hard-wall repulsion on the dynamics of an effective interface model given in terms of locally interacting stochastic differential equations indexed by the d-dimensional lattice. Depending on the dimension, we show repulsion at height of order $\log t$ for $d > 2$ and $(\log t)^2$ for $d = 2$.

Jochen Geiger, G¨otz Kersting, Vladimir A. Vatutin (Universit¨at Kaiserslautern, Universit¨at Frankfurt/M., Steklov Mathematical Institute) Limit theorems for branching processes in random environment

In a branching process in random environment particles have generation-dependent offspring distributions which vary randomly in time. Conditioned on the environment, particles reproduce independently of each other. The asymptotic behavior of branching processes in random environment differs essentially from the longtime behavior of classical Galton-Watson branching processes. In this talk we will explain some of the new phenomena exhibited and present limit theorems for the case where the environment is a sequence of i. i. d. random variables.

Barbara Gentz, Nils Berglund

(Weierstrass Institute for Applied Analysis and Stochastics, Berlin; ETH Zürich) Concentration of sample paths in stochastic slow–fast systems

We discuss the effect of noise on dynamical systems involving two time scales. These systems are described by singularly perturbed stochastic differential equations.

In the deterministic case, solutions are known to track so-called slow manifolds, consisting of equilibrium points of the frozen system. Assuming sufficiently small noise intensity, we show that on a time scale of order 1, typical sample paths remain in metastable equilibrium near the deterministic solution with the same initial condition. The probability of exceptional paths is exponentially small in the small parameters characterizing the system such as noise intensity and adiabatic parameter.

We shall conclude by discussing classes of systems which allow for concentration results on longer time scales.

Hans-Otto Georgii, Valentin Zagrebnov

(Universit¨at M¨unchen, Universit´e de la M´editerran´ee and Centre de Physique Th´eorique)

Entropy-driven phase transitions in multitype lattice gas models

Multitype lattice gas models with hard-core repulsion between particles of different types are considered. Their characteristic feature is a competition between the entropy of types and the positional energy and geometry resulting from the exclusion rule and the activity of particles. In various cases it is shown how this phenomenon gives rise to the existence of first-order phase transitions with coexistence of ordered and disordered phases at some critical activity.

Steffen Grossmann, Benjamin Yakir (Johann Wolfgang Goethe-University Frankfurt/Main, Hebrew University Jerusalem) Large deviations for optimal local sequence alignments

When comparing two long DNA or amino acid sequences one might ask whether there are subpieces of the sequences which match each other well. A more realistic way of comparing the sequences is to also allow gaps, which means that one can shift parts of the subpieces against each other. Any possible way to compare any two subpieces using gaps is called a local alignment of the sequences. For each such alignment a score is defined which increases in the similarity of the two aligned subpieces. Algorithms are available which find the optimal one among all alignments in reasonable time. The score of this optimal alignment is considered to be a good measure of the relatedness of the whole sequences. From a statistical point of view it is important to understand the distribution of this optimal score for random unrelated sequences. Up to now this has only been solved in the gapless case. We show that, in an appropriate large deviations regime, the tail of this distribution decays exponentially with a rate of decay which is strictly smaller than in the gapless case. We characterize this rate in two ways. First, it is related naturally to the growth of typical alignment scores. On the other hand, it is given by the zero of some limiting logarithmic moment generating function. This result extends the gapless case nicely. We also give bounds which are helpful for determining the rate in practice.
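The abstract does not name a specific algorithm; optimal local alignments with gaps are commonly computed by Smith–Waterman-type dynamic programming. A minimal sketch with a hypothetical scoring scheme (match +1, mismatch -1, linear gap penalty -1):

def smith_waterman(s, t, match=1, mismatch=-1, gap=-1):
    """Return the optimal local alignment score of strings s and t."""
    n, m = len(s), len(t)
    # h[i][j] = best score of a local alignment ending at s[:i], t[:j].
    h = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            h[i][j] = max(0,                      # start a new alignment
                          h[i - 1][j - 1] + sub,  # align s[i-1] with t[j-1]
                          h[i - 1][j] + gap,      # gap in t
                          h[i][j - 1] + gap)      # gap in s
            best = max(best, h[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))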

Lorens Imhof (RWTH Aachen) The long-run behavior of a stochastic replicator dynamics

Fudenberg and Harris have introduced a stochastic version of the continuous-time replicator dynamics of Taylor and Jonker. This talk describes the role of evolutionarily stable strategies in the stochastic setting and points out some differences between the deterministic and the stochastic model. An application to the war of attrition will be discussed as well.

Christof Külske (Weierstrass Institute for Applied Analysis and Stochastics, Berlin) Selfaveraging of random diffraction measures

Consider diffraction at a finite number of random point scatterers on a point set in Euclidean space. Useful examples include lattices, quasi-lattices, or random perturbations thereof, but we don’t assume any particular structure other than finite distance. We ask for the self-averaging properties of the corresponding random scattering measures applied to a function in Fourier space (modelling the measurement device). We give a general explicit large deviation upper bound that is exponential in the number of scatterers. Our proof utilises expansion methods from statistical mechanics.

V. Nollau, H.-O. M¨uller, S. El-Shehawy (Technische Universit¨at Dresden) Statistical Methods of Model Selection for Moisture Retention Characteristics (MRC)

Mathematical modeling of water flow in unsaturated natural soil leads to a partial differential equation — the Richards equation. To solve this equation one needs a description of the relationships between soil moisture content, hydraulic conductivity and capillary pressure. The moisture retention characteristic MRC — the relation between water content and pressure head — can be measured for natural soils. A variety of parametric models have been proposed to describe the MRC. All these models are nonlinear regression models. This paper discusses the selection of parametric models of the MRC from the viewpoint of statistics. Thereby we are not only interested in the ability of a model to fit real MRC data for different soils, but also in the accuracy of parameter estimators and the accuracy of predictions made with the estimated parameters with respect to typical real data. For instance, one is interested in the calculation of the hydraulic conductivity curve using the fitted MRC. In this note we use simulation studies, measures of nonlinearity (curvature) and resampling methods like cross-validation and the bootstrap. If a model has many parameters, overfitting is possible; in some cases parameters will depend on each other, and therefore it will be impossible to identify a soil by the estimated parameters, since estimated parameters vary over a wide range for typical measurement data. We compare the often used van Genuchten model with five parameters and its variants with four or three parameters, and an MRC model proposed by King.
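As a minimal illustration of the kind of nonlinear regression fit discussed above (synthetic data and parameter values are hypothetical; the curvature and resampling diagnostics of the paper are not reproduced), the van Genuchten retention curve $\theta(h) = \theta_r + (\theta_s - \theta_r)\,[1 + (\alpha h)^n]^{-m}$ with $m = 1 - 1/n$ can be fitted by nonlinear least squares:

import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content as a function of pressure head h (van Genuchten model)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Synthetic "measurements": hypothetical true parameters plus noise.
h = np.logspace(0, 4, 30)                         # pressure heads
theta = van_genuchten(h, 0.05, 0.40, 0.02, 1.6)
theta += np.random.default_rng(0).normal(0, 0.005, h.size)

popt, pcov = curve_fit(van_genuchten, h, theta, p0=[0.05, 0.4, 0.01, 1.5])
print("estimates:", popt)
print("standard errors:", np.sqrt(np.diag(pcov)))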

Robert Offinger (Universit¨at Magdeburg) On the generation of discrete isotropic orientation distributions for linear elastic polycrystals

We consider a model for the elastic behaviour of a polycrystalline material based on the volume average of the stiffness tensors (Voigt model), so that the effective elastic properties depend only on the distribution of the orientations over the finite number of grains. A special problem is to determine so-called discrete orientation distributions (DODs) which satisfy the isotropy condition. A DOD is a probability measure with finite support on SO(3), the special orthogonal group in three dimensions. Isotropy of a DOD can be viewed as an invariance property of a certain moment matrix of the DOD. So the problem of finding isotropic DODs resembles that of finding weakly invariant linear regression designs. We utilize methods applied for linear regression designs to construct various isotropic DODs, find isotropic DODs which are simultaneously isotropic for all materials, and obtain special DODs with fairly small support in the case of additional crystal symmetries.

Oliver Redner (Universit¨at Greifswald) Mutation-Selection Balance: Ancestors and a Maximum Principle

In this joint work with Joachim Hermisson and Ellen Baake, we analyze the equilibrium behavior of deterministic haploid mutation-selection models. To this end, the corresponding multi-type branching process is considered, both forward and backward in time. The stationary state of the time-reversed process is the ancestral distribution. For a class of models in which fitness is defined solely by the number of mutations in an individual, we use the ancestor formulation to derive a simple maximum principle, from which the mean and variance of fitness and the number of mutations can be obtained. The results are exact for a number of limiting cases, and otherwise yield approximations which are accurate for a wide range of parameters. These results may be applied to threshold phenomena caused by the interplay of selection and mutation (known as error thresholds).

Michel Sortais (Technische Universit¨at Berlin) Large Deviations in the Langevin Dynamics of some disordered systems

We consider the Langevin dynamics of a short range spin glass and give some Large Deviations estimates for the empirical process considered both in the “quenched” regime and in the “annealed” regime. The same type of results may be derived in the context of a Langevin dynamics for the Random Field Ising Model.

Anja Sturm (Weierstraß-Institut für Angewandte Analysis und Stochastik, Berlin) A branching process in a random environment relating to SPDEs in higher dimensions

We consider a system of branching particles whose offspring distributions depend on a random environment that is uncorrelated in time but correlated in space. We show that the diffusion limit of this system can be described by the stochastic heat equation with coloured noise. The latter has function-valued solutions in any dimension, in contrast to the heat equation with white noise, which characterises super-Brownian motion only in dimension 1.

Pascal Vogt (University of Bath) Collision local time of singular catalytic Super-Brownian Motion

The talk presents a new approach to the collision local time of a catalytic Super-Brownian Motion and its singular catalytic medium. Our method generalizes the results obtained by Fleischmann and Le Gall for a single point catalyst.

Anja Voß-B¨ohme (Technische Universit¨at Dresden) Limit Theorems for the Heat Equation with Random Potential

We consider the heat equation with constant drift and random potential in one space dimension. If the random potential is homogeneous in space, e. g. if it is supposed to be stationary and ergodic, then long-time limits can be observed, such as an asymptotic speed of mass concentration. For large drift values, we analyze the asymptotic mass distribution in detail and identify the scale of deviations from the asymptotic speed.

Anita Winter (Friedrich-Alexander-Universit¨at Erlangen-Nürnberg) Particle representation of Interacting Fisher–Wright diffusions and applications

This is joint work with Andreas Greven and Vlada Limic. For any fixed time, a particle representation for the historical process of a collection of Moran models with increasing particle intensities and of the limiting interacting Fisher-Wright diffusions is provided on one and the same probability space by means of a look-down process. It is discussed how this representation can be used to obtain results on the structure of the equilibrium historical processes.

Silvelyn Zwanzig (Uppsala Universitet) Application of Errors-in-Variables Models in Astrometry

Hipparcos (1989–1993) is the first and so far only spacecraft dedicated to positional astronomy. Thanks to newly developed scientific instruments, the measurements reached a new level of accuracy. Some of the independent variables now have the same level of uncertainty as the dependent variable and can no longer be considered as known. This calls for the application of errors-in-variables models. In the talk, astrometric reduction methods in different models are compared. The conclusions for new astrometric satellite missions are discussed.
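For readers unfamiliar with the terminology, a minimal sketch of the simplest errors-in-variables model (the astrometric reduction models compared in the talk are more elaborate):
\[ Y_i = \alpha + \beta X_i + \varepsilon_i\,, \qquad W_i = X_i + U_i\,, \]
where only $(W_i, Y_i)$ are observed. Under the classical independence assumptions, naive least squares of $Y$ on $W$ estimates $\beta\lambda$ with reliability ratio $\lambda = \sigma_X^2 / (\sigma_X^2 + \sigma_U^2) < 1$ rather than $\beta$, which is why independent variables whose measurement errors are comparable to those of the dependent variable can no longer be treated as known.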

Sec. 9. Stochastic Methods in Optimization and Operations Research

Organizer: Ulrich Rieder (Ulm)

Invited Lecture

Wolfgang J. Runggaldier (Universit´a di Padova) On optimal investment strategies under model uncertainty

Stochastic dynamic optimization (optimal stochastic control) has found various applications in finance and insurance, mainly in relation to investment problems like the classical portfolio optimization (investment/consumption) problem and, in an insurance context, the problem of maximizing survival probability. Further, more recent applications concern the hedging problem in incomplete markets and the related problem of tracking a benchmark. Traditional methodologies to obtain an optimal solution are based on dynamic programming, but other methodologies have also been proposed more recently, such as “martingale methods”. Our emphasis here is not so much on solution methodologies per se, but rather on their performance under model uncertainty (model risk). In this context, and concentrating mainly on hedging applications, we discuss adaptive as well as robust methodologies. Since investment decisions take place in discrete time, our models will also be in discrete time.

Contributed Lectures (in alphabetic order)

Harald Bauer (Universit¨at Ulm) Fluid Approximation for Controlled Queueing Networks

Control problems in stochastic networks are in general difficult to treat analytically, and even a numerical solution is often intractable. Therefore approximation procedures like fluid or Brownian approximations have been of increasing interest in recent years for gaining insight into the performance of queueing networks and deducing asymptotically optimal policies. We consider a general model for controlled queueing networks with exponentially distributed interarrival and service times. Under fluid scaling we establish weak convergence of the joint state and action process, where the trajectories of the limiting process obey a deterministic controlled flow equation. We demonstrate the applicability of this result to several variations of the admission control problem in stochastic loss networks.

Nicole B¨auerle (University of Ulm) Portfolio Optimization in the presence of Markov-modulated volatilities

We consider the task of finding optimal investment strategies in a diffusion market with one bond and one stock where the stock volatility is a function of a Markov jump process. The problem is investigated for several different utility functions, among others we want to maximize the probability of reaching a given goal. The results are compared to the classical (non-modulated) case. Since the market is incomplete our main mathematical tool is the theory of stochastic control.

Hans Daduna (Universit¨at Hamburg) Stabilization and throughput computation for large networks of queues in discrete time

We consider large cycles of single server queues in discrete time and compute explicitly the throughput (local progress) of the system. Letting the number of nodes and the population size grow to infinity we describe the different modes of possible asymptotic behavior (between local idling and occurrence of bottlenecks). We identify the regions of asymptotic stability for the networks. We further determine the asymptotic equilibrium queue length distributions for the different modes of asymptotic behavior. The talk reports on joint research with V. Pestien and S. Ramakrishnan (University of Miami).

Ophelia Engelhardt-Funke (Technische Universit¨at Clausthal-Zellerfeld) Structural Results on Routing Aircraft to Two Runways

A mixed Poisson arrival stream of aircraft has to be routed to two runways with separate waiting loops. Different types of aircraft require different safety distances between them during the landing process. The aim is to find a routing policy that maximizes the throughput (arrival rate) of the airport such that the average delay of the aircraft is bounded by a given limit. We model this as a queueing system with two parallel servers and dependent service times, where the delay is the waiting time of the customers. We discuss three scenarios that differ in the amount of information available to the control unit. If only the type of the incoming aircraft is known, we derive conditions on the safety distances under which the mean waiting time is always less (resp. greater) than in the classical M/G/1 model with independent service times. We give stability conditions and compute the average delay of the airplanes under fixed routing policies numerically. This allows us to approximate the optimal policy and the optimal throughput value heuristically. If only the state of the two runways, but not the type of the incoming aircraft, is known, we can show that the optimal routing policy is of switching type. If the complete information is available, we show by a counterexample that a policy of switching type cannot always be optimal.

Geert Jan Franx (Vrije Universiteit Amsterdam) A simple solution for the M/D/c waiting time distribution

A surprisingly simple and explicit expression for the waiting time distribution of the M/D/c queueing system is derived by a full probabilistic analysis, requiring neither generating functions nor Laplace transforms. Unlike the solutions known so far, this expression presents no numerical complications, not even for high traffic intensities. Finally, the result is proved explicitly to satisfy Erlang’s integral equation for the M/D/c queue, which has been somewhat problematic for the expressions known so far.

Stefan Haar, Albert Benveniste, Eric Fabre (INRIA, IRISA) Probability and Parallelism: Non-Sequential Stochastic Processes

We will present the elements of the recent theory of concurrent Markov processes in partially ordered logical time, in the framework of (untimed) Petri net semantics. In this approach, motivated by the monitoring of distributed systems and in particular of telecommunication networks, we introduce a probabilistic unfolding semantics for untimed Petri net systems. Distributed systems have local state and time, but do not possess global state and time in the usual sense; here, we give a partial order model of (logical rather than physical) time, with processes evolving locally. The probabilities are independent of the order in which causally independent events are observed, i.e. it is not the interleaving of concurrent events that is randomized, but solely the choice between different possible partially ordered runs. The progress of time is measured not by physical time but by the events of the system and the “decisions” made during its evolution; one obtains stochastic processes in branching partial order logical time. For the stopping times we introduce and characterize in the talk, a strong Markov property is satisfied.

Frank Heyde (Martin-Luther-Universit¨at Halle–Wittenberg) Stochastic optimal control problems and non-continuous viscosity solutions of the HJB equation

We consider problems of optimal control of degenerate diffusions and show in two cases that the value function coincides with a certain kind of non-continuous viscosity solution of a corresponding boundary value problem. For that we compactify the space of feasible controls using relaxed controls. In the case of minimizing the mean exit time of the diffusion process from an open set, we show that the value function is the unique envelope viscosity solution of the Dirichlet problem for the corresponding HJB equation. In the case of infinite horizon problems with state constraints, we prove that the value function is the minimal viscosity supersolution of the corresponding HJB equation and the supremum over all subsolutions in larger domains.

Regina Hildenbrandt

(Technische Universit¨at Ilmenau)

DA problems with distance properties and an expanded conception of optimal monotone/dominant policies

The problems considered, which are modelled as Markov decision processes (MDPs), have the following characteristics:

(1) Any decision can be transformed into another by a sequence of “neighbouring” decisions. The transition probability matrices of corresponding “neighbouring” decisions differ only in two elements. (Such a property holds for stochastic dynamic programming problems where the random disturbances are observed before the decision is made at each stage. DA means “decision after”.)

(2) “Internal costs” fulfil distance properties (especially the triangle inequality).

The internal costs (and consequently the MDP, too) may depend on parameters $k_{ij}$.

Properties of “stochastically monotone Markov chains” can be found in Daley (1968). If such properties, together with additional properties of the average (one-step) reward functions, are given for an MDP, then the optimality of “monotone/dominant policies” can be shown (Hildenbrandt 1993, Puterman 1994). Only a small number of MDPs have optimal monotone policies. In particular, several MDPs with property (1) belong to this class.

DA problems with distance properties lead to an expanded conception of monotone Markov chains and of optimal monotone policies of MDPs in the case that the $k_{ij}$ are nearly equal. If the $k_{ij}$ differ more strongly, then the “principle of expanded monotonicity” and the “principle of minimal stage cost” compete with one another.

P´eter K´ar´asz (Budapest Polytechnic) On a Class of Retrial Systems with Uniformly Distributed Service Time

Introduction. Based on a real problem connected with the landing of aeroplanes, we investigate a special queueing system in which peculiar conditions prevail. In such systems a request for landing can be serviced upon arrival if the system is free. When other planes are using the runway or waiting to land, the entering plane has to start a circular manoeuvre and can repeat its request when it comes to the starting point of its trajectory. Because of possible fuel shortage it is quite natural to use the FIFO rule. In his works Lakatos has extensively investigated queueing systems of this type, namely where the service of a request can be started upon arrival (in the case of a free system) or at times differing from it by multiples of the cycle time T (in the case of a busy server). In [1] he considered a system with Poisson arrivals and uniformly distributed service time. As a generalization, in [2] a special system serving customers of two different types was examined. Both types of customers form Poisson processes, and their service time distributions are exponential. In the system only one customer of the first type can be present; it can only be accepted for service in the case of a free system, whereas in all other cases the requests of such customers are turned down. There is no such restriction on customers of the second type; they are serviced immediately or join a queue in case of a busy server. In this paper we consider the same system, but service times are uniformly distributed. To elaborate the mathematical description of the system we make the following assumptions. In the system there might be idle periods, when the service of a request is completed but the next one has not reached its starting position. We consider these periods part of the service time, making the service process continuous in this way. We also make a restriction on the boundaries of the intervals of the uniform distributions: they are multiples of the cycle time. This assumption does not violate the generality of the theory, but without it the formulae are much more complicated. For the description of the system we use the embedded Markov chain technique, i.e. we consider the number of customers in the system at moments just before the service of a new customer begins. For this chain we introduce the following transition probabilities: $a_{ji}$ – the probability of the appearance of i customers of the second type during the service of a j-th type customer (j = 1, 2) if at the beginning there is only one customer in the system; $b_i$ – the probability of the appearance of i customers of the second type during the service of a second type customer, if at the beginning of the service there are at least two customers in the system; $c_i$ – the probability of the appearance of i customers of the second type after a free state.

Results We formulate the results of the paper in the following

Theorem. Let us consider a queueing system with two types of customers forming Poisson processes with parameters $\lambda_1$ and $\lambda_2$; the service times are uniformly distributed on the intervals $[\alpha_1, \beta_1]$ and $[\alpha_2, \beta_2]$, respectively ($\alpha_1, \beta_1, \alpha_2, \beta_2$ are multiples of the cycle time $T$). There is no restriction on customers of the second type; however, customers of the first type may only join the system when it is free (and only one of them can be present at every instant), all other requests of this type being refused. The service of a customer may start at the moment of its arrival (in case of a free system) or at moments differing from it by multiples of the cycle time $T$, and the FIFO rule is obeyed. We define an embedded Markov chain whose states correspond to the number of customers in the system at moments just before starting a service. The matrix of transition probabilities of this chain has the form
\[
\begin{pmatrix}
c_0 & c_1 & c_2 & c_3 & \dots \\
a_{20} & a_{21} & a_{22} & a_{23} & \dots \\
0 & b_0 & b_1 & b_2 & \dots \\
0 & 0 & b_0 & b_1 & \dots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.
\]

The condition for the existence of an ergodic distribution is the fulfilment of the inequality

\[ \frac{\lambda_2\,(\alpha_2 + \beta_2 + T)}{2} < 1\,. \]

The limit distribution as $T \to 0$ is also given.

References

[1] Lakatos, L. (1996). On a Cyclic-Waiting Queuing System. Theory of Stochastic Processes Vol. 2 (18), no. 1–2, 176–180.

[2] Lakatos, L. A Special Cyclic-Waiting Queuing System With Refusals. J. Math. Sci. (New York) (to appear).

[3] Lakatos, L. Limit Distributions for Some Cyclic-Waiting Queuing Systems. Theory of Stochastic Processes (to appear).

Michael Kolonko (Technische Universit¨at Clausthal-Zellerfeld) Solving a sequential optimization problem with stochastic evolutionary algorithms

In the manufacturing of composite materials (polymer-impregnated fibres) a thread of flexible material is wound around a rotating mandrel. The winding robot has to be controlled in such a way that the material after hardening has a given shape. Reduced to the two-dimensional cross-section, the problem is to find a sequential placement of flexible two-dimensional objects (the cross-sections of the thread) such that a given target contour is filled optimally. Formally this is an N-stage dynamic programming problem where N is the number of rounds of material. The restriction that prohibits any standard solution technique is the flexibility of the material: its adaptation to the surface can only be simulated. We suggest a stochastic evolutionary algorithm that treats the problem in a non-sequential manner. Starting with random placements, stochastic genetic operators iteratively produce new feasible solutions of increasing quality. Examples from a practical application are shown.

Kurt Majewski (Siemens AG München) Minimizing paths of Jackson networks

Joint work with Kavita Ramanan. It is well known that the tail probabilities of the steady state distribution of queue lengths in a stable Jackson network can be characterized by a variational problem on path space. Although the optimal value of the variational problem follows directly from the fact that the stationary queue lengths in a Jackson network are independent and geometrically distributed, characterization of the optimal solution to the variational problem is of importance because it sheds insight into the manner in which large queue lengths build up. In our work we explicitly identify this optimal solution by showing that it is equal to the time-reversal of the path followed by the so-called fluid model of the time-reversed network to drain the network to zero. We also provide numerical examples to illustrate the theoretical results.

Christian Malchin (Universit¨at Hamburg) On the conditional structure of sojourn-times

We consider discrete-time and continuous-time closed cyclic networks with geometrically resp. exponentially distributed service times whose means depend on the node. Under the steady-state condition we compute the conditional distribution of a customer’s joint sojourn-time vector, given his total cycle time.

Peter Neumann, David Ramsey, Krzysztof Szajowski (Technische Universit¨at Dresden, Technical University of Wrocław, Technical University of Wrocław) Randomized stopping times in stopping games of Dynkin type

In a stopping game, two players observe a stochastic process and stop depending on the observations obtained so far. Stopping is often connected with the acquisition of an object by one of the players, as in the well-known secretary problem. Dynkin (1969) was the first to discuss the conflict that arises when several players want to acquire the object at the same moment. Since then, the Dynkin game has been repeatedly generalized and modified, introducing qualification and priority rules or the possibility for the non-stopping player to continue the process. The non-zero-sum case has also been investigated. A surprising result of one of the authors (Szajowski 1994) is that the expected gain of a player can be improved by using randomized stopping times. Recently obtained results in this field will be presented.

Zdzisław Porosinski

(Wrocław University of Technology)

On best choice problems having similar solutions

The purpose of the paper is to point out that best choice problems with different information structures may have similar solutions. A full-information best choice problem with a random number of objects having uniform distribution is considered. An optimal stopping rule, determined by a decreasing sequence of levels, is found. The asymptotic behaviour of both the optimal stopping rule and the winning probability is examined in detail. Both the sequence of optimal levels determining optimal strategies and the asymptotic winning probabilities are the same in the considered problem as in a best choice problem with partial information considered by Petruccelli (1980, Ann. Stat. 8, 1171–1174).

Ulrich Rieder

(Universit¨at Ulm) Portfolio Optimization under Different Information Structures

We discuss the classic maximization problem of expected utility from terminal wealth. First, we consider a jump diffusion market and study the case where the investor knows the total number of jumps. For this model of inside information, an optimal policy is derived and computed explicitly in the case of logarithmic and power utility. The optimal portfolio is time-varying and has a jump as soon as a jump occurs in the stock prices. In the second part, we require that the investor observes just the stock prices. Moreover, the stock appreciation rates are assumed to be unobservable random variables with known prior distribution. Using stochastic filtering techniques we obtain explicit formulae for the value function and the optimal portfolio process. In particular, the certainty equivalence principle does not hold for power utility functions.

Thilo Roßberg (Universit¨at Ulm) Mean-Variance Hedging in Jump-Diffusion Markets: An LQ-Approach

We extend the recent research on stochastic LQ-control to the case of discontinuous state processes. For deterministic coefficients of the state process and cost functions we derive explicit formulae for the optimal policy and value function. In the case of stochastic coefficients, both the value function and the optimal policy are given in terms of solutions of backward stochastic differential equations (BSDEs) and deterministic matrix Riccati differential equations. These general results are applied to the problem of mean-variance hedging in financial markets. We establish an arbitrage-free jump-diffusion market model, including both equity and interest-rate risk, where the model of lognormal forward bond processes is extended to include jump risk. The jumps in forward bond prices are used to model credit risk. We derive explicit formulae for both index-tracking and mean-variance hedging of corporate bonds by using equity portfolios. Numerical tests demonstrate the performance of the derived strategies.

Manfred Sch¨al (Universit¨at Bonn) Discrete-time dynamic programming for the ruin probability

A Cramér-Lundberg model is studied which can be controlled by reinsurance and investment in a financial market. The period lengths may be deterministic or stochastic; e.g., a period may be the time between two successive claims. Minimizing the ruin probability can be described as a problem of minimizing costs in a dynamic program. However, the discount factor is then one. In spite of this fact, the model enjoys a contraction property which is weaker than the usual ones in dynamic programming. This property was established by Schmidli (2001) for a continuous-time insurance model and is strong enough for the Howard iteration and for a verification theorem. As an application, it can be shown that “no reinsurance” is optimal if the safety loading of the reinsurer is too high.

Sabine Schlegel, Hayriye Ayhan, Zbigniew Palmowski (EURANDOM, Georgia Institute of Technology, EURANDOM) Subexponential asymptotics for cyclic queues

For a K-stage cyclic queueing model with N customers and general service times we give an explicit expression for the nth departure time from each stage. Starting from this expression we analyze the asymptotic tail behaviour of cycle times and waiting times given that at least one service time distribution is subexponential. Further, we show that the tail of the residual of a subexponential service time seen by an arriving customer is of the same order as the service time itself, where the asymptotic constant depends on the queue length on arrival.

Ronald Schurath, Heinz-Uwe K¨uenle (Brandenburgische Technische Universit¨at Cottbus) The optimality equation and ε-optimal strategies in Markov games with average reward criterion

In our talk we consider two-person zero-sum stochastic games with unbounded payoffs and the average reward criterion. State and action spaces are assumed to be Borel spaces. Under conditions which are more general than the ergodicity assumptions in related works by A. S. Nowak (1999), O. Hernandez-Lerma/ J. B. Lasserre (2000) and A. Ja´skiewicz/A.S. Nowak (2001), we show that the optimality equation has a solution and that ε-optimal stationary strategies exist. Our proofs use a completely different approach compared to the above mentioned papers. We prove that operators of a parametrized class have fixed points, and then we use continuity and monotonicity properties of these fixed points with respect to the class parameter to show that the optimality equation has a solution.

Maike Schwarz, David D. Yao, Feng Cheng, Markus Ettl (Universit¨at Hamburg, Columbia University, IBM T. J. Watson Research Center, IBM T. J. Watson Research Center) On the role of options in supply chain risk management

We consider a two-stage supply chain model in which the retailer can either buy a product directly or acquire options on the product. It is investigated how the supply chain partners should behave in the case of such a contract, as well as how the risks and expected profits are divided among the supply chain partners. It is shown that in certain cases the supply chain partners can, through cooperation, raise the expected profit of the entire supply chain to the expected profit of an integrated supply chain.

Robert Simon (Universit¨at G¨ottingen) The Common Prior Assumption in Belief Spaces

With four persons there is an example of a probability space where 1) the space is generated by hierarchies of knowledge concerning a single proposition, 2) the subjective beliefs of the four persons are continuous regular conditional probability distributions of a common prior probability distribution (continuous with respect to the weak topology), and 3) for every subset that the four persons know in common there is no common prior probability distribution. Furthermore, for every measurable set, every person, and at every point in the space, the subjective belief in this measurable set is one of the quantities 0, 1/2, or 1. This example presents problems for understanding games of incomplete information through common priors.

J´anos Sztrik (University of Debrecen) Asymptotic methods in modelling Markov-modulated finite-source queueing systems

This paper deals with a First-Come, First-Served (FCFS) queueing model to analyse the behaviour of a heterogeneous finite-source system with a single server. The sources and the server are supposed to operate in independent random environments, allowing the arrival and service processes to be Markov-modulated. Each request is characterised by its own exponentially distributed source and service time with parameter depending on the state of the corresponding environment, that is, the arrival and service rates are subject to random fluctuations. Our aim is to get the usual stationary performance measures of the system, such as utilizations, mean number of requests staying at the server, mean queue lengths, and average waiting and sojourn times. In the case of fast arrivals or fast service, asymptotic methods can be applied. In the intermediate situations stochastic simulation is used. As applications of this model, some problems in the field of reliability theory and telecommunications are treated.

References

[1] Anisimov V. V. (1996). Asymptotic Analysis of Switching Queueing Systems in Conditions of Low and Heavy Loading. In: Matrix-Analytic Methods in Stochastic Models. Marcel Dekker Inc.

[2] Kovalenko I. N. (1994). Rare events in queueing systems, A survey. Queue- ing Systems 16, 1–49.

[3] Sztrik J. and Kouvatsos D. D. (1991). Asymptotic Analysis of a Heterogeneous Multiprocessor System in a Randomly Changing Environment. IEEE Transactions on Software Engineering 17, 1069–1075.

[4] Sztrik J. (1993). Modelling of a Multiprocessor System in a Randomly Changing Environment. Performance Evaluation 17, 1–11.

Henk C. Tijms

(Vrije Universiteit Amsterdam)

Computing the transient reward distribution in continuous-time Markov chains

Many practical stochastic optimization problems require the computation of the probability distribution of the transient reward over a finite time interval in a continuous-time Markov chain. A special case is the computation of the probability distribution of the total sojourn time of a continuous-time Markov chain in a given set of states during a finite time interval. For the sojourn time distribution a very nice probabilistic method was developed by de Souza e Silva and Gail in 1986. Those authors have recently extended this uniformization algorithm to the general reward case. However, their generalized algorithm is very unsatisfactory from a numerical point of view. In this lecture a simple-minded discretization method is discussed for computing the transient reward distribution. This method is easy to program and its calculation time can be considerably reduced by using a simple correction for the discretization error.
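For comparison, a minimal sketch of the uniformization (randomization) idea underlying the de Souza e Silva–Gail approach, applied here only to the transient state distribution of a continuous-time Markov chain; the generator below is hypothetical, and neither the general reward extension nor the discretization method of the lecture is reproduced:

import numpy as np

def transient_distribution(q, p0, t, tol=1e-12):
    """Transient state probabilities of a CTMC via uniformization.

    q  : generator matrix (rows sum to zero)
    p0 : initial distribution
    t  : time horizon
    """
    lam = max(-np.diag(q)) * 1.05          # uniformization rate >= max exit rate
    p_unif = np.eye(q.shape[0]) + q / lam  # transition matrix of uniformized chain
    term = np.exp(-lam * t)                # Poisson weight for k = 0
    vec = p0.copy()
    result = term * vec
    k, weight_sum = 0, term
    while weight_sum < 1.0 - tol:          # truncate the Poisson sum
        k += 1
        vec = vec @ p_unif
        term *= lam * t / k
        result += term * vec
        weight_sum += term
    return result

# Hypothetical 3-state generator and initial distribution.
q = np.array([[-2.0, 1.5, 0.5], [1.0, -3.0, 2.0], [0.0, 4.0, -4.0]])
p0 = np.array([1.0, 0.0, 0.0])
print(transient_distribution(q, p0, t=0.7))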

Silvia Vogel (Technische Universit¨at Ilmenau) On Stability of Multistage Stochastic Programs

Multistage stochastic programs have proved to be useful models for many real-life decision processes. Hence there is a need for solution procedures, which have to be supplemented by stability considerations. We shall deal with a general framework, allowing for probability measures which depend on the foregoing decisions. Thus it is possible to take Markovian decision processes into account as well. We shall use an approach which was suggested by the author some years ago and recently rediscovered by L. Korf and R. Wets, and show how it can be extended to apply to the case under consideration.

Karl-Heinz Waldmann (Universit¨at Karlsruhe) On Markov decision processes with an absorbing set

A unifying approach based on the critical discount factor is used to show the equivalence of a large number of well-known sufficient conditions for optimality of a stationary policy and the value iteration to hold. Further some algorithms are presented solving the optimality equation by extrapolation. The algorithms work for all discount factors smaller than the critical one. The talk is based on a joint paper with Karl Hinderer.

Jaap Wessels (EURANDOM Eindhoven) Simple solutions for some classes of queueing problems

Queueing problems may be modelled as random walks. It will be shown that some classes of queueing problems lead to random walks which possess equilibrium distributions in the form of (finite or countably infinite) sums of product forms. It will be shown how this may be exploited for designing efficient algorithms for computing performance characteristics.

Sec. 10. Stochastic Processes, Time Series and their Statistics

Organizer: Günter Last (Karlsruhe)

Invited Lecture

Richard A. Davis (Colorado State University) Maximum Likelihood Estimation for All-Pass Time Series Models

In the analysis of returns on financial assets such as stocks, it is common to observe lack of serial correlation, heavy-tailed marginal distributions, and volatility clustering. Typically, nonlinear models with time-dependent conditional variances, such as ARCH and stochastic volatility models, are suggested for such time series. It is perhaps less well known that linear, non-Gaussian models can display exactly this behavior. The linear models which we will consider are all-pass models: autoregressive-moving average models in which all of the roots of the autoregressive polynomial are reciprocals of roots of the moving average polynomial and vice versa. All-pass models generate uncorrelated (white noise) time series, but these series are not independent in the non-Gaussian case. If the process is driven with heavy-tailed noise, then its marginal distribution will also have heavy tails, and the process will exhibit volatility clustering. All-pass models are widely used in the engineering literature, and usually arise by modeling a series as an invertible moving average (all the roots of the moving average polynomial are outside the unit circle) when in fact the true model is noninvertible. The resulting series in this case can then be modeled as an all-pass of order r, where r is the number of roots of the true moving average polynomial inside the unit circle. Estimation methods based on Gaussian likelihood, least-squares, or related second-order moment techniques are unable to identify all-pass models. Instead, method of moments estimators using moments of order greater than two are often used to estimate such models (Giannakis and Swami, 1990; Chi and Kung, 1995). Breidt, Davis, and Trindade (2000) consider a least absolute deviations approach, motivated by approximating the likelihood of the all-pass model in the case of Laplace (two-sided exponential) noise. Under general conditions, the least absolute deviation estimators are asymptotically normal. In this paper, we consider estimation based on an approximation to the likelihood. Asymptotic normality for the maximum likelihood estimator is established under smoothness conditions on the density function of the noise. Behavior of the estimators in finite samples is studied via simulation, and the estimation procedure is applied to the problem of fitting noninvertible moving averages. (This is joint work with F. Jay Breidt and Beth Andrews.)
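For concreteness, a minimal sketch of the simplest case (the notation is not taken from the talk): the order-one all-pass model with $0 < |\phi| < 1$ is
\[ X_t - \phi X_{t-1} = Z_t - \tfrac{1}{\phi}\, Z_{t-1}\,, \qquad Z_t \ \text{i.i.d.}\,, \]
whose autoregressive root $1/\phi$ and moving average root $\phi$ are reciprocals of each other. Its spectral density is constant, so $\{X_t\}$ is an uncorrelated sequence, yet for non-Gaussian, heavy-tailed $Z_t$ the series is dependent and shows volatility clustering.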

Contributed Lectures

(in alphabetic order)

Andreas Brandt

(Humboldt-Universit¨at zu Berlin) On the Moments of Overflow and Freed Carried Traffic for the GI/M/C/0 System

In circuit-switched networks call streams are characterized by their mean and peakedness (two-moment method). The GI/M/C/0 system is used to model a single link, where the GI-stream is determined by fitting moments appropriately. For the moments of the overflow traffic of a GI/M/C/0 system there are efficient numerical algorithms available. However, for the moments of the freed carried traffic, defined as the moments of a virtual link of infinite capacity to which the process of calls accepted by the link (carried arrival process) is virtually directed and where the virtual calls get fresh exponential i. i. d. holding times, only complex numerical algorithms are available. This is the reason why the concept of the freed carried traffic is not used. The main result of this paper is a numerically stable and efficient algorithm for computing the moments of freed carried traffic, in particular an explicit formula for its peakedness. This result offers a unified handling of both overflow and carried traffics in networks. Furthermore, some refined characteristics for the overflow and freed carried streams are derived.

Erik A. van Doorn (University of Twente) Rates of convergence for birth-death processes

We consider an ergodic birth-death process $X \equiv \{X(t),\, t \ge 0\}$ taking values in $\{0, 1, \dots\}$ with transition probabilities $p_{ij}(t) \equiv P(X(t) = j \mid X(0) = i)$ which have limits $\pi_j \equiv \lim_{t\to\infty} p_{ij}(t)$. The speed of convergence to stationarity of $X$ may be characterized by the exponential rates of convergence
\[ \alpha_{ij} \equiv -\limsup_{t\to\infty} \frac{1}{t} \log |p_{ij}(t) - \pi_j| \]
of the individual transition probabilities, or by the exponential rates of convergence
\[ \beta_i \equiv -\limsup_{t\to\infty} \frac{1}{t} \log \Bigl\{ \frac{1}{2} \sum_j |p_{ij}(t) - \pi_j| \Bigr\} \]
of the total variation distance between the distribution at time $t$ and the ergodic distribution. It is also of interest to study the quantities
\[ \gamma_i \equiv -\limsup_{t\to\infty} \frac{1}{t} \log P(T_0 > t \mid X(0) = i)\,, \qquad i > 0\,, \]
where $T_0$ denotes the first-entrance time into state 0, since a classical result tells us that all the rates mentioned are zero or positive together. In the talk some relations between the various quantities characterizing the speed of convergence to stationarity of $X$ will be discussed.

Leonid Galtchouk (Universit´e Louis Pasteur de Strasbourg) On uniform asymptotic normality of sequential estimators for the parameters in stable autoregressive processes

Consider the first order autoregressive process:

\[ X_n = \theta X_{n-1} + \varepsilon_n\,, \qquad n = 1, 2, \dots, \]
where $(\varepsilon_n)$ is a sequence of i. i. d. random variables with $E\varepsilon_1 = 0$, $E\varepsilon_1^2 = 1$. For this model, in the non-explosive case (i.e. $-1 \le \theta \le 1$), Lai and Siegmund (1983) have proved that the sequential least squares estimator of the parameter $\theta$,

\[ \theta_{N_c} = \sum_{i=1}^{N_c} X_{i-1} X_i \Big/ \sum_{i=1}^{N_c} X_{i-1}^2\,, \]

\[ N_c = \inf\Bigl\{ n \ge 1 : \sum_{i=1}^{n} X_{i-1}^2 \ge c \Bigr\}\,, \qquad 0 < c < \infty\,, \]
possesses an important property of uniform (in $\theta$) asymptotic normality:
\[ \lim_{c\to\infty}\; \sup_{-1 \le \theta \le 1}\; \sup_{-\infty < t < \infty} \Biggl| P_\theta\Biggl( \Bigl( \sum_{i=1}^{N_c} X_{i-1}^2 \Bigr)^{1/2} (\theta_{N_c} - \theta) \le t \Biggr) - \Phi(t) \Biggr| = 0\,. \]

X_n = θ_0 + θ_1 X_{n−1} + ··· + θ_p X_{n−p} + ε_n , n = 1, 2, ...

References

[1] Galtchouk, L. I. and Konev, V. V. (2001). On uniform asymptotic normality of sequential least squares estimators for parameters in a stable AR(p). Multivariate Analysis, submitted.

[2] Lai, T. L. and Siegmund, D. (1983). Fixed-accuracy estimation of an autoregressive parameter. Ann. Statist. 11, 478–485.
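As a purely illustrative aside (not part of the abstract or its references), the sequential scheme described above is easy to simulate; the values of θ and c below are arbitrary choices.

import numpy as np

# Sketch of the sequential least squares estimator theta_{N_c} with stopping
# rule N_c = inf{n >= 1 : sum_{i<=n} X_{i-1}^2 >= c} for an AR(1) process.
# theta and c are illustrative, not values from the abstract.
rng = np.random.default_rng(1)
theta, c = 0.9, 5_000.0

x_prev, s, num, n = 0.0, 0.0, 0.0, 0
while s < c:
    x = theta * x_prev + rng.standard_normal()
    num += x_prev * x          # running sum of X_{i-1} X_i
    s += x_prev ** 2           # running sum of X_{i-1}^2
    x_prev = x
    n += 1

theta_hat = num / s
print("N_c =", n, " estimate =", round(theta_hat, 4),
      " normalized error =", round(np.sqrt(s) * (theta_hat - theta), 3))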

Ouagnina Hili (National Polytechnical Institute of Yamoussoukro) Parametric estimation of nonlinear dynamical systems

The present paper deals with the minimum Hellinger distance (MHD) parameter estimation of nonlinear dynamical systems with highly correlated residuals. In order to estimate the parameter of interest, we fit the residuals by an EXPAR (EXPonential AutoRegressive) time series model. Under some assumptions which ensure the stationarity, the existence of the moments of the stationary distribution and the strong mixing property of the fitted residuals, we establish the almost sure convergence and the asymptotic normality of the MHD estimates.

Karsten Keller (Universität Greifswald) Symbolic analysis of high-dimensional time series

In order to extract and to visualize qualitative information from a high-dimensional time series, we apply ideas from symbolic dynamics. Counting certain patterns in the given series, we obtain a series of matrices whose entries are symbol frequencies. The matrix series is explored by using simple quantities from nominal statistics and information theory. In particular, we discuss the relation between these quantities and give a generalization of correspondence analysis. The method described is applied to detect and to visualize qualitative changes of EEG data related to epileptic activity.

Uwe Küchler (Humboldt-Universität zu Berlin) Stochastic Differential Equations with Memory as Time Continuous Counterpart of AR(m)-Sequences

We consider stochastic differential equations of the type

dX(t) = ( ∫_{−r}^{0} X(t + s) a(ds) ) dt + dZ(t) , t ≥ 0 ,
X(s) = ξ(s) , s ∈ [−r, 0] ,

where a(ds) is a finite signed measure ("the memory"), (Z(t), t ≥ 0) is the driving process and ξ is a given initial process. Properties of the solutions (X(t), t ≥ 0) and possibilities to estimate the measure a(ds) from observation of the trajectory of (X(t), t ≤ T) are discussed. Asymptotic properties are presented.

Markus Reiß (Humboldt-Universität zu Berlin) Nonparametric Estimation for Stochastic Delay Differential Equations

Stochastic delay differential equations (SDDEs) can be regarded as time-continuous analogues of autoregressive processes. Affine SDDEs are of the form

dX(t) = ( ∫_{−r}^{0} X(t + s) a(ds) ) dt + σ dW(t) , t ≥ 0 ,

where a is a finite signed measure on [−r, 0], so that the drift is an average over the past trajectory (X(s), t − r ≤ s ≤ t). We wish to determine a nonparametric estimate of a from the continuous observation of a trajectory up to time T. Under stationarity conditions and assuming a to have a Lebesgue density of regularity s, we construct a so-called Galerkin estimator, which has an L²-risk of order T^{−s/(2s+3)} for T → ∞. This rate is optimal in a minimax sense. The deterioration of the rate compared to classical density estimation is due to an associated ill-posed inverse problem. An outlook to adaptive methods and testing is given.

Volker Reitmann, H. Kantz (Max-Planck-Institute for the Physics of Complex Systems, Dresden) Estimation of bifurcation parameters in dynamics of elastic-plastic systems via time-series analysis of Lyapunov functionals

Consider on a time interval [0, T] the elastic-plastic body ᵗΩ ⊂ R³, bounded by a surface ᵗΓ. This surface is separated into the contact surface, the surfaces ᵗΓ_U and ᵗΓ_F subject to displacement and stress boundary conditions, respectively, and the free surface. Suppose that u_i = u_i(ξ¹, ξ², ξ³, t) are the displacements and Γ^k_{ij} are the Christoffel symbols. Using the covariant derivatives u_{j,i} = ∂u_j/∂ξ^i − Γ^k_{ij} u_k, we define the Lagrangian strain tensor by ε_{ij} = ½ (u_{j,i} + u_{i,j} + u^k_{,i} u_{k,j}) (repeated indices indicate summation) and the equation of motion by (σ^{kl} δ^i_l + σ^{kl} u^i_{,l})_{,k} + ρ f^i = ρ ü^i, where σ^{kl} is the second Piola-Kirchhoff stress tensor, f^i is the prescribed body force, ρ is the mass density, u^i are the contravariantly written u_i, and δ^i_l is the Kronecker symbol. The material of the body is assumed to be elastic-plastic, with the constitutive equation in terms of the strain rate given by σ̇^{kl} = L_{EP}^{klmn} ε̇_{mn}, where L_{EP}^{klmn} varies with the stress and strain state. The prescribed displacements on the surface ᵗΓ_U are u_i(x, t) = U_i(x, t), the prescribed boundary forces on ᵗΓ_F are σ^{kl}(δ^i_l + u^i_{,l}) n_k = F^i, and the tangential frictional stress on the contact surface is given by σ^{ij} n_j − σ^{jk} n_j n_k n^i = F^i, where n_k and n^i denote the covariant and contravariant components, respectively, of the outward unit normal. The initial conditions for the body are u_i(x, 0) = U_{0i}(x), u̇_i(x, 0) = U_{1i}(x).

Many dynamic elastic-plastic deformations with impacts and friction, such as sheet metal spinning and deep-drawing, may be described as above. An important property of such processes is their stability on a finite time interval [0, T]. If this stability is lost, unstable plastic wave motions (dynamic wrinkling), as in sheet metal spinning, may develop. In order to characterize critical or bifurcation parameters for stability on [0, T] of a given displacement u^i, small perturbations ũ^i are investigated. Lyapunov-like functionals for stability on [0, T] are of the type Q(ũ^i, ü̃^j, t, λ) = ∫_{ᵗΩ} ρ ü̃^i ũ_i dV and depend directly on measured displacements, their second derivatives and certain parameters λ. If for a time t_cr ∈ [0, T] the instability property Q̇(ũ^i, ü̃^j, t_cr, λ) ≥ 0 is satisfied, a new parameter λ̂ will be calculated, in the smooth case in the direction of the negative gradient −grad_λ Q̇(ũ^i, ü̃^j, t_cr, λ). It will be shown that under certain assumptions this gradient depends only on measured values and λ, and can be used for the estimation of bifurcation parameters.

The formal mathematical description of the considered elastic-plastic deformations is based on second-order differential inclusions in Hilbert space. Random perturbations in the equation of motion are included.

Supported by the DFG-Schwerpunktprogramm "Mathematical methods for time series analysis and digital image processing".

Helmut Rieder (Universität Bayreuth) Robust estimation for time series models based on infinitesimal neighborhoods

We consider parametric time series models which are LAN with a stationary scores function that is the product of the innovation scores and a function of the past; in view of the work by Drost, Klaassen and Werker (1997), these models include ARMA, TAR, and ARCH.
Based on this kind of LAN, we define influence curves of asymptotically linear estimators by stationary and ergodic martingale differences that satisfy a Fisher consistency condition with respect to the model scores. Employing similar LAN-expansions of the loglikelihoods, infinitesimal neighborhoods of transition probabilities may be introduced, their radius depending on the past of the process. The neighborhoods allow for IO-outliers, which are nonstationary and depend on the past, but may also cover AO- and SO-outliers if the radius function is chosen suitably. Optimally robust influence curves can then be derived by minimizing the maximum asymptotic MSE of asymptotically linear estimators over such infinitesimal neighborhoods, and corresponding least favorable radius curves are obtained. The construction problem has been solved at least for the smallest neighborhood model: conditional contamination and total variation neighborhoods with bounded radius curve. Our work generalizes the previous approaches to robust time series estimation by Künsch (1984), Staab (1984), Martin and Yohai (1986).

Rainer von Sachs (Université catholique de Louvain) Forecasting non-stationary time series by wavelet process modelling

The classical forecasting theory of stationary time series exploits the second-order quantities (variance, autocovariance and spectral density) of an observed process in order to construct prediction intervals. However, many time series in the applied sciences (e. g., geophysical, biomedical or financial data) show a time-varying second-order structure. In this talk, we will address the problem of how to model, estimate and predict non-stationary time series with the help of particular wavelets. Using the model of locally stationary wavelet processes ([1]), which allows one to model a second-order structure that varies slowly in time, we will present a new linear predictor based on wavelets. Its performance will be illustrated by treating financial and climatological data. This is joint work [2] with Sébastien Van Bellegem (Université catholique de Louvain) and Piotr Fryzlewicz (University of Bristol).

References

[1] Nason, G., von Sachs, R. and Kroisandt, G. (2000). Wavelet processes and adaptive estimation of evolutionary wavelet spectra. J. Royal Stat. Soc., Ser. B 62, 271–292.

[2] Fryzlewicz, P. Z., Van Bellegem, S. and von Sachs, R. (2002). Forecasting non-stationary time series by wavelet process modelling. Discussion Paper, Institut de statistique, Université catholique de Louvain, Louvain-la-Neuve.

Fabio Spizzichino (University "La Sapienza", Rome) Birth processes and random partitions of intervals

Let {Nt}t≥0 be a pure birth process, with intensity given by

µ_h(t) ≡ lim_{δ→0} (1/δ) P{N_{t+δ} = h + 1 | N_t = h} , h = 0, 1, ... , t ≥ 0 ,

and let T_1, T_2, ... be the corresponding arrival times. For any n, (T_1, T_2, ..., T_n) can be seen as the vector of order statistics of exchangeable, non-negative random variables (X_1^{(n)}, ..., X_n^{(n)}), and we denote by f^{(n)}(x_1, ..., x_n) the corresponding (permutation-invariant) joint density function.

Furthermore, for t > 0, we consider the conditional distribution of (T1,T2,..., Tn), given the event {Nt = n}, and denote by

φ_t^{(n)}(x_1, ..., x_n)

the permutation-invariant density which admits the latter as the joint distribution of the order statistics (notice that, conditionally on {N_t = n}, T_1, T_2, ..., T_n determine a random partition of the interval [0, t]).

Under special conditions on {µ_h(·)}_{h=0,1,...}, in this talk we analyze some aspects of f^{(n)}(x_1, ..., x_n) and of φ_t^{(n)}(x_1, ..., x_n), related to properties of multivariate aging ([5]).

In particular, we consider in detail the special cases when {N_t}_{t≥0} is a counting process with the "order statistics property" (see e. g. [1], [2] and [3]) and when {N_t}_{t≥0} is a "pure-death process", which gives rise to f^{(n)}(x_1, ..., x_n) being "l_∞-spherical densities" (see [4]). Some applications of interest are also considered.

References

[1] Berg, M. and Spizzichino, F. (2000). Time-lagged point processes with the order-statistics property. Math. Methods Oper. Res. 51(2), 301–314.

[2] Huang, W.-J. and Shoung, J. M. (1994). On a study of some properties of point processes. Sankhya, Ser. A 56(1), 67–76.

[3] Huang, W.-J. and Su, J.-C. (1999). On certain problems involving order statistics — a unified approach through order statistics property of point processes. Sankhya, Ser. A 61(1), 36–49.

[4] Petre, F., Shaked, M. and Spizzichino, F. (2002). Nonhomogeneous birth processes, load-sharing models and l∞-spherical densities. Probab. Engin. Inform. Sc. (To appear).

[5] Spizzichino, F. (2001). Subjective probability models for lifetimes. Chapman and Hall/CRC, Boca Raton, London.

Ryszard Szekli (Wroclaw University) On dependence orderings for stationary point processes on R

Dependence orderings such as the supermodular, directionally convex, concordance and similar ones are widely used for random vectors in a variety of stochastic models. The general theory is rather well developed for such orderings; however, in the context of stationary sequences, when finite segments of stationary sequences are compared, only a limited number of examples is available (usually related to Markov chains). In this talk such examples will be gathered, including known and some new ones. Some applications, for example in queueing theory, will be given. The talk is based on two papers, both joint with Rafal Kulik: 1. Comparison of dependence in marked point processes with applications, Preprint, University of Wroclaw; 2. Sufficient conditions for long range count dependence of stationary point processes on the real line, Journal of Applied Probability 38, 570–581.

Katharina Wittfeld (Ernst-Moritz-Arndt University of Greifswald) Ordinal time series analysis with application to EEG-data

In order to get simple and robust methods for time series analysis, we consider ordinal patterns describing the ups and downs in a given time series. This leads to the abstract concept of an ordinal time series introduced by C. Bandt and B. Pompe. In the first part of our talk we outline this concept and consider ordinal time series obtained from standard stochastic processes. In particular, we discuss a special representation of an ordinal time series based on ranks. We demonstrate how this rank representation can be used for a fast transformation of a given metric time series into an ordinal one. In the second part we present a computer program for analyzing 19-channel EEG data. This program is based on counting ordinal patterns in the given data and analyzing the obtained frequencies by multivariate statistics and information theory. (A detailed description of these methods is given in the talk of K. Keller [abstract on page 125].)
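A minimal sketch of the ordinal-pattern idea, in the spirit of Bandt and Pompe (illustration only; the toy series below is a random walk, not EEG data): count order patterns of length 3 in a series and compute the entropy of their relative frequencies.

import numpy as np
from collections import Counter

def ordinal_pattern_counts(x, m=3):
    # count the rank pattern (argsort) of every window of length m
    counts = Counter()
    for i in range(len(x) - m + 1):
        counts[tuple(int(r) for r in np.argsort(x[i:i + m]))] += 1
    return counts

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(10_000))       # toy series: a random walk
counts = ordinal_pattern_counts(x, m=3)
freqs = np.array(list(counts.values()), dtype=float)
freqs /= freqs.sum()
print("observed patterns:", len(counts),
      " permutation entropy:", round(float(-(freqs * np.log(freqs)).sum()), 3))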

Jeannette H. C. Woerner (OCIAM) Statistical analysis for discretely observed Lévy processes

We consider Lévy processes X_t given by the Lévy triplet (µ(θ), σ²(θ), ν_θ(dx)), where µ(θ) denotes the drift, σ²(θ) the diffusion part, ν_θ(dx) the Lévy measure and θ is some unknown parameter. Our aim is to establish efficiency results for the estimation of θ when the process is observed at discrete time points. First we prove local asymptotic normality under different sampling schemes and conditions on the Lévy measure, including stable, Gamma, Normal inverse Gaussian and generalized hyperbolic Lévy processes. Furthermore, we apply our results to martingale estimating functions to obtain efficient estimators.

Sec. 11. Generalized Linear Models and Multivariate Statistics

Organizer: Berthold Heiligers (Magdeburg)

Invited Lecture

Holger Dette (Ruhr-Universität Bochum) Optimal designs for multivariate polynomials, continued fractions and some applications to random matrices

In this talk we consider the problem of determining optimal designs in heteroscedastic multivariate polynomial regression models. Mathematically, we are maximizing real-valued functions with probability measures as arguments. We use continued fractions to solve the statistical optimization problem. As a further application we give new proofs for the almost sure approximation of eigenvalues of Wishart matrices by roots of classical orthogonal polynomials and for the asymptotic distribution of the corresponding spectrum.

Contributed Lectures (in alphabetic order)

Bronisław Ceranka, Małgorzata Graczyk (Agricultural University Poznań) Optimum chemical balance weighing design with diagonal variance-covariance matrix of errors

In this paper we study the problem of estimating individual weights of objects in a chemical balance weighing design with a diagonal variance-covariance matrix of errors. We assume that in each weighing operation not all objects are included. All variances of the estimated weights are equal and they attain the lower bound. We give a necessary and sufficient condition under which this lower bound is attained by the variances of each of the estimated weights from this chemical balance weighing design. To construct the design matrix of the optimal chemical balance weighing design we use the incidence matrices of balanced bipartite block designs.

Maria Teresa Gallegos (Universität Passau) Maximum Likelihood Clustering with Outliers

Suppose that we are given a list of n observations in Euclidean space, r of them being realizations of any one of g different normally distributed populations which share a common covariance matrix. We compute the ML-estimator with respect to a certain statistical model with outliers for the parameters of the g populations; it detects outliers and simultaneously partitions the complement into g clusters. It turns out that the estimator unites Rousseeuw’s minimum covariance determinant method and the well-known determinant criterion of clustering analysis. We also propose an efficient algorithm that approximates this estimator.

Michael Hamers (Universität Stuttgart) How well can a regression function be estimated if the distribution of the (random) design is concentrated on a finite set?

In nonparametric regression rate-of-convergence results are often derived under the assumption that the distribution of the design is (in some sense) close to a uniform distribution on some infinite set. But in real data applications the distribution of the design is usually concentrated on some finite set. We show that in this case totally different results concerning the rate of convergence of the estimates can be obtained. E. g., we show that the optimal rate of convergence is K/n, where K is the cardinality of the support of the distribution of the design. We study how this rate can be improved under structural assumptions on the regression function like additivity. Explicit bounds on the L²-error for finite sample size n are derived, and it is shown that these upper bounds are optimal up to constants which do not depend on n.

Angelika van der Linde (Universität Bremen) Mutual information: a key concept in multivariate analysis

For two random vectors X and Y the mutual information is defined to be the (symmetrised) Kullback-Leibler distance between the joint distribution of X and Y and the product of their marginal distributions. Under the assumption of bilinearity of the log odds ratio function (characterising the association between X and Y, as shown by G. Osius [abstract on the next page]) the mutual information can be represented as the trace of the product of a parameter matrix and the covariance matrix of X and Y. The assumption of bilinearity is immediately met by multivariate normal and multinomial distributions but is also often used in modelling with transforms of the originally observed variables. It is shown that eigendecompositions of the matrix underlying the mutual information yield familiar multivariate techniques like canonical correlation analysis or Fisher's linear discriminant analysis, and suggest more generally techniques of dimension reduction yielding in particular generalised linear discriminant functions. The approach is also shown to be useful in solving problems of variable selection.

Karl Mosler (Universität zu Köln) Multivariate central regions and depth: The lift zonoid approach

A probability distribution in d-space can be described by central regions, that is, a family of nested sets which include a properly defined center and whose size and shape reflect the location, scale and general type of the distribution. We consider central regions that are defined by an affine invariant depth function like the Mahalanobis depth, the halfspace depth and, in particular, the zonoid depth. The latter are called zonoid regions. Given two random vectors X and Y, the set inclusion of all their central regions defines a stochastic order which measures dispersion. If X and Y have the same univariate marginals, the comparison of all central region volumes defines a stochastic order which measures dependence. The dispersion order is equivalent to the convex-linear order and slightly weaker than the convex order. The dependence order coincides with the generalized variance order if either X and Y have arbitrary distributions and the depth is the Mahalanobis depth, or X and Y belong to some elliptical family and the depth is arbitrary. Special indices of dispersion and dependence are obtained that are consistent with the respective orders. A d-variate probability distribution is represented by a convex compact in (d + 1)-space, its lift zonoid. With zonoid regions, the dispersion order is the same as the lift zonoid order, that is, the set inclusion of the two lift zonoids.

Gerhard Osius (Universität Bremen) Association between random vectors: characterization and models

A standard approach to investigate the relation between two random vectors X and Y is to sample Y conditional on X and to use a regression model which specifies only the conditional distribution of Y given X. Sometimes, however, it is more appropriate to sample X conditional on Y (e. g. for case-control studies in epidemiology) and use a corresponding model for the conditional distribution of X given Y. For a comparison of both approaches one needs to know exactly which information of the joint distribution is contained in both conditional distributions. For random elements (e. g. vectors) X and Y we provide a definition of their association in terms of an odds ratio function OR. Our main result establishes for any such function OR (satisfying some integrability conditions) and any given marginal distributions of X and Y a unique joint distribution for the pair (X, Y). Specifying only the odds ratio function (but not the marginal distributions) provides a class of association models whose parameters are estimable under both conditional sampling schemes (Y given X or conversely). In particular, log-bilinear odds ratio functions provide a flexible class of models for random vectors including e. g. multivariate logistic regression models. Further applications of these models are given by A. van der Linde [abstract on the preceding page].

Julia Schmelz (Technische Universität München) Estimation for the Multivariate Ordered Probit Model with Markov Chain Monte Carlo Methods

I consider a regression model for ordinal discrete response data on exogenous observed variables (design). Observations consist of independent random vectors that can have inner correlations, such as panel data. I want to estimate probabilities for the response to be in ordered classes depending on values of explanatory variables. Using a threshold approach, I introduce a latent variable and define a deterministic relation between the latent variable state space and the ordered response classes. Simultaneously, I model both the expectation of the latent variable and the correlations by means of explanatory variables. I will present how I manage the estimation with correlated response data as an extension of the independence case of Albert and Johnson (1999).

Ulrich Stadtmüller (Universität Ulm) Generalized functional linear models

In this joint work with H. G. Müller (UC Davis, USA) we propose a generalized functional linear regression model for a regression situation where the response variable is scalar and the predictor is a random function. A linear predictor is obtained by forming the scalar product of the predictor function with a smooth parameter function, and the expected value of the response is related to this linear predictor via a link function. If a variance function is specified, this leads to a functional estimating equation which corresponds to maximizing a functional quasi-likelihood. An essential step in our approach is dimension reduction by approximating the predictor process with a truncated Karhunen-Loève expansion. We develop asymptotic inference for the proposed class of generalized regression models.

Sec. 12. Insurance and Finance

Organizer: Ralf Korn (Kaiserslautern)

Invited Lecture

Rüdiger Kiesel (London School of Economics) Modelling credit risk: Theory and applications

The aim of the talk is to discuss various aspects of the modelling of (portfolios of) credit risky assets. We focus on structural approaches (based on models of the firm value process) and provide a general modelling framework. Several applications will be provided.

Contributed Lectures (in alphabetic order)

Peter Bank (Humboldt-Universität zu Berlin) Optimal consumption choice in the presence of durable and perishable goods

We solve a utility maximization problem where intertemporal utility is obtained both from consuming a perishable and from consuming a durable good. Using the utility gradient approach, we show how this mixed classical/singular control problem can be reduced to a new type of stochastic representation problem. This allows us to provide a general solution to this problem in a general semimartingale context. Most notably it turns out that, while the optimal consumption rate for the perishable good only depends on its current price and the instantaneous stock of durables, the consumption decision for the durable good takes into account the full evolution of prices and preferences.

Karsten Brückner (Otto-von-Guericke-Universität Magdeburg) Return distributions and Risk-return profiles for future time-intervals in the classical Black-Scholes model

Portfolios consisting only of an option and its underlying are considered, where the price processes are assumed to follow the classical Black-Scholes model. For two different strategies of weighting the two assets in the portfolio the return distributions are derived, where the considered (finite) time interval may start in the future. The return distributions are described by their density functions, which — somewhat unexpectedly — are quite complicated. Next we focus on their expectations and variances, which in fact are the important parameters needed for efficiency analysis in the context of portfolio theory.

Felix Esche (Technische Universität Berlin) Preservation of the Lévy Property under an Optimal Change of Measure

We show that under mild conditions the Lévy structure of a P-Lévy process L is preserved under the entropy minimizing martingale measure for L. As a main tool we use semimartingale characteristics to describe the density processes of absolutely continuous martingale measures. Thereby we parametrize the set of martingale measures and the subset of those measures which preserve the Lévy structure of L, so-called Lévy martingale measures.

Kurt Helmes (Humboldt University of Berlin) Pricing Perpetual Russian Options Using Linear Programming

Let Y = (Y_t)_{t≥0} be the price of a stock, Y_0 = y_0. The concept of a Russian put option, introduced by Shepp and Shiryaev, refers to a contract where the buyer of the option is guaranteed the larger of two values, one being a fixed amount ϱ and the other one being the maximum (discounted) value of the stock up to the time the option is exercised; it is assumed that the buyer can borrow or lend unlimited amounts of money at a fixed interest rate r > 0. Assuming Y to be a geometric Brownian motion and no bound on the exercise time, i. e. a perpetual option, Shepp and Shiryaev derived an explicit formula for the fair price of such an option, noting the equivalence of the pricing problem with an optimal stopping problem. In this note we shall compute the price of a Russian option — with and without average time constraints — using numerical methods which are based on a linear programming formulation of optimal stopping problems. The LP approach to optimal stopping exploits a characterization of a stopped Markov process through a family of equations which relate the generator of the process with a pair of measures representing the expected occupation of the process and the distribution of the state when the process is stopped. The computational analysis of Russian options leads to bounds on the fair price of such contracts. The goodness of the numerical results will be illustrated by comparing the results in the case of no constraints with the analytical values.

Klaus Th. Hess (Technische Universität Dresden) An Extension of Panjer's Recursion

Sundt and Jewell (1981) have shown that a nondegenerate distribution Q = {q_n}_{n∈N_0} on N_0 satisfies the recursion

q_{n+1} = ( a + b/(n+1) ) q_n

for all n ∈ N_0 if and only if Q is a binomial, Poisson, or negative binomial distribution. A similar characterization of distributions on N_0 where the recursion holds for all n ≥ 1 and with q_0 = 0 has been obtained by Willmot (1988). In the present talk we extend these results to the case where the recursion holds for all n ≥ k for arbitrary k ∈ N_0 and with q_n = 0 if n < k. This includes the known cases and leads to extensions of the negative binomial and logarithmic distributions. The talk presents joint results with Anett Liewald and Klaus D. Schmidt.

References

[1] Hess, K. Th., Liewald, A. and Schmidt, K. D. (2001). An Extension of Panjer's Recursion. Dresdner Schriften zur Versicherungsmathematik 2/2001.

[2] Sundt, B. and Jewell, W. S. (1981). Further results on recursive evaluation of compound distributions. ASTIN Bull. 12, 27–39.

[3] Willmot (1988). Sundt and Jewell’s family of discrete distributions. ASTIN Bull. 18, 17–29.
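As a quick illustration of the recursion above (not part of the talk), the classical Panjer case a = 0, b = λ with q_0 = e^{−λ} reproduces the Poisson(λ) probabilities; a minimal numerical check, with λ an arbitrary choice:

import math

# Panjer recursion q_{n+1} = (a + b/(n+1)) q_n; a = 0, b = lam gives Poisson(lam).
lam = 2.5                       # arbitrary illustrative value
a, b = 0.0, lam
q = [math.exp(-lam)]            # q_0 for the Poisson case
for n in range(10):
    q.append((a + b / (n + 1)) * q[n])

poisson = [math.exp(-lam) * lam**n / math.factorial(n) for n in range(11)]
print(max(abs(x - y) for x, y in zip(q, poisson)))   # ~0 up to rounding error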

Juri Hinz (Technische Universität Dresden) A production-based approach to valuation of electricity derivatives

We address the problem of pricing contingent claims written on electricity. The special feature of our approach is to consider contract hedging by owning electricity production units. Assuming that the electricity demand follows a Markov process, we calculate equilibrium prices at the production capacity market. This price behavior is used to study arbitrage-free valuation of contingent claims. Further, we investigate hedging of long-term contracts by capacity participations.

Jan Kallsen (Universität Freiburg) Neutral Derivative Pricing in Incomplete Markets

Arbitrage arguments do not suffice to restrict derivative prices to a single value in incomplete markets. We mimic and generalize the arbitrage reasoning in complete models by replacing arbitrage traders with specific utility maximizers. This approach leads to unique prices in incomplete models as well. These neutral derivative values occur if utility maximizers do not profit from trading contingent claims. Put differently, they are the only prices that do not lead to possibly unmatched supply or demand for derivatives created by these traders. This talk deals with motivation, existence, and uniqueness of neutral prices for European as well as American and game options.

Claudia Klüppelberg (Technische Universität München) Optimal portfolios with bounded Capital-at-Risk (CaR)

We investigate some portfolio problems that consist of maximizing expected terminal wealth under the constraint of an upper bound for the risk, where we measure risk by the Capital-at-Risk (CaR) as an alternative to the variance. For any price process which follows an exponential Lévy process the solution of the mean-variance problem has the same structure. For the mean-CaR problem we make use of an approximation of the Lévy process as a sum of a drift term, a Brownian motion and a compound Poisson process. Certain relations between a Lévy process and its stochastic exponential are investigated. This is joint work with Susanne Emmer.

Ralf Korn (Universität Kaiserslautern) Neue Ergebnisse aus der Portfolio-Optimierung

Some recent results from continuous-time portfolio optimization are presented, in particular optimal strategies under the threat of a crash and under a given consumption/investment plan.

Erhard Kremer (University of Hamburg) Generalized dynamic credibility

Credibility Theory is an important subfield of modern Mathematical Risk Theory. First results go back to the beginning of the last century, but it was not until 1967 that the Swiss Bühlmann gave the nowadays customary elegant approach. In principle, credibility methods are a certain fine type of experience rating methods that can be applied in several insurance branches. Bühlmann's 1967 model was still quite simple, but nowadays there exist much more extended models, for example the so-called evolutionary models. Evolutionary credibility methods are strongly related to the field of statistical forecasting, e. g. to the Box-Jenkins and Kalman-filtering techniques. The lecturer himself did some research in the field of evolutionary credibility; in particular, he adapted the Kalman-filtering technique to credibility rating, see the paper Kremer (1995). Already in 1985 the classical Kalman-filtering theory was extended by incorporating tools from the well-known Generalized Linear Models, leading to the so-called Dynamic Generalized Linear Models (see West et al. (1985)). In the lecturer's present paper those tools are adapted to the context of evolutionary credibility rating. Quite new is the proposed procedure for estimating the structural parameters of the underlying dynamic model.

References

[1] Kremer, E. (1995). Empirical Kalman-Credibility. Blätter der Deutschen Gesellschaft für Versicherungsmathematik, 17–28.

[2] West, M., Harrison, P. J. and Migon, H. S. (1985). Dynamic generalized linear models and Bayesian forecasting. JASA 80, 73–83.

Christoph Kühn (Technische Universität München) Game contingent claims in complete and incomplete markets

A game contingent claim is a generalization of an American contingent claim which also enables the seller to terminate it before maturity, but at the expense of a penalty. For complete markets Kifer (2000) shows a connection to a (zero-sum) Dynkin game whose value is the unique no-arbitrage price of the claim. For incomplete markets, however, one needs a more general approach. We interpret the contract as a non-zero-sum stopping game between the buyer and the seller, taking trading possibilities explicitly into consideration. It turns out that a Nash equilibrium must exist only if both the seller and the buyer have an exponential utility function.

Alexander Lindner (Technische Universität München) Tail behavior of the density of the Delta-Gamma model

A possible class of models for the change in a firm's portfolio value over a specified horizon is given by the Delta-Gamma normal models, which are quadratic forms of Gaussian random vectors. In this talk we determine the tail behavior of the density function of such models. This will be used to obtain an approximation for the corresponding Value at Risk. Further applications will also be discussed. The results of this talk are joint work with S. Jaschke and C. Klüppelberg.

Alfred Müller (Universität Karlsruhe) Dependence orders and their applications in insurance and finance

We consider stochastic orders which are suitable for the comparison of random vectors with dependent components. Special emphasis is given to the supermodular order and the directionally convex order. For some stochastic models from financial and actuarial mathematics it is shown how these orders can be used to study the effects of dependence and variability.

References

[1] Müller, Alfred and Stoyan, Dietrich (2002). Comparison methods for stochastic models and risks. John Wiley & Sons, Chichester.

Volkert Paulsen (Universität Kiel) On Optimal Stopping and its Application to Mathematical Finance

In this talk I would like to present some new results on optimal stopping of diffusion processes. They will be applied to several examples coming from mathematical finance. Among others I will discuss the American perpetual put in an extended Black-Scholes model, index options and applications to portfolio optimization.

Hanspeter Schmidli (University of Copenhagen) On minimising the ruin probability by investment and reinsurance

We consider a classical risk model and allow investment into a risky asset modelled as a Black-Scholes model as well as (proportional) reinsurance. Via the Hamilton-Jacobi-Bellman approach we find a candidate for the optimal strategy. We prove a verification theorem in order to show that any increasing solution to the HJB equation is bounded and solves the optimisation problem. Finally we prove that an increasing solution to the HJB equation exists.

Rafael Schmidt (Universität Ulm) Tail dependence for elliptically contoured distributions

The relationship between the theory of multivariate elliptically contoured distributions and the concept of tail dependence is investigated. The tail dependence concept describes the amount of dependence in the upper-right-quadrant tail or lower-left-quadrant tail of a bivariate distribution. Multivariate tail dependent distributions are of special practical interest within credit portfolio modelling, since they are able to incorporate dependencies of extremal credit default events. We show that multivariate elliptically contoured distributions are tail dependent if the tail of their generating random variable is regularly varying. Further we give a necessary condition for tail dependence which is somewhat weaker than regular variation of the latter tail. Finally, the tail dependence concept is discussed for some well-known examples of elliptically contoured distributions, such as the multivariate normal, t, logistic, and generalized symmetric hyperbolic distributions.

Wolfgang Stummer (Universität Karlsruhe) On a Decision Risk Reduction for Asset Price Models

We model the price dynamics X(t) of an asset as a non-lognormally-distributed generalization of the geometric Brownian motion. For a decision problem concerning the size of the drift of X, we estimate the reduction of decision risk that can be obtained by observing the path of X. Furthermore, the corresponding option pricing formula is derived.

Martina Zähle (Friedrich-Schiller-Universität Jena) Long range dependence, no arbitrage and the Black-Scholes formula

A bond and stock model is considered where the driving process is the sum of a Wiener process and a continuous process Z with zero quadratic variation. By means of forward integrals a hedge against Markov-type claims is constructed. Under some natural assumptions on Z and the admissible portfolio processes the model is shown to be arbitrage free. The fair price of the above claims appears to be the same as in the classical case Z = 0. In particular, the Black-Scholes formula remains valid for non-semimartingale models with long range dependence.

Sec. 13. Open Section

Organizer: Norbert Gaffke (Magdeburg)

Contributed Lectures (in alphabetic order)

Ehrhard Behrends (Freie Universität Berlin) Das Paradoxon von Parrondo

The Spanish physicist Parrondo discovered the following paradox. One can define two Markov processes X, Y on the integers in such a way that their expected values tend to minus infinity as n → ∞, yet expected values tending to plus infinity can be achieved if one alternates between the processes at random (that is: on "heads" a step is taken according to X, otherwise according to Y). The paradox itself and its consequences for physics are discussed extensively on the Internet; a Google search yields more than 400 entries for "Parrondo, paradox". The talk shows how the paradox can be explained, using methods from the following areas: Markov chains, graph theory, (elementary) nonlinear functional analysis, and stochastic control theory.
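For illustration, a small simulation sketch using the standard textbook Parrondo games (these parameter values are a common illustrative choice, not necessarily the processes considered in the talk): each game is losing on its own, but mixing them at random produces a positive drift.

import numpy as np

rng = np.random.default_rng(3)
eps = 0.005

def play(choice, capital):
    # game A: slightly unfair coin; game B: win probability depends on capital mod 3
    if choice == "A":
        p = 0.5 - eps
    else:
        p = (0.10 - eps) if capital % 3 == 0 else (0.75 - eps)
    return capital + (1 if rng.random() < p else -1)

def average_gain(strategy, n_steps=100_000):
    capital = 0
    for _ in range(n_steps):
        choice = rng.choice(["A", "B"]) if strategy == "random" else strategy
        capital = play(choice, capital)
    return capital / n_steps

for s in ["A", "B", "random"]:
    print(s, round(average_gain(s), 4))   # A and B drift down, "random" drifts up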

F. Thomas Bruss (Université Libre de Bruxelles) Optimal Stopping on Patterns in Strings

Letters are drawn sequentially from a fixed alphabet, producing strings. Outcomes are mutually independent, possibly from an inhomogeneous source. Given a pattern H (a substring of fixed length) and a string of n letters, the problem we consider is to maximize the probability of identifying correctly the last, or more generally, the k-th last appearance of H up to end location n. Here the identification must be "online", that is, it must be executed in sequential order without backward examination. The solution of this problem (some questions are still open) allows for interesting applications in the domains of search problems, selection problems, investment strategies, and others. The problem is related to earlier work by Conway, Li, Guibas and Odlyzko, and Régnier and Szpankowski. Keywords: Autocorrelation pattern, selection, unimodality, odds-algorithm. (This work is in collaboration with G. Louchard.)
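The keyword "odds-algorithm" refers to Bruss's sum-the-odds rule for stopping online on the last success of independent events; a minimal sketch of that rule (the probabilities below are purely illustrative and not taken from the talk):

import math

# Bruss's odds-algorithm: sum the odds r_i = p_i/(1-p_i) from the back until
# the sum reaches 1, then stop at the first success from that index onward.
p = [0.1, 0.3, 0.2, 0.4, 0.25, 0.15]            # illustrative success probabilities
odds = [pi / (1 - pi) for pi in p]

s, total = 0, 0.0
for i in range(len(p) - 1, -1, -1):
    total += odds[i]
    if total >= 1:
        s = i
        break

# optimal probability of stopping exactly on the last success
win_prob = math.prod(1 - pi for pi in p[s:]) * sum(odds[s:])
print("stop from observation", s + 1, "on; P(win) =", round(win_prob, 4))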

Rudolf Grübel (Universität Hannover) Discrete Extremes and Small Fluctuations in the Analysis of Algorithms

In the probabilistic analysis of many of the standard algorithms used e. g. for arithmetical purposes, for searching or sorting, one observes the phenomenon that some quantity of interest "almost" converges, exhibiting only periodic fluctuations in the limit. We discuss various examples and try to explain this phenomenon as a discretization effect. Most of this is based on joint work with A. Reimers (Theor. Inform. Appl. 35 (2001), 187–206) and Th. Bruss (in preparation).

Sonja Kuhnt (Universität Dortmund) Outliers in Generalized Linear Models

In every statistical analysis observations can occur which seem to deviate strongly from the main part of the data. These observations, usually called outliers, may cause completely misleading results when using standard methods. They may also contain information about special events or dependencies. It is therefore of interest to identify them. We discuss outliers in situations with discrete response, where a generalised linear model is assumed as a null-model. An exact definition of outliers is derived from the α-outlier concept of Davies and Gather (1993). One-step methods for the identification of such outliers in a data set are proposed. For the special case of loglinear Poisson models some one-step identifiers based on robust and non-robust estimators are introduced and compared (Kuhnt, 2000/2001).

References

[1] Davies, P. L., Gather, U. (1993). The Identification of Multiple Outliers. Journal of the American Statistical Association 88, 782–792.

[2] Kuhnt, S. (2000). Ausreißeridentifikation im Loglinearen Poissonmodell für Kontingenztafeln unter Einbeziehung robuster Schätzer. Dissertation, Fachbereich Statistik, Universität Dortmund, Germany.

[3] Kuhnt, S. (2001). Outliers in Contingency Tables. In: Proceedings of the 6th International Conference 'Computer Data Analysis and Modeling', Minsk, Belarus.

Lutz Mattner (University of Leeds) Sums of independent random variables: homomorphisms and optimal inequalities

The first result is a functional analytic characterization of cumulants: Let Prob_∞(R) denote the set of all probability measures on R with all moments finite. Regard Prob_∞(R) as a topological semigroup with respect to convolution and the topology of polynomially weighted total variation distances.

Theorem. Every continuous homomorphism from Prob_∞(R) into a Hausdorff topological group factorizes through the cumulant sequence.

This improves on the main result of [14], where the group was specified to be the circle T. The second result is a rare example of an optimal inequality for a functional of the distribution of the sample mean X̄_n := (1/n) Σ X_i of n i. i. d. random variables X_1, ..., X_n, given a functional of the distribution of X := X_1. Let b(n, p; k) := (n choose k) p^k (1 − p)^{n−k}.

Theorem. E|X̄_n − EX| ≥ c_n E|X − EX| with c_n = b(n, ⌊n/2⌋/n; ⌊n/2⌋). Equality holds iff n = 1 or Y = aX + b for some constants a, b with P(Y = 1) = 1 − P(Y = 0) = ⌊n/2⌋/n.
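A quick numerical sanity check of the second theorem (mine, not part of the abstract) for X ~ Bernoulli(1/2) and n = 2, where the bound should be attained:

import itertools, math

# Check E|X̄_n − EX| >= c_n E|X − EX| for Bernoulli(1/2) variables and n = 2.
n, p = 2, 0.5
k = n // 2
c_n = math.comb(n, k) * ((k / n) ** k) * ((1 - k / n) ** (n - k))

lhs = sum(abs(sum(x) / n - p) * (p ** sum(x)) * ((1 - p) ** (n - sum(x)))
          for x in itertools.product([0, 1], repeat=n))
rhs = c_n * sum(abs(x - p) * (p if x else 1 - p) for x in [0, 1])
print(round(lhs, 4), round(rhs, 4))   # both equal 0.25, so equality holds here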

References

Papers [14], [20] and forthcoming [21] on my homepage http://www.maths.leeds.ac.uk/~mattner/index.html, and their references.

Franz Merkl (Universität Bielefeld) Recent Results in Scenery Reconstruction

Given an i. i. d. random coloring of the integers with finitely many colors (“a scenery”), the scenery reconstruction problem asks whether one can retrieve the scenery provided one is given only observations of the colors seen along the path of a recurrent random walk. Quite obviously, one can expect at most reconstruction up to translation and reflection. In the talk, I will present a joint result with Heinrich Matzinger and Matthias Loewe, which solves a scenery reconstruction problem posed by Kesten: If the random walk may perform jumps of a bounded size and if there are more colors than possible single steps for the random walk, then one can reconstruct almost surely almost every scenery.

Jörg Pawlitschko (Universität Dortmund) Median-based rules for outlier identification in samples from an exponential distribution

We discuss the problem of outlier detection in samples where the regular observations are assumed to come i. i. d. from a one-parameter exponential distribution. The main focus lies on stepwise procedures. Especially for inward testing procedures it is known that reliable identification rules should be based on a robust estimator of the scale parameter. A simple robust estimator is given by the appropriately standardized sample median. We give tractable expressions for the distribution function of test statistics based on this estimator and show that the corresponding outlier identification rules have comparably great power.
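For orientation only (this is not the talk's exact procedure), the standardized sample median mentioned above can be sketched as follows: for exponential data with scale θ the median equals θ log 2, so median/log 2 is a robust scale estimate that, unlike the mean, is barely affected by a few gross outliers.

import numpy as np

# Robust scale estimation for exponential data via the standardized median.
# theta and the planted outliers are arbitrary illustrative choices.
rng = np.random.default_rng(4)
theta = 2.0
x = rng.exponential(theta, size=50)
x[:3] = [40.0, 55.0, 60.0]                     # plant three gross outliers

theta_mean = x.mean()                          # non-robust estimate
theta_med = np.median(x) / np.log(2)           # standardized median (robust)
print("mean-based:", round(theta_mean, 2), " median-based:", round(theta_med, 2))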

Rainer Schwabe, Ulrike Graßhoff, Heiko Großmann, Heinz Holling (Eberhard-Karls-Universität Tübingen, Freie Universität Berlin, Westfälische Wilhelms-Universität, Westfälische Wilhelms-Universität) Effiziente Planung von Paarvergleichen bei beschränkter Profilstärke

In applied disciplines such as medicine, psychology and market research, where respondents are asked to evaluate different alternatives, these alternatives are often presented in the form of paired comparisons in order to keep the complexity of the decision as low as possible: in a paired comparison the respondent only has to decide between two specified alternatives. The response variable is either the dichotomous preference for one of the two alternatives ("alternative A is better than alternative B" versus "alternative B is better than alternative A") or the strength of the preference ("alternative A is worth ... units [e. g. in euros] more or less than alternative B"). The alternatives may differ in a larger number of components such as size, features, functionality etc. The aim of such paired comparison experiments is to find out which components influence the preference behaviour and how large this influence is.

The statistical analysis of paired comparisons is identical to that of paired observations with a fixed (i. e. non-random) pair effect, provided the alternatives are fully specified. With a large number of possibly differing components, however, it is in general too difficult for the respondents to evaluate the presented alternatives correctly. Usually only a few of the components then enter the evaluation. Therefore, in paired comparisons the number of actually specified components is restricted by the profile strength. The choice of which components are specified, however, varies over the presentations. The profile strength typically lies at two or three components, whereas the fully specified alternatives often have more than twenty components.

For the questions described above, suitable modelling approaches (with and without interactions between the individual components) are presented, and optimal designs for the choice of the alternatives to be presented are given.

Parminder Singh, Amar Nath Gill (Guru Nanak Dev University, Panjab University) A class of selection procedures for selecting good populations

Let π_1, ..., π_k be k (≥ 2) independent populations such that the cumulative distribution function (cdf) of an observation from the population π_i is F_i(x) = F((x − µ_i)/θ_i), where F(·) is any absolutely continuous cdf, i = 1, ..., k. Let θ_[1] be the smallest of all the θ_i's, and let the population π_i be defined to be 'good' if θ_i ≤ δ_1 θ_[1], where δ_1 > 1, i = 1, ..., k. A class of selection procedures, based on sample quasi-ranges, is proposed to select a subset of the k populations which includes all the good populations with probability at least P* (a pre-assigned value). Simultaneous confidence intervals for the ratios of scale parameters, which can be derived with the help of the proposed procedures, are discussed. We call the population π_i bad if θ_i > δ_2 θ_[1] (δ_2 > δ_1), i = 1, ..., k. A class of selection procedures is also proposed to guarantee that the probability of either selecting a bad population or omitting a good population is at most 1 − P*.

Teachers' Day

Thursday, March 21, 2002, 2–7 p. m., Building 16/room Hörsaal 5

Plenary Lecture

Robert Ineichen (Université de Fribourg) Würfel, Zufall und Wahrscheinlichkeit — ein Blick auf die Vorgeschichte der Stochastik

The roots of probability theory are usually attributed to the 17th century. However, one can wonder if some notions related to stochastics were not developed before. Our lecture is a tentative answer to this question. It intends to cast some light on the notions of probability in the Antiquity, in the Middle Ages and in the early Modern Times, on the chance evaluation by counting the number of favorable cases and on the notions of statistical regularity. Contents: Introduction — Games of chance with astragali (heel bones of hooved animals), dice, coins — Contingency and probability in Antiquity; epistemic probabilities and aleatory probabilities — First steps toward quantification; "favorable" cases and "unfavorable" cases — Christiaan Huygens and Jakob Bernoulli.

Heinz Klaus Strick (Landrat-Lucas-Gymnasium Leverkusen) Einsatz von EXCEL im Stochastikunterricht

The aim of teaching stochastics is to develop in pupils appropriate intuitions about random phenomena, to convey knowledge of the basic models, and to build quantitative intuitions about probabilities and expected values. A spreadsheet program can render useful services here, since it is easy to use and its graphical capabilities seem almost indispensable for visualization. The talk presents examples of teaching sequences in which the use of Excel has proved its worth (generation of pseudo-random numbers, checking criteria for "randomness", simulation of random experiments, computation of probability distributions and their characteristics, discovery of regularities, simplification of computational algorithms, evaluation of larger data sets in class or in class projects).

Panel Discussion

A panel discussion on the topic "Stochastik für die Schule" will be held under the chairmanship of Norbert Henze (Karlsruhe). The following persons have agreed to be discussants: Joachim Engel (Ludwigsburg), Heinz Klaus Strick (Leverkusen), Sabine Zöllner (Stendal).

List of Authors

(Speakers' page numbers are in bold face.) Bühlmann, P. ...... 27

Adams, S...... 95 Caliebe, A...... 37 Alsmeyer, G...... 36 Catrein, D...... 76 Ambartzumian, R.V...... 72 Ceranka, B...... 133 Andronov, A...... 28 Cheng, F...... 116 Arga¸c,D...... 46 Chi, Z...... 71 Arnold, L...... 58 Christensen, K...... 93 Assing, S...... 96 Christoph, G...... 37 Ayhan, H...... 116 von Collani, E...... 47 Cramer, E...... 47 BÃlaszczyszyn,B...... 72 Crauel, H...... 60 Baake, E...... 97 Cs¨org˝o,S...... 35 Baake, M...... 97 Czado, C...... 11 Baccelli, F...... 72 B¨auerle, N...... 106 Daduna, H...... 106 Baker, C.T.H...... 59 Davis, R.A...... 121 Bank, P...... 58, 139 Dencker, P...... 11 Baringhaus, L...... 13 Dereich, S...... 38 Bauer, H...... 105 Dette, H...... 133 Becker-Kern, P...... 36, 43 Deuschel, J.-D...... 98 Behrends, E...... 147 van Doorn, E.A...... 123 Benveniste, A...... 108 Drees, H...... 38 Berglund, N...... 99 Dumbgen,¨ L...... 12, 13 Bischoff, W...... 10 Blath, J...... 98 Eichelsbacher, P...... 39 Bl¨omker, D...... 58 Einmahl, U...... 39 B¨ohm, S...... 73 El-Shehawy, S...... 101 Brandt, A...... 122 Engelhardt-Funke, O...... 107 Bruckner,¨ K...... 140 Esche, F...... 140 Bruss, F.Th...... 147 Eszlinger, M...... 89 Buchmann, B...... 10 Etheridge, A...... 95 Buckwar, E...... 59 Ettl, M...... 116

155 156

Fabre, E...... 108 Hess, K.Th...... 141 Fass`o,A...... 48 Hesse, M...... 61 Ferger, D...... 12, 48 Heyde, F...... 108 Finner, H...... 22, 87 Hildenbrandt, R...... 109 Franke, J...... 23 Hili, O...... 124 Franx, G.J...... 107 Hillebrand, M...... 74 Franz, C...... 13 Hinz, J...... 142 Franz, J...... 49 Hirth, U...... 61 Freitag, S...... 13 H¨opfner, R...... 62 Fried, R...... 87 Holling, H...... 151 Huˇskov´a,M...... 20 Gallegos, M.T...... 134 Hug, D...... 74 Galtchouk, L...... 123 Gather, U...... 87 Imhof, L...... 100 van de Geer, S...... 9 Ineichen, R...... 5, 153 Geiger, J...... 98 Janssen, A...... 14 Gentz, B...... 99 Georgii, H.-O...... 99 K¨ampke, Th...... 74 Gill, A.N...... 152 Kahle, W...... 50 G¨ob, R...... 49 Kallsen, J...... 142 Graczyk, M...... 133 Kamps, U...... 47 Graßhoff, U...... 151 Kandler, A...... 62 Großmann, H...... 151 Kantz, H...... 126 Grossmann, S...... 100 K´ar´asz,P...... 110 Grubel,¨ R...... 10, 148 Kassmann, M...... 63 Gugg, C...... 58 Kauermann, G...... 88 Guillou, A...... 39 Keller, K...... 125 Kersting, G...... 98 Haar, S...... 108 Kiesel, R...... 139 Hackl, P...... 50 Klar, B...... 51 Hamers, M...... 134 Klesov, O...... 40 Hartung, J...... 46 Kluppelberg,¨ C...... 142 Hausdorf, B...... 73 Knoth, S...... 51 Hausenblas, E...... 60 Kohl, M...... 15 Heinrich, L...... 40 Kohler, M...... 15 Helmes, K...... 140 Kolonko, M...... 112 Hennig, C...... 73 Kolpakov, A.G...... 29 Henze, N...... 14 Korn, R...... 143 Herrmann, S...... 61 Kovac, A...... 16 157

Kremer, E...... 143 Mosler, K...... 135 Krengel, U...... 41 Muller,¨ A...... 51, 144 Krohn, K...... 89 Muller,¨ C...... 20 Kropf, S...... 89 Muller,¨ G...... 30 Kuchler,¨ U...... 125 Muller,¨ H.-O...... 101 Kuenle,¨ H.-U...... 116 Nagel, W...... 78 Kuhn,¨ C...... 144 Naumov, A...... 31 Kulske,¨ C...... 101 Neuhaus, G...... 20 Kuhnt, S...... 148 Neumann, M...... 23 Kumar, P.R...... 5 Neumann, M.H...... 21 Kunz, A...... 63 Neumann, P...... 113 Lachout, P...... 17 Neumeyer, N...... 21 L¨auter, J...... 89 Nollau, V...... 101 Lee, M.-L.T...... 90, 92 Offinger, R...... 21, 102 Lehmann, A...... 52 Okhrin, Y...... 53 Liebscher, E...... 17 Osius, G...... 136 van Lieshout, M.-C...... 75 van der Linde, A...... 135 Pacheco, A...... 46 Lindner, A...... 144 Palmowski, Z...... 116 Lisei, H...... 64 Patzschke, N...... 79 Love, C.E...... 50 Paulsen, V...... 145 Pavlyukevitch, I...... 65 Maier, R...... 75 Pawlitschko, J...... 150 Majewski, K...... 112 Piterbarg, V...... 38 Majidi, A...... 18 Porosi´nski,Z...... 114 Malchin, C...... 113 Prokopenko, S...... 79 Martin, A...... 64 Mathar, R...... 76 Raible, M...... 58 Mattfeldt, T...... 77 Ramsey, D...... 113 Mattner, L...... 149 Rau, C...... 22 Mayer, J...... 75 Redner, O...... 102 Meerschaert, M.M...... 36, 43 Reich, M...... 91 Meise, M...... 19 Reiß, M...... 125 Merkl, F...... 150 Reitmann, V...... 126 M¨ohle, M...... 90 von Renesse, M...... 65 M¨orters, P...... 98 Reynolds, Jr., M.R...... 45 Molchanov, I.S...... 80 Rieder, H...... 127 Morais, M.C...... 46 Rieder, U...... 114 158

R¨ockner, M...... 66 Starkloff, H.-J...... 68 R¨osler, U...... 37 Stein, M.L...... 71 Roos, B...... 41 Steinke, I...... 23 Roßberg, Th...... 115 Steland, A...... 54 Roters, M...... 22 Stockis, J.-P...... 23 Roth, C...... 66 Strassburger, K...... 87 Ruckdeschel, P...... 7 Strick, H.K...... 153 Ruschendorf,¨ L...... 42 Stutzle,¨ W...... 32 Runggaldier, W.J...... 105 Stummer, W...... 146 Sturm, A...... 103 von Sachs, R...... 128 Sturm, K.-Th...... 68 Sch¨al, M...... 115 Stute, W...... 54 Scheffler, H.-P...... 36, 43 Szajowski, K...... 113 Scherbakov, V.V...... 80 Szekli, R...... 130 Scheutzow, M...... 43 Sztrik, J...... 117 Schlather, M...... 81 Schlegel, S...... 116 Theis, W...... 32 Schmalfuß, B...... 67 Tiedge, J...... 54 Schmelz, J...... 136 Tijms, H.C...... 118 Schmid, W...... 53 Schmidli, H...... 145 Utkin, L.V...... 55 Schmidt, R...... 145 Vasil’iev, V...... 23 Schmidt, V...... 73, 75 Vatutin, V.A...... 98 Schuhmacher, D...... 81 Vogel, S...... 17, 119 Schurath, R...... 116 Vogt, P...... 103 Schurz, H...... 67 Voit, M...... 69 Schwabe, R...... 151 Voß-B¨ohme, A...... 104 Schwarz, M...... 116 Sheehan, N.A...... 85 Wackernagel, H...... 83 Sidorova, N...... 68 W¨alder, K...... 91 Simon, R...... 117 Waldmann, K.-H...... 119 Singh, P...... 152 Weber, M...... 70 Skytthe, A...... 93 Weigand, C...... 56 Smolyanov, O.G...... 68 Weiss, V...... 78 Sortais, M...... 103 von Weizs¨acker, H...... 68 Speed, T...... 6 Welty, L...... 71 Spizzichino, F...... 128 Werner, W...... 57 Spodarev, E...... 81 Wessels, J...... 119 Stadtmuller,¨ U...... 137 Whitmore, G.A...... 90, 92 159

Wienke, A...... 93 Winter, A...... 104 Wittfeld, K...... 130 Wittich, O...... 68 Woerner, J.H.C...... 131 Wunderlich, R...... 70 Yakir, B...... 100 Yao, D.D...... 116 Yashin, A.I...... 93 Z¨ahle, M...... 146 Zagrebnov, V...... 99 Zaidmann, R...... 56 Zaigraev, A...... 44 Ziegler, K...... 24 Ziezold, H...... 94 Z¨ollner, A...... 24 Zuyev, S.A...... 80 Zwanzig, S...... 104