Particle Filtering-Based Maximum Likelihood Estimation for Financial Parameter Estimation


Jinzhe Yang — Imperial College & SWIP, London & Edinburgh, UK ([email protected])
Binghuan Lin — Techila Technologies Ltd & Tampere University of Technology, Tampere, Finland ([email protected])
Wayne Luk — Imperial College, London, UK ([email protected])
Terence Nahar — SWIP, London, UK ([email protected])

Abstract—This paper presents a novel method for estimating parameters of financial models with jump diffusions. It is a Particle Filter based Maximum Likelihood Estimation process, which uses particle streams to enable efficient evaluation of constraints and weights. We also provide a CPU-FPGA collaborative design for parameter estimation of the Stochastic Volatility with Correlated and Contemporaneous Jumps model as a case study. The result is evaluated by comparison with a CPU and with a cloud computing platform. We show a 14 times speedup for the FPGA design compared with the CPU, and a similar speedup but better convergence compared with an alternative parallelisation scheme using Techila Middleware in a multi-CPU environment.

I. INTRODUCTION

Particle Filter (PF), also known as the Sequential Monte Carlo (SMC) method, is a computationally intensive state estimation technique applied to dynamic problems involving non-normality and non-linearity. Information about how the method should evolve is contained in every single particle, and in order to acquire accurate estimation results the number of particles is often large. Empirically, the more complex the system, the larger the number of particles required. A common concern is that the computational cost of solving real-world problems with PF is high, and this time constraint limits the applicability of PF in fields with runtime requirements. A detailed discussion of PF methods in practice can be found in [1].

Parameter estimation is one of the most crucial topics in the finance industry, especially for buy-side practitioners. The parameters are estimated from historical data and can be used to predict future movements of the market. Because of the lack of likelihood functions for jump diffusion models, practitioners use Bayesian methods, such as Markov Chain Monte Carlo (MCMC), as the default estimation tools. This paper explores an alternative estimation method, maximum likelihood estimation (MLE), for financial parameter estimation.

Some attempts have been made at parameter estimation using PF, but they only cover models without jump diffusion. The particles are treated as the evaluation function for the MLE, so the likelihood function can also be provided by the PF. The restriction of PF is its heavy computational burden, which might be infeasible for traditional platforms: it can take a few days or even up to weeks to calculate the result, which cannot meet runtime requirements in practice. In addition, high performance computing (HPC) technologies are still new to the finance industry. We provide a PF based MLE solution to estimate parameters of jump diffusion models, and we utilise HPC technology to speed up the estimation scheme so that it is acceptable for practical use.

This paper presents the algorithm development, as well as the pipeline design solution in FPGA technology, of PF based MLE for parameter estimation of the Stochastic Volatility with Correlated and Contemporaneous Jumps (SVCJ) model [2]. The contributions of this paper are summarised as follows:

• We provide a PF-based MLE method that is parallelisable and robust. It is a potential competitor to the default estimation tools used for jump-diffusion models, such as MCMC.
• We develop a collaborative CPU-FPGA platform for this method, making full use of both CPU and FPGA by resampling on the multi-threaded CPU and enabling efficient evaluation of constraints and weights on the FPGA.
• We offer optimisations through algorithmic methods to minimise bottlenecks in complex computation, and through memory optimisation to reduce bandwidth overhead.
• We benchmark different parallelisation schemes of particle filtering on different computing platforms.
II. BACKGROUND

Introducing jump components into the diffusion models used in finance is one of the main developments in the financial modelling literature of the past decade. This paper focuses on the affine jump diffusion model, which has good analytical properties. On the one hand, this class of models allows the derivation of semi-closed form option pricing formulae that enable fast option pricing; this feature is a necessary criterion for a model to be acceptable to the financial industry. On the other hand, the model captures large movements in the financial market by incorporating jump components.

In our approach, the joint dynamic (SVCJ) of log-price s_t and stochastic variance V_t is given by:

ds_t = (\mu - \tfrac{1}{2} V_t)\,dt + \sqrt{V_t}\,dW_t^s + d\Big(\sum_{j=1}^{N_t} Z_j^s\Big)   (1)

dV_t = \kappa(\theta - V_t)\,dt + \sigma_v \sqrt{V_t}\,dW_t^v + d\Big(\sum_{j=1}^{N_t} Z_j^v\Big)   (2)

where N_t ∼ Poi(λ dt) is a Poisson distributed random number that describes the number of jumps, and Z_j^s = \mu_y + \rho_j Z_j^v + \sigma_y \epsilon_j with \epsilon_j ∼ N(0, 1) (normal distribution) and Z_j^v ∼ exp(\mu_v) (exponential distribution) are the random jump sizes in the log-price process and the variance process, respectively.

The set of parameters of the SVCJ model that we estimate is: µ, µv, µy, κ, θ, σv, σy, ρ, ρj and λ, where

• µ: long term mean of the return
• µv: rate of exponential jump size in volatility
• µy: mean of jump size in return
• κ: rate of mean reversion
• θ: long term variance
• σv: volatility of volatility
• σy: volatility of jump size in return
• ρ: correlation between the volatility process and the return process
• ρj: correlation between jumps in return and jumps in volatility
• λ: jump intensity

Some of the parameters above do not appear in equations (1) and (2) because they arise when the model is discretised: the model is transformed into functions of a finite set of state variables with a known joint distribution. It is a generic model representing jump-diffusion models:

• When λ = 0 (SV), it is a Heston model [3].
• When µv = 0 (SVJ), it is a Bates model [4].

In other words, the model we estimate is more general than several widely used financial models. Following the literature, we assume the two Brownian motions W^s and W^v are correlated with correlation coefficient ρ. This enables the model to capture the well-known leverage effect in financial time series.
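As a concrete illustration of the discretised dynamics, the following is a minimal Euler-scheme simulation sketch of equations (1) and (2). It is our own illustrative code, not the paper's implementation: the function and parameter names are assumptions, and the correlated jump sizes follow the Z_j^s = µy + ρj Z_j^v + σy ε_j specification given above.

```python
import numpy as np

def simulate_svcj(params, s0, v0, n_steps, dt, rng=None):
    """Euler discretisation of the SVCJ dynamics in equations (1)-(2).

    Illustrative sketch only; parameter names mirror the paper's notation.
    """
    if rng is None:
        rng = np.random.default_rng()
    p = params
    s, v = np.empty(n_steps + 1), np.empty(n_steps + 1)
    s[0], v[0] = s0, v0
    for t in range(n_steps):
        # Brownian increments with correlation rho (leverage effect)
        eps_s = rng.standard_normal()
        eps_v = p["rho"] * eps_s + np.sqrt(1 - p["rho"] ** 2) * rng.standard_normal()
        # Contemporaneous jumps: N_t ~ Poi(lambda*dt), Z^v ~ exp(mu_v),
        # Z^s = mu_y + rho_j * Z^v + sigma_y * N(0,1), summed over the N_t jumps
        n_jump = rng.poisson(p["lam"] * dt)
        z_v = rng.exponential(p["mu_v"], n_jump).sum()
        z_s = p["mu_y"] * n_jump + p["rho_j"] * z_v \
              + p["sigma_y"] * rng.standard_normal(n_jump).sum()
        v_pos = max(v[t], 0.0)  # full truncation keeps sqrt(V_t) real
        s[t + 1] = s[t] + (p["mu"] - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * eps_s + z_s
        v[t + 1] = v[t] + p["kappa"] * (p["theta"] - v[t]) * dt \
                   + p["sigma_v"] * np.sqrt(v_pos * dt) * eps_v + z_v
    return s, v
```

Setting lam = 0 in this sketch recovers a Heston path, and mu_v = 0 a Bates path, matching the special cases listed above.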
III. ALGORITHM DEVELOPMENT

Estimation of diffusion models in finance has long been a challenge due to the existence of latent states and the discreteness of the observable data. Mixing in a jump process naturally introduces further difficulties by giving the model more flexibility.

Maximum likelihood estimation (MLE) is a widely used statistical tool for estimating the parameters of a model. By maximising the likelihood function, the estimation procedure fits the model to the observed data. We use stock prices as the set of observations and the variance as the latent state.

We first develop a Sequential Importance Resampling (SIR) strategy for our parameter estimation. It is a standard strategy for models with or without jumps, but its accuracy differs between the two kinds of models. For models without jumps, such as the Heston model, it provides accurate parameter sets. Nevertheless, when it is applied to jump-diffusion models, where we take the distribution q(V_{t+1} | V_t^i, s_{t+1}) as the importance density, the accuracy deteriorates. We extend the SIR algorithm to the auxiliary particle filter (APF) to handle this problem. In contrast to a standard SIR particle filter, where the sampling of V_{t+1} is blind to s_{t+1}, the APF samples V_{t+1} from p(V_{t+1} | V_t, s_{t+1}), using auxiliary variables as an approximation when the exact distribution p(V_{t+1} | V_t, s_{t+1}) is not available.

Based on [6] and [7], we extend the algorithm to the SVCJ model.

Algorithm 1 APF
for t = 1 to T do
    for n = 1 to N do
        Generate the auxiliary variable of volatility as the expected integrated volatility:
            \hat{v}_t = V_{t-1} + \kappa(\theta - V_{t-1}) + \rho\sigma_v(y_{t-1} - \mu + \tfrac{1}{2}V_{t-1} - J_{t-1}) + J_{t-1}\mu_v
        Compute first stage weights w_t \propto p(s_t | s_{t-1}, \hat{v}_t)
    end for
    Resample V_{t-1}, \hat{v}_t according to the first stage weights
    for n = 1 to N do
        Propagate the current particles by generating the number of jumps J, the jump sizes in return Z_y, and the jump sizes in volatility Z_v, using \hat{v}_t
        Generate the volatility from p(V_t | s_t, s_{t-1}, J, Z_v, Z_y)
        Calculate the second stage reweighting \pi as in the standard auxiliary particle filter, using the importance sampling rule
    end for
    Resample V, J, Z_v, Z_y
end for

According to [8], the likelihood function can be derived as:

LIK = \hat{p}(s_1) \prod_{t=2}^{T} \hat{p}(s_t | s_{t-1})   (3)

As proved in [9], the likelihood estimator \hat{L}^d_N is an unbiased estimator of the likelihood L^d of the state-space model (the Euler discretisation of the original jump-diffusion process). Furthermore, the likelihood of the discrete process converges to the likelihood of the diffusion process. Denote the set of model parameters by θ. As ∆t → 0,

L^d(θ) → L^c(θ)   (4)

where L^c is the likelihood of the original jump diffusion process.

Since we obtain an estimate of the likelihood as a byproduct of the particle filtering procedure, it is straightforward to construct the particle filtering-based MLE by taking the particle-filter estimator as the objective function of an optimisation algorithm. To summarise the MLE procedure: for each iteration of the MLE, we input a set of estimated parameters; the PF outputs all corresponding results and evaluates the likelihood function; and the optimiser refers to that likelihood and generates another set of parameters.
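To make this outer MLE loop concrete, here is a minimal sketch of a particle filter likelihood estimator driving an off-the-shelf optimiser. For brevity it uses a plain bootstrap (SIR-style) filter on the no-jump (λ = 0, Heston) special case rather than the two-stage APF of Algorithm 1, and all names are our own assumptions; the point is the structure: each optimiser iteration runs a full filtering pass and receives the estimate of equation (3) as its objective.

```python
import numpy as np
from scipy.optimize import minimize

def pf_loglik(theta_vec, returns, n_particles=1024, dt=1.0):
    """Bootstrap-PF estimate of the log-likelihood (3) for the lambda = 0 case."""
    mu, kappa, theta, sigma_v = theta_vec
    rng = np.random.default_rng(0)  # fixed seed: common random numbers across calls
    v = np.full(n_particles, theta)  # start particles at the long-run variance
    loglik = 0.0
    for y in returns:
        # Blind propagation through the Euler step of equation (2), no jumps
        v_pos = np.maximum(v, 1e-12)
        v = v + kappa * (theta - v) * dt \
            + sigma_v * np.sqrt(v_pos * dt) * rng.standard_normal(n_particles)
        v_pos = np.maximum(v, 1e-12)
        # Weight by the conditional return density p(s_t | s_{t-1}, V_t)
        w = np.exp(-0.5 * (y - (mu - 0.5 * v_pos) * dt) ** 2 / (v_pos * dt)) \
            / np.sqrt(2.0 * np.pi * v_pos * dt)
        w = np.maximum(w, 1e-300)
        loglik += np.log(w.mean())  # p-hat(s_t | s_{t-1}) factor of equation (3)
        # Multinomial resampling (the step placed on the multi-threaded CPU)
        v = v[rng.choice(n_particles, n_particles, p=w / w.sum())]
    return loglik

def estimate(returns):
    """MLE outer loop: the optimiser proposes parameters, the PF scores them."""
    objective = lambda th: -pf_loglik(th, returns)
    x0 = np.array([0.05, 1.0, 0.04, 0.3])  # mu, kappa, theta, sigma_v
    return minimize(objective, x0, method="Nelder-Mead")
```

Freezing the random seed makes the noisy likelihood estimate a deterministic function of the parameters, so a direct-search optimiser such as Nelder-Mead sees a stable surface; in the paper's design, the expensive per-particle evaluation of constraints and weights is the part mapped to the FPGA, while resampling stays on the CPU.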