The Bayesian Approach to Inverse Problems


The Bayesian approach to inverse problems
Youssef Marzouk
Department of Aeronautics and Astronautics, Center for Computational Engineering, Massachusetts Institute of Technology
[email protected], http://uqgroup.mit.edu
ICERM IdeaLab, 7 July 2015

Statistical inference
Why is a statistical perspective useful in inverse problems?
• To characterize uncertainty in the inverse solution
• To understand how this uncertainty depends on the number and quality of observations, features of the forward model, prior information, etc.
• To make probabilistic predictions
• To choose "good" observations or experiments
• To address questions of model error, model validity, and model selection

Bayesian inference
Bayes' rule:
p(θ|y) = p(y|θ) p(θ) / p(y)
Key idea: model parameters θ are treated as random variables. (For simplicity, we let our random variables have densities.)
Notation: θ are the model parameters and y are the data; assume both to be finite-dimensional unless otherwise indicated.
• p(θ) is the prior probability density
• L(θ) ≡ p(y|θ) is the likelihood function
• p(θ|y) is the posterior probability density
• p(y) is the evidence, or equivalently, the marginal likelihood

Summaries of the posterior distribution
What information to extract?
• Posterior mean of θ; maximum a posteriori (MAP) estimate of θ
• Posterior covariance or higher moments of θ
• Quantiles
• Credible intervals: C(y) such that P[θ ∈ C(y) | y] = 1 − α. Credible intervals are not uniquely defined by this condition; thus consider, for example, the HPD (highest posterior density) region.
• Posterior realizations: for direct assessment, or to estimate posterior predictions or other posterior expectations

Bayesian and frequentist statistics
Understanding both perspectives is useful and important. Key differences between these two statistical paradigms:
• Frequentists do not assign probabilities to unknown parameters θ. One can write likelihoods p_θ(y) ≡ p(y|θ) but not priors p(θ) or posteriors; θ is not a random variable.
• In the frequentist viewpoint, there is no single preferred methodology for inverting the relationship between parameters and data. Instead, one considers various estimators θ̂(y) of θ.
• The estimator θ̂ is a random variable. Why? The frequentist paradigm considers y to result from a random and repeatable experiment.

Key differences (continued):
• Evaluate the quality of θ̂ through various criteria: bias, variance, mean-square error, consistency, efficiency, ...
• One common estimator is the maximum likelihood estimator, θ̂_ML = argmax_θ p(y|θ); here p(y|θ) defines a family of distributions indexed by θ.
• Link to the Bayesian approach: the MAP estimate maximizes a "penalized likelihood."
• What about Bayesian versus frequentist prediction of y_new, where y_new ⊥ y | θ? Frequentist: use "plug-in" or other estimators of y_new. Bayesian: posterior prediction via integration.
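To make these definitions concrete, here is a minimal numerical sketch (not taken from the slides) of Bayesian inference for a scalar parameter θ with a nonlinear forward model g(θ), additive Gaussian noise, and a Gaussian prior. The forward map, noise level, prior parameters, and data are all illustrative assumptions; the posterior is evaluated on a grid, and the maximum-likelihood and MAP estimates are reported alongside the posterior mean and an equal-tailed 95% credible interval.

```python
import numpy as np

# Illustrative setup (all values are assumptions, not from the lecture):
# a nonlinear forward model g(theta), additive Gaussian noise, Gaussian prior.
def g(theta):
    return np.exp(theta) / (1.0 + np.exp(theta))     # simple nonlinear forward map

rng = np.random.default_rng(0)
theta_true = 0.8
sigma = 0.05                                          # noise standard deviation
y = g(theta_true) + sigma * rng.normal(size=5)        # five noisy observations

m0, s0 = 0.0, 1.0                                     # prior: theta ~ N(m0, s0^2)

def log_likelihood(theta):
    return -0.5 * np.sum((y - g(theta))**2) / sigma**2

def log_prior(theta):
    return -0.5 * (theta - m0)**2 / s0**2

# Unnormalized log-posterior on a grid, then normalize numerically.
grid = np.linspace(-3.0, 3.0, 2001)
log_post = np.array([log_likelihood(t) + log_prior(t) for t in grid])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, grid)

theta_ml = grid[np.argmax([log_likelihood(t) for t in grid])]   # maximum likelihood
theta_map = grid[np.argmax(log_post)]                           # MAP (penalized likelihood)
theta_mean = np.trapz(grid * post, grid)                        # posterior mean

# Equal-tailed 95% credible interval from the posterior CDF (not the HPD region).
cdf = np.cumsum(post) * (grid[1] - grid[0])
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(f"ML {theta_ml:.3f}  MAP {theta_map:.3f}  mean {theta_mean:.3f}  95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the Gaussian prior acts as a penalty on the log-likelihood, the MAP estimate is pulled from the maximum-likelihood estimate toward the prior mean, which is the "penalized likelihood" link mentioned above.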
Bayesian inference: likelihood functions
• In general, p(y|θ) is a probabilistic model for the data.
• In the inverse problem or parameter estimation context, the likelihood function is where the forward model appears, along with a noise model and (if applicable) an expression for model discrepancy.
• Contrasting example (but not really!): parametric density estimation, where the likelihood function results from the probability density itself.
Selected examples of likelihood functions:
1. Bayesian linear regression
2. Nonlinear forward model g(θ) with additive Gaussian noise
3. Nonlinear forward model with noise + model discrepancy

Bayesian inference: prior distributions
• In ill-posed parameter estimation problems, e.g., inverse problems, prior information plays a key role.
• Intuitive idea: assign lower probability to values of θ that you don't expect to see, higher probability to values of θ that you do expect to see.
Examples:
1. Gaussian processes with specified covariance kernel
2. Gaussian Markov random fields
3. Gaussian priors derived from differential operators
4. Hierarchical priors
5. Besov space priors
6. Higher-level representations (objects, marked point processes)

Gaussian process priors
• Key idea: any finite-dimensional distribution of the stochastic process θ(x, ω): D × Ω → R is multivariate normal. In other words, θ(x, ω) is a collection of jointly Gaussian random variables, indexed by x.
• Specify via a mean function and a covariance function:
  E[θ(x)] = µ(x),  E[(θ(x) − µ(x))(θ(x′) − µ(x′))] = C(x, x′)
• Smoothness of the process is controlled by the behavior of the covariance function as x′ → x.
• Restrictions: stationarity, isotropy, ...

Example: stationary Gaussian random fields
• Prior is a stationary Gaussian random field θ(x, ω): D × Ω → R with D = [0, 1]², shown with an exponential covariance kernel and with a Gaussian covariance kernel.
• Realizations can be written via the Karhunen-Loève expansion:
  θ(x, ω) = µ(x) + Σ_{i=1}^{K} √λ_i c_i(ω) φ_i(x)

Gaussian Markov random fields
• Key idea: discretize space and specify a sparse inverse covariance ("precision") matrix W:
  p(θ) ∝ exp(−(γ/2) θᵀ W θ)
  where γ controls the scale.
• Full conditionals p(θ_i | θ_∼i) are available analytically and may simplify dramatically. Represent as an undirected graphical model.
• Example: E[θ_i | θ_∼i] is just an average of site i's nearest neighbors.
• Quite flexible; even used to simulate textures.

Priors through differential operators
• Key idea: return to the infinite-dimensional setting; again penalize roughness in θ(x).
• Stuart 2010: define the prior using fractional negative powers of the Laplacian A = −∆:
  θ ∼ N(θ_0, β A^{−α})
• Sufficiently large α (α > d/2), along with conditions on the likelihood, ensures that the posterior measure is well defined.

GPs, GMRFs, and SPDEs
In fact, all three "types" of Gaussian priors just described are closely connected.
• Linear fractional SPDE:
  (κ² − ∆)^{β/2} θ(x) = W(x),  x ∈ R^d,  β = ν + d/2,  κ > 0,  ν > 0
• Then θ(x) is a Gaussian field with Matérn covariance:
  C(x, x′) = σ² / (2^{ν−1} Γ(ν)) · (κ‖x − x′‖)^ν K_ν(κ‖x − x′‖)
• The covariance kernel is the Green's function of the differential operator:
  (κ² − ∆)^β C(x, x′) = δ(x − x′)
• ν = 1/2 is equivalent to the exponential covariance; ν → ∞ is equivalent to the squared exponential covariance.
• One can construct a discrete GMRF that approximates the solution of the SPDE (see Lindgren, Rue, and Lindström, JRSSB 2011).
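As a small illustration of the Gaussian process priors above, the following sketch draws zero-mean prior realizations on [0, 1] under an exponential covariance kernel and a squared exponential (Gaussian) covariance kernel; the variance, correlation length, grid size, and jitter are assumptions chosen for the example, not values from the slides.

```python
import numpy as np

def exponential_kernel(x1, x2, sigma2=1.0, ell=0.2):
    """C(x, x') = sigma^2 exp(-|x - x'| / ell): the Matern nu = 1/2 case, rough samples."""
    return sigma2 * np.exp(-np.abs(x1[:, None] - x2[None, :]) / ell)

def squared_exponential_kernel(x1, x2, sigma2=1.0, ell=0.2):
    """C(x, x') = sigma^2 exp(-|x - x'|^2 / (2 ell^2)): the nu -> infinity limit, smooth samples."""
    return sigma2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)

for name, kernel in [("exponential", exponential_kernel),
                     ("squared exponential", squared_exponential_kernel)]:
    C = kernel(x, x)
    # A small diagonal jitter keeps the Cholesky factorization numerically stable.
    L = np.linalg.cholesky(C + 1e-8 * np.eye(len(x)))
    draws = L @ rng.normal(size=(len(x), 3))          # three prior realizations
    print(f"{name:>20s} kernel: sample standard deviation ~ {draws.std():.2f}")
```

Realizations under the exponential kernel are continuous but rough, while those under the squared exponential kernel are smooth, illustrating how the behavior of the covariance function near x′ = x controls the smoothness of the process.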
Hierarchical Gaussian priors
From Calvetti & Somersalo, Inverse Problems 24 (2008) 034013:

[Figure 1: three realizations drawn from the prior (6) with constant variance θ_j = θ_0 (left), and from the corresponding prior where the variance is 100-fold at two points indicated by arrows (right).]

Here X and W are the n-variate random variables with components X_j and W_j, respectively, L is the lower bidiagonal matrix with ones on the diagonal and −1 on the subdiagonal, and D = diag(θ_1, θ_2, ..., θ_n).   (5)

Since W is a standard normal random variable, relation (4) allows us to write the (prior) probability density of X as

  π_prior(x) ∝ exp(−(1/2) ‖D^{−1/2} L x‖²).   (6)

Not surprisingly, the first-order autoregressive Markov model leads to a first-order smoothness prior for the variable X. The variance vector θ expresses the expected variability of the signal over the support interval, and provides a handle to control the qualitative behavior of the signal. Assume, for example, that we set θ_j = θ_0 = const., 1 ≤ j ≤ n, leading to a homogeneous smoothness over the support interval. By changing some of the components, e.g., setting θ_k = θ_ℓ = 100 θ_0 for some k, ℓ, we expect the signal to have jumps of standard deviation √θ_k = √θ_ℓ = 10 √θ_0 at the grid intervals [t_{k−1}, t_k] and [t_{ℓ−1}, t_ℓ]. This is illustrated in figure 1, where we show some random draws from the prior. It is important to note that the higher values of θ_j do not force the jumps, but simply make them more likely to occur by increasing the local variance.

This observation suggests that when the number, location and expected amplitudes of the jumps are known, that is, when the prior information is quantitative, the first-order Markov model provides the means to encode the available information into the prior. Suppose now that the only available information about the solution of the inverse problem is qualitative: jumps may occur, but there is no information about how many, where, and how large they are. Adhering to the Bayesian paradigm, we express this lack of quantitative information by modeling the variance of the Markov process as a random variable. The estimation of the variance vector thus becomes a part of the inverse problem.

[Figure 4: approximation of the MAP estimate of the image (top row) and of the variance (bottom row) after 1, 3 and 5 iterations of the cyclic algorithm, using the GMRES method to compute the update of the image at each iteration step.]

[Figure 5: approximation of the MAP estimate of the image (top row) and of the variance (bottom row) after 1, 3 and 7 iterations of the cyclic algorithm, using the CGLS method to compute the update of the image at each iteration step.]

The graphs displayed in figure 6 refer to the CGLS iteration with the inverse gamma hyperprior. The value of the objective function levels off after five iterations, and this could be the basis for a stopping criterion.
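The first-order smoothness prior in (5)-(6) is straightforward to sample, which makes the role of the variance vector θ concrete. The sketch below is a minimal illustration with assumed values (the grid size, baseline variance θ_0, and the two boosted components are arbitrary choices, not taken from Calvetti & Somersalo): it draws realizations x = L^{-1} D^{1/2} w with w ~ N(0, I), first with constant variance and then with the variance increased 100-fold at two grid points, where jumps become more likely but are not forced.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
theta0 = 0.01                         # assumed baseline variance (arbitrary choice)

# Lower bidiagonal L: ones on the diagonal, -1 on the subdiagonal, as in eq. (5).
L = np.eye(n) - np.diag(np.ones(n - 1), k=-1)

def draw_from_prior(theta, n_draws=3):
    """Sample x with density proportional to exp(-0.5 * ||D^{-1/2} L x||^2),
    i.e. x = L^{-1} D^{1/2} w with w ~ N(0, I)."""
    w = rng.normal(size=(n, n_draws))
    return np.linalg.solve(L, np.sqrt(theta)[:, None] * w)

# Constant variance: homogeneous smoothness over the whole interval.
x_const = draw_from_prior(np.full(n, theta0))

# Variance boosted 100-fold at two (arbitrarily chosen) grid points: increments with
# roughly 10x larger standard deviation become likely there, but are not forced.
theta_jump = np.full(n, theta0)
theta_jump[[60, 140]] = 100.0 * theta0
x_jump = draw_from_prior(theta_jump)

print("max |increment|, constant variance:", np.abs(np.diff(x_const, axis=0)).max())
print("max |increment|, boosted variance :", np.abs(np.diff(x_jump, axis=0)).max())
```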
Recommended publications
  • The Study of Sound Field Reconstruction As an Inverse Problem *
    Proceedings of the Institute of Acoustics. THE STUDY OF SOUND FIELD RECONSTRUCTION AS AN INVERSE PROBLEM *. F. M. Fazi, ISVR, University of Southampton, U.K.; P. A. Nelson, ISVR, University of Southampton, U.K.; R. Potthast, University of Reading, U.K.; J. Seo, ETRI, Korea. 1 INTRODUCTION. The problem of reconstructing a desired sound field using an array of transducers is a subject of large relevance in many branches of acoustics. In this paper, this problem is mathematically formulated as an integral equation of the first kind and therefore represents an inverse problem. It is shown that an exact solution to this problem cannot, in general, be calculated, but that an approximate solution can be computed which minimizes the mean-square error between the desired and reconstructed sound field on the boundary of the reconstruction area. The robustness of the solution can be improved by applying a regularization scheme, as described in the third section of this paper. The sound field reconstruction system is assumed to be an ideally infinite distribution of monopole sources continuously arranged on a three-dimensional surface S that contains the region of space Ω over which the sound field reconstruction is attempted. This is represented in Figure 1. The sound field in that region can be described by the homogeneous Helmholtz equation, which means that no sources of sound or diffracting objects are contained in Ω. It is also assumed that the information on the desired sound field is represented by the knowledge of the acoustic pressure p(x, t) on the boundary ∂Ω of the reconstruction area.
  • Introduction to Inverse Problems
    Introduction to Inverse Problems. Bastian von Harrach ([email protected]), Chair of Optimization and Inverse Problems, University of Stuttgart, Germany. Department of Computational Science & Engineering, Yonsei University, Seoul, South Korea, April 3, 2015.

    Motivation and examples. Laplace's demon (Pierre Simon Laplace, 1814): "An intellect which (...) would know all forces (...) and all positions of all items (...), if this intellect were also vast enough to submit these data to analysis (...); for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes."

    Computational Science / Simulation Technology: if we know all necessary parameters, then we can numerically predict the outcome of an experiment (by solving mathematical formulas). Goals: ▸ Prediction ▸ Optimization ▸ Inversion/Identification.

    Generic simulation problem: given input x, calculate outcome y = F(x), where x ∈ X are the parameters / input, y ∈ Y is the outcome / measurements, and F: X → Y is the functional relation / model. Goals: ▸ Prediction: given x, calculate y = F(x). ▸ Optimization: find x such that F(x) is optimal. ▸ Inversion/Identification: given F(x), calculate x.

    Examples of inverse problems: ▸ Electrical impedance tomography ▸ Computerized
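The excerpt stops at the generic problem statement y = F(x). A small sketch (with an assumed forward operator, not one taken from these lecture notes) shows why the inversion/identification goal is the delicate one: a smoothing forward map is easy and stable to evaluate, while its naive inverse amplifies measurement noise.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed forward model (an illustration, not from the course): y = F x, where F
# is a scaled lower-triangular matrix of ones, i.e. numerical integration on [0, 1].
# The forward map smooths; its inverse is numerical differentiation.
n = 100
h = 1.0 / n
F = h * np.tril(np.ones((n, n)))

t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2.0 * np.pi * t)                  # unknown parameters / input
y = F @ x_true                                    # prediction: stable and easy
y_noisy = y + 1e-2 * rng.normal(size=n)           # measurements with small noise

x_naive = np.linalg.solve(F, y_noisy)             # naive inversion / identification
print("relative data perturbation :", np.linalg.norm(y_noisy - y) / np.linalg.norm(y))
print("relative inversion error   :", np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true))
```

A few percent of noise in the data typically produces an order-of-magnitude larger relative error in the naively identified parameters, which is exactly the instability that regularization and the Bayesian priors discussed above are designed to control.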
  • Introductory Geophysical Inverse Theory
    Introductory Geophysical Inverse Theory. John A. Scales, Martin L. Smith and Sven Treitel. Samizdat Press, Golden · White River Junction. Published by the Samizdat Press, Center for Wave Phenomena, Department of Geophysics, Colorado School of Mines, Golden, Colorado 80401, and New England Research, 76 Olcott Drive, White River Junction, Vermont 05001. © Samizdat Press, 2001. Samizdat Press publications are available via FTP from samizdat.mines.edu or via the WWW from http://samizdat.mines.edu. Permission is given to freely copy these documents. Contents: 1 What Is Inverse Theory (1.1 Too many models; 1.2 No unique answer; 1.3 Implausible models; 1.4 Observations are noisy; 1.5 The beach is not a model; 1.6 Summary; 1.7 Beach Example); 2 A Simple Inverse Problem that Isn't (2.1 A First Stab at ρ; 2.1.1 Measuring Volume; 2.1.2 Measuring Mass; 2.1.3 Computing ρ; 2.2 The Pernicious Effects of Errors; 2.2.1 Errors in Mass Measurement; 2.3 What is an Answer?; 2.3.1 Conditional Probabilities; 2.3.2 What We're Really (Really) After; 2.3.3 A (Short) Tale of Two Experiments; 2.3.4 The Experiments Are Identical; 2.4 What does it mean to condition on the truth?; 2.4.1 Another example); 3 Example: A Vertical Seismic Profile (3.0.2 Travel time fitting).
  • Definitions and Examples of Inverse and Ill-Posed Problems
    © de Gruyter 2008. J. Inv. Ill-Posed Problems 16 (2008), 317–357. DOI 10.1515/JIIP.2008.069. Definitions and examples of inverse and ill-posed problems. S. I. Kabanikhin. Survey paper. Abstract. The terms “inverse problems” and “ill-posed problems” have been steadily and surely gaining popularity in modern science since the middle of the 20th century. A little more than fifty years of studying problems of this kind have shown that a great number of problems from various branches of classical mathematics (computational algebra, differential and integral equations, partial differential equations, functional analysis) can be classified as inverse or ill-posed, and they are among the most complicated ones (since they are unstable and usually nonlinear). At the same time, inverse and ill-posed problems began to be studied and applied systematically in physics, geophysics, medicine, astronomy, and all other areas of knowledge where mathematical methods are used. The reason is that solutions to inverse problems describe important properties of media under study, such as density and velocity of wave propagation, elasticity parameters, conductivity, dielectric permittivity and magnetic permeability, and properties and location of inhomogeneities in inaccessible areas, etc. In this paper we consider definitions and classification of inverse and ill-posed problems and describe some approaches which have been proposed by outstanding Russian mathematicians A. N. Tikhonov, V. K. Ivanov and M. M. Lavrentiev. Key words. Inverse and ill-posed problems, regularization. AMS classification. 65J20, 65J10, 65M32. 1. Introduction. 2. Classification of inverse problems. 3. Examples of inverse and ill-posed problems. 4. Some first results in ill-posed problems theory.
  • A Survey of Matrix Inverse Eigenvalue Problems
    Numerical Analysis Project, November 1966, Manuscript NA-66-37. A Survey of Matrix Inverse Eigenvalue Problems, by Daniel Boley and Gene H. Golub. Numerical Analysis Project, Computer Science Department, Stanford University, Stanford, California 94305.

    A Survey of Matrix Inverse Eigenvalue Problems*. Daniel Boley, University of Minnesota; Gene H. Golub†, Stanford University. ABSTRACT. In this paper, we present a survey of some recent results regarding direct methods for solving certain symmetric inverse eigenvalue problems. The problems we discuss in this paper are those of generating a symmetric matrix, either Jacobi, banded, or some variation thereof, given only some information on the eigenvalues of the matrix itself and some of its principal submatrices. Much of the motivation for the problems discussed in this paper came about from an interest in the inverse Sturm-Liouville problem. *A preliminary version of this report was issued as a technical report of the Computer Science Department, University of Minnesota, TR 8620, May 1988. †The research of the first author was partially supported by NSF Grants DCR-8420935 and DCR-8619029, and that of the second author by NSF Grant DCR-8412314.

    0. Introduction. In this paper, we present a survey of some recent results regarding direct methods for solving certain symmetric inverse eigenvalue problems. Inverse eigenvalue problems have been of interest in many application areas, including particle physics and geology, as well as numerical integration methods based on Gaussian quadrature rules [14]. Examples of applications can be found in [11], [12], [13], [24], [29].
  • Inverse Problems in Geophysics Geos
    INVERSE PROBLEMS IN GEOPHYSICS, GEOS 567. A Set of Lecture Notes by Professors Randall M. Richardson and George Zandt, Department of Geosciences, University of Arizona, Tucson, Arizona 85721. Revised and Updated Summer 2003. Table of Contents: Preface; Chapter 1: Introduction; 1.1 Inverse Theory: What It Is and What It Does; 1.2 Useful Definitions; 1.3 Possible Goals of an Inverse Analysis; 1.4 Nomenclature; 1.5 Examples of Forward Problems; 1.5.1 Example 1: Fitting a Straight Line; 1.5.2 Example 2: Fitting a Parabola; 1.5.3 Example 3: Acoustic Tomography; 1.5.4 Example 4: Seismic Tomography.
  • Geophysical Inverse Problems
    Seismo 1: Introduction. Barbara Romanowicz, Institut de Physique du Globe de Paris and Univ. of California, Berkeley. Les Houches, 6 October 2014. Kola, 1970-1989 – depth: 12.3 km, photographed in 2007. Radius of the Earth: 6,371 km. Geomagnetism, geochemistry/cosmochemistry and seismology contribute to our knowledge of the internal structure, dynamics and evolution of the Earth; they rely on observations at the Earth's surface and on remote sensing by satellites. Lectures: • Lecture 1: Geophysical Inverse Problems • Lecture 2: Body waves, ray theory • Lecture 3: Surface waves, free oscillations • Lecture 4: Seismic tomography – anisotropy • Lecture 5: Forward modeling of deep mantle body waves for fine-scale structure • Lecture 6: Inner core structure and anisotropy.

    Lecture 1: Geophysical Inverse Problems, with a focus on seismic tomography. The "gold buried in the beach" example: an unreasonable model and the true model can both fit the data well. The forward problem – an example: a gravity survey over an unknown buried mass distribution, with gravity measurements taken along the surface. The continuous integral expression relates the data along the surface to the unknown anomalous mass at depth through the physics linking mass and gravity (Newton's universal gravitation), sometimes called the kernel of the integral. Make it a discrete problem: the data are sampled (in time and/or space) and the model is expressed as a finite set of parameters, giving a data vector d and a model vector m. The functional g can be linear or non-linear: if linear, d = Gm; if non-linear, d − d0 = g(m) − g(m0) ≈ G(m − m0). Linear vs. non-linear – parameterization matters! For example, model the unknown anomaly as a sphere of unknown radius R, density anomaly Δρ, and depth b.
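A minimal sketch of the discrete linear forward problem d = Gm described in this excerpt, using a made-up kernel matrix G in place of the gravity kernel from the lecture: it generates synthetic data from a known model and recovers a least-squares estimate of the model vector.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed sizes and kernel: 12 surface measurements, 8 model cells. G is a
# made-up smooth kernel that decays with source-receiver distance; it stands in
# for (and is not) the gravity kernel used in the lecture.
n_data, n_model = 12, 8
xs = np.linspace(0.0, 1.0, n_data)[:, None]       # measurement locations
xm = np.linspace(0.0, 1.0, n_model)[None, :]      # model-cell locations
G = 1.0 / (0.1 + (xs - xm)**2)

m_true = rng.normal(size=n_model)                 # synthetic "true" model vector
d = G @ m_true + 0.01 * rng.normal(size=n_data)   # forward problem: d = G m + noise

# Least-squares estimate of the model from the data.
m_ls, *_ = np.linalg.lstsq(G, d, rcond=None)
print("relative model error:", np.linalg.norm(m_ls - m_true) / np.linalg.norm(m_true))
```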
  • Geophysical Inversion Versus Machine Learning in Inverse Problems Yuji Kim1 and Nori Nakata1
    Geophysical inversion versus machine learning in inverse problems. Yuji Kim and Nori Nakata. https://doi.org/10.1190/tle37120894.1.

    Abstract: Geophysical inversion and machine learning both provide solutions for inverse problems in which we estimate model parameters from observations. Geophysical inversions such as impedance inversion, amplitude-variation-with-offset inversion, and traveltime tomography are commonly used in the industry to yield physical properties from measured seismic data. Machine learning, a data-driven approach, has become popular during the past decades and is useful for solving such inverse problems. An advantage of machine learning methods is that they can be implemented without knowledge of physical equations or theories. The challenges of machine learning lie in acquiring enough training data and selecting relevant parameters, which are essential in obtaining a good quality model. In this study, we compare geo-

    Geophysical inverse problems are often ill-posed, nonunique, and nonlinear problems. A deterministic approach to solve inverse problems is minimizing an objective function. Iterative algorithms such as Newton algorithm, steepest descent, or conjugate gradients have been used widely for linearized inversion. Geophysical inversion usually is based on the physics of the recorded data such as wave equations, scattering theory, or sampling theory. Machine learning is a data-driven, statistical approach to solving ill-posed inverse problems. Machine learning has matured during the past decade in computer science and many other industries, including geophysics, as big-data analysis has become common and computational power has improved. Machine learning solves the problem of optimizing a performance criterion based on statistical analyses using example data or past experience (Alpaydin, 2009).
  • Overview of Inverse Problems Pierre Argoul
    Overview of Inverse Problems. Pierre Argoul. To cite this version: Pierre Argoul. Overview of Inverse Problems. DEA. Parameter Identification in Civil Engineering, Ecole Nationale des Ponts et chaussées, 2012, pp.13. cel-00781172. HAL Id: cel-00781172, https://cel.archives-ouvertes.fr/cel-00781172. Submitted on 25 Jan 2013. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Overview of Inverse Problems. Pierre ARGOUL, Ecole Nationale des Ponts et Chaussées. Parameter Identification in Civil Engineering, July 27, 2012. Contents: Introduction - Definitions; Areas of Use – Historical Development; Different Approaches to Solving Inverse Problems (Functional analysis; Regularization of Ill-Posed Problems; Stochastic or Bayesian Inversion); Conclusion.

    Abstract: This booklet relates the major developments of the evolution of inverse problems. Three sections are contained within. The first offers definitions and the fundamental characteristics of an inverse problem, with a brief history of the birth of inverse problem theory in geophysics given at the beginning. Then, the most well known Internet sites and scientific reviews dedicated to inverse problems are presented, followed by a description of research undertaken along these lines at IFSTTAR (formerly LCPC) since the 1990s.
  • Imaging and Inverse Problems of Electromagnetic Nondestructive Evaluation Steven Gerard Ross Iowa State University
    Iowa State University Capstones, Theses and Dissertations: Retrospective Theses and Dissertations, 1995. Imaging and inverse problems of electromagnetic nondestructive evaluation. Steven Gerard Ross, Iowa State University. Follow this and additional works at: https://lib.dr.iastate.edu/rtd. Part of the Electrical and Electronics Commons. Recommended Citation: Ross, Steven Gerard, "Imaging and inverse problems of electromagnetic nondestructive evaluation" (1995). Retrospective Theses and Dissertations. 10716. https://lib.dr.iastate.edu/rtd/10716. This Dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected].
  • Least Squares Approach to Inverse Problems in Musculoskeletal Biomechanics
    Least Squares Approach to Inverse Problems in Musculoskeletal Biomechanics. Marie Lund Ohlsson and Mårten Gulliksson, Mid Sweden University. FSCN Fibre Science and Communication Network - a research centre at Mid-Sweden University for the forest industry. Rapportserie FSCN - ISSN 1650-5387 2009:52, FSCN-rapport R-09-80, Sundsvall 2009.

    Least Squares Approach to Inverse Problems in Musculoskeletal Biomechanics. Marie Lund Ohlsson∗ and Mårten Gulliksson†. ∗Mid Sweden University, SE-83125 Östersund. †Mid Sweden University, SE-85170 Sundsvall. Abstract. Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. The simulation method is explained in detail. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank by including a regularization term in the objective that handles the mechanical over-determinacy. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing and not pulling.
  • Learning from Examples As an Inverse Problem
    Journal of Machine Learning Research 6 (2005) 883–904. Submitted 12/04; Revised 3/05; Published 5/05. Learning from Examples as an Inverse Problem. Ernesto De Vito ([email protected]), Dipartimento di Matematica, Università di Modena e Reggio Emilia, Modena, Italy, and INFN, Sezione di Genova, Genova, Italy. Lorenzo Rosasco ([email protected]), Andrea Caponnetto ([email protected]), Umberto De Giovannini ([email protected]), Francesca Odone ([email protected]), DISI, Università di Genova, Genova, Italy. Editor: Peter Bartlett.

    Abstract. Many works related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless the connection with inverse problems was considered only for the discrete (finite sample) problem and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and study the interplay between the discrete and continuous problems. From a theoretical point of view, this allows one to draw a clear connection between the consistency approach in learning theory and the stability convergence property in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.
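To show what the regularized least-squares algorithm in this abstract looks like in its simplest finite-dimensional form, here is a Tikhonov (ridge) regression sketch on synthetic data; the linear model, noise level, and regularization values are assumptions chosen for brevity, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic learning problem: noisy observations of a linear function of the inputs.
n, d = 50, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def tikhonov(X, y, lam):
    """Regularized least squares: argmin_w ||X w - y||^2 + lam * ||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in [0.0, 1e-2, 1.0, 100.0]:
    w = tikhonov(X, y, lam)
    print(f"lambda = {lam:7.2f}   estimation error = {np.linalg.norm(w - w_true):.3f}")
```

Increasing the regularization parameter trades data fit for stability; choosing it well is the central practical question in both regularization theory and learning.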