Statistics for Inverse Problems 101


June 5, 2011

Introduction

- Inverse problem: recover an unknown object from indirect, noisy observations.
- Example: deblurring of a signal f.
  Forward problem: μ(x) = ∫₀¹ A(x − t) f(t) dt
  Noisy data: yᵢ = μ(xᵢ) + εᵢ, i = 1, …, n
  Inverse problem: recover f(t) for t ∈ [0, 1]

General model

- A = (Aᵢ), with Aᵢ: H → ℝ and f ∈ H
- yᵢ = Aᵢ[f] + εᵢ, i = 1, …, n
- The recovery of f from A[f] is ill-posed
- The errors εᵢ are modeled as random
- The estimate f̂ is an H-valued random variable

Basic question (UQ)

How good is the estimate f̂?
- What is the distribution of f̂?
- Summarize characteristics of the distribution of f̂: e.g., how likely is it that f̂ will be far from f?

Review: E, Var and Cov

(1) Expected value of a random variable X with pdf f(x): E(X) = μ_X = ∫ x f(x) dx
(2) Variance of X: Var(X) = E((X − μ_X)²)
(3) Covariance of random variables X and Y: Cov(X, Y) = E((X − μ_X)(Y − μ_Y))
(4) Expected value of a random vector X = (Xᵢ): E(X) = μ_X = (E(Xᵢ))
(5) Covariance matrix of X: Var(X) = E((X − μ_X)(X − μ_X)ᵀ) = (Cov(Xᵢ, Xⱼ))
(6) For a fixed n × m matrix A: E(AX) = A E(X) and Var(AX) = A Var(X) Aᵀ

Example: if X = (X₁, …, Xₙ) is correlated with Var(X) = σ²Σ, then
  Var((X₁ + ⋯ + Xₙ)/n) = Var(1ᵀX/n) = (σ²/n²) 1ᵀΣ1

Review: least squares

Model: y = Aβ + ε
- A is n × m with n > m, and AᵀA has a stable inverse
- E(ε) = 0, Var(ε) = σ²I
- Least-squares estimate of β: β̂ = argmin_b ‖y − Ab‖² = (AᵀA)⁻¹Aᵀy
- β̂ is an unbiased estimator of β: E_β(β̂) = β for all β
- Covariance matrix of β̂: Var_β(β̂) = σ²(AᵀA)⁻¹
- If ε is Gaussian N(0, σ²I), then β̂ is the MLE and β̂ ∼ N(β, σ²(AᵀA)⁻¹)
- Unbiased estimator of σ²:
  σ̂² = ‖y − Aβ̂‖²/(n − m) = ‖y − ŷ‖²/(n − dof(H)),
  where ŷ = Aβ̂ = Hy and H = A(AᵀA)⁻¹Aᵀ is the "hat matrix"

Confidence regions

- 1 − α confidence interval for βᵢ: Iᵢ(α): β̂ᵢ ± t_{α/2, n−m} σ̂ √(((AᵀA)⁻¹)ᵢᵢ)
- Confidence intervals are pre-data
- Pointwise vs. simultaneous 1 − α coverage:
  pointwise: P[βᵢ ∈ Iᵢ(α)] = 1 − α for each i
  simultaneous: P[βᵢ ∈ Iᵢ(α) for all i] = 1 − α
- It is easy to find a 1 − α joint confidence region R_α for β that gives simultaneous coverage; moreover, if R_α^f = {f(b) : b ∈ R_α}, then P[f(β) ∈ R_α^f] ≥ 1 − α

Residuals

r = y − Aβ̂ = y − ŷ
- E(r) = 0 and Var(r) = σ²(I − H)
- If ε ∼ N(0, σ²I), then r ∼ N(0, σ²(I − H)), so the residuals do not behave like the original errors ε
- Corrected residuals: r̃ᵢ = rᵢ/√(1 − Hᵢᵢ)
- Corrected residuals may be used for model validation

Review: Bayes' theorem

- Conditional probability measure P(· | B): P(A | B) = P(A ∩ B)/P(B) if P(B) ≠ 0
- Bayes' theorem: for P(A) ≠ 0, P(A | B) = P(B | A) P(A)/P(B)
- For random variables X and Θ with pdfs: f_{Θ|X}(θ | x) ∝ f_{X|Θ}(x | θ) f_Θ(θ)
- More generally: dμ_{Θ|X}/dμ_Θ ∝ dμ_{X|Θ}/dν
- Basic tools: conditional probability measures and Radon–Nikodym derivatives
- Conditional probabilities can be defined via Kolmogorov's definition of conditional expectation or via conditional distributions (e.g., disintegration of measures)

Review: Gaussian Bayesian linear model

y | β ∼ N(Aβ, σ²I), β ∼ N(μ, τ²Σ), with μ, σ, τ and Σ known
- Goal: update the prior distribution of β in light of the data y, i.e., compute the posterior distribution of β given y (by Bayes' theorem):
  β | y ∼ N(μ_y, Σ_y)
  Σ_y = σ² (AᵀA + (σ²/τ²) Σ⁻¹)⁻¹
  μ_y = (AᵀA + (σ²/τ²) Σ⁻¹)⁻¹ (Aᵀy + (σ²/τ²) Σ⁻¹ μ)
      = a weighted average of the least-squares estimate β̂_ls and the prior mean μ
- Summarizing the posterior:
  μ_y = E(β | y): mean and mode of the posterior (MAP)
  Σ_y = Var(β | y): covariance matrix of the posterior
  1 − α credible interval for βᵢ: (μ_y)ᵢ ± z_{α/2} √((Σ_y)ᵢᵢ)
  One can also construct a 1 − α joint credible region for β

Some properties

- μ_y minimizes E‖δ(y) − β‖² among all estimators δ(y)
- μ_y → β̂_ls as σ²/τ² → 0, and μ_y → μ as σ²/τ² → ∞
- μ_y can be used as a frequentist estimate of β, but μ_y is biased, and its frequentist variance is NOT obtained by sampling from the posterior
- Warning: a 1 − α credible region MAY NOT have 1 − α frequentist coverage
- If μ, σ, τ or Σ is unknown, use hierarchical or empirical Bayes methods and MCMC

Parametric reductions for inverse problems

- Transform the infinite-dimensional problem into a finite-dimensional one: A[f] ≈ Aa
- Assumptions:
  y = A[f] + ε = Aa + δ + ε (δ = A[f] − Aa, the discretization error)
  Function representation: f(x) = φ(x)ᵀa
  Penalized least-squares estimate:
  â = argmin_b ‖y − Ab‖² + λ² bᵀSb = (AᵀA + λ²S)⁻¹Aᵀy,  f̂(x) = φ(x)ᵀâ

- Example: yᵢ = ∫₀¹ A(xᵢ − t) f(t) dt + εᵢ
  Quadrature approximation: ∫₀¹ A(xᵢ − t) f(t) dt ≈ (1/m) Σⱼ₌₁ᵐ A(xᵢ − tⱼ) f(tⱼ)
  Discretized system: y = Af + δ + ε, with Aᵢⱼ = A(xᵢ − tⱼ) and fⱼ = f(tⱼ)
  Regularized estimate:
  f̂ = argmin_{g ∈ ℝᵐ} ‖y − Ag‖² + λ²‖Dg‖² = (AᵀA + λ²DᵀD)⁻¹Aᵀy ≡ Ly
  f̂ᵢ = eᵢᵀLy

- Example: Aᵢ continuous on a Hilbert space H. Then there exist orthonormal φₖ ∈ H, orthonormal vₖ ∈ ℝⁿ and non-decreasing λₖ ≥ 0 such that A[φₖ] = λₖvₖ, and
  f = f₀ + f₁, with f₀ ∈ Null(A) and f₁ ∈ Null(A)⊥
  f₀ is not constrained by the data; f₁ = Σₖ₌₁ⁿ aₖφₖ
  Discretized system: y = Va + ε, with V = (λ₁v₁ ⋯ λₙvₙ)
  Regularized estimate: â = argmin_{b ∈ ℝⁿ} ‖y − Vb‖² + λ²‖b‖² ≡ Ly
  f̂(x) = φ(x)ᵀLy, with φ(x) = (φᵢ(x))

- Example: Aᵢ continuous on the RKHS H = Wₘ([0, 1]). Then H = N_{m−1} ⊕ H_m, and there exist φⱼ ∈ H and κᵢ ∈ H such that
  ‖y − A[f]‖² + λ² ∫₀¹ (f₁⁽ᵐ⁾)² = Σᵢ (yᵢ − ⟨κᵢ, f⟩)² + λ² ‖P_{H_m} f‖²
  f = Σⱼ aⱼφⱼ + η, with η ∈ span{φₖ}⊥
  Discretized system: y = Xa + δ + ε
  Regularized estimate: â = argmin_{b ∈ ℝⁿ} ‖y − Xb‖² + λ² bᵀPb ≡ Ly
  f̂(x) = φ(x)ᵀLy, with φ(x) = (φᵢ(x))

To summarize:
  y = Aa + δ + ε, f(x) = φ(x)ᵀa
  â = argmin_b ‖y − Ab‖² + λ² bᵀPb = (AᵀA + λ²P)⁻¹Aᵀy
  f̂(x) = φ(x)ᵀâ

Bias, variance and MSE of f̂

- Is f̂(x) close to f(x) on average?
- Bias:
  Bias(f̂(x)) = E(f̂(x)) − f(x) = φ(x)ᵀB_λ a + φ(x)ᵀG_λAᵀδ,
  where G_λ = (AᵀA + λ²P)⁻¹ and B_λ = G_λAᵀA − I
- Pointwise sampling variability: Var(f̂(x)) = σ²‖A G_λ φ(x)‖²
- Pointwise MSE: MSE(f̂(x)) = Bias(f̂(x))² + Var(f̂(x))
- Integrated MSE: IMSE(f̂) = ∫ E|f̂(x) − f(x)|² dx = ∫ MSE(f̂(x)) dx

Review: are unbiased estimators good?

- They don't have to exist
- If they exist, they may be very difficult to find
- A biased estimator may still be better: the bias–variance trade-off
- In short: don't get too attached to them

What about the bias of f̂?

- Upper bounds are sometimes possible
- Geometric information can be obtained from such bounds
- Optimization approach: find the max/min of the bias subject to f ∈ S
- Average bias: choose a prior for f and compute the mean bias over the prior
- Example: for f in an RKHS H with f(x) = ⟨ρ_x, f⟩:
  Bias(f̂(x)) = ⟨A_x − ρ_x, f⟩, so |Bias(f̂(x))| ≤ ‖A_x − ρ_x‖ ‖f‖

Confidence intervals

- If ε is Gaussian and |Bias(f̂(x))| ≤ B(x), then
  f̂(x) ± z_{α/2}(σ‖A G_λ φ(x)‖ + B(x))
  is a 1 − α confidence interval for f(x)
- Simultaneous 1 − α confidence intervals for E(f̂(x)), x ∈ S: find β such that
  P(sup_{x∈S} |Zᵀ V(x)| ≥ β) ≤ α,
  where the Zᵢ are iid N(0, 1) and V(x) = K G_λ φ(x)/‖K G_λ φ(x)‖, and use results from the theory of maxima of Gaussian processes

Residuals

r = y − Aâ = y − ŷ
- Moments: E(r) = −A Bias(â) + δ and Var(r) = σ²(I − H_λ)²
- Note: Bias(Aâ) = A Bias(â) may be small even when Bias(â) is significant
- Corrected residuals: r̃ᵢ = rᵢ/(1 − (H_λ)ᵢᵢ)

Estimating σ and λ

- λ can be chosen using the same data y (via GCV, the L-curve, the discrepancy principle, etc.) or independent training data; this selection introduces an additional error
- If δ ≈ 0, σ² can be estimated as in least squares:
  σ̂² = ‖y − ŷ‖²/(n − dof(H_λ));
  otherwise one may consider y = μ + ε with μ = A[f] and use methods from nonparametric regression

Resampling methods

Idea: if f̂ and σ̂ are good estimates, then one should be able to generate synthetic data consistent with the actual observations.
- If f̂ is a good estimate, make synthetic data
  y₁* = A[f̂] + ε₁*, …, y_B* = A[f̂] + ε_B*,
  with the εᵦ* drawn from the same distribution as ε.
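The resampling idea can be sketched as a parametric bootstrap. All concrete choices below (kernel, σ, λ, first-difference penalty) are illustrative assumptions; the point is that for a linear estimator f̂ = Ly the bootstrap spread of the re-estimates f̂* should match the closed-form sampling variability σ²LLᵀ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = m = 60
x = np.linspace(0.0, 1.0, n)
# Assumed Gaussian blur kernel for the forward operator
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2) / m
f_true = np.sin(2 * np.pi * x)
sigma, lam = 0.01, 0.05
y = A @ f_true + sigma * rng.standard_normal(n)

# Linear regularized estimator: f_hat = L y
D = np.diff(np.eye(m), axis=0)
L = np.linalg.solve(A.T @ A + lam**2 * D.T @ D, A.T)
f_hat = L @ y

# Parametric bootstrap: y*_b = A f_hat + eps*_b, eps*_b ~ same law as eps,
# then re-estimate f on each synthetic data set
B = 500
f_star = np.array([L @ (A @ f_hat + sigma * rng.standard_normal(n))
                   for _ in range(B)])
se_boot = f_star.std(axis=0)   # bootstrap pointwise standard errors

# Closed form for comparison: Var(f_hat) = sigma^2 L L^T
se_exact = sigma * np.sqrt(np.diag(L @ L.T))
print(f"max relative gap: {np.max(np.abs(se_boot - se_exact) / se_exact):.3f}")
```

Note that this quantifies only the sampling variability of f̂, not its bias; as the notes warn, the bias term must be bounded separately before the synthetic spread can be turned into honest confidence statements.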