The Smart Way to Fight Crime - Machine Intelligence Forecasting


"The Art of Empowering your Workforce" - LE Mentoring Solutions
Prepared by: Phil Costanzo, The Sage Group, 5115 Maryland Way, Suite 204, Brentwood, TN 37027, 917-281-5651, [email protected]

The criminal justice system increasingly uses machine intelligence to forecast crime trends. Specialized computer analysis assists law enforcement agencies, the courts and parole boards in making critical decisions regarding public safety, such as:

- Granting parole to an incarcerated individual
- Establishing conditions for parole
- Recommendations for bail
- Sentencing recommendations
- Where to expect crime to occur
- When to expect crime to occur
- What kind of crime to expect

Risk assessment is now performed against extensive databases that have proven accurate in applying parameters to guide the arrest, sentencing and release of criminal offenders. What does this mean for human analysts? Will they become obsolete, replaced by data entry clerks and teams of criminal justice decision makers who make a final determination from a computer model?

The digital age of crime forecasting is now part of society's daily policing operations. Predictive policing itself is nothing new; what is new is the equipment and software. Methods for predicting crime activity range from the simplistic to the complex, and sophisticated software tools have proven as accurate as a well-trained human analyst. That leaves law enforcement two questions to answer in choosing a crime mapper:

1. Which is more cost effective?
2. Which can deliver accurate results more quickly?

Small communities may have a police department operating on what many would call a shoestring budget. Crime may be minimal, and the community may be small enough that officers know who everyone is and what everyone is doing. In such a situation it may be more cost effective to keep a human analyst on the job than to invest in commercial crime forecasting software.

Police departments managing large, densely populated cities, or sprawling rural areas with populations that are spread out and difficult to track, can benefit from machine-based crime forecasting tools. These systems combine several components to build a complete picture of a crime forecast, much as a meteorologist combines radar and satellite images to build a complete picture of a region's weather for the week ahead.

Geographic Information System (GIS)
Crime events are coded geographically according to census tracts, blocks or grid cells.

SQL Compatible Database
This tool gathers, stores and manages data collected from other databases. As a data importing tool, it can be set up to transfer data automatically according to specified parameters. By automating analysis tasks that would otherwise have to be done manually, it makes data collection faster and more accurate, and it is a tremendous time saver. A minimal sketch of such an automated import follows.
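To make the idea concrete, here is a minimal sketch of an automated import, assuming a hypothetical CSV export file (incidents_export.csv) with the column names used below and SQLite as the analysis database; a commercial data migration tool would play the same role with far more connectors.

```python
# Illustrative sketch: load an assumed RMS CSV export into a SQLite table,
# normalizing the date/time format on the way in. The file name and columns
# are hypothetical, not taken from any specific RMS product.
import csv
import sqlite3
from datetime import datetime

conn = sqlite3.connect("crime_analysis.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS incidents (
        case_no   TEXT PRIMARY KEY,
        crime     TEXT,
        occurred  TEXT,   -- stored as ISO 8601
        latitude  REAL,
        longitude REAL
    )
""")

def normalize(ts: str) -> str:
    """Convert an 'MM/DD/YYYY HH:MM' timestamp to ISO 8601."""
    return datetime.strptime(ts, "%m/%d/%Y %H:%M").isoformat()

with open("incidents_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        conn.execute(
            "INSERT OR IGNORE INTO incidents VALUES (?, ?, ?, ?, ?)",
            (row["case_no"], row["crime"], normalize(row["occurred"]),
             float(row["lat"]), float(row["lon"])),
        )
conn.commit()
```

Scheduling a script like this (with cron or Task Scheduler, for example) is what turns a manual import chore into the automatic transfer described above.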
Statistical Software
Once all the data has been retrieved, it must be interpreted and translated into a format that is easy to understand and put to practical use. The criminal justice system loves statistics, and percentages are easy to understand. Looking at a map and seeing a geographic tag on a neighborhood noting that this particular area is where ninety percent of burglaries occur is exactly the kind of information law enforcement needs to protect citizens and make cities safer by reducing crime. Crime mapping helps law enforcement prevent crime, putting them on the offensive rather than responding to crimes that have already happened.

Law enforcement currently uses one of two methods of machine-based crime forecasting: univariate or multivariate.

Univariate
This method involves fewer data points, which simplifies the crime map. It is often used to predict when certain crimes can be expected.

Multivariate
A more complex method of crime forecasting, it draws input from multiple databases, including property records. It is used to predict where new areas of crime may develop, and it can also identify factors that contribute to an increase or decrease of particular crimes in specific areas.

Machine-based crime forecasting is proving valuable to law enforcement communities. Crime maps cannot prevent all crime, but they are critical tools that let law enforcement recognize where to apply their strength and which policing methods are working. Better organized capabilities mean effective crime prevention techniques can significantly reduce crime, and as a problem area emerges, law enforcement can react in a timely, proactive way that benefits the public welfare. Crime mapping enables police agencies to deploy manpower smarter, which makes our communities safer.

Crime Forecasting Requirements, Data Issues, and Methods

For simple to intermediate crime forecasting methods - the primary focus of this article - most analysts already have many of the software packages they will need. Nevertheless, it is worth briefly discussing the individual components, because the sophistication of each has a bearing on what you will be able to do.

The first component is obviously a geographic information system (GIS). While not strictly required to produce actual crime forecasts, it is useful for displaying results as well as collecting data. For example, a GIS can be used to generate frequency counts of geocoded crime incidents by census tract, census block or grid cell; a bare-bones version of that computation is sketched below. Many GIS systems also have built-in statistical capabilities, such as spatial regression and data manipulation routines. The more sophisticated the GIS, the more you can do with your data.
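The following is a bare-bones, hedged version of that frequency count, binning geocoded incidents into square grid cells without a GIS; the coordinates and the cell size are illustrative assumptions only.

```python
# Illustrative sketch: count crime incidents per square grid cell.
from collections import Counter

CELL = 0.005  # cell size in degrees of latitude/longitude (assumed)

# (latitude, longitude) of geocoded crime events -- made-up sample points.
incidents = [
    (36.033, -86.783), (36.034, -86.781), (36.041, -86.790),
]

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate to the integer (row, col) index of its grid cell."""
    return (int(lat // CELL), int(lon // CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)
for cell, n in counts.most_common():
    print(f"cell {cell}: {n} incident(s)")
```

Grouping by census tract or block is the same computation with a different cell_of function, one that looks the coordinate up in tract boundaries instead of rounding it to a grid index.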
A staple of every crime analysis unit is a SQL compatible database. Databases are essential for gathering, storing and manipulating data that is often retrieved or exported from a variety of other databases, such as an RMS, a CAD system or others. Naturally, there are many ways data can be imported into a database, but depending on the need, another good tool to have in the crime forecasting toolbox (or the crime analyst toolbox in general, for that matter) alongside a SQL database is a data migration tool. Data migration tools are very useful for setting up automated data transfers and exports from multiple databases on a regular schedule. Their advantage is that they can connect to multiple platforms, even platforms in the cloud, and allow the user to manipulate data before it is deposited into the crime analysis database. For example, date/time formats can be changed and fields parsed, meaning many tasks a crime analyst would normally perform manually can be automated, so they are also a significant time saver. Even if a crime analyst does not have direct access to an RMS or CAD database and depends on, say, regular XML export files, a data migration tool can be set up to automatically scan a local or network directory for a new export file and, if it finds one, process it.

Finally, no crime analysis unit should be without a good statistical software package. Most GIS systems and standard productivity software such as Microsoft Excel have some statistical features built in, but to progress beyond the simplest crime forecasting techniques something more powerful is needed. Commercial statistics packages can be very expensive, but fortunately there are excellent open source alternatives.

One issue unique to crime forecasting, compared with forecasting in other disciplines, is that crime is a rare event, yet successfully building a crime forecasting model can require hundreds or thousands of data points. The more data you have, and the higher its quality, the more sophisticated a forecasting model you can build. So the first and most important step is deciding what types of data to use, how to get them, and how to incorporate them into a single dataset.

A related issue is choosing the size of the area by which you want to forecast, as well as the time frame. Many analysts will have no choice but to use census tracts or blocks, or even police beats, but those who use grid cells or some other variable area size will have to experiment. Choose too large an area and you will have plenty of crimes, but the tactical utility of even accurate forecasts diminishes. Choose too small an area and most areas will have counts of zero or one, meaning most forecasting methods will treat areas of high crime as outliers. Naturally, the best size of areal unit depends on the crime type as well as the jurisdiction, and the choice of temporal unit affects how many crimes you have per unit for much the same reasons. The short experiment sketched below illustrates the areal tradeoff.
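As a hedged illustration of that experiment, the sketch below aggregates the same synthetic incidents at three candidate cell sizes and reports how many occupied cells hold only a single crime; the hotspot coordinates and counts are invented for the example.

```python
# Illustrative sketch: how grid cell size changes count sparsity.
import random
from collections import Counter

random.seed(7)
# A made-up year of burglaries clustered around two hotspots.
hotspots = [(36.03, -86.78), (36.07, -86.74)]
incidents = [(lat + random.gauss(0, 0.01), lon + random.gauss(0, 0.01))
             for lat, lon in hotspots for _ in range(200)]

for cell in (0.02, 0.005, 0.001):  # candidate cell sizes in degrees
    counts = Counter((int(lat // cell), int(lon // cell))
                     for lat, lon in incidents)
    lonely = sum(1 for n in counts.values() if n == 1)
    print(f"cell={cell}: {len(counts)} occupied cells, "
          f"{lonely / len(counts):.0%} hold a single crime")
```

As the cell shrinks, the share of single-crime cells climbs toward one, which is exactly the condition under which genuinely high-crime cells start to look like statistical outliers to most forecasting methods.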