2010 Public Attendance Training Calendar


2010 Public Attendance Training Calendar (UK & USA)
User Group Meetings • Web Based Courses

Topics: Time Series • Panel Data • Discrete Choice Analysis • Factor Models • Programming • Economic Modelling & Forecasting • FX Modelling & Forecasting • Financial Investment • Industrial Organisation • Energy Modelling • Medical Statistics

Introduction

Timberlake Consultants are pleased to announce our comprehensive 2010 event schedule, which includes public attendance training courses, User Group Meetings, conferences and web based training. Full details of all of our events and courses can be found on our website, www.timberlake.co.uk, where you can also register your interest and reserve a place on any of our courses. Our dedicated training team is on hand to discuss any requirements you may have, from questions about the courses we offer to travel arrangements and recommendations for hotel accommodation. More courses and events may be added throughout 2010; please visit our website to keep up to date with developments in our training courses and events.

Timberlake On Site

Timberlake Consultants also deliver tailored on-site training courses in all of the software packages in our portfolio. All of our public attendance training courses can also be delivered on site. Please speak to our training team to discuss your requirements.

Web Based Courses & Web Demonstrations

2010 has seen the introduction of our web courses and web demonstration services. Full details are shown on our website, and examples of the services we offer are discussed below. We also now offer technical support services via the web; contact us for more details.

2010 Training Calendar

Prices for all of our courses and events are shown on our website.

March 2010
• The Practice of Econometrics with EViews: 22-26 March 2010, Lancaster University Management School. Software: EViews. Delivered by Prof. Sean Holly, University of Cambridge.

April 2010
• Introduction to Time Series Analysis & Forecasting using STATA: April 2010, New York City, USA. Software: STATA. Delivered by Prof. Robert Yaffee, NYU.
• Time Series Analysis & Forecasting using STATA: April 2010, New York City, USA. Software: STATA. Delivered by Prof. Robert Yaffee, NYU.

May 2010
• Applied Econometrics with STATA: May 2010, Washington DC, USA. Software: STATA. Delivered by Prof. Sean Holly, University of Cambridge.
• Econometrics Modeling with PcGive & Autometrics: May 2010, Washington DC, USA. Software: OxMetrics. Delivered by Prof. Fred Joutz, George Washington University.

June 2010
• Creating Documents with Scientific WorkPlace: 15-17 June 2010, Cass Business School, London. Software: Scientific WorkPlace. Delivered by Dr. Marwan Izzeldin, Lancaster University Management School.
• Applied Econometrics with STATA: 21-24 June 2010, Cass Business School, London. Software: STATA. Delivered by Prof. Sean Holly, University of Cambridge.
• Time Series & Forecasting with EViews: 28-30 June 2010, Cass Business School, London. Software: EViews. Delivered by Dr. Lorenzo Trapani, Cass Business School.
• Applied Statistics for Financial Investment Analysis: June, October and November 2010, New York City, USA. Delivered by Prof. Frank Lieber.

July 2010
• Poland: STATA User Group Meeting, 1 & 2 July 2010, Department of Economics, Warsaw University.
• Econometric Modelling & Programming with MATLAB: 5-6 July 2010, Cass Business School, London. Software: MATLAB. Delivered by Prof. Sean Holly, University of Cambridge.
• Model Selection Using OxMetrics: 5-7 July 2010. Software: OxMetrics. Delivered by Dr. Jennifer Castle, University of Oxford.
• Economic Forecasting: 8-9 July 2010, Cass Business School, London. Software: OxMetrics. Delivered by Dr. Jennifer Castle, University of Oxford.
• Topics in Empirical Industrial Organisation: 2-day course. Software: various. Delivered by Dr. Melvyn Weeks, University of Cambridge.
• Applied Econometrics with STATA: July 2010, California, USA. Software: STATA.
• Practice of Econometrics with EViews: July 2010, New York City, USA (Dr. Paul Turner, Loughborough University) and July 2010, California, USA (Prof. Sean Holly, University of Cambridge).

August 2010
• Bayesian & Classical Approaches to Inference and Model Averaging: 4-day course. Software: various. Delivered by Dr. Melvyn Weeks, University of Cambridge.
• Econometric Programming with EViews: 10 August 2010. Software: EViews. Delivered by Dr. Paul Turner, Loughborough University.
• Time Series Analysis & Forecasting in STATA: 3-day course, Cass Business School, London. Delivered by Dr. Antony Murphy, University of Oxford.

September 2010
• Multi-Level Modelling in STATA: 8 September 2010, Royal Statistical Society, London. Software: STATA. Delivered by Dr. James Carpenter, London School of Hygiene & Tropical Medicine.
• 16th London STATA User Group Meeting: 9 & 10 September 2010, London School of Hygiene & Tropical Medicine, London, UK. Scientific organisers: Dr. Nicholas J. Cox & Prof. Patrick Royston.
• Economics Training Programme for Central Banks & Ministries of Finance: 13-15 September 2010. Software: OxMetrics (PcGive). Delivered by Prof. Sir David Hendry.
• Introduction to Medical Statistics using STATA: 13-16 September 2010. Software: STATA. Delivered by Prof. Stephen Evans & Tim Collier, London School of Hygiene & Tropical Medicine.
• Spain: 3rd STATA User Group Conference, 14 September 2010, Carlos III University, Madrid, Spain.
• 9th OxMetrics User Conference: 16 & 17 September 2010, Cass Business School, London. Scientific organiser: Prof. Giovanni Urga.
• Portugal: 1st STATA User Group Meeting, 17 September 2010, School of Economics and Management, University of Minho, Braga, Portugal.
• The Practice of Econometrics with EViews: 20-23 September 2010, University of Cambridge. Software: EViews. Delivered by Prof. Sean Holly, University of Cambridge.
• Introduction to Programming in Mata: 27 September 2010. Software: STATA. Delivered by Dr. Alfonso Miranda, Imperial College London.
• Introduction to PcGive: 1-day course. Software: OxMetrics. Delivered by Dr. Jurgen Doornik, University of Oxford, principal developer of OxMetrics.
• Introduction to Ox: 1-day course. Software: OxMetrics. Delivered by Dr. Jurgen Doornik.
• Advanced Ox Programming: 2-day course. Software: OxMetrics. Delivered by Dr. Jurgen Doornik.

October 2010
• Introduction to Meta-Analysis: 12 October 2010, Cass Business School, London. Software: STATA. Delivered by Prof. Aurelio Tobias.
• Time Series Modelling & Analysis: 2-day course, Carlos III University, Madrid, Spain. Software: OxMetrics. Delivered by Prof. Andrew Harvey.
• Modelling & Forecasting FX Rates: 2-day course, Cass Business School, London. Delivered by Prof. Lucio Sarno, Cass Business School.

November 2010
• Dynamic Factor Models & Time Series Analysis in STATA: 3-day course, Cass Business School, London. Software: STATA. Delivered by Dr. Arnab Bhattacharjee, University of St Andrews.
• Panel Data Analysis in STATA: 3-day course, Cass Business School, London. Software: STATA. Delivered by Dr. Arnab Bhattacharjee, University of St Andrews.
• Practice of Pooled Time Series & Cross-Section Econometric Modeling in EViews: November 2010, New York City, USA. Software: EViews. Delivered by Prof. Fred Joutz, George Washington University.
• Econometrics Modeling with PcGive & Autometrics: November 2010, New York City, USA. Software: OxMetrics. Delivered by Prof. Fred Joutz, George Washington University.
• Energy Modeling & Forecasting: November 2010, New York City, USA. Delivered by Prof. Fred Joutz, George Washington University.

December 2010
• Financial Econometrics: Cass Business School, London. Software: OxMetrics (PcGive). Delivered by Prof. Giovanni Urga, Cass Business School, London.
• Econometric Analysis with EViews: 15-17 December 2010, Lancaster University. Software: EViews. Delivered by Dr. Marwan Izzeldin, Lancaster University.

Web Based Courses

Interactive courses, delivered over the internet at a time convenient to you.

STATA Fundamentals
Overview: STATA is a powerful tool for data management, statistical analysis and model building. Typically, before performing any type of analysis, you must import, prepare and manipulate your data. This course introduces STATA's most popular and useful commands and procedures for importing, manipulating, transforming and managing data, as well as some commonly used statistical routines. It is ideal for new or beginner-level users who want a head start in learning how to use STATA efficiently.
Who should attend? New or beginner-level users of STATA 11 or any previous version. This course is for those who are considering purchasing, or who already own, STATA.
Prerequisites: Analytical thinking is essential.

Regression Analysis with EViews
Overview: The classical linear regression model, or regression analysis, is the most commonly used technique in applied econometrics. The course focuses on several issues and challenges frequently encountered when building econometric models. Upon completion of the course, you will have gained valuable skills that will help you build statistically sound models and make efficient use of EViews' capabilities.
Who should attend? If you can associate with one or more of the following statements, then this course is for you.
• I have enough data but I do not know where to start.
• My models do not give