The Evaluation Exchange: A Periodical on Emerging Strategies in Evaluating Child and Family Services


Celebrating Our 10th Year!

the evaluation exchange
A PERIODICAL ON EMERGING STRATEGIES IN EVALUATING CHILD AND FAMILY SERVICES
HARVARD FAMILY RESEARCH PROJECT ■ HARVARD GRADUATE SCHOOL OF EDUCATION ■ VOL. IX NO. 4 ■ WINTER 2003/2004


From the Director's Desk

With this issue, The Evaluation Exchange kicks off its tenth year of publication. In our very first issue, I said that the success of this resource would depend on your willingness to be an active participant. A decade later, I want to thank you for heeding that call in ways that have exceeded our expectations. Thank you to the hundreds of authors who have generously shared their experiences and thoughts, to the thousands of subscribers who have read and applied the content, and to the generous funders that have supported the production and free distribution of The Evaluation Exchange. Finally, I want to thank my staff for remaining committed to The Evaluation Exchange and to growing its value.

Rarely does a nonprofit publish a periodical that sustains beyond its inaugural issues. We are extremely proud that The Evaluation Exchange has evolved into a nationally known resource, with a growing and diverse audience of more than 13,000 evaluators, practitioners, policymakers, and funders.

While we want to celebrate this tenth-year milestone, we know that we are operating in a dynamic environment with ever-changing demands and appetites for new ideas presented in new ways. We can't stop to celebrate too long—we need to be reflecting constantly on our field and practice so we can continuously improve our work.

Accordingly, we have dedicated this issue to sharing some of the lessons that will inform our agenda in the future. We begin in our Theory & Practice section with a series of reflections by renowned experts on what the past decade has meant for evaluation. These essays point to areas where we have not come far enough, identify practices we may need to rethink and, in addressing the importance of learning from our progress and success, introduce a theme that emerges several times in this issue.

Several articles offer additional thoughts about recent developments. Michael Scriven offers his perspective on the status of the evaluation profession and discipline. Other articles present nominations for the "best of the worst" evaluation practices, emerging links between program evaluation and organization development, and some surprising findings about changes in university-based evaluation training.

Building on these and our own reflections, this issue also introduces topics that future issues will address in more depth. While our basic format and approach will remain the same, we have included articles that herald our commitment to covering themes we think require more attention in the evaluation arena—diversity, international evaluation, technology, and evaluation of the arts. Upcoming issues will feature and spur dialogue about these topics and others, including program theory, mixed methods, democratic approaches to evaluation, advocacy and activism, accountability, and systems change, which was the topic of our first issue 10 years ago and remains a significant evaluation challenge today.

Like the evaluation field itself, much has changed about The Evaluation Exchange. But our success still depends on the participation of our readers. If you have ideas about other topics you would like to see featured, please don't hesitate to share them with us. We continue to welcome your feedback and contributions.

Heather B. Weiss, Ed.D.
Founder & Director
Harvard Family Research Project


In This Issue

Reflecting on the Past and Future of Evaluation

Theory & Practice: Experts recap the last decade in evaluation (p. 2)
Ask the Expert: Michael Scriven on evaluation as a discipline (p. 7)
Promising Practices: Evaluators' worst practices (p. 8); Efforts to link evaluators worldwide (p. 12); Narrative methods and organization development (p. 14)
Questions & Answers: A conversation with Ricardo Millett (p. 10)
Beyond Basic Training: Evaluation training trends (p. 13)
Book Review: The Success Case Method (p. 16)
Spotlight: Evaluation to inform learning technology policy (p. 17)
Evaluations to Watch: Evaluating a performing and visual arts intervention (p. 18)
New Resources From HFRP (p. 19)


Founder & Director: Heather B. Weiss, Ed.D.
Managing Editor: Julia Coffman
HFRP Contributors: Heather B. Weiss, Ed.D.; Julia Coffman; Tezeta Tulloch
Publications/Communications Manager: Stacey Miller
Publications Assistant: Tezeta Tulloch

©2004 President & Fellows of Harvard College. Published by Harvard Family Research Project, Harvard Graduate School of Education. All rights reserved. This periodical may not be reproduced in whole or in part without written permission from the publisher.

Harvard Family Research Project gratefully acknowledges the support of the Annie E. Casey Foundation, W. K. Kellogg Foundation, and John S. and James L. Knight Foundation. The contents of this publication are solely the responsibility of Harvard Family Research Project and do not necessarily reflect the view of our funders.

The Evaluation Exchange accepts query letters for proposed articles. See our query guidelines on our website at www.gse.harvard.edu/hfrp/eval/submission.html. To request a free subscription to The Evaluation Exchange, email us at [email protected] or call 617-496-4304.


Theory & Practice

Where We've Been and Where We're Going: Experts Reflect and Look Ahead

In this special edition of Theory & Practice, six evaluation experts share their thoughts on how the field has progressed (or regressed) in the last 10 years and consider what the next steps should be.

Articles that appear in Theory & Practice occupy an important position and mission in The Evaluation Exchange. Tied directly to each issue's theme, they lead off the issue and provide a forum for the introduction of compelling new ideas in evaluation with an eye toward their practical application. Articles identify trends and define topics that deserve closer scrutiny or warrant wider dissemination, and inspire evaluators and practitioners to work on their conceptual and methodological refinement.

An examination of the topics covered in Theory & Practice over the last decade reads like a chronicle of many of the major trends and challenges the evaluation profession has grappled with and advanced within that same timeframe—systems change, the rise of results-based accountability in the mid-1990s, advances in mixed methods, learning organizations, the proliferation of complex initiatives, the challenges of evaluating communications and policy change efforts, and the democratization of practices in evaluation methodology, among many others.

As this issue kicks off the tenth year of publication for The Evaluation Exchange, Harvard Family Research Project is devoting this section to a discussion of some of the trends—good and bad—that have impacted the theory and practice of evaluation over the last 10 years.

We asked six experts to reflect on their areas of expertise in evaluation and respond to two questions: (1) Looking through the lens of your unique expertise in evaluation, how is evaluation different today from what it was 10 years ago? and (2) In light of your response, how should evaluators or evaluation adapt to be better prepared for the future?


On Theory-Based Evaluation: Winning Friends and Influencing People
Carol Hirschon Weiss, Professor, Harvard University Graduate School of Education, Cambridge, Massachusetts

One of the amazing things that has happened to evaluation is that it has pervaded the program world. Just about every organization that funds, runs, or develops programs now calls for evaluation. This is true locally, nationally, and internationally; it is almost as true of foundations and voluntary organizations as it is of government agencies. The press for evaluation apparently arises from the current demand for accountability. Programs are increasingly called on to justify their existence, their expenditure of funds, and their achievement of objectives. Behind the calls for accountability is an awareness of the gap between almost unlimited social need and limited resources.

An exciting development in the last few years has been the emergence of evaluations based explicitly on the theory underlying the program. For some time there have been exhortations to base evaluations on program theory. I wrote a section in my 1972 textbook urging evaluators to use the program's assumptions as the framework for evaluation.1 A number of other people have written about program theory as well, including Chen,2 Rossi (with Chen),3 Bickman,4 and Lipsey.5

For a while, nothing seemed to happen; evaluators went about their business in accustomed ways—or in new ways that had nothing to do with program theory. In 1997 I wrote an article, "How Can Theory-Based Evaluation Make Greater Headway?"6 Now theory-based evaluation seems to have blossomed forth. A number of empirical studies have recently been published with the words "theory-based" or "theory-driven" in the titles (e.g., Crew and Anderson,7 Donaldson and Gooler8). The evaluators hold the program up against the explicit claims and …

A second reason has to do with complex programs where randomized assignment is impossible. In these cases, evaluators want some way to try to attribute causality. They want to be able to say, "The program caused these outcomes." Without randomized assignment, causal statements are suspect. But if evaluators can show that the program moved along its expected sequence of steps, and that participants responded in expected ways at each step of the process, then they can claim a reasonable approximation of causal explanation.

1. Weiss, C. H. (1972). Evaluation research: Methods of assessing program effectiveness. Englewood Cliffs, NJ: Prentice-Hall.
2. Chen, H. T. (1990). Issues in constructing program theory. New Directions for Program Evaluation, 47, 7–18.
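Weiss's closing point is essentially a checklist over a causal chain: lay out the program's assumed sequence of steps, then ask whether the evidence moved in the expected direction at every step. The short sketch below illustrates that logic in Python. It is a minimal illustration only; the Step structure, the parenting-program example, and all of the numbers are hypothetical and do not come from the article or from Weiss's own work.

```python
# Minimal sketch of the step-checking logic described above (illustrative only;
# the program theory and every figure below are hypothetical).

from dataclasses import dataclass

@dataclass
class Step:
    description: str        # one link in the program's assumed causal chain
    baseline: float         # indicator value before the program
    observed: float         # indicator value during or after the program
    expect_increase: bool = True

    def held(self) -> bool:
        """Did the indicator move in the direction the program theory expects?"""
        if self.expect_increase:
            return self.observed > self.baseline
        return self.observed < self.baseline

# Hypothetical theory of change for a parenting-education program.
theory = [
    Step("Parents attend workshops regularly", baseline=0.00, observed=0.78),
    Step("Parents learn the targeted practices", baseline=0.35, observed=0.62),
    Step("Parents use the practices at home", baseline=0.28, observed=0.51),
    Step("Children's school readiness improves", baseline=0.40, observed=0.49),
]

broken = [step.description for step in theory if not step.held()]
if broken:
    print("The chain breaks at:", "; ".join(broken))
else:
    print("Every step moved as the program theory predicted;")
    print("that supports a reasonable approximation of causal explanation, not proof.")
```

In practice each check would rest on real measurement, and on comparison groups where they are feasible; the point of the sketch is only that the program theory supplies the ordered sequence of claims the evaluator tests.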