Basic Difference-in-Differences Models in SAS®

How D-I-D you do that? Basic Difference-in-Differences Models in SAS®

E Margaret Warton, Kaiser Permanente Division of Research, Oakland, CA
Melissa M Parker, Kaiser Permanente Division of Research, Oakland, CA
Andrew J Karter, Kaiser Permanente Division of Research, Oakland, CA

ABSTRACT

Long a mainstay in econometrics research, difference-in-differences (D-I-D) models have only recently become more commonly used in health services and epidemiologic research. D-I-D study designs are quasi-experimental, can be used with retrospective observational data, and do not require exposure randomization. The design estimates the difference in pre-post changes in an outcome, comparing an exposed group to an unexposed (reference) group. The outcome change in the unexposed group estimates the expected change in the exposed group had that group been, counterfactually, unexposed. Subtracting this change from the change in the exposed group (the "difference in differences") removes the effects of background secular trends. In the basic D-I-D model, each subject serves as his or her own control, removing confounding by known and unknown individual factors associated with the outcome of interest. Thus, the D-I-D model generates a causal estimate of the change in an outcome associated with initiation of the exposure of interest while controlling for biases due to secular trends and confounding. A basic repeated-measures generalized linear model estimates population-average slopes between two time points for the exposed and unexposed groups and tests whether the slopes differ by including an interaction term between the time and exposure variables. In this paper, we illustrate the concepts behind the basic D-I-D model and present SAS® code for running these models. We include a brief discussion of more advanced D-I-D methods and present a real-world analysis using data from a study of the impact of introducing a value-based insurance design (VBID) medication plan at Kaiser Permanente Northern California on change in medication adherence.

INTRODUCTION

Difference-in-differences (D-I-D) methods have been used in econometrics for several decades but have only recently become more widely used in epidemiology and health research. D-I-D analysis is a quasi-experimental design for longitudinal cohort data with pre- and post-exposure repeated measures. It allows comparison of changes over time in an outcome between exposed and control groups while accounting for underlying secular trends and controlling for both measured and unmeasured confounding. Because the design enforces the time-ordering of exposure and outcome measures, estimates from D-I-D models can be interpreted causally. Simple D-I-D models can be used effectively whenever data are available from a longitudinal pre/post cohort design; either prospective or retrospective data collection is possible, so long as the timing of measurements is known. In this paper, we explain the fundamental D-I-D study design and illustrate a basic analysis using SAS®, specifically the GLM and MIXED procedures, which can account for repeated measures.

THE D-I-D DESIGN

OVERVIEW

The D-I-D design is conceptually simple: measure the change in an outcome between the pre and post periods for an exposed group and for a control group, then subtract one change from the other to obtain the "difference in the differences" between the groups. In other words, the most basic D-I-D study is a "pre-post" design that compares the changes between two groups over two time points.
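The arithmetic can be written out explicitly; the notation below is introduced here for illustration, with \(\bar{Y}\) denoting a group mean of the outcome:

\[
\text{D-I-D} = \left(\bar{Y}_{\text{exposed,post}} - \bar{Y}_{\text{exposed,pre}}\right) - \left(\bar{Y}_{\text{unexposed,post}} - \bar{Y}_{\text{unexposed,pre}}\right)
\]

If secular trends shift both groups equally between the two time points (the parallel-trends assumption implicit in this design), the second difference cancels that shift, and what remains is the change attributable to the exposure.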
EXPOSED VERSUS UNEXPOSED

To use the D-I-D analytic approach, a longitudinal cohort is divided into at least two groups: subjects exposed and unexposed to the condition or treatment of interest. Outcome measures must be available for members of both groups before and after the time point at which exposure occurs for the exposed group. While the time points do not have to be specific calendar dates, or even the same for each subject, a common exposure date is the simplest way to create the pre-post longitudinal study. For this reason, the design is very useful for measuring the results of programs, policies, or protocols that are implemented at a specific time and applied to a subgroup within a population. As with any study design requiring an unexposed comparison group, identification of an appropriate control group is key. The control group should be as similar as possible to the exposed group, observed over the same period of time, and ideally differing only in the exposure.

PRE AND POST MEASURES

In its simplest form, the only data required for a D-I-D analysis are an exposure flag, the outcome measures identified as pre or post, and an identifier variable for each individual (a sketch of such a dataset appears below). In situations where the predicted outcomes should take account of population characteristics (age and sex, for example), these variables can be included in the model and then used to adjust predicted values. The simplest D-I-D models use continuous outcomes, as changes in continuous outcomes are more easily interpreted. D-I-D models can also be used with binary outcomes, although the interpretation is somewhat more complicated; this paper focuses on continuous outcomes.
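As a concrete sketch of this minimal dataset, the hypothetical long-format layout below has one row per subject per period; the dataset and variable names (did_long, subject_id, exposed, post, adherence) are illustrative, not taken from the study.

   /* Minimal D-I-D input data: one row per subject per period */
   data did_long;
      input subject_id exposed post adherence;
      datalines;
   1  1  0  0.62
   1  1  1  0.81
   2  0  0  0.64
   2  0  1  0.70
   ;
   run;

Each subject contributes one pre row (post=0) and one post row (post=1); adjustment covariates such as age or sex would simply be added as extra columns.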
CAUSAL INFERENCE

The pre-post design maintains the time-ordering of events, meeting a basic requirement of any model whose estimates are to be interpreted causally. In addition, in a D-I-D analysis each subject serves as his or her own control: characteristics of a subject that remain the same in both periods cannot be confounders. This holds whether or not those characteristics are measured, so results from the D-I-D model account for both measured and unmeasured confounders. Including the unexposed control group in the model adjusts for underlying temporal trends in the outcome, so the difference between the changes in the exposed and unexposed groups represents change specifically due to the exposure. For these reasons, results from a D-I-D analysis can be interpreted causally, making it an ideal design for pre-post analyses in observational studies.

EXAMPLE STUDY BACKGROUND

Healthcare costs have been rising rapidly in the United States for many years. In the past, nearly all health care plans provided by Kaiser Permanente of Northern California (KPNC) had no deductible and low co-pays. During the past decade, however, employers and individuals have begun to purchase health insurance with a deductible and higher co-pays to reduce premium costs. To counter increasing out-of-pocket costs, new benefit programs have been introduced that reduce costs for effective treatments shown to improve patient health, on the theory that encouraging use of these preventive treatments will lower the cost of providing care over time. Plans like these that are built into the insurance policy are known as value-based insurance design (VBID) benefits.

One important healthcare area experiencing rising costs is prescription medications. As medication costs increase, patients may take lower doses than recommended or stop taking a medication altogether. These behaviors are often measured by medication adherence: the percentage of time over a given period during which a patient has adequate medication available, usually based on prescription refill data. In 2013, KPNC began offering a VBID pharmacy benefit option (VBID Rx) that provides certain prescription medications for free, including drugs to treat high cholesterol, diabetes, and hypertension. Using this potential natural experiment, we wanted to know whether the VBID medication benefit improved adherence to medications for chronic conditions among patients with a deductible plan (Reed 2016). However, we did not have a large enough sample of patients who were on a deductible plan and had the VBID medication plan added later. Instead, we identified a cohort of patients on a non-deductible plan in 2013 whose employers switched to a deductible plan with the VBID benefit at the beginning of 2014. Our comparison cohort consisted of patients on a non-deductible plan in 2013 who switched to a deductible plan with no VBID benefit in 2014. We then compared medication adherence among those with and without the VBID benefit in 2013 and 2014. With the D-I-D design, we could identify changes in medication adherence due to the VBID plan while simultaneously removing the influence of the underlying change to a deductible plan and adjusting for confounding, measured or not, at the subject level. Figure 1 illustrates the study timing, cohort composition, and sample sizes.

Figure 1. Cohort Description for Difference-in-Differences Study of VBID Medication Plan Implementation and Medication Adherence

THE D-I-D STUDY DESIGN IN DETAIL

A graphical illustration is helpful in understanding the D-I-D study design. In Figure 2, A1 and A2 indicate the mean medication adherence values at the pre and post time points, respectively, in the unexposed group; B1 and B2 indicate the corresponding pre and post means in the exposed group. The change in the unexposed group over time is represented by the difference in height between the pre and post mean outcomes, the dashed line. Since the measurement points are a year apart, the slope of this line represents the annual rate of change and is interpreted as the background secular trend in the outcome in a group not affected by the exposure. Similarly, the slope of the solid line indicates the change in medication adherence between the pre and post periods among those who experienced the exposure. In this notation, the D-I-D estimate is the difference between the two changes, (B2 - B1) - (A2 - A1), which the model sketched below estimates directly.
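To make the modeling step concrete, here is a minimal repeated-measures sketch using PROC MIXED, one of the two procedures named above. It assumes the hypothetical did_long dataset sketched earlier; the paper's own code is not reproduced in this excerpt, so treat this as an illustration of the approach rather than the authors' program.

   /* Repeated-measures D-I-D model: the exposed*post interaction
      is the difference-in-differences, (B2 - B1) - (A2 - A1) */
   proc mixed data=did_long;
      class subject_id exposed post;
      model adherence = exposed post exposed*post / solution;
      /* compound-symmetry covariance for the two measures
         taken on each subject */
      repeated / subject=subject_id type=cs;
      /* the four cell means: A1, A2, B1, and B2 of Figure 2 */
      lsmeans exposed*post;
   run;

Because CLASS variables in SAS use reference-level coding by default, verify which levels serve as the reference before interpreting the sign of the interaction estimate, or read the four cell means directly from the LSMEANS output.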