
Can We Crowdsource Language Design?

Preston Tunnell Wilson*, Justin Pombrio, and Shriram Krishnamurthi
Brown University
[email protected], [email protected], [email protected]

*Last name is "Tunnell Wilson" (index under "T").

Abstract

Most programming languages have been designed by committees or individuals. What happens if, instead, we throw open the design process and let lots of programmers weigh in on semantic choices? Will they avoid well-known mistakes like dynamic scope? What do they expect of aliasing? What kind of overloading behavior will they choose?

We investigate this issue by posing questions to programmers on Amazon Mechanical Turk. We examine several language features, in each case using multiple-choice questions to explore programmer preferences. We check the responses for consensus (agreement between people) and consistency (agreement across responses from one person). In general we find low consistency and consensus, potential confusion over mainstream features, and arguably poor design choices. In short, this preliminary evidence does not argue in favor of designing languages based on programmer preference.

CCS Concepts • Software and its engineering → General programming languages; • Social and professional topics → History of programming languages

Keywords crowdsourcing, language design, misconceptions, user studies

ACM Reference format:
Preston Tunnell Wilson, Justin Pombrio, and Shriram Krishnamurthi. 2017. Can We Crowdsource Language Design?. In Proceedings of Onward, Vancouver, Canada, October 22–27, 2017 (Onward! '17), 17 pages.
DOI: 10.1145/nnnnnnn.nnnnnnn

1 Introduction

Programming languages are clearly user interfaces: they are how a programmer communicates their desires to the computer. Programming language design is therefore a form of user interface design.

There are many traditions in interface design, and in design in general. One divisor between these traditions is how design decisions are made. Sometimes, decisions are made by a small number of opinionated designers (think Apple). These have parallels in programming language design, from individual designers to small committees, and this is even codified in (unofficial) titles like Benevolent Dictator for Life. (Community input processes are clearly a hybrid, but at best they only suggest changes, which must then be approved by "the designers".)

There are fewer examples of language design conducted through extensive user studies and user input, though there are a few noteworthy examples that we discuss in section 11. None of these addresses comprehensive, general-purpose languages. Furthermore, many of these results focus on syntax, but relatively little on the semantics, which is at least as important as syntax, even for beginners [11, 31].

In this paper, we assess the feasibility of designing a language to match the expectations and desires of programmers. Concretely, we pose a series of questions on Amazon Mechanical Turk (MTurk) to people with programming experience, to explore the kinds of behaviors programmers would want to see. Our hope is to find one or both of:

Consistency: For related questions, individuals answer the same way.

Consensus: Across individuals, we find similar answers.

Neither one strictly implies the other. Each individual could be internally consistent, but different people may wildly disagree on what they expect. At the other extreme, people might not be internally consistent at all, but everyone may agree in their (inconsistent) expectations.
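To see why neither property implies the other, consider two made-up answer sets. This is a purely illustrative sketch in Python; the scope-rule answer labels are invented for this example and are not taken from our surveys.

# Two workers answering three related questions about one feature.

# Consistency without consensus: each worker answers all three
# questions the same way, yet the two workers disagree.
consistent_no_consensus = [
    ["lexical", "lexical", "lexical"],  # worker 1
    ["dynamic", "dynamic", "dynamic"],  # worker 2
]

# Consensus without consistency: the workers agree on every question,
# but each one flip-flops across the related questions.
consensus_no_consistency = [
    ["lexical", "dynamic", "lexical"],  # worker 1
    ["lexical", "dynamic", "lexical"],  # worker 2
]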
Both properties have consequences for language design. If people generate consensus without consistency, we can still fit a language to their desires, though the language may be unpredictable in surprising ways. On the other hand, if people are internally consistent even though they don't agree with each other, we could imagine creating "personalized" languages (using a mechanism like Racket's #lang [9]), though the resulting languages would be confusing to people who don't share their views. (This is already slightly the case: e.g., numerous scripting languages are superficially similar but have many subtle semantic differences.) In an ideal world, people are both consistent and arrive at a consensus, in which case we could fit a language to their views and the language would likely have predictable behavior.

As Betteridge's Law implies [1], we do not live in that world. Indeed, what we find is that in general, programmers exhibit neither consistency nor consensus. Naturally, however, this paper is only an initial salvo, and the topic requires much more exploration.

Methodology We conducted surveys on MTurk, each focusing on a language feature. Most of these surveys were open for a week; we kept a few open longer to get enough responses. We focused on mainstream features found in most languages. The tasks asked people to report on their previous programming experience and, sometimes, on their experience with the specific feature under study (like object-oriented programming). Those who did not list any experience were eliminated from the data analysis (as were duplicate respondents); we considered their responses invalid and all others valid.

For each feature we asked respondents (called workers or Turkers) to tell us what they thought "a NEW programming language would produce". We then presented a series of programs with multiple choices for the expected output of each one. (Each question also asked if they would want a different answer, but workers rarely selected this option, so we don't discuss it further.) For the programs, we purposely used a syntax that was reminiscent of existing languages, but not identical to any particular one. Every question also allowed "Error" and "Other". All respondents were required to give the reasoning behind their chosen answer; those who picked Error or Other were also asked to state their expectation. In the data analysis, for simplicity we binned all Error answers together, and likewise all Other responses (even though this makes our measures look higher than they really are). Our discussion will also be presented by language feature.

2 Structure of the Data Analysis

In this section we describe our methods and how we will organize the results. All the questions in one section (with one exception: section 8.2) were answered by the same set of workers, but no attempt was made to solicit the same workers across sections. The final paper will be accompanied by a link to the full questions and data sets.

2.1 Comparisons

Within each section, we will compute consensus and consistency for small groups of related questions. We furthermore visualize these measurements, as well as the spread of the data, for most question comparisons.

Summary Tables When comparing answers to just two questions, we will summarize the results in a table. However, for comparisons between three (or more) questions, instead of showing an unwieldy n-dimensional table, we show a cluster tree [16], as described below.

Cluster Trees As an illustrative example of clustering, suppose that there were three questions, and workers fell into two camps, with the first camp always answering "1, 2, 3" and the second camp answering "1, 4, 5". The resulting cluster tree would be:

[Figure: a cluster tree over the 50 responses; the y-axis shows Height (dissimilarity) from 0.0 to 0.6, and the responses form two flat clusters, labeled A and B.]

The height of the point at which subtrees join gives the average distance between the responses found in the subtrees, where the distance between two responses is the fraction of answers on which they differ (i.e., normalized Hamming distance [13]). In this case, within each cluster the responses do not differ at all, so each cluster is joined at height 0. Between the two clusters, responses always differ in 2 out of 3 answers, so they join at height 2/3. (We have been unable to get our package to always show the full y-axis, i.e., to height 1; sorry.) On the other hand, say everyone chose their answers uniformly at random; the cluster tree might be:¹

[Figure: a cluster tree over 50 responses with uniformly random answers; the y-axis shows Height (dissimilarity) from roughly 0.1 to 0.7, and the joins are spread across heights rather than forming flat clusters.]

¹More precisely, we plotted answers chosen uniformly at random to 8 […]
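To make these heights concrete, here is a small sketch of the distance computation just described. Python is our choice purely for illustration (the paper does not include its analysis code, and the helper names are ours); it reproduces the example above, where each camp is internally identical and the two camps differ on 2 of 3 answers.

def hamming(a, b):
    """Normalized Hamming distance: the fraction of questions on which
    two workers' answer vectors differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def join_height(group1, group2):
    """Average-linkage join height: the mean pairwise distance between
    responses drawn from the two subtrees."""
    dists = [hamming(a, b) for a in group1 for b in group2]
    return sum(dists) / len(dists)

# The two camps from the example: camp A always answers (1, 2, 3),
# camp B always answers (1, 4, 5); they agree only on the first answer.
camp_a = [(1, 2, 3)] * 25
camp_b = [(1, 4, 5)] * 25

print(join_height(camp_a, camp_a))  # 0.0: each camp joins at height 0
print(join_height(camp_a, camp_b))  # 0.666...: the camps join at height 2/3

On real response matrices, a standard library routine such as scipy.cluster.hierarchy.linkage with method='average' and metric='hamming' should build the same kind of tree directly.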
The cluster trees we find for our programming surveys show weak clustering, but there are two incidental factors that may contribute to this. First, these trees are sensitive to the number of questions. When there are few questions (some of ours had fewer than 5), the height exaggerates disagreement. Second, we purposely asked questions with the potential for disagreement: there are plenty of straightforward programs whose behavior is uncontroversial, but they wouldn't make useful survey questions for that very reason. We give a more technical description of our cluster trees in appendix A.2.

Consensus Consensus is how much workers give the same answers as each other. We measure it in two ways. First, we give the simple agreement probability p̄. This is the probability that two random workers answered a random question the same way as each other. However, it is well understood [6] that p̄ overstates agreement, because it does not account for the fact that some agreement may be due to chance.
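For concreteness, the sketch below (in the same illustrative Python as before; the layout of the data as one binned answer vector per worker is our assumption) computes p̄ by averaging agreement over every pair of distinct workers and every question.

from itertools import combinations

def simple_agreement(answers):
    """Simple agreement probability p-bar: the chance that two random
    distinct workers gave the same (binned) answer to a random question.
    answers[w][q] is worker w's answer to question q."""
    n_questions = len(answers[0])
    pairs = list(combinations(range(len(answers)), 2))
    agreeing = sum(answers[i][q] == answers[j][q]
                   for i, j in pairs
                   for q in range(n_questions))
    return agreeing / (len(pairs) * n_questions)

# Three workers, three questions: the first two agree on everything,
# the third agrees with them only on the first question.
print(simple_agreement([(1, 2, 3), (1, 2, 3), (1, 4, 5)]))  # 5/9, about 0.56

This is also why p̄ alone overstates agreement: with, say, four equally popular answers to a question, two workers answering at random would still agree a quarter of the time, which motivates the chance-corrected second measure.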