
Sentence Diagram Generation Using Dependency Parsing

Elijah Mayfield
Division of Science and Mathematics
University of Minnesota, Morris
[email protected]

Proceedings of the ACL-IJCNLP 2009 Student Research Workshop, pages 45–53, Suntec, Singapore, 4 August 2009. © 2009 ACL and AFNLP

Abstract

Dependency parsers show syntactic relations between words using a directed graph, but comparing dependency parsers is difficult because of differences in theoretical models. We describe a system to convert dependency models to a structural grammar used in grammar education. Doing so highlights features that are potentially overlooked in the dependency graph, as well as exposing potential weaknesses and limitations in parsing models. Our system performs automated analysis of dependency relations and uses them to populate a data structure we designed to emulate sentence diagrams. This is done by mapping dependency relations between words to the relative positions of those words in a sentence diagram. Using an original metric for judging the accuracy of sentence diagrams, we achieve precision of 85%. Multiple causes for errors are presented as potential areas for improvement in dependency parsers.

1 Dependency parsing

Dependencies are generally considered a strong metric of accuracy in parse trees, as described in (Lin, 1995). In a dependency parse, words are connected to each other through relations, with a head word (the governor) being modified by a dependent word. By converting parse trees to dependency representations before judging accuracy, more detailed syntactic information can be discovered. Recently, however, a number of dependency parsers have been developed that have very different theories of a correct model of dependencies.

Dependency parsers define syntactic relations between words in a sentence. This can be done either through spanning tree search as in (McDonald et al., 2005), which is computationally expensive, or through analysis of another modeling system, such as a phrase structure parse tree, which can introduce errors from the long pipeline. To the best of our knowledge, the first use of dependency relations as an evaluation tool for parse trees was in (Lin, 1995), which described a process for determining heads in phrase structures and assigning modifiers to those heads appropriately. Because of different ways to describe relations between negations, conjunctions, and other grammatical structures, it was immediately clear that comparing different models would be difficult. Research into this area of evaluation produced several new dependency parsers, each using different theories of what constitutes a correct parse. In addition, attempts to model multiple parse trees in a single dependency relation system were often stymied by problems such as differences in tokenization systems. These problems are discussed by (Lin, 1998) in greater detail.

An attempt to reconcile differences between parsers was described in (Marneffe et al., 2006). In this paper, a dependency parser (from herein referred to as the Stanford parser) was developed and compared to two other systems: MINIPAR, described in (Lin, 1998), and the Link parser of (Sleator and Temperley, 1993), which uses a radically different approach but produces a similar, if much more fine-grained, result.

Comparing dependency parsers is difficult. The main problem is that there is no clear way to compare models which mark dependencies differently. For instance, when clauses are linked by a conjunction, the Link parser considers the conjunction related to the subject of a clause, while the Stanford parser links the conjunction to the verb of a clause. In (Marneffe et al., 2006), a simple comparison was used to alleviate this problem, which was based only on the presence of dependencies, without semantic information. This solution loses information and is still subject to many problems in representational differences. Another problem with this approach is that they only used ten sentences for comparison, randomly selected from the Brown corpus. This sparse data set is not necessarily congruous with the overall accuracy of these parsers.

In this paper, we propose a novel solution to the difficulty of converting between dependency models. The options that have previously been presented for comparing dependency models are either too specific to be accurate (relying on annotation schemes that are not adequately parallel for comparison) or too coarse to be useful (such as merely checking for the existence of dependencies). By using a model of language which is not as fine-grained as the models used by dependency parsers, but still contains some semantic information beyond unlabelled relations, a compromise can be made. We show that using linear diagramming models can do this with acceptable error rates, and hope that future work can use this to compare multiple dependency models.

Section 2 describes structural grammar, its history, and its usefulness as a representation of syntax. Section 3 describes our algorithm for conversion from dependency graphs to a structural representation. Section 4 describes the process we used for developing and testing the accuracy of this algorithm, and Section 5 discusses our results and a variety of features, as well as limitations and weaknesses, that we have found in the dependency representation of (Marneffe et al., 2006) as a result of this conversion.

2 Introduction to structural grammar

Structural grammar is an approach to natural language based on the understanding that the majority of sentences in the English language can be matched to one of ten patterns. Each of these patterns has a set of slots. Two slots are universal among these patterns: the subject and the predicate. Three additional slots may also occur: the direct object, the subject complement, and the object complement. A head word fills each of these slots. In addition, any word in a sentence may be modified by an additional word. Finally, anywhere that a word could be used, a substitution may be made, allowing the position of a word to be filled by a multiple-word phrase or an entire subclause, with its own pattern and set of slots.

To understand these relationships better, a standardized system of sentence diagramming has been developed. With a relatively small number of rules, a great deal of information about the function of each word in a sentence can be represented in a compact form, using orientation and other spatial clues. This provides a simpler and intuitive means of visualizing relationships between words, especially when compared to the complexity of directed dependency graphs. For the purposes of this paper, we use the system of diagramming formalized in (Kolln and Funk, 2002).

2.1 History

First developed in the early 20th century, structural grammar was a response to the prescriptive grammar approach of the time. Structural grammar describes how language actually is used, rather than prescribing how grammar should be used. This approach allows an emphasis to be placed on the systematic and formulaic nature of language. A key change involved the shift to general role-based description of the usage of a word, whereas the focus before had been on declaring words to fall into strict categories (such as the eight parts of speech found in Latin).

Beginning with the work of Chomsky in the 1950s on transformational grammar, sentence diagrams, used in both structural and prescriptive approaches, slowly lost favor in educational techniques. This is due to the introduction of transformational grammar, based on generative theories and intrinsic rules of natural language structure. This generative approach is almost universally used in natural language processing, as generative rules are well-suited to computational representation. Nevertheless, both structural and transformational grammar are taught at secondary and undergraduate levels.

2.2 Applications of structural grammar

Structural grammar still has a number of advantages over generative transformational grammar. Because it is designed to emulate the natural usage of language, it is more intuitive for non-experts to understand. It also highlights certain features of sentences, such as dependency relationships between words and targets of actions. Many facets of natural language are difficult to describe using a parse tree or other generative data structure. Using structural techniques, many of these aspects are obvious upon basic analysis.

Figure 1: Diagram of "The students are scholars." and "The students studied their assignment."

Figure 2: Diagram of "Running through the woods is his favorite activity."

By developing an algorithm to automatically analyze a sentence using structural grammar, we hope that the advantages of structural analysis can improve the performance of natural language parsers. By assigning roles to words in a sentence, patterns or structures in natural language that cannot be easily gleaned from a data structure are made obvious, highlighting the limitations of that structure. It is also important to note that while sentence diagrams are primarily used for English, they can be adapted to any language which uses subjects, verbs, and objects (word order is not important in sentence diagramming). This research can therefore be expanded into multilingual dependency parser systems in the future.

… Figure 1 by the words "students," "are," "assignment," and "scholars" respectively. Each slot contains three sets (Heads, Expletives, Conjunctions). With the exception of the Heads slot in Subject and Predicate, all sets may be empty. These sets are populated by words. A word is comprised of three parts: the string it represents, a set of modifying words, and information about its orientation in a diagram. Finally, anywhere that a word may fill a role, it can be replaced by a phrase or subclause. These phrases are represented iden-
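The governor–dependent relations described in Section 1 can be made concrete with a small sketch. This is our own minimal illustration, not code from the paper; the relation labels (nsubj, det, dobj, poss) loosely follow the Stanford typed-dependency style of (Marneffe et al., 2006), applied to the Figure 1 sentence "The students studied their assignment."

```python
# A dependency parse stored as labeled governor -> dependent arcs.
# Sentence: "The students studied their assignment."
dependencies = [
    ("studied", "nsubj", "students"),   # "students" is the subject; its head is "studied"
    ("students", "det", "The"),         # "The" modifies "students"
    ("studied", "dobj", "assignment"),  # "assignment" is the direct object
    ("assignment", "poss", "their"),    # "their" modifies "assignment"
]

def dependents_of(governor):
    """All (relation, dependent) pairs whose head word is `governor`."""
    return [(rel, dep) for gov, rel, dep in dependencies if gov == governor]

print(dependents_of("studied"))  # [('nsubj', 'students'), ('dobj', 'assignment')]
```

Every word except the root ("studied") has exactly one governor, which is what makes the parse a directed graph over the words of the sentence.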
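The slot-and-word data structure described above can likewise be sketched in code. The class and field names here are our own, chosen to mirror the paper's terminology (slots holding Heads, Expletives, and Conjunctions sets; words carrying a string, modifiers, and orientation); the paper's actual implementation may differ.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Word:
    text: str                                      # the string the word represents
    modifiers: list = field(default_factory=list)  # modifying Words
    orientation: str = "horizontal"                # placement information in the diagram

@dataclass
class Slot:
    # All three sets may be empty, except Heads in Subject and Predicate.
    heads: list = field(default_factory=list)
    expletives: list = field(default_factory=list)
    conjunctions: list = field(default_factory=list)

@dataclass
class Diagram:
    # The two universal slots plus the three optional ones.
    subject: Slot
    predicate: Slot
    direct_object: Optional[Slot] = None
    subject_complement: Optional[Slot] = None
    object_complement: Optional[Slot] = None

# Figure 1's "The students are scholars.":
d = Diagram(
    subject=Slot(heads=[Word("students", modifiers=[Word("The")])]),
    predicate=Slot(heads=[Word("are")]),
    subject_complement=Slot(heads=[Word("scholars")]),
)
print(d.subject.heads[0].text)  # students
```

Because any word may fill a role via a phrase or subclause, a fuller version would allow a Slot's heads to contain nested Diagrams as well as Words.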