AUTOMATIC ACQUISITION OF SUBCATEGORIZATION FRAMES FROM UNTAGGED TEXT

Michael R. Brent
MIT AI Lab
545 Technology Square
Cambridge, Massachusetts 02139
[email protected]

ABSTRACT

This paper describes an implemented program that takes a raw, untagged text corpus as its only input (no open-class dictionary) and generates a partial list of verbs occurring in the text and the subcategorization frames (SFs) in which they occur. Verbs are detected by a novel technique based on the Case Filter of Rouvret and Vergnaud (1980). The completeness of the output list increases monotonically with the total number of occurrences of each verb in the corpus. False positive rates are one to three percent of observations. Five SFs are currently detected and more are planned. Ultimately, I expect to provide a large SF dictionary to the NLP community and to train dictionaries for specific corpora.

1 INTRODUCTION

This paper describes an implemented program that takes an untagged text corpus and generates a partial list of verbs occurring in it and the subcategorization frames (SFs) in which they occur. So far, it detects the five SFs shown in Table 1.

    SF Description                Good Example            Bad Example
    direct object                 greet them              *arrive them
    direct object & clause        tell him he's a fool    *hope him he's a fool
    direct object & infinitive    want him to attend      *hope him to attend
    clause                        know I'll attend        *want I'll attend
    infinitive                    hope to attend          *greet to attend

    Table 1: The five subcategorization frames (SFs) detected so far

The SF acquisition program has been tested on a corpus of 2.6 million words of the Wall Street Journal (kindly provided by the Penn Tree Bank project). On this corpus, it makes 5101 observations about 2258 orthographically distinct verbs. False positive rates vary from one to three percent of observations, depending on the SF.

1.1 WHY IT MATTERS

Accurate parsing requires knowing the subcategorization frames of verbs, as shown by (1).

(1) a. I expected [NP the man who smoked NP] to eat ice-cream
    b. I doubted [NP the man who liked to eat ice-cream NP]

Current high-coverage parsers tend to use either custom, hand-generated lists of subcategorization frames (e.g., Hindle, 1983), or published, hand-generated lists like the Oxford Advanced Learner's Dictionary of Contemporary English (Hornby and Covey, 1973) (e.g., DeMarcken, 1990). In either case, such lists are expensive to build and to maintain in the face of evolving usage. In addition, they tend not to include rare usages or specialized vocabularies like financial or military jargon. Further, they are often incomplete in arbitrary ways. For example, Webster's Ninth New Collegiate Dictionary lists the sense of strike meaning "to occur to", as in "it struck him that...", but it does not list that same sense of hit. (My program discovered both.)

1.2 WHY IT'S HARD

The initial priorities in this research were:

  • Generality (e.g., minimal assumptions about the text)
  • Accuracy in identifying SF occurrences
  • Simplicity of design and speed

Efficient use of the available text was not a high priority, since it was felt that plenty of text was available even for an inefficient learner, assuming sufficient speed to make use of it. These priorities had a substantial influence on the approach taken. They are evaluated in retrospect in Section 4.

The first step in finding a subcategorization frame is finding a verb. Because of widespread and productive noun/verb ambiguity, dictionaries are not much use -- they do not reliably exclude the possibility of lexical ambiguity. Even if they did, a program that could only learn SFs for unambiguous verbs would be of limited value. Statistical disambiguators make dictionaries more useful, but they have a fairly high error rate, and they degrade in the presence of many unfamiliar words. Further, it is often difficult to understand where the error is coming from or how to correct it. So finding verbs poses a serious challenge for the design of an accurate, general-purpose algorithm for detecting SFs.

In fact, finding main verbs is more difficult than it might seem. One problem is distinguishing participles from adjectives and nouns, as shown below.

(2) a. John has [NP rented furniture]
       (comp.: John has often rented apartments)
    b. John was smashed (drunk) last night
       (comp.: John was kissed last night)
    c. John's favorite activity is watching TV
       (comp.: John's favorite child is watching TV)

In each case the main verb is have or be in a context where most parsers (and statistical disambiguators) would mistake it for an auxiliary and mistake the following word for a participial main verb.

A second challenge to accuracy is determining which verb to associate a given complement with. Paradoxically, example (1) shows that in general it isn't possible to do this without already knowing the SF. One obvious strategy would be to wait for sentences where there is only one candidate verb; unfortunately, it is very difficult to know for certain how many verbs occur in a sentence. Finding some of the verbs in a text reliably is hard enough; finding all of them reliably is well beyond the scope of this work.

Finally, any system applied to real input, no matter how carefully designed, will occasionally make errors in finding the verb and determining its subcategorization frame. The more times a given verb appears in the corpus, the more likely it is that one of those occurrences will cause an erroneous judgment. For that reason any learning system that gets only positive examples and makes a permanent judgment on a single example will always degrade as the number of occurrences increases. In fact, making a judgment based on any fixed number of examples with any finite error rate will always lead to degradation with corpus size. A better approach is to require a fixed percentage of the total occurrences of any given verb to appear with a given SF before concluding that random error is not responsible for these observations. Unfortunately, determining the cutoff percentage requires human intervention, and sampling error makes classification unstable for verbs with few occurrences in the input. The sampling error can be dealt with (Brent, 1991), but predetermined cutoff percentages still require eye-balling the data. Thus robust, unsupervised judgments in the face of error pose the third challenge to developing an accurate learning system.

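As a rough illustration, the fixed-percentage cutoff just described might be sketched as follows. The verbs, counts, and two-percent cutoff below are invented for illustration, and the sketch is not the decision module's actual statistical model of frequency distributions:

```python
# Minimal sketch of the fixed-percentage cutoff idea: accept a verb-SF
# pairing only if the SF accounts for at least CUTOFF of the verb's total
# occurrences, so a handful of misparses is not enough.
from collections import Counter

CUTOFF = 0.02  # hypothetical cutoff: 2% of a verb's occurrences

def accepts_sf(sf_counts: Counter, verb_counts: Counter, verb: str) -> bool:
    """True if the observed SF rate for this verb reaches the cutoff."""
    total = verb_counts[verb]
    return total > 0 and sf_counts[verb] / total >= CUTOFF

# Illustrative (made-up) counts: 'want' seen 500 times, 40 with an infinitive;
# 'greet' seen 300 times, 2 of them erroneously judged to take an infinitive.
verb_counts = Counter({"want": 500, "greet": 300})
infinitive_counts = Counter({"want": 40, "greet": 2})

print(accepts_sf(infinitive_counts, verb_counts, "want"))   # True  (8% >= 2%)
print(accepts_sf(infinitive_counts, verb_counts, "greet"))  # False (~0.7% < 2%)
```

As the discussion above notes, such a cutoff must be tuned by hand and remains unreliable for verbs with few occurrences, which is what motivates the statistical decision module.
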
1.3 HOW IT'S DONE

The architecture of the system, and that of this paper, directly reflects the three challenges described above. The system consists of three modules:

  1. Verb detection: Finds some occurrences of verbs using the Case Filter (Rouvret and Vergnaud, 1980), a proposed rule of grammar.
  2. SF detection: Finds some occurrences of five subcategorization frames using a simple, finite-state grammar for a fragment of English.
  3. SF decision: Determines whether a verb is genuinely associated with a given SF, or whether instead its apparent occurrences in that SF are due to error. This is done using statistical models of the frequency distributions.

The following two sections describe and evaluate the verb detection module and the SF detection module, respectively; the decision module, which is still being refined, will be described in a subsequent paper. The final two sections provide a brief comparison to related work and draw conclusions.

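The three modules compose into a simple pipeline: verb detection and SF detection produce observations, and the decision module judges them. The sketch below shows only that data flow; the callables passed in stand for the two detection modules and are placeholders, not the published algorithms:

```python
# Hypothetical sketch of the three-module pipeline; only the bookkeeping
# shown here is concrete, the detectors are supplied by the caller.
from collections import Counter
from typing import Callable, Iterable

def acquire(corpus: Iterable[list[str]],
            detect_verbs: Callable[[list[str]], list[int]],
            detect_sfs: Callable[[list[str], int], list[str]]) -> Counter:
    """Count (verb, SF) observations; the SF-decision module would then
    judge which counts reflect genuine SF associations rather than error."""
    observations: Counter = Counter()
    for sentence in corpus:                      # one tokenized sentence at a time
        for i in detect_verbs(sentence):         # module 1: find some main verbs
            for sf in detect_sfs(sentence, i):   # module 2: find SF occurrences
                observations[(sentence[i].lower(), sf)] += 1
    return observations
```
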
2 VERB DETECTION

The technique I developed for finding verbs is based on the Case Filter of Rouvret and Vergnaud (1980). The Case Filter is a proposed rule of grammar which, as it applies to English, says that every noun-phrase must appear either immediately to the left of a tensed verb, immediately to the right of a preposition, or immediately to the right of a main verb. Adverbs and adverbial phrases (including days and dates) are ignored for the purposes of case adjacency. A noun-phrase that satisfies the Case Filter is said to "get case" or "have case", while one that violates it is said to "lack case". The program judges an open-class word to be a main verb if it is adjacent to a pronoun or proper name that would otherwise lack case. Such a pronoun or proper name is either the subject or the direct object of the verb. Other noun phrases are not used because it is too difficult to determine their right boundaries accurately.

The two criteria for evaluating the performance of the main-verb detection technique are efficiency and accuracy. Both were measured using a 2.6 million word corpus for which the Penn Treebank project provides hand-verified tags.

Efficiency of verb detection was assessed by running the SF detection module in the normal mode, where verbs were detected using the Case Filter technique, and then running it again with the Penn Treebank tags substituted for the verb detection module. The results are shown in Table 2.

Accuracy was assessed by comparing the verbs detected against the corresponding hand-verified tags. Typical disagreements in which my system was right involved verbs that are ambiguous with much more frequent nouns, like mold in "The Soviet Communist Party has the power to shape corporate development and mold it into a body dependent upon it." There were several systematic constructions in which the Penn tags were right and my system was wrong, including constructions like "We consumers are..." and pseudo-clefts like "what you then do is you make them think...". (These examples are actual text from the Penn corpus.)

The extraordinary accuracy of verb detection ...

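The adjacency heuristic can be illustrated with a small sketch. It is a simplification under loudly stated assumptions that are not from the paper: a toy pronoun list, capitalization as a proxy for proper names (ignored sentence-initially, where it is uninformative), toy preposition and closed-class lists, and no skipping of adverbs or adverbial phrases:

```python
# Illustrative sketch of the Case Filter adjacency heuristic: a pronoun or
# proper name that would otherwise lack case signals a main verb next to it.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "me", "him", "her", "us", "them"}
PREPOSITIONS = {"of", "in", "on", "at", "to", "with", "for", "by", "from"}
CLOSED_CLASS = PREPOSITIONS | {"the", "a", "an", "and", "or", "that",
                               "will", "would", "can", "could", "may",
                               "might", "must", "should", "do", "did"}

def is_pronoun_or_name(tokens: list[str], i: int) -> bool:
    """Rough proxy for 'pronoun or proper name' at position i."""
    tok = tokens[i]
    return tok.lower() in PRONOUNS or (i > 0 and tok[0].isupper())

def detect_main_verbs(tokens: list[str]) -> list[int]:
    """Single left-to-right pass: when a pronoun or proper name is not to the
    right of a preposition or of a verb found so far (so it would otherwise
    lack case), judge the open-class word to its right to be a tensed main
    verb assigning case to that NP as its subject."""
    verbs: set[int] = set()
    for i in range(len(tokens)):
        if not is_pronoun_or_name(tokens, i):
            continue
        prev = tokens[i - 1].lower() if i > 0 else ""
        if prev in PREPOSITIONS or (i - 1) in verbs:
            continue  # the NP already gets case from its left neighbor
        j = i + 1
        if j < len(tokens) and tokens[j].lower() not in CLOSED_CLASS:
            verbs.add(j)  # open-class word to the right judged a main verb
    return sorted(verbs)

print(detect_main_verbs("Yesterday they greeted him warmly".split()))
# -> [2]  ('greeted'; 'him' is skipped because it already gets case from 'greeted')
```

Note how the sketch ignores the sentence-initial adverb and, once "greeted" is identified, treats "him" as already case-marked, mirroring the subject/direct-object distinction described above.
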
