
To appear in Proceedings of the 2004 Text Retrieval Conference (TREC). Gaithersburg, MD USA. 2005.

Exploiting Zone Information, Syntactic Features, and Informative Terms in Gene Ontology Annotation from Biomedical Documents

Burr Settles∗† [email protected]
Mark Craven†∗ [email protected]
∗Department of Computer Sciences, University of Wisconsin, Madison, WI 53706 USA
†Department of Biostatistics and Medical Informatics, University of Wisconsin, Madison, WI 53706 USA

Abstract

We describe a system developed for the Annotation Hierarchy subtask of the Text Retrieval Conference (TREC) 2004 Genomics Track. The goal of this track is to automatically predict Gene Ontology (GO) domain annotations given full-text biomedical journal articles and associated genes. Our system uses a two-tier statistical machine learning approach that makes predictions first on "zone"-level text (i.e. abstract, introduction, etc.) and then combines evidence to make final document-level predictions. We also describe the effects of more advanced syntactic features (equivalence classes of syntactic patterns) and "informative terms" (phrases semantically linked to specific GO codes). These features are induced automatically from the training data and external sources, such as "weakly labeled" MEDLINE abstracts. Our baseline system, using only traditional word features, significantly exceeds the median F1 of all task submissions on unseen test data. We show further improvement by including syntactic features and informative term features. Our complete system exceeds median performance on all three evaluation metrics (precision, recall, and F1), and achieves the second highest reported F1 of all systems in the track.

1. Introduction

A major activity of most model organism database projects is to assign codes from the Gene Ontology (The Gene Ontology Consortium, 2000) to annotate the known function of genes and proteins. The Gene Ontology (GO) consists of three structured, controlled domains ("hierarchies," or "subontologies") that describe functions of genes and their products. These domains are Biological Process (BP), Cellular Component (CC), and Molecular Function (MF). Each assigned GO code can also be delegated a level of evidence indicating specific experimental support for its assignment. The TREC 2004 Genomics Track Annotation Task consisted of two subtasks: Annotation Hierarchy, and Annotation Hierarchy Plus Evidence.

We focused on developing a system to solve the Annotation Hierarchy subtask. In this task, we are given a document-gene tuple ⟨d, g⟩ and are to determine which, if any, of the three GO domains (BP, CC, MF) are applicable to gene g according to the text in document d. Note that the goal is not to link the gene with a precise GO code (e.g. "Ahr" and "signal transduction"), but rather the domain from which these GO codes come (e.g. "Ahr" and "Biological Process"). This is a significantly simplified version of the annotation task that organism database curators must face, since the exact GO code need not be identified, and we are given the relevant genes a priori.

The training data for this task comes from 178 full-text journal articles as curated in 2002 by the Mouse Genome Informatics (MGI) database (Blake et al., 2003). These documents and their associated genes serve as "positive" examples (i.e. they have at least one GO annotation). We are also provided with 326 full-text articles that were not selected for GO curation, but for which genes are associated. These document-gene tuples serve as "negative" examples (i.e. they have no GO annotation). In total, there are 504 documents and 1,418 document-gene tuples in the training data. Test data containing 378 documents and 877 document-gene tuples are similarly prepared from articles processed by MGI in 2003.

This paper is organized as follows. First, we provide a fairly detailed description of our system's architecture and explain how more advanced feature sets are generated. We then present experimental results on cross-validated training data, as well as a report on the official task evaluation. Finally, we present our conclusions and future directions for this work.

2. System Description

This section describes the architecture of our system. There are several key steps involved in taking a document-gene tuple and predicting GO domain annotations. Figure 1 illustrates the overall system. The following sections describe each step in detail.

Figure 1. A system diagram for the annotation hierarchy task. (1) Documents are partitioned into zones, and sentences with gene name mentions are identified within each zone. (2) Feature vectors for each zone are constructed using the sentences where gene mentions were found. (3) Label predictions for each zone are made by a trained multi-class Naïve Bayes model, and predictions are combined to create a secondary document-level feature vector. (4) Final binary predictions for each GO domain annotation are made by trained Maximum Entropy models.

2.1 Zoning and Text Preprocessing

First we process the documents, provided by the task organizers in SGML format, into six distinct information-content zones: title, abstract, introduction, methods, results, and discussion. Partitioning is done using keyword heuristics during SGML parsing (e.g. "discussion," "conclusion," and "summary" are keywords for the discussion zone). We then segment the text in each zone into paragraphs and sentences using SGML tags and regular expressions. All other SGML tags are then stripped away.

Once a document has been zoned and segmented, we apply a gene name recognition algorithm to each sentence. This step involves taking the gene symbol that is provided in the tuple, looking it up in a list of known aliases provided by MGI,[1] and matching these against the text using regular expressions that are somewhat robust to orthographic variants (e.g. allowing "Aox-1" to match against "Aox I" or "AOX1"). We find that this simple matching approach is able to retrieve 97% of all document-gene tuples among positive training documents, and only 52% of tuples among negative training documents. Therefore, we treat this step as a filter on our predictions: documents for which the associated gene cannot be found are not considered for GO annotation. Assuming the matching algorithm behaves comparably on test data, this filter sets an upper bound of 97% on recall, but effectively prunes away 48% of potential false positives.

2.2 Feature Vector Encoding

Once the text preprocessing is complete, our system selects only those sentences where gene name mentions were identified. The reasoning for this is twofold. First, it reduces the feature space for the machine learning algorithms, as we only consider the text that is close to the gene of interest. Second, and perhaps more importantly, it introduces a unique ⟨d, g⟩ representation for a document that may contain multiple genes. Consider a single article that is disjointly curated for Molecular Function (MF) of "Maged1," but Biological Process (BP) of "Ndn." By using only sentences that reference each gene, we generate unique feature vectors and can better distinguish between them.

From these sentences, we generate six feature vectors per document (one for each zone). Our baseline system employs traditional bag-of-words text classification features. In this case, each word that occurs in a sentence with a mention of the gene is considered a unique feature (case-insensitive, with stop-words removed). Consider the following sentence, taken from a caption in the ⟨11694502, Des⟩ training tuple:

"An overlay of desmin and syncoilin immunoreactivity confirmed co-localization."

Our baseline system represents this using features like word=overlay, word=immunoreactivity, etc. But perhaps the syntactic pattern "overlay of X" refers to a procedure with some special significance to labeling? Perhaps certain keywords, such as "syncoilin," have a semantic correlation with a particular process, component, or function? We wish to induce features that capture such information.

In this paper, we also introduce and investigate two advanced types of features that were used to encode the problem. The features described here are syntactic features (equivalence classes of shallow syntactic patterns) and "informative terms" (n-grams with a semantic association with specific GO codes). These features were automatically induced from the training corpus plus several external text sources. Section 3.2 describes the experimental impact of including these features. The following subsections describe how they are generated and incorporated into the system.

2.2.1 External Data Sources

The training set of 178 MGI-labeled documents provided for the task is decidedly small. To remedy this, we supplement the data used for feature induction with training data released for Task 2 of the 2003 BioCreative text mining challenge.[2] This provides an additional 803 documents labeled with GO-related information as curated by GOA (Camon et al., 2004). While this corpus may be larger, it still represents relatively few specific GO codes (which we use for informative term features). Thus, we also use databases from the GO Consortium website to gather more data about other organisms. The databases we use include FlyBase (The FlyBase Consortium, 2003), WormBase (Harris et al., 2004), SGD (Dolinski et al., 2003), and TAIR (Huala et al., 2001). They are similar to the MGI curations provided with training data in that they list document, gene, and GO-related triplets for

...ple syntactic patterns for subjects and direct objects (e.g.

[1] ftp://ftp.informatics.jax.org/pub/reports/
[2] http://www.mitre.org/public/biocreative/
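To make the gene name recognition step of Section 2.1 concrete, the following is a minimal sketch of alias matching that tolerates orthographic variants such as "Aox-1" vs. "Aox I" vs. "AOX1". The function names and the specific variant rules (optional hyphen/space between tokens, Roman-numeral equivalents for trailing digits) are our own illustrative assumptions; the paper does not publish its exact regular expressions.

```python
import re

def alias_pattern(alias: str) -> re.Pattern:
    """Build a case-insensitive regex for a gene alias that tolerates
    simple orthographic variants (illustrative rules, not the authors')."""
    parts = re.split(r"[-\s]+", alias)
    # Let a lone Arabic numeral also match its Roman-numeral counterpart.
    romans = {"1": "(?:1|I)", "2": "(?:2|II)", "3": "(?:3|III)"}
    tokens = [romans.get(p, re.escape(p)) for p in parts]
    # Allow the alias tokens to be joined directly, by a hyphen, or by a space.
    return re.compile(r"\b" + r"[-\s]?".join(tokens) + r"\b", re.IGNORECASE)

def find_gene_sentences(sentences, aliases):
    """Keep only the sentences that mention some known alias of the gene."""
    patterns = [alias_pattern(a) for a in aliases]
    return [s for s in sentences if any(p.search(s) for p in patterns)]
```

Under these rules, `alias_pattern("Aox-1")` matches "Aox-1", "Aox I", and "AOX1", and documents whose sentences survive `find_gene_sentences` would pass the filter described in Section 2.1.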
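The baseline bag-of-words encoding of Section 2.2 (case-insensitive words, stop-words removed, written in the paper's word=<token> notation) can be sketched as follows; the tokenizer and the tiny stop-word list are our own placeholders.

```python
import re

# A tiny illustrative stop-word list; a real system would use a fuller one.
STOPWORDS = {"a", "an", "and", "the", "of", "in", "to", "with", "for"}

def bag_of_words_features(sentence: str) -> set:
    """Case-insensitive word features with stop-words removed,
    in the paper's word=<token> notation."""
    tokens = re.findall(r"[a-z][a-z0-9-]*", sentence.lower())
    return {f"word={t}" for t in tokens if t not in STOPWORDS}
```

Applied to the example caption from the ⟨11694502, Des⟩ tuple, this yields features such as word=overlay and word=immunoreactivity while dropping words like "of" and "an".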
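The Figure 1 caption says the six zone-level Naïve Bayes predictions are "combined to create a secondary document-level feature vector" for the per-domain Maximum Entropy models. One plausible encoding, assumed here since the paper does not specify it, is to concatenate each zone's posterior over the four zone-level classes (BP, CC, MF, and X for "no annotation"):

```python
ZONES = ["title", "abstract", "introduction", "methods", "results", "discussion"]
CLASSES = ["BP", "CC", "MF", "X"]

def document_features(zone_probs: dict) -> list:
    """Concatenate six zone-level class posteriors into one 24-dimensional
    document-level vector. Zones with no gene mention contribute a uniform
    distribution here -- an assumption on our part, not the paper's."""
    vec = []
    for zone in ZONES:
        probs = zone_probs.get(zone, [1.0 / len(CLASSES)] * len(CLASSES))
        vec.extend(probs)
    return vec
```

Three binary Maximum Entropy (logistic regression) classifiers, one per GO domain, would then consume this 24-dimensional vector to make the final document-level predictions.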