The Fast and the Numerous – Combining Machine and Community Intelligence for Semantic Annotation

Sebastian Blohm, Markus Krötzsch and Philipp Cimiano
Institute AIFB, Knowledge Management Research Group
University of Karlsruhe
D-76128 Karlsruhe, Germany
{blohm, kroetzsch, cimiano}@aifb.uni-karlsruhe.de

Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Starting from the observation that certain communities have incentive mechanisms in place to create large amounts of unstructured content, we propose in this paper an original model which we expect to lead to the large number of annotations required to semantically enrich Web content at a large scale. The novelty of our model lies in the combination of two key ingredients: the effort that online communities are making to create content and the capability of machines to detect regular patterns in user annotation to suggest new annotations. Provided that the creation of semantic content is made easy enough and incentives are in place, we can assume that these communities will be willing to provide annotations. However, as human resources are clearly limited, we aim at integrating algorithmic support into our model to bootstrap on existing annotations and learn patterns to be used for suggesting new annotations. As the automatically extracted information needs to be validated, our model presents the extracted knowledge to the user in the form of questions, thus allowing for the validation of the information. In this paper, we describe the requirements on our model, its concrete implementation based on Semantic MediaWiki and an information extraction system, and discuss lessons learned from practical experience with real users. These experiences allow us to conclude that our model is a promising approach towards leveraging semantic annotation.

Introduction

With the advent of the so-called Web 2.0, a large number of communities with a strong will to provide content have emerged. Essentially, these are the communities behind social tagging and content creation software such as del.icio.us, Flickr, and Wikipedia. Thus, it seems that one way of reaching massive amounts of annotated web content is to involve these communities in the endeavour and thus profit from their enthusiasm and effort. This requires in essence two things: first, semantic annotation functionality seamlessly integrated into the standard software used by the community in order to leverage its usage and, second, an incentive mechanism such that people can immediately profit from the annotations created. This is for example the key idea behind projects such as Semantic MediaWiki (Krötzsch et al. 2007) and Bibsonomy (Hotho et al. 2006). Direct incentives for creating semantic annotations in a Semantic MediaWiki are for example semantic browsing and querying functionality, but most importantly the fact that queries over structured knowledge can be used to automatically create views on data, e.g. in the form of tables.
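To illustrate this incentive with a minimal example of our own (not taken from the paper; the property and page names are invented, while the [[property::value]] annotation and {{#ask: ...}} query syntax is that of Semantic MediaWiki, cf. Krötzsch et al. 2007), the following snippet assembles the markup a contributor would add to a page and the inline query that turns such annotations into an automatically maintained table:

```python
# Our illustration of SMW markup (property and page names invented).
# Annotating the link to Germany with a property turns the sentence
# into the machine-readable statement "Karlsruhe is located in Germany".
annotation = "Karlsruhe is a city in [[located in::Germany]]."

# An inline query: SMW evaluates it when the page is rendered and
# displays all matching annotations as a table, so the view updates
# automatically as the community adds further annotations.
ask_query = """{{#ask: [[Category:City]] [[located in::Germany]]
  | ?population
  | format=table
}}"""

print(annotation)
print(ask_query)
```

It is exactly this immediate, visible payoff of an annotation that the incentive argument above rests on.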
However, creating incentives and making annotation easy and intuitive will clearly not be enough to really leverage semantic annotation at a large scale. On the one hand, human resources are limited. In particular, it is well known from Wikipedia and from tagging systems that the number of contributors is relatively small compared to the number of information consumers. On the other hand, we need to use human resources economically and wisely, avoiding that people get bored by annotating the obvious or the same things again and again. This is where standard machine learning techniques which detect regularities in data can help. However, any sort of learning algorithm will produce errors, either because it overgenerates or because it overfits the training data. Thus, human verification is still needed. We argue that this verification can be provided by the community behind a certain project if the feedback is properly integrated into the tools they use anyway. This opens the possibility to turn information consumers into "passive annotators" who, in spite of not actively contributing content and annotations, can at least verify existing annotations if doing so is easy enough.

The idea of semi-automatically supporting the annotation process is certainly not new and has been suggested before. However, we think that it is only the unique combination of large community efforts, learning algorithms and a seamless integration between both that will ultimately lead to the kind of environments needed to make large-scale semantic annotation feasible.

In this paper we thus describe a novel paradigm for semantic annotation which combines the effort of communities such as Wikipedia, which contribute to the massive creation of content (the community intelligence or "the numerous" dimension in our model), with the benefits of a machine learning approach. The learned model captures people's annotation behaviour and is thus able to quickly extract new information and suggest corresponding annotations to be verified by the user community (the machine intelligence or "the fast" dimension in our model).

The remainder of this paper is organised as follows. In the next section we describe our approach to combining machine and human intelligence for semantic annotation in a wiki setting and describe how Semantic MediaWiki can be used for this purpose. Then, we derive requirements for such an integration and describe its corresponding architecture subsequently. We present an implementation based on the English Wikipedia and discuss practical experiences before reviewing related work and concluding.

Combining Human and Machine Intelligence

The crucial aspect of our model is that community members and information extraction algorithms interact in such a way that they can benefit from each other. Humans benefit from the fact that information extraction systems can support them in the tedious work of manual annotation, and algorithms exploit human annotations to bootstrap and learn patterns to suggest new annotations. The workflow in our model is thus as follows (see the sketch after this list):

1. Extraction tools use existing high-quality and community-validated human annotations to learn patterns in data, leading to the extraction of new annotations.

2. Users are requested to verify extracted data so as to confirm or reject it. This is done by presenting questions to users.

3. Confirmed extraction results are immediately incorporated into the wiki, if possible.

4. User replies are evaluated by extraction tools to improve future results (learning), and to gather feedback on extraction quality (evaluation), returning to (1) in a bootstrapping fashion.

The model thus is cyclic, but also asynchronous in nature, since learning, annotation, verification, and incorporation into the wiki interact with each other asynchronously and not in a serialised manner. This mode of operation is reflected in the requirements we present below.
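To make the cycle concrete, here is a minimal Python sketch of steps 1 to 4. It is ours, not the paper's: all method names (get_validated_annotations, learn_patterns, extract_candidates, ask, collect_replies, feedback) are hypothetical stand-ins for the roles that the wiki, the extraction tools and the QuestionAPI play, and it serialises steps that the real system runs asynchronously.

```python
# Hypothetical sketch of the bootstrapping workflow (steps 1-4 above).
# All method names are invented; they stand in for the roles played by
# the wiki (SMW), the extraction tools and the QuestionAPI.

def bootstrap_cycle(wiki, learner, questions):
    # (1) learn patterns from existing community-validated annotations
    seeds = wiki.get_validated_annotations()
    patterns = learner.learn_patterns(seeds)

    # (1) apply the learned patterns to extract candidate annotations
    candidates = learner.extract_candidates(wiki.pages(), patterns)

    # (2) present each candidate to users as a yes/no question
    for candidate in candidates:
        questions.ask(candidate)

    # (3) incorporate confirmed results into the wiki immediately;
    # (4) feed every reply back to the learner as a training signal,
    #     returning to (1) in a bootstrapping fashion
    for reply in questions.collect_replies():
        if reply.confirmed:
            wiki.add_annotation(reply.candidate)
        learner.feedback(reply.candidate, reply.confirmed)

# In the deployed model these steps are not serialised like this:
# learning, verification and incorporation run asynchronously.
```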
Assuming the model above, we present a concrete architecture and implementation that realises this model in a wiki setting. Figure 1 shows its main components (the wiki, extraction tools, and a novel QuestionAPI) as well as their basic interactions. We have selected the wiki engine MediaWiki as a basis for our work, since this system is widely used on publicly accessible sites (including Wikipedia), such that large amounts of data are available for annotation. Moreover, the free add-on Semantic MediaWiki (SMW) extends MediaWiki with means for creating and storing semantic annotations that are then exploited to provide additional functionality to wiki users (Krötzsch et al. 2007). This infrastructure is useful in two ways: first, it allows wiki users to make direct use of the freshly acquired annotations, and, second, it can support extraction tools by providing initial (user-generated) example annotations as seeds for learning algorithms.

Figure 1: Integrating (semantic) wikis with Information Extraction tools – basic architecture.

As shown in Figure 1, our general architecture makes few assumptions about the type and number of the employed extraction tools, so that a wide range of existing tools should be usable with the system (see the Related Work section for an overview). As a concrete example for demonstrating and testing our approach, we have selected the Pronto information extraction system (Blohm & Cimiano 2007).

Requirements on User Interaction

Successful wiki projects thrive on active user communities that contribute and maintain content, and therefore social processes and established interaction paradigms are often more important than specific technical features. Likewise, any extended functionality that is to be integrated into existing wikis must also take this into account. This has led us to various requirements; a sketch after the four requirements illustrates how they might shape the question-asking interface.

(U1) Simplicity. Participating in the annotation process should be extremely simple for typical wiki users, and should ideally not require any prior instruction. The extension must match the given layout, language, and interface design.

(U2) Unobtrusiveness and opt-out. In order to seriously support real-world sites, an extension must not obscure the actual main functions of the wiki. Especially, it must be acknowledged that many users of a wiki are passive readers who do not wish to contribute to the collaborative annotation process. Registered users should be able to configure the behaviour of the extension where possible.

(U3) User gratification. Wiki contributors typically are volunteers, such that it is only their personal motivation which determines the amount of time they are willing to spend on providing feedback. Users should thus be rewarded for contributions (e.g. by giving credit to active contributors), and they should understand how their contribution affects and improves the wiki.

(U4) Entertainment. Even if users understand the relevance of contributing feedback, measures must be taken to ensure that this task does not appear monotonous or even pointless to them. Problems can arise if the majority of changes proposed by extraction tools are incorrect (and maybe even unintelligible to humans), or if only very narrow topic areas are subject to extraction.
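As a rough illustration of how U1 to U4 might translate into the component that serves questions, consider the following sketch. It is entirely our own construction under stated assumptions: none of these names come from the paper's actual QuestionAPI, and the user and question objects are hypothetical.

```python
import random

# Hypothetical sketch of serving a verification question in line with
# requirements U1-U4. None of these names are taken from the paper's
# QuestionAPI; user and question objects are assumed interfaces.

def serve_question(user, question_pool):
    # (U2) passive readers and users who opted out are never bothered
    if not user.logged_in or user.prefs.get("no_questions", False):
        return None

    # (U4) draw from a topically varied pool so the task stays fresh
    question = random.choice(question_pool)

    # (U1) a plain yes/no question, rendered in the wiki's own layout
    # and language, requiring no prior instruction
    return {
        "text": question.as_natural_language(),
        "options": ["yes", "no", "skip"],
    }

def record_answer(user, question, answer):
    # (U3) credit the contributor, making the effect of feedback visible
    if answer in ("yes", "no"):
        user.increment_score()
        question.record(answer)
```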