
Journal of Visual Languages and Computing (1999) 10, 585–606
Article No. jvlc.1999.0147, available online at http://www.idealibrary.com

Automatically Determining Semantics for World Wide Web Multimedia Information Retrieval

SOUGATA MUKHERJEA* AND JUNGHOO CHO†

*C&C Research Laboratories, NEC USA Inc., 110 Rio Robles, San Jose, CA 95134, U.S.A., e-mail: [email protected]
†Department of Computer Science, Stanford University, U.S.A., e-mail: [email protected]. This work was performed when the author visited NEC.

Accepted 18 August 1999

Search engines are useful because they allow the user to find information of interest from the World Wide Web (WWW). However, most of the popular search engines today are textual; they do not allow the user to find images from the web. For effective retrieval, determining the semantics of the images is essential. In this paper, we describe the problems in determining the semantics of images on the WWW and the approach of AMORE, a WWW search engine that we have developed. AMORE's techniques can be extended to other media like audio and video. We explain how we assign keywords to the images based on HTML pages and the method to determine similar images based on the assigned text. We also discuss some statistics showing the effectiveness of our technique. Finally, we present the visual interface of AMORE with the help of several retrieval scenarios. © 1999 Academic Press

Keywords: World Wide Web, image search, semantic similarity, HTML parsing, visual interface

1. Introduction

WITH THE EXPLOSIVE GROWTH OF INFORMATION that is available through the World Wide Web (WWW), it is becoming increasingly difficult for users to find the information of interest. As most web pages contain images, effective image search engines for the WWW need to be developed. There are two major ways to search for an image: the user can specify an image and have the search engine retrieve images similar to it, or the user can specify keywords and retrieve all images relevant to those keywords. Over the last two years we have developed an image search engine called the Advanced Multimedia Oriented Retrieval Engine (AMORE) [1] (http://www.ccrl.com/amore) that allows the retrieval of WWW images using both techniques: the user can specify keywords to retrieve relevant images, or can specify an image to retrieve similar images.

For retrieving images by keywords we have to determine the meaning of each image. Obviously this is not easy. The best approach would be to assign several keywords to an image to specify its meaning. Manually assigning keywords to images would give the best results, but it is not feasible for a large collection of images. Alternatively, we can use the text surrounding web images as their keywords. Unfortunately, unlike written material, most HTML documents do not have explicit captions. Therefore, we need to parse the HTML source file and assign to an image only the keywords 'near' it. However, because an HTML page can be structured in various ways, this 'nearness' is not easy to determine. For example, if the images are in a table, the keywords relevant to an image may not be physically near the image in the HTML source file. Thus, we require several heuristics to determine the keywords relevant to an image. Fortunately, these heuristics can also be applied to retrieve other media, like video and audio, from the web.
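As a rough illustration of what such a heuristic might look like, the Python sketch below assigns to each image its ALT text plus the words that appear within a fixed-size window around the <img> tag in the HTML source. This is a sketch under our own assumptions (the 30-word window, the use of Python's html.parser), not AMORE's actual implementation, and it captures only physical nearness in the source, which, as noted above, is not sufficient for pages laid out with tables.

    # Illustrative sketch only: associate each image with its ALT text plus
    # the words within a fixed window around the <img> tag.  The window size
    # and the use of html.parser are assumptions, not values from the paper.
    from html.parser import HTMLParser

    WINDOW = 30  # number of surrounding words treated as 'near' an image

    class ImageKeywordExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.words = []    # running list of text words seen so far
            self.images = []   # (src, alt, word position) for each <img>

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                a = dict(attrs)
                self.images.append((a.get("src", ""), a.get("alt", ""), len(self.words)))

        def handle_data(self, data):
            self.words.extend(data.split())

        def keywords(self):
            """Map each image URL to its ALT words plus nearby page words."""
            result = {}
            for src, alt, pos in self.images:
                nearby = self.words[max(0, pos - WINDOW): pos + WINDOW]
                result[src] = alt.split() + nearby
            return result

    # Hypothetical usage on a downloaded page:
    # parser = ImageKeywordExtractor()
    # parser.feed(open("page.html", encoding="utf-8", errors="ignore").read())
    # print(parser.keywords())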
Once the keywords are assigned to the images, the user may specify keywords to retrieve relevant images. However, user studies with AMORE have shown that people also want to click on an image to find similar images. This kind of 'search for more like this one' is also popular for text search and is used in some WWW text search engines like Excite (http://www.excite.com). Especially for image searching, it is sometimes very difficult for the user to specify the kind of images she wants by keywords alone.

The similarity of two images can be determined in two ways: visually and semantically. Visual similarity can be determined from image characteristics like shape, color and texture using image processing techniques. In AMORE, we use the Content-oriented Image Retrieval (COIR) [2] library for this purpose. When the user wants to find images similar to a red car, COIR can retrieve pictures of other red cars. However, it may also be that the user is not interested in pictures of red cars but in pictures of other cars of the same manufacturer and model. Finding semantically similar images is useful in this case. Since visual similarity does not consider the meaning of the images, a picture of a figure skater may be visually similar to a picture of an ice hockey player (because of the white background and similar shape), but the match may not be meaningful for the user.

To overcome this problem, AMORE allows the user to combine keyword and image similarity search. Thus, the user can integrate a visual similarity search for an ice hockey player picture with the keywords 'ice hockey'. Although the integrated search retrieves very relevant images, an evaluation of AMORE's access logs has shown that, unfortunately, integrated search is not as popular as keyword or image similarity search: naive WWW users do not understand the concept of integrated search. Therefore, automatically integrating semantic and visual similarity search may be more user-friendly.

For finding semantically similar images, we can assume that if two images have many common keywords assigned to them, then they are similar. However, this simple approach has two drawbacks (a sketch of the kind of weighting they call for is given at the end of this section):

- Obviously, not all the keywords assigned to an image from the HTML page containing it are equally important. We have to determine which words are more important and give them higher weights.
- Since many web sites have a common format, images from a particular web site will share many common keywords. We need to reduce the weights of these common words so that images are not found to be similar just because they come from the same site.

In this paper, we present the techniques used in AMORE to determine the semantics of images and to find semantically similar images. The next section cites related work. Section 3 discusses how AMORE assigns appropriate keywords to images and other media for keyword-based multimedia information retrieval. In Section 4, the method to determine semantically similar images is explained. In Section 5, we describe the evaluation of our schemes, showing the effectiveness of our techniques. In Section 6, we introduce AMORE's visual interface with several retrieval scenarios; various techniques of integrating visual and semantic search are also presented. Finally, Section 7 is the conclusion.
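As a concrete illustration of the two weighting issues listed above, the sketch below scores keyword overlap with an idf-style weight, so that words shared by many images (for example, site-wide boilerplate) contribute little to the similarity. This is a standard information-retrieval device used here purely for illustration; it is not the exact scheme used by AMORE, which is described in Section 4.

    # Illustrative sketch only: semantic similarity of two images from their
    # assigned keywords, with idf-style weights that suppress keywords shared
    # by many images.  Not AMORE's actual formula (see Section 4).
    import math
    from collections import Counter

    def keyword_weights(image_keywords):
        """image_keywords: dict mapping image URL -> list of assigned keywords."""
        n_images = len(image_keywords)
        doc_freq = Counter()
        for kws in image_keywords.values():
            doc_freq.update(set(kws))
        # Words that appear with almost every image get weights close to zero.
        return {w: math.log(n_images / df) for w, df in doc_freq.items()}

    def similarity(kws_a, kws_b, weights):
        """Weighted cosine similarity of two images' keyword vectors."""
        vec_a, vec_b = Counter(kws_a), Counter(kws_b)
        dot = sum(vec_a[w] * vec_b[w] * weights.get(w, 0.0) ** 2
                  for w in vec_a.keys() & vec_b.keys())
        norm_a = math.sqrt(sum((c * weights.get(w, 0.0)) ** 2 for w, c in vec_a.items()))
        norm_b = math.sqrt(sum((c * weights.get(w, 0.0)) ** 2 for w, c in vec_b.items()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Hypothetical keyword assignments, made up for illustration:
    # images = {"skater.jpg": ["figure", "skating", "olympics"],
    #           "hockey.jpg": ["ice", "hockey", "olympics"]}
    # w = keyword_weights(images)
    # print(similarity(images["skater.jpg"], images["hockey.jpg"], w))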
2. Related Work

2.1. WWW Search Engines

There are many popular web search engines like Excite (http://www.excite.com) and Infoseek (http://www.infoseek.com). These engines gather textual information about resources on the web and build up index databases; the indices allow the retrieval of documents containing user-specified keywords. Another method of searching for information on the web is through manually generated subject-based directories, which provide a useful browsable organization of information. The most popular one is Yahoo (http://www.yahoo.com). However, none of these systems allows image search.

Image search engines for the WWW are also being developed. Excalibur's Image Surfer (http://isurf.yahoo.com) and WebSEEk [3] have built collections of images that are available on the web. The collections are divided into categories (like automotive, sports, etc.), allowing users to browse through the categories for relevant images. Moreover, keyword search and searching for images visually similar to a specified image are possible. Alta Vista's Photo Finder (http://image.altavista.com) also allows keyword search and visual similarity search. However, semantic similarity search is not possible in any of these systems.

WebSeer [4] is a crawler that combines visual routines with textual heuristics to identify and index images on the web. The resulting database is then accessed using a text-based search engine that allows users to describe the images they want using keywords. The user can also specify whether the required image is a photograph, an animation, etc. However, the user cannot specify an image and find similar images.

2.2. Image Searching

Finding visually similar images using image processing techniques is a well-developed research area. Virage [5] and QBIC [6] are systems for image retrieval based on visual features, which consist of image primitives such as color, shape and texture, as well as other domain-specific features. Although they also allow keyword search, the keywords need to be manually specified and there is no concept of semantically similar images. Systems for retrieving similar images by semantic content are also being developed [7, 8]. However, in these systems too, the semantic content needs to be manually associated with each image. We believe that for these techniques to be practical for the WWW, automatic assignment of keywords to the images is essential.

2.3. Assigning Text to WWW Images

Research into the general problem of the relationship between images and captions in large photographic libraries, such as newspaper archives, has been undertaken [9, 10]. These systems assume that the captions have already been extracted and paired with the pictures, an assumption that does not hold for the WWW. Various techniques have been developed for assigning keywords to images on the WWW.