Utilizing Multimedia Ontologies in Video Scene Interpretation Via Information Fusion and Automated Reasoning

Proceedings of the Federated Conference on Computer Science and Information Systems, pp. 91–98. DOI: 10.15439/2017F66. ISSN 2300-5963, ACSIS, Vol. 11.

Leslie F. Sikos
School of Computer Science, Engineering and Mathematics, Flinders University
GPO Box 2100, Adelaide SA 5001, Australia
Email: [email protected]

Abstract—There is an overwhelming variety of multimedia ontologies used to narrow the semantic gap, many of which are overlapping, not richly axiomatized, do not provide a proper taxonomical structure, and do not define complex correlations between concepts and roles. Moreover, not all ontologies used for image annotation are suitable for video scene representation, due to the lack of rich high-level semantics and spatiotemporal formalisms. This paper presents an approach for combining multimedia ontologies for video scene representation, while taking into account the specificity of the scenes to describe, minimizing the number of ontologies, complying with standards, minimizing reasoning complexity, and, whenever possible, maintaining decidability.

I. INTRODUCTION

In the last 15 years, narrowing the notorious semantic gap in video understanding has largely been neglected compared to image interpretation [1]. For this reason, most research efforts have been limited to frame-based concept mapping, so that the corresponding techniques could be adapted from the results of the image semantics research communities. However, these approaches failed to exploit the temporal information and multiple modalities typical of videos.

Most domain ontologies developed for defining multimedia concepts, with or without standards alignment, went from one extreme to the other: they attempted to cover either a very narrow and specific knowledge domain that cannot be used for unconstrained videos, or an overly generic taxonomy for the most commonly depicted objects of video databases, which does not hold rich semantics.

Further structured data sources used for concept mapping include commonsense knowledge bases, upper ontologies, and Linked Open Data (LOD) datasets. Very little research has actually been done to standardize the corresponding resources, without which combining low-level image, audio, and video descriptors and sophisticated high-level descriptors with rule-based video event definitions cannot be efficient. An early implementation in this field was a core audiovisual ontology based on MPEG-7, ProgramGuideML, and TV Anytime [2]. A more recent research outcome is the core reference ontology VidOnt (http://vidont.org), which aims to act as a mediator between de facto standard and standard video and video-related ontologies [3].

II. PROBLEM STATEMENT

Despite the large number of multimedia ontologies mentioned in the literature, very few ontologies can be employed in video scene representation. Most problems and limitations of these ontologies indicate ontology engineering issues, such as a lack of formal grounding, failure to determine the scope of the ontology, overgeneralization, and the use of only a basic subset of the mathematical constructors available in the implementation language [4]. Capturing the associated semantics has quite often been exhausted by creating a taxonomical structure for a specific knowledge domain using the Protégé ontology editor (http://protege.stanford.edu); not only are domain and range definitions missing for properties, but even the property type is often incorrect.

As a result, implementing multimedia ontologies in video scene representation is not straightforward. For this reason, a novel approach has been introduced, which captures the highest possible semantics in video scenes.
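To make the property-related issues above concrete, consider the following minimal RDF/XML sketch of a well-engineered property declaration. The ex: namespace and the terms depicts, VideoScene, and DepictedObject are hypothetical, not drawn from any published multimedia ontology; the point is only that the property type, domain, and range are all made explicit:

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
           xmlns:owl="http://www.w3.org/2002/07/owl#">

    <!-- Declared as an object property (not a datatype or annotation
         property), with an explicit domain and range, so a reasoner
         can infer the types of the resources it connects -->
    <owl:ObjectProperty rdf:about="http://example.com/videoscene#depicts">
      <rdfs:domain rdf:resource="http://example.com/videoscene#VideoScene"/>
      <rdfs:range rdf:resource="http://example.com/videoscene#DepictedObject"/>
    </owl:ObjectProperty>

  </rdf:RDF>

Omitting any of these three axioms, or declaring the property with the wrong type, deprives a reasoner of exactly the inferences the annotation was meant to enable.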
III. TOWARDS A METHODOLOGY FOR COMBINING MULTIMEDIA ONTOLOGIES FOR VIDEO SCENE REPRESENTATION

The representation of video scenes largely depends on the target application, such as content-based video scene retrieval and hypervideo playback. Hence, the different requirements have to be set on a case-by-case basis. Nevertheless, there are common steps for structured video annotation, such as determining the desired balance between expressivity and reasoning complexity, capturing the intended semantics for the knowledge domain featured in the video or required by the application, and complying with standards. The proposed approach covers the key factors to be considered in order to achieve the optimal level of semantic enrichment for video scenes.

A. Intended Semantics

In contrast to image annotation, in which the intended semantics can typically be captured using concepts from domain or upper ontologies, the spatiotemporal annotation of video scenes requires a wide range of highly specialized ontologies.

The numeric representation of audio waveforms, as well as of the edges, interest points, regions of interest, ridges, and other visual features of video frames and video clips, employs low-level descriptors, usually from an OWL mapping of MPEG-7's XSD vocabulary. They correspond to local and global characteristics of video frames and of audio and video signals, such as intensity, frequency, distribution, and pixel groups, and to low-level feature aggregates, such as various histograms and moments based on low-level features. Examples of audio descriptors include the zero crossing rate descriptor, which can be used to determine whether the audio channel contains speech or music; the descriptors of formant parameters, which are suitable for phoneme and vowel identification; and the attack duration descriptor, which is used for sound identification.

Two feature aggregates frequently used for video representation are SIFT (Scale-Invariant Feature Transform), which is suitable for object recognition and tracking in videos [5], and HOF (Histogram of Optical Flow) [6], which can be used for, among other things, detecting humans in videos. The most common motion descriptors include the camera motion descriptor, which can characterize a video scene at a particular time according to professional video camera movements; the motion activity descriptor, which can be used to indicate the spatial and temporal distribution of activities; and the motion trajectory descriptor, which represents the displacement of objects over time.

The MPEG-7 descriptors can be used for tasks such as generating video summaries [7] and matching video clips [8]; however, they do not convey information about the meaning of audiovisual contents, i.e., they cannot provide high-level semantics [9]. Nevertheless, MPEG-7 terms can be used for low-level descriptors. However, using partial mappings of MPEG-7 limits semantic enrichment, because video representation requires a wide range of multimedia descriptors. Therefore, an ontology supporting only the visual descriptors of MPEG-7, such as the Visual Descriptor Ontology (VDO) [10], omits audio descriptors that can be used for describing the audio channel of videos. In fact, even a complete mapping of MPEG-7 does not guarantee semantic enrichment, such as the ones created via a […]

Common high-level video concepts can be utilized from Schema.org. For example, generic video metadata can be provided for video objects using schema:video and schema:VideoObject. Movies, series, seasons, and episodes of series can be described using schema:Movie, schema:MovieSeries, schema:CreativeWorkSeason, and schema:Episode. Analogously, video metadata can be described using schema:duration, schema:genre, schema:inLanguage, and similar properties. Rich video semantics can be described using specialized ontologies, such as the STIMONT ontology, which can capture the emotional responses associated with videos [12]. The use of more specific high-level concepts depends on the knowledge domain to represent, and often includes Linked Data [13].
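As a minimal sketch, the generic Schema.org metadata mentioned above can be expressed in RDF/XML as follows. The vocabulary terms (schema:Episode, schema:partOfSeason, schema:duration, schema:genre, schema:inLanguage) are real Schema.org terms; the resource URIs and literal values are illustrative only:

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:schema="http://schema.org/">

    <!-- A hypothetical episode of a series, with generic video metadata -->
    <schema:Episode rdf:about="http://example.com/videos/s01e01">
      <schema:partOfSeason rdf:resource="http://example.com/videos/season1"/>
      <schema:duration>PT42M</schema:duration> <!-- ISO 8601 duration -->
      <schema:genre>Documentary</schema:genre>
      <schema:inLanguage>en</schema:inLanguage>
    </schema:Episode>

  </rdf:RDF>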
Criteria

A1. The ontology or dataset captures the intended semantics, or the semantics closest to the intended semantics, in terms of concept and property definitions.

A2. The terms to be used for annotation are defined in a standardized ontology or dataset. If this is not available, or there are similar or identical definitions available in multiple ontologies or datasets, the choice is determined by the following precedence order: 1) standard, 2) standard-aligned, 3) de facto standard, 4) proprietary.

B. Quality of Conceptualization

Another important consideration beyond capturing the intended semantics is the quality of conceptualization. For example, the MPEG-7 mappings known from the literature transformed semistructured definitions into structured data, but this did not make them suitable for reasoning over visual contents. Since MPEG-7 provides low-level descriptors, their OWL mapping does not provide real-world semantics, which can be achieved through high-level descriptors only. The MPEG-7 descriptors provide metadata and technical characteristics to be processed by computers, so their structured definition does not contribute to the semantic enrichment of the corresponding multimedia resources. To demonstrate this, take a closer look at a code fragment of the Core Ontology for Multimedia (COMM):

  <owl:Class rdf:about="#cbac-coefficient-14">
    <rdfs:comment rdf:datatype="&xsd;string">Corrresponds to the
      "CbACCoeff14" element of the "ColorLayoutType"
      (part 3, page 45)</rdfs:comment>
    <rdfs:subClassOf>
      <owl:Class
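The fragment breaks off at this point in the excerpt, but the pattern is already visible: the class names a single chrominance AC coefficient of the MPEG-7 color layout descriptor, a value meaningful to a signal processing algorithm rather than to a human interpreter or a reasoner. For contrast, a high-level annotation states what a scene actually depicts. The following sketch reuses the hypothetical ex: namespace introduced earlier; all URIs are illustrative only:

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:ex="http://example.com/videoscene#">

    <!-- A high-level, human-meaningful statement about a video scene,
         which a reasoner can combine with domain knowledge -->
    <ex:VideoScene rdf:about="http://example.com/videos/clip7#scene2">
      <ex:depicts rdf:resource="http://example.com/entities/ChaseSequence"/>
    </ex:VideoScene>

  </rdf:RDF>

Unlike the COMM class above, such a statement asserts real-world semantics that can participate in subsumption and rule-based inference.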
