A Case Study of NLG from Multimedia Data Sources: Generating Architectural Landmark Descriptions

Simon Mille¹, Spyridon Symeonidis², Maria Rousi², Montserrat Marimon Felipe¹, Klearchos Stavrothanasopoulos², Petros Alvanitopoulos², Roberto Carlini¹, Jens Grivolla¹, Georgios Meditskos², Stefanos Vrochidis², and Leo Wanner¹,³

¹ Universitat Pompeu Fabra, Barcelona, Spain (corresponding author UPF: [email protected])
² Information Technologies Institute - CERTH, Thessaloniki, Greece (corresponding author ITI-CERTH: [email protected])
³ Catalan Institute for Research and Advanced Studies (ICREA)

3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), Dublin, Ireland (Virtual), 18 December 2020, pages 2-14. © 2020 Association for Computational Linguistics, Attribution 4.0 International.

Abstract

In this paper, we present a pipeline system that generates architectural landmark descriptions using textual, visual and structured data. The pipeline comprises five main components: (i) a textual analysis component, which extracts information from Wikipedia pages; (ii) a visual analysis component, which extracts information from copyright-free images; (iii) a retrieval component, which gathers relevant <property, subject, object> triples from DBpedia; (iv) a fusion component, which stores the contents from the different modalities in a Knowledge Base (KB) and resolves the conflicts that stem from using different sources of information; and (v) an NLG component, which verbalises the resulting contents of the KB. We show that, thanks to the addition of other modalities, we can make the verbalisation of DBpedia triples more relevant and/or inspirational.

1 Introduction

The bulk of the information reaches the reader nowadays across different media. Most of the videos uploaded to YouTube come accompanied by written natural language comments, and so do most of the audio podcasts; even (online) newspaper articles can hardly be imagined without any visual illustrative material. This means that, to generate a comprehensive but at the same time not partially repetitive content summary of the provided information, the input of all media needs to be taken into account and merged.

The value of merging complementary information from multiple media for the generation of more informative texts was pointed out early on in the field; cf., e.g., (Huang et al., 1999). However, since then, only a few works have tackled the problem; see, e.g., (Das et al., 2013; Xu et al., 2015). A few others merge multimedia content in the context of other tasks such as retrieval; cf. (Clinchant et al., 2011). In most of these works, the integration is done using similarity measures in a multimedia vector space. To the best of our knowledge, none aims for integration (or fusion) at, and subsequent generation from, the level of ontological <subj predicate obj> triples, which is crucial in order to be able to generalize, use inheritance or apply advanced reasoning mechanisms.

In this paper, we present a work that addresses this challenge. It fuses triples from DBpedia, textual information from Wikipedia, and visual information obtained from images as input to a pipeline-based generator. Our work is situated in the context of a larger research initiative in which the objective is to automatically reconstruct architectural landmarks in 3D, such that they can be used by architects, game designers, journalists, etc. Each reconstructed landmark should be accompanied by automatically generated information that describes its main features. The goal is to convey information such as the landmark's architect, some of its facade elements, its date of construction and/or renovation, its architectural style, etc., in terms of a text like:

  Petronas Towers, which César Pelli designed, are commercial offices and a tourist attraction in Kuala Lumpur. The building has 88 floors and 40 elevators and a floor area of 395,000 m². It was the highest building in the world between 1998 and 2004, and was restored in 2001.

which is a verbalisation of the triples in Table 1.

In what follows, we describe how this is done, from content extraction to the use of a grammar-based text generator. We thus do not focus exclusively on text generation. Rather, we attempt to show how richer input structures can be created using different media for the benefit of more comprehensive and user-relevant texts.

  Petronas Towers   location          Kuala Lumpur
  Petronas Towers   restorationDate   2001
  Petronas Towers   floorArea         395,000
  Petronas Towers   floorCount        88
  Petronas Towers   elevatorCount     40
  Petronas Towers   buildingType      commercial offices
  Petronas Towers   buildingType      tourist attraction
  Petronas Towers   highestRegion     world
  Petronas Towers   highestStart      1998
  Petronas Towers   highestEnd        2004
  Petronas Towers   architect         César Pelli

  Table 1: A set of <subj predicate obj> input triples
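To make the shape of this input concrete, the following Python sketch shows one way such a triple set could be held in memory before being handed to the downstream components. It is only an illustration: the tuple-based representation and the group_by_property helper are assumptions made for the example, not part of the system described in this paper (which stores the content in an ontology-backed KB).

```python
from collections import defaultdict

# The input triples for the "Petronas Towers" entity, as listed in Table 1
# (hypothetical in-memory representation for illustration only).
TRIPLES = [
    ("Petronas Towers", "location",        "Kuala Lumpur"),
    ("Petronas Towers", "restorationDate", "2001"),
    ("Petronas Towers", "floorArea",       "395,000"),
    ("Petronas Towers", "floorCount",      "88"),
    ("Petronas Towers", "elevatorCount",   "40"),
    ("Petronas Towers", "buildingType",    "commercial offices"),
    ("Petronas Towers", "buildingType",    "tourist attraction"),
    ("Petronas Towers", "highestRegion",   "world"),
    ("Petronas Towers", "highestStart",    "1998"),
    ("Petronas Towers", "highestEnd",      "2004"),
    ("Petronas Towers", "architect",       "César Pelli"),
]

def group_by_property(triples):
    """Group object values by (subject, property) so that multi-valued
    properties (e.g. the two buildingType triples) stay together."""
    grouped = defaultdict(list)
    for subj, pred, obj in triples:
        grouped[(subj, pred)].append(obj)
    return dict(grouped)

if __name__ == "__main__":
    for (subj, pred), objs in group_by_property(TRIPLES).items():
        print(f"{subj} -- {pred} --> {', '.join(objs)}")
```

Grouping multi-valued properties such as buildingType is exactly the kind of aggregation the verbalisation above relies on ("are commercial offices and a tourist attraction").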
2 Related work

As already pointed out above, to the best of our knowledge, only a few works deal with the fusion of content from different media as input representation for downstream applications (in our case, text generation), and when they do, they use similarity measures in a multimedia vector space (Huang et al., 1999; Clinchant et al., 2011; Das et al., 2013; Xu et al., 2015) rather than mapping multimedia content onto a common ontology. This does not mean, though, that research related to text generation across different media has been neglected. For instance, generation of image captions (Hossain et al., 2019) and video descriptions (Aafaq et al., 2019) has recently become a very popular research topic. All of the proposals in this area use sequence-to-sequence neural network models. In (Idaya Aspura and Azman, 2017), indexes of textual and visual features are integrated via a multi-modality ontology, which is further enriched by DBpedia triples for the purpose of semantics-driven image retrieval. On the other side, text generation from ontological structures is on the rise; cf., e.g., (Bouayad-Agha et al., 2014; Gatt and Krahmer, 2018) for overviews and the WebNLG challenge (Gardent et al., 2017a) for state-of-the-art works.

In general, there are three main approaches to generating texts from ontologies: (i) filling slot values in predefined sentence templates (McRoy et al., 2003), (ii) applying grammars that encode different types of linguistic knowledge (Varges and Mellish, 2001; Wanner et al., 2010; Bouayad-Agha et al., 2012; Androutsopoulos et al., 2013), and (iii) predicting the most appropriate output based on machine learning models (Gardent et al., 2017b; Belz et al., 2011). Template-based generators are very robust, but also limited in terms of portability, since new templates need to be defined for every new domain, style, language, etc. Machine learning-based generators have the best coverage, but the relevance and the quality of the produced texts cannot be ensured. Furthermore, they are fully dependent on the available (still scarce and mostly monolingual) training data. The development of grammar-based generators is time-consuming and they usually have coverage issues. However, they do not require training material, they allow for greater control over the outputs (e.g., for mitigating errors or tuning the output to a desired style), and the linguistic knowledge used for one domain or language can be reused for other domains and languages. A number of systems also combine (i) and (iii), filling the slot values of pre-existing templates using neural network techniques (Nayak et al., 2017).

In what follows, we opt for a grammar-based generator. We show that information from visual, textual and structured (DBpedia) sources can be successfully fused in order to generate informative descriptions using a pipeline-based text generator.

3 System and dataset overview

Let us first introduce the architecture of our system and then outline the creation of the datasets used for development and testing.

3.1 General system architecture

The workflow of our system is illustrated in Figure 1. The initial input is the topic entity on which the text is to be generated. Based on this, the data collection module harvests relevant content from the Web. The resources of interest are images from the Flickr website and texts from Wikipedia, which are processed by the visual and textual analysis modules, respectively. The two modules extract a rich set of features that describe the entity. The knowledge integration and reasoning module stores the extracted visual and textual features, along with additional metadata retrieved from DBpedia, in dedicated ontologies. Semantic reasoning and fusion operations are subsequently executed on top of the saved data to aggregate the information coming from the different media into a unified entity representation. Text generation starts from this representation in order to produce a textual description.

[Figure 1: System architecture.]
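As a rough illustration of this workflow, the sketch below wires the stages of Figure 1 together in Python. Every name in it (collect_data, analyse_images, analyse_text, retrieve_dbpedia_metadata, fuse, generate_text) is a hypothetical stub introduced for the example; it only mirrors the module structure described above and is not the system's actual API.

```python
from typing import Dict, List, Tuple

# A unified entity representation is modelled here simply as a mapping from
# feature names to lists of values; the real system uses dedicated ontologies.
Features = Dict[str, List[str]]

def collect_data(entity: str) -> Tuple[list, str]:
    """Data collection stub: would harvest Flickr images and the Wikipedia page."""
    return [], ""

def analyse_images(images: list) -> Features:
    """Visual analysis stub: would detect e.g. facade elements in the images."""
    return {}

def analyse_text(text: str) -> Features:
    """Textual analysis stub: would extract entity features from the Wikipedia text."""
    return {}

def retrieve_dbpedia_metadata(entity: str) -> Features:
    """Retrieval stub: would collect additional metadata from DBpedia."""
    return {}

def fuse(*sources: Features) -> Features:
    """Knowledge integration stub: merge per-medium features into one unified
    representation (conflict resolution between sources is omitted here)."""
    merged: Features = {}
    for source in sources:
        for name, values in source.items():
            merged.setdefault(name, []).extend(values)
    return merged

def generate_text(entity: str, representation: Features) -> str:
    """NLG stub: a grammar-based generator would verbalise the representation."""
    return f"{entity}: {len(representation)} feature(s) available for verbalisation."

def describe_landmark(entity: str) -> str:
    images, wiki_text = collect_data(entity)
    representation = fuse(analyse_images(images),
                          analyse_text(wiki_text),
                          retrieve_dbpedia_metadata(entity))
    return generate_text(entity, representation)

if __name__ == "__main__":
    print(describe_landmark("Petronas Towers"))
```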
3.2 Development and test datasets

The targeted entities are architectural landmarks such as buildings, statues or stadiums. The goal is to be able to generate a description which contains, e.g., the date of creation, the location, the architecture style or the popularity.

To create sufficiently diverse datasets, we first manually compiled a list of 160 landmarks that vary in terms of the aforementioned characteristics.

On average, 6 features per landmark can be obtained through DBpedia (up to 13 for a single landmark). The information that corresponds to one feature can be encoded by a variety of property names (up to 10). For instance, the cost of a building can be expressed by dbo:cost, dbp:cost, or dbp:constructionCost. The 39 features and the corresponding 98 properties are listed in Table 7, Appendix A.2.

For the retrieval of the corresponding DBpedia triples, we developed a component that applies SPARQL queries to the DBpedia SPARQL endpoint. In some rare cases, the queries returned an error message at the time they were performed; in such a case, the property cannot be accessed and the information is not retrieved.
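As an illustration of what such a retrieval step can look like, the sketch below queries the public DBpedia SPARQL endpoint with the SPARQLWrapper library for the cost-related properties mentioned above, and simply skips the feature when the endpoint returns an error. The query shape, the retrieve_feature helper and the use of SPARQLWrapper are assumptions made for this example; they do not reproduce the paper's actual retrieval component.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"

# Several DBpedia properties may encode the same feature (here: building cost).
COST_PROPERTIES = ["dbo:cost", "dbp:cost", "dbp:constructionCost"]

def retrieve_feature(entity: str, properties) -> list:
    """Return (property, value) pairs for the given entity, or an empty list
    if the endpoint replies with an error (the feature is then skipped)."""
    values_clause = " ".join(properties)
    sparql = SPARQLWrapper(DBPEDIA_ENDPOINT)
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX dbp: <http://dbpedia.org/property/>
        PREFIX dbr: <http://dbpedia.org/resource/>
        SELECT ?property ?value WHERE {{
            VALUES ?property {{ {values_clause} }}
            dbr:{entity} ?property ?value .
        }}
    """)
    sparql.setReturnFormat(JSON)
    try:
        results = sparql.query().convert()
    except Exception:
        # Rarely, the endpoint returns an error; in that case no value is retrieved.
        return []
    return [(b["property"]["value"], b["value"]["value"])
            for b in results["results"]["bindings"]]

if __name__ == "__main__":
    for prop, value in retrieve_feature("Petronas_Towers", COST_PROPERTIES):
        print(prop, value)
```

Since the same feature can surface under several property names, the retrieved pairs would still need to be normalised to a single canonical feature name before being fused with the features extracted from text and images.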
