A Pattern-Based Approach to Ontology-Supported Semantic Data Integration

Total Pages: 16

File Type: PDF, Size: 1020 KB

A pattern-based approach to ontology-supported semantic data integration

Andreas A. Jordan, Wolfgang Mayer, Georg Grossmann, Markus Stumptner
Advanced Computing Research Centre, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, South Australia 5095
Email: [email protected]
Email: (wolfgang.mayer|georg.grossmann)@unisa.edu.au, [email protected]

Copyright © 2011, Australian Computer Society, Inc. This paper appeared at the Seventh Australasian Ontology Workshop (AOW 2011), Perth, Western Australia. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 48, Vladimir Estivill-Castro and Gillian Dobbie, Ed. Reproduction for academic, not-for-profit purposes permitted provided this text is included.

Abstract

Interoperability between information systems is rapidly becoming an increasingly difficult issue, in particular where data is to be exchanged with external entities. Although standard naming conventions and forms of representation have been established, significant discrepancies remain between different implementations of the same standards and between different standards within the same domain. Furthermore, current approaches for bridging such heterogeneities are incomplete or do not scale well to real-world problems.

We present a pragmatic approach to interoperability, where ontologies are leveraged to annotate prototypical information fragments in order to facilitate automated transformation between fragments in different standards and representations. We discuss our approach based on an example drawn from the oil and gas industry.

1 Introduction

A growing number of organisations are interested in sharing information between their internal information systems and externally with partners or the public sector. A core issue facing these organisations is that the information management systems they employ have been built without supporting a common standard for the underlying data models used for storing information and their interfaces. Consequently, creating custom-built software that translates between different interfaces and data models is a predominantly manual task, and current solutions for bridging heterogeneities between information systems do not scale well to real-world problems.

Although standardisation of domain models, interfaces, and information representation can help to reduce this burden, standardisation alone has been insufficient to overcome the interoperability challenge. This is predominantly due to the fact that the standards themselves are heterogeneous and sometimes suffer from ambiguity. In the Engineering domain alone, there exist several competing standards and de-facto standards set by large tool vendors. For example, in the oil and gas industry, the MIMOSA¹, RSM², ISO 15926³ and Gellish⁴ standards are all used to represent information about engineering assets and associated maintenance information. However, there exists considerable variation in the scope and level of detail present in each standard, and where the information content covered by different standards overlaps, their vocabulary and representation used vary considerably. Furthermore, standards like ISO 15926 are large yet offer little guidance on how to represent particular asset information in its generic data structures. Heterogeneities arising from continued evolution of standards have further exacerbated the problem. As a consequence, tool vendors formally support the standard, yet their implementations do not interoperate due to different use of the same standard or varying assumptions about the required and optional information held in each system. Not all standards are supported by formal ontologies that would allow one to overcome these heterogeneities automatically. Where formal models are present, their focus is typically on describing individual entities and relationships, whereas dealing with issues arising from their use in larger representation structures, and scalability problems stemming from application of inference mechanisms, are often left aside.

¹ http://www.mimosa.org/
³ http://15926.org/home/tiki-index.php
⁴ http://www.gellish.net/

The aim of our work is to build a bridge that ties together the heterogeneous standards and provides automated translation between their information models and concrete implementations. Our hypothesis is that this goal can be achieved only if the information representation fragments used by different tools and standards are captured such that possible translations between equivalent fragments can be inferred automatically. To resolve the interoperability problem not only with respect to terminological heterogeneities but also with respect to heterogeneous usage patterns by different organisations, existing standards must be complemented with a formal model of the elementary "information fragments" and possible use of these fragments within each standard and organisation. Whereas approaches for translating a semi-formal model like Gellish into natural language exist, we aim to extend these ideas to formal complex structures. This requires a formal model of the building blocks, henceforth "information fragments", in the source and target representation and mechanisms to translate between "similar" fragments in different standards.
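To make the notion of parameterised information fragments more concrete, the sketch below models a fragment as a small Python structure whose parameters can be bound to concrete values and which records the template(s) that realise it in each standard. All class names, field names, and template identifiers here are illustrative assumptions; they are not drawn from MIMOSA, ISO 15926, or the authors' ontology.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass(frozen=True)
class Fragment:
    """A parametric 'information fragment' (illustrative, not the paper's schema)."""
    name: str                          # e.g. "PumpHasManufacturer"
    parameters: Tuple[str, ...]        # ordered parameter names
    templates: Dict[str, str] = field(default_factory=dict)
    # templates maps a standard to the (hypothetical) template that realises
    # this fragment there, e.g. {"MIMOSA": "...", "ISO15926": "..."}

@dataclass(frozen=True)
class FragmentInstance:
    """A fragment whose parameters are bound to concrete values or object ids."""
    fragment: Fragment
    bindings: Dict[str, str]

# A fragment that happens to have a counterpart template in both standards:
pump_maker = Fragment(
    name="PumpHasManufacturer",
    parameters=("pump", "manufacturer"),
    templates={"MIMOSA": "asset_manufacturer_row", "ISO15926": "template_R123"},
)

# Instantiating the fragment yields a concrete statement; a translator would
# re-express the same bindings through the other standard's template.
stmt = FragmentInstance(pump_maker, {"pump": "P-101", "manufacturer": "Acme Pumps"})
print(stmt.fragment.templates["ISO15926"], stmt.bindings)
```

On this reading, the role of the ontology is to establish when fragments from different standards carry the same meaning, so that an instance of one can be rewritten as instances of the other.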
For this purpose, we intend to construct an ontology that reconciles the concepts and relationships in each standard into a common model and formally express the information fragments found in each standard in terms of the common model. Our focus will be on representing information fragments and their concrete representation in different standards such that equivalent representations in different standards can be constructed automatically.

We seek to investigate formal models and inference procedures to determine which are adequate for capturing the intent and composition of concrete data structures on each end of the translation process while reusing as much as possible existing ontologies capturing domain-specific entities and their relations. Whereas (domain-specific) ontologies provided by several standards exhaustively capture domain concepts and relations, we intend to develop mechanisms for reasoning about translations between different representations. The resulting ontology will help to build adequate translation systems by providing developers of integration tools a catalogue of possible translations between different systems, either at run-time or at design time, when translations for information fragments in the source representation must be formed. Our ontology will also be able to verify completeness of a given translation by identifying fragments and constraints in the source model that cannot be translated precisely into the target model, and fragments that have ambiguous or multiple conflicting translations.
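A rough sketch of the completeness check described above, under the assumption that the catalogue of possible translations can be queried as a plain mapping from source fragments to candidate target-fragment combinations (all names and data are hypothetical):

```python
from typing import Dict, List, Tuple

def check_translation(source_fragments: List[str],
                      catalogue: Dict[str, List[str]]) -> Tuple[List[str], Dict[str, List[str]]]:
    """Identify source fragments without a precise translation and those with
    several competing candidates. Illustrative only; not the authors' algorithm."""
    untranslatable = [f for f in source_fragments if not catalogue.get(f)]
    ambiguous = {f: catalogue[f] for f in source_fragments
                 if len(catalogue.get(f, [])) > 1}
    return untranslatable, ambiguous

# Hypothetical catalogue for one translation direction (MIMOSA -> ISO 15926):
catalogue = {
    "PumpHasManufacturer": ["template_R123"],                    # unique translation
    "PumpHasSerialNumber": ["template_R200", "template_R201"],   # conflicting options
    "WorkOrderStatus": [],                                       # no counterpart
}
missing, conflicts = check_translation(list(catalogue), catalogue)
print(missing)    # ['WorkOrderStatus']
print(conflicts)  # {'PumpHasSerialNumber': ['template_R200', 'template_R201']}
```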
Our research will be predominantly driven by a case study within the oil and gas industry, where information models held in MIMOSA, ISO 15926, and RSM are to be synchronised among a number of systems.

In this paper we make the following contributions:
• we outline our overall approach to automated fragment-based information integration,
• present our approach to linking existing taxonomies and ontologies to our model,
• show how to capture information fragments and associated translation operations in an ontology, and
• show how the resulting ontology can be applied to infer automated translation between source and target representations.

The paper is organised as follows. We present our overall approach in Section 2 and introduce […]

Figure 1: Schematic of the overall translation process on the example of translation between the MIMOSA and ISO 15926 standards.

[…] unifying ontology. Each fragment can be seen as a parametric statement that, when instantiated with suitable values and objects for its parameters, expresses a certain meaning in the concrete representation of the corresponding ontology. By combining fragment instances representing template instances in the target representation, a suitable translation for a given set of template instances in the source representation can be devised.

Ontology of Information Fragments. Central to our approach is the unifying ontology of information fragments and translation operators depicted in the centre of Figure 1. It extends a suitable upper ontology in order to foster reuse and to ensure its overall organisation adheres to well-established knowledge representation principles. Each fragment in our ontology is linked with its corresponding template(s) in one or more source or target ontologies. Note that not every fragment will have corresponding templates in each data source. Therefore, it is necessary to find combinations of fragments that are available in the target ontology that correspond to fragments found in the information source. Furthermore, it may be necessary to query information from the target in order to obtain information (such as object identifiers of existing objects) while instantiating target fragments. We intend to construct an ontology that captures the properties and constraints of fragments in a formal manner such that equivalent instances of fragments can be identified and synthesised. In order to reason about transformations, it is necessary to capture […]
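As a minimal sketch of the fragment-combination step just described, the snippet below approximates each fragment's meaning by a set of atomic assertions in the unifying ontology and searches for a combination of target fragments that jointly express a source fragment. The fragment names and assertions are invented, and the brute-force search merely stands in for the ontology-based reasoning the authors propose.

```python
from itertools import combinations
from typing import Dict, FrozenSet, List, Optional

# Each fragment's "meaning" is approximated as a set of atomic assertions over
# the unifying ontology; all names and assertions below are illustrative.
SOURCE: Dict[str, FrozenSet[str]] = {
    "MIMOSA_AssetRecord": frozenset({"hasSerialNumber", "hasManufacturer"}),
}
TARGET: Dict[str, FrozenSet[str]] = {
    "ISO_SerialTemplate": frozenset({"hasSerialNumber"}),
    "ISO_MakerTemplate": frozenset({"hasManufacturer"}),
    "ISO_LocationTemplate": frozenset({"hasLocation"}),
}

def cover(source_meaning: FrozenSet[str],
          target: Dict[str, FrozenSet[str]],
          max_size: int = 3) -> Optional[List[str]]:
    """Smallest combination of target fragments whose joint meaning equals the
    source fragment's meaning; brute force over small combinations."""
    for size in range(1, max_size + 1):
        for combo in combinations(target, size):
            joined = frozenset().union(*(target[name] for name in combo))
            if joined == source_meaning:
                return list(combo)
    return None  # no precise translation with at most max_size fragments

print(cover(SOURCE["MIMOSA_AssetRecord"], TARGET))
# -> ['ISO_SerialTemplate', 'ISO_MakerTemplate']
```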
Recommended publications
  • Data Warehouse: an Integrated Decision Support Database Whose Content Is Derived from the Various Operational Databases
    DATABASE MANAGEMENT. Topic Objective: At the end of this topic the student will be able to: understand the contrasting basic concepts; understand the database server and database specified; understand the USER clause. Definition/Overview: Data: Stored representations of objects and events that have meaning and importance in the user's environment. Information: Data that have been processed in such a way that they can increase the knowledge of the person who uses it. Metadata: Data that describes the properties or characteristics of end-user data and the context of that data. Database application: An application program (or set of related programs) that is used to perform a series of database activities (create, read, update, and delete) on behalf of database users. Data warehouse: An integrated decision support database whose content is derived from the various operational databases. Constraint: A rule that cannot be violated by database users. Database: An organized collection of logically related data. Entity: A person, place, object, event, or concept in the user environment about which the organization wishes to maintain data. Database management system: A software system that is used to create, maintain, and provide controlled access to user databases. Data dependence; data independence: With data dependence, data descriptions are included with the application programs that use the data, while with data independence the data descriptions are separated from the application programs. Data warehouse; data mining: A data warehouse is an integrated decision support database, while data mining (described in the topic introduction) is the process of extracting useful information from databases.
  • Geographic Information Services
    Note: In order to continue to make this intellectual content available to the public, the National Transportation Library (NTL) copied the contents of this document from its original HTML file. The link to the original HTML file, which was last accessed on 2020-07-29, is https://ntlrepository.blob.core.windows.net/lib/4000/4200/4256/report.html NTL has preserved this content as it was presented in the HTML file, and has checked this document to ensure as complete a representation as possible. Questions about this file can be directed through the NTL Ask a Librarian service at: https://transportation.libanswers.com/form.php?queue_id=1810 _____________________________________________________________________________________ Geographic Information Services ON THE RESULTS OF A WORKSHOP ON GENERIC DATA MODEL FOR LINEAR REFERENCING SYSTEMS Authored by Workshop Participants with Alan Vonderohe (Principal Investigator Supported by The National Cooperative Highway Research Program Project) PREFACE The first draft of this report was prepared from notes and other materials developed during the workshop, audio tape recordings of the workshop sessions, and follow-up discussions with workshop participants. All participants were given the opportunity to review the first draft and provide responses. These responses were assimilated to identify both consensus revisions to the first draft and significant points of contention. The responders were informed of these results and asked to provide their opinions on each of the points of contention. The responders were also asked to provide measures of the relative importance of their positions on each of the points of contention (i.e., "critical", "strong preference", "weak preference"). The revised model presented in this second draft is true to the model as developed during the workshop and to the revisions on which there was consensus.
  • A Framework for GIS-T Data Sharing
    Portland State University PDXScholar Center for Urban Studies Publications and Reports Center for Urban Studies 6-2000 A Framework for GIS-T Data Sharing Kenneth Dueker Portland State University J. Allison Butler Follow this and additional works at: https://pdxscholar.library.pdx.edu/cus_pubs Part of the Transportation Commons, and the Urban Studies and Planning Commons Let us know how access to this document benefits ou.y Citation Details Dueker, Kenneth and Butler, J. Allison, "A Framework for GIS-T Data Sharing" (2000). Center for Urban Studies Publications and Reports. 13. https://pdxscholar.library.pdx.edu/cus_pubs/13 This Working Paper is brought to you for free and open access. It has been accepted for inclusion in Center for Urban Studies Publications and Reports by an authorized administrator of PDXScholar. Please contact us if we can make this document more accessible: [email protected]. A Framework for GIS-T Data Sharing Kenneth J. Dueker Portland State University J. Allison Butler Hamilton County Tennessee December 1999 (Revised June 2000) Submitted to Transportation Research Part C: Emerging Technologies Center for Urban Studies Portland State University Portland OR 97207-0751 www.upa.pdx.edu/cus/ Abstract This paper develops a framework and principles for sharing of transportation data. The framework is intended to clarify roles among participants, data producers, data integrators, and data users. The principles are intended to provide guidance for the participants. Both the framework and the principles are based on an enterprise GIS-T data model that defines relations among transportation data elements. The data model guards against ambiguities and provides a basis for the development of the framework and principles for sharing of transportation data.
  • A CASE STUDY EXAMINATION DATA MODELLING in PRACTICE Paul
    A CASE STUDY EXAMINATION DATA MODELLING IN PRACTICE. Paul Groves. A report submitted in partial fulfilment of the requirements of the degree of Master of Commerce (Honours) to the University of New South Wales, 1988. CERTIFICATION: "I hereby declare that this submission is my own work and that, to the best of my knowledge and belief, it contains no material previously published or written by another person nor material which to a substantial extent has been accepted for the award of any other degree or diploma of a University or any other institute of higher learning, except where due acknowledgement is made in the text." ABSTRACT: Data modelling for analysis and data base design is increasingly being viewed as a critical phase in the systems development process. This report is a comparative analysis of data modelling theory and practice. It investigates the nature of data and examines several data modelling methodologies. Current international standards for the conceptual data model are reviewed and based on this a reference framework is defined. This framework is used to compare four contemporary data modelling theories. Field testing of three of the methods is conducted, two case studies from a commercial environment and one from an academic setting. The case studies are conducted on a descriptive research basis. Results from the case studies confirm that data modelling represents a technique of growing importance in the systems development process. Increasing resources applied to the practice of relational database should ensure ongoing theoretical interest and development. Although in the formative stages of implementation and use, binary data modelling was seen to have achieved notable success in enhancing communication between project participants and in increasing user participation.
  • How to Make an Effective Information Exchange Data Model Or the Good and Bad Aspects of the NATO JC3IEDM
    How to Make an Effective Information Exchange Data Model or The Good and Bad Aspects of the NATO JC3IEDM. Eddie Lasschuyt, MSc; Marcel van Hekken, MSc; Willem Treurniet, MSc; Marc Visser, MSc. TNO Physics and Electronics Laboratory, P.O. Box 96864, 2509 JG The Hague, THE NETHERLANDS. T +31-703740208, F +31-703740652, [email protected]. 2 September 2004* Abstract: Coalition-wide interoperability implies, among other things, that a large amount of C4I systems are able to seamlessly exchange information. A bare necessity for this is to have common exchange languages that unambiguously define the information which is shared among the coalition parties. Information Exchange Data Models (IEDMs) are used for that. But developing an IEDM is a difficult process. The resulting model is often very big and complex, which makes it hard to comprehend and implement. This paper discusses a set of guidelines which help to make better IEDMs. The Joint C3 IEDM from MIP and NDAG is used as a case, being the most mature model of its kind in the NATO C3 environment. * This paper is presented at the NATO RTO IST-042 Symposium on "Coalition C4ISR Architectures and Information Exchange Capabilities", in The Hague, The Netherlands, 27-28 September 2004. RTO is the NATO Research and Technology Organisation; IST is the Information Systems Technology panel of RTO. Paper presented at the RTO IST Symposium on "Coalition C4ISR Architectures and Information Exchange Capabilities", held in The Hague, The Netherlands, 27-28 September 2004, and published in RTO-MP-IST-042.
  • Relational Database Systems 1
    Relational Database Systems 1. Wolf-Tilo Balke, Jan-Christoph Kalo. Institut für Informationssysteme, Technische Universität Braunschweig, www.ifis.cs.tu-bs.de. Summary last week: Databases are logical interfaces, support declarative querying, are well-structured, aim at efficient manipulation of data, support control redundancy, support multiple views of the data, support atomic multi-user transactions, and support persistence and recovery of data. 2 Data Modeling: Phases of DB Design; Data Models; Basic ER Modeling (Chen Notation, UML, mathematical model); Example. 2.1 Database Applications: Database applications consist of database instances with their respective DBMS and associated application programs interfacing with the users. Planning and developing application programs traditionally is a software engineering problem (Requirements Engineering, Conceptual Design, Application Design, …). Software engineers and data engineers cooperate tightly in planning the need, use and flow of data (Data Modeling, Database Design).
  • Cafm Data Structures: a Review and Examples
    Thomas Schürle CAFM DATA STRUCTURES: A REVIEW AND EXAMPLES Thomas SCHÜRLE, Dieter FRITSCH Institute for Photogrammetry (ifp), Stuttgart University, Germany [email protected] [email protected] Working Group IV/III.1 KEY WORDS: Buildings, Data structures, Object oriented ABSTRACT Geographic information systems (GIS) focus on the management and mapping of outdoor information, their data structures are object oriented. CAFM systems manage and map indoor information. The concepts of information management and data structures in CAFM are quite similar to the concepts used in GIS. For the introduction of a CAFM system a data model is developed through the modelling language UML. The overall aim was to develop a general information model for CAFM at Stuttgart University. The development of UML is described and a short introduction in the UML class diagram is given. For the data model we use the class diagram for visualization purposes. 1 INTRODUCTION The development of CAFM (Computer Aided Facility Management) is an upcoming topic in industry and administration. Every organisation is faced with the hard competition in the market and therefore forced to develop efficient structures in facility management. For CAFM systems, data structures and development of these structures are very important. The research topic of the paper is therefore focussed on the development of a consistent data model for CAFM. Existing CAFM software packages are either based on CAD systems or without graphics in standard databases. The link of attributes with graphics of CAFM information does not often represent a generic data model due to the development of these systems.
  • Nosql Database Design Using UML Conceptual Data Model Based on Peter Chen’S Framework
    International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 12, Number 5 (2017), pp. 632-636. © Research India Publications. http://www.ripublication.com NoSQL Database Design Using UML Conceptual Data Model Based on Peter Chen's Framework. Kwangchul Shin, Chulhyun Hwang, Hoekyung Jung*. Department of Business IT, Kook Min University, 77 Jeongneung-Ro, Seongbuk-gu, Seoul, South Korea; Department of Computer Engineering, Pai Chai University, Doma2-Dong, Seo-gu, Daejeon, South Korea. Correspondent Author: Hoekyoung Jung. Abstract: In the Big Data era, relational databases and NoSQL databases coexist in Polyglot Persistence environment. Although data management is more essential in an environment where a variety of databases are, NoSQL databases only concentrate on solving non-functional requirements to run well on large clusters. This situation makes consistent data management standards difficult. To solve this problem, this study proposes NoSQL database design method using conceptual data model based on Peter Chen's framework. The proposed design method is applied to the e-commerce business area in order to examine the applicability of it. Keywords: Data Model, Conceptual Data Model, Big Data. The present paper is composed of five chapters: chapter 2 addresses the necessity of conceptual data model applied to NoSQL database design through related researches; chapter 3 explains the method to design NoSQL database using conceptual data model; chapter 4 illustrates the real examples applied, to find out the possibility of using the proposed data modelling method in Document Database; and chapter 5 concludes the paper with the proposal on the future work plan. RELATED WORKS: Conceptual data model which can express only data in itself independent of particular technologies has been variously utilized for data sharing, reference data model and so on [10,11].
  • Data Modeling 1 Data Modeling
    Data modeling. Data modeling in software engineering is the process of creating a data model by applying formal data model descriptions using data modeling techniques. Overview: Data modeling is a method used to define and analyze data requirements needed to support the business processes of an organization. The data requirements are recorded as a conceptual data model with associated data definitions. Actual implementation of the conceptual model is called a logical data model. To implement one conceptual data model may require multiple logical data models. Data modeling defines not just data elements, but their structures and relationships between them [2]. (Figure: The data modeling process. The figure illustrates the way data models are developed and used today. A conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the start point for interface or database design. [1]) Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling: to manage data as a resource; for the integration of information systems; for designing databases/data warehouses (aka data repositories). Data modeling may be performed during various types of projects and in multiple phases of projects.
  • Modeling and Management Big Data in Databases—A Systematic Literature Review
    sustainability Review Modeling and Management Big Data in Databases—A Systematic Literature Review Diana Martinez-Mosquera 1,* , Rosa Navarrete 1 and Sergio Lujan-Mora 2 1 Department of Informatics and Computer Science, Escuela Politécnica Nacional, 170525 Quito, Ecuador; [email protected] 2 Department of Software and Computing Systems, University of Alicante, 03690 Alicante, Spain; [email protected] * Correspondence: [email protected]; Tel.: +593-02-2976300 Received: 25 November 2019; Accepted: 9 January 2020; Published: 15 January 2020 Abstract: The work presented in this paper is motivated by the acknowledgement that a complete and updated systematic literature review (SLR) that consolidates all the research efforts for Big Data modeling and management is missing. This study answers three research questions. The first question is how the number of published papers about Big Data modeling and management has evolved over time. The second question is whether the research is focused on semi-structured and/or unstructured data and what techniques are applied. Finally, the third question determines what trends and gaps exist according to three key concepts: the data source, the modeling and the database. As result, 36 studies, collected from the most important scientific digital libraries and covering the period between 2010 and 2019, were deemed relevant. Moreover, we present a complete bibliometric analysis in order to provide detailed information about the authors and the publication data in a single document. This SLR reveal very interesting facts. For instance, Entity Relationship and document-oriented are the most researched models at the conceptual and logical abstraction level respectively and MongoDB is the most frequent implementation at the physical.
  • Generic Data Models for Semantic E- Government Interoperability: Literature Review
    106 Electronic Government and Electronic Participation M.F.W.H.A. Janssen et al. (Eds.) © 2014 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License. doi:10.3233/978-1-61499-429-9-106 Generic data models for Semantic e- Government interoperability: Literature Review Katariina Ryhänena,1, Tero Päivärintab, and Pasi Tyrväinenc a Department of Computer Science and Information Systems, University of Jyväskylä, Finland b Luleå University of Technology, Sweden c Agora Center, University of Jyväskylä, Finland Abstract. Interoperability of e-government systems is suggested to increase transparency, efficiency, effectiveness, and customer service in the public sector. Generic data models are often seen as a way for achieving especially semantic interoperability. To assess how the contemporary data models support semantic e- government interoperability, we reviewed literature on data models suggested for the public sector in light of four features: standard modelling language, entity- relationship modelling, vocabulary for data exchange and methodology. The review contributes previous research by introducing a four-feature framework for assessing capability of e-government data models to enhance interoperability and by providing an up-to-date review of the generic data models for this purpose. Keywords. Data Model, Information Model, Interoperability, Public Administration Introduction E-government and electronic governmental services require good information system interoperability, which increases government transparency, efficiency, effectiveness, co-operation and information exchange among governmental organizations [8, 10]. Also, positive effects on service quality for citizens and other stakeholders are mentioned [29, 30].
  • Gellish a Generic Extensible Ontological Language
    Gellish: A Generic Extensible Ontological Language - Design and Application of a Universal Data Structure. Cover illustration: Museum Boymans van Beuningen, Rotterdam, The Netherlands; Pieter Brueghel de Oude, The Tower of Babel (Genesis 11:1-9). At first everyone spoke the same language… Then people said: "Let's build a city with a tower that reaches to the sky! We'll become famous." But the LORD said: "Come on! Let's confuse them by making them speak different languages, then they won't be able to understand each other…" So the people had to stop building the city… That is why the city was called Babel (confusing) - because there the LORD confused the language of the whole world… Doctoral dissertation for the degree of doctor at Delft University of Technology (Technische Universiteit Delft), under the authority of the Rector Magnificus Prof.dr.ir. J.T. Fokkema, to be defended in public before a committee appointed by the Board for Doctorates, on Wednesday 14 September 2005 at 10:30, by Andries Simon Hendrik Paul VAN RENSSEN, mechanical engineer, born in Utrecht. This dissertation has been approved by the promotor: Prof.dr.ir.