Ontologies and Languages for Representing Mathematical Knowledge on the Semantic Web


Christoph Lange
Computer Science, Jacobs University Bremen, Germany
E-mail: [email protected]

Abstract. Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology.

Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge.

Mathematical knowledge comprises logical and functional structures – formulæ, statements, and theories –, a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and (counter-)examples. Our review of approaches to representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics.

Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.

Keywords: mathematics, mathematical knowledge management, ontologies, knowledge representation, formalization, linked data, XML

1. Introduction: Mathematics on the Web – State of the Art and Challenges

A review of the state of the art of mathematics on the Web has to acknowledge Web 1.0 sites that are in day-to-day use: review and abstract services such as Zentralblatt MATH [30] and MathSciNet [35], the arχiv pre-print server [39], libraries of formalized and machine-verified mathematical content such as the Mizar Mathematical Library (MML [9]), and reference works such as the Digital Library of Mathematical Functions (DLMF [3]) or Wolfram MathWorld [8]. These sites have facilitated the access to mathematical knowledge. However, (i) they offer a limited degree of interaction and do not facilitate collaboration, and (ii) the means of automatically retrieving, using, and adaptively presenting knowledge through automated agents are restricted. Concerning the Web in general, problem (i) has been addressed by Web 2.0 applications, and problem (ii) by the Semantic Web. This section reviews to what extent these developments have been adopted for mathematical applications and suggests a new combination of Web 2.0 and Semantic Web to overcome the remaining problems.

1.1. How Working Mathematicians have Embraced Web 2.0 Technology

An increasing number of working mathematicians has recently started to use Web 2.0 technology for collaboratively developing new ideas, but also as a new publication channel for established knowledge. Research blogs and wiki encyclopedias are typical representatives.

1.1.1. Research Blogs and Wiki Encyclopedias

Researchers have found blogs useful to gather feedback about preliminary findings before the traditional peer review. Successful collaborations among mathematicians who did not know each other before have started in blogs and converged into conventional articles [47]. GOWERS, an active blogger, initiated the successful Polymath series, where blogs are the exclusive communication medium for proving theorems in a massive collaborative effort [14,100,50], including the recent collaborative review of a claimed proof of P ≠ NP [15]. Compared to research blogs, the MathOverflow forum [7], where users can post their problems and solutions to others' problems, offers more instant help with smaller problems. By its reputation mechanism, it acts as an agile simulation of the traditional scientific publication and peer review processes.

For evolving ideas that emerged from a blog discussion, or for creating permanent, short, interlinked descriptions of topics, wikis have been found more appropriate. The nLab wiki [28], a companion to the n-Category Café blog [27], is a prominent example for that, and also for the emerging practice of Open Notebook Science, i.e. "making the entire primary record of a research project public", including "failed, less significant, and otherwise unpublished experiments" [188]. The Polymath maintainers have also set up a companion wiki for "collect[ing] pertinent background information which was no longer part of the active 'foreground' of exchanges on the [. ] blog entries" [50]. Finally, where MathOverflow focuses on concrete problems and solutions, the Tricki [21], also initiated by GOWERS, is a wiki repository of general mathematical techniques – reminiscent of a Web 2.0 remake of PÓLYA's classic "How to Solve It" [161].

Wikis that collect existing mathematical knowledge, for educational and general purposes, are more widely known. PlanetMath [159], counting more than 8,000 entries at the time of this writing, is a mathematical encyclopedia. The general-purpose Wikipedia, with 15 million articles in over 250 languages, also covers mathematics [187]. Targeting a general audience, it omits most formal proofs but embeds the pure mathematical knowledge into a wider context, including, e.g., the history of mathematics, biographies of mathematicians, and information about application areas. The lack of proofs is sometimes compensated by linking to the technically similar ProofWiki [17], containing over 2,500 proofs, or to PlanetMath. Finally, Connexions [72], technically driven by a more traditional Content Management System, is an open web repository specialized on courseware. Connexions promotes the contribution of small, reusable course modules – more than 17,000, about 4,000 from mathematics and statistics, and about 6,000 from science and technology – to its content commons, so that the original author, but also others, can flexibly combine them into collections, such as the notes for a particular course.

The wikis mentioned so far have been set up from scratch, hardly reusing content from existing knowledge bases, but maintainers of established knowledge bases are also starting to employ Web 2.0 frontends – for example the recently developed prototypical wiki frontend for the Mizar Mathematical Library (MML) [178], a large library of formalized and machine-verified mathematical content. The wiki intends to support common workflows in enhancing and maintaining the MML and thus to disburden the human library committee.

1.1.2. Critique – Little Reuse, Lack of Services

Web 2.0 sites facilitate collaboration but still require a massive investment of manpower for compiling a knowledge collection. Machine-supported intelligent knowledge reuse, e.g. from other knowledge collections on the Web, does not take place. Different knowledge bases are technically separated from each other by using document formats that are merely suitable for knowledge presentation but not for representation, such as XHTML with LaTeX formulæ. The only way of referring to other knowledge bases is an untyped hyperlink. The proof techniques collected in the Tricki cannot be automatically applied to a problem developed in a research blog, as neither of them is sufficiently formalized.

Intelligent information retrieval, a prerequisite for finding knowledge to reuse and to apply, is poorly supported. For example, Wikipedia states the Pythagorean theorem as a² + b² = c² and files it into the categories "Articles containing proofs" and "Mathematical theorems" [189]. The LaTeX representation of the formulæ does not support search by functional structure. Putting aside the fact that Wikipedia cannot search formulæ at all, a search for equivalent expressions such as x² + y² = z² or c = √(a² + b²) would not yield the theorem, unless they explicitly occur in the article. From the categorization it is neither clear for a machine (albeit very likely for a human) whether the article contains a proof of that theorem, nor whether it is correct. Similarly, […]

[…] mathematical textbooks and disseminate new mathematical results; and (iv) librarians and mathematicians who catalog and organize mathematical knowledge" [93]. They hoped that Semantic Web technologies would help to address their challenges. This seemed technically feasible, particularly due to the common foundation of XML and URIs [140].
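To make the formula-search problem concrete, here is a minimal sketch in plain Python that contrasts the opaque LaTeX string with a structural encoding whose operator tree can be compared up to renaming of variables. The element names are borrowed from Content MathML, but the comparison logic is only an illustration of the idea, not how Wikipedia or any of the cited systems actually implement formula search.

```python
import xml.etree.ElementTree as ET

# The theorem as an opaque LaTeX string vs. a structural, Content-MathML-style tree.
latex = r"a^2 + b^2 = c^2"

def pythagoras(x, y, z):
    """Content-MathML-style encoding of x^2 + y^2 = z^2 (illustrative only)."""
    return f"""
    <apply><eq/>
      <apply><plus/>
        <apply><power/><ci>{x}</ci><cn>2</cn></apply>
        <apply><power/><ci>{y}</ci><cn>2</cn></apply>
      </apply>
      <apply><power/><ci>{z}</ci><cn>2</cn></apply>
    </apply>"""

def skeleton(xml_string):
    """Reduce a formula to its operator tree, treating all variables as equal."""
    def walk(node):
        if node.tag == "ci":                     # variable: name is irrelevant
            return ("ci",)
        return (node.tag,
                node.text if node.tag == "cn" else None,
                tuple(walk(child) for child in node))
    return walk(ET.fromstring(xml_string))

stored_formula = pythagoras("a", "b", "c")       # what the article states
query = pythagoras("x", "y", "z")                # what a user searches for

print(skeleton(stored_formula) == skeleton(query))   # True: same functional structure
print(latex == r"x^2 + y^2 = z^2")                   # False: mere string mismatch
```

The query x² + y² = z² matches the stored a² + b² = c² because the operator trees coincide, whereas the two LaTeX strings do not even compare equal – which is exactly the gap the critique above points at.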
Recommended publications
  • Documents with Flexible Notation Contexts as Interfaces to Mathematical Knowledge
    Documents with Flexible Notation Contexts as Interfaces to Mathematical Knowledge. Michael Kohlhase, Christine Müller, Normen Müller. Computer Science, Jacobs University Bremen, Germany. m.kohlhase/c.mueller/[email protected]
    In this paper we explore the use of documents as interfaces to mathematical knowledge. We propose an augmented document model that facilitates the explication of a document's notation context, i.e. the selection of appropriate presentations for all symbols in the document. By separating the notation context from the structure and content of a document, we aim at identifying the author's selection of notation, i.e. his notation practices. This facilitates both the identification of communities of practice (COP) that share specific notation preferences and, conversely, the adaptation of documents to the notation preferences of identified COPs, e.g. groups of readers and co-authors. Furthermore, explicating a document's notation context allows for referencing and, particularly, reusing contexts, which reduces the author's workload during the authoring process.
    1 Introduction. In this paper we will re-evaluate the concept of a document in the age of digital media. Since the development of writing, documents have become one of the primary means of passing on knowledge. If we take the view that knowledge is a state of the (human) mind where concepts and experiences are highly interlinked, and interpret "documents" broadly to include spoken language or action sequences played out, then we can see that documents are almost the only means to communicate knowledge: quite literally, documents are interfaces to knowledge. Here, we will look at mathematical documents as interfaces to mathematical knowledge, since they have special complexities that reveal the underlying nature of documents and knowledge.
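As a rough, purely hypothetical illustration of the separation this abstract describes – symbolic content on one side, a notation context mapping each symbol to a presentation on the other – the sketch below renders one piece of content under two notation practices. The tuple encoding and the templates are invented for illustration; they are not the authors' document model or format.

```python
# Hypothetical sketch: one piece of symbolic content, two notation contexts.
content = ("binomial", "n", "k")                   # apply the "binomial" symbol to n and k

context_a = {"binomial": "{{{0} \\choose {1}}}"}   # renders as {n \choose k}
context_b = {"binomial": "C_{{{1}}}^{{{0}}}"}      # renders as C_{k}^{n}

def render(expr, context):
    """Recursively render content using a context's presentation templates."""
    if isinstance(expr, tuple):
        head, *args = expr
        return context[head].format(*(render(a, context) for a in args))
    return expr                                    # atoms (variable names) stay as-is

print(render(content, context_a))                  # notation preferred by one community
print(render(content, context_b))                  # the same content for another community
```

Because the presentation lives in the context rather than in the document, the same content can be adapted to the notation preferences of different communities of practice, which is the kind of adaptation the abstract refers to.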
  • Download Slides
    "A platform for all that we know" – Savas Parastatidis, http://savas.me (@savasp). Slide outline: the transition from the web to apps; an increasing focus on information (and knowledge); the rise of personal digital assistants; the importance of near-real-time processing.
    Today: computers are great tools for storing, managing and indexing huge amounts of data. Example: Google and Microsoft both have copies of the entire web (and more) for indexing purposes. Tomorrow: beyond acquisition, discovery, aggregation and organization of the world's information, we would like computers to also help with automatic correlation, analysis and interpretation – knowledge inference. Progression: data, information, knowledge, intelligence, wisdom. Representative systems: expert systems, Watson, Freebase, WolframAlpha, RDBMS, Google Now, web indexing.
    Data is symbols (bits, numbers, characters). Information adds meaning to data through the introduction of relationships – it answers questions such as "who", "what", "where", and "when". Knowledge is a description of how the world works – it is the application of data and information in order to answer "how" questions. (G. Bellinger, D. Castro, and A. Mills, "Data, Information, Knowledge, and Wisdom," Inform., pp. 1–4, 2004.) The web as the data platform, the information platform, and the knowledge platform – a foundation for new experiences. "Wisdom is not a product of schooling but of the lifelong attempt to acquire it." Representative examples: WolframAlpha, Watson.
  • Ontologies and Languages for Representing Mathematical Knowledge on the Semantic Web
    Ontologies and Languages for Representing Mathematical Knowledge on the Semantic Web. Editor(s): Aldo Gangemi, ISTC-CNR Rome, Italy. Solicited review(s): Claudio Sacerdoti Coen, University of Bologna, Italy; Alexandre Passant, DERI, National University of Galway, Ireland; Aldo Gangemi, ISTC-CNR Rome, Italy.
    Christoph Lange, FB 3 (Mathematics and Computer Science), University of Bremen, Germany, and Computer Science, Jacobs University Bremen, Germany. E-mail: [email protected]
    Abstract. Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas of mathematics, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. […] mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.
    Keywords: mathematics, mathematical knowledge management, ontologies, knowledge representation, formalization, linked data, XML
    1. Introduction: Mathematics on the Web – State of the Art and Challenges
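The claim that OpenMath can be integrated with RDF can be pictured with a tiny Linked Data sketch. The OpenMath content-dictionary URI below follows the commonly used form for the arith1 plus symbol, while the ex: vocabulary and resource names are invented purely for illustration; this is a sketch of the general mechanism (URIs plus RDF triples), not a vocabulary proposed by the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Toy Linked Data sketch: a document about the Pythagorean theorem annotated
# with the OpenMath symbol it uses. The ex: vocabulary is made up.
EX = Namespace("http://example.org/vocab#")

g = Graph()
article = URIRef("http://example.org/articles/PythagoreanTheorem")
om_plus = URIRef("http://www.openmath.org/cd/arith1#plus")   # OpenMath arith1 'plus'

g.add((article, RDF.type, EX.MathematicalStatement))
g.add((article, EX.usesSymbol, om_plus))
g.add((article, EX.statesFormula, Literal("a^2 + b^2 = c^2")))

for s, p, o in g:          # print the resulting triples
    print(s, p, o)
```

Once mathematical resources carry typed links like these instead of untyped hyperlinks, they can be interlinked with the existing Web of Data in the way the roadmap envisages.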
  • DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization
    DataTone: Managing Ambiguity in Natural Language Interfaces for Data Visualization. Tong Gao (University of Michigan, Ann Arbor, MI), Mira Dontcheva (Adobe Research, San Francisco, CA), Eytan Adar (University of Michigan), Zhicheng Liu (Adobe Research), Karrie Karahalios (University of Illinois, Urbana-Champaign, IL). {gaotong,eytanadar}@umich.edu, {mirad,zhliu}@adobe.com, [email protected]
    ABSTRACT. Answering questions with data is a difficult and time-consuming process. Visual dashboards and templates make it easy to get started, but asking more sophisticated questions often requires learning a tool designed for expert analysts. Natural language interaction allows users to ask questions directly in complex programs without having to learn how to use an interface. However, natural language is often ambiguous. In this work we propose a mixed-initiative approach to managing ambiguity in natural language interfaces for data visualization. We model ambiguity throughout the process of turning a natural language query into a visualization and use algorithmic disambiguation coupled with interactive ambiguity widgets.
    […] to be both flexible and easy to use. General purpose spreadsheet tools, such as Microsoft Excel, focus largely on offering rich data transformation operations. Visualizations are merely output to the calculations in the spreadsheet. Asking a "visual question" requires users to translate their questions into operations on the spreadsheet rather than operations on the visualization. In contrast, visual analysis tools, such as Tableau, create visualizations automatically based on variables of interest, allowing users to ask questions interactively through the visualizations. However, because these tools are often intended for domain specialists, they have complex interfaces and a steep learning curve. Natural language interaction offers a compelling complement […]
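A loose, hypothetical sketch of the mixed-initiative idea (not DataTone's actual implementation): the system resolves each query term to a default interpretation, but keeps the alternatives visible as an "ambiguity widget" the user can override. The column names and matching logic below are invented for illustration.

```python
# Hypothetical sketch of algorithmic disambiguation plus "ambiguity widgets".
columns = ["medal_count", "medal_type", "athlete", "year"]

def candidate_columns(term):
    """All data columns that could plausibly match a query term."""
    return [c for c in columns if term.lower() in c.lower()]

def interpret(query):
    """Map each query term to (default interpretation, alternatives)."""
    decisions = {}
    for term in query.split():
        matches = candidate_columns(term)
        if matches:
            decisions[term] = (matches[0], matches)    # ambiguous if more than one
    return decisions

for term, (default, options) in interpret("show medal by year").items():
    marker = "ambiguity widget" if len(options) > 1 else "unambiguous"
    print(f"{term!r} -> {default} ({marker}; alternatives: {options})")
```

The point of the widget is that the algorithm's default choice stays editable, so the user corrects the interpretation instead of rephrasing the whole query.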
  • Editing Knowledge in Large Mathematical Corpora. A Case Study with Semantic LaTeX (sTeX)
    Editing Knowledge in Large Mathematical Corpora. A Case Study with Semantic LaTeX (sTeX). Constantin Jucovschi, Computer Science, Jacobs University Bremen. Supervisor: Prof. Dr. Michael Kohlhase. arXiv:1010.5935v1 [cs.DL] 28 Oct 2010. A thesis submitted for the degree of Master of Science, August 2010.
    Declaration. The research subsumed in this thesis has been conducted under the supervision of Prof. Dr. Michael Kohlhase from Jacobs University Bremen. All material presented in this Master Thesis is my own, unless specifically stated. I, Constantin Jucovschi, hereby declare that, to the best of my knowledge, the research presented in this Master Thesis contains original and independent results, and it has not been submitted elsewhere for the conferral of a degree. Constantin Jucovschi, Bremen, August 23rd, 2010.
    Acknowledgements. I would like to thank my supervisor, Prof. Dr. Michael Kohlhase, for the opportunity to do great research with him, for the guidance I have received for the past 3 years and for his genuine interest in finding the topic I "tick for". I want to thank him for the great deal of support and understanding he showed when it came to more personal matters. I could not be happier to have Prof. Dr. Kohlhase as supervisor for my doctoral studies. I would also like to thank the members of the KWARC group for creating such a motivating and at the same time pleasant research environment. Thank you for all the questions, hints and discussions – they helped a lot; and special thanks for the ability to interact with you also outside the academic environment. I would like to express my appreciation and gratitude to my family, for their long-term engagement to support and encourage me in all my beginnings.
  • A Heuristically Modified FP-Tree for Ontology Learning with Applications in Education (arXiv:1910.13561v1 [cs.LG], 29 Oct 2019)
    A Heuristically Modified FP-Tree for Ontology Learning with Applications in Education. Safwan Shatnawi, Mohamed Medhat Gaber, Mihaela Cocea.
    Abstract. We propose a heuristically modified FP-Tree for ontology learning from text. Unlike previous research, for concept extraction we use a regular expression parser approach widely adopted in compiler construction, i.e., deterministic finite automata (DFA). Thus, the concepts are extracted from unstructured documents. For ontology learning, we use a frequent pattern mining approach and employ a rule mining heuristic function to enhance its quality. This process does not rely on predefined lexico-syntactic patterns and is therefore applicable to different subjects. We employ the ontology in a question-answering system for students' content-related questions. For validation, we used textbook questions/answers and questions from online course forums. Subject experts rated the quality of the system's answers on a subset of questions, and their ratings were used to identify the most appropriate automatic semantic text similarity metric to use as a validation metric for all answers. Latent Semantic Analysis was identified as the closest to the experts' ratings. We compared the use of our ontology with the use of Text2Onto for the question-answering system and found that with our ontology 80% of the questions were answered, while with Text2Onto only 28.4% were answered, thanks to the finer-grained hierarchy our approach is able to produce.
    Keywords: Ontologies · Frequent pattern mining · Ontology learning · Question answering · MOOCs. S. Shatnawi, College of Applied Studies, University of Bahrain, Sakhair Campus, Zallaq, Bahrain. E-mail: [email protected]
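A rough sketch of two ingredients the abstract names – regular-expression (DFA-style) concept extraction from unstructured text, and frequent-pattern counting over co-occurring concepts – shown here with invented toy data. It is not the authors' heuristically modified FP-Tree or their rule-mining heuristic, only a minimal illustration of the general idea.

```python
import re
from itertools import combinations
from collections import Counter

# Toy corpus; concepts and sentences are made up for illustration.
documents = [
    "A binary tree is a tree data structure. Each node of a binary tree has at "
    "most two children. A binary tree node stores a key.",
    "A heap is a tree data structure. A heap node satisfies the heap property.",
]

# Python's re module compiles this pattern into an automaton (the DFA view).
concept_pattern = re.compile(
    r"\b(binary tree|tree data structure|node|heap|children)\b", re.I)

def extract_concepts(sentence):
    return {m.group(0).lower() for m in concept_pattern.finditer(sentence)}

pair_counts = Counter()                          # frequent 2-itemsets of concepts
for doc in documents:
    for sentence in re.split(r"[.!?]", doc):
        concepts = sorted(extract_concepts(sentence))
        pair_counts.update(combinations(concepts, 2))

min_support = 2
frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}
print(frequent)   # e.g. {('binary tree', 'node'): 2} -> candidate ontology relation
```

In the paper, analogous co-occurrence information feeds a modified FP-Tree and a rule-mining heuristic rather than this naive pair counter.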
  • Building Dialogue Structure from Discourse Tree of a Question
    The Workshops of the Thirty-Second AAAI Conference on Artificial Intelligence. Building Dialogue Structure from Discourse Tree of a Question. Boris Galitsky, Oracle Corp., Redwood Shores, CA, USA. [email protected]
    Abstract. We propose a reasoning-based approach to dialogue management for a customer support chatbot. To build a dialogue scenario, we analyze the discourse tree (DT) of an initial query of a customer support dialogue that is frequently complex and multi-sentence. We then enforce what we call a complementarity relation between the DT of the initial query and those of the answers, requests and responses. The chatbot finds answers which are not only relevant by topic but also suitable for a given step of a conversation and match the question by style, argumentation patterns, communication means, experience level and other domain-independent attributes. We evaluate the performance of the proposed algorithm in the car repair domain and observe a 5 to 10% improvement for single and three-step dialogues respectively, in comparison with baseline approaches to dialogue management.
    […] a chatbot's capability to maintain the cohesive flow, style and merits of conversation is an underexplored area. When a question is detailed and includes multiple sentences, there are certain expectations concerning the style of an answer. Although a topical agreement between questions and answers has been extensively addressed, a correspondence in style and suitability for the given step of a dialogue between questions and answers has not been thoroughly explored. In this study we focus on assessment of the cohesiveness of question/answer (Q/A) flow, which is important for chatbots supporting longer conversations. When an answer is in a style disagreement with a question, a user can find this answer inappropriate even when its topical relevance is high.
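As a toy illustration only (not the paper's algorithm), one way to picture the complementarity idea is to score candidate answers by how well the rhetorical relations of their discourse trees cover those of the question's tree. The relation counts below are invented; a real system would obtain discourse trees from an RST parser.

```python
from collections import Counter

# Discourse trees reduced to the multiset of rhetorical relations they contain.
question_dt = Counter({"Elaboration": 2, "Contrast": 1, "Condition": 1})

candidate_answers = {
    "answer_a": Counter({"Elaboration": 2, "Condition": 1}),
    "answer_b": Counter({"Joint": 3}),
}

def coverage(question, answer):
    """Fraction of the question's relation occurrences matched by the answer."""
    matched = sum(min(n, answer[rel]) for rel, n in question.items())
    return matched / sum(question.values())

for name, dt in candidate_answers.items():
    print(name, round(coverage(question_dt, dt), 2))
# answer_a scores higher, i.e. its discourse structure better matches the question.
```

This crude coverage score stands in for the richer, style- and step-aware matching the abstract describes.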
  • Wolfram|Alpha, A New Kind of Science
    Wolfram|Alpha, A New Kind of Science. By Bruce Walters, April 18, 2011. Research paper for Spring 2012, INFSY 556 Data Warehousing, Professor Rhoda Joseph, Ph.D., Penn State University at Harrisburg.
    Abstract. The core mission of Wolfram|Alpha is "to take expert-level knowledge, and create a system that can apply it automatically whenever and wherever it's needed," says Stephen Wolfram, the technology's inventor (Wolfram, 2009-02). This paper examines Wolfram|Alpha in its present form.
    Introduction. As the internet became available to the world mass population, British computer scientist Tim Berners-Lee provided "hypertext" as a means for its general consumption, and coined the phrase World Wide Web. The World Wide Web is often referred to simply as the Web, and Web 1.0 transformed how we communicate. Now, with Web 2.0 firmly entrenched in our being and going with us wherever we go, can 3.0 be far behind? Web 3.0, the semantic web, is a web that endeavors to understand meaning rather than syntactically precise commands (Andersen, 2010). Enter Wolfram|Alpha. Wolfram|Alpha, officially launched in May 2009, is a rapidly evolving "computational search engine," but rather than searching pre-existing documents, it actually computes the answer, every time (Andersen, 2010). Wolfram|Alpha relies on a knowledge base of data in order to perform these computations, which, despite efforts to date, is still only a fraction of the world's knowledge. Scientist, author, and inventor Stephen Wolfram refers to the world's knowledge this way: "It's a sad but true fact that most data that's generated or collected, even with considerable effort, never gets any kind of serious analysis" (Wolfram, 2009-02).
  • The Problem of Extracting the Knowledge of Experts from the Perspective of Experimental Psychology
    AI Magazine, Volume 8, Number 2 (1987) (© AAAI). The Problem of Extracting the Knowledge of Experts from the Perspective of Experimental Psychology. Robert R. Hoffman.
    Abstract. The first step in the development of an expert system is the extraction and characterization of the knowledge and skills of an expert. This step is widely regarded as the major bottleneck in the system development process. To assist knowledge engineers and others who might be interested in the development of an expert system, I offer (1) a working classification of methods for extracting an expert's knowledge, (2) some ideas about the types of data that the methods yield, and (3) a set of criteria by which the methods can be compared relative to the needs of the system develop[…]
    For perceptual and conceptual problems requiring the skills of an expert, expertise is rare, the expert's knowledge is extremely detailed and interconnected, and our scientific understanding of the expert's perceptual and conceptual processes is limited. Research on the skills of experts in any domain affords an excellent opportunity for both basic and practical experimentation. My investigations fall on the experimental psychology side of expert system engineering, specifically the prob[…]
    […] and represent their special knowledge […] "[It] may take several months of the expert's time and even more of the system builder's" (p. 264). Three years later, Duda and Shortliffe (1983) echoed this lament: "The identification and encoding of knowledge is one of the most complex and arduous tasks encountered in the construction of an expert system" (p. 265). Some common phrases that occur in the literature are "knowledge acquisition is the time-critical component" (Freiling et al. […]
  • Towards Unsupervised Knowledge Extraction
    Towards Unsupervised Knowledge Extraction. Dorothea Tsatsou, Konstantinos Karageorgos, Anastasios Dimou, Javier Carbo, Jose M. Molina and Petros Daras. Information Technologies Institute (ITI), Centre for Research and Technology Hellas (CERTH), 6th km Charilaou-Thermi Road, 57001, Thermi, Thessaloniki, Greece; Computer Science Department, University Carlos III of Madrid, Av. Universidad 30, Leganes, Madrid 28911, Spain.
    Abstract. Integration of symbolic and sub-symbolic approaches is rapidly emerging as an Artificial Intelligence (AI) paradigm. This paper presents a proof-of-concept approach towards an unsupervised learning method, based on Restricted Boltzmann Machines (RBMs), for extracting semantic associations among prominent entities within data. Validation of the approach is performed on two datasets that connect language and vision, namely Visual Genome and GQA. A methodology to formally structure the extracted knowledge for subsequent use through reasoning engines is also offered.
    Keywords: knowledge extraction, unsupervised learning, spectral analysis, formal knowledge representation, symbolic AI, sub-symbolic AI, neuro-symbolic integration
    1. Introduction. Nowadays, artificial intelligence (AI) is linked mostly to machine learning (ML) solutions, enabling machines to learn from data and subsequently make predictions based on unidentified patterns in data, taking advantage of neural network (NN)-based methods. However, AI is still far from encompassing human-like cognitive capacities, which include not only learning but also understanding, abstracting, planning, representing knowledge and logically reasoning over it. On the other hand, Knowledge Representation and Reasoning (KRR) techniques allow machines to reason about structured knowledge, in order to perform human-like complex problem solving and decision-making. AI foundations propose that all the aforementioned cognitive processes (learning, abstracting, representing, reasoning) need to be integrated under a unified strategy, in order to advance to […]
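For readers unfamiliar with the model family, here is a minimal, generic Restricted Boltzmann Machine trained with one step of contrastive divergence (CD-1) on invented binary co-occurrence data. It only illustrates what an RBM is; it is not the paper's actual extraction pipeline, its datasets, or its formalization step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: each row marks which entities co-occur in one observation.
data = rng.integers(0, 2, size=(64, 6)).astype(float)

n_visible, n_hidden = data.shape[1], 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)
lr = 0.1

for epoch in range(200):
    v0 = data
    p_h0 = sigmoid(v0 @ W + b_h)                        # hidden activation probabilities
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                      # reconstruct visibles
    p_h1 = sigmoid(p_v1 @ W + b_h)                      # re-infer hiddens
    # CD-1 gradient estimates
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Strong positive weights linking two visible units to the same hidden unit can
# be read as a candidate association between the corresponding entities.
print(np.round(W, 2))
```

The paper's contribution lies in how such associations are extracted from vision-language data and then formally structured for reasoning engines, which this generic sketch does not cover.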
  • RIO: An AI Based Virtual Assistant
    International Journal of Computer Applications (0975 – 8887), Volume 180, No. 45, May 2018. RIO: An AI Based Virtual Assistant. Samruddhi S. Sawant, Abhinav A. Bapat, Komal K. Sheth, Swapnadip B. Kumbhar, and Rahul M. Samant (Professor) – Department of Information Technology, NBN Sinhgad Technical Institute Campus, Pune, India.
    Abstract. In this world of corporate companies, a lot of importance is being given to Human Resources. Human Capital Management (HCM) is an approach to Human Resource Management that connotes viewing employees as assets that can be invested in and managed to maximize business value. In this paper, we build a chatbot to manage all the functions of HRM, namely core HR, talent management and workforce management. A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact […]
    […] benefitted by such virtual assistants. The rise of messaging apps, the explosion of the app ecosystem, advancements in artificial intelligence (AI) and cognitive technologies, a fascination with conversational user interfaces and a wider reach of automation are all driving the chatbot trend. A chatbot can be deployed over various platforms, namely Facebook Messenger, Slack, Skype, Kik, etc. The most preferred platform among businesses seems to be Facebook Messenger (92%). Around 80% of businesses would like to host their chatbot on their own website.
  • Ontology and Information Systems
    Ontology and Information Systems. Barry Smith.
    Philosophical Ontology. Ontology as a branch of philosophy is the science of what is, of the kinds and structures of objects, properties, events, processes and relations in every area of reality. 'Ontology' is often used by philosophers as a synonym for 'metaphysics' (literally: 'what comes after the Physics'), a term which was used by early students of Aristotle to refer to what Aristotle himself called 'first philosophy'. The term 'ontology' (or ontologia) was itself coined in 1613, independently, by two philosophers, Rudolf Göckel (Goclenius), in his Lexicon philosophicum, and Jacob Lorhard (Lorhardus), in his Theatrum philosophicum. The first occurrence in English recorded by the OED appears in Bailey's dictionary of 1721, which defines ontology as 'an Account of being in the Abstract'.
    Methods and Goals of Philosophical Ontology. The methods of philosophical ontology are the methods of philosophy in general. They include the development of theories of wider or narrower scope and the testing and refinement of such theories by measuring them up, either against difficult […]
    Footnote: This paper is based upon work supported by the National Science Foundation under Grant No. BCS-9975557 ("Ontology and Geographic Categories") and by the Alexander von Humboldt Foundation under the auspices of its Wolfgang Paul Program. Thanks go to Thomas Bittner, Olivier Bodenreider, Anita Burgun, Charles Dement, Andrew Frank, Angelika Franzke, Wolfgang Grassl, Pierre Grenon, Nicola Guarino, Patrick Hayes, Kathleen Hornsby, Ingvar Johansson, Fritz Lehmann, Chris Menzel, Kevin Mulligan, Chris Partridge, David W. Smith, William Rapaport, Daniel von Wachter, Chris Welty and Graham White for helpful comments.