Cyc: a Midterm Report

AI Magazine Volume 11 Number 3 (1990) (© AAAI)

After explicating the need for a large commonsense knowledge base spanning human consensus knowledge, we report on many of the lessons learned over the first five years of attempting its construction. We have come a long way in terms of methodology, representation language, techniques for efficient inferencing, the ontology of the knowledge base, and the environment and infrastructure in which the knowledge base is being built. We describe the evolution of Cyc and its current state and close with a look at our plans and expectations for the coming five years, including an argument for how and why the project might conclude at the end of this time.

We have come a long way in this time, and this article presents some of the lessons learned and a description of where we are and briefly discusses our plans for the coming five years. We chose to focus on technical issues in representation, inference, and ontology rather than infrastructure issues such as user interfaces, the training of knowledge enterers, or existing collaborations and applications of Cyc.

The majority of work in knowledge representation has dealt with the technicalities of relating predicate calculus to other formalisms and with the details of various schemes for default reasoning. There has almost been an aversion to addressing the problems that arise in actually representing large bodies of knowledge with content. However, deep, important issues must be addressed if we are to ever have a large intelligent knowledge-based program: What ontological categories would make up an adequate set for carving up the universe? How are they related? What are the important facts and heuristics most humans today know about solid objects? And so on. In short, we must bite the bullet.

We don't believe there is any shortcut to being intelligent, any yet-to-be-discovered Maxwell's equations of thought, any AI RISC architecture that will yield vast amounts of problem-solving power. Although issues such as architecture are important, no powerful formalism can obviate the need for a lot of knowledge.

By knowledge, we don't just mean dry, almanac-like or highly domain-specific facts. Rather, most of what we need to know to get by in the real world is prescientific (knowledge that is too commonsensical to be included in reference books; for example, animals live for a single solid interval of time, nothing can be in two places at once, animals don't like pain), dynamic (scripts and rules of thumb for solving problems), and metaknowledge (how to fill in gaps in the knowledge base, how to keep it organized, how to monitor and switch among problem-solving methods, and so on).
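To give the flavor of one such assertion, the "nothing can be in two places at once" constraint might be axiomatized roughly as follows; this is a sketch, and the locatedAt predicate is our illustrative vocabulary, not Cyc's:

$$\forall x\,\forall p_1\,\forall p_2\,\forall t\;\bigl(\mathit{locatedAt}(x,p_1,t)\,\land\,\mathit{locatedAt}(x,p_2,t)\bigr)\;\rightarrow\;p_1 = p_2$$

In words: if an object x is located at place p1 and at place p2 at the same time t, then p1 and p2 are the same place. Entering many thousands of such axioms, along with the concepts they mention, is the manual effort described next.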
Perhaps the hardest truth to face, one that AI has been trying to wriggle out of for 34 years, is that there is probably no elegant, effortless way to obtain this immense knowledge base. Rather, the bulk of the effort must (at least initially) be manual entry of assertion after assertion.

Half a decade ago, we introduced (Lenat, Prakash, and Shepherd 1986) our research plans for Cyc, a decade-long, two person-century effort we had recently begun at MCC to manually construct such a knowledge base.

The Evolution of the Cyc Methodology

For two decades, AI research has been polarized into neats and scruffies (roughly corresponding to theoretical versus experimental approaches). After an initial strongly scruffy approach, we seem to have settled on a middle ground that combines the insights and power of each.

On the one hand, we realized that a number of mistakes made in the project's initial years would have been avoided by a more formal approach (especially in regard to the construction of the representation language). We also realized that philosophy had a lot to contribute, especially when it came to deciding on issues of ontology (Quine 1969).

On the other hand, however, there are a number of areas where we found the empirical approach more fruitful. The areas are typically still open research issues for the formalists or have not even been addressed by them, for example, codifying the most fundamental types of goals that people have.

Therefore, our approach is to largely carry out empirical research and be driven by looking at lots of examples but to keep this work supported on a strong theoretical foundation. Further, we have been driven to adopt a kind of tool-kit orientation: Assemble a collection of partial solutions to the various difficult problems Cyc has to handle, and add new tools as required. That is, for a number of problems (time, causality, inference, user interface, and so on), there aren't any known general-purpose, simple, efficient solutions, but we can make do with a set of modules that enable us to easily handle the most common cases.

The bulk of the effort is currently devoted to identifying, formalizing, and entering microtheories of various topics (such as shopping, containers, emotions). We follow a process that begins with a statement, in English, of the microtheory. On the way to our goal, an axiomatization of the microtheory, we identify and make precise those Cyc concepts necessary to state the knowledge in axiomatic form. To test that the topic has been adequately covered, stories that deal with the topic are represented in Cyc; we then pose questions that any reader ought to be able to answer after having read the story.

One of the unfortunate myths about Cyc is that its aim is to be a sort of electronic encyclopedia. We hope that this article lays this misconception to rest. If anything, Cyc is the complement of an encyclopedia. The aim is that one day Cyc ought to contain enough commonsense knowledge to support natural language understanding capabilities that enable it to read through and assimilate any encyclopedia article, that is, to be able to answer the sorts of questions that you or I could after having just read any article, questions that neither you nor I nor Cyc could be expected to answer beforehand.

Our hope and expectation is that around the mid-1990s, we can transition more and more from manual entry of assertions to (semi-)automated entry by reading online texts; the role of humans in the project would now transition from the brain surgeons to tutors, answering Cyc's questions about the difficult sentences and passages. This radical change is what it means for Cyc to have a decade-long projected lifespan.

The Evolution of the Representation Language

CycL is the language in which the Cyc knowledge base is encoded. In 1984, our representation was little more than frames. Although a significant fraction of knowledge can be conveniently handled using just frames, this approach soon proved awkward or downright inadequate for expressing various assertions we wanted to make: disjunctions, inequalities, existentially quantified statements, metalevel propositions about sentences, and so on. At least occasionally, therefore, we required a framework of greater expressive power.

As late as 1987, the only inferencing in Cyc was done using these few mechanisms: inheritance along instances (IS-A) links, rigid toCompute definitions of one slot in terms of others, plus the running of demons (opaque lumps of Lisp code) and expert system–like production rules. The results were inefficiencies (because of the overuse of the most general mechanisms), abstraction breaking (often resorting to raw Lisp code escapes), and inadequacies (for example, given a rule "If A Then B" and ¬B, Cyc couldn't conclude ¬A).
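To make that last inadequacy concrete, here is a minimal sketch, in Python rather than the Lisp Cyc was built in, of why a purely forward-chaining production system of this kind never performs modus tollens; the encoding of rules as antecedent–consequent pairs is our illustration, not Cyc's actual machinery:

```python
# Minimal sketch of a forward-chaining production system: rules fire
# only from antecedent to consequent, so asserting the negation of a
# consequent never triggers anything.

def forward_chain(facts, rules):
    """Apply (antecedent, consequent) rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

rules = [("A", "B")]                                 # If A Then B
print("not A" in forward_chain({"not B"}, rules))    # False: no modus tollens

# The gap can only be papered over by entering the contrapositive as a
# second, explicit rule -- which these mechanisms did not do automatically:
rules.append(("not B", "not A"))
print("not A" in forward_chain({"not B"}, rules))    # True
```

The only remedy inside such a system is to hand-assert the contrapositive of every implication, one more instance of the per-case patching described next.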
We kept modifying and tweaking such mechanisms, and often, this method forced us to go back and redo parts of the knowledge base so that they corresponded to the new way the inference engine worked. As the size of the knowledge base increased, this process became intolerable. We came to realize that having a clean semantics for the knowledge base was vital: declaratively expressing the meaning of inheritance, TheSetOf, default rules, automatic classification, and so on, so that we wouldn't have to change the knowledge base when we altered the implementation of one of the mechanisms.

For efficiency's sake, we developed dozens of specialized inference procedures, with special truth maintenance system–related (TMS-related) bookkeeping facilities for each (Doyle 1987). Then, to recoup usability, we developed a mechanical translator so that one can now input general predicate calculus–like assertions, and Cyc can convert them from this epistemological level into the form required by these efficient heuristic-level, special-purpose mechanisms (see The Current State of the Representation Language).

Originally, Cyc handled defaults in an ad hoc and frequently inadequate way. In the last two years, we have moved to a powerful and principled way of handling them. As we discuss in the section Epistemological Level and Default Reasoning, Cyc constructs and compares arguments for and against a proposition, using explicit rules to decide when an argument is invalid or when one argument is to be preferred over another.

Early on, we allowed each assertion in the knowledge base to have a numeric certainty factor (cf), but this approach led to its own set of increasingly severe difficulties. For example, one knowledge enterer might assert A and assert B and assign them cfs of 95 and…
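A minimal sketch of the argument-comparison idea follows; the Argument record and the single specificity-based preference criterion are assumptions made for this example, not the actual CycL machinery, whose invalidation and preference rules are richer:

```python
# Illustrative sketch of argument-based default handling: gather the
# arguments for and against a proposition, then apply an explicit
# preference rule (here: the more specific rule's argument wins).

from dataclasses import dataclass

@dataclass
class Argument:
    conclusion: str      # e.g., "flies(tweety)" or "not flies(tweety)"
    rule: str            # the default rule the argument rests on
    specificity: int     # how specific the rule's antecedent is

def adjudicate(proposition, arguments):
    """Weigh the arguments for and against a proposition."""
    pro = [a for a in arguments if a.conclusion == proposition]
    con = [a for a in arguments if a.conclusion == "not " + proposition]
    if not pro and not con:
        return "unknown"
    best_pro = max((a.specificity for a in pro), default=-1)
    best_con = max((a.specificity for a in con), default=-1)
    if best_pro == best_con:
        return "ambiguous"        # neither argument is preferred
    return proposition if best_pro > best_con else "not " + proposition

# The classic default-reasoning example: birds fly; penguins are birds
# but do not fly. The more specific rule's argument is preferred.
args = [
    Argument("flies(tweety)", "birds typically fly", specificity=1),
    Argument("not flies(tweety)", "penguins typically do not fly", specificity=2),
]
print(adjudicate("flies(tweety)", args))    # -> not flies(tweety)
```

Because a verdict is backed by whole arguments rather than by arithmetic on attached numbers, a later assertion can defeat a supporting argument itself, and the TMS-style bookkeeping can then retract whatever depended on it; nothing analogous is available when two enterers' bare certainty factors must simply be reconciled.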