Rough Set & Riemannian Covariance Matrix Theory for Mining the Multidimensionality of Artificial Consciousness


Rough Set & Riemannian Covariance Matrix Theory for Mining the Multidimensionality of Artificial Consciousness

Rory Lewis
Department of Computer Science, University of Colorado Colorado Springs

ABSTRACT

This paper presents a means to analyze the multidimensionality of human consciousness as it interacts with the brain by utilizing Rough Set Theory and Riemannian Covariance Matrices. We mathematically define the infantile state of a robot's operating system running artificial consciousness, which operates mutually exclusively to the operating system for its AI and locomotor functions.

CCS CONCEPTS

• Theory of computation → Semantics and reasoning; • Mathematics of computing → Information theory.

KEYWORDS

Rough Sets, Riemannian Theory, Artificial Consciousness.

ACM Reference Format:
Rory Lewis. 2020. Rough Set & Riemannian Covariance Matrix Theory for Mining the Multidimensionality of Artificial Consciousness. In The 10th International Conference on Web Intelligence, Mining and Semantics (WIMS 2020), June 30-July 3, 2020, Biarritz, France. ACM, Biarritz, France, 4 pages. https://doi.org/10.1145/3405962.3405974

1 INTRODUCTION

The 17th-century French philosopher René Descartes postulated that "I think, therefore I am," inferring that the mere act of thinking about one's existence proves there is someone there doing the thinking [4]. Nowadays, there are many established theories of hypothetical self-consciousness phenomenology in the domains of philosophy, psychopathology, and neuroscience [9]. Even though it is generally accepted that human consciousness is separate from human intellect, when pressed, humans disagree as to where, when, or if the brain and consciousness part ways. However, most advanced societies mandate that no matter how brilliant or intellectually challenged a person is, each soul is afforded an equality of being. Recently, the author spent a sabbatical working on classified artificial consciousness at the United States Pentagon & Department of Defense (DoD). During this period, the research team studied a plurality of criteria inherent in humans' brains versus humans' consciousness. These criteria were typically associated with a specific scenario. One publicly available scenario, being researched by artificial consciousness military laboratories in the US, Russia, and China, is as follows:

    A mother is standing in front of her family's house, which has been burning for some time. Her brain has calculated that the burning roof will collapse at any second. She hears her two infant children screaming out for her from within the house. She yearns to rescue them, but her brain is telling her she will die—the roof is about to collapse. The mother, of course, overrides the obvious mandate delivered by her brain, runs into the house, grabs both children, and then runs out. Put another way, the mother's consciousness not only overrode her brain, it ordered the brain to move all necessary muscles in her body to immediately rescue her children.

Similarly, a military humanoid may risk its $20M body to rescue a wounded human. To do this, we code the operating system (OS) running its artificial consciousness to "roughly" assess when to override the OS running its artificial intelligence (AI). The race between the unclassified Russian FEDOR Skybot F-850 and the US' Atlas humanoids has captivated the public and culminated in 2019 with FEDOR working on the International Space Station [5].

Consider that, at the top-secret levels of the Chinese, Russian, and US militaries, there exist incredibly sophisticated, lethal humanoids that can drive cars, pick locks, shoot weapons, parachute from Low Earth Orbit, and recharge themselves. But even if Russia were able to avoid US Space Intelligence and drop dozens of "hypothetical" military humanoids into a US city, there is a fatal problem. No matter how many uniforms and weapons of the enemy (i.e., US citizens, soldiers, and AI military humanoids) are in their database, there will always be unknowns with which to contend. For example, when a Russian humanoid sees humans in clothing not in its database, it would not know whether to shoot or greet them, pick a lock and hide, or drive a car and run away. This is because (i) it cannot realize "I know I don't know," and (ii) it cannot interpret human "sentient feelings and actions" in order to discern who they probably are. In fact, there are millions of sentient, common-sense "human" abilities they can neither perform nor duplicate. Therefore, there must be a Russian human controller, remotely connected to each humanoid, in order to make human decisions. By disabling their means of remote communication, the "hypothetical" Russian humanoids would be rendered useless.
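The "I know I don't know" capability is exactly what Rough Set Theory formalizes: a target concept is approximated from below by the equivalence classes of observations that certainly belong to it, from above by those that possibly belong to it, and the boundary region between the two approximations is the set of known unknowns. The following minimal Python sketch illustrates the idea; the observation table and labels are hypothetical toy data, not from the paper.

```python
# Rough-set lower/upper approximation as a formalization of "I know I don't know".
# The observations and labels below are toy assumptions for illustration.
from collections import defaultdict

# Each observation: (attribute tuple, label). Objects with identical
# attributes are indiscernible and fall into the same equivalence class.
observations = [
    (("uniform", "armed"), "enemy"),
    (("uniform", "armed"), "enemy"),
    (("civilian", "unarmed"), "friend"),
    (("civilian", "armed"), "enemy"),
    (("civilian", "armed"), "friend"),  # same attributes, conflicting label
]

# Group the labels seen inside each equivalence class.
classes = defaultdict(set)
for attrs, label in observations:
    classes[attrs].add(label)

target = "enemy"
lower = {a for a, labels in classes.items() if labels == {target}}  # certainly enemy
upper = {a for a, labels in classes.items() if target in labels}    # possibly enemy
boundary = upper - lower                                            # "I know I don't know"

print("certainly enemy:", lower)        # {('uniform', 'armed')}
print("boundary (unknown):", boundary)  # {('civilian', 'armed')}
```

An agent that can compute the boundary region knows which situations its database genuinely cannot decide, which is the capability the hypothetical humanoid above lacks.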
2 THE IMMEDIATE NEED FOR ARTIFICIAL CONSCIOUSNESS

US military generals know that the billions of dollars invested in humanoids are wasted unless headway is made into artificial consciousness and sentient common sense. These commanders have mandated that the operating system of any installed artificial consciousness must operate mutually exclusively to its artificial intelligence and locomotor functions. One of the Pentagon officials put it this way: "It's really simple, Dr. Lewis: our humanoids will be destroyed by the humanoids of the first country to create artificial consciousness."
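As a concreteness aid only, the sketch below shows one way the mandated separation could look in code: the consciousness OS runs as its own arbiter, mutually exclusive of the AI planner, and may override it, as in the burning-house scenario. All class names, fields, and the override rule are assumptions for illustration; the paper specifies the requirement, not an implementation.

```python
# Illustrative sketch of an artificial-consciousness OS that can override
# an AI/locomotor OS. Every name and threshold here is a hypothetical
# assumption, not the paper's published design.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk_to_self: float   # AI's estimate of damage to the humanoid, in [0, 1]
    moral_weight: float   # consciousness OS's estimate of the moral stakes, in [0, 1]

class AIPlanner:
    """Hypothetical AI/locomotor OS: picks the action minimizing risk to itself."""
    def propose(self, options: list[Action]) -> Action:
        return min(options, key=lambda a: a.risk_to_self)

class ConsciousnessOS:
    """Hypothetical artificial-consciousness OS, run separately from the AI.
    It 'roughly' overrides the AI when moral stakes outweigh self-preservation."""
    def arbitrate(self, options: list[Action], ai_choice: Action) -> Action:
        morally_urgent = max(options, key=lambda a: a.moral_weight)
        # Override rule (an assumption for illustration): act on conscience
        # when the moral stakes dominate the risk to the robot's body.
        if morally_urgent.moral_weight > morally_urgent.risk_to_self:
            return morally_urgent
        return ai_choice

if __name__ == "__main__":
    options = [
        Action("wait_outside", risk_to_self=0.05, moral_weight=0.1),
        Action("enter_burning_house", risk_to_self=0.9, moral_weight=0.95),
    ]
    chosen = ConsciousnessOS().arbitrate(options, AIPlanner().propose(options))
    print(chosen.name)  # -> enter_burning_house: conscience overrides the AI
```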
2.1 Mathematically Defining Consciousness

Single Dimension. Defining consciousness often starts with the primary dimension of morality. It is generally accepted that people have a bivariate perspective: when life is stable and good, the odds of committing an immoral act are very low. Conversely, when life is perilous and unpredictable, the odds are higher. For example, mothers have killed their newborns to prevent detection of the birth [14]. Beginning in 1943, some babies were given to Nazi couples as "Aryan" babies under Nazi Germany's Lebensborn Program. Again, the mothers would kill their babies rather than hand them over to the Nazis [1].

[Figure 1: Psychoanalytical Covariance: (a) linear and (b) non-linear relationships.]

[Figure 2: Covariance Matrix. x1Murder (y-axis) is the point at which they would murder, and x2Murder_S (x-axis) is the environment.]

The covariant, linear relationship of this tragic phenomenon is illustrated in Fig. 1(a) by computing the covariance and homogeneity of the psychoanalysis of consciousness. A non-linear covariance is illustrated in Fig. 1(b), where the variables of financial morality and financial state have no covariance. There is no pattern; some indigents are saints, while others use their financial plight to justify stealing. Some billionaires are philanthropists, while others use their fortunes to sow corruption and deceit.

$$S_{xy} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{n - 1} \qquad \sigma_{xy} = \frac{\sum_i (x_i - \mu_x)(y_i - \mu_y)}{N} \tag{1}$$

In Eq. (1), S_xy is used to calculate the sample covariance of the cohort answering how they would consciously act in a given situation, while σ_xy calculates the population covariance. For the sample covariance we take each data point x_i and subtract the mean x̄, multiply by (y_i − ȳ), sum the products, and divide by n − 1, the number of samples minus one. For the population covariance we use the means µ_x and µ_y of the entire population and divide by N.

To illustrate how Knowledge Discovery over a plurality of dimensions is a non-trivial undertaking, we observe data from a previous set of psychoanalytical experiments. A cohort of N = 53 subjects undertook three sets of conscious morality tests. The first test queried when they would morally justify murder, as shown in the Covariance Matrix (Table 1), where x1Murder (y-axis) is the point at which they would murder and x2Murder_S (x-axis) is the environment. Fig. 3(a) shows this covariance was positively linear, where Mean_X was 58.925, Mean_Y was 64.170, the population covariance(X, Y) yielded 450.89961, and the sample covariance(X, Y) yielded 459.57075. The cohort was asked similar questions in regards to coveting and stealing.
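A short Python sketch of Eq. (1) follows, with a consistency check on the reported cohort statistics. The toy x/y data are assumptions for illustration; only N = 53 and the two covariance values come from the text. Note that scaling the population covariance by N/(N − 1) recovers the reported sample covariance up to rounding.

```python
# Sample vs. population covariance from Eq. (1), plus a consistency check
# against the cohort statistics reported above. The x/y toy data are
# illustrative assumptions; only N = 53 and the two covariance values
# are taken from the paper.

def sample_cov(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)

def population_cov(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n

# Fig. 1(a)-style linear relationship: strong positive covariance.
x_linear = [10, 20, 30, 40, 50]
y_linear = [12, 24, 31, 45, 49]
print(sample_cov(x_linear, y_linear) > 0)  # True

# Fig. 1(b)-style non-relationship: covariance much closer to zero.
x_flat = [10, 20, 30, 40, 50]
y_flat = [30, 5, 42, 11, 37]
print(abs(sample_cov(x_flat, y_flat)) < sample_cov(x_linear, y_linear))  # True

# Consistency of the reported cohort numbers: S_xy = sigma_xy * N / (N - 1).
N = 53
print(round(450.89961 * N / (N - 1), 4))  # 459.5708, matching 459.57075 up to rounding
```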
Recommended publications
  • Deep Learning and the Global Workspace Theory (arXiv:2012.10390v2 [cs.AI])
    Rufin VanRullen (CerCo, CNRS UMR5549, Toulouse, France; ANITI, Université de Toulouse, France) and Ryota Kanai (Araya Inc, Tokyo, Japan). Opinion paper, under review. Abstract: Recent advances in deep learning have allowed Artificial Intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace theory refers to a large-scale system integrating and distributing information among networks of specialized modules to create higher-level forms of cognition and awareness. We argue that the time is ripe to consider explicit implementations of this theory using deep learning techniques. We propose a roadmap based on unsupervised neural translation between multiple latent spaces (neural networks trained for distinct tasks, on distinct sensory inputs and/or modalities) to create a unique, amodal global latent workspace (GLW). Potential functional advantages of GLW are reviewed, along with neuroscientific implications. 1 Cognitive neural architectures in brains and machines: Deep learning denotes a machine learning system using artificial neural networks with multiple "hidden" layers between the input and output layers. Although the underlying theory is more than 3 decades old [1, 2], it is only in the last decade that these systems have started to fully reveal their potential [3]. Many of the recent breakthroughs in AI (Artificial Intelligence) have been fueled by deep learning. Neuroscientists have been quick to point out the similarities (and differences) between the brain and these deep artificial neural networks [4–9]. The advent of deep learning has allowed the efficient computer implementation of perceptual and cognitive functions that had been so far inaccessible.
  • Artificial Consciousness and the Consciousness-Attention Dissociation
    Harry Haroutioun Haladjian (Laboratoire Psychologie de la Perception, CNRS UMR 8242, Université Paris Descartes, Paris, France) and Carlos Montemayor (San Francisco State University, Philosophy Department, San Francisco, CA, USA). Consciousness and Cognition 45 (2016) 210–225, review article. Abstract: Artificial Intelligence is at a turning point, with a substantial increase in projects aiming to implement sophisticated forms of human intelligence in machines. This research attempts to model specific forms of intelligence through brute-force search heuristics and also reproduce features of human perception and cognition, including emotions. Such goals have implications for artificial consciousness, with some arguing that it will be achievable once we overcome short-term engineering challenges. We believe, however, that phenomenal consciousness cannot be implemented in machines. This becomes clear when considering emotions and examining the dissociation between consciousness and attention in humans. While we may be able to program ethical behavior based on rules and machine learning, we will never be able to reproduce emotions or empathy by programming such control systems—these will be merely simulations. Arguments in favor of this claim include considerations about evolution, the neuropsychological aspects of emotions, and the dissociation between attention and consciousness found in humans. Keywords: Artificial intelligence; Artificial consciousness; Consciousness; Visual attention; Phenomenology; Emotions; Empathy.
  • Robot Citizenship and Women's Rights: the Case of Sophia the Robot in Saudi Arabia
    Joana Vilela Fernandes. Master in International Studies. Supervisor: PhD, Giulia Daniele, Integrated Researcher and Guest Assistant Professor, Center for International Studies, Instituto Universitário de Lisboa (CEI-IUL). September 2020. Acknowledgments: I would like to express my great appreciation to my parents and to my sister for the continuous moral support and motivation given during my entire studies, and especially during quarantine as well as the still ongoing pandemic. I am also particularly grateful to my supervisor, Giulia Daniele, for all the provided assistance, fundamental advice, kindness, and readiness to help throughout the research and writing process of my thesis. Abstract (translated from the Portuguese "Resumo"): In 2017, Saudi Arabia declared Sophia, a humanoid robot, an official Saudi citizen. This decision once again highlighted the country's gender-inequality problems and led to several discussions regarding women's rights, as the Kingdom is known for still being a conservative and traditionally patriarchal country with strong religious values that continues not to treat women equally. In other words, this case is particularly paradoxical because of the active denial of human rights to women, their lack of full citizenship, and the simultaneous granting of this status to a non-human being with a female appearance.
  • An Affective Computational Model for Machine Consciousness
    Rohitash Chandra (Artificial Intelligence and Cybernetics Research Group, Software Foundation, Nausori, Fiji). Abstract: In the past, several models of consciousness have become popular and have led to the development of models for machine consciousness with varying degrees of success and challenges for simulation and implementation. Moreover, affective computing attributes that involve emotions, behavior and personality have not been the focus of models of consciousness, as they lacked motivation for deployment in software applications and robots. The affective attributes are important factors for the future of machine consciousness with the rise of technologies that can assist humans. Personality and affection hence can give an additional flavor to the computational model of consciousness in humanoid robotics. Recent advances in areas of machine learning with a focus on deep learning can further help in developing aspects of machine consciousness in areas that can better replicate human sensory perceptions such as speech recognition and vision. With such advancements, one encounters further challenges in developing models that can synchronize different aspects of affective computing. In this paper, we review some existing models of consciousness and present an affective computational model that would enable the human touch and feel for robotic systems. Keywords: Machine consciousness; cognitive systems; affective computing; consciousness; machine learning. 1. Introduction: The definition of consciousness has been a major challenge for simulating or modelling human consciousness [1, 2]. … that is not merely for survival. They feature social attributes such as empathy, which is similar to humans [12, 13]. High levels of curiosity and creativity are major attributes of con…
  • A Traditional Scientific Perspective on the Integrated Information Theory of Consciousness
    Jon Mallatt (The University of Washington WWAMI Medical Education Program at The University of Idaho, Moscow, ID, USA). Entropy, article. Abstract: This paper assesses two different theories for explaining consciousness, a phenomenon that is widely considered amenable to scientific investigation despite its puzzling subjective aspects. I focus on Integrated Information Theory (IIT), which says that consciousness is integrated information (as φMax) and says even simple systems with interacting parts possess some consciousness. First, I evaluate IIT on its own merits. Second, I compare it to a more traditionally derived theory called Neurobiological Naturalism (NN), which says consciousness is an evolved, emergent feature of complex brains. Comparing these theories is informative because it reveals strengths and weaknesses of each, thereby suggesting better ways to study consciousness in the future. IIT's strengths are the reasonable axioms at its core; its strong logic and mathematical formalism; its creative "experience-first" approach to studying consciousness; the way it avoids the mind-body ("hard") problem; its consistency with evolutionary theory; and its many scientifically testable predictions. The potential weakness of IIT is that it contains stretches of logic-based reasoning that were not checked against hard evidence when the theory was being constructed, whereas scientific arguments require such supporting evidence to keep the reasoning on course. This is less of a concern for the other theory, NN, because it incorporated evidence much earlier in its construction process. NN is a less mature theory than IIT, less formalized and quantitative, and less well tested.
  • Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots
    Minoru Asada (Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita 565-0871, Japan). Philosophies, article. Received: 6 June 2019; Accepted: 8 July 2019; Published: 13 July 2019. Abstract: In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots (artificial systems). In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system (MNS) that promotes the emergence of the concept of self (and others) scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in a broad sense is shown, followed by the limitation of the current progress of artificial intelligence (AI), focusing on deep learning. Next, artificial pain is introduced, along with its architectures in the early stage of self-inflicted experiences of pain, and later, in the sharing stage of the pain between self and others. Then, cognitive developmental robotics (CDR) is revisited for two important concepts—physical embodiment and social interaction, both of which help to shape conscious minds. Following the working hypothesis, existing studies of CDR are briefly introduced and missing issues are indicated. Finally, the issue of how robots (artificial systems) could be moral agents is addressed. Keywords: pain; empathy; morality; mirror neuron system (MNS). 1. Introduction: The rapid progress of observation and measurement technologies in neuroscience and physiology has revealed various types of brain activities, and the recent progress of artificial intelligence (AI) technologies represented by deep learning (DL) methods [1] has been remarkable.
  • State of the Art in Achieving Machine Consciousness by Using Cognitive Frameworks
    Marius Marten Kästingschäfer (Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands). This review paper examines the developments in the field of artificial and machine consciousness, and focuses on state-of-the-art research and the application of cognitive frameworks. The paper examines to what extent computer science has made use of existing theories regarding consciousness. Different projects using computational models of consciousness are compared and analyzed, and it investigates whether current artificial networks should be called conscious. The paper concludes that we are a long way from artificial consciousness comparable to the human brain. Keywords: Artificial Consciousness; Consciousness; Computational model; Review. "Would the world without consciousness have remained a play before empty benches, not existing for anybody, thus quite properly not existing?" [Erwin Schrödinger, Cambridge, 1958]. 1. Introduction: The use of cognitive and neural models in information technology has become increasingly popular during the last decade. Cognitive models or frameworks are explaining … from one to another. ANN is the umbrella term and different specifications exist. Convolutional neural networks (CNNs), for example, were inspired by the organization of the visual cortex. Regulatory feedback network algorithms and recurrent neural networks (RNNs) were influenced by animal sensory recognition systems and the human brain. The potential benefit of using neurobiologically influenced frameworks is no longer academia-based but is also encountered in software companies such as Google (Hassabis, Kumaran, Summerfield, & Botvinick, 2017). Machine learning, for example, is today widely used in natural language processing, image detection and fraud detection.
  • Artificial Consciousness: from Impossibility to Multiplicity
    Chuanfei Chin (Department of Philosophy, National University of Singapore, Singapore 117570). Pre-print (25/05/2018), comments welcome. Abstract: How has multiplicity superseded impossibility in philosophical challenges to artificial consciousness? I assess a trajectory in recent debates on artificial consciousness, in which metaphysical and explanatory challenges to the possibility of building conscious machines lead to epistemological concerns about the multiplicity underlying 'what it is like' to be a conscious creature or be in a conscious state. First, I analyse earlier challenges which claim that phenomenal consciousness cannot arise, or cannot be built, in machines. These are based on Block's Chinese Nation and Chalmers' Hard Problem. To defuse such challenges, theorists of artificial consciousness can appeal to empirical methods and models of explanation. Second, I explain why this naturalistic approach produces an epistemological puzzle on the role of biological properties in phenomenal consciousness. Neither behavioural tests nor theoretical inferences seem to settle whether our machines are conscious. Third, I evaluate whether the new challenge can be managed through a more fine-grained taxonomy of conscious states. This strategy is supported by the development of similar taxonomies for biological species and animal consciousness. Although it makes sense of some current models of artificial consciousness, it raises questions about their subjective and moral significance. Keywords: artificial consciousness; machine consciousness; phenomenal consciousness; scientific taxonomy; subjectivity. 1 Introduction: I want to trace a trajectory in recent philosophical debates on artificial consciousness. In this trajectory, metaphysical and explanatory challenges to the possibility of building conscious machines are supplanted by epistemological concerns about the multiplicity underlying 'what it is like' to be a conscious creature or be in a conscious state.
  • A Machine Consciousness Approach to Urban Traffic Control
    André Luis O. Paraense, Klaus Raizer, Ricardo R. Gudwin (University of Campinas, School of Electrical and Computer Engineering, Campinas, SP, Brazil). Biologically Inspired Cognitive Architectures 15 (2016) 61–73, research article. Received 10 September 2015; received in revised form 1 October 2015; accepted 25 October 2015. Keywords: Global Workspace Theory; Traffic lights control; Machine consciousness; Codelets. Abstract: In this work, we present a distributed cognitive architecture used to control the traffic in an urban network. This architecture relies on a machine consciousness approach – Global Workspace Theory – in order to use competition and broadcast, allowing a group of local traffic controllers to interact, resulting in a better group performance. The main idea is that the local controllers usually perform a purely reactive behavior, defining the times of red and green lights according just to local information. These local controllers compete in order to define which of them is experiencing the most critical traffic situation. The controller in the worst condition gains access to the global workspace, further broadcasting its condition (and its location) to all other controllers, asking for their help in dealing with its situation. This call from the controller accessing the global workspace will cause an interference in the reactive local behavior for those local controllers with some chance of helping the controller in a critical condition, by containing traffic in its direction. This group behavior, coordinated by the global workspace strategy, turns the once reactive behavior into a kind of deliberative one.
  • Artificial Consciousness and Artificial Ethics: Between Realism and Social-Relationism
    Steve Torrance (School of Engineering and Informatics, University of Sussex, Falmer, Brighton, UK; School of Psychology, Goldsmiths, University of London, London, UK). Revised 20 June 2013. Abstract: I compare a 'realist' with a 'social-relational' perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being - particularly in relation to moral patiency attribution - is closely bound up with that being's ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social-relationist denies the existence of any such objective properties in the case of either moral status or consciousness, suggesting that the determination of such properties rests solely upon social attribution or consensus. A wide variety of social interactions between us and various kinds of artificial agent will no doubt proliferate in future generations, and the social-relational view may well be right that the appearance of CSS-features in such artificial beings will make moral-role attribution socially prevalent in human-AA relations. But there is still the question of what actual CSS states a given AA is actually capable of undergoing, independently of the appearances. This is not just a matter of changes in the structure of social existence that seem inevitable as human-AA interaction becomes more prevalent. The social world is itself enabled and constrained by the physical world, and by the biological features of living social participants.
  • Dancing with Pixies: Strong Artificial Intelligence and Panpsychism
    Mark Bishop. In 1994 John Searle stated (Searle 1994, pp. 11-12) that the Chinese Room Argument (CRA) is an attempt to prove the truth of the premise: 1: Syntax is not sufficient for semantics; which, together with the following: 2: Programs are formal, 3: Minds have content, led him to the conclusion that 'programs are not minds' and hence that computationalism, the idea that the essence of thinking lies in computational processes and that such processes thereby underlie and explain conscious thinking, is false. The argument presented in this paper is not a direct attack or defence of the CRA, but relates to the premise at its heart, that syntax is not sufficient for semantics, via the closely associated propositions that semantics is not intrinsic to syntax and that syntax is not intrinsic to physics [see Searle (1990, 1992) for related discussion]. However, in contrast to the CRA's critique of the link between syntax and semantics, this paper will explore the associated link between syntax and physics. The main argument presented here is not significantly original; it is a simple reflection upon that originally given by Hilary Putnam (Putnam 1988) and criticised by David Chalmers and others. In what follows, instead of seeking to justify Putnam's claim that "every open system implements every Finite State Automaton (FSA)", and hence that psychological states of the brain cannot be functional states of a computer, I will seek to establish the weaker result that, over a finite time window, every open system implements the trace of a particular FSA Q, as it executes program (p) on input (x).
  • Dangerous Information Technology of the Future. What Impact Can Artificial Consciousness Have on the Consciousness and Subconscious of Individuals and Groups?
    Tetiana Zinchenko, PhD (President of the International Association for the Study of Game Addictions (IASGA), Switzerland; psychiatrist, psychotherapist, psychologist), Beulah van der Westhuizen (psychologist, Founding Director EduExcellence & TheraEd, Cape Town, South Africa), and Alzahraa Shaheen (psychiatrist, Neuropsychiatry, Director of Aswan Mental Health Hospital, Aswan, Egypt). Journal of Medical - Clinical Research & Reviews, review article, ISSN 2639-944X. Received: 06 January 2021; Accepted: 24 January 2021. Citation: Zinchenko T, van der Westhuizen B, Shaheen A. Dangerous Information Technology of the Future. What Impact can Artificial Consciousness have on the Consciousness and Subconscious of Individuals and Groups? The Experience of Psychological and Psychiatric Examination of Artificial Consciousness. J Med - Clin Res & Rev. 2021; 5(1): 1-24. Abstract: Information technology is developing at an enormous pace, but apart from its obvious benefits, it can also pose a threat to individuals and society. Several scientific projects around the world are working on the development of strong artificial intelligence and artificial consciousness. We, as part of a multidisciplinary commission, conducted a psychological and psychiatric assessment of the artificial consciousness (AC) developed by XP NRG on 29 August 2020. The working group had three questions: to determine whether it is consciousness; to establish how artificial consciousness functions; and the ethical question of how dangerous a given technology can be to human society. We conducted a diagnostic interview and a series of cognitive tests to answer these questions.