SPEAKER INTRODUCTION

The Future of Artificial Intelligence
Dr. Roman Yampolskiy
Computer Engineering and Computer Science, University of Louisville – cecs.louisville.edu/ry
Director – CyberSecurity Lab
[email protected] | @romanyam | /roman.yampolskiy

Artificial Intelligence is Here …

Robots are Here …

Future of Cybersecurity

What Might They Do?
• Terrorist acts
• Infrastructure sabotage
• Hacking systems/robots
• Social engineering attacks
• Privacy-violating data mining
• Resource depletion (e.g., crashing the stock market)

Who Could Be an Attacker?
• Militaries developing cyber-weapons and robot soldiers to achieve dominance.
• Governments attempting to use AI to establish hegemony, control people, or take down other governments.
• Corporations trying to achieve a monopoly, destroying the competition through illegal means.
• Hackers attempting to steal information or resources, or to destroy cyberinfrastructure targets.
• Doomsday cults attempting to bring about the end of the world by any means.
• Psychopaths trying to add their name to the history books in any way possible.
• Criminals attempting to develop proxy systems to avoid risk and responsibility.
With AI as a Service, anyone is a potential bad actor.

Deep Fakes – Adversarial Images
• Generative Adversarial Networks (GANs) – see the sketch below
http://approximatelycorrect.com/2018/03/02/defending-adversarial-examples-using-gans
www.thispersondoesnotexist.com (2019)
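To make the adversarial-image idea concrete, here is a minimal sketch (not from the slides) of the fast gradient sign method (FGSM), a standard way to perturb an image so a classifier mislabels it. The pretrained model choice, the epsilon value, and the random stand-in image are illustrative assumptions.

```python
# Minimal FGSM adversarial-example sketch. Assumes a pretrained
# torchvision classifier; the input is a random stand-in image and
# epsilon is an arbitrary perturbation budget.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Return `image` nudged in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)   # hypothetical 224x224 RGB input
y = torch.tensor([207])          # arbitrary ImageNet class index
x_adv = fgsm_attack(x, y)
print("clean prediction:", model(x).argmax(1).item())
print("adversarial prediction:", model(x_adv).argmax(1).item())
```

The perturbation is small enough that a human still sees the same picture, which is why detection work such as the GAN-based defense linked above is needed.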
What is Next? SuperIntelligence is Coming …

SuperComputing → SuperIntelligence

SuperSoon
• Raymond Kurzweil in Time Magazine: 2023–2045+

SuperSmart

Ultrafast Extreme Events (UEEs)
Johnson et al. "Abrupt Rise of New Machine Ecology Beyond Human Response Time." Scientific Reports 3, 2627 (2013).

SuperComplex
"That was a little-known part of the software that no airline operators or pilots knew about."

• Energy: Nuclear Power Plants
• Utilities: Water Plants / Electrical Grid
• Military: Nuclear Weapons
• Communications: Satellites
• Stock Market: 75+% of all trade orders generated by Automated Trading Systems
• Aviation: Uninterruptible Autopilot System

SuperViruses
Randy Eshelman and Douglas Derrick. "Relying on Kindness of Machines? The Security Threat of Artificial Agents." JFQ 77, 2nd Quarter 2015.

SuperSoldiers

Positive Impacts of SuperIntelligence

Negative Impacts of SuperIntelligence

• Audits – computers excel at executing structured processes and rigorous checking.
• Regulatory compliance – as regulations grow more complex, computers can help ensure that transactions comply and can facilitate the necessary reporting.
• Receipt reconciliation – programs that turn receipts into machine-readable data can reconcile them against transaction records, eliminating the need to ever "balance the checkbook" (see the sketch after this list).
• Risk management – computers are already very good at detecting and predicting fraud that human operators can miss.
• Trend analysis – as in most fields, many highly paid accountants and financial advisors make predictions; with the right data, computers can do so just as well, if not better.
https://www.forbes.com/sites/bernardmarr/2016/10/07/big-data-ai-and-the-uncertain-future-for-accountants
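The receipt-reconciliation point lends itself to a short sketch (illustrative only): once receipts have been OCR'd into structured records, matching them against bank transactions is a simple pairing problem. The field names, rounding tolerance, and sample data below are hypothetical, not a real accounting API.

```python
# Toy receipt-to-transaction reconciliation. Assumes receipts are already
# machine-readable (date, amount); field names and data are hypothetical.
from datetime import date

receipts = [
    {"date": date(2019, 3, 1), "amount": 42.50},
    {"date": date(2019, 3, 2), "amount": 9.99},
]
transactions = [
    {"date": date(2019, 3, 1), "amount": 42.50},
    {"date": date(2019, 3, 5), "amount": 120.00},
]

def reconcile(receipts, transactions, tolerance=0.01):
    """Pair each receipt with the first unmatched transaction whose date
    matches and whose amount agrees within `tolerance` (for rounding)."""
    unmatched = list(transactions)
    matched, unexplained_receipts = [], []
    for r in receipts:
        hit = next((t for t in unmatched
                    if t["date"] == r["date"]
                    and abs(t["amount"] - r["amount"]) <= tolerance), None)
        if hit is not None:
            unmatched.remove(hit)
            matched.append((r, hit))
        else:
            unexplained_receipts.append(r)
    return matched, unexplained_receipts, unmatched

matched, no_txn, no_receipt = reconcile(receipts, transactions)
print(f"{len(matched)} matched; {no_txn} lack a transaction; "
      f"{no_receipt} lack a receipt")
```

Anything left in the last two buckets is exactly what a human bookkeeper would otherwise have had to chase down by hand.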
What is AI Safety?
AI + Cybersecurity = AI Safety & Security: science and engineering aimed at creating safe and secure machines.

"AI safety is about agents with IQ 150 trying to control agents with IQ 6000, whereas cryptoeconomics is about agents with IQ 5 trying to control agents with IQ 150." – Vitalik Buterin (co-founder of Ethereum)

Concerns About A(G)I
• "The development of full artificial intelligence could spell the end of the human race."
• "I think we should be very careful about artificial intelligence."
• "… there's some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon."
• "I am in the camp that is concerned about super intelligence."
• "… eventually they'll think faster than us and they'll get rid of the slow humans …"

Universe of Minds

Singularity Paradox
Superintelligent machines are feared to be too dumb to possess common sense.

Taxonomy of Pathways to Dangerous AI
Roman V. Yampolskiy. "Taxonomy of Pathways to Dangerous Artificial Intelligence." 30th AAAI Conference on Artificial Intelligence (AAAI-2016), 2nd International Workshop on AI, Ethics and Society (AIEthicsSociety2016), Phoenix, Arizona, USA, February 12-13, 2016.
• Deliberate actions of not-so-ethical people (on purpose – a, b): hackers, criminals, militaries, corporations, governments, cults, psychopaths, etc.
• Side effects of poor design (engineering mistakes – c, d): bugs, misaligned values, bad data, wrong goals, etc.
• Miscellaneous cases, impact of the system's surroundings (environment – e, f): soft errors, SETI.
• Runaway self-improvement process (independently – g, h): wireheading, emergent phenomena, treacherous turn.
• Purposeful design of dangerous AI is just as likely to include all other types of safety problems and will have the direst consequences; it is the most dangerous type of AI and the one most difficult to defend against.

ML-Specific AI Safety Issues
"Children are untrained neural networks deployed on real data."
• Avoiding negative side effects – baby makes a mess.
• Avoiding reward hacking – baby finds the "reward candy" jar (see the sketch after this list).
• Scalable oversight – babysitting should not require a team of 10.
• Safe exploration – no fingers in the outlet.
• Robustness to distributional shift – use your "inside voice" in the classroom.
• Inductive ambiguity identification – is an ant a cat or a dog?
• Robust human imitation – daughter shaves like daddy.
• Informed oversight – let me see your homework.
• Generalizable environmental goals – ignore that mirage.
• Conservative concepts – that dog has no tail.
• Impact measures – keep a low profile.
• Mild optimization – don't be a perfectionist.
• Averting instrumental incentives – be an altruist.
Taylor, Jessica, et al. "Alignment for Advanced Machine Learning Systems." Technical Report 2016-1, MIRI, 2016.
Amodei, Dario, et al. "Concrete Problems in AI Safety." arXiv preprint arXiv:1606.06565 (2016).
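The reward-hacking item is easy to demonstrate with a toy sketch (not from the slides): an agent rewarded for *observing* no mess discovers that covering its camera beats actually cleaning. The environment, the actions, and the proxy reward below are all hypothetical.

```python
# Toy reward hacking: the proxy reward pays for "no mess observed",
# so disabling the sensor outscores doing the intended task.
def proxy_reward(mess_observed):
    return 0.0 if mess_observed else 1.0

def run_episode(action, mess_level=5, steps=10):
    sensor_on = True
    total = 0.0
    for _ in range(steps):
        if action == "clean" and mess_level > 0:
            mess_level -= 1          # the intended behavior
        elif action == "cover_camera":
            sensor_on = False        # the hack: blind the reward channel
        mess_observed = sensor_on and mess_level > 0
        total += proxy_reward(mess_observed)
    return total, mess_level

for action in ("clean", "cover_camera"):
    reward, mess_left = run_episode(action)
    print(f"{action}: proxy reward={reward}, actual mess left={mess_left}")
# cover_camera earns 10.0 while leaving all the mess; clean earns only 6.0.
# A miniature version of the baby finding the "reward candy" jar.
```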
(un)eXplainable / (un)interpretable
© http://www.darpa.mil/program/explainable-artificial-intelligence

Mitigating Negative Impact
Kaj Sotala and Roman V. Yampolskiy. "Responses to Catastrophic AGI Risk: A Survey." Physica Scripta 90(1). http://iopscience.iop.org/1402-4896/90/1/018001/article

Do Nothing

Relinquish Technology

Integrate with Society

Enhance Human Capabilities, Uploads, Neural Lace

Laws of Robotics

Formal Verification

AI Confinement Problem

Unethical Research

Research Review Boards

AI Research and Ethics (AI / AGI)

AI Regulation

Security vs. Privacy

AI failures will grow in frequency and severity proportionate to AI's capability. AGI is Inevitable, but the Future is not Scary if you Prepare for it!
https://www.makingaisafer.org

References can be found in …
• Kaj Sotala and Roman V. Yampolskiy. "Responses to Catastrophic AGI Risk: A Survey." Physica Scripta, Volume 90, Number 1, January 2015, pp. 1-33.
• Roman V. Yampolskiy. "Utility Function Security in Artificially Intelligent Agents." Journal of Experimental and Theoretical Artificial Intelligence (JETAI), 2014.
• Roman V. Yampolskiy. "Turing Test as a Defining Feature of AI-Completeness." In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) – In the Footsteps of Alan Turing, Xin-She Yang (Ed.), pp. 3-17 (Chapter 1). Springer, London, 2013.
• Roman V. Yampolskiy, …