HLC Newsletter, December 2018
Recommended publications
Public Response to RFI on AI
White House Office of Science and Technology Policy, Request for Information on the Future of Artificial Intelligence: Public Responses, September 1, 2016.

Respondent 1: Chris Nicholson, Skymind Inc.

This submission addresses topics 1, 2, 4 and 10 in the OSTP's RFI:
• the legal and governance implications of AI
• the use of AI for public good
• the social and economic implications of AI
• the role of "market-shaping" approaches

Governance, anomaly detection and urban systems

The fundamental task in the governance of urban systems is to keep them running; that is, to maintain the fluid movement of people, goods, vehicles and information throughout the system, without which it ceases to function. Breakdowns in the functioning of these systems and their constituent parts – whether in their energy, transport, security or information infrastructures – are therefore of great interest. Those breakdowns may result from deterioration of the physical plant, sudden and unanticipated overloads, natural disasters or adversarial behavior.

In many cases, municipal governments possess historical data about those breakdowns and the events that precede them, in the form of activity and sensor logs, video, and internal or public communications. Where they don't already possess such data, it can be gathered. Such datasets are a tremendous help when applying learning algorithms to predict breakdowns and system failures. With enough lead time, those predictions make pre-emptive action possible – action that would cost cities far less than recovery efforts in the wake of a disaster. The choice is between an ounce of prevention and a pound of cure. Even in cases where we lack data covering past breakdowns, algorithms exist to identify anomalies in the data we begin gathering now.
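The closing claim – that anomaly-detection algorithms can flag irregularities even without labelled historical breakdowns – can be illustrated with a minimal sketch. The detector below is a generic z-score filter over sensor readings; the function name, threshold, and data are illustrative assumptions, not taken from the submission, and real municipal deployments would use far more sophisticated models.

```python
import statistics

def detect_anomalies(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) > threshold * stdev]

# Hypothetical hourly load readings from a piece of urban infrastructure;
# the final value is an injected spike standing in for an incipient failure.
load = [50, 52, 49, 51, 50, 48, 53, 51, 50, 120]
print(detect_anomalies(load, threshold=2.0))  # → [120]
```

The point of the sketch is that such a detector needs no labelled history of past breakdowns: it learns a baseline from whatever data collection begins now and flags departures from it.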
Portrayals and Perceptions of AI and Why They Matter
Cover image: the Turk playing chess, designed by Wolfgang von Kempelen (1734–1804), built by Christoph Mechel, 1769; colour engraving, circa 1780.

Contents: Executive summary; Introduction; Narratives and artificial intelligence; The AI narratives project; AI narratives; A very brief history of imagining intelligent machines; Embodiment; Extremes and control; Narrative and emerging technologies: lessons from previous science and technology debates; Implications; Disconnect from the reality of the technology; Underrepresented narratives and narrators; Constraints; The role of practitioners; Communicating AI; Reshaping AI narratives; The importance of dialogue; Annexes.

Executive summary

The AI narratives project – a joint endeavour by the Leverhulme Centre for the Future of Intelligence and the Royal Society – has been examining how researchers, communicators, policymakers, and publics talk about artificial intelligence, and why this matters.

This write-up presents an account of how AI is portrayed and perceived in the English-speaking West, with a particular focus on the UK. It explores the limitations of prevalent fictional and non-fictional narratives and suggests how practitioners might move beyond them. Its primary audience is professionals with an interest in public discourse about AI, including those in the media, government, academia, and industry.

Narratives are essential to the development of science and people's engagement with new knowledge and new applications. Both fictional and non-fictional narratives have real-world effects. Recent historical examples of the evolution of disruptive technologies and public debates with a strong science component (such as genetic modification, nuclear energy and climate change) offer …
An AI Threats Bibliography

- "AI Principles". Future of Life Institute. Retrieved 11 December 2017.
- "Anticipating artificial intelligence". Nature. 532 (7600): 413. 26 April 2016. Bibcode:2016Natur.532Q.413. doi:10.1038/532413a. PMID 27121801.
- "'Artificial intelligence alarmists' like Elon Musk and Stephen Hawking win 'Luddite of the Year' award". The Independent (UK). 19 January 2016. Retrieved 7 February 2016.
- "But What Would the End of Humanity Mean for Me?". The Atlantic. 9 May 2014. Retrieved 12 December 2015.
- "Clever cogs". The Economist. 9 August 2014. Retrieved 9 August 2014. Syndicated at Business Insider.
- "Elon Musk and Stephen Hawking warn of artificial intelligence arms race". Newsweek. 31 January 2017. Retrieved 11 December 2017.
- "Elon Musk wants to hook your brain up directly to computers — starting next year". NBC News. 2019. Retrieved 5 April 2020.
- "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. Retrieved 27 November 2017.
- "Norvig vs. Chomsky and the Fight for the Future of AI". Tor.com. 21 June 2011. Retrieved 15 May 2016.
- "Over a third of people think AI poses a threat to humanity". Business Insider. 11 March 2016. Retrieved 16 May 2016.
- "Overcoming Bias: Debating Yudkowsky". www.overcomingbias.com. Retrieved 20 September 2017.
- "Overcoming Bias: Foom Justifies AI Risk Efforts Now". www.overcomingbias.com. Retrieved 20 September 2017.
- "Overcoming Bias: I Still Don't Get Foom". www.overcomingbias.com. Retrieved 20 September 2017.
- "Overcoming Bias: The Betterness Explosion". www.overcomingbias.com. Retrieved 20 September 2017.
- "Real-Life Decepticons: Robots Learn to Cheat". Wired. 18 August 2009. Retrieved 7 February 2016.
- "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter".
Machine Superintelligence & Humanity
Rustat Conferences ◦ Jesus College ◦ Cambridge
Machine Superintelligence & Humanity
Jesus College, Cambridge, Thursday, 2 June 2016
Rapporteur: Nathan Brooker

Contents: Introduction to Rustat Conferences; Rustat Conferences Members; Agenda; Background to conference; Participants List; Conference Report: Executive Summary; Introduction; Session One; Session Two; Session Three; Session Four.

The Rustat Conferences are an initiative of Jesus College, Cambridge, chaired by Professor Ian White FREng, Master of Jesus College. The conferences provide an opportunity for decision-makers from the frontlines of politics, the civil service, business, the professions, the media, and education to exchange views on vital issues of the day with leading academics. Founded in 2009, the Rustat Conferences have covered a variety of themes, including: The Economic Crisis; The Future of Democracy; Cyber Security; Manufacturing in the UK; The Geopolitics of Oil and Energy; Drugs Policy; Organisational Change in the Economic Crisis; Cyber Finance; The Understanding and Misunderstanding of Risk; Food Security; Transport and Energy; Inequality; Big Data; and the UK North–South Divide.

In addition to acting as a forum for the exchange of views on a range of major concerns, the conferences provide outreach to a wider professional, academic, student and alumni audience through the publication of reports. The conferences are named after Tobias Rustat (d. 1694), a benefactor of Jesus College and the University.

Chatham House Rule and Rustat Conference Report: please note that the conference was conducted under the Chatham House Rule.
King's Parade Winter 2019
Winter 2019
King's Parade: the magazine for members & friends of King's College, Cambridge

In this issue: Cambridge Girls' Chess Initiative; Introducing Daniel Hyde; In Conversation with Murray Shanahan.

Contents: Building for the Future; Introducing Daniel Hyde; Emotional Intelligence; Making the Right Moves; In Conversation with Murray Shanahan; My PhD – Laura Brassington; Keeping House; Save the Date; Get in Touch.

Welcome from the Admissions Tutor, Bill Burgwinkle

One of the great things about doing admissions at King's is seeing the new influx of undergraduates arrive at the start of each academic year, in full knowledge of the determination, industry and aptitude that they've displayed in order to get here. It's also a reminder of how much more there is to do: there are thousands of students out there – bright and eager – who just haven't had an equal chance. They might be every bit as talented as the students who do make it here, but either their schooling, their family life, their health, or their postal code has seen to it that their opportunities have been limited in some way. At King's we've long been aware of this, and for more than fifty years we have done our best to level the playing field, but obstacles still remain. Across the country there are students who lack the support to achieve their full potential in their school examinations, or who feel discouraged and ill-equipped to make an application in the first place. Over the past year we've tripled our efforts in this area, with a particular emphasis on helping our applicants overcome their educational disadvantages and improve their exam performance by providing the kind of encouragement and tutorial assistance to which some of their counterparts are accustomed.
Growth, Degrowth, and the Challenge of Artificial Superintelligence (arXiv:1905.04288v1 [cs.CY], 3 May 2019)
Growth, degrowth, and the challenge of artificial superintelligence

Salvador Pueyo (a, b)
a) Dept. Evolutionary Biology, Ecology, and Environmental Sciences, Universitat de Barcelona, Av. Diagonal 645, 08028 Barcelona, Catalonia, Spain
b) Research & Degrowth, C/ Trafalgar 8, 3, 08010 Barcelona, Catalonia, Spain

Abstract

The implications of technological innovation for sustainability are becoming increasingly complex with information technology moving machines from being mere tools for production or objects of consumption to playing a role in economic decision making. This emerging role will acquire overwhelming importance if, as a growing body of literature suggests, artificial intelligence is underway to outperform human intelligence in most of its dimensions, thus becoming superintelligence. Hitherto, the risks posed by this technology have been framed as a technical rather than a political challenge. With the help of a thought experiment, this paper explores the environmental and social implications of superintelligence emerging in an economy shaped by neoliberal policies. It is argued that such policies exacerbate the risk of extremely adverse impacts. The experiment also serves to highlight some serious flaws in the pursuit of economic efficiency and growth per se, and suggests that the challenge of superintelligence cannot be separated from the other major environmental and social challenges, demanding a fundamental transformation along the lines of degrowth. Crucially, with machines outperforming them in their functions, there is little reason to expect economic elites to be exempt from the threats that superintelligence would pose in a neoliberal context, which opens a door to overcoming vested interests that stand in the way of social change toward sustainability and equity.