Chinese Health App Arrives: Access to a Large Population Used to Sharing Data Could Give iCarbonX an Edge Over Rivals
Total pages: 16. File type: PDF, size: 1,020 KB.
Recommended publications
Booxter Export Page 1
Fields per record: Title; Authors; Genre; Format; ISBN; Keywords.

The Museum of Found Objects: Toronto (Maharaja and - ). Mirjam Linschooten (ed.), Sameer Farooq (ed.), Haema Sivanesan. Exhibition Catalogue, Soft cover, ISBN 9780968546819.

(Da bao)(Takeout). Anik Glaude (ed.), Meg Taylor (ed.), Ruth Gaskill (ed.), Jing Yuan Huang (trans.), Xiao Ouyang (trans.), Mark Timmings. Exhibition Catalogue, Soft cover, ISBN 9780973589689. Keywords: Chinese, Canadian art, multimedia, 21st century, Ontario, Markham.

Piercing Brightness. Shezad Dawood (ill.), Gerrie van Noord (ed.), Malenie Pocock (ed.), Abake. Exhibition Catalogue, Hard cover, ISBN 9783863351465. Keywords: film.

52nd International Art Exhibition - La Biennale di Venezia - Atopia. Ming-Liang Tsai (ill.), Huang-Chen Tang (ill.), Kuo Min Lee (ill.), Shih Chieh Huang (ill.), VIVA (ill.), Hongjohn Lin (ed.). Exhibition Catalogue, Soft cover. Keywords: film, mixed media, print, performance art.

Passage. Osvaldo Yero (ill.), Charo Neville (ed.), Scott Watson (ed.). Exhibition Catalogue, Soft cover, ISBN 9780978241995. Keywords: sculpture, mixed media, ceramic, installation.

China International Practical Exhibition of Architecture. Arata Isozaki (ill.), Jiakun Liu (ill.), Jiang Xu (ill.), Xiaoshan Li (ill.), Steven Holl (ill.), Kai Zhou (ill.), Mathias Klotz (ill.), Qingyun Ma (ill.), Hrvoje Njiric (ill.), Kazuyo Sejima (ill.), Ryue Nishizawa (ill.), David Adjaye (ill.), Ettore Sottsass (ill.), Lei Zhang (ill.), Luis M. Mansilla (ill.), Sean Godsell (ill.), Gabor Bachman (ill.), Yung [...]. Exhibition Catalogue, Soft cover. Keywords: architecture, design, China.
Sydney Go Journal Issue Date – February 2007
Author – David Mitchell, on behalf of The Sydney Go Club. © Copyright 2007 – David Mitchell.

Dr. Geoffrey Gray's antique Go ban (picture courtesy of Dr Gray).

Upcoming events: Queensland Go Championship, Saturday 17th and Sunday 18th February in Brisbane. Venue: Brisbane Bridge Centre. Registration and other details on page 33; for the latest details visit www.uq.net.au/~zzjhardy/brisgo.html. Contributions, comments and suggestions for the SGJ to: [email protected]. Special thanks to Devon Bailey and Geoffrey Gray for proof reading this edition and correcting my mistakes.

Contents: Sydney Lightning Tournament report (3); Changqi Cup (4); 3rd Changqi Cup – 1st Qualifier (6); 3rd Changqi Cup – 2nd Qualifier (10); Problems (14); Handicap Strategy (15); Four Corners (29); Two-page Joseki Lesson (35); Answers (37); Korean Go Terms (39).

The Sydney Go Club meets Friday nights at Philas House, 17 Brisbane St, Surry Hills, from 5.00pm. Entrance fee $5 per head, concession $3, children free; includes tea and coffee. For further information contact Robert: [email protected].

Lightning Tournament: the lightning tournament was held on January 12th and a good time was had by all, thanks to Robert Vadas' organising skills. The final was between Max Latey and David Mitchell, the latter managing another lucky win. The following pictures tell the story more eloquently than words: David Mitchell (foreground) and Max Latey (background), the two finalists; Robert giving some sage advice.
FML-Based Dynamic Assessment Agent for Human-Machine Cooperative System on Game of Go
Accepted for publication in International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, July 2017.

Chang-Shing Lee, Mei-Hui Wang, Sheng-Chi Yang (Department of Computer Science and Information Engineering, National University of Tainan, Tainan, Taiwan); Pi-Hsia Hung, Su-Wei Lin (Department of Education, National University of Tainan, Taiwan); Nan Shuo, Naoyuki Kubota (Department of System Design, Tokyo Metropolitan University, Japan); Chun-Hsun Chou, Ping-Chiang Chou (Haifong Weiqi Academy, Taiwan); Chia-Hsiu Kao (Department of Computer Science and Information Engineering, National University of Tainan, Tainan, Taiwan).

Abstract: In this paper, we demonstrate the application of Fuzzy Markup Language (FML) to construct an FML-based Dynamic Assessment Agent (FDAA), and we present an FML-based Human–Machine Cooperative System (FHMCS) for the game of Go. The proposed FDAA comprises an intelligent decision-making and learning mechanism, an intelligent game bot, a proximal development agent, and an intelligent agent. The intelligent game bot is based on the open-source code of Facebook's Darkforest, and it features a representational state transfer (REST) application programming interface mechanism. The proximal development agent contains a dynamic assessment mechanism, a GoSocket mechanism, and an FML engine with a fuzzy knowledge base and rule base.
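The FML engine mentioned in the abstract is, at its core, a fuzzy knowledge base (membership functions for linguistic terms) plus a rule base evaluated by an inference engine. The paper's actual knowledge is written in FML, an XML dialect; the snippet below is only a generic, minimal fuzzy-inference sketch in plain Python, with invented input variables (territory_lead, thickness) and rule weights, meant to illustrate how such a rule base can turn raw game features into an assessment score.

```python
# Generic fuzzy knowledge base + rule base sketch (illustrative only; not FML syntax).
# Inputs are made-up features of a Go position; output is an assessment in (0, 1).

def tri(x, a, b, c):
    """Triangular membership function with peak at b (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def up_ramp(x, a, b):
    """0 below a, 1 above b, linear in between."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def down_ramp(x, a, b):
    return 1.0 - up_ramp(x, a, b)

# Knowledge base: linguistic terms for two (invented) input variables.
territory_terms = {
    "behind": lambda x: down_ramp(x, -10.0, 0.0),
    "even":   lambda x: tri(x, -10.0, 0.0, 10.0),
    "ahead":  lambda x: up_ramp(x, 0.0, 10.0),
}
thickness_terms = {
    "thin":  lambda x: down_ramp(x, 0.3, 0.7),
    "thick": lambda x: up_ramp(x, 0.3, 0.7),
}

# Rule base: IF territory is T AND thickness is K THEN assessment = constant.
# min() acts as the fuzzy AND; rule outputs are combined as a weighted average.
rules = [
    (("ahead", "thick"), 0.9), (("ahead", "thin"), 0.6),
    (("even", "thick"), 0.6),  (("even", "thin"), 0.4),
    (("behind", "thick"), 0.4), (("behind", "thin"), 0.1),
]

def assess(territory_lead, thickness):
    num = den = 0.0
    for (t, k), out in rules:
        w = min(territory_terms[t](territory_lead), thickness_terms[k](thickness))
        num += w * out
        den += w
    return num / den if den else 0.5

print(assess(territory_lead=8.0, thickness=0.7))  # ~0.84: favourable position
```

A Sugeno-style weighted average keeps the example short; the paper itself does not prescribe this particular inference scheme.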
Spy on Me #2 – Artistic Manoeuvres for the Digital Present
Spy on Me #2 – Artistic Manoeuvres for the Digital Present, 19.–29.3.2020 / HAU1, HAU2, HAU3.

We have arrived in the reality of digital transformation. Life with screens, apps and algorithms influences our behaviour, our attention and desires. Meanwhile the idea of the public space and of democracy is being reorganized by digital means. The festival "Spy on Me" goes into its second round after 2018, searching for manoeuvres for the digital present together with Berlin-based and international artists. Performances, interactive spatial installations and discursive events examine the complex effects of the digital transformation of society. In theatre, where live encounters are the focus, we come close to the intermediate spaces of digital life, searching for ways out of feeling powerless and overwhelmed, as currently experienced by many users of internet-based technologies. For it is not just in some future digital utopia, but here, in the midst of this present, that we have to deal with the basic conditions of living together as a society and of planetary survival. Are we at the edge of a great digital darkness or at the decisive turning point of perspective? A Festival by HAU Hebbel am Ufer. Funded by: Hauptstadtkulturfonds.

What is our relationship with alien consciousnesses? As we build rivals to human intelligence, James Bridle looks at our relationship with the planet's other alien consciousnesses. On 27 June 1835, two masters of the ancient Chinese game of Go faced off in a match which was the culmination of a years-long [...] DeepBlue defeated Garry Kasparov at chess, up to that point a game with Go-like status as a bastion of human imagination and mental [...] the surprise, strangeness and even horror that AlphaGo evokes will become a feature of more and more areas of our lives.
Computer Go: from the Beginnings to AlphaGo – Martin Müller, University of Alberta
2017. Outline of the talk: the game of Go; a short history of computer Go from the beginnings to AlphaGo; the science behind AlphaGo; the legacy of AlphaGo.

The game of Go: a classic two-player board game, invented in China thousands of years ago. Simple rules, complex strategy. Played by millions, with hundreds of top experts (professional players). Until 2016, computers were weaker than humans.

Go rules: start with an empty board; place stones of your own color; the goal is to surround empty points or opponent stones (capture); you win by controlling more than half the board (final score shown on a 9x9 board). Komi compensates for the first player's advantage.

Measuring Go strength: people in Europe and America use the traditional Japanese ranking system, with kyu (student) and dan (master) levels, and separate dan ranks for professional players. Kyu grades go down from 30 (absolute beginner) to 1 (best); dan grades go up from 1 (weakest) to about 6. There is also a numerical (Elo) system, e.g. 2500 = 5 dan.

Short history of computer Go. Beginnings: 1960s, initial ideas, designs on paper. 1970s: the first serious program, by Reitman & Wilcox, built from interviews with strong human players in an attempt to model human decision-making; level "advanced beginner", 15-20 kyu; one game cost thousands of dollars in computer time. 1980-89, the arrival of the PC: from 1980, personal computers arrive, many people get cheap access to computers and start writing Go programs; first competitions, Computer Olympiad, Ing Cup; level 10-15 kyu. 1990-2005: slow progress, commercial successes; 1990 Ing Cup in Beijing; 1993 Ing Cup in Chengdu; top programs Handtalk (Prof. [...]
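The kyu/dan ladder described above lines up with the numerical (Elo) scale roughly linearly. Assuming the common convention of about 100 Elo points per amateur grade, anchored to the slide's example of 2500 = 5 dan, a toy converter looks like the sketch below; it is an illustrative back-of-the-envelope mapping, not an official rating formula.

```python
def amateur_rank_to_elo(rank: str) -> int:
    """Rough Elo estimate for an amateur Go rank, assuming ~100 points per grade,
    anchored so that 1 kyu ~ 2000, 1 dan ~ 2100 and hence 5 dan ~ 2500.
    Purely illustrative; real rating systems are not this linear, and very weak
    kyu grades extrapolate below zero on this naive scale."""
    value, kind = rank.split()
    n = int(value)
    if kind.lower().startswith("k"):      # kyu grades count down towards 1
        return 2000 - (n - 1) * 100
    if kind.lower().startswith("d"):      # amateur dan grades count up from 1
        return 2000 + n * 100
    raise ValueError(f"unrecognised rank: {rank!r}")

for r in ["20 kyu", "10 kyu", "1 kyu", "1 dan", "5 dan"]:
    print(r, "~", amateur_rank_to_elo(r))
# 20 kyu ~ 100, 10 kyu ~ 1100, 1 kyu ~ 2000, 1 dan ~ 2100, 5 dan ~ 2500
```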
Computing Science (CMPUT) 455: Search, Knowledge, and Simulations
James Wright, Department of Computing Science, University of Alberta, [email protected], Winter 2021.

455 today, Lecture 22: AlphaGo, overview and early versions. Coursework: work on Assignment 4. Reading: the AlphaGo Zero paper.

AlphaGo introduction: high-level overview; history of DeepMind and AlphaGo; AlphaGo components and versions; performance measurements; games against humans; impact, limitations, other applications, future.

About DeepMind: founded in 2010 as a startup company; bought by Google in 2014; based in London, UK, with offices in Edmonton (from 2017), Montreal and Paris; expertise in reinforcement learning, deep learning and search.

DeepMind and AlphaGo: a DeepMind team developed AlphaGo in 2014-17. The result was a massive advance in the playing strength of Go programs. Before AlphaGo, programs were about 3 levels below the best humans; AlphaGo and AlphaZero far surpassed human skill in Go. AlphaGo is now retired, and many other super-strong programs exist, including open source; all are based on AlphaGo and AlphaZero ideas. (Image source: https://www.nature.com)

DeepMind and UAlberta: the University of Alberta has deep connections to DeepMind. Faculty who work part-time or on leave at DeepMind: Rich Sutton, Michael Bowling, Patrick Pilarski, Csaba Szepesvari (all part time). Many former students and postdocs work at DeepMind, including David Silver (UofA PhD, designer of AlphaGo, lead of the DeepMind RL and AlphaGo teams), Aja Huang (UofA postdoc, main AlphaGo programmer), and many from the computer poker group [...]
Human Vs. Computer Go: Review and Prospect
This article is accepted and will be published in IEEE Computational Intelligence Magazine in August 2016.

Chang-Shing Lee, Mei-Hui Wang (Department of Computer Science and Information Engineering, National University of Tainan, Taiwan); Shi-Jim Yen (Department of Computer Science and Information Engineering, National Dong Hwa University, Taiwan); Ting-Han Wei, I-Chen Wu (Department of Computer Science, National Chiao Tung University, Taiwan); Ping-Chiang Chou, Chun-Hsun Chou (Taiwan Go Association, Taiwan); Ming-Wan Wang (Nihon Ki-in Go Institute, Japan); Tai-Hsiung Yang (Haifong Weiqi Academy, Taiwan).

Abstract: The Google DeepMind challenge match in March 2016 was a historic achievement for computer Go development. This article discusses the development of computational intelligence (CI) and its relative strength in comparison with human intelligence for the game of Go. We first summarize the milestones achieved for computer Go from 1998 to 2016. Then, the computer Go programs that have participated in previous IEEE CIS competitions, as well as methods and techniques used in AlphaGo, are briefly introduced. Commentaries from three high-level professional Go players on the five AlphaGo versus Lee Sedol games are also included. We conclude that AlphaGo beating Lee Sedol is a huge achievement in artificial intelligence (AI) based largely on CI methods. In the future, powerful computer Go programs such as AlphaGo are expected to be instrumental in promoting Go education and AI real-world applications.

I. Computer Go Competitions. The IEEE Computational Intelligence Society (CIS) has funded human vs. computer Go competitions in IEEE CIS flagship conferences since 2009.
When Are We Done with Games?
Niels Justesen, Michael S. Debus, Sebastian Risi (IT University of Copenhagen, Copenhagen, Denmark).

Abstract: From an early point, games have been promoted as important challenges within the research field of Artificial Intelligence (AI). Recent developments in machine learning have allowed a few AI systems to win against top professionals in even the most challenging video games, including Dota 2 and StarCraft. It thus may seem that AI has now achieved all of the long-standing goals that were set forth by the research community. In this paper, we introduce a black-box approach that provides a pragmatic way of evaluating the fairness of AI vs. human competitions, by only considering motoric and perceptual fairness on the competitors' side. Additionally, we introduce the notion of extrinsic and intrinsic factors of a game competition and apply these to discuss and compare the competitions in relation to human vs. [...]

[...] designed to erase particular elements of unfairness within the game, the players, or their environments. We take a black-box approach that ignores some dimensions of fairness, such as learning speed and prior knowledge, focusing only on perceptual and motoric fairness. Additionally, we introduce the notions of game extrinsic factors, such as the competition format and rules, and game intrinsic factors, such as different mechanical systems and configurations within one game. We apply these terms to critically review the aforementioned AI achievements and observe that game extrinsic factors are rarely discussed in this context, and that game intrinsic factors [...]
Lee Sedol 9P+
The Match. Leo Dorst.

The first match (Nature, January 2016): AlphaGo vs Fan Hui 2p, 5-0! (played October 2015).

Weaknesses of AlphaGo, the professional view (about October 2015): too soft; follows patterns, always mimicking; does not understand concepts like the value of sente; no understanding of complicated moves with delayed consequences. Myungwan Kim 9p's conclusion: it needs lessons from humans!

(Figure from the Nature paper: a rank scale from beginner upward, marking Leo Dorst 1k, the Dutch champion 6d, Fan Hui 2p, Guo Juan 5p and Lee Sedol 9p+, with AlphaGo in October 2015 at roughly 4p+?)

The Match: AlphaGo vs Lee Sedol 9p. Lee Sedol 9p digitally meets Demis Hassabis, CEO.

Lee Sedol 9p(+): 33 years old, a professional since he was 12; among the top 5 in the world (now, still); invents new joseki moves in important games; played one of history's most original games (the 'broken ladder' game, sedol_laddergame.sgf); very good reading skills, for a human; can manage a game, and himself, very well. But: he knew that there was 1M$ at stake, and he knew that he was playing a program.

(Dutch) Go players' expectations of the result: 5-0, 4-1, 3-2, 2-3, 1-4, 0-5. Source: Bob van den Hoek, http://deeplearningskysthelimit.blogspot.nl/

The Match: AlphaGo vs Lee Sedol. Nice detail: Fan Hui was the referee! Game 1, B: Lee Sedol, W: AlphaGo; move 102 was dubbed 'superhuman' in the press; Black resigns! Hassabis' tweet after Game 1. Lee Sedol: "I am in shock [but] I do not regret accepting this challenge. I failed in the opening..." Game 2, W: Lee Sedol, B: AlphaGo; move 37 (a 5th-line shoulder hit) was 'very surprising'; White resigns! Game 3: Lee Sedol (B) resigns.
Go Champ Recalls Defeat at Hands of 'Calm' Computer 8 March 2016
Last October, Fan Hui was beaten by a computer at the ancient board game of Go that is not only his passion but also his life's work. This week, the 35-year-old European champ will referee as his vanquisher, Google's AlphaGo programme, faces the world's human Number One in a battle for Go supremacy. "I was the first professional Go player to be beaten by a computer programme. It was hard," Fan told AFP ahead of AlphaGo's five-day duel against Go world champion Lee Se-dol.

[...] progress: it shows that a machine can execute a certain "intellectual" task better than the humans who created it. Fan is being held to secrecy about AlphaGo's playing style and his expectations for the outcome of the match. But he did insinuate that Lee will have his work cut out for him if he wants to take home the $1-million (908,000-euro) prize money. "He will face a machine that is much stronger than the one that played against me," said the Chinese-born, Bordeaux-based Go teacher. Go is something of a Holy Grail for AI developers, as the ancient Chinese board game is arguably more complex than chess. What makes AlphaGo special is that it is partly self-taught, playing millions of games against itself after its initial programming to hone its tactics through trial and error.

(Photo caption: The marathon match, to be played over five days, is seen as a test of how far Artificial Intelligence (AI) has advanced.)
About Go, the Ancient Game in Which AI Bested a Master 10 March 2016, by Youkyung Lee
The rules of Go, the ancient Chinese board game that is the stage for a man-vs-machine battle this week, are beautifully simple. Actually playing it is anything but. Go is considered to be far more complex than chess, making it remarkable that a cutting-edge computer program by Google has swept the first two games of a five-game match against a human Go champion, Lee Sedol.

(Photo caption: South Korean professional Go player Lee Sedol, right, appears on the screen during the second match of the Google DeepMind Challenge Match against Google's artificial intelligence program, AlphaGo, at the media room in Seoul, South Korea, Thursday, March 10, 2016. Google's computer program AlphaGo defeated its human opponent, South Korean Go champion Lee Sedol, on Wednesday in the first face-off of a historic five-game match. AP Photo/Lee Jin-man)

WHAT IS GO? In Go, also known as baduk in Korean and weiqi in Chinese, two players take turns putting black or white stones on a 19-by-19 square grid. The goal: to put more territory under one's control by surrounding vacant areas with the stones. Many Asian people see it as not just a game but also a reflection of life, one's personality and temper. There are nearly infinite ways to play Go, and each player has his or her distinctive style. That's partly the reason the game has been particularly difficult for artificial intelligence to master.
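The territory rule in the explainer above can be made concrete with a short scoring routine. The sketch below uses simplified area (Chinese-style) counting on a small, already-finished position: stones count for their colour, and an empty region counts for a colour only if it borders that colour alone. It deliberately ignores captures, komi and life-and-death, so it illustrates the counting idea rather than implementing the full rules.

```python
# Simplified area scoring for a finished Go position (illustrative only).
# 'B' = black stone, 'W' = white stone, '.' = empty point.

def area_score(board):
    rows, cols = len(board), len(board[0])
    score = {"B": 0, "W": 0}
    seen = set()

    for r in range(rows):
        for c in range(cols):
            p = board[r][c]
            if p in score:                       # stones count for their colour
                score[p] += 1
            elif (r, c) not in seen:             # flood-fill an empty region
                region, borders, stack = [], set(), [(r, c)]
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols:
                            q = board[ny][nx]
                            if q == "." and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                            elif q in score:
                                borders.add(q)
                if len(borders) == 1:            # surrounded by a single colour
                    score[borders.pop()] += len(region)
    return score

# A toy 5x5 "finished" position: black walls off the left side, white the right.
board = [
    "..BW.",
    "..BW.",
    "..BW.",
    "..BW.",
    "..BW.",
]
print(area_score(board))   # {'B': 15, 'W': 10}: black controls 15 of 25 points
```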
Mastering the Game of Go Without Human Knowledge
David Silver*, Julian Schrittwieser*, Karen Simonyan*, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis. DeepMind, 5 New Street Square, London EC4A 3TW. *These authors contributed equally to this work.

Abstract: A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here, we introduce an algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of tree search, resulting in higher-quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.

Much progress towards artificial intelligence has been made using supervised learning systems that are trained to replicate the decisions of human experts [1-4]. However, expert data is often expensive, unreliable, or simply unavailable. Even when reliable data is available, it may impose a ceiling on the performance of systems trained in this manner [5].
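The abstract compresses the training loop into a few sentences: the current network guides search, the search results become training targets, and the retrained network guides the next round of self-play. The toy program below is a deliberately small caricature of that loop on tic-tac-toe, written only to show the flow of (state, search policy, outcome) triples; a lookup table stands in for the deep network and a one-ply lookahead stands in for Monte Carlo tree search, so none of this reflects DeepMind's actual architecture or code.

```python
# Toy caricature of the AlphaGo Zero training loop, on tic-tac-toe.
# A lookup table stands in for the deep network and a one-ply lookahead stands
# in for MCTS; only the overall data flow mirrors the paper. Illustrative only.
import math
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(s):                  # +1 or -1 if that side has three in a row, else 0
    for a, b, c in LINES:
        if s[a] != 0 and s[a] == s[b] == s[c]:
            return s[a]
    return 0

def legal(s):
    return [i for i in range(9) if s[i] == 0]

def canon(s, player):           # board as seen by the side to move
    return tuple(v * player for v in s)

# The "network": a policy prior and a value estimate per canonical state (tabular).
policy_table = defaultdict(lambda: [1.0 / 9] * 9)
value_table = defaultdict(float)

def search_policy(s, player, tau=1.0):
    """Cheap stand-in for tree search: score each move by an immediate win or by
    the (negated) network value of the child, plus a bonus from the policy prior,
    then soften the scores into an improved policy pi."""
    prior = policy_table[canon(s, player)]
    scores = {}
    for m in legal(s):
        child = list(s); child[m] = player
        q = 1.0 if winner(child) == player else -value_table[canon(child, -player)]
        scores[m] = q + 0.5 * prior[m]
    mx = max(scores.values())
    exps = {m: math.exp((v - mx) / tau) for m, v in scores.items()}
    total = sum(exps.values())
    pi = [0.0] * 9
    for m, e in exps.items():
        pi[m] = e / total
    return pi

def self_play_game():
    """Play one game with the current 'network'; return (state, pi, z) triples."""
    s, player, history = [0] * 9, 1, []
    while legal(s) and winner(s) == 0:
        pi = search_policy(s, player)
        history.append((canon(s, player), pi, player))
        move = random.choices(range(9), weights=pi)[0]
        s[move] = player
        player = -player
    z = winner(s)
    return [(state, pi, z * who) for state, pi, who in history]

def train(examples, lr=0.2):
    """Nudge the tabular 'network' towards the search policies and game outcomes."""
    for state, pi, z in examples:
        old = policy_table[state]
        policy_table[state] = [(1 - lr) * a + lr * b for a, b in zip(old, pi)]
        value_table[state] += lr * (z - value_table[state])

for iteration in range(200):    # self-play -> train -> stronger self-play
    batch = [ex for _ in range(10) for ex in self_play_game()]
    train(batch)

print("learned value of the empty board, first player to move:",
      round(value_table[canon([0] * 9, 1)], 2))
```

Even this caricature shows the self-reinforcing structure the paper describes: each iteration's search targets are produced with the help of the parameters learned in the previous iteration.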