AI Ignition: Ignite your AI curiosity with Greg Brockman

How do we take responsibility for the varied uses of AI? From Deloitte's AI Institute, this is AI Ignition, a monthly chat about the human side of artificial intelligence with your host, Beena Ammanath. We will take a deep dive into the past, present, and future of AI, machine learning, neural networks, and other cutting-edge technologies. Here is your host, Beena.

Beena Ammanath (Beena): Hi, my name is Beena Ammanath and I am the executive director for the Deloitte AI Institute, and today on AI Ignition we have Greg Brockman, the co-founder and CTO of OpenAI, a research organization that aims to ensure that artificial general intelligence benefits all of humanity. Welcome to the show, Greg. We are so excited to have you. Greg, I am so happy to have you on AI Ignition today. Thank you so much for joining us.

Greg: Thank you for having me. Excited to be here.

Beena: So, let's start with the current status of AI. AI is such a broad topic, you hear so much in the news about it, and everybody knows what you are doing at OpenAI. What do you think is the current status of AI today?

Greg: So, I think AI is definitely a field in transition. I mean, if you look at the history of the field, it has gotten a reputation as the field of empty promises. You look back to the 50s, and people were trying to figure out, hey, maybe the hardest problem is chess, and if we can solve chess, then we are going to solve all of AI and we'll have an artificial human and it will be great. In the 90s, chess came and went, and it didn't really feel like our lives changed at all. I think that 2012 was really the moment that things started to transition, with the creation of AlexNet, which was the first convolutional network on GPU that was able to just meaningfully outperform everything else. It was the first time that all of those systems where you, a human, think hard about a domain and write down all the rules just paled in comparison to what you could do by having a neural net, an architecture that can absorb whatever data you show it, and suddenly that was the best way of doing things in image recognition. And everyone kind of looked at it and said, "Okay, that's a cool trick, but it's never going to work for language. It's never going to work for machine translation." And it kind of has. Basically, the whole past decade has been: for all of these areas that felt like machines had never really been able to touch them, things like language, things like understanding what's in a video, we are starting to be able to crack them, and the way we crack them is all with the same piece of technology. It's just a neural net. So, that's the background, that's where we have been. And I think GPT-3 is also a moment that shows a sea change in terms of what the tech can do. This is the first time where you take one of these big neural nets, basically just an architecture you dump all this data into, and suddenly it's actually economically useful. People actually want to use our API to just talk to our model. Some people do it for fun, but many, many people are building businesses on top of it.
You have various applications, ranging from assistants that help you write better copy, where we actually have had a number of users run pretty rigorous A/B tests of GPT-3 versus human copywriters and found that GPT-3 does pretty darn well. You have people who use it for games, things like AI Dungeon, where you have an infinite dungeon, a D&D-style game, that is generated by this model. And there is a huge range of other applications, such as people who are trying to help tenants who receive eviction notices or other legal proceedings in very complex language, summarizing them and pointing out the important points for people who don't have access to their own legal counsel. I think there is this huge variety of different things people can do once you have a machine that can start to actually interact, to meet humans in the language we all operate in. So, I think it's this exciting moment where you can actually train one of these networks and they are actually economically useful.

Beena: Yeah, so true, and I am going to date myself here, because I studied AI back in the late 80s and early 90s, and at that time it was mostly theory. We didn't have access to massive amounts of data or the compute power that was needed. So, it was mostly what was written in the books and ideas around it. I am not kidding, but back then even personalized marketing, which is so prevalent now, was considered to be too complex for an AI or any kind of software to do.

Greg: But I also think that all of that work laid a really good foundation. There is always this joke that everything was invented by Schmidhuber in the 90s, and it's not entirely false. There are a couple of ideas out there now that are kind of totally new and unique, partly because we saw that when you build things in a certain way, with this particular hardware architecture of GPUs, doing massive parallelism in this particular way is useful, and that is where the transformer comes from. But most of it, I think, is that we basically have 70 years' worth of people thinking super hard about the same neural net piece of technology we're using, and so we can build on the shoulders of giants.

Beena: That is so fascinating, because I think in a way we were unencumbered by restrictions. So, it was just ideas we were spouting: what if this was real, what are all the possibilities. That's a great point. I never thought of it that way.

Greg: The thing that I find most interesting about this field is that it doesn't comport with the way we think science should go. And you can really look back to the history of these debates between the connectionists, the people who basically said, "Hey, these neural nets are kind of, if you squint, a model of the information processing of the brain," and the symbolic systems people, who said, "We humans are really smart, we know how systems work, let's just write down all the rules and try to really engineer a system that operates like we do." And there were these great debates over which one was going to win, and it really looked in the 60s like these connectionist people were a bunch of hacks. There was really this whole campaign waged, with books like Perceptrons and things like that, to discredit all of them, and it really worked. And somehow the connectionist school of thought has just been this steady rising tide.
We see that there is this smooth, continuous curve of increasing progress that has lasted since 2012, but we actually did a study where we extended it backwards, and we found that there has been a smooth exponential in the performance of these networks over decades now. I think that points to something really real that's going on, and it's just so fascinating.

Beena: Yes. Now you are looking at core AI. How do you see AI creating new opportunities across industries?

Greg: I think that this is really the most important question. When we think about AI, I actually think AI has a pretty bad brand. If you ask someone on the street what they think of AI, you are probably going to get some reference to a Hollywood movie, or something like worrying about how it will impact jobs or people's lives, and I think all of that is fair. I think it is actually really important to confront the downsides, but the reason we work on it, the reason we are so excited to build it, the reason that we have OpenAI as a whole company dedicated to trying to build advanced AI, is because we see the promise that it can bring. I think that the real goal is to be able to build systems that can help us solve problems that are otherwise out of reach. Think about healthcare. I have a very close friend who has a very rare medical condition, and every time he talks to a new specialist, they always bounce him to another specialist, and it's a real problem. He is often bounced back to the same person, and they are like, "Oh well, there is this thing that they didn't consider." Imagine if you just had one doctor who understood the whole extent of human medicine.