Graphical Model II

Total Pages: 16

File Type: PDF, Size: 1020 KB

MACHINE LEARNING
Vasant Honavar
Artificial Intelligence Research Laboratory, Department of Computer Science
Bioinformatics and Computational Biology Program
Center for Computational Intelligence, Learning, & Discovery
Iowa State University
[email protected]
www.cs.iastate.edu/~honavar/
www.cild.iastate.edu/
Copyright Vasant Honavar, 2006.

Learning Bayesian networks

Data together with prior information are used to induce both the structure and the parameters of a Bayesian network. In the running alarm example, Earthquake (E) and Burglary (B) are parents of Alarm (A), and the learned parameters include the conditional probability table P(A | E, B):

  E    B     P(a | E, B)   P(¬a | E, B)
  e    b     0.9           0.1
  e    ¬b    0.2           0.8
  ¬e   b     0.9           0.1
  ¬e   ¬b    0.01          0.99

The Learning Problem

                      Known Structure                           Unknown Structure
  Complete Data       Statistical parametric estimation         Discrete optimization over structures
                      (closed-form equations)                   (discrete search)
  Incomplete Data     Parametric optimization                   Combined methods
                      (EM, gradient descent, ...)               (Structural EM, mixture models, ...)

The four cases are illustrated on the alarm network. With complete data (records over E, B, A such as <Y,N,N>, <Y,Y,Y>, <N,N,Y>, <N,Y,Y>, ...) and a known structure, only the CPT entries such as P(A | E, B) need to be estimated. With unknown structure, the arcs themselves must also be recovered from the data. With incomplete data (records containing missing values, such as <Y,N,?>, <Y,?,Y>, <?,Y,Y>, <N,?,Y>), the same two problems must be solved in the presence of missing entries.
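To make the complete-data, known-structure case (closed-form estimation) concrete, the following is a minimal sketch, not taken from the slides: it estimates P(A | E, B) from fully observed records by counting. The record format <E, B, A> and the Y/N values follow the alarm example above; the data themselves are hypothetical.

```python
from collections import Counter

# Hypothetical fully observed records, one per row, in the order <E, B, A>;
# 'Y'/'N' mean the event did / did not occur.
records = [
    ('Y', 'N', 'N'),
    ('Y', 'Y', 'Y'),
    ('N', 'N', 'Y'),
    ('N', 'Y', 'Y'),
    ('N', 'Y', 'Y'),
]

# Sufficient statistics: N(x_i, pa_i) and N(pa_i) for node A with parents E, B.
joint_counts = Counter((e, b, a) for e, b, a in records)
parent_counts = Counter((e, b) for e, b, _ in records)

def mle_cpt(joint_counts, parent_counts, values=('Y', 'N')):
    """Closed-form maximum likelihood estimate of P(A | E, B)."""
    cpt = {}
    for (e, b), n_parent in parent_counts.items():
        for a in values:
            cpt[(a, e, b)] = joint_counts[(e, b, a)] / n_parent
    return cpt

cpt = mle_cpt(joint_counts, parent_counts)
print(cpt[('Y', 'N', 'Y')])   # MLE of P(A = Y | E = N, B = Y); here 2/2 = 1.0
```

The counts N(x_i, pa_i) and N(pa_i) computed here are exactly the sufficient statistics that the Bayesian estimate and the summary slides below refer to.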
Learning Bayesian Networks: roadmap
• Parameter learning from complete data (review): statistical parametric fitting, maximum likelihood estimation, Bayesian inference
• Parameter learning from incomplete data
• Structure learning from complete data
• Application: classification
• Structure learning from incomplete data

Estimating probabilities from data (discrete case)
• Maximum likelihood estimation
• Bayesian estimation
• Maximum a posteriori estimation

Bayesian estimation
• Treat the unknown parameters as random variables
• Assume a prior distribution over the unknown parameters
• Update the distribution of the parameters based on the data
• Use Bayes rule to make predictions

Bayesian Networks and Bayesian Prediction
In plate notation, the parameters θX and θY|X appear as nodes whose children are the observed instances X[1], ..., X[M], Y[1], ..., Y[M] and the query instance X[M+1], Y[M+1].
• The priors for the different parameter groups are independent
• The data instances are independent given the unknown parameters
• These independence properties can be "read" directly from the network: complete data ⇒ the posteriors on the parameters are independent

Bayesian Prediction (cont.)
• Since the posteriors on the parameters of different nodes are independent, they can be computed separately
• The posteriors on the parameters within a node are also independent: in the refined model, θY|X splits into θY|X=0 and θY|X=1
• Complete data ⇒ the posteriors on θY|X=0 and θY|X=1 are independent

Bayesian Prediction
• Given these observations, the posterior for each multinomial θ_{X_i | pa_i} can be computed independently
• The posterior is Dirichlet with parameters α(X_i = 1 | pa_i) + N(X_i = 1 | pa_i), ..., α(X_i = k | pa_i) + N(X_i = k | pa_i)
• The predictive distribution is then represented by the parameters

  \tilde{\theta}_{x_i \mid pa_i} = \frac{\alpha(x_i, pa_i) + N(x_i, pa_i)}{\alpha(pa_i) + N(pa_i)}

Assigning Priors for Bayesian Networks
• We need the hyperparameters α(x_i, pa_i) for each node X_i
• We can use an initial set of parameters Θ0 as prior information, together with an equivalent sample size parameter M0
• Then set α(x_i, pa_i) = M0 · P(x_i, pa_i | Θ0)
• This allows a network to be updated in response to new data
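A minimal sketch of the Bayesian (Dirichlet) estimate described above, under stated assumptions: the prior network is represented simply as a dictionary of joint probabilities P(x_i, pa_i | Θ0), the equivalent sample size M0 is a plain number, and the hypothetical records follow the <E, B, A> format of the earlier example.

```python
from collections import Counter

# Hypothetical fully observed records in the order <E, B, A>.
records = [('Y', 'N', 'N'), ('Y', 'Y', 'Y'), ('N', 'N', 'Y'),
           ('N', 'Y', 'Y'), ('N', 'Y', 'Y')]
joint_counts = Counter((e, b, a) for e, b, a in records)    # N(x_i, pa_i)
parent_counts = Counter((e, b) for e, b, _ in records)      # N(pa_i)

def dirichlet_cpt(joint_counts, parent_counts, prior_joint, m0, values=('Y', 'N')):
    """Bayesian (Dirichlet) estimate of P(A | E, B).

    prior_joint[(a, e, b)] plays the role of P(x_i, pa_i | Theta_0), and m0 is the
    equivalent sample size M0, so alpha(x_i, pa_i) = m0 * prior_joint[(a, e, b)].
    """
    cpt = {}
    for (e, b), n_parent in parent_counts.items():
        # alpha(pa_i) is obtained by summing alpha(x_i, pa_i) over the values of A.
        alpha_parent = sum(m0 * prior_joint[(a, e, b)] for a in values)
        for a in values:
            alpha = m0 * prior_joint[(a, e, b)]
            n = joint_counts[(e, b, a)]
            cpt[(a, e, b)] = (alpha + n) / (alpha_parent + n_parent)
    return cpt

# Illustrative uniform prior network: all eight (A, E, B) configurations equally likely.
uniform_prior = {(a, e, b): 1.0 / 8 for a in 'YN' for e in 'YN' for b in 'YN'}
bayes_cpt = dirichlet_cpt(joint_counts, parent_counts, uniform_prior, m0=4)
print(bayes_cpt[('Y', 'N', 'Y')])  # smoothed estimate of P(A = Y | E = N, B = Y)
```

With the counts above this prints (0.5 + 2) / (1 + 2) ≈ 0.83 rather than the maximum likelihood value 1.0, illustrating how the prior pulls estimates away from extremes when data are scarce.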
Learning Parameters
• To compare the true distribution P(x) with the learned distribution Q(x), measure their KL divergence:

  KL(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}

• KL(P || Q) ≥ 0
• KL(P || Q) = 0 if and only if P and Q are equal

Learning Parameters: Summary
• Estimation relies on sufficient statistics; for multinomials these are counts of the form N(x_i, pa_i)
• Parameter estimation:

  \hat{\theta}_{x_i \mid pa_i} = \frac{N(x_i, pa_i)}{N(pa_i)}   (MLE)
  \tilde{\theta}_{x_i \mid pa_i} = \frac{\alpha(x_i, pa_i) + N(x_i, pa_i)}{\alpha(pa_i) + N(pa_i)}   (Bayesian, Dirichlet prior)

• Bayesian methods also require a choice of priors
• The MLE and Bayesian estimates are asymptotically equivalent and consistent, but the Bayesian estimates work better with small samples
• Both can be implemented in an online manner by accumulating sufficient statistics

Why do we need accurate structure?
In the true network, Earthquake and Burglary are parents of Alarm Set, which in turn is the parent of Sound.
• Missing an arc: the error cannot be compensated for by fitting the remaining parameters, and incorrect independence assumptions are introduced
• Extraneous arc: the number of parameters to be estimated increases, and incorrect independence assumptions are introduced

Approaches to BN Structure Learning
• Score-based methods: assign a score to each candidate BN structure using a suitable scoring function, then search the space of candidate structures for one with maximum score
• Independence-testing-based methods: use statistical independence tests to determine the structure of the network

Score-based BN Structure Learning
• Define a scoring function that evaluates how well a structure matches the data; in the alarm example, candidate structures over E, B, and A that differ in which arcs are present are scored against records such as <Y,N,N>, <Y,Y,Y>, <N,N,Y>, <N,Y,Y>, ...
• Search for a structure that maximizes the score

Need for parsimony
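The deck closes by pointing at the need for parsimony without committing to a particular scoring function. As an illustration only, the sketch below uses the BIC score, one common choice that trades off the data log-likelihood against the number of free parameters; the candidate structures, record format, and helper code are assumptions of this example rather than anything specified in the slides.

```python
import math
from collections import Counter

# Hypothetical fully observed records over (E, B, A), as in the alarm example.
records = [('Y', 'N', 'N'), ('Y', 'Y', 'Y'), ('N', 'N', 'Y'),
           ('N', 'Y', 'Y'), ('N', 'Y', 'Y')]
VARS = ('E', 'B', 'A')
VALUES = ('Y', 'N')

def bic_score(structure, records):
    """BIC score of a candidate structure, given as {node: (parent, ...)}.

    score = sum over nodes of [ log-likelihood of the node given its parents ]
            - (log M / 2) * number of free parameters
    """
    m = len(records)
    index = {v: i for i, v in enumerate(VARS)}
    score = 0.0
    for node, parents in structure.items():
        # Sufficient statistics N(x_i, pa_i) and N(pa_i) for this family.
        joint = Counter()
        parent_counts = Counter()
        for rec in records:
            pa = tuple(rec[index[p]] for p in parents)
            joint[(rec[index[node]], pa)] += 1
            parent_counts[pa] += 1
        # Log-likelihood term using the MLE parameters.
        for (_, pa), n in joint.items():
            score += n * math.log(n / parent_counts[pa])
        # Parsimony penalty: (|values| - 1) free parameters per parent configuration.
        n_params = (len(VALUES) - 1) * (len(VALUES) ** len(parents))
        score -= 0.5 * math.log(m) * n_params
    return score

# Compare two candidate structures for the alarm data.
no_arcs  = {'E': (), 'B': (), 'A': ()}
e_b_to_a = {'E': (), 'B': (), 'A': ('E', 'B')}
for name, s in [('no arcs', no_arcs), ('E,B -> A', e_b_to_a)]:
    print(name, round(bic_score(s, records), 3))
```

Which structure wins depends on how the likelihood gain from the extra arcs compares with the penalty; since the penalty grows only logarithmically with the sample size, genuine dependencies eventually dominate as more data arrive, while spurious arcs are discouraged on small samples.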