Mining Airfare Data to Minimize Ticket Purchase Price


To Buy or Not to Buy: Mining Airfare Data to Minimize Ticket Purchase Price

Oren Etzioni, Dept. of Computer Science, University of Washington, Seattle, Washington 98195
Craig A. Knoblock, Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292
Rattapoom Tuchinda, Dept. of Computer Science, University of Southern California, Los Angeles, CA 90089
Alexander Yates, Dept. of Computer Science, University of Washington, Seattle, Washington 98195

ABSTRACT

As product prices become increasingly available on the World Wide Web, consumers attempt to understand how corporations vary these prices over time. However, corporations change prices based on proprietary algorithms and hidden variables (e.g., the number of unsold seats on a flight). Is it possible to develop data mining techniques that will enable consumers to predict price changes under these conditions?

This paper reports on a pilot study in the domain of airline ticket prices where we recorded over 12,000 price observations over a 41 day period. When trained on this data, Hamlet, our multi-strategy data mining algorithm, generated a predictive model that saved 341 simulated passengers $198,074 by advising them when to buy and when to postpone ticket purchases. Remarkably, a clairvoyant algorithm with complete knowledge of future prices could save at most $320,572 in our simulation; thus Hamlet's savings were 61.8% of optimal. The algorithm's savings of $198,074 represents an average savings of 23.8% for the 341 passengers for whom savings are possible. Overall, Hamlet saved 4.4% of the ticket price averaged over the entire set of 4,488 simulated passengers. Our pilot study suggests that mining of price data available over the web has the potential to save consumers substantial sums of money per annum.

Categories and Subject Descriptors

I.2.6 [Artificial Intelligence]: Learning

Keywords

price mining, Internet, web mining, airline price prediction
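The dollar figures in the abstract fit together as simple ratios. As a quick sanity check (an illustration added for this summary, not material from the paper), the Python snippet below recomputes the share-of-optimal savings and the average saving per advised passenger from the amounts quoted above.

    # Arithmetic check of the savings figures quoted in the abstract.
    # All dollar amounts and counts are taken directly from the paper.
    hamlet_savings = 198_074     # total saved by following Hamlet's buy/wait advice ($)
    optimal_savings = 320_572    # savings of a clairvoyant policy that knows future prices ($)
    advised_passengers = 341     # simulated passengers for whom any savings were possible

    # Hamlet's savings as a share of the best achievable savings: ~61.8%
    print(f"{hamlet_savings / optimal_savings:.1%}")

    # Average saving per passenger for whom savings were possible: ~$581
    print(f"${hamlet_savings / advised_passengers:,.0f}")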
1. INTRODUCTION AND MOTIVATION

Corporations often use complex policies to vary product prices over time. The airline industry is one of the most sophisticated in its use of dynamic pricing strategies in an attempt to maximize its revenue. Airlines have many fare classes for seats on the same flight, use different sales channels (e.g., travel agents, priceline.com, consolidators), and frequently vary the price per seat over time based on a slew of factors including seasonality, availability of seats, competitive moves by other airlines, and more. The airlines are said to use proprietary software to compute ticket prices on any given day, but the algorithms used are jealously guarded trade secrets [19]. Hotels, rental car agencies, and other vendors with a "standing" inventory are increasingly using similar techniques.

As product prices become increasingly available on the World Wide Web, consumers have the opportunity to become more sophisticated shoppers. They are able to comparison shop efficiently and to track prices over time; they can attempt to identify pricing patterns and rush or delay purchases based on anticipated price changes (e.g., "I'll wait to buy because they always have a big sale in the spring..."). In this paper we describe the use of data mining methods to help consumers with this task.

We report on a pilot study in the domain of airfares where an automatically learned model, based on price information available on the Web, was able to save consumers a substantial sum of money in simulation. The paper addresses the following central questions:

• What is the behavior of airline ticket prices over time? Do airfares change frequently? Do they move in small increments or in large jumps? Do they tend to go up or down over time? Our pilot study enables us to begin to characterize the complex behavior of airfares.

• What data mining methods are able to detect patterns in price data? In this paper we consider reinforcement learning, rule learning, time series methods, and combinations of the above.

• Can Web price tracking coupled with data mining save consumers money in practice? Vendors vary prices based on numerous variables whose values are not available on the Web. For example, an airline may discount seats on a flight if the number of unsold seats, on a particular date, is high relative to the airline's model. However, consumers do not have access to the airline's model or to the number of available seats on the flights. Thus, a priori, price changes could appear to be unpredictable to a consumer tracking prices over the Web. In fact, we have found price changes to be surprisingly predictable in some cases.

The remainder of this paper is organized as follows. Section 2 describes our data collection mechanism and analyzes the basic characteristics of airline pricing in our data. Section 3 considers related work in the areas of computational finance and time series analysis. Section 4 introduces our data mining methods and describes how each method was tailored to our domain. We investigated rule learning [8], Q-learning [25], moving average models [13], and the combination of these methods via stacked generalization [28]. Next, Section 5 describes our simulation and the performance of each of the methods on our test data. The section also reports on a sensitivity analysis to assess the robustness of our results to changes in the simulation. We conclude with a discussion of future work and a summary of the paper's contributions.

2. DATA COLLECTION

We collected airfare data directly from a major travel web site. In order to extract the large amount of data required for our machine learning algorithms, we built a flight data collection agent that runs at a scheduled interval, extracts the pricing data, and stores the result in a database.

We built our flight data collection agent using AgentBuilder for wrapping web sites and Theseus for executing the agent [3]. AgentBuilder exploits machine learning technology [15] that enables the system to automatically learn extraction rules that reliably convert information presented on web pages into XML. Once the system has learned the extraction rules, AgentBuilder compiles this into a Theseus plan. Theseus is a streaming dataflow execution system that supports highly optimized execution of a plan in a network environment. The system maximizes the parallelism across different operators and streams data between operations to support the efficient execution of plans with complex navigation paths and extraction from multiple pages.

For the purpose of our pilot study, we restricted our [...] for each departure date was collected 8 times a day. Overall, we collected over 12,000 fare observations over a 41 day period for six different airlines including American, United, etc. We used three-hour intervals to limit the number of http requests to the web site. For each flight, we recorded the lowest fare available for an economy ticket. We also recorded when economy tickets were no longer available; we refer to such flights as sold out.
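The paper's collection agent was built with AgentBuilder and Theseus rather than written by hand, and its code is not given. Purely to make the collection loop above concrete, here is a minimal Python sketch under assumed names: fetch_lowest_economy_fare stands in for the learned wrapper, the SQLite schema is invented for illustration, and the three-hour polling interval mirrors the eight-observations-per-day schedule described in the text.

    import random
    import sqlite3
    import time
    from datetime import datetime, timezone

    POLL_SECONDS = 3 * 60 * 60  # roughly 8 observations per day, as in the paper

    def fetch_lowest_economy_fare(origin, destination, departure_date):
        """Placeholder for the real wrapper (learned AgentBuilder extraction rules
        executed by Theseus). Returns a fake lowest economy fare in dollars, or
        None when economy seats are 'sold out', so the sketch runs end to end."""
        return None if random.random() < 0.02 else round(random.uniform(275, 2524), 2)

    def record_observation(db, origin, destination, departure_date):
        # Record one fare observation; a None fare is stored as a sold-out flag.
        fare = fetch_lowest_economy_fare(origin, destination, departure_date)
        db.execute(
            "INSERT INTO fares(observed_at, origin, destination, departure_date, fare, sold_out) "
            "VALUES (?, ?, ?, ?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), origin, destination,
             departure_date, fare, fare is None),
        )
        db.commit()

    def main():
        db = sqlite3.connect("fares.db")
        db.execute(
            "CREATE TABLE IF NOT EXISTS fares("
            "observed_at TEXT, origin TEXT, destination TEXT, "
            "departure_date TEXT, fare REAL, sold_out INTEGER)"
        )
        routes = [("LAX", "BOS"), ("SEA", "IAD")]  # the two routes in Tables 1 and 2
        while True:
            for origin, destination in routes:
                record_observation(db, origin, destination, "2003-07-15")  # example departure date
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        main()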
2.1 Pricing Behavior in Our Data

We found that the price of tickets on a particular flight can change as often as seven times in a single day. We categorize price changes into two types: dependent price changes and independent price changes. Dependent changes occur when prices of similar flights (i.e. having the same origin and destination) from the same airline change at the same time. This type of change can happen as often as once or twice a day when airlines adjust their prices to maximize their overall revenue or "yield". Independent changes occur when the price of a particular flight changes independently of similar flights from the same airline. We speculate that this type of change results from the change in the seat availability of the particular flight.

Table 1 shows the average number of changes per flight aggregated over all airlines for each route. Overall, 762 price changes occurred across all the flights in our data. 63% of the changes can be classified as dependent changes based on the behavior of other flights by the same airline.

    Route   | Avg. number of price changes
    LAX-BOS | 6.8
    SEA-IAD | 5.4

Table 1: Average number of price changes per route.

We found that the round-trip ticket price for flights can vary significantly over time. Table 2 shows the minimum price, maximum price, and the maximum difference in prices that can occur for flights on each route.

    Route   | Min Price ($) | Max Price ($) | Max Price Change ($)
    LAX-BOS | 275           | 2524          | 2249
    SEA-IAD | 281           | 1668          | 1387

Table 2: Minimum price, maximum price, and maximum change in ticket price per route. All prices in this paper refer to the lowest economy airfare available for purchase.

For many flights there are easily discernible price tiers where ticket prices fall into a relatively small price range. The number of tiers typically varies from two to four, depending on the airline and the particular flight. Even flights from the same airline with the same schedule (but with different departure dates) can have different numbers of tiers.
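Section 2.1's dependent/independent distinction reduces to a simple test over the observation log: a change is dependent if some other flight by the same airline on the same route changed price in the same observation window. The sketch below is an added illustration of that test, not code from the paper; the record fields and example values are assumptions.

    from collections import defaultdict

    def classify_price_changes(changes):
        """Label each recorded price change as 'dependent' or 'independent'.
        A change is dependent when another flight by the same airline on the
        same route changed price in the same observation window (Section 2.1);
        otherwise it is independent. Field names are assumed for illustration."""
        # Group the changed flights by (airline, route, observation window).
        groups = defaultdict(set)
        for c in changes:
            key = (c["airline"], c["origin"], c["destination"], c["observed_at"])
            groups[key].add(c["flight_no"])

        labels = []
        for c in changes:
            key = (c["airline"], c["origin"], c["destination"], c["observed_at"])
            others_changed = groups[key] - {c["flight_no"]}
            labels.append("dependent" if others_changed else "independent")
        return labels

    # Tiny made-up example: two UA flights on LAX-BOS change together, one AA flight alone.
    example = [
        {"airline": "UA", "origin": "LAX", "destination": "BOS", "flight_no": "UA100",
         "observed_at": "2003-07-10T09:00"},
        {"airline": "UA", "origin": "LAX", "destination": "BOS", "flight_no": "UA112",
         "observed_at": "2003-07-10T09:00"},
        {"airline": "AA", "origin": "SEA", "destination": "IAD", "flight_no": "AA210",
         "observed_at": "2003-07-10T12:00"},
    ]
    print(classify_price_changes(example))  # ['dependent', 'dependent', 'independent']

Under essentially this test, the paper reports that 63% of the 762 observed changes fall into the dependent class.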