Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences

16 pages · PDF · 1,020 KB

Daniel S. Brown¹, Russell Coleman¹,², Ravi Srinivasan², Scott Niekum¹

¹Computer Science Department, The University of Texas at Austin. ²Applied Research Laboratories, The University of Texas at Austin. Correspondence to: Daniel Brown <[email protected]>.

Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s). arXiv:2002.09089v4 [cs.LG] 17 Dec 2020.

Abstract

Bayesian reward learning from demonstrations enables rigorous safety and uncertainty analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally intractable for complex control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a highly efficient Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. Bayesian REX can learn to play Atari games from demonstrations, without access to the game score, and can generate 100,000 samples from the posterior over reward functions in only 5 minutes on a personal laptop. Bayesian REX also results in imitation learning performance that is competitive with or better than state-of-the-art methods that only learn point estimates of the reward function. Finally, Bayesian REX enables efficient high-confidence policy evaluation without having access to samples of the reward function. These high-confidence performance bounds can be used to rank the performance and risk of a variety of evaluation policies and provide a way to detect reward hacking behaviors.

1. Introduction

It is important that robots and other autonomous agents can safely learn from and adapt to a variety of human preferences and goals. One common way to learn preferences and goals is via imitation learning, in which an autonomous agent learns how to perform a task by observing demonstrations of the task (Argall et al., 2009). When learning from demonstrations, it is important for an agent to be able to provide high-confidence bounds on its performance with respect to the demonstrator; however, while there exists much work on high-confidence off-policy evaluation in the reinforcement learning (RL) setting, there has been much less work on high-confidence policy evaluation in the imitation learning setting, where the reward samples are unavailable.

Prior work on high-confidence policy evaluation for imitation learning has used Bayesian inverse reinforcement learning (IRL) (Ramachandran & Amir, 2007) to allow an agent to reason about reward uncertainty and policy generalization error (Brown et al., 2018). However, Bayesian IRL is typically intractable for complex problems due to the need to repeatedly solve an MDP in the inner loop, resulting in high computational cost as well as high sample cost if a model is not available. This precludes robust safety and uncertainty analysis for imitation learning in high-dimensional problems or in problems in which a model of the MDP is unavailable. We seek to remedy this problem by proposing and evaluating a method for safe and efficient Bayesian reward learning via preferences over demonstrations. Preferences over trajectories are intuitive for humans to provide (Akrour et al., 2011; Wilson et al., 2012; Sadigh et al., 2017; Christiano et al., 2017; Palan et al., 2019) and enable better-than-demonstrator performance (Brown et al., 2019b;a). To the best of our knowledge, we are the first to show that preferences over demonstrations enable both fast Bayesian reward learning in high-dimensional, visual control tasks as well as efficient high-confidence performance bounds.

We first formalize the problem of high-confidence policy evaluation (Thomas et al., 2015) for imitation learning. We then propose a novel algorithm, Bayesian Reward Extrapolation (Bayesian REX), that uses a pairwise ranking likelihood to significantly increase the efficiency of generating samples from the posterior distribution over reward functions. We demonstrate that Bayesian REX can leverage neural network function approximation to learn useful reward features via self-supervised learning in order to efficiently perform deep Bayesian reward inference from visual demonstrations. Finally, we demonstrate that samples obtained from Bayesian REX can be used to solve the high-confidence policy evaluation problem for imitation learning. We evaluate our method on imitation learning for Atari games and demonstrate that we can efficiently compute high-confidence bounds on policy performance, without access to samples of the reward function. We use these high-confidence performance bounds to rank different evaluation policies according to their risk and expected return under the posterior distribution over the unknown ground-truth reward function. Finally, we provide evidence that bounds on uncertainty and risk provide a useful tool for detecting reward hacking/gaming (Amodei et al., 2016), a common problem in reward inference from demonstrations (Ibarz et al., 2018) as well as reinforcement learning (Ng et al., 1999; Leike et al., 2017).

2. Related Work

2.1. Imitation Learning

Imitation learning is the problem of learning a policy from demonstrations and can roughly be divided into techniques that use behavioral cloning and techniques that use inverse reinforcement learning. Behavioral cloning methods (Pomerleau, 1991; Torabi et al., 2018) seek to solve the imitation learning problem via supervised learning, in which the goal is to learn a mapping from states to actions that mimics the demonstrator. While computationally efficient, these methods suffer from compounding errors (Ross et al., 2011). Methods such as DAgger (Ross et al., 2011) and DART (Laskey et al., 2017) avoid this problem by repeatedly collecting additional state-action pairs from an expert.

Inverse reinforcement learning (IRL) methods seek to solve the imitation learning problem by estimating the reward function that the demonstrator is optimizing (Ng & Russell, 2000). Classical approaches repeatedly alternate between a reward estimation step and a full policy optimization step (Abbeel & Ng, 2004; Ziebart et al., 2008; Ramachandran & Amir, 2007). Bayesian IRL (Ramachandran & Amir, 2007) samples from the posterior distribution over reward functions, whereas other methods seek a single reward function that induces the demonstrator's feature expectations (Abbeel & Ng, 2004), often while also seeking to maximize the entropy of the resulting policy (Ziebart et al., 2008).

Most deep learning approaches for IRL use maximum entropy policy optimization and divergence minimization between marginal state-action distributions (Ho & Ermon, 2016; Fu et al., 2017; Ghasemipour et al., 2019) and are related to Generative Adversarial Networks (Finn et al., 2016a). These methods scale to complex control problems by iterating between reward learning and policy learning; preference-based extensions enable better-than-demonstrator performance even in the absence of explicit preference labels (Brown et al., 2019b;a). However, despite their successes, deep learning approaches to IRL typically only return a point estimate of the reward function, precluding uncertainty and robustness analysis.

2.2. Safe Imitation Learning

While there has been much interest in imitation learning, less attention has been given to problems related to safety. SafeDAgger (Zhang & Cho, 2017) and EnsembleDAgger (Menda et al., 2019) are extensions of DAgger that give control to the demonstrator in states where the imitation learning policy is predicted to have a large action difference from the demonstrator. Other approaches to safe imitation learning seek to match the tail risk of the expert as well as find a policy that is indistinguishable from the demonstrations (Majumdar et al., 2017; Lacotte et al., 2019).

Brown & Niekum (2018) propose a Bayesian sampling approach to provide explicit high-confidence performance bounds in the imitation learning setting, but require an MDP solver in the inner loop. Their method uses samples from the posterior distribution P(R|D) to compute sample-efficient probabilistic upper bounds on the policy loss of any evaluation policy. Other work considers robust policy optimization over a distribution of reward functions conditioned on demonstrations or a partially specified reward function, but these methods require an MDP solver in the inner loop, limiting their scalability (Hadfield-Menell et al., 2017; Brown et al., 2018; Huang et al., 2018). We extend and generalize the work of Brown & Niekum (2018) by demonstrating, for the first time, that high-confidence performance bounds can be efficiently obtained when performing imitation learning from high-dimensional visual demonstrations without requiring an MDP solver or model during reward inference.

2.3. Value Alignment and Active Preference Learning

Safe imitation learning is closely related to the problem of value alignment, which seeks to design methods that prevent AI systems from acting in ways that violate human values (Hadfield-Menell et al., 2016; Fisac et al., 2020). Research has shown that difficulties arise when an agent seeks to align its value with a human who is not perfectly rational (Milli et al., 2017), and there are fundamental impossibility results regarding value alignment unless the objective is represented
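The pairwise ranking likelihood referred to above is the standard Bradley-Terry/Luce preference model applied to trajectory returns. As a hedged sketch of the form such a likelihood takes (with a linear reward $R(\tau) = w^\top \Phi(\tau)$ over pre-computed trajectory features $\Phi$, and an inverse-temperature parameter $\beta$ modeling rater confidence):

$$
P(\tau_i \prec \tau_j \mid w) \;=\; \frac{e^{\beta\, w^\top \Phi(\tau_j)}}{e^{\beta\, w^\top \Phi(\tau_i)} + e^{\beta\, w^\top \Phi(\tau_j)}}
$$

Because the feature vectors $\Phi(\tau)$ are computed once per trajectory, each likelihood evaluation reduces to a few dot products rather than an MDP solve, which is what makes repeated posterior sampling cheap.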
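The full pipeline can be sketched compactly. The code below is illustrative, not the authors' implementation: it assumes trajectories are already summarized as low-dimensional feature vectors (the paper obtains these from a pre-trained self-supervised encoder), runs a simple Metropolis-Hastings sampler over unit-norm linear reward weights under a Bradley-Terry preference likelihood, and then derives a 95%-confidence performance lower bound for an evaluation policy from the posterior samples. The synthetic data and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trajectory tau is summarized by a feature vector
# Phi(tau) (in Bayesian REX these come from a pre-trained self-supervised
# encoder), and the reward is linear: R(tau) = w . Phi(tau).
D = 4
w_true = np.array([1.0, -0.5, 0.2, 0.0])   # illustrative ground-truth weights
Phi = rng.normal(size=(20, D))             # 20 demonstration trajectories
returns_true = Phi @ w_true

# Pairwise preferences (i preferred to j), labeled here by the true reward.
pairs = np.array([(i, j) for i in range(20) for j in range(20)
                  if returns_true[i] > returns_true[j] + 0.5])

def log_likelihood(w, beta=1.0):
    """Bradley-Terry ranking likelihood over trajectory returns."""
    r = beta * (Phi @ w)
    ri, rj = r[pairs[:, 0]], r[pairs[:, 1]]
    return np.sum(ri - np.logaddexp(ri, rj))

def sample_posterior(n_samples=2000, step=0.1):
    """Metropolis-Hastings over unit-norm reward weights. Each step needs
    only a few dot products -- no MDP solve -- so sampling is fast."""
    w = np.ones(D) / np.sqrt(D)
    ll = log_likelihood(w)
    samples = []
    for _ in range(n_samples):
        prop = w + step * rng.normal(size=D)
        prop /= np.linalg.norm(prop)              # unit norm for identifiability
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # uniform prior on the sphere
            w, ll = prop, ll_prop
        samples.append(w.copy())
    return np.array(samples)

samples = sample_posterior()

# High-confidence policy evaluation: an evaluation policy's expected feature
# counts induce a distribution of returns under the posterior; the 0.05
# quantile is a 95%-confidence lower bound on performance, usable to rank
# policies by risk as well as expected return.
phi_eval = Phi[np.argmax(returns_true)]           # stand-in evaluation policy
returns_posterior = samples @ phi_eval
lower_bound = np.quantile(returns_posterior, 0.05)
```

The same posterior samples serve both uses described in the paper: ranking candidate policies by expected return (the posterior mean of `returns_posterior`) and bounding their risk (low quantiles of the same distribution), without ever querying the true reward.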