A Dataset of Python Jupyter Notebooks from Kaggle

KGTorrent: A Dataset of Python Jupyter Notebooks from Kaggle

Luigi Quaranta, Fabio Calefato, Filippo Lanubile
University of Bari, Italy

arXiv:2103.10558v1 [cs.DB] 18 Mar 2021

Abstract—Computational notebooks have become the tool of choice for many data scientists and practitioners for performing analyses and disseminating results. Despite their increasing popularity, the research community cannot yet count on a large, curated dataset of computational notebooks. In this paper, we fill this gap by introducing KGTORRENT, a dataset of Python Jupyter notebooks with rich metadata retrieved from Kaggle, a platform hosting data science competitions for learners and practitioners with any level of expertise. We describe how we built KGTORRENT and provide instructions on how to use it and refresh the collection to keep it up to date. Our vision is that the research community will use KGTORRENT to study how data scientists, especially practitioners, use Jupyter Notebook in the wild and identify potential shortcomings to inform the design of its future extensions.

Index Terms—open dataset, repository, Kaggle, computational notebook, Jupyter

I. INTRODUCTION

Computational notebooks, a modern implementation of the literate programming paradigm [1], are interactive documents interleaving natural language text, source code, and its output to form a human-friendly narrative of a computation. The most prominent example of a computational notebook platform is Jupyter Notebook,1 which has seen widespread endorsement, especially by data scientists [2].

Because of their popularity, Jupyter notebooks have also become the primary target of many archival studies [3]–[7], in which a sizable number of publicly available notebooks from online software repositories are put together under the lens of researchers. However, the task of gathering a large dataset of notebooks that meets specific research criteria is nontrivial and time-consuming. Due to the novelty of this research area, a large, annotated dataset of computational notebooks has been missing so far.

To fill this gap, in this paper we present KGTORRENT, a large dataset of computational notebooks with rich metadata retrieved from Kaggle,2 a Google-owned platform that hosts machine learning competitions for data scientists of all experience levels. In addition to hosting data science challenges, Kaggle also provides a large number of datasets as well as a cloud-based data science environment. The latter enables the development and execution of scripts and computational notebooks written in R or Python.

Among the various datasets offered by the platform, there is Meta Kaggle,3 a daily-updated collection of data about the Kaggle community and its activity. Moreover, Meta Kaggle stores detailed information about publicly available notebooks, which can be obtained through the Kaggle API or, at a lower level, through direct HTTP requests.

To build KGTORRENT, we thoroughly analyzed Meta Kaggle and reverse-engineered its underlying data schema to build a relational database for storing Kaggle metadata; then, after populating our database, we gathered a full copy of 248,761 publicly available Jupyter notebooks written in Python. By linking the notebook archive to the relational database, we offer the research community not only a large dataset of Jupyter notebooks, but also a practical way to select a sample of interest, based on any criterion that can be expressed in terms of the Kaggle metadata.

Finally, along with the dataset and its companion database, we publish the scripts used to build them. These can be conveniently executed to reproduce the collection as well as to effortlessly update it to more recent versions of Meta Kaggle. To the best of our knowledge, KGTORRENT is the largest available dataset of Python Jupyter notebooks with rich metadata.

The name of our dataset is inspired by two previous works of similar nature, GHTorrent [8] and SOTorrent [9]. The former provides an offline mirror of data from GitHub, the popular project hosting site. The latter is an open dataset containing the version history of posts from Stack Overflow, the most popular question-and-answer website for software developers.

The remainder of this paper is organized as follows. In Section II, we present an overview of the Kaggle platform. Next, we describe the two main components of KGTORRENT, namely the database of metadata in Section III and the dataset of Jupyter notebooks in Section IV. Then, Section V provides a short guide on how to use and update KGTORRENT, while Section VI offers a couple of insights on its potential applications in research. Finally, we describe future work in Section VII.

II. KAGGLE

Since 2010, Kaggle started offering worldwide machine learning competitions that ensure their winners both money prizes and high visibility in the platform leaderboards. Notably, some competitions have even been used by international companies for recruiting purposes. Most of the challenges are indeed sponsored by large organizations seeking innovative AI-based approaches to their business challenges or research agenda. Besides providing funds for the competition prizes, they often supply new datasets to the platform as part of the competition packages.

Since its foundation, Kaggle has continually evolved over the years, gradually widening its offer to a cloud-based ecosystem of services in support of competitions. It started hosting a large number of datasets, shaping up to be a public data platform, and, most interestingly, began providing its users with a web-based data science environment powered by state-of-the-art containerization technology. Kaggle enables its users to create scripts and computational notebooks in R and Python – both known as ‘kernels’ in the Kaggle jargon. These can be developed directly on the platform, where large datasets are one click away. The entire computation happens in a containerized cloud environment that users can customize at will (e.g., by installing custom dependencies). Nevertheless, a comprehensive set of commonly used data science packages is available in kernels out of the box. In addition, both kernels and datasets get versioned in Kaggle. Once users have finished working on their notebook, they can choose to temporarily save it or commit it – and, when this applies, submit its results (e.g., a pickled model) in response to a competition.

Besides competitions and data science tools, Kaggle hosts a rich bundle of social features. The platform enables its users to discuss in forum-like threads about kernels, datasets, and the competitions themselves. Additionally, users can follow each other so that content published by the followed user (kernels, datasets, and discussion posts) surfaces in the newsfeed of the follower.

Another key mechanism of the platform is the Kaggle Progression System.4 The growth of users as data scientists gets tracked in terms of four categories of expertise, namely Competitions, Notebooks, Datasets, and Discussion. For each category of expertise, Kaggle assigns its users a performance tier among the following: Novice, Contributor, Expert, Master, and Grandmaster.

III. KGTORRENT DATABASE

KGTORRENT comprises 1) a dataset of Python Jupyter notebooks from Kaggle and 2) a companion database derived from Meta Kaggle, which stores metadata about each notebook and comprehensive information about the overall activity of Kaggle users.5

As a first step in the development of KGTORRENT, on October 27, 2020, we downloaded the latest available version of the Meta Kaggle dataset. Meta Kaggle comprises 29 files in the .csv format. Each of these represents the dump of a database table. Since an official schema definition for Meta Kaggle was not available, we reverse-engineered the relational schema from the set of plain-text .csv tables. This step involved understanding the structure of each table, the related constraints, as well as the column data types. Unfortunately, Kaggle does not provide any official documentation of the dataset, leaving the table content and relationships open to interpretation. Nevertheless, we managed to piece together the schema structure and column definitions, also by leveraging the related discussions in the forum.

Then, we imported the information contained in Meta Kaggle into a dedicated relational database. Our DBMS of choice was MySQL, and that is the format in which we provide the dump of the KGTORRENT database, weighing 8.31 GB (1 GB compressed). Nonetheless, users interested in adopting a different DBMS technology can easily create their own version of KGTORRENT, as we handled all database operations via SQLAlchemy,6 a popular ORM package for Python supporting a large number of DBMSs; therefore, minimal changes are required in our scripts to migrate to a different database technology.

Because some information is missing in the Meta Kaggle dump (i.e., it is not made publicly available by the platform maintainers), many tables present rows for which one or more foreign keys cannot be resolved. Hence, a straight import of Meta Kaggle tables using a relational DBMS is not feasible due to a substantial number of violations of referential integrity constraints. To import Meta Kaggle into a relational database, one has to decide whether each foreign key constraint has to be enforced (thus losing orphan records in the

1 https://jupyter.org
2 www.kaggle.com
3 https://kaggle.com/kaggle/meta-kaggle
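The import step described in Section III – turning each Meta Kaggle .csv dump into a relational table with explicit column types and foreign keys, via SQLAlchemy – can be sketched as follows. The table and column names are illustrative stand-ins, not the actual reverse-engineered Meta Kaggle schema, and an in-memory SQLite database stands in for the MySQL instance used for KGTorrent.

```python
# Sketch of the Meta Kaggle import: each .csv dump becomes a relational table
# with explicit types and foreign keys. Schema and data are illustrative;
# SQLite stands in for MySQL (SQLAlchemy makes the DBMS interchangeable).
import csv
import io

from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String,
                        Table, create_engine, select)

metadata = MetaData()

users = Table(
    "Users", metadata,
    Column("Id", Integer, primary_key=True),
    Column("UserName", String(64)),
)

kernels = Table(
    "Kernels", metadata,
    Column("Id", Integer, primary_key=True),
    Column("AuthorUserId", Integer, ForeignKey("Users.Id")),
)

engine = create_engine("sqlite://")  # in-memory stand-in for MySQL
metadata.create_all(engine)

# In-memory stand-ins for two Meta Kaggle .csv dumps.
users_csv = "Id,UserName\n1,alice\n2,bob\n"
kernels_csv = "Id,AuthorUserId\n10,1\n11,2\n"

with engine.begin() as conn:
    conn.execute(users.insert(),
                 [{"Id": int(r["Id"]), "UserName": r["UserName"]}
                  for r in csv.DictReader(io.StringIO(users_csv))])
    conn.execute(kernels.insert(),
                 [{k: int(v) for k, v in r.items()}
                  for r in csv.DictReader(io.StringIO(kernels_csv))])

with engine.connect() as conn:
    imported = conn.execute(select(kernels)).fetchall()
print(len(imported))  # 2
```

Because all operations go through SQLAlchemy, switching the `create_engine` URL (e.g., to a MySQL DSN) is essentially the only change needed to target a different DBMS, which is the portability property the paper relies on.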
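The referential-integrity trade-off in the last paragraph – enforcing a foreign key drops orphan rows, disabling it keeps them – can be illustrated with a small pre-filtering step. The data and column names below are made up for the example.

```python
# Illustration of the orphan-record problem in the Meta Kaggle dump: rows
# whose foreign key points to a missing parent ("orphans") must either be
# dropped before import (constraint enforced) or kept with the constraint
# disabled. Tables and values are invented for the example.
users = [{"Id": 1}, {"Id": 2}]                      # parent table
kernels = [                                          # child table
    {"Id": 10, "AuthorUserId": 1},
    {"Id": 11, "AuthorUserId": 2},
    {"Id": 12, "AuthorUserId": 99},                  # orphan: user 99 missing
]

known_user_ids = {u["Id"] for u in users}
kept = [k for k in kernels if k["AuthorUserId"] in known_user_ids]
orphans = [k for k in kernels if k["AuthorUserId"] not in known_user_ids]

print(len(kept), len(orphans))  # 2 1
```

Enforcing the constraint here means importing only `kept` and discarding `orphans`; the alternative is to import all three rows with the foreign key check disabled.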
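Selecting a notebook sample "based on any criterion that can be expressed in terms of the Kaggle metadata" (Section I) amounts to a SQL query joining the metadata tables. A minimal sketch, using a simplified made-up schema rather than the real KGTorrent database:

```python
# Sketch of sample selection via the companion database: a SQL query over the
# metadata picks the notebooks of interest. The schema is a simplified,
# invented stand-in for the actual KGTorrent database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users   (Id INTEGER PRIMARY KEY, UserName TEXT,
                          PerformanceTier INTEGER);
    CREATE TABLE Kernels (Id INTEGER PRIMARY KEY,
                          AuthorUserId INTEGER REFERENCES Users(Id),
                          TotalVotes INTEGER);
    INSERT INTO Users   VALUES (1, 'alice', 4), (2, 'bob', 1);
    INSERT INTO Kernels VALUES (10, 1, 25), (11, 2, 3), (12, 1, 7);
""")

# Example criterion: notebooks by highly ranked users with at least 5 votes.
rows = conn.execute("""
    SELECT k.Id
    FROM Kernels k JOIN Users u ON k.AuthorUserId = u.Id
    WHERE u.PerformanceTier >= 4 AND k.TotalVotes >= 5
    ORDER BY k.Id
""").fetchall()
print([r[0] for r in rows])  # [10, 12]
```

The returned ids would then index into the notebook archive to retrieve the corresponding `.ipynb` files.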
