Faust Documentation Release 1.7.4


Robinhood Markets, Inc.
Jul 23, 2019

CONTENTS

1 Contents
2 Indices and tables
Python Module Index
Index

    # Python Streams ٩(◕‿◕)۶
    #
    # Forever scalable event processing & in-memory durable K/V store;
    # w/ asyncio & static typing.
    import faust

Faust is a stream processing library, porting the ideas from Kafka Streams to Python.

It is used at Robinhood to build high performance distributed systems and real-time data pipelines that process billions of events every day.

Faust provides both stream processing and event processing, sharing similarity with tools such as Kafka Streams and Apache Spark/Storm/Samza/Flink. It does not use a DSL, it's just Python! This means you can use all your favorite Python libraries when stream processing: NumPy, PyTorch, Pandas, NLTK, Django, Flask, SQLAlchemy, ++

Faust requires Python 3.6 or later for the new async/await syntax and variable type annotations.

Here's an example processing a stream of incoming orders:

    app = faust.App('myapp', broker='kafka://localhost')

    # Models describe how messages are serialized:
    # {"account_id": "3fae-...", "amount": 3}
    class Order(faust.Record):
        account_id: str
        amount: int

    @app.agent(value_type=Order)
    async def order(orders):
        async for order in orders:
            # process infinite stream of orders.
            print(f'Order for {order.account_id}: {order.amount}')

The Agent decorator defines a "stream processor" that essentially consumes from a Kafka topic and does something for every event it receives. The agent is an async def function, so it can also perform other operations asynchronously, such as web requests.

This system can persist state, acting like a database. Tables are named distributed key/value stores you can use as regular Python dictionaries. Tables are stored locally on each machine using a super fast embedded database written in C++, called RocksDB.

Tables can also store aggregate counts that are optionally "windowed" so you can keep track of "number of clicks from the last day" or "number of clicks in the last hour," for example. Like Kafka Streams, we support tumbling, hopping and sliding windows of time, and old windows can be expired to stop data from filling up.

For reliability we use a Kafka topic as a "write-ahead log": whenever a key is changed, we publish to the changelog. Standby nodes consume from this changelog to keep an exact replica of the data, enabling instant recovery should any of the nodes fail. To the user a table is just a dictionary, but data is persisted between restarts and replicated across nodes, so on failover other nodes can take over automatically.

You can count page views by URL:

    # data sent to 'clicks' topic sharded by URL key.
    # e.g. key="http://example.com" value="1"
    click_topic = app.topic('clicks', key_type=str, value_type=int)

    # default value for missing URL will be 0 with `default=int`
    counts = app.Table('click_counts', default=int)

    @app.agent(click_topic)
    async def count_click(clicks):
        async for url, count in clicks.items():
            counts[url] += count

The data sent to the Kafka topic is partitioned, which means the clicks will be sharded by URL in such a way that every count for the same URL will be delivered to the same Faust worker instance.
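The windowed tables described above can be sketched by extending this click counter. The following is a minimal sketch, not taken from the official documentation: the app, table, and agent names are illustrative, and it assumes the tumbling-window table API (Table.tumbling) keeps one counter per URL per one-hour window.

    from datetime import timedelta

    import faust

    app = faust.App('click-windows', broker='kafka://localhost')

    # Same shape as the clicks topic above: URL keys, integer counts.
    click_topic = app.topic('clicks', key_type=str, value_type=int)

    # One counter per URL per one-hour tumbling window; expired windows
    # are dropped so old data does not keep accumulating.
    hourly_clicks = app.Table('hourly_clicks', default=int).tumbling(
        timedelta(hours=1),
        expires=timedelta(hours=1),
    )

    @app.agent(click_topic)
    async def count_hourly_clicks(clicks):
        async for url, count in clicks.items():
            hourly_clicks[url] += count
            # .current() reads the value for the window the current
            # event falls into.
            print(url, hourly_clicks[url].current())

A hopping window would follow the same pattern, with the table built via Table.hopping and an additional step argument.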
Faust supports any type of stream data: bytes, Unicode and serialized structures, but it also comes with "Models" that use modern Python syntax to describe how keys and values in streams are serialized:

    # Order is a json serialized dictionary,
    # having these fields:
    class Order(faust.Record):
        account_id: str
        product_id: str
        price: float
        quantity: float = 1.0

    orders_topic = app.topic('orders', key_type=str, value_type=Order)

    @app.agent(orders_topic)
    async def process_order(orders):
        async for order in orders:
            # process each order using regular Python
            total_price = order.price * order.quantity
            await send_order_received_email(order.account_id, order)

Faust is statically typed, using the mypy type checker, so you can take advantage of static types when writing applications (a short typed sketch appears at the end of this introduction). The Faust source code is small, well organized, and serves as a good resource for learning the implementation of Kafka Streams.

Learn more about Faust on the Introducing Faust page, which covers system requirements, installation instructions, community resources, and more; or go directly to the Quick Start tutorial to see Faust in action by programming a streaming application; then explore the User Guide for in-depth information organized by topic.

CHAPTER ONE

CONTENTS

1.1 Copyright

Faust User Manual

Copyright © 2017-2019, Robinhood Markets, Inc. All rights reserved.

This material may be copied or distributed only subject to the terms and conditions set forth in the Creative Commons Attribution-ShareAlike 4.0 International license (http://creativecommons.org/licenses/by-sa/4.0/legalcode). You may share and adapt the material, even for commercial purposes, but you must give the original author credit. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same license or a license compatible to this one.

Note: While the Faust documentation is offered under the Creative Commons Attribution-ShareAlike 4.0 International license, the Faust software is offered under the BSD License (3 Clause).

1.2 Introducing Faust

Version 1.7.4
Web http://faust.readthedocs.io/
Download http://pypi.org/project/faust
Source http://github.com/robinhood/faust
Keywords distributed, stream, async, processing, data, queue

Table of Contents

• What can it do?
• How do I use it?
• What do I need?
• Extensions
• Design considerations
• Getting Help
• Resources
• License

1.2.1 What can it do?

Agents
Process infinite streams in a straightforward manner using asynchronous generators. The concept of "agents" comes from the actor model, and means the stream processor can execute concurrently on many CPU cores, and on hundreds of machines at the same time.

Use regular Python syntax to process streams and reuse your favorite libraries:

    @app.agent()
    async def process(stream):
        async for value in stream:
            process(value)

Tables
Tables are sharded dictionaries that enable stream processors to be stateful with persistent and durable data. Streams are partitioned to keep relevant data close, and can be easily repartitioned to achieve the topology you need.

In this example we repartition an order stream by account id, to count orders in a distributed table:

    import faust

    # this model describes how message values are serialized
    # in the Kafka "orders" topic.
    class Order(faust.Record, serializer='json'):
        account_id: str
        product_id: str
        amount: int
        price: float

    app = faust.App('hello-app', broker='kafka://localhost')
    orders_kafka_topic = app.topic('orders', value_type=Order)

    # our table is sharded amongst worker instances, and replicated
    # with standby copies to take over if one of the nodes fails.
    order_count_by_account = app.Table('order_count', default=int)

    @app.agent(orders_kafka_topic)
    async def process(orders: faust.Stream[Order]) -> None:
        async for order in orders.group_by(Order.account_id):
            order_count_by_account[order.account_id] += 1

If we start multiple instances of this Faust application on many machines, any order with the same account id will be received by the same stream processing agent, so the count updates correctly in the table.

Sharding/partitioning is an essential part of stateful stream processing applications, so take this into account when designing your system. Note that streams can also be processed in round-robin order, so you can use Faust for event processing and as a task queue as well.

Asynchronous with asyncio
Faust takes full advantage of asyncio and the new async/await keywords in Python 3.6+ to run multiple stream processors in the same process, along with web servers and other network services.

Thanks to Faust and asyncio you can now embed your stream processing topology into your existing asyncio/gevent/eventlet/Twisted/Tornado applications.

Faust is…

Simple
Faust is extremely easy to use. Getting started with other stream processing solutions requires complicated hello-world projects and infrastructure. Faust only requires Kafka; the rest is just Python, so if you know Python you can already use Faust to do stream processing, and it can integrate with just about anything.

Here's one of the simpler applications you can make:

    import faust

    class Greeting(faust.Record):
        from_name: str
        to_name: str

    app = faust.App('hello-app', broker='kafka://localhost')
    topic = app.topic('hello-topic', value_type=Greeting)

    @app.agent(topic)
    async def hello(greetings):
        async for greeting in greetings:
            print(f'Hello from {greeting.from_name} to {greeting.to_name}')

    @app.timer(interval=1.0)
    async def example_sender(app):
        await hello.send(
            value=Greeting(from_name='Faust', to_name='you'),
        )

    if __name__ == '__main__':
        app.main()

You're probably a bit intimidated by the async and await keywords, but you don't have to know how asyncio works to use Faust: just mimic the examples, and you'll be fine.

The example application starts two tasks: one is processing a stream, the other is a background thread sending events to that stream. In a real-life application, your system will publish events to Kafka topics that your processors can consume from, and the background thread is only needed to feed data into our example.

Highly Available
Faust is highly available and can survive network problems and server crashes. In the case of node failure, it can automatically recover, and tables have standby nodes that will take over.

Distributed
Start more instances of your application as needed.

Fast
A single-core Faust worker instance can already process tens of thousands of events every second, and we are reasonably confident that throughput will increase once we can support a more optimized Kafka client.
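The static typing mentioned earlier in this introduction shows up directly in agent signatures. The sketch below is illustrative rather than taken from the official documentation (the Withdrawal record, topic, and app names are invented), but the annotation style mirrors the examples above; running mypy over such a module checks field access before a worker is ever started:

    import faust

    class Withdrawal(faust.Record):
        user: str
        amount: float

    app = faust.App('typed-app', broker='kafka://localhost')
    withdrawals_topic = app.topic('withdrawals', value_type=Withdrawal)

    @app.agent(withdrawals_topic)
    async def track_withdrawal(withdrawals: faust.Stream[Withdrawal]) -> None:
        async for w in withdrawals:
            total: float = w.amount   # fine: amount is declared as float
            # total = w.user          # mypy would reject this: str is not float
            print(f'{w.user} withdrew {total}')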