Introduction to Asynchronous Programming
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
A Practical UNIX Capability System
A Practical UNIX Capability System. Adam Langley <[email protected]>, 22nd June 2005.

Abstract: This report seeks to document the development of a capability security system based on a Linux kernel and to follow through the implications of such a system. After defining terms, several other capability systems are discussed and found to be excellent, but to have too high a barrier to entry. This motivates the development of the above system. The capability system decomposes traditionally monolithic applications into a number of communicating actors, each of which is a separate process. Actors may only communicate using the capabilities given to them, so the impact of a vulnerability in a given actor can be reasoned about. This design pattern is demonstrated to be advantageous in terms of security, comprehensibility and modularity, with an acceptable performance penalty. From this, following through a few of the further avenues which present themselves is the two hours' traffic of our stage.

Acknowledgments: I would like to thank my supervisor, Dr Kelly, for all the time he has put into cajoling and persuading me that the rest of the world might have a trick or two worth learning. Also, I'd like to thank Bryce Wilcox-O'Hearn for introducing me to capabilities many years ago.

Contents: 1 Introduction; 2 Terms (2.1 POSIX 'Capabilities'; 2.2 Password Capabilities); 3 Motivations (3.1 Ambient Authority; 3.2 Confused Deputy; 3.3 Pervasive Testing; 3.4 Clear Auditing of Vulnerabilities; 3.5 Easy Configurability)
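The report's key design idea, decomposing a monolithic program into separate processes that can only talk over the descriptors they were explicitly given, can be sketched with an ordinary socketpair. This is an illustrative sketch, not code from the report; the "coordinator" and "worker" roles are invented for the example.

// Illustrative sketch (not from the report): two actors as separate
// processes; the worker's only authority is the socket it was given.
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                 // worker actor
        close(fds[0]);              // drop the other end: no ambient authority
        char req[64];
        ssize_t n = read(fds[1], req, sizeof(req) - 1);
        if (n > 0) {
            req[n] = '\0';
            // The worker can only answer over the capability it holds.
            dprintf(fds[1], "handled: %s", req);
        }
        _exit(0);
    }
    // coordinator actor
    close(fds[1]);
    const char *msg = "parse this untrusted input";
    write(fds[0], msg, strlen(msg));
    char reply[128];
    ssize_t n = read(fds[0], reply, sizeof(reply) - 1);
    if (n > 0) { reply[n] = '\0'; printf("%s\n", reply); }
    return 0;
}

If the worker is compromised, the damage is bounded by the descriptors it was handed, which is the property the report argues makes vulnerabilities easier to reason about.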
MySQL NDB Cluster 7.5.16 (and later)
Licensing Information User Manual, MySQL NDB Cluster 7.5.16 (and later). Table of Contents: Licensing Information; Licenses for Third-Party Components (ANTLR 3, argparse, AWS SDK for C++, Boost Library, Corosync, Cyrus SASL, dtoa.c, Editline Library (libedit), Facebook Fast Checksum Patch, ...)
Ultimate++ Forum Probably with These Two Variables You Could Try to Integrate D-BUS
Subject: DBus integration -- need help
Posted by jlfranks on Thu, 20 Jul 2017 15:30:07 GMT

We are trying to add a DBus server to an existing large U++ application that runs only on Linux. I've converted the Upp project config from X11 to GTK to be compatible with the GIO DBus library. I've created a separate thread for the DBus server and ran into problems with the event loop. I've gutted my DBus server code of everything except what is causing the issue. The DBus GIO examples use GMainLoop so that DBus can service asynchronous events. Everything compiles and runs, except the main UI is no longer visible. There must be a GTK main loop already running, and I've stepped on it with this code. Is there a way for me to obtain a pointer to the UI main loop and use it with my DBus server? How/where can I do that? Code snippet example as follows:

---- code snippet ----
void myDBusServerMainThread()
{
    //
    // Enter main service loop for this thread
    //
    while (not needsExit()) {   // collaborate with join()
        GMainLoop *loop;
        loop = g_main_loop_new(NULL, FALSE);
        g_main_loop_run(loop);
    }
}

Subject: Re: DBus integration -- need help
Posted by Klugier on Thu, 20 Jul 2017 22:30:17 GMT

Hello,

You could try using the following methods of Ctrl to obtain the GTK and GDK handles:

GdkWindow *gdk() const { return top ? top->window->window : NULL; }
GtkWindow *gtk() const { return top ? (GtkWindow *)top->window : NULL; }

Probably with these two variables you could try to integrate D-Bus.
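One common way to resolve the kind of clash described in the thread is to give the DBus thread its own GMainContext, so it never touches the GTK main loop that drives the UI. The sketch below is illustrative, not from the U++ sources or the forum; the bus name org.example.MyApp and the function run_dbus_thread are assumptions for the example.

// Sketch: run the DBus server in its own thread with its own GMainContext,
// leaving the GTK UI loop in the main thread untouched.
#include <gio/gio.h>

static void run_dbus_thread()
{
    GMainContext *ctx = g_main_context_new();
    g_main_context_push_thread_default(ctx);        // GIO sources created here
                                                     // attach to ctx, not the UI loop
    guint owner_id = g_bus_own_name(G_BUS_TYPE_SESSION,
                                    "org.example.MyApp",   // illustrative name
                                    G_BUS_NAME_OWNER_FLAGS_NONE,
                                    NULL, NULL, NULL, NULL, NULL);

    GMainLoop *loop = g_main_loop_new(ctx, FALSE);
    g_main_loop_run(loop);                           // runs until something calls
                                                     // g_main_loop_quit(loop)
    g_bus_unown_name(owner_id);
    g_main_loop_unref(loop);
    g_main_context_pop_thread_default(ctx);
    g_main_context_unref(ctx);
}

Because GDBus attaches its sources to the thread-default context at registration time, name ownership and method-call callbacks are then serviced by this loop rather than by the default (UI) context.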
Fundamentals of Xlib Programming by Examples
Fundamentals of Xlib Programming by Examples, by Ross Maloney.

Contents: 1 Introduction (1.1 Critic of the available literature; 1.2 The Place of the X Protocol; 1.3 X Window Programming gotchas); 2 Getting started (2.1 Basic Xlib programming steps; 2.2 Creating a single window: 2.2.1 Open connection to the server, 2.2.2 Top-level window, 2.2.3 Exercises; 2.3 Smallest Xlib program to produce a window, Exercises; 2.4 A simple but useful X Window program, Exercises; 2.5 A moving window, Exercises; 2.6 Parts of windows can disappear from view: 2.6.1 Testing overlay services available from an X server, 2.6.2 Consequences of no server overlay services, 2.6.3 Exercises; 2.7 Changing a window's properties; 2.8 Content summary); 3 Windows and events produce menus (3.1 Colour, Exercises; 3.2 A button to click; 3.3 Events, Exercises; 3.4 Menus: 3.4.1 Text labelled menu buttons, 3.4.2 Exercises; 3.5 Some events of the mouse; 3.6 A mouse behaviour application, Exercises; 3.7 Implementing hierarchical menus, Exercises; 3.8 Content summary); 4 Pixmaps (4.1 The pixmap resource ...)
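As a companion to the "Smallest Xlib program to produce a window" chapter listed above, a minimal sketch of that kind of program (my own, not taken from the book) looks roughly like this:

// Minimal Xlib sketch: open a display, map a window, and loop on events
// until a key is pressed. Illustrative only; not code from the book.
#include <X11/Xlib.h>
#include <cstdlib>

int main() {
    Display *dpy = XOpenDisplay(NULL);            // connect to the X server
    if (!dpy) return EXIT_FAILURE;

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 320, 200, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);                         // make the window visible

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);                     // blocks until an event arrives
        if (ev.type == KeyPress) break;           // quit on any key
    }
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}

The XNextEvent loop is the same event-demultiplexing structure that recurs in several of the other publications listed here.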
Efficient Parallel I/O on Multi-Core Architectures
Efficient parallel I/O on multi-core architectures. Adrien Devresse, CERN IT-SDC-ID, Thematic CERN School of Computing 2014.

How to make an I/O-bound application scale with multi-core? What is an I/O-bound application? A server application; a job that accesses a big number of files; an application that uses the network intensively.

Stupid example: simple single-threaded server

// create socket
socket_desc = socket(AF_INET, SOCK_STREAM, 0);
// bind the socket
bind(socket_desc, (struct sockaddr *)&server, sizeof(server));
listen(socket_desc, 100);
// accept connections from incoming clients
while (1) {
    // declarations
    client_sock = accept(socket_desc, (struct sockaddr *)&client, &c);
    // Receive a message from the client
    while ((read_size = recv(client_sock, client_message, 2000, 0)) > 0) {
        // Wonderful, we have a client, do some useful work
        std::string msg("hello bob");
        write(client_sock, msg.c_str(), msg.size());
    }
}

Stupid example: let's make it parallel!

int main(int argc, char** argv) {
    // create socket
    socket_desc = socket(AF_INET, SOCK_STREAM, 0);
    // bind the socket
    bind(socket_desc, server, sizeof(server));
    listen(socket_desc, 100);
    // accept connection ...
}

void do_work(int socket) {
    // Receive a message
    while ((read_size = recv(client_sock, client_message, 2000, 0)) > 0) {
        // Wonderful, we have a client
        // useful work
    }
}
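The lecture's point is that thread-per-connection is not the only way to make an I/O-bound server scale. A hedged sketch of the event-driven alternative on Linux, using epoll, follows; it is my own illustration, not taken from the slides, and the port number is arbitrary.

// Sketch: single-threaded, event-driven server using epoll.
// Error handling trimmed for brevity; illustrative, not the lecture's code.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8888);                  // illustrative port
    bind(listener, (sockaddr *)&addr, sizeof(addr));
    listen(listener, 100);

    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listener;
    epoll_ctl(ep, EPOLL_CTL_ADD, listener, &ev);

    epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1);    // wait for any ready socket
        for (int i = 0; i < n; ++i) {
            int fd = ready[i].data.fd;
            if (fd == listener) {                 // new connection
                int client = accept(listener, nullptr, nullptr);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {                              // data from a client
                char buf[2000];
                ssize_t r = recv(fd, buf, sizeof(buf), 0);
                if (r <= 0) { close(fd); continue; }   // peer gone
                write(fd, "hello bob", 9);        // same "useful work" as the slide
            }
        }
    }
}

The same loop can be replicated across a few threads, one epoll instance each, to use several cores without dedicating a thread to every connection.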
Message Passing and Network Programming
Message Passing and Network Programming. Advanced Operating Systems, Lecture 13. Colin Perkins | https://csperkins.org/ | Copyright © 2017. This work is licensed under the Creative Commons Attribution-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nd/4.0/).

Lecture Outline: actors, sockets, and network protocols; asynchronous I/O frameworks; higher-level abstractions.

Message Passing and Network Protocols. Recap: an actor-based framework for message passing. Each actor has a receive loop, calling one function per state. Messages are delivered by the runtime system and processed sequentially; an actor can send messages in reply and returns the identity of its next state. [Diagram: an actor with a mailbox queue; a message dispatcher dequeues each message, delivers it to the receiver, and requests the next one when processing is done.] Can we write network code this way? Send data by sending a message to an actor representing a socket; receive messages representing data received on a socket.

Integrating Actors and Sockets. [Diagram: a sending thread with an encoder feeding the network socket, and a receiving thread with a parser feeding the actor's mailbox queue and dispatcher.] Conceptually straightforward to integrate actors with network code; the runtime system maintains sending and ...
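A minimal sketch of the receive-loop idea in these slides, an actor owning a mailbox queue that its own dispatcher thread drains one message at a time, assuming nothing about the lecture's actual runtime (the Actor class and the "quit" message are invented for the example):

// Sketch of an actor mailbox: messages are queued by any thread and
// processed sequentially by the actor's own dispatcher thread.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Actor {
public:
    Actor() : worker_([this] { run(); }) {}
    ~Actor() { send("quit"); worker_.join(); }

    void send(std::string msg) {              // called from other actors/threads
        std::lock_guard<std::mutex> lk(m_);
        mailbox_.push(std::move(msg));
        cv_.notify_one();
    }

private:
    void run() {                              // the receive loop
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !mailbox_.empty(); });
            std::string msg = std::move(mailbox_.front());
            mailbox_.pop();
            lk.unlock();
            if (msg == "quit") return;
            std::cout << "processing: " << msg << "\n";   // one message at a time
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> mailbox_;
    std::thread worker_;                      // declared last: constructed after the queue
};

int main() {
    Actor a;
    a.send("data decoded from a socket");     // e.g. a receiving thread would do this
}

A receiving thread that parses bytes off a socket would simply call send() with each decoded message, which is exactly the integration the second slide describes.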
MemC3: Compact and Concurrent MemCache with Dumber Caching and Smarter Hashing
MemC3: Compact and Concurrent MemCache with Dumber Caching and Smarter Hashing. Bin Fan, David G. Andersen, Michael Kaminsky∗. Carnegie Mellon University, ∗Intel Labs.

Abstract: This paper presents a set of architecturally and workload-inspired algorithmic and engineering improvements to the popular Memcached system that substantially improve both its memory efficiency and throughput. These techniques—optimistic cuckoo hashing, a compact LRU-approximating eviction algorithm based upon CLOCK, and comprehensive implementation of optimistic locking—enable the resulting system to use 30% less memory for small key-value pairs, and serve up to 3x as many queries per second over the network. We have implemented these modifications in a system we call MemC3—Memcached with CLOCK and Concurrent Cuckoo hashing—but believe that they also apply more generally to many of today's read-intensive, highly concurrent networked storage and caching systems.

Standard Memcached, at its core, uses a typical hash table design, with linked-list-based chaining to handle collisions. Its cache replacement algorithm is strict LRU, also based on linked lists. This design relies on locking to ensure consistency among multiple threads, and leads to poor scalability on multi-core CPUs [11]. This paper presents MemC3 (Memcached with CLOCK and Concurrent Cuckoo Hashing), a complete redesign of the Memcached internals. This re-design is informed by and takes advantage of several observations. First, architectural features can hide memory access latencies and provide performance improvements. In particular, our new hash table design exploits CPU cache locality to minimize the number of memory fetches required to complete any given operation; and it exploits instruction-level and memory-level parallelism to overlap ...
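The "compact LRU-approximating eviction algorithm based upon CLOCK" mentioned in the abstract can be illustrated with a small sketch. This is my own simplification, not the MemC3 implementation: a circular array of reference bits in which the clock hand clears bits until it finds an entry that has not been touched recently.

// Simplified CLOCK eviction sketch (not MemC3's code): each slot has a
// reference bit; the hand skips recently used slots, clearing their bits,
// and evicts the first slot whose bit is already clear.
#include <cstddef>
#include <vector>

class ClockCache {
public:
    explicit ClockCache(std::size_t slots) : ref_(slots, false), hand_(0) {}

    void touch(std::size_t slot) { ref_[slot] = true; }    // on every cache hit

    std::size_t evict() {                                  // pick a victim slot
        for (;;) {
            if (!ref_[hand_]) {                            // not recently used
                std::size_t victim = hand_;
                hand_ = (hand_ + 1) % ref_.size();
                return victim;
            }
            ref_[hand_] = false;                           // give it a second chance
            hand_ = (hand_ + 1) % ref_.size();
        }
    }

private:
    std::vector<bool> ref_;   // one reference bit per cache slot
    std::size_t hand_;        // the clock hand
};

Compared with strict LRU's doubly linked list, this needs only one bit per entry and no list manipulation on a hit, which is where the memory savings and the friendliness to concurrent access come from.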
QContext, and Supporting Multiple Event Loop Threads in QEMU
QContext, and Supporting Multiple Event Loop Threads in QEMU. Michael Roth <[email protected]>, IBM Linux Technology Center. © 2006 IBM Corporation.

QEMU Threading Model Overview. Historically (earlier this year), there were two main types of threads in QEMU: vcpu threads, which handle execution of guest code and emulation of hardware access (pio/mmio) and other trapped instructions; and the QEMU main loop (iothread), which handles everything else (mostly): GTK/SDL/VNC UIs, QMP/HMP management interfaces, clock updates and timer callbacks for devices, and device I/O on behalf of vcpus.

All core QEMU code is protected by a global mutex. vcpu threads in KVM_RUN can run concurrently thanks to address space isolation, but attempt to acquire the global mutex immediately after an exit. The iothread requires the global mutex whenever it is active. The result is high contention as threads or I/O scale. [Diagram: vcpu 1, vcpu 2, ..., vcpu n, and the iothread all contending for the global mutex.]

QEMU Thread Types: vcpu threads; the iothread; and the virtio-blk-dataplane thread, which drives a per-device AioContext via aio_poll, handles eventfd callbacks for virtio-blk virtqueue notifications and linux_aio completions, uses a port of vhost's vring code, doesn't (currently) use core QEMU code, doesn't require the global mutex, and will eventually re-use QEMU block layer code.

QEMU Block Layer Features: multiple image format support, snapshots, live block copy, live block migration, drive-mirroring, disk I/O limits, etc. More dataplane in the future ...
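The contention pattern these slides describe, threads running guest code independently but serializing on one lock for device emulation, can be sketched generically. The sketch only illustrates the "big lock" structure and is not QEMU source; the function names are invented.

// Generic "big lock" sketch, illustrating why vcpu-style threads contend:
// each thread runs its own work lock-free, but every device access funnels
// through a single global mutex. Not QEMU code.
#include <mutex>
#include <thread>
#include <vector>

std::mutex global_mutex;                 // stand-in for QEMU's global mutex

void emulate_device_access(int cpu) {
    std::lock_guard<std::mutex> lk(global_mutex);   // serializes all vcpus
    // ... device emulation / main-loop work happens here ...
    (void)cpu;
}

void vcpu_thread(int cpu, int exits) {
    for (int i = 0; i < exits; ++i) {
        // Guest code runs concurrently (address-space isolated), no lock held.
        // On a pio/mmio exit, the thread must take the global mutex:
        emulate_device_access(cpu);
    }
}

int main() {
    std::vector<std::thread> vcpus;
    for (int cpu = 0; cpu < 4; ++cpu)
        vcpus.emplace_back(vcpu_thread, cpu, 1000);
    for (auto &t : vcpus) t.join();      // contention grows with the vcpu count
}

Moving work off the global mutex and onto per-device event loop threads, as the dataplane slides suggest, shrinks the critical section that all of these threads fight over.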
The Design and Use of the ACE Reactor: An Object-Oriented Framework for Event Demultiplexing
The Design and Use of the ACE Reactor: An Object-Oriented Framework for Event Demultiplexing. Douglas C. Schmidt and Irfan Pyarali, {schmidt,irfan}@cs.wustl.edu, Department of Computer Science, Washington University, St. Louis 63130.

1 Introduction. This article describes the design and implementation of the Reactor pattern [1] contained in the ACE framework [2]. The Reactor pattern handles service requests that are delivered concurrently to an application by one or more clients. Each service of the application is implemented by a separate event handler that contains one or more methods responsible for processing service-specific requests. In the implementation of the Reactor pattern described in this paper, event handler dispatching is performed by an ACE Reactor. The ACE Reactor combines the demultiplexing of input and output (I/O) events with other types of events, such as timers and signals. At the core of the ACE Reactor implementation is a synchronous event demultiplexer, such as select [3] or WaitForMultipleObjects [4]. When the demultiplexer indicates the occurrence of designated events, the ACE Reactor automatically dispatches the method(s) of pre-registered event handlers, which perform application-specified services in response to the events.

... create concrete event handlers by inheriting from the ACE Event Handler base class. This class specifies virtual methods that handle various types of events, such as I/O events, timer events, signals, and synchronization events. Applications that use the Reactor framework create concrete event handlers and register them with the ACE Reactor. Figure 1 shows the key components in the ACE Reactor. This figure depicts concrete event handlers that implement ... [Figure 1: a Logging Acceptor and Logging Handlers registered with the ACE Reactor at the application level; numbered interactions: 1: handle_input(), 2: accept(), 3: make_handler(), 4: handle_input(), 5: recv(request), 6: process(request).]
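To make the pattern concrete, here is a minimal reactor sketch built directly on select(). It mirrors the structure described above (handlers registered with a demultiplexing loop that dispatches handle_input()), but it is my own illustration and does not use the actual ACE class names or APIs.

// Minimal reactor sketch using select(): handlers register interest in a
// descriptor; the loop demultiplexes and dispatches handle_input().
// Illustrative only; this is not the ACE Reactor API.
#include <sys/select.h>
#include <map>

class EventHandler {
public:
    virtual ~EventHandler() = default;
    virtual void handle_input(int fd) = 0;   // called when fd is readable
};

class Reactor {
public:
    void register_handler(int fd, EventHandler *h) { handlers_[fd] = h; }
    void remove_handler(int fd) { handlers_.erase(fd); }

    void handle_events() {                   // one iteration of the event loop
        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (auto &e : handlers_) {
            FD_SET(e.first, &readfds);
            if (e.first > maxfd) maxfd = e.first;
        }
        if (maxfd < 0) return;
        if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) <= 0) return;
        for (auto &e : handlers_) {
            if (FD_ISSET(e.first, &readfds))
                e.second->handle_input(e.first);   // dispatch to the handler
            // (handlers must not modify the registry during dispatch in this sketch)
        }
    }

private:
    std::map<int, EventHandler *> handlers_;
};

A handler wrapping a listening socket would accept() inside handle_input() and register the new connection's handler with the same reactor, which is essentially the acceptor role shown in Figure 1.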
A Study on Performance Improvement of Tor Network with Active Circuit Switching
A Study on Performance Improvement of Tor Network with Active Circuit Switching. Author: Kale, Timothy Girry. Doctoral Thesis in Engineering, Graduate School of Information Systems, Department of Information Network Systems, The University of Electro-Communications, September 2015. Approved by supervisory committee: Chairperson: Assoc. Prof. Satoshi Ohzahata; Members: Prof. Hiroyoshi Morita, Prof. Nagao Ogino, Assoc. Prof. Tomohiro Ogawa, Assoc. Prof. Shinobu Miwa. © Copyright 2015 by Kale, Timothy Girry. All Rights Reserved.

Abstract (translated from the Japanese): The Tor network is an application-level overlay network based on distributed circuit switching, providing anonymous communication through onion routers deployed around the world. Because circuits with different transfer rates are aggregated onto a single TCP connection to carry data, contention arises, leaving Tor vulnerable to network congestion and poor performance. Most of Tor's available network capacity is consumed by bulk users' traffic, which increases the delay of lightweight interactive traffic. The unfair distribution of traffic across circuits turns Tor routers into bottlenecks, increasing delay and degrading communication quality. This discourages many users from using the Tor network, and as a result its performance and anonymity are significantly reduced. This dissertation first investigates the delay problem of the Tor network. A measurement study was carried out to analyze Tor circuit traffic and the computational delay of encrypted communication in detail. Measurements on a laboratory testbed and on the real Tor network were used to analyze where delay arises and how packet loss causes delay, showing that circuit aggregation is the cause of the increased delay in the Tor network. The analysis also revealed that the design of the Tor network has several performance problems related to low network capacity, large delays, and growth of the TCP send buffer queue length. To address this performance degradation under limited network capacity, a dynamic circuit switching scheme is proposed and implemented at the entry OR, monitoring traffic congestion and dynamically building circuits to ORs with larger bandwidth. The proposed scheme modifies Tor's current algorithm with a metric that takes buffer overflow and socket-unwritable events into account ...
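The switching idea in the abstract, watching each circuit for signs of congestion at the entry relay and rebuilding it through a higher-bandwidth OR once a threshold is crossed, could be sketched as follows. This is purely illustrative logic, not code from the thesis or from Tor; the Relay and Circuit structures and the threshold are invented.

// Illustrative sketch of congestion-triggered circuit switching (not Tor code):
// if a circuit keeps overflowing its buffer or its socket stays unwritable,
// rebuild it through the known relay with the most advertised bandwidth.
#include <string>
#include <vector>

struct Relay   { std::string id; double bandwidth; };        // advertised capacity
struct Circuit { Relay middle; int overflow_events; int unwritable_events; };

bool looks_congested(const Circuit &c) {
    // Hypothetical metric combining the two event types named in the abstract.
    return c.overflow_events + c.unwritable_events > 10;
}

Circuit maybe_switch(const Circuit &current, const std::vector<Relay> &candidates) {
    if (!looks_congested(current) || candidates.empty()) return current;
    const Relay *best = &candidates.front();
    for (const Relay &r : candidates)
        if (r.bandwidth > best->bandwidth) best = &r;         // prefer bigger pipes
    return Circuit{*best, 0, 0};                              // fresh circuit, counters reset
}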
Events vs. Threads. COS 316: Principles of Computer System Design
Events vs. Threads. COS 316: Principles of Computer System Design. Amit Levy & Wyatt Lloyd.

Two Abstractions for Supporting Concurrency. Problem: computationally light applications with a large state-space.
● How to support graphical interfaces with many possible interactions in each state?
● How to scale servers to handle many simultaneous requests?
Corollary: how do we best utilize the CPU in I/O-bound workloads?
● Ideally, the CPU should not be a bottleneck for saturating I/O.

[Diagram: threads model, with several threads each alternating between waiting for its own next event and handling it.]

Threads
● One thread per sequence of related I/O operations
  ○ E.g. one thread per client TCP connection
● When the next I/O isn't ready, block the thread until it is
● State machine is implicit in the sequential program
  ○ E.g., parse HTTP request, then send DB query, then read file from disk, then send HTTP response
● Only keeping track of relevant next I/O operations
● Any shared state must be synchronized
  ○ E.g. with locks, channels, etc.
● In theory, scale by having a thread per connection

[Diagram: events model, with a single loop that waits for any event and dispatches to the handler for that event kind.]

Events
● A single "event loop" waits for the next event (of any kind)
● When an event arrives, handle it "to completion"
  ○ i.e. return to the event handler instead of blocking for the next event
● The event loop keeps track of the state machine explicitly
  ○ How to handle an event depends on which state I'm in
  ○ E.g. ...
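The contrast between the two slides can be made concrete with a small sketch of my own (not course code): the same "wait, then handle" structure written once as a thread-per-connection function and once as an explicit event loop over poll().

// Sketch contrasting the two abstractions (illustrative, not course code).
#include <poll.h>
#include <unistd.h>
#include <vector>

// Threads: one thread per connection; the state machine is implicit in the
// sequential code, and each blocking read parks only this thread.
void per_connection_thread(int fd) {
    char buf[512];
    while (read(fd, buf, sizeof(buf)) > 0) {
        // parse request, send DB query, read file, send response ... then loop
    }
    close(fd);
}

// Events: one loop waits for any descriptor and handles whichever is ready
// "to completion"; per-connection state must be tracked explicitly.
void event_loop(std::vector<int> fds) {
    std::vector<pollfd> pfds;
    for (int fd : fds) pfds.push_back({fd, POLLIN, 0});
    for (;;) {
        if (poll(pfds.data(), pfds.size(), -1) <= 0) return;
        for (pollfd &p : pfds) {
            if (p.revents & POLLIN) {
                char buf[512];
                ssize_t n = read(p.fd, buf, sizeof(buf));
                if (n <= 0) { /* record "closed" in this connection's state */ }
                // advance this connection's explicit state machine ...
            }
        }
    }
}

The thread version scales by adding threads (and paying for their stacks and synchronization); the event version keeps one thread busy but pushes the bookkeeping of "where was I?" into explicit per-connection state.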
VOLTTRON 3.0: User Guide
PNNL-24907. Prepared for the U.S. Department of Energy under Contract DE-AC05-76RL01830.

VOLTTRON 3.0: User Guide. RG Lutes, JN Haack, S Katipamula, KE Monson, BA Akyol, BJ Carpenter. November 2015.

DISCLAIMER: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor Battelle Memorial Institute, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or Battelle Memorial Institute. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.

Pacific Northwest National Laboratory, operated by Battelle for the United States Department of Energy under Contract DE-AC05-76RL01830. Printed in the United States of America. Available to DOE and DOE contractors from the Office of Scientific and Technical Information, P.O. Box 62, Oak Ridge, TN 37831-0062; ph: (865) 576-8401, fax: (865) 576-5728, email: [email protected]. Available to the public from the National Technical Information Service, U.S. Department of Commerce, 5285 Port Royal Rd., Springfield, VA 22161; ph: (800) 553-6847, fax: (703) 605-6900, email: [email protected], online ordering: http://www.ntis.gov/ordering.htm. This document was printed on recycled paper.