COS 318: Operating Systems Message Passing


(http://www.cs.princeton.edu/courses/cos318/)

Motivation
- Locks, semaphores, and monitors are good, but they only work under the shared-address-space model:
  - threads in the same process
  - processes that share an address space
- We have assumed that processes/threads communicate via shared data (a counter, a producer-consumer buffer, ...).
- How do we synchronize and communicate among processes with different address spaces?
  - Inter-process communication (IPC)
- Can we have a single set of primitives that works for all cases: single machine, multiple machines running the same OS, multiple machines running different OSes, distributed systems?

With No Shared Address Space
- No need for explicit mutual exclusion primitives: processes cannot touch the same data directly.
- Processes communicate by sending and receiving explicit messages: message passing.
- Synchronization is implicit in message passing:
  - no need for explicit mutual exclusion
  - event ordering comes from the sending and receiving of messages
- Message passing is more portable to different environments, though it lacks some of the convenience of a shared address space.
- Typically, communication is consummated via a send and a matching receive.

Sending A Message
- Within a computer: P1 calls Send() and P2 calls Recv(); the message goes through the OS kernel. P1 can send to P2, and P2 can send to P1.
- Across a network: Send() on one machine, Recv() on another; the message travels over the network between the two OSes (the networking details are covered in COS 461).

Simple Send and Receive

    send(dest, data), receive(src, data)

- The send "data" argument specifies where the data are in the sender's address space.
- The recv "data" argument specifies where the incoming message data should be put in the receiver's address space.

Simple API
- Destination or source:
  - direct address: node id, process id
  - indirect address: mailbox, socket, channel, ...
- Data:
  - buffer (address) and size
  - anything else that specifies the source data or the destination data structure

Simple Semantics
- Send does not return until the data have been copied out of the source data structure.
  - The copy could be to the destination process/machine, or to an OS buffer, ... (there are variants depending on this).
  - Either way, the source data structure can then be safely overwritten without changing the data carried by the message.
- Receive does not return until data have been received and copied into the destination data structure.
- This is called synchronous message passing.
  - It makes synchronization implicit and easy.
  - But processes wait around a lot, so it can hurt performance.

Issues/Options
- Asynchronous vs. synchronous
- Buffering of messages
- Matching of messages
- Direct vs. indirect communication/specification
- Data alone, or function invocation?
- How to handle exceptions (when bad things happen)?

Synchronous vs. Asynchronous Send
- Synchronous: send(dest, msg)
  - Does not return until the data are copied out of the source data structure.
  - If a buffer is used for messaging and it is full, block.
- Asynchronous: status = async_send(dest, msg)
  - Returns before the data are copied out of the source data structure; blocks on a full buffer.
  - The application must check for completion (by checking the status, or by being notified/signaled):

        status = async_send(dest, msg);
        ...
        if (!send_complete(status))
            wait for completion;
        ...
        use msg data structure;

Synchronous vs. Asynchronous Receive
- Synchronous: recv(src, msg)
  - Returns data if there is a message; blocks on an empty buffer.
- Asynchronous: status = async_recv(src, msg)
  - Returns data if there is a message; otherwise returns a status, and the receiver can poll (probe) for arrival:

        status = async_recv(src, msg);
        if (status == SUCCESS)
            consume msg;
        ...
        while (probe(src) != HaveMSG)
            wait for msg arrival;
        recv(src, msg);
        consume msg;

Buffering
- No buffering: the sender must wait until the receiver receives the message (a rendezvous on each message).
- Finite buffer: the sender blocks when the buffer is full.

Synchronous Send/Recv Within a System
- Synchronous send(M): the send system call blocks if there is no buffer space in the kernel; otherwise it copies M into a kernel buffer.
- Synchronous recv(M): the recv system call blocks if there is no message in the kernel; otherwise it copies the message into the user buffer.
- How should the kernel buffer be managed? On distributed machines/OSes, there are buffers at one or both ends.

What If Buffers Fill Up?
- Make processes wait (can be hard to do when they are on different machines).
- Drop messages.
- Don't send fast enough to fill up the buffers: flow control.
- Credits:
  - Receivers provide credits based on space availability.
  - Senders don't send unless they have the credits to do so.

Direct Addressing Example

    Producer() {
        ...
        while (1) {
            produce item;
            recv(Consumer, &credit);
            send(Consumer, item);
        }
    }

    Consumer() {
        ...
        for (i = 0; i < N; i++)
            send(Producer, credit);
        while (1) {
            recv(Producer, &item);
            send(Producer, credit);
            consume item;
        }
    }

- Does this work?
- Would it work with multiple producers and 1 consumer?
- Would it work with 1 producer and multiple consumers?
- What about multiple producers and multiple consumers?

Indirect Addressing Example

    Producer() {
        ...
        while (1) {
            produce item;
            recv(prodMbox, &credit);
            send(consMbox, item);
        }
    }

    Consumer() {
        ...
        for (i = 0; i < N; i++)
            send(prodMbox, credit);
        while (1) {
            recv(consMbox, &item);
            send(prodMbox, credit);
            consume item;
        }
    }

- Would it work with multiple producers and 1 consumer?
- Would it work with 1 producer and multiple consumers?
- What about multiple producers and multiple consumers?

Indirect Communication
- Names: mailbox, socket, channel, ...
- Properties:
  - some allow only one-to-one communication (e.g. a pipe)
  - some allow many-to-one or one-to-many communication (e.g. a mailbox)

Mailbox Message Passing
- Message-oriented, one-way communication.
- Data structure: a mutex, a condition variable, and a buffer for messages (a sketch appears after the keyboard example below).
- Operations: init, open, close, send, receive, ...
- Does the sender know when the receiver gets a message?

Example: Keyboard Input
- The interrupt handler gets the input characters and hands them to the device thread.
- The device thread generates a message and sends it to the mailbox of an input process.
- [Figure: the interrupt handler and the device thread synchronize with a semaphore s and a mutex m; getchar() collects characters and V(s) wakes the device thread, which loops "while (1) { P(s); Acquire(m); convert ... Release(m); }" and delivers the converted input with mbox_send(M); the input process picks it up with mbox_recv(M).]
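To make the mailbox described above concrete, the sketch below builds a bounded-buffer mailbox from exactly the pieces listed on the slide (a mutex, condition variables, and a buffer of messages) using POSIX threads. This is an illustrative sketch only: the slot count, message size, and the small demo in main() are assumptions, not part of the course material.

```c
/* mbox.c - a minimal intra-process mailbox: a mutex, condition variables,
 * and a bounded buffer of fixed-size messages. Illustrative sketch only. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define MBOX_SLOTS 8    /* arbitrary capacity for the example */
#define MSG_SIZE   64   /* arbitrary fixed message size        */

typedef struct {
    char            buf[MBOX_SLOTS][MSG_SIZE];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
} mbox_t;

void mbox_init(mbox_t *mb) {
    mb->head = mb->tail = mb->count = 0;
    pthread_mutex_init(&mb->lock, NULL);
    pthread_cond_init(&mb->not_full, NULL);
    pthread_cond_init(&mb->not_empty, NULL);
}

/* Blocking send: wait while the buffer is full, then copy the message in. */
void mbox_send(mbox_t *mb, const char *msg) {
    pthread_mutex_lock(&mb->lock);
    while (mb->count == MBOX_SLOTS)
        pthread_cond_wait(&mb->not_full, &mb->lock);
    strncpy(mb->buf[mb->tail], msg, MSG_SIZE - 1);
    mb->buf[mb->tail][MSG_SIZE - 1] = '\0';
    mb->tail = (mb->tail + 1) % MBOX_SLOTS;
    mb->count++;
    pthread_cond_signal(&mb->not_empty);
    pthread_mutex_unlock(&mb->lock);
}

/* Blocking receive: wait while the buffer is empty, then copy a message out. */
void mbox_recv(mbox_t *mb, char *msg) {
    pthread_mutex_lock(&mb->lock);
    while (mb->count == 0)
        pthread_cond_wait(&mb->not_empty, &mb->lock);
    strncpy(msg, mb->buf[mb->head], MSG_SIZE);
    mb->head = (mb->head + 1) % MBOX_SLOTS;
    mb->count--;
    pthread_cond_signal(&mb->not_full);
    pthread_mutex_unlock(&mb->lock);
}

static mbox_t mb;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 4; i++) {
        char msg[MSG_SIZE];
        snprintf(msg, sizeof msg, "message %d", i);
        mbox_send(&mb, msg);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    char msg[MSG_SIZE];
    mbox_init(&mb);
    pthread_create(&t, NULL, producer, NULL);
    for (int i = 0; i < 4; i++) {
        mbox_recv(&mb, msg);
        printf("got: %s\n", msg);
    }
    pthread_join(t, NULL);
    return 0;
}
```

Note that mbox_send() returns as soon as the message has been copied into the mailbox buffer, so the sender does not learn when (or whether) the receiver actually picks the message up; that is exactly the question the slide raises.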
Sockets
- Bidirectional (unlike a mailbox).
- Unix domain sockets (IPC on one machine) and network sockets (across the network) share the same APIs; the application calls send/recv on a socket, and the kernel's UDP/TCP protocol code moves the data.
- Two types:
  - Datagram socket (UDP): a collection of messages; best effort; connectionless.
  - Stream socket (TCP): a stream of bytes (like a pipe); reliable; connection-oriented.

Network Socket Address Binding
- A network socket binds to:
  - a host: the IP address (e.g. 128.112.9.1)
  - a protocol: UDP or TCP
  - a port
- Well-known ports (0..1023) are reserved, e.g. port 80 for the Web; unused ports (1025..65535) are available for clients.
- Why ports? Indirection: there is no need to know which process to communicate with, and updating the software on one side won't affect the other side.

Communication with Stream Sockets
- Server: create a socket, bind it to a port, listen on the port, accept a connection request, receive the request, send the response.
- Client: create a socket, connect to the server (establishing the connection), send the request, receive the response.

Sockets API
- Create and close a socket:
      sockid = socket(af, type, protocol);
      sockerr = close(sockid);
- Bind a socket to a local address:
      sockerr = bind(sockid, localaddr, addrlength);
- Negotiate the connection:
      listen(sockid, length);
      accept(sockid, addr, length);
- Connect a socket to a destination:
      connect(sockid, destaddr, addrlength);
- Message passing:
      send(sockid, buf, size, flags);
      recv(sockid, buf, size, flags);
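The sketch below strings the calls above together into a minimal one-shot client/server pair. The port number (5318), the ping/pong payloads, and the pared-down error handling are illustrative assumptions only.

```c
/* stream.c - one-shot TCP exchange: run "./stream server" in one terminal,
 * then "./stream client" in another. Illustrative sketch only. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define PORT 5318   /* arbitrary port for the example */

static void die(const char *msg) { perror(msg); exit(1); }

static void run_server(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);          /* create a socket      */
    if (s < 0) die("socket");
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) die("bind");
    if (listen(s, 5) < 0) die("listen");              /* listen on the port   */
    int c = accept(s, NULL, NULL);                    /* accept a connection  */
    if (c < 0) die("accept");
    char buf[128];
    ssize_t n = recv(c, buf, sizeof buf - 1, 0);      /* receive the request  */
    if (n < 0) die("recv");
    buf[n] = '\0';
    printf("server got: %s\n", buf);
    send(c, "pong", 4, 0);                            /* send the response    */
    close(c);
    close(s);
}

static void run_client(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);          /* create a socket      */
    if (s < 0) die("socket");
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) die("connect");
    send(s, "ping", 4, 0);                            /* send the request     */
    char buf[128];
    ssize_t n = recv(s, buf, sizeof buf - 1, 0);      /* receive the response */
    if (n < 0) die("recv");
    buf[n] = '\0';
    printf("client got: %s\n", buf);
    close(s);
}

int main(int argc, char **argv) {
    if (argc == 2 && strcmp(argv[1], "server") == 0) run_server();
    else run_client();
    return 0;
}
```

The server side follows the socket/bind/listen/accept/recv/send sequence of the diagram, and the client side follows socket/connect/send/recv.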
Unix Pipes
- An output stream connected to an input stream by a chunk of memory (a queue of bytes).
- Send (the write call) does not block waiting for the receiver, because the OS buffers the data; receive (the read call) blocks when the pipe is empty.
- Buffering is provided by the OS.

What If Things Go Bad?
- R waits for a message from S, but S has terminated: R may be blocked forever.
- S sends a message to R, but R has terminated: S has no buffer and will be blocked forever.

Exception: Message Loss
- Use acks and timeouts to detect and retransmit a lost message:
  - The receiver sends an ack for each message.
  - The sender blocks until the ack comes back or a timeout expires:
        status = send(dest, msg, timeout);
  - If the timeout expires and no ack has arrived, retransmit the message.
- Issues:
  - duplicates
  - losing ack messages

Exception: Message Loss, contd.
- Retransmission must handle:
  - duplicate messages on the receiver side
  - out-of-sequence ack messages on the sender side
- Retransmission:
  - Use a sequence number for each message to identify duplicates.
  - Remove duplicates on the receiver side.
  - The sender retransmits on an out-of-sequence ack.
- Reduce ack messages:
  - bundle ack messages
  - piggy-back acks on messages sent in the other direction
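The ack/timeout/sequence-number scheme just described can be sketched as a stop-and-wait sender over UDP datagrams (which may be silently dropped). The packet format, port, timeout value, and function names below are invented for illustration; a matching receiver (not shown) would ack every packet it gets and use the sequence number to discard duplicates caused by retransmission.

```c
/* stopwait.c - stop-and-wait sender sketch: each message carries a sequence
 * number; the sender retransmits until it sees an ack with that number. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

struct packet {
    uint32_t seq;       /* sequence number, used to detect duplicates */
    char     data[60];
};

/* Send one message reliably; returns 0 on success, -1 after max_tries timeouts. */
int reliable_send(int sock, const struct sockaddr_in *dest,
                  uint32_t seq, const char *data, int max_tries)
{
    struct packet pkt = { htonl(seq), "" };
    strncpy(pkt.data, data, sizeof pkt.data - 1);

    for (int attempt = 0; attempt < max_tries; attempt++) {
        sendto(sock, &pkt, sizeof pkt, 0,
               (const struct sockaddr *)dest, sizeof *dest);

        uint32_t ack;
        ssize_t n = recvfrom(sock, &ack, sizeof ack, 0, NULL, NULL);
        if (n == (ssize_t)sizeof ack && ntohl(ack) == seq)
            return 0;           /* matching ack: done                       */
        /* timeout or out-of-sequence ack: fall through and retransmit      */
    }
    return -1;
}

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    /* Block at most 500 ms in recvfrom() waiting for an ack. */
    struct timeval tv = { 0, 500 * 1000 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    struct sockaddr_in dest = {0};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(6318);               /* arbitrary receiver port */
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    for (uint32_t seq = 0; seq < 3; seq++) {
        if (reliable_send(sock, &dest, seq, "hello", 5) < 0)
            fprintf(stderr, "message %u: no ack, giving up\n", seq);
    }
    close(sock);
    return 0;
}
```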
Exception: Message Corruption
- A message is sent as the data plus a checksum (e.g. a CRC) computed over it.
- Detection:
  - Compute a checksum over the entire message and send the checksum (e.g. a CRC code) as part of the message.
  - Recompute the checksum on receive and compare it with the checksum carried in the message.
- Correction:
  - Trigger a retransmission.
  - Use correction codes to recover.

Message Passing Interface (MPI)
- A message-passing library for parallel machines:
  - implemented at user level, for high-performance computing
  - portable
- Basic set (6 functions): works for most parallel programs.
- Large set (125 functions):
  - blocking (or synchronous) message passing
  - non-blocking (or asynchronous) message passing
  - collective communication
- References: http://www.mpi-forum.org/

Remote Procedure Call (RPC)
- Make remote procedure calls:
  - similar to local procedure calls
  - examples: SunRPC, Java RMI
- Restrictions:
  - call by value
  - call by object reference (maintain consistency)
  - not call by reference
- Different from mailbox, socket, or MPI: remote execution, not just data transfer.
- References: B.

RPC Model
- The caller (client) makes an RPC call, including the arguments; a request message carries the arguments to the server.
- The server executes the function with the passed arguments; a reply message carries the return value back.
- The call then returns at the caller, the same as a local call.
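To illustrate what the client and server stubs in the RPC model do, here is a toy marshaling sketch for a single add(a, b) call. The message layout and stub names are invented, and the "network" is replaced by a direct call to the server stub; real systems such as SunRPC or Java RMI generate the stubs and ship the request and reply messages between machines.

```c
/* rpc_sketch.c - marshaling/unmarshaling for a toy add(a, b) RPC.
 * The "network" here is just a direct function call to the server stub. */
#include <stdint.h>
#include <stdio.h>

#define OP_ADD 1

struct request { uint32_t op; int32_t a, b; };   /* call-by-value arguments */
struct reply   { int32_t result; };

/* The real procedure, which lives on the server. */
static int32_t add(int32_t a, int32_t b) { return a + b; }

/* Server stub: unmarshal the request, execute, marshal the reply. */
static void server_dispatch(const struct request *req, struct reply *rep) {
    switch (req->op) {
    case OP_ADD:
        rep->result = add(req->a, req->b);
        break;
    default:
        rep->result = 0;   /* a real RPC system would return an error code */
    }
}

/* Client stub: looks like a local call, but builds a request message,
 * "sends" it, waits for the reply message, and returns the unmarshaled result. */
static int32_t rpc_add(int32_t a, int32_t b) {
    struct request req = { OP_ADD, a, b };
    struct reply   rep;
    /* In a real system the request would be sent to the server here and the
     * caller would block until the reply message arrives. */
    server_dispatch(&req, &rep);
    return rep.result;
}

int main(void) {
    printf("rpc_add(3, 4) = %d\n", rpc_add(3, 4));   /* used like a local call */
    return 0;
}
```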