Operating systems (vimia219)

Collaboration of Tasks

Tamás Kovácsházy, PhD
13th Topic: Inter Process Communication with Message Passing

Budapest University of Technology and Economics
Department of Measurement and Information Systems

Looking back: communication solutions
. Using shared memory (RAM or PRAM model):
o Among threads running in the context of a process (shared memory of the process)
. Messages:
o No shared memory
• Among processes running inside an operating system
• Distributed systems (network communication based)
. Inter Process Communication, IPC

Messages
. Different from the same word used in computer networks
o We consider a more generic notion of a message
. Message passing, for example:
o A system call
o A TCP/IP connection (TCP) or message (UDP) for internal (localhost) or external communication (among machines)
. In most cases they are implemented as OS API function/method calls resulting in a system call
. The operating system implements them through its services

Some notes
. Semaphores, critical section objects, and mutexes are also implemented by the OS and handled through system calls
o Threads running in the context of a process communicate using shared memory (fast, low resource utilization)
o Mutual exclusion and synchronization are solved by messages (using system calls)
o This has some overhead:
• Experiments: lockless programming, transactional memory, etc.
• There is no perfect solution, but we can pick one that is better than the others (Churchill is right)
• The right solution is application and software architecture dependent
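The overhead of kernel-mediated synchronization is what motivates the lockless experiments mentioned above. As a minimal illustration (not part of the original slides; it assumes C11 atomics and POSIX threads, compiled with -std=c11 -pthread), a simple shared counter can be updated without any mutex or system call:

```c
/* Lockless counter sketch with C11 atomics: no mutex or system call is
 * needed for this simple shared update between two threads. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);   /* single atomic operation, no lock */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", atomic_load(&counter));   /* 200000 */
    return 0;
}
```

This only works for data that the hardware can update atomically; more complex shared structures still need locks or message passing.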

Properties of message passing
. Compared to shared memory:
o Higher delay
o Lower bandwidth
o Unreliable communication channel
• Shared memory is reliable with a probability of 1
– RAM, PRAM model
• This is not true even for system calls in an operating system
– System overload may happen!
• Using a computer network is unreliable by definition
– Random and intentional errors
– Intentional errors are worse, because they target the vulnerabilities of the system directly

Addressing messages
. As in computer networks...
. A given process (unicast address)
. All processes (broadcast address)
o E.g. power management messages
• Standby, Hibernate, PowerOff, etc.
. A group of processes (multicast address)
. One process from a group of processes (anycast address)
o E.g. one process is selected to serve the request from among the processes that can serve it because they run a specific service

Direct communication
. Symmetric message based communication
o send(P, message)
o receive(Q, message)
o P and Q are process identifiers
o Q, the sender, must be specified when receive() is called!
o The message is a data structure containing the information to be sent
. Asymmetric message based communication
o send(P, message)
o receive(id, message)
o P is the process identifier of the recipient
o The id identifies the sender; the receiver receives from anybody!
• In other words, id is a return value…
o The message is a data structure containing the information to be sent
. There is a direct reference in the code to the receiver, and in the symmetric case also to the sender
o Not a good idea...
o Makes everything too complex.
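A minimal sketch of the asymmetric case (not from the slides; it uses POSIX real-time signals as one concrete mechanism, where the recipient is addressed by PID and the sender's identity arrives only as a "return value" in si_pid):

```c
/* Asymmetric direct communication sketch using POSIX real-time signals:
 * sigqueue(P, ...) names the recipient explicitly, while the receiver
 * learns the sender's identity (si_pid) only when the message arrives. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);   /* block so we can receive synchronously */

    pid_t child = fork();
    if (child == 0) {                     /* sender: addresses the parent directly */
        union sigval v = { .sival_int = 42 };
        sigqueue(getppid(), SIGRTMIN, v); /* send(P, message) with P = parent PID */
        _exit(0);
    }

    siginfo_t info;
    sigwaitinfo(&set, &info);             /* receive(id, message): sender not named */
    printf("got %d from pid %d\n", info.si_value.sival_int, (int)info.si_pid);
    return 0;
}
```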

Indirect communication
. There is an entity between the communicating parties
o Proxy design pattern
. This entity can be a Mailbox, MessageQueue, Port, etc.
. Interface: constructor and destructor plus
o send(A, message)
o receive(A, message)
. "A" is the identifier of the entity
o In distributed systems it may contain, in part, the identifier of the node (the identifier of a computer)
(A minimal POSIX sketch follows the diagram below.)

Diagram: process P → A (mailbox) → process Q
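A minimal sketch of indirect communication, assuming POSIX message queues (the queue name "/A" and the sizes are arbitrary example values; compile with -lrt on Linux):

```c
/* Indirect communication sketch: the named POSIX message queue "/A" plays
 * the role of the mailbox A between sender P and receiver Q; both sides
 * name only the entity, never each other. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t a = mq_open("/A", O_CREAT | O_RDWR, 0600, &attr);   /* "constructor" */

    if (fork() == 0) {                        /* P: sender only names the mailbox */
        mq_send(a, "hello", strlen("hello") + 1, 0);
        _exit(0);
    }

    char buf[64];                             /* Q: receiver only names the mailbox */
    ssize_t n = mq_receive(a, buf, sizeof buf, NULL);
    printf("received %zd bytes: %s\n", n, buf);

    mq_close(a);
    mq_unlink("/A");                          /* "destructor" */
    return 0;
}
```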

Additional properties
. At least one sender
. At least one receiver
o After the message is received by a process it is deleted (single read), or
o The message can be read multiple times (an explicit delete is needed to remove it)
• E.g. System V Shared Memory in UNIX
. The owner can be:
o The operating system
• It exists independently of the processes which use it
o A process (it is located in the memory area of the process)
• It exists only together with that process

Blocking
. Non-blocking call = asynchronous call
o Results and side effects are not available when the call returns (they may not have happened at all)
o The call only starts the real functionality; it may complete after the call has returned
o Handling return values, results, and side effects needs some other mechanism on the caller's side:
• E.g. events, signals, callback functions, etc. are used
. Blocking call = synchronous call
o Results and side effects are available when the call returns (they have happened)
o Handling of return values is simple...

Blocking on the sender side
. Blocking send():
o The send() call does not return until the message is received (direct communication) or stored in the communication entity (indirect communication)
o How do we handle errors?
• The send() call returns with an error code
. Non-blocking send():
o After handing over the message locally, it returns (it does not wait for delivery or a positive acknowledgement)
o Completion and errors are reported later, e.g. via a callback function, signal handling, etc.
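A minimal sketch of the difference on the sender side, assuming a POSIX pipe (not from the slides): a blocking write() would wait when the pipe is full, while with O_NONBLOCK it returns immediately with EAGAIN.

```c
/* Blocking vs. non-blocking send sketch on a pipe: write() blocks when the
 * pipe is full, unless O_NONBLOCK is set, in which case it fails with EAGAIN
 * and the caller must handle the "not sent yet" case itself. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                                   /* fd[1] = send side, fd[0] = receive side */
    fcntl(fd[1], F_SETFL, O_NONBLOCK);          /* switch the sender to non-blocking mode */

    char chunk[4096];
    memset(chunk, 'x', sizeof chunk);

    for (;;) {                                  /* fill the pipe until a send would block */
        ssize_t n = write(fd[1], chunk, sizeof chunk);
        if (n < 0) {
            if (errno == EAGAIN)
                puts("pipe full: a blocking send() would wait here, "
                     "a non-blocking one returns immediately");
            break;
        }
    }
    return 0;
}
```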

Blocking on the receiver side
. Blocking receive():
o The receive() call does not return until something is received (possibly with a timeout)
o Classic example: a blocking recv() on a TCP/UDP socket (or accept() on a listening TCP socket)
. Non-blocking receive():
o The receive() call returns immediately
• If there is a received message, it returns with that
• If there is no message, this is indicated (e.g. an empty message with 0 length, a null reference, an error code, etc.)
• If there is no message and the non-blocking receive is called in an infinite loop, the result is busy waiting (it eats the CPU)
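A minimal non-blocking receive sketch, assuming a UDP socket on the loopback interface (port 5000 is an arbitrary example; MSG_DONTWAIT is the Linux/BSD per-call flag). EAGAIN/EWOULDBLOCK means "no message yet", so a real program should use poll()/select() or sleep instead of spinning.

```c
/* Non-blocking receive sketch: recv() with MSG_DONTWAIT returns at once;
 * EAGAIN/EWOULDBLOCK signals that no message has arrived. Calling this in
 * a tight loop would be busy waiting. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(5000),          /* example port */
                                .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    char buf[512];
    ssize_t n = recv(s, buf, sizeof buf, MSG_DONTWAIT);            /* non-blocking */
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        puts("no message yet; a blocking recv() would wait here");
    else
        printf("received %zd bytes\n", n);

    close(s);
    return 0;
}
```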

Implementations 1.
. Mailbox:
o Indirect communication
o A single message is stored, or multiple ones, but the maximum number of messages is specified
o The mailbox is handled at the OS level
. MessageQueue:
o Indirect communication
o An unlimited number of messages can be stored
• Of course, system resources limit the number
o Message based middlewares
• MSMQ, IBM's WebSphere MQ, Oracle Advanced Queuing (AQ), JBoss Messaging, Apache Qpid
. Embedded operating systems typically support Mailbox/MessageQueue type solutions even for communication among threads (see the sketch below)
o Simple, problem-free solution
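A sketch of the embedded case, assuming FreeRTOS (the task and queue names are illustrative, and the program needs a FreeRTOS port to actually run): two threads (tasks) communicate through a bounded queue owned by the OS kernel.

```c
/* Mailbox/queue sketch in an embedded OS (FreeRTOS assumed): one task sends
 * fixed-size messages into a bounded queue, another task receives them. */
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

static QueueHandle_t mailbox;               /* bounded: at most 8 messages */

static void producer(void *arg)
{
    (void)arg;
    int msg = 0;
    for (;;) {
        xQueueSend(mailbox, &msg, portMAX_DELAY);   /* blocks if the queue is full */
        msg++;
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

static void consumer(void *arg)
{
    (void)arg;
    int msg;
    for (;;) {
        if (xQueueReceive(mailbox, &msg, portMAX_DELAY) == pdTRUE) {
            /* process msg */
        }
    }
}

int main(void)
{
    mailbox = xQueueCreate(8, sizeof(int));
    xTaskCreate(producer, "prod", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(consumer, "cons", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();                  /* never returns on a real target */
    return 0;
}
```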

Implementations 2.
. TCP/IP TCP or UDP port:
o Direct communication
o Socket interface
o Localhost (127.0.0.1/8) can be used inside the machine
o Low level solution; several middlewares are based on it:
• Remote Procedure Call, RPC
• Remote Method Invocation:
– CORBA (Common Object Request Broker Architecture)
– Java RMI (Remote Method Invocation)
– DCOM / .NET Remoting
– SOAP (Simple Object Access Protocol)
• Message based middlewares (we have already talked about them)
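A minimal TCP loopback sketch, assuming the POSIX socket interface (port 5003 is an arbitrary example value): two processes on the same machine communicate through 127.0.0.1.

```c
/* Minimal TCP loopback sketch: a client process connects to a server on
 * 127.0.0.1 and sends one message through the socket interface. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(5003) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    int srv = socket(AF_INET, SOCK_STREAM, 0);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);                              /* passive side: wait for connections */

    if (fork() == 0) {                           /* client process */
        int c = socket(AF_INET, SOCK_STREAM, 0);
        connect(c, (struct sockaddr *)&addr, sizeof addr);
        send(c, "hello over localhost", 20, 0);
        close(c);
        _exit(0);
    }

    int conn = accept(srv, NULL, NULL);          /* blocks until the client connects */
    char buf[64];
    ssize_t n = recv(conn, buf, sizeof buf, 0);
    printf("server received %zd bytes\n", n);
    close(conn);
    close(srv);
    return 0;
}
```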

Implementations 3.
. Various pipes and streams:
o Typically direct, but can be indirect (named pipe)
o E.g. UNIX pipe, Windows Named Pipe, RTLinux FIFO
. System V Shared Memory (UNIX, Linux):
o Direct
o Memory based interface using the special features of the MMU
o It will be introduced in the UNIX lectures
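A minimal named pipe (FIFO) sketch, assuming POSIX (the path /tmp/demo_fifo is an arbitrary example): unlike an ordinary pipe, the two processes find each other through a filesystem name, i.e. indirectly.

```c
/* Named pipe (FIFO) sketch: indirect communication through a filesystem
 * name rather than a descriptor inherited from a common parent. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);                       /* create the named entity */

    if (fork() == 0) {                        /* writer: only knows the name */
        int w = open(path, O_WRONLY);
        write(w, "hello", 5);
        close(w);
        _exit(0);
    }

    int r = open(path, O_RDONLY);             /* reader: only knows the name */
    char buf[16];
    ssize_t n = read(r, buf, sizeof buf);
    printf("read %zd bytes from the FIFO\n", n);
    close(r);
    unlink(path);
    return 0;
}
```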

Remote Procedure Call
. It is introduced in detail:
o It is still used today, primarily in an object-oriented setting, where it is called Remote Method Invocation
o It is very illuminating to see how it works...
. Remote Procedure Call, RPC:
o Calling a function located in the memory of another process from the calling process, using messages
o The caller blocks while waiting for the answer
o The called function runs in a thread of the called process

Architecture of RPC

[Figure: Architecture of RPC. Two computers connected by a network; on the client, process P contains the calling code, and on the server, process Q contains the called function/method. Each process has its own code, data, heap, stack and free memory. The call passes from the calling code through the P-side stub, the client OS and hardware, over the network, to the server OS and hardware, and finally through the Q-side stub to the called code.]

How the programmer sees RPC
. Practically, using a remote procedure is like calling a local one (actually, a local one is called)
o The function is available as a stub function in the program (prepared by the RPC development system)
o The programmer needs to know nothing about where the actual function will be executed (RPC hides the details)
. Implementing the actual function is similar to writing a local function
o The programmer gets an interface definition (prepared by the RPC development system) and implements the functionality
o The programmer needs to know nothing about where the actual function is called from (RPC hides the details)

RPC in operation 1.
. The parameters and the return value of the call have types
o A structured message is sent
o Platform independence is realized by the operating system and the development system (compiler or interpreter)
• All sent data is converted to standard formats, e.g. Basic Encoding Rules (BER), XML, etc.
. The client program calls a normal local function
o The local function is an automatically generated stub function handling the RPC
o The stub hides the details of communication from the programmer using it
o We do not discuss here how the server is found
• Let us assume that it is known…

RPC in operation 2.
. During the call, the responsibilities of the client side stub are:
o Packing the parameters of the call into a platform independent form, putting them into a message (or messages), and sending it to the server
o To implement this it uses the services of the operating system and the computer network
. On the server side the RPC service gets the messages containing the parameters of the call (including the function name)
o It converts the parameters to a local form
o It calls the local function
o The return values are converted back to standard form
o The standard form return values are sent back to the client in a message (or messages)

RPC in operation 3.
. The client side stub receives the messages with the return values in standard form
o It converts the standard form return values to the local form
o It returns from the stub into the calling program with the return values converted back to local form
. The client thread calling the remote code
o Waits for an event caused by the incoming message containing the return values
. The server thread waits for incoming calls, and if there is one, it runs (executes the call)
o It blocks while listening for incoming messages
(A minimal end-to-end sketch of the whole round trip follows below.)
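As an illustration of the three "RPC in operation" slides (not part of the original material): a minimal end-to-end RPC sketch over a localhost UDP socket for a hypothetical remote function int add(int a, int b). The port, the function id, and the one-second wait are arbitrary simplifications; marshalling uses htonl()/ntohl() as a stand-in for a real standard encoding such as BER or XML.

```c
/* Mini-RPC sketch: the client stub marshals the parameters, sends the
 * request, blocks for the reply, and unmarshals the result; the server
 * receives, dispatches to the local function, and sends the result back. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define RPC_PORT 5002
#define FN_ADD   1

static struct sockaddr_in server_addr(void)
{
    struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(RPC_PORT) };
    inet_pton(AF_INET, "127.0.0.1", &a.sin_addr);
    return a;
}

/* ---- client side: the stub looks like an ordinary local function ---- */
static int add(int a, int b)
{
    struct sockaddr_in srv = server_addr();
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    uint32_t req[3] = { htonl(FN_ADD), htonl((uint32_t)a), htonl((uint32_t)b) };
    sendto(s, req, sizeof req, 0, (struct sockaddr *)&srv, sizeof srv); /* marshal + send */
    uint32_t rep;
    recv(s, &rep, sizeof rep, 0);                    /* block until the reply arrives */
    close(s);
    return (int)ntohl(rep);                          /* unmarshal the return value */
}

/* ---- server side: receive, dispatch to the local function, reply ---- */
static void serve_one_call(void)
{
    struct sockaddr_in srv = server_addr(), cli;
    socklen_t cli_len = sizeof cli;
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    bind(s, (struct sockaddr *)&srv, sizeof srv);

    uint32_t req[3];
    recvfrom(s, req, sizeof req, 0, (struct sockaddr *)&cli, &cli_len);
    uint32_t result = 0;
    if (ntohl(req[0]) == FN_ADD)                     /* dispatch on the function id */
        result = (uint32_t)((int)ntohl(req[1]) + (int)ntohl(req[2]));
    uint32_t rep = htonl(result);
    sendto(s, &rep, sizeof rep, 0, (struct sockaddr *)&cli, cli_len);
    close(s);
}

int main(void)
{
    if (fork() == 0) { serve_one_call(); _exit(0); } /* server process */
    sleep(1);                                        /* crude wait for the server to bind */
    printf("add(2, 3) returned %d\n", add(2, 3));    /* remote call via the stub */
    return 0;
}
```

Here add() plays the role of the generated client stub and serve_one_call() the role of the server side RPC service plus dispatcher; a real RPC system generates both from the interface definition.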
