
4/14/2017

IPC, Threads, Races, Critical Sections
  7A. Inter-Process Communication
  3T. Threads
  7B. The Critical Section Problem

Inter-Process Communication
• the exchange of data between processes
• Goals
  – simplicity
  – convenience
  – generality
  – efficiency
  – security/privacy
  – robustness and reliability
• some of these turn out to be contradictory

OS Support For IPC
• wide range of semantics
  – may appear to be another kind of file
  – may involve very different APIs
• provide more powerful semantics
• more accurately reflect complex realities
• connection establishment mediated by the OS
  – to ensure authentication and authorization
• data exchange mediated by the OS
  – to protect processes from one another
  – to ensure data integrity and authenticity

Typical IPC Operations
• channel creation and destruction
• write/send/put
  – insert data into the channel
• read/receive/get
  – extract data from the channel
• channel content query
  – how much data is currently in the channel
• connection establishment and query
  – control connection of one channel end to another
  – who are the end-points, what is the status of connections

IPC: messages vs. streams
• streams
  – a continuous stream of bytes
  – read or write few or many bytes at a time
  – write and read buffer sizes are unrelated
  – stream may contain app-specific record delimiters
• messages (aka datagrams)
  – a sequence of distinct messages
  – each message has its own length (subject to limits)
  – a message is typically read/written as a unit
  – delivery of a message is typically all-or-nothing

IPC: flow-control
• queued messages consume system resources
  – buffered in the OS until the receiver asks for them
• many things can increase required buffer space
  – fast sender, non-responsive receiver
• there must be a way to limit required buffer space
  – back-pressure: block the sender or refuse the message
  – receiver side: drop connection or messages
  – this is usually handled by network protocols
• mechanisms to report stifle/flush to the sender

IPC: reliability and robustness
• reliable delivery (e.g. TCP vs. UDP)
  – networks can lose requests and responses
• a sent message may not be processed
  – receiver invalid, dead, or not responding
• when do we tell the sender "OK"?
  – queued locally? added to the receiver's input queue?
  – receiver has read? receiver has acknowledged?
• how persistent is the system in attempting to deliver?
  – retransmission, alternate routes, back-up servers, ...
• do channel/contents survive receiver restarts?
  – can a new server instance pick up where the old one left off?

Simplicity: pipelines
• data flows through a series of programs
  – ls | grep | sort | mail
  – macro processor | compiler | assembler
• data is a simple byte stream
  – buffered in the operating system
  – no need for intermediate temporary files
• there are no security/privacy/trust issues
  – all under the control of a single user
• error conditions
  – input: End of File; output: SIGPIPE

Generality: sockets
• connections between addresses/ports
  – connect/listen/accept
  – lookup: registry, DNS, service discovery protocols
• many data options
  – reliable (TCP) or best-effort datagrams (UDP)
  – streams, messages, remote procedure calls, …
• complex flow control and error handling
  – retransmissions, timeouts, node failures
  – possibility of reconnection or fail-over
• trust/security/privacy/integrity
  – we have a whole lecture on this subject

half way: mailboxes, named pipes
• client/server rendezvous point
  – a name corresponds to a service
  – a server awaits client connections
  – once open, it may be as simple as a pipe
  – OS may authenticate the message sender
• limited fail-over capability
  – if the server dies, another can take its place
  – but what about in-progress requests?
• client/server must be on the same system

Ludicrous Speed – Shared Memory
• shared read/write memory segments
  – mmap(2) into multiple address spaces
  – perhaps locked in physical memory
  – applications maintain circular buffers
  – OS is not involved in the data transfer
• simplicity, ease of use … you're kidding, right?
• reliability, security … caveat emptor!
• generality … locals only!

IPC: synchronous and asynchronous
• synchronous operations
  – writes block until the message is sent/delivered/received
  – reads block until a new message is available
  – easy for programmers, but no parallelism
• asynchronous operations
  – writes return when the system accepts the message
    • no confirmation of transmission/delivery/reception
    • requires an auxiliary mechanism to learn of errors
  – reads return promptly if no message is available
    • requires an auxiliary mechanism to learn of new messages
    • often involves a "wait for any of these" operation

a brief history of threads
• processes are very expensive
  – to create: they own resources
  – to dispatch: they have address spaces
• different processes are very distinct
  – they cannot share the same address space
  – they cannot (usually) share resources
• not all programs require strong separation
  – cooperating parallel threads of execution
  – all are trusted, part of a single program

What is a thread?
• strictly a unit of execution/scheduling
  – each thread has its own stack, PC, registers
• multiple threads can run in a process
  – they all share the same code and data space
  – they all have access to the same resources
  – this makes them cheaper to create and run
• sharing the CPU between multiple threads
  – user-level threads (w/ voluntary yielding)
  – scheduled system threads (w/ preemption)

When to use processes
• running multiple distinct programs
• creation/destruction are rare events
• running agents with distinct privileges
• limited interactions and shared resources
• prevent interference between processes
• firewall one from failures of the other

Using Multiple Processes: cc

    # shell script to implement the cc command
    cpp $1.c | cc1 | ccopt > $1.s
    as -o $1.o $1.s
    ld /lib/crt0.o $1.o /lib/libc.so
    mv a.out $1
    rm $1.s $1.o

When to use threads
• parallel activities in a single program
• frequent creation and destruction
• all can run with the same privileges
• they need to share resources
• they exchange many messages/signals
• no need to protect them from each other

Using Multiple Threads: telnet

    netfd = get_telnet_connection(host);
    pthread_create(&tid, NULL, writer, netfd);
    reader(netfd);
    pthread_join(tid, &status);
    ...
    reader( fd ) {
        int cnt;
        char buf[100];
        while ((cnt = read(0, buf, sizeof buf)) > 0)
            write(fd, buf, cnt);
    }
    writer( fd ) {
        int cnt;
        char buf[100];
        while ((cnt = read(fd, buf, sizeof buf)) > 0)
            write(1, buf, cnt);
    }

Kernel vs. User-Mode Threads
• does the OS schedule threads or processes?
• advantages of kernel-implemented threads
  – multiple threads can truly run in parallel
  – one thread blocking does not block others
  – OS can enforce priorities and preemption
  – OS can provide atomic sleep/wakeup/signals
• advantages of library-implemented threads
  – fewer system calls
  – faster context switches
  – ability to tailor semantics to application needs

Thread state and thread stacks
• each thread has its own registers, PS, PC
• each thread must have its own stack area
  – max size specified when the thread is created
  – a process can contain many threads
  – they cannot all grow towards a single hole
  – thread creator must know the max required stack size
  – stack space must be reclaimed when the thread exits
• procedure linkage conventions are unchanged

UNIX stack space management
[figure: process address space from 0x00000000 to 0xFFFFFFFF, with the code segment at the bottom, the data segment above it, and the stack segment at the top]
• data segment starts at the page boundary after the code segment
• stack segment starts at the high end of the address space
• Unix extends the stack automatically as the program needs more
• data segment grows up; stack segment grows down
  – both grow towards the hole in the middle
  – they are not allowed to meet

Thread Stack Allocation
[figure: address space holding code, data, and per-thread stacks 1–3 in fixed-size regions]

Thread Safety – Reentrancy
• thread-safe routines must be reentrant
  – any routine can be called by multiple threads
  – concurrent or interspersed execution
  – signals can also cause reentrancy
• state cannot be saved in static variables
  – e.g. errno … getting around C scalar returns
  – e.g. optarg … implicit session state
• transient state can be safely allocated on the stack
• persistent session state must be client-owned
  – open returns a descriptor
  – descriptor is passed

Thread Safety – Shared Data/Events
• threads operate in a single address space
  – automatic (stack) locals are private
  – storage (from thread-safe malloc) can be private
  – read-only data causes no problems
  – shared read/write data is a problem
• signals are sent to processes
  – delivered to the first available thread
  – chosen recipient may not have been expecting it