Assignment No. 1: Shell programming. Write a program to implement an address book with the options given below: a) Create address book. b) View address book. c) Insert a record. d) Delete a record. e) Modify a record. f) Exit.

The shell is a command programming language that provides an interface to the operating system. Its features include control-flow primitives, parameter passing, variables and string substitution. Constructs such as while, if then else, case and for are available. Two-way communication is possible between the shell and commands. String-valued parameters, typically file names or flags, may be passed to a command. A return code is set by commands that may be used to determine control-flow, and the standard output from a command may be used as shell input.

The shell can modify the environment in which commands run. Input and output can be redirected to files, and processes that communicate through `pipes' can be invoked. Commands are found by searching directories in the file system in a sequence that can be defined by the user. Commands can be read either from the terminal or from a file, which allows command procedures to be stored for later use.

1.5 Syntax:-

1.5.1 To print message

• To print a message on the output screen, the 'echo' statement is used.
• echo in shell is equivalent to printf in C.
• Syntax:- echo "message"

• Example:- echo "SNJB"

1.5.2 To take input

• To take input from the console, the 'read' statement is used.
• read in shell is equivalent to scanf in C.
• Syntax:- read variable_name

• Example:- read num

1.5.3 Conditional Operators

• -eq is equal to if [ "$a" -eq "$b" ]

• -ne is not equal to if [ "$a" -ne "$b" ]

• -gt is greater than if [ "$a" -gt "$b" ]

• -ge is greater than or equal to if [ "$a" -ge "$b" ]

• -lt is less than if [ "$a" -lt "$b" ]

• -le is less than or equal to if [ "$a" -le "$b" ]

• < is less than (within double parentheses) (( "$a" < "$b" ))

• <= is less than or equal to (within double parentheses) (( "$a" <= "$b" ))

• > is greater than (within double parentheses) (( "$a" > "$b" ))

• >= is greater than or equal to (within double parentheses) (( "$a" >= "$b" ))

1.5.4 if..then..else..

if [ condition ]
then
    statements
elif [ condition ]
then
    statements
else
    statements
fi

Example:-

echo "Enter the number“

read no if [ $no –gt 0 ] then

echo “Number is +ve“

elif [ $no –lt 0 ] then

echo “Number is negative“

else

echo “Number is 0“

fi

1.5.5 While statement

while [ condition ]

do

command1

command2

commandN

done

Example:-

c=1

while [ $c -le 5 ]

do

echo "Welcome $c times"

(( c++ ))

done

1.5.6 For statement

for (( exp1; exp2; exp3 ))

do

command1

command2

commandN

done

Example:-

for (( c=1; c<=5; c++ ))

do

echo "Welcome $c times“

done

1.5.7 Switch..case statement

read ch

case $ch in

1) statements;;

2) statements;;

3) statements;;

esac

Example:-

echo "Enter number"
read num
case $num in
1) echo "ONE";;
2) echo "TWO";;
3) echo "THREE";;
esac
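Putting these constructs together, a minimal sketch of the Assignment 1 menu might look as follows. The one-record-per-line format name:phone:email and the file name addressbook are assumptions of this sketch, not part of the assignment statement:

#!/bin/bash
# Minimal address book sketch: each record is one line "name:phone:email".
file="addressbook"
while true
do
    echo "a) Create  b) View  c) Insert  d) Delete  e) Modify  f) Exit"
    echo "Enter your choice"
    read ch
    case $ch in
    a) > "$file" ; echo "Address book created" ;;
    b) cat "$file" ;;
    c) echo "Enter name:phone:email" ; read rec ; echo "$rec" >> "$file" ;;
    d) echo "Enter name to delete" ; read name
       grep -v "^$name:" "$file" > tmp && mv tmp "$file" ;;
    e) echo "Enter name to modify" ; read name
       grep -v "^$name:" "$file" > tmp && mv tmp "$file"
       echo "Enter new record as name:phone:email" ; read rec ; echo "$rec" >> "$file" ;;
    f) exit 0 ;;
    *) echo "Invalid choice" ;;
    esac
done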

Assignment No. 2: Process control system calls: demonstration of the FORK, EXECVE and WAIT system calls along with zombie and orphan states. a. Implement a C program in which the main program accepts the integers to be sorted. The main program uses the FORK system call to create a new process called a child process. The parent process sorts the integers using a sorting algorithm and waits for the child process using the WAIT system call; the child process sorts the integers using any sorting algorithm. Also demonstrate the zombie and orphan states. b. Implement a C program in which the main program accepts an integer array. The main program uses the FORK system call to create a new process called a child process. The parent process sorts the integer array and passes the sorted array to the child process through the command line arguments of the EXECVE system call. The child process uses the EXECVE system call to load a new program that uses this sorted array to perform a binary search for a particular item in the array.

Theory:

What is a System Call?

A system call (or system request) is a call to the kernel in order to execute a specific function

that controls a device or executes a privileged instruction. The way system calls are handled is

up to the processor. Usually, a call to the kernel is due to an interrupt or exception; in the call,

there is a request to execute something special. For example, the serial port may be

programmed to assert an interrupt when some character has arrived, instead of polling for it.

This way, the processor can be used by other processes and service the serial port only when it is required.

The internal operation between an interrupt request and its servicing involve several CPU registers and memory segments. Briefly, a device raises an interrupt by asserting an interrupt request line on the Peripheral Interrupt Controller (PIC) which informs the CPU by setting the interrupt request pin. After each instruction, the CPU checks this pin. If it is enabled, it gets the ID from the data bus, which points to the Interrupt Descriptor Table (IDT), where a number of task, interrupt and gate descriptors are stored. The descriptor contains a selector to the Global Descriptor Table (GDT) which contains the base address to a memory segment in which the Interrupt Service Routine (ISR) resides.

FORK( )

fork - create a child process

Syntax :

#include <sys/types.h>
#include <unistd.h>

pid_t fork(void);

fork creates a child process that differs from the parent process only in its PID and PPID, and in the fact that resource utilizations are set to 0.

The fork() function is used to create a new process by duplicating the existing process from which it is called. The existing process from which this function is called becomes the parent process and the newly created process becomes the child process. As already stated that child is a duplicate copy of the parent but there are some exceptions to it.

• The child has a unique PID like any other process running in the operating system.
• The child has a parent process ID which is the same as the PID of the process that created it.
• Resource utilization and CPU time counters are reset to zero in the child process.
• The set of pending signals in the child is empty.
• The child does not inherit any timers from its parent.

The Return Type

fork() has an interesting behavior when returning to the caller. If the fork() call is successful, it returns twice: once in the child process with return value 0, and once in the parent process with the child's PID as the return value. This is because once fork is called the child process is created, and since the child process shares the text segment with the parent process and continues execution from the next statement in that same text segment, fork returns twice (once in the parent and once in the child).

ZOMBIE PROCESS

1) A zombie process or defunct process is a process that has completed execution but still has an entry in the process table. This entry is still needed to allow the parent process to read its child's exit status. The term zombie process derives from the common definition of zombie, an undead person.
2) When a process ends, all of the memory and resources associated with it are deallocated so they can be used by other processes. However, the process's entry in the process table remains. The parent can read the child's exit status by executing the wait system call, whereupon the zombie is removed.
3) After the zombie is removed, its process identifier (PID) and entry in the process table can then be reused. However, if a parent fails to call wait, the zombie will be left in the process table. In some situations this may be desirable, for example if the parent creates another child process it ensures that it will not be allocated the same PID.

Code to create processes using fork() and check zombie state in C programming

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    int pid;
    pid = getpid();
    printf("Current Process ID is : %d\n", pid);
    printf("[ Forking Child Process ... ] \n");
    pid = fork();              /* This will create the child process and return the child's PID */
    if(pid < 0) {              /* Process creation failed ... */
        exit(-1);
    } else if(pid == 0) {      /* Child process */
        printf("Child Process Started ...\n");
        printf("Child Process Completed ...\n");
    } else {                   /* Parent process */
        sleep(10);             /* give the child time to terminate and become a zombie */
        printf("Parent Process Running ... \n");
        printf("I am In Zombie State ...\n");
        while(1) {             /* Infinite loop that keeps the process running */
        }
    }
    return 0;
}

/* Output [divyen@localhost PP-TW1]$ ./Prog01-Z & [1] 2320 Current Process ID is : 2320 [ Forking Child Process ... ] Child Process Started ... Child Process Completed ... [divyen@localhost PP-TW1]$ ps -l

F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 0 S 500 2193 2192 0 75 0 - 1078 wait4 pts/2 00:00:00 bash 0 S 500 2320 2193 0 75 0 - 336 schedu pts/2 00:00:00 Prog01-Z 1 Z 500 2321 2320 0 75 0 - 0 t> pts/2 00:00:00 Prog01-Z 0 R 500 2322 2193 0 81 0 - 788 - pts/2 00:00:00 ps

[divyen@localhost PP-TW1]$ Parent Process Running ... I am In Zombie State ... [divyen@localhost PP-TW1]$ ps –l

F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 0 S 500 2193 2192 0 76 0 - 1078 wait4 pts/2 00:00:00 bash 0 R 500 2320 2193 26 80 0 - 336 - pts/2 00:00:04 Prog01-Z 1 Z 500 2321 2320 0 75 0 - 0 t> pts/2 00:00:00 Prog01-Z 0 R 500 2323 2193 5 81 0 - 787 - pts/2 00:00:00 ps

ORPHAN PROCESS:

An orphan process is a process that is still executing, but whose parent has died. These do not become zombie processes; instead, they are adopted by init (process ID 1), which waits on its children. The result is that a process that is both a zombie and an orphan will be reaped automatically.

Program to create processes using fork() and check orphan state in C Programming

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    int pid;
    pid = getpid();
    printf("Current Process ID is : %d\n", pid);
    printf("[ Forking Child Process ... ] \n");
    pid = fork();              /* This will create the child process and return the child's PID */
    if(pid < 0) {              /* Process creation failed ... */
        exit(-1);
    } else if(pid == 0) {      /* Child process */
        printf("Child Process is Sleeping ...");
        sleep(5);
        /* Orphan child's parent ID is 1 */
        printf("\nOrphan Child's Parent ID : %d", getppid());
    } else {                   /* Parent process */
        printf("Parent Process Completed ...");
    }
    return 0;
}

/* Output [divyen@localhost PP-TW1]$ ./Prog01-O & [1] 2277 Current Process ID is : 2277 [ Forking Child Process ... ] Parent Process Completed ...[divyen@localhost PP-TW1]$ ps -l F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD 0 S 500 2193 2192 0 75 0 - 1078 wait4 pts/2 00:00:00 bash 1 S 500 2278 1 0 75 0 - 336 schedu pts/2 00:00:00 Prog01-O 0 R 500 2279 2193 0 81 0 - 787 - pts/2 00:00:00 ps [1]+ Done ./Prog01-O [divyen@localhost PP-TW1]$ Child Process is Sleeping ... Orphan Child's Parent ID : 1 */

exit() - cause normal program termination

Syntax

#include <stdlib.h>

void exit(int status);

The exit status or return code of a process is a small number passed from a child process (or callee) to a parent process (or caller) when it has finished executing a specific procedure or delegated task. The exit() function causes normal program termination, and the value of status & 0377 is returned to the parent.

wait()

Syntax

#include <sys/types.h>
#include <sys/wait.h>

pid_t wait(int *status);

The wait function suspends execution of the current process until a child has exited, or until a signal is delivered whose action is to terminate the current process or to call a signal handling function. If a child has already exited by the time of the call (a so-called "zombie" process), the function returns immediately. Any system resources used by the child are freed.

For the related waitpid(pid_t pid, int *status, int options) call, the value of pid can be one of:

< -1

which means to wait for any child process whose process group ID is equal to the absolute value of pid.

-1

which means to wait for any child process; this is the same behaviour which wait exhibits.

0

which means to wait for any child process whose process group ID is equal to that of the calling process.

> 0

which means to wait for the child whose process ID is equal to the value of pid.
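A minimal sketch tying fork(), wait() and the exit status together; the child's exit value 42 is arbitrary and chosen only for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {                  /* fork failed */
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                 /* child: pretend to do some work, then exit */
        printf("Child %d running\n", getpid());
        exit(42);                   /* only the low 8 bits are reported to the parent */
    }
    int status;
    pid_t done = wait(&status);     /* parent blocks until the child terminates */
    if (WIFEXITED(status))
        printf("Child %d exited with status %d\n", done, WEXITSTATUS(status));
    return 0;
}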

EXECVE - EXECUTE PROGRAM

Syntax:

#include <unistd.h>

int execve(const char *filename, char *const argv[], char *const envp[]);

1) execve() executes the program pointed to by filename. Filename must be either a binary executable, or a script starting with a line of the form "#! interpreter [arg]". In the latter case, the interpreter must be a valid pathname for an executable which is not itself a script, which will be invoked as interpreter [arg] filename. 2) argv is an array of argument strings passed to the new program. envp is an array of strings, conventionally of the form key=value, which are passed as environment to the new program. Both argv and envp must be terminated by a null pointer. The argument vector and environment can be accessed by the called program’s main function, when it is defined as int main(int argc, char *argv[], char *envp[]).
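For Assignment 2b, the sorted integers can be converted to strings and passed through the argv of execve(). The following is a minimal sketch under that assumption; the child binary name "binsearch" and the sample array are hypothetical:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int sorted[] = {2, 5, 9, 14, 21};              /* assume the parent already sorted these */
    int n = sizeof(sorted) / sizeof(sorted[0]);
    char buf[5][12];                               /* one string per integer */
    char *argv[7];                                 /* program name + 5 numbers + NULL */
    char *envp[] = { NULL };
    int i;

    argv[0] = "binsearch";                         /* conventional argv[0] */
    for (i = 0; i < n; i++) {
        snprintf(buf[i], sizeof buf[i], "%d", sorted[i]);
        argv[i + 1] = buf[i];                      /* numbers passed as strings */
    }
    argv[n + 1] = NULL;                            /* argv must be NULL terminated */

    execve("./binsearch", argv, envp);             /* hypothetical child binary */
    perror("execve");                              /* reached only if execve fails */
    return 1;
}

The child program would read argc/argv in its own main(), convert argv[1] onwards back to integers with atoi(), and then perform the binary search.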

Examples

Synchronously waiting for the specific child processes in a (specific) order may leave zombies present longer than the above-mentioned "short period of time". It is not necessarily a program bug.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) { pid_t pids[10]; int i;

for (i = 9; i >= 0; --i) { pids[i] = fork(); if (pids[i] == 0) { sleep(i+1); _exit(0); } }

for (i = 9; i >= 0; --i) waitpid(pids[i], NULL, 0);

return 0; }

Another Sample Program on fork()

/* Includes */
#include <unistd.h>     /* Symbolic Constants */
#include <sys/types.h>  /* Primitive System Data Types */
#include <errno.h>      /* Errors */
#include <stdio.h>      /* Input/Output */
#include <sys/wait.h>   /* Wait for Process Termination */
#include <stdlib.h>     /* General Utilities */

int main(int argc, char **argv) { printf("--beginning of program\n");

int counter = 0; pid_t pid = fork();

if (pid == 0) { // child process int i = 0; for (; i < 5; ++i) { printf("child process: counter=%d\n", ++counter); } } else if (pid > 0) { // parent process int j = 0; for (; j < 5; ++j) { printf("parent process: counter=%d\n", ++counter); } } else { // fork failed printf("fork() failed!\n"); return 1; }

printf("--end of program--\n");

return 0; }

This program declares a counter variable, set to zero, before fork()ing. After the fork call, we have two processes running in parallel, both incrementing their own version of counter. Each process will run to completion and exit. Because the processes run in parallel, we have no way of knowing which will finish first. Running this program will print something similar to what is shown below, though results may vary from one run to the next.

--beginning of program parent process: counter=1 parent process: counter=2 parent process: counter=3 child process: counter=1 parent process: counter=4 child process: counter=2 parent process: counter=5 child process: counter=3 --end of program-- child process: counter=4 child process: counter=5 --end of program--

Sample Program for execve ()

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    char *temp[] = {"hello", "hello", "world", NULL};   /* argv for the new program */
    char *envp[] = {NULL};                              /* empty environment */
    execve("hello", temp, envp);
    printf("error");                                    /* reached only if execve fails */
    return 0;
}

Output:

Filename: hello hello world

FAQ’s:

1) What are Process & its state?

2) What are system calls?

3) What is the function of kernel?

Conclusion: - Thus we familiarized ourselves with the process control system calls fork, execve and wait, and with the zombie and orphan states.

Assignment No. 3: Implement multithreading for Matrix Multiplication using pthreads

Theory:

What is a Thread ?

A thread is a single sequence stream within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within a process. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready or Terminated). Each thread has its own stack: since a thread will generally call different procedures and thus have a different execution history, it needs its own stack. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, a thread shares with the other threads of its task the code section, the data section, and OS resources such as open files and signals.

Processes Vs Threads

Some of the similarities and differences are:

Similarities

• Like processes, threads share the CPU, and only one thread is active (running) at a time.
• Like processes, threads within a process execute sequentially.
• Like processes, a thread can create children.
• And like processes, if one thread is blocked, another thread can run.

Differences

• Unlike processes, threads are not independent of one another.
• Unlike processes, all threads can access every address in the task.
• Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another because processes may originate from different users.

Why Threads?

Following are some reasons why we use threads in designing operating systems.
1. A process with multiple threads makes a great server, for example a printer server.
2. Because threads can share common data, they do not need to use interprocess communication.
3. Because of their very nature, threads can take advantage of multiprocessors.

Threads are cheap in the sense that

1. They only need a stack and storage for registers; therefore, threads are cheap to create.
2. Threads use very few resources of the operating system in which they are working; that is, threads do not need a new address space, global data, program code or operating system resources.
3. Context switching is fast when working with threads, because we only have to save and/or restore the PC, SP and registers.

What is POSIX (pthread ) libraries:

The POSIX thread libraries are a standards based thread API for C/C++. It allows one to spawn a new concurrent process flow. It is most effective on multi-processor or multi-core systems where the process flow can be scheduled to run on another processor thus gaining speed through parallel or distributed processing. Threads require less overhead than "forking" or spawning a new process because the system does not initialize a new system virtual memory space and environment for the process. While most effective on a multiprocessor system, gains are also found on uniprocessor systems which exploit latency in I/O and other system functions which may halt process execution. (One thread may execute while another is waiting for I/O or some other system latency.)

Parallel programming technologies such as MPI and PVM are used in a distributed computing environment while threads are limited to a single computer system. All threads within a process share the same address space. A thread is spawned by defining a function and its arguments which will be processed in the thread. The purpose of using the POSIX thread library in your software is to execute software faster.

Thread Basics:

• Thread operations include thread creation, termination, synchronization (joins, blocking), scheduling, data management and process interaction.
• A thread does not maintain a list of created threads, nor does it know the thread that created it.
• All threads within a process share the same address space.
• Threads in the same process share:
o Process instructions
o Most data
o Open files (descriptors)
o Signals and signal handlers
o Current working directory
o User and group id
• Each thread has a unique:
o Thread ID
o Set of registers, stack pointer
o Stack for local variables, return addresses
o Signal mask
o Priority
o Return value: errno
• pthread functions return "0" if OK.

Thread Creation & Termination:

Example: pthread1.c

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *print_message_function( void *ptr );

int main()
{
    pthread_t thread1, thread2;
    const char *message1 = "Thread 1";
    const char *message2 = "Thread 2";
    int iret1, iret2;

    /* Create independent threads each of which will execute the function */
    iret1 = pthread_create( &thread1, NULL, print_message_function, (void*) message1);
    if(iret1)
    {
        fprintf(stderr, "Error - pthread_create() return code: %d\n", iret1);
        exit(EXIT_FAILURE);
    }

    iret2 = pthread_create( &thread2, NULL, print_message_function, (void*) message2);
    if(iret2)
    {
        fprintf(stderr, "Error - pthread_create() return code: %d\n", iret2);
        exit(EXIT_FAILURE);
    }

    printf("pthread_create() for thread 1 returns: %d\n", iret1);
    printf("pthread_create() for thread 2 returns: %d\n", iret2);

    /* Wait till threads are complete before main continues. Unless we   */
    /* wait we run the risk of executing an exit which will terminate    */
    /* the process and all threads before the threads have completed.    */
    pthread_join( thread1, NULL);
    pthread_join( thread2, NULL);

    exit(EXIT_SUCCESS);
}

void *print_message_function( void *ptr )
{
    char *message;
    message = (char *) ptr;
    printf("%s \n", message);
    return NULL;
}

Compile:

 C compiler: cc -pthread pthread1.c (or cc -lpthread pthread1.c)

The GNU compiler now has the command line option "-pthread" while older versions of the compiler specify the pthread library explicitly with "-lpthread".

Run: ./a.out

Results: Thread 1

Thread 2

Thread 1 returns: 0

Thread 2 returns: 0

Details:

• In this example the same function is used in each thread. The arguments are different. The functions need not be the same.
• Threads terminate by explicitly calling pthread_exit(), by letting the function return, or by a call to the function exit() which will terminate the process including any threads.
• Function call: pthread_create - create a new thread

int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);

Arguments:

o thread - returns the thread id (unsigned long int defined in bits/pthreadtypes.h).
o attr - Set to NULL if default thread attributes are used (else define members of the struct pthread_attr_t defined in bits/pthreadtypes.h). Attributes include:
. detached state (joinable? Default: PTHREAD_CREATE_JOINABLE. Other option: PTHREAD_CREATE_DETACHED)
. scheduling policy (real-time? PTHREAD_INHERIT_SCHED, PTHREAD_EXPLICIT_SCHED, SCHED_OTHER)
. scheduling parameter
. inheritsched attribute (Default: PTHREAD_EXPLICIT_SCHED. Inherit from parent thread: PTHREAD_INHERIT_SCHED)
. scope (Kernel threads: PTHREAD_SCOPE_SYSTEM. User threads: PTHREAD_SCOPE_PROCESS. Pick one or the other, not both.)
. guard size
. stack address (See unistd.h and bits/posix_opt.h _POSIX_THREAD_ATTR_STACKADDR)
. stack size (default minimum PTHREAD_STACK_SIZE set in pthread.h)
o void *(*start_routine) - pointer to the function to be threaded. The function has a single argument: a pointer to void.
o *arg - pointer to the argument of the function. To pass multiple arguments, send a pointer to a structure.

• Function call: pthread_join - wait for termination of another thread

int pthread_join(pthread_t th, void **thread_return);

Arguments:

• th - the calling thread is suspended until the thread identified by th terminates, either by calling pthread_exit() or by being cancelled.
• thread_return - If thread_return is not NULL, the return value of th is stored in the location pointed to by thread_return.

• Function call: pthread_exit - terminate the calling thread

void pthread_exit(void *retval);

Arguments:

 retval - Return value of pthread_exit().

This routine kills the thread. The pthread_exit() function never returns. If the thread is not detached, the thread id and return value may be examined from another thread by using pthread_join(). Note: the return pointer *retval, must not be of local scope otherwise it would cease to exist once the thread terminates.
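For the matrix multiplication assignment itself, a common scheme is one thread per row of the result matrix. The sketch below assumes fixed 3x3 matrices and a row_worker() helper; these are illustrative choices, not the only possible design:

#include <stdio.h>
#include <pthread.h>

#define N 3

int A[N][N] = {{1,2,3},{4,5,6},{7,8,9}};
int B[N][N] = {{9,8,7},{6,5,4},{3,2,1}};
int C[N][N];

/* each thread computes one row of C = A x B */
void *row_worker(void *arg)
{
    int row = *(int *)arg;
    for (int j = 0; j < N; j++) {
        C[row][j] = 0;
        for (int k = 0; k < N; k++)
            C[row][j] += A[row][k] * B[k][j];
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[N];
    int rows[N];

    for (int i = 0; i < N; i++) {
        rows[i] = i;                                   /* give each thread its own row index */
        pthread_create(&tid[i], NULL, row_worker, &rows[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);                    /* wait for all rows to finish */

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%4d ", C[i][j]);
        printf("\n");
    }
    return 0;
}

Compile with cc -pthread matmul.c; each pthread_create() hands the worker a pointer to its own row index, and pthread_join() waits for all rows before printing the result.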

FAQ’s:

1) What is thread?

2) What is the difference between user level thread & kernel level thread?

3) What is POSIX ?

4) What is the difference between process and thread ?

5) Explain pthread library functions ?

Conclusion: -Thus implemented matrix multiplication using multithreading concepts.

Assignment No. 4: Thread synchronization using counting semaphores. Application to demonstrate: Producer-consumer problem with counting semaphores and mutex.

Assignment No. 5: Thread synchronization and mutual exclusion using mutex. Application to demonstrate: Reader-Writer problem with reader priority

Theory:

What is Semaphore?

Semaphore is a synchronization tool. Semaphore is a value that indicates the status of common resources. A semaphore, in its most basic form, is a protected integer variable that can facilitate and restrict access to shared resources in a multi-processing environment.

The two most common kinds of semaphores are counting semaphores and binary semaphores.

Counting semaphores represent multiple resources, while binary semaphores, as the name implies, represent two possible states (generally 0 or 1; locked or unlocked). Semaphores were invented by the late Edsger Dijkstra.

Semaphores can be looked at as a representation of a limited number of resources, like seating capacity at a restaurant. If a restaurant has a capacity of 50 people and nobody is there, the semaphore would be initialized to 50. As each person arrives at the restaurant, they cause the seating capacity to decrease, so the semaphore in turn is decremented. When the maximum capacity is reached, the semaphore will be at zero, and nobody else will be able to enter the restaurant. Instead the hopeful restaurant goers must wait until someone is done with the resource, or in this analogy, done eating. When a patron leaves, the semaphore is incremented and the resource becomes available again.

TWO OPERATIONS ON SEMAPHORE:

A semaphore can only be accessed using the following operations: wait() and signal().

wait() is called when a process wants access to a resource. This would be equivalent to the arriving customer trying to get an open table. If there is an open table, or the semaphore is greater than zero, then he can take that resource and sit at the table.

If there is no open table and the semaphore is zero, that process must wait until it becomes available. signal() is called when a process is done using a resource, or when the patron is finished with his meal. The following is an implementation of this counting semaphore (where the value can be greater than 1):

wait(Semaphore s) {
    while (s == 0)
        ;          /* wait until s > 0 */
    s = s - 1;
}

signal(Semaphore s) {
    s = s + 1;
}

Init(Semaphore s, Int v) {
    s = v;
}

wait() was called P (for Dutch “Proberen” meaning to test) and signal() was called V (for Dutch “Verhogen” meaning to increment). The standard Java library instead uses the name "acquire" for P and "release" for V.

No other process can access the semaphore while P or V is executing. This is implemented with atomic hardware and code. An atomic operation is indivisible, that is, it can be considered to execute as a unit. If there is only one count of a resource, a binary semaphore is used, which can only have the values 0 or 1. Binary semaphores are often used as mutex locks. Here is an implementation of mutual exclusion using binary semaphores:

do {
    wait(s);
    // critical section
    signal(s);
    // remainder section
} while(1);

In this implementation, a process wanting to enter its critical section has to acquire the binary semaphore, which then gives it mutual exclusion until it signals that it is done.

For example, we have semaphore s, and two processes, P1 and P2 that want to enter their critical sections at the same time. P1 first calls wait(s). The value of s is decremented to 0 and P1 enters its critical section. While P1 is in its critical section, P2 calls wait(s), but because the value of s is zero, it must wait until P1 finishes its critical section and executes signal(s). When P1 calls signal, the value of s is incremented to 1, and P2 can then proceed to execute in its critical section (after decrementing the semaphore again). Mutual exclusion is achieved because only one process can be in its critical section at any time.

Disadvantage:

As shown in the examples above, processes waiting on a semaphore must constantly check to see whether the semaphore is nonzero. This continual looping is clearly a problem in a real multiprogramming system (where often a single CPU is shared among multiple processes). It is called busy waiting and it wastes CPU cycles. A semaphore that busy-waits in this way is called a spinlock.

To avoid busy waiting, a semaphore may use an associated queue of processes that are waiting on the semaphore, allowing the semaphore to block the process and then wake it when the semaphore is incremented. The operating system may provide the block() system call, which suspends the process that calls it, and the wakeup(P ) system call which resumes the execution of blocked process P. If a process calls wait() on a semaphore with a value of zero, the process is added to the semaphore’s queue and then blocked. The state of the process is switched to the waiting state, and control is transferred to the CPU scheduler, which selects another process to execute. When another process increments the semaphore by calling signal() and there are tasks on the queue, one is taken off of it and resumed.

In a slightly modified implementation, it would be possible for a semaphore's value to be less than zero. When a process executes wait(), the semaphore count is automatically decremented. The magnitude of the negative value would determine how many processes were waiting on the semaphore:

wait(Semaphore s) {
    s = s - 1;
    if (s < 0) {
        // add process to queue
        block();
    }
}

signal(Semaphore s) {
    s = s + 1;
    if (s <= 0) {
        // remove process p from queue
        wakeup(p);
    }
}

Init(Semaphore s, Int v) {
    s = v;
}

Producer-Consumer Problem Using Semaphores

We use three semaphores: empty and full, which count the number of empty and full slots in the buffer, and mutex, which is a binary (or mutual exclusion) semaphore that protects the actual insertion or removal of items in the buffer. The producer and consumer – running as separate threads – will move items to and from a buffer that is synchronized with these empty, full, and mutex structures.

Internally, the buffer will consist of a fixed-size array of type buffer_item (which will be defined using a typedef). The array of buffer_item objects will be manipulated as a circular queue. The definition of buffer_item, along with the size of the buffer, can be stored in a header file such as the following:

/* buffer.h */ typedef int buffer_item;

#define BUFFER_SIZE 5

The buffer will be manipulated with two functions, insert_item() and remove_item(), which are called by the producer and consumer threads, respectively. A skeleton outlining these functions appears as:

#include "buffer.h"

/* the buffer */
buffer_item buffer[BUFFER_SIZE];

int insert_item(buffer_item item)
{
    /* insert item into buffer; return 0 if successful,
       otherwise return -1 indicating an error condition */
}

int remove_item(buffer_item *item)

{

/* remove an object from buffer placing it in item return 0 if successful, otherwise return -1 indicating an error condition */

}

The insert_item() and remove_item() functions will synchronize the producer and consumer processes. The buffer will also require an initialization function that initializes the mutual- exclusion object mutex along with the empty and full semaphores.

The main() function will initialize the buffer and create the separate producer and consumer threads. Once it has created the producer and consumer threads, the main() function will sleep for a period of time and, upon awakening, will terminate the application. The main() function will be passed three parameters on the command line:

1. How long to sleep before terminating 2. The number of producer threads 3. The number of consumer threads

Skeleton for this function appears as:

#include "buffer.h"

int main(int argc, char *argv[])

{

/* 1. Get command line arguments argv[1], argv[2], argv[3] */

/* 2. Initialize buffer */

/* 3. Create producer thread(s) */

/* 4. Create consumer thread(s) */ /* 5. Sleep for argv[1] seconds */

/* 6. Exit */

}

The producer thread will alternate between sleeping for a random period of time and inserting a random integer into the buffer. Random numbers will be produced using the rand() function, which produces random integers between 0 and RAND_MAX. The consumer will also sleep for a random period of time and, upon awakening, will attempt to remove an item from the buffer. An outline of the producer and consumer threads appears as:

#include <stdlib.h> /* required for rand() */
#include "buffer.h"

void *producer(void *param) {

buffer_item rand;

while (TRUE) {

/* sleep for a random period of time between 0 and 4 sec*/

sleep(...);

/*generate a random number between 0 and 99*/

rand = rand() % 100;

/* Print ":- Producer TT created item RR" where TT is the

thread ID of this producer and RR is the random number */

printf(...);

if ( insert_item(rand) )

fprintf(stderr, "report error condition");

}

} void *consumer(void *param) { buffer_item rand;

while (TRUE) {

/* sleep for a random period of time between 0 and 4 sec*/

sleep(...);

if ( remove_item(&rand) )

/* Print an error message */

printf(...);

else

/* Print "<== Consumer TT removed item RR" where TT is the thread ID of this consumer and RR is the removed number */

printf(...);

}

}

The following code sample illustrates how mutex locks available in the Pthread API can be used to protect a critical section:

#include <pthread.h>

pthread_mutex_t mutex;

/* create the mutex lock */ pthread_mutex_init(&mutex, NULL);

/* acquire the mutex lock */ pthread_mutex_lock(&mutex);

/*** critical section ***/

/* release the mutex lock */ pthread_mutex_unlock(&mutex);

Pthreads uses the pthread_mutex_t data type for mutex locks. A mutex is created with the pthread_mutex_init(&mutex, NULL) function, with the first parameter being a pointer to the mutex. By passing NULL as a second parameter, we initialize the mutex to its default attributes. The mutex is acquired and released with the pthread_mutex_lock() and pthread_mutex_unlock() functions. If the mutex lock is unavailable when pthread_mutex_lock() is invoked, the calling thread is blocked until the owner invokes pthread_mutex_unlock(). All mutex functions return a value of 0 with correct operation; if an error occurs, these functions return a nonzero error code.
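Assignment 5 asks for the Reader-Writer problem with reader priority. A minimal hedged sketch of the reader and writer entry/exit sections is given below, combining a Pthreads mutex (protecting the shared readcount counter) with one of the POSIX semaphores described in the next paragraphs for the writer lock. The names readcount, wrt and the helper functions are assumptions of this sketch; thread creation and the shared data itself are omitted:

#include <pthread.h>
#include <semaphore.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; /* protects readcount */
sem_t wrt;                                          /* held by a writer, or by the group of readers */
int readcount = 0;                                  /* number of readers currently inside */

void rw_init(void) { sem_init(&wrt, 0, 1); }        /* call once before creating threads */

void reader_enter(void)
{
    pthread_mutex_lock(&mutex);
    if (++readcount == 1)          /* first reader locks out writers */
        sem_wait(&wrt);
    pthread_mutex_unlock(&mutex);
}

void reader_exit(void)
{
    pthread_mutex_lock(&mutex);
    if (--readcount == 0)          /* last reader re-admits writers */
        sem_post(&wrt);
    pthread_mutex_unlock(&mutex);
}

void writer_enter(void) { sem_wait(&wrt); }
void writer_exit(void)  { sem_post(&wrt); }

Readers arriving while other readers are inside never block on wrt, so readers get priority; a steady stream of readers can therefore starve writers, which is exactly the behavior this variant of the problem asks you to demonstrate.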

Pthreads provides two types of semaphores – named and unnamed. For this project, we use unnamed semaphores. The code below illustrates how a semaphore is created:

#include <semaphore.h>

sem_t sem;

/* create the semaphore and initialize it to 5 */ sem_init(&sem, 0, 5);

The sem_init() creates and initializes a semaphore. This function is passed three parameters: pointer to the semaphore, flag indicating the level of sharing, semaphore's initial value.In this example, by passing the flag 0, we are indicating that this semaphore can only be shared by threads belonging to the same process that created the semaphore. A nonzero value would allow other processes to access the semaphore as well. In this example, we initialize the semaphore to the value 5.

Pthreads names the wait() and signal() operations sem_wait() and sem_post(), respectively.

The code example below creates a binary semaphore mutex with an initial value of 1 and illustrates its use in protecting a critical section:

#include <semaphore.h>

sem_t mutex;

/* create the semaphore */ sem_init(&mutex, 0, 1);

/* acquire the semaphore */ sem_wait(&mutex);

/*** critical section ***/

/* release the semaphore */ sem_post(&mutex);
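Putting the mutex and semaphore calls together, one hedged way to fill in the insert_item()/remove_item() skeleton shown earlier is sketched below. The circular-queue indices in and out and the buffer_init() helper are assumptions of this sketch, not part of the original skeleton:

#include <pthread.h>
#include <semaphore.h>
#include "buffer.h"                 /* buffer_item, BUFFER_SIZE, as defined earlier */

buffer_item buffer[BUFFER_SIZE];
int in = 0, out = 0;                /* circular-queue indices */
sem_t empty, full;                  /* counting semaphores: free slots / used slots */
pthread_mutex_t mutex;              /* protects the buffer itself */

void buffer_init(void)
{
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    pthread_mutex_init(&mutex, NULL);
}

int insert_item(buffer_item item)
{
    sem_wait(&empty);               /* wait for a free slot */
    pthread_mutex_lock(&mutex);
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&mutex);
    sem_post(&full);                /* one more item available */
    return 0;
}

int remove_item(buffer_item *item)
{
    sem_wait(&full);                /* wait for an item */
    pthread_mutex_lock(&mutex);
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty);               /* one more free slot */
    return 0;
}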

FAQ’s:

1) What is semaphore?

2) What is the mutual exclusion ?

3) Explain types of semaphore ?

4) Explain steps for producer-consumer problem ?

5) Explain thread synchronization?

Conclusion: - Thus implemented the producer-consumer problem with counting semaphores and mutex.

Assignment No. 6: Deadlock Avoidance Using Semaphores: Implement the deadlock-free solution to Dining Philosophers problem to illustrate the problem of deadlock and/or starvation that can occur when many synchronized threads are competing for limited resources.

Theory:

What is a deadlock system?

In a multiprogramming system, processes request resources. If those resources are being used by other processes, then the process enters a waiting state. However, if other processes are also in a waiting state, we have deadlock.

The formal definition of deadlock is as follows:

Definition: A set of processes is in a deadlock state if every process in the set is waiting for an event (release) that can only be caused by some other process in the same set.

Example

Process-1 requests the printer, gets it.

Process-2 requests the tape unit, gets it.

Process-1 requests the tape unit, waits.

Process-2 requests the printer, waits.

Process-1 and Process-2 are deadlocked!

We shall analyze deadlocks with the following assumptions:

• A process must request a resource before using it. It must release the resource after using it. (request-> use ->release)

• A process cannot request more resources than the total number of resources available in the system. For the resources of the system, a resource table shall be kept, which shows whether each resource is free or, if occupied, by which process it is occupied. For every resource, queues shall be kept, indicating the names of processes waiting for that resource.

4 Conditions

More formally there are 4 conditions for deadlock:

• Mutual Exclusion: Resources can only be held by at most one process. A process cannot deadlock on a sharable resource.
• Hold and Wait: A process can hold a resource and block waiting for another.

• No Preemption: The operating system cannot take a resource back from a process that has it.

• Circular Wait: A process is waiting for a resource that another process has which is waiting for a resource that the first has. This generalizes.

These 4 conditions are necessary and sufficient: if all four appear, there is a deadlock; if not, there is no deadlock. The problem is that the chain in a circular wait can be long and complex. We use resource graphs to help us visualize deadlocks.

Three ways of handling deadlocks:

1. Deadlock prevention or avoidance - Do not allow the system to get into a deadlocked state.
2. Deadlock detection and recovery - Abort a process or preempt some resources when deadlocks are detected.
3. Ignore the problem altogether - If deadlocks only occur once a year or so, it may be better to simply let them happen and reboot as necessary than to incur the constant overhead and system performance penalties associated with deadlock prevention or detection. This is the approach that both Windows and UNIX take.

Deadlock Prevention

• Deadlocks can be prevented by preventing at least one of the four required conditions:

Mutual Exclusion

• Shared resources such as read-only files do not lead to deadlocks.
• Unfortunately some resources, such as printers and tape drives, require exclusive access by a single process.

Hold and Wait

To prevent this condition processes must be prevented from holding one or more resources

while simultaneously waiting for one or more others. There are possibilities for this:

• Require that all processes request all resources at one time. This can be wasteful of system resources if a process needs one resource early in its execution and doesn't need some other resource until much later.
• Require that processes holding resources must release them before requesting new resources, and then re-acquire the released resources along with the new ones in a single new request. This can be a problem if a process has partially completed an operation using a resource and then fails to get it re-allocated after releasing it.
• Either of the methods described above can lead to starvation if a process requires one or more popular resources.

No Preemption

• Preemption of process resource allocations can prevent this condition of deadlocks, when it is possible.
o One approach is that if a process is forced to wait when requesting a new resource, then all other resources previously held by this process are implicitly released (preempted), forcing this process to re-acquire the old resources along with the new resources in a single request, similar to the previous discussion.
o Another approach is that when a resource is requested and not available, then the system looks to see what other processes currently have those resources and are themselves blocked waiting for some other resource. If such a process is found, then some of their resources may get preempted and added to the list of resources for which the process is waiting.
o Either of these approaches may be applicable for resources whose states are easily saved and restored, such as registers and memory, but they are generally not applicable to other devices such as printers and tape drives.

Circular Wait

• One way to avoid circular wait is to number all resources, and to require that processes request resources only in strictly increasing (or decreasing) order.
• In other words, in order to request resource Rj, a process must first release all Ri such that i >= j.
• One big challenge in this scheme is determining the relative ordering of the different resources.

Deadlock Avoidance

• The general idea behind deadlock avoidance is to prevent deadlocks from ever happening, by preventing at least one of the aforementioned conditions.
• This requires more information about each process, AND tends to lead to low device utilization. (I.e. it is a conservative approach.)
• In some algorithms the scheduler only needs to know the maximum number of each resource that a process might potentially use. In more complex algorithms the scheduler can also take advantage of the schedule of exactly what resources may be needed in what order.
• When a scheduler sees that starting a process or granting resource requests may lead to future deadlocks, then that process is just not started or the request is not granted.
• A resource allocation state is defined by the number of available and allocated resources, and the maximum requirements of all processes in the system.

Deadlock Detection

• If deadlocks are not avoided, then another approach is to detect when they have occurred and recover somehow.
• In addition to the performance hit of constantly checking for deadlocks, a policy / algorithm must be in place for recovering from deadlocks, and there is potential for lost work when processes must be aborted or have their resources preempted.

Single Instance of Each Resource Type

• If each resource category has a single instance, then we can use a variation of the resource-allocation graph known as a wait-for graph.
• A wait-for graph can be constructed from a resource-allocation graph by eliminating the resources and collapsing the associated edges.
• An arc from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a resource that process Pj is currently holding.
• As before, cycles in the wait-for graph indicate deadlocks. This algorithm must maintain the wait-for graph, and periodically search it for cycles.

Recovery From Deadlock

• There are three basic approaches to recovery from deadlock:
1. Inform the system operator, and allow him/her to take manual intervention.
2. Terminate one or more processes involved in the deadlock.
3. Preempt resources.

1. Process Termination

• Two basic approaches, both of which recover resources allocated to terminated processes:
o Terminate all processes involved in the deadlock. This definitely solves the deadlock, but at the expense of terminating more processes than would be absolutely necessary.
o Terminate processes one by one until the deadlock is broken. This is more conservative, but requires doing deadlock detection after each step.
• In the latter case there are many factors that can go into deciding which processes to terminate next:
1. Process priorities.
2. How long the process has been running, and how close it is to finishing.
3. How many and what type of resources the process is holding. (Are they easy to preempt and restore?)
4. How many more resources the process needs to complete.
5. How many processes will need to be terminated.
6. Whether the process is interactive or batch.
7. Whether or not the process has made non-restorable changes to any resource.

2. Resource Preemption: When preempting resources to relieve deadlock, there are three important issues to be addressed:

1. Selecting a victim - Deciding which resources to preempt from which processes involves many of the same decision criteria outlined above.
2. Rollback - Ideally one would like to roll back a preempted process to a safe state prior to the point at which that resource was originally allocated to the process. Unfortunately it can be difficult or impossible to determine what such a safe state is, and so the only safe rollback is to roll back all the way to the beginning. (I.e. abort the process and make it start over.)
3. Starvation - How do you guarantee that a process won't starve because its resources are constantly being preempted? One option would be to use a priority system, and increase the priority of a process every time its resources get preempted. Eventually it should get a high enough priority that it won't get preempted any more.

What is Dining Philosophers problem in C?

Suppose there are N philosophers meeting around a table, eating spaghetti and talking about philosophy. Now let us discuss the problem. There are only N forks available, one fork between each pair of philosophers. Since each philosopher requires 2 forks to eat, we need to formulate an algorithm which ensures that the maximum number of philosophers can eat spaghetti at once.

The next question is why we describe the problem in this manner. When it comes to computers, analogous situations often demand solutions in a creative fashion. This is somewhat like an abstract problem in a novel dimension.

In this problem, the condition is each philosopher has to think and eat alternately. Assume that there is an infinite supply of spaghetti and eating is by no means limited by the quantity of food left. When available, each philosopher can pick up the adjacent fork. But he can eat only if the right and left forks are available.

Dining Philosophers Problem using Semaphores

PROBLEM DEFINITION

To implement Dining Philosophers Problem using Threads and Semaphores

ALGORITHM

1. Define the number of philosophers.
2. Declare one thread per philosopher.
3. Declare one semaphore (representing a chopstick) per philosopher.
4. When a philosopher is hungry:
   1. See if the chopsticks on both sides are free.
   2. Acquire both chopsticks.
   3. Eat.
   4. Restore the chopsticks.
5. If the chopsticks aren't free, wait till they are available.

PROGRAM DEVELOPMENT

# include <stdio.h>
# include <stdlib.h>
# include <unistd.h>
# include <pthread.h>
# include <semaphore.h>

sem_t chopstick[100];
int n;

void *thread_func(void *arg)
{
    int no = (int)(long)arg;        /* philosopher number, passed by value */
    int i, iteration = 5;
    for(i = 0; i < iteration; i++)
    {
        sem_wait(&chopstick[no]);
        sem_wait(&chopstick[(no+1)%n]);
        printf("\nPhilosopher %d --> Begin eating", no);
        sleep(1);
        printf("\nPhilosopher %d --> Finish eating\n", no);
        sem_post(&chopstick[no]);
        sem_post(&chopstick[(no+1)%n]);
    }
    pthread_exit(NULL);
}

int main()
{
    int i, res;
    pthread_t a_thread[100];
    printf("\nEnter the number of philosophers :");
    scanf("%d", &n);
    for(i = 0; i < n; i++)
    {
        res = sem_init(&chopstick[i], 0, 0);
        if(res == -1) { perror("semaphore initialization failed"); exit(1); }
    }
    for(i = 0; i < n; i++)
    {
        res = pthread_create(&a_thread[i], NULL, thread_func, (void *)(long)i);
        if(res != 0) { perror("thread creation failed"); exit(1); }
        sem_post(&chopstick[i]);
    }
    for(i = 0; i < n; i++)
    {
        res = pthread_join(a_thread[i], NULL);
        if(res != 0) { perror("thread join failed"); exit(1); }
    }
    printf("\n\nthread join successful\n");
    for(i = 0; i < n; i++)
    {
        res = sem_destroy(&chopstick[i]);
        if(res == -1) { perror("semaphore destruction failed"); exit(1); }
    }
    exit(0);
}

OUTPUT

$ gcc dining_sem.c -o dining_sem_op -lpthread

$ ./dining_sem_op

FAQ’s:

1) What is deadlock?

2) What are the methods to handle deadlock?

3) What is semaphore?

4) What is the difference between semaphore and monitor?

5) Explain deadlock starvation ?

Conclusion: -Thus implemented deadlock avoidance using semaphore and thread synchronization.

Assignment No. 7: Inter process communication in Linux using following. a. Pipes: Full duplex communication between parent and child processes. Parent process writes a pathname of a file (the contents of the file are desired) on one pipe to be read by child process and child process writes the contents of the file on second pipe to be read by parent process and displays on standard output. b. FIFOs: Full duplex communication between two independent processes. First process accepts sentences and writes on one pipe to be read by second process and second process counts number of characters, number of words and number of lines in accepted sentences, writes this output in a text file and writes the contents of the file on second pipe to be read by first process and displays on standard output.

Assignment No. 8: Inter-process Communication using Shared Memory (System V). Application to demonstrate: Client and Server programs in which the server process creates a shared memory segment and writes a message to it. The client process reads the message from the shared memory segment and displays it on the screen.

What is IPC ?

In computer science, inter-process communication (IPC) is the activity of sharing data across multiple and commonly specialized processes using communication protocols. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing. Methods for achieving IPC are divided into categories which vary based on software requirements, such as performance and modularity requirements, and system circumstances, such as network bandwidth and latency.

There are several reasons for implementing inter-process communication systems:

• Sharing information; for example, web servers use IPC to share web documents and media with users through a web browser.
• Distributing labor across systems; for example, Wikipedia uses multiple servers that communicate with one another using IPC to process user requests.[3]
• Privilege separation; for example, HMI software systems are separated into layers based on privileges to minimize the risk of attacks. These layers communicate with one another using encrypted IPC.

Approaches :

Method: Short description (provided by)

File: A record stored on disk, or a record synthesized on demand by a file server, which can be accessed by multiple processes. (Most operating systems)

Signal: A system message sent from one process to another, not usually used to transfer data but instead used to remotely command the partnered process. (Most operating systems)

Socket: A data stream sent over a network interface, either to a different process on the same computer or to another computer on the network. (Most operating systems)

Message queue: An anonymous data stream similar to a socket, usually implemented by the operating system, that allows multiple processes to read and write to the message queue without being directly connected to each other. (Most operating systems)

Pipe: A two-way data stream between two processes interfaced through standard input and output and read one character at a time. (All POSIX systems, Windows)

Named pipe: A pipe implemented through a file on the file system instead of standard input and output. Multiple processes can read and write to the file as a buffer for IPC data. (All POSIX systems, Windows)

Semaphore: A simple structure that synchronizes multiple processes acting on shared resources. (All POSIX systems, Windows)

Shared memory: Multiple processes are given access to the same block of memory, which creates a shared buffer for the processes to communicate with each other. (All POSIX systems, Windows)

Message passing: Allows multiple programs to communicate using channels; commonly used in concurrency models. (Used in the MPI paradigm, Java RMI, CORBA, DDS, MSMQ, MailSlots, QNX, others)

Memory-mapped file: A file mapped to RAM that can be modified by changing memory addresses directly instead of outputting to a stream. This shares the same benefits as a standard file. (All POSIX systems, Windows)

We can classify IPC facilities into the following categories:

• communication facilities concerned with exchanging data among processes

• synchronization facilities concerned with synchronizing actions among processes

• signal facilities concerned with notifying processes of events
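Assignment 8 uses System V shared memory. A minimal hedged sketch of the server and client pair is given below; the key value 1234, the 1 KB segment size and the message text are assumptions, and error checking is omitted for brevity:

/* server.c: create the segment and write a message into it */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 1234;                                   /* agreed-upon key (assumption) */
    int shmid = shmget(key, 1024, IPC_CREAT | 0666);    /* create a 1 KB segment */
    char *mem = (char *)shmat(shmid, NULL, 0);          /* attach it to our address space */
    strcpy(mem, "Hello from the server via shared memory");
    shmdt(mem);                                         /* detach; the segment persists */
    return 0;
}

/* client.c: attach to the same segment and display the message */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 1234;
    int shmid = shmget(key, 1024, 0666);                /* locate the existing segment */
    char *mem = (char *)shmat(shmid, NULL, 0);
    printf("Client read: %s\n", mem);
    shmdt(mem);
    shmctl(shmid, IPC_RMID, NULL);                      /* remove the segment when done */
    return 0;
}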

What is Pipes?

A pipe is a unidirectional data connection between two processes. A process uses file descriptors to read from or write to a pipe.

A pipe is designed as a queue offering buffered communication. Therefore the operating system will receive messages written to the pipe and make them available to entities reading from the pipe.

The pipe is referenced using file descriptors for both end points. Each endpoint can have more than one file descriptor associated with it.

PIPE SYSTEM CALL :- (UNNAMED)

int pipe(int pipefd[2]);

• Creates a half-duplex pipe.

• Return: Success: 0; Failure: -1; errno is set on failure.

• If successful, the pipe system call fills in two integer file descriptors, pipefd[0] and pipefd[1].

– pipefd[1] is the write end of the pipe.

– pipefd[0] is the read end of the pipe.

•Parent/child processes communicating via unnamed pipe.

–One writes to the write end

–One reads from the read end

• A pipe exists until both file descriptors are closed in all processes.

[Diagram: write end (pipefd[1]) --> pipe --> read end (pipefd[0])]

However, when we fork a process after a pipe is open

• Both the parent and the child will have the same write and read ends.
• It is a good idea to let each process close one of the ends, to avoid confusion and errors.

If we want two-way communication, we create two pipes, one for each direction.

Full Duplex Communication via Two Pipes

Two separate pipes, say p0 and p1

int main (void)

{ pid_t pid; int mypipe[2];

/* Create the pipe. */

if (pipe (mypipe))

{ fprintf (stderr, "Pipe failed.\n"); exit(1);

}

/* Create the child process. */

pid = fork (); if (pid == (pid_t) 0)

{

/* This is the child process. Close other end first. */ close (mypipe[1]); read_from_pipe (mypipe[0]); return EXIT_SUCCESS;

} else if (pid < (pid_t) 0)

{

/* The fork failed. */ fprintf (stderr, "Fork failed.\n"); return EXIT_FAILURE;

} else

{/* This is the parent process.Close other end first. */

close (mypipe[0]);

write_to_pipe (mypipe[1]);

return EXIT_SUCCESS;

}

}
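The example above uses a single pipe. For Assignment 7a, the parent and child each need their own pipe, giving full-duplex communication. Below is a minimal sketch of the two-pipe pattern; the fixed-size buffer and the hard-coded messages are placeholders (the assignment would send a pathname one way and the file contents the other):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int p2c[2], c2p[2];                       /* parent-to-child and child-to-parent pipes */
    char buf[128];

    pipe(p2c);
    pipe(c2p);

    if (fork() == 0) {                        /* child */
        close(p2c[1]);                        /* child only reads from p2c ... */
        close(c2p[0]);                        /* ... and only writes to c2p */
        int n = read(p2c[0], buf, sizeof buf);
        printf("Child received: %.*s\n", n, buf);
        write(c2p[1], "reply from child", 16);
        return 0;
    }

    /* parent */
    close(p2c[0]);                            /* parent only writes to p2c ... */
    close(c2p[1]);                            /* ... and only reads from c2p */
    write(p2c[1], "request from parent", 19);
    int n = read(c2p[0], buf, sizeof buf);
    printf("Parent received: %.*s\n", n, buf);
    return 0;
}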

How does this read/write even work?

•A pipe behaves like a queue (first‐in, first‐out).

– The first thing written to the pipe is the first thing read from the pipe. – For efficiency, the pipe has a limited size (we won’t go into system details).

•Writes to the pipe will block when the pipe is full.

– They block until another process reads some data from the pipe – The write call returns when all the data given to write have been written to the pipe.

•Reads from the pipe will block if the pipe is empty

– until at least a byte is written at the other end.
– The read call (when it has successfully read some data) then returns without waiting for the full number of bytes requested to be available.
– Hence the most reliable way to read from a pipe is to continue reading a byte at a time until the pipe is closed (reading will then return with zero bytes read).

• When all the descriptors on a pipe's write end are closed, a call to read from its read end returns zero (for the UNIX read call) or EOF (for the C library call). From the POSIX and Linux man pages:

"If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0)."

Unnamed Pipes

• only used between related processes, such as parent/child or child/child processes.

• exist only as long as the processes using them.

• Next, we examine an example of unnamed pipes.

int main (void)

{ pid_t pid; int mypipe[2];

/* Create the pipe. */

if (pipe (mypipe))

{ fprintf (stderr, "Pipe failed.\n"); exit(1);

}

/* Create the child process. */ pid = fork (); if (pid == (pid_t) 0)

{

/* This is the child process. Close other end first. */ close (mypipe[1]); read_from_pipe (mypipe[0]); return EXIT_SUCCESS;

} else if (pid < (pid_t) 0)

{

/* The fork failed. */ fprintf (stderr, "Fork failed.\n");

return EXIT_FAILURE;

}

Else /* This is the parent process.Close other end first.

Close(mypipe[0]);

write_to_pipe (mypipe[1]);

return EXIT_SUCCESS;

}

}

------

What is a named pipe?

A named pipe is a special file that is used to transfer data between unrelated processes. One (or more) processes write to it, while another process reads from it.

Named pipes are visible in the file system and may be viewed with the `ls' command like any other file. (Named pipes are also called FIFOs; this term stands for `First In, First Out'.)

Named pipes may be used to pass data between unrelated processes, while normal (unnamed) pipes can only connect parent/child processes (unless you try very hard). Named pipes are strictly unidirectional, even on systems where anonymous pipes are bidirectional (full-duplex).

How do I create a named pipe?

To create a named pipe interactively, you'll use mkfifo.

To make a named pipe within a C program, include the following C headers:

#include <sys/types.h>

#include <sys/stat.h>

and to create the named pipe (FIFO) use the mkfifo() function:

if (mkfifo("test_fifo", 0777))

/* permission is for all */

{

perror("mkfifo");

exit(1);

}

How do I use a named pipe?

To use the pipe, you open it like a normal file, and use read() and write() just as though it was a plain pipe.

However, the open() of the pipe may block!

The following rules apply:

• If you open for both reading and writing (O_RDWR), then the open will not block.

• If you open for reading (O_RDONLY), the open will block until another process opens the FIFO for writing, unless O_NONBLOCK is specified, in which case the open succeeds.

• If you open for writing (O_WRONLY), the open will block until another process opens the FIFO for reading, unless O_NONBLOCK is specified, in which case the open fails.

open_mode = O_RDONLY or O_WRONLY, optionally combined with O_NONBLOCK;

res = open("test_fifo", open_mode);

Here, open_mode is one of the modes discussed above.

When reading and writing the FIFO, the same considerations apply as for regular

pipes, i.e. read() and write();

read(res, buffer, BUFFER_SIZE);

write(res, buffer,BUFFER_SIZE);

Here is a simple use of named pipes. Unlike unnamed pipes, we use two different programs without using fork() or execl(). Of course we can use fork and execl in our programs, but they are not necessary.

/* fifo1.c*/

/*

This program will read from a fifo and prints the received string to the screen.

*/

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <limits.h>

#define FIFO_NAME "my_fifo" /* the named pipe */
#define BUFFER_SIZE PIPE_BUF

int main()
{
    int res;
    char buffer[BUFFER_SIZE + 1];

    if (access(FIFO_NAME, F_OK) == -1)      /* check if fifo already exists */
    {
        res = mkfifo(FIFO_NAME, 0777);      /* if not, create the fifo */
        if (res != 0)
        {
            fprintf(stderr, "Could not create fifo %s\n", FIFO_NAME);
            exit(EXIT_FAILURE);
        }
    }

    memset(buffer, '\0', BUFFER_SIZE + 1);  /* clear the string */
    printf("Process %d opening FIFO for reading\n", getpid());
    res = open(FIFO_NAME, O_RDONLY);        /* open fifo in read-only mode */
    read(res, buffer, BUFFER_SIZE);
    printf("Process %d received: %s\n", getpid(), buffer);
    sleep(5);
    if (res != -1)
        (void)close(res);                   /* make sure you close the fifo after using it */
    printf("Process %d finished\n", getpid());
    exit(EXIT_SUCCESS);

}

------

/* fifo2.c */

/* This file will write to the fifo a string.*/

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <limits.h>

#define FIFO_NAME "my_fifo" /* the fifo name */
#define BUFFER_SIZE PIPE_BUF

int main()
{
    int res;
    char buffer[BUFFER_SIZE + 1];

    if (access(FIFO_NAME, F_OK) == -1)   /* check if fifo already exists */
    {
        res = mkfifo(FIFO_NAME, 0777);   /* if not, create the fifo */
        if (res != 0)
        {
            fprintf(stderr, "Could not create fifo %s\n", FIFO_NAME);
            exit(EXIT_FAILURE);
        }
    }

    printf("Process %d opening FIFO\n", getpid());
    res = open(FIFO_NAME, O_WRONLY);     /* open fifo in write-only mode */
    sprintf(buffer, "hello");            /* string to be sent */
    write(res, buffer, BUFFER_SIZE);     /* write the buffer to the fifo */
    printf("Process %d result %d\n", getpid(), res);
    sleep(5);
    if (res != -1)
        (void)close(res);                /* close the fifo */
    printf("Process %d finished\n", getpid());
    exit(EXIT_SUCCESS);
}

The result of the two programs on the screen:

Process 23242 opening FIFO

Process 23242 result 3

Process 23243 opening FIFO for reading

Process 23243 received: hello

Process 23242 finished

Process 23243 finished

Advantages of Named pipes :

• Named pipes are very simple to use.

• mkfifo is a thread-safe function.

• No synchronization mechanism is needed when using named pipes.

• Write (using the write function call) to a named pipe is guaranteed to be atomic. It is atomic even if the named pipe is opened in non-blocking mode.

• Named pipes have permissions (read and write) associated with them, unlike anonymous pipes. These permissions can be used to enforce secure communication.

Disadvantage of Named pipes:

• Named pipes can only be used for communication among processes on the same host machine.

• Named pipes can be created only in the local file system of the host (i.e. you cannot create a named pipe on an NFS file system).

• Careful programming is required for the client and server, in order to avoid deadlocks.

• Named pipe data is a byte stream, and no record identification exists.

Properties of Pipe:

 Pipes do not have a name. For this reason, the processes must share a parent process. This is the main drawback of pipes. However, pipes are treated as file descriptors, so the pipes remain open even after fork and exec.

 Pipes do not distinguish between messages; they just read a fixed number of bytes. Newline (\n) can be used to separate messages. A structure with a length field can be used for messages containing binary data.

 Pipes can also be used to get the output of a command or to provide input to a command.

Signals: Detecting the termination of multiple child processes

What is Signal ?

 A signal is an asynchronous event which is delivered to a process. Asynchronous means that the event can occur at any time and may be unrelated to the execution of the process.

 Signals are raised by some error conditions, such as memory segment violations, floating-point processor errors, or illegal instructions.

 – e.g. the user types Ctrl-C.

What are types of Signals ?

There are two signal types:

1. Standard signals (traditional UNIX signals): delivered to a process by setting a bit in a bitmap, one bit for each signal. Thus there cannot be multiple pending instances of the same signal; the bit is either one (signal pending) or zero (no signal).

2. Real-time signals (or queued signals), defined by POSIX 1003.1b, where successive instances of the same signal are significant and need to be properly delivered. In order to use queued signals, you must use the sigaction() system call rather than signal(), as sketched below.
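A hedged sketch of installing a handler for a real-time signal with sigaction() and queueing one instance with sigqueue() (the choice of SIGRTMIN and the queued value are illustrative, not from the handout):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Three-argument handler: siginfo_t carries the value queued with sigqueue(). */
static void rt_handler(int sig, siginfo_t *info, void *ucontext)
{
    /* printf() is not async-signal-safe; used here only to keep the sketch short */
    printf("real-time signal %d, queued value %d\n", sig, info->si_value.sival_int);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = rt_handler;   /* extended handler instead of sa_handler */
    sa.sa_flags = SA_SIGINFO;       /* request siginfo_t delivery */
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGRTMIN, &sa, NULL) == -1) {
        perror("sigaction");
        exit(1);
    }

    union sigval v;
    v.sival_int = 42;
    sigqueue(getpid(), SIGRTMIN, v);   /* queue the signal to ourselves */
    sleep(1);                          /* give the handler a moment to run */
    return 0;
}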

Where are signals defined?

Signals are defined in /usr/include/signal.h and in other platform-specific header files, e.g. /usr/include/asm/signal.h.

The programmer may choose that:

• a particular signal triggers a user-defined signal handler

• the signal triggers the default kernel-supplied handler

• the signal is ignored

What does Default handler do?

• terminates the process and generates a dump of memory in a core file (core)

• terminates the process without generating a core image file (quit)

• ignores and discards the signal (ignore)

• suspends the process (stop)

• resumes the process

POSIX predefined signals

•SIGALRM: Alarm timer time-out. Generated by alarm( ) API.

•SIGABRT:Abort process execution. Generated by abort( ) API.

•SIGFPE: Illegal mathematical operation.

•SIGHUP: Controlling terminal hang-up.

•SIGILL: Execution of an illegal machine instruction.

•SIGINT: Process interruption. Can be generated by the Ctrl-C or Delete keys.

•SIGKILL: Sure kill of a process; cannot be caught or ignored. Can be generated by the "kill -9 <pid>" command.

•SIGPIPE: Illegal write to a pipe.

•SIGQUIT: Process quit. Generated by the Ctrl-\ keys.

•SIGSEGV: Segmentation fault, e.g. generated by dereferencing a NULL pointer.

•SIGTERM: Process termination. Can be generated by the "kill <pid>" command.

•SIGUSR1: Reserved to be defined by the user.

•SIGUSR2: Reserved to be defined by the user.

•SIGCHLD: Sent to a parent process when its child process has terminated.

•SIGCONT: Resume execution of a stopped process.

•SIGSTOP: Stop a process execution.

•SIGTTIN: Stop a background process when it tries to read from its controlling terminal.

•SIGTSTP: Stop a process execution via the Ctrl-Z keys.

•SIGTTOU: Stop a background process when it tries to write to its controlling terminal.

Actions on Signals :

A process that receives a signal can take one of three actions:

•Perform the system-specified default for the signal

 notify the parent process that it is terminating;  generate a core file; (a file containing the current memory image of the process)  terminate.

•Ignore the signal

 A process can ignore any signal except two special signals: SIGSTOP and SIGKILL.

•Catch the Signal

 When a process catches a signal (any signal except SIGSTOP and SIGKILL, which cannot be caught), it invokes a special signal handling routine.

Example of Signals:

 User types Ctrl-C:

1. Event gains attention of OS
2. OS stops the application process immediately, sending it a 2/SIGINT signal
3. Signal handler for 2/SIGINT signal executes to completion
4. Default signal handler for 2/SIGINT signal exits process

 Process makes illegal memory reference

1. Event gains attention of OS
2. OS stops application process immediately, sending it a 11/SIGSEGV signal
3. Signal handler for 11/SIGSEGV signal executes to completion
4. Default signal handler for 11/SIGSEGV signal prints "segmentation fault" and exits process

Send signals via commands

kill Command

–kill signal pid

•Send a signal of type signal to the process with id pid

•Can specify either signal type name (-SIGINT) or number (-2)

 No signal type name or number specified => sends 15/SIGTERM signal

•Default 15/SIGTERM handler exits process

–Better command name would be sendsig

• Examples:

kill -2 1234

kill -SIGINT 1234

•Same as pressing Ctrl-c if process 1234 is running in foreground

Signal Concepts

Signals are defined in <signal.h>. See:

•man 7 signal for complete list of signals and their numeric values.

•kill -l for the full list of signals on a system.

•There are 64 signals; the first 32 are traditional signals, the rest are for real-time applications.

Signal Handler Function

Programs can handle signals using the signal library function.

void (*signal(int signo, void (*func)(int)))(int);

•signo is the signal number to handle.

•func defines how to handle the signal

 SIG_IGN

 SIG_DFL

 Function pointer of a custom handler

•Returns previous disposition if ok, or SIG_ERR on error

Example:

#include <signal.h>

#include <stdio.h>

#include <unistd.h>

void ohh(int sig)

{

printf("Ohh! I got signal %d\n", sig);

(void) signal(SIGINT, SIG_DFL);

}

int main()

{

(void) signal(SIGINT, ohh);

while(1)

{

printf("Hello World!\n");

sleep(1);

}

return 0;

}

Requesting Alarm Signal

• System call: unsigned int alarm(unsigned int count)

• alarm() instructs the kernel to send the SIGALRM signal to the calling process after count seconds. If an alarm had already been scheduled, it is overwritten. If count is 0, any pending alarm requests are cancelled.

• alarm() returns the number of seconds that remain until the alarm signal is sent.

• The default handler for this signal displays the message "Alarm clock" and terminates the process.
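A minimal sketch (the 3-second delay is an arbitrary choice) of catching SIGALRM instead of letting the default handler terminate the process:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_alarm(int sig)
{
    /* printf() in a handler is kept only for illustration */
    printf("caught SIGALRM (%d)\n", sig);
}

int main(void)
{
    signal(SIGALRM, on_alarm);  /* install our handler before requesting the alarm */
    alarm(3);                   /* ask the kernel for SIGALRM after about 3 seconds */
    pause();                    /* sleep until a signal arrives */
    printf("back in main after the alarm\n");
    return 0;
}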

Ignoring Signals

So far we have looked at changing the actions for a set of signals using a handler, and our handlers have been doing things (mostly printing messages). But we might just want our handler to do nothing, essentially ignoring the signal. That is easy enough to write in code; for example, here is a program that will ignore SIGINT by handling the signal and doing nothing:

#include <signal.h>

#include <stdio.h>

void nothing(int signum) { /* DO NOTHING */ }

int main(){

signal(SIGINT, nothing);

while(1);

}

Since SIGINT is now ignored, we have to use Ctrl-\ (SIGQUIT) to quit the program.

The signal.h header defines a set of actions that can be used in place of a handler:

 SIG_IGN : Ignore the signal

 SIG_DFL : Replace the current signal handler with the default handler

Checking Errors of signal()

The signal() function returns a pointer to the previous signal handler, which means that here, again, is a system call that we cannot error-check in the typical way, by checking whether the return value is less than 0. This is because a pointer type is unsigned; there is no such thing as a negative pointer.

Instead, a special value, SIG_ERR, is used, against which we can compare the return value of signal(). Here, again, is the program where we try to ignore SIGKILL, but this time with proper error checking:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    /* try to ignore SIGKILL (this cannot work) */
    if (signal(SIGKILL, SIG_IGN) == SIG_ERR)
    {
        perror("signal");
        exit(1);
    }

    /* infinite loop */
    while (1);
}

And the output from the perror() is clear:

#> ./signal_errorcheck
signal: Invalid argument

The invalid argument is SIGKILL which cannot be handled or ignored. It can only KILL!
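Finally, tying this back to the assignment's goal of detecting the termination of multiple child processes: a SIGCHLD handler can reap every terminated child with waitpid() and WNOHANG. The following is a minimal sketch (the child count and sleep times are arbitrary; printing from a handler is kept only for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static void on_sigchld(int sig)
{
    int status;
    pid_t pid;
    /* Reap every child that has terminated so far; WNOHANG avoids blocking. */
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
        printf("child %d terminated\n", (int)pid);
}

int main(void)
{
    signal(SIGCHLD, on_sigchld);

    for (int i = 0; i < 3; i++) {
        if (fork() == 0) {          /* each child sleeps a little and exits */
            sleep(i + 1);
            _exit(0);
        }
    }

    for (int i = 0; i < 5; i++)     /* parent keeps running; handler reports exits */
        sleep(1);
    return 0;
}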

FAQ’s:

1) What is a pipe?

2) What is IPC?

3) What is the difference between unnamed and named pipes?

4) What is a signal? Why do we require signals?

5) Explain the signals used in communication.

6) What is a signal handler?

Conclusion: Thus we implemented pipes and FIFOs, and detected the termination of multiple child processes using the SIGCHLD signal.

Assignment No. 9: Implement an assignment using File Handling System Calls (Low level system calls like open, read, write, etc).

Theory:

What is virtual file system?

• The Linux kernel implements the concept of a Virtual File System (VFS, originally Virtual Filesystem Switch), so that it is (to a large degree) possible to separate actual "low-level" file system code from the rest of the kernel.

• This API was designed with things closely related to the ext2 filesystem in mind. For very different filesystems, like NFS, there are all kinds of problems.

• The kernel keeps track of files using in-core inodes ("index nodes"), usually derived by the low-level filesystem from on-disk inodes.

• A file may have several names, and there is a layer of dentries ("directory entries") that represent pathnames, speeding up the lookup operation.

• Several processes may have the same file open for reading or writing, and file structures contain the required information such as the current file position.

• Access to a filesystem starts by mounting it. This operation takes a filesystem type (like ext2, vfat, iso9660, nfs) and a device and produces the in-core superblock that contains the information required for operations on the filesystem; a third ingredient, the mount point, specifies what pathname refers to the root of the filesystem.

What is /proc ?

• A pseudo-filesystem that acts as an interface to internal data structures in the kernel.

• The /proc filesystem is an illusory filesystem: it does not exist on a disk. Instead, the kernel creates it in memory.

• It is used to provide information about the system (originally about processes, hence the name).

• The proc filesystem is a pseudo-filesystem which provides an interface to kernel data structures. It is commonly mounted at /proc.

• Most of it is read-only, but some files allow kernel variables to be changed.

• The /proc filesystem is described in more detail in the proc manual page.

Some /proc entries:

• /proc/1

–A directory with information about process number 1. Each process has a directory below /proc with the name being its process identification number.

• /proc/cpuinfo

–Information about the processor, such as its type, make, model, and performance.

• /proc/devices

–List of device drivers configured into the currently running kernel.

What is it used for?

• Can be used to obtain information about the system.

• Can be used to change certain kernel parameters at runtime.
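As a small sketch tying this to the low-level file-handling calls this assignment covers, the following program dumps /proc/cpuinfo to standard output using only open(), read(), write() and close() (the buffer size is an arbitrary choice):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    int fd = open("/proc/cpuinfo", O_RDONLY);  /* a /proc file is opened like any other file */
    if (fd == -1) {
        perror("open");
        exit(1);
    }

    /* Copy everything the kernel generates for this pseudo-file to stdout. */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, n);

    close(fd);
    return 0;
}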

How do we allow controlled access to kernel data and parameters and provide a familiar interface that programmers can easily adopt?

• Create pseudo-filesystem to represent status information and configuration parameters as files

• Provides a unified ‘API’ for collecting status information and configuring drivers

• No new libraries needed – simple filesystem calls are all that is necessary.

• Quick, easy access via the command line; not version- or configuration-specific.

Linux has virtual filesystem layer (VFS)

• VFS provides an abstraction layer between user processes and filesystems

• Allows for any filesystem to be used transparently in the system

• File systems don’t have to be physical

• The /proc filesystem resides entirely in memory.

Steps to implement /proc on Linux:

• Create new-proc.c for kernel module code.

• Create a Makefile.

• Compile and link the module.

• Insert the module into the kernel – use insmod

• List the module in the kernel – use lsmod

• Check the entry for the file created in the /proc using ls -l /proc/

• Read the file by using cat command- cat /proc/hello_proc

• Remove the module from the kernel – use rmmod

Functions and structures used:

• proc_create:- To create a virtual file in the /proc directory.

• remove_proc_entry:- To remove a virtual file from the /proc directory.

• The hello_proc_show() is what decides the output.

• seq_printf:- Uses sequential operations on the file to produce the output.

• The hello_proc_open() is the open callback, called when the proc file is opened. single_open() means that all the data is output at once.

FILE OPERATION STRUCTURE :

• The file_operations structure holds pointers to functions defined by the driver that perform various operations on the device.

• Each field of the structure corresponds to the address of some function defined by the driver to handle a requested operation.

Structure used in the program:

struct file_operations hello_proc_fops = {
    .owner = THIS_MODULE,
    .open = hello_proc_open,
    .read = seq_read,
    .release = single_release,
};
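Putting these pieces together, a hedged sketch of the complete module (the hello_proc_* names follow the handout; the proc file name and message are assumed, and struct file_operations for proc entries applies to kernels older than 5.6):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

/* Decides the output of /proc/hello_proc. */
static int hello_proc_show(struct seq_file *m, void *v)
{
    seq_printf(m, "Hello proc!\n");
    return 0;
}

/* Open callback: single_open() emits all data in one shot via hello_proc_show(). */
static int hello_proc_open(struct inode *inode, struct file *file)
{
    return single_open(file, hello_proc_show, NULL);
}

static const struct file_operations hello_proc_fops = {
    .owner   = THIS_MODULE,
    .open    = hello_proc_open,
    .read    = seq_read,
    .release = single_release,
};

static int __init hello_proc_init(void)
{
    proc_create("hello_proc", 0, NULL, &hello_proc_fops);  /* create /proc/hello_proc */
    return 0;
}

static void __exit hello_proc_exit(void)
{
    remove_proc_entry("hello_proc", NULL);                 /* remove it on rmmod */
}

module_init(hello_proc_init);
module_exit(hello_proc_exit);
MODULE_LICENSE("GPL");

After insmod, reading /proc/hello_proc with cat should print the message; rmmod removes the entry.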

Limitation to /proc file system:

• Our module cannot output more than one page of data to the pseudo-file at once.

• A page is a pre-defined amount of memory, typically 4096 bytes (4K, defined by the processor), and is available in the PAGE_SIZE macro.

• This limitation is bypassed by using sequence files.

FAQ’s:

1) What is /proc file system?

2) What is VFS?

Conclusion :

We have implemented file handling using low-level system calls and studied the /proc virtual file system in Linux.

Assignment No. 10: Implement a new system call in the kernel space, add this new system call in the Linux kernel by the compilation of this kernel (any kernel source, any architecture and any Linux kernel distribution) and demonstrate the use of this embedded system call using C program in user space.

What is system call ?

System call: an interface between a user-level program and a service provided by kernel

 Implemented in kernel  With a user-level interface  Crossing the user-space/kernel-space boundaries

How modules begin and end

A program usually begins with a main() function, executes a bunch of instructions and terminates upon completion of those instructions. Kernel modules work a bit differently. A module always begins with either init_module or the function you specify with the module_init call. This is the entry function for modules; it tells the kernel what functionality the module provides and sets up the kernel to run the module's functions when they're needed. Once it does this, the entry function returns and the module does nothing until the kernel wants to do something with the code that the module provides.

All modules end by calling either cleanup_module or the function you specify with the module_exit call. This is the exit function for modules; it undoes whatever the entry function did, unregistering the functionality that the entry function registered. Every module must have an entry function and an exit function. Since there's more than one way to specify entry and exit functions, I'll try my best to use the terms `entry function' and `exit function', but if I slip and simply refer to them as init_module and cleanup_module, you will know what is meant.

Functions available to modules

Programmers use functions they don't define all the time. A prime example of this is printf(). You use these library functions, which are provided by the standard C library, libc. The definitions for these functions don't actually enter your program until the linking stage, which ensures that the code (for printf() for example) is available, and fixes the call instruction to point to that code.

Kernel modules are different here, too. In the hello world example, you might have noticed that we used a function, printk(), but didn't include a standard I/O library. That's because modules are object files whose symbols get resolved upon insmod'ing. The definition for the symbols comes from the kernel itself; the only external functions you can use are the ones provided by the kernel. If you're curious about what symbols have been exported by your kernel, take a look at /proc/ksyms.

One point to keep in mind is the difference between library functions and system calls. Library functions are higher level, run completely in user space and provide a more convenient interface for the programmer to the functions that do the real work: system calls. System calls run in kernel mode on the user's behalf and are provided by the kernel itself. The library function printf() may look like a very general printing function, but all it really does is format the data into strings and write the string data using the low-level system call write(), which then sends the data to standard output.

Would you like to see what system calls are made by printf()? It's easy! Compile the following program and run it under strace to see the underlying write() call:

#include <stdio.h>

int main(void)
{
    printf("hello");
    return 0;
}

What is int in kernel module?

• Instruction int 0x80

• int means interrupt, and the number 0x80 is the interrupt number.

• An interrupt transfers the program flow to whomever is handling that interrupt, which is interrupt 0x80 in this case.

• In Linux, 0x80 interrupt handler is the kernel, and is used to make system calls to the kernel by other programs.

• The kernel is notified about which system call the program wants to make by examining the value in the register %eax (gas syntax, EAX in Intel syntax).

• Each system call has different requirements about the use of the other registers.

• For example, a value of 1 in %eax means a system call of exit(), and the value in %ebx holds the value of the status code for exit().

What is system call table?

• System calls in Linux are listed in the syscall tables under arch/x86/syscalls/ (for the x86 kernel used here):

– 32-bit system call table – syscall_32.tbl

– 64-bit system call table – syscall_64.tbl

• Ways to write a system call to kernel

– Adding a kernel module

– Change the existing kernel code

Steps to add a new system call to the Linux kernel:

1- Download kernel source by : apt-get source linux-image-3.17.7

Extracted to /usr/src/

2- Define a new system call sys_hello() :

I created a directory hello in the kernel source directory in /usr/src/linux-3.13/

And created a hello.c file in hello directory with below content :

#include <linux/kernel.h>

asmlinkage long sys_hello(void)
{
    printk("Hello world\n");
    return 0;
}

And created a Makefile in the hello directory with the following content:

obj-y := hello.o

3- Add the hello directory to the kernel’s Makefile

I changed the following line in /usr/src/linux-3.13/Makefile:

core-y += kernel/ mm/ fs/ ipc/ security/ crypto/ block/

to:

core-y += kernel/ mm/ fs/ ipc/ security/ crypto/ block/ hello/

4- Add the new system call sys_hello() into the system call table (syscall_64.tbl file)

Because I'm using a 64-bit system, I need to alter the syscall_64.tbl file:

vim /usr/src/linux-3.13/arch/x86/syscalls/syscall_64.tbl

I added the following line at the end of the file (the last line number was 313):

314 common hello sys_hello

5- Add the new system call sys_hello() to the system call header file:

vim /usr/src/linux-3.13/include/linux/syscalls.h

I added the following line to the end of the file, just before the #endif statement at the very bottom:

asmlinkage long sys_hello(void);

6- Compiling this kernel on my system

To configure the kernel I tried the following command:

sudo make menuconfig

After the above command a pop-up window came up; I made sure that ext4 was selected and then saved the configuration.

Then :

# cd /usr/src/linux-3.13/
# make

It took about 2 to 3 hours.

After that:

# make modules_install install

After that, I rebooted the system.

7- Test the system call (Problem Is Here)

After reboot, I created a file with name hello.c in home folder with following content :

#include <stdio.h>
#include <linux/kernel.h>
#include <sys/syscall.h>
#include <unistd.h>

int main()
{
    long int amma = syscall(314);   /* 314 is the number of the sys_hello line in `syscall_64.tbl` */
    printf("System call sys_hello returned %ld\n", amma);
    return 0;
}

Then :

# gcc hello.c
# ./a.out

The output is :

System call sys_hello returned -1

• Now implement the helper program that will call our system call.

• Write a helper program (e.g. helper.c in any directory).

• Use the syscall function (known as an indirect system call) to invoke our system call.

• syscall is needed because we have no wrapper function through which to call our system call.

FAQ’s:

1) What is LINUX?
2) What is a kernel module?
3) What is dmesg?
4) What is a library function?
5) What is ltrace?
6) What is a system call?
7) What is a system call interface?
8) What is strace?

Conclusion: Thus we implemented a new system call in the Linux kernel.