CS 4310: Operating Systems Lecture Notes - Student Version∗

Kyle Burke January 10, 2018

Contents

-1.0 Using Chapel
0 OS Basics
  0.1 Interrupts
1 Parallel Programming (using Chapel)
2 Hardware Threads
3 Concurrency
4 Semaphores
5 The Producer-Consumer Problem
  5.1 Circular Queues
6 Memory Management
7 Stack vs. Heap
  7.1 Stack Management
  7.2 Heap Management
8 Scheduling
  8.1 Shortest-Job First
  8.2 First Come, First Serve
  8.3 Earliest-Deadline First
  8.4 Round Robin
  8.5 Hybrid Schedulers
  8.6 Examples
9 Interrupts
10 File Systems
  10.1 Fragmentation
11 OS Security
12 History of OSes by Candace

∗Created with lectureNotes.sty, which is available at: http://turing.plymouth.edu/~kgb1013/lectureNotesLatexStyle.php (or, GitHub: https://github.com/paithan/LaTeX-LectureNotes). Many or most of the answers to questions are hidden so that some of class will still be a challenge for students.


-1.0 Using Chapel

This course was last taught with programming assignments given in Chapel,1 using compiler version 1.14. This is a high-performance computing language designed to make parallel programming easier for computational scientists. Here are some comments about this language.

• It makes launching threads and handling synchronization very easy.
• It is missing many of the basic libraries that exist for common languages (e.g., Java).

• Goal: focus on the

0 OS Basics

〈 Go over syllabus! 〉

Q: What are the responsibilities of an OS? What do they do?

A: • TODO

0.1 Interrupts

This material is currently in Section 9. Should we move it?

1 Parallel Programming (using Chapel)

2 Hardware Threads

Q: How many hardware threads do your computers have? How many CPUs? (Might be different numbers.)
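One quick way to check, sketched in Python (os.cpu_count() reports the number of logical CPUs, which usually equals the number of hardware threads):

```python
# Quick check: number of logical CPUs (usually the hardware-thread count).
import os
print(os.cpu_count())
```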

Q: How can a CPU have multiple threads?

A:

TODO: add stuff here? Picture, maybe?

1 http://chapel.cray.com


Q: What if you have more threads than that?

A:

Q: What are some of the parts of a PCB?

A:
• ID: unique identification number for this process.
• State: "New", "Ready", "Running", "Blocking", etc.
• Program Counter: pointer to the address of the next line of code to execute.
• Registers: data that will be loaded into the registers when this executes.
• Process Scheduling State: "Ready" or "Suspended". Could include info about the priority.
• Process Structuring Information: IDs of any child processes.
• Interprocess communication info: links to any other processes or variables outside of this process that are relevant.
• Privileges: what are the privileges of this process? (User/superuser, etc.)
And more!
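To make those fields concrete, here is a minimal sketch of a PCB as a plain data structure in Python; the field names are my own shorthand for the entries above, not any real kernel's layout.

```python
# A toy PCB: field names are illustrative only, not a real kernel's layout.
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                                       # unique ID for this process
    state: str = "New"                             # "New", "Ready", "Running", "Blocking", ...
    program_counter: int = 0                       # address of the next instruction to execute
    registers: dict = field(default_factory=dict)  # register contents to restore on dispatch
    priority: int = 0                              # process scheduling info
    children: list = field(default_factory=list)   # IDs of child processes
    privileges: str = "user"                       # "user", "superuser", ...
```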

Q: Do PCBs go on the stack or the heap?

A:

3 Concurrency

〈 Do a Dining Philosophers live-action example. Show a deadlock. (Tell everyone to pick up their right-hand fork first, then tell them all that they’re hungry.) 〉 〈 Talk about the following things:


• Concurrent threads/processes
• Race conditions
• Critical section code: code where race conditions can occur.
• Mutual exclusion (mutex): property of concurrent programming where only one thread can execute its critical section at a time.
• Lock: actual mechanism to enforce mutual exclusion. We will use semaphores. 〉

Q: What are the four steps for a thread executing a critical section?

A:
1. Non-critical: code is not executing a critical section.
2. Try: before entering a critical section, the thread requests access.
3. Critical section: the thread executes the critical section. No other threads are simultaneously executing their critical sections.
4. Exit: the thread exits the critical section, freeing up the lock.
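A minimal sketch of those four steps in Python, using a plain lock as the mutual-exclusion mechanism (the class discussion uses semaphores; threading.Lock is substituted here just to show the shape):

```python
# The four steps: non-critical work, try (acquire), critical section, exit (release).
import threading

lock = threading.Lock()
shared_counter = 0

def worker():
    global shared_counter
    local_work = sum(range(1000))   # 1. non-critical: touches no shared state
    lock.acquire()                  # 2. try: blocks until the lock is free
    try:
        shared_counter += 1         # 3. critical section: one thread at a time
    finally:
        lock.release()              # 4. exit: free the lock for the next thread

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_counter)               # always 4
```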

4 Semaphores

〈 How can we use semaphores as locks? 〉 〈 Do dining philosophers again. 〉
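One common answer, sketched in Python: a semaphore initialized to 1 (a binary semaphore) behaves exactly like a lock, since only one thread can hold its single permit at a time.

```python
# A binary semaphore (initial value 1) used as a mutual-exclusion lock.
import threading

sem = threading.Semaphore(1)   # one permit: at most one thread in the critical section
balance = 0

def deposit(amount):
    global balance
    sem.acquire()              # "wait" / P: take the permit (blocks if it's taken)
    balance += amount          # critical section
    sem.release()              # "signal" / V: return the permit

threads = [threading.Thread(target=deposit, args=(10,)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                 # always 50
```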

Q: How many semaphores are we using?

5 The Producer-Consumer Problem

〈 Quickly describe the Producer-Consumer problem. 〉 Common solution: give them a buffer of items produced, but not yet consumed.

5.1 Circular Queues

〈 Explain need for a bounded-size buffer. 〉 〈 Buffer should be a queue. 〉


〈 Two common options: Linked List or Circular array. Quickly describe each. 〉

Q: What’s the pro for each?

A:

Q: How can we speed the circular array up?

A:

Q: When should I not be able to add to the circular array?

A: When it is full.

Q: What should happen to the thread that’s trying to add to it?

A: Block!

Q: Until when?

A:

Q: When should I not be able to remove from the circular array?

A:


Q: What should happen to a thread trying to remove something?

A:

Q: How can we enforce these blocking behaviors?

A:

Q: How many semaphores does each blocking queue need?

A:

〈 Talk about whether 2 is enough! 〉
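For reference, here is one standard textbook arrangement of a blocking circular-array buffer, sketched in Python. This is the classic bounded-buffer pattern, not necessarily the semaphore count settled on in class.

```python
# Bounded circular buffer: producers block when full, consumers block when empty.
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self.items = [None] * capacity
        self.head = 0                     # next slot to remove from
        self.tail = 0                     # next slot to add to
        self.capacity = capacity
        self.empty_slots = threading.Semaphore(capacity)  # producers wait when 0
        self.full_slots = threading.Semaphore(0)          # consumers wait when 0
        self.mutex = threading.Semaphore(1)               # protects head/tail/items

    def add(self, item):
        self.empty_slots.acquire()        # block if the buffer is full
        with self.mutex:
            self.items[self.tail] = item
            self.tail = (self.tail + 1) % self.capacity
        self.full_slots.release()         # signal: one more item available

    def remove(self):
        self.full_slots.acquire()         # block if the buffer is empty
        with self.mutex:
            item = self.items[self.head]
            self.head = (self.head + 1) % self.capacity
        self.empty_slots.release()        # signal: one more free slot
        return item
```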

6 Memory Management

The OS takes up some memory space (e.g., the Linux kernel). The remainder goes to the applications that will run on top of the OS.

Q: How big are modern OS kernels?

A:

Memory Management Techniques:
• Single contiguous allocation: all the memory is available to one application. (MS-DOS, single-purpose hardware.)
• Partitioned allocation: memory is divided up into separate partitions (maybe by application or process). Need to allocate space when jobs/applications start and deallocate when they stop.
• Paged allocation: memory is divided up into fixed-size page frames. These pages do not need to be contiguous in RAM. Addresses: (page, offset). A hardware Memory Management Unit (MMU) is needed to convert page indices to actual RAM addresses.
• Memory segmentation: memory is divided into variable-size segments. Addresses: (segment, offset). The MMU translates these pairs into physical addresses. (Implements a segment table.) Segmentation fault: tried to access an offset too big for the given segment.
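The (page, offset) idea can be shown in a few lines of Python; the page table below is a made-up mapping just for illustration:

```python
# Translating a (page, offset) virtual address to a physical address.
PAGE_SIZE = 4096                    # bytes per page / page frame

page_table = {0: 7, 1: 3, 2: 11}    # page -> frame (hypothetical mapping)

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))       # page 1, offset 0xABC -> frame 3 -> 0x3abc
```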

Q: Pros and Cons. Which has most overhead? Which is most flexible? Which is most used?

Q: What is Virtual Memory?

A:

〈 Virtual memory diagram goes here.2 〉

2 VM image source: https://commons.wikimedia.org/wiki/File:Virtual_memory.svg


Q: What is the distinction between page, page frame, and paging?

A:
• A page is a fixed-length piece of VM.
• A page frame is a fixed-length piece of RAM.
• Paging is the process of exchanging pages from the hard drive to RAM as needed (a.k.a. swapping).

Q: Can we have VM without paging?

A:

Q: Pros and Cons?

Q: Can we use Segmentation technique with paging?

A:

Nice bonus of VM: each application/process could get its own contiguous space, a la Partitioned Allocation. Thus, each could have its own stack and heap. Note: it may have to share some heap parts with other processes.

7 Stack vs. Heap

"The Stack", also known as call stack, stores data about currently-running code. What does top stack frame look like? • Process Control Block up top (small) • Lines of code from module below • Data below that. • Other stack frames below that.


7.1 Stack Management

7.2 Heap Management

〈 Just go over the notes from CS 232. I should probably copy them over here, but I haven't done that yet... 〉

8 Scheduling

〈 Talk about traffic lights, train tunnels, etc. 〉

Q: What are different goals for scheduling?

A:

Q: Okay, so what properties of a "Job" might a scheduler want to know?

A:


Q: What are some possible ways you would choose which process gets to run next? What do you think are common scheduling algorithms?

A:

Q: What is preemptive?

A:

8.1 Shortest-Job First

SJF minimizes total waiting time and latency.

Q: What’s the benefit of shortest job first?

A:


Q: Let's say I have four processes: A, B, C, D. Each has the following running times:
• A: 5 ms
• B: 10 ms
• C: 7 ms
• D: 3 ms
How long does it take to run all the processes using SJF? (Assume negligible context switching.)

A:
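For checking the answers in this part, here is a small sketch in Python that simulates nonpreemptive SJF on the four processes above and reports each one's completion time (its latency, assuming all four arrive at time 0):

```python
# Nonpreemptive SJF over the four example processes (all arrive at time 0).
jobs = {"A": 5, "B": 10, "C": 7, "D": 3}    # running times in ms

def run_sjf(jobs):
    clock = 0
    latencies = {}
    for name, length in sorted(jobs.items(), key=lambda kv: kv[1]):
        clock += length                     # run the shortest remaining job to completion
        latencies[name] = clock             # this job finishes at the current clock
    return latencies

latencies = run_sjf(jobs)
print(latencies)                                   # completion time (latency) of each process
print(sum(latencies.values()) / len(latencies))    # average latency
```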

Q: Is that any faster if I rearrange them?

A:

Q: So how does the order change the latency? I’ve still got to wait 25 ms to complete them all!

A:

Q: What’s the latency for each of the four processes using SJF?

A:


Q: So average?

A:

Q: What if we aren’t using SJF and instead do them in the order D, A, B, C? What is the average latency now?

A:

Q: How long to add something to the scheduler?

A:

Q: How long to remove the next process to run?

A:

Q: What’s a big issue with SJF? What’s a problem with this algorithm?

A: Starvation: One (or more) processes never get executed.


Q: How could this happen?

A:

Q: Could there be a preemptive version of SJF?

Q: How does that work?

8.2 First Come, First Serve

Q: Which property does FCFS optimize for?

A:

Q: How long to add something to the scheduler?

A:

Q: How long to remove the next process to run?

A:

Q: Could there be a preemptive version of this?

A:


8.3 Earliest-Deadline First

Q: Which property does EDF optimize for?

A:

Q: What should you do if you can’t make a deadline?

A:

Q: How long to add something to the scheduler?

A:

Q: How long to remove the next process to run?

A:

Q: Could there be a preemptive version of this?

A:


8.4 Round Robin

Q: Which property does Round Robin optimize for?

A:

Q: How long to add something to the scheduler?

A:

Q: How long to remove the next process to run?

A:

Q: Could there be a preemptive version of this?

A:

8.5 Hybrid Schedulers

〈 Talk about them. 〉

8.6 Examples

Situation: it takes 1 ms to context switch and the queue has these processes:

• Process A: Duration: 10 ms; Arrived: 21 ms ago; Deadline: in 32 ms
• Process B: Duration: 16 ms; Arrived: 34 ms ago; Deadline: in 100 ms
• Process C: Duration: 7 ms; Arrived: 24 ms ago; Deadline: in 12 ms
• Process D: Duration: 3 ms; Arrived: 10 ms ago; Deadline: in 17 ms
• Process E: Duration: 11 ms; Arrived: 121 ms ago; Deadline: none
• Process F: Duration: 5 ms; Arrived: 6 ms ago; Deadline: in 10 ms
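If you would rather script the schedulers below than work the examples by hand, here is the same process set as data in Python (the field names are just for this sketch):

```python
# The example process set (times in ms; deadline None means "no deadline").
from collections import namedtuple

Process = namedtuple("Process", "name duration arrived_ms_ago deadline_in_ms")

processes = [
    Process("A", 10,  21,  32),
    Process("B", 16,  34, 100),
    Process("C",  7,  24,  12),
    Process("D",  3,  10,  17),
    Process("E", 11, 121, None),
    Process("F",  5,   6,  10),
]

CONTEXT_SWITCH_MS = 1   # cost of each context switch, per the setup above
```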


Q: Answer the following questions for a Shortest-Job First scheduler:
• What is the order the processes are run on the CPU and how long does each run?
• What is the overall execution time? (Inverse of throughput.)
• What is the average latency?
• What is the maximum waiting time?
• How many processes make their deadlines?

A:


Q: Answer the following questions for a FCFS scheduler:
• What is the order the processes are run on the CPU and how long does each run?
• What is the overall execution time? (Inverse of throughput.)
• What is the average latency?
• What is the maximum waiting time?
• How many processes make their deadlines?

A:

Q: Answer the following questions for an Earliest-Deadline First scheduler:
• What is the order the processes are run on the CPU and how long does each run?
• What is the overall execution time? (Inverse of throughput.)
• What is the average latency?
• What is the maximum waiting time?
• How many processes make their deadlines?

A:


Q: Answer the following questions for a Round Robin scheduler:
• What is the order the processes are run on the CPU and how long does each run?
• What is the overall execution time? (Inverse of throughput.)
• What is the average latency?
• What is the maximum waiting time?
• How many processes make their deadlines?

A:

Q: Consider the following hybrid algorithm. It uses a priority: waiting time minus the job length.
• What is the order the processes are run on the CPU and how long does each run?
• What is the overall execution time? (Inverse of throughput.)
• What is the average latency?
• What is the maximum waiting time?
• How many processes make their deadlines?

A:


Q: Design a hybrid scheduling algorithm and try it out on this process set.
• What is the order the processes are run on the CPU and how long does each run?
• What is the overall execution time? (Inverse of throughput.)
• What is the average latency?
• What is the maximum waiting time?
• How many processes make their deadlines?

A:

9 Interrupts

Q: What is a hardware interrupt? (Maybe you learned about this in hardware...)

A:

Q: How is this like Low-level event-driven computing?

A:


Q: Does this use the normal CPU or does it use a separate resource?

A:

Q: What’s the problem with that other hardware?

A:

Q: Let’s assume we have to use the CPU. What has to happen, then?

A:

Sequence of events:
1. IRQ (Interrupt ReQuest) is created.
2. CPU saves the state of the current computation.
3. Interrupt Handler is called on the data from the IRQ.
4. Handler completes, and the CPU resumes the computations it had suspended.
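A rough software analogue of that sequence, sketched in Python with a Unix signal standing in for the IRQ (this is just an analogy; real hardware interrupts happen below the OS, not inside an interpreter):

```python
# Analogy: the OS delivers SIGINT, execution is suspended, the handler runs,
# and then the interrupted loop resumes.
import signal
import time

def handler(signum, frame):
    print(f"handler: caught signal {signum} while main code was at line {frame.f_lineno}")

signal.signal(signal.SIGINT, handler)   # register the "interrupt handler"

for i in range(5):
    time.sleep(1)                       # press Ctrl-C here to trigger the handler
    print("main loop resumed, i =", i)
```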

Q: What else (besides peripherals) can cause interrupts?

A:

Q: How does a processor interrupt itself?

A:


Q: What’s an example of this?

A:

Q: What would happen in the divide-by-zero case?

A:

Q: Most modern OSes are interrupt-driven, as it is much better than polling. What is polling?

A:

Q: Software Engineering students, what is this the low-level version of?

A: The Observer design pattern. This kind of mechanism is what makes the higher-level event-driven programming possible.

Q: How can a processor interrupt another processor?

A: Also via an interrupt. These are Inter-processor Interrupts (IPIs).

Q: When might you use this?

A:


Q: How could a process no longer be relevant?

A: Consider evaluating the boolean statement X or Y in Python...

Q: ...How does Python actually do this?

A: Evaluates X first, then, if X was False, it evaluates Y.

Q: Python doesn’t play nicely with multiple processors, but what if we can use an extra thread to do the computation?

A:

Q: What do we do if Y finishes first and it evaluates to True?

A:
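One way to sketch that idea in Python (my own construction, using threads rather than separate processors): run X and Y concurrently and answer True as soon as either side says True. Note where a real interrupt, like an IPI, would help: plain Python threads can't be interrupted, so the slower computation just runs to completion in the background.

```python
# Speculative evaluation of "X or Y": answer as soon as either side returns True.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def slow_x():
    time.sleep(2)
    return False

def slow_y():
    time.sleep(1)
    return True

def parallel_or(x_func, y_func):
    pool = ThreadPoolExecutor(max_workers=2)
    futures = [pool.submit(x_func), pool.submit(y_func)]
    try:
        for fut in as_completed(futures):
            if fut.result():
                # The other computation is no longer relevant, but we have no way
                # to interrupt it here; it finishes on its own in the background.
                return True
        return False
    finally:
        pool.shutdown(wait=False)

print(parallel_or(slow_x, slow_y))   # prints True after about 1 second
```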

Q: What’s a spurious interrupt?

A: An unwanted interrupt.

Q: What might cause a spurious interrupt?

A:


Q: What's it called when there are too many interrupts, slowing the system down?

A: An interrupt storm.

Q: How can we correctly ignore these interrupts?

A:

Q: This latter version is called level-triggered interrupts. The alternative is edge-triggered interrupts. What's the difference?

A: On an interrupt line (input) to the CPU, a level-triggered interrupt occurs when the value on that line is 1. That means the CPU just has to sample it at the appropriate time. An edge-triggered interrupt means it happens when the signal changes (either up or down or both). This means the CPU must be constantly listening to the line.

10 File Systems

Q: When was the File metaphor in full use in computing?

A:


Q: How many logical layers in FS?

A:

Q: What are they?

A:
• Bottom: Physical - the actual data on the disk.
• Top: Logical - the tree-like FS we're used to. A "wrapper" for the lower layer.
• Middle (maybe): Virtual - represents multiple physical devices as one virtual device.

Q: Which layer do we directly work with when issuing terminal commands?

A:

Q: What situations need a virtual layer?

A:

Q: What is a sector on a hard disk?

A: A sector is the smallest block size to store data in.


Q: Assume we have a 500 GB hard drive partition for our OS, with 512-byte sectors. What is the maximum number of files we can put into that hard drive?

A:

Q: What’s the minimum number of files?

A:

Q: How big does that one file have to be? (What’s the smallest size?)

A: 500 GB − 512 bytes + 1 byte = (500 × 2^30 − 511) bytes
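A quick check of that arithmetic (treating 1 GB as 2^30 bytes, as the formula above does):

```python
# Check: 500 GB - 512 bytes + 1 byte, with 1 GB = 2**30 bytes.
GB = 2 ** 30
SECTOR = 512

partition = 500 * GB
min_single_file = partition - SECTOR + 1     # leaves less than one whole free sector
print(min_single_file == 500 * GB - 511)     # True
print(partition // SECTOR)                   # number of 512-byte sectors on the partition
```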

It turns out it doesn't have to be that big. We'll find out why soon!

Q: What is slack space?

A: Slack space is the amount of unused space in sectors.


Q: What if the average file in our example from above has only 312 bytes of data? How much slack space do we have?

A:

Q: 12 bytes is a good estimate of a minimum file size. What’s the maximum slack space in that case?

A:

Q: What's going on with that 12 bytes?

A:

10.1 Fragmentation

Q: What is file fragmentation?

A: File fragmentation occurs when a file cannot be stored in one contiguous space, so its pieces end up scattered across non-adjacent sectors.


Q: How can we solve this problem?

A:

Cool, but triggering defrag gif: https://en.wikipedia.org/wiki/File_system_fragmentation# /media/File:FragmentationDefragmentation.gif

Q: What’s the problem with defragmentation?

A:

Q: What’s the problem with a fragmented system?

A: Each sector needs to include a pointer to the next sector with the rest of the file. A la a linked list!

Q: How big would that pointer have to be in our sample system?

A:

Q: We actually need 31 bits, why?

A:


Q: If a sector (in our continuing example) has a pointer to another sector, how much of the space is free for data?

A:

Q: What if there’s no EOF symbol and no pointer to the next sector?

A:

Q: The data for a file is 2000 bytes, not including the EOF symbol. It is broken into fragmented sectors. (Each needs a pointer unless it's the last one.) How many sectors are needed?

A:

Q: What is the slack space for this file?

A:

Q: What’s the space efficiency? Hint: file size divided by number of bytes used.

A: (2000/2032) ≈ 98.4%

Q: Okay, what about 2030 bytes?

A:


Q: What is that slack space for that last file?

A:

Q: What’s the space efficiency now?

A:

Q: Can we store this 2030-byte file in four sectors?

A: Yes!

Q: How?

A:

Q: Slack space and efficiency?

A:


Q: So which is the best policy? Frag or Defrag? What are the pros of each?

A:

Q: So what should we actually do?

A:

11 OS Security

12 History of OSes by Candace

First computers: no OS. Change by altering the hardware.3 Next step: front panels (50s-70s).4 Note from Candace: the IBM 1620 was called CADET: "Can't Add, Doesn't Even Try". It had lookup tables instead of doing addition algorithmically.

Q: In the early days, what happened after one program finished running?

A:

Early OSes: software "monitors" to handle queues of jobs. (Early scheduling?) These became batch processing systems, which automated the queues. Downsides: hours or days to process. Difficult to debug.

Q: What is batch processing?

A: Sequence of programs, each run on a set of inputs.

3 Link for image: http://turing.plymouth.edu/~kgb1013/?link=0&course=4310
4 Link for image: http://turing.plymouth.edu/~kgb1013/?link=1&course=4310


Q: How is/was this useful?

A:

Early Interactive Computing: programmed, debugged, operated in real time. (Still no OS.)

• 1946: Whirlwind I (MIT, US Navy). Had a visual output.5
• 1956: Transistorized eXperimental computer Zero (TX-0) (MIT Lincoln Lab). Add took 10 microseconds. Basically a Whirlwind I with transistors. Ken Olsen helped engineer it. He later became CEO of Digital (DEC).
• June, 1958: Semi-Automatic Ground Environment (SAGE) (MIT, USAF, IBM). Had early networking. 23 centers around the US. Each was a 4-story block house,6 with a second redundant computer in case the first went offline. Used light guns for input.7 The CPU was so large, there were telephones at either end so people could talk from one side to the other. Largest program known to be encoded to punch cards.8
• 1959: Programmed Data-Processor (PDP-1) (DEC).9 Descendant from TX-0. Had some games, and a chess engine. Circular screen. Still had a front panel.10

1960s:

• Time-sharing systems. (The Compatible Time-Sharing System, TSS (IBM), Berkeley TSS, PTSS (UPenn), DTSS (Dartmouth TSS), which is where BASIC started, etc.) Multitasking/multiuser OSes. Could access through a dumb terminal, even through a TTY phone line.
• Burroughs MCP (Master Control Program): 1961.11 Maybe the oldest continuously used OS. (Still in use today.) Lots of big steps made:
  – First OS to be fully written in a HLL. (Written in ESPOL, a derivative of ALGOL.)
  – First OS to handle multiple processors.
  – Open source: since it was designed to run only on their hardware, they freely distributed the code for the OS. Had user group meetings, and some code made its way into the base stuff. First to have these meetings and bring stuff back in, but not first to openly distribute the code.
  – First OS to use virtual memory. (Weird: it doesn't fetch fixed-size pages, mostly because the architecture is non-von-Neumann.)
  – Interesting: Burroughs machines were all stack-oriented instead of register-oriented.
  – Awesome photo: panel dump (via Polaroid).12
• LINC (Laboratory INstrument Computer): 1962.
  – First "PC" or "minicomputer".
  – Cost: ≈ $40,000 apiece.
  – Mary Allen Wilkes wrote the OS, an early interactive OS.
  – She was also the first person to use a computer in their own home. Photo!13
  – In 1965, her dad liked to brag: "I'll bet you don't have a computer in your living room!"
• Multics: 1964.14 More big steps:
  – Data and programs stored in the same device: RAM. (Combined device for stack and heap.)
  – Dynamic linking (a process could request more address space, and could bring in new code).
  – Hot-swappable: CPU, memory, disk drives! Reason: during off-peak hours, could schedule parts to go offline to save power, then bring them back up later without restarting the system.
  – Hardware support for ring protection.
  – First hierarchical file system (and actually stored that way; later, Burroughs MCP had hierarchical overlays on a flat system). File names could be arbitrarily long.
• 1966: PDP-10
  – Created by DEC.15
  – Normalized time-sharing systems.
  – Used heavily in ARPANET.16
• 1968: NLS ("oN-Line System"): early GUI-based interaction. Firsts:
  – Hypertext links
  – Mouse interaction17


  – 2-dimensional rectangular graphical display. (2-D array of pixels.) Used the television model.
  – Integrated email.
  – Document version control.
  – Video conferencing!!!!18

1970s:

• 1970: UNIX
  – 1969: Bell Labs developers start writing a replacement to Multics, which they think has gotten too big. (Dennis Ritchie, Ken Thompson, M. D. McIlroy, J. F. Ossanna.)
  – 1970: They call the project "Unics", then "Unix".
  – 1972: Assembly sucks, so they restart the project using C. (Dennis Ritchie created C.) C catches on, so Unix suddenly becomes portable!19
  – (Lots more people worked on the Unix kernel, but all through Bell Labs. Not open source.)
  – 1973: Ritchie and Thompson present "The UNIX Time-Sharing System" at the Symposium on Operating Systems Principles. Bob Fabry (Berkeley professor) is on the organizing committee and requests a copy of the source code to check out.
  – 1974: Unix code arrives at Berkeley. Funny story: he goes in with the Math and Stats department to buy a computer to share, a PDP-11/45. Math and Stats wants RSTS installed, so Unix is only loaded on the machine for 8 hours a day. Sometimes during the day, sometimes at night.
• 1973: Xerox Alto
  – First GUI-based computing system. Inspired by NLS.
  – Mouse! ... but... portrait-oriented screen?20
  – Designed for home use. Personal Computer.
  – Not widely adopted... for 6 years.
• 1975: UNIX Version 5 ("Research Unix")
  – 1975: UIUC purchases the first Unix offsite license (of Version 5, "Research Unix"). (This snowballs and becomes a source of income for Bell through the 80s.)
  – 1975: Version 6 is released. Universities pay $200 for a license; businesses pay $20,000.
• 1978: BSD (Berkeley Software Distribution)
  – Berkeley has been modifying their Unix code; now they have their own version.
  – 1BSD is released.
  – Slowly, BSD starts to outperform AT&T Unixes.
  – Institutes and universities tend to prefer BSD Unix, while businesses tend to prefer AT&T Unix.
  – Reality: they both share back and forth as they grow.
  – FreeBSD mascot: Beastie. First drawn in 1976 by Phil Foglio, but that image was lost at DEC. Updated in 1988 by John Lasseter.21
• 1979: Famous Xerox fail
  – Xerox Alto hasn't taken off.
  – Apple (Steve Jobs) expresses interest in the concepts. Xerox trades a license of the concepts for stock options.

1980s:

• 1981: MS-DOS
  – Microsoft purchases 86-DOS for $75,000 to run on 8086 IBM machines.
  – They rename it MS-DOS. First on the "IBM Personal Computer" (IBM 5150).22
  – Really brought the term "PC" into common use.
  – Computer price: $1565.
• 1983: GNU ("GNU's Not Unix")
  – Unix clone. (Contains no Unix software.)
  – 1983: GNU project created by Richard Stallman (RMS).
  – 1984: Leaves MIT so they can't claim it!
  – Designed to be completely free and open source. Created the new GNU License to publish code with.
  – 1985: GNU Manifesto.23
  – Kernel part called GNU HURD. But, not (yet) production-ready. (This turns out to be okay, as we'll see...)
  – Big contribution: other code. LOADS of free software.
  – 1986: Started the Free Software Foundation (FSF) to help fund development of GNU software.
• 1987: Minix ("Mini-Unix") by Andrew Tanenbaum
  – "Minimal Unix clone."
  – Created for pedagogy: licensed only for academic use.

5 Link for image: http://turing.plymouth.edu/~kgb1013/?link=2&course=4310
6 Link for image: http://turing.plymouth.edu/~kgb1013/?link=3&course=4310
7 Link for image: http://turing.plymouth.edu/~kgb1013/?link=4&course=4310
8 Link for image: http://turing.plymouth.edu/~kgb1013/?link=5&course=4310
9 Link for image: http://turing.plymouth.edu/~kgb1013/?link=6&course=4310
10 Link for image: http://turing.plymouth.edu/~kgb1013/?link=7&course=4310
11 Link for image: http://turing.plymouth.edu/~kgb1013/?link=8&course=4310
12 Link for image: http://turing.plymouth.edu/~kgb1013/?link=9&course=4310
13 Link for image: http://turing.plymouth.edu/~kgb1013/?link=10&course=4310
14 Link for image: http://turing.plymouth.edu/~kgb1013/?link=11&course=4310
15 Link for image: http://turing.plymouth.edu/~kgb1013/?link=12&course=4310
16 Link for image: http://turing.plymouth.edu/~kgb1013/?link=13&course=4310
17 Link for image: http://turing.plymouth.edu/~kgb1013/?link=14&course=4310
18 Link for image: http://turing.plymouth.edu/~kgb1013/?link=15&course=4310
19 Link for image: http://turing.plymouth.edu/~kgb1013/?link=16&course=4310
20 Link for image: http://turing.plymouth.edu/~kgb1013/?link=17&course=4310
21 Link for image: http://turing.plymouth.edu/~kgb1013/?link=18&course=4310
22 Link for image: http://turing.plymouth.edu/~kgb1013/?link=19&course=4310
23 Link for image: http://turing.plymouth.edu/~kgb1013/?link=20&course=4310

1990s:

• 1991: Linux
  – Current free OS statuses:


    ∗ Minix is only available to professors.
    ∗ BSD is stuck in legal trouble, so it's hard to get.
    ∗ GNU HURD isn't done yet.
    Now the Internet starts to become a thing, and people online start asking for a free OS.
  – August 25, 1991: Linus Torvalds posts to the comp.os.minix newsgroup:

    Hello everybody out there using minix -
    I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
    I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)
    Linus ([email protected])
    PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
    - Linus Torvalds24

  – September 1991: v 0.01 is available. 10k lines of code. (Linux was not named by Linus.) Linux on a 486:25
  – Jan 1992: Linux newsgroup is created! (Newsgroup: early Internet forum.)
  – Feb 1992: v 0.12 switches to the GNU license. This allows for commercial distribution.
  – March 1992: v 0.95 adopts the X window system. Now has a GUI.
  – March 1994: v 1.0 launched. ≈ 180,000 lines of code.
  – Implementation of GNU HURD more or less stops. GNU/Linux is a thing, with GNU overshadowed by Linux.26
• Mac OS
• Windows NT (late 90s?)
• 1999: Linux starts to dominate public servers.27

24 https://groups.google.com/forum/#!original/comp.os.minix/dlNtH7RRrGA/SwRavCzVE7gJ
25 Link for image: http://turing.plymouth.edu/~kgb1013/?link=21&course=4310
26 Link for image: http://turing.plymouth.edu/~kgb1013/?link=22&course=4310
27 Link for image: http://turing.plymouth.edu/~kgb1013/?link=23&course=4310
