Outlook on Operating Systems
COVER FEATURE: OUTLOOK
Dejan Milojičić, Hewlett Packard Labs
Timothy Roscoe, ETH Zurich
COMPUTER, January 2016, 0018-9162/16/$33.00 © 2016 IEEE

Will OSs in 2025 still resemble the Unix-like consensus of today, or will a very different design achieve widespread adoption?

Seventeen years ago, six renowned technologists attempted to predict the future of OSs for an article in the January 1999 issue of IEEE Concurrency.1 With the benefit of hindsight, the results were decidedly mixed. These six experts correctly predicted the emergence of scale-out architectures, nonuniform memory access (NUMA) performance issues, and the increasing importance of OS security. But they failed to predict the dominance of Linux and open source software, the decline of proprietary Unix variants, and the success of vertically integrated Mac OS X and iOS.

The reasons to believe that OS design won't change much going forward are well known and rehearsed: requirements for backward compatibility, the Unix model's historical resilience and adaptability,2 and so on. "If it ain't broke, don't fix it."

However, we argue that OSs will change radically. Our motivation for this argument is twofold. The first has been called the "Innovator's Dilemma," after the book of the same name:3 a variety of interests, both commercial and open source, have invested substantially in current OS structures and view disruption with suspicion. We seek to counterbalance this view. The second is more technical: by following the argument that the OS will change, we can identify the most promising paths for OS research to follow—toward either a radically different model or the evolution of existing systems. In research, it's often better to overshoot (and then figure out what worked) than to undershoot.

Current trends in both computer hardware and application software strongly suggest that OSs will need to be designed differently in the future. Whether this means that Linux, Windows, and the like will be replaced by something else or simply evolve rapidly will be determined by a combination of technical, business, and social factors beyond the control of OS technologists and researchers. Similarly, the change might come from incumbent vendors and open source communities, or from players in new markets with requirements that aren't satisfied by existing designs. Ultimately, though, things are going to have to change.

HARDWARE TRENDS

Hardware is changing at the levels of individual devices, cores, boards, and complete computer systems, with deep implications for OS design.

Complexity

Hardware is becoming increasingly complex. Today, the programming manual for a system-on-chip (SoC) part for a phone or server blade typically runs to more than 6,000 pages, usually not including documentation for the main application cores. Large numbers of peripheral devices are integrated onto a die, each with complex, varying programming models. More and more of these devices—networking adaptors, radios, graphics processors, power controllers, and so on—are now built as specialized processors that execute specialized firmware with little OS integration.

The OS communicates with these peripherals via proprietary messaging protocols specific to the producers of the peripheral devices. This marks the beginning of the trend toward dark silicon: a large collection of highly specialized processors, only a few of which can be used at a time.4

These devices' interconnects are also becoming more complex. A modern machine is a heterogeneous network of links, interconnects, buses, and addressing models. Not all devices are accessible from all general-purpose cores, and the same is true of memory: the cozy view of a computer as a single cache-coherent physical address space containing RAM and devices has been a myth for at least the last decade.

Energy

These systems aren't static, either. As a system resource to be managed by the OS, energy is now just as important as CPU cycles, bytes of RAM, or network bandwidth, whether in the form of cellphone battery life or datacenter power consumption. Not least among the many implications for new OS design is that most of the hardware—such as cores, devices, and memory—can be powered up and down at any point during execution.

Nonvolatile main memory

As technology advances, we expect large, nonvolatile main memories to become prevalent.5 This doesn't only mean that most data will persist in nonvolatile memory (NVM) as opposed to DRAM or disk, but also that packaging, cost, and power efficiency will make it possible to architect and deploy far more nonvolatile RAM (NVRAM) than DRAM. Main memory will be mostly persistent and will have much greater capacity than today's disks and disk arrays.

With large numbers of heterogeneous processors, this memory will be highly distributed, but perhaps not in the way we see it today. Photonic interconnects and high-radix switches can expand the load-store domain (where cores can issue cache-line reads and writes) across multiple server units (within a rack or even a datacenter) and potentially flatten the switch hierarchy, resulting in more uniform memory access.6

This further enlarges the memory accessible from a single CPU beyond that supported by modern 64-bit processors, which implement no more than 52 bits of physical address space, and as little as 42 bits on some CPUs. In the short term (it will be a few years until processor vendors implement more address bits), this will lead to configurable apertures, or windows, into the available memory. Typical time scales of general-purpose OSs span multiple decades, whereas real-time and embedded OS lifetimes are only a few years.

In the longer term, memory controllers are likely to become more intelligent and programmable at the OS—and perhaps application—level. They will be able to execute adaptive algorithms subject to memory access patterns. Memory controller functions will be executed closer to memory (outside CPUs), implementing optimizations such as encryption, compression, and quality-of-service functions.

Systems

Taking a step back, we see that the boundaries of today's machines are different from those of traditional scale-up and scale-out systems. The resources of a closely coupled cluster, such as a rack-scale InfiniBand cluster, must be managed at the timescales the OS is used to, rather than those used for traditional middleware.

We're so accustomed to thinking of Linux or Windows as OSs—because they started that way—that it's rare to consider a functional definition of an OS. Consider the following traditional definition of an OS: "an OS is a collection of system software that manages the complete hardware platform and securely multiplexes resources between tasks."

Linux, Windows, and Mac OS all fail this definition. A cellphone OS is a mishmash of proprietary device firmware, embedded real-time executives, and the Linux or iOS kernel and its daemons that run applications and manage a fraction of the hardware. The OS of a datacenter or rack-scale appliance is typically a collection of different Linux installations as well as custom and off-the-shelf middleware. If we want to examine the structure of a real-world, contemporary OS, we need to look elsewhere. A great deal of traditional OS functionality now occurs outside of general-purpose OSs: it has moved to the top-of-rack management server, closer to memory (memory-side controllers), to various appliances (intrusion detection systems and storage), or to specialized cores on the same die.

Diversity

Hardware is also becoming more diverse. Beyond the complexity of any single product line of machines, the designs of every processor, SoC, and complete system are different. SoCs were always heterogeneous, but they used to appear only in special-purpose systems; now they're used in general-purpose systems. Engineering a general-purpose OS that can be used widely and evolve as hardware changes (and with it, the tradeoffs required for scalability, performance, and energy efficiency) is a formidable challenge.

In a remarkable change from 15 years ago, hardware adapts and diversifies much faster today than system software does—faster than the current [...]

[...] example) is light: applications are very different now, granting us the freedom to change the OS interface.

Rack-scale computing

One trend we're seeing in applications is rack-scale computing, which is sometimes deployed as software appliances (called "tin-wrapped software"). Many enterprise applications, such as file servers, relational and nonrelational databases, and big data analytics, now come prepackaged in a rack-scale software appliance (examples include Oracle's Exadata and Exalytics products or the SAP HANA database). Such appliances usually consist of a collection of server machines and optional custom hardware accelerators, connected by a high-bandwidth internal network such as InfiniBand. Customers plug in the power and network cables, configure it, and go.

[...] backplane network. Allocating interconnect bandwidth becomes important in these machines: a skewed hash join in a large relational database can easily saturate the 100 gigabits per second (Gbps) FDR InfiniBand links inside the appliance and significantly impact performance. Distributed coordination of CPU scheduling, combined with careful interconnect management, is essential.

In an effort to optimize performance, rack-scale applications often exploit the lowest-latency communication mechanisms available, such as remote direct memory access (RDMA) one-sided operations. This means that memory management and protection in the OS become a distributed systems problem.
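The load/store programming model implied by persistent main memory can be sketched with an ordinary memory-mapped file as a stand-in. This is only an analogy: with real NVM (for instance, a DAX-mapped device) the mapping would bypass the page cache, and durability would come from cache-line flush instructions rather than an msync-style flush, but the shape of the code is the same.

```python
import mmap
import os
import struct
import tempfile

# Stand-in for a region of persistent main memory: a memory-mapped
# file. With real NVM the mapping would go straight to the media.
path = os.path.join(tempfile.mkdtemp(), "pmem.img")
with open(path, "wb") as f:
    f.truncate(4096)  # one "persistent" page

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    # Update a counter in place with ordinary stores...
    struct.pack_into("<Q", region, 0, 42)
    # ...then make it durable. On real NVM this step would be a
    # cache-line writeback and fence, not a file-backed flush.
    region.flush()
    region.close()

# After a simulated reboot (reopening the file), the value survives.
with open(path, "rb") as f:
    value, = struct.unpack("<Q", f.read(8))
print(value)  # 42
```

The point of the sketch is that the application's data structure never passes through read/write calls; persistence is a property of the memory itself.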
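The physical-address limits discussed under "Nonvolatile main memory" are easy to make concrete. A minimal sketch in Python (the 52- and 42-bit figures come from the text above; the helper function is purely illustrative):

```python
# Addressable physical memory for a given number of physical address bits.
def addressable_bytes(phys_bits: int) -> int:
    """Return the number of bytes addressable with `phys_bits`
    physical address bits (2**phys_bits)."""
    return 1 << phys_bits

PIB = 1 << 50  # pebibyte
TIB = 1 << 40  # tebibyte

# 52 bits, the widest physical address cited in the text: 4 PiB.
print(addressable_bytes(52) // PIB)  # 4 (PiB)

# 42 bits, the low end cited for some CPUs: only 4 TiB -- already
# within reach of a single large NVRAM-equipped server.
print(addressable_bytes(42) // TIB)  # 4 (TiB)
```

A 42-bit machine attached to a rack-scale pool of persistent memory therefore cannot map the whole pool at once, which is exactly why configurable apertures into the larger memory become necessary in the short term.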
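One way to picture the "adaptive algorithms subject to memory access patterns" that future programmable memory controllers might run is a toy policy that enables next-line prefetching only while the recent access stream looks sequential. Everything here, including the class name, window size, and threshold, is invented for illustration:

```python
from collections import deque

class AdaptivePrefetcher:
    """Toy memory-side controller policy: enable next-line prefetch
    only while the recent access stream is mostly sequential."""

    def __init__(self, window: int = 8, threshold: float = 0.75):
        self.recent = deque(maxlen=window)  # 1 = sequential step, 0 = jump
        self.threshold = threshold
        self.last = None

    def access(self, line_addr: int) -> bool:
        """Record an access to cache line `line_addr`; return True if
        the controller would prefetch the following line."""
        if self.last is not None:
            self.recent.append(1 if line_addr == self.last + 1 else 0)
        self.last = line_addr
        if not self.recent:
            return False
        return sum(self.recent) / len(self.recent) >= self.threshold

pf = AdaptivePrefetcher()

for addr in range(100, 110):          # a sequential scan...
    scan_decision = pf.access(addr)
print(scan_decision)                  # True: prefetch switches on

for addr in [7, 4096, 13, 99999, 2, 777, 31, 5000]:  # ...then pointer chasing
    chase_decision = pf.access(addr)
print(chase_decision)                 # False: prefetch backs off
```

A real controller would track many streams in hardware and adapt much richer parameters (compression, placement, quality of service), but the feedback loop has this basic shape.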
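The claim that a skewed hash join can saturate a 100-Gbps link inside an appliance is simple arithmetic. A sketch, with an invented 1-TB worst-case payload purely for illustration:

```python
# Back-of-the-envelope check: how long does a 100 Gbps (FDR-class)
# link take to repartition a join's build side?
def transfer_seconds(payload_bytes: int, link_gbps: float) -> float:
    """Time to move `payload_bytes` over a link of `link_gbps`
    gigabits per second, ignoring protocol overhead."""
    link_bytes_per_s = link_gbps * 1e9 / 8
    return payload_bytes / link_bytes_per_s

# A 1 TB build side whose keys all hash to one node (worst-case skew)
# monopolizes that node's link for 80 seconds.
one_tb = 10**12
print(transfer_seconds(one_tb, 100.0))  # 80.0
```

At 12.5 GB/s of usable bandwidth, even a modest skew hot spot occupies the link for tens of seconds, which is why interconnect bandwidth has to be scheduled alongside CPU time rather than left to the network to arbitrate.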