ESSENTIAL Linux for EMBEDDED Developers


ESSENTIAL Linux for Embedded Developers – The open source way!

An overview of the Linux environment
• Linux evolution – What is Linux?
• Distributions
• Linux virtualization
• Linux everywhere
• The Linux kernel archives
• http://en.wikipedia.org/wiki/Linux

GNU/Linux
• Recursive acronym: "GNU's Not UNIX"
• http://www.gnu.org/ (not http://www.gnu.com/)
• Richard Stallman (1983) – goal: a free Unix
• Known for the Free Software movement, GNU, Emacs, gcc
• The GNU project never really released a complete operating system of its own
• Linus Torvalds released the Linux kernel as open source (1991; relicensed under the GPL in 1992); combined with the GNU userland, the result is commonly called GNU/Linux
• Free Software Foundation: http://www.fsf.org/
• http://en.wikipedia.org/wiki/GNU

Typical Linux System Layout (top to bottom)
• System reserved area
• Root file system (rootfs)
• Kernel
• Bootloader
• Hardware

Linux Distributions
• Red Hat / Fedora / CentOS
  – Most popular, good all-around choice
  – Fedora – community supported
  – Red Hat Enterprise Linux – commercially supported
• Debian / Ubuntu / Mint
  – Debian itself is completely non-commercial
  – Massive package selection and easy package management
  – Not as user friendly, but improving
• SUSE
  – IBM's preferred Linux – the Linux of choice for z/Series

Embedded Linux
• The Blackfin uClinux distribution by Analog Devices – a fork of the uClinux distribution for Blackfin processors
• Embedded Alley – see http://www.embeddedalley.com/
• Lineo Solutions uLinux
• MontaVista Linux – see http://www.mvista.com/products_services.php
• Pengutronix – see http://www.pengutronix.de/oselas/bsp/index_en.html
• RidgeRun Linux – see http://www.ridgerun.com/sdk.shtml
• TimeSys Linux – see http://www.timesys.com/embedded-linux/linuxlink
• Wind River – see http://www.windriver.com/products/linux/
• Digi Embedded Linux – for Digi's ARM-based modules

Virtualization
• Guest operating system virtualization – VMware Server, VirtualBox
• Shared kernel virtualization – Linux-VServer, Solaris Zones and Containers, OpenVZ (see also L4Linux, coLinux, MkLinux)
• Kernel-level virtualization – User Mode Linux (UML), KVM
• Hypervisor virtualization – Xen, VMware ESX Server, Microsoft's Hyper-V

The Linux Kernel Today
• Kernel.org – the Linux kernel archives

Linux Kernel Information
• Linux kernel version numbers consist of three numbers separated by dots, such as 2.2.14: the first is the major version number, the second the minor revision number, and the third the patch level.
• In the 2.x series there were two stages of kernel releases, "stable" and "development": development kernels had an odd minor number (2.3, 2.5, …), while stable or production kernels had an even one (2.4, 2.6). From 2.6 onward, and with the later 3.0 numbering, this odd/even convention was dropped.
• Once a development kernel was deemed stable, its minor number moved from odd to even for release (e.g., from 2.3.51 to 2.4.0).
• You can get a good sense of the future production state of Linux by looking at the development kernel.
• http://www.kernel.org

Application Interactions in Linux
• The kernel architecture – overview and address-space view
• Applications and modes of operation; kernel and user address spaces
• System calls: entry and exit points, the low-level view, strace, parameter passing and kernel implementations

Structure: The "Core" Linux Kernel (layered, top to bottom)
• Applications
• System libraries (libc)
• System call interface
• Kernel subsystems – I/O-related (file systems, networking, device drivers) and process-related (scheduler, memory management, IPC), plus loadable modules
• Architecture-dependent code
• Hardware

Operative Modes
• To avoid applications constantly crashing the whole system, newer operating systems were designed with two different modes of operation:
• Kernel mode:
  – the machine operates on critical data structures and has direct access to hardware (port I/O or memory-mapped I/O), physical memory, IRQs, DMA, and so on.
• User mode:
  – where users run their applications, with restricted privileges.
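The version numbering described above can be checked from an ordinary user-mode program. The sketch below is an illustration added for this overview (it is not from the original deck): it runs entirely in user mode and uses the uname(2) system call to ask the kernel for its release string, then splits it into the major/minor/patch numbers. The file name kver.c and the output format are arbitrary choices.

/* kver.c - minimal sketch: read the running kernel version via uname(2).
 * Build: gcc -o kver kver.c
 */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    int major = 0, minor = 0, patch = 0;

    if (uname(&u) != 0) {          /* C library wrapper around the uname system call */
        perror("uname");
        return 1;
    }

    /* u.release looks like "2.6.32-358.el6.i686" or "5.15.0-91-generic" */
    sscanf(u.release, "%d.%d.%d", &major, &minor, &patch);

    printf("kernel release : %s\n", u.release);
    printf("major=%d minor=%d patch=%d\n", major, minor, patch);
    return 0;
}

On a 2.6-era system this would print something like major=2 minor=6 patch=32; on a current kernel the major number is 5 or 6.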
User mode & Kernel mode (32 bit) 0xFFFFFFFF Kernel Address Space 1G Vmlinux 0xC1000000 Init 0xBFFFFFFF | Mdm | Mdm | User Address Space 3G gnome-terminal | Bash | App.exe 0x00000000 Operative Modes • Kernel Mode "prevents" User Mode applications from damaging the system or its features. • Modern microprocessors implement in hardware at least 2 different states. For ex. under Intel, 4 states determine the PL (Privilege Level). It is possible to use 0,1,2,3 states, with 0 used in Kernel Mode. • Linux/Unix OS requires only 2 privilege levels, and we will use such a paradigm as point of reference Switching from User Mode to Kernel Mode-1 • When do we switch? • Once we understand that there are 2 different modes, we have to know when we switch from one to the other. • Typically, there are 2 points of switching: 1. When calling a System Call: 2. When an IRQ (or exception) comes Let’s observe the user/kernel space • use vmstat to grasp recent process context Library call vs System Call in Linux Wrapper and Wrapper based System call , Entry and Exit Points Example of wrapper to a system call flow System calls • The main interface between the kernel and userspace is the set of system calls • About ~300 system calls that provides the main kernel services • File and device operations, networking operations, inter- process communication, process management, memory mapping, timers, threads, synchronization primitives, etc. • This interface is stable over time: only new system calls can be added by the kernel developers • This system call interface is wrapped by the C library, and userspace applications usually never make a system call directly but rather use the corresponding C library function System Calls: read • C example: count = read(fd,buffer,nbyte) • push parameters on stack read library procedure register • call library code X (read) count = read (fd , buffer , nbytes) memory (stack) application buffer • put system call number in register nbytes buffer user space fd • call kernel (TRAP) kernel space • kernel examines system call number • finds requested system call handler • system call execute requested operation handler • return to library and clean up • increase instruction pointer X • remove parameters from stack sys_read() • resume process Kernel Entry and Exit app app Library Code exceptions (error traps) System Call Interface trap 80h trap / system boot interrupt call scheduler table table Kernel device interrupt page faults dialog Devices System Calls vs. Library Calls • man 2 • historical evolution of # of calls • Unix 6e (~50), Solaris 7 (~250) • Linux 2.0 (~160), Linux 2.2 ( ~190), Linux 2.4 (~220) • library calls vs. system call possibilities: • library call never invokes system call • library call sometimes invokes system call • library call always invokes system call • system call not available via library • can invoke system call “directly” via assembly code Cost of Crossing the “Kernel Barrier” • More than a procedure call • Less than a context switch • Costs: • Establishing kernel stack • Validating parameters • Kernel mapped to user address space? Implementation Example: “Hello, world!” .data # section declaration msg: .string "Hello, world!\n" # our dear string len = . - msg # length of our dear string .text # section declaration # we must export the entry point to the ELF linker or .global _start # loader. They conventionally recognize _start as their # entry point. Use ld -e foo to override the default. 
Implementation Example: "Hello, world!"

.data                            # section declaration
msg:
    .string "Hello, world!\n"    # our dear string
    len = . - msg                # length of our dear string

.text                            # section declaration

    # We must export the entry point to the ELF linker or loader.
    # They conventionally recognize _start as their entry point.
    # Use ld -e foo to override the default.
    .global _start

_start:
    # write our string to stdout
    movl $len,%edx               # third argument: message length
    movl $msg,%ecx               # second argument: pointer to message to write
    movl $1,%ebx                 # first argument: file handle (stdout)
    movl $4,%eax                 # system call number (sys_write)
    int  $0x80                   # call kernel

    # and exit
    movl $0,%ebx                 # first argument: exit code
    movl $1,%eax                 # system call number (sys_exit)
    int  $0x80                   # call kernel

Linux System Calls (1)
• Invoked by executing int $0x80 – programmed exception vector number 128.
• The CPU switches to kernel mode and executes a kernel function.
• The calling process passes the number identifying the system call in the eax register (on Intel processors).
• The syscall handler is responsible for:
  – saving registers on the kernel-mode stack,
  – invoking the syscall service routine,
  – exiting by calling ret_from_sys_call().

Parameter Passing
• On the 32-bit Intel 80x86, six registers carry a system call and its parameters:
  – eax holds the syscall number,
  – ebx, ecx, edx, esi, edi hold the parameters passed to the syscall service routine identified by that number (a sixth parameter, when needed, goes in ebp).

Interacting with Modules
• Applications (App_1 … App_N) talk to loadable kernel modules (module.ko, loaded into the running vmlinux) through device nodes under /dev.
• A minimal module sketch is shown at the end of this section.

Basic utility, filter & developer command essentials
Understanding the Root File System Hierarchy

Linux File Systems on the Desktop
• Historically Linux had no file system of its own and originally ran the Minix file system.
• It later adopted the second extended file system, formally known as ext2fs.
• ext2 has since been enhanced into ext3 and ext4.
• With Linux 3.x, Btrfs is the upcoming candidate.
• Btrfs is a new copy-on-write file system for Linux aimed at implementing advanced features while focusing on fault tolerance, repair and easy administration.

Linux Tree Hierarchy
(The example tree in the slide shows /, /root, /bin, /proc, /usr, /sbin, /dev, /opt and /home, with user directories /home/ram and /home/sita and a src/linux source tree.)
• / – the (unnamed) actual root of the tree
• /usr – user utilities and applications
• /sbin – special/privileged system commands
• /root – the home area for the root user
• /dev – device nodes for external devices
• /bin – utility commands
• /opt – optional applications
• /home – user home directories; usually all you do is here
• /proc – a pseudo (virtual) file system exposing Linux kernel state
• /var – spool directories, log messages, etc.

The Shell
• The command interpreter in original Unix. Its loop:
  – read a command,
  – perhaps pre-process the command,
  – fork and execute it,
  – return the command's exit status.
• A little history revisited:
  – Bourne shell (sh)
  – C shell (csh)
  – Korn shell (ksh)
  – Bourne Again shell (bash)

Entry-level commands
• man, date, which, cal, whatis, who, info, w, apropos, id, write, mesg, bc

Pipes (|) & Filter Processes
• Process control and monitoring: ps, nice, sleep, at, nohup, kill, Ctrl+C, Ctrl+Z, fg, bg, top, vmstat
• Filters: grep, sort, tr, cut, paste, more, less, head, tail, nl, tee, wc

Files and directories
• chmod
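To close the section, and as noted under "Interacting with Modules" above, here is a minimal sketch of a loadable kernel module (module.ko). It is an illustration added for this overview, not part of the original slides; the file name hello.c and the log messages are arbitrary. The module registers init and exit functions with the kernel and logs from kernel mode via printk().

/* hello.c - minimal loadable kernel module sketch.
 *
 * Build against the headers of the running kernel with a two-line Makefile:
 *   obj-m += hello.o
 *   make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
 * Load / unload:
 *   sudo insmod hello.ko
 *   sudo rmmod hello
 * Messages appear in the kernel log (dmesg).
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)      /* called at insmod time */
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;                           /* 0 = success */
}

static void __exit hello_exit(void)     /* called at rmmod time */
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

Unlike the user-space examples earlier, this code runs in kernel mode once loaded, which is exactly why the mode separation and the module/device-node interface discussed above matter.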