Bill Farner, Peter Zalewski

What is openMosix?
• Linux kernel extension for Single-System Image (SSI) clustering
• Began in 2002, when Mosix became proprietary
• Requires no additional libraries
• Cluster behaves as a 'virtual SMP' machine
• Scalable, adaptive distributed computing

openMosix Details
• Preemptive process migration
• Adaptive resource sharing algorithms
• Decentralized control and scalability
• File system
• Auto discovery
• Checkpointing
• Shared memory implementation

Preemptive Process Migration
• Occurs when processing requirements at a node exceed the threshold levels for that node
• Moves the process from its unique home node (UHN) to a remote node
• The user context is moved to the remote node
  • Contains the program code, stack, data, memory maps and registers of the process
• The system context is not moved
  • Contains a description of the resources the process is attached to
• (A conceptual sketch of this kind of threshold decision appears after the Process Migration Time slide below)

Adaptive Resource Sharing Algorithms
• Memory ushering
  • Prevents thrashing and excessive swapping of processes
• Load balancing
  • Costs of different resources are not directly comparable
  • Uses economic principles and competitive analysis
• A mechanism exists for handling I/O-intensive processes

Decentralized Control and Scalability
• No central control and no master/slave relationship
• Each node makes control decisions independently
• Each node communicates with a random subset of other nodes
  • No need to know the entire cluster topology
• As a result, scalability increases

File System
• oMFS – openMosix File System
  • Provides consistency features that NFS doesn't:
    • Cache consistency
    • Timestamp consistency
    • Link consistency
• DFSA – Direct File System Access
  • Runs on top of a cluster file system
  • Lets remote (migrated) processes execute file system calls locally

Other Features
• Autodiscovery
  • Nodes can be added seamlessly while the cluster is running
• Checkpointing
  • A process context can be saved to a file and later restored so that execution can resume
  • Useful for tasks with long execution times and for programs that have no resume mechanism of their own
  • Helps overcome system instability, power failures and reboots

Shared Memory Implementation
• MigShm – Migration of Shared Memory
• Developed by students at the University of Pune, India
• Problem: openMosix destroys the memory map on the current node and re-creates it on the remote node
• Uses a modular design

MigShm Modules
• Migration of shared-memory processes
  • Most important; enables shared-memory processes to be migrated at any point during execution
• Communication module
• Consistency module
• Access to logs and migration decisions
• Migration of shared memory
• Thread migration

Code Changes Required to use openMosix

Other SSI Implementations
• OpenSSI
  • Offers high availability
  • High IPC overhead after process migration
• Kerrighed
  • Relatively new
  • No support for hot node addition/removal
  • A node crash can lead to a cluster crash

Process Migration Time
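The sketch below is a minimal, self-contained C illustration of the kind of threshold-and-random-probe migration decision described in the Preemptive Process Migration and Decentralized Control slides. Every name, constant and data structure here (NODES, PROBES, LOAD_THRESH, node_load, pick_target) is invented for illustration; it is not openMosix's actual kernel code, which weighs CPU, memory and communication costs with its economic resource-sharing algorithms. Probing only a few random peers, rather than consulting a master or the whole cluster, is what keeps the scheme decentralized and scalable.

```c
/* Conceptual sketch only: a threshold-based, decentralized migration
 * decision.  All names and numbers are illustrative, not openMosix's
 * real implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODES       8       /* hypothetical cluster size                */
#define PROBES      3       /* each node only asks a few random peers   */
#define LOAD_THRESH 0.75    /* migrate when local load exceeds this     */

/* Load each node would advertise to its peers (fraction of CPU in use). */
static double node_load[NODES] = {0.90, 0.20, 0.55, 0.10,
                                  0.80, 0.35, 0.60, 0.25};

/* Pick a migration target the decentralized way: probe a few random
 * peers and take the least loaded one, with no central master. */
static int pick_target(int self)
{
    int best = -1;
    for (int i = 0; i < PROBES; i++) {
        int peer = rand() % NODES;
        if (peer == self)
            continue;
        if (best < 0 || node_load[peer] < node_load[best])
            best = peer;
    }
    return best;
}

int main(void)
{
    srand((unsigned)time(NULL));
    int self = 0;                      /* the unique home node (UHN) */

    if (node_load[self] > LOAD_THRESH) {
        int target = pick_target(self);
        if (target >= 0 && node_load[target] < node_load[self])
            printf("node %d overloaded (%.2f): migrate user context to node %d (%.2f)\n",
                   self, node_load[self], target, node_load[target]);
        else
            printf("node %d overloaded, but no better peer found; stay put\n", self);
    } else {
        printf("node %d under threshold (%.2f): no migration\n",
               self, node_load[self]);
    }
    return 0;
}
```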
Why use openMosix?
• Requires no special programming (a plain test program illustrating this point appears at the end of these notes)
• Automatic load balancing (CPU and memory)
• Easy installation, zero configuration (autodiscovery)
• Highly scalable
• Decentralized (no requirement for a master)
• Options for improved manageability
• Nodes do not need to be identical*
• Cost effective

Shortcomings of openMosix
• Unable to migrate threads
• Will not migrate shared-memory applications
• A greater number of nodes does not improve performance for a single process
• Operates as a kernel patch

More shortcomings…
• No security (assumes a private, trusted network)
• A single node failure could crash the entire cluster
• Not well suited to I/O-intensive applications

Installing openMosix
• Kernel patch
  • Manual compilation
  • RPMs available
  • Apt packages
• Userland tools
  • Administration
  • Monitoring

Monitoring

Questions?
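Finally, to make the "Requires no special programming" point concrete, here is a plain, cluster-unaware C test program of the sort often used to watch an openMosix cluster balance load: it forks a few CPU-bound workers and waits for them. Nothing in it calls an openMosix API; on a patched kernel the children are ordinary Linux processes that the cluster is free to migrate. The worker count and loop length are arbitrary choices, not taken from the original slides.

```c
/* Ordinary Linux program with no openMosix-specific code: forks a few
 * CPU-bound workers.  On an openMosix node, such children are
 * candidates for transparent migration to less loaded nodes. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define WORKERS 4

static void burn_cpu(void)
{
    volatile double x = 0.0;
    for (long i = 0; i < 2000000000L; i++)
        x += (double)i * 1e-9;      /* pure CPU work: no I/O, no shared memory */
}

int main(void)
{
    for (int i = 0; i < WORKERS; i++) {
        pid_t pid = fork();
        if (pid == 0) {             /* child: do the work and exit */
            burn_cpu();
            _exit(0);
        } else if (pid < 0) {
            perror("fork");
            return 1;
        }
    }
    for (int i = 0; i < WORKERS; i++)
        wait(NULL);                 /* parent waits for all workers */
    printf("all %d workers finished\n", WORKERS);
    return 0;
}
```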