Tightly-Coupled and Fault-Tolerant Communication in Parallel Systems


Tightly-Coupled and Fault-Tolerant Communication in Parallel Systems

Inaugural dissertation submitted for the academic degree of Doctor of Natural Sciences at the Universität Mannheim, presented by Dipl.-Inf. David Christoph Slogsnat from Heidelberg. Mannheim, 2008. Dean: Prof. Dr. Matthias Krause, Universität Mannheim. Referee: Prof. Dr. Ulrich Brüning, Universität Heidelberg. Co-referee: Prof. Dr. Reinhard Männer, Universität Heidelberg. Date of the oral examination: 4 August 2008.

Abstract

The demand for processing power is increasing steadily. In the past, single-processor architectures clearly dominated the markets. As instruction-level parallelism is limited in most applications, significant performance gains can only be achieved in the future by exploiting parallelism at the higher levels of thread or process parallelism. As a consequence, modern "processors" incorporate multiple processor cores that form a single shared-memory multiprocessor.

In such systems, high-performance devices like network interface controllers are connected to processors and memory like every other input/output device, over a hierarchy of peripheral interconnects. One goal must therefore be to couple coprocessors physically closer to main memory and to the processors of a computing node, removing the overhead of today's peripheral interconnect structures. One such step is the direct connection of HyperTransport (HT) devices to Opteron processors, which is presented in this thesis. This work also analyzes how communication from a device to processors can be optimized at the protocol level. As today's computing nodes are shared-memory systems, the cache coherence protocol is the central protocol for data exchange between processors and devices. Consequently, the analysis extends to classes of devices that are aware of the cache coherence protocol. In addition, the concept of a transfer cache is proposed in this thesis, which reduces latency significantly even for non-coherent devices.

The trend towards exploiting process- and thread-level parallelism leads to a steady increase in system sizes. Networks used in such large systems are very susceptible to both hard and transient faults. Most transient fault rates are constant per bit that is stored or transmitted, so with increasing system sizes and higher clock frequencies the number of faults in time rises drastically. Eventually the error rate may reach a level where high-level error recovery becomes too costly, unless lower layers perform error correction that is transparent to the layers above. The second part of this thesis describes a direct interconnection network that provides a reliable transport service even without the use of end-to-end protocols. A novel hardware-based solution for intermediate routing is also developed in this thesis, which allows efficient, deadlock-free routing around faulty links.

Zusammenfassung

The demand of computer systems for computing power grows steadily. In the mass market in particular, it was met in the past primarily by single-processor systems. The parallel execution of operations is an essential factor for increasing performance. Since instruction-level parallelism is very limited in most applications, further performance gains are only possible if parallelism at the process and thread level is exploited as well.
Today's processor chips therefore usually consist of several processor cores that share a common memory with a global address space. In such systems, high-performance network interfaces are connected to the system through a hierarchy of interconnects and buses in the same way as classical input/output devices. To improve the communication performance between processor and network interface, this interconnect structure has to be optimized. One such approach is the development of devices that are attached directly to the processor chip via the HyperTransport protocol. An implementation of this concept is presented in this thesis.

Beyond that, this thesis investigates further ways to improve communication. In today's computer systems, the cache coherence protocol is the central protocol that governs the data exchange between the core components of the machine. This thesis presents classes of devices that participate directly in this protocol as communication partners. As a significant innovation, the concept of the transfer cache is also developed and presented, which considerably improves the communication latency between device and processor.

The better exploitation of parallelism at the level of processes and threads furthermore leads to systems of ever-increasing complexity. In the networks that connect such systems, frequent static and transient faults have to be expected. The fault rates in such a system can rise to a level at which error handling performed exclusively in higher software layers becomes very inefficient. This problem can be avoided by handling errors directly in hardware. In this spirit, the second part of this thesis describes a fault-tolerant interconnection network that ensures fault-tolerant transmission at the level of 8b/10b-encoded serial links. A further component of the protocol is a novel hardware-based mechanism that, via "intermediate routing", provides an efficient and deadlock-free way of routing packets around faulty components.

Contents

CHAPTER 1  Introduction
  1.1 The Extoll Project
  1.2 Physical Implementation
  1.3 Graphical Representations
  1.4 Methodologies
  1.5 A Theoretical Model for cHT/HT Performance

CHAPTER 2  Communication in Parallel Computers
  2.1 Caches
  2.2 Parallel Computing Architectures
    2.2.1 Communication Paradigms
    2.2.2 Remote Load/Store
    2.2.3 Put/Get
    2.2.4 Send-Receive
  2.3 Device Integration Design Space
    2.3.1 Process-Device Interaction
    2.3.2 Device Virtualization
  2.4 Cache Coherence for Shared Memory Systems
    2.4.1 Consistency Models for Shared Memory
    2.4.2 Cache Coherence Protocols
    2.4.3 Broadcast Protocols
      2.4.3.1 MOESI
      2.4.3.2 MESIF
    2.4.4 Directory-Based Protocols
    2.4.5 Serialization of Conflicting Accesses
  2.5 Introduction to x86 Systems
    2.5.1 Intel Xeon Architecture
    2.5.2 AMD
  2.6 Examples of Parallel Systems
    2.6.1 Sun UltraSPARC T2
    2.6.2 Cray T3E
    2.6.3 Cray XT3 and XT4
    2.6.4 IBM BlueGene/L
    2.6.5 NIs on Standardized Peripheral Interfaces

CHAPTER 3  Improving Device to Processor Communication
  3.1 HyperTransport Devices and Accelerators
    3.1.1 The HyperTransport Protocol
    3.1.2 I/O in HTX Systems
    3.1.3 Ordering in PIO
    3.1.4 Ordering PIO Write Requests
    3.1.5 Ordering PIO Read Requests
    3.1.6 Potential Incremental Solutions
  3.2 The Space of Analysis
    3.2.1 Latency-Sensitive Data
    3.2.2 Buffering
    3.2.3 Feasible Solutions
  3.3 Memory and Interconnect Bottlenecks
    3.3.1 Influence of the Cache Coherence Protocol
    3.3.2 Summary
  3.4 Devices at the Coherent Interconnect
    3.4.1 Devices with Coherent Caches
  3.5 The Performance of Coherent Transfers
    3.5.1 Devices with Coherent Caches
      3.5.1.1 Off-SOC Devices
      3.5.1.2 Devices with Caches in SOCs
    3.5.2 Devices with a Coherent Memory Controller
  3.6 Transfer Cache
  3.7 Results ...
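Chapter 2 of the outline covers broadcast cache coherence protocols such as MOESI, which the abstract identifies as the central protocol for data exchange between processors and coherent devices. As a rough, hedged illustration of that protocol class (not code from the thesis; the state simplifications and all names are assumptions), the following C sketch models how a single cache line moves between the five MOESI states on local accesses and snooped bus requests:

```c
#include <stdio.h>

/* The five MOESI cache-line states: Modified, Owned, Exclusive, Shared, Invalid. */
typedef enum { INVALID, SHARED, EXCLUSIVE, OWNED, MODIFIED } moesi_state_t;

/* Events seen by one cache: local processor accesses and snooped bus requests. */
typedef enum { LOCAL_READ, LOCAL_WRITE, BUS_READ, BUS_READ_FOR_OWNERSHIP } moesi_event_t;

/* Simplified next-state function for one line (data movement and the
 * shared/exclusive distinction on read misses are deliberately omitted). */
static moesi_state_t moesi_next(moesi_state_t s, moesi_event_t e)
{
    switch (e) {
    case LOCAL_READ:
        return (s == INVALID) ? SHARED : s;     /* read miss fetches a copy          */
    case LOCAL_WRITE:
        return MODIFIED;                        /* writing gains exclusive ownership */
    case BUS_READ:
        if (s == MODIFIED || s == OWNED)
            return OWNED;                       /* keep the dirty line, supply data  */
        if (s == EXCLUSIVE)
            return SHARED;
        return s;
    case BUS_READ_FOR_OWNERSHIP:
        return INVALID;                         /* another cache intends to write    */
    }
    return s;
}

int main(void)
{
    moesi_state_t s = INVALID;
    s = moesi_next(s, LOCAL_READ);   /* INVALID  -> SHARED   */
    s = moesi_next(s, LOCAL_WRITE);  /* SHARED   -> MODIFIED */
    s = moesi_next(s, BUS_READ);     /* MODIFIED -> OWNED    */
    printf("final state: %d\n", (int)s);
    return 0;
}
```

A coherent device with its own cache, as analyzed in Chapter 3, would participate in exactly this kind of state machine alongside the processor caches.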
Recommended publications
  • Tesi Final Marco Oliverio.Pdf
Abstract

Advancements in exploitation techniques call for advanced defenses. Modern operating systems have to face new sophisticated attacks that do not rely on any programming mistake; rather, they exploit information leaking from computational side effects (side-channel attacks) or hardware glitches (rowhammer attacks). Mitigating these new attacks poses new challenges and involves delicate trade-offs, balancing security on one side and performance, simplicity, and compatibility on the other. In this dissertation we explore the attack surface exposed by page fusion, a memory-saving optimization in modern operating systems, and then present a secure page fusion implementation called VUsion. We then propose a complete and compatible software solution to rowhammer attacks called ZebRAM. Lastly, we show OpenCAL, a free and general library for the implementation of Cellular Automata, which can be used in several security scenarios.

Acknowledgements

I would like to thank my supervisor, Prof. Andrea Pugliese, for the encouragement and the extremely valuable advice that put me in a fruitful and exciting research direction. A hearty acknowledgment goes to Kaveh Razavi, Cristiano Giuffrida, Herbert Bos and all the VUSec group of the Vrije Universiteit Amsterdam. I felt at home from day one and I found myself surrounded by brilliant researchers and good friends.

Contents
  Abstract
  Acknowledgements
  Introduction
  1 VUsion
    1.1 Introduction
    1.2 Page Fusion
      1.2.1 Linux Kernel Same-page Merging
      1.2.2 Windows Page Fusion
    1.3 Threat Model
    1.4 Known Attack Vectors
      1.4.1 Information Disclosure
      1.4.2 Flip Feng Shui
    1.5 New Attack Vectors
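The Linux page-fusion mechanism named in the table of contents is Kernel Same-page Merging (KSM), where a process explicitly opts a memory range into merging. As a minimal, hedged sketch of that opt-in step (Linux-specific; it assumes a kernel built with CONFIG_KSM and the ksmd daemon enabled via /sys/kernel/mm/ksm/run, and does not reflect VUsion's changes), the following C program marks an anonymous mapping as mergeable with madvise(2):

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 * 4096;  /* 16 pages filled with identical content */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(buf, 0x41, len);  /* identical pages are candidates for fusion */

    /* Opt the region into Kernel Same-page Merging: ksmd may later map
     * duplicate pages copy-on-write onto a single physical page. */
    if (madvise(buf, len, MADV_MERGEABLE) != 0) { perror("madvise"); return 1; }

    printf("region marked MADV_MERGEABLE; waiting while ksmd scans...\n");
    getchar();               /* keep the mapping alive for the scan */
    munmap(buf, len);
    return 0;
}
```

The deduplication itself and the copy-on-write break that happens when a merged page is later written are what the chapter's attack vectors (information disclosure and Flip Feng Shui) build on.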
  • An In-Depth Study with All Rust CVEs
Memory-Safety Challenge Considered Solved? An In-Depth Study with All Rust CVEs
HUI XU, School of Computer Science, Fudan University; ZHUANGBIN CHEN, Dept. of CSE, The Chinese University of Hong Kong; MINGSHEN SUN, Baidu Security; YANGFAN ZHOU, School of Computer Science, Fudan University; MICHAEL R. LYU, Dept. of CSE, The Chinese University of Hong Kong

Rust is an emerging programming language that aims at preventing memory-safety bugs without sacrificing much efficiency. The claimed property is very attractive to developers, and many projects have started using the language. However, can Rust achieve the memory-safety promise? This paper studies the question by surveying 186 real-world bug reports collected from several origins, which contain all existing Rust CVEs (Common Vulnerabilities and Exposures) of memory-safety issues by 2020-12-31. We manually analyze each bug and extract their culprit patterns. Our analysis result shows that Rust can keep its promise that all memory-safety bugs require unsafe code, and many memory-safety bugs in our dataset are mild soundness issues that only leave a possibility to write memory-safety bugs without unsafe code. Furthermore, we summarize three typical categories of memory-safety bugs, including automatic memory reclaim, unsound function, and unsound generic or trait. While automatic memory reclaim bugs are related to the side effect of Rust's newly adopted ownership-based resource management scheme, unsound function reveals the essential challenge of Rust development for avoiding unsound code, and unsound generic or trait intensifies the risk of introducing unsoundness. Based on these findings, we propose two promising directions towards improving the security of Rust development, including several best practices of using specific APIs and methods to detect particular bugs involving unsafe code.
  • Ensuring the Spatial and Temporal Memory Safety of C at Runtime
MemSafe: Ensuring the Spatial and Temporal Memory Safety of C at Runtime
Matthew S. Simpson, Rajeev K. Barua, Department of Electrical & Computer Engineering, University of Maryland, College Park, College Park, MD 20742-3256, USA. fsimpsom, [email protected]

Abstract: Memory access violations are a leading source of unreliability in C programs. As evidence of this problem, a variety of methods exist that retrofit C with software checks to detect memory errors at runtime. However, these methods generally suffer from one or more drawbacks including the inability to detect all errors, the use of incompatible metadata, the need for manual code modifications, and high runtime overheads. In this paper, we present a compiler analysis and transformation for ensuring the memory safety of C called MemSafe. MemSafe makes several novel contributions that improve upon previous work and lower the cost of safety. These include (1) a method for modeling temporal errors as spatial errors, (2) a compatible metadata representation combining features of object- and

... dereferencing pointers obtained from invalid pointer arithmetic; and dereferencing uninitialized, NULL or "manufactured" pointers. A temporal error is a violation caused by using a pointer whose referent has been deallocated (e.g. with free) and is no longer a valid object. The most well-known temporal violations include dereferencing "dangling" pointers to dynamically allocated memory and freeing a pointer more than once. Dereferencing pointers to automatically allocated memory (stack variables) is also a concern if the address of the referent "escapes" and is made available outside the function in which it was defined.
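For readers who want the two error classes above made concrete, here is a small, hedged C fragment (illustrative only, not taken from the MemSafe paper) containing one spatial violation and one temporal violation of the kinds listed:

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Spatial error: the index runs past the end of the referent. */
    char buf[8];
    for (int i = 0; i <= 8; i++)   /* off-by-one: writes buf[8], one past the end */
        buf[i] = 'A';

    /* Temporal error: the referent is freed, then the dangling pointer is used. */
    char *q = malloc(16);
    if (!q) return 1;
    strcpy(q, "hello");
    free(q);
    q[0] = 'X';                    /* use-after-free: q's referent no longer exists */

    return 0;
}
```

MemSafe's first stated contribution, modeling temporal errors as spatial errors, means the dangling use of q can be caught by the same bounds-style check that flags the out-of-bounds write to buf.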
  • Improving Memory Management Security for C and C++
Improving Memory Management Security for C and C++
Yves Younan, Wouter Joosen, Frank Piessens, Hans Van den Eynden, DistriNet, Katholieke Universiteit Leuven, Belgium

Abstract: Memory managers are an important part of any modern language: they are used to dynamically allocate memory for use in the program. Many managers exist, and which one is used depends on the operating system and language. However, two major types of managers can be identified: manual memory allocators and garbage collectors. In the case of manual memory allocators, the programmer must manually release memory back to the system when it is no longer needed. Problems occur when a programmer forgets to release memory (memory leaks), releases it twice, or keeps using freed memory. These problems are solved by garbage collectors. However, both manual memory allocators and garbage collectors store management information for the memory they manage. Often, this management information is stored in a location where a buffer overflow allows an attacker to overwrite it, providing a reliable way to achieve code execution when exploiting these vulnerabilities. In this paper we describe several vulnerabilities for C and C++ and how these could be exploited by modifying the management information of a representative manual memory allocator and a garbage collector. Afterwards, we present an approach that, when applied to memory managers, protects against these attack vectors. We implemented our approach by modifying an existing, widely used memory allocator. Benchmarks show that this implementation has a negligible, sometimes even beneficial, impact on performance.

1 Introduction
Security has become an important concern for all computer users. Worms and hackers are a part of everyday internet life.
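The attack surface described above, management information stored next to user data, can be illustrated with a deliberately simplified C sketch. The layout below is generic and hypothetical (it does not model the specific allocator or collector examined in the paper): a header holding a size and a free-list pointer sits directly in front of each buffer, so overflowing one allocation corrupts the metadata of its neighbour.

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Toy allocator layout: management information lives directly in front of
 * the user buffer, as it does in many real manual memory allocators. */
struct chunk_header {
    size_t size;
    struct chunk_header *next_free;  /* pointers like this are what heap
                                        exploits typically aim to overwrite */
};

struct chunk {
    struct chunk_header hdr;
    char data[16];
};

int main(void)
{
    struct chunk heap[2] = {
        { { 16, NULL }, "" },
        { { 16, NULL }, "" },
    };

    /* A 24-byte write into a 16-byte buffer spills past heap[0].data into
     * heap[1].hdr, silently corrupting the neighbour's management data. */
    memset(heap[0].data, 'A', 24);

    printf("neighbour's size field is now: %zx\n", heap[1].hdr.size);
    return 0;
}
```

The countermeasure the paper proposes is, roughly, to move such management information out of reach of overflowing buffers so that a write like this can no longer redirect the allocator.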
  • HALO: Post-Link Heap-Layout Optimisation
HALO: Post-Link Heap-Layout Optimisation
Joe Savage, University of Cambridge, UK ([email protected]); Timothy M. Jones, University of Cambridge, UK ([email protected])

Abstract: Today, general-purpose memory allocators dominate the landscape of dynamic memory management. While these solutions can provide reasonably good behaviour across a wide range of workloads, it is an unfortunate reality that their behaviour for any particular workload can be highly suboptimal. By catering primarily to average and worst-case usage patterns, these allocators deny programs the advantages of domain-specific optimisations, and thus may inadvertently place data in a manner that hinders performance, generating unnecessary cache misses and load stalls. To help alleviate these issues, we propose HALO: a post-link profile-guided optimisation tool that can improve the

1 Introduction
As the gap between memory and processor speeds continues to widen, efficient cache utilisation is more important than ever. While compilers have long employed techniques like basic-block reordering, loop fission and tiling, and intelligent register allocation to improve the cache behaviour of programs, the layout of dynamically allocated memory remains largely beyond the reach of static tools. Today, when a C++ program calls new, or a C program malloc, its request is satisfied by a general-purpose allocator with no intimate knowledge of what the program does or how its data objects are used. Allocations are made through fixed, lifeless interfaces, and fulfilled by
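To make the contrast with a general-purpose allocator concrete, here is a hedged C sketch of the kind of domain-specific placement that a layout-aware tool like HALO tries to recover automatically (the arena idea and all names here are illustrative assumptions, not HALO's actual mechanism): objects that the workload always traverses together are carved out of one contiguous arena rather than individual malloc calls, so they end up sharing cache lines.

```c
#include <stdio.h>
#include <stdlib.h>

/* Nodes that the workload always traverses together. */
struct node { long key; struct node *next; };

/* Bump allocator over one contiguous arena: successive allocations are
 * adjacent in memory, which a general-purpose malloc does not guarantee. */
struct arena { char *base, *cur, *end; };

static void *arena_alloc(struct arena *a, size_t n)
{
    if (a->cur + n > a->end) return NULL;
    void *p = a->cur;
    a->cur += n;
    return p;
}

int main(void)
{
    struct arena a;
    a.base = a.cur = malloc(1 << 20);             /* one 1 MiB arena */
    if (!a.base) return 1;
    a.end = a.base + (1 << 20);

    struct node *head = NULL;
    for (long i = 0; i < 1000; i++) {             /* nodes laid out contiguously */
        struct node *n = arena_alloc(&a, sizeof *n);
        if (!n) break;
        n->key = i;
        n->next = head;
        head = n;
    }

    long sum = 0;
    for (struct node *n = head; n; n = n->next)   /* cache-friendly walk */
        sum += n->key;
    printf("sum = %ld\n", sum);

    free(a.base);
    return 0;
}
```

HALO's point is that programmers rarely write such allocators by hand; a post-link, profile-guided tool can instead regroup frequently co-accessed allocations after the fact.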
  • Buffer Overflows
L15: Buffer Overflow. CSE 351, Spring 2017. Instructor: Ruth Anderson. Teaching Assistants: Dylan Johnson, Kevin Bi, Linxing Preston Jiang, Cody Ohlsen, Yufang Sun, Joshua Curtis.

Administrivia: Homework 3 due next Friday, May 5; Lab 3 coming soon.

Topics: address space layout (more details), input buffers on the stack, overflowing buffers and injecting code, defenses against buffer overflows.

Review, general memory layout (from address 2^N - 1 down to 0; not drawn to scale): Stack: local variables (procedure context). Heap: dynamically allocated as needed with malloc(), calloc(), new, ... Static Data: read/write global variables, plus read-only string literals (Literals). Instructions: executable machine instructions, read-only.

x86-64 Linux memory layout (user space topping out at 0x00007FFFFFFFFFFF, code starting at 0x400000; not drawn to scale): Stack: runtime stack with an 8 MiB limit. Heap: dynamically allocated as needed with malloc(), calloc(), new, ... Data: statically allocated data, read-only string literals and read/write global arrays and variables. Shared Libraries. Instructions: executable machine instructions, read-only.

Memory allocation example:

    char big_array[1L<<24];   /* 16 MB */
    char huge_array[1L<<31];  /*  2 GB */
    int global = 0;

    int useless() { return 0; }

    int main() {
        void *p1, *p2, *p3, *p4;
        int local = 0;
        p1 = malloc(1L << 28);  /* 256 MB */
        p2 = malloc(1L << 8);   /* 256 B  */
        p3 = malloc(1L << 32);  /*   4 GB */
        p4 = malloc(1L << 8);   /* 256 B  */
        /* Some print statements .. */
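As a complement to the allocation example, here is a minimal and intentionally unsafe C illustration of the stack-buffer overflow these slides build toward (a generic textbook pattern, not code from the course; run it only in a throwaway environment):

```c
#include <stdio.h>
#include <string.h>

/* Classic unsafe pattern: caller-controlled data is copied into a fixed-size
 * stack buffer with no bounds check. A long argument overwrites whatever sits
 * above the buffer in the stack frame, including the saved return address. */
void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);          /* no length check: overflows for inputs > 15 chars */
    printf("copied: %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        vulnerable(argv[1]);     /* e.g. a 100-character argument smashes the stack */
    return 0;
}
```

Modern toolchains counter this with the defenses the slides go on to list, such as stack canaries, non-executable stacks and address-space layout randomization.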
  • Memory Vulnerability Diagnosis for Binary Program
ITM Web of Conferences 7, 03004 (2016). DOI: 10.1051/itmconf/20160703004. ITA 2016.
Memory Vulnerability Diagnosis for Binary Program
Feng-Yi TANG, Chao FENG and Chao-Jing TANG, College of Electronic Science and Engineering, National University of Defense Technology, Changsha, China

Abstract: Vulnerability diagnosis is important for program security analysis. It is a further step to understand the vulnerability after it is detected, as well as a preparatory step for vulnerability repair or exploitation. This paper mainly analyses the inner theories of major memory vulnerabilities and the threats they pose, and then suggests some methods to diagnose several types of memory vulnerabilities in binary programs, which is a difficult task due to the lack of source code. The diagnosis methods target buffer overflow, use after free (UAF) and format string vulnerabilities. We carried out tests on the Linux platform to validate the effectiveness of the diagnosis methods. It is proved that the methods can judge the type of the vulnerability given a binary program.

1 Introduction
Memory vulnerabilities are difficult to detect and diagnose, especially those that do not crash the program. Due to its importance, the vulnerability diagnosis problem has been intensively studied. Researchers have proposed different techniques to diagnose memory vulnerabilities.

... (or both) the memory area. These headers can contain some information like size, used or not, etc. Therefore, we can monitor the allocator so as to determine the base address and the size of the allocation areas. Then, check all STORE and LOAD accesses in memory to see which one is outside the allocated area.
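The diagnosis idea in that last fragment, record the base address and size of every allocation and then flag loads and stores that fall outside all recorded areas, can be sketched in C as follows. The paper works on binaries with instrumentation rather than a source-level wrapper, so treat this as a hedged analogy with hypothetical names:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Shadow table of live allocations: base address and size of each area. */
struct alloc_rec { uintptr_t base; size_t size; };
static struct alloc_rec table[1024];
static int nrec;

/* Wrapped allocator: record every area handed out to the program. */
static void *traced_malloc(size_t n)
{
    void *p = malloc(n);
    if (p && nrec < 1024)
        table[nrec++] = (struct alloc_rec){ (uintptr_t)p, n };
    return p;
}

/* The check a binary-level tool would apply to every STORE and LOAD. */
static int access_ok(const void *addr, size_t n)
{
    uintptr_t a = (uintptr_t)addr;
    for (int i = 0; i < nrec; i++)
        if (a >= table[i].base && a + n <= table[i].base + table[i].size)
            return 1;
    return 0;   /* outside every allocated area: overflow or wild pointer */
}

int main(void)
{
    char *p = traced_malloc(16);
    if (!p) return 1;
    printf("store to p+8  in bounds? %d\n", access_ok(p + 8, 1));   /* 1 */
    printf("store to p+16 in bounds? %d\n", access_ok(p + 16, 1));  /* 0 */
    free(p);
    return 0;
}
```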
  • Transparent Garbage Collection for C++
Document Number: WG21/N1833=05-0093. Date: 2005-06-24. Reply to: Hans-J. Boehm, [email protected], 1501 Page Mill Rd., MS 1138, Palo Alto CA 94304 USA

Transparent Garbage Collection for C++
Hans Boehm, Michael Spertus

Abstract: A number of possible approaches to automatic memory management in C++ have been considered over the years. Here we propose the reconsideration of an approach that relies on partially conservative garbage collection. Its principal advantage is that objects referenced by ordinary pointers may be garbage-collected. Unlike other approaches, this makes it possible to garbage-collect objects allocated and manipulated by most legacy libraries. This makes it much easier to convert existing code to a garbage-collected environment. It also means that it can be used, for example, to "repair" legacy code with deficient memory management. The approach taken here is similar to that taken by Bjarne Stroustrup's much earlier proposal (N0932=96-0114). Based on prior discussion on the core reflector, this version does insist that implementations make an attempt at garbage collection if so requested by the application. However, since there is no real notion of space usage in the standard, there is no way to make this a substantive requirement. An implementation that "garbage collects" by deallocating all collectable memory at process exit will remain conforming, though it is likely to be unsatisfactory for some uses.

1 Introduction
A number of different mechanisms for adding automatic memory reclamation (garbage collection) to C++ have been considered:
1. Smart-pointer-based approaches which recycle objects no longer referenced via special library-defined replacement pointer types.
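The partially conservative collector this proposal builds on is available today as the Boehm-Demers-Weiser library. Assuming that library (libgc, header gc.h) is installed, a minimal C usage sketch of the idea that ordinary pointers keep objects alive and unreachable ones are reclaimed without explicit free might look like this:

```c
/* Build (assuming libgc is installed): cc gc_demo.c -lgc */
#include <stdio.h>
#include <gc.h>

struct node { int value; struct node *next; };

int main(void)
{
    GC_INIT();                                   /* initialise the collector  */

    struct node *head = NULL;
    for (int i = 0; i < 100000; i++) {
        struct node *n = GC_MALLOC(sizeof *n);   /* collectable allocation    */
        n->value = i;
        n->next = head;                          /* ordinary pointers suffice */
        head = n;
    }

    head = NULL;         /* the whole list is now unreachable               */
    GC_gcollect();       /* the collector may reclaim it; no free() needed  */

    printf("GC heap size: %zu bytes\n", GC_get_heap_size());
    return 0;
}
```

The C++ proposal goes further by letting objects allocated with plain new be treated this way, so legacy libraries participate without source changes.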
  • An Evolutionary Study of Linux Memory Management for Fun and Profit Jian Huang, Moinuddin K
An Evolutionary Study of Linux Memory Management for Fun and Profit
Jian Huang, Moinuddin K. Qureshi, and Karsten Schwan, Georgia Institute of Technology
https://www.usenix.org/conference/atc16/technical-sessions/presentation/huang
This paper is included in the Proceedings of the 2016 USENIX Annual Technical Conference (USENIX ATC '16), June 22–24, 2016, Denver, CO, USA. ISBN 978-1-931971-30-0. Open access to the Proceedings of the 2016 USENIX Annual Technical Conference (USENIX ATC '16) is sponsored by USENIX.

Abstract: We present a comprehensive and quantitative study on the development of the Linux memory manager. The study examines 4587 committed patches over the last five years (2009-2015) since Linux version 2.6.32. Insights derived from this study concern the development process of the virtual memory system, including its patch distribution and patterns, and techniques for memory optimizations and semantics. Specifically, we find that ...

... the patches committed over the last five years from 2009 to 2015. The study covers 4587 patches across Linux versions from 2.6.32.1 to 4.0-rc4. We manually label each patch after carefully checking the patch, its descriptions, and follow-up discussions posted by developers. To further understand patch distribution over memory semantics, we build a tool called MChecker to identify the changes to the key functions in mm. MChecker matches the patches with the source code to track the hot functions that have been updated intensively.
  • Declarative Computation Model Memory Management Last Call
Memory Management, Declarative Computation Model (VRH 2.5). Carlos Varela, RPI, October 5, 2006. Adapted with permission from Seif Haridi (KTH) and Peter Van Roy (UCL).

Topics: semantic stack and store sizes during computation, analysed using the operational semantics; recursion used for looping, efficient because of last call optimization; the memory life cycle; garbage collection.

Last call optimization. Consider the following procedure, in which the recursive call is the last call:

    proc {Loop10 I}
       if I == 10 then skip
       else
          {Browse I}
          {Loop10 I+1}
       end
    end

This procedure does not increase the size of the semantic stack: it behaves like a looping construct. Execution steps through states such as ST: [({Browse I}, {I→i0,...}), ({Loop10 I+1}, {I→i0,...})] with store σ: {i0=0, ...}, and after the next iteration ST: [({Browse I}, {I→i1,...}), ({Loop10 I+1}, {I→i1,...})] with σ: {i0=0, i1=1,...}. The stack stays bounded while the store accumulates a binding per iteration, which the remaining slides (Stack and Store Size, Garbage collection) use to motivate garbage collection.
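The same looping-by-recursion pattern can be written in C (an analogue of the Oz Loop10 above, not from the slides): because the recursive call is the last action, an optimising compiler can typically reuse the caller's stack frame, so the recursion runs in constant stack space exactly as a loop would.

```c
#include <stdio.h>

/* Tail-recursive counterpart of {Loop10 I}: the recursive call is the last
 * action in the procedure, so compilers such as gcc/clang at -O2 usually
 * turn it into a jump (last call / tail call optimisation), keeping the
 * stack size constant regardless of how far the count runs. */
static void loop10(int i)
{
    if (i == 10)
        return;              /* corresponds to the "then skip" branch */
    printf("%d\n", i);       /* corresponds to {Browse I}             */
    loop10(i + 1);           /* tail call: nothing left to do after   */
}

int main(void)
{
    loop10(0);
    return 0;
}
```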
  • Ubuntu Server Guide Basic Installation Preparing to Install
Ubuntu Server Guide
Welcome to the Ubuntu Server Guide! This site includes information on using Ubuntu Server for the latest LTS release, Ubuntu 20.04 LTS (Focal Fossa). For an offline version as well as versions for previous releases, see below.

Improving the Documentation
If you find any errors or have suggestions for improvements to pages, please use the link at the bottom of each topic titled: "Help improve this document in the forum." This link will take you to the Server Discourse forum for the specific page you are viewing. There you can share your comments or let us know about bugs with any page.

PDFs and Previous Releases
Below are links to the previous Ubuntu Server release server guides as well as an offline copy of the current version of this site:
Ubuntu 20.04 LTS (Focal Fossa): PDF
Ubuntu 18.04 LTS (Bionic Beaver): Web and PDF
Ubuntu 16.04 LTS (Xenial Xerus): Web and PDF

Support
There are a couple of different ways that the Ubuntu Server edition is supported: commercial support and community support. The main commercial support (and development funding) is available from Canonical, Ltd. They supply reasonably-priced support contracts on a per-desktop or per-server basis. For more information see the Ubuntu Advantage page. Community support is also provided by dedicated individuals and companies that wish to make Ubuntu the best distribution possible. Support is provided through multiple mailing lists, IRC channels, forums, blogs, wikis, etc. The large amount of information available can be overwhelming, but a good search engine query can usually provide an answer to your questions.
  • Computational Science and Engineering İSTANBUL TECHNICAL
İSTANBUL TECHNICAL UNIVERSITY, INFORMATICS INSTITUTE
A NEW PARALLEL PROGRAMMING LANGUAGE FORTRESS: FEATURES AND APPLICATIONS
M.Sc. Thesis by Erdem ÜNEY (702051007)
Department: Informatics Institute. Programme: Computational Science and Engineering.
Date of submission: 28 August 2009. Date of defense examination: 11 September 2009.
Supervisor (Chairman): Prof. Dr. H. Nüzhet DALFES (İTU). Members of the Examining Committee: Prof. Dr. Serdar ÇELEBİ (İTU), Prof. Dr. Hasan DAĞ (KHAS). SEPTEMBER 2009.

In the loving and constantly illuminating memory of my father, Tuncer Üney...

FOREWORD
I would like to express my deep appreciation and thanks to my advisor, Prof. Dalfes. Every student presents his regards to his advisor, but without the support of my advisor since the days of my undergraduate thesis, I would not have been able to seek academic appreciation and complete the process of graduating. I would also like to thank my good friends İlker Kopan, Sayat Baronyan and lovely Pelin Çallı for their support and motivation during the course of my thesis. Last but not least, I would like to thank my little brother and my mother, who is constantly pushing me to go forward.