PROUHD : RAID for the End-User

Total pages: 16

File type: PDF, size: 1020 KB

PROUHD: RAID for the end-user

Pierre Vignéras
[email protected]
April 14, 2010

Abstract

RAID has still not been adopted by most end-users despite its inherent qualities such as performance and reliability. Reasons such as the complexity of RAID technology (levels, hardware/software), set-up, or support may be given. We believe the main reason is that most end-users own a vast amount of heterogeneous storage devices (USB sticks, IDE/SATA/SCSI internal/external hard drives, SD/XD cards, SSDs, ...), and that RAID-based systems are mostly designed for homogeneous (in size and technology) hard disks. Therefore, there is currently no storage solution that manages heterogeneous storage devices efficiently. In this article, we propose such a solution and we call it PROUHD (Pool of RAID Over User Heterogeneous Devices). This solution supports heterogeneous (in size and technology) storage devices, maximizes the available storage space consumption, is tolerant to device failure up to a customizable degree, still makes automatic addition, removal and replacement of storage devices possible and remains performant in the face of the average end-user workflow. Although this article makes some references to Linux, the algorithms described are independent of the operating system and thus may be implemented on any of them.

Copyrights

This document is licensed under a Creative Commons Attribution-Share Alike 2.0 France License. Please see http://creativecommons.org/licenses/by-sa/2.0/ for details.

Disclaimer

The information contained in this document is for general information purposes only. The information is provided by Pierre Vignéras and while I endeavor to keep the information up to date and correct, I make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability or availability with respect to the document or the information, products, services, or related graphics contained in the document for any purpose. Any reliance you place on such information is therefore strictly at your own risk. In no event will I be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this document. Through this document you are able to link to other documents which are not under the control of Pierre Vignéras. I have no control over the nature, content and availability of those sites. The inclusion of any links does not necessarily imply a recommendation or endorse the views expressed within them.

Table of contents

1 Introduction
2 Algorithm
  2.1 Description
  2.2 Analysis
  2.3 Implementation (layout-disks)
  2.4 Performance
3 Partitioning drives
4 Handling Disk Failure
  4.1 Replacement Procedure
    4.1.1 Replacing a failed device with a same-size one
    4.1.2 Replacing a failed device with a larger one
    4.1.3 Replacing a failed drive with a smaller one
    4.1.4 RAID array reconstruction
5 Adding/removing a device to/from a PROUHD
6 Forecasting: Storage Box for Average End-Users
7 Alternatives
8 Questions, Comments & Suggestions
9 Note
10 Acknowledgment

Figure 1 – Stacking storage devices (same size, ideal RAID case).

1 Introduction

Whereas RAID [1] has been massively adopted by the industry, it is still not common on end-user desktops. The complexity of RAID systems might be one reason... among many others. Actually, in a state-of-the-art data center, the storage is designed according to some requirements (the "top-bottom" approach already discussed in a previous article [2]). Therefore, from a RAID perspective, the storage is usually composed of a pool of disks of the same size and characteristics, including spares [3]. The focus is often on performance. The global storage capacity is usually not a big deal.

The average end-user case is rather different, in that their global storage capacity is composed of various storage devices such as:
– hard drives (internal IDE, internal/external SATA, external USB, external Firewire);
– USB sticks;
– flash memory such as SD cards, XD cards, ...;
– SSDs.
On the contrary, performance is not a big deal for the end-user: most usage does not require very high throughput. Cost and capacity are the main factors, along with ease of use. By the way, the end-user does not usually have any spare devices.

We propose in this paper an algorithm for disk layout using (software) RAID that has the following characteristics:
– it supports heterogeneous storage devices (size and technology);
– it maximizes storage space;
– it is tolerant to device failure up to a certain degree that depends on the number of available devices and on the RAID level chosen;
– it still makes automatic addition, removal and replacement of storage devices possible under certain conditions;
– it remains performant in the face of the average end-user workflow.

2 Algorithm

2.1 Description

Conceptually, we first stack storage devices one over the other as shown in figure 1.

1. For an introduction to RAID technology, please refer to online articles such as: http://en.wikipedia.org/wiki/Standard_RAID_levels
2. http://www.vigneras.org/pierre/wp/2009/07/21/choosing-the-right-file-system-layout-under-linux/
3. By the way, since similar disks may fail at similar times, it may be better to create storage pools from disks of different models or even vendors.

Figure 2 – Stacking storage devices (different sizes = usual end-user case): hda: 1 Tb, hdb: 2 Tb, hdc: 1 Tb, hdd: 4 Tb.

On that example with n = 4 devices, each of capacity c = 1 Tb (terabyte), we end up with a global storage capacity of G = n × c = 4 Tb. From that global storage space, using RAID, you can get:
– a 4 Tb (n × c) virtual storage device (called PV, for Physical Volume [4], in the following) using RAID0 (level 0), but then you have no fault tolerance (if a physical device fails, the whole virtual device is lost);
– a 1 Tb (c) PV using RAID1; in that case, you have a fault tolerance degree of 3 (the PV remains valid in the face of 3 drive failures, and this is the maximum);
– a 3 Tb ((n − 1) × c) PV using RAID5; in that case, you have a fault tolerance degree of 1;
– a 2 Tb (M × c) PV using RAID10; in that case, the fault tolerance degree is also 1 [5] (M is the number of mirrored sets, 2 in our case).
These figures are reproduced by the short sketch below. The previous example hardly represents a real (end-user) case.
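As a quick check of these ideal-case figures, here is a minimal sketch (plain Python; the helper name and structure are purely illustrative and not part of the article):

```python
# Minimal sketch: usable capacity of a RAID array built from n devices of
# identical capacity c (values in Tb), for the levels discussed above.
def usable_capacity(n, c, level):
    if level == 0:       # RAID0 (striping): all space usable, no fault tolerance
        return n * c
    if level == 1:       # RAID1 (mirroring): one device's worth of space
        return c
    if level == 5:       # RAID5 (striping with parity): one device lost to parity
        return (n - 1) * c
    if level == 10:      # RAID10 (striped mirrors): M = n // 2 two-way mirrored sets
        return (n // 2) * c
    raise ValueError("unsupported RAID level")

for level in (0, 1, 5, 10):
    print(f"RAID{level}: {usable_capacity(4, 1, level)} Tb")
# -> RAID0: 4 Tb, RAID1: 1 Tb, RAID5: 3 Tb, RAID10: 2 Tb
```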
Figure 2 represents such a scenario, also with 4 disks (though the listed capacities do not represent common use cases, they ease mental capacity calculation for the algorithm description). In this case, we face n = 4 devices d, of respective capacities c_d: 1 Tb, 2 Tb, 1 Tb, and 4 Tb. Hence the global storage capacity is: G = Σ_d c_d = 1 + 2 + 1 + 4 = 8 Tb. Since a traditional RAID array requires devices of the same size, in that case the minimum device capacity is used: c_min = 1 Tb. Therefore, we can have:
– 4 Tb, using RAID0;
– 1 Tb, using RAID1;
– 3 Tb, using RAID5;
– 2 Tb, using RAID10.
Thus, exactly the same possibilities as in the previous example. The main difference, however, is the wasted storage space, defined as the storage space of each disk that is used neither for storage nor for fault tolerance [6]. In our example, the 1 Tb capacity of both devices hda and hdc is fortunately fully used. But only 1 Tb out of 2 Tb of device hdb, and 1 Tb out of 4 Tb of device hdd, is really used. Therefore in this case, the wasted storage space is given by the formula:

W = Σ_d (c_d − c_min) = (1 − 1) + (2 − 1) + (1 − 1) + (4 − 1) = 4 Tb

In this example, W = 4 Tb out of G = 8 Tb, i.e. 50% of the global storage space is actually unused (see the sketch at the end of this section). For an end-user, such an amount of wasted space is definitely an argument against using RAID, despite all the other advantages RAID provides (flexibility for adding/removing devices, fault tolerance and performance).

4. This comes from the LVM terminology, which is often used with RAID on Linux.
5. This is the worst case and the one that should be taken into account. Of course, disks hda and hdc may fail, for example, and the PV will remain available, but the best case is not the one that represents the fault tolerance degree.
6. Note that this is independent of the actual RAID level chosen: each byte in a RAID array is used, either for storage or for fault tolerance. In the example, using RAID1, we only get 1 Tb out of 8 Tb and it may look like a waste. But if RAID1 is chosen for such an array, it actually means that a fault tolerance degree of 3 is required. And such a fault tolerance degree has a storage cost!

Figure 3 – Illustration of the vertical RAID layout (devices hda, hdb, hdc, hdd; partitions p1, p2, p3; RAID arrays R1, R2; Physical Volumes PV(1), PV(2)).

The algorithm we propose is very simple indeed.
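As a side note, the wasted-space computation above can be reproduced with a minimal sketch (plain Python; the function and variable names are purely illustrative and not part of the article):

```python
# Minimal sketch of the wasted-space formula for a single traditional RAID
# array built over heterogeneous devices: each device only contributes c_min.
def wasted_space(capacities):
    c_min = min(capacities)
    return sum(c - c_min for c in capacities)

capacities = [1, 2, 1, 4]        # hda, hdb, hdc, hdd of figure 2, in Tb
G = sum(capacities)              # global capacity: 8 Tb
W = wasted_space(capacities)     # (1-1) + (2-1) + (1-1) + (4-1) = 4 Tb
print(f"W = {W} Tb out of G = {G} Tb ({100 * W // G}% unused)")
```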
Recommended publications
  • Disk Array Data Organizations and RAID
    Guest Lecture for 15-440: Disk Array Data Organizations and RAID (October 2010, Greg Ganger). Plan for today: why have multiple disks? (storage capacity, performance capacity, reliability); the load distribution problem and approaches (disk striping); fault tolerance (replication, parity-based protection); "RAID" and the disk array matrix; rebuild. Why multi-disk systems? A single storage device may not provide enough storage capacity, performance capacity, or reliability. So, what is the simplest arrangement? Just a bunch of disks (JBOD) [block-layout diagram]. Yes, it's a goofy name; industry really does sell "JBOD enclosures". Disk subsystem load balancing: I/O requests are almost never evenly distributed; some data is requested more than other data, depending on the apps, usage, time, ... What is the right data-to-disk assignment policy? Common approach: fixed data placement (your data is on disk X, period!), for good reasons too: you bought it or you're paying more ... Fancy: dynamic data placement; if some of your files are accessed a lot, the admin (or even system) may separate the "hot" files across multiple disks. In this scenario, entire file systems (or even files) are manually moved by the system admin to specific disks.
  • Architectures and Algorithms for On-Line Failure Recovery in Redundant Disk Arrays
    Architectures and Algorithms for On-Line Failure Recovery in Redundant Disk Arrays. Draft copy submitted to the Journal of Distributed and Parallel Databases; a revised copy is published in this journal, vol. 2, no. 3, July 1994. Mark Holland, Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213-3890, (412) 268-5237, [email protected]; Garth A. Gibson, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213-3890, (412) 268-5890, [email protected]; Daniel P. Siewiorek, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213-3890, (412) 268-2570, [email protected]. Abstract: The performance of traditional RAID Level 5 arrays is, for many applications, unacceptably poor while one of its constituent disks is non-functional. This paper describes and evaluates mechanisms by which this disk array failure-recovery performance can be improved. The two key issues addressed are the data layout, the mapping by which data and parity blocks are assigned to physical disk blocks in an array, and the reconstruction algorithm, which is the technique used to recover data that is lost when a component disk fails. The data layout techniques this paper investigates are variations on the declustered parity organization, a derivative of RAID Level 5 that allows a system to trade some of its data capacity for improved failure-recovery performance. Parity declustering improves the failure-mode performance of an array significantly, and a parity-declustered architecture is preferable to an equivalent-size multiple-group RAID Level 5 organization in environments where failure-recovery performance is important.
  • Memory Systems: Cache, DRAM, Disk
    CHAPTER 24: Storage Subsystems. Up to this point, the discussions in Part III of this book have been on the disk drive as an individual storage device and how it is directly connected to a host system. This direct attach storage (DAS) paradigm dates back to the early days of mainframe computing, when disk drives were located close to the CPU and cabled directly to the computer system via some control circuits. This simple model of disk drive usage and configuration remained unchanged through the introduction of, first, the mini computers and then the personal computers. Indeed, even today the majority of disk drives shipped in the industry are targeted for systems having such a configuration. However, this simplistic view of the relationship between the disk drive and the host system does not [...] with how multiple drives within a subsystem can be organized together, cooperatively, for better reliability and performance. This is discussed in Sections 24.1–24.3. A second aspect deals with how a storage subsystem is connected to its clients and accessed. Some form of networking is usually involved. This is discussed in Sections 24.4–24.6. A storage subsystem can be designed to have any organization and use any of the connection methods discussed in this chapter. Organization details are usually made transparent to user applications by the storage subsystem presenting one or more virtual disk images, which logically look like disk drives to the users. This is easy to do because logically a disk is no more than a drive ID and a logical address space associated with it.
  • I/O Workload Outsourcing for Boosting RAID Reconstruction Performance
    WorkOut: I/O Workload Outsourcing for Boosting RAID Reconstruction Performance. Suzhen Wu (1), Hong Jiang (2), Dan Feng (1)*, Lei Tian (1,2), Bo Mao (1). (1) Key Laboratory of Data Storage Systems, Ministry of Education of China; School of Computer Science & Technology, Huazhong University of Science & Technology. (2) Department of Computer Science & Engineering, University of Nebraska-Lincoln. *Corresponding author: [email protected]; {suzhen66, maobo.hust}@gmail.com, {jiang, tian}@cse.unl.edu, [email protected]. Abstract: User I/O intensity can significantly impact the performance of on-line RAID reconstruction due to contention for the shared disk bandwidth. Based on this observation, this paper proposes a novel scheme, called WorkOut (I/O Workload Outsourcing), to significantly boost RAID reconstruction performance. WorkOut effectively outsources all write requests and popular read requests originally targeted at the degraded RAID set to a surrogate RAID set during reconstruction. Our lightweight prototype implementation of WorkOut and extensive trace-driven and benchmark-driven experiments demonstrate that, compared with existing reconstruction approaches, WorkOut significantly speeds up both the total recon- [...] [...]ing reconstruction without serving any I/O requests from user applications, and on-line reconstruction, when the RAID continues to service user I/O requests during reconstruction. Off-line reconstruction has the advantage that it's faster than on-line reconstruction, but it is not practical in environments with high availability requirements, as the entire RAID set needs to be taken off-line during reconstruction. On the other hand, on-line reconstruction allows foreground traffic to continue during reconstruction, but takes longer to complete than off-line reconstruction as the reconstruction process competes with the foreground workload for I/O bandwidth.
  • Which RAID Level Is Right for Me?
    STORAGE SOLUTIONS WHITE PAPER: Which RAID Level is Right for Me? Contents: Introduction; RAID Level Descriptions; RAID 0 (Striping); RAID 1 (Mirroring); RAID 1E (Striped Mirror); RAID 5 (Striping with parity); RAID 5EE (Hot Space); RAID 6 (Striping with dual parity); RAID 10 (Striped RAID 1 sets); RAID 50 (Striped RAID 5 sets); RAID 60 (Striped RAID 6 sets); RAID Level Comparison; About Adaptec RAID. Data is the most valuable asset of any business today. Lost data means lost business. Even if you backup regularly, you need a fail-safe way to ensure that your data is protected and can be [...] of users. This white paper intends to give an overview on the performance and availability of various RAID levels in general and may not be accurate in all user [...]
  • Software-RAID-HOWTO.Pdf
    Software-RAID-HOWTO. The Software-RAID HOWTO, by Jakob Østergaard ([email protected]) and Emilio Bueso ([email protected]). Table of Contents: 1. Introduction; 2. Why RAID?; 3. Devices; 4. Hardware issues; 5. RAID setup; 6. Detecting, querying and testing; 7. Tweaking, tuning and troubleshooting; 8. Reconstruction; 9. Performance; [...]
  • A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage
    Brigham Young University BYU ScholarsArchive Theses and Dissertations 2016-12-01 A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage Christopher Glenn Hansen Brigham Young University Follow this and additional works at: https://scholarsarchive.byu.edu/etd Part of the Electrical and Computer Engineering Commons BYU ScholarsArchive Citation Hansen, Christopher Glenn, "A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage" (2016). Theses and Dissertations. 6470. https://scholarsarchive.byu.edu/etd/6470 This Thesis is brought to you for free and open access by BYU ScholarsArchive. It has been accepted for inclusion in Theses and Dissertations by an authorized administrator of BYU ScholarsArchive. For more information, please contact [email protected], [email protected]. A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage Christopher Glenn Hansen A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Science James Archibald, Chair Doran Wilde Michael Wirthlin Department of Electrical and Computer Engineering Brigham Young University Copyright © 2016 Christopher Glenn Hansen All Rights Reserved ABSTRACT A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage Christopher Glenn Hansen Department of Electrical and Computer Engineering, BYU Master of Science The constant evolution of new varieties of computing systems - cloud computing, mobile devices, and Internet of Things, to name a few - have necessitated a growing need for highly reliable, available, secure, and high-performing storage systems. While CPU performance has typically scaled with Moore's Law, data storage is much less consistent in how quickly performance increases over time.
  • By Michail D. Flouris a Thesis Submitted in Conformity with The
    EXTENSIBLE NETWORKED-STORAGE VIRTUALIZATION WITH METADATA MANAGEMENT AT THE BLOCK LEVEL, by Michail D. Flouris. A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy, Graduate Department of Computer Science, University of Toronto. Copyright © 2009 by Michail D. Flouris. Abstract: Extensible Networked-Storage Virtualization with Metadata Management at the Block Level, Michail D. Flouris, Doctor of Philosophy, Graduate Department of Computer Science, University of Toronto, 2009. Increased scaling costs and lack of desired features is leading to the evolution of high-performance storage systems from centralized architectures and specialized hardware to decentralized, commodity storage clusters. Existing systems try to address storage cost and management issues at the filesystem level. Besides dictating the use of a specific filesystem, however, this approach leads to increased complexity and load imbalance towards the file-server side, which in turn increase costs to scale. In this thesis, we examine these problems at the block-level. This approach has several advantages, such as transparency, cost-efficiency, better resource utilization, simplicity and easier management. First of all, we explore the mechanisms, the merits, and the overheads associated with advanced metadata-intensive functionality at the block level, by providing versioning at the block level. We find that block-level versioning has low overhead and offers transparency and simplicity advantages over filesystem-based approaches. Secondly, we study the problem of providing extensibility required by diverse and changing application needs that may use a single storage system. We provide support for (i) adding desired functions as block-level extensions, and (ii) flexibly combining them to create modular I/O hierarchies.
  • On-Line Data Reconstruction in Redundant Disk Arrays
    On-Line Data Reconstruction In Redundant Disk Arrays. A dissertation submitted to the Department of Electrical and Computer Engineering, Carnegie Mellon University, in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Copyright © 1994 by Mark Calvin Holland. Abstract: There exists a wide variety of applications in which data availability must be continuous, that is, where the system is never taken off-line and any interruption in the accessibility of stored data causes significant disruption in the service provided by the application. Examples include on-line transaction processing systems such as airline reservation systems and automated teller networks in banking systems. In addition, there exist many applications for which a high degree of data availability is important, but continuous operation is not required. An example is a research and development environment, where access to a centrally-stored CAD system is often necessary to make progress on a design project. These applications and many others mandate both high performance and high availability from their storage subsystems. Redundant disk arrays are systems in which a high level of I/O performance is obtained by grouping together a large number of small disks, rather than building one large, expensive drive. The high component count of such systems leads to unacceptably high rates of data loss due to component failure, and so they typically incorporate redundancy to achieve fault tolerance. This redundancy takes one of two forms: replication or encoding. In replication, the system maintains one or more duplicate copies of all data. In the encoding approach, the system maintains an error-correcting code (ECC) computed over the data.
  • Scalability of RAID Systems
    Scalability of RAID Systems. Yan Li. Doctor of Philosophy, Institute of Computing Systems Architecture, School of Informatics, University of Edinburgh, 2010. Abstract: RAID systems (Redundant Arrays of Inexpensive Disks) have dominated back-end storage systems for more than two decades and have grown continuously in size and complexity. Currently they face unprecedented challenges from data intensive applications such as image processing, transaction processing and data warehousing. As the size of RAID systems increases, designers are faced with both performance and reliability challenges. These challenges include limited back-end network bandwidth, physical interconnect failures, correlated disk failures and long disk reconstruction time. This thesis studies the scalability of RAID systems in terms of both performance and reliability through simulation, using a discrete event driven simulator for RAID systems (SIMRAID) developed as part of this project. SIMRAID incorporates two benchmark workload generators, based on the SPC-1 and Iometer benchmark specifications. Each component of SIMRAID is highly parameterised, enabling it to explore a large design space. To improve the simulation speed, SIMRAID develops a set of abstraction techniques to extract the behaviour of the interconnection protocol without losing accuracy. Finally, to meet the technology trend toward heterogeneous storage architectures, SIMRAID develops a framework that allows easy modelling of different types of device and interconnection technique. Simulation experiments were first carried out on performance aspects of scalability. They were designed to answer two questions: (1) given a number of disks, which factors affect back-end network bandwidth requirements; (2) given an interconnection network, how many disks can be connected to the system.
  • The Vinum Volume Manager
    The vinum Volume Manager. Greg Lehey. Table of Contents: 1. Synopsis; 2. Access Bottlenecks; 3. Data Integrity; 4. vinum Objects; 5. Some Examples; 6. Object Naming; 7. Configuring vinum; 8. Using vinum for the Root File System. 1. Synopsis: No matter the type of disks, there are always potential problems. The disks can be too small, too slow, or too unreliable to meet the system's requirements. While disks are getting bigger, so are data storage requirements. Often a file system is needed that is bigger than a disk's capacity. Various solutions to these problems have been proposed and implemented. One method is through
  • The Vinum Volume Manager, Greg Lehey, LEMIS (SA) Pty Ltd, PO Box 460, Echunga SA 5153
    The Vinum Volume Manager. Greg Lehey, LEMIS (SA) Pty Ltd, PO Box 460, Echunga SA 5153. [email protected] [email protected] [email protected] ABSTRACT: The Vinum Volume Manager is a device driver which implements virtual disk drives. It isolates disk hardware from the device interface and maps data in ways which result in an increase in flexibility, performance and reliability compared to the traditional slice view of disk storage. Vinum implements the RAID-0, RAID-1, RAID-4 and RAID-5 models, both individually and in combination. Vinum is an open source volume manager which runs under FreeBSD and NetBSD. It was inspired by the VERITAS® volume manager and implements many of the concepts of VERITAS®. Its key features are: • Vinum implements many RAID levels: RAID-0 (striping); RAID-1 (mirroring); RAID-4 (fixed parity); RAID-5 (block-interleaved parity); RAID-10 (mirroring and striping), a combination of RAID-0 and RAID-1. In addition, other combinations are possible for which no formal RAID level definition exists. • Volume managers initially emphasized reliability and performance rather than ease of use. The results are frequently down time due to misconfiguration, with consequent reluctance on the part of operational personnel to attempt to use the more unusual features of the product. Vinum attempts to provide an easier-to-use non-GUI interface. In place of conventional disk partitions, Vinum presents synthetic disks called volumes to the user. These volumes are the top level of a hierarchy of objects used to construct volumes with different characteristics: • The top level is the virtual disk or volume. Volumes effectively replace disk drives.