On the Use of Specifications of Binary File Formats for Analysis and Processing of Binary Scientific Data


Alexei Hmelnov (1,2) [0000-0002-0125-1130] and Tianjiao Li (3) [0000-0002-1037-2452]

(1) Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences, 134 Lermontov St., Irkutsk, Russia. [email protected]
(2) Institute of Mathematics, Economics and Informatics, Irkutsk State University, Gagarin Blvd. 20, Irkutsk, Russia
(3) ITMO University, 49 Kronverksky Pr., St. Petersburg, Russia. [email protected]

Abstract. The data collected during various kinds of scientific research may be represented both by well-known binary file formats and by custom formats developed specially for some unique device. While a thorough understanding of the file format may be required in the former case of a well-known format, in the latter case of custom formats it is of critical importance: usually only a few people know how a custom format is organized, and this expertise can easily be lost.

We have developed the language FlexT (Flexible Types) for the specification of binary data formats. The language is declarative and designed to be easily understood by human readers. Its main elements are data type declarations, which look very much like the usual type declarations of imperative programming languages, but are more flexible. The primary purpose of FlexT is to make it possible to view binary data and to check both that the data conform to the specification and that the specification conforms to the available data samples. As a result of these tests we can be sure that the specification is correct. FlexT specifications contain no surplus information beyond the description of the file format itself; they are compact and human readable. We have also developed an algorithm for generating data-reading code from a specification.

In this report we consider some details of the FlexT language and the experience of applying it to the specification of several scientific data formats.

Keywords: Specifications of binary data formats · Declarative language · Scientific data life-cycle.

* The work was carried out with the financial support of the Russian Foundation for Basic Research, grant No. 18-07-00758-a. Some results were obtained using the facilities of the Centre of collective usage "Integrated information network of Irkutsk scientific educational complex".

1 Introduction

In the field of natural sciences the outcomes of research are usually represented by articles that summarize the main results of the analysis of the collected data. But the results of the analysis also depend on the methods chosen, their parameters, and other subjective factors, so other researchers are interested in access to the data (both the raw originals and the processed results) to be able to check the results, try their own ways of analyzing the data, or compare the data with their own. The concept of the scientific data life-cycle should meet this demand. So it is no longer enough to just obtain the data, process them, and write articles using the results of the processing; it is also required to share the data with other researchers. These researchers may be not only our contemporaries but also our descendants living a few decades from now. Our descendants, we hope, will have more advanced devices, computers, and software.
But they still will not be able to measure the values of physical quantities for past periods (say, several years ago), so the data that we have stored for them may become very valuable. Let us consider the possible ways of representing the data, compare their features, and describe some approaches that should simplify data sharing.

2 Options for choosing the format of scientific data

When deciding on a data representation it is necessary to choose between several groups of options. We may consider the groups as coordinates in the space of possible file formats. Let us consider these dimensions and their possible values.

The first classification is by the level of customization:

- well-known file formats with ready-to-use data processing libraries and software (JPEG, TIFF, DBF, CSV, SHP, ...);
- data exchange frameworks with libraries and software for data format setup, data processing, and code generation;
- "universal" formats with libraries and software for data format setup, data processing, and code generation;
- custom file formats.

So it is required to choose between the well-known file formats, which may already have ready-to-use processing libraries; the data exchange frameworks and "universal" formats, which require some additional information about the particular data structures but provide many ready-to-use services for handling them; and the custom file formats, which require writing all the software from scratch. Since the existing file formats may impose unacceptable limitations on the data structures or carry too large a memory overhead, the choice here is far from obvious. The "universal" file formats are addressed in more detail in the next sections.

The second classification is by file format usage: there are internal and exchange file formats. Internal file formats are designed to maximize the efficiency and ease of processing by a specific piece of software. They may contain auxiliary data structures (e.g. indices) and be very complex. Internal file formats are usually binary, unless the files are very small. Exchange file formats should be rather simple and easily understandable to other programmers, but they may be not very well suited for everyday work (e.g. data editing).

The next option to consider when selecting a data representation is whether to use a text-based or a binary representation. Binary data formats are much more space- and time-efficient than text-based ones. The main disadvantage of binary data is that they look opaque to users, and it is hard to inspect their contents with the naked eye. That is why programmers nevertheless often prefer text formats, among which the XML-based ones are especially popular, despite the fact that extremely large text files also become impossible for a human being to comprehend. The language FlexT can make binary data transparent.

3 The "universal" formats

The "universal" formats allow you to store a wide (but still limited) range of information. They are always accompanied by software libraries and utilities for handling files of the format. Some examples:

- XML (eXtensible Markup Language) is a family of text-based formats; XML documents may be constrained by XML schemata, and various utilities, editors, and library/code-generation tools exist to simplify their processing. Several file format projects have claimed to be "the binary version of XML", but no proposal has been accepted as a binary XML standard.
- Hierarchical Data Format (HDF) [1] is a set of file formats (HDF4, HDF5) designed to store and organize large amounts of data. Originally developed at the National Center for Supercomputing Applications, the formats are now supported by the HDF Group. Libraries exist for C, C++, Java, MATLAB, Scilab, Octave, Mathematica, IDL, Python, R, Fortran, and Julia. The main goal of its development was to create a portable scientific data format. HDF files are self-describing, allowing an application to interpret the structure and contents of a file without an external specification. One HDF4 file can hold a mix of related objects, which can be accessed as a group or as individual objects; users can create their own grouping structures called "vgroups". HDF5 simplifies the file structure to include only two major types of object (see the sketch after this list):
  • datasets, which are multidimensional arrays of a homogeneous type;
  • groups, which are container structures that can hold datasets and other groups.
- Protocol Buffers (by Google) [2] is a method of serializing structured data. It involves an interface description language that describes the structure of some data, and a program that generates, from that description, source code for writing or reading a stream of bytes that represents the structured data. In contrast to HDF, the resulting files are not self-contained: reading a file requires the structure specification (or, better, the code generated from it) to which the file corresponds.
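To make the HDF5 model of groups and datasets concrete, here is a minimal sketch using the h5py Python library; the file name, group layout, and contents are invented for illustration, not taken from the formats discussed in this paper.

```python
import numpy as np
import h5py

# Write a self-describing HDF5 file: groups act as containers,
# datasets are typed multidimensional arrays.
with h5py.File("measurements.h5", "w") as f:
    run = f.create_group("run_001")                          # container structure
    temps = run.create_dataset("temperature",
                               data=np.random.rand(100, 3))  # homogeneous array
    temps.attrs["units"] = "K"                               # metadata travels with the data

# A reader can later discover the structure without any external specification.
with h5py.File("measurements.h5", "r") as f:
    f.visit(print)                                  # prints: run_001, run_001/temperature
    print(f["run_001/temperature"].attrs["units"])  # -> K
```

The second half of the sketch is exactly the self-describing property mentioned above: the file itself carries enough structure for a generic tool to walk it, which Protocol Buffers files, by design, do not.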
"Universal" data formats are often used to represent scientific data. Not to mention XML, which is used everywhere: HDF files, for example, are used to represent remote sensing data, while Protocol Buffers are the main way to represent the weights of deep neural networks. The main reason for this is the availability of tools and libraries that simplify the development and processing of "universal" formats, at the cost of following their prescriptions. Our approach makes it possible to do nearly the same with an almost arbitrary binary file format.

4 Binary file format specifications in FlexT

The main goal of the language FlexT is to provide an instrument that can help us explore and understand the contents of binary files for which a specification of their format exists. It can also help to check whether a format specification is correct, using samples of data in this format. FlexT specifications can be used to check, view, process, and document both well-known and custom file formats. Our experience shows that the vast majority of format specifications written in natural language contain errors and ambiguities, which can be detected and fixed by applying various versions of the specification to the sample data in order to find the correct interpretation of the format description. The information about a file format may also be obtained from the source code of a program that reads or writes the data.
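To illustrate what "data-reading code generated from a specification" amounts to, below is a minimal hand-written Python sketch for a hypothetical custom format (a 4-byte magic number, a record count, then fixed-size records). It is the kind of reader that a FlexT type declaration could describe declaratively; it is not actual FlexT output, and the "SDAT" format is invented for the example.

```python
import struct

# Hypothetical custom format, the kind a FlexT declaration would describe:
#   header: magic b"SDAT" (4 bytes), uint32 record count (little-endian)
#   record: float64 timestamp, float32 value (12 bytes each)
HEADER = struct.Struct("<4sI")
RECORD = struct.Struct("<df")

def read_samples(path):
    """Read all (timestamp, value) records from an SDAT file."""
    with open(path, "rb") as f:
        magic, count = HEADER.unpack(f.read(HEADER.size))
        if magic != b"SDAT":          # conformance check, as a FlexT-driven viewer would do
            raise ValueError("not an SDAT file")
        return [RECORD.unpack(f.read(RECORD.size)) for _ in range(count)]
```

The point of the declarative approach is that the layout encoded in the constants and checks above would instead be stated once, as type declarations, and both the data viewer and reading code of this shape would be derived from that single specification.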