WebGL Models: End-to-End (Chapter 30)

OpenGL Insights, edited by Patrick Cozzi and Christophe Riccio
WebGL Models: End-to-End
Won Chun

30.1 Introduction

When we were making Google Body in 2010 (see Figure 30.1), WebGL was new technology. It wasn't entirely clear what we'd be able to accomplish; indeed, the uncertainty was one of the reasons for building it in the first place. Adopting a new technology like WebGL was a leap of faith. Even if the idea made perfect sense to us, so many things outside of our control needed to go right. We eventually open sourced Google Body,¹ and unequivocally demonstrated that it is possible to build a rich, 3D application using WebGL. Google Body made the use of WebGL less an act of faith, and more an act of execution.

Figure 30.1. Google Body, a 3D anatomy browser for the web.

¹ The content is now available at zygotebody.com.

When we started, the simple act of loading a 3D model was its own little leap of faith. The source 3D model data was incredibly detailed; we ended up shipping a reduced version with 1.4 million triangles and 740 thousand vertices for 1,800 anatomical entities. Our first prototype simply loaded everything using COLLADA, but that took almost a minute to load from a local web server. That's a really long time to wait for a web page. COLLADA was inefficient in its use of bandwidth and computation, but it wasn't immediately apparent how we could do better. There is no DOM node or MIME type for 3D models. WebGL added TypedArray support to JavaScript, but XMLHttpRequest wouldn't get it for months. Fortunately, we devised a way to load Google Body's 1.4 million triangles much more efficiently than we expected, using a surprising combination of standard web technologies. I've shared WebGL Loader, http://code.google.com/p/webgl-loader/, as an open-source project so that other WebGL projects can benefit.

This chapter describes how WebGL Loader works. WebGL Loader can compress a 6.3 MB OBJ file to under 0.5 MB, better than a 13:1 compression ratio. GZIP alone shrinks the same file to about 1.8 MB and yields a model file that is slower to parse and render. Taken apart, the techniques and concepts are actually quite simple to understand. I also hope to give insight into the process of how we went about making those choices, so that we can be equipped with more than just faith the next time we are spurred to delve into the depths of technical uncertainty.

30.2 Life of a 3D Model

Loading 3D models was the apparent challenge, but it is always important to keep the big picture in mind. When I joined Google, as part of orientation, I attended internal lectures about how the key internal systems at Google work. They all had titles like "Life of a [blank]." For example, "Life of a Query" walks through, step by step, what happens when a user performs a search on google.com. This end-to-end perspective is the key to building computer systems because it motivates what we build, even when we focus on a single component. I spent two years optimizing a tiny corner of web search by a few percent, but I knew the effort was worthwhile because it was in the critical path for every web search by all of our users. If "Life of a Query" teaches us anything, it is that there are lots of queries.

It doesn't make sense to think about loading 3D models outside the context of all the other steps a 3D model takes through an application. Each step informs the design. For Google Body, we thought of the model as it related to four key stages.
They are general enough that they should apply to most other WebGL projects as well:

• Pipeline. Where does the 3D model data originate?
• Serving. How does the 3D model data arrive in users' browsers?
• Encoding. What format should the 3D model data use?
• Rendering. How does the 3D model data get to the screen?

30.2.1 Stage 1: Pipeline

Unless the application is a demo where 3D models are mere stand-ins to showcase some new technology, it is important to understand their creation process, called the art pipeline. The art pipeline can include many creative steps, like concept drawing or 3D modeling, but the key technological steps to understand are the tools that get models from the care of the artists to the application.

Art creation tools like Maya, Blender, Photoshop, or GIMP have their own file formats, but applications usually don't use them directly. The flexibility that is a necessity during creation ends up as baggage after the art is created. Contrast a Photoshop PSD file with a plain old JPG image. The PSD file can contain lots of extra sophistication like layers or color transforms that are essential for lossless, nondestructive editing, but that will never get used by something that simply needs to present the final image.

A similar thing happens with 3D models. For example, 3D modeling software keeps track of positions, normals, texture coordinates, or other vertex attributes in separate lists. A vertex is represented as a tuple of indices, one for each vertex attribute. This is an excellent choice for creation and editing; if an artist changes an attribute like a position or texture coordinate, all the vertices that share that attribute are implicitly updated. GPUs don't support these multi-indexed vertex lists; they must first be converted to a single-indexed vertex list. WebGL additionally enforces a limit of 65,536 unique vertices in a single drawElements call for maximum portability. Larger meshes must be split into batches.

Somewhere along the line, the model needs to be converted from a multipurpose, creation-appropriate form to a streamlined, render-appropriate form. The best time and place to do this is in the art pipeline. From the perspective of the application, everything that happens in the art pipeline appears to happen magically ahead of time. Since more time can be spent in the art pipeline than during loading, it is a great opportunity to offload work from the stages ahead. This is where to optimize the mesh for rendering and compress it for serving.
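The multi-index to single-index conversion described above is exactly the kind of work that belongs in the art pipeline. As a rough illustration (not the actual WebGL Loader code), the following TypeScript sketch dedupes each (position, texcoord, normal) index tuple into an interleaved, single-indexed vertex array and starts a new batch before the 65,536 unique-vertex limit would be exceeded; the mesh layout, field names, and interleaving order are assumptions made for this example.

```typescript
// Hypothetical input: OBJ-style faces, where each face corner is a tuple of
// indices (position, texcoord, normal) into separate attribute arrays.
interface MultiIndexedMesh {
  positions: number[];                      // 3 floats per position
  texcoords: number[];                      // 2 floats per texcoord
  normals: number[];                        // 3 floats per normal
  faceCorners: [number, number, number][];  // [posIdx, uvIdx, normIdx] per corner
}

interface SingleIndexedBatch {
  interleaved: Float32Array; // 8 floats per vertex: position, texcoord, normal
  indices: Uint16Array;      // drawElements-ready, at most 65,536 unique vertices
}

const MAX_VERTICES = 65536; // WebGL limit for 16-bit drawElements indices

function toSingleIndexed(mesh: MultiIndexedMesh): SingleIndexedBatch[] {
  const batches: SingleIndexedBatch[] = [];
  let vertexKeyToIndex = new Map<string, number>();
  let vertices: number[] = [];
  let indices: number[] = [];

  const flush = () => {
    if (indices.length > 0) {
      batches.push({
        interleaved: new Float32Array(vertices),
        indices: new Uint16Array(indices),
      });
    }
    vertexKeyToIndex = new Map();
    vertices = [];
    indices = [];
  };

  // Walk triangles so a batch never splits mid-triangle.
  for (let tri = 0; tri < mesh.faceCorners.length; tri += 3) {
    // Start a new batch if this triangle could overflow the vertex limit
    // (conservatively assumes all three corners are new vertices).
    if (vertexKeyToIndex.size + 3 > MAX_VERTICES) flush();

    for (let corner = 0; corner < 3; corner++) {
      const [p, t, n] = mesh.faceCorners[tri + corner];
      const key = `${p}/${t}/${n}`;
      let index = vertexKeyToIndex.get(key);
      if (index === undefined) {
        // First time this (position, texcoord, normal) tuple appears:
        // emit a new interleaved vertex and remember its index.
        index = vertexKeyToIndex.size;
        vertexKeyToIndex.set(key, index);
        vertices.push(
          mesh.positions[3 * p], mesh.positions[3 * p + 1], mesh.positions[3 * p + 2],
          mesh.texcoords[2 * t], mesh.texcoords[2 * t + 1],
          mesh.normals[3 * n], mesh.normals[3 * n + 1], mesh.normals[3 * n + 2],
        );
      }
      indices.push(index);
    }
  }
  flush();
  return batches;
}
```

Each resulting batch maps onto one interleaved vertex buffer plus one 16-bit index buffer, that is, a single drawElements call at render time.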
30.2.2 Stage 2: Serving

We don't have much control over what happens between the server and the client because it is all governed by Internet standards; all we can do is understand how things work and how to best use them. I learned about high-performance web serving during my first project at Google, optimizing infrastructure behind iGoogle.² As it turns out, serving 3D models over HTTP is not that different from serving other kinds of static content, such as stylesheets or images. Almost everything in this section applies equally to images and to meshes, since it is all about how to send bits to the client. Perhaps more accurately, this section is about how not to send bits to the client, using that reliable system-design workhorse: caching. The bits we never send are the fastest and cheapest of all.

² iGoogle is the customizable homepage for Google: http://www.google.com/ig.

HTTP caching fundamentals

Before a browser makes an HTTP GET request to download the data for a URL, it first checks the cache to see if the data corresponding to that URL are already present. If so, the browser potentially has a cache hit. The problem with simply using these data right away is that they might be stale; the data in the cache might not reflect what is actually being served anymore. This is the trouble with caches: they seem like a cheap and simple way to improve performance, but then we have to deal with the extra complexity of validation.

There are several ways a browser validates its cached data. The basic approaches use a conditional GET. The browser remembers when it originally fetched the data, so it sends an If-Modified-Since header in the GET request, and the server responds with a new version only if necessary. Another approach is to use ETags, which can act as a content fingerprint so that the server can detect when a URL has changed since it was last requested. These approaches work as we would expect, but both still require a round-trip communication with the server. These requests, while usually cheap, still consume connections, a limited resource on both the client and the server, and can have unpredictable latencies depending on the state of the network.

It is better to avoid this parasitic latency entirely with a simple tweak on the server. Instead of forcing the browser to ask, "Is this stale?" for each request, the server can proactively tell the browser how long the data will remain fresh by using an Expires header. With an explicit expiration, the browser can use locally cached data immediately. A long Expires header (the longest supported is just under a year) improves caching performance, but makes updates less visible. My favorite approach to deal with this is to simply never update the data for a given URL, and instead use a unique URL for each version of the data. One particularly simple way is to embed a content fingerprint in the URL, so there is no extra state to maintain. Fingerprinting works best if you control the references to your fingerprinted assets; otherwise, clients can end up with stale references.
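To make the fingerprinting approach concrete, a build step can hash each model file and bake the digest into its URL, while the server attaches a far-future Expires (or Cache-Control) header to anything served under that URL. The Node.js-flavored TypeScript below is only a sketch; the URL scheme, file suffix, and helper names are assumptions for illustration, not part of WebGL Loader.

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Hypothetical build step: fingerprint a compressed model file so it can be
// referenced by a URL whose content never changes, e.g. /models/body.a1b2c3d4.bin.
function fingerprintedUrl(filePath: string, baseName: string): string {
  const bytes = readFileSync(filePath);
  const digest = createHash("sha1").update(bytes).digest("hex").slice(0, 8);
  return `/models/${baseName}.${digest}.bin`;
}

// Hypothetical server side: because the URL embeds the content fingerprint,
// it is safe to mark the response as fresh for as long as HTTP allows
// (just under a year), so browsers skip even the conditional GET.
function cacheHeaders(): Record<string, string> {
  const oneYearSeconds = 364 * 24 * 60 * 60;
  return {
    "Cache-Control": `public, max-age=${oneYearSeconds}`,
    "Expires": new Date(Date.now() + oneYearSeconds * 1000).toUTCString(),
  };
}
```

When the model changes, its fingerprint, and with it the URL, changes too, so only the page or script that references the model ever needs to be revalidated.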