Proceedings, ITC/USA
Group Telemetry Analysis Using the World Wide Web

Jeffrey R. Kalibjian
Lawrence Livermore National Laboratory

Publisher: International Foundation for Telemetering
Journal: International Telemetering Conference Proceedings
Rights: Copyright © International Foundation for Telemetering
Link to Item: http://hdl.handle.net/10150/607611

Keywords: secure data sharing, world wide web, hypertext transfer protocol, dual asymmetric key cryptography

Abstract

Today it is not uncommon to have large contractor teams involved in the design and deployment of even small satellite systems. The larger (and more geographically dispersed) the team, the more difficult it becomes to efficiently manage the disbursement of telemetry data for evaluation and analysis. Further complications are introduced if some of the telemetry data is sensitive. An application is described which can facilitate telemetry data sharing utilizing the National Information Infrastructure (Internet).

Introduction

The World Wide Web (WWW) has transformed the Internet from a research aid into a multi-media display case. The WWW is based on the Hypertext Transfer Protocol (HTTP), an object-oriented stateless protocol which can be used to build distributed systems in which data representation can be negotiated. Secure HTTP (S-HTTP) is a security-enhanced version of HTTP that supports application-level cryptographic enhancements (e.g., public key cryptography). Although more commonly used for multi-media applications, the World Wide Web offers great promise as a "group-ware" or general data sharing mechanism. Secure HTTP has made it possible to design and implement Web-based applications which can accomplish secure data sharing among a group of business or research partners.
After reviewing HTTP, Internet security, and the World Wide Web, this paper discusses the design and implementation of a secure Web-based data sharing tool. Some design trade-offs impacting data analysis will also be explored.

HTTP

The Hypertext Transfer Protocol [1] is a very simple communication protocol. A client makes a TCP/IP (Transmission Control Protocol/Internet Protocol, the underlying communication protocol used on the Internet) connection to the server. The server accepts the connection, upon which the client makes a document request. The connection domain and the document requested are contained in a Uniform Resource Locator (URL). The server responds to the request, the client collects the response, and finally, the server terminates the connection to the client. The server then continues to listen for other requests. A key element of the interaction is that the server treats each subsequent request as brand new; that is, it maintains no state. This contrasts with other Internet protocols (e.g., File Transfer Protocol, FTP) which do maintain state. Thus, HTTP interaction amounts to a connection, request, response, and closure.

A request is basically an action (known more formally as a method) that can be applied to the entity (an object identified by a Uniform Resource Identifier, URI) requested. The request also identifies the protocol version in use. The more common methods include GET (retrieve the data identified by the URI) and POST (create a new object linked to the object specified). A response consists of a status line which contains the protocol version, a status code, and its associated text phrase. It is on this line that the server confirms the client's request to "speak" the requested protocol (HTTP). Optional header fields follow, including date and originating location.
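The request/response exchange described above can be sketched in a few lines. The following is an illustrative Python fragment (not part of the original tool; the host and path shown are hypothetical) that builds an HTTP/1.0 GET request and parses a server status line:

```python
# Sketch of the HTTP/1.0 exchange described above (hypothetical host/path).

def build_get_request(host: str, path: str) -> str:
    # A request names a method (GET), the URI, and the protocol version,
    # followed by optional headers and a blank line that ends the request.
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"

def parse_status_line(line: str):
    # The status line carries the protocol version, a numeric status
    # code, and its associated text phrase.
    version, code, phrase = line.split(" ", 2)
    return version, int(code), phrase

request = build_get_request("example.org", "/telemetry/index.html")
print(request.splitlines()[0])            # → GET /telemetry/index.html HTTP/1.0
print(parse_status_line("HTTP/1.0 200 OK"))
```

After the server's response is collected, the connection is closed; a subsequent request would open a brand-new connection, reflecting the statelessness noted above.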
The message section is next, in which the Multipurpose Internet Mail Extension (MIME) [2] content type of the returned data is indicated (typically text/html), as well as the number of characters in the message, followed by a blank line, and then the message itself. Typically, HTTP servers and clients return messages making use of the Hyper Text Markup Language (HTML). HTML [3] can be used to represent formatted text documents, tables, forms, in-line graphics, and hypertext (linked) information. It forms the basis for much of the information obtainable from the World Wide Web.

Because of the flexibility of the HTTP protocol, Web server and client capabilities are in a continuous state of evolution, delivering ever-increasing power. An example of this is JAVA [4]. JAVA is an object-oriented programming language developed by Sun Microsystems. Small programs written in JAVA can be embedded in Web pages, so that when the Web page is accessed, the program is downloaded to the client and executed. Such small programs are called applets. Key elements of the request/reply paradigm, then, are the concept of a negotiated protocol, as well as the utilization of MIME headers to specify message content types.

WWW servers may communicate information about objects they receive or manipulate via the Common Gateway Interface (CGI) specification [5]. The server and a CGI program communicate via command line arguments or environment variables. The CGI programs themselves can be written in many programming (e.g., C) or scripting (e.g., Perl) languages. The analog of such a capability on the client is known as a helper application. This is a stand-alone program that is activated by the client browser on detection of a specified MIME type. Data received by the client is forwarded to the helper application for processing. Since the helper application is a stand-alone program, it cannot use the Web browser window to report or display the results of its actions.
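The CGI mechanism described above can be illustrated with a minimal sketch. In the fragment below (a Python illustration, not the paper's implementation), the server hands the request to the program via environment variables (`REQUEST_METHOD` and `QUERY_STRING` are standard CGI variables), and the program replies with a MIME content-type header, a blank line, and the HTML message body:

```python
# Minimal CGI sketch: the server passes request data via environment
# variables; the program emits a MIME content-type line, a blank line,
# then the message body, mirroring the HTTP message format above.
import os

def cgi_response(environ) -> str:
    method = environ.get("REQUEST_METHOD", "GET")
    query = environ.get("QUERY_STRING", "")
    body = f"<html><body>Method: {method}, Query: {query}</body></html>"
    # Content-Type header, blank line, then the HTML message itself.
    return f"Content-Type: text/html\r\n\r\n{body}"

print(cgi_response(os.environ))
```

A helper application on the client side works in mirror image: the browser, on seeing a particular MIME type in the response header, launches the stand-alone program and hands it the message body.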
Instead, it must manage such capabilities on its own.

Security

When transporting sensitive information over public networks, one must generally have three capabilities present to ensure the information will not be compromised. First, there must be assurance that the information being transported can only be read by the intended recipient (privacy). The second notion is that of authentication: the recipient must be able to guarantee that the person he receives data from is truly "that person." Finally, there must be a guarantee that the contents of the message have not been altered in the message's travel from sender to recipient; that is, one must have confidence in the message's integrity.

Dual asymmetric key cryptography [6] can facilitate these security capabilities. Two keys are generated which have the unique feature that information encrypted with one key can only be decrypted using the other. (Encryption is the process of "disguising" clear text so it cannot be understood.) One key is kept private, the other public. If Person A wishes to send a private message to Person B, he may encrypt the message using Person B's public key. Person A is assured that only Person B's private key can decrypt the message. In order for Person B to be assured that the message he is receiving is from Person A, Person A may sign the encrypted message using his private key; Person B can be assured that only Person A's public key could decrypt the signature. Person B may be assured the message he received was not tampered with if Person A calculates a checksum of the message before encrypting it and passes the checksum along in the encrypted (and possibly signed) message; Person B can then calculate his own checksum on the decrypted (and possibly authenticated) file and compare it with the checksum sent in the message. At this point it should become clear that both private and public keys need to be protected.
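The integrity check just described can be sketched concretely. The fragment below is an illustration only: it uses a SHA-256 digest as the "checksum" (a modern stand-in; the encryption and signature steps from the paragraph above are omitted), with a hypothetical telemetry-frame message:

```python
# Sketch of the integrity check described above: the sender appends a
# digest of the message; the recipient recomputes it and compares.
import hashlib

def attach_checksum(message: bytes) -> bytes:
    # Sender side: compute a SHA-256 digest and append it to the message.
    digest = hashlib.sha256(message).hexdigest().encode()
    return message + b"|" + digest

def verify_checksum(packet: bytes) -> bool:
    # Recipient side: split off the digest, recompute, and compare.
    message, _, digest = packet.rpartition(b"|")
    return hashlib.sha256(message).hexdigest().encode() == digest

packet = attach_checksum(b"telemetry frame 0042")
print(verify_checksum(packet))                            # → True
print(verify_checksum(packet.replace(b"0042", b"0043")))  # → False
```

In the full scheme, the message-plus-checksum would be encrypted with the recipient's public key and signed with the sender's private key before transmission.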
The security rationale for the private key is obvious: one desires that only they alone may read their own private messages. This is usually implemented by password-protecting utilization of the private key on the host system on which it resides. Security is needed for the public key so one can guarantee that no other key may be substituted for their own. This is provided by having a certifying authority sign the public key (forming what is known as an X.509 [7] certificate). The signature indicates that the name associated with the public key (in the certificate) is indeed "that person." The certifying authority may require many forms of identification before signing the certificate (e.g., birth certificate, social security number, etc.).

SSL, S-HTTP

These cryptographic principles are utilized in two specifications which have given the World Wide Web security; namely, Secure Sockets Layer (SSL) [8] and the Secure Hyper Text Transfer Protocol (S-HTTP) [9]. SSL is designed to run under the protocol being used for application communication. Thus, while SSL is most commonly used to support secure communication under HTTP, it can also be used to effect secure communication making use of other Internet protocols like FTP.

In the SSL communication process, a client (as usual) contacts the server. The server responds by sending the client its public key certificate. The client validates the signature on the certificate (assuming it has access to the certifying authority's public key), generates a symmetric (session) encryption key (this type of key has the property of being able to both encrypt and decrypt clear text), and uses the server's public key to encrypt the symmetric key so it may be sent back to the server. To achieve client authentication, the client would send his certificate back to the server.
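The client side of this handshake can be sketched with Python's standard `ssl` module (a modern descendant of the SSL libraries of the paper's era; the certificate paths shown in the comment are hypothetical). The context below validates the server's certificate against the certifying-authority keys it trusts; the session-key generation and exchange happen inside the library during the handshake itself:

```python
# Client-side setup for the SSL exchange described above.
import ssl

context = ssl.create_default_context()
# Server authentication: require a CA-signed certificate whose name
# matches the host being contacted.
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
# Client authentication (optional in SSL): the client would also load
# its own certificate and private key, e.g.
#   context.load_cert_chain("client-cert.pem", "client-key.pem")  # hypothetical paths
# Wrapping a connected socket with context.wrap_socket(sock,
# server_hostname=...) then performs the certificate validation and
# session-key negotiation described in the text.
```

Once the handshake completes, both sides hold the same symmetric session key and all application data (HTTP, FTP, or otherwise) flows encrypted beneath the application protocol.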
While the SSL specification provides for both client and server authentication (as well as data privacy and integrity), in its first implementation, as used in Netscape products (and described above), client authentication was not implemented. The Secure HTTP effort by CommerceNet (a consortium of high technology companies attempting to bring about more rapid utilization of the Internet for commerce activities) implemented both client and server authentication. In the S-HTTP model, security is achieved at the application level; i.e., HTTP has been expanded to incorporate security.