
INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

UMI
Bell & Howell Information and Learning
300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA
800-521-0600

Design and Verification of Secure E-Commerce Protocols

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the

Graduate School of The Ohio State University

By

Srividhya Subramanian, B.Tech., M.S.

*****

The Ohio State University

1999

Dissertation Committee:

Prof. Mukesh Singhal, Adviser
Prof. P. Sadayappan
Prof. Paul Sivilotti

Approved by

Adviser
Department of Computer and Information Science

UMI Number: 9941441

UMI Microform 9941441 Copyright 1999, by UMI Company. All rights reserved.

This microform edition is protected against unauthorized copying under Title 17, United States Code.

UMI
300 North Zeeb Road
Ann Arbor, MI 48103

© Copyright by

Srividhya Subramanian

1999

ABSTRACT

Recent years have witnessed an explosion in the volume of trade over computer networks, also called e-commerce (or electronic commerce). E-commerce transactions can be general transactions (a two-party exchange of product for payment), auctions, or multi-party (stock/commodity market) transactions. These transactions may or may not involve real-time constraints. To successfully complete e-commerce transactions, the involved parties execute a sequence of steps, called a protocol. The design of good protocols requires addressing various issues that have been universally identified as required or desired for e-commerce protocols: security, atomicity, privacy, anonymity, and low overhead cost. Current state-of-the-art protocols for general transactions, such as Ecash, NetBill, etc., have been designed to specifically target only a subset of the above properties. Only preliminary research has been done in the areas of electronic auctions and multi-party transactions.

This dissertation is focused on the design and verification of e-commerce protocols for four categories of transactions: (a) general transactions, (b) auctions, (c) multi-party transactions, and (d) each of the above with real-time constraints. E-commerce protocols having all five of the above-mentioned properties are developed for each of these categories of transactions. Furthermore, the developed protocols have various other attractive characteristics. For example, a customer is not required

to store money in a special account, mediation by a third party is required only in some rare special cases, multiple rounds of automated bidding are allowed by the auction protocols, and the developed protocols are simple to implement. Also, a novel methodology is developed which allows incorporation of real-time constraints into any given e-commerce protocol with the guarantee that all existing properties will be preserved.

A formal logic is developed based on the semantics of the popular BAN logic, which allows the modeling of powerful intruders capable of both passive and active attacks. The logic is used to prove that each of the developed protocols has all five properties. Together, the developed protocols, the real-time methodology, and the associated proofs make a fundamental contribution to the process of realizing the promise of e-commerce.

Dedicated to Appa, Raghu, and Kram,

for setting their expectations high, and to Amma for her unbounded love

ACKNOWLEDGMENTS

I owe it to many people for making it this far in five short, fun-filled years. I can only mention a few of these wonderful people below. However, I will never forget the help and support I received from each one of them.

I thank Dr. Mukesh Singhal for his constant encouragement, help, and guidance through every step. I have worked part-time, worked full-time, and lived three time zones away. Dr. Singhal accommodated all my constraints at all times and helped me find ways to work around them. It has been most enjoyable working with him. I thank Dr. Sadayappan and Dr. Miller for their guidance and help through various stages of my graduate education, and Elley Quinlan for being a wonderful supervisor and friend. I thank Tom Fletcher and Ron Salyers for the many times they have made it possible for me to meet deadlines. I thank Dr. Sivilotti and Dr. Arora for their comments and suggestions that have helped refine this thesis.

I thank Dr. Shafer for his support when I was working at OCLC and Eric for his support through the last frenzied days before the final defense.

I thank my friends in Columbus who saw me through my difficult and happy times.

RC, Vijaya, Shankar, Barbara, and the Raos have all been like family - the kind who feed you, the kind you can call at midnight because you are depressed, the kind you take for granted. Many student stories are about horrible roommates. I thank Satya, Anitha, Sandhya, Ambika, Radha, Sonali, Suneeta, and Venky for being such easy people to live with, and for being such dear friends. I also thank Matt, Mohammad,

Debasis, Venkat, Sonia, Steve, Sandeep, Jayanthi, Sowmyaa, Manoj, and all my other friends and officemates for the many hours we spent together chatting, drinking coffee, and the few hours we spent working.

I thank all my friends from my undergrad years, with whom I spent some of my most memorable years: Diwa, Sudhir, Nikhil, Candy, Xi, Venky, Mridul, Soumyo,

Birad, Abhay, Srikala, Mita, Asha, Madhuri, Suman, Saryu, Eli, Bama, Raj, and all the other RECTians.

Gobbo, Shaji, Bobby, and Gopal deserve special mention for helping me through my first years in the United States, for teaching me many things about computers, and for being such wonderful people.

I thank my family, especially those in the US: Anu, Sundar, Chaya, Sekar, Jay, Kalyani Athai, and Siva Uncle, for the support they have given me. I thank my in-laws for accepting and supporting all my decisions.

The biggest hurdle I have faced in any endeavor is my own complacency. I owe my achievements to the men I have dedicated this thesis to - my father, brother, and husband - who have expected much more than I have demanded of myself. They not only encouraged and helped me; they praised my abilities, ridiculed my lethargy, teased me for my underachievements, and put up with my stubborn and proud self.

Thank you.

I have reserved my biggest thanks for Amma who taught me how to live, love, and be loved.

VITA

October 31, 1971 ...... Born - Buffalo, NY

1994 ...... B.Tech., Chemical Engineering, Regional Engineering College, Trichy, India.

1996 ...... M.S., Computer and Info. Science, The Ohio State University.

Fall 1994-Winter 1999 ...... Graduate Teaching Associate, The Ohio State University.

Spring 1997-Spring 1998 ...... Graduate Research Associate, Online Computer Library Center, Dublin.

PUBLICATIONS

Research Publications

S. Subramanian and M. Singhal. "Real-Time Aware Protocols for General E-Commerce and Electronic Auction Transactions". Proceedings of ICDCS Workshop, June 1999.

S. Subramanian and M. Singhal. "A Real-Time Protocol for Stock Market Transactions". Proceedings of the International Workshop of Advanced Issues of Electronic Commerce and Web-based Information Systems, April 1999.

S. Subramanian. "Design and Verification of a Secure Electronic Auction Protocol". Proceedings of the 17th IEEE Symposium on Reliable Distributed Systems, October 1998, pp. 204-210.

S. Subramanian and M. Singhal. "Detecting Violation of Real-Time Constraints in Secure Electronic Commerce Transactions". Proceedings of the 14th International Conference IFIP SEC'98, September 1998, pp. 460-464.

S. Subramanian and M. Singhal. "A Secure Electronic Stock Market Transaction Protocol". Economic Research and Electronic Networking (NETNOMICS), 1999, Under Review.

S. Subramanian and M. Singhal. "Analysis of E-Commerce Protocols for Security Under Passive and Active Attacks". Journal of , 1999, Under Review.

S. Subramanian. "Design and Verification of Protocols for Secure Transaction Execution in Electronic Commerce". Dissertation Proposal, OSU-CISRC-10/98-TR40, 1998.

S. Subramanian and M. Singhal. "Design and Verification of a Secure, Atomic Auction Protocol". Technical Report, OSU-CISRC-5/98-TR15, 1998.

S. Subramanian and M. Singhal. "Design and Verification of a Secure, Atomic Transaction Execution Protocol for Electronic Commerce". Technical Report, OSU-CISRC-10/98-TR12, 1998.

S. Subramanian and M. Singhal. "A Methodology for Detecting Violation of Real-Time Constraints in Secure Electronic Commerce Transactions". Technical Report, OSU-CISRC-11/97-TR56, 1997.

S. Subramanian and M. Singhal. "Protocols for Secure, Atomic Transaction Execution in Electronic Commerce". Technical Report, OSU-CISRC-10/97-TR49, 1997.

S. Subramanian. "Clustering and Automatic Classification". OCLC Annual Report, 1998.

S. Subramanian. "Inquire: A New Paradigm for ". Advances in Digital Libraries Forum, Library of Congress, May 1996.

FIELDS OF STUDY

Major Field: Computer and Information Science

Studies in:

Software Systems: Prof. Mukesh Singhal
Computer Architecture: Prof. P. Sadayappan
Algorithmics and Theory: Prof. Raphael Wenger

TABLE OF CONTENTS

Page

Abstract ...... ii

Dedication ...... iv

Acknowledgments ...... v

Vita ...... vii

List of Tables ...... xiv

List of Figures ...... xv

Chapters:

1. Introduction ...... 1

    1.1 Enabling Technologies ...... 2
        1.1.1 Data Warehousing ...... 3
        1.1.2 Data Mining ...... 5
        1.1.3 Connectivity ...... 6
        1.1.4 Interoperability ...... 6
        1.1.5 Cryptography ...... 8
        1.1.6 Security ...... 12
    1.2 Technical Challenges in E-Commerce ...... 13
    1.3 Components of any solution ...... 16
        1.3.1 Cryptography ...... 17
        1.3.2 Payment ...... 17
        1.3.3 Transactions ...... 19
    1.4 Existing Solutions ...... 20
        1.4.1 First Virtual ...... 21
        1.4.2 IBM's iKP ...... 22
        1.4.3 NetCash and NetCheque ...... 23
        1.4.4 CyberCash ...... 24
        1.4.5 CyberCoin ...... 25
        1.4.6 SET ...... 26
        1.4.7 Checkfree ...... 26
        1.4.8 CAFE ...... 27
        1.4.9 ECash ...... 27
        1.4.10 Citibank's Transaction Cards ...... 28
        1.4.11 Mondex ...... 29
        1.4.12 VisaCash ...... 29
        1.4.13 Millicent ...... 30
        1.4.14 MicroMint ...... 31
        1.4.15 PayWord ...... 31
        1.4.16 NetBill ...... 32
        1.4.17 OpenMarket ...... 33
        1.4.18 EDI ...... 33
    1.5 Problem Statement ...... 34
        1.5.1 Properties of a General Transaction Protocol ...... 35
        1.5.2 Different Types of Transactions ...... 35
    1.6 Summary ...... 36

2. Framework for Verifying Protocol Security ...... 37

    2.1 Standards Compliance ...... 37
    2.2 Challenges and Known Attacks ...... 38
    2.3 Security Reviews ...... 39
        2.3.1 Formal Verification ...... 39
    2.4 Our Approach for Formal Verification ...... 40
        2.4.1 The BAN Logic ...... 41
        2.4.2 Notation ...... 45
        2.4.3 Analysis for Security ...... 46
        2.4.4 Illustrating the Framework with a Formal Proof of Security ...... 54
    2.5 Summary ...... 59

3. General Two-Party Transaction Protocol ...... 60

    3.1 Existing Work ...... 61
    3.2 System Model ...... 61
    3.3 Protocol Description ...... 62
        3.3.1 Properties ...... 65
    3.4 Formal Verification of Properties ...... 69
        3.4.1 Initial State ...... 69
        3.4.2 Properties ...... 69
        3.4.3 Verification ...... 70
    3.5 Summary ...... 85

4. Real-Time Aware Protocols ...... 86

    4.1 Existing Work ...... 86
    4.2 System Model ...... 87
    4.3 Real-Time Constraint ...... 89
    4.4 The Sixth Property ...... 90
    4.5 A Real-Time Aware General Two-Party Transaction Protocol ...... 91
        4.5.1 Role of the Secure Co-Processor ...... 92
        4.5.2 Notations ...... 94
        4.5.3 Real-Time Violation Detection Mechanism ...... 95
    4.6 Properties ...... 96
    4.7 Formal Verification ...... 99
    4.8 Summary ...... 108

5. Auctions ...... 110

    5.1 Existing Work ...... 110
    5.2 System Model ...... 111
    5.3 Protocol Description ...... 112
        5.3.1 The Protocol ...... 113
        5.3.2 Formal Verification ...... 115
        5.3.3 Proof ...... 116
    5.4 Real-Time Aware Auction Protocol ...... 140
    5.5 Summary ...... 141

6. Stock Market Transactions ...... 142

    6.1 Existing Work ...... 144
    6.2 System Model ...... 146
        6.2.1 Transaction Model ...... 147
        6.2.2 Real-Time Constraint ...... 147
    6.3 Protocol Description ...... 149
        6.3.1 Notations ...... 149
        6.3.2 Computing Clock Drift ...... 150
        6.3.3 Getting Quotes ...... 151
        6.3.4 Placing the Orders ...... 151
        6.3.5 Processing the Orders at the Exchange ...... 152
        6.3.6 Transaction Completion ...... 153
    6.4 Real-Time Awareness ...... 154
        6.4.1 Real-Time Violation Detection Mechanism ...... 156
    6.5 Properties ...... 157
        6.5.1 Security ...... 159
        6.5.2 Atomicity ...... 160
        6.5.3 Privacy ...... 160
        6.5.4 Anonymity ...... 161
        6.5.5 Overhead Cost ...... 161
    6.6 Formal Verification ...... 161
        6.6.1 Initial State ...... 161
        6.6.2 Verification ...... 162
    6.7 Summary ...... 187

7. Conclusion and Future Work ...... 189

    7.1 Summary ...... 189
    7.2 Future Research ...... 191

Bibliography ...... 194

LIST OF TABLES

Table Page

4.1 Vector timestamp ...... 92

4.2 Role of the Secure Co-Processor ...... 93

4.3 Attachments to be sent with the Demand ...... 95

6.1 Role of the Secure Co-Processor ...... 156

6.2 Attachments to be sent with the Demand ...... 156

LIST OF FIGURES

Figure Page

3.1 Two Party Transaction Protocol between the merchant, Bob, and the customer, Alice ...... 63

4.1 Timing diagram showing a real-time transaction ...... 90

4.2 Two Party Transaction Protocol between the merchant, Bob, and the customer, Alice ...... 91

4.3 Algorithm to evaluate the customer's Demand ...... 97

5.1 Auction Protocol between the merchant, Bob, and a representative customer, Alice ...... 114

6.1 Timing diagram showing a real-time transaction ...... 148

6.2 Different Types of Orders ...... 152

6.3 Steps involved in completing the transaction ...... 153

6.4 Algorithm to evaluate the investor's Demand ...... 158

CHAPTER 1

INTRODUCTION

There is already a vast amount of information and products available to us through the Internet or through email. Some of these products, such as electronic books, papers, reports, information, typesetting services, ownership titles, share certificates, software, etc., are in electronic form. Others are non-electronically-stored goods that are advertised on the Internet. The Internet and the network culture are reaching a point where everyone is clamoring for a way to exchange money electronically. The Net evolved as a generous place where people gave away information, in part, because there was no simple way to charge for it [86]. However, this lack of structured commerce on the Net is changing very fast. According to a projection by Business Week, trade worth 1,650 billion dollars will be conducted over electronic networks by the year 2000. Projections vary, but all are in the order of billions of dollars. Stockholders are pouring money into companies that have the potential to grab the smallest slice of these transactions.

The growing shift towards electronic trading can be attributed to many factors.

Electronic transfer of goods and information is easy, efficient, fast, and inexpensive.

In addition, electronically advertised goods are more widely available and easier to search for. The convenience of being able to look up various products, compare prices and features, place orders, and pay, all from one's desktop computer, further favors this trend towards electronic commerce. Besides benefiting the consumer, the records of transactions can prove invaluable to the merchant and to the manufacturer for targeted advertising, marketing, auditing, revenue reports, and even improved product design.

Although the term electronic commerce has gained attention only in recent years, the concept has existed, in various forms, for over 20 years. Electronic Data Interchange (EDI) [84, 41, 19] and Electronic Funds Transfer (EFT) were first introduced in the late 1970s. Automated Teller Machines and Telephone Banking gained acceptance in the 1980s. While these technologies have changed our lives and commerce in fundamental ways, none has been hailed as a purveyor of economic transformation to the extent that e-commerce has in recent years. The reason for this wave of optimism is the coming together of various technologies that can enable this transformation to an era of electronic money and transaction execution. In the next section, we examine some of the key technologies that have made electronic commerce possible.

1.1 Enabling Technologies

In this section, we briefly survey the technologies that have given rise to such tremendous hope and optimism. These technologies make it possible to compile and store product information; search, display, and analyze this information; advertise and broker services; support transactions; provide security and interoperability of distributed transactions; authenticate users; and do a variety of other things. Not all of these technologies are recent or new. They are described here because of their relevance to electronic commerce, rather than as technologies the readers may be unaware of.

1.1.1 Data Warehousing

A data warehouse is a time-varying, non-volatile collection of data. It differs from normal databases in that it is designed to enable decision support and analysis, as opposed to traditional transaction processing. Organizational decision making requires summarized historical data from various internal and external resources. The real challenges in supporting a data warehouse are (a) the large amount of data, (b) the heterogeneous nature of the data, (c) the different representations of the same data used in the various data sources, and (d) the ability to quickly summarize this information meaningfully. Research work being done in this area includes the following:

Modeling and Design Issues

The emphasis of data warehousing applications is on data analysis. The common term used is On-Line Analytical Processing (OLAP). OLAP applications are characterized by the ability of users to view and analyze data across multiple dimensions [88, 69]. For example, sales volume can be measured with respect to time, product category, geographical location, and distributor, which constitute the dimensions. The data can now be represented in a four-dimensional hypercube, where each data point refers to a certain time, product, place, and distributor. The dimensions of this hypercube may be hierarchical. For example, time may be measured in days, weeks, months, quarters, or years.

Various operations are defined on this cube. Roll up is the aggregation of data along one dimension (e.g., sales data can be rolled up from days to quarters). Drill down is the reverse of roll up. Slicing is the projection of data along a subset of dimensions with equality selection along the other dimensions. Dicing is the same as slicing, with the exception that range selection is used instead of equality selection.
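As an illustration, the cube operations above can be sketched over a toy set of cube cells (the field names and sample records here are invented for the example; real OLAP engines operate on multi-dimensional arrays or star-schema tables):

```python
# Toy OLAP cube: each record is one cell of the (time, product, region) cube.
from collections import defaultdict

cells = [
    {"day": "1999-01-05", "quarter": "Q1", "product": "book", "region": "east", "sales": 10},
    {"day": "1999-02-11", "quarter": "Q1", "product": "book", "region": "west", "sales": 7},
    {"day": "1999-04-02", "quarter": "Q2", "product": "disc", "region": "east", "sales": 3},
]

def roll_up(cells, dim):
    """Aggregate sales along one dimension (e.g., days up to quarters)."""
    totals = defaultdict(int)
    for c in cells:
        totals[c[dim]] += c["sales"]
    return dict(totals)

def slice_(cells, dim, value):
    """Project the cube onto the cells where dim == value (equality selection)."""
    return [c for c in cells if c[dim] == value]

def dice(cells, dim, values):
    """Like slicing, but with a range/set selection instead of equality."""
    return [c for c in cells if c[dim] in values]

print(roll_up(cells, "quarter"))             # {'Q1': 17, 'Q2': 3}
print(len(slice_(cells, "region", "east")))  # 2
```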

The two common methods of storing hypercubes are (a) multi-dimensional arrays and (b) relational tables called fact and dimension tables. The former is easy to visualize and makes roll up, drill down, slicing, and dicing operations efficient. The drawbacks of this method are the lack of standardized APIs for client tools and severe restrictions on the amount of data and the number of dimensions that can be handled. In the relational model, there is a fact table for the actual data and one or more dimension tables representing the hierarchies on each dimension. This schema is also referred to as the star schema. A refinement of this model involves not normalizing the dimension tables, since they are never updated. This schema is known as the snowflake schema. When multiple fact tables share the same dimension tables, the schema is referred to as fact constellations [23]. Robust data administration tools, standard SQL interfaces, and scalability are advantages of the relational model. A disadvantage of this model is the large number of tables, leading to slow processing time.

Materialized Views, Indexes, and Query Processing

In order to alleviate the slow processing time of relational OLAP, new query processing techniques have evolved. Materialized views refers to the caching and refreshing of data frequently used for query processing. The trick is in identifying this data accurately. The data frequently include aggregate data that are used to answer queries [39]. Indexes developed to improve processing time include bit-map indexes and join indexes. A bit-map index is a bit vector representation of the values in the table. Expensive operations such as intersection, union, and aggregation are first performed on the bitmaps to reduce base table access. Join indexes are the pre-computation of a binary or n-way join and provide efficient access to data from multiple tables [10].
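The bit-map idea can be sketched in a few lines (a toy index over invented column data; Python integers stand in for the bit vectors):

```python
# Toy bitmap index: one bit per row, one bitmap per distinct column value.
def build_bitmap_index(column):
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

colors = ["red", "blue", "red", "green", "red"]
sizes  = ["S",   "S",    "L",   "S",     "S"]

color_idx = build_bitmap_index(colors)
size_idx  = build_bitmap_index(sizes)

# "color = red AND size = S" is a single bitwise AND on the bitmaps;
# the base table is touched only for the surviving rows.
hits = color_idx["red"] & size_idx["S"]
rows = [r for r in range(len(colors)) if (hits >> r) & 1]
print(rows)  # [0, 4]
```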

1.1.2 Data Mining

Also known as Knowledge Discovery, this technique refers to the analysis of organizational databases for finding useful patterns in data. Various industry sectors, such as health care, store chains, and credit card companies, stand to benefit from data mining techniques. The primary tasks of data mining are the prediction of unknown values for variables of interest, and the description of those values in easily interpretable form.

There are three important techniques of prediction: (a) Clustering [43, 28] refers to the process of grouping a given dataset into several categories based on similarity metrics or density functions [33]; (b) Classification [15, 70] refers to the process of assigning elements of a given dataset to predefined categories; and (c) Association Rules [11, 12, 13] refer to the process of recognizing regularities in the data, expressed in the form X => Y, where X and Y are sets of data. The support for an association rule is the fraction of total transactions that contain both X and

Y (or satisfy the rule). The confidence in the rule is the fraction of the transactions containing X that also contain Y. Various research issues in this area are discussed in [42].

1.1.3 Connectivity

Connectivity, or accessibility, is critical for electronic commerce [9]. The Internet has been the single greatest revolution in the realm of electronic connectivity after the telephone. Connectivity, in today's context, refers to the ability to transfer different types of data over different media with very low time delay. The different types of data include text, images, video, sound, etc., and the media include coaxial cables, twisted pair cables, fiber optic cables, wireless, etc. Furthermore, e-commerce systems have to contend with a multiplicity of client appliances: TV sets, radios, PCs, PDAs, laptops, cellular phones, etc. In this heterogeneous world of objects, networks, clients, and servers, the issue of interoperability assumes paramount importance. In the next section, we discuss this issue at greater length.

1.1.4 Interoperability

Interoperability has been addressed at various levels. We discuss some of them below:

CORBA

The Common Object Request Broker Architecture (CORBA) provides a mechanism for heterogeneous software objects to make and receive requests through the use of public names and interfaces. In the CORBA architecture, there are objects (servers), clients, and object request brokers (ORBs). The ORBs have access to a directory of services provided by various objects and can determine the means of accessing them. Clients have access to at least one local ORB through which they submit requests. The requests may then reach the receiving object after passing through one or more ORBs. CORBA provides common object services and common object facilities. Examples of services are discovery of available services, naming, life cycle management, security, concurrency control, event notification, querying, licensing, etc. An example of a common object facility is the Distributed Document Component Facility (DDCF). DDCF allows for the presentation and interchange of objects based on a document model.

Java

Java is a platform-independent language. This allows object code compiled on one machine (say, the server) to be interpreted and run on another machine (say, the client). Java applets are small programs that can be run on a client via an Internet browser and allow for convenient communication between a server and thin clients.

Software Agents

Software agents are computer programs that can accomplish a task without the direct manipulation of a human user [50, 65]. Several agent-based systems have been developed for electronic commerce. PersonalLogic [7] allows users to specify product features and uses a constraint satisfaction algorithm to filter through the product space. BargainFinder [1] and Jango [64] are systems that can take a product name as input, obtain price information from other Web sites, and perform a price comparison.

For software agents to communicate with each other, communication languages have been developed. Examples of procedural languages are TCL, Telescript, and Apple Events. A declarative language, known as ACL, has been developed by DARPA [53]. ACL consists of an open-ended vocabulary, a knowledge interchange format (KIF), and a knowledge query and manipulation language (KQML).

Markup Languages

Markup languages, especially HTML, are one of the foundations of Internet development. The Standard Generalized Markup Language (SGML) is a generalized markup scheme for representing the logical structure of documents in a system-independent, platform-independent, and presentation-style-independent fashion. A Document Type Definition (DTD) describes the properties of the elements of the document and tells a computer how to recognize them. The HyperText Markup Language (HTML) is an SGML application popularly used for document display and linking in browsers. HTML, however, does not allow encoding of the structure and semantics needed by structured documents, databases, and catalogs. The eXtensible Markup Language (XML) is a subset of SGML. An attached DTD allows for proper interpretation of the markup. However, in the absence of a DTD, a default DTD is used. Various style sheets may be used to present XML documents. XML is a close cousin of SGML with some minor adaptations to exclude features not perceived as pertinent to the Internet. XML has recently received tremendous attention from standards bodies and is increasingly being adopted by users.

1.1.5 Cryptography

Such a high level of electronic connectivity and interoperability in commerce immediately leads to issues such as verifiability of the source, integrity of data, security from tampering of data, etc. The first step to answering these concerns is cryptography. Cryptographic algorithms are also the basis of the payment and transaction protocols discussed in this thesis. Some basic cryptographic algorithms [63, 48] are briefly described below.

Secret Key Encryption

These algorithms scramble data using a single, secret key. The data must then be unscrambled, or unlocked, with the same key. Some of the most common secret key algorithms are the Data Encryption Standard (DES) from the US government, triple-DES (which repeats DES three times for good measure), RC-4 from RSA Data Security, and IDEA from Europe.
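The single-key idea can be sketched with a toy construction (a keyed XOR stream; this is for illustration only and is not DES or any real cipher - the key and message are invented):

```python
# Toy illustration of secret-key encryption: the same key both scrambles
# and unscrambles. NOT a real cipher; use DES/AES-class algorithms in practice.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (counter-mode hashing)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice recovers the original."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret"
cipher = xor_cipher(key, b"pay Bob $10")
assert xor_cipher(key, cipher) == b"pay Bob $10"  # same key decrypts
```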

Public Key Encryption

These algorithms use a pair of keys - one to scramble and the other to unscramble data. It is important to note that either key can be used to scramble, and the other to unscramble. One of the keys is kept secret, or private, by the user and the other is broadcast to everybody. Now, when the user scrambles a message with his private key, others can unscramble it with the public key. This serves as a signature for the user. When others want to send a secret message to the user, they scramble it with the public key. Now, only the user can unscramble it with his private key.
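Both directions of the key-pair mechanics can be seen with textbook-style numbers (the primes and message below are toy values for illustration; real systems use large random primes and padding):

```python
# Textbook public-key arithmetic with toy numbers (illustration only).
p, q = 61, 53
n = p * q                   # modulus, 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (2753); needs Python 3.8+

m = 65                      # message, encoded as a number < n
c = pow(m, e, n)            # scramble with the public key
assert pow(c, d, n) == m    # only the private key unscrambles

sig = pow(m, d, n)          # "sign": scramble with the private key
assert pow(sig, e, n) == m  # anyone can check with the public key
```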

The most common form of public-key encryption available is the Rivest, Shamir, and Adleman system, known as RSA and marketed by RSA Data Security. Many other systems exist that are not as well known, and are thus more likely to have undiscovered flaws.

Secure Hash Functions

Hash functions take a large file and reduce it to a relatively short number (128 to 512 bits long). Two important features of a secure hash function are: (a) each of the 2^n n-bit numbers is generated with equal probability, and (b) the file cannot be recreated from the hash number. Hash numbers are sent with messages to ensure data integrity. The sender computes the hash of a message, signs it, and attaches it to the message. The receiver then computes the hash of the message received and compares it to the decrypted hash number. If they do not match, the file has been tampered with. The better known hash algorithms are MD-5 [44, 59], developed by Ron Rivest, and the Secure Hash Algorithm (SHA) [54, 62], from the US government.
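The integrity check described above can be sketched as follows (SHA-256 stands in for the hash; the messages are invented, and the signing step is omitted):

```python
# Detecting tampering with a hash: the digest travels with the message.
import hashlib

message = b"transfer $10 to Alice"
digest = hashlib.sha256(message).hexdigest()   # sender attaches this

# Receiver recomputes the hash and compares.
tampered = b"transfer $99 to Alice"
assert hashlib.sha256(message).hexdigest() == digest       # intact message passes
assert hashlib.sha256(tampered).hexdigest() != digest      # mismatch exposes tampering
```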

Signature-Only Public Key systems

Signature-only systems [30, 31] are similar to public key systems without the ability to secure messages. That is, the private key may be used to sign a message. However, the public key may not be used to secure one. The reason for developing such a system is the need of certain governments to control the use of encryption. The best known signature-only system is the Digital Signature Algorithm from the US government. This system can be cleverly tweaked to send secret messages. However, the US government continues to support the system for what seems to be reasons of bureaucratic inertia [86].

Blinded Digital Signatures

Blinded Digital Signatures are used to create anonymous cash [24, 25, 26, 27].

Obviously, the bank will not want to sign a bill or coin without verifying its value.

The solution is to present n bills to the bank to sign. The bank will set aside one at random and open all the others. If the value of all the others is the same as that claimed by the presenter, the bank signs the unopened one. Now, the presenter has an authorized bill from the bank. However, the bank does not know the serial numbers of the bills it issues and cannot trace them.
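The blinding arithmetic behind the unopened bill can be sketched with toy RSA-style numbers (the cut-and-choose step above is omitted, and all parameters are illustrative, not a production scheme):

```python
# Blind-signature arithmetic with toy numbers: the bank signs a blinded bill
# and the customer unblinds it into a valid signature the bank never saw.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))        # bank's private exponent

m = 100                                  # the "bill" to be signed
r = 7                                    # blinding factor, secret to the customer
blinded = (m * pow(r, e, n)) % n         # the bank sees only this value
signed_blinded = pow(blinded, d, n)      # bank signs without learning m
sig = (signed_blinded * pow(r, -1, n)) % n   # customer divides the blinding out

assert pow(sig, e, n) == m               # a valid signature on m itself
```

Unblinding works because (m * r^e)^d = m^d * r mod n, so multiplying by r^-1 leaves exactly m^d, the ordinary signature on m.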

Secret Sharing

Some secrets are extremely sensitive and cannot be wholly trusted to a single person. For example, one would not like to trust the code to trigger a nuclear warhead to one general. However, we may feel safer if five generals were to jointly own the secret in such a way that at least three of them must collude to trigger the warhead. Such a distribution of secret information is known as secret sharing.
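The three-of-five generals example can be sketched with Shamir's polynomial scheme, one standard realization of secret sharing (the prime field and the fixed polynomial coefficients below are illustrative; real deployments use a large prime and secure randomness):

```python
# A (3-of-5) Shamir secret-sharing sketch: a degree-2 polynomial hides the
# secret at f(0); any 3 points determine the polynomial, 2 reveal nothing.
P = 2**31 - 1  # a prime; all arithmetic is mod P

def make_shares(secret, k, n, coeffs):
    """Split `secret` into n shares, any k of which reconstruct it.
    `coeffs` are the k-1 polynomial coefficients (random in practice)."""
    poly = [secret] + list(coeffs)
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers f(0) = secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(41, k=3, n=5, coeffs=[166, 94])
assert reconstruct(shares[:3]) == 41   # generals 1-3 collude: secret recovered
assert reconstruct(shares[2:]) == 41   # any other three work as well
assert reconstruct(shares[:2]) != 41   # two shares are not enough
```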

This technique is also used in detecting abusers of electronic anonymous cash. Information from one use of a coin is not enough to determine the identity of the spender. However, if the same coin is spent twice, the spender can be identified and prosecuted.

Bit Commitment

Bit commitment involves concatenating a large random pad to a message before encrypting it.

This technique is used to prevent the encryptor from being able to reveal one of two messages depending on the circumstance. Without bit commitment, it would be possible for the encryptor to encrypt a message such that the cipher can be decrypted with two different keys to get two different messages. The encryptor could then, by revealing the right key, choose which message is to be delivered. The recipient would have no way of knowing that the cipher can be decrypted with a different key to get another message, and thus may think the encryptor to be clairvoyant.
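A simple modern way to realize such a commitment is to hash a random pad concatenated with the message; this sketch is one possible construction, not the specific pad-and-encrypt scheme described above:

```python
import hashlib, os

def commit(message: bytes) -> tuple[bytes, bytes]:
    # A random pad concatenated to the message binds the committer
    # to exactly one message while hiding its content.
    pad = os.urandom(16)
    return hashlib.sha256(pad + message).digest(), pad

def verify(commitment: bytes, pad: bytes, message: bytes) -> bool:
    return hashlib.sha256(pad + message).digest() == commitment

c, pad = commit(b"attack at dawn")
# Later, the committer reveals pad and message; the recipient checks them.
assert verify(c, pad, b"attack at dawn")
# The committer cannot open the same commitment as a different message.
assert not verify(c, pad, b"retreat at dawn")
```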

Zero Knowledge Proofs

Digital authentication is a very tricky problem. Any authentication message is open to replay attacks. This led to challenge-and-response authentication. The server sends a challenge that only the authorized party knows the response to. Since the challenge changes every time, the response is not open to replay attacks. One of the strictest forms of challenge-and-response authentication is the zero knowledge proof.
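A basic (non-zero-knowledge) challenge-and-response exchange can be sketched with an HMAC over a shared secret; the key and messages here are hypothetical:

```python
import hashlib, hmac, os

SHARED_KEY = b"secret known only to server and client"  # hypothetical shared secret

def respond(challenge: bytes, key: bytes) -> bytes:
    # Only a party holding the key can compute the correct response.
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server issues a fresh random challenge for every authentication attempt.
challenge = os.urandom(16)
response = respond(challenge, SHARED_KEY)
assert hmac.compare_digest(response, respond(challenge, SHARED_KEY))

# Replaying an old response is useless: the next challenge differs,
# so its expected response will (with overwhelming probability) differ too.
assert respond(os.urandom(16), SHARED_KEY) != response
```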

1.1.6 Security

One of the biggest concerns in this day of network connectivity is security. There are different ways of ensuring security. Some of the more common and popular techniques are discussed below:

Firewalls

Firewalls [34, 55] enable access control by separating a private network from an open network. Based on the owner's specification, a firewall allows only part of the traffic to pass through. The simplest firewalls consist of a screening router that screens the header of each incoming and outgoing packet to decide whether it is allowed. This technique is open to IP spoofing attacks. Enhanced security is provided through proxy servers.
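A screening router's header check can be sketched as a first-match rule table; the rules and field names below are hypothetical:

```python
# Each rule matches on header fields; first match wins, default deny.
RULES = [
    {"direction": "in",  "dst_port": 80,   "action": "allow"},  # web traffic
    {"direction": "in",  "dst_port": 25,   "action": "allow"},  # mail
    {"direction": "out", "dst_port": None, "action": "allow"},  # any outgoing
]

def screen(packet: dict) -> str:
    # Compare the packet header against each rule in order.
    for rule in RULES:
        if rule["direction"] == packet["direction"] and \
           rule["dst_port"] in (None, packet["dst_port"]):
            return rule["action"]
    return "deny"  # nothing matched: drop the packet

assert screen({"direction": "in", "dst_port": 80}) == "allow"
assert screen({"direction": "in", "dst_port": 23}) == "deny"   # telnet blocked
assert screen({"direction": "out", "dst_port": 443}) == "allow"
```

Note that this filter trusts the header fields as given, which is exactly why it is open to the IP spoofing attacks mentioned above.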

SSL

This is a general purpose transport layer protocol for the TCP/IP suite. SSL consists of a two-phase handshake protocol for server and client authentication using X.509 v3 public key certificates. Confidentiality is provided with user-specified algorithms, integrity is implemented with user-specified cryptographic hash functions, and non-repudiation is provided through cryptographic signatures.

1.2 Technical Challenges in E-Commerce

If the need for electronic money is so acute and the technology as advanced as it is, why is it so difficult to create and circulate electronic money? Most forms of traditional payment, e.g., bank checks, cashiers checks, travelers checks, credit cards, and bank wires, are already computerized. Paper is still part of the transaction, but computers back up most of the detail. What more can be done? Why can't these current instruments and their electronic systems be modified to serve the wider network? Isn't this a matter for the programmers to work out the details?

Creating truly digital money has a number of technical challenges. System designers are facing them with a variety of different solutions. The commercial systems actually working in the world today involve a number of compromises, where the company chose to ignore or underplay one or more features in order to reduce the complexity of their system. Some of these challenges are:

How is counterfeiting stopped?

The biggest issue with designing digital currency is the ease of copying digital information. A digital bank note can be copied ad infinitum. Even digital signatures can be copied. The solution is to create a complex audit mechanism that can identify abusers. The detection of counterfeiting may be done prior to completing a transaction (online) or afterwards (off-line). At the time of this writing, there is no known method to prevent counterfeiting of digital currency.

How secure is the system?

One of the most talked about issues in electronic commerce is security. TIME magazine (July 20, 1998 issue) reports that 22% of potential customers do not buy over the Internet for fear of security. At the same time, we know that credit card transactions are not very secure and credit card fraud is very high. How important, then, is security in electronic commerce? How much computational cost is justified to make a system more secure? While the security fear on the part of individual customers may be due to hype, there is rational fear on the part of corporations and financial institutions, whose potential losses are extremely large. Due to the nature of the electronic medium, a single sophisticated attack can result in the compromise of not one but most of the transactions. The losses due to such attacks can prove to be disastrous.

How strong is the intruder?

Almost every system has an attack that is theoretically possible. Whether the possibility of the attack causes veritable fear or not depends on how powerful the intruder is likely to be. Currency bills can be counterfeited. However, that would require more than a traditional printing press, and is, therefore, unlikely. Snagging the secret ID of a cell phone number and charging it requires a sophisticated hacker. However, this is fairly common. How powerful is the intruder in the electronic world expected to be?

How thin is the client?

In other words, how powerful is the customer? It is easy for a customer to remember a four digit PIN, even two or three of those. However, he may need a powerful computer to use more complex security algorithms. This means that the power for such computations should exist on portable smart cards or the customer's computer at home.

How anonymous is the customer?

Customer anonymity is a very controversial issue, in part because it is a very personal choice. Some customers feel very strongly about it and others care very little for it. As a result, it is even harder for a system designer to choose how much complexity is worth the extra anonymity.

How off-line can you go?

The best system would be one where a bank or third party was never involved in any transaction. This would be like an actual cash transaction. However, from our earlier discussion on counterfeiting, we know that this is not yet possible. An audit system run by a third party or a financial institution is essential to check counterfeiting. How best, then, can we reduce the role of this third party? Can the audit be done after the transaction is completed?

Who pays for losses?

Any amount of security will still leave the door open for some losses, either due to breaches, or due to malfunctioning of the system. Whose losses are acceptable?

In credit card transactions, losses are transferred (beyond a limit) to credit card companies and those who carry their balances from month to month. Until the extent of losses in digital systems is known, it is hard to make a convincing argument for any party to accept losses unconditionally.

All these issues make the design of electronic money hard. Various companies and researchers have chosen different answers to the above questions, and have come up with different solutions. In the next two sections, we examine some of the existing and proposed solutions.

1.3 Components of any solution

In the next section, we describe some of the solutions proposed by researchers or being used in the industry today. To better understand these solutions, we divide each into three components: the underlying cryptographic algorithms, the payment model, and the transaction model.

Traditionally, the industry has not separated the payment scheme from the transaction model, although these are logically separate entities. Payment refers to the nature of the money being transferred, and transaction to the mode of exchanging goods for that money. So when we go to a supermarket, we can pay using cash, credit card, or check. These are the various payment schemes available. A transaction involves taking a cart, picking the goods, standing in the payment queue, and making the payment.

We individually describe each of these three components below.

1.3.1 Cryptography

Various cryptographic algorithms are described in Section 1.1.5. These algorithms enable security, anonymity, data integrity, and many other properties required and desired in e-commerce. However, a more complex algorithm, while enabling more features, is typically computationally intensive. Hence, any solution must balance the cost of computation against the benefits of the algorithm. These choices become evident in the discussion about each solution below.

1.3.2 Payment

Any payment scheme can be examined with respect to a number of different features. We can think of any payment scheme as a point in an n-dimensional space, where each dimension corresponds to one of these features. The features are:

• online or off-line: A payment is online if the merchant must confirm or authorize payment from a bank or other third party before shipping the goods. If the merchant can ship the goods without checking with the third party and still be assured that he will be paid, the payment scheme is said to be off-line.

• anonymous or traceable: A payment is anonymous when no one can trace the identity of the person(s) using a certain payment token or coin. If the identity of the users is revealed, the payment is known to be traceable. While anonymous payment is desirable, it may also be expensive to create such a payment.

• credit, debit, cash, or check: A credit type payment does not require the payer to have any money in any particular account. The payer is charged after the transaction is completed for the amount involved. In a debit type of payment, the payer must have some money in a special account. The money is debited from the account at the time of the transaction. In a cash type of payment, the customer withdraws the money from a bank account (say, at an ATM) in digital form. Most customers prefer either a credit type of payment scheme, or a cash scheme where the customer can withdraw electronic cash from any bank whenever needed. These give the customer maximum flexibility in managing their floating cash and investing it. The check scheme is like any check payment and is conceptually closest to the credit scheme with automatic bill payment.

• presence and role of trusted third party: Some payment schemes require a third party other than a regular bank. Such third parties are also supposed to be trusted by the customer and in many cases by the merchant. This is usually an undesirable feature. However, it allows the payment token to be simpler and avoids a lot of the computation required for signatures.

• security against counterfeiting: This is essential to all payment systems. If counterfeiting were easy, the whole concept of token money would be rendered meaningless. Security may be provided through a policing mechanism after counterfeiting has occurred, in which case there must be a mechanism to detect not only that a coin has been duplicated, but also the identity of the culprit. Another mechanism is to detect counterfeiting before transaction completion, by processing each transaction at the bank. The best alternative would be to issue coins that cannot be duplicated or copied. This may be achieved by issuing tamper-proof silicon, e.g., a smart card. However, there are many unresolved issues, such as loss of the smart card, counterfeiting while loading and unloading money into the smart card, etc.

1.3.3 Transactions

A transaction refers to the exchange of goods for a payment that is agreed upon by the involved parties. A transaction protocol may be examined along a number of axes, each representing a certain property. Some of these properties, such as security and anonymity, were also mentioned with respect to payment schemes. When examining these properties in the context of the transaction protocol, we may assume that the underlying payment scheme provides maximum security and anonymity. By doing so, we can accurately determine where these properties are being compromised: in the payment scheme, in the transaction protocol, or in both.

• Security: A protocol is secure when tampering or replay of messages by a third party, deliberately wrong messages sent by the parties involved, or corruption and loss of messages in the network, does not result in loss of payment or product by either party.

• Atomicity: A protocol is atomic if, under all circumstances, the transaction either goes to completion (the right goods are exchanged for the right amount of money) or is aborted (there is no exchange of goods or money).

• Anonymity: A protocol preserves customer anonymity if no one (including the merchant involved in the transaction) can determine the true identity of the customer involved. At the least, no one should be able to trace all transactions made by a customer and create a profile of the customer.

• Privacy: A protocol ensures privacy if no third party can find out details about the transaction, such as payment amount, product details, etc.

• Client Thinness: A protocol has a thin client if the customer needs no special software or hardware (usually required for encryption). While this has been the trend for a while, with really inexpensive PCs flooding the market, it is becoming less of a requirement.

• Low Overhead: The cost of each transaction must be low enough to justify transactions involving a small amount of money. This measure is relative to the money being transacted. For extremely small value transactions, called micropayments, the overhead must be close to nil. However, for most transactions, five or six messages with some signatures and encryption are low enough [36].

1.4 Existing Solutions

In this section, we discuss proposed and implemented solutions and their characteristics. Traditional solutions, both in industry and academia, have been a mixture of a payment scheme and a transaction protocol. Consequently, these have been so intertwined that it is sometimes hard to see them distinctly. Below, we describe each solution briefly, and then mention the features of its underlying payment scheme and those of the transaction protocol. We hope that the separate lists will help the reader to distinguish between the payment method and the transaction protocol. We believe that understanding this distinction is important in recognizing which components contribute to the strengths and failings of the solution. In the following section, we establish the need for a flexible protocol that can be used with a wide range of payment schemes. The need for such a protocol cannot be well understood unless the distinction between the payment and the transaction protocol is clear.

1.4.1 First Virtual

First Virtual is the closest to present day credit card transactions in design, and probably the most easily viable existing protocol. The protocol works as follows: First Virtual takes a credit card number from a customer and issues an ID number. Whenever the customer purchases a good, he emails (or sends by other means) this number to the merchant. The merchant may charge First Virtual either after shipping the goods or before. First Virtual sends an email to the customer confirming the transaction. On receipt of confirmation, First Virtual pays the merchant. If the merchant hasn't already shipped the goods, he does so now. In other words, the ID number functions like any other credit card number, except that the merchant is paid only after confirming with the customer, thus avoiding a large portion of credit card fraud.

The underlying payment scheme is online (typically the merchant chooses to ship the goods after confirmation), traceable, credit based, requires First Virtual to be a trusted third party (where credit cards must be registered before use), and allows for as much security as a credit card payment.

The transaction protocol provides a moderate level of security through the email check. It provides atomicity only when the merchant confirms payment before shipping the goods. There is no anonymity or privacy. The client is thin and the transaction cost is fairly low, as there is no encryption. However, it is unclear who picks up the cost for transactions denied by the customer after the goods have been shipped: the merchant or First Virtual. If it is the latter, the savings due to the lack of encryption may be lost in these transactions.

1.4.2 IBM’s iKP

This family of protocols, designed by IBM [18], works as follows: the merchant and customer agree upon the price and the product to be exchanged. The customer digitally signs a purchase request with the price, secures it with the bank's public key, and sends it to the bank. The merchant digitally signs a sales request with the price, secures it with the bank's public key, and sends it to the bank. The bank compares the prices. If they match, it charges the customer's account and instructs the merchant to complete the transaction.

The underlying payment scheme is online, traceable, debit based, requires only the bank to be trusted, and is as secure as a wire transfer.

The transaction protocol is secure and atomic. It is not anonymous or private and is also expensive, not so much due to the encryption cost as due to the latency and scalability issues involved in having the bank process every transaction. The client is thick.

1.4.3 NetCash and Net Cheque

NetCash and NetCheque are two payment systems designed by B. Clifford Neumann and Gennady Medvinsky at the Information Sciences Institute at the University of Southern California. TEKnology-Laine (http://www.TEKChek.com) is commercializing the software and lists a number of merchants offering to accept payment through this system.

The NetCash protocol works as follows: Authorized currency servers mint digital coins consisting of the value, a serial number, and a signature. A customer may withdraw coins against other currency or against digital coins. The merchant and the customer establish a secure channel. The customer sends the coins to the merchant along with a session key and a session id. The merchant sends the coins to the currency server that issued them. The currency server checks if the coins have been used before, retires the coins, and sends the merchant new coins with new serial numbers. The merchant now sends the customer the product.

The payment scheme is online, traceable when properly used, coin based, involves no third party apart from the bank, and provides an auditing system to protect against counterfeiting.

The transaction scheme is secure but lacks atomicity, anonymity, and privacy. The messages are sent on secure encrypted channels. However, the audit system required to trace counterfeit coins would prove to be a much larger factor in the cost calculations. The client is moderately thin.

The NetCheque protocol works as follows: The issuer of the check receives a key from a Kerberos [68] server. The key is encrypted in two ways: with the issuer's password and with the bank's password. The issuer signs the check with this key and sends it to the recipient, along with the key encrypted using the bank's password. The recipient endorses it in a similar fashion and sends it to the bank to cash. The bank can verify the signatures using the attached keys and complete the transaction.

The payment scheme is off-line, traceable, check based, involves a Kerberos server, and provides adequate protection against counterfeiting.

The transaction protocol is secure and atomic, but not anonymous or private. A large part of the cost lies in getting keys for every transaction from the Kerberos server, which itself involves 4 messages. The client is thick.

1.4.4 CyberCash

CyberCash acts as a conduit between the transacting parties and the bank. The customer can download free software from CyberCash and register his credit cards or other types of cards. After negotiations, the merchant sends an invoice with the details of the transaction. The customer software combines the information in the invoice with the registered payment options and presents it to the customer. After the customer selects the payment option, all this information is bundled up, secured with the CyberCash public key, signed with the customer's private key, and sent to the merchant. The merchant adds his own version of the price to this, signs it, and sends it to CyberCash. CyberCash ensures that the prices match and completes the transaction. Information about the success or failure of the transaction is put into two bundles, one for the merchant and one for the customer. Both bundles are sent to the merchant, who can then decide to send the customer bundle to the customer.

The CyberCash payment scheme is online, traceable, credit based, involves two trusted third parties: the credit card company and CyberCash, and provides adequate protection against counterfeiting.

The transaction protocol is secure and atomic, but not anonymous or private. All transactions must be processed by CyberCash. This creates issues such as scalability, latency in transaction execution, and hot-spots, and solving these implies additional cost. The client is thick.

1.4.5 CyberCoin

This scheme allows CyberCash users to make micro-payments. A customer can have an account of up to $100 with CyberCash. When this customer offers to pay a merchant at the merchant's website, the offer includes the amount and the CyberCash account number. The merchant sends this message to CyberCash, which then collects its commission and transfers the money. The entire transaction occurs within CyberCash's ledgers, and no bank is involved.

The payment scheme is off-line, traceable, debit based, and places a certain amount of trust on the merchant and on the network. The payment is easily counterfeitable; however, the low value of these payments is assumed to be enough of a deterrent against such crime.

The transaction protocol is secure, though not atomic, anonymous, or private. All transactions must be processed by CyberCash, resulting in additional cost. The client is thin.

1.4.6 SET

VISA and MasterCard produced SET to be their standard for credit card transactions. This is an exact replica of normal credit card transactions with strong encryption and security. The protocol works as follows: after negotiations, the merchant calls the bank and authorizes the amount. If credit is available, the bank asks the issuer to set aside that amount from the credit and issues an authorization. At a later date, the merchant posts the transaction by officially asking for the payment. Until this point, the amount authorized is unusable by the customer. Now the amount is actually transferred to the merchant and the customer is billed. Each of these operations is signed by all parties involved, creating an audit trail that can be used to settle disputes.

The payment is online, credit based, traceable, places trust on the credit card company, and is non-counterfeitable.

The transaction protocol is secure and atomic, but not anonymous or private.

The protocol incurs nominal cost due to the encryption involved at every stage of the protocol. However, the audit trail is complex and storing and sorting all the necessary information can prove to be expensive. The client is thick.

1.4.7 Checkfree

A customer provides a voided check to open a CheckFree account. The money comes from the customer's bank account as if it were a check issued from that account.

However, CheckFree uses the best electronic method applicable to the situation to credit the merchant account. The cost of each transaction is about the same as the cost of a check transaction plus postage.

The payment scheme is off-line, check based, traceable, assumes the CheckFree organization to be trustworthy, and is not easily counterfeitable except by the CheckFree organization itself.

The transaction protocol is secure, but not atomic, anonymous, or private, and incurs minimal overhead cost, especially when transacting with large organizations.

1.4.8 CAFE

CAFE supports both smart cards and wallets (a device with some memory, a processor, and a smart card reader) that are about the size of a credit card and that can be carried to a store. It mimics real cash, and people can transfer money from the smart card to the wallet and vice versa. To make a payment, money is first transferred from the wallet to the smart card, and from there to the merchant's wallet.

The payment scheme is off-line, cash based, non traceable, non counterfeitable, and places little trust on any third party.

The transaction protocol is secure, atomic, anonymous in face-to-face transactions, and private. It incurs a large amount of cost in the hardware and software required for the smart card, the reader on the wallet, and the cryptographic capabilities that must be squeezed onto a small amount of silicon. The client is thick.

1.4.9 ECash

ECash, a system designed by DigiCash [3], works as follows: the customer withdraws an ecash coin from the bank. The customer pays a merchant by sending a function of the coin and an arbitrary number provided by the merchant. This function acts as a "proof" that the customer has the coin without revealing the actual coin to the merchant. The merchant sends the customer the goods. The merchant collects such "proofs" from various customers and deposits them in the bank at the end of the day. The bank checks that the coins deposited haven't been deposited previously (double spent). If a coin has been double spent, the identity of the customer is revealed to the bank (it is impossible to find the customer's identity otherwise). The network police are notified and the customer is charged with double spending. The only time anyone (including the bank) knows the identity of a customer is when the customer spends the same coin more than once.

The payment scheme is off-line, cash based, non traceable, and places no trust on anybody including the bank.

The transaction protocol is not properly specified for this payment scheme.

1.4.10 Citibank’s Transaction Cards

Citibank transaction cards are smart cards that have two jobs: to maintain a secure file system that catalogs the cash on hand, and to apply a secure digital signature to back up each transaction. The cards are intended to be tamper resistant and can support multiple currencies and even credit lines. Smart cards can also exchange currencies between the credit lines within themselves and with other smart cards.

This system is still not implemented and details have not been published. Hence, it is hard to come to any conclusions either about the payment scheme or the transaction protocol.

1.4.11 Mondex

Mondex is a successfully tested and deployed smart card. The functionality is not too different from any other smart card. The card is tamper resistant and comes with a wallet which has an LCD display and a small keyboard to make transactions with. The wallet can be integrated with other devices such as a cell phone. A transaction occurs as follows: customer and merchant cards exchange identity numbers and prove that they are valid by signing random numbers provided by each other. The algorithm for the signature may vary. The merchant card now signs a request. The customer card acknowledges the request and confirms that the money is available. The merchant acknowledges this message and the money is deducted. At this time, the transaction is moved from the list of pending transactions to the list of completed transactions on the customer card. The merchant card adds the money to its account and marks the transaction as completed. If any problem occurs, it is entered into the exception log.

The payment scheme is off-line, cash based, not easily traceable, and involves trusted certificate authorities.

The transaction protocol is secure and atomic, but not anonymous or private.

A large part of the cost lies in the sophisticated silicon capable of encryption and signing. The client is thick.

1.4.12 VisaCash

Visa has also developed its own version of a cash-like smart card. The most visible use of this system was at the 1996 Olympic Games held in Atlanta. Unfortunately, the cards were accepted only at some locations, and the experiment was not as successful as it might have been otherwise.

At the time of this writing, details of the payment and transaction model are not available for analysis. However, this product is expected to have the same advantages and disadvantages as the other smart card schemes discussed above.

1.4.13 Millicent

Millicent, a protocol designed at DEC [51], emphasizes low overhead cost. The protocol works as follows: a piece of "scrip" represents an account the customer has established with a vendor. At any given time, a vendor has outstanding scrip (open accounts) with the recently active customers. The balance of the account is kept as the value of the scrip. When a customer makes a purchase with a scrip, the cost of the purchase is deducted from the scrip's value and a new scrip (with the new value/account balance) is returned to the customer as change. After completing a series of transactions, the customer can "cash in" the remaining value of the scrip (close the account). Brokers serve as accounting intermediaries between customers and vendors. Customers enter into long-term relationships with brokers, in much the same way as they would enter into an agreement with a bank, credit card company, or Internet service provider. Brokers buy and sell vendor scrip as a service to customers and vendors. Broker scrip serves as a common currency for customers to use when buying vendor scrip, and for vendors to give as a refund for unspent scrip.

The payment scheme is off-line, debit based, traceable, places trust on the merchants, and is non-counterfeitable.

The transaction protocol is secure, but not atomic, anonymous, or private. The overhead cost is very low. The client is thin.

1.4.14 MicroMint

Most electronic money systems use public key cryptography. This has the following disadvantages: it involves export restrictions by the US government, and it involves heavy computation on the client side, making the client thick. MicroMint [60] transfers this computation to the government mint and uses only a hash function at the customer/merchant end. A coin is a set of numbers that hash to the same value. Finding this set is extremely computationally intensive and can only be done by a mint (a government supercomputer, in this case). The coins last only for a short period of time. The validity check of a coin is some boolean condition on the set of numbers that changes periodically. The mint also keeps an audit trail of the coins to detect double spending.
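A toy version of the minting process can be sketched by searching for collisions of a deliberately truncated hash; the real MicroMint scheme uses far larger parameters and k-way collisions, so this is illustration only:

```python
import hashlib
from collections import defaultdict

def short_hash(x: int, bits: int = 16) -> int:
    # Truncated hash: small enough that collisions are findable by brute force.
    h = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(h[:4], "big") % (1 << bits)

def mint(k: int = 2, bits: int = 16) -> list[int]:
    # The "mint" grinds through candidates until k of them share a hash value.
    # This search is the expensive step; verification below is cheap.
    buckets = defaultdict(list)
    x = 0
    while True:
        v = short_hash(x, bits)
        buckets[v].append(x)
        if len(buckets[v]) == k:
            return buckets[v]  # this colliding set is a coin
        x += 1

coin = mint()
# Anyone can verify a coin cheaply: all its numbers hash to the same value.
assert len({short_hash(x) for x in coin}) == 1
```

The asymmetry between the brute-force search and the single-hash verification is what lets a supercomputer mint coins that thin clients can still check.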

This is still a nascent solution. A number of details, such as how the audit trail is maintained, are not discussed in the paper. The payment scheme itself is very different from all existing solutions and is very interesting. It seems unlikely that this scheme will be implemented soon, since it will require blessings from the government. However, it may in future become the dominant solution.

1.4.15 PayWord

PayWord [14, 57] uses the principle that a cryptographically secure hash function is practically impossible to invert. The user creates a chain of hash values such that each value is the hash function applied to the next value. Once the chain is created, the first value is given to the ultimate recipient. To spend each coin, the user simply reveals the next value in the chain. The recipient can check the hash function but cannot generate this next value. This is particularly useful in micro-payment schemes with the same merchant.
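The chain construction and spending check can be sketched as follows (SHA-256 stands in for the unspecified hash function, and the seed is hypothetical):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, length: int) -> list[bytes]:
    # Build the chain backwards from a secret seed, then reverse it so that
    # chain[i] == h(chain[i + 1]); chain[0] is the anchor given to the merchant.
    chain = [seed]
    for _ in range(length):
        chain.append(h(chain[-1]))
    chain.reverse()
    return chain

chain = make_chain(b"customer secret seed", 10)
anchor = chain[0]  # handed to the merchant up front

# Spending coin i means revealing chain[i]; the merchant only checks that
# the new value hashes back to the previously revealed one.
last = anchor
for coin in chain[1:4]:  # spend three coins
    assert h(coin) == last
    last = coin
```

Only the customer, who knows the seed, can produce the next preimage in the chain, which is why the merchant cannot mint coins on the customer's behalf.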

This proposed solution also has many gaps to be filled, and not much can be said about it till all the details are ironed out. However, it is an interesting payment option and may in future be the way micro-payments are made.

1.4.16 NetBill

NetBill, a protocol designed at Carnegie Mellon University [64], works as follows: the customer asks the merchant for the price of a product. The merchant responds with a price and the customer accepts it. Now, the merchant sends the customer goods encrypted by some key K. The customer sends the merchant an electronic payment order (EPO) with a digitally signed value for the 3-tuple <price, cryptographic checksum of encrypted goods, timeout>. The merchant countersigns the EPO and sends it to the NetBill server along with a signed copy of the key K. The NetBill server checks both signatures on the EPO, checks if the customer has enough funds, stores a copy of the cryptographic checksum of the encrypted goods and the key K, and sends a signed receipt with the key K to the merchant. The merchant forwards this key to the customer.
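The message flow can be sketched as below. This is a simplification under stated assumptions: signatures are simulated with HMACs under keys known to the toy NetBill server, and the goods "encryption" is a placeholder XOR; real NetBill uses proper digital signatures and ciphers, and all names here are illustrative:

```python
import hashlib, hmac, os

K_CUST, K_MERCH = os.urandom(16), os.urandom(16)   # toy signing keys
K_GOODS = os.urandom(16)                           # merchant's key K

def sign(key, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

# Merchant sends the goods encrypted under K (XOR stands in for a cipher).
goods = b"article-text"
encrypted = bytes(g ^ K_GOODS[i % 16] for i, g in enumerate(goods))

# Customer sends a signed EPO = <price, checksum of encrypted goods, timeout>.
checksum = hashlib.sha256(encrypted).hexdigest()
epo = f"price=5;checksum={checksum};timeout=60".encode()
epo_sig = sign(K_CUST, epo)

# Merchant countersigns the EPO and forwards it with the key K.
eepo_sig = sign(K_MERCH, epo + epo_sig)

# NetBill server verifies both signatures before releasing K.
assert hmac.compare_digest(epo_sig, sign(K_CUST, epo))
assert hmac.compare_digest(eepo_sig, sign(K_MERCH, epo + epo_sig))

# Customer decrypts with the released key K.
decrypted = bytes(c ^ K_GOODS[i % 16] for i, c in enumerate(encrypted))
assert decrypted == goods
```

The signed checksum is what makes the exchange atomic in spirit: the server releases K only against a payment order that commits to exactly the encrypted goods the customer already holds.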

The payment scheme is online, debit based, traceable, places trust on the NetBill server, and is non-counterfeitable.

The transaction protocol is secure and atomic, but not anonymous or private. All transactions are routed via a central processor resulting in scalability issues, latency, and additional cost. The client is thick.

1.4.17 OpenMarket

This is a transaction protocol with an expandable, secure authentication protocol for the customer. The web server starts by getting the email id of the customer and extracting the IP address of the machine where the request was generated. If this is not enough to establish identity, the server proceeds to ask for higher levels of authentication. The customer is expected to have sophisticated encryption software to complete these authentication procedures. Once identity is established, the data is sent to the IP address that has been authenticated.

This transaction protocol is secure and atomic, but not anonymous or private.

The overhead cost can be high, depending on what level of secure authentication is chosen. Parts of the protocol that involve the payment transfer are not described in sufficient detail.

1.4.18 EDI

Electronic Data Interchange (EDI) [49, 32, 16] has two standards: ANSI X12, used by the Department of Defense, and EDIFACT, supported by the United Nations. The standards define simple electronic formats for every major business document (e.g., invoice, request for quote). Most EDI transactions are not intended to handle money flow, which is done through other electronic payments or through checks. These standards are aimed at making business transactions interoperable and compatible.

This transaction protocol is far too expensive for business-to-consumer transactions, and is too expensive even for small businesses to participate in.

1.5 Problem Statement

With so many solutions present, it would seem that yet another solution is a disaster waiting to happen. However, in this section, we make the case for one. None of the solutions presented above has gained mass popularity, or been able to fill the void in electronic commerce. The electronic commerce world, however, is clamoring for a solution. What then is missing in the above solutions?

Each of the solutions presented in the previous section is a combination of a payment mechanism and a transaction protocol. This has led to a deployment issue.

The merchants want to wait and see which payment options are more popular before spending large sums of money in implementing a specific transaction protocol, or making all the changes necessary to incorporate it into their present system. The customers, as is clear from the VisaCash experiment, do not wish to buy into any system that is not already widely deployed. This is a classic chicken and egg problem.

It is our belief that if there were one transaction protocol that accepted most present and future forms of payment, the merchants would readily deploy it. Accepting a new form of payment would then involve a small upgrade and should be much less expensive. Merchants will soon start accepting more forms of payment. This will result in customers freely using electronic payment schemes of their choice. The waiting cycle is now broken, and whether a particular payment scheme gains more popularity than the others becomes a less important question.

What should such a general transaction protocol look like? What properties must such a general transaction protocol have? Will one protocol fit the bill for all types of transactions?

1.5.1 Properties of a General Transaction Protocol

We believe [72, 74] that, in general, security and atomicity are required for any electronic commerce protocol to be viable. Breach of security or lack of atomicity can lead to loss of money, duplication of money, or transfer of money to a third party.

This may have disastrous financial consequences for an individual or an organization, especially if the amount being transferred is large. It is, therefore, very important to ensure security and atomicity of transactions. Privacy, anonymity, and low overhead cost, on the other hand, are desirable features. However, these are the very features for which the customer may choose a certain payment scheme. They cannot, therefore, be ignored. The thickness of the client is a more difficult question. We have taken the view that since most electronic commerce transactions are going to be initiated at a computer or with a smart card, the ability to encrypt or decrypt may be assumed.

1.5.2 Different Types of Transactions

One single protocol accepting all different types of payment options would still not solve all our problems. Special types of transactions, such as auctions, stock market transactions, and commodity market transactions, would require different protocols.

These protocols must also accept different payment options and have the properties discussed above, so as to avoid the chicken and egg problem once again. Additionally, there exists no mechanism to ensure timely delivery of goods on the net. One of the advantages of trading information on the network is the quick delivery time. In the non-electronic world, customers are willing to pay for fast delivery of goods. For such a delivery system to be in place, the time of actual delivery must be verifiable. Such a system will enable refunds and compensations for late deliveries. If a transaction protocol could incorporate timely delivery, then stock market quotes, news updates, and many other products could be delivered with a guarantee on the time of delivery.

In this thesis, we address the design of these different protocols and verification of their properties.

1.6 Summary

In this chapter, we briefly looked at various technologies that enable e-commerce and the solutions available or proposed. We examined the features present and missing in these payment and transaction protocols, and established a need for various transaction protocols that are compatible with a wide range of payment schemes.

In the next chapter, we describe various methods of ensuring that a designed protocol is secure. Many protocols believed to be secure have been shown to be vulnerable to attacks. It is, therefore, very important to understand the techniques that are available to establish security and to choose the right technique for our purposes.

CHAPTER 2

A FRAMEWORK FOR VERIFYING PROTOCOL SECURITY

Of all the protocol properties mentioned in Chapter 1, security is the trickiest.

There is no good news about security. At best, there is the lack of bad news. Protocols proven to be secure have subsequently been broken. So why are we dedicating a chapter to proving security?

While it may be impossible to be certain that a system is secure, it is important to be assured, as much as possible, that the system cannot be compromised. Such assurance may come from standards compliance, testing known attacks, and/or formal verification. In this chapter, we talk about these techniques briefly and describe the methods we chose in greater detail.

2.1 Standards Compliance

There exist various security standards that specify the level of security of a system. One of the most common is the Federal Information Processing Standards (or FIPS) issued by the National Institute of Standards and Technology (NIST). There are various FIPS Publications, each for a certain purpose. Many of them relate to security and cryptography. The most relevant are FIPS 140-1, which specifies security requirements of cryptographic modules, and FIPS 800-2, which specifies the standards for public key cryptography.

In this thesis, we have not specified the hardware platform. Hence, we are not concerned with physical security standards. However, we have a formal model for our software security (described below). Hence, we have level 4 (highest) compliance with respect to the software. Also, all the cryptographic algorithms used in this thesis are FIPS approved.

While compliance only indicates security, many non-compliant solutions could be more secure than a fully compliant solution. This is because it is very difficult (and probably impossible) to define levels of security. While some guidelines may be spelt out, a solution may have security holes in parts of the solution that are not covered by the guidelines.

2.2 Challenges and Known Attacks

One of the more popular security checks involves open challenges. The messages from dummy transactions are placed in the public domain for people to decipher. Sometimes a dummy merchant is provided with all messages logged, so that the analyst can also use known-text attacks. This is an exceptionally good method to measure the strength of an encryption algorithm and even to expose failings in the logic of message transfer. Various security experts try to break into the system within the smallest possible time. Due to lack of resources (the required hardware), we have not conducted this test.

2.3 Security Reviews

The most common method of checking for security is security reviews conducted both at the design stage and after implementation. At the design stage, our protocol has been reviewed by many scientists: professors and students within our department, external reviewers of our publications, and scholars who attended various presentations we made at conferences and research labs. Our protocol has been refined through the process of these multiple reviews.

2.3.1 Formal Verification

Formal verification is one of the best known methods to check for protocol failures.

The first stage of formal verification is formal modeling of the system. Such a formal model is what is required for level 4 software security as per FIPS 140-1.

Cryptographic protocols, if designed improperly, may have flaws that make them vulnerable to message modification attacks or sometimes even eavesdropping attacks.

Formal verification approaches have been suggested to detect protocol flaws. Protocol flaws depend on the logic of message exchange and content, and should be distinguished from weaknesses in cryptosystems and similar mechanisms whose strength is judged in the context of computational complexity.

One can take either an analysis or a synthesis approach to designing secure protocols. The analysis approach does not constrain the original proposed protocol design, but uses formal methods to analyze the design, and either prove it correct or search for protocol flaws. The synthesis approach focuses on identifying design principles and techniques [8] which, if followed, guarantee a priori the security of the resulting protocol.

While it is important to keep some of these guidelines in mind while designing protocols, strictly adhering to and relying on the design principles specified may prove pointless. The reasons for this include: (a) security goals not foreseen by the design approach, (b) inconsistency of some unforeseen security goals with the design constraints specified, (c) inflexibility of the design constraints leading to inefficiency of the protocol, and (d) insufficiently precise design constraints resulting in unintended security breaches.

Given all these considerations, we have chosen to formally verify security and other properties of our protocols.

2.4 Our Approach for Formal Verification

Formal verification is the process of defining the initial state of the system, tracing the state through the transaction, and checking each state for security. The two places where a security proof can go wrong are: (a) in defining the initial state of the system, and (b) in defining the condition for a state to be secure. We believe that we have taken sustained care in defining both of the above. These definitions are clearly stated in the following sections.

The logic we use [80] for state transitions is an extension of the well-known BAN logic and is developed below. In this chapter, we illustrate our formal methods on the well-known NetBill protocol developed at Carnegie Mellon University.

2.4.1 The BAN Logic

The use of formal methods to analyze the correctness of cryptographic protocols became prevalent with the development of the BAN logic [22] in 1989. Since then, several papers have been published reporting problems with the BAN logic [52, 66, 82] and several others proposing BAN-style logics that overcome many of these limitations [38, 83]. The logic we use is based on a semantics [20] of a BAN-style logic [22]. In this section, we describe the BAN logic as presented in [20]: we state the language (or syntax), the axioms, the model, and the semantics of the logic, which are proven to be sound [20]. In the next section, we extend the semantics of the BAN logic to model a powerful intruder.

The Language

We distinguish between the following sorts: Principal, Key, Message, and Formula. Here, Principal refers to the parties involved in a transaction, Key to the cryptographic keys used, Message to the messages exchanged between the parties, and Formula to expressions built from the other sorts and some logical and word operators. We have the traditional logical operators: ∧, ∨, ¬, →, and the identity operator =.

Furthermore, we have the following word operators:

• (*,*) : Message × Message → Message

(an associative, commutative, idempotent operator for message joining; thus, if a message Z contains two message components X and Y, Z = (X, Y))

• believes : Principal × Formula → Formula

• once_said : Principal × Message → Formula

• sees : Principal × Message → Formula

• possesses : Principal × Key → Formula

• public_key_of : Key × Principal → Formula

• private_key_of : Key × Principal → Formula

• [*]* : Message × Key → Message

(for encryption; intuitively, [X]_k denotes X encrypted by k; for readability, brackets are optionally used to enclose the message)

• rightly_believes : Principal × Formula → Formula

Here, P rightly_believes φ := (P believes φ) ∧ φ.

The word operators have higher precedence than the traditional logical operators; e.g., P believes φ ∧ ψ must be interpreted as (P believes φ) ∧ ψ.

The Model

The environment consists of a finite collection of principals. The local state of a principal P is defined as the tuple (B_P, O_P, S_P, K_P), with the following intuitive interpretation:

• B_P, the set of formulas that P currently believes;

• O_P, the set of (sub-)messages P once said;

• S_P, the set of messages that P has seen so far;

• K_P, the set of keys P possesses.

Definition 1 A global state is a mapping from principals to local states, one for each principal in the environment. The unqualified term "state" will, from now on, mean a global state.

The axiomatization of the logic and its semantics can be found in [20]. It has been proven that the logic is sound with respect to the given semantics [20].

In the next section, some terms are defined and then used in a theorem that is later used to analyze e-commerce protocols for security.

Some Definitions and an Important Theorem

In this section, some terms are formally defined to express the properties of and the relationships between the system state and the protocol actions. These definitions are then used to develop and express theorems that are useful in analyzing protocols.

A generic protocol step consists of a message (X) being sent by one party (P) to another (Q) and is expressed as P → Q : X.

Definition 2 The logic translation of an action a, T(a), is defined as:

T(P → Q : X) := (P once_said X ∧ Q sees X ∧ I sees X) (here, I refers to the intruder)

Intuitively, T(a) is the minimal set of predicates required to compute the transition from a state to a new state on execution of a. The new state is given by the set of predicates: {φ : A ∪ T(a) ⊢ φ}

Definition 3 For a collection of predicates A and a protocol step a = P → Q : X, the predicate "A allows a" is defined recursively with respect to the structure of message X:

A allows P → Q : X := for some S: A allows P → S : X
A allows P → Q : [X]_k, k̄ ∉ K_P := A ⊢ P sees [X]_k
A allows P → Q : (X, Y) := A allows P → Q : X ∧ A allows P → Q : Y
A allows P → Q : φ := A ⊢ P believes φ
A allows P → Q : X := True (all other cases)

Intuitively, A allows a means that action a is a valid action in any state where A holds. An allowed action is one where the message sent is built from any combination of seen message components, known keys, and plain text.
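The recursive structure of Definition 3 can be mirrored in code. The following Python sketch uses our own encoding of messages as nested tuples; it is an illustration of the recursion, not part of the formal logic:

```python
# Sketch of the "allows" check from Definition 3. Messages are nested
# Python tuples: ('join', X, Y) for message joining, ('enc', X, k) for
# X encrypted under k, ('formula', phi) for a logical formula, and any
# other value for plain text. The encoding is ours, for illustration.

def allows(state, sender, msg):
    """state: dict mapping principals to 'sees'/'believes'/'keys' sets."""
    p = state[sender]
    kind = msg[0] if isinstance(msg, tuple) else None
    if kind == 'join':                       # (X, Y): both parts allowed
        return allows(state, sender, msg[1]) and allows(state, sender, msg[2])
    if kind == 'enc':                        # [X]_k
        _, inner, key = msg
        if key in p['keys']:                 # can re-create the ciphertext
            return allows(state, sender, inner)
        return msg in p['sees']              # else must have seen it whole
    if kind == 'formula':                    # may only assert beliefs
        return msg[1] in p['believes']
    return True                              # plain text: always allowed

# An intruder who never saw [payment]_k and lacks k cannot send it:
state = {'I': {'sees': set(), 'believes': set(), 'keys': {'i_priv'}}}
assert not allows(state, 'I', ('enc', 'payment', 'k'))
assert allows(state, 'I', 'price request')
```

The base case for plain text captures the intuition that an intruder can always fabricate unprotected content; it is the encrypted and believed components that constrain what can be sent.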

Definition 4 Positive formulas are the least set such that:

φ is positive if ⊢ φ;
k private_key_of P is positive;
k public_key_of P is positive;
P believes φ is positive if φ is positive;
P sees X is positive;
P once_said X is positive;
φ ∧ ψ is positive if φ is positive and ψ is positive;
φ ∨ ψ is positive if φ is positive or ψ is positive;
(∀x : φ) is positive if φ[x ← u] is positive for all terms u of the appropriate kind not containing unbound variables.

A finite set of formulas F is positive if the logical AND of all the formulas in F is positive.

Intuitively, a positive formula is a formula such that if it is True in a state, it remains True after execution of any allowed action in that state. Note that formulas where a principal sees or once_said are always positive.

Definition 5 A specification of a protocol is given by two sets of formulas: A (the assumptions) and C (the conclusions). Protocol P satisfies such a specification (notation: {A}P{C}) when "A holds initially" guarantees that "C holds finally", i.e., after executing P.

Definition 6 The rectify operation, R, maps formulas to formulas. In particular, it maps formulas of the form "P believes φ" to "P rightly_believes φ". It is defined as follows:

R[P believes φ] = P rightly_believes φ
R[φ ∧ ψ] = R[φ] ∧ R[ψ]
R[φ ∨ ψ] = R[φ] ∨ R[ψ]
R[φ → ψ] = R[φ] → R[ψ]
R[∀x : φ] = (∀x : R[φ])
R[φ] = φ, all other cases

The rectify operation is important in being able to express statements such as Theo­ rem 1 below.

Theorem 1 If A ∪ T(a) is positive, A allows a, and A ∪ T(a) ⊢ C, then {R[A]} a {R[C]}. For a proof, see [20].

2.4.2 Notation

We use the following notations in this chapter, and throughout this thesis.

A stands for the customer, Alice. a stands for Alice's public key. ā stands for Alice's private key.

B stands for the merchant, Bob. b stands for Bob's public key. b̄ stands for Bob's private key.

I stands for the intruder. i stands for the intruder's public key. ī stands for the intruder's private key.

e and E are used to denote randomly generated encryption keys. ē and Ē are the corresponding decryption keys.

P → Q : [message]_p̄ stands for P sends "message" to Q signed with P's private key.

P → Q : [message]_q stands for P sends "message" to Q secured with Q's public key.

An expression of the form:

C = φ₁
∧ φ₂
∧ φ₃

refers to φ₁ ∧ φ₂ ∧ φ₃.

2.4.3 Analysis for Security

In this section, we build on the logic from the previous section and show how security of e-commerce protocols may be analyzed. The process of proving that a protocol is secure includes (a) modeling the system, (b) defining security with respect to the application in question, (c) modeling a powerful intruder, (d) defining the initial state of the system, and (e) analyzing the system as it changes in the course of the protocol. Each of these steps is discussed below.

E-Commerce System Model

In general, an e-commerce transaction protocol involves two types of parties: the customer and the merchant. The customer must have a certain amount of money that the customer is ready to pay in return for a product, P. The merchant must have the product, P, that the merchant is willing to sell in exchange for money/payment.

The transaction is completed when the customer receives the product P from the merchant and the merchant receives the correct payment from the customer. These parties may be physically located anywhere. Each party has a computer connected to an electronic network. The parties communicate by sending messages to each other over the network. A protocol involves a number of messages, exchanged between the parties in a certain order, that result in proper transaction completion.

Defining Security

One of the most difficult issues in proving protocol security is defining security. Everybody agrees that a breach of security is anything "bad" that can happen to a system. However, it is very difficult to list all possible "bad" things. Systems declared secure under the set of known security attacks are often found to be insecure under hitherto unknown or unforeseen attacks. However, one cannot prove something that is not formally defined or stated. We define security as broadly as possible. This broad definition is then expected to hold even as newer and more subtle security attacks are devised.

For an electronic commerce system, we define security as follows:

1. No intruder may have the private key, ā, of the customer, A, at any time during or after the transaction.

2. No intruder may have the private key, b̄, of the merchant, B, at any time during or after the transaction.

3. No intruder may have any part of the payment being made towards the product at any time during or after the transaction.

4. No intruder may have the product being transacted at any time during or after the transaction.

Formally, if I denotes the intruder, we define security, C, as follows:

C = ā ∉ S_I
∧ b̄ ∉ S_I
∧ payment ∉ S_I
∧ product ∉ S_I

The Intruder Model

In order to show that there is no breach of security, it is important to know how powerful the intruder is and what types of intruder actions are possible. Again, we try to be very broad in modeling an intruder and allow all possible reasonable actions. We believe that the model of the intruder specified here covers known as well as future types of security attacks (passive and active).

To the best of our knowledge, an intruder has never been modeled using formal logic. The reason is simple: in the past, formal verification has been used to prove correctness of security protocols (e.g., authentication, key distribution, etc.). The proof of correctness of these protocols is the same as the proof of security. Only recently have formal proofs been extended to other protocols where the object/goal of the protocol is not security. However, in these protocols, security is an essential property. For example, electronic auction [71] protocols are not security protocols in that the object (exchange goods for payment) is not security related. However, security is an important, required property, since one must ensure that, apart from the exchange of goods for payment, there is no replication of goods/payment or false claim for non-receipt of goods/payment. Unlike security protocols, these other protocols can be shown to be secure only by modeling an intruder. Technically, it may be argued that the distinction between the goal and the properties of a protocol is not necessary, and that the security property may be added to the goal/object of the protocol with a conjunction. However, such an approach leads to a cumbersome and long definition of the protocol's goal and can result in many errors in the proof process.

One alternative approach is to use a theorem prover that models an intruder [37].

However, many theorem provers are not based on formally verified models. Therefore, their assumptions need to be checked for correctness, especially with respect to the application being analyzed.

Due to the lack of extensive work on formal proofs of security for various protocols (such as e-commerce protocols), the logic for verification has not incorporated a model of an intruder of any type. We describe below an intruder by enumerating all possible intruder actions. The intruder described here is capable of all actions modeled in theorem provers [37, 56] and is, therefore, as powerful as or more powerful than other intruders modeled in the literature. We first describe the actions of a passive intruder and then those of an active intruder.

Possible passive actions of an intruder include:

1. Interrupt message delivery, though not indefinitely. Hence, a message sent repeatedly would eventually reach the recipient.

2. Intercept messages.

3. Break up readable (plain text or decrypted) messages into components.

4. Decrypt messages (and message components) for which the decryption key is known to the intruder.

5. Store messages and message components derived from repeated decryption and breaking up.

Note that loss or corruption of a message in the network can simply be modeled as message interruption by an intruder (in this case, the intruder would be the network itself).

Possible active actions of an intruder include:

1. Construct messages from:

• stored messages and message components,

• keys known to the intruder,

• plain text.

2. Introduce constructed messages into the network.

Note that replay attacks are messages constructed from stored messages and are, thus, included in the above actions.
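Passive actions 2 through 5 amount to computing a closure of everything derivable from the intercepted traffic. A small sketch, with our own tuple encoding of messages, illustrates why encryption under an unknown key keeps the payload out of S_I:

```python
# Sketch of the passive intruder's knowledge set S_I: starting from
# intercepted messages, repeatedly break joins apart and decrypt
# whatever the known keys open. Encoding is ours for illustration:
# ('join', X, Y) for joined messages, ('enc', X, k) for encryption.

def closure(seen, keys):
    S = set(seen)
    changed = True
    while changed:
        changed = False
        for m in list(S):
            parts = []
            if isinstance(m, tuple) and m[0] == 'join':
                parts = [m[1], m[2]]                 # break up components
            elif isinstance(m, tuple) and m[0] == 'enc' and m[2] in keys:
                parts = [m[1]]                       # decrypt with known key
            for p in parts:
                if p not in S:
                    S.add(p)
                    changed = True
    return S

# The intruder intercepts goods encrypted under an unknown key K,
# joined with a plaintext invoice number:
S_I = closure({('join', ('enc', 'product', 'K'), 'invoice-42')}, keys={'i_priv'})
assert 'invoice-42' in S_I          # plaintext component is exposed
assert 'product' not in S_I         # but the product itself is not
```

Iterating to a fixed point captures "repeated decryption and breaking up" from action 5: whatever the intruder can ever extract passively is already in this closure.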

At the beginning of the first run of transactions, the intruder has no "inside" information. The intruder has no information that is not available to the public (with the possible exception of one's own private keys). Formally, if I denotes the intruder and B is the merchant in the protocol,

B_I = {φ : ⊢ φ} ∪ {b public_key_of B, i public_key_of I, ī private_key_of I};

O_I = ∅;

S_I = ∅;

K_I = {ī, b, i};

Initial State of the System

The state of the system at the start of the first instance of the protocol, A, is given below. If there is more than one customer, merchant, or intruder in a transaction, all customers', merchants', and intruders' initial states are analogous to those of the customer (A), merchant (B), and intruder (I), respectively, as shown below.

B_A = {φ : ⊢ φ} ∪ {b public_key_of B, i public_key_of I, a public_key_of A, ā private_key_of A};

B_B = {φ : ⊢ φ} ∪ {b public_key_of B, i public_key_of I, b̄ private_key_of B};

B_I = {φ : ⊢ φ} ∪ {b public_key_of B, i public_key_of I, ī private_key_of I};

O_A = ∅; O_B = ∅; O_I = ∅;

S_A = ∅; S_B = ∅; S_I = ∅;

K_A = {ā, a, b, i};

K_B = {b̄, b, i};

K_I = {ī, b, i};

Note that all participants (including the intruder) know only tautologies, their own private keys, and the public keys of those who chose to reveal them to the public (here, the merchants and, optionally, the customers/intruders). Also, all beliefs held are true. Therefore, R[A] is the same as A with the believes operator replaced by rightly_believes. None of the participants have said or heard anything.

Analyzing the Security Under Passive Attacks

During passive attacks, the intruder cannot introduce any messages into the network. However, the intruder may interrupt messages and prevent delivery temporarily. Therefore, it is necessary to show that the system is secure after any step of the protocol (if a protocol terminates on interruption) or after any number of repetitions of a step (if the protocol step is retried until the intruder is no longer interrupting).

Let the protocol a consist of steps a₁, a₂, …, a_n. If the protocol terminates on an interruption, we must show:

∀ i ∈ 1..n : {R[A]} a₁; a₂; …; a_i {R[C]}.

In other words, the protocol is secure after each step. If a protocol step is repeated till an acknowledgement is received by the sender, we must further show:

∀ i ∈ 1..n : {R[C]} a_i {R[C]}.

In other words, any number of repetitions of a step still leaves the system in a secure state. It is clear from the above two statements that the protocol has to be analyzed n times if there are n steps in the protocol. However, for any step i of the protocol, how does one analyze the protocol? Notice that both statements above specify an initial state (A or C), a required final state (C), and one or more protocol steps that occur between the initial and final state. From Theorem 1, we know that {R[A]} a {R[C]} if:

∧ A ∪ T(a) is positive
∧ A allows a
∧ A ∪ T(a) ⊢ C

We use the definitions for T(a), positive, and allows from Section 2.4.1 to show the above three conjuncts to be true.
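The per-step obligation can also be mirrored operationally: replay the protocol trace, let the intruder see every message, and check the security predicate C after each prefix. The sketch below is a toy check with illustrative message contents, not a substitute for the formal proof (in particular, it does not decompose messages):

```python
# Operational mirror of the passive-attack analysis: after every
# prefix a_1..a_i of the protocol, the intruder's seen-set S_I must
# still satisfy C (no private keys, payment, or product in the clear).
# Message encoding and step contents are illustrative only.

SECRET = {'a_priv', 'b_priv', 'payment', 'product'}

def C(S_I):
    """Security predicate: no secret appears unencrypted in S_I."""
    return not (SECRET & S_I)

steps = [                                    # NetBill-like trace
    'price request',
    'price quote',
    'goods request',
    ('enc', 'product', 'K'),                 # goods under key K
    ('enc', 'payment', 'b_pub'),             # payment under B's public key
]

S_I = set()
for i, msg in enumerate(steps, 1):
    S_I.add(msg)                             # intruder sees every message
    assert C(S_I), f"security violated after step {i}"
```

Because the sensitive components travel only inside encryptions, every prefix of the trace leaves C intact, which is exactly what the n per-step proofs establish formally.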

Analyzing the Security Under Active Attacks

During active attacks, the intruder introduces fake messages into the network.

However, these messages constitute meaningful attacks only if at least one of the participants can mistake the message for a proper protocol step. We thus prune the infinite space of actions by an active intruder to a finite set of actions that can be potentially harmful.

Formally, if step a_j constitutes P → Q : X, intruder action i_j must be of the form I → Q : x. Here, x is distinct from X, but of the same form, for it to be recorded as the protocol message content instead of X.

In order to show that a protocol is safe under active attacks, we must show that an intruder cannot send a misleading/harmful message at any stage of the protocol. Formally, we must show:

∀ j ∈ 1..n : A ∪ T(a₁, …, a_j) not allows i_j.

We use the definition of allows from Section 2.4.1 to show that intruder actions of the above type are not valid (or possible).

2.4.4 Illustrating the Framework with a Formal Proof of Security

We choose a well-known e-commerce protocol, NetBill [85], to illustrate our framework. In this section, we present the proof of security for Step 1 and Step 5 of this protocol to illustrate how our framework can be applied to prove the security of a given protocol. We give a brief sketch of the protocol below. We use the notation "X → Y : Msg" to indicate that X sends the specified message Msg to Y. The parties involved in the protocol are the customer C, the merchant M, and the NetBill server N. The NetBill protocol consists of the following steps:

1. C → M : Price request

2. M → C : Price quote

3. C → M : Goods request

4. M → C : Goods, encrypted with key K

5. C → M : Signed Electronic Payment Order (EPO)

6. M → N : Endorsed EPO (eEPO), including K

7. N → M : Signed eEPO, including K

8. M → C : Signed eEPO, including K

¹Note that we check for intruder action after the step a_j, as we would not like the intruder to eavesdrop on a_j, interrupt it, and then send out i_j.

Analyzing the Security Under Passive Attacks

The proof of security after Step 1 is given below. This is the first stage of the proof, where only passive attacks are considered. In the next section, we show how active attacks are incorporated into the proof. The proofs in this thesis are written in a style meant to increase readability. First, the outline of the proof is given. Wherever the transition from one step to the next in the outline is unclear, a proof for the derived step is given separately.

Outline of Proof for Step 1

Prove: {A} a₁ {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, or {A} a₁ {C}.

a₁ : C → M : Price request

1.1. A ∪ T(a₁) is positive

1.2. A allows a₁

1.3. A ∪ T(a₁) ⊢ C

1.4. Q.E.D.

Proof for Step 1.1

P r o v e : A U 7”( a i) i s p o s itiv e

Proof sketch: We use Definition 4 to prove the above.

1.1. A U T'(ai) is positive

1.1.1. 'T{ax) '.= A C once_said Price request A M sees Price request A I sees Price request 1.1.2. T'(ax) is positive

1.1.3. A is positive

1.1.4. Q.E.D.

Proof for Step 1.2

P r o v e : A a llo w s T"{ax)

Proof sketch: We use Definition 3 to prove the above.

1 . 2 . A allows T'{ax)

1.2.1. A C believes Price request

1.2.2. Q.E.D.

Proof for Step 1.3

P r o v e : A U T ( a i ) H C

P roof s k e t c h : We show that in step ai no unwanted information is revealed to

the intruder.

56 1.3. ^ U 7 "(a i) I— C

1.3.1. 'T (ax) := A C once_said Price request A A f sees Price request A I sees Price request 1.3.2. New state is ^ U A Price request 6 O c A Price request G S m A Price request G S i 1.3.3. A — 0 S i

A p a y m e n t ^ Si A product 0 S i 1.3.4. Q.E.D.

Analyzing The Security Under Active Attacks

In this section, we analyze active intruder attacks. We must show that an intruder is not allowed to send a message that reads like a protocol message but is not identical to a protocol message. Since Step 1 is a request for price, any message that reads like a request for price has to be an exact replica of Step 1. Hence, the active-intruder part of the proof for Step 1 of the NetBill protocol is vacuously true.

For those messages where the content can be marginally changed to dupe the actual participants, we use Definition 3 to show that the intruder is not allowed to send the changed message. We illustrate this by proving security under active attacks for Step 5. We choose Step 5 since a) it falls in the middle of the protocol and b) it is an important step from the point of view of active attacks (it is the step where the customer actually places an order and we do not want any intruder to be able to place an order as if he were the customer).

Proof for Step 5

Assume: A ∪ T(a1, a2, ..., a4) ⊢ C

Under passive attacks, A ∪ T(a1, a2, ..., a5) ⊢ C

Prove: A ∪ T(a1, a2, ..., a5) not allows I5

Proof sketch: We assume that security has already been proved for Step 4 and that security under passive attacks has been proven for Step 5. We now use Definition 3 to prove security under active attacks.

5. a5 : I → M : (modified EPO)^(C's private key)

5.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P
 := A ⊢ P sees X^k

5.2. A ∪ T(a1, a2, ..., a5) allows I → M : (modified EPO)^(C's private key), C's private key ∉ K_I
 := A ∪ T(a1, a2, ..., a5) ⊢ I sees (modified EPO)^(C's private key)

5.3. From assumption, C's private key ∉ K_I

5.4. A ∪ T(a1, a2, ..., a5) allows I → M : (modified EPO)^(C's private key)
 := A ∪ T(a1, a2, ..., a5) ⊢ I sees (modified EPO)^(C's private key)

5.5. I not sees (modified EPO)^(C's private key)

5.6. Q.E.D.

Proof for Step 5.5

Prove: I not sees (modified EPO)^(C's private key)

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

5.5. I not sees (modified EPO)^(C's private key)

5.5.1. From Axiom Good key ensures utterer [20],
 ⊢ k private_key_of P ∧ R sees [X]^k → P once_said X

5.5.2. ⊢ k private_key_of C ∧ I sees (modified EPO)^k
 → C once_said (modified EPO)

5.5.3. We know, C not once_said (modified EPO)

5.5.4. Q.E.D.

2.5 Summary

In this chapter, we examined various methods for ensuring security of a system and chose the methods suited to our needs. We then presented a semantics for the

BAN logic and developed an extension of this logic to prove security of protocols. This extended logic can (and will, in future chapters) be used for proving other important properties, such as atomicity, anonymity, and privacy.

In the next chapter, we explain general two party transactions and develop a transaction protocol for such transactions that is independent of the payment scheme being used.

CHAPTER 3

GENERAL TWO-PARTY TRANSACTION PROTOCOL

The most common type of commerce transaction over a network is a business-to-consumer type transaction. This includes transactions where a person buys a book, a video, a CD, a dress, a pen, or anything else by ordering it over the web.

We all are aware of these transactions, have seen them being conducted, and most of us have even participated in such transactions. Business-to-consumer (or B2C) and business-to-business (or B2B) have become the buzzwords in the world of e-commerce. A business-to-consumer transaction is any commerce transaction involving a business and a common person. A business-to-business transaction is any commerce transaction between two businesses. The fundamental differences between these two types of transaction are few. In both, there is a seller and a buyer, there is a price to be set, a product description to be given, and money and goods to be exchanged. The technical differences are: (a) a business-to-business transaction may not require anonymity in certain situations, and (b) a certain amount of trust may be placed on the parties involved. This differentiation is becoming popular more due to the difference in the amount of money being transacted, and less due to the technical challenges posed by one as opposed to the other. That is, a company in the B2B

space has more potential for profit, and is tempted to tout this. Technically, since the differences are few, we treat these two spaces as a single category of transactions, and call it general two-party transactions.

In this chapter, we design a protocol [76] for general two-party transactions. Since it is a general protocol, we cannot make any strong assumptions about trust and anonymity. However, we point out how and where the protocol may be simplified if these assumptions are valid.

3.1 Existing Work

A number of systems [2, 3, 67, 51, 18, 40, 21] have been developed as solutions for two-party e-commerce. A number of these were discussed in Chapter 1. As discussed before, these solutions include both a payment scheme and a transaction protocol. As a result, their transaction protocol design is not suitable for other payment schemes. In Chapter 1, we discussed various properties required or desired in general e-commerce protocols. Since the existing protocols are specific to certain payment schemes, they do not have all the properties listed. This is evident from the list of properties of each protocol, detailed after a brief description of the protocol in Chapter 1. In this chapter, we design a transaction protocol that is logically independent of the payment scheme.

3.2 System Model

A general transaction protocol involves two parties: the customer and the merchant. The customer must have a certain amount of money, say M, that the customer

is ready to pay in return for a product, P. The merchant must have the product P

that the merchant is willing to sell for money M. The sale is completed when the customer receives the product P from the merchant and the merchant receives money

M from the customer.

In electronic commerce transactions, these parties may be physically located anywhere. Each party has a computer connected to an electronic network. The parties communicate by sending messages to each other over the network. We assume the existence of some acceptable electronic form of payment, such as a credit card number, an electronic coin withdrawn from a bank, or any other electronic entity that does not compromise the customer's anonymity or privacy.

3.3 Protocol Description

A transaction in electronic commerce involves a merchant, say Bob, and a customer, say Alice. The transaction can be broken into three stages. In the first stage, Bob advertises that he has a product he wishes to sell. In the second stage, the interested buyer (Alice) and Bob agree on a price. In the third stage, the actual exchange of goods and payment occurs. All this must occur electronically.

Our protocol is based on public key cryptography [61, 63]. Every person is issued a unique and universal identity (like the Social Security Number in the US). This universal identity constitutes a public and a private key. The public key is revealed to others and is used to trace a person, if necessary. It is also used to secure confidential messages sent to the person. The private key is used as a signature, for authentication, and to read secure messages. In addition, every person can generate any number of public and private key pairs that are distinct from the universal identity. Since these key pairs (or pseudo-identities) cannot be linked to the other pseudo-identities or to the universal identity of that person, they ensure anonymity.
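As a toy illustration of minting unlinkable pseudo-identities, the sketch below generates a fresh RSA-style key pair on each call. The tiny primes, the trial-division test, and the function names are illustrative assumptions of ours; a real system would use a cryptographic library with full-size keys.

```python
import math
import secrets

def _is_prime(n):
    """Trial-division primality test; fine for toy-sized numbers."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def _random_prime(lo=1000, hi=5000):
    """Pick a random small prime (toy-sized, not secure)."""
    while True:
        cand = lo + secrets.randbelow(hi - lo)
        if _is_prime(cand):
            return cand

def new_pseudo_identity(e=65537):
    """Return a fresh ((n, e), (n, d)) toy RSA key pair.

    Each call yields an unrelated pair, mirroring how a customer can
    mint a new pseudo-identity for every transaction."""
    while True:
        p, q = _random_prime(), _random_prime()
        phi = (p - 1) * (q - 1)
        if p != q and math.gcd(e, phi) == 1:
            return (p * q, e), (p * q, pow(e, -1, phi))

public, private = new_pseudo_identity()
n, e = public
_, d = private
m = 42                              # a small "message"
signature = pow(m, d, n)            # sign with the private exponent
assert pow(signature, e, n) == m    # anyone can verify with (n, e)
```

Because each pair is generated independently, nothing in one pseudo-identity links it to another or to the universal identity.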

A protocol for atomic, secure, private electronic transactions between an anonymous customer, Alice, and a merchant, Bob, is illustrated in Figure 3.1. Here Bob uses his universal identity and Alice uses a pseudo-identity.

(1) B → everybody : [product description, price]^(1/b)

(2) A → B : [[product description, price]^(1/a), a, payment^e]^b

(3) B → A : [[payment^e, product^E]^(1/b)]^a

(4) A → B : [[product^E, e]^(1/a)]^b

(5) B → A : [[e, E]^(1/b)]^a

Figure 3.1: Two-party transaction protocol between the merchant, Bob, and the customer, Alice. A superscript denotes encryption with (or, for a private key 1/x, signing by) that key; e and E are the payment and product encryption keys.

In Step 1 of the protocol, Bob advertises his product by broadcasting both the product description and the expected price. The advertisement is signed with Bob's private key, both for authentication by a prospective customer and for non-repudiation.

In Step 2, Alice responds to the advertisement by sending a copy of Bob's advertisement (for reference) signed with her private key (for authentication), her public key (for further communication), a signed value of the price (for non-repudiation), and also an encrypted payment. The payment is encrypted with encryption key e to prevent the situation where Bob maliciously claims her payment was lost or stolen in the network, but cashes it anyway. In such a situation, he could demand another payment, thus getting paid twice for the same good. With encrypted payments, he can only cash the single payment that Alice sends him the key for. She also secures the message with Bob's public key. This ensures that only Bob can read the message.

In Step 3, Bob acknowledges Alice's payment and also sends her the encrypted product. Bob encrypts the product to prevent Alice from absconding without paying for the product, or maliciously claiming that the product was lost or stolen in the network, and receiving two products for a single payment. He signs the encrypted product with his private key and secures the message with Alice's public key. This ensures that only Alice can get the encrypted product, and she can also verify that the message is indeed from Bob.

In Step 4, Alice acknowledges receiving the encrypted product and sends a signed copy of the payment decryption key. Since only Bob has the encrypted payment, only he can use this key to get the payment. Further, since the message is secure, only he can read this message. The encryption key is signed by Alice so that Bob can verify the source of the message.

In Step 5 of the protocol, Bob acknowledges receipt of the payment decryption key from Alice and also sends her a signed copy of the product decryption key. He further secures this message with Alice's public key so that only she can read the message.
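The five messages just described can be played through in code. The sketch below models encryption and signing as tagged tuples (no real cryptography), and all names are our own placeholders; the point is only the ordering: the payment key e first travels in Step 4, and the product key E only in Step 5.

```python
def enc(key, payload):
    """Model 'payload encrypted under key' as a tagged tuple."""
    return ("enc", key, payload)

def sign(party, payload):
    """Model 'payload signed with party's private key'."""
    return ("sig", party, payload)

def run():
    """Return the five messages of the two-party protocol as
    (sender, receiver, message) triples."""
    product, payment, e, E = "product", "payment", "key-e", "key-E"
    ad = sign("B", ("product description", "price"))             # (1)
    m2 = enc("b", (sign("A", ad), "a", enc(e, payment)))         # (2)
    m3 = enc("a", sign("B", (enc(e, payment), enc(E, product)))) # (3)
    m4 = enc("b", sign("A", (enc(E, product), e)))               # (4)
    m5 = enc("a", sign("B", (e, E)))                             # (5)
    return [("B", "all", ad), ("A", "B", m2), ("B", "A", m3),
            ("A", "B", m4), ("B", "A", m5)]

msgs = run()
# Bob learns e only from message 4; Alice learns E only from message 5.
assert msgs[3][2][2][2][1] == "key-e"
assert msgs[4][2][2][2] == ("key-e", "key-E")
```

Until message 4, Bob holds only payment^e; until message 5, Alice holds only product^E, which is the leverage each side keeps against the other.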

Goods of Non-Electronic Format

With some minor modifications, the protocol can be extended to non-electronically

formatted goods. In Step 3, instead of sending the encrypted product, Bob sends a

product description (or product identification number). Then, in Step 5, instead of

sending the decryption key for the product, Bob acknowledges receipt of the payment

encryption key and sends the product via some other medium (e.g., certified post).

3.3.1 Properties

The properties required or desired in any e-commerce protocol are: security, atomicity, anonymity, privacy, and low overhead cost. Electronic fraud is both easier and more likely. Therefore, it is important to assure the strictest security and complete atomicity to attract organizations and individuals to use this form of commerce. The aggregation of profiles of customers in recent years, based on electronic billing information, has led to a desire for privacy and anonymity. According to a survey reported by TIME magazine, these factors alone deter 40% of potential customers from engaging in electronic commerce.

All these properties are defined in Chapter 1. In Chapter 1, we also discussed another property, namely, client thickness. Until recently, it was considered important for a client to be thin. However, as computers become smaller and cheaper, this is becoming less of a consideration.

In this section, we give an intuitive feel for why our protocol is secure, atomic, private, anonymous, and incurs nominal overhead charges. The formal proof of the first four properties is given in the next section.

Security

On electronic networks, an intruder may prevent message delivery, forge messages, or replay old messages as if they are new. In addition, messages may be lost or corrupted in the network. Since both the product and the payment are encrypted, if a message with either the encrypted product or the encrypted payment is not delivered, the sender can re-send the message using a different encryption key. This only adds some additional computation cost for the sender. All other messages, if lost, corrupted, or interrupted, can just be resent without any additional cost to either party. Thus, loss, corruption, or interruption of messages cannot cause the transaction to abort, reach an inconsistent state, or result in a financial loss to the involved parties.

Since all messages are signed with the sender's private key, an intruder cannot forge a message. Hence, forgery does not pose a security threat to our protocol.

An intruder can try replaying old messages as if they are new. Let us first assume that the intruder replays the message in Step 1 from Bob. Alice responds with an encrypted payment. Bob, even if he receives this payment, responds with a note saying he is not selling the product anymore. The intruder cannot acknowledge the payment, since he cannot read and parse Alice's message. The transaction simply aborts. Let us now assume that the intruder responds to an advertisement by Bob with an old response from Alice. Bob now sends Alice an encrypted product which Alice must acknowledge. Since Alice has not initiated a new transaction with Bob, she ignores Bob's message even if she receives it. The intruder cannot acknowledge with an old acknowledgement either (we assume that Bob uses fresh encryption keys for each transaction). The transaction does not continue, and no product or payment exchanges hands. Let us next assume that the intruder tries to replay a message during a valid transaction between Bob and Alice. Step 3 and Step 4 include a copy of the encrypted product. As shown earlier, old messages do not have products encrypted with the same key and hence cannot be used. In Step 5, Bob includes a copy of the payment encryption key that the intruder cannot reproduce from old messages. Thus, the intruder cannot cause a transaction to occur, or tamper with an ongoing transaction between Bob and Alice, by just replaying old messages.
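The freshness assumption in the argument above (Bob uses fresh encryption keys for each transaction) is what defeats replayed acknowledgements. A minimal sketch of that bookkeeping, under our own naming, might look like this:

```python
import secrets

class MerchantSession:
    """Track the per-transaction key so stale (replayed) acks are
    rejected. Illustrative sketch, not part of the protocol itself."""

    def __init__(self):
        self.current_key = None

    def start_transaction(self):
        # A fresh random key for every transaction.
        self.current_key = secrets.token_hex(16)
        return self.current_key

    def accept_ack(self, key_in_ack):
        # A replayed acknowledgement carries a key from an old
        # transaction and therefore no longer matches.
        return key_in_ack == self.current_key

merchant = MerchantSession()
k1 = merchant.start_transaction()
k2 = merchant.start_transaction()   # a new transaction makes k1 stale
assert merchant.accept_ack(k2)
assert not merchant.accept_ack(k1)
```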

We conclude that our protocol is secure from all three security threats: message blocking, forgery, and replay attacks.

Atomicity

In the above protocol, Alice sends the payment decryption key, e, before receiving the product decryption key, E. Therefore, she cannot abscond with the product without paying for it. Bob, on the other hand, can cheat by not sending the product key in Step 5 or by sending the wrong product in Step 3. However, Bob's true identity is known. Alice also has the product advertisement, the acknowledgement of her payment, and the encrypted product. Therefore, she can prove Bob's wrongdoing in front of an arbitrator [29] and demand that the right product be delivered. This ensures that goods are delivered if and only if the payment is made. In other words, the protocol is atomic.

Privacy

It is commonly believed that even with the knowledge of the public key, a malicious intruder cannot compute the corresponding private key. Thus, the intruder cannot read messages secured with the recipient’s public key or forge messages signed with the sender’s private key. The intruder’s inability to read secured messages ensures privacy in our protocols.

Anonymity

In the above protocol, the merchant's true identity is revealed to the customer, Alice. Alice, however, can hide her identity from the merchant, Bob, and the rest of the world. To do so, Alice generates a new set of public and private keys (a pseudo-identity) for each transaction. Since these pseudonyms cannot be associated with the other pseudo-identities or with Alice's universal identity, anonymity is guaranteed.

Overhead Cost

The above protocol does not require any complex computation. Furthermore, there are only five messages exchanged between the customer and the merchant.

Therefore, the cost incurred is nominal [36]. Even such a nominal overhead cost can render this protocol unsuitable for transactions involving goods of very low cost. We believe that a single protocol cannot handle transactions involving goods of extremely low value as well as applications with other specific requirements. The proof (or counter-proof) is a future exercise.

3.4 Formal Verification of Properties

In this section, we formally prove the above properties for the general two-party transaction protocol presented above.

3.4.1 Initial State

The state of the system at the start of the protocol is given below:

O_A = ∅; O_B = ∅; O_I = ∅;

S_A = ∅; S_B = ∅; S_I = ∅;

B_A = {φ : ⊢ φ} ∪ {b public_key_of B, i public_key_of I, a public_key_of A, 1/a private_key_of A};

B_B = {φ : ⊢ φ} ∪ {b public_key_of B, i public_key_of I, 1/b private_key_of B};

B_I = {φ : ⊢ φ} ∪ {b public_key_of B, i public_key_of I, 1/i private_key_of I};

K_A = {1/a, a, b, i};

K_B = {1/b, b, i};

K_I = {1/i, b, i};

3.4.2 Properties

The properties we want to verify are security, anonymity, privacy, and atomicity.

These translate logically into:

Security: 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I (the intruder must not find out the private keys of the parties involved or get the payment or the product being transacted)

Anonymity: id ∉ S_I ∧ id ∉ S_B (the merchant or intruder must not find out the universal public key (id) assigned to the customer for identification)

Privacy: a ∉ S_I (the intruder must also not find out the public key (i.e., pseudonym) that the customer is using for the transaction)

Atomicity: (product ∈ S_A ∧ payment ∈ S_B) ∨ (product ∉ S_A ∧ payment ∉ S_B) ∨ (product^E ∈ S_A ∧ [payment^e]^(1/b) ∈ S_A) (the product and payment should have been exchanged, or neither the product nor the payment exchanged, or the customer must have proof that the encrypted product and the encrypted payment have been exchanged)
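These four predicates can be transcribed directly as set checks. The encoding below, which uses string tokens for keys and goods, is an illustrative sketch of the definitions above, not the semantics of the logic.

```python
def secure(S_I):
    """Intruder sees neither private key, the payment, nor the product."""
    return {"1/a", "1/b", "payment", "product"}.isdisjoint(S_I)

def anonymous(S_I, S_B):
    """Neither merchant nor intruder sees the customer's universal id."""
    return "id" not in S_I and "id" not in S_B

def private(S_I):
    """The intruder does not see the customer's pseudonym a."""
    return "a" not in S_I

def atomic(S_A, S_B):
    """One of the three atomicity disjuncts holds."""
    exchanged = "product" in S_A and "payment" in S_B
    neither = "product" not in S_A and "payment" not in S_B
    proof = "product^E" in S_A and "[payment^e]^(1/b)" in S_A
    return exchanged or neither or proof

# Initial state: all sets empty, so every property holds.
S_A, S_B, S_I = set(), set(), set()
assert secure(S_I) and anonymous(S_I, S_B) and private(S_I)
assert atomic(S_A, S_B)

# After Step 3 the customer holds the encrypted product and the signed
# payment acknowledgement: the third disjunct of atomicity applies.
S_A = {"product^E", "[payment^e]^(1/b)"}
assert atomic(S_A, S_B)
```

Checking these predicates after each protocol step is the informal analogue of the step-by-step verification carried out below.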

3.4.3 Verification

In this section, we first analyze the protocol in the presence of passive attacks to formally show that the above mentioned properties are ensured. Next, active attacks are included in the analysis. Protocol cost is not covered as part of the proofs.

Analysis Under Passive Attacks

Let A be the set of assumptions, or the initial state we start with (where all the beliefs of principals are true), P the protocol, and C the properties we want to prove.

Assuming only passive attacks, the protocol may still be interrupted by either the customer or the merchant failing to execute the next step. The protocol steps that are executed, however, may not be tampered with or forged. We must show that C holds after every properly executed step of the protocol. That is, if a_n is the nth step of the protocol, then we must show:

∀i ∈ {1, ..., n} : {A} a1; a2; ...; ai {C}.

We formally prove this below. In the next section, we incorporate active attacks into the proof.

Outline of Proof for Step 1

Prove: {A} a1 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure.

1. T(a1) : B → everybody : [product description, price]^(1/b)

1.1. A ∪ T(a1) is positive
1.2. A allows a1
1.3. A ∪ T(a1) ⊢ C
1.4. Q.E.D.

Proof for Step 1.1

1.1. A ∪ T(a1) is positive

Proof sketch: We use Definition 4 to prove the above.

1.1.1. T(a1) := A ∧ B once_said [product description, price]^(1/b) ∧ A sees [product description, price]^(1/b) ∧ I sees [product description, price]^(1/b)
1.1.2. T(a1) is positive
1.1.3. A is positive
1.1.4. Q.E.D.

Proof for Step 1.2

1.2. A allows T(a1)

Proof sketch: We use Definition 3 to prove the above.

1.2.1. A ⊢ B believes [product description, price]^(1/b)
1.2.2. Q.E.D.

Proof for Step 1.3

1.3. A ∪ T(a1) ⊢ C

Proof sketch: We show that in step a1 no unwanted information is revealed to the intruder.

1.3.1. T(a1) := A ∧ B once_said [product description, price]^(1/b) ∧ A sees [product description, price]^(1/b) ∧ I sees [product description, price]^(1/b)
1.3.2. New state is A ∪ T(a1), with [product description, price]^(1/b) ∈ O_B ∧ [product description, price]^(1/b) ∈ S_A ∧ [product description, price]^(1/b) ∈ S_I
1.3.3. A ⊢ 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ a ∉ S_I ∧ product ∉ S_A ∧ payment ∉ S_B
1.3.4. Q.E.D.

Outline of Proof for Step 2

Assume: A ∪ T(a1) ⊢ C

Prove: {C} a2 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a2 {C}.

2. T(a2) : A → B : [[product desc, price]^(1/a), a, payment^e]^b

2.1. C ∪ T(a2) is positive
2.2. C allows a2
2.3. C ∪ T(a2) ⊢ C
2.4. Q.E.D.

Proof for Step 2.1

2.1. C ∪ T(a2) is positive

Proof sketch: We use Definition 4 to prove the above.

2.1.1. T(a2) := C ∧ A once_said [[product desc, price]^(1/a), a, payment^e]^b ∧ B sees [[product desc, price]^(1/a), a, payment^e]^b ∧ I sees [[product desc, price]^(1/a), a, payment^e]^b
2.1.2. T(a2) is positive
2.1.3. C is positive
2.1.4. Q.E.D.

Proof for Step 2.2

2.2. C allows T(a2)

Proof sketch: We use Definition 3 to prove the above.

2.2.1. C ⊢ A believes [[product desc, price]^(1/a), a, payment^e]^b
2.2.2. Q.E.D.

Proof for Step 2.3

2.3. C ∪ T(a2) ⊢ C

Proof sketch: We show that in step a2 no unwanted information is revealed to the intruder.

2.3.1. T(a2) := C ∧ A once_said [[product desc, price]^(1/a), a, payment^e]^b ∧ B sees [[product desc, price]^(1/a), a, payment^e]^b ∧ I sees [[product desc, price]^(1/a), a, payment^e]^b
2.3.2. New state is C ∪ T(a2), with [[product desc, price]^(1/a), a, payment^e]^b ∈ O_A ∧ [[product desc, price]^(1/a), a, payment^e]^b ∈ S_B ∧ [[product desc, price]^(1/a), a, payment^e]^b ∈ S_I
2.3.3. A ⊢ 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ a ∉ S_I ∧ product ∉ S_A ∧ payment ∉ S_B
2.3.4. Q.E.D.

Outline of Proof for Step 3

Assume: A ∪ T(a1, a2) ⊢ C

Prove: {C} a3 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure.

3. T(a3) : B → A : [[payment^e, product^E]^(1/b)]^a

3.1. C ∪ T(a3) is positive
3.2. C allows a3
3.3. C ∪ T(a3) ⊢ C
3.4. Q.E.D.

Proof for Step 3.1

3.1. C ∪ T(a3) is positive

Proof sketch: We use Definition 4 to prove the above.

3.1.1. T(a3) := C ∧ B once_said [[payment^e, product^E]^(1/b)]^a ∧ A sees [[payment^e, product^E]^(1/b)]^a ∧ I sees [[payment^e, product^E]^(1/b)]^a
3.1.2. T(a3) is positive
3.1.3. C is positive
3.1.4. Q.E.D.

Proof for Step 3.2

3.2. C allows T(a3)

Proof sketch: We use Definition 3 to prove the above.

3.2.1. C ⊢ B believes [[payment^e, product^E]^(1/b)]^a
3.2.2. Q.E.D.

Proof for Step 3.3

3.3. C ∪ T(a3) ⊢ C

Proof sketch: We show that in step a3 no unwanted information is revealed to the intruder.

3.3.1. T(a3) := C ∧ B once_said [[payment^e, product^E]^(1/b)]^a ∧ A sees [[payment^e, product^E]^(1/b)]^a ∧ I sees [[payment^e, product^E]^(1/b)]^a
3.3.2. New state is C ∪ T(a3), with [[payment^e, product^E]^(1/b)]^a ∈ O_B ∧ [[payment^e, product^E]^(1/b)]^a ∈ S_A ∧ [[payment^e, product^E]^(1/b)]^a ∈ S_I
3.3.3. A ⊢ 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ a ∉ S_I ∧ product^E ∈ S_A ∧ [payment^e]^(1/b) ∈ S_A
3.3.4. Q.E.D.

Outline of Proof for Step 4

Assume: A ∪ T(a1, a2, a3) ⊢ C

Prove: {C} a4 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a4 {C}.

4. T(a4) : A → B : [[product^E, e]^(1/a)]^b

4.1. C ∪ T(a4) is positive
4.2. C allows a4
4.3. C ∪ T(a4) ⊢ C
4.4. Q.E.D.

Proof for Step 4.1

4.1. C ∪ T(a4) is positive

Proof sketch: We use Definition 4 to prove the above.

4.1.1. T(a4) := C ∧ A once_said [[product^E, e]^(1/a)]^b ∧ B sees [[product^E, e]^(1/a)]^b ∧ I sees [[product^E, e]^(1/a)]^b
4.1.2. T(a4) is positive
4.1.3. C is positive
4.1.4. Q.E.D.

Proof for Step 4.2

4.2. C allows T(a4)

Proof sketch: We use Definition 3 to prove the above.

4.2.1. C ⊢ A believes [[product^E, e]^(1/a)]^b
4.2.2. Q.E.D.

Proof for Step 4.3

4.3. C ∪ T(a4) ⊢ C

Proof sketch: We show that in step a4 no unwanted information is revealed to the intruder.

4.3.1. T(a4) := C ∧ A once_said [[product^E, e]^(1/a)]^b ∧ B sees [[product^E, e]^(1/a)]^b ∧ I sees [[product^E, e]^(1/a)]^b
4.3.2. New state is C ∪ T(a4), with [[product^E, e]^(1/a)]^b ∈ O_A ∧ [[product^E, e]^(1/a)]^b ∈ S_B ∧ [[product^E, e]^(1/a)]^b ∈ S_I
4.3.3. A ⊢ 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ a ∉ S_I ∧ product^E ∈ S_A ∧ [payment^e]^(1/b) ∈ S_A
4.3.4. Q.E.D.

Outline of Proof for Step 5

Assume: A ∪ T(a1, a2, a3, a4) ⊢ C

Prove: {C} a5 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure.

5. T(a5) : B → A : [[e, E]^(1/b)]^a

5.1. C ∪ T(a5) is positive
5.2. C allows a5
5.3. C ∪ T(a5) ⊢ C
5.4. Q.E.D.

Proof for Step 5.1

5.1. C ∪ T(a5) is positive

Proof sketch: We use Definition 4 to prove the above.

5.1.1. T(a5) := C ∧ B once_said [[e, E]^(1/b)]^a ∧ A sees [[e, E]^(1/b)]^a ∧ I sees [[e, E]^(1/b)]^a
5.1.2. T(a5) is positive
5.1.3. C is positive
5.1.4. Q.E.D.

Proof for Step 5.2

5.2. C allows T(a5)

Proof sketch: We use Definition 3 to prove the above.

5.2.1. C ⊢ B believes [[e, E]^(1/b)]^a
5.2.2. Q.E.D.

Proof for Step 5.3

5.3. C ∪ T(a5) ⊢ C

Proof sketch: We show that in step a5 no unwanted information is revealed to the intruder.

5.3.1. T(a5) := C ∧ B once_said [[e, E]^(1/b)]^a ∧ A sees [[e, E]^(1/b)]^a ∧ I sees [[e, E]^(1/b)]^a
5.3.2. New state is C ∪ T(a5), with [[e, E]^(1/b)]^a ∈ O_B ∧ [[e, E]^(1/b)]^a ∈ S_A ∧ [[e, E]^(1/b)]^a ∈ S_I
5.3.3. A ⊢ 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ a ∉ S_I ∧ product ∈ S_A ∧ payment ∈ S_B
5.3.4. Q.E.D.

Analysis Under Active Attacks

In this section, we analyze active intruder attacks. We must show that an intruder is not allowed to send a message that reads like a protocol message but is not identical to a protocol message. If it were identical to the protocol message for that transaction run, it could cause no harm. Such a replication would be equivalent to stopping the message on the network momentarily and then releasing it.

Some messages may not have any components that can change. We do not analyse such messages, as the proof for those is vacuously true. For those messages where the content can be marginally changed to dupe the actual participants, we use Definition 3 to show that the intruder is not allowed to send the changed message.

Proof for Step 1

Assume: Under passive attacks, A ∪ T(a1) ⊢ C

Prove: A ∪ T(a1) not allows I1

Proof sketch: We assume that security under passive attacks has already been proven for Step 1. We now use Definition 3 to prove security under active attacks.

1. I1 : I → A : [product description, modified price]^(1/b)

1.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P
 := A ⊢ P sees X^k

1.2. A ∪ T(a1) allows I → A : [product description, modified price]^(1/b), 1/b ∉ K_I
 := A ∪ T(a1) ⊢ I sees [product description, modified price]^(1/b)

1.3. From assumption, B's private key 1/b ∉ K_I

1.4. A ∪ T(a1) allows I → A : [product description, modified price]^(1/b)
 := A ∪ T(a1) ⊢ I sees [product description, modified price]^(1/b)

1.5. I not sees [product description, modified price]^(1/b)

1.6. Q.E.D.

Proof for Step 1.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

1.5. I not sees [product description, modified price]^(1/b)

1.5.1. From Axiom Good key ensures utterer [20],
 ⊢ k private_key_of P ∧ R sees [X]^k → P once_said X

1.5.2. ⊢ 1/b private_key_of B ∧ I sees [product description, modified price]^(1/b)
 → B once_said [product description, modified price]

1.5.3. We know, B not once_said [product description, modified price]

1.5.4. Q.E.D.

Proof for Step 2

Assume: A ∪ T(a1) ⊢ C

Under passive attacks, A ∪ T(a1, a2) ⊢ C

Prove: A ∪ T(a1, a2) not allows I2

Proof sketch: We assume that security has already been proved for Step 1 and that security under passive attacks has been proven for Step 2. We now use Definition 3 to prove security under active attacks.

2. I2 : I → B : [[product description, price]^(1/a), a, modified payment^e]^b

2.1. From Definition 3, A allows P → Q : (X, Y)^k, k ∈ K_P
 := for some S: A allows P → Q : X

2.2. b ∈ K_I, so A ∪ T(a1, a2) allows I → B : [[product description, price]^(1/a), a, modified payment^e]^b
 := A ∪ T(a1, a2) allows I → B : [product description, price]^(1/a)

2.3. From Definition 3, A ∪ T(a1, a2) allows P → Q : X^k, k ∉ K_P
 := A ⊢ P sees X^k

2.4. A ∪ T(a1, a2) allows I → B : [product description, price]^(1/a), 1/a ∉ K_I
 := A ∪ T(a1, a2) ⊢ I sees [product description, price]^(1/a)

2.5. From assumption, 1/a ∉ K_I

2.6. A ∪ T(a1, a2) allows I → B : [product description, price]^(1/a)
 := A ∪ T(a1, a2) ⊢ I sees [product description, price]^(1/a)

2.7. I not sees [product description, price]^(1/a)

2.8. Q.E.D.

Proof for Step 3

Assume: A ∪ T(a1, a2) ⊢ C

Under passive attacks, A ∪ T(a1, a2, a3) ⊢ C

Prove: A ∪ T(a1, a2, a3) not allows I3

Proof sketch: We assume that security has already been proved for Step 2 and that security under passive attacks has been proven for Step 3. We now use Definition 3 to prove security under active attacks.

3. I3 : I → A : [[payment^e, modified product^E]^(1/b)]^a

3.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P
 := A ⊢ P sees X^k

3.2. 1/b ∉ K_I, so A ∪ T(a1, a2, a3) allows I → A : [[payment^e, modified product^E]^(1/b)]^a
 := A ∪ T(a1, a2, a3) ⊢ I sees [payment^e, modified product^E]^(1/b)

3.3. From assumption, 1/b ∉ K_I

3.4. A ∪ T(a1, a2, a3) allows I → A : [[payment^e, modified product^E]^(1/b)]^a
 := A ∪ T(a1, a2, a3) ⊢ I sees [payment^e, modified product^E]^(1/b)

3.5. I not sees [payment^e, modified product^E]^(1/b)

3.6. Q.E.D.

Proof for Step 3.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

3.5. I not sees [payment^e, modified product^E]^(1/b)

3.5.1. From Axiom Good key ensures utterer [20],
 ⊢ k private_key_of P ∧ R sees [X]^k → P once_said X

3.5.2. ⊢ 1/b private_key_of B ∧ I sees [payment^e, modified product^E]^(1/b)
 → B once_said [payment^e, modified product^E]

3.5.3. We know, B not once_said [payment^e, modified product^E]

3.5.4. Q.E.D.

Proof for Step 4

Assume: A ∪ T(a1, a2, a3) ⊢ C

Under passive attacks, A ∪ T(a1, a2, a3, a4) ⊢ C

Prove: A ∪ T(a1, a2, a3, a4) not allows I4

Proof sketch: We assume that security has already been proved for Step 3 and that security under passive attacks has been proven for Step 4. We now use Definition 3 to prove security under active attacks.

4. I4 : I → B : [[product^E, modified e]^(1/a)]^b

4.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P
 := A ⊢ P sees X^k

4.2. A ∪ T(a1, a2, a3, a4) allows I → B : [[product^E, modified e]^(1/a)]^b
 := A ∪ T(a1, a2, a3, a4) ⊢ I sees [product^E, modified e]^(1/a)

4.3. From assumption, 1/a ∉ K_I

4.4. A ∪ T(a1, a2, a3, a4) allows I → B : [[product^E, modified e]^(1/a)]^b
 := A ∪ T(a1, a2, a3, a4) ⊢ I sees [product^E, modified e]^(1/a)

4.5. I not sees [product^E, modified e]^(1/a)

4.6. Q.E.D.

Proof for Step 4.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

4.5. I not sees [product^E, modified e]^(1/a)

4.5.1. From Axiom Good key ensures utterer [20],
 ⊢ k private_key_of P ∧ R sees [X]^k → P once_said X

4.5.2. ⊢ 1/a private_key_of A ∧ I sees [product^E, modified e]^(1/a)
 → A once_said [product^E, modified e]

4.5.3. We know, A not once_said [product^E, modified e]

4.5.4. Q.E.D.

Proof for Step 5

Assume: A ∪ T(a1, a2, ..., a4) ⊢ C

Under passive attacks, A ∪ T(a1, a2, ..., a5) ⊢ C

Prove: A ∪ T(a1, a2, ..., a5) not allows I_5

Proof sketch: We assume that security has already been proved for Step 4 and that security under passive attacks has been proven for Step 5. We now use Definition 3 to prove security under active attacks.

5. a5 : I → A: [[1/p, modified T]_{1/K_B}]_{K_A}

5.1. From Definition 3, A allows P → Q : [X]_k, k ∉ K_P

:= A ⊢ P sees [X]_k

5.2. 1/K_B ∉ K_I, A ∪ T(a1, a2, ..., a5) allows I → A: [[1/p, modified T]_{1/K_B}]_{K_A}

:= A ∪ T(a1, a2, ..., a5) ⊢ I sees [1/p, modified T]_{1/K_B}

5.3. From assumption, 1/K_B ∉ K_I

5.4. A ∪ T(a1, a2, ..., a5) allows I → A: [1/p, modified T]_{1/K_B}

:= A ∪ T(a1, a2, ..., a5) ⊢ I sees [1/p, modified T]_{1/K_B}

5.5. I not sees [1/p, modified T]_{1/K_B}

5.6. Q.E.D.

Proof for Step 5.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

5.5. I not sees [1/p, modified T]_{1/K_B}

5.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

5.5.2. ⊢ 1/K_B private_key_of B ∧ I sees [1/p, modified T]_{1/K_B}

→ B once_said [1/p, modified T]

5.5.3. We know, B not once_said [1/p, modified T]

5.5.4. Q.E.D.

3.5 Summary

In this chapter, we develop a general protocol for two-party transactions. We show that the protocol is secure, atomic, anonymous, private, and incurs nominal overhead cost. We prove, using the logic developed in Chapter 2, that the protocol is secure, atomic, anonymous, and private in the presence of a powerful external intruder.

In the next chapter, we define real-time aware protocols and develop a real-time aware protocol for general two-party transactions.

CHAPTER 4

REAL-TIME AWARE PROTOCOLS

In many situations, it is important that transactions complete within a fixed period of time. We refer to such transactions as real-time transactions. This may be either because the price agreed upon changes rapidly with time (e.g., currency market, stock market, options, etc.), or because the product is useful to the customer only for a certain period of time (e.g., newspapers, stock updates, etc.). Since such transactions must complete almost instantaneously, electronic networks seem to be the ideal medium to conduct them over.

However, for any real-time assurance to be given to a customer, there must be a method to verify that the real-time constraint agreed upon was not violated during a transaction run. Unfortunately, no such violation detection mechanism exists today.

In this chapter, we present a methodology [73, 77, 81] to detect real-time violations in a two-party transaction using the protocol presented in Chapter 3.

4.1 Existing Work

The importance of real-time transactions cannot be overstated. FedEx, Airborne, and a number of other overnight delivery services cater to just that need. Many of these services have extended their shipment tracking and records to the electronic world. FedEx has one of the largest e-commerce initiatives in the country. However, guaranteed timely delivery has not yet been extended to delivery of electronic data over the Internet.

Research work in electronic real-time transactions is still preliminary. A possible approach for the design of real-time aware protocols has been discussed in [87]. In [47], an auction protocol suitable for real-time applications is proposed. However, the protocol has no mechanism to detect real-time violations. Although there exists very little work in the area of real-time e-commerce transactions, the heightened interest in this area is evident from the workshops and conferences (e.g., the IEEE Real-Time Systems Symposium (RTSS), Dependable and Real-Time E-Commerce Systems (DARE), and the International Workshop on Advanced Issues of E-Commerce and Web-based Information Systems (WECWIS)) encouraging publications in this area.

4.2 System Model

A general real-time transaction execution involves two parties: the merchant and the customer. The merchant is willing to supply a product within a specified period of time for a fee. The customer is willing to pay the fee in return for receiving the product within the specified time. The time is measured from when the customer sends the order to when the customer receives the product.

The merchant and the customer are connected through an electronic network. The communication channels are FIFO. The customer is in close proximity (measured in terms of message delay) to a secure, trusted clock. This trusted clock may be a secure co-processor [90, 91] attached to the customer's workstation. Secure co-processors are secure physical devices that erase their RAM and CPU registers if they are accessed in any manner other than through the defined interfaces. These devices have limited capabilities, are inexpensive, and are available commercially [89]. Many customers may share the same clock. In order to avoid problems involving clock synchronization, we use a single trusted clock for all measurements in a single transaction. The local clocks of the customer and the merchant may not be synchronized with this clock, but we expect the clock periods to be the same. This is a reasonable assumption given the accuracy of modern-day clocks.

Our protocols are based on two simple forms of underlying support: (a) public key cryptography [61] and (b) time-stamping of certain messages by the trusted, secure processor. They are both described below.

The first form of support ensures that every party is issued a unique and universal identity (like the Social Security Number in the US). This universal identity constitutes a public and a private key. The public key is revealed to others and is used to trace a person, if necessary. It is also used to send secure, non-tamperable messages to the party. The private key is used as a signature, for authentication, and to read messages secured with the public key. Further, a trusted third party can issue extra public and private key pairs (that do not coincide with the set of universal identities). These extra pairs are also unique and may be used as pseudonyms by the parties they are issued to. The secure, trusted processor also has a universal identity and may have some pseudonyms.

The second form of support ensures that messages are sent and received by the customer via its trusted processor. The message transmission between a customer and its trusted processor is in-order, and the transmission time is negligible. All messages transmitted via the co-processor may now be totally ordered based on the time at which they are received by the secure, trusted processor. The trusted processor appends to every incoming and outgoing message a timestamp consisting of the time (as per its local clock) when the message was received at the secure, trusted processor and a count of the incoming messages from the merchant to the customer. It also signs this time-stamped message to prevent timestamp forgery. Note that even if the secure processor is placed on the customer's workstation, the customer cannot tamper with it. The secure, trusted processor also holds copies of all time-stamped messages for a certain period of time specified by the customer. We ensure that the customer or the merchant cannot by-pass the secure, trusted processor by securing the messages with the co-processor's public key, rendering the messages unreadable unless decrypted by the co-processor.

4.3 Real-Time Constraint

As noted earlier, in a real-time aware commerce transaction, the merchant must exchange a product (P) for a payment (F) within a specified period of time (T) for the real-time constraint to be satisfied. The time T is measured from the time the customer sends the order/bid for P till the customer receives the product. This is illustrated in Figure 4.1. Let T_W be the time required for the order to be received by the merchant, T_Y the time required for the merchant to process the order, and T_Z the time required to deliver the product to the customer. The real-time constraint can then be expressed as: T_W + T_Y + T_Z ≤ T.

T_W = order from customer over the network
T_Y = execution time at the merchant's end
T_Z = product delivery time over the network
T = stipulated time
C = customer
M = merchant

Figure 4.1: Timing diagram showing a real-time transaction
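Since the trusted clock stamps both the outgoing Order and the incoming Product, T_W + T_Y + T_Z is exactly the elapsed time between those two stamps, so the constraint reduces to comparing two timestamps. A minimal sketch (the function name and float-seconds representation are assumptions of this example, not part of the dissertation):

```python
# Illustrative sketch: checking the real-time constraint T_W + T_Y + T_Z <= T
# from the two trusted-clock timestamps on the Order and the Product.

def constraint_satisfied(t_order: float, t_product: float, t_stipulated: float) -> bool:
    """T_W + T_Y + T_Z equals the elapsed time between the moment the trusted
    clock stamps the outgoing Order and the moment it stamps the incoming
    Product, so only the two endpoint timestamps are needed."""
    return (t_product - t_order) <= t_stipulated

# Order stamped at t=10.0s, product at t=12.5s, stipulated T=3.0s
assert constraint_satisfied(10.0, 12.5, 3.0)       # delivered within the limit
assert not constraint_satisfied(10.0, 14.0, 3.0)   # delivered too late
```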

4.4 The Sixth Property

We call the ability to detect, and prove to a third party, the violation of (or adherence to) the above real-time constraint the real-time awareness of a protocol. The five properties (security, atomicity, anonymity, privacy, and low overhead cost), identified as either required or desirable in electronic commerce protocols, still remain valid. Real-time awareness becomes an extra consideration for applications requiring real-time transactions: the sixth property.

In other words, if we were to plot a protocol as a point in an n-dimensional space, where each dimension represents the extent to which the protocol satisfies a property, real-time awareness would be one of the dimensions. It is wrong, however, to think that these axes are orthogonal to each other. A higher value on one of the axes may mean a lower value on another. For example, assuring atomicity implies ensuring payment. Traditionally this is done through a collection agency, which compromises the anonymity of the customer. In Chapter 5, we will see, for auction protocols, why it is not easy to circumvent this interplay between anonymity and atomicity. We will also see, later in this chapter, that adding real-time awareness without affecting other properties (especially atomicity and anonymity) of a protocol is extremely tricky.

4.5 A Real-Time Aware General Two-Party Transaction Protocol

In this section, we describe the mechanism, used in conjunction with the two-party transaction protocol described in Chapter 3, to detect delays in transaction execution. We start with the protocol described in Chapter 3.

(1) B → everybody: [product description, price]_{1/K_B}

(2) A → B: [[product description, price]_{1/K_B}, a, payment_{1/K_A}]_{K_B}

(3) B → A: [[payment_{1/K_A}, product_p]_{1/K_B}]_{K_A}

(4) A → B: [[product_p, T]_{1/K_A}]_{K_B}

(5) B → A: [[1/p, T]_{1/K_B}]_{K_A}

Figure 4.2: Two-party transaction protocol between the merchant, Bob, and the customer, Alice
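The nesting of signatures and encryptions in the figure can be made concrete with a toy symbolic model. This sketch is an assumption-laden illustration (string keys, tagged tuples for sealed terms, and one possible reading of the message components), not the dissertation's formalism:

```python
# Toy symbolic model of the five messages: sealing a payload with a key is
# just a tagged tuple, so the nesting mirrors the protocol structure.
# All names (seal, K_A, "T", product key p) are assumptions of this sketch.

def seal(payload, key):
    return ("sealed", key, payload)

K_A, K_B = "K_A", "K_B"        # public keys of Alice and Bob
sA, sB = "1/K_A", "1/K_B"      # corresponding private (signing) keys
p, p_inv = "p", "1/p"          # product-encryption key and its inverse

offer = seal(("product description", "price"), sB)

m1 = offer                                                       # (1) B -> everybody
m2 = seal((offer, "a", seal("payment", sA)), K_B)                # (2) A -> B
m3 = seal(seal((seal("payment", sA), ("product", p)), sB), K_A)  # (3) B -> A
m4 = seal(seal((("product", p), "T"), sA), K_B)                  # (4) A -> B
m5 = seal(seal((p_inv, "T"), sB), K_A)                           # (5) B -> A

# Every message after the broadcast is sealed with the recipient's public key:
assert [m[1] for m in (m2, m3, m4, m5)] == [K_B, K_A, K_B, K_A]
```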

The next three sections deal with the role of the secure co-processor, the notation used to describe timestamps, and the real-time violation detection mechanism, respectively.

4.5.1 Role of the Secure Co-Processor

Each secure, trusted co-processor maintains a vector timestamp. The timestamp has a field associated with each merchant. The field corresponds to the number of messages exchanged with that merchant via the co-processor. This is illustrated in Table 4.1.

Merchant Id   Customer Id   # of messages received
3             1             3
3             1             1
5             4             17
17            1             3
3             4             2
22            1             7

Table 4.1: Vector timestamp
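The per-(merchant, customer) message counter above can be sketched as follows (class and method names are assumptions of this example):

```python
# Minimal sketch of the co-processor's vector timestamp: one message counter
# per (merchant, customer) pair, incremented for every message relayed
# through the co-processor.
from collections import defaultdict

class VectorTimestamp:
    def __init__(self):
        self.counts = defaultdict(int)   # (merchant_id, customer_id) -> count

    def record(self, merchant_id: int, customer_id: int) -> int:
        """Count one more relayed message and return the new total."""
        self.counts[(merchant_id, customer_id)] += 1
        return self.counts[(merchant_id, customer_id)]

vt = VectorTimestamp()
vt.record(3, 1)
vt.record(3, 1)
vt.record(5, 4)
assert vt.counts[(3, 1)] == 2
assert vt.counts[(5, 4)] == 1
```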

The original protocol is modified by first identifying the following messages in the transaction protocol.

Order: A message from the customer to the merchant in which the customer makes a binding agreement to buy the goods. This may be in the form of a payment, a binding promise to pay, etc.

Product: A message from the merchant to the customer that finally makes the goods available to the customer. The message may contain the goods themselves, the decryption key to previously supplied encrypted goods, etc.

The customer's order (Order/Step 2) and the final product delivery (Product/Step 5) are both routed through the secure processor. The product is available to the customer only after the product decryption key is received. Hence, this message is the final product delivery message. The merchant encrypts this message with the secure processor's public key. This prevents the customer from rerouting the message and by-passing the secure processor. The secure processor attaches to these messages a timestamp consisting of two fields: the current time as per the secure processor and the number of messages exchanged between the customer and the merchant in question. The significance of these becomes clear in Section 4.5.3, where we describe the mechanism to detect real-time violations. The secure processor further signs the time-stamped message with its private key. This prevents anyone from tampering with or forging a time-stamped message. This is illustrated in Table 4.2, where R is the nonce, K is the co-processor's public key, and 1/K the private key. The secure processor also sends a copy of the order, after time-stamping and signing it, for the customer's records. The notation used in this table is described in further detail in the next section.

Old Msg    New message sent to secure processor    New time-stamped msg
Order      [Order, R]_K                            [Order, T_O, R, N_O]_{1/K}
Product    [Product, R]_K                          [Product, T_P, R, N_P]_{1/K}

Table 4.2: Role of the Secure Co-Processor
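The time-stamping step of Table 4.2 can be sketched as follows. One hedge: the dissertation assumes a public-key signature with the co-processor's private key 1/K; this sketch substitutes an HMAC with a key held inside the device purely to keep the example self-contained and runnable.

```python
# Sketch of the co-processor's time-stamp-and-sign step (Table 4.2).
# The HMAC stands in for the public-key signature 1/K of the dissertation;
# the secret never leaves the tamper-resistant device.
import hashlib
import hmac
import json

SECRET = b"coprocessor-private-key"   # assumed; held inside the co-processor

def timestamp_and_sign(msg: str, nonce: str, now: float, n_received: int) -> dict:
    """Attach the timestamp T, nonce R, and message count N, then sign."""
    body = {"msg": msg, "T": now, "R": nonce, "N": n_received}
    blob = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "sig": hmac.new(SECRET, blob, hashlib.sha256).hexdigest()}

def verify(stamped: dict) -> bool:
    blob = json.dumps(stamped["body"], sort_keys=True).encode()
    expect = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, stamped["sig"])

order = timestamp_and_sign("Order", nonce="R42", now=103.5, n_received=7)
assert verify(order)
order["body"]["T"] = 99.0          # tampering with the timestamp...
assert not verify(order)           # ...invalidates the signature
```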

4.5.2 Notations

We use the following notations to describe the timestamps on the protocol messages.

T_S = stipulated time
T_P = timestamp on the Product
T_O = timestamp on the Order
T_D = timestamp on the Demand
T_1 = timestamp on message M_1
T_2 = timestamp on message M_2
T_X = timestamp on message M_X
R_P = nonce attached to the Product
R_O = nonce attached to the Order
R_1 = nonce attached to message M_1
R_2 = nonce attached to message M_2
R_X = nonce attached to message M_X
N_P = number of messages received before the Product
N_O = number of messages received before the Order
N_D = number of messages received before the Demand

The following section describes the detection mechanism that detects whether the time between dispatching the first message (Order) and receiving the second message (Product) is greater than the stipulated time.

4.5.3 Real-Time Violation Detection Mechanism

The protocol described above has a detection mechanism that is executed when the customer believes that the final product was not delivered on time. Note that no real-time violation is detected unless the customer executes the detection mechanism. Thus, the extra cost of the detection mechanism is not associated with transactions that completed to the satisfaction of the customer.

The detection mechanism consists of a demand for refund (Demand) sent by the customer via the secure processor. The Demand is simply a plain-text message asking for a refund that is time-stamped and signed in the same manner as an Order. Copies of some old messages are also attached to the Demand. This Demand is then evaluated by the merchant. A refund is made if and only if there was a violation of the real-time constraint.

Situation                           Attachments
Customer has received Product       Order, Product
Customer has not received Product   Order, all time-stamped messages received from the merchant after Order (M_1, M_2, ..., M_X)

Table 4.3: Attachments to be sent with the Demand

There are two possible situations: (a) the customer has received the Product, and (b) the customer has not received the Product. If the customer has received the Product (but late), the customer attaches the Order and the Product to the Demand. If the customer has not yet received the Product, the customer attaches all messages the customer received from the merchant after sending the Order. Table 4.3 shows the attachments to be sent for each situation.

The customer should be given a refund if and only if the Demand was sent after the stipulated time elapsed and the Product was not received within the stipulated time. Figure 4.3 shows a flowchart describing the procedure used to evaluate eligibility for a refund.

Situation 1: The merchant receives the Demand and the Order from the customer. The merchant verifies whether T_D > T_O + T_S. If not, it implies that the stipulated time had not elapsed when the customer sent the Demand, and the customer's Demand is denied.

Situation 2: If the Product was not attached to the Demand, the merchant verifies whether N_D - N_O = X, and T_1, T_2, ..., T_X > T_O, and R_1, R_2, ..., R_X ≠ R_O. This implies that none of the messages received by the customer after sending the Order is the Product corresponding to that Order, i.e., the Product was not received by the customer within the stipulated time.

4.6 Properties

In this section, we show that the real-time aware protocol presented in this chapter is secure, atomic, private, anonymous, and incurs nominal overhead charges. The only changes from the original two-party transaction protocol are: (a) extra timestamps and signatures on two messages by the co-processor and (b) an optional extra demand-for-refund message. Since we have already shown the original protocol to have all five of the above-mentioned properties, we only show that these changes do not violate these properties.

Input: the Demand, the Order, and either the Product or the other messages M_1, M_2, ..., M_X.

1. If T_D (time on the Demand) - T_O (time on the Order) ≤ the stipulated time T_S, the refund is denied.

2. Otherwise, if the Product is attached, the refund is given if T_P (time on the Product) - T_O > T_S, and denied otherwise.

3. Otherwise (no Product attached), the refund is given if N_D - N_O = X and the time on each attached message M_i exceeds T_O; otherwise the refund is denied.

Figure 4.3: Algorithm to evaluate the customer's Demand
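The Demand-evaluation flowchart of Figure 4.3 can be rendered directly in code. This is an illustrative sketch; the message representation (dicts carrying the co-processor's T, R, and N fields) is an assumption of the example:

```python
# Sketch of the merchant's evaluation of a Demand (Figure 4.3). Each message
# is a dict carrying the timestamp T, nonce R, and message count N that the
# co-processor attached; field names are assumptions of this example.

def refund_due(order, demand, t_stipulated, product=None, others=()):
    """Return True iff the customer is eligible for a refund."""
    # Demand sent before the stipulated time elapsed: deny.
    if demand["T"] - order["T"] <= t_stipulated:
        return False
    if product is not None:
        # Product attached: refund only if it arrived late.
        return product["T"] - order["T"] > t_stipulated
    # No Product attached: every message received after the Order must be
    # accounted for (count check) and none of them may be the Product
    # (timestamp and nonce checks).
    return (demand["N"] - order["N"] == len(others)
            and all(m["T"] > order["T"] for m in others)
            and all(m["R"] != order["R"] for m in others))

order  = {"T": 0.0,  "R": "R1", "N": 4}
demand = {"T": 10.0, "R": "R9", "N": 6}
late_product = {"T": 8.0, "R": "R1", "N": 5}
assert refund_due(order, demand, 5.0, product=late_product)      # late delivery
assert refund_due(order, demand, 5.0,
                  others=[{"T": 3.0, "R": "R7", "N": 5},
                          {"T": 6.0, "R": "R8", "N": 6}])        # never delivered
assert not refund_due(order, demand, 15.0)                       # demanded too early
```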

Security: We know that no secure information can be gathered by an intruder from the messages in the original protocol, and the messages themselves cannot be forged. Since the messages in this protocol are the same messages with extra wrappers (signatures, timestamps, and encryption), no information leak may take place, nor can the messages be forged.

Atomicity: The concept of atomicity adopted in the previous chapter was that the payment and the transaction would eventually complete properly, or would completely rollback. Eventually is the key word in this definition. In real-time aware transactions, the transaction must complete within a stipulated period of time, or must rollback. Thus adding a real-time constraint affects atomicity. The demand for refund provides a mechanism for resolving this conflict via a rollback.

Anonymity: In the previous chapter, the only possible identification of the customer was through the customer's signature. Since the customer used multiple signatures or pseudonyms, a certain level of anonymity was assured. With the real-time aware protocol, the signature of the co-processor is also an identification of the customer whenever a single co-processor is associated with a single customer. One possible way to deal with this is to have co-processor pseudonyms that correspond to customer pseudonyms. Now, the co-processor must also sign with different pseudonyms, depending on which pseudonym the customer uses. This maintains the level of anonymity assured in the original protocol.

Privacy: The customer's privacy remains unchanged, as no one may access unauthorized information from the co-processor or tamper with it.

Overhead Cost: The extra overhead costs in the real-time violation detection are: (a) a one-time per-customer co-processor cost that can range from $10 to $200 depending on the level of security of the co-processor, (b) extra encryption and decryption of two messages, the Order and the Product, and (c) one extra message, the demand for refund, sent only when the customer feels the constraint has been violated.

4.7 Formal Verification

Since we intend to apply this methodology for real-time violation detection to other protocols, we formally prove that this methodology, when applied to any protocol, does not compromise the useful properties (security, atomicity, anonymity, and privacy) ensured by the underlying protocol. This is a more general exercise than proving these properties for the real-time aware protocol presented above.

To prove the useful properties of the real-time aware protocol, we assume that the underlying protocol (in this case, the two-party transaction protocol) ensures these properties. We then show that the changes in the protocol due to the real-time methodology do not compromise or weaken them. To show this, we need only analyze the three modified/new messages: the order, the product, and the demand for refund.

We first analyze them for passive attacks:

Outline of Proof for Order/Step 2

Assume: A ∪ T(a1) ⊢ C

Prove: {C} a2 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a2 {C}.

2. T(a2) : A → coprocessor: [original a2]

2.1. C ∪ T(a2) is positive

2.2. C allows a2

2.3. C ∪ T(a2) ⊢ C

2.4. Q.E.D.

Outline of Proof for Order/Step 3

Assume: A ∪ T(a1, a2) ⊢ C

Prove: {C} a3 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a3 {C}.

3. T(a3) : coprocessor → B: [original a2, timestamps]_{1/K}

3.1. C ∪ T(a3) is positive

3.2. C allows a3

3.3. C ∪ T(a3) ⊢ C

3.4. Q.E.D.

Outline of Proof for Product/Step 6

Assume: A ∪ T(a1, a2, ..., a5) ⊢ C

Prove: {C} a6 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a6 {C}.

6. T(a6) : B → coprocessor: [original a5]

6.1. C ∪ T(a6) is positive

6.2. C allows a6

6.3. C ∪ T(a6) ⊢ C

6.4. Q.E.D.

Outline of Proof for Product/Step 7

Assume: A ∪ T(a1, a2, ..., a6) ⊢ C

Prove: {C} a7 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a7 {C}.

7. T(a7) : coprocessor → A: [original a5, timestamps]_{1/K}

7.1. C ∪ T(a7) is positive

7.2. C allows a7

7.3. C ∪ T(a7) ⊢ C

7.4. Q.E.D.

Outline of Proof for Demand/Step 8

Assume: A ∪ T(a1, a2, ..., a7) ⊢ C

Prove: {C} a8 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a8 {C}.

8. T(a8) : A → coprocessor: [attachments]

8.1. C ∪ T(a8) is positive

8.2. C allows a8

8.3. C ∪ T(a8) ⊢ C

8.4. Q.E.D.

Outline of Proof for Demand/Step 9

Assume: A ∪ T(a1, a2, ..., a8) ⊢ C

Prove: {C} a9 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a9 {C}.

9. T(a9) : coprocessor → B: [attachments, timestamps]_{1/K}

9.1. C ∪ T(a9) is positive

9.2. C allows a9

9.3. C ∪ T(a9) ⊢ C

9.4. Q.E.D.

Next, we analyze the same three messages for active attacks.

Analysis Under Active Attacks

In this section, we analyze active intruder attacks. We must show that an intruder is not allowed to send a message that reads like a protocol message, but is not identical to a protocol message. If it were identical to the protocol message for that transaction run, it could cause no harm. Such a replication would be equivalent to stopping the message on the network momentarily and then releasing it.

Some messages may not have any components that can change. We do not analyze such messages, as the proof for those is vacuously true. For those messages where the content can be marginally changed to dupe the actual participants, we use Definition 3 to show that the intruder is not allowed to send the changed message.

Proof for Step 2

Assume: A ∪ T(a1) ⊢ C

Under passive attacks, A ∪ T(a1, a2) ⊢ C

Prove: A ∪ T(a1, a2) not allows I_2

Proof sketch: We assume that security has already been proved for Step 1 and that security under passive attacks has been proven for Step 2. We now use Definition 3 to prove security under active attacks.

2. a2 : I → coprocessor: [[product description, price]_{1/K_B}, a, modified payment]_K

2.1. From Definition 3, A allows P → Q : (X, Y)_k, k ∈ K_P

:= for some S: A allows P → S : X

2.2. K ∈ K_I, A ∪ T(a1, a2) allows I → coprocessor: [[product description, price]_{1/K_B}, a, modified payment]_K

:= A ∪ T(a1, a2) ⊢ for some S: A ∪ T(a1, a2) allows I → S : [product description, price]_{1/K_B}

2.3. From Definition 3, A allows P → Q : [X]_k, k ∉ K_P

:= A ⊢ P sees [X]_k

2.4. A ∪ T(a1, a2) allows I → S: [product description, price]_{1/K_B}, 1/K_B ∉ K_I

:= A ∪ T(a1, a2) ⊢ I sees [product description, price]_{1/K_B}

2.5. From assumption, 1/K_B ∉ K_I

2.6. A ∪ T(a1, a2) allows I → coprocessor: [product description, price]_{1/K_B}

:= A ∪ T(a1, a2) ⊢ I sees [product description, price]_{1/K_B}

2.7. I not sees [product description, price]_{1/K_B}

2.8. Q.E.D.

Proof for Step 3

Assume: A ∪ T(a1, a2) ⊢ C

Under passive attacks, A ∪ T(a1, a2, a3) ⊢ C

Prove: A ∪ T(a1, a2, a3) not allows I_3

Proof sketch: We assume that security has already been proved for Step 2 and that security under passive attacks has been proven for Step 3. We now use Definition 3 to prove security under active attacks.

3. a3 : I → B: [[original a2, timestamps]_{1/K}, modified payment]_{K_B}

3.1. From Definition 3, A allows P → Q : (X, Y)_k, k ∈ K_P

:= for some S: A allows P → S : X

3.2. K_B ∈ K_I, A ∪ T(a1, a2, a3) allows I → B: [[original a2, timestamps]_{1/K}, modified payment]_{K_B}

:= A ∪ T(a1, a2, a3) ⊢ for some S: A ∪ T(a1, a2, a3) allows I → S : [original a2, timestamps]_{1/K}

3.3. From Definition 3, A allows P → Q : [X]_k, k ∉ K_P

:= A ⊢ P sees [X]_k

3.4. A ∪ T(a1, a2, a3) allows I → S: [original a2, timestamps]_{1/K}, 1/K ∉ K_I

:= A ∪ T(a1, a2, a3) ⊢ I sees [original a2, timestamps]_{1/K}

3.5. From assumption, 1/K ∉ K_I

3.6. A ∪ T(a1, a2, a3) allows I → B: [original a2, timestamps]_{1/K}

:= A ∪ T(a1, a2, a3) ⊢ I sees [original a2, timestamps]_{1/K}

3.7. I not sees [original a2, timestamps]_{1/K}

3.8. Q.E.D.

Proof for Step 6

Assume: A ∪ T(a1, ..., a5) ⊢ C

Under passive attacks, A ∪ T(a1, ..., a6) ⊢ C

Prove: A ∪ T(a1, ..., a6) not allows I_6

Proof sketch: We assume that security has already been proved for Step 5 and that security under passive attacks has been proven for Step 6. We now use Definition 3 to prove security under active attacks.

6. a6 : I → coprocessor: [[1/p, modified T]_{1/K_B}]_K

6.1. From Definition 3, A allows P → Q : [X]_k, k ∉ K_P

:= A ⊢ P sees [X]_k

6.2. 1/K_B ∉ K_I, A ∪ T(a1, ..., a6) allows I → coprocessor: [[1/p, modified T]_{1/K_B}]_K

:= A ∪ T(a1, ..., a6) ⊢ I sees [1/p, modified T]_{1/K_B}

6.3. From assumption, 1/K_B ∉ K_I

6.4. A ∪ T(a1, ..., a6) allows I → coprocessor: [1/p, modified T]_{1/K_B}

:= A ∪ T(a1, ..., a6) ⊢ I sees [1/p, modified T]_{1/K_B}

6.5. I not sees [1/p, modified T]_{1/K_B}

6.6. Q.E.D.

Proof for Step 6.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

6.5. I not sees [1/p, modified T]_{1/K_B}

6.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

6.5.2. ⊢ 1/K_B private_key_of B ∧ I sees [1/p, modified T]_{1/K_B}

→ B once_said [1/p, modified T]

6.5.3. We know, B not once_said [1/p, modified T]

6.5.4. Q.E.D.

Proof for Step 7

Assume: A ∪ T(a1, ..., a6) ⊢ C

Under passive attacks, A ∪ T(a1, ..., a7) ⊢ C

Prove: A ∪ T(a1, ..., a7) not allows I_7

Proof sketch: We assume that security has already been proved for Step 6 and that security under passive attacks has been proven for Step 7. We now use Definition 3 to prove security under active attacks.

7. a7 : I → A: [modified (original a5), timestamps]_{1/K}

7.1. From Definition 3, A allows P → Q : [X]_k, k ∉ K_P

:= A ⊢ P sees [X]_k

7.2. 1/K ∉ K_I, A ∪ T(a1, ..., a7) allows I → A: [modified (original a5), timestamps]_{1/K}

:= A ∪ T(a1, ..., a7) ⊢ I sees [modified (original a5)]_{1/K}

7.3. From assumption, 1/K ∉ K_I

7.4. A ∪ T(a1, ..., a7) allows I → A: [modified (original a5)]_{1/K}

:= A ∪ T(a1, ..., a7) ⊢ I sees [modified (original a5)]_{1/K}

7.5. I not sees [modified (original a5)]_{1/K}

7.6. Q.E.D.

Proof for Step 7.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

7.5. I not sees [modified (original a5)]_{1/K}

7.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

7.5.2. ⊢ 1/K private_key_of coprocessor ∧ I sees [modified (original a5)]_{1/K}

→ coprocessor once_said modified (original a5)

7.5.3. We know, coprocessor not once_said modified (original a5)

7.5.4. Q.E.D.

4.8 Summary

In this chapter, we described a methodology to detect violations of real-time constraints in general two-party transactions. In later chapters, we show that the same methodology may be used for other protocols as well.

It is important to note that, in our model, a product delivered late is of no value to the customer. In other words, we do not have different refunds or compensations for customers who receive their product late versus those who do not receive it at all.

Let us assume, for a moment, that a product received late is of some value to the customer. The customer may now deliberately delay receipt of the product by temporarily disconnecting from the network, and get not only the devalued product but also the refund. A less-than-complete refund kills the motivation to do so. Furthermore, this is a large effort for products that are of low value (such as stock quotes). While this lack of differentiation in no way makes the protocol less valuable, a protocol that allows for detection of non-delivery of the product and differentiates this from late delivery remains an interesting future exercise.

We believe that the client-side security model used in this solution is extremely useful for many other applications such as content metering, music distribution, secure storage of credit card and smart card information, etc. With the cost of personal computers falling tremendously, this could become a standard feature on most computers.

In the next chapter, we develop a protocol for auction transactions, and extend it to real-time aware applications. We also show that the proposed protocol is secure, atomic, anonymous, private, and incurs nominal overhead costs.

CHAPTER 5

AUCTIONS

In the previous chapters, we talked about two-party transactions. In this chapter, we extend the same problems and questions to electronic auctions. Auctions are an important form of commerce. In traditional auctions, the bidder must be present at the site of the auction. This reduces the appeal of auctions and restricts the number of people who would otherwise participate in them. An auction over an electronic network is, therefore, particularly attractive. This is evident from the number of auction houses that are already established on the web. However, due to the complexity of electronic price negotiation [17], most electronic auction houses implement a highly simplified model of price negotiation, often compromising other important properties required in electronic commerce.

5.1 Existing Work

While there exist a number of auction sites on the Internet, such as eBay (http://www.ebay.com), Onsale (http://www.onsale.com), FirstAuction (http://www.firstauction.com), ZAuction (http://www.zauction.com), Dealdeal (http://www.dealdeal.com), and Ubid (http://www.ubid.com), that are fully/partially automated, few of them ensure some of the important properties such as security, privacy, anonymity, and atomicity. This is due to the inherent difficulties in automated price negotiation [17].

While electronic auctions are complex, they are equally popular and desirable.

Consequently, a lot of research has been done in the area of electronic auctions, and particularly in electronic negotiations. Negotiation Support Systems (NSS) are a class of computer software systems geared specifically towards situations requiring negotiations. While these are powerful tools, they require near-constant human input and are far from being fully automated negotiation engines [17]. A number of papers on auction models, automatic price negotiation, and auction protocols [35, 17, 92, 87, 47, 58] have been published in the literature. These papers are directed at specific types of auction applications: single-round auctions, real-time auctions, non-automatic price negotiation, etc. There still exists a need for a general-purpose auction protocol that is efficient, secure, atomic, anonymous, private, and inexpensive.

5.2 System Model

An auction involves a set of customers and a single merchant. The merchant must have a product that he is willing to sell for the highest price offered. A round of bidding consists of interested customers making a bid on the product. The round of bidding is bounded by a cut-off time. At this time, the merchant collects all the bids and computes the highest bid. If no higher bid has been made since the previous round of bidding, the product is sold to the highest bidder and all other customers are notified about the sale. If a higher bid has been made, all customers are notified about this price and another round of bidding starts. This model is used in order to simplify the negotiation process without compromising either the anonymity of the customers or their ability to judge the market and make an intelligent bid.
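The round structure described above can be sketched as follows. This is a minimal illustration of the bidding loop only, with no cryptography; the function and message names are hypothetical, and each "round" is modeled as the list of (bidder, price) bids received before the cut-off time.

```python
from collections import deque

def run_auction(rounds, notify):
    """Round-based auction sketch: the auction ends when a round produces
    no bid higher than the current maximum, as in the model above."""
    rounds = deque(rounds)
    highest = None  # (bidder, price) of the best bid seen so far
    while True:
        bids = rounds.popleft() if rounds else []
        best = max(bids, key=lambda b: b[1], default=None)
        if highest is not None and (best is None or best[1] <= highest[1]):
            notify(("sold", highest))        # announce the sale (Step 6)
            return highest
        highest = best
        notify(("max_bid", highest[1]))      # announce the maximum bid (Step 5)

messages = []
winner = run_auction(
    [[("A", 100), ("C", 120)], [("A", 150)], []],  # three rounds of bids
    messages.append,
)
```

Here the empty final round is what closes the auction: no bid exceeds the standing maximum, so the product is sold to the current leader.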

In electronic commerce transactions, the involved parties may be physically located anywhere. Each party has a computer connected to an electronic network. The parties communicate by sending messages to each other over the network. The existence of some acceptable electronic form of payment, such as a credit card number, an electronic coin withdrawn from a bank, or any other electronic entity that does not compromise the customer's privacy or anonymity, is assumed.

It is also assumed that there are third parties that issue pseudo-identities to the customers. Apart from facilitating anonymity (by issuing pseudo-identities), these third parties are expected to act as collection agencies if their customer does not pay for goods bought at an auction. Third parties are important in electronic auctions because, unlike in normal transactions, the customer cannot refuse to buy a product (i.e., abort the transaction) after making the highest bid in an auction. Since third parties are expected to act as collection agencies (in the situation that the customer retracts his bid), they must know the customer's true identity. The customers must trust these third parties not to misuse a customer's identity or other information the customer reveals to the third party [45, 46].

5.3 Protocol Description

An auction involves a merchant, say Bob, and many customers. An auction can be broken into three stages. In the first stage, Bob advertises his product. In the second stage, the customers bid for the product until no customer wishes to make a higher bid. In the third stage, the highest bidder, say Alice, and the merchant, Bob, exchange the product and the payment. The important feature of an auction is that once Alice makes the highest bid for a product, she must buy it. This aspect of auctions makes it extremely difficult to ensure customer anonymity. How can the customer be forced to honor her bid if her identity is to be kept a secret?

The protocol [71, 75] presented here is based on public-key cryptography and uses trusted third parties to ensure anonymity.

5.3.1 The Protocol

Figure 5.1 illustrates the auction protocol with respect to the highest bidder, Alice. There may, however, be any number of customers involved until the product is sold to Alice.

In Step 1, Bob advertises the product by broadcasting the product description and a list of third parties that he trusts. In Step 2, he also broadcasts the product encrypted with key E. Both messages carry the product identifier (which may be a part of the product description) to prevent mix-ups with messages regarding sales of other products. Bob signs both messages so that customers can verify that they are from him.

In Step 3, Alice (along with other interested customers) responds to the advertisement with a reference to the advertisement, her pseudo-identity (i.e., Alice's public key and the name of the third party that issued the public key), the price offered, the payment encrypted with key e, and the signed value of the price offered. Bob can look up the third party's directory to check that Alice's public key is indeed issued by the third party involved. Furthermore, he can verify the message is from Alice by decrypting

(1) B → everybody : [product description, list of recognized third parties]_b

(2) B → everybody : [product identifier, product^E]_b

(3) A → B : [[message 1, a, third party t, payment^e, price]_ā]^B

(4) B → A : [[payment^e]_b, [price]_ā]^a

(5) B → everybody : [product description, maximum price offered]_b

(6) B → everybody : [product description, sold for price]_b

(7) A → B : [[product description, sold for price]_b, [1/e]_ā]^B

(8) B → A : [[1/e, 1/E]_b]^a

Figure 5.1: Auction protocol between the merchant, Bob, and a representative customer, Alice. A subscript denotes signing with a private key and a superscript denotes encryption with a public key: b and B are Bob's private and public keys, ā and a are the private and public keys of Alice's pseudo-identity, and E and e are the product and payment encryption keys, with 1/E and 1/e the corresponding decryption keys.

the signed value of the price offered. Since the message is secured with Bob's public key, only Bob can read it. Bob acknowledges the receipt of the bid in Step 4.

Once the time allotted for a round of bidding is over, Bob broadcasts the maximum price offered. This constitutes Step 5. The message is signed by Bob so that the customers can verify that it is from him. Bob can hike the maximum price announced in Step 5. This is an accepted practice in auctioneering. Most auctions have their agents among the bidders. These agents artificially hike the maximum bid, goading other bidders to offer more. This method may also be used to resolve ties between bidders. Step 3, Step 4, and Step 5 are repeated until no higher price is offered.

When no higher bid is made, the product is advertised as sold for the highest price quoted (Step 6). This announcement is again signed by Bob so that it can be verified that he sent it.

In Step 7, the highest bidder, Alice, responds with the decryption key for her payment. (If she does not respond, Bob approaches the third party with her bid and collects the payment.) Alice encloses the old notice for reference and a signed copy of the decryption key so that Bob can verify that the message is from her.

In Step 8, Bob acknowledges the payment and sends the signed product encryption key.
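The layered structure of Step 3 -- the bid is signed with Alice's pseudo private key and then sealed with Bob's public key -- can be made concrete with toy stand-ins for the cryptographic primitives. The functions `sign` and `seal` below only record structure (no real cryptography), and the key names are placeholders:

```python
# Toy stand-ins: each wrapper records who signed or who can open the message,
# so the nesting of the protocol message can be inspected.
def sign(msg, signer):
    return {"signed_by": signer, "body": msg}

def seal(msg, recipient):
    return {"sealed_for": recipient, "body": msg}

# Step 3: [[message 1, a, third party t, payment^e, price]_a]^B
bid = ("message 1", "a", "third party t", seal("payment", "e"), 100)
step3 = seal(sign(bid, "a"), "B")

# Bob opens the outer layer with his private key, then checks the signature
# against Alice's pseudo public key a (looked up at third party t).
opened = step3["body"]
assert opened["signed_by"] == "a"   # signature check against pseudo-identity a
assert opened["body"][4] == 100     # the price offered
```

The sealed payment inside the bid stays unreadable to Bob until Alice releases the decryption key in Step 7, which is exactly what makes the exchange atomic.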

Non-Electronic Goods

With some minor modifications, the protocol can be extended to non-electronic goods. Step 2 is skipped. Then, in Step 8, instead of sending an encryption key for the product, Bob acknowledges receipt of the payment encryption key and sends the product via some other medium (e.g., certified post).

5.3.2 Formal Verification

In this section, we formally prove the above auction protocol to be secure, atomic, anonymous, and private. The initial state and definitions of the properties remain the same as defined in Chapter 3.

5.3.3 Proof

In this section, we first analyze the protocol in the presence of passive attacks to formally show that the above-mentioned properties are ensured. Next, active attacks are included in the analysis. Protocol cost is not covered as part of the proofs.

Analysis Under Passive Attacks

Let A be the set of assumptions or the initial state we start with (where all the beliefs of principals are true), P the protocol, and C the properties we want to prove. Assuming only passive attacks, the protocol may still be interrupted by either the customer or the merchant failing to execute the next step. The protocol steps that are executed, however, may not be tampered with or forged. We must show that C holds after every properly executed step of the protocol. That is, if an is the nth step of the protocol, then we must show:

∀n : {A} a1; a2; ...; an {C}

We formally prove this below. In the next section, we will incorporate active attacks into the proof.

Outline of Proof for Step 1

Prove: {A} a1 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, or {A} a1 {C}.

1. T(a1) : B → everybody : [product description, third parties]_b

1.1. A ∪ T(a1) is positive

1.2. A allows a1

1.3. A ∪ T(a1) ⊢ C

1.4. Q.E.D.

Proof for Step 1.1

1.1. A ∪ T(a1) is positive

Proof sketch: We use Definition 4 to prove the above.

1.1.1. T(a1) := B once_said [product description, third parties]_b ∧ A sees [product description, third parties]_b ∧ I sees [product description, third parties]_b

1.1.2. T(a1) is positive

1.1.3. A is positive

1.1.4. Q.E.D.

Proof for Step 1.2

1.2. A allows a1

Proof sketch: We use Definition 3 to prove the above.

1.2.1. A ⊢ B believes [product description, third parties]_b

1.2.2. Q.E.D.

Proof for Step 1.3

1.3. A ∪ T(a1) ⊢ C

Proof sketch: We show that in step a1 no unwanted information is revealed to the intruder.

1.3.1. T(a1) := B once_said [product description, third parties]_b ∧ A sees [product description, third parties]_b ∧ I sees [product description, third parties]_b

1.3.2. New state is A ∪ [product description, third parties]_b ∈ O_B ∧ [product description, third parties]_b ∈ S_A ∧ [product description, third parties]_b ∈ S_I

1.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product ∉ S_A ∧ payment ∉ S_B

1.3.4. Q.E.D.

Outline of Proof for Step 2

Assume: A ∪ T(a1) ⊢ C

Prove: {C} a2 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, or {C} a2 {C}.

2. T(a2) : B → everybody : [product identifier, product^E]_b

2.1. C ∪ T(a2) is positive

2.2. C allows a2

2.3. C ∪ T(a2) ⊢ C

2.4. Q.E.D.

Proof for Step 2.1

2.1. C ∪ T(a2) is positive

Proof sketch: We use Definition 4 to prove the above.

2.1.1. T(a2) := B once_said [product identifier, product^E]_b ∧ A sees [product identifier, product^E]_b ∧ I sees [product identifier, product^E]_b

2.1.2. T(a2) is positive

2.1.3. C is positive

2.1.4. Q.E.D.

Proof for Step 2.2

2.2. C allows a2

Proof sketch: We use Definition 3 to prove the above.

2.2.1. C ⊢ B believes [product identifier, product^E]_b

2.2.2. Q.E.D.

Proof for Step 2.3

2.3. C ∪ T(a2) ⊢ C

Proof sketch: We show that in step a2 no unwanted information is revealed to the intruder.

2.3.1. T(a2) := B once_said [product identifier, product^E]_b ∧ A sees [product identifier, product^E]_b ∧ I sees [product identifier, product^E]_b

2.3.2. New state is C ∪ [product identifier, product^E]_b ∈ O_B ∧ [product identifier, product^E]_b ∈ S_A ∧ [product identifier, product^E]_b ∈ S_I

2.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product ∉ S_A ∧ payment ∉ S_B

2.3.4. Q.E.D.

Outline of Proof for Step 3

Assume: A ∪ T(a1, a2) ⊢ C

Prove: {C} a3 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a3 {C}.

3. T(a3) : A → B : [[message 1, a, third party t, payment^e, price]_ā]^B

3.1. C ∪ T(a3) is positive

3.2. C allows a3

3.3. C ∪ T(a3) ⊢ C

3.4. Q.E.D.

Proof for Step 3.1

3.1. C ∪ T(a3) is positive

Proof sketch: We use Definition 4 to prove the above.

3.1.1. T(a3) := A once_said [message 1, a, t, payment^e, price]_ā ∧ B sees [message 1, a, t, payment^e, price]_ā ∧ I sees [[message 1, a, t, payment^e, price]_ā]^B

3.1.2. T(a3) is positive

3.1.3. C is positive

3.1.4. Q.E.D.

Proof for Step 3.2

3.2. C allows a3

Proof sketch: We use Definition 3 to prove the above.

3.2.1. C ⊢ A believes [message 1, a, t, payment^e, price]_ā

3.2.2. Q.E.D.

Proof for Step 3.3

3.3. C ∪ T(a3) ⊢ C

Proof sketch: We show that in step a3 no unwanted information is revealed to the intruder.

3.3.1. T(a3) := A once_said [message 1, a, t, payment^e, price]_ā ∧ B sees [message 1, a, t, payment^e, price]_ā ∧ I sees [[message 1, a, t, payment^e, price]_ā]^B

3.3.2. New state is C ∪ [message 1, a, t, payment^e, price]_ā ∈ O_A ∧ [message 1, a, t, payment^e, price]_ā ∈ S_B ∧ [[message 1, a, t, payment^e, price]_ā]^B ∈ S_I

3.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product^E ∈ S_A ∧ payment^e ∈ S_B

3.3.4. Q.E.D.

Outline of Proof for Step 4

Assume: A ∪ T(a1, ..., a3) ⊢ C

Prove: {C} a4 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a4 {C}.

4. T(a4) : B → A : [[payment^e]_b, [price]_ā]^a

4.1. C ∪ T(a4) is positive

4.2. C allows a4

4.3. C ∪ T(a4) ⊢ C

4.4. Q.E.D.

Proof for Step 4.1

4.1. C ∪ T(a4) is positive

Proof sketch: We use Definition 4 to prove the above.

4.1.1. T(a4) := B once_said [[payment^e]_b, [price]_ā] ∧ A sees [[payment^e]_b, [price]_ā] ∧ I sees [[payment^e]_b, [price]_ā]^a

4.1.2. T(a4) is positive

4.1.3. C is positive

4.1.4. Q.E.D.

Proof for Step 4.2

4.2. C allows a4

Proof sketch: We use Definition 3 to prove the above.

4.2.1. C ⊢ B believes [[payment^e]_b, [price]_ā]^a

4.2.2. Q.E.D.

Proof for Step 4.3

4.3. C ∪ T(a4) ⊢ C

Proof sketch: We show that in step a4 no unwanted information is revealed to the intruder.

4.3.1. T(a4) := B once_said [[payment^e]_b, [price]_ā] ∧ A sees [[payment^e]_b, [price]_ā] ∧ I sees [[payment^e]_b, [price]_ā]^a

4.3.2. New state is C ∪ [[payment^e]_b, [price]_ā]^a ∈ O_B ∧ [[payment^e]_b, [price]_ā] ∈ S_A ∧ [[payment^e]_b, [price]_ā]^a ∈ S_I

4.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product^E ∈ S_A ∧ [payment^e]_b ∈ S_A

4.3.4. Q.E.D.

Outline of Proof for Step 5

Assume: A ∪ T(a1, ..., a4) ⊢ C

Prove: {C} a5 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, or {C} a5 {C}.

5. T(a5) : B → everybody : [product description, maximum bid]_b

5.1. C ∪ T(a5) is positive

5.2. C allows a5

5.3. C ∪ T(a5) ⊢ C

5.4. Q.E.D.

Proof for Step 5.1

5.1. C ∪ T(a5) is positive

Proof sketch: We use Definition 4 to prove the above.

5.1.1. T(a5) := B once_said [product description, maximum bid]_b ∧ A sees [product description, maximum bid]_b ∧ I sees [product description, maximum bid]_b

5.1.2. T(a5) is positive

5.1.3. C is positive

5.1.4. Q.E.D.

Proof for Step 5.2

5.2. C allows a5

Proof sketch: We use Definition 3 to prove the above.

5.2.1. C ⊢ B believes [product description, maximum bid]_b

5.2.2. Q.E.D.

Proof for Step 5.3

5.3. C ∪ T(a5) ⊢ C

Proof sketch: We show that in step a5 no unwanted information is revealed to the intruder.

5.3.1. T(a5) := B once_said [product description, maximum bid]_b ∧ A sees [product description, maximum bid]_b ∧ I sees [product description, maximum bid]_b

5.3.2. New state is C ∪ [product description, maximum bid]_b ∈ O_B ∧ [product description, maximum bid]_b ∈ S_A ∧ [product description, maximum bid]_b ∈ S_I

5.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product^E ∈ S_A ∧ [payment^e]_b ∈ S_A

5.3.4. Q.E.D.

Outline of Proof for Step 6

Assume: A ∪ T(a1, ..., a5) ⊢ C

Prove: {C} a6 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, or {C} a6 {C}.

6. T(a6) : B → everybody : [product description, final sale price]_b

6.1. C ∪ T(a6) is positive

6.2. C allows a6

6.3. C ∪ T(a6) ⊢ C

6.4. Q.E.D.

Proof for Step 6.1

6.1. C ∪ T(a6) is positive

Proof sketch: We use Definition 4 to prove the above.

6.1.1. T(a6) := B once_said [product description, final sale price]_b ∧ A sees [product description, final sale price]_b ∧ I sees [product description, final sale price]_b

6.1.2. T(a6) is positive

6.1.3. C is positive

6.1.4. Q.E.D.

Proof for Step 6.2

6.2. C allows a6

Proof sketch: We use Definition 3 to prove the above.

6.2.1. C ⊢ B believes [product description, final sale price]_b

6.2.2. Q.E.D.

Proof for Step 6.3

6.3. C ∪ T(a6) ⊢ C

Proof sketch: We show that in step a6 no unwanted information is revealed to the intruder.

6.3.1. T(a6) := B once_said [product description, final sale price]_b ∧ A sees [product description, final sale price]_b ∧ I sees [product description, final sale price]_b

6.3.2. New state is C ∪ [product description, final sale price]_b ∈ O_B ∧ [product description, final sale price]_b ∈ S_A ∧ [product description, final sale price]_b ∈ S_I

6.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product^E ∈ S_A ∧ [payment^e]_b ∈ S_A

6.3.4. Q.E.D.

Outline of Proof for Step 7

Assume: A ∪ T(a1, ..., a6) ⊢ C

Prove: {C} a7 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, or {C} a7 {C}.

7. T(a7) : A → B : [[product description, final sale price]_b, [1/e]_ā]^B

7.1. C ∪ T(a7) is positive

7.2. C allows a7

7.3. C ∪ T(a7) ⊢ C

7.4. Q.E.D.

Proof for Step 7.1

7.1. C ∪ T(a7) is positive

Proof sketch: We use Definition 4 to prove the above.

7.1.1. T(a7) := A once_said [[product description, final sale price]_b, [1/e]_ā] ∧ B sees [[product description, final sale price]_b, [1/e]_ā] ∧ I sees [[product description, final sale price]_b, [1/e]_ā]^B

7.1.2. T(a7) is positive

7.1.3. C is positive

7.1.4. Q.E.D.

Proof for Step 7.2

7.2. C allows a7

Proof sketch: We use Definition 3 to prove the above.

7.2.1. C ⊢ A believes [[product description, final sale price]_b, [1/e]_ā]^B

7.2.2. Q.E.D.

Proof for Step 7.3

7.3. C ∪ T(a7) ⊢ C

Proof sketch: We show that in step a7 no unwanted information is revealed to the intruder.

7.3.1. T(a7) := A once_said [[product description, final sale price]_b, [1/e]_ā] ∧ B sees [[product description, final sale price]_b, [1/e]_ā] ∧ I sees [[product description, final sale price]_b, [1/e]_ā]^B

7.3.2. New state is C ∪ [[product description, final sale price]_b, [1/e]_ā] ∈ O_A ∧ [[product description, final sale price]_b, [1/e]_ā] ∈ S_B ∧ [[product description, final sale price]_b, [1/e]_ā]^B ∈ S_I

7.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product^E ∈ S_A ∧ payment ∈ S_B

7.3.4. Q.E.D.

Outline of Proof for Step 8

Assume: A ∪ T(a1, ..., a7) ⊢ C

Prove: {C} a8 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, or {C} a8 {C}.

8. T(a8) : B → A : [[1/e, 1/E]_b]^a

8.1. C ∪ T(a8) is positive

8.2. C allows a8

8.3. C ∪ T(a8) ⊢ C

8.4. Q.E.D.

Proof for Step 8.1

8.1. C ∪ T(a8) is positive

Proof sketch: We use Definition 4 to prove the above.

8.1.1. T(a8) := B once_said [1/e, 1/E]_b ∧ A sees [1/e, 1/E]_b ∧ I sees [[1/e, 1/E]_b]^a

8.1.2. T(a8) is positive

8.1.3. C is positive

8.1.4. Q.E.D.

Proof for Step 8.2

8.2. C allows a8

Proof sketch: We use Definition 3 to prove the above.

8.2.1. C ⊢ B believes [[1/e, 1/E]_b]^a

8.2.2. Q.E.D.

Proof for Step 8.3

8.3. C ∪ T(a8) ⊢ C

Proof sketch: We show that in step a8 no unwanted information is revealed to the intruder.

8.3.1. T(a8) := B once_said [1/e, 1/E]_b ∧ A sees [1/e, 1/E]_b ∧ I sees [[1/e, 1/E]_b]^a

8.3.2. New state is C ∪ [[1/e, 1/E]_b]^a ∈ O_B ∧ [1/e, 1/E]_b ∈ S_A ∧ [[1/e, 1/E]_b]^a ∈ S_I

8.3.3. ∧ 1/e ∉ S_I ∧ 1/E ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ ā ∉ S_I ∧ product ∈ S_A ∧ payment ∈ S_B

8.3.4. Q.E.D.

Analysis Under Active Attacks

In this section, we analyze active intruder attacks. We must show that an intruder is not allowed to send a message that reads like a protocol message but is not identical to a protocol message. If it were identical to the protocol message for that transaction run, it could cause no harm; such a replication would be equivalent to stopping the message on the network momentarily and then releasing it.

Some messages may not have any components that can change. We will not analyze such messages, as the proof for those is vacuously true. For those messages where the content can be marginally changed to dupe the actual participants, we use Definition 3 to show that the intruder is not allowed to send the changed message.
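The core of each active-attack argument is that a message signed with a key the intruder does not hold cannot be modified without the change being detected. The following toy illustration conveys the idea; it uses a keyed hash as a stand-in for a real public-key signature scheme, and the key and message values are placeholders:

```python
import hashlib

def toy_sign(msg, key):
    """Stand-in for a signature: a digest bound to both the key and message."""
    return hashlib.sha256(key + msg).hexdigest()

def toy_verify(msg, sig, key):
    """A signature verifies only if the message is unchanged."""
    return toy_sign(msg, key) == sig

bobs_key = b"b"  # Bob's private key; by assumption b is not in K_I
original = b"product description, maximum bid: 150"
sig = toy_sign(original, bobs_key)

# The intruder can replay (original, sig) harmlessly, but any modification
# of the content invalidates the signature without knowledge of b.
tampered = b"product description, maximum bid: 500"
```

Replaying the unmodified pair succeeds but causes no harm (as argued above), while the tampered message fails verification, which is the informal content of the "Good key ensures utterer" axiom used in the proofs below.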

Proof for Step 1

Assume: Under passive attacks, A ∪ T(a1) ⊢ C

Prove: A ∪ T(a1) not allows I1

Proof sketch: We assume that security under passive attacks has already been proven for Step 1. We now use Definition 3 to prove security under active attacks.

1. I1 : I → A : [modified (original a1)]_b

1.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

1.2. A ∪ T(a1) allows I → A : [modified (original a1)]_b, b ∉ K_I

:= A ∪ T(a1) ⊢ I sees [modified (original a1)]_b

1.3. From assumption, B's private key b ∉ K_I

1.4. A ∪ T(a1) allows I → A : [modified (original a1)]_b

:= A ∪ T(a1) ⊢ I sees [modified (original a1)]_b

1.5. I not sees [modified (original a1)]_b

1.6. Q.E.D.

Proof for Step 1.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

1.5. I not sees [modified (original a1)]_b

1.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

1.5.2. ⊢ b private_key_of B ∧ I sees [modified (original a1)]_b → B once_said [modified (original a1)]_b

1.5.3. We know, B not once_said [modified (original a1)]_b

1.5.4. Q.E.D.

Proof for Step 2

Assume: A ∪ T(a1) ⊢ C

Under passive attacks, A ∪ T(a1, a2) ⊢ C

Prove: A ∪ T(a1, a2) not allows I2

Proof sketch: We assume that security under passive attacks has already been proven for Step 2. We now use Definition 3 to prove security under active attacks.

2. I2 : I → A : [modified (original a2)]_b

2.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

2.2. A ∪ T(a1, a2) allows I → A : [modified (original a2)]_b, b ∉ K_I

:= A ∪ T(a1, a2) ⊢ I sees [modified (original a2)]_b

2.3. From assumption, B's private key b ∉ K_I

2.4. A ∪ T(a1, a2) allows I → A : [modified (original a2)]_b

:= A ∪ T(a1, a2) ⊢ I sees [modified (original a2)]_b

2.5. I not sees [modified (original a2)]_b

2.6. Q.E.D.

Proof for Step 2.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

2.5. I not sees [modified (original a2)]_b

2.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

2.5.2. ⊢ b private_key_of B ∧ I sees [modified (original a2)]_b → B once_said [modified (original a2)]_b

2.5.3. We know, B not once_said [modified (original a2)]_b

2.5.4. Q.E.D.

Proof for Step 3

Assume: A ∪ T(a1, a2) ⊢ C

Under passive attacks, A ∪ T(a1, a2, a3) ⊢ C

Prove: A ∪ T(a1, a2, a3) not allows I3

Proof sketch: We assume that security has already been proved for Step 2 and that security under passive attacks has been proven for Step 3. We now use Definition 3 to prove security under active attacks.

3. I3 : I → B : [[modified (original a3)]_ā]^B

3.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

3.2. A ∪ T(a1, a2, a3) allows I → B : [[modified (original a3)]_ā]^B, ā ∉ K_I

:= A ∪ T(a1, a2, a3) ⊢ I sees [modified (original a3)]_ā

3.3. From assumption, Alice's pseudo private key ā ∉ K_I

3.4. A ∪ T(a1, a2, a3) allows I → B : [[modified (original a3)]_ā]^B

:= A ∪ T(a1, a2, a3) ⊢ I sees [modified (original a3)]_ā

3.5. I not sees [modified (original a3)]_ā

3.6. Q.E.D.

Proof for Step 3.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

3.5. I not sees [modified (original a3)]_ā

3.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

3.5.2. ⊢ ā private_key_of A ∧ I sees [modified (original a3)]_ā → A once_said [modified (original a3)]_ā

3.5.3. We know, A not once_said [modified (original a3)]_ā

3.5.4. Q.E.D.

Proof for Step 4

Assume: A ∪ T(a1, ..., a3) ⊢ C

Under passive attacks, A ∪ T(a1, ..., a4) ⊢ C

Prove: A ∪ T(a1, ..., a4) not allows I4

Proof sketch: We assume that security has already been proved for Step 3 and that security under passive attacks has been proven for Step 4. We now use Definition 3 to prove security under active attacks.

4. I4 : I → A : [[modified payment^e]_b, [price]_ā]^a

4.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

4.2. A ∪ T(a1, ..., a4) allows I → A : [[modified payment^e]_b, [price]_ā]^a, b ∉ K_I

:= A ∪ T(a1, ..., a4) ⊢ I sees [modified payment^e]_b, [price]_ā

4.3. From assumption, B's private key b ∉ K_I

4.4. A ∪ T(a1, ..., a4) allows I → A : [[modified payment^e]_b, [price]_ā]^a

:= A ∪ T(a1, ..., a4) ⊢ I sees [modified payment^e]_b, [price]_ā

4.5. I not sees [modified payment^e]_b, [price]_ā

4.6. Q.E.D.

Proof for Step 4.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

4.5. I not sees [modified payment^e]_b, [price]_ā

4.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

4.5.2. ⊢ b private_key_of B ∧ I sees [modified payment^e]_b, [price]_ā → B once_said [modified payment^e]_b, [price]_ā

4.5.3. We know, B not once_said [modified payment^e]_b, [price]_ā

4.5.4. Q.E.D.

Proof for Step 5

Assume: Under passive attacks, A ∪ T(a1, ..., a5) ⊢ C

Prove: A ∪ T(a1, ..., a5) not allows I5

Proof sketch: We assume that security has already been proved for Step 4 and that security under passive attacks has been proven for Step 5. We now use Definition 3 to prove security under active attacks.

5. I5 : I → A : [modified (original a5)]_b

5.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

5.2. A ∪ T(a1, ..., a5) allows I → A : [modified (original a5)]_b, b ∉ K_I

:= A ∪ T(a1, ..., a5) ⊢ I sees [modified (original a5)]_b

5.3. From assumption, B's private key b ∉ K_I

5.4. A ∪ T(a1, ..., a5) allows I → A : [modified (original a5)]_b

:= A ∪ T(a1, ..., a5) ⊢ I sees [modified (original a5)]_b

5.5. I not sees [modified (original a5)]_b

5.6. Q.E.D.

Proof for Step 5.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

5.5. I not sees [modified (original a5)]_b

5.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

5.5.2. ⊢ b private_key_of B ∧ I sees [modified (original a5)]_b → B once_said [modified (original a5)]_b

5.5.3. We know, B not once_said [modified (original a5)]_b

5.5.4. Q.E.D.

Proof for Step 6

Assume: Under passive attacks, A ∪ T(a1, ..., a6) ⊢ C

Prove: A ∪ T(a1, ..., a6) not allows I6

Proof sketch: We assume that security has already been proved for Step 5 and that security under passive attacks has been proven for Step 6. We now use Definition 3 to prove security under active attacks.

6. I6 : I → A : [modified (original a6)]_b

6.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

6.2. A ∪ T(a1, ..., a6) allows I → A : [modified (original a6)]_b, b ∉ K_I

:= A ∪ T(a1, ..., a6) ⊢ I sees [modified (original a6)]_b

6.3. From assumption, B's private key b ∉ K_I

6.4. A ∪ T(a1, ..., a6) allows I → A : [modified (original a6)]_b

:= A ∪ T(a1, ..., a6) ⊢ I sees [modified (original a6)]_b

6.5. I not sees [modified (original a6)]_b

6.6. Q.E.D.

Proof for Step 6.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

6.5. I not sees [modified (original a6)]_b

6.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

6.5.2. ⊢ b private_key_of B ∧ I sees [modified (original a6)]_b → B once_said [modified (original a6)]_b

6.5.3. We know, B not once_said [modified (original a6)]_b

6.5.4. Q.E.D.

Proof for Step 7

Assume: A ∪ T(a1, ..., a6) ⊢ C

Under passive attacks, A ∪ T(a1, ..., a7) ⊢ C

Prove: A ∪ T(a1, ..., a7) not allows I7

Proof sketch: We assume that security has already been proved for Step 6 and that security under passive attacks has been proven for Step 7. We now use Definition 3 to prove security under active attacks.

7. I7 : I → B : [modified (original a7)]^B

7.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

7.2. A ∪ T(a1, ..., a7) allows I → B : [modified (original a7)]^B, b ∉ K_I, ā ∉ K_I

:= A ∪ T(a1, ..., a7) ⊢ I sees [product description, modified sold for price]_b ∨ I sees [modified 1/e]_ā

7.3. From assumption, b ∉ K_I and ā ∉ K_I

7.4. A ∪ T(a1, ..., a7) allows I → B : [modified (original a7)]^B

:= A ∪ T(a1, ..., a7) ⊢ I sees [product description, modified sold for price]_b ∨ I sees [modified 1/e]_ā

7.5. I not sees [product description, modified sold for price]_b ∧ I not sees [modified 1/e]_ā

7.6. Q.E.D.

Proof for Step 7.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

7.5. I not sees [product description, modified sold for price]_b ∧ I not sees [modified 1/e]_ā

7.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

7.5.2. ⊢ b private_key_of B ∧ I sees [product description, modified sold for price]_b → B once_said [product description, modified sold for price]_b

7.5.3. We know, B not once_said [product description, modified sold for price]_b

7.5.4. ⊢ ā private_key_of A ∧ I sees [modified 1/e]_ā → A once_said [modified 1/e]_ā

7.5.5. We know, A not once_said [modified 1/e]_ā

7.5.6. Q.E.D.

Proof for Step 8

Assume: A ∪ T(a1, ..., a7) ⊢ C

Under passive attacks, A ∪ T(a1, ..., a8) ⊢ C

Prove: A ∪ T(a1, ..., a8) not allows I8

Proof sketch: We assume that security has already been proved for Step 7 and that security under passive attacks has been proven for Step 8. We now use Definition 3 to prove security under active attacks.

8. I8 : I → A : [[1/e, modified 1/E]_b]^a

8.1. From Definition 3, A allows P → Q : X^k, k ∉ K_P

:= A ⊢ P sees X^k

8.2. A ∪ T(a1, ..., a8) allows I → A : [[1/e, modified 1/E]_b]^a, b ∉ K_I

:= A ∪ T(a1, ..., a8) ⊢ I sees [1/e, modified 1/E]_b

8.3. From assumption, B's private key b ∉ K_I

8.4. A ∪ T(a1, ..., a8) allows I → A : [[1/e, modified 1/E]_b]^a

:= A ∪ T(a1, ..., a8) ⊢ I sees [1/e, modified 1/E]_b

8.5. I not sees [1/e, modified 1/E]_b

8.6. Q.E.D.

Proof for Step 8.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

8.5. I not sees [1/e, modified 1/E]_b

8.5.1. From Axiom Good key ensures utterer [20],

⊢ k private_key_of P ∧ R sees [X]_k → P once_said X

8.5.2. ⊢ b private_key_of B ∧ I sees [1/e, modified 1/E]_b → B once_said [1/e, modified 1/E]_b

8.5.3. We know, B not once_said [1/e, modified 1/E]_b

8.5.4. Q.E.D.

5.4 Real-Time Aware Auction Protocol

The value of real-time aware auctions is not as immediately visible as it may be for general two-party transactions.

The original protocol is modified by first identifying the following messages in the transaction protocol.

Order: A message from the customer to the merchant where the customer makes a binding agreement to buy the goods. This may be in the form of a payment, a binding promise to pay, etc.

Product: A message from the merchant to the customer that finally makes the goods available to the customer. The message may contain the goods themselves, a decryption key to previously supplied encrypted goods, etc.

The customer's order (Order/Step 3) and the final product delivery (Product/Step 8) are both routed through the secure processor. The product is available to the customer only after the product decryption key is received. Hence, this message is the final product delivery message.

5.5 Summary

In this chapter, a secure, atomic protocol for electronic auction transaction execution was presented. The electronic auction protocol presented above was formally proved to be atomic, secure, anonymous, and private. Furthermore, it was shown that the protocol incurs nominal charges and can easily be extended to transactions involving non-electronic goods. A sketch of the real-time aware version of the protocol was also presented.

In the next chapter, we emphasize the need for better protocols for stock and commodity markets and develop a protocol that is secure, atomic, anonymous, and private.

CHAPTER 6

STOCK MARKET TRANSACTIONS

Stock market transactions constitute the largest volume of present-day electronic trade. The stock market is also the most technology-friendly market and has embraced state-of-the-art technology to keep up with the tremendous growth in trading volume over the years. While stock exchanges readily use the available technology to execute their transactions, and electronic brokers (e.g., E*Trade, Discover) are becoming successful, the traditional transaction model has not been adapted to take advantage of new features provided by recent technological advancements.

In the traditional stock exchange model [5], the investor may place two types of transaction orders: (a) a market order to buy/sell stock at the current/going price, or (b) a limit order where the investor may specify a price that is acceptable to the investor. When a limit order is placed, the stock must be bought/sold at the specified price or better. For this reason, the limit order is less risky than a market order when the trend in the market is against the investor. However, when the market trend (or stock price change) is favorable to the customer, the market order can be more profitable than the limit order. These orders do not, however, cover certain investment scenarios. Furthermore, the traditional stock exchange model does not support detection of unnecessary delays in transaction execution. In the next two paragraphs, we motivate the need for more sophisticated transaction orders and the ability to detect delays in transaction execution, respectively.

Consider the following situation during a stock market crash: An investor, Alice, wants to cut her losses and sell shares for company ABC. However, if the share price drops below a certain threshold price, she prefers to keep the shares and wait for the prices to pick up again. If she places a market order, she risks selling her shares at a price below the threshold. If she places a limit order to sell at the threshold price, she risks selling at the threshold price even if she could have done better. Ideally, she would like to place an order to sell at current price unless the price falls below the specified threshold. Presently, there exists no method for the investor to place such an order.

Now, consider the following situation: Alice finds through technical analysis (a tool to predict future trends in the stock market based on past trends) that the price of ABC shares is reaching its peak and will fall soon. She decides to sell the shares at the current market price. Now, if her order does not reach the potential buyers in real time, she would miss the market and the share prices would start falling. Delays may be caused by many reasons: network failures, hot spots, a broker's laziness or neglect, deliberate malpractice, etc. In present-day transactions, there is a lot of emphasis on fast and reliable resources to avoid delays. However, at present, there exists no mechanism for the investor to detect a delay and receive any refund/compensation for it.

In this thesis, we address the two issues described above. A system model is described where (a) a third type of transaction order (a threshold order) is allowed in addition to the market order and the limit order, and (b) a real-time constraint is defined, the violation of which indicates a delay in transaction execution. Based on this system model, a protocol for stock market transactions [78, 79] is presented. The two important features of this protocol are the inclusion of a new type of transaction order and a mechanism to detect delays in transaction execution. The protocol is also shown to have all five properties of e-commerce transactions: security, atomicity, anonymity, privacy, and low overhead cost.

6.1 Existing Work

The stock market is an auction market. However, it is much more complex than traditional auctions. It involves many sellers selling the same product (or shares of the same company) and many buyers placing simultaneous bids to multiple sellers.

In traditional auctions, there is one seller, and the buyers only place bids to this seller, making the price negotiation and transaction execution simpler. A number of traditional electronic auction houses are already established on the web, for example, Onsale (http://www.onsale.com), First Auction (http://www.firstauction.com), ZAuction (http://www.zauction.com), and Dealdeal (http://www.dealdeal.com). However, none of these sites are real-time aware, and few of them ensure security, atomicity, anonymity, privacy, and low overhead cost. This is due to the inherent difficulties in automated price negotiation [17]. A number of auction protocols [17, 35, 47, 58, 71, 92, 87] have also been proposed in the literature. These protocols are each directed at different flavors of traditional auction applications (e.g., sealed-bid auctions, multiple-round auctions, anonymous auctions, auctions using mobile agents) and are not suitable for stock market transactions. NASDAQ [4] and OptiMark [6] are the only fully electronic stock markets in the United States. While NASDAQ has been able to ensure security, anonymity, atomicity, privacy, and low overhead cost, it does not support any new model for stock transactions or provide a mechanism to detect delays in transaction execution. OptiMark supports certain new types of transactions, especially those involving transacting a large percentage of a company's common stock.

Not enough is known about OptiMark to determine whether it can easily support the threshold model for stock transactions. However, it is known that OptiMark does not yet have any mechanism to detect violations of real-time constraints.

Real-time awareness in electronic auctions has recently been addressed in the literature. However, there still exists no auction protocol that has a mechanism to detect violation of real-time constraints. A possible approach for the design of real-time aware protocols has been discussed in a recent paper [87]. In [47], an auction protocol suitable for real-time applications is proposed. However, the protocol has no mechanism to detect real-time violation. In [77], the authors have developed a methodology to enhance certain protocols for real-time applications with real-time awareness.

Clearly, the work towards designing protocols even for simple models of auction is still preliminary. To the best of our knowledge, there have been no successful attempts to extend such protocols to more complex auction models or incorporate real-time violation detection into auction.

6.2 System Model

A stock market transaction involves three types of parties: buyers, sellers, and the stock exchange. Each seller wishes to sell shares for a certain company, and places an order to do so. Each buyer wishes to buy shares of a certain company, and places an order to do so. The stock exchange ensures that the transactions are committed in a proper and timely manner.

The investors (buyers and sellers) are connected to the exchange through an electronic network. They exchange messages containing information, goods, payment, etc. over this network. The communication channels are in-order (FIFO). The investors/brokers (alternate terms used for the buyers and sellers collectively) are in close proximity (measured in terms of message delay) to a trusted clock. This trusted clock may be a secure co-processor [90, 91] attached to the investor's workstation. Secure co-processors are secure physical devices that erase their RAM and CPU registers if they are accessed in any manner other than through the defined interfaces. These devices have limited capabilities, are inexpensive, and are available commercially [89]. Many investors may share the same clock. The exchange has a similar secure clock. In order to avoid problems involving clock synchronization, we use a single trusted clock for all measurements in a single transaction. The local clocks of the investor and the exchange may not be synchronized with this clock, but we expect the clock periods to be the same. This is a reasonable assumption given the accuracy of modern-day clocks. Hence, we assume that if the drift between the investor's clock and the trusted clock is δ at time T, it remains close to δ at time T + ΔT. Periodically (say, every 5 hours), this drift is calculated.

6.2.1 Transaction Model

An order is an instruction to buy or sell a certain stock. An order to buy stock is called a bid. An order to sell is referred to as an offer. Each bid/offer may be of three types:

• Market Order: A market order is an order to buy or sell a stock at the current price. In other words, the stock will be bought from the lowest offerer, or sold to the highest bidder.

• Limit Order: A limit order is an order to buy or sell a stock at a certain price or better. In other words, the stock will be bought at the specified price or lower, or sold at the specified price or higher. The investor buying/selling the stock can specify the limit order price. A limit order is less risky than a market order, as sudden shifts in the value of a stock cannot adversely affect the investor. However, favorable trends in the market are better capitalized by a market order than a limit order.

• Threshold Order: A threshold order is an order to buy or sell a stock at the current price unless the current price is worse than a specified threshold. A threshold order always gets the specified price or better. However, unlike a limit order, the threshold order also capitalizes on favorable trends in the market.
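For concreteness, the price-acceptability rule shared by the three order types can be sketched as follows. This is an illustrative sketch, not part of the protocol; the class and function names are our own. Note that limit and threshold orders share the same acceptability condition and differ only in the execution price (a limit order executes at its own price, a threshold order at the going price).

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Side(Enum):
    BID = "bid"      # an order to buy
    OFFER = "offer"  # an order to sell

@dataclass
class Order:
    side: Side
    kind: str                      # "market", "limit", or "threshold"
    price: Optional[float] = None  # unset for market orders

def acceptable(order: Order, going_price: float) -> bool:
    """Would the going price be acceptable to this order?

    A market order accepts any current price.  A limit or threshold
    order accepts only the specified price or better: lower for a bid,
    higher for an offer.
    """
    if order.kind == "market":
        return True
    if order.side is Side.BID:
        return going_price <= order.price   # buy at the stated price or lower
    return going_price >= order.price       # sell at the stated price or higher
```

For example, a threshold bid with a maximum price of 50 accepts the going price only while it stays at or below 50; if the price moves above the threshold, the order simply waits.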

6.2.2 Real-Time Constraint

As noted earlier, in a stock market transaction, the buyer and seller must commit the transaction within a specified period of time (T) for the real-time constraint to be satisfied. The time T is measured from the time the investor sends the order to the time the investor receives the final price for the transaction. This is illustrated in Figure 6.1. Let X be the time required for the order to be received by the exchange, Y the time required for the exchange to process the order, and Z the time required to inform the investor of the final price. The real-time constraint can then be expressed as: X + Y + Z ≤ T.

X = time for investor's order to reach the exchange
Y = processing time at the exchange
Z = time for the final price to reach the investor
T = stipulated time
S = stock exchange
I = investor

Figure 6.1: Timing diagram showing a real-time transaction

It is important to note that real-time constraints are meaningful only for market orders. Limit and threshold orders may never execute if the stock cannot be transacted for the specified price or better. It is therefore not meaningful to talk about time constraints on processing these orders.

Also, notice that the real-time constraint is restricted to only one portion of the transaction execution. The actual exchange of the shares for money is not included in the constraint. This is due to the nature of stock market transactions. It is important in these transactions to determine the final transaction price in real time, as delays (especially during stock market booms and crashes) can drastically change the final transaction price and prove financially damaging to the investor. However, once the price is finalized, it is not as imperative for the actual transaction to complete in real time.

We call the ability to detect and prove to a third party the violation of (or adherence to) the above real-time constraint the real-time awareness of a protocol. In the next section, we present a real-time aware protocol for stock market transactions that ensures security, atomicity, and low overhead cost of the transaction, along with anonymity and privacy of the investors involved.

6.3 Protocol Description

In this section, we present a protocol for stock market transactions based on the system model described in Section 6.2. This protocol supports all three types of transaction orders described in Section 6.2.1, satisfies the real-time constraints described in Section 6.2.2, and ensures the properties of security, atomicity, anonymity, privacy, and low overhead cost. The protocol can be divided into five stages: computing the clock drift, getting quotes, placing the orders, processing the orders, and transaction completion. In the following sections, we first present the notation used to describe transactions. Next, we describe the five stages of the stock market transaction protocol.

6.3.1 Notations

We use the following notation to describe the protocol.

A stands for the investor, Alice; a stands for Alice's public key and 1/a for Alice's private key.

B stands for the investor, Bob; b stands for Bob's public key and 1/b for Bob's private key.

S stands for the stock exchange; s stands for the exchange's public key and 1/s for the exchange's private key.

e and E represent randomly generated encryption keys; 1/e and 1/E are the corresponding decryption keys.

P → Q : [message]_{1/p} stands for "P sends message to Q, signed with P's private key."

P → Q : [message]^q stands for "P sends message to Q, secured with Q's public key."

In the next five sections, we describe the five stages of the transaction protocol.

6.3.2 Computing Clock Drift

This computation is not really a protocol component. It is executed once every few hours, and then used for as many transactions as may follow before the next computation. It involves four signed messages exchanged between the secure co-processors of the investor and the exchange. The investor's co-processor initiates the computation, when the processor is free, by sending the local time and a nonce. The exchange's co-processor responds with a similar message. If this response is received within reasonable time, the investor's co-processor calculates the difference, δ, in the times given by the two messages, and then sends this, in a signed and timestamped message, to the exchange's co-processor for confirmation. If the response is not received within reasonable time, the co-processor re-initiates the computation. On receiving δ, the exchange's co-processor verifies it, signs and timestamps it, and sends it back to the investor's co-processor. The investor's co-processor stores this and attaches it to any demand for refund.
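The drift computation above can be sketched in miniature. The sketch below is illustrative only: the clock callables and the round-trip bound are our own assumptions, and an HMAC over a shared demonstration key stands in for the co-processors' public-key signatures.

```python
import hashlib
import hmac
import os

DEMO_KEY = b"coprocessor-demo-key"  # stand-in for the co-processors' signing keys

def sign(payload: bytes) -> bytes:
    """HMAC stand-in for a co-processor signature (illustrative only)."""
    return hmac.new(DEMO_KEY, payload, hashlib.sha256).digest()

def compute_drift(investor_clock, exchange_clock, max_rtt=1.0):
    """Sketch of the four-message drift computation.

    investor_clock / exchange_clock are callables returning the local time.
    Returns (delta, confirmation) where delta estimates exchange time minus
    investor time, or None if the round trip took too long (the computation
    must then be re-initiated).
    """
    nonce = os.urandom(8)
    t_send = investor_clock()
    # Msg 1: investor -> exchange (local time + nonce); Msg 2: the reply.
    t_exchange = exchange_clock()
    t_recv = investor_clock()
    if t_recv - t_send > max_rtt:
        return None  # response not received within reasonable time: retry
    delta = t_exchange - (t_send + t_recv) / 2.0  # estimated drift
    # Msgs 3 and 4: delta is confirmed, signed, and timestamped by both sides.
    confirmation = sign(repr((delta, nonce)).encode())
    return delta, confirmation
```

The midpoint estimate (t_send + t_recv) / 2 is a standard way to discount symmetric network delay; the thesis does not prescribe a particular estimator, so this choice is an assumption.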

6.3.3 Getting Quotes

The first stage of the transaction involves querying the stock exchange about the current price (or quote) of a stock. The quote helps the investor decide how to invest: how many shares to buy/sell, what type of order to place, etc. A quote consists of the highest price offered to buy the stock and the lowest price asked to sell the stock. Only limit orders are considered while determining the highest bid or the lowest offer.
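The quote computation described above is simple enough to state directly. The sketch below is illustrative; the tuple representation of orders is our own assumption.

```python
def quote(orders):
    """Compute a quote from the exchange's outstanding orders.

    `orders` is a list of (side, kind, price) tuples.  Only limit orders
    are considered when determining the highest bid and the lowest offer.
    Returns (highest_bid, lowest_offer); either may be None if no limit
    order exists on that side.
    """
    limit_bids = [p for side, kind, p in orders
                  if side == "bid" and kind == "limit"]
    limit_offers = [p for side, kind, p in orders
                    if side == "offer" and kind == "limit"]
    return (max(limit_bids) if limit_bids else None,
            min(limit_offers) if limit_offers else None)
```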

6.3.4 Placing the Orders

The orders can be of three types: market, limit, and threshold. In each type, the order can be to buy shares (a bid) or sell shares (an offer). Investor Alice is assumed to always place bids, while Bob always places offers. The following figure shows the exact structure of each bid/offer.

Market Bid:       A → S : [[product description]_{1/a}]^s
Market Offer:     B → S : [[product description, product^e]_{1/b}]^s
Limit Bid:        A → S : [[product description, bid-price]_{1/a}]^s
Limit Offer:      B → S : [[product description, product^e, offer-price]_{1/b}]^s
Threshold Bid:    A → S : [[product description, max-price]_{1/a}]^s
Threshold Offer:  B → S : [[product description, product^e, min-price]_{1/b}]^s

Figure 6.2: Different Types of Orders

Market orders do not have any price specified, as they are orders to transact at the current price in the market. Both limit and threshold orders look similar in their structure. However, they are interpreted very differently. Limit orders are interpreted as orders to buy/sell at the specified price or better. Threshold orders are orders to buy/sell at the best price as long as this price is better than the specified price. Threshold orders are not found in traditional stock markets.

6.3.5 Processing the Orders at the Exchange

When an order is received at the exchange, it must be matched with another appropriate order. A bid must be matched with an offer and vice versa. This section describes how the exchange processes an incoming order and finds a matching order for it.

Market orders are matched with the best limit or threshold order in the exchange database. The final price is the maximum/minimum price specified in the order selected from the database. Threshold orders are matched with the best limit or threshold order in the exchange database, provided the price specified in the order selected from the database is within the threshold limits of the incoming order. The final price is again the maximum/minimum price specified in the order selected from the database. If a match cannot be found that is within the threshold limits, the order is entered into the exchange database as a potential match for future incoming orders. A limit order is matched with any limit or threshold order that has a minimum price less than the incoming order's bid price or a maximum price more than the incoming order's offer price. The final price is the price specified in the incoming limit order. If a match cannot immediately be found, the order is entered into the database as a potential match for future incoming orders.
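The matching rules above can be sketched for the case of an incoming bid against resting offers. This is a simplified sketch under our own assumptions about data representation; the thesis does not specify the exchange's data structures, and the symmetric case (incoming offers against resting bids) is analogous.

```python
def match_bid(resting_offers, kind, price=None):
    """Match one incoming bid against resting sell orders (sketch).

    `resting_offers` is a list of (kind, min_price) resting orders, where
    min_price is the lowest price the seller will accept.  Returns
    (matched_offer, final_price), or (None, None) if there is no match,
    in which case an unmatched limit/threshold bid would rest in the book.
    """
    if not resting_offers:
        return None, None
    best = min(resting_offers, key=lambda o: o[1])  # lowest resting offer
    if kind == "market":
        # Final price is the price specified in the selected resting order.
        return best, best[1]
    if kind == "threshold":
        if best[1] <= price:        # resting price within the threshold
            return best, best[1]    # still executes at the going price
        return None, None           # rests in the book instead
    if kind == "limit":
        if best[1] <= price:
            return best, price      # limit orders execute at their own price
        return None, None
```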

6.3.6 Transaction Completion

The transaction completion takes place through the following eight steps. The product referred to here is the share being transacted.

(1) S → A : [product^e, price, product description]_{1/s}
(2) S → B : [price, product description]_{1/s}
(3) A → S : [[product^e, payment^E]_{1/a}]^s
(4) S → B : [[payment^E, product description]_{1/s}]^b
(5) B → S : [[payment^E, 1/e]_{1/b}]^s
(6) S → A : [[1/e, product description]_{1/s}]^a
(7) A → S : [[1/e, 1/E]_{1/a}]^s
(8) S → B : [[1/E]_{1/s}]^b

Figure 6.3: Steps involved in completing the transaction

In Step 1 and Step 2, the exchange informs the investors of the final price and also delivers an encrypted copy of the certificate of sale (product) to the bidder, Alice. This step must take place immediately after the transaction is processed. This is the message that determines the time of completion of the sale. In Step 3 and Step 4, Alice acknowledges the receipt of the encrypted product and sends Bob the encrypted payment. This message can be aggregated over various transactions over the day and can thus be common to many transactions. Such aggregation would compromise anonymity, but would be more in tune with how finances are settled in the US stock market today. In Step 5 and Step 6, Bob acknowledges the encrypted payment and sends Alice the product decryption key. In Step 7 and Step 8, Alice acknowledges receiving the product decryption key and sends Bob the payment decryption key.

This step must take place immediately after the transaction is processed. This is the message that determines the time of completion of the sale. In Step 3 and Step 4, Al­ ice acknowledges the receipt of the encrypted product and sends Bob the encrypted payment. This message can be aggregated over various transactions over the day and can thus be common to many transactions. Such aggregation would compromise anonymity, but would be more in tune with how finances are settled in the US stock market today. In Step 5 and Step 6, Bob acknowledges the encrypted payment and sends Alice the product decryption key. In Step 7 and Step 8, Alice acknowledges receiving the product decryption key and sends Bob the payment decryption key.

The exchange of encrypted product and payment, followed by the exchange of the decryption keys, allows every message to be repeated in case of message loss, corruption, or intruder interception without loss of security. In case of loss, corruption, or interception of the messages with the encrypted product or encrypted payment, these must be encrypted with a fresh key and resent.
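The ciphertext-first, keys-last structure of the eight steps can be shown in miniature. The toy XOR-keystream cipher below is purely illustrative (a stand-in for whatever symmetric cipher a real deployment would use) and the function names are our own.

```python
import hashlib
import os

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a hash-derived keystream.
    Illustrative only -- not a secure cipher."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def complete_transaction(product: bytes, payment: bytes):
    """Steps 1-8 in miniature: encrypted items travel first, keys last.

    Because only encrypted copies travel before the decryption keys, a
    lost or intercepted message can be re-sent (re-encrypted with a fresh
    key) without loss of security.  Returns what each party ultimately
    recovers: the decrypted product and the decrypted payment.
    """
    e, E = os.urandom(16), os.urandom(16)      # product / payment keys
    enc_product = keystream_xor(product, e)    # Steps 1-4: ciphertexts move
    enc_payment = keystream_xor(payment, E)
    # Steps 5-8: decryption keys are released only after acknowledgments.
    return keystream_xor(enc_product, e), keystream_xor(enc_payment, E)
```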

6.4 Real-time Awareness

In this section, we describe the mechanism used in conjunction with the above protocol to detect delays in determining the final transaction price. The role of the secure co-processor, the notation used to describe timestamps, and the detection mechanism are exactly as described for two-party transactions and auctions. The difference is in identifying the two important messages to be routed through the co-processor. In the previous cases, these constituted the message (Order) with the binding order from the customer, and the message (Product) which concludes the final product delivery to the customer. In the case of the stock market transaction, the messages in question are: the message through which an investor places a bid, and the message through which the investor is informed of the final price for the transaction. We call these messages the Order and the Price, respectively.

The investor's Order (Step 1/Step 2) and the final Price notification (Step 3/Step 4) are both routed through the secure processor. The investor attaches a nonce to the Order before sending it. The exchange attaches the same nonce to the Price. This serves to match the two messages as part of the same transaction. The exchange further encrypts the Price with the secure processor's public key. This prevents the investor from rerouting the Price and bypassing the secure processor. The secure processor attaches to these messages a timestamp consisting of two fields: the current time as per the secure processor and the number of messages received from the exchange by the investor. The significance of these will be clear in Section 6.4.1, where we describe the mechanism to detect real-time violations. The secure processor further signs the time-stamped message with its private key. This prevents anyone from tampering with or forging a time-stamped message. This is illustrated in Table 6.1, where R is the nonce, K is the public key, and 1/K the private key of the coprocessor.

The secure processor also sends a copy of the Order, after time-stamping and signing it, to the investor for the investor's records. This is illustrated in Table 6.1. The notation used in this table is described in further detail in the next section.

Old Msg   New message sent to secure processor   New time-stamped Msg
Order     [Order, R]                             [Order, T_O, R, N_O]_{1/K}
Price     [[Price, R]]^K                         [Price, T_P, R, N_P]_{1/K}

Table 6.1: Role of the Secure Co-Processor
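The time-stamping role in Table 6.1 can be sketched as follows. The class name is our own, and an HMAC over a demonstration key stands in for the co-processor's public-key signature; the counter N records how many exchange messages the investor has received so far, as in the table.

```python
import hashlib
import hmac
import json

COPROC_KEY = b"secure-coprocessor-demo-key"  # stand-in for the private key 1/K

class SecureCoprocessor:
    """Sketch of the secure co-processor's time-stamping role."""

    def __init__(self, clock):
        self.clock = clock       # callable returning the co-processor's time
        self.n_received = 0      # messages received from the exchange so far

    def note_exchange_message(self):
        """Count one message arriving from the exchange."""
        self.n_received += 1

    def timestamp(self, msg: str, nonce: str) -> dict:
        """Attach (T, R, N) to a message and sign the result."""
        body = {"msg": msg, "T": self.clock(), "R": nonce,
                "N": self.n_received}
        payload = json.dumps(body, sort_keys=True).encode()
        body["sig"] = hmac.new(COPROC_KEY, payload,
                               hashlib.sha256).hexdigest()
        return body
```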

6.4.1 Real-Time Violation Detection Mechanism

The protocol described above has a detection mechanism that is executed when the investor believes that he was not notified of the final price on time. Note that no real-time violation will be detected unless the investor executes the detection mechanism. Thus, the extra cost of the detection mechanism is not associated with transactions that completed to the satisfaction of the investor.

The detection mechanism consists of a demand for refund (Demand) sent by the investor via the secure processor. The Demand is simply a plain-text message asking for a refund that is time-stamped and signed in the same manner as an Order. Copies of some old messages are also attached to the Demand. This Demand is then evaluated by the exchange. A refund is made if and only if there was a violation of the real-time constraints.

Situation                          Attachments
Investor has received Price        Order, Price
Investor has not received Price    Order, all time-stamped messages received
                                   from the exchange after the Order
                                   (M_1, M_2, ..., M_X)

Table 6.2: Attachments to be sent with the Demand

There are two possible situations: (a) the investor has received the Price, and (b) the investor has not received the Price. If the investor has received the Price (but late), the investor attaches the Order and the Price to the Demand. If the investor has not yet received the Price, the investor attaches all messages the investor received from the exchange after sending the Order. Table 6.2 shows the attachments to be sent in each situation.

The investor should be given a refund if and only if the Demand was sent after the stipulated time elapsed and the Price was not received within the stipulated time. Figure 6.4 shows a flowchart describing the procedure used to evaluate eligibility for a refund.

Situation 1: The exchange receives the Demand and the Order from the investor. The exchange verifies whether T_D > T_O + T_S. If not, it implies that the stipulated time had not elapsed when the investor sent the Demand, and the investor's Demand is denied.

Situation 2: If the Price was not attached to the Demand, the exchange verifies that N_D − N_O = X, that T_1, T_2, ..., T_X > T_O, and that R_1, R_2, ..., R_X ≠ R_O. This implies that none of the messages received by the investor after sending the Order is the Price corresponding to that Order, i.e., the Price was not received by the investor within the stipulated time.

6.5 Properties

We will show below that the protocol presented in this chapter is secure, atomic, private, anonymous, and incurs nominal overhead charges.

Input: Demand, Order, and the Price or other messages M_1, ..., M_X

1. Is time T_D on the Demand − time T_O on the Order > stipulated time T_S? If no, refund denied.
2. Is the Price attached? If yes: refund given if time T_P on the Price − time T_O on the Order > stipulated time T_S; otherwise refund denied.
3. If the Price is not attached: refund given if (# of messages N_D on the Demand − # of messages N_O on the Order = X) and (time T_M on each attached message M > time T_O on the Order); otherwise refund denied.

Figure 6.4: Algorithm to evaluate the investor's Demand
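The decision logic of Figure 6.4 can be stated as a single function. The sketch below is illustrative; parameter names mirror the timestamps and counters of Section 6.4 but the data representation is our own assumption.

```python
def evaluate_demand(t_demand, t_order, r_order, stipulated,
                    n_demand, n_order, price_stamp=None, other_msgs=()):
    """Evaluate a refund Demand (decision logic of Figure 6.4).

    t_demand / t_order are the co-processor timestamps T_D and T_O;
    n_demand / n_order are the message counters N_D and N_O;
    r_order is the Order's nonce R_O.  price_stamp is (T_P, R) if the
    Price was received; other_msgs is the list of (T_i, R_i) stamps on
    messages received after the Order when it was not.
    Returns True iff a refund is due.
    """
    if t_demand - t_order <= stipulated:
        return False                          # Demand sent before T_S elapsed
    if price_stamp is not None:
        t_price, _ = price_stamp
        return t_price - t_order > stipulated  # refund iff Price arrived late
    # No Price attached: all X messages since the Order must be accounted
    # for, each later than the Order and none carrying the Order's nonce.
    return (n_demand - n_order == len(other_msgs) and
            all(t > t_order and r != r_order for t, r in other_msgs))
```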

6.5.1 Security

Electronic networks are susceptible to three types of security attacks: (a) prevention of message delivery, (b) message forgery, and (c) replay attacks. We consider all three situations below, and show how the protocol is secure from these attacks.

Since both the product and the payment are encrypted, if a message with either the encrypted product or the encrypted payment is not delivered, the encrypted product/payment is resent encrypted with a different key, and the key corresponding to the lost message is never revealed. This will only add some additional computation cost for the sender. All other messages, if lost, can simply be resent without any additional cost to either party. Thus, by blocking a few messages, the intruder cannot cause the transaction to abort, reach an inconsistent state, or result in a financial loss to the involved parties. Messages genuinely lost or corrupted in the network are dealt with in the same fashion.

Since all messages are signed with the sender's private key, an intruder cannot forge a message. Hence, forgery does not pose a security threat to our protocol.

An intruder can try replaying old messages as if they were new. For all messages originating from the investor, the timestamp of the secure co-processor will disclose that the message is old and that it can be ignored. All messages originating from the exchange have a component from a previous message from the investor. Since this component changes with each transaction, the intruder cannot replay old messages as if they were part of the present transaction. Thus, the intruder cannot cause a transaction to occur, or tamper with an ongoing transaction between Bob and Alice, by just replaying old messages.

We conclude that our protocol is secure from all three intruder threats: message blocking, forgery, and replay attacks.

A malicious investor may deliberately delay the price notification and collect the compensation for late receipt. This is not such a problem with the other transactions, as the value of a product received late actually diminishes with time. With a stock price, on the other hand, the value is independent of the time of receipt and changes only if the price is set at a later time. In order to avoid such malicious play, the compensation must be determined according to the final price and not the time of notification. One formula to do this is as follows. Let P be the price on the notification. Let M be the worst price on the stock recorded between the times T_O + δ and T_O + T_S + δ as per the exchange's secure clock. If M is better than P, the compensation is the difference between M and P; else it is zero.
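The compensation formula can be sketched as below. This is an illustrative reading under our own assumptions: we represent the recorded prices as (time, price) pairs, and we interpret "worst price M" as the price in the window most favorable to the investor (highest for a seller, lowest for a buyer), which is the reading under which "M better than P" triggers a payout.

```python
def compensation(notified_price, price_history, t_order, delta, stipulated,
                 side="offer"):
    """Compensation tied to price, not time of notification.

    price_history is a list of (time, price) pairs on the exchange's
    secure clock.  M is the extreme price recorded in the window
    [T_O + delta, T_O + T_S + delta]; if M is better than the notified
    price P, the compensation is the difference, else zero.
    """
    window = [p for t, p in price_history
              if t_order + delta <= t <= t_order + stipulated + delta]
    if not window:
        return 0.0
    if side == "offer":                 # seller: "better" means higher
        m = max(window)
        return max(0.0, m - notified_price)
    m = min(window)                     # buyer: "better" means lower
    return max(0.0, notified_price - m)
```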

6.5.2 Atomicity

In this protocol, the exchange knows the identity of each investor. The exchange ensures that no investor aborts a transaction after the order has been processed by the exchange.

6.5.3 Privacy

It is well known that even with the knowledge of the public key, a malicious intruder cannot compute the corresponding private key. Since all messages are signed with the sender's private key and encrypted with the receiver's public key, the intruder cannot decipher any message. This ensures privacy in our protocol.

6.5.4 Anonymity

In the above protocol, all messages are exchanged between the investor and the exchange. The investor also has multiple pairs of keys (or pseudonyms) that he can use for different transactions. Hence, no intruder can trace all the transactions to the same investor. In fact, the intruder cannot find out the identity of the investor even if the investor were to use the same key.

6.5.5 Overhead Cost

The protocol presented above does not require any complex computation. Furthermore, there are at most five messages exchanged between the investor and the exchange per transaction. Therefore, the cost incurred is nominal [36].

6.6 Formal Verification

In this section, we formally prove the above protocol to be secure, atomic, anonymous, and private. The formal definitions of these properties are given in Chapter 3.

6.6.1 Initial State

The state of the system at the start of the protocol is given below:

O_A = ∅; O_B = ∅; O_S = ∅; O_I = ∅;

S_A = ∅; S_B = ∅; S_S = ∅; S_I = ∅;

H_A = {∅ ⊢ ∅} ∪ {b public_key_of B, i public_key_of I, a public_key_of A, s public_key_of S, 1/a private_key_of A};

H_B = {∅ ⊢ ∅} ∪ {b public_key_of B, i public_key_of I, s public_key_of S, 1/b private_key_of B};

H_S = {∅ ⊢ ∅} ∪ {b public_key_of B, i public_key_of I, s public_key_of S, 1/s private_key_of S};

H_I = {∅ ⊢ ∅} ∪ {b public_key_of B, i public_key_of I, s public_key_of S, 1/i private_key_of I};

K_A = {1/a, b, i, s};

K_B = {1/b, b, i, s};

K_S = {1/s, b, i, s};

K_I = {1/i, b, i, s};

6.6.2 Verification

In this section, we first analyze the protocol in the presence of passive attacks to formally show that the above-mentioned properties are ensured. Next, active attacks are included in the analysis. Protocol cost is not covered as part of the proofs.

Analysis Under Passive Attacks

Let A be the set of assumptions or the initial state we start with (where all the beliefs of principals are true), P the protocol, and C the properties we want to prove. Assuming only passive attacks, the protocol may still be interrupted by either the investor or the exchange failing to execute the next step. The protocol steps that are executed, however, may not be tampered with or forged. We must show that C holds after every properly executed step of the protocol. That is, if a_n is the nth step of the protocol, then we must show:

∀n : {A} a_1; a_2; ... ; a_n {C}

We formally prove this below. In the next section, we will incorporate active at­ tacks into the proof.

Outline of Proof for Step 1

Prove: {A} a_1 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, i.e., {A} a_1 {C}.

1. T(a_1) : S → A : [[product^e, price, product description]_{1/s}]^a
1.1. A ∪ T(a_1) is positive
1.2. A allows a_1
1.3. A ∪ T(a_1) ⊢ C
1.4. Q.E.D.

Proof for Step 1.1

1.1. A ∪ T(a_1) is positive

Proof sketch: We use Definition 4 to prove the above.

1.1.1. T(a_1) := S once_said [[product^e, price, product description]_{1/s}]^a ∧ A sees [[product^e, price, product description]_{1/s}]^a ∧ B sees [[product^e, price, product description]_{1/s}]^a ∧ I sees [[product^e, price, product description]_{1/s}]^a
1.1.2. T(a_1) is positive
1.1.3. A is positive
1.1.4. Q.E.D.

Proof for Step 1.2

1.2. A allows T(a_1)

Proof sketch: We use Definition 3 to prove the above.

1.2.1. A ⊢ S believes A sees [[product^e, price, product description]_{1/s}]^a
1.2.2. Q.E.D.

Proof for Step 1.3

1.3. A ∪ T(a_1) ⊢ C

Proof sketch: We show that in step a_1 no unwanted information is revealed to the intruder.

1.3.1. T(a_1) := S once_said [[product^e, price, product description]_{1/s}]^a ∧ A sees [[product^e, price, product description]_{1/s}]^a ∧ B sees [[product^e, price, product description]_{1/s}]^a ∧ I sees [[product^e, price, product description]_{1/s}]^a
1.3.2. New state is A ∪ {[[product^e, price, product description]_{1/s}]^a ∈ O_S ∧ [[product^e, price, product description]_{1/s}]^a ∈ S_B ∧ [[product^e, price, product description]_{1/s}]^a ∈ S_A ∧ [[product^e, price, product description]_{1/s}]^a ∈ S_I}
1.3.3. ∧ 1/e ∉ S_I ∧ 1/a ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I ∧ id ∉ S_B ∧ a ∉ S_I ∧ product ∉ S_A ∧ payment ∉ S_B
1.3.4. Q.E.D.

Outline of Proof for Step 2

P r o v e : (C} 02 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure or {C} 02

{C}.

2. 7'(a2) : S > B [ [ p r i c e , product description]^

2.1. A U 7'{a2) is positive

2.2. A. allows a^

2.3. A U T (a 2 ) f- C

2.4. Q.E.D.

Proof for Step 2.1

2.1. A ∪ T(a2) is positive

Proof sketch: We use Definition 4 to prove the above.

2.1.1. T(a2) := S once_said [[price, product description]^(1/s)]^b
       ∧ A sees [[price, product description]^(1/s)]^b
       ∧ B sees [[price, product description]^(1/s)]^b
       ∧ I sees [[price, product description]^(1/s)]^b

2.1.2. T(a2) is positive

2.1.3. A is positive

2.1.4. Q.E.D.

Proof for Step 2.2

2.2. A allows T(a2)

Proof sketch: We use Definition 3 to prove the above.

2.2.1. A ⊢ S believes A sees [[price, product description]^(1/s)]^b

2.2.2. Q.E.D.

Proof for Step 2.3

2.3. A ∪ T(a2) ⊢ C

Proof sketch: We show that in step a2 no unwanted information is revealed to the intruder.

2.3.1. T(a2) := S once_said [[price, product description]^(1/s)]^b
       ∧ A sees [[price, product description]^(1/s)]^b
       ∧ B sees [[price, product description]^(1/s)]^b
       ∧ I sees [[price, product description]^(1/s)]^b

2.3.2. New state is A ∪
       [[price, product description]^(1/s)]^b ∈ O_S
       ∧ [[price, product description]^(1/s)]^b ∈ S_B
       ∧ [[price, product description]^(1/s)]^b ∈ S_A
       ∧ [[price, product description]^(1/s)]^b ∈ S_I

2.3.3. 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I
       ∧ id ∉ S_B ∧ a ∉ S_I ∧ product ∉ S_A ∧ payment ∉ S_B

2.3.4. Q.E.D.

Outline of Proof for Step 3

Assume: A ∪ T(a1, a2) ⊢ C

Prove: {C} a3 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, i.e., {C} a3 {C}.

3. T(a3) : A ---> S  [[product#, payment#]^(1/a)]^s

3.1. A ∪ T(a3) is positive

3.2. A allows T(a3)

3.3. A ∪ T(a3) ⊢ C

3.4. Q.E.D.

Proof for Step 3.1

3.1. A ∪ T(a3) is positive

Proof sketch: We use Definition 4 to prove the above.

3.1.1. T(a3) := A once_said [[product#, payment#]^(1/a)]^s
       ∧ B sees [[product#, payment#]^(1/a)]^s
       ∧ S sees [[product#, payment#]^(1/a)]^s
       ∧ I sees [[product#, payment#]^(1/a)]^s

3.1.2. T(a3) is positive

3.1.3. A is positive

3.1.4. Q.E.D.

Proof for Step 3.2

3.2. A allows T(a3)

Proof sketch: We use Definition 3 to prove the above.

3.2.1. A ⊢ A believes [[product#, payment#]^(1/a)]^s

3.2.2. Q.E.D.

Proof for Step 3.3

3.3. A ∪ T(a3) ⊢ C

Proof sketch: We show that in step a3 no unwanted information is revealed to the intruder.

3.3.1. T(a3) := A once_said [[product#, payment#]^(1/a)]^s
       ∧ B sees [[product#, payment#]^(1/a)]^s
       ∧ S sees [[product#, payment#]^(1/a)]^s
       ∧ I sees [[product#, payment#]^(1/a)]^s

3.3.2. New state is A ∪
       [[product#, payment#]^(1/a)]^s ∈ O_A
       ∧ [[product#, payment#]^(1/a)]^s ∈ S_B
       ∧ [[product#, payment#]^(1/a)]^s ∈ S_S
       ∧ [[product#, payment#]^(1/a)]^s ∈ S_I

3.3.3. 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I
       ∧ id ∉ S_B ∧ a ∉ S_I ∧ product# ∈ S_A ∧ [payment#]^s ∈ S_A

3.3.4. Q.E.D.

Outline of Proof for Step 4

Assume: A ∪ T(a1, ..., a3) ⊢ C

Prove: {C} a4 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, i.e., {C} a4 {C}.

4. T(a4) : S ---> B  [[payment#, product description]^(1/s)]^b

4.1. A ∪ T(a4) is positive

4.2. A allows T(a4)

4.3. A ∪ T(a4) ⊢ C

4.4. Q.E.D.

Proof for Step 4.1

4.1. A ∪ T(a4) is positive

Proof sketch: We use Definition 4 to prove the above.

4.1.1. T(a4) := S once_said [[payment#, product description]^(1/s)]^b
       ∧ A sees [[payment#, product description]^(1/s)]^b
       ∧ B sees [[payment#, product description]^(1/s)]^b
       ∧ I sees [[payment#, product description]^(1/s)]^b

4.1.2. T(a4) is positive

4.1.3. A is positive

4.1.4. Q.E.D.

Proof for Step 4.2

4.2. A allows T(a4)

Proof sketch: We use Definition 3 to prove the above.

4.2.1. A ⊢ B believes [[payment#, product description]^(1/s)]^b

4.2.2. Q.E.D.

Proof for Step 4.3

4.3. A ∪ T(a4) ⊢ C

Proof sketch: We show that in step a4 no unwanted information is revealed to the intruder.

4.3.1. T(a4) := S once_said [[payment#, product description]^(1/s)]^b
       ∧ A sees [[payment#, product description]^(1/s)]^b
       ∧ B sees [[payment#, product description]^(1/s)]^b
       ∧ I sees [[payment#, product description]^(1/s)]^b

4.3.2. New state is A ∪
       [[payment#, product description]^(1/s)]^b ∈ O_S
       ∧ [[payment#, product description]^(1/s)]^b ∈ S_B
       ∧ [[payment#, product description]^(1/s)]^b ∈ S_A
       ∧ [[payment#, product description]^(1/s)]^b ∈ S_I

4.3.3. 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I
       ∧ id ∉ S_B ∧ a ∉ S_I ∧ product# ∈ S_A ∧ [payment#]^s ∈ S_A

4.3.4. Q.E.D.

Outline of Proof for Step 5

Assume: A ∪ T(a1, ..., a4) ⊢ C

Prove: {C} a5 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, i.e., {C} a5 {C}.

5. T(a5) : B ---> S  [[payment#, · ]^(1/b)]^s

5.1. A ∪ T(a5) is positive

5.2. A allows T(a5)

5.3. A ∪ T(a5) ⊢ C

5.4. Q.E.D.

Proof for Step 5.1

5.1. A ∪ T(a5) is positive

Proof sketch: We use Definition 4 to prove the above.

5.1.1. T(a5) := B once_said [[payment#, · ]^(1/b)]^s
       ∧ A sees [[payment#, · ]^(1/b)]^s
       ∧ S sees [[payment#, · ]^(1/b)]^s
       ∧ I sees [[payment#, · ]^(1/b)]^s

5.1.2. T(a5) is positive

5.1.3. A is positive

5.1.4. Q.E.D.

Proof for Step 5.2

5.2. A allows T(a5)

Proof sketch: We use Definition 3 to prove the above.

5.2.1. A ⊢ B believes [[payment#, · ]^(1/b)]^s

5.2.2. Q.E.D.

Proof for Step 5.3

5.3. A ∪ T(a5) ⊢ C

Proof sketch: We show that in step a5 no unwanted information is revealed to the intruder.

5.3.1. T(a5) := B once_said [[payment#, · ]^(1/b)]^s
       ∧ A sees [[payment#, · ]^(1/b)]^s
       ∧ S sees [[payment#, · ]^(1/b)]^s
       ∧ I sees [[payment#, · ]^(1/b)]^s

5.3.2. New state is A ∪
       [[payment#, · ]^(1/b)]^s ∈ O_B
       ∧ [[payment#, · ]^(1/b)]^s ∈ S_A
       ∧ [[payment#, · ]^(1/b)]^s ∈ S_S
       ∧ [[payment#, · ]^(1/b)]^s ∈ S_I

5.3.3. 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I
       ∧ id ∉ S_B ∧ a ∉ S_I ∧ product# ∈ S_A ∧ [payment#]^b ∈ S_A

5.3.4. Q.E.D.

Outline of Proof for Step 6

Assume: A ∪ T(a1, ..., a5) ⊢ C

Prove: {C} a6 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, i.e., {C} a6 {C}.

6. T(a6) : S ---> A  [[ · , product description]^(1/s)]^a

6.1. A ∪ T(a6) is positive

6.2. A allows T(a6)

6.3. A ∪ T(a6) ⊢ C

6.4. Q.E.D.

Proof for Step 6.1

6.1. A ∪ T(a6) is positive

Proof sketch: We use Definition 4 to prove the above.

6.1.1. T(a6) := S once_said [[ · , product description]^(1/s)]^a
       ∧ A sees [[ · , product description]^(1/s)]^a
       ∧ B sees [[ · , product description]^(1/s)]^a
       ∧ I sees [[ · , product description]^(1/s)]^a

6.1.2. T(a6) is positive

6.1.3. A is positive

6.1.4. Q.E.D.

Proof for Step 6.2

6.2. A allows T(a6)

Proof sketch: We use Definition 3 to prove the above.

6.2.1. A ⊢ S believes [[ · , product description]^(1/s)]^a

6.2.2. Q.E.D.

Proof for Step 6.3

6.3. A ∪ T(a6) ⊢ C

Proof sketch: We show that in step a6 no unwanted information is revealed to the intruder.

6.3.1. T(a6) := S once_said [[ · , product description]^(1/s)]^a
       ∧ A sees [[ · , product description]^(1/s)]^a
       ∧ B sees [[ · , product description]^(1/s)]^a
       ∧ I sees [[ · , product description]^(1/s)]^a

6.3.2. New state is A ∪
       [[ · , product description]^(1/s)]^a ∈ O_S
       ∧ [[ · , product description]^(1/s)]^a ∈ S_B
       ∧ [[ · , product description]^(1/s)]^a ∈ S_A
       ∧ [[ · , product description]^(1/s)]^a ∈ S_I

6.3.3. 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I
       ∧ id ∉ S_B ∧ a ∉ S_I ∧ product ∈ S_A ∧ payment ∈ S_B

6.3.4. Q.E.D.

Outline of Proof for Step 7

Assume: A ∪ T(a1, ..., a6) ⊢ C

Prove: {C} a7 {C}

Proof sketch: We use Theorem 1 to prove that the properties are not violated, i.e., {C} a7 {C}.

7. T(a7) : A ---> S  [[ · , · ]^(1/a)]^s

7.1. A ∪ T(a7) is positive

7.2. A allows T(a7)

7.3. A ∪ T(a7) ⊢ C

7.4. Q.E.D.

Proof for Step 7.1

7.1. A ∪ T(a7) is positive

Proof sketch: We use Definition 4 to prove the above.

7.1.1. T(a7) := A once_said [[ · , · ]^(1/a)]^s
       ∧ B sees [[ · , · ]^(1/a)]^s
       ∧ S sees [[ · , · ]^(1/a)]^s
       ∧ I sees [[ · , · ]^(1/a)]^s

7.1.2. T(a7) is positive

7.1.3. A is positive

7.1.4. Q.E.D.

Proof for Step 7.2

7.2. A allows T(a7)

Proof sketch: We use Definition 3 to prove the above.

7.2.1. A ⊢ A believes [[ · , · ]^(1/a)]^s

7.2.2. Q.E.D.

Proof for Step 7.3

174 7.3. C U TM h C

P r o o f s k e t c h : We show that in step a? no unwanted information is revealed to

the intruder.

7.3.1. T(ar) := A A oncejsaid [[^, A B sees [[|, A S s e e s [[i, A I sees [[|, 7.3.2. New state is C U A [[|, e O a [[è’ e G S b ^ [[ E’ e]^]^ S A [[J, ;]^r E 5r 7.3.3. ^ Sr Î I A p a y m e n t ^

Outline of Proof for Step 8

Assume: A ∪ T(a1, ..., a7) ⊢ C

Prove: {C} a8 {C}

Proof sketch: We use Theorem 1 to prove that the system is secure, i.e., {C} a8 {C}.

8. T(a8) : S ---> B  [[ · , · ]^(1/s)]^b

8.1. A ∪ T(a8) is positive

8.2. A allows T(a8)

8.3. A ∪ T(a8) ⊢ C

8.4. Q.E.D.

Proof for Step 8.1

8.1. A ∪ T(a8) is positive

Proof sketch: We use Definition 4 to prove the above.

8.1.1. T(a8) := S once_said [[ · , · ]^(1/s)]^b
       ∧ A sees [[ · , · ]^(1/s)]^b
       ∧ B sees [[ · , · ]^(1/s)]^b
       ∧ I sees [[ · , · ]^(1/s)]^b

8.1.2. T(a8) is positive

8.1.3. A is positive

8.1.4. Q.E.D.

Proof for Step 8.2

8.2. A allows T(a8)

Proof sketch: We use Definition 3 to prove the above.

8.2.1. A ⊢ S believes [[ · , · ]^(1/s)]^b

8.2.2. Q.E.D.

Proof for Step 8.3

8.3. A ∪ T(a8) ⊢ C

Proof sketch: We show that in step a8 no unwanted information is revealed to the intruder.

8.3.1. T(a8) := S once_said [[ · , · ]^(1/s)]^b
       ∧ A sees [[ · , · ]^(1/s)]^b
       ∧ B sees [[ · , · ]^(1/s)]^b
       ∧ I sees [[ · , · ]^(1/s)]^b

8.3.2. New state is A ∪
       [[ · , · ]^(1/s)]^b ∈ O_S
       ∧ [[ · , · ]^(1/s)]^b ∈ S_B
       ∧ [[ · , · ]^(1/s)]^b ∈ S_A
       ∧ [[ · , · ]^(1/s)]^b ∈ S_I

8.3.3. 1/a ∉ S_I ∧ 1/b ∉ S_I ∧ payment ∉ S_I ∧ product ∉ S_I ∧ id ∉ S_I
       ∧ id ∉ S_B ∧ a ∉ S_I ∧ product ∈ S_A ∧ payment ∈ S_B

8.3.4. Q.E.D.

Analysis Under Active Attacks

In this section, we analyze active intruder attacks. We must show that the intruder is not allowed to send a message that reads like a protocol message but is not identical to one. A message identical to the protocol message for that transaction run can cause no harm: such a replay is equivalent to momentarily stopping the message on the network and then releasing it.

Some messages have no components that can change; we do not analyze such messages, as the proof for them is vacuously true. For those messages whose content can be marginally changed to dupe the actual participants, we use Definition 3 to show that the intruder is not allowed to send the changed message.
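The Definition-3 check applied in the proofs below can be sketched symbolically: a party may utter [X]^k only if it holds the key k or has previously seen the sealed term, so the intruder can replay a genuine message but cannot forge a modified one. The `can_send` function, the tuple encoding of signed terms, and the key names are our own illustrative assumptions, not the thesis's formalism.

```python
# Hypothetical encoding of the "allows" check: the intruder I, lacking
# S's private key 1/s, can resend the original signed message it has
# seen, but cannot produce a *modified* message under the same key.

def can_send(keys, seen, signed_term):
    """allows P -> Q : [X]^k  requires  k in K_P, or P sees [X]^k."""
    _, key, _payload = signed_term
    return key in keys or signed_term in seen

K_I = set()                                          # intruder holds no private keys
original = ("signed", "1/s", ("product#", "price"))  # genuine a1, as I observed it
modified = ("signed", "1/s", ("product#", "price/2"))

seen_I = {original}                                  # I saw a1 on the network

assert can_send(K_I, seen_I, original)               # replay: allowed, harmless
assert not can_send(K_I, seen_I, modified)           # forgery: not allowed
```

With S's key added (`can_send({"1/s"}, set(), modified)` is true), the same check shows why only the legitimate signer can utter a changed message, which is the content of the Good key ensures utterer axiom used below.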

Proof for Step 1

Assume: Under passive attacks, A ⊢ C

Prove: A ∪ T(a1) not allows I1

Proof sketch: We assume that security under passive attacks has already been proven for Step 1. We now use Definition 3 to prove security under active attacks.

1. I1 : I ---> A  [[modified( original a1 )]^(1/s)]^a

1.1. From Definition 3, A allows P ---> Q : [X]^k, k ∉ K_P
     := A ⊢ P sees [X]^k

1.2. A ∪ T(a1) allows I ---> A: [[modified( original a1 )]^(1/s)]^a, 1/s ∉ K_I
     := A ∪ T(a1) ⊢ I sees [modified( original a1 )]^(1/s)

1.3. From assumption, S's private key 1/s ∉ K_I

1.4. A ∪ T(a1) allows I ---> A: [[modified( original a1 )]^(1/s)]^a
     := A ∪ T(a1) ⊢ I sees [modified( original a1 )]^(1/s)

1.5. I not sees [modified( original a1 )]^(1/s)

1.6. Q.E.D.

Proof for Step 1.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

1.5. I not sees [modified( original a1 )]^(1/s)

1.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

1.5.2. ⊢ 1/s private_key_of S ∧ I sees [modified( original a1 )]^(1/s)
       -> S once_said [modified( original a1 )]^(1/s)

1.5.3. We know, S not once_said [modified( original a1 )]^(1/s)

1.5.4. Q.E.D.

Proof for Step 2

178 A s s u m e : A u T{ax) I- C

Under passive attacks, A U Cg) C

P r o v e : A U 'T{ax, 0 -2 ) 'not allows I 2

Proof sketch: We assume that security has already been proved for Step 4 and that

security under passive attacks has been proven for Step 5. We now use Definition 3

to prove security under active attacks.

2. I 2 • I --- >■ B [[modified{price, product description)]^

2.1. From Definitions, A allows P ---->■ Q : k ^ ICp

.= A \ - P sees

2.2. A U T ( a i, (Z2 ) allows I -----> B: [modified^ original 0 2 ) ]*; 7 ^ TCf

:= -4 U 7”( a i, 0 2 ) P I sees [m o d ifie d ^ o rig in a l c&2 ) ]

2.3. From assumption, j ^ /C/

2.4. A U T~(ax,a,2 ) allows / ----->■ B: [modified^ original 0 2 )]*

:= A U 7“(a i, 0 2 ) P / sees [m o d ifie d { o rig in a l 0 2 ) ]

2.5. I n o t sees [m o d ifie d ^ o rig in a l (I2 ) ]

2.6. Q.E.D.

Proof for Step 2.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

2.5. I not sees [modified( original a2 )]^(1/s)

2.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

2.5.2. ⊢ 1/s private_key_of S ∧ I sees [modified( original a2 )]^(1/s)
       -> S once_said [modified( original a2 )]^(1/s)

2.5.3. We know, S not once_said [modified( original a2 )]^(1/s)

2.5.4. Q.E.D.

Proof for Step 3

Assume: A ∪ T(a1, a2) ⊢ C

        Under passive attacks, A ∪ T(a1, a2, a3) ⊢ C

Prove: A ∪ T(a1, a2, a3) not allows I3

Proof sketch: We assume that security has already been proved for Step 2 and that security under passive attacks has been proven for Step 3. We now use Definition 3 to prove security under active attacks.

3. I3 : I ---> S  [[modified( original a3 )]^(1/a)]^s

3.1. From Definition 3, A allows P ---> Q : [X]^k, k ∉ K_P
     := A ⊢ P sees [X]^k

3.2. A ∪ T(a1, a2, a3) allows I ---> S: [[modified( original a3 )]^(1/a)]^s, 1/a ∉ K_I
     := A ∪ T(a1, a2, a3) ⊢ I sees [modified( original a3 )]^(1/a)

3.3. From assumption, 1/a ∉ K_I

3.4. A ∪ T(a1, a2, a3) allows I ---> S: [[modified( original a3 )]^(1/a)]^s
     := A ∪ T(a1, a2, a3) ⊢ I sees [modified( original a3 )]^(1/a)

3.5. I not sees [modified( original a3 )]^(1/a)

3.6. Q.E.D.

Proof for Step 3.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

3.5. I not sees [modified( original a3 )]^(1/a)

3.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

3.5.2. ⊢ 1/a private_key_of A ∧ I sees [modified( original a3 )]^(1/a)
       -> A once_said [modified( original a3 )]^(1/a)

3.5.3. We know, A not once_said [modified( original a3 )]^(1/a)

3.5.4. Q.E.D.

Proof for Step 4

Assume: A ∪ T(a1, a2, a3) ⊢ C

        Under passive attacks, A ∪ T(a1, ..., a4) ⊢ C

Prove: A ∪ T(a1, ..., a4) not allows I4

Proof sketch: We assume that security has already been proved for Step 3 and that security under passive attacks has been proven for Step 4. We now use Definition 3 to prove security under active attacks.

4. I4 : I ---> B  [[modified( original a4 )]^(1/s)]^b

4.1. From Definition 3, A allows P ---> Q : [X]^k, k ∉ K_P
     := A ⊢ P sees [X]^k

4.2. A ∪ T(a1, ..., a4) allows I ---> B: [[modified( original a4 )]^(1/s)]^b, 1/s ∉ K_I
     := A ∪ T(a1, ..., a4) ⊢ I sees [modified( original a4 )]^(1/s)

4.3. From assumption, 1/s ∉ K_I

4.4. A ∪ T(a1, ..., a4) allows I ---> B: [[modified( original a4 )]^(1/s)]^b
     := A ∪ T(a1, ..., a4) ⊢ I sees [modified( original a4 )]^(1/s)

4.5. I not sees [modified( original a4 )]^(1/s)

4.6. Q.E.D.

Proof for Step 4.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

4.5. I not sees [modified( original a4 )]^(1/s)

4.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

4.5.2. ⊢ 1/s private_key_of S ∧ I sees [modified( original a4 )]^(1/s)
       -> S once_said [modified( original a4 )]^(1/s)

4.5.3. We know, S not once_said [modified( original a4 )]^(1/s)

4.5.4. Q.E.D.

Proof for Step 5

Assume: A ∪ T(a1, ..., a4) ⊢ C

        Under passive attacks, A ∪ T(a1, ..., a5) ⊢ C

Prove: A ∪ T(a1, ..., a5) not allows I5

Proof sketch: We assume that security has already been proved for Step 4 and that security under passive attacks has been proven for Step 5. We now use Definition 3 to prove security under active attacks.

5. I5 : I ---> S  [[modified( original a5 )]^(1/b)]^s

5.1. From Definition 3, A allows P ---> Q : [X]^k, k ∉ K_P
     := A ⊢ P sees [X]^k

5.2. A ∪ T(a1, ..., a5) allows I ---> S: [[modified( original a5 )]^(1/b)]^s, 1/b ∉ K_I
     := A ∪ T(a1, ..., a5) ⊢ I sees [modified( original a5 )]^(1/b)

5.3. From assumption, 1/b ∉ K_I

5.4. A ∪ T(a1, ..., a5) allows I ---> S: [[modified( original a5 )]^(1/b)]^s
     := A ∪ T(a1, ..., a5) ⊢ I sees [modified( original a5 )]^(1/b)

5.5. I not sees [modified( original a5 )]^(1/b)

5.6. Q.E.D.

Proof for Step 5.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

5.5. I not sees [modified( original a5 )]^(1/b)

5.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

5.5.2. ⊢ 1/b private_key_of B ∧ I sees [modified( original a5 )]^(1/b)
       -> B once_said [modified( original a5 )]^(1/b)

5.5.3. We know, B not once_said [modified( original a5 )]^(1/b)

5.5.4. Q.E.D.

Proof for Step 6

Assume: A ∪ T(a1, ..., a5) ⊢ C

        Under passive attacks, A ∪ T(a1, ..., a6) ⊢ C

Prove: A ∪ T(a1, ..., a6) not allows I6

Proof sketch: We assume that security has already been proved for Step 5 and that security under passive attacks has been proven for Step 6. We now use Definition 3 to prove security under active attacks.

6. I6 : I ---> A  [[modified( original a6 )]^(1/s)]^a

6.1. From Definition 3, A allows P ---> Q : [X]^k, k ∉ K_P
     := A ⊢ P sees [X]^k

6.2. A ∪ T(a1, ..., a6) allows I ---> A: [[modified( original a6 )]^(1/s)]^a, 1/s ∉ K_I
     := A ∪ T(a1, ..., a6) ⊢ I sees [modified( original a6 )]^(1/s)

6.3. From assumption, 1/s ∉ K_I

6.4. A ∪ T(a1, ..., a6) allows I ---> A: [[modified( original a6 )]^(1/s)]^a
     := A ∪ T(a1, ..., a6) ⊢ I sees [modified( original a6 )]^(1/s)

6.5. I not sees [modified( original a6 )]^(1/s)

6.6. Q.E.D.

Proof for Step 6.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

6.5. I not sees [modified( original a6 )]^(1/s)

6.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

6.5.2. ⊢ 1/s private_key_of S ∧ I sees [modified( original a6 )]^(1/s)
       -> S once_said [modified( original a6 )]^(1/s)

6.5.3. We know, S not once_said [modified( original a6 )]^(1/s)

6.5.4. Q.E.D.

Proof for Step 7

Assume: A ∪ T(a1, ..., a6) ⊢ C

        Under passive attacks, A ∪ T(a1, ..., a7) ⊢ C

Prove: A ∪ T(a1, ..., a7) not allows I7

Proof sketch: We assume that security has already been proved for Step 6 and that security under passive attacks has been proven for Step 7. We now use Definition 3 to prove security under active attacks.

7. I7 : I ---> S  [[modified( original a7 )]^(1/a)]^s

7.1. From Definition 3, A allows P ---> Q : [X]^k, k ∉ K_P
     := A ⊢ P sees [X]^k

7.2. A ∪ T(a1, ..., a7) allows I ---> S: [[modified( original a7 )]^(1/a)]^s, 1/a ∉ K_I
     := A ∪ T(a1, ..., a7) ⊢ I sees [modified( original a7 )]^(1/a)

7.3. From assumption, 1/a ∉ K_I

7.4. A ∪ T(a1, ..., a7) allows I ---> S: [[modified( original a7 )]^(1/a)]^s
     := A ∪ T(a1, ..., a7) ⊢ I sees [modified( original a7 )]^(1/a)

7.5. I not sees [modified( original a7 )]^(1/a)

7.6. Q.E.D.

Proof for Step 7.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

7.5. I not sees [modified( original a7 )]^(1/a)

7.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

7.5.2. ⊢ 1/a private_key_of A ∧ I sees [modified( original a7 )]^(1/a)
       -> A once_said [modified( original a7 )]^(1/a)

7.5.3. We know, A not once_said [modified( original a7 )]^(1/a)

7.5.4. Q.E.D.

Proof for Step 8

Assume: A ∪ T(a1, ..., a7) ⊢ C

        Under passive attacks, A ∪ T(a1, ..., a8) ⊢ C

Prove: A ∪ T(a1, ..., a8) not allows I8

Proof sketch: We assume that security has already been proved for Step 7 and that security under passive attacks has been proven for Step 8. We now use Definition 3 to prove security under active attacks.

8. I8 : I ---> B  [[modified( original a8 )]^(1/s)]^b

8.1. From Definition 3, A allows P ---> Q : [X]^k, k ∉ K_P
     := A ⊢ P sees [X]^k

8.2. A ∪ T(a1, ..., a8) allows I ---> B: [[modified( original a8 )]^(1/s)]^b, 1/s ∉ K_I
     := A ∪ T(a1, ..., a8) ⊢ I sees [modified( original a8 )]^(1/s)

8.3. From assumption, 1/s ∉ K_I

8.4. A ∪ T(a1, ..., a8) allows I ---> B: [[modified( original a8 )]^(1/s)]^b
     := A ∪ T(a1, ..., a8) ⊢ I sees [modified( original a8 )]^(1/s)

8.5. I not sees [modified( original a8 )]^(1/s)

8.6. Q.E.D.

Proof for Step 8.5

Proof sketch: We use an axiom of the BAN logic [20] to prove this step.

8.5. I not sees [modified( original a8 )]^(1/s)

8.5.1. From Axiom Good key ensures utterer [20],
       ⊢ k private_key_of P ∧ R sees [X]^k -> P once_said X

8.5.2. ⊢ 1/s private_key_of S ∧ I sees [modified( original a8 )]^(1/s)
       -> S once_said [modified( original a8 )]^(1/s)

8.5.3. We know, S not once_said [modified( original a8 )]^(1/s)

8.5.4. Q.E.D.

6.7 Summary

While stock exchanges have been the most technology-friendly commerce market, they have not yet adapted the traditional transaction model to take advantage of modern technology. The two most important features required in modern-day stock markets are: (a) the ability to place sophisticated orders to buy/sell stock, and (b) the ability to detect delays in transaction processing. The lack of these two features can be financially disastrous to the investor. In this chapter, a new, sophisticated method of placing transaction orders, called the threshold order, was introduced. A protocol for stock market transactions that supports threshold orders and detects delays in transaction processing was presented. It was further shown that the developed protocol ensures the five important properties of e-commerce transactions: security, atomicity, anonymity, privacy, and low overhead cost.

Work in the areas of stock market transactions, simpler auction transactions, and real-time aware e-commerce transactions is still preliminary. We believe that the protocol presented in this chapter will be a significant contribution to the state of the art.

CHAPTER 7

CONCLUSION AND FUTURE WORK

In this thesis, we addressed the problem of design and verification of secure e-commerce transaction protocols. We identified the need to abstract transaction protocols from the underlying payment protocols. We identified the properties required or desired in e-commerce transaction protocols as security, atomicity, anonymity, privacy, and nominal cost. We then identified three important classes of e-commerce transactions: two party, auctions, and stock market transactions. For each class, we designed a transaction protocol that is independent of the underlying payment scheme and satisfies all five properties listed above. While the work in this thesis covers a large portion of the space of transaction protocols, many interesting problems remain. In this chapter, we summarize the work completed and point out some directions for further research.

7.1 Summary

Two party transactions: These transactions involve two parties, a buyer and a seller. The price is set at the price the seller asks. If this is not agreeable to the buyer, the transaction does not proceed until the seller resets the asking price to a lower, acceptable value. We designed a protocol that is secure, atomic, anonymous, private, and incurs nominal overhead costs. We assured both atomicity and a limited anonymity by providing customers with pseudo-identities. Our protocol also has the added feature that any message, if not delivered, can be resent any number of times without causing the transaction to roll back or change state.
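The resend property claimed above can be sketched as an idempotent message handler: each message carries a transaction identifier, so a redelivered copy neither rolls back nor advances the transaction. The transaction ids and state names below are hypothetical; the sketch only illustrates the design idea, not the thesis's wire format.

```python
# Idempotent delivery sketch (assumed encoding): the first delivery of a
# message records the transaction state; every redelivery is a no-op that
# returns the already-recorded state, so resends never change state.

processed = {}                      # txn_id -> recorded state

def handle(txn_id, new_state):
    """Apply a message once; duplicate deliveries leave state unchanged."""
    if txn_id not in processed:
        processed[txn_id] = new_state
    return processed[txn_id]

assert handle("txn-42", "committed") == "committed"   # first delivery
assert handle("txn-42", "committed") == "committed"   # resend: no change
assert processed == {"txn-42": "committed"}
```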

Real-time aware transactions: This classification of transactions is orthogonal to the other classes. In other words, any transaction protocol - two party, auction, or stock market - may be either real-time aware or not. A real-time aware protocol is one that can detect the violation of a real-time constraint. While, in general, this constraint may be of any nature, in our protocols we defined the constraint to be a maximum allowed time between the customer's commitment to buy a product and the customer's receipt of proof that the sale is completed. The proof may be either the product itself or a confirmation of sale (e.g., in the case of stock transactions). In this thesis, we designed a methodology to convert a non-real-time-aware protocol to one that is real-time aware. We demonstrated this methodology on our protocols for all three classes of transactions.
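The real-time constraint described above amounts to a deadline check between two recorded instants. The timestamps and the 30-second window below are purely illustrative assumptions, not values from the thesis.

```python
# Deadline-check sketch for a real-time aware protocol: record the
# customer's commitment time, then flag a violation if proof of the
# completed sale arrives after the allowed window.

MAX_DELAY = 30.0        # maximum allowed seconds (illustrative value)

def violates_constraint(commit_time, proof_time, max_delay=MAX_DELAY):
    """True if the proof of completed sale arrived too late."""
    return (proof_time - commit_time) > max_delay

assert not violates_constraint(100.0, 125.0)   # delivered within the window
assert violates_constraint(100.0, 140.0)       # delay detected
```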

Auctions: Automated auctions are extremely popular with buyers, sellers, and auctioneers. The reasons are the ease of execution, the possibility of getting a better price, and the lack of need to maintain an extensive knowledge base. In this thesis, we developed a protocol for open-cry auctions that assures a limited amount of anonymity and a great deal of security, atomicity, privacy, and cost advantage.

Stock market transactions: Stock market transactions evolved over years of refinement and will continue to evolve. However, it seems that the transition to electronic markets has not taken full advantage of the benefits of computerization: the model of transaction is still the same in these electronic markets. We have made a case for more sophisticated models of stock market transaction, and have presented one such model where previous forms of transactions are preserved and a new type of transaction is added. We have designed a protocol for this new model of transaction that is secure, atomic, anonymous, private, inexpensive, and also real-time aware.

7.2 Future Research

Many problems remain to be solved. It is not feasible to list all the problems in this still nascent field of electronic commerce. However, in this section, we attempt to point out a few major directions of research. We expect that preliminary work in each of these broad areas will yield a number of well defined problems.

• The costs incurred by our protocols are low enough for most transactions. However, these overhead costs render them inefficient for transactions involving products and payments on the order of a few cents (also called micro-payments). A special case of this protocol with lesser security and encryption requirements may be used for such small-value transactions. The design of such protocols is a future exercise. These transaction protocols must not be confused with the underlying micro-payment protocols that already exist.

• The real-time aware protocols presented in this thesis do not differentiate between late delivery of a product and non-delivery of a product. In other words, a product received after the stipulated time has elapsed may still be partially valuable, while a product not received at all is of nil value. The design of a protocol for the model where this partial value is recognized is an interesting future exercise.

• In this thesis, the time constraint for real-time aware protocols covers the interval between the point of ordering a product or service and the point of receiving it. This includes unintentional network delays. This definition may be extended to measure only the execution time at the merchant's site/server, or other time intervals depending on the application in question. Extending this work to measure such intervals of time is left as a future exercise.

• A great deal of work still remains to be done in designing better transaction models and pricing models for stock markets. The transaction model suitable to the electronic world will probably evolve slowly, just as the Wall Street model did. Every small improvement counts towards the final answer.

• The work in the area of stock markets may also be extended to commodity markets. Commodity markets, such as those for gold, wheat, etc., dynamically set their prices based on market considerations. This is very close to how the stock market operates. While commodities may not be transferred electronically, the price negotiation can happen electronically. There is not much literature in this area, although there are researchers at IBM T. J. Watson looking into the problems associated with commodity market transactions.

• Secure co-processors exist that may be used for specific purposes. A general purpose secure co-processor may be used for multiple applications: real-time transactions, content distribution, secure payments, etc. The design of a versatile, one-size-fits-all secure co-processor and the business model for successful large-scale deployment of the same are very challenging problems involving hardware and business acumen respectively.

• It is important to note that in the extended BAN logic developed in this thesis, the intruder is modeled as an external intruder. Hence, the merchant, customer, and other participants are assumed not to be malicious. Security against maliciousness is covered in the operational arguments. Extending this logic to reason about malicious participants is another challenging and interesting future exercise.

BIBLIOGRAPHY

[1] Bargain finder. URL: http://bf.cstar.ac.com/bf.

[2] Cybercash. URL: http://www.cybercash.com/cybercash/cyber2.html.

[3] Digicash. URL: http://www.digicash.com/ecash/ecash-home.html.

[4] Nasdaq. URL: http://www.nasdaq.com.

[5] The New York Stock Exchange. URL: http://www.nyse.com.

[6] Optimark. URL: http://www.optimark.com.

[7] Personal logic. URL: http://www.personalogic.com.

[8] M. Abadi and R. Needham. Prudent Engineering Practice for Cryptographic Protocols. In Proceedings of the IEEE Computer Society Symposium on Research in Security and Privacy, May 1994.

[9] N. R. Adam, B. S. Fordham, and Y. Yesha. Some Key Issues in Database Systems in a Digital Library Setting. Lecture Notes in Computer Science: Digital Libraries, 916, May 1996.

[10] P. O'Neil and G. Graefe. Multi-Table Joins through Bitmapped Join Indices. SIGMOD Record, September 1995.

[11] R. Agrawal, T. Imielinski, and A. Swami. Mining Association Rules between Sets of Items in Large Databases. In Proceedings of the ACM SIGMOD Conference on Management of Data, 1993.

[12] R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, and A. I. Verkamo. Fast Discovery of Association Rules. Advances in Knowledge Discovery and Data Mining, 1996.

[13] R. Agrawal and R. Srikant. Fast Algorithms for Mining Association Rules. In Proceedings of the 20th VLDB Conference, 1994.

194 [14] R. Anderson, H. Manifavas, and C. Sutherland. A Practical Electronic Cash System. Technical Report, Cambridge University, 1995.

[15] C. Apte and S. J. Hong. Predicting Equity Returns from Securities Data. Advances in Knowledge Discovery and Data Mining, 1996.

[16] M. S. Baum. Electronic Contracting and EDI Law. Wiley Law Publications, New York, 1991.

[17] C. Beam, A. Segev, and J. G. Shanthikumar. Electronic Negotiation through Internet-based Auctions. Technical Report, CITM Working Paper 96-WP-1016, Haas School, Berkeley, 1996.

[18] M. Bellare, J. A. Garay, R. Hauser, et al. iKP- A Family of Secure Electronic Payment Protocols. In Proceedings of the First USENIX Workshop on Electronic Commerce, 1995.

[19] J. Berge. The EDIFACT Standards. Technical Report, NCC Blackwell, 1991.

[20] A. Bleeker and L. Meertens. A Semantics for BAN Logic. In Proceedings of the DIMACS Workshop on Design and Formal Verification of Security Protocols, 1997.

[21] J. P. Boly et al. The ESPRIT Project CAFE - High Security Digital Payment Systems. In Proceedings of ESORICS '94, 1994.

[22] M. Burrows, M. Abadi, and R. Needham. A Logic of Authentication. ACM Transactions on Computer Systems, 1990.

[23] S. Chaudhuri and U. Dayal. An Overview of Data Warehousing and OLAP Technology. SIGMOD Record, 26(1), 1997.

[24] D. Chaum. Blind Signatures for Untraceable Payments. In Advances in Cryptology: Proceedings of CRYPTO 82, 1983.

[25] D. Chaum. Security without Identification: Transaction Systems to make Big Brother Obsolete. Communications of the ACM, 28(10), October 1985.

[26] D. Chaum. Blind Signature Systems. U.S. Patent #4,759,063, July 1988.

[27] D. Chaum. Blinding for Unanticipated Signatures. In Advances in Cryptology: Proceedings of EUROCRYPT 87, 1988.

[28] P. Cheeseman and J. Stutz. Bayesian Classification (AutoClass): Theory and Results. Advances in Knowledge Discovery and Data Mining, 1996.

195 [29] T. Coffey and P. Saidha. Non-Repudiation with Mandatory Proof of Receipt. Sigcomm, 26, 1996.

[30] T. ElGamal. A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. In Advances in Cryptology: Proceedings of CRYPTO 84, 1985.

[31] T. ElGamal. A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. IEEE Transactions on Information Theory, 31(4), 1985.

[32] M. A. Emmelhainz. Electronic Data Interchange: A Total Management Guide. Van Nostrand Reinhold, New York, 1990.

[33] U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth. The KDD Process of Extracting Useful Knowledge from Volumes of Data. Communications of the ACM, 39(11), 1996.

[34] W. Ford and M. S. Baum. Secure Electronic Commerce. Prentice Hall, 1997.

[35] M. Franklin and M. Reiter. The Design and Implementation of a Secure Auction Service. In Proceedings of the IEEE Symposium on Security and Privacy, 1995.

[36] E. Gabber and A. Silberschatz. Agora: A Minimal Distributed Protocol for Electronic Commerce. In Proceedings of the 2nd USENIX Workshop on Electronic Commerce, 1996.

[37] F. Germeau and G. Leduc. Model-based Design and Verification of Security Protocols using LOTOS. In Proceedings of the DIMACS Workshop on Design and Formal Verification of Security Protocols, 1997.

[38] L. Gong, R. Needham, and R. Yahalom. Reasoning about Beliefs in Cryptographic Protocols. In Proceedings of the IEEE Symposium on Research in Security and Privacy, 1990.

[39] V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing Data Cubes Efficiently. In Proceedings of the SIGMOD Conference, 1996.

[40] K. E. B. Hickman. Secure Socket Library. 1995. URL: http://www.mcom.com/info/SSL.html.

[41] W. P. Hamilton III and M. M. Henderson. Forging a Partnership through EDI. Technical Report, A Handbook for DoD and Small Business, Logistics Management Institute, 1993.

[42] T. Imielinski and H. Mannila. A Database Perspective on Knowledge Discovery. Communications of the ACM, 39(11), 1996.

[43] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, Englewood Cliffs, NJ, 1988.

[44] B. S. Kaliski. The MD2 Message Digest Algorithm. Technical Report, RSA Laboratories, Inc., April 1992.

[45] S. Ketchpel. Transaction Protection for Information Buyers and Sellers. In Proceedings of DAGS95: Electronic Publishing and the Information Superhighway, 1995. URL: http://robotics.stanford.edu/users/ketchpel/dags4.html.

[46] S. P. Ketchpel and H. Garcia-Molina. Making Trust Explicit in Distributed Commerce Transactions. In Proceedings of the 16th ICDCS, 1996.

[47] H. Kikuchi, M. Harkavy, and J. D. Tygar. Multiround Anonymous Auction Protocols. In Proceedings of the IEEE Workshop on Dependable and Real-Time E-Commerce Systems, 1998.

[48] N. Koblitz. A Course in Number Theory and Cryptography. Springer-Verlag, 1987.

[49] V. Leyland. Electronic Data Interchange: a Management View. Prentice Hall, New York, 1993.

[50] P. Maes. Agents that Reduce Work and Information Overload. Communications of the ACM, 37(7), 1994.

[51] M. Manasse. The Millicent Protocols for Electronic Commerce. In Proceedings of the First USENIX Workshop on Electronic Commerce, 1995.

[52] C. Meadows. Analyzing the Needham-Schroeder Public Key Protocol: A Comparison of Two Approaches. In Proceedings of ESORICS, 1996.

[53] R. Neches, R. Fikes, T. Finin, T. Gruber, R. Patil, T. Senator, and W. Swartout. Enabling Technology for Knowledge Sharing. AI Magazine, 12(3), 1991.

[54] NIST. Proposed Federal Information Processing Standard for Secure Hash Standard. Federal Register, 57(21), January 1992.

[55] R. Oppliger. Internet Security: Firewalls and Beyond. Communications of the ACM, 40(5), May 1997.

[56] L. C. Paulson. Isabelle: A Generic Theorem Prover. Lecture Notes in Computer Science, 828, 1994.

[57] T. P. Pedersen. Electronic Payment of Small Amounts. Technical Report PB-495, Aarhus University, Denmark, August 1995.

[58] C. Peng, J. M. Pulido, K. J. Lin, and D. Plough. The Design of an Internet-based Real Time Auction System. In Proceedings of the IEEE Workshop on Dependable and Real-Time E-Commerce Systems, 1998.

[59] R. Rivest. The MD5 Message Digest Algorithm. Technical Report, RFC 1321, RSA Data Security, Inc., April 1992.

[60] R. Rivest and A. Shamir. PayWord and MicroMint: Two Simple Micropayment Schemes. Technical Report, MIT, Cambridge, MA, 1996.

[61] R. Rivest, A. Shamir, and L. Adleman. A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Communications of the ACM, 1978.

[62] M. J. B. Robshaw. MD2, MD4, MD5, SHA, and other hash functions. Technical Report, MIT, Cambridge, MA, 1996.

[63] B. Schneier. Applied Cryptography. John Wiley & Sons, Inc., 1994.

[64] M. Sirbu and J. D. Tygar. NetBill: An Internet Commerce System. In Proceedings of IEEE COMPCON, 1995. URL: http://www.ini.cmu.edu/netbill/CompCon.html.

[65] D. C. Smith, A. Cypher, and J. Spohrer. KIDSIM: Programming Agents without a Programming Language. Communications of the ACM, 37(7), 1994.

[66] E. Snekkenes. Exploring the BAN Approach to Protocol Analysis. In Proceedings of the IEEE Symposium on Research in Security and Privacy, 1991.

[67] L. H. Stein, E. A. Stefferud, N. S. Borenstein, and M. T. Rose. The Green Commerce Model. 1994. URL: http://www.fv.com/tech/green-model.html.

[68] J. G. Steiner, B. C. Neuman, and J. I. Schiller. Kerberos: An Authentication Service for Open Network Systems. In USENIX Conference Proceedings, 1988.

[69] S. Subramanian. Inquire: A New Paradigm for Information Retrieval. Advances in Digital Libraries Forum, Library of Congress, May 1996.

[70] S. Subramanian. Clustering and Automatic Classification. 1998.

[71] S. Subramanian. Design and Verification of a Secure Electronic Auction Protocol. In Proceedings of the 17th IEEE Symposium on Reliable Distributed Systems, October 1998.

[72] S. Subramanian. Design and Verification of Protocols for Secure Transaction Execution in Electronic Commerce. Dissertation Proposal, OSU-CISRC-10/98- TR40, 1998.

[73] S. Subramanian and M. Singhal. A Methodology for Detecting Violation of Real-Time Constraints in Secure Electronic Commerce Transactions. Technical Report, OSU-CISRC-11/97-TR56, 1997.

[74] S. Subramanian and M. Singhal. Protocols for Secure, Atomic Transaction Ex­ ecution in Electronic Commerce. Technical Report, OSU-CISRC-10/97-TR49, 1997.

[75] S. Subramanian and M. Singhal. Design and Verification of a Secure, Atomic Auction Protocol. Technical Report, OSU-CISRC-5/98-TR15, 1998.

[76] S. Subramanian and M. Singhal. Design and Verification of a Secure, Atomic Transaction Execution Protocol for Electronic Commerce. Technical Report, OSU-CISRC-10/98-TR12, 1998.

[77] S. Subramanian and M. Singhal. Detecting Violation of Real-Time Constraints in Secure Electronic Commerce Transactions. In Proceedings of the 14th International Information Security Conference IFIP SEC '98, September 1998.

[78] S. Subramanian and M. Singhal. A Real-Time Protocol for Stock Market Transactions. In Proceedings of the International Workshop on Advanced Issues of Electronic Commerce and Web-based Information Systems, April 1999.

[79] S. Subramanian and M. Singhal. A Secure Electronic Stock Market Transaction Protocol. Economic Research and Electronic Networking (NETNOMICS), 1999. Under Review.

[80] S. Subramanian and M. Singhal. Analysis of E-Commerce Protocols for Security Under Passive and Active Attacks. Journal of Computer Security, 1999. Under Review.

[81] S. Subramanian and M. Singhal. Real-Time Aware Protocols for General E-Commerce and Electronic Auction Transactions. In Proceedings of the ICDCS Workshop 1999, June 1999.

[82] P. Syverson. Adding Time to a Logic of Authentication. In Proceedings of the First ACM Conference on Computer and Communications Security, 1993.

[83] P. Syverson and P. van Oorschot. On Unifying some Cryptographic Protocol Logics. In Proceedings of the IEEE Symposium on Research in Security and Privacy, 1994.

[84] A. Tang and S. Scoggins. Open Networking with OSI. Prentice Hall, 1992.

[85] J. D. Tygar. Atomicity in Electronic Commerce. In Proceedings of PODC, 1996.

[86] P. Wayner. Digital Cash: Commerce on the Net. Academic Press, AP Professional, 1997.

[87] M. P. Wellman and P. R. Wurman. Real Time Issues For Internet Auctions. In Proceedings of the IEEE Workshop on Dependable and Real-Time E-Commerce Systems, 1998.

[88] C. White. Multidimensional OLAP versus Relational OLAP. 1996.

[89] S. R. White, S. H. Weingart, et al. Introduction to the Citadel Architecture: Security in Physically Exposed Environments. Technical Report, Distributed Security Systems Group, IBM T. J. Watson Research Center, 1991.

[90] B. Yee and J. D. Tygar. Secure Coprocessors in Electronic Commerce Applications. In Proceedings of the First USENIX Workshop on Electronic Commerce, 1995.

[91] B. S. Yee. Using Secure Coprocessors. Ph.D. thesis, Computer Science Technical Report CMU-CS-94-149, Carnegie Mellon University, 1994.

[92] X. Yi, X. Wang, and K. Lam. A Secure Auction-like Negotiation Protocol for Agent-based Internet Trading. In Proceedings of the 17th IEEE Symposium on Reliable Distributed Systems, 1998.
