Enhancing Security in Distributed Systems with Trusted Computing Hardware


Enhancing Security in Distributed Systems with Trusted Computing Hardware

by Jason Reid
Bachelor of Commerce, UQ Australia, 1994
Master of Information Technology, QUT Australia, 1999

Thesis submitted in accordance with the regulations for the Degree of Doctor of Philosophy, Information Security Institute, Queensland University of Technology, 2007.

Keywords: Trusted computing, trusted computing hardware, trusted systems, operating system security, distributed systems security, smart card, security evaluation, tamper resistance, distance bounding protocol, side channel leakage, electronic cash, electronic health records, role-based access control.

Abstract

The need to increase the hostile attack resilience of distributed and internetworked computer systems is critical and pressing. This thesis contributes to concrete improvements in distributed systems trustworthiness through an enhanced understanding of a technical approach known as trusted computing hardware. Because of its physical and logical protection features, trusted computing hardware can reliably enforce a security policy in a threat model where the authorised user is untrusted or where the device is placed in a hostile environment.

We present a critical analysis of vulnerabilities in current systems, and argue that current industry-driven trusted computing initiatives will fail in their efforts to retrofit security into inherently flawed operating system designs, since there is no substitute for a sound protection architecture grounded in hardware-enforced domain isolation. In doing so we identify the limitations of hardware-based approaches. We argue that the current emphasis of these programs does not give sufficient weight to the role that operating system security plays in overall system security. New processor features that provide hardware support for virtualisation will contribute more to practical security improvement because they will allow multiple operating systems to share the same processor concurrently. New operating systems that implement a sound protection architecture can then be introduced to support applications with stringent security requirements, coexisting alongside inherently less secure mainstream operating systems and allowing a gradual migration to less vulnerable alternatives.

We examine the effectiveness of the ITSEC and Common Criteria evaluation and certification schemes as a basis for establishing assurance in trusted computing hardware. Based on a survey of smart card certifications, we contend that the practice of artificially limiting the scope of an evaluation in order to gain a higher assurance rating is quite common. Due to a general lack of understanding in the marketplace as to how the schemes work, high evaluation assurance levels are confused with a general notion of ‘high security strength’. Vendors invest little effort in correcting the misconception since they benefit from it, and this has arguably undermined the value of the whole certification process.

We contribute practical techniques for securing personal trusted hardware devices against a type of attack known as a relay attack. Our method is based on a novel application of a phenomenon known as side channel leakage, heretofore considered exclusively as a security vulnerability.
We exploit the low latency of side channel information transfer to deliver a communication channel with timing resolution fine enough to detect sophisticated relay attacks, and we avoid the cost and complexity associated with the alternative communication techniques suggested in previous proposals. We also propose the first terrorist-attack-resistant distance bounding protocol that is efficient enough to be implemented on resource-constrained devices.

We propose a design for a privacy-sensitive electronic cash scheme that leverages the confidentiality and integrity protection features of trusted computing hardware. We specify the command set and message structures and implement these in a prototype that uses Dallas Semiconductor iButtons.

We consider the access control requirements for a national-scale electronic health records system of the type that Australia is currently developing. We argue that an access control model capable of supporting explicit denial of privileges is required to ensure that consumers maintain their right to grant or withhold consent to disclosure of their sensitive health information in an electronic system. Finding this feature absent in standard role-based access control models, we propose a modification to role-based access control that supports policy constructs of this type. Explicit denial is difficult to enforce in a large-scale system without an active central authority, but centralisation impacts negatively on system scalability. We show how the unique properties of trusted computing hardware can address this problem. We outline a conceptual architecture for an electronic health records access control system that leverages hardware-level CPU virtualisation, trusted platform modules, personal cryptographic tokens and secure coprocessors to implement role-based cryptographic access control. We argue that the design delivers important scalability benefits because it enables access control decisions to be made and enforced locally on a user's computing platform in a reliable way.

Contents

Keywords
Abstract
List of Abbreviations
Declaration
Previously Published Material
Acknowledgements

1 Introduction & overview
1.1 Aims and objectives
1.2 Outline of the thesis
1.3 Contributions and achievements

2 Background
2.1 Introduction
2.2 Assets: threats and vulnerabilities
2.3 What is trusted computing hardware?
2.3.1 Tamper resistance
2.3.2 Tamper detection and response
2.3.3 Hardware types
2.4 Attack methods
2.5 Security services provided by trusted computing hardware
2.5.1 Data confidentiality
2.5.2 Code confidentiality
2.5.3 Data integrity
2.5.4 Code integrity
2.5.5 Confidentiality and integrity - authentication
2.5.6 Availability
2.6 Applications
2.7 Conclusion

3 Trusted computing and trusted systems
3.1 Introduction
3.2 Vulnerabilities in distributed computing infrastructure
3.3 Trusted systems
3.3.1 Historical background and context
3.3.2 Properties of trusted systems
3.4 Protection architecture flaws in mainstream operating systems
3.4.1 No least privilege or MAC
3.4.2 Lack of assurance
3.4.3 Memory architecture - no reference monitor
3.4.4 Trustworthiness of mainstream operating systems
3.4.5 Trusted Operating Systems
3.4.6 Trusted systems - barriers to adoption
3.5 TCG Scheme - description
3.5.1 Key features
3.5.2 The trusted computing controversy
3.5.3 Background and relationship to prior work
3.5.4 TCG trusted platform module
3.5.5 Integrity measurement and reporting
3.5.6 Protected storage
3.5.7 Sealed storage
3.6 Analysis of TCG remote attestation and privacy model
3.6.1 Identity credentials
3.6.2 Credential revocation requirements
3.6.3 Minimising the trust on the privacy CA
3.7 Integrating TCG with mainstream operating systems
3.7.1 DRM case study introduction
3.7.2 Relevance of trusted systems to DRM
3.7.3 Policy enforcement on a DRM client platform
3.7.4 Maintaining integrity assurance
3.7.5 Privacy impacts
3.8 Next Generation Secure Computing Base (NGSCB)
3.8.1 NGSCB and virtual machine monitors
3.8.2 Trusted systems properties of NGSCB
3.8.3 Microsoft delays NGSCB
3.8.4 The significance of virtual machine technology
3.9 Conclusion

4 Certification - Trusted and Trustworthy?
4.1 Introduction
4.2 How certification works
4.2.1 ITSEC
4.2.2 The Common Criteria
4.2.3 FIPS 140
4.3 A survey of smart card certifications
4.3.1 Mondex - E6 High or EAL 1 Low
4.3.2 Justification for a MULTOS High SoM
4.3.3 Motivations to exclude hardware from the TOE
4.4 Evaluation by composition
4.5 Conclusion

5 Side channel leakage
5.1 Introduction
5.2 Attacking trusted hardware with side channel analysis
5.2.1 CMOS power consumption and side channel leakage
5.2.2 A correlation power analysis attack on DES
5.2.3 Current practical significance of power analysis attacks
5.3 Applying side channel leakage to distance bounding protocols
5.3.1 Introduction to relay attacks and distance bounding
5.3.2 Distance-bounding protocols
5.3.3 Hancke and Kuhn's distance-bounding protocol
5.3.4 New distance-bounding protocol
5.4 Security of new protocol
5.4.1 Comparison with existing schemes
5.4.2 Communications requirements for distance bounding
5.4.3 Timing resolution for relay attack detection
5.4.4 Timing resolution for contactless card communication
5.4.5 A new approach to low latency communication
5.4.6 Experimental results
5.4.7 Investigations into modulation latency
5.5 Conclusion

6 An electronic cash scheme based on trusted computing hardware
6.1 Introduction
6.2 Background and design issues
6.2.1 Desirable scheme properties
6.2.2 Common approaches to representing electronic value
6.3 Outline of the proposed scheme
6.3.1 Left cash
6.3.2 Right cash
6.4 Scheme design and implementation
6.4.1 Overview and design principles
6.4.2 Payment protocol
6.4.3 Fraud and counterfeit detection
6.4.4 Non-repudiation
6.4.5 Public key authentication framework
6.4.6 Timing of left for right exchange
6.4.7 Divisible instruments allow linking
6.4.8 Summary
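
As an editorial illustration of the distance-bounding idea summarised in the abstract (and treated in Chapter 5), the following Python sketch shows a Hancke-Kuhn-style rapid-bit exchange in which the verifier rejects a prover whose responses are wrong or arrive too late. It is a minimal, self-contained toy: the class and parameter names are assumptions for illustration, the exchange happens in-process rather than over a radio channel, and it is not the thesis's own side-channel-based protocol.

```python
import os
import time

SPEED_OF_LIGHT = 3.0e8  # metres per second


def round_trip_bound(max_distance_m: float, processing_s: float) -> float:
    """Upper bound on an honest round trip: out-and-back propagation
    time plus the token's (assumed fixed) processing delay."""
    return 2.0 * max_distance_m / SPEED_OF_LIGHT + processing_s


class ToyProver:
    """Prover holding two response registers derived from an earlier
    (omitted) nonce/commitment phase, as in Hancke-Kuhn-style protocols."""

    def __init__(self, r0: bytes, r1: bytes):
        self.r0, self.r1 = r0, r1

    def respond(self, i: int, challenge_bit: int) -> int:
        reg = self.r1 if challenge_bit else self.r0
        return (reg[i // 8] >> (i % 8)) & 1


def distance_bound(prover, r0, r1, rounds=32, max_rtt_s=1e-3):
    """Rapid single-bit challenge-response rounds: reject on any wrong
    response (impostor) or any late response (possible relay attack)."""
    for i in range(rounds):
        c = os.urandom(1)[0] & 1              # unpredictable one-bit challenge
        start = time.perf_counter()
        resp = prover.respond(i, c)           # stands in for the radio exchange
        rtt = time.perf_counter() - start
        expected = ((r1 if c else r0)[i // 8] >> (i % 8)) & 1
        if resp != expected or rtt > max_rtt_s:
            return False
    return True


if __name__ == "__main__":
    r0, r1 = os.urandom(4), os.urandom(4)
    # A field deployment would use something like round_trip_bound(10.0, 50e-9),
    # roughly 120 ns; the generous 1 ms bound only keeps this in-process toy
    # from rejecting the honest prover because of Python call overhead.
    print(distance_bound(ToyProver(r0, r1), r0, r1, max_rtt_s=1e-3))
```

The usefulness of such a protocol hinges on the timing resolution of the challenge-response channel, which is why the abstract's low-latency channel built from side channel leakage is attractive for relay attack detection.

The abstract's electronic health records contribution turns on role-based access control extended with explicit denial. The sketch below, again purely illustrative and not the thesis's proposed model, shows the deny-overrides semantics that standard RBAC lacks: a consumer's explicit denial wins over any permission a user holds through role membership.

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple


@dataclass
class RBACWithDenial:
    """Role-based access control plus explicit denial entries that
    override any grant obtained through role membership."""
    role_perms: Dict[str, Set[Tuple[str, str]]] = field(default_factory=dict)
    user_roles: Dict[str, Set[str]] = field(default_factory=dict)
    denials: Set[Tuple[str, str, str]] = field(default_factory=set)

    def deny(self, user: str, obj: str, action: str) -> None:
        # e.g. a consumer withholding consent for one record and one clinician
        self.denials.add((user, obj, action))

    def check(self, user: str, obj: str, action: str) -> bool:
        if (user, obj, action) in self.denials:
            return False                      # explicit denial always wins
        return any((obj, action) in self.role_perms.get(role, set())
                   for role in self.user_roles.get(user, set()))


if __name__ == "__main__":
    ac = RBACWithDenial(
        role_perms={"treating_doctor": {("ehr:alice", "read")}},
        user_roles={"dr_bob": {"treating_doctor"}},
    )
    print(ac.check("dr_bob", "ehr:alice", "read"))   # True, via role membership
    ac.deny("dr_bob", "ehr:alice", "read")           # consent withheld
    print(ac.check("dr_bob", "ehr:alice", "read"))   # False, denial overrides
```
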
Recommended publications
  • NSA Security-Enhanced Linux (SELinux)
    Integrating Flexible Support for Security Policies into the Linux Operating System. http://www.nsa.gov/selinux. Stephen D. Smalley, [email protected], Information Assurance Research Group, National Security Agency. Outline: motivation and background; what SELinux provides; SELinux status and adoption; ongoing and future development. Why secure the operating system? Information attacks don't require a corrupt user. Applications can be circumvented and must process data in the clear. The network is too far; the hardware is too close. End system security requires a secure OS, and secure end-to-end transactions require secure end systems. Mandatory Access Control: a "missing link" of security in current operating systems, defined by three major properties: an administratively-defined security policy, control over all subjects (processes) and objects, and decisions based on all security-relevant information. Discretionary Access Control: the existing access control mechanism of current OSes. It is limited to user identity / ownership, vulnerable to malicious or flawed software, and subject to every user's discretion (or whim); it only distinguishes admin vs. non-admin for users, only supports coarse-grained privileges for programs, and permits unbounded privilege escalation. What can MAC offer? Strong separation of security domains; system, application, and data integrity; the ability to limit program privileges; processing pipeline guarantees; authorization limits for legitimate users. MAC implementation issues: the limitations of traditional MAC must be overcome (more than just Multi-Level Security / BLP), policy flexibility is required (one size does not fit all), and security transparency should be maximized (compatibility for applications and existing usage).
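To make the MAC/DAC contrast in the excerpt concrete, here is a minimal Python sketch. It is not SELinux's actual mechanism or policy language, and the label and type names are invented for illustration; it only shows a discretionary check based on identity and ownership alongside a mandatory check against an administratively defined label policy, and why even a root-owned process cannot exceed the mandatory policy.

```python
from dataclasses import dataclass


@dataclass
class Subject:
    uid: int
    label: str          # e.g. "httpd_t" -- label names are illustrative only


@dataclass
class Object:
    owner_uid: int
    mode_owner_write: bool
    label: str          # e.g. "web_content_t"


def dac_allows_write(subj: Subject, obj: Object) -> bool:
    """Discretionary check: identity/ownership only, set at the owner's whim."""
    return subj.uid == 0 or (subj.uid == obj.owner_uid and obj.mode_owner_write)


# Administratively defined policy: (subject label, object label, permission).
# Every subject and object carries a label, so every access can be mediated.
MAC_POLICY = {
    ("httpd_t", "web_content_t", "read"),
    ("webmaster_t", "web_content_t", "write"),
}


def mac_allows(subj: Subject, obj: Object, perm: str) -> bool:
    """Mandatory check: decided from security-relevant labels, not ownership."""
    return (subj.label, obj.label, perm) in MAC_POLICY


def allows(subj: Subject, obj: Object, perm: str) -> bool:
    """Both mechanisms must agree: MAC restricts, rather than replaces,
    the existing discretionary controls."""
    dac_ok = dac_allows_write(subj, obj) if perm == "write" else True
    return dac_ok and mac_allows(subj, obj, perm)


if __name__ == "__main__":
    httpd = Subject(uid=0, label="httpd_t")   # running as root, but confined
    page = Object(owner_uid=0, mode_owner_write=True, label="web_content_t")
    print(allows(httpd, page, "read"))    # True
    print(allows(httpd, page, "write"))   # False: MAC denies despite root DAC
```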
  • SPF Based SELinux Operating System for Multimedia Applications
    International Journal of Reviews in Computing, 31st December 2011, Vol. 8. © 2009 - 2011 IJRIC & LLS. All rights reserved. ISSN: 2076-3328, E-ISSN: 2076-3336, www.ijric.org. SPF BASED SELINUX OPERATING SYSTEM FOR MULTIMEDIA APPLICATIONS. Nitish Pathak¹, Neelam Sharma². ¹BVICAM, GGSIPU, Delhi, India. ²Department of Information Technology, Maharaja Agrasen Institute of Technology, Delhi, India. E-mail: [email protected], [email protected]. ABSTRACT: Trusted Operating Systems offer a number of security mechanisms that can help protect information, make a system difficult to break into, and confine attacks far better than traditional operating systems. However, this security comes at a cost, since it can degrade the performance of an operating system, and this performance loss is one of the reasons why Trusted Operating Systems have not become popular. While Trusted Operating Systems offer an incredible amount of security, observations about computing workloads suggest that only some parts of the operating system security are actually necessary. Web servers are the best example. For many web servers, the majority of the information on the server is publicly readable and available on the Internet. Therefore, if a Trusted Operating System is used on a web server, any security used to protect the confidentiality of the server's information is unnecessary and can be considered a waste of computational resources. The security needed in web servers is the security to protect the integrity of data, not the confidentiality of data. Other workloads such as multimedia or database workloads may also only need parts of the operating system security.
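The excerpt's argument is that a public web server mainly needs integrity protection (controlling who may modify content), so confidentiality checks are largely wasted effort there. Below is a minimal sketch of that idea using a Biba-style "no write up" rule with invented type names; it is illustrative only and is not the paper's SPF mechanism.

```python
# Integrity levels: higher means more trusted content.
INTEGRITY = {"untrusted_net_t": 0, "httpd_t": 1, "admin_t": 2}
OBJECT_INTEGRITY = {"tmp_t": 0, "web_content_t": 2, "config_t": 2}


def may_read(subject_type: str, object_type: str) -> bool:
    # Public web content: no confidentiality check at all, so there is no
    # per-read policy cost for the common case the excerpt describes.
    return True


def may_write(subject_type: str, object_type: str) -> bool:
    # Biba-style "no write up": a subject may only modify objects whose
    # integrity level does not exceed its own, so a compromised httpd_t
    # process cannot corrupt web_content_t or config_t.
    return INTEGRITY[subject_type] >= OBJECT_INTEGRITY[object_type]


if __name__ == "__main__":
    print(may_read("untrusted_net_t", "web_content_t"))   # True: public data
    print(may_write("httpd_t", "web_content_t"))          # False: integrity protected
    print(may_write("admin_t", "web_content_t"))          # True: trusted updater
```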
  • Trusted System Concepts
    Trusted System Concepts. Marshall D. Abrams, Ph.D., Michael V. Joyce. The MITRE Corporation, 7525 Colshire Drive, McLean, VA 22102, 703-883-6938, [email protected]. This is the first of three related papers exploring how contemporary computer architecture affects security. Key issues in this changing environment, such as distributed systems and the need to support multiple access control policies, necessitate a generalization of the Trusted Computing Base paradigm. This paper develops a conceptual framework with which to address the implications of the growing reliance on Policy-Enforcing Applications in distributed environments. 1 INTRODUCTION: A significant evolution in computer software architecture has taken place over the last quarter century. The centralized time-sharing systems of the 1970s and early 1980s are rapidly being superseded by the distributed architectures of the 1990s. As an integral part of the architecture evolution, the composition of the system access control policy has changed. Instead of a single policy, the system access control policy is more likely to be a composite of several constituent policies implemented in applications that create objects and enforce their own unique access control policies. This paper first provides a survey that explains how the security community developed the accepted concepts and criteria that addressed the time-shared architectures. Second, the paper focuses on the changes currently ongoing, providing insight into the driving forces and probable directions. This paper presents contemporary thinking; it summarizes and generalizes vertical and horizontal extensions to the Trusted Computing Base (TCB) concept. While attempting to be logical and rigorous, formalism is avoided. This paper was first published in Computers & Security, Vol.
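The excerpt's central observation is that the system access control policy is now a composite of constituent policies, several of which live inside policy-enforcing applications. Here is a small, purely illustrative sketch of that composition (hypothetical policy names, not the paper's framework): each constituent policy either abstains or rules on a request, and the composite allows only if every policy with jurisdiction allows.

```python
from typing import Callable, List, Optional

# A constituent policy returns True/False, or None when it does not claim
# jurisdiction over the request.
Policy = Callable[[str, str, str], Optional[bool]]


def os_mac_policy(subject: str, obj: str, perm: str) -> Optional[bool]:
    if obj.startswith("/secure/"):
        return subject == "cleared_user"
    return None


def records_app_policy(subject: str, obj: str, perm: str) -> Optional[bool]:
    # A Policy-Enforcing Application guarding the objects it created.
    if obj.startswith("records:"):
        return perm == "read" and subject in {"clerk", "auditor"}
    return None


def composite_decision(policies: List[Policy], subject, obj, perm) -> bool:
    """The system policy is the composite of its constituents: every policy
    with jurisdiction must allow, and unmediated requests are denied."""
    verdicts = [p(subject, obj, perm) for p in policies]
    applicable = [v for v in verdicts if v is not None]
    return bool(applicable) and all(applicable)


if __name__ == "__main__":
    policies = [os_mac_policy, records_app_policy]
    print(composite_decision(policies, "clerk", "records:1234", "read"))    # True
    print(composite_decision(policies, "clerk", "records:1234", "write"))   # False
    print(composite_decision(policies, "anyone", "/tmp/x", "read"))         # False: unmediated
```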
  • Looking Back: Addendum
    Looking Back: Addendum. David Elliott Bell. Abstract: The picture of computer and network security painted in my 2005 ACSAC paper was bleak. I learned at the conference that the situation is even bleaker than it seemed. We connect our most sensitive networks to less-secure networks using low-security products, creating high-value targets that are extremely vulnerable to sophisticated attack or subversion. Only systems of the highest security are sufficient to thwart such attacks and subversions. The environment for commercial security products can be made healthy again. 1 Introduction: In the preparation of a recent ACSAC paper [1], I confirmed my opinion that computer and network security is in sad shape. I also made the editorial decision to soft-pedal criticism of my alma mater, the National Security Agency. [...] business environment produced by Steve Walker's Computer Security Initiative. What we needed then and what we need now are "selfless acts of security" that lead to strong, secure commercial products. Market-driven self-interest does not result in altruistic product decisions. Commercial and government acquisitions with tight deadlines are the wrong place to expect broad-reaching initiatives for the common good. Government must champion selfless acts of security separately from acquisitions. 3 NSA Has the Mission: In 1981, the Computer Security Evaluation Center was established at the National Security Agency (NSA). It was charged with publishing technical standards for evaluating trusted computer systems, with evaluating commercial and GOTS products, and with maintaining an Evaluated Products List (EPL) of products successfully evaluated.
  • Trusted Computer System Evaluation Criteria
    DoD 5200.28-STD. Supersedes CSC-STD-001-83, dtd 15 Aug 83. Library No. S225,711. DEPARTMENT OF DEFENSE STANDARD: DEPARTMENT OF DEFENSE TRUSTED COMPUTER SYSTEM EVALUATION CRITERIA, DECEMBER 1985. December 26, 1985. FOREWORD: This publication, DoD 5200.28-STD, "Department of Defense Trusted Computer System Evaluation Criteria," is issued under the authority of and in accordance with DoD Directive 5200.28, "Security Requirements for Automatic Data Processing (ADP) Systems," and in furtherance of responsibilities assigned by DoD Directive 5215.1, "Computer Security Evaluation Center." Its purpose is to provide technical hardware/firmware/software security criteria and associated technical evaluation methodologies in support of the overall ADP system security policy, evaluation and approval/accreditation responsibilities promulgated by DoD Directive 5200.28. The provisions of this document apply to the Office of the Secretary of Defense (OSD), the Military Departments, the Organization of the Joint Chiefs of Staff, the Unified and Specified Commands, the Defense Agencies and activities administratively supported by OSD (hereafter called "DoD Components"). This publication is effective immediately and is mandatory for use by all DoD Components in carrying out ADP system technical security evaluation activities applicable to the processing and storage of classified and other sensitive DoD information and applications as set forth herein. Recommendations for revisions to this publication are encouraged and will be reviewed biannually by the National Computer Security Center through a formal review process. Address all proposals for revision through appropriate channels to: National Computer Security Center, Attention: Chief, Computer Security Standards. DoD Components may obtain copies of this publication through their own publications channels.
  • A Survey of Security Research for Operating Systems∗
    A Survey of Security Research for Operating Systems. Masaki HASHIMOTO. Abstract: In recent years, information systems have become the social infrastructure, so their security must be improved urgently. In this paper, the results of a survey of virtualization technologies, operating system verification technologies, and access control technologies are introduced, in association with the design requirements of the reference monitor introduced in the Anderson report. Furthermore, the prospects and challenges for each of these technologies are shown. 1 Introduction: In recent years, information systems have become the social infrastructure, so improving their security has been an important issue for the public. Besides the fact that each security incident has a much greater impact on our social life than before, the number of security incidents is increasing every year because of the complexity of information systems, their wide application, and the explosive growth in the number of nodes connected to the Internet. Since enhancing information security requires drastic measures concerning technologies, management, legislation, and ethics, a large number of research efforts are conducted throughout the world. Focusing especially on technologies, a wide variety of research is carried out on cryptography, intrusion detection, authentication, forensics, and so forth, on the assumption that the working basis of these mechanisms is safe and sound. That basis is, in brief, the operating system, and the various security enhancements running on it are completely useless if it is vulnerable and unsafe. Additionally, even if the operating system is safe, what type of security enhancements it should provide to upper layers is an urgent issue.
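The survey is organised around the Anderson report's reference monitor requirements: the monitor must mediate every access, be tamperproof, and be small enough to analyse and verify. The toy Python sketch below illustrates only the first requirement, complete mediation; the names are invented for illustration and it is not drawn from the survey itself.

```python
# Anderson report reference monitor requirements, roughly:
#   1. it mediates every access (complete mediation),
#   2. it is tamperproof,
#   3. it is small enough to be analysed and verified.
# This toy models only the first point: objects are reachable only
# through the monitor, never directly.


class ReferenceMonitor:
    def __init__(self, policy):
        self._policy = policy          # set of (subject, object, permission)
        self._objects = {}             # object store, reachable only via the monitor

    def register(self, name, value):
        self._objects[name] = value

    def access(self, subject, name, perm):
        """Every access decision is funnelled through this single point."""
        if (subject, name, perm) not in self._policy:
            raise PermissionError(f"{subject} may not {perm} {name}")
        return self._objects[name] if perm == "read" else None


if __name__ == "__main__":
    rm = ReferenceMonitor(policy={("alice", "salaries", "read")})
    rm.register("salaries", {"alice": 100})
    print(rm.access("alice", "salaries", "read"))
    try:
        rm.access("bob", "salaries", "read")
    except PermissionError as e:
        print(e)
```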
  • An Introduction to Computer Security: The NIST Handbook
    NIST Special Publication 800-12: An Introduction to Computer Security: The NIST Handbook. Barbara Guttman and Edward A. Roback. U.S. Department of Commerce, Technology Administration, National Institute of Standards and Technology. [Cover figure: the handbook's topic areas, including contingency planning, assurance, user issues, identification and authentication (I&A), personnel/training, access controls, risk management, audit, planning, cryptography, support, program management, physical security, threats, policy and security operations.] The National Institute of Standards and Technology was established in 1988 by Congress to "assist industry in the development of technology . . . needed to improve product quality, to modernize manufacturing processes, to ensure product reliability . . . and to facilitate rapid commercialization ... of products based on new scientific discoveries." NIST, originally founded as the National Bureau of Standards in 1901, works to strengthen U.S. industry's competitiveness; advance science and engineering; and improve public health, safety, and the environment. One of the agency's basic functions is to develop, maintain, and retain custody of the national standards of measurement, and provide the means and methods for comparing standards used in science, engineering, manufacturing, commerce, industry, and education with the standards adopted or recognized by the Federal Government. As an agency of the U.S. Commerce Department's Technology Administration, NIST conducts basic and applied research in the physical sciences and engineering, and develops measurement techniques, test methods, standards, and related services. The Institute does generic and precompetitive work on new and advanced technologies. NIST's research facilities are located at Gaithersburg, MD 20899, and at Boulder, CO 80303.
  • Expanding Malware Defense by Securing Software Installations*
    Expanding Malware Defense by Securing Software Installations. Weiqing Sun¹, R. Sekar¹, Zhenkai Liang² and V.N. Venkatakrishnan³. ¹Department of Computer Science, Stony Brook University. ²Department of Computer Science, Carnegie Mellon University. ³Department of Computer Science, University of Illinois, Chicago. Abstract: Software installation provides an attractive entry vector for malware: since installations are performed with administrator privileges, malware can easily get the enhanced level of access needed to install backdoors, spyware, rootkits, or "bot" software, and to hide these installations from users. Previous research has been focused mainly on securing the execution phase of untrusted software, while largely ignoring the safety of installations. Even security-enhanced operating systems such as SELinux and Vista don't usually impose restrictions during software installs, expecting the system administrator to "know what she is doing." This paper addresses this "gap in armor" by securing software installations. Our technique can support a diversity of package managers and software installers. It is based on a framework that simplifies the development and enforcement of policies that govern the safety of installations. We present a simple policy that can be used to prevent untrusted software from modifying any of the files used by benign software packages, thus blocking the most common mechanism used by malware to ensure that it is run automatically after each system reboot. While the scope of our technique is limited to the installation phase, it can easily be combined with approaches for secure execution, e.g., by ensuring that all future runs of an untrusted package will take place within an administrator-specified sandbox.
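The excerpt describes an install-time policy that stops an untrusted package from modifying files belonging to benign packages, the usual way malware arranges to run again after a reboot. A minimal Python sketch of such a write check follows; the paths and the extra autostart rule are hypothetical, and this is not the paper's actual framework.

```python
import os

# Files already owned by benign, trusted packages; in a real system this
# would come from the package manager's database.
BENIGN_PACKAGE_FILES = {
    "/etc/init.d/sshd",
    "/usr/bin/ls",
    "/lib/libc.so.6",
}

# Directories commonly used to get code run automatically after reboot,
# the persistence mechanism the policy aims to block for untrusted installs.
AUTOSTART_DIRS = ("/etc/init.d", "/etc/cron.d")


def may_install_write(path: str, trusted_package: bool) -> bool:
    """Install-time write mediation: untrusted installers may not modify
    files of benign packages, nor drop files into autostart locations."""
    if trusted_package:
        return True
    if path in BENIGN_PACKAGE_FILES:
        return False
    if os.path.dirname(path) in AUTOSTART_DIRS:
        return False
    return True


if __name__ == "__main__":
    print(may_install_write("/opt/newapp/bin/app", trusted_package=False))   # True
    print(may_install_write("/usr/bin/ls", trusted_package=False))           # False
    print(may_install_write("/etc/cron.d/backdoor", trusted_package=False))  # False
```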
  • Vid6014-Vr-0001 Draft
    National Information Assurance Partnership, Common Criteria Evaluation and Validation Scheme. Validation Report: BAE Systems Information Technology, Inc. XTS-400 Version 6.4.U4. Report Number: CCEVS-VR-VID10293-2008. Dated: July 3, 2008. Version: 2.3. National Institute of Standards and Technology, Information Technology Laboratory, 100 Bureau Drive, Gaithersburg, Maryland 20878. National Security Agency, Information Assurance Directorate, 9600 Savage Road Suite 6757, Fort George G. Meade, MD 20755-6740. Acknowledgements: The TOE evaluation was sponsored by BAE Systems Information Technology Inc. Evaluation personnel: Arca Common Criteria Testing Laboratory - Ms. Diann Carpenter, Mr. J. David Thompson, Dr. Gary Grainger, Mr. John Boone, Ms. Louise Huang, Mr. Ray Rugen, Mr. Ken Dill; The National Security Agency. Validation personnel: Dr. Jerome Myers, The Aerospace Corporation; Mr. Daniel Faigin, The Aerospace Corporation. Table of Contents: 1 Executive Summary; 2 Identification; 3 Security Policy; 3.1 Identification and Authentication Policy; 3.2 Mandatory Access Control Policy.
  • FreeBSD Advanced Security Features
    FreeBSD Advanced Security Features. Robert N. M. Watson, Security Research, Computer Laboratory, University of Cambridge. 19 May 2007. Introduction: Welcome! An introduction to some of the advanced security features in the FreeBSD operating system. Background: this talk introduces a series of access control and audit security features used to manage local security. The features appeared between FreeBSD 4.0 and FreeBSD 6.2 and build on the UNIX security model; to talk about the new security features, we must first understand the FreeBSD security architecture. Post-UNIX security features: securelevels; pluggable authentication modules (OpenPAM); access control lists (ACLs); security event audit; mandatory access control (MAC); IPFW, PF, IPFilter; KAME IPSEC, FAST_IPSEC; crypto library and tools (OpenSSL); resource limits; jails and jail securelevels; GBDE, GELI; 802.11 security. Brief history of the TrustedBSD Project: the TrustedBSD Project was founded in April 2000 with the goal of providing trusted operating system extensions to FreeBSD; DARPA funding began in July 2001, with continuing funding from a variety of government and industry sponsors; the work ranges from the immediately practical to research; while many of these features are production-quality, some are still under development; the scope now also includes Apple's Mac OS X. FreeBSD security architecture: FreeBSD's security architecture is the UNIX security architecture - an entirely trusted monolithic kernel, the UNIX process model, kernel UIDs/GIDs driven by user-space user mode, a privileged root user, and various forms of access control (permissions, ...). The security features discussed here extend this security model in a number of ways. [Slide: "Kernel and User Processes" - diagram of user processes interacting with the kernel through system calls, the file system, and inter-process communication.]
  • CS526: Information Security
    Cristina Nita-Rotaru. CS526: Information Security. Trusted Computing Base; Orange Book; Common Criteria. Related readings for this lecture: Wikipedia entries on the trusted computing base, TCSEC, the Common Criteria, and Evaluation Assurance Levels. Trusted vs. trustworthy: a component of a system is trusted means that the security of the system depends on it - if the component is insecure, so is the system - and this is determined by its role in the system. A component is trustworthy means that the component deserves to be trusted, e.g., it is implemented correctly; this is determined by intrinsic properties of the component. "Trusted Operating System" is actually a misnomer. Trusted Computing Base: a trusted computing base is the set of all hardware, software and procedural components that enforce the security policy; in order to break security, an attacker must subvert one or more of them. What constitutes the conceptual Trusted Computing Base in a Unix/Linux system? The hardware, the kernel, system binaries, system configuration files, etc. Trusted path: a trusted path is a mechanism that provides confidence that the user is communicating with what the user intended to communicate with (typically the TCB), so that attackers can't intercept or modify whatever information is being communicated. It defends against attacks such as fake login programs. Example: Ctrl+Alt+Del for login on Windows. Trusted Computing and the Trusted Platform Module: Trusted Computing means that the computer will consistently behave in specific ways, and those behaviors will be enforced by hardware and software. The Trusted Computing Group, an alliance of Microsoft, Intel, IBM, HP and AMD, promotes a standard for a 'more secure' PC.
  • Cybersecurity and Domestic Surveillance or Why Trusted Computing Shouldn't Be: Agency, Trust and the Future of Computing
    Cybersecurity and Domestic Surveillance, or Why Trusted Computing Shouldn't Be: Agency, Trust and the Future of Computing. Douglas Thomas, Annenberg School for Communication, University of Southern California. As you begin playing the latest MP3 file downloaded from the Internet, your computer flashes an ominous warning. A few seconds later your computer shuts down and you are unable to turn it back on. The work of malicious hackers? The result of the latest virus? The work of cyber-terrorists disrupting the Internet? The answer becomes apparent a few days later when you receive a summons to appear in court for federal copyright infringement. Before it stopped working, your now defunct computer forwarded a list of suspicious files to the record companies. Legislation currently under consideration by Congress would make such a scenario not only likely but also perfectly legal. At the same time, computer chip manufacturers, software producers and record company executives are hard at work bringing products to market to make it a reality. While many of our worst fears about the future of the Internet circulate around notions of hackers, viruses and cyber-terrorists, the real threat to our individual freedoms and liberties is grounded in something much closer to home: Digital Rights Management, information security, and domestic surveillance. As the number of cameras, recording devices, and monitoring apparatuses grows larger each year, the public is offered cataclysmic scenarios of financial ruin and devastating loss of life as the result of a few mouse clicks or carefully written code by cyber-terrorists. In reality such threats are nearly non-existent. Why then are stories of cyber-terrorists using the Internet to attack American power plants, airlines, hospitals and food services so widespread? The answer has more to do with record companies, Walt Disney, and corporate databases than we might possibly imagine.