The Importance of Trust in Computer Security
Christian Jensen
Total pages: 16. File type: PDF, size: 1,020 KB.
Recommended publications
Analyzing Cyber Trends in Online Financial Frauds Using Digital Forensics Techniques (Simran Koul, Yash Raj, Simriti Koul)
International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume-9, Issue-9, July 2020. Analyzing Cyber Trends in Online Financial Frauds using Digital Forensics Techniques. Simran Koul, Yash Raj, Simriti Koul.
Abstract: Online financial frauds are one of the leading issues in the fields of digital forensics and cyber-security today. Various online firms have been employing several methodologies for the prevention of finance-related malpractices. This domain of criminal activity is becoming increasingly common in the present cyberspace. In this paper, we will try to implement an online financial fraud investigation using the digital forensics tool Autopsy. A few existing cyber-security techniques for the investigation of such crimes, namely Formal Concept Analysis and Confirmatory Factor Analysis, have been analyzed and reviewed. These techniques are primarily based on mathematical cyber-security concepts. Henceforth, we try to find out whether the investigation of similar crimes can be done satisfactorily using the readily accessible digital forensics tool Autopsy.
Online frauds refer to the usage of Internet services or other open-source software requiring Internet access to frame users or to otherwise take advantage of them. Finance-related flaws are becoming quite commonplace today. The most common types of online financial frauds include the following. Phishing: the fraudsters acquire users' sensitive data such as passwords and credit card credentials through email messages, fraud websites, and phone calls. Card skimming: this crime involves the illegal extraction of the user's sensitive financial details on the magnetic stripe from ATMs, debit, and credit cards. This is usually done by the installation of malware on the card reader used by the victim.
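The excerpt above describes phishing only in general terms. As a purely illustrative aside (not a technique from the paper, which relies on the Autopsy forensics tool and on Formal Concept Analysis / Confirmatory Factor Analysis), the sketch below shows the kind of simple URL heuristics a fraud-awareness tool might apply; the keywords and thresholds are arbitrary assumptions.

```python
import re
from urllib.parse import urlparse

# Hypothetical, simplified heuristics for flagging suspicious URLs.
SUSPICIOUS_KEYWORDS = {"login", "verify", "update", "secure", "account", "bank"}

def phishing_indicators(url: str) -> list[str]:
    """Return a list of heuristic warning signs found in the given URL."""
    indicators = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    # Raw IP addresses instead of domain names are a common phishing sign.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        indicators.append("hostname is a raw IP address")

    # The '@' trick hides the real destination behind a fake prefix.
    if "@" in url:
        indicators.append("URL contains '@'")

    # Deep subdomain nesting (e.g. paypal.com.example.net) imitates a trusted brand.
    if host.count(".") >= 3:
        indicators.append("unusually deep subdomain nesting")

    # Credential-related keywords in the path often appear on phishing pages.
    path_words = set(re.findall(r"[a-z]+", parsed.path.lower()))
    hits = SUSPICIOUS_KEYWORDS & path_words
    if hits:
        indicators.append(f"sensitive keywords in path: {sorted(hits)}")

    return indicators

if __name__ == "__main__":
    print(phishing_indicators("http://192.168.0.10/secure/login/update.html"))
```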
An Introduction to Computer Security: The NIST Handbook (NIST Special Publication 800-12)
National Institute of Standards and Technology, Technology Administration, U.S. Department of Commerce. An Introduction to Computer Security: The NIST Handbook, Special Publication 800-12. [Cover figure: the handbook's topic areas, including assurance, identification and authentication (I&A), contingency planning, user and personnel issues, training, access controls, risk management, cryptography, audit, planning, management support, physical security, program management, threats, policy, and operations security.]
Table of Contents. I. Introduction and Overview. Chapter 1, Introduction: 1.1 Purpose; 1.2 Intended Audience; 1.3 Organization; 1.4 Important Terminology; 1.5 Legal Foundation for Federal Computer Security Programs. Chapter 2, Elements of Computer Security: 2.1 Computer Security Supports the Mission of the Organization; 2.2 Computer Security is an Integral Element of Sound Management; 2.3 Computer Security Should Be Cost-Effective; 2.4 Computer Security Responsibilities and Accountability Should Be Made Explicit; 2.5 Systems Owners Have Security Responsibilities Outside Their Own Organizations; 2.6 Computer Security Requires a Comprehensive and Integrated Approach; 2.7 Computer Security Should Be Periodically Reassessed; 2.8 Computer Security is Constrained by Societal Factors.
Cybersecurity & Computing Innovations
Cybersecurity & Computing Innovations: Notes. In this lesson: Online Security; Legal & Ethical Concerns; Computing Innovations.
Online Security:
● Personally identifiable information (PII) is information about an individual that identifies, links, relates to, or describes them.
● Examples of PII include:
  ○ Social Security number
  ○ Age
  ○ Race
  ○ Phone numbers
  ○ Medical information
  ○ Financial information
  ○ Biometric data (fingerprint and retinal scans)
● Search engines can record and maintain a history of searches made by users.
  ○ Search engines can use search history to suggest websites or for targeted marketing.
● Websites can record and maintain a history of individuals who have viewed their pages.
  ○ Devices, websites, and networks can collect information about a user's location.
● Technology enables the collection, use, and exploitation of information about, by, and for individuals, groups, and institutions.
● Personal data such as geolocation, cookies, and browsing history can be aggregated to create knowledge about an individual (see the sketch after these notes).
● Personally identifiable information and other information placed online can be used to enhance a user's experience.
● Personally identifiable information stored online can be used to simplify making online purchases.
● Commercial and government curation (collection) of information may be exploited if privacy and other protections are ignored.
● Information placed online can be used in ways that were not intended and that may have a harmful effect.
  ○ For example, an email message may be forwarded, tweets can be retweeted, and social media posts can be viewed by potential employers.
● Personally identifiable information can be used to stalk or steal the identity of a person or to aid in the planning of other criminal acts.
● Once information is placed online, it is difficult to delete.
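To make the aggregation point concrete, here is a minimal illustrative sketch (not part of the lesson notes): two independently harmless record sets, joined on a shared cookie identifier, yield a surprisingly revealing profile. All identifiers and values are invented.

```python
from collections import defaultdict

# Toy records a tracker might hold, keyed by a shared cookie ID (all made up).
location_pings = [
    {"cookie": "abc123", "place": "Springfield High School", "hour": 8},
    {"cookie": "abc123", "place": "Elm Street Clinic", "hour": 16},
]
browsing_history = [
    {"cookie": "abc123", "site": "college-applications.example"},
    {"cookie": "abc123", "site": "asthma-treatments.example"},
]

def build_profile(cookie_id: str) -> dict:
    """Join independently collected records on a shared identifier."""
    profile = defaultdict(list)
    for ping in location_pings:
        if ping["cookie"] == cookie_id:
            profile["places"].append((ping["hour"], ping["place"]))
    for visit in browsing_history:
        if visit["cookie"] == cookie_id:
            profile["interests"].append(visit["site"])
    return dict(profile)

# Separately innocuous records combine into sensitive knowledge:
# a likely student with a health concern and college plans.
print(build_profile("abc123"))
```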
Understanding Graph-Based Trust Evaluation in Online Social Networks: Methodologies and Challenges
Understanding Graph-based Trust Evaluation in Online Social Networks: Methodologies and Challenges. Wenjun Jiang, Hunan University; Guojun Wang, Central South University; Md Zakirul Alam Bhuiyan, Temple University; Jie Wu, Temple University.
Online social networks (OSNs) are becoming a popular method of meeting people and keeping in touch with friends. OSNs resort to trust evaluation models and algorithms to improve service quality and enhance user experience. Much research has been done to evaluate trust and predict the trustworthiness of a target, usually from the view of a source. Graph-based approaches make up a major portion of the existing work, in which the trust value is calculated through a trusted graph (or trusted network, web of trust, multiple trust chains). In this paper, we focus on graph-based trust evaluation models in OSNs, particularly in the computer science literature. We first summarize the features of OSNs and the properties of trust. Then, we comparatively review two categories of approaches, graph-simplification based and graph-analogy based, and discuss their individual problems and challenges. We also analyze the common challenges of all graph-based models. To provide an integrated view of trust evaluation, we conduct a brief review of its pre- and post-processes, i.e., the preparation and the validation of trust models, including information collection, performance evaluation, and related applications. Finally, we identify some open challenges that all trust models are facing.
Categories and Subject Descriptors: A.1 [Introductory and Survey]; C.2.4 [Computer-Communication Networks]: Distributed Systems. General Terms: Design, Reliability, Management. Additional Key Words and Phrases: trusted graph, trust evaluation, simplification, analogy, online social networks (OSNs), trust models.
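For readers unfamiliar with trusted graphs, the following minimal sketch illustrates one common graph-simplification idea: direct trust values are multiplied along a path and the strongest path is kept. It is a generic example, not any specific model reviewed in the survey, and the edge values are invented.

```python
# Direct trust each user assigns to users they interacted with (invented values).
trust_edges = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.9},
}

def best_path_trust(source: str, target: str, graph: dict, max_hops: int = 4) -> float:
    """Return the maximum product of edge trusts over simple paths from source to target."""
    best = 0.0
    stack = [(source, 1.0, {source})]
    while stack:
        node, trust_so_far, visited = stack.pop()
        if node == target:
            best = max(best, trust_so_far)
            continue
        if len(visited) > max_hops:
            continue
        for neighbor, edge_trust in graph.get(node, {}).items():
            if neighbor not in visited:
                # Trust decays multiplicatively along the chain of referrals.
                stack.append((neighbor, trust_so_far * edge_trust, visited | {neighbor}))
    return best

print(best_path_trust("alice", "dave", trust_edges))  # 0.9 * 0.8 = 0.72 beats 0.6 * 0.9 = 0.54
```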
Decentralized Reputation Model and Trust Framework: Blockchain and Smart Contracts
IT 18 062, Examensarbete (degree project) 30 hp, December 2018. Decentralized Reputation Model and Trust Framework: Blockchain and Smart Contracts. Sujata Tamang. Faculty of Science and Technology, Department of Information Technology, Uppsala University.
Abstract: Blockchain technology is being researched in diverse domains for its ability to provide distributed, decentralized, and time-stamped transactions. It is characterized by fault tolerance and zero downtime, with methods to ensure records of immutable data such that their modification is computationally infeasible. Trust frameworks and reputation models of an online interaction system are responsible for providing enough information (e.g., in the form of a trust score) to infer the trustworthiness of interacting entities. The risk of failure or probability of success when interacting with an entity relies on the information provided by the reputation system. Thus, it is crucial to have an accurate, reliable, and immutable trust score assigned by the reputation system. The centralized nature of current trust systems, however, leaves such valuable information prone to both external and internal attacks. This master's thesis project, therefore, studies the use of blockchain technology as an infrastructure for an online interaction system that can guarantee a reliable and immutable trust score. It proposes a system of smart contracts that specify the logic for interactions and model trust among pseudonymous identities of the system.
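As a rough illustration of why an append-only, hash-chained record makes a trust score tamper-evident, here is a minimal sketch. It is a simplified stand-in for the general idea, not the thesis's blockchain or smart-contract design; the entry fields and the averaging rule are assumptions.

```python
import hashlib
import json
import time

class ReputationLedger:
    """Append-only ledger of ratings; each entry commits to the previous one."""

    def __init__(self):
        self.entries = []

    def add_rating(self, rater: str, ratee: str, score: float) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"rater": rater, "ratee": ratee, "score": score,
                "timestamp": time.time(), "prev_hash": prev_hash}
        # Hash the entry contents; later entries chain in this digest.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; any modified past entry breaks a link."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    def trust_score(self, ratee: str) -> float:
        scores = [e["score"] for e in self.entries if e["ratee"] == ratee]
        return sum(scores) / len(scores) if scores else 0.0

ledger = ReputationLedger()
ledger.add_rating("alice", "bob", 0.9)
ledger.add_rating("carol", "bob", 0.7)
print(ledger.trust_score("bob"), ledger.verify())
```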
An Introduction to Computer Security: The NIST Handbook (U.S. Department of Commerce)
NIST Special Publication 800-12. An Introduction to Computer Security: The NIST Handbook. Barbara Guttman and Edward A. Roback. U.S. Department of Commerce, Technology Administration, National Institute of Standards and Technology, 1995. [Cover figure: diagram of the handbook's computer security topic areas.]
The National Institute of Standards and Technology was established in 1988 by Congress to "assist industry in the development of technology ... needed to improve product quality, to modernize manufacturing processes, to ensure product reliability ... and to facilitate rapid commercialization ... of products based on new scientific discoveries." NIST, originally founded as the National Bureau of Standards in 1901, works to strengthen U.S. industry's competitiveness; advance science and engineering; and improve public health, safety, and the environment. One of the agency's basic functions is to develop, maintain, and retain custody of the national standards of measurement, and provide the means and methods for comparing standards used in science, engineering, manufacturing, commerce, industry, and education with the standards adopted or recognized by the Federal Government. As an agency of the U.S. Commerce Department's Technology Administration, NIST conducts basic and applied research in the physical sciences and engineering, and develops measurement techniques, test methods, standards, and related services. The Institute does generic and precompetitive work on new and advanced technologies. NIST's research facilities are located at Gaithersburg, MD 20899, and at Boulder, CO 80303.
Maintaining Security and Trust in Large Scale Public Key Infrastructures
Maintaining Security and Trust in Large Scale Public Key Infrastructures. Dissertation approved by the Department of Computer Science of Technische Universität Darmstadt for the degree of Doktor-Ingenieur (Dr.-Ing.), submitted by Dipl. Wirtsch.-Inform. Johannes Braun, born in Herrenberg. Referees: Prof. Dr. Johannes Buchmann and Prof. Dr. Max Mühlhäuser. Date of submission: 16.03.2015; date of oral examination: 30.04.2015. Hochschulkennziffer D 17. Darmstadt, 2015.
List of Publications:
[B1] Johannes Braun. Ubiquitous support of multi path probing: Preventing man in the middle attacks on Internet communication. IEEE Conference on Communications and Network Security (CNS 2014) - Poster Session, pages 510-511, IEEE Computer Society, 2014. Cited on pages 52 and 173.
[B2] Johannes Braun, Florian Volk, Jiska Classen, Johannes Buchmann, and Max Mühlhäuser. CA trust management for the Web PKI. Journal of Computer Security, 22: 913-959, IOS Press, 2014. Cited on pages 9, 66, 89, and 104.
[B3] Johannes Braun, Johannes Buchmann, Ciaran Mullan, and Alex Wiesmaier. Long term confidentiality: a survey. Designs, Codes and Cryptography, 71(3): 459-478, Springer, 2014. Cited on page 161.
[B4] Johannes Braun and Gregor Rynkowski. The potential of an individualized set of trusted CAs: Defending against CA failures in the Web PKI. International Conference on Social Computing (SocialCom) - PASSAT 2013, pages 600-605, IEEE Computer Society, 2013. Extended version: http://eprint.iacr.org/2013/275. Cited on pages 9, 32, and 57.
[B5] Johannes Braun, Florian Volk, Johannes Buchmann, and Max Mühlhäuser. Trust views for the Web PKI. Public Key Infrastructures, Services and Applications - EuroPKI 2013, vol. …
Scalable Blockchains for Computational Trust
A Scalable Blockchain Approach for Trusted Computation and Verifiable Simulation in Multi-Party Collaborations. Ravi Kiran Raman (University of Illinois at Urbana-Champaign and IBM Research); Roman Vaculin, Michael Hind, Sekou L. Remy, Eleftheria K. Pissadaki, Nelson Kibichii Bore, Roozbeh Daneshvar, Biplav Srivastava, and Kush R. Varshney (IBM Research).
Abstract: In high-stakes multi-party policy making based on machine learning and simulation models involving independent computing agents, a notion of trust in results is critical in facilitating transparency, accountability, and collaboration. Using a novel combination of distributed validation of atomic computation blocks and a blockchain-based immutable audit mechanism, this work proposes a framework for distributed trust in computations. In particular we address the scalability problem by reducing the storage and communication costs using a lossy compression scheme. This framework guarantees not only verifiability of final results, but also the validity of local computations, and its cost-benefit tradeoffs are studied using a synthetic example of training a neural network.
In AI, machine learning models with ever increasing complexity are being learned on vast datasets. Lack of access to the training process and the hyperparameters used therein does not only hinder scientific reproducibility, but also removes any basis to trust the reported results. In particular, applications interested in the trained model might not trust the agent performing the training. Other than sanity checks such as validation datasets, there exists no simpler way to guarantee correctness than complete retraining. This is impractical considering the extensive amounts of time and hardware required.
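The following toy sketch illustrates the general idea of chaining lossily compressed computation checkpoints into an audit trail that independent validators could compare against their own re-execution. It assumes a simple rounding-based quantizer; it is not the paper's protocol, and the quantization step and checkpoint format are invented for illustration.

```python
import hashlib
import struct

def quantize(weights: list[float], step: float = 0.01) -> bytes:
    """Lossy-compress weights by rounding to a fixed grid before hashing."""
    rounded = [round(w / step) for w in weights]
    return struct.pack(f"{len(rounded)}i", *rounded)

def audit_chain(checkpoints: list[list[float]]) -> list[str]:
    """Hash each quantized checkpoint, chaining in the previous digest."""
    digests, prev = [], b""
    for weights in checkpoints:
        digest = hashlib.sha256(prev + quantize(weights)).hexdigest()
        digests.append(digest)
        prev = bytes.fromhex(digest)
    return digests

# Two runs that agree up to the quantization step produce identical audit trails,
# so small numerical noise does not break verification, while larger deviations do.
run_a = [[0.1234, -0.5678], [0.2345, -0.4567]]
run_b = [[0.1236, -0.5679], [0.2344, -0.4569]]
print(audit_chain(run_a) == audit_chain(run_b))  # True for step = 0.01
```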
A Survey of Trust in Computer Science and the Semantic Web
To appear in Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 2007. A Survey of Trust in Computer Science and the Semantic Web. Donovan Artz and Yolanda Gil, Information Sciences Institute, University of Southern California, 4676 Admiralty Way, Marina del Rey, CA 90292. March 15, 2007.
Abstract: Trust is an integral component in many kinds of human interaction, allowing people to act under uncertainty and with the risk of negative consequences. For example, exchanging money for a service, giving access to your property, and choosing between conflicting sources of information all may utilize some form of trust. In computer science, trust is a widely used term whose definition differs among researchers and application areas. Trust is an essential component of the vision for the Semantic Web, where both new problems and new applications of trust are being studied. This paper gives an overview of existing trust research in computer science and the Semantic Web.
Keywords: Trust, Web of Trust, Policies, Reputation.
1 Introduction. Trust is a central component of the Semantic Web vision (Berners-Lee 1999; Berners-Lee et al 2001; Berners-Lee et al 2006). The Semantic Web stack (Berners-Lee 2000; Berners-Lee et al 2006) has included all along a trust layer to assimilate the ontology, rules, logic, and proof layers. Trust often refers to mechanisms to verify that the source of information is really who the source claims to be. Signatures and encryption mechanisms should allow any consumer of information to check the sources of that information.
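To illustrate the source-verification point above, here is a minimal sketch of signing and verifying a statement with Ed25519, using the third-party Python `cryptography` package (pip install cryptography). It is a generic example of digital signatures, not an interface defined by the survey.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher signs a statement with its private key...
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
statement = b"The vulnerability affects versions 1.0 through 1.4."
signature = private_key.sign(statement)

# ...and any consumer holding the publisher's public key can check that the
# statement really originates from that source and was not altered in transit.
try:
    public_key.verify(signature, statement)
    print("signature valid: statement is attributable to the key holder")
except InvalidSignature:
    print("signature invalid: do not trust this statement")
```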
Decentralized Trust Management: Risk Analysis and Trust Aggregation
Decentralized Trust Management: Risk Analysis and Trust Aggregation. Xinxin Fan, Institute of Computing Technology, Chinese Academy of Sciences, China; Ling Liu, School of Computer Science, Georgia Institute of Technology, USA; Rui Zhang, Institute of Information Engineering, Chinese Academy of Sciences, China; Quanliang Jing and Jingping Bi, Institute of Computing Technology, Chinese Academy of Sciences, China.
Decentralized trust management is used as a referral benchmark for assisting decision making by humans or intelligent machines in open collaborative systems. During any given period of time, each participant may only interact with a few other participants. Simply relying on direct trust may frequently resort to random team formation. Thus, trust aggregation becomes critical. It can leverage decentralized trust management to learn about the indirect trust of every participant based on past transaction experiences. This paper presents alternative designs of decentralized trust management and their efficiency and robustness from three perspectives. First, we study the risk factors and adverse effects of six common threat models. Second, we review the representative trust aggregation models and trust metrics. Third, we present an in-depth analysis and comparison of these reference trust aggregation methods with respect to effectiveness and robustness. We show our comparative study results through formal analysis and experimental evaluation. This comprehensive study advances the understanding of adverse effects of present and future threats and the robustness of different trust metrics. It may also serve as a guideline for research and development of next generation trust aggregation algorithms and services in the anticipation of risk factors and mischievous threats.
CCS Concepts: • General and reference → Surveys and overviews; • Security and privacy → Trust frameworks; • Information systems → Collaborative and social computing systems and tools.
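As a simple, generic illustration of trust aggregation (not one of the specific metrics analyzed in the paper), the sketch below weights each rater's feedback about a target by the rater's own current trust, so reports from poorly trusted raters count for less. The default weight and example values are assumptions.

```python
def aggregate_trust(feedback: list[tuple[str, float]],
                    rater_trust: dict[str, float]) -> float:
    """Weighted average of feedback scores, weighted by each rater's trust."""
    weighted_sum = 0.0
    total_weight = 0.0
    for rater, score in feedback:
        weight = rater_trust.get(rater, 0.1)  # low default weight for unknown raters
        weighted_sum += weight * score
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0

# Direct experiences reported about one target participant.
reports = [("alice", 0.9), ("bob", 0.8), ("mallory", 0.1)]
# How much the system currently trusts each rater.
rater_trust = {"alice": 0.9, "bob": 0.7, "mallory": 0.05}

# Mallory's low report barely moves the aggregate because Mallory is barely trusted.
print(round(aggregate_trust(reports, rater_trust), 3))
```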
Smart Card Technology: New Methods for Computer Access Control (NIST Special Publication 500-157)
NIST Special Publication 500-157, Computer Science and Technology series. Smart Card Technology: New Methods for Computer Access Control. Martha E. Haykin and Robert B. J. Warnar, Security Technology Group, Institute for Computer Sciences and Technology, National Institute of Standards and Technology, Gaithersburg, MD 20899. September 1988. NOTE: As of 23 August 1988, the National Bureau of Standards (NBS) became the National Institute of Standards and Technology (NIST) when President Reagan signed into law the Omnibus Trade and Competitiveness Act. U.S. Department of Commerce, C. William Verity, Secretary; National Institute of Standards and Technology (formerly National Bureau of Standards), Ernest Ambler, Director.
Reports on Computer Science and Technology: The National Institute of Standards and Technology has a special responsibility within the Federal Government for computer science and technology activities. The programs of the NIST Institute for Computer Sciences and Technology are designed to provide ADP standards, guidelines, and technical advisory services to improve the effectiveness of computer utilization, and to perform appropriate research and development efforts as foundation for such activities and programs. This publication series will report these NIST efforts to the Federal computer community as well as to interested specialists in the governmental, academic, and private sectors.
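The sketch below illustrates, in very simplified form, the challenge-response flow that smart-card-based access control builds on: the card proves possession of a secret without revealing it. Real cards use dedicated hardware and standardized protocols; here HMAC-SHA256 merely stands in for the card's cryptographic function, and every detail is a toy assumption rather than the publication's design.

```python
import hmac
import hashlib
import os

CARD_SECRET = os.urandom(32)          # personalized into the card at issuance
HOST_COPY_OF_SECRET = CARD_SECRET     # the host security module holds the same secret

def card_respond(challenge: bytes) -> bytes:
    """Computation performed inside the (simulated) smart card."""
    return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes) -> bool:
    """Host recomputes the expected response and compares in constant time."""
    expected = hmac.new(HOST_COPY_OF_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)            # a fresh nonce prevents replaying old responses
response = card_respond(challenge)
print("access granted" if host_verify(challenge, response) else "access denied")
```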
Social Computational Trust Model (SCTM): a Framework to Facilitate the Selection of Partners
Social Computational Trust Model (SCTM): A Framework to Facilitate the Selection of Partners. Ameneh Deljoo (Informatics Institute, Faculty of Science, University of Amsterdam); Tom van Engers (Leibniz Center for Law, University of Amsterdam); Leon Gommans (AirFrance-KLM); Cees de Laat (Informatics Institute, Faculty of Science, University of Amsterdam). Amsterdam, the Netherlands.
Abstract: Creating a cyber security alliance among network domain owners, as a means to minimize security incidents, has gained the interest of practitioners and academics in the last few years. A cyber security alliance, like any membership organization, requires the creation and maintenance of trust among its members, in this case the network domain owners. To promote the disclosure and sharing of cyber security information among the network domain owners, a trust framework is needed. This paper discusses a social computational trust model (SCTM) that helps alliance members to select the right partner to collaborate with and perform collective tasks, and …
… and private organizations is required to arrange technical counter measures. Sharing cyber intelligence among different parties, such as internet & cloud service providers and enterprise networks, becomes increasingly important. Additionally, networks have evolved over time and became more complex and less connected; therefore the protection of such a complex network can often only be guaranteed and financed as a shared effort. One of the benefits of information sharing among the members in an alliance is improving the decision and policy making at the different levels of an organization and facilitating the selection of optimal cyber defense tactics during the attack period.
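As a hedged illustration of how component scores might be combined to rank prospective partners, consider the sketch below. The components (ability, benevolence, integrity) and the weights are assumptions chosen for illustration, not the SCTM definition from the paper.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ability: float       # competence shown in past collaborations
    benevolence: float   # willingness to act in the alliance's interest
    integrity: float     # adherence to agreed rules and policies

# Hypothetical weighting of the components into one composite score.
WEIGHTS = {"ability": 0.4, "benevolence": 0.3, "integrity": 0.3}

def composite_trust(c: Candidate) -> float:
    return (WEIGHTS["ability"] * c.ability
            + WEIGHTS["benevolence"] * c.benevolence
            + WEIGHTS["integrity"] * c.integrity)

candidates = [
    Candidate("domain-A", ability=0.9, benevolence=0.6, integrity=0.8),
    Candidate("domain-B", ability=0.7, benevolence=0.9, integrity=0.9),
]

# Rank prospective partners by composite trust and pick the best one.
best = max(candidates, key=composite_trust)
print(best.name, round(composite_trust(best), 2))
```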