Emergent Neural Networks Simulation System
16 pages · PDF · 1020 KB
Recommended publications
- Detecting Data Theft Using Stochastic Forensics

  Jonathan Grier*, Vesaria, LLC, United States. Digital Investigation 8 (2011) S71–S77. Available at www.sciencedirect.com; journal homepage: www.elsevier.com/locate/diin.

  Abstract
  We present a method to examine a filesystem and determine if and when files were copied from it. We develop this method by stochastically modeling filesystem behavior under both routine activity and copying, and identifying emergent patterns in MAC timestamps unique to copying. These patterns are detectable even months afterwards. We have successfully used this method to investigate data exfiltration in the field. Our method presents a new approach to forensics: by looking for stochastically emergent patterns, we can detect silent activities that lack artifacts.

  Keywords: data theft; stochastic forensics; data breach; data exfiltration; filesystem forensics; MAC times; forensics of emergent properties.

  © 2011 Grier. Published by Elsevier Ltd. All rights reserved.

  1. Background
  Theft of corporate proprietary information, according to the FBI and CSI, has repeatedly been the most financially harmful category of computer crime (CSI and FBI, 2003). Insider data theft is especially difficult to detect, since the thief often has the technical authority to access the information (Yu and Chiueh, 2004; Hillstrom and Hillstrom, 2002). Frustratingly, despite the need, no reliable method of forensically determining if files have been copied has been developed (Carvey, 2009, p.
  3), simulated trials (Section 4), its mathematical basis (Section 5), and usage in the field (Section 6).

  2. Can we use MAC timestamps?
  Farmer and Venema's seminal work (Farmer, 2000; Venema, 2000; Farmer and Venema, 2004) describes reconstructing system activity via MAC timestamps. MAC timestamps are filesystem metadata which record a file's most recent Modi-
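The abstract's core observation is that bulk copying reads every file under a folder in quick succession, so access timestamps cluster in a way routine use rarely produces. As a loose, hypothetical illustration of that idea only (not Grier's actual stochastic model; the window width and the helper names are assumptions made for this sketch), a scan could look like:

```python
import os
from pathlib import Path

def access_times(root):
    """Collect (path, atime) pairs for every regular file under root."""
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            try:
                pairs.append((path, path.stat().st_atime))
            except OSError:
                continue  # unreadable or vanished entry; skip it
    return pairs

def clustered_fraction(pairs, window_seconds=300.0):
    """Fraction of files whose access times fall inside the densest
    window of the given width. Bulk copying tends to touch every file
    in quick succession; routine use usually does not."""
    if not pairs:
        return 0.0
    times = sorted(t for _path, t in pairs)
    best = 0
    lo = 0
    for hi in range(len(times)):  # classic sliding-window maximum
        while times[hi] - times[lo] > window_seconds:
            lo += 1
        best = max(best, hi - lo + 1)
    return best / len(times)
```

A high clustered fraction across a whole tree would only be a signal for further investigation; the paper models filesystem behavior statistically rather than applying a fixed window, and access timestamps can be suppressed by mount options such as `noatime`.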
- TOSM Server Backup Service Memorandum of Understanding

  The department of Technology Operations and Systems Management (TOSM) provides its customers with various IT-related services, including backups. This document describes the TOSM Server Backup service, which is available to Texas Tech University (TTU) and Texas Tech University System (TTUS) departments and colleges, and serves as a backup Memorandum of Understanding (MOU) between TOSM and its Customers.

  1.0 Overview
  This MOU is between TOSM, the service provider, and the Customers of this service, the departments and colleges of TTU and TTUS. This document outlines the details of the TOSM Server Backup service as well as the roles, responsibilities, and expectations of both parties, and provides a framework for problem resolution and communication.

  2.0 TOSM Standard Server Backup Service
  2.1 Service Description: The TOSM Standard Server Backup service creates copies of designated data residing on servers in an alternate location physically separate from the primary data location, so as to protect the data from accidental loss. Designated data refers to all data not explicitly excluded by the Customer. Exclusions to Standard Server Backups are addressed in Section 3.0 and Section 5.4 of this document. Unless otherwise explicitly specified, all Standard Server Backups performed by TOSM are backed up to an offsite location so as to provide data recovery capabilities in the event of a disaster that renders the primary data center unusable.

  3.0 Backup Exclusions
  3.1 Removable media, such as external hard drives (e.g., USB, eSATA) and thumb drives, will not be backed up.
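Section 2.1's rule, that designated data is everything not explicitly excluded, amounts to a default-include filter. A minimal sketch of that logic, with hypothetical glob patterns standing in for a Customer's actual exclusion list and the removable-media rule of Section 3.1:

```python
import fnmatch

# Hypothetical exclusion patterns for illustration only; a real
# deployment would use the Customer's own list (Sections 3.0 and 5.4).
EXCLUDE_PATTERNS = [
    "/media/*",   # removable media mount points (USB, eSATA, thumb drives)
    "/mnt/usb*",
    "*.tmp",      # scratch files a Customer might choose to exclude
]

def is_designated(path, exclude_patterns=EXCLUDE_PATTERNS):
    """Designated data is everything NOT matched by an exclusion pattern."""
    return not any(fnmatch.fnmatch(path, pat) for pat in exclude_patterns)

def designated_paths(paths, exclude_patterns=EXCLUDE_PATTERNS):
    """Filter a candidate path list down to what the backup should cover."""
    return [p for p in paths if is_designated(p, exclude_patterns)]
```

The design choice mirrors the MOU: backup coverage defaults to everything, so an omission in the exclusion list errs toward protecting data rather than silently skipping it.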
- Critical Security Patch Management for Nuclear Real Time Systems

  Dave Hinrichs, Exelon Corporation

  Abstract
  Until fairly recently, the issue of operating system patch management was relegated to traditional business computers: those on the company's local and wide area networks running a version of Microsoft Windows. The SCADA and real time computers were either on isolated networks, on networks with limited access from the company's local and wide area networks, or in some cases simply not included because it was too hard. With the advent of recent regulatory requirements and the increased use of Microsoft Windows as the operating system for real time computers, patch management for real time computers is an issue that must be addressed. Nuclear real time systems are typically engineered systems, in that they are under engineering configuration control. Support from the application vendors is particularly important for these systems so that engineering design concerns can be properly addressed. This paper describes the patch management strategies for Exelon Nuclear real time systems and how those strategies were developed based on regulatory requirements and historical events with the company's real time systems. An example of one application vendor's support for providing this information is also shown.

  Introduction
  Exelon has the usual mix of operating systems in our Nuclear and Energy Delivery real time systems. At the time most of these systems were installed, cyber security, security patch management, and virus/Trojan threats were not a consideration. The operating systems were set up per the vendor recommendations and put into production. Reasonable system administration practices were followed to maintain them.
- A Multi-Disciplinary Perspective on Emergent and Future Innovations in Peer Review

  F1000Research 2015, draft article (pre-submission).

  Jonathan P. Tennant*1, Jonathan M. Dugan2, Daniel Graziotin3, Damien C. Jacques4, François Waldner4, Daniel Mietchen5, Yehia Elkhatib6, Lauren B. Collister7, Christina K. Pikas8, Tom Crick9, Paola Masuzzo10, Anthony Caravaggi11, Devin R. Berg12, Kyle E. Niemeyer13, Tony Ross-Hellauer14, Sara Mannheimer15, Lillian Rigling16, Daniel S. Katz17, Bastian Greshake18, Josmel Pacheco-Mendoza19, Nazeefa Fatima20, Marta Poblet21, Marios Isaakidis22, Dasapta Erwin Irawan23, Sébastien Renaut24, Christopher R. Madan25, Lisa Matthias26, Jesper Nørgaard Kjær27, Daniel Paul O'Donnell28, Cameron Neylon29, Sarah Kearns30, Manojkumar Selvaraju31, and Julien Colomb32

  1 [email protected]; Imperial College London, London, UK, ORCID: 0000-0001-7794-0218; ScienceOpen, Berlin, Germany (*corresponding author)
  2 Berkeley Institute for Data Science, University of California, Berkeley, CA, USA, ORCID: 0000-0001-8525-6221
  3 Institute of Software Technology, University of Stuttgart, Stuttgart, Germany; ORCID: 0000-0002-9107-7681
  4 Earth and Life Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium; ORCID: 0000-0002-9069-4143 and 0000-0002-5599-7456
  5 Data Science Institute, University of Virginia, United States of America; ORCID: 0000-0001-9488-1870
  6 School of Computing and Communications, Lancaster University, Lancaster, UK; ORCID: 0000-0003-4639-436X
  7 University Library System, University of Pittsburgh, Pittsburgh,
- How Open Is Open Enough? Melding Proprietary and Open Source Platform Strategies

  Joel West, College of Business, San José State University, One Washington Square, San José, CA 95192-0070 USA. [email protected]. December 31, 2002. Forthcoming in Research Policy special issue on "Open Source Software Development" (Eric von Hippel and Georg von Krogh, editors).

  Recommended citation: Joel West. "How open is open enough? Melding proprietary and open source platform strategies." Research Policy (2003): 1259–1285. doi:10.1016/S0048-7333(03)00052-0

  Abstract
  Computer platforms provide an integrated architecture of hardware and software standards as a basis for developing complementary assets. The most successful platforms were owned by proprietary sponsors that controlled platform evolution and appropriated associated rewards. Responding to the Internet and open source systems, three traditional vendors of proprietary platforms experimented with hybrid strategies which attempted to combine the advantages of open source software while retaining control and differentiation. Such hybrid standards strategies reflect the competing imperatives for adoption and appropriability, and suggest the conditions under which such strategies may be preferable to either the purely open or purely proprietary alternatives.