English Arabic Technical Computing Dictionary
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Efficient Support of Position Independence on Non-Volatile Memory
Guoyang Chen (Qualcomm Inc.), Lei Zhang (North Carolina State University, Raleigh, NC), Richa Budhiraja (Qualcomm Inc.), Xipeng Shen (North Carolina State University, Raleigh, NC), Youfeng Wu (Intel Corp.). In Proceedings of MICRO-50, Cambridge, MA, USA, October 14–18, 2017, 13 pages. https://doi.org/10.1145/3123939.3124543

ABSTRACT
This paper explores solutions for enabling efficient support of position independence of pointer-based data structures on byte-addressable Non-Volatile Memory (NVM). When a dynamic data structure (e.g., a linked list) gets loaded from persistent storage into main memory in different executions, the locations of the elements contained in the data structure could differ in the address spaces from one run to another. As a result, some special support must be provided to ensure that the pointers contained in the data structures always point to the correct locations, which is called position independence. This paper shows the insufficiency of traditional methods in supporting position independence on NVM. It proposes a concept called ...

1 INTRODUCTION
Byte-Addressable Non-Volatile Memory (NVM) is a new class of memory technologies, including phase-change memory (PCM), memristors, STT-MRAM, resistive RAM (ReRAM), and even flash-backed DRAM (NV-DIMMs). Unlike on DRAM, data on NVM are durable, despite software crashes or power loss. Compared to existing flash storage, some of these NVM technologies promise 10-100x better performance, and can be accessed via memory instructions rather than I/O operations.
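The problem the abstract describes can be made concrete with a small illustration. The sketch below is not taken from the paper and names such as OffsetPtr are hypothetical; it shows one traditional route to position independence, storing base-relative offsets instead of raw pointers, at the cost of an extra address computation on every dereference. It is this class of conventional support whose limitations the paper examines.

```cpp
#include <cstdint>

// Illustrative sketch only: a base-relative pointer, one traditional way to
// make pointer-based structures position independent on a mapped NVM region.
template <typename T>
struct OffsetPtr {
    std::uint64_t offset;  // byte offset from the start of the mapped region

    // Resolve against wherever the region happens to be mapped in this run.
    T* get(void* region_base) const {
        return reinterpret_cast<T*>(static_cast<char*>(region_base) + offset);
    }
    static OffsetPtr from(void* region_base, T* obj) {
        return { static_cast<std::uint64_t>(
            reinterpret_cast<char*>(obj) - static_cast<char*>(region_base)) };
    }
};

// A persistent singly linked list node that stores offsets rather than raw
// virtual addresses, so it stays valid even if the region moves between runs.
struct Node {
    int value;
    OffsetPtr<Node> next;
};
```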
Rethinking Security
Fighting Known, Unknown and Advanced Threats (kaspersky.com/business)

REAL DANGERS AND THE REPORTED DEMISE OF ANTIVIRUS

"Merchants, he said, are either not running antivirus on the servers managing point-of-sale devices or they're not being updated regularly. The end result in Home Depot's case could be the largest retail data breach in U.S. history, dwarfing even Target." 1
~ Pat Belcher of Invincea

Regardless of its size or industry, your business is in real danger of becoming a victim of cybercrime. This fact is indisputable. Open a newspaper, log onto the Internet, watch TV news or listen to President Obama's recent State of the Union address and you'll hear about another widespread breach. You are not paranoid when you think that your financial data, corporate intelligence and reputation are at risk. They are and it's getting worse. Somewhat more controversial, though, are opinions about the best methods to defend against these perils. The same news sources that deliver frightening stories about costly data breaches question whether or not anti-malware or antivirus (AV) is dead, as reported in these articles from PC World, The Wall Street Journal and Fortune magazine. Reports about the death by irrelevancy of anti-malware technology miss the point. Smart cybersecurity today must include advanced anti-malware at its core. It takes multiple layers of cutting edge technology to form the most effective line of cyberdefense. This eBook explores the features that make AV a critical component of an effective cybersecurity strategy to fight all hazards targeting businesses today — including known, unknown and advanced cyberthreats.
Cyren's 2016 Cyberthreat Report
2016 CYBERTHREAT REPORT
AUTOMATED THREAT INTELLIGENCE: The Key to Preventing, Mitigating, and Identifying Cyber Breaches

Table of Contents
Introduction ... 4
The Cloud Sandbox Array: A New Tool Against Cybercrime ... 6
The Benefits of Big Data ... 12
2016 Predictions ... 14
Malware Newsmakers of 2015 ... 16
The Criminal Power of the Unknown ... 22
2015 Statistics: Android, Phishing, Malware, Spam ... 26

INTRODUCTION
Lior Kohavi, Chief Technical Officer, CYREN, Inc.
There is a false perception that sophisticated attacks are too difficult to prevent and the only alternative is detection. But detection is NOT the new prevention. Cybersecurity professionals must make it their mission to STOP attacks, not just become proficient at detecting them. It's no secret that cybercriminals are willing to spend a lot of time and money to obtain the information they desire. And, the risk that these criminals will be caught and convicted is relatively low. Despite well-publicized botnet takedowns, like that of Darknode this past July, researchers estimate that less than 1% of cybercrimes receive a corresponding conviction.
BCIS 1305 Business Computer Applications
San Jacinto College. This course was developed from generally available open educational resources (OER) in use at multiple institutions, drawing mostly from a primary work curated by the Extended Learning Institute (ELI) at Northern Virginia Community College (NOVA), but also including additional open works from various sources as noted in attributions on each page of materials. Cover Image: "Keyboard" by John Ward from https://flic.kr/p/tFuRZ, licensed under a Creative Commons Attribution License. BCIS 1305 Business Computer Applications by the Extended Learning Institute (ELI) at NOVA is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.

CONTENTS
Module 1: Introduction to Computers
• Reading: File Systems
• Reading: Basic Computer Skills
• Reading: Computer Concepts
• Tutorials: Computer Basics
Module 2: Computer ...
Database : a Toolkit for Constructing Memory Mapped Databases Peter A
Peter A. Buhr, Anil K. Goel and Anderson Wai
Dept. of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1

Abstract
The main objective of this work was an efficient methodology for constructing low-level database tools that are built around a single-level store implemented using memory mapping. The methodology allowed normal programming pointers to be stored directly onto secondary storage, and subsequently retrieved and manipulated by other programs without the need for relocation, pointer swizzling or reading all the data. File structures for a database, e.g. a B-Tree, built using this approach are significantly simpler to build, test, and maintain than traditional file structures. All access methods to the file structure are statically type-safe and file structure definitions can be generic in the type of the record and possibly key(s) stored in the file structure, which affords significant code reuse. An additional design requirement is that multiple file structures may be simultaneously accessible by an application. Concurrency at both the front end (multiple accessors) and the back end (file structure partitioned over multiple disks) are possible. Finally, experimental results between traditional and memory mapped file structures show that performance of a memory mapped file structure is as good or better than the traditional approach.

1 Introduction
The main objective of this work was an efficient methodology for constructing low-level database tools that are built around a single-level store. A single-level store gives the illusion that data on disk (secondary storage) is accessible in the same way as data in main memory (primary storage), which is analogous to the goals of virtual memory.
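The single-level-store idea the abstract relies on can be sketched with ordinary POSIX memory mapping. The example below is only an illustration of the general mechanism (the file name and record layout are invented); it is not the toolkit's actual interface, which builds higher-level, type-safe file structures on top of this mechanism.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

// Minimal sketch of a single-level store: map a file into the address space
// and manipulate records through ordinary pointers; the OS pager moves data
// between primary and secondary storage transparently.
struct Record { int key; char payload[60]; };

int main() {
    int fd = open("records.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    const std::size_t size = 4096;
    if (ftruncate(fd, size) != 0) { perror("ftruncate"); return 1; }

    void* base = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // The mapped region is used exactly like main-memory data.
    Record* records = static_cast<Record*>(base);
    records[0].key = 42;

    msync(base, size, MS_SYNC);   // force the update back to secondary storage
    munmap(base, size);
    close(fd);
    return 0;
}
```

Raw pointers stored inside such a region stay meaningful across runs only if the file is mapped back where those pointers expect it, which is why the toolkit's handling of positioning and relocation matters.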
Malware Primer
Table of Contents
Introduction ... 2
Chapter 1: A Brief History of Malware—Its Evolution and Impact ... 3
Chapter 2: Malware Types and Classifications ... 8
Chapter 3: How Malware Works—Malicious Strategies and Tactics ... 11
Chapter 4: Polymorphic Malware—Real Life Transformers ... 14
Chapter 5: Keyloggers and Other Password Snatching Malware ... 16
Chapter 6: Account and Identity Theft Malware ... 19

Introduction
In The Art of War, Sun Tzu wrote, "If you know the enemy and know yourself, you need not fear the result of a hundred battles." This certainly applies to cyberwarfare. This primer will help you get to know cybercriminals by providing you with a solid foundation in one of their principle weapons: malware. Our objective here is to provide a baseline of knowledge about the different types of malware, what malware is capable of, and how it's distributed. Because effectively protecting your network, users, data, and company from malware-based attacks requires an understanding of the various ways that the enemy is coming at you. Keep in mind, however, that we're only able here ...
NETSURION DEFENSE AGAINST BACKOFF: How Netsurion Effectively Protected Against Threats
Powering Secure and Agile Networks

In the wake of the numerous recent data breaches, many consumers are demanding answers as to the how and why surrounding companies who have inadvertently allowed data to be compromised, given the security measures accessible today. After a breach is confirmed, the process typically involves PCI Forensic Investigators spending time researching and investigating compromised networks, log files, and any other traceable pieces of the system to determine not only how the hackers gained access, but also, once a machine was under their control, how data was removed or retrieved. Investigative reports post-breach have uncovered vast amounts of useful information employed to reactively secure networks going forward. The industry, as a whole, has learned that in many instances the culprit responsible for the data theft is linked to businesses utilizing remote access, or more specifically, insecure remote access. It should come as no surprise, then, that the method of choice for many black hats (i.e., computer hackers) looking to enter a system has been identifying insecure remote access. This method includes several different remote platforms, of which you can read more in the DHS article on Backoff: New Point of Sale Malware. Hackers search for vulnerabilities, and once one is located, it is only a matter of moments before they are able to connect to machines remotely, often gaining administrative privileges in the process. Once they have these privileges, it is quite easy for them to download the Backoff malware onto the machine and begin sending credit card data to their destination of choice.
Exact Positioning of Data Approach to Memory Mapped Persistent Stores: Design, Analysis and Modelling
A thesis by Anil K. Goel, presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science. Waterloo, Ontario, Canada, 1996. © Anil K. Goel 1996.

I hereby declare that I am the sole author of this thesis. I authorize the University of Waterloo to lend this thesis to other institutions or individuals for the purpose of scholarly research. I further authorize the University of Waterloo to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research. The University of Waterloo requires the signatures of all persons using or photocopying this thesis. Please sign below, and give address and date.

Abstract
One of the primary functions of computers is to store information, i.e., to deal with long lived or persistent data. Programmers working with persistent data structures are faced with the problem that there are two, mostly incompatible, views of structured data, namely data in primary and secondary storage. Traditionally, these two views of data have been dealt with independently by researchers in the programming language and database communities. Significant research has occurred over the last decade on efficient and easy-to-use methods for manipulating persistent data structures in a fashion that makes the secondary storage transparent to the programmer. Merging primary and secondary storage in this manner produces a single-level store, which gives the illusion that data on secondary storage is accessible in the same way as data in primary storage.
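The "exact positioning" idea in the title can be illustrated with a small, hedged sketch: if a mapped segment records the base address it originally occupied, and can be mapped there again, then raw pointers stored inside it stay valid with no swizzling at all. The code below is an illustration under that assumption (SegmentHeader and its fields are invented), not the design actually developed in the thesis.

```cpp
#include <sys/mman.h>
#include <cstdint>

// Hypothetical segment header: records where the segment was first mapped.
struct SegmentHeader { std::uint64_t original_base; std::uint64_t length; };

// Try to map the file-backed segment back at its original address so that
// raw pointers stored inside it can be used without any translation.
void* remap_segment(int fd, const SegmentHeader& hdr) {
    void* wanted = reinterpret_cast<void*>(hdr.original_base);
    void* got = mmap(wanted, hdr.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (got == MAP_FAILED) return nullptr;
    if (got != wanted) {
        // The address-space slot was taken; a real system must either reserve
        // the region up front or fall back to relocating/swizzling pointers.
        munmap(got, hdr.length);
        return nullptr;
    }
    return got;  // pointers into [wanted, wanted + length) are usable as-is
}
```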
Q3 Internet Threat Trends Report
INTERNET THREATS TREND REPORT, OCTOBER 2014

THE STATE OF CORPORATE SECURITY
Lior Kohavi, Chief Technical Officer, CYREN, Inc.

The Companies—Home Depot, JP Morgan Chase, Target
The Information—56 million credit cards, 76 million 'households,' 7 million small businesses, and 110 million accounts
The Impact—According to recent reports, Home Depot estimates that investigation, credit monitoring, call center, and other costs could top $62 million. Target's stock fell by almost 14% in the months following news of the breach, with profits down 46% by the end of Q4 2013, and breach-related expenses totaling $146 million. The impact of the JP Morgan Chase data breach has yet to be determined, but early reports suggest the costs could top that of Target.

These are just a few of the big names that have hit the headlines recently. But, it's not just large organizations being hacked. Regional small- to medium-sized businesses (SMBs) are potentially even greater targets, as hackers view SMB security as less sophisticated and easy to breach. According to a 2013 Verizon report, which analyzed 47,000 security incidents, 40% of all confirmed breaches involved companies with fewer than 1,000 employees; more notably, the largest single segment of corporate breach victims among companies with less than 1,000 employees were companies with fewer than 100 employees. When you consider that an estimated 60% of all companies experiencing a cybersecurity breach go out of business within six months of the attack, it is a wonder that all corporations aren't making cybersecurity a higher priority. Yet, the level of corporate complacency with regard to cybersecurity remains high.
Leanstore: In-Memory Data Management Beyond Main Memory
Viktor Leis, Michael Haubenschild*, Alfons Kemper, Thomas Neumann
Technische Universität München; Tableau Software*

Abstract—Disk-based database systems use buffer managers in order to transparently manage data sets larger than main memory. This traditional approach is effective at minimizing the number of I/O operations, but is also the major source of overhead in comparison with in-memory systems. To avoid this overhead, in-memory database systems therefore abandon buffer management altogether, which makes handling data sets larger than main memory very difficult. In this work, we revisit this fundamental dichotomy and design a novel storage manager that is optimized for modern hardware. Our evaluation, which is based on TPC-C and micro benchmarks, shows that our approach has little overhead in comparison with a pure in-memory system when all data resides in main memory. At the same time, like a traditional buffer manager, it is fully transparent and can manage very large data sets effectively. Furthermore, due to low-overhead synchronization, our implementation is also highly scalable on multi-core CPUs.

[Fig. 1. Single-threaded in-memory TPC-C performance (100 warehouses).]

Two representative proposals for efficiently managing larger-than-RAM data sets in main-memory systems are Anti-Caching [7] and Siberia [8], [9], [10]. In comparison with a traditional buffer manager, these approaches exhibit one major weakness: They are not capable of maintaining a replacement strategy over relational and index data. Either the indexes, ...
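For readers unfamiliar with the dichotomy the abstract describes, the sketch below shows the classic buffer-manager interface that disk-based systems use: every page access is funneled through pin/unpin calls and a translation table. It is a generic illustration, not LeanStore's design; the names and the hash-table organization are assumptions made for brevity.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

using PageId = std::uint64_t;
constexpr std::size_t kPageSize = 4096;

// An in-memory frame holding one page plus bookkeeping state.
struct Frame {
    std::vector<char> data = std::vector<char>(kPageSize);
    int pins = 0;
    bool dirty = false;
};

// Sketch of a traditional buffer manager: the translation table is what makes
// larger-than-RAM data transparent, and also a major source of overhead.
class BufferManager {
    std::unordered_map<PageId, Frame> table_;  // page id -> resident frame
public:
    // Pin: ensure the page is resident and cannot be evicted while in use.
    Frame& pin(PageId id) {
        auto it = table_.find(id);
        if (it == table_.end()) {
            // Miss: a real system would pick a victim, write it back if dirty,
            // and read the requested page from disk into the freed frame.
            it = table_.emplace(id, Frame{}).first;
        }
        it->second.pins++;
        return it->second;
    }
    void unpin(PageId id, bool dirtied) {
        Frame& f = table_.at(id);
        f.pins--;
        f.dirty = f.dirty || dirtied;
    }
};
```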
A Survey of B-Tree Logging and Recovery Techniques
GOETZ GRAEFE, Hewlett-Packard Laboratories

B-trees have been ubiquitous in database management systems for several decades, and they serve in many other storage systems as well. Their basic structure and their basic operations are well understood, including search, insertion, and deletion. However, implementation of transactional guarantees such as all-or-nothing failure atomicity and durability in spite of media and system failures seems to be difficult. High-performance techniques such as pseudo-deleted records, allocation-only logging, and transaction processing during crash recovery are widely used in commercial B-tree implementations but not widely understood. This survey collects many of these techniques as a reference for students, researchers, system architects, and software developers. Central in this discussion are physical data independence, separation of logical database contents and physical representation, and the concepts of user transactions and system transactions. Many of the techniques discussed are applicable beyond B-trees.

Categories and Subject Descriptors: H.2.2 [Database Management]: Physical Design—Recovery and restart; H.2.4 [Database Management]: Systems—Transaction Processing; H.3.0 [Information Storage and Retrieval]: General
General Terms: Algorithms
ACM Reference Format: Graefe, G. 2012. A survey of B-tree logging and recovery techniques. ACM Trans. Datab. Syst. 37, 1, Article 1 (February 2012), 35 pages. DOI = 10.1145/2109196.2109197 http://doi.acm.org/10.1145/2109196.2109197

1. INTRODUCTION
B-trees [Bayer and McCreight 1972] are primary data structures not only in databases but also in information retrieval, file systems, key-value stores, and many application-specific indexes.
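One of the techniques named in the abstract, pseudo-deleted records, is easy to illustrate. In the hedged sketch below (generic code, not taken from the survey), a user transaction deletes a key only logically by setting a ghost flag, which is cheap to log and trivial to roll back; a later system transaction reclaims the space.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One leaf entry: the ghost flag marks a pseudo-deleted (but still present) record.
struct LeafEntry {
    std::uint64_t key;
    bool ghost = false;
};

struct LeafPage {
    std::vector<LeafEntry> entries;  // kept sorted by key

    // Logical deletion performed by the user transaction: flip the ghost bit.
    bool pseudo_delete(std::uint64_t key) {
        for (auto& e : entries)
            if (e.key == key && !e.ghost) { e.ghost = true; return true; }
        return false;
    }

    // Later cleanup, typically by a system transaction: remove ghost entries.
    void erase_ghosts() {
        entries.erase(std::remove_if(entries.begin(), entries.end(),
                                     [](const LeafEntry& e) { return e.ghost; }),
                      entries.end());
    }
};
```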
R-Code a Very Capable Virtual Computer
The Harvard community has made this article openly available.
Citation: Walton, Robert Lee. 1995. R-Code a Very Capable Virtual Computer. Harvard Computer Science Group Technical Report TR-37-95.
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:26506458

R-CODE: A Very Capable Virtual Computer. Robert Lee Walton, TR-37-95, October 1995. Center for Research in Computing Technology, Harvard University, Cambridge, Massachusetts. A thesis presented by Robert Lee Walton to The Division of Applied Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Computer Science, Harvard University, Cambridge, Massachusetts, October 1995. © 1995 by Robert Lee Walton. All rights reserved.

ABSTRACT
This thesis investigates the design of a machine independent virtual computer, the R-CODE computer, for use as a target by high level language compilers. Unlike previous machine independent targets, R-CODE provides higher level capabilities such as a garbage collecting memory manager, tagged data type maps, array descriptors, register dataflow semantics, and a shared object memory. Emphasis is on trying to find universal versions of these ...
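The "tagged data" capability mentioned in the abstract can be illustrated with a small, hypothetical value representation; this is a common way to realize the idea, not R-CODE's actual encoding.

```cpp
#include <cstdint>

// Hypothetical illustration of tagged data in a virtual computer: every value
// carries a type tag that the VM and its garbage collector can inspect at run time.
enum class Tag : std::uint8_t { Integer, Float, Pointer, Nil };

struct TaggedValue {
    Tag tag;
    union {
        std::int64_t i;
        double       f;
        void*        p;
    };
};

// A collector can scan a frame of such values and follow only the pointers.
inline bool is_heap_reference(const TaggedValue& v) {
    return v.tag == Tag::Pointer && v.p != nullptr;
}
```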