Scalable Network Forensics

Total Pages: 16

File Type: PDF, Size: 1020 KB

Scalable Network Forensics
Matthias Vallentin
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2016-55
http://www.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-55.html
May 12, 2016

Copyright © 2016, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Scalable Network Forensics, by Matthias Vallentin. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Vern Paxson (Chair), Professor Michael Franklin, Professor David Brillinger. Spring 2016.

Abstract

Network forensics and incident response play a vital role in site operations, but for large networks, coping with the ever-growing volume of activity and resulting logs poses daunting difficulties. On the one hand, logging sources can generate tens of thousands of events per second, which a system supporting comprehensive forensics must somehow continually ingest. On the other hand, operators greatly benefit from interactive exploration of disparate types of activity when analyzing an incident. An incident often leaves network operators scrambling to ferret out answers to key questions: How did the attackers get in? What did they do once inside? Where did they come from? What activity patterns serve as indicators reflecting their presence? How do we prevent this attack in the future?

Operators can only answer such questions by drawing upon high-quality descriptions of past activity recorded over extended time. A typical analysis starts with a narrow piece of intelligence, such as a local system exhibiting questionable behavior, or a report from another site describing an attack they detected. The analyst then tries to locate the described behavior by examining past activity, often cross-correlating information of different types to build up additional context. Frequently, this process in turn produces new leads to explore iteratively ("peeling the onion"), continuing and expanding until ultimately the analyst converges on as complete an understanding of the incident as they can extract from the available information.

This process, however, remains manual and time-consuming, as no single storage system efficiently integrates the disparate sources of data that investigations often involve. While standard Security Information and Event Management (SIEM) solutions aggregate logs from different sources into a single database, their data models omit crucial semantics, and they struggle to scale to the data rates that large-scale environments require.
In this thesis we present the design, implementation, and evaluation of VAST (Visibility Across Space and Time), a distributed platform for high-performance network forensics and incident response that provides both continuous ingestion of voluminous event streams and interactive query performance. VAST offers a type-rich data model to avoid loss of critical semantics, allowing operators to express activity directly. Similarly, strong typing persists throughout the entire system, enabling type-specific optimization at lower levels while retaining type safety during querying for a less error-prone interaction. A central contribution of this work concerns our novel type-specific indexes that directly support the type's common operations, e.g., top-k prefix search for IP addresses. We show that composition of these indexes allows for a powerful and unified approach to fine-grained data localization, which directly supports the workflows of security investigators. VAST leverages a native implementation of the actor model to scale both intra-machine, across available CPU cores, and inter-machine, over a cluster of commodity systems. Our evaluation with real-world log and packet data demonstrates the system's potential to support interactive exploration at a level beyond what current systems offer. We release VAST as free open-source software under a permissive license.

To my parents

Contents
Contents
List of Figures
List of Tables
List of Algorithms
1 Introduction
  1.1 Use Cases
    1.1.1 Incident Response
    1.1.2 Network Troubleshooting
    1.1.3 Insider Abuse
  1.2 Goals
    1.2.1 Interactivity
    1.2.2 Scalability
    1.2.3 Expressiveness
  1.3 Outline
2 Background
  2.1 Literature Search
  2.2 Related Work
    2.2.1 Traditional Databases
    2.2.2 Modern Data Stores
    2.2.3 Distributed Computing
    2.2.4 Network Forensics Domain
  2.3 High-Level Message Passing
    2.3.1 Actor Model
    2.3.2 Implementations
  2.4 Accelerating Search
    2.4.1 Hash and Tree Indexes
    2.4.2 Inverted and Bitmap Indexes
    2.4.3 Space-Time Trade-off
    2.4.4 Composition
3 Architecture
  3.1 Data Model
    3.1.1 Type System
    3.1.2 Query Language
  3.2 Components
    3.2.1 Import
    3.2.2 Archive
    3.2.3 Index
    3.2.4 Export
  3.3 Deployment
    3.3.1 Component Distribution
    3.3.2 Fault Tolerance
  3.4 Summary
4 Implementation
  4.1 Message Passing Challenges
    4.1.1 Adapting to Load Fluctuations with Flow Control
    4.1.2 Resolving Routing Inefficiencies with Direct Connections
  4.2 Composable and Type-Rich Indexing
    4.2.1 Boolean Index
    4.2.2 Integral Index
    4.2.3 Floating Point Index
    4.2.4 Duration & Time Index
    4.2.5 String Index
    4.2.6 IP Address Index
    4.2.7 Subnet Index
    4.2.8 Port Index
    4.2.9 Container Indexes
  4.3 Query Processing
    4.3.1 Expression Normalization
    4.3.2 Evaluating Expressions
    4.3.3 Finite State Machines
  4.4 Code Base
5 Evaluation
  5.1 Measurement Infrastructure
    5.1.1 Machines
    5.1.2 Data Sets
  5.2 Correctness
  5.3 Throughput
  5.4 Latency
  5.5 Scaling
  5.6 Storage
    5.6.1 Archive Compression
    5.6.2 Index Overhead
  5.7 Summary
6 Conclusion
  6.1 Summary
  6.2 Outlook
    6.2.1 Systems Challenges
    6.2.2 Algorithmic Challenges
Bibliography
A Multi-Component Range Queries

List of Figures
1.1 Thesis structure
2.1 A summary of research on network forensics over the last decade
2.2 The actor model
2.3 CAF performance compared to other actor model implementations
2.4 Efficient access of the base data through an index
2.5 Juxtaposition of inverted and bitmap indexes
2.6 Design choices to map keys to identifier sets
2.7 Illustrating how different encoding schemes work
2.8 Equality, range, and interval coding
2.9 Value decomposition example
3.1 The type system of VAST
3.2 High-level system architecture of VAST
3.3 VAST deployment styles
3.4 Event
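The abstract's notion of composable, type-specific indexes (for example, prefix search over IP addresses) can be made concrete with a toy sketch. The Python below is purely illustrative and assumes nothing about VAST's actual C++ implementation, which relies on compressed bitmap encodings rather than plain lists and sets.

```python
# Illustrative sketch (not VAST's implementation): a minimal bit-sliced index
# over IPv4 addresses that answers subnet-membership queries by composing one
# bit vector per address bit.
import ipaddress

class IPv4PrefixIndex:
    def __init__(self):
        self.rows = 0
        self.bit_columns = [[] for _ in range(32)]  # one column per address bit

    def add(self, addr: str) -> None:
        """Index one event's address."""
        value = int(ipaddress.IPv4Address(addr))
        for bit in range(32):
            self.bit_columns[bit].append((value >> (31 - bit)) & 1)
        self.rows += 1

    def lookup_subnet(self, cidr: str) -> set:
        """Return the row ids of all events whose address lies in `cidr`."""
        net = ipaddress.IPv4Network(cidr)
        prefix = int(net.network_address)
        hits = set(range(self.rows))            # start with all rows, then intersect
        for bit in range(net.prefixlen):        # only the prefix bits constrain the match
            wanted = (prefix >> (31 - bit)) & 1
            column = self.bit_columns[bit]
            hits &= {row for row in range(self.rows) if column[row] == wanted}
        return hits

index = IPv4PrefixIndex()
for a in ["10.0.0.1", "10.0.1.7", "192.168.1.5"]:
    index.add(a)
print(index.lookup_subnet("10.0.0.0/8"))   # {0, 1}
```

Each address bit contributes one column, and a /k query composes only the first k columns; this kind of per-type composition is the flavor of operation the thesis builds its indexes around.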
Recommended publications
  • GlobalFS: A Strongly Consistent Multi-Site File System
    GlobalFS: A Strongly Consistent Multi-Site File System
    Leandro Pacheco (University of Lugano), Raluca Halalai (University of Neuchâtel), Valerio Schiavoni (University of Neuchâtel), Fernando Pedone (University of Lugano), Etienne Rivière (University of Neuchâtel), Pascal Felber (University of Neuchâtel)

    Abstract: This paper introduces GlobalFS, a POSIX-compliant geographically distributed file system. GlobalFS builds on two fundamental building blocks, an atomic multicast group communication abstraction and multiple instances of a single-site data store. We define four execution modes and show how all file system operations can be implemented with these modes while ensuring strong consistency and tolerating failures. We describe the GlobalFS prototype in detail and report on an extensive performance assessment. We have deployed GlobalFS across all EC2 regions and show that the system scales geographically, providing performance comparable to other state-of-the-art distributed file systems for local commands and allowing for strongly consistent operations over the whole system.

    consistency, availability, and tolerance to partitions. Our goal is to ensure strongly consistent file system operations despite node failures, at the price of possibly reduced availability in the event of a network partition. Weak consistency is suitable for domain-specific applications where programmers can anticipate and provide resolution methods for conflicts, or work with last-writer-wins resolution methods. Our rationale is that for general-purpose services such as a file system, strong consistency is more appropriate as it is both more intuitive for the users and does not require human intervention in case of conflicts. Strong consistency requires ordering commands across replicas, which needs coordination among nodes at geographically distributed sites (i.e., regions). Designing strongly consistent distributed systems that provide good
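The GlobalFS excerpt hinges on ordering commands across replicas. A minimal sketch of that idea, assuming a hypothetical in-memory replica and a pre-ordered command log standing in for the atomic multicast layer:

```python
# Conceptual sketch (not GlobalFS code): strong consistency by applying
# commands in one global delivery order, as an atomic multicast would
# guarantee. Replicas that apply the same ordered log reach the same state.
from dataclasses import dataclass, field

@dataclass
class Replica:
    files: dict = field(default_factory=dict)

    def apply(self, command):
        op, path, data = command
        if op == "write":
            self.files[path] = data
        elif op == "delete":
            self.files.pop(path, None)

def atomic_multicast(ordered_log, replicas):
    """Deliver every command to every replica in the same total order."""
    for command in ordered_log:
        for replica in replicas:
            replica.apply(command)

regions = [Replica() for _ in range(3)]   # e.g., three geographic sites
log = [("write", "/a", "v1"), ("write", "/a", "v2"), ("delete", "/b", None)]
atomic_multicast(log, regions)
assert all(r.files == {"/a": "v2"} for r in regions)  # identical state everywhere
```

Because every site applies the same log in the same order, all replicas converge without conflict resolution, which is the property the paper trades some availability for.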
  • Big Data Storage Workload Characterization, Modeling and Synthetic Generation
    Big Data Storage Workload Characterization, Modeling and Synthetic Generation
    by Cristina Lucia Abad
    Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate College of the University of Illinois at Urbana-Champaign, 2014. Urbana, Illinois. Doctoral Committee: Professor Roy H. Campbell, Chair; Professor Klara Nahrstedt; Associate Professor Indranil Gupta; Assistant Professor Yi Lu; Dr. Ludmila Cherkasova, HP Labs.

    Abstract: A huge increase in data storage and processing requirements has led to Big Data, for which next-generation storage systems are being designed and implemented. As Big Data stresses the storage layer in new ways, a better understanding of these workloads and the availability of flexible workload generators are increasingly important to facilitate the proper design and performance tuning of storage subsystems like data replication, metadata management, and caching. Our hypothesis is that the autonomic modeling of Big Data storage system workloads through a combination of measurement, and statistical and machine learning techniques is feasible, novel, and useful. We consider the case of one common type of Big Data storage cluster: a cluster dedicated to supporting a mix of MapReduce jobs. We analyze 6-month traces from two large clusters at Yahoo and identify interesting properties of the workloads. We present a novel model for capturing popularity and short-term temporal correlations in object request streams, and show how unsupervised statistical clustering can be used to enable autonomic type-aware workload generation that is suitable for emerging workloads. We extend this model to include other relevant properties of storage systems (file creation and deletion, pre-existing namespaces and hierarchical namespaces) and use the extended model to implement MimesisBench, a realistic namespace metadata benchmark for next-generation storage systems.
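The clustering-based synthetic generation described above can be sketched as follows. This is only a conceptual illustration, not MimesisBench; the two-dimensional feature choice and the cluster count are assumptions made for the example.

```python
# Illustrative sketch: cluster observed requests into "types" with k-means,
# then synthesize a workload by resampling requests from each cluster in
# proportion to its observed popularity.
import random
import numpy as np
from sklearn.cluster import KMeans

rng = random.Random(0)
# Toy trace: (object_size_bytes, inter_arrival_ms) pairs.
trace = np.array([(64_000, 2), (70_000, 3), (1_000_000, 90), (950_000, 120),
                  (66_000, 1), (980_000, 100)], dtype=float)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(trace)

def synthesize(n):
    """Generate n synthetic requests, preserving the cluster mix and per-cluster statistics."""
    out = []
    for _ in range(n):
        label = rng.choice(list(kmeans.labels_))                  # sample a request type
        members = trace[kmeans.labels_ == label]
        out.append(tuple(members[rng.randrange(len(members))]))   # resample within that type
    return out

print(synthesize(4))
```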
  • A Decentralized Cloud Storage Network Framework
    Storj: A Decentralized Cloud Storage Network Framework
    Storj Labs, Inc. October 30, 2018, v3.0. https://github.com/storj/whitepaper
    Copyright © 2018 Storj Labs, Inc. and Subsidiaries. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 license (CC BY-SA 3.0). All product names, logos, and brands used or cited in this document are property of their respective owners. All company, product, and service names used herein are for identification purposes only. Use of these names, logos, and brands does not imply endorsement.

    Contents
    0.1 Abstract
    0.2 Contributors
    1 Introduction
    2 Storj design constraints
      2.1 Security and privacy
      2.2 Decentralization
      2.3 Marketplace and economics
      2.4 Amazon S3 compatibility
      2.5 Durability, device failure, and churn
      2.6 Latency
      2.7 Bandwidth
      2.8 Object size
      2.9 Byzantine fault tolerance
      2.10 Coordination avoidance
    3 Framework
      3.1 Framework overview
      3.2 Storage nodes
      3.3 Peer-to-peer communication and discovery
      3.4 Redundancy
      3.5 Metadata
      3.6 Encryption
      3.7 Audits and reputation
      3.8 Data repair
      3.9 Payments
    4 Concrete implementation
      4.1 Definitions
      4.2 Peer classes
      4.3 Storage node
      4.4 Node identity
      4.5 Peer-to-peer communication
      4.6 Node discovery
      4.7 Redundancy
      4.8 Structured file storage
      4.9 Metadata
      4.10 Satellite
      4.11 Encryption
      4.12 Authorization
      4.13 Audits
      4.14 Data repair
      4.15 Storage node reputation
      4.16 Payments
      4.17 Bandwidth allocation
      4.18 Satellite reputation
      4.19 Garbage collection
      4.20 Uplink
      4.21 Quality control and branding
    5 Walkthroughs
  • MetaDefender Core v4.12.2
    MetaDefender Core v4.12.2
    © 2018 OPSWAT, Inc. All rights reserved. OPSWAT®, Metadefender™ and the OPSWAT logo are trademarks of OPSWAT, Inc. All other trademarks, trade names, service marks, service names, and images mentioned and/or used herein belong to their respective owners.

    Table of Contents
    About This Guide
    Key Features of Metadefender Core
    1. Quick Start with Metadefender Core
      1.1. Installation
        Operating system invariant initial steps
        Basic setup
        1.1.1. Configuration wizard
      1.2. License Activation
      1.3. Scan Files with Metadefender Core
    2. Installing or Upgrading Metadefender Core
      2.1. Recommended System Requirements
        System Requirements For Server
        Browser Requirements for the Metadefender Core Management Console
      2.2. Installing Metadefender
        Installation
        Installation notes
        2.2.1. Installing Metadefender Core using command line
        2.2.2. Installing Metadefender Core using the Install Wizard
      2.3. Upgrading MetaDefender Core
        Upgrading from MetaDefender Core 3.x
        Upgrading from MetaDefender Core 4.x
      2.4. Metadefender Core Licensing
        2.4.1. Activating Metadefender Licenses
        2.4.2. Checking Your Metadefender Core License
      2.5. Performance and Load Estimation
        What to know before reading the results: Some factors that affect performance
        How test results are calculated
        Test Reports
        Performance Report - Multi-Scanning On Linux
        Performance Report - Multi-Scanning On Windows
      2.6. Special installation options
        Use RAMDISK for the temp directory
    3. Configuring Metadefender Core
      3.1. Management Console
      3.2.
  • Archive and Compressed
    Archive and compressed
    Main article: List of archive formats
    • .?Q? – files compressed by the SQ program
    • 7z – 7-Zip compressed file
    • AAC – Advanced Audio Coding
    • ace – ACE compressed file
    • ALZ – ALZip compressed file
    • APK – Applications installable on Android
    • AT3 – Sony's UMD Data compression
    • .bke – BackupEarth.com Data compression
    • ARC
    • ARJ – ARJ compressed file
    • BA – Scifer Archive (.ba), Scifer External Archive Type
    • big – Special file compression format used by Electronic Arts for compressing the data for many of EA's games
    • BIK (.bik) – Bink Video file. A video compression system developed by RAD Game Tools
    • BKF (.bkf) – Microsoft backup created by NTBACKUP.EXE
    • bzip2 – (.bz2)
    • bld – Skyscraper Simulator Building
    • c4 – JEDMICS image files, a DOD system
    • cab – Microsoft Cabinet
    • cals – JEDMICS image files, a DOD system
    • cpt/sea – Compact Pro (Macintosh)
    • DAA – Closed-format, Windows-only compressed disk image
    • deb – Debian Linux install package
    • DMG – an Apple compressed/encrypted format
    • DDZ – a file which can only be used by the "daydreamer engine" created by "fever-dreamer", a program similar to RAGS; it's mainly used to make somewhat short games.
    • DPE – Package of AVE documents made with Aquafadas digital publishing tools.
    • EEA – An encrypted CAB, ostensibly for protecting email attachments
    • .egg – Alzip Egg Edition compressed file
    • EGT (.egt) – EGT Universal Document, also used to create compressed cabinet files; replaces .ecab
    • ECAB (.ECAB, .ezip) – EGT Compressed Folder, used in advanced systems to compress entire system folders; replaced by EGT Universal Document
    • ESS (.ess) – EGT SmartSense File, detects files compressed using the EGT compression system.
    • GHO (.gho, .ghs) – Norton Ghost
    • gzip (.gz) – Compressed file
    • IPG (.ipg) – Format in which Apple Inc.
  • RAMA: A Filesystem for Massively Parallel Computers
    RAMA: A Filesystem for Massively Parallel Computers
    Ethan L. Miller and Randy H. Katz, University of California, Berkeley, Berkeley, California

    Abstract: This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used. [We discuss] the advantages of the system, and some possible drawbacks. We conclude with future directions for research.

    Previous Work: There have been few file systems truly designed for parallel machines. While there have been many massively parallel processors, most of them have used uniprocessor-based file systems. These computers generally perform file I/O through special I/O interfaces and employ a front-end CPU to manage the file system. This method has the major advantage that it uses well-understood uniprocessor file systems; little additional effort is needed to support a parallel processor. The disadvantage, however, is that bandwidth to the parallel processor is generally low, as there is only a single CPU managing the file system. Bandwidth is limited by this CPU's ability to handle requests and by the single channel into the parallel processor. Nonetheless, systems such as the CM-2 use this method.
  • Curriculum Vitae: Ethan L. Miller
    Curriculum Vitae – Ethan L. Miller, June 2021
    Computer Science & Engineering Department, University of California, Santa Cruz, 1156 High Street, MS SOE3, Santa Cruz, CA 95064
    Mobile: +1 (831) 295-8432, Email: [email protected], https://www.soe.ucsc.edu/~elm/

    Employment History
    2018–present: Professor, Computer Science and Engineering Department, University of California, Santa Cruz
    2017–2018: Professor, Computer Engineering Department, University of California, Santa Cruz
    2014–2019: Veritas Presidential Chair in Storage [formerly Symantec Presidential Chair in Storage & Security], Jack Baskin School of Engineering, University of California, Santa Cruz
    2013–2019: Director, NSF IUCRC Center for Research in Storage Systems, University of California, Santa Cruz
    2012–2013: Pure Storage (on 80% leave from the University of California, Santa Cruz)
    2009–2013: Site Director, NSF IUCRC Center for Research in Intelligent Storage, University of California, Santa Cruz
    2008–2017: Professor, Computer Science Department, University of California, Santa Cruz
    2007–present: Associate Director, Storage Systems Research Center, University of California, Santa Cruz
    2002–2008: Associate Professor, Computer Science Department, University of California, Santa Cruz
    2000–2002: Assistant Professor, Computer Science Department, University of California, Santa Cruz
    1999: System Architect, Endeca, Cambridge, MA
    1994–2000: Assistant Professor, Computer Science and Electrical Engineering Department, University of Maryland Baltimore County
    1988–1994: Research Assistant, Computer Science Division, University of California at Berkeley
    1988–1990: Teaching Associate, Computer Science Division, University of California at Berkeley
    1987–1988: Software Engineer, BBN Laboratories, Cambridge, MA
    1986: Summer intern, GTE Government Systems, Rockville, MD

    Education
    1995: Ph.D., University of California at Berkeley, Computer Science (advisor: Randy Katz). Thesis: Storage Hierarchy Management for Scientific Computing
    1990: M.
  • Sequence analysis: Data-Dependent Bucketing Improves Reference-Free Compression of Sequencing Reads, by Rob Patro and Carl Kingsford
    Bioinformatics, 31(17), 2015, 2770–2777. doi: 10.1093/bioinformatics/btv248. Advance Access Publication Date: 24 April 2015. Original Paper, Sequence analysis.

    Data-dependent bucketing improves reference-free compression of sequencing reads
    Rob Patro (Department of Computer Science, Stony Brook University, Stony Brook, NY 11794-4400, USA) and Carl Kingsford (Department of Computational Biology, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, USA; to whom correspondence should be addressed). Associate Editor: Inanc Birol. Received on November 16, 2014; revised on April 11, 2015; accepted on April 20, 2015.

    Abstract
    Motivation: The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reduce storage and transmission requirements is to compress this sequencing data.
    Results: We present a novel technique to boost the compression of sequencing reads that is based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes.
    Availability and implementation: Mince is written in C++11, is open source and has been made available under the GPLv3 license. It is available at http://www.cs.cmu.edu/ckingsf/software/mince.
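A minimal sketch of the bucketing idea described in the abstract, not Mince itself: the bucket key (the lexicographically smallest k-mer of each read) and the parameters are assumptions chosen for illustration.

```python
# Conceptual sketch: bucket reads by a short key so that similar reads end up
# adjacent, then compress; adjacency lets a generic compressor exploit the
# redundancy (gains show up on realistic read volumes, not this toy input).
import zlib

def min_kmer(read, k=8):
    """Bucket key: the lexicographically smallest k-mer in the read."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def compressed_size(reads):
    return len(zlib.compress("\n".join(reads).encode()))

reads = ["ACGTACGTGGTTACGT", "TTGGACGTACGTGGTT", "CCCCAAAATTTTGGGG",
         "ACGTACGTGGTTACGA", "CCCCAAAATTTTGGGA"]
bucketed = sorted(reads, key=min_kmer)          # similar reads become neighbors
print(compressed_size(reads), compressed_size(bucketed))
```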
  • Improving MapReduce Performance in Heterogeneous Environments
    Improving MapReduce Performance in Heterogeneous Environments
    Matei Zaharia, Andy Konwinski, Anthony D. Joseph, Randy Katz, Ion Stoica
    University of California, Berkeley
    {matei,andyk,adj,randy,stoica}@cs.berkeley.edu

    Abstract: MapReduce is emerging as an important programming model for large-scale data-parallel applications such as web indexing, data mining, and scientific simulation. Hadoop is an open-source implementation of MapReduce enjoying wide adoption and is often used for short jobs where low response time is critical. Hadoop's performance is closely tied to its task scheduler, which implicitly assumes that cluster nodes are homogeneous and tasks make progress linearly, and uses these assumptions to decide when to speculatively re-execute tasks that appear to be stragglers. In practice, the homogeneity assumptions do not always hold. An especially compelling setting where this occurs is a virtualized data center, such as Amazon's Elastic Compute Cloud (EC2).

    The MapReduce model popularized by Google is very attractive for ad-hoc parallel processing of arbitrary data. MapReduce breaks a computation into small tasks that run in parallel on multiple machines, and scales easily to very large clusters of inexpensive commodity computers. Its popular open-source implementation, Hadoop [2], was developed primarily by Yahoo, where it runs jobs that produce hundreds of terabytes of data on at least 10,000 cores [4]. Hadoop is also used at Facebook, Amazon, and Last.fm [5]. In addition, researchers at Cornell, Carnegie Mellon, University of Maryland and PARC are starting to use Hadoop for seismic simulation, natural language processing, and mining web data [5, 6]. A key benefit of MapReduce is that it automatically
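The speculative re-execution decision mentioned above can be illustrated with a simplified heuristic. The sketch below is neither Hadoop's scheduler nor the paper's full algorithm, just the general shape of picking a straggler from observed progress rates.

```python
# Simplified illustration: estimate each task's remaining time from its
# progress rate and speculatively re-execute the slow task expected to
# finish last.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    progress: float      # fraction complete, 0.0 .. 1.0
    running_secs: float  # wall-clock time since the task started

def pick_speculative(tasks, slow_fraction=0.25):
    rates = [t.progress / t.running_secs for t in tasks]
    cutoff = sorted(rates)[max(0, int(len(rates) * slow_fraction) - 1)]
    candidates = [t for t, r in zip(tasks, rates) if r <= cutoff and t.progress < 1.0]
    if not candidates:
        return None
    # Remaining time, assuming the current (linear) progress rate continues.
    return max(candidates, key=lambda t: (1.0 - t.progress) / (t.progress / t.running_secs))

tasks = [Task("map-0", 0.9, 50), Task("map-1", 0.2, 60), Task("map-2", 0.8, 45)]
print(pick_speculative(tasks).name)   # map-1: slow rate, most estimated time left
```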
  • The Quantcast File System
    The Quantcast File System
    Michael Ovsiannikov (Quantcast), Silvius Rus (Quantcast), Damian Reeves (Google), Paul Sutter (Quantcast), Sriram Rao (Microsoft), Jim Kelly (Quantcast)

    Abstract: The Quantcast File System (QFS) is an efficient alternative to the Hadoop Distributed File System (HDFS). QFS is written in C++, is plugin compatible with Hadoop MapReduce, and offers several efficiency improvements relative to HDFS: 50% disk space savings through erasure coding instead of replication, a resulting doubling of write throughput, a faster name node, support for faster sorting and logging through a concurrent append feature, a native command line client much faster than hadoop fs, and global feedback-directed I/O device management. As QFS works out of the box with Hadoop, migrating data from HDFS

    chine writing it, another on the same rack, and a third on a distant rack. Thus HDFS is network efficient but not particularly storage efficient, since to store a petabyte of data, it uses three petabytes of raw storage. At today's cost of $40,000 per PB that means $120,000 in disk alone, plus additional costs for servers, racks, switches, power, cooling, and so on. Over three years, operational costs bring the cost close to $1 million. For reference, Amazon currently charges $2.3 million to store 1 PB for three years. Hardware evolution since then has opened new optimization possibilities. 10 Gbps networks are now commonplace, and cluster racks can be much chattier.
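The storage and cost figures quoted above can be checked with simple arithmetic. The Reed-Solomon 6+3 stripe layout used below is an assumption for the example, while the 3x replication factor and the $40,000/PB figure come directly from the excerpt.

```python
# Back-of-the-envelope check of the excerpt's storage claim.
logical_pb = 1.0
cost_per_raw_pb = 40_000           # USD, disk only (from the excerpt)

replication_raw = logical_pb * 3                   # three full copies
erasure_raw = logical_pb * (6 + 3) / 6             # 1.5x overhead with RS(6,3), assumed layout

print(f"3-way replication: {replication_raw:.1f} PB raw, ${replication_raw * cost_per_raw_pb:,.0f}")
print(f"erasure coding:    {erasure_raw:.1f} PB raw, ${erasure_raw * cost_per_raw_pb:,.0f}")
print(f"disk savings:      {1 - erasure_raw / replication_raw:.0%}")   # 50%
```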
  • Basic Guide to the Command Line
    Basic guide to the command line

    Outline: Introduction; Definitions; Special Characters; Filenames; Pathnames and the Path Variable (Search Path); Wildcards; Standard in, standard out, standard error, and redirections; Here document (heredoc); Owners, groups, and permissions; Commands; Regular Expressions

    Introduction
    To utilize many of the programs in NMRbox you will need to at least have a rudimentary understanding of how to navigate around the command line. With some practice you may even find that the command line is easier than using a GUI-based file browser – it is certainly much more powerful. In addition to navigating from the command line, you will also need a rudimentary understanding of how to create and execute a shell script. This guide may take a little while to get through, but if you know most of the contents in this document you will be well served.

    Definitions
    Terminal Emulator – A terminal emulator is a program that makes your computer act like an old-fashioned computer terminal. When running inside a graphical environment it is often called a "terminal window", "terminal", "term", or "shell" (although it is actually not a shell – read on). I will refer to a "Terminal Emulator" as "Terminal" in this document.
    Shell – A shell is a program that acts as a command language interpreter, executing the commands sent to it from the keyboard or a file. It is not part of the operating system kernel, but acts to pass commands to the kernel and receives the output from those commands. When you run a Terminal Emulator from a graphical interface, a shell is generally run within the Terminal, so your interaction is with the shell and the Terminal simply presents a place for the shell to run in the graphical interface.
  • MetaDefender Core v4.17.3
    MetaDefender Core v4.17.3
    © 2020 OPSWAT, Inc. All rights reserved. OPSWAT®, Metadefender™ and the OPSWAT logo are trademarks of OPSWAT, Inc. All other trademarks, trade names, service marks, service names, and images mentioned and/or used herein belong to their respective owners.

    Table of Contents
    About This Guide
    Key Features of MetaDefender Core
    1. Quick Start with MetaDefender Core
      1.1. Installation
        Operating system invariant initial steps
        Basic setup
        1.1.1. Configuration wizard
      1.2. License Activation
      1.3. Process Files with MetaDefender Core
    2. Installing or Upgrading MetaDefender Core
      2.1. Recommended System Configuration
        Microsoft Windows Deployments
        Unix Based Deployments
        Data Retention
        Custom Engines
        Browser Requirements for the Metadefender Core Management Console
      2.2. Installing MetaDefender
        Installation
        Installation notes
        2.2.1. Installing Metadefender Core using command line
        2.2.2. Installing Metadefender Core using the Install Wizard
      2.3. Upgrading MetaDefender Core
        Upgrading from MetaDefender Core 3.x
        Upgrading from MetaDefender Core 4.x
      2.4. MetaDefender Core Licensing
        2.4.1. Activating Metadefender Licenses
        2.4.2. Checking Your Metadefender Core License
      2.5. Performance and Load Estimation
        What to know before reading the results: Some factors that affect performance
        How test results are calculated
        Test Reports
        Performance Report - Multi-Scanning On Linux
        Performance Report - Multi-Scanning On Windows
      2.6. Special installation options
        Use RAMDISK for the temp directory
    3.