Log Management with Open-Source Tools

Risto Vaarandi
rvaarandi 4T Y4H00 D0T C0M

Outline

● Why do we need log collection and management? Why use open source tools?
● Widely used logging protocols and recently introduced new standards
● Open-source syslog servers
● Open-source log management tools

Why collect logs from your IT system and network?

● Observation – logs contain information which is often not available from other sources
● Real-time monitoring – analyze logs in real-time (or near-real-time) fashion, in order to discover important changes in the state of the IT system
● Post-factum incident analysis – leverage collected data for discovering unknown past incidents and getting detailed insights into them

Why use open source tools for log management?

● Commercial SIEM and log management frameworks:
✔ many frameworks are consultant-oriented – they have a complex design and insufficient documentation
✔ prohibitive deployment and licensing costs
✔ many frameworks repeat a number of design mistakes of network management solutions (made almost two decades ago!)
● Past experience with network management solutions:
✔ Phase 1: initial marketing hype, followed by a number of success stories in the context of large and wealthy institutions
✔ Phase 2: disappointment among many potential customers (failed deployments, prohibitive pricing, etc.) and a search for alternatives
✔ Phase 3: appearance of well-designed open-source solutions which became widely used and acknowledged, especially by small- and mid-size enterprises

Traditional log collection protocols

● The scene of log collection protocols was relatively stable for two decades
● BSD syslog – the only cross-vendor protocol designed specifically for logging
● UDP based and plaintext, thus resource-efficient, but unreliable and not secure
● Simple message layout in the UDP frame – priority, simple timestamp, host name, program name, unstructured message text

New log collection protocols

● IETF syslog (2009) – support for including structured data in messages, UDP and TCP based transport, encryption and authentication, detailed timestamps
● CEE (Common Event Expression) logging standard (2012) – use JSON format inside originally unstructured BSD/IETF syslog message fields
● Other protocols – non-RFC flavors of BSD and IETF syslog (e.g., BSD syslog over TCP), GELF, SNMP trap messages, etc.

Examples

# Traditional BSD syslog – priority value 28 encapsulates facility value 3
# (daemon) and severity value 4 (warning): 3*8 + 4 = 28
<28>Nov 17 12:33:59 myhost2 ids[1299]: port scan from 192.168.1.102

# IETF syslog – note high granularity timestamps with timezone information
# and two blocks of structured data
<28>1 2012-11-17T12:33:59.223+02:00 myhost2 ids 1299 - [timeQuality tzKnown="1" isSynced="1"][origin ip="10.1.1.2"] port scan from 192.168.1.102

# CEE message format – use a standard BSD syslog message for transporting
# structured data in JSON format
<28>Nov 17 12:33:59 myhost2 ids[1299]: @cee:{"pname":"ids","pid":1299,"msg":"port scan from 192.168.1.102","originip":"10.1.1.2","action":"portscan","src":"192.168.1.102"}

Why pass structured data in log messages?

● Unstructured message fields often contain additional information about the event which needs to be highlighted
● It is much easier to parse structured data (keyword-value pairs) than unstructured free-format strings (see the sketch below)
● Some structured data can be used without extra parsing – the JSON format is supported by several log management frameworks and databases (e.g., Elasticsearch)
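To make the parsing difference concrete, here is a minimal Python sketch (not from the original slides) that decodes the priority value of the BSD syslog example and extracts the JSON payload of the CEE example; the message and field names simply follow the samples above:

# minimal parsing sketch (Python), assuming messages shaped like the examples above
import json, re

def decode_priority(msg):
    # <PRI> encodes facility*8 + severity, e.g. 28 = 3 (daemon)*8 + 4 (warning)
    pri = int(re.match(r'<(\d+)>', msg).group(1))
    return pri // 8, pri % 8          # (facility, severity)

def parse_cee(msg):
    # everything after the @cee: cookie is an ordinary JSON object
    return json.loads(msg.split('@cee:', 1)[1])

msg = ('<28>Nov 17 12:33:59 myhost2 ids[1299]: @cee:{"pname":"ids","pid":1299,'
       '"msg":"port scan from 192.168.1.102","originip":"10.1.1.2",'
       '"action":"portscan","src":"192.168.1.102"}')
print(decode_priority(msg))           # (3, 4)
print(parse_cee(msg)["src"])          # 192.168.1.102

Extracting "src" from the unstructured BSD variant of the same message would instead require a hand-written regular expression per message type.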
Log collection on Linux platform

[Figure: architecture diagram – applications submit messages through openlog(3)/syslog(3) to /dev/log, and the kernel logs through /proc/kmsg; the local syslog server, driven by its configuration file (e.g., /etc/syslog-server.conf), writes messages to local logfiles under /var/log/..., passes them to other programs (e.g., a database or GUI) and to remote syslog servers, and accepts incoming messages from other nodes on a network port.]

Syslog servers – rsyslog

● http://www.rsyslog.com
+ fast message processing and efficient multithreading, designed to handle at least 150-200K messages per second (see the paper "Rsyslog: going up from 40K messages per second to 250K" by Rainer Gerhards from Linux Kongress 2010)
+ backwards compatible with UNIX syslogd configuration directives
+ has a number of unique features and advantages over competitors (disk based buffers, support for the Elasticsearch database, etc.)
- documentation could be better
- configuration language has a non-intuitive syntax
- filtering conditions cannot be named, which prevents their reuse

Syslog servers – syslog-ng

● http://www.balabit.com/network-security/syslog-ng/
+ a flexible and readable configuration language which allows for specifying complex configurations (see the sketch below)
+ single-threaded until the 3.2 version, but multi-threading has been introduced into recent versions, which considerably improves scalability and performance
+ well documented
- the open-source edition does not support disk based buffers
- no support for Elasticsearch (although it could be added through a self-developed output plugin)

Syslog servers – nxlog

● http://nxlog-ce.sourceforge.net/
+ native support for the Windows platform and Windows Event Log
+ supports the use of embedded Perl constructs for message processing
+ supports a number of input and output types not supported by competitors (e.g., accepting input events from SQL databases, producing output events in GELF format, etc.)
- poor message filtering performance
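As an illustration of the configuration-language differences noted above, here is a minimal syslog-ng sketch (not from the original slides; the port and file path are illustrative) that accepts BSD syslog messages over UDP and writes them to a file:

# minimal syslog-ng sketch – receive UDP syslog and write it to a file
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_file { file("/var/log/remote/messages"); };
log { source(s_net); destination(d_file); };

Each building block (source, destination, filter) is named and can be reused in several log paths – exactly the kind of reuse that rsyslog's anonymous filtering conditions do not allow.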
Elasticsearch DB for log management

● http://www.elasticsearch.org/
✔ Apache Lucene based NoSQL database technology that is frequently used for storing log data
✔ native support for distributed operation and building clusters
✔ allows for splitting indexes into parts (shards) and distributing the shards over several nodes (e.g., split an index into 2 shards and distribute them over 2 nodes, turning the disks at individual nodes into a single logical storage space)
✔ indexes can be configured to have one or more replicas, which increases fault tolerance (e.g., split an index into 2 shards, configure the index to have 1 replica, and distribute the resulting 4 shards across 4 nodes – see the sketch below)
✔ built-in support for data compression (important when storing large volumes of log data)
✔ supported by several log management tools (Kibana, Graylog2, logstash, rsyslog)
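The shard and replica layout from the example above is set when an index is created. A hedged sketch against the HTTP API of an Elasticsearch version contemporary with these slides (host and index name are illustrative):

# create an index with 2 primary shards and 1 replica of each (4 shards in total)
curl -XPUT 'http://localhost:9200/logs-2013.02.18' -d '
{
  "settings": { "number_of_shards": 2, "number_of_replicas": 1 }
}'

With two data nodes, Elasticsearch places the two primaries and their replica copies on different nodes, so either node can fail without losing the index.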
Log management tools – Kibana

● http://kibana.org/
✔ Kibana is a GUI for searching log data stored into an Elasticsearch DB
✔ Kibana is designed to work with the logstash log preprocessing tool, but can accept data from any other tool which is able to store it to Elasticsearch in a recognizable way (e.g., rsyslog)
✔ Kibana is lightweight, written in Ruby, accessible over HTTP, and contains only searching and reporting functionality (e.g., user authentication and SSL connectivity have to be accomplished with external tools like an Apache reverse proxy)
✔ When building a Kibana based log management solution, you are creating the system from well-documented and well-established building blocks, and thus have the opportunity for many customizations during initial installation and later maintenance

[Figure: screenshot of the Kibana web interface]

Log management tools – Graylog2

● http://graylog2.org/
✔ A full log management solution consisting of a server for log message reception (syslog, GELF) and a GUI
✔ The GUI is user-friendly with built-in help, and is intuitive to use
✔ Many configuration tasks (such as setting log data retention intervals) can be accomplished through the web interface
✔ Graylog2 supports users with different roles and password authentication
✔ Earlier versions of Graylog2 employed a single-server approach which limited system scalability, while the most recent versions allow several servers to run in parallel

[Figure: screenshot of the Graylog2 web interface]

Other log management tools

● Logstash (http://www.logstash.net/) – has a web interface for searching logs stored to an Elasticsearch database, but since it supports a large number of input and output types, it is mostly used as a log parsing and conversion tool
● ELSA (http://code.google.com/p/enterprise-log-search-and-archive/) – a log management system which is built on top of syslog-ng, MySQL and Sphinx

Netflow protocol

● Proposed by Cisco in the 1990s, nowadays supported by many major vendors
● A Netflow-enabled network device (e.g., router, switch, dedicated probe) collects network traffic statistics and exports them to a collector over UDP
● Traffic statistics consist of flow records, where each record describes some network flow
● Network flow – a unidirectional sequence of packets which share the transport protocol, source and destination IP, source and destination port, and a few other parameters (e.g., type of service)

Example of collected Netflow data

● The following two records represent a successfully negotiated and completed TCP connection from client 10.3.1.1 port 48896 to the HTTP service (port 80) running at the server 10.2.1.1:

Start = 2013-02-18 00:04:05.733  Duration = 0.014  TCP  10.3.1.1:48896 -> 10.2.1.1:80  TCPflags = .AP.SF  Packets = 5  Bytes = 513
Start = 2013-02-18 00:04:05.734  Duration = 0.010  TCP  10.2.1.1:80 -> 10.3.1.1:48896  TCPflags = .AP.SF  Packets = 4  Bytes = 375

How to collect/use Netflow data

● Enable Netflow collection at your network device or use dedicated probes (e.g., fprobe)
● Open-source software packages for collecting Netflow:
✔ NfSen (http://nfsen.sourceforge.net/)
✔ SiLK (http://tools.netsa.cert.org/silk/)
✔ Flow-tools (http://www.splintered.net/sw/flow-tools/) – unmaintained
● What you might be interested in finding in Netflow data (see the query sketch below):
✔ Flows with unusual combinations of TCP flags (e.g., FIN without ACK)
✔ Flows which represent connections to/from known bad IP addresses
✔ Unexpected spikes in traffic volumes (measured in the number of bytes, packets, or flows) associated with certain sources (e.g., foreign IP addresses or bad IP addresses)
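As a worked example of the first item in the list above, a hedged sketch using nfdump, the command-line tool behind NfSen (the flow data directory is illustrative), to list flows carrying a FIN flag without an ACK:

# list TCP flows with the FIN flag set but the ACK flag missing
nfdump -R /var/nfsen/profiles-data/live -o line 'proto tcp and flags F and not flags A'

Such flows should not occur in well-behaved TCP traffic and often point at scanning activity or broken middleboxes.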