Security Auditing on Modern Operating Systems


NXLog Ltd. · 2020-07-01 14:54:13 UTC

Audit logging is important for maintaining security and investigating incidents, and is required by many cybersecurity standards. Most operating systems have a native auditing system available for this purpose.

About security event auditing

Auditing systems in modern operating systems collect detailed information about security-related events. The audit or security logs generated by an auditing system facilitate identification of attempted attacks, security policy improvement, security incident investigation, and review by auditors. Logging of security events is required by ISO/IEC 27001,[1] the NIST Cybersecurity Framework (CSF),[2] various Common Criteria protection profiles,[3] and many other cybersecurity standards and guidelines.[4]

Note that an auditing system does not provide improved security directly. It is the responsibility of the operating system’s security policy to specify whether an action should be allowed or denied; the auditing system is responsible for keeping a detailed record showing how the security policy was applied. In most cases, an audit record includes a success or failure status along with any relevant information, such as a username.

An audit policy specifies which combinations of attempted action and status should be logged, allowing administrators to narrow the auditing scope to the security events considered important for the organization. For example, an audit policy may include only successful user login events but include all attempts, successful or not, to access a restricted resource.

Once an auditing system is configured and enabled, the resulting audit logs should be stored in a secure location and retained for a period of time, so they are available for any audits or investigations that may be required.
In some cases it may be necessary to forward audit logs to another system with higher security, perhaps even at a remote location.

Native auditing systems

Most modern operating systems have some form of audit system available, but the functionality, configuration, and audit log formats vary. Generally, an auditing system has a kernel component and various user space utilities. The kernel portion watches for system calls and generates those events that are specified by the audit policy. User space utilities may be available for functionality such as enabling or disabling auditing, configuring audit policy, collecting events from the kernel, filtering events, or creating reports for administrators. Implementing an organization-wide auditing policy requires working with a variety of auditing systems, depending on which operating systems are deployed in the organization’s network.

Linux

The Linux Audit system is available with most Linux-based operating systems. The Audit system can log many different types of events, including file and network accesses, system calls, failed logins, and commands run by users. It is composed of a kernel component that generates events, an Audit daemon (auditd) for collecting events, and several other tools for configuring auditing, relaying events, and reading the log files.[5]

The system is configured with control rules, which specify the system’s behavior; file system rules, which audit access to a file or directory; and system call rules, which log system calls made by specified programs.[6] By default, auditd collects events from the kernel and writes them in a key-value pair format to /var/log/audit/audit.log.[7] The ausearch and aureport commands can be used to search audit records and generate audit reports, respectively.

For more information about the Linux Audit system, see the Red Hat Security Guide, as well as the audit(8) manual page and other pages listed there.
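As an illustration of the rule types above, a minimal Audit rules file might look like the following sketch; the watched path, key names, and selected system calls are examples chosen here, not recommendations from the article:

```shell
# Illustrative rules, e.g. saved as /etc/audit/rules.d/example.rules

# File system rule: record writes and attribute changes to /etc/passwd,
# tagging matching records with the key "passwd_changes"
-w /etc/passwd -p wa -k passwd_changes

# System call rule: record unlink and rename system calls made by
# 64-bit programs, tagged with the key "file_deletion"
-a always,exit -F arch=b64 -S unlink,rename -k file_deletion
```

The rules can be loaded at runtime with auditctl, and the resulting records queried by key:

```shell
auditctl -R /etc/audit/rules.d/example.rules   # load the rules (as root)
ausearch -k passwd_changes --start today       # search records by key
aureport --summary                             # generate a summary report
```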
Microsoft Windows

Windows security auditing provides two groups of policy settings for configuring native auditing on Windows: "basic" and "advanced". The two groups are considered incompatible in practice; if advanced settings are used, it is recommended that a Group Policy setting be enabled to force the basic settings to be ignored.[8] The current local audit policy can be displayed and manipulated with the auditpol command. Audit records are written to the Security channel of the Windows Event Log.

The basic audit policy settings, available since Windows 2000,[9] include nine settings for auditing. Each setting controls a group of related auditable events. Settings are available for a range of events including logon activity, account management, object access, policy changes, and privilege use.[10] These settings are configured using the Local Security Policy snap-in (secpol.msc) under Local Policies\Audit Policy or via Group Policy under Computer Configuration\Windows Settings\Security Settings\Audit Policy.[11] For more information, see Basic security audit policies on Microsoft Docs.

The advanced settings, introduced in Windows Vista and Windows Server 2008,[9] offer more detailed policy configuration, with more than fifty settings in ten categories. These settings overlap with the basic settings and provide more granularity for specifying auditable events. As with the basic settings, each advanced setting controls a group of events. These settings are found under Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration\System Audit Policies in Group Policy.[12] For more information, see Advanced security audit policies on Microsoft Docs.

The Sysmon utility, included in Microsoft’s Sysinternals toolset, is a Windows system service and device driver that monitors and logs system activity to the Windows Event Log. Sysmon logs events and event information beyond what is possible with standard security auditing.
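Sysmon's behavior is controlled by an XML rules file. As a sketch of how its include/exclude filtering is typically expressed, a minimal configuration might look like this; the schema version and the specific rules are illustrative assumptions, not taken from the article:

```xml
<!-- Illustrative Sysmon configuration; schemaversion varies by release -->
<Sysmon schemaversion="4.50">
  <EventFiltering>
    <!-- Log all process creations except those of a trusted image -->
    <ProcessCreate onmatch="exclude">
      <Image condition="is">C:\Windows\System32\svchost.exe</Image>
    </ProcessCreate>
    <!-- Log only network connections to an unusual destination port -->
    <NetworkConnect onmatch="include">
      <DestinationPort condition="is">4444</DestinationPort>
    </NetworkConnect>
  </EventFiltering>
</Sysmon>
```

Installed with a command such as sysmon -accepteula -i config.xml, the service then writes matching events to the Event Log.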
Events include process creations, network connections, driver loading, raw filesystem reading, changes in file creation timestamps, and early boot process activity. Additional logged details include process command lines and hashes; process and session identifiers for event correlation; and, for network connections, parent processes, IP addresses, and port numbers. An XML configuration system allows the administrator to specify the events to be logged, with detailed rules for filtering events. Starting with Windows Vista, Sysmon events are written to the Microsoft-Windows-Sysmon/Operational channel.[13] For more information, see Sysmon on Microsoft Docs.

Apple macOS, FreeBSD, and Oracle Solaris

These operating systems each implement Sun’s Basic Security Module (BSM) auditing system.[14] The BSM auditing system can be used to audit a variety of events, including authentication, logins, program execution, inter-process communication (IPC), file activity, and network access.[15] It has a kernel component for generating events, a user space daemon (auditd) for collecting the events, and several other utilities for working with the audit logs (referred to as "audit trails").

Auditable events are grouped into event classes, such as the fc class for file creations. The audit policy is configured by listing these classes (with optional prefixes to select success/failure events) in the system-wide policy and in any per-user policies that may be required.[15] The audit daemon writes audit trails to files in /var/audit/ using the BSM binary format. A stream of audit records, useful for live monitoring, is also available through the /dev/auditpipe device.[16] In both cases, the auditreduce command can be used to filter events, and praudit will parse the audit trail and dump it to plain text.
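As a concrete sketch of these tools on a BSM system (the trail file name and selected class are illustrative):

```shell
# Dump the live audit record stream as plain text (requires root)
praudit /dev/auditpipe

# Select only records in the "lo" (login/logout) class from a stored
# trail, then convert them to plain text; trail files in /var/audit/
# are named with a start.stop timestamp pair as shown
auditreduce -c lo /var/audit/20200701120000.20200701130000 | praudit
```

The audited classes themselves are listed on the flags line of the audit_control file (typically /etc/security/audit_control); for example, flags:lo,fc would audit logins and file creations.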
For more information about BSM auditing, see the NIST Guide to Securing Apple OS X 10.10 Systems for IT Professionals (section 6.4.2), the Security Event Auditing chapter in the FreeBSD Handbook, and Solaris Auditing in the System Administration Guide.

IBM AIX

The AIX auditing subsystem logs a wide array of "security-relevant occurrences": process and file activity, network connections, user account management, audit system administration, filesystem volume changes, mail delivery configuration, job scheduling, and so on.[17] The system has a kernel audit logger and several commands for controlling the audit system and processing audit logs.

Audit policy is based on a per-process or per-object scope. With per-process auditing, auditable events are selected and grouped into classes, which are then assigned to users. With per-object auditing, events are assigned to file objects.[18] Output can be written to binary audit files for long-term storage (BIN mode), to a circular buffer that can be read from /dev/audit (STREAM mode), or both.[19] In either mode, processing of audit data is done with a chain of commands that read, filter, compress, and convert the audit records as required.[18][20]

For more information about AIX auditing, see the Auditing overview topic in IBM’s AIX 7.2 documentation and AIX Auditing Best Practices on IBM Support.

Auditable system events may also be generated outside of a platform’s native auditing system. For example, PAM authentication events on Linux are logged to Syslog, and audit events for a particular application may be written to a separate Windows Event Log channel.

Using NXLog to collect audit events

NXLog Enterprise Edition is a flexible, high-performance logging solution that offers a wide range of functionality for collecting audit data on multiple platforms. All log data is collected and processed using a structured log format, in which each event comprises a set of fields.
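The value of structured fields is easy to see with the Linux audit.log format mentioned earlier, where each record is a series of key=value pairs. The following is an illustrative sketch in plain Python (not NXLog code; the record text is invented) of turning one raw record into a structured field set:

```python
import re

def parse_audit_record(line: str) -> dict:
    """Return the key=value fields of a Linux audit record as a dict.

    Values may be bare (uid=0) or double-quoted (exe="/usr/bin/sudo");
    this sketch handles both forms but not hex-encoded values.
    """
    fields = {}
    for key, quoted, bare in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line):
        fields[key] = quoted if quoted else bare
    return fields

# An invented but representative record:
record = ('type=USER_LOGIN msg=audit(1593610453.123:456): pid=3278 uid=0 '
          'exe="/usr/sbin/sshd" acct="alice" res=success')
event = parse_audit_record(record)
print(event["type"], event["acct"], event["res"])  # → USER_LOGIN alice success
```

A log collector that delivers events already structured this way lets downstream tools filter and correlate on named fields directly.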
This reduces or eliminates the need for event parsing.