Exporting Events to Syslog


Author: Sila Kissuu, IBM On Demand Consulting
Created: 5/19/2017

Introduction

One of the product enhancements in IBM API Connect v5.0.7.0 is the capability to offload analytics data to external systems for further processing. This capability is useful if you want to consolidate data from multiple sources, require enhanced monitoring, or want to enrich your analytics data. Supported third-party systems include:

• HTTP servers
• Elasticsearch clusters
• Kafka clusters
• Syslog servers

In this article, we illustrate the process of configuring IBM API Connect to offload event data to a Splunk server via syslog.

Prerequisites

Collect the connection details and other configuration information that is required to configure a data offload to the target system. The configuration requirements are as follows:

• To configure data offload to an HTTP server, you must, at a minimum, provide the server URL. If you require encrypted communication with the server, you must also have a Transport Layer Security (TLS) profile defined. You can optionally add standardized or custom HTTP headers to define additional operating parameters.
• To configure data offload to an Elasticsearch cluster, you must, at a minimum, provide one or more server URLs, define how indexes are created by specifying a dynamically configured name, specify the number of primary shards for storing the indexed documents, and specify the number of replica shards for redundancy. If you require encrypted communication with the cluster, you must also have a TLS profile defined. You can optionally specify server authentication credentials.
• To configure data offload to a Kafka cluster, you must, at a minimum, provide host and port connection details for one or more servers, and the name of the Kafka topic to which you want to publish the offloaded data. If you require encrypted communication with the cluster, you must also have a TLS profile defined. For logging and monitoring purposes within Kafka, you can optionally specify a string identifier by which API Connect can be uniquely identified.
• To configure data offload to Syslog, you must, at a minimum, provide host and port connection details for the server. If you require encrypted communication with Syslog over the TCP protocol, you must also have a TLS profile defined.

It is recommended that you create one or more TLS profiles for encrypted communication instead of using the default TLS profile named Cloud Manager and API Manager TLS Profile.

Procedure: Splunk

1. Configure a data input that listens on a TCP or UDP port:
   a. Specify the port number on which Splunk will listen.
   b. Click Next.
   c. In the Select Source Type drop-down list, select Operating System, then syslog.
   d. Specify a preferred method for setting the host field for events coming from the Management server.
   e. Specify the index where Splunk will store incoming data.
   f. Click Review, then Submit.
2. If the configuration finishes successfully, your Splunk server is ready to collect events from the Management server.
3. Proceed to enable syslog in the Cloud Manager.

Procedure: Enable syslog in the Cloud Manager console

1. Run the command debug tail file /var/log/cmc.out to monitor log entries as you go through these steps.
2. In the Cloud Manager, click Settings.
3. From the navigation pane, click Analytics.
4. Complete the following fields under the API Events, Monitoring Events, Log Events, or Audit Events section (configure any or all event types of interest):
   a. Select the Export events to a third-party system checkbox.
   b. From the Select Analytics Platform drop-down list that is displayed, select Syslog.
   c. Click Configure to specify connection details and other configuration options for the Syslog server.
   d. Complete the fields in the Syslog Output window as follows:
      i. In the Host field, specify the fully qualified host name or IP address of the Syslog server.
      ii. In the Port field, accept the default port, which is set based on the protocol that you specify in the next field, or specify another port number on which Syslog listens for incoming connections. The default port is 514 when connecting over the UDP protocol, and 601 over the TCP protocol.
      iii. From the Protocol drop-down list, specify your preferred transport protocol:
         1. UDP: The connectionless User Datagram Protocol (UDP), which is generally used for simpler message transmissions, offers no guarantee of delivery and has no handshaking dialogues.
         2. TCP: The more complex Transmission Control Protocol (TCP), which is used for connection-oriented transmissions, with reliable, ordered data streaming between communicating network applications and error-correction facilities.
      iv. To use TLS to set up a private connection to the Syslog server and secure the transmission of the data being offloaded, select the Use TLS check box, then select your preferred profile from the drop-down list. This list shows all TLS profiles that have been created in the Cloud Manager. The default TLS profile (Cloud Manager and API Manager TLS Profile), which is defined in the Cloud Manager, is selected by default. NOTE: The Use TLS check box and the corresponding TLS profile drop-down list are available only if you selected TCP in the Protocol field.
      v. If you specified TCP as the protocol and selected the Use TLS check box, select the Validate Certificate checkbox if you want to verify the authenticity of the TLS certificate presented by the Syslog server before a connection is established.
      vi. To verify that your connection and configuration details are valid, click Send Test Event to generate and transmit a test event to the Syslog server.
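At the wire level, the choice made in the Protocol step comes down to datagrams versus a framed stream. The Python sketch below illustrates both transports, plus the optional TLS wrapping that corresponds to the Use TLS and Validate Certificate settings. The host, port, payload, and newline framing for TCP are illustrative assumptions, not API Connect internals.

```python
import socket
import ssl

def send_udp(host: str, port: int, event: str) -> None:
    # UDP: one self-contained datagram per event; fire-and-forget,
    # with no handshake and no delivery guarantee.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(event.encode("utf-8"), (host, port))

def send_tcp(host: str, port: int, event: str,
             use_tls: bool = False, validate_cert: bool = True) -> None:
    # TCP: connection-oriented, ordered delivery; each event is
    # terminated with a newline (a common syslog-over-TCP framing).
    conn = socket.create_connection((host, port), timeout=10)
    if use_tls:
        ctx = ssl.create_default_context()
        if not validate_cert:
            # Analogous to leaving the Validate Certificate box unchecked.
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
        conn = ctx.wrap_socket(conn, server_hostname=host)
    with conn:
        conn.sendall(event.encode("utf-8") + b"\n")
```

For example, send_udp("192.168.1.199", 514, "<14>test event") emits a single datagram to the default UDP syslog port; the TLS branch only applies to TCP, which is why the Use TLS check box appears only for that protocol.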
Messages are displayed beneath the primary banner in the Cloud Manager UI to indicate that an analytics event is being sent, and to indicate that the transmission was successful. The log entries below show submission of a test event:

2017-05-19 19:21:02.058 INFO [T-172] [com.ibm.apimgmt.api.rest.ApiServlet.logRequest] 192.168.1.199: POST /v1/cloud/analyticsoutputs/test
2017-05-19 19:21:02.915 INFO [T-172] [com.ibm.apimgmt.config.impl.ETagRepositoryServiceConfig.invalidate] Invalidate path: /cloud/analyticsoutputs/test

A successful test returns a response similar to this:

2017-05-19 19:22:41.129 INFO [T-172] [com.ibm.apimgmt.resources.analytics.AnalyticsOutputsResource.analyticsoffloadTestPost] Response: {
  "code": 0,
  "stdOut": "Sending Logstash's logs to /opt/logstash/testlogs which is now configured via log4j2.properties\n[2017-05-19T19:22:31,627][INFO ][logstash.pipeline] Starting pipeline {\"id\"=>\"main\", \"pipeline.workers\"=>4, \"pipeline.batch.size\"=>125, \"pipeline.batch.delay\"=>5, \"pipeline.max_inflight\"=>500}\n[2017-05-19T19:22:31,651][INFO ][logstash.pipeline] Pipeline main started\n[2017-05-19T19:22:31,788][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9805}\n",
  "stdErr": "",
  "eventSent": true
}
2017-05-19 19:22:41.131 INFO [T-172] [com.ibm.apimgmt.api.util.ApiResponseHandler.log] 192.168.1.199: POST /v1/cloud/analyticsoutputs/test 200

If the transmission failed, an error message is displayed on top of the current window to inform you of the failure. Click OK to close the error message and then verify that your configuration settings are correct.
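When tailing cmc.out, the JSON body embedded in these Response: lines is what distinguishes success from failure. A minimal Python sketch of that check, assuming the JSON body has already been cut out of the log line; the code and eventSent fields are taken from the excerpts in this article, and the abbreviated sample bodies below are illustrative:

```python
import json

def offload_test_succeeded(response_body: str) -> bool:
    """True when the Response: JSON reports a delivered test event."""
    result = json.loads(response_body)
    return result.get("code") == 0 and result.get("eventSent") is True

# Abbreviated bodies modeled on the cmc.out excerpts in this article.
success = '{"code": 0, "stdOut": "Pipeline main started", "stdErr": "", "eventSent": true}'
failure = '{"code": 99, "stdOut": "Error running command, see logs for details", "stdErr": ""}'
```

A failure body carries a non-zero code (99 in the excerpt below) and no eventSent flag, so the same check covers both cases.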
The log file cmc.out also provides additional failure information to assist with troubleshooting:

2017-05-19 19:30:52.512 INFO [T-172] [com.ibm.apimgmt.resources.analytics.AnalyticsOutputsResource.analyticsoffloadTestPost] Response: {"code": 99, "stdOut": "Error running command, see logs for details", "stdErr": ""}
2017-05-19 19:30:52.514 WARNING [T-172] [com.ibm.apimgmt.resources.analytics.AnalyticsOutputsResource.analyticsoffloadTestPost] Logstash configure test command failed, see /var/log/logstashconfigure.out for more info.

You can verify that the target system has received the event by looking for an event that includes 5apimanagement_testevent as a value in the event record. To see examples of the test events that are generated for API, monitoring, log, and audit events, see Sample test events for analytics offload.

Below are sample log entries created in /var/log/cmc.out indicating a successful test:

2017-05-19 19:07:08.040 INFO [T-172] [com.ibm.apimgmt.resources.analytics.AnalyticsOutputsResource.analyticsoffloadTestPost] Response: {
  "code": 0,
  "stdOut": "Sending Logstash's logs to /opt/logstash/testlogs which is now configured via log4j2.properties\n[2017-05-19T19:06:56,722][INFO ][logstash.pipeline] Starting pipeline {\"id\"=>\"main\", \"pipeline.workers\"=>4, \"pipeline.batch.size\"=>125, \"pipeline.batch.delay\"=>5, \"pipeline.max_inflight\"=>500}\n[2017-05-19T19:06:56,779][INFO ][logstash.pipeline] Pipeline main started\n[2017-05-19T19:06:56,882][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9805}\n",
  "stdErr": "",
  "eventSent": true
}
2017-05-19 19:07:08.043 INFO [T-172] [com.ibm.apimgmt.api.util.ApiResponseHandler.log] 192.168.1.199: POST /v1/cloud/analyticsoutputs/test 200

      vii. Click Update to store the settings configured for offloading data to the Syslog server.
   e. Click the Save icon to collectively save your defined settings for enabling or disabling access to analytics, and for offloading data.

Procedure: Enabling syslog in the Management server

To enable syslog in the Management server, run the following command:

mgmt syslog set remote ip <Splunk_server_IP> port <portNumber>

For example:

management/APIConnect> mgmt syslog set remote ip 192.168.1.199 port 6001
syslog messages will now be sent to 192.168.1.199:6001
Reloading syslog configuration
syslogng start/running, process 28025

The command enables syslog and sends events to the Splunk server at 192.168.1.199, port 6001. Other syslog-specific commands include:

• mgmt syslog del config: Delete the syslog configuration.
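Before, or after, running the mgmt syslog command, it can save a troubleshooting round to confirm that the collector's port is reachable from a host that can see the Splunk server. A minimal Python sketch, assuming a TCP data input (a UDP input cannot be probed this way, since UDP sends succeed whether or not anything is listening); the host and port below mirror the example above:

```python
import socket

def collector_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a TCP connection to the syslog collector; True if it accepts."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative values from the example above; substitute your own.
# collector_reachable("192.168.1.199", 6001)
```

If the probe fails, check the Splunk data input configuration and any firewalls between the Management server and the collector before digging into cmc.out.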