Log files overview

Last updated: Thu, 02 Aug 2018 14:36:21 GMT
EZproxy offers institutions multiple customizable log formats to record user activity. These logs can be helpful in identifying levels of proxy usage, security concerns, and general operational details. Customization through directives allows EZproxy administrators to refine what information they receive and how often they receive it.

Log types

The log files generated by EZproxy provide information only about remote users' activity. The types of logs that EZproxy can generate are defined below. The settings described are the default settings for standalone EZproxy that are set in the config.txt file when EZproxy is first downloaded. Many of these options can be customized to fit the needs of your institution. For more information about customization options, see the tabs for each individual log and the related directives for these log types.

Audit logs

Audit logs are daily logs that contain information about your users' access to EZproxy, as specified by the conditions you set with the Audit directive. By default, the config.txt file you download with EZproxy is set to Audit Most, which records most login events, usage limits, and other events (see the Audit logs tab for more details). These log files are retained in a directory named audit, a subdirectory of the EZproxy installation directory; you cannot redirect these files to another directory. However, you can customize your audit logs by specifying which events you would like EZproxy to record and how long you would like EZproxy to retain these files. Audit events can be viewed by logging in to the EZproxy administration page, where you can search all files in the audit directory. They can also be viewed in your operating system by opening the audit folder within the EZproxy installation directory. The information included in these logs can be helpful in monitoring and resolving security issues.

EZproxy logs

EZproxy logs are monthly logs that contain large amounts of data about the information sent between EZproxy and all the database providers you have configured in your config.txt file. These logs are retained by default in the EZproxy installation directory and named ezpyyyymm.log. You can change the name of this log and the directory where it is retained, designate the periods of time over which an individual file collects information, and limit the type of information recorded by including certain directives in your config.txt file. The active log can be viewed and searched from your EZproxy administration page, and all EZproxy logs can be accessed directly from the EZproxy installation directory. The information included in these logs can be used to assess EZproxy usage and to evaluate and resolve potential security threats.

messages.txt logs

The messages.txt file records daily operational information, including each time EZproxy is started or stopped and any fatal or non-fatal errors. This log also contains any messages resulting from debugging scripts generated by the Debug directive when it is included in your config.txt file. The messages.txt file is retained by default in the EZproxy installation directory.
You can change the name of this log, the directory where the information is retained, and designate the periods of time over which an individual file collects information by including the appropriate directives in your config.txt file. The active log can be viewed and searched from your EZproxy administration page, and all messages.txt files can be accessed from your EZproxy installation directory. These logs can be useful in troubleshooting problems with EZproxy and verifying system details.

SPU logs

Starting point URL (SPU) logs record information about each time a starting point URL is clicked. Standalone EZproxy is not configured to create SPU logs when first downloaded. When SPU logging is configured without specifying a directory where the logs should be saved, EZproxy stores these files in the EZproxy installation directory with the name spu.log. You can specify different locations for these logs and refine the information they collect with the same configuration options as EZproxy logs. These log files can only be viewed through your operating system's file viewer. This log can be used to determine how often remote users access certain resources.

Audit logs

Adding the Audit directive to your config.txt file directs EZproxy to create audit logs when specified events occur and to save these files in the audit subdirectory of the EZproxy installation directory. Individual files are named for the year, month, and day on which the events occurred (e.g., 20140512.txt). The most common use of the Audit directive is as follows:

Audit Most

This directive statement will create a log entry when any of the following events occur:

• A user is denied access to EZproxy
• A user successfully logs in to EZproxy
• A user makes a failed attempt to log in to EZproxy
• An intrusion attempt based on the IntruderIPAttempts or IntruderUserAttempts directive occurs
• General system activities, such as system startup, occur
• An unauthorized user attempts to access the administrative features of EZproxy
• An event resulting from the UsageLimit directive occurs

This is the default configuration in the config.txt file that is downloaded with EZproxy. For more details about further customization of the audit logs, see Audit. Audit logs can be accessed from the EZproxy admin page at any time; logs that have been purged as a result of the AuditPurge directive will not be accessible.

Suggested Configuration

By retaining the default Audit Most directive statement in your config.txt file, you will have the most commonly assessed security events recorded to your audit logs. To limit the amount of storage space your audit logs take up, the AuditPurge directive is set by default to 7, following the Audit statement, so that EZproxy deletes audit files after a specified period of time. This keeps only the audit logs for the current day plus the previous week and deletes any files older than 7 days. OCLC suggests increasing the AuditPurge time period to 180, so that your config.txt file looks as follows:

Audit Most
AuditPurge 180

This change to the AuditPurge directive will cause EZproxy to retain the audit file for the current day plus the audit files of the previous 180 days, and to delete any file older than 180 days. Retaining audit files for longer periods of time provides a larger pool of information for review should you need to access it.
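For instance, if your institution's reporting or security-review cycle requires a full year of history, you could raise the retention period further. A minimal sketch, assuming a hypothetical 365-day retention policy (the value 365 is illustrative, not an OCLC recommendation):

Audit Most
AuditPurge 365

As with the 180-day example above, EZproxy would retain the audit file for the current day plus the previous 365 days and delete anything older, at the cost of additional disk space.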
You can increase or decrease the purge number as you see fit, either to save disk space or to ensure that you have the data you need should reporting requests or a security breach require you to reference it. When determining how long to set your AuditPurge period, consult with your IT department to ensure that your retention schedule complies with institutional policies for security and reporting purposes.

ezproxy.log

EZproxy will automatically generate EZproxy logs and save them in a file named ezproxy.log in the directory where EZproxy is installed. The default directive used to format the data collected in the EZproxy log is as follows:

LogFormat %h %l %u %t "%r" %s %b

This will generate data such as the following in your EZproxy log:

132.174.1.1 - - [14/Mar/2014:09:39:18 -0700] "GET http://www.somedb.com:80/index.html HTTP/1.0" 200 1234

The fields in this line of data break down as follows:

• %h: The IP address of the host accessing EZproxy. Example value: 132.174.1.1
• %l: The remote username obtained by identd; if identd is not used, a - is recorded in your EZproxy log. Example value: -
• %u: The username or session identifier, based on other config.txt options. Example value: -
• %t: The date and time the request was made. Example value: [14/Mar/2014:09:39:18 -0700]
• "%r": The complete HTTP request sent to the remote server. This field is contained in quotation marks so it is parsed as one piece of data even though it contains spaces, since spaces are generally a signal that a new field of data is beginning. Example value: "GET http://www.somedb.com:80/index.html HTTP/1.0"
• %s: The HTTP numeric status of the request (see LogFormat Status Codes for more information about these numbers). Example value: 200
• %b: The number of bytes transferred. Example value: 1234

Note: If EZproxy is not able to collect the data denoted by a particular field, it will insert a dash for the missing information. For more details about these and additional fields that can be used to further customize this directive, see LogFormat.

Suggested Configuration

The default configuration in the standalone EZproxy config.txt file will provide you with monthly log files that contain data about all data transfers and requests sent through your instance of EZproxy. Depending on use levels, you may want to configure EZproxy to maintain daily rather than monthly log files so you can more quickly locate the information you need by date. This suggested configuration does not contain any LogFilters, but you may also want to consider whether including filters to limit the volume of data collected would make your EZproxy logs more manageable and provide more focused information. See LogFilter for more details about this option. The following configuration will change your EZproxy logs to daily files instead of monthly:

LogFormat %h %l %u %t "%r" %s %b
LogFile -strftime ezp%Y%m%d.log

messages.txt

EZproxy will automatically generate messages.txt logs and save them in files of the same name in the EZproxy installation directory.
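As noted above, you can change this log's name, location, and rotation period through config.txt. A minimal sketch, assuming your EZproxy version supports the MessagesFile directive with the same -strftime option shown for LogFile (MessagesFile is not covered in this overview, so confirm the directive name and syntax against the EZproxy directive documentation before use):

MessagesFile -strftime messages%Y%m.txt

With a configuration like this, EZproxy would start a new messages file each month (e.g., messages201811.txt) rather than appending indefinitely to a single messages.txt.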