Basic Directions for Working with Daily OPAC Log Files Using Microsoft Access


To work with the daily OPAC log files, your computer needs both Microsoft Access and a software tool for unzipping .gz files, such as 7-Zip for Windows computers.

Downloading, Saving, and Unzipping OPAC Log Files

1. Visit https://www.carli.illinois.edu/logs/ and choose which log file you wish to download to your computer. Note that both VuFind and WebVoyage log files are available for each date; the data from each OPAC is formatted differently (and needs different queries to interpret it).

2. Click on the file name for the log; a window should open prompting you to Open or Save the file. Choose to Save the file, and click OK. Browse to the location where you'd like to save the file on your computer.
• Your setup may vary; Google Chrome, for example, may automatically save the selected file to your default download location without prompting.

3. You now need to unzip the .gz file.
• If you are using 7-Zip, right-click on the file, select 7-Zip from the menu, and then "Extract Here" to unzip the file in the same location where the .gz file was saved.
• If you are using a different program, follow its directions for unzipping files.

4. It may take several minutes for the file to extract; the files are VERY large.

5. The extracted file will be named without a file extension; the log "date" will appear to be the extension.
• For example: access_log.20170123
Right-click on the file and choose to rename it. Add the file extension .txt at the end of the name and remove the period before the date.
• For example: VuFindlog20170123.txt or WebVoyagelog20170123.txt
You will receive the warning, "If you change a file name extension, the file might become unusable. Are you sure you want to change it?" Choose Yes. The file is now ready to import into Microsoft Access. (If you would rather script the extract-and-rename step, see the sketch at the end of this section.)

6. Create a new, blank desktop database. Save it to your computer with a name that designates the log file's date. You can import both a WebVoyage and a VuFind log file into the same desktop database, but you will need to create separate tables for each set of data and query them separately (those directions follow below).

7. Close the default Table1 that is created with your database; you do not need it. Once you close it, it will disappear.

Note that the maximum Microsoft Access database size is 2 GB; databases larger than that become unusable, so you will not be able to import and save multiple days' worth of full log files into the same database at the same time. Since the log files are large, you have a few options:

1) Create a separate database file for each day you wish to investigate; save that day's log files (both VuFind and WebVoyage) into the date-named database file.

2) Work with one day's log files, export all needed data into Excel (or another format), then delete the log file tables from Access and start again with newly imported log files.

3) Use one database file for your data importing and cleanup work, and a second database for querying/analysis. You'll then use Access's import/export functionality to transfer the cleaned data (limited to your library's transactions) from the first database into the second, so that you can reuse the first database for importing/cleaning a new set of logs.

Please let us know any questions about these options.
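The extract-and-rename steps above can also be scripted. Below is a minimal Python sketch, assuming Python 3 is available on your machine; the file names and the VuFindlog/WebVoyagelog naming pattern simply follow the examples above, and this is an optional convenience rather than part of the official directions.

```python
# Minimal sketch: extract a downloaded CARLI .gz log and write it out with
# a .txt extension so Access can import it. File names here are examples;
# adjust the prefix for WebVoyage logs.
import gzip
import shutil
from pathlib import Path

def extract_log(gz_path: str, prefix: str = "VuFindlog") -> Path:
    src = Path(gz_path)                   # e.g. access_log.20170123.gz
    inner = src.with_suffix("")           # strips ".gz" -> access_log.20170123
    date = inner.suffix.lstrip(".")       # "20170123"
    dest = src.parent / f"{prefix}{date}.txt"
    with gzip.open(src, "rb") as f_in, open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)   # stream; the logs are very large
    return dest

print(extract_log("access_log.20170123.gz"))
```

Streaming with shutil.copyfileobj avoids loading the whole (very large) file into memory, and no rename warning appears because the .txt name is written directly.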
Importing VuFind Logs into Microsoft Access

1. To import, select the External Data tab > Text File option.

2. Browse for the location where the VuFind log file is saved on your computer, make sure the radio button is set to "Import the source data into a new table in the current database," and then click OK.

3. The Import Text Wizard will open.

4. On the first screen, select "Delimited" and click Next.

5. On the second screen, switch the delimiter character to "Space" and click Next.

6. The third screen should list 11 fields. Scroll to see them all. Select each field in turn and adjust it as follows (a sketch of how these space-delimited positions map to the kept columns appears after the WebVoyage directions below):
• Field 1: rename the field AccessMethod
• Field 2: check the "Do not import field (Skip)" checkbox
• Field 3: check the "Do not import field (Skip)" checkbox
• Field 4: check the "Do not import field (Skip)" checkbox
• Field 5: rename the field DateTime
• Field 6: check the "Do not import field (Skip)" checkbox
• Field 7: rename the field SearchURL, and set the Data Type to "Long Text" (or "Memo")
• Field 8: check the "Do not import field (Skip)" checkbox
• Field 9: check the "Do not import field (Skip)" checkbox
• Field 10: check the "Do not import field (Skip)" checkbox
• Field 11: rename the field Browser
Click Next.

7. On the fourth screen, select "Let Access add primary key" and click Next.

8. Enter an appropriate table name in the "Import to Table" box; for example, VuFindLogYYYYMMDD, or simply VuFindLog if you're reusing your database and queries. Click Finish.
• Note that it may take several (10+) minutes for the data to import into Access. An average file contains over a million data rows.

9. A confirmation screen will notify you that your data was imported.

10. You may receive an additional "Import Errors" table that contains a number of "Unparsable Records." You can delete this table.

Narrowing the VuFind Log Dataset to Only Your Library's Results (Deleting the Other Libraries' Data)

When VuFind knows the patron's library affiliation, the SearchURL field will contain the letters vf-xxx, where xxx is your library's three-letter code in lowercase. If desired, you can delete the results that are not specific to your library by using text filters, as in the steps below. (An optional script-based alternative appears after these steps.)

1. Open your VuFind access log table.

2. For the SearchURL column, left-click the down arrow once and wait for the menu to open (it may take a few minutes).

3. Select "Text Filters" from the menu.

4. Select "Does Not Contain" from the Text Filters menu.

5. The Custom Filter window will open with the prompt "SearchURL does not contain".

6. Enter the text vf-xxx, where xxx is replaced with your library's three-letter code in lowercase. For example, Kishwaukee College's three-letter code is KIS, so they would enter vf-kis.

7. It may take several minutes for Access to filter your results. Scan through the results to double-check that your library's data is not included.

8. Click the "Last Result" navigation button to view the last row in the dataset. This may take several minutes.

9. Click to highlight the last row in the filtered dataset.

10. With the last row selected, use the scroll bar on the right side of the table view to scroll up to the first row in the dataset. Hold down the Shift key and click to select the first row of data; all rows in this filtered view are now selected. While still holding down the Shift key, right-click on the data and select Delete Record to delete these rows.

11. After a moment, Access will give the warning, "You won't be able to undo this Delete operation..." Click Yes. It will take several minutes for the data to be deleted. This is a good time to get a snack and a fresh cup of coffee/tea. (If your computer freezes up when deleting this many records at once, you can break the deletion into smaller chunks.)

12. With the records not matching your filter deleted, again click the down arrow for the SearchURL column.

13. Select "Clear filter from SearchURL".

14. The remaining results should all include vf-xxx, designating action by someone either searching directly in your local VuFind OPAC, or searching the I-Share catalog after beginning in your local catalog.

15. Go to the File tab and select Save to retain your changes.
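As an alternative to deleting rows inside Access, you could pre-filter the raw .txt file before importing, so that only your library's lines are ever loaded. The sketch below is a minimal, optional approach assuming the same Python 3 setup as the earlier example; vf-kis and the file names are placeholders for your own code and files. Keeping only the lines that contain your code is equivalent to Access's "Does Not Contain" filter followed by Delete Record.

```python
# Optional pre-filter (not part of the CARLI directions): keep only the
# log lines containing your library's vf-xxx code so the Access import
# and the row-deletion steps shrink dramatically.
LIBRARY_CODE = "vf-kis"  # replace with your own three-letter code

with open("VuFindlog20170123.txt", "r", encoding="utf-8", errors="replace") as src, \
     open("VuFindlog20170123-filtered.txt", "w", encoding="utf-8") as dest:
    for line in src:                 # stream line by line; the file is very large
        if LIBRARY_CODE in line:     # same substring test Access's filter applies
            dest.write(line)
```

You would then import the filtered file in step 2 above instead of the full log, and could skip the "Narrowing" steps entirely.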
Importing WebVoyage Logs into Microsoft Access

1. To import, select the External Data tab > Text File option.

2. Browse for the location where the WebVoyage log file is saved on your computer, make sure the radio button is set to "Import the source data into a new table in the current database," and then click OK.

3. The Import Text Wizard will open.

4. On the first screen, select "Delimited" and click Next.

5. On the second screen, switch the delimiter character to "Space" and click Next.

6. The third screen should list up to 30 fields (depending on how the Browser information is parsed). Scroll to see them all. Select each field in turn and adjust it as follows:
• Field 1: rename the field AccessMethod
• Field 2: check the "Do not import field (Skip)" checkbox
• Field 3: check the "Do not import field (Skip)" checkbox
• Field 4: check the "Do not import field (Skip)" checkbox
• Field 5: check the "Do not import field (Skip)" checkbox
• Field 6: rename the field DateTime
• Field 7: check the "Do not import field (Skip)" checkbox
• Field 8: check the "Do not import field (Skip)" checkbox
• Field 9: rename the field SearchURL, and set the Data Type to "Long Text"
• Field 10: check the "Do not import field (Skip)" checkbox
• Field 11: check the "Do not import field (Skip)" checkbox
• Field 12: check the "Do not import field (Skip)" checkbox
• Field 13: check the "Do not import field (Skip)" checkbox
• Field 14: rename the field Browser
• Fields 15-30: if you wish to keep the additional web browser data that is parsed into these separate fields, you can rename each field with an incrementing Browser number (Browser2, Browser3, and so on).
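For reference, and as promised in the VuFind directions above, here is a minimal sketch of the field mapping the import wizard performs for both formats. It is illustrative only: the 0-based positions below correspond to the 1-based wizard fields described above, the function names are hypothetical, and a malformed line (the kind the wizard routes to the "Import Errors" table) would not split cleanly.

```python
# Illustrative sketch of the space-delimited field layouts described above.
# Browser (user-agent) strings contain spaces, so in WebVoyage logs the
# browser data spills across fields 14-30; only the first piece is taken here.
def parse_vufind(line: str) -> dict:
    f = line.split(" ")                  # the wizard's "Space" delimiter
    return {"AccessMethod": f[0],        # wizard Field 1
            "DateTime":     f[4],        # wizard Field 5
            "SearchURL":    f[6],        # wizard Field 7
            "Browser":      f[10]}       # wizard Field 11

def parse_webvoyage(line: str) -> dict:
    f = line.split(" ")
    return {"AccessMethod": f[0],        # wizard Field 1
            "DateTime":     f[5],        # wizard Field 6
            "SearchURL":    f[8],        # wizard Field 9
            "Browser":      f[13]}       # wizard Field 14 (first browser piece)
```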