Mykonos Web Security 4.5.1 User Guide

This guide provides an overview of the Mykonos Web Security system, explains its functions, and includes directions for deploying and configuring the appliance for use in a network. It also outlines how to properly use the included software. Other Documentation Formats viii
1. Introduction 1 1.1. Overview ...... 1 1.1.1. Summary ...... 1 1.1.2. Why is it Unique as a Security Product? ...... 1 1.1.3. What Does the Software Do? ...... 1 1.1.4. What traffic does it inspect? ...... 2 1.1.5. How is it managed? ...... 2 1.2. Major Components ...... 2 1.3. Features ...... 3 1.4. Support ...... 3
2. Deployment Overview 4 2.1. Appliance Network Placement ...... 4 2.1.1. Between Firewall and Web Servers ...... 4 2.2. Options for Load-Balanced Environments ...... 5 2.3. SSL Traffic Consideration ...... 5 2.4. Limitations ...... 6 2.5. Hardware Requirements for Mykonos ...... 6
3. Appliance Installation 7 3.1. Mykonos Terminology ...... 7 3.2. Initial Setup ...... 7 3.2.1. Changing the Password ...... 8 3.3. First Time Configuration ...... 8 3.4. TUI Steps ...... 8 3.4.1. Network Configuration ...... 8 3.5. Licensing ...... 15 3.6. Creating a Test Configuration ...... 16 3.6.1. Initial Configuration and Testing ...... 17 3.6.2. Larger-Scale Testing ...... 17
4. Configuring Mykonos Web Security 18 4.1. Configuration Wizard ...... 18 4.1.1. Wizard: Backend Servers ...... 19 4.1.2. Wizard: Backend Server Configuration ...... 19 4.1.3. Wizard: SMTP Servers ...... 20 4.1.4. Wizard: Alert Service ...... 21 4.1.5. Wizard: Backup Service ...... 21 4.1.6. Wizard: Alerts via SNMP ...... 22 4.1.7. Wizard: Alerts via email ...... 23 4.1.8. Wizard: Confirmation Page ...... 24 4.2. Verify the Installation ...... 25 4.3. Health Check URL ...... 25 4.4. What's Next? ...... 25 4.5. Basic vs. Expert Configuration ...... 26 4.5.1. Global Configuration ...... 27 4.5.2. Services ...... 28 4.5.3. Processor Configuration ...... 28 4.5.4. Application Profiles ...... 29 4.6. Configuration Import / Export ...... 31 4.7. Configuring SSL ...... 32
4.7.1. Enabling SSL to the Client ...... 32 4.7.2. Enabling SSL to the backend ...... 32 4.8. Clustering ...... 32 4.8.1. Node Types ...... 33 4.8.2. Setting up Clustering ...... 33 4.8.3. Updating the Cluster ...... 37 4.9. High Availability ...... 37 4.9.1. Configuring HA ...... 38 4.9.2. Updating with HA ...... 41
5. Managing the Appliance 43 5.1. Restart/Shutdown ...... 43 5.2. System Updates ...... 43 5.3. System Statistics ...... 45 5.3.1. Master-Slave Mode ...... 49 5.4. Troubleshooting and Maintenance ...... 49 5.4.1. Managing Services ...... 49 5.4.2. Viewing Logs ...... 51 5.5. Backup and Recovery ...... 52 5.5.1. Restoring System Configuration from Backup ...... 53 5.5.2. Backing up and Restoring Console Data ...... 54
6. Processor Reference 56 6.1. Complexity Definitions ...... 56 6.2. Security Processors ...... 57 6.3. Honeypot Processors ...... 57 6.3.1. Access Policy Processor ...... 57 6.3.2. Ajax Processor ...... 60 6.3.3. Basic Authentication Processor ...... 62 6.3.4. Cookie Processor ...... 67 6.3.5. File Processor ...... 69 6.3.6. Hidden Input Form Processor ...... 71 6.3.7. Hidden Link Processor ...... 73 6.3.8. Query String Processor ...... 75 6.3.9. Robots Processor ...... 76 6.4. Activity Processors ...... 78 6.4.1. Custom Authentication Processor ...... 78 6.4.2. Cookie Protection Processor ...... 82 6.4.3. Error Processor ...... 83 6.4.4. Header Processor ...... 90 6.4.5. Method Processor ...... 97 6.5. Tracking Processors ...... 99 6.5.1. Etag Beacon Processor ...... 99 6.5.2. Client Beacon Processor ...... 100 6.6. Response Processors ...... 103 6.6.1. About Responses ...... 103 6.6.2. Block Processor ...... 104 6.6.3. Request Captcha Processor ...... 105 6.6.4. CSRF Processor ...... 123 6.6.5. Header Injection Processor ...... 128 6.6.6. Force Logout Processor ...... 128 6.6.7. Strip Inputs Processor ...... 129
6.6.8. Slow Connection Processor ...... 130 6.6.9. Warning Processor ...... 130 6.6.10. Application Vulnerability Processor ...... 132 6.6.11. Support Processor ...... 133 6.6.12. Cloppy Processor ...... 134 6.6.13. Login Processor ...... 136 6.6.14. Google Map Processor ...... 143
7. Reporting 145 7.1. On-Demand Reporting ...... 145 7.2. Report Schedules ...... 146 7.3. Scheduling a Report ...... 146 7.4. Report History ...... 148 7.5. Report Details ...... 148
8. Using the Security Monitor 157 8.1. Navigation Menu ...... 157 8.2. Dashboard ...... 157 8.3. Application Specific Views ...... 157 8.4. Search ...... 159 8.4.1. Additional Search Query Options ...... 161 8.5. Incidents, Sessions, and Profiles Charts ...... 162 8.6. Incidents Menu Filter Options ...... 162 8.7. Complexity Definitions ...... 163 8.8. Incidents List Widget ...... 164 8.8.1. Incidents List Details ...... 164 8.9. Incident Widget ...... 164 8.9.1. Incidents Widget Details ...... 165 8.10. Sessions Widget ...... 165 8.11. Profile List Widget ...... 166 8.11.1. Profiles List Details ...... 166 8.12. Profile Widget ...... 166 8.12.1. Editing a Profile ...... 168 8.12.2. Configuring Responses ...... 168 8.13. Location Widget ...... 169 8.14. Environment Details ...... 169
9. Auto-Response Configuration 170 9.1. Using the Editor ...... 172
10. Command Line Interface 175 10.1. Supported arguments ...... 176 10.1.1. Data Sources ...... 176 10.1.2. Formatting ...... 177 10.1.3. Example Report ...... 177
A. Incident Methods 178
B. Captcha Template 182
C. Incident Log Format 184
D. Open Source Software Licenses 185 D.1. Overview ...... 185 D.2. License List ...... 185
v D.2.1. Ahocorasick Algorithm Library ...... 188 D.2.2. AJAXUpload ...... 189 D.2.3. Ant / Ant Contrib ...... 190 D.2.4. Apache Jakarta Commons ...... 193 D.2.5. Apache XML Security ...... 196 D.2.6. AsyncAppender Logback ...... 199 D.2.7. Base64 ...... 206 D.2.8. CodeMirror ...... 207 D.2.9. Collectd ...... 207 D.2.10. DataTables ...... 214 D.2.11. Crypt ...... 215 D.2.12. DirBuster ...... 221 D.2.13. Django Web Framework ...... 222 D.2.14. EditArea ...... 223 D.2.15. FAMFAMFAM Flag Icons ...... 224 D.2.16. FAMFAMFAM Silk Icons ...... 224 D.2.17. Fatcow Hosting Icons ...... 225 D.2.18. FreeTTS ...... 225 D.2.19. MaxMind GeoIP Java API ...... 226 D.2.20. MaxMind MaxLite City ...... 233 D.2.21. Google Data API ...... 233 D.2.22. Groovy ...... 236 D.2.23. HornetQ ...... 239 D.2.24. Icojam Icons ...... 243 D.2.25. Icons-Land Icons ...... 243 D.2.26. iText-2.1.7 ...... 244 D.2.27. Jasper Reports ...... 250 D.2.28. Java ...... 252 D.2.29. JavaCC ...... 257 D.2.30. JSONXML ...... 257 D.2.31. Jcaptcha ...... 264 D.2.32. JCaptcha FreeTTS extension ...... 271 D.2.33. JFreeChart ...... 278 D.2.34. Jpam ...... 281 D.2.35. jqGrid ...... 284 D.2.36. JQuery ...... 285 D.2.37. JQuery AJAX Upload Plug-In ...... 285 D.2.38. JQuery - Autotab Plug-In ...... 286 D.2.39. jquery-cluetip ...... 286 D.2.40. JQuery - ContextMenu Plug-In ...... 287 D.2.41. jquery.cookie ...... 287 D.2.42. JQuery iPhone-Style Checkboxes Plug-In ...... 288 D.2.43. JQuery jqDnR ...... 288 D.2.44. jquery.layout ...... 289 D.2.45. JQuery MultiSelect ...... 289 D.2.46. JQuery qTip Plug-In ...... 290 D.2.47. Jquery Search Filter ...... 290 D.2.48. jquery-timepicker ...... 291 D.2.49. JQuery - TreeTable Plug-In ...... 291 D.2.50. JQuery UI ...... 292 D.2.51. jshash ...... 292
vi D.2.52. JSON-Min ...... 293 D.2.53. JSON-Simple ...... 294 D.2.54. JSONXML ...... 297 D.2.55. jsTree ...... 304 D.2.56. Jython ...... 305 D.2.57. logback ...... 308 D.2.58. mail.jar ...... 315 D.2.59. MD5.js ...... 320 D.2.60. MedicalSeries Icons ...... 321 D.2.61. MemCached ...... 321 D.2.62. MemCacheDB ...... 321 D.2.63. Mod Security Rule Set ...... 323 D.2.64. MultiPartFormOutputStream ...... 327 D.2.65. netty ...... 330 D.2.66. Nginx ...... 333 D.2.67. openQRM Icons ...... 334 D.2.68. OpenSSL ...... 341 D.2.69. paramiko ...... 343 D.2.70. PostgreSQL ...... 350 D.2.71. PostgreSQL JDBC Driver ...... 350 D.2.72. Public Suffix List ...... 351 D.2.73. Python Paste ...... 356 D.2.74. Raphael JS ...... 357 D.2.75. Rhino JS ...... 357 D.2.76. RRDtool ...... 365 D.2.77. Sarissa ...... 367 D.2.78. Saxon ...... 374 D.2.79. Scientific Linux ...... 382 D.2.80. SLF4J ...... 386 D.2.81. SNMP4j ...... 387 D.2.82. Apple Festival Icon Set ...... 390 D.2.83. SpringFramework ...... 391 D.2.84. SpyMemCachedClient ...... 394 D.2.85. SWFObject ...... 394 D.2.86. Tomcat ...... 395 D.2.87. URLEncoding ...... 398 D.2.88. WebOb ...... 399 D.2.89. WhalinMemCachedClient ...... 400 D.2.90. YUI Compressor ...... 400 D.2.91. YUI ...... 401
Other Documentation Formats

The Mykonos Web Security User Guide is published in HTML, but it can also be downloaded in the additional formats below. If you are reading a non-HTML version, you may ignore this section.
Table 1. Available User Guide Formats

• PDF (.pdf)
• eBook (.epub)
• Text (.txt)
Chapter 1.
Introduction
1.1. Overview
1.1.1. Summary Mykonos Web Security (MWS) secures websites against hackers, fraud, and theft. Its web intrusion prevention system uses deception to detect, track, profile, and block hackers in real time. Unlike signature-based approaches, Mykonos Web Security is the first technology that proactively inserts detection points to identify attackers before they do damage, without any false positives. Mykonos goes beyond the IP address to track the individual attacker, profile their behavior, and deploy countermeasures. With Mykonos Web Security software, administrators are liberated from writing rules, analyzing massive log files, or monitoring yet another console. Mykonos neutralizes threats as they occur, preventing data loss and saving companies millions of dollars in fraud or lost revenue.
1.1.2. Why is it Unique as a Security Product? The software uses deception-based techniques that make attacking a website protected by Mykonos Web Security more time consuming, tedious, and costly. The software inserts detection points into your web server's output to browsers. The detection points do not affect your web server (it never sees them), nor do they appear on user screens. The detection points are monitored and stripped from the requests coming back from the user's browser. Any change to a detection point is an indicator of hacking attempts; this triggers the deception response. Would-be hackers are deceived into wasting time attempting penetration using 'dummy' information, and so do not continue exploring elsewhere for security holes.
Mykonos sits between your web servers and the outside world. It adds detection points to outbound web traffic and removes them from inbound web traffic; thus, it works with any web-hosting system. There is no need to alter a single line of code on the website or web application. Annoying false positives are eliminated; MWS only looks at points it has inserted. Legitimate users never see these and have no reason to modify them; thus when a detection point is triggered, there is no chance it is a false positive; someone is probing.
1.1.3. What Does the Software Do? There are four main phases within Mykonos Web Security.
Table 1.1. The phases of Mykonos Web Security Step 1: Detect Mykonos uses tar traps to identify attackers, without false positives, before threats materialize into attacks. These detection points reveal when attackers are doing reconnaissance on your site, looking for vulnerabilities to exploit.
Step 2: Track Mykonos uses persistent tokens and fingerprinting to track not only IPs but also browsers, software, and scripts.
Step 3: Profile Mykonos creates a smart profile of the threats, helping you understand the threat and determine the best way to respond.
Step 4: Respond After detecting, tracking and profiling the threat, Mykonos provides you with many block, warn or deceive options.
1.1.4. What traffic does it inspect? Mykonos Web Security software inspects only HTTP and HTTPS traffic. It functions as a reverse proxy.
1.1.5. How is it managed? The system logs incidents to a database of attacker profiles, and exposes them to the security administrators through a web-based Monitoring Console. System administrators can apply automated abuse-prevention policies, or respond manually.
1.2. Major Components Major system components include: • The HTTP/HTTPS Gateway Proxy for accepting and translating raw traffic and interacting with the defensive security layer. This is core to having the system serve as an in-line proxy. This layer is also responsible for SSL encryption, decryption, and compression.
• The Security Engine and Profile Database, core components for managing all aspects of run-time security.
• The Response Rules Engine for automating responses to in-violation behavior in real time.
• A Web Monitoring Console that provides real-time monitoring, system configuration, reporting and charting.
• Processors - objects that contain specific security instructions for the Security Engine as well as logic to detect and interpret malicious behavior.
1.3. Features Abuse Detection Processors A library of HTTP processors that implement specific abuse detection points in application code. Detection points identify abusive users who are trying to establish attack vectors such as SQL injection, cross-site scripting, and cross-site request forgery.
HTTP Capture Captures, logs, and displays HTTP traffic for security incidents.
Abuse Responses Enables administrators to respond to application abuse with session-specific warnings, blocks, and additional checks.
Abuse Profiles Maintains a historical profile of known application abusers and all of their malicious activity against the application, for analysis and sharing.
Tagging and Re-identification Enables application administrators to re-identify abusive users and apply persistent responses, over time and across sessions.
Policy Expressions Simple expression syntax for writing automated, application-wide countermeasures for the system policy engine.
Email Alerts Sends alert emails when specific incidents or incident patterns occur.
Monitoring Console A web-based monitoring and analysis interface that allows administrators to drill into application sessions, security incidents, and abuse profiles, as well as manage and monitor manual and automated responses.
Web-based Configuration Browser-based configuration interface for all system configuration and deployment options.
Software and Hardware Delivery Support Distributed as VM image, or drop-shipped as a pre-built hardware appliance on HP hardware.
SSL Inspection Passive decryption or termination.
Multi-application Protection Single system processes and secures traffic for multiple application domains.
1.4. Support For any issues or questions contact Juniper Support:
• 1-888-314-5822 (toll free, US & Canada)
• Juniper Support Contact Information1
For those outside the US and Canada, there are regional numbers available through the above URL.
Chapter 2.
Deployment Overview The Mykonos Web Security (MWS) appliance processes all inbound web requests and outbound web responses. Outbound responses are modified in ways that are invisible to the average user; inbound requests are checked to see if the modified responses have been touched in any way. Such touching is suspicious and indicates a possible hacker. Due to its focus on web applications, MWS only accepts HTTP/HTTPS traffic and is normally placed between a load balancer and your web applications. Topologically, the best way to view the MWS Appliance is as a web reverse proxy server. Because MWS is a proxy rather than an in-line network device, upstream devices can route around it, making the design resistant to failure of the MWS hardware.
2.1. Appliance Network Placement
2.1.1. Between Firewall and Web Servers MWS acts as a reverse proxy and actively manipulates traffic between the protected web application and the Internet. It is deployed between the protected web server and the last system which can alter user-facing traffic. This location gives MWS full visibility into the HTTP traffic destined for the web servers (including any errors that may have been caused by authentication failures), and lets it inject and strip out any code it uses in protecting the application.
This topology has the added benefit of minimum impact on internal network bandwidth.
The network placement requirements also include:
• MWS only processes HTTP and HTTPS traffic, therefore it must live behind a device that can separate Application Layer (Layer 7) traffic.
• In order to prevent an MWS problem from impacting a protected application, the upstream device (i.e., the router or load balancer) must perform Health Check monitoring on MWS over HTTP. If the Health Check fails, the load balancer or Layer 7 router should pass traffic directly to the protected application servers, rather than to the MWS.
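The bypass behavior described above can be sketched as a simple decision on the probe's HTTP status code. The probe URL, hostnames, and the script itself are illustrative assumptions, not part of MWS; the actual health-check URL is configured on the appliance (see "Health Check URL" in Chapter 4):

```shell
# Sketch of the failover decision an upstream load balancer applies.
route_for() {
  # $1 = HTTP status code returned by the MWS health check
  if [ "$1" = "200" ]; then
    echo "mws"              # healthy: route traffic through MWS
  else
    echo "backend-direct"   # failed: bypass MWS, send traffic to the app servers
  fi
}

# A load balancer's probe might obtain the status with something like:
#   status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 http://mws.internal/health)
route_for 200   # prints "mws"
route_for 000   # prints "backend-direct"
```

Most hardware load balancers implement this logic natively; the script only illustrates the required behavior.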
The actual implementation depends on your specific network topology. The first figure shows the MWS deployed in its simplest form, as a reverse proxy connected to a load balancer. The following figure shows a more complex environment with clustered web servers and clustered Mykonos appliances.
2.2. Options for Load-Balanced Environments MWS can serve as a load balancer for HTTP and HTTPS web traffic; however, we recommend a dedicated hardware solution in that capacity. Dedicated load balancers are optimized for that role and provide higher overall performance.
2.3. SSL Traffic Consideration MWS includes SSL decryption capabilities to give it visibility into all of the protected application's traffic. MWS supports two modes: Passive Decryption and SSL Termination.
In Passive Decryption mode, the MWS decrypts requests for processing, then re-encrypts them before sending them on to the application server. HTTPS responses to the user follow the same process, where they are decrypted, processed, and re-encrypted before returning to the user.
In SSL Termination mode, the appliance serves as an SSL termination point. It decrypts incoming HTTPS traffic, processes it, then proxies the decrypted requests on to the application. Responses to the user are received unencrypted from the application server, processed, encrypted, and then passed to the user.
Details of SSL configuration are covered fully in the Setup and Configuration sections of this document.
Note You will need a copy of the certificate on the MWS in order to process SSL traffic.
2.4. Limitations • MWS only accepts HTTP and HTTPS traffic, protocol versions 1.0 and 1.1.
• It is not a network firewall and should not be an edge device.
• MWS does not currently support redundancy for network traffic, including link aggregation, or fail open hardware devices.
• MWS does not support hardware fail open capability itself and should not be physically in line with protected application servers.
• MWS does not support link aggregation or trunking; it cannot be connected to two network devices that assume each other's IP addresses in an Active/Passive redundant network configuration.
2.5. Hardware Requirements for Mykonos MWS records suspicious activity to disk by default and requires fast and reliable disk access. The volume of disk access can increase dramatically when extended logging is turned on. MWS can be installed directly on a suitable server, or as a virtual machine under VMware. When deploying in a virtual environment, be sure the selected disk is logically close enough to the virtual machine that it can maintain the required throughput even when backups or other disk-intensive operations are underway.
Table 2.1. Hardware Requirements

            Virtual                                   Physical
            Minimum           Recommended             Minimum              Recommended
CPU         2 vcores, >2 GHz  4 vcores, >2.4 GHz      2 cores, >2 GHz      4 cores, >2.4 GHz
RAM         8 GB per VM       16 GB per VM            8 GB                 16 GB
Disk Size   30 GB             60 GB                   30 GB                60 GB
Disk I/O    Tier 2 or better  Tier 2 or better        Directly-attached,   Directly-attached,
                                                      5400 RPM drives      7200 RPM drives or better
These are minimum specifications and should provide sufficient performance for most applications. However, application traffic and server load can vary widely. Your site may require higher performance hardware, or virtual hardware, to adequately handle the load.
A Mykonos deployment configured for high availability requires a 10 Gb Ethernet connection on each platform.
Chapter 3.
Appliance Installation MWS is a software appliance (operating system, utilities, and application software) that can be installed directly on customer-provided hardware from the supplied ISO image, or as a virtual machine. It can also be installed in a cloud environment. For hardware installation, please see the documentation that came with your hardware before burning the ISO to CD or DVD media. For a virtual machine installation, please follow VMware's procedures to install the Mykonos .ova file through the vSphere application. For a cloud installation, please see your vendor's documentation.
3.1. Mykonos Terminology Appliance The Mykonos software/hardware system.
Application The web server program, whether it is Apache, JBOSS, Microsoft, or other web serving software.
GUI Graphical User Interface.
HA High Availability, a configuration that aims to reduce the chance the entire system fails.
MWS Mykonos Web Security, the name of the product.
TUI Text User Interface, usually invoked on the command line via "sudo setup".
3.2. Initial Setup Initial configuration is done through the console in a limited shell. Once the MWS has been initialized for the first time, the administrator can log into a Web console to finish the initial setup. Once the setup is complete, the MWS will be protecting your applications. You must use the direct console interface to configure the appliance IP address. Once the system has an IP address, you can use SSH to connect via port 2022.
Use the SSH command:

    ssh -p 2022 mykonos@<appliance IP address>
Default login credentials for the console are:
• User: mykonos
• Password: mykonosadmin
Important You should change the default login and password immediately after installation.
3.2.1. Changing the Password The system password can only be changed from the underlying Linux command line. To do this, connect via the console or SSH. You will see the setup utility screen; navigate to 'Quit' and you will exit to the shell. Type 'passwd' and follow the prompts.
3.3. First Time Configuration Basic configuration of the MWS is straightforward. The following steps are executed via the console and the Text User Interface (TUI):
• Set the host name
• Set up networking
• Restart networking
• Initialize the appliance
At this point, the web interface should be active. From it, you will:
• License the appliance
• Perform a basic appliance test
• Run the installation wizard
3.4. TUI Steps The following steps are performed through the Text User Interface. This is initially done via the console, however the TUI is available through SSH once the Appliance is operating.
3.4.1. Network Configuration The minimum requirements for the appliance's network setup are to set a hostname and configure a network interface.
3.4.1.1. Setting the Hostname To set the hostname, first enter the TUI and select "Network Configuration" from the menu. From there, open the "Set Hostname" menu and enter the name by which you want the appliance to be known.
3.4.1.2. Interface Configuration Once the Hostname is set, open the "Configure Interfaces" menu and enter the network settings for the appliance's network interface.
• Interface - Select network interface
• Use DHCP - Check to use DHCP; uncheck for fixed IP address
• IP Address - Specify a fixed IP address (if DHCP is unchecked)
• Netmask - Specify the netmask
• Gateway - Specify network gateway IP address
• On Boot - Select to start the interface at boot time
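For reference, a static setup entered through these menus corresponds to interface settings like the following. The values are illustrative (the IP address is the example used later in this chapter), and the TUI writes this configuration for you; it is shown here in the ifcfg style of the appliance's underlying Scientific Linux system:

```shell
# Illustrative /etc/sysconfig/network-scripts/ifcfg-eth0 equivalent
# of the TUI's "Configure Interfaces" settings (static address, no DHCP):
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.10.10.104
NETMASK=255.255.255.0
GATEWAY=10.10.10.1
ONBOOT=yes
```

Do not edit such files by hand on the appliance; always use the TUI so the settings survive updates.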
3.4.1.3. Set DNS If you set a static IP address, you will need to specify a DNS server. Select DNS settings from the network menu, and enter appropriate values.
3.4.1.4. Restart the network After the Hostname and network interface information has been set, you must restart networking to have the changes take effect.
3.4.1.5. Initialize the Appliance Now you are ready to initialize the appliance's services. Open the TUI menu and select "Initialize Appliance" from the menu. You will be presented with four choices:
• Standalone: A single MWS is being used.
• Master: An MWS that both processes traffic and controls other MWS appliances.
• Dedicated Master: This MWS only directs other MWS appliances; it requires at least one appliance in "Traffic Processor" mode to protect a site.
• Traffic Processor: This MWS is a slave; it only processes traffic and requires at least one Master or Dedicated Master to protect a site.
After you make your selection, the appliance typically takes three to five minutes to initialize, depending on available resources. Once initialization is complete, you will have access to the web interface to finish the initial configuration.
Note Initialization sets the system to a "default" state and will overwrite any existing configurations. If this is the first time you're configuring the Appliance it will not be an issue. However, if you are rerunning the initialization on an existing system, be aware this will reset it to defaults, deleting any existing data on the Appliance. For existing systems, see the section, 'Configuration Import / Export'.
3.4.1.6. Verify Connectivity After all initialization steps have been performed, verify that all network settings are correct and that the appliance can be reached from the network. Navigate via HTTPS to the IP address or hostname assigned to the appliance, specifying port 5000.
For example: • https://10.10.10.104:5000
• https://my-mws-hostname:5000
If you see the following dialog, network settings are correctly configured.
3.5. Licensing In order to complete MWS configuration, you will need to install the license for your product. Use a web browser to connect to your appliance on port 5000 and log in using the administrator's credentials.
Note The configuration URL is: https://<appliance IP or hostname>:5000
Examples: • https://10.11.12.13:5000
• https://mws.mydomain.com:5000
Go to the Licensing section and follow the prompts.
Enter the license key supplied by your Sales Engineer in the Add a New License field.
Click Add.
Review the Terms of Service.
Click Yes on "I agree to the terms of service."
If the license validation step fails, check the network settings, particularly proxy settings for the network. Your MWS has to reach the outside world to contact the licensing server.
3.6. Creating a Test Configuration MWS acts as a reverse proxy and is deployed between the protected web server and the load balancer. This location gives MWS full visibility into the HTTP traffic destined for the web servers (including any errors that may have been caused by authentication failures), and lets it inject and strip out any code it uses in protecting the application.
With this topology, the web server remains in production, at its normal (private) IP address. You are creating a small 'island' of system(s) which access the web server through MWS; all production traffic continues to flow in the normal manner. Your MWS appliance is set up as it will be for production; this makes it easy to go live when you are ready.
3.6.1. Initial Configuration and Testing One (or more) test systems must be configured so that they see the web server at the IP address of the MWS appliance, 172.25.1.120. This can be done with either a local split-DNS or the local Hosts file.
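For the Hosts-file approach, the entry on a test system might look like the following. The hostname is illustrative; 172.25.1.120 is the MWS address from the example above:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) on a test system:
# resolve the protected site's name to the MWS appliance instead of the
# web server's real (private) address.
172.25.1.120    www.example.com
```

Remember to remove this entry when testing is complete, or the test system will keep routing through the MWS address.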
3.6.2. Larger-Scale Testing When you are ready to scale up testing, your split DNS can be configured so that outside users 'see' the web server directly, but all inside users (i.e. employees) receive the MWS IP address when accessing the web server. This lets you thoroughly test and evaluate before going live.
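One way to sketch such a split DNS is with BIND views. This fragment is purely illustrative and not an MWS feature; the zone names, file names, and address ranges are assumptions:

```
// named.conf fragment: inside users resolve the site to the MWS appliance,
// outside users resolve it to the web server directly.
view "internal" {
    match-clients { 10.0.0.0/8; };      // employee networks
    zone "example.com" {
        type master;
        file "db.example.com.internal"; // maps www to the MWS, e.g. 172.25.1.120
    };
};
view "external" {
    match-clients { any; };             // everyone else
    zone "example.com" {
        type master;
        file "db.example.com.external"; // maps www to the production web server
    };
};
```

Any DNS server that supports per-client answers can achieve the same effect; BIND views are just one common implementation.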
Chapter 4.
Configuring Mykonos Web Security The Web interface is used for system configuration, as well as monitoring and reporting. The initial installation required you to access this interface to license MWS and bring your appliance on-line. You will use this interface for nearly all configuration options.
The configuration URL is: https://<appliance IP or hostname>:5000
Examples: • https://10.11.12.13:5000
• https://mws.mydomain.com:5000
Note Log into the Web interface using the "mykonos" account. If you have not changed the password, the default is "mykonosadmin" and we strongly recommend you change it.
4.1. Configuration Wizard The Configuration Wizard gives you a simple way to configure the most commonly used basic features of the MWS Appliance, including defining one or more web applications to protect, alerting, and backups. During the initial configuration, the Appliance was given network connectivity, received its license, and was initialized. However, it needs to have one or more Backend Servers (web applications) defined in order to protect live traffic.
The Wizard walks you through setting up Backend Servers, SMTP Settings, Alerts, and Backups. When you've completed the Wizard, it will show you a confirmation page and suggest additional steps you may wish to perform, such as pointing your Load Balancer to the Appliance.
Warning Upon completion of the Wizard, PLEASE make a note of your backup encryption key. If you lose this key, NOBODY, not even Mykonos Support, can retrieve the information contained in your backups!
Note When using the system's default mail server, users should set a valid hostname and ensure that the mail configuration observes all best practices for setting up a mail server.
The Wizard has a minimum of 6 steps. The actual number of steps may increase depending on your choices, such as defining multiple applications to protect.
4.1.1. Wizard: Backend Servers Mykonos Web Security functions as a reverse proxy, sitting in front of your web application. MWS can protect an arbitrary number of application servers. In order for MWS to process traffic, you must specify at least one backend server to which it will proxy traffic. The default is 1 and should suffice for most applications. If the MWS will serve as a software load balancer, rather than using a dedicated hardware solution, multiple servers can be configured at this time.
4.1.2. Wizard: Backend Server Configuration Each Backend Server you configure here requires the following information: • Server Name: A unique name that MWS uses to identify this server. The name can include any alphanumeric character, "-", and "_", with no whitespace. Do not use the server's Fully Qualified Domain Name (FQDN) or a URL. If you are using VMware, you may wish to use the same name here as you assigned in VMware, to avoid confusion. This is not necessary, however.
• Server Address: Specify the server's IP. MWS does not support IPv6 addressing at this time.
• HTTP Port: Usually port 80.
• HTTPS Port: Usually port 443.
• Weight: Defaults to 1. This value is used when the MWS is serving as a software load balancer and represents the relative "weight" the server has for balancing purposes.
• Backup: Defaults to NO. This only applies if you are using the MWS as a software load balancer and designates this server as a backup.
4.1.3. Wizard: SMTP Servers MWS can email alerts to your administration team. While the appliance can serve as its own mail server, we recommend that you use a valid mail server for your network.
SMTP Configuration supports the following fields: • SMTP Default Sender: This will be the "From:" address in email alerts. The address should be valid for your network so alert mails won't be mis-categorized as spam.
• SMTP Server Address: Defaults to localhost. Set it to the IP address or FQDN of your mail server if you are using an off-board mail server as recommended.
• SMTP Server Port Number: Defaults to 25. Set it to the port your mail server is listening on.
• SMTP Username: Defaults to blank, and may remain blank if you are using the onboard server. Set it to a user with valid access to the mail server.
• SMTP Password: Defaults to blank, and may remain blank if you are using the onboard server. Set it to the password for the SMTP username supplied above.
4.1.4. Wizard: Alert Service The MWS can send alerts to an SNMP server or via email to appropriate personnel. The alert service is optional, and defaults to "No." If you choose not to activate alerts, the Wizard will skip to the next section.
4.1.5. Wizard: Backup Service Mykonos Web Security can perform regular, scheduled backups of all of its data. We strongly recommend that you turn backups on. You can select backup via FTP or SSH.
The Backup Service lets you specify the following fields: • Frequency: How often backups are sent off-board
• Retention: Number of days to keep off-board backups
• FTP Service: Whether to use FTP. If set to YES, the server, username, and password fields are required.
• SSH Service: Whether to use SSH. If set to YES, the server, username, and password fields are required.
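As a rough illustration of how the Retention setting works, the sketch below keeps only backups younger than the configured number of days. The pruning logic is internal to the appliance; this is just a conceptual model with made-up dates:

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today, retention_days):
    """Return the backup dates that fall inside the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in backup_dates if d >= cutoff]

# Hypothetical backups taken 0, 3, 10, and 45 days ago:
today = date(2012, 6, 30)
dates = [today - timedelta(days=n) for n in (0, 3, 10, 45)]
kept = backups_to_keep(dates, today, retention_days=14)
```

With a 14-day retention, only the backups from 0, 3, and 10 days ago survive; the 45-day-old backup is outside the window.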
4.1.6. Wizard: Alerts via SNMP If you choose to activate alerts, the Wizard will give you the option of setting the number of SNMP servers to alert and the number of email addresses to which messages will be sent. Both default to 0.
4.1.6.1. SNMP Settings If you have chosen to activate Simple Network Management Protocol (SNMP) alerts, the Wizard will walk you through setting up each of the addresses to send SNMP alerts to. You will need to provide an IP address or FQDN, and port number, for each server.
4.1.7. Wizard: Alerts via email If you have chosen to activate email alerts, the Wizard will walk you through setting up each email address to which the Appliance will send alerts.
Email alerts require the following fields: • Name: A common name for referencing this email address
• Email Address: Email address
• Minimum Severity: Minimum severity level to trigger an email alert to this address
• Shift Start: Start time for this address in 24 hour format.
• Shift End: End time for this address in 24 hour format.
You are also given the option of having alerts sent on the weekend. Note that you can build complex schedules by creating multiple entries for the same person. For example, [email protected] could have an entry named admin-weekday that specified 8 AM to 5 PM, M-F, and a second entry named admin-weekend that specified 6 AM to 6 PM.
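The shift and severity logic above can be sketched as a simple predicate. The numeric severity scale and parameter names here are hypothetical stand-ins for whatever values you configure in the Wizard:

```python
def should_alert(severity, min_severity, hour, shift_start, shift_end,
                 is_weekend=False, weekends_enabled=False):
    """Return True if an alert of `severity` should go to an address
    whose shift runs from shift_start to shift_end (24-hour clock)."""
    if is_weekend and not weekends_enabled:
        return False
    if severity < min_severity:
        return False
    return shift_start <= hour < shift_end

# An 8 AM to 5 PM weekday entry, like the admin-weekday example above:
weekday_hit = should_alert(severity=4, min_severity=3, hour=9,
                           shift_start=8, shift_end=17)
```

A second entry with its own shift window would be evaluated the same way, which is how multiple entries combine into a complex schedule.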
Note Configuration of advanced features, such as encryption keys, are not available in the Wizard.
4.1.8. Wizard: Confirmation Page Once you have completed the Wizard's main steps, you will see the confirmation page. It displays the URL you can use to confirm the appliance is performing correctly, along with the secret key the appliance generated for your backups. Whether MWS is storing backups locally or off-site, you MUST have this key. Note that the key is actually a link; you must click on it and confirm acceptance of the key.
Important Record the secret key and keep it someplace safe. If you run through System Initialization again (See section 3) it will create a new key and you will lose access to your backups if you haven't recorded the old key. If you lose this key, Technical Support will not be able to recover it or your backups.
4.2. Verify the Installation In order to verify that your MWS appliance is processing traffic, use the following URL to access the appliance honeypot and confirm that it replies with a fake .htaccess. http://
The appliance should reply with something similar to the following. Note that the actual fake .htaccess file may not look exactly like this example.
Congratulations! Your initial MWS configuration is complete and the appliance is ready to start protecting your applications.
4.3. Health Check URL The Health Check URL lets an external system (typically a load balancer) confirm that the MWS system is operating properly. Your MWS system will generate a file name consisting of an arbitrary string of characters; make a note of it. If an HTTP request is sent to MWS for this file name, MWS will return 200 OK, with a code in the body of the message. The responses are below.
Table 4.1. Health Check responses and corresponding meanings.
• No response: MWS is offline
• 200 OK, plus OK: MWS is fully functional and is protecting your web sites
• 200 OK, plus DISABLED: MWS is running, but has been disabled or the license has expired
• 200 OK, plus STAND BY: MWS is waiting on an external resource. The contents of the response body will provide additional information
The format of the HTTP request should be: http://MWS_fullyqualifieddomainname_or_IPaddress/filenamegeneratedbyMWS
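A load balancer's health probe can map the responses in Table 4.1 to an up/down decision. The sketch below interprets a (status, body) pair; the helper name is ours, not part of MWS:

```python
def interpret_health(status, body):
    """Map an MWS health-check response to a state per Table 4.1.
    `status` is the HTTP status code, or None if there was no response."""
    if status is None:
        return "offline"
    if status == 200:
        text = body.strip().upper()
        if text.startswith("OK"):
            return "protecting"
        if text.startswith("DISABLED"):
            return "disabled or license expired"
        if text.startswith("STAND BY"):
            return "waiting on an external resource"
    return "unknown"

# A balancer would typically keep a node in rotation only while the
# body reports OK:
state = interpret_health(200, "OK")
```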
4.4. What's Next? Mykonos Web Security is now configured to secure one web server application. If you have multiple web server applications, or would like to configure the system differently on a per-page basis, please refer to Chapter 6 Additional Configuration Options to add or edit applications.
Your system is now installed and has its basic configuration parameters set. In the next section, you will learn how to protect multiple web sites and modify your protection settings.
4.5. Basic vs. Expert Configuration Mykonos Web Security is highly configurable, with numerous settings to optimize it for your environment. Once you've completed the Wizard, the Appliance will route traffic through a default application profile to the designated backend servers. This default profile is simple, but adequate for many applications. However, there are still hundreds of options available to further customize the Appliance to meet your specific requirements.
MWS has two configuration modes, accessible from the Configuration button on the Web interface. Basic Mode gives you access to all of the Appliance's features, with a wide range of customizations, through a user-friendly interface. Expert Mode gives you access to the deepest levels of the Appliance's configuration, presented in a key:value pair format, and is just that: best used by experts who are comfortable making multiple changes at once. The only functional difference between the two modes is presentation and Expert Mode's ability to batch multiple changes into one save. Basic Mode is recommended for most users and most applications.
Important When using Expert Mode, be sure to click the "Save" button when you are done making changes. Unlike Basic Mode, Expert Mode does NOT save the configuration to the engine after each parameter is modified; it lets you make multiple changes at once and then write the entire configuration image as one transaction.
Important Using Expert Mode with Internet Explorer (even recent versions) might cause unexpected behavior, so it is recommended to use other browsers like Chrome, Safari, Opera or Firefox!
Note The Configuration UI, by default, will only display the simplest of configuration options. Select the "Advanced" filter to display all of the options.
4.5.1. Global Configuration The global configuration defines the system's default actions, bindings, and preferences. It has the following sub-sections:
Application - Identification (Advanced) How the root-level application exposes itself. Control the name of the server software that is running, the web root, and webmaster email.
Application - Traffic Parameters related to the root-level application's traffic, such as timeouts, backend servers and health checks, and whether or not to use SSL.
Default Responses (Advanced) Allows administrators to upload a custom page that will be returned to the client on a successful request instead of the standard HTTP response page.
Error Handling (Advanced) Defines which response codes the system will consider unsuccessful requests.
General Configuration related to low-level operations of the Security Engine.
Incident Monitoring (Advanced) Administrators can set how the system monitors the security of session tokens.
Logging (Advanced) Configures how the system logs user data. Administrators can configure whether to log request and response data before or after processing, as well as the default logging level.
SMTP Configuration Mykonos Web Security uses email for several pieces of key functionality, including alerting and reporting. The Configuration Wizard walks you through setting up SMTP servers, but, if you need to, you can change these settings through this menu.
Security Configure various security-related parameters, such as the health-check URL, and whether or not to log various pieces of the HTTP Request / Response.
Security Monitor Configure session timeouts for the Security Monitor application.
Server Ports Specify which ports to use for the Configuration UI (default: 5000), Security Monitor (default: 8080), and system HTTP port (default: 80).
Session Management (Advanced) Administrators can configure the session cookie used to track users, as well as the profile name list.
4.5.2. Services Services run in the background, performing tasks such as sending alerts, generating reports, or performing maintenance tasks. The following Services are configurable through the "Services" tab in the Configuration UI.
Alert Service The system can be configured to send email alerts to administrators when an incident of a specified severity is detected. For instance, the system could send out an email notification to a specific admin who is on call if an incident level of critical is detected, allowing the admin to respond quickly to the threat.
Mykonos Web Security can send Alerts to email addresses on a defined schedule, and / or send SNMP traps to one or more SNMP servers. The initial configuration of the Alert Service can be performed with the Configuration Wizard, but if you need to change these settings, they are available through the "Services" tab of Configuration.
Auto Response Service Allows administrators to turn the Auto Response service on or off.
Backup Service This menu configures the backup service. Administrators can configure the frequency, type, and destination for system backups. These settings can also be configured through the Configuration Wizard.
Database Cleanup Service (Advanced) Administrators can configure how often the database cleanup service runs for sessions, profiles, and incidents through this option.
Security Engine Service (Advanced) Configures the memory, database, and fingerprinting used by the security engine.
Session Consolidation Service (Advanced) Administrators can configure how and when the system consolidates user sessions.
Statistic Service (Advanced) Configures how the system logs statistical data.
4.5.3. Processor Configuration Each processor has its own set of configuration options that may be applicable to your application. For each processor, administrators can leave the default configuration in place, modify it to suit their situation, or disable the individual processor. In most cases, the default is appropriate.
To configure a processor, browse to it on the menu and click on it to expand the options. To turn it on or off, select True or False in the "Processor Enabled" field. Many processors have additional options you can select, some very detailed, allowing you to optimize the processor for your specific application.
4.5.4. Application Profiles A single installation of Mykonos Web Security can leverage application profiles to protect multiple applications on your site. The "Default Catch-All Application" is configured to catch all traffic before routing it to the Backend Server. MWS processes applications in order, and the Catch-All should be last, as a failsafe for processing traffic. It is also safe to delete this profile once you have added one or more profiles specific to your environment.
To configure application profiles, select "Applications" in the Configuration UI. All applications inherit the Global Application parameters. To control what the new profile will handle, define the following fields:
• A URL pattern that the system can use to route traffic to the correct application.
• If there are multiple Backend Servers, which ones MWS will proxy for.
When you add an application, the final step will be to set the URL pattern. You can change it later for each application through the Application / Traffic submenu. URL patterns follow standard Perl Compatible Regular Expression (PCRE) syntax.
For example: • Any traffic: ^.*$
• Any subdomain: ^.*\.domain\.com$
• Multiple, or no, subdomains: ^((www|shop)\.)?domain\.com$
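Because URL patterns use PCRE syntax, you can sanity-check a pattern before saving it. Python's `re` module happens to accept these particular expressions unchanged (note that Python's syntax is close to, but not identical to, full PCRE):

```python
import re

# The multiple-or-no-subdomains example from above:
pattern = re.compile(r"^((www|shop)\.)?domain\.com$")

assert pattern.match("domain.com")        # no subdomain: allowed
assert pattern.match("www.domain.com")    # www: allowed
assert pattern.match("shop.domain.com")   # shop: allowed
assert not pattern.match("blog.domain.com")  # not in the allowed set
```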
Important URL patterns and profiles are evaluated in the order they are created/arranged; ensure that all new application profiles are added above the default catch-all to avoid conflicts in URL patterns and routing. The order of Application Profiles can be changed by clicking the "Move Up/Down" options.
Note If SSL is required for this application, you will also need to enable SSL and ensure that all the required certificates are uploaded and configured as described in the "Configuring SSL" section of this guide.
Note Most, but not all, parameters can be overridden at the context of an application or page. If you have overridden a parameter and wish to return to the value defined at a higher level, click the "clear" button to remove the override.
4.5.4.1. Adding Pages MWS supports different configurations for different pages within a protected application. You can define page specific configurations under the "Applications" menu option. To add an individual page within an application, find the application profile that contains the page, and click "Pages" in the list of options below. From there, select "Add Page". Configuration is similar to adding applications, requiring you to supply a URL pattern for each page.
4.6. Configuration Import / Export Mykonos Web Security lets you export a configuration "Image" containing every parameter you've specified. These images can be imported by the system, letting you make a backup of your system configuration before making major changes, or to aid in some types of deployment. The exported configuration is a standard XML document which you can hand-edit. However, hand editing is not recommended, since it introduces the risk of corrupting your configuration.
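If you do hand-edit an exported image, at minimum verify that the result is still well-formed XML before importing it. The check below only guards against syntax corruption, not against invalid parameter values (the sample fragments are illustrative; a real MWS image contains the system's full parameter set):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if the text parses as XML, False otherwise."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# A valid fragment, and the same fragment with a broken closing tag:
assert is_well_formed("<config><param>value</param></config>")
assert not is_well_formed("<config><param>value</config>")
```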
To access the Configuration Import / Export feature, click the "Configuration" button on the top of the UI, and then click "Configuration Import / Export".
Note If you execute a System Initialization from the TUI, you can use an imported configuration to bring your system back to its previous running configuration. However, historical traffic information will still be lost.
4.7. Configuring SSL
4.7.1. Enabling SSL to the Client To enable SSL between Mykonos Web Security and the client, first navigate to the application for which you want to enable SSL. Then, select "Server Ports" under "Application". Upload your SSL certificate and Key file, then select a listening interface IP address, and finally, a port. The combination of port/IP must be unique for the system. If the system is clustered, an IP must be selected for each node.
Note Certificates should be in PEM (Privacy Enhanced Mail) format.
4.7.2. Enabling SSL to the backend To enable SSL for the traffic between Mykonos Web Security and the backend servers, simply log in to the Configuration UI and, under "Global / Application / Traffic", enable the option for SSL to the backend. This can also be done on a per-application basis.
4.8. Clustering Individual MWS appliances have the ability to work together as one system, in a cluster. Clustering allows traffic to be divided among multiple appliances, effectively reducing per-system load. In a clustered network configuration, the "master" node holds the database that will be populated by one or more "traffic processors". In order to successfully utilize a Mykonos cluster, a load-balancer must properly segregate traffic to each of the defined traffic processing nodes. Each of these traffic nodes must maintain connectivity with the master in order to operate.
Important Clustering should not be confused with High Availability! Clustering would be used to increase throughput (by utilizing multiple processing nodes), and can reduce the chance that the whole system will fail. Clustering does not protect the master node from failure as in a High Availability setup; only HA configurations are set up to include failsafe procedures to designate a new master when the first one is unavailable.
4.8.1. Node Types In a traditional MWS deployment (one system), the appliance is responsible for holding its own database as well as processing the traffic. In a clustered deployment, you have the ability to segregate the database from those systems which will process incoming requests. During cluster configuration, you will have the ability to designate a node type for each system. At a minimum, the cluster must have a way to process traffic, and a way to store the relevant information.
• Master
A Master node is similar to a single-system deployment in that it holds the database, and also processes incoming traffic. This satisfies both requirements for a cluster (database and traffic processor), so it is possible to set up a cluster with only one Master node (no additional processing nodes). Additional traffic processing nodes can be added at a later point in time if desired.
• Dedicated Master
A Dedicated Master node holds the database similar to a Master node, but does not have the ability to process traffic. Using a Dedicated Master in a clustered configuration requires the addition of at least one Traffic node.
• Traffic Processor
A Traffic node is only responsible for processing incoming requests. It does not contain a database, so a Master or Dedicated Master node must accompany a Traffic node. The number of Traffic nodes you can add to a cluster depends on (1) the hardware specifications of the Master, (2) the amount of incoming traffic on the protected web application, and (3) the number of additional Traffic nodes in the cluster. For optimal stability, be sure to monitor the cluster's performance as you add each Traffic node.
4.8.2. Setting up Clustering
Important Unlike a traditional cluster, an MWS cluster does not automatically balance traffic between its nodes. For this reason, a load balancer must be configured to send traffic to each traffic processing node.
Setting up a cluster is as easy as configuring multiple stand-alone boxes. The first step is to set up the master. You must set up the master first because you will need to supply the master's IP when initializing the traffic nodes. To initialize a master, simply choose "Master" or "Dedicated Master" from the TUI setup menu ('sudo setup').
Setup will initialize the master, and once complete you should be able to navigate to the management interface at https://HOSTNAME:5000. Once the master is initialized, you can initialize the other appliances as traffic processing nodes. The steps are similar to the master setup, however you will be prompted to enter the IP of the master node.
Once the traffic node is initialized, you can verify the cluster by navigating to the management interface (https://HOSTNAME:5000) and clicking on "System Stats". There should be a separate tab for each node in the cluster, and an additional tab for the aggregate cluster data.
Note Remember: you must use an external load balancing solution to point to each traffic processing node, as the MWS cluster will not do this for you!
4.8.3. Updating the Cluster Updating a cluster is similar to updating a stand-alone box. Navigate to "Updates" in the management interface on the master node (the traffic nodes have no management interface) and apply the updates as you would on an individual appliance (see the chapter entitled "System Updates"). The master node will automatically apply the same update to each of its traffic processing nodes in the cluster; there is no need to update each appliance individually.
Note The process for updating a cluster will take longer than updating a single appliance, as the same update has to be applied to each node.
4.9. High Availability To minimize the risk of downtime, MWS deployments can be placed in a Highly Available (HA) configuration. In this setup, an additional appliance stands by in the event that the currently active appliance goes offline. If this happens, the passive appliance automatically becomes the new active appliance, without needing to restart the system. An HA configuration is similar to MWS Clustering, with the major exception that the passive system has a copy of the services needed to take over when the Master fails. MWS uses a Virtual IP (VIP) that floats between the currently active system and the current passive system.
Note An HA configuration is only available on the MWS dedicated hardware systems -- it is not available in a software installation.
4.9.1. Configuring HA If your appliance is HA-ready, you will see two additional modes available on the "Select Appliance Mode" screen during appliance Initialization. On the active appliance (the one that will be the primary appliance), enter TUI setup by typing "sudo setup". Select "Initialize Appliance" and select either "HA Master" or "HA Dedicated Master" for the mode. The distinction between these modes is described below:
• HA Master
If the appliance is in HA Master mode, it will act as a stand-alone system. The system's database will be stored on this node, and will be replicated/mirrored to the passive appliance (configured later). The appliance will also be able to process traffic like a standard installation.
• HA Dedicated Master
Just like in MWS Clustering, the Dedicated Master has no way to process traffic by itself. It only contains the database and essential services to talk to the other appliances. If you would like an MWS Cluster configured for HA, you can select this mode to prevent the master from processing any traffic. Keep in mind you will need to utilize Clustering to configure at least one Traffic Processing node. A copy of the master will still exist on the passive system.
After choosing the HA master mode, you will be prompted to enter the IPs belonging to the HA pair. This will include the IP of the current master (the active appliance) as well as the IP of the appliance to fail-over to (the passive appliance).
Note Be sure that each of the appliances is on the same MWS version (invoke "mykonos-get-version" from the appliance's command line). Appliances not on the same version as the master will need to be manually updated to the HA master's version before continuing.
Next, you will be prompted to enter the Virtual IP (VIP) that the system will use as the IP of the currently-active system. You may enter the netmask in either standard or CIDR bitmask notation (for example, 255.255.255.0 or /24).
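The two netmask notations the TUI accepts are equivalent; Python's standard `ipaddress` module shows the correspondence (the network address below is just an example):

```python
import ipaddress

# A /24 prefix and the dotted netmask 255.255.255.0 describe the
# same network size; either form identifies the same network.
net = ipaddress.IPv4Network("192.0.2.0/255.255.255.0")
prefix = net.prefixlen        # CIDR form: 24
mask = str(net.netmask)       # dotted form: "255.255.255.0"
```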
After allowing the Initialization process to complete, you can verify proper HA setup by navigating to the management interface (https://VIP:5000, where VIP is the Virtual IP).
Note You must use the VIP to access the configuration interface. If you attempt to use the management interface on the passive appliance, you will see an error instead.
Important Since the various HA appliances in a configuration need to interface with the database, port 5432 will be open! Be sure to restrict access to this port with your firewall to prevent unwanted incoming connections. Mykonos Web Security is NOT intended to be used as an edge device.
4.9.2. Updating with HA To update an HA system, navigate to the management interface (https://VIP:5000 where VIP is the Virtual IP) and update as described in the "System Updates" chapter. The update will be applied to both systems in the HA pair.
Note While both the active and passive machines must be on the same MWS version to be initially configured in HA mode, appliances already in HA mode will successfully update together.
Chapter 5. Managing the Appliance
5.1. Restart/Shutdown Shutting down or restarting the Appliance is done from the TUI, accessed either from the system console, vSphere, or via SSH to your Appliance on port 2022. Normal console login will take the administrator directly to the Text User Interface. If you are at a shell prompt, you can start the TUI with the command: sudo setup
To safely shut down or reboot the appliance, administrators choose the "Reboot appliance" option from the menu.
5.2. System Updates Provided the MWS has internet access, either direct or through a configured proxy, it will automatically check for software updates every night and download them when new ones are available. However, the appliance will not automatically apply updates. For security and stability, they will need to be manually applied by an administrator.
While MWS only checks for updates every night, you can force the appliance to check for updates at any time by clicking the "check for updates" link under "Online Updates". MWS will fetch any available online updates at this time.
Note Updates will not automatically populate in the Updates pane until you refresh the page. You are not required to stay on the Updates page while MWS is fetching an online update.
MWS also supports uploading updates manually, without an internet connection. After you upload the package to the appliance, it becomes available to the updates system, and you can apply the update as described below.
Important While MWS is uploading offline updates, you should refrain from navigating away from the Updates pane until the upload is complete.
If an update is available (either an uploaded offline update or an automatically downloaded one), you can view the update package along with information about it, including the package name, version, whether or not a reboot is required after installing the update, a description, and a list of changes. After reviewing the changes, you can apply the update by sliding the 'Update?' switch to the 'Yes' position (it should be on 'Yes' by default) and clicking the 'Update Selected' button at the bottom of the package table.
Note At this time, it is not possible to roll back to earlier versions of the Appliance software.
5.3. System Statistics The MWS software allows for standard SNMP system monitoring. All statistics available on a typical Linux system are available to MWS customers through standard system SNMP MIBs.
In addition, MWS currently offers six types of system statistics in the form of graphs in the Configuration interface: CPU Utilization, CPU Load Average, Memory Utilization, Network Traffic, Proxy Connections, and Proxy Requests. They can be accessed via the System Status button in the top menu of the Configuration management interface.
Depending on the desired level of detail, the statistics can be viewed for the Last Hour, Last 12 Hours, Last Day, Last Week, and Last Month (always the last 30 days).
Below are the details of the statistics that are available for each type:
CPU Utilization • Wait - Percentage of CPU time spent in wait (on disk)
• Softirq - Percentage of CPU time spent handling software interrupts
• System - Percentage of CPU time spent in kernel space
• User - Percentage of CPU time spent in user space
CPU Load Average • 1 min - CPU Load for the last minute
• 10 min - CPU Load for the last 10 minutes
• 15 min - CPU Load for the last 15 minutes
Memory Utilization • Used - Amount of memory used
• Free - Amount of memory free
• Swap - Amount of swap used
Network Traffic • Outbound - Amount of traffic leaving the box
• Inbound - Amount of traffic entering the box
Proxy Connections • Reading - Number of TCP connections reading data
• Waiting - Number of TCP connections waiting
• Writing - Number of TCP connections writing
Proxy Requests • Requests - Current number of HTTP/HTTPS requests being processed
5.3.1. Master-Slave Mode When the appliance is running in multi-server mode, the system statistics will show details for each node in the cluster as well as key cumulative data across the entire cluster. Each system is presented as a tab in the Configuration UI's system status page, with the Aggregate tab appearing first. The Aggregate tab always shows CPU Utilization, Network Traffic, Proxy Connections, and Proxy Requests collected from the entire active MWS cluster.
5.4. Troubleshooting and Maintenance
5.4.1. Managing Services MWS runs on top of an optimized, hardened Linux installation and is very stable in normal operation. The core of MWS consists of several programs that run as services ("daemons" in Linux parlance) and work in concert to defend your web applications.
The Services menu in the TUI lets you check the status of MWS services, or start, stop, or restart them if necessary. It is also possible to perform these actions from the command line. Note that these are all console functions, not accessed through the Web interface.
5.4.2. Viewing Logs MWS keeps its log files in the /var/logs/mws directory. The log files often prove useful for troubleshooting if there is ever a problem with the Appliance. MWS uses the following logs:
• audit.log
• mws.log
• mws-access.log
The audit.log file contains the system's auditing information: who has logged into the system and what actions they've performed.
The mws.log file includes all of the system's operational logs. Each entry includes a header that states which service created it.
The mws-access.log file includes details of HTTP transactions that are passing between the outside user, the MWS, and the protected Application Server.
The administrator can adjust logging levels for HTTP access logged to mws-access.log using the "Global/Logging" section of the Web interface. By default, MWS doesn't log the details of URL requests. However, logging can be set to one of three levels.
• URL Only
• URL and Headers
• Full HTTP Packets
The information logged here is usually used for troubleshooting, allowing an administrator to see exactly what the requests look like before and after processing by the MWS.
5.5. Backup and Recovery By default, the Appliance runs backups every day; the administrator can adjust the backup interval in the configuration UI. MWS stores its backups in the /home/mykonos/backups directory. You can restore configuration backups from the TUI or the command line, while backups of the incident database are handled only from the command line.
5.5.1. Restoring System Configuration from Backup While rarely required, the administrator can restore the Appliance configuration from backup using the TUI on the Backups submenu.
This menu is pre-populated with a list of current system backups, and the following options:
• Restore Latest Backup - This will restore the database using the last saved backup file.
• Reset to Clean State - This option will clear all of the data from the database and set it to a pristine state.
5.5.2. Backing up and Restoring Console Data
To restore the data displayed in the Monitoring Console from a backup, use the command line utility.
Note
When restoring data to the Console, a database backup file must be specified from /usr/share/msa/database, or the "latest" option can be used to restore from the last valid backup.
This is run with the following command: sudo /usr/sbin/mykonos-db
The options to this command are:
• backup
• restore (filename)
• restore latest
• clean
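Putting the options together, a typical session might look like the following sketch. The commands mirror the options listed above; the backup filename is shown as a placeholder, since actual backup filenames will vary.

```shell
# Back up the Console database
sudo /usr/sbin/mykonos-db backup

# Restore from a specific backup file located in /usr/share/msa/database
sudo /usr/sbin/mykonos-db restore <backup-filename>

# Restore from the most recent valid backup
sudo /usr/sbin/mykonos-db restore latest

# Clear all data and reset the database to a pristine state
sudo /usr/sbin/mykonos-db clean
```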
Chapter 6. Processor Reference
Mykonos Web Security uses a modular approach to securing your application. Each module is responsible for monitoring, detecting, and securing a particular aspect of the application and/or individual HTTP requests and responses. These logical entities are referred to as Security Processors.
Processors are the configurable operators that implement an additional layer of security between the application/web servers and the end user. They are responsible for analyzing the request and response data sent to and from the server, and monitor everything from the state of injected honeypots to the contents of the headers and bodies of HTTP/HTTPS requests and responses. Processors can be managed through the system configuration user interface. While some of their operations may be as simple as incrementing a counter, others are far more sophisticated and may alter the request and response data, so it is important that administrators configure processors correctly to ensure the web application's security and functionality.
Each processor monitors the HTTP stream for particular deviations from what is considered typical traffic. These deviations are called "triggers". Each security processor may have several triggers it is responsible for detecting. When a trigger is matched, the processor responsible for it generates a security incident. Incidents vary in complexity, as explained in the section below.
6.1. Complexity Definitions
Complexity is a rating of the skill, effort, and experience necessary to trigger a specific incident. The following is a description of the rating system:
Informational (0.0)
Informational incidents represent information about the client that may or may not indicate malicious activity, but is not common. Informational incidents are used to identify more complex abuse patterns that cannot be identified from a single request. An example of an informational incident is when the user has disabled the Referer header.
Suspicious (1.0)
Suspicious incidents represent activity that is abnormal but not guaranteed to be malicious. This is similar to an informational incident, except that the event is borderline malicious, not just unusual. Just like informational incidents, suspicious incidents are used to identify more complex abuse patterns that cannot be confirmed as malicious from just one request. An example of a suspicious incident is when the user requests a file that does not exist (404 error).
Low (2.0)
Low complexity incidents represent malicious activity that does not require any special tools, does not require a deep understanding of application architecture, and generally can be executed by an unsophisticated threat. An example of a low complexity incident is when the user modifies a query string parameter in the URL.
Medium (3.0)
Medium complexity incidents represent malicious activity that would require special tools, advanced browser configuration, scripting, or an understanding of how web applications are designed and implemented. These types of attacks are generally not executed by unsophisticated attackers, and are more likely to be targeted at the protected site, rather than at an arbitrary IP range. An example of a medium complexity incident is when the user requests the robots.txt spider configuration file from a browser or a script spoofing its identity as a browser.
High (4.0)
High complexity incidents represent malicious activity that is highly advanced and requires a deep understanding of web application architecture, implementation, security features, and multi-request workflows. High complexity incidents are generally far too advanced for an average attacker and usually have a specific target. An example of a high complexity incident is when a user is able to break the encryption used on basic authentication password files.
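When consuming incident data outside the appliance (for example, in a script that processes alerts), the rating scale above can be represented as a simple lookup. The following JavaScript sketch is purely illustrative: the helper function and sample data are invented for this example and are not part of the MWS product.

```javascript
// Complexity scale as defined by the MWS rating system above.
const COMPLEXITY = {
  Informational: 0.0,
  Suspicious: 1.0,
  Low: 2.0,
  Medium: 3.0,
  High: 4.0,
};

// Hypothetical helper: keep only incidents at or above a given complexity,
// e.g. to decide which ones warrant an immediate alert.
function atLeast(incidents, threshold) {
  return incidents
    .filter((i) => COMPLEXITY[i.rating] >= COMPLEXITY[threshold])
    .map((i) => i.name);
}

// Invented sample data, using incident names from this chapter.
const incidents = [
  { name: "Apache Configuration Requested", rating: "Low" },
  { name: "Malicious Service Call", rating: "Medium" },
  { name: "Cracked Authentication", rating: "High" },
];

// Keeps "Malicious Service Call" and "Cracked Authentication".
console.log(atLeast(incidents, "Medium"));
```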
6.2. Security Processors
The Security Processors are separated into four groups:
• Honeypot Processors
• Activity Processors
• Tracking Processors
• Response Processors
Honeypot processors contain the logic for injecting fake vulnerabilities and points of interest for hackers, with the goal of exposing an attacker before they find an actual vulnerability on the site.
Activity processors monitor for and report other malicious behavior. These operators watch for malicious activity based on non-injected points of interest, typically by monitoring headers, errors, input fields, URL sequences, and so on, with the goal of identifying malicious behavior within the valid application stream.
Tracking processors allow for more advanced tracking of attackers. These processors attempt to collect additional data based on behavioral characteristics and information unique to the attacker's environment. These "fingerprints" become the basis for the "hacker database" used to detect attackers from the first request they make.
Finally, response processors generate responses to the end user. If turned on, they can be used to respond to a hacker, either manually or automatically (depending on the configuration), as soon as malicious activity is detected. In the case of an automated response, they can be tuned to match more or less any condition, including but not limited to frequency of occurrence, complexity, and the types of incidents triggered.
6.3. Honeypot Processors
6.3.1. Access Policy Processor
This processor injects fake permission data into the clientaccesspolicy.xml file of the web application's domain. The fake access policy references a fake service and grants a random domain access to call it. If the service is ever called, or if any files are ever requested in the directory the service is supposedly contained in, an incident can be created. Under normal conditions, no user will ever see the clientaccesspolicy.xml file, and will therefore be unaware of the URL of the fake service or the directory it resides in. In cases where a Silverlight object legitimately requests clientaccesspolicy.xml from the protected domain in order to access a known service, no incident is created, because the service being called is defined with real access directives.
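For reference, a legitimate clientaccesspolicy.xml grants named domains access to specific paths, and the injected fake policy simply adds an extra, bogus entry of the same shape. The fragment below is purely illustrative: the domain and service path are invented here, and the values MWS actually injects are randomized.

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Fake entry: this domain and service directory do not actually exist -->
      <allow-from http-request-headers="*">
        <domain uri="http://partner.example.com"/>
      </allow-from>
      <grant-to>
        <resource path="/services/userAdmin/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```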
6.3.1.1. Configuration
Table 6.1. Access Policy Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True) - Whether or not to enable this processor for HTTPS traffic.

Advanced
• Fake Service (String, default: Random) - The fake service the user requested.
• Incident: Malicious Service Call (Boolean, default: True) - The user manually entered the URL into the browser and accessed the service that way. They did not call the function.
• Incident: Service Directory Indexing (Boolean, default: True) - The user asked for a file index on the directory that contains the fake service.
• Incident: Service Directory Spider (Boolean, default: True) - The user is issuing requests for resources inside the directory that contains the fake service. Since the directory does not exist, all such requests are unintended and malicious.
6.3.1.2. Incidents
6.3.1.2.1. Malicious Service Call
Complexity: Medium (3.0)
Default Response: 1x = 5 day Strip Inputs
Cause: MWS injects fake permission data into the clientaccesspolicy.xml file of the websites it protects. The fake access policy references a service that does not actually exist and grants a random domain access to call it. Because the fake service is not referenced anywhere else on the site, the only way to learn its URL is to inspect clientaccesspolicy.xml. The "Malicious Service Call" incident is triggered whenever a user manually enters the URL of the fake service into the browser or otherwise requests the service directly, rather than through a legitimate plugin object.
Behavior: The clientaccesspolicy.xml file provides a convenient reference for hackers when trying to scope the attack surface of a website, because it may list services that are not referenced anywhere else on the site. A user who calls the fake service defined in the injected policy has therefore been manually probing the file for unreferenced services, most likely in an effort to determine whether the service actually exists and whether it can be called without authentication. Depending on the values supplied to the service, the attack could fall under a large number of vectors, including "Abuse of Functionality", "Buffer Overflow", "Denial of Service", "Format String", "Integer Overflow", "OS Commanding", and "SQL Injection" among others. A large volume of this incident is likely the result of an automated scanning process.
6.3.1.2.2. Service Directory Indexing
Complexity: Medium (3.0)
Default Response: 1x = 5 day Block
Cause: Originally, embedded HTML technologies such as Flash and Java were not able to communicate with third-party domains. This was a security constraint to prevent a malicious Java or Flash object from performing unwanted actions against a site other than the one hosting the object (for example, a Java applet that brute forces a Gmail login in the background). This limitation was eventually relaxed in order to facilitate more complex mash-ups of information from a variety of sources. However, to prevent untrusted websites from abusing this new capability, a resource called "clientaccesspolicy.xml" was introduced. Now, when a plugin object wants to communicate with a different domain, it will first request "clientaccesspolicy.xml" from that domain. If the file specifies that the requesting domain is allowed to access the specified resource, then the plugin object is given permission to communicate directly with the third party.
The clientaccesspolicy.xml file therefore provides a convenient reference for hackers when trying to scope the attack surface of a website. For example, there may be a vulnerable service listed in clientaccesspolicy.xml that is not referenced anywhere else on the site; unless the hacker looks at clientaccesspolicy.xml, they would never know the service existed. MWS injects a fake service definition into the clientaccesspolicy.xml file in order to identify which users are manually probing the file for information. The "Service Directory Indexing" incident will be triggered if the user attempts to get a file listing from the directory the fake service is supposedly located in.
Behavior: Attempting to get a file listing from the directory where the potentially vulnerable service is located is likely an effort to identify other unreferenced vulnerable services, or possibly even data or source files used by the service. Such a request represents a "Directory Indexing1" attack, and is generally performed while attempting to establish a full understanding of a website's attack surface.
6.3.1.2.3. Service Directory Spider
Complexity: Medium (3.0)
Default Response: 1x = 5 day Block
Cause: Originally, embedded HTML technologies such as Flash and Java were not able to communicate with third-party domains. This was a security constraint to prevent a malicious Java or Flash object from performing unwanted actions against a site other than the one hosting the object (for example, a Java applet that brute forces a Gmail login in the background). This limitation was eventually relaxed in order to facilitate more complex mash-ups of information from a variety of sources. However, to prevent untrusted websites from abusing this new capability, a resource called "clientaccesspolicy.xml" was introduced. Now, when a plugin object wants to communicate with a different domain, it will first request "clientaccesspolicy.xml" from that domain. If the file specifies that the requesting domain is allowed to access the specified resource, then the plugin object is given permission to communicate directly with the third party.
The clientaccesspolicy.xml file therefore provides a convenient reference for hackers when trying to scope the attack surface of a website. For example, there may be a vulnerable service listed in clientaccesspolicy.xml that is not referenced anywhere else on the site; unless the hacker looks at clientaccesspolicy.xml, they would never know the service existed. MWS injects a fake service definition into the clientaccesspolicy.xml file in order to identify which users are manually probing the file for information. The "Service Directory Spider" incident will be triggered if the user attempts to request a random file inside the directory the fake service is supposedly located in.
Behavior: Requesting a random file from the directory where the potentially vulnerable service is supposedly located is likely in an effort to identify other unreferenced resources. This could include configuration files,
1 http://projects.webappsec.org/w/page/13246922/Directory-Indexing
other services, data files, etc. Usually an attacker will first attempt to get a full directory index (which takes only one request), but if that fails, the only other technique is to guess the filenames (which could take thousands of requests). Because guessing filenames can take so many requests, there are several publicly available tools that can enumerate a large list of common file and directory names in a matter of minutes. This type of behavior is an attempt to exploit the server for "Predictable Resource Location2" vulnerabilities, and is generally seen while the attacker is trying to scope the web application's attack surface.
6.3.2. Ajax Processor
A mistake commonly made by web developers is to consolidate every JavaScript file used by their website into a single file, and then reference that one file from every page on the site, regardless of whether the page needs all of the code defined in the file. This is an optimization trick that works, but it exposes potential vulnerabilities. The goal is to get the browser to cache all of the external JavaScript, so that additional code does not need to be downloaded as the user navigates the site.
Consider the case where one of the pages on the site contains an administrative console written with AJAX technology. The administrative page references a JavaScript file that contains code for managing users of the site (creating users, deleting users, getting user details, etc.). Normally only administrators would visit this page, and they would be the only ones who can see this code. Once all JavaScript on the site is consolidated, however, these types of sensitive functions tend to get mixed in with the rest of the safer functions. Hackers look for these types of functions in order to find the administrative page that uses them, as well as to exploit the function itself. The goal of this trap is to emulate this common mistake and entice hackers into attempting to exploit the "sensitive looking" function.
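As an illustration of the mistake described above (this is not actual MWS-injected code; the function names and endpoint are invented for the example), a consolidated script might look like this:

```javascript
// site.js - a single consolidated script included on every page.

// Harmless UI helper needed throughout the site:
function changeImageOnHover(el, hoverSrc) {
  el.addEventListener("mouseover", () => {
    el.src = hoverSrc;
  });
}

// Sensitive administrative function that only the admin console calls,
// but which every visitor can now read. An attacker inspecting this file
// learns both that the service exists and how to call it:
function addUser(name, role) {
  return fetch("/admin/service/addUser", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, role }),
  });
}
```

The Ajax Processor injects a fake file of this general flavor and watches for anyone who calls or dissects the "sensitive looking" function.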
6.3.2.1. Configuration
Table 6.2. Ajax Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True) - Whether traffic should be passed through this processor.

Advanced
• Inject Script Enabled (Boolean, default: True) - Whether to inject the fake JavaScript code into HTML responses.
• Service (Configurable, default: AJAX Service) - The fake service to expose.
• Incident: Malicious Script Execution (Boolean, default: True) - The user executed the fake JavaScript function.
• Incident: Malicious Service Call (Boolean, default: True) - The user manually entered the URL into the browser and accessed the service that way. They did not call the function.
6.3.2.2. Incidents
6.3.2.2.1. Malicious Script Execution
Complexity: Medium (3.0)
2 http://projects.webappsec.org/Predictable-Resource-Location
Default Response: 1x = Slow Connection 2-6 seconds and permanent Strip Inputs in 10 minutes.
Cause: MWS injects a fake JavaScript file into the websites it protects. This fake JavaScript file is designed to look as though it is intended for administrative use only, but has been mistakenly linked in with non administrative pages. The JavaScript file exposes an AJAX function that communicates with a potentially vulnerable fake service. If the user attempts to invoke this function using a tool like Firebug, this incident will be triggered.
Behavior: It is common practice to create a few single JavaScript files that contain the majority of the code a site needs, and then import that code into all of its pages. This increases the performance of the site, because the user can download and cache all the JavaScript at once, rather than having to re-download all or some of it on every page change. However, in some cases developers mistakenly include sensitive administrative functions alongside common functions needed by unauthenticated users. For example, a developer might include an "addUser" function in a file that also contains a "changeImageOnHover" function. The "addUser" function may only be called from an administrative UI (behind a login), while the hover image effect would be called on many different pages.
Hackers often look through the various JavaScript files included on the pages of a website in order to find references to other services that might be vulnerable. Once a function has been identified, the hacker will attempt to find a way to exploit the service the function uses. Because the attacker is actually executing the function instead of attempting to communicate directly with the potentially vulnerable service, this is likely a less sophisticated attack. They are more than likely just trying to determine whether the service actually exists and whether they can call it without being authenticated. However, depending on the values they supplied as arguments to the function, this could be a number of different attack types, including "Abuse of Functionality3", "Buffer Overflow4", "Denial of Service5", "Format String6", "Integer Overflows7", "OS Commanding8", and "SQL Injection9".
6.3.2.2.2. Malicious Script Introspection
Complexity: Medium (3.0)
Default Response: 1x = Slow Connection 2-6 seconds and Captcha. 2x = Slow Connection 4-14 seconds and permanent Block in 10 minutes.
Cause: MWS injects a fake JavaScript file into the websites it protects. This fake JavaScript file is designed to look as though it is intended for administrative use only, but has been mistakenly linked in with non administrative pages. The JavaScript file exposes an AJAX function that communicates with a potentially vulnerable fake service. If the user manually inspects the code of the function and attempts to exploit the service it uses directly (without calling the function itself), this incident will be triggered.
Behavior: It is common practice to create a few single JavaScript files that contain the majority of the code a site needs, and then import that code into all of its pages. This increases the performance of the site, because the user can download and cache all the JavaScript at once, rather than having to re-download all or some of it on every page change. However, in some cases developers mistakenly include sensitive administrative functions alongside common functions needed by unauthenticated users. For example, a developer might include an "addUser" function in a file that also contains a "changeImageOnHover" function. The "addUser" function may only be called from an administrative UI (behind a login), while the hover image effect would be called on many different pages. Hackers often look through all of the various JavaScript files
3 http://projects.webappsec.org/Abuse-of-Functionality 4 http://projects.webappsec.org/Buffer-Overflow 5 http://projects.webappsec.org/Denial-of-Service 6 http://projects.webappsec.org/Format-String 7 http://projects.webappsec.org/Integer-Overflows 8 http://projects.webappsec.org/OS-Commanding 9 http://projects.webappsec.org/SQL-Injection
being included on the pages of a website in order to find references to other services that might be vulnerable. Once a function has been identified, the hacker will attempt to find a way to exploit the service the function uses. Unlike the malicious script execution incident, here the attacker has actually dissected the fake AJAX function and attempted to directly exploit the service it uses. This is a more sophisticated attack than simply calling the JavaScript function, because it requires that the user understand JavaScript logic. Depending on what values they are sending to the service, this could be an effort to perform any number of exploits, including "Abuse of Functionality10", "Buffer Overflow11", "Denial of Service12", "Format String13", "Integer Overflows14", "OS Commanding15", and "SQL Injection16".
6.3.3. Basic Authentication Processor
The basic authentication processor is responsible for emulating a vulnerable authentication mechanism in the web application. This is done by publicly exposing fake server configuration files (.htaccess and .htpasswd) that appear to be protecting a resource with basic authentication (a part of the HTTP protocol). To the attacker, the site will appear to be exposing a sensitive administrative script with weak password protection. As the malicious user discovers these publicly exposed files, they are walked through a series of steps, each of which emulates exposing an additional piece of information. As the final step, if they end up breaking the weakly protected password, they will be considered a high threat.
Note
This processor should only be used when the site uses Apache as its front-end web server, because the files involved (.htaccess and .htpasswd) are specific to the Apache web server.
Note
Browsers often ignore the body content of HTTP responses if the status code is anything except 200. For best compatibility with different browser versions, you may wish to use a 200 status code when uploading responses such as images or executable code.
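For context, the pair of Apache files this processor fakes normally look like the following. The content shown here is purely illustrative: the resource names are invented, the password hash is a placeholder, and the values MWS actually serves are generated by the processor.

```apacheconf
# .htaccess - directory-level configuration protecting a resource
# with basic authentication
AuthType Basic
AuthName "Administrative Area"
AuthUserFile /var/www/.htpasswd
Require valid-user
```

```
# .htpasswd - username and hashed-password pairs referenced by AuthUserFile
admin:PLACEHOLDER_HASH
```

An attacker who can fetch both files learns that a protected resource exists and obtains hash material to attack offline, which is exactly the behavior the incidents below are designed to detect.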
6.3.3.1. Configuration
Table 6.3. Basic Authentication Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True) - Whether traffic should be passed through this processor.

Advanced
• Authorized Users (Collection, default: Collection) - A list of authorized user accounts.
10 http://projects.webappsec.org/Abuse-of-Functionality 11 http://projects.webappsec.org/Buffer-Overflow 12 http://projects.webappsec.org/Denial-of-Service 13 http://projects.webappsec.org/Format-String 14 http://projects.webappsec.org/Integer-Overflows 15 http://projects.webappsec.org/OS-Commanding 16 http://projects.webappsec.org/SQL-Injection
• Protected Resource (Collection, default: Protected Resource) - The fake protected resource.
• Randomization Salt (String, default: Random) - A random set of characters used to salt the generation of code. Any value is fine here.
• Incident: Cracked Authentication (Boolean, default: True) - The user has successfully accessed a fake protected resource using a cracked username and password.
• Incident: Directory Configuration Requested (Boolean, default: True) - The user has requested the Apache directory configuration file .htaccess.
• Incident: Directory Passwords Requested (Boolean, default: True) - The user has requested the Apache password file .htpasswd.
• Incident: Invalid Resource Login (Boolean, default: True) - The user has attempted to log in to the fake file protected by basic authentication, but failed.
• Incident: Protected Resource Requested (Boolean, default: True) - The user has requested a fake file which is protected by basic authentication.
6.3.3.2. Incidents
6.3.3.2.1. Apache Configuration Requested
Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds.
Cause: Apache is a web server used by many websites on the internet. As a result, hackers will often look for vulnerabilities specific to Apache, since there is a good chance any given website is running it. One such vulnerability involves the use of an .htaccess17 file to provide directory-level configuration (such as default 404 messages, password protected resources, directory indexing options, etc.) while not sufficiently protecting the .htaccess file itself. By convention, any resource that provides directory-level configuration should not be exposed to the public: if a user requests .htaccess or a related resource, they should get either a 404 or a 403 error. Unfortunately, not all web servers are configured to block requests for these resources, and in such a scenario a hacker could gain valuable intelligence about the way the server is configured. The "Apache Configuration Requested" incident will trigger in the event that the user requests the .htaccess file.
Behavior: Hackers will often attempt to get the .htaccess file from various directories on a website in an effort to find valuable information about how the server is configured. This is usually done to find a "Server Misconfiguration18" weakness that might expose a "Credential/Session Prediction19", "OS Commanding20", "Path Traversal21", or "URL Redirector Abuse22" vulnerability among others. The fact that an .htaccess file is even exposed is a "Server Misconfiguration" vulnerability in itself. In this specific case, the attacker has requested the .htaccess configuration file itself, which is usually done while attempting to establish the scope of the website's attack surface. MWS intercepts the request and returns a fake version of the file, which alludes to a password protected resource and its user database file (generally named ".htpasswd"), setting the stage for the incidents that follow.
17 http://httpd.apache.org/docs/current/howto/htaccess.html 18 http://projects.webappsec.org/Server-Misconfiguration 19 http://projects.webappsec.org/Credential-and-Session-Prediction 20 http://projects.webappsec.org/OS-Commanding 21 http://projects.webappsec.org/Path-Traversal 22 http://projects.webappsec.org/URL-Redirector-Abuse
6.3.3.2.2. Apache Password File Requested
Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds.
Cause: Apache is a web server used by many websites on the internet. As a result, hackers will often look for vulnerabilities specific to Apache, since there is a good chance any given website is running it. One such vulnerability involves the use of an .htaccess23 file to provide directory-level configuration (such as default 404 messages, password protected resources, directory indexing options, etc.) while not sufficiently protecting the .htaccess file itself. By convention, any resource that provides directory-level configuration should not be exposed to the public: if a user requests .htaccess or a related resource, they should get either a 404 or a 403 error. Unfortunately, not all web servers are configured to block requests for these resources, and in such a scenario a hacker could gain valuable intelligence about the way the server is configured.
MWS will automatically block any requests for the .htaccess resource and return a fake version of the file. The fake version contains the directives necessary to password protect a fake resource. These directives allude to the existence of a user database file that contains usernames and encrypted passwords. The "Apache Password File Requested" incident will trigger in the event that the user requests the fake user database file (generally named .htpasswd).
Behavior: Hackers will often attempt to get the .htaccess file from various directories on a website in an effort to find valuable information about how the server is configured. This is usually done to find a "Server Misconfiguration24" weakness that might expose a "Credential/Session Prediction25", "OS Commanding26", "Path Traversal27", or "URL Redirector Abuse28" vulnerability among others. The fact that an .htaccess file is even exposed is a "Server Misconfiguration" vulnerability in itself. In this specific case, the attacker is asking for a different resource that is related to .htaccess: the user database file for a password protected resource defined in .htaccess, generally named ".htpasswd". The user either opened the .htaccess file and found the reference to .htpasswd, or they simply tried .htpasswd to see if anything came back (with or without asking for .htaccess first). Either way, this behavior is involved in establishing a "Credential/Session Prediction" vulnerability. The request for .htpasswd is usually performed while attempting to establish the scope of the website's attack surface, although it is sometimes not performed until the attacker is trying to identify a valid attack vector.
6.3.3.2.3. Invalid Credentials
Complexity: Medium (3.0)
Default Response: 1x = Slow Connection 2-6 seconds. 15x = Brute Force Invalid Credentials Incident.
Cause: Apache is a web server used by many websites on the internet. As a result, hackers will often look for vulnerabilities specific to Apache, since there is a good chance any given website is running it. One such vulnerability involves the use of an .htaccess29 file to provide directory-level configuration (such as default 404 messages, password protected resources, directory indexing options, etc.) while not sufficiently protecting the .htaccess file itself. By convention, any resource that provides directory-level configuration should not be exposed to the public: if a user requests .htaccess or a related resource, they should get either a 404 or a 403 error. Unfortunately, not all web servers are configured to block requests for these resources, and in such a scenario a hacker could gain valuable intelligence about the way the server is configured.
MWS will automatically block any requests for the .htaccess resource and return a fake version of the file. The fake version contains the directives necessary to password protect a fake resource. Should the user request the password protected resource, MWS will simulate the authentication method defined in .htaccess and simulate the existence of the fake resource. The "Invalid Credentials" incident will trigger in the event that the user requests the fake password protected file and supplies an invalid username and password (as would be the case if they requested the file in a browser and guessed a username and password at the login prompt).
23 http://httpd.apache.org/docs/current/howto/htaccess.html 24 http://projects.webappsec.org/Server-Misconfiguration 25 http://projects.webappsec.org/Credential-and-Session-Prediction 26 http://projects.webappsec.org/OS-Commanding 27 http://projects.webappsec.org/Path-Traversal 28 http://projects.webappsec.org/URL-Redirector-Abuse
Behavior: Hackers will often request the .htaccess file from various directories on a website in an effort to find valuable information about how the server is configured. This is usually done to find a "Server Misconfiguration" [30] weakness that might expose a "Credential/Session Prediction" [31], "OS Commanding" [32], "Path Traversal" [33], or "URL Redirector Abuse" [34] vulnerability, among others; the fact that an .htaccess file is exposed at all is a "Server Misconfiguration" vulnerability in itself. In this specific case, the attacker is requesting a different resource that is referenced only from .htaccess. The fake resource is password protected, and the user has attempted to authenticate with bad credentials, most likely in an effort to guess a valid username and password combination such as "admin:admin" or "guest:guest". It may also be part of a larger brute-force attempt, where the attacker tries a long list of possible combinations. This is a poor method for locating valid usernames and passwords, especially since the (fake) user database file .htpasswd is actually exposed. A brute-force attack (represented by a large quantity of this incident type) therefore generally means the attacker is less sophisticated.
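The decoy's authentication check can be pictured with a short sketch. This is not MWS code, just a minimal Python illustration of how a request for the fake password-protected resource might be classified; the credential pair is a hypothetical stand-in for whatever decoy values the appliance generates:

```python
import base64

# Hypothetical decoy credentials; the real values are generated by the appliance.
FAKE_USERS = {"admin": "s3cret"}

def classify_auth(auth_header):
    """Classify a request for the fake password-protected resource.

    Returns the incident name this request would raise, or None when the
    supplied (decoy) credentials are "valid".
    """
    if auth_header is None:
        # No credentials at all: the browser prompt was cancelled, or a
        # scanner is merely probing for the file's existence.
        return "Protected Resource Requested"
    scheme, _, payload = auth_header.partition(" ")
    if scheme != "Basic":
        return "Invalid Credentials"
    try:
        user, _, password = base64.b64decode(payload).decode().partition(":")
    except Exception:
        return "Invalid Credentials"
    if FAKE_USERS.get(user) == password:
        return None  # would escalate to a "Password Cracked" incident
    return "Invalid Credentials"
```

A missing Authorization header maps to "Protected Resource Requested", a wrong guess to "Invalid Credentials", matching the incident definitions in this section.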
6.3.3.2.4. Protected Resource Requested

Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds.
Cause: Apache is a web server used by many websites on the internet, so hackers will often look for vulnerabilities specific to Apache, since there is a good chance any given website is running it. One such vulnerability involves the use of an .htaccess file [35] to provide directory-level configuration (such as default 404 messages, password-protected resources, and directory indexing options) while not sufficiently protecting the .htaccess file itself. By convention, any resource that provides directory-level configuration should not be exposed to the public: if a user requests .htaccess or a related resource, they should get either a 404 or a 403 error. Unfortunately, not all web servers are configured to block requests for these resources, and in such a scenario a hacker can gain valuable intelligence about the way the server is configured. MWS automatically blocks any request for the .htaccess resource and returns a fake version of the file containing the directives necessary to password-protect a fake resource. Should the user request the password-protected resource, MWS simulates the authentication method defined in the fake .htaccess and the existence of the fake resource. The "Protected Resource Requested" incident triggers when the user requests the fake password-protected file and does not supply a username and password (as would be the case if they requested the file in a browser and cancelled the login prompt).

29. http://httpd.apache.org/docs/current/howto/htaccess.html
30. http://projects.webappsec.org/Server-Misconfiguration
31. http://projects.webappsec.org/Credential-and-Session-Prediction
32. http://projects.webappsec.org/OS-Commanding
33. http://projects.webappsec.org/Path-Traversal
34. http://projects.webappsec.org/URL-Redirector-Abuse
35. http://httpd.apache.org/docs/current/howto/htaccess.html
Behavior: Hackers will often request the .htaccess file from various directories on a website in an effort to find valuable information about how the server is configured. This is usually done to find a "Server Misconfiguration" [36] weakness that might expose a "Credential/Session Prediction" [37], "OS Commanding" [38], "Path Traversal" [39], or "URL Redirector Abuse" [40] vulnerability, among others; the fact that an .htaccess file is exposed at all is a "Server Misconfiguration" vulnerability in itself. In this specific case, the attacker is requesting a different resource that is referenced only from .htaccess. The resource is password protected, but the user has not yet tried to supply credentials, most likely in an attempt to see whether the password-protected file actually exists.
6.3.3.2.5. Brute Force Invalid Credentials

Complexity: Medium (3.0)
Default Response: 1x = Captcha. 2x = Permanent Block.
Cause: Apache is a web server used by many websites on the internet, so hackers will often look for vulnerabilities specific to Apache, since there is a good chance any given website is running it. One such vulnerability involves the use of an .htaccess file [41] to provide directory-level configuration (such as default 404 messages, password-protected resources, and directory indexing options) while not sufficiently protecting the .htaccess file itself. By convention, any resource that provides directory-level configuration should not be exposed to the public: if a user requests .htaccess or a related resource, they should get either a 404 or a 403 error. Unfortunately, not all web servers are configured to block requests for these resources, and in such a scenario a hacker can gain valuable intelligence about the way the server is configured. MWS automatically blocks any request for the .htaccess resource and returns a fake version of the file containing the directives necessary to password-protect a fake resource. Should the user request the password-protected resource, MWS simulates the authentication method defined in the fake .htaccess and the existence of the fake resource. The "Brute Force Invalid Credentials" incident triggers when the user requests the fake password-protected file and repeatedly supplies invalid usernames and passwords (as would be the case if the user were guessing various username and password combinations).
Behavior: Hackers will often request the .htaccess file from various directories on a website in an effort to find valuable information about how the server is configured. This is usually done to find a "Server Misconfiguration" [42] weakness that might expose a "Credential/Session Prediction" [43], "OS Commanding" [44], "Path Traversal" [45], or "URL Redirector Abuse" [46] vulnerability, among others; the fact that an .htaccess file is exposed at all is a "Server Misconfiguration" vulnerability in itself. In this specific case, the attacker is requesting a different resource that is referenced only from .htaccess. The fake resource is password protected, and the user has attempted to authenticate with a large number of bad credentials, most likely in an effort to guess a valid username and password combination such as "admin:admin" or "guest:guest". This is a poor method for locating valid usernames and passwords, especially since the (fake) user database file .htpasswd is actually exposed. A brute-force attack therefore generally means the attacker is less sophisticated.

36. http://projects.webappsec.org/Server-Misconfiguration
37. http://projects.webappsec.org/Credential-and-Session-Prediction
38. http://projects.webappsec.org/OS-Commanding
39. http://projects.webappsec.org/Path-Traversal
40. http://projects.webappsec.org/URL-Redirector-Abuse
41. http://httpd.apache.org/docs/current/howto/htaccess.html
42. http://projects.webappsec.org/Server-Misconfiguration
43. http://projects.webappsec.org/Credential-and-Session-Prediction
44. http://projects.webappsec.org/OS-Commanding
45. http://projects.webappsec.org/Path-Traversal
46. http://projects.webappsec.org/URL-Redirector-Abuse
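The escalation from individual "Invalid Credentials" incidents to a "Brute Force Invalid Credentials" incident is a simple counting rule. A minimal sketch, assuming a hypothetical per-profile counter and the 15x threshold quoted in the default response above (the real aggregation logic in MWS is configurable and more involved):

```python
from collections import Counter

# Assumed threshold, taken from the "15x" default response described above.
BRUTE_FORCE_THRESHOLD = 15

class IncidentAggregator:
    """Escalate repeated low-level incidents from one attacker profile."""

    def __init__(self):
        self.counts = Counter()

    def record(self, profile_id, incident):
        """Record an incident; return an escalated incident name if the
        per-profile threshold is reached, else None."""
        self.counts[(profile_id, incident)] += 1
        if (incident == "Invalid Credentials"
                and self.counts[(profile_id, incident)] == BRUTE_FORCE_THRESHOLD):
            return "Brute Force Invalid Credentials"
        return None
```

Counting per profile matters: two unrelated visitors each guessing a password once should not jointly trigger a brute-force incident.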
6.3.3.2.6. Password Cracked

Complexity: High (4.0)
Default Response: 1x = Permanent Block.
Cause: Apache is a web server used by many websites on the internet, so hackers will often look for vulnerabilities specific to Apache, since there is a good chance any given website is running it. One such vulnerability involves the use of an .htaccess file [47] to provide directory-level configuration (such as default 404 messages, password-protected resources, and directory indexing options) while not sufficiently protecting the .htaccess file itself. By convention, any resource that provides directory-level configuration should not be exposed to the public: if a user requests .htaccess or a related resource, they should get either a 404 or a 403 error. Unfortunately, not all web servers are configured to block requests for these resources, and in such a scenario a hacker can gain valuable intelligence about the way the server is configured. MWS automatically blocks any request for the .htaccess resource and returns a fake version of the file containing the directives necessary to password-protect a fake resource. The directives also allude to the existence of a password database file. If the attacker requests the password database file and then uses a tool such as John the Ripper [48] to crack one of the encrypted passwords, they will be able to authenticate against the fake protected resource successfully. Should the user request the password-protected resource and supply a valid username and password combination (as defined in the password database), the "Password Cracked" incident is triggered.
Behavior: Hackers will often request the .htaccess file from various directories on a website in an effort to find valuable information about how the server is configured. This is usually done to find a "Server Misconfiguration" [49] weakness that might expose a "Credential/Session Prediction" [50], "OS Commanding" [51], "Path Traversal" [52], or "URL Redirector Abuse" [53] vulnerability, among others; the fact that an .htaccess file is exposed at all is a "Server Misconfiguration" vulnerability in itself. In this specific case, the attacker is requesting a different resource that is referenced only from .htaccess. The fake resource is password protected, and the user has supplied valid authentication credentials. The only ways to obtain valid credentials are to brute-force the login (which would produce excessive numbers of "Invalid Credentials" incidents) or to access the fake password database file (usually .htpasswd) and crack one of the encrypted passwords with a password-cracking tool. This represents the final and most complicated step in a successful "Credential/Session Prediction" exploit, and is usually performed long after the attack surface of the site has been fully scoped. Unless there are excessive numbers of "Invalid Credentials" incidents, as in a brute-force attack, the user must also have requested ".htpasswd" and should therefore also have an "Apache Password File Requested" incident. If that incident is missing, the hacker has likely established two independent profiles in MWS.
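The detection step, stripped to its essence, is a lookup against the decoy credential database. The sketch below is purely illustrative: it uses an unsalted MD5 hex digest of a deliberately weak password as the "hash" in the fake database, whereas real .htpasswd files use crypt(3), APR1-MD5, or bcrypt formats, and the decoy contents in MWS are generated by the appliance:

```python
import hashlib

# Illustrative decoy database: one user whose deliberately weak password
# ("letmein") is meant to be crackable by a tool like John the Ripper.
# Real .htpasswd hashes are NOT plain MD5 hex digests.
FAKE_HTPASSWD = {"admin": hashlib.md5(b"letmein").hexdigest()}

def password_cracked(user, password):
    """True when the supplied credentials match the decoy database,
    i.e. the attacker fetched the fake .htpasswd and cracked the hash."""
    stored = FAKE_HTPASSWD.get(user)
    if stored is None:
        return False
    return hashlib.md5(password.encode()).hexdigest() == stored
```

A match here is a high-confidence (4.0) signal precisely because it cannot happen by accident: the attacker must have found the hidden file, extracted the hash, and cracked it.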
47. http://httpd.apache.org/docs/current/howto/htaccess.html
48. http://www.openwall.com/john/
49. http://projects.webappsec.org/Server-Misconfiguration
50. http://projects.webappsec.org/Credential-and-Session-Prediction
51. http://projects.webappsec.org/OS-Commanding
52. http://projects.webappsec.org/Path-Traversal
53. http://projects.webappsec.org/URL-Redirector-Abuse

6.3.4. Cookie Processor

Cookies are used by web applications to maintain state for a given user. They consist of key/value pairs that are passed around in headers and also stored client side. Each key/value pair has various attributes, including which domains and paths it is valid for, as well as security restrictions and expiration information. Because this is the primary way for a web application to maintain a session, hackers will often try to manipulate cookie values manually in an effort to escalate access or hijack someone else's session. All of the attacks applicable to modifying form parameters are also applicable to modifying cookie parameters. It may even be possible, although unlikely, to find an SQL injection flaw in a cookie parameter.
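The tamper check behind this processor is straightforward. A minimal Python sketch, using a hypothetical decoy cookie name and value (the actual fake cookie is set by the processor's configuration, described below):

```python
# Hypothetical decoy cookie; MWS deliberately picks names that look
# vulnerable, such as 'debug' or 'admin'.
FAKE_COOKIE_NAME = "debug"
FAKE_COOKIE_VALUE = "0"

def cookie_incident(request_cookies):
    """Return an incident name if the decoy cookie was tampered with.

    request_cookies: dict of cookie name -> value from the incoming request.
    """
    value = request_cookies.get(FAKE_COOKIE_NAME)
    if value is not None and value != FAKE_COOKIE_VALUE:
        return "Cookie Parameter Manipulation"
    return None  # absent or unchanged: no incident
```

Because legitimate browsers echo cookies back unchanged, any altered value is a strong signal that a human or tool is editing requests by hand.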
6.3.4.1. Configuration
Table 6.4. Cookie Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default True): Whether or not to enable this processor for HTTP traffic.

Advanced
  Cookie (String, default "Cookie"): The fake cookie to use.
  Incident: Cookie Parameter Manipulation (Boolean, default True): The user modified the value of a cookie which should never be modified.
6.3.4.2. Incidents
6.3.4.2.1. Cookie Parameter Manipulation

Complexity: Medium (3.0)
Default Response: 1x = Slow Connection 2-6 seconds and Permanent Strip Inputs in 10 minutes.
Cause: MWS adds a fake cookie to the websites it protects. The cookie is intended to look as though it is part of the application's overall functionality, and is often chosen to appear vulnerable (for example, naming the cookie 'debug' or 'admin' and giving it a numerical or Boolean value). The "Cookie Parameter Manipulation" incident is triggered whenever the value of the fake cookie is changed.
Behavior: Modifying the inputs of a page is the foundation of a large variety of attack vectors: to get the backend server to do something different, you need to supply different input values (via cookie, query string, URL, or form parameters). Depending on the value the user chose for the input, the attack could fall under a large number of vectors, including "Buffer Overflow" [54], "XSS" [55], "Denial of Service" [56], "Fingerprinting" [57], "Format String" [58], "HTTP Response Splitting" [59], "Integer Overflow" [60], and "SQL Injection" [61], among many others. A common practice is to first spider the website, then test every single input on the site for a specific set of vulnerabilities. For example, the user might index the site, visit each page, and test every exposed input (cookie, query string, and form inputs) with a list of SQL injection tests designed to break the resulting page if the input is vulnerable. The entire process (which can involve thousands of requests) can be automated and return a clean report on which inputs should be targeted. Because the MWS cookie looks just like a normal application cookie, a spider that tests all inputs will eventually test the fake cookie as well. This means that a large volume of this incident is likely due to such an automated process, and it should be assumed that the values tested against the fake cookie have also been tested against the rest of the cookies on the site.

54. http://projects.webappsec.org/Buffer-Overflow
55. http://projects.webappsec.org/Cross-Site+Scripting
56. http://projects.webappsec.org/Denial-of-Service
57. http://projects.webappsec.org/Fingerprinting
58. http://projects.webappsec.org/Format-String
59. http://projects.webappsec.org/HTTP-Response-Splitting
60. http://projects.webappsec.org/Integer-Overflows
61. http://projects.webappsec.org/SQL-Injection
6.3.5. File Processor

When developing websites, administrators will often rename files to make room for a newer version, or archive older files. A common vulnerability arises when these older files are left in web-accessible directories and contain non-static resources. For example, consider a developer who renames shopping_cart.php to shopping_cart.php.bak. If an attacker looks for php files and tries to access each of them with a .bak extension, they may stumble across the backup file. Because the server is not configured to parse .bak files as php, it will serve the unexecuted script source code to the client. This technique can yield database credentials and system credentials, and can expose more serious vulnerabilities in the code itself. The goal of this processor is to detect when a user is attempting to find such unreferenced files.
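A sketch of how such a check might work. The token list and the blocked-extension set here are hypothetical examples; in MWS both are configurable, as the table below shows:

```python
import os

# Example suspicious tokens; the real list is configurable in MWS.
SUSPICIOUS_TOKENS = {".bck", ".bak", ".old", ".zip", ".tar", ".gz", ".sql"}

def classify_request(path, file_exists, blocked_extensions=frozenset({".sql"})):
    """Classify a request for a possibly sensitive file.

    A request for a missing file with a suspicious token is a
    "Suspicious Filename" incident; a request for an existing file whose
    extension is configured to block is "Suspicious File Exposed".
    """
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUSPICIOUS_TOKENS:
        return None
    if file_exists and ext in blocked_extensions:
        return "Suspicious File Exposed"
    if not file_exists:
        return "Suspicious Filename"
    return None  # file exists and is not configured to block
```

Note the asymmetry: a request for an existing, non-blocked file raises nothing, which is how legitimate files avoid being flagged.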
6.3.5.1. Configuration
Table 6.5. File Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default True): Whether traffic should be passed through this processor.

Advanced
  Block Response (Configurable HTTP Response): The response to return when a request is blocked due to a matching suspicious token rule with blocking enabled.
  Suspicious Tokens (Collection): The configured suspicious extensions.
  Incident: Suspicious File Exposed (Boolean, default True): A file which has a suspicious filename is publicly available.
  Incident: Suspicious Filename (Boolean, default True): A file with a filename that contains a suspicious token was requested.
6.3.5.2. Incidents
6.3.5.2.1. Suspicious Filename

Complexity: Suspicious (1.0)
Default Response: 10x = Suspicious Resource Enumeration Incident.
Cause: MWS has a list of file tokens which represent potentially sensitive files. For example, developers will often rename source files with a ".bck" extension during debugging, and sometimes forget to delete the backup when they are done. Hackers often look for these leftover source files. MWS is configured to look for any request for a file with a ".bck" extension (as well as any other configured extensions), and to trigger this incident if the file does not exist. The incident will not be triggered if the file does in fact exist and the extension is not configured to block the response; this avoids legitimate files being flagged as suspicious filenames.
Behavior: Many websites host specific files that contain valuable information for a hacker, generally including data such as passwords, SQL schemas, and source code. When hackers try to breach a site, they will often check whether they can locate some of these special files in order to make their job easier. For example, if a hacker sees that the home page is called "index.php", they may try to request "index.php.bak", because if it exists, it will be returned as raw source code. This is usually an effort to exploit a "Predictable Resource Location" [62] vulnerability. Automated scanners will generally test all of these extensions (.bck, .bak, .zip, .tar, .gz, and so on) against every legitimate file located through simple spidering. Because this incident is only created if the requested file does not actually exist, it does not represent a successful exploit.
6.3.5.2.2. Suspicious File Exposed

Complexity: Suspicious (1.0)
Default Response: 10x = Suspicious Resource Enumeration Incident.
Cause: MWS has a list of file tokens which represent potentially sensitive files. For example, developers will often rename source files with a ".bck" extension during debugging, and sometimes forget to delete the backup when they are done. Hackers often look for these leftover source files. MWS is configured to look for any request for a file with a ".bck" extension (as well as any other configured extensions), and to trigger this incident if the extension is configured as illegal. The incident will only be triggered if the file actually exists and the extension is configured to be blocked by default. For example, the user might request "database.sql". If the .sql extension is configured to block and the file actually exists on the server, this incident is generated. If "database.sql" does not exist, only a "Suspicious Filename" incident is created.
Behavior: Many websites host specific files that contain valuable information for a hacker, generally including data such as passwords, SQL schemas, and source code. When hackers try to breach a site, they will often check whether they can locate some of these special files in order to make their job easier. For example, if a hacker sees that the home page is called "index.php", they may try to request "index.php.bak", because if it exists, it will be returned as raw source code. This is usually an effort to exploit a "Predictable Resource Location" [63] vulnerability. Automated scanners will generally test all of these extensions (.bck, .bak, .zip, .tar, .gz, and so on) against every legitimate file located through simple spidering. This incident is only triggered when the user requested a file that would have been successfully returned had MWS not blocked it. For example, the user might request "database.sql" and actually get a 200 response from the server, indicating that the file exists and is accessible to everyone. If the Mykonos system is configured to mark the ".sql" extension as illegal, MWS will block the request and trigger this incident, preventing the sensitive file from being exposed to an actual malicious user. If this incident occurs, the server administrator should immediately remove the sensitive file or change its permissions so it is no longer publicly accessible.
6.3.5.2.3. Suspicious Resource Enumeration

Complexity: Low (2.0)
Default Response: 1x = 5 day Block.
Cause: MWS has a list of file tokens which represent potentially sensitive files. For example, developers will often rename source files with a ".bck" extension during debugging, and sometimes forget to delete the backup when they are done. Hackers often look for these leftover source files. MWS is configured to look for any request for a file with a ".bck" extension (as well as any other configured extensions), and to trigger a Suspicious Filename incident if the file does not exist. Should the Suspicious Filename incident be triggered several times, this incident is then triggered.

62. http://projects.webappsec.org/Predictable-Resource-Location
63. http://projects.webappsec.org/Predictable-Resource-Location
Behavior: Many websites host specific files that contain valuable information for a hacker, generally including data such as passwords, SQL schemas, and source code. When hackers try to breach a site, they will often check whether they can locate some of these special files in order to make their job easier. For example, if a hacker sees that the home page is called "index.php", they may try to request "index.php.bak", because if it exists, it will be returned as raw source code. This is usually an effort to exploit a "Predictable Resource Location" [64] vulnerability. Automated scanners will generally test all of these extensions (.bck, .bak, .zip, .tar, .gz, and so on) against every legitimate file located through simple spidering. The first few times a user requests a filename containing a suspicious token, they will only get "Suspicious Filename" incidents. If they request a large volume of filenames with suspicious tokens, however, the "Suspicious Resource Enumeration" incident is generated. This incident represents a user who is actively scanning the site with very aggressive tactics to find unlinked and sensitive data.
6.3.6. Hidden Input Form Processor

Many webmasters create forms which post to a common form-handling service, using hidden fields to indicate how the service should handle the data. A common hacking technique is to look for these hidden parameters and see if there is any way to change the behavior of the service by manipulating its input parameters. This processor is responsible for injecting a fake hidden input into forms in HTML responses and ensuring that those values have not been modified when they are posted back to the server.
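The injection side can be pictured with a short sketch. This is an illustration, not the MWS implementation: a regex-based rewrite that appends a hypothetical decoy input right after each POST form's opening tag (the real processor chooses input names and values from its configuration, and a regex is a simplification of proper HTML rewriting):

```python
import re

# Hypothetical decoy input; real names/values come from the configuration.
FAKE_INPUT = '<input type="hidden" name="debug" value="0">'

# Match an opening <form> tag whose method is POST (case-insensitive).
FORM_RE = re.compile(r'(<form\b[^>]*method=["\']?post["\']?[^>]*>)', re.I)

def inject_hidden_input(html):
    """Insert the decoy hidden input after each <form ... method="post"> tag."""
    return FORM_RE.sub(r"\1" + FAKE_INPUT, html)
```

GET forms are left untouched, mirroring the processor's focus on POST forms described in the incident sections below.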
6.3.6.1. Configuration
Table 6.6. Hidden Input Form Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default True): Whether traffic should be passed through this processor.

Advanced
  Hidden Input Parameter (Collection): The possible hidden inputs on a page.
  Inject Input Enabled (Boolean, default True): Whether to inject hidden inputs into HTML forms.
  Maximum Injections (Integer, default 3): The maximum number of fake hidden parameters that will be added to any given URL.
  Strip Fake Input (Boolean, default True): Whether to remove the fake input value from the posted form results before proxying the request to the backend servers. This should only be turned off if there is some additional security implemented on the form, where its contents are signed on the client and validated on the server.
  Incident: Hidden Parameter Manipulation (Boolean, default True): The user submitted the form and the value of the injected parameter is not what was expected.
  Incident: Hidden Input Type Manipulation (Boolean, default True): The user submitted the form and the value of the injected parameter is not what was expected. It was also modified to post a file.

64. http://projects.webappsec.org/Predictable-Resource-Location
6.3.6.2. Incidents
6.3.6.2.1. Parameter Type Manipulation

Complexity: High (4.0)
Default Response: 1x = Permanent Strip Inputs.
Cause: MWS inspects outgoing traffic for HTML forms with a "POST" method type. Forms that post to a local URL (within the same domain) are modified to include a fake hidden input with a defined value. The input is intended to look as though it was always part of the form, and is often chosen to appear vulnerable (for example, naming the input 'debug' or 'loglevel' and giving it a numerical or Boolean value). The input will, however, always be assigned a value that can be represented as a string of characters (in other words, not binary data). The "Parameter Type Manipulation" incident is triggered whenever the fake hidden input is modified from its originally assigned value in order to submit a multipart file.
Behavior: Modifying the inputs of a page is the foundation of a large variety of attack vectors: to get the backend server to do something different, you need to supply different input values (via cookie, query string, URL, or form parameters). Depending on the value the user chose for the input, the attack could fall under a large number of vectors, including "Buffer Overflow" [65], "XSS" [66], "Denial of Service" [67], "Fingerprinting" [68], "Format String" [69], "HTTP Response Splitting" [70], "Integer Overflow" [71], and "SQL Injection" [72], among many others. Unlike a normal "Hidden Parameter Manipulation" incident, this version is triggered when the user changes the encoding of the form and submits the hidden input as a file post. This is likely an attempt either to achieve a "Buffer Overflow" or to exploit a filter evasion weakness that might otherwise have blocked the value being submitted. A common practice is to first spider the website, then test every single input on the site for a specific set of vulnerabilities. For example, the user might index the site, visit each page, and test every exposed input (cookie, query string, and form inputs) with a list of SQL injection tests designed to break the resulting page if the input is vulnerable. The entire process (which can involve thousands of requests) can be automated and return a clean report on which inputs should be targeted. Because MWS injects several fake inputs, a spider that tests all inputs will eventually test the fake input as well. This means that a large volume of this incident is likely due to such an automated process, and it should be assumed that the values tested against the fake input have also been tested against the rest of the inputs on the site.
65. http://projects.webappsec.org/Buffer-Overflow
66. http://projects.webappsec.org/Cross-Site+Scripting
67. http://projects.webappsec.org/Denial-of-Service
68. http://projects.webappsec.org/Fingerprinting
69. http://projects.webappsec.org/Format-String
70. http://projects.webappsec.org/HTTP-Response-Splitting
71. http://projects.webappsec.org/Integer-Overflows
72. http://projects.webappsec.org/SQL-Injection
6.3.6.2.2. Hidden Parameter Manipulation

Complexity: Medium (3.0)
Default Response: 1x = Slow Connection 2-6 seconds. 2x = Logout User. 3x = Strip Inputs.
Cause: MWS inspects outgoing traffic for HTML forms with a "POST" method type. Forms that post to a local URL (within the same domain) are modified to include a fake hidden input with a defined value. The input is intended to look as though it was always part of the form, and is often chosen to appear vulnerable (for example, naming the input 'debug' or 'loglevel' and giving it a numerical or Boolean value). The "Hidden Parameter Manipulation" incident is triggered whenever the fake hidden input is modified from its originally assigned value.
Behavior: Modifying the inputs of a page is the foundation of a large variety of attack vectors: to get the backend server to do something different, you need to supply different input values (via cookie, query string, URL, or form parameters). Depending on the value the user chose for the input, the attack could fall under a large number of vectors, including "Buffer Overflow" [73], "XSS" [74], "Denial of Service" [75], "Fingerprinting" [76], "Format String" [77], "HTTP Response Splitting" [78], "Integer Overflow" [79], and "SQL Injection" [80], among many others. A common practice is to first spider the website, then test every single input on the site for a specific set of vulnerabilities. For example, the user might index the site, visit each page, and test every exposed input (cookie, query string, and form inputs) with a list of SQL injection tests designed to break the resulting page if the input is vulnerable. The entire process (which can involve thousands of requests) can be automated and return a clean report on which inputs should be targeted. Because MWS injects several fake inputs, a spider that tests all inputs will eventually test the fake input as well. This means that a large volume of this incident is likely due to such an automated process, and it should be assumed that the values tested against the fake input have also been tested against the rest of the inputs on the site.
6.3.7. Hidden Link Processor

When trying to exploit a site, hackers will often scan its contents in search of directories and files of interest. Because this activity is done at the source level, the hacker finds every file referenced, whereas a user viewing the website sees only the links that are visible in the rendered HTML. This processor injects into documents a fake link that references an interesting-looking file. The link is injected in such a way that it is not rendered when the browser loads the page, so no normal user would ever find or click it, but a scanner or a hacker reading the source code likely will.
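A simplified sketch of the injection. The decoy link target here is hypothetical, and hiding the link inside a display:none container is just one possible non-rendering technique (the actual markup MWS injects is defined by its configuration):

```python
# Hypothetical decoy link wrapped in a container a browser never renders.
HIDDEN_LINK = '<div style="display:none"><a href="/private/config.old">config</a></div>'

def inject_hidden_link(html):
    """Append the decoy link just before </body> so browsers never show it,
    while source-level scanners still discover and follow it."""
    if "</body>" in html:
        return html.replace("</body>", HIDDEN_LINK + "</body>", 1)
    return html + HIDDEN_LINK  # fragment without a body tag
```

Only source-reading tools and humans inspecting the HTML ever see the link, which is what makes any request for it (or its fake directory) a reliable signal.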
6.3.7.1. Configuration
Table 6.7. Hidden Link Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default True): Whether traffic should be passed through this processor.

Advanced
  Hidden Links (Configurable Hidden Links): The set of hidden links that can be injected into the site.
  Inject Link Enabled (Boolean, default True): Whether to inject the link into HTTP responses.
  Incident: Link Directory Indexing (Boolean, default True): The user requested a directory index on one of the fake parent directories of the linked file.
  Incident: Link Directory Spidering (Boolean, default True): The user requested a resource inside the fake directory of the linked file.
  Incident: Malicious Resource Requested (Boolean, default True): The user requested the fake linked resource.

73. http://projects.webappsec.org/Buffer-Overflow
74. http://projects.webappsec.org/Cross-Site+Scripting
75. http://projects.webappsec.org/Denial-of-Service
76. http://projects.webappsec.org/Fingerprinting
77. http://projects.webappsec.org/Format-String
78. http://projects.webappsec.org/HTTP-Response-Splitting
79. http://projects.webappsec.org/Integer-Overflows
80. http://projects.webappsec.org/SQL-Injection
6.3.7.2. Incidents
6.3.7.2.1. Link Directory Indexing

Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and 1 day Block.
Cause: MWS injects a hidden link into pages on the protected web application. This link is not exposed visually to users of the website. In order to find the link, a user would need to manually inspect the source code of the page. If a user finds the hidden link code in the HTML, and attempts to get a directory file listing from the directory the link points to, this incident will be triggered.
Behavior: A common technique for hackers when scoping the attack surface of a website is to spider the site and collect the locations of all of its pages. This is generally done using a simple script that looks for URLs in the returned HTML of the home page, then requests those pages and checks their source for more URLs, and so forth. Legitimate search engine spiders do this as well; the difference between a legitimate spider and a malicious user is how aggressively they use a newly discovered URL to derive other URLs. This incident triggers when the user goes beyond just checking the linked URL and also attempts to get a file listing from the directory the URL points to. A legitimate spider would not do this, because it is considered fairly invasive. This activity is generally looking for a "Directory Indexing81" weakness on the server, in an effort to locate unlinked and possibly sensitive resources.
6.3.7.2.2. Link Directory Spidering

Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and 5 day Block in 6 minutes.
Cause: MWS injects a hidden link into pages on the protected web application. This link is not exposed visually to users of the website. In order to find the link, a user would need to manually inspect the source code of the page. If a user finds the hidden link code in the HTML, and attempts to request some other arbitrary file in the same fake directory as the link, this incident will be triggered.
Behavior: A common technique for hackers when scoping the attack surface of a website is to spider the site and collect the locations of all of its pages. This is generally done using a simple script that looks for URLs in the returned HTML of the home page, then requests those pages and checks their source for more URLs, and so forth. Legitimate search engine spiders do this as well; the difference between a legitimate spider and a malicious user is how aggressively they use a newly discovered URL to derive other URLs. This incident triggers when the user goes beyond just checking the linked URL and also attempts to request one or more arbitrary files inside the same directory as the file referenced by the hidden link. A legitimate spider would not do this, because it is considered fairly invasive. This activity is generally looking for a "Directory Indexing82" weakness on the server, or a "Predictable Resource Location83" vulnerability, in an effort to locate unlinked and possibly sensitive resources.

81 http://projects.webappsec.org/Directory-Indexing
6.3.8. Query String Processor

Hackers tend to manipulate the values of query string parameters in order to get the application to behave differently. The goal of this processor is to add fake query string parameters to some of the links and forms in the page, and verify that they do not get modified when accessed by the user.
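The mechanism can be sketched roughly as below. The parameter name `_utmc` and its value are invented for illustration; in the appliance the fake parameters come from the configurable Fake Parameters collection, and the decoy is also stripped before the request reaches the backend (the Strip Fake Input behavior).

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Invented decoy parameter; the real set is drawn from configuration.
FAKE_PARAM, FAKE_VALUE = "_utmc", "7319"

def inject_fake_param(url):
    """Outbound: append the decoy to links that already carry a query string."""
    parts = urlsplit(url)
    if not parts.query:
        return url
    query = parse_qsl(parts.query) + [(FAKE_PARAM, FAKE_VALUE)]
    return urlunsplit(parts._replace(query=urlencode(query)))

def strip_and_check(url):
    """Inbound: flag tampering if the decoy value changed, then strip the
    decoy before the request is proxied to the backend."""
    parts = urlsplit(url)
    pairs = parse_qsl(parts.query)
    tampered = any(k == FAKE_PARAM and v != FAKE_VALUE for k, v in pairs)
    clean = [(k, v) for k, v in pairs if k != FAKE_PARAM]
    return urlunsplit(parts._replace(query=urlencode(clean))), tampered
```

Because the decoy is not part of the real application, any change to its value is unambiguous evidence of manual query-string manipulation.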
6.3.8.1. Configuration
Table 6.8. Query String Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default: True)
    Whether traffic should be passed through this processor.

Advanced
  Fake Parameters (Collection, default: Collection)
    The collection of fake parameters to add to the links which already have parameters.
  Inject Parameter Enabled (Boolean, default: True)
    Whether to inject query string parameters on URLs in HTTP responses.
  Maximum Injections (Integer, default: 3)
    The maximum number of fake parameter injections to perform per HTTP response.
  Strip Fake Input (Boolean, default: True)
    Whether to remove the fake input value from the query string before proxying the request to the backend servers. This should only be turned off if there is some additional security implemented on the site, where links are signed on the client and validated on the server.
  Incident: Query Parameter Manipulation (Boolean, default: True)
    The user manually modified the value of a query string parameter.
82 http://projects.webappsec.org/Directory-Indexing
83 http://projects.webappsec.org/Predictable-Resource-Location
6.3.8.2. Incidents
6.3.8.2.1. Query Parameter Manipulation

Complexity: Low (2.0)
Default Response: 3x = Slow Connection 2-6 seconds. 5x = 1 day Strip Inputs.
Cause: MWS injects a fake query parameter into some of the links of the protected web site. This query parameter has a known value, and should never change, because it is not part of the actual web application. If a user modifies the query parameter value, this incident will be triggered.
Behavior: Query parameters represent the most visible form of user input a web application exposes. They are clearly visible in the address bar, and can be easily changed by even an inexperienced user. However, most users do not attempt to change values directly in the query string unless they are trying to perform some action the website does not normally expose through its interface, or does not make sufficiently easy. Because it is so easy for a normal user to accidentally change a query parameter, this incident alone is not considered strictly malicious. However, depending on the value that is submitted, this could be part of a number of different exploit attempts, including "Buffer Overflow84", "XSS85", "Denial of Service86", "Fingerprinting87", "Format String88", "HTTP Response Splitting89", "Integer Overflow90", and "SQL injection91".
6.3.9. Robots Processor

The Robots.txt proxy processor is responsible for catching malicious spiders that do not behave in accordance with established standards for spidering. Hackers often utilize the extra information sites expose to spiders, and then use that information to access resources normally not linked from the public site. Because this activity effectively breaks established standards for spidering, this processor will also identify hackers who are using the information maliciously.
6.3.9.1. Configuration
Table 6.9. Robots Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default: True)
    Whether traffic should be passed through this processor.

Advanced
  Fake Disallowed Directories (String, default: Random)
    The path to a fake directory to add to the disallow rules in the robots.txt file. This path should be completely fake and should not overlap with actual directories.
  Incident: Spider Configuration Requested (Boolean, default: True)
    The user requested the spider rules file robots.txt. This may not be a problem, as good spiders will do this too, but users should never do it.
  Incident: Malicious Spider Activity (Boolean, default: True)
    The user requested a resource which was restricted in the spider rules file, indicating this user is not a good spider, but is spidering the site anyway.

84 http://projects.webappsec.org/Buffer-Overflow
85 http://projects.webappsec.org/Cross-Site+Scripting
86 http://projects.webappsec.org/Denial-of-Service
87 http://projects.webappsec.org/Fingerprinting
88 http://projects.webappsec.org/Format-String
89 http://projects.webappsec.org/HTTP-Response-Splitting
90 http://projects.webappsec.org/Integer-Overflows
91 http://projects.webappsec.org/SQL-Injection
6.3.9.2. Incidents
6.3.9.2.1. Spider Configuration Requested

Complexity: Informational (0.0)
Default Response: None.
Cause: One of the standard resources commonly exposed by websites is called robots.txt, which is used to instruct search engines on how to spider the website. Two of the more important directives are "allow" and "disallow" — used to identify which directories a spider should index, and which directories it should stay away from.
Despite the best practice being to actually lock down any resource that should not be exposed, some webmasters simply add a "disallow" statement so that those resources do not get indexed and therefore are never found by users. This technique does not work, because attackers will often access robots.txt and intentionally traverse the "disallow" directories in search of vulnerabilities. In effect, the listing of such directories points hackers in the direction of the most sensitive resources on the site.
Mykonos Web Security will intercept requests for robots.txt and either generate a completely fake robots.txt file (if one does not exist), or modify the existing version by injecting a fake directory as a disallow directive. The "Spider Configuration Requested" incident (which is considered informational only) is triggered whenever a user requests robots.txt (including legitimate spiders).
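The interception described above can be sketched as follows. The decoy directory name is invented for the example; the appliance generates a random fake path per the Fake Disallowed Directories parameter, and this sketch makes no claim about the appliance's actual code.

```python
# Invented decoy path; the appliance randomizes its own.
FAKE_DIR = "/private-reports/"

def rewrite_robots(existing=None):
    """Serve robots.txt: fabricate one if the site has none, otherwise
    append the decoy directory as an extra Disallow rule."""
    decoy = "Disallow: " + FAKE_DIR
    if existing is None:
        return "User-agent: *\n" + decoy + "\n"
    return existing.rstrip("\n") + "\n" + decoy + "\n"

def is_malicious_spider(path):
    """A request inside the decoy directory means the client read
    robots.txt and deliberately entered a disallowed area."""
    return path.startswith(FAKE_DIR)
```

A well-behaved spider will fetch the file and honor the rule; only a client that read the rule and then deliberately ignored it ever touches the decoy directory, which is what the Malicious Spider Activity incident keys on.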
Behavior: Requesting robots.txt occurs in two different scenarios. The first is where a legitimate spider, such as Googlebot, attempts to index the website. In this case, the robots.txt file will be requested, and no requests from that client will be issued to the disallow directories. In the second scenario, a malicious user requests robots.txt and then indexes some or all of the disallow directories. The fact that this incident was triggered does not necessarily mean the user is malicious (which is why it is informational), but it does exclude the user from the general population of average users, because the user is attempting to gain information not intended for a normal user.
This type of behavior is generally observed while the client is attempting to establish the overall attack surface of the website (or, in the case of a legitimate spider, they are attempting to establish the desired index limitations).
6.3.9.2.2. Malicious Spider Activity

Complexity: Low (2.0)
Default Response: 1x = Captcha and Slow Connection 2-6 seconds. 6x = 1 day Block.
Cause: One of the standard resources that just about every website should expose is called robots.txt. This resource is used to instruct search engines on how to spider the website. Two of the more important directives are "allow" and "disallow", which identify the directories a spider should index and the directories it should stay away from. Good practice for any website is to lock down any resource that should not be exposed. However, some webmasters simply add a "disallow" statement so that those resources do not get indexed and therefore are never found by users. This technique does not work, because attackers will often access robots.txt and intentionally traverse the "disallow" directories in search of vulnerabilities. In effect, the listing of such directories points hackers in the direction of the most sensitive resources on the site.

MWS will intercept requests for robots.txt and either generate a completely fake robots.txt file (if one does not exist), or modify the existing version by injecting a fake directory as a disallow directive. The "Malicious Spider Activity" incident is triggered whenever a user attempts to request a resource in the fake disallow directory, or attempts to perform a directory index on the disallow directory.
Behavior: Requesting robots.txt occurs in two different scenarios. The first is where a legitimate spider, such as Googlebot, attempts to index the website. In this case, the robots.txt file will be requested, and no requests from that client will be issued to the disallow directories. In the second scenario, a malicious user requests robots.txt and then indexes some or all of the disallow directories. In this specific case, the user has requested robots.txt to obtain the list of disallow directories, and then started searching for resources in those directories. This activity is performed to find a "Predictable Resource Location92" vulnerability.

Because spidering a directory tends to be a noisy process (lots of requests), there are likely to be many of these incidents if there are any. The sum of occurrences of this incident represents the type of activity the user is performing to index a directory. The set of URLs for which this incident is triggered represents the filenames the malicious user is testing for. For example, if they were searching for PDF files that contain stock information, there would be an incident for each filename with a PDF extension they tried to request. There is a very strong chance that if the filename was requested in the disallow directory, it was probably requested in every other directory on the site as well.

This type of behavior is generally observed while the client is attempting to establish the overall attack surface of the website (or, in the case of a legitimate spider, to establish the desired index limitations).
6.4. Activity Processors
6.4.1. Custom Authentication Processor

The custom authentication processor is designed to add strong and secure authentication to any page in the protected application. The authentication processor also logs malicious activity such as invalid logins and modification of cookies or query parameters.
6.4.1.1. Configuration
Table 6.10. Custom Authentication Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default: True)
    Whether traffic should be passed through this processor.
  User Accounts (Collection, default: Collection)
    A collection of user accounts to use for this processor.

Advanced
  Auth Cookie Name (String, default: Random)
    The name of the authentication cookie.
  Login Page Timeout (Integer, default: 10 Minutes)
    The number of seconds a login page can be used before it times out. This is intended to prevent attacks based on watching network traffic. It should be as short as is tolerable.
  MD5 Script Name (String, default: Random)
    The name of the Javascript resource that contains the MD5 code.
  Session Timeout (Integer, default: 1 Hour)
    The number of seconds a session can be idle before it times out.
  User Accounts (Collection, default: Null)
    Configure user account login credentials for this processor.
  Incident: Auth Cookie Tampering (Boolean, default: True)
    The user has modified the cookie used to manage custom authentication, probably in an attempt to expose sensitive information or bypass access restrictions.
  Incident: Auth Input Parameter Tampering (Boolean, default: True)
    The user has modified the parameters used to manage custom authentication, probably in an attempt to expose sensitive information or bypass the authentication mechanism.
  Incident: Auth Invalid Login (Boolean, default: True)
    The user has attempted to log in but supplied invalid credentials. This could be perfectly normal, but large numbers of this type of incident would indicate a brute force attack.
  Incident: Auth Query Parameter Tampering (Boolean, default: True)
    The user has modified the query parameters that were submitted when the user was originally asked to log in. This is likely an attempt to probe the authentication mechanism for exploits.

92 http://projects.webappsec.org/Predictable-Resource-Location
6.4.1.2. Incidents
6.4.1.2.1. Auth Input Parameter Tampering

Complexity: Medium (3.0)
Default Response: 3x = Warn User, 5x = Captcha. 9x = 1 day Strip Inputs.
Cause: MWS provides the capability of password protecting any URL on the protected site. This means that if a user attempts to access that URL, they will be prompted to enter a username and password before the original request is allowed to be completed. This incident is triggered when a user attempts to manipulate the hidden form parameters used to handle authentication.
Behavior: Manipulating hidden input fields in a form, for whatever reason, is generally considered malicious. In this case, since the form is being used to password protect a resource, it is likely that the attacker is trying to bypass the authentication by finding a vulnerability in the authentication mechanism. Depending on the modified value they submit, they could be attempting to launch a "Buffer Overflow93", "XSS94", "Denial of Service95", "Fingerprinting96", "Format String97", "HTTP Response Splitting98", "Integer Overflow99", or "SQL injection100" attack, among many others.
6.4.1.2.2. Auth Query Parameter Tampering

Complexity: Low (2.0)
Default Response: 1x = Warn User. 2x = 1 day Strip Inputs.
Cause: MWS provides the capability of password protecting any URL on the protected site. This means that if a user attempts to access that URL, they will be prompted to enter a username and password before the original request is allowed to be completed. This incident is triggered when a user attempts to manipulate the query parameters that were submitted with the original unauthenticated request, after authentication has been completed.
Behavior: Manipulating query parameters after authenticating is not very easy to do without a third-party tool, and has no legitimate purpose. As such, this type of behavior is most likely related to a user who is trying to smuggle a malicious payload through a network or web firewall. Depending on the value the user submits for the modified query string, they could be attempting a "Buffer Overflow101", "XSS102", "Denial of Service103", "Fingerprinting104", "Format String105", "HTTP Response Splitting106", "Integer Overflow107", or "SQL injection108" attack, among many others. One interesting note is that the user actually authenticated in order to cause this incident. As such, it is likely that the account in question has been compromised and its password should be changed. It is also possible, however, that the true owner of the account executed the malicious action, in which case the account owner should potentially be banned.
6.4.1.2.3. Auth Cookie Tampering

Complexity: Medium (3.0)
Default Response: 1x = Warn User, 2x = Captcha. 3x = 1 day Strip Inputs.
Cause: MWS provides the capability of password protecting any URL on the protected site. This means that if a user attempts to access that URL, they will be prompted to enter a username and password before the original request is allowed to be completed. This incident is triggered when a user attempts to manipulate the cookie used to maintain the authenticated session once the user logs in.
Behavior: Manipulating cookies is not easy to do without a third-party tool, and has no legitimate purpose. As such, this type of behavior is most likely related to a user who is trying to perform a "Credential/Session Prediction109" attack, or execute an input-based attack such as a "Buffer Overflow110", "XSS111", "Denial of Service112", "Fingerprinting113", "Format String114", "HTTP Response Splitting115", "Integer Overflow116", or "SQL injection117" attack, among many others. One interesting note is that the user actually authenticated in order to cause this incident. As such, it is likely that the account in question has been compromised and its password should be changed. It is also possible, however, that the true owner of the account executed the malicious action, in which case the account owner should potentially be banned.

93 http://projects.webappsec.org/Buffer-Overflow
94 http://projects.webappsec.org/Cross-Site+Scripting
95 http://projects.webappsec.org/Denial-of-Service
96 http://projects.webappsec.org/Fingerprinting
97 http://projects.webappsec.org/Format-String
98 http://projects.webappsec.org/HTTP-Response-Splitting
99 http://projects.webappsec.org/Integer-Overflows
100 http://projects.webappsec.org/SQL-Injection
101 http://projects.webappsec.org/Buffer-Overflow
102 http://projects.webappsec.org/Cross-Site+Scripting
103 http://projects.webappsec.org/Denial-of-Service
104 http://projects.webappsec.org/Fingerprinting
105 http://projects.webappsec.org/Format-String
106 http://projects.webappsec.org/HTTP-Response-Splitting
107 http://projects.webappsec.org/Integer-Overflows
108 http://projects.webappsec.org/SQL-Injection
6.4.1.2.4. Authentication Brute Force

Complexity: Medium (3.0)
Default Response: 1x = Captcha. 2x = 1 day Block.
Cause: MWS provides the capability of password protecting any URL on the protected site. This means that if a user attempts to access that URL, they will be prompted to enter a username and password before the original request is allowed to be completed. This incident is triggered when a user submits a large volume of invalid username and password combinations.
Behavior: Submitting a single invalid username or password is likely a user typo, and is not necessarily malicious. However, it does represent a security event, and a large number of these events may represent a more serious threat such as "Brute Force118". It is possible, however, that the invalid username or password might also be an attack vector targeted at the authentication mechanism, such as a "Buffer Overflow119", "XSS120", "Denial of Service121", "Fingerprinting122", "Format String123", "HTTP Response Splitting124", "Integer Overflow125", or "SQL injection126" attack, among many others. This is a higher-level incident that gets tripped when dozens of "Auth Invalid Login" incidents are created. As such, it does not contain much information about the actual accounts being targeted. If more detail is desired, the underlying "Auth Invalid Login" incidents should be reviewed. Those incidents are only suspicious (not considered malicious on their own), so the filtering option will need to be set to show non-malicious incidents.
6.4.1.2.5. Auth Invalid Login

Complexity: Suspicious (1.0)
Default Response: 20x = Auth Brute Force Incident.
Cause: MWS provides the capability of password protecting any URL on the protected site. This means that if a user attempts to access that URL, they will be prompted to enter a username and password before the original request is allowed to be completed. This incident is triggered when a user submits an invalid username or password. This incident alone is not necessarily malicious, as it is possible for a legitimate user to accidentally type their username or password incorrectly.

109 http://projects.webappsec.org/Credential-and-Session-Prediction
110 http://projects.webappsec.org/Buffer-Overflow
111 http://projects.webappsec.org/Cross-Site+Scripting
112 http://projects.webappsec.org/Denial-of-Service
113 http://projects.webappsec.org/Fingerprinting
114 http://projects.webappsec.org/Format-String
115 http://projects.webappsec.org/HTTP-Response-Splitting
116 http://projects.webappsec.org/Integer-Overflows
117 http://projects.webappsec.org/SQL-Injection
118 http://projects.webappsec.org/Brute-Force
119 http://projects.webappsec.org/Buffer-Overflow
120 http://projects.webappsec.org/Cross-Site+Scripting
121 http://projects.webappsec.org/Denial-of-Service
122 http://projects.webappsec.org/Fingerprinting
123 http://projects.webappsec.org/Format-String
124 http://projects.webappsec.org/HTTP-Response-Splitting
125 http://projects.webappsec.org/Integer-Overflows
126 http://projects.webappsec.org/SQL-Injection
Behavior: Submitting a single invalid username or password is likely a user typo, and is not necessarily malicious. However, it does represent a security event, and a large number of these events may represent a more serious threat such as "Brute Force127". It is possible, however, that the invalid username or password might also be an attack vector targeted at the authentication mechanism, such as a "Buffer Overflow128", "XSS129", "Denial of Service130", "Fingerprinting131", "Format String132", "HTTP Response Splitting133", "Integer Overflow134", or "SQL injection135" attack, among many others. So if the value specified for the username and password does not look like a legitimate username and password (they are too long, or contain unusual characters), then this incident may be more serious. However, even in this case, the user is more likely to submit dozens of invalid credentials (not just one), and there is a different incident for that scenario.
6.4.2. Cookie Protection Processor

This processor is responsible for protecting a set of application cookies from modification or assignment by the user.
6.4.2.1. Configuration
Table 6.11. Cookie Protection Processor Configuration Parameters

Basic
  Processor Enabled (Boolean, default: True)
    Whether traffic should be passed through this processor.
  Protected Cookies (Collection, default: Collection)
    The names of the cookies to protect.

Advanced
  Protected Cookie Signature Suffix (String, default: Random)
    The suffix to add to the protected cookie names when generating a signature cookie. For example, if the protected cookie is PHPSESSID and the suffix is _MX, then the signature for PHPSESSID would be in a cookie named PHPSESSID_MX.
  Incident: Application Cookie Manipulation (Boolean, default: True)
    The user either attempted to modify one of the protected cookies, or attempted to assign a new value.
127 http://projects.webappsec.org/Brute-Force
128 http://projects.webappsec.org/Buffer-Overflow
129 http://projects.webappsec.org/Cross-Site+Scripting
130 http://projects.webappsec.org/Denial-of-Service
131 http://projects.webappsec.org/Fingerprinting
132 http://projects.webappsec.org/Format-String
133 http://projects.webappsec.org/HTTP-Response-Splitting
134 http://projects.webappsec.org/Integer-Overflows
135 http://projects.webappsec.org/SQL-Injection
6.4.2.2. Incidents
6.4.2.2.1. Application Cookie Manipulation

Complexity: Low (2.0)
Default Response: 1x = Warn User and Logout User. 2x = 5 day Strip Inputs.
Cause: MWS is designed to provide additional protection to cookies used by the web application for tracking user sessions. This is done by issuing a signature cookie any time the web application issues a "protected cookie" (which cookies to protect is defined in configuration). The signature cookie ties the application cookie (such as PHPSESSID) to the Mykonos session cookie. If any of the three cookies is modified (the Mykonos session cookie, the signature cookie, or the actual application cookie), then this incident will be triggered, and the application cookie will be terminated (effectively terminating the user's session). This prevents users from manually creating a session cookie, hijacking another user's cookie, or manipulating an existing cookie.
Behavior: Manipulation of cookies is generally performed in order to hijack another user's session. However, because cookies represent another type of application input, modifications could also be performed to attempt other exploits. If the modified value resembles a legitimate value for the application cookie, then this is likely a session hijacking attempt. If the cookie contains other values that are clearly not valid, then it is more than likely an attack on generic application inputs, such as a "Buffer Overflow136", "XSS137", "Denial of Service138", "Fingerprinting139", "Format String140", "HTTP Response Splitting141", "Integer Overflow142", or "SQL injection143" attack, among many others.
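The three-way binding between application cookie, signature cookie, and MWS session cookie can be sketched with an HMAC. The guide does not specify the appliance's actual signing scheme, so the key, hash choice, and function names below are assumptions for illustration only; the `_MX` suffix matches the example in the configuration table.

```python
import hashlib
import hmac

SECRET = b"appliance-internal-key"  # illustrative; not real key material
SUFFIX = "_MX"                      # example suffix from the configuration table

def sign_cookie(name, value, mws_session):
    """Return (signature cookie name, signature value) binding the
    application cookie to the Mykonos session cookie."""
    msg = ("%s=%s;%s" % (name, value, mws_session)).encode()
    return name + SUFFIX, hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_cookie(name, value, mws_session, signature):
    """False if any of the three cookies was altered; the session
    would then be terminated."""
    return hmac.compare_digest(sign_cookie(name, value, mws_session)[1], signature)
```

Because the signature covers both the application cookie value and the session cookie, forging any one of the three without the signing key fails verification.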
6.4.3. Error Processor

Errors and their contents play a big part in hacking a website. When a hacker obtains an error message, it provides useful information, the very least of which is that the attacker found a way to do something unintended in the web application and the server executed code to handle it. As such, when a user attempts to hack a website, they frequently induce and receive error messages. Often these error messages are very unusual and are not common when a normal user visits the site. For example, the error code 400 (Bad Request) is returned when the raw data in a request does not follow the HTTP standards. While it is possible to get a 400 error by typing invalid characters into the URL, the majority of these errors are caused by third-party software (usually not a browser) improperly communicating with the server. A hacker might, for example, manually construct a malicious request and forget to include the "Host" header. The goal of this processor is to record unusual and unexpected errors as incidents.
This processor will also monitor all 404 errors and attempt to identify Common Directory Enumeration and User Directory Enumeration.
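The legitimate-error-detection idea behind the threshold parameters (such as URL With Query Threshold) can be sketched as follows: once enough distinct users receive the same error at the same URL, the error is assumed to be an application quirk rather than an attack and stops being recorded as suspicious. The class and names are illustrative, and this sketch omits the expiration and cache-size handling the real processor performs.

```python
class ErrorTracker:
    """Record an error as suspicious only until enough distinct users
    have hit the same status code at the same URL."""
    def __init__(self, threshold=30):  # cf. the URL With Query Threshold default
        self.threshold = threshold
        self.seen = {}  # (status, url) -> set of client ids

    def is_suspicious(self, client_id, status, url):
        users = self.seen.setdefault((status, url), set())
        users.add(client_id)
        return len(users) <= self.threshold
```

Counting distinct users rather than raw hits is what separates a broken link that everyone trips over from an error only one probing client ever sees.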
136 http://projects.webappsec.org/Buffer-Overflow
137 http://projects.webappsec.org/Cross-Site+Scripting
138 http://projects.webappsec.org/Denial-of-Service
139 http://projects.webappsec.org/Fingerprinting
140 http://projects.webappsec.org/Format-String
141 http://projects.webappsec.org/HTTP-Response-Splitting
142 http://projects.webappsec.org/Integer-Overflows
143 http://projects.webappsec.org/SQL-Injection
6.4.3.1. Configuration
Table 6.12. Error Processor Configuration Parameters Parameter Type Default Value Description Basic Processor Enabled Boolean True Whether traffic should be passed through this processor. Legitimate Error Boolean True Whether to attempt to identify errors in the Detection Enabled protected web applications so that they can be ignored. Advanced Error Cache Integer 43200 (12 hours) The number of seconds to cache an error Expiration condition so that subsequent matching error conditions from other users can be identified. The less traffic the site sees on a regular basis, the higher this value must be. The recommended default is for sites that see several thousand users a day or more. Error Cache Size Integer 50 The number of error conditions to cache for each level of specificity. If too many error conditions are encountered in a short period of time, this will prevent the tracking code from consuming too much memory. Errors at the full URL with query string specificity will cache this many conditions, at the URL only level it will cache twice this many, and at the filename level, it will cache 3 times as many as this value. Filename Only Integer 259200 (3 days) The number of seconds that an error must not Expiration be encountered on a filename regardless of its location before an ignored error starts being recorded again. Filename Only Integer 70 The maximum number of unique users who Threshold can hit a specific filename, regardless of location, and get the same error before it stops being recorded as suspicious (zero = do not track based on filename). URL With Query Integer 259200 (3 days) The number of seconds that an error must not Expiration be encountered on the full URL with query string before an ignored error starts being recorded again. 
URL With Query Integer 30 The maximum number of unique users who Threshold can hit a full URL including query string and get the same error before it stops being recorded as suspicious (zero = do not track based on full url).
84 Error Processor
Parameter Type Default Value Description URL Without Query Integer 259200 (3 days) The number of seconds that an error must not Expiration be encountered on the URL excluding query string before an ignored error starts being recorded again. URL Without Query Integer 50 The maximum number of unique users who Threshold can hit a URL excluding query string and get the same error before it stops being recorded as suspicious (zero = do not track based on url). 100 Continue ConfigurableError Status Continue. 101 Switching ConfigurableError Status Switching Protocols. Protocols 102 Processing ConfigurableError Status Processing. 300 Multiple Choices ConfigurableError Status Multiple Choices. 301 Moved ConfigurableError Status Moved Permanently. Permanently 302 Found ConfigurableError Status Found. 303 See Other ConfigurableError Status See Other. 304 Not Modified ConfigurableError Status Not Modified. 305 Use Proxy ConfigurableError Status Use Proxy. 306 Switch Proxy ConfigurableError Status Switch Proxy. 307 Temporary ConfigurableError Status Switch Proxy. Redirect 400 Bad Request ConfigurableError Status Bad Request. 401 Unauthorized ConfigurableError Status Unauthorized. 402 Payment ConfigurableError Status Payment Required. Required 403 Forbidden ConfigurableError Status Forbidden. 404 Not Found ConfigurableError Status Not Found. 405 Method Not ConfigurableError Status Not allowed. Allowed 406 Not Acceptable ConfigurableError Status Not acceptable. 407 Proxy ConfigurableError Status Proxy Authentication Required. Authentication Required 408 Request Timeout ConfigurableError Status Request Timeout. 409 Conflict ConfigurableError Status Conflict. 410 Gone ConfigurableError Status Gone. 411 Length Required ConfigurableError Status Length Required. 412 Precondition ConfigurableError Status Precondition Failed. Failed
Parameter | Type | Default Value | Description
413 Request Entity Too Large | ConfigurableError | — | Status Request Entity Too Large.
414 Request-URI Too Long | ConfigurableError | — | Status Request-URI Too Long.
415 Unsupported Media Type | ConfigurableError | — | Status Unsupported Media Type.
416 Requested Range Not Satisfiable | ConfigurableError | — | Status Requested Range Not Satisfiable.
417 Expectation Failed | ConfigurableError | — | Status Expectation Failed.
418 I'm a teapot | ConfigurableError | — | Status I'm a teapot.
422 Unprocessable Entity | ConfigurableError | — | Status Unprocessable Entity.
423 Locked | ConfigurableError | — | Status Locked.
424 Failed Dependency | ConfigurableError | — | Status Failed Dependency.
425 Unordered Collection | ConfigurableError | — | Status Unordered Collection.
426 Upgrade Required | ConfigurableError | — | Status Upgrade Required.
449 Retry With | ConfigurableError | — | Status Retry With.
450 Blocked by Windows Parental Controls | ConfigurableError | — | Status Blocked by Windows Parental Controls.
500 Internal Server Error | ConfigurableError | — | Status Internal Server Error.
501 Not Implemented | ConfigurableError | — | Status Not Implemented.
502 Bad Gateway | ConfigurableError | — | Status Bad Gateway.
503 Service Unavailable | ConfigurableError | — | Status Service Unavailable.
504 Gateway Timeout | ConfigurableError | — | Status Gateway Timeout.
505 HTTP Version Not Supported | ConfigurableError | — | Status HTTP Version Not Supported.
506 Variant Also Negotiates | ConfigurableError | — | Status Variant Also Negotiates.
507 Insufficient Storage | ConfigurableError | — | Status Insufficient Storage.
509 Bandwidth Limit Exceeded | ConfigurableError | — | Status Bandwidth Limit Exceeded.
510 Not Extended | ConfigurableError | — | Status Not Extended.
Parameter | Type | Default Value | Description
Incident: Illegal Response Status | Boolean | True | The user issued a request that resulted in an error status code that is considered suspicious and possibly malicious.
Incident: Suspicious Response Status | Boolean | True | The user issued a request that resulted in a known error status code generally involved in malicious behavior. On its own this is not enough to classify abuse, but patterns of this indicator may lead to higher-level malicious incidents.
Incident: Unexpected Response Status | Boolean | True | The user issued a request that resulted in an unknown error status code and could represent a successful exploit.
Incident: Unknown Common Directory Requested | Boolean | True | The user has requested a directory that does not exist. The directory is in a list of common directory names, so this request is likely an attempt to find a directory that is not linked from the site.
Incident: Unknown User Directory Requested | Boolean | True | The user has requested a directory for a specific system user that does not exist. The username is in a list of common usernames, so this request is likely an attempt to identify a user account that is not linked from the site.
6.4.3.2. Incidents
6.4.3.2.1. Illegal Response Status Complexity: Suspicious (1.0)
Default Response: None.
Cause: MWS monitors the various status codes returned by the protected website and compares them to a configurable list of known and acceptable status codes. Some status codes are expected during normal usage of the site (such as "200 OK" or "304 Not Modified"), but some status codes are much less common for a normal user (such as "500 Internal Server Error" or "404 Not Found"). When a user issues a request that results in a status code that is configured as illegal, this incident will be triggered.
Behavior: In the process of attempting to find vulnerabilities on a web server, hackers will often encounter errors. A single error or two is likely not a problem, because even legitimate users accidentally type a URL incorrectly on occasion. However, when excessive numbers of unexpected status codes are returned, the behavior of the user can be narrowed down and classified as malicious. The actual vulnerability an attacker is looking for can often be identified through the status codes they are returned. For example, if the user is getting a lot of 404 errors, they are likely searching for unlinked files ("Predictable Resource Location" [144]). If the user is getting a lot of 500 errors, they may be trying to establish a successful "SQL Injection" [145] or "XSS" [146] vulnerability.
144 http://projects.webappsec.org/Predictable-Resource-Location
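As a rough sketch of the per-user tracking described above, the following Python counts error responses per user and maps them to the incident types defined in this section. The class name, the example status lists, and the incident strings are illustrative assumptions only; the real lists are configurable in MWS and its internals are not documented here.

```python
from collections import Counter

# Illustrative subsets only -- the actual lists are configurable in MWS.
SUSPICIOUS_STATUSES = {404, 405, 406}
ILLEGAL_STATUSES = {500, 501, 502, 503}

class ErrorTracker:
    """Hypothetical per-user error-status counter (not MWS code)."""
    def __init__(self):
        self.counts = Counter()

    def record(self, user_id, status):
        # Classify the status code and count it against the user.
        if status in ILLEGAL_STATUSES:
            self.counts[(user_id, "illegal")] += 1
            return "Illegal Response Status"
        if status in SUSPICIOUS_STATUSES:
            self.counts[(user_id, "suspicious")] += 1
            return "Suspicious Response Status"
        return None  # acceptable status; nothing recorded

tracker = ErrorTracker()
incident = tracker.record("user-1", 500)
```

A pattern of many such incidents for one user, rather than any single one, is what separates an attacker from a legitimate user who mistyped a URL.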
6.4.3.2.2. Suspicious Response Status Complexity: Suspicious (1.0)
Default Response: 10x 404 = Resource Enumeration Incident.
Cause: MWS monitors the various status codes returned by the protected website and compares them to a configurable list of known and acceptable status codes. Some status codes are expected during normal usage of the site (such as "200 OK" or "304 Not Modified"), but some status codes are much less common for a normal user (such as "500 Internal Server Error" or "404 Not Found"). When a user issues a request that results in a status code that is configured as suspicious, this incident will be triggered.
Behavior: In the process of attempting to find vulnerabilities on a web server, hackers will often encounter errors. A single error or two is likely not a problem, because even legitimate users accidentally type a URL incorrectly on occasion. However, when excessive numbers of unexpected status codes are returned, the behavior of the user can be narrowed down and classified as malicious. The actual vulnerability the attacker is looking for can often be identified through the status codes they are returned. For example, if the user is getting a lot of 404 errors, they are likely searching for unlinked files ("Predictable Resource Location" [147]). If the user is getting a lot of 500 errors, they may be trying to establish a successful "SQL Injection" [148] or "XSS" [149] vulnerability.
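The default response above ("10x 404 = Resource Enumeration Incident") is an escalation rule: repeated low-level incidents roll up into a higher-level one. A minimal sketch of that counting, with the threshold and incident names taken from this section but the class itself invented for illustration:

```python
class EscalationCounter:
    """Hypothetical sketch: escalate repeated suspicious 404s
    into a higher-level Resource Enumeration incident."""
    def __init__(self, threshold=10):  # 10x, per the default response
        self.threshold = threshold
        self.seen = {}

    def record_404(self, user_id):
        # Count suspicious 404 responses per user and escalate
        # once the configured threshold is reached.
        self.seen[user_id] = self.seen.get(user_id, 0) + 1
        if self.seen[user_id] >= self.threshold:
            return "Resource Enumeration"
        return "Suspicious Response Status"

counter = EscalationCounter(threshold=10)
results = [counter.record_404("user-1") for _ in range(10)]
```

The first nine 404s stay at the suspicious level; the tenth crosses the threshold and escalates.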
6.4.3.2.3. Unexpected Response Status Complexity: Suspicious (1.0)
Default Response: None.
Cause: Mykonos Web Security monitors the various status codes returned by the protected website and compares them to a configurable list of known and acceptable status codes. Some status codes are expected during normal usage of the site (such as "200 OK" or "304 Not Modified"), but some status codes are much less common for a normal user (such as "500 Internal Server Error" or "404 Not Found"). When a user issues a request which results in a status code that is not known and does not have any associated configuration, this incident will be triggered.
Behavior: In the process of attempting to find vulnerabilities on a web server, hackers will often encounter errors. A single error or two is likely not a problem, because even legitimate users accidentally type a URL incorrectly on occasion. However, when excessive numbers of unexpected status codes are returned, the behavior of the user can be narrowed down and classified as malicious. The actual vulnerability the attacker is looking for can often be identified through the status codes they are returned. For example, if the user is getting a lot of 404 errors, they are likely searching for unlinked files ("Predictable Resource Location" [150]). If the user is getting a lot of 500 errors, they may be trying to establish a successful "SQL Injection" [151] or "XSS" [152] vulnerability. In the case of this incident, the user is getting an unexpected status
145 http://projects.webappsec.org/SQL-Injection
146 http://projects.webappsec.org/Cross-Site+Scripting
147 http://projects.webappsec.org/Predictable-Resource-Location
148 http://projects.webappsec.org/SQL-Injection
149 http://projects.webappsec.org/Cross-Site+Scripting
150 http://projects.webappsec.org/Predictable-Resource-Location
151 http://projects.webappsec.org/SQL-Injection
code. This is likely because of a bug in the web application which the user has found and is attempting to exploit. The URL this incident is created for should be reviewed to determine why it would be responding with a non-standard status code. If the status code is intentionally non-standard, but is acceptable behavior, then the custom status code should be added to the list of known and accepted status codes in the configuration.
6.4.3.2.4. Unknown Common Directory Requested Complexity: Suspicious (1.0)
Default Response: 5x = Common Directory Enumeration Incident
Cause: This incident is triggered when a user requests a directory on the server that does not exist, and that directory name is in a list of commonly used directory names (for example: http://www.example.com/public/ where "public" is not a real directory).
Behavior: Oftentimes, administrators will upload sensitive content onto a web server in an obscure location and not link to that content anywhere on the site. The assumption is that the content is private because no one will find it. However, humans are somewhat predictable, so it is actually quite common for two administrators to pick the same "obscure" location to place sensitive content. As such, hackers have compiled lists of the most commonly chosen directory names where sensitive content is often stored, and they will test every name in the list to see if a site has a directory by that name. If it does, the attacker is able to locate and obtain that sensitive content. An example of a tool that allows attackers to quickly identify hidden directories is "DirBuster" (https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project).
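The detection side of this incident, and its escalation at five distinct hits (per the default response above), can be sketched as follows. The common-directory list, class name, and incident strings are illustrative assumptions; MWS's actual list is internal and configurable.

```python
# Illustrative subset of commonly probed directory names (not MWS's list).
COMMON_DIRS = {"admin", "backup", "private", "public", "test"}

class DirectoryEnumerationDetector:
    """Hypothetical sketch: track 404s on commonly named directories
    per user, escalating at the threshold (5x, per the default)."""
    def __init__(self, existing_dirs, threshold=5):
        self.existing = existing_dirs   # directories that really exist
        self.threshold = threshold
        self.hits = {}                  # user -> set of probed names

    def check(self, user_id, path):
        name = path.strip("/").split("/")[0]
        if name in self.existing or name not in COMMON_DIRS:
            return None  # real directory, or not a commonly probed name
        seen = self.hits.setdefault(user_id, set())
        seen.add(name)
        if len(seen) >= self.threshold:
            return "Common Directory Enumeration"
        return "Unknown Common Directory Requested"

detector = DirectoryEnumerationDetector(existing_dirs={"img"})
```

Counting distinct probed names, rather than raw requests, distinguishes a wordlist scan from a user repeatedly retrying one bad link.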
6.4.3.2.5. Unknown User Directory Requested Complexity: Suspicious (1.0)
Default Response: 5x = User Directory Enumeration Incident
Cause: Many web servers allow the users on the system to maintain publicly accessible web directories. These directories are generally accessible from the root directory of the website followed by a tilde and the username. For example, if the web server had a user named 'george', that user could serve content from http://www.example.com/~george/. This incident is triggered when an attacker requests a user directory on the server that does not exist, and that user directory name is in a list of commonly used usernames (for example: http://www.example.com/~root/ where "root" is not a real user directory).
Behavior: Oftentimes, administrators will upload sensitive content onto a web server in an obscure location and not link to that content anywhere on the site. The assumption is that the content is private because no one will find it. However, humans are somewhat predictable, so it is actually quite common for two administrators to pick the same "obscure" location to place sensitive content. As such, hackers have compiled lists of the most commonly chosen directory names where sensitive content is often stored, and they will test every name in the list to see if a site has a directory by that name. If it does, the attacker is able to locate and obtain that sensitive content. In this specific case, the attacker is testing for default user directories for users with predictable names (such as 'root', 'guest', 'nobody', etc.). An example of a tool that allows attackers to quickly identify hidden user directories is "DirBuster" (https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project).
152 http://projects.webappsec.org/Cross-Site+Scripting
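The tilde-directory convention described above can be checked with a small path classifier. This is a sketch under stated assumptions: the username list and function name are invented for illustration, and the real MWS check is configurable and internal.

```python
import re

# Illustrative subset of commonly probed usernames (not MWS's list).
COMMON_USERNAMES = {"root", "guest", "nobody", "admin"}

def classify_user_dir_request(path, real_users):
    """Hypothetical sketch: flag requests for non-existent user
    directories (/~name/) whose name is on a commonly probed list."""
    m = re.match(r"^/~([^/]+)", path)
    if not m:
        return None  # not a user-directory request at all
    user = m.group(1)
    if user in real_users:
        return None  # the user directory actually exists
    if user in COMMON_USERNAMES:
        return "Unknown User Directory Requested"
    return None  # unknown name, but not a commonly probed one
```

For example, a request for /~root/ on a server whose only user directory is /~george/ would match the incident described in this section.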
6.4.3.2.6. Common Directory Enumeration Complexity: Medium (3.0)
Default Response: 1x = Slow Connection 2-6 seconds & Captcha, 2x = Slow Connection 2-6 seconds & 1 day Block
Cause: This incident is triggered when a user requests a directory on the server that does not exist, and that directory name is in a list of commonly used directory names (for example: http://www.example.com/public/ where "public" is not a real directory). Specifically, this incident is triggered when the user requests many different commonly named directories, as would be the case if they were testing for a large list of possible directory names.
Behavior: Oftentimes, administrators will upload sensitive content onto a web server in an obscure location and not link to that content anywhere on the site. The assumption is that the content is private because no one will find it. However, humans are somewhat predictable, so it is actually quite common for two administrators to pick the same "obscure" location to place sensitive content. As such, hackers have compiled lists of the most commonly chosen directory names where sensitive content is often stored, and they will test every name in the list to see if a site has a directory by that name. If it does, the attacker is able to locate and obtain that sensitive content. An example of a tool that allows attackers to quickly identify hidden directories is "DirBuster" (https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project).
6.4.3.2.7. User Directory Enumeration Complexity: Medium (3.0)
Default Response: 1x = Slow Connection 2-6 seconds & Captcha, 2x = Slow Connection 2-6 seconds & 1 day Block
Cause: Many web servers allow the users on the system to maintain publicly accessible web directories. These directories are generally accessible from the root directory of the website followed by a tilde and the username. For example, if the web server had a user named 'george', that user could serve content from http://www.example.com/~george/. This incident is triggered when an attacker requests a user directory on the server that does not exist, and that user directory name is in a list of commonly used usernames (for example: http://www.example.com/~root/ where "root" is not a real user directory). Specifically, this incident is triggered when an attacker requests many different username directories, as would be the case if they were testing for a large list of possible usernames.
Behavior: Oftentimes, administrators will upload sensitive content onto a web server in an obscure location and not link to that content anywhere on the site. The assumption is that the content is private because no one will find it. However, humans are somewhat predictable, so it is actually quite common for two administrators to pick the same "obscure" location to place sensitive content. As such, hackers have compiled lists of the most commonly chosen directory names where sensitive content is often stored, and they will test every name in the list to see if a site has a directory by that name. If it does, the attacker is able to locate and obtain that sensitive content. In this specific case, the attacker is testing for default user directories for users with predictable names (such as 'root', 'guest', 'nobody', etc.). An example of a tool that allows attackers to quickly identify hidden user directories is "DirBuster" (https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project).
6.4.4. Header Processor
A useful technique when attacking a site is to determine what software the site is using. This is known as fingerprinting the server. There are many methods used, but the basic idea is to look for signatures that identify various products. For example, it might be a known signature that Apache always lists the "Date"
response header before the "Last-Modified" response header. If very few other servers follow this same pattern, then checking to see which header comes first could be used as a means of identifying if Apache is being used or not. Other key methods include looking for "Server" or "X-Powered-By" headers that actually specify the software being used. The goal of this processor is to eliminate headers as a means of fingerprinting a server.
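A rough illustration of why header order is identifying, and how shuffling defeats it: the functions below are invented for illustration and are not the Header Mixing implementation, but they show the idea under those assumptions.

```python
import random

def header_order_signature(headers):
    """Reduce a response's headers to the ordered tuple of names.
    Distinct server software often emits distinct orders, which is
    exactly what fingerprinting tools compare (illustrative sketch)."""
    return tuple(name.lower() for name, _value in headers)

def shuffle_headers(headers, rng=random):
    """Randomize header order, as the Header Mixing option described
    in this section does, so order no longer identifies the backend."""
    shuffled = list(headers)
    rng.shuffle(shuffled)
    return shuffled

hdrs = [("Date", "a"), ("Server", "b"), ("Last-Modified", "c")]
```

After shuffling, the same headers are present but their order, and with it the fingerprinting signature, is no longer stable across responses.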
Important While the goal of this processor is mainly to prevent fingerprinting, it may also catch some malicious behavior and erroneous behavior in the protected applications (potentially as a result of an exploit). As such, the following incidents are recognized by the processor.
6.4.4.1. Configuration
Table 6.13. Header Processor Configuration Parameters

Parameter | Type | Default Value | Description
Basic
Processor Enabled | Boolean | True | Whether traffic should be passed through this processor.
Advanced
Header Mixing Enabled | Boolean | True | Whether this processor should shuffle the order of response headers to avoid exposing identifiable information.
Maximum Header Length | Integer | 8192 | The maximum allowed length of a header in bytes. If header stripping is enabled, then any headers that exceed this length will be removed from the request before proxying.
Request Header Stripping Enabled | Boolean | True | Whether this processor should strip unnecessary headers in request packets to avoid sending malicious data to the server.
Response Header Stripping Enabled | Boolean | True | Whether this processor should strip unnecessary response headers to avoid giving away identifiable information.
Known Request Headers | Collection | Collection | A list of known request headers.
Known Response Headers | Collection | Collection | A list of known response headers.
Incident: Duplicate Request Header | Boolean | True | The user provided multiple instances of the same header, and the header does not usually allow multiples.
Incident: Duplicate Response Header | Boolean | False | The application returned multiple instances of the same header, which it is never expected to do.
Incident: Illegal Request Header | Boolean | True | The user provided a request header which is known to be involved in malicious activity.
Parameter | Type | Default Value | Description
Incident: Illegal Response Header | Boolean | True | The application returned a response header which it is never supposed to return.
Incident: Missing All Headers | Boolean | True | The user issued a request which is missing both the User-Agent and Host headers.
Incident: Missing Host Header | Boolean | True | The user issued a request which is missing a required header.
Incident: Missing Request Header | Boolean | True | The user issued a request which is missing a required header.
Incident: Missing Response Header | Boolean | True | The application returned a response which is missing a required header.
Incident: Missing User Agent Header | Boolean | True | The user issued a request which is missing a required header.
Incident: Request Header Overflow | Boolean | True | The user issued a request which contained a header that was longer than the allowed maximum.
Incident: Unexpected Request Header | Boolean | False | The user issued a request which contains an unexpected and unknown header.
Incident: Compound Request Header Overflow | Boolean | True | The user issued multiple requests which contained a header that was longer than the allowed maximum.
6.4.4.2. Incidents
6.4.4.2.1. Duplicate Request Header Complexity: Informational (0.0)
Default Response: None.
Cause: MWS monitors all of the request headers sent from the client to the web application. According to the HTTP RFC, no client should ever provide more than one copy of a specific header. For example, clients should not send multiple "Host" headers. However, there are a few exceptions, such as the "Cookie" header, which can be configured to allow multiples. If the user sends multiple headers that are not configured explicitly to allow duplicates, then this incident will be triggered.
Behavior: Sending duplicate headers of the same type can be caused by several different things. It is either an attempt to profile the web server and see how it reacts, an attempt to smuggle malicious data into the headers (because a firewall might not look at subsequent copies of the same header), or possibly just a poorly programmed web client. In any case, it represents unusual activity that sets the user apart from everyone else. It signifies that the user is suspicious and is doing something average users do not do.
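The duplicate check described above amounts to counting header names and exempting a configurable allow-list. A minimal sketch, assuming an illustrative allow-list (the real one is configurable in MWS):

```python
from collections import Counter

# Headers that legitimately repeat -- illustrative, configurable in MWS.
ALLOW_DUPLICATES = {"cookie", "accept-encoding"}

def find_duplicate_headers(header_names):
    """Hypothetical sketch: return header names that appear more than
    once in a request and are not on the duplicates allow-list."""
    counts = Counter(name.lower() for name in header_names)
    return sorted(n for n, c in counts.items()
                  if c > 1 and n not in ALLOW_DUPLICATES)
```

A request with two "Host" headers would be flagged, while repeated "Cookie" headers would pass.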
6.4.4.2.2. Duplicate Response Header Complexity: Informational (0.0)
Default Response: None
Cause: MWS monitors all of the response headers sent from the server to the client. According to the HTTP RFC, no server should ever provide more than one copy of a specific header. For example, servers should
not send multiple "Content-Length" headers. However, there are a few exceptions, such as the "Set-Cookie" header, which can be configured to allow multiples. If the server attempts to return multiple headers of the same type that are not configured explicitly to allow duplicates, then this incident will be triggered.
Behavior: The RFC does not allow servers to return multiple headers of the same type, with a few exceptions, such as "Set-Cookie". If the server does return duplicates for a header that normally does not support duplicates, then there is either a bug in the web application, or the user has successfully executed a "Response Splitting" [153] attack. In either case, the service located at the URL this incident is triggered for should probably be reviewed for response splitting vulnerabilities or bugs that would cause duplicate response headers to be returned.
6.4.4.2.3. Illegal Request Header Complexity: Suspicious (1.0)
Default Response: None.
Cause: MWS monitors all of the request headers included by clients. It has a list of known request headers that should never be accepted. This list is configurable, and by default, includes any headers known to be exclusively involved in malicious activity. Should a user include one of the illegal headers, this incident will be triggered. Because the list of illegal headers is configurable, it cannot be guaranteed that the request that contained the header is strictly malicious, but it does signify that the client is doing something highly unusual.
Behavior: Some HTTP headers can be used to get the server to do something it isn't designed to do. For example, the "Max-Forwards" header can be used to specify how many hops within the internal network the request should make before it is dropped. An attacker could use this header to identify how many network devices are between themselves and the target web server. Because the list of illegal headers is customizable, the type of behavior the header relates to can vary. However, this type of behavior is generally performed when scoping the attack surface of the website.
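The illegal-header check itself is a simple set membership test against the configured list. A sketch under stated assumptions — the list contents and function name below are illustrative, not MWS's actual defaults:

```python
# Illustrative illegal-header list; the real list is configurable in MWS.
ILLEGAL_REQUEST_HEADERS = {"max-forwards", "x-debug"}

def illegal_headers(request_headers):
    """Hypothetical sketch: return any request header names that
    appear on the configured illegal-header list."""
    return sorted(h.lower() for h in request_headers
                  if h.lower() in ILLEGAL_REQUEST_HEADERS)
```

A request carrying "Max-Forwards" would be flagged and would trigger the incident described above.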
6.4.4.2.4. Illegal Response Header Complexity: Informational (0.0)
Default Response: None.
Cause: MWS monitors all of the response headers sent to the client from the web application. It has a list of known response headers that should never be returned. This list is configurable, and by default, includes any headers known to compromise the server's identity or security. Should the server return one of the illegal headers, this incident will be triggered. Because the list of illegal headers is configurable, it cannot be guaranteed that the response that contained the header is strictly malicious, but it does signify that something unusual has taken place. This may even represent a hacker's successful attempt to exploit a backend service.
Behavior: There is a strict set of HTTP response headers that browsers understand and can actually use. Any headers returned by the server outside of the standard set could potentially expose information about the server or its software. Some headers can even be used to execute more complex attacks. In order to protect the server in the event of a serious issue (such as a "Response Splitting" [154] attack), some headers can be configured as illegal. Because the set is configurable, it is not straightforward as to what the actual header means or what vulnerability it might be targeted at.
153 http://projects.webappsec.org/HTTP-Response-Splitting
154 http://projects.webappsec.org/HTTP-Response-Splitting
6.4.4.2.5. Missing All Headers Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and Captcha.
Cause: Most legitimate web browsers and tools submit at least a few headers with each HTTP request. Headers are used to provide valuable information to the server when trying to construct a response, such as what type of browser the user is using, or what domain name they are trying to access. If a user submits a request that does not contain any headers at all, this incident will be triggered.
Behavior: Not providing any headers at all is generally an activity performed when probing an IP to see if it is running a web server. The user will submit a minimal request containing a single line of text and see if the response given back from the server is an HTTP response. If so, the attacker has confirmed that the IP is hosting a web server on the given port. In many cases, the attacker will also be able to identify which web server is running, and whether that web server has any known vulnerabilities. Such information can then be used to attack the web server directly.
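The missing-header incidents in this section (Missing All Headers, Missing Host Header, Missing User Agent Header) can be sketched as one classifier over the request's header names. The function is an illustrative assumption; only the incident names come from the manual:

```python
def classify_missing_headers(header_names):
    """Hypothetical sketch mapping absent required headers to the
    incident names defined in this section. Per the manual, 'Missing
    All Headers' means both User-Agent and Host are absent."""
    names = {h.lower() for h in header_names}
    if "host" not in names and "user-agent" not in names:
        return "Missing All Headers"
    if "host" not in names:
        return "Missing Host Header"
    if "user-agent" not in names:
        return "Missing User Agent Header"
    return None  # all required headers present
```

A bare probe with no headers falls into the first case; a crafted request that keeps only one of the two required headers falls into the corresponding single-header incident.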
6.4.4.2.6. Missing Host Header Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and Captcha.
Cause: All legitimate web browsers submit a Host header with each HTTP request. The host header contains the value entered into the address bar as the server. This could be either the server IP address or the domain name. In either case, it will always be provided. If a user submits a request that does not contain a Host header, this incident will be triggered.
Behavior: Not providing a host header is generally an activity performed when trying to scope the attack surface of the website. Some web servers are configured to host different websites from the same IP address, based on which domain name is supplied. Hackers will often attempt to send a request without a host header to see if the server will serve back a default website. If the default website is not the main website, this may provide additional pages the attacker can attempt to exploit. This could be considered a "Server Misconfiguration" [155] weakness, but may also be a legitimate design choice for the web server and its applications. It does not necessarily expose a vulnerability as long as the default web application is secure. Because all major browsers submit host headers on every request, the user would need to use a more complex tool, such as a raw data client or HTTP debugging proxy, to manually construct a request that does not have a host header. As such, this activity is almost always malicious. In a few cases, some legitimate monitoring tools may omit this header, but those tools should be added to the trusted IP list in configuration.
6.4.4.2.7. Missing Request Header Complexity: Low (2.0)
Default Response: None.
Cause: MWS monitors all of the request headers sent from the client to the server. It also maintains a list of headers which are required for all HTTP requests (such as Host and User-Agent). If one of the required headers is not included in a request, this incident will be triggered.
Behavior: Every legitimate client will always supply specific headers such as "Host" and "User-Agent". If a client does not provide these headers, then the client is likely not a legitimate user. There are several different
155 http://projects.webappsec.org/Server-Misconfiguration
cases of illegitimate clients, such as hacking tools, manually crafted HTTP requests using something like PuTTY, or a network diagnostic tool such as Nagios. Because there are a few cases that are not necessarily malicious (such as Nagios), the incident itself is not necessarily malicious. It does, however, exclude the user from being a legitimate web browser performing the intended actions allowed by the web application.
6.4.4.2.8. Missing Response Header Complexity: Informational (0.0)
Default Response: None.
Cause: MWS monitors all of the response headers sent from the server to the client. It also maintains a list of headers which are required for all HTTP responses (such as Content-Type). If one of the required headers is not included in a response, this incident will be triggered.
Behavior: If the server is acting correctly, it should always return all of the required response headers. If it is missing a response header, this is likely due to a bug in the web application, or a successfully executed "Response Splitting" [156] attack. In either case, the service located at the URL this incident is triggered for should probably be reviewed for either response splitting vulnerabilities, or bugs that would cause abnormal HTTP responses (such as dropping the connection immediately after sending the status code).
6.4.4.2.9. Missing User Agent Header Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and Captcha.
Cause: Most legitimate web browsers and tools submit a User-Agent header with each HTTP request. The user agent header contains information that identifies which software the user is using to access the website, whether that software is Googlebot, Firefox, Safari, or another client. If a user submits a request that does not contain a User-Agent header, this incident will be triggered.
Behavior: Not providing a user-agent header is generally an activity performed when trying to evade detection. The user agent header provides identifying information that could be used by the web server to track requests made by the same user. It may also provide information about the user's personal computer. Sometimes, hackers will replace the user agent string with another user agent string that is perfectly legitimate, but for a different environment than the one they are actually using. Some legitimate users also take this measure as a general security practice; therefore, as long as at least some value is submitted for the user-agent, it cannot be guaranteed to be a malicious act.
However, in the case of the header being absent, a user would have had to use a tool or debugging proxy to filter the traffic. This is almost always performed during the course of a malicious action. Some tools such as network health monitors may also trigger this incident, because they are doing something normal users should not do, but they are considered trusted. In this case, the IP addresses of those tools should be added to the trusted IP whitelist in configuration.
6.4.4.2.10. Request Header Overflow Complexity: Suspicious (1.0)
Default Response: 3x = Compound Request Header Overflow Incident.
156 http://projects.webappsec.org/HTTP-Response-Splitting
Cause: MWS monitors all of the request headers sent from the client to the server. It has a configured limit that defines how long any individual header is allowed to be. If a header is submitted that exceeds the limit, this incident will be triggered.
Behavior: While not as common as form inputs or query parameter inputs, some web applications actually use the values submitted in headers within their code base. If these values are treated incorrectly, such as not being validated before being used in an SQL statement, they potentially expose the same set of vulnerabilities a form input might. As such, a hacker who is attempting to execute a "Buffer Overflow" [157] attack might do so by providing an excessively long value in a header. They may also use an excessively long header value to craft a complex "SQL Injection" [158] attack.
Because there is a possibility that a legitimate user with a poorly-written browser plugin may cause a header of unusual length to be submitted, this incident cannot be guaranteed to be malicious from just a single case.
6.4.4.2.11. Compound Request Header Overflow Complexity: Medium (3.0)
Default Response: 1x = Captcha, 2x = 1 Day Strip Inputs.
Cause: Mykonos Web Security monitors all of the request headers sent from the client to the server, which are then compared with a configured limit that defines how long any individual header is allowed to be. After 3 or more headers are submitted that exceed the limit, this incident will be triggered.
Behavior: While not as common as form inputs or query parameter inputs, some web applications actually use the values submitted in headers within their codebase. If these values are treated incorrectly, such as not being validated before being used in an SQL statement, they potentially expose the same set of vulnerabilities as a form input. A hacker who is attempting a "Buffer Overflow159" attack might do so by attempting to provide an excessively long value in a header. They may also use an excessively long header value to craft a complex "SQL Injection160" attack.
Because the user submitted multiple headers which exceeded the defined limit, the intentions of the user are more likely to be malicious. It is less likely that a poorly-crafted browser plugin would overflow multiple headers, despite the possibility that it might overflow a single one.
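The distinction between the single and compound overflow incidents can be sketched as a simple counting check. The 4096-byte limit and incident names below are illustrative assumptions; the actual limit is a configured value.

```python
# Illustrative sketch: one over-long header is merely suspicious, while
# three or more in the same request escalate to the compound incident.
MAX_HEADER_LENGTH = 4096  # assumed configured limit

def classify_header_overflow(headers: dict):
    """Return the overflow incident name for a request, or None."""
    oversized = [name for name, value in headers.items()
                 if len(value) > MAX_HEADER_LENGTH]
    if len(oversized) >= 3:
        return "Compound Request Header Overflow"
    if oversized:
        return "Request Header Overflow"
    return None
```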
6.4.4.2.12. Unexpected Request Header Complexity: Informational (0.0)
Default Response: None
Cause: MWS monitors all of the request headers included by clients. It has a list of known request headers that should be accepted. This list includes all of the headers defined in the HTTP RFC, which means that any additional headers must be part of some non-standard HTTP extension. Should a user include a non-standard header, this incident will be triggered. It is not necessarily a malicious action on its own, but it does signify that the client is unusual in some way (and potentially malicious) and therefore warrants additional monitoring.
157 http://projects.webappsec.org/Buffer-Overflow
158 http://projects.webappsec.org/SQL-Injection
159 http://projects.webappsec.org/Buffer-Overflow
160 http://projects.webappsec.org/SQL-Injection
Behavior: When attackers are trying to exploit a server, one technique is to profile what software the server is running. This can be partially accomplished by observing how the server reacts to various types of headers. For example, if an attacker knows that a specific 3rd party web application behaves differently when sent an "X-No-Auth" header, they might send "X-No-Auth" to the site just to see what happens. While this could represent a higher-level attack on a specific application, sending non-standard headers is more likely part of the hacker's effort to scope the attack surface of the website.
This incident alone cannot be deemed malicious because some users have browser plug-ins installed that automatically include non-standard headers with requests to some sites. Additionally, some AJAX sites utilize custom headers as part of their expected protocol.
6.4.5. Method Processor GET and POST are two very well-known HTTP request methods. A request method is a keyword that tells the server what type of request the user is making. In the case of a GET, the user is requesting a resource. In the case of a POST, the user is submitting data to a resource. There are, however, several other supported request methods, including HEAD, PUT, DELETE, TRACE, and OPTIONS. These methods are intended to divide the types of requests into more granular operations. In almost all web application implementations, the PUT, DELETE, TRACE, and OPTIONS methods are left unimplemented. Unfortunately, some systems provide default implementations for things such as TRACE and OPTIONS. As a result, some administrators accidentally expose unprotected services. Hackers often try these different request methods to identify servers which support them, and which therefore may be vulnerable.
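The allow-list behavior described above can be sketched as follows. The method set mirrors the common defaults mentioned in this section, while the function name, flag, and incident string are illustrative assumptions.

```python
# Illustrative sketch: requests using a method outside the configured
# known-methods list are flagged, and optionally blocked.
KNOWN_METHODS = {"GET", "POST", "HEAD"}
BLOCK_UNKNOWN = True  # mirrors the "Block Unknown Methods" parameter

def check_method(method: str):
    """Return (allowed, incident) for a request method."""
    if method.upper() in KNOWN_METHODS:
        return True, None
    # Unknown method: raise an incident, and block if configured to do so.
    return (not BLOCK_UNKNOWN), "Illegal Method Requested"
```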
6.4.5.1. Configuration
Table 6.14. Method Processor Configuration Parameters

Parameter | Type | Default Value | Description
Basic
Processor Enabled | Boolean | True | Whether traffic should be passed through this processor.
Advanced
Block Unknown Methods | Boolean | True | Whether to block requests that contain unknown HTTP methods.
Known Methods | Collection | Collection | The list of known HTTP methods. Also allows you to customize the action to take for each occurrence of the known HTTP method.
Incident: Illegal Method Requested | Boolean | True | The user issued a request using an HTTP method which is considered illegal.
Incident: Unexpected Method Requested | Boolean | True | The user issued a request using a request method other than GET, POST, and HEAD, which resulted in a server error.
6.4.5.2. Incidents
6.4.5.2.1. Illegal Method Requested Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and 1 Day Strip Inputs in 10 minutes
Cause: HTTP supports several different "methods" of submitting data to a web server. These methods generally include "GET", "POST", and "HEAD", and less commonly "PUT", "DELETE", "TRACE", and "OPTIONS". MWS monitors all of the methods used by a user when issuing HTTP requests, and compares them to a configured list of known and allowed HTTP methods. If the user submits a request that uses a method which is not in the list of known methods, this incident will be triggered.
Behavior: HTTP methods allow the web server to handle user-provided data in different ways. However, some of the supported methods are somewhat insecure and should not be supported unless absolutely necessary. In a few cases, methods which are not standard to HTTP are used by 3rd party web applications. When an attacker is looking for a known vulnerability, they may issue requests using some of these custom-defined HTTP methods to see if the server accepts or rejects the request. If the server accepts the request, then the software is likely installed. This type of activity is generally performed when scoping the attack surface of the web application. If a 3rd party web application is legitimately installed and uses custom HTTP methods, those methods will need to be added to the list of configured HTTP methods so as not to flag users of those applications. In either case, because it is possible for this incident to happen without malicious intent, it is assigned a relatively low complexity.
6.4.5.2.2. Unexpected Method Requested Complexity: Suspicious (1.0)
Default Response: None.
Cause: HTTP supports several different "methods" of submitting data to a web server. These methods generally include "GET", "POST", and "HEAD", and less commonly "PUT", "DELETE", "TRACE", and "OPTIONS". MWS monitors all of the methods used by a user when issuing HTTP requests. If the user submits a request using a known method other than GET, POST, and HEAD, and that request results in a server error, this incident will be triggered.
Behavior: HTTP methods allow the web server to handle user-provided data in different ways. However, some of the supported methods are somewhat insecure and should not be supported unless absolutely necessary. In a few cases, methods which are not standard to HTTP are used by 3rd party web applications. When an attacker is looking for a known vulnerability, they may issue requests using some of these custom-defined HTTP methods to see if the server accepts or rejects the request. If the server accepts the request, then the software is likely installed. This type of activity is generally performed when scoping the attack surface of the web application. If a 3rd party web application is legitimately installed and uses custom HTTP methods, those methods will need to be added to the list of configured HTTP methods so as not to flag users of those applications. In either case, because it is possible for this incident to happen without malicious intent, it is considered only suspicious.
6.5. Tracking Processors
6.5.1. Etag Beacon Processor This processor is not intended to identify hacking activity; instead, it helps resolve a potential vulnerability in the proxy. Because session tracking in the proxy is done using cookies, it is possible for an attacker to clear their cookies in order to be recognized by the proxy as a new user. This means that if someone is identified as a hacker, they can shed that classification simply by clearing their cookies. To help resolve this vulnerability, this processor attempts to store identifying information in the browser's cache using the ETag header. It then uses this information to identify new sessions as being created by the same user as a previous session. If successful, a hacker who clears their cookies and obtains a new session will be re-associated with the previous session shortly afterwards.
6.5.1.1. Configuration
Table 6.15. Etag Beacon Processor Configuration Parameters

Parameter | Type | Default Value | Description
Basic
Processor Enabled | Boolean | True | Whether traffic should be passed through this processor.
Advanced
Beacon Resource | Configurable | Random | The resource to use for tracking.
Inject Beacon Enabled | Boolean | True | Whether a reference to the beacon resource should be automatically injected into HTML responses.
Revalidation Frequency | Integer | 180 (3 Minutes) | How often, in seconds, to re-validate the old stored etag and re-associate that session with the current one. This value should not be set too short, because a short interval will cause the browser to constantly re-request the fake resource and make the tracking technique more visible.
Incident: Session Etag Spoofing | Boolean | True | The user has provided a fake ETag value which is not a valid session.
6.5.1.2. Incidents
6.5.1.2.1. Session Etag Spoofing Complexity: Medium (3.0)
Default Response: 1x = Slow Connection 2-6 seconds, 3x = Slow Connection 4-15 seconds.
Cause: The HTTP protocol supports many different types of client-side resource caching in order to increase performance. One of these caching mechanisms uses a special header called "E-Tag" to identify when the client already has a valid copy of a resource. When a user requests a resource for the first time, the server has the option of returning an E-Tag header. This header contains a key that represents the version of the file that was returned (e.g., an MD5 hash of the file contents). On subsequent requests for the same resource, the
client will provide the last E-Tag it was given for that resource. If the server identifies that the provided E-Tag and the actual E-Tag of the file are the same, it will respond with a 304 status code (Not Modified), and the client will display the last copy it successfully downloaded. This prevents the client from downloading the same version of a resource over and over again. In the event that the E-Tag value does not match, the server will return a new copy of the resource and a new E-Tag value. MWS takes advantage of this caching mechanism to store a tracking token on the client. It does this by injecting a fake embedded resource reference (such as an image or a JavaScript file) into some of the pages on the protected site. When the browser loads these pages, it will automatically request the embedded resources in the background. The fake resource injected by MWS supplies a special E-Tag value that contains a tracking token. As the user continues to navigate around the site, each time they load a page that contains a reference to the fake resource, the browser will automatically transmit the previously received E-Tag to the server. This allows MWS to correlate the requests, even if other tracking mechanisms such as cookies are not successful. The E-Tag value returned by the fake resource, which contains the tracking token, is also digitally signed and encrypted, much like the MWS session cookie. This prevents a user from successfully guessing a valid E-Tag token, or attempting to provide an arbitrary value without being detected. If an invalid E-Tag is supplied for the fake resource, a "Session E-Tag Spoofing" incident is triggered.
Behavior: There are very few cases where the E-Tag caching mechanism is part of an attack vector, so this incident would almost exclusively represent a user who is attempting to evade tracking or exploit the tracking method to their advantage. For example, if a user identifies the E-Tag tracking mechanism, they may provide alternate values in order to generate errors in the tracking logic and potentially disconnect otherwise correlated traffic. They may also attempt to guess other valid values in order to correlate otherwise non-related traffic (such as a hacker attempting to group other legitimate users into their traffic). While this is a highly unlikely attack vector, it could loosely be classified as a "Credential and Session Prediction161" attack. It is also possible, though unlikely, that once an attacker identifies the dynamic nature of the E-Tag header for the fake resource, they may launch a series of other attacks based on input manipulation. This could include testing for SQL injection162, XSS163, Buffer Overflow164, Integer Overflow165, and HTTP Response Splitting166, among others. However, these would be attacks directly against MWS, and not against the protected web application.
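A minimal sketch of the signed-ETag tracking idea follows. A plain HMAC signature stands in for the signing and encryption MWS applies; the token format and all names are illustrative assumptions.

```python
# Illustrative sketch: the fake resource's ETag carries a signed tracking
# token, and the value the browser echoes back (If-None-Match) is verified
# before being trusted.
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumed server-side key

def _sign(token: str) -> str:
    return hmac.new(SECRET, token.encode(), hashlib.sha256).hexdigest()[:16]

def make_etag(token: str) -> str:
    """Build the ETag value served for the fake resource."""
    return f'"{token}.{_sign(token)}"'

def read_etag(etag: str):
    """Return the tracking token, or None if the ETag was forged/malformed."""
    try:
        token, sig = etag.strip('"').rsplit(".", 1)
    except ValueError:
        return None  # malformed -> "Session Etag Spoofing" incident
    if hmac.compare_digest(sig, _sign(token)):
        return token
    return None  # bad signature -> "Session Etag Spoofing" incident
```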
6.5.2. Client Beacon Processor The client beacon processor is intended to digitally tag users for later identification by embedding a tracking token into the client. There are configurable parameters that administrators can use to configure each type of storage mechanism used to track malicious users.
6.5.2.1. Configuration
Table 6.16. Client Beacon Processor Configuration Parameters

Parameter | Type | Default Value | Description
Basic
Processor Enabled | Boolean | True | Whether traffic should be passed through this processor.
161 http://projects.webappsec.org/w/page/13246918/Credential-and-Session-Prediction
162 http://projects.webappsec.org/SQL-Injection
163 http://projects.webappsec.org/Cross-Site+Scripting
164 http://projects.webappsec.org/Buffer-Overflow
165 http://projects.webappsec.org/Integer-Overflows
166 http://projects.webappsec.org/HTTP-Response-Splitting
Advanced
Flash Storage Enabled | Boolean | True | Whether to use the Flash shared data API to track the user.
IE UserData Storage Enabled | Boolean | True | Whether to use Internet Explorer's userData storage API to track the user.
Local Storage Enabled | Boolean | True | Whether to use JavaScript local storage to track the user.
Private Storage Enabled | Boolean | True | Whether to track users between private browsing mode and normal browsing mode in Firefox.
A collection of names to use for the Application session cookie.
Silverlight Storage Enabled | Boolean | True | Whether to use the Silverlight storage API to track the user. The Silverlight storage API is unique in that it is exposed across all browsers. If this beacon is enabled and the user has Silverlight installed, this beacon can track the user even if they switch browsers.
Window Name Storage Enabled | Boolean | True | Whether to use the window.name property of the browser window to track the user.
Resource Extensions | Collection | Collection | A collection of resource extensions to use for the processor.
Script Refresh Delay | Integer | 3600 (1 Hour) | The amount of time in seconds to cache the randomly generated set of beacon scripts. After this amount of time, the beacon scripts will change.
Script Variations | Integer | 30 | The number of random variations of the beacon script to cache, and then to select from on each request.
Incident: Beacon Parameter Tampering | Boolean | True | The user has issued a request to the session tracking service which appears to be manually crafted. This is likely an attempt to spoof another user's session, or to exploit the application's session management. This would never happen under normal usage.
Incident: Beacon Session Tampering | Boolean | True | The user has altered the data stored on the client in an effort to prevent tracking. They have altered the data in such a way as to remain consistent with the same data format. This would never happen under normal usage.
6.5.2.2. Incidents
6.5.2.2.1. Beacon Parameter Tampering Complexity: Medium (3.0)
Default Response: 1x = 5 day Strip Inputs in 10 minutes
Cause: MWS uses a special persistent token that it inserts in multiple locations throughout the client. When a user returns to the site later on, these tokens are transmitted back to the server. This allows the server to correlate the traffic issued by the same user, even if the requests are weeks apart. This incident is triggered when the user manipulates the token data being transmitted to the server on a subsequent visit. They manipulated the data in such a way as to break the expected formatting for the token.
Behavior: Attempts to manipulate and spoof the tracking tokens are generally performed when the attacker is trying to figure out what the token is used for and potentially evade tracking. Because the format of the token is completely wrong, this is likely a generic input attack, where the user is attempting to find a vulnerability in the code that handles the token. This could include a "Buffer Overflow167", "XSS168", "Denial of Service169", "Fingerprinting170", "Format String171", "HTTP Response Splitting172", "Integer Overflow173", or "SQL injection174" attack, among many others. The content of the manipulated token should be reviewed to better understand what type of attack the user was attempting; however, because the tokens are heavily encrypted and validated, this incident does not represent a threat to the security of the system's tracking mechanism.
6.5.2.2.2. Beacon Session Tampering Complexity: Medium (3.0)
Default Response: 1x = 5 day Strip Inputs in 10 minutes.
Cause: MWS uses a special persistent token that it inserts in multiple locations throughout the client. When a user returns to the site later on, these tokens are transmitted back to the server. This allows the server to correlate the traffic issued by the same user, even if the requests are weeks apart. This incident is triggered when the user manipulates the token data being transmitted to the server on a subsequent visit. They manipulated the data in such a way as to remain consistent with the correct formatting for the token, but the token itself is not valid and was never issued by the server.
Behavior: Attempts to manipulate and spoof the tracking tokens are generally performed when the attacker is trying to figure out what the token is used for and potentially evade tracking. If they are assuming it's used for session management, this might also be a part of a "Credential/Session Prediction175" attack. Because the format of the submitted modified token is still consistent with the format expected, this is not likely a generic input attack. It also does not represent any threat to the system, as the modified token is simply ignored.
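The two beacon incidents above differ only in whether the submitted token still matches the expected format. That distinction can be sketched as follows, with a hypothetical token format and issued-token set standing in for MWS's encrypted, server-issued tokens.

```python
# Illustrative sketch: a token that breaks the expected format suggests a
# generic input attack (Beacon Parameter Tampering), while a well-formed
# token the server never issued suggests spoofing (Beacon Session Tampering).
import re

ISSUED_TOKENS = {"tok-1001", "tok-1002"}  # tokens the server actually issued
TOKEN_FORMAT = re.compile(r"^tok-\d+$")   # hypothetical token format

def classify_beacon_token(token: str):
    if not TOKEN_FORMAT.match(token):
        return "Beacon Parameter Tampering"
    if token not in ISSUED_TOKENS:
        return "Beacon Session Tampering"
    return None  # valid token: correlate with the earlier session
```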
167 http://projects.webappsec.org/Buffer-Overflow
168 http://projects.webappsec.org/Cross-Site+Scripting
169 http://projects.webappsec.org/Denial-of-Service
170 http://projects.webappsec.org/Fingerprinting
171 http://projects.webappsec.org/Format-String
172 http://projects.webappsec.org/HTTP-Response-Splitting
173 http://projects.webappsec.org/Integer-Overflows
174 http://projects.webappsec.org/SQL-Injection
175 http://projects.webappsec.org/Credential-and-Session-Prediction
6.6. Response Processors
6.6.1. About Responses The processors in this section are responsible for issuing the various counter responses to malicious users on a server protected by Mykonos Web Security. A response is activated when MWS believes intervention is required between the profiled user and the web server. The response can manifest as any of the types explained below.
6.6.1.1. Response Methodology When Mykonos Web Security determines a response is required, the type of response issued depends on the type of behavior the malicious user exhibited. For example, users that MWS believes are automated tools will likely be issued a CAPTCHA response, whereas a real malicious user (not a bot) would obviously be able to solve a CAPTCHA. In that second case, adding a 2 to 6 second slow might be more effective at wasting the hacker's time.
Another factor that comes into play when issuing counter responses is risk level. If Mykonos Web Security believes a user is of no immediate risk to the system, it might only activate those responses which still allow the user to browse the site somehow, such as the Warning response or Slow Connection response. This way, MWS can monitor that user and gather additional information to properly assess their risk level. If MWS believes the user is a danger to the system, it will issue a more severe response, such as stripping out all inputs on every request or outright blocking the profile.
Some responses might not be issued right away. For example, an incident may produce "a permanent block in 20 minutes". The reason for this delay is that MWS uses the buffer time to gather some last-minute information on the profile before issuing the final response. MWS will respond instantly if it perceives an immediate threat to the integrity of the system; otherwise, the delay allows MWS to profile the attacker for a bit longer. The end result is a more complete picture of the attacker and his/her habits.
6.6.1.2. Types of Responses Certain response processors are self-explanatory, such as the Block Processor (the user will see that they are blocked). Other responses are "invisible" in that there are no manifestations of the response visible to the user. An example of an invisible response processor is the Strip Inputs Processor. This processor simply removes all values from all inputs on any form submitted, because MWS has determined that the user's input can no longer be trusted. On the user's end, they will see nothing to indicate that this response is active (until they figure out that their inputs are not being recognized).
6.6.1.3. Response Activation Responses get automatically activated according to rules set forth within MWS. These rules are outlined for each incident a user can trigger, and are described in the documentation for each processor. The default response for each incident is documented in the User Guide, and will look something like, "Default Response: 1x = Warn User. 2x = 1 Day Block". The '1x' or '2x' indicate the number of incidents of that type triggered. For this example, triggering this incident once results in the Warning Processor being activated. If the same incident is triggered again on the same profile, the user then gets a 1 day block via the Block Processor.
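The activation rules described above amount to a threshold table keyed by incident count. A minimal sketch, using the documented example rule "1x = Warn User, 2x = 1 Day Block" as an illustrative table:

```python
# Illustrative sketch: pick the response for the highest threshold reached
# by the count of a given incident on a profile.
RULES = {1: "Warn User", 2: "1 Day Block"}  # example default-response rule

def response_for(incident_count: int):
    """Return the response to activate, or None below every threshold."""
    reached = [n for n in RULES if incident_count >= n]
    return RULES[max(reached)] if reached else None
```

Counts beyond the highest threshold keep the most severe response, matching the idea that a repeated incident escalates rather than resets.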
Note You may wish to disable automatic counter responses entirely. If this is the case, changing the configuration parameter "Auto Response Activation Enabled" to 'False' will prevent any new automatic activations, but will not hinder your ability to manually activate responses on profiles. (Configuration >> Global Configuration >> Auto Response Service >> "Auto Response Activation Enabled = False")
6.6.1.4. Compounding and Overriding Responses When multiple responses are active on the same profile, some responses override others. Block overrides: • Warning - There is no need to warn someone when they are already blocked.
• Captcha - If the user is ever unblocked (or the block expires), they will be prompted to solve the captcha.
• Cloppy - If they are ever unblocked (or the block expires) Cloppy will appear.
• Google Maps - If the user is ever unblocked (or the block expires), they will be shown the Google map.
Captcha overrides: • Warning - MWS will warn after they solve the captcha.
• Cloppy - Cloppy will appear after they solve the captcha.
• Google Maps - The Google map will be shown after they solve the captcha.
Strip Inputs overrides: • Break Authentication - It is redundant, as MWS is already stripping login credentials.
6.6.2. Block Processor The block processor is a form of auto response. When this processor is enabled, it allows the security system to block requests, returning a "Blocked!" message to the user.
Note There are no actual triggers for this processor; it is a form of response.
6.6.2.1. Configuration
Table 6.17. Block Processor Configuration Parameters

Parameter | Type | Default Value | Description
Basic
Processor Enabled | Boolean | True | Whether traffic should be passed through this processor.
Advanced
Block Response | Configurable | HTTP Response | The response to return to the user when they are blocked.
6.6.3. Request Captcha Processor The Captcha processor is designed to protect specific pages in a web application from automation. This is done by using a "Captcha" challenge, where the user is required to transcribe random characters from an obscured image or muffled audio file in order to complete the request. The intent is that a human would be capable of correctly answering the challenge, while an automated script with no human intervention would be unable to do so. This assumes that the image is obscured enough that text recognition software is not effective, and that the audio file is distorted enough to defeat speech-to-text software. Requiring such user interaction is somewhat disruptive, so it should be utilized only for pages that are prime automation targets (such as contact forms, registration pages, login pages, etc.).
Furthermore, these captcha challenges can be customized to fit the style of the application being protected. For more information on how to customize a captcha, see the Captcha Template documentation in the appendix of the user guide.
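The expiration and replay checks this processor performs (see the "Captcha Expiration" and "Captcha Request Replay Attack" entries below) can be sketched as follows. The state handling is deliberately simplified, using a plain in-memory set instead of the signed state cookie, and all names are illustrative assumptions.

```python
# Illustrative sketch: a captcha answer must match within the expiration
# window, and a solved captcha cannot validate a second request (replay).
import time

CAPTCHA_EXPIRATION = 120  # seconds, mirroring the 2-minute default
_used_keys = set()        # stands in for signed per-user replay state

def validate_captcha(answer, expected, issued_at, key, now=None):
    """Return an incident name, or "ok" if the validation succeeds."""
    now = time.time() if now is None else now
    if now - issued_at > CAPTCHA_EXPIRATION:
        return "Expired Captcha Request"
    if answer != expected:
        return "Bad Captcha Answer"
    if key in _used_keys:
        return "Captcha Request Replay Attack"
    _used_keys.add(key)  # consume the validation key
    return "ok"
```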
6.6.3.1. Configuration
Table 6.18. Request Captcha Processor Configuration Parameters

Parameter | Type | Default Value | Description
Basic
Processor Enabled | Boolean | True | Whether traffic should be passed through this processor.
Advanced
Bad Request Block Response | HTTP Response | 400 HTTP Response | The response to return if the user issues a request that either is too large, or uses multipart and multipart is disabled.
Blocked Replay Response | String | Random Value | The response to return if the user attempts to submit the validated request multiple times using the same captcha answer, and that behavior is not allowed.
Captcha Binary Directory | String | Random Value | The name of the directory where captcha images and audio files will be served from. This should not conflict with any actual directories on the site.
Captcha Characters | String | Random Value | The characters to use when generating a random captcha value. Avoid using characters that can be easily mixed up. This set of characters is case sensitive.
Captcha State Cookie Name | String | Random Value | The name of the cookie used to track the active captchas that have not yet been solved. The cookie is only served to the captcha binary directory.
Captcha Validation Input Name | String | Random Value | The name of the form input used to transmit the captcha validation key. This should be obscure so that users who have not been required to enter a captcha cannot supply bad values to this input to profile the system.
Maximum Active Captchas | Integer | 7 | The maximum number of captchas any given user can be solving at any given time. This limit can be overcome, but the majority of users will not be able to. This is primarily for performance, as the more active captchas that are allowed, the larger the state cookie becomes.
Support Audio Version | Boolean | True | Whether an audio version of the captcha is provided to the user. This may be a requirement for accessibility, as vision-impaired users would otherwise be unable to solve the captcha.
Watermark | String | Random Value | The text to watermark the captcha with. This can be used to prevent the captcha from being used in a phishing attack. For example, an abuser would not be able to simply display the captcha on a different site and ask a user to solve it. The watermark would tip the user off that the captcha was not intended for the site they are visiting. Use %DOMAIN to use the domain name as the watermark.
Cancel URL | String | None | The URL to redirect the user to if they cancel the captcha. This should not be to the same domain, because the domain is being blocked using a captcha, and therefore, canceling would only redirect to a new captcha. An empty value will hide the cancel button.
Captcha Expiration | Integer | 2 minutes | The maximum number of seconds the user has to solve the captcha before the request is no longer possible.
Expired Captcha Response | HTTP Response | 400 HTTP Response | The response to return if the user submits a validated request after the captcha has expired. This may happen if the user refreshes the results of the captcha long after they have solved it.
Maximum Request Size | Integer | 500kb | The maximum number of bytes in a request before it is considered not acceptable for captcha validation, and will be blocked.
Protected Pages | Collection | None | A collection of protected pages.
Incident: Bad Captcha Answer | Boolean | False | The user was asked to solve a captcha and entered the wrong value. This could be a normal user error, or it could be the result of failed abuse.
Incident: Captcha Cookie Manipulation | Boolean | True | The user submitted a request and was asked to solve a captcha. They then modified the state cookie used to track captchas, making it invalid. This is likely an attempt to find a way to bypass the captcha validation mechanism.
Incident: Captcha Directory Indexing | Boolean | True | The user has requested a directory index in the directory that serves the captcha images and audio files. This is likely an attempt to get a list of all active captchas or to identify how the captchas are generated.
Incident: Captcha Directory Probing | Boolean | True | The user has requested a random file inside the directory that serves the captcha images and audio files. This is likely an attempt to find an exploitable service or sensitive file that may help bypass the captcha validation mechanism.
Incident: Captcha Disallowed MultiPart | Boolean | True | The user has submitted a multipart form post to the protected page, which has been configured as a disallowed option. This is likely an attempt to find an edge case the captcha validation mechanism is not expecting.
Incident: Captcha Image Probing | Boolean | True | The user is probing the directory used to serve captcha images. This is likely an attempt to find hidden files or a way to invoke errors from the captcha serving logic.
Incident: Captcha Parameter Manipulation | Boolean | True | The user has submitted a request with a valid captcha, but they modified the query string parameters. This could be an attempt to change the output of executing the request without requiring the user to re-validate with another captcha.
Incident: Captcha Request Replay Attack | Boolean | True | The user has attempted to submit the same request multiple times with the same captcha answer. In other words, they solved the captcha once and issued the resulting request multiple times.
Incident: Captcha Request Size Limit Exceeded | Boolean | True | The user has submitted a request to the protected page which contains more data than is allowed. This may be an attempt to reduce system performance by issuing expensive requests, or it may be an indicator of a more complex attack.
Incident: Captcha Request Tampering | Boolean | True | The user submitted a request and was asked to solve a captcha. They introspected the page containing the captcha and altered the serialized request data (the data from the original request before the captcha prompt). They then submitted a valid captcha using the modified request data. This is likely an attempt to abuse the captcha system and identify a bypass technique.
Incident: Captcha Signature Spoofing | Boolean | True | The user submitted a request and was asked to solve a captcha. They introspected the page containing the captcha and provided a validation key from a previously solved captcha. This is likely an attempt to submit multiple requests under the validation of the first.
Incident: Captcha Signature Tampering | Boolean | True | The user submitted a request and was asked to solve a captcha. They introspected the page containing the captcha and provided a fake validation key. This is likely an attempt to bypass the captcha validation mechanism.
Incident: Expired Captcha Request | Boolean | True | The user submitted a request and was given a set window of time to solve a captcha. The user solved the captcha and submitted the request for final processing after the window of time expired. This is likely an indication of a packet replay attack, where the user attempts to invoke the business logic of the protected page multiple times under the same captcha validation.
Incident: Mismatched Captcha Session | Boolean | True | The user submitted a request and was asked to solve a captcha. They solved the captcha, but upon submitting the request for final processing, they did so under a different session ID. This is likely due to multiple machines participating in the execution of the site workflow and may indicate a serious targeted automation attack.
Incident: No Captcha Answer Provided | Boolean | True | The user attempted to validate a captcha but did not supply an answer to validate. There is no interface that allows the user to do this, so they must be manually executing requests against the captcha validation API in an attempt to evade the mechanism.
Incident: Unsupported Boolean True The user has requested an audio version Audio Captcha of the captcha challenge, but audio is not Requested supported and there should not be an interface to ask for the audio version. The user is likely trying to find a way to more easily bypass the captcha system.
108 Request Captcha Processor
6.6.3.2. Incidents
6.6.3.2.1. Captcha Answer Automation

Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and 1 Day Strip Inputs
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation176" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user provides an abnormal volume of bad solutions to the captcha image. For example, the image may have said "Hello", but the user attempted 30 different values all of which did not match "Hello". Because the images can be somewhat difficult to read at times (in order to ensure a script cannot break them), it is not uncommon for a legitimate user to enter the wrong value a few times before getting it right, especially if they are unfamiliar with this type of technique, but after dozens of failed attempts, it is more likely a malicious user.
Behavior: Simply providing a bad solution to the captcha image is not necessarily malicious. Legitimate users are not always able to solve the captcha on the first try. However, if a large volume of invalid solutions is provided, then it is more likely that a script is attempting to crack the captcha image through educated guessing and "Brute Force177".
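The threshold behavior described above can be sketched in a few lines. This is a minimal illustration only; the class name, the cutoff of 10 failures (mirroring the Bad Captcha Answer incident's default response of "10x = Captcha Answer Automation"), and the reset-on-success behavior are assumptions for the sketch, not the appliance's actual implementation.

```python
import collections

BAD_ANSWER_CUTOFF = 10  # illustrative: 10 bad answers escalate to an automation incident

class CaptchaAttemptTracker:
    """Count consecutive bad captcha answers per session (hypothetical sketch)."""

    def __init__(self, cutoff=BAD_ANSWER_CUTOFF):
        self.cutoff = cutoff
        self.failures = collections.Counter()

    def record_answer(self, session_id, correct):
        """Return True once a session has crossed the automation threshold."""
        if correct:
            # A legitimate user eventually solves the captcha; clear the count.
            self.failures.pop(session_id, None)
            return False
        self.failures[session_id] += 1
        return self.failures[session_id] >= self.cutoff
```

A handful of failures stays below the cutoff (legitimate users often misread an image or two), while a sustained stream of bad answers from one session trips the threshold.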
6.6.3.2.2. No Captcha Answer Provided

Complexity: Medium (3.0)
Default Response: 1x = Warn User. 2x = 1 Day Block
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation178" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user forces
176. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
177. http://projects.webappsec.org/Brute-Force
178. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
the captcha interface to submit the request without a valid captcha solution. There is no way to do this without manipulating the logic that controls captcha protected requests.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. In this specific case, the attacker attempted to submit the captcha protected page without actually solving the captcha. Instead they provided an empty value for the solution parameter. It is not possible to submit an empty solution using the provided captcha interface, so this is almost guaranteed to be a malicious attempt at generating an error and obtaining additional details about the captcha implementation through an "Information Leakage179" weakness.
6.6.3.2.3. Multiple Captcha Request Overflow

Complexity: Low (2.0)
Default Response: 1x = 1 Day Strip Inputs.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation180" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to submit dozens of captcha protected requests that exceed the configured maximum for protected request sizes.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. In this specific case, the attacker submitted dozens of extremely large requests, probably in an effort to find a "Buffer Overflow181" vulnerability, which would produce useful error data and potentially open the server up to further exploitation. They may also be attempting to overload the server and execute a "Denial of Service182" attack.
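The detection logic described above amounts to counting how many captcha-protected requests from a session exceed the configured size limit. A minimal sketch follows; the byte limit, the cutoff, and the function name are invented for illustration and are not the product's configured defaults.

```python
MAX_PROTECTED_REQUEST_BYTES = 8192   # illustrative size limit, not a product default
OVERFLOW_CUTOFF = 12                 # illustrative: "dozens" of oversized requests

def is_request_overflow(request_sizes, limit=MAX_PROTECTED_REQUEST_BYTES,
                        cutoff=OVERFLOW_CUTOFF):
    """Return True when a session has submitted enough oversized
    captcha-protected requests to look like an overflow probe."""
    oversized = sum(1 for size in request_sizes if size > limit)
    return oversized >= cutoff
```

A single oversized request only raises the per-request size incident; it is the repeated pattern across a session that escalates to this incident.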
6.6.3.2.4. Unsupported Audio Captcha Requested

Complexity: Medium (3.0)
Default Response: 3x = Slow Connection 2-6 seconds and Warn User. 5x = 1 Day Block.
179. http://projects.webappsec.org/Information-Leakage
180. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
181. http://projects.webappsec.org/Buffer-Overflow
182. http://projects.webappsec.org/Denial-of-Service
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation183" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to request the audio version of a captcha challenge when support for audio captchas has been explicitly disabled.
Behavior: Solving an image based captcha is exceptionally difficult and requires a great deal of time and research. Solving an audio captcha however is far less difficult. There are already multiple open source libraries available for translating speech to text. As such, it is often necessary to disable the support of "audio" captchas for critical workflows (such as administrative login dialogs), unless absolutely necessary for accessibility reasons. This incident occurs when the audio captcha has been disabled, but a user is attempting to manually request the audio version of the captcha challenge anyway. The captcha interface does not expose a link to the audio version unless it is explicitly enabled in configuration, so this would require that the user knows where to look for the audio version, they understand the filename conventions, and they know how to make the request manually to download the file. In either case, if audio captchas are not enabled (through configuration), then this effort will not be successful.
6.6.3.2.5. Bad Captcha Answer

Complexity: Suspicious (1.0)
Default Response: 10x = Captcha Answer Automation Incident.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation184" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user provides a bad solution to the captcha image. For example, the image may have said "Hello", but the user typed "hfii0" instead. Because the images can be somewhat difficult to read at times (in order to ensure a script cannot break them), it is not uncommon for a legitimate user to enter the wrong value a few times before getting it right, especially if they are unfamiliar with this type of technique.
183. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
184. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
Behavior: Simply providing a bad solution to the captcha image is not necessarily malicious. Legitimate users are not always able to solve the captcha on the first try. However, if a large volume of invalid solutions is provided, then it is more likely that a script is attempting to crack the captcha image through educated guessing and "Brute Force185".
6.6.3.2.6. Mismatched Captcha Session

Complexity: High (4.0)
Default Response: 1x = Warn User. 2x = 5 Day Strip Inputs.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation186" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user provides a solution to a captcha that was issued for a different session than their own, as might be the case in a script that uses minimal human interaction to solve the captchas, but automates everything else.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to harvest successfully solved captchas from other users on the site. This can be done either by infecting those machines with a virus, or by implanting a script into some of the site's pages (possibly through XSS). If this technique is used, then the captcha that is being solved may not have originated from the same session as the user who is submitting the solution. This is a dead giveaway that the user is attempting to defeat the captcha system to automate a specific task.
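A common way to detect harvested solutions is to bind each captcha to the session that requested it, for example with an HMAC over the captcha ID and session ID. The sketch below is a hypothetical illustration of that idea only; the secret, the function names, and the token format are invented and do not describe the appliance's internals.

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative; a real deployment would use a managed key

def issue_captcha_token(captcha_id, session_id):
    """Bind a captcha to the session it was issued for (hypothetical sketch)."""
    msg = ("%s:%s" % (captcha_id, session_id)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def session_matches(captcha_id, submitting_session, token):
    """True only if the solution is submitted under the issuing session."""
    expected = issue_captcha_token(captcha_id, submitting_session)
    return hmac.compare_digest(expected, token)
```

A solution harvested in one session and replayed from another fails the check, which is exactly the mismatch this incident reports.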
6.6.3.2.7. Expired Captcha Request

Complexity: Suspicious (1.0)
Default Response: None.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script).
185. http://projects.webappsec.org/Brute-Force
186. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
Captchas are generally used to resolve "Insufficient Anti-Automation187" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user provides a solution to a captcha after the allotted time for solving the captcha has elapsed.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to run expensive image processing algorithms on the captcha image in order to identify what the represented value might be. Additionally, a user might attempt to send the captcha to a warehouse of human captcha solvers. These warehouses specialize in solving large volumes of captchas at a fairly low price (less than a penny per captcha). In either case, it can take several minutes to get the correct captcha answer, which will likely exceed the amount of time the user is allowed for solving the captcha. If using a browser, the input would simply stop accepting answers, but in a scripted scenario, the script will likely try to submit the value anyway, because it is unaware of the expiration. It is possible that this incident would be triggered by a legitimate user, if they were to refresh the page that was produced after the captcha was solved. This would effectively cause the captcha to be reprocessed after the expiration time had been exceeded. As such, this incident on its own is not considered malicious.
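The expiry check itself is a simple timestamp comparison against the solving window. The sketch below is illustrative only; the five-minute window and the function name are assumptions, not the appliance's configured values.

```python
import time

CAPTCHA_TTL_SECONDS = 300  # illustrative five-minute solving window

def is_expired(issued_at, now=None, ttl=CAPTCHA_TTL_SECONDS):
    """True if the captcha solution arrived after the solving window closed."""
    if now is None:
        now = time.time()
    return (now - issued_at) > ttl
```

A slow human solver, an outsourced solving warehouse, or a page refresh after the fact all land on the expired side of this comparison, which is why a single occurrence is treated as suspicious rather than outright malicious.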
6.6.3.2.8. Captcha Request Tampering

Complexity: High (4.0)
Default Response: 1x = Warn User. 2x = 5 Day Strip Inputs.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation188" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user provides a solution to a captcha which is correct, but they have modified the parameter containing the original request (which is heavily encrypted to prevent tampering).
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. Because the parameter that was modified contained the original request data (before the captcha was issued), it is likely that the attacker is attempting to smuggle a malicious payload through the system without being detected by any network or web firewalls. Because this parameter uses heavy encryption and validation, this type of activity will not produce any useful information or expose any vulnerabilities. Depending on the value they submitted for the original request data, this may also fall under one of the other attack categories involving manipulating general inputs, such as a "Buffer Overflow189", "XSS190", "Denial of
187. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
188. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
189. http://projects.webappsec.org/Buffer-Overflow
190. http://projects.webappsec.org/Cross-Site+Scripting
Service191", "Fingerprinting192", "Format String193", "HTTP Response Splitting194", "Integer Overflow195", or "SQL injection196" attack among many others.
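The guide describes the serialized original request as heavily encrypted and validated so that any modification is detectable. The integrity-checking half of such a scheme is commonly built with a MAC over the serialized data. The sketch below illustrates that general pattern under assumed names; the secret, the JSON serialization, and the function names are inventions for the example, not the product's actual format.

```python
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # illustrative; real deployments use a managed key

def seal_request(request_fields):
    """Serialize the original request and attach a tamper-evident MAC (sketch)."""
    payload = json.dumps(request_fields, sort_keys=True)
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload, mac

def verify_request(payload, mac):
    """True only if the serialized request is byte-for-byte unmodified."""
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)
```

Any edit to the sealed payload, however small, changes the MAC and is flagged, which is why the incident text notes that this probing "will not produce any useful information."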
6.6.3.2.9. Captcha Signature Tampering

Complexity: High (4.0)
Default Response: 1x = Warn User. 2x = 5 Day Strip Inputs.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation197" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user provides a solution to a captcha which is correct, but they have modified the integrity checking signature passed along with the captcha solution.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. Depending on the value they submitted for the original request data, this may also fall under one of the other attack categories involving manipulating general inputs, such as a "Buffer Overflow198", "XSS199", "Denial of Service200", "Fingerprinting201", "Format String202", "HTTP Response Splitting203", "Integer Overflow204", or "SQL injection205" attack among many others.
6.6.3.2.10. Captcha Signature Spoofing

Complexity: High (4.0)
Default Response: 1x = Warn User. 2x = 5 Day Strip Inputs.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image
191. http://projects.webappsec.org/Denial-of-Service
192. http://projects.webappsec.org/Fingerprinting
193. http://projects.webappsec.org/Format-String
194. http://projects.webappsec.org/HTTP-Response-Splitting
195. http://projects.webappsec.org/Integer-Overflows
196. http://projects.webappsec.org/SQL-Injection
197. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
198. http://projects.webappsec.org/Buffer-Overflow
199. http://projects.webappsec.org/Cross-Site+Scripting
200. http://projects.webappsec.org/Denial-of-Service
201. http://projects.webappsec.org/Fingerprinting
202. http://projects.webappsec.org/Format-String
203. http://projects.webappsec.org/HTTP-Response-Splitting
204. http://projects.webappsec.org/Integer-Overflows
205. http://projects.webappsec.org/SQL-Injection
and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation206" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user provides a solution to a captcha which is correct, but they have replaced the integrity checking signature passed along with the captcha solution to one that was used in a previous captcha solution.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. This specific incident generally reflects the behavior of a user who is trying to submit a request that would normally be protected by a captcha, but is trying to trick the system into thinking the captcha was solved correctly, even though it was not. This generally indicates a search for an "Insufficient Anti-Automation207" weakness in the captcha handling mechanism.
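Spoofing, as described above, reuses a validation key from an earlier solved captcha. The standard defense is to make validation keys single-use, so a replayed key is rejected on its second appearance. The sketch below illustrates that idea only; the class and method names are invented, and a production system would also expire entries rather than grow the set forever.

```python
class ValidationKeyRegistry:
    """Track spent captcha validation keys so each can be redeemed once (sketch)."""

    def __init__(self):
        self._spent = set()

    def redeem(self, key):
        """Return True the first time a key is redeemed, False on any reuse."""
        if key in self._spent:
            return False  # replayed key: the Captcha Signature Spoofing case
        self._spent.add(key)
        return True
```

With this in place, solving one captcha buys the attacker exactly one protected request; submitting "multiple requests under the validation of the first" fails from the second request onward.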
6.6.3.2.11. Captcha Cookie Manipulation

Complexity: Medium (3.0)
Default Response: 1x = Warn User. 2x = 5 Day Strip Inputs.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation208" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user alters the cookies used to maintain captcha state.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. In this specific case, the attacker modified a cookie that is used to maintain the state of the captcha. The cookie is heavily encrypted, but the attacker may be attempting to establish a way of either identifying what the value of the captcha is algorithmically (by analyzing the cookie value), or they may be
206. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
207. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
208. http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
attempting to assign a value to the captcha. In either case, this activity generally indicates a user who is trying to find a way to bypass the captcha. Depending on the value they submitted for the original request data, this may also fall under one of the other attack categories involving manipulating general inputs, such as a "Buffer Overflow209", "XSS210", "Denial of Service211", "Fingerprinting212", "Format String213", "HTTP Response Splitting214", "Integer Overflow215", or "SQL injection216" attack among many others.
6.6.3.2.12. Captcha Image Probing Complexity: Low (2.0)
Default Response: 1x = Warn User. 2x = 5 Day Block.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation217" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to request a captcha image file for a request that is not being protected by a captcha.
Behavior: In order to find a way to bypass the captcha mechanism, attackers will often attempt to collect a large number of captcha images for offline analysis. If the attacker can find a pattern in how the captcha images are issued, or how the filename relates to the value in the image, then they can effectively bypass the captcha mechanism at will. In this case, the attacker is guessing arbitrary captcha image filenames, but is attempting to keep the format of the names consistent with known captcha image URLs. Because the filename used and the values in the image have no correlation, this technique will not be successful and will simply waste the attacker's time and resources.
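The defense described here, image names with no correlation to the challenge value, can be sketched as follows. This is a hypothetical Python illustration; the directory and helper names are assumptions, not the product's code:

```python
import secrets

issued_images = set()  # filenames of captcha images the server actually generated

def new_captcha_image_url() -> str:
    """Issue a random image name with no algorithmic relation to the answer."""
    name = secrets.token_urlsafe(16) + ".png"
    issued_images.add(name)
    return "/captcha/img/" + name

def handle_image_request(path: str) -> int:
    """Serve only images we issued; a guessed filename triggers an incident."""
    name = path.rsplit("/", 1)[-1]
    if name not in issued_images:
        return 404  # here the system would log a "Captcha Image Probing" incident
    return 200
```

Because each name is drawn from a cryptographically secure generator, collecting images reveals nothing about how future names are chosen.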
6.6.3.2.13. Captcha Request Size Limit Exceeded Complexity: Suspicious (1.0)
Default Response: 10x = Multiple Captcha Request Overflow Incident.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the
209 http://projects.webappsec.org/Buffer-Overflow
210 http://projects.webappsec.org/Cross-Site+Scripting
211 http://projects.webappsec.org/Denial-of-Service
212 http://projects.webappsec.org/Fingerprinting
213 http://projects.webappsec.org/Format-String
214 http://projects.webappsec.org/HTTP-Response-Splitting
215 http://projects.webappsec.org/Integer-Overflows
216 http://projects.webappsec.org/SQL-Injection
217 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation218" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to submit a captcha-protected request that contains a request body larger than the configured maximum.
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated or if an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. In this specific case, the attacker submitted an extremely large request, probably in an effort to find a "Buffer Overflow219" vulnerability, which would produce useful error data and potentially open the server up to further exploitation. This incident is not necessarily malicious on its own, as it is possible for a normal user to submit a value that is larger than the configured maximum, especially if the configured maximum is small, or if the form protected by the captcha allows file posts.
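The size check amounts to comparing the declared request body against the configured maximum. A minimal sketch, assuming a hypothetical limit; not the actual Mykonos logic:

```python
MAX_CAPTCHA_BODY = 8 * 1024  # hypothetical configured maximum (8 KiB)

def check_body_size(headers: dict) -> bool:
    """Reject captcha-protected posts whose declared body exceeds the limit."""
    try:
        length = int(headers.get("Content-Length", "0"))
    except ValueError:
        return False  # malformed header; treat as a violation
    return length <= MAX_CAPTCHA_BODY
```

In practice the server must also enforce the limit while reading the body, since a client can lie in the Content-Length header.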
6.6.3.2.14. Captcha Disallowed MultiPart Complexity: Suspicious (1.0)
Default Response: 10x = Multiple Captcha Disallow Multipart Incident.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation220" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to submit a captcha protected request that contains a binary file, and the captcha is explicitly configured to not allow binary file submission (it has been configured to disallow multi-part form submissions).
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the captcha, they may use various different techniques. One of these techniques is to try changing various values used by the web application in the captcha mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse engineering is generally performed by advanced hackers. In this specific case, the attacker submitted a binary file in the request that is being protected. The captcha in this case has been explicitly configured to not allow Multi-Part form submissions, so this represents unexpected and undesired activity. Using Multi-Part forms, the attacker can
218 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
219 http://projects.webappsec.org/Buffer-Overflow
220 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
more easily accomplish a "Buffer Overflow221" attack, which would produce useful error data and potentially open the server up to further exploitation. Additionally, some web applications do not handle the encoding used for multi-part forms gracefully, so error information may also be obtained from conflicts arising from the submission type. This is not necessarily a malicious incident on its own, because it is possible that the user is legitimately submitting a multi-part form, and just happened to have the captcha activated during the submission. However, this is a very rare case, and still represents a somewhat suspicious client.
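The multipart check comes down to inspecting the request's Content-Type header. A hedged Python sketch with hypothetical names:

```python
def multipart_allowed(content_type: str, allow_multipart: bool) -> bool:
    """Flag multi-part posts when the captcha disallows binary file uploads."""
    is_multipart = content_type.lower().startswith("multipart/form-data")
    return allow_multipart or not is_multipart
```

When this returns False on a captcha-protected request, the system would record a "Captcha Disallowed MultiPart" incident rather than forward the request.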
6.6.3.2.15. Captcha Directory Indexing Complexity: Low (2.0)
Default Response: 1x = Slow Connection 2-6 seconds and 1 Day Block.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation222" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to request a directory index from the same fake directory as the captcha images are being served from.
Behavior: When attempting to either bypass the captcha mechanism, or find a vulnerability in the server, attackers will often try finding unlinked resources throughout the web site. The captcha mechanism uses a fake directory in order to serve the images and audio files that contain the captcha challenge. If the attacker is requesting an arbitrary file within the same fake directory, they are likely trying to find a "Predictable Resource Location223" vulnerability. In this specific case, the attacker is attempting to get a full file listing of everything inside the captcha directory. This could potentially be used to get a massive list of all active captcha URLs, or to find resources that are used in the creation of captcha challenges. The directory index will not be allowed, so this does not actually provide the attacker with any useful information.
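Detecting a directory-index request for the fake captcha directory can be as simple as comparing the normalized path. A hypothetical sketch; the directory name is an assumption:

```python
CAPTCHA_DIR = "/captcha/img"  # hypothetical fake directory

def is_directory_index_request(path: str) -> bool:
    """True when the fake directory itself is requested, not a file inside it."""
    return path.rstrip("/") == CAPTCHA_DIR
```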
6.6.3.2.16. Captcha Directory Probing Complexity: Low (2.0)
Default Response: 1x = Warn User. 2x = Slow Connection 2-6 seconds and 5 Day Block.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully.
221 http://projects.webappsec.org/Buffer-Overflow
222 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
223 http://projects.webappsec.org/Predictable-Resource-Location
Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation224" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to request an arbitrary file (not a captcha image, but something else) from within the same fake directory as the captcha images are being served from.
Behavior: When attempting to either bypass the captcha mechanism, or find a vulnerability in the server, attackers will often try finding unlinked resources throughout the web site. The captcha mechanism uses a fake directory in order to serve the images and audio files that contain the captcha challenge. If the attacker is requesting an arbitrary file within the same fake directory, they are likely trying to find a "Predictable Resource Location225" vulnerability. For example, the attacker might be trying to find a source file in the captcha serving directory in hopes of actually being able to get the source code behind how captcha images are generated. Because the directory is fake, the attacker will never find any of the resources they are looking for.
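One way to flag arbitrary-file probing is to validate requested names against the format of issued captcha assets. The directory name and filename pattern below are assumptions for illustration only:

```python
import re

CAPTCHA_DIR = "/captcha/img/"  # hypothetical fake directory
ISSUED_NAME = re.compile(r"^[A-Za-z0-9_-]{22}\.(png|wav)$")  # assumed name format

def is_probing(path: str) -> bool:
    """Any file in the fake directory that is not a captcha challenge asset."""
    if not path.startswith(CAPTCHA_DIR):
        return False
    name = path[len(CAPTCHA_DIR):]
    return ISSUED_NAME.match(name) is None  # e.g. "generator.php" -> probing
```

Since the directory does not actually exist on disk, a well-formed name that was never issued simply returns a 404, while an out-of-format name like a source file request is treated as probing.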
6.6.3.2.17. Captcha Parameter Manipulation Complexity: Suspicious (1.0)
Default Response: 5x = Multiple Captcha Parameter Manipulation Incident.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation226" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to submit multiple solutions for multiple captchas, but they keep modifying the query parameters that were submitted with the original requests. For example, if the user submitted an "add product to cart" request, and one of the query parameters was the item to add, this incident would be triggered if, after solving the captcha, the value of that query parameter was modified to some other value, and this modification happened dozens of times.
Behavior: Because captchas prevent automation, attackers will sometimes try to find ways to abuse the technique used to request the captcha in order to exploit the site. For example, if the attacker can find a way to submit the same solution over and over again, but have the web application perform a different action each time, they may be able to solve the captcha once and still automate the resulting workflow. In this case, the attacker changed a query parameter that was submitted with the original request. They submitted the original request, solved the captcha, changed the query parameter, and then resubmitted the solved captcha request. In some cases, this might cause the web application to execute a different operation based on the difference in query parameter values. For example, if the protected workflow is "add product to cart" on a shopping site,
224 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
225 http://projects.webappsec.org/Predictable-Resource-Location
226 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
then the attacker might attempt to submit the same solved captcha repeatedly, but change the product ID that is being added on each request. This might allow them to automate the addition of products to a shopping cart, after solving only one captcha challenge. The captcha mechanism does not allow the modification of query parameters after the original request has been submitted, so this type of activity will not be successful.
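Binding the original query parameters to the captcha can be sketched by fingerprinting them when the challenge is issued and re-checking on submission. This is a hypothetical Python illustration, not the product's mechanism:

```python
import hashlib
from urllib.parse import urlencode

def bind_params(params: dict) -> str:
    """Fingerprint the original query parameters when the captcha is issued."""
    canonical = urlencode(sorted(params.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

def params_unchanged(params: dict, fingerprint: str) -> bool:
    """On resubmission, any modified parameter changes the fingerprint."""
    return bind_params(params) == fingerprint
```

Sorting the items first makes the fingerprint independent of parameter order, so only an actual value change (such as a swapped product ID) causes a mismatch.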
6.6.3.2.18. Captcha Request Replay Attack Complexity: Suspicious (1.0)
Default Response: 5x = Multiple Captcha Replay Incident.
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation227" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user attempts to submit a captcha solution multiple times and "replay" is explicitly disabled for the captcha being used.
Behavior: Because captchas prevent automation, attackers will sometimes try to find ways to abuse the technique used to request the captcha in order to exploit the site. For example, if the attacker can find a way to submit the same solution over and over again, they may be able to solve the captcha once and still automate the resulting workflow. This is sometimes considered legitimate behavior (as would be expected if the user refreshed the browser after submitting a successful captcha); however, in many cases, such functionality would make the captcha significantly less effective at preventing automation. In this case, the attacker resubmitted a request that had already been successfully validated through a captcha, and "replay" was explicitly disabled for the captcha. This is not necessarily a malicious incident on its own, because the user may have accidentally refreshed the browser; however, multiple attempts would definitely represent malicious intent. An example of where a captcha's "replay" could cause a problem is on a gaming site, where the user is adding fake "money" to their account. In order to add the fake money, they must solve the captcha. This workflow is protected with a captcha, because if a user could automate the process, they would be able to add unlimited funds to their account. If an attacker were able to solve the captcha once, and continuously resubmit the resulting request, they could effectively add funds over and over again without solving a new captcha. This would then allow for automation. Replay attacks are less of a problem if the web application being protected already has a method of preventing the same request from being submitted accidentally multiple times. Such would be the case if the web application maintained state information for the given session, and recorded the operation after it was successful, then used that state information to prevent a future occurrence of the operation.
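Replay prevention is commonly implemented with a single-use nonce attached to each solved captcha. A minimal sketch of that general technique (an illustration, not Mykonos internals):

```python
used_nonces = set()  # nonces already redeemed; kept server-side

def redeem(nonce: str) -> bool:
    """Each solved captcha carries a nonce that can be spent exactly once."""
    if nonce in used_nonces:
        return False  # replay -> "Captcha Request Replay Attack" incident
    used_nonces.add(nonce)
    return True
```

A production version would also expire old nonces so the set does not grow without bound.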
6.6.3.2.19. Multiple Captcha Replays Complexity: Low (2.0)
Default Response: 1x = Warn User, 2x = 1 Day Strip Inputs
227 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
Cause: A captcha is a special technique used to differentiate between human users, and automated scripts. This is done through a Turing test, where the user is required to visually identify characters in a jumbled image and transcribe them into an input. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed. Additionally, an audio version is optionally available to allow users who have a visual handicap to complete the captcha successfully. Captchas are used in two different ways by the system. They can be explicitly added to any workflow within the protected web application (such as requiring a captcha to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). Captchas are generally used to resolve "Insufficient Anti-Automation228" weaknesses in the protected web application. Regardless of which type of captcha is being used, this incident is generated when the user repeatedly attempts to resubmit captcha solutions while "replay" is explicitly disabled for the captchas being used.
Behavior: Because captchas prevent automation, attackers will sometimes try to find ways to abuse the technique used to request the captcha in order to exploit the site. For example, if the attacker can find a way to submit the same solution over and over again, they may be able to solve the captcha once and still automate the resulting workflow. This is sometimes considered legitimate behavior (as would be expected if the user refreshed the browser after submitting a successful captcha); however, in many cases, such functionality would make the captcha significantly less effective at preventing automation. In this case, the attacker resubmitted a request that had already been successfully validated through a captcha, and "replay" was explicitly disabled for the captcha. This is not necessarily a malicious incident on its own, because the user may have accidentally refreshed the browser; however, multiple attempts would definitely represent malicious intent. An example of where a captcha's "replay" could cause a problem is on a gaming site, where the user is adding fake "money" to their account. In order to add the fake money, they must solve the captcha. This workflow is protected with a captcha, because if a user could automate the process, they would be able to add unlimited funds to their account. If an attacker were able to solve the captcha once, and continuously resubmit the resulting request, they could effectively add funds over and over again without solving a new captcha. This would then allow for automation. Replay attacks are less of a problem if the web application being protected already has a method of preventing the same request from being submitted accidentally multiple times. Such would be the case if the web application maintained state information for the given session, and recorded the operation after it was successful, then used that state information to prevent a future occurrence of the operation.
6.6.3.2.20. Multiple Captcha Disallow Multipart Complexity: Low (2.0)
Default Response: 1x = 1 Day Strip Inputs
Cause: A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a special technique used to differentiate between human users and automated scripts. The user is required to visually identify characters in a jumbled image and transcribe them into a text box. An audio version is also available for users with a visual handicap. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed.
CAPTCHAs are used in two different ways by the System. They can be explicitly added to any workflow within the protected web application (such as requiring a CAPTCHA to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the
228 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
user, but with a way for the user to unblock themselves if they can prove they are not an automated script). CAPTCHAs are generally used to resolve "Insufficient Anti-Automation229" weaknesses in the protected web application.
Regardless of which type of CAPTCHA is being used, this incident is generated when the user attempts to submit dozens of CAPTCHA-protected requests that contain binary files, and the CAPTCHAs are explicitly configured to not allow binary file submission (they have been configured to disallow multi-part form submissions).
Behavior: When a hacker is attempting to establish an automated script that is capable of defeating the CAPTCHA, they may use various techniques. One of these techniques is to try changing various values used by the web application in the CAPTCHA mechanism in an effort to see if an error can be generated, or an unexpected outcome can be achieved. This type of probing and reverse-engineering is generally performed by advanced hackers.
In this specific case, the attacker submitted dozens of binary files in the requests that are being protected. The CAPTCHA in this case has been explicitly configured to not allow Multi-Part form submissions, so this represents unexpected and undesired activity. Using Multi-Part forms, the attacker can more easily accomplish a "Buffer Overflow230" attack, which would produce potentially sensitive error data and possibly open the server up to further exploitation. Additionally, some web applications do not handle the encoding used for multi-part forms gracefully, so error information may also be obtained from conflicts arising from the submission type. Because this is happening so frequently from the same user, it is also possible that the user is attempting to execute a "Denial of Service231" attack.
6.6.3.2.21. Multiple Captcha Parameter Manipulation Complexity: Low (2.0)
Default Response: 1x = Warn User, 2x = 1 Day Strip Inputs
Cause: A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a special technique used to differentiate between human users and automated scripts. The user is required to visually identify characters in a jumbled image and transcribe them into a text box. An audio version is also available for users with a visual handicap. If the user is unable to complete the challenge in a reasonable amount of time, they are not allowed to proceed with their original request. Because it is nearly impossible to script the deciphering of the image, automated scripts generally get stuck and cannot proceed.
CAPTCHAs are used in two different ways by the System. They can be explicitly added to any workflow within the protected web application (such as requiring a CAPTCHA to login, or checkout a shopping cart), and they can be used to test a suspicious user before allowing them to continue using the site (similar to blocking the user, but with a way for the user to unblock themselves if they can prove they are not an automated script). CAPTCHAs are generally used to resolve "Insufficient Anti-Automation232" weaknesses in the protected web application.
Regardless of which type of CAPTCHA is being used, this incident is generated when the user attempts to submit multiple solutions for multiple CAPTCHAs, but they keep modifying the query parameters that were submitted with the original requests. For example, if the user submitted an "add product to cart" request, and one of the query parameters was the item to add, this incident would be triggered if, after solving the CAPTCHA, the value of that query parameter was modified to some other value, and this modification happened dozens of times.
229 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
230 http://projects.webappsec.org/w/page/13246916/Buffer%20Overflow
231 http://projects.webappsec.org/w/page/13246921/Denial%20of%20Service
232 http://projects.webappsec.org/w/page/13246938/Insufficient%20Anti-automation
Behavior: Because CAPTCHAs prevent automation, attackers will sometimes try to find ways to abuse the technique used to request the CAPTCHA in order to exploit the site. For example, if the attacker can find a way to submit the same solution over and over again, but have the web application perform a different action each time, they may be able to solve the CAPTCHA once, and still automate the resulting workflow.
In this case, the attacker changed many query parameters on many different requests that were protected with a CAPTCHA. They submitted the original request, solved the CAPTCHA, changed the original query parameters, and then resubmitted the solved CAPTCHA request. In some cases, this might cause the web application to execute a different operation based on the difference in query parameter values. For example, if the protected workflow is "add product to cart" on a shopping site, then the attacker might attempt to submit the same solved CAPTCHA repeatedly, but change the product ID that is being added on each request. This might allow them to automate the addition of products to a shopping cart, after solving only one CAPTCHA challenge.
The CAPTCHA mechanism does not allow the modification of query parameters after the original request has been submitted, so this type of activity will not be successful.
This is not considered malicious activity right away, because it is possible that a user may accidentally modify a query parameter; however, when this incident is triggered, it represents a user who has modified dozens of different query parameters on different CAPTCHA-protected pages.
6.6.4. CSRF Processor The CSRF processor is responsible for ensuring that the protected website does not allow a cross-site request forgery (CSRF) attack. CSRF attacks are a type of session hijacking, where a malicious website redirects a user to a sensitive service call on the target website. For example, a user might visit a malicious website that has an image tag pointed to the "deleteAccount" service running on a target website. When a user visits the malicious website, they are unknowingly calling the "deleteAccount" operation. If they had an active session on the target site, their account would be deleted.
This processor works by intercepting any request that could potentially be part of a CSRF attack. This is determined by looking at the referrer header being passed in by the client. The referrer header tells the server where the user came from. If the user is navigating around the actual website legitimately, they will have a referrer header on nearly all requests they make which will match the domain of the site they are navigating. If the user types the URL in manually, or follows a link from another site, they will not have a referrer. If it's a CSRF attack, there will either be no referrer, or a 3rd party domain in the referrer.
In all cases where the referrer does not match the domain of the protected site, a special redirection page will be returned to the client instead of the request they actually asked for. The redirection page will check to make sure the user is not a victim of a CSRF attack, and if they are not, it will automatically redirect the user to the original page they requested.
This processor only protects clients whose "user-agent" headers match that of a known browser. This is because CSRF attacks are specifically targeted at average web users, who generally stick to the major browsers; spiders and scripts will therefore bypass the CSRF processor's detection/protection mechanism. This processor also detects the case where a user has turned off referrers (and thus no requests will contain a referrer), and in that case it will turn off CSRF protection for the client. As such, a user who has disabled referrers will still be susceptible to CSRF, but that should be a very small percentage (if not zero) of the overall user pool.
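The referrer-based decision described above can be summarized in a short sketch. This is a simplified illustration, not the product's implementation; the protected domain and the browser test used here are placeholder assumptions.

```python
from typing import Optional
from urllib.parse import urlparse

PROTECTED_DOMAIN = "www.example.com"  # stand-in for the protected site's domain

def needs_csrf_interception(referrer: Optional[str], user_agent: str) -> bool:
    """Decide whether a request should receive the CSRF interception page."""
    # Only clients that look like browsers are protected; spiders and
    # scripts bypass the mechanism entirely.
    if "Mozilla" not in user_agent:
        return False
    if not referrer:
        # Typed-in URL, bookmark, or referrers disabled: CSRF cannot be
        # ruled out from the referrer alone.
        return True
    # A referrer from any other domain marks a potential CSRF request.
    return urlparse(referrer).netloc != PROTECTED_DOMAIN

assert needs_csrf_interception("http://evil.example/x", "Mozilla/5.0")
assert not needs_csrf_interception("http://www.example.com/a", "Mozilla/5.0")
assert not needs_csrf_interception(None, "python-requests/2.28")
```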
In the event that a user issues a request that cannot be validated as not a CSRF attack, the user will not be automatically redirected. Instead, they will be presented a "This page has moved" response, and will be asked to click a link to continue to the page they actually wanted. The link to proceed is randomly positioned on the
page to prevent Click Jacking attacks (where a malicious site overlays legitimate content on top of the target site and gets the user to click the legitimate content, while also hijacking the click to transparently activate the content underneath). A special case involves when a 3rd party website opens the target site in a new window or tab. If the 3rd party site retains ownership of the newly opened window or tab, the user will be asked to click the "continue" link so that the original window can be closed and a new window can be opened in its place. This action breaks the ownership and prevents the 3rd party website from performing actions on the window (such as closing or redirecting it).
Because it is sometimes expected that a 3rd party site will be making calls into the target site, it is possible to configure a list of "trusted" 3rd party sites. Any requests issued from a trusted domain will not be protected against CSRF. This allows the trusted site to host the target site in an IFRAME or make service calls unimpeded. Be careful who you add to the trusted domain list, because if the trusted domain is susceptible to XSS or CSRF itself, then it can be used as a proxy to launch a CSRF attack against your protected sites. This trust does not apply if the hosting domain is running over SSL, and the target domain is not running over SSL. If the 3rd party page hosting an IFRAME of the target site is running in SSL, it must load the SSL version of the target site, otherwise the CSRF protection will still be applied. It is however fine if the 3rd party site is not SSL protected and the target site is SSL protected.
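The leading-period wildcard matching used for trusted domains can be sketched as follows; `is_trusted` is a hypothetical helper written to mirror the behavior described for the Trusted Domains parameter, not part of the product.

```python
def is_trusted(referrer_host: str, trusted: list) -> bool:
    """Match a referrer host against the trusted-domain list. Entries that
    start with a period match the bare domain and any subdomain of it."""
    for entry in trusted:
        if entry.startswith("."):
            base = entry[1:]
            # ".site.com" matches "site.com" itself and any subdomain.
            if referrer_host == base or referrer_host.endswith(entry):
                return True
        elif referrer_host == entry:
            return True
    return False

trusted = [".site.com"]
assert is_trusted("site.com", trusted)
assert is_trusted("www.site.com", trusted)
assert is_trusted("my.new.site.com", trusted)
assert not is_trusted("evilsite.com", trusted)  # suffix must be a subdomain
```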
6.6.4.1. Configuration
Table 6.19. CSRF Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor.

Advanced
• Block Response (Configurable HTTP Response): The response to return if the CSRF mechanism cannot complete the request due to errors or tampering.
• CSRF Nonce Salt (String, default: Random): A 256 character random string used to ensure that CSRF nonce tokens are generated differently between different deployments.
• CSRF Token Name (String, default: Random): The name of the query string parameter used to indicate a successfully validated request after it has been determined that it is not a CSRF attack. Select a name that will not conflict with a real query parameter used by the site.
• Ignore Scripts (Boolean, default: True): CSRF is largely a browser based attack, so to ensure that scripts such as legitimate spiders are not treated as potential CSRF victims, this option can be enabled to ignore all non-browsers for CSRF protection.
• Ignored Extensions (Collection, default: .xap, .xaml): A list of file suffixes (extensions) that will not be protected by CSRF. By default, Silverlight binaries are included, because some browsers will remove the referrer for Silverlight embedded content, which may interfere with CSRF protection and prevent the Silverlight content from loading.
• Remote Script Resource (Configurable Resource, default: CSRF Script Inclusion): The fake resource to request if the page is being loaded as a remote script on a third party domain. This is primarily for detection of the attack and can be any fake resource as long as it does not actually exist on the server.
• Trusted Domains (Collection, default: None): The list of domains that are allowed to display the web application in a frame, reference resources such as images or scripts, or make remote API calls using techniques that are similar to a CSRF attack. If the trusted domain starts with a period, then it will match any subdomain before the designated period. For example, .site.com will match www.site.com, my.new.site.com, and site.com.
• CSRF Extra JavaScript (String, default: None): Since CSRF protection may cause the referrer to be removed from the request, it may be necessary to add analytics code to the JavaScript used to detect and stop CSRF attacks. As such, if you use a 3rd party analytics script, you should put that code in this parameter to capture the unmodified original request details. The code will be injected into a script tag, so it must be valid JavaScript or the CSRF protection may stop functioning correctly.
• Incident: CSRF Parameter Tampering (Boolean, default: True): The user tampered with the parameters used by the security engine to prevent CSRF on requests that have an untrusted 3rd party referer. This is likely an attempt to find a vulnerability in the CSRF protection mechanism.
• Incident: CSRF Remote Script Inclusion (Boolean, default: False): The user has accessed an untrusted 3rd party website which contains an embedded script reference to the protected application. While the user may not be malicious, this represents a CSRF attack from the untrusted website against the protected application. Because the attack was not successful, it is likely being executed by the user who is attempting to construct the attack vector.
• Incident: HTTP Referrers Disabled (Boolean, default: True): The user is using what looks like a browser, but they have HTTP referrers disabled. This is not a malicious incident, but it does indicate an unusual client.
6.6.4.2. Incidents
6.6.4.2.1. CSRF Parameter Tampering Complexity: Suspicious (1.0)
Default Response: 10x = Multiple CSRF Parameter Tampering Incident.
Cause: MWS protects against CSRF attacks by using a special interception technique. When a request comes in to MWS, the referrer is checked. In the event that there is a 3rd party referrer (the user was following a link from another site), the interception mechanism kicks in. This involves returning a special page to the user that validates that the user is intentionally requesting the resource. If the validation is successful, the user is transparently redirected to the original resource they requested. If the validation fails, the user is then instructed to manually confirm their intentions, or return to the page they came from (to prevent the CSRF attack from working). In most cases, a valid CSRF attack would function in such a way as to hide this manual confirmation step, so the user would probably never see it (e.g. if the URL was loaded using an image HTML tag, then the resulting HTML confirmation step would not render, because it's HTML, not an image). This incident is triggered when a user submits a request with a 3rd party referrer, and then manipulates the code of the CSRF interception page to alter the original data that was submitted. For example, they submit a request that looks like a CSRF attack (it has a 3rd party referrer), and then use a tool like Firebug to edit the query string parameters that would be sent to the server after they manually allowed the request on the CSRF intercept page.
Behavior: CSRF attacks are generally two-phase. The first phase involves the attacker establishing a functional CSRF attack. This could take quite a while and involves the attacker making requests to the protected site, trying all different types of CSRF techniques. The second phase is when the attacker injects the successful CSRF vector into a public website. In the second phase, legitimate users are visiting the public website and unknowingly executing the CSRF attack in the background. It is not useful to flag the victims of the CSRF attack as hackers, because they may not even know what is going on. However, it is useful to flag the original attack vector establishment, because it may shed light on who created the CSRF [233] attack. This incident reflects a user who is manipulating the CSRF prevention mechanism, likely in an attempt to find a way to get around it. As such, if a user has this incident, they are probably trying to establish a CSRF attack, and careful attention should be paid to the values they are changing the parameters to and which URL is being requested (this will help identify what the user is trying to attack).
6.6.4.2.2. Multiple CSRF Parameter Tampering Complexity: Low (2.0)
Default Response: 1x = Captcha, 2x = 1 Day Strip Inputs
Cause: MWS protects against CSRF attacks by using a special interception technique. When a request comes in to MWS, the referrer is checked. In the event that there is a 3rd party referrer (the user was following a link from another site), the interception mechanism kicks in. This involves returning a special page to the user that validates that the user is intentionally requesting the resource. If the validation is successful, the user is transparently redirected to the original resource they requested. If the validation fails, the user is then instructed to manually confirm their intentions, or return to the page they came from (to prevent the CSRF attack from working). In most cases, a valid CSRF attack would function in such a way as to hide this manual confirmation step, so the user would probably never see it (e.g. if the URL was loaded using an image HTML tag, then the resulting HTML confirmation step would not render, because it's HTML, not an image). This incident is triggered when a user submits dozens of requests with 3rd party referrers, and
[233] http://projects.webappsec.org/Cross-Site-Request-Forgery
then manipulates the code of the CSRF interception page to alter the original data that was submitted. For example, they submit a bunch of requests that look like CSRF attacks (they have 3rd party referrers), and then use a tool like Firebug to edit the query string parameters that would be sent to the server after they manually allowed the requests on the CSRF intercept page.
Behavior: CSRF attacks are generally two-phase. The first phase involves the attacker establishing a functional CSRF attack. This could take quite a while and involves the attacker making requests to the protected site, trying all different types of CSRF techniques. The second phase is when the attacker injects the successful CSRF vector into a public website. In the second phase, legitimate users are visiting the public website and unknowingly executing the CSRF attack in the background. It is not useful to flag the victims of the CSRF attack as hackers, because they may not even know what is going on. However, it is useful to flag the original attack vector establishment, because it may shed light on who created the CSRF [234] attack. This incident reflects a user who is manipulating the CSRF prevention mechanism, likely in an attempt to find a way to get around it. As such, if a user has this incident, they are probably trying to establish a CSRF attack, and careful attention should be paid to the values they are changing the parameters to and which URL is being requested (this will help identify what the user is trying to attack).
6.6.4.2.3. CSRF Remote Script Inclusion Complexity: Informational (0.0)
Default Response: None.
Cause: MWS protects against CSRF attacks by using a special interception technique. When a request comes in to MWS, the referrer is checked. In the event that there is a 3rd party referrer (the user was following a link from another site), the interception mechanism kicks in. This involves returning a special page to the user that validates that the user is intentionally requesting the resource. If the validation is successful, the user is transparently redirected to the original resource they requested. If the validation fails, the user is then instructed to manually confirm their intentions, or return to the page they came from (to prevent the CSRF attack from working). In most cases, a valid CSRF attack would function in such a way as to hide this manual confirmation step, so the user would probably never see it (e.g. if the URL was loaded using an image HTML tag, then the resulting HTML confirmation step would not render, because it's HTML, not an image). This incident is triggered when a user accesses a page on a 3rd party website which contains a JavaScript tag that loads content from the protected site. This would normally represent a victim of a CSRF attack, but because CSRF attacks are blocked, an attacker is unlikely to execute such an attack. Therefore, it is more probable that the attacker is testing a possible vector to see if it will work and encountering this incident.
Behavior: CSRF attacks are generally two-phase. The first phase involves the attacker establishing a functional CSRF attack. This could take quite a while and involves the attacker making requests to the protected site, trying all different types of CSRF techniques. The second phase is when the attacker injects the successful CSRF vector into a public website. In the second phase, legitimate users are visiting the public website and unknowingly executing the CSRF attack in the background. It is not useful to flag the victims of the CSRF attack as hackers, because they may not even know what is going on. However, it is useful to flag the original attack vector establishment, because it may shed light on who created the CSRF [235] attack. While this incident would potentially be fired for any victims of a CSRF attack, CSRF attacks are blocked by this processor, so it is unlikely that an attacker would ever actually try to use the vector against legitimate users. As such, it is far more likely that the attacker is still in the first phase and trying to uncover a successful CSRF vector. Because of this, careful attention should be paid to the URL that is being requested (this will help identify what the user is trying to exploit).
[234] http://projects.webappsec.org/Cross-Site-Request-Forgery
[235] http://projects.webappsec.org/Cross-Site-Request-Forgery
6.6.4.2.4. HTTP Referers Disabled Complexity: Suspicious (1.0)
Default Response: None.
Cause: The HTTP protocol provides support for a special header called the "referer" (misspelled on purpose). This header tells the web server where the user just came from. So if the user visits Google and follows a link from Google to get to another page, the request for that second page will contain a "referer" of "http://www.google.com". Some browsers provide the option to turn off automatic transmission of the "referer" header. This would make it impossible for websites to identify the page the user came from. This incident is triggered whenever a user accesses the website with referers disabled. This is not necessarily a malicious act, as it could be the result of an excessively paranoid legitimate user, but it is also somewhat unusual and is often a technique employed by malicious users.
Behavior: Hackers will often disable the referer header to make it more difficult to monitor and analyze an attack through the traditional HTTP log files. Many web servers will record the URL the user is accessing, as well as the referer that was submitted. As such, by disabling referers, the hacker is able to eliminate a large percentage of the information collected about the attack.
6.6.5. Header Injection Processor This processor provides the header injection counter response. It allows a custom header to be defined that is injected into a suspected hacker's requests to allow custom handling.
Note There are no actual triggers for this processor; it is a form of response.
6.6.5.1. Configuration
Table 6.20. Header Injection Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor.

Advanced
• Default Header Name (String, default: Random): The default header name to use if one is not specified in the response configuration.
• Default Header Value (String, default: True): The default header value to use if one is not specified in the response configuration.
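A minimal sketch of the injection behavior, assuming a hypothetical `X-Suspect` header name (the real header name and value are configurable, as described above):

```python
# Hypothetical default; the real processor's header name/value are configurable.
DEFAULT_HEADER = ("X-Suspect", "true")

def inject_header(request_headers: dict, is_suspect: bool,
                  header=DEFAULT_HEADER) -> dict:
    """Copy the request headers, adding the marker header for suspected
    attackers so a backend can apply custom handling."""
    out = dict(request_headers)
    if is_suspect:
        name, value = header
        out[name] = value
    return out

clean = inject_header({"Host": "example.com"}, is_suspect=False)
flagged = inject_header({"Host": "example.com"}, is_suspect=True)
assert "X-Suspect" not in clean
assert flagged["X-Suspect"] == "true"
```

The backend application can then branch on the presence of this header, for example to log the session or serve decoy content.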
6.6.6. Force Logout Processor This processor provides the force logout counter response. It strips out and invalidates the user's session tokens, logging them out of the site.
Note There are no actual triggers for this processor - it is a form of response.
6.6.6.1. Configuration
Table 6.21. Force Logout Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor.
• Application Session Cookie (Collection): A collection of names to use for the application session cookie.

Advanced
• Clear Session Cookies (Boolean, default: False): Whether to clear any terminated session cookies from the malicious user's browser. This may help the user identify why they are getting logged off, so unless the application has code on the client that reads the session cookie value, or the cookie is used in traffic not protected by the Mykonos system, this option should be turned off.
6.6.7. Strip Inputs Processor This processor is used to transparently remove all user input from requests being issued to the server. This response will make the web application, or the client accessing it, appear broken from the user's perspective. The website will also present a much smaller attack surface should the client be a vulnerability scanner.
Note There are no actual triggers for this processor; it is a form of response.
6.6.7.1. Configuration
Table 6.22. Strip Inputs Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor.
6.6.8. Slow Connection Processor The slow connection processor is designed to introduce large delays in requests issued by malicious traffic without impacting the performance of legitimate users.
Note There are no actual triggers for this processor; it is a form of response.
6.6.8.1. Configuration
Table 6.23. Slow Connection Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor.

Advanced
• Default Maximum Delay (Integer, default: 5 seconds): The default maximum number of milliseconds to delay malicious requests.
• Default Minimum Delay (Integer, default: 500 milliseconds): The default minimum number of milliseconds to delay malicious requests.
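The delay behavior can be sketched as a wrapper that holds a flagged request for a random interval within the configured bounds before serving it; `throttle` and its parameter names are illustrative only, not the product's internals.

```python
import random
import time

def throttle(handler, malicious, min_delay_ms=500, max_delay_ms=5000):
    """Serve the request normally, but hold a malicious client's request
    for a random interval between the configured minimum and maximum."""
    if malicious:
        time.sleep(random.uniform(min_delay_ms, max_delay_ms) / 1000.0)
    return handler()

# Legitimate traffic passes through with no added latency.
assert throttle(lambda: "200 OK", malicious=False) == "200 OK"
# A flagged session still gets its response, just slowly.
assert throttle(lambda: "200 OK", malicious=True,
                min_delay_ms=1, max_delay_ms=2) == "200 OK"
```

Randomizing the delay between the two bounds makes the slowdown harder for an automated tool to distinguish from ordinary network latency.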
6.6.9. Warning Processor The warning processor is designed to allow a warning message to be presented to a user without completely blocking site access. The warning processor only enables the ability to respond to a user with a "warning", which allows them to continue browsing the page and the site. The warning would be created and activated for a user by the auto response system, or manually from the console. The processor overlays semi-transparent HTML elements on top of the entire webpage, which temporarily disables any mouse or keyboard input on the page, thereby creating a "modal dialog" effect. This processor isn't designed to completely stop an attacker from using the website; it is there to warn them. Given the browser debugging tools available today, an attacker may be able to dismiss the warning by means of such tools. Any tampering with the warning's default dismissal behavior (waiting the configured delay until the dismissal button is automatically enabled and clicking it) will be considered an incident and will be tracked.
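A server-side check for early dismissal might look like the following sketch. The function name is hypothetical, and the delay value follows the processor's configurable Dismissal Delay setting.

```python
DISMISSAL_DELAY = 10  # seconds; the processor's configurable Dismissal Delay

def dismissal_is_tampered(warning_shown_at, dismissed_at):
    """A dismissal arriving before the delay has elapsed means the client
    bypassed the disabled button, warranting a Warning Code Tampering
    incident on the session."""
    return (dismissed_at - warning_shown_at) < DISMISSAL_DELAY

assert dismissal_is_tampered(1000.0, 1003.0)      # dismissed after only 3 s
assert not dismissal_is_tampered(1000.0, 1012.0)  # waited the full delay
```

Because the check compares server-side timestamps, re-enabling the button with browser tools does not let the client dismiss the warning early without being flagged.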
6.6.9.1. Configuration
Table 6.24. Warning Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor.

Advanced
• Default Warning Message (String, default: "Your connection has been detected performing suspicious activity. Your traffic is now being monitored."): The default message to use in the warning dialog. This can be defined on a session by session basis, but if no explicit value is assigned to the warning, this value will be used.
• Default Warning Title (String, default: Security Warning): The default title to use in the warning dialog. This can be defined on a session by session basis, but if no explicit value is assigned to the warning, this value will be used.
• Dismissal Delay (Integer, default: 10 seconds): The amount of time in seconds that must elapse before the warning can be dismissed. This is a soft limit, as an experienced user may be able to get around enforcement measures.
• Dismissal Resource (Configurable, default: Random): The information needed to define the URL and response used to dismiss a warning.
• Warning Directory (String, default: Random): The name of the directory where the warning JavaScript and CSS code will be served from. For example: warningcode.
• Incident: Warning Code Tampering (Boolean, default: True): The user has attempted to dismiss the warning without waiting the delay and using the provided mechanism. This is probably an attack on the warning system.
6.6.9.2. Incidents
6.6.9.2.1. Warning Code Tampering Complexity: Medium (3.0)
Default Response: 1x = Logout User, 2x = 5 Day Strip Inputs.
Cause: MWS is capable of issuing non blocking warning messages to potentially malicious users. These warning messages are designed to force the user to wait for a period of time, before they can dismiss the warning and continue using the site. If the user attempts to exploit or bypass this delay mechanism in order to dismiss the warning early, this incident will be triggered.
Behavior: Once a hacker has been warned, they are then aware that a security system is monitoring their activity. This may cause some hackers to investigate what might be protecting the site. This could involve additional scanning, or it could involve attacking the warning mechanism directly. This type of behavior generally indicates a hacker with moderate to advanced skill levels. Depending on what they modify the warning code input to be, this could represent a simple exploratory test, or the user could be trying to launch a more complex attack against the warning code handler itself, such as "Buffer Overflow" [236], "XSS" [237], "Denial of Service" [238], "Fingerprinting" [239], "Format String" [240], "HTTP Response Splitting" [241], "Integer Overflow" [242], and "SQL Injection" [243], among many others.

[236] http://projects.webappsec.org/Buffer-Overflow
[237] http://projects.webappsec.org/Cross-Site+Scripting
6.6.10. Application Vulnerability Processor The application vulnerability processor is designed to block known attack vectors for select 3rd party applications. By default this processor does nothing. If you host a 3rd party application such as WordPress, you should enable the configuration parameters that represent the 3rd party software you are using. This will enable protection for that software component.
6.6.10.1. Configuration
Table 6.25. Application Vulnerability Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor. Note that even though traffic passes through the processor, it may not actually do anything unless specific 3rd party application protection is enabled.
• Joomla Vulnerability Protection Enabled (Boolean, default: False): Whether traffic should be analyzed for Joomla vulnerabilities.
• PHPBB Vulnerability Protection Enabled (Boolean, default: False): Whether traffic should be analyzed for PHPBB vulnerabilities.
• Wordpress Vulnerability Protection Enabled (Boolean, default: False): Whether traffic should be analyzed for Wordpress vulnerabilities.

Advanced
• Mode of Operation (Integer, default: 1): Whether to block a request on a positive signature, or just create an incident.
• Block Response (HTTP Response, default: 404 Error): The response to return when a request is blocked due to a positive signature match.
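Conceptually, the processor behaves like a per-application signature match over incoming requests, consulted only for the applications whose protection flags are enabled. The sketch below uses made-up patterns purely for illustration; it is not the product's signature set or matching engine.

```python
import re

# Illustrative patterns only, not the product's actual signature set.
SIGNATURES = {
    "wordpress": [re.compile(r"/wp-admin/.+\.php\.bak$")],
    "joomla": [re.compile(r"/index\.php\?option=com_.*%27")],
}

def match_vulnerability(path, enabled_apps):
    """Return the 3rd party application whose known attack vector the
    request path matches, or None. Only enabled signature sets are
    consulted, mirroring the per-application enable flags above."""
    for app in enabled_apps:
        for signature in SIGNATURES.get(app, []):
            if signature.search(path):
                return app
    return None

assert match_vulnerability("/wp-admin/install.php.bak", {"wordpress"}) == "wordpress"
assert match_vulnerability("/wp-admin/install.php.bak", set()) is None
```

Depending on the Mode of Operation setting, a positive match would either block the request with the configured Block Response or simply record an incident.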
6.6.10.2. Incidents
6.6.10.2.1. App Vulnerability Detected Complexity: Low (2.0)
[238] http://projects.webappsec.org/Denial-of-Service
[239] http://projects.webappsec.org/Fingerprinting
[240] http://projects.webappsec.org/Format-String
[241] http://projects.webappsec.org/HTTP-Response-Splitting
[242] http://projects.webappsec.org/Integer-Overflows
[243] http://projects.webappsec.org/SQL-Injection
Default Response: 1x = Slow Connection 2-6 seconds, 3x = Slow Connection 2-6 seconds and Strip Inputs for 1 day
Cause: The application vulnerability processor is designed to identify known attack vectors issued to 3rd party applications such as WordPress. This incident indicates that one of those known attack vectors has been issued by the associated user. The exact nature of the vector that was identified should be described in the incident details.
Behavior: One of the easiest ways to compromise a website is to look for 3rd party web applications such as WordPress. If one is found, the attacker can then look up any known vulnerabilities in that software and the version of it that is running on the website. If they find vulnerabilities, they can then launch them and potentially compromise the site within minutes with minimal effort.
6.6.11. Support Processor When a user is blocked or otherwise responded to using one of the counter measures, this processor provides a way to identify which profile is associated with a user, and to then allow those responses to be deactivated at the discretion of the IT administrator. For example, if a user were to get a 404 error when asking for a PDF document linked from the main site, and they then try to find the file by trying a bunch of different file names, they may eventually get blocked for performing a directory enumeration attack. When this happens, the blocked user may contact support for assistance getting access to the site again.
This processor works by exposing a special administrative URL (defined in configuration) which the support team can access. When a support request comes in from a blocked user, the support representative can access this administrative URL, which will provide another URL. The support representative should then provide this second URL to the affected user. The affected user can then visit that URL and get a special code. This code can be used to search for the profile and deactivate responses in the Security Monitor (profile list).
If the affected user gets a code of "00000000000000000000" (all zeros), this means that the user is not identified as an attacker and therefore is not being blocked or responded to with a counter response from Mykonos. As such, other causes of the user's inability to access the site should be investigated.
DO NOT GIVE OUT THE ADMINISTRATIVE URL! It is only used to get a fresh URL that is safe to provide to the affected user. If the administrative URL is leaked to the public, it should be changed immediately.
So the overall workflow is as follows: 1. User is blocked or otherwise responded to with a counter measure.
2. User calls support for assistance.
3. Support accesses the administrative URL.
4. Support copies the newly created URL in the response and provides it to the affected user.
5. The affected user accesses the newly created URL and provides the resulting code to support.
6. Support or an Admin then logs into the security monitor, clicks on the profile graph to get a list of profiles, and then searches for the code.
7. Support or Admin reviews user's list of incidents to verify the user was responded to in error. If so, the Support or Admin disables the responses.
Note that the "block" response is, by default, configured to return the code. So if a user has been blocked, steps 3-5 can be omitted, and the user can simply provide the code specified in the block message to support. For all other responses, the full workflow needs to be followed, because there is no other way to obtain the code.
6.6.11.1. Configuration
Table 6.26. Support Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor.

Advanced
• Private Support URL (String, default: Random): The URL a support representative would access to get additional details about how to provide support to users who are having issues that may be Mykonos related. If the value is "ABC", then the private URL would be http://www.example.com/ABC. It is absolutely imperative that this URL not be leaked to non-internal users. If it is leaked, it must be changed immediately.
• Public Support URL Salt (String, default: Random): A random value used to ensure that support URLs are not predictable. This can be any random string 30 characters in length.
• Public URL Expiration (Integer, default: 3): The number of days a public support URL remains valid. After this many days, the URL will no longer provide support information. This is to prevent any issues from a public support URL being leaked.
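The salt and expiration parameters suggest a scheme in which the public URL is derived from the salt and the current date, so that leaked URLs age out automatically. The following sketch is a plausible illustration under that assumption, not the product's actual algorithm.

```python
import hashlib
import time

SALT = "30-char-random-deployment-salt"  # hypothetical salt value

def public_support_token(day=None):
    """Derive an unpredictable, date-scoped path segment for the public
    support URL; it changes daily, so leaked URLs expire on their own."""
    day = int(time.time() // 86400) if day is None else day
    return hashlib.sha256(f"{SALT}:{day}".encode()).hexdigest()[:20]

def token_is_valid(token, expiration_days=3, day=None):
    """Accept tokens generated within the configured expiration window."""
    day = int(time.time() // 86400) if day is None else day
    return any(token == public_support_token(day - d)
               for d in range(expiration_days))

t = public_support_token(day=19000)
assert token_is_valid(t, day=19000)
assert token_is_valid(t, day=19002)      # still inside the 3-day window
assert not token_is_valid(t, day=19005)  # expired
```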
6.6.12. Cloppy Processor The Cloppy processor is a joke response built for demonstration purposes. It creates an animated paperclip in the lower right corner of the website, which belittles and taunts the attacker. This should never be used on a legitimate threat and is not the default counter response for any type of behavior. It is provided to demonstrate the diversity of counter responses Mykonos is capable of. You should never activate this response unless you have a good relationship with the user you are activating it on, and they have a good sense of humor.
You can configure the message and options Cloppy presents both in configuration (the default messages) and in the response-specific config (the XML you define when you manually activate a response or when you write a rule that activates a response). The oldest Cloppy response will be the one for which the messages are loaded, so if you create multiple Cloppy responses, you can create a dialog of several messages. For example, try activating Cloppy three times with the following config values (create them in the following order):
1. Activate Cloppy:
2. Activate Cloppy:
3. Activate Cloppy:
Once you activate the above three Cloppy responses, you should see that Cloppy presents the "This is the first message" dialog first. Once you click on an option in that dialog, the next page you load will display "This is the second message", and finally, after clicking on one of those options, you should get "This is the third message".
Once you click an option in Cloppy's dialog, it will dismiss that specific Cloppy response. That's why you are able to stack the responses and get a dialog going.
6.6.12.1. Configuration
Table 6.27. Cloppy Processor Configuration Parameters

Basic
• Processor Enabled (Boolean, default: True): Whether traffic should be passed through this processor. Note that just because traffic is passing through the processor does not mean any users will actually have a Cloppy response activated on them. As such, simply enabling this processor will not result in Cloppy being activated for any users. You would still need to manually activate the Cloppy response in the Security Monitor (or define an auto response rule that activates it, but that is highly discouraged).
• Cloppy Message (String, default: "It looks like you're an unsophisticated script kiddie attempting to hack this website"): What do you want Cloppy to say when offering help?
• Cloppy Options (Collection): The list of ways Cloppy can help, with associated URLs.

Advanced
• Cloppy Directory (String, default: cloppybin): The name of the directory where the binary resources needed to load Cloppy are served from. For example: cloppyfiles. The name should be selected not to conflict with a real directory at the top level of the website.
• Cloppy Dismiss Directory (String, default: Random): The name of the directory used to dismiss Cloppy. This URL should be random and not conflict with existing directory names on the site.
6.6.13. Login Processor The Login Processor adds protection to the login dialogs throughout the protected site. By default it provides no additional protection and must be configured to protect specific login forms. Once a login form has been configured, the processor begins to monitor login attempts and check for abusive patterns.
This processor can detect a wide variety of abuse patterns on a login dialog and stop the abusive activity. One key protection mechanism is to require a captcha if a user attempts to log in to an account that has experienced more than 3 failed login attempts since the last successful login. This ensures that a malicious user cannot brute force a specific username: after 3 failed attempts, the brute force tool is stopped by a captcha. This is not a counter response, but built-in functionality that applies to all users on the system. So if user "A" submits 5 bad passwords and user "B" then submits a password for the same username, user "B" will get a captcha, as will user "A" on any additional login attempts. As soon as a user successfully logs into the account, it will take another 3 failed login attempts before the next captcha is required.
In addition to protecting a single username against a brute force script, the processor also detects "User sharing", "User pooling", "Username scans", and "Multi-User brute force scans". See the incident descriptions for more information on what these incidents represent and what counter responses will be activated as a result.
In order to configure the Login Processor to protect a login form, edit the "Protected Login Pages" configuration parameter. Add a new row and provide the following information. It will be useful to look at the HTML source code of the login form, as it contains critical information you will need to configure protection:

• Name: The name of the login page (this is just for your reference; it can be anything)
• URL Pattern: The Regular Expression used to identify a username/password submission. This pattern should match the "action" attribute of the HTML