Are you at risk of an injection attack? These types of attacks are common, primarily because they affect ubiquitous SQL databases. If a user — internal or external — supplies information through a form, you may be at risk. Insufficient input validation may allow users to insert or inject their own parameters in your SQL call and gain access to some or all of your data.

Injection flaws occur primarily when an application uses untrusted data when constructing an SQL call or when an application relies on older, vulnerable frameworks. An attacker can simply modify the 'id' parameter in their browser to return all records. More savvy attackers can even, under the right circumstances, make changes to data.

In addition to impacting SQL databases, injection flaws can also be found in queries from LDAP, XPath or NoSQL as well as OS commands, SMTP headers, XML parsers and the like. Injection flaws are most common in legacy code, but they are tough to find and eradicate during testing. You really need to examine the code manually to check for injection flaws.

Injection attacks can be devastating to your business, both from a technical aspect and from the business side. Once an injection attack takes place, you can no longer trust your data. It may be corrupted or denial of access may occur. In rare cases, injection attacks can take down your entire DBMS or issue commands to your operating system.

You lose the ability to guarantee confidentiality to your authorized users or customers, because sensitive information may have been made available to attackers. And if you‘re holding authorization information in your SQL database, an attacker may be able to change this to limit access.

From a manager‘s perspective, how does losing the data or having it stolen impact your day-to-day operations? Depending on the sensitivity of the data, such a worst-case breach could be devastating to your company‘s ability to continue operations. It could also be terrible publicity for your company and cost you customers should the breach be made public.

Eliminating any opportunities for an attacker to take advantage of injection flaws should be a top consideration for your business because of the high impact a savvy attack could have on critical business data.

Injection flaws are common in applications written in PHP or ASP, because these tend to rely on older functional database interfaces. They are much less common in J2EE and ASP.NET applications, whose standard data-access APIs make SQL injection harder by default. While you can't assume that applications written on those platforms will be fine, you should be more concerned about your legacy apps in PHP and ASP.

The first thing to do when investigating the vulnerability of your application to an injection attack is to make sure that any interpreter in use separates the query from any untrusted data.

In SQL, you can simply avoid dynamic queries, or you can use bind variables in stored procedures and prepared statements so users are unable to inject SQL of their own.

You can use a code analysis tool to find out whether an interpreter is in place and to see how data flows through the application. Penetration testers can also create scenarios such as an attacker would use, which can tell you whether you're at risk. But in most cases you'll have to check the code yourself; that is the quickest and most accurate method for determining whether the interpreter is present. Scanners can't always tell whether an attack could be carried out successfully.

In order to keep an attacker from accessing your data, you need to ensure that untrusted data stays separated from commands and queries. That means an attacker isn't able to pull up records by modifying the 'id' parameter; the query will simply return nothing.

The easiest way to do this? Don't use the interpreter at all. A safe API can avoid the interpreter by providing a parameterized interface that only uses PreparedStatements instead of Statements. These statements are precompiled before the user adds any input, so it isn't possible to modify the SQL statement.

Even some stored procedures are vulnerable, so avoid dynamic queries and use parameterized queries. All your SQL statements should treat user inputs as variables and validate them before returning data to the user. In other words, statements should be precompiled before actual inputs are substituted for the variables.
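
To make this concrete, here is a minimal sketch of a parameterized query in Java using JDBC's PreparedStatement. The table and column names (users_table, username, password) simply mirror the login example discussed later in this section, and the surrounding connection handling is assumed to exist elsewhere.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // Returns the matching user id, or null if the credentials do not match.
    public static Integer findUserId(Connection conn, String username, String password)
            throws SQLException {
        String sql = "SELECT id FROM users_table WHERE username = ? AND password = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, username);   // bound as data, never parsed as SQL
            stmt.setString(2, password);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getInt("id") : null;
            }
        }
    }
}

Because the values are bound after the statement is parsed, a password such as ' or '1'='1 is matched literally instead of rewriting the query.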

In some cases, there's no API with a parameterized interface that is suitable for use. It is possible to escape special characters, but you must use the specific escape syntax for whichever interpreter you are running; check your code library for an escape routine that will work. This is the only way to make special characters safe if they need to be used in your application. While you can "whitelist" approved characters for use, that may not be a perfect solution if your users need to be able to input many different special characters.

You should also avoid disclosing any error information to the user. An uncaught SQL error may contain information like procedure names or table names and, in the hands of a knowledgeable attacker, can be used to access data.

Injection flaws are relatively common and should be removed from legacy code. The good news is that most injection flaws are rather hard to exploit; they have to be set up in just the right way for an attacker to take advantage of them. However, if a vulnerability is exploited, it can be very damaging to your business. As part of an overall security review, it may be a good idea to re-examine any databases that allow user queries to ensure that queries or commands and untrusted data are completely separated.

Weaknesses arise when form processing code constructs SQL queries on the fly. The naive approach is to construct a prototype statement and fill it out with data from the form fields. A couple of examples:

Let's suppose that some web application requests a user name and a password to access a private area. The application will take these values and build a query to send to the database. The query might look like this:

select id
from users_table
where username='$username' and password='$password';

Where $username and $password are replaced by the values the user has entered on the screen.

In this case, we allow the user to enter text that is sent to the database without any verification. A malicious user could enter ' or '1'='1 in the password field, and in that case the resulting query would be as follows:

select id
from users_table
where username='admin' and password='' or '1'='1';

This query produces the same result as if the admin user had entered the correct password, so the web application will grant access to the administration area to a user who doesn't know the proper password.

Let‘s see another example:

SELECT ADDRESS FROM CUSTOMERS WHERE LAST_NAME = '#1'

Excessively trusting code will accept whatever the user types into a "Last Name" field and replace "#1" with it. This works fine as long as the user types a normal last name. But let's say this appears in the form field:

Smith';DELETE FROM CUSTOMERS WHERE LAST_NAME = 'Smith

Note the quotation marks. After the SELECT statement, the next statement will delete all rows with the last name Smith from the CUSTOMERS table. A more elaborate statement could change records or do anything else, and some injection attacks can even extract information.

Reading or modifying the database requires knowing or guessing something about its tables. Less sophisticated attackers can attempt a denial-of-service attack by blindly modifying the SQL to make it invalid. That approach might crash an application on the server or put the database into an invalid state.

The backslash (\) character is as dangerous as the quote. A backslash followed by a single quote tells SQL to treat the quote as a literal character, not a string delimiter. Placing a backslash at the end of a string value leaves the string unterminated, and whatever comes next becomes part of the string. A backslash-based SQL injection is harder to construct than one using quote characters, but not impossible.

The comment introducer "--" (two hyphens) will make the database engine ignore the rest of the line. The character sequence "/*" will make it ignore everything until a matching "*/" comment delimiter. These don't work within quoted strings, and not all SQL implementations support them, but they provide an avenue for injection if the attacker can get outside a quoted string.

The attacker can find more options by constructing POST data directly rather than going through a web form. The code needs to validate all POST data fields, not just the ones that come from form fields.

To have the best chance of success, an attacker needs to learn something about the database. For open-source code, this information may already be available. In other cases, the error-based approach can yield information: the attacker performs a trial injection and hopes to get back an error page. The error report will often consist of a stack dump and a display of the failed SQL statement, which lets the attacker see which injected text can do harm.

A variation on this is a page that displays the SQL output, even if it‘s an error message. The page will look normal, except where the error message appears instead of expected data. This approach also lets the attacker inject arbitrary SELECT statements and see the output on the page.

Web software should catch all errors rather than letting them put up error messages that browser users can see. Not only are those pages ugly, but they can also give attackers clues about just how they can exploit bugs. The software also needs to check whether SQL statements have returned an error or other anomalous data before displaying their output.

The out-of-band approach lets attackers get information even if it doesn‘t go to the Web page. The injected SQL may direct the output to a file. MySQL has the INTO OUTFILE option, which writes a query‘s output to a file. If the attacker can get the file back (perhaps by writing it within the Web directory), it can divulge all the database‘s secrets.

A blind SQL injection attack is the most difficult to execute successfully, but it can be effective against sites using open-source software, where the database's structure is known. Even proprietary databases will often use predictable table names like EMPLOYEES and CUSTOMERS, so pure guesswork combined with persistence can do damage.

A simple defense against SQL injection is to check all user inputs for risky characters and delete them, replace them with something harmless, or reject the input. It‘s called sanitizing the input. This approach works in principle, but it‘s error-prone. If the programmer neglects to sanitize every single input, that leaves a hole open to attackers. Several other defenses are better.

Prepared statements avoid the problem by letting the SQL engine handle the variables instead of processing them directly in the code. The statement is pre-parsed, so user data can‘t change its syntax. In the example above, the code would create a prepared statement from this template:

SELECT ADDRESS FROM CUSTOMERS WHERE LAST_NAME = ?

The code would give the prepared statement and the form input to the SQL engine. If a user attempts SQL injection on the LAST_NAME field, the SELECT statement simply won‘t get a match. Sanitization still offers benefits, since other code may have trouble handling strangely formed inputs, but it‘s not a necessity.

Stored procedures are another approach to preventing injection. A stored procedure, like a prepared statement, takes parameters rather than blindly concatenating pieces into an SQL statement. In most cases this is safe. However, stored procedures can incorporate dynamic SQL, and developers sometimes do this to increase their flexibility. A dynamic SQL fragment incorporated into a stored procedure has all the vulnerability of a generated SQL statement. Developers either need to restrict stored procedures to safe ones or sanitize their parameters.

The specific case of the login fields used in the example above is just one possibility among many. Any field presented to the user can be used to inject SQL into the application. It is also possible to inject SQL via a URL with parameters, through a contact form, or even by using a mobile application or an integration API.

The best way to detect whether your application is vulnerable is by looking at its source code. If, in your Java, PHP, Objective-C (iPhone & iPad), Ruby or SQL code, a query collects a parameter and inserts it directly into the query text, then your application is vulnerable, or at the very least potentially vulnerable.

There are several alternatives, and each programming language or each developer may use whichever is most comfortable or convenient in each case. We suggest using SQL prepared statements, which compile the query before the parameters are supplied. The parameters are then bound, but they cannot modify the query. In the case above, we would obtain this result if we programmed it in PHP:

$stmt = $mysqli->prepare("SELECT id FROM users_table WHERE username=? AND password=?");
$stmt->bind_param("ss", $user, $pass);
$stmt->execute();

As you can see, there are many serious errors that can be easily detected and fixed. A code inspection tool like Kiuwan may help you detect those defects and save a lot of money.

The important thing is to use a consistent approach and never yield to the temptation to write ad hoc code. The further the code is from the raw SQL, the safer it is. Using an object-relational mapping (ORM) library eliminates the need to write SQL. Frameworks are available in many programming languages to separate application code cleanly from database access.

Any software development project that accepts user inputs for database access needs to consider the hazard of SQL injection and use programming practices that minimize its risk.

Authentication and session management cover verifying user credentials and managing their active sessions. Broken authentication and session management occurs when credentials cannot be securely authenticated or session IDs cannot be securely established, due to lack of encryption and/or weak session management. These flaws create vulnerabilities that put at risk not only confidential data but entire company systems and networks, for instance by allowing attackers to impersonate other users. It takes just one stolen credential to infiltrate and damage your company's viability.

The rather straightforward authentication process precedes opening user sessions. Authentication occurs when provided credentials are successfully matched against an authorized user database. The user session can then be initiated once credentials are authorized. Session IDs, also known as session identifiers and session tokens, are unique identifiers (cookies) stored and sent by the server to the client.

Session management workflow looks something like this:

- A user delivers login information (submits credentials)
- Credentials are authenticated by token (smart card), password, or biometrics (fingerprint, retinal scan)
- A session and the relevant IDs are issued and cookies are delivered
- The user carries out the workflow
- At some point, the user may need credentials re-authenticated (e.g. the user walks away for a time)
- The user logs off, ending the session, or there may be an idle session timeout
- The session times out due to idleness or a predetermined time constraint (called an absolute timeout)
- The session therefore becomes invalidated by user logout or session timeout

Any deviation from the authentication and session management process can introduce flaws with potentially severe technical impacts, since all user accounts in your company, including privileged accounts, can be breached both internally and externally.

Issues tend to be widespread, but are detectable. It sounds easy to discover a flaw–put in a bad password and you don‘t log in to the page you seek. The issues go deeper than mere login credentials being rejected. Often problems lie with development and custom code which makes detection and issue resolution more difficult.

Common authentication flaws involve user names and passwords. For example:

- Poor password management: simple or reused passwords and insecurely stored credentials leave user credentials unprotected, as do forgotten-password and change-password features that work over email or rely on a weak authentication policy.
- Weak account management leaves credentials exposed to "brute force" guessing and overwrites. Your user accounts can be hit with many password guesses against one account or one password guess against multiple accounts.
- Session ID vulnerabilities: IDs exposed in the URL, lack of timeouts, unlimited password tries, session tokens that are not invalidated on logout, and session IDs that are not rotated after login are just some of the flaws that can create exposure.
- Unencrypted and weakly encrypted connections are a major security risk.

Two issues take these one step further:

1. Computer system customization: developers have to custom-build logins, logouts, password management, and workflow processes, including timeouts. This type of development is difficult, and weaknesses creep in easily through human error.

2. Utilizing the cloud, IoT, and mobile devices adds extra opportunities for authentication errors, especially for mobile devices that pass from one employee to another without changing credentials.

It is most important to properly protect your company's user sessions throughout their life cycle through consistent authentication and session management checks and testing, such as penetration testing.

How is broken authentication and session management different from broken access controls? Broken access controls involve vulnerabilities in authorization while broken authentication involves verifying the identity of a user before the user is authorized to have the session. While off the top they seem very similar, these two issues have very different processes, vulnerabilities, and resolutions, and shouldn‘t be confused.

There are a number of ways broken authentication and session management can be resolved. Here are 3 common problems and their resolutions:

Problem 1: An authentication match to an authentication list is not enough security.

Resolution: Create a two-factor authentication process. Use a token and password, for instance. Remove any default passwords and make sure that recovery paths do not show current passwords.

Problem 2: Sessions are not timing out after inactivity.

Resolution: Establish an inactivity timeout, and display a notice shortly before it expires (e.g. 1 to 2 minutes in advance) that the session will time out due to inactivity. This prompt gives the user time to become active within the session or to log out.
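
As an illustration, here is a minimal sketch using the Java Servlet API. The 15-minute idle timeout is an arbitrary example value, and the two methods are hypothetical hooks you would call from your own login and logout handlers.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public final class SessionLifecycle {
    // Call this after credentials have been verified by your login logic.
    public static void onLoginSuccess(HttpServletRequest req) {
        req.changeSessionId();                    // rotate the session ID (Servlet 3.1+); defeats session fixation
        HttpSession session = req.getSession(true);
        session.setMaxInactiveInterval(15 * 60);  // idle timeout in seconds
    }

    // Call this when the user logs out.
    public static void onLogout(HttpServletRequest req) {
        HttpSession session = req.getSession(false);
        if (session != null) {
            session.invalidate();                 // destroys the server-side session state
        }
    }
}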

Problem 3: A user wants to log in to a site not on the company safe list.

Resolution: Web Application Firewalls (WAFs) can mitigate session attacks by protecting HTTP applications from infiltration in case a user tries to log in to a potentially unsafe site.

There are innumerable ways that broken authentication can lead to insecure session management. Every issue and vulnerability is unique as is its resolution and should be handled on a case-by-case basis.

Everyone knows it is easier to prevent than to fix an issue.

The key to resolving broken authentication and session management is that authentication controls must be implemented centrally. This will prevent many issues since all web servers, application servers, and other computing environments will have just one location that establishes and feeds the authentication and session management process.

Some key preventive measures include:

- Passwords: strength, use, change controls, and storage. Passwords should be complex, non-dictionary, alphanumeric with approved special characters, at least 8 to 15 characters long, and changed periodically. Often the system will prompt for a password change and let you know when a password is set to expire.
- Use information: allow only a predetermined number of login attempts, and give no indication of whether the username or the password was wrong. Change controls should require providing both the old and the new password, and the user should be re-authenticated if the password or username is sent by email.
- Protect credentials in transit with encryption, using TLS (the successor to SSL).
- Session ID protection, also over TLS (see the sketch after this list). Never put session IDs in the URL, where they can be easily cached or passed to another user. Make session IDs long, boring, complicated random numbers and change them frequently to reduce the length of their validity (e.g. by timing out). Never accept a session ID supplied by the user.
- The account list should not be available for users to access. If they must see who is on the site, use a pseudonym (screen name) that cannot be traced back to a particular account.
- Do not rely on the browser cache: in many browsers, all it takes is the back button to resubmit login credentials, so disable caching of pages that carry credentials.
- Your site architecture should not assume implicit trust among components. Components should authenticate one another, because site architecture evolves and trust can be breached.
- Avoid cross-site scripting flaws, since they can reveal session IDs.
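
As a small illustration of the session ID protection point above, the following sketch, assuming a Java Servlet 3.0+ container, marks the session cookie Secure (sent only over TLS) and HttpOnly (unreadable by page scripts) in one central place.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;
import javax.servlet.annotation.WebListener;

@WebListener
public class CookieHardeningListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        SessionCookieConfig cfg = sce.getServletContext().getSessionCookieConfig();
        cfg.setSecure(true);    // session cookie is only sent over encrypted connections
        cfg.setHttpOnly(true);  // session cookie cannot be read by JavaScript, limiting XSS theft
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}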

Developers should be given one set of strong authentication and session controls by their company, and those controls need constant maintenance and testing so that broken authentication and session management issues, and the risks that follow from them, are circumvented.

Number 3 on the OWASP Top 10 2017 list is A3-Sensitive Data Exposure. The first question to ask is whether your organization even has sensitive data that needs protection against exposure. The quick answer is that, in today‘s digital world, most organizations will have some sensitive data that requires extra protection, such as:

- Passwords
- Credit card numbers
- Social Security numbers
- Health information
- Other personal information

Depending on your organization‘s interaction with data collection involving consumer information, your data collection efforts may reveal additional sensitive personal information.

To discover the answer to this query, you must analyze how the network treats each piece of sensitive information that you collect. For each type of sensitive information, determine whether the network:

- Stores sensitive information in easily readable text over long periods of time;
- Transmits sensitive information, externally or internally, in easily readable text (as you might expect, transmission over the internet to external sources is especially attractive to cybercriminals);
- Uses old, outdated algorithms or weak encryption to protect sensitive information;
- Generates weak cryptographic keys, lacks key rotation, or has poor key management; and
- Sends sensitive information to the browser without security directives, or accepts sensitive information from the browser when security directives are missing.

Note: Hackers do not generally break cryptography directly. More often than not, they steal something that helps them break the encryption. For example, they steal keys, or they steal clear-text data from the server (while the data is in transit) or from the network's browser. Hackers often conduct man-in-the-middle attacks to steal sensitive data. A man-in-the-middle attack takes place when a hacker steps in between two trusted parties who think they are talking directly to each other, while the hacker sits in the middle of their communication stream and can modify the communications to his advantage.

Believe it or not, the most common security breach occurs because an organization did not encrypt sensitive information at all. For those networks that use encryption, the problems derive from weak key generation or weak key management. Other common problems include weak algorithms and weak password hashing (hashing means turning the original password into another, shorter string value or a key that represents the original password).

Browser weaknesses are also very common and easy for hackers to detect. Fortunately, hackers find them difficult to take advantage of on a large-scale. Almost all browser exploitation relies on a user clicking on something. For example, a bug in a browser may make it possible to construct a web link that displays one website in the status bar but, when clicked on, the browser actually transports the user to another website, called status bar spoofing.

Hackers trying to break in from the outside have trouble finding vulnerability flaws on the server-side because they have limited access. Those vulnerabilities they find are also harder to exploit. Internal hackers often take advantage of vulnerabilities on the server-side through their access to sensitive information.

You must start by asking who your threat agents are; that is, which bad actors can gain access to your sensitive data (and include in that list anyone who can gain access to the backups of your sensitive data). We are talking here not only about data in transit. You need to recognize who can gain access to sensitive data when it is at rest (stored, not going anywhere) or when it remains vulnerable in a customer's browser. And you need to think internally as well as externally if you are going to get the full picture of your vulnerability.

As part of the review of your network‘s vulnerability to sensitive data exposure, determine what your organization‘s legal liabilities are if a data exposure occurs. Consider the damage to your organization‘s reputation as well as the monetary exposure from angry clients for lost or exploited data.

If network encryption fails, or if your organization fails to encrypt the network to protect sensitive data, the result is the same. Hackers will steal sensitive data (passwords, health care information, sensitive financial or personal information, credit card information, etc.). We‘ve all seen headline news of hackers breaking into government data banks, merchant customer data (like the attack against Target) or stealing Yahoo‘s customer information or ransomware attacks demanding money to release files or return whole networks to the owner.

In the future, instead of attacks to steal information, the attacks may seek to destroy data or modify a company‘s data in a malicious manner.

So, is there any way to prevent sensitive data exposure? There are six things — at a minimum — that you can do to try to prevent sensitive data exposure.

1. When you identify the threats and the threat actors you need to protect against, take pains to encrypt all the sensitive data (at rest and in transit) in a way calculated to protect the data against the identified threats.
2. Don't ask for data you do not need. Don't store data you do not need. Discard data as soon as possible after you no longer need it. Cybercriminals cannot steal what you do not store.
3. Use strong algorithms and keys and strong key management. Consider using FIPS 140-1 and 140-2 cryptographic modules.

4. Store passwords with an algorithm designed specifically for passwords, such as bcrypt, PBKDF2, or scrypt (a sketch follows this list).
5. Disable auto-complete on forms that collect sensitive information.
6. Disable caching for pages that contain sensitive information. And keep a wary eye on cybercriminal activity.
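
As an example of point 4, here is a minimal sketch of PBKDF2 password hashing using the standard Java crypto API. The iteration count and key length are illustrative values to tune for your environment, and real code would also need a matching verification routine.

import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public final class PasswordHasher {
    public static String hash(char[] password) throws GeneralSecurityException {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);             // unique random salt per password
        PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] hash = factory.generateSecret(spec).getEncoded();
        // Store salt and hash together; both are needed to verify a login attempt later.
        return Base64.getEncoder().encodeToString(salt) + ":" + Base64.getEncoder().encodeToString(hash);
    }
}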

Extensible Markup Language (XML) files are plain-text files that describe data behavior as that data relates to a connected network or server application. If you open an XML file, you'll see code describing how that file's particular data is transported, structured and stored. You'll also notice that XML information is wrapped in tags that look much like HTML tags.

However, unlike HTML, XML files don't intrinsically accomplish anything. XML files essentially act as data carriers parsing information to some external entity. That is, the components of the XML file are broken down by an internal program that analyzes those components, sending them into your computer's memory so that those components are available for use in an application or on a network immediately or at a later date. However, if that XML file is weakly parsed to that external entity by that internal program, your data can become vulnerable to an XML External Entities (XXE) attack.

External entity attacks can cause denial of service, file scanning and disclosure, and remote code execution that undermine the security of your computer system. Understanding the relationship between XML files, parsing, and weak parsing is imperative to understanding what an XXE attack is and why such an attack can put your company at risk.

First, to parse means to break down components so that they can be comprehended in a meaningful way. If you've ever diagrammed a sentence, you have parsed the components of the sentence so that you can better understand the sentence's structure. It's the same principle if you are parsing XML code. Today's computers have standard parsing interfaces built in, such as the Document Object Model (DOM), which parses XML files into flowchart-like structures or trees. This parsed information can be utilized by applications and networks independent of programming language and no matter what platform is being used. That's the beauty of modern XML. However, that beauty can be a double-edged sword.

Take this simple XML file, for instance. It has three components:

John Smith

software engineer

January 7

The DOM (or whatever processor your particular system uses to parse the XML code) identifies the purpose of each of those components (name, occupation, birthday) and puts them into the computer's memory where the information is readily available to external entities. An external entity can be an application on a network that needs to pull that information for a specific purpose, such as an employee birthday list.

But what if your company hasn't updated its system in years or what if it has only updated parts of its system? Examples might include a non-profit working with donated equipment or an older business that hasn't had the need, staff or funding to update part or all of its computer system. These types of legacy computer systems often have older XML processors that point to specified entities like Uniform Resource Identifiers (URIs).

Much like URLs identify addresses, URIs identify resources over server applications or networks. If you have an XML file that is parsed to a URI that has been injected with malevolent data, then that information reaches your XML files' components. That URI's malicious data infiltrates your computer system. This can cause your company's files to be reconfigured, scanned, or extracted. For instance, the attacker can add code that reroutes your XML file to a URI that sends it to another external entity that changes the XML tags to something reflecting an end result that is definitely not an employee birthday list.

Often you don't know if your files are vulnerable or compromised until an issue comes up that alerts you, such as an XML upload from an untrusted source creating probed data or a denial of service attack. As a result, developers and other manual testers don't routinely test for XXE attacks.

Penetration testing homes in on your system's path of least resistance by manually injecting malicious URIs, tags, and other data into your XML files to exploit them. It's easy to do because all modern computers use XML files that can be opened with any plain or dedicated text editor, read, and edited. This is where XML's beauty becomes that double-edged sword.

XML tags aren't predetermined the way HTML tags are: the XML author "invents" tags that define the file's structure for use by applications, whereas HTML tags are fixed and presentational. You might think that, since XML tag structure is so individualized, no external entity could attack it. However, since it is quite easy to inject new tags (code) into your XML files, XXE attacks can infiltrate your files even if those files are deeply embedded or nested in your source code.

There are several ways to prevent XXE attacks (a parser-hardening sketch follows this list):

- Sensitive data is an easy target to attack when it is serialized. Keep serialization simple and apply special permissions when it cannot be avoided.
- Patch or upgrade XML processors in legacy systems.
- "Whitelist" server-side input by validating it against specific criteria.
- Update web application firewalls to keep untrusted traffic away from server-side XML processing and web services.
- Validate the functionality of XML data: make sure it is well-formed.
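
As a concrete example of hardening an XML processor, here is the minimal sketch referenced above: it configures Java's standard DOM parser to reject DTDs and external entities before it touches untrusted input. The feature URLs are those honored by the JDK's built-in, Xerces-based parser; other parsers expose equivalent switches.

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public final class SafeXmlParser {
    public static DocumentBuilder newHardenedBuilder() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);    // no DOCTYPE at all
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);  // no external entities
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }
}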

The first line of defense against XXE attacks is to formally train developers on what these attacks are and how to recognize potential vulnerabilities, so that they know how to minimize or negate the impact on your business. You don't want attacks like a billion laughs to have the last laugh on you.

Access control (authorization) determines which users can interact with which systems and resources within your company. When access control is broken, anyone can send requests to your network applications. Broken access control means that unauthorized access to system functionality and resources has created an exploitable weakness that opens your company to harmful and potentially expensive outcomes.

Knowing what your assets are will help you decide what kind of controls to assign to them. Assets are the company information, systems, and hardware used within your business. If you do not understand what your company's most important assets are, how will you know what kinds of access controls to apply?

- Informational assets: databases, current and archived files, policies and procedures
- Physical assets: computers, servers, routers, and anything else physically visible relating to your system
- Software assets: system and application software
- Service assets: the operations your system performs, such as serving consumers and handling communications

What value do you give these assets? That is, what assets need the most security to protect them? Once you have identified the critical assets within your company infrastructure, you can assign access control dependent on the value given to those assets.

The CIA triad is the base principle of all access control to information. Its meaning is pretty straightforward:

- Confidentiality: rules that limit access to information
- Integrity: assurance that information remains trustworthy and accurate over its life cycle
- Availability: the information must be accessible when it is needed

"Access" defines the flow of information from its user to its requested resources, such as a selected computer file. The security of that resource depends on three primary types of access control: administrative, technical, and physical:

- Administrative access control involves all company employees and their secure access to particular company resources. It covers security policy, administrative and personnel controls, and their time limits; examples include personnel administration, security training, and testing. It also incorporates the principle of least privilege: giving access only to what is needed to get the job done.
- Technical (logical) access control uses technology to keep sensitive company data secure across networks and systems. Examples include antivirus software, firewalls, auditing, and encryption, as well as maintaining access control lists and setting up alerts.
- Physical access control covers non-technical controls that secure a company and its resources, such as dead-bolt locks, cameras, and security guards. Examples include keeping user computers away from the server areas and protecting data backups.

The file example mainly involves the administrative and technical controls, but it also touches the physical control: the file is on your computer system, and it must be determined who gets access to it (administrative), the type or manner of their access (technical), and where that access happens (physical). Broader-scope concerns include access control in the cloud, the IoT, and the sheer volumes of data that many enterprises handle daily.

There are three file access modes: read-only, read-write, and execute. Each type of file will have its own particular access controls. These access controls should be carried out throughout the system and be a standard operating procedure for your company. Carrying out access control follows a multi-layered protocol:

- Subject ID: know who is requesting access
- Authentication: verify who is requesting access, which results in allotting user accounts, password allocation, and usage rules
- Privilege ACLs: once authenticated, the request is checked against the access control list to see what privileges can be granted to the requestor
- Audits: check for vulnerabilities and flaws in the system

Once authentication is validated and privilege is granted, access authorization is based on the following:

- Role-based: limited, hybrid, or full roles (see the sketch after this list)
- Rule-based: access is granted only if it follows a rule
- Mandatory: a system-enforced model that allows access on a need-to-know basis
- Discretionary: access is granted at the owner's discretion
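
As an illustration of enforcing such checks in one central place, here is a minimal sketch of a role-based check in a Java servlet filter. The /admin path and the "admin" role name are assumptions made for the example; map the filter to whatever resources you need to protect.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Intended to be mapped to /admin/* so every request is authorized before any business logic runs.
public class AdminAccessFilter implements Filter {
    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;
        // Deny by default: only authenticated users holding the "admin" role may continue.
        if (req.getUserPrincipal() == null || !req.isUserInRole("admin")) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }
}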

Even with such protocols, files with improper access control happen. Access control is an on-going process, not a one-off, set-up-and-be-done-with-it event.

Access controls become vulnerable when functionality and resources are compromised due to users who do not have proper authorization to access files. Verifying function level access on every level is the best way to find vulnerabilities such as unrestricted navigation to unauthorized functions and missing authorization checks and balances.

Weaknesses can be found in URLs, old directories, cached pages, and passwords that are not strong enough or that have not been changed when employees or employee roles change. Users are often so afraid of forgetting passwords that they save them on their computers, making them easy to steal.

Access can also be compromised when users fail to follow strict pathways to needed information using company protocols for retrieval. Back-door pathways can cause loss of system functionality because authorized access controls are bypassed. Users may try to manipulate access controls such as firewalls to gain access to needed information.

It is important to note that passwords are the weakest link in access control: they are subject to guessing and easy to attack, both from within a company and from outside invaders.

Passwords should be 8 to 15 characters using no words, utilizing upper- and lower-case letters, numbers and company-designated special characters.

There are many ways to break into a system, including "dictionary" attacks that scan for password matches, "brute-force" attacks that run password combinations until one matches, and "birthday" attacks that exploit colliding hashes. Other attacks that can happen once access controls are breached include spoofing and phishing attacks.

Broken access controls leave the door open for such attacks. Impacts include broken day-to-day operations (denied access, downtime), data breaches, and bad PR if such breaches are publicized.

Company application access can be broken when functional level access is misconfigured by developers resulting in access vulnerabilities.

Denied access is arguably the most common result of broken access controls. Access can be denied in applications, networks, servers, individual files, data fields, and memory. Denied access not only causes inaccessible requested files, it can cause other security mechanisms to fail. For instance, if the access is broken on one control, other controls may be affected in the file hierarchy.

IT teams have to resolve broken access controls by fixing not only what is broken, like a bad password leading to denied access; but, they must look wherever that access control had functionality in the first place, including controllers and business logic.

Preventing broken access control should come from a central entity that ensures all company access functionality is maintained and managed.

There are many ways to enforce and manage access within your company. Keeping close tabs on employee identification and credentials, having employees sign non-disclosure agreements, monitoring activity for unauthorized personal-use web sites, telephone usage, and software installation, creating multi-layered login processes and workflow accessibility, monitoring password resets, reuse, and expiration, and logging issues daily all help track functionality and catch any broken access controls.

Access control is a proactive process. Understanding what it is, how it works and following company protocol keeps broken controls in check and your company running smoothly.

Security misconfigurations are “holes” or weaknesses within your computer applications that leave your system vulnerable to attack. These misconfigurations allow easy exploitation from threat agents from both inside and outside of your company. The good news is that although misconfigurations are common, they are also easy to detect and fix. But often they aren‘t discovered until your system is compromised and the costly damage is done.

Today‘s companies are run on multiple platforms, using multiple applications, utilizing multiple servers, any of which can harbor security misconfiguration. Additionally, you are at risk if any of your apps are executed using cloud servers and insecure mobile devices in conjunction with your company‘s internal computer platforms, servers, and applications. You are also at risk if you make the assumption that your out-of-box computer programs are ready-to-go and secure with company firewalls, or that IT teams are always on top of maintenance when they are often stretched thin with daily issues.

Misconfigurations can occur anywhere on your application stack. Your application stack is simply all the applications required by your company, such as word processing, spreadsheet, and database management packages. Your stack also includes your communication programs like email and internal messaging as well as your web browsers. Misconfigurations can occur within those apps, on your web and application servers, any place in your company‘s computer system architecture. Here are a few common security misconfigurations:

- Default system credentials: user accounts and passwords left in factory-default or unchanged status
- Directory and file listings: not disabled and easily discoverable through search engines
- User traces: pages returned to users with error messages that contain too much information
- Unnecessary pages: sample apps, old privileges, and unused user accounts, for example
- Software: not up to date, legacy systems, patches not applied, orphaned custom code

An important point to remember is that your computer system is multi-layered. If any of those layers aren‘t securely assembled, your system can be infiltrated and data can be compromised or stolen all at once or over time and disguised so that you‘d never know it is happening. It is imperative to establish multi-layered security protocols and to establish minimum application configuration reviews.

Developers and system administrators working together can find security misconfigurations and fix them. This is done through regular automated security scans and periodic manual reviews of each application, platform, and server configuration against your guidelines. Do not assume that, because you are not seeing immediate issues, there are no security misconfigurations.

Arguably one of the most overlooked security misconfigurations is the default mode, especially in enterprise corporations where there can be hundreds of user interfaces occurring at any given time of the day and night. It‘s easy to assume that perimeter firewalls protect your system. That‘s a dangerous assumption. Leaving system credentials in a factory or user default mode enables attackers to peel away those layers until your critical and other sensitive data is exposed.

Resolution: modifying or changing factory default credentials before making applications active is a best practice. This is called application hardening, and it makes your applications more secure.

Here are some specifics:

- Be consistent among company departments. Configure common apps identically, with strong separation between application environments. Development, production, and quality assurance should be configured so that data can be accessed as needed, but with individual passwords and user accounts. This promotes security and accountability and helps trace errors back to particular departments and users.
- Disable company server directory listings, archive or delete old files, and get rid of unneeded services. This includes disabling or deleting the sample apps that come with your app servers. Also change your app servers' default behavior of returning detailed error traces to users; users might not understand them, but an attacker would.
- Remove old privileges and user accounts.
- Apply software security patches, update legacy systems, and get rid of orphaned custom code.
- Make sure all mobile device defaults are changed accordingly.
- Immediately change defaults for new employees and delete terminated employees' user permissions and accounts, including mobile devices and cloud access.
- Monitor outside vendors' cloud usage of company data.

Security misconfiguration resolution depends on your company's unique operating environment. Resolving issues isn't a one-size-fits-all boxed solution. It is important to keep abreast of new security updates, attend conferences, keep communication open with vendors, and keep close track of your company's mobile devices. Each company is unique, and resolving security misconfiguration within your application stacks, platforms, and other architecture should always be tracked carefully and consistently.

It is important for C-level admins to understand what security misconfigurations are and what potential impact they have on a company, from minor annoyances to major threats. You need to understand those threats and how to mitigate them, ask your IT team questions, and know what kinds of testing are available. Knowledge and teamwork reduce risk. Understanding also helps you go to bat for your IT teams should they need to update legacy systems, obtain penetration testing, or present company-wide user training.

After resolving security configurations, don‘t assume that it is a one-time deal. Designate one person or a team to keep abreast of changes and issues. Be consistent in monitoring, applying and updating changes and be vigilant in running audits and performing automated and manual scans to avert future security misconfigurations.

Cross-site scripting (XSS) occurs when an attacker injects malicious script, such as JavaScript, into a web site; the script then runs in the browser of anyone who visits the compromised page. When the user inputs data into the visited web site, the malicious code routes that input data to the attacker. XSS vulnerabilities reside anywhere the user can input data: a URL, a form, even a social media post. It is not only a home-user threat. XSS can seriously damage your company because attack code can be written in endless ways, causing more than rerouted and stolen data. It can create internal and external end-user issues that can cripple your site's ability to generate revenue.

JavaScript by nature is a dynamic, interpretative coding language that instructs something to run on your browser. XSS, therefore, can be somewhat innocuous if tapped into by the user. For example, the XSS vulnerability may be nothing more than taking you to an annoying web site. On the flip side, XSS vulnerabilities that hijack your site‘s cookies, count keystrokes or phish for sensitive information, can be a platform for more damaging attacks like destroying customer order information, creating destructive user experiences, and ruining the business-to-customer relationship.

Here‘s an example of how it happens:

You go to an XSS-injected web site and begin to fill out a form. The malicious code interprets your input, exploits, and executes the XSS vulnerability giving the attacker the desired information or data.

The code can be written and injected in a myriad of ways depending on the needs and creativity of the attacker. It can send the user a blatant message, or it can work behind the scenes unbeknownst to the victim. The bottom line is that malicious XSS is not to be trifled with and needs to be resolved by the individual end user or by the infiltrated company.

As noted, XSS takes three players to execute an injected script: the website, the user (victim), and the attacker. The attack can come from the server-side code or from the client-side code.

- Persistent (stored) XSS originates in the website's database. It is a server-side attack and runs without the user's knowledge. It is arguably the most damaging form of XSS.
- Reflected XSS (think echo) is triggered by the victim's own request; the user is often tricked into invoking the malicious payload. It is a server-side attack, the most common form of XSS, and it is used in phishing schemes and on social networks to gather insecure information.
- DOM-based XSS works on the Document Object Model rather than the HTML and is a client-side attack only. It mixes aspects of persistent and reflected XSS in that the legitimate script runs first, before the malicious script is executed and its results are sent to the attacker.

In lay terms, XSS is a very common attack vector that can affect your servers and your clients. It can be benignly annoying by overtly leading you to an insecure page. Or, it can be quite dangerous, impacting your enterprise to the point of not only expensive financial recovery but also your company‘s reputation.

You are a target for XSS if you generate your revenue online or are part of large online communities like social media and entertainment and news sites. XSS attackers taking the time to design malicious code with the intent to inflict damage on your company are going to want the maximum return on their investment in the form of both internal and external damage.

Internal damage includes the total downtime caused by investigating, testing, and resolving XSS issues. This leads to loss of employee productivity, which means lost revenue for every minute of downtime. So if you are a 24/7 online company, you could calculate your lost revenue with the following equation:

Revenue loss = annual revenue / 525,600 * t (minutes of downtime)

where 525,600 is the number of minutes in a year.

As you can see, downtime minutes count toward lost revenue, lost inventory, repair time, internal and external damage creating lost business opportunities, issues with public relations, brand, and reputation.

Resolving XSS attacks is not a cut-and-dried process. The issue lies in the flexibility of the XSS code and the mindset of the attacker. Everything depends on how the malicious code injection is worded, interpreted by the browser, and executed. It is imperative to utilize all automated updates and patches to thwart unseen vulnerabilities. Once they are discovered, however, resolution takes some finesse and thoughtful coding to rid the exploitation. But the best resolution lies in prevention.

The best way to prevent XSS attacks is to provide secure input handling in both the server-side and client-side code. Web developers can prevent XSS through encoding and validation. Most of the time, encoding is enough: the browser is made to interpret user input as data, not as code. When encoding is not enough, validation prevents XSS by filtering user input so that malicious commands never reach the browser as executable code.
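
To illustrate output encoding, here is a minimal, self-contained HTML encoder sketch in Java. It is deliberately simple; in practice most teams rely on a well-vetted encoding library rather than rolling their own.

public final class HtmlEncoder {
    // Replaces the characters that let user text break out into markup or script.
    public static String forHtml(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}

Any user-supplied value would then pass through HtmlEncoder.forHtml() before being written into the page, so the browser treats it as text rather than as markup or script.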

When encoding and validation together are not enough to completely neutralize XSS vulnerability, Content Security Policy alleviates risk by making sure that only trusted resources can be used by the page in question.

XSS gives attackers a foothold into your enterprise that can bring forth a plethora of legal, financial and security woes. From changing the face of your company‘s web site to complete computer system hijacking, XSS is by far one of the most prevailing, risky web application vulnerabilities today.

In the 2017 OWASP Top 10, insecure deserialization edged out CSRF (Cross-Site Request Forgery) for the number 8 spot. For those who question the potency of an insecure deserialization attack, the 2017 Equifax breach should put all doubts to rest.

Although the Equifax breach is dwarfed in size by the 2013 Yahoo data hack, it is considered by far the worst breach in terms of information stolen. The Yahoo hack may have affected 3 billion accounts, but the 143 million accounts affected by the Equifax breach involved the theft of Social Security numbers, driver's license numbers, and credit card numbers. Meanwhile, credit card info and bank account data were said to be excluded from the 2013 Yahoo breach.

So, what caused the Equifax breach? Industry experts maintained that the breach occurred as a result of an insecure Apache Struts framework (CVE-2017-9805) within a Java web application, which led to the execution of arbitrary code on Equifax web servers. Apache, meanwhile, disputed this rationale, maintaining that the more likely culprit was CVE-2017-5638, a flaw in the Jakarta Multipart parser.

In mid-September 2017, Equifax confirmed that the CVE-2017-5638 vulnerability led to the breach but provided no details to support the claim. Thus, speculation abounds about the possible role CVE-2017-9805 (deserialization vulnerabilities) may have played in Equifax's breach.

The Apache Struts REST (Representational State Transfer) plugin supports XML through XStream, making the serialization of complex data objects into strings of text characters possible. However, this welcomed convenience also allows malicious XML payloads to be introduced into Struts servers during the deserialization process.

The idea of hackers exploiting an application flaw to steal sensitive consumer data is an outrage to our modern sensibilities. Now, 143 million consumers must rely on credit freezes, manual monitoring of bank accounts, and fraud alerts to minimize the consequences of the breach in their personal lives. Thus, it comes as no surprise that 23 class-action lawsuits have been filed against Equifax for the egregious breach.

There are 3 main types of deserialization attacks:

- Blind deserialization attacks

Blind attacks occur behind restricted firewall-protected systems or networks with strong security manager policies in place. Hackers can either exploit the Java payload or manipulate a chain of Transformers to facilitate RCE (Remote Code Execution).

- Asynchronous deserialization attacks

This type of attack stores serialized gadgets in databases. When the target application commences deserialization, gadget chains programmed to compromise the process will be executed in JMS broker client libraries. Vulnerable JMS libraries include Oracle OpenMQ, Oracle WebLogic, IBM WebSphere MQ, Pivotal RabbitMQ, and Apache QPID JMS.

- Deferred-execution deserialization attacks

In this type of attack, gadget chains are executed after the deserialization process completes. In both deferred-execution and asynchronous deserialization, gadget chains are sequences of ROP (return-oriented programming) gadgets that end in RET (return from procedure) instructions. These gadget chains facilitate the exploitation of vulnerable applications by bypassing non-executable page protections such as read-only memory and kernel code-integrity protections.

ROP gadgets are not dependent on binary code injection; hackers need only link executable addresses to the requisite data arguments to facilitate code execution. Thus, ROP gadgets are becoming a popular way for hackers to exploit the deserialization process.

There are essentially two types of ROP gadgets: one that ends with RET instructions and another that uses replicated transfer information to simulate RET instructions. These gadgets allow hackers to bypass weak or inadequate exploit mitigations during the deserialization process. If the Equifax hackers exploited a deserialization flaw in Struts, this is how they might have done it:

1. The hacker focuses on obtaining control of the stack pointer and the program counter. By obtaining control of the program counter, he can execute the first gadget; control of the stack pointer then lets the RET instructions in his gadget chain transfer control between gadgets.
2. Immediately after, the hacker overflows the buffer by injecting malicious data into the system, corrupting memory in the process. He appropriates the code execution process by putting in place his own input buffer and a gadget chain containing multiple return-to-libc calls.
3. The original return address is now replaced with the return address of the first gadget in the chain. Code execution continues until the RET instruction is encountered in the last gadget.
4. A VirtualAlloc function is called to allocate a new memory region, where a malicious shellcode or ysoserial payload can be placed. Once the payload is in place, program flow is transferred to this new memory region. Then, when the "infected" program or application begins the deserialization process, the malicious payload is activated, allowing the hacker to gain complete control of the system.

In September 2017, the Apache Struts Project Management Committee confirmed that Equifax's failure to install patches led to the unfortunate breach. Subsequently, a patch was released for CVE-2017-9805, with Apache announcing that developers should upgrade to Struts version 2.5.13.

Meanwhile, vendors are also turning to blacklisting and whitelisting to mitigate the exploitation of insecure deserialization processes. Blacklisting refers to the practice of prohibiting emails, domains, software, and apps that are recognized as unsafe from accessing sensitive applications. Whitelisting, on the other hand, authorizes access to an exclusive list of users, software, and domains.

Although both methods can be effective, they are not exhaustive solutions. Instead, they need to be combined with methods that neutralize gadget chains, the most popular form of deserialization attack, as well as attacks arriving via HTTP, JMS, and JNDI endpoints, to name a few.

These methods include hardening java.io.ObjectInputStream class behavior by subclassing it, using in-place code randomization, and deploying ASLR (Address Space Layout Randomization), which randomizes memory addresses and allocates them in a dynamic manner. With ASLR, memory addresses are not static; each time an application commences deserialization, memory regions and stack pointers are given new base addresses.
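
As a minimal sketch of the first of those methods, hardening java.io.ObjectInputStream by subclassing it usually takes the form of "look-ahead" deserialization: the stream refuses to resolve any class that is not on an explicit allow-list, so hostile object graphs are rejected before they are instantiated. The class names on the allow-list below are hypothetical placeholders.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InvalidClassException;
    import java.io.ObjectInputStream;
    import java.io.ObjectStreamClass;
    import java.util.Set;

    // Look-ahead deserialization: reject any class that is not explicitly allowed
    // before the object graph is ever constructed.
    public class AllowListObjectInputStream extends ObjectInputStream {

        private static final Set<String> ALLOWED =
                Set.of("com.example.Invoice", "com.example.Customer"); // hypothetical types

        public AllowListObjectInputStream(InputStream in) throws IOException {
            super(in);
        }

        @Override
        protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
            if (!ALLOWED.contains(desc.getName())) {
                throw new InvalidClassException(desc.getName(), "Class not on allow-list");
            }
            return super.resolveClass(desc);
        }
    }

On Java 9 and later, the built-in java.io.ObjectInputFilter, installed via ObjectInputStream.setObjectInputFilter, achieves the same effect declaratively without subclassing.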

Developers should be aware, however, that US CERT (the United States Computer Emergency Readiness Team) has just issued a warning about a previously unknown vulnerability in ASLR. It recommends applying a temporary fix until a patch is released.

To find out whether your application is susceptible to an insecure deserialization attack and to deploy effective mitigation solutions, schedule a Kiuwan Code Security demo today. At Kiuwan, protecting your company from an Equifax-type breach is our first priority.

Known Security Vulnerabilities are those gaps in security that have been identified, either by the developer/vendor of the products used, by the user/developer, or by the hacker/intruder. To exploit known security vulnerabilities, hackers identify a weak component in the system by scanning the system using automated tools (more common because these hacking tools are available online) or by analyzing the components manually (less common, because it takes more advanced skills).

Almost all applications have at least some vulnerabilities, due to weaknesses in the components or libraries the application depends on. Sometimes vulnerabilities are deliberate. Vendors have been known to leave "backdoor" vulnerabilities in their systems so that they can access the system remotely once it's deployed. Most vulnerabilities, however, are unintentional. These are due to security gaps that are inherent in the design of the product.

Most vendors address vulnerabilities as they identify them, either with product updates and patches or by releasing a new version of the product. Keeping components and libraries updated with the latest patches, and upgrading as soon as the newest version becomes available, significantly reduces the number of known vulnerabilities that put the application at risk. However, it's becoming more common for developers to be unaware of all the components their applications are actually using, which makes it impossible to address every vulnerability.
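
Tooling can help close that visibility gap. For teams building with Maven, for example, running mvn dependency:tree enumerates every direct and transitive dependency in the build, and the OWASP Dependency-Check plugin can compare that inventory against public vulnerability databases such as the NVD.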

It actually should be relatively easy to determine whether you're using any components or libraries with known vulnerabilities, but proprietors and open source communities don't always specify precisely which versions of their products are vulnerable, at least not in a format that's standardized and easy to search. Causing additional difficulties, not every library uses a version numbering system that's easily understandable. Plus, vulnerability reporting to the vendor isn't always centralized, meaning it isn't collected and assembled into an easily searchable resource pool.

To assess your vulnerabilities, you have to search all of the various databases related to your components and stay on top of the project mailing lists and announcements the company or community issues regularly. When you identify a known vulnerability in one of your components, you'll need to check whether your code actually uses the vulnerable part of the component and determine whether the flaw actually impacts your application.

Another way to protect your applications against components with known vulnerabilities is not to use any components except those you write yourself. However, that's not an option in most production development environments; there's rarely enough time or money to develop every component from scratch for each new application. Vendors usually correct known vulnerabilities in new versions rather than releasing patches for older versions of the component, so upgrading to new versions of all components as quickly as possible is the best way to stay ahead of these known vulnerabilities.

When developing a new web application, you should establish a means to:

• Identify all the components or libraries the application uses (don't forget those dependencies) and the versions.

• Monitor known security vulnerabilities in any published databases, project newsletters and mailing lists, etc., and keep components upgraded to the latest versions available.

• Establish security policies and best practices for component use, including what licenses are acceptable.

• Add your own security wrappers to components whenever possible, disabling any functionality your application doesn't require and any unnecessary aspects of the component, especially those involving known security vulnerabilities (a sketch of such a wrapper follows this list).
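
As one hypothetical illustration of such a wrapper (the choice of an XML parser as the wrapped component is ours, not drawn from the examples above), an application can route all XML parsing through one hardened factory so that DTDs and external entities stay disabled everywhere:

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;

    // Illustrative security wrapper: callers ask this class for a parser instead of
    // configuring DocumentBuilderFactory themselves, so risky features stay off.
    public final class HardenedXmlParsers {

        private HardenedXmlParsers() { }

        public static DocumentBuilder newDocumentBuilder() throws ParserConfigurationException {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            // Forbid DOCTYPE declarations entirely, which shuts off DTD-based attacks.
            factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            factory.setXIncludeAware(false);
            factory.setExpandEntityReferences(false);
            return factory.newDocumentBuilder();
        }
    }

Centralizing the configuration in one wrapper means a future vulnerability in a feature you have disabled never reaches your code paths, and there is a single place to tighten settings when new advisories appear.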

The deeper a component is buried in the application, the harder it is for hackers to exploit. So consider how deep the component works within the application when determining how much time and effort to invest in closing the known vulnerabilities.

Unfortunately, hackers gain access to the same public databases and mailing lists that legitimate customers do. Additionally, hackers assemble and share lists of known vulnerabilities online, and use these lists to develop hacking tools that break into components and applications even if the attacker doesn't have any personal knowledge or savvy with that particular component. So, legitimate developers need to become even more vigilant and savvy about protecting their applications than hackers are at attacking them.

Here are a couple of examples of how hackers use known component security vulnerabilities to cause trouble in your apps:

• The Apache CXF authentication bypass vulnerability: attackers simply omitted the identity token, which allowed them to invoke any web service with full permissions.

• The Spring remote code execution vulnerability: attackers misused the Expression Language, giving them the ability to execute completely arbitrary code and essentially take over the server.

It also behooves the developer to choose components from proprietors and open source communities that take these vulnerabilities seriously. If your vendors consistently fail to identify and correct known vulnerabilities in their components or libraries, it's probably time to move on to a vendor that does value the integrity and security of your web applications.

Insufficient logging and monitoring of computer systems, applications, and networks opens multiple gateways to probes and breaches that can be difficult or impossible to identify and resolve without a viable audit trail. A typical log architecture generates both security and operational logs, then analyzes, stores, and monitors them. This matters not only for dealing with the threats that result from insufficient logging and monitoring, but for regulatory compliance as well. With today's volume of application, server, and network communication, it is imperative to maintain best practices pertinent to your industry and your organization. Breaches often go undetected for months, around 200 days in some cases, and can cost enterprises millions of dollars.

Security logs are different from operational logs. Operational (operating system) logs include routine system events such as logins and shutdowns on workstations, servers, and networks. Security logs, by contrast, come from the associated security software: firewalls, routers, and host- or network-based security devices and services. Operational logs are indispensable for homing in on questionable activity that may lead to an attack on your most sensitive and mission-critical data, while security logs are often deemed supplementary, depending on what a particular company requires for an investigation into a vulnerability or threat. Either way, both types of logs are invaluable for identifying and resolving insufficient logging and monitoring vulnerabilities.

Basic vulnerabilities include:

• unlogged events, such as failed login attempts

• locally stored logs without cloud backup

• misconfigurations in firewalls and routing systems

• alerts and subsequent responses that aren't handled effectively

• malicious activity alerts not detected in real time

Penetration testers and auditors should test and validate computer systems, applications, related servers, and networks periodically. Deficiencies are vulnerabilities that need to be addressed as soon as they are identified.

You do not want to stop logging if you find yourself vulnerable. Insufficient logging and monitoring vulnerabilities sharply raise the likelihood of a breach, but abruptly stopping the logging may alert savvy attackers that your sensitive and mission-critical data is exposed and easily exploited. Adhere to your company's best practices if your company has established them.

Prevention is key. Not all logs are clear or readable, so they should be monitored and validated at a pace commensurate with their business impact.

If you do not mandate that your company collect, store, maintain, and monitor logging activity, you are opening yourself to costly security breaches that you cannot easily track. Audit trails give your IT staff the tools to identify, prioritize, investigate, and resolve exploitation.
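
As a minimal sketch of what collecting that activity can look like at the application level, the example below uses the JDK's java.util.logging to record a failed login with enough context for an audit trail. The class and field names are illustrative, not drawn from any particular product.

    import java.util.logging.Logger;

    public class AuthAuditLogger {

        private static final Logger SECURITY_LOG = Logger.getLogger("security.audit");

        // Record who attempted the login and where it came from; the log handler
        // adds the timestamp. Never log the password itself.
        public static void logFailedLogin(String username, String sourceIp) {
            SECURITY_LOG.warning(String.format(
                    "Failed login attempt: user=%s sourceIp=%s", username, sourceIp));
        }
    }

Routing events like this to a dedicated logger name makes it easy to ship them to centralized, tamper-resistant storage separately from routine operational output.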

Most U.S. states and territories have enacted security breach notification laws, so it is in the best interests of all private and government entities to comply with all national and state regulations. Central to compliance is logging and monitoring every event that involves your company. Here are a few regulations you may be required to comply with:

• The Federal Information Security Management Act (FISMA) for federal agency systems

• The Gramm-Leach-Bliley Act for financial institutions

• The Sarbanes-Oxley Act of 2002 for financial and accounting purposes

• The Health Insurance Portability and Accountability Act of 1996 (HIPAA) for healthcare

• Requirement 10 of the Payment Card Industry Data Security Standard (PCI DSS) for organizations storing, processing, and transmitting credit card data

The Control Objectives for Information and Related Technology (COBIT) has more information to guide you regarding best practices and increasing regulatory demands.

You can further minimize insufficient logging and monitoring exploitation by establishing feasible policies and procedures that mandate consistency and compliance throughout your company's infrastructure.

Doing so will:

• reduce breach risk by setting up clear audit trails

• help establish log and compliance goals

• set up internal standardization for procedures, including who manages those procedures

• protect sensitive and mission-critical data

Whether your organizational structure calls for a single management system for a small business or spans several enterprise structures that demand interoperability for regulatory compliance, it is imperative to train those involved, including developers, and to retrain often as new regulations emerge or security breaches mandate change. Something as seemingly innocuous as realigning workstations with a new server merits retraining personnel.

You don't want to engage in damage control. The levels of log management can be overwhelming to sift through, but there are many passive commercial and open source log management frameworks available that can automate the monitoring process. Penetration testers and auditors tend to be resource- and cost-intensive, but they are sometimes necessary.

There has to be a new mindset. Simply keeping attackers out of company business is passé and dangerous, whether you are a small business or a major corporation. It's not enough to find out who attacked you and eradicate the threat. Today you have to know why your company is vulnerable and how to prevent probing eyes and attacks before they start. That mindset will not only keep your company viable, but also keep it compliant with ever-changing regulations that mandate an audit trail.

You can't resolve a breach if you don't detect it. By defining (and verifying) the roles and responsibilities of those involved in log management at both the systemic and operational levels, companies can mitigate breaches and have audit trails ready to resolve them.