Kendal, Simon (2012) Selected Computing Research Papers, Volume 1, June 2012. University of Sunderland, Sunderland.

Selected Computing Research Papers

Volume 1

June 2012

Dr. S. Kendal (editor)

Published by the University of Sunderland

The publisher endeavours to ensure that all its materials are free from bias or discrimination on grounds of religious or political belief, gender, race or physical ability.

This material is copyright of the University of Sunderland and infringement of copyright laws will result in legal proceedings.

© University of Sunderland 2012

Authors of papers enclosed here are required to acknowledge all copyright material but if any have been inadvertently overlooked, the University of Sunderland Press will be pleased to make the necessary arrangements at the first opportunity.

Edited, typeset and printed by Dr. S. Kendal, University of Sunderland, David Goldman Informatics Centre, St Peters Campus, Sunderland SR6 0DD

Tel: +44 191 515 2756 Fax: +44 191 515 2781

Contents Page

An Evaluation of Anti-phishing Solutions (Arinze Bona Umeaku) ...... 1

A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems (Daniel Brown) ...... 7

An Evaluation of Current Intrusion Detection Systems Research (Gavin Alexander Burns) ...... 13

An Analysis of Current Research on Quantum Key Distribution (Mark Lorraine) ...... 19

A Critical Review of Current Distributed Denial of Service Prevention Methodologies (Paul Mains) ...... 29

An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications (Niall Marsh) ...... 39

An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games (Bradley Peacock) ...... 45

An Empirical Study of Security Techniques Used In Online Banking (Rajinder D G Singh) ...... 51

A Critical Study on Proposed Firewall Implementation Methods in Modern Networks (Loghin Tivig) ...... 57

An Evaluation of Anti-phishing Solutions

Arinze Bona-Umeaku

Abstract The threat of phishing has driven the development of the anti-phishing solutions currently available. Most of these solutions are automated web-based tools that warn users about phishing sites and protect their details from third-party interception when they access a genuine website. In this paper we closely evaluate the performance of a web content system and a password protection system, using documented evidence to summarise findings and offer a proposed solution.

1 Introduction

One of the major internet threats is phishing. Phishing has turned out to be a major problem in the advancement of e-commerce, bringing doubt to the minds of customers about the use of the internet as a business tool. It has been noticed that phishing attacks progressively increase year after year (Chen et al. 2011); this has also been seen in more recent attacks against Sony's PlayStation Network and Epsilon. This has resulted in several businesses and organisations investing substantial amounts of research to find a lasting solution (APWG, 2011). For example, companies like Cyveillance are dedicated to providing anti-phishing solutions to organisations, enabling them to prevent, detect and recover from phishing attacks. Some software vendors offer toolbars that help users identify genuine sites (Zhang et al. 2007).

In this paper we evaluate methods that have been proposed through a series of research papers and how they have performed in the anti-phishing crusade. Among other methods mentioned here, we take a close look at two systems: a content-based automated web system and an authentication system that avoids a preset password. From the evaluations made, based on tangible evidence, implementation and tests carried out in previous research papers, a practical solution is then proposed.

2 Web Content Systems

2.1 Anti-phishing Toolbars

With respect to this paper, where we evaluate automated phishing solutions, research has been done in the area of automated anti-phishing detection, for example SpoofGuard (Zhang et al. 2007), which checks a website for certain characteristics such as the host name, traceable spoofing techniques, and comparison against previously marked images. Various software vendors have developed anti-phishing toolbars that identify a page as a phishing site or a legitimate site using blacklisting (lists of fake websites), whitelisting (lists of genuine websites) and user ratings (Cao et al. 2008). Some examples are outlined below. Google makes available the source code for its Safe Browsing feature and claims that it cross-checks URLs against a blacklist. eBay also developed a toolbar that uses both heuristics and blacklists to identify fraudulent URLs; it was also developed in a way where users can report newly identified fraudulent sites. In Internet Explorer, Microsoft also embedded a phishing filter which uses heuristics as well as depending largely on a blacklist hosted by Microsoft, with an option for users to report suspected sites. The Netcraft toolbar was built to use a log of blacklisted websites hosted by Netcraft and a list of sites identified by users but confirmed by Netcraft.
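As a rough illustration of the list-based checks these toolbars perform, the sketch below (hypothetical Python with made-up example hosts, not code from any of the cited toolbars) consults a whitelist first, then a blacklist, and otherwise falls back to heuristics:

    from urllib.parse import urlparse

    # Hypothetical example lists; real toolbars download and update these continuously.
    WHITELIST = {"www.example-bank.com", "signin.ebay.com"}
    BLACKLIST = {"example-bank.login-verify.example"}

    def classify(url):
        host = urlparse(url).netloc.lower()
        if host in WHITELIST:
            return "legitimate"
        if host in BLACKLIST:
            return "phishing"
        return "unknown - fall back to heuristic checks"

    print(classify("http://example-bank.login-verify.example/login"))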


Having evaluated these toolbars using publicly available information provided by the toolbar download websites (APWG, 2011), it was discovered that the majority of the toolbars mentioned above used blacklists to identify fraudulent sites, but not all were able to accurately identify phishing sites (Reddy, 2011). This could be a result of the size of the blacklist used by each toolbar: a list with more tagged and user-updated sites will perform better than one with a shorter list of phishing sites. The heuristics used could also have detected other sites that had not yet been put on the blacklist. SpoofGuard was the only toolbar that did not use a blacklist; it used heuristics instead, but it still missed some phishing sites and had a high rate of false positives (genuine sites mistaken for fraudulent ones), which could probably be improved by the addition of a whitelist (a list of genuine sites).

2.2 Automated Webpage Detector

The solution presented by He et al (2011) is a heuristic solution that determines whether a website is a genuine or a phishing page, based on its content, HTTP transaction and search engine results. According to He et al (2011), this solution identifies phishing sites without using the blacklist technique. This is done by converting a web page into 12 features which are selected based on existing normal and phishing pages. This method claims to help users identify a phishing website before it is blacklisted or shut down, as is the case with the blacklist-based anti-phishing toolbars mentioned above. The solution is also claimed to extract a webpage identity using the well-known tf-idf (term frequency / inverse document frequency) method, and this identity is the basis for determining whether a page is phishing or legitimate using an SVM (support vector machine) classifier.

Two different experiments were conducted using two different testing data sets to evaluate the method. Two metrics are used to evaluate the performance: the true positive (TP) rate and the false positive (FP) rate. The true positive rate measures the percentage of phishing sites which are correctly labelled as phishing and is computed as shown in Figure 1 (He et al. 2011).

Figure 1: TP rate = (number of phishing pages correctly labelled as phishing) / (total number of phishing pages)

The higher the TP value, the better the detector. The false positive rate measures the percentage of legitimate sites which are falsely labelled as phishing and is computed as shown in Figure 2 (He et al. 2011).

Figure 2: FP rate = (number of legitimate pages wrongly labelled as phishing) / (total number of legitimate pages)

This means that the lower the FP value, the better the detector.

Experiment 1

100 login pages of top targeted legitimate websites (He et al. 2011) were collected, along with another 100 webpages consisting of 35 homepages of top targeted websites, 35 top pages, and 30 random pages from a list of 500 pages used in 3Sharp's phishing study (He et al. 2011). From the 200 legitimate and 325 phishing pages, 50 legitimate and 50 phishing pages were used as the training set. The 50 legitimate pages consist of 10 login pages of top targeted websites, 15 homepages of top Alexa websites, 10 homepages of top targeted websites, and 15 pages from the 3Sharp list. The classification result is as follows: the true positive rate of the method is 97.33% and the false positive rate is low at 1.45%, although most of the phishing pages used here had already been shut down at the time of the test. The performance on the first dataset is shown graphically in Figure 3.

Figure 3: Performance on the first dataset (He et al. 2011)
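The following is a minimal sketch, not He et al.'s implementation, of the kind of tf-idf plus SVM pipeline described above, together with the TP and FP rates defined in Figures 1 and 2. It assumes the scikit-learn library, and the tiny page texts stand in for the real training and test sets:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import SVC

    # Toy pages standing in for the 50 legitimate / 50 phishing training pages.
    train_pages = [
        "welcome to example bank please log in to online banking",   # legitimate
        "university library catalogue and student portal search",    # legitimate
        "verify your account immediately or it will be suspended",   # phishing
        "you have won a prize confirm your card number to claim",    # phishing
    ]
    train_labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

    vectorizer = TfidfVectorizer()
    classifier = SVC(kernel="linear").fit(vectorizer.fit_transform(train_pages), train_labels)

    # Toy pages standing in for the 200 legitimate / 325 phishing test pages.
    test_pages = ["log in to example bank online banking",
                  "confirm your password now to avoid account suspension"]
    test_labels = [0, 1]
    predicted = classifier.predict(vectorizer.transform(test_pages))

    # TP rate: phishing pages correctly labelled phishing / all phishing pages (Figure 1).
    # FP rate: legitimate pages wrongly labelled phishing / all legitimate pages (Figure 2).
    tp_rate = sum(p == 1 and t == 1 for p, t in zip(predicted, test_labels)) / test_labels.count(1)
    fp_rate = sum(p == 1 and t == 0 for p, t in zip(predicted, test_labels)) / test_labels.count(0)
    print(f"TP rate = {tp_rate:.2%}, FP rate = {fp_rate:.2%}")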


Experiment 2

Using the same training set as in the first experiment, the proposed detector was tested with a different set. 100 legitimate webpages were collected along with another 100 live phishing pages from PhishTank. The legitimate webpages were also changed to evaluate the stability of the proposed detector (He et al. 2011). Figures 4 and 5 give a graphical representation of the results compared with other methods such as CANTINA (Zhang et al. 2007).

Figure 4: True positive comparison (He et al. 2011)

Figure 5: False positive comparison (He et al. 2011)

Having evaluated the webpage detector (method 1) used to detect phishing sites, it was discovered that it has some limitations. The tf-idf measure does not work with languages other than English, so an Asian phishing site cannot be identified; the reported results were obtained from English data sets only. In the case of a false positive (a legitimate web page that is detected as fake), a user will be misinformed. Another issue with this method is that it could suffer from performance problems due to the time lag involved in querying the database, and no experiments were presented to measure timing.

3 Authentication System

3.1 One-Time Password

Here we evaluate in detail the solution proposed by Huang et al (2011). They introduce a solution that eliminates the possibility of a user having a preset password, by leveraging existing infrastructure on the internet, specifically the instant messaging service. The study also shows that the solution can integrate uniformly with the OpenID service, which will enable websites that support OpenID to use the solution. Huang et al (2011) believe that a content-based approach alone is limited in solving phishing attacks on websites: "List-based and heuristic-based methods cannot detect all phishing sites" (Huang et al 2011, pp.1293).

They go on to claim that heuristic-based mechanisms use only one criterion to assess web sites, which limits the ability of list-based methods to identify and prevent all phishing sites. They therefore claim that it is possible to manage password phishing attacks by delivering a one-time password (OTP) via a secondary real-time channel, and that this method resists IP spoofing and allows safe authentication in an unfamiliar environment. In this solution a user is given a new one-time password every time the user wants to access the web service. It was designed such that a trusted secondary channel is needed to communicate the password, in this case the instant messaging service.

The one-time password (OTP) is communicated through a trusted secondary communication channel. There is a user database on the server side, and this is matched with the user's identity on the secondary channel. Once person A wants to access a web service, an OTP is sent by the server to person A through the secondary channel. Person A is expected to use the password within a period of time before it expires. One of the things the researchers claim is unique about this solution is that it provides three levels of security protection: an attack can occur only if the phisher knows I) the account name; II) the secondary communication line where the user receives the OTP; and III) the password for the secondary channel service. Huang et al (2011) decided that the instant messaging service was the best communication channel for the solution, mainly because it is cost effective: an IM client can be downloaded and installed for free or is pre-installed on most operating systems, e.g. Microsoft's Messenger (Microsoft Corporation). Generally the website will need to set up an identity management database and run an IM bot program to send the one-time passwords.

According to the researchers, for this solution to be implemented there are parties involved and assumptions used. For example, it is assumed that the HTTP protocol for the IM is secure; this assumption is believed to be reasonable (Huang et al, 2011) because a lot of commercial websites use the SSL protocol to encrypt HTTP. Another assumption is that the website has incorporated an instant messaging service, and the user will have to install an IM client supported by the website. The parties involved are websites, users, phishers and instant messaging (IM) service providers. There are two processes involved in this solution.

Registration Process

The registration is the same as any other online registration process, except for two added steps needed for the user to effectively use the solution (as seen in Figure 7). A series of exchanges takes place with the website: A) the website presents a page containing the IM account of the website and a string t, which is a session token, with a field where the OTP will be entered; B) if the account is new, confirmation to add it to the database is obtained; C) when the IM account is verified, a message is sent to the user through the IM service containing the session token t and a one-time password p; D) finally, the user confirms that the IM account and token are the same as on the previous page.

Figure 6: The user registration process; the areas marked in a red rectangle are the extra steps required for this solution.

Figure 7: The instant messaging registration process


Login Process

As indicated in Figure 8, the login process comprises five steps, A-E. In addition to the OTP field requested by the website, the page also contains a randomly generated session token t and the IP address used by the user. Once the account name is valid, the website checks the user's IM account and sends an authentication message to that account. This message contains the previously generated session token t as well as the generated OTP p; Figure 9 shows a sample authentication message. When the authentication message has been received, the user can check its validity by checking whether the token t received by the IM client and the one shown on the OTP input page are the same. If the user confirms the authentication message, the user supplies the OTP on the OTP input page. When this is done, the website has to confirm that the user has submitted the OTP from exactly the same IP address from which the login request was made; the website then authenticates the user using the OTP entered, i.e. if the user enters p' in the password field of the input page, the login will be successful if p' is the same as the OTP generated by the website. During the registration and login processes, the life span of the password is limited. This means that if a user inputs a wrong OTP more than a certain number of times n, or a given OTP has not been used within a period of time m, the website will invalidate the current password and stop the process. In this case n has to be a very small number and m a short period of time, for example n=3 and m=30s. Additionally, each OTP comprises a variety of characters, including upper/lower case letters and numerals, to reduce the probability of guessing.
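To make the token, expiry and retry limits concrete, here is a minimal server-side sketch in Python (an illustration under the assumptions above, not Huang et al.'s implementation); the 8-character OTP length is an assumed value:

    import secrets
    import string
    import time

    OTP_LIFETIME_SECONDS = 30   # m in the example above
    MAX_WRONG_ATTEMPTS = 3      # n in the example above
    ALPHABET = string.ascii_letters + string.digits  # mixed-case letters and numerals

    pending = {}  # session token t -> record of the OTP issued for it

    def issue_otp(token, client_ip):
        """Generate an OTP p for session token t; the scheme sends it over the IM channel."""
        otp = "".join(secrets.choice(ALPHABET) for _ in range(8))
        pending[token] = {"otp": otp, "ip": client_ip, "issued": time.time(), "wrong": 0}
        return otp

    def verify_otp(token, submitted, client_ip):
        record = pending.get(token)
        if record is None:
            return False
        # Invalidate if the OTP has expired or the submission comes from a different IP.
        if time.time() - record["issued"] > OTP_LIFETIME_SECONDS or client_ip != record["ip"]:
            del pending[token]
            return False
        if submitted != record["otp"]:
            record["wrong"] += 1
            if record["wrong"] >= MAX_WRONG_ATTEMPTS:
                del pending[token]
            return False
        del pending[token]  # one-time: a used OTP is discarded
        return True
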
Figure 8: User login process

Figure 9: Example of authentication message

Having evaluated the One-Time Password method (method 2), we find that the OTP solution has a good mutual authentication feature, where the server authenticates the user and the user authenticates the server. The OTP solution's strength lies in the hidden relationships between three components and the fact that it is difficult (considering all the assumptions made (Huang et al, 2011)) to compromise those components, which are the user's account, the website's IM account and the user's instant messaging account. However, the claim made by Huang et al (2011) that the OTP solution can only be compromised if the attacker knows the components that are associated with each other and targets them accordingly has no concrete evidence of implementation in the work done to prove it. The test method does not necessarily show that the method will always work as claimed; also, no substantial experiments were presented that consider the fact that the proposed solution can be compromised by an attacker who is able to acquire the user's account name from a third party, log into the website, and successfully make the user submit their OTP to where it is believed to be legitimate.

4 Conclusions

In this paper we have carefully evaluated a web content based and a one-time password anti-phishing solution. Although it may be impossible to completely wipe out all phishing threats, the purpose of these solutions is to reduce the amount of phishing attacks online, considering that the two major ways through which phishing attackers invade users are gaining illegal access to user accounts and deceptive website links. It would be recommended that using the automated webpage detector (method 1) as the major protection tool against phishing would be an effective way to combat phishing attacks among businesses and social network sites.

Based on the experiments carried out on the webpage detector, showing an average of 4-6% false positives, which means that 1 out of every 10 phishing sites will be detected by the webpage detector, method 1 compares well against the previous research done by Zhang et al (2007) on a phishing toolbar using the blacklist method, which showed a false positive rate of 15-20%. However, the one-time password solution (method 2) complements method 1. If an attacker eventually succeeds in tricking a user into using a fake website, the authentication process will identify the phishing attack, considering that the user will need to be assigned a one-time password after authentication confirmations are made from both ends. This method also reduces the cost of deployment to almost zero, since instant messaging bots need to be installed on the server side only.

References

Anti-phishing Working Group (APWG), 2011. APWG Phishing Trends Report. [Online] Available at: [Accessed 15 October 2011].

Cao Y., Han W., Le Y., 2008, 'Anti-phishing based on automated individual white-list', Proceedings of the 4th ACM workshop on digital identity management, [e-journal], pages 51, ISBN: 9781605582948. Available through: ACM Digital Library [Accessed 10 October 2011].

Chen X., Bose I., Leung A. C. M., Guo C., 2011. 'Assessing the severity of phishing attacks: A hybrid data mining approach', Decision Support Systems, [e-journal], Vol. 50, pages 662-672, ISSN: 01679236. Available through: Science Direct database [Accessed 11 October 2011].

Cranor L., Egelman S., Hong J., Zhang Y., 2006, 'Phinding Phish: An Evaluation of Anti-Phishing Toolbars', CyLab, Carnegie Mellon University, Pittsburgh, [e-journal]. Available through: CyLab Carnegie Mellon University digital library [Accessed 13 October 2011].

Gouda M. G., Liu A. X., Leung L. M., Alam M. A., 2007, 'An anti-phishing single password protocol', Computer Networks, [e-journal], Vol. 51, pages 3715-3726. Available through: Science Direct database [Accessed 2 November 2011].

He M., Horng S., Fan P., Khan M. K., Run R., Lai J., Chen R., Sutanto A., 2011, 'An efficient phishing webpage detector', Expert Systems with Applications, Vol. 38, Issue 10, pages 12018-12027, ISSN: 0957-4174.

Huang C., Ma S., Chen K., 2011, 'Using one time password to prevent one time phishing attacks', Journal of Network and Computer Applications, [e-journal], Vol. 34, pages 1292-1301. Available through: ACM Digital Library [Accessed 11 October 2011].

Kumaraguru P., Sheng S., Acquisti A., Cranor L. F., Hong J., May 2010, 'Teaching Johnny not to fall for Phish', ACM Transactions on Internet Technology, [e-journal], Vol. 10, Issue 2, Article 7, ISSN: 1533-5399. Available through: ACM Digital Library [Accessed 11 October 2011].

Rao H. R., 2011. 'Why do people get phished? Testing individual differences in phishing vulnerability within an integrated, information processing model', Decision Support Systems, [e-journal], Vol. 51, Issue 3, pages 576-586. Available through: ACM Digital Library [Accessed 12 October 2011].

Ye Z. (Eileen), Smith S., Anthony D., 2005, 'Trusted paths for Browsers', ACM Transactions on Information and System Security (TISSEC), Vol. 8, Issue 2, pages 153-186.

Zhang Y., Hong J., Cranor L., 2007. 'CANTINA: A Content-Based Approach to Detecting Phishing Web Sites', Proceedings of the 16th International Conference on World Wide Web, [e-journal], pages 638-648, ISBN: 978-1-59593-654-7. Available through: ACM Digital Library [Accessed 11 October 2011].


A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems

Daniel Brown

Abstract One of the biggest problems with all online systems in this day and age is security. Many security measures are implemented across various systems, and over time we find that as technology advances, secure systems soon become exploited. In this paper the use of biometrics within online security is analysed, looking at iris, fingerprint and facial recognition. From this, the efficiency and reliability of biometrics as an authentication method for online systems is evaluated. The main vulnerabilities of biometric systems are the accuracy of the method and the way in which data is transmitted from biometric devices to a secure authentication server. This paper looks at factors which could potentially affect the interpretation of key features by biometric devices and assesses the overall security of biometric systems.

1 Introduction

For some time now, the technology industry has been searching for a method of security that will be impossible to compromise, and with the integration of biometric systems into security we seem to have a potential solution. Already we have begun to see biometric systems in airports, where your passport is scanned, vital measurements are taken such as the distance between several distinct facial features, and iris scanning takes place to verify the identity of the person standing before the system. It would seem impossible to deceive such systems; however, we cannot rule out the possibility that these systems are not 100% accurate and, if they are not, what the chance is of a false rejection or even a false acceptance.

2 Biometric Systems

2.1 Fingerprint Recognition

Basic fingerprint security has been available for some time now. When it first appeared on popular products such as laptops it was seen as a gimmick and experienced difficulties with recognition: users reported that they had set up their fingerprint recognition yet, when they attempted to unlock their system, it failed to recognise their fingerprint, and other users experienced the opposite, finding that other people could unlock their system. These systems have developed rapidly since then, and fingerprinting is now a highly accurate authentication method, as it is considered that no two fingerprints will ever be the same. Fingerprints are analysed based on certain patterns; there are three major patterns: loops (65%), whorls (30%) and arches (5%) (J.W. Osterburg et al. 1977). The ridge lines present in a fingerprint will not change from birth until death, therefore we can be sure that everyone has a unique fingerprint; however, we must determine every characteristic in each fingerprint to pinpoint what makes it unique to that person. These characteristics are named Galton characteristics after Sir Francis Galton, who studied the individual characteristics within fingerprints. There are ten types of ridge lines within fingerprints which help with the characterisation of a fingerprint, and these characteristics are analysed using a grid over the fingerprint to help break down each section.

In a study (J.W. Osterburg et al. 1977) the probability of confirming an identity using a partial fingerprint is calculated; the authors display a full table of probability values but do not draw conclusions from these findings in writing.

They explain that naturally there is a much lower chance of being able to obtain a match in the database if only a partial print is given. Their research does not clearly state whether their results found the probability of confirming an identity to be lower for a partial fingerprint in comparison to the full fingerprint, although it is suggested throughout the paper.

2.2 Iris Recognition

Iris recognition is the latest biometric technology to be implemented successfully in a full working environment; recently we have seen this technology added to airports as a passport control security feature. With the current level of security at such a high point throughout airports worldwide, it is fair to assume that a lot is resting on the shoulders of this particular system, and you would expect that vast amounts of research have been done to conclude that this technology is suitable for this purpose. Similar to fingerprints, a human iris has a number of distinct features that make up a very complex pattern; these features include nerve rings, fibre thinning, pigment spots and other features (Rankin et al. 2011). Various factors need to be considered with iris scanning to ensure that an accurate image of the iris is extracted; as the human eye reacts to certain lighting, this could potentially affect the image that is extracted.

Low lighting is recommended, as it is when there is a strain upon the eye that we will see the image change. There should also be a certain point that each person should look at, so that the device can obtain a consistent reading from the same angle each time; this also ensures that the most accurate reading can be taken each time. In a recent study (Rankin et al. 2011) it was found that the iris texture could be graded effectively in a manner in which the iris could be identified from finest to coarsest, ranging from grade one to six respectively. This research was done to investigate whether the iris recognition system would fail over time as natural effects take their toll on the human iris and alter the way in which such a system would interpret the iris. In their investigation, they sampled images from 119 adults by capturing images of the iris from both left and right eyes. The adults that were tested were between the ages of 19 and 65. Binary iris codes were generated from features that had been extracted from the original iris images, and the binary codes were then compared to see if they match. This calculation was done using "a quantified dissimilarity measure called the Hamming Distance (HD)" (Rankin et al. 2011). From the comparison of the Hamming Distance, you can tell whether the patterns belong to the same iris. They obtained results which showed an overall recognition failure rate of 21% after sampling at 3 and 6 months, where failure rates were 21.2% and 20.5% respectively. They conclude that, as they only measured their results over a relatively short period for such a test, they could only predict that over time rates of recognition failure will increase and that rates would vary depending on the value of the Hamming Distance. They discuss that if the Hamming Distance threshold is set too low then users would experience an increased rate of failures, whereas if it is set too high then there would be an increased chance of false positive results. At present, the only method of preventing failure caused by changes to the iris is to re-enrol the image of the iris into the system upon any changes; however, this method leaves the system vulnerable to security attacks.
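To illustrate the Hamming Distance comparison described above, the short sketch below (illustrative Python, not Rankin et al.'s implementation; the code length and the decision threshold are assumed values) counts the fraction of disagreeing bits between two binary iris codes:

    def hamming_distance(code_a, code_b):
        """Fraction of disagreeing bits between two equal-length binary iris codes."""
        if len(code_a) != len(code_b):
            raise ValueError("iris codes must be the same length")
        return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

    # Toy 16-bit codes standing in for real iris codes, which are far longer.
    enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    probe    = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1]

    # Assumed threshold: set too low it rejects genuine users, set too high it admits impostors.
    THRESHOLD = 0.32
    hd = hamming_distance(enrolled, probe)
    print(f"HD = {hd:.3f} ->", "same iris" if hd <= THRESHOLD else "different iris")
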
2.3 Facial Recognition

Facial recognition systems consist of soft and hard biometric systems, each serving very different purposes. A soft biometric system will analyse features that are not unique to one person; they are straightforward features that could apply to hundreds of thousands of people. Soft biometrics would not be used in a situation in which the security of a system should be strict, where personal details could be at risk, or where an identity can be imitated. The features that are interpreted in soft biometric systems would be gender, eye colour, hair colour, skin colour, height and weight (Gian Luca Marcialis et al. 2009). Obviously there is a good chance that several people throughout the world will share all of the listed characteristics, and therefore the system would be unable to identify one specific person. This means that the chance of unauthorised access is already above 0%, which would be the target for any secure system.


Despite the serious lack of real security within soft biometrics, hard biometric facial recognition is right at the top of current biometric security systems. As mentioned before with iris recognition systems being implemented within airports, facial recognition is also in place alongside the previously mentioned systems. With security measures being so high, using multiple biometric systems together will ensure that the chance of a false acceptance or rejection is reduced wherever possible. Facial recognition systems look for unique characteristics just as all other systems do: the system takes measurements from vital points in a person's facial structure and compares them to measurements that it is currently reading from either a photograph or a previous database entry.

3 Analysis of Current Research on Biometric Authentication Methods

A system must be found which is the most secure and efficient choice for online security. There will need to be a focus on what the probability of a mistake will be, because in this day and age there is no room for error. A breach in security where IT systems are involved is considered a substantial failure, because we have so much advanced technology available to us.

All of the biometric technologies seem to provide excellent solutions for online security. For example, fingerprints are unique to one person and they require no specific environment in order for the scan to take place, due to the person having to physically interact with the scanner. In a study (J.W. Osterburg et al. 1977) they discuss that the weakest fingerprint reading must consist of 12 ridge endings, and a partial fingerprint has a probability of the order of 10^-20 of having 12 ridge endings. Therefore the probability of a false rejection is increased, as the system would have less chance of matching the fingerprint. However, it is safe to say that if the fingerprint scanner is used correctly and a full fingerprint is taken then the room for error is very low.

Iris scanning is very similar in the way that a person's iris is unique to them and no one will ever have the same iris. This means that as long as the iris scan is interpreted by the system correctly then there will be no errors in identifying someone. In a recent study (Rankin et al. 2011) it is suggested that certain factors may affect the scanning of an iris, such as the environment and how lighting can cause a strain on the human eye, causing the iris to dilate or contract to different angles, which would provide a different reading under other lighting levels. However, the main concern was not with getting the person into the correct environment; it was the concern that the human iris can be subject to change over time due to a number of factors.

In their research they found that a major factor that can affect the iris is ageing, as well as medication, disease and surgery. The texture of the iris can cause a false reading if it changes from a previous reading; this is due to a change in the fibre pattern formation of the iris, which eventually causes a variety of failure rates.

Facial scanning can be used as either a soft or a hard biometric system; however, for the purpose of online security, it would appear that soft biometrics alone would not suffice. Soft biometrics may be useful to implement alongside another system as an extra measure of security; however, with this system not being as advanced as others, you must expect that the system could be easily deceived. A hard biometric system could be used as a primary method of authentication for online security, due to its improved accuracy and interpretation of several key features of the human face.

By measuring vital features of a person's face it is possible to use these measurements to narrow down an identity search; however, there may be more than one person with the same measurements, meaning that the reading would not be able to match to a unique record such as a fingerprint or iris. It would be a great improvement to any of the proposed soft biometric methods, but there would still be question marks over how unique each of the characteristics is. As with soft biometrics, it may be a system that would be best used alongside another system. There is another factor that needs to be considered, which is whether there have been any physical changes since the last scan: weight loss, surgery or disease could cause a change in the measurements which would be used to determine whether the image matched whoever was meant to be gaining access to the system.

4 Conclusions

Biometrics within online security is without doubt the next step forward, for many reasons: not only are there several biometric systems that could be implemented, but it is truly an authentication method that is unique to one person. After analysing all of the systems in this paper, it is clear that we can still encounter problems with a system that would seem to be highly secure. Passwords are the current leading security method; almost every website has the option to register, and for this a password is required. This means that either people use the same password for everything, which could lead to multiple accounts being compromised if that one password is revealed, or they have a separate password for each website, which could cause users to forget passwords. Passwords are easily compromised as they can be entered by anyone; obviously an attacker needs to obtain the password first, but there are methods of exploiting passwords, so there is definitely a call for a new method of authentication.

Fingerprint recognition is a system that has already been implemented throughout many systems; however, reports suggest that false positives and negatives can occur with the system. The problem, however, is not with the system that reads the fingerprint, as current research shows that it is highly accurate and comparison with database records is accurate and secure. The main problem seems to be with obtaining a full fingerprint scan when using the system; as research has shown, the probability of receiving a false reading is then higher. This is a problem that could have a simple solution, if there were a method of securing the finger on the reader to ensure a full reading every time. If this could be done then there is no doubt that fingerprint recognition could be the leading biometric system for online security: we know that everyone has a unique fingerprint pattern that will not change throughout their lifetime, and therefore you can guarantee that whoever is authenticating is actually the correct person.

Current research highlights what could be considered a major flaw in iris recognition systems, in the way that the human iris changes throughout a person's lifetime and can be directly affected by other factors over time. It was found that the texture of the iris can change naturally as well as through the effects of medication, so the system would fail to match the iris pattern with an image stored on a database. As mentioned, re-enrolling the iris into the database would currently be the only way in which the system would once again recognise that image, and obviously re-enrolling regularly could present security issues, as you would prefer never to have to re-enrol an image. It is mentioned in the current research (Rankin et al. 2011) that a more intelligent system which could deal with any physiological changes to the iris would be more accurate and would have less chance of being used for fraud. This statement is not suggesting that such a system is currently being developed, and from other research it appears that there are no current advances with this problem.

Facial recognition was discussed as there are hard and soft biometric systems for this specific method, both of which are suitable for online security but in different situations. Soft biometric systems are not reliable enough to be used as a primary authentication method, especially where sensitive information could be in question. Soft biometrics could be used as an extra layer of security alongside another method such as passwords: if a user enters the correct password, the system then checks, for example, whether the user has blonde hair, blue eyes and white skin. Hard biometrics could be used as a primary method of confirming an identity by confirming vital statistics of a user which would be unique to them. Obviously it is a possibility that someone else could share these features; however, it is highly unlikely that someone would share multiple features, and therefore you would have a unique profile.
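As a toy illustration of that layered use of soft biometrics (hypothetical Python, with an assumed attribute set and tolerance, not taken from the cited research), a profile check of this kind only ever narrows an identity down, it never confirms it:

    # A soft-biometric check used as a second layer after a password, not as a
    # primary authenticator: these attributes are shared by many people.
    enrolled = {"hair": "blonde", "eyes": "blue", "skin": "white", "height_cm": 178}

    def soft_biometric_match(observed, enrolled, height_tolerance_cm=3):
        if observed["hair"] != enrolled["hair"] or observed["eyes"] != enrolled["eyes"]:
            return False
        if observed["skin"] != enrolled["skin"]:
            return False
        return abs(observed["height_cm"] - enrolled["height_cm"]) <= height_tolerance_cm

    observed = {"hair": "blonde", "eyes": "blue", "skin": "white", "height_cm": 180}
    print(soft_biometric_match(observed, enrolled))  # True, but so would many other people
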
To conclude, the biometric systems available for use with online security all have problems, some more than others. More than anything, the situation must be considered: with online security we have to bear in mind that the system may be used in a variety of situations ranging from professional to home use.

Fingerprint recognition systems have a minor problem in that a full fingerprint is not always obtained by the reader; this is a human error caused by not placing the finger correctly onto the reader, in comparison to iris recognition systems, which have been shown to fail over time. The other major problem with facial and iris technology is that, in order to capture the quality of image needed to process iris and facial recognition, you would need a good quality system in place with the correct conditions. In terms of home use, this would not be a viable option in the majority of cases. Therefore fingerprint recognition is the best suited system for online security across all of the possible situations, as well as proving to have the least problems with accuracy.

References

D.M. Rankin, B.W. Scotney, P.J. Morrow, B.K. Pierscionek (2011). "Iris recognition failure over time: The effects of texture", Pattern Recognition, Volume 45, Issue 1, January 2012, Pages 145-150, ISSN 0031-3203.

Muhammad Khurram Khan, Jiashu Zhang, Khaled Alghathbar (2011). "Challenge-response-based biometric image scrambling for secure personal identification", Future Generation Computer Systems, Volume 27, Issue 4, April 2011, Pages 411-418, ISSN 0167-739X.

Peng Zhang, Jiankun Hu, Cai Li, Mohammed Bennamoun, Vijayakumar Bhagavatula (2011). "A pitfall in fingerprint bio-cryptographic key generation", Computers & Security, Volume 30, Issue 5, July 2011, Pages 311-319, ISSN 0167-4048.

Tassabehji, R.; Kamala, M.A. (2009). "Improving E-Banking Security with Biometrics: Modelling User Attitudes and Acceptance", New Technologies, Mobility and Security (NTMS), 2009 3rd International Conference on, pp. 1-6, 20-23 Dec. 2009.

Gian Luca Marcialis, Fabio Roli, Daniele Muntoni (2009). “Group-specific face verification using soft biometrics”, Journal of Visual Languages, Volume 20, Issue 2, April 2009, Pages 101-109, ISSN 1045-926X.

James W. Osterburg, T. Parthasarathy, T. E. S. Raghavan and Stanley L. Sclove (1977). “Development of a Mathematical Formula for the Calculation of Fingerprint Probabilities Based on Individual Characteristics”, Journal of the American Statistical Association, Vol. 72, No. 360 (Dec., 1977), pp. 772-778

L.H. Chan, S.H. Salleh and C. M. Ting (2010). “Face Biometrics Based on Principal Component Analysis and Linear Discriminant Analysis”. J. Comput. Sci., 6 :Pg 693-699.

Marcos Martinez-Diaz, Julian Fierrez, Javier Galbally, Javier Ortega-Garcia (2011). “An evaluation of indirect attacks and countermeasures in fingerprint verification systems”, Pattern Recognition Letters, Volume 32, Issue 12, 1 September 2011, Pages 1643- 1651, ISSN 0167-8655.



An Evaluation of Current Intrusion Detection Systems Research

Gavin Alexander Burns

Abstract This paper looks at a number of recent research papers on the subject of Intrusion Detection Systems. At present these systems monitor for known attacks, but current research is looking into ways to update them so that they can find unknown attacks as well. A number of papers on the subject were found, dating from 2006 to 2011, some of which are examined in more detail within this paper. The work examined covers testing on a university's network, testing datasets, and re-examination of past research to draw comparisons with other systems that appear to do what they set out to do. One Intrusion Detection System found anomalies on the university network; one of the research papers showed that no testing had been done. What was found is that there is a lot of diverse research on this technology, some of it really well carried out, but none that will solve the problem on its own. What did come to light was that more research and testing will be needed to produce a truly standalone system, but progress is being made.

1 Introduction

Intrusion Detection Systems (IDS) can be classed as another level of protection which can sit behind the firewall as a secondary form of protection. They are software applications that monitor the network looking for malicious activities and policy violations, and can also produce logs and reports for the network administrators. There are various forms of research going on at present. There is the construction of lightweight IDS which will detect anomalies in networks (Siva et al. 2011). There is also research into swarm intelligence, a new bio-inspired method that has taken its inspiration from the way swarms of bees and other insects work (Kolias et al. 2011). There is also Fang-Yie Leu et al. (2009), who are doing research into enhancing their previous grid-based intrusion detection system. Then there is the research by Xenakis, Panos et al. (2010), who are looking into the evaluation of intrusion detection architectures for mobile ad hoc networks. In this paper we will look at a number of different research papers and try to analyse their findings.

2 ULISSE: Anomaly detection with intelligence

Unsupervised Learning IDS with 2-Stage Engine (ULISSE) is the starting point; according to Zanero (2008), "It uses two tier architecture with unsupervised learning algorithms to perform network intrusion and anomaly detection. ULISSE uses a combination of clustering of packet payloads and correlation of anomalies in the packet stream." (Zanero, 2008) This method has come about because of our use of complex networked computer systems in our everyday life. This dependence has brought about the need to make the networks we use more secure, which is where it gets more complex. He also states that "The majority of network IDS systems deployed today are misuse-based" (Zanero, 2008), which in effect means the IDS just sits and waits until a known signature of an intrusion type appears. Unfortunately this makes them ineffective against new attacks and a great number of evasion techniques, which means these systems can give a lot of false negatives and possibly false positives.


However, using this misuse-based approach on critical networks where the number of threats is increasing is no longer the best solution, which is why ULISSE has been developed. ULISSE is a two-tier network-based anomaly IDS designed to analyse network packets without discarding their contents, using the two tiers first to reduce the packet payload to a more tractable size and then to apply a traditional anomaly detection algorithm (Zanero, 2008).

Similar approaches are being used by Tajbakhsh et al. (2009), who were looking into using fuzzy association rules for their method of intrusion detection, and Kamran et al. (2009), who were looking into an adaptive genetic-based learning system for detection systems. Both of their methods were using a two-tier system.

2.1 Why is it different?

At present there is a need to look at the packet flow in order to undertake packet-level network intelligence. The packet header data is dealt with easily, as it is of a uniform size and can be mapped to a time series, but problems occur when dealing with the payload data of the packets, as they are of varying size.

This problem is overcome in most of the existing research by using unsupervised learning algorithms for intrusion detection while dropping the payload data and only using the packet header information. This led to the development of ULISSE, as dropping the payload data loses information that is needed to detect most attacks, since they are only detectable by analysing this payload data.

ULISSE is proposed as a two-tier architecture solution that applies a clustering algorithm to the payload data, which allocates a single value to each payload; this is then added to information from the header and passed on to the second tier, which is an anomaly detection algorithm (Zanero, 2008).

2.2 Testing

According to Zanero (2008) the tests seem to have been carried out in a well organised and controlled way. To evaluate ULISSE in a repeatable manner, it was run over various days of traffic taken from the 1999 DARPA dataset, as well as adding attacks against the Apache web server and Samba service using Metasploit. The results compare well with, and indeed outperform, the results given for SmartSifter and PAYL, two other prototype intrusion detection systems, which used the KDD Cup 1999 dataset (derived from the 1999 DARPA dataset) for their testing.

2.3 Conclusion

Although the tests seem to be a success, the noticeable problem is that they are not direct comparison tests, as the same testing dataset was not used, even though it is stated that the KDD Cup 1999 dataset is taken from the 1999 DARPA dataset that was used. Also, it would be useful to investigate whether there are any newer evaluation datasets available which could give a more up-to-date picture of current network traffic, although this does give the opportunity for the team, or a new team, to continue the research in the future.
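To make the two-tier idea from section 2.1 concrete, here is a heavily simplified sketch (assuming scikit-learn and NumPy; it is not Zanero's algorithm, and the byte-histogram summary and second-tier features are stand-ins): payloads are first clustered, and the cluster label is then combined with header-style information and fed to an anomaly detector:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import IsolationForest

    def payload_histogram(payload: bytes) -> np.ndarray:
        """Byte-frequency histogram: a fixed-size summary of a variable-length payload."""
        counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
        return counts / max(len(payload), 1)

    # Toy traffic: mostly HTTP-like payloads plus one shellcode-looking outlier.
    payloads = [b"GET /index.html HTTP/1.1", b"GET /about.html HTTP/1.1",
                b"POST /login HTTP/1.1 user=alice", b"\x90\x90\x90\x90\xcc\xcc/bin/sh"]
    X = np.vstack([payload_histogram(p) for p in payloads])

    # First tier: cluster the payloads so each packet is reduced to a single cluster label.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    # Second tier: an anomaly detector over the cluster label plus simple header-style data.
    features = np.column_stack([kmeans.labels_, [len(p) for p in payloads]])
    detector = IsolationForest(random_state=0).fit(features)
    print(detector.predict(features))  # -1 marks packets flagged as anomalous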

3 HawkEye Solutions: A Network Intrusion Detection System

HawkEye Solutions is the name given to the development group's network intrusion detection system. It is a basic system that detects abnormal internet protocol (IP) packets, providing what they say are "the basic building blocks of an IDS that include mechanisms for carrying out TCP port scans, Traceroute scan, which in association with the ping scan can monitor network health." (Mukhopadhyay et al 2011)

It is described as a libpcap-based packet sniffer and logger made up of three primary subsystems: a packet decoder, a detection engine, and logging and alerting. Some of its features are rule-based logging, which allows it to detect a number of different types of attacks. It also includes real-time alerting, with alerts sent to syslog, popup messages, or an alert file, and its configuration is done through command line switches (Mukhopadhyay et al 2011).


3.1 How it works

The system is similar in looks to its peers but has a greater focus on the security of packet sniffing, with its most important feature being packet payload inspection. This is where the application layer of the packet is decoded and rules can be given to collect specific data within the traffic, which allows the system to detect many different types of unwanted activity. The other big advantage is that the output data is in a more user-focused format. HawkEye Solutions is also set to a working principle, which is an eleven-step guide on how the system is coded to work from the beginning of an event to its eventual conclusion (Mukhopadhyay et al. 2011).

3.2 Testing and Conclusion

There has been no definitive repeatable testing using a recognised dataset; all that seems to have been done is that they have set HawkEye off on different types of scans and then explained the outcome within the text of the paper. There is also what they claim to be a comparison of what type of features HawkEye has and can do against a number of other basic IDS systems, and the results that they give come out in favour of the HawkEye system. So, with the evidence that is presented within this paper, it is quite evident that a lot more testing is needed before the HawkEye Solutions system can be said to work.

4 Hybrid honeypot framework to improve intrusion detection

To some, a honeypot is intended as a way to make attackers believe they are somewhere within your system when in reality they are in a controlled part of the system; to others it is a form of technology for the detection of attacks or, in some instances, real computer systems that are designed to be hacked into so as to gain information about the hacker and how to stop them in the future. Honeypots are classed into two wide categories, production and research. Production honeypots are there to reduce an organisation's risk by keeping attackers away from the real system. Research honeypots are used for information collection. Honeypots are also classified by the level of interaction they allow with the attacker: the more interaction there is, the more information can be collected. Low-interaction honeypots, most of which are virtual operating systems and services, are relatively simple to deploy but are easy to detect, while high-interaction honeypots are developments of real systems. The outline for the work undertaken in this paper is to use low-interaction honeypots that emulate operating systems or services and pass malicious traffic on to high-interaction honeypots to analyse an intruder's activity; the results can then be used to take measures to protect the network. They are using honeypot technology to try to get around the inherent problems of the traditional intrusion detection systems, as honeypots can be used to lure hackers into a controlled environment to protect the real system resources and to use the time to analyse their activities and how to defend against them (Artail et al 2006).

4.1 Proposed hybrid honeypot

Artail et al (2006) propose an intrusion detection system with an adaptable low-interaction honeypot that can adjust to changes in the network. The honeypots are deployed on unused IP addresses on the network to help them blend in and look like the real operating systems and services around them. In most instances the traffic that is directed to the low-interaction honeypots will be passed on seamlessly to high-interaction honeypots. This combination has the advantage of requiring a minimum of administrative intervention, as the number of honeypots adjusts automatically around the state of the network and the available IP addresses. It also magnifies the high-interaction honeypots with the redirection of traffic from the low-interaction honeypots, which also helps make the low-interaction honeypots look like real systems to anyone trying to hack the system.

The system will need to use static IP addresses, and a relatively large number of free IP addresses are required, as the more free IP addresses are available, the higher the chance there will be of detecting any intruders.
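The hand-off described above can be sketched schematically as follows (hypothetical Python; the addresses, ports and trigger rule are assumptions, not Artail et al.'s implementation): connections arriving on unused addresses are answered by a low-interaction emulator and forwarded to a high-interaction honeypot only when they look malicious:

    # Schematic decision logic for the low- to high-interaction hand-off.
    UNUSED_ADDRESSES = {"10.0.5.20", "10.0.5.21", "10.0.5.22"}  # assumed free IPs
    HIGH_INTERACTION_HONEYPOT = "10.0.9.9"                      # assumed analysis host
    SUSPICIOUS_PORTS = {23, 135, 445, 4444}

    def handle_connection(dst_ip, dst_port, payload):
        if dst_ip not in UNUSED_ADDRESSES:
            return "pass through to the production network"
        # Low-interaction stage: log the attempt and answer with an emulated service.
        event = {"dst": dst_ip, "port": dst_port, "bytes": len(payload)}
        if dst_port in SUSPICIOUS_PORTS or b"/bin/sh" in payload:
            # Redirect to the high-interaction honeypot for deeper observation.
            return f"forward to {HIGH_INTERACTION_HONEYPOT}, logged {event}"
        return f"answer locally with emulated banner, logged {event}"

    print(handle_connection("10.0.5.21", 445, b"SMB negotiate request"))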


4.2 Testing and Results

The test of the hybrid system was carried out by integrating it into the network of the Faculty of Engineering and Architecture at the American University of Beirut, which had at least four hundred computers. Their initial scan of the network indicated that a significant number of honeypot virtual systems could be used. They performed two sets of tests affecting 75 computers, to get an understanding of the impact the scans can have on the network and the length of time they take.

The first test was an Nmap load test to measure the data collection performance of the system and what sort of effect it has on the network. The second set of tests used Nessus, an open source vulnerability tester, which subjects the system to simulated attacks as well as verifying its performance.

The tests lasted for over two weeks and came up with some interesting results: the system detected hacking activity through the logs generated by the low-interaction honeypots. The hacker used an IP address of a computer that was known not to be in use, implying that the address was spoofed. The low-interaction honeypot logs revealed a possible attack by showing that unused IPs were being scanned by a hacker using nmap, and the system helped detect the presence of the Nachi virus on one of the machines (Artail et al, 2006).

4.3 Conclusion

This research seems to be well thought out, and their testing looks to be of a very high standard. Although their main test will not be repeatable, as no known standard testing dataset or systems were used, they have carried everything out on a real university system and it did come up with some good results. They are also aware that the system needs more time spent on it to develop it further, and as such it cannot be discredited totally.

5 Conclusions

Currently there is no Intrusion Detection System that can fully protect network systems from unknown attacks, which means that at present it is a make-do situation: the system has to be kept up to date with the known attack signatures to make sure it is protected in the most efficient way. Unfortunately that means the network administrator may have to update the system on a daily basis, and even that will not be enough if an unknown attack signature appears, as it will just get straight through without causing any alarm or log entry.

The research which has been looked at is very interesting in that it is looking at different ways to allow the network intrusion systems that are being developed and adapted to catch the unknown attacks and intrusions that can occur before they have appeared on any update database or software update, because for them to appear on those someone has to detect them in the first place.

Generally the current research looked at was of a good standard, but unfortunately some of it, as in the case of HawkEye Solutions, does not have enough depth, as no known repeatable testing has been done. On a positive note for HawkEye Solutions, if extra research and development can be done and achieves the hoped-for results, then it will be a much needed improvement over some of the systems that it was compared against.

The research done into the ULISSE network detection system looks very promising, as their way of looking into the packets' contents rather than just discarding them can be seen as a great improvement: it means there will not be as many dropped packets, which can be used to detect attacks earlier. Also promising is the two-tier system with an anomaly detection algorithm, which will help with any new attacks that happen. The testing that was done was of a good standard with good recorded outcomes.

The research carried out on the honeypot framework was also very interesting, as this method will help to protect systems very well because the attacks can be controlled; they are not on the actual system. The testing for the honeypots was of a high standard, as it was done on an actual real-world system and not just a test bed. If their results can be replicated elsewhere on other systems it will prove very promising.

One thing stands out with all this research: there is still a need for more research to solve the problems within some of the work or to improve it even more.

One possible way forward could be to combine different research to see if two or more methods could reach the solution that is being looked for. A possible solution that has come about with this paper is the outcome of merging the ULISSE system with the honeypot framework, as it looks like they could be beneficial together: the honeypot could give ULISSE extra time to catch new attacks.

References

Artail H, Safa H, Sraj M, Kuwatly I, Al-Masri (2006) 'A hybrid honeypot framework for improving intrusion detection systems in protecting organizational networks.' Computers & Security 25, Pages 274-288.

Fang-Yie Leu, Chao-Tang Yang, Fuu-Cheng Jiang (2010) „Improving reliability of a Heterogeneous grid-based intrusion detection platform using levels of redundancies.‟ Future Generation Computer Systems, Volume 26, Issue 4, Pages 554-568

Kamran S, Hussein A A., (2009) „An adaptive genetic-based signature learning system for intrusion detection.‟ Expert Systems with Applications, Volume 36, Issue 10, Pages 12036-12043

Kolias C, Kambourakis G., Maragoudakis M., (2011) „Swarm intelligence in intrusion detection: A survey.‟ Computers & Security, Pages 1-18.

Mukhopadhyay I, Chakraborty M, Chakrabarti S., (2011) „HawkEye Solutions: A Network Intrusion Detection System.‟ International Conference and Workshop on Emerging Trends in Technology, Pages 252-257

Siva S. Sivatha S, Geetha.S, Kannan. A, (2012) „Decision tree based light weight intrusion detection using a wrapper approach.‟ Expert Systems with Applications, Volume 39, Issue 1, Pages 129-141

Tajbakhsh A, Rahmati M, Mirzaei A, (2009) 'Intrusion detection using fuzzy association rules.' Applied Soft Computing 9, Pages 462-469

Xenakis C, Panos C, Stavrakakis I (2010) 'A comparative evaluation of intrusion detection architectures for mobile ad hoc networks.' Computers & Security 30 (2011), Pages 63-80

Zanero S. (2008) 'ULISSE, a Network Intrusion Detection System.' CSIIRW '08 Cyber Security and Information Intelligence Research Workshop, Oak Ridge, USA, (May 2008)



An Analysis of Current Research on Quantum Key Distribution

Mark Lorraine

Abstract This paper is an analysis of current Quantum Key Distribution research, asking whether the protocol has a place in the future of security.

1 Introduction

1.1 The Problem with Quantum Key Distribution

Quantum Key Distribution (QKD) is one of the few aspects of quantum computing to be not only physically realised, but ultimately commercially available. Magiq Technologies, founded in 1999, sells quantum hardware from www.magiqtech.com, claiming that their commercial versions of QKD are unconditionally secure as of this paper being written. For some time, QKD had been regarded as being impenetrable. However, recent studies have shown that QKD has flaws as a security system.

This paper will be looking at the more recent papers on QKD and discussing how secure QKD really is and whether it has a long term place on the market.

1.2 What is Quantum Computing?

A quantum computer is a device which meets most, if not all, of the following criteria:
Be a scalable physical system with well-defined qubits
Be initializable to a simple fiducial state such as |000...>
Have much longer decoherence times
Have a universal set of quantum gates
Permit high quantum efficiency, qubit-specific measurements
(DiVincenzo, 2000)

A qubit is any item to which quantum physics applies which can have at least two different states. Usually, these are particles which are spun either on the x or y axis to emulate the 1s and 0s which are used in classical computing.

1.3 What is QKD?

QKD is a security protocol which combines the classical idea of key distribution and one-time pads with elements of quantum mechanics which are supposed to lend themselves to complete security.

Currently, QKD is being used very sparingly, but there is a major QKD network used for the most important governmental and financial data transfers, which require the utmost security. This is a massive responsibility for QKD, and if the total security of QKD starts to falter and turns out to be a facade, then it is only a matter of time before it falls out of favour or, worse still, is exploited.

2 An Analysis of QKD

2.1 A Brief Introduction to QKD

QKD is widely accepted to be unconditionally secure (Lo et al, 1999).


There are several QKD protocols, but the oldest and most developed is BB84, named after the two creators, Charles H. Bennett and Gilles Brassard, and the year of its creation.

In the networking scenario displayed in Fig 1., using photon polarisation, Alice chooses a random string of bits to send to Bob and then randomly selects one of two bases supported by the protocol. The rectilinear basis has horizontal and vertical photons representing 0 and 1 respectively. Similarly, the diagonal basis has 45-degree and 135-degree photons representing 0 and 1 respectively.

Figure 1. This shows a BB84 QKD network between two users, Alice and Bob, and Eve, who is an eavesdropper on the quantum channel. The quantum channel only transmits data in one direction, from Alice to Bob (Rieffel and Polak, 2000, p.307)

Once Alice has chosen a string and basis, the photon qubits are sent to Bob via a one way quantum channel. Once Bob receives the qubits, he randomly, and independently of Alice, decides to measure each qubit using a rectilinear or diagonal basis and interprets this back into binary. Attempting to read a rectilinear qubit using a diagonal basis will produce a random answer and all information is lost. Therefore, what Bob ends up with is meaningful data from the qubits he correctly guessed the basis for, which is on average 50%.

After that, the remaining steps occur on the public channel. Bob and Alice determine which of the qubits were received successfully and correctly measured. Once the binary results have been verified by Alice, this becomes a key and is transmitted in small sections to make the results of potential interception negligible. A table of these steps is shown in Fig 2.

Figure 2. This table shows what actions are performed by Alice and Bob in the creation of the shared secret key. Note that, once the key has been generated, there is no further information on the quantum channel until all of the key has been used. (Bennett and Brassard, 1984, p.177)

If Eve tries to listen in on the quantum channel, because of the fact that you can't copy a quantum state, the mixed bases make it difficult to try to replicate the data reliably and send it on to Bob without a high error rate (Bennett and Brassard, 1984, pp.175-178).
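The sifting step just described is easy to simulate classically. The sketch below is my own illustrative Python, not code from Bennett and Brassard: preparation and measurement are modelled as coin flips, where matching bases reproduce the bit and mismatched bases return a random result. It also shows why the naive intercept-resend eavesdropping mentioned above fails: Eve's measurements push the error rate in the sifted key to roughly 25%, which Alice and Bob can detect by comparing a sample of their bits.

```python
import random

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Matching bases reproduce the bit; mismatched bases give a random result.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84(n_qubits=10_000, eve_present=False):
    alice_bits  = random_bits(n_qubits)
    alice_bases = random_bits(n_qubits)   # 0 = rectilinear, 1 = diagonal
    bob_bases   = random_bits(n_qubits)

    bob_results = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eve_present:
            # Intercept-resend: Eve measures in a random basis and resends
            # what she saw, destroying the original state in the process.
            e_basis = random.randint(0, 1)
            bit = measure(bit, a_basis, e_basis)
            a_basis = e_basis
        bob_results.append(measure(bit, a_basis, b_basis))

    # Sifting: keep only positions where Alice's and Bob's bases agree
    # (on average about half of them).
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_results, alice_bases, bob_bases) if ab == bb]
    errors = sum(1 for a, b in sifted if a != b)
    return len(sifted) / n_qubits, errors / len(sifted)

if __name__ == "__main__":
    kept, qber = bb84(eve_present=False)
    print(f"no eavesdropper : kept {kept:.2%} of qubits, error rate {qber:.2%}")
    kept, qber = bb84(eve_present=True)
    print(f"intercept-resend: kept {kept:.2%} of qubits, error rate {qber:.2%}")
```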

2.2 Discussing the Security of QKD

Bennett and Brassard's paper on QKD aimed to delve into the world of quantum mechanics to design a cryptographic system, creating a system on which it is theoretically impossible to eavesdrop. They decided to attempt to exploit the uncertainty principle in doing this. They proposed that current digital cryptography was flawed, as it cannot generate truly random numbers, and that using quantum mechanics would also be a key advantage in this context.

Their idea was most certainly a very valid claim. This is the sort of thing that can have world-wide implications in the security industry. A definite break-through, built on the shoulders of previous research into making money which, in principle, cannot be counterfeited (Wiesner, 1981) and unforgeable subway tokens (Bennett et al, 1983). Both of these used quantum coding as a security measure. Cryptography is one of the many aspects of computing which is in a constant game of catch-up, with every new cryptographic method slowly sliding towards becoming obsolete. Hackers will always find a way around conventional classical security, because no computer generated number is ever truly random and it is only ever a matter of time

before the formula for generating it is calculated. Therefore, any progress towards a cryptographic method which would be impenetrable is an excellent idea. They do concede, however, that implementation of particles for data transfer is limited in practice because of how weak the signal has to be without any way to strengthen it.

Bennett and Brassard theorised that polarised light would be the form for their qubit to take. They reached this conclusion because ordinary light is easily polarised by a filter, and the axis of the polarisation is determined by the orientation of the filter. Additionally, picking out a single polarised photon is also possible, simply by picking them from a polarised beam. They reason that while polarisation of a photon is a continuous variable, the uncertainty principle prevents more than one bit of data being obtained. Furthermore, polarised axes which are the same as the filter will always result in transmission and polarised axes which are perpendicular will definitely be absorbed. Any other value will randomly be transmitted or not. It is not possible to further measure axes, because after being transmitted they will always have the same polarisation as the filter, and because it is impossible to clone a quantum state reliably.

The beginnings of this element of their reasoning are logically sound. They put forward polarised light as their data transfer method and justify it well, highlighting the properties which make it well suited to the task. However, they do not compare using photons with using any of the other various particles which could potentially be put in their place.

Bennett and Brassard next moved onto the problem of cryptography. They proposed the idea of using a secure quantum channel to address the problem of secretly transferring a number for use in a one-time pad, or any other security application which requires shared secret knowledge. They suggest that two users, Alice and Bob, should be connected by a quantum and a classical channel. A secret number can be generated over the quantum channel in such a way that there is a high probability that an eavesdropper (Eve) has disturbed the transmission. If the transmission is secure, then the shared secret is used in whichever format is desired on the classical channel. If the transmission is not secure, then the outcome is discarded and the process repeated.

This is an excellent proposition for how to implement the secure quantum channel. The one-time pad is the most secure cryptographic method, but its big weakness is sharing a secret number before the encryption can begin. The combination of this cryptographic method with a quantum channel is therefore, theoretically, completely secure. Additionally, the application of the convention of named users from cryptography, Alice, Bob and Eve, makes the whole thing easier to follow.
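Using the agreed key with a one-time pad is the straightforward part. The snippet below is a textbook construction for illustration, not material from the paper: message bytes are XOR-ed with key bytes, the same operation decrypts, and the key must be at least as long as the message and never reused. In the QKD setting the key bytes would be derived from the sifted key; here a random stand-in is used.

```python
import os

def one_time_pad(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte. Encryption and
    # decryption are the same operation; the key must be at least as long
    # as the message and used only once.
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = os.urandom(len(message))   # stand-in for bits taken from the sifted QKD key
ciphertext = one_time_pad(message, key)
assert one_time_pad(ciphertext, key) == message
```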
Bennett and Brassard follow up their summary of how the quantum channel can be implemented alongside a classical one by going into more detail. They provide a detailed, step by step walkthrough of how Alice and Bob communicate over the quantum and classical channel to produce a shared secret key. (I again refer you to Fig 2., which summarises this excellently.) They say that if an eavesdropper doesn't know the secret key after it has been generated, they would have a minimal chance of being able to do any meaningful interception on the classical channel. An eavesdropper can prevent communication by suppressing either the classical or quantum channel, but this will not fool Alice or Bob into thinking the network is safe.

The protocol's inner workings are obviously well reasoned out and do indeed create an unconditionally secure system. Also, the paper makes good use of tables to clarify and summarise the facts. The paper has no conclusion, only the results of the logical reasoning. Reflecting on it from a current perspective, it is reasonable to conclude that, with the BB84 protocol having been conceived in theory, it would have potential for error when practically implemented. It is logical that something created only as a concept should come across certain issues when it is first built. While minor improvements can be made to an idea before it is put into practice, it is only when it is truly tested by application of the idea that it can start to have its problems properly highlighted and addressed. As this is theoretical, the research

falls under the category of basic and the use of hard numbers means it is also quantitative.

It has been proven that it is possible to exploit the fact that a single-photon detector's pause after a detection event leaves it vulnerable to an attack, without leaving any noticeable difference in the error rate (Weier et al, 2011). This research is valid, as QKD is used on a lot of high-security networks and any attacks on these networks should be addressed as soon as possible.

The paper claims that, using a standard BB84 protocol network setup, the single photon avalanche detector (SPAD) can be forced into dead time and effectively blinded by well timed pulses sent by an eavesdropper.

This is an important idea being explored. If QKD is proven to have vulnerabilities, then it is important that they are addressed as quickly as possible if it is to have any long term presence as a high security product. It wouldn't take much to make people think twice about how many more weaknesses haven't been noticed yet, after the first questions are raised about the no longer unconditional security QKD provides.

The paper goes on to discuss an experiment in which they used standard hardware found in almost all QKD networks and also set up the pulse timing to a proper length for unbiased detections. They then proceed to use blinding pulses to blind the SPAD and, while the SPAD is blinded, Eve can detect the photons instead without being noticed. After an attack, Eve then uses what she has learned about the secret key to decode information sent on the classical channel. The results are based on how well Eve can read the contents of the classical channel. After this, the experiment is repeated with increasing intensities of the blinding pulse and the results are compared.

This is a very fair and unbiased experimental method. It is also effective in getting to the nub of the issue, by actually having Eve listen in on the classical channel afterwards. If the experiment was limited to how much of the shared secret key Eve could discover, the results would be much more limited. It seems likely that each element was tested several times and that an average of the results was taken, but this is not specifically mentioned.

The outcome of the experiment shows that at the lowest pulse intensity the classical channel's data was barely discernible from being random, but the best result demonstrated that 98.83% of the classical data could be correctly discerned (see Fig 3.).

Figure 3. This image shows examples of increasing intensity using blinding pulses to attack the quantum channel in a QKD network. It is clear that there is a significant amount of data intercepted at the highest intensity (Weier et al, 2011, p.8)

This falls in line with the predictions that were made, with the exception of some of the higher values not quite matching up in real life because of the greater chance of interference caused by generating the more intense pulses. To conclude, they point out that the attack is currently potent and applies to the vast majority of QKD networks. Its strengths are in how easy it is to perform. However, the ease of implementation also leads to an ease of detection. If Bob checks for detection events outside of the pulse timing that Alice is sending with, then they are probably a pulse attack. However, Eve can instead send blinding pulses at random times during the window in which Alice's pulses arrive and simulate background noise. They suggest a more efficient countermeasure would be to monitor the voltage of the SPAD to be sure there is nothing malicious going on.

This outcome is a good and unbiased interpretation of the results, and it is always most useful to see a security breach with a well reasoned idea to patch the security hole. This vulnerability is a significant one, and until it is countered by adding new elements to the standard QKD networks, there is definitely a real threat. The paper uses an actual scientific experiment generating hard data to reach its conclusions and therefore falls under applied and quantitative research.

Wang et al (2009) propose that it is possible to perform a man-in-the-middle attack on the BB84 protocol. Again, much like Weier's paper, this is a valid research topic, as it deals with a threat to QKD, which carries a lot of security responsibility, and finding and stopping exploits is important.

They say this attack would involve a malicious Mallory, in place of the eavesdropping Eve, intercepting the secure quantum messages between Alice and Bob. To do this, Mallory must place herself in between Alice and Bob on the quantum channel. When Alice sends Bob the first random bits with their random polarisations, Mallory intercepts this and sends a separate random set of bits with random polarisations to Bob. Bob then replies with the series of detectors he randomly selected and sends this information to Alice, but again, this is intercepted and Alice is sent Mallory's detector information instead. At this point, Bob and Alice only keep the bits that were properly detected and therefore have two different secret keys. Finally, Bob and Alice send each other part of their key and verify that they match, in order to be sure there were no eavesdroppers. At this point, Mallory can receive Alice's key bits and confirm Mallory's with Bob. Then Mallory confirms Alice has the correct code. At this point, they will start to use the shared secret on the classical channel, and Mallory can intercept data from one user, decrypt it and encrypt it with the other key before sending it to the other user.
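The structure of that attack can be sketched very simply. The code below is my own simplification for illustration, not code from Wang et al: the key-establishment step is stubbed out by a helper that just hands both ends the same random key, standing in for a full BB84 session, and the relay step shows how Mallory decrypts with the Alice-side key and re-encrypts with the Bob-side key while both victims see a normal exchange. Authentication, which is exactly what the paper says is missing, is ignored.

```python
import random

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

def toy_key_exchange(n_bytes=64):
    # Stand-in for one BB84 session: both ends finish with the same random key.
    return bytes(random.randint(0, 255) for _ in range(n_bytes))

# Mallory sits on the quantum channel, so Alice unknowingly negotiates a key
# with Mallory, and Mallory separately negotiates a key with Bob.
key_alice_mallory = toy_key_exchange()
key_mallory_bob = toy_key_exchange()

# Alice sends a message she believes only Bob can read.
plaintext = b"transfer 1000 units"
from_alice = xor_bytes(plaintext, key_alice_mallory)

# Mallory decrypts with the Alice-side key, reads (or alters) the message,
# then re-encrypts with the Bob-side key and forwards it.
intercepted = xor_bytes(from_alice, key_alice_mallory)
to_bob = xor_bytes(intercepted, key_mallory_bob)

# Bob decrypts successfully and sees nothing wrong.
assert xor_bytes(to_bob, key_mallory_bob) == plaintext
```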

If this is true, then it would be a severe threat to the security of QKD. However, while there is no reason to doubt that it is possible to perform a man-in-the-middle attack like this one, it is not proven, either by backing it up with existing evidence or by demonstrating any sort of experiment.

The paper continues to discuss means of preventing man-in-the-middle attacks on a quantum system. It claims the core of this problem is not being able to verify who the users are communicating with. It then lists a number of possible solutions. First, they suggest that digital certificates could be used to verify the authenticity of a sender. Secondly, they recommend using a secure key to encrypt the authentication message with. Thirdly, they put forward the idea of using Einstein-Podolsky-Rosen pairs, shared between Alice and Bob. Alice would send her pairs with the very first set of random bits and tell Bob which is entangled. Bob would then detect the interrelated pairs and see if there is correlation. The next option would be to check that Alice and Bob's completed shared secrets match by sending them by phone or email, which is at lower risk of interception. Alternatively, the shared keys could be compared on the classical channel. Finally, they mention that a man-in-the-middle attack is difficult to practically implement, because information is difficult to intercept at each stage. If anything is not intercepted, it is possible that the attack will be discovered and the key discarded.

Once again, while these claims seem to make sense, they are not backed up by any evidence. Furthermore, there are elements which are written in such bad English that it is difficult, or impossible, to make useful sense of them.

The paper concludes by saying that QKD is vulnerable to man-in-the-middle attacks, but could be protected from this by introducing elements of classical cryptography. It also suggests it might be possible to include other elements of quantum cryptography, like security based on the quantum non-cloning theorem or the uncertainty theory. They also mention that the quantum non-cloning theorem has a big role in quantum computing but has not been truly proven, because research cannot claim that there is no way of cloning at all. They make a comparison to human beings, noting that before discovering the gene, nobody would have said human cloning was possible. They say it may be possible, in the future, to discover a quantum gene which will allow us to clone quantum states, at which point the security of QKD would be severely threatened.

They have not come up with a strong conclusion or with any discernible new knowledge in this paper; the whole paper is a list of events, without any proof of any of the elements core to the title, 'Man-in-the-middle Attack on BB84 Protocol and its Defence'. There are six references in the paper: two are on QKD in general and feature in the introduction, while the other four are in the conclusion, and two of those are on quantum non-cloning. The introduction of quantum non-cloning into the paper was a deviation and bears no relation to the title. The paper skips from the introduction, to the attack and defence of man-in-the-middle, and finally to the conclusion. In general, this paper seems thin on useful information and none of what it claims is proven by referencing or by experiment. This paper technically falls into a basic approach to research, in that there is no physical testing; however there are no examples of quantitative nor qualitative research at any point.

3 Conclusions

Bennett and Brassard's paper on QKD is a well written concept, and while it may not live up to the idea of unconditional security in reality as well as it does in theory, it is at least providing a platform for securing further against attacks. It is unlikely that the authors envisaged a time where their protocol would be fully realised and implemented.

The papers of Weier and Wang both put forward problems within a QKD system which can be exploited but, more importantly, protected against. Both of their topics have equal validity, but only Weier really seemed to do the title justice. Additionally, Weier's

contribution has a more feasible problem to discuss, and has a problem which is put into context with evidence.

Weier's research has a fair and unbiased experiment, which has relevant results, while Wang has no experiment. While they cover different enough topics that both of their suggestions to solve the issues they covered could be implemented together, this still leaves the concern that one might question implementing the man-in-the-middle protection based on a lack of evidence.

In the context of the commercial side of QKD, the idea that QKD was unconditionally secure was the biggest draw, and this is no longer true. It was probably not a good idea to herald a new era of complete security so quickly, without giving it a chance to be properly tested. Obviously, the holes in the QKD network can be patched, but ultimately, now that there have been successful breaches, it is highly likely there will be more, based on the common trends in classical security. Certainly QKD is not a bad security protocol, as it takes a very specialist attacker with the tools and ability to hack a QKD system. This is the most significant redeeming quality of the network at the moment.

The real question is how long it will be before hackers learn the elements of the security system well enough to perform an attack, or before someone who already has this knowledge is lured by greed, bribery or force into attacking a QKD network. This is a real unknown. Despite all of QKD's problems, it remains secure more out of ignorance than out of it putting up a good defence. The only thing that is for certain is that it will not stay this way forever.

References Bennett, C.H. and Brassard, G. (1984) „Quantum cryptography: Public key distribution and coin tossing‟ Proceedings of IEEE International Conference on Computers Systems and Signal Processing, Bangalore India, December, pp.175-179

Bennett, C.H., Brassard, G., Breidbart, S. and Wiesner, S. (1983) 'Quantum Cryptography, or Unforgeable Subway Tokens' Advances in Cryptography: Proceedings of Crypto 82, Plenum (New York), pp.267-275

Lo, Hoi-Kwong and Chau, H.F. (1999) 'Unconditional Security of Quantum Key Distribution over Arbitrarily Long Distances' Science Vol. 283, pp.2050-2056

Rieffel, E. and Polak, W. (2000) 'An introduction to quantum computing for non-physicists' ACM Computing Surveys, Vol. 32, pp.300-335

Wang, Y., Wang, H., Li, Z. and Huang, J. (2009) 'Man-in-the-middle Attack on BB84 Protocol and its Defence' 2nd IEEE International Conference on Computer Science and Information Technology, pp.438-439

Weier, H., Krauss, H., Rau, M., Furst, M., Nauerth, S., Weinfurter, H. (2011) 'Quantum eavesdropping without interception: an attack exploiting the dead time of single-photon detectors' New Journal of Physics vol. 13

Wiesner, S. (1983) 'Conjugate coding' Sigact News, vol. 15, no. 1, pp.78-88


A Critical Review of Current Distributed Denial of Service Prevention Methodologies

Paul Mains

Abstract This research paper is in the field of improving online security, narrowing the focus to defending against distributed denial of service (DDoS) attacks. This is a well-documented area, with high profile attacks perpetrated by 'hacktivist' groups and the mischief makers 'LulzSec'. These aren't the only groups or individuals that attempt these attacks: anyone with the ability to download a simple software package such as the 'Low Orbit Ion Cannon' can execute an attack against a web server or someone's internet connection. That is the main obstacle to overcome: the attacks can be made so simple that any Joe Bloggs can participate in mass illegal destructive activity. There are more advanced approaches that are harder to predict. It could be the more advanced 'High Orbit Ion Cannon' package or the manipulation of the 'Conficker' virus. A DDoS can even be produced by a single individual manipulating a distributed hash table, directing a massive amount of traffic towards a target. How these attacks can be avoided will be the focus of this research paper. Attention will be around 'Genetic Algorithms', 'Neural Classifiers', 'P2P defending' and how to defend the future of cloud computing from the frustrations of DDoS attacks.

1 Introduction

One of the most well-known approaches to antagonise a person (a person using a computer, that is) from a significant distance away is to undertake a DoS or DDoS attack against them. This is starting to move over to the newer sensation called RefRef, but we'll only concentrate on the DDoS problem. The biggest problem is that DDoS is not produced from a highly refined, complicated methodology; it is a simple, crude annoyance distributed to people with no substantial hacking skills, and therefore easy to undertake. So why hasn't this issue been handled yet? Well, technically DDoS is just one of many approaches being utilised, but none of the other approaches are considered to be impressive or even remotely complex either.

"The low level of skills displayed is a worry in itself. If hacktivists are achieving this level of mayhem, imagine what real hackers might do." (Mansfield-Devine 2011)

The damage these people inflict is difficult to measure, but a side effect of having a weapon so easily obtainable is that it can easily be turned to dark acts. Having a system vulnerable to such a basic attack causes damage in many ways:
• Loss of money
• Invasion of privacy
• Loss of time
• Mental anguish
• Increase in crime
(Kim et al. 2011)

In 2004 the ground work was already set in the paper "A taxonomy of DDoS attack and DDoS defense mechanisms" (Jelena & Reiher 2004). It has been 7 years since that research was carried out and it is now time to see where the research has progressed.

2 DDoS Prevention Research

In this area research papers will be analysed in the field of DDoS prevention. 4 papers have

been chosen to be the focus of the investigation. These will be critically evaluated to determine the knowledge behind the approaches. The 4 areas are covered in the following research papers:
• "Detection of DDoS attacks using optimized traffic" (Lee et al. 2011)
• "Distributed denial of service attack detection using an ensemble of neural classifier" (Kumar & Selvakumar 2011)
• "Preventing DDoS attacks on internet servers exploiting P2P systems" (Sun et al. 2010)
• "Cloud security defence to protect cloud computing against HTTP-DoS and XML-DoS attacks" (Chonka et al. 2011)

2.1 Traffic Matrix

The research by (Lee et al. 2011) is an applied quantitative piece of research dedicated to the goal of the detection of DDoS attacks. It continues on from the previous work "Detecting DDoS attacks using dispersible traffic matrix and weighted moving average"; this is a heavily mathematically driven approach to a network security issue.

Analysis

The problem that it is trying to tackle is the rise in DDoS threats affecting networks. It clearly identifies the different fields of DDoS defence: "Defense mechanisms against DDoS attacks to cope with them can be classified into four categories: prevention, detection, mitigation and response." (Lee et al. 2011) This research project is designed to detect the anomalies that indicate a DDoS attack is occurring. They verify their approach by undertaking an experiment and analysis process. The tests that are performed involve the use of advanced mathematical data sets. The research from a graphical perspective is quite revealing. What the experiment is exploring is the validity of the equation shown in Figure 1.

Figure 1 Variance Equation (Lee et al. 2011)

That equation is what is used to identify the variance, which is the value V. Basically, if V exceeds the designated threshold the system is alerted to this occurrence. This is then identified as an anomaly, and the mass generation of these anomalies, aided by visual referencing, is the detection measure. The steps are illustrated in Figure 2.

Figure 2 Overall flow of proposed model (Lee et al. 2011)

The approach does in fact display a visual aid to identify when the attack takes place. This is illustrated in Figure 3, which identifies the difference between a normal set of traffic results and a set of DDoS attack results.
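Lee et al.'s exact variance equation appears only as Figure 1 of their paper and is not reproduced here, so the sketch below is a generic stand-in for the idea rather than their model: build a small "traffic matrix" of per-source packet counts over a sliding window, summarise its dispersion with a variance statistic V, and raise an alert when V crosses a threshold. The window size, threshold and example traffic are all illustrative assumptions.

```python
import random
from collections import Counter, deque
from statistics import pvariance

WINDOW = 200        # assumption: packets per observation window
THRESHOLD = 30.0    # assumption: tuned on attack-free traffic

def window_variance(sources):
    # Per-source packet counts for the window; a few sources suddenly
    # dominating the mix drives the variance of these counts up sharply.
    return pvariance(list(Counter(sources).values()))

def detect(packet_sources):
    window = deque(maxlen=WINDOW)
    for i, src in enumerate(packet_sources):
        window.append(src)
        if len(window) == WINDOW and (i + 1) % WINDOW == 0:
            v = window_variance(window)
            if v > THRESHOLD:
                yield i, v          # possible DDoS anomaly in this window

# Example: background traffic spread over 50 hosts, then a flood from two
# bots arrives on top of it.
random.seed(0)
background = [f"10.0.0.{random.randint(0, 49)}" for _ in range(2000)]
during_attack = []
for _ in range(2000):
    during_attack.append(f"10.0.0.{random.randint(0, 49)}")
    during_attack.append(random.choice(["203.0.113.7", "203.0.113.8"]))

for index, v in detect(background + during_attack):
    print(f"alert at packet {index}: window variance {v:.1f}")
```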


Figure 3 Visualisation results of traffic matrix (Lee et al. 2011)

The image above identifies the difference between a DDoS attack and a normal trend of traffic. How the normal traffic is scaled is not made apparent, whether that is an operation intensive network or a particularly calm timeframe. Seeing how this matrix is identifying anomalies it may not be important, but it could be elaborated upon in further research to find the limits of its capabilities. Another feature of further research could be in the format of the IP addresses used. "We converted these sanitized source IP addresses to Ipv4 format, since our proposed model needs Ipv4 format as source IP addresses to construct a traffic matrix." (Lee et al. 2011) Now that the experiment has produced results to be analysed it could progress to new environments to test the limits of this approach. This may warrant testing it in tandem with the IPv6 format.

Figure 4 DARPA 2000 LLDOS 1.0 with LBL-PKT-4 (Lee et al. 2011)

Figure 5 16 bit subnet spoofed attack with LBL-PKT-4 (Lee et al. 2011)

The graphs above display how the traffic matrix detects the DDoS successfully. The method is described as having a low amount of computational overhead, and it must have a minimal detection delay while producing high detection results. The way the experiment is conducted displays the trend between low overheads and a high rate of detection. So the statistical evidence proves the authors' proposed detection model has achieved its goal. The only thing to be raised is how it compares to other methodologies in this field. There is a neural classifier approach that I will use to compare this approach with. But as it stands now, the detection model has reliably informed the reader of its inception and practicality and has then been justified through experimental trial. "We showed that our proposed approach satisfies the major requirements of the detection approach such as low processing overheads, short detection delay, and high detection rates" (Lee et al. 2011). How this is implemented into real world situations hasn't been experimented on, as far as it has been covered in this research paper. A passing comment in the conclusion says that it is easily done, and elaborates slightly on how this can be done, but as far as can be read it hasn't gone through a thorough testing phase. "our model can be used in a real-time network environment and it can be implemented easily" (Lee et al. 2011). It would be interesting to see how this approach fits with other detection models and security packages, and whether it could be implemented into an operational security policy or not.

Conclusion

The quantitative method of research displayed here is appropriate for the desired result. The resulting data sets do illuminate the success of the theory discussed prior.

2.2 Neural Classifier

The research by (Kumar & Selvakumar 2011) is an applied quantitative piece of research, dedicated to improving the detection rate of


DDoS attacks by reducing the amount of false alarms. This is targeted at the network security audience that wish to progress their defences against resource draining attacks. This is also a method built primarily to detect attacks rather than prevent or respond.

2.2.1 Analysis

Figure 6 SSE environment (Kumar & Selvakumar 2011)

Figure 7 Architecture for DDoS attack detection and response system (Kumar & Selvakumar 2011)

This method triggers an alarmed state that communicates to other machines on that security network. This is a method that is already implemented, and the focus of the research is to improve it by modifying the mathematical side. The obstacle at hand is to decipher what is malicious traffic and what is safe traffic. "Real Challenge lies in distinguishing the flooding attacks from abrupt change in legitimate traffic." (Kumar & Selvakumar 2011) This is the main focus of what the algorithm is designed to achieve. It is created to differentiate between various kinds of traffic, by training the classifier to recognise current signatures and learn new traffic trends that can be gauged as a threat. How the method is implemented is illustrated in Figure 7, and the diagram at the start of this analysis section (Figure 6) displays how it appears on a large scale. It is examined on a large distributed network, and it is this large to give the method as much training data as possible. It is also useful as the experiments can be analysed in the larger context of the whole network or minimised to see how it operates at its smallest computational level.

2.2.2 Conclusion

The experiments that took place (there were multiple) were very detailed and rigorous. The proposed system uses a classifier named RBPBoost, built off the RBP classifier. It was tested in 2 experiments using a range of data sets to give us an idea of how fast the algorithm is at detecting attacks while obtaining fewer false alarms. The diagrams over the next page show just a few of the results of how the algorithm fared against other current algorithms. The results can be hard to take in because of the amount of results that were presented, but the overriding consensus is that the proposed algorithm has a better detection rate, fewer false positives and lower cost. The cost is defined as the occurrence of misclassifying data. "Cost function is based on the number of samples that are misclassified." (Kumar & Selvakumar 2011) This was previously identified in the work of Devi Parikh and Tsuhan Chen in their cost minimization paper: "the cost of a false alarm may be much higher than false detection" (Parikh & Chen 2008)
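RBPBoost itself is not reproduced here, but the two ideas the evaluation leans on — combining several detectors' votes into one decision, and scoring the result with a cost function in which a false alarm is weighted more heavily than a missed detection — can be sketched generically. The detectors, weights and data below are illustrative assumptions, not the authors' configuration.

```python
def majority_vote(predictions):
    # Combine several detectors' verdicts on one traffic sample
    # (1 = attack, 0 = legitimate) by simple majority.
    return int(sum(predictions) > len(predictions) / 2)

def total_cost(true_labels, predicted_labels, c_false_alarm=5.0, c_missed=1.0):
    # Cost based on the number of misclassified samples, with false alarms
    # weighted more heavily, in the spirit of the quoted cost function.
    cost = 0.0
    for truth, guess in zip(true_labels, predicted_labels):
        if truth == 0 and guess == 1:
            cost += c_false_alarm      # legitimate traffic flagged as attack
        elif truth == 1 and guess == 0:
            cost += c_missed           # attack that slipped through
    return cost

# Illustrative example: three hypothetical detectors vote on five samples.
detector_outputs = [
    [1, 0, 1, 1, 0],   # detector A: one false alarm
    [0, 0, 1, 0, 0],   # detector B: one missed attack
    [1, 1, 1, 0, 0],   # detector C: one false alarm
]
truth = [1, 0, 1, 0, 0]
ensemble = [majority_vote(votes) for votes in zip(*detector_outputs)]
print("ensemble cost:", total_cost(truth, ensemble))
for name, outputs in zip("ABC", detector_outputs):
    print(f"detector {name} cost:", total_cost(truth, outputs))
```

In this toy case the ensemble's combined decisions match the ground truth and incur zero cost, while each individual detector pays for either a false alarm or a miss, which is the basic argument for ensemble detection.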


Table 1 Simulation results of classification algorithms for the Conficker dataset (Kumar & Selvakumar 2011)

Table 2 Simulation results of classification algorithms for the SSE dataset (Kumar & Selvakumar 2011)

Figure 8 Confusion matrix for the K-Means Clustering (Kumar & Selvakumar 2011)

Figure 9 Cost per sample for existing ensemble classifiers and RBPBoost (Kumar & Selvakumar 2011)

So from what is gathered from those examples above (the tables of results, the cost bar chart and the result matrix) it is evident that improvements have been made against other detection models. Progression is valuable for the online security field to keep up with the security threats that are conceived constantly. The methods of procuring these results were rigorous and detailed, as evident by the range of results that have been discovered. The experiments were conceived with real world situations in mind, with the large scale set-up that was used creating the precise results we have seen. "These applications/projects have inherently geographically distributed centers carrying out specific task/research, networked together to achieve the common goal." (Kumar & Selvakumar 2011)

2.3 Exploiting & Defending P2P

This is a problem that has more to do with the general public, as they can be used to target web servers and domestic connections. P2P operations are quite common among the general public, as software like µTorrent is well established among internet users. Using large-scale distributed hash tables brings many unsuspecting participants into DDoS attacks, as their traffic can be directed towards victims (as illustrated in Figures 10 and 11). The research by (Sun et al. 2010) identifies the threats and discovers the defence against these approaches.

Figure 10 (a) Normal operation and (b) attack (Sun et al. 2010)


Figure 11 Index poisoning attack in Overnet (Sun et al. 2010)

2.3.1 Analysis

This was inspired by the monitoring of DNS traffic, which came up with some curious results.

Figure 12 (a) Number of unsuccessful Kad flows sent to ports that received more than 500 unsuccessful flows. (b) Fraction of unsuccessful Kad flows to ports that received more than 500 unsuccessful flows (Sun et al. 2010)

How these processes are combatted in this paper is by first knowing the range of amplification methods that can be deterred, and then enacting the stage of enhancing the resilience against these kinds of attacks. "the key principles involved in limiting DDoS amplification is validating membership information." (Sun et al. 2010) So from this insight a scheme was produced to mitigate the vulnerabilities within the P2P systems.

Figure 13 Complete sequence of steps for normal buddy operations with probing-based validation mechanisms (Sun et al. 2010)

The main points considered in this scheme are select criteria as to what methods will need to be used. These are implemented to enhance the P2P system's resilience against DDoS manipulation. The sequence displayed in Figure 13 is designed to be implemented in current P2P systems and the areas it covers are as follows (a sketch of points 4 and 6 is given below this list):
1. Use of centralized authorities
2. DHT-specific approaches
3. Corroboration from multiple sources
4. Validating peers through active probing
5. Preventing DDoS exploiting validation probes
6. Source-throttling
7. Destination-throttling
8. Avoiding validation failures with NATs
(Sun et al. 2010)

The experiments implemented to test this scheme were carried out in real world conditions. Planetlab and the Kad network were the areas picked to test this method.
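As promised above, here is a minimal sketch of points 4 and 6 from that list: before a node adds an advertised (address, port) pair to its index or sends it any significant traffic, it probes the endpoint with a small protocol-specific handshake and only keeps entries that answer like a real peer, while also capping how many entries any single announcing source may contribute. The handshake message, timeout and cap are assumptions for illustration, not the exact mechanism of Sun et al.

```python
import socket

PROBE_MESSAGE = b"P2P_PING"      # assumption: a tiny protocol-specific handshake
EXPECTED_REPLY = b"P2P_PONG"     # assumption: what a genuine peer answers
PROBE_TIMEOUT = 2.0              # seconds

def probe_peer(address, port):
    """Return True only if the advertised endpoint behaves like a real peer."""
    try:
        with socket.create_connection((address, port), timeout=PROBE_TIMEOUT) as conn:
            conn.sendall(PROBE_MESSAGE)
            reply = conn.recv(len(EXPECTED_REPLY))
            return reply == EXPECTED_REPLY
    except OSError:
        # Unreachable hosts and victims of index poisoning simply fail the probe.
        return False

def validate_membership(advertised_peers, max_per_source=3):
    """Keep only probed-and-confirmed peers, and throttle how many entries any
    single announcing source may contribute (a crude source-throttle)."""
    accepted, per_source = [], {}
    for source, (address, port) in advertised_peers:
        if per_source.get(source, 0) >= max_per_source:
            continue                      # source-throttling (point 6)
        if probe_peer(address, port):     # active probing (point 4)
            accepted.append((address, port))
            per_source[source] = per_source.get(source, 0) + 1
    return accepted
```

An index-poisoning entry that points at an arbitrary victim, such as a web server that does not speak the P2P protocol, would fail the probe and never be handed out to other peers.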

2.3.2 Conclusion

The data that has been produced in these environments is about how these methods impact performance within these systems. So the focus shifts from having a scheme that shields against the vulnerabilities inherent in P2P to also wanting to achieve good system performance while doing so. There is a multitude of graphical data on display that won't be shown too much of here, but the data does appear to be succinct in what it is presenting. There are many variables to consider in this experiment, which is why the testing had to be very broad. In some instances the network address translation (NAT) delay had increased to around 9 times longer than when the system was not under attack. By implementing their modified Kad system (Resilient Kad) they reduced this delay to around 1.5 to 2 times the not-under-attack value: an improvement of between 4.5 and 6 times over the original system's performance.

Figure 14 Average time a search takes to locate the index node, with different NAT percentages. Each bar is the average over five runs. The error bars show the standard deviation. Measured in Kad (Sun et al. 2010)

Now from that diagram it may be misunderstood as not a big deal, changing NAT response rates from 9 seconds to 2 seconds. But in that experiment only 5 attackers were present. In real world scenarios attacks can be participated in by hundreds, maybe even thousands, of different nodes. "There are five attackers that stay through the entire experiment and there is a single victim." (Sun et al. 2010)

Figure 15 Traffic seen at the victim as a function of time, with five attackers and 50% of the nodes behind NAT. Measured in Kad (Sun et al. 2010)

The diagram above displays the massive difference in attack traffic between the base and the modified system. The difference being so large, it is evident that this approach has had a dramatic effect on the impact of the DDoS attack. So the conclusion can be made that this applied research using quantitative methods has achieved its goal in reducing the impact of the DDoS attack.

2.4 Cloud Security

Cloud security when it comes to DDoS is in its infancy. An attack on a cloud infrastructure would have a crippling effect, as the attack targets the cloud architecture's main weakness. Cloud systems rely on the network infrastructure to function, and by attacking it in this manner the resources become so depleted that the cloud stops functioning. The research being undertaken is specifically against HTTP-DoS and XML-DoS kinds of attack. This is researched in depth by (Chonka et al. 2011).

2.4.1 Analysis

The system being proposed is a cloud traceback (CTB) and a neural network called the 'cloud protector'. How this operates is illustrated in the figures below. This is called the Service-oriented traceback architecture (SOTA) model.
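Chonka et al. (2011) describe the cloud protector as a back-propagation neural network trained to recognise HTTP-DoS and X-DoS messages, but they do not publish its configuration. The fragment below is only a toy sketch of that idea, with invented per-request features (request rate, XML element count, payload size) and made-up data; it is not their classifier or their dataset.

```python
import numpy as np

# Hypothetical per-request features: [requests/sec, XML elements per message, payload KB]
# Labels: 1 = X-DoS style flood, 0 = normal traffic.  Toy data for illustration only.
X = np.array([[900, 400, 150], [850, 380, 140], [700, 500, 200],
              [ 12,  20,   4], [  8,  15,   3], [ 20,  30,   6]], dtype=float)
y = np.array([[1], [1], [1], [0], [0], [0]], dtype=float)

X = (X - X.mean(axis=0)) / X.std(axis=0)        # normalise features

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)) * 0.5, np.zeros(5)   # hidden layer
W2, b2 = rng.normal(size=(5, 1)) * 0.5, np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                            # plain back-propagation
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    d2 = (p - y) * p * (1 - p)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(axis=0)

# A message scoring above 0.5 would be filtered before reaching the cloud web service.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```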


Figure 16: Distributed XML-based Denial of Service attack, where an attacker has taken control of 2 cloud networks and starts sending huge amounts of XML-based messages to Cloud 0 Web Server (Chonka et al. 2011)

Figure 17: Distributed XML-based Denial of Service attack, where CTB and Cloud Protector are located just between each cloud Web Service, in order to detect and filter X-DoS attacks (Chonka et al. 2011)

The problem this approach is meant to solve is what happened in the Iranian election in 2009: "the Iranian opposition coordinated an ongoing cyber attack that has successfully managed to disrupt access to major pro-Ahmadinejad Iranian web sites" (Danchev 2009). They even used this scenario in their experiment. Figure 18 displays how they thought this attack was perpetrated: "Demonstration of our three virtual machines with 20 Firefox browsers and 20 tabs open to the Page Reboot Website. The purpose of our demonstration was to replicate the attack that brought down the Iranian website." (Chonka et al. 2011)

Figure 18: Demonstration of Iranian attack

2.4.2 Conclusion

This was one of many works headed by Mr Ashley Chonka covering his PhD thesis "Protecting Web Services from Distributed Denial of Service Attacks". It is therefore an area he is improving on, and in this paper he takes his previous research on "Chaos theory based detection against network mimicking DDoS attacks" (Chonka et al. 2009) and implements it in the context of cloud computing. That should explain why the conclusion is not completely conclusive when it comes to verifying the success of the approach; he leaves the door open for future work in this area. This may be because the results of his experiment are not concrete enough to be implemented on a larger scale: "We detected and filtered between 88% and 91%, but we achieved a very large response variance due to a problem with the neural network settings" (Chonka et al. 2011). The finding for the cloud protector is that it is not as powerful as it needs to be to act as a reliable defence. The findings for the CTB are a little different, as it has been proven to work, but with a stipulation: the system is under greater strain when the CTB is identifying the attackers. "an increase in response time was generated by thirty percent. This increase means that during a DDoS attack, more processing time is required to handle the extra burden" (Chonka et al. 2011). What is required is a more stable cloud protector to mitigate the strain of the CTB. More research into this is needed because, as it stands, the system is not reliable and therefore not viable for commercial implementation.


4. Overall Conclusions

Taking all of the previously discussed research papers into consideration, the future of DDoS prevention seems strong. As is evident from the results of the various papers, DDoS can be mitigated quite effectively. It comes down to finding the right application of security and discovering what is portable to different architectures. Whatever system this may be, it needs to have cloud computing at its foundation: "it is important to design distributed detection and defense system where heterogeneous nodes can collaborate over the network and monitor traffic in cooperative manner." (Tariq et al. 2011) The only way to combat any kind of distributed attack is with a distributed defence. The agendas of all of the research previously discussed are compatible, and that is what is needed to put an end to this threat. The detection matrix, the neural classifier, the concepts of P2P defence and the SOTA model can all be unified to strengthen future systems against malicious resource-draining attacks.

All four of these areas have established a clear purpose in their investigation, and those purposes relate to each other. Whether focused on the detecting, mitigating, preventing or responding sides of the issue, the purpose is local to the need to defend against DDoS.

All four have been rigorous with their quantitative experimental methods, not only highlighting the successes derived from their hypotheses but also discussing the issues that need to be tackled. In the cloud defence research Mr Ashley Chonka identifies that his approach was not as effective as he would have liked it to be. The amount of rigour does vary, however. Comparing the experiments of Lee et al. (2011) and Kumar & Selvakumar (2011), the degree to which these two research groups worked can be seen to be at different levels of rigour: the experiments used to verify the operation of the neural classifier had more layers and covered a larger range of outcomes, whereas the traffic matrix only used data sets to analyse its effectiveness. One used a large operating network while the other was observed in a smaller, detached environment. Due to this more in-depth approach, the results of the neural classifier tests are deemed to be more reliable.

Because the content is quite advanced, testability depends on the skill set of the researcher. The methods have been laid out in the research and have followed quantitative techniques to evaluate the validity of the work. In these experiments the measurement was always of how far the technique reduces the effectiveness of DDoS, by measuring the reduction in traffic and responses, and in all cases there was a significant reduction. Reproducing these results can be quite difficult as, as previously mentioned, the research is advanced in practice, but given the same skill sets reproduction is possible. In the case of the research by Chonka et al. (2011) the experiment was a scenario dictated by Twitter posts roughly discussing how the original attack was executed, so the experiment has the possibility of being flawed, as the original circumstances could be different from what was casually discussed: "the attackers went on Twitter and various discussion forums announcing how to use the Page Rebooter website, by opening it up in many browser tabs and just leaving them to refresh the Iran Government website." (Chonka et al. 2011). In this case the reproducibility may therefore rely on false information. The other cases are mostly repeatable with standard equipment, but the distributed neural classifier method by Kumar & Selvakumar (2011) was approached on such a large scale that it would require a large investment to replicate the same results.

The precision of the discussed research is substantial due to its quantitative approach. The empirical data that led to the conclusions was collected by measuring network traffic and going into significant enough depth to ensure precise data relating to the detection or prevention of DDoS. Elaborating on the collected data, as to why it behaved in a specific way, has also helped establish precision. For example, in preventing DDoS exploitation of P2P, the authors explain how the handling of network address translation was improved because of the precise nature of the data: "Thus, its routing table could include nodes behind NAT, and many search messages could fail since nodes behind NAT are contacted, leading to longer search times." (Sun et al. 2010)

Without the precise measurement of how long the NAT delay was, that conclusion could not be validated.

The discussed research projects have all of the positives discussed in this conclusion, but only two have the right to be praised for their generalizability. The way Sun et al. (2010) utilised the Kad network and Kumar & Selvakumar (2011) managed the SSE are examples of how experiments need to incorporate real-world situations. Although the other two research projects fit quite comfortably under the DDoS prevention umbrella, their experiments concentrated on the detection aspect in a more specific, smaller environment.

The confidence in these results is high, based on their precision and rigorous approaches. But to bring some parsimony into the equation, none of these individually is meant to solve the problem of DDoS. They require other areas to synchronise with them to produce a product that can cure this threat. The cloud defence approach needs a better neural classifier method, and Kumar & Selvakumar (2011) may already have that in their RBPBoost. Equally, to see the full merits of the traffic matrix by Lee et al. (2011), it could be valuable to test it under the conditions of a large neural network like the one Kumar & Selvakumar (2011) utilised. If all these systems can be harmoniously linked together to produce a distributed defence network, then DDoS's days may be numbered.

References

Chonka, A., Singh, J. & Zhou, W., 2009. 'Chaos theory based detection against network mimicking DDoS attacks.' IEEE Communications Letters, 13(9), pp. 717-719.

Chonka, A., Xiang, Y., Zhou, W. & Bonti, A., 2011. 'Cloud security defence to protect cloud computing against HTTP-DoS and XML-DoS attacks.' Journal of Network and Computer Applications, 34(4), pp. 1097-1107.

Danchev, D., 2009. 'Iranian opposition launches organized cyber attack against pro-Ahmadinejad sites.' [Online] Available at: http://www.zdnet.com/blog/security/iranian-opposition-launches-organized-cyber-attack-against-pro-ahmadinejad-sites/3613 [Accessed 26 October 2011].

Jelena, M. & Reiher, P., 2004. 'A taxonomy of DDoS attack and DDoS defense mechanisms.' ACM SIGCOMM Computer Communications Review, 34(2), pp. 39-53.

Kim, W., Jeong, O.-R., Kim, C. & So, J., 2011. 'The dark side of the Internet: Attacks, costs and responses.' Information Systems, 36(3), pp. 675-705.

Kumar, P. A. R. & Selvakumar, S., 2011. 'Distributed denial of service attack detection using an ensemble of neural classifier.' Computer Communications, 34(11), pp. 1328-1341.

Lee, S. M., Kim, D. S., Lee, J. H. & Park, J. S., 2011. 'Detection of DDoS attacks using optimized traffic matrix.' Computers and Mathematics with Applications. doi:10.1016/j.camwa.2011.08.020.

Mansfield-Devine, S., 2011. 'Hacktivism: assessing the damage.' Network Security, 2011(8), pp. 5-13.

Parikh, D. & Chen, T., 2008. 'Data Fusion and Cost Minimization for Intrusion Detection.' IEEE Transactions on Information Forensics and Security, 3(3), pp. 381-390.

Sun, X., Torres, R. & Rao, S., 2010. 'Preventing DDoS attacks on internet servers exploiting P2P systems.' Computer Networks, 54(15), pp. 2756-2774.

Tariq, U., Malik, Y., Abdulrazak, B. & Hong, M., 2011. 'Collaborative Peer to Peer Defense Mechanism for DDoS Attacks.' Procedia Computer Science, Volume 5, pp. 157-164.


An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications

Niall Marsh

Abstract
SQL Injection attacks are now commonplace within the World Wide Web and the development of online applications utilizing web based databases. Combating these attacks is the responsibility of designers, programmers and developers, who must use a myriad of techniques in their arsenal to neutralise the threat of these attacks. This paper examines new methodologies being researched for combating SQL Injection Attacks (SQLIA) through detecting vulnerabilities in an application's source code, and discussion is given to the use and deployment of these techniques.

1 Introduction

Online security is a key fundamental factor that must be considered by any company or organization that utilizes online functionality in its application. Information security breaches can have significant ramifications for both users and developers of a system: typically loss of personal data, legal proceedings through the Data Protection Act 1998, financial and/or fiscal costs through fraud/theft and loss of revenue, compensation pay-outs, and damage to company/brand reputation and market integrity.

Research carried out by Impervia (2011) claimed that as of July 2011, 357,292,065 individual sites make up the internet. From December 2010 through to May 2011 Impervia's Application Defence Centre monitored more than 10 million individual web application attacks; 23% of these attacks were categorized as SQL Injection Attacks (Impervia 2011).

Programmers and developers must be aware of current, up-to-date hacking techniques and of research aimed at defending against them. Research strategies can be categorised into two main common areas: Vulnerability Detection/Identification, and Assault Detection and Prevention. This research review will focus on different research methods for both strategies.

2 Vulnerability Detection/Identification

Vulnerability detection for SQLIA is the first line of defence in preventing attacks and involves analysing source code for SQL insertion, most commonly through black box testing or static analysis within the development environment. Several systems have been proposed in current research to address vulnerability detection, such as "Patching Vulnerabilities with Sanitization Synthesis" (Yu et al. 2011) and "CANDID: Dynamic candidate evaluations for automatic prevention of SQL injection attacks" (Bisht et al. 2010), which uses a similar methodology in its approach to the SAFELI model discussed in the next section. Additionally, primary research has been carried out into creating automated test generators purely for testing the integrity of a DBMS web application: "Generating Effective Attacks for Efficient and Precise Penetration Testing against SQL Injection" (Kosuga et al. 2011) discusses a novel approach and, although relevant to the research covered in this literature review, further discussion of the topic will not be conducted in this paper, as the Kosuga et al. (2011) research focuses on developing strategies for creating attack patterns across multiple web platforms and applications live on the web, and does not interrogate the source code directly or detect vulnerabilities.
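None of the papers reviewed here publish their scanners, but the static-analysis end of vulnerability detection can be pictured with a deliberately naive sketch: a regular-expression pass over source text that flags SQL keywords concatenated with program variables, the coding pattern the frameworks in this section ultimately hunt for. Real tools work on bytecode or parse trees rather than plain text, so this is an illustration only.

```python
import re

# Naive pattern: a string literal containing an SQL verb, later combined with a
# variable via '+', '%s' formatting or .format() -- the classic injectable construction.
INJECTABLE = re.compile(
    r"""["'].*\b(SELECT|INSERT|UPDATE|DELETE)\b.*["'].*(\+|%s|\.format\()""",
    re.IGNORECASE,
)

def scan(source: str):
    """Return (line number, line) pairs that look like concatenated SQL."""
    return [(no, line.strip())
            for no, line in enumerate(source.splitlines(), 1)
            if INJECTABLE.search(line)]

example = '''
query = "SELECT * FROM users WHERE name = '" + username + "'"
safe = cursor.execute("SELECT * FROM users WHERE name = ?", (username,))
'''
print(scan(example))   # flags only the concatenated query on line 2
```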

2.1 SAFELI

Xiang Fu et al. (2007) have carried out research into building an automated framework called SAFELI to dynamically inspect the bytecode of ASP.NET applications, identifying insertion points for SQL. Once detected, SAFELI scrutinizes the code, identifying uninitialized variables and replacing them with symbolic constraints. Tags are added to the SQL insertion point to trigger the constraint solver. When run through the constraint solver, satisfiability is checked against the constraint and values are assigned for the variable. Once modified, the SAFELI framework activates a Test Case Generator which posts the variable in HTML format to a web server and then monitors the result sent back. If vulnerabilities are detected in the response, error message traces are created and flags are tagged to the event, identifying the error.

The researchers in this paper have focused their research on adapting applications in the .NET framework, SQL injection methodology and the development of data security. In order to develop an effective strategy for their detection system, particular attention has been paid to identifying and highlighting an anthology of SQL injection attack methods, and explanations are given as to how the methods adopted by the SAFELI framework counteract these threats. The research team give the supposition that other current systems, whether in production or development, for vulnerability detection are less accurate than the SAFELI framework; however, no comparative study, analysis or justification is given for the statement. As SAFELI is still in development, no real-time evaluation of the productivity or effectiveness of the framework has yet been conducted; however, the research team have stated that they intend to test the system on several open source applications available on the internet, and they have developed a test suite to verify the accuracy and ability of SAFELI. Theoretical examples have been used throughout the paper to demonstrate the model, and formal proof has been used to validate the results of the framework in action. Xiang Fu et al. (2007) state "the technique can handle hybrid constraints that involve Boolean, integer and string variables", which would enable the framework to detect true/false inputs on system flags as well as number variables not inputted as strings. Contrary to this, the evidence shown within the examples of the article only uses one string variable throughout, and no demonstration of other data types is given to show the handling of Boolean or integer variables.
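The Test Case Generator step described above (post a crafted value to the form, watch the response for database error traces) can be pictured with the toy sketch below. It is not the SAFELI code, which drives constraint solving over ASP.NET bytecode; the payloads, error signatures and the stand-in submit function are all assumptions made for illustration.

```python
# Candidate inputs of the kind a constraint solver might emit for a string field.
PAYLOADS = ["normal_user", "' OR '1'='1", "'; DROP TABLE accounts --"]

# Error fragments whose appearance in a response suggests the input reached SQL unescaped.
SQL_ERROR_SIGNATURES = ["syntax error", "unclosed quotation mark", "odbc", "sqlexception"]

def probe_field(submit, field_name):
    """Post each payload via `submit(field, value) -> response text` and trace failures."""
    traces = []
    for value in PAYLOADS:
        response = submit(field_name, value)
        if any(sig in response.lower() for sig in SQL_ERROR_SIGNATURES):
            traces.append({"field": field_name, "payload": value, "response": response})
    return traces

# Stand-in for a vulnerable server-side handler (the real framework posts to an ASP.NET page).
def fake_submit(field, value):
    if "'" in value:
        return "System.Data.SqlClient.SqlException: Unclosed quotation mark after ..."
    return "Welcome back!"

print(probe_field(fake_submit, "username"))
```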
Adopting this method of initially scanning the bytecode of a .NET application for the insertion of SQL parameters allows the method to test any .NET application, in particular the ASP.NET framework, which processes user input from web applications, typically through POST and GET transactions from form fields within HTML script; ASP.NET then handles the SQL statements which communicate with an online database. SAFELI is currently not in production and therefore cannot be adopted or adapted for a currently operational .NET system or application. The performance and efficiency of the framework cannot be guaranteed until further testing and development has been completed. SAFELI has only been designed to work in parallel with .NET applications and would not be able to test other frameworks or languages such as PHP or JavaScript, as SAFELI performs its algorithms in conjunction with the Microsoft Information Server. The framework has also only been designed to detect vulnerabilities within an application, and it is left to the developer/programmer to patch any errors retrospectively. To further advance the framework and enhance its usability, a solutions module could be introduced into the framework, such as in the PSR Generator method, which would automatically fix any vulnerability that was detected; this would further reduce the possibility of an attack penetrating the application or database.

2.2 PSR-Algorithm & Generator

Thomas et al. (2009) have devised a unique approach for automatic vulnerability detection within a class-based, object-orientated environment.

The PSR algorithm identifies SQL injection vulnerabilities (SQLIV) within the source code through dynamic analysis. Once detected, the PSR algorithm separates the SQL statement into structure and input using prepared statements; these prepared statements are created by the PSR Generator. Reformatted, the SQL statement still has its structure, but the inputs have been replaced with placeholders, resulting in the vulnerability being removed. Converting SQL to prepared statements can be time consuming, especially for large complex applications and models, as each line of code needs to be physically analysed by the developer or tester, searching for SQL inputs and modifying the statements accordingly. Using the PSR model can significantly reduce this time through its automation, which in turn can have significant cost implications when developing an application, as less time spent coding is needed to bring the application to market. Automating this process also reduces the potential for error-prone human actions; however, for the system to be effective the developer must conform to strict coding practices, as poor coding structure and syntax could lead to a risk of vulnerabilities going undetected in the application. Running within the Eclipse IDE, the algorithm has been written in Java for use in native or web driven applications. One of the primary reasons for this is that Java and Eclipse are both open source and free to use and adapt by developers. Being open source would allow this solution to be accessible to many programmers, as Java and Eclipse are both widely used throughout the software industry (Capra et al. 2011). Due to the method and structure in which the system has been created, using common syntax methodology, PSR could easily be adapted to other object-orientated programming languages (Thomas et al. 2009). Allowing this adaptation would enable the system to test and fix a greater number of applications, such as programs created in PHP, ASP, JavaScript or Perl, again all open source, readily available software solutions.

As with the researchers of the SAFELI model, Thomas et al. (2009) conducted extensive research into SQLIA and evaluated the current techniques being adopted in similar solutions. An outline of how PSR has been designed to counteract these threats, and how it differs from other methods, has been comprehensively documented, and the concept of PSR has been well structured in several subsections of the paper, explaining the logic and action of the algorithm, its implementation and its limitations. To demonstrate the effectiveness of the model, the research team carried out tests on 4 online open source applications, using the PSR Algorithm and Generator to analyse the source code of the applications, detect the vulnerabilities and modify the code appropriately. To measure the validity of the experiments, test case suites were devised to ensure the security of the replaced code had been achieved and the original functionality of the SQL statement was still intact. Results from the test showed that the PSR model removed 94% of SQLIV, and justification was given for why the model did not manage to achieve 100% detection. Recommendations are also given as to how the model can be developed in the future to further improve its reliability and accuracy, with speculation that 100% detection rates may be possible.

The PSR researchers Thomas et al. (2009) have demonstrated in the paper a successful and efficient approach for detecting SQLIV and applying countermeasures directly to the source code. Adopting this model allows the developer to write code conventionally in their normal style. Providing the programmer adheres to conventional language syntax protocols, they can then run the algorithm and see the changes that have been made, verifying that the vulnerabilities have been fixed and that the application will be secure and ready for release. At this point, using the SAFELI method the coder would need to concentrate and spend valuable time on fixing the vulnerabilities detected within the application; this, however, would be taken care of by the PSR Generator. A benefit of the PSR model is the ability to run the algorithm on pre-written code, allowing any Java application to be tested and secured. Software production companies who have developed many applications would need to invest a considerably large percentage of their resources to evaluate the security risks embedded within their applications; PSR has the potential to reduce this resource allocation considerably, with a high success rate. As there is currently only a guaranteed success rate of 94%, programmers will need to be made aware of the limitations of the system, and in these instances compensation efforts will need to be made using an alternative method to prevent SQLIA when developing or modifying an application.
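PSR itself is a Java tool running inside Eclipse that rewrites the application source automatically; the pair below is only a hand-worked illustration, in Python with an in-memory SQLite database, of the transformation it performs: the statement keeps its structure while the concatenated input becomes a bound placeholder.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
attacker_input = "nobody' OR '1'='1"

# Before: structure and input are mixed, so the quote in the input rewrites the query.
vulnerable = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(vulnerable).fetchall())        # leaks alice's row

# After (what the PSR Generator produces conceptually): the '?' placeholder keeps the
# structure fixed and the input is bound as pure data, so the injection returns nothing.
prepared = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(prepared, (attacker_input,)).fetchall())   # []
```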

3 Assault Detection and Prevention

Assault Detection and Prevention is a methodology that monitors a system or application at runtime for injection attacks and performs an action to prevent the database or application being compromised during operation. Research carried out in this approach focuses primarily on the communication between the backend database management system and the web based application. "A Specification-based Approach for SQL-Injection Detection" (Konstantinos 2008) is research carried out in 2008 that looked at creating a simple interface between the two mechanisms, monitoring and comparing the structure of SQL statement inputs against a stored library of validated SQL structures. Further developments in this area of research were conducted by Felt et al. (2011), who have progressed a similar method with the introduction of a proxy driver between the two mechanisms, called DIESEL.

3.1 DIESEL

A separate approach, conducted by researchers at Berkeley, investigates the use of proxy based architecture and user based access control to prevent SQLIA with a system called DIESEL. The model adopts a policy of "least privilege state" (Felt et al. 2011). When invoked, the framework breaks an application down into modules and assigns each module the minimum database privilege access needed for its successful operation and function, essentially providing a restricted connection to the database. The second component of the functionality of DIESEL is its proxy driver. The proxy driver becomes the link and detection system between the web application and the database. Using data separation, the proxy sets restriction policies on each module's connections; the proxy then communicates with, and monitors, the requests and responses from the database, detecting any anomalies and removing an attack statement if detected (Felt et al. 2011).
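Felt et al. (2011) do not publish DIESEL's policy format, so the toy proxy below is an assumed illustration of the "least privilege state" idea: each module's queries are checked against the tables its hypothetical policy grants before they ever reach the database.

```python
import re

# Hypothetical per-module policies of the kind DIESEL derives when it decomposes an
# application: each module may only touch the tables it needs in order to function.
POLICIES = {
    "comment_module": {"tables": {"comments"}, "writes": True},
    "search_module":  {"tables": {"posts"},    "writes": False},
}

def proxy_execute(module, sql, run):
    """Toy proxy driver: refuse a query that exceeds the module's privileges."""
    policy = POLICIES[module]
    touched = set(re.findall(r"\b(?:FROM|INTO|UPDATE|JOIN)\s+(\w+)", sql, re.IGNORECASE))
    if not touched <= policy["tables"]:
        raise PermissionError(f"{module} may not touch {touched - policy['tables']}")
    if not policy["writes"] and re.match(r"\s*(INSERT|UPDATE|DELETE)", sql, re.IGNORECASE):
        raise PermissionError(f"{module} is read-only")
    return run(sql)

run = lambda sql: f"executed: {sql}"
print(proxy_execute("search_module", "SELECT * FROM posts WHERE title LIKE ?", run))
try:
    # An injected statement reaching for the users table is stopped by the policy,
    # even though it is syntactically valid SQL.
    proxy_execute("search_module", "SELECT password FROM users", run)
except PermissionError as e:
    print(e)
```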
To conduct the research and development of DIESEL, the team focused on existing research into SQLIA methodologies, countermeasures for the prevention of SQLIA, database authentication, database access control, data pooling, multiplexing and proxy services. To evaluate the performance and eligibility of DIESEL, 3 popular web based applications were retrofitted with the prototype framework through modification of the applications' source code. The applications were then evaluated against one known, documented SQL injection vulnerability previously detected within their own framework. Results from the test showed that the attack was successfully detected and intercepted. No extended testing was conducted by the team to establish whether all vulnerabilities and possible attack patterns could successfully be intercepted by the model; however, theoretical examples were given demonstrating DIESEL in action. Before DIESEL could be adapted to live operational web applications, additional testing would need to be completed, especially on large modular applications, to verify the performance and efficiency of the multiplexing proxy and the user access control driver. Examples and statistics are given depicting the modifications DIESEL performs during retrofitting, showing the total lines of code that need modification and the total number of policies created to secure the application. Applications such as Drupal and WordPress needed only a minimal amount of modification to implement the model; however, one application, JForum, needed over 80% more modifications than the previous two. This neither proves nor disproves the success of the research or the application, but it does demonstrate that the system could prove complicated and time consuming when retrofitting a large, complex application. The preliminary research has shown relatively positive results in the areas tested, demonstrating the concept and benefits of data separation and how they can be applied in preventing SQLIA. Discussion is given of the model's limitations, notably a high computational overhead on performance of 73% during operation, and a description is given of the inability to modify certain code in specific instances.

Application of the DIESEL framework cannot yet be guaranteed as a successful candidate for the detection of SQLIA due to inconsistencies in the testing, and further development is required to patch the limitations of the model. Testing of the DIESEL model has shown a high computational overhead that may be inappropriate for large or heavily used web applications, adding additional resource requirements to the database server. Due to the methodology and system implementation of the model, access to the application source code is needed; it cannot simply sit between the application and the database. In addition, code verification of the policies library needs to be independently tested to ensure security and functionality have been achieved, again potentially very time consuming for large, complex or multiple applications.

4 Conclusions

Articles examined in this literature review have explored research into varying methods currently being adopted in software engineering to combat the threat posed by SQL injection attacks. Evaluation of these different models and strategies has highlighted both the strengths and weaknesses of the individual frameworks and their design implementation. Explanations have been given as to how they can be applied in real-time environments and what impact can be expected of them. Conclusions drawn from the research have identified that none of the systems offer guaranteed detection or elimination of SQLIA or SQLIV, and further research must be carried out to enhance web application security. Overall, the PSR model has been demonstrated to be the most effective of the models discussed at removing vulnerabilities, and it has the added advantage of having the functionality to rectify source code, patching the fault. Minimal effort is needed to adopt the method into new and existing programs or applications, and it has the ability to be adapted to many other programming languages. Further research on the PSR model is expected to yield improved results, potentially allowing it to remove 100% of all vulnerabilities. The introduction of proxy services to monitor and restrict access between an application and its database does show the potential to keep an application secure, but due to their nature and functionality they incur high computational overheads, which may be impractical, especially in an application such as Facebook or Twitter that conducts several million transactions an hour. Evidence does show higher success rates for an implemented model when programmers adopt secure, conventional coding practices during the development cycle, paying particular attention to modules of code that contain embedded SQL statements. Even when using an automated checking system such as SAFELI or PSR, validation analysis still needs to be performed to check the stability and security of the system. Standing alone, each research method has its own benefits and limitations; combining techniques to reduce vulnerabilities in an application's source code and monitoring the application at runtime to detect a possible attack would greatly reduce the risk of data being compromised and would further maintain the integrity of the system.

References

Prithvi Bisht, P. Madhusudan, and V. N. Venkatakrishnan, 2010, "CANDID: Dynamic candidate evaluations for automatic prevention of SQL injection attacks", ACM Trans. Inf. Syst. Secur. 13, 2, Article 14 (March 2010)

Eugenio Capra, Chiara Francalanci, Francesco Merlo and Cristina Rossi-Lamastra, 2011, "Firms' involvement in Open Source projects: A trade-off between software structural quality and popularity", Journal of Systems and Software, Volume 84, Issue 1, January 2011, Pages 144-161, ISSN 0164-1212

Adrienne Porter Felt, Matthew Finifter, Joel Weinberger, and David Wagner, 2011, "Diesel: applying privilege separation to database access", In Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security (ASIACCS '11), pages 416-422

Impervia, 2011, "Web apps attacked every two minutes", Network Security, Volume 2011, Issue 8, August 2011, page 20, ISSN 1353-4858

Xiang Fu, Xin Lu, Boris Peltsverger, Shijun Chen, Kai Qian and Lixin Tao, 2007, "A Static Analysis Framework for Detecting SQL Injection Vulnerabilities", In Proceedings of 31st Annual International Computer Software and Applications Conference (COMPSAC 2007), pages 87-96


Konstantinos Kemalis and Theodores Tzouramanis. 2008, “SQL-IDS: A specification- based approach for SQL-injection detection”, In Proceedings of the 2008 ACM symposium on Applied computing (SAC '08). pages 2153-2158

Yuji Kosuga, Miyuki Hanaoka and Kenji Kono, 2011, “Generating Effective Attacks for Efficient and Precise Penetration Testing against SQL Injection”, Information and Media Technologies, Vol. 6, No. 2, pages 420-433

Frank S. Rietta, 2006. “Application layer intrusion detection for SQL injection”, In Proceedings of the 44th annual Southeast regional conference (ACM-SE 44), pages 531- 536

Stephen Thomas, Laurie Williams and Tao Xie, "On automated prepared statement generation to remove SQL injection vulnerabilities", Elsevier Information and Software Technology 51 (2009), pages 589-598

Fang Yu, Muath Alkhalaf, and Tevfik Bultan, 2011, "Patching vulnerabilities with sanitization synthesis", In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, pages 251-260


An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games

Bradley Peacock

Abstract
With the advent of broadband came the exponential explosion in the popularity of online gaming, in particular Massively Multiplayer Online Games (MMOG's). This is seen as fun by most people, but it has become a platform for others to develop their hacking skills. I am going to evaluate current research methods designed to detect cheating in MMOG's and determine whether their proposed solutions are effective in combating, or even stopping, online gaming cheating.

1 Introduction

The number of people who play games online is rising year on year. Advances in graphics and faster CPU's, along with fast broadband connections, mean that online games are more popular than ever. According to Ki and Cheon (2004), the subscription MMOG market alone was worth $1.6 billion in 2009.

The traditional network infrastructure that MMOG's are built upon is the Client/Server model. The cost to implement such a system can be substantial, as it requires a large investment in hardware resources, computing power and network bandwidth. And with the actual development of a game potentially costing up to and including $20 million, Reynolds (2010), the developers need to generate as much return revenue as possible.

With the Client/Server approach being the most widely used model, it should be hard to cheat, as the developers have sole ownership of the servers. And with the business premise of MMOG's being subscriptions, the provider needs total control in order to allow or deny players based on who has paid their subscriptions, and the Client/Server model provides this functionality.

A cheaper alternative platform is the peer-to-peer model, but as the game is distributed amongst clients, security concerns are raised severely, as it is harder to detect and prevent cheating, although research has been done to try and improve security on the peer-to-peer architecture, Kabus and Buchmann (2008), Kabus et al. (2005).

Research has also been carried out as to why people cheat, Kabus, Terpstra, Cilia and Buchmann (2005), Joshi (2008). By manipulating the source code, the ultimate aim is to gain an unfair advantage over the other players and possibly a reputation within the gaming community, whether this is achieved through a highest score, progressing through a game the quickest, or buying illegally copied items from other gamers to short-cut a game. There is even a virtual market place for people to buy and trade virtual avatars and property, which earns them real money, Kabus, Terpstra, Cilia and Buchmann (2005).

And so, with the cost involved in developing, producing, deploying and maintaining an online game, it is imperative for the developers to keep cheats out of the game, because they could drive away honest players who can't compete with the cheats, thus resulting in lost revenue. However, one thing that is almost inevitable is flaws at the design stage of a game, and this is the source of most hacks, as hackers exploit these oversights, Laurens et al. (2007). One aspect of current computing research is aimed at detecting whether a human being is in control of the game and inputting commands, or whether an automated player agent, commonly referred to as a 'gamebot' or 'bot', has been implemented to input commands. The research is aimed at providing the ability to find out.

The rest of this paper is organized as follows: Section 2 presents and evaluates the five technical proposals researched, which aim to achieve the same goal. Section 3 is a critical evaluation of the research, and the paper concludes in Section 4 with a synthesis of what has been learnt from Sections 2 and 3.

2 Presentation and Evaluation

It would appear that most players in the gaming community disagree with the use of bots, as most players will put a lot of effort into earning various achievements and gaining rewards. Several anti-cheat software programs have been developed to combat cheating, such as PunkBuster and GameGuard, but they can't prevent all bots. Ki and Cheon (2007) analyse and classify known attacks and also categorize these attacks into layers, helping to segregate the attacks, which could possibly help focus the development of anti-cheat software aimed specifically at certain attacks. However, this research aims to determine whether or not a human is in control, by using various methods, regardless of what category the attack falls under.

2.1 CAPTCHA's

It may not initially sound difficult to distinguish human players from bots, as the abilities of the two are quite different, but a bot can be injected into a game controlling an avatar just like a human-controlled one, only a little quicker and more responsive, predicting and reacting to a scenario quicker than any human could. And so Golle and Ducheneaut (2005) propose the use of a variety of tests which can tell humans and computers apart, collectively known as Completely Automated Public Turing Tests to Tell Computers and Humans Apart (CAPTCHA's), Von Ahn et al. (2003). They propose to apply them in both hardware and software tests, with the hardware tests being versatile enough for use in a wide variety of online games. A CAPTCHA test works by presenting challenges that are simple for humans to solve but difficult for a computer to solve, Pao et al. (2010). The most widely implemented CAPTCHA tests rely on humans' ability to recognize randomly distorted text or images, Pao et al. (2010).

The software proposal is a CAPTCHA token whose function is to test that a player is indeed human, and they estimate it is a lot cheaper than the hardware counterpart; again, no research has been done to ascertain this. It would be based upon standard cryptographic techniques and come in the form of an onscreen challenge in which the player has to press a number(s) on the keypad correctly to continue playing the game. These would appear at random intervals during the game which, one would assume, would be frustrating to the game player.

"CAPTCHA tokens require an interruption on the part of the player and thus appear most suitable for slow-paced games with very specific rules and low-entropy inputs and outputs, such as card and board games (chess, poker, etc)" Golle and Ducheneaut (2005).

They do compare their implementations to others used in other two-factor authentication, and so it would appear they have taken an idea and tried to improve upon it. They state they propose two broad approaches to help combat bots, but the term 'broad' implies it will not work on all systems. They also state that they can distinguish bots from human players, which is commendable, but they do not propose any solution as to what they will or can do once the bots are identified.
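Golle and Ducheneaut (2005) describe the software token only at the level above: a cryptographically backed on-screen prompt answered on the keypad at random intervals. The snippet below is an assumed, minimal rendering of such a check for illustration only, not their protocol; the digit length and timeout are invented values.

```python
import random, time

def issue_challenge():
    """Show a short digit sequence the player must type back within a time limit."""
    code = "".join(random.choice("0123456789") for _ in range(4))
    issued_at = time.monotonic()
    return code, issued_at

def verify(code, issued_at, answer, timeout=10.0):
    # A human can read and retype four digits easily; a bot that only injects
    # game commands has no handler for the out-of-band prompt.
    return answer == code and (time.monotonic() - issued_at) <= timeout

code, t0 = issue_challenge()
print(f"Type {code} to keep playing")      # would be rendered in the game UI
print(verify(code, t0, code))              # simulated correct answer -> True
print(verify(code, t0, "0000"))            # wrong answer -> flag or kick the client
```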

The hardware proposal is to turn input devices such 2.2 Human Input Device's as a mouse or joystick into CAPTCHA devices. If the CAPTCHA device is rendered tamper proof While Golle and Ducheneaut (2005) believe there would be no other way for input signals to be CAPTCHAs do provide a sound solution, Schluess- sent to the game software other than physically inte- ler et all (2007), contradict their proposal, finding racting through the device. The signals would then through research that CAPTCHAs are limited to be authenticated either at the game server or to the non-real time applications, Pao et all (2010). What game software running on the machine. They be- Schluessler et all (2007) propose is a method which lieve a machine could physically interact with the ensures data input actually enters a system through hardware device but a human would be much faster a physically present human input device (HID). The and more reliable where reactions are critical. This HID will detect whether data has been modified could also apply to games played using a keyboard prior to its use with the aid of an input monitoring but they state to render a keyboard tamper-proof system. It works on the premise that the input data could be difficult and expensive to design, however generated by the HID has to be the same as the in- the research does not state that they have ap- put data used by the game software. If they differ, proached any manufacturers to ascertain how much then some form of illicit modification, insertion, or the cost would be. Perhaps if the authors had taking deletion occurred. The input monitoring system will this into consideration then the proposal would be compare the two data streams and if they differ in more substantiated as a lot of games are played any way then a notification is sent to a remote ser- using a mouse/joystick and a keyboard. They claim vice provider (RSP). The prototype proposed can that CAPTCHA input devices are best suited to be implemented on any standard personal computer First Person Shooters (FPS) and claim they also (PC) and the system architecture will change only guarantee that the „aimbot‟ cheat, typically used in slightly as the input monitoring system will come in FPS, cannot be used. It could be argued that the the form of a firmware update but additional hard- player still has to control the avatar to some degree ware must be purchased in the form of hardware in order to make use of the aimbot which requires duplication filters. The authors alluded to the cost the interaction of the hardware device anyway, thus of these only stating it was moderate, a more specif- challenging this statement. ic figure would be beneficial to any persons con-

46 templating this system. It has been designed initial- input traces already collected. In direct contrast to ly to work with Intel® chipsets and so it is assumed the HID system designed by Schluessler et all the authors have not experimented with other chip- (2007), the proposed system actually lets a data sets. stream pass through to the game server.Two com- ponents are required: a client-side exporter and a When a software application is started and elects to server-side analyser. The exporter sends the users have the inputted data protected, a RSP is contacted data stream for the exporter to analyse to decide if a and the application sends its inputconfiguration as bot is operating the client or not. A cascade neural well input devices allowed to connect,to the input network is at the core of the analyser, which verification service (IVS) within the firmware and „learns‟ the behaviours of human players. They the IVS then links up to the RSP. Each key press, have used a neural network as research shows that mouse movement etc is passed through hardware they perform well with user input data, Webb and input duplication filters and one data stream is sent Soh (2007). What the authors do not state is wheth- to the game software and one to the IVS. The IVS er the neural network needs to be installed on each then compares the two and if an attack is detected, game server and then subsequently „learns‟ human determined by two different data streams, the IVS behaviour or whether or not it can „learn‟ one per- sends notification the RSP. This is sent over a se- sons behaviour which can then be transferred to cure connection to prevent any sort of tampering, other servers and used as a basis for every player. and so the connection has to provide message inte- They have not stated if it is the same for the bot grity, authenticity, confidentiality and message dup- behaviours. lication detection. Once an attack is detected, all connected clients are notified of who is cheating The authors devised a series of experiments under and an on screen message is displayed stating who different configurations to gather information about the cheater is. It is also possible to kick cheaters different game bots and used different human play- from game and ban them completely. ers to give them enough scope to make a successful system within the parameters they propose, and The prototype which was built deviated from the were thorough in which user-input actions to initial design architecture to keep hardware costs record. Unlike the previous research, Schluessler et down and because of this Schluessler et all (2007) all (2007); they also used their system on two could not implement such a system which was not games, World of Warcraft and Diablo 2, and so vulnerable to circumvention, and this was one of could confidently state they had enough information the prerequisites for the system to be successful. As to reach a definite conclusion. There research such the initial design cannot be verified. However shows quite clearly in the form of charts how the the system they did test was successful in detecting input actions of a human and a bot differ and could input injection attacks. They decided to test it be used as a visual referenceas proof of this as it is against the online game Quake 3 which is a FPS, very precise. Unlike Schluessler et all (2007), and using an Intel® x86 platform. 
The prototype de- Golle and Ducheneaut (2005), what they do not tected the use of an aimbot which automatically state is what they would do once a cheat is detected. aims a user‟s weapon at the enemy. 2.4 Player Behaviour Analysis Although providing an initial successful implemen- A similar concept is proposed by Laurens et all tation of their system, the authors do conclude that (2007). They have designed a proof-of–concept the answer to their papers title "Is a bot at the con- system that detects cheats by monitoring player trols" remains unanswered. This is because there behaviour although the authors do state from the are many questions which remain as the prototype outset that that their evidence shows that their sys- was only tested on one specific machine against one tem can only correctly distinguish most cheating specific game and the architecture deviated against from non-cheating players, not all, but it is a proof- the initial proposal. of-concept.
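The core check performed by the input monitoring system of Schluessler et al. (2007) — compare what the physical device produced with what the game software actually consumed — happens in firmware below the operating system. The sketch below is only an assumed, simplified illustration of that comparison, with made-up event tuples standing in for real device traffic.

```python
def detect_injection(hid_stream, game_stream):
    """Compare events seen by the (trusted) input device with those the game software
    used; any insertion, deletion or modification between the two streams is flagged."""
    alerts = []
    for i, (hid_event, game_event) in enumerate(zip(hid_stream, game_stream)):
        if hid_event != game_event:
            alerts.append((i, hid_event, game_event))
    if len(hid_stream) != len(game_stream):
        alerts.append(("length mismatch", len(hid_stream), len(game_stream)))
    return alerts   # a real IVS would report these to the remote service provider

human    = [("mouse_move", 12, -3), ("mouse_move", 9, -1), ("fire",)]
tampered = [("mouse_move", 12, -3), ("mouse_move", 44, 17), ("fire",)]  # aimbot rewrote the aim
print(detect_injection(human, tampered))
```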

2.3 Human Observational Proofs They based their system on CounterStrike: Source A very similar scheme is one proposed by Gianvec- and specifically targeted wall-hacking. Again like chio et all (2009). They base their system of bot Gianvecchio et all (2009), the authors state that detection on Human Observational Proofs (HOPs) players using cheats exhibit in-game behaviour which is a form of an Intrusion Detection System which is very distinguishable from players who are (IDS). According to their research it is difficult for not. a bot to perform certain actions in a human-like manner. The HOPs passively monitor input actions and compare them against a series of human user


"A system based on behavioural analysis to detect gated navigation of each player. The same was cheating play promises significant advantages over done using three different bots in the game and the current anti-cheat mechanisms" results were compared. The authors found that the (Laurens et all, 2007) pattern of the human players varied greatly to that of the bots, with the behavioural analysis, for ex- This could be true and only if it works, again their ample, being that humans tended to keep next to system is a proof-of-concept but any good idea walls when moving around the map so as not to needs a starting point. However Gianvecchio et all expose themselves in open areas where they would (2009) have proven results based on two games, but most likely be shot by an enemy, unlike a bot who‟s unlike Gianvecchio et all (2009), the authors of this reaction times would be quicker and would be able system devised their own algorithm based on results expose themselves in open ground for that exact from game play to detect cheating play. To achieve reason. Also human players adjust their movements the results they performed thorough experiments more continuously and slightly as they constantly similar to Gianvecchio et all (2009), using players move and spin round, side-to-side etc to detect very experienced in gaming to eliminate the learn- enemies that happen upon them. This work appears ing curve as to achieve a more accurate set of beha- to derive some good and valuable information. vioural results. Yet the authors do not relate to any other research on behavioural analysis and actually Using this the authors then used a framework to state that there appears to be very little academic train a classifier which is then able to determine research done in the field of online gaming security. whether any given segment of an avatars trajectory This can be contested as scores of papers were is the result of a human player or a bot. The detec- found on this subject suggesting more could have tion accuracy achieved results of 95% with traces been made of their idea if similar work was taking longer than 200 seconds.With the type of bot used into consideration. also being detected this is very commendable work and appears more accurate than the previous pro- Their concept is very similar to Schluessler et all posals. (2007), the difference being the cheat detection system resides on the game server as a plug-in 3 Critical Evaluation where it monitors the player‟s actions from within the game world. Communication takes place via an All of the systems evaluated achieved varied results interaction layer with players‟ data being pulled in based around the same concept. The authors have and traces performed upon the state of the game to proposed solutions but due to various reasons in- determine the information required. The data is cluding budget and time, could not fully test their then formatted and then passed to the analysis en- systems on more than two games at any one time. gine and then passed back to the game server to be What is interesting is the how similar the systems acted upon. Again, unlike Schluessler et all (2007) are, excluding Golle and Ducheneaut (2005), which and Golle and Duchneaut (2005), the authors do not would not work in real-time gaming. And it seems state what happens if a cheating player is detected apparent that the potential flaws in one system but do state in their „Future Work‟ segment that the could be overcome by using another. 
It could be scope of their work was to be limited and was de- possible to combine Schluessler et all (2007) with signed to demonstrate the feasibility of behavioural Gianvecchio et all (2009) as the user data carried analysis to be implemented to gaming security. by the client-side exporter in Gianvecchio et alls (2009) proposal could be tampered with prior to it 2.5 Trajectory-Based Player Behaviour Analy- being sent to the server-side analyser. Implementing sis Schluessler et all (2007) idea would ensure that the data was indeed inputted by a human. If Chen et Chen et all (2008) take the concept of behavioural alls (2008) proposal could also be implemented analysis and propose a trajectory-based approach, then there would be indeed three layers of protec- which they believe can be applied to any game in tion, but the burden on hardware resources could be which a player directly controls an avatar. Their considerable. Chen et all (2008), Gianvechhio et all theory is sound but they do not qualify this state- (2009), Laurens et all (2007), Schluessler et all ment with results from research. They used the (2007) and Golle and Duchneaut (2005) have all game Quake 2,a FPS, which has a built-in game- stated the cost for their systems is minimal but only play recording function which can be used to recon- for their very specific scenarios, the actual cost to struct the game and review each player‟s actions design such systems which would work across all and movementsfrom any position and angle.. To platforms could be more. represent diversity the authors only used traces that actual players contributed voluntarily. Using one map of Quake 2, they were able to plot the aggre-


4 Lessons to be Learnt Golle, P., & Ducheneaut, N. (2005). Keeping bots out of online games. ACE '05: Proceedings of the The most obvious question arising from the re- search is whether the proposed solutions would 2005 ACM SIGCHI International Conference on work on any other system and game aside from the Advances in computer entertainment technology. actual ones used. Until, or indeed if further research Valencia. is done using these methods the question will re- main but progress has been made. With hacks de- Yan, J., & Randell, B. (2005). Security in computer riving from security flaws at the design stage all of games : from pong to online poker . Newcastle- the proposed solutions merit further investigation Upon-Tyne. by developers as the cost incurred in the research could be a lot less in comparison to revenue lost Harding-Rolls, P. (2010, September 19). due to players abandoning games because of cheats. Subscription MMOGs - Mixed Fortunes in High Risk Game. Retrieved October 17, 2011, from References www.screendigest.com http://www.screendigest.com/reports/2010822a/10_ Ki, J., & Cheon, J. h. (2004). Taxonomy of online 09_subscription_mmogs_mixed_fortunes_in_high_ game security. The Electronic Library Vol. 22 Iss: risk_game/view.html 1 , 65 - 73.

Reynolds, D. (2010, June 22). The Cost to Make a Kabus, P., & Buchmann, A. P. (2008). Design of a cheat-resistant P2P online gaming system. DIMEA Quality MMORPG. Retrieved October 17, 2011, '07: Proceedings of the 2nd international from www.WHAT MMORG?com: conference on Digital interactive media in http://www.whatmmorpg.com/cost-to-make-a- entertainment and arts, (pp. 299-306). Athens. quality-mmorpg.php

Kabus, P., Terpstra, W. W., Cilia, M., & Joshi, R. (2008). Cheating and Virtual Crimes in Buchmann, A. P. (2005). Addressing cheating in Massively. London. distributed MMOGs. NetGames '05: Proceedings of 4th ACM SIGCOMM workshop on Network and von Ahn, L., Blum, M., Hopper, N., & Langford, J. system support for games. New York. (2003). CAPTCHA:Telling humans and computers

apart. Advances in Cryptology, Eurocrypt '03, (pp. Yan, J. J., & Choi, H.-J. (2002). Security issues in online games. The Electronic Library Volume 20 294-311). Warsaw. Number 2 , 125-133. Pao, H-K; Lin, H-Y; Chen, K-T; Fadlil, Junaidillah. Chen, K.-T., Liao, A., Pao, H.-K. k., & Chu, H.-H. (2010). Trajectory Based Behaviour Analysis for (2008). Game Bot Detection Based on Avatar User Verification. 11th International Conference Trajectory. Entertainment Computing - 7th on Intelligent Data Engineering and Automated International Conference, (pp. 94-105). Pittsburgh. Learning pp. 316-323). Paisley: Computer Science

Gianvecchio, S., Wu, Z., Xie, M., & Wang, H. Webb, S. D., & Soh, S. (2007). Cheating in (2009). Battle of Botcraft: fighting bots in online networked computer games: a review. DIMEA '07: games with human observational proofs. CCS '09: Proceedings of the 2nd international conference on Proceedings of the 16th ACM conference on Digital interactive media in entertainment and arts, Computer and communications security. Chicago. (pp. 49-53). Perth.

Laurens, P., Paige, R. F., Brooke, P. J., & Chivers, Duh, H. B., & Chen, V. H. (2009). Cheating H. (2007). A novel approach to the detection of behaviors in online gaming. Online Communities cheating in multiplayer online games. 2007 IEEE International Conference on Engineering of and Social Computing. Proceedings Third Complex Computer Systems ,(pp. 86-95).Auckland. International Conference, OCSC, (pp. 567-573 ). California. Schluessler, T., Goglin, S., & Johnson, E. (2007). Is a bot at the controls?: Detecting input data attacks. Chow, Y.-W., Susilo, W., & Zhou, H.-Y. (2010). NetGames '07: Proceedings of the 6th ACM CAPTCHA Challenges for Massively Multiplayer SIGCOMM workshop on Network and system Online Games Mini-game CAPTCHAs. 2010 support for games. Melbourne. INTERNATIONAL CONFERENCE ON


CYBERWORLDS (CW 2010), (pp. 254-261). Singapore.


An Empirical Study of Security Techniques Used In Online Banking

Rajinder D G Singh

Abstract
The objective of this paper is to examine the encryption methods used to achieve a secure environment in the banking sector. The paper first discusses the factors that compromise online banking security, including the key points that make users vulnerable to phishing and malware threats, in Section 2, followed by the common security threats in e-banking in Section 3. The evolution of encryption methods is covered in Section 4, which discusses the efforts taken by banks to improve online banking systems through the implementation of new technologies. Section 5 touches on some experimental work carried out by researchers to examine the strengths and weaknesses of the authentication methods that have been deployed by banks, followed by Section 6, which summarizes a comparative study of the encryption techniques being experimented on. Finally, Section 7 concludes on the usage of encryption techniques and gives some recommendations for banks and users towards achieving a better future in online banking.

1 Introduction

Online banking transactions have been taking place since the early 1980s. Over time, the use of the internet for banking has grown because users find it an efficient and hassle-free way to conduct their daily banking needs, be it purchasing goods from virtual stores, transferring funds or paying bills. The popularity and usage of online banking has increased tremendously in the past few years, underlining the importance of security standards over the internet. Various encryption methods and technologies have been implemented since the start of online banking; however, the threat of fraud and identity theft amongst users has grown at an alarming rate alongside them. As early as 1984 there was a surge of interest amongst active e-banking users about a number of computer break-ins (Haskett, 1984), jeopardizing the trust of users in performing transactions online.

2 Factors That Contribute To E-Banking Security Threats

Banking activities over the internet are sensitive, therefore higher security standards have become a necessity to curb the growing menace in the industry. Constant use of stronger security technologies improves the level of secure banking while, contradictorily, affecting the usability of the product and services (Holbl, 2008). According to Haskett (1984), break-ins were caused by users with dial-up facilities hacking into computers by repeatedly guessing combinations of user id and password.

"While the battle between those who would have tight security and those who want simpler (user-friendly) log-ins continues, some of us find ourselves needing an immediately implementable system..." (Haskett, 1984).

One of the reasons hackers were at large at that point in time was the customer mentality of wanting a system that can simply be accessed without having to remember long combinations of passwords. Most users prefer short passwords to long nonsense alphanumeric strings, so that they can remember them easily rather than walking around with a notebook full of passwords to various secured sites.

Besides choosing short and easily manageable passwords, users tend to reuse their passwords across various platforms: "...people accumulated more accounts but did not create more passwords" (Gaw et al. 2006). The number of passwords people use is not on a par with the number of online bank accounts they have, which makes it easier for attackers to crack the code and gain access not only to one account but to all accounts associated with that particular password. In addition, Gaw et al. (2006) also claimed that reuse of passwords was a major problem amongst users, where they used a single password for multiple bank accounts.

3 Common Security Threats in Online Banking

The best known attacks in the online banking industry are Trojan attacks, phishing attacks and MitM (Man-in-the-Middle) attacks. In the Trojan attack scenario, the attacker sends out emails to user accounts convincing them to open a specific site. When the site is visited, the website automatically installs the virus on the user's computer. The Trojan then sits in the system, observes the computer's activities and goes into action as soon as the user begins an online banking session. When a transaction is made, the virus invisibly changes the amount and destination of the transaction without the user's knowledge (AlZomai et al. 2008).

Phishing attacks are carried out by tricking users into revealing their banking account details (Peng et al. 2010). For instance, an email from "L1oyds" is crafted to convince a user that it was actually sent from a genuine server. The change of 'l' to '1' is hardly visible at a glance, making it difficult for the user to notice the attack. The user therefore opens the email, which leads him to a dummy Lloyds bank web page. As soon as the user inputs his username and password, the attacker is able to record the details and the account is hacked in no time.

In the man-in-the-middle scenario, the attacker sits between the bank server and the user and impersonates them both. This kind of activity is known as pharming (Berghel, 2006). The man-in-the-middle can act as both the bank and the user, making it next to impossible for either party to detect any fault: the bank thinks it is dealing with a genuine user and the user thinks it is dealing with a legitimate bank website.

4 The Evolution of Encryption Techniques of Online Banking

The password technique is a rudimentary method used more than twenty years ago, where a user is required to enter a password made up of letters and numbers to access a secure banking site. The loopholes in this method were soon noticed by intruders, who only had to guess combinations of usernames and passwords randomly until they found a matching pair. Taking that into consideration, the technique allowed only three attempts at entering a correct combination of username and password. If an account is indeed probed by an intruder, on the fourth attempt he is automatically redirected to a dummy webpage that appears to be a genuine one. The intruder therefore thinks he has succeeded in cracking the code when in actual fact he has not. The motive of this technique was to create confusion amongst hackers, who believe they are on the right track when they are not. Repeated failures to log in can thus lead to a hacker trap, if desirable (Botting, 1986).
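The three-attempt rule and the decoy-page "hacker trap" described above can be sketched in a few lines. The following is a minimal illustration of the idea only, not Botting's (1986) implementation; the account store, hash scheme and decoy URL are assumptions.

import hashlib
import secrets

MAX_ATTEMPTS = 3
DECOY_URL = "https://bank.example/decoy-login"    # hypothetical dummy page
REAL_URL = "https://bank.example/accounts"

accounts = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
failed_attempts = {}

def login(username, password):
    # Returns the URL the session is redirected to.
    attempts = failed_attempts.get(username, 0)
    if attempts >= MAX_ATTEMPTS:
        # The trap: every further attempt lands on a convincing dummy page,
        # whether or not the guess is correct.
        return DECOY_URL
    stored = accounts.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if stored is not None and secrets.compare_digest(stored, supplied):
        failed_attempts[username] = 0
        return REAL_URL
    failed_attempts[username] = attempts + 1
    return "https://bank.example/login?error=1"

# Three wrong guesses, then the fourth attempt is trapped even though it is correct.
for guess in ("123456", "letmein", "password", "correct horse"):
    print(login("alice", guess))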
In order to overcome the problem of having users remember long passwords, a technique known as secondary passwords was introduced (also known as supersets of Pass-algorithms). With this technique a second password is used alongside the one provided by the vendor of a particular operating system. Pass-algorithms are affordable and easy to install and maintain. Moreover, the scheme requires no alterations to the underlying system, making it a hassle-free process for both the provider and the users. According to Haskett (1984), the scheme also allows enforcers and system administrators to collect site-dependent information about any attempted break-ins.

According to Holbl (2008), most banks have adopted the two-factor authentication approach, which combines something the users know (passwords) with something they have (a security device). Though the actual implementation of this method may vary among banks worldwide, the basic idea of the authentication technique is the same. Along with the static password that users use to access their online accounts, every transaction performed requires a second part of the authentication, which may be a PIN or code delivered to the security device or cell phone registered when the account was opened. This code is known as a one-time password. The approach is currently used by Barclays Bank in the UK and also by some banks in Malaysia, Germany and the Scandinavian countries (Holbl, 2008).

The two-factor authentication method was backed up by Nilsson et al. (2005), who talked about: 1) the security box, which works with a dynamic password, and 2) fixed passwords, which means having one constant or static password. The security box method means that users are provided with a device (e.g. a cell phone or smartcard reader) on which they receive a unique transaction code. This code, coupled with the user's password, is used to log in and perform transactions over the internet. In the fixed password method, a user is provided with a code upon registering at the bank for the first time. This code is static and users use the same code together with their passwords to perform online transactions over time.
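The secondary-password (Pass-algorithm) idea described a few paragraphs above can be illustrated with a short sketch: the site issues a random challenge and the user answers with a transformation of it that only they know, alongside the ordinary static password. This is an illustration of the general concept, not Haskett's (1984) scheme; the particular transformation and the helper names are assumptions.

import secrets
import string

STORED_PASSWORD = "s3cret"             # the ordinary static password

def make_challenge(length=8):
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def expected_response(challenge):
    # The user's secret algorithm: take every second character and reverse it.
    return challenge[::2][::-1]

def verify(static_password, challenge, response):
    # Both the static password and the algorithmic response must be right.
    return static_password == STORED_PASSWORD and response == expected_response(challenge)

challenge = make_challenge()
print("Challenge shown at login:", challenge)
print("Accepted:", verify("s3cret", challenge, expected_response(challenge)))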

The fixed password method is used by Lloyds TSB in the UK. The security box type of authentication is known to be safer than the fixed password method, because a unique password is generated each time users try to log in to their online bank accounts; the code is dynamic and different each time it is sent out from the server. The fixed password method is used by some financial institutions in the United Kingdom, whereas most Swedish banks use the security box method.

Another form of classification within the two-factor approach is the certificate-based approach, where a certificate is used as the second validation factor. The certificate requires a PKI (Public Key Infrastructure) and can be stored on any physical storage device (e.g. a USB stick or smart card) (Holbl, 2008). This method is known to be cheap and easy to use. A further classification is the timer-based password approach, used by providers of services like PayPal and eBay (Holbl, 2008). This method uses hardware generators where the password generated for a particular transaction is valid for a limited time only (e.g. 60 seconds), thus limiting intruder activity. The final classification of the two-factor authentication approach is the certificate-smartcard based approach. This approach requires a specific card reader that can be used to store certificates or to generate one-time passwords. All transactions are recorded on the smart card, therefore the chances of identity theft through Trojan attacks are equal to zero. Although this method has its benefits and is safe to use, it is rarely used by banks around the world due to its high cost: "There is not a single bank that makes such an approach mandatory" (Holbl, 2008).

Where some researchers talked about secure login and transactions, AlZomai et al. (2008) pointed out that data security is also an essential aspect of online banking. The team claimed that there are two types of authentication that are important in online banking: 1) user authentication and 2) data origin authentication, and stressed that data authentication is just as important as user authentication. It is important to remain within genuine web pages throughout the transaction process, from the user login until the transactions are completed, and to ensure that data comes from a genuine provider and has not been tampered with, so that data integrity is preserved. It is of no use if a user logs in safely and protects his data while the site he is accessing is not a genuine one; the impact is no less than a stolen user identity. A typical method discussed for data origin authentication was the OTP (one-time password) for every transaction. This method is similar to the one discussed previously, where the transaction requires not only the normal user password but also a hardware token that can generate a one-time authorization code with a limited validity time. The OTP implementation can take different forms (e.g. security boxes, hardware tokens or SMS messages sent to mobile devices).
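A time-limited, transaction-bound one-time code, in the spirit of the 60-second hardware-token schemes described above, can be sketched as follows. This is not any bank's or vendor's actual algorithm; the shared key, code length and validity window are assumptions for illustration.

import hmac
import hashlib
import time

SHARED_KEY = b"per-device-secret"      # hypothetical key provisioned into the user's token
VALIDITY_SECONDS = 60                  # a code is only valid within a 60-second window
CODE_DIGITS = 6

def transaction_code(amount, payee, timestamp=None):
    # Bind the code to the transaction details and the current time window.
    window = int((time.time() if timestamp is None else timestamp) // VALIDITY_SECONDS)
    message = "{}|{}|{}".format(amount, payee, window).encode()
    digest = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 10 ** CODE_DIGITS).zfill(CODE_DIGITS)

def verify(amount, payee, code):
    # Accept the current window and the previous one to allow for entry delay.
    now = time.time()
    return any(hmac.compare_digest(code, transaction_code(amount, payee, now - drift))
               for drift in (0, VALIDITY_SECONDS))

code = transaction_code("250.00", "GB12BANK00000000000001")
print("Code shown on the user's device:", code)
print("Transaction accepted:", verify("250.00", "GB12BANK00000000000001", code))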

Li et al. (2010) studied another method known as CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). A text CAPTCHA is shown below:

CAPTCHA (Image: Courtesy of Li et al. 2010)

This method is deployed at the login level and at the transaction level. CAPTCHAs are mainly used to prevent automated MitM (Man-in-the-Middle) attackers (Li et al. 2010). Many banks around the world have deployed this technique alongside their existing security methods to make it harder for malicious programs to manipulate transactions. Unlike the OTP, this solution does not rely on any physical device, making it cost effective for both banks and their users to implement and maintain. The principle of CAPTCHA is that pattern recognition tasks are tremendously difficult for computers but simple for humans (Li et al. 2010).

Some of the methods mentioned above are discussed further in the following sections, based on experiments conducted by different researchers.

5 Experimental Research and Results

The first two experiments discussed were carried out from the bank's end, and the third is based on a user's viewpoint of secure online banking.

As mentioned earlier, to achieve a certain standard of online security, stronger technologies have been implemented, which affects the usability of online banking. AlZomai et al. (2008) conducted an experiment on the two-factor authentication approach focusing on the one-time password (OTP) method, using email instead of the SMS messages that would have required cell phones. The study was carried out on a group of students from Queensland University of Technology (QUT), Australia, consisting of undergraduate, masters and PhD research students, staff on campus and non-QUT participants. The aim of the experiment was to examine the usability of the SMS authentication scheme (AlZomai et al. 2008). The participants came from different faculties (law, education and IT) within the university and were aged between 20 and 40.

A dummy online bank simulator was created and participants were asked to perform transactions on it. While the participants were doing their transactions, two simulated attacks were carried out, namely a Trojan attack and a man-in-the-middle attack. The results showed that out of 53 attacks sent out, 42 were discovered by participants and the transactions were therefore cancelled. This means that 79% of the obvious attacks were successfully avoided; in a separate round using stealthy attacks, only 39% were blocked. The OTP codes were sent via email rather than by SMS, whereas in real world banking almost all banks use SMS systems or, alternatively, a security device. Although the rate of avoided attacks was high, the accuracy of the results cannot be relied on completely, as emails are more likely than SMS systems to spread viruses and phishing activity. The experiment therefore concludes that one-time passwords sent via email are not the best solution for online banking security.

Apart from that, Li et al. (2010) carried out an experiment to test the strength of CAPTCHA solutions by deploying practical real-time attacks on various CAPTCHA schemes used by banks in Germany and China, using image processing and pattern recognition tools. These CAPTCHA-breaking tools run on specific mathematical formulas. The results showed that automated programs could recognize each and every character even better than humans when the distorted text was well segmented (Chellipah et al. 2005). The attacks returned an alarming success rate of 100% in breaking the CAPTCHAs. It can therefore be concluded that CAPTCHA solutions do not meet the security requirements expected in real world banking.

Nilsson et al. (2005), however, took a different approach by examining encryption techniques from a user's perspective. The team conducted a survey of 40 participants in Sweden to test the security box method and 40 participants in the UK on fixed password systems. The questionnaire comprised questions covering four major aspects: trust, authentication, location and control (Nilsson et al. 2005). The findings showed that the first three factors significantly influenced online banking users, with an average of more than 3.7 points for location on a Likert scale of 1 to 5 for both methods. This suggests that users' trust in, and authentication of, a particular online webpage depends on the location from which they access it. A user accessing his online account from a cyber cafe is more likely to be exposed to virtual threats than a user accessing his account from a home PC.

6 Comparative Conclusion on Experimental Study

The experiment done by AlZomai et al. (2008) on one-time password methods returned high rates of users being able to avoid security attacks. However, the study was narrowed down to a specific number of participants within one organization, and users performing transactions from dummy accounts on simulated web pages were not exposed to the nature of real world banking, which involves actual personal funds. Therefore I conclude that this study did not return reliable results, given the scope of the experiment and the platform used to deliver the OTPs, which in this case was email. Moreover, CAPTCHAs were proven to be weak and could be broken using carefully designed mathematical formulas by Li et al. (2010). Neither method proved to solve e-banking issues efficiently, which increases the necessity of a thorough study to reveal the best possible methods of achieving safe and secure online banking. Although the OTP encryption method is the one most commonly used by banks around the world, it has not been proven that security attacks via this method are zero. Besides, user experience and knowledge of security issues also play a major role in the banking industry.

7 Conclusions

The nature of online banking security is sensitive, as this type of transaction over the internet is always susceptible to security threats. Various methods have been studied and examined by different researchers on various aspects of e-banking security around the world. The results, however, were not as promising considering the platforms on which e-banking operates. E-banking security has to be meticulously evaluated in order to achieve high-level security standards for banking online, and it is yet to be improved to overcome malicious programs and curb other invisible attacks.

The usability factor should also be kept in consideration while upgrading and implementing safer security technologies, in order to ease online transactions for users.

Apart from the banks assessing new technologies to withstand particular attacks, users equally hold importance in enhancing online banking security and keeping their accounts as safe as possible. Users should take measures to educate themselves about possible attacks and how attackers operate.

References

AlZomai M, AlFayyadh B, Josang A and McCullagh A, 2008, 'An Experimental Investigation Of The Usability Of Transaction Authorization In Online Bank Security Systems', AISC '08 Proceedings Of The Sixth Australasian Conference On Information Security, Volume 81, Australian Computer Society, Inc., Darlinghurst, Australia ©2008

Berghel H, 2006, 'Phishing Mongers and Posers', Communications of the ACM - Supporting Exploratory Search, Volume 49 Issue 4, April 2006, ACM New York, NY, USA

Botting R, 1986, „Novel Security Techniques For Online Systems‟, Magazine: Communications of the ACM, Volume 29 Issue 5, May 1986 ACM New York, NY, USA

Chellipah K., Larson K., Simard P. and Czerwinski M., 2005, 'Computers Beat Humans At Single Character Recognition In Reading Based Human Interaction Proofs (HIPs)', CEAS 2005.

Gaw S. and Felten E W, 2006, „Password Management Strategies For Online Accounts‟, Proceeding SOUPS '06 Proceedings Of The Second Symposium On Usable Privacy And Security, ACM New York, NY, USA ©2006

Haskett A. J, 1984, „Pass-Algorithms: A User Validation Scheme Based On Knowledge Of Secret Algorithms‟, Magazine: Communications of the ACM, Volume 27 Issue 8, Aug 1984, ACM New York, NY, USA

Holbl M, 2008, „Authentication Approaches For Online-Banking‟, CEPIS, ACM Digital Library ©2008

Li Shujun, S.A. Shah H, Khan Usman, Syed Ali Khayam, Sadeghi A and Schmitz R, 2010, „Breaking E-Banking CAPTCHAs‟, Proceeding ACSAC '10 Proceedings of the 26th Annual Computer Security Applications Conference, ACM New York, NY, USA ©2010

Nilsson M and Adams A, 2005, 'Building Security And Trust In Online Banking', Proceeding CHI EA '05 CHI '05 Extended Abstracts On Human Factors In Computing Systems, ACM New York, NY, USA ©2005

Peng Y, Chen W, Chang M and Guan Y, 2010, 'Secure Online Banking on Untrusted Computers', Proceeding CCS '10 Proceedings Of The 17th ACM Conference On Computer And Communications Security, ACM New York, NY, USA ©2010



A Critical Study on Proposed Firewall Implementation Methods in Modern Networks

Loghin Tivig

Abstract

The world's economy is highly dependent on the Internet and computer systems, and the need for data protection solutions has increased along with the development of internet based systems. The firewall is one of the most important tools created to protect a system or a computer network from outside attacks. The aim of this paper is to analyse and critically evaluate some of the existing and some of the proposed firewall implementation techniques that apply to various types of networks. The paper also suggests several practical approaches to the efficient implementation and structuring of firewalls for companies, in order to reduce time and expenditure and at the same time increase the standard of security in their networks, based on the test evidence presented in the research papers studied. The conclusion is that researchers have managed to find effective firewall implementation solutions that companies can apply in order to benefit from high standard security at reduced costs.

1 Introduction

In order to benefit from protection, confidentiality, integrity and availability, networks can be protected by firewall applications. The firewall represents the barrier between a trusted network or system and an untrusted one. Due to increasing network speed and traffic, firewall software needs to evolve in order to confer a good quality of service. The transfer of data packages between networks or systems is analysed against sets of rules, and if a package complies with the rules the transfer is allowed. Modern firewalls must do more than apply these rules; they also need to act as protection against viruses and different types of attack, and to monitor all the data traffic. Basically, how well a firewall works is determined by how good its rules policy is: if it is too permissive the network is insecure, and if it is too restrictive, traffic that should have access is denied. Having a good policy is not enough: too much data can produce a huge queue, and data packages may then be permitted without a valid inspection, giving the possibility of an outside attack. Modern mobile applications that use the Internet can become gateways into the network and increase the risk of attack. The development of e-commerce has led to the necessity of having security rules that every company using this type of commerce needs to comply with, in order to protect the financial information used in these transactions (e.g. credit card details, bank accounts), which is typically targeted by hackers. These companies must respect security standards like the Payment Card Industry Data Security Standard (PCI DSS) or Sarbanes-Oxley (Garlick 2009). Developers have tried to change the common way of thinking in creating firewalls, some of them taking ideas from nature, such as the "Ant Colony Optimization based approach" (Sreelaja and Vijayalakshmi Pai 2010) or "Modeling firewalls by (Tissue-like) P Systems" (Leporati and Ferretti 2010). A different approach is to verify packet transfers by using multiple firewalls at the same time, either in parallel or in series (Khalil et al. 2010). The latest software product in security matters is the Intrusion Prevention System (IPS), or Intrusion Detection and Prevention System (IDPS). This type of software monitors the traffic and activities within the network or system in order to prevent an attack, using signature-based detection, statistical anomaly-based detection and stateful protocol analysis detection. Later in this paper the idea of giving a firewall a simple structure to match the capabilities of a Network Intrusion Detection System will be discussed.
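The rule-policy behaviour described above can be sketched as a first-match packet filter with a default policy. This is a minimal illustration, not any particular firewall product's rule syntax; the rule fields and the default action are assumptions.

from dataclasses import dataclass
from typing import Optional
import ipaddress

@dataclass
class Rule:
    action: str                        # "allow" or "deny"
    src: Optional[str] = None          # source network in CIDR form, None = any
    dst_port: Optional[int] = None     # destination port, None = any
    proto: Optional[str] = None        # "tcp"/"udp", None = any

RULES = [
    Rule("allow", src="10.0.0.0/8", dst_port=443, proto="tcp"),
    Rule("allow", dst_port=80, proto="tcp"),
    Rule("deny", dst_port=23),         # block telnet from anywhere
]
DEFAULT_ACTION = "deny"                # restrictive default policy

def matches(rule, packet):
    return ((rule.src is None or ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(rule.src))
            and (rule.dst_port is None or rule.dst_port == packet["dst_port"])
            and (rule.proto is None or rule.proto == packet["proto"]))

def filter_packet(packet):
    # First matching rule wins; an unmatched packet falls back to the default policy.
    for rule in RULES:
        if matches(rule, packet):
            return rule.action
    return DEFAULT_ACTION

print(filter_packet({"src": "10.1.2.3", "dst_port": 443, "proto": "tcp"}))    # allow
print(filter_packet({"src": "203.0.113.5", "dst_port": 23, "proto": "tcp"}))  # deny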

2 Background

The problems related to protection through firewalls emerge from the continuous evolution of computer networks. The amount of data transferred inside a network, along with new versions of harmful software and the changing strategies used by hackers, creates a constant need for regular updates. Many companies consider that having an older version of a firewall will protect them against threats, but the reality is different: hackers are most of the time one step ahead of the current protection software, so developers must keep up with them by creating new products or updating old versions (Garlick 2009). The appearance of Wi-Fi networks and smart phones has also created the need for a new firewall strategy to be implemented in this type of network.

The increasing number of Wi-Fi compatible devices such as smart phones, computers and printers creates large networks, especially in public places, and a growing demand for security solutions for this type of network. One of the primary protections is the built-in firewall software in network routers, but these firewalls only prevent port scanner attacks and cannot protect against malicious software (Strohmeyer 2008).

According to Chris King (2011), from Palo Alto Networks, using IP port 80, which is associated with HTTP transfer, gives dangerous packages the possibility of getting through, because that traffic is always allowed by the firewall's rules. The solution he proposes is to analyse the application as well as the port used, and to allow the packet only if the user has the right credentials. The same problem applies to the HTTPS networking protocol.

The problems identified can be classified into three categories: costs, performance and type of network. Small to medium size businesses cannot afford the high prices of the latest product releases on the market, so more cost effective firewall systems must be developed which at the same time offer a level of protection comparable with the more expensive products on the market. In terms of performance, there is a need for techniques that can deal with the high volume of data transfer in large companies. The different architecture of Wi-Fi networks also requires new firewall systems that can adapt to their demands.

3 Literature Review

For the problems listed above, many solutions have been proposed in the literature, and some of them are analyzed in the following paragraphs of this paper.

3.1 MANET Networks

"A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links" (Suman et al. 2010).

As stated by Strohmeyer (2008) in PC World magazine, some people believe there is no need to install firewall software on their devices because the Wi-Fi router already has a built-in firewall. The article clarifies that the router does have a built-in firewall, but it protects only against port scanners, while the main risk of getting a device infected with malicious software lies in the information it downloads. This is why protection software such as a firewall should also be installed on the device itself.

Research has been carried out in this domain to solve these security problems, and a proposed technique for MANET networks is to randomly change the firewall router after a fixed amount of time, in order to make it more complicated to find an entry into the network in the case of an outside attack (Suman et al. 2010). A graphical representation of the method's functionality is presented in figure 1. In this approach every device within the network must be able and willing to act as a firewall. The authors claim that this method will confer much better security on these types of networks.
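The random selection idea can be illustrated with a toy simulation in which every node is a candidate and a new firewall node is picked at random after a fixed interval. This is a sketch of the concept only, not Suman et al.'s (2010) protocol; the node names, interval and selection logic are assumptions.

import random
import time

NODES = ["phone-A", "laptop-B", "tablet-C", "phone-D"]   # every node can act as the firewall
ROTATION_SECONDS = 30                                     # fixed interval before re-selection

def choose_firewall(current=None):
    # Pick a new firewall node at random, avoiding an immediate repeat when possible.
    candidates = [n for n in NODES if n != current] or NODES
    return random.choice(candidates)

def run(rounds=5):
    firewall = None
    for _ in range(rounds):
        firewall = choose_firewall(firewall)
        print("Traffic for the next", ROTATION_SECONDS, "seconds is filtered by", firewall)
        # In a real MANET the nodes would now re-route traffic through the selected
        # node; here we only wait a token amount of time between rotations.
        time.sleep(0.1)

run()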

Although this concept is a revolutionary one and might help to increase security for MANETs, its functionality has only been demonstrated using a series of mathematical calculations. For specific values and in a certain situation the authors make the efficiency of their theory evident, but the mathematical demonstrations sustain their claims only in a "perfect" network. In the real world there is no such thing as a perfect network, because devices can develop a multitude of problems, so to demonstrate the efficiency and advantages that this implementation might bring, a series of tests must be performed. The networks used in such experiments must contain a large variety of devices in order to analyze how each one of them behaves. A further problem is creating the firewall software in such a way that it is compatible with all the wireless platforms and operating systems currently in existence. The devices' battery power is also an important factor that should be considered.

Therefore, as the authors admit in their conclusion, more research is needed for this technique to become usable (Suman et al. 2010).

Figure 1 Flowchart of random selection of firewall (Suman et al., 2010)

3.2 Parallel Firewalls

A proposed solution that claims to reduce time and speed up the verification of data packages is the use of multiple parallel firewalls. According to Khalil et al. (2010), this technique has proved to be extremely efficient. The claims have been confirmed by testing different implementations of firewall protection and comparing the results. To carry out the experiment the following hardware and software resources were used: Microsoft parallel firewalls (ISA) with a constant policy of 3000 rules, a Virtual Private Network (VPN) and Cisco 6500 switches with Network Load Balancing (NLB). Tests were performed for the following cases:

• No firewall.
• Standalone firewall without VPN.
• Standalone firewall with VPN.
• Enterprise edition ISA integrated with NLB for internal only, without VPN.
• Enterprise edition ISA integrated with NLB for internal only, with VPN.
• Enterprise edition ISA integrated with NLB for internal and external, without VPN.
• Enterprise edition ISA integrated with NLB for internal and external, with VPN.
• Two standalone firewalls with two Cisco 6500 switches with HSRP enabled, without VPN.
• Two standalone firewalls with two Cisco 6500 switches with HSRP enabled, with VPN.

For each of the above cases the following results were recorded and compared: processor usage, the amount of data transferred, time and bandwidth usage. The conclusion of all these experiments is that the proposed technique (two standalone firewalls with two Cisco 6500 switches with HSRP enabled) achieves the best results in both cases, with or without VPN.

The experiment is descriptive, making it easy to repeat for anyone interested in this aspect. Considering the wide range of tests performed, there is enough evidence to accept the claims made for the specific case presented. A concerning aspect of this approach is the cost of its implementation. Having parallel firewalls could be an improvement in speed and quality, but because of the high cost it might not be applicable to companies with a small budget. Further tests of this concept could be performed using different firewall products and implementing the technique in a real world company network, to see what level of improvement and service quality can be gained in comparison with the existing system.
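The general idea of verifying packets on several firewalls at once can be sketched as follows. This is a generic illustration, not Khalil et al.'s (2010) ISA/NLB test setup; the rule check and the dispatch policy are assumptions.

from concurrent.futures import ThreadPoolExecutor

BLOCKED_PORTS = {23, 135, 445}         # ports denied by the shared rule set

def firewall_worker(packet):
    # Every worker applies an identical (here deliberately trivial) rule check.
    verdict = "deny" if packet["dst_port"] in BLOCKED_PORTS else "allow"
    return packet, verdict

def dispatch(packets):
    # A load balancer in front of two parallel firewalls: the stream is shared
    # between the workers and verdicts are collected as they complete.
    with ThreadPoolExecutor(max_workers=2) as pool:
        return list(pool.map(firewall_worker, packets))

traffic = [
    {"src": "10.0.0.5", "dst_port": 80},
    {"src": "10.0.0.9", "dst_port": 445},
    {"src": "10.0.0.7", "dst_port": 443},
]
for packet, verdict in dispatch(traffic):
    print(packet["src"], "->", packet["dst_port"], ":", verdict)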

3.3 Filter and Firewall Automatic Cooperation

One of the latest network protection techniques is the use of IPS (Intrusion Prevention Systems). These systems incorporate existing technologies like firewall, antivirus and intrusion detection software. The problem with IPS is the cost of implementation, because they require expensive hardware. For medium to large size companies this might not be an issue, but for small companies this solution can be unaffordable. In their research, Shirazi and Salehi (2009) demonstrate that the same standard of security can be achieved using a much more cost effective solution. The proposed technique is the use of a NIDS (Network Based IDS) integrated with packet filtering. The experiment was performed using the Snort NIDS and a simple firewall created with IPTABLES. The results obtained after running intrusion attacks against the experimental protection demonstrate the effectiveness of the method. Although the experiment was performed on a Linux platform, the use of XML for communication between the NIDS and the firewall means that this type of implementation can be extended to other platforms.

The schema and all the necessary details are well explained in the paper, allowing the experiment to be easily repeated. Their approach is all the more interesting because the tests used common applications, and the whole implementation does not necessarily need expensive hardware and software resources. The fact that Snort is an open source detection system and IPTABLES a common tool makes this research a practical solution for low budget companies, due to the reduced cost of its implementation. The experiment had good results in the case presented and sustains the claims made by the authors. Further tests should be performed on a wider diversity of attacks. A comparison between IPS and this type of implementation could also reveal useful information about the level of security each of the methods provides. Considering the variety of operating systems in use, a continuation of this research would be the development of software compatible with all major operating systems.
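The cooperation between a detection system and a packet filter can be sketched as a small script that reads alerts and adds blocking rules. This is a simplified stand-in for the idea, not Shirazi and Salehi's (2009) XML-based design; the alert log path and its line format are assumptions, and the script needs Linux, iptables and root privileges to have any effect.

import re
import subprocess

ALERT_LOG = "/var/log/snort/alerts.txt"          # hypothetical plain-text alert log
SRC_PATTERN = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})")

def block_source(ip):
    # Append a packet-filter rule dropping all further traffic from the offending host.
    subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"], check=True)

def process_alerts():
    blocked = set()
    with open(ALERT_LOG, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = SRC_PATTERN.search(line)
            if match and match.group(1) not in blocked:
                block_source(match.group(1))
                blocked.add(match.group(1))
                print("Blocked", match.group(1))

if __name__ == "__main__":
    process_alerts()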

3.4 Network Processor Solution

A method proposed to speed up the filtering process is the network processor solution. This method targets the data transfers between two of the most used types of network: WAN and LAN. It can be defined as a "hybrid firewall architecture that independently leverages the control and data path capabilities available in the network processors" (Seong-Hwan Kim et al. 2010). The technical details of its functionality are presented in figure 2.

Figure 2 Network processor implementation (Seong-Hwan Kim, et al., 2010)

For this experiment the APP2200 filtering device was used. The results of the comparison between this kind of implementation and a traditional firewall implementation demonstrate that the proposed solution, in the specific scenario presented, provides better results. The details of the test lack information about the conditions and equipment used, leaving the results open to questions about the relationship between the hardware and software capacities of the system and the outcomes. Based on the evidence provided, it can only be assumed that this implementation targets complex networks and requires powerful hardware resources. The output of the test shows that with the proposed method the transfer between WAN and LAN is much faster in either case, with the firewall enabled or disabled. Based on these results the authors concluded that "This model provides consistent and predictable high throughput performance for the traffic while retaining the flexibility offered by a general-purpose CPU architecture" (Seong-Hwan Kim et al. 2010).

3.5 Ant Colony Optimization

In their research, N. K. Sreelaja et al. (2010) present a new bio-mimetic technique, inspired by the ant colony architecture, which helps with filtering against the firewall rule set. The proposed solution targets the general implementation of firewall rules. The efficiency of this approach has been demonstrated through a diversity of case studies. The authors have developed an algorithm, presented in pseudo-code format, and have applied it to a variety of situations. The functionality is well explained, including complete mathematical calculations for every case considered. The research also contains the results of comparisons between this method and the major existing methods in the targeted area.

In terms of mathematical demonstration, the overall research is descriptive and well performed, but there is a lack of information about a real experiment. The theoretical advantages sustain the authors' claims, but in order to demonstrate the effectiveness of this solution, practical tests must be performed. Also, there is no specification of the type and performance of the hardware and software resources that must be used, nor of the cost of implementation and maintenance. Further study must be performed in these areas in order to convince companies to adopt this approach.
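As a loose illustration of how ant colony ideas can be applied to firewall rules, the toy sketch below searches for a rule ordering that minimises the expected number of comparisons, using pheromone-weighted random construction and reinforcement. It is not Sreelaja and Vijayalakshmi Pai's (2010) algorithm; the match frequencies and all parameters are invented for the example.

import random

MATCH_FREQ = {"web": 0.55, "dns": 0.25, "mail": 0.12, "ssh": 0.05, "telnet": 0.03}
RULES = list(MATCH_FREQ)
ANTS, ITERATIONS, EVAPORATION, DEPOSIT = 20, 60, 0.1, 1.0

# pheromone[p][r]: attractiveness of placing rule r at position p in the rule list
pheromone = {p: {r: 1.0 for r in RULES} for p in range(len(RULES))}

def expected_comparisons(order):
    # A packet matching the rule at position p costs p + 1 comparisons.
    return sum((p + 1) * MATCH_FREQ[r] for p, r in enumerate(order))

def build_order():
    remaining, order = RULES[:], []
    for position in range(len(RULES)):
        weights = [pheromone[position][r] for r in remaining]
        choice = random.choices(remaining, weights=weights)[0]
        order.append(choice)
        remaining.remove(choice)
    return order

best_order, best_cost = RULES[:], expected_comparisons(RULES)
for _ in range(ITERATIONS):
    tours = [build_order() for _ in range(ANTS)]
    for p in pheromone:                          # evaporation step
        for r in RULES:
            pheromone[p][r] *= (1.0 - EVAPORATION)
    for order in tours:
        cost = expected_comparisons(order)
        if cost < best_cost:
            best_order, best_cost = order, cost
        for p, r in enumerate(order):            # reinforce good placements
            pheromone[p][r] += DEPOSIT / cost

print("Best ordering found:", best_order, "expected comparisons:", round(best_cost, 3))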


3.6 Tissue-like Model

Based on the similarity between P system rules and firewall rules, research has been carried out by Leporati and Ferretti (2010) into using the tissue-like P system as an instrument to create and examine the security properties of a firewall. The aim of this technique is to help network administrators test filtering rules before implementation, in a faster way. The practical outcome targeted is reduced cost and fewer problems in the system's implementation. The authors claim that this approach of using P systems has never been studied in the literature, and that what seems to be a huge problem in the existing methods of creating firewall rules can easily be fixed using the proposed tissue-like model. The functionality has been proved using a complex mathematical demonstration, which brings enough evidence to sustain the authors' claims. The lack of information about a practical implementation and testing, however, makes the method only a possible solution to the problems it is intended to solve.

3.7 Firewall Strategies for Businesses

For today's companies it is critical to implement firewall security. Their management must find a balance between performance, security and administration for a successful implementation of a firewall to protect the networks and systems they use, and must also keep the software up to date. The rules applied should be optimized and the firewall must not be loaded with extra services. In the event of replacing the firewall product, the use of special packages for analysing the base rules is critical. Firewall platforms that allow this type of technology can use connection acceleration to speed up the traffic. In order to increase network speed, administrators sometimes tend to make dangerous modifications such as reducing the level of encryption, disabling logging for all services or switching off TCP stateful inspection. Allowing third-party connectivity without authentication and encryption is also a common practice. All these things can put the network at high risk and must therefore be avoided (Garlick 2009).

Maskey et al. (2007) present a template firewall configuration that can be used as a starting point by companies or institutions as a low budget solution. Because the paper presents a basic firewall configuration for teaching and learning purposes, a complete analysis of the method has not been performed.

4 Conclusions

The firewall has become a primary tool in the fight against viruses and hacker attacks. Due to the high levels of security it provides, the majority of Internet users, including companies and institutions, have chosen to use a firewall product. Looking at the examples presented in this paper, the continuity of research into new firewall methods is obvious, in response to the progress of network technologies, the diversity of harmful software, hackers' attacks and the huge amount of data transferred. Based on the problems identified, this paper has analysed and presented existing solutions that can be adopted by companies. The methods studied above address problems such as the costs of implementation, data transfer and methods for Wi-Fi networks. In some of the cases more research is needed for the proposed methods to become feasible solutions ready to be used.

References

Garlick N. (2009) 'The hidden benefits of optimising your firewall.' Network Security, 9:6--9, September.

Gold S. (2011) 'The future of the firewall.' Network Security, 2:13--15, February.

Khalil R. K., Zaki F. W., Ashour M. M. and Mohamed A. M. (2010) 'A Study of Network Security Systems.' International Journal of Computer Science and Network Security, 10(6), June.

Kim S.-H., Vedantham S. and Pathak P. (2010) 'SMB gateway firewall implementation using a network processor.' Network Security, 8:10--15, August.

Leporati A. and Ferretti C. (2010) 'Modeling and Analysis of Firewalls by (Tissue-like) P Systems.' Eighth Brainstorming Week on Membrane Computing. Sevilla.


Maskey S., Jansen B., Guster D. and Hall C. (2007) „A Basic Firewall Configuration Strategy for the Protection of Development-related Computer Networks and Subnetworks.‟ Information Systems Security, 16(5):281--290.

Rowan T. (2007) „Application firewalls: filling the void.‟ Network Security, 4:4-7, April.

Shirazi H. M. and Salehi H. (2009) „Automatic Cooperation between Filter and Firewall for Improving Network Security.‟ Journal of Information and Communication Technology, 2(2):107-113.

Sreelaja N. K. and Vijayalakshmi Pai, G. A. (2010) „Ant Colony Optimization based approach for efficient packet filtering in firewall.‟ Applied Soft Computing, 10:1222- 1236.

Strohmeyer R. (2008) „True or False: Wi-Fi Users Don‟t Need a Software Firewall.‟ PC World, October.

Suman, Patel R. B. and Singh P. (2010) „Random Time Identity Based Firewall In Mobile Ad hoc Networks.‟ International Conference on Methods and Models in Science and Technology. Chandigarh, India.
