
Space Efficient Deep Packet Inspection of Compressed Web Traffic

Yehuda Afek (a), Anat Bremler-Barr (b,1), Yaron Koral (a,*)

(a) Blavatnik School of Computer Sciences, Tel-Aviv University, Israel
(b) Computer Science Dept., Interdisciplinary Center, Herzliya, Israel

(*) Corresponding author. Email addresses: [email protected] (Yehuda Afek), [email protected] (Anat Bremler-Barr), [email protected] (Yaron Koral).
(1) Supported by European Research Council (ERC) Starting Grant no. 259085.

Abstract

In this paper we focus on the process of deep packet inspection of compressed web traffic. The major limiting factor in this process, imposed by the compression, is the high memory requirement of 32KB per connection, which leads to hundreds of megabytes to gigabytes of main memory in a multi-connection setting. We introduce new algorithms and techniques that drastically reduce this space requirement for bump-in-the-wire devices such as security and other content-based networking tools. Our proposed scheme improves space and time performance by almost 80% and over 40%, respectively, thus making real-time compressed traffic inspection a viable option for networking devices.

Keywords: pattern matching, compressed http, network security, deep packet inspection

1. Introduction

The use of HTTP compression when transferring pages over the web is sharply increasing, motivated mostly by the growth of web surfing on mobile devices. Sites such as Yahoo!, Google, MSN, YouTube, Facebook and others use HTTP compression to enhance the speed of their content download. In Section 7.2 we provide statistics on the percentage of top sites using HTTP compression: among the 1000 most popular sites, 66% use it (see Figure 5). The standard compression method used by HTTP 1.1 is GZIP.

This sharp increase in HTTP compression presents new challenges to networking devices, such as intrusion-prevention systems (IPS), content filters and web-application firewalls (WAF), which inspect the content for security hazards and balancing decisions. Those devices reside between the server and the client and perform Deep Packet Inspection (DPI). Upon receiving compressed traffic, the networking device first needs to decompress the message in order to inspect its payload. We note that GZIP replaces repeated strings with back-references, denoted as pointers, to their prior occurrence within the last 32KB of the text. Therefore, the decompression process requires a 32KB buffer of the most recent decompressed data to keep all possible bytes that might be back-referenced by the pointers, which causes a major space penalty. Considering today's mid-range firewalls, which are built to support 100K to 200K concurrent connections, keeping a buffer for the 32KB window of each connection occupies a few gigabytes of main memory (e.g., 200K x 32KB = 6.4GB). Decompression also incurs a time penalty, but the time aspect was successfully reduced in [1].

This high memory requirement leaves vendors and network operators with three bad options: ignore compressed traffic, forbid compression, or divert compressed traffic for offline processing. None of these is acceptable, as each presents either a security hole or serious performance degradation.

The basic structure of our approach is to keep the 32KB buffer of every connection compressed, except for the data of the connection whose packet(s) is currently being processed. Upon packet arrival, we unpack its connection buffer and process it. One may naïvely suggest simply keeping the appropriate amount of the original compressed data as it was received. However, this approach fails, since the buffer would then contain recursive pointers to data more than 32KB backwards.
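To make this failure concrete, the toy sketch below (our illustration in Python, not code from the paper; the list-of-symbols input format is our simplification) decodes LZ77-style symbols. The second pointer copies bytes that were themselves produced by the first pointer, so a buffer holding only the tail of the raw compressed stream cannot be decoded on its own:

    # Toy sketch (ours, not the paper's algorithm): LZ77-style symbols are
    # literals or (distance, length) back-pointers; decoding a pointer needs
    # the bytes it points back to.
    def decode(symbols, history=b""):
        out = bytearray(history)
        for sym in symbols:
            if isinstance(sym, tuple):           # (distance, length) pointer
                dist, length = sym
                if dist > len(out):
                    raise ValueError("pointer refers to missing history")
                for _ in range(length):          # byte-wise copy permits overlaps
                    out.append(out[-dist])
            else:                                # literal byte
                out.append(ord(sym))
        return bytes(out[len(history):])

    stream = ["a", "b", "c", "d", "e", (5, 3), (3, 3)]
    print(decode(stream))                        # b'abcdeabcabc'
    # (3,3) copies bytes produced by (5,3), which in turn copies literals;
    # keeping only the compressed suffix loses that chain of history:
    try:
        decode(stream[5:])
    except ValueError as e:
        print("suffix alone fails:", e)

Any scheme that stores the received compressed bytes verbatim inherits this chain of dependencies, which motivates re-packing the buffer so that every pointer resolves within the kept window.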
Our technique, called "Swap Out-of-boundary Pointers" (SOP), packs the connection's buffer by combining recent information from both the compressed and the uncompressed 32KB buffers to create a new compressed buffer whose pointers refer only to locations within itself. We show that by employing our technique for DPI on real-life data, we reduce the space requirement by a factor of 5 with a time penalty of 26%. Notice that while our method modifies the compressed data locally, it is transparent to both the client and the server.

We further design an algorithm that combines our space-reducing SOP technique with the ACCH algorithm presented in [1], a method that accelerates pattern matching on compressed HTTP traffic. The combined algorithm achieves an improvement of 42% in time and 79% in space. The time-space tradeoff presented by our technique provides the first solution that enables DPI on compressed traffic at wire speed for network devices such as IPS and WAF.

The paper is organized as follows: background on compressed web traffic and DPI is presented in Section 2. An overview of related work appears in Section 3, and an overview of the challenges in performing DPI on compressed traffic appears in Section 4. In Section 5 we describe our SOP algorithm, and in Section 6 we present the combined algorithm for the entire DPI process. Section 7 describes the experimental results for the above algorithms, and concluding remarks appear in Section 8.

A preliminary abstract of this paper was published in the proceedings of IFIP Networking 2011 [2].

2. Background

In this section we provide background on compressed HTTP and DPI, and on their time and space requirements. This helps us explain the considerations behind the design of our algorithm, and it is supported by our experimental results described in Section 7.

Compressed HTTP: HTTP 1.1 [3] supports the usage of content-codings to allow a document to be compressed. The RFC suggests three content-codings: GZIP, COMPRESS and DEFLATE. In fact, GZIP uses DEFLATE as its underlying compression protocol; for the purpose of this paper they are considered the same. Currently, GZIP and DEFLATE are the common codings supported by browsers and web servers (analyzing captured packets from recent versions of the Internet Explorer, FireFox and Chrome browsers shows that they accept only the GZIP and DEFLATE codings).

The GZIP algorithm uses a combination of the following compression techniques: first the text is compressed with the LZ77 algorithm, and then the output is compressed with Huffman coding. Let us elaborate on the two algorithms:

LZ77 Compression [4] - The purpose of LZ77 is to reduce the string representation size by spotting repeated strings within the last 32KB of the uncompressed data. The algorithm replaces a repeated string by a backward-pointer consisting of a (distance,length) pair, where distance is a number in [1,32768] (32K) indicating the backward distance in bytes to the prior occurrence of the string, and length is a number in [3,258] indicating the length of the repeated string. For example, the text 'abcdeabc' can be compressed to 'abcde(5,3)'; namely, "go back 5 bytes and copy 3 bytes from that point". LZ77 refers to such a pair as a "pointer" and to uncompressed bytes as "literals".

Huffman Coding [5] - Recall that the second stage of GZIP is Huffman coding, which receives the LZ77 symbols as input. The purpose of Huffman coding is to reduce the symbol coding size by encoding frequent symbols with fewer bits. The Huffman coding method builds a dictionary that assigns to each symbol from a given alphabet a variable-size codeword (coded symbol). The codewords are chosen such that no codeword is a prefix of another, so the end of each codeword can be easily determined. Dictionaries are constructed to facilitate the translation of binary codewords to bytes.

In the case of GZIP, Huffman encodes both literals and pointers. The distance and length parameters are treated as numbers, where each of those numbers is coded with a separate codeword. The Huffman dictionary, which states the encoding of each symbol, is usually added to the beginning of the compressed file (otherwise a predefined dictionary is selected).
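As a small illustration of the prefix-free property (our sketch with a made-up toy dictionary, not GZIP's actual code tables, which use canonical Huffman codes over separate literal/length and distance alphabets), a decoder can consume bits greedily and always knows where each codeword ends:

    # Toy prefix-free dictionary (invented for illustration): frequent
    # symbols get shorter codewords, and no codeword prefixes another.
    CODE = {"0": "e", "10": "a", "110": "b", "111": "(5,3)"}

    def huffman_decode(bits):
        out, cur = [], ""
        for bit in bits:
            cur += bit
            if cur in CODE:        # prefix-freeness: first hit is the codeword
                out.append(CODE[cur])
                cur = ""
        assert cur == "", "input ends in the middle of a codeword"
        return out

    print(huffman_decode("100111010111"))
    # ['a', 'e', '(5,3)', 'e', 'a', '(5,3)']

A production decoder such as zlib replaces this bit-by-bit walk with the lookup tables described next.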
The Huffman decoding process is relatively fast. A common implementation (cf. zlib [6]) extracts the dictionary, whose average size is 200B, into a temporary lookup table that resides in the cache. Frequent symbols require only one lookup-table reference, while less frequent symbols require two.

Deep packet inspection (DPI): DPI is the process of identifying signatures (patterns or regular expressions) in the packet payload. Today, the performance of security tools is dominated by the speed of the underlying DPI algorithms [7]. The two most common algorithms for performing string matching are the Aho-Corasick (AC) [8] and Boyer-Moore (BM) [9] algorithms. The BM algorithm does not have deterministic time complexity and is prone to denial-of-service attacks using tailored input, as discussed in [10]; therefore, the AC algorithm is the standard. Implementations need to deal with thousands of signatures. For example, the ClamAV [11] virus-signature database contains 27,000 patterns, and the popular Snort IDS [12] has 6,600 patterns; note that the number of patterns considered by IDS systems typically grows quite rapidly over time.

The work of [1] accelerated the pattern matching task on compressed traffic but did not handle the space problem, and it still requires the decompression. We show in Section 6 that our technique can be combined with the techniques of [1] to achieve a fast pattern matching algorithm for compressed traffic with moderate space requirement. Specifically, in Section 6 we provide an algorithm that combines our SOP technique with an Aho-Corasick based DPI algorithm. The Aho-Corasick algorithm relies on an underlying Deterministic Finite Automaton (DFA) to support all required patterns. A DFA is represented by a "five-tuple" consisting of a finite set of states, a finite set of input symbols, a transition function, an initial state, and a set of accepting states.
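To illustrate the flavor of AC-based matching, here is a minimal sketch of the classic Aho-Corasick construction (ours, not the paper's or any IDS's implementation; the three-pattern set is invented). Scanning takes linear time regardless of input content, which is exactly the robustness property that BM lacks:

    # Minimal Aho-Corasick sketch: build the goto/failure automaton for a
    # small pattern set, then scan a text with amortized O(1) work per byte.
    from collections import deque

    def build(patterns):
        goto, fail, out = [{}], [0], [set()]
        for p in patterns:                        # phase 1: trie of patterns
            s = 0
            for c in p:
                if c not in goto[s]:
                    goto.append({}); fail.append(0); out.append(set())
                    goto[s][c] = len(goto) - 1
                s = goto[s][c]
            out[s].add(p)
        q = deque(goto[0].values())               # phase 2: BFS failure links
        while q:
            s = q.popleft()
            for c, t in goto[s].items():
                q.append(t)
                f = fail[s]
                while f and c not in goto[f]:     # longest proper suffix that
                    f = fail[f]                   # is also a trie prefix
                fail[t] = goto[f].get(c, 0)
                out[t] |= out[fail[t]]            # inherit matches via suffix
        return goto, fail, out

    def scan(text, goto, fail, out):
        s, hits = 0, []
        for i, c in enumerate(text):
            while s and c not in goto[s]:
                s = fail[s]
            s = goto[s].get(c, 0)
            hits += [(i, p) for p in sorted(out[s])]
        return hits

    automaton = build(["abc", "bcd", "cd"])
    print(scan("xabcdy", *automaton))   # [(3, 'abc'), (4, 'bcd'), (4, 'cd')]

Real IDS rule sets replace the toy patterns with thousands of signatures, which is what makes the automaton's size, and hence the space problem addressed in this paper, significant.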