ASPEN: A Scalable In-SRAM Architecture for Pushdown Automata

Kevin Angstadt,∗ Arun Subramaniyan,∗ Elaheh Sadredini,† Reza Rahimi,† Kevin Skadron,† Westley Weimer,∗ Reetuparna Das∗
∗Computer Science and Engineering, University of Michigan, Ann Arbor, MI USA
†Department of Computer Science, University of Virginia, Charlottesville, VA USA
{angstadt, arunsub, weimerw, [email protected]}  {elaheh, rahimi, [email protected]}

Abstract—Many applications process some form of tree-structured or recursively-nested data, such as parsing XML or JSON web content as well as various data mining tasks. Typical CPU processing solutions are hindered by branch misprediction penalties while attempting to reconstruct nested structures and also by irregular memory access patterns. Recent work has demonstrated improved performance for many data processing applications through memory-centric automata processing engines. Unfortunately, these architectures do not support a computational model rich enough for tasks such as XML parsing. In this paper, we present ASPEN, a general-purpose, scalable, and reconfigurable memory-centric architecture for processing of tree-like data. We take inspiration from previous automata processing architectures, but support the richer deterministic pushdown automata computational model. We propose a custom datapath capable of performing the state matching, stack manipulation, and transition routing operations of pushdown automata, all efficiently stored and computed in memory arrays. Further, we present compilation algorithms for transforming large classes of existing grammars to pushdown automata executable on ASPEN, and demonstrate their effectiveness on four different languages: Cool (object-oriented programming), DOT (graph visualization), JSON, and XML.
Finally, we present an empirical evaluation of two application scenarios for ASPEN: XML parsing, and frequent subtree mining. The proposed architecture achieves an average 704.5 ns per KB parsing XML compared to 9983 ns per KB in a state-of-the-art XML parser across 23 benchmarks. We also demonstrate a 37.2x and 6x better end-to-end speedup over CPU and GPU implementations of subtree mining.

Index Terms—pushdown automata, emerging technologies (memory and computing), accelerators

I. INTRODUCTION

Processing of tree-structured or recursively-nested data is intrinsic to many computational applications. Data serialization formats such as XML and JSON are inherently nested (with opening and closing tags or braces, respectively), and structures in programming languages, such as arithmetic expressions, form trees of operations. Further, the grammatical structure of English text is tree-like in nature [1]. Reconstructing and validating tree-like data is often referred to as parsing. Studies on data processing and analytics in industry demonstrate both increased rates of data collection and also increased demand for real-time analyses [2], [3]. Therefore, scalable and high-performance techniques for parsing and processing data are needed to keep up with industrial demand.

Unfortunately, parsing is an extremely challenging task to accelerate and falls within the "thirteenth dwarf" in the Berkeley parallel computation taxonomy [4]. Software parsing solutions often exhibit irregular data access patterns and branch mispredictions, resulting in poor performance. Custom accelerators exist for particular parsing applications (e.g., for parsing XML [5]), but do not generalize to multiple important problems.

We observe that deterministic pushdown automata (DPDA) provide a general-purpose computational model for processing tree-structured data. Pushdown automata extend basic finite automata with a stack. State transitions are determined by both the next input symbol and also the top-of-stack value. Determinism precludes stack divergence (i.e., simultaneous transitions never result in different stack values) and admits efficient hardware implementation. While somewhat restrictive, we demonstrate that DPDAs are powerful enough to parse most programming languages and serialization formats as well as mine for frequent subtrees within a dataset.

In this paper, we present ASPEN, the Accelerated in-SRAM Pushdown ENgine, a realization of deterministic pushdown automata in the last-level cache (LLC). Our design is based on the insight that much of the DPDA processing can be architected as LLC SRAM array lookups without involving the CPU. By performing DPDA computation in-cache, ASPEN avoids conventional CPU overheads such as random memory accesses and branch mispredictions. Execution of a DPDA with ASPEN is divided into five stages: (1) input symbol match, (2) stack symbol match, (3) state transition, (4) stack action lookup, and (5) stack update, with each stage making use of SRAM arrays to encode matching and transition operations.

To scale to large DPDAs with thousands of states, ASPEN adopts a hierarchical architecture while still processing one input symbol in one cycle. Further, ASPEN supports processing of hundreds of different DPDAs in parallel, as any number of LLC SRAM arrays can be re-purposed for DPDA processing. This feature is critical for applications such as frequent subtree mining, which require parsing several trees in parallel.

To support direct adaptation of a large class of legacy parsing applications, we implement a compiler for converting existing grammars for common parser generators to DPDAs executable by ASPEN. We propose two key optimizations for improving the runtime of parsers on ASPEN. First, the architecture supports popping a reconfigurable number of values from the stack in a single cycle, a feature we call multipop. Second, our compiler implements a state merging algorithm that reduces chains containing ε-transitions. Both of these optimizations reduce stalls in input symbol processing.

To summarize, this work makes the following contributions:
  • We propose ASPEN, a scalable execution engine which re-purposes LLC slices for DPDA acceleration.
  • We develop a custom datapath for DPDA processing using SRAM array lookups. ASPEN implements state matches, state transitions, and stack updates, includes efficient multipop support, and can parse one token per cycle.
  • We develop an optimizing compiler for transforming existing language grammars into DPDAs. Our compiler optimizations reduce the number of stalled cycles during execution. We demonstrate this compilation on four different languages: Cool (object-oriented programming), DOT (graph visualization), JSON, and XML.
  • We empirically evaluate ASPEN on two application scenarios: a tightly coupled XML tokenizer and parser pipeline and also a highly parallelized subtree miner. First, results demonstrate an average of 704.5 ns per KB parsing XML compared to 9983 ns per KB in a state-of-the-art XML parser across 23 XML benchmarks. Second, we demonstrate 37.2× and 6× better end-to-end speedup over CPU and GPU implementations of subtree mining.

II. BACKGROUND AND MOTIVATION

In this section, we review automata theory relevant to ASPEN. We then introduce two real-world applications that motivate accelerated pushdown automata execution.

A. Automata Primer

A non-deterministic finite automaton (NFA) is a state machine represented by a 5-tuple, (Q, Σ, δ, S, F), where Q is a finite set of states, Σ is a finite set of symbols, δ is a transition function, S ⊆ Q are initial states, and F ⊆ Q is a set of final or accepting states. The transition function determines the next states using the current active states and the next input symbol. If the automaton

In a DPDA, at most one transition is possible for a given configuration of the DPDA and an input symbol. This restriction prevents stack divergence, a property we leverage for efficient implementation in hardware. Some transitions perform stack operations without considering the next input symbol, and we refer to these transitions as epsilon- or ε-transitions. To maintain determinism, all ε-transitions take place before transitions considering the next input symbol.

Unlike basic finite automata, where non-deterministic and deterministic machines have the same representative power (any NFA has an equivalent DFA and vice versa), DPDAs are strictly weaker than PDAs [7]. DPDAs, however, are still powerful enough to parse most programming languages and serialization formats as well as mine for frequent subtrees within a dataset. We leave the exploration of hardware implementations of PDAs for future work.

For hardware efficiency, we extend the definition of homogeneous finite automata to DPDAs. In a homogeneous DPDA (hDPDA), all transitions to a state occur on the same input character, stack comparison, and stack operation. Concretely, for any q, q′, p, p′ ∈ Q, σ, σ′ ∈ Σ, γ, γ′ ∈ Γ, and op, op′ that are operations on the stack, if δ(q, σ, γ) = (p, op) and δ(q′, σ′, γ′) = (p′, op′), then

  p = p′ ⇒ σ = σ′ ∧ γ = γ′ ∧ op = op′.

This restriction on the transition function does not limit computational power, but may increase the number of states needed to represent a particular computation.

Claim 1. Given any DPDA A = (Q, Σ, Γ, δ, S, F), the number of states in an equivalent hDPDA is bounded by O(|Σ||Q|²).

Proof. We consider the worst case: A is fully-connected with |Σ|⋅|Q| incident edges to each state, and each of these incoming edges performs a different set of input/stack matches and stack operations. Therefore, we must duplicate each node |Σ|(|Q| − 1) times to ensure the homogeneity property. For any node q ∈ Q, we add |Σ|⋅|Q| copies of q to the equivalent hDPDA, one node for each of the different input/stack operations on
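The NFA 5-tuple described above can be made concrete with a short software sketch. The following Python example is ours, not the paper's: it simulates an illustrative NFA over {a, b} that accepts strings ending in "ab", tracking the set of active states exactly as the definition prescribes.

```python
# Illustrative Python sketch of the NFA 5-tuple (Q, Σ, δ, S, F) defined
# above; the machine and all names are our own example, not the paper's.
# This NFA accepts strings over {a, b} that end in "ab".

Q = {"s0", "s1", "s2"}
SIGMA = {"a", "b"}
DELTA = {                       # δ: (state, symbol) -> set of next states
    ("s0", "a"): {"s0", "s1"},  # non-deterministic choice on 'a'
    ("s0", "b"): {"s0"},
    ("s1", "b"): {"s2"},
}
S = {"s0"}                      # initial states
F = {"s2"}                      # final (accepting) states

def accepts(text):
    active = set(S)             # all currently active states
    for ch in text:             # apply δ to every active state at once
        active = set().union(*(DELTA.get((q, ch), set()) for q in active))
    return bool(active & F)     # accept if any active state is accepting

print(accepts("aab"))   # True
print(accepts("aba"))   # False
```

Note that the simulation never branches on which transition to take: it carries the whole active set forward, which is also the intuition behind parallel in-memory automata processing.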
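The five-stage DPDA execution flow described above (input symbol match, stack symbol match, state transition, stack action lookup, stack update) can be sketched in software. The Python below is a minimal illustrative model of those stages, not ASPEN's hardware datapath; the example machine, which recognizes balanced parentheses, is our own.

```python
# Minimal software sketch of the five DPDA processing stages described
# above (illustrative only; ASPEN performs these as SRAM array lookups).
# The example DPDA recognizes balanced parentheses.

BOTTOM = "$"  # bottom-of-stack marker

# Transition table: (state, input symbol, top of stack) -> (next state, stack action).
# Stack actions: ("push", x) pushes x; ("pop", k) pops k symbols.
DELTA = {
    ("q0", "(", BOTTOM): ("q0", ("push", "(")),
    ("q0", "(", "("):    ("q0", ("push", "(")),
    ("q0", ")", "("):    ("q0", ("pop", 1)),
}

def run_dpda(text, start="q0", accepting=("q0",)):
    state, stack = start, [BOTTOM]
    for ch in text:                               # one input symbol per cycle
        key = (state, ch, stack[-1])              # stages 1-2: input + stack match
        if key not in DELTA:
            return False                          # no transition: reject
        state, action = DELTA[key]                # stage 3: state transition
        op = action[0]                            # stage 4: stack action lookup
        if op == "push":                          # stage 5: stack update
            stack.append(action[1])
        elif op == "pop":
            del stack[-action[1]:]                # pop k symbols in one step
    return state in accepting and stack == [BOTTOM]

print(run_dpda("(()())"))  # True
print(run_dpda("(()"))     # False
```

Because the table has at most one entry per (state, input, stack-top) configuration, the machine is deterministic: the stack never diverges.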
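The hDPDA homogeneity condition above (p = p′ implies the two transitions agree on input symbol, stack comparison, and stack operation) is easy to check mechanically. The following Python sketch and its toy transition tables are our own illustration of the condition.

```python
# Illustrative check of the hDPDA homogeneity condition defined above:
# all transitions into the same state must agree on input symbol, stack
# comparison, and stack operation. Tables and names are our own examples.

def is_homogeneous(delta):
    """delta maps (state, input symbol, stack symbol) -> (next state, stack op)."""
    required = {}  # next state p -> the (σ, γ, op) homogeneity forces on it
    for (q, sigma, gamma), (p, op) in delta.items():
        if p in required and required[p] != (sigma, gamma, op):
            return False          # two transitions into p disagree
        required[p] = (sigma, gamma, op)
    return True

homogeneous = {
    ("q0", "(", "$"): ("q1", ("push", "(")),
    ("q2", "(", "$"): ("q1", ("push", "(")),   # same (σ, γ, op) into q1
}
mixed = {
    ("q0", "(", "$"): ("q1", ("push", "(")),
    ("q2", ")", "("): ("q1", ("pop", 1)),      # disagrees on σ, γ, and op
}
print(is_homogeneous(homogeneous))  # True
print(is_homogeneous(mixed))        # False
```

A non-homogeneous table like `mixed` can always be made homogeneous by duplicating the shared target state, one copy per distinct (σ, γ, op) triple, which is the construction behind the O(|Σ||Q|²) bound in Claim 1.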
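The benefit of the multipop optimization described in the introduction can be seen with a toy step-count model. The counts below are our own illustrative assumptions, not measurements from the paper.

```python
# Toy model of the "multipop" optimization described above: popping k
# values from the stack in one cycle instead of k single-pop cycles.
# Cycle counts here are illustrative assumptions, not measurements.

def cycles_without_multipop(pop_sizes):
    return sum(pop_sizes)   # one cycle per popped value

def cycles_with_multipop(pop_sizes):
    return len(pop_sizes)   # one cycle per pop request, regardless of size

# e.g., numbers of symbols popped as a parser reduces grammar rules
reductions = [5, 1, 3, 2]
print(cycles_without_multipop(reductions))  # 11
print(cycles_with_multipop(reductions))     # 4
```

The gap widens with deeply nested inputs, where a single reduction may discard many stack symbols at once; this is why multipop reduces stalls in input symbol processing.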
