Rete + DBMS = Efficient Continuous Profile Matching on Large-Volume Data Streams

Total Pages: 16

File Type: pdf, Size: 1020 KB

ARGUS: Rete + DBMS = Efficient Continuous Profile Matching on Large-Volume Data Streams

Chun Jin, Jaime Carbonell
Language Technologies Institute, School of Computer Science, Carnegie Mellon University
5000 Forbes Ave., Pittsburgh, PA 15213
{cjin, [email protected]}

July 6, 2004

Abstract

Most existing data stream management projects focus on applications in which answers are generated at a fairly high rate. We identify an important sub-class of data stream applications, Stream Anomaly Monitoring Systems (SAMS), in which answers are generated very infrequently (i.e. the underlying SQL queries have extremely high selectivity), but may generate very high-urgency alerts when they do match. We exploit this property to produce significant optimization via an extension of the Rete algorithm, minimizing intermediate join sizes. A SAMS monitors transactions or other structured data streams and generates alerts when anomalies or potential hazards are detected. SAMS applications include but are not limited to potential financial crime monitoring, epidemiology tracking for first signs of new outbreaks, and potential fraud analysis on stock market transaction streams. An important secondary objective is to build the SAMS on top of mature DBMS technology, to preserve full DBMS functionality and to permit full modularization of functions, guaranteeing cross-DBMS portability. The difficult part is to accomplish this objective while maximizing performance. But, the very-high-selectivity nature of SAMS applications permits us to achieve both objectives: efficiency and full DBMS compatibility. We introduce the ARGUS prototype SAMS, which exploits very high selectivity via an extension of the Rete algorithm on a DBMS. The primary Rete extensions in our implementation are User-Defined Join Priority and Transitivity Inference. Preliminary experiments show the effectiveness and efficiency of the ARGUS approach over a standard Oracle DBMS. We further investigate several new techniques for Rete optimization: computation sharing, incremental aggregation, time window detection, and index selection.

Keywords: Stream Data, Continuous Query, Rete, Incremental Query Evaluation, Transitivity Inference, Database.

Acknowledgements: This work was funded by the Advanced Research and Development Activity as part of the Novel Intelligence from Massive Data program with contract NMA401-02-C-0033. The views and conclusions are those of the authors, not of the U.S. government or its agencies. We would like to thank Chris Olston for his helpful suggestions and comments, and Phil Hayes, Bob Frederking, Eugene Fink, Cenk Gazen, Dwight Dietrich, Ganesh Mani, Aaron Goldstein, and Johny Mathew for helpful discussions.

Contents

1 Introduction
2 Overview of Stream Anomaly Monitoring Systems
  2.1 SAMS Characteristics
  2.2 SAMS Query Examples
3 Designs of Stream Anomaly Monitoring Systems
  3.1 Alternative Approaches to SAMS
  3.2 Adapted Rete Algorithm
4 ARGUS Profile System Design
  4.1 Database Design
  4.2 Restrictions on the SQL
  4.3 Translating SQL Queries into Rete Networks
    4.3.1 Rete Topology Construction
    4.3.2 Aggregation and Union
5 Improvements on Rete Network
  5.1 User-Defined Join Priority
  5.2 Transitivity Inference
6 Experimental Results
  6.1 Experiment Setting
    6.1.1 Data Sets
    6.1.2 Queries
  6.2 Results Interpretation
    6.2.1 Aggregation
    6.2.2 Transitivity Inference
    6.2.3 Partial Rete Generation
    6.2.4 Using Indexes
7 Related Work
8 Conclusion and Future Work
  8.1 Rete Network Optimization
  8.2 Computation Sharing
  8.3 Incremental Aggregation
  8.4 Using Time Windows
  8.5 Enhancing Transitivity Inference
  8.6 Index Selection
A A Sample Rete Network Procedure for Example 4
B Sample Rete Network DDLs for Example 4
C A Sample Rete Network Procedure for Example 5
D A Sample Rete Network Procedure for Example 3

1 Introduction

As data processing and network infrastructure continue to grow rapidly, data stream processing is quickly becoming possible, in demand, and prevalent. A Data Stream Management System (DSMS) [58] is designed to process continuous queries over data streams. Existing data stream management projects focus on applications in which answers are generated at a fairly high rate. We identify an important sub-class of data stream applications, Stream Anomaly Monitoring Systems (SAMS), in which answers are generated very infrequently (i.e. the underlying SQL queries have extremely high selectivity). We exploit this property to produce significant optimization via an extension of the Rete algorithm, minimizing intermediate join sizes.

A SAMS monitors transaction data streams or other structured data streams and alerts when anomalies or potential hazards are detected. The conditions of anomalies or potential hazards are formulated as continuous queries over data streams, composed by experienced analysts. In a SAMS, data streams are composed of homogeneous records that capture information about transactions or events. For example, a stream could be a stream of money transfer transaction records, stock trading transaction records, or in-hospital patient admission records. A SAMS is expected to periodically process thousands of continuous queries over rapidly growing streams, at a daily volume of millions of records, in a timely manner.

Examples motivating a SAMS can be found in many domains including banking, medicine, and stock trading. For instance, given a data stream of FedWire money transfers, an analyst may want to find linkages among big money transfer transactions connected to suspected people or organizations that may invite further investigation. Given data streams from all the hospitals in a region, a SAMS may help with early alerting of potential disease outbreaks or bio-terrorist events. In the stock trading domain, connections among suspiciously high-profit trading transactions may draw an analyst's attention for a further check on whether insider information is being used illegally.

Compared to traditional On-Line Transaction Processing (OLTP), a SAMS usually runs complex queries with joins and/or aggregations involving multiple facts and data records. Processing data streams dynamically and incrementally, a SAMS also distinguishes itself from On-Line Analytic Processing (OLAP). It has been well recognized that many traditional DBMS techniques are useful for stream data processing. However, traditional DBMSs by themselves are not optimal for stream applications because of performance concerns. While a DSMS benefits from many well-understood DBMS techniques, such as operator implementation methods, indexing techniques, query optimization techniques, and buffer management strategies, a traditional DBMS is not designed for stream processing. A traditional DBMS is assumed to deal with rather stable data relations and volatile queries.
In a stream application, queries are rather stable, persistent, or resident, and the data streams are rapidly changing. It is usual to expect a stream system to cope with thousands of continuous queries and high data rates. Therefore, while utilizing traditional DBMS techniques, many general-purpose stream projects, such as STREAM, TelegraphCQ, Aurora, and NiagaraCQ, are developing stream systems from scratch. However, for specific applications, such as SAMSs, it turns out that we can get away with an implementation on top of a DBMS.

ARGUS is a prototype SAMS that exploits the very-high-selectivity property with the adapted Rete algorithm, and is built upon the platform of a full-fledged traditional DBMS. For illustration purposes in this paper, we assume ARGUS runs in the FedWire Money Transfer domain. However, ARGUS is designed to be general enough to run with any streams.

In ARGUS, a continuous query is translated into a procedural network of operators (selections, binary joins, and aggregations) which operates on streams and relations. Derived (intermediate) streams or relations are conditionally materialized as DBMS tables, with old data truncated to maintain reasonable table sizes. A node of the network, i.e. an operator, is represented as one or more simple SQL queries that perform the incremental evaluation. The whole network is wrapped as a DBMS stored procedure. Registering a continuous query in ARGUS involves two steps: create and initialize the intermediate tables, and install and compile the stored procedure.

In this paper, we first identify the unique characteristics of a SAMS vs. general stream processing, and compare SAMS streams with other types of streams, such as network traffic data, sensor data, and Internet data. Keeping these observations and comparisons in mind, we then discuss alternative approaches to SAMS, including those explored in other stream projects, and justify our choice of the ARGUS system design. We then present a detailed description of our Rete-based ARGUS design. We conclude with performance results and motivate future work.

2 Overview of Stream Anomaly Monitoring Systems

Figure 1 shows the SAMS dataflow. Data records in streams arrive continuously. They are matched against continuous queries registered in the system, and are stored in the database. Old stream data are truncated to keep the database manageable. Analysts selectively formulate the conditions of anomalies or potential hazards as continuous queries, and register them with the SAMS. Registered queries are scheduled for periodic execution over the data streams, and return new results (e.g. alerts), if any.

[Figure 1: SAMS dataflow]
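To make the incremental evaluation described in the Introduction concrete, the following is a minimal, self-contained sketch of one Rete-style join step expressed as SQL over materialized intermediate tables. The table names, the linkage condition, and the use of Python with SQLite (rather than the Oracle DBMS and generated stored procedures that ARGUS actually uses) are illustrative assumptions only, not the ARGUS schema or its generated code.

```python
import sqlite3

# Illustrative sketch only: hypothetical tables and a hypothetical linkage
# condition, run on SQLite for self-containment. It shows the shape of one
# incremental (Rete-style) join step over materialized intermediate tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
-- Materialized output of a selection node (e.g. "big" transfers), split into
-- the accumulated history and the newly arrived delta for this cycle.
CREATE TABLE big_transfers     (tid INTEGER, sender TEXT, receiver TEXT, amount REAL);
CREATE TABLE big_transfers_new (tid INTEGER, sender TEXT, receiver TEXT, amount REAL);
-- Materialized output of the join node: pairs of linked transfers.
CREATE TABLE linked_pairs (t1 INTEGER, t2 INTEGER);
""")

# History already processed in earlier cycles.
cur.executemany("INSERT INTO big_transfers VALUES (?,?,?,?)",
                [(1, "A", "B", 600000.0), (2, "C", "D", 800000.0)])
# Delta: records that passed the selection node in the current cycle.
cur.executemany("INSERT INTO big_transfers_new VALUES (?,?,?,?)",
                [(3, "B", "E", 550000.0)])

# One join-node step: only the delta is joined against the stored history,
# so the full history is never re-joined with itself. (A complete node would
# also handle the delta-vs-delta and symmetric history-vs-delta cases.)
cur.executescript("""
INSERT INTO linked_pairs
SELECT h.tid, d.tid
FROM   big_transfers AS h, big_transfers_new AS d
WHERE  h.receiver = d.sender;

-- Fold the delta into the history and clear it for the next cycle;
-- old rows would periodically be truncated to keep the table small.
INSERT INTO big_transfers SELECT * FROM big_transfers_new;
DELETE FROM big_transfers_new;
""")

print(cur.execute("SELECT * FROM linked_pairs").fetchall())   # -> [(1, 3)]
```

Because each node only ever touches its delta and its stored history, the cost of a cycle scales with the number of new records rather than with the full accumulated stream, which is what makes this approach viable for the very-high-selectivity queries a SAMS runs.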
Recommended publications
  • Drools Expert User Guide
    Drools Expert User Guide, Version 5.5.0.CR1, by The JBoss Drools team [http://www.jboss.org/drools/team.html]. Contents (excerpt): 1. Introduction; 1.1. Artificial Intelligence; 1.1.1. A Little History; 1.1.2. Knowledge Representation and Reasoning; 1.1.3. Rule Engines and Production Rule Systems (PRS); 1.1.4. Hybrid Reasoning Systems (HRS); 1.1.5. Expert Systems; 1.1.6. Recommended Reading; 1.2. Why use a Rule Engine?; 1.2.1. Advantages of a Rule Engine; 1.2.2. When should you use a Rule Engine?; 1.2.3. When not to use a Rule Engine; 1.2.4. Scripting
  • Reactive Rules on the Web
    Reactive Rules on the Web. Bruno Berstel (1), Philippe Bonnard (1), François Bry (2), Michael Eckert (2), and Paula-Lavinia Pătrânjan (2). (1) ILOG, 9, rue de Verdun, 94250 Gentilly, France, {berstel,bonnard}@ilog.fr, http://www.ilog.com. (2) University of Munich, Institute for Informatics, Oettingenstr. 67, 80538 München, Germany, {bry,eckert,patranjan}@pms.ifi.lmu.de, http://www.pms.ifi.lmu.de. Abstract. Reactive rules are used for programming rule-based, reactive systems, which have the ability to detect events and respond to them automatically in a timely manner. Such systems are needed on the Web for bridging the gap between the existing, passive Web, where data sources can only be accessed to obtain information, and the dynamic Web, where data sources are enriched with reactive behavior. This paper presents two possible approaches to programming rule-based, reactive systems. They are based on different kinds of reactive rules, namely Event-Condition-Action rules and production rules. Concrete reactive languages of both kinds are used to exemplify these programming paradigms. Finally the similarities and differences between these two paradigms are studied. 1 Introduction. Reactivity on the Web, the ability to detect events and respond to them automatically in a timely manner, is needed for bridging the gap between the existing, passive Web, where data sources can only be accessed to obtain information, and the dynamic Web, where data sources are enriched with reactive behavior. Reactivity is a broad notion that spans Web applications such as e-commerce platforms that react to user input (e.g., putting an item into the shopping basket), Web Services that react to notifications or service requests (e.g., SOAP messages), and distributed Web information systems that react to updates in other systems elsewhere on the Web (e.g., update propagation among biological Web databases).
  • Production Systems and Rete Algorithm Formalisation Horatiu Cirstea, Claude Kirchner, Michael Moossen, Pierre-Etienne Moreau
    Production Systems and Rete Algorithm Formalisation. Horatiu Cirstea, Claude Kirchner, Michael Moossen, Pierre-Etienne Moreau. To cite this version: Horatiu Cirstea, Claude Kirchner, Michael Moossen, Pierre-Etienne Moreau. Production Systems and Rete Algorithm Formalisation. [Research Report] 2004, pp. 21. HAL Id: inria-00280938, https://hal.inria.fr/inria-00280938, submitted on 20 May 2008. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Non-intrusive meta-compilation of matching by constraints (Méta-compilation non intrusive du filtrage par contraintes), Sub-Project 1: RETE and rewriting (RETE et réécriture). Production Systems and Rete Algorithm Formalisation. Description: The Rete algorithm is a well-known algorithm for efficiently addressing the many patterns/many objects match problem, and it has been widely used and implemented in several applications, mainly production systems. But despite the wide usage of production systems and the Rete algorithm, to the best of our knowledge there has been just one proposal for a formal definition of the Rete algorithm, given by Fages and Lissajoux [FL92], and no attempt to give a formal description of production systems as a whole, giving rise to many ambiguities and incompatibilities between the different implementations.
  • Fuzzy Dynamic Discrimination Algorithms for Distributed Knowledge Management Systems
    Annals of "Dunarea de Jos" University of Galati, Fascicle I – 2010, Economics and Applied Informatics, Years XVI, no 2, ISSN 1584-0409. Fuzzy Dynamic Discrimination Algorithms for Distributed Knowledge Management Systems. Vasile MAZILESCU, Dunarea de Jos University of Galati, [email protected]. Abstract. A reduction of the algorithmic complexity of the fuzzy inference engine has the following property: the inputs (the fuzzy rules and the fuzzy facts) can be divided into two parts, one being relatively constant for a long time (the fuzzy rules, or the knowledge model) when compared to the second part (the fuzzy facts), which changes with every inference cycle. Applying certain transformations to the constant part makes sense, in order to decrease the solution procurement time, in the case that the second part varies but is known at certain moments in time. The transformations performed in advance are called pre-processing or knowledge compilation. The use of variables in a Business Rule Management System knowledge representation allows factorising knowledge, as in classical knowledge-based systems. The language of first-degree predicates facilitates the formulation of complex knowledge in a rigorous way, imposing appropriate reasoning techniques. It is, thus, necessary to define the description method of fuzzy knowledge, to justify the knowledge exploitation efficiency when the compiling technique is used, to present the inference engine and highlight the functional features of the pattern matching and the state space processes. This paper presents the main results of our project PR356 for designing a compiler for fuzzy knowledge, like a Rete compiler, that comprises two main components: a static fuzzy discrimination structure (Fuzzy Unification Tree) and the Fuzzy Variables Linking Network.
  • Using Cloud Computing with RETE Algorithms in a Platform as a Service (PaaS) for Business Systems Development
    Using Cloud Computing with RETE Algorithms in a Platform as a Service (PaaS) for Business Systems Development. Richard Poblete (1), David de la Fuente (2), and Margarita Alonso (3). (1, 3) Department of Business Administration, Cantabria University, Santander, Spain, [email protected]. (2) Department of Business Administration, Oviedo University, Gijón, Asturias, Spain, [email protected]. Abstract - Given that companies operate in an environment of continuous evolution, they require information systems to have greater adaptation, flexibility, scalability and reliability in incorporating these changes. Cloud Computing can be regarded as a paradigm and key element of recent computing in business management solutions. A business solution arises in the cloud; this solution includes both the environmental dynamics and the changes needed in the business logic, assuring speed of response, efficiency in the solution and maintenance of the application. This article shows a design model of SaaS applications based on a platform (PaaS) which has the business logic as its central element and is subdivided in [...] this technology is Virtualization, and its model of operative business is varied and can include: Pay per Use Models; Free Models; Fee for Service Models and Free Use Models, sharing utilities. 1.1 Cloud Computing. Cloud Computing is defined by the way in which the service provider and the consumer of that service are related. In this context it is very common that the consumers are not the owners of the infrastructure; they do not incur CAPEX, since they are under a modality of rent and use from a provider.
  • Evaluation and Implementation of Match Algorithms for Rule-Based Multi-Agent Systems Using the Example of Jadex
    Universität Hamburg, Fakultät für Mathematik, Informatik und Naturwissenschaften. Evaluation and Implementation of Match Algorithms for Rule-based Multi-Agent Systems using the Example of Jadex. Master's Thesis (Diplomarbeit), Eryk Lagun, [email protected]. Degree program: Informatik (Computer Science); student number: 5318142; semester: 16; first examiner: Dr. Lars Braubach; second examiner: Dr. Guido Gryczan. Contents (excerpt): 1 Introduction; 1.1 Motivation and Goals; 1.2 Context and Scope; 1.3 Intended Readers; 1.4 Outline; 2 Background; 2.1 Rule-based Systems; 2.1.1 Definition; 2.1.2 Rule Execution; 2.1.3 Recognize-Act Cycle
  • Eclipse: a C-Based Inference Engine Embedded in a Lisp Interpreter Brian Beach; Randolph Splitter Software and Systems Laboratory HPL-90-213 December, 1990
    HEWLETT PACKARD. Eclipse: A C-based Inference Engine Embedded in a Lisp Interpreter. Brian Beach; Randolph Splitter. Software and Systems Laboratory, HPL-90-213, December 1990. Keywords: expert systems; software engineering environment. Artificial intelligence began on the far-out fringe of computer science and has been progressing more and more into the mainstream of software engineering through the use of AI components, such as inference engines, that can be embedded in programs written in more traditional languages. Eclipse takes this progression one step further by allowing the use of Lisp in combination with rules and traditional C programming. We are currently making use of Eclipse in our work on a team software development environment. (c) Copyright Hewlett-Packard Company 1990. 1 Introduction. Eclipse, the subject of this paper, is one component of an object-based software development environment (SDE) that is being developed at Hewlett-Packard Labs in Palo Alto. This SDE is designed to support teams of developers building large software systems. In addition to providing a shared object base of project-wide information, the environment provides rule-based support for managing the software development process. These rules create an active database that responds to user actions, checks constraints and automates repetitive tasks. A typical constraint rule might require that a software module pass certain tests before being included in a release. For reasons of efficiency, the rules are attached not to the persistent, project-wide object base but rather to a local cache of objects in the virtual memory of each developer's workstation.
  • Memory Efficient Stream Reasoning on Resource-Limited Devices
    UNIVERSITY OF DUBLIN, TRINITY COLLEGE. Memory Efficient Stream Reasoning on Resource-Limited Devices, by Dylan Armstrong. Supervisor: Declan O'Sullivan; Assistant Supervisor: Wei Tai. A dissertation submitted in partial fulfillment for the degree of Master in Computer Science. Submitted to the University of Dublin, Trinity College, May 2014. Declaration of Authorship: I, Dylan Armstrong, declare that the following dissertation, except where otherwise stated, is entirely my own work; that it has not previously been submitted as an exercise for a degree, either in Trinity College Dublin, or in any other University; and that the library may lend or copy it or any part thereof on request. Summary. The first goal of this research was to determine which of two reasoning algorithms could be adapted for use in a streaming reasoning system with memory constraints. The two algorithms investigated were TREAT and LEAPS. Once achieved, the primary goal was then to determine whether an implementation of the chosen algorithm provided a more efficient option in terms of memory consumption in a resource constrained device in comparison to an existing implementation of the RETE algorithm. This paper first illustrates the motivation for this research by providing context for why and when memory constrained devices are used in reasoning. Standard approaches usually attempt to perform reasoning on a centralised machine, sending data from sensor devices through the web or by other wired or wireless technologies. However, in some conditions this may not be feasible, or may be restrictive. In these cases reasoning can be performed on the sensor device nodes themselves.
  • Chapter 9: Inference in First-Order Logic
    The Rete algorithm was the first to address this problem. The algorithm preprocesses the set of rules in the knowledge base to construct a dataflow network in which each node is a literal from a rule premise. Variable bindings flow through the network and are filtered out when they fail to match a literal. If two literals in a rule share a variable—for example, Sells(x,y,z) ∧ Hostile(z) in the crime example—then the bindings from each literal are filtered through an equality node. A variable binding reaching a node for an n-ary literal such as Sells(x,y,z) might have to wait for bindings for the other variables to be established before the process can continue. At any given point, the state of a Rete network captures all the partial matches of the rules, avoiding a great deal of recomputation. Rete networks, and various improvements thereon, have been a key component of so-called production systems, which were among the earliest forward-chaining systems in widespread use. The XCON system (originally called R1; McDermott, 1982) was built with a production-system architecture. XCON contained several thousand rules for designing configurations of computer components for customers of the Digital Equipment Corporation. It was one of the first clear commercial successes in the emerging field of expert systems. Many other similar systems have been built with the same underlying technology, which has been implemented in the general-purpose language OPS-5. Production systems are also popular in cognitive architectures—that is, models of human reasoning—such as ACT (Anderson, 1983) and SOAR (Laird et al., 1987). In such systems, the "working memory" of the system models human short-term memory, and the productions are part of long-term memory.
  • Research of the Rule Engine Based on XML
    3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 2016). Research of the Rule Engine based on XML. Zhao Ni (1), Lifang Bai (2). (1, 2) Zhengzhou Institute of Information Science and Technology, Henan, 450000, China. Email: [email protected], [email protected]. Keywords: Pattern matching algorithm; XML; Rule engine; Rete network. Abstract. Studying the implementation mechanisms of current rule engines, an XML rule format is put forward to address the poor predicate extensibility and rule reuse of traditional rule descriptions; it can describe the detailed interface information called by entity classes in rules. An improved Rete network corresponding to this rule description is also proposed, forming a new pattern matching algorithm based on XML-format rules. Introduction. Rule engines originated from production systems. A rule engine can be regarded as a component of the application which separates the business rules or logic from the program code and expresses business rules with a predefined semantic module [1]. The application field of rule engines is very extensive, including banking, telecommunications billing, game software, business management, etc. At present, the open-source business rules engine Drools (JBoss) is the most representative rule engine [2]. The pattern matching algorithm is the core algorithm in a rule engine. This paper focuses on the Rete pattern matching algorithm, which is the basis for the shells of many production systems, including CLIPS (C Language Integrated Production System), JESS, Drools and Soar [3].
  • Rete Algorithm
    Rete algorithm. The Rete algorithm (/ˈriːtiː/ REE-tee or /ˈreɪtiː/ RAY-tee, rarely /ˈriːt/ REET or /rɛˈteɪ/ re-TAY) is a pattern matching algorithm for implementing production rule systems. It is used to determine which of the system's rules should fire based on its data store. Contents: 1 Overview; 2 Description (2.1 Alpha network; 2.2 Beta network; 2.3 Conflict resolution; 2.4 Production execution; 2.5 Existential and universal quantifications; 2.6 Memory indexing; 2.7 Removal of WMEs and WME lists; 2.8 Handling ORed conditions; 2.9 Diagram); 3 Miscellaneous considerations; 4 Optimization and performance; 5 Rete II; 6 Rete-III; 7 Rete-NT; 8 See also; 9 References; 10 External links. Overview. A naive implementation of an expert system might check each rule against known facts in a knowledge base, firing that rule if necessary, then moving on to the next rule (and looping back to the first rule when finished). For even moderately sized rule and fact knowledge bases, this naive approach performs far too slowly. The Rete algorithm provides the basis for a more efficient implementation. A Rete-based expert system builds a network of nodes, where each node (except the root) corresponds to a pattern occurring in the left-hand side (the condition part) of a rule. The path from the root node to a leaf node defines a complete rule left-hand side. Each node has a memory of facts which satisfy that pattern. This structure is essentially a generalized trie.
  • Chapter 12 – Business Rules and DROOLS
    SOFTWARE ARCHITECTURES, Chapter 12 – Business Rules and DROOLS. Katanosh Morovat. Introduction. In the recent decade, the information systems community introduced a new concept called business rules. This new concept is a formal approach for identifying the rules that encapsulate structure, constraints, and control of operations as one package. Before the advent of this concept, system analysts were able to describe the structure of the data and the functions that manipulate these data, but the constraints were almost always neglected. Business rules are statements that precisely describe, constrain, and control the structure, operations and strategies of a business in an organization. Other terms that come with business rules are business rules engine and business rules management system. A business rules engine, which is a component of a business rules management system, is a software system that executes a set of business rules. A business rules management system monitors and maintains the variety and complexity of decision logic that is used by operational systems within an organization or enterprise. This logic is referred to as business rules. One of the more widely used business rules management systems is Drools, more correctly known as a production rule system. Drools uses an enhanced implementation of the Rete algorithm. Drools supports the JSR-94 standard for its business rules engine and provides an enterprise framework for the construction, maintenance, and enforcement of business policies in an organization, application, or service. This paper describes the nature of business rules, business rules engines, and business rules management systems. It also provides some information about Drools, a software system prepared by the JBoss Community that includes several projects, and about different products made by Logitech which have used the business rules method.
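Several of the entries above, notably the first-order logic textbook excerpt and the Rete algorithm summary, describe the same core mechanism: a dataflow network whose nodes keep memories of matching facts, so that a newly asserted fact is tested only against stored partial matches rather than against the whole fact base. The following is a minimal, engine-agnostic sketch of that idea in Python, using the hypothetical Sells/Hostile rule mentioned in the textbook excerpt; all names are illustrative, and this is not the API of Drools, CLIPS, or any other engine discussed above.

```python
from collections import defaultdict

# Hypothetical two-condition rule: Sells(x, y, z) AND Hostile(z) -> match (x, y, z).
class TinyRete:
    def __init__(self):
        self.sells_by_z = defaultdict(list)  # alpha memory for Sells facts, indexed on z
        self.hostile = set()                 # alpha memory for Hostile facts
        self.matches = []                    # terminal node: complete rule instantiations

    def assert_fact(self, fact):
        kind, *args = fact
        if kind == "Sells":                  # alpha node for Sells(x, y, z)
            x, y, z = args
            self.sells_by_z[z].append((x, y, z))
            if z in self.hostile:            # beta join against stored Hostile facts only
                self.matches.append((x, y, z))
        elif kind == "Hostile":              # alpha node for Hostile(z)
            (z,) = args
            self.hostile.add(z)
            for partial in self.sells_by_z.get(z, []):  # join against stored Sells facts only
                self.matches.append(partial)

net = TinyRete()
net.assert_fact(("Sells", "West", "M1", "Nono"))
net.assert_fact(("Hostile", "Nono"))
net.assert_fact(("Sells", "West", "M2", "Nono"))  # only this new fact is re-checked
print(net.matches)  # [('West', 'M1', 'Nono'), ('West', 'M2', 'Nono')]
```

Asserting the third fact only consults the cached Hostile memory; nothing already matched is recomputed, which is exactly the saving that full Rete implementations generalize to arbitrary rule sets.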