Database Decomposition Into Fourth Normal Form


DATABASE DECOMPOSITION INTO FOURTH NORMAL FORM

Gösta Grahne and Kari-Jouko Räihä
University of Helsinki, Department of Computer Science
Tukholmankatu 2, SF-00250 Helsinki 25, Finland

This work was supported by the Academy of Finland.

Abstract. We present an algorithm that decomposes a database scheme when the dependency set contains functional and multivalued dependencies. The schemes in the resulting decomposition are in fourth normal form and have a lossless join. Our algorithm does not impose restrictions on the allowed set of dependencies, and it never requires the computation of the full closure of the dependency set. Furthermore, the algorithm works in polynomial time for classes of dependencies that properly contain conflict-free dependency sets.

1. INTRODUCTION

The structure of a relational database can be defined by a universal relation scheme U and by a set of dependencies D. Relations that conform to the universal scheme may contain much redundant information and have undesirable update properties [Dat81]. It is therefore customary to decompose the universal scheme into subschemes R1, ..., Rk such that U = R1 ∪ ... ∪ Rk. The collection {R1, ..., Rk} is called the database scheme. Universal relations can be represented as their projections onto the relation schemes in the database scheme.

A good decomposition should have several properties [BBG78]:

(1) Minimal redundancy: the relation schemes should represent meaningful objects so that, for instance, they can be updated without knowing the values for all the attributes in the universal scheme. This is usually taken to mean that the relation schemes should be in one of various normal forms.

(2) Representation: it should be possible to recover the universal relation from the component relations, i.e. the decomposition should have a lossless join.

(3) Separation: when a (projected) relation is updated, it should be possible to ensure that the dependencies in the universal relation still hold after the update without actually constructing the universal relation. This condition is satisfied if the decomposition is dependency preserving, i.e. if the projected dependencies imply the original set D.

There exists a wide variety of dependency types, each one giving rise to different sets of normal forms. The two most common dependency types are functional dependencies and multivalued dependencies. The normal forms associated with functional dependencies are third normal form (3NF) [Cod72] and Boyce-Codd normal form (BCNF) [Cod74], whereas fourth normal form (4NF) [Fag77] is defined for multivalued dependencies. The requirements for these normal forms increase from 3NF to 4NF, i.e. 4NF implies BCNF, which implies 3NF.

Unfortunately, it is not always possible to find a decomposition having all these desirable properties. In particular, there exist relation schemes (or rather dependencies) that do not have a dependency preserving decomposition into BCNF or 4NF [BeB79]. The question of what to do when a good decomposition does not exist is controversial (see e.g. [BBG78]). Especially the validity of the universal relation assumption has given rise to much debate (cf. [AtP82, Ull82a]).

It is, of course, important to assess how well each of the three properties meets its intended goals. But this is not enough. Even if it is possible to decompose a relation scheme into subschemes that satisfy some set of properties, the method will be of little use if the decomposition algorithm is prohibitively expensive. In this paper we will focus our attention on a problem that is known to be always solvable: the problem of decomposing a scheme U into a set of subschemes {R1, ..., Rk} such that R1, ..., Rk have a lossless join and each Ri is in normal form.

The complexity of the problem depends heavily on the desired normal form. For 3NF a simple polynomial time synthesis algorithm was presented in [BDB79]. Finding a lossless decomposition into BCNF is more difficult, but an algorithm working in polynomial time was recently given by Tsou and Fischer [TsF80]. Their approach does not generalize to multivalued dependencies, and the complexity of lossless decomposition into 4NF is open. We shall give an algorithm that is more efficient than existing general algorithms, and that works in polynomial time for a broader class of dependencies than existing polynomial algorithms.

There exist some algorithms for performing the decomposition efficiently, but none of these algorithms achieves all the desired properties. Lien's algorithm [Lie82] produces a lossless decomposition into 4NF schemes in polynomial time, but it only works for so-called conflict-free dependency sets. Sciore's method [Sci81] works under the same restriction. Although this is a practical subclass of multivalued dependencies, the general case (unrestricted dependencies) still remains an intriguing open question. Similarly, da Silva's algorithm [daS80] produces a lossless decomposition into schemes that are in 4NF as far as D is concerned; however, there may still exist derived dependencies in D+ that violate the normal form conditions.

We use the notation and terminology of [Ull82b].

2. PREVIOUS WORK

There is a straightforward way of finding a lossless decomposition into 4NF. Suppose U has been replaced by a set of relation schemes R1, ..., Rk (initially k = 1 and R1 = U). Suppose further that some Ri is not in 4NF. This means that there exists a dependency X →→ Y in D+ whose projection onto Ri is nontrivial (i.e. ∅ ≠ X ⊆ Ri, Y ∩ X = ∅, and ∅ ≠ Y ∩ Ri ≠ Ri − X) such that X is not a key of Ri. Then Ri is replaced by XY ∩ Ri and X ∪ (Ri − Y). This guarantees the losslessness of the join [Fag77, ABU79], and the repeated application produces a set of schemes in 4NF.

This algorithm has several efficiency problems. First, it is known that testing whether a relation scheme is in BCNF is NP-complete [BeB79]. Therefore testing the 4NF property is also computationally intractable. In their algorithm for functional dependencies [TsF80], Tsou and Fischer have circumvented this problem by testing only a sufficient BCNF condition, not a sufficient and necessary one. In other words, if a nontrivial dependency is found in Ri, then Ri is decomposed regardless of whether it actually is in BCNF or not. The same idea can be applied to multivalued dependencies and 4NF also. (If the set of dependencies includes only multivalued dependencies, the absence of nontrivial dependencies is a sufficient and necessary condition for 4NF.)

The problem then becomes that of finding a nontrivial dependency in some Ri. Fagin [Fag77] proposed to compute the dependency closure D+ before running the decomposition algorithm. Then it would be sufficient just to scan through the precomputed set when searching for some nontrivial dependency X →→ Y. Unfortunately, the size of the closure D+ can be exponential [Ull82b], thereby ruining the efficiency of this approach. Moreover, Lien has shown in [Lie81] that if such a precomputed set of dependencies is used in the decomposition algorithm, then that set must be the closure D+: if any smaller set is used, the algorithm may not work correctly.

Therefore the nontrivial dependency X →→ Y must be found differently. Another approach would be to avoid computing D+ and to derive the dependencies on demand from the initial set D. Indeed, if there are any nontrivial dependencies in Ri, then at least one of them can be found efficiently using a result of [HIT79].

But then we are faced with the problem that the decomposition tree may grow too large. If we use an arbitrary nontrivial dependency X →→ Y for decomposing Ri, neither of the new schemes (XY ∩ Ri and X ∪ (Ri − Y)) is necessarily in 4NF. Therefore both of them must be decomposed further. In each decomposition step the new subschemes must have fewer attributes than the decomposed scheme Ri, but the difference may be only one attribute. Thus the decomposition tree may have |U| levels, and in the case of a full binary tree this means 2^|U| − 1 decomposition steps.

This indicates that instead of an arbitrary nontrivial dependency X →→ Y, one should find a dependency that is in a sense as "small" as possible. That is, the subscheme XY ∩ Ri should not contain any nontrivial dependencies. This would guarantee that XY ∩ Ri is in 4NF, and the decomposition tree would degenerate into a linear tree with O(|U|) nodes. In the case of functional dependencies, such a "smallest" X → Y can be found in polynomial time. Tsou and Fischer start with an arbitrary nontrivial dependency X → Y. Then they repeatedly test whether some nontrivial dependency X' → Y' still holds in XY ∩ Ri; if so, they throw out the attributes in (XY ∩ Ri) − X'Y'. This is possible, since the properties of functional dependencies imply that X' → Y' also holds in the reduced subscheme.

Finding efficiently the smallest nontrivial multivalued dependency X →→ Y appears to be a difficult problem, and it was left open by Tsou and Fischer. However, solving this problem is essential for making the decomposition efficient, if the dependencies are tested on the basis of D without computing D+.

Our solution is in some sense between the two extremes. We avoid computing D+, but we do not use the fixed set D, either. Instead, during the decomposition process we will add to D members of D+ in such a way that whenever we wish to find a nontrivial dependency X →→ Y, it is sufficient to examine only the dependencies in the extended dependency set D. This process is dynamic: it is guided by the decomposition tree. Therefore not all the dependencies in D+ will be added to D.
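The straightforward decomposition step quoted in Section 2 ("replace Ri by XY ∩ Ri and X ∪ (Ri − Y)") can be sketched in code. This is only an illustrative reading of that step, not the authors' own algorithm; in particular, `find_violating_mvd` is a hypothetical stand-in for the expensive search for a nontrivial dependency X →→ Y whose left side is not a key, which is exactly the part the paper is about making efficient.

```python
# Sketch of the straightforward lossless 4NF decomposition loop.
# Schemes are frozensets of attribute names. find_violating_mvd is a
# hypothetical oracle: it returns (X, Y) for a nontrivial dependency
# whose left side X is not a key of Ri, or None if none is found.

def decompose_step(ri, x, y):
    """Replace Ri by XY ∩ Ri and X ∪ (Ri − Y); the join is lossless."""
    return (x | y) & ri, x | (ri - y)

def decompose(u, find_violating_mvd):
    schemes = [frozenset(u)]          # initially k = 1 and R1 = U
    result = []
    while schemes:
        ri = schemes.pop()
        mvd = find_violating_mvd(ri)
        if mvd is None:
            result.append(ri)         # Ri passed the (sufficient) test
        else:
            x, y = mvd
            schemes.extend(decompose_step(ri, frozenset(x), frozenset(y)))
    return result
```

The loop terminates because each step replaces a scheme by strictly smaller ones; as the paper notes, with an arbitrary choice of dependency the number of steps can nevertheless grow exponentially in |U|.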
Recommended publications
  • Normalized Form Snowflake Schema
    Normalized Form Snowflake Schema. The major difference between the snowflake and star schema models is that the dimension tables of the snowflake model may be kept in normalized form, whereas the dimension tables of a star schema are de-normalized. Typically most of the fact tables in a star schema are in third normal form, while the dimension tables are de-normalized. In other words, the main difference between the two is normalization: in the snowflake schema the dimension tables are normalized, which reduces redundancy at the cost of additional joins.
  • Boyce-Codd Normal Forms Lecture 10 Sections 15.1 - 15.4
    Boyce-Codd Normal Forms, Lecture 10, Sections 15.1 - 15.4
    Robb T. Koether (Hampden-Sydney College), Wed, Feb 6, 2013

    Outline: 1. Third Normal Form; 2. Boyce-Codd Normal Form; 3. Assignment

    Definition (Transitive Dependence). A set of attributes Z is transitively dependent on a set of attributes X if there exists a set of attributes Y such that X → Y and Y → Z.

    Definition (Third Normal Form). A relation R is in third normal form (3NF) if it is in 2NF and there is no nonprime attribute of R that is transitively dependent on any key of R. 3NF is violated if there is a nonprime attribute A that depends on something less than a key.

    Example (Table 3):

        order_no  cust_no  cust_name
        222-1     3333     Joe Smith
        444-2     4444     Sue Taylor
        555-1     3333     Joe Smith
        777-2     7777     Bob Sponge
        888-3     4444     Sue Taylor

    Table 3 is in 2NF, but it is not in 3NF because [order_no] → [cust_no] → [cust_name].

    3NF Normalization. To put a relation into 3NF, for each set of transitive function dependencies X → Y → Z, make two tables, one for X → Y and another for Y → Z.
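The 3NF normalization rule in the lecture can be applied directly to Table 3. A minimal sketch (the table data comes from the slides; the variable names are ours):

```python
# Split Table 3 along the transitive dependency
# order_no -> cust_no -> cust_name into two tables:
# one for order_no -> cust_no and one for cust_no -> cust_name.

table3 = [
    ("222-1", "3333", "Joe Smith"),
    ("444-2", "4444", "Sue Taylor"),
    ("555-1", "3333", "Joe Smith"),
    ("777-2", "7777", "Bob Sponge"),
    ("888-3", "4444", "Sue Taylor"),
]

orders = {order_no: cust_no for order_no, cust_no, _ in table3}
customers = {cust_no: cust_name for _, cust_no, cust_name in table3}

# Each customer's name is now stored once, in customers,
# instead of once per order.
```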
  • Normalization Exercises
    DATABASE DESIGN: NORMALIZATION NOTES & EXERCISES (Up to 3NF)

    Tables that contain redundant data can suffer from update anomalies, which can introduce inconsistencies into a database. The most commonly used normal forms are first (1NF), second (2NF), and third (3NF) normal form. Various types of update anomalies, such as insertion, deletion, and modification anomalies, arise in tables that break the rules of 1NF, 2NF, and 3NF; such tables are likely to contain redundant data and suffer from update anomalies. Normalization is a technique for producing a set of tables with desirable properties that support the requirements of a user or company. A major aim of relational database design is to group columns into tables so as to minimize data redundancy and reduce the file storage space required by the base tables. Take a look at the following example:

        StdSSN  StdCity   StdClass  OfferNo  OffTerm  OffYear  EnrGrade  CourseNo  CrsDesc
        S1      SEATTLE   JUN       O1       FALL     2006     3.5       C1        DB
        S1      SEATTLE   JUN       O2       FALL     2006     3.3       C2        VB
        S2      BOTHELL   JUN       O3       SPRING   2007     3.1       C3        OO
        S2      BOTHELL   JUN       O2       FALL     2006     3.4       C2        VB

    The insertion anomaly occurs when extra data beyond the desired data must be added to the database. For example, to insert a course (CourseNo), it is necessary to know a student (StdSSN) and offering (OfferNo), because the combination of StdSSN and OfferNo is the primary key. Remember that a row cannot exist with NULL values for part of its primary key. The update anomaly occurs when it is necessary to change multiple rows to modify only a single fact.
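The update anomaly described above can be made concrete: because StdCity is repeated in every enrollment row, recording that one student moved means touching several rows. A small sketch using a slice of the example table (the helper name is ours):

```python
# Each enrollment row repeats the student's city, so changing S1's city
# requires updating every row for S1 -- the update anomaly.

rows = [
    {"StdSSN": "S1", "StdCity": "SEATTLE", "OfferNo": "O1"},
    {"StdSSN": "S1", "StdCity": "SEATTLE", "OfferNo": "O2"},
    {"StdSSN": "S2", "StdCity": "BOTHELL", "OfferNo": "O3"},
]

def move_student(rows, ssn, new_city):
    """Every copy of the repeated fact must be changed consistently."""
    touched = 0
    for row in rows:
        if row["StdSSN"] == ssn:
            row["StdCity"] = new_city
            touched += 1
    return touched

# Two rows must change to record the single fact that S1 moved.
```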
  • Fundamentals of Database Systems [Normalization – II]
    Fundamentals of Database Systems [Normalization - II]
    Malay Bhattacharyya, Assistant Professor, Machine Intelligence Unit, Indian Statistical Institute, Kolkata, October, 2019

    Outline: 1. First Normal Form; 2. Second Normal Form; 3. Third Normal Form; 4. Boyce-Codd Normal Form

    First normal form. The domain (or value set) of an attribute defines the set of values it might contain. A domain is atomic if elements of the domain are considered to be indivisible units.

        Company  Make                      Company        Make
        Maruti   WagonR, Ertiga            Maruti         WagonR, Ertiga
        Honda    City                      Honda          City
        Tesla    RAV4                      Tesla, Toyota  RAV4
        Toyota   RAV4                      BMW            X1
        BMW      X1

    In the left table only Company has an atomic domain; in the right table none of the attributes have atomic domains.

    Definition (First normal form (1NF)). A relational schema R is in 1NF iff the domains of all attributes in R are atomic.

    The advantages of 1NF are as follows: it eliminates redundancy, and it eliminates repeating groups. Note: in practice, 1NF includes a few more practical constraints, such as each attribute must be unique, no tuples are duplicated, and no columns are duplicated.

    The following relation is not in 1NF because the attribute Model is not atomic:

        Company  Country  Make    Model       Distributor
        Maruti   India    WagonR  LXI, VXI    Carwala
        Maruti   India    WagonR  LXI         Bhalla
        Maruti   India    Ertiga  VXI         Bhalla
        Honda    Japan    City    SV          Bhalla
        Tesla    USA      RAV4    EV          CarTrade
        Toyota   Japan    RAV4    EV          CarTrade
        BMW      Germany  X1      Expedition  CarTrade

    We can convert this relation into 1NF in two ways. Approach 1: Break the tuples containing non-atomic values into multiple tuples.
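Approach 1 from the excerpt (breaking tuples with non-atomic values into multiple tuples) can be sketched as follows; the column layout follows the excerpt's relation, and the function name is ours:

```python
# Approach 1: bring a relation into 1NF by splitting tuples whose Model
# attribute holds a comma-separated list into one tuple per value.

rows = [
    ("Maruti", "India", "WagonR", "LXI, VXI", "Carwala"),
    ("Honda", "Japan", "City", "SV", "Bhalla"),
]

def to_1nf(rows, model_index=3):
    flat = []
    for row in rows:
        for model in row[model_index].split(","):
            new_row = list(row)
            new_row[model_index] = model.strip()  # now an atomic value
            flat.append(tuple(new_row))
    return flat

# The first tuple becomes two tuples, one for LXI and one for VXI.
```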
  • A Simple Guide to Five Normal Forms in Relational Database Theory (Computing Practices)
    COMPUTING PRACTICES
    A SIMPLE GUIDE TO FIVE NORMAL FORMS IN RELATIONAL DATABASE THEORY
    WILLIAM KENT, International Business Machines Corporation

    SUMMARY: The concepts behind the five principal normal forms in relational database theory are presented in simple terms.

    1. INTRODUCTION

    The normal forms defined in relational database theory represent guidelines for record design. The guidelines corresponding to first through fifth normal forms are presented, in terms that do not require an understanding of relational theory. The design guidelines are meaningful even if a relational database system is not used. We present the guidelines without referring to the concepts of the relational model in order to emphasize their generality and to make them easier to understand. Our presentation conveys an intuitive sense of the intended constraints on record design, although in its informality it may be imprecise in some technical details. A comprehensive treatment of the subject is provided by Date [4].

    The normalization rules are designed to prevent update anomalies and data inconsistencies. With respect to performance trade-offs, these guidelines are biased toward the assumption that all nonkey fields will be updated frequently. They tend to penalize retrieval, since data which may have been retrievable from one record in an unnormalized design may have to be retrieved from several records in the normalized form. There is no obligation to fully normalize all records when actual performance requirements are taken into account.

    Author's Present Address: William Kent, International Business Machines Corporation, General Products Division, Santa Teresa Laboratory, San Jose, CA
  • Database Normalization
    Database Management Systems: Database Normalization
    Malay Bhattacharyya, Assistant Professor, Machine Intelligence Unit and Centre for Artificial Intelligence and Machine Learning, Indian Statistical Institute, Kolkata, February, 2020

    Outline: 1. Data Redundancy; 2. Normalization and Denormalization; 3. Normal Forms (First Normal Form, Second Normal Form, Third Normal Form, Boyce-Codd Normal Form, Elementary Key Normal Form, Fourth Normal Form, Fifth Normal Form, Domain Key Normal Form, Sixth Normal Form)

    Redundancy in databases. Redundancy in a database denotes the repetition of stored data. Redundancy might cause various anomalies and problems pertaining to storage requirements:

    Insertion anomalies: It may be impossible to store certain information without storing some other, unrelated information.

    Deletion anomalies: It may be impossible to delete certain information without losing some other, unrelated information.

    Update anomalies: If one copy of such repeated data is updated, all copies need to be updated to prevent inconsistency.

    Increasing storage requirements: The storage requirements may increase over time.

    These issues can be addressed by decomposing the database: normalization forces this!
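A decomposition that removes the repetition, and can be joined back without losing information, can be sketched directly. The relation and data below are invented for illustration; the point is the one the slides make, that decomposing along a functional dependency stores each repeated fact once:

```python
# Sketch: decompose relation r(A, B, C) with FD A -> B into r1(A, B)
# and r2(A, C); joining the projections back recovers exactly r
# (a lossless decomposition).

r = {("a1", "b1", "c1"), ("a1", "b1", "c2"), ("a2", "b2", "c1")}

r1 = {(a, b) for a, b, _ in r}          # projection onto {A, B}
r2 = {(a, c) for a, _, c in r}          # projection onto {A, C}

joined = {(a, b, c) for (a, b) in r1 for (a2, c) in r2 if a == a2}

# b1 is now stored once per A-value in r1 instead of once per row of r.
```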
  • A Simple Guide to Five Normal Forms in Relational Database Theory
    A Simple Guide to Five Normal Forms in Relational Database Theory

    Contents: 1 Introduction; 2 First Normal Form; 3 Second and Third Normal Forms (3.1 Second Normal Form, 3.2 Third Normal Form, 3.3 Functional Dependencies); 4 Fourth and Fifth Normal Forms (4.1 Fourth Normal Form, 4.1.1 Independence, 4.1.2 Multivalued Dependencies, 4.2 Fifth Normal Form); 5 Unavoidable Redundancies; 6 Inter-Record Redundancy; 7 Conclusion; 8 References

    1 INTRODUCTION

    The normal forms defined in relational database theory represent guidelines for record design. The guidelines corresponding to first through fifth normal forms are presented here, in terms that do not require an understanding of relational theory. The design guidelines are meaningful even if one is not using a relational database system. We present the guidelines without referring to the concepts of the relational model in order to emphasize their generality, and also to make them easier to understand. Our presentation conveys an intuitive sense of the intended constraints on record design, although in its informality it may be imprecise in some technical details. A comprehensive treatment of the subject is provided by Date [4].

    The normalization rules are designed to prevent update anomalies and data inconsistencies. With respect to performance tradeoffs, these guidelines are biased toward the assumption that all non-key fields will be updated frequently. They tend to penalize retrieval, since data which may have been retrievable from one record in an unnormalized design may have to be retrieved from several records in the normalized form. There is no obligation to fully normalize all records when actual performance requirements are taken into account.
  • Functional Dependency and Normalization for Relational Databases
    Functional Dependency and Normalization for Relational Databases

    Introduction: Relational database design ultimately produces a set of relations. The implicit goals of the design activity are information preservation and minimum redundancy.

    Informal Design Guidelines for Relation Schemas. Four informal guidelines may be used as measures to determine the quality of relation schema design:
    - Making sure that the semantics of the attributes is clear in the schema
    - Reducing the redundant information in tuples
    - Reducing the NULL values in tuples
    - Disallowing the possibility of generating spurious tuples

    Imparting Clear Semantics to Attributes in Relations. The semantics of a relation refers to its meaning resulting from the interpretation of attribute values in a tuple. The relational schema design should have a clear meaning.

    Guideline 1: Design a relation schema so that it is easy to explain. Do not combine attributes from multiple entity types and relationship types into a single relation.

    Redundant Information in Tuples and Update Anomalies. One goal of schema design is to minimize the storage space used by the base relations (and hence the corresponding files). Grouping attributes into relation schemas has a significant effect on storage space. Storing natural joins of base relations leads to an additional problem referred to as update anomalies; these are insertion anomalies, deletion anomalies, and modification anomalies. Insertion anomalies happen when the insertion of a new tuple is not done properly and can therefore make the database inconsistent, or when the insertion of a new tuple introduces a NULL value (for example, a department in which no employee works as of yet).
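Guideline 4 (no spurious tuples) can be demonstrated: if two projections share only a non-key attribute, their natural join can produce tuples that were never in the original relation. An illustrative sketch with data invented for the demonstration:

```python
# Joining projections that share only the non-key attribute Dept
# produces spurious tuples not present in the original relation.

emp = {("Alice", "Sales", "P1"), ("Bob", "Sales", "P2")}  # (Name, Dept, Proj)

r1 = {(n, d) for n, d, _ in emp}   # projection onto {Name, Dept}
r2 = {(d, p) for _, d, p in emp}   # projection onto {Dept, Proj}

joined = {(n, d, p) for (n, d) in r1 for (d2, p) in r2 if d == d2}

spurious = joined - emp
# ("Alice", "Sales", "P2") and ("Bob", "Sales", "P1") are spurious:
# the join is lossy because Dept is not a key of either projection.
```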
  • Normal Forms
    Database Management: Relational Database Design (Peter Wood)

    Topics: Update Anomalies; Data Redundancy; Normal Forms; FD Inference; Boyce-Codd Normal Form; Third Normal Form; Normalisation Algorithms (Lossless Join, BCNF Algorithm, BCNF Examples, Dependency Preservation, 3NF Algorithm)

    There are two interconnected problems which are caused by bad database design:
    - Redundancy problems
    - Update anomalies

    Good database design is based on using certain normal forms for relation schemas.

    Example 1. Let F1 = {E → D, D → M, M → D}, where E stands for ENAME, D stands for DNAME and M stands for MNAME. Consider a relation r1 over EMP1 (whose schema is {ENAME, DNAME, MNAME}):

        ENAME   DNAME      MNAME
        Mark    Computing  Peter
        Angela  Computing  Peter
        Graham  Computing  Peter
        Paul    Math       Donald
        George  Math       Donald

    E is the only key for EMP1 w.r.t. F1.

    Problems with EMP1 and F1:
    1. We cannot represent a department and manager without any employees (i.e., we cannot insert a tuple with a null ENAME because of entity integrity); such a problem is called an insertion anomaly.
    2. For the same reason as (1), we cannot delete all the employees in a department and keep just the department information; such a problem is called a deletion anomaly.
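The anomalies in EMP1 disappear if the schema is decomposed along the dependency D → M. A sketch using the relation from Example 1 (the choice of decomposition and the new scheme names are ours):

```python
# Decompose EMP1(ENAME, DNAME, MNAME), where D -> M, into
# EMP2(ENAME, DNAME) and DEPT(DNAME, MNAME). A department and its
# manager can now be represented without any employees.

emp1 = [
    ("Mark", "Computing", "Peter"),
    ("Angela", "Computing", "Peter"),
    ("Graham", "Computing", "Peter"),
    ("Paul", "Math", "Donald"),
    ("George", "Math", "Donald"),
]

emp2 = {(e, d) for e, d, _ in emp1}   # employee -> department
dept = {d: m for _, d, m in emp1}     # department -> manager

dept["Physics"] = "Marie"  # insertion anomaly gone: no employee needed
```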
  • A New Normal Form for the Design of Relational Database Schemata
    A New Normal Form for the Design of Relational Database Schemata
    CARLO ZANIOLO, Sperry Research Center

    This paper addresses the problem of database schema design in the framework of the relational data model and functional dependencies. It suggests that both Third Normal Form (3NF) and Boyce-Codd Normal Form (BCNF) supply an inadequate basis for relational schema design. The main problem with 3NF is that it is too forgiving and does not enforce the separation principle as strictly as it should. On the other hand, BCNF is incompatible with the principle of representation and prone to computational complexity. Thus a new normal form, which lies between these two and captures the salient qualities of both, is proposed. The new normal form is stricter than 3NF, but it is still compatible with the representation principle. First a simpler definition of 3NF is derived, and the analogy of this new definition to the definition of BCNF is noted. This analogy is used to derive the new normal form. Finally, it is proved that Bernstein's algorithm for schema design synthesizes schemata that are already in the new normal form.

    Categories and Subject Descriptors: H.2.1 [Database Management]: Logical Design - normal forms
    General Terms: Algorithms, Design, Theory
    Additional Key Words and Phrases: Relational model, functional dependencies, database schema

    1. INTRODUCTION

    The concept of normal form has supplied the cornerstone for most of the formal approaches to the design of relational schemata for database systems. Codd's original Third Normal Form (3NF) [9] was followed by a number of refinements. These include Boyce-Codd Normal Form (BCNF) [10], Fourth Normal Form [12], and Fifth Normal Form [13].
  • Database Normalization—Chapter Eight
    DATABASE NORMALIZATION—CHAPTER EIGHT

    Contents: Introduction; Definitions; Normal Forms: First Normal Form (1NF), Second Normal Form (2NF), Third Normal Form (3NF), Boyce-Codd Normal Form (BCNF), Fourth and Fifth Normal Form (4NF and 5NF)

    Introduction. Normalization is a formal database design process for grouping attributes together in a data relation. Normalization takes a "micro" view of database design, while entity-relationship modeling takes a "macro" view. Normalization validates and improves the logical design of a data model. Essentially, normalization removes redundancy in a data model so that table data are easier to modify and unnecessary duplication of data is prevented.

    Definitions. To fully understand normalization, relational database terminology must be defined. An attribute is a characteristic of something—for example, hair colour or a social insurance number. A tuple (or row) represents an entity and is composed of a set of attributes that defines an entity, such as a student. An entity is a particular kind of thing, again such as student. A relation or table is defined as a set of tuples (or rows), such as students. All rows in a relation must be distinct, meaning that no two rows can have the same combination of values for all of their attributes. A set of relations or tables related to each other constitutes a database. Because all rows in a relation are distinct, containing different combinations of attribute values, the set of all attributes in a relation is always a superkey. A candidate key is a minimal superkey, a smaller group of attributes that still uniquely identifies each row.
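The superkey and candidate key definitions above can be turned into a direct check over a relation's rows. A sketch, with invented student data and helper names of our own:

```python
# A set of attributes is a superkey if no two rows agree on all of them;
# a candidate key is a superkey none of whose proper subsets is one.
from itertools import combinations

def is_superkey(rows, attrs):
    seen = {tuple(row[a] for a in attrs) for row in rows}
    return len(seen) == len(rows)

def is_candidate_key(rows, attrs):
    if not is_superkey(rows, attrs):
        return False
    return not any(
        is_superkey(rows, sub)
        for r in range(1, len(attrs))
        for sub in combinations(sorted(attrs), r)
    )

students = [
    {"id": 1, "ssn": "A", "name": "Ann"},
    {"id": 2, "ssn": "B", "name": "Ann"},
]
```

Here {id, name} is a superkey but not a candidate key, because its subset {id} already identifies each row.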
  • Chapter 7: DATABASE NORMALIZATION (Keys, Types of Key)
    Chapter 7: DATABASE NORMALIZATION

    Keys
    - Keys play an important role in the relational database.
    - A key is used to uniquely identify any record or row of data in a table. It is also used to establish and identify relationships between tables. For example: in a Student table, ID is used as a key because it is unique for each student. In a PERSON table, passport_number, license_number, and SSN are keys since they are unique for each person.

    Types of key:

    1. Primary key
    - The primary key is used to identify one and only one instance of an entity uniquely. An entity can contain multiple keys, as we saw in the PERSON table. The key which is most suitable from that list becomes the primary key.
    - In the EMPLOYEE table, ID can be the primary key since it is unique for each employee. We could even select License_Number or Passport_Number as the primary key since they are also unique.
    - For each entity, the selection of the primary key is based on requirements and the developers.

    2. Candidate key
    - A candidate key is an attribute or set of attributes which can uniquely identify a tuple.
    - The remaining such attributes, apart from the primary key, are considered candidate keys. The candidate keys are as strong as the primary key. For example: in the EMPLOYEE table, ID is best suited for the primary key; the rest of the unique attributes, like SSN, Passport_Number, and License_Number, are considered candidate keys.

    3. Super key
    A super key is a set of attributes which can uniquely identify a tuple.
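Whether an attribute set is a key can also be computed from functional dependencies via attribute closure, rather than by inspecting rows. A sketch; the FD set below is invented for illustration, loosely following the chapter's EMPLOYEE example:

```python
# Attribute closure: X+ is the set of attributes determined by X under
# a set of FDs. X is a key of R iff X+ covers all attributes of R.

def closure(x, fds):
    result = set(x)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# EMPLOYEE-style example: ID determines SSN, and SSN determines Name.
fds = [({"ID"}, {"SSN"}), ({"SSN"}, {"Name"})]
r = {"ID", "SSN", "Name"}

# closure({"ID"}, fds) == {"ID", "SSN", "Name"}, so ID is a key of R.
```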