Basic Concepts of Statistics 1


Unit 1 BASIC CONCEPTS OF STATISTICS

SESSION 1-1: NATURE OF STATISTICS

1-1.1 Introduction

Statistics is that body of knowledge (or field of study) used in making sense of data. It is basically concerned with the collection and analysis of data in order to obtain a better understanding of phenomena for effective decision-making. Statistical methods required to achieve the objectives of a statistical investigation are classified into two, namely, Descriptive Statistics and Inferential Statistics. Descriptive Statistics deals with methods for summarizing and presenting data in tabular and graphical forms, as well as using numerical measures such as percentages, the mean, the standard deviation, etc., in an informative way. Inferential Statistics is concerned with procedures used to make generalizations about the characteristics of a population using the information contained in a sample randomly selected from the population under investigation.

Another important tool in the study of Statistics is Probability Theory. Probability Theory provides a sound analysis of any situation (in science, business or everyday life) which in some way involves an element of uncertainty or chance. It is the study of the measure of uncertainty and of the risk associated with it, and it provides the basis for the methods used in the inferential analysis of data.

The use of Statistics has permeated almost every facet of our lives. Everyone, both in professional careers and in everyday life through contact with newspapers, television and other media, is presented with information in the form of data from which we often need to draw conclusions; hence some understanding of Statistics would be helpful to anyone. Since scientists, engineers and business professionals routinely engage in obtaining and analysing data, knowledge of Statistics is especially important in these fields. Specifically, knowledge of Statistics and Probability Theory can be a powerful tool to help professionals in science, engineering, industry and business in designing new products and systems, improving existing designs, and designing, developing and improving production processes.

Most universities all over the world require at least an introductory study of Statistics in all their academic programmes so that students can acquire knowledge of statistical methods to evaluate published numerical facts and accept or reject them, as well as employ such scientific methods to help interpret the results of surveys, take decisions and understand them more effectively. The amount of statistical information that is collected, processed and disseminated to the public for one reason or another has increased almost beyond comprehension, and what part of it is "good" statistics and what part is "bad" statistics is anybody's guess. To act as watchdogs, more and more persons with some knowledge of Statistics are needed to take an active part in the collection and analysis of data and, more importantly, in the preliminary planning of data collection. Persons employed in this area of study need to know the basic concepts, strengths and limitations of Statistics.

Statisticians are indeed of great assistance in scientific research work. Specifically, they design surveys and experiments to minimize the cost of obtaining a specified quantity of information; seek the best method for analysing the data and making an inference for a given sampling situation; and analyse the data collected and provide a measure of the goodness of an inference.
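To make the distinction between the two branches concrete, the following is a minimal Python sketch using made-up exam scores (the data and sample size are illustrative assumptions, not taken from this unit). The first part summarizes the sample itself (descriptive statistics); the second uses the sample to say something about the wider population via an approximate 95% confidence interval for the population mean (inferential statistics).

```python
import math
import statistics

# Hypothetical random sample of 10 students' exam scores (illustrative data only)
scores = [62, 71, 58, 83, 90, 67, 75, 88, 64, 79]

# Descriptive statistics: summarize the sample in hand
n = len(scores)
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)  # sample standard deviation
print(f"n = {n}, mean = {mean:.1f}, standard deviation = {stdev:.1f}")

# Inferential statistics: generalize from the sample to the population.
# Approximate 95% confidence interval for the population mean, assuming the
# sample was randomly selected (1.96 is the normal critical value).
margin = 1.96 * stdev / math.sqrt(n)
print(f"Approximate 95% CI for the population mean: "
      f"({mean - margin:.1f}, {mean + margin:.1f})")
```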
1-1.2 Applications of Statistics

Data, as earlier noted, are numerical facts and figures from which conclusions can be drawn. Such conclusions are important to the decision-making processes of many professions and organizations. Governments, businesses and individuals collect the statistical data they require to carry out their activities efficiently and effectively. The rate at which statistical data are being collected is staggering, and is primarily due to the realisation that better decisions are possible with more information and, perhaps more importantly, to technological advances that have enabled the efficient collection and analysis of large bodies of data. The most important technological advance in this area has, of course, been the development of the electronic digital computer. Statistical concepts and methods, and the use of computers in statistical analyses, have affected virtually all disciplines in Science, Engineering, Business and others. In Business and Economics, the development and application of statistical methods have led to greater production efficiency, better forecasting techniques and better management practices. To appreciate the extensive application of Statistics to a wide range of problems, a number of examples can be cited:

Product Quality Design/Improvement: The importance of Statistics in Science, Engineering and Management has been underscored by the involvement of industries in quality improvement. Many companies worldwide have realized that poor product quality, in the form of manufacturing defects and/or unsatisfactory product reliability and field performance, dramatically affects their overall productivity, market share and competitive position, and ultimately their profitability. Improving these aspects of quality can eliminate waste; reduce scrap, rework, the requirements for inspection and test, and warranty losses; enhance customer satisfaction; and enable the company to become the high-quality, low-cost producer in its market. Businesses and other organizations often employ statistical analysis of data to help improve their processes. Production supervisors use manufacturing data to evaluate, control and improve product quality. Businesses decide which products to develop and market by using data that reveal consumer preferences. In particular, statistical methods help to demonstrate the need for improvements, identify ways to make improvements, assess whether or not improvement activities have been successful, and estimate the benefits of improvement strategies.

Insurance Premiums: Insurance companies use statistical analyses to set rates for home, automobile, life and health insurance. Actuarial tables are available that give the probabilities that persons will survive a given number of years ahead of them. On the basis of these probabilities, life insurance premiums can be established.

Water Quality: The Environmental Protection Agency (EPA) is always interested in the water quality of rivers and lakes. It periodically takes water samples to establish the level of contamination and maintain the level of quality.

Potency of Drugs: Medical researchers study the cure rates for diseases based on the use of different drugs and different forms of treatment. For example, what is the effect of treating a certain type of knee injury surgically or with physical therapy? If you take an aspirin each day, does that reduce your risk of a heart attack?
Physicians and hospitals use data on the effectiveness of drugs and surgical procedures to provide patients with the best possible treatment.

Opinion Polls: Politicians and their supporters rely immensely on data from public opinion polls to formulate legislation and devise campaign strategies about their prospects of winning an election. The percentage of votes a candidate receives in a random sample of about 1,000 registered voters prior to an election may be used to estimate the percentage of votes they are likely to receive in the election itself.

Unemployment/Inflation: Government officials use conclusions drawn from the latest survey data on unemployment and inflation to make policy decisions.

Investment Decisions: Financial planners use recent trends in stock market prices to make investment decisions.

Computers now play an important role in providing statistical summaries of data arising from the above situations. They are used for a variety of purposes, such as word-processing, record-keeping, accounting, etc. There are various statistical packages; notable among these are Microsoft Excel, MINITAB, GENSTAT, the Statistical Analysis System (SAS), the Statistical Package for the Social Sciences (SPSS), the Biomedical Data Package (BMDP), R and Stata.

SESSION 2-1: COLLECTION OF DATA

2-1.1 Definitions

Population and Sample: A population is a collection of all possible individual units (persons, objects, experimental outcomes, etc.) whose characteristics are to be studied. A sample is a part of a population that is studied to learn more about the entire population. That is, to make inferences about a population, we usually take a sample from that population.

Parameters and Statistics: Numerical values computed from a given set of data are quantitative measures. A quantitative measure that describes a characteristic of a population is called a parameter. Such a measure is always computed from the population data. A quantitative measure that describes a characteristic of a sample is called a statistic. A statistic is computed from sample data and is used to estimate a parameter or to make an inference about a certain characteristic of the population under study.

Types of Data: Data are classified as either quantitative or qualitative. Quantitative data assume numerical values, which are the result of measurements; they indicate "how much" or "how many" of something. Quantitative data are always numeric and are obtained from either an interval or a ratio scale of measurement. Examples are age, height, number of brothers/sisters, number of accidents occurring, etc. Qualitative data (also known as categorical data or attributes) are data that can be grouped into categories on the basis of some attribute or characteristic; they are obtained from a nominal or ordinal scale of measurement.
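The relationship between a parameter and a statistic can be illustrated with a minimal Python sketch. The population, its size and the sample size below are made-up assumptions for illustration: a small list of ages is treated as the entire population, the population mean (a parameter) is computed from it, and a randomly selected sample yields a sample mean (a statistic) that serves as an estimate of that parameter.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: ages of all 500 employees of a firm (made-up data)
population = [random.randint(20, 65) for _ in range(500)]

# Parameter: a quantitative measure computed from the whole population
population_mean = statistics.mean(population)

# Randomly select a sample of 30 units from the population
sample = random.sample(population, k=30)

# Statistic: the same measure computed from the sample,
# used to estimate the population parameter
sample_mean = statistics.mean(sample)

print(f"Population mean (parameter): {population_mean:.2f}")
print(f"Sample mean (statistic):     {sample_mean:.2f}")
```

Running the sketch shows the sample mean lying close to, but not exactly equal to, the population mean, which is precisely the situation inferential statistics addresses.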