Commonly Used Distributions: Random Number Generation Algorithms for Distributions Commonly Used by Computer Systems Performance Analysts

Commonly Used Distributions

• Random number generation algorithms for distributions commonly used by computer systems performance analysts.
• Organized alphabetically for reference.
• For each distribution:
  – Key characteristics
  – Algorithm for random number generation
  – Examples of applications

© 1994 Raj Jain

Bernoulli Distribution

• Takes only two values: failure or success, that is, x = 0 or x = 1, respectively.
• Key characteristics:
  1. Parameters: p = probability of success (x = 1), 0 ≤ p ≤ 1
  2. Range: x = 0, 1
  3. pmf: f(x) = 1 − p if x = 0; p if x = 1; 0 otherwise
  4. Mean: p
  5. Variance: p(1 − p)
• Applications: To model the probability of an outcome having a desired class or characteristic:
  1. A computer system is up or down.
  2. A packet in a computer network reaches or does not reach the destination.
  3. A bit in the packet is affected by noise and arrives in error.
• Can be used only if the trials are independent and identical.
• Generation: Inverse transformation. Generate u ~ U(0, 1). If u ≤ p, return 1; otherwise, return 0. (See the code sketch after the Beta section.)

Beta Distribution

• Used to represent random variates that are bounded.
• Key characteristics:
  1. Parameters: a, b = shape parameters, a > 0, b > 0
  2. Range: 0 ≤ x ≤ 1
  3. pdf: f(x) = x^(a−1) (1 − x)^(b−1) / β(a, b), where β(·) is the beta function, related to the gamma function as follows:
     β(a, b) = ∫₀¹ x^(a−1) (1 − x)^(b−1) dx = Γ(a)Γ(b) / Γ(a + b)
  4. Mean: a/(a + b)
  5. Variance: ab / [(a + b)²(a + b + 1)]
• Substitute (x − x_min)/(x_max − x_min) in place of x for other ranges.
• Applications: To model random proportions:
  1. Fraction of packets requiring retransmission.
  2. Fraction of remote procedure calls (RPCs) taking more than a specified time.
• Generation:
  1. Generate two gamma variates γ(1, a) and γ(1, b), and take the ratio:
     BT(a, b) = γ(1, a) / [γ(1, a) + γ(1, b)]
  2. If a and b are integers:
     (a) Generate a + b − 1 uniform U(0, 1) random numbers.
     (b) Return the a-th smallest number as BT(a, b).
  3. If a and b are both less than one:
     (a) Generate two uniform U(0, 1) random numbers u1 and u2.
     (b) Let x = u1^(1/a) and y = u2^(1/b). If (x + y) > 1, go back to the previous step. Otherwise, return x/(x + y) as BT(a, b).
  4. If a and b are both greater than one: use rejection.
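As an illustration, here is a minimal Python sketch of the Bernoulli and beta generators described above. This is a sketch under stated assumptions rather than code from the original notes: the function names are invented, random.random() stands in for a U(0, 1) source, and only the integer-shape and small-shape beta cases are shown.

    import random

    def bernoulli(p):
        # Inverse transformation: success (1) with probability p.
        u = random.random()            # u ~ U(0, 1)
        return 1 if u <= p else 0

    def beta_integer_shapes(a, b):
        # Order-statistic method for integer a, b:
        # the a-th smallest of a + b - 1 uniforms is BT(a, b).
        us = sorted(random.random() for _ in range(a + b - 1))
        return us[a - 1]

    def beta_small_shapes(a, b):
        # Method for a < 1 and b < 1: accept x = u1^(1/a), y = u2^(1/b)
        # only when x + y <= 1, then return x / (x + y).
        while True:
            x = random.random() ** (1.0 / a)
            y = random.random() ** (1.0 / b)
            if x + y <= 1.0:
                return x / (x + y)

For example, beta_integer_shapes(2, 3) draws variates with mean a/(a + b) = 2/5, matching the key characteristics listed above.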
Binomial Distribution

• The number of successes x in a sequence of n Bernoulli trials has a binomial distribution.
• Key characteristics:
  1. Parameters: p = probability of success in a trial, 0 < p < 1; n = number of trials, n a positive integer
  2. Range: x = 0, 1, ..., n
  3. pmf: f(x) = C(n, x) p^x (1 − p)^(n−x), where C(n, x) = n! / [x!(n − x)!]
  4. Mean: np
  5. Variance: np(1 − p)
• Applications: To model the number of successes:
  1. The number of processors that are up in a multiprocessor system.
  2. The number of packets that reach the destination without loss.
  3. The number of bits in a packet that are not affected by noise.
  4. The number of items in a batch that have certain characteristics.
• Choosing among count distributions:
  Variance < Mean ⇒ binomial
  Variance > Mean ⇒ negative binomial
  Variance = Mean ⇒ Poisson
• Generation:
  1. Composition: Generate n U(0, 1) random numbers. The number of random numbers that are less than p is BN(p, n).
  2. For small p:
     (a) Generate geometric random numbers G_i(p) = ⌈ln(u_i) / ln(1 − p)⌉.
     (b) If the sum of the geometric random numbers so far is less than or equal to n, go back to the previous step. Otherwise, return the number of random numbers generated minus one: if ∑_{i=1}^{m} G_i(p) > n, return m − 1.
  3. Inverse transformation: Compute the CDF F(x) for x = 0, 1, 2, ..., n and store it in an array. For each binomial variate, generate a U(0, 1) variate u and search the array for the smallest x such that u ≤ F(x); return x.

Chi-Square Distribution

• Sum of squares of several unit normal variates.
• Key characteristics:
  1. Parameters: ν = degrees of freedom, ν a positive integer
  2. Range: 0 ≤ x ≤ ∞
  3. pdf: f(x) = x^((ν−2)/2) e^(−x/2) / [2^(ν/2) Γ(ν/2)], where Γ(·) is the gamma function defined as Γ(b) = ∫₀^∞ e^(−x) x^(b−1) dx
  4. Mean: ν
  5. Variance: 2ν
• Application: To model sample variances.
• Generation:
  1. χ²(ν) = γ(2, ν/2). For ν even: χ²(ν) = −2 ln(∏_{i=1}^{ν/2} u_i). For ν odd: χ²(ν) = χ²(ν − 1) + [N(0, 1)]².
  2. Generate ν N(0, 1) variates and return the sum of their squares.

Erlang Distribution

• Used in queueing models.
• Key characteristics:
  1. Parameters: a = scale parameter, a > 0; m = shape parameter, m a positive integer
  2. Range: 0 ≤ x ≤ ∞
  3. pdf: f(x) = x^(m−1) e^(−x/a) / [(m − 1)! a^m]
  4. CDF: F(x) = 1 − e^(−x/a) ∑_{i=0}^{m−1} (x/a)^i / i!
  5. Mean: am
  6. Variance: a²m
• Applications: An extension of the exponential distribution, used when the coefficient of variation is less than one:
  1. To model service times in a queueing network model.
  2. A server with Erlang(a, m) service times can be represented as a series of m servers with exponentially distributed service times.
  3. To model time-to-repair and time-between-failures.
• Generation: Convolution. Generate m U(0, 1) random numbers u_i and then:
  Erlang(a, m) ~ −a ln(∏_{i=1}^{m} u_i)

Exponential Distribution

• Used extensively in queueing models.
• Key characteristics:
  1. Parameters: a = scale parameter = mean, a > 0
  2. Range: 0 ≤ x ≤ ∞
  3. pdf: f(x) = (1/a) e^(−x/a)
  4. CDF: F(x) = 1 − e^(−x/a)
  5. Mean: a
  6. Variance: a²
• Memoryless property: past history is not helpful in predicting the future.
• Applications: To model the time between successive events:
  1. Time between successive request arrivals to a device.
  2. Time between failures of a device.
  The service times at devices are also modeled as exponentially distributed.
• Generation: Inverse transformation. Generate a U(0, 1) random number u and return −a ln(u) as Exp(a). (See the code sketch after the Memoryless Property section.)

Memoryless Property

• Remembering the past does not help in predicting the time till the next event:
  F(t) = P(τ < t) = 1 − e^(−λt),  t ≥ 0
• At t = 0, the mean time to the next arrival is 1/λ.
• At t = x, the distribution of the time remaining till the next arrival is:
  P(τ − x < t | τ > x) = P(x < τ < x + t) / P(τ > x)
                       = [P(τ < x + t) − P(τ < x)] / P(τ > x)
                       = [(1 − e^(−λ(x+t))) − (1 − e^(−λx))] / e^(−λx)
                       = 1 − e^(−λt)
  This is identical to the situation at t = 0.
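The generation rules given above for the binomial (small-p), chi-square, Erlang, and exponential distributions translate almost directly into code. The following Python sketch illustrates those rules; it is not code from the original notes, the function names are invented, and random.random() and random.gauss() stand in for U(0, 1) and N(0, 1) sources.

    import math
    import random

    def exponential(a):
        # Inverse transformation: Exp(a) = -a ln(u).
        return -a * math.log(random.random())

    def erlang(a, m):
        # Convolution: Erlang(a, m) = -a ln(u1 u2 ... um).
        prod = 1.0
        for _ in range(m):
            prod *= random.random()
        return -a * math.log(prod)

    def chi_square(nu):
        # Even degrees of freedom: -2 ln(product of nu/2 uniforms);
        # odd: add one squared N(0, 1) variate to chi_square(nu - 1).
        x = 0.0
        if nu >= 2:
            prod = 1.0
            for _ in range(nu // 2):
                prod *= random.random()
            x = -2.0 * math.log(prod)
        if nu % 2 == 1:
            x += random.gauss(0.0, 1.0) ** 2
        return x

    def binomial_small_p(n, p):
        # Sum geometric inter-success gaps until they exceed n;
        # the number of complete gaps is the binomial variate.
        total, m = 0, 0
        while total <= n:
            g = math.ceil(math.log(random.random()) / math.log(1.0 - p))
            total += g
            m += 1
        return m - 1

For example, erlang(2.0, 3) yields variates with mean am = 6 and variance a²m = 12, and binomial_small_p(100, 0.02) counts successes in n = 100 trials with p = 0.02.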
F Distribution

• The ratio of two chi-square variates, each divided by its degrees of freedom, has an F distribution.
• Key characteristics:
  1. Parameters: n = numerator degrees of freedom, a positive integer; m = denominator degrees of freedom, a positive integer
  2. Range: 0 ≤ x ≤ ∞
  3. pdf: f(x) = [(n/m)^(n/2) / β(n/2, m/2)] x^((n−2)/2) [1 + (n/m)x]^(−(n+m)/2)
  4. Mean: m/(m − 2), provided m > 2
  5. Variance: 2m²(n + m − 2) / [n(m − 2)²(m − 4)], provided m > 4
• High quantiles: F[1−α; n, m] = 1 / F[α; m, n]
• Applications: To model the ratio of sample variances, for example in the F-test for regression and analysis of variance.
• Generation: Characterization. Generate two chi-square variates χ²(n) and χ²(m) and compute:
  F(n, m) = [χ²(n)/n] / [χ²(m)/m]

Gamma Distribution

• Generalization of the Erlang distribution; allows noninteger shape parameters.
• Key characteristics:
  1. Parameters: a = scale parameter, a > 0; b = shape parameter, b > 0
  2. Range: 0 ≤ x ≤ ∞
  3. pdf: f(x) = (x/a)^(b−1) e^(−x/a) / [a Γ(b)], where Γ(·) is the gamma function
  4. Mean: ab
  5. Variance: a²b
• Applications: To model service times and repair times.
• Generation (see the code sketch below):
  1. If b is an integer, the sum of b exponential variates has a gamma distribution:
     γ(a, b) ~ −a ln(∏_{i=1}^{b} u_i)
  2. For b < 1, generate a beta variate x ~ BT(b, 1 − b) and an exponential variate y ~ Exp(1). The product axy has a γ(a, b) distribution.
  3. For noninteger b, combine the two: γ(a, b) ~ γ(a, ⌊b⌋) + γ(a, b − ⌊b⌋).

Geometric Distribution

• Discrete equivalent of the exponential distribution.
• Key characteristics:
  1. Parameters: p = probability of success, 0 < p < 1
  2. Range: x = 1, 2, ..., ∞
  3. pmf: f(x) = (1 − p)^(x−1) p
  4. CDF: F(x) = 1 − (1 − p)^x
  5. Mean: 1/p
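Combining the rules above, the following Python sketch composes the gamma, F, and geometric generators. It is a hedged illustration rather than the original authors' code: the helper names are invented, the chi-square variate is obtained through the γ(2, ν/2) relation given earlier, and the geometric generator simply inverts the CDF F(x) = 1 − (1 − p)^x, which gives the same ⌈ln(u)/ln(1 − p)⌉ form used in the binomial small-p method.

    import math
    import random

    def exp_variate(a):
        # Exp(a) by inverse transformation.
        return -a * math.log(random.random())

    def beta_small_shapes(a, b):
        # BT(a, b) for a, b < 1, as in the Beta section.
        while True:
            x = random.random() ** (1.0 / a)
            y = random.random() ** (1.0 / b)
            if x + y <= 1.0:
                return x / (x + y)

    def gamma_variate(a, b):
        # Integer part of b by convolution of exponentials,
        # fractional part by the beta/exponential product rule.
        whole, frac = int(b), b - int(b)
        x = sum(exp_variate(a) for _ in range(whole))   # gamma(a, floor(b))
        if frac > 0.0:
            x += a * beta_small_shapes(frac, 1.0 - frac) * exp_variate(1.0)
        return x

    def chi_square(nu):
        # chi^2(nu) = gamma(2, nu/2).
        return gamma_variate(2.0, nu / 2.0)

    def f_variate(n, m):
        # Characterization: F(n, m) = (chi^2(n)/n) / (chi^2(m)/m).
        return (chi_square(n) / n) / (chi_square(m) / m)

    def geometric(p):
        # Inverse transformation of F(x) = 1 - (1 - p)^x:
        # smallest integer x with F(x) >= u.
        u = random.random()
        return max(1, math.ceil(math.log(1.0 - u) / math.log(1.0 - p)))

For instance, gamma_variate(2.0, 2.5) has mean ab = 5, and f_variate(5, 10) has mean m/(m − 2) = 1.25, consistent with the characteristics listed above.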