Logarithmic Arithmetic as an Alternative to Floating-Point: A Review

Manik Chugh and Behrooz Parhami
Dept. Electrical & Computer Eng., Univ. of California
Santa Barbara, CA 93106-9560, USA
[email protected]

ABSTRACT: The logarithmic number system (LNS) has found appeal in digital arithmetic because it allows multiplication and division to be performed much faster and more accurately than with the widely used floating-point (FP) number formats. We review the sign/logarithmic number system and present a comparison of various techniques and architectures for performing arithmetic operations efficiently in LNS. As a case study, we describe the European logarithmic microprocessor, a device built in the framework of a research project launched in 1999. Comparison of the arithmetic performance of this microprocessor with that of a commercial superscalar pipelined FP processor leads to the conclusion that LNS can be successfully deployed in general-purpose systems.

KEYWORDS: ALU design, computation errors, interpolation, instruction-set architecture, logarithmic number system, machine arithmetic, performance per watt, real arithmetic.

1. Introduction

Proposals for the logarithmic number system (LNS) began in 1971, when Kingsbury and Rayner [2] introduced "logarithmic arithmetic" for digital signal processing. A similar logarithmic number system was proposed by Swartzlander and Alexopoulos [3] in 1975; instead of using a 2's-complement format for logarithms, their scheme scaled the numbers to avoid negative logarithms.

The sign/logarithmic number system overcomes the slowness of multiplication and division with conventional weighted number representations, while also avoiding the problems inherent in a residue number system (RNS). This advantage, however, is offset by the fact that addition and subtraction require a fairly complex procedure to be applied to the logarithms. Despite its attractive properties, until fairly recently only a few implementations of LNS arithmetic were attempted, all of which were restricted to low-precision applications, the difficulty of performing addition and subtraction on long words being the principal reason. LNS addition and subtraction require lookup tables whose size grows exponentially, to several times 2^l words for logarithms of width l bits (a short sketch at the end of this introduction quantifies this growth). For this reason, implementations described in the early literature were limited to 8-12 bits of fractional precision [4], [5].

It was recognized early on that LNS can offer an advantage over floating-point (FP) representation only if LNS addition and subtraction can be performed with speed and accuracy at least equal to those of FP. However, achieving this goal is complicated by the fact that these operations require the evaluation of nonlinear functions [6], [7], [8], [9].

Contemplating the development of LNS-based commercial microprocessors, Lewis et al. [10], Paliouras et al. [11], and Arnold [12] proposed architectures for LNS-based processors in the late 1990s and early 2000s, but did not present a finished design or extensive simulation results. At about the same time, a European project, initiated by Coleman et al. [13], [14], laid down the foundations for the development of such a commercial digital system, dubbed the European logarithmic microprocessor (ELM), which provided performance similar to that of commercial superscalar pipelined floating-point processors [15].

Modern computation-intensive applications, with their increased algorithmic complexity as well as larger problem sizes and data sets, are becoming bounded by the speed of FP operations. Real-time applications in this class are exemplified by RLS-based algorithms, subspace methods required in broadcasting and cellular telephony, Kalman filtering, and Riccati-like equations for advanced real-time control. Graphics systems [16] provide another case in point.

The Gravity Pipe supercomputer (GRAPE), which won the Gordon Bell Prize in 1999, used LNS representation and arithmetic. LNS is also commonly used as part of hidden Markov models, exemplified by the Viterbi algorithm, as applied in speech recognition and DNA sequencing. The past two decades have seen substantial efforts to explore the applicability of LNS as a viable alternative to FP for general-purpose processing of single-precision real numbers, demonstrating improved accuracy and speed. Collecting pertinent references and information about these ongoing efforts in one place has been a primary motivation for writing this paper.
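To put the exponential table growth in perspective, the short Python sketch below lists the order-of-magnitude ROM size implied by a full lookup table indexed by an l-bit logarithm, for word widths mentioned in this paper; the simple "2**l entries" model covers only the direct-lookup case and ignores the constant factor of several tables.

# Rough size of a direct-lookup ROM for LNS add/subtract: one entry per
# possible l-bit difference, i.e., on the order of 2**l words per function.
# The word widths below correspond to precisions mentioned in the text.
for l in (8, 12, 20, 28, 31):
    print(f"l = {l:2d}-bit logarithm  ->  ~{2**l:>13,} table words")

Already at 20 bits the table exceeds a million words, which is why the direct-ROM designs cited below stop at roughly that width and why Section 3 turns to interpolation.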
2. Background and Terminology

The LNS representation of a number x consists of the number's sign Sx and the binary logarithm Lx of its magnitude. The LNS representation equivalent to the 32-bit (single-precision) IEEE standard FP format [13] has a 31-bit logarithm part that forms a 2's-complement fixed-point value ranging from -128 to approximately +128. The real numbers represented are signed and have magnitudes ranging from 2^-128 to ~2^+128 (i.e., from about 2.9 x 10^-39 to 3.4 x 10^+38). The smallest representable positive value, hexadecimal 40000000, is used as a special code for zero, while hexadecimal C0000000 is dedicated to representing NaN.

LNS's uniform geometric error characteristics across the entire range of values lead to roughly an additional 1/2 bit of precision compared with an FP representation using the same number of bits. Thus, in signal-processing applications, LNS offers a better signal-to-noise ratio as well as a better dynamic range.

LNS multiplication and division are defined as:

Multiplication:  Sp = Sx XOR Sy,  Lp = Lx + Ly   (1)
Division:        Sq = Sx XOR Sy,  Lq = Lx - Ly   (2)

Given x and y, with |x| >= |y|, z = x +/- y is computed as:

Sz = Sx
Lz = log2|x +/- y| = log2|x(1 +/- y/x)|
   = log2|x| + log2|1 +/- y/x|
   = log2|x| + log2|1 +/- 2^(Ly - Lx)|   (3)

Let d = Ly - Lx <= 0 and F+/-(d) = log2|1 +/- 2^d|. The value of F+/-(d), shown in Fig. 1, can be read out from a ROM, but this is infeasible for wide words. Schemes based on interpolation are discussed in Section 3; for now, we will assume that a ROM is sufficient for our purpose.

Fig. 1. Plots of F+(d) and F-(d) as functions of d.

Fig. 2. A complete four-function ALU for LNS [21]: a comparator (Lx > Ly?), a log add/subtract stage, a ROM for F+/-(d), a second add/subtract stage producing Lz, and control/sign logic for Sx, Sy, Sz.

Realization of Eqns. (1), (2), and (3) by means of a comparator, an adder, a subtractor, a read-only memory (ROM) table, two multiplexers, and a small amount of peripheral logic results in a simple four-function ALU [3], as shown in Fig. 2. The operation latency in this ALU is T_OP = T_COMP + 2 T_ADD + T_ROM, where T_COMP is the delay of the comparator. For convenience, we have assumed the peripheral logic delay to be negligible. In practice, the comparator may be implemented with a subtractor, leading to T_OP = 3 T_ADD + T_ROM.
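As a concrete illustration of Eqns. (1)-(3), the following Python sketch models sign/logarithm pairs in software and implements the four ALU operations. It is a behavioral sketch only: double-precision math.log2 stands in for the ROM of Fig. 2, the special codes for zero and NaN are ignored, and the function names (lns_encode, lns_mul, and so on) are illustrative rather than taken from the paper.

import math

def lns_encode(x):
    """Represent a nonzero real x as (sign, Lx) with Lx = log2|x|."""
    return (1 if x > 0 else -1), math.log2(abs(x))

def lns_decode(s, L):
    """Recover the real value s * 2**L."""
    return s * 2.0 ** L

def lns_mul(a, b):                        # Eqn. (1): signs combine, logs add
    (sx, Lx), (sy, Ly) = a, b
    return sx * sy, Lx + Ly

def lns_div(a, b):                        # Eqn. (2): signs combine, logs subtract
    (sx, Lx), (sy, Ly) = a, b
    return sx * sy, Lx - Ly

def F(d, subtract):
    """Nonlinear term F+/-(d) = log2|1 +/- 2**d|, a ROM lookup in hardware."""
    return math.log2(abs(1.0 - 2.0 ** d if subtract else 1.0 + 2.0 ** d))

def lns_addsub(a, b, subtract=False):     # Eqn. (3): Sz = Sx, Lz = Lx + F+/-(d)
    (sx, Lx), (sy, Ly) = a, b
    if subtract:
        sy = -sy                          # x - y is computed as x + (-y)
    if Ly > Lx:                           # reorder so |x| >= |y|, hence d = Ly - Lx <= 0
        (sx, Lx), (sy, Ly) = (sy, Ly), (sx, Lx)
    # Magnitudes are effectively subtracted when the adjusted signs differ; equal
    # magnitudes of opposite sign give F-(0) = -inf, i.e., the special code for zero.
    return sx, Lx + F(Ly - Lx, subtract=(sx != sy))

a, b = lns_encode(-6.0), lns_encode(1.5)
print(lns_decode(*lns_mul(a, b)))         # ~ -9.0
print(lns_decode(*lns_addsub(a, b)))      # ~ -4.5  (= -6.0 + 1.5)

In this software model, as in the hardware of Fig. 2, everything beyond two add/subtract stages and a comparison lies in the F lookup; making that lookup practical at 32-bit precision is the subject of the next section.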
3. Addition and Subtraction

In this section, we focus on the efficient calculation of the nonlinear term F(d) = log2|1 +/- 2^d|, where d = Ly - Lx (or d = j - i, taking Ly as j and Lx as i for simplicity). Implementation work began with Swartzlander and Alexopoulos's 1975 paper [3], with a 12-bit device [4], while a 1988 scheme extended the width to 20 bits [5]. Both designs were direct implementations of Eqn. (3), with a ROM covering all possible values of F+/-(d).

In the preceding simple scheme, table sizes increase exponentially with the word width, limiting its practical utility to about 20 bits. Lewis's 1991 design [6] extended the word width to 28 bits by implementing the lookup table for d values only at intervals of D. Any negative value of d satisfies d = -hD - delta for some integer h (with 0 <= delta < D), leading to the Taylor-series expansion of F(d), of which only the first-order term was included in Lewis's design:

F(-hD - delta) = F(-hD) - (delta/1!) F'(-hD) + (delta^2/2!) F''(-hD) - ...

Lewis's scheme exposed a further problem intrinsic to LNS arithmetic: the difficulty of interpolating F-(d) in the region -1 < d < 0 (Fig. 1). To maintain accuracy, it is necessary to implement a large number of successively smaller intervals as d -> 0.

Coleman et al. [13] published the design of a 32-bit logarithmic adder/subtractor, using a first-order Taylor-series approximation augmented with concurrent error estimation to form a correction term. The critical path of their design (Fig. 3) contains a ROM, a multiplier, and two adders. For the range -1 < d < 0, they employed a range-shifter for transforming a subtraction into one whose d value falls outside this troublesome region.

Fig. 3. An LNS adder/subtractor implementation [13]. (Diagram: four lookup tables feeding a two-level 4-to-2 carry-save adder, a carry-save adder, and a final carry-propagate adder.)

Table 1. Latencies of VLSI arithmetic circuits (ns).

  Operation   Fixed   FP    LNS
  Add            4    28    28
  Subtract       4    28    28/42
  Multiply      32    22     4
  Divide        --    --     4

Depending on whether the range-shifter is required in a particular subtraction, two timing values are shown for subtraction in Table 1. Assuming that the range-shifter comes into play in 50% of subtractions on average, the mean subtraction time amounts to 35 ns. Assuming an equal mix of additions and subtractions, the average add/subtract time is about 31 ns, while the average LNS multiply/divide time is 4 ns. Thus, the LNS unit would reach roughly twice the speed of FP at an add/multiply ratio of about 40/60 percent.

LNS ALUs have markedly different characteristics from their FP counterparts and therefore require some reevaluation of the surrounding microprocessor design to deploy them to best advantage.
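To give a feel for the table-with-first-order-interpolation approach described above, here is a small Python sketch that tabulates F-(d) and its derivative at intervals of D and reconstructs intermediate values from the first-order Taylor term. It is only a numerical illustration of the idea, not the table organization of Lewis [6] or Coleman et al. [13]: the spacing DELTA = 1/16 and the covered range are arbitrary choices, and the printed comparison simply shows how the first-order error grows as d -> 0, the critical region that motivates the range-shifter.

import math

DELTA = 1.0 / 16.0   # illustrative table spacing; real designs use finer, segmented tables

def F_sub(d):
    """Exact F-(d) = log2(1 - 2**d) for d < 0 (reference evaluation)."""
    return math.log2(1.0 - 2.0 ** d)

def F_sub_deriv(d):
    """Derivative dF-(d)/dd = -2**d / (1 - 2**d); the ln(2) factors cancel."""
    return -(2.0 ** d) / (1.0 - 2.0 ** d)

# Coarse tables sampled at d = -h*DELTA, h = 1..H_MAX (d = 0 itself is singular).
H_MAX = int(8.0 / DELTA)
table_F  = [None] + [F_sub(-h * DELTA) for h in range(1, H_MAX + 1)]
table_dF = [None] + [F_sub_deriv(-h * DELTA) for h in range(1, H_MAX + 1)]

def F_sub_interp(d):
    """First-order reconstruction F(-h*DELTA - delta) ~= F(-h*DELTA) - delta * F'(-h*DELTA)."""
    assert -8.0 <= d <= -DELTA, "the region -DELTA < d < 0 needs successively finer tables"
    h = int(-d / DELTA)            # index of the tabulated point just above d
    delta = -d - h * DELTA         # 0 <= delta < DELTA
    return table_F[h] - delta * table_dF[h]

# The first-order error is tiny for large |d| but grows sharply as d -> 0,
# where F-(d) and its derivatives blow up.
for d in (-4.03, -1.03, -0.28, -0.09):
    exact, approx = F_sub(d), F_sub_interp(d)
    print(f"d = {d:6.2f}   exact = {exact:9.5f}   interpolated = {approx:9.5f}   error = {abs(exact - approx):.2e}")

In the design of Coleman et al. [13], accuracy over the full word width is maintained by augmenting this kind of first-order interpolation with a concurrently estimated correction term and with the range-shifter for the troublesome region near d = 0, as described above.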
