• Title/Summary/Keyword: Goldschmidt algorithm


Floating Point Number N'th Root K'th Order Goldschmidt Algorithm (부동소수점수 N차 제곱근 K차 골드스미스 알고리즘)

  • Cho, Gyeong Yeon
• Journal of Korea Multimedia Society, v.22 no.9, pp.1029-1035, 2019
  • In this paper, a Kth-order Goldschmidt floating-point Nth-root algorithm, which achieves Kth-order convergence in one iteration, is proposed by applying a Taylor series to the Goldschmidt square-root algorithm. Using the proposed algorithm, the Nth root and the inverse Nth root can be computed from iterative multiplications without division. The algorithm also predicts the error of each iteration and iterates until the predicted error becomes smaller than a specified value. Since it performs multiplications only until the error drops below a given value, it can be used to improve the performance of a floating-point Nth-root unit.
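One way to realize the iteration the abstract describes: a Kth-order step multiplies by a K-term Taylor (binomial) expansion of $X_i^{-1/N}$, so each pass drives $X_i$ toward 1 and raises the error to roughly the (K+1)th power. Below is a minimal Python sketch under stated assumptions: the input $F$ is a normalized significand in $[1, 2)$, the crude rounded seed stands in for the paper's lookup table, and the fixed threshold `eps` stands in for its error prediction; it illustrates the technique, not the paper's exact procedure.

```python
def goldschmidt_nth_root(F, N, K=2, eps=2**-50):
    T = round(F ** (-1.0 / N), 2)       # crude seed ~ F**(-1/N); stands in for a table lookup
    X = T**N * F                        # X converges to 1
    Y = T                               # Y converges to F**(-1/N)
    while abs(X - 1.0) > eps:
        e = 1.0 - X
        R, c = 1.0, 1.0
        for k in range(1, K + 1):       # K-term Taylor series of (1 - e)**(-1/N)
            c *= (1.0 / N + k - 1) / k  # next binomial-series coefficient
            R += c * e**k
        X *= R**N                       # multiplications only, no division
        Y *= R
    return F * Y ** (N - 1), Y          # N'th root and inverse N'th root
```

Note how the Nth root itself is recovered without division: $F^{1/N} = F \times (F^{-1/N})^{N-1}$.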

The improved Goldschmidt floating point reciprocal algorithm (개선한 Goldschmidt 부동소수점 역수 알고리즘)

  • 한경헌;최명용;김성기;조경연
• Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2004.05b, pp.247-250, 2004
  • The reciprocal of a floating-point number $1.f_2$ computed by the Goldschmidt algorithm is $q = N K_1 K_2 \cdots K_n$, where $K_i = 1 + A^j$ and $j = 2^i$. In this paper, an improved Goldschmidt floating-point reciprocal algorithm is proposed that selects the values of $N$ and $A$ according to the value of $1.f_2$ and terminates the computation once $A^j$ falls below half of the required significant digits. If $1.f_2$ is smaller than $1.0101_2$, then $N = 2 - 1.f_2$ and $A = 1.f_2 - 1$; if it is greater than or equal to $1.0101_2$, then $N = 2 - 0.1f_2$ and $A = 1 - 0.1f_2$. Because the Goldschmidt algorithm performs repeated multiplications, computation errors accumulate; to allow for this accumulated error, the computation must be carried to $2^{-57}$ significant digits for a double-precision reciprocal and to $2^{-28}$ for a single-precision reciprocal. The computation therefore terminates when $A^j$ becomes smaller than $2^{-29}$ for double precision and $2^{-14}$ for single precision. Compared with the conventional algorithm, which always computes with $N = 2 - 0.1f_2$ and $A = 1 - 0.1f_2$, the improved Goldschmidt reciprocal algorithm proposed in this paper reduces the number of multiplications by 22% for double-precision reciprocals and by 29% for single-precision reciprocals. The results of this paper can be applied to table-based Goldschmidt reciprocal algorithms to reduce computation time.
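A minimal Python sketch of the selection rule and termination test described above, assuming the input is the significand $m = 1.f_2 \in [1, 2)$; the double-precision threshold $2^{-29}$ is taken from the abstract, and in the $m \ge 1.0101_2$ branch the product evaluates to $2/m$, with the factor of 2 absorbed by exponent handling.

```python
def improved_reciprocal(m, threshold=2**-29):    # 2**-14 for single precision
    if m < 1.3125:                       # 1.0101 in binary
        q, A = 2.0 - m, m - 1.0          # N = 2 - 1.f,  A = 1.f - 1
    else:
        q, A = 2.0 - m / 2, 1.0 - m / 2  # N = 2 - 0.1f, A = 1 - 0.1f
    Aj = A * A                           # A**(2**i), starting from i = 1
    while Aj >= threshold:               # stop once A**j is negligible
        q *= 1.0 + Aj                    # K_i = 1 + A**(2**i)
        Aj *= Aj
    return q                             # ~1/m (first branch) or ~2/m (second branch)
```

Both branches telescope: $(1 - A)\prod_{i\ge1}(1 + A^{2^i}) = \frac{1}{1+A}$ and $(1 + A)\prod_{i\ge1}(1 + A^{2^i}) = \frac{1}{1-A}$.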


Error Corrected K'th order Goldschmidt's Floating Point Number Division (오차 교정 K차 골드스미트 부동소수점 나눗셈)

  • Cho, Gyeong-Yeon
• Journal of the Korea Institute of Information and Communication Engineering, v.19 no.10, pp.2341-2349, 2015
  • The commonly used Goldschmidt floating-point division algorithm performs two multiplications in one iteration. In this paper, an error-corrected Kth-order Goldschmidt floating-point division algorithm, which performs K multiplications in one iteration, is proposed. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation for single-precision and double-precision division is derived from many reciprocal tables of varying sizes. In addition, an error-correction algorithm, consisting of one multiplication and a decision, is proposed to obtain an exact result from the divider. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a divider unit and to construct optimized approximate reciprocal tables.
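One standard way to get Kth-order convergence per iteration, sketched in Python: with $e = 1 - F_i$, multiplying both running values by the geometric series $R_i = 1 + e + \cdots + e^{K-1}$ gives $F_{i+1} = 1 - e^K$. Assumptions: $F$ is a normalized significand in $[1, 2)$, the rounded seed stands in for the paper's reciprocal tables, and the final error-correction step (one multiplication and a decision) is not reproduced; this illustrates the convergence order, not the paper's exact schedule of K multiplications.

```python
def kth_order_divide(N, F, K=3, eps=2**-50):
    T = round(1.0 / F, 2)           # crude reciprocal seed; stands in for a table
    n, f = N * T, F * T             # scale so f starts near 1
    while abs(f - 1.0) > eps:
        e = 1.0 - f
        R = term = 1.0
        for _ in range(K - 1):      # R = 1 + e + e**2 + ... + e**(K-1)
            term *= e
            R += term
        n *= R                      # quotient estimate -> N/F
        f *= R                      # f -> 1 - e**K: K'th-order convergence
    return n
```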

Goldschmidt's Double Precision Floating Point Reciprocal Computation using 32 bit multiplier (32 비트 곱셈기를 사용한 골드스미트 배정도실수 역수 계산기)

  • Cho, Gyeong-Yeon
• Journal of the Korea Academia-Industrial cooperation Society, v.15 no.5, pp.3093-3099, 2014
  • Modern graphics, multimedia, and audio processors mostly use floating-point numbers, and high-level languages such as C and Java use both single-precision and double-precision floating-point numbers. In this paper, an algorithm is proposed that computes the reciprocal of a double-precision floating-point number using a 32-bit multiplier. It divides the mantissa of the double-precision number into an upper part and a lower part, calculates the reciprocal of the upper part with Goldschmidt's algorithm, and then computes the reciprocal of the full double-precision number using the upper-part reciprocal as the initial value. Since the number of multiplications performed by the proposed algorithm depends on the mantissa of the floating-point number, the average number of multiplications per operation is derived from reciprocal tables of varying sizes.
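A minimal Python sketch of the split-mantissa idea, assuming the input is a normalized significand $m \in [1, 2)$: a reduced-precision Goldschmidt pass over the upper bits produces a reciprocal that then seeds the full-precision pass. Python doubles stand in for the fixed-point intermediates a 32-bit multiplier would produce; the split point and thresholds are illustrative, not the paper's.

```python
def reciprocal_double(m, eps=2**-50):
    upper = int(m * 2**26) / 2**26       # upper bits of the 53-bit significand
    t, f = 1.0, upper
    while abs(f - 1.0) > 2**-26:         # reduced-precision Goldschmidt pass
        r = 2.0 - f
        t *= r                           # t -> ~1/upper
        f *= r
    q, f = t, m * t                      # 1/upper seeds the full-precision pass
    while abs(f - 1.0) > eps:
        r = 2.0 - f
        q *= r
        f *= r
    return q                             # ~1/m
```

Because the seed is already accurate to about $2^{-26}$, the full-precision pass typically needs only one or two further iterations.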

A Variable Latency Goldschmidt's Floating Point Number Square Root Computation (가변 시간 골드스미트 부동소수점 제곱근 계산기)

  • Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
• Journal of the Korea Institute of Information and Communication Engineering, v.9 no.1, pp.188-198, 2005
  • The Goldschmidt iterative algorithm finds a floating-point square root by performing a fixed number of multiplications. In this paper, a variable-latency Goldschmidt square-root algorithm is proposed that performs a variable number of multiplications until the error becomes smaller than a given value. To find the square root of a floating-point number $F$, the algorithm repeats the operations $R_i = \frac{3 - e_r - X_i}{2}$, $X_{i+1} = X_i \times R_i^2$, $Y_{i+1} = Y_i \times R_i$ for $i \in \{0, 1, 2, \ldots, n-1\}$, with the initial values $X_0 = Y_0 = T^2 \times F$, where $T = \frac{1}{\sqrt{F}} + e_t$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, so the truncation error is less than $e_r = 2^{-p}$; the value of $p$ is 28 for single-precision and 58 for double-precision floating point. Writing $X_i = 1 \pm e_i$, we have $X_{i+1} = 1 - e_{i+1}$, where $e_{i+1} < \frac{3e_i^2}{4} \mp \frac{e_i^3}{4} + 4e_r$. Once $|X_i - 1| < 2^{\frac{-p+2}{2}}$ holds, $e_{i+1} < 8e_r$, which is smaller than the smallest representable floating-point increment, so $\sqrt{F}$ is approximated by $\frac{Y_{i+1}}{T}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square-root tables ($T = \frac{1}{\sqrt{F}} + e_t$) of varying sizes, and the superiority of the algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a square-root unit and to construct optimized approximate reciprocal square-root tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
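A minimal Python sketch of the recurrence above, with the truncation machinery ($e_r$, $p$ fractional bits) omitted and Python doubles standing in for the fixed-point intermediates; the rounded seed and the threshold `eps` stand in for the paper's reciprocal square-root table and error test.

```python
def goldschmidt_sqrt(F, eps=2**-50):
    T = round(F ** -0.5, 2)     # crude seed T ~ 1/sqrt(F); stands in for a table
    X = Y = T * T * F           # X_0 = Y_0 = T^2 * F
    while abs(X - 1.0) > eps:   # iterate only until the error is small enough
        R = (3.0 - X) / 2.0     # R_i = (3 - X_i) / 2 (the /2 is a shift in hardware)
        X *= R * R              # X_{i+1} = X_i * R_i^2 -> 1
        Y *= R                  # Y_{i+1} = Y_i * R_i -> T * sqrt(F)
    return Y / T                # sqrt(F) ~ Y / T
```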

A Variable Latency Goldschmidt's Floating Point Number Divider (가변 시간 골드스미트 부동소수점 나눗셈기)

• Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
• Journal of the Korea Institute of Information and Communication Engineering, v.9 no.2, pp.380-389, 2005
  • The Goldschmidt iterative algorithm computes a floating-point division by performing a fixed number of multiplications. In this paper, a variable-latency Goldschmidt division algorithm is proposed that performs a variable number of multiplications until the error becomes smaller than a given value. To compute a floating-point division $\frac{N}{F}$, multiply the numerator and the denominator by $T = \frac{1}{F} + e_t$, giving $\frac{TN}{TF} = \frac{N_0}{F_0}$; the algorithm then repeats the operations $R_i = 2 - e_r - F_i$, $N_{i+1} = N_i \times R_i$, $F_{i+1} = F_i \times R_i$ for $i \in \{0, 1, \ldots, n-1\}$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, so the truncation error is less than $e_r = 2^{-p}$; the value of $p$ is 29 for single-precision and 59 for double-precision floating point. Writing $F_i = 1 + e_i$, we have $F_{i+1} = 1 - e_{i+1}$, where $e_{i+1}$ is bounded by $e_i^2$ plus a small multiple of $e_r$. Once $|F_i - 1| < 2^{\frac{-p+3}{2}}$ holds, $e_{i+1} < 16e_r$, which is smaller than the smallest representable floating-point increment, so $N_{i+1}$ approximates $\frac{N}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($T = \frac{1}{F} + e_t$) of varying sizes, and the superiority of the algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a divider and to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
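The same recurrence as a minimal Python sketch, with the truncation term $e_r$ dropped, $F$ assumed to be a normalized significand in $[1, 2)$, and a rounded seed standing in for the reciprocal table; the variable latency shows up as the data-dependent trip count of the loop.

```python
def goldschmidt_divide(N, F, eps=2**-50):
    T = round(1.0 / F, 2)       # crude seed T ~ 1/F; stands in for a table
    n, f = N * T, F * T         # N_0 = T*N, F_0 = T*F
    while abs(f - 1.0) > eps:   # stop as soon as the error is small enough
        R = 2.0 - f             # R_i = 2 - F_i
        n *= R                  # N_{i+1} = N_i * R_i -> N/F
        f *= R                  # F_{i+1} = F_i * R_i -> 1
    return n
```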