• Title/Summary/Keyword: floating point

Error Corrected K'th order Goldschmidt's Floating Point Number Division (오차 교정 K차 골드스미트 부동소수점 나눗셈)

  • Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.10 / pp.2341-2349 / 2015
  • The commonly used Goldschmidt floating-point division algorithm performs two multiplications per iteration. In this paper, an error-corrected K'th order Goldschmidt floating-point division algorithm, which performs K multiplications per iteration, is proposed. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation for single precision and double precision division is derived from many reciprocal tables of varying sizes. In addition, an error correction step, consisting of one multiplication and a decision, is proposed to obtain an exact result from the divider. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a division unit. It can also be used to construct optimized approximate reciprocal tables.
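
Since the abstract describes the scheme only in prose, here is a minimal Python sketch of a variable-iteration Goldschmidt division with a final multiply-and-compare correction step. It uses the standard second-order iteration rather than the paper's K'th order variant, and the table size, stopping tolerance, and one-ulp correction rule are illustrative assumptions, not values from the paper.

```python
import math

def goldschmidt_divide(n, f, table_bits=8, tol=2**-50, max_iter=10):
    """Approximate n / f for a significand f in [1, 2), stopping the
    Goldschmidt iteration as soon as the error term drops below tol."""
    assert 1.0 <= f < 2.0
    # Initial reciprocal estimate T ~ 1/f from a coarse lookup table.
    index = int((f - 1.0) * (1 << table_bits))
    t = 1.0 / (1.0 + (index + 0.5) / (1 << table_bits))
    n_i, f_i = n * t, f * t                    # N0 = T*N, F0 = T*F (ratio preserved)
    for _ in range(max_iter):
        if abs(f_i - 1.0) < tol:               # variable latency: stop early
            break
        r = 2.0 - f_i                          # correction factor
        n_i, f_i = n_i * r, f_i * r            # n_i converges to n / f
    # Tentative error correction: one extra multiplication and a decision,
    # stepping the quotient down one ulp if it overshoots (sketch only).
    if n_i * f > n:
        n_i = math.nextafter(n_i, 0.0)
    return n_i

print(goldschmidt_divide(1.0, 1.5))            # ~0.6666666666666666
```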

Design of Parallel Decimal Floating-Point Arithmetic Unit for High-speed Operations (고속 연산을 위한 병렬 구조의 십진 부동소수점 연산 장치 설계)

  • Yun, Hyoung-Kie;Moon, Dai-Tchul
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.12 / pp.2921-2926 / 2013
  • In this paper, a decimal floating-point arithmetic unit (DFP) is proposed and redesigned to support high-speed operation using a parallel processing technique. The basic architecture of the proposed DFP is based on L.K. Wang's DFP and improves on it by processing two operands with the same exponent in parallel, enabling high-speed operation. The proposed DFP was synthesized for an xc2vp30-7ff896 target device using Xilinx ISE and verified by simulation using the Flowrian tool from System Centroid Co. Compared with L.K. Wang's DFP and the method of reference [6], the proposed DFP improves data processing speed by about 8.4% and 3%, respectively, for the same input data.

Design of Decimal Floating-Point Adder for High Speed Operation with Leading Zero Anticipator (선행 제로 예측기를 이용한 고속 연산 십진 부동소수점 가산기 설계)

  • Yun, Hyoung-Kie;Moon, Dai-Tchul
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.2 / pp.407-413 / 2015
  • In this paper, a decimal floating-point adder (DFPA) is designed with a pipeline structure that uses a leading zero anticipator (LZA) to shorten the critical path delay and improve operation speed. The performance of the proposed DFPA was evaluated and verified by simulation with the Flowrian tool, and the design was synthesized with the Quartus II tool targeting a Cyclone III FPGA. The proposed method was compared and verified against other methods using the same input data. As a result, the performance of the proposed method is improved by 11.2% and 5.9% over L.K. Wang's method and the other referenced method, respectively. The results also confirm improved operation speed and a reduced number of delay elements on the critical path.
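
For context on the step the LZA accelerates, the sketch below shows decimal floating-point addition on (coefficient, exponent) pairs in Python; the leading-zero count computed here after the addition is the quantity a hardware leading zero anticipator predicts in parallel with the add, which is how the critical path is shortened. The 7-digit precision and the truncation rule are illustrative, not the paper's design.

```python
def decimal_fp_add(c1, e1, c2, e2, precision=7):
    """Add decimal floating-point values given as coefficient * 10**exponent.
    Illustrative sketch only (truncation in place of proper rounding)."""
    # Align the coefficients to the smaller exponent.
    shift = abs(e1 - e2)
    if e1 < e2:
        c2, e = c2 * 10 ** shift, e1
    else:
        c1, e = c1 * 10 ** shift, e2
    s = c1 + c2
    sign = -1 if s < 0 else 1
    mag = abs(s)
    # Leading-zero count of the result coefficient: a hardware LZA predicts
    # this while the addition is still in flight, shortening the critical path.
    digits = len(str(mag)) if mag else 0
    lz = max(precision - digits, 0)
    mag *= 10 ** lz                     # normalize left by the anticipated count
    e -= lz
    while mag >= 10 ** precision:       # shift right if the sum got too wide
        mag //= 10                      # truncate (a real adder would round)
        e += 1
    return sign * mag, e

print(decimal_fp_add(1234567, -3, 7654321, -5))   # (1311110, -3), i.e. 1311.110
print(decimal_fp_add(1000000, 0, -999999, 0))     # (1000000, -6), i.e. 1.000000
```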

Selection of Suitable Plants for Artificial Floating Islands - Comparisons of Vegetation Structure and Growth of Four Emergent Macrophytes (인공 식물섬에 적합한 식물의 선발 - 4종 정수식물의 식생구조와 생장의 비교)

  • Lee, Hyo Hye Mi;Kwon, Oh Byung;Suck, Jeong Hyun;Cho, Kang-Hyun
    • Journal of the Korean Society of Environmental Restoration Technology / v.4 no.1 / pp.57-66 / 2001
  • Artificial floating islands have been constructed for water quality improvement and biodiversity conservation in disturbed aquatic ecosystems. We made floating islands consisting of a special float and coconut-fiber substrates planted with four emergent macrophytes: Phragmites australis, Zizania latifolia, Iris pseudoacorus, and Typha angustifolia. Vegetation structure and plant growth were compared between the floating islands and the ground in order to select suitable plants for the construction of floating islands. Emergent-macrophyte vegetation on the floating islands showed lower coverage and higher plant biodiversity due to the natural introduction of various hydrophytes and hygrophytes. Shoot density increased on the floating islands except for Zizania latifolia. In terms of plant coverage and density, Phragmites australis and Iris pseudoacorus were suitable for floating islands. Total biomass of emergent macrophytes decreased on the floating islands. The belowground/aboveground biomass ratio on the floating islands was higher than that on the ground. Among the planted macrophytes, Iris pseudoacorus, with its high belowground/aboveground biomass ratio, could be considered a suitable plant for floating islands because its abundant roots help it adapt to the nutrient-limited environment of the floating islands.

A Study on Electrical Characteristic Improvement & Design Parameters of Power MOSFET with Single Floating Island Structure (단일 Floating Island 구조 Power MOSFET의 전기적 특성 향상과 설계 파라미터에 관한 연구)

  • Cho, Yu Seup;Sung, Man Young
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.28 no.4 / pp.222-228 / 2015
  • Power MOSFETs (metal oxide semiconductor field effect transistors) operate as energy-controlling semiconductor switches. To reduce the energy loss of the device, it is essential to increase its conductance. However, the trade-off between breakdown voltage and conductance has been the critical difficulty to overcome. In this paper, a theoretical analysis of the electrical benefits of a single floating island power MOSFET is presented. Using this analysis, an optimization point is set by defining the doping limit under the single floating island structure. A numerical multiple of 2.22 with respect to the doping limit of the original device was obtained, improving the ON-state voltage drop by 45%.

A Variable Latency Goldschmidt's Floating Point Number Square Root Computation (가변 시간 골드스미트 부동소수점 제곱근 계산기)

  • Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.1 / pp.188-198 / 2005
  • The Goldschmidt iterative algorithm finds a floating point square root by performing a fixed number of multiplications. In this paper, a variable latency Goldschmidt square root algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the square root of a floating point number $F$, the algorithm repeats the following operations: $R_i=\frac{3-e_r-X_i}{2}$, $X_{i+1}=X_i\times R_i^2$, $Y_{i+1}=Y_i\times R_i$, $i\in\{0,1,2,\ldots,n-1\}$, with the initial values $X_0=Y_0=T^2\times F$, where $T=\frac{1}{\sqrt{F}}+e_t$. The bits to the right of $p$ fractional bits in the intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 28 for single precision and 58 for double precision floating point. Let $X_i=1\pm e_i$; then $X_{i+1}=1-e_{i+1}$, where $e_{i+1}<\frac{3e_i^2}{4}\mp\frac{e_i^3}{4}+4e_r$. If $|X_i-1|<2^{\frac{-p+2}{2}}$ holds, then $e_{i+1}<8e_r$, which is less than the smallest number representable in the floating point format, so $\sqrt{F}$ is well approximated by $\frac{Y_{i+1}}{T}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($T=\frac{1}{\sqrt{F}}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a square root unit and to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
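
A compact Python rendering of the iteration described above ($X_{i+1}=X_iR_i^2$, $Y_{i+1}=Y_iR_i$, $X_0=Y_0=T^2F$), using ordinary double precision arithmetic; the seed accuracy and stopping tolerance are illustrative, and the explicit $p$-bit truncation is omitted for brevity.

```python
import math

def goldschmidt_sqrt(f, seed_error=2**-8, tol=2**-50, max_iter=10):
    """Variable-latency Goldschmidt square root sketch: iterate until
    |X_i - 1| < tol, then sqrt(f) ~ Y / T."""
    t = (1.0 + seed_error) / math.sqrt(f)      # stand-in for a table lookup T ~ 1/sqrt(f)
    x = y = t * t * f                          # X0 = Y0 = T^2 * F
    for _ in range(max_iter):
        if abs(x - 1.0) < tol:                 # stop once the error is small enough
            break
        r = (3.0 - x) / 2.0
        x, y = x * r * r, y * r                # X -> 1, Y -> T * sqrt(F)
    return y / t

print(goldschmidt_sqrt(2.0))                   # ~1.4142135623730951
```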

A Variable Latency Goldschmidt's Floating Point Number Divider (가변 시간 골드스미트 부동소수점 나눗셈기)

  • Kim Sung-Gi;Song Hong-Bok;Cho Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.2 / pp.380-389 / 2005
  • The Goldschmidt iterative algorithm computes a floating point division by performing a fixed number of multiplications. In this paper, a variable latency Goldschmidt division algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To compute a floating point division $\frac{N}{F}$, both the numerator and the denominator are multiplied by $T=\frac{1}{F}+e_t$, giving $\frac{TN}{TF}=\frac{N_0}{F_0}$. The algorithm then repeats the following operations: $R_i=2-e_r-F_i$, $N_{i+1}=N_i\times R_i$, $F_{i+1}=F_i\times R_i$, $i\in\{0,1,\ldots,n-1\}$. The bits to the right of $p$ fractional bits in the intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 29 for single precision and 59 for double precision floating point. Let $F_i=1+e_i$; then $F_{i+1}=1-e_{i+1}$, where the new error $e_{i+1}$ is on the order of $e_i^2$ plus a small multiple of $e_r$. If $|F_i-1|<2^{\frac{-p+3}{2}}$ holds, then $e_{i+1}<16e_r$, which is less than the smallest number representable in the floating point format, so $N_{i+1}$ approximates $\frac{N}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($T=\frac{1}{F}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a divider and to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
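
The sketch below follows the formulation above a little more literally: intermediate products are truncated to $p$ fractional bits (so the truncation error stays below $e_r=2^{-p}$) and the loop stops once $|F_i-1|<2^{\frac{-p+3}{2}}$, after which one final multiplication suffices. The initial reciprocal approximation stands in for the paper's lookup tables.

```python
import math

def truncate(x, p):
    """Keep p fractional bits, dropping the rest (truncation error < 2**-p)."""
    return math.floor(x * 2**p) / 2**p

def goldschmidt_divide_trunc(n, f, p=29, seed_error=2**-8, max_iter=10):
    """Variable-latency Goldschmidt division sketch with p-bit truncation."""
    t = (1.0 + seed_error) / f                 # stand-in for a table entry T ~ 1/F
    n_i = truncate(n * t, p)                   # N0 = T*N
    f_i = truncate(f * t, p)                   # F0 = T*F
    stop = 2.0 ** ((-p + 3) / 2)
    for _ in range(max_iter):
        r = 2.0 - f_i
        if abs(f_i - 1.0) < stop:              # error already small enough:
            n_i = truncate(n_i * r, p)         # one last multiplication and stop
            break
        n_i = truncate(n_i * r, p)
        f_i = truncate(f_i * r, p)
    return n_i

print(goldschmidt_divide_trunc(1.0, 1.75))     # close to 1/1.75 = 0.5714285714...
```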

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Square Root Computation (가변 시간 뉴톤-랍손 부동소수점 역수 제곱근 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA / v.12A no.5 s.95 / pp.413-420 / 2005
  • The Newton-Raphson iterative algorithm finds a floating point reciprocal square root by performing a fixed number of multiplications. In this paper, a variable latency Newton-Raphson reciprocal square root algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the reciprocal square root of a floating point number $F$, the algorithm repeats the following operation: $X_{i+1}=\frac{X_i(3-e_r-FX_i^2)}{2}$, $i\in\{0,1,2,\ldots,n-1\}$, with the initial value $X_0=\frac{1}{\sqrt{F}}\pm e_0$. The bits to the right of $p$ fractional bits in the intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 28 for single precision and 58 for double precision floating point. Let $X_i=\frac{1}{\sqrt{F}}\pm e_i$; then $X_{i+1}=\frac{1}{\sqrt{F}}-e_{i+1}$, where $e_{i+1}<\frac{3\sqrt{F}e_i^2}{2}\mp\frac{Fe_i^3}{2}+2e_r$. If $\left|\frac{3-e_r-FX_i^2}{2}-1\right|<2^{\frac{-p+2}{2}}$ holds, then $e_{i+1}<8e_r$, which is less than the smallest number representable in the floating point format, so $X_{i+1}$ approximates $\frac{1}{\sqrt{F}}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($X_0=\frac{1}{\sqrt{F}}\pm e_0$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal square root unit and to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
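
The same idea for the reciprocal square root iteration $X_{i+1}=X_i(3-FX_i^2)/2$, sketched in Python with a data-dependent stopping test on the correction factor; the seed error and threshold are illustrative, and the $p$-bit truncation is again omitted.

```python
import math

def nr_rsqrt(f, seed_error=2**-8, tol=2**-24, max_iter=10):
    """Variable-latency Newton-Raphson reciprocal square root sketch:
    keep iterating until the correction factor is close enough to 1."""
    x = 1.0 / math.sqrt(f) + seed_error        # stand-in for the table seed X0
    for _ in range(max_iter):
        h = (3.0 - f * x * x) / 2.0            # correction factor, tends to 1
        x = x * h
        if abs(h - 1.0) < tol:                 # data-dependent stopping test
            break
    return x

print(nr_rsqrt(2.0))                           # ~0.7071067811865476 (1/sqrt(2))
```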

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Computation (가변 시간 뉴톤-랍손 부동소수점 역수 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA / v.12A no.2 s.92 / pp.95-102 / 2005
  • The Newton-Raphson iterative algorithm for finding a floating point reciprocal, which is widely used for floating point division, calculates the reciprocal by performing a fixed number of multiplications. In this paper, a variable latency Newton-Raphson reciprocal algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the reciprocal of a floating point number $F$, the algorithm repeats the following operation: $X_{i+1}=X_i\times(2-e_r-F\times X_i)$, $i\in\{0,1,2,\ldots,n-1\}$, with the initial value $X_0=\frac{1}{F}\pm e_0$. The bits to the right of $p$ fractional bits in the intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 27 for single precision and 57 for double precision floating point. Let $X_i=\frac{1}{F}\pm e_i$; then $X_{i+1}=\frac{1}{F}-e_{i+1}$, where the error $e_{i+1}$ shrinks quadratically at each step until it is less than the smallest number representable in the floating point format, so $X_{i+1}$ approximates $\frac{1}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($X_0=\frac{1}{F}\pm e_0$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal unit and to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that use floating point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
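
And the corresponding reciprocal iteration $X_{i+1}=X_i(2-FX_i)$ in the same sketch style; the seed, the stopping threshold, and the use of plain double arithmetic in place of $p$-bit truncated multiplications are simplifications, not the paper's exact setup.

```python
def nr_reciprocal(f, seed_error=2**-8, tol=2**-24, max_iter=10):
    """Variable-latency Newton-Raphson reciprocal sketch: each step roughly
    squares the relative error, so the loop exits after a few iterations."""
    x = 1.0 / f + seed_error                   # stand-in for the table seed X0
    for _ in range(max_iter):
        h = 2.0 - f * x                        # correction factor, tends to 1
        x = x * h
        if abs(h - 1.0) < tol:                 # stop once the residual is negligible
            break
    return x

print(nr_reciprocal(1.25))                     # ~0.8
```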

Real-Time Implementation of MPEG-1 Audio decoder on ARM RISC (ARM RISC 상에서의 MPEG-1 Audio decoder의 실시간 구현)

  • 김선태
    • Proceedings of the IEEK Conference / 2000.11d / pp.119-122 / 2000
  • Recently, many complex DSP (digital signal processing) algorithms have been realized on RISC CPUs thanks to good compiler support, low power consumption, and large memory space. However, real-time implementation of multiple DSP algorithms on a RISC processor requires minimal, efficient memory usage and low CPU occupancy. In this paper, the original floating-point code of an MPEG-1 audio decoder is converted to fixed-point code, and the time-consuming functions are then optimized into efficient assembly code suited to the RISC architecture. Compared with the floating-point version, the result is about 30 times faster; compared with the plain fixed-point version, it is about 3 times faster. Memory usage is also reduced by a factor of 3~4.
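
To make the floating-point-to-fixed-point conversion concrete, here is a small Python sketch of a dot product (the kind of inner loop that dominates MPEG-1 subband windowing and synthesis) rewritten in Q15 integer arithmetic; the Q15 format, rounding, and accumulator handling are illustrative choices, not taken from the paper.

```python
Q = 15                                   # Q15: 1 sign bit, 15 fractional bits
SCALE = 1 << Q

def to_q15(x):
    """Quantize a float in [-1, 1) to a Q15 integer."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def dot_q15(a_q, b_q):
    """Fixed-point dot product: accumulate wide, then shift back to Q15."""
    acc = 0
    for a, b in zip(a_q, b_q):
        acc += a * b                     # each product is a Q30 value
    return acc >> Q                      # renormalize the accumulator to Q15

# Floating-point reference vs. fixed-point version of the same inner loop.
coeffs  = [0.5, -0.25, 0.125, 0.0625]
samples = [0.3,  0.7, -0.2,  -0.9]
ref = sum(c * s for c, s in zip(coeffs, samples))
fix = dot_q15([to_q15(c) for c in coeffs], [to_q15(s) for s in samples]) / SCALE
print(ref, fix)                          # values agree to within Q15 quantization
```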
