• Title/Summary/Keyword: Floating-Point Unit

Analyzing Influence Factors of Foodservice Sales by Rebuilding Spatial Data : Focusing on the Conversion of Aggregation Units of Heterogeneous Spatial Data (공간 데이터 재구축을 통한 음식업종 매출액 영향 요인 분석 : 이종 공간 데이터의 집계단위 변환을 중심으로)

  • Noh, Eunbin;Lee, Sang-Kyeong;Lee, Byoungkil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.581-590
    • /
    • 2017
  • This study analyzes the effects of floating population, locational characteristics, and spatial autocorrelation on foodservice sales using big data provided by the Seoul Institute. Although public-sector big data has been growing recently, research is complicated by differences in the aggregation units of the data. In this study, the dependent variable, foodservice sales, is aggregated at the SKT unit, while the independent variables come in various units: the aggregation unit of the Korea National Statistical Office, the administrative dong unit, and point data. To overcome this problem, all data are converted to the SKT aggregation unit. A spatial error model (SEM) is used to account for spatial autocorrelation. Floating population, the number of nearby workers, and the area of the aggregation unit have positive effects on foodservice sales. In addition, sales in Jung-gu, Yeongdeungpo-gu, and Songpa-gu are lower than those in Gangnam-gu. This study provides implications for further research by showing the usefulness and limitations of converting aggregation units of heterogeneous spatial data.
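The abstract does not spell out how the aggregation units were converted, so the following is only a minimal sketch of one common option, area-weighted reallocation, in which each source zone's value is split among target zones in proportion to overlapping area; the zone counts and overlap areas are made-up inputs, not taken from the paper.

```c
/* Hedged sketch: area-weighted reallocation of zone totals from one zoning
 * system to another. The paper's actual conversion rule is not stated in the
 * abstract; this is a common stand-in, with invented overlap areas. */
#include <stdio.h>

#define NSRC 2
#define NDST 3

int main(void)
{
    double src_value[NSRC] = { 100.0, 60.0 };        /* e.g. workers per source zone */
    /* overlap[s][d]: area of intersection of source zone s with target zone d */
    double overlap[NSRC][NDST] = { { 2.0, 1.0, 1.0 },
                                   { 0.0, 3.0, 3.0 } };
    double dst_value[NDST] = { 0.0, 0.0, 0.0 };

    for (int s = 0; s < NSRC; s++) {
        double total = 0.0;
        for (int d = 0; d < NDST; d++) total += overlap[s][d];
        for (int d = 0; d < NDST; d++)
            dst_value[d] += src_value[s] * overlap[s][d] / total;  /* proportional split */
    }
    for (int d = 0; d < NDST; d++)
        printf("target zone %d: %.1f\n", d, dst_value[d]);
    return 0;
}
```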

An Improved Newton-Raphson's Reciprocal and Inverse Square Root Algorithm (개선된 뉴톤-랍손 역수 및 역제곱근 알고리즘)

  • Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.1
    • /
    • pp.46-55
    • /
    • 2007
  • The Newton-Raphson algorithm for finding a floating-point reciprocal or inverse square root calculates the result by performing a fixed number of multiplications. In this paper, an improved Newton-Raphson algorithm is proposed that performs a variable number of multiplications. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal and inverse square root tables of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal and inverse square root unit. It can also be used to construct optimized approximation tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
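To make the variable-latency idea concrete, here is a minimal C sketch of a Newton-Raphson reciprocal iteration that stops as soon as the residual error falls below a chosen bound instead of running a fixed number of steps; the seed value, tolerance, and stopping test are illustrative assumptions, not the paper's exact design.

```c
/* Hedged sketch of a variable-iteration Newton-Raphson reciprocal:
 * iterate only until the error is below a chosen bound. */
#include <math.h>
#include <stdio.h>

/* Newton-Raphson reciprocal: x_{i+1} = x_i * (2 - F * x_i).
 * Stops as soon as |1 - F*x| < tol instead of running a fixed iteration count. */
static double nr_reciprocal(double F, double x0, double tol, int *iters)
{
    double x = x0;
    *iters = 0;
    while (fabs(1.0 - F * x) >= tol && *iters < 10) {
        x = x * (2.0 - F * x);      /* two multiplies per step in hardware */
        (*iters)++;
    }
    return x;
}

int main(void)
{
    int n;
    double F = 1.37;                 /* mantissa-normalized input, 1 <= F < 2 */
    double x0 = 1.0 / 1.375;         /* crude seed standing in for a table lookup */
    double r = nr_reciprocal(F, x0, 1e-12, &n);
    printf("1/%g ~= %.15g after %d iterations\n", F, r, n);
    return 0;
}
```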

A Variable Latency Goldschmidt's Floating Point Number Square Root Computation (가변 시간 골드스미트 부동소수점 제곱근 계산기)

  • Kim, Sung-Gi;Song, Hong-Bok;Cho, Gyeong-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.1
    • /
    • pp.188-198
    • /
    • 2005
  • The Goldschmidt iterative algorithm for finding a floating-point square root calculates it by performing a fixed number of multiplications. In this paper, a variable-latency Goldschmidt square root algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the square root of a floating-point number $F$, the algorithm repeats the operations $R_i=\frac{3-e_r-X_i}{2}$, $X_{i+1}=X_i \times R_i^2$, $Y_{i+1}=Y_i \times R_i$, $i \in \{0,1,2,\ldots,n-1\}$, with the initial values $X_0=Y_0=T^2 \times F$, where $T=\frac{1}{\sqrt{F}}+e_t$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 28 for single-precision and 58 for double-precision floating point. Let $X_i=1\pm e_i$; then $X_{i+1}=1-e_{i+1}$, where $e_{i+1}<\frac{3e_i^2}{4}\mp\frac{e_i^3}{4}+4e_r$. If $|X_i-1|<2^{\frac{-p+2}{2}}$ holds, then $e_{i+1}<8e_r$, which is less than the smallest number representable by a floating-point number, so $\sqrt{F}$ is approximated by $\frac{Y_{i+1}}{T}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($T=\frac{1}{\sqrt{F}}+e_t$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a square root unit. It can also be used to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
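A minimal C sketch of the Goldschmidt recurrence quoted above, stopping when $X_i$ is close enough to 1 (the variable-latency idea); the seed $T$, the tolerance, and the omission of the $p$-bit truncation are simplifications for illustration, not the paper's implementation.

```c
/* Hedged sketch of the Goldschmidt square-root recurrence from the abstract.
 * T approximates 1/sqrt(F) (a table lookup in the paper); here it is a rough
 * constant. The e_r truncation and the paper's exact stopping test are omitted. */
#include <math.h>
#include <stdio.h>

static double goldschmidt_sqrt(double F, double T, double tol, int *iters)
{
    double X = T * T * F;            /* X_0 = Y_0 = T^2 * F */
    double Y = X;
    *iters = 0;
    while (fabs(X - 1.0) >= tol && *iters < 10) {
        double R = (3.0 - X) / 2.0;  /* R_i = (3 - X_i) / 2, ignoring e_r */
        X = X * R * R;               /* X_{i+1} = X_i * R_i^2  -> converges to 1        */
        Y = Y * R;                   /* Y_{i+1} = Y_i * R_i    -> converges to T*sqrt(F) */
        (*iters)++;
    }
    return Y / T;                    /* sqrt(F) ~= Y / T */
}

int main(void)
{
    int n;
    double F = 1.69;                 /* mantissa-normalized input */
    double T = 0.75;                 /* rough seed for 1/sqrt(F) */
    double s = goldschmidt_sqrt(F, T, 1e-12, &n);
    printf("sqrt(%g) ~= %.15g after %d iterations\n", F, s, n);
    return 0;
}
```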

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Square Root Computation (가변 시간 뉴톤-랍손 부동소수점 역수 제곱근 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.5 s.95
    • /
    • pp.413-420
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating-point reciprocal square root calculates it by performing a fixed number of multiplications. In this paper, a variable-latency Newton-Raphson reciprocal square root algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the reciprocal square root of a floating-point number $F$, the algorithm repeats the operation $X_{i+1}=\frac{X_i(3-e_r-FX_i^2)}{2}$, $i \in \{0,1,2,\ldots,n-1\}$, with the initial value $X_0=\frac{1}{\sqrt{F}}\pm e_0$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 28 for single-precision and 58 for double-precision floating point. Let $X_i=\frac{1}{\sqrt{F}}\pm e_i$; then $X_{i+1}=\frac{1}{\sqrt{F}}-e_{i+1}$, where $e_{i+1}<\frac{3\sqrt{F}e_i^2}{2}\mp\frac{Fe_i^3}{2}+2e_r$. If $\left|\frac{3-e_r-FX_i^2}{2}-1\right|<2^{\frac{-p+2}{2}}$ holds, then $e_{i+1}<8e_r$, which is less than the smallest number representable by a floating-point number, so $X_{i+1}$ approximates $\frac{1}{\sqrt{F}}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($X_0=\frac{1}{\sqrt{F}}\pm e_0$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal square root unit. It can also be used to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
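The recurrence above can be sketched in a few lines of C; the seed, tolerance, and stopping test below are illustrative stand-ins for the table lookup and the $2^{\frac{-p+2}{2}}$ condition used in the paper.

```c
/* Hedged sketch of the Newton-Raphson reciprocal-square-root recurrence
 * X_{i+1} = X_i * (3 - F * X_i^2) / 2, run a variable number of times.
 * Table seeding and the p-bit truncation are replaced by simple stand-ins. */
#include <math.h>
#include <stdio.h>

static double nr_rsqrt(double F, double x0, double tol, int *iters)
{
    double x = x0;
    *iters = 0;
    /* stop once F*x*x is within tol of 1, i.e. x is close enough to 1/sqrt(F) */
    while (fabs(F * x * x - 1.0) >= tol && *iters < 10) {
        x = x * (3.0 - F * x * x) / 2.0;
        (*iters)++;
    }
    return x;
}

int main(void)
{
    int n;
    double F = 1.21;                  /* mantissa-normalized input */
    double x0 = 0.9;                  /* rough seed for 1/sqrt(F) */
    double x = nr_rsqrt(F, x0, 1e-12, &n);
    printf("1/sqrt(%g) ~= %.15g after %d iterations\n", F, x, n);
    return 0;
}
```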

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Computation (가변 시간 뉴톤-랍손 부동소수점 역수 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.2 s.92
    • /
    • pp.95-102
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating-point reciprocal, which is widely used for floating-point division, calculates the reciprocal by performing a fixed number of multiplications. In this paper, a variable-latency Newton-Raphson reciprocal algorithm is proposed that performs multiplications a variable number of times, until the error becomes smaller than a given value. To find the reciprocal of a floating-point number $F$, the algorithm repeats the operation $X_{i+1}=X_i \times (2-e_r-F \times X_i)$, $i \in \{0,1,2,\ldots,n-1\}$, with the initial value $X_0=\frac{1}{F}\pm e_0$. The bits to the right of $p$ fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of $p$ is 27 for single-precision and 57 for double-precision floating point. Let $X_i=\frac{1}{F}\pm e_i$; then $X_{i+1}=\frac{1}{F}-e_{i+1}$, where $e_{i+1}$ is bounded by $Fe_i^2$ plus a small multiple of $e_r$; once the iteration error falls below the given bound, $e_{i+1}$ is less than the smallest number representable by a floating-point number, so $X_{i+1}$ approximates $\frac{1}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($X_0=\frac{1}{F}\pm e_0$) of varying sizes. The superiority of this algorithm is shown by comparing this average with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal unit. It can also be used to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that use floating-point numbers, such as digital signal processing, computer graphics, multimedia, and scientific computing.
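A minimal C sketch of the reciprocal recurrence with the $p$-bit truncation of intermediate products emulated in software; $p=27$ follows the abstract's single-precision setting, while the seed and the $4e_r$ stopping threshold are assumptions for illustration.

```c
/* Hedged sketch of X_{i+1} = X_i * (2 - e_r - F*X_i) with intermediate results
 * truncated to p fractional bits, as the abstract describes. The truncation is
 * emulated here; real hardware would cut the multiplier output directly. */
#include <math.h>
#include <stdio.h>

#define P 27                              /* fractional bits kept after each multiply */

static double trunc_p(double v)           /* keep only P fractional bits (floor) */
{
    return floor(v * (double)(1 << P)) / (double)(1 << P);
}

static double nr_reciprocal_trunc(double F, double x0, int *iters)
{
    const double e_r = pow(2.0, -P);      /* bound on each truncation error */
    double x = x0;
    *iters = 0;
    while (fabs(1.0 - F * x) >= 4.0 * e_r && *iters < 10) {
        double t = trunc_p(F * x);        /* truncated product F * X_i */
        x = trunc_p(x * (2.0 - e_r - t)); /* truncated X_{i+1} */
        (*iters)++;
    }
    return x;
}

int main(void)
{
    int n;
    double F = 1.6;                       /* mantissa-normalized input, 1 <= F < 2 */
    double x = nr_reciprocal_trunc(F, 0.6, &n);   /* 0.6 stands in for a table seed */
    printf("1/%g ~= %.9f after %d iterations\n", F, x, n);
    return 0;
}
```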

A Design of a 8-Thread Graphics Processor Unit with Variable-Length Instructions

  • Lee, Kwang-Yeob;Kwak, Jae-Chang
    • Journal of information and communication convergence engineering
    • /
    • v.6 no.3
    • /
    • pp.285-288
    • /
    • 2008
  • Most multimedia processors for 2D/3D graphics acceleration use many integer and floating-point arithmetic units. We present a new architecture with an efficient ALU that fits in a smaller chip area. It reduces instruction cycles significantly through multi-threaded operation, variable-length instruction words, dual-phase operation, and coordination of phase instructions. The number of instruction cycles can be decreased by up to 50%, roughly doubling performance.

A Low-Cost Vector Control System of Induction Motor Using the ARM Cortex-M4 (ARM Cortex-M4를 이용한 저가형 유도전동기 벡터제어 시스템)

  • Kim, Dong-Ki;Lee, Seung-Yong;Yoon, Duck-Yong
    • Proceedings of the KIPE Conference
    • /
    • 2012.07a
    • /
    • pp.584-585
    • /
    • 2012
  • In this paper, a low-cost vector control system is implemented using an ARM Cortex-M4 microcontroller, which provides DSP and FPU (Floating Point Unit) capabilities. It operates at clock frequencies of up to 168 MHz while offering low power consumption and a low price.

A design of transcendental function arithmetic unit for lighting operation of mobile 3D graphic processor (모바일 3차원 그래픽 프로세서의 조명처리 연산을 위한 초월함수 연산기 구현)

  • Lee, Sang-Hun;Lee, Chan-Ho
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.715-718
    • /
    • 2005
  • Mobile devices are incorporating more functions to meet the demand for digital convergence, and applications based on 3D graphics, such as 3D games and navigation, are among them. 3D graphics involves heavy computation, so a dedicated high-performance 3D graphics hardware unit is needed, and it requires many complicated floating-point arithmetic operations. However, most current mobile 3D graphics processors do not have architectures that are efficient for mobile devices because they are based on designs for conventional computer systems. In this paper, we propose arithmetic units for the special functions used in the lighting operation of 3D graphics. The transcendental arithmetic units are designed using an approximation of the logarithm function, yielding special function units for lighting such as reciprocal, square root, reciprocal square root, and power. The proposed arithmetic unit has a lower error rate and smaller silicon area than conventional arithmetic architectures.
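The abstract does not give the units' internal structure, so the sketch below uses simple Mitchell-style linear approximations of $\log_2$ and $2^x$ as stand-ins to show how reciprocal, square root, reciprocal square root, and power all reduce to one multiplication in the logarithmic domain; the paper's own approximation and error behaviour differ.

```c
/* Hedged illustration of the logarithm-based approach: once log2(x) and 2^x can
 * be approximated cheaply, the lighting special functions become x^y = 2^(y*log2 x). */
#include <math.h>
#include <stdio.h>

static double approx_log2(double x)       /* x > 0: exponent + linear mantissa term */
{
    int e;
    double m = frexp(x, &e);              /* x = m * 2^e, 0.5 <= m < 1 */
    return (double)e + (2.0 * m - 2.0);   /* log2(m) ~ 2m - 2 on [0.5, 1) */
}

static double approx_exp2(double x)
{
    double i = floor(x), f = x - i;       /* integer and fractional parts */
    return ldexp(1.0 + f, (int)i);        /* 2^f ~ 1 + f on [0, 1) */
}

static double approx_pow(double x, double y)   /* x^y = 2^(y * log2 x) */
{
    return approx_exp2(y * approx_log2(x));
}

int main(void)
{
    double x = 1.8;
    printf("1/x     ~ %.4f (exact %.4f)\n", approx_pow(x, -1.0), 1.0 / x);
    printf("sqrt(x) ~ %.4f (exact %.4f)\n", approx_pow(x,  0.5), sqrt(x));
    printf("x^8     ~ %.4f (exact %.4f)\n", approx_pow(x,  8.0), pow(x, 8.0));  /* specular-like term */
    return 0;
}
```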

A Parallel-Architecture Processor Design for the Fast Multiplication of Homogeneous Transformation Matrices (Homogeneous Transformation Matrix의 곱셈을 위한 병렬구조 프로세서의 설계)

  • Kwon Do-All;Chung Tae-Sang
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.54 no.12
    • /
    • pp.723-731
    • /
    • 2005
  • The $4\times4$ homogeneous transformation matrix is a compact representation of the orientation and position of an object in robotics and computer graphics. A coordinate transformation is accomplished through successive multiplications of homogeneous matrices, each of which represents the orientation and position of a corresponding link. Thus, for real-time control applications in robotics or animation in computer graphics, fast multiplication of homogeneous matrices is in strong demand. In this paper, a parallel-architecture vector processor is designed for this purpose. The processor has several key features. For computational accuracy in real applications, the operands of the processor are floating-point numbers based on the IEEE 754 standard. For parallelism and reduction of hardware redundancy, the processor takes column vectors of homogeneous matrices as the unit of multiplication. To further improve throughput, the processor structure and its control are pipelined. Since the designed processor can be used as a special-purpose coprocessor in robotics and computer graphics, several other useful instructions for various transformation algorithms are included, in addition to matrix/matrix and matrix/vector multiplication, to widen the application of the new design. The suggested instruction set can serve as a standard for future processor designs in robotics and computer graphics. The design is verified using an FPGA implementation, and the performance improvement of the proposed design over a uniprocessor approach is studied with a view to real-time applications.
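For reference, the operation the processor accelerates is the $4\times4$ homogeneous matrix product, computed column by column as the abstract describes; the plain C loop below stands in for the pipelined vector datapath and is not the paper's hardware design.

```c
/* Hedged sketch: 4x4 homogeneous matrix multiplication, one column at a time. */
#include <stdio.h>

typedef struct { float m[4][4]; } Mat4;   /* m[row][col], IEEE 754 single precision */

/* out = a * b, computed one column of b (and of out) at a time */
static void mat4_mul(const Mat4 *a, const Mat4 *b, Mat4 *out)
{
    for (int col = 0; col < 4; col++) {         /* column vectors are the work unit */
        for (int row = 0; row < 4; row++) {
            float acc = 0.0f;
            for (int k = 0; k < 4; k++)
                acc += a->m[row][k] * b->m[k][col];
            out->m[row][col] = acc;
        }
    }
}

int main(void)
{
    /* a translation by (1,2,3) composed with the identity; in practice each
     * link's transform would be chained through calls like this */
    Mat4 t = {{{1,0,0,1},{0,1,0,2},{0,0,1,3},{0,0,0,1}}};
    Mat4 i = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};
    Mat4 r;
    mat4_mul(&t, &i, &r);
    printf("r[0][3] = %g, r[1][3] = %g, r[2][3] = %g\n", r.m[0][3], r.m[1][3], r.m[2][3]);
    return 0;
}
```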

A Design of Low-power/Small-area Arithmetic Units for Mobile 3D Graphic Accelerator (휴대형 3D 그래픽 가속기를 위한 저전력/저면적 산술 연산기 회로 설계)

  • Kim Chay-Hyeun;Shin Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.5
    • /
    • pp.857-864
    • /
    • 2006
  • This paper describes the design of low-power/small-area arithmetic circuits for a mobile 3D graphics accelerator: a vector processing unit, a powering unit, a divider unit, and a square-root unit. To achieve an area-efficient, low-power implementation, an essential consideration for mobile environments, the 16.16 fixed-point format is adopted instead of the conventional floating-point format. The vector processing unit is designed using redundant binary (RB) arithmetic. As a result, it operates 30% faster with a 10% reduction in gate count compared to the conventional method consisting of four multipliers and three adders. The powering, divider, and square-root units are based on a logarithmic number system. The binary-to-logarithm converter is designed using combinational logic based on a six-region approximation method, so the powering, divider, and square-root units reduce gate count compared with a lookup-table implementation.
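A brief C sketch of the 16.16 fixed-point format the abstract adopts in place of floating point: 16 integer bits and 16 fractional bits in a 32-bit word, with multiplication rescaling the 64-bit product; the logarithmic-number-system divider and square-root units are not reproduced here.

```c
/* Hedged sketch of 16.16 fixed-point arithmetic (value = raw / 2^16). */
#include <stdint.h>
#include <stdio.h>

typedef int32_t fx16_16;

#define FX_ONE (1 << 16)

static fx16_16 fx_from_double(double v) { return (fx16_16)(v * FX_ONE); }
static double  fx_to_double(fx16_16 v)  { return (double)v / FX_ONE; }

static fx16_16 fx_mul(fx16_16 a, fx16_16 b)
{
    /* 64-bit product rescaled back to 16.16; arithmetic right shift assumed
     * for negative products, as on typical targets */
    return (fx16_16)(((int64_t)a * (int64_t)b) >> 16);
}

int main(void)
{
    fx16_16 a = fx_from_double(1.5);
    fx16_16 b = fx_from_double(-2.25);
    printf("1.5 * -2.25 = %f (16.16 fixed point)\n", fx_to_double(fx_mul(a, b)));
    return 0;
}
```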