• Title/Summary/Keyword: Computation amount


Improvement in Computation of ΔV10 Flicker Severity Index Using Intelligent Methods

  • Moallem, Payman;Zargari, Abolfazl;Kiyoumarsi, Arash
    • Journal of Power Electronics
    • /
    • v.11 no.2
    • /
    • pp.228-236
    • /
    • 2011
  • The $\Delta V_{10}$, or 10-Hz flicker index, a common measure of voltage flicker severity in power systems, requires a high computational cost and a large amount of memory. In this paper, a new method for measuring the $\Delta V_{10}$ index is proposed, based on the Adaline (adaptive linear neuron) system, the FFT (fast Fourier transform), and the PSO (particle swarm optimization) algorithm. To reduce the sampling frequency, calculations are carried out on the envelope of the power system voltage that contains the flicker component; the envelope is extracted by the Adaline system. In addition, to increase the accuracy of the computed flicker components, the PSO algorithm is used to reduce the spectral leakage error in the FFT calculations. The proposed method therefore has a lower FFT computational cost, due to the smaller sampling window, and requires less memory, since it operates on the voltage envelope. It is also more accurate, because the PSO algorithm determines the flicker frequency and the corresponding amplitude, and its sensitivity to drift of the mains frequency is very low. The proposed algorithm is evaluated by simulations, the simulations are validated by implementing the algorithm on an ARM microcontroller-based digital system, and its function is finally evaluated with real-time measurements.
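
As a rough illustration of the pipeline this abstract describes (Adaline envelope extraction, FFT of the envelope, PSO refinement), here is a minimal sketch; the signal parameters, LMS step size, and the sinusoid-fitting objective used for the PSO stage are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

fs = 400.0                                  # assumed envelope sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
f0, f_flk = 60.0, 10.0                      # mains and flicker frequencies
v = (1.0 + 0.02 * np.sin(2 * np.pi * f_flk * t)) * np.sin(2 * np.pi * f0 * t)

# Adaline (LMS) envelope extraction: adapt sin/cos weights at the mains
# frequency; the weight-vector norm tracks the instantaneous amplitude.
w, mu = np.zeros(2), 0.05                   # LMS step size is an assumed value
env = np.empty_like(v)
for n in range(len(v)):
    x = np.array([np.sin(2 * np.pi * f0 * t[n]), np.cos(2 * np.pi * f0 * t[n])])
    w += 2 * mu * (v[n] - w @ x) * x        # LMS update
    env[n] = np.hypot(w[0], w[1])

# FFT peak gives a coarse flicker frequency; a basic global-best PSO then
# refines (frequency, amplitude) by fitting a sinusoid to the envelope,
# standing in for the paper's spectral-leakage correction.
freqs = np.fft.rfftfreq(len(env), 1 / fs)
k = np.abs(np.fft.rfft(env - env.mean())).argmax()

def misfit(p):
    f, a = p
    return np.sum((env - env.mean() - a * np.sin(2 * np.pi * f * t)) ** 2)

rng = np.random.default_rng(0)
pos = np.column_stack([freqs[k] + rng.uniform(-0.5, 0.5, 20),
                       rng.uniform(0.0, 0.05, 20)])
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([misfit(p) for p in pos])
for _ in range(50):
    g = pbest[pcost.argmin()]
    vel = (0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos)
                     + 1.5 * rng.random(pos.shape) * (g - pos))
    pos += vel
    cost = np.array([misfit(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
print("estimated flicker (Hz, amplitude):", pbest[pcost.argmin()])
```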

Method for Estimating Irrigation Requirements by G.H. Hargreaves. (Hargreaves식에 의한 필요수량산정에 관한 소고)

  • 엄태영;홍종진
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.18 no.3
    • /
    • pp.4195-4205
    • /
    • 1976
  • The purpose of this study is to evaluate existing methods for calculating or estimating the consumptive use (evapotranspiration) of an agricultural development project area. In determining the consumptive use of water in a project area, the best method for estimating the irrigation requirement is needed. Many methods for computing evapotranspiration have been used at home and abroad, each with its merits and demerits, including: (1) the Penman formula, (2) the Blaney-Criddle method, (3) the Munson P.E. index method, (4) the atmometer method, (5) the Texas Water Rights Commission (TWRC) method, (6) the Jensen-Haise method, and (7) the Christiansen method. The authors introduce the widely used method for calculating consumptive use by G.H. Hargreaves, expressed in the form $E_p = K \cdot d \cdot T\,(1.0 - 0.01\,H_n)$, where $H_n = 1.0 + 0.4H + 0.005H^2$. This method was adopted for the first time to determine the irrigation requirements of the Ogseo Comprehensive Agricultural Development Project (benefited area: 100,500 ha) in Korea, and it is presented in somewhat greater detail than the others. Formulas are given for the computation of evapotranspiration with various levels of data availability, and a sample computation of irrigation requirements for the Ogseo irrigation project is included. The results and applied materials are summarized as follows. 1. In applying the Hargreaves formula, the mean temperature, relative humidity, length of day, and percentage of sunshine from the three stations of Iri, Jeonju, and Gunsan were used. 2. Monthly evapotranspiration values were calculated using the formula. 3. Meteorological records from the three stations for the ten years 1963-1972 were used. 4. The annual irrigation requirement is 1,186 mm per hectare; when effective rainfall is taken into account, the annual irrigation demand is 700 mm per hectare.
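
A small worked example of the Hargreaves formula as quoted in the abstract; the coefficient values and the monthly inputs below are hypothetical, chosen only to show the arithmetic.

```python
def hargreaves_ep(K, d, T, H):
    """Ep = K * d * T * (1.0 - 0.01 * Hn), with Hn = 1.0 + 0.4*H + 0.005*H**2,
    following the form quoted in the abstract."""
    Hn = 1.0 + 0.4 * H + 0.005 * H ** 2
    return K * d * T * (1.0 - 0.01 * Hn)

# Hypothetical monthly inputs: crop coefficient K, daytime coefficient d,
# mean temperature T, and mean relative humidity H (%).
print(hargreaves_ep(K=1.0, d=1.1, T=78.0, H=70.0))   # ~39.9
```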


Fast Visualization Technique and Visual Analytics System for Real-time Analyzing Stream Data (실시간 스트림 데이터 분석을 위한 시각화 가속 기술 및 시각적 분석 시스템)

  • Jeong, Seongmin;Yeon, Hanbyul;Jeong, Daekyo;Yoo, Sangbong;Kim, Seokyeon;Jang, Yun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.4
    • /
    • pp.21-30
    • /
    • 2016
  • A risk management system should support decision making within a short time by analyzing stream data in real time. Many existing analytical systems rely on CPU computation and disk-based databases, which becomes problematic when stream data must be analyzed in real time. Stream data is produced at various periods, from 1 ms to 1 hour or 1 day. A single sensor generates little data, but tens of thousands of sensors generate a huge amount of data; if hundreds of thousands of sensors generate 1 GB of data per second, a CPU-based system cannot analyze it in real time. For this reason, analyzing stream data requires fast processing speed and scalability. In this paper, we present a fast visualization technique that combines a hybrid database with GPU computation. To evaluate the technique, we demonstrate a visual analytics system that detects pipeline leaks using sensor and tweet data.
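
To make the aggregation idea concrete, here is a minimal software sketch of density binning for stream visualization; vectorized NumPy stands in for the paper's GPU computation, and the grid and batch sizes are illustrative assumptions.

```python
import numpy as np

BINS = (256, 256)                       # screen-resolution density grid

def bin_batch(grid, xs, ys, extent):
    """Accumulate one batch of stream points into a 2D density grid."""
    (x0, x1), (y0, y1) = extent
    ix = np.clip(((xs - x0) / (x1 - x0) * BINS[0]).astype(int), 0, BINS[0] - 1)
    iy = np.clip(((ys - y0) / (y1 - y0) * BINS[1]).astype(int), 0, BINS[1] - 1)
    np.add.at(grid, (ix, iy), 1)        # scatter-add, the GPU-friendly kernel
    return grid

rng = np.random.default_rng(1)
grid = np.zeros(BINS)
for _ in range(100):                    # 100 incoming batches of sensor readings
    xs, ys = rng.normal(size=(2, 10_000))
    grid = bin_batch(grid, xs, ys, extent=((-4, 4), (-4, 4)))
# 'grid' can now be rendered as a heatmap without touching raw points again.
```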

Fast Intermode Decision Method Using CBP on Variable Block Coding (가변 블록 부호화에서 CBP를 이용한 고속 인터모드 결정 방법)

  • Ryu, Kwon-Yeol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.7
    • /
    • pp.1589-1596
    • /
    • 2010
  • In this paper, we propose a method that reduces the computational complexity of intermode decision by using the CBP (coded block pattern) and the coded information of the co-located MB (macroblock). The proposed method classifies MBs into best-CBP and normal-CBP according to the characteristics of the CBP. For a best-CBP MB, it eliminates the computation of the $8\times8$ mode in the intermode decision process, because the probability of the SKIP and M-Type modes is statistically 96.3%. For a normal-CBP MB, it selectively eliminates rate-distortion cost computations by using the coded information of the co-located MB and the motion vector cost when deciding between the SKIP and M-Type modes. The simulation results show that the proposed method reduces total coding time by 58.44% on average and is effective in reducing the computational burden for videos with little motion.
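
A minimal control-flow sketch of the early-termination logic the abstract describes; the cost function, threshold, and the exact normal-CBP fallback rule are hypothetical placeholders, not the paper's definitions.

```python
SKIP, M_TYPE, SUB_8x8 = "SKIP", "M-Type", "8x8"

def decide_intermode(cbp_is_best, colocated_mode, rd_cost, threshold=1000.0):
    """Pick an inter mode for one macroblock, skipping costly searches."""
    if cbp_is_best:
        # Best-CBP: SKIP/M-Type cover 96.3% of cases statistically, so the
        # 8x8 sub-mode search is eliminated outright.
        return min((SKIP, M_TYPE), key=rd_cost)
    # Normal-CBP: decide SKIP vs. M-Type cheaply with the co-located MB's
    # coded information, falling back to the full search only when the
    # cheap decision looks unreliable (threshold is an assumed knob).
    best = min((SKIP, M_TYPE), key=rd_cost)
    if colocated_mode == best or rd_cost(best) < threshold:
        return best
    return min((SKIP, M_TYPE, SUB_8x8), key=rd_cost)

# Example with a toy cost table standing in for real rate-distortion costs.
costs = {SKIP: 900.0, M_TYPE: 1200.0, SUB_8x8: 1100.0}
print(decide_intermode(False, SKIP, costs.get))
```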

A Study of a Design Optimization Problem with Many Design Variables Using a Genetic Algorithm (유전자 알고리듬을 이용할 대량의 설계변수를 가지는 문제의 최적화에 관한 연구)

  • 이원창;성활경
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.11
    • /
    • pp.117-126
    • /
    • 2003
  • The GA (genetic algorithm) has a powerful searching ability and is comparatively easy to use and apply. For that reason, GA is in the spotlight these days as an optimization technique for mechanical systems [1]. However, GA has low efficiency caused by a huge amount of repetitive computation, and it tends to meander near the optimum; it can also exhibit genetic drift, converging to a wrong solution [8]. These defects are the reasons why GA is not widely applied to real-world problems. The low-efficiency problem and the meandering problem can be overcome by introducing parallel computation [7] and Gray code [4], respectively. The standard GA (SGA) [9] works fine on small to medium-scale problems, but it does not work well for large-scale problems: large-scale problems with genes of more than 500 bits have never been tested and published. In our results with the SGA, its searching ability had no effect on a problem with 96 design variables and a gene length of 1,536 bits, so it converged to a solution that cannot be considered a global optimum. Therefore, this study proposes ExpGA (experience GA), a new genetic algorithm built by applying a new probability parameter called the experience value. Applying ExpGA to structures where the standard GA suffers genetic drift and fails to approach the best fitness, this study searches the whole design space for the solution. In addition, this study investigates the possibility of GA as an optimization technique for problems with a large number of design variables.
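
The abstract credits Gray coding with curing the meandering problem; below is a minimal sketch of the standard binary/Gray conversions showing the property that matters: adjacent integers differ in exactly one Gray bit, so a one-bit mutation moves a decoded design variable by a small step.

```python
def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Neighboring integers differ in exactly one Gray bit, and the two
# conversions are inverses of each other.
assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count("1") == 1
           for i in range(255))
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(256))
```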

Topology Correction for Flattening of Brain Cortex

  • Kwon Min Jeong;Park Hyun Wook
    • Journal of Biomedical Engineering Research
    • /
    • v.26 no.2
    • /
    • pp.73-86
    • /
    • 2005
  • We need to flatten the brain cortex to a smooth surface, a sphere, or a 2D plane in order to view the buried sulci. The rendered 3D surface of the segmented white matter and gray matter does not have the topology of a sphere, due to the partial volume effect and segmentation error. A surface without correct topology may lead to incorrect interpretation of local structural relationships and prevent cortical unfolding. Although some algorithms try to correct topology, they require heavy computation and fail to follow the deep and narrow sulci. This paper proposes a method that corrects the topology of the rendered surface quickly, accurately, and automatically. The proposed method removes fragments beside the main surface, fills cavities inside the main surface, and removes handles from the surface. Handle removal takes a three-step approach. Step 1 performs a smoothing operation on the rendered surface. In Step 2, the vertices of a sphere are gradually deformed to the smoothed surface and finally to the boundary of the segmented white matter and gray matter; this step uses a multi-resolution approach to prevent geometric intersections in the deep sulci. In Step 3, a 3D binary image is constructed from the deformed sphere of Step 2, and a 3D surface is regenerated from the binary image to remove any intersections that may remain. The experimental results show that the topology is corrected while the principal sulci and gyri are preserved, and the computation amount is acceptable.
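
The notion of "correct topology" here is the sphere-topology test; a minimal illustration (not the paper's code) checks it via the Euler characteristic of a closed triangle mesh, where each uncorrected handle lowers $V - E + F$ by 2.

```python
def genus(vertices, faces):
    """Genus of a closed triangle mesh given as a vertex list + index triples."""
    edges = {frozenset(e) for f in faces
             for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}
    chi = len(vertices) - len(edges) + len(faces)   # Euler characteristic
    return (2 - chi) // 2                           # 0 for sphere topology

# A tetrahedron is the smallest sphere-topology mesh: genus 0.
verts = [0, 1, 2, 3]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
assert genus(verts, faces) == 0
```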

An Efficient Method for Determining Work Process Number of Each Node on Computation Grid (계산 그리드 상에서 각 노드의 작업 프로세스 수를 결정하기 위한 효율적인 방법)

  • Kim Young-Hak;Cho Soo-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.1
    • /
    • pp.189-199
    • /
    • 2005
  • Grid computing is a technique for solving big problems, such as those in science and engineering, by sharing the computing power and large storage space of numerous computers on a distributed network. The grid computing environment is composed of WANs with differing performance and heterogeneous network conditions. It is therefore important to reflect these heterogeneous performance factors in the computational work. In this paper, we propose an efficient method that decides the number of work processes for each node by considering network state information: latency, bandwidth, and mixed latency-bandwidth information. First, using the measured information, we compute a performance ratio and decide the number of work processes for each node. Finally, an RSL file is created automatically based on the decided numbers of work processes, and the work is carried out. The network performance information is collected by the NWS (Network Weather Service). According to the experimental results, the methods that consider network performance information improve by 23%, 31%, and 57%, respectively, over the existing methods from the viewpoints of work amount, number of work processes, and number of nodes.
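
A minimal sketch of proportional process allocation from measured network state; the bandwidth/latency performance ratio below is an assumed weighting, not the paper's exact mixture formula.

```python
def allocate_processes(nodes, total_processes):
    """nodes: {name: (latency_ms, bandwidth_mbps)} -> {name: process_count}."""
    # Higher bandwidth and lower latency both mean a better-connected node.
    perf = {n: bw / lat for n, (lat, bw) in nodes.items()}
    total = sum(perf.values())
    return {n: max(1, round(total_processes * p / total))
            for n, p in perf.items()}

nodes = {"node-a": (5.0, 100.0), "node-b": (20.0, 100.0), "node-c": (5.0, 10.0)}
print(allocate_processes(nodes, total_processes=16))
# {'node-a': 12, 'node-b': 3, 'node-c': 1}
```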


Fast K-Means Clustering Algorithm using Prediction Data (예측 데이터를 이용한 빠른 K-Means 알고리즘)

  • Jee, Tae-Chang;Lee, Hyun-Jin;Lee, Yill-Byung
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.1
    • /
    • pp.106-114
    • /
    • 2009
  • In this paper we propose a fast K-Means clustering algorithm. Its main characteristic is that, to speed up the algorithm, it predicts which data have a high possibility of changing clusters. When calculating the distance to each cluster center in order to assign the nearest prototype at each stage of the clustering algorithm, overall computation time is reduced by selecting only those data whose cluster membership is likely to change. Calculation time is reduced by reusing the distance information produced by the K-Means algorithm itself to predict which input data may change clusters, and by using this distance information the algorithm is less affected by the number of dimensions. The proposed method was compared with the original K-Means method, Lloyd's, and the improved method KMHybrid. We show that our proposed method significantly outperforms Lloyd's and KMHybrid in computation speed on large data sets with many data points, many dimensions, and a large number of clusters.
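
In the spirit of the paper's prediction step (the exact selection rule is not given in the abstract), here is a minimal sketch that recomputes distances only for points whose cluster can change, using triangle-inequality bounds on center movement.

```python
import numpy as np

def fast_kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    D = np.linalg.norm(X[:, None] - C[None], axis=2)   # initial full distances
    assign = D.argmin(1)
    ub = D[np.arange(len(X)), assign]                  # dist to own center
    lb = np.partition(D, 1, axis=1)[:, 1]              # dist to 2nd-closest
    for _ in range(iters):
        newC = np.array([X[assign == j].mean(0) if np.any(assign == j) else C[j]
                         for j in range(k)])
        drift = np.linalg.norm(newC - C, axis=1)
        C = newC
        # After the centers move, the own center may be up to drift farther,
        # and any other center up to max-drift closer (triangle inequality).
        ub += drift[assign]
        lb -= drift.max()
        stale = ub > lb                                # only these can switch
        if stale.any():
            D = np.linalg.norm(X[stale, None] - C[None], axis=2)
            assign[stale] = D.argmin(1)
            ub[stale] = D[np.arange(D.shape[0]), assign[stale]]
            lb[stale] = np.partition(D, 1, axis=1)[:, 1]
    return assign, C
```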

Design and Verification of Pipelined Face Detection Hardware (파이프라인 구조의 얼굴 검출 하드웨어 설계 및 검증)

  • Kim, Shin-Ho;Jeong, Yong-Jin
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.10
    • /
    • pp.1247-1256
    • /
    • 2012
  • There are many filter-based image processing algorithms, and they usually require a huge number of computations and memory accesses, making it hard to attain real-time performance, especially in embedded applications. In this paper, we propose a pipelined hardware structure for a filter-based face detection algorithm to show that real-time performance can be achieved by hardware design. In our design, the whole computation is divided into three pipeline stages: resizing the image (Resize), transforming the image (ICT), and finding candidate areas (Find Candidate). Each stage is optimized by exploiting the parallelism of the computation to reduce the number of cycles and by utilizing line memories to minimize memory accesses. The resulting hardware uses 507 KB of internal SRAM and occupies 9,039 LUTs when synthesized and configured on a Xilinx Virtex-5 LX330 FPGA. It can operate at a maximum 165 MHz clock, giving a performance of 108 frames/sec while detecting up to 20 faces.
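
As a conceptual software model of the three-stage pipeline (the real design is hardware fed line-by-line from line memories), chained Python generators show the dataflow: each stage starts consuming as soon as the previous one yields. The stage bodies are placeholders, not the paper's filters.

```python
import numpy as np

def resize(frames):
    for f in frames:
        yield f[::2, ::2]                      # stand-in for the Resize stage

def ict(images):
    for im in images:
        yield im.astype(float) - im.mean()     # stand-in for the ICT transform

def find_candidates(maps, thresh=60.0):
    for m in maps:
        yield np.argwhere(np.abs(m) > thresh)  # stand-in candidate test

frames = (np.random.default_rng(i).integers(0, 256, (480, 640))
          for i in range(3))
for cands in find_candidates(ict(resize(frames))):
    print(len(cands), "candidate pixels")
```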

A Fast Sub-pixel Motion Estimation Method for H.264 Video Compression (H.264 동영상 압축을 위한 부 화소 단위에서의 고속 움직임 추정 방법)

  • Lee, Yun-Hwa;Choi, Myung-Hoon;Shin, Hyun-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.4
    • /
    • pp.411-417
    • /
    • 2006
  • Motion estimation (ME) is an important part of the video coding process, and it takes the largest amount of computation in video compression. Half-pixel and quarter-pixel motion estimation can improve the video compression rate at the cost of higher computational complexity. In this paper, we suggest a new, efficient, low-complexity algorithm for half-pixel and quarter-pixel motion estimation. It is based on the experimental observation that the sum of absolute differences (SAD) has a parabolic shape and thus can be approximated using interpolation techniques. The sub-pixel motion vector is searched starting from the integer-pixel motion vector with the minimum SAD, and the sub-pixel search direction is determined toward the neighboring pixel with the lowest SAD among the 8 neighbors. Experimental results show that more than a 20% reduction in computation time can be achieved without affecting video quality.
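
The SAD-parabola observation admits a closed-form refinement; a minimal 1D sketch (the paper's neighbor-selection and 2D search logic are omitted) fits a parabola through three SAD samples and returns the analytic sub-pixel minimum.

```python
def subpel_offset(sad_m1, sad_0, sad_p1):
    """Sub-pixel offset in (-0.5, 0.5) from SADs at positions -1, 0, +1."""
    denom = sad_m1 - 2.0 * sad_0 + sad_p1    # parabola curvature term
    if denom <= 0:                           # flat/degenerate: stay at integer
        return 0.0
    return 0.5 * (sad_m1 - sad_p1) / denom

# SADs 30, 22, 28 around the integer minimum give a small positive offset
# toward the +1 neighbor, whose SAD (28) is lower than the -1 side (30).
print(subpel_offset(30.0, 22.0, 28.0))       # ~0.071
```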