• Title/Summary/Keyword: Iteration number

Search Results: 356

Fast Analysis of Fractal Antenna by Using FMM (FMM에 의한 프랙탈 안테나 고속 해석)

  • Kim, Yo-Sik;Lee, Kwang-Jae;Kim, Kun-Woo;Oh, Kyung-Hyun;Lee, Taek-Kyung;Lee, Jae-Wook
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.19 no.2 / pp.121-129 / 2008
  • In this paper, we present a fast analysis of multilayer microstrip fractal structures using the fast multipole method (FMM). In the analysis, accurate spatial Green's functions obtained from the real-axis integration method (RAIM) are employed to solve the mixed-potential integral equation (MPIE) with the FMM algorithm. When the Green's function is applied directly, the per-iteration cost and memory requirement of the method of moments (MoM) are $O(N^2)$, and the number of unknowns N can become extremely large for electrically large objects or high accuracy. The FMM addresses this problem: by exploiting the addition theorem of the Green's function, it reduces the complexity of each matrix-vector multiplication and lowers the computational cost to $O(N^{1.5})$. The efficiency is demonstrated by comparing results from the conventional moment method and the fast algorithm.
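
The $O(N^2)$-to-$O(N^{1.5})$ claim above can be illustrated with a toy single-level clustering scheme. The sketch below is not the authors' multilayer MPIE/RAIM code: it uses a static 1/r kernel on a line of point sources and a zeroth-order (group-center) far-field approximation, purely to show where the scaling comes from when the sources are split into roughly sqrt(N) groups.

```python
# Toy single-level "fast" matrix-vector product: adjacent groups interact directly
# (near field, ~O(N*sqrt(N)) work), well-separated groups interact only through their
# aggregated group charge and center (far field, ~O(N) work). A true FMM replaces the
# group-center term with multipole/local expansions and translation operators.
import numpy as np

def kernel(r):
    return 1.0 / np.maximum(r, 1e-9)                 # static 1/r stand-in kernel

def fast_matvec(x, q):
    n = len(x)
    m = int(np.sqrt(n))                              # ~sqrt(N) groups
    groups = np.array_split(np.arange(n), m)         # contiguous groups along the line
    centers = np.array([x[g].mean() for g in groups])
    moments = np.array([q[g].sum() for g in groups]) # aggregate "charge" per group
    out = np.zeros(n)
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            if abs(a - b) <= 1:                      # near field: direct evaluation
                k = kernel(np.abs(x[ga][:, None] - x[gb][None, :]))
                if a == b:
                    np.fill_diagonal(k, 0.0)         # skip self-interaction
                out[ga] += k @ q[gb]
            else:                                    # far field: group-center approximation
                out[ga] += kernel(np.abs(x[ga] - centers[b])) * moments[b]
    return out

x = np.sort(np.random.rand(1024))                    # source positions
q = np.random.rand(1024)                             # source amplitudes
y_fast = fast_matvec(x, q)                           # approximate A @ q in O(N^1.5) work
```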

Application of Residual Statics to Land Seismic Data: traveltime decomposition vs stack-power maximization (육상 탄성파자료에 대한 나머지 정적보정의 효과: 주행시간 분해기법과 겹쌓기제곱 최대화기법)

  • Sa, Jinhyeon;Woo, Juhwan;Rhee, Chulwoo;Kim, Jisoo
    • Geophysics and Geophysical Exploration / v.19 no.1 / pp.11-19 / 2016
  • Two representative residual statics methods, traveltime decomposition and stack-power maximization, are compared in terms of their application to land seismic data. For model data with synthetic shot/receiver statics (time shifts) applied and random noise added, the continuity of reflection events is much improved by the stack-power maximization method, and the derived time shifts are approximately equal to the synthetic statics. Optimal parameters for residual statics (maximum allowable shift, correlation window, iteration number) are chosen effectively with diagnostic displays of the CSP (common shot point) stack and CRP (common receiver point) stack as well as the CMP gather. In addition to the removal of long-wavelength time shifts by refraction statics, processing steps of f-k filtering, predictive deconvolution, and time-variant spectral whitening are applied before residual statics to attenuate noise and thereby minimize errors during the correlation process. The reflectors, including the horizontal reservoir layer, are shown more clearly in the variable-density section after repicking velocities following residual statics and inverse NMO correction.
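
The stack-power criterion and the role of the parameters named above (maximum allowable shift, iteration number) can be sketched in a few lines. The code below is a hedged toy version, not the authors' processing flow: it works on NMO-corrected traces of a single gather, uses circular shifts instead of padded shifts, and omits refraction statics and the CSP/CRP diagnostics.

```python
# Toy stack-power maximization residual statics: each trace is shifted by the integer
# lag (bounded by max_shift) that maximizes the power of the stack formed with all
# other traces, and the loop repeats for a few iterations.
import numpy as np

def residual_statics(traces, max_shift=5, n_iter=3):
    """traces: 2D array (n_traces, n_samples) of NMO-corrected CMP data."""
    n_tr, _ = traces.shape
    shifts = np.zeros(n_tr, dtype=int)
    work = traces.astype(float)
    for _ in range(n_iter):
        for i in range(n_tr):
            pilot = work.sum(axis=0) - work[i]          # stack excluding trace i
            best_lag, best_power = 0, -np.inf
            for lag in range(-max_shift, max_shift + 1):
                shifted = np.roll(traces[i], shifts[i] + lag)   # circular shift for brevity
                power = np.sum((pilot + shifted) ** 2)          # stack power with this lag
                if power > best_power:
                    best_lag, best_power = lag, power
            shifts[i] += best_lag
            work[i] = np.roll(traces[i], shifts[i])
    return shifts, work.sum(axis=0)                     # derived statics and final stack
```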

Development of A-ABR System Using a Microprocessor (마이크로프로세서를 이용한 자동청력검사 시스템 개발)

  • Noh, Hyung-Wook;Lee, Tak-Hyung;Kim, Nam-Hyun;Kim, Soo-Chan;Cha, Eun-Jong;Kim, Deok-Won
    • Journal of the Institute of Electronics Engineers of Korea SC / v.46 no.2 / pp.15-21 / 2009
  • Hearing loss is one of the most common birth defects among infants. Most hearing-impaired children are not diagnosed until 1 to 3 years of age, which is past the critical period (about 6 months) for normal speech and language development. If a hearing impairment is identified and treated at an early stage, a child's speech and language skills can become comparable to those of normal-hearing peers. For these reasons, hearing screening at birth and throughout childhood is extremely important. The ABR (auditory brainstem response) is currently one of the most reliable diagnostic tools for the early detection of hearing impairment. In this study, we developed a system that automatically detects whether a hearing impairment is present in infants and children. In future work, it will be developed into a portable system capable of taking measurements not only in a soundproof room but also in a nursery for neonates.

The Efficient Method of Parallel Genetic Algorithm using MapReduce of Big Data (빅 데이터의 MapReduce를 이용한 효율적인 병렬 유전자 알고리즘 기법)

  • Hong, Sung-Sam;Han, Myung-Mook
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.5 / pp.385-391 / 2013
  • Big Data refers to data too large to be collected, stored, searched, analyzed, or processed by existing database management systems. A parallel genetic algorithm for Big Data can be realized readily by implementing a GA (genetic algorithm) with MapReduce on the Hadoop distributed system. Previous studies proposed genetic algorithms transformed to suit MapReduce, but they did not show good performance because of frequent data input and output. In this paper, we propose the MRPGA (MapReduce Parallel Genetic Algorithm), which improves the Map and Reduce processes and exploits the parallel-processing characteristics of MapReduce. The optimal solution is found by using the topology and migration of a parallel genetic algorithm together with a local search algorithm. The convergence speed of the proposed method is 1.5 times faster than that of the existing MapReduce SGA, and the optimal solution can be found quickly by controlling the number of sub-generation iterations. In addition, the MRPGA improves the processing and analysis performance of Big Data technology.
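
As a rough illustration of the map/reduce split the abstract describes, the sketch below runs an island-model GA in which a "map" step evolves each sub-population for a fixed number of sub-generations and a "reduce" step migrates the best individuals around a ring topology. This is an assumed, simplified shape of the idea, not the authors' Hadoop MRPGA: multiprocessing.Pool stands in for MapReduce, the objective is a toy bit-counting function, and no local search is included.

```python
# Island-model GA in a map/reduce shape: "map" evolves islands independently,
# "reduce" migrates the k best individuals to the next island (ring topology).
import random
from multiprocessing import Pool

def fitness(ind):                       # toy objective: maximize the number of 1 bits
    return sum(ind)

def evolve_island(pop, sub_generations=10):
    """Map step: tournament selection + one-point crossover + bit-flip mutation."""
    for _ in range(sub_generations):
        new_pop = []
        for _ in range(len(pop)):
            a = max(random.sample(pop, 3), key=fitness)
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < 0.01) for g in child]   # 1% bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return sorted(pop, key=fitness, reverse=True)       # best individuals first

def migrate(islands, k=2):
    """Reduce step: each island's k best replace the previous island's k worst."""
    bests = [isl[:k] for isl in islands]
    for i, isl in enumerate(islands):
        isl[-k:] = [ind[:] for ind in bests[(i - 1) % len(islands)]]
    return islands

if __name__ == "__main__":
    islands = [[[random.randint(0, 1) for _ in range(40)] for _ in range(30)] for _ in range(4)]
    with Pool(4) as pool:
        for _ in range(5):                              # outer generations
            islands = pool.map(evolve_island, islands)  # map phase (parallel)
            islands = migrate(islands)                  # reduce phase (migration)
    best = max((ind for isl in islands for ind in isl), key=fitness)
    print("best fitness:", fitness(best))
```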

A Non-Uniform Convergence Tolerance Scheme for Enhancing the Branch-and-Bound Method (비균일 수렴허용오차 방법을 이용한 분지한계법 개선에 관한 연구)

  • Jung, Sang-Jin;Chen, Xi;Choi, Gyung-Hyun;Choi, Dong-Hoon
    • Transactions of the Korean Society of Mechanical Engineers A / v.36 no.4 / pp.361-371 / 2012
  • In order to improve the efficiency of the branch-and-bound method for mixed-discrete nonlinear programming, a non-uniform convergence tolerance scheme is proposed for the continuous subproblem optimizations. The suggested scheme assigns each continuous subproblem optimization a convergence tolerance according to the maximum constraint violation obtained from its first iteration, in order to reduce the total number of function evaluations needed to reach the discrete optimal solution. The proposed tolerance scheme is integrated with five branching order options. The comparative performance test results using the ten combinations of the five branching orders and two convergence tolerance schemes show that the suggested non-uniform convergence tolerance scheme is clearly superior to the uniform one. The results also show that the branching order option using the minimum clearance difference method performed best among the five branching order options. Therefore, we recommend using the "minimum clearance difference method" for branching and the "non-uniform convergence tolerance scheme" for solving discrete optimization problems.
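
The abstract does not give the exact tolerance rule, so the sketch below is only an assumed illustration of a non-uniform scheme: the convergence tolerance of each continuous subproblem is interpolated between a tight and a loose value according to the maximum constraint violation observed at the subproblem's first iteration (loose for strongly violated nodes, tight near feasibility). It is not the paper's formula.

```python
# Assumed mapping from first-iteration maximum constraint violation to a per-subproblem
# convergence tolerance for branch-and-bound node solves.
def subproblem_tolerance(max_violation, tol_tight=1e-6, tol_loose=1e-2, v_ref=1.0):
    """Interpolate the convergence tolerance between tol_tight and tol_loose."""
    ratio = min(max(max_violation / v_ref, 0.0), 1.0)   # clip normalized violation to [0, 1]
    return tol_tight + ratio * (tol_loose - tol_tight)

# Example: a nearly feasible node is solved tightly, a badly violated one only coarsely.
print(subproblem_tolerance(0.001))   # ~1e-5, close to tol_tight
print(subproblem_tolerance(5.0))     # 1e-2, capped at tol_loose
```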

Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing (병렬 연산을 이용한 방출 단층 영상의 재구성 속도향상 기초연구)

  • Park, Min-Jae;Lee, Jae-Sung;Kim, Soo-Mee;Kang, Ji-Yeon;Lee, Dong-Soo;Park, Kwang-Suk
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.443-450 / 2009
  • Purpose: Conventional image reconstruction uses simplified physical models of projection. Realistic physics, for example full 3D reconstruction, takes too long to process all the data in a clinical setting and cannot run on a common reconstruction machine because of the large memory required by complex physical models. We propose a realistic distributed-memory model of fast reconstruction using parallel processing on personal computers to enable such large-scale computations. Materials and Methods: Preliminary feasibility tests on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. The expectation-maximization algorithm was tested with both a common 2D projection model and a realistic 3D line-of-response model. Since processing slowed down (by up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Results: Parallel processing of a program on multiple computers was possible on Linux with MPICH and NFS. We verified that the differences between parallel-processed and single-processed images at the same iteration were below the significant digits of floating-point representation, about 6 bits. Two processors showed good parallel-computing efficiency (a 1.96-times speed-up). The slow-down phenomenon was solved by vectorization using SSE. Conclusion: Through this study, a realistic parallel computing system for clinical use was established that provides enough memory to reconstruct images with realistic physical models that cannot be simplified.
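
The reconstruction kernel referred to above is the expectation-maximization (MLEM) update. The sketch below shows that update with the projection bins split into chunks, the way a distributed-memory run would hand them to worker nodes; here the "nodes" are just a serial loop, and the system matrix is a small dense placeholder rather than the realistic 3D line-of-response model used in the paper.

```python
# MLEM update x <- x * (A^T (y / A x)) / (A^T 1), with the projection rows processed
# in chunks whose partial back-projections are summed (the distributed-memory pattern).
import numpy as np

def mlem(system_matrix, projections, n_iter=20, n_chunks=4):
    """system_matrix: (n_bins, n_voxels); projections: (n_bins,) measured counts."""
    a = system_matrix
    image = np.ones(a.shape[1])
    sensitivity = a.sum(axis=0)                       # sum_i a_ij for each voxel j
    bin_chunks = np.array_split(np.arange(a.shape[0]), n_chunks)
    for _ in range(n_iter):
        back = np.zeros_like(image)
        for rows in bin_chunks:                       # each chunk ~ one worker's share
            expected = a[rows] @ image                # forward projection
            ratio = projections[rows] / np.maximum(expected, 1e-12)
            back += a[rows].T @ ratio                 # partial back-projections are summed
        image *= back / np.maximum(sensitivity, 1e-12)
    return image

# Tiny usage with a random dense system matrix as a placeholder for a real projector.
A = np.random.rand(256, 64)
counts = np.random.poisson(A @ np.ones(64) * 10)
recon = mlem(A, counts)
```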

Improvement of Address Pointer Assignment in DSP Code Generation (DSP용 코드 생성에서 주소 포인터 할당 성능 향상 기법)

  • Lee, Hee-Jin;Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.37-47 / 2008
  • Exploitation of the address generation units typically provided in DSPs plays an important role in DSP code generation, since they perform fast address computation in parallel with the central data path. Offset assignment is the optimization of the memory layout of program variables to take advantage of the address generation units; it consists of memory layout generation and address pointer assignment steps. In this paper, we propose an effective address pointer assignment method to minimize the number of address calculation instructions in DSP code generation. The proposed approach reduces the time complexity of a conventional address pointer assignment algorithm with fixed memory layouts by breaking minimum-cost nodes. To reduce memory size and processing time, we employ a powerful pruning technique. Moreover, our approach improves the initial solution iteratively by changing the memory layout at each iteration, because the memory layout affects the result of the address pointer assignment algorithm. We applied the proposed approach to about 3,000 access sequences from the OffsetStone benchmarks to demonstrate its effectiveness. Experimental results on the benchmarks show an average improvement of 25.9% in the address codes over previous works.
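
The cost the pointer-assignment step tries to minimize can be made concrete with a small model. The sketch below is not the authors' minimum-cost-node-breaking algorithm; it only counts, for a given memory layout and variable access sequence, how many explicit address-arithmetic instructions are needed when each of k address registers gets free auto-increment/decrement of one word and each access is greedily assigned to the cheapest register.

```python
# Greedy cost model for (general) offset assignment: moves of distance 0 or 1 are free
# via auto-increment/decrement; any longer move costs one explicit address instruction.
def address_cost(access_sequence, layout, n_regs=1):
    position = {v: i for i, v in enumerate(layout)}    # variable -> memory address
    regs = [None] * n_regs                             # current address held by each AR
    cost = 0
    for var in access_sequence:
        target = position[var]
        dists = [abs(target - r) if r is not None else float("inf") for r in regs]
        best = min(range(n_regs), key=lambda i: dists[i])
        if dists[best] > 1:
            cost += 1                                  # explicit pointer load/modify
        regs[best] = target
    return cost

seq = list("abcadbca")
print(address_cost(seq, layout=list("abcd"), n_regs=1))   # 5 extra instructions
print(address_cost(seq, layout=list("dabc"), n_regs=1))   # 4: the layout changes the cost
```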

Simulation-Based Stochastic Markup Estimation System $(S^2ME)$ (시뮬레이션을 기반(基盤)으로 하는 영업이윤율(營業利潤率) 추정(推定) 시스템)

  • Yi, Chang-Yong;Kim, Ryul-Hee;Lim, Tae-Kyung;Kim, Wha-Jung;Lee, Dong-Eun
    • Proceedings of the Korean Institute of Building Construction Conference / 2007.11a / pp.109-113 / 2007
  • This paper introduces the Simulation-based Stochastic Markup Estimation System (S2ME) for estimating the optimum markup for a project. The system was designed and implemented to better represent the real-world process involved in construction bidding. Findings from an analysis of the assumptions used in previous quantitative markup estimation methods were incorporated to improve the accuracy and predictability of S2ME. The existing methods rest on four categories of assumption: (1) the number of competitors and their identities are known; (2) a fictitious typical competitor is assumed for ease of computation; (3) the ratio of bid price to cost estimate (B/C) is assumed to follow a normal distribution; and (4) the deterministic output obtained from the probabilistic equations of existing models is assumed to be acceptable. These assumptions compromise the accuracy of prediction, because in practice bidders' bidding patterns are effectively random in competitive bidding. To compensate, in each iteration of the simulation experiment a bidding project is selected at random from a pool of historical bidding records, and the probability of winning the bid is computed from the profiles of the competitors appearing in the selected record, under the assumption that the bidding patterns retained in the historical bidding database will recur; the expected profit and the probability of winning are calculated accordingly. The existing deterministic computation was converted into a stochastic model using simulation modeling and analysis techniques as follows: (1) estimating the probability distribution functions of competitors' B/C ratios from the historical bidding database; (2) analyzing the sensitivity to markup increments using both a normal distribution and the actual probability distribution obtained by distribution fitting; and (3) estimating the maximum expected profit and the optimum markup range. In the case study, the best-fitted probability distribution function was estimated using the historical bidding database of competitors' bidding behavior, which improved the reliability of the output obtained from the simulation experiment.
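
The core Monte Carlo loop described above (draw a historical bidding record, resample the competitors' B/C ratios, score a candidate markup by markup times the probability of winning) can be sketched as follows. This is a hedged illustration, not the S2ME implementation: the historical_records list is fabricated example data, and the Gaussian resampling around each record is an assumption standing in for the fitted distributions mentioned in the abstract.

```python
# Monte Carlo markup selection: win the bid when our ratio (1 + markup) undercuts every
# resampled competitor B/C ratio; expected profit = markup * P(win).
import random

historical_records = [                       # each record: competitors' B/C ratios (fabricated)
    [1.08, 1.12, 1.05], [1.15, 1.09], [1.03, 1.07, 1.11, 1.10], [1.06, 1.14],
]

def expected_profit(markup, n_trials=20000):
    wins = 0
    for _ in range(n_trials):
        record = random.choice(historical_records)            # random past bidding project
        competitors = [random.gauss(r, 0.02) for r in record]  # resample around history
        if 1.0 + markup < min(competitors):
            wins += 1
    return markup * wins / n_trials

best = max((m / 100 for m in range(0, 16)), key=expected_profit)
print(f"optimum markup ~ {best:.2%}")
```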


Time-Scale Modification of Polyphonic Audio Signals Using Sinusoidal Modeling (정현파 모델링을 이용한 폴리포닉 오디오 신호의 시간축 변화)

  • 장호근;박주성
    • The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.77-85 / 2001
  • This paper proposes a method of time-scale modification of polyphonic audio signals based on a sinusoidal model. The signal is modeled as a sinusoidal component plus a noise component. A multiresolution filter bank is designed that splits the input signal into six octave-spaced subbands without aliasing, and sinusoidal modeling is applied to each subband signal. To reduce the smearing of transients during time-scale modification, a dynamic segmentation method is applied to the subbands, adapting the analysis-synthesis frame size to the time-frequency characteristics of each subband signal. To extract sinusoidal components and estimate their parameters, the matching pursuit algorithm is applied to each analysis frame of the subband signal. A psychoacoustic model implementing frequency masking is incorporated into matching pursuit to provide a reasonable stopping condition for the iteration and to reduce the number of sinusoids. The noise component, obtained by subtracting the synthesized sinusoidal component from the original signal, is modeled by a line-segment model of the short-time spectral envelope. For various polyphonic audio signals, simulation results show that the proposed sinusoidal modeling can resynthesize the original signal without loss of perceptual quality and can perform more robust, high-quality time-scale modification at large scale factors because transients are represented without perceptual loss.
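
For one analysis frame, the matching-pursuit extraction of sinusoids can be sketched as below. This is a simplified stand-in, not the paper's method: it uses a plain FFT dictionary on a single band instead of the six-band multiresolution filter bank, and a fixed residual-energy threshold instead of the psychoacoustic masking model as the stopping condition.

```python
# Greedy matching pursuit over a Fourier dictionary: at each iteration the strongest
# non-DC bin of the residual is converted to a real sinusoid and subtracted.
import numpy as np

def matching_pursuit(frame, max_sines=40, energy_ratio=1e-3):
    n = len(frame)
    t = np.arange(n)
    residual = frame.astype(float)
    partials = []                                     # (frequency_bin, complex amplitude)
    target = energy_ratio * np.sum(residual ** 2)     # crude stand-in for a masking threshold
    for _ in range(max_sines):
        spectrum = np.fft.rfft(residual)
        k = int(np.argmax(np.abs(spectrum[1:-1])) + 1)  # strongest bin, skipping DC/Nyquist
        amp = spectrum[k] / n
        atom = 2.0 * np.real(amp * np.exp(2j * np.pi * k * t / n))
        residual -= atom
        partials.append((k, amp))
        if np.sum(residual ** 2) < target:            # stop when the residual is negligible
            break
    return partials, residual

# Usage: two on-bin sinusoids are recovered in two iterations.
n_samp = 2048
tt = np.arange(n_samp)
frame = 0.8 * np.sin(2 * np.pi * 20 * tt / n_samp) + 0.3 * np.sin(2 * np.pi * 97 * tt / n_samp)
partials, residual = matching_pursuit(frame)
```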


Analysis for Applicability of Differential Evolution Algorithm to Geotechnical Engineering Field (지반공학 분야에 대한 차분진화 알고리즘 적용성 분석)

  • An, Joon-Sang;Kang, Kyung-Nam;Kim, San-Ha;Song, Ki-Il
    • Journal of the Korean Geotechnical Society / v.35 no.4 / pp.27-35 / 2019
  • This study examined the applicability of optimization algorithms to geotechnical back analysis, which involves relatively complicated search spaces and many target design variables. Sharan's equation and Blum's method were used as multi-variate model problems for a tunnel and a retaining wall, respectively. Optimization methods are generally divided into deterministic and stochastic methods. In this study, the Simulated Annealing method (SA) was selected as the deterministic method, and the Differential Evolution Algorithm (DEA) and Particle Swarm Optimization (PSO) were selected as the stochastic methods. The three optimization methods were compared on the multi-variate models. The limitations of the deterministic method in multi-variate geotechnical back analysis were confirmed, as was the superiority of DEA. DEA showed an average error rate of 3.12% for Sharan's solution and 2.23% for Blum's problem. The iteration number of DEA was confirmed to be smaller than that of the other two methods: SA required 117.39 to 167.13 times more iterations than DEA, and PSO required 2.43 to 6.91 times more than DEA. Applying DEA to the multi-variate back analysis of geotechnical problems can therefore be expected to improve computational speed and accuracy.
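
The differential evolution scheme the study applies is the standard DE/rand/1/bin loop, sketched below for a generic back-analysis setting in which the design variables are chosen to minimize a misfit between measured and model-predicted responses. The toy misfit and bounds are illustrative assumptions, not the Sharan or Blum models used in the paper.

```python
# DE/rand/1/bin: mutate with a scaled difference of two random members, cross over
# binomially, and keep the trial vector only if it improves the misfit.
import numpy as np

def differential_evolution(misfit, bounds, pop_size=30, F=0.8, CR=0.9, max_iter=200):
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = lo + np.random.rand(pop_size, dim) * (hi - lo)
    costs = np.array([misfit(x) for x in pop])
    for _ in range(max_iter):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[np.random.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 mutation
            cross = np.random.rand(dim) < CR               # binomial crossover mask
            cross[np.random.randint(dim)] = True           # force at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if (cost := misfit(trial)) < costs[i]:         # greedy selection
                pop[i], costs[i] = trial, cost
    return pop[costs.argmin()], costs.min()

# Toy usage: recover two "ground parameters" from noisy synthetic observations.
true_params = np.array([35.0, 0.25])
observed = np.array([np.sum(true_params), np.prod(true_params)]) + np.random.normal(0, 0.01, 2)
misfit = lambda p: np.sum((np.array([np.sum(p), np.prod(p)]) - observed) ** 2)
best, err = differential_evolution(misfit, bounds=[(10.0, 60.0), (0.1, 0.5)])
```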