• Title/Summary/Keyword: random iterative algorithm

47 results (processing time: 0.021 seconds)

Inversion of Geophysical Data Using Genetic Algorithms (유전적 기법에 의한 지구물리자료의 역산)

  • Kim, Hee Joon
    • Economic and Environmental Geology
    • /
    • v.28 no.4
    • /
    • pp.425-431
    • /
    • 1995
  • Genetic algorithms are so named because they are analogous to biological processes. The model parameters are coded in binary form. The algorithm then starts with a randomly chosen population of models called chromosomes. The second step is to evaluate the fitness values of these models, measured by a correlation between the data and the synthetics for a particular model. Then, the three genetic processes of selection, crossover, and mutation are performed on the models in sequence. Genetic algorithms share the favorable characteristics of random Monte Carlo methods over local optimization methods in that they do not require linearizing assumptions or the calculation of partial derivatives, are independent of the misfit criterion, and avoid the numerical instabilities associated with matrix inversion. An additional advantage over conventional methods such as iterative least squares is that the sampling is global, rather than local, thereby reducing the tendency to become entrapped in local minima and avoiding dependency on an assumed starting model.
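The binary coding, fitness evaluation, and selection/crossover/mutation steps described above can be sketched as follows; the fitness function (a stand-in for the data-synthetic correlation) and all parameter values are illustrative assumptions, not the paper's.

```python
import random

random.seed(0)

BITS = 8  # bits per binary-coded model parameter

def decode(chrom):
    # Map a bit string to a model parameter in [0, 1].
    return int("".join(map(str, chrom)), 2) / (2 ** BITS - 1)

def fitness(chrom):
    # Stand-in for the data-synthetic correlation: peaks at x = 0.7.
    return 1.0 - abs(decode(chrom) - 0.7)

def select(pop):
    # Fitness-proportional (roulette-wheel) selection of two parents.
    weights = [fitness(c) for c in pop]
    return random.choices(pop, weights=weights, k=2)

def crossover(a, b):
    # Single-point crossover.
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chrom, rate=0.02):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(30)]
for _ in range(60):  # selection -> crossover -> mutation, in sequence
    new_pop = []
    while len(new_pop) < len(pop):
        a, b = select(pop)
        c, d = crossover(a, b)
        new_pop += [mutate(c), mutate(d)]
    pop = new_pop

best = max(pop, key=fitness)
print(round(decode(best), 2))  # best model parameter, should lie near 0.7
```
Note that no starting model or partial derivatives are needed: only the fitness of each sampled model is evaluated.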


Convergence of Min-Sum Decoding of LDPC codes under a Gaussian Approximation (MIN-SUM 복호화 알고리즘을 이용한 LDPC 오류정정부호의 성능분석)

  • Heo, Jun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.10C
    • /
    • pp.936-941
    • /
    • 2003
  • Density evolution was developed as a method for computing the capacity of low-density parity-check (LDPC) codes under the sum-product algorithm [1]. Based on the assumption that the messages passed on the belief propagation model can be well approximated by Gaussian random variables, a modified and simplified version of the density evolution technique was introduced in [2]. Recently, the min-sum algorithm was applied to the density evolution of LDPC codes as an alternative decoding algorithm in [3]. The next question is how the min-sum algorithm can be combined with a Gaussian approximation. In this paper, the capacity of LDPC codes of various rates is obtained using the min-sum algorithm combined with the Gaussian approximation, which gives the simplest way to analyze LDPC codes. Unlike in the sum-product algorithm, the symmetry condition [4] is not maintained in the min-sum algorithm. Therefore, the variance as well as the mean of the Gaussian distribution is recursively computed in this analysis. It is also shown that the min-sum threshold under a Gaussian approximation matches the simulation results well.
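A toy Monte Carlo sketch of the analysis idea above: because the symmetry condition does not hold for min-sum, both the mean and the variance of the (assumed Gaussian) messages are tracked across iterations. The degrees, channel-LLR mean, and sample count are illustrative assumptions, not the paper's setup.

```python
import random, statistics

random.seed(1)

DV, DC = 3, 6    # regular LDPC variable/check degrees (illustrative)
N = 10000        # Monte Carlo samples per iteration
M_CH = 4.0       # channel LLR mean; channel LLR variance is 2 * M_CH

def check_update(msgs):
    # Min-sum check node: sign product times minimum magnitude.
    sign = 1
    for m in msgs:
        if m < 0:
            sign = -sign
    return sign * min(abs(m) for m in msgs)

mean, var = M_CH, 2 * M_CH   # moments of variable-to-check messages
means = [mean]
for _ in range(5):
    out = []
    for _ in range(N):
        # Each check node sees DC - 1 incoming Gaussian messages; each
        # variable node combines the channel LLR with DV - 1 = 2 check
        # messages (treated as i.i.d., a simplifying assumption).
        c1 = check_update([random.gauss(mean, var ** 0.5)
                           for _ in range(DC - 1)])
        c2 = check_update([random.gauss(mean, var ** 0.5)
                           for _ in range(DC - 1)])
        out.append(random.gauss(M_CH, (2 * M_CH) ** 0.5) + c1 + c2)
    # Track mean AND variance: min-sum breaks the symmetry condition,
    # so the variance can no longer be inferred from the mean alone.
    mean = statistics.fmean(out)
    var = statistics.pvariance(out)
    means.append(mean)

print([round(m, 2) for m in means])  # growing mean indicates convergence
```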

Location Error Reduction method using Iterative Calculation in UWB system (Iterative Calculation을 이용한 UWB 위치측정에서의 오차감소 기법)

  • Jang, Sung-Jeen;Hwang, Jae-Ho;Choi, Nack-Hyun;Kim, Jae-Moung
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.45 no.12
    • /
    • pp.105-113
    • /
    • 2008
  • In a ubiquitous society, accurate location calculation for a user's device is required to meet users' needs. Since the location calculation is based on ranging between transceivers, if obstacles exist between the transceivers, the NLoS (non-line-of-sight) components of the received signal increase while the LoS (line-of-sight) components are reduced, so the location calculation error increases due to the NLoS effect. The conventional location calculation algorithm retains the original ranging error because the ranging information that degrades the ranging accuracy is not transformed. The Iterative Calculation method, which minimizes the location calculation error, relies on accurately identifying whether the tested channel is in an NLoS or LoS condition. We first employ the kurtosis, mean excess delay, and RMS delay spread of the received signal to identify whether the tested channel is LoS or NLoS. Thereafter, to minimize the location calculation error, the proposed Iterative Calculation method iteratively selects random ranges and finds the averaged target location with the highest probability. The simulation results confirm the improvement achieved by the proposed method.
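A minimal 2-D sketch of the idea above, assuming a three-anchor layout and assuming the NLoS link has already been identified (the kurtosis/delay-statistics classification step); the geometry, bias magnitude, and iteration count are all illustrative.

```python
import math, random

random.seed(2)

ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # illustrative layout
TRUE = (4.0, 6.0)                                  # unknown target position

def trilaterate(r):
    # Linearized two-equation trilateration (Cramer's rule), three anchors.
    (x1, y1), (x2, y2), (x3, y3) = ANCHORS
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r[0] ** 2 - r[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r[0] ** 2 - r[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Measured ranges: the first link is NLoS and carries a positive bias
# (NLoS propagation only lengthens the path, never shortens it).
true_r = [math.dist(TRUE, a) for a in ANCHORS]
meas = [true_r[0] + 1.5, true_r[1] + 0.1, true_r[2] - 0.1]

naive = trilaterate(meas)   # conventional: use raw ranges as-is

# Iterative Calculation: repeatedly draw a random candidate range at or
# below the positively biased NLoS measurement and average the estimates.
ITER = 3000
est_x = est_y = 0.0
for _ in range(ITER):
    cand = [meas[0] - random.uniform(0.0, 1.5), meas[1], meas[2]]
    x, y = trilaterate(cand)
    est_x += x / ITER
    est_y += y / ITER

print(round(math.dist(naive, TRUE), 2),
      round(math.dist((est_x, est_y), TRUE), 2))
```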

A Gaussian process-based response surface method for structural reliability analysis

  • Su, Guoshao;Jiang, Jianqing;Yu, Bo;Xiao, Yilong
    • Structural Engineering and Mechanics
    • /
    • v.56 no.4
    • /
    • pp.549-567
    • /
    • 2015
  • First-order reliability method (FORM) analysis is commonly used for structural reliability analysis. It requires the values and partial derivatives of the performance function with respect to the random design variables. These calculations can be cumbersome when the performance functions are implicit. A Gaussian process (GP)-based response surface is adopted in this study to approximate the limit state function. By using a trained GP model, a large number of values and partial derivatives of the performance functions can be obtained for conventional reliability analysis with FORM, thereby reducing the number of stability analysis calculations. This dynamically renewed knowledge source can greatly assist in improving the predictive capacity of the GP during the iterative process, particularly from the viewpoint of machine learning. An iterative algorithm is therefore proposed to improve the precision of the GP approximation around the design point by constantly adding new design points to the initial training set. Examples are provided to illustrate the GP-based response surface for both structural and non-structural reliability analyses. The results show that the proposed approach is applicable to structural reliability analyses that involve implicit performance functions and structural response evaluations that entail time-consuming finite element analyses.
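A minimal 1-D sketch of the iterative refinement idea: a GP surrogate of a performance function g is retrained after each new design point is added near the predicted limit state, so expensive evaluations of g are spent only where they sharpen the approximation. The kernel, length scale, and g itself are illustrative assumptions, not the paper's examples.

```python
import math

def g(x):                        # implicit performance (limit state) function
    return x * x - 2.0           # illustrative; true root is 2 ** 0.5

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n]
                - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def k(a, b, ell=1.0):            # RBF (squared-exponential) kernel
    return math.exp(-(a - b) ** 2 / (2 * ell ** 2))

def gp_mean(xs, ys, xq):
    # GP posterior mean at xq given training data (xs, ys).
    n = len(xs)
    K = [[k(xs[i], xs[j]) + (1e-8 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return sum(k(xq, xs[j]) * alpha[j] for j in range(n))

xs = [0.0, 1.0, 3.0]             # initial design points (training set)
ys = [g(x) for x in xs]
grid = [i * 0.01 for i in range(301)]
for _ in range(4):
    # Add the point where the surrogate is closest to the limit state
    # g = 0, then evaluate the true function there (one "FEA" call).
    x_new = min(grid, key=lambda x: abs(gp_mean(xs, ys, x)))
    xs.append(x_new)
    ys.append(g(x_new))

print(round(xs[-1], 3))          # refined design point, near 2 ** 0.5
```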

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.4
    • /
    • pp.270-277
    • /
    • 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses. Each of the hypotheses decides the relevance of a document to a topic on the basis of whether or not the document includes a certain word. We begin with an existing AdaBoost algorithm that uses weak hypotheses with outputs of 1 or -1. We then extend the algorithm to use weak hypotheses with real-valued outputs, which were proposed recently to improve error reduction rates and final filtering performance. Next, we attempt to achieve further improvement in AdaBoost's performance by first setting the weights randomly according to a continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting problem that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; this dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparison results for all the participants in the TREC-8 filtering task are also provided.
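A toy sketch of the boosting scheme above: word-presence weak hypotheses with outputs of 1 or -1, combined by AdaBoost, rerun from random initial weights and pooled. The tiny document collection and the exponential stand-in for the continuous Poisson weight initialization are illustrative assumptions.

```python
import math, random

random.seed(3)

# Toy labeled collection: +1 = relevant to the topic, -1 = not.
docs = [({"stock", "market", "trade"}, 1), ({"market", "bank"}, 1),
        ({"stock", "profit"}, 1), ({"football", "match"}, -1),
        ({"music", "concert"}, -1), ({"match", "goal", "team"}, -1)]
vocab = sorted(set().union(*(d for d, _ in docs)))

def run_adaboost(rounds, w):
    hyps = []
    for _ in range(rounds):
        # Weak hypothesis: "word present -> sign, absent -> -sign";
        # pick the one with the lowest weighted training error.
        best = None
        for word in vocab:
            for sign in (1, -1):
                err = sum(wi for wi, (d, y) in zip(w, docs)
                          if (sign if word in d else -sign) != y)
                if best is None or err < best[0]:
                    best = (err, word, sign)
        err, word, sign = best
        err = min(max(err, 1e-9), 1 - 1e-9)
        alpha = 0.5 * math.log((1 - err) / err)
        hyps.append((alpha, word, sign))
        # Reweight: boost the misclassified documents, then normalize.
        w = [wi * math.exp(-alpha * y * (sign if word in d else -sign))
             for wi, (d, y) in zip(w, docs)]
        s = sum(w)
        w = [wi / s for wi in w]
    return hyps

def classify(hyps, d):
    score = sum(a * (s if word in d else -s) for a, word, s in hyps)
    return 1 if score >= 0 else -1

# Iterated boosting: rerun AdaBoost from random initial weights and pool
# all learned hypotheses (the paper's random-restart idea).
pooled = []
for _ in range(5):
    init = [random.expovariate(1.0) for _ in docs]
    s = sum(init)
    pooled += run_adaboost(3, [x / s for x in init])

acc = sum(classify(pooled, d) == y for d, y in docs) / len(docs)
print(acc)
```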

Joint Transmitter and Receiver Optimization for Improper-Complex Second-Order Stationary Data Sequence

  • Yeo, Jeongho;Cho, Joon Ho;Lehnert, James S.
    • Journal of Communications and Networks
    • /
    • v.17 no.1
    • /
    • pp.1-11
    • /
    • 2015
  • In this paper, the transmission of an improper-complex second-order stationary data sequence is considered over a strictly band-limited frequency-selective channel. It is assumed that the transmitter employs linear modulation and that the channel output is corrupted by additive proper-complex cyclostationary noise. Under an average transmit power constraint, the problem of minimizing the mean-squared error at the output of a widely linear receiver is formulated in the time domain to find the optimal transmit and receive waveforms. The optimization problem is converted into a frequency-domain problem by using the vectorized Fourier transform technique and put into the form of a double minimization. First, the widely linear receiver is optimized, which, unlike a linear receiver design with only one waveform, requires the design of two receive waveforms. Then, the optimal transmit waveform for the linear modulator is derived by introducing the notion of the impropriety frequency function of a discrete-time random process and by performing a line search combined with an iterative algorithm. The optimal solution shows that both the periodic spectral correlation due to the cyclostationarity and the symmetric spectral correlation about the origin due to the impropriety are well exploited.

Design of the Computer Generated Holographic Diffuser (컴퓨터 생성 홀로그래픽 디퓨저의 설계)

  • Choi, Kyong-Sik;Yoon, Jin-Seon;Kim, Nam
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.38 no.5
    • /
    • pp.357-366
    • /
    • 2001
  • In this paper, a computer-generated holographic diffuser (CGHD) with high diffraction efficiency and uniformity was designed by a modified iterative Fourier transform algorithm. The newly proposed method for designing a CGHD flips and combines BPHs or MPHs, which reduces the computation time and enlarges the reconstructed signal area. The designed sixteen-phase holographic diffuser had a high diffraction efficiency of 85.20%, a uniformity of 2.43%, and an average signal-to-noise ratio of 18.97 dB. We also compared the CGHD with a 128-level pseudo-random phase diffuser in terms of diffraction efficiency and uniformity. The proposed diffuser provides good performance for a holographic diffuser and next-generation display devices.
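A minimal 1-D sketch of an iterative Fourier transform design loop of the Gerchberg-Saxton type for a phase-only element: alternate between imposing the target diffusion amplitude in the far field and a sixteen-level phase-only constraint in the hologram plane. The flip-and-combine speedup is not modeled; the sizes and target window are illustrative assumptions.

```python
import cmath, random

random.seed(4)

N = 16         # hologram pixels (illustrative)
LEVELS = 16    # sixteen-level phase quantization

def dft(x, inverse=False):
    # Naive O(N^2) discrete Fourier transform, adequate for a tiny sketch.
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * kk * n / N)
               for n in range(N)) for kk in range(N)]
    return [v / N for v in out] if inverse else out

# Target far-field amplitude: flat over a diffusion window.
target = [1.0 if 4 <= kk < 12 else 0.0 for kk in range(N)]

def quantize_phase(z):
    # Unit-amplitude pixel with phase rounded to the nearest level.
    step = 2 * cmath.pi / LEVELS
    return cmath.exp(1j * round(cmath.phase(z) / step) * step)

holo = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N)]
errors = []
for _ in range(30):
    far = dft(holo)
    scale = (sum(abs(v) ** 2 for v in far)
             / sum(t * t for t in target)) ** 0.5
    errors.append(sum((abs(v) / scale - t) ** 2
                      for v, t in zip(far, target)))
    # Signal-domain constraint: impose target amplitude, keep the phase.
    far = [t * scale * (v / abs(v) if abs(v) > 1e-12 else 1.0)
           for v, t in zip(far, target)]
    # Hologram-domain constraint: phase-only, quantized to LEVELS levels.
    holo = [quantize_phase(v) for v in dft(far, inverse=True)]

print(round(errors[0], 3), round(errors[-1], 3))  # error should shrink
```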


A turbo code with reduced decoding delay (감소된 복호지연을 갖는 Turbo Code)

  • 김준범;문태현;임승주;주판유;홍대식;강창언
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.7
    • /
    • pp.1427-1436
    • /
    • 1997
  • Turbo codes, decoded through an iterative decoding algorithm, have recently been shown to yield remarkable coding gains close to theoretical limits in the Gaussian channel environment. This paper presents the performance of Turbo codes through computer simulation. The performance of the modified Turbo code is compared with that of the conventional Turbo code. The modified Turbo code reduces the decoding time delay with minimal effect on performance for voice transmission systems. To achieve the same performance, a random interleaver whose size is no less than the square root of the original interleaver size should be used. The modified Turbo code is also applied to an MC-CDMA system, and its performance is analyzed in the Rayleigh fading channel environment. In the Rayleigh fading channel, due to the amplitude distortion caused by fading, an interleaver no less than twice the size required in the Gaussian channel environment was needed. Overall, the modified Turbo code maintained the performance of the conventional Turbo code while the transmission and decoding time delay was reduced, on the order of twice the square root of the original interleaver size.
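The interleaver-size relation above can be sketched as follows, assuming a simple random (permutation) interleaver; the original size of 1024 is an illustrative value, not the paper's.

```python
import math, random

random.seed(5)

def make_interleaver(size):
    # A random interleaver is just a random permutation of bit positions.
    perm = list(range(size))
    random.shuffle(perm)
    return perm

def interleave(bits, perm):
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

K = 1024                   # original interleaver size (illustrative)
K_reduced = math.isqrt(K)  # no less than sqrt(K), per the size relation
perm = make_interleaver(K_reduced)

bits = [random.randint(0, 1) for _ in range(K_reduced)]
restored = deinterleave(interleave(bits, perm), perm)
print(K, K_reduced, restored == bits)  # 1024 32 True
```
A smaller interleaver shortens the block that must be buffered before iterative decoding can begin, which is where the delay reduction comes from.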


Boundary-adaptive Despeckling : Simulation Study

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.3
    • /
    • pp.295-309
    • /
    • 2009
  • In this study, an iterative maximum a posteriori (MAP) approach using a Bayesian model of a Markov random field (MRF) is proposed for despeckling images that contain speckle. The image process is assumed to combine the random fields associated with the observed intensity process and the image texture process, respectively. The objective measure for determining the optimal restoration of this "double compound stochastic" image process is based on Bayes' theorem, and the MAP estimation employs the Point-Jacobian iteration to obtain the optimal solution. In the proposed algorithm, the MRF is used to quantify the spatial interaction probabilistically, that is, to provide a type of prior information on the image texture, and a neighbor window of any size can be defined for contextual information on a local region. However, a window of a fixed size would use wrong information from adjacent regions with different characteristics at pixels close to or on a boundary. To overcome this problem, the new method is designed to use less information from more distant neighbors as the pixel gets closer to a boundary. This reduces the possibility of involving pixel values from adjacent regions with different characteristics. The proximity to a boundary is estimated using a non-uniformity measurement based on the standard deviation of the local region. The new scheme has been extensively evaluated using simulation data, and the experimental results show a considerable improvement in despeckling images that contain speckle.
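A 1-D toy sketch of the boundary-adaptive weighting idea: a non-uniformity measure based on the local standard deviation shrinks the neighbor weights as a pixel approaches a boundary, so distant neighbors from the other region contribute less. The Gaussian weighting function and all constants are illustrative assumptions, not the paper's MAP/MRF model.

```python
import math, random

random.seed(6)

# Noisy 1-D signal with a step edge at index 32, standing in for the
# boundary between two image regions with different characteristics.
n = 64
signal = [(0.0 if i < 32 else 4.0) + random.gauss(0, 0.3) for i in range(n)]

def local_std(i, half=3):
    # Non-uniformity measure: standard deviation of the local region.
    win = signal[max(0, i - half):i + half + 1]
    m = sum(win) / len(win)
    return (sum((v - m) ** 2 for v in win) / len(win)) ** 0.5

def despeckle(i, half=5):
    # High local non-uniformity (near a boundary) shrinks the weighting
    # scale, so more distant neighbors contribute less there.
    scale = 3.0 / (1.0 + local_std(i))
    num = den = 0.0
    for d in range(-half, half + 1):
        j = i + d
        if 0 <= j < n:
            w = math.exp(-d * d / (2 * scale * scale))
            num += w * signal[j]
            den += w
    return num / den

out = [despeckle(i) for i in range(n)]
flat_err = max(abs(out[i]) for i in range(8, 24))  # smoothing in flat region
edge_jump = out[33] - out[30]                      # edge should stay sharp
print(round(flat_err, 2), round(edge_jump, 2))
```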

An Implementation of Stable Optical Security System using Interferometer and Cascaded Phase Keys (간섭계와 직렬 위상 키를 이용한 안정한 광 보안 시스템의 구현)

  • Kim, Cheol-Su
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.1
    • /
    • pp.101-107
    • /
    • 2007
  • In this paper, we propose a stable optical security system using an interferometer and cascaded phase keys. For the encryption process, a BPCGH (binary phase computer-generated hologram) that reconstructs the original image is designed using an iterative algorithm, and the resulting hologram is regarded as the image to be encrypted. The BPCGH is encrypted through an exclusive-OR operation with a randomly generated phase key image. For the decryption process, we cascade the encrypted image and the phase key image and interfere them with a reference wave. The decrypted hologram image is then transformed into phase information. Finally, the original image is recovered by an inverse Fourier transformation of the phase information. During this process, the interference intensity is very sensitive to external vibrations, so a stable interference pattern is obtained using a self-pumped phase-conjugate mirror made of a photorefractive material. In the proposed security system, the original image cannot be recovered without the randomly generated key image, and a different hologram pattern is recovered depending on the key image, so the system can be used for authentication.
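The XOR encryption and decryption of the BPCGH with a random phase key can be sketched as follows, representing the two binary phases (0 or pi) as bits; the hologram size is illustrative, and the optical interference and Fourier reconstruction steps are not modeled.

```python
import random

random.seed(7)

# Binary phase CGH: each pixel has phase 0 or pi, stored here as a bit.
N = 8
bpcgh = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

# Randomly generated phase key of the same size.
key = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def xor_image(a, b):
    # Pixel-wise exclusive-OR of two binary phase images.
    return [[pa ^ pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

encrypted = xor_image(bpcgh, key)      # encryption: hologram XOR key
decrypted = xor_image(encrypted, key)  # cascading the same key recovers it
print(decrypted == bpcgh)              # True

# Without the correct key, the hologram is almost surely not recovered.
wrong = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
print(xor_image(encrypted, wrong) == bpcgh)
```
XOR is its own inverse, which is why cascading the same phase key twice restores the original hologram exactly.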
