• Title/Summary/Keyword: Gradient Algorithm


Improved Watershed Image Segmentation Using the Morphological Multi-Scale Gradient

  • Gelegdorj, Jugdergarav; Chu, Hyung-Suk; An, Chong-Koo
    • Journal of the Institute of Convergence Signal Processing, v.12 no.2, pp.91-95, 2011
  • In this paper, we present an improved multi-scale gradient algorithm that handles both step and blurred edges effectively. An image-sharpening operator first sharpens the edges and contours of objects, yielding a noise-reduced image with step-like edges. A multi-scale gradient operator is then applied to the noise-reduced image to obtain a gradient image, which is segmented by the watershed transform. Region merging, based on region area and region homogeneity, is carried out after the watershed transform. The proposed algorithm produces 36% fewer regions than the existing algorithm because it generates few irrelevant regions, and its computation time is also shorter than that of the existing algorithm.
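
As a rough illustration of the pipeline this abstract describes (sharpening, multi-scale morphological gradient, watershed), here is a minimal Python sketch using scikit-image and SciPy. The scale radii, unsharp-mask settings, and marker threshold are illustrative assumptions, and the paper's area/homogeneity region merging is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation

def multiscale_gradient(image, scales=(1, 2, 3)):
    """Average morphological gradients taken at several scales."""
    grads = []
    for r in scales:
        se = morphology.disk(r)
        # Morphological gradient: dilation minus erosion
        grads.append(morphology.dilation(image, se) - morphology.erosion(image, se))
    return np.mean(grads, axis=0)

def segment(image, marker_quantile=0.3):
    # Sharpening step: unsharp masking strengthens blurred edges
    sharp = filters.unsharp_mask(image, radius=2, amount=1.0)
    grad = multiscale_gradient(sharp)
    # Markers grow from low-gradient (homogeneous) regions
    markers, _ = ndi.label(grad < np.quantile(grad, marker_quantile))
    return segmentation.watershed(grad, markers)
```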

An Acoustic Noise Cancellation Using Subband Block Conjugate Gradient Algorithm (부밴드 블록 공액 경사 알고리듬을 이용한 음향잡음 제거)

  • 김대성; 배현덕
    • The Journal of the Acoustical Society of Korea, v.20 no.3, pp.8-14, 2001
  • In this paper, we present a new cost function for a subband block adaptive algorithm and a block conjugate gradient algorithm for acoustic noise cancellation. To form the cost function, the subband signals are processed in blocks for each subband and recombined into a whole data block. The resulting cost function is quadratic in the adaptive filter coefficients, which guarantees the convergence of the proposed block conjugate gradient algorithm. The block conjugate gradient algorithm that minimizes this cost function outperforms its full-band counterpart, and computer simulations of noise cancellation demonstrate the efficiency of the proposed algorithm.
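
The key point in this entry is that a quadratic per-block cost guarantees conjugate gradient convergence. A minimal sketch of that idea, without the subband split: each block's normal equations (Xᵀ X) w = Xᵀ d are quadratic in w, so CG applies directly. Filter order, block length, and iteration count below are arbitrary assumptions.

```python
import numpy as np

def block_cg_filter(x, d, order=16, block=64, cg_iters=8):
    """Estimate FIR coefficients w block-by-block with conjugate gradient."""
    w = np.zeros(order)
    for start in range(order, len(x) - block, block):
        # Data matrix of delayed input vectors for this block
        X = np.array([x[n - order + 1:n + 1][::-1]
                      for n in range(start, start + block)])
        R, p = X.T @ X, X.T @ d[start:start + block]
        r = p - R @ w                    # residual of the normal equations
        s = r.copy()                     # initial search direction
        for _ in range(cg_iters):
            Rs = R @ s
            alpha = (r @ r) / (s @ Rs + 1e-12)
            w += alpha * s
            r_new = r - alpha * Rs
            beta = (r_new @ r_new) / (r @ r + 1e-12)
            s = r_new + beta * s
            r = r_new
    return w
```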


Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • Kim, Weon-Ook; Cho, Yong-Hyun; Kim, Young-Il; Kang, In-Ku
    • The Transactions of the Korea Information Processing Society, v.4 no.11, pp.2769-2779, 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient method, is applied for high-speed optimization, while the dynamic tunneling backpropagation algorithm, a deterministic method exploiting the tunneling phenomenon, is applied for global optimization. After converging to a local minimum with the conjugate gradient backpropagation algorithm, a new initial point for escaping that minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to parity checking and pattern classification. Simulation results show that it outperforms both gradient descent backpropagation and a hybrid of gradient descent and dynamic tunneling backpropagation, and that it converges to the global minimum more often than gradient descent backpropagation.
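
A minimal sketch of the alternation this abstract describes, with two stand-ins: SciPy's CG minimizer takes the place of conjugate gradient backpropagation, and a random perturbation search takes the place of the paper's deterministic dynamic tunneling step. The loss and grad callables, restart count, and perturbation scale are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_train(loss, grad, w0, restarts=5, tunnel_scale=0.5, seed=0):
    """Alternate CG descent with a tunneling-style escape from local minima."""
    rng = np.random.default_rng(seed)
    best = minimize(loss, w0, jac=grad, method="CG")
    for _ in range(restarts):
        # Search around the current minimum for a lower-loss starting point,
        # a simplified substitute for deterministic dynamic tunneling
        for _ in range(50):
            trial = best.x + tunnel_scale * rng.standard_normal(best.x.size)
            if loss(trial) < best.fun:
                cand = minimize(loss, trial, jac=grad, method="CG")
                if cand.fun < best.fun:
                    best = cand
                break
    return best.x, best.fun
```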


Novel Variable Step-Size Gradient Adaptive Lattice Algorithm for Active Noise Control (능동 소음 제어를 위한 새로운 가변 수렴 상수 Gradient Adaptive Lattice Algorithm)

  • Lee, Keunsang; Kim, Seong-Woo; Im, Jaepoong; Seo, Young-Soo; Park, Youngcheol
    • The Journal of the Acoustical Society of Korea, v.33 no.5, pp.309-315, 2014
  • In this paper, a novel variable step-size filtered-x gradient adaptive lattice (NVSS-FxGAL) algorithm for active noise control is proposed. The gradient adaptive lattice (GAL) algorithm can control narrowband noise effectively, and a variable step size lets it achieve both a fast convergence rate and a low steady-state error level. However, its convergence degrades when the signal characteristics vary, because a single global variable step size is applied equally to all lattice stages. The proposed algorithm therefore uses a local variable step size suited to each lattice stage, which guarantees stable and consistent convergence. Simulation results confirm that the proposed algorithm achieves faster convergence and a lower steady-state level than conventional algorithms.
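
For context, here is a minimal sketch of a plain GAL predictor in which each stage normalizes its step by its own running power estimate, so every stage effectively gets a local step size. This is only a stand-in for the paper's NVSS rule, and the filtered-x ANC path is omitted; stage count, forgetting factor, and base step are assumptions.

```python
import numpy as np

def gal_update(x, stages=4, mu0=0.1, eps=1e-6):
    """One pass of a gradient adaptive lattice with per-stage step sizes."""
    k = np.zeros(stages)               # reflection coefficients
    power = np.full(stages, eps)       # per-stage power estimates
    b_prev = np.zeros(stages)          # delayed backward errors b_{m-1}(n-1)
    for n in range(len(x)):
        f = b = x[n]                   # stage-0 forward/backward errors
        for m in range(stages):
            f_new = f - k[m] * b_prev[m]
            b_new = b_prev[m] - k[m] * f
            # Per-stage power estimate -> per-stage normalized step size
            power[m] = 0.99 * power[m] + 0.01 * (f * f + b_prev[m] ** 2)
            k[m] += (mu0 / power[m]) * (f_new * b_prev[m] + b_new * f)
            b_prev[m] = b              # becomes b_{m-1}(n-1) at the next sample
            f, b = f_new, b_new
    return k
```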

Multi-gradient learning algorithm for multilayer neural networks (다층 신경망을 위한 Multi-gradient 학습 알고리즘)

  • 고진욱
    • Proceedings of the IEEK Conference, 1999.06a, pp.1017-1020, 1999
  • Recently, a new learning algorithm for multilayer neural networks was proposed 〔1〕. In this algorithm, each output neuron is treated as a function of the weights, and the weights are adjusted so that the output neurons produce the desired outputs; the adjustment is accomplished by taking gradients. However, the gradient computation was performed numerically, resulting in long computation times. In this paper, we derive all the equations needed to compute the gradients analytically, yielding learning times comparable to backpropagation. Since the weight adjustments are obtained by summing the gradients of the output neurons, we call the new learning algorithm "multi-gradient." Experiments show that multi-gradient consistently outperforms backpropagation.
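
A minimal sketch of the summing idea for a one-hidden-layer network: each output neuron's squared error is differentiated analytically on its own, and the per-neuron gradients are summed before the update. The tanh activations and learning rate are assumptions, not taken from the paper.

```python
import numpy as np

def multi_gradient_step(W1, W2, x, t, lr=0.1):
    """One analytic multi-gradient step: sum per-output-neuron gradients."""
    h = np.tanh(W1 @ x)                   # hidden activations
    y = np.tanh(W2 @ h)                   # output neurons
    g1, g2 = np.zeros_like(W1), np.zeros_like(W2)
    for k in range(len(y)):               # one gradient per output neuron
        dk = (y[k] - t[k]) * (1 - y[k] ** 2)
        g2[k] += dk * h                   # output-layer gradient for neuron k
        dh = dk * W2[k] * (1 - h ** 2)    # backpropagated hidden sensitivity
        g1 += np.outer(dh, x)             # summed into the shared first layer
    return W1 - lr * g1, W2 - lr * g2
```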


Nulling algorithm design using approximated gradient method (근사화된 Gradient 방법을 사용한 널링 알고리즘 설계)

  • Shin, Chang Eui; Choi, Seung Won
    • Journal of Korea Society of Digital Industry and Information Management, v.9 no.1, pp.95-102, 2013
  • This paper covers a nulling algorithm in which the nulling points are assumed to be known in advance. Conventionally, a nulling algorithm based on solving a matrix equation is used, but its computational complexity is a known drawback. We therefore adopt a gradient method to reduce the computational complexity, and to reduce it further we propose an approximated gradient method that exploits properties of the trigonometric functions. The proposed method matches the performance of the conventional method with half the computation when the numbers of antennas and nulling points are 20 and 1, respectively. In addition, it virtually eliminates trigonometric function evaluations, which are a serious problem in actual implementations such as FPGA (Field Programmable Gate Array) processors. By applying the algorithm in a multi-cell environment, beamforming gain can be obtained and interference reduced at the same time, so the algorithm performs particularly well at cell boundaries.
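
A minimal sketch of gradient-based nulling for the stated configuration (20 antennas, one known null), without the paper's specific approximation: the steering vector is computed once up front, so the iteration itself needs no runtime trigonometry, and a lookup table over quantized angles would remove even that. No main-beam constraint is included; step size and geometry are assumptions.

```python
import numpy as np

N, mu, steps = 20, 0.05, 200
theta_null = np.deg2rad(30.0)            # known nulling direction (assumed)

# Precompute the steering vector: the only trig evaluations needed
a = np.exp(-1j * np.pi * np.arange(N) * np.sin(theta_null))

w = np.ones(N, dtype=complex) / N        # start from a uniform beam
for _ in range(steps):
    # Gradient of |w^H a|^2 with respect to conj(w) is a (a^H w)
    w -= mu * a * (a.conj() @ w)
    w /= np.linalg.norm(w)               # keep the weights normalized

print("response at null:", abs(w.conj() @ a))   # driven toward zero
```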

Large-Scale Phase Retrieval via Stochastic Reweighted Amplitude Flow

  • Xiao, Zhuolei; Zhang, Yerong; Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.11, pp.4355-4371, 2020
  • Phase retrieval, i.e., recovering a signal from phaseless measurements, is generally considered an NP-hard problem. This paper adopts an amplitude-based nonconvex optimization cost function to develop a new stochastic gradient algorithm, named stochastic reweighted phase retrieval (SRPR). SRPR is a stochastic gradient iteration algorithm that runs in two stages: first, a truncated stochastic variance-reduction algorithm initializes the estimate; second, a gradient refinement stage improves the initial estimate by continuously updating it with an amplitude-based stochastic reweighted gradient. Because of the stochastic method, each iteration in both stages involves only one equation, so SRPR is simple, scalable, and fast. Compared with state-of-the-art phase retrieval algorithms, simulation results show that SRPR converges faster and requires fewer magnitude-only measurements to reconstruct the signal, in both the real and complex cases.
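
A minimal sketch of the refinement stage for the real case, under assumptions: the initialization z0 (the paper's variance-reduced first stage) is taken as given, one random measurement is sampled per iteration, and the reweighting form shown is an illustrative amplitude-flow-style weight, not necessarily the paper's exact rule.

```python
import numpy as np

def srpr_refine(A, y, z0, mu=0.2, beta=5.0, iters=2000, seed=0):
    """Stochastic reweighted amplitude-flow refinement (real-valued case)."""
    rng = np.random.default_rng(seed)
    m, _ = A.shape
    z = z0.copy()
    for _ in range(iters):
        i = rng.integers(m)              # one equation per iteration
        a, yi = A[i], y[i]
        u = a @ z
        # Down-weight measurements where the phase guess sign(u) is unreliable
        weight = abs(u) / (abs(u) + yi / beta + 1e-12)
        # Gradient of 0.5 * (|a^T z| - y_i)^2 with respect to z
        z -= mu * weight * (abs(u) - yi) * np.sign(u) * a
    return z
```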

Tuning Method of the Membership Function for FLC using a Gradient Descent Algorithm (Gradient Descent 알고리즘을 이용한 퍼지제어기의 멤버십함수 동조 방법)

  • Choi, Hansoo
    • Journal of the Korea Academia-Industrial cooperation Society, v.15 no.12, pp.7277-7282, 2014
  • In this study, a gradient descent algorithm was used to analyze and tune an FLC (fuzzy logic controller); the algorithm captures the effects of the nonlinear parameters that alter the antecedent and consequent fuzzy variables of the FLC. The controller parameters are chosen iteratively by the gradient descent algorithm. The FLC is a two-input, one-output system with 7 membership functions and 49 rules, and it adopts the min-max inference method and triangular membership functions with 13 quantization levels.
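
A minimal sketch of tuning triangular membership-function centers by gradient descent. The fuzzy_out callable (the controller's input-to-output map, built from triangle memberships and the rule base) is a hypothetical placeholder, and central differences stand in for the paper's derivatives.

```python
import numpy as np

def triangle(x, left, center, right):
    """Triangular membership value of x (the FLC's membership shape)."""
    return np.maximum(0.0, np.minimum((x - left) / (center - left + 1e-9),
                                      (right - x) / (right - center + 1e-9)))

def tune_centers(centers, samples, target, fuzzy_out, lr=0.01, epochs=100):
    """Minimize squared control error over membership centers."""
    h = 1e-4
    for _ in range(epochs):
        for i in range(len(centers)):
            def err(c):
                trial = centers.copy(); trial[i] = c
                return np.mean((fuzzy_out(trial, samples) - target) ** 2)
            # Central-difference estimate of d(error)/d(center_i)
            grad = (err(centers[i] + h) - err(centers[i] - h)) / (2 * h)
            centers[i] -= lr * grad
    return centers
```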

A study on applying random forest and gradient boosting algorithm for Chl-a prediction of Daecheong lake (대청호 Chl-a 예측을 위한 random forest와 gradient boosting 알고리즘 적용 연구)

  • Lee, Sang-Min; Kim, Il-Kyu
    • Journal of Korean Society of Water and Wastewater, v.35 no.6, pp.507-516, 2021
  • In this study, machine learning methods that have recently become widespread in prediction tasks were applied. The study site was the CD (Chudong) station, a representative monitoring point of Daecheong Lake, and chlorophyll-a (Chl-a) concentration was used as the target variable for algal bloom prediction. To predict the Chl-a concentration, a data set of water quality and quantity factors was assembled, and random forest and gradient boosting algorithms were implemented in Python. Before running the algorithms, the correlation between Chl-a and the water quality and quantity data was analyzed, and the ten most important water quality and quantity factors were extracted. For gradient boosting, the performance indices were an RMSE of 2.72 mg/m³, an MSE of 7.40 (mg/m³)², and an R² of 0.66, and its residual analysis was also the best. Overall, the gradient boosting algorithm performed best, reaching an RMSE of 2.44 mg/m³ after hyperparameter tuning.
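
A minimal scikit-learn sketch of the comparison this abstract describes, fitting both model families and scoring RMSE on a hold-out split. The placeholder arrays stand in for the paper's water quality/quantity factors and Chl-a targets, and the hyperparameters are defaults, not the tuned values.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# X: water quality/quantity factors, y: Chl-a (mg/m3); placeholder data here
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)), rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (RandomForestRegressor(n_estimators=300, random_state=0),
              GradientBoostingRegressor(n_estimators=300, random_state=0)):
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(type(model).__name__, "RMSE:", round(rmse, 3))
    # model.feature_importances_ ranks factors, as in the paper's screening
```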

Learning Behaviors of Stochastic Gradient Radial Basis Function Network Algorithms for Odor Sensing Systems

  • Kim, Nam-Yong; Byun, Hyung-Gi; Kwon, Ki-Hyeon
    • ETRI Journal, v.28 no.1, pp.59-66, 2006
  • The learning behavior of a radial basis function network (RBFN) trained with a singular value decomposition (SVD) and stochastic gradient (SG) algorithm, together named RBF-SVD-SG, is analyzed for odor sensing systems, and a faster training method is proposed. The RBFN input data come from a conducting polymer sensor array. This paper shows that the SG algorithm used to fine-tune the centers and widths still behaves poorly when the convergence coefficient is not sufficiently small. Since the tuning of the centers plays a dominant role in the performance of RBFN odor sensing systems, the analysis focuses on the variance of the center gradient in the RBF-SVD-SG algorithm. We show analytically that steady-state weight fluctuation combined with a large convergence coefficient increases the variance of the center-gradient estimate. Based on this analysis, we propose using the least mean square (LMS) algorithm instead of SVD to adjust the weights, which yields stable steady-state weight behavior. Experimental results with the proposed algorithm show faster learning and better classification performance.
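
A minimal sketch of the proposed combination: LMS replaces the SVD weight solve, while a small-step SG adjusts the Gaussian centers. Keeping the center step much smaller than the weight step reflects the paper's point that large steps inflate center-gradient variance; the step sizes, epoch count, and Gaussian basis form are assumptions.

```python
import numpy as np

def rbf_lms_sg(X, d, centers, widths, mu_w=0.05, mu_c=0.005, epochs=20):
    """Train an RBFN with LMS weight updates and SG center updates."""
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for x, t in zip(X, d):
            # Gaussian basis outputs for this sample
            phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / widths ** 2)
            e = t - w @ phi
            w += mu_w * e * phi                       # LMS weight update
            # SG center update: descend the gradient of e^2 w.r.t. each center
            for j in range(len(centers)):
                centers[j] += mu_c * e * w[j] * phi[j] * \
                    2 * (x - centers[j]) / widths[j] ** 2
    return w, centers
```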
