• Title/Abstract/Keyword: Gradient Algorithm

Search results: 1,152 items (processing time: 0.02 s)

Improved Watershed Image Segmentation Using the Morphological Multi-Scale Gradient

  • Gelegdorj, Jugdergarav; Chu, Hyung-Suk; An, Chong-Koo
    • 융합신호처리학회논문지 / Vol. 12, No. 2 / pp. 91-95 / 2011
  • In this paper, we present an improved multi-scale gradient algorithm. The proposed algorithm handles both step edges and blurred edges effectively. In the proposed algorithm, an image sharpening operator sharpens the edges and contours of objects, which yields a noise-reduced image with sharpened step edges. A multi-scale gradient operator is then applied to the noise-reduced image to obtain a gradient image, and the gradient image is segmented by the watershed transform. Region merging is applied after the watershed transform, based on region area and region homogeneity. The proposed algorithm produces 36% fewer regions than the existing algorithm because it generates few irrelevant regions. Moreover, its computation time is short in comparison with that of the existing algorithm.
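
For illustration, the following is a minimal sketch of the kind of pipeline the abstract describes (sharpening, multi-scale morphological gradient, watershed), written with scikit-image; it is not the authors' implementation, and the sample image, scale radii, and sharpening parameters are arbitrary assumptions. The area/homogeneity-based region merging step is only indicated in a comment.

```python
# Minimal sketch: sharpen -> multi-scale morphological gradient -> watershed.
import numpy as np
from skimage import data, filters, morphology, segmentation, util

image = util.img_as_float(data.coins())

# Step 1: sharpen edges/contours (unsharp masking as a stand-in operator).
sharpened = filters.unsharp_mask(image, radius=2, amount=1.0)

# Step 2: multi-scale morphological gradient = average of (dilation - erosion)
# computed over several structuring-element sizes.
scales = [1, 2, 3]
gradient = np.mean(
    [morphology.dilation(sharpened, morphology.disk(r)) -
     morphology.erosion(sharpened, morphology.disk(r))
     for r in scales],
    axis=0,
)

# Step 3: watershed transform on the gradient image
# (region merging by area/homogeneity would follow in the full method).
labels = segmentation.watershed(gradient)
print("number of regions:", labels.max())
```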

An Acoustic Noise Cancellation Using Subband Block Conjugate Gradient Algorithm (부밴드 블록 공액 경사 알고리듬을 이용한 음향잡음 제거)

  • 김대성;배현덕
    • 한국음향학회지 / Vol. 20, No. 3 / pp. 8-14 / 2001
  • In this paper, we propose a new cost function and a block conjugate gradient algorithm for removing noise added to acoustic signals in a subband adaptive filter structure. For the proposed cost function, the signals split into subbands are arranged into blocks and the blocks of all subbands are combined into a single block. The resulting cost function is quadratic in the adaptive filter coefficients of the subband structure, which guarantees the convergence of the proposed algorithm. Computer simulations confirm that the subband block conjugate gradient algorithm used to minimize the proposed cost function achieves better noise-cancellation performance than the full-band block conjugate gradient algorithm.

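As a rough illustration of the conjugate gradient machinery that such block algorithms build on, the sketch below solves the block normal equations R w = b for a short adaptive filter with a standard conjugate gradient iteration; the subband decomposition and the paper's specific cost function are not reproduced, and the signal model and filter length are assumptions.

```python
import numpy as np

def conjugate_gradient(R, b, n_iter=50, tol=1e-10):
    """Standard conjugate gradient for R w = b, R symmetric positive definite."""
    w = np.zeros_like(b)
    r = b - R @ w                  # residual
    p = r.copy()                   # search direction
    for _ in range(n_iter):
        Rp = R @ p
        alpha = (r @ r) / (p @ Rp)
        w = w + alpha * p
        r_new = r - alpha * Rp
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return w

# Toy block of noisy reference input x and desired signal d.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
d = np.convolve(x, [0.5, -0.3, 0.1], mode="same") + 0.01 * rng.standard_normal(1024)

L = 8                                              # adaptive filter length
X = np.stack([np.roll(x, k) for k in range(L)])    # tapped-delay data block
R = X @ X.T / x.size                               # block autocorrelation estimate
b = X @ d / x.size                                 # block cross-correlation estimate
w = conjugate_gradient(R, b)
print("estimated filter taps:", np.round(w, 3))
```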

Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • 김원욱;조용현;김영일;강인구
    • 한국정보처리학회논문지 / Vol. 4, No. 11 / pp. 2769-2779 / 1997
  • This paper proposes an efficient method for improving the training performance of neural networks by combining the conjugate gradient method with a tunneling system. A backpropagation algorithm based on the conjugate gradient method is used for fast convergence, and it is combined with a backpropagation algorithm based on a dynamic tunneling system that finds new connection weights to escape local minima when they are encountered. The proposed method was applied to parity-check and pattern-classification problems and compared with the conventional backpropagation algorithm based on gradient descent and with a backpropagation algorithm combining gradient descent and dynamic tunneling; the results show that the proposed method outperforms the other methods in training performance.

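A very loose sketch of the hybrid idea: a conjugate-gradient training phase combined with a random restart standing in for the dynamic tunneling phase whenever the loss stalls at a poor minimum. It uses SciPy's generic CG optimizer on a tiny parity (XOR) network; the network size, thresholds, and restart rule are assumptions rather than the authors' method.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny 2-2-1 network for the 2-bit parity (XOR) problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def unpack(w):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # output neuron
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(1)
w = 0.5 * rng.standard_normal(9)
for attempt in range(10):
    res = minimize(loss, w, method="CG")    # conjugate-gradient phase
    w = res.x
    if res.fun < 1e-3:                      # good minimum reached
        break
    # Crude stand-in for the tunneling phase: jump to a nearby point.
    w = w + rng.standard_normal(9)
print("final loss:", loss(w))
```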

Novel Variable Step-Size Gradient Adaptive Lattice Algorithm for Active Noise Control (능동 소음 제어를 위한 새로운 가변 수렴 상수 Gradient Adaptive Lattice Algorithm)

  • 이근상;김성우;임재풍;서영수;박영철
    • 한국음향학회지 / Vol. 33, No. 5 / pp. 309-315 / 2014
  • This paper proposes a new variable step-size filtered-x gradient adaptive lattice (NVSS-FxGAL) algorithm suited to active noise control systems. The gradient adaptive lattice (GAL) algorithm can effectively control noise with narrow-band characteristics. However, if a variable step size intended to improve the convergence of the GAL algorithm is applied identically to every stage of the lattice filter, it cannot cope robustly with changes in the characteristics of the input signal. The proposed algorithm guarantees stable and consistent convergence by using a local variable step size suited to each stage of the lattice filter. Experiments confirm that the proposed algorithm achieves fast convergence and a low steady-state error.
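
For orientation only, the sketch below implements a plain gradient adaptive lattice predictor in which every stage uses its own power-normalized, i.e. locally variable, step size; it is not the filtered-x ANC structure of the paper, and the constants mu0 and beta as well as the narrow-band test signal are assumptions.

```python
import numpy as np

def gal_predictor(x, order=4, mu0=0.1, beta=0.9, eps=1e-6):
    """Gradient adaptive lattice predictor with a per-stage
    power-normalized (locally variable) step size."""
    k = np.zeros(order)          # reflection coefficients
    power = np.full(order, eps)  # per-stage power estimates
    b_prev = np.zeros(order + 1) # backward errors at time n-1
    out_err = np.zeros_like(x)
    for n, sample in enumerate(x):
        f = np.zeros(order + 1)
        b = np.zeros(order + 1)
        f[0] = b[0] = sample
        for m in range(order):
            f[m + 1] = f[m] - k[m] * b_prev[m]
            b[m + 1] = b_prev[m] - k[m] * f[m]
            # per-stage signal power -> local variable step size
            power[m] = beta * power[m] + (1 - beta) * (f[m] ** 2 + b_prev[m] ** 2)
            mu_m = mu0 / (power[m] + eps)
            k[m] += mu_m * (f[m + 1] * b_prev[m] + b[m + 1] * f[m])
        b_prev = b
        out_err[n] = f[order]    # final-stage prediction error
    return out_err, k

# Narrow-band test signal: a sinusoid in white noise.
rng = np.random.default_rng(0)
t = np.arange(4000)
x = np.sin(0.1 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
err, k = gal_predictor(x)
print("reflection coefficients:", np.round(k, 3))
```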

Multi-gradient learning algorithm for multilayer neural networks (다층 신경망을 위한 Multi-gradient 학습 알고리즘)

  • 고진욱
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 1999년도 하계종합학술대회 논문집 / pp. 1017-1020 / 1999
  • Recently, a new learning algorithm for multilayer neural networks has been proposed [1]. In the new learning algorithm, each output neuron is considered as a function of the weights, and the weights are adjusted so that the output neurons produce the desired outputs. The adjustment is accomplished by taking gradients. However, the gradient computation was performed numerically, resulting in a long computation time. In this paper, we derive all the necessary equations so that the gradient computation is performed analytically, resulting in a much faster learning time comparable to that of backpropagation. Since the weight adjustments are accomplished by summing the gradients of the output neurons, we call the new learning algorithm "multi-gradient." Experiments show that the multi-gradient algorithm consistently outperforms backpropagation.

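A loose sketch of the stated idea of adjusting the weights by summing per-output-neuron gradients, Δw = η Σ_k (t_k − o_k) ∇_w o_k, on a tiny one-hidden-layer network with analytically computed gradients; the architecture, targets, and learning rate are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny 2-3-2 network; XOR and AND serve as the two target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0, 0], [1, 0], [1, 0], [0, 1]], dtype=float)

W1 = rng.standard_normal((2, 3)); b1 = np.zeros(3)
W2 = rng.standard_normal((3, 2)); b2 = np.zeros(2)
eta = 0.5

for epoch in range(5000):
    for x, t in zip(X, T):
        h = np.tanh(x @ W1 + b1)
        o = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
        dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
        # Sum, over output neurons k, of (t_k - o_k) times the gradient of o_k.
        for k in range(2):
            s = o[k] * (1 - o[k])          # sigmoid derivative
            dW2[:, k] += (t[k] - o[k]) * s * h
            db2[k]    += (t[k] - o[k]) * s
            back = (t[k] - o[k]) * s * W2[:, k] * (1 - h ** 2)
            dW1 += np.outer(x, back)
            db1 += back
        W1 += eta * dW1; b1 += eta * db1
        W2 += eta * dW2; b2 += eta * db2

h = np.tanh(X @ W1 + b1)
print(np.round(1.0 / (1.0 + np.exp(-(h @ W2 + b2))), 2))
```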

Nulling algorithm design using approximated gradient method (근사화된 Gradient 방법을 사용한 널링 알고리즘 설계)

  • 신창의;최승원
    • 디지털산업정보학회논문지 / Vol. 9, No. 1 / pp. 95-102 / 2013
  • This paper addresses a nulling algorithm in which the nulling points are assumed to be known in advance. Conventionally, a nulling algorithm based on solving a matrix equation has been used, but its computational complexity is a drawback. We therefore adopt a gradient method to reduce the computational complexity, and to reduce it further we propose an approximated gradient method that exploits properties of the trigonometric functions. The proposed method shows the same performance as the conventional method while requiring half the computation when the number of antennas is 20 and the number of nulling points is 1. In addition, it virtually eliminates trigonometric function evaluations, which are a significant burden in practical implementations such as FPGA (field-programmable gate array) processors. By applying the algorithm in a multi-cell environment, beamforming gain can be obtained and interference reduced at the same time, so the algorithm can provide excellent performance at cell boundaries.
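
As a generic illustration (not the authors' approximation), the sketch below steers a null toward a known direction by gradient descent on J(w) = |a(θ_null)^H w|² + λ‖w − w0‖² for a 20-element uniform linear array; the trigonometric-function approximation that the paper uses to cut hardware cost is not reproduced, and the step size and regularization weight are assumptions.

```python
import numpy as np

M = 20                                   # number of antennas
theta0, theta_null = np.deg2rad(0.0), np.deg2rad(30.0)

def steering(theta):
    # Half-wavelength-spaced uniform linear array.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

w0 = steering(theta0) / M                # initial beam toward theta0
a_n = steering(theta_null)               # known nulling direction
w = w0.copy()
mu, lam = 0.01, 0.1
for _ in range(500):
    # Gradient of |a_n^H w|^2 + lam*||w - w0||^2 with respect to conj(w).
    grad = a_n * np.vdot(a_n, w) + lam * (w - w0)
    w = w - mu * grad

def gain_db(theta):
    return 20 * np.log10(np.abs(np.vdot(steering(theta), w)) + 1e-12)

print("gain toward desired direction: %.1f dB" % gain_db(theta0))
print("gain toward nulling point:     %.1f dB" % gain_db(theta_null))
```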

Large-Scale Phase Retrieval via Stochastic Reweighted Amplitude Flow

  • Xiao, Zhuolei; Zhang, Yerong; Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 11 / pp. 4355-4371 / 2020
  • Phase retrieval, recovering a signal from phaseless measurements, is generally considered to be an NP-hard problem. This paper adopts an amplitude-based nonconvex optimization cost function to develop a new stochastic gradient algorithm, named stochastic reweighted phase retrieval (SRPR). SRPR is a stochastic gradient iteration algorithm that runs in two stages: in the first stage, a truncated stochastic variance-reduction algorithm is used to initialize the objective; the second stage is the gradient refinement stage, which continuously applies amplitude-based stochastic weighted gradient updates to improve the initial estimate. Because of the stochastic approach, each iteration in both stages of SRPR involves only a single measurement equation. Therefore, SRPR is simple, scalable, and fast. Compared with state-of-the-art phase retrieval algorithms, simulation results show that SRPR converges faster and requires fewer magnitude-only measurements to reconstruct the signal, in both the real- and complex-valued cases.
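
For orientation, the sketch below runs a plain stochastic amplitude-flow iteration on real Gaussian measurements, minimizing Σ_i (|a_iᵀz| − y_i)² one randomly chosen equation at a time from a crude spectral initialization; SRPR's truncated variance-reduced initialization and reweighting are not reproduced, and the problem sizes and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 512                      # signal length, number of measurements
x = rng.standard_normal(n)          # ground-truth signal
A = rng.standard_normal((m, n))     # real Gaussian sensing vectors
y = np.abs(A @ x)                   # magnitude-only measurements

# Crude spectral initialization: leading eigenvector of (1/m) sum y_i^2 a_i a_i^T,
# scaled to the estimated signal norm (a stand-in for SRPR's truncated init).
Y = (A.T * (y ** 2)) @ A / m
z = rng.standard_normal(n)
for _ in range(50):                 # power iteration
    z = Y @ z
    z /= np.linalg.norm(z)
z *= np.sqrt(np.mean(y ** 2))

mu = 0.2
for _ in range(20 * m):             # stochastic amplitude-flow refinement
    i = rng.integers(m)
    r = A[i] @ z
    grad = (np.abs(r) - y[i]) * np.sign(r) * A[i]   # gradient of one term
    z -= mu * grad / n

# Up to a global sign, z should approximately match x.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print("relative error: %.3f" % err)
```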

Tuning Method of the Membership Function for FLC using a Gradient Descent Algorithm (Gradient Descent 알고리즘을 이용한 퍼지제어기의 멤버십함수 동조 방법)

  • 최한수
    • 한국산학기술학회논문지 / Vol. 15, No. 12 / pp. 7277-7282 / 2014
  • In this study, a gradient descent algorithm is used to analyze the widths of the membership functions for tuning a fuzzy logic controller (FLC); this analysis is used to vary the fuzzy variables of the antecedent and consequent parts of the fuzzy control rules in order to obtain improved control performance. In this method, the controller parameters select the control variables through the iterations of the gradient descent algorithm. For feedback set-point control, an FLC with seven membership functions, 49 rules, two inputs, and one output is used. Min-Max composition is used for the inference, and triangular membership functions over 13 quantization levels are adopted.
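
A toy sketch of tuning membership-function widths by gradient descent, here with finite-difference gradients on a one-input, weighted-average fuzzy system rather than the paper's two-input Min-Max controller; the target surface, consequent values, initial widths, and learning rate are assumptions.

```python
import numpy as np

def tri(x, center, width):
    """Triangular membership function with the given center and width."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

centers = np.linspace(-1, 1, 7)          # 7 membership functions
consequents = np.sin(np.pi * centers)    # fixed consequent singletons
widths = np.full(7, 0.5)                 # widths to be tuned

xs = np.linspace(-1, 1, 101)
target = np.sin(np.pi * xs)              # desired controller surface

def output(widths):
    mu = np.stack([tri(xs, c, w) for c, w in zip(centers, widths)])
    return (consequents @ mu) / (mu.sum(axis=0) + 1e-9)   # weighted average

def cost(widths):
    return np.mean((output(widths) - target) ** 2)

eta, h = 0.2, 1e-4
for step in range(200):
    grad = np.array([
        (cost(widths + h * np.eye(7)[i]) - cost(widths - h * np.eye(7)[i])) / (2 * h)
        for i in range(7)
    ])
    widths -= eta * grad                 # gradient descent on the widths
    widths = np.clip(widths, 0.05, 2.0)  # keep widths positive
print("tuned widths:", np.round(widths, 3), " final cost: %.4f" % cost(widths))
```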

A study on applying random forest and gradient boosting algorithm for Chl-a prediction of Daecheong lake (대청호 Chl-a 예측을 위한 random forest와 gradient boosting 알고리즘 적용 연구)

  • 이상민;김일규
    • 상하수도학회지 / Vol. 35, No. 6 / pp. 507-516 / 2021
  • In this study, machine learning methods, which have recently been widely used in prediction algorithms, were applied. The study site was the CD (Chudong) station, a representative point of Daecheong Lake. Chlorophyll-a (Chl-a) concentration was used as the target variable for algae prediction. To predict the Chl-a concentration, a data set of water-quality and water-quantity factors was constructed, and random forest and gradient boosting algorithms were implemented in Python. Before running the algorithms, the correlation between Chl-a and the water-quality and water-quantity data was analyzed, and the ten most important factors were extracted. In terms of the performance indices, gradient boosting achieved an RMSE of 2.72 mg/m3, an MSE of 7.40 (mg/m3)2, and an R2 of 0.66, and the residual analysis also favored gradient boosting. Overall, the gradient boosting algorithm performed best, and after hyperparameter tuning it achieved an RMSE of 2.44 mg/m3.
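
A minimal sketch of the modeling setup described, fitting random forest and gradient boosting regressors with a train/test split and RMSE/R² scoring in scikit-learn; the data here are synthetic stand-ins generated with make_regression, and the hyperparameters are placeholders rather than the study's tuned values.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the water-quality/quantity features and Chl-a target.
X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingRegressor(n_estimators=300,
                                                   learning_rate=0.05,
                                                   random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print("%s: RMSE=%.2f, R2=%.3f" % (name, rmse, r2_score(y_te, pred)))
```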

Learning Behaviors of Stochastic Gradient Radial Basis Function Network Algorithms for Odor Sensing Systems

  • Kim, Nam-Yong; Byun, Hyung-Gi; Kwon, Ki-Hyeon
    • ETRI Journal / Vol. 28, No. 1 / pp. 59-66 / 2006
  • The learning behaviors of a radial basis function network (RBFN) trained with a singular value decomposition (SVD) and stochastic gradient (SG) algorithm, together named RBF-SVD-SG, are analyzed for odor sensing systems, and a fast training method is proposed. The RBF input data come from a conducting polymer sensor array. This paper reveals that the SG algorithm used for fine-tuning the centers and widths still shows ill-behaved learning results when a sufficiently small convergence coefficient is not used. Since the tuning of the centers plays a dominant role in the performance of RBFN odor sensing systems, our analysis focuses on the center-gradient variance of the RBF-SVD-SG algorithm. We found analytically that steady-state weight fluctuation and large values of the convergence coefficient can increase the variance of the center-gradient estimate. Based on this analysis, we propose using the least mean square algorithm instead of SVD to adjust the weights, for stable steady-state weight behavior. Experimental results show that the proposed algorithm achieves faster learning and better classification performance.

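To make the training scheme concrete, the sketch below trains a small RBF network on a toy 1-D regression task, using LMS updates for the output weights and stochastic-gradient fine-tuning of the centers and widths, roughly in the spirit of the proposal; the network size, step sizes, and data are assumptions, and the SVD-based variant is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D regression data standing in for the odor-sensor features.
x = rng.uniform(-3, 3, 500)
d = np.sin(x) + 0.05 * rng.standard_normal(500)

K = 8                                      # number of RBF units
c = np.linspace(-3, 3, K)                  # centers
s = np.full(K, 0.8)                        # widths
w = np.zeros(K)                            # output weights
eta_w, eta_c, eta_s = 0.05, 0.01, 0.01     # LMS / SG step sizes

for epoch in range(50):
    for xi, di in zip(x, d):
        phi = np.exp(-((xi - c) ** 2) / (2 * s ** 2))
        e = di - w @ phi                   # output error
        w += eta_w * e * phi               # LMS update of the output weights
        # Stochastic-gradient fine-tuning of centers and widths.
        c += eta_c * e * w * phi * (xi - c) / s ** 2
        s += eta_s * e * w * phi * (xi - c) ** 2 / s ** 3
        s = np.clip(s, 0.1, 5.0)           # keep widths in a sane range

phi_all = np.exp(-((x[:, None] - c) ** 2) / (2 * s ** 2))
print("training MSE: %.4f" % np.mean((phi_all @ w - d) ** 2))
```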