• Title/Summary/Keyword: Gaussian kernel function (가우시안 커널함수)

Search Result 15

Structural Design of Radial Basis function Neural Network(RBFNN) Based on PSO (PSO 기반 RBFNN의 구조적 설계)

  • Seok, Jin-Wook; Kim, Young-Hoon; Oh, Sung-Kwun
    • Proceedings of the IEEK Conference / 2009.05a / pp.381-383 / 2009
  • In this paper, an RBF neural network (Radial Basis Function Neural Network), one of the representative system-modeling tools, is designed, and the PSO (Particle Swarm Optimization) algorithm is used to optimize the model. That is, the model parameters that strongly influence the model's optimization are identified with the PSO algorithm. The proposed RBF neural network uses the widely adopted Gaussian kernel function as the activation function in the hidden layer. Furthermore, to optimize the model, the center of each kernel function is determined based on HCM clustering, and the PSO algorithm identifies the distribution constants of the Gaussian kernel functions, the number of hidden-layer nodes, and, in the multi-input case, the subset of inputs to use. The Mackey-Glass time-series process data are applied to evaluate the performance of the proposed model, and its approximation and generalization abilities are analyzed.

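The Gaussian-kernel hidden layer described above can be sketched as a simple forward pass. The centers, widths, and weights below are illustrative placeholders, not the HCM/PSO-tuned parameters of the paper:

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    """Forward pass of an RBF network with Gaussian kernel activations.

    Each hidden node computes exp(-||x - c||^2 / (2 * sigma^2));
    the output is a weighted sum of the activations.
    """
    x = np.asarray(x, dtype=float)
    dists = np.linalg.norm(centers - x, axis=1)    # distance to each center
    phi = np.exp(-dists**2 / (2.0 * sigmas**2))    # Gaussian kernel activations
    return float(weights @ phi)

# Illustrative parameters (in the paper, centers come from HCM clustering,
# while the distribution constants and weights are identified by PSO).
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])
y = rbf_forward([0.0, 0.0], centers, sigmas, weights)
```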

Combining Radar and Rain Gauge Observations Utilizing Gaussian-Process-Based Regression and Support Vector Learning (가우시안 프로세스 기반 함수근사와 서포트 벡터 학습을 이용한 레이더 및 강우계 관측 데이터의 융합)

  • Yoo, Chul-Sang; Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.3 / pp.297-305 / 2008
  • Recently, kernel methods have attracted great interest in the areas of pattern classification, function approximation, and anomaly detection. The role of the kernel is particularly important in methods such as the SVM (support vector machine) and KPCA (kernel principal component analysis), for it generalizes conventional linear machines to handle nonlinearities efficiently. This paper considers the problem of combining radar and rain gauge observations utilizing a regression approach based on the kernel-based Gaussian process and support vector learning. The data-assimilation results of the considered methods are reported for radar and rain gauge observations collected over a region covering parts of the Gangwon, Kyungbuk, and Chungbuk provinces of Korea, along with a performance comparison.
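Gaussian-process regression with a Gaussian (RBF) kernel, the core of the fusion approach above, can be sketched minimally. The kernel width, noise level, and toy data here are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gram matrix of the Gaussian kernel k(a, b) = exp(-(a-b)^2 / (2 sigma^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def gp_posterior_mean(x_train, y_train, x_test, sigma=1.0, noise=1e-2):
    """GP regression posterior mean: k(x*, X) (K + noise * I)^-1 y."""
    K = gaussian_kernel(x_train, x_train, sigma) + noise * np.eye(len(x_train))
    k_star = gaussian_kernel(x_test, x_train, sigma)
    return k_star @ np.linalg.solve(K, y_train)

# Toy 1-D example: fit samples of sin(x) and predict at a new point.
x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)
mean = gp_posterior_mean(x, y, np.array([np.pi / 2]))
```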

A Differential Evolution based Support Vector Clustering (차분진화 기반의 Support Vector Clustering)

  • Jun, Sung-Hae
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.679-683 / 2007
  • Statistical learning theory by Vapnik consists of the support vector machine (SVM), support vector regression (SVR), and support vector clustering (SVC) for classification, regression, and clustering, respectively. Among these algorithms, SVC is a good clustering algorithm that uses support vectors based on the Gaussian kernel function. However, like SVM and SVR, SVC requires its kernel parameters and regularization constant to be determined optimally. In general, these parameters have been set heuristically by researchers or by grid search, which demands heavy computing time. In this paper, we propose a differential evolution based SVC (DESVC), which incorporates differential evolution into SVC for efficient selection of the kernel parameters and regularization constant. To verify the improved performance of our DESVC, we conduct experiments using data sets from the UCI machine learning repository and simulation.
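Differential evolution itself is compact enough to sketch. This minimal DE/rand/1/bin loop minimizes a toy objective standing in for the DESVC model-selection criterion; the objective, bounds, and DE settings are illustrative, not the paper's:

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([objective(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f < fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()

# Toy stand-in for the SVC model-selection objective: a smooth bowl whose
# minimum (2, 0.5) plays the role of the best (kernel width, regularization).
obj = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
best, best_f = differential_evolution(obj, np.array([[0.0, 5.0], [0.0, 2.0]]))
```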

Performance Enhancement of Algorithms based on Error Distributions under Impulsive Noise (충격성 잡음하에서 오차 분포에 기반한 알고리듬의 성능향상)

  • Kim, Namyong; Lee, Gyoo-yeong
    • Journal of Internet Computing and Services / v.19 no.3 / pp.49-56 / 2018
  • The Euclidean distance (ED) between an error distribution and the Dirac delta function has been used as an efficient performance criterion in impulsive noise environments due to the outlier-cutting effect of the Gaussian kernel on the error signal. The gradient of the ED for its minimization has two components: $A_k$ for the kernel function of error pairs and $B_k$ for the kernel function of errors. In this paper, it is analyzed that the first component governs gathering error samples close together, while the second, $B_k$, concentrates the error samples on zero. Based upon this analysis, it is proposed to normalize $A_k$ and $B_k$ with the power of inputs modified by the kernelled error pairs or errors, for the purpose of reinforcing their roles of narrowing the error gap and drawing error samples toward zero. Through a comparison of the fluctuation of the steady-state MSE and the minimum MSE value in simulations of multipath equalization under impulsive noise, the roles and efficiency of the proposed normalization method are verified.
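The ED criterion above has a direct sample estimate. This is a sketch (dropping the constant delta self-term, with an arbitrary kernel width); the double sum corresponds to the $A_k$-style error-pair term and the single sum to the $B_k$-style term pulling errors to zero:

```python
import numpy as np

def gauss(x, sigma):
    """Zero-mean Gaussian kernel G_sigma(x)."""
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def ed_criterion(errors, sigma=1.0):
    """Sample estimate of the Euclidean distance between the error PDF
    (a Parzen estimate with Gaussian kernel) and the Dirac delta, up to
    the constant delta self-term."""
    e = np.asarray(errors, dtype=float)
    N = len(e)
    pair_term = gauss(e[:, None] - e[None, :], np.sqrt(2) * sigma).sum() / N**2
    zero_term = 2.0 * gauss(e, sigma).sum() / N
    return pair_term - zero_term

# Errors concentrated at zero score lower (better) than spread-out errors.
tight = ed_criterion(np.zeros(50))
loose = ed_criterion(np.linspace(-5, 5, 50))
```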

Centroid Neural Network with Bhattacharyya Kernel (Bhattacharyya 커널을 적용한 Centroid Neural Network)

  • Lee, Song-Jae; Park, Dong-Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.9C / pp.861-866 / 2007
  • A clustering algorithm for Gaussian Probability Distribution Function (GPDF) data called Centroid Neural Network with a Bhattacharyya Kernel (BK-CNN) is proposed in this paper. The proposed BK-CNN is based on the unsupervised competitive Centroid Neural Network (CNN) and employs a kernel method for data projection. The kernel method adopted in BK-CNN projects data from the low-dimensional input feature space into a higher-dimensional feature space, so that nonlinear problems in the input space can be solved linearly in the feature space. In order to cluster the GPDF data, the Bhattacharyya kernel is used to measure the distance between two probability distributions for data projection. With the incorporation of the kernel method, the proposed BK-CNN is capable of dealing with nonlinear separation boundaries and can successfully allocate more code vectors in regions where GPDF data are densely distributed. When applied to GPDF data in an image classification problem, the experimental results show that the proposed BK-CNN algorithm gives 1.7%-4.3% improvements in average classification accuracy over conventional algorithms such as k-means, Self-Organizing Map (SOM), and CNN with a Bhattacharyya distance, denoted as Bk-Means, B-SOM, and B-CNN.
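The Bhattacharyya distance between two Gaussian densities, which underlies the kernel above, has a closed form. A sketch follows; the exp(-D) kernel form is one common choice, and the paper's exact kernel definition may differ:

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian PDFs:
    D = 1/8 (m1-m2)^T S^-1 (m1-m2) + 1/2 ln(det S / sqrt(det S1 det S2)),
    where S = (S1 + S2) / 2."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    S = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(S, diff)
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def bhattacharyya_kernel(mu1, cov1, mu2, cov2):
    """Kernel value exp(-D); identical Gaussians give 1."""
    return np.exp(-bhattacharyya_distance(mu1, cov1, mu2, cov2))

k_same = bhattacharyya_kernel([0.0], [[1.0]], [0.0], [[1.0]])
k_far = bhattacharyya_kernel([0.0], [[1.0]], [3.0], [[1.0]])
```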

A Study on Optimization of Perovskite Solar Cell Light Absorption Layer Thin Film Based on Machine Learning (머신러닝 기반 페로브스카이트 태양전지 광흡수층 박막 최적화를 위한 연구)

  • Ha, Jae-jun; Lee, Jun-hyuk; Oh, Ju-young; Lee, Dong-geun
    • The Journal of the Korea Contents Association / v.22 no.7 / pp.55-62 / 2022
  • The perovskite solar cell is an active area of research in renewable energy fields such as solar, wind, hydroelectric, marine, bio-, and hydrogen energy, which aim to replace fossil fuels such as oil, coal, and natural gas as power demand grows with the expanding use of the Internet of Things and virtual environments in the 4th industrial revolution. A perovskite solar cell is a solar cell device using an organic-inorganic hybrid material with a perovskite structure, and it has the advantages of replacing existing silicon solar cells through high efficiency, low-cost solution processing, and low-temperature processes. To optimize the light-absorption-layer thin film predicted by the existing empirical method, reliability must be verified through device characteristic evaluation. However, since evaluating the characteristics of light-absorption-layer thin-film devices is costly, the number of tests is limited. To address this problem, machine learning and artificial intelligence models show great promise as clear and valid auxiliary means for optimizing the light-absorption-layer thin film. In this study, to estimate the optimal light-absorption-layer thin film of perovskite solar cells, regression models using the support vector machine's linear, RBF, polynomial, and sigmoid kernels were compared to verify the difference in accuracy for each kernel function.
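The four SVM kernel functions compared in the study have standard closed forms; a minimal sketch follows. The gamma, degree, and coef0 values are illustrative defaults, not the study's tuned settings:

```python
import numpy as np

def linear_kernel(x, y):
    return np.dot(x, y)

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def polynomial_kernel(x, y, degree=3, gamma=1.0, coef0=1.0):
    return (gamma * np.dot(x, y) + coef0) ** degree

def sigmoid_kernel(x, y, gamma=0.1, coef0=0.0):
    return np.tanh(gamma * np.dot(x, y) + coef0)

# Evaluate all four on the same pair of feature vectors.
x, y = np.array([1.0, 2.0]), np.array([2.0, 1.0])
values = {
    "linear": linear_kernel(x, y),
    "rbf": rbf_kernel(x, y),
    "poly": polynomial_kernel(x, y),
    "sigmoid": sigmoid_kernel(x, y),
}
```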

Hyper-ellipsoidal clustering algorithm using Linear Matrix Inequality (선형행렬 부등식을 이용한 타원형 클러스터링 알고리즘)

  • Lee, Han-Sung; Park, Joo-Young; Park, Dai-Hee
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.4 / pp.300-305 / 2002
  • In this paper, we use a modified Gaussian kernel function as the clustering distance measure and recast the given hyper-ellipsoidal clustering problem as an optimization problem that minimizes the volume of each hyper-ellipsoidal cluster. We solve this problem using the EVP (eigenvalue problem), one of the LMI (linear matrix inequality) techniques.
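A Gaussian kernel modified for ellipsoidal clusters typically replaces the Euclidean distance with a Mahalanobis distance defined by a shape matrix. This sketch is written under that assumption; the paper's exact modification and LMI machinery are not reproduced:

```python
import numpy as np

def mahalanobis_gaussian_kernel(x, center, shape_matrix):
    """Gaussian kernel with Mahalanobis distance:
    exp(-(x-c)^T M (x-c) / 2), where M defines the ellipsoid's shape."""
    d = np.asarray(x, float) - np.asarray(center, float)
    return np.exp(-0.5 * d @ shape_matrix @ d)

# An ellipsoid stretched along the x-axis: at the same Euclidean distance,
# points along x are penalized less than points along y.
M = np.array([[0.25, 0.0], [0.0, 4.0]])
k_along_x = mahalanobis_gaussian_kernel([2.0, 0.0], [0.0, 0.0], M)
k_along_y = mahalanobis_gaussian_kernel([0.0, 2.0], [0.0, 0.0], M)
```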

Gaussian Noise Reduction Technique using Improved Kernel Function based on Non-Local Means Filter (비지역적 평균 필터 기반의 개선된 커널 함수를 이용한 가우시안 잡음 제거 기법)

  • Lin, Yueqi; Choi, Hyunho; Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.73-76 / 2018
  • Gaussian noise is caused by the surrounding environment or channel interference when an image is transmitted. The noise degrades not only image quality but also the performance of high-level image processing. The Non-Local Means (NLM) filter removes noise by finding similarities among neighboring sets of pixels and assigning weights according to the similarity; the weighted average is then calculated from these weights. However, the NLM filter shows low noise-removal performance and high complexity in the process of computing the similarities for weight allocation over the neighborhood sets. To solve these problems, we propose an algorithm that achieves excellent noise-reduction performance by using a Summed Square Image (SSI) to reduce the complexity and by applying a weighting function based on a cosine Gaussian kernel function. Experimental results demonstrate the effectiveness of the proposed algorithm.

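The core NLM weighting step can be sketched in a few lines. This toy version compares fixed-size patches with a plain Gaussian weighting; the cosine Gaussian kernel and SSI acceleration of the proposed method are not reproduced:

```python
import numpy as np

def nlm_pixel(image, i, j, patch=1, window=2, h=10.0):
    """Denoise pixel (i, j) with a basic Non-Local Means weighted average.

    Similar patches inside the search window get large Gaussian weights
    exp(-d2 / h^2), where d2 is the mean squared patch difference."""
    img = np.asarray(image, dtype=float)
    ref = img[i - patch:i + patch + 1, j - patch:j + patch + 1]
    num, den = 0.0, 0.0
    for r in range(i - window, i + window + 1):
        for c in range(j - window, j + window + 1):
            cand = img[r - patch:r + patch + 1, c - patch:c + patch + 1]
            d2 = np.mean((ref - cand) ** 2)     # patch dissimilarity
            w = np.exp(-d2 / h**2)              # Gaussian weight
            num += w * img[r, c]
            den += w
    return num / den

# On a constant image every patch matches, so the pixel is unchanged.
flat = np.full((9, 9), 7.0)
out = nlm_pixel(flat, 4, 4)
```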

Practical Approach for Blind Algorithms Using Random-Order Symbol Sequence and Cross-Correntropy (랜덤오더 심볼열과 상호 코렌트로피를 이용한 블라인드 알고리듬의 현실적 접근)

  • Kim, Namyong
    • The Journal of Korean Institute of Communications and Information Sciences / v.39A no.3 / pp.149-154 / 2014
  • The cross-correntropy concept can be expressed with inner products of two probability density functions constructed by Gaussian-kernel density estimation. Blind algorithms based on maximization of the cross-correntropy (MCC) and a symbol set of N randomly generated samples yield superior learning performance, but they have a huge computational complexity in the weight-update process based on the MCC. In this paper, a method of reducing the computational complexity of the MCC algorithm is proposed that calculates the gradient of the cross-correntropy recursively. The proposed method requires only O(N) operations per iteration, while conventional MCC algorithms that calculate the gradient by block processing require $O(N^2)$. The simulation results show that the proposed method achieves the same learning performance while significantly reducing the heavy calculation burden.
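The cross-correntropy of two sample sets has a direct sample estimate; a minimal sketch (with an arbitrary kernel width) follows:

```python
import numpy as np

def cross_correntropy(x, y, sigma=1.0):
    """Sample cross-correntropy: the inner product of the two Gaussian-kernel
    Parzen density estimates, (1/NM) sum_i sum_j G_{sqrt(2) sigma}(x_i - y_j)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diffs = x[:, None] - y[None, :]
    s = np.sqrt(2.0) * sigma               # kernel widths add under convolution
    G = np.exp(-diffs**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    return G.mean()

# Aligned sample sets give a larger cross-correntropy than mismatched ones.
v_match = cross_correntropy([0.0, 0.1, -0.1], [0.0, 0.05, -0.05])
v_mismatch = cross_correntropy([0.0, 0.1, -0.1], [5.0, 5.1, 4.9])
```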

A Study on Power Variations of Magnitude Controlled Input of Algorithms based on Cross-Information Potential and Delta Functions (상호정보 에너지와 델타함수 기반의 알고리즘에서 크기 조절된 입력의 전력변화에 대한 연구)

  • Kim, Namyong
    • Journal of Internet Computing and Services / v.18 no.6 / pp.1-6 / 2017
  • For the cross-information potential with delta functions (CIPD) algorithm, which has superior performance in impulsive noise environments, a new method of employing the information of power variations of the magnitude-controlled input (MCI) in the weight-update equation of the CIPD is proposed in this paper, where the input of the CIPD is modified by the Gaussian kernel of the error. To prove its effectiveness compared with the conventional CIPD algorithm, the distance between the current weight vector and the previous one is analyzed and compared under impulsive noise. In the simulation results, the proposed method shows a two-fold improvement in steady-state stability, 1.8-times faster convergence, and a 2 dB lower minimum MSE in the impulsive noise situation.
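The magnitude-controlled input idea, scaling each input vector by the Gaussian kernel of its error, can be sketched as follows. The weighting of the update equation by MCI power variations, the paper's actual contribution, is not reproduced:

```python
import numpy as np

def magnitude_controlled_input(x, error, sigma=1.0):
    """Scale the input vector by the Gaussian kernel of its error: a large
    (impulse-like) error yields a near-zero kernel value, so that outlier's
    input contributes almost nothing to the weight update."""
    g = np.exp(-error**2 / (2 * sigma**2))   # Gaussian kernel of the error
    return g * np.asarray(x, dtype=float)

small = magnitude_controlled_input([1.0, 1.0], 0.1)      # ordinary error
impulse = magnitude_controlled_input([1.0, 1.0], 50.0)   # impulsive-noise error
```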