• Title/Summary/Keyword: kernel learning


Truncated Kernel Projection Machine for Link Prediction

  • Huang, Liang;Li, Ruixuan;Chen, Hong
    • Journal of Computing Science and Engineering / v.10 no.2 / pp.58-67 / 2016
  • With the large amount of complex-network data increasingly available on the Web, link prediction has become a popular data-mining research field. This paper focuses on a link-prediction task that can be formulated as a binary classification problem in complex networks. To solve this problem, a sparse classification algorithm called the "Truncated Kernel Projection Machine," based on empirical-feature selection, is proposed. The algorithm realizes sparse empirical-feature-based learning in a way that differs from regularized kernel-projection machines. It is more appealing than previous learning machines because it can be computed efficiently and is easy to implement stably for the link-prediction task. The algorithm is applied to link-prediction tasks in different complex networks, and several classification algorithms are investigated for comparison. The experimental results show that the proposed algorithm outperformed the compared algorithms on several key indices, with fewer test errors and greater stability.
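The abstract only outlines the method, so the sketch below illustrates the general idea of empirical-feature-based kernel learning with coefficient truncation: eigenvectors of a kernel matrix serve as empirical features, a linear separator is fit on them, and small coefficients are truncated for sparsity. The RBF kernel, the synthetic node-pair features, and the truncation rule are all illustrative assumptions, not the paper's Truncated Kernel Projection Machine.

```python
# Minimal sketch of kernel-based empirical-feature learning with truncation
# (illustrative only; not the paper's Truncated Kernel Projection Machine).
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise squared distances -> Gaussian kernel values.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def empirical_features(K, n_components=20):
    # Eigen-decompose the kernel matrix and keep the leading components
    # as "empirical features" of the training sample.
    w, V = np.linalg.eigh(K)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # hypothetical node-pair features
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # hypothetical link labels

K = rbf_kernel(X, X)
Phi = empirical_features(K)

# Fit a least-squares separator in the empirical-feature space, then
# truncate small coefficients to obtain a sparse representation.
w_hat, *_ = np.linalg.lstsq(Phi, 2 * y - 1, rcond=None)
w_sparse = np.where(np.abs(w_hat) > 0.1 * np.abs(w_hat).max(), w_hat, 0.0)
pred = (Phi @ w_sparse > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```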

Real-Time Prediction for Product Surface Roughness by Support Vector Regression (서포트벡터 회귀를 이용한 실시간 제품표면거칠기 예측)

  • Choi, Sujin;Lee, Dongju
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.117-124 / 2021
  • The development of IoT and artificial intelligence technologies is driving the smartization of manufacturing systems. In this study, data from an acceleration sensor and a current sensor were collected during cutting experiments on SKD11, a material widely used for special mold steel, and the amount of tool wear and the product surface roughness were measured. Support Vector Regression (SVR) is applied to predict the product surface roughness in real time from the collected data. SVR, a machine learning technique, is widely used for linear and non-linear prediction through the kernel concept. In particular, Generalized Support Vector Quantile Regression (GSVQR) is applied to obtain over-, under-, and neutral estimates of surface roughness, which are then compared. Surface roughness is also predicted with both a linear kernel and an RBF kernel; in terms of accuracy, the RBF kernel outperforms the linear kernel. Since tool wear is difficult to measure in real time, surface roughness is further predicted from the acceleration and current data alone, excluding tool wear; the resulting accuracy does not differ significantly from that obtained when tool wear is included.
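As a rough illustration of the kernel comparison described above, the sketch below fits epsilon-SVR with a linear and an RBF kernel using scikit-learn on synthetic stand-ins for the acceleration and current features; the GSVQR variant and the real sensor data are not reproduced.

```python
# Hedged sketch: comparing a linear and an RBF kernel in epsilon-SVR for
# surface-roughness prediction. Features and targets are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))             # e.g. acceleration and current features
y = 0.8 * X[:, 0] ** 2 + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    model = SVR(kernel=kernel, C=10.0, epsilon=0.01).fit(X_tr, y_tr)
    print(kernel, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```

On this non-linear synthetic target the RBF kernel should give the lower error, mirroring the comparison reported in the abstract.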

Design of an Optimized GPGPU for Data Reuse in DeepLearning Convolution (딥러닝 합성곱에서 데이터 재사용에 최적화된 GPGPU 설계)

  • Nam, Ki-Hun;Lee, Kwang-Yeob;Jung, Jun-Mo
    • Journal of IKEEE / v.25 no.4 / pp.664-671 / 2021
  • This paper proposes a GPGPU architecture that reduces the number of operations and memory accesses by applying a data-reuse method to convolutional neural networks (CNNs). Convolution is a two-dimensional operation over a kernel and input data, performed by sliding the kernel. Instead of repeatedly loading the kernel from cache memory until the convolution completes, a reuse method that keeps the kernel in internal registers is proposed. A serial operation scheme is applied to the convolution to increase the benefit of data reuse, exploiting the GPGPU principle of executing instructions in SIMT fashion. For register-based data reuse, the kernel size is fixed at 4×4, and the GPGPU is designed with the warp size and register banks chosen to support it effectively. To verify the performance of the designed GPGPU on CNNs, it was implemented on an FPGA, LeNet was run on it, and its AlexNet performance was measured and compared against TensorFlow. The measured training time per iteration on AlexNet is 0.468 s and the inference time is 0.135 s.
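The data-reuse idea can be sketched in software as follows: the 4×4 kernel is copied once into local storage and reused at every sliding position, so only the input window changes per output element. This NumPy loop models the access pattern only, not the proposed SIMT or register-bank hardware design.

```python
# Illustrative sketch of the data-reuse idea: the 4x4 kernel is loaded once
# into local ("register") storage and reused for every sliding position,
# instead of being re-fetched for each output element.
import numpy as np

def conv2d_kernel_reuse(x, k):
    kh, kw = k.shape                       # fixed 4x4 kernel in this sketch
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    k_local = k.copy()                     # "register-resident" kernel copy
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Only the input window changes; the kernel is reused as-is.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k_local)
    return out

x = np.arange(64, dtype=float).reshape(8, 8)
k = np.ones((4, 4)) / 16.0
print(conv2d_kernel_reuse(x, k).shape)     # (5, 5)
```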

Learning Discriminative Fisher Kernel for Image Retrieval

  • Wang, Bin;Li, Xiong;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.522-538 / 2013
  • Content-based image retrieval has become an increasingly important research topic because of its wide range of applications, and it is highly challenging for large-scale databases with large variance. Retrieval systems rely on a key component: predefined or learned similarity measures over images. We note that these similarity measures can potentially be improved if the data-distribution information is exploited in a more sophisticated way. In this paper, we propose a similarity-measure learning approach for image retrieval. The similarity measure, the so-called Fisher kernel, is derived from the probabilistic distribution of images and is a function of the observed data, hidden variables, and model parameters, where the hidden variables encode high-level information that is powerful for discrimination but was not exploited by previous methods. We further propose a discriminative learning method for the similarity measure, encouraging the learned similarity to take large values for pairs of images with the same label and small values for pairs with distinct labels. The learned similarity measure, which fully exploits the data distribution, is well adapted to the dataset and improves the retrieval system. We evaluate the proposed method on the Corel-1000, Corel5k, Caltech101, and MIRFlickr 25,000 databases. The results show the competitive performance of the proposed method.
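For readers unfamiliar with the construction, the sketch below shows a Fisher kernel under a simple diagonal-Gaussian model, where the kernel is the Fisher-information-weighted inner product of log-likelihood gradients. The paper's hidden-variable model and discriminative learning step are not reproduced, and the descriptors are synthetic.

```python
# Minimal sketch of a Fisher kernel under a diagonal-Gaussian model: each
# descriptor x gets a Fisher score (gradient of the log-likelihood w.r.t.
# the mean), and the kernel is the information-weighted inner product of
# two scores. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(loc=1.0, size=(100, 16))    # hypothetical image descriptors

mu = X.mean(axis=0)
sigma2 = X.var(axis=0) + 1e-8

def fisher_score(x):
    # d/d_mu log N(x | mu, diag(sigma2)) = (x - mu) / sigma2
    return (x - mu) / sigma2

def fisher_kernel(x, y):
    # Score inner product weighted by the inverse Fisher information,
    # which for the Gaussian mean is diag(sigma2).
    return float(fisher_score(x) @ (fisher_score(y) * sigma2))

print(fisher_kernel(X[0], X[1]))
```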

Automation of Model Selection through Neural Networks Learning (신경 회로망 학습을 통한 모델 선택의 자동화)

  • 류재흥
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.10a / pp.313-316 / 2004
  • Model selection is the process of setting the regularization parameter of a support vector machine or regularization network using external methods such as generalized cross-validation or the L-curve criterion. This paper suggests that the regularization parameter can instead be obtained within the learning process of the neural network itself, without resorting to separate selection methods. An extended kernel method is introduced, and the relationship between the regularization parameter and the bias term of the extended kernel is established. Experimental results show the effectiveness of the new model-selection method.
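For context, the sketch below shows the kind of external model selection the paper argues can be avoided: choosing the regularization parameter of kernel ridge regression by generalized cross-validation. The extended-kernel method that ties the parameter to the bias term during learning is not shown; the kernel width, data, and lambda grid are assumptions.

```python
# Baseline sketch of "external" model selection: pick the regularization
# parameter of kernel ridge regression by generalized cross-validation (GCV).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=80)

K = np.exp(-((X - X.T) ** 2) * 5.0)        # RBF kernel matrix

def gcv_score(lmbda):
    n = K.shape[0]
    # Hat matrix of kernel ridge regression: K (K + lambda I)^{-1}
    A = K @ np.linalg.inv(K + lmbda * np.eye(n))
    resid = y - A @ y
    return (resid @ resid / n) / (1 - np.trace(A) / n) ** 2

lambdas = np.logspace(-6, 1, 30)
best = min(lambdas, key=gcv_score)
print("GCV-selected lambda:", best)
```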


Improvement of Support Vector Clustering using Evolutionary Programming and Bootstrap

  • Jun, Sung-Hae
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.3 / pp.196-201 / 2008
  • Statistical learning theory provides three analytical tools, support vector machines, support vector regression, and support vector clustering, for classification, regression, and clustering, respectively. Their performance is generally good because they are formulated as convex optimization problems, but the methods have some drawbacks. One is that the kernel-function and regularization parameters are determined subjectively, by the judgment of the researcher, and the results of the learning machines depend on the selected parameters. In this paper, we propose an efficient method for objectively determining the parameters of support vector clustering, the clustering method of statistical learning theory. Using an evolutionary algorithm and the bootstrap, we select the kernel-function parameter and the regularization constant objectively. To verify the improved performance of the proposed method, we compare it with established learning algorithms using data sets from the UCR machine learning repository and synthetic data.
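A minimal sketch of the parameter-selection idea follows, under stated assumptions: since support vector clustering is not available in scikit-learn, spectral clustering with an RBF affinity stands in for the clustering step, a (1+1) evolutionary loop mutates the kernel parameter, and each candidate is scored on bootstrap resamples via the silhouette score. None of these choices come from the paper.

```python
# Hedged sketch: evolutionary search over a kernel parameter, scored on
# bootstrap resamples. Spectral clustering stands in for SV clustering.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
rng = np.random.default_rng(4)

def fitness(gamma, n_boot=5):
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))        # bootstrap resample
        Xb = X[idx]
        labels = SpectralClustering(n_clusters=3, affinity="rbf",
                                    gamma=gamma, random_state=0).fit_predict(Xb)
        scores.append(silhouette_score(Xb, labels))
    return float(np.mean(scores))

# (1+1) evolutionary loop: mutate log-gamma, keep the better candidate.
log_gamma = 0.0
best_fit = fitness(np.exp(log_gamma))
for _ in range(15):
    cand = log_gamma + rng.normal(scale=0.5)
    f = fitness(np.exp(cand))
    if f > best_fit:
        log_gamma, best_fit = cand, f
print("selected gamma:", np.exp(log_gamma), "fitness:", best_fit)
```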

Improving effective Learning Performance of Kernel method (커널 메소드의 효과적인 학습 성능 향상)

  • 김은미;김수희;정태웅;이배호
    • Proceedings of the IEEK Conference / 2002.06c / pp.9-12 / 2002
  • This paper proposes a dynamic moment algorithm that controls oscillation before the convergence of Kernel Relaxation (KR). The proposed algorithm adjusts convergence speed and performance by changing the dynamic moment during training. SONAR data, a standard benchmark for neural-network classifiers, was used for an impartial performance evaluation. The proposed algorithm was applied to the recently introduced kernel methods KP (kernel perceptron), KPM (kernel perceptron with margin), and KLMS (kernel LMS). The simulation results show that the proposed algorithm converges better than versions using no moment or a static moment.
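As a rough sketch of applying a moment term to a kernel-method learner, the code below adds a momentum buffer to the dual-coefficient updates of a kernel perceptron. The paper's dynamic rule for adjusting the moment during training, and the KR/KPM/KLMS variants, are not reproduced; the data and update rule here are purely illustrative.

```python
# Illustrative kernel perceptron whose dual-coefficient updates carry a
# momentum ("moment") term; the dynamic adjustment rule of the paper is
# not reproduced, only the basic mechanism.
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_perceptron_momentum(X, y, epochs=20, moment=0.5):
    n = len(X)
    alpha = np.zeros(n)           # dual coefficients
    velocity = np.zeros(n)        # momentum buffer for the updates
    K = np.array([[rbf(a, b) for b in X] for a in X])
    for _ in range(epochs):
        for i in range(n):
            pred = np.sign(np.sum(alpha * y * K[:, i]) or 1.0)
            update = 1.0 if pred != y[i] else 0.0
            velocity[i] = moment * velocity[i] + update
            alpha[i] += velocity[i]
    return alpha

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5, 1.0, -1.0)
alpha = kernel_perceptron_momentum(X, y)
print("points with nonzero coefficients:", int(np.sum(alpha > 0)))
```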


Mercer Kernel Isomap

  • Choi, Hee-Youl;Choi, Seung-Jin
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.748-750 / 2005
  • Isomap [1] is a manifold learning algorithm that extends classical multidimensional scaling (MDS) by considering approximate geodesic distance instead of Euclidean distance. The approximate geodesic distance matrix can be interpreted as a kernel matrix, which implies that Isomap can be solved as a kernel eigenvalue problem. However, the geodesic distance kernel matrix is not guaranteed to be positive semidefinite. In this paper we employ a constant-adding method, which leads to the Mercer kernel-based Isomap algorithm. Numerical experiments with noisy 'Swiss roll' data confirm the validity and high performance of our kernel Isomap algorithm.
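A simplified version of the positive-semidefinite correction can be sketched as follows, with a stated shortcut: the double-centered distance matrix is shifted by its most negative eigenvalue, whereas the paper's constant-adding method derives the additive constant from a dedicated eigenvalue problem on the geodesic distances.

```python
# Simplified sketch of turning a distance matrix into a valid Mercer kernel
# via an eigenvalue shift (a shortcut, not the paper's exact constant).
import numpy as np

def distances_to_mercer_kernel(D):
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    K = -0.5 * H @ (D ** 2) @ H             # classical MDS "kernel"
    lam_min = np.linalg.eigvalsh(K).min()
    if lam_min < 0:
        K = K + (-lam_min) * np.eye(n)      # shift to positive semidefinite
    return K

# Hypothetical symmetric distance matrix standing in for graph geodesics.
rng = np.random.default_rng(6)
P = rng.normal(size=(30, 2))
D = np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(-1))
K = distances_to_mercer_kernel(D)
print("min eigenvalue:", np.linalg.eigvalsh(K).min())   # ~0 or positive
```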


SKU-Net: Improved U-Net using Selective Kernel Convolution for Retinal Vessel Segmentation

  • Hwang, Dong-Hwan;Moon, Gwi-Seong;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.29-37 / 2021
  • In this paper, we propose a deep learning-based retinal vessel segmentation model that handles the multi-scale information of fundus images by integrating selective kernel convolution into a U-Net-based convolutional neural network. The proposed model extracts and segments retinal blood vessels of various shapes and sizes, which is important information for diagnosing eye-related diseases from fundus images. The model consists of standard convolutions and selective kernel convolutions: while a standard convolutional layer extracts information with a single kernel size, a selective kernel convolution extracts information from branches with different kernel sizes and combines them adaptively through split-attention. To evaluate the proposed model, we used the DRIVE and CHASE DB1 datasets, on which it achieved F1 scores of 82.91% and 81.71%, respectively, confirming that it is effective in segmenting retinal blood vessels.
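A minimal selective-kernel block in the spirit of the description above might look like the PyTorch sketch below: two branches with different kernel sizes are fused by split-attention computed from their pooled sum. Channel counts, branch sizes, and the reduction ratio are illustrative and do not follow the SKU-Net architecture in the paper.

```python
# Minimal selective-kernel convolution block (in the spirit of SKNet):
# parallel branches with different kernel sizes, fused by channel-wise
# split-attention computed from their summed features.
import torch
import torch.nn as nn

class SelectiveKernelConv(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels * 2))

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        s = self.pool(u3 + u5).flatten(1)           # global channel descriptor
        a = self.fc(s).view(x.size(0), 2, -1)       # two attention vectors
        a = torch.softmax(a, dim=1)                 # split-attention over branches
        a3 = a[:, 0].unsqueeze(-1).unsqueeze(-1)
        a5 = a[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a3 * u3 + a5 * u5

x = torch.randn(1, 16, 64, 64)
print(SelectiveKernelConv(16)(x).shape)             # torch.Size([1, 16, 64, 64])
```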

Autoencoder-Based Automotive Intrusion Detection System Using Gaussian Kernel Density Estimation Function (가우시안 커널 밀도 추정 함수를 이용한 오토인코더 기반 차량용 침입 탐지 시스템)

  • Donghyeon Kim;Hyungchul Im;Seongsoo Lee
    • Journal of IKEEE / v.28 no.1 / pp.6-13 / 2024
  • This paper proposes an approach to detecting abnormal data on the automotive controller area network (CAN) using an unsupervised learning model, namely an autoencoder combined with a Gaussian kernel density estimation function. The proposed autoencoder is trained only on the message IDs of CAN data frames. The Gaussian kernel density estimation function then detects abnormal data based on the trained model, using an optimally determined number of frames and loss threshold. The approach was verified and evaluated on four types of attack data: DoS attacks, gear spoofing attacks, RPM spoofing attacks, and fuzzy attacks. Compared with conventional unsupervised learning-based models, it achieved over 99% detection performance across all evaluation metrics.
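A hedged sketch of this kind of pipeline follows: an autoencoder is trained on normal windows of message IDs, a Gaussian kernel density estimate is fit to the reconstruction errors, and frames whose error falls in a low-density region are flagged. The window length, network sizes, and density cut-off are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch: autoencoder reconstruction error + Gaussian KDE threshold
# for flagging anomalous CAN-ID windows. All sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import gaussian_kde

torch.manual_seed(0)
normal = torch.rand(500, 16)               # stand-in for normalised ID windows

ae = nn.Sequential(nn.Linear(16, 4), nn.ReLU(), nn.Linear(4, 16))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    errors = ((ae(normal) - normal) ** 2).mean(dim=1).numpy()

kde = gaussian_kde(errors)                  # density of normal reconstruction error
threshold = np.quantile(kde(errors), 0.01)  # low-density cut-off (illustrative)

def is_attack(window):
    with torch.no_grad():
        err = ((ae(window) - window) ** 2).mean().item()
    return kde([err])[0] < threshold

print(is_attack(torch.rand(16) * 5.0))      # far-off window -> likely flagged
```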