• Title/Summary/Keyword: kernel feature

191 search results, processing time 0.022 seconds

Analysis of Dimensionality Reduction Methods Through Epileptic EEG Feature Selection for Machine Learning in BCI (BCI에서 기계 학습을 위한 간질 뇌파 특징 선택을 통한 차원 감소 방법 분석)

  • Tong, Yang;Aliyu, Ibrahim;Lim, Chang-Gyoon
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.13 no.6
    • /
    • pp.1333-1342
    • /
    • 2018
  • Until now, electroencephalography (EEG) has been the most important and convenient method for the diagnosis and treatment of epilepsy. However, it is difficult to identify the wave characteristics of epileptic EEG signals because they are very weak, non-stationary and contaminated by strong background noise. In this paper, we analyse the effect of dimensionality reduction methods on epileptic EEG feature selection and classification. Three dimensionality reduction methods were investigated: Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA) and Linear Discriminant Analysis (LDA). The performance of each method was evaluated using a Support Vector Machine (SVM), Logistic Regression (LR), K-Nearest Neighbor (K-NN), Decision Tree (DT) and Random Forest (RF). From the experimental results, PCA recorded its highest accuracy of 75% with SVM, LR and K-NN. KPCA recorded its best performance of 85% with SVM and K-NN, while LDA achieved 100% accuracy with K-NN. Thus, LDA dimensionality reduction is found to provide the best classification result for epileptic EEG signals.
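The comparison described above can be reproduced in outline with scikit-learn. The sketch below is not the authors' code: the EEG feature matrix, labels, component counts and train/test split are placeholders, and only the pairing of the three reducers with the five classifiers follows the abstract.

```python
# Minimal sketch: PCA / KPCA / LDA followed by five classifiers (placeholder data).
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # placeholder EEG feature matrix
y = rng.integers(0, 2, size=200)        # placeholder labels (seizure / non-seizure)

reducers = {
    "PCA": PCA(n_components=10),
    "KPCA": KernelPCA(n_components=10, kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(n_components=1),  # at most n_classes - 1 components
}
classifiers = {
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "K-NN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for r_name, reducer in reducers.items():
    Z_tr = reducer.fit_transform(X_tr, y_tr)   # y is ignored by PCA/KPCA, used by LDA
    Z_te = reducer.transform(X_te)
    for c_name, clf in classifiers.items():
        acc = accuracy_score(y_te, clf.fit(Z_tr, y_tr).predict(Z_te))
        print(f"{r_name} + {c_name}: {acc:.2f}")
```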

Human activity recognition with analysis of angles between skeletal joints using a RGB-depth sensor

  • Ince, Omer Faruk;Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • ETRI Journal
    • /
    • v.42 no.1
    • /
    • pp.78-89
    • /
    • 2020
  • Human activity recognition (HAR) has become an effective computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained using an RGB-depth sensor are used as features. Because HAR operates in the time domain, angle information is stored using a sliding kernel method. The Haar wavelet transform (HWT) is applied to preserve the information in the features before reducing the data dimension. Dimension reduction using an averaging algorithm is also applied to decrease the computational cost, which provides faster performance while maintaining high accuracy. Before classification, a proposed thresholding method with the inverse HWT is used to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with results obtained using other machine learning algorithms.
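As a rough illustration of the feature pipeline in this abstract (sliding windows of joint angles, Haar-wavelet transform, thresholding with the inverse HWT, dimension reduction by averaging, then k-NN), the following sketch uses PyWavelets and scikit-learn. The window size, averaging block size, threshold and synthetic data are assumptions, not values from the paper.

```python
# Sketch: joint-angle windows -> HWT -> threshold + inverse HWT -> block averaging -> k-NN.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def extract_features(angle_window, threshold=0.1):
    """angle_window: (n_frames, n_angles) sliding-window buffer of joint angles."""
    feats = []
    for col in angle_window.T:                          # each joint-angle series
        cA, cD = pywt.dwt(col, "haar")                  # Haar wavelet transform
        cD = np.where(np.abs(cD) < threshold, 0.0, cD)  # suppress small detail coefficients
        rec = pywt.idwt(cA, cD, "haar")                 # inverse HWT after thresholding
        trimmed = rec[: len(rec) // 4 * 4]              # block averaging (assumed block size 4)
        feats.append(trimmed.reshape(-1, 4).mean(axis=1))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
windows = rng.normal(size=(50, 32, 8))                  # 50 windows, 32 frames, 8 joint angles
labels = rng.integers(0, 4, size=50)                    # 4 placeholder activity classes
X = np.array([extract_features(w) for w in windows])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(knn.predict(X[:5]))
```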

Decomposition of primary tar influenced by char particle types and reaction time during biomass gasification (바이오매스 가스화시 촤 입자 종류 및 반응시간에 따른 일차타르의 분해 특성)

  • Park, Jinje;Lee, Yongwoon;Ryu, Changkook
    • Proceedings of the Korean Society of Combustion Conference
    • /
    • 2014.11a
    • /
    • pp.33-36
    • /
    • 2014
  • Gasification of biomass produces syngas containing CO, H₂ and/or CH₄, which can then be converted into energy or value-added fuels. One of the key issues for efficient gasification is to minimize the tar concentration in the syngas for use in a final conversion device such as a gas engine. This study investigated the decomposition of primary tar by catalytic cracking using char as the catalyst, a feature that can be integrated into a fixed-bed gasifier design. The pyrolysis vapor containing tar from pyrolysis of wood at 500 °C was passed through a reactor filled with or without char at 800 °C for a residence time of 1, 3 or 5 s. The condensable vapor (water and tar) and gases were then analyzed for yields and elemental composition. Four types of char particles with different microscopic surface areas and pore size distributions were tested: wood, paddy straw, palm kernel shell and activated carbon. The results were analyzed for the mass and carbon yields of tar and the composition of the product gases to determine the effects of char type and residence time.

3D Rendering of Magnetic Resonance Images using Visualization Toolkit and Microsoft.NET Framework

  • Madusanka, Nuwan;Zaben, Naim Al;Shidaifat, Alaaddin Al;Choi, Heung-Kook
    • Journal of Multimedia Information System
    • /
    • v.2 no.2
    • /
    • pp.207-214
    • /
    • 2015
  • In this paper, we propose new software for 3D rendering of MR images in the medical domain using the C# wrapper of the Visualization Toolkit (VTK) and the Microsoft .NET framework. Our objective in developing this software was to provide medical image segmentation, 3D rendering and visualization of the hippocampus from DICOM images for the diagnosis of Alzheimer's disease patients. Such three-dimensional visualization can play an important role in the diagnosis of Alzheimer's disease. Segmented images can be used to reconstruct the 3D volume of the hippocampus, which in turn can be used for feature extraction and for measuring the surface area and volume of the hippocampus to assist the diagnosis process. The software has been designed with interactive user interfaces and graphics kernels based on the Microsoft .NET framework to benefit from C# programming techniques, in particular their design patterns and rapid application development. A preliminary interactive window is created by invoking C#, and the VTK kernel is embedded into this window, where the graphics resources are then allocated. Visualization is presented through an interactive window so that the data can be rendered according to the user's preference.
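The rendering pipeline summarized above maps onto standard VTK objects. The sketch below uses VTK's Python bindings rather than the C#/.NET wrapper the paper is built on, and the DICOM directory and transfer-function values are placeholders; it only illustrates the reader-mapper-volume-renderer chain.

```python
# Minimal VTK volume-rendering sketch (Python bindings; placeholder DICOM path and values).
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("path/to/dicom_series")      # placeholder path

# Map scalar values to opacity and color (transfer functions are illustrative only)
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(500, 0.15)
opacity.AddPoint(1500, 0.85)

color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(1500, 1.0, 1.0, 1.0)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```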

Analysis of market share attraction data using LS-SVM (최소제곱 서포트벡터기계를 이용한 시장점유율 자료 분석)

  • Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society
    • /
    • v.20 no.5
    • /
    • pp.879-886
    • /
    • 2009
  • The purpose of this article is to present the application of the Least Squares Support Vector Machine (LS-SVM) in analyzing the existing structure of a brand market. We estimate the parameters of the Market Share Attraction Model using LS-SVM, a non-parametric technique for function estimation that allows us to perform even nonlinear regression by constructing a linear regression function in a high-dimensional feature space. Estimation with the LS-SVM technique makes it a good candidate for solving the Market Share Attraction Model. To illustrate the performance of the proposed method, we use car sales data from South Korea's car market.
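For readers unfamiliar with LS-SVM, the core computation is the solution of one linear system in the dual variables. The sketch below shows that step with an RBF kernel on synthetic data; the market share attraction specification itself (brand attractions, covariates, transformations) is not reproduced, and the hyperparameters are arbitrary.

```python
# Compact LS-SVM regression sketch with an RBF kernel (illustrative only).
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # Solve the LS-SVM linear system for the bias b and dual coefficients alpha
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                       # b, alpha

def lssvm_predict(X_new, X_train, b, alpha, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))                     # placeholder covariates
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)  # placeholder response
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X[:5], X, b, alpha))
```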

New Kernel-Based Normality Recovery Method and Applications (새로운 커널 기반 정상 상태 복구 기법과 응용)

  • Kang Dae-Sung;Park Joo-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.4
    • /
    • pp.410-415
    • /
    • 2006
  • The SVDD (support vector data description) is one of the most important one-class support vector learning methods; it relies on balls defined in the feature space to discriminate normal data from all other possible abnormal objects. This paper addresses the extension of the SVDD method to the problem of recovering normal content from data contaminated with noise. The validity of the proposed de-noising method is shown via an application to recovering high-resolution images from low-resolution images based on high-resolution training data.
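The "ball in feature space" description of normal data can be tried out quickly with scikit-learn's OneClassSVM, which with an RBF kernel is closely related to SVDD. This stand-in shows only the one-class discrimination step, not the paper's de-noising or image-recovery procedure; the data and parameters are placeholders.

```python
# One-class description of "normal" data as a stand-in for SVDD (placeholder data).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))      # "normal" training data
test = np.vstack([rng.normal(size=(5, 2)),                  # normal-looking points
                  rng.normal(loc=6.0, size=(5, 2))])        # abnormal points

model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(normal)
print(model.predict(test))   # +1 -> inside the description, -1 -> abnormal
```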

PMU (Performance Monitoring Unit)-Based Dynamic XIP (eXecute In Place) Technique for Embedded Systems (내장형 시스템을 위한 PMU (Performance Monitoring Unit) 기반 동적 XIP (eXecute In Place) 기법)

  • Kim, Dohun;Park, Chanik
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.3
    • /
    • pp.158-166
    • /
    • 2008
  • These days, mobile embedded systems adopt flash memory capable of XIP (eXecute In Place) because it can reduce memory usage, power consumption, and software load time. XIP provides processors with direct access to ROM and flash memory. However, using XIP incurs unnecessary degradation of application performance because direct access to ROM and flash memory has longer delays than access to main memory. In this paper, we propose a memory management framework, dynamic XIP, which resolves the performance degradation of using XIP. Using a constrained RAM cache, dynamic XIP can dynamically change the XIP region according to the page access pattern to reduce the degradation in execution time or energy consumption resulting from the native XIP problem. The proposed framework consists of a page profiler that gathers an application's memory access pattern using the PMU and an XIP manager that decides whether a page is accessed in main memory or in flash memory. The proposed framework is implemented and evaluated in the Linux kernel. Our evaluation shows that the framework can reduce execution time by up to 25% and energy consumption by up to 22% compared with the XIP-only case adopted in general mobile embedded systems. Moreover, the evaluation shows that our modified LRU algorithm with code page filters can reduce execution time and energy consumption by up to 90% and 80%, respectively, compared with applying only the existing LRU algorithm to dynamic XIP.
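The promotion decision at the heart of dynamic XIP can be pictured with a small user-space simulation. The sketch below is only an assumption-laden illustration of the policy (PMU-style access counters, a bounded RAM cache with LRU eviction, other pages executed in place from flash); the paper's actual implementation lives inside the Linux kernel and uses real PMU events, and the threshold and cache size here are invented.

```python
# Illustrative simulation of a dynamic-XIP promotion policy (not the kernel implementation).
from collections import OrderedDict

class DynamicXIP:
    def __init__(self, ram_pages=4, promote_threshold=3):
        self.ram = OrderedDict()          # page -> access count, in LRU order
        self.ram_pages = ram_pages
        self.threshold = promote_threshold
        self.counts = {}                  # PMU-style access counters per page

    def access(self, page):
        self.counts[page] = self.counts.get(page, 0) + 1
        if page in self.ram:              # hit in RAM cache: fast path
            self.ram.move_to_end(page)
            return "RAM"
        if self.counts[page] >= self.threshold:
            if len(self.ram) >= self.ram_pages:
                self.ram.popitem(last=False)   # evict least recently used page
            self.ram[page] = self.counts[page]
            return "promoted to RAM"
        return "XIP from flash"

xip = DynamicXIP()
for page in [1, 2, 1, 1, 3, 1, 2, 2, 4, 2]:
    print(page, xip.access(page))
```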

GA-optimized Support Vector Regression for an Improved Emotional State Estimation Model

  • Ahn, Hyunchul;Kim, Seongjin;Kim, Jae Kyeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.6
    • /
    • pp.2056-2069
    • /
    • 2014
  • In order to implement interactive and personalized Web services properly, it is necessary to understand the tangible and intangible responses of the users and to recognize their emotional states. Recently, some studies have attempted to build emotional state estimation models based on facial expressions. Most of these studies have applied multiple regression analysis (MRA), artificial neural networks (ANN), and support vector regression (SVR) as the prediction algorithm, but the prediction accuracies have been relatively low. In order to improve the prediction performance of the emotion prediction model, we propose a novel SVR model that is optimized using a genetic algorithm (GA). Our proposed algorithm, GASVR, is designed to optimize the kernel parameters and the feature subsets of SVRs in order to predict the levels of two aspects of the users' emotions: valence and arousal. In order to validate the usefulness of GASVR, we collected a real-world data set of facial responses and emotional states via a survey. We applied GASVR and other algorithms, including MRA, ANN, and conventional SVR, to the data set. We found that GASVR outperformed all of the comparative algorithms in the prediction of the valence and arousal levels.
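A minimal version of the GA-over-SVR idea (kernel parameters plus a binary feature mask in one chromosome, hold-out error as fitness) can be written with a hand-rolled GA and scikit-learn's SVR. The encoding, operators, population settings and data below are assumptions for illustration, not the paper's GASVR.

```python
# Hand-rolled GA optimizing RBF-SVR parameters and a feature mask (placeholder data).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 12))                               # placeholder facial-response features
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=150)     # placeholder valence score
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(chrom):
    log_c, log_g, mask = chrom
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return -np.inf
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g).fit(X_tr[:, cols], y_tr)
    return -mean_squared_error(y_va, model.predict(X_va[:, cols]))   # higher is better

def random_chrom():
    return [rng.uniform(-2, 2), rng.uniform(-3, 0), rng.integers(0, 2, size=X.shape[1])]

def mutate(chrom):
    log_c, log_g, mask = chrom
    mask = mask.copy()
    mask[rng.integers(0, len(mask))] ^= 1                    # flip one feature bit
    return [log_c + rng.normal(scale=0.2), log_g + rng.normal(scale=0.2), mask]

pop = [random_chrom() for _ in range(20)]
for _ in range(15):                                          # a few GA generations
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(p) for p in pop[:10]]           # elitism + mutation

best = max(pop, key=fitness)
print("best fitness (neg. MSE):", fitness(best))
```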

A Feature Based Approach to Extracting Ground Points from LIDAR Data (LIDAR 데이터로부터 지표점 추출을 위한 피쳐 기반 방법)

  • Lee, Im-Pyeong
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.4
    • /
    • pp.265-274
    • /
    • 2006
  • Extracting ground points is the kernel of DTM generation, which is considered one of the most popular LIDAR applications. Previous extraction approaches can mostly be characterized as point-based approaches, which sequentially examine every individual point to determine whether it was measured from the ground surface. The number of examinations to be performed is then equivalent to the number of points. Particularly for a large data set, the heavy computational requirement associated with these examinations is an obstacle to employing more sophisticated examination criteria. To reduce the number of entities to be examined and produce more robust results, we developed an approach based on features rather than points, where a feature is an entity constructed by grouping points. In the proposed approach, we first generate a set of features by organizing points into surface patches and grouping the patches into surface clusters. Among these features, we then attempt to identify the ground features using criteria based on the attributes of the features. The points grouped into the identified features are labeled as ground points and are used for DTM generation afterward. The proposed approach was applied to many real airborne LIDAR data sets. The analysis of the results strongly supports the good performance of the proposed approach in terms of both the computational requirement and the quality of the DTM.
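As a very coarse illustration of a feature-based flow (group points into patches, keep flat low-lying patches as ground features, label their points), the sketch below uses simple grid patching and arbitrary thresholds on synthetic points; it is not the surface-patch and surface-cluster algorithm of the paper.

```python
# Toy feature-based ground labelling: grid patches, flatness and height tests (placeholders).
import numpy as np

def label_ground(points, cell=5.0, flat_tol=0.3, height_tol=1.0):
    """points: (n, 3) array of x, y, z LIDAR returns. Returns a boolean ground mask."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    z_ref = points[:, 2].min()               # crude reference; a real method uses local terrain
    ground = np.zeros(len(points), dtype=bool)
    for key in {tuple(k) for k in keys}:     # group points into grid patches (the "features")
        idx = np.flatnonzero((keys == key).all(axis=1))
        z = points[idx, 2]
        # a flat, low-lying patch is accepted as a ground feature
        if z.std() < flat_tol and z.mean() < z_ref + height_tol:
            ground[idx] = True
    return ground

rng = np.random.default_rng(0)
terrain = np.column_stack([rng.uniform(0, 50, 500), rng.uniform(0, 50, 500),
                           rng.normal(0.0, 0.1, 500)])          # flat ground returns
building = np.column_stack([rng.uniform(10, 20, 100), rng.uniform(10, 20, 100),
                            rng.normal(8.0, 0.1, 100)])         # elevated roof returns
pts = np.vstack([terrain, building])
mask = label_ground(pts)
print("ground points:", mask.sum(), "of", len(pts))
```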

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Therefore, researchers have tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, k-nearest neighbor, etc. Recently, support vector machines (SVMs) have been popular in this research area because they have the advantages of not requiring huge training data and having a low possibility of overfitting. However, a user must determine several design factors by heuristics in order to use an SVM. For example, the selection of an appropriate kernel function and its parameters and proper feature subset selection are major design factors of the SVM. Beyond these factors, the proper selection of an instance subset may also improve the forecasting performance of the SVM by eliminating irrelevant and distorting training instances. Nonetheless, there have been few studies that have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose proper instance subsets from the original training data. It may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the parameter optimization simultaneously. We call this model ISVM (SVM with Instance selection). Experiments on stock market data are implemented using ISVM. In this study, the GA searches for optimal or near-optimal values of kernel parameters and relevant instances for the SVM. The chromosomes therefore need two sets of codes: one for the kernel parameters and one for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1. As the stopping condition, 50 generations are permitted. The application data used in this study consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI). The total number of samples is 2218 trading days. We separate the whole data into three subsets, training, test, and hold-out, containing 1056, 581, and 581 samples respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1056 original training instances are used to produce this result. In addition, a two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% statistical significance level.
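The simultaneous optimization of kernel parameters and instance subsets described above can be sketched with a small hand-rolled GA around scikit-learn's SVC. The chromosome layout, GA settings and synthetic "indicator" data below are illustrative assumptions, not the ISVM implementation or the KOSPI data set.

```python
# Hand-rolled GA selecting SVC kernel parameters and training instances together (placeholders).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                        # placeholder technical indicators
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # placeholder up/down direction
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

def fitness(chrom):
    log_c, log_g, mask = chrom
    if mask.sum() < 10:                               # keep a minimal training set
        return 0.0
    model = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
    model.fit(X_tr[mask], y_tr[mask])
    return model.score(X_te, y_te)                    # hold-out accuracy as fitness

def random_chrom():
    return [rng.uniform(-1, 2), rng.uniform(-3, 0),
            rng.integers(0, 2, size=len(X_tr)).astype(bool)]

def crossover(a, b):
    cut = rng.integers(1, len(a[2]))
    mask = np.concatenate([a[2][:cut], b[2][cut:]])   # one-point crossover on the instance mask
    return [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, mask]

def mutate(chrom, rate=0.1):
    mask = chrom[2].copy()
    flips = rng.random(len(mask)) < rate
    mask[flips] = ~mask[flips]
    return [chrom[0] + rng.normal(scale=0.1), chrom[1] + rng.normal(scale=0.1), mask]

pop = [random_chrom() for _ in range(20)]
for _ in range(10):                                   # a few generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(crossover(parents[i], parents[(i + 1) % 10])) for i in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print("hold-out accuracy:", fitness(best), "instances kept:", int(best[2].sum()))
```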