• Title/Abstract/Keyword: Classification Performance Compare

278 search results

PET-CT 영상 알츠하이머 분류에서 유전 알고리즘 이용한 심층학습 모델 최적화 (Optimization of Deep Learning Model Using Genetic Algorithm in PET-CT Image Alzheimer's Classification)

  • 이상협;강도영;송종관;박장식
    • 한국멀티미디어학회논문지, Vol. 23, No. 9, pp.1129-1138, 2020
  • The performance of a deep convolutional network is generally determined by the target dataset, the network structure, the convolution kernel, the activation function, and the optimization algorithm. In this paper, a genetic algorithm is used to select an appropriate deep learning model and parameters for Alzheimer's classification, and the learning results are compared with those of a preliminary experiment. We compare and analyze the Alzheimer's disease classification performance of VGG-16, GoogLeNet, and ResNet to select an effective network for detecting AD and MCI. The simulation results show that, for accurate classification of the dementia medical images, the best configuration is a ResNet structure, ReLU activation, the Adam optimizer, and a 3-dilated convolution kernel.
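
As an illustration of the kind of search described above, the following is a minimal Python sketch of a genetic algorithm over a discrete space of network, activation, optimizer, and dilation choices; the candidate values and the placeholder fitness function are assumptions for illustration, not the paper's actual setup.

```python
import random

# Discrete search space mirroring the choices discussed in the abstract
# (candidate values here are illustrative, not the paper's exact grid).
SPACE = {
    "network":    ["VGG-16", "GoogLeNet", "ResNet"],
    "activation": ["relu", "tanh", "elu"],
    "optimizer":  ["adam", "sgd", "rmsprop"],
    "dilation":   [1, 2, 3],
}

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def evaluate(genome):
    # Placeholder fitness: in practice, train the configured network on the
    # PET-CT training split and return its validation accuracy.
    return random.random()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(genome, rate=0.1):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in genome.items()}

def genetic_search(pop_size=10, generations=20):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate)

print(genetic_search())
```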

Bagging 방법을 이용한 원전SG 세관 결함패턴 분류성능 향상기법 (Classification Performance Improvement of Steam Generator Tube Defects in Nuclear Power Plant Using Bagging Method)

  • 이준표;조남훈
    • 전기학회논문지, Vol. 58, No. 12, pp.2532-2537, 2009
  • For defect characterization in steam generator tubes in nuclear power plants, artificial neural networks have been extensively used to classify defect types. In this paper, we study the effectiveness of bagging for improving the performance of neural networks in the classification of tube defects. Bagging is a method that combines the outputs of many neural networks trained separately on different training data sets. By varying the number of neurons in the hidden layer, we carry out computer simulations to compare the classification performance of the bagging neural network with that of a single neural network. The experiments show that the bagging neural network outperforms the average performance of a single neural network in most cases.
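
The bagging idea in this abstract (bootstrap-resampled training sets, combined outputs) can be sketched with scikit-learn as below; the synthetic dataset, hidden-layer sizes, and ensemble size are illustrative assumptions, not the paper's eddy-current data or settings.

```python
# Bagging an MLP classifier vs. a single MLP, while varying hidden-layer size.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for hidden in (5, 10, 20):                       # vary the hidden-layer size
    single = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500,
                           random_state=0).fit(X_tr, y_tr)
    bagged = BaggingClassifier(
        MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500),
        n_estimators=10, random_state=0).fit(X_tr, y_tr)   # bootstrap resampling + voting
    print(hidden, single.score(X_te, y_te), bagged.score(X_te, y_te))
```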

다양한 합성곱 신경망 방식을 이용한 폐음 분류 방식의 성능 비교 (Performance comparison of lung sound classification using various convolutional neural networks)

  • 김지연;김형국
    • 한국음향학회지, Vol. 38, No. 5, pp.568-573, 2019
  • In diagnosing lung disease, auscultation is simpler than other diagnostic methods, and lung sounds can be used not only to identify patients with lung disease but also to predict the diseases associated with those sounds. In this paper, we therefore identify patients with lung disease from their lung sounds using various convolutional neural network architectures, classify the lung sounds according to their acoustic characteristics, and compare the classification performance of each network. Lung sound data are first collected with a single-channel recording device from chest regions showing signs of lung disease, and the recorded time-domain signals are converted into spectrogram-type features that are fed to each classification network. For lung sound classification we use a plain convolutional neural network, a parallel-structure network, and a network with residual learning, and compare the lung sound classification performance of these models through experiments.
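
A minimal sketch of the pipeline the abstract describes, under assumed tools (scipy for the spectrogram, PyTorch for the network): a single-channel signal is converted to a spectrogram-type feature and passed through a small plain CNN; the parallel and residual variants compared in the paper would replace the convolutional stack shown here.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 4000                                        # assumed sampling rate
signal = np.random.randn(fs * 5)                 # stand-in for a 5 s lung-sound recording
_, _, spec = spectrogram(signal, fs=fs, nperseg=256)
x = torch.tensor(np.log1p(spec), dtype=torch.float32)[None, None]  # (1, 1, freq, time)

class PlainCNN(nn.Module):
    def __init__(self, n_classes=4):             # e.g. normal/crackle/wheeze/both
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = PlainCNN()(x)
print(logits.shape)                              # torch.Size([1, 4])
```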

Analyzing performance of time series classification using STFT and time series imaging algorithms

  • Sung-Kyu Hong;Sang-Chul Kim
    • 한국컴퓨터정보학회논문지, Vol. 28, No. 4, pp.1-11, 2023
  • This paper analyzes time series classification performance using convolutional neural networks instead of recurrent neural networks. The TSC (Time Series Classification) community has traditional time series imaging algorithms such as GAF (Gramian Angular Field), MTF (Markov Transition Field), and RP (Recurrence Plot). The experiments evaluate the performance of a convolutional neural network while tuning the hyperparameters required by each imaging algorithm. When performance is evaluated on the GunPoint dataset from the UCR archive, the STFT (Short-Time Fourier Transform) algorithm proposed in this paper achieves higher accuracy than the existing algorithms once optimized hyperparameters are found, and has the additional advantage that the size of the feature map image can be adjusted dynamically. GAF also shows high accuracy of 98~99%, but has the drawback that its feature map images are large and their size cannot be adjusted dynamically.
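
A minimal sketch of the STFT imaging step, assuming scipy and a GunPoint-like 1-D series: the nperseg/noverlap hyperparameters control the feature-map size, which is the adjustability the abstract highlights; GAF/MTF/RP images would instead come from a separate library such as pyts.

```python
import numpy as np
from scipy.signal import stft

series = np.sin(np.linspace(0, 20 * np.pi, 150))         # stand-in for one GunPoint sample

for nperseg, noverlap in [(16, 8), (32, 24), (64, 56)]:
    _, _, Z = stft(series, nperseg=nperseg, noverlap=noverlap)
    image = np.abs(Z)                                     # magnitude spectrogram as the CNN input
    print(f"nperseg={nperseg:3d} noverlap={noverlap:3d} -> feature map {image.shape}")
```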

Performance Comparison of Classification Methods with the Combinations of the Imputation and Gene Selection Methods

  • Kim, Dong-Uk;Nam, Jin-Hyun;Hong, Kyung-Ha
    • 응용통계연구, Vol. 24, No. 6, pp.1103-1113, 2011
  • Gene expression data are obtained through many experimental stages, and errors produced during the process may cause missing values. Because of the distinctive 'small n, large p' nature of the data, genes have to be selected before statistical analyses such as classification can be performed. For these reasons, imputation and gene selection are important in microarray data analysis. In the literature, imputation, gene selection and classification analysis have been studied separately, even though in practice they are applied sequentially. From this perspective, we compare the performance of classification methods after imputation and gene selection methods have been applied to microarray data. Numerical simulations are carried out to evaluate the classification methods under various combinations of imputation and gene selection methods.
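
The sequential imputation → gene selection → classification workflow can be sketched as a scikit-learn pipeline; the stand-in 'small n, large p' dataset, the imputers, the selector, and the classifiers below are illustrative assumptions, not the paper's methods.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                           random_state=0)                  # small n, large p
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan                      # inject missing values

imputers = {"mean": SimpleImputer(), "knn": KNNImputer(n_neighbors=5)}
classifiers = {"logistic": LogisticRegression(max_iter=1000), "svm": SVC()}

for iname, imputer in imputers.items():
    for cname, clf in classifiers.items():
        # imputation -> gene (feature) selection -> classification, evaluated by CV
        pipe = make_pipeline(imputer, SelectKBest(f_classif, k=30), clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{iname:5s} + top-30 genes + {cname:8s}: {acc:.3f}")
```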

Classification of High Dimensionality Data through Feature Selection Using Markov Blanket

  • Lee, Junghye;Jun, Chi-Hyuck
    • Industrial Engineering and Management Systems, Vol. 14, No. 2, pp.210-219, 2015
  • A classification task requires exponentially more computation time and observations as the variable dimensionality increases, so reducing the dimensionality of the data is essential when the number of observations is limited. Often, dimensionality reduction or feature selection leads to better classification performance than using all of the features. In this paper, we study the possibility of using Markov blanket discovery algorithms as a feature selection method. The Markov blanket of a target variable is the minimal variable set that explains the target variable, on the basis of the conditional independence of all the variables connected in a Bayesian network. We apply several Markov blanket discovery algorithms to high-dimensional categorical and continuous data sets and compare their classification performance with that of other feature selection methods using well-known classifiers.
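
A rough sketch of what a Markov blanket search looks like, assuming discrete data and a simple conditional-mutual-information heuristic in the spirit of IAMB; the threshold and the toy data are assumptions, and real implementations rely on proper conditional-independence tests.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mutual_info_score

def cmi(df, x, y, z):
    """Empirical conditional mutual information I(x; y | z), in nats."""
    if not z:
        return mutual_info_score(df[x], df[y])
    total, value = len(df), 0.0
    for _, g in df.groupby(list(z)):             # average MI within each stratum of z
        value += len(g) / total * mutual_info_score(g[x], g[y])
    return value

def iamb(df, target, threshold=0.02):
    mb, candidates = set(), set(df.columns) - {target}
    while True:                                   # growing phase
        scores = {v: cmi(df, v, target, mb) for v in candidates - mb}
        if not scores:
            break
        best = max(scores, key=scores.get)
        if scores[best] <= threshold:
            break
        mb.add(best)
    for v in list(mb):                            # shrinking phase
        if cmi(df, v, target, mb - {v}) <= threshold:
            mb.discard(v)
    return mb

# Toy example: y depends on x0 and x1 only (logical OR), x2..x4 are noise.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 2, size=(2000, 5)),
                  columns=[f"x{i}" for i in range(5)])
df["y"] = df["x0"] | df["x1"]
print(iamb(df, "y"))                              # expected: {'x0', 'x1'}
```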

영상분류문제를 위한 역전파 신경망과 Support Vector Machines의 비교 연구 (A Comparison Study on Back-Propagation Neural Network and Support Vector Machines for the Image Classification Problems)

  • 서광규
    • 한국산학기술학회논문지, Vol. 9, No. 6, pp.1889-1893, 2008
  • This paper examines the classification performance obtained by applying support vector machines (SVMs) to image classification problems. Colour, texture, and shape feature vectors are extracted from natural images, and image classification accuracy is compared between a back-propagation neural network and an SVM-based method using each individual feature vector and their combination. The experimental results show that, among the individual feature vectors, the colour features yield the best classification, and that the combined feature vector outperforms any individual one. In the comparison between algorithms, the SVMs outperform the back-propagation neural network in both accuracy and generalization performance.
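
A minimal sketch of the comparison described above, with synthetic stand-ins for the colour, texture, and shape descriptors: a back-propagation MLP and an RBF-kernel SVM are evaluated on each feature set and on their concatenation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, y = 300, rng.integers(0, 4, 300)               # 4 image classes
features = {                                       # stand-ins for extracted descriptors
    "colour":  rng.normal(y[:, None], 1.0, (n, 32)),
    "texture": rng.normal(y[:, None], 2.0, (n, 16)),
    "shape":   rng.normal(y[:, None], 3.0, (n, 8)),
}
features["combined"] = np.hstack(list(features.values()))

models = {
    "BPNN": make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)),
    "SVM":  make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for fname, X in features.items():
    for mname, model in models.items():
        print(f"{fname:8s} {mname}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```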

전이학습 방법에 따른 컨벌루션 신경망의 영상 분류 성능 비교 (Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning)

  • 박성욱;김도연
    • 한국멀티미디어학회논문지, Vol. 21, No. 12, pp.1387-1395, 2018
  • The Convolutional Neural Network (CNN), a core deep learning algorithm, shows better performance than other machine learning algorithms. However, without sufficient data a CNN cannot achieve satisfactory performance even if the classifier itself is excellent, and in this situation transfer learning has proven highly effective. In this paper, we apply two transfer learning methods (freezing and retraining) to three CNN models (ResNet-50, Inception-V3, DenseNet-121) and compare and analyze how the classification performance of the CNN changes with the method. In a statistical significance test using various evaluation indicators, ResNet-50, Inception-V3, and DenseNet-121 differed by factors of 1.18, 1.09, and 1.17, respectively. Based on this, we conclude that the retraining method may be more effective than the freezing method for transfer learning in image classification problems.
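
The two strategies compared in the abstract can be sketched with a recent torchvision (assumed here): "freezing" trains only a new classifier head on top of fixed pretrained features, while "retraining" fine-tunes the whole backbone.

```python
import torch.nn as nn
from torchvision import models

def build(num_classes, strategy="retraining"):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if strategy == "freezing":
        for p in model.parameters():
            p.requires_grad = False               # keep the pretrained features fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new head is always trainable
    return model

frozen    = build(num_classes=10, strategy="freezing")
retrained = build(num_classes=10, strategy="retraining")
# An optimizer would then be given only the trainable parameters, e.g.
# torch.optim.Adam(p for p in frozen.parameters() if p.requires_grad)
```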

분류 알고리즘의 효율성에 대한 경험적 비교연구 (The empirical comparison of efficiency in classification algorithms)

  • 전홍석;이주영
    • 대한안전경영과학회지, Vol. 2, No. 3, pp.171-184, 2000
  • We may be given a set of observations together with their classes or clusters. The aim of this article is to provide an up-to-date review of different approaches to classification and to compare their performance on a wide range of challenging data sets. In this paper, machine learning classifiers based on CART, C4.5, CAL5, FACT, and QUEST, and statistical discriminant analysis, are compared on various datasets in terms of classification error rate.
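
A minimal sketch of this kind of comparison using scikit-learn stand-ins (a CART-style decision tree and linear/quadratic discriminant analysis) on a few built-in datasets; the exact CART, C4.5, CAL5, FACT, and QUEST implementations from the paper are not reproduced here.

```python
from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

datasets = {"iris": load_iris(), "wine": load_wine(), "breast_cancer": load_breast_cancer()}
classifiers = {
    "CART-style tree": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
for dname, data in datasets.items():
    for cname, clf in classifiers.items():
        # 10-fold cross-validated error rate, as a stand-in comparison metric
        error = 1 - cross_val_score(clf, data.data, data.target, cv=10).mean()
        print(f"{dname:14s} {cname:15s} error rate: {error:.3f}")
```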


Variations of AlexNet and GoogLeNet to Improve Korean Character Recognition Performance

  • Lee, Sang-Geol;Sung, Yunsick;Kim, Yeon-Gyu;Cha, Eui-Young
    • Journal of Information Processing Systems, Vol. 14, No. 1, pp.205-217, 2018
  • Deep learning using convolutional neural networks (CNNs) is being studied in various fields of image recognition, and these studies show excellent performance. In this paper, we compare the performance of two CNN architectures, KCR-AlexNet and KCR-GoogLeNet. The experimental data used in this paper are obtained from PHD08, a large-scale Korean character database. It has 2,187 samples of each Korean character and 2,350 Korean character classes, for a total of 5,139,450 data samples. In the training results, KCR-AlexNet showed a top-1 test accuracy of over 98% and KCR-GoogLeNet showed a top-1 test accuracy of over 99% after the final training iteration. We built an additional Korean character dataset with fonts that are not in PHD08 to compare the classification success rate with commercial optical character recognition (OCR) programs and to ensure the objectivity of the experiment. While the commercial OCR programs showed classification success rates of 66.95% to 83.16%, KCR-AlexNet and KCR-GoogLeNet showed average classification success rates of 90.12% and 89.14%, respectively, both higher than the commercial OCR programs. Considering the time factor, KCR-AlexNet was faster to train on PHD08, while KCR-GoogLeNet classified characters faster.
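
A minimal sketch, not KCR-AlexNet itself: an AlexNet-style convolutional classifier in PyTorch with a 2,350-way output layer matching the number of Korean character classes in PHD08; the input size and layer widths are illustrative.

```python
import torch
import torch.nn as nn

class TinyKoreanCharNet(nn.Module):
    def __init__(self, num_classes=2350):          # one class per Korean character in PHD08
        super().__init__()
        self.features = nn.Sequential(              # AlexNet-like conv/pool stack
            nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 4 * 4, 1024), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(1024, num_classes))

    def forward(self, x):                           # x: (N, 1, H, W) grayscale glyph images
        return self.classifier(self.features(x))

logits = TinyKoreanCharNet()(torch.randn(2, 1, 64, 64))
print(logits.shape)                                 # torch.Size([2, 2350])
```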