• Title/Summary/Keyword: Noise Removal Algorithm (잡음제거알고리즘)

Search Results: 846

P Wave Detection Algorithm through Adaptive Threshold and QRS Peak Variability (적응형 문턱치와 QRS피크 변화에 따른 P파 검출 알고리즘)

  • Cho, Ik-sung;Kim, Joo-Man;Lee, Wan-Jik;Kwon, Hyeog-soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.8 / pp.1587-1595 / 2016
  • The P wave is a cardiac parameter that represents the heart's electrical and physiological characteristics, and it is very important for diagnosing atrial arrhythmia. However, it is difficult to detect because of its small amplitude relative to the R wave and its varied morphology. Several methods for detecting the P wave have been proposed, such as frequency analysis and non-linear approaches, but in cases of conduction abnormality such as AV block or atrial arrhythmia their detection accuracy remains low. We propose a P wave detection algorithm based on an adaptive threshold and QRS peak variability. For this purpose, we detect the Q, R, and S waves from a noise-free ECG signal obtained through a preprocessing step, classify three P wave patterns according to peak variability, and apply an adaptive search window and threshold. The performance of P wave detection is evaluated on the 48 records of the MIT-BIH arrhythmia database, yielding an average detection rate of 92.60%.
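
A minimal sketch of the windowed, adaptive-threshold search this abstract describes, assuming the R peaks have already been located: each candidate P peak is sought in a window preceding the R peak, with the window scaled to the local RR interval and the threshold scaled to the local signal amplitude. All function names, window sizes, and the 0.5 threshold factor are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_p_waves(ecg, r_peaks, fs=360, win_frac=(0.30, 0.08), thr_factor=0.5):
    """Search for a P peak in an adaptive window preceding each R peak.

    ecg        : 1-D preprocessed (noise-free) ECG signal
    r_peaks    : sample indices of detected R peaks
    fs         : sampling rate in Hz (360 Hz for MIT-BIH records)
    win_frac   : window start/end offsets before R, as fractions of a second
    thr_factor : fraction of the local deflection used as adaptive threshold
    """
    p_peaks = []
    for i, r in enumerate(r_peaks):
        # Adapt the search window to the preceding RR interval when available,
        # so short cycles (e.g. during arrhythmia) get a shorter window.
        rr = r - r_peaks[i - 1] if i > 0 else int(0.8 * fs)
        start = max(0, r - min(int(win_frac[0] * fs), rr // 2))
        end = max(start + 1, r - int(win_frac[1] * fs))
        segment = ecg[start:end]
        if segment.size < 3:
            continue
        # Adaptive threshold: a fraction of the largest local deflection.
        thr = thr_factor * (segment.max() - segment.mean())
        peaks, _ = find_peaks(segment, height=segment.mean() + thr)
        if peaks.size:
            # Keep the most prominent candidate as the P peak.
            p_peaks.append(start + peaks[np.argmax(segment[peaks])])
    return np.array(p_peaks)
```

Scaling the window to half the preceding RR interval is one plausible way to keep the search region valid during fast rhythms; the paper's actual adaptation rule may differ.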

PVC Classification based on QRS Pattern using QS Interval and R Wave Amplitude (QRS 패턴에 의한 QS 간격과 R파의 진폭을 이용한 조기심실수축 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.4 / pp.825-832 / 2014
  • Previous works on arrhythmia detection have mostly used nonlinear methods such as artificial neural networks, fuzzy theory, and support vector machines to increase classification accuracy. Most of these methods require accurate detection of the P-QRS-T points and incur high computational cost and long processing times, while the few low-complexity methods generally suffer from low sensitivity. It is also difficult to detect PVC (premature ventricular contraction) accurately because the QRS pattern varies between individuals. An efficient algorithm is therefore needed that classifies PVC from the QRS pattern in real time and reduces computational cost by extracting a minimal feature set. In this paper, we propose PVC classification based on the QRS pattern using the QS interval and R wave amplitude. For this purpose, we detect the R wave, RR interval, and QRS pattern from a noise-free ECG signal obtained through a preprocessing step, and then classify PVC in real time from the QS interval and R wave amplitude. The performance of R wave detection and PVC classification is evaluated on 9 records of the MIT-BIH arrhythmia database, each containing more than 30 PVCs. The results show an average of 99.02% for R wave detection and 93.72% for PVC classification.
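
As a rough illustration of a rule-based PVC classifier of this kind, the sketch below flags a beat as PVC when its QS interval is abnormally wide and its R amplitude deviates strongly from a running baseline. The thresholds and the baseline-update scheme are assumptions for illustration; the paper's actual decision rules are not given in the abstract.

```python
import numpy as np

def classify_pvc(qs_intervals, r_amplitudes, fs=360,
                 width_thr=0.12, amp_dev_thr=0.4):
    """Label each beat as PVC (True) or non-PVC (False).

    qs_intervals : per-beat QS width in samples
    r_amplitudes : per-beat R wave amplitude
    fs           : sampling rate in Hz
    width_thr    : QS width (seconds) above which the QRS counts as wide
    amp_dev_thr  : relative R-amplitude deviation from the running mean
    """
    labels = []
    amp_baseline = r_amplitudes[0]
    for qs, amp in zip(qs_intervals, r_amplitudes):
        wide_qrs = (qs / fs) > width_thr
        abnormal_amp = abs(amp - amp_baseline) / abs(amp_baseline) > amp_dev_thr
        labels.append(wide_qrs and abnormal_amp)
        # Update the baseline only with beats judged normal,
        # so ectopic beats do not drag the reference along.
        if not labels[-1]:
            amp_baseline = 0.9 * amp_baseline + 0.1 * amp
    return np.array(labels)
```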

Arrhythmia Classification based on Binary Coding using QRS Feature Variability (QRS 특징점 변화에 따른 바이너리 코딩 기반의 부정맥 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.8 / pp.1947-1954 / 2013
  • Previous works on arrhythmia detection have mostly used nonlinear methods such as artificial neural networks, fuzzy theory, and support vector machines to increase classification accuracy. Most of these methods require accurate detection of the P-QRS-T points and incur high computational cost and long processing times, and the P and T waves are hard to detect reliably because they vary between individuals. An efficient algorithm is therefore needed that classifies different arrhythmias in real time and reduces computational cost by extracting a minimal feature set. In this paper, we propose arrhythmia detection based on binary coding of QRS feature variability. For this purpose, we detect the R wave, RR interval, and QRS width from a noise-free ECG signal obtained through a preprocessing step, and classify arrhythmias in real time by converting the thresholded variability of each feature into a binary code. Classification of PVC, PAC, Normal, BBB, and Paced beats is evaluated on 39 records of the MIT-BIH arrhythmia database, with average accuracies of 97.18%, 94.14%, 99.83%, 92.77%, and 97.48%, respectively.
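
A minimal sketch of the binary-coding idea: each feature's deviation is thresholded to a bit, the bits are packed into a code word, and the code word is looked up in a table of beat classes. The features, thresholds, and code-to-class mapping below are invented for illustration; the abstract does not specify them.

```python
def encode_beat(rr_ratio, qrs_width, r_amp_ratio,
                rr_thr=(0.85, 1.15), width_thr=0.12, amp_thr=1.4):
    """Convert thresholded feature variability into a 4-bit code.

    rr_ratio    : current RR interval / running-average RR interval
    qrs_width   : QRS width in seconds
    r_amp_ratio : current R amplitude / running-average R amplitude
    """
    bits = (
        ((rr_ratio < rr_thr[0]) << 0) |   # premature beat (short RR)
        ((rr_ratio > rr_thr[1]) << 1) |   # compensatory pause (long RR)
        ((qrs_width > width_thr) << 2) |  # wide QRS complex
        ((r_amp_ratio > amp_thr) << 3)    # abnormally tall R wave
    )
    return bits

# Hypothetical lookup table from binary codes to beat classes.
CODE_TO_CLASS = {
    0b0000: "Normal",
    0b0101: "PVC",     # premature + wide QRS
    0b0001: "PAC",     # premature, narrow QRS
    0b0100: "BBB",     # wide QRS at normal rate
    0b1100: "Paced",   # wide + tall R wave (illustrative)
}

beat_class = CODE_TO_CLASS.get(encode_beat(0.7, 0.15, 1.1), "Unknown")  # -> "PVC"
```

The appeal of such an encoding is that classification reduces to a few comparisons and a table lookup, which matches the paper's stated goal of real-time operation at low computational cost.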

Health Risk Management using Feature Extraction and Cluster Analysis considering Time Flow (시간흐름을 고려한 특징 추출과 군집 분석을 이용한 헬스 리스크 관리)

  • Kang, Ji-Soo;Chung, Kyungyong;Jung, Hoill
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.99-104 / 2021
  • In this paper, we propose health risk management using feature extraction and cluster analysis that take the flow of time into account. The proposed method proceeds in three steps. The first is preprocessing and feature extraction: the user's lifelog is collected with a wearable device; incomplete data, errors, noise, and contradictory data are removed; and missing values are imputed. For feature extraction, important variables are selected through principal component analysis, and related data are grouped using correlation coefficients and covariance. Second, to analyze the features extracted from the lifelog, dynamic clustering is performed with the K-means algorithm while considering the passage of time, and new data are assigned to clusters through a similarity distance measure based on the increment of the sum of squared errors. The third step extracts information about each cluster over time. A health decision-making system built on these feature clusters can then manage risk through factors such as physical characteristics, lifestyle habits, disease status, and the risk and predictability of health care events. The performance evaluation compares the proposed method with fuzzy and kernel-based clustering using Precision, Recall, and F-measure, and the proposed method scores best. It can therefore accurately predict and appropriately manage a user's potential health risks by exploiting similarity with comparable patients.
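
The sketch below illustrates one way to realize the described assignment rule: a new point joins the cluster whose sum of squared errors grows least when the point is added, and that cluster's centroid is updated incrementally as data stream in over time. This is a generic reconstruction under stated assumptions; the paper's exact distance measure and update schedule are not given in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_by_sse_increment(point, centroids, counts):
    """Assign a new point to the cluster whose SSE increases least.

    For a cluster with n points and centroid c, adding point x raises
    that cluster's SSE by n/(n+1) * ||x - c||^2 (the standard
    incremental k-means identity), so we minimize that quantity.
    """
    diffs = centroids - point
    increments = (counts / (counts + 1)) * np.einsum("ij,ij->i", diffs, diffs)
    k = int(np.argmin(increments))
    # Incrementally move the winning centroid toward the new point.
    counts[k] += 1
    centroids[k] += (point - centroids[k]) / counts[k]
    return k

# Initial clustering of an early time window of lifelog features (toy data).
rng = np.random.default_rng(0)
X0 = rng.normal(size=(200, 4))
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X0)
centroids = km.cluster_centers_.copy()
counts = np.bincount(km.labels_).astype(float)

# Stream later measurements in, clustering them as they arrive.
for x in rng.normal(size=(50, 4)):
    assign_by_sse_increment(x, centroids, counts)
```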

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.457-464 / 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data, and CNN-based models in particular now reach strong performance directly from raw source data. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images produced in different environments may carry the same information, yet express it through different features depending on the capture conditions. This means that not only environment-specific information but also shared information is represented by differing features, which can degrade the performance of an image analysis model. In this paper, we therefore propose a method to improve the performance of an image color constancy model using adversarial learning that consumes image data from heterogeneous environments simultaneously. Specifically, the proposed methodology couples a 'Domain Discriminator', which predicts the environment in which an image was taken, with an 'Illumination Estimator', which predicts the lighting value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology outperformed existing methods in terms of Angular Error.
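
The abstract describes an adversarial coupling between an 'Illumination Estimator' and a 'Domain Discriminator'. A common way to implement such a coupling is a gradient reversal layer, sketched below in PyTorch; the network sizes, the use of gradient reversal, and all hyperparameters are assumptions, since the architectural details are not in the abstract.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, so the feature extractor learns to fool the domain discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class ColorConstancyNet(nn.Module):
    def __init__(self, n_domains, lam=0.1):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(  # shared feature extractor (toy size)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.illum_head = nn.Linear(64, 3)            # predicts RGB illuminant
        self.domain_head = nn.Linear(64, n_domains)   # predicts capture environment

    def forward(self, x):
        h = self.features(x)
        illum = self.illum_head(h)
        # Reversed gradients push the shared features toward domain invariance.
        domain = self.domain_head(GradientReversal.apply(h, self.lam))
        return illum, domain

def angular_error(pred, target):
    """Angle in degrees between predicted and true illuminant vectors."""
    cos = torch.nn.functional.cosine_similarity(pred, target, dim=1)
    return torch.rad2deg(torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7)))
```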

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve the problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection is a form of document classification, so document classification techniques have been widely used in this research, whereas document summarization techniques have received little attention in the field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. This makes it worth studying the integration of document summarization technology in the domestic (Korean) news data environment. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a detection model on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) showed little difference in performance between the two settings; for DT (Decision Tree), the full-text-based model performed somewhat better; and for LR (Logistic Regression), the summary-based model performed best, although the difference from the full-text-based model was not statistically significant. This suggests that extractive summarization preserves at least the core information of a fake news item and that a performance gain is possible for the LR-based model. The study's contribution is an experimental application of extractive summarization to fake news detection across several machine learning algorithms; its main limitations are the relatively small amount of data and the lack of comparison across summarization techniques, so an in-depth analysis applying various techniques to a larger data volume would be helpful in the future.
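
To make the experimental setup concrete, the sketch below pairs a simple TF-IDF-based extractive summarizer with a logistic regression detector, mirroring the summary-versus-full-text comparison. The summarizer and all parameters are illustrative stand-ins; the paper's actual extractive summarization method is not specified in the abstract.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def extractive_summary(text, n_sentences=3):
    """Keep the n sentences whose TF-IDF vectors best match the document."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) <= n_sentences:
        return text
    vec = TfidfVectorizer()
    S = vec.fit_transform(sentences)                  # sentence vectors
    doc = np.asarray(S.mean(axis=0))                  # document centroid
    scores = (S @ doc.T).ravel()                      # similarity to centroid
    keep = sorted(np.argsort(scores)[-n_sentences:])  # keep original order
    return ". ".join(sentences[i] for i in keep)

def evaluate(texts, labels, summarize=False):
    """Cross-validated accuracy of an LR detector on full or summarized text."""
    docs = [extractive_summary(t) for t in texts] if summarize else list(texts)
    X = TfidfVectorizer(max_features=5000).fit_transform(docs)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()
```

Running `evaluate` twice, once with `summarize=False` and once with `summarize=True`, reproduces the paper's full-text-versus-summary comparison for the LR model under these assumed components.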