• Title/Summary/Keyword: Recall and Precision

Search Results: 724

Arrhythmia Classification using GAN-based Over-Sampling Method and Combination Model of CNN-BLSTM (GAN 오버샘플링 기법과 CNN-BLSTM 결합 모델을 이용한 부정맥 분류)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1490-1499 / 2022
  • Arrhythmia is a condition in which the heart beats with an irregular rhythm or abnormal rate; early diagnosis and management are very important because it can cause stroke, cardiac arrest, or even death. In this paper, we propose arrhythmia classification using a hybrid CNN-BLSTM model. For this purpose, QRS features are detected from the noise-removed signal through pre-processing, and a single-beat segment is extracted. The GAN oversampling technique is applied to solve the data imbalance problem. CNN layers are used to extract arrhythmia patterns precisely, and their outputs serve as the input to the BLSTM. The weights were learned through deep learning, and the learned model was evaluated on validation data. To evaluate the performance of the proposed method, classification accuracy, precision, recall, and F1-score were compared using the MIT-BIH arrhythmia database. The achieved scores were 99.30%, 98.70%, 97.50%, and 98.06% for accuracy, precision, recall, and F1-score, respectively.
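The accuracy, precision, recall, and F1 figures quoted in these abstracts follow the standard confusion-matrix definitions. A minimal sketch in plain Python; the counts below are made-up illustration values, not from any of the papers:

```python
# Standard binary-classification metrics from confusion-matrix counts.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts, for illustration only.
p, r, f = precision_recall_f1(tp=95, fp=5, fn=10)
print(round(p, 3), round(r, 3), round(f, 3))
```

Note that F1 is the harmonic mean, so it always sits at or below the arithmetic mean of precision and recall, and is pulled toward the smaller of the two.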

Data Efficient Image Classification for Retinal Disease Diagnosis (데이터 효율적 이미지 분류를 통한 안질환 진단)

  • Honggu Kang;Huigyu Yang;Moonseong Kim;Hyunseung Choo
    • Journal of Internet Computing and Services / v.25 no.3 / pp.19-25 / 2024
  • The worldwide aging population trend is causing an increase in the incidence of major retinal diseases that can lead to blindness, including glaucoma, cataract, and macular degeneration. In the field of ophthalmology, there is a focused interest in diagnosing diseases that are difficult to prevent in order to reduce the rate of blindness. This study proposes a deep learning approach to accurately diagnose ocular diseases in fundus photographs using less data than traditional methods. For this, Convolutional Neural Network (CNN) models capable of effective learning with limited data were selected to classify Conventional Fundus Images (CFI) from various ocular disease patients. The chosen CNN models demonstrated exceptional performance, achieving high Accuracy, Precision, Recall, and F1-score values. This approach reduces manual analysis by ophthalmologists, shortens consultation times, and provides consistent diagnostic results, making it an efficient and accurate diagnostic tool in the medical field.

Research on Pairwise Attention Reinforcement Model Using Feature Matching (특징 매칭을 이용한 페어와이즈 어텐션 강화 모델에 대한 연구)

  • Joon-Shik Lim;Yeong-Seok Ju
    • Journal of IKEEE / v.28 no.3 / pp.390-396 / 2024
  • Vision Transformer (ViT) learns relationships between patches, but it may overlook important features such as color, texture, and boundaries, which can result in performance limitations in fields like medical imaging or facial recognition. To address this issue, this study proposes the Pairwise Attention Reinforcement (PAR) model. The PAR model takes both the training image and a reference image as input into the encoder, calculates the similarity between the two images, and matches the attention score maps of images with high similarity, reinforcing the matching areas of the training image. This process emphasizes important features between images and allows even subtle differences to be distinguished. In experiments using clock-drawing test data, the PAR model achieved a Precision of 0.9516, Recall of 0.8883, F1-Score of 0.9166, and an Accuracy of 92.93%. The proposed model showed a 12% performance improvement compared to API-Net, which uses the pairwise attention approach, and demonstrated a 2% performance improvement over the ViT model.

A New Shot Change Detection Scheme Using Color Histogram and Macroblock Information of MPEG Video Stream (MPEG 비디오 스트림의 칼라 히스토그램 정보와 매크로블록 정보를 이용한 새로운 샷 경계 검출 방법)

  • 정진국;이화순;낭종호;김경수;하명환;정병희
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.418-420 / 2001
  • With the recent rapid growth in the use of digital video data, techniques that detect shots more accurately are in demand. Shot detection methods based on video information can be broadly grouped into those that use DCT (discrete cosine transform) coefficients and those that use motion-compensation information. The former can detect gradual transitions but suffers from a lower overall detection rate, while the latter achieves a higher overall detection rate but cannot detect gradual transitions. In this paper, we examine the characteristics of these two approaches through experiments and then propose a new shot boundary detection method that combines them. Since the overall goal is to raise the detection rate, the method takes the macroblock-type approach as its basis and adds a histogram-based method to raise precision. In the histogram-based part, instead of simply comparing adjacent frames as in existing methods, the difference of the differences between frames is used to improve performance. Experiments with the proposed algorithm show an average recall of 0.96 and an average precision of 0.96.
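The histogram side of such a detector can be illustrated with a toy sketch in Python. The 4-bin frame "histograms", the ratio threshold, and the local-jump rule below are illustrative assumptions, not the paper's actual parameters:

```python
# Toy shot-boundary detection: flag a cut where the frame-to-frame histogram
# distance jumps sharply relative to its neighbours (a second-order
# "difference of differences" criterion).
def hist_dist(h1, h2):
    # L1 distance between two already-binned color histograms
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(histograms, ratio=3.0):
    dists = [hist_dist(histograms[i], histograms[i + 1])
             for i in range(len(histograms) - 1)]
    cuts = []
    for i in range(1, len(dists) - 1):
        neighbours = (dists[i - 1] + dists[i + 1]) / 2 + 1e-9
        if dists[i] > ratio * neighbours:   # sharp local jump => cut
            cuts.append(i + 1)              # boundary falls before frame i+1
    return cuts

# Hypothetical 4-bin histograms: an abrupt cut between frames 2 and 3.
frames = [[8, 1, 1, 0], [8, 1, 1, 0], [7, 2, 1, 0], [0, 1, 2, 7], [0, 1, 2, 7]]
print(detect_cuts(frames))  # → [3]
```

Comparing each distance to its neighbours, rather than to a fixed threshold, is what lets the detector ignore gradual drift while still firing on abrupt changes.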


Asphalt Concrete Pavement Surface Crack Detection using Convolutional Neural Network (합성곱 신경망을 이용한 아스팔트 콘크리트 도로포장 표면균열 검출)

  • Choi, Yoon-Soo;Kim, Jong-Ho;Cho, Hyun-Chul;Lee, Chang-Joon
    • Journal of the Korea institute for structural maintenance and inspection / v.23 no.6 / pp.38-44 / 2019
  • A Convolutional Neural Network (CNN) model was utilized to detect surface cracks in asphalt concrete pavements. The CNN used for this study consists of five layers with 3×3 convolution filters and 2×2 pooling kernels. Pavement surface crack images collected by automated road surveying equipment were used for the training and testing of the CNN. The performance of the CNN was evaluated using the accuracy, precision, recall, missing rate, and over rate of surface crack detection. The CNN trained with the largest amount of data showed more than 96.6% accuracy, precision, and recall, as well as less than 3.4% missing rate and over rate.
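In crack-detection work, "missing rate" and "over rate" are commonly the complements of recall and precision, which would explain the paired 96.6% / 3.4% figures above. A sketch under that assumed definition (the paper's exact formulas may differ, and the counts are hypothetical):

```python
# Missing rate and over(-detection) rate as commonly defined in crack
# detection: the complements of recall and precision. Assumed definitions;
# counts below are hypothetical illustration values.
def detection_rates(tp, fp, fn):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    missing_rate = fn / (tp + fn)  # cracks present but not detected = 1 - recall
    over_rate = fp / (tp + fp)     # detections that are not cracks = 1 - precision
    return recall, precision, missing_rate, over_rate

r, p, miss, over = detection_rates(tp=966, fp=30, fn=34)
print(r, miss)  # the two rates sum to 1 by construction
```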

An Improved Combined Content-similarity Approach for Optimizing Web Query Disambiguation

  • Kamal, Shahid;Ibrahim, Roliana;Ghani, Imran
    • Journal of Internet Computing and Services / v.16 no.6 / pp.79-88 / 2015
  • Web search engines are exposed to uncertainty when ambiguous queries are entered to retrieve accurate results. Ambiguous queries constitute a significant fraction of search instances and pose real challenges to web search engines. Moreover, web search has drawn researchers' interest in handling search by considering context from a location perspective. Our proposed disambiguation approach is designed to improve user experience by combining location relevance with document relevance. The aim is that providing the user with a comprehensive location perspective of a topic is more informative than retrieving a result that contains only temporal or contextual information. The capacity to use this information in a location-aware manner is, from a user perspective, potentially useful for several tasks, including user query understanding and location-based clustering. To carry out the approach, we developed a Java-based prototype that derives contextual information from web results for queries drawn from well-known datasets. Among those results, queries are further classified in order to search in a broad way. After results are provided to users and selections are made, feedback is recorded implicitly to improve web search based on contextual information. The experimental results demonstrate the outstanding performance of our approach: precision 75%, accuracy 73%, recall 81%, and F-measure 78% compared with a generic temporal evaluation approach, and precision 86%, accuracy 71%, recall 67%, and F-measure 75% compared with a web document clustering approach.
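The F-measures reported above are consistent with the balanced F1, i.e. the harmonic mean of the stated precision and recall, which can be checked in two lines:

```python
# F-measure (balanced F1) is the harmonic mean of precision and recall.
def f_measure(precision, recall):
    return 2 * precision * recall / (precision + recall)

# The two precision/recall pairs reported in the abstract round to the
# reported F-measures of 78% and 75%.
print(round(f_measure(0.75, 0.81), 2))  # → 0.78
print(round(f_measure(0.86, 0.67), 2))  # → 0.75
```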

Clustering and Pattern Analysis for Building Semantic Ontologies in RESTful Web Services (RESTful 웹 서비스에서 시맨틱 온톨로지를 구축하기 위한 클러스터링 및 패턴 분석 기법)

  • Lee, Yong-Ju
    • Journal of Internet Computing and Services / v.12 no.4 / pp.119-133 / 2011
  • With the advent of Web 2.0, the use of RESTful web services is expected to overtake that of traditional SOAP-based web services. Recently, the growing number of RESTful web services available on the web has raised the challenging issue of how to locate desired web services. However, the existing keyword searching method is insufficient because of its poor recall and poor precision. In this paper, we propose a novel method for building semantic ontologies that employs both a clustering technique based on association rules and a semantic analysis technique based on patterns. With this method, we can generate ontologies automatically, reduce the burden of semantic annotation, and support more efficient web service search. We ran our experiments on a subset of 168 RESTful web services downloaded from the ProgrammableWeb site. The experimental results show that our method achieves up to a 35% improvement in recall and up to an 18% improvement in precision compared to the existing keyword searching method.

The Design and Implementation of a Content-based Image Retrieval System using the Texture Pattern and Slope Components of Contour Points (텍스쳐패턴과 윤곽점 기울기 성분을 이용한 내용기반 화상 검색시스템의 설계 및 구현)

  • Choe, Hyeon-Seop;Kim, Cheol-Won;Kim, Seong-Dong;Choe, Gi-Ho
    • The Transactions of the Korea Information Processing Society / v.4 no.1 / pp.54-66 / 1997
  • Efficient retrieval of image data is an important research issue in multimedia databases. This paper proposes a new approach to content-based image retrieval that allows queries to be composed of local texture patterns and the slope components of contour points. The texture patterns, extracted from the source image using the gray-level co-occurrence matrix, and the slope components of contour points, extracted from the binary image, are converted into an internal feature representation of reduced dimensionality that preserves perceptual similarity, and those features can be used to create efficient indexing structures for content-based image retrieval. Experimental results are presented to illustrate the usefulness of this approach, which achieves 82% precision, 87% recall, and an average rank of 3.3 in content-based image data retrieval.
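The gray-level co-occurrence matrix named above counts how often pairs of gray levels occur at a fixed pixel offset; it can be sketched in a few lines of Python. The tiny 2-level image and the horizontal (0, 1) offset are illustrative assumptions:

```python
# Gray-level co-occurrence matrix (GLCM): entry [i][j] counts how often a
# pixel with level i has a neighbour with level j at offset (dr, dc).
def glcm(image, levels, dr=0, dc=1):
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

# Tiny 2-level (binary) image, horizontal offset (0, 1):
img = [[0, 0, 1],
       [0, 1, 1]]
print(glcm(img, levels=2))  # → [[1, 2], [0, 1]]
```

Texture descriptors such as contrast or homogeneity are then computed as statistics over this matrix rather than over raw pixels.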


A Video Stream Retrieval System based on Trend Vectors (경향 벡터 기반 비디오 스트림 검색 시스템)

  • Lee, Seok-Lyong;Chun, Seok-Ju
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.1017-1028 / 2007
  • In this paper we propose an effective method to represent, store, and retrieve video streams efficiently from a video database. We extract features from each video frame, normalize the feature values, and represent them as values in the range [0,1]. In this way a video frame with f features can be represented by a point in the f-dimensional space $[0,1]^f$, and the video stream by a trail of points in that multidimensional space. The video stream is partitioned into segments based on camera shots, each of which is represented by a trend vector that encapsulates the moving trend of the points in the segment. Video stream queries are processed by comparing those trend vectors. We evaluated our method on a collection of video streams composed of sports, news, documentary, and educational videos. Experimental results show that our trend vector representation reduces reconstruction error markedly (by 37% on average) and that retrieval using trend vectors achieves high precision (2.1 times higher on average) while maintaining response time and recall similar to existing methods.
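One simple way to encode a "moving trend" of points in a segment is the mean frame-to-frame displacement, compared across segments with cosine similarity. This is an assumed encoding for illustration; the paper's exact trend-vector definition may differ:

```python
import math

# A segment is a list of f-dimensional points in [0,1]^f, one per frame.
# Sketch of a trend vector: the mean frame-to-frame displacement, capturing
# the direction the points drift in. (Assumed encoding for illustration.)
def trend_vector(segment):
    f = len(segment[0])
    n = len(segment) - 1
    return [sum(segment[i + 1][d] - segment[i][d] for i in range(n)) / n
            for d in range(f)]

def cosine_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

seg_a = [[0.1, 0.2], [0.2, 0.3], [0.3, 0.4]]   # drifting up and to the right
seg_b = [[0.5, 0.5], [0.6, 0.6], [0.7, 0.7]]   # same direction, elsewhere
print(round(cosine_sim(trend_vector(seg_a), trend_vector(seg_b)), 3))
```

Comparing one short vector per segment, rather than every frame point, is what makes this kind of retrieval cheap relative to frame-by-frame matching.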


A Document Summarization System Using Dynamic Connection Graph (동적 연결 그래프를 이용한 자동 문서 요약 시스템)

  • Song, Won-Moon;Kim, Young-Jin;Kim, Eun-Ju;Kim, Myung-Won
    • Journal of KIISE: Software and Applications / v.36 no.1 / pp.62-69 / 2009
  • The purpose of document summarization is to provide easy and quick understanding of documents by extracting summarized information from documents produced by various application programs. In this paper, we propose a document summarization method that creates and analyzes a connection graph representing the similarity of the keyword lists of sentences in a document, taking into account the mean length (number of keywords) of the document's sentences. We implemented a system that automatically generates a summary from a document using the proposed method. To evaluate the performance of the method, we used a set of 20 documents paired with their correct summaries and measured precision, recall, and F-measure. The experimental results show that the proposed method is more efficient than existing methods.
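A connection-graph summarizer of this kind can be sketched as follows: link sentences whose keyword lists overlap sufficiently, then take the most-connected sentences as the summary. The Jaccard similarity, the threshold, and the degree-based scoring below are illustrative assumptions, not the paper's actual design:

```python
# Sketch of a connection-graph summarizer: connect sentences whose keyword
# lists overlap enough (Jaccard similarity >= threshold), then return the
# indices of the most-connected sentences. Threshold and scoring are
# illustrative assumptions.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def summarize(keyword_lists, threshold=0.25, top_k=1):
    n = len(keyword_lists)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if jaccard(keyword_lists[i], keyword_lists[j]) >= threshold:
                degree[i] += 1
                degree[j] += 1
    # Highest-degree (most central) sentences form the summary.
    return sorted(range(n), key=lambda i: degree[i], reverse=True)[:top_k]

sents = [["cat", "sits", "mat"],
         ["cat", "mat", "sleep"],
         ["stock", "market", "rose"]]
print(summarize(sents))  # → [0]
```

The intuition is that a sentence sharing keywords with many other sentences is likely central to the document's topic, so graph degree serves as a cheap proxy for importance.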