• Title/Summary/Keyword: Precision-recall


A Comparative Study on Performance of Deep Learning Models for Vision-based Concrete Crack Detection according to Model Types (영상기반 콘크리트 균열 탐지 딥러닝 모델의 유형별 성능 비교)

  • Kim, Byunghyun; Kim, Geonsoon; Jin, Soomin; Cho, Soojin
    • Journal of the Korean Society of Safety / v.34 no.6 / pp.50-57 / 2019
  • In this study, recently proposed deep learning models are classified by their data input/output types and analyzed to find the type best suited to building a crack detection model. The models are first grouped into image classification, object segmentation, object detection, and instance segmentation models, with ResNet-101, DeepLab V2, Faster R-CNN, and Mask R-CNN selected as the representative model of each type. For a fair comparison, ResNet-101 was used in all four models as the backbone network, i.e., the main feature extractor. The four models were trained on 500 crack images taken from real concrete structures or collected from the Internet, and all reached training accuracy above 94%. A comparative evaluation was then conducted on 40 images taken from real concrete structures, measuring the performance of each model with precision and recall. In the experiments, Mask R-CNN, the instance segmentation model, showed the highest precision and recall for crack detection, and qualitative analysis also shows that Mask R-CNN reproduced crack shapes most faithfully (a minimal evaluation sketch follows below).
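The evaluation above reduces to counting true positives, false positives, and false negatives between predicted and ground-truth crack regions. The paper does not publish code, so the following is a minimal pixel-wise sketch (NumPy only; the function and variable names are illustrative, not the authors'):

```python
import numpy as np

def precision_recall(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Pixel-wise precision and recall for binary crack masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # predicted crack, really crack
    fp = np.logical_and(pred, ~gt).sum()  # predicted crack, actually background
    fn = np.logical_and(~pred, gt).sum()  # missed crack pixels
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: evaluate one predicted mask against its ground truth
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(precision_recall(pred, gt))  # (0.666..., 0.666...)
```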

Design and Implementation of a Content-based Color Image Retrieval System based on Color-Spatial Feature (색상-공간 특징을 사용한 내용기반 칼라 이미지 검색 시스템의 설계 및 구현)

  • An, Cheol-Ung; Kim, Seung-Ho
    • Journal of KIISE: Computing Practices and Letters / v.5 no.5 / pp.628-638 / 1999
  • In this paper, we present a method of retrieving 24 bpp RGB images based on color-spatial features. Each image is first converted from the RGB color space to the perceptually uniform CIE L*u*v* color space and then subdivided into regions by color similarity. The segmentation algorithm constrains region size, because very small regions can be discarded while very large regions make it difficult to extract a spatial feature. For each region, the average color and the region center are extracted to construct the color-spatial feature. During retrieval, the color and spatial features of the query are compared with those of the database images using the proposed similarity measure to determine the set of candidate images to be retrieved. We implement a content-based color image retrieval system based on this method; the system can retrieve images from a user-drawn graphic or an example image query. Experimental results show a Recall/Precision of 0.80/0.84 (a sketch of the feature extraction follows below).
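The color-spatial feature pairs each region's average L*u*v* color with its center of mass. A hedged sketch of that extraction, assuming scikit-image for the color conversion and a label map produced by some external segmentation step (the paper's own segmentation is not reproduced), might look like this:

```python
import numpy as np
from skimage.color import rgb2luv  # RGB -> perceptually uniform CIE L*u*v*

def region_features(rgb_image: np.ndarray, labels: np.ndarray):
    """Return each labelled region's average L*u*v* color and centroid.

    `labels` is an integer region map (0 = background) coming from any
    color-similarity segmentation; that step is assumed, not shown.
    """
    luv = rgb2luv(rgb_image)  # H x W x 3
    features = {}
    for region_id in np.unique(labels):
        if region_id == 0:
            continue
        ys, xs = np.nonzero(labels == region_id)
        avg_color = luv[ys, xs].mean(axis=0)   # color feature
        centroid = (ys.mean(), xs.mean())      # spatial feature
        features[int(region_id)] = (avg_color, centroid)
    return features
```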

Feature Extraction for Content-based Image Retrieval and Implementation of Image Database Retrieval System (내용기반 영상 검색을 위한 특징 추출 및 영상 데이터베이스 검색 시스템 구현)

  • Kim, Jin-Ah; Lee, Seung-Hoon; Woo, Yong-Tae; Jung, Sung-Hwan
    • The Transactions of the Korea Information Processing Society / v.5 no.8 / pp.1951-1959 / 1998
  • In this paper, we propose an efficient feature extraction method for content-based retrieval and implement an image retrieval system on an Oracle database. First, a color feature is extracted from each input image with a modified version of Stricker's method, and this color feature together with an ART2 neural network is used for a rough classification of the images. Next, a texture feature is extracted using the wavelet transform, and a detailed classification is performed on the roughly classified images from the previous step. Using the proposed feature extraction methods, we implement an image retrieval system through extended SQL statements on the relational database. The system is implemented on the Oracle DBMS, and experiments with 200 sample images show retrieval rates of 90% Recall and 81% Precision (a sketch of the wavelet texture feature follows below).
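As a rough illustration of the texture step, the subband energies of a 2-D wavelet decomposition can serve as a texture descriptor. This sketch assumes PyWavelets; the exact wavelet, decomposition level, and statistics the authors used are not specified here, and the ART2 rough classification and Oracle/extended SQL layers are omitted:

```python
import numpy as np
import pywt  # PyWavelets, assumed here as the wavelet implementation

def wavelet_texture_feature(gray_image: np.ndarray,
                            wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Texture descriptor: mean energy of each wavelet subband."""
    coeffs = pywt.wavedec2(gray_image.astype(float), wavelet, level=level)
    energies = [np.mean(np.square(coeffs[0]))]        # approximation subband
    for detail_bands in coeffs[1:]:                   # (cH, cV, cD) per level
        energies.extend(np.mean(np.square(band)) for band in detail_bands)
    return np.array(energies)
```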

A Study on Automatic Text Categorization of Web-Based Query Using Synonymy List (유사어 사전을 이용한 웹기반 질의문의 자동 범주화에 관한 연구)

  • Nam, Young-Joon; Kim, Gyu-Hwan
    • Journal of Information Management / v.35 no.4 / pp.81-105 / 2004
  • In this study, automatic text categorization of web-based queries was implemented. A χ² (chi-square) method based on the Support Vector Machine was used to test the efficiency of text categorization on queries, and the test was carried out with a model that uses a synonym list; 713 synonyms were extracted manually from the test documents. As a result, when the synonyms were assigned the precision ratio and the recall ratio changed by -0.01% and by 8.53%, respectively, the F1 measure increased by 4.58%, and the standard deviation between the recall and precision ratios improved by 18.39% (a sketch of such a pipeline follows below).
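A hedged sketch of such a pipeline using scikit-learn's chi-square feature selection and a linear SVM; the expand_with_synonyms helper, the placeholder data arguments, and the parameter values are illustrative stand-ins, not the authors' implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import precision_recall_fscore_support

def expand_with_synonyms(text: str, synonym_map: dict) -> str:
    """Append the canonical form of every known synonym found in the query."""
    tokens = text.split()
    return " ".join(tokens + [synonym_map[t] for t in tokens if t in synonym_map])

def build_and_evaluate(train_texts, train_labels, test_texts, test_labels,
                       synonym_map, k_best=1000):
    # k_best must not exceed the tf-idf vocabulary size
    clf = make_pipeline(TfidfVectorizer(),
                        SelectKBest(chi2, k=k_best),  # chi-square feature selection
                        LinearSVC())
    clf.fit([expand_with_synonyms(t, synonym_map) for t in train_texts], train_labels)
    pred = clf.predict([expand_with_synonyms(t, synonym_map) for t in test_texts])
    # Macro-averaged precision, recall, F1 over the query categories
    return precision_recall_fscore_support(test_labels, pred, average="macro")
```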

Content-based Image Retrieval Using Texture Features Extracted from Local Energy and Local Correlation of Gabor Transformed Images

  • Bu, Hee-Hyung; Kim, Nam-Chul; Lee, Bae-Ho; Kim, Sung-Ho
    • Journal of Information Processing Systems / v.13 no.5 / pp.1372-1381 / 2017
  • In this paper, a texture feature extraction method using the local energy and local correlation of Gabor transformed images is proposed and applied to an image retrieval system. The Gabor wavelet is known to resemble the response of the human visual system, and the outputs of the Gabor transformation are robust to variations in object size and illumination. Because of these advantages, it has been actively studied in fields such as image retrieval, classification, and analysis. To fully exploit these strengths, local energy and local correlation features are extracted from Gabor transformed images and applied to an image retrieval system. Experiments compare the proposed method with the conventional Gabor method and the popular rotation-invariant uniform local binary pattern (RULBP) method in terms of precision versus recall, using the Mahalanobis distance to measure the similarity between a query image and a database (DB) image. Results on the Corel DB and VisTex DB show that the proposed method is superior to the conventional Gabor method; it also yields precision and recall that are on average 6.58% and 3.66% higher on Corel DB and 4.87% and 3.37% higher on VisTex DB, respectively, than the RULBP method (a sketch of Gabor energy features follows below).
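As an illustration only, a generic Gabor-energy descriptor and a Mahalanobis similarity can be sketched with scikit-image and SciPy; the paper's specific local energy and local correlation definitions are not reproduced, and the frequencies and orientation count below are assumptions:

```python
import numpy as np
from skimage.filters import gabor              # Gabor filter bank responses
from scipy.spatial.distance import mahalanobis

def gabor_energy_feature(gray: np.ndarray,
                         frequencies=(0.1, 0.2, 0.4), n_orient=4) -> np.ndarray:
    """Mean energy of Gabor responses over a grid of scales and orientations."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(gray, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.mean(real ** 2 + imag ** 2))
    return np.array(feats)

def similarity(query_feat, db_feat, inv_cov):
    """Mahalanobis distance between query and database feature vectors
    (inv_cov is the inverse covariance of the DB features)."""
    return mahalanobis(query_feat, db_feat, inv_cov)
```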

Detection of Artificial Caption using Temporal and Spatial Information in Video (시·공간 정보를 이용한 동영상의 인공 캡션 검출)

  • Joo, SungIl; Weon, SunHee; Choi, HyungIl
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.115-126 / 2012
  • Artificial captions appearing in videos carry information related to the video content, and many methods for extracting them have been studied. Most traditional methods detect caption regions from a single frame, although video contains temporal as well as spatial information, so we propose a caption region detection method that uses both. First, we build an improved Text-Appearance-Map and detect temporally continuous candidate regions through matching between candidate regions. Second, we detect disappearing captions with a disappearance test over the candidate regions; when a caption disappears, its region is decided by a merging process that uses temporal and spatial information. Finally, the caption regions are verified with artificial neural networks (ANNs) fed with edge direction histograms. The proposed method was tested on captions of various sizes, shapes, and positions, and the results were evaluated in terms of Recall and Precision (a sketch of the edge direction histogram follows below).
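The verification step relies on edge direction histograms. A generic, NumPy-only sketch of such a histogram (the paper's exact binning and the ANN verifier it feeds are not shown):

```python
import numpy as np

def edge_direction_histogram(gray: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Histogram of gradient directions, weighted by gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)                 # range (-pi, pi]
    hist, _ = np.histogram(direction, bins=n_bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    total = hist.sum()
    return hist / total if total else hist         # normalized descriptor
```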

Implementation of a Machine Learning-based Recommender System for Preventing the University Students' Dropout (대학생 중도탈락 예방을 위한 기계 학습 기반 추천 시스템 구현 방안)

  • Jeong, Do-Heon
    • Journal of the Korea Convergence Society / v.12 no.10 / pp.37-43 / 2021
  • This study proposed an effective automatic classification technique to identify dropout patterns of university students and, based on it, an intelligent recommender system to prevent dropouts. To this end, 1) a data processing method for improving machine learning performance was proposed based on actual enrollment/dropout data of university students, and 2) comparison experiments were conducted using five types of machine learning algorithms. 3) In the experiments, the proposed method showed superior performance over the baseline method for all algorithms: the precision of identifying enrolled students reached up to 95.6% with a Random Forest (RF), and the recall of identifying dropout students reached up to 80.0% with Naive Bayes (NB). 4) Finally, based on these results, a counseling recommender system that gives priority to students likely to drop out was suggested. The study confirms that reasonable decision-making is possible through convergence research that applies IT technologies to educational issues, and various artificial intelligence technologies will be applied in continued research (a minimal model comparison sketch follows below).
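A minimal scikit-learn sketch of the kind of model comparison reported above; the study's actual features, preprocessing, and hyperparameters are not reproduced, and X/y below are placeholders for the enrollment/dropout data:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

def compare_models(X, y):
    """X: numeric student features; y: 1 = dropout, 0 = enrolled (placeholders)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    results = {}
    for name, model in {"RF": RandomForestClassifier(random_state=0),
                        "NB": GaussianNB()}.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        results[name] = {
            # precision of the "enrolled" class and recall of the "dropout" class,
            # mirroring the two figures highlighted in the abstract
            "precision_enrolled": precision_score(y_te, pred, pos_label=0),
            "recall_dropout": recall_score(y_te, pred, pos_label=1),
        }
    return results
```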

Microscopic Traffic Parameters Estimation from UAV Video Using Deep Learning-based Multiple Object Tracking (다중객체추적 알고리즘을 활용한 드론 항공영상 기반 미시적 교통데이터 추출)

  • Jung, Bokyung; Seo, Sunghyuk; Park, Boogi; Bae, Sanghoon
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.5 / pp.83-99 / 2021
  • With the advent of the fourth industrial revolution, studies on driving management and driving strategies for autonomous vehicles are emerging. Obtaining microscopic traffic data on individual vehicles is essential for such research, yet conventional traffic data collection methods cannot capture the driving behavior of individual vehicles. In this study, UAV videos were used to collect microscopic traffic data from an aerial viewpoint. To overcome the limitations of related work, the microscopic traffic data were estimated using deep learning-based multiple object tracking and an image registration technique. As a result, the estimated speeds showed errors of MAE 3.49 km/h, RMSE 4.43 km/h, and MAPE 5.18%, and the traffic counts achieved a precision of 98.07% and a recall of 97.86% (a sketch of these error metrics follows below).
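The reported speed errors are the standard MAE, RMSE, and MAPE statistics. A small NumPy sketch of how such figures could be computed from estimated versus reference speeds (illustrative only; assumes nonzero reference speeds):

```python
import numpy as np

def speed_errors(estimated, ground_truth):
    """MAE and RMSE in km/h, MAPE in percent, for per-vehicle speed estimates."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(ground_truth, dtype=float)
    mae = np.mean(np.abs(est - ref))
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    mape = np.mean(np.abs((est - ref) / ref)) * 100.0
    return mae, rmse, mape
```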

Real Time Hornet Classification System Based on Deep Learning (딥러닝을 이용한 실시간 말벌 분류 시스템)

  • Jeong, Yunju; Lee, Yeung-Hak; Ansari, Israfil; Lee, Cheol-Hee
    • Journal of IKEEE / v.24 no.4 / pp.1141-1147 / 2020
  • Hornet species are so similar in shape that they are difficult for non-experts to classify, and because the insects are small and move fast, detecting and classifying the species in real time is even harder. In this paper, we developed a system that classifies hornet species in real time with a bounding box-based deep learning algorithm. To minimize the background area included in the bounding box when labeling the training images, we propose labeling only the head and body of the hornet. We also experimentally compare existing bounding box-based object detection algorithms to find the one best able to detect hornets in real time and classify their species. In the experiments, when the Mish function was applied as the activation function of the convolution layers and the hornet images were tested with a YOLOv4 model that applies a Spatial Attention Module (SAM) before the object detection block, the average precision was 97.89% and the average recall was 98.69% (a sketch of box-level precision/recall follows below).
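Detection precision and recall of this kind are typically computed by matching predicted and ground-truth bounding boxes at an IoU threshold. A hedged, framework-free sketch for a single image (not the authors' evaluation code; greedy matching is one common convention):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_precision_recall(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground truth at an IoU threshold."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(pred_boxes) - tp   # unmatched predictions
    fn = len(gt_boxes) - tp     # missed ground-truth boxes
    precision = tp / (tp + fp) if pred_boxes else 0.0
    recall = tp / (tp + fn) if gt_boxes else 0.0
    return precision, recall
```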

Change Attention based Dense Siamese Network for Remote Sensing Change Detection (원격 탐사 변화 탐지를 위한 변화 주목 기반의 덴스 샴 네트워크)

  • Hwang, Gisu; Lee, Woo-Ju; Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.26 no.1 / pp.14-25 / 2021
  • Change detection, which finds changes between remote sensing images of the same location captured at different times, is important because it is used in many applications, but registration errors, building displacement errors, and shadow errors cause false positives. To address these problems, we propose a novel deep convolutional network called CADNet (Change Attention Dense Siamese Network). CADNet uses a Feature Pyramid Network (FPN) to detect multi-scale changes, applies a Change Attention Module that attends to the changed regions, and uses DenseNet as its feature extractor so that feature maps containing both low-level and high-level features are available for change detection. Measured in terms of Precision, Recall, and F1, CADNet achieves 98.44%, 98.47%, and 98.46% on the WHU dataset and 90.72%, 91.89%, and 91.30% on the LEVIR-CD dataset, outperforming the traditional change detection methods compared (a minimal Siamese sketch follows below).
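To illustrate only the Siamese idea, not CADNet itself (its DenseNet backbone, FPN, and Change Attention Module are not reproduced), a toy PyTorch network that shares one encoder across the two acquisition dates and classifies the absolute feature difference per pixel might look like this:

```python
import torch
import torch.nn as nn

class TinySiameseChangeNet(nn.Module):
    """Toy Siamese change detector: shared encoder, |feature difference|, 1x1 head."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(           # same weights applied to both dates
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)         # per-pixel change logit

    def forward(self, img_t0, img_t1):
        f0 = self.encoder(img_t0)
        f1 = self.encoder(img_t1)
        return self.head(torch.abs(f0 - f1))    # large feature difference -> change
```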