• Title/Summary/Keyword: Precision-Recall Curve

Sentiment Analysis From Images - Comparative Study of SAI-G and SAI-C Models' Performances Using AutoML Vision Service from Google Cloud and Clarifai Platform

  • Marcu, Daniela;Danubianu, Mirela
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.179-184
    • /
    • 2021
  • In our study, we performed sentiment analysis on images. For this purpose, we used 153 images containing people, animals, buildings, landscapes, cakes, and objects, which we divided into two categories: images suggesting a positive emotion and images suggesting a negative one. To classify the images into these two categories, we created two models. The SAI-G model was created with Google's AutoML Vision service. The SAI-C model was created on the Clarifai platform. The data were labeled in a preprocessing stage, and for the SAI-C model we created the concepts POSITIVE (POZITIV) and NEGATIVE (NEGATIV). To evaluate the performance of the two models, we used a series of evaluation metrics: Precision, Recall, ROC (Receiver Operating Characteristic) curve, Precision-Recall curve, Confusion Matrix, Accuracy Score, and Average Precision. Precision and Recall for the SAI-G model are both 0.875 at a confidence threshold of 0.5, while for the SAI-C model we obtained much lower scores at the same threshold: Precision = 0.727 and Recall = 0.571. The results indicate a lower classification performance of the SAI-C model compared to the SAI-G model. The exception is the Precision for the POSITIVE concept, which is 1.000.
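
As a side note on the numbers above: precision and recall at a fixed confidence threshold come straight from the thresholded confusion counts. A minimal sketch of that computation follows; the scores and labels are invented placeholders, not data from the paper.

```python
def precision_recall_at_threshold(scores, labels, threshold=0.5):
    """scores: predicted confidence of the positive class; labels: 1/0."""
    predicted = [s >= threshold for s in scores]
    tp = sum(1 for p, y in zip(predicted, labels) if p and y == 1)
    fp = sum(1 for p, y in zip(predicted, labels) if p and y == 0)
    fn = sum(1 for p, y in zip(predicted, labels) if not p and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.91, 0.72, 0.48, 0.65, 0.33, 0.88]   # hypothetical confidences
labels = [1, 1, 1, 0, 0, 1]                     # 1 = positive emotion
print(precision_recall_at_threshold(scores, labels, 0.5))
```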

A Study on the Effectiveness of Information Retrieval (정보검색효율에 관한 연구)

  • Yoon Koo-ho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.8
    • /
    • pp.73-101
    • /
    • 1981
  • Retrieval effectiveness is the principal criterion for measuring the performance of an information retrieval system. The effectiveness of a retrieval system depends primarily on the extent to which it can retrieve wanted documents without retrieving unwanted ones. So, ultimately, effectiveness is a function of the relevant and nonrelevant documents retrieved. Consequently, 'relevance' of information to the user's request has become one of the most fundamental concepts encountered in the theory of information retrieval. Although there is at present no consensus as to how this notion should be defined, relevance has been widely used as a meaningful quantity and an adequate criterion for measures of the evaluation of retrieval effectiveness. Recall and precision, among the various parameters based on the 'two-by-two' (or contingency) table, were the major considerations in this paper, because it is assumed that recall and precision are sufficient for the measurement of effectiveness. Accordingly, the different concepts of 'relevance' and 'pertinence' of documents to user requests and their proper usages were investigated, even though the two terms have unfortunately been used rather loosely in the literature. In addition, a number of variables affecting recall and precision values were discussed. Some conclusions derived from this study are as follows: Any notion of retrieval effectiveness is based on 'relevance', which itself is extremely difficult to define. Recall and precision are valuable concepts in the study of any information retrieval system. They are, however, not the only criteria by which a system may be judged. The recall-precision curve represents the average performance of any given system, and this may vary quite considerably in particular situations. Therefore, it is possible to some extent to vary the indexing policy, the indexing language, or the search methodology to improve the performance of the system in terms of recall and precision. The 'inverse relationship' between average recall and precision could be accepted as the 'fundamental law of retrieval', and it should certainly be used as an aid to evaluation. Finally, there is a limit to the performance (in terms of effectiveness) achievable by an information retrieval system. That is: "Perfect retrieval is impossible."
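
For readers unfamiliar with the 'two-by-two' table the abstract leans on, the sketch below derives recall, precision, and fallout from its four cells. The cell counts are illustrative only, not taken from the paper.

```python
# The four cells of the contingency table (invented counts):
retrieved_relevant = 40      # relevant documents retrieved
retrieved_nonrelevant = 10   # nonrelevant documents retrieved
missed_relevant = 20         # relevant documents not retrieved
missed_nonrelevant = 930     # nonrelevant documents not retrieved

recall = retrieved_relevant / (retrieved_relevant + missed_relevant)
precision = retrieved_relevant / (retrieved_relevant + retrieved_nonrelevant)
fallout = retrieved_nonrelevant / (retrieved_nonrelevant + missed_nonrelevant)

print(f"recall={recall:.3f} precision={precision:.3f} fallout={fallout:.3f}")
```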

Enhanced Network Intrusion Detection using Deep Convolutional Neural Networks

  • Naseer, Sheraz;Saleem, Yasir
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.5159-5178
    • /
    • 2018
  • Network intrusion detection is a rapidly growing field of information security due to its importance for modern IT infrastructure. Many supervised and unsupervised learning techniques have been devised by researchers from the disciplines of machine learning and data mining to achieve reliable detection of anomalies. In this paper, a deep convolutional neural network (DCNN) based intrusion detection system (IDS) is proposed, implemented, and analyzed. The deep CNN core of the proposed IDS is fine-tuned using randomized search over the configuration space. The proposed system is trained and tested on the NSL-KDD training and testing datasets using a GPU. Performance comparisons of the proposed DCNN model with other classifiers are provided using well-known metrics, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, the precision-recall curve, and mean average precision (mAP). The experimental results of the proposed DCNN-based IDS show promising results for real-world application in anomaly detection systems.
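
The curve-based metrics the abstract lists (ROC curve, AUC, precision-recall curve, average precision) are standard scikit-learn calls once classifier scores are available. A minimal sketch, using random stand-in scores rather than the paper's DCNN outputs:

```python
import numpy as np
from sklearn.metrics import (average_precision_score, precision_recall_curve,
                             roc_auc_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)            # 1 = anomaly, 0 = normal
y_score = 0.4 * y_true + 0.6 * rng.random(1000)   # noisy stand-in scores

print("ROC AUC:", roc_auc_score(y_true, y_score))
print("average precision:", average_precision_score(y_true, y_score))
precision, recall, _ = precision_recall_curve(y_true, y_score)
print("PR curve points:", len(precision))
```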

A Study on measuring techniques of retrieval effectiveness (검색효율 측정척도에 관한 연구)

  • Yoon Koo Ho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.16
    • /
    • pp.177-205
    • /
    • 1989
  • Retrieval effectiveness is the principal criterion for measuring the performance of an information retrieval system. This paper deals with the characteristics of 'relevance' of information and various measuring techniques for retrieval effectiveness. The outlines of this study are as follows: 1) Relevance decisions for evaluation should be divided into user-oriented and system-oriented decisions. 2) The recall-precision measure seems to be user-oriented, and the recall-fallout measure system-oriented. 3) Unfortunately, many of the composite measures cannot be justified in any rational manner. 4) The Swets model has demonstrated that it yields, in general, a straight line instead of a curve of varying curvature, and has emphasized the fundamentally probabilistic nature of information retrieval. 5) The Cooper model seems to be a good substitute for precision and a useful measure for systems that rank documents. 6) The Rocchio model was proposed for the evaluation of retrieval systems that rank documents, and was designed to be independent of cut-off. 7) The Cawkell model suggested that Shannon's equation for entropy can be applied to the measurement of retrieval effectiveness.
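
Point 2's contrast between the user-oriented recall-precision measure and the system-oriented recall-fallout measure can be made concrete by sweeping a cut-off down a ranked output list. A small sketch with invented relevance judgments:

```python
# Relevance of each document in ranked retrieval order (invented judgments).
relevant = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
total_relevant = sum(relevant)
total_nonrelevant = len(relevant) - total_relevant

for cutoff in range(1, len(relevant) + 1):
    tp = sum(relevant[:cutoff])   # relevant documents retrieved at this cut-off
    fp = cutoff - tp              # nonrelevant documents retrieved
    print(f"cutoff={cutoff:2d}  recall={tp / total_relevant:.2f}  "
          f"precision={tp / cutoff:.2f}  fallout={fp / total_nonrelevant:.2f}")
```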

Evaluation of Classifiers Performance for Areal Features Matching (면 객체 매칭을 위한 판별모델의 성능 평가)

  • Kim, Jiyoung;Kim, Jung Ok;Yu, Kiyun;Huh, Yong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.1
    • /
    • pp.49-55
    • /
    • 2013
  • In this paper, we propose a suitable classifier for matching different spatial data sets, applying classifier performance evaluation methods from data mining and biometrics. For this, we calculated distances between pairs of candidate features as matching criteria and normalized the distances by the Min-Max method and the Tanh (TH) method. We defined classifiers in which shape similarity is derived by fusing these similarities with the CRiteria Importance Through Intercriteria correlation (CRITIC) method, the Matcher Weighting method, and the Simple Sum (SS) method. Evaluating classifier performance with the Precision-Recall (PR) curve and the area under the PR curve (AUC-PR), we confirmed that the classifier combining TH normalization with the SS method has the highest AUC-PR value, 0.893. Therefore, to match different spatial data sets, we conclude that a classifier in which the distances of the matching criteria are normalized by the TH method and shape similarity is calculated by the SS method is appropriate.
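
A rough sketch of the two distance normalizations and the Simple Sum fusion compared above, scored by AUC-PR. The Tanh rule follows the common Jain et al. form; the paper's exact constants may differ, and the distances and match labels below are synthetic.

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

def min_max(x):
    return (x - x.min()) / (x.max() - x.min())

def tanh_norm(x):
    # Common Tanh normalization form; the paper's constants are an assumption.
    return 0.5 * (np.tanh(0.01 * (x - x.mean()) / x.std()) + 1.0)

rng = np.random.default_rng(1)
d_shape = rng.random(200)       # hypothetical matching-criterion distances
d_position = rng.random(200)
labels = (d_shape + d_position < 0.8).astype(int)  # stand-in match labels

for name, norm in (("Min-Max", min_max), ("Tanh", tanh_norm)):
    # Simple Sum fusion: distances become similarities, then are summed.
    similarity = (1 - norm(d_shape)) + (1 - norm(d_position))
    p, r, _ = precision_recall_curve(labels, similarity)
    print(name, "AUC-PR:", auc(r, p))
```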

An effective automated ontology construction based on the agriculture domain

  • Deepa, Rajendran;Vigneshwari, Srinivasan
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.573-587
    • /
    • 2022
  • The agricultural sector is completely different from other sectors, since it relies entirely on various natural and climatic factors. Climate changes have many effects, including lack of annual rainfall, pests, heat waves, changes in sea level, and global ozone/atmospheric CO2 fluctuation, on land and agriculture in similar ways. Climate change also affects the environment. Based on these factors, farmers choose their crops to increase productivity in their fields. Many existing agricultural ontologies are either domain-specific or have been created with minimal vocabulary, and no proper evaluation framework has been implemented. A new agricultural ontology focused on subdomains is designed to assist farmers using the Jaccard relative extractor (JRE) and the Naïve Bayes algorithm. The JRE is used to find the similarity between two sentences and words in the agricultural documents, and the relationship between two terms is identified via the Naïve Bayes algorithm. In the proposed method, the preprocessing of data is carried out through natural language processing techniques, and the dimension-reduced tags are subjected to rule-based formal concept analysis and mapping. The subdomain ontologies of weather, pest, and soil are built separately, and the overall agricultural ontology is built around them. The gold standard for the lexical layer is used to evaluate the proposed technique, and its performance is analyzed by comparing it with different state-of-the-art systems. Precision, recall, F-measure, Matthews correlation coefficient, receiver operating characteristic curve area, and precision-recall curve area are the performance metrics used to analyze the performance. The proposed methodology gives a precision score of 94.40%, compared with the decision tree (83.94%) and the K-nearest neighbor algorithm (86.89%), for agricultural ontology construction.
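
The JRE builds on plain Jaccard similarity between token sets. A minimal sketch of that underlying quantity, with invented sentences; the full JRE and the Naïve Bayes relation step are not reproduced here.

```python
def jaccard(sentence_a: str, sentence_b: str) -> float:
    """Jaccard similarity of the two sentences' word sets."""
    a, b = set(sentence_a.lower().split()), set(sentence_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

print(jaccard("rainfall affects soil moisture levels",
              "annual rainfall changes soil moisture"))
```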

A Comparative Performance Analysis of Segmentation Models for Lumbar Key-points Extraction (요추 특징점 추출을 위한 영역 분할 모델의 성능 비교 분석)

  • Seunghee Yoo;Minho Choi;Jun-Su Jang
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.5
    • /
    • pp.354-361
    • /
    • 2023
  • Most spinal diseases are diagnosed based on the subjective judgment of a specialist, so numerous studies have sought objectivity by automating the diagnosis process using deep learning. In this paper, we propose a method that combines segmentation and feature extraction, which are frequently used techniques for diagnosing spinal diseases. Four models, U-Net, U-Net++, DeepLabv3+, and M-Net, were trained and compared using 1000 X-ray images, and key-points were derived using the Douglas-Peucker algorithm. For evaluation, the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, recall, and area under the precision-recall curve were used as metrics, and U-Net++ showed the best performance on all of them, with an average DSC of 0.9724. For the average Euclidean distance between estimated key-points and ground truth, U-Net was the best, followed by U-Net++. However, the difference in average distance was about 0.1 pixels, which is not significant. The results suggest that it is possible to extract key-points based on segmentation and that this can be used to accurately diagnose various spinal diseases, including spondylolisthesis, with consistent criteria.
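
The two overlap metrics reported above, DSC and IoU, are simple set operations on binary masks. A toy sketch, not tied to the paper's lumbar X-ray data:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, gt).sum()
    return 2 * intersection / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union: |A∩B| / |A∪B|."""
    intersection = np.logical_and(pred, gt).sum()
    return intersection / np.logical_or(pred, gt).sum()

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])  # toy predicted mask
gt   = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])  # toy ground-truth mask
print("DSC:", dice(pred, gt), "IoU:", iou(pred, gt))
```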

Learning Behavior Analysis of Bayesian Algorithm Under Class Imbalance Problems (클래스 불균형 문제에서 베이지안 알고리즘의 학습 행위 분석)

  • Hwang, Doo-Sung
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.6
    • /
    • pp.179-186
    • /
    • 2008
  • In this paper we analyze the behavior of the Bayesian algorithm in learning class imbalance problems and compare performance evaluation methods. The learning performance of the Bayesian algorithm is evaluated over class imbalance problems generated by varying the prior data distribution, the imbalance rate, and the discrimination complexity. The experimental results are calculated as AUC (Area Under the Curve) values of both the ROC (Receiver Operating Characteristic) and PR (Precision-Recall) evaluation measures and compared according to imbalance rate and discrimination complexity. In the comparison and analysis, the Bayesian algorithm suffers from the imbalance rate, consistent with previously reported research, and the data overlapping caused by discrimination complexity is another factor that hampers learning performance. As the discrimination complexity and class imbalance rate of the problems increase, the AUC of the PR measure varies much more than the AUC of the ROC measure, while the two measures behave similarly when discrimination complexity and class imbalance rate are low. The experimental results show that the AUC of the PR measure is more appropriate for evaluating learning on class imbalance problems and, furthermore, is beneficial in designing an optimal learning model that considers misclassification cost.
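
The paper's central observation, that PR-based AUC reacts to class imbalance while ROC-based AUC is largely insensitive to it, is easy to reproduce synthetically: evaluate one fixed scorer on datasets with increasingly rare positives. A sketch with synthetic data, not the paper's experimental setup:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(42)
for positive_rate in (0.5, 0.1, 0.01):
    y = (rng.random(20000) < positive_rate).astype(int)
    # Same class overlap each time: only the imbalance rate changes.
    scores = y * 1.0 + rng.normal(0, 1.2, size=y.size)
    print(f"pos rate {positive_rate:>4}: "
          f"ROC-AUC={roc_auc_score(y, scores):.3f}  "
          f"PR-AUC={average_precision_score(y, scores):.3f}")
```

ROC-AUC stays roughly constant across the three runs while PR-AUC falls as positives become rare, mirroring the variance the abstract describes.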

Assessment of the Object Detection Ability of Interproximal Caries on Primary Teeth in Periapical Radiographs Using Deep Learning Algorithms (유치의 치근단 방사선 사진에서 딥 러닝 알고리즘을 이용한 모델의 인접면 우식증 객체 탐지 능력의 평가)

  • Hongju Jeon;Seonmi Kim;Namki Choi
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.50 no.3
    • /
    • pp.263-276
    • /
    • 2023
  • The purpose of this study was to evaluate the performance of a model using You Only Look Once (YOLO) for object detection of proximal caries in periapical radiographs of children. A total of 2016 periapical radiographs of primary dentition were selected from the M6 database as the learning material group, of which 1143 were labeled as proximal caries by an experienced dentist using an annotation tool. After converting the annotations into a training dataset, YOLO was trained on it using a single convolutional neural network (CNN) model. Accuracy, recall, specificity, precision, negative predictive value (NPV), F1-score, the precision-recall curve, and average precision (AP, the area under the precision-recall curve) were calculated to evaluate the object detection model's performance on the 187 test images. The results showed that the CNN-based object detection model performed well in detecting proximal caries, with a diagnostic accuracy of 0.95, a recall of 0.94, a specificity of 0.97, a precision of 0.82, an NPV of 0.96, and an F1-score of 0.81. The AP was 0.83. This model could be a valuable tool for dentists in detecting carious lesions in periapical radiographs.
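
The scalar metrics reported above all derive from the four confusion-matrix counts. A minimal sketch; the counts are placeholders, not the study's 187-image test set:

```python
def detection_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                # sensitivity
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)                   # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return dict(precision=precision, recall=recall, specificity=specificity,
                npv=npv, accuracy=accuracy, f1=f1)

# Hypothetical counts for illustration only.
print(detection_metrics(tp=94, fp=21, tn=680, fn=6))
```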

The application of convolutional neural networks for automatic detection of underwater object in side scan sonar images (사이드 스캔 소나 영상에서 수중물체 자동 탐지를 위한 컨볼루션 신경망 기법 적용)

  • Kim, Jungmoon;Choi, Jee Woong;Kwon, Hyuckjong;Oh, Raegeun;Son, Su-Uk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.37 no.2
    • /
    • pp.118-128
    • /
    • 2018
  • In this paper, we have studied how to detect underwater objects by training a convolutional neural network on images generated by side scan sonar. Compared with manual human analysis of side scan images, a convolutional neural network algorithm can enhance the efficiency of the analysis. The side scan sonar image data used in the experiment are the public data of the NSWC (Naval Surface Warfare Center) and consist of four kinds of synthetic underwater objects. The convolutional neural network algorithm is based on Faster R-CNN (Faster Region-based Convolutional Neural Network), which learns from regions of interest, and the details of the network were adapted to fit our data. The results of the study were compared using precision-recall curves, and we investigated the applicability of convolutional neural networks to underwater object detection by examining how changing the regions of interest assigned to the sonar image data affects detection performance.
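
For context on the evaluation, a detection precision-recall curve is typically built by matching score-ranked detections to ground-truth boxes at an IoU threshold, PASCAL VOC style. The sketch below implements that generic procedure with invented boxes; it is not the paper's code.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def pr_points(detections, ground_truth, iou_thr=0.5):
    """detections: list of (score, box); ground_truth: list of boxes."""
    matched, tp, fp, points = set(), 0, 0, []
    for score, box in sorted(detections, reverse=True):  # descending score
        hit = next((i for i, g in enumerate(ground_truth)
                    if i not in matched and box_iou(box, g) >= iou_thr), None)
        if hit is None:
            fp += 1
        else:
            matched.add(hit)
            tp += 1
        points.append((tp / (tp + fp), tp / len(ground_truth)))
    return points  # (precision, recall) after each ranked detection

# Invented ground-truth boxes and scored detections.
gts = [(10, 10, 50, 50), (60, 60, 90, 90)]
dets = [(0.9, (12, 11, 49, 52)), (0.7, (0, 0, 20, 20)), (0.6, (58, 61, 92, 88))]
print(pr_points(dets, gts))
```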