• Title/Summary/Keyword: Precision-recall

Machine Learning-Based Transactions Anomaly Prediction for Enhanced IoT Blockchain Network Security and Performance

  • Nor Fadzilah Abdullah;Ammar Riadh Kairaldeen;Asma Abu-Samah;Rosdiadee Nordin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.7
    • /
    • pp.1986-2009
    • /
    • 2024
  • The integration of blockchain technology with the rapid growth of Internet of Things (IoT) devices has enabled secure and decentralised data exchange. However, security vulnerabilities and performance limitations remain significant challenges in IoT blockchain networks. This work proposes a novel approach that combines transaction representation and machine learning techniques to address these challenges. Various clustering techniques, including k-means, DBSCAN, Gaussian Mixture Models (GMM), and hierarchical clustering, were employed to group unlabelled transaction data based on their intrinsic characteristics. Classifier-based anomaly prediction models were then developed using the labelled data. Performance metrics such as accuracy, precision, recall, and F1-measure were used to identify the minority class representing suspicious transactions or security threats. The classifiers were also evaluated on both balanced and unbalanced data. Compared to unbalanced data, balanced data yielded an overall average improvement of approximately 15.85% in accuracy, 88.76% in precision, 60% in recall, and 74.36% in F1-score, demonstrating that each classifier is robust, with consistently better predictive performance across the evaluation metrics. Moreover, the k-means and GMM clustering techniques outperformed the other techniques in identifying security threats, underscoring the importance of appropriate feature selection and clustering methods. The findings have practical implications for reinforcing security and efficiency in real-world IoT blockchain networks, paving the way for future investigations and advancements.
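
A minimal sketch of the cluster-then-classify pipeline described above, assuming synthetic transaction features and standard scikit-learn components (the authors' exact features, parameters, and models are not given in the abstract):

```python
# Sketch: cluster unlabelled transactions, derive labels, train a classifier,
# and report the imbalance-sensitive metrics named in the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))        # stand-in IoT transaction features

# Step 1: group unlabelled transactions (k-means, one of the methods used)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: treat the minority cluster as the anomaly (suspicious) class
minority = int(np.argmin(np.bincount(clusters)))
y = (clusters == minority).astype(int)

# Step 3: train on the cluster-derived labels; class_weight="balanced"
# mimics training on balanced rather than unbalanced data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
pred = clf.fit(X_tr, y_tr).predict(X_te)
print(f"acc={accuracy_score(y_te, pred):.3f} "
      f"P={precision_score(y_te, pred, zero_division=0):.3f} "
      f"R={recall_score(y_te, pred, zero_division=0):.3f} "
      f"F1={f1_score(y_te, pred, zero_division=0):.3f}")
```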

Data Efficient Image Classification for Retinal Disease Diagnosis (데이터 효율적 이미지 분류를 통한 안질환 진단)

  • Honggu Kang;Huigyu Yang;Moonseong Kim;Hyunseung Choo
    • Journal of Internet Computing and Services
    • /
    • v.25 no.3
    • /
    • pp.19-25
    • /
    • 2024
  • The worldwide trend of population aging is increasing the incidence of major retinal diseases that can lead to blindness, including glaucoma, cataract, and macular degeneration. Ophthalmology has accordingly focused on diagnosing these hard-to-prevent diseases early in order to reduce the rate of blindness. This study proposes a deep learning approach that accurately diagnoses ocular diseases in fundus photographs using less data than traditional methods. To this end, Convolutional Neural Network (CNN) models capable of learning effectively from limited data were selected to classify Conventional Fundus Images (CFI) from patients with various ocular diseases. The chosen CNN models demonstrated exceptional performance, achieving high accuracy, precision, recall, and F1-score values. This approach reduces manual analysis by ophthalmologists, shortens consultation times, and provides consistent diagnostic results, making it an efficient and accurate diagnostic tool in the medical field.
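
The data efficiency described here typically comes from transfer learning with a pretrained CNN backbone. A minimal sketch, assuming a ResNet-18 backbone and four disease classes (the abstract names neither the specific CNN models nor the class set):

```python
# Sketch: fine-tune only the classification head of a pretrained CNN,
# a common strategy when labelled fundus images are scarce.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4   # assumed: e.g. normal / glaucoma / cataract / macular degeneration
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone

for p in model.parameters():          # freeze the feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)       # dummy batch standing in for CFI images
y = torch.randint(0, num_classes, (8,))
loss = criterion(model(x), y)         # one illustrative training step
loss.backward()
optimizer.step()
```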

Frequent Pattern Mining By using a Completeness for BigData (빅데이터에 대한 Completeness를 이용한 빈발 패턴 마이닝)

  • Park, In-Kyu
    • Journal of Korea Game Society
    • /
    • v.18 no.2
    • /
    • pp.121-130
    • /
    • 2018
  • Most studies of frequent pattern mining use frequency, the number of times a pattern appears in a transaction database, as the key measure of pattern interestingness. This presupposes that any interesting pattern should occupy a maximal portion of the transactions in which it appears. In real-world scenarios, however, the completeness of a pattern is likely to vary across transactions. Hence, we should also consider the problem of finding qualified patterns with significant values of support weighted by completeness, in order to reduce the loss of information within a pattern in a transaction. In pattern recommendation applications, patterns with higher completeness may lead to higher recall, while patterns with higher frequency may lead to higher precision. In this paper, we propose a measure of weighted support and completeness and an algorithm, WSCFPM (weighted support and completeness frequent pattern mining). Our algorithm handles the invalidation of the monotone or anti-monotone property, which does not hold for completeness. Extensive performance analysis shows that our algorithm is efficient and scalable for word pattern mining.
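
Under one plausible reading of completeness, namely completeness(P, T) = |P| / |T| for a pattern P contained in transaction T, support weighted by completeness can be sketched as follows (the paper's exact definitions may differ):

```python
# Sketch: weighted support = average completeness of a pattern over the
# whole database, counting 0 for transactions that do not contain it.
transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "c", "d"}, {"b", "c"}]

def weighted_support(pattern, db):
    scores = [len(pattern) / len(t) if pattern <= t else 0.0 for t in db]
    return sum(scores) / len(db)

# {"a", "b"} fully covers the second transaction but only half of the
# four-item one, so its weighted support falls below its raw support.
print(weighted_support({"a", "b"}, transactions))   # ~0.54; raw support is 0.75
```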

Automatic Generation of Code-clone Reference Corpus (코드클론 표본 집합체 자동 생성기)

  • Lee, Hyo-Sub;Doh, Kyung-Goo
    • Journal of Software Assessment and Valuation
    • /
    • v.7 no.1
    • /
    • pp.29-39
    • /
    • 2011
  • To evaluate the quality of clone detection tools, we should know how many clones a tool misses. Hence we need a standard code-clone reference corpus for a carefully chosen set of sample source codes. The reference corpora available so far have been built by manually collecting clones from the results of various existing tools. This paper presents a tree-pattern-based clone detection tool that can be used for automatic generation of a reference corpus. Our tool is compared with CloneDR for precision and with Bellon's reference corpus for recall. Our tool finds no false positives and 2 to 3 times more clones than CloneDR. Compared to Bellon's reference corpus, our tool shows a 93%-to-100% recall rate and detects far more clones.
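
Recall against a reference corpus is the fraction of reference clone pairs the tool also reports; precision is the fraction of reported pairs confirmed as true clones. A minimal sketch with clone pairs modelled as sets of fragment IDs (Bellon's benchmark additionally scores partial matches with overlap thresholds, omitted here):

```python
# Sketch: evaluate a clone detector against a reference corpus by exact
# matching of unordered clone pairs.
reference = {frozenset(p) for p in [("f1", "f2"), ("f1", "f3"), ("f4", "f5")]}
detected  = {frozenset(p) for p in [("f1", "f2"), ("f4", "f5"), ("f6", "f7")]}

hits = reference & detected
recall = len(hits) / len(reference)     # 2/3: one reference pair was missed
precision = len(hits) / len(detected)   # 2/3 if ("f6", "f7") is a false positive
print(f"recall={recall:.2f} precision={precision:.2f}")
```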

Within- and between-Individual Variation in Nutrient Intakes Assessed by Recall and Record Methods among College Women (회상법과 기록법으로 측정한 여대생의 영양소 섭취량에서의 개인내 변이와 개인간 변이)

  • 오세영
    • Journal of Nutrition and Health
    • /
    • v.29 no.9
    • /
    • pp.1028-1034
    • /
    • 1996
  • This study examined within- and between-individual variation in nutrient intakes in order to estimate the degree of precision in dietary assessment among 59 female volunteers aged 21-23 years. Self-recorded 7-day dietary recalls and records were collected during a 3-month period. There was little difference in within- and between-individual variation between the recall and record methods. Within-to-between individual variation ratios were > 2.0 for most of the nutrients examined, and were higher for niacin, vitamin A, and vitamin C (> 2.5) in the recalls and for calcium, iron, vitamin A, and vitamin C (> 3.0) in the records. With 7-day dietary data, observed nutrient intakes were estimated to within 26-107% of the subjects' true (usual) intakes; among these, vitamin C and energy showed the highest and lowest values, respectively. Correlation coefficients between observed and true nutrient intakes were 0.73-0.81 for the recalls and 0.68-0.77 for the records. To estimate intake with 20% precision, 12-13 days of dietary study were required for energy, 46 for calcium, 71-72 for vitamin A, and 199-200 for vitamin C. The attenuation factor ranged from 0.73 to 0.81 for the recalls and from 0.68 to 0.77 for the records. This study implies that the commonly used 1- or 3-day dietary studies may not be appropriate for assessing individuals' nutrient intakes. Further research focusing on methodological issues in the assessment of the Korean diet is needed for a better understanding of the relationship between diet and health in Koreans.
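
The number-of-days estimates quoted above are conventionally derived from the within-individual coefficient of variation. A standard formulation (assumed here, since the abstract does not reproduce the formula) is $n = (Z_{\alpha} \, \mathrm{CV}_w / D_0)^2$, where $n$ is the number of days of intake data required per person, $Z_{\alpha}$ is the normal deviate for the chosen confidence level, $\mathrm{CV}_w$ is the within-individual coefficient of variation (%), and $D_0$ is the desired precision as a percentage of true intake (20% above). Nutrients with larger day-to-day variability, such as vitamin C, therefore require many more days of observation than energy.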

Automated Segmentation of the Lateral Ventricle Based on Graph Cuts Algorithm and Morphological Operations

  • Park, Seongbeom;Yoon, Uicheul
    • Journal of Biomedical Engineering Research
    • /
    • v.38 no.2
    • /
    • pp.82-88
    • /
    • 2017
  • Enlargement of the lateral ventricles has been identified as a surrogate marker of neurological disorders. Quantitative measurement of the lateral ventricle from MRI would enable earlier and more accurate clinical diagnosis in monitoring disease progression. Although objective quantification requires an automated or semi-automated segmentation method, it is difficult to delineate the lateral ventricles due to insufficient contrast and brightness in structural imaging. In this study, we proposed a fully automated lateral ventricle segmentation method based on a graph cuts algorithm combined with atlas-based segmentation and connected component labeling. Initially, seeds for graph cuts were defined by atlas-based segmentation (ATS). They were adjusted using partial volume images in order to provide accurate a priori information to the graph cuts. The graph cuts algorithm finds a global minimum of an energy function on a graph using the minimum cut/maximum flow algorithm. In addition, connected component labeling was used to remove false ventricle regions. The proposed method was validated against well-known tools using the Dice similarity index, recall, and precision. The proposed method achieved a significantly higher Dice similarity index ($0.860 \pm 0.036$, p < 0.001) and recall ($0.833 \pm 0.037$, p < 0.001) than the other tools. Therefore, the proposed method yields a robust and reliable segmentation result.
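
The validation metrics used above have simple overlap definitions on binary masks: for a segmentation $A$ and reference $B$, Dice $= 2|A \cap B| / (|A| + |B|)$, recall $= |A \cap B| / |B|$, and precision $= |A \cap B| / |A|$. A minimal sketch on synthetic masks:

```python
# Sketch: Dice similarity index, recall, and precision on binary masks.
import numpy as np

def dice(seg, ref):
    return 2 * (seg & ref).sum() / (seg.sum() + ref.sum())

def recall(seg, ref):
    return (seg & ref).sum() / ref.sum()

def precision(seg, ref):
    return (seg & ref).sum() / seg.sum()

ref = np.zeros((64, 64), bool); ref[20:40, 20:40] = True  # reference ventricle mask
seg = np.zeros((64, 64), bool); seg[22:42, 20:40] = True  # automated result, shifted
print(dice(seg, ref), recall(seg, ref), precision(seg, ref))  # 0.9 each here
```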

A Study of an Image Retrieval Method using Binary Subimage (이진 부분영상을 이용한 영상 검색 기법에 관한 연구)

  • 정순영;최민규;남재열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.1
    • /
    • pp.28-37
    • /
    • 2001
  • An image retrieval method combining shape information from 2-dimensional color histograms with color information from HSI color histograms is proposed in this paper. In addition, the proposed method can recover location information of an image through similarity comparison among subimages. The suggested retrieval method applies this location information to the shape and color information and can retrieve region information that is hard to distinguish in the binary image. Simulation results show that it performs very well on the precision/recall graph compared with a conventional method that uses color histograms. In particular, the proposed method was found to be robust to rotations and translations of the objects in an image.
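
A common colour-histogram similarity of the kind this method is compared against is histogram intersection. A minimal sketch (the paper's exact binning, HSI conversion, and 2-dimensional shape histogram are not reproduced here):

```python
# Sketch: histogram intersection between two normalised colour histograms;
# 1.0 means identical distributions, values near 0 mean disjoint ones.
import numpy as np

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(1)
img1 = rng.integers(0, 256, size=10_000)   # stand-in hue values of image 1
img2 = rng.integers(0, 256, size=10_000)   # stand-in hue values of image 2

h1, _ = np.histogram(img1, bins=32, range=(0, 256))
h2, _ = np.histogram(img2, bins=32, range=(0, 256))
print(histogram_intersection(h1 / h1.sum(), h2 / h2.sum()))
```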

Construction of an Internet of Things Industry Chain Classification Model Based on IRFA and Text Analysis

  • Zhimin Wang
    • Journal of Information Processing Systems
    • /
    • v.20 no.2
    • /
    • pp.215-225
    • /
    • 2024
  • With the rapid development of Internet of Things (IoT) and big data technology, a large amount of data is generated during the operation of related industries. How to classify the generated data accurately has become the core of research on data mining and processing in the IoT industry chain. This study constructs a classification model of the IoT industry chain based on an improved random forest algorithm and text analysis, aiming to achieve efficient and accurate classification of IoT industry chain big data by improving traditional algorithms. The accuracy, precision, recall, and AUC values of the traditional random forest algorithm and the proposed algorithm are compared on different datasets. The experimental results show that the proposed model performs better across datasets: its accuracy and recall exceed those of the traditional algorithm on four datasets, its accuracy exceeds the random forest model on two datasets (P-I Diabetes and Loan Default), and its final classification results are better overall. This model enables accurate classification of the massive data generated in the IoT industry chain, providing more research value for the data mining and processing technology of the IoT industry chain.
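
The evaluation protocol described (accuracy, precision, recall, and AUC on multiple datasets) can be sketched as follows; a stock scikit-learn random forest on one public dataset stands in for the models, since the abstract does not specify the modifications that make up the improved variant (IRFA):

```python
# Sketch: the accuracy/precision/recall/AUC comparison protocol, applied to
# one public dataset with a stock random forest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]          # class-1 scores for the AUC
print(f"acc={accuracy_score(y_te, pred):.3f} "
      f"P={precision_score(y_te, pred):.3f} "
      f"R={recall_score(y_te, pred):.3f} "
      f"AUC={roc_auc_score(y_te, proba):.3f}")
```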

Query Expansion Using Augmented Terms in an Extended Boolean Model

  • Nguyen, Tuan-Quang;Heo, Jun-Seok;Lee, Jung-Hoon;Kim, Yi-Reun;Whang, Kyu-Young
    • Journal of Computing Science and Engineering
    • /
    • v.2 no.1
    • /
    • pp.26-43
    • /
    • 2008
  • We propose a new query expansion method in the extended Boolean model that improves precision without degrading recall. To improve precision, our method promotes the ranks of documents having more query terms, since users typically prefer such documents. The proposed method consists of the following three steps: (1) expanding the query by adding new terms related to each term of the query; (2) further expanding the query by adding augmented terms, which are conjunctions of the terms; (3) assigning a weight to each term so that augmented terms have higher weights than the other terms. We conduct extensive experiments to show the effectiveness of the proposed method. The experimental results show that the proposed method improves precision by up to 102% on the TREC-6 data compared with the existing query expansion method using a thesaurus proposed by Kwon et al.
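
In the extended Boolean (p-norm) model this method builds on, a conjunction scores high only when all of its component term weights are high, which is why weighting augmented conjunctive terms more heavily promotes documents containing more query terms. A minimal sketch of p-norm scoring (p = 2 assumed):

```python
# Sketch: p-norm scores in the extended Boolean model.
def or_score(weights, p=2):
    return (sum(w ** p for w in weights) / len(weights)) ** (1 / p)

def and_score(weights, p=2):
    return 1 - (sum((1 - w) ** p for w in weights) / len(weights)) ** (1 / p)

# A document in which both terms of an augmented (conjunctive) term occur
# strongly outranks one in which a term is nearly absent.
print(and_score([0.9, 0.8]))   # ~0.84: both terms present
print(and_score([0.9, 0.1]))   # ~0.36: one term nearly absent
```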

Development of Evaluation Metrics that Consider Data Imbalance between Classes in Facies Classification (지도학습 기반 암상 분류 시 클래스 간 자료 불균형을 고려한 평가지표 개발)

  • Kim, Dowan;Choi, Junhwan;Byun, Joongmoo
    • Geophysics and Geophysical Exploration
    • /
    • v.23 no.3
    • /
    • pp.131-140
    • /
    • 2020
  • In training a classification model using machine learning, the acquisition of training data is a very important stage, because the amount and quality of the training data greatly influence the model's performance. However, when the cost of obtaining data is so high that building ideal training data is difficult, very different numbers of samples may be acquired for each class, and a serious data-imbalance problem can occur. If such a problem exists in the training data, not all classes are trained equally, and classes containing relatively few samples will have significantly lower recall values. Additionally, the reliability of evaluation indices such as accuracy and precision is reduced. Therefore, this study sought to overcome the data-imbalance problem in two stages. First, we introduced weighted accuracy and weighted precision as new evaluation indices that take the data-imbalance ratio into account, by modifying the conventional measures of accuracy and precision. Next, oversampling was performed to balance weighted precision and recall among classes. We verified the algorithm by applying it to a facies classification problem. As a result, the imbalance between majority and minority classes was greatly mitigated, and the boundaries between classes could be identified more clearly.
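
The abstract does not reproduce the exact weighted-accuracy and weighted-precision definitions, so the sketch below uses standard stand-ins, balanced accuracy (the mean of per-class recalls) and macro-averaged precision, together with naive random oversampling of the minority class:

```python
# Sketch: imbalance-aware evaluation plus minority-class oversampling.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, precision_score

y_true = np.array([0] * 90 + [1] * 10)           # 9:1 facies imbalance
y_pred = np.array([0] * 90 + [1] * 3 + [0] * 7)  # minority class mostly missed

print(balanced_accuracy_score(y_true, y_pred))           # 0.65: missed minority hurts
print(precision_score(y_true, y_pred, average="macro"))  # per-class average

# Naive random oversampling so both classes contribute equally to training
rng = np.random.default_rng(0)
minority_idx = np.where(y_true == 1)[0]
extra = rng.choice(minority_idx, size=80)
y_balanced = np.concatenate([y_true, y_true[extra]])
print(np.bincount(y_balanced))                   # [90 90]: classes now balanced
```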