• Title/Summary/Keyword: Precision-recall


An Index Structure for Substructure Searching In Chemical Databases (화학 데이타베이스에서 부분구조 검색을 위한 인덱스 구조)

  • Lee Hwangu;Cha Jaehyuk
    • Journal of KIISE:Databases
    • /
    • v.31 no.6
    • /
    • pp.641-649
    • /
    • 2004
  • The relationship between chemical structures and biological activities is actively studied in medicinal chemistry. As the basis of such structure-based drug design efforts, medicinal chemists search for existing drugs whose chemical structures are similar to a target drug when developing a new drug. An automatic system is therefore needed that selects the drug files containing a set of chemical moieties matching a user-defined query moiety. Substructure searching is the process of identifying the set of chemical moieties that match a specific query moiety, and methods for it have been developed since the late 1950s. In graph-theoretical terms, the problem corresponds to determining which graphs in a set are subgraph-isomorphic to a specified query moiety, and subgraph isomorphism testing has been proved to be NP-complete in the general case. Various computational approaches have been proposed to overcome this difficulty. In the 1990s, a US patent was granted on an atom-centered indexing scheme used by the RS3 system; its virtue is that the generated indexes can be searched by direct text comparison, and the system is in commercial use (http://www.acelrys.com/rs3). We identify a drawback of the RS3 system and present a new indexing scheme. The RS3 system reduces substructure searching to substring matching by expressing chemical structures as predefined strings, but its recall and precision are insufficient because structures with the same atoms and bonds cannot be indexed uniquely. To resolve this problem, we build a minimum-cost spanning tree for each centered atom and describe the structure with level-wise paths. Expressing a 2D chemical structure as a single 1D string has inherent limits, so we decompose the 2D structure into 1D structure fragments. This paper presents a new indexing technique that improves recall and precision remarkably.
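
As a minimal, hypothetical illustration of the recall and precision figures this abstract refers to (not the RS3 scheme or the proposed index itself), the sketch below scores an index's candidate set against the hit set found by exact subgraph-isomorphism testing; the molecule IDs are made up.

```python
# Illustrative only: evaluating a substructure-search index by recall and precision.
# The index is represented abstractly by the set of candidate molecule IDs it returns
# for a query; the ground truth is the set found by exact subgraph-isomorphism testing.

def recall_precision(candidates, ground_truth):
    """Recall = fraction of true matches retrieved;
    precision = fraction of retrieved candidates that are true matches."""
    true_positives = len(candidates & ground_truth)
    recall = true_positives / len(ground_truth) if ground_truth else 1.0
    precision = true_positives / len(candidates) if candidates else 1.0
    return recall, precision

# Hypothetical example: the index screen returns candidates {1, 2, 3, 5},
# while exact matching finds {1, 2, 4, 5} as the true substructure hits.
r, p = recall_precision({1, 2, 3, 5}, {1, 2, 4, 5})
print(f"recall={r:.2f}, precision={p:.2f}")   # recall=0.75, precision=0.75
```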

The Prediction of Survival of Breast Cancer Patients Based on Machine Learning Using Health Insurance Claim Data (건강보험 청구 데이터를 활용한 머신러닝 기반유방암 환자의 생존 여부 예측)

  • Doeggyu Lee;Kyungkeun Byun;Hyungdong Lee;Sunhee Shin
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.2
    • /
    • pp.1-9
    • /
    • 2023
  • Research using AI and big data is actively conducted in health and medical fields such as disease diagnosis and treatment. Most existing studies have used cohort data from research institutes or data on only some patients. In this paper, using the health insurance review claim data held by the HIRA, we examine how the survival prediction rate and the factors affecting survival differ between breast cancer patients in their 40s and 50s and those in other age groups. The accuracy of predicting patient survival averaged 0.93 for patients in their 40s and 50s, higher than the 0.86 for those in their 60s to 80s. Among the influential factors, the number of treatments ranked high for patients in their 40s and 50s, whereas age ranked high for those in their 60s to 80s. In a performance comparison with previous studies, the average precision was 0.90, higher than the 0.81 reported in the existing paper. Comparing the applied algorithms, Decision Tree, Random Forest, and Gradient Boosting achieved an overall average precision of 0.90 with a recall of 1.0, and the multi-layer perceptron achieved a precision of 0.89 with a recall of 1.0. We hope that further research will be conducted using automated machine learning (AutoML) tools for non-experts so as to enhance the value of the health insurance review claim data held by the HIRA.
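
The HIRA claim data used above are not public, so the following sketch only illustrates, on a synthetic dataset, how the listed algorithms and metrics (Decision Tree, Random Forest, Gradient Boosting, multi-layer perceptron; precision and recall) are typically evaluated with scikit-learn; it is not the authors' pipeline, and all variable names are placeholders.

```python
# A minimal sketch, not the authors' pipeline: a synthetic binary-survival dataset
# stands in for the HIRA claim data, which are not publicly available.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.2, 0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"accuracy={accuracy_score(y_te, y_pred):.2f}",
          f"precision={precision_score(y_te, y_pred):.2f}",
          f"recall={recall_score(y_te, y_pred):.2f}")
```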

Precision Position Control of PMSM Using Neural Network Disturbance observer and Parameter compensator (신경망 외란관측기와 파라미터 보상기를 이용한 PMSM의 정밀 위치제어)

  • 고종선;진달복;이태훈
    • The Transactions of the Korean Institute of Electrical Engineers B
    • /
    • v.53 no.3
    • /
    • pp.188-195
    • /
    • 2004
  • This paper presents a neural load torque observer that combines a deadbeat load torque observer with gain compensation by a parameter estimator; as a result, the response of the PMSM (permanent magnet synchronous motor) follows that of the nominal plant. The load torque compensation method is composed of a neural deadbeat observer. To reduce the noise effect, a post-filter implemented by an MA (moving average) process is adopted. A parameter compensator with an RLSM (recursive least square method) parameter estimator is adopted to increase the performance of the load torque observer and the main controller. The parameter estimator is combined with a high-performance neural load torque observer to resolve these problems. The neural network is trained online and consists of feed-forward recall and error back-propagation training. During normal operation, the input-output response is sampled, and the weights are retrained several times by the error back-propagation method at each sample period to accommodate possible variations in the parameters or the load torque. As a result, the proposed control system is robust and precise against load torque and parameter variation. Its stability and usefulness are verified by computer simulation and experiment.
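
For readers unfamiliar with the RLSM estimator mentioned above, here is a generic recursive least squares update in NumPy, applied to a made-up first-order plant; it is a textbook sketch, not the authors' observer or controller.

```python
# Generic recursive least squares (RLS) in NumPy -- a sketch of the kind of parameter
# estimator the abstract mentions, not the authors' implementation.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step: theta = parameter estimate, P = covariance matrix,
    phi = regressor vector, y = measured output, lam = forgetting factor."""
    phi = phi.reshape(-1, 1)                            # column regressor
    K = P @ phi / (lam + (phi.T @ P @ phi).item())      # gain vector
    err = y - float(phi.T @ theta.reshape(-1, 1))       # prediction error
    theta = theta + K.ravel() * err                     # updated estimate
    P = (P - K @ phi.T @ P) / lam                       # updated covariance
    return theta, P

# Made-up first-order plant y[k] = a*y[k-1] + b*u[k-1] + noise, with a=0.9, b=0.5.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
y_sig, u = np.zeros(200), rng.standard_normal(200)
for k in range(1, 200):
    y_sig[k] = a_true * y_sig[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

theta, P = np.zeros(2), np.eye(2) * 1000.0
for k in range(1, 200):
    theta, P = rls_update(theta, P, np.array([y_sig[k - 1], u[k - 1]]), y_sig[k])
print("estimated [a, b]:", theta)   # should approach [0.9, 0.5]
```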

A Study on Lung Cancer Segmentation Algorithm using Weighted Integration Loss on Volumetric Chest CT Image (흉부 볼륨 CT영상에서 Weighted Integration Loss을 이용한 폐암 분할 알고리즘 연구)

  • Jeong, Jin Gyo;Kim, Young Jae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.5
    • /
    • pp.625-632
    • /
    • 2020
  • In the diagnosis of lung cancer, tumor size is measured as the longest diameter of the tumor over all slices of the CT scan. To estimate tumor size accurately it would be better to measure the volume, but calculating the volume in the clinic has practical limitations. In this study, we propose an algorithm that segments lung cancer by applying a custom loss function combining focal loss and Dice loss to a U-Net model, which shows high performance on segmentation problems in chest CT images. Combinations of the parameter values in the custom loss function were compared against the results of the trained models. The proposed loss function achieved an F1 score of 88.77%, a precision of 87.31%, a recall of 90.30%, and an average precision of 0.827 at α=0.25, γ=4, β=0.7, showing good performance on lung cancer segmentation.
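
A minimal PyTorch sketch of a weighted focal-plus-Dice loss follows, using the reported parameter values α=0.25, γ=4, β=0.7. The exact form of the paper's custom loss is not given in the abstract, so β is assumed here to blend the two terms, and the U-Net itself is not reproduced.

```python
# Sketch of a combined focal + Dice loss for binary segmentation (assumed weighting).
import torch

def focal_dice_loss(logits, target, alpha=0.25, gamma=4.0, beta=0.7, eps=1e-6):
    """target is a {0,1} float tensor with the same shape as logits."""
    prob = torch.sigmoid(logits)
    # Binary focal loss: down-weights easy, well-classified pixels.
    pt = prob * target + (1 - prob) * (1 - target)        # p if y=1, 1-p if y=0
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (-alpha_t * (1 - pt).pow(gamma) * torch.log(pt.clamp(min=eps))).mean()
    # Dice loss: 1 minus the Dice coefficient over the batch.
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    # beta blends the two terms (assumption; the paper's exact weighting is not shown).
    return beta * focal + (1 - beta) * dice

# Hypothetical usage with per-voxel logits from a segmentation network:
logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
loss = focal_dice_loss(logits, target)
loss.backward()
```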

Restricting Answer Candidates Based on Taxonomic Relatedness of Integrated Lexical Knowledge Base in Question Answering

  • Heo, Jeong;Lee, Hyung-Jik;Wang, Ji-Hyun;Bae, Yong-Jin;Kim, Hyun-Ki;Ock, Cheol-Young
    • ETRI Journal
    • /
    • v.39 no.2
    • /
    • pp.191-201
    • /
    • 2017
  • This paper proposes an approach using taxonomic relatedness for answer-type recognition and type coercion in a question-answering system. We introduce a question analysis method for a lexical answer type (LAT) and semantic answer type (SAT) and describe the construction of a taxonomy linking them. We also analyze the effectiveness of type coercion based on the taxonomic relatedness of both ATs. Compared with the rule-based approach of IBM's Watson, our LAT detector, which combines rule-based and machine-learning approaches, achieves an 11.04% recall improvement without a sharp decline in precision. Our SAT classifier with a relatedness-based validation method achieves a precision of 73.55%. For type coercion using the taxonomic relatedness between both ATs and answer candidates, we construct an answer-type taxonomy that has a semantic relationship between the two ATs. In this paper, we introduce how to link heterogeneous lexical knowledge bases. We propose three strategies for type coercion based on the relatedness between the two ATs and answer candidates in this taxonomy. Finally, we demonstrate that this combination of individual type coercion creates a synergistic effect.
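
As a toy illustration of type coercion by taxonomic relatedness (the paper's taxonomy, answer types, and relatedness measure are not reproduced here), the sketch below filters answer candidates by the path distance between their type and the expected answer type in a tiny hypothetical taxonomy.

```python
# Illustrative only: a toy taxonomy as a child->parent map, with type coercion keeping
# a candidate when its type is taxonomically close to the expected answer type.
# The taxonomy, type names, and threshold are hypothetical, not the paper's resource.
PARENT = {
    "Person": "Entity", "Scientist": "Person", "Politician": "Person",
    "Location": "Entity", "City": "Location", "Country": "Location",
}

def ancestors(node):
    chain = [node]
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def taxonomic_relatedness(a, b):
    """Inverse of the path distance through the lowest common ancestor (1.0 = same node)."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    common = next((n for n in anc_a if n in anc_b), None)
    if common is None:
        return 0.0
    dist = anc_a.index(common) + anc_b.index(common)
    return 1.0 / (1.0 + dist)

def coerce(candidates, expected_type, threshold=0.5):
    return [c for c, ctype in candidates if taxonomic_relatedness(ctype, expected_type) >= threshold]

cands = [("Marie Curie", "Scientist"), ("Paris", "City"), ("Angela Merkel", "Politician")]
print(coerce(cands, "Person"))   # keeps only the Person-descended candidates
```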

Precision Position Control of PMSM using Neural Observer and Parameter Compensator

  • Ko, Jong-Sun;Seo, Young-Ger;Kim, Hyun-Sik
    • Journal of Power Electronics
    • /
    • v.8 no.4
    • /
    • pp.354-362
    • /
    • 2008
  • This paper presents a neural load torque compensation method composed of a deadbeat load torque observer and gain compensation by a parameter estimator; as a result, the PMSM (permanent magnet synchronous motor) achieves more precise position control. To reduce the noise effect, a post-filter implemented by an MA (moving average) process is used. A parameter compensator with an RLSM (recursive least square method) parameter estimator is adopted to increase the performance of the load torque observer and the main controller. The parameter estimator is combined with a high-performance neural load torque observer to resolve these problems. The neural network is trained online and consists of feed-forward recall and error back-propagation training. During normal operation, the input-output response is sampled, and the weights are retrained several times by the error back-propagation method at each sample period to accommodate possible variations in the parameters or the load torque. As a result, the proposed control system is robust and precise against load torque and parameter variation. Its stability and usefulness are verified by computer simulation and experiment.
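
The MA post-filter mentioned above can be illustrated with a few lines of NumPy; the window length and the signal are arbitrary placeholders, not values from the paper.

```python
# A short NumPy sketch of the moving-average (MA) post-filter idea: smoothing a noisy
# observer output before it is used downstream. Window length is arbitrary here.
import numpy as np

def moving_average(x, window=8):
    """Causal MA filter: each output sample is the mean of the last `window` inputs."""
    kernel = np.ones(window) / window
    # A full convolution truncated to the input length keeps the filter causal.
    return np.convolve(x, kernel, mode="full")[: len(x)]

t = np.linspace(0, 1, 500)
torque_estimate = np.sin(2 * np.pi * 2 * t) + 0.2 * np.random.default_rng(0).standard_normal(500)
smoothed = moving_average(torque_estimate, window=16)
```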

A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.43 no.1
    • /
    • pp.16-25
    • /
    • 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automatic precision agriculture, which includes weed control and crop status monitoring. To facilitate the segmentation, color indices have predominantly been used to transform the color image into a gray-scale image, and a thresholding technique such as the Otsu method is then applied to distinguish vegetation from the background. An obvious demerit of threshold-based segmentation is that each pixel is classified as vegetation or background solely from its own color feature, without taking into account the color features of its neighboring pixels. This paper presents a new pixel-based segmentation method that employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The input to the neural network for each pixel is a two-dimensional patch of gray-level values surrounding the pixel. To generate the gray-scale image from a raw RGB color image, the well-known color index Excess Green minus Excess Red was used. Experimental results on 80 field images of 4 vegetation species demonstrate the superiority of the neural network over existing threshold-based segmentation methods in terms of accuracy, precision, recall, and harmonic mean.
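
For reference, the threshold-based baseline described above (Excess Green minus Excess Red followed by Otsu thresholding) can be sketched as follows; the image file name is a placeholder, and the MLP classifier proposed in the paper is not reproduced.

```python
# Sketch of the color-index baseline: compute the Excess Green minus Excess Red index
# and threshold it with Otsu's method. 'field.png' is a placeholder file name.
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

rgb = io.imread("field.png").astype(np.float64) / 255.0
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

exg = 2 * g - r - b          # Excess Green
exr = 1.4 * r - g            # Excess Red
exgr = exg - exr             # ExG - ExR gray-scale index image

mask = exgr > threshold_otsu(exgr)   # True where the pixel is classified as vegetation
```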

Evaluation of Feature Extraction and Matching Algorithms for the use of Mobile Application (모바일 애플리케이션을 위한 특징점 검출 연산자의 비교 분석)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology
    • /
    • v.14 no.4
    • /
    • pp.56-60
    • /
    • 2015
  • Mobile devices such as smartphones and tablets are becoming increasingly capable in terms of processing power. Although they are already used in computer vision, comparable measurements of the popular feature extraction algorithms on such devices have not yet been made. Local feature descriptors are widely used in many computer vision applications, and various methods have been proposed recently. While many evaluations have focused on aspects of local features such as matching accuracy, there are no comparisons that consider the speed trade-offs of recent descriptors such as ORB, FAST, and BRISK. In this paper, we provide a performance evaluation of feature descriptors and compare their matching precision and speed in a KD-tree setup with efficient computation of the Hamming distance. The experimental results show that recently proposed descriptors such as ORB and FAST outperform established descriptors such as SIFT and SURF in both speed-up efficiency and precision/recall.
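
A minimal OpenCV sketch of Hamming-distance matching with a binary descriptor such as ORB follows; it uses a brute-force matcher rather than the KD-tree setup evaluated in the paper, and the image file names are placeholders.

```python
# Binary-descriptor matching with the Hamming distance (brute-force, cross-checked),
# as a stand-in for the paper's KD-tree setup. Image file names are placeholders.
import cv2

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance = {matches[0].distance if matches else None}")
```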

A Multi-Stage Approach to Secure Digital Image Search over Public Cloud using Speeded-Up Robust Features (SURF) Algorithm

  • AL-Omari, Ahmad H.;Otair, Mohammed A.;Alzwahreh, Bayan N.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.12
    • /
    • pp.65-74
    • /
    • 2021
  • Digital image processing and retrieval have become increasingly popular on the Internet and are receiving more attention from various multimedia fields, which places additional privacy requirements on efficient image matching techniques. Several search methods have therefore been developed for cases where confidential images are matched between pairs of security agencies, but most of them are limited in either cost or precision. This study proposes a secure and efficient method that preserves image privacy and confidentiality between two communicating parties. To retrieve an image, a feature vector is extracted from the given query image, and similarities with the feature vectors of the stored database images are then calculated to retrieve the matched images, based on an indexing scheme and a matching strategy. The features are extracted over a public cloud with the Speeded-Up Robust Features (SURF) detector for secure content-based image retrieval, and the database is protected with the Honey Encryption algorithm. The purpose of using an encrypted image database is to enable accurate searching through encrypted documents without decryption; progress in this area helps protect the privacy of sensitive data stored in the cloud. Experimental results on a well-known image set show that the proposed methodology achieves a noticeable enhancement in precision, recall, F-measure, and execution time.
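
The retrieval metrics reported above (precision, recall, F-measure) can be computed as in the sketch below; the SURF feature extraction and Honey Encryption steps of the proposed pipeline are not reproduced, and the ID sets are made up.

```python
# Illustrative metric computation for image retrieval: precision, recall, and F-measure
# of a retrieved set against the set of images truly relevant to the query.

def retrieval_metrics(retrieved, relevant):
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f_measure = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f_measure

# Hypothetical example: 8 of the 10 retrieved images are among the 12 relevant ones.
p, r, f = retrieval_metrics(set(range(10)), set(range(2, 14)))
print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")
```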

A Computer-Aided Diagnosis of Brain Tumors Using a Fine-Tuned YOLO-based Model with Transfer Learning

  • Montalbo, Francis Jesmar P.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4816-4834
    • /
    • 2020
  • This paper proposes transfer learning and fine-tuning techniques for a deep learning model to detect three distinct brain tumors from Magnetic Resonance Imaging (MRI) scans. In this work, the recent YOLOv4 model was trained using a collection of 3,064 T1-weighted Contrast-Enhanced (CE)-MRI scans that were pre-processed and labeled for the task. The model was trained as the partial 29-layer YOLOv4-Tiny and fine-tuned to work optimally and run efficiently on most platforms with reliable performance. With the help of transfer learning, the model had the initial leverage to train faster with pre-trained weights from the COCO dataset, generating a robust set of features required for brain tumor detection. The results yielded the highest mean average precision of 93.14%, a precision of 90.34%, a recall of 88.58%, and an F1-score of 89.45%, outperforming previous versions of the YOLO detection models and other studies that used bounding-box detection for the same task, such as Faster R-CNN. In conclusion, YOLOv4-Tiny can detect brain tumors automatically and efficiently at a rapid pace with the help of proper fine-tuning and transfer learning. This work mainly contributes to assisting medical experts in the diagnostic process for brain tumors.
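
As a generic illustration of how bounding-box detections are scored with precision and recall at an IoU threshold (not the paper's exact mAP protocol), consider the following sketch with made-up boxes.

```python
# Generic detection scoring: a detection counts as a true positive when its IoU with an
# unmatched ground-truth box reaches a threshold (0.5 here). Boxes below are made up.

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(detections, ground_truth, thresh=0.5):
    matched, tp = set(), 0
    for det in detections:                      # assume detections sorted by confidence
        best = max(range(len(ground_truth)),
                   key=lambda i: iou(det, ground_truth[i]) if i not in matched else -1,
                   default=None)
        if best is not None and best not in matched and iou(det, ground_truth[best]) >= thresh:
            matched.add(best)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

dets = [(10, 10, 50, 50), (60, 60, 90, 90)]
gts = [(12, 12, 48, 52), (200, 200, 240, 240)]
print(precision_recall(dets, gts))   # (0.5, 0.5): one detection matches, one ground truth missed
```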