• Title/Summary/Keyword: Classification accuracy


Detection of Titanium bearing Myeonsan Formation in the Joseon Supergroup based on Spectral Analysis and Machine Learning Techniques (분광분석과 기계학습기법을 활용한 조선누층군 타이타늄 함유 면산층 탐지)

  • Park, Chanhyeok;Yu, Jaehyung;Oh, Min-Kyu;Lee, Gilljae;Lee, Giyeon
    • Economic and Environmental Geology
    • /
    • v.55 no.2
    • /
    • pp.197-207
    • /
    • 2022
  • This study investigated spectroscopic exploration of the Myeonsan formation, the titanium (Ti) ore host rock in the Joseon supergroup, based on machine learning techniques. The mineral composition, Ti concentration, and spectral characteristics of the Myeonsan and non-Myeonsan formations of the Joseon supergroup were analyzed. The Myeonsan formation contains a relatively larger quantity of opaque minerals along with quartz and clay minerals. PXRF analysis revealed that the Ti concentration of the Myeonsan formation is at least 10 times larger than that of the other formations, with a bi-modal distribution. The bi-modal distribution is caused by a sandy layer with high Ti concentration and a muddy layer with relatively lower Ti concentration. The spectral characteristics of the Myeonsan formation are manifested by Fe oxides in the near-infrared bands and clay minerals in the shortwave-infrared bands. Ti exploration is expected to be more effective at detecting the host rock than the Ti ore itself, because ilmenite does not have characteristic spectral features. Random-forest machine learning classification detected the Myeonsan formation with 85% accuracy and an overall accuracy of 97%, where the spectral features of iron oxides and clay minerals played an important role. This indicates that spectral analysis can detect the Ti host rock effectively and can contribute to UAV-based remote sensing for Ti exploration.
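The gap between the 85% detection rate for the target formation and the 97% overall accuracy reflects the difference between per-class recall and overall accuracy. A minimal sketch, using invented confusion-matrix counts (not the study's data) chosen to reproduce those two figures:

```python
def accuracy_metrics(tp, fn, fp, tn):
    """Overall accuracy and target-class recall from binary
    confusion-matrix counts (positive class = target formation)."""
    total = tp + fn + fp + tn
    overall = (tp + tn) / total  # fraction of all samples classified correctly
    recall_pos = tp / (tp + fn)  # detection rate of the target class alone
    return overall, recall_pos

# Hypothetical counts: when the target class is rare, its recall can be
# noticeably lower than the headline overall accuracy.
overall, recall = accuracy_metrics(tp=17, fn=3, fp=3, tn=177)
print(f"overall={overall:.2f}, target-class recall={recall:.2f}")
```

This is why abstracts like this one report both numbers: overall accuracy alone can hide weak detection of a minority class.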

Development and Validation of Figure-Copy Test for Dementia Screening (치매 선별을 위한 도형모사검사 개발 및 타당화)

  • Kim, Chobok;Heo, Juyeon;Hong, Jiyun;Yi, Kyongmyon;Park, Jungkyu;Shin, Changhwan
    • 한국노년학
    • /
    • v.40 no.2
    • /
    • pp.325-340
    • /
    • 2020
  • Early diagnosis and intervention of dementia is critical to minimize future risk and cost for patients and their families. The purpose of this study was to develop and validate the Figure-Copy Test (FCT), a new dementia screening test that can measure neurological damage and cognitive impairment, and to examine whether the grading processes for screening can be automated through a machine learning procedure using FCT images. To this end, the FCT, the Korean version of the MMSE for Dementia Screening (MMSE-DS), and the Clock Drawing Test were administered to a total of 270 participants from normal and cognitively impaired elderly groups. Results demonstrated that FCT scores showed high internal consistency and significant correlation coefficients with the other two test scores. Discriminant analyses showed that the classification accuracies for the normal and impaired groups using the FCT were 90.8% and 77.1%, respectively, which were relatively higher than those of the other two tests. Importantly, we identified that participants whose MMSE-DS scores were higher than the cutoff but who showed lower scores on the FCT were successfully screened out through clinical diagnosis. Finally, machine learning using the FCT image data showed an accuracy of 73.70%. In conclusion, our results suggest that the FCT, a newly developed drawing test, can be easily implemented for efficient dementia screening.
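The discriminant analysis above assigns each participant to the group their test score best matches. A toy nearest-mean discriminant on a single score illustrates the idea; the scores below are invented and this is a stand-in for, not a reproduction of, the study's analysis:

```python
def nearest_mean_classify(train_scores, train_labels, score):
    """Assign a score to the group whose training mean is closer —
    a one-dimensional stand-in for a discriminant analysis."""
    groups = sorted(set(train_labels))
    means = {g: sum(s for s, l in zip(train_scores, train_labels) if l == g)
                / train_labels.count(g)
             for g in groups}
    return min(groups, key=lambda g: abs(score - means[g]))

# Hypothetical FCT-like scores: higher = better copying performance.
scores = [28, 30, 27, 29, 12, 15, 10, 14]
labels = ["normal"] * 4 + ["impaired"] * 4
preds = [nearest_mean_classify(scores, labels, s) for s in scores]
acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
print(acc)  # 1.0 on this cleanly separated toy data
```

Real screening data overlaps far more than this toy set, which is why the reported group accuracies are 90.8% and 77.1% rather than 100%.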

The Detection of Online Manipulated Reviews Using Machine Learning and GPT-3 (기계학습과 GPT3를 시용한 조작된 리뷰의 탐지)

  • Chernyaeva, Olga;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.347-364
    • /
    • 2022
  • Fraudulent companies or sellers strategically manipulate reviews to influence customers' purchase decisions; therefore, the reliability of reviews has become crucial for customer decision-making. Since customers increasingly rely on online reviews to search for more detailed information about products or services before purchasing, many researchers focus on detecting manipulated reviews. However, the main problem in detecting manipulated reviews is the difficulty of obtaining enough manipulated-review data to apply machine learning techniques. Also, the number of manipulated reviews is small compared with the number of non-manipulated reviews, so a class imbalance problem occurs. The class with fewer examples is under-represented, which can hamper a model's accuracy; machine learning methods therefore suffer from the class imbalance problem, and solving it is important for building an accurate model for detecting manipulated reviews. Thus, we propose an OpenAI-based review generation model to solve the manipulated-review imbalance problem, thereby enhancing the accuracy of manipulated-review detection. In this research, we applied the novel autoregressive language model GPT-3 to generate reviews based on manipulated reviews. Moreover, we found that applying the GPT-3 model for oversampling manipulated reviews can recover a satisfactory portion of the performance losses and shows better classification performance (logit, decision tree, neural networks) than traditional oversampling models such as random oversampling and SMOTE.
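Random oversampling, the simplest of the baselines the study compares against, just duplicates minority-class examples until the classes balance. A minimal sketch with invented review data:

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class has
    as many examples as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(v) for v in by_class.values())
    out_s, out_l = [], []
    for l, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_s.append(s)
            out_l.append(l)
    return out_s, out_l

reviews = ["great!", "fake praise", "ok product", "paid review", "loved it"]
flags = [0, 1, 0, 1, 0]          # 1 = manipulated (minority class)
s2, l2 = random_oversample(reviews, flags)
print(l2.count(0), l2.count(1))  # 3 3
```

The study's contribution is to replace these verbatim duplicates with new GPT-3-generated minority-class reviews, which adds variety the classifier can learn from instead of repeated copies.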

Development of Graph based Deep Learning methods for Enhancing the Semantic Integrity of Spaces in BIM Models (BIM 모델 내 공간의 시멘틱 무결성 검증을 위한 그래프 기반 딥러닝 모델 구축에 관한 연구)

  • Lee, Wonbok;Kim, Sihyun;Yu, Youngsu;Koo, Bonsang
    • Korean Journal of Construction Engineering and Management
    • /
    • v.23 no.3
    • /
    • pp.45-55
    • /
    • 2022
  • BIM models allow building spaces to be instantiated and recognized as unique objects independently of model elements. These instantiated spaces provide the required semantics that can be leveraged for building code checking, energy analysis, and evacuation route analysis. However, these spaces or rooms need to be designated manually, which in practice leads to errors and omissions. Thus, most BIM models today do not guarantee the semantic integrity of space designations, limiting their potential applicability. Recent studies have explored ways to automate space allocation in BIM models using artificial intelligence algorithms, but they are limited in scope and achieve relatively low classification accuracy. This study explored the use of Graph Convolutional Networks, an algorithm tailored specifically for graph data structures. The goal was to utilize not only geometry information but also the semantic relational data between spaces and elements in the BIM model. Results of the study confirmed that accuracy was improved by about 8% compared to algorithms that used only the geometric attributes of individual spaces.
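The key mechanism of a Graph Convolutional Network is that each node's features are updated from its neighbors through a normalized adjacency matrix, so relational structure (here, which spaces adjoin which elements) enters the prediction alongside the node features themselves. A minimal single-layer sketch following the standard Kipf-Welling formulation, with a toy graph that is not from the study:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, propagate node features, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])            # self-loops keep own features
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ H @ W, 0)      # ReLU activation

# Toy graph: 4 nodes (spaces/elements), 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))   # node feature matrix
W = np.random.default_rng(1).normal(size=(3, 2))   # learned weights
print(gcn_layer(A, H, W).shape)  # (4, 2)
```

Stacking such layers lets information flow across multi-hop neighborhoods, which is how relational context beyond pure geometry can raise space-classification accuracy.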

Diagnosis of Residual Tumors after Unplanned Excision of Soft-Tissue Sarcomas: Conventional MRI Features and Added Value of Diffusion-Weighted Imaging

  • Jin, Kiok;Lee, Min Hee;Yoon, Min A;Kim, Hwa Jung;Kim, Wanlim;Chee, Choong Geun;Chung, Hye Won;Lee, Sang Hoon;Shin, Myung Jin
    • Investigative Magnetic Resonance Imaging
    • /
    • v.26 no.1
    • /
    • pp.20-31
    • /
    • 2022
  • Purpose: To assess conventional MRI features associated with residual soft-tissue sarcomas following unplanned excision (UPE), and to compare the diagnostic performance of conventional MRI alone with that of MRI including diffusion-weighted imaging (DWI) for residual tumors after UPE. Materials and Methods: This retrospective study included 103 consecutive patients who had undergone UPE of a soft-tissue sarcoma followed by wide excision of the tumor bed between December 2013 and December 2019 and who underwent conventional MRI and DWI. The presence of focal enhancement, soft-tissue edema, fascial enhancement, fluid collections, and hematoma on MRI including DWI was reviewed by two musculoskeletal radiologists. We used classification and regression tree (CART) analysis to identify the most significant MRI features, and compared the diagnostic performance of conventional MRI with and without DWI using the McNemar test. Results: Residual tumors were present in 69 (66.9%) of 103 patients, whereas no tumors were found in 34 (33.1%) patients. CART showed focal enhancement to be the most significant predictor of residual tumors and correctly predicted residual tumors in 81.6% (84/103) and 78.6% (81/103) of patients for Reader 1 and Reader 2, respectively. Compared with conventional MRI alone, the addition of DWI for Reader 1 improved specificity (32.8% vs. 56.0%, 33.3% vs. 63.0%, P < 0.05) and decreased sensitivity (96.8% vs. 84.1%, 98.7% vs. 76.7%, P < 0.05), without a difference in diagnostic accuracy (76.7% vs. 74.8%, 72.9% vs. 71.4%), both overall and in subgroups. For Reader 2, diagnostic performance was not significantly different between the two MRI sets (P > 0.05). Conclusion: After UPE of a soft-tissue sarcoma, the presence or absence of focal enhancement was the most significant MRI finding predicting residual tumors. MRI provided good diagnostic accuracy for detecting residual tumors, and the addition of DWI to conventional MRI may increase specificity.
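The McNemar test used here compares two classifiers on the same patients by looking only at the discordant cells: the count b where only the first method is correct and c where only the second is. A minimal sketch of the continuity-corrected statistic, with invented counts rather than the study's data:

```python
def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-square from the two discordant
    cell counts of a paired 2x2 comparison table."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts for conventional MRI vs. MRI + DWI.
stat = mcnemar_statistic(b=10, c=3)
print(round(stat, 3))  # (|10 - 3| - 1)^2 / 13 = 36/13 ≈ 2.769
```

The statistic is compared against a chi-square distribution with one degree of freedom; the concordant cells drop out because both methods agree there.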

A Study on the Application of the Price Prediction of Construction Materials through the Improvement of Data Refactor Techniques (Data Refactor 기법의 개선을 통한 건설원자재 가격 예측 적용성 연구)

  • Lee, Woo-Yang;Lee, Dong-Eun;Kim, Byung-Soo
    • Korean Journal of Construction Engineering and Management
    • /
    • v.24 no.6
    • /
    • pp.66-73
    • /
    • 2023
  • The construction industry suffers losses due to failures in demand forecasting caused by price fluctuations in construction raw materials, increased user costs due to project cost changes, and the lack of a forecasting system. Accordingly, it is necessary to improve the accuracy of construction raw material price forecasting. This study aims to predict the prices of construction raw materials and verify applicability through an improvement of the Data Refactor technique. To improve prediction accuracy, the existing Data Refactor approach of classifying data into low and high frequencies and applying ARIMAX was improved to a frequency-oriented classification using ARIMA, producing short-term (3 months ahead), mid-term (6 months ahead), and long-term (12 months ahead) price forecasts for six construction raw material items such as lumber and cement. As a result of the analysis, the predictions based on the improved Data Refactor technique reduced the error and better reflected the variability. Therefore, it is expected that budgets can be managed effectively by predicting the prices of construction raw materials more accurately through the Data Refactor technique proposed in this study.
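The Data Refactor technique itself is the paper's own method, but the underlying forecasting step is ARIMA-family time-series modeling. As a loose, hypothetical stand-in, the sketch below fits the simplest such model, a mean-centered AR(1), by least squares and rolls it forward for a multi-month horizon; the price series is invented:

```python
def ar1_forecast(series, steps):
    """Fit a mean-centered AR(1) by least squares and forecast `steps`
    periods ahead — a minimal stand-in for a full ARIMA model."""
    mu = sum(series) / len(series)
    x = [v - mu for v in series]               # deviations from the mean
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    phi = num / den                            # lag-1 autoregression coefficient
    last, out = x[-1], []
    for _ in range(steps):
        last = phi * last                      # each step decays toward the mean
        out.append(mu + last)
    return out

# Hypothetical monthly price index for one raw material.
prices = [100, 102, 101, 104, 106, 105, 108, 110]
print(ar1_forecast(prices, steps=3))  # 3-month-ahead forecast path
```

A production forecast would add differencing, order selection, and exogenous regressors (the X in ARIMAX), which is where the study's frequency-based classification of the input series comes in.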

Accuracy of F-18 FDG PET/CT in Preoperative Assessment of Cervical Lymph Nodes in Head and Neck Squamous Cell Cancer: Comparison with CT/MRI (두경부 편평상피암 환자에서 수술 전 경부림프절 전이 평가에 대한 F-18 FDG PET/CT의 정확도: CT/MRI와의 비교)

  • Choi, Seung-Jin;Byun, Sung-Su;Park, Sun-Won;Kim, Young-Mo;Hyun, In-Young
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.40 no.6
    • /
    • pp.309-315
    • /
    • 2006
  • Purpose: Accurate evaluation of cervical lymph node (LN) metastasis of head and neck squamous cell cancer (SCC) is important for treatment planning. We evaluated the diagnostic accuracy of F-18 fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) for the detection of cervical LN metastasis of head and neck SCC and performed a retrospective comparison with CT/MRI findings. Materials & Methods: Seventeen patients with pathologically proven head and neck SCC underwent F-18 FDG PET/CT and CT/MRI within 4 weeks before surgery. We recorded lymph node metastases according to the neck level system of imaging-based nodal classification. F-18 FDG PET/CT images were analyzed visually to assess regional tracer uptake in LNs. We analyzed the differences in sensitivity and specificity between F-18 FDG PET/CT and CT/MRI using the Chi-square test. Results: Among the 17 patients, a total of 123 LN levels were dissected, 29 of which showed metastatic involvement. The sensitivity and specificity of F-18 FDG PET/CT for detecting cervical LN metastasis on a level-by-level basis were 69% (20/29) and 99% (93/94). The sensitivity and specificity of CT/MRI were 62% (18/29) and 96% (90/94). There was no significant difference in diagnostic accuracy between F-18 FDG PET/CT and CT/MRI. Interestingly, F-18 FDG PET/CT additionally detected a double primary tumor (hepatocellular carcinoma) and a rib metastasis. Conclusion: There was no statistically significant difference in diagnostic accuracy between F-18 FDG PET/CT and CT/MRI for the detection of cervical LN metastasis of head and neck SCC. The low sensitivity of F-18 FDG PET/CT was due to limited resolution for small metastatic deposits.
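The sensitivity and specificity figures quoted above follow directly from the level-by-level counts given in the abstract (29 metastatic and 94 non-metastatic levels). A quick check:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity (as percentages) from per-level counts:
    tp/fn among truly metastatic levels, tn/fp among truly negative ones."""
    return 100 * tp / (tp + fn), 100 * tn / (tn + fp)

# Counts taken directly from the abstract:
pet = sens_spec(tp=20, fn=9, tn=93, fp=1)   # PET/CT: 20/29 and 93/94
ct = sens_spec(tp=18, fn=11, tn=90, fp=4)   # CT/MRI: 18/29 and 90/94
print([round(v) for v in pet], [round(v) for v in ct])  # [69, 99] [62, 96]
```

Working from the raw counts rather than the rounded percentages is what the Chi-square comparison in the Methods section requires.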

Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.103-128
    • /
    • 2021
  • Recently, investors' interest and the dissemination of stock-related information have been considered significant factors that explain stock returns and volume. In addition, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, it is difficult to accurately predict future stock returns and volatility due to macro-environmental and market uncertainty. Market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, so research is needed to mitigate it. Hence, the purpose of this study is to propose a machine learning model that predicts the volatility of a company's stock price by using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto Regression) and the deep neural network LSTM (Long Short-Term Memory) to predict the stock market, and compare stock price prediction performance based on keyword search volume according to the technology's social acceptance stage. In addition, we analyze sub-technologies of artificial intelligence to examine how the search volume of detailed technology keywords changes with the technology acceptance stage and how interest in specific technologies affects the stock market forecast. For this purpose, the words artificial intelligence, deep learning, and machine learning were selected as keywords. Next, we investigated how often each keyword appeared in online documents each week for five years, from January 1, 2015, to December 31, 2019. The stock price and transaction volume data of KOSDAQ-listed companies were also collected and used for analysis. As a result, we found that the keyword search volume for artificial intelligence technology increased as the social acceptance of artificial intelligence technology increased.
In particular, starting from the AlphaGo shock, the keyword search volume for artificial intelligence itself and for detailed technologies such as machine learning and deep learning increased. Stock price prediction based on keyword search volume showed high accuracy, and the acceptance stage showing the best prediction performance differed for each keyword. When stock prices were predicted from keyword search volume for each social acceptance stage of the artificial intelligence technologies classified in this study, the prediction accuracy of the awareness stage was found to be the highest. The prediction accuracy also differed according to the keywords used in the prediction model for each social acceptance stage. Therefore, when constructing a stock price prediction model using technology keywords, it is necessary to consider the social acceptance of the technology and its sub-technology classification. The results of this study provide the following implications. First, to predict the return on investment for companies based on innovative technology, it is most important to capture the awareness stage, in which public interest rapidly increases in the social acceptance of the technology. Second, the fact that the change in keyword search volume and the accuracy of the prediction model vary with the social acceptance of technology should be considered when developing decision support systems for investment, such as the big-data-based robo-advisors recently introduced by the financial sector.
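Both VAR and LSTM consume the weekly series in the same supervised layout: each week's target values paired with the previous few weeks of every input series. A minimal sketch of that lag-matrix construction, with invented search-volume numbers:

```python
def make_lagged(series_dict, lags):
    """Build (X, y) pairs where each row of X holds the previous `lags`
    weeks of every series and y holds the current week's values.
    All series are assumed to be equal-length weekly counts."""
    names = sorted(series_dict)
    n = len(series_dict[names[0]])
    X, y = [], []
    for t in range(lags, n):
        row = [series_dict[k][t - l] for l in range(1, lags + 1) for k in names]
        X.append(row)
        y.append([series_dict[k][t] for k in names])
    return X, y

# Hypothetical weekly search volumes for two keywords.
data = {"ai": [5, 7, 9, 12, 15], "deep learning": [1, 2, 2, 4, 6]}
X, y = make_lagged(data, lags=2)
print(len(X), len(X[0]), len(y))  # 3 4 3
```

A VAR fits one linear map from each row of X to y, while an LSTM reads the same lags as a sequence; comparing the two on identical inputs is what makes the study's accuracy comparison across acceptance stages meaningful.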

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Networks that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived people's interest in neural networks. The success of Convolutional Neural Networks rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (such as one trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image. When the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096 + 4096 + 1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they come from the same ConvNet. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved. To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with 2.8%, 2.1%, and 3.1% accuracy improvements on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
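The concatenate-then-PCA step of the pipeline can be sketched compactly: stack the per-layer feature matrices column-wise, center, and project onto the top principal components via SVD. The toy block shapes below stand in for the real FC6/FC7/FC8 activations (4096 + 4096 + 1000 dimensions):

```python
import numpy as np

def concat_pca(feature_blocks, k):
    """Concatenate per-layer feature matrices column-wise, center each
    dimension, and project onto the top-k principal components via SVD."""
    X = np.hstack(feature_blocks)            # (n_images, total_dims)
    Xc = X - X.mean(axis=0)                  # PCA requires centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # (n_images, k) reduced features

# Toy stand-ins for three fully connected layers' activations.
rng = np.random.default_rng(0)
blocks = [rng.normal(size=(10, 8)),
          rng.normal(size=(10, 8)),
          rng.normal(size=(10, 4))]
reduced = concat_pca(blocks, k=5)
print(reduced.shape)  # (10, 5)
```

The reduced matrix then feeds a standard classifier; PCA removes the cross-layer redundancy the abstract notes, since all three feature blocks derive from the same network.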

Development of the Quotient Equation of the Hypothesis Evaluating Ability by Analysis of the Pre-service Elementary School Teachers' Knowledges for Evaluating Hypothesis on a Woodpecker Task (딱따구리 과제에서 초등예비 교사들의 가설 평가 지식에 대한 분석을 통한 가설 평가 능력 지수 산출식의 개발)

  • Lee, Jun-Ki;Lee, Il-Sun;Kwon, Yong-Ju
    • Journal of Korean Elementary Science Education
    • /
    • v.27 no.1
    • /
    • pp.49-59
    • /
    • 2008
  • The purpose of this study was to develop a quotient equation that could quantitatively evaluate an individual's hypothesis-evaluating ability. The equation was derived by analyzing the types of hypothesis-evaluation knowledge generated by 15 pre-service elementary school teachers and by constructing the quotient equation for hypothesis-evaluating ability. The hypothesis evaluation task administered to the subjects dealt with woodpecker behavior. The task began by generating a hypothesis for the following question: 'Why don't woodpeckers suffer brain damage after pecking wood?' Subjects were then asked to design and perform experiments to test the hypothesis. Finally, they were asked to evaluate their own hypothesis based on the collected, analyzed, and interpreted data. The knowledge generated from their hypothesis evaluation was analyzed along 4 major categories (richness, type, level, and accuracy). Then, a general equation that could quantitatively and systematically evaluate an individual's hypothesis-evaluating ability was derived by an inductive process. After combining all the categories, the following quotient equation was proposed: $VQ = \sum(TE_n \times AE_n) \times LE$. According to these results, the woodpecker task and the hypothesis-evaluating ability quotient equation (VQ) developed in this study can be applied practically to measure students' ability to evaluate scientific hypotheses.
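The quotient equation combines per-item type and accuracy scores multiplicatively, sums across knowledge items, and scales by a level score. A direct transcription, with invented sample scores (the abstract does not specify the scoring scales):

```python
def hypothesis_quotient(te, ae, le):
    """VQ = (sum over items n of TE_n * AE_n) * LE, transcribing the
    equation proposed in the abstract; te and ae are per-item scores
    for type and accuracy, le a single level score."""
    assert len(te) == len(ae), "one type score and one accuracy score per item"
    return sum(t * a for t, a in zip(te, ae)) * le

# Hypothetical scores for a subject with three knowledge items.
vq = hypothesis_quotient(te=[2, 3, 1], ae=[1, 2, 2], le=3)
print(vq)  # (2*1 + 3*2 + 1*2) * 3 = 30
```

Because LE multiplies the whole sum, the level category acts as a global weight on the subject's combined type-accuracy profile rather than contributing per item.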
