• Title/Summary/Keyword: Classification accuracy


Priority Analysis for Consumers' Purchasing Factors of Seafood Online Using AHP Method (온라인 플랫폼을 활용한 수산식품 구매요인 우선순위 분석: AHP 기법을 활용하여)

  • Jeong, Hyun-Ki;Kee, Hae-Kyung;Park, Se-Hyun
    • Asia-Pacific Journal of Business
    • /
    • v.13 no.3
    • /
    • pp.449-461
    • /
    • 2022
  • Purpose - The purpose of this study is to explore the factors consumers prioritize when purchasing seafood online. The originality of the study lies in adopting an AHP-based approach to analyzing the prioritized purchasing factors for seafood online. Design/methodology/approach - A survey was conducted targeting Korean consumers who have purchased seafood online. The AHP method was applied to rank the factors consumers prioritize before making a decision. Findings - First, the product factor ranked first among the high-level factors, which also included delivery service, seller, and online platform. Second, sanitation, taste, and country of origin ranked first, second, and third, respectively, within the product factors. Third, safe delivery, timeliness, and information accuracy ranked first, second, and third, respectively, within the delivery factors. Fourth, consumer reviews, consumer response ability, and promotion ranked first, second, and third within the seller factors. Fifth, personal information management system, credibility, and user-friendliness ranked first, second, and third within the online platform factors. Research implications or Originality - To activate the online seafood market, it is crucial to assure consumers that seafood is managed in a sanitary way from the production site to the table. Existing government programs such as the seafood traceability system, HACCP, and cold-chain infrastructure need improvement. Because seafood is highly perishable, delivery factors matter when purchasing online, and online platforms need to continue improving their delivery service. Seafood products are mostly unbranded and lack objective information about their properties; creating quality classifications and seafood brands is likely to help consumers choose seafood online.
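The core AHP step described above can be sketched in a few lines: given a pairwise comparison matrix on Saaty's 1-9 scale, the row geometric-mean method approximates the priority weights. The matrix entries below are illustrative judgments for the four high-level factors, not the paper's survey data:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix
    using the geometric-mean (approximate eigenvector) method."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
    return gm / gm.sum()                       # normalize to sum to 1

# Hypothetical judgments: product vs delivery vs seller vs platform.
A = np.array([
    [1,   3,   5,   5],
    [1/3, 1,   3,   3],
    [1/5, 1/3, 1,   1],
    [1/5, 1/3, 1,   1],
])
w = ahp_weights(A)
labels = ["product", "delivery", "seller", "platform"]
ranking = sorted(zip(labels, w), key=lambda t: -t[1])
```

With any reciprocal matrix of this shape, the row dominated by large judgments (here, product) receives the largest weight, mirroring the paper's first finding.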

2D-MELPP: A two dimensional matrix exponential based extension of locality preserving projections for dimensional reduction

  • Xiong, Zixun;Wan, Minghua;Xue, Rui;Yang, Guowei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.9
    • /
    • pp.2991-3007
    • /
    • 2022
  • Two-dimensional locality preserving projections (2D-LPP) is an improved 2D-image algorithm that solves the small sample size (SSS) problem that locality preserving projections (LPP) meets. It is able to find a low-dimensional manifold mapping that not only preserves local information but also detects the manifold embedded in the original data space. Moreover, 2D-LPP is simple and elegant. However, inspired by comparison experiments between two-dimensional linear discriminant analysis (2D-LDA) and linear discriminant analysis (LDA), which indicated that matrix-based methods do not always perform better even when training samples are limited, we surmise that 2D-LPP may meet the same limitation as 2D-LDA, and we propose a novel matrix exponential method to enhance the performance of 2D-LPP. 2D-MELPP is equivalent to employing a distance diffusion mapping to transform the original images into a new space in which the margins between labels are broadened, which is beneficial for solving classification problems. Nonetheless, the computational time complexity of 2D-MELPP is extremely high. In this paper, we replace some of the matrix multiplications with multiple multiplications to save memory cost and provide an efficient way of solving 2D-MELPP. We test it on public databases (a random 3D data set, the ORL and AR face databases, and the PolyU Palmprint database) and compare it with other 2D methods such as 2D-LDA and 2D-LPP and with 1D methods such as LPP and exponential locality preserving projections (ELPP), finding that it outperforms the others in recognition accuracy. We also compare different dimensions of the projection vector and record the time cost on the ORL, AR face, and PolyU Palmprint databases. The experimental results above prove that our advanced algorithm performs better on three independent public databases.
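The key building block of exponential-family extensions like this one is that the matrix exponential of a symmetric matrix is always symmetric positive definite, hence invertible, which sidesteps the singular scatter matrices of the SSS situation. A minimal sketch of that ingredient (only the expm trick, not the full 2D-MELPP procedure):

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition:
    exp(M) = V diag(exp(lambda)) V^T."""
    lam, V = np.linalg.eigh(M)
    return (V * np.exp(lam)) @ V.T

# A rank-deficient scatter-like matrix (the small-sample-size situation):
S = np.array([[2.0, 2.0],
              [2.0, 2.0]])   # rank 1, singular
E = expm_sym(S)
# E has eigenvalues exp(0)=1 and exp(4), so it is positive definite
# and invertible even though S itself was singular.
```

Since exp() maps every eigenvalue, including zeros, to a strictly positive number, the generalized eigenproblem underlying the projection becomes well-posed without discarding any subspace.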

A Comparative Study of Predictive Factors for Passing the National Physical Therapy Examination using Logistic Regression Analysis and Decision Tree Analysis

  • Kim, So Hyun;Cho, Sung Hyoun
    • Physical Therapy Rehabilitation Science
    • /
    • v.11 no.3
    • /
    • pp.285-295
    • /
    • 2022
  • Objective: The purpose of this study is to use logistic regression and decision tree analysis to identify the factors that affect success or failure in the national physical therapy examination, and to build and compare predictive models. Design: Secondary data analysis study. Methods: We analyzed 76,727 subjects from the physical therapy national examination data provided by the Korea Health Personnel Licensing Examination Institute. The target variable was pass or fail, and the input variables were gender, age, graduation status, and examination area. Frequency analysis, chi-square tests, binary logistic regression, and decision tree analysis were performed on the data. Results: In the logistic regression analysis, subjects in their 20s (odds ratio, OR=1, reference), those expected to graduate (OR=13.616, p<0.001), and those from the examination area of Jeju-do (OR=3.135, p<0.001) had a high probability of passing. In the decision tree, the predictive factors for a passing result had the greatest influence in the order of graduation status (χ2=12366.843, p<0.001) and examination area (χ2=312.446, p<0.001). Logistic regression analysis showed a specificity of 39.6% and a sensitivity of 95.5%, while decision tree analysis showed a specificity of 45.8% and a sensitivity of 94.7%. In classification accuracy, logistic regression and decision tree analysis showed 87.6% and 88.0% prediction rates, respectively. Conclusions: Both logistic regression and decision tree analysis were adequate to explain the predictive model. Additionally, whether actual test takers will pass the national physical therapy examination can be determined by applying the constructed prediction model and prediction rate.
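The sensitivity/specificity/accuracy triple reported above comes straight from confusion-matrix counts. A short sketch, with made-up counts chosen only to reproduce the shape of the logistic regression result (high sensitivity, low specificity), not the paper's raw data:

```python
def diagnostics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # passes correctly predicted
    specificity = tn / (tn + fp)          # fails correctly predicted
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Illustrative counts: a model far better at spotting passes than fails.
sens, spec, acc = diagnostics(tp=9550, fn=450, tn=792, fp=1208)
# sens = 0.955, spec = 0.396 -- the same asymmetry the study reports,
# which matters because most examinees pass, inflating raw accuracy.
```

This is why the authors report all three numbers: with a heavily imbalanced pass/fail split, accuracy alone would hide the weak detection of failing candidates.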

An Algorithm Study to Detect Mass Flow Controller Error in Plasma Deposition Equipment Using Artificial Immune System (인공면역체계를 이용한 플라즈마 증착 장비의 유량조절기 오류 검출 실험 연구)

  • You, Young Min;Jeong, Ji Yoon;Ch, Na Hyeon;Park, So Eun;Hong, Sang Jeen
    • Journal of the Semiconductor & Display Technology
    • /
    • v.20 no.4
    • /
    • pp.161-166
    • /
    • 2021
  • Errors in the semiconductor process are generated by changes in the state of the equipment; they usually arise when the state of the equipment changes or when parts that make up the equipment have flaws. In this investigation, we anticipated that aging of the mass flow controller in the plasma enhanced chemical vapor deposition SiO2 thin-film deposition process would cause a minute flow rate shift. In seven cases, Fourier transform infrared film quality analysis of the deposited thin film was used to characterize normal and abnormal processes. The plasma condition was monitored using optical emission spectrometry (OES) data as the flow rate changed during the procedure. Preprocessing was applied to the collected OES data before feeding it to the artificial immune system algorithm, which was then used for process diagnosis. Through comparisons between datasets, the learning algorithm compared classification accuracy and improved the method. It has been confirmed that data characterized as a normal process and abnormal processes with differing flow rates can be discriminated using the artificial immune system data mining method.
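The classic artificial immune system classifier is negative selection: generate candidate detectors and keep only those that match no normal ("self") sample, so anything a surviving detector matches is flagged as abnormal. A minimal 2-D sketch with deterministic toy stand-ins for preprocessed OES features (the abstract does not specify the paper's exact detector generation scheme):

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def negative_selection(candidates, self_set, radius):
    """Keep only detectors that match NO normal ('self') sample."""
    return [d for d in candidates if all(dist(d, s) > radius for s in self_set)]

def is_anomalous(x, detectors, radius):
    """A sample is abnormal if any surviving detector matches it."""
    return any(dist(x, d) <= radius for d in detectors)

# Toy 2-D features: normal runs lie on a short segment near (0.21, 0.21)
# (values are illustrative only, not real OES data).
self_set = [[0.2 + 0.001 * i, 0.21] for i in range(21)]
candidates = [[x / 10, y / 10] for x in range(11) for y in range(11)]
detectors = negative_selection(candidates, self_set, radius=0.1)

normal_flag = is_anomalous([0.21, 0.21], detectors, radius=0.1)  # normal run
drift_flag = is_anomalous([0.80, 0.80], detectors, radius=0.1)   # flow drift
```

A flow-rate drift moves the feature vector out of the self region and into detector territory, which is how differing MFC flow rates self-separate from the normal process.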

Development of an efficient method of radiation characteristic analysis using a portable simultaneous measurement system for neutron and gamma-ray

  • Jin, Dong-Sik;Hong, Yong-Ho;Kim, Hui-Gyeong;Kwak, Sang-Soo;Lee, Jae-Geun;Jung, Young-Suk
    • Analytical Science and Technology
    • /
    • v.35 no.2
    • /
    • pp.69-81
    • /
    • 2022
  • The method of measuring and classifying the energy category of neutrons directly, using raw data acquired through a CZT detector, is not satisfactory in terms of accuracy and efficiency because of the detector's poor energy resolution and low measurement efficiency. Moreover, this method of measuring and analyzing the characteristics of low-energy or low-activity gamma-ray sources might not be accurate and efficient in the case of neutrons because of various factors, such as the noise of the CZT detector itself and the influence of environmental radiation. We have therefore developed an efficient method of analyzing radiation characteristics using a neutron and gamma-ray analysis algorithm for the rapid and clear identification of the type, energy, and radioactivity of gamma-ray sources, as well as the detection and classification of the energy category (fast or thermal neutrons) of neutron sources, employing raw data acquired through a CZT detector. The neutron analysis algorithm is based on the fact that in the 558.6 keV energy-spectrum channel, emitted in the nuclear reaction 113Cd + 1n → 114Cd + γ in the CZT detector, there is a notable difference in detection information between a CZT detector without a PE moderator and a CZT detector with a PE moderator, whereas there is no significant difference between the two detectors in other energy-spectrum channels. In addition, the gamma-ray analysis algorithm uses the difference in the detection information of the CZT detector between the unique characteristic energy-spectrum channel of a gamma-ray source and other channels. This efficient method of analyzing radiation characteristics is expected to be useful for the rapid radiation detection and accurate information collection on radiation sources that are required to minimize radiation damage and manage accidents in national disaster situations, such as large-scale radioactivity leak accidents at nuclear power plants or nuclear material handling facilities.
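A hypothetical decision rule sketched from the abstract: the 558.6 keV line appears only after 113Cd captures a *thermal* neutron, so comparing the capture-channel counts with and without a PE moderator separates the cases. The threshold logic and example counts below are illustrative assumptions, not the paper's algorithm:

```python
def classify_source(bare_558, moderated_558, background, factor=2.0):
    """Classify a neutron field from counts in the 558.6 keV capture channel.

    bare_558      -- counts with no PE moderator
    moderated_558 -- counts with a PE moderator (fast neutrons thermalized)
    background    -- typical counts in neighbouring channels
    """
    bare_peak = bare_558 > factor * background
    mod_peak = moderated_558 > factor * background
    if bare_peak:
        return "thermal neutrons"   # captured without moderation
    if mod_peak:
        return "fast neutrons"      # needed PE to thermalize first
    return "no neutron source"

verdict = classify_source(bare_558=120, moderated_558=800, background=100)
```

Because other energy channels are nearly identical between the two detectors, only this one channel carries the fast/thermal signal, which keeps the comparison cheap.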

Analysis of COVID-19 Context-awareness based on Clustering Algorithm (클러스터링 알고리즘기반의 COVID-19 상황인식 분석)

  • Lee, Kangwhan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.5
    • /
    • pp.755-762
    • /
    • 2022
  • This paper proposes a clustering algorithm that enables more efficient learning-based prediction of COVID-19 disease by using context-aware attribute information during clustering. Typically, clustering of COVID-19 diseases classifies the interrelationships within the disease cluster information during the clustering process. The clustering data can become a degrading factor if new or newly processed information is treated as a contaminating factor in the comparative interrelationship information. In this paper, we address this problem and develop a clustering algorithm that can extract disease correlation information using the K-means algorithm. Based on the attributes of the disease clusters, using accumulated information and interrelationship clustering, the proposed algorithm analyzes the possible disease correlation clusters and their center points. In the applied simulation results, the proposed algorithm showed improved adaptability and prediction accuracy for the classification management system when learning groups of multiple disease-attribute information of COVID-19.

A Filter Algorithm based on Partial Mask and Lagrange Interpolation for Impulse Noise Removal (임펄스 잡음 제거를 위한 부분 마스크와 라그랑지 보간법에 기반한 필터 알고리즘)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.5
    • /
    • pp.675-681
    • /
    • 2022
  • Recently, with the development of IoT technology and AI, as various fields become unmanned and automated, interest in image processing, which is the basis for automation tasks such as object recognition and object classification, is increasing. Various studies have been conducted on noise removal in the image processing pipeline, which has a significant impact on image quality and on system accuracy and reliability, but it remains difficult to restore image regions with high impulse noise density. This paper proposes a filter algorithm based on a partial mask and Lagrange interpolation to restore areas of an image damaged by impulse noise. In the proposed algorithm, the filtering process is switched by comparing the filtering mask with a noise estimate, and the restoration weight is calculated based on the low-frequency and high-frequency components of the image.
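The interpolation idea is easy to show in one dimension: fit the Lagrange polynomial through the undamaged neighbours of an impulse-corrupted pixel and evaluate it at the damaged position. A toy sketch (the paper's partial mask decides which neighbours participate; here they are picked by hand):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# A 1-D slice of pixel values with a "salt" impulse at index 2;
# restore it from the four undamaged neighbours (toy values):
row = [100, 110, 255, 130, 140]
restored = lagrange_interp([0, 1, 3, 4], [100, 110, 130, 140], 2)
row[2] = round(restored)   # 255 replaced by the interpolated estimate
```

Because the polynomial passes exactly through the clean neighbours, a locally smooth intensity ramp is reconstructed exactly, which is what makes the approach attractive in high-density noise regions where median-style filters run out of clean samples.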

Prediction Model of CNC Processing Defects Using Machine Learning (머신러닝을 이용한 CNC 가공 불량 발생 예측 모델)

  • Han, Yong Hee
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.2
    • /
    • pp.249-255
    • /
    • 2022
  • This study proposed an analysis framework for real-time prediction of CNC processing defects using machine learning-based models that are recently attracting attention as processing defect prediction methods, and applied it to CNC machines. Analysis shows that the XGBoost, CatBoost, and LightGBM models have the same best accuracy, precision, recall, F1 score, and AUC, of which the LightGBM model took the shortest execution time. This short run time has practical advantages such as reducing actual system deployment costs, reducing the probability of CNC machine damage due to rapid prediction of defects, and increasing overall CNC machine utilization, confirming that the LightGBM model is the most effective machine learning model for CNC machines with only basic sensors installed. In addition, it was confirmed that classification performance was maximized when an ensemble model consisting of LightGBM, ExtraTrees, k-Nearest Neighbors, and logistic regression models was applied in situations where there are no restrictions on execution time and computing power.
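One common way to combine base models as in the final ensemble above is soft voting: average each model's class-probability vector and take the argmax. The abstract names the members (LightGBM, ExtraTrees, k-NN, logistic regression) but not the voting scheme, so this sketch with made-up probabilities is an assumption, not the paper's exact combiner:

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several base models
    and pick the class with the highest mean probability."""
    n = len(prob_lists)
    k = len(prob_lists[0])
    mean = [sum(p[i] for p in prob_lists) / n for i in range(k)]
    return mean.index(max(mean)), mean

# Hypothetical defect probabilities for one CNC cycle, in the order
# LightGBM, ExtraTrees, k-NN, logistic regression (classes: [ok, defect]):
preds = [[0.35, 0.65], [0.40, 0.60], [0.55, 0.45], [0.30, 0.70]]
label, mean_probs = soft_vote(preds)   # label 1 -> predicted defect
```

Soft voting lets a confident minority outweigh an uncertain majority, which is typically why such ensembles beat any single member when execution time is not a constraint.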

CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.181-193
    • /
    • 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data-collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensure data quality intractable. The task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing on one-dimensional (1D) data by enriching the features of the neural network input data with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image comprised of (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records of sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of imbalance in anomaly patterns observed is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled-training data is available.
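The feature-engineering step above, turning one acceleration record into a 3-channel image, can be sketched roughly as follows. The channel construction details (raster size, window length, histogram binning) are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

def to_three_channel(signal, size=32):
    """Stack three image-like views of a 1-D record: (i) the time history
    tiled into a 2-D raster, (ii) a crude magnitude spectrogram from
    windowed FFTs, and (iii) the empirical PDF tiled across rows."""
    n = size * size
    x = np.resize(signal, n).astype(float)      # pad/truncate to a square

    time_ch = x.reshape(size, size)

    # Spectrogram: FFT magnitude of `size` consecutive windows.
    windows = x.reshape(size, size)
    spec = np.abs(np.fft.rfft(windows, axis=1))[:, : size // 2]
    spec_ch = np.resize(spec, (size, size))

    # Probability density of the signal amplitudes, repeated per row.
    hist, _ = np.histogram(x, bins=size, density=True)
    pdf_ch = np.tile(hist, (size, 1))

    return np.stack([time_ch, spec_ch, pdf_ch], axis=0)

img = to_three_channel(np.sin(np.linspace(0, 60, 2000)))  # (3, 32, 32)
```

Each channel makes a different anomaly pattern visually separable: drift shows in the time raster, noise in the spectrogram, and bias or clipping in the amplitude PDF, which is what gives the CNN richer input than the 1D signal alone.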

Data abnormal detection using bidirectional long-short neural network combined with artificial experience

  • Yang, Kang;Jiang, Huachen;Ding, Youliang;Wang, Manya;Wan, Chunfeng
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.117-127
    • /
    • 2022
  • Data anomalies seriously threaten the reliability of bridge structural health monitoring systems and may trigger system misjudgment, so an efficient and accurate data anomaly detection method is needed. Traditional anomaly detection methods extract various abnormal features as the key indicators for identifying data anomalies and then set thresholds for these features manually to identify specific anomalies; this is the artificial-experience method. However, limited by poor generalization across sensors, this approach often leads to high labor costs. Another approach to anomaly detection is data-driven, based on machine learning methods. Among these, the bidirectional long short-term memory neural network (BiLSTM), as an effective classification method, excels at finding complex relationships in multivariate time series data. However, training on unprocessed original signals often leads to low computational efficiency and poor convergence, for lack of appropriate feature selection. Therefore, this article combines the advantages of the two methods by proposing a deep learning method fed with statistical features drawn from manual experience. Experimental comparative studies illustrate that the BiLSTM model with appropriate feature input achieves an accuracy of 87-94%. Meanwhile, this paper provides basic principles of data cleaning and discusses the typical features of various anomalies. Furthermore, optimization strategies for feature-space selection based on artificial experience are also highlighted.
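The "artificial experience" half of the hybrid is hand-crafted window statistics that the network consumes instead of the raw signal. A sketch of such a feature extractor (the exact feature set is an assumption; the paper's list may differ):

```python
import numpy as np

def window_features(x):
    """Hand-crafted statistics of one monitoring window, of the kind
    fed to the BiLSTM instead of the raw samples."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma if sigma > 0 else np.zeros_like(x)
    return {
        "mean": float(mu),
        "std": float(sigma),
        "skewness": float((z ** 3).mean()),
        "kurtosis": float((z ** 4).mean()),
        "peak_to_peak": float(x.max() - x.min()),
        "zero_rate": float(np.mean(x == 0)),  # flags 'missing data' windows
    }

normal = window_features(np.sin(np.linspace(0, 20 * np.pi, 1000)))
missing = window_features(np.zeros(1000))   # a dead-sensor window
```

Compact features like these give every anomaly class a distinctive signature (a dead channel has zero variance and a zero rate of 1), which is why feeding them to the BiLSTM converges faster than training on raw records.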