• Title/Summary/Keyword: Neural Network Model

A Comparative Study of Prediction Models for College Student Dropout Risk Using Machine Learning: Focusing on the case of N university (머신러닝을 활용한 대학생 중도탈락 위험군의 예측모델 비교 연구 : N대학 사례를 중심으로)

  • So-Hyun Kim;Sung-Hyoun Cho
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.12 no.2
    • /
    • pp.155-166
    • /
    • 2024
  • Purpose : This study aims to identify key factors for predicting dropout risk at the university level and to provide a foundation for policy development aimed at dropout prevention. It explores the optimal machine learning algorithm by comparing the performance of various algorithms on data about college students' dropout risk. Methods : Data on factors influencing dropout risk and propensity were collected from N University. The collected data were applied to several machine learning algorithms, including random forest, decision tree, artificial neural network, logistic regression, support vector machine (SVM), k-nearest neighbor (k-NN) classification, and Naive Bayes. The performance of these models was compared and evaluated, with a focus on predictive validity and on identifying significant dropout factors through the information gain index. Results : The binary logistic regression analysis showed that the year of the program, department, grades, and year of entry had a statistically significant effect on dropout risk. Among the machine learning algorithms, random forest performed best. The relative importance of the predictor variables was highest for department, followed by age, grade, residence, and whether the student's residence matched the school's location. Conclusion : Machine learning-based prediction of dropout risk focuses on the early identification of students at risk. The types and causes of dropout crises vary significantly among students, so it is important to identify them so that appropriate actions and support can be taken to remove risk factors and strengthen protective factors. The relative importance of the factors affecting dropout risk found in this study can help guide educational prescriptions for preventing college student dropout.
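The information gain index mentioned in this abstract ranks predictors by how much they reduce label entropy. A minimal sketch in plain Python (the toy records are hypothetical, not N University data):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy reduction in `labels` from splitting on a categorical feature."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature_values, labels):
        groups.setdefault(f, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Hypothetical toy records: department vs. dropout flag (1 = dropped out).
dept = ["A", "A", "B", "B", "B", "C"]
dropped = [1, 1, 0, 0, 0, 1]
print(information_gain(dept, dropped))  # 1.0 (department fully separates labels)
```

Predictors are then ranked by this score, highest gain first.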

Pathophysiological Role of TLR4 in Chronic Relapsing Itch Induced by Subcutaneous Capsaicin Injection in Neonatal Rats

  • Hee Joo Kim;Eun-Hui Lee;Yoon Hee Lim;Dongil Jeong;Heung Sik Na;YunJae Jung
    • IMMUNE NETWORK
    • /
    • v.22 no.2
    • /
    • pp.20.1-20.9
    • /
    • 2022
  • Despite the high prevalence of chronic dermatitis and the accompanying intractable itch, therapeutics that specifically target itching have low efficacy. Increasing evidence suggests that TLRs contribute to immune activation and neural sensitization; however, their roles in chronic itch remain elusive. Here, we show that the RBL-2H3 mast cell line expresses TLR4 and that treatment with a TLR4 antagonist opposes the LPS-dependent increase in mRNA levels of Th2 and innate cytokines. The pathological role of TLR4 activation in itching was studied in neonatal rats that developed chronic itch due to neuronal damage after receiving subcutaneous capsaicin injections. Treatment with a TLR4 antagonist reduced scratching behavior and chronic dermatitis in these rats. TLR4 antagonist treatment also restored the density of cutaneous nerve fibers and inhibited the histopathological changes associated with mast cell activation after capsaicin injection. Additionally, the expression of IL-1β, IL-4, IL-5, IL-10, and IL-13 mRNA in the lesional skin decreased after TLR4 antagonist treatment. Based on these data, we propose that inhibiting TLR4 alleviated itch in a rat model of chronic relapsing itch and that this reduction was associated with TLR4 signaling in mast cells and nerve fibers.

Toxicity prediction of chemicals using OECD test guideline data with graph-based deep learning models (OECD TG데이터를 이용한 그래프 기반 딥러닝 모델 분자 특성 예측)

  • Daehwan Hwang;Changwon Lim
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.3
    • /
    • pp.355-380
    • /
    • 2024
  • In this paper, we compare the performance of graph-based deep learning models using OECD test guideline (TG) data. OECD TGs are a unique tool for assessing the potential effects of chemicals on health and the environment, but many guidelines involve animal testing. Animal testing is time-consuming and expensive and raises ethical issues, so methods to find or minimize alternatives are being studied. Deep learning is used in various chemistry-related fields, including toxicity prediction, and research on graph-based models is particularly active. Our goal is to compare the performance of graph-based deep learning models on OECD TG data and identify the best-performing model. We collected OECD TG results from eChemportal.org, a website operated by the OECD, and removed chemicals that were impossible or inappropriate to learn from through pre-processing. The toxicity prediction performance of five graph-based models was then compared using the collected OECD TG data and MoleculeNet, a benchmark dataset for predicting chemical properties.
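Graph-based models of the kind compared in this abstract treat a molecule as a graph of atoms and bonds and repeatedly aggregate neighbor features. A minimal unweighted sketch of that core idea (illustrative only; real pipelines parse SMILES and use learned weights):

```python
# Ethanol backbone (C-C-O) encoded as atoms plus an undirected bond list.
atoms = ["C", "C", "O"]
bonds = [(0, 1), (1, 2)]

def adjacency(n, edges):
    """Dense symmetric adjacency matrix from an edge list."""
    adj = [[0] * n for _ in range(n)]
    for i, j in edges:
        adj[i][j] = adj[j][i] = 1
    return adj

def message_pass(adj, feats):
    """One sum-aggregation round: each node adds its neighbors' features
    to its own, the basic update behind graph neural networks."""
    n, d = len(adj), len(feats[0])
    return [[feats[i][k] + sum(adj[i][j] * feats[j][k] for j in range(n))
             for k in range(d)] for i in range(n)]

onehot = {"C": [1, 0], "O": [0, 1]}
feats = [onehot[a] for a in atoms]
A = adjacency(len(atoms), bonds)
print(message_pass(A, feats))  # neighborhood-aware atom features
```

Stacking such rounds (with learned transformations between them) yields the molecular representations these models use for toxicity prediction.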

Predictive modeling algorithms for liver metastasis in colorectal cancer: A systematic review of the current literature

  • Isaac Seow-En;Ye Xin Koh;Yun Zhao;Boon Hwee Ang;Ivan En-Howe Tan;Aik Yong Chok;Emile John Kwong Wei Tan;Marianne Kit Har Au
    • Annals of Hepato-Biliary-Pancreatic Surgery
    • /
    • v.28 no.1
    • /
    • pp.14-24
    • /
    • 2024
  • This study aims to assess the quality and performance of predictive models for colorectal cancer liver metastasis (CRCLM). A systematic review was performed to identify relevant studies from various databases. Studies that described or validated predictive models for CRCLM were included. The methodological quality of the predictive models was assessed. Model performance was evaluated by the reported area under the receiver operating characteristic curve (AUC). Of the 117 articles screened, seven studies comprising 14 predictive models were included. The distribution of included predictive models was as follows: radiomics (n = 3), logistic regression (n = 3), Cox regression (n = 2), nomogram (n = 3), support vector machine (SVM, n = 2), random forest (n = 2), and convolutional neural network (CNN, n = 2). Age, sex, carcinoembryonic antigen, and tumor staging (T and N stage) were the most frequently used clinicopathological predictors for CRCLM. The mean AUCs ranged from 0.697 to 0.870, with 86% of the models demonstrating clear discriminative ability (AUC > 0.70). A hybrid approach combining clinical and radiomic features with SVM provided the best performance, achieving an AUC of 0.870. The overall risk of bias was identified as high in 71% of the included studies. This review highlights the potential of predictive modeling to accurately predict the occurrence of CRCLM. Integrating clinicopathological and radiomic features with machine learning algorithms demonstrates superior predictive capabilities.
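The AUC figures reported in this review correspond to the Mann-Whitney interpretation of ROC AUC. A minimal sketch (the toy labels and scores are illustrative only, not study data):

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two metastasis cases (1) and two controls (0) with model scores.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```

Under this reading, AUC > 0.70 means a positive case outranks a negative one more than 70% of the time.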

Multi-dimensional Contextual Conditions-driven Mutually Exclusive Learning for Explainable AI in Decision-Making

  • Hyun Jung Lee
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.7-21
    • /
    • 2024
  • There are various machine learning techniques, such as reinforcement learning, deep learning, and neural network learning. Recently, large language models (LLMs) have been widely used for generative AI based on reinforcement learning. They make decisions that maximize rewards through a fine-tuning process in a particular situation. Unfortunately, LLMs cannot provide any explanation of how they reach a goal, because their training is black-box learning. Reinforcement learning as black-box AI relies on a graph-evolving structure to derive enhanced solutions through adjustment by human feedback or reinforced data. In this research, Mutually Exclusive Learning (MEL) is proposed for mutually exclusive decision-making, to provide explanations of chosen goals reached by decisions between alternatives under specified conditions. In MEL, the decision-making process is based on a tree structure whose branch-pruning steps serve as explanations of how the goals are achieved. A goal is reached by trading off mutually exclusive alternatives according to the specific contextual conditions; the tree structure therefore provides feasible solutions together with explanations based on the pruned branches. The sequence of pruning steps can explain the inferences and the path to the goal, as explainable AI (XAI). The learning process prunes branches according to multi-dimensional contextual conditions: a time window determining the temporal perspective, the depth of lookahead phases, and the decision criteria for pruning. The goal depends on the pruning policy, which can change dynamically with the situation configured by the specific multi-dimensional contextual conditions at a particular moment.
The explanation is represented by the episode chosen among the decision alternatives according to the configured situation. In this research, MEL adopts a tree-based learning model to explain the goal derived under specific conditions. As an example of a mutually exclusive problem, an employment process is used to demonstrate how the goal is reached and explained through the pruned branches. Finally, further study is discussed to verify the effectiveness of MEL with experiments.
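The general idea of condition-driven pruning with the pruning log as explanation can be sketched as follows. This is a hedged illustration only; the tree shape, attribute names, and employment-screening example are hypothetical, not the paper's actual MEL implementation:

```python
def prune_search(tree, conditions, path=()):
    """Depth-first search over alternatives. Branches failing any contextual
    condition are pruned, and the pruning log doubles as the explanation."""
    label, attrs, children = tree
    for name, test in conditions.items():
        if not test(attrs):
            return [], [(path + (label,), f"pruned: failed '{name}'")]
    if not children:  # a surviving leaf is a feasible goal
        return [path + (label,)], []
    kept, log = [], []
    for child in children:
        k, l = prune_search(child, conditions, path + (label,))
        kept += k
        log += l
    return kept, log

# Toy employment-screening tree: (label, attributes, children).
tree = ("root", {}, [
    ("cand_A", {"exp_years": 5}, []),
    ("cand_B", {"exp_years": 1}, []),
])
conditions = {"min 2 yrs experience": lambda a: a.get("exp_years", 99) >= 2}
kept, log = prune_search(tree, conditions)
print(kept)  # surviving alternative(s)
print(log)   # pruned branches, readable as the explanation
```

Changing the conditions (the configured situation) changes both the chosen alternative and the recorded explanation.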

Classification of mandibular molar furcation involvement in periapical radiographs by deep learning

  • Katerina Vilkomir;Cody Phen;Fiondra Baldwin;Jared Cole;Nic Herndon;Wenjian Zhang
    • Imaging Science in Dentistry
    • /
    • v.54 no.3
    • /
    • pp.257-263
    • /
    • 2024
  • Purpose: The purpose of this study was to classify mandibular molar furcation involvement (FI) in periapical radiographs using a deep learning algorithm. Materials and Methods: Full mouth series taken at East Carolina University School of Dental Medicine from 2011-2023 were screened. Diagnostic-quality mandibular premolar and molar periapical radiographs with healthy or FI mandibular molars were included. The radiographs were cropped into individual molar images, annotated as "healthy" or "FI," and divided into training, validation, and testing datasets. The images were preprocessed by PyTorch transformations. ResNet-18, a convolutional neural network model, was fine-tuned using the PyTorch deep learning framework for this specific image classification task. CrossEntropyLoss was employed as the loss function and AdamW as the optimizer. The images were loaded by PyTorch DataLoader for efficiency. The performance of the ResNet-18 algorithm was evaluated with multiple metrics, including training and validation losses, confusion matrix, accuracy, sensitivity, specificity, the receiver operating characteristic (ROC) curve, and the area under the ROC curve. Results: After adequate training, ResNet-18 classified healthy vs. FI molars in the testing set with an accuracy of 96.47%, indicating its suitability for image classification. Conclusion: The deep learning algorithm developed in this study was shown to be promising for classifying mandibular molar FI. It could serve as a valuable supplemental tool for detecting and managing periodontal diseases.
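The accuracy, sensitivity, and specificity reported above all derive from the confusion matrix of a two-class labeling task. A minimal sketch (the toy labels are illustrative, not the study's radiograph data):

```python
def binary_metrics(y_true, y_pred, positive="FI"):
    """Confusion-matrix metrics for a binary classification task."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    return {
        "accuracy": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn),  # recall on FI molars
        "specificity": tn / (tn + fp),  # recall on healthy molars
    }

truth = ["FI", "FI", "healthy", "healthy"]
preds = ["FI", "healthy", "healthy", "healthy"]
print(binary_metrics(truth, preds))
```

Sensitivity and specificity matter here because a high headline accuracy can hide poor recall on the rarer FI class.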

Enhancing mechanical performance of steel-tube-encased HSC composite walls: Experimental investigation and analytical modeling

  • ZY Chen;Ruei-Yuan Wang;Yahui Meng;Huakun Wu;Lai B;Timothy Chen
    • Steel and Composite Structures
    • /
    • v.52 no.6
    • /
    • pp.647-656
    • /
    • 2024
  • This paper presents an algorithmic modeling study of concrete composite walls in which steel tubes are embedded (steel-tube-encased high-strength concrete, STHC, composite walls). The load-bearing capacity of STHC composite walls increases with the axial load coefficient, but their ductility decreases. Load-bearing capacity can be improved by increasing the strength of the steel tubes; however, the elasticity of STHC composite walls was found to be slightly reduced. As the shear stress coefficient increases, the load-bearing capacity of STHC composite walls decreases significantly, while their deformation resistance increases. By analyzing actual cases, we demonstrate the effectiveness of the research results in real situations and strengthen the conclusions. The results can provide a basis for future research, inspire further exploration of seismic design and construction, and advance the field. They also underscore the value of interdisciplinary cooperation among structural engineering, earthquake engineering, and materials science for improving overall seismic resistance, and thus have practical impact on the design and construction of earthquake-resistant structures. The goals of this work include access to adequate, safe, and affordable housing and basic services, the promotion of inclusive and sustainable urbanization and participation, and the implementation of sustainable, disaster-resilient architecture and of sustainable planning and management of human settlements. Simulation results for linear and nonlinear structures show that this method can detect structural parameters and their changes due to damage and unknown disturbances. It is therefore believed that, with further development of fuzzy neural network artificial intelligence theory, this goal will be achieved in the near future.

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and calls for professionals capable of classifying relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. Available techniques include k-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge; depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, a line of research that has arguably reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on modifying the use of data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, and this noisy data can affect the decisions made by classifiers built from them. 
In this study, we consider that data from different domains, i.e., heterogeneous data, may carry noise-like characteristics that can be utilized in the classification process. Machine learning algorithms are typically trained under the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of each document; if the learning data and target data take different viewpoints, their features may differ. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms, since they were not developed to recognize different types of data representation at once and combine them in the same generalization. To utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier, so we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision making. 
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
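The noise-injection idea described above can be sketched as swapping a fraction of each training document's tokens for vocabulary drawn from another (heterogeneous) source. The function and parameter names below are assumptions for illustration, not the paper's actual RSESLA implementation:

```python
import random

def inject_noise(docs, foreign_vocab, rate=0.1, seed=0):
    """Return copies of tokenized docs with roughly `rate` of tokens
    replaced by random tokens from a foreign (out-of-domain) vocabulary,
    simulating heterogeneous-source noise during training."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [[rng.choice(foreign_vocab) if rng.random() < rate else tok
             for tok in doc] for doc in docs]

# Hypothetical news documents perturbed with social-media-style tokens.
docs = [["stock", "market", "rises"], ["team", "wins", "final"]]
print(inject_noise(docs, ["lol", "omg"], rate=0.5))
```

A classifier trained on such perturbed documents sees feature distributions closer to those of the other domains it will be applied to.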

Computer Aided Diagnosis System for Evaluation of Mechanical Artificial Valve (기계식 인공판막 상태 평가를 위한 컴퓨터 보조진단 시스템)

  • 이혁수
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.5
    • /
    • pp.421-430
    • /
    • 2004
  • Clinically, it is almost impossible for a physician to distinguish subtle changes of the frequency spectrum using a stethoscope alone, especially in the early stage of thrombus formation. Considering that the reliability of a mechanical valve is paramount, because failure might end in patient death, early detection of valve thrombus by a noninvasive technique is important. This study was therefore designed to provide a tool for early noninvasive detection of valve thrombus by observing shifts in the frequency spectrum of acoustic signals with a computer-aided diagnosis system. A thrombus model was constructed on commercialized mechanical valves using polyurethane or silicone. Polyurethane coating was applied to the valve surface, and silicone coating to the sewing ring of the valve. To simulate pannus formation, a fibrous tissue overgrowth obstructing the valve orifice, the degree of silicone coating on the sewing ring was varied over 20%, 40%, and 60% of orifice obstruction. In the experimental system, acoustic signals from the valve were measured using a microphone and amplifier. The microphone was attached to a coupler to remove environmental noise. Acoustic signals were sampled by an A/D converter, and the frequency spectrum was obtained by a spectral analysis algorithm. To quantitatively distinguish the frequency peak of the normal valve from that of the thrombosed valves, a neural network analysis was employed. A return map was applied to evaluate continuous monitoring of the valve motion cycle. In-vivo data were also obtained from animals with mechanical valves in circulatory devices, as well as from patients who had undergone mechanical valve replacement one year or longer earlier. Each spectrum showed a primary and a secondary peak. The secondary peak changed according to the thrombus model. In the mock circulation as well as the animal study, both spectral analysis and a three-layer neural network could differentiate the normal valves from the thrombosed valves. 
In the human study, one of 10 patients showed a shift of the frequency spectrum; however, the presence of valve thrombus was yet to be confirmed. In conclusion, acoustic signal measurement is suggested as a noninvasive diagnostic tool for early detection of mechanical valve thrombosis.
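The diagnosis described above rests on locating shifts of spectral peaks in the valve's acoustic signal. A minimal sketch of finding the dominant peak with a direct DFT (toy synthetic signal, not the study's recordings):

```python
from math import cos, sin, pi, hypot

def dft_magnitudes(signal):
    """Magnitude spectrum of a real signal via a direct O(n^2) DFT."""
    n = len(signal)
    mags = []
    for k in range(n // 2 + 1):  # real input: bins up to Nyquist suffice
        re = sum(x * cos(2 * pi * k * t / n) for t, x in enumerate(signal))
        im = sum(x * sin(2 * pi * k * t / n) for t, x in enumerate(signal))
        mags.append(hypot(re, im))
    return mags

def peak_bin(signal, skip_dc=True):
    """Index of the strongest frequency bin (DC excluded by default)."""
    mags = dft_magnitudes(signal)
    start = 1 if skip_dc else 0
    return max(range(start, len(mags)), key=mags.__getitem__)

# A pure tone at bin 3 of a 32-sample window shows up as the spectral peak;
# a forming thrombus would shift such peaks relative to a normal valve.
sig = [sin(2 * pi * 3 * t / 32) for t in range(32)]
print(peak_bin(sig))  # 3
```

Tracking how the secondary peak's bin drifts between recordings is the kind of shift the system flags.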

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1413-1425
    • /
    • 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires destroy vegetation and, depending on their intensity and extent, drive ecosystem changes; these changes in turn affect wildfire occurrence, causing secondary damage. Accurate estimation of the areas affected by wildfires is therefore fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after a fire. In addition, deep learning algorithms such as convolutional neural networks (CNNs) and transformer models show high performance for more accurate monitoring of burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field use. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. We examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images of California and conducted a comprehensive comparison of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and the normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect vegetation cover and surface moisture content. The mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. Adding the spectral indices to the base wavelength bands increased the metric values for all combinations, confirming that augmenting the input data with spectral indices leads to more refined pixel-level classification. 
This study can be applied to other satellite images to build a recovery strategy for fire-burnt areas.
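The spectral indices used as input channels and the mIoU metric reported above can be sketched as follows. The Landsat 8 band assignments in the comments are standard conventions, stated here as an assumption rather than taken from the paper:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index (Landsat 8: B5 vs. B4)."""
    return (nir - red) / (nir + red)

def nbr(nir, swir2):
    """Normalized burn ratio (Landsat 8: B5 vs. B7); drops after a burn."""
    return (nir - swir2) / (nir + swir2)

def iou(pred, truth, cls):
    """Intersection over union for one class over flattened label maps."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
    union = sum(p == cls or t == cls for p, t in zip(pred, truth))
    return inter / union if union else 0.0

def miou(pred, truth, classes=(0, 1)):
    """Mean IoU over classes (here: unburned vs. burned)."""
    return sum(iou(pred, truth, c) for c in classes) / len(classes)

# Toy reflectances and a toy 4-pixel segmentation comparison.
print(round(ndvi(0.5, 0.1), 3), round(nbr(0.5, 0.25), 3))
print(round(miou([1, 1, 0, 0], [1, 0, 0, 0]), 3))
```

The indices are computed per pixel and stacked with the raw bands as extra input channels; mIoU then scores the predicted burn mask against the reference mask.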