• Title/Summary/Keyword: decision algorithm

Uncertainty Sequence Modeling Approach for Safe and Effective Autonomous Driving (안전하고 효과적인 자율주행을 위한 불확실성 순차 모델링)

  • Yoon, Jae Ung;Lee, Ju Hong
    • Smart Media Journal
    • /
    • v.11 no.9
    • /
    • pp.9-20
    • /
    • 2022
  • Deep reinforcement learning (RL) is an end-to-end, data-driven control method widely used in the autonomous driving domain. However, conventional RL approaches are difficult to apply to autonomous driving tasks because of inefficiency, instability, and uncertainty, all of which matter greatly in this domain. Recent studies have attempted to address these problems, but they are computationally expensive and rely on special assumptions. In this paper, we propose a new algorithm, MCDT, that addresses inefficiency, instability, and uncertainty by introducing uncertainty sequence modeling to the autonomous driving domain. The sequence modeling approach, which views reinforcement learning as a problem of generating decisions that obtain high rewards, avoids the disadvantages of existing studies, ensures efficiency and stability, and also accounts for safety by integrating uncertainty estimation techniques. The proposed method was tested in the OpenAI Gym CarRacing environment, and the experimental results show that the MCDT algorithm provides efficient, stable, and safe performance compared with existing reinforcement learning methods.
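
The abstract frames RL as conditional sequence generation combined with uncertainty estimation. Below is a minimal, hedged sketch of that general idea: a return-conditioned policy plus an ensemble whose disagreement acts as an uncertainty signal. It is not the authors' MCDT implementation; the network sizes, dimensions, and the `act_with_uncertainty` helper are illustrative assumptions.

```python
# Sketch only: return-conditioned action prediction with an ensemble for
# uncertainty, illustrating the "RL as decision generation" idea in general.
import torch
import torch.nn as nn

class ReturnConditionedPolicy(nn.Module):
    """Predicts an action from (return-to-go, state); a stand-in for a full
    trajectory transformer."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, return_to_go, state):
        return self.net(torch.cat([return_to_go, state], dim=-1))

def act_with_uncertainty(ensemble, return_to_go, state, threshold=0.5, safe_action=None):
    """Average the ensemble's actions; if the members disagree strongly
    (high epistemic uncertainty), fall back to a conservative action."""
    preds = torch.stack([m(return_to_go, state) for m in ensemble])
    mean_action, uncertainty = preds.mean(dim=0), preds.std(dim=0).mean()
    if safe_action is not None and uncertainty > threshold:
        return safe_action, uncertainty
    return mean_action, uncertainty

# Toy usage with made-up dimensions (CarRacing itself would need an image encoder).
ensemble = [ReturnConditionedPolicy(state_dim=8, action_dim=3) for _ in range(5)]
rtg, state = torch.tensor([[300.0]]), torch.randn(1, 8)
print(act_with_uncertainty(ensemble, rtg, state))
```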

Detection of Depression Trends in Literary Cyber Writers Using Sentiment Analysis and Machine Learning

  • Faiza Nasir;Haseeb Ahmad;CM Nadeem Faisal;Qaisar Abbas;Mubarak Albathan;Ayyaz Hussain
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.3
    • /
    • pp.67-80
    • /
    • 2023
  • Nowadays, psychologists consider social media an important tool for examining mental disorders. Among these disorders, depression is one of the most common yet least treated. Since many writers with extensive followings express their feelings on social media and depression is increasing significantly, exploring the literary text shared on social media may provide multidimensional features of depressive behavior. (1) Background: Several studies have observed that depressive data contains certain language styles and self-expressing pronouns, but the current study provides evidence that posts containing self-expressing pronouns and depressive language styles also carry high emotional temperatures. Therefore, the main objective of this study is to examine literary cyber writers' posts to discover symptomatic signs of depression. (2) Methods: For this purpose, our research emphasizes extracting data from writers' public social media pages, blogs, and communities. (3) Results: To examine the emotional temperatures and sentence usage of the depressive and non-depressive groups, we employed the SentiStrength algorithm as a psycholinguistic method, TF-IDF and N-grams for ranked phrase extraction, and Latent Dirichlet Allocation for topic modelling of the extracted phrases. The results unearth a strong connection between depression and negative emotional temperatures in writers' posts. Moreover, we used Naïve Bayes, Support Vector Machine, Random Forest, and Decision Tree algorithms to validate the classification of depressive and non-depressive content in terms of sentences, phrases, and topics. The results reveal that, compared with the others, the Support Vector Machine algorithm validates the classification while attaining the highest F-score of 79%. (4) Conclusions: The experimental results show that the proposed system performs well in detecting depression trends in literary cyber writers using sentiment analysis.
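
For orientation, a minimal scikit-learn sketch of the kind of TF-IDF / n-gram feature extraction and SVM classification the abstract describes; the example texts and labels are placeholders, not the study's data, and the SentiStrength and LDA steps are omitted.

```python
# Sketch only: TF-IDF (unigrams + bigrams) features fed to a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

texts = ["i feel hopeless and tired of everything",
         "what a bright morning for a walk",
         "nothing matters anymore, i am so alone",
         "excited to share my new chapter with readers"]
labels = [1, 0, 1, 0]  # 1 = depressive, 0 = not depressive (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # n-gram ranked phrases as features
    LinearSVC())
model.fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))
```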

Development and Usability Evaluation of a Mobile Physical Therapeutic Diagnosis Application (모바일 물리치료 진단 어플리케이션 개발 및 사용성 평가)

  • Min-Hyung Rhee;Jong-Soon Kim
    • PNF and Movement
    • /
    • v.21 no.1
    • /
    • pp.129-137
    • /
    • 2023
  • Purpose: The physical therapy diagnostic process requires high-level background knowledge, the ability to obtain additional information from patients, accurate examination skills, and a framework for transforming thoughts into a diagnostic decision. It is therefore highly complicated and difficult work. To function as autonomous professionals, physical therapists must develop effective clinical diagnosis skills, and mobile application aids can help them reach accurate and scientific diagnoses. Therefore, this study aims to develop a mobile application for physical therapy diagnosis and evaluate its usability. Methods: The diagnostic application was developed with App Inventor; the development environment was the Chrome web browser on Windows 10, and the application was run on a Google Pixel 5. Usability was evaluated by 20 physical therapists with more than 5 years of clinical experience in musculoskeletal physical therapy, using a 5-point Likert scale for accuracy, convenience, satisfaction, and usability. The collected Likert scores were converted into percentages and analyzed with descriptive statistics. Results: The graphical user interface consisted of an initial screen with program guidance, 18 screens presenting the algorithm, and 12 screens presenting the estimated diagnosis based on the algorithm. The usability evaluation of the developed application yielded the following results: accuracy 100%, convenience 90%, satisfaction 91%, and usability 88%. Conclusion: The newly developed mobile application for physical therapy diagnosis has high accuracy, and it will aid in building an easy and reliable physical therapy diagnosis system.
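
As a small illustration of the Likert-to-percentage conversion mentioned in the methods, here is a brief sketch; the response values are hypothetical placeholders, not the study's raw ratings.

```python
# Sketch only: convert 5-point Likert responses to a 0-100% scale and summarize.
import statistics

def likert_to_percent(scores, max_point=5):
    """Map the mean Likert score onto a 0-100% scale."""
    return statistics.mean(scores) / max_point * 100

# Hypothetical ratings from 20 evaluators (placeholders, not the study's data).
responses = {
    "accuracy":    [5, 5, 5, 4, 5, 5, 4, 5, 5, 5, 5, 4, 5, 5, 5, 5, 5, 4, 5, 5],
    "convenience": [4, 5, 4, 4, 5, 5, 4, 4, 5, 4, 5, 4, 4, 5, 5, 4, 4, 5, 4, 5],
}
for item, scores in responses.items():
    print(f"{item}: {likert_to_percent(scores):.0f}%")
```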

Analysis of Potential Construction Risk Types in Formal Documents Using Text Mining (텍스트 마이닝을 통한 건설공사 공문 잠재적 리스크 유형 분석)

  • Eom, Sae Ho;Cha, Gichun;Park, Sun Kyu;Park, Seunghee;Park, Jongho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.1
    • /
    • pp.91-98
    • /
    • 2023
  • Since risks occurring in construction projects can have a significant impact on schedules and costs, they have been the subject of many studies. However, risk analysis is often limited to certain construction situations, so decision-making in practice remains largely experience-dependent, and data-based analyses have been applied only partially, mainly to safety and contract documents. Therefore, in this study, cluster analysis and a Word2Vec algorithm were applied to formal documents, which contain elements that are important to contractors and clients. An initial classification of the document content into six types was performed through cluster analysis, and 157 occurrence types were then subdivided by applying the Word2Vec algorithm. The derived terms were re-classified into five categories and reviewed as to whether they could develop into potential construction risk factors. Identifying potential construction risk factors will provide useful basic data for process management in the construction industry.
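
A brief sketch of how document clustering and Word2Vec term embedding can be combined in the way the abstract outlines, using scikit-learn and gensim; the toy sentences stand in for the construction correspondence and are not the study's corpus, and the cluster count is reduced for the example.

```python
# Sketch only: (1) cluster documents into broad types, (2) embed terms with
# Word2Vec so related risk terms can be grouped.
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["concrete pour delayed by heavy rain",
        "request for design change of retaining wall",
        "crane inspection overdue safety notice",
        "schedule delay due to material shortage",
        "safety barrier damaged near site entrance",
        "change order approval for drainage design"]

# 1) cluster document contents (the study reports six types; toy data uses 3)
X = TfidfVectorizer().fit_transform(docs)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 2) learn word vectors so semantically related risk terms cluster together
tokens = [d.split() for d in docs]
w2v = Word2Vec(sentences=tokens, vector_size=50, window=3, min_count=1, epochs=50)
print(clusters)
print(w2v.wv.most_similar("delay", topn=3))
```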

Parameter Calibration of Storage Function Model and Flood Forecasting (1) Calibration Methods and Evaluation of Simulated Flood Hydrograph (저류함수모형의 매개변수 보정과 홍수예측 (1) 보정 방법론과 모의 홍수수문곡선의 평가)

  • Song, Jae Hyun;Kim, Hung Soo;Hong, Il Pyo;Kim, Sang Ug
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.1B
    • /
    • pp.27-38
    • /
    • 2006
  • The storage function model (SFM) has been used for flood forecasting in Korea. The SFM has a simple calculation process, and it is considered more reasonable than a linear model because it accounts for the non-linearity of flood runoff. However, determining its parameters is very difficult; in general, they are calibrated by trial and error, a manual procedure that depends on the judgment of the model manager. This study calibrated the parameters by both the trial-and-error method and an optimization technique, and the calibrated parameters were compared with the representative parameters used by the Flood Control Centers in Korea. Evaluation indexes for the objective functions and calibration methods were also computed for a comparative analysis of simulation efficiency. As a result, the genetic algorithm showed the smallest variation across objective functions, and the objective function of SSR (sum of squared residuals) was found to be the most suitable for flood forecasting in this study.
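
A bare-bones sketch of an SSR objective and a genetic-algorithm calibration loop of the kind the abstract evaluates; `simulate()` is a two-parameter stand-in for the actual storage-function routing, not the authors' model, and the population settings are arbitrary.

```python
# Sketch only: SSR objective + simple genetic algorithm for parameter calibration.
import random

def ssr(observed, simulated):
    """Sum of squared residuals, the objective found most suitable in the study."""
    return sum((o - s) ** 2 for o, s in zip(observed, simulated))

def simulate(params, rainfall):
    """Placeholder rainfall-runoff response with two parameters (not the SFM equations)."""
    k, p = params
    q, flow = 0.0, []
    for r in rainfall:
        q = q + (r - q) / k * p   # toy storage-style recursion
        flow.append(q)
    return flow

def genetic_calibration(observed, rainfall, pop_size=30, generations=50):
    pop = [(random.uniform(1, 50), random.uniform(0.1, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: ssr(observed, simulate(ind, rainfall)))
        parents = pop[: pop_size // 2]                      # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (max(1.0, (a[0] + b[0]) / 2 + random.gauss(0, 1.0)),      # crossover
                     min(1.0, max(0.05, (a[1] + b[1]) / 2 + random.gauss(0, 0.05))))  # + mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: ssr(observed, simulate(ind, rainfall)))

rain = [0, 5, 20, 35, 15, 5, 0, 0]
obs = simulate((12.0, 0.6), rain)   # synthetic "observed" hydrograph
print(genetic_calibration(obs, rain))
```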

Optimization of Information Security Investment Portfolios based on Data Breach Statistics: A Genetic Algorithm Approach (침해사고 통계 기반 정보보호 투자 포트폴리오 최적화: 유전자 알고리즘 접근법)

  • Jung-Hyun Lim;Tae-Sung Kim
    • Information Systems Review
    • /
    • v.22 no.2
    • /
    • pp.201-217
    • /
    • 2020
  • Information security is essential not only for ensuring company operations and customer trust but also for mitigating uncertain damage by preventing data breaches. It is therefore important to select appropriate information security countermeasures and determine the appropriate level of investment in them. This study presents a decision support model that determines the appropriate investment amount for each countermeasure as well as an optimal portfolio of countermeasures within a limited budget. We analyze statistics on the types of information security breaches by industry and derive an optimal portfolio of information security countermeasures using genetic algorithms. The results suggest guidelines for investing in information security countermeasures across industries and help support objective information security investment decisions.
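
To make the budget-constrained formulation concrete, here is a small sketch. The paper solves the problem with a genetic algorithm; for a handful of hypothetical countermeasures, an exhaustive search over portfolios exposes the same trade-off. The costs and risk-reduction values below are illustrative only.

```python
# Sketch only: choose the countermeasure portfolio with the largest expected
# breach-loss reduction that fits within the budget.
from itertools import combinations

countermeasures = {              # name: (cost, expected breach-loss reduction)
    "endpoint protection": (30, 55),
    "security training":   (20, 40),
    "DB encryption":       (40, 70),
    "IDS/IPS":             (35, 50),
    "backup & recovery":   (25, 35),
}
BUDGET = 90

best_value, best_portfolio = 0, ()
for r in range(1, len(countermeasures) + 1):
    for portfolio in combinations(countermeasures, r):
        cost = sum(countermeasures[c][0] for c in portfolio)
        value = sum(countermeasures[c][1] for c in portfolio)
        if cost <= BUDGET and value > best_value:
            best_value, best_portfolio = value, portfolio

print(best_portfolio, best_value)
```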

A Comparative Study of Prediction Models for College Student Dropout Risk Using Machine Learning: Focusing on the case of N university (머신러닝을 활용한 대학생 중도탈락 위험군의 예측모델 비교 연구 : N대학 사례를 중심으로)

  • So-Hyun Kim;Sung-Hyoun Cho
    • Journal of The Korean Society of Integrative Medicine
    • /
    • v.12 no.2
    • /
    • pp.155-166
    • /
    • 2024
  • Purpose: This study aims to identify key factors for predicting dropout risk at the university level and to provide a foundation for policies aimed at dropout prevention. It explores the optimal machine learning algorithm by comparing the performance of several algorithms on data about college students' dropout risk. Methods: Data on factors influencing dropout risk and propensity were collected from N University. The collected data were applied to several machine learning algorithms, including random forest, decision tree, artificial neural network, logistic regression, support vector machine (SVM), k-nearest neighbor (k-NN) classification, and Naive Bayes. The performance of these models was compared and evaluated, with a focus on predictive validity and on identifying significant dropout factors through the information gain index. Results: Binary logistic regression analysis showed that program year, department, grades, and year of entry had a statistically significant effect on dropout risk. Among the machine learning algorithms, random forest performed best. The relative importance of the predictor variables was highest for department, age, grade, and whether the student's residence matched the school location, in that order. Conclusion: Machine learning-based prediction of dropout risk focuses on the early identification of students at risk. The types and causes of dropout crises vary significantly among students, so it is important to identify them in order to take appropriate action, remove risk factors, and strengthen protective factors. The relative importance of the factors affecting dropout risk found in this study will help guide educational prescriptions for preventing college student dropout.
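
A compact scikit-learn sketch of the comparison-and-importance workflow the abstract describes; the synthetic records and feature names are placeholders, not N University's student data, and the impurity-based importances shown here differ from the study's information gain index.

```python
# Sketch only: compare classifiers by cross-validated F1, then rank features
# by random-forest importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)          # stand-in for dropout records
feature_names = ["department", "age", "grade", "residence_match",
                 "entry_year", "program_year"]      # hypothetical encodings

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: F1 = {score:.3f}")

rf = models["random forest"].fit(X, y)
ranked = sorted(zip(feature_names, rf.feature_importances_),
                key=lambda t: t[1], reverse=True)
print(ranked)
```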

Classification of Aβ State From Brain Amyloid PET Images Using Machine Learning Algorithm

  • Chanda Simfukwe;Reeree Lee;Young Chul Youn;Alzheimer’s Disease and Related Dementias in Zambia (ADDIZ) Group
    • Dementia and Neurocognitive Disorders
    • /
    • v.22 no.2
    • /
    • pp.61-68
    • /
    • 2023
  • Background and Purpose: Analyzing brain amyloid positron emission tomography (PET) images to assess β-amyloid (Aβ) deposition in Alzheimer's patients requires considerable time and effort from physicians, and interpretations may vary between readers. For these reasons, a machine learning model based on a convolutional neural network (CNN) was developed to provide an objective classification of Aβ-positive and Aβ-negative status from brain amyloid PET images. Methods: A total of 7,344 PET images from 144 subjects were used in this study. 18F-florbetaben PET was administered to all participants, and the criterion for differentiating Aβ-positive and Aβ-negative states was the brain amyloid plaque load (BAPL) score, which depends on physicians' visual assessment of the PET images. The CNN was trained in batches of 51 PET images per subject directory from two classes, Aβ positive and Aβ negative, based on the BAPL scores. Results: The average binary classification performance metrics were evaluated after 40 epochs over three trials on the test datasets. The model accuracy for classifying Aβ positivity and Aβ negativity was 95.00±0.02 on the test dataset. The sensitivity and specificity were 96.00±0.02 and 94.00±0.02, respectively, with an area under the curve of 87.00±0.03. Conclusions: Based on this study, the designed CNN model has the potential to be used clinically to screen amyloid PET images.
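
A minimal PyTorch sketch of a CNN binary classifier of the kind described; the input size, layer configuration, and dummy tensors are illustrative assumptions rather than the study's architecture or data.

```python
# Sketch only: compact CNN producing a single logit for Aβ-positive vs. negative.
import torch
import torch.nn as nn

class AmyloidCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),            # single logit: Aβ positive vs. negative
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = AmyloidCNN()
loss_fn = nn.BCEWithLogitsLoss()
dummy_slices = torch.randn(8, 1, 64, 64)          # placeholder PET slices
dummy_labels = torch.randint(0, 2, (8, 1)).float()
print(loss_fn(model(dummy_slices), dummy_labels).item())
```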

A comparative study of different radial basis function interpolation algorithms in the reconstruction and path planning of γ radiation fields

  • Yulong Zhang;Jinjia Cao;Biao Zhang;Xiaochang Zheng;Wei Chen
    • Nuclear Engineering and Technology
    • /
    • v.56 no.7
    • /
    • pp.2806-2820
    • /
    • 2024
  • Accurate reconstruction of the radiation field and path planning are very important for the safety of operators during the dismantling of nuclear facilities. Based on the radial basis function (RBF) interpolation algorithm, this paper discusses the application of the inverse multiquadric radial basis function (IMRBF) interpolation method to the reconstruction of gamma radiation fields and demonstrates the feasibility of reconstructing a radiation field with multiple γ sources. The average relative errors of the IMRBF interpolation results were 4.28% and 8.76% for the experimental scenarios with single and double gamma sources, respectively. After verifying the consistency between the simulated and experimental scenes, the IMRBF and cubic spline methods were each used to reconstruct the gamma radiation field from Geant4 simulation data. The results showed that the interpolation accuracy of the IMRBF method was superior to that of the cubic spline method. Further, additional RBF interpolation algorithms were used to reconstruct the multi-γ-source radiation field, and the Probabilistic Roadmap (PRM) algorithm was then used to optimize a human walking path in the radiation fields reconstructed by the different interpolation methods; the resulting optimal paths were compared. The results contribute to a comprehensive understanding of RBF interpolation methods for reconstructing γ radiation fields and their application to path optimization in radiation environments, and may provide valuable information for decision-making in radiation protection during the decommissioning of nuclear facilities.
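
A short SciPy sketch of inverse multiquadric RBF interpolation applied to sparse dose-rate measurements, the reconstruction step the abstract compares; the measurement points, source position, and toy dose field are synthetic, not the experiment's data.

```python
# Sketch only: interpolate sparse dose-rate samples onto a grid with an
# inverse multiquadric RBF kernel.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(40, 2))             # sparse measurement locations (m)
source = np.array([4.0, 6.0])                         # hypothetical gamma source position
dose = 1.0 / (np.sum((points - source) ** 2, axis=1) + 0.5)  # toy inverse-square-like field

interp = RBFInterpolator(points, dose,
                         kernel="inverse_multiquadric", epsilon=1.0)

# Reconstruct the field on a regular grid.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = interp(grid).reshape(gx.shape)
print(field.max(), field.min())
```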

Automatic Matching of Building Polygon Dataset from Digital Maps Using Hierarchical Matching Algorithm (계층적 매칭 기법을 이용한 수치지도 건물 폴리곤 데이터의 자동 정합에 관한 연구)

  • Yeom, Junho;Kim, Yongil;Lee, Jeabin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.1
    • /
    • pp.45-52
    • /
    • 2015
  • The interoperability of multi-source data has become more important because of the various digital maps produced by public institutions and enterprises. In this study, an automatic matching algorithm for multi-source building data using hierarchical matching is proposed. First, the digital maps are divided into blocks, and a primary geometric registration of the buildings is performed with the ICP algorithm. Corresponding building pairs are then determined by evaluating the similarity of their overlap area, and the matching threshold for this similarity is derived automatically by Otsu binary thresholding. After the first matching, candidate buildings prone to matching errors, whose similarity is close to the threshold value, are extracted for a secondary ICP matching, and the matching decision is made using turning angle function analysis. For evaluation, the proposed method was applied to representative public digital maps, the road name address map and digital topographic map 2.0. As a result, the F-measures for matching and non-matching buildings increased by 2% and 17%, respectively. Therefore, the proposed method is efficient for matching building polygons from multi-source digital maps.
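
A small sketch of an overlap-area similarity with an Otsu-derived matching threshold, the first-stage decision the abstract describes; the rectangles are toy footprints, and the particular similarity definition (intersection area over the smaller polygon's area) is an assumption rather than the paper's exact measure.

```python
# Sketch only: compute overlap similarities for candidate building pairs and
# derive the match/non-match threshold with Otsu's method.
import numpy as np
from shapely.geometry import box
from skimage.filters import threshold_otsu

def overlap_similarity(a, b):
    """Intersection area over the smaller polygon's area (assumed measure)."""
    inter = a.intersection(b).area
    return inter / min(a.area, b.area) if inter > 0 else 0.0

# Candidate pairs from two map sources (slightly shifted toy footprints).
pairs = [(box(0, 0, 10, 10), box(0.5, 0.3, 10.5, 10.3)),   # likely the same building
         (box(20, 20, 28, 26), box(20.2, 20.1, 28.2, 26.1)),
         (box(0, 0, 10, 10), box(30, 30, 35, 35)),          # unrelated buildings
         (box(40, 0, 46, 6), box(43, 3, 49, 9))]            # partial overlap

sims = np.array([overlap_similarity(a, b) for a, b in pairs])
threshold = threshold_otsu(sims)          # data-driven matching threshold
matches = sims >= threshold
print(sims.round(2), threshold, matches)
```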