• Title/Summary/Keyword: Machine-Learning


A Hybrid Multi-Level Feature Selection Framework for prediction of Chronic Disease

  • G.S. Raghavendra;Shanthi Mahesh;M.V.P. Chandrasekhara Rao
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.12
    • /
    • pp.101-106
    • /
    • 2023
  • Chronic illnesses are among the most common serious problems affecting human health. Early diagnosis of chronic diseases can help avoid or mitigate their consequences, potentially decreasing mortality rates. Using machine learning algorithms to identify risk factors is a promising strategy. The issue with existing feature selection approaches is that each method provides a distinct set of attributes that affect model accuracy, and current methods do not perform well on large multidimensional datasets. We introduce a novel model containing a feature selection approach that selects optimal attributes from large multidimensional datasets to provide reliable predictions of chronic illnesses without sacrificing data uniqueness. To ensure the success of the proposed model, we balanced the classes by applying hybrid class-sampling methods to the original dataset, together with data pre-processing and data transformation methods, to provide credible data for the training model. We ran and assessed our model on datasets with binary and multivalued classifications, using multiple datasets (Parkinson's, arrhythmia, breast cancer, kidney, diabetes). Suitable features are selected using a hybrid feature model consisting of LassoCV, decision tree, random forest, gradient boosting, AdaBoost, and stochastic gradient descent, followed by voting on the attributes common to the outputs of these methods. The accuracy on the original dataset, before applying the framework, is recorded and evaluated against the accuracy on the reduced attribute set. The results are shown separately to provide comparisons. Based on the result analysis, we conclude that our proposed model produced higher accuracy on multi-valued class datasets than on binary class datasets.
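
A minimal sketch of the voting-based hybrid feature selection the abstract describes, assuming a scikit-learn workflow on a generic tabular dataset; the estimators, thresholds, and the majority-vote rule below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of voting-based hybrid feature selection (illustrative, not the authors' exact pipeline).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LassoCV, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, AdaBoostClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

selectors = {
    "lassocv": SelectFromModel(LassoCV(cv=5)),
    "decision_tree": SelectFromModel(DecisionTreeClassifier(random_state=0)),
    "random_forest": SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0)),
    "gradient_boosting": SelectFromModel(GradientBoostingClassifier(random_state=0)),
    "adaboost": SelectFromModel(AdaBoostClassifier(random_state=0)),
    "sgd": SelectFromModel(SGDClassifier(penalty="l1", random_state=0)),
}

# Each method votes for the features it keeps; features retained by a majority are selected.
votes = np.zeros(n_features, dtype=int)
for name, sel in selectors.items():
    sel.fit(X, y)
    votes += sel.get_support().astype(int)

selected = np.where(votes >= len(selectors) // 2 + 1)[0]
print("Selected feature indices:", selected)
```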

The gene expression programming method for estimating compressive strength of rocks

  • Ibrahim Albaijan;Daria K. Voronkova;Laith R. Flaih;Meshel Q. Alkahtani;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Adil Hussein Mohammed
    • Geomechanics and Engineering
    • /
    • v.36 no.5
    • /
    • pp.465-474
    • /
    • 2024
  • Uniaxial compressive strength (UCS) is a critical geomechanical parameter that plays a significant role in the evaluation of rocks. Indirect estimation of this characteristic is widespread because of the challenges associated with obtaining high-quality core samples. The primary aim of this study is to investigate the feasibility of using the gene expression programming (GEP) technique to forecast the UCS of various rock categories, including Schist, Granite, Claystone, Travertine, Sandstone, Slate, Limestone, Marl, and Dolomite, sourced from a wide range of quarry sites. The study utilized a total of 170 datasets comprising Schmidt hammer (SH), porosity (n), point load index (Is(50)), and P-wave velocity (Vp) as the effective parameters in the model to determine their impact on the UCS. The UCS parameter was computed using the GEP model, resulting in the generation of an equation. Subsequently, the efficacy of the GEP model and the resultant equation was assessed using various statistical evaluation metrics to determine their predictive capabilities. The outcomes indicate the prospective capacity of the GEP model and the resultant equation for forecasting the UCS. The significance of this study lies in its ability to enable geotechnical engineers to estimate the UCS of rocks without conducting expensive and time-consuming experimental tests. In particular, a user-friendly program was developed based on the GEP model to enable rapid and accurate calculation of a rock's UCS.
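
The study's GEP model produces a closed-form equation for UCS from Schmidt hammer, porosity, point load index, and P-wave velocity. As a rough illustration of this style of model building, the sketch below uses gplearn's genetic programming (a related technique, not GEP itself) on synthetic placeholder data rather than the study's 170 measurements.

```python
# Illustrative symbolic regression with gplearn (genetic programming, a relative of GEP, not the study's method).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 170  # same size as the study's dataset, but the values below are synthetic placeholders
SH = rng.uniform(20, 60, n)         # Schmidt hammer rebound
porosity = rng.uniform(0.5, 20, n)  # n (%)
Is50 = rng.uniform(1, 10, n)        # point load index (MPa)
Vp = rng.uniform(2000, 6000, n)     # P-wave velocity (m/s)

# A made-up target relationship just to give the regressor something to fit.
UCS = 2.5 * Is50 + 0.02 * Vp + 0.8 * SH - 1.5 * porosity + rng.normal(0, 5, n)

X = np.column_stack([SH, porosity, Is50, Vp])
est = SymbolicRegressor(population_size=1000, generations=20,
                        function_set=("add", "sub", "mul", "div"),
                        parsimony_coefficient=0.001, random_state=0)
est.fit(X, UCS)
print(est._program)  # the evolved closed-form expression for UCS
```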

Blockchain-based Important Information Management Techniques for IoT Environment (IoT 환경을 위한 블록체인 기반의 중요 정보 관리 기법)

  • Yoon-Su Jeong
    • Advanced Industrial Science
    • /
    • v.3 no.1
    • /
    • pp.30-36
    • /
    • 2024
  • Recently, the Internet of Things (IoT), which has been applied to various industrial fields, has been constantly evolving through automation and digitization. However, in networks where IoT devices are deployed, data sharing of IoT critical information, personal information protection, and data integrity among intermediate nodes are still being actively studied. In this study, we propose a blockchain-based IoT critical information management technique that is easy to implement without burdening the intermediate nodes in network environments where IoT is deployed. The proposed technique allocates a random value of random size to the IoT critical information arriving at an intermediate node and manages it as a decentralized P2P blockchain. In addition, the proposed technique makes IoT critical data easier to manage by creating licenses, such as time limits and device restrictions, according to the weight of the IoT critical information. In the performance evaluation, the proposed technique improved delay time and processing time by an average of 7.6% and 10.1%, respectively, compared to existing techniques.
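
A minimal, self-contained sketch of the kind of hash-chained record the abstract describes: each piece of IoT critical information receives a random nonce of random size and simple license metadata (time limit, allowed devices). All field names and the chaining scheme are illustrative assumptions, not the paper's protocol.

```python
# Illustrative hash-chained ledger for IoT critical information (field names and scheme are assumptions).
import hashlib, json, secrets, time

class SimpleChain:
    def __init__(self):
        self.blocks = [{"index": 0, "prev_hash": "0" * 64, "data": "genesis",
                        "nonce": "", "timestamp": time.time(), "hash": ""}]
        self.blocks[0]["hash"] = self._hash(self.blocks[0])

    @staticmethod
    def _hash(block):
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def add_critical_info(self, info, ttl_seconds, allowed_devices):
        # Attach a random nonce of random size plus a license (time limit, device restriction).
        nonce = secrets.token_hex(secrets.randbelow(16) + 8)
        block = {
            "index": len(self.blocks),
            "prev_hash": self.blocks[-1]["hash"],
            "data": info,
            "license": {"expires_at": time.time() + ttl_seconds, "devices": allowed_devices},
            "nonce": nonce,
            "timestamp": time.time(),
            "hash": "",
        }
        block["hash"] = self._hash(block)
        self.blocks.append(block)
        return block

chain = SimpleChain()
chain.add_critical_info({"sensor": "gateway-7", "reading": 42.1}, ttl_seconds=3600,
                        allowed_devices=["device-A", "device-B"])
print(chain.blocks[-1]["hash"])
```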

Crab Landing QAR (Quick Access Recorder) Flight Data Statistical Analysis Model (크랩랜딩(Crab Landing) QAR(Quick Access Recorder) 비행 데이터 통계분석 모델)

  • Jeon Je-Hyung;Kim Hyeon-deok
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.2
    • /
    • pp.185-192
    • /
    • 2024
  • The aviation industry has improved safety through technological innovation and has strengthened flight safety through safety regulations and supervision by aviation authorities. As the industry's safety approach has evolved into a systematic, system-level view of the aircraft, airlines have established safety management systems. Technical defects or abnormal data in an aircraft can be warning signs that could lead to an accident, and the risk of an accident can be reduced by identifying and responding to these signs early. Therefore, management of abnormal warning signs is an essential element in promoting data-based decision-making and enhancing the operational efficiency and safety level of airlines. In this study, we present a model for statistically analyzing quick access recorder (QAR) flight data in the preliminary analysis stage to examine the patterns and causes of crab landing events, which can lead to runway departures during landing, and to provide precursors to such landing events. We aim to identify these signs and causes and thereby contribute to more efficient safety management.
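
The abstract does not specify the statistical model, but a typical preliminary QAR screening might compute the crab angle (heading minus ground track) near touchdown and flag exceedances. The pandas sketch below is a hypothetical example with made-up column names and an illustrative threshold, not the authors' model.

```python
# Hypothetical preliminary QAR screening for crab landings (column names and threshold are assumptions).
import pandas as pd

# Assume one row per recorded sample near touchdown, with heading and ground track in degrees.
qar = pd.DataFrame({
    "flight_id": ["F001", "F001", "F002", "F002"],
    "radio_alt_ft": [50, 10, 45, 8],
    "heading_deg": [273.0, 272.5, 181.0, 186.5],
    "track_deg": [270.0, 270.2, 180.5, 180.8],
})

# Crab angle = signed difference between heading and ground track, wrapped to [-180, 180).
qar["crab_angle_deg"] = (qar["heading_deg"] - qar["track_deg"] + 180) % 360 - 180

# Keep the lowest-altitude sample below 20 ft per flight and flag large crab angles.
touchdown = (qar[qar["radio_alt_ft"] < 20]
             .sort_values("radio_alt_ft")
             .groupby("flight_id", as_index=False).first())
CRAB_LIMIT_DEG = 5.0  # illustrative threshold
touchdown["crab_exceedance"] = touchdown["crab_angle_deg"].abs() > CRAB_LIMIT_DEG
print(touchdown[["flight_id", "crab_angle_deg", "crab_exceedance"]])
```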

Analyzing fashion item purchase patterns and channel transition patterns using association rules and brand loyalty in big data (빅데이터의 연관규칙과 브랜드 충성도를 활용한 패션품목 구매패턴과 구매채널 전환패턴 분석)

  • Ki Yong Kwon
    • The Research Journal of the Costume Culture
    • /
    • v.32 no.2
    • /
    • pp.199-214
    • /
    • 2024
  • Until now, research on consumers' purchasing behavior has primarily focused on psychological aspects or depended on consumer surveys. However, there may be a gap between consumers' self-reported perceptions and their observable actions. In response, this study aimed to investigate consumer purchasing behavior utilizing a big data approach. To this end, this study investigated the purchasing patterns of fashion items, both online and in retail stores, from a data-driven perspective. We also investigated whether individual consumers switched between online websites and retail establishments for making purchases. Data on 516,474 purchases were obtained from fashion companies. We used association rule analysis and K-means clustering to identify purchase patterns that were influenced by customer loyalty. Furthermore, sequential pattern analysis was applied to investigate the usage patterns of online and offline channels by consumers. The results showed that high-loyalty consumers mainly purchased infrequently bought items in the brand line, as well as high-priced items, and that these purchase patterns were similar both online and in stores. In contrast, the low-loyalty group showed different purchasing behaviors for online versus in-store purchases. In physical environments, the low-loyalty consumers tended to purchase less popular or more expensive items from the brand line, whereas in online environments, their purchases centered around items with relatively high sales volumes. Finally, we found that both high and low loyalty groups exclusively used a single preferred channel, either online or in-store. The findings help companies better understand consumer purchase patterns and build future marketing strategies around items with high brand centrality.
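
As a rough illustration of the association-rule step described in the abstract, the sketch below applies mlxtend's Apriori implementation to a tiny one-hot-encoded basket table; the items and thresholds are placeholders, not the study's 516,474-purchase dataset.

```python
# Illustrative association-rule mining with mlxtend (toy basket data, not the study's dataset).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot encoded purchase baskets: rows are transactions, columns are fashion items.
baskets = pd.DataFrame({
    "coat":    [1, 1, 0, 1, 0, 1],
    "scarf":   [1, 1, 0, 1, 0, 0],
    "sneaker": [0, 0, 1, 0, 1, 1],
    "socks":   [0, 0, 1, 0, 1, 0],
}, dtype=bool)

frequent = apriori(baskets, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="lift", min_threshold=1.2)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```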

The Perception of Pre-service English Teachers' use of AI Translation Tools in EFL Writing (영작문 도구로서의 인공지능번역 활용에 대한 초등예비교사의 인식연구)

  • Jaeseok Yang
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.121-128
    • /
    • 2024
  • With the recent rise in the use of AI-based online translation tools, interest in their methods and effects on education has grown. This study involved 30 prospective elementary school teachers who completed an English writing task using an AI-based online translation tool. The study focused on assessing the impact of these tools on English writing skills and their practical applications. It examined the usability, educational value, and the advantages and disadvantages of the AI translation tool. Through data collected via writing tests, surveys, and interviews, the study revealed that the use of translation tools positively affects English writing skills. From the learners' perspective, these tools were perceived to provide support and convenience for learning. However, there was also recognition of the need for educational strategies to effectively use these tools, alongside concerns about methods to enhance the completeness or accuracy of translations and the potential for over-reliance on the tools. The study concluded that for effective utilization of translation tools, the implementation of educational strategies and the role of the teacher are crucial.

A review of ground camera-based computer vision techniques for flood management

  • Sanghoon Jun;Hyewoon Jang;Seungjun Kim;Jong-Sub Lee;Donghwi Jung
    • Computers and Concrete
    • /
    • v.33 no.4
    • /
    • pp.425-443
    • /
    • 2024
  • Floods are among the most common natural hazards in urban areas. To mitigate the problems caused by flooding, unstructured data such as images and videos collected from closed circuit televisions (CCTVs) or unmanned aerial vehicles (UAVs) have been examined for flood management (FM). Many computer vision (CV) techniques have been widely adopted to analyze imagery data. Although some papers have reviewed recent CV approaches that utilize UAV images or remote sensing data, less effort has been devoted to studies that have focused on CCTV data. In addition, few studies have distinguished between the main research objectives of CV techniques (e.g., flood depth and flooded area) for a comprehensive understanding of the current status and trends of CV applications for each FM research topic. Thus, this paper provides a comprehensive review of the literature that proposes CV techniques for aspects of FM using ground camera (e.g., CCTV) data. Research topics are classified into four categories: flood depth, flood detection, flooded area, and surface water velocity. These application areas are subdivided into three types: urban, river and stream, and experimental. The adopted CV techniques are summarized for each research topic and application area. The primary goal of this review is to provide guidance for researchers who plan to design a CV model for specific purposes such as flood-depth estimation. Researchers should be able to draw on this review to construct an appropriate CV model for any FM purpose.

Prediction of ocean surface current: Research status, challenges, and opportunities. A review

  • Ittaka Aldini;Adhistya E. Permanasari;Risanuri Hidayat;Andri Ramdhan
    • Ocean Systems Engineering
    • /
    • v.14 no.1
    • /
    • pp.85-99
    • /
    • 2024
  • Ocean surface currents have an essential role in the Earth's climate system and significantly impact the marine ecosystem, weather patterns, and human activities. However, predicting ocean surface currents remains challenging due to the complexity and variability of the oceanic processes involved. This review article provides an overview of the current research status, challenges, and opportunities in the prediction of ocean surface currents. We discuss the various observational and modelling approaches used to study ocean surface currents, including satellite remote sensing, in situ measurements, and numerical models. We also highlight the major challenges facing the prediction of ocean surface currents, such as data assimilation, model-observation integration, and the representation of sub-grid scale processes. We suggest that future research should focus on developing advanced modeling techniques, such as machine learning, and on integrating multiple observational platforms to improve the accuracy and skill of ocean surface current predictions. We also emphasize the need to address the limitations of observing instruments, such as delays in receiving data, versioning errors, missing data, and undocumented data processing techniques; improving data availability and quality will be essential for enhancing the accuracy of predictions. Future research should also focus on developing effective bias-correction methods and data preprocessing procedures, and on utilizing combined models and explainable AI (xAI) models to incorporate data from various sources. Advancements in predicting ocean surface currents will benefit various applications such as maritime operations, climate studies, and ecosystem management.

Harnessing the Power of Voice: A Deep Neural Network Model for Alzheimer's Disease Detection

  • Chan-Young Park;Minsoo Kim;YongSoo Shim;Nayoung Ryoo;Hyunjoo Choi;Ho Tae Jeong;Gihyun Yun;Hunboc Lee;Hyungryul Kim;SangYun Kim;Young Chul Youn
    • Dementia and Neurocognitive Disorders
    • /
    • v.23 no.1
    • /
    • pp.1-10
    • /
    • 2024
  • Background and Purpose: Voice, reflecting cerebral functions, holds potential for analyzing and understanding brain function, especially in the context of cognitive impairment (CI) and Alzheimer's disease (AD). This study used voice data to distinguish between normal cognition and CI or Alzheimer's disease dementia (ADD). Methods: This study enrolled 3 groups of subjects: 1) 52 subjects with subjective cognitive decline; 2) 110 subjects with mild CI; and 3) 59 subjects with ADD. Voice features were extracted using Mel-frequency cepstral coefficients and Chroma. Results: A deep neural network (DNN) model showed promising performance, with an accuracy of roughly 81% across 10 trials in predicting ADD, which increased to an average of about 82.0%±1.6% when evaluated on an unseen test dataset. Conclusions: Although the results did not demonstrate the level of accuracy necessary for a definitive clinical tool, they provide a compelling proof of concept for the potential use of voice data in cognitive status assessment. DNN algorithms using voice offer a promising approach to early detection of AD. They could improve the accuracy and accessibility of diagnosis, ultimately leading to better outcomes for patients.
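
A minimal sketch of the kind of pipeline the abstract outlines: extract MFCC and chroma features with librosa, average them over time, and train a small feed-forward classifier. Here scikit-learn's MLPClassifier stands in for the authors' DNN, and the file paths and labels are placeholders.

```python
# Illustrative voice-feature pipeline: MFCC + chroma -> small feed-forward classifier.
# File paths, labels, and the classifier stand in for the authors' actual data and DNN.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def extract_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)        # shape (12, frames)
    # Average each coefficient over time to get one fixed-length vector per recording.
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1)])

# Hypothetical recordings and diagnostic labels (0 = normal cognition, 1 = ADD).
paths = ["subject_001.wav", "subject_002.wav"]  # placeholder file names
labels = [0, 1]

X = np.vstack([extract_features(p) for p in paths])
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, labels)
print(clf.predict(X))
```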

Artificial Intelligence-Based Colorectal Polyp Histology Prediction by Using Narrow-Band Image-Magnifying Colonoscopy

  • Istvan Racz;Andras Horvath;Noemi Kranitz;Gyongyi Kiss;Henriett Regoczi;Zoltan Horvath
    • Clinical Endoscopy
    • /
    • v.55 no.1
    • /
    • pp.113-121
    • /
    • 2022
  • Background/Aims: We have been developing an artificial intelligence-based polyp histology prediction (AIPHP) method to classify narrow-band imaging (NBI) magnifying colonoscopy images and predict the hyperplastic or neoplastic histology of polyps. Our aim was to analyze the accuracy of AIPHP and narrow-band imaging international colorectal endoscopic (NICE) classification-based histology predictions and to compare the results of the two methods. Methods: We studied 373 colorectal polyp samples taken by polypectomy from 279 patients. The documented NBI still images were analyzed by the AIPHP method and by the NICE classification in parallel. The AIPHP software was created using a machine learning method; the software measures five geometrical and color features on the endoscopic image. Results: The accuracy of AIPHP was 86.6% (323/373) across all polyps. We compared the AIPHP accuracy results for diminutive and non-diminutive polyps (82.1% vs. 92.2%; p=0.0032). The accuracy of the hyperplastic histology prediction was significantly better with NICE than with the AIPHP method, both in diminutive polyps (n=207) (95.2% vs. 82.1%; p<0.001) and in all evaluated polyps (n=373) (97.1% vs. 86.6%; p<0.001). Conclusions: Our artificial intelligence-based polyp histology prediction software could predict histology with high accuracy only in the larger (non-diminutive) polyp subgroup.
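
The abstract states that the AIPHP software measures five geometrical and color features on each NBI image and classifies them with a machine-learning model, but the exact features are not given. The sketch below computes a few generic color and shape statistics with OpenCV and uses a random forest as a stand-in classifier; every feature choice here is an assumption.

```python
# Illustrative polyp-image feature extraction + classifier (feature choices are assumptions, not AIPHP's).
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simple_features(image_bgr):
    """Compute a few generic color and geometric statistics from a polyp region image."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hue_mean, sat_mean = hsv[..., 0].mean(), hsv[..., 1].mean()
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Threshold and take the largest contour as a rough polyp outline.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-6)
    else:
        area, circularity = 0.0, 0.0
    return [hue_mean, sat_mean, gray.std(), area, circularity]  # five illustrative features

# Hypothetical training data: image paths and labels (0 = hyperplastic, 1 = neoplastic).
X = np.array([simple_features(cv2.imread(p)) for p in ["polyp_001.png", "polyp_002.png"]])
y = np.array([0, 1])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```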