• Title/Summary/Keyword: Machine learning (ML)

A Comprehensive Literature Study on Precision Agriculture: Tools and Techniques

  • Bh., Prashanthi; A.V. Praveen, Krishna; Ch. Mallikarjuna, Rao
    • International Journal of Computer Science & Network Security, v.22 no.12, pp.229-238, 2022
  • With digitization, data has swelled into a tsunami in almost every data-driven business sector. The information wave has been greatly amplified by machine-to-machine (M2M) digital data management. An explosion in the use of ICT for farm management has pushed technical solutions into rural areas, benefiting farmers and customers alike. This study discusses the benefits and possible pitfalls of using information and communication technology (ICT) in conventional farming. Information technology (IT), the Internet of Things (IoT), and robotics are discussed, along with the roles of machine learning (ML), artificial intelligence (AI), and sensors in farming. Drones are also examined for crop surveillance and yield-optimization management. Global and state-of-the-art IoT agricultural platforms are highlighted where relevant. This article analyzes the most recent publications on precision agriculture using ML and AI techniques. Based on this thorough literature evaluation, the study further details current and future developments in AI and identifies existing and prospective research concerns in AI for agriculture.

A Supervised Feature Selection Method for Malicious Intrusions Detection in IoT Based on Genetic Algorithm

  • Saman Iftikhar; Daniah Al-Madani; Saima Abdullah; Ammar Saeed; Kiran Fatima
    • International Journal of Computer Science & Network Security, v.23 no.3, pp.49-56, 2023
  • Machine learning methods, diversely applied in the Internet of Things (IoT) field, have been successful owing to advances in computer processing power. They offer an effective way of detecting malicious intrusions in IoT because of their high-level feature extraction capabilities. In this paper, we propose a novel feature selection method for malicious intrusion detection in IoT using an evolutionary technique, the Genetic Algorithm (GA), together with Machine Learning (ML) algorithms. The proposed model classifies the BoT-IoT dataset and evaluates its quality through training and testing with classifiers. The data is reduced and several preprocessing steps are applied, such as removal of unnecessary information, null-value checking, label encoding, standard scaling, and data balancing. GA is applied to the preprocessed data to select the most relevant features and maintain model optimization. The features selected by GA are fed to ML classifiers, Logistic Regression (LR) and Support Vector Machine (SVM), and the results are evaluated using performance measures including recall, precision, and F1-score. Two sets of experiments are conducted, from which it is concluded that hyperparameter tuning has a significant effect on the performance of both ML classifiers. Overall, SVM remained the best model in both cases, and overall results improved.
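To make the pipeline concrete, here is a minimal sketch of GA-driven feature selection wrapped around an SVM, in the spirit of the paper. The GA hyperparameters, the LinearSVC stand-in, and the binary (attack/normal) labels are illustrative assumptions, not the authors' exact setup.

```python
# Minimal GA feature selection: binary chromosomes mark which features to keep;
# fitness is the cross-validated F1 of an SVM on the selected columns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

def fitness(mask, X, y):
    """Mean cross-validated F1 of an SVM trained on the selected features."""
    if mask.sum() == 0:
        return 0.0  # empty feature set is useless
    clf = LinearSVC(dual=False)
    return cross_val_score(clf, X[:, mask.astype(bool)], y,
                           scoring="f1", cv=3).mean()

def ga_select(X, y, pop_size=20, generations=15, p_mut=0.05):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))  # random binary masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        # Tournament selection: the better of two random individuals survives.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(scores[idx[:, 0]] >= scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Single-point crossover between consecutive parents.
        cut = int(rng.integers(1, n_feat))
        children = np.vstack([
            np.concatenate([parents[i, :cut], parents[(i + 1) % pop_size, cut:]])
            for i in range(pop_size)
        ])
        # Bit-flip mutation.
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)  # best feature mask found
```

The selected mask can then feed any downstream classifier, mirroring the paper's GA-to-LR/SVM handoff.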

Machine Learning Algorithms Evaluation and CombML Development for Dam Inflow Prediction (댐 유입량 예측을 위한 머신러닝 알고리즘 평가 및 CombML 개발)

  • Hong, Jiyeong; Bae, Juhyeon; Jeong, Yeonseok; Lim, Kyoung Jae
    • Proceedings of the Korea Water Resources Association Conference, 2021.06a, p.317, 2021
  • Research on dam inflow is essential for efficient water management. In this study, inflow to the Soyang River Dam was predicted with various machine learning algorithms using 40 years of meteorological and dam-inflow data, and CombML, a combined machine learning model, was developed by selecting the algorithms best suited to high-flow and low-flow prediction. Decision Tree (DT), Multilayer Perceptron (MLP), Random Forest (RF), Gradient Boosting (GB), RNN-LSTM, and CNN-LSTM algorithms were used, and the combined machine learning algorithm (CombML) was developed and evaluated by coupling the most accurate model overall with the model showing particularly high accuracy outside high-flow conditions. Among the algorithms, MLP gave the best results for dam-inflow prediction with NSE 0.812, RMSE 77.218 m3/s, MAE 29.034 m3/s, R 0.924, and R2 0.817, while the ensemble models (RF, GB) outperformed MLP when dam inflow was 100 m3/s or less. Accordingly, taking 16 mm, the average daily precipitation when inflow is at least 100 m3/s, as the threshold, two combined machine learning (CombML) models (RF_MLP and GB_MLP) were developed that predict dam inflow with the ensemble methods (RF and GB) when precipitation is 16 mm or less and with MLP when precipitation exceeds 16 mm. As a result, RF_MLP achieved NSE 0.857, RMSE 68.417 m3/s, MAE 18.063 m3/s, R 0.927, and R2 0.859, and GB_MLP achieved NSE 0.829, RMSE 73.918 m3/s, MAE 18.093 m3/s, R 0.912, and R2 0.831, so CombML was evaluated as predicting dam inflow most accurately. This study confirms that combining machine learning algorithms in consideration of the streamflow regime improves prediction accuracy, which is expected to be useful for efficient water management in the future.

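The regime switch at the heart of the RF_MLP variant of CombML is easy to express in code. Below is a minimal sketch, assuming scikit-learn stand-ins for the paper's models and a single precipitation feature column; training each sub-model only on its own regime is an assumption, since the abstract specifies only how predictions are routed.

```python
# CombML-style regime switch: RF for days with precipitation <= 16 mm,
# MLP for heavier-rain (high-flow) days.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

class CombML:
    def __init__(self, threshold_mm=16.0, precip_col=0):
        self.threshold = threshold_mm
        self.precip_col = precip_col           # index of daily precipitation
        self.low_flow = RandomForestRegressor(n_estimators=200)
        self.high_flow = MLPRegressor(hidden_layer_sizes=(64, 64),
                                      max_iter=2000)

    def fit(self, X, y):
        low = X[:, self.precip_col] <= self.threshold
        self.low_flow.fit(X[low], y[low])      # RF handles low-flow days
        self.high_flow.fit(X[~low], y[~low])   # MLP handles high-flow days
        return self

    def predict(self, X):
        low = X[:, self.precip_col] <= self.threshold
        out = np.empty(X.shape[0])
        if low.any():
            out[low] = self.low_flow.predict(X[low])
        if (~low).any():
            out[~low] = self.high_flow.predict(X[~low])
        return out
```

Swapping RandomForestRegressor for GradientBoostingRegressor yields the GB_MLP variant.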

An AutoML-driven Antenna Performance Prediction Model in the Autonomous Driving Radar Manufacturing Process

  • So-Hyang Bak; Kwanghoon Pio Kim
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.12, pp.3330-3344, 2023
  • This paper proposes an antenna performance prediction model for the autonomous driving radar manufacturing process. Our work builds on a challenge dataset, the Driving Radar Manufacturing Process Dataset, and a typical AutoML machine learning workflow engine, the PyCaret open-source Python library. The dataset contains 70 data items in total, of which 54 are used as input features and 16 as output features, so the task is framed as a multi-output regression problem. During the regression analysis and preprocessing phase, we identified several input features with similar correlations and removed some of them, as they could cause a multicollinearity problem that degrades overall model performance. In the training phase, we trained a regression model for each output feature using the AutoML approach. We then selected the top 5 models with the highest performance in the AutoML result reports and applied an ensemble method to improve their performance. In the experimental evaluation of the regression prediction model, we used two metrics, MAE and RMSE, which came to 0.6928 and 1.2065, respectively. Additionally, we carried out a series of experiments to verify the proposed model's performance by comparing it with existing models. In conclusion, the model improves accuracy for safer autonomous vehicles, reduces manufacturing costs through the AutoML-PyCaret ensembled machine learning model, and prevents the production of faulty radar systems, conserving resources. Ultimately, the proposed model holds significant promise not only for antenna performance prediction but also for improving manufacturing quality and advancing radar systems in autonomous vehicles.
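The per-output AutoML workflow maps naturally onto PyCaret's regression API. A minimal sketch follows, assuming the described pattern (one AutoML run per output, top-5 blending); the file name and the "in_"/"out_" column-name convention are placeholders, not the actual dataset schema.

```python
# Per-output AutoML with PyCaret: compare candidate regressors for each
# output feature, keep the best five, and blend them into an ensemble.
import pandas as pd
from pycaret.regression import setup, compare_models, blend_models, predict_model

df = pd.read_csv("radar_process.csv")            # hypothetical file name
input_cols = [c for c in df.columns if c.startswith("in_")]
output_cols = [c for c in df.columns if c.startswith("out_")]

models = {}
for target in output_cols:
    # One AutoML run per output feature (multi-output via independent models).
    setup(data=df[input_cols + [target]], target=target,
          session_id=1, verbose=False)
    top5 = compare_models(n_select=5, sort="RMSE")  # best 5 algorithms
    models[target] = blend_models(top5)             # ensemble them

# Predict every output feature with its dedicated blended model.
preds = {t: predict_model(m, data=df[input_cols]) for t, m in models.items()}
```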

Trends in Artificial Intelligence Applications in Clinical Trials: An analysis of ClinicalTrials.gov (임상시험에서 인공지능의 활용에 대한 분석 및 고찰: ClinicalTrials.gov 분석)

  • Jeong Min Go; Ji Yeon Lee; Yun-Kyoung Song; Jae Hyun Kim
    • Korean Journal of Clinical Pharmacy, v.34 no.2, pp.134-139, 2024
  • Background: A growing number of studies on artificial intelligence (AI) and machine learning (ML) have led to their application in clinical trials. The purpose of this study is to analyze computer-based new technologies (AI/ML) applied to clinical trials registered on ClinicalTrials.gov and elucidate their current usage. Methods: As of March 1st, 2023, protocols listed on ClinicalTrials.gov that claimed to use AI/ML and included at least one of the following interventions-Drug, Biological, Dietary Supplement, or Combination Product-were selected. The selected protocols were classified according to their context of use: 1) drug discovery; 2) toxicity prediction; 3) enrichment; 4) risk stratification/management; 5) dose selection/optimization; 6) adherence; 7) synthetic control; 8) endpoint assessment; 9) postmarketing surveillance; and 10) drug selection. Results: The applications of AI/ML were explored in 131 clinical trial protocols. The areas where AI/ML was most frequently utilized were endpoint assessment (n=80), followed by dose selection/optimization (n=15), risk stratification/management (n=13), drug discovery (n=4), adherence (n=4), drug selection (n=1), and enrichment (n=1). Conclusion: The most frequent application of AI/ML in clinical trials is endpoint assessment, where the utilization primarily focuses on the diagnosis of disease by image or video analysis. The number of clinical trials using artificial intelligence will increase as the technology continues to develop rapidly, making it necessary for regulatory authorities to establish proper regulations for these trials.

Application and Potential of Artificial Intelligence in Heart Failure: Past, Present, and Future

  • Minjae Yoon; Jin Joo Park; Taeho Hur; Cam-Hao Hua; Musarrat Hussain; Sungyoung Lee; Dong-Ju Choi
    • International Journal of Heart Failure, v.6 no.1, pp.11-19, 2024
  • The prevalence of heart failure (HF) is increasing, necessitating accurate diagnosis and tailored treatment. The accumulation of clinical information from patients with HF generates big data, which poses challenges for traditional analytical methods. To address this, big data approaches and artificial intelligence (AI) have been developed that can effectively predict future observations and outcomes, enabling precise diagnoses and personalized treatments of patients with HF. Machine learning (ML) is a subfield of AI that allows computers to analyze data, find patterns, and make predictions without explicit instructions. ML can be supervised, unsupervised, or semi-supervised. Deep learning is a branch of ML that uses artificial neural networks with multiple layers to find complex patterns. These AI technologies have shown significant potential in various aspects of HF research, including diagnosis, outcome prediction, classification of HF phenotypes, and optimization of treatment strategies. In addition, integrating multiple data sources, such as electrocardiography, electronic health records, and imaging data, can enhance the diagnostic accuracy of AI algorithms. Currently, wearable devices and remote monitoring aided by AI enable the earlier detection of HF and improved patient care. This review focuses on the rationale behind utilizing AI in HF and explores its various applications.

Federated Learning-Internet of Underwater Things (연합 학습기반 수중 사물 인터넷)

  • Shrutika Sinha; G., Pradeep Reddy; Soo-Hyun Park
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.140-142, 2023
  • Federated learning (FL) is a new paradigm in machine learning (ML) that enables multiple devices to collaboratively train a shared ML model without sharing their local data. FL is well suited to applications where data is sensitive or difficult to transmit in large volumes, or where collaborative learning is required. The Internet of Underwater Things (IoUT) is a network of underwater devices that collect and exchange data. This data can be used for a variety of applications, such as monitoring water quality, detecting marine life, and tracking underwater vehicles. However, the harsh underwater environment makes it difficult to collect and transmit data in large volumes. FL can address these challenges by enabling devices to train a shared ML model without transmitting their data to a central server, which helps protect data privacy and improves training efficiency. With this in view, this paper provides a brief overview of Fed-IoUT, highlighting its various applications, challenges, and opportunities.
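The FL principle the paper builds on can be shown in a few lines of FedAvg: each (underwater) node trains locally and only model weights, never raw sensor data, travel to the server for averaging. The tiny linear model and synthetic node data below are illustrative assumptions.

```python
# Minimal FedAvg: local gradient descent per node, then size-weighted
# averaging of the resulting weights on the server.
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """Plain gradient descent on a local least-squares objective."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg(datasets, dim, rounds=20):
    w_global = np.zeros(dim)
    sizes = np.array([len(y) for _, y in datasets])
    for _ in range(rounds):
        # Each node starts from the current global model and trains locally.
        local = [local_step(w_global.copy(), X, y) for X, y in datasets]
        # Server aggregates weights, weighted by local dataset size.
        w_global = np.average(local, axis=0, weights=sizes)
    return w_global

# Three simulated nodes with private data drawn from the same true model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for n in (30, 50, 40):
    X = rng.normal(size=(n, 2))
    nodes.append((X, X @ true_w + 0.1 * rng.normal(size=n)))
print(fedavg(nodes, dim=2))   # converges toward [2.0, -1.0]
```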

Classifying Severity of Senior Driver Accidents In Capital Regions Based on Machine Learning Algorithms (머신러닝 기반의 수도권 지역 고령운전자 차대사람 사고심각도 분류 연구)

  • Kim, Seunghoon; Lym, Youngbin; Kim, Ki-Jung
    • Journal of Digital Convergence, v.19 no.4, pp.25-31, 2021
  • As society ages, traffic accidents involving elderly drivers have attracted broader public attention. The rapid increase in senior involvement in crashes calls for crash-severity prediction models specific to senior drivers. This study therefore leverages machine learning (ML) algorithms to predict the severity of vehicle-pedestrian collisions caused by elderly drivers. Specifically, four ML algorithms (Logistic model, K-Nearest Neighbor (KNN), Random Forest (RF), and Support Vector Machine (SVM)) were developed and compared. Our results show that the Logistic model and SVM outperformed their rivals in overall prediction accuracy, while RF performed best in terms of precision. We also find that driver education and technology development would be effective countermeasures against the severity risks of senior-driver-induced collisions. These findings support informed decision making by policymakers seeking to enhance public safety.
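The four-model comparison is straightforward to reproduce in outline with scikit-learn. The synthetic data below stands in for the crash records, which are not reproduced here, and the default hyperparameters are assumptions.

```python
# Compare the four classifiers from the study on accuracy and precision.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score
from sklearn.datasets import make_classification

# Placeholder for the crash-severity records (binary severity label).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Logistic": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(),
    "SVM": SVC(),
}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
          f"precision={precision_score(y_te, pred):.3f}")
```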

Addressing Inter-floor Noise Issues in Apartment Buildings using On-Sensor AI Embedded with TinyML on Ultra-Low-Power Systems

  • Jae-Won Kwak; In-Yeop Choi
    • Journal of the Korea Society of Computer and Information, v.29 no.3, pp.75-81, 2024
  • In this paper, we propose a method for handling inter-floor noise problems in real time by embedding TinyML, which includes a deep learning model, into ultra-low-power systems. This method is feasible because lightweight deep learning model technology allows even systems with small computing resources to perform inference autonomously. The conventional approach to inter-floor noise problems sends data collected from sensors to a server for analysis and processing. However, this centralized processing method suffers from high cost, complexity, and difficulty with real-time processing. In this paper, we address these limitations by employing On-Sensor AI using TinyML. The presented method is simple to install, cost-effective, and capable of processing problems in real time.
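One common route to this kind of on-sensor model is full int8 quantization with TensorFlow Lite, sketched below. The 32-dimensional feature input, the two-class (noise / no-noise) head, and the random training data are illustrative assumptions, not the paper's architecture.

```python
# Train a tiny Keras classifier and export it as a fully int8-quantized
# TensorFlow Lite model, suitable for deployment via TFLite Micro on an MCU.
import numpy as np
import tensorflow as tf

# Tiny network: small enough for on-sensor inference after quantization.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),              # e.g. spectral features
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # noise / no-noise
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

X = np.random.rand(512, 32).astype(np.float32)       # placeholder training data
y = np.random.randint(0, 2, 512)
model.fit(X, y, epochs=3, verbose=0)

def representative_data():
    # Calibration samples so the converter can pick int8 quantization ranges.
    for i in range(100):
        yield [X[i:i + 1]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("noise_model.tflite", "wb") as f:
    f.write(converter.convert())                     # flash alongside firmware
```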

Can Artificial Intelligence Boost Developing Electrocatalysts for Efficient Water Splitting to Produce Green Hydrogen?

  • Jaehyun Kim; Ho Won Jang
    • Korean Journal of Materials Research, v.33 no.5, pp.175-188, 2023
  • Water electrolysis holds great potential as a method for producing renewable hydrogen fuel at large scale and for replacing the fossil fuels responsible for greenhouse gas emissions and global climate change. To reduce the cost of hydrogen and make it competitive against fossil fuels, the efficiency of green hydrogen production should be maximized. This requires superior electrocatalysts that reduce the reaction energy barriers. The development of catalytic materials has mostly relied on empirical, trial-and-error methods because of the complicated, multidimensional, and dynamic nature of catalysis, requiring significant time and effort to find optimized multicomponent catalysts under a variety of reaction conditions. The ultimate goal for all researchers in materials science and engineering is the rational and efficient design of materials with the desired performance. Discovering and understanding new catalysts with desired properties is at the heart of materials science research, and this process can benefit from machine learning (ML), given the complex nature of catalytic reactions and the vast range of candidate materials. This review summarizes recent achievements in catalyst discovery for the hydrogen evolution reaction (HER) and oxygen evolution reaction (OER). The basic concepts of ML algorithms and practical guides for materials scientists are also presented. The challenges and strategies of applying ML are discussed, which should be addressed collaboratively by the materials science and ML communities. The ultimate integration of ML in catalyst development is expected to accelerate the design, discovery, optimization, and interpretation of superior electrocatalysts, realizing a carbon-free ecosystem based on green hydrogen.
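As a flavor of the surrogate-modeling workflow such reviews describe, the sketch below regresses a synthetic stand-in for OER overpotential on a few simple descriptors and could then rank unseen candidates. The descriptor names, units, and data are illustrative assumptions, not results from the review.

```python
# Surrogate model for catalyst screening: predict a figure of merit
# (here a synthetic "overpotential") from composition/electronic descriptors.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 200
X = pd.DataFrame({
    "d_band_center": rng.normal(-2.0, 0.5, n),    # eV, hypothetical descriptor
    "electronegativity": rng.normal(1.8, 0.3, n),
    "ionic_radius": rng.normal(0.7, 0.1, n),      # angstrom
})
# Synthetic target standing in for a measured overpotential (mV).
y = (300 + 80 * X["d_band_center"] ** 2
     + 30 * X["electronegativity"] + rng.normal(0, 10, n))

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
model.fit(X, y)
# In practice: predict over a library of candidate compositions and
# shortlist those with the lowest predicted overpotential for experiments.
```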