• Title/Summary/Keyword: Machine learning (ML)

Development of Prediction Models for Fatal Accidents using Proactive Information in Construction Sites (건설현장의 공사사전정보를 활용한 사망재해 예측 모델 개발)

  • Choi, Seung Ju;Kim, Jin Hyun;Jung, Kihyo
    • Journal of the Korean Society of Safety / v.36 no.3 / pp.31-39 / 2021
  • In Korea, more than half of work-related fatalities have occurred on construction sites. To reduce such occupational accidents, safety inspection by government agencies is essential at construction sites that present a high risk of serious accidents. To address this issue, this study developed risk prediction models for serious accidents at construction sites using five machine learning methods: support vector machine, random forest, XGBoost, LightGBM, and AutoML. To this end, 15 items of proactive information (e.g., number of stories and period of construction) that are usually available prior to construction were considered, and two over-sampling techniques (SMOTE and ADASYN) were used to address the problem of class-imbalanced data. The results showed that all machine learning methods achieved F1-scores of 0.876~0.941 when the over-sampling techniques were adopted. LightGBM with ADASYN yielded the best prediction performance in both the F1-score (0.941) and the area under the ROC curve (0.941). The prediction models revealed four major features: number of stories, period of construction, excavation depth, and height. The prediction models developed in this study can be useful both for government agencies in prioritizing construction sites for safety inspection and for construction companies in establishing pre-construction preventive measures.
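As a concrete illustration of the pipeline this abstract describes (over-sampling followed by gradient-boosted classification with F1 evaluation), here is a minimal sketch; the data, feature values, and hyperparameters are placeholder assumptions, not the study's actual dataset or settings.

```python
# Sketch: ADASYN over-sampling + LightGBM, evaluated with F1 and ROC-AUC.
# The 15 features stand in for the study's proactive construction information.
import numpy as np
from imblearn.over_sampling import ADASYN
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 15))               # placeholder proactive features
y = (rng.random(1000) < 0.1).astype(int)      # class-imbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = ADASYN(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

model = LGBMClassifier(random_state=0).fit(X_res, y_res)
print("F1: ", f1_score(y_te, model.predict(X_te)))
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```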

Coreference Resolution for Korean Using Random Forests (랜덤 포레스트를 이용한 한국어 상호참조 해결)

  • Jeong, Seok-Won;Choi, MaengSik;Kim, HarkSoo
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.535-540 / 2016
  • Coreference resolution identifies mentions in documents and groups co-referring mentions together. It is an essential step for natural language processing applications such as information extraction, event tracking, and question answering. Recently, various coreference resolution models based on machine learning (ML) have been proposed. As is well known, these ML-based models need large training datasets that are manually annotated with coreference tags. Unfortunately, no usable open data for training ML-based models exist for Korean. Therefore, we propose an efficient coreference resolution model that needs less training data than other ML-based models. The proposed model identifies co-referring mentions using random forests based on sieve-guided features. In experiments with baseball news articles, the proposed model showed a better CoNLL F1-score (0.6678) than other ML-based models.
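A minimal sketch of mention-pair classification with a random forest, in the spirit of the sieve-guided features the abstract mentions; the mentions, features, and labels below are simplified stand-ins, not the paper's actual feature set.

```python
# Sketch: classify candidate mention pairs with a random forest, then group
# positively linked pairs into coreference chains (grouping step omitted).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(m1, m2):
    """Encode a mention pair as simple sieve-style features."""
    return [
        float(m1["text"].lower() == m2["text"].lower()),  # exact string match
        float(m1["head"] == m2["head"]),                   # head-word match
        abs(m1["sent"] - m2["sent"]),                      # sentence distance
    ]

mentions = [
    {"text": "Kim", "head": "Kim", "sent": 0},
    {"text": "the player", "head": "player", "sent": 1},
    {"text": "Kim", "head": "Kim", "sent": 2},
]
pairs = [(0, 1, 0), (0, 2, 1), (1, 2, 0)]  # hypothetical gold labels

X = np.array([pair_features(mentions[i], mentions[j]) for i, j, _ in pairs])
y = np.array([label for _, _, label in pairs])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))  # link decisions to be merged into chains
```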

Precision Agriculture using Internet of Thing with Artificial Intelligence: A Systematic Literature Review

  • Noureen Fatima;Kainat Fareed Memon;Zahid Hussain Khand;Sana Gul;Manisha Kumari;Ghulam Mujtaba Sheikh
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.155-164 / 2023
  • Driven by machine learning and its high-precision algorithms, precision agriculture (PA) is an emerging concept. Many researchers have worked on the quality and quantity of PA using sensors, networking, machine learning (ML) techniques, and big data. However, there has been no attempt to survey the trends in artificial intelligence (AI) techniques, datasets, and crop types for precision agriculture using the internet of things (IoT). This research aims to systematically analyze the AI techniques and datasets that have been used for IoT-based prediction in the area of PA. A systematic literature review was performed on AI-based techniques and datasets for crop management, weather, irrigation, plant, soil, and pest prediction. We took papers on precision agriculture published in the last six years (2013-2019) and considered 42 primary studies related to the research objectives. After critical analysis of the studies, we found that the crop management, soil, and temperature areas of PA have commonly been addressed with the help of IoT devices and AI techniques. Moreover, different AI techniques such as ANN, CNN, SVM, decision tree, and RF have been utilized in different fields of precision agriculture. Image processing with supervised and unsupervised learning for predicting and monitoring PA is also used. In addition, most of the studies rely on sensor datasets to measure different properties of soil, weather, irrigation, and crops. Finally, we provide future directions for researchers and guidelines for practitioners based on the findings of this review.

Applying Deep Reinforcement Learning to Improve Throughput and Reduce Collision Rate in IEEE 802.11 Networks

  • Ke, Chih-Heng;Astuti, Lia
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.334-349 / 2022
  • The effectiveness of Wi-Fi networks is greatly influenced by the optimization of contention window (CW) parameters. Unfortunately, the conventional approach employed by IEEE 802.11 wireless networks is not scalable enough to sustain consistent performance as the number of stations increases. Yet it is still the default when accessing channels for single-user 802.11 transmissions. Recently, there has been a spike in attempts to enhance network performance using a machine learning (ML) technique known as reinforcement learning (RL). Its advantage is that it interacts with the surrounding environment and makes decisions based on its own experience. Deep RL (DRL) uses deep neural networks (DNN) to deal with more complex environments (such as continuous state spaces or action spaces) and to obtain optimal rewards. As a result, we present a new CW control mechanism, termed the contention window threshold (CWThreshold). It uses the DRL principle to define the threshold value and learn optimal settings under various network scenarios. We demonstrate our proposed method, a smart exponential-threshold-linear backoff algorithm with a deep Q-learning network (SETL-DQN). The simulation results show that the proposed SETL-DQN algorithm can effectively improve throughput and reduce collision rates.
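To make the deep Q-learning idea concrete, here is a toy sketch in which an agent picks a contention-window setting and is rewarded for avoiding collisions; the environment model, reward shaping, and network sizes are invented stand-ins for the paper's 802.11 simulator, and the replay buffer and target network of a full DQN are omitted for brevity.

```python
import random
import torch
import torch.nn as nn

# Q-network: maps a 2-feature observation to values for 6 candidate CW settings.
class QNet(nn.Module):
    def __init__(self, n_obs=2, n_actions=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

def step(action, n_stations=20):
    """Invented stand-in environment: a larger CW lowers collision probability
    but wastes idle slots, so an intermediate setting maximizes reward."""
    cw = 2 ** (action + 4)                            # CW in {16, 32, ..., 512}
    p_collide = min(1.0, n_stations / cw)
    reward = (1.0 - p_collide) - 0.01 * cw / n_stations
    return torch.tensor([p_collide, n_stations / 50.0]), reward

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
obs = torch.tensor([0.5, 0.4])
for t in range(500):
    eps = max(0.05, 1.0 - t / 300)                    # decaying epsilon-greedy
    action = random.randrange(6) if random.random() < eps \
        else qnet(obs).argmax().item()
    next_obs, reward = step(action)
    with torch.no_grad():
        target = reward + 0.9 * qnet(next_obs).max()  # one-step TD target
    loss = (qnet(obs)[action] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    obs = next_obs
```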

An Optimized Deep Learning Techniques for Analyzing Mammograms

  • Satish Babu Bandaru;Natarajasivan. D;Rama Mohan Babu. G
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.39-48 / 2023
  • Breast cancer screening makes extensive use of mammography. Even so, there has been much debate regarding this application's starting age as well as its screening interval. The deep learning technique of transfer learning is employed to transfer the knowledge learnt from source tasks to target tasks. For the resolution of real-world problems, deep neural networks have demonstrated superior performance in comparison with standard machine learning algorithms. The architecture of a deep neural network has to be defined by taking the problem domain knowledge into account; normally, this consumes a lot of time as well as computational resources. This work evaluated the efficacy of deep learning neural networks such as the Visual Geometry Group network (VGGNet), the residual network (ResNet), and the Inception network for classifying mammograms. It proposes optimizing ResNet with the Teaching-Learning-Based Optimization (TLBO) algorithm in order to predict breast cancer from mammogram images. The proposed TLBO-ResNet is an optimized ResNet with faster convergence than other evolutionary methods for mammogram classification.
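For readers unfamiliar with this transfer-learning setup, the sketch below shows the standard pattern of taking a pretrained ResNet and replacing its classifier head for a two-class mammogram task; the TLBO search itself is not reproduced here, and the ResNet variant, data, and hyperparameters are illustrative assumptions.

```python
# Sketch: transfer learning with a frozen pretrained ResNet backbone and a new
# two-class head (benign vs. malignant). Input batch is a random placeholder.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                        # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)      # replace the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

x = torch.randn(4, 3, 224, 224)                    # placeholder mammogram batch
y = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```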

A Survey on Predicting Workloads and Optimising QoS in the Cloud Computing

  • Omar F. Aloufi;Karim Djemame;Faisal Saeed;Fahad Ghabban
    • International Journal of Computer Science & Network Security / v.24 no.2 / pp.59-66 / 2024
  • This paper presents the concept and characteristics of cloud computing and addresses how cloud computing delivers quality of service (QoS) to the end-user. Next, it discusses how to schedule workloads in the infrastructure using recently emerged technologies such as machine learning (ML), followed by an overview of how ML can be used for resource management. The paper then turns to its primary goal, which is to outline the benefits of using ML to schedule upcoming demands so as to achieve QoS and conserve energy. In this survey, we reviewed the research related to ML methods for predicting workloads in cloud computing. The paper also provides information on approaches to elasticity, and another section discusses the prediction methods used in previous studies in this field. The paper concludes with a summary of the literature on predicting workloads and optimising QoS in cloud computing.

A Study on the Prediction of Nitrogen Oxide Emissions in Rotary Kiln Process using Machine Learning (머신러닝 기법을 이용한 로터리 킬른 공정의 질소산화물 배출예측에 관한 연구)

  • Je-Hyeung Yoo;Cheong-Yeul Park;Jae Kwon Bae
    • Journal of Industrial Convergence / v.21 no.7 / pp.19-27 / 2023
  • As the secondary battery market expands, the process of producing laterite ore using the rotary kiln and electric furnace method is expanding worldwide. As ESG management expands, the management of air pollutants such as nitrogen oxides in exhaust gases is being strengthened. The rotary kiln, one of the main facilities of the pyrometallurgy process, dries and pre-reduces ore and generates nitrogen oxides, so predicting nitrogen oxide emissions is important. In this study, LSTM was used for regression prediction and LightGBM for classification prediction, and model optimization was then performed using AutoML. When applying LSTM, the prediction score after 5 minutes was 0.86 with an MAE of 5.13 ppm, and after 40 minutes it was 0.38 with an MAE of 10.84 ppm. When applying LightGBM for classification prediction, the test accuracy was 0.75 after 5 minutes and 0.61 after 40 minutes, a level that can be used in actual operation; after model optimization through AutoML, the accuracy improved from 0.75 to 0.80 after 5 minutes and from 0.61 to 0.70 after 40 minutes. Through this study, nitrogen oxide predictions can be applied to actual operations, contributing to compliance with air pollutant emission regulations and to ESG management.
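A minimal sketch of the LSTM regression side of such a setup: sliding windows of past sensor readings predicting an emission value minutes ahead. The window length, feature count, horizon, and data are illustrative assumptions, not the plant's actual configuration.

```python
# Sketch: LSTM regression from windows of past process sensors to a future
# NOx value; MAE is used as the loss to match the reported metric.
import numpy as np
from tensorflow import keras

window, n_feats = 60, 8                        # 60 past steps, 8 sensor channels
X = np.random.rand(256, window, n_feats).astype("float32")
y = np.random.rand(256, 1).astype("float32")   # NOx (ppm) at the chosen horizon

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, n_feats)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]))                    # predicted NOx for one window
```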

Implementation of a Classification System for Dog Behaviors using YOLO-based Object Detection and a Node.js Server (YOLO 기반 개체 검출과 Node.js 서버를 이용한 반려견 행동 분류 시스템 구현)

  • Jo, Yong-Hwa;Lee, Hyuek-Jae;Kim, Young-Hun
    • Journal of the Institute of Convergence Signal Processing / v.21 no.1 / pp.29-37 / 2020
  • This paper implements a method for extracting dog objects through real-time image analysis and classifying dog behaviors from the extracted images. Darknet YOLO was used to detect dog objects, and the Teachable Machine provided by Google was used to classify behavior patterns from the extracted images. The trained Teachable Machine model is saved in Google Drive and can be used by ml5.js running on a Node.js server. By implementing an interactive web server with the socket.io module on the Node.js server, the classification results are transmitted to the user's smartphone or PC in real time so that they can be checked anytime, anywhere.
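The detection stage can be sketched with OpenCV's DNN module, which loads Darknet YOLO configs and weights; the file paths and confidence threshold below are placeholders, and the Teachable Machine classification and Node.js/socket.io relay stages are outside this sketch.

```python
# Sketch: load Darknet YOLO with OpenCV DNN and keep boxes classified as "dog".
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholders
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("frame.jpg")          # one frame from the real-time stream
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)

DOG_CLASS_ID = 16                        # "dog" in the COCO label list
h, w = frame.shape[:2]
for out in net.forward(layer_names):
    for det in out:                      # det = [cx, cy, bw, bh, obj, scores...]
        scores = det[5:]
        if np.argmax(scores) == DOG_CLASS_ID and scores[DOG_CLASS_ID] > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            print("dog at", int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
```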

A Smartphone-based Virtual Reality Visualization System for Human Activities Classification

  • Lomaliza, Jean-Pierre;Moon, Kwang-Seok;Park, Hanhoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.45-46 / 2018
  • This paper focuses on the problem of monitoring human activities using onboard smartphone sensors as data generators. Monitoring such activities can be very important for detecting anomalies and preventing disease in patients. Machine learning (ML) algorithms are ideal approaches for processing smartphone data to classify human activities. ML algorithms depend on the quality, the quantity, and, even more important, the properties or features that can be learnt from the data. This paper proposes a mobile virtual reality visualization system that helps view data representations in a highly immersive way so that their quality and discriminative characteristics may be evaluated and improved. The proposed system also comes with a handy data-collecting application that can be accessed directly from the VR visualization part.
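As an illustration of the kind of data such a visualization system works with, the sketch below extracts simple sliding-window statistics from synthetic accelerometer streams; the window size and feature choices are assumptions, not the paper's actual pipeline.

```python
# Sketch: sliding-window statistics over (N, 3) accelerometer samples, the
# sort of feature points one could classify or render in a VR view.
import numpy as np

def window_features(acc, win=50, step=25):
    """Mean/std/magnitude features per window of x/y/z accelerometer data."""
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)          # per-sample magnitude
        feats.append(np.concatenate([w.mean(0), w.std(0),
                                     [mag.mean(), mag.std()]]))
    return np.array(feats)

acc = np.random.randn(500, 3)                    # synthetic smartphone samples
print(window_features(acc).shape)                # (n_windows, 8) feature points
```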

Design of Distributed Cloud System for Managing large-scale Genomic Data

  • Seine Jang;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.119-126 / 2024
  • The volume of genomic data is constantly increasing in various modern industries and research fields. This growth presents new challenges and opportunities in terms of the quantity and diversity of genetic data. In this paper, we propose a distributed cloud system for integrating and managing large-scale gene databases. By introducing a distributed data storage and processing system based on the Hadoop Distributed File System (HDFS), various formats and sizes of genomic data can be efficiently integrated. Furthermore, by leveraging Spark on YARN, efficient management of distributed cloud computing tasks and optimal resource allocation are achieved. This establishes a foundation for the rapid processing and analysis of large-scale genomic data. Additionally, by utilizing BigQuery ML, machine learning models are developed to support genetic search and prediction, enabling researchers to more effectively utilize data. It is expected that this will contribute to driving innovative advancements in genetic research and applications.
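A minimal sketch of the HDFS-plus-Spark-on-YARN layer this abstract outlines, reading genomic records from HDFS and running a simple aggregation; the path, schema, and master setting are assumptions for illustration, and the BigQuery ML modeling step is not shown.

```python
# Sketch: a Spark job on YARN that reads a hypothetical variant table from
# HDFS and computes per-chromosome counts as a stand-in analysis step.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("genomic-etl")
         .master("yarn")                  # submit to the YARN cluster
         .getOrCreate())

df = spark.read.csv("hdfs:///genomics/variants.csv",
                    header=True, inferSchema=True)   # assumed path and format
df.groupBy("chromosome").count().show()              # per-chromosome counts
spark.stop()
```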