• Title/Summary/Keyword: Machine Learning (ML)


Synthetic Data Generation with Unity 3D and Unreal Engine for Construction Hazard Scenarios: A Comparative Analysis

  • Aqsa Sabir;Rahat Hussain;Akeem Pedro;Mehrtash Soltani;Dongmin Lee;Chansik Park;Jae-Ho Pyeon
    • International conference on construction engineering and project management / 2024.07a / pp.1286-1288 / 2024
  • The construction industry, known for its inherent risks and multiple hazards, necessitates effective solutions for hazard identification and mitigation [1]. To address this need, machine learning models specializing in object detection have become increasingly important, as this technological approach plays a crucial role in augmenting worker safety by proactively recognizing potential dangers on construction sites [2], [3]. However, the challenge in training these models lies in obtaining accurately labeled datasets, as conventional methods require labor-intensive labeling or costly measurements [4]. To circumvent these challenges, synthetic data generation (SDG) has emerged as a key method for creating realistic and diverse training scenarios [5], [6]. The paper reviews the evolution of synthetic data generation tools, highlighting the shift from earlier solutions like Synthpop and Data Synthesizer to advanced game engines [7]. Among the various gaming platforms, Unity 3D and Unreal Engine stand out due to their advanced capabilities in replicating realistic construction hazard environments [8], [9]. Comparing Unity 3D and Unreal Engine is crucial for evaluating their effectiveness in SDG, aiding developers in selecting the appropriate platform for their needs. For this purpose, this paper conducts a comparative analysis of both engines, assessing their ability to create high-fidelity interactive environments. To thoroughly evaluate the suitability of these engines for generating synthetic data in construction site simulations, the evaluation focuses on graphical realism, developer-friendliness, and user interaction capabilities; these aspects are essential for replicating realistic construction sites, ensuring both high visual fidelity and ease of use for developers. Firstly, graphical realism is crucial for training ML models to recognize the nuanced nature of construction environments.
In this aspect, Unreal Engine stands out with its superior graphics quality compared to Unity 3D, which is typically considered to have less graphical prowess [10]. Secondly, developer-friendliness is vital for those generating synthetic data. Research indicates that Unity 3D is praised for its user-friendly interface and its use of C# scripting, which is widely used in educational settings, making it a popular choice for those new to game development or synthetic data generation. Unreal Engine, by contrast, while offering powerful capabilities in terms of realistic graphics, is often viewed as more complex due to its use of C++ scripting and the Blueprint system. While the Blueprint system is a visual scripting tool that does not require traditional coding, it can be intricate and may present a steeper learning curve, especially for those without prior experience in game development [11]. Lastly, regarding user interaction capabilities, Unity 3D is known for its intuitive interface and versatility, particularly in VR/AR development for various skill levels. In contrast, Unreal Engine, with its advanced graphics and Blueprint scripting, is better suited for creating high-end, immersive experiences [12]. Based on current insights, this comparative analysis underscores the user-friendly interface and adaptability of Unity 3D, which features a built-in Perception package that facilitates automatic labeling for SDG [13]. This functionality enhances accessibility and simplifies the SDG process for users. Conversely, Unreal Engine is distinguished by its advanced graphics and realistic rendering capabilities; it offers plugins such as EasySynth (which does not provide automatic labeling) and NDDS for SDG [14], [15]. The development complexity associated with Unreal Engine presents challenges for novice users, whereas the more approachable platform of Unity 3D is advantageous for beginners.
This research provides an in-depth review of the latest advancements in SDG, shedding light on potential future research and development directions. The study concludes that the integration of such game engines in ML model training markedly enhances hazard recognition and decision-making skills among construction professionals, thereby significantly advancing data acquisition for machine learning in construction safety monitoring.

Prediction of Landslides and Determination of Its Variable Importance Using AutoML (AutoML을 이용한 산사태 예측 및 변수 중요도 산정)

  • Nam, KoungHoon;Kim, Man-Il;Kwon, Oil;Wang, Fawu;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology / v.30 no.3 / pp.315-325 / 2020
  • This study was performed to develop a model to predict landslides and to determine the variable importance of landslide susceptibility factors, based on the probabilistic prediction of landslides occurring on slopes along roads. Field survey data of 30,615 slopes from 2007 to 2020 in Korea were analyzed to develop a landslide prediction model. Of the 131 variable factors in total, 17 topographic factors and 114 geological factors (including 89 bedrock types) were used to predict landslides. Automated machine learning (AutoML) was used to classify landslides and non-landslides. The verification results revealed that the best model, an extremely randomized tree (XRT) with excellent predictive performance, yielded a prediction rate of 83.977% on the test data. The analysis of variable importance identified 10 topographic factors and 9 geological factors as the dominant landslide susceptibility factors, each presented as a percentage. The model provides a probabilistic, quantitative evaluation of the likelihood of landslide occurrence by deriving a ranking of variable importance using only on-site survey data. It is considered that this model can provide decision-makers with a reliable basis for slope safety assessment through field surveys in the future.
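The AutoML-selected workflow above, an extremely randomized trees classifier with factor-importance ranking, can be sketched roughly as follows. This is a minimal illustration on synthetic data: the two features, the labeling rule, and all parameters are invented stand-ins for the study's 131 survey variables.

```python
# Hedged sketch: an extremely randomized trees (XRT) classifier for a
# landslide / non-landslide problem. Features and labels are synthetic
# placeholders, not the paper's field-survey variables.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Two illustrative "topographic" factors: slope angle (deg) and relief (m).
X = rng.uniform([10, 0], [60, 300], size=(n, 2))
# Toy rule: steep, high-relief slopes fail more often, plus noise.
p = 1 / (1 + np.exp(-(0.1 * (X[:, 0] - 35) + 0.01 * (X[:, 1] - 150))))
y = (rng.uniform(size=n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = ExtraTreesClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)          # prediction rate on held-out data
importances = model.feature_importances_  # factor ranking, as in the study
```

The `feature_importances_` vector plays the role of the study's percentage ranking of susceptibility factors.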

Reconstruction of the Global Potential Energy Surface of Barbaralane Using Support Vector Regression (Support vector regression을 응용한 barbaralane의 global potential energy surface 재구성)

  • Ryu, Seong-Ok;Choe, Seong-Hwan;Kim, U-Yeon
    • Proceeding of EDISON Challenge / 2014.03a / pp.1-13 / 2014
  • Determining the potential energy surface (PES) through quantum calculations is of great help in understanding chemical reactions. For example, it reveals the configuration of the transition state (TS), from which the reaction path and activation energy can be predicted, aiding the understanding of the chemical reaction of interest. However, drawing the PES requires single-point energy calculations for many configurations of the molecule, which is computationally expensive. We therefore aim to develop a machine learning algorithm that reconstructs the entire PES from a minimal sampling of configurations, using information about critical points such as the structures of the reactant and product, thereby enabling the prediction of chemical reactions on a multidimensional PES. In this study, taking the two stabilized structures of barbaralane as critical points, we sampled their surroundings with a random normal distribution and obtained structures and energies through relaxed scanning with DFT calculations at the B3LYP/6-31G(d) level. Applying the support vector regression (SVR) algorithm to this data, we reconstructed the PES and obtained the reaction path, the TS structure, and the activation energy. We also discuss the applicability of this machine learning algorithm not only to ground-state reactions but also to phenomena occurring in non-adiabatic frames, such as excited-state reactions, reactions with changing electronic structure, avoided crossings, and conical intersections.

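The SVR-based PES reconstruction described above can be sketched in outline. The symmetric double-well function below is an illustrative stand-in for barbaralane's DFT energies, and the sampling widths and SVR parameters are assumptions, not the study's settings.

```python
# Hedged sketch: reconstructing a 1-D model potential energy surface with
# Support Vector Regression. The double well stands in for DFT energies;
# it is NOT barbaralane's actual B3LYP/6-31G(d) surface.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Sample configurations around the two minima (cf. the random normal
# sampling around the critical points in the study).
q = np.concatenate([rng.normal(-1.0, 0.4, 60), rng.normal(1.0, 0.4, 60)])
E = (q**2 - 1.0) ** 2          # symmetric double well, minima at q = +/-1

svr = SVR(kernel="rbf", C=100.0, epsilon=0.001).fit(q[:, None], E)

# Scan the fitted surface between the minima to locate the barrier
# (the transition-state analogue; the true barrier height is V(0) = 1).
grid = np.linspace(-1.0, 1.0, 401)
E_fit = svr.predict(grid[:, None])
barrier = E_fit.max()
```

With the fitted surface in hand, the reaction path and an activation-energy estimate fall out of a simple scan, which is the core idea of the study.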

Design and implementation of an improved MA-APUF with higher uniqueness and security

  • Li, Bing;Chen, Shuai;Dan, Fukui
    • ETRI Journal / v.42 no.2 / pp.205-216 / 2020
  • An arbiter physical unclonable function (APUF) has exponentially many challenge-response pairs and is easy to implement on field-programmable gate arrays (FPGAs). However, modeling attacks based on machine learning have become a serious threat to APUFs. Although the modeling-attack resistance of an MA-APUF has been improved considerably by architecture modifications, the response generation method of an MA-APUF results in low uniqueness. In this study, we demonstrate three design problems regarding the low uniqueness that APUF-based strong PUFs may exhibit, and we present several foundational principles to improve the uniqueness of APUF-based strong PUFs. In particular, an improved MA-APUF design is implemented in an FPGA and evaluated using a well-established experimental setup. Two types of evaluation metrics are used for evaluation and comparison. Furthermore, evolution strategies, logistic regression, and k-junta functions are used to evaluate the security of our design. The experimental results reveal that the uniqueness of our improved MA-APUF is 81.29% (compared with 13.12% for the MA-APUF), and the prediction rate is approximately 56% (compared with 60%-80% for the MA-APUF).
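The logistic-regression modeling attack mentioned above can be sketched against a plain arbiter PUF using the standard linear additive delay model; this abstraction and all sizes below are textbook assumptions, not the authors' FPGA design.

```python
# Hedged sketch: a logistic-regression modeling attack on a plain arbiter
# PUF, illustrating the threat the improved MA-APUF is designed to resist.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_stages, n_crps = 32, 4000
w = rng.normal(size=n_stages + 1)      # secret stage delay differences

def features(ch):
    # Standard parity transform: phi_i = prod_{j>=i} (1 - 2*c_j), plus bias.
    phi = np.cumprod((1 - 2 * ch)[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(ch), 1))])

C = rng.integers(0, 2, size=(n_crps, n_stages))   # random challenges
r = (features(C) @ w > 0).astype(int)             # simulated responses

attack = LogisticRegression(max_iter=1000).fit(features(C[:3000]), r[:3000])
pred_rate = attack.score(features(C[3000:]), r[3000:])
# A plain APUF is typically predicted with very high accuracy from a few
# thousand CRPs, which is why architectural countermeasures are needed.
```

The ~56% prediction rate reported for the improved MA-APUF means the attacker does barely better than coin-flipping, unlike the near-perfect rate this sketch achieves against the unprotected model.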

A Review of the Methodology for Sophisticated Data Classification (정교한 데이터 분류를 위한 방법론의 고찰)

  • Kim, Seung Jae;Kim, Sung Hwan
    • Journal of Integrative Natural Science / v.14 no.1 / pp.27-34 / 2021
  • Worldwide, efforts to implement artificial intelligence (AI) are increasing. In AI implementation, the importance of data cannot be overstated: large volumes of data and classification of data suited to the purpose are essential. Technologies for generating and processing such data include the Internet of Things (IoT) and big-data analytics, which can be called the driving forces of the Fourth Industrial Revolution. These technologies are widely used at both the national and individual levels; in particular, there are growing attempts to present a future vision by applying big-data analysis to data concentrated in specific fields, discovering new models, and inferring and predicting new values with those models. Conclusions drawn from data analysis can vary greatly depending on the accuracy of the information the data contains, and inaccurate information can produce erroneous results. Thus, for data analysis, classifying data according to the information it carries and the purpose of the analysis is very important. Furthermore, to obtain reliable and sophisticated statistics from big-data analysis, the meaning of each variable, the correlations among variables, and multicollinearity must be considered. In other words, before big-data analysis, the data must be well classified to suit the purpose of the analysis. Accordingly, this review classifies data using the decision tree (DT), random forest (RF), linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) techniques, which belong to classification analysis (CA) within the machine learning (ML) methods that implement AI technology, and then evaluates the degree of classification in order to seek ways to improve the classification accuracy of the data.
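The four classifiers compared in the review (DT, RF, LDA, QDA) share the same fit/score workflow, sketched below on the standard Iris dataset as a stand-in for the review's unspecified data; the split and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: comparing the four surveyed classifiers on one dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
# Classification rate per method, the quantity the review compares.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
```

Evaluating all four under an identical split, as here, is what makes their classification rates directly comparable.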

An Extended Work Architecture for Online Threat Prediction in Tweeter Dataset

  • Sheoran, Savita Kumari;Yadav, Partibha
    • International Journal of Computer Science & Network Security / v.21 no.1 / pp.97-106 / 2021
  • Social networking platforms have become a smart way for people to interact and meet on the internet. They provide a way to keep in touch with friends, families, colleagues, business partners, and many more. Among the various social networking sites, Twitter is one of the fastest-growing sites, where users can read the news, share ideas, discuss issues, etc. Due to its vast popularity, the accounts of legitimate users are vulnerable to a large number of threats. Spam and malware are among the threats that most affect Twitter. Therefore, in order to enjoy seamless services, Twitter must be secured against malicious users by detecting them in advance. Various researchers have used many machine learning (ML) based approaches to detect spammers on Twitter. This research aims to devise a secure system based on hybrid Cosine and Soft Cosine similarity measures in combination with a Genetic Algorithm (GA) and an Artificial Neural Network (ANN) to secure the Twitter network against spammers. The similarity among tweets is determined using Cosine together with Soft Cosine, applied to the Twitter dataset. The GA has been utilized to enhance training with minimum training error by selecting the most suitable features according to the designed fitness function. Tweets have been classified as spammer or non-spammer based on the ANN structure along with a voting rule. The True Positive Rate (TPR), False Positive Rate (FPR), and classification accuracy are the evaluation parameters used to assess the performance of the system designed in this research. The simulation results reveal that the proposed model outperforms the existing state of the art.
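The cosine and soft cosine measures at the core of the pipeline above can be sketched as follows. The tiny bag-of-words vectors, the vocabulary, and the term-similarity matrix S are invented placeholders, not the paper's tweet features.

```python
# Hedged sketch: plain cosine vs. soft cosine similarity between two
# bag-of-words vectors. Soft cosine lets related-but-different terms
# contribute through a term-similarity matrix S.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def soft_cosine(a, b, S):
    # S[i, j] encodes similarity of terms i and j (S = I => plain cosine).
    return (a @ S @ b) / np.sqrt((a @ S @ a) * (b @ S @ b))

# Hypothetical vocabulary: ["win", "prize", "free", "meeting"].
a = np.array([1.0, 1.0, 0.0, 0.0])   # tweet A: "win prize"
b = np.array([0.0, 0.0, 1.0, 0.0])   # tweet B: "free"
S = np.eye(4)
S[1, 2] = S[2, 1] = 0.8              # "prize" and "free" are related terms

plain = cosine(a, b)          # no shared terms, so plain cosine is 0
soft = soft_cosine(a, b, S)   # positive: related terms contribute
```

This is why the paper pairs the two measures: soft cosine still flags spam tweets that paraphrase each other with no shared vocabulary.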

Exploring AI Principles in Global Top 500 Enterprises: A Delphi Technique of LDA Topic Modeling Results

  • Hyun BAEK
    • Korean Journal of Artificial Intelligence / v.11 no.2 / pp.7-17 / 2023
  • Artificial Intelligence (AI) technology has already penetrated deeply into our daily lives, and we enjoy its convenience anytime, anywhere, sometimes without even noticing it. However, because AI is imitative intelligence based on human intelligence, it inevitably has both the good and the evil sides of humans, which is why ethical principles are essential. The starting point of this study is the AI principles that companies or organizations adopt when developing products. Since the late 2010s, studies on the ethics and principles of AI have been actively published. This study focused on the AI principles declared by global companies currently developing various products with AI technology. We surveyed the AI principles of the Global 500 companies by market capitalization at a specific point in time and collected the AI principles explicitly declared by 46 of them. This text data was first analyzed with AI techniques, in particular LDA (Latent Dirichlet Allocation) topic modeling, which belongs to machine learning (ML) analysis. We then conducted a Delphi technique to reach a meaningful consensus by presenting the primary analysis results. Based on our results, we expect to provide meaningful guidelines for AI-related government policy establishment, corporate ethics declarations, and academic research, where debates on AI ethics and principles have often arisen recently.
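The LDA topic-modeling step described above can be sketched with scikit-learn; the six toy "principle" statements are invented stand-ins for the 46 collected corporate declarations, and the topic count is an assumption.

```python
# Hedged sketch: LDA topic modeling over short policy-like statements,
# mirroring the primary analysis step before the Delphi rounds.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "fairness and transparency in model decisions",
    "transparency of data and model decisions",
    "privacy and security of user data",
    "data privacy and user security safeguards",
    "accountability for fairness in decisions",
    "security audits and privacy reviews",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)   # per-document topic mixture
```

The per-document topic mixtures (and the per-topic word weights in `lda.components_`) are the raw material an expert panel would then interpret.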

'Knowing' with AI in construction - An empirical insight

  • Ramalingham, Shobha;Mossman, Alan
    • International conference on construction engineering and project management / 2022.06a / pp.686-693 / 2022
  • Construction is a collaborative endeavor. The complexity of delivering construction projects successfully is shaped by the need for effective collaboration among a multitude of stakeholders throughout the project life-cycle. Technologies such as Building Information Modelling and relational project delivery approaches such as Alliancing and Integrated Project Delivery have developed to address this conundrum. However, with the onset of the pandemic, the digital economy has surged worldwide, and advances in technology, such as in the areas of machine learning (ML) and Artificial Intelligence (AI), have grown deep roots across specializations and domains, to the point of matching the capabilities of the human mind. Several recent studies have explored the role of AI in the construction process and highlighted its benefits. In contrast, literature in the organization studies field has highlighted the fear that tasks currently done by humans will be done by AI in the future. Motivated by these insights, and with the understanding that construction is a labour-intensive sector where knowledge is both fragmented and predominantly tacit in nature, this paper explores the integration of AI into construction processes across project phases, from planning and scheduling to execution and maintenance operations, using literary evidence and experiential insights. The findings show that AI can complement human skills rather than substitute for them. This preliminary study is expected to be a stepping stone for further research and implementation in practice.


Prediction of karst sinkhole collapse using a decision-tree (DT) classifier

  • Boo Hyun Nam;Kyungwon Park;Yong Je Kim
    • Geomechanics and Engineering / v.36 no.5 / pp.441-453 / 2024
  • Sinkhole subsidence and collapse are common geohazards, often formed in karst areas such as the state of Florida, United States of America. To predict sinkhole occurrence, we need to understand the formation mechanism of sinkholes and their karst hydrogeology. For this purpose, investigating the factors affecting sinkholes is an essential step. The main objectives of the present study are (1) the development of a machine learning (ML)-based model, namely a C5.0 decision tree (C5.0 DT), for the prediction of sinkhole susceptibility, which accounts for a sinkhole/subsidence inventory and sinkhole contributing factors (e.g., geological/hydrogeological), and (2) the construction of a regional-scale sinkhole susceptibility map. The study area is east central Florida (ECF), where the cover-collapse type is commonly reported. The C5.0 DT algorithm was used to account for twelve (12) identified hydrogeological factors. In this study, a total of 1,113 sinkholes in ECF were identified, and the dataset was then randomly divided into 70% and 30% subsets for training and testing, respectively. The performance of the sinkhole susceptibility model was evaluated using a receiver operating characteristic (ROC) curve, particularly the area under the curve (AUC). The C5.0 model showed a high prediction accuracy of 83.52%. It is concluded that a decision tree is a promising tool and classifier for spatial prediction of karst sinkholes and subsidence in the ECF area.
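The decision-tree-plus-AUC evaluation above can be sketched as follows. scikit-learn's CART tree stands in for C5.0 (which scikit-learn does not provide), and the two "hydrogeological" features and the collapse rule are synthetic placeholders.

```python
# Hedged sketch: a decision-tree sinkhole-susceptibility classifier
# evaluated by ROC AUC on a 70/30 split, as in the study. All data and
# factors below are simulated stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 1113                               # matches the inventory size above
# Toy factors: normalized overburden thickness and head difference.
X = rng.uniform(0, 1, size=(n, 2))
y = ((X[:, 0] < 0.4) & (X[:, 1] > 0.5)).astype(int)  # toy collapse rule
y ^= rng.uniform(size=n) < 0.05        # 5% label noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
```

Scoring every grid cell with `predict_proba` is also how a tree like this would be turned into a regional susceptibility map.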

Risk Estimates of Structural Changes in Freight Rates (해상운임의 구조변화 리스크 추정)

  • Hyunsok Kim
    • Journal of Korea Port Economic Association / v.39 no.4 / pp.255-268 / 2023
  • This paper focuses on generalized fluctuation tests in the context of assessing structural changes based on linear regression models. For efficient estimation, there has been a growing focus on structural change monitoring, particularly in relation to fields such as artificial intelligence (hereafter AI) and machine learning (hereafter ML). Specifically, the investigation elucidates the implementation of structural change tests and presents a coherent approach for practical application to the BDI (Baltic Dry-bulk Index), which serves as a representative maritime trade index in the global market. The framework encompasses a range of F-statistic-type methodologies for fitting, visualizing, and evaluating empirical fluctuation processes, including CUSUM, MOSUM, and estimates-based processes. Additionally, it provides functionality for computing and evaluating sequences obtained with pruned exact linear time (hereafter PELT).
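An OLS-based CUSUM test of the kind listed above can be sketched in a few lines; the series with a level shift is simulated, not actual BDI data, and the 1.358 boundary is the standard 5% critical value for the supremum of a Brownian bridge.

```python
# Hedged sketch: an OLS-based CUSUM fluctuation test for a break in a
# constant-mean model (the simplest linear regression). The series is
# simulated with a level shift at t = 120.
import numpy as np

rng = np.random.default_rng(5)
n, break_at = 200, 120
y = np.concatenate([rng.normal(0.0, 1.0, break_at),
                    rng.normal(2.0, 1.0, n - break_at)])

# Scaled cumulative sum of OLS residuals around the full-sample mean.
resid = y - y.mean()
sigma = resid.std(ddof=1)
cusum = np.cumsum(resid) / (sigma * np.sqrt(n))

# Under the null of no break, the path behaves like a Brownian bridge;
# |CUSUM| exceeding ~1.358 rejects stability at the 5% level, and the
# |CUSUM| peak locates the break candidate.
peak = int(np.argmax(np.abs(cusum)))
```

On real BDI data the same statistic would be computed from the residuals of the fitted regression rather than a simple mean.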