• Title/Summary/Keyword: Smart Training System


Waterbody Detection for the Reservoirs in South Korea Using Swin Transformer and Sentinel-1 Images (Swin Transformer와 Sentinel-1 영상을 이용한 우리나라 저수지의 수체 탐지)

  • Soyeon Choi;Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Yungyo Im;Youngmin Seo;Wanyub Kim;Minha Choi;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.949-965 / 2023
  • In this study, we propose a method to monitor the surface area of agricultural reservoirs in South Korea using Sentinel-1 synthetic aperture radar images and the deep learning model, Swin Transformer. Utilizing the Google Earth Engine platform, datasets from 2017 to 2021 were constructed for seven agricultural reservoirs, categorized into 700 K-ton, 900 K-ton, and 1.5 M-ton capacities. For four of the reservoirs, a total of 1,283 images were used for model training through shuffling and 5-fold cross-validation techniques. Upon evaluation, the Swin Transformer Large model, configured with a window size of 12, demonstrated superior semantic segmentation performance, showing an average accuracy of 99.54% and a mean intersection over union (mIoU) of 95.15% for all folds. When the best-performing model was applied to the datasets of the remaining three reservoirs for validation, it achieved an accuracy of over 99% and mIoU of over 94% for all reservoirs. These results indicate that the Swin Transformer model can effectively monitor the surface area of agricultural reservoirs in South Korea.
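The reported metrics (overall pixel accuracy and mean IoU averaged over classes) can be computed for any pair of predicted and reference water masks. A minimal NumPy sketch, using toy 4x4 binary masks rather than the paper's Sentinel-1 data:

```python
import numpy as np

def pixel_accuracy_and_miou(pred, gt, num_classes=2):
    """Overall pixel accuracy and mean IoU for segmentation masks."""
    acc = (pred == gt).mean()
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(acc), float(np.mean(ious))

# Toy 4x4 water masks (1 = water, 0 = land); one mispredicted pixel.
gt   = np.array([[0,0,1,1],[0,1,1,1],[0,1,1,1],[0,0,1,1]])
pred = np.array([[0,0,1,1],[0,1,1,1],[0,0,1,1],[0,0,1,1]])
acc, miou = pixel_accuracy_and_miou(pred, gt)
```

The same two numbers, computed per fold over full prediction maps, would reproduce the paper's evaluation protocol.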

A Design and Analysis of Pressure Predictive Model for Oscillating Water Column Wave Energy Converters Based on Machine Learning (진동수주 파력발전장치를 위한 머신러닝 기반 압력 예측모델 설계 및 분석)

  • Seo, Dong-Woo;Huh, Taesang;Kim, Myungil;Oh, Jae-Won;Cho, Su-Gil
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.11 / pp.672-682 / 2020
  • Nowadays, research on digital twin technology for efficient operation at various industrial and manufacturing sites is being actively conducted in Korea, and the gradual depletion of fossil fuels and growing environmental pollution call for new renewable, eco-friendly power generation methods such as wave power plants. In wave power generation, however, which produces electricity from the energy of waves, it is very important to understand and predict the amount of power generated and operational factors such as breakdowns, because these are closely related to wave energy, which is highly variable. Therefore, it is first necessary to derive meaningful correlations between highly volatile data, such as wave height data and sensor data from the oscillating water column (OWC) chamber. Second, a methodological study capable of predicting the desired information should be conducted by learning the prediction task from the extracted data based on the derived correlations. This study designed a workflow-based training model using a machine learning framework to predict the pressure of the OWC. In addition, the validity of the pressure prediction was verified through a validation and evaluation dataset built from IoT sensor data, to enable smart operation and maintenance through a digital twin of the wave generation system.
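The two steps the abstract describes (derive a correlation between volatile inputs and the target, then train a predictive model on the extracted data) can be sketched as follows. The data here are synthetic and the linear wave-height/pressure relation is an assumption for illustration, not the paper's actual sensor data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical records: significant wave height (m) and OWC chamber pressure (kPa).
wave_height = rng.uniform(0.5, 3.0, 200)
pressure = 4.0 * wave_height + rng.normal(0.0, 0.3, 200)  # synthetic relation

# Step 1: check whether the highly variable input correlates with the target.
r = np.corrcoef(wave_height, pressure)[0, 1]

# Step 2: fit a simple predictive model on a train split, evaluate on held-out data.
train, test = slice(0, 160), slice(160, 200)
coef = np.polyfit(wave_height[train], pressure[train], 1)
pred = np.polyval(coef, wave_height[test])
rmse = np.sqrt(np.mean((pred - pressure[test]) ** 2))
```

A production workflow would replace the synthetic arrays with streamed IoT sensor readings and a richer model, but the correlate-then-predict structure is the same.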

Online Learning Platform Activation Strategy based on STEP Learner Analysis and Survey (STEP 학습자분석 및 실태조사에 기반한 온라인 학습 플랫폼 활성화 방안)

  • Myung, Jae Kyu;Park, Min-Ju;Min, Jun-Ki;Kim, Mi Hwa
    • Journal of Practical Engineering Education / v.13 no.2 / pp.333-349 / 2021
  • The fourth industrial revolution, based on information and communication technology, has increased the need for environments in which content on new technologies can be learned for the development of lifelong vocational capabilities. To prepare for this, K University's online lifelong education center established STEP, a smart learning platform. In this study, we conducted an analysis of STEP learner types, a case analysis of other platforms, a survey of learners, and a comprehensive analysis based on these results to classify characteristics by learner type. The study also aimed to establish a plan for providing customized services that meet the needs of STEP learners in the future. The derived results are as follows. It is necessary to constantly manage learning-content difficulty and to survey learning motivation, and the operation of learning content needs to be refined in terms of learning composition. In addition, it is important to secure specialized content, manage vulnerable learners, and actively introduce a learner support system and various educational methods.

Prediction Model of Real Estate ROI with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun;Kim, Hoo-bin;Shim, Gyo-eon
    • International journal of advanced smart convergence / v.11 no.1 / pp.19-27 / 2022
  • Across the world, 'housing' comprises a significant portion of wealth and assets. For this reason, fluctuations in real estate prices are highly sensitive issues to individual households. In Korea, housing prices have steadily increased over the years, and thus many Koreans view the real estate market as an effective channel for their investments. However, if one purchases a real estate property for the purpose of investing, then there are several risks involved when prices begin to fluctuate. The purpose of this study is to design a real estate price 'return rate' prediction model to help mitigate the risks involved with real estate investments and promote reasonable real estate purchases. Various approaches are explored to develop a model capable of predicting real estate prices based on an understanding of the immovability of the real estate market. This study employs the LSTM method, which is based on artificial intelligence and deep learning, to predict real estate prices and validate the model. LSTM networks are based on recurrent neural networks (RNN) but add cell states (which act as a type of conveyer belt) to the hidden states. LSTM networks are able to obtain cell states and hidden states in a recursive manner. Data on the actual trading prices of apartments in autonomous districts between January 2006 and December 2019 are collected from the Actual Trading Price Disclosure System of the Ministry of Land, Infrastructure and Transport (MOLIT). Additionally, basic data on apartments and commercial buildings are collected from the Public Data Portal and Seoul Metropolitan Government's data portal. The collected actual trading price data are scaled to monthly average trading amounts, and each data entry is pre-processed according to address to produce 168 data entries. 
An LSTM model for return rate prediction is prepared based on a time series dataset in which the training period is April 2015~August 2017 (29 months), the validation period is September 2017~September 2018 (13 months), and the test period is December 2018~December 2019 (13 months). The results of the return rate prediction study are as follows. First, the model achieved a prediction similarity of almost 76%; after collecting the time series data and preparing the final prediction model, it was confirmed that a prediction similarity of 76% could be achieved. All in all, the results demonstrate the reliability of the LSTM-based model for return rate prediction.
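The cell-state/hidden-state recursion the abstract describes (the cell state as a "conveyor belt" carried across time steps alongside the hidden state) can be made concrete with a single LSTM step written out in NumPy. The weights and the toy return series below are random illustrative values, not the paper's trained model:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: gates combine input x with the previous hidden state;
    the cell state c is updated recursively and carried to the next step."""
    z = W @ x + U @ h_prev + b                 # stacked pre-activations, 4 gates
    i, f, o, g = np.split(z, 4)                # input, forget, output, candidate
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)            # conveyor-belt cell-state update
    h = o * np.tanh(c)                         # hidden state exposed downstream
    return h, c

rng = np.random.default_rng(42)
n_in, n_hid = 1, 8
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

# Feed a toy monthly return series through the cell recursively.
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in [0.01, 0.02, -0.005, 0.015]:
    h, c = lstm_step(np.array([x_t]), h, c, W, U, b)
```

In practice one would use a framework LSTM layer trained on the scaled monthly trading-price series; the sketch only shows how the two states are obtained recursively.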

Exploratory Study on Enhancing Cyber Security for Busan Port Container Terminals (부산항 컨테이너 터미널 사이버 보안 강화를 위한 탐색적 연구)

  • Do-Yeon Ha;Yul-Seong Kim
    • Journal of Navigation and Port Research / v.47 no.6 / pp.437-447 / 2023
  • By actively adopting technologies from the Fourth Industrial Revolution, the port industry is trending toward new types of ports, such as automated and smart ports. However, behind the development of these ports, there is an increasing risk of cyber security incidents and threats within ports and container terminals, including information leakage through cargo handling equipment and ransomware attacks leading to disruptions in terminal operations. Despite the necessity of research to enhance cyber security within ports, there is a lack of such studies in the domestic context. This study focuses on Busan Port, a representative port in South Korea that actively incorporates technology from the Fourth Industrial Revolution, in order to discover variables for improving cyber security in container terminals. The research results categorized factors for enhancing cyber security in Busan Port's container terminals into network construction and policy support, standardization of education and personnel training, and legal and regulatory factors. Subsequently, multiple regression analysis was conducted based on these factors, leading to the identification of detailed factors for securing and enhancing safety, reliability, performance, and satisfaction in Busan Port's container terminals. The significance of this study lies in providing direction for enhancing cyber security in Busan Port's container terminals and addressing the increasing incidents of cyber security attacks within ports and container terminals.

Estimation of fruit number of apple tree based on YOLOv5 and regression model (YOLOv5 및 다항 회귀 모델을 활용한 사과나무의 착과량 예측 방법)

  • Hee-Jin Gwak;Yunju Jeong;Ik-Jo Chun;Cheol-Hee Lee
    • Journal of IKEEE / v.28 no.2 / pp.150-157 / 2024
  • In this paper, we propose a novel algorithm for predicting the number of apples on an apple tree using a deep learning-based object detection model and a polynomial regression model. Measuring the number of apples on an apple tree can be used to predict apple yield and to assess losses for determining agricultural disaster insurance payouts. To measure apple fruit load, we photographed the front and back sides of apple trees. We manually labeled the apples in the captured images to construct a dataset, which was then used to train a one-stage object detection CNN model. However, when apples on an apple tree are obscured by leaves, branches, or other parts of the tree, they may not be captured in images. Consequently, it becomes difficult for image recognition-based deep learning models to detect or infer the presence of these apples. To address this issue, we propose a two-stage inference process. In the first stage, we utilize an image-based deep learning model to count the number of apples in photos taken from both sides of the apple tree. In the second stage, we conduct a polynomial regression analysis, using the total apple count from the deep learning model as the independent variable, and the actual number of apples manually counted during an on-site visit to the orchard as the dependent variable. The performance evaluation of the two-stage inference system proposed in this paper showed an average accuracy of 90.98% in counting the number of apples on each apple tree. Therefore, the proposed method can significantly reduce the time and cost associated with manually counting apples. Furthermore, this approach has the potential to be widely adopted as a new foundational technology for fruit load estimation in related fields using deep learning.
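The second stage described above (a polynomial regression with the detector's count as the independent variable and the manually counted total as the dependent variable, to correct for occluded fruit) can be sketched with NumPy. The calibration numbers below are invented for illustration, not the paper's orchard data:

```python
import numpy as np

# Hypothetical calibration pairs: apples detected by YOLOv5 in front+back
# photos (x) vs. apples counted manually on-site (y). Occlusion by leaves
# and branches makes the true count grow faster than the detected count.
detected = np.array([40, 55, 70, 85, 100, 115, 130, 145])
actual   = np.array([48, 68, 90, 112, 135, 160, 186, 214])

# Fit the stage-2 polynomial regression (degree 2 as an assumption).
coeffs = np.polyfit(detected, actual, deg=2)
predict = np.poly1d(coeffs)

# Estimated true fruit load for a new tree with 120 detected apples.
estimate = predict(120)
```

Once calibrated, only the photographs are needed per tree; the regression supplies the correction that a manual count would otherwise provide.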

A Development of Facility Web Program for Small and Medium-Sized PSM Workplaces (중·소규모 공정안전관리 사업장의 웹 전산시스템 개발)

  • Kim, Young Suk;Park, Dal Jae
    • Korean Chemical Engineering Research / v.60 no.3 / pp.334-346 / 2022
  • There is a lack of knowledge and information on the understanding and application of the Process Safety Management (PSM) system, recognized as a major cause of industrial accidents in small- and medium-sized workplaces. Hence, it is necessary to prepare a protocol that secures a practical and continuous level of PSM implementation and eliminates human errors through tracking management. However, insufficient research has been conducted on this. Therefore, this study investigated and analyzed various violations of administrative measures, based on the regulations announced by the Ministry of Employment and Labor, in approximately 200 small- and medium-sized PSM workplaces with fewer than 300 employees across Korea. This study intended to contribute to the prevention of major industrial accidents by developing a facility maintenance web program that removes human errors in small- and medium-sized workplaces. The major results are summarized as follows. First, the program can be accessed on the web via a QR code on a smart device to check each piece of equipment's specifications, causes of failure, and photos, making it possible to request inspection and maintenance in real time. Second, it linked the identification of change targets, risk assessment, worker training, and pre-operation inspection with the program, allowing the administrator to track all procedures from start to finish. Third, it made it possible to predict the life of the equipment and verify its reliability based on the data accumulated through the registration of pictures of improvements, repairs, time required, cost, etc. after work was completed. It is suggested that these research results will be helpful for the practical and systematic operation of small- and medium-sized PSM workplaces. In addition, they can be usefully utilized in the development and dissemination of facility maintenance web programs when establishing future smart factories in small- and medium-sized PSM workplaces under the direction of the government.

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and requires professionals capable of classifying relevant information; hence, text classification is introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. In the text classification field, various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge. Depending on the types of words used in the corpus and the types of features created for classification, the performance of a text classification model can vary. Most attempts have been made by proposing a new algorithm or modifying an existing one, and this line of research can be said to have reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets most of the time contain noise, or noisy data, which can affect the decisions made by classifiers built from them.
In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise characteristics that can be utilized in the classification process. To build a classifier, a machine learning algorithm is performed under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary included in the documents; if the viewpoints of the training data and target data differ, the features may appear different between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various kinds of sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms, because they are not developed to recognize different types of data representation at one time and to place them in the same generalization. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier, so we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision making.
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
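The core idea of artificially injecting heterogeneous-domain noise into training documents can be sketched in a few lines. This is a simplified illustration of the noise-injection step only, not the RSESLA algorithm itself; the sample sentence and vocabularies are invented:

```python
import random

def inject_noise(tokens, noise_vocab, rate=0.1, rng=None):
    """Replace a fraction of tokens with words drawn from a different
    (heterogeneous) domain, hardening a classifier against domain shift."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        if rng.random() < rate:
            out.append(rng.choice(noise_vocab))  # cross-domain substitution
        else:
            out.append(tok)
    return out

# A news-domain training document perturbed with microblog-style tokens.
news_doc = "stocks rallied as the central bank held interest rates".split()
tweet_vocab = ["lol", "omg", "#trending", "smh"]
noisy = inject_noise(news_doc, tweet_vocab, rate=0.2)
```

Training the classifier on a mix of clean and perturbed documents is what the abstract means by using noise from heterogeneous data to strengthen robustness.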

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advancement than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Today, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema, trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the proper sentences from which to extract triples, and selecting values and transforming them into the RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training dataset from a Wikipedia dump by adding BIO tags to sentences, training on about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, it is possible to utilize structured knowledge by extracting it according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
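The training-data generation step (adding BIO tags to sentences where an infobox attribute value occurs) can be illustrated with a minimal tagger. The sentence and the `country` label below are hypothetical examples, not drawn from the paper's 200 classes:

```python
def bio_tag(tokens, value_tokens, label):
    """Assign BIO labels: B-/I- over the span matching the attribute
    value, O everywhere else (first match only)."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = f"B-{label}"
            for j in range(1, n):
                tags[i + j] = f"I-{label}"
            break
    return tags

sentence = "Seoul is the capital of South Korea".split()
tags = bio_tag(sentence, ["South", "Korea"], "country")
# tags: ['O', 'O', 'O', 'O', 'O', 'B-country', 'I-country']
```

Sentences tagged this way against known infobox values become the supervision for the CRF and Bi-LSTM-CRF sequence labelers compared in the paper.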

A Study on Major Safety Problems and Improvement Measures of Personal Mobility (개인형 이동장치의 안전 주요 문제점 및 개선방안 연구)

  • Kang, Seung Shik;Kang, Seong Kyung
    • Journal of the Society of Disaster Information / v.18 no.1 / pp.202-217 / 2022
  • Purpose: The recent increase in the use of Personal Mobility (PM) devices has been accompanied by a rise in the annual number of accidents. Accordingly, the safety requirements for PM use are being strengthened, but the laws and systems, infrastructure, and management systems remain insufficient for fostering a safe environment. Therefore, this study comprehensively investigates the main problems and improvement methods through a review of previous PM-related studies, and then presents priorities according to the importance of the improvement methods through a Delphi survey. Method: The research method consists mainly of a literature study and an expert survey (Delphi survey). Prior research and improvement cases (local governments, government departments, companies, etc.) are reviewed to derive problems and improvements, and a problem/improvement classification table is created based on keywords. Based on this classification, an expert survey is conducted to derive a priority improvement plan. Result: The PM-related problems were 'non-compliance with traffic laws, lack of knowledge, inexperienced operation, and lack of safety awareness' for human factors, and 'device characteristics, roads and drivable space, road facilities, and parking facilities' for physical factors. Administrative factors comprise 'management/supervision, product management, user management, and education/training', and legal factors are divided into 'absence/insufficiency of law, confusion/duplication, and reduced effectiveness'. Related improvement tasks include 'PM education/public relations, parking/return, road improvement, PM registration/management, insurance, safety standards, traffic standards, PM device safety, PM supplementary facilities, enforcement/management, a dedicated organization, service providers, the management system, and related legal/institutional improvements', and 42 detailed tasks are derived from these 14 core tasks.
The importance evaluation of the detailed tasks shows that the tasks with a high overall average across the evaluation items of cost, time, effect, urgency, and feasibility were 'strengthening crackdown/instruction activities, education and publicity campaigns, management of abandoned PMs, and clarification of traffic rules'. Conclusion: The PM market is growing gradually on the basis of shared services, and a safe environment for PM use must be ensured along with industrial revitalization. In this respect, this study identifies the major PM-related problems and improvement plans from a comprehensive point of view and prioritizes the necessary improvement measures, so it can serve as baseline data for future policy establishment. In the future, in-depth data supplementation will be required for each key improvement area for practical policy application.