• Title/Summary/Keyword: network algorithms

Mean Teacher Learning Structure Optimization for Semantic Segmentation of Crack Detection (균열 탐지의 의미론적 분할을 위한 Mean Teacher 학습 구조 최적화)

  • Seungbo Shim
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.5
    • /
    • pp.113-119
    • /
    • 2023
  • Most infrastructure structures were completed during periods of economic growth. The number of structures reaching the end of their design life is growing, and the proportion of aging structures is rising steadily. The functions and performance these structures had at the time of design may deteriorate, and deterioration may even lead to safety accidents. Preventing this requires accurate inspection and appropriate repair. To this end, demand is increasing for computer vision and deep learning technology that can accurately detect even minute cracks. However, deep learning algorithms require large amounts of training data, in particular label images indicating the location of cracks. Securing a large number of such label images consumes considerable labor and time. To reduce these costs as well as increase detection accuracy, this study proposed a learning structure based on the mean teacher method. The structure was trained on 900 labeled images and 3,000 unlabeled images. The crack detection network was evaluated on over 300 labeled images, recording a mean intersection over union of 89.23% and an F1 score of 89.12%. The experiment confirmed that detection performance improved compared to supervised learning alone. The proposed method is expected to reduce the cost of securing label images in the future.
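
The abstract names the mean teacher method without detailing it. As a rough orientation only, the sketch below shows a standard mean teacher training step in PyTorch, not the paper's exact structure: the teacher's weights are an exponential moving average (EMA) of the student's, labeled crack images drive a supervised segmentation loss, and unlabeled images drive a consistency loss between student and teacher predictions. The models, loaders, and hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def update_teacher(student, teacher, alpha=0.99):
    """EMA update: teacher weights slowly track the student's."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1 - alpha)

def training_step(student, teacher, labeled, unlabeled, optimizer, lam=1.0):
    # `teacher` is assumed to start as a copy of `student`.
    images, masks = labeled                      # labeled crack images + masks
    sup_loss = F.cross_entropy(student(images), masks)

    # Consistency on unlabeled images: the student should match the teacher.
    # (Real mean teacher also perturbs the inputs; omitted for brevity.)
    with torch.no_grad():
        teacher_prob = torch.softmax(teacher(unlabeled), dim=1)
    student_prob = torch.softmax(student(unlabeled), dim=1)
    cons_loss = F.mse_loss(student_prob, teacher_prob)

    loss = sup_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    update_teacher(student, teacher)
    return loss.item()
```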

Study on Establishment of a Monitoring System for Long-term Behavior of Caisson Quay Wall (케이슨 안벽의 장기 거동 모니터링 시스템 구축 연구)

  • Tae-Min Lee;Sung Tae Kim;Young-Taek Kim;Jiyoung Min
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.5
    • /
    • pp.40-48
    • /
    • 2023
  • In this paper, a sensor-based monitoring system was established to analyze the long-term behavioral characteristics of the caisson quay wall, a representative structural type in port facilities. Data were collected over a period of approximately 10 months. Based on existing literature, anomalous behaviors of port facilities were classified, and a measurement system was selected to detect them. Monitoring systems were installed on-site to collect data periodically, and the collected data were transmitted to and stored on a server over an LTE network. Considering the site conditions, inclinometers for measuring slope and crack meters for measuring spacing and settlement were installed, attached to two caissons so that different caissons could be compared. The correlation among the measured data, temperature, and tidal level was examined. Temperature dominated the spacing and settlement data: when the temperature changed by approximately 50 degrees, the spacing changed by 10 mm, the settlement by 2 mm, and the slope by 0.1 degrees. By contrast, there was no clear relationship with tidal level, indicating a need for more in-depth analysis in the future. Based on the characteristics of the collected data, it will be possible to develop algorithms for detecting abnormal states in gravity-type quay walls. The acquisition and analysis of long-term data make it possible to evaluate the safety and usability of structures in the event of disasters and emergencies.
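
The abstract's correlation analysis can be pictured with a short pandas sketch. Everything here is hypothetical: the file name, the column names (spacing_mm, settlement_mm, slope_deg, temp_c, tide_m), and the daily resampling are assumptions, not the study's actual pipeline.

```python
import pandas as pd

# Hypothetical schema: one row per measurement interval collected over LTE.
df = pd.read_csv("caisson_monitoring.csv",
                 parse_dates=["timestamp"], index_col="timestamp")

# Daily means smooth out short-term noise before correlating.
daily = df.resample("1D").mean()

# Correlation of each structural response with temperature and tidal level.
responses = daily[["spacing_mm", "settlement_mm", "slope_deg"]]
print("vs temperature:\n", responses.corrwith(daily["temp_c"]))
print("vs tide level:\n", responses.corrwith(daily["tide_m"]))
```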

Data-driven Modeling for Valve Size and Type Prediction Using Machine Learning (머신 러닝을 이용한 밸브 사이즈 및 종류 예측 모델 개발)

  • Chanho Kim;Minshick Choi;Chonghyo Joo;A-Reum Lee;Yun Gun;Sungho Cho;Junghwan Kim
    • Korean Chemical Engineering Research
    • /
    • v.62 no.3
    • /
    • pp.214-224
    • /
    • 2024
  • Valves play an essential role in a chemical plant, regulating fluid flow and pressure, so optimal selection of valve size and type is an essential task. Valve size and type have traditionally been selected using theoretical formulas for the valve sizing coefficient (Cv), but this approach has limitations: it requires expert knowledge and consumes substantial time and cost. This study therefore developed models for predicting valve size and type using machine learning. We built models with four algorithms: ANN, Random Forest, XGBoost, and CatBoost, and evaluated them using NRMSE and the R2 score for size prediction and the F1 score for type prediction. Additionally, a case study was conducted to explore the impact of fluid phase on valve selection, using four datasets: total fluids, liquids, gases, and steam. For valve size prediction, the total fluid, liquid, and gas datasets performed best with CatBoost (based on R2, total: 0.99216, liquid: 0.98602, gas: 0.99300; based on NRMSE, total: 0.04072, liquid: 0.04886, gas: 0.03619), while the steam dataset performed best with Random Forest (R2: 0.99028, NRMSE: 0.03493). For valve type prediction, CatBoost outperformed on all datasets with the highest F1 scores (total: 0.95766, liquids: 0.96264, gases: 0.95770, steam: 1.0000). In the Engineering, Procurement, and Construction (EPC) industry, the proposed fluid-specific machine-learning-based model is expected to guide the selection of suitable valves for given process conditions and to speed up decision-making.
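
As a hedged illustration of the modeling setup the abstract describes, the sketch below trains a CatBoost regressor for valve size and a CatBoost classifier for valve type, scored with R2/NRMSE and F1. The dataset file, feature columns, and the range-based NRMSE normalization are assumptions; the paper's exact features and preprocessing are not given in the abstract.

```python
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor, CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, f1_score, mean_squared_error

df = pd.read_csv("valve_process_data.csv")   # hypothetical dataset
X = df[["flow_rate", "inlet_pressure", "pressure_drop", "temperature"]]
X_tr, X_te, ysz_tr, ysz_te, yty_tr, yty_te = train_test_split(
    X, df["valve_size"], df["valve_type"], test_size=0.2, random_state=42)

# Size prediction (regression), scored by R2 and NRMSE.
reg = CatBoostRegressor(verbose=0).fit(X_tr, ysz_tr)
pred = reg.predict(X_te)
rmse = np.sqrt(mean_squared_error(ysz_te, pred))
nrmse = rmse / (ysz_te.max() - ysz_te.min())   # one common NRMSE definition
print(f"R2={r2_score(ysz_te, pred):.5f}  NRMSE={nrmse:.5f}")

# Type prediction (classification), scored by macro F1.
clf = CatBoostClassifier(verbose=0).fit(X_tr, yty_tr)
print(f"F1={f1_score(yty_te, clf.predict(X_te), average='macro'):.5f}")
```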

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, which has drawn the interest of many researchers seeking to manage it and has created demand for automatic ways of classifying relevant information; hence text classification was introduced. Text classification is a challenging task in modern data analysis: a text document must be assigned to one or more predefined categories or classes. Various techniques are available, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge; depending on the vocabulary of the corpus and the features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and such research has arguably reached its limits. In this study, instead of proposing or modifying an algorithm, we focus on changing how the data are used. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, and this noisy data can affect the decisions made by classifiers trained on them. In this study, we consider that data from different domains, that is, heterogeneous data, may carry noise-like characteristics that can be exploited in the classification process. Machine learning normally assumes that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Data coming from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms, since they were not designed to recognize different data representations at the same time and combine them in a single generalization. To utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier.
We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are selected and applied for the final decision. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
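
RSESLA itself is not specified in the abstract, so the sketch below shows only the generic semi-supervised ingredient it builds on: self-training that keeps pseudo-labels above a confidence threshold, here via scikit-learn's SelfTrainingClassifier on a toy mixed-domain corpus. It is a simplified stand-in, not the authors' algorithm.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy stand-in corpora: labeled news plus unlabeled tweets from a
# different domain, whose "noisy" vocabulary is folded into training.
labeled = ["economy shrinks in third quarter", "team wins championship final"]
labels = [0, 1]                       # 0 = business, 1 = sports
unlabeled = ["markets are wild today lol", "what a match last night"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled + unlabeled)
y = np.array(labels + [-1] * len(unlabeled))   # -1 marks unlabeled docs

# The base learner only keeps pseudo-labels it is confident about,
# echoing RSESLA's idea of selecting only helpful documents.
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.75)
model.fit(X, y)
print(model.predict(vec.transform(["stocks rally after report"])))
```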

A Comparative Errors Assessment Between Surface Albedo Products of COMS/MI and GK-2A/AMI (천리안위성 1·2A호 지표면 알베도 상호 오차 분석 및 비교검증)

  • Woo, Jongho;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Byeon, Yugyeong;Jeon, Uujin;Sohn, Eunha;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1767-1772
    • /
    • 2021
  • Long-term global satellite observations of surface albedo are actively used to monitor changes in the global climate and environment, and their utility and importance are considerable. Through the generational transition from the geostationary satellite COMS (Communication, Ocean and Meteorological Satellite)/MI (Meteorological Imager) to GK-2A (GEO-KOMPSAT-2A)/AMI (Advanced Meteorological Imager), surface albedo products can be secured continuously. However, the surface albedo outputs of COMS/MI and GK-2A/AMI differ because of differences in their retrieval algorithms. Therefore, to extend the surface albedo record across COMS/MI and GK-2A/AMI and secure continuity for climate change monitoring, an analysis of the two satellites' outputs and errors must come first. In this study, error characteristics were analyzed by comparing the COMS/MI and GK-2A/AMI surface albedo data for their overlapping period against AERONET (Aerosol Robotic Network) ground observations and GLASS (Global Land Surface Satellite) satellite data. Against AERONET, the RMSE of COMS/MI was 0.043, higher than the GK-2A/AMI RMSE of 0.015. Against GLASS, the RMSE of COMS/MI was 0.029, slightly lower than GK-2A/AMI's 0.038. Understanding these error characteristics when using the COMS/MI and GK-2A/AMI surface albedo data will allow them to be used actively for long-term climate change monitoring.
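
The RMSE comparison in the abstract reduces to a simple calculation over collocated samples; a minimal numpy sketch, with entirely made-up sample values, is shown below.

```python
import numpy as np

def rmse(estimate, reference):
    """Root-mean-square error between collocated albedo samples."""
    estimate, reference = np.asarray(estimate), np.asarray(reference)
    return np.sqrt(np.mean((estimate - reference) ** 2))

# Hypothetical collocated samples at AERONET sites for the overlap period.
aeronet = np.array([0.16, 0.21, 0.18, 0.25])
coms_mi = np.array([0.20, 0.17, 0.22, 0.21])
gk2a_ami = np.array([0.17, 0.20, 0.19, 0.24])

print(f"COMS/MI   RMSE vs AERONET: {rmse(coms_mi, aeronet):.3f}")
print(f"GK-2A/AMI RMSE vs AERONET: {rmse(gk2a_ami, aeronet):.3f}")
```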

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.201-220
    • /
    • 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve the problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection is a form of document classification, so document classification techniques have been widely used in this research; document summarization techniques, however, have been inconspicuous in the field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. The need to study the integration of document summarization technology in the domestic news data environment has therefore become evident. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a summarized-news-based detection model, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance, while for DT (Decision Tree) the full-text-based model performed somewhat better. For LR (Logistic Regression), our summarized-news-based model performed better, although the difference from the full-text-based model was not statistically significant. This suggests that the summary preserves at least the core information of the fake news, and the LR-based model shows the possibility of performance improvement. This study is an experimental application of extractive summarization to fake news detection using various machine learning algorithms. Its main limitations are the relatively small amount of data and the lack of comparison among summarization technologies; an in-depth analysis applying various techniques to a larger data volume would be helpful in the future.
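
To make the pipeline concrete, here is a minimal sketch, not the paper's implementation, that pairs a naive frequency-based extractive summarizer with a TF-IDF + Logistic Regression detector. The toy articles and labels are illustrative only.

```python
import re
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def extractive_summary(article, k=2):
    """Keep the k sentences with the highest word-frequency score."""
    sents = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(w for s in sents for w in re.findall(r"\w+", s.lower()))
    scored = sorted(sents, key=lambda s: -sum(
        freq[w] for w in re.findall(r"\w+", s.lower())))
    top = set(scored[:k])
    return " ".join(s for s in sents if s in top)  # keep original order

# Toy labeled articles (1 = fake, 0 = real); real data would be news text.
articles = ["Shocking cure found. Doctors hate it. Click now to see.",
            "The ministry reported quarterly figures. Growth was modest."]
labels = [1, 0]

summaries = [extractive_summary(a) for a in articles]
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(summaries), labels)
print(clf.predict(vec.transform([extractive_summary(
    "Miracle diet revealed. Experts stunned. Share before deleted.")])))
```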

Design and Implementation of IoT based Low cost, Effective Learning Mechanism for Empowering STEM Education in India

  • Simmi Chawla;Parul Tomar;Sapna Gambhir
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.4
    • /
    • pp.163-169
    • /
    • 2024
  • India is a developing nation that has made comprehensive progress in modernizing its economy, reducing poverty, and raising living standards for a large fraction of its residents. STEM (Science, Technology, Engineering, and Mathematics) education plays an important role in this. STEM is an educational curriculum that emphasizes the subjects of science, technology, engineering, and mathematics. In traditional education these subjects are taught independently, whereas the STEM philosophy teaches them together in project-based lessons, supporting students' holistic development. Youth unemployment, driven by a lack of adequate skills, is a major concern: a huge skill gap lies behind jobless engineers, raising the question of how we can prepare engineers for a better tomorrow. Industry 4.0, the fourth industrial revolution, is the intelligent networking of machines and processes for industry through ICT, built on cyber-physical systems and the Internet of Things (IoT). This revolution influences not only production but the educational system as well: IoT in academics is a new stage of Internet technology that introduces "smartness" across the entire IT infrastructure. To improve India's socio-economic status, students must be equipped with 21st-century digital skills, and universities and colleges must provide individual learning kits that help students enhance their productivity and learning outcomes. The major goal of this paper is to present a low-cost, effective learning mechanism for STEM implementation using the Raspberry Pi 3+ (a single-board computer) and Node-RED, an open-source visual programming tool developed by IBM for wiring hardware devices together; both are broadly used to provide hands-on experience with IoT fundamentals during teaching and learning. The paper demonstrates the practicality of these concepts with an example that implements a user interface (UI) and dashboard in Node-RED: the dashboard palette supplies a switch, slider, and gauge, and the Raspberry Pi palette connects to the GPIO pins on the Raspberry Pi board, with an LED wired to a GPIO output pin. The experiment shows that the Node-RED dashboard is accessible both on the Raspberry Pi and via smartphone, and the results are presented in detail. Inadequate programming skills among students remain the biggest challenge, because without good programming skills there would be no pioneers in engineering, robotics, and other areas; coding plays an important role in raising the level of knowledge at scale and encouraging students' interest. Python, an open-source language and one of the most in-demand languages in industry for data science and algorithms, is used here: the paper also describes a small experiment driving an LED by writing Python source code (a minimal sketch follows below). These small experiments encourage students and offer a playful way to learn advanced technologies. A cost estimate for each learning kit provided to students for hands-on experiments is presented in tabular form, and some popular open-source tools for experimenting with IoT technology are described.
Students can enrich their knowledge by performing many experiments with this freely available software and low-cost hardware, in labs or with the learning kits provided to them.
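
The LED experiment mentioned above can be sketched in a few lines of Python. The gpiozero library and GPIO pin 17 are assumptions; the paper does not specify which library or pin it used.

```python
# Minimal LED blink in Python on a Raspberry Pi, along the lines of the
# experiment described above. Wire the LED (with a resistor) between the
# chosen GPIO pin and ground.
from gpiozero import LED
from time import sleep

led = LED(17)          # BCM number of the assumed GPIO pin

for _ in range(10):    # blink ten times, one second per cycle
    led.on()
    sleep(0.5)
    led.off()
    sleep(0.5)
```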

Basic Research on the Possibility of Developing a Landscape Perceptual Response Prediction Model Using Artificial Intelligence - Focusing on Machine Learning Techniques - (인공지능을 활용한 경관 지각반응 예측모델 개발 가능성 기초연구 - 머신러닝 기법을 중심으로 -)

  • Kim, Jin-Pyo;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.3
    • /
    • pp.70-82
    • /
    • 2023
  • The recent surge of IT and data acquisition is shifting the paradigm in all aspects of life, and these advances are also affecting academic fields, where research topics and methods are being improved through academic exchange and connections. In particular, data-based research methods are employed in various fields, including landscape architecture, where continued research is needed. This study therefore investigates the possibility of developing a landscape preference evaluation and prediction model using machine learning, a branch of artificial intelligence. To this end, machine learning techniques were applied to the landscape field to build a preference evaluation and prediction model and to verify its simulation accuracy. Images of wind power facility landscapes, which have recently attracted attention as a renewable energy source, were selected as the research objects; they were collected using web crawling techniques, and an analysis dataset was built. Orange version 3.33, a program from the University of Ljubljana, was used for the machine learning analysis. Both a model that integrates the evaluation criteria and separate model structures for each criterion were used to generate models with the kNN, SVM, Random Forest, Logistic Regression, and Neural Network algorithms, which suit machine learning classification, and the generated models were evaluated to derive the most suitable prediction model. The derived model evaluates three criteria separately, classification by landscape type, classification by distance between landscape and target, and classification by preference, and then synthesizes the results into a prediction. The resulting model achieved high accuracies of 0.986 for classification by landscape type, 0.973 for classification by distance, and 0.952 for classification by preference, exceeding the required performance values in the verification of the prediction results. As an experimental attempt at developing a prediction model with machine learning in landscape research, this study confirmed that a high-performance model can be created by building a dataset through the collection and refinement of image data and then applying it in landscape-related research. Based on these results, implications, and limitations, it should be possible to develop various landscape prediction models, including for wind power facility, natural, and cultural landscapes. Machine learning techniques can become more useful and valuable in landscape architecture by exploring research methods appropriate to each topic, for example reducing data classification time with models that classify images by landscape type, or analyzing the importance of landscape planning factors through machine-learning-based analysis of prediction factors.
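
The study ran these five classifiers inside the Orange GUI; as a rough equivalent in code, the sketch below cross-validates the same five algorithms with scikit-learn on stand-in image embeddings. The random features and labels are placeholders for the study's actual landscape image dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in: feature vectors extracted from landscape images
# (Orange uses deep-network embeddings) and binary preference labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # 200 images, 128-dim embeddings
y = rng.integers(0, 2, size=200)         # 0 = low, 1 = high preference

models = {
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Neural Network": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```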

A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung;Bae, Junghwan;Han, Namgi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.69-92
    • /
    • 2015
  • The explosion of social media data has led researchers to apply text-mining techniques to analyze big social media data more rigorously. Although social media text analysis algorithms have improved, previous approaches still have limitations. In sentiment analysis of social media written in Korean, there are two typical approaches. The first and most common is the linguistic approach using machine learning; some studies have added grammatical factors to the feature sets used to train classification models. The other approach applies semantic analysis to sentiment analysis, but it has mainly been applied to English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to handle the broader semantic features that existing sentiment analysis underestimates. The result of the Word2Vec analysis is compared with a co-occurrence analysis to identify the difference between the two approaches. The results show that Word2Vec extracts roughly three times as many emotion-related words for a given keyword as co-occurrence analysis does. This difference stems from Word2Vec's vectorization of semantic features, so the Word2Vec algorithm can be said to catch hidden related words that traditional analysis misses. In addition, Part-of-Speech (POS) tagging for Korean is used to detect adjectives as "emotion words." The emotion words extracted from the text are converted into word vectors by Word2Vec to find related words, and among these related words, nouns are selected because each may have a causal relationship with the emotion word in the sentence. The process of extracting these trigger factors of an emotion word is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords, professor, prosecutor, and doctor, since these keywords attract rich public emotion and opinion. Preliminary data collection was conducted to select secondary keywords for gathering the data used in the actual analysis: Professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor); Doctor (Shin Hae-chul Sky Hospital, drinking and plastic surgery, rebate); Prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (Professor: 25,720; Doctor: 35,110; Prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all text processing and analysis programs were written in Java. The contributions of this study are as follows. First, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches. Second, finding Emotion Triggers can detect hidden connections to public emotion that existing methods cannot. Finally, the approach could be generalized regardless of the type of text data.
The limitation of this study is that it is hard to claim that a word extracted by Emotion Trigger processing has a significant causal relationship with the emotion word in a sentence. Future work will clarify this causal relationship by comparing the extracted words with manually tagged relationships. Furthermore, much of the text data used for Emotion Trigger comes from Twitter, which has a number of distinct features not dealt with in this study; these will be considered in further work.
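
The core Word2Vec step, training on tokenized documents and querying the neighbors of an emotion word to surface candidate trigger nouns, can be sketched with gensim as below. The toy corpus and the choice of gensim (the study's own code was written in Java) are assumptions.

```python
from gensim.models import Word2Vec

# Toy tokenized corpus; the study trains on ~100,000 Korean documents.
sentences = [
    ["professor", "accused", "of", "research", "fund", "misuse"],
    ["angry", "students", "protest", "professor", "scandal"],
    ["doctor", "faces", "rebate", "allegations"],
] * 50   # repeat so the toy model has something to learn

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1)

# Nearest neighbors of an emotion word approximate its "related words";
# nouns among them are candidate Emotion Triggers.
for word, score in model.wv.most_similar("angry", topn=5):
    print(f"{word}: {score:.2f}")
```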

T-Cache: a Fast Cache Manager for Pipeline Time-Series Data (T-Cache: 시계열 배관 데이타를 위한 고성능 캐시 관리자)

  • Shin, Je-Yong;Lee, Jin-Soo;Kim, Won-Sik;Kim, Seon-Hyo;Yoon, Min-A;Han, Wook-Shin;Jung, Soon-Ki;Park, Se-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.293-299
    • /
    • 2007
  • Intelligent pipeline inspection gauges (PIGs) are inspection vehicles that move along inside a (gas or oil) pipeline and acquire signals (also called sensor data) from their surrounding rings of sensors. By analyzing the signals captured by intelligent PIGs, we can detect pipeline defects, such as holes and curvatures, and other potential causes of gas explosions. Two major data access patterns appear when an analyst accesses the pipeline signal data. The first is a sequential pattern, where an analyst reads the sensor data only once, in sequential fashion. The second is a repetitive pattern, where an analyst repeatedly reads the signal data within a fixed range; this is the dominant pattern in analyzing the signal data. The existing PIG software reads signal data directly from the server at every user's request, incurring network transfer and disk access costs. It works well only for the sequential pattern, not for the more dominant repetitive pattern, and the problem becomes serious in a client/server environment where several analysts analyze the signal data concurrently. To tackle this problem, we devise a fast in-memory cache manager, called T-Cache, that treats pipeline sensor data as multiple time-series and caches those time-series efficiently. To the best of the authors' knowledge, this is the first research on caching pipeline signals on the client side. We propose the new concept of the signal cache line as the caching unit: a set of time-series signal data covering a fixed distance. We also describe the various data structures used in T-Cache, including smart cursors, and its algorithms. Experimental results show that T-Cache performs much better for the repetitive pattern in terms of disk I/Os and elapsed time. Even with the sequential pattern, T-Cache shows almost the same performance as a system without caching, indicating that T-Cache's caching overhead is negligible.
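
A much-simplified sketch of the signal cache line idea, not the paper's actual data structures, is given below: signals are cached in fixed-distance lines with LRU replacement, so repeated reads within a range avoid server round trips. The line length, capacity, and fetch callback are illustrative.

```python
from collections import OrderedDict

class TCache:
    """Simplified client-side cache in the spirit of T-Cache: signals are
    cached in fixed-distance "cache lines" with LRU replacement."""

    def __init__(self, line_length=100.0, capacity=64, fetch=None):
        self.line_length = line_length       # pipeline distance per line
        self.capacity = capacity             # max lines held in memory
        self.lines = OrderedDict()           # line index -> signal data
        self.fetch = fetch                   # server fetch callback

    def read(self, distance):
        """Return the cached signal line covering the given distance."""
        idx = int(distance // self.line_length)
        if idx in self.lines:
            self.lines.move_to_end(idx)      # mark as recently used
        else:
            start = idx * self.line_length
            self.lines[idx] = self.fetch(start, start + self.line_length)
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)   # evict the LRU line
        return self.lines[idx]

# Usage: repeated reads within a range hit memory instead of the server.
cache = TCache(fetch=lambda s, e: f"signals[{s:.0f},{e:.0f})")
print(cache.read(250.0))   # fetched from the server
print(cache.read(260.0))   # served from the cache
```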