• Title/Summary/Keyword: Smart Monitoring Systems


Analysis of Emerging Geo-technologies and Markets Focusing on Digital Twin and Environmental Monitoring in Response to Digital and Green New Deal (디지털 트윈, 환경 모니터링 등 디지털·그린 뉴딜 정책 관련 지질자원 유망기술·시장 분석)

  • Ahn, Eun-Young;Lee, Jaewook;Bae, Junhee;Kim, Jung-Min
    • Economic and Environmental Geology / v.53 no.5 / pp.609-617 / 2020
  • After introducing its Industry 4.0 policy, the Korean government announced the 'Digital New Deal' and 'Green New Deal' as the 'Korean New Deal' in 2020. We analyzed the Korea Institute of Geoscience and Mineral Resources (KIGAM)'s research projects related to this policy and conducted a market analysis focused on digital twin and environmental monitoring technologies. Regarding the 'Data Dam' policy, we suggested digital geo-contents with Augmented Reality (AR) & Virtual Reality (VR) and a public geo-data collection & sharing system. It is necessary to expand and support smart mining and digital oil field research for the '5th generation mobile communication (5G) and artificial intelligence (AI) convergence into all industries' policy. The Korean government is proposing downtown 3D maps for its 'Digital Twin' policy; KIGAM can provide 3D geological maps and Internet of Things (IoT) systems for social overhead capital (SOC) management. The 'Green New Deal' proposed developing technologies for green industries including resource circulation, Carbon Capture Utilization and Storage (CCUS), and electric & hydrogen vehicles. KIGAM has carried out related research projects and currently conducts research on domestic energy storage minerals. The oil and gas industries are presented as representative applications of the digital twin. Much progress has been made in mining automation and digital mapping, and Digital Twin Earth (DTE) is an emerging research subject. These emerging research subjects are deeply related to data analysis, simulation, AI, and the IoT; KIGAM should therefore collaborate with sensor and computing software & system companies.

A Study on Integrated Platform for Prevention of Disease and Insect-Pest of Fruit Tree (특용과수의 병해충 및 기상재해 방지를 위한 통합관리 플랫폼 설계에 대한 연구)

  • Kim, Hong Geun;Lee, Myeong Bae;Kim, Yu Bin;Cho, Yong Yun;Park, Jang Woo;Shin, Chang Sun
    • KIPS Transactions on Computer and Communication Systems / v.5 no.10 / pp.347-352 / 2016
  • Recently, IoT technology has been applied in various fields. In particular, the technology focuses on analyzing the large amounts of data gathered from environmental sensors to provide valuable information, and it has been actively researched in the agro-industrial sector, where much work is underway on monitoring and controlling crop growth environments. Conventional manual agro-control methods rely on average weather data, but actual values differ with each region's weather and environment, which can undermine disease and insect-pest prevention. To develop an integrated system suited to fruit trees, the necessary information was obtained from Jeollanam-do province, which has a high production rate in Korea. In this paper, we propose an integrated support platform for growing crops that minimizes damage from weather disasters through image analysis and forecasting models, using micro-climate weather information collection and CCTV. Fruit tree damage caused by weather disasters is controlled by utilizing various IoT technologies to maintain the growth environment, which helps in disease and insect-pest prevention and also helps farmers improve expected production.
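
As an illustration of the region-specific monitoring idea above, the following is a minimal sketch of a rule-based disease/insect-pest risk check driven by per-site micro-climate readings. The field names and thresholds are illustrative assumptions, not values from the paper.

```python
# Rule-based risk check from local micro-climate readings, instead of the
# regional averages that manual agro-control methods rely on.
from dataclasses import dataclass

@dataclass
class MicroClimateReading:
    site_id: str
    temperature_c: float   # air temperature near the fruit trees
    humidity_pct: float    # relative humidity
    leaf_wetness_h: float  # hours of leaf wetness in the last 24 h

def disease_risk(reading: MicroClimateReading) -> str:
    """Classify disease/insect-pest risk from one site's readings."""
    # Warm, humid, persistently wet canopies favour fungal infection
    # (thresholds below are hypothetical, for illustration only).
    if reading.humidity_pct > 90 and reading.leaf_wetness_h > 10:
        return "high"
    if reading.temperature_c > 20 and reading.humidity_pct > 80:
        return "moderate"
    return "low"

if __name__ == "__main__":
    r = MicroClimateReading("orchard-07", 23.5, 92.0, 12.0)
    print(r.site_id, disease_risk(r))  # orchard-07 high
```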

Seismic safety assessment of eynel highway steel bridge using ambient vibration measurements

  • Altunisik, Ahmet Can;Bayraktar, Alemdar;Ozdemir, Hasan
    • Smart Structures and Systems / v.10 no.2 / pp.131-154 / 2012
  • In this paper, we aim to determine the seismic behaviour of highway bridges by nondestructive testing using ambient vibration measurements. The Eynel Highway Bridge, an arch-type structure with a total length of 216 m located in the Ayvacık county of Samsun, Turkey, is selected as an application. The bridge connects villages separated by the Suat Uğurlu Dam Lake. A three-dimensional finite element model of the bridge is first established using project drawings, and an analytical modal analysis is then performed to generate natural frequencies and mode shapes in the three orthogonal directions. Ambient vibration measurements are carried out on the bridge deck under natural excitation such as traffic, human walking, and wind loads using Operational Modal Analysis. Sensitive seismic accelerometers are used to collect the signals obtained from the experimental tests. To obtain the experimental dynamic characteristics, two output-only system identification techniques are employed: the Enhanced Frequency Domain Decomposition technique in the frequency domain and the Stochastic Subspace Identification technique in the time domain. The analytical and experimental dynamic characteristics are compared, and the finite element model of the bridge is updated by changing the boundary conditions to reduce the differences between the results. It is demonstrated that ambient vibration measurements suffice to identify the most significant modes of highway bridges. After finite element model updating, the maximum differences between the natural frequencies are reduced on average from 23% to 3%. The updated finite element model reflects the dynamic characteristics of the bridge better, can be used to predict the dynamic response under complex external forces, and is also helpful for further damage identification and health condition monitoring. The analytical model of the bridge before and after model updating is analyzed using the 1992 Erzincan earthquake record to determine the seismic behaviour. The analysis results show that displacements increase with the height of the bridge columns and toward the middle of the deck and main arches. Bending moments have an increasing trend along the first and last 50 m and a decreasing trend along the middle of the main arches.
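
The following is a minimal sketch of the Frequency Domain Decomposition step underlying the EFDD technique named above: the cross-power spectral density (CPSD) matrix of the ambient acceleration channels is assembled, and the peaks of its first singular value indicate natural frequencies. Synthetic two-channel data stands in for the bridge measurements; all parameters are illustrative.

```python
# FDD sketch: SVD of the CPSD matrix of multi-channel ambient accelerations.
import numpy as np
from scipy.signal import csd

fs = 100.0                                  # sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
# Two channels observing a lightly damped 2.5 Hz mode buried in noise.
mode = np.sin(2 * np.pi * 2.5 * t)
acc = np.vstack([mode + 0.5 * rng.standard_normal(t.size),
                 0.8 * mode + 0.5 * rng.standard_normal(t.size)])

n_ch = acc.shape[0]
f, _ = csd(acc[0], acc[0], fs=fs, nperseg=1024)
G = np.zeros((f.size, n_ch, n_ch), dtype=complex)  # CPSD matrix per frequency
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=1024)

# Peaks of the first singular value of G(f) mark natural frequencies; the
# corresponding singular vector approximates the mode shape.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
print(f"peak near {f[np.argmax(s1)]:.2f} Hz")   # ~2.5 Hz
```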

Development of an Analytical Method for the Determination of Dexamethasone in Bovine Milk Using Liquid Chromatography Coupled to Tandem Mass Spectrometry (LC-MS/MS를 이용한 우유 중 덱사메타손의 잔류 분석법 개발)

  • Cha, Chun-Nam;Park, Eun-Kee;Yoo, Chang-Yeul;Lee, Sung Joong;Son, Song-Ee;Kim, Suk;Lee, Hu-Jang
    • Journal of Food Hygiene and Safety / v.32 no.5 / pp.418-423 / 2017
  • An analytical method for the determination of dexamethasone (DM) in bovine milk samples was developed and validated using liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). Milk samples were extracted by acetonitrile-based liquid-liquid extraction. Chromatographic separation was achieved on a reverse-phase $C_{18}$ column with gradient elution using a mobile phase of 0.1% formic acid in 95% acetonitrile. The procedure was validated according to the Ministry of Food and Drug Safety guideline, determining accuracy, precision, limit of detection (LOD), and limit of quantification (LOQ). Mean recoveries of DM from spiked milk samples (25, 125, and 1,250 ng/mL) were 98.9-109.6%, and the relative standard deviation was between 1.7 and 4.4%. Linearity in the concentration range of 12.5-1,250 ng/mL was obtained with a correlation coefficient ($r^2$) of 0.9997. The LOD and LOQ for DM were 0.15 and 0.5 ng/mL, respectively, depending on the milk samples. This method is reliable, sensitive, economical, and suitable for routine monitoring of DM residues in bovine milk.
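
The validation arithmetic reported above (mean recovery, relative standard deviation, calibration linearity) can be reproduced along the following lines; the replicate measurements and peak areas below are hypothetical stand-ins, not the paper's raw data.

```python
# Recovery, RSD, and r^2 computations typical of method validation.
import numpy as np

# Replicate measurements of a 125 ng/mL spiked milk sample (hypothetical).
measured = np.array([128.4, 131.9, 135.2, 130.1, 133.6])
spiked = 125.0
recovery = measured.mean() / spiked * 100           # mean recovery, %
rsd = measured.std(ddof=1) / measured.mean() * 100  # relative std. dev., %

# Calibration curve over the validated 12.5-1,250 ng/mL range (mock areas).
conc = np.array([12.5, 25, 125, 250, 625, 1250])
peak_area = 40.2 * conc + np.array([3, -5, 12, -8, 20, -15])
slope, intercept = np.polyfit(conc, peak_area, 1)
r2 = np.corrcoef(conc, peak_area)[0, 1] ** 2

print(f"recovery {recovery:.1f}%  RSD {rsd:.1f}%  r^2 {r2:.4f}")
```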

A Time Slot Assignment Scheme for Sensor Data Compression (센서 데이터의 압축을 위한 시간 슬롯 할당 기법)

  • Yeo, Myung-Ho;Kim, Hak-Sin;Park, Hyoung-Soon;Yoo, Jae-Soo
    • Journal of KIISE: Computing Practices and Letters / v.15 no.11 / pp.846-850 / 2009
  • Recently, wireless sensor networks have found their way into a wide variety of applications and systems with vastly varying requirements and characteristics, such as environmental monitoring, smart spaces, medical applications, and precision agriculture. Sensor nodes are battery powered, so energy is the most precious resource of a wireless sensor network: periodically replacing the batteries of nodes in large-scale deployments is infeasible. Energy-efficient mechanisms for gathering sensor readings are therefore indispensable to prolong the lifetime of a sensor network as long as possible. There are two energy-efficient approaches to prolonging network lifetime in sensor networks. One is compression, which reduces the size of the sensor readings. The other is the MAC protocol, which prevents communication conflicts; when a communication conflict occurs between two sensor nodes, the sender must retransmit its reading. In this paper, we propose a novel approach that reduces the size of sensor readings in the MAC layer. The proposed scheme compresses sensor readings by dynamically allocating the time slots of the TDMA schedule to them. We also present a mathematical model that predicts the latency of collecting the sensor readings as the compression ratio changes. In the simulation results, our proposed scheme reduces the communication cost by about 52% compared with the existing scheme.
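
The following is a minimal sketch of one way a TDMA schedule could allocate slots dynamically to compressed readings, in the spirit of the scheme above; the mini-slot sizing rule is an illustrative assumption, not the paper's exact algorithm.

```python
# Dynamic TDMA slot assignment: each node gets just enough mini-slots for
# its (already compressed) reading, instead of one fixed-size slot each.
import math

SLOT_BYTES = 8  # payload bytes carried by one mini-slot (assumed)

def build_schedule(readings: dict[str, bytes]) -> list[tuple[str, int]]:
    """Return (node, starting mini-slot) pairs for one TDMA round."""
    schedule, cursor = [], 0
    for node, payload in sorted(readings.items()):
        n_slots = max(1, math.ceil(len(payload) / SLOT_BYTES))
        schedule.append((node, cursor))  # node transmits from slot `cursor`
        cursor += n_slots                # shorter payloads free up airtime
    return schedule

if __name__ == "__main__":
    compressed = {"n1": b"\x01\x02", "n2": b"\x00" * 20, "n3": b"\x07"}
    for node, start in build_schedule(compressed):
        print(node, "starts at mini-slot", start)
```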

Development of a Slope Condition Analysis System using IoT Sensors and AI Camera (IoT 센서와 AI 카메라를 융합한 급경사지 상태 분석 시스템 개발)

  • Seungjoo Lee;Kiyen Jeong;Taehoon Lee;YoungSeok Kim
    • Journal of the Korean Geosynthetics Society / v.23 no.2 / pp.43-52 / 2024
  • Recent abnormal climate conditions have increased the risk of slope collapses, which frequently result in significant loss of life and property due to the absence of early prediction and warning dissemination. In this paper, we develop a slope condition analysis system that uses IoT sensors and an AI-based camera to assess the condition of slopes. To develop the system, we carried out hardware and firmware design for measurement sensors that accounts for the ground conditions of slopes, designed AI-based image analysis algorithms, and developed prediction and warning solutions and systems. We aimed to minimize errors in sensor data through the integration of IoT sensor data and AI camera image analysis, ultimately enhancing the reliability of the data, and we evaluated the accuracy (reliability) by applying the system to actual slopes. As a result, sensor measurement errors were maintained within 0.1°, and the data transmission rate exceeded 95%. Moreover, the AI-based image analysis system demonstrated nighttime partial recognition rates of over 99%, indicating excellent performance even in low-light conditions. It is anticipated that this research can be applied to slope condition analysis and smart maintenance management across various fields of Social Overhead Capital (SOC) facilities.
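
A minimal sketch of the sensor/camera fusion idea follows: an IoT tilt reading is cross-checked against the AI camera's detection confidence before a warning is raised, so a single noisy source does not trigger a false alarm. The thresholds, field names, and fusion rule are illustrative assumptions, not the paper's design.

```python
# Cross-checking an IoT inclinometer against AI camera detections before
# escalating a slope warning.
from dataclasses import dataclass

@dataclass
class SlopeObservation:
    tilt_deg: float         # inclinometer reading (error kept within 0.1 deg)
    tilt_rate_deg_h: float  # change of tilt per hour
    cam_confidence: float   # AI camera's surface-movement confidence, 0..1

def warning_level(obs: SlopeObservation) -> str:
    sensor_alarm = obs.tilt_rate_deg_h > 0.5  # assumed rate threshold
    camera_alarm = obs.cam_confidence > 0.8   # assumed confidence cut
    if sensor_alarm and camera_alarm:
        return "warn"      # both modalities agree: disseminate a warning
    if sensor_alarm or camera_alarm:
        return "watch"     # one modality fires: flag for inspection
    return "normal"

print(warning_level(SlopeObservation(3.2, 0.7, 0.91)))  # warn
```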

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.259-278 / 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are occurring due to the nature of data-based machine learning. Against this background, this study aims to contribute to improving trust in AI-based financial services by proposing a checklist to secure fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through literature research, we divided the entire fairness-related operation process into three areas: data, algorithm, and user. For each area we constructed four detailed considerations for evaluation, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP) using three groups representing the full range of financial stakeholders: financial field workers, artificial intelligence field workers, and general users. The three groups were classified and analyzed according to the importance each stakeholder assigned, and from a practical perspective, specific checks were identified, such as feasibility verification for using learning data and non-financial information and monitoring newly inflowing data. Moreover, financial consumers in general were found to place high importance on the accuracy of result analysis and bias checks. We expect these results to contribute to the design and operation of fair AI-based financial services.
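
The AHP step mentioned above can be sketched as follows: priority weights for the three checklist areas (data, algorithm, user) are derived from a pairwise comparison matrix via its principal eigenvector, with Saaty's standard consistency check. The comparison values are hypothetical, not the paper's survey results.

```python
# AHP priority weights from a pairwise comparison matrix.
import numpy as np

# A[i, j] = how much more important area i is than area j (Saaty's 1-9 scale),
# for the areas [data, algorithm, user]; values are hypothetical.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # normalized priority weights

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58                       # random index RI = 0.58 for n = 3
print("weights", np.round(w, 3), "CR", round(cr, 3))  # CR < 0.1 -> consistent
```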

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, and handling it requires considering the characteristics of both multidimensional data and time series data. With multidimensional data, the correlation between variables must be considered; existing probability-based, linear, and distance-based methods degrade due to the curse of dimensionality. Time series data, meanwhile, is typically preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing predicted and actual values; their performance drops when the model is not solid or when the data contains noise or outliers, and they are restricted to training data free of noise and outliers. An autoencoder built on artificial neural networks is trained to reproduce its input as closely as possible. It has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption, and it can learn without labeled training data. However, it remains limited in identifying local outliers in multidimensional data, and the dimensionality of the data greatly increases due to the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limitations of local outlier identification in multidimensional data. Multimodal architectures are commonly used to learn different types of inputs, such as voice and image; the different modals share the autoencoder's bottleneck and learn the correlations between them. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs usually take category variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was examined for the proposed and comparison models. Reconstruction performance differs by variable; the memory, disk, and network modals are reconstructed well in all three autoencoder models, with small loss values. The process modal showed no significant difference across the three models, while the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all of the anomalies. The model's accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has an additional advantage beyond performance: techniques such as time series decomposition and sliding windows require managing extra procedures, and the dimensional increase they cause can slow inference. The proposed model is easy to apply to practical tasks in terms of inference speed and model management.
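
A minimal PyTorch sketch of the CMAE idea described above follows: separate encoders per modal share one bottleneck, and a sin/cos-encoded time-of-day condition is appended at the bottleneck and decoders so periodicity can be learned without widening the input windows. The layer sizes, four modals, and time encoding are illustrative assumptions, not the paper's architecture.

```python
# Conditional Multimodal Autoencoder sketch: shared bottleneck across
# modals, with time supplied as a condition rather than extra features.
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims: list[int], cond_dim: int = 2, z_dim: int = 8):
        super().__init__()
        # One small encoder per modal; all feed the shared bottleneck.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in modal_dims)
        self.bottleneck = nn.Linear(16 * len(modal_dims) + cond_dim, z_dim)
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(z_dim + cond_dim, 16), nn.ReLU(),
                          nn.Linear(16, d)) for d in modal_dims)

    def forward(self, modals: list[torch.Tensor], cond: torch.Tensor):
        h = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)], dim=1)
        z = self.bottleneck(torch.cat([h, cond], dim=1))
        return [dec(torch.cat([z, cond], dim=1)) for dec in self.decoders]

# Time condition as sin/cos of the hour, so 23:00 and 00:00 stay close.
hour = torch.tensor([[14.0]])
cond = torch.cat([torch.sin(2 * torch.pi * hour / 24),
                  torch.cos(2 * torch.pi * hour / 24)], dim=1)
model = CMAE(modal_dims=[4, 4, 4, 4])            # e.g. CPU/memory/disk/network
modals = [torch.randn(1, 4) for _ in range(4)]
recon = model(modals, cond)
# Anomaly score: total reconstruction error across modals.
score = sum(nn.functional.mse_loss(r, x) for r, x in zip(recon, modals))
print(float(score))
```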

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facility failures in particular are irregular because of interdependence, making their causes difficult to determine. Previous studies predicting failure in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. The causes of failures occurring inside the server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, largely because server failures do not occur singly: one failure can cause failures on other servers, or be triggered by them. In other words, while existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure on one piece of equipment is followed by a failure on another within 5 minutes, the failures are defined as occurring simultaneously. After configuring sequences of devices that failed at the same time, the 5 devices that most frequently failed together within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time series data with flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used to account for the fact that the degree of involvement in a complex failure differs for each server; this algorithm increases prediction accuracy by giving more weight to servers with a greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated as a single-server state and as a multiple-server state, and the two were compared. The second experiment improved the prediction accuracy for complex failures by optimizing each server's threshold. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that there is an effect between servers: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using these results.
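
The 5-minute co-occurrence rule used above to define complex failures can be sketched as follows; the event data and device names are illustrative, and chaining events while each falls within 5 minutes of the previous one is one plausible reading of the paper's definition.

```python
# Grouping failure events into "simultaneous" complex-failure sequences
# using a 5-minute window between consecutive events.
from datetime import datetime, timedelta

events = [  # (timestamp, device, failure type) -- illustrative data
    (datetime(2020, 3, 1, 10, 0), "srv-01", "Server Down"),
    (datetime(2020, 3, 1, 10, 3), "srv-02", "DBMS Service Down"),
    (datetime(2020, 3, 1, 10, 4), "net-01", "Network Node Down"),
    (datetime(2020, 3, 1, 11, 30), "srv-03", "Server Down"),
]

WINDOW = timedelta(minutes=5)

def group_simultaneous(events):
    """Chain chronologically sorted events into one group while each
    follows the previous one within the 5-minute window."""
    events = sorted(events)
    groups, current = [], [events[0]]
    for ev in events[1:]:
        if ev[0] - current[-1][0] <= WINDOW:
            current.append(ev)
        else:
            groups.append(current)
            current = [ev]
    groups.append(current)
    return groups

for g in group_simultaneous(events):
    print([device for _, device, _ in g])
# ['srv-01', 'srv-02', 'net-01']  then  ['srv-03']
```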