• Title/Summary/Keyword: Information Flow Management


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for housing computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in one element of the facility, it may affect not only that equipment but also other connected equipment, causing enormous damage. IT facilities in particular fail irregularly because of their interdependence, which makes root causes difficult to identify. Previous studies on failure prediction in data centers treated each server as an isolated unit, predicting failures without assuming interaction among devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions have already been developed. The causes of failures occurring inside a server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved. This is because server failures do not occur in isolation: a failing server can trigger failures in other servers, or be triggered by them. In other words, whereas existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers.
To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures for each device are sorted in chronological order, and when a failure in one device is followed by a failure in another device within 5 minutes, the failures are defined as simultaneous. After constructing sequences of devices that failed simultaneously, 5 devices that frequently failed together were selected from the sequences, and cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states, was used. In addition, unlike the single-server case, the Hierarchical Attention Network model structure was adopted to reflect the fact that each server contributes differently to a complex failure; this architecture improves prediction accuracy by assigning greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was analyzed both under a single-server assumption and under a multiple-server assumption, and the results were compared. The second experiment improved prediction accuracy for complex failures by optimizing the decision threshold for each server.
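The 5-minute simultaneity rule described above can be sketched as follows. This is a minimal illustration with made-up device names and timestamps, not the study's actual failure log or code:

```python
from datetime import datetime, timedelta

# Hypothetical failure log: (device_id, failure_time) pairs, for illustration.
events = [
    ("server_A", datetime(2020, 1, 1, 10, 0)),
    ("server_B", datetime(2020, 1, 1, 10, 3)),
    ("server_C", datetime(2020, 1, 1, 10, 20)),
    ("server_A", datetime(2020, 1, 1, 10, 22)),
]

def group_simultaneous(events, window=timedelta(minutes=5)):
    """Sort failures chronologically and group failures occurring within
    `window` of the group's first failure as 'simultaneous'."""
    events = sorted(events, key=lambda e: e[1])
    groups, current = [], []
    for device, t in events:
        if current and t - current[0][1] > window:
            groups.append(current)
            current = []
        current.append((device, t))
    if current:
        groups.append(current)
    # Keep only the device sequences (complex-failure candidates).
    return [[d for d, _ in g] for g in groups]

print(group_simultaneous(events))
# → [['server_A', 'server_B'], ['server_C', 'server_A']]
```

Sequences produced this way can then be mined for the devices that co-occur most often, as the study does before visualization.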
In the first experiment, which compared the single-server and multiple-server assumptions, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server model correctly predicted failures on all five servers. This result supports the hypothesis that servers affect one another: prediction performance was superior under the multiple-server assumption. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and presents a model for predicting failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
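The server-level attention weighting that the abstract attributes to the Hierarchical Attention Network can be sketched as below. All shapes and parameters here are stand-ins (random values in place of a trained model's per-server LSTM outputs and learned weights), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-server hidden states: e.g. the final LSTM output for each
# of 5 servers, with hidden size 8.
H = rng.normal(size=(5, 8))

# Server-level attention (simplified HAN): project each server's state,
# score it against a context vector, softmax the scores, and take a
# weighted sum so higher-impact servers contribute more.
W = rng.normal(size=(8, 8))   # projection weights (would be learned)
b = np.zeros(8)               # projection bias
u = rng.normal(size=8)        # context vector (would be learned)

U = np.tanh(H @ W + b)        # (5, 8) projected states
scores = U @ u                # (5,) one relevance score per server
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()          # attention weights, sum to 1
summary = alpha @ H           # (8,) weighted failure representation

print(alpha.round(3))
```

In the full model, `summary` would feed a classifier whose per-server decision thresholds are tuned separately, as in the paper's second experiment.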

Evaluation of SWAT Applicability to Simulation of Sediment Behaviors at the Imha-Dam Watershed (임하댐 유역의 유사 거동 모의를 위한 SWAT 모델의 적용성 평가)

  • Park, Younshik;Kim, Jonggun;Park, Joonho;Jeon, Ji-Hong;Choi, Dong Hyuk;Kim, Taedong;Choi, Joongdae;Ahn, Jaehun;Kim, Ki-sung;Lim, Kyoung Jae
    • Journal of Korean Society on Water Environment
    • /
    • v.23 no.4
    • /
    • pp.467-473
    • /
    • 2007
  • Although the dominant land use at the Imha-dam watershed is forest, soil erosion has been increasing because of intensive agricultural activities in the fields located along the stream, where water access is easy and the topography relatively favorable. The steep topography of the Imha-dam watershed also contributes to increased soil erosion and sediment loads. At the Imha-dam watershed, outflow increased sharply due to the typhoons Rusa and Maemi in 2002 and 2003, respectively. In this study, the Soil and Water Assessment Tool (SWAT) model was evaluated for simulating flow and sediment behaviors under long-term temporal and spatial conditions. Precipitation data from eight observatories, including those at Ilwol and Subi, were used; there was no significant difference in monthly rainfall among the eight locations, although rainfall amounts and patterns differed slightly in 2003 and 2004. The 1:5,000 topographical map from the National Geographic Information Institute was used to define watershed boundaries, and the 1:25,000 detailed soil map from the National Institute of Highland Agriculture and the land cover data from the Korea Institute of Water and Environment were used to simulate hydrologic response, soil erosion, and sediment behaviors. To evaluate the hydrologic component of the SWAT model, calibration was performed for the period from Jan. 2002 to Dec. 2003, and validation from Jan. 2004 to Apr. 2005. The $R^2$ and EI values were 0.93 and 0.90 for the calibration period, and 0.73 and 0.68 for validation, respectively. With the calibrated parameters, the $R^2$ and EI values for sediment yield were 0.89 and 0.84, respectively. Comparison with measured data showed that the SWAT model is applicable to simulating hydrology and sediment behaviors at the Imha-dam watershed.
With proper representation of Best Management Practices (BMPs), the SWAT model can be used for pre-evaluation of cost-effective and sustainable soil erosion BMPs to solve sediment issues at the Imha-dam watershed. In Korea, the Universal Soil Loss Equation (USLE) has been used to estimate soil loss for over 30 years; however, USLE is a field-scale model with limitations when applied at the watershed scale. Moreover, soil loss varies temporally and spatially, as at the Imha-dam watershed. Thus, the SWAT model, which can simulate hydrologic and soil erosion/sediment behaviors temporally and spatially at the watershed scale, should be used to address the muddy water issues at the Imha-dam watershed and to establish more effective muddy water reduction countermeasures.
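The $R^2$ and EI statistics used above to judge calibration and validation can be computed as follows. EI is taken here to be the Nash-Sutcliffe efficiency, the usual efficiency index for SWAT evaluation; the observed/simulated series are made-up illustrative values, not the paper's data:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (the EI statistic):
    1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    Equals 1 for a perfect fit; can be negative for a poor one."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination: squared Pearson correlation."""
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

# Hypothetical monthly flow values, for illustration only.
obs = [12.0, 30.5, 80.2, 45.1, 20.3, 15.7]
sim = [10.5, 28.0, 85.0, 42.0, 22.1, 14.0]
print(round(r_squared(obs, sim), 3), round(nse(obs, sim), 3))
```

Unlike $R^2$, NSE penalizes systematic bias, which is why the two are typically reported together for hydrologic models.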

A Study on the Use of GIS-based Time Series Spatial Data for Streamflow Depletion Assessment (하천 건천화 평가를 위한 GIS 기반의 시계열 공간자료 활용에 관한 연구)

  • YOO, Jae-Hyun;KIM, Kye-Hyun;PARK, Yong-Gil;LEE, Gi-Hun;KIM, Seong-Joon;JUNG, Chung-Gil
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.50-63
    • /
    • 2018
  • Rapid urbanization has distorted the natural hydrological cycle. This change in the hydrological cycle structure is causing streamflow depletion and altering existing patterns of water resource use. To manage this phenomenon, a streamflow depletion impact assessment technology that can forecast depletion is required. GIS-based spatial data are indispensable as the fundamental input for such technology, but related research is scarce. Therefore, this study examined the use of GIS-based time-series spatial data for streamflow depletion assessment. GIS data spanning decades of change on a national scale were constructed for six streamflow depletion impact factors (weather, soil depth, forest density, road network, groundwater usage, and land use), and the data were used as the basic input for a continuous hydrologic model. Focusing on these impact factors, the causes of streamflow depletion were analyzed over the time series. Then, using DrySAT, a distributed continuous hydrologic model, the annual runoff for each streamflow depletion impact factor was computed and a depletion assessment was conducted. As a result, the baseline annual runoff was 977.9 mm under the given weather conditions without considering other factors. When individually considering the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, annual runoff was 1,003.5 mm, 942.1 mm, 961.9 mm, 915.5 mm, and 1,003.7 mm, respectively.
The results showed the major causes of streamflow depletion: lowered soil depth reduces infiltration volume, thereby decreasing streamflow; increased forest density decreases surface runoff; an expanded road network decreases sub-surface flow; increased groundwater use from indiscriminate development decreases baseflow; and increased impervious area increases surface runoff. In addition, each standard watershed was assigned a depletion grade based on the definition of streamflow depletion and the grade ranges. Considering the weather, the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, the depletion grades were 2.1, 2.2, 2.5, 2.3, 2.8, and 2.2, respectively. Among the five streamflow depletion impact factors other than rainfall, the change in groundwater usage had the greatest influence on depletion, followed by the changes in forest density, road construction, land use, and soil depth. In conclusion, it is anticipated that a national streamflow depletion assessment system to be developed in the future will provide customized depletion management and prevention plans based on assessment results for future changes in the six impact factors and the projected progress of depletion.
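The per-factor runoff figures reported above can be compared against the 977.9 mm baseline to show each factor's direction and size of impact on annual runoff. Note that this simple deviation ranking is only illustrative; the paper ranks factor influence by depletion grade, which orders some factors differently:

```python
# Annual runoff (mm) from the abstract: baseline (weather only) and the
# runoff simulated when each depletion factor's change is added individually.
baseline = 977.9
runoff = {
    "soil depth decrease": 1003.5,
    "forest density increase": 942.1,
    "road development": 961.9,
    "groundwater usage": 915.5,
    "land-use change": 1003.7,
}

# Deviation from baseline: negative means less water reaching the stream.
deltas = {k: round(v - baseline, 1) for k, v in runoff.items()}
ranked = sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)
for factor, d in ranked:
    print(f"{factor:28s} {d:+.1f} mm")
```

Groundwater usage shows the largest deviation (-62.4 mm), consistent with the abstract's finding that it has the greatest influence on depletion.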