• Title/Summary/Keyword: Prevention System


Development of the Model for Total Quality Management and Cost of Quality using Activity Based Costing in the Hospital (병원의 활동기준원가를 이용한 총체적 질관리 모형 및 질비용 산출 모형 개발)

  • 조우현;전기홍;이해종;박은철;김병조;김보경;이상규
    • Health Policy and Management
    • /
    • v.11 no.2
    • /
    • pp.141-168
    • /
    • 2001
  • Healthcare service organizations can apply the cost of quality (COQ) model as a method to evaluate a service quality improvement project such as Total Quality Management (TQM). The COQ model has been used to quantify and evaluate the efficiency and effectiveness of a TQM project by weighing the costs and benefits of quality improvement interventions that provide satisfying services to customers, and to identify non-value-added processes. For estimating the cost of quality, we used activities and activity costs based on an Activity Based Costing (ABC) system. These procedures let the researchers determine whether each activity in the process is value-added and identify processes requiring improvement in the TQM project. Through this series of procedures, health care organizations can identify problems in their quality improvement programs, solve them, and improve their quality of care for their customers at optimized cost. The study subject was a quality improvement program of the department of diagnostic radiology in a hospital with n beds in a Metropolitan Statistical Area (MSA). The principal source of data for developing the COQ model was all cases of retaken shots for diagnosis during the five-month period from December 1998 to April 1999 in the department. First, to estimate the activity-based costs of the department of diagnostic radiology, the researchers analyzed the department's total health insurance claims over the one-year period from September 1998 to August 1999 to identify activities and activity costs. The COQ model in this study applied Simpson & Multher's COQ (SM's COQ) model, which divides the cost of quality into failure cost (external and internal failure cost) and evaluation/prevention cost. The researchers identified the contents of the cost of quality, defined activities and activity costs for each content following the SM's COQ model, and finally constructed the formula for estimating the activity costs of implementing the service quality improvement program. The results from the formula for estimating the cost of quality were as follows: 1. The reasons for retaking shots were classified broadly into technique, appliances, patients, quality management, non-appliances, doctors, and unclassified, and these classifications were allocated to each office doing retakes. With total retakes categorized by reason and office, the researchers identified internal and external failure costs based on these categories. 2. The researchers developed the cost of quality (COQ) model, identified activities by content for the cost of quality, assessed activity driving factors and activity contribution rates, and calculated the total cost for each content of the cost of quality, except for activity cost. 3. According to the estimation of the cost of quality for retaking shots in the department of diagnostic radiology, the failure cost was ₩35,880 and the evaluation/prevention cost was ₩72,521, about twice the failure cost. Internal and external failure costs accounted for similar proportions of the failure cost. Because the study employs a cross-sectional design, it cannot identify trends in input cost and quality improvement over time. Even with this limitation, the results of this study are meaningful. This study shows the possibility of evaluating the value of TQM processes using activities and activity costs from an ABC system, and of objectively evaluating a quality improvement program by quantitatively comparing input costs with the marginal benefits of quality improvement.
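
The cost-aggregation step lends itself to a short illustration. The following is a minimal sketch in Python, assuming hypothetical activity records (the activity names, categories, and amounts are invented for illustration, not taken from the study); it shows how activity costs roll up into failure and evaluation/prevention totals under the SM COQ split.

```python
# A minimal sketch of the SM-style COQ aggregation, assuming hypothetical
# activity records; names and costs are illustrative, not the study's data.
from collections import defaultdict

# (activity, COQ category, activity cost in KRW) -- illustrative values only
activities = [
    ("retake detected within department", "internal_failure",      12_000),
    ("retake after patient recall",       "external_failure",      10_500),
    ("film quality audit",                "evaluation_prevention", 30_000),
    ("technician training",               "evaluation_prevention", 42_000),
]

totals = defaultdict(float)
for _name, category, cost in activities:
    totals[category] += cost

failure_cost = totals["internal_failure"] + totals["external_failure"]
print(f"failure cost:               {failure_cost:,.0f} KRW")
print(f"evaluation/prevention cost: {totals['evaluation_prevention']:,.0f} KRW")
```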


Monitoring of the Sea Surface Temperature in the Saemangeum Sea Area Using the Thermal Infrared Satellite Data (열적외선 위성자료를 이용한 새만금 해역 해수표면온도 모니터링)

  • Yoon, Suk;Ryu, Joo-Hyung;Min, Jee-Eun;Ahn, Yu-Hwan;Lee, Seok;Won, Joong-Sun
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.4
    • /
    • pp.339-357
    • /
    • 2009
  • The Saemangeum Reclamation Project was launched as a national project in 1991 to reclaim a large coastal area of 401 km$^2$ by constructing a 33-km long dyke. The final dyke enclosure in April 2006 transformed the tidal flat into a lake and land. The dyke construction has abruptly changed not only the estuarine tidal system inside the dyke but also the coastal marine environment outside it. In this study, we investigated the spatial change of the SST distribution using Landsat-5/7 and NOAA data before and after the dyke's completion in the Saemangeum area. Satellite-derived SST was verified by comparison with various in situ measurements such as tower, buoy, and water sample data. The correlation coefficient was above 0.96 and the RMSE was about 1$^{\circ}C$ for all data. A total of 38 Landsat satellite images from 1985 to 2007 were analyzed to estimate the temporal and spatial change of the SST distribution from the beginning to the completion of the Saemangeum dyke's construction. The seasonal change in the detailed spatial distribution of SST was measured; however, the change during the dyke's construction was hard to estimate owing to varying environmental conditions. Monthly averaged SST derived from NOAA data from 1998 to 2007 was analyzed to complement Landsat's temporal resolution. Inside the dyke, the change of SST from summer to winter was large due to relatively high temperatures in summer. This study shows that multi-sensor thermal remote sensing is an efficient tool for monitoring the temporal and spatial distribution of SST in coastal areas.
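
The validation step can be illustrated compactly. Below is a minimal sketch assuming hypothetical satellite-derived and in situ SST arrays (the values are invented); the study itself reports a correlation coefficient above 0.96 and an RMSE of about 1°C against tower, buoy, and water sample data.

```python
# A minimal sketch of satellite SST validation against in situ data.
# The arrays are illustrative, not the study's measurements.
import numpy as np

sst_satellite = np.array([12.3, 14.1, 18.7, 22.4, 25.0])  # degC
sst_in_situ   = np.array([12.9, 13.8, 19.2, 23.1, 24.4])  # degC

r = np.corrcoef(sst_satellite, sst_in_situ)[0, 1]            # correlation coefficient
rmse = np.sqrt(np.mean((sst_satellite - sst_in_situ) ** 2))  # root mean square error
print(f"CC = {r:.3f}, RMSE = {rmse:.2f} degC")
```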

Predicting Crime Risky Area Using Machine Learning (머신러닝기반 범죄발생 위험지역 예측)

  • HEO, Sun-Young;KIM, Ju-Young;MOON, Tae-Heon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.64-80
    • /
    • 2018
  • In Korea, citizens can access only general information about crime, so it is difficult to know how much they are exposed to it. If the police can predict crime risky areas, it will be possible to cope with crime efficiently even with insufficient police and enforcement resources. However, there is no such prediction system in Korea, and related research is scarce. Against this background, the final goal of this study is to develop an automated crime prediction system. As a first step, we built a big data set consisting of local real crime information and urban physical and non-physical data. Then, we developed a crime prediction model through machine learning methods. Finally, we assumed several possible scenarios, calculated the probability of crime, and visualized the results on a map to aid public understanding. Among the factors affecting crime occurrence revealed in previous and case studies, the following data were processed into a big data format for machine learning: real crime information, weather information (temperature, rainfall, wind speed, humidity, sunshine, insolation, snowfall, cloud cover), and local information (average building coverage, average floor area ratio, average building height, number of buildings, average appraised land value, average area of residential buildings, average number of ground floors). Among supervised machine learning algorithms, the decision tree, random forest, and SVM models, which are known to be powerful and accurate in various fields, were utilized to construct the crime prediction model. As a result, the decision tree model, with the lowest RMSE, was selected as the optimal prediction model. Based on this model, several scenarios were set for theft and violence cases, which are the most frequent in the case city J, and the probability of crime was estimated on a $250{\times}250m$ grid. As a result, we found that high crime risk areas occur in three patterns in case city J. The probability of crime was divided into three classes and visualized on a map by $250{\times}250m$ grid. Finally, we developed a crime prediction model using machine learning algorithms and visualized the crime risk areas on a map; the model can be recalculated and the results re-visualized as time and urban conditions change.
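
The model-selection step can be sketched as follows, using synthetic features in place of the study's crime, weather, and local data; it fits decision tree, random forest, and SVM regressors and ranks them by RMSE, the criterion the study used to pick the decision tree model.

```python
# A minimal sketch of comparing the three models by RMSE on synthetic data;
# the features stand in for the study's weather and local information.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "decision tree": DecisionTreeRegressor(random_state=0),
    "random forest": RandomForestRegressor(random_state=0),
    "SVM":           SVR(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(f"{name}: RMSE = {rmse:.2f}")
# The model with the lowest RMSE would be kept as the prediction model.
```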

Changing Trends of Climatic Variables of Agro-Climatic Zones of Rice in South Korea (벼 작물 농업기후지대의 연대별 기후요소 변화 특성)

  • Jung, Myung-Pyo;Shim, Kyo-Moon;Kim, Yongseok;Kim, Seok-Cheol;So, Kyu-Ho
    • Journal of Climate Change Research
    • /
    • v.5 no.1
    • /
    • pp.13-19
    • /
    • 2014
  • In the past, Korea's agro-climatic zones, excluding Jeju-do, were classified into nineteen zones for rice culture using air temperature, precipitation, sunshine duration, etc. during the rice growing period. This classification has been used for selecting safe zones for rice cultivation and for countermeasures against meteorological disasters. In this study, climatic variables such as air temperature, precipitation, and sunshine duration of twenty agro-climatic zones, including Jeju-do, were compared by decade (1970s, 1980s, 1990s, and 2000s). The meteorological data were obtained from the Meteorological Information Portal Service System-Disaster Prevention of the Korea Meteorological Administration. The temperatures of the 1970s, 1980s, 1990s, and 2000s were $12.0{\pm}0.14^{\circ}C$, $11.9{\pm}0.13^{\circ}C$, $12.2{\pm}0.14^{\circ}C$, and $12.6{\pm}0.13^{\circ}C$, respectively. The precipitation of the 1970s, 1980s, 1990s, and 2000s was $1,270.3{\pm}20.05mm$, $1,343.0{\pm}26.01mm$, $1,350.6{\pm}27.13mm$, and $1,416.8{\pm}24.87mm$, respectively. The sunshine duration of the 1970s, 1980s, 1990s, and 2000s was $421.7{\pm}18.37hours$, $2,352.4{\pm}15.01hours$, $2,196.3{\pm}12.32hours$, and $2,146.8{\pm}15.37hours$, respectively. The temperature increased remarkably in the Middle-Inland Zone ($+1.2^{\circ}C$) and Eastern-Southern Zone ($+1.1^{\circ}C$). The precipitation increased most in the Taebak Highly Cold Zone ($+364mm$) and Taebak Moderately Cold Zone ($+326mm$). The sunshine duration decreased most in the Middle-Inland Zone (-995 hours). The temperature (F=2.708, df=3, p=0.046) and precipitation (F=5.037, df=3, p=0.002) increased significantly among decades, while the sunshine duration decreased significantly (F=26.181, df=3, p<0.0001). Further study will need to reclassify the agro-climatic zones for rice and to examine safe cropping seasons, rice growth and development, and cultivation management systems based on the reclassified zones.
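
The decadal significance tests can be illustrated with a one-way ANOVA sketch, assuming hypothetical zone-level temperature samples per decade (the values are invented; the study reports F=2.708, df=3, p=0.046 for temperature across the four decades).

```python
# A minimal sketch of the one-way ANOVA behind the reported F statistics;
# the samples are illustrative, not the study's zone data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# one mean temperature per agro-climatic zone, per decade (illustrative)
t_1970s = rng.normal(12.0, 0.6, 20)
t_1980s = rng.normal(11.9, 0.6, 20)
t_1990s = rng.normal(12.2, 0.6, 20)
t_2000s = rng.normal(12.6, 0.6, 20)

f_stat, p_value = f_oneway(t_1970s, t_1980s, t_1990s, t_2000s)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")  # df between groups = 4 - 1 = 3
```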

Development of a Storage Level and Capacity Monitoring and Forecasting Techniques in Yongdam Dam Basin Using High Resolution Satellite Image (고해상도 위성자료를 이용한 용담댐 유역 저수위/저수량 모니터링 및 예측 기술 개발)

  • Yoon, Sunkwon;Lee, Seongkyu;Park, Kyungwon;Jang, Sangmin;Rhee, Jinyung
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_1
    • /
    • pp.1041-1053
    • /
    • 2018
  • In this study, a real-time storage level and capacity monitoring and forecasting system for the Yongdam Dam watershed was developed using high resolution satellite images. Drought indices such as the Standardized Precipitation Index (SPI) derived from satellite data were used for storage level monitoring during drought. Moreover, to predict storage volume we used a statistical method based on Principal Component Analysis (PCA) within Singular Spectrum Analysis (SSA). The correlation coefficient between storage level and SPI(3) was high (CC=0.78), and the monitoring and predictability of storage level was diagnosed using the drought index calculated from satellite data. In the SSA-based principal component analysis, SPI(3) and each of the Reconstructed Components (RCs) were highly correlated, with CC=0.87 to 0.99. The RC data were also highly correlated with the Normalized Water Surface Level (N-W.S.L.), with CC=0.83 to 0.97. For high resolution satellite imagery, we developed a water detection algorithm applying an exponential method to the Multi-Spectral Instrument (MSI) sensor of the Sentinel-2 satellite to monitor changes in storage level. Satellite images for water surface area detection in the Yongdam Dam watershed were collected from 2016 to 2018. Based on this, we demonstrated the possibility of a real-time drought monitoring system using high resolution water surface area detection from Sentinel-2 imagery. The results of this study can be applied to estimating reservoir volume from various satellite observations, which can be used for monitoring and estimating hydrological droughts in ungauged areas.
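
The SSA step can be illustrated with a minimal sketch: a hypothetical monthly storage-level series is embedded in a Hankel trajectory matrix, decomposed by SVD, and rebuilt into Reconstructed Components (RCs) like those the study correlates with SPI(3). The window length and series are illustrative assumptions.

```python
# A minimal sketch of basic Singular Spectrum Analysis; the input series
# is synthetic, standing in for a monthly storage-level record.
import numpy as np

def ssa_reconstruct(series, window, n_components):
    n = len(series)
    k = n - window + 1
    # trajectory (Hankel) matrix: each column is a lagged window of the series
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    rcs = []
    for j in range(n_components):
        Xj = s[j] * np.outer(U[:, j], Vt[j])            # rank-1 elementary matrix
        rc = np.array([np.mean(Xj[::-1].diagonal(i - window + 1))
                       for i in range(n)])              # anti-diagonal averaging
        rcs.append(rc)
    return np.array(rcs)

t = np.arange(120)  # ten years of monthly values
storage = 50 + 10 * np.sin(2 * np.pi * t / 12) \
          + np.random.default_rng(1).normal(0, 2, 120)
rcs = ssa_reconstruct(storage, window=24, n_components=3)
print(rcs.shape)  # (3, 120): each RC is a smoothed component of the series
```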

Dental Assistant and Dental Hygienist-comparison with U.S. (치과 보조 인력과 치과위생사-미국의 제도 비교)

  • Youngyuhn Choi
    • Journal of Korean Dental Hygiene Science
    • /
    • v.6 no.2
    • /
    • pp.65-77
    • /
    • 2023
  • Background: The shortage of dental hygienists working as assistants is a great concern to dental clinics, while dental hygienists are increasingly pursuing the role of oral hygiene control and preventive treatment, which is the main role of dental hygienists in the United States. The dental hygienist and dental assistant system in the United States can serve as a reference in these discussions. Methods: Educational requirements for licensure and the scopes of work of dental hygienists and dental assistants were investigated through information provided by the American Dental Association (ADA), the American Dental Hygienists Association, the National Board Dental Hygiene Examination (NBDHE), the American Dental Assistants Association (ADAA), and the Dental Assistants National Board (DANB). Results: In the United States, each state has a different system, but in general, dental hygienists obtain licenses after completing 2~3 year associate degree programs in dental hygiene, having first acquired basic learning skills, and mainly perform tasks related to patient screening procedures, oral hygiene management, and preventive care. Dental assistants can take the license test after completing a 9~11 month training course to obtain a dental assistant certification. Additional expanded duties typically require passing state qualification tests, completing a training program, obtaining a degree, or gaining clinical experience for a certain period of time, depending on the state. Conclusion: The scope of work of dental hygienists designated by the Medical Engineer Act and its Enforcement Decree in Korea includes the work of both dental hygienists and dental assistants in the United States. If a dental assistant system like that of the United States is introduced to address the current shortage of dental assistants, institutional supplementation, such as adjusting the scope of work and expanding the role of dental hygienists in oral hygiene management and prevention, is needed, and in-depth discussion is necessary.

Analysis and Improvement Strategies for Korea's Cyber Security Systems Regulations and Policies

  • Park, Dong-Kyun;Cho, Sung-Je;Soung, Jea-Hyen
    • Korean Security Journal
    • /
    • no.18
    • /
    • pp.169-190
    • /
    • 2009
  • Today, the rapid advance of scientific technology has brought about fundamental changes to the types and levels of terrorism, while the war against more than one thousand large and small terrorist and crime organizations around the world has already begun. A method highly likely to be employed by terrorist groups using 21st-century state-of-the-art technology is cyber terrorism. In many instances, things that could only be imagined in reality can be made possible in cyberspace. An easy example would be to alter a letter in the blood type of a terrorism target in a health care data system, which could inflict harm on the target and contribute to overturning the opponent's system or regime. The CIH virus crisis which occurred on April 26, 1999 had significant implications in various respects. A virus program of just a few lines, made by a Taiwanese college student without any specific objective, ended up spreading widely throughout the Internet, damaging 30,000 PCs in Korea and causing over 2 billion won in monetary damages in repairs and data recovery. Despite such risks of cyber terrorism, a great number of Korean sites employ loose security measures. In fact, there are many cases where a company with millions of subscribers has a very lax security system. A nationwide preparation for cyber terrorism is called for. In this context, this research analyzes the current status of Korea's cyber security systems and laws from a policy perspective, and then proposes improvement strategies. This research suggests the following solutions. First, the National Cyber Security Management Act should be passed to have effect as the national cyber security management regulation. With the Act's establishment, a more efficient and proactive response to cyber security management will become possible within a nationwide cyber security framework, and the Act should define its relationship with other related laws. The newly passed National Cyber Security Management Act will eliminate inefficiencies caused by functional redundancies dispersed across individual sectors in current legislation. Second, to ensure efficient nationwide cyber security management, national cyber security standards and models should be proposed; at the same time, a national cyber security management organizational structure should be established to implement national cyber security policies at each government agency and social component. The National Cyber Security Center must serve as the comprehensive collection, analysis, and processing point for national cyber crisis related information, oversee each government agency, and build collaborative relations with the private sector. Also, a national and comprehensive response system in which both the private and public sectors participate should be set up, for advance detection and prevention of cyber crisis risks and for a consolidated and timely response using national resources in times of crisis.


A Study on the Use of GIS-based Time Series Spatial Data for Streamflow Depletion Assessment (하천 건천화 평가를 위한 GIS 기반의 시계열 공간자료 활용에 관한 연구)

  • YOO, Jae-Hyun;KIM, Kye-Hyun;PARK, Yong-Gil;LEE, Gi-Hun;KIM, Seong-Joon;JUNG, Chung-Gil
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.50-63
    • /
    • 2018
  • Rapid urbanization has led to a distortion of the natural hydrological cycle. This change in the hydrological cycle structure is causing streamflow depletion and changing existing patterns of water resource use. To manage such phenomena, a streamflow depletion impact assessment technology that can forecast depletion is required. Such technology depends on GIS-based spatial data as fundamental input, but there is a shortage of related research. Therefore, this study was conducted to examine the use of GIS-based time series spatial data for streamflow depletion assessment. For this study, GIS data covering decades of changes on a national scale were constructed for 6 streamflow depletion impact factors (weather, soil depth, forest density, road network, groundwater usage, and land use), and the data were used as the basic input for running a continuous hydrologic model. Focusing on these impact factors, the causes of streamflow depletion were analyzed over the time series. Then, using DrySAT, a distributed continuous hydrologic model, the annual runoff for each streamflow depletion impact factor was estimated and a depletion assessment was conducted. As a result, the baseline annual runoff was estimated at 977.9mm under the given weather conditions without considering other factors. When considering the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, the annual runoff was estimated at 1,003.5mm, 942.1mm, 961.9mm, 915.5mm, and 1,003.7mm, respectively. The results showed that the major causes of streamflow depletion were: lowered soil depth, which decreases infiltration volume and surface runoff, thereby decreasing streamflow; increased forest density, which decreases surface runoff; an expanded road network, which decreases sub-surface flow; increased groundwater use from indiscriminate development, which decreases baseflow; and increased impervious area, which increases surface runoff. Also, each standard watershed was assigned a depletion grade, based on the definition of streamflow depletion and the grade ranges. Considering the weather, the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, the grades of depletion were 2.1, 2.2, 2.5, 2.3, 2.8, and 2.2, respectively. Among the five streamflow depletion impact factors other than rainfall conditions, the change in groundwater usage showed the biggest influence on depletion, followed by the changes in forest density, road construction, land use, and soil depth. In conclusion, it is anticipated that a national streamflow depletion assessment system to be developed in the future would provide customized depletion management and prevention plans based on its assessment of future changes in the six streamflow depletion impact factors and the prospective progress of depletion.
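
The factor ranking follows directly from the scenario runoff values quoted above. A minimal sketch using those figures (mm of annual runoff against the 977.9mm baseline; the labels are shorthand for the five non-weather factors):

```python
# Ranking factor influence by the absolute runoff change each scenario
# induces relative to the baseline run (values from the abstract).
baseline = 977.9  # mm, weather-only run

scenarios = {
    "soil depth decrease":     1003.5,
    "forest density increase":  942.1,
    "road development":         961.9,
    "groundwater usage":        915.5,
    "land use change":         1003.7,
}
for factor, runoff in sorted(scenarios.items(),
                             key=lambda kv: abs(kv[1] - baseline),
                             reverse=True):
    print(f"{factor:25s} delta = {runoff - baseline:+7.1f} mm")
# groundwater usage shows the largest change, matching the study's conclusion
```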

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and when dealing with it we face the difficulty of considering both the characteristics of multidimensional data and the characteristics of time series data. When dealing with multidimensional data, the correlation between variables should be considered. Existing probability-based, linear, and distance-based methods degrade due to the limitation known as the curse of dimensionality. In addition, time series data are preprocessed by applying sliding window techniques and time series decomposition for autocorrelation analysis. These techniques increase the dimension of the data, so they need to be supplemented. Anomaly detection is an old research field; statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology to the field. Statistically based methods are difficult to apply when data are non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect anomalies by comparing predicted and actual values. Anomaly detection using regression analysis has the disadvantage that performance drops when the model is not solid or when the data contain noise or outliers, and it carries the restriction that training data should be free of noise and outliers. The autoencoder, an artificial neural network, is trained to produce output as similar as possible to its input. It has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy probability distribution or linearity assumptions, and it can learn in an unsupervised manner without labeled data. However, it is limited in identifying local outliers in multidimensional data for anomaly detection, and the dimension of the data increases greatly due to the characteristics of time series data. In this study, we propose a CMAE (Conditional Multimodal Autoencoder) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the different modalities share the autoencoder's bottleneck and learn correlations. In addition, a CAE (Conditional Autoencoder) was used to learn the characteristics of time series data effectively without increasing the dimension of the data. In general, conditional inputs are mainly category variables, but in this study time was used as the condition to learn periodicity. The CMAE model proposed in this paper was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance for 41 variables was confirmed for the proposed model and the comparison models. Reconstruction performance differs by variable; reconstruction works well for the Memory, Disk, and Network modalities, with small loss values, in all three autoencoder models. The Process modality showed no significant difference across the three models, and the CPU modality showed excellent performance with CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed model and the comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall was 0.9828 for CMAE, confirming that it detects almost all anomalies. The accuracy also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance improvement: techniques such as time series decomposition and sliding windows have the disadvantage of requiring extra procedures to manage, and their dimensional increase can reduce computational speed at inference. The proposed model has characteristics that make it easy to apply to practical tasks, such as its inference speed and model management.
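
A minimal sketch of a conditional multimodal autoencoder of the kind described, in PyTorch. The modality split (CPU, memory, disk, network), layer sizes, and the sine/cosine time condition are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of a conditional multimodal autoencoder: one encoder per
# modality, a shared bottleneck, and a time condition fed to both sides.
import torch
import torch.nn as nn

class CMAE(nn.Module):
    def __init__(self, modal_dims, cond_dim=2, hidden=16, bottleneck=8):
        super().__init__()
        # one encoder per modality; encodings share a single bottleneck
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modal_dims)
        self.bottleneck = nn.Linear(hidden * len(modal_dims) + cond_dim, bottleneck)
        # one decoder per modality, each also conditioned on time
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(bottleneck + cond_dim, hidden),
                          nn.ReLU(), nn.Linear(hidden, d)) for d in modal_dims)

    def forward(self, modals, cond):
        encoded = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)], dim=1)
        z = torch.relu(self.bottleneck(torch.cat([encoded, cond], dim=1)))
        return [dec(torch.cat([z, cond], dim=1)) for dec in self.decoders]

# toy batch: 41 variables split across 4 modalities; condition = time of day
dims = [10, 10, 10, 11]  # e.g. CPU / memory / disk / network (assumed split)
model = CMAE(dims)
modals = [torch.randn(32, d) for d in dims]
hour = torch.rand(32, 1) * 24
cond = torch.cat([torch.sin(2 * torch.pi * hour / 24),
                  torch.cos(2 * torch.pi * hour / 24)], dim=1)
recon = model(modals, cond)
loss = sum(nn.functional.mse_loss(r, x) for r, x in zip(recon, modals))
# anomalies would be flagged where reconstruction error exceeds a threshold
print(loss.item())
```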

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. Failures in IT facilities are irregular due to interdependence, and it is difficult to identify their cause. Previous studies predicting data center failure treated a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, user errors, etc. Since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur in isolation: a failure may cause failures in other servers or be triggered by other servers. In other words, while existing studies analyzed failures assuming a single server that does not affect others, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device were sorted in chronological order, and when a failure occurred in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes was defined as simultaneous. After constructing sequences of devices that failed at the same time, the 5 devices that most frequently failed simultaneously within the constructed sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, the Hierarchical Attention Network deep learning model structure was used, considering that each server contributes a different amount to a complex failure; this algorithm increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data were treated as a single-server state and as a multiple-server state, and the results were compared and analyzed. The second experiment improved the prediction accuracy for complex failures by optimizing each server's threshold. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multiple-server assumption, all five servers were predicted to fail. This result supports the hypothesis that servers affect one another. The study thus confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring in data center servers. It is expected that failures can be prevented in advance using the results of this study.
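
A minimal sketch of the two-level idea in PyTorch: an LSTM encodes each server's resource time series, and an attention layer weights servers by their estimated contribution before predicting failure. Dimensions, metric counts, and layer choices are illustrative assumptions, not the paper's exact model.

```python
# A minimal sketch of attention over per-server LSTM encodings for
# complex-failure prediction; all sizes are illustrative.
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)  # scores each server's encoding
        self.head = nn.Linear(hidden, 1)  # failure probability (logit)

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.lstm(x.reshape(b * s, t, f))  # encode each server
        h = h[-1].reshape(b, s, -1)                    # (batch, servers, hidden)
        w = torch.softmax(self.attn(h), dim=1)         # attention over servers
        context = (w * h).sum(dim=1)                   # weighted server summary
        return torch.sigmoid(self.head(context)), w.squeeze(-1)

model = ServerAttentionNet(n_features=8)
x = torch.randn(16, 5, 60, 8)  # 16 samples, 5 servers, 60 timesteps, 8 metrics
p_fail, server_weights = model(x)
print(p_fail.shape, server_weights.shape)  # (16, 1), (16, 5)
# server_weights indicates which servers the model deems most influential
```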