• Title/Summary/Keyword: device performance


Assembly and Testing of a Visible and Near-infrared Spectrometer with a Shack-Hartmann Wavefront Sensor (샤크-하트만 센서를 이용한 가시광 및 근적외선 분광기 조립 및 평가)

  • Hwang, Sung Lyoung;Lee, Jun Ho;Jeong, Do Hwan;Hong, Jin Suk;Kim, Young Soo;Kim, Yeon Soo;Kim, Hyun Sook
    • Korean Journal of Optics and Photonics
    • /
    • v.28 no.3
    • /
    • pp.108-115
    • /
    • 2017
  • We report the assembly procedure and performance evaluation of a visible and near-infrared spectrometer in the wavelength region of 400-900 nm, which is later to be combined with fore-optics (a telescope) to form an f/2.5 imaging spectrometer with a field of view of ±7.68°. The detector at the final image plane is a 640 × 480 charge-coupled device with a 24 μm pixel size. The spectrometer is in an Offner relay configuration consisting of two concentric, spherical mirrors, the secondary of which is replaced by a convex grating mirror. A double-pass test method with an interferometer is often applied in the assembly of precision optics, but was excluded from our study due to the large residual wavefront error (WFE) of the optical design, 210 nm root-mean-square (RMS) (0.35λ at 600 nm). We therefore adopted a single-pass test method with a Shack-Hartmann sensor. The final assembly was tested to have an RMS WFE increase of less than 90 nm over the entire field of view, a keystone of 0.08 pixels, a smile of 1.13 pixels, and a spectral resolution of 4.32 nm. During the procedure, we confirmed the validity of using a Shack-Hartmann wavefront sensor to monitor alignment in the assembly of an Offner-like spectrometer.
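
For reference, the 0.35λ figure quoted above is simply the design residual expressed in waves; a quick arithmetic check (not from the paper):

```python
# Quick arithmetic check: the 210 nm RMS design wavefront error expressed
# in waves at the 600 nm reference wavelength (values from the abstract).
rms_wfe_nm = 210.0
reference_wavelength_nm = 600.0
print(f"{rms_wfe_nm / reference_wavelength_nm:.2f} lambda")  # 0.35 lambda
```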

A Study on the Fabrication and Comparison of the Phantom for CT Dose Measurements Using 3D Printer (3D프린터를 이용한 CT 선량측정 팬텀 제작 및 비교에 관한 연구)

  • Yoon, Myeong-Seong;Kang, Seong-Hyeon;Hong, Soon-Min;Lee, Youngjin;Han, Dong-Koon
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.6
    • /
    • pp.737-743
    • /
    • 2018
  • The patient exposure dose test, one of the quality-control items for computed tomography, must be measured and recorded every year under Article 38 of the Medical Service Act governing the installation and operation of special medical equipment. The CT-Dose phantom used for dosimetry measures dose accurately but has the disadvantage of a high price. In this study, a replica of the existing CT-Dose phantom was therefore fabricated with a 3D printer and compared with the commercial phantom to examine its usefulness. To reproduce the conventional CT-Dose phantom, an FFF-type 3D printer with PLA filament was used; to calculate the CTDIw value, ion chambers were inserted into the central and peripheral holes, and measurements were repeated ten times for each position. For the commercial CT-Dose phantom, the peripheral dose was 30.44 ± 0.31 mGy, the central dose was 29.55 ± 0.34 mGy, and the CTDIw was 30.14 ± 0.30 mGy; for the phantom fabricated with the 3D printer, the peripheral dose was 30.59 ± 0.18 mGy, the central dose was 29.01 ± 0.04 mGy, and the CTDIw was 30.06 ± 0.13 mGy. A Mann-Whitney U-test in the SPSS statistical program showed a statistically significant difference for the central measurements, but no statistically significant difference for the peripheral or CTDIw results. In conclusion, the CT-Dose phantom made with a 3D printer showed dose measurement performance comparable to that of the existing CT-Dose phantom, confirming the possibility of producing low-cost phantoms with 3D printing.
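
The reported CTDIw values are consistent with the standard weighted-CTDI definition, CTDIw = (1/3)·CTDIcenter + (2/3)·CTDIperiphery; a minimal sketch of that check (the abstract does not show the calculation explicitly):

```python
# Standard weighted CTDI definition: CTDI_w = (1/3)*CTDI_center + (2/3)*CTDI_periphery.
# The check below reproduces the CTDI_w values reported in the abstract.

def ctdi_w(center_mgy: float, periphery_mgy: float) -> float:
    """Weighted CTDI (mGy) from central and peripheral chamber measurements."""
    return center_mgy / 3.0 + 2.0 * periphery_mgy / 3.0

print(round(ctdi_w(29.55, 30.44), 2))  # commercial phantom  -> 30.14 mGy
print(round(ctdi_w(29.01, 30.59), 2))  # 3D-printed phantom  -> 30.06 mGy
```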

Development of Electret to Improve Output and Stability of Triboelectric Nanogenerator (마찰대전 나노발전기의 출력 및 안정성 향상을 위한 일렉트렛 개발)

  • Kam, Dongik;Jang, Sunmin;Yun, Yeongcheol;Bae, Hongeun;Lee, Youngjin;Ra, Yoonsang;Cho, Sumin;Seo, Kyoung Duck;Cha, Kyoung Je;Choi, Dongwhi
    • Korean Chemical Engineering Research
    • /
    • v.60 no.1
    • /
    • pp.93-99
    • /
    • 2022
  • With the rapid development of ultra-small and wearable device technology, a continuous electricity supply free of spatiotemporal limitations is required to drive electronic devices. Accordingly, the triboelectric nanogenerator (TENG), which utilizes the static electricity generated by the contact and separation of two different materials, is being used, thanks to its simple operating principle, as a means of effectively harvesting various types of dispersed energy without complex processes or designs. However, to apply the TENG in real life, its electrical output must be increased. In addition, generating that output stably, not merely increasing it, is a prerequisite for the commercialization of TENG. In this study, we propose a method that not only improves the output of a TENG but also maintains the improved output stably. This was achieved by using the contact layer, one of the components of the TENG, as an electret for improved output and stability. The electret was manufactured by sequentially performing corona charging, thermal annealing, and corona charging on a fluorinated ethylene propylene (FEP) film. Charges artificially injected by corona charging move into deep traps during the thermal annealing, so the fabricated electret minimizes charge escape when used in the TENG. The output performance of the manufactured electret was verified by measuring the voltage output of the TENG in vertical contact-separation mode, and the corona-charged electret showed an output voltage 12 times higher than that of the pristine FEP film. The time and humidity stability of the electret was confirmed by measuring the output voltage of the TENG after exposing the electret to a typical external environment and to an extremely humid environment. In addition, applicability to real life was demonstrated by operating LEDs with a clap-driven TENG (clap-TENG) incorporating the electret.

Development of flow measurement method using drones in flood season (II) - application of surface velocity doppler radar (드론을 이용한 홍수기 유량측정방법 개발(II) - 전자파표면유속계 적용)

  • Lee, Tae Hee;Kang, Jong Wan;Lee, Ki Sung;Lee, Sin Jae
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.11
    • /
    • pp.903-913
    • /
    • 2021
  • In the flood season, measurement of river discharge is subject to many restrictions in terms of budget, manpower, safety, and convenience. In particular, when heavy rain events occur due to typhoons and the like, it is difficult to measure flood discharge because of these problems. To improve on this, in this study a method was developed that can measure river discharge in the flood season simply, safely, in a short time, and with minimal manpower by combining a drone with a surface velocity Doppler radar. To overcome the mechanical limitations of drones caused by weather conditions such as wind and rainfall encountered in conventional drone-based discharge measurement, we developed a drone with IP56-grade dust and water resistance, stable flight at wind speeds of up to 36 km/h, and a payload of up to 10 kg. Further, to eliminate vibration, the most important constraint in measurement with a surface velocity Doppler radar, a damper plate was developed as the device that joins the drone and the radar. The resulting instrument, DSVM (Drone and Surface Velocity Meter using Doppler radar), combines the flight platform with the velocity meter. An error of ±3.5% was obtained when measuring river discharge with DSVM at the Geumsan-gun (Hwangpunggyo) station on the Bonghwang stream (the first tributary of the Geum River). In addition, when calculating the mean velocity from the measured surface velocity, ADCP measurements were performed simultaneously to improve accuracy, and a mean velocity conversion factor of 0.92 was calculated by comparing the mean velocities. In this study, the discharge measured by combining a drone and a surface velocity meter was compared with discharges measured using ADCP and floats, confirming the applicability and utility of DSVM.
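
As a rough illustration of how such surface velocities feed into a discharge estimate, the sketch below applies the velocity-area method with the 0.92 conversion factor reported above; the cross-section geometry and function name are hypothetical, not taken from the paper:

```python
# Minimal sketch (not the authors' code): converting radar-measured surface
# velocities into a discharge estimate with the velocity-area method.
# The 0.92 factor is the mean-velocity conversion factor from the abstract;
# the section geometry below is hypothetical.

def discharge(widths_m, depths_m, surface_vels_mps, k=0.92):
    """Q = sum over verticals of (width * depth * k * surface velocity)."""
    return sum(w * d * k * vs for w, d, vs in zip(widths_m, depths_m, surface_vels_mps))

q = discharge(widths_m=[10.0, 12.0, 10.0],
              depths_m=[1.2, 1.8, 1.1],
              surface_vels_mps=[1.5, 2.1, 1.4])
print(f"Q = {q:.1f} m^3/s")
```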

NOx Reduction Characteristics of Ship Power Generator Engine SCR Catalysts according to Cell Density Difference (선박 발전기관용 SCR 촉매의 셀 밀도차에 따른 NOx 저감 특성)

  • Kyung-Sun Lim;Myeong-Hwan Im
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.28 no.7
    • /
    • pp.1209-1215
    • /
    • 2022
  • Selective catalytic reduction (SCR) is known as a very efficient method of reducing nitrogen oxides (NOx): the catalyst reduces nitrogen oxides (NOx) to nitrogen (N2) and water vapor (H2O). The catalyst, one of the factors determining the performance of the NOx reduction method, is known to become more efficient as its cell density increases. In this study, the NOx reduction characteristics under various engine loads were investigated. A 100 CPSI (60-cell) catalyst was studied with a laboratory-scale rig that simulates the exhaust gas conditions of the power generation engine installed in the training ship SEGERO. The effect of the 100 CPSI (60-cell) cell density was compared with that of a 25.8 CPSI (30-cell) catalyst for which NOx reduction data were already available from the SCR manufacturer. The experimental catalysts were of honeycomb type with the same V2O5-WO3-TiO2 composition and materials; only the cell density was changed. As a result, the NOx concentration reduction rate of the 100 CPSI (60-cell) catalyst was 88.5%, and its IMO specific NOx emission was 0.99 g/kWh, satisfying the IMO Tier III NOx emission requirement. The NOx concentration reduction rate of the 25.8 CPSI (30-cell) catalyst was 78%, and its IMO specific NOx emission was 2.00 g/kWh. Comparing the two catalysts, the NOx concentration reduction rate of the 100 CPSI (60-cell) catalyst was 10.5 percentage points higher and its IMO specific NOx emission was about half that of the 25.8 CPSI (30-cell) catalyst. Therefore, an efficient NOx reduction effect can be expected by increasing the cell density of the catalyst. In other words, the reduced catalyst volume can be expected to lower production cost and to allow more efficient arrangement of the engine room and cargo space.
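
For orientation, the sketch below shows how a NOx concentration reduction rate and a single-point specific emission in g/kWh are computed; the inlet/outlet concentrations and engine data are hypothetical, and the actual IMO NOx Technical Code value is a weighted average over several test-cycle load points rather than this single-point illustration:

```python
# Illustrative calculation (not from the paper): NOx concentration reduction
# rate across the SCR catalyst and a simple single-point specific emission.
# All inlet/outlet concentrations and engine data below are hypothetical.

def reduction_rate(nox_in_ppm: float, nox_out_ppm: float) -> float:
    """Percentage NOx reduction across the catalyst."""
    return 100.0 * (nox_in_ppm - nox_out_ppm) / nox_in_ppm

def specific_emission(nox_mass_flow_g_per_h: float, power_kw: float) -> float:
    """Specific NOx emission in g/kWh at a single operating point."""
    return nox_mass_flow_g_per_h / power_kw

print(reduction_rate(1000.0, 115.0))   # 88.5 %, matching the 100 CPSI result
print(specific_emission(495.0, 500.0)) # 0.99 g/kWh, below the Tier III limit
```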

Verification of Multi-point Displacement Response Measurement Algorithm Using Image Processing Technique (영상처리기법을 이용한 다중 변위응답 측정 알고리즘의 검증)

  • Kim, Sung-Wan;Kim, Nam-Sik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.3A
    • /
    • pp.297-307
    • /
    • 2010
  • Recently, maintenance engineering and technology for civil and building structures have begun to draw considerable attention, and the number of structures whose structural safety must be evaluated because of deterioration and performance degradation is rapidly increasing. When stiffness decreases because of deterioration or member cracks, the dynamic characteristics of a structure change, so it is important to evaluate the damaged areas and the extent of damage correctly by analyzing the dynamic characteristics obtained from the actual behavior of the structure. In general, the typical instruments used for structural monitoring are dynamic instruments. With existing dynamic instruments it is difficult to obtain reliable data when the cable connecting the sensors to the device is long, and the one-to-one connection between each sensor and the instrument is uneconomical. Therefore, a method that measures vibration from a distance without attaching sensors is required. Representative non-contact methods for measuring structural vibration are the laser Doppler effect, GPS, and image processing. The laser Doppler method shows relatively high accuracy but is uneconomical, while the GPS method requires expensive equipment, has its own signal error, and is limited in sampling rate. In contrast, the image-based method is simple and economical, and is well suited to obtaining the vibration and dynamic characteristics of inaccessible structures. Camera image signals, rather than contact sensors, have therefore been used by many researchers recently. However, the existing approach, which records a single target point attached to a structure and measures its vibration by image processing, is relatively limited in what it can measure. Therefore, this study conducted shaking table tests and a field load test to verify the validity of a method that can measure multi-point displacement responses of structures using an image processing technique.
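
One common way to realize such image-based displacement measurement is normalized cross-correlation template matching; the sketch below (OpenCV, with a hypothetical file name and target coordinates) illustrates the idea for a single target and is not necessarily the authors' algorithm. Repeating it for several target patches gives multi-point displacement responses.

```python
# Minimal sketch of template-matching displacement measurement with OpenCV.
# "video.avi" and the initial target patch coordinates are hypothetical.
import cv2

cap = cv2.VideoCapture("video.avi")
ok, first = cap.read()
x, y, w, h = 100, 200, 40, 40                 # initial target patch (hypothetical)
template = first[y:y + h, x:x + w]

displacements = []                            # per-frame pixel displacement of the target
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(res)    # best-match top-left corner
    displacements.append((bx - x, by - y))

cap.release()
# Multiply by a mm-per-pixel scale factor (from a known target size) to get displacement in mm.
```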

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Up to now, mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver these services, reduced latency and high reliability are critical for real-time applications, on top of high data rates. Thus, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km². In particular, for intelligent traffic control systems and services based on various vehicle-centric Vehicle-to-Everything (V2X) applications, reduced delay and reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. Basically, SDN, an architecture that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in an emergency during autonomous driving. In such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since centralized SDN structures have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, RTD is not a significant factor because it is fast enough, with less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. In particular, in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in which delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50-250 m and vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes the delay.
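
A back-of-the-envelope calculation, using only the cell radii and vehicle speeds assumed above, shows how short the cell dwell time can become and hence how often vehicle information must be refreshed; this is illustrative arithmetic, not the paper's simulator:

```python
# Illustrative only: time a vehicle spends inside a 5G small cell for the cell
# radii (50-250 m) and speeds (30-200 km/h) assumed in the abstract. This bounds
# how frequently the SDN must refresh per-vehicle information.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time to cross a cell along its diameter at constant speed."""
    speed_mps = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_mps

print(f"{dwell_time_s(50, 200):.1f} s")   # ~1.8 s: worst case (small cell, fast vehicle)
print(f"{dwell_time_s(250, 30):.0f} s")   # ~60 s: best case (large cell, slow vehicle)
```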

The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.1-33
    • /
    • 2019
  • Small and medium-sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs need a great deal of information for new product development in order to grow and survive sustainably, and they seek networking to overcome resource constraints, but they face limitations because of their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To address these problems, government-funded research institutes play an important role in resolving the information asymmetry faced by SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to enhance their innovation capacity, and to determine how that infrastructure contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information support infrastructure. To achieve this purpose, we first classified the characteristics of SMEs found to utilize the information support infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretative limits of our results. Next, we performed mediator and moderator effect analyses on multiple variables to analyze the process through which use of the information support infrastructure improves external networking capabilities and, in turn, enhances product competitiveness. This analysis helps identify the key factors to focus on when offering indirect support to SMEs through the information support infrastructure, which in turn helps manage research on SME support policies implemented by government-funded institutions more efficiently. The results of this study are as follows. First, SMEs that used the information support infrastructure differed significantly in size from domestic R&D SMEs, but no significant difference appeared in a cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs using the information support infrastructure are larger and include a relatively higher proportion of companies that transact extensively with large companies, compared with the general population of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions.
Secondly, among the SMEs that use the information support infrastructure, we found that improved external networking capabilities contributed to enhancing product competitiveness; this was not an effect of direct assistance, but an indirect contribution made by increasing open marketing capabilities, in other words, an indirect-only mediator effect. In addition, the number of times a company received additional support through mentoring on information utilization had a mediated moderator effect on improving external networking capabilities and, in turn, strengthening product competitiveness. The results of this study provide several insights for policy. KISTI's information support infrastructure might suggest that marketing support is already well underway, but it intentionally supports groups that are able to achieve good performance. The government should therefore set clear priorities on whether to support underperforming companies or to help already capable companies perform better. Through this research, we have identified how public information infrastructure contributes to product competitiveness, from which several policy implications can be drawn. First, the public information support infrastructure should be able to enhance firms' ability to interact with, or to find, the experts who provide the required information. Second, if utilization of the public (online) information support infrastructure is effective, it is not necessary to provide informational mentoring, a parallel offline support, continuously; rather, offline support such as mentoring should be used as a device for monitoring abnormal symptoms. Third, SMEs need to improve their own ability to utilize the infrastructure, because the effect of enhancing networking capacity and product competitiveness through the public information support infrastructure appears in most types of companies rather than only in specific SMEs.
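
For readers unfamiliar with the indirect-only mediation described above, the sketch below illustrates the basic two-regression check with synthetic data and hypothetical variable names; it is not the authors' model or dataset:

```python
# Illustrative mediation check (hypothetical variables, synthetic data):
# infrastructure use (X) -> open marketing capability (M) -> product competitiveness (Y).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
infra_use = rng.normal(size=n)                           # X
marketing = 0.5 * infra_use + rng.normal(size=n)         # M (mediator)
competitiveness = 0.6 * marketing + rng.normal(size=n)   # Y

# Path a: X -> M
a = sm.OLS(marketing, sm.add_constant(infra_use)).fit().params[1]
# Paths c' and b: Y regressed on X and M together
fit_y = sm.OLS(competitiveness,
               sm.add_constant(np.column_stack([infra_use, marketing]))).fit()
c_prime, b = fit_y.params[1], fit_y.params[2]

print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
# An "indirect-only" mediation corresponds to a significant a*b with a non-significant c'.
```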

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflowing is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also grown, such as concerns over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previously identified information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing; existing studies have focused on only a small subset of its technical characteristics, so there has been no mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, most studies have relied on user surveys to identify information privacy factors, despite the limitations of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Conducting a survey on the assumption that participants have sufficient experience with or understanding of the technologies shown in the survey may therefore not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for a Delphi study proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this study, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to support this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; the final factors were those selected by more than 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped clarify these sometimes vague issues by determining which privacy concern issues are viable based on specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to the extent to which users have privacy concerns.
A traditional questionnaire method was not selected because, for context-aware personalized services, users fundamentally lack understanding of and experience with the new technology. Regarding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensor network as the most important technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology, and of determining which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed using context information, along with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, greater attention needs to be paid to the activities that acquire context information. Following up on the evaluation of the sub-factors, additional studies would be needed on approaches to reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, oriented toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
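
The concordance analysis mentioned above is commonly implemented as Kendall's coefficient of concordance (W); since the abstract does not state the exact statistic used, the sketch below is only an illustration with hypothetical rankings:

```python
# Illustrative Kendall's W, a common statistic for Delphi concordance analysis
# (not necessarily the one used in the paper). Rankings below are hypothetical.
import numpy as np

def kendalls_w(rankings: np.ndarray) -> float:
    """rankings: (n_experts, n_items) matrix of ranks, 1 = most important (no ties)."""
    m, n = rankings.shape                      # m experts, n ranked items
    rank_sums = rankings.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

ranks = np.array([[1, 2, 3, 4, 5],
                  [1, 3, 2, 4, 5],
                  [2, 1, 3, 5, 4]])            # 3 experts ranking 5 factors
print(f"W = {kendalls_w(ranks):.2f}")          # 1.0 = perfect agreement, 0 = none
```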

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures of IT facilities are irregular because of interdependence, and it is difficult to identify their cause. Previous studies on failure prediction in data centers predicted failures by treating a single server as a single state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the focus was on analyzing complex failures occurring within the server. Server-external failures include power, cooling, user errors, and the like; since such failures can be prevented in the early stages of data center construction, various solutions are being developed. On the other hand, the cause of failures occurring in the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur in isolation: a failure may cause failures in other servers, or be triggered by something received from other servers. In other words, while existing studies analyzed failures under the assumption that a single server does not affect other servers, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device are sorted in chronological order, and when a failure in one piece of equipment is followed by a failure in another within 5 minutes, the two failures are defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently fail together within the constructed sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used in consideration of the fact that the degree of involvement in a complex failure differs for each server. This algorithm increases prediction accuracy by giving larger weights to servers whose impact on the failure is larger. The study began with defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were analyzed under both a single-server assumption and a multiple-server assumption, and the results were compared. The second experiment improved the prediction accuracy for the complex-failure case by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred; under the multiple-server assumption, all five servers were correctly predicted to have failed. The experiment thus supports the hypothesis that servers affect one another. As a result of this study, it was confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, helped improve the analysis, and applying a different threshold for each server further improved the prediction accuracy. This study showed that failures whose cause is difficult to determine can be predicted from historical data, and presented a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
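
As an illustration of the architecture described above, the sketch below combines a shared per-server LSTM encoder with an attention layer that weights servers by their estimated contribution to a failure; all dimensions, names, and the single shared encoder are hypothetical choices, not the authors' implementation:

```python
# Minimal sketch in the spirit of the Hierarchical Attention Network described
# in the abstract: per-server time-series encoding with LSTM, then attention
# over servers. Dimensions and structure are hypothetical.
import torch
import torch.nn as nn

class ServerAttentionFailureModel(nn.Module):
    def __init__(self, n_features=8, hidden=32, n_servers=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)  # shared server encoder
        self.attn = nn.Linear(hidden, 1)                           # scores each server
        self.head = nn.Linear(hidden, n_servers)                   # per-server failure logits

    def forward(self, x):
        # x: (batch, n_servers, time, n_features) -> encode each server's time series
        b, s, t, f = x.shape
        h, _ = self.lstm(x.reshape(b * s, t, f))
        server_repr = h[:, -1, :].reshape(b, s, -1)                # last hidden state per server
        weights = torch.softmax(self.attn(server_repr), dim=1)     # attention over servers
        context = (weights * server_repr).sum(dim=1)               # weighted summary
        return self.head(context)                                  # (batch, n_servers) logits

model = ServerAttentionFailureModel()
logits = model(torch.randn(4, 5, 60, 8))   # 4 samples, 5 servers, 60 time steps, 8 metrics
print(logits.shape)                        # torch.Size([4, 5])
```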