• Title/Summary/Keyword: monitoring time


Implementation of integrated monitoring system for trace and path prediction of infectious disease (전염병의 경로 추적 및 예측을 위한 통합 정보 시스템 구현)

  • Kim, Eungyeong;Lee, Seok;Byun, Young Tae;Lee, Hyuk-Jae;Lee, Taikjin
    • Journal of Internet Computing and Services / v.14 no.5 / pp.69-76 / 2013
  • The incidence of globally infectious and pathogenic diseases such as H1N1 (swine flu) and avian influenza (AI) has recently increased. An infectious disease is a pathogen-caused disease that can be passed from an infected person to a susceptible host. Pathogens of infectious diseases (bacillus, spirochaeta, rickettsia, virus, fungus, parasite, etc.) cause various symptoms such as respiratory disease, gastrointestinal disease, liver disease, and acute febrile illness. They can spread through various routes, including food, water, insects, breathing, and contact with other persons. Most countries now use mathematical models to predict and prepare for the spread of infectious diseases. In modern society, however, infectious diseases spread quickly and in complicated ways because of the rapid development of ground and underground transportation, leaving too little time to predict their course. A new system that can prevent the spread of infectious diseases by predicting their pathways therefore needs to be developed. In this study, to solve this problem, an integrated monitoring system that can track and predict the pathway of infectious diseases for real-time monitoring and control is developed. The system is based on the conventional mathematical model known as the Susceptible-Infectious-Recovered (SIR) model. The proposed model considers both inter- and intra-city modes of transportation (bus, train, car, and airplane) to express interpersonal contact, i.e., migration flow. Real data adjusted to the geographical characteristics of Korea are also employed to reflect realistic circumstances of possible disease spread in Korea. By controlling parameters in this model, we can predict where and when vaccination needs to be performed.
The simulation includes several assumptions and scenarios. Using data from Statistics Korea, five major cities assumed to have the most population migration were chosen: Seoul, Incheon (Incheon International Airport), Gangneung, Pyeongchang, and Wonju. It was assumed that the cities were connected in one network and that the infectious disease spread through the denoted transportation methods only. Daily traffic volume was obtained from the Korean Statistical Information Service (KOSIS), and the population of each city from Statistics Korea. Data on H1N1 (swine flu) were provided by the Korea Centers for Disease Control and Prevention, and air transport statistics were obtained from the Aeronautical Information Portal System. These data were adjusted to current conditions in Korea under several realistic assumptions and scenarios. Three scenarios (occurrence of H1N1 at Incheon International Airport with no vaccination in any city, with vaccination in Seoul, and with vaccination in Pyeongchang) were simulated, and the number of days for the number of infected to reach its peak and the proportion of Infectious (I) were compared. Without vaccination, the peak was reached fastest in Seoul (37 days) and slowest in Pyeongchang (43 days); the proportion of I was highest in Seoul and lowest in Pyeongchang. With vaccination in Seoul, the peak was again fastest in Seoul (37 days) and slowest in Pyeongchang (43 days); the proportion of I was highest in Gangneung and lowest in Pyeongchang.
With vaccination in Pyeongchang, the peak was fastest in Seoul (37 days) and slowest in Pyeongchang (43 days); the proportion of I was highest in Gangneung and lowest in Pyeongchang. Based on these results, it was confirmed that H1N1, upon first occurrence, spreads in proportion to the traffic volume of each city. Because the infection pathway differs with each city's traffic volume, it is possible to devise preventive measures against infectious disease by tracking and predicting its pathway through the analysis of traffic volume.
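The multi-city SIR-with-migration dynamics described above can be sketched in code. This is a minimal illustration only: the populations, transmission and recovery rates, and travel matrix below are invented placeholders, not the paper's calibrated KOSIS/Statistics Korea data.

```python
import numpy as np

cities = ["Seoul", "Incheon", "Gangneung", "Pyeongchang", "Wonju"]
n = len(cities)

# Illustrative daily travel matrix: M[i, j] is the fraction of city i's
# infectious population that travels to city j each day (placeholder).
M = np.full((n, n), 0.01)
np.fill_diagonal(M, 0.0)

beta, gamma = 0.4, 0.1                      # assumed transmission/recovery rates
N = np.array([1e7, 3e6, 2e5, 4e4, 3e5])     # placeholder populations
S, I, R = N - 1.0, np.ones(n), np.zeros(n)  # one initial case per city

for day in range(120):
    # Local SIR step (simple forward-Euler update) in each city
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S = S - new_inf
    I = I + new_inf - new_rec
    R = R + new_rec
    # Migration step: infectious travellers seed the other cities
    I = I + M.T @ I - M.sum(axis=1) * I

peak_city = cities[int(np.argmax(I))]   # city with the most infectious at day 120
```

Moving part of a city's susceptible population into the recovered compartment before the loop is one simple way to mimic the vaccination scenarios compared above.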

Establishment of Analytical Method for Dichlorprop Residues, a Plant Growth Regulator in Agricultural Commodities Using GC/ECD (GC/ECD를 이용한 농산물 중 생장조정제 dichlorprop 잔류 분석법 확립)

  • Lee, Sang-Mok;Kim, Jae-Young;Kim, Tae-Hoon;Lee, Han-Jin;Chang, Moon-Ik;Kim, Hee-Jeong;Cho, Yoon-Jae;Choi, Si-Won;Kim, Myung-Ae;Kim, MeeKyung;Rhee, Gyu-Seek;Lee, Sang-Jae
    • Korean Journal of Environmental Agriculture / v.32 no.3 / pp.214-223 / 2013
  • BACKGROUND: This study focused on the development of an analytical method for dichlorprop (DCPP; 2-(2,4-dichlorophenoxy)propionic acid), a plant growth regulator and synthetic auxin used on agricultural commodities. DCPP prevents fruit drop during the growth period. However, overdosing with DCPP causes unwanted early maturation and reduces the safe storage period, and consuming fruits that exceed the maximum residue limits could be harmful. Therefore, this study presents an analytical method for DCPP in agricultural commodities for the nation-wide pesticide residue monitoring program of the Ministry of Food and Drug Safety. METHODS AND RESULTS: We adopted an analytical method for DCPP in agricultural commodities by gas chromatography coupled with an electron capture detector (ECD). Samples were extracted and purified by the ion-associated partition method, and quantitation was done by GC/ECD with DB-17, a moderate-polarity column, under a temperature-rising condition with nitrogen as the carrier gas in split-less mode. The standard calibration curve showed linearity with a correlation coefficient (r²) > 0.9998 over the 0.1 to 2.0 mg/L concentration range. The limit of quantitation in agricultural commodities was 0.05 mg/kg, and average recoveries ranged from 78.8 to 102.2%. The repeatability of measurements, expressed as the coefficient of variation (CV %), was less than 9.5% at 0.05, 0.10, and 0.50 mg/kg. CONCLUSION(S): Our newly improved analytical method for DCPP residues in agricultural commodities was applicable to the nation-wide pesticide residue monitoring program with acceptable sensitivity, repeatability, and reproducibility.
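The linearity and recovery figures reported above follow standard calibration arithmetic, which can be sketched as follows. The peak-area values and the spike example are hypothetical; only the 0.1-2.0 mg/L range, the reported 0.05 mg/kg LOQ, and the recovery formula follow the abstract.

```python
import numpy as np

# Hypothetical calibration standards in the abstract's 0.1-2.0 mg/L range
conc = np.array([0.1, 0.2, 0.5, 1.0, 2.0])                  # mg/L
area = np.array([980.0, 1990.0, 5030.0, 10010.0, 19950.0])  # invented peak areas

slope, intercept = np.polyfit(conc, area, 1)    # linear calibration fit
r_squared = np.corrcoef(conc, area)[0, 1] ** 2  # linearity check

def recovery_pct(found_mg_kg, spiked_mg_kg):
    """Recovery (%) = amount found / amount spiked x 100."""
    return found_mg_kg / spiked_mg_kg * 100.0

# e.g. a 0.05 mg/kg spike (the reported LOQ) found at 0.047 mg/kg
spike_recovery = recovery_pct(0.047, 0.05)
```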

Blood Pressure Reactivity during Nasal Continuous Positive Airway Pressure in Obstructive Sleep Apnea Syndrome (폐쇄성(閉鎖性) 수면무호흡증(睡眠無呼吸症)에서 지속적(持續的) 상기도(上氣道) 양압술(陽壓術)이 혈력학적(血力學的) 변화(變化)에 끼치는 영향(影響))

  • Park, Doo-Heum;Jeong, Do-Un
    • Sleep Medicine and Psychophysiology / v.9 no.1 / pp.24-33 / 2002
  • Objectives: Nasal continuous positive airway pressure (CPAP) corrected elevated blood pressure (BP) in some studies of obstructive sleep apnea syndrome (OSAS) but not in others. Such inconsistent results might be due to differences in factors influencing the effects of CPAP on BP, including BP monitoring techniques, subject characteristics, and the method of CPAP application. Therefore, we evaluated the effects of one-night CPAP application on BP and heart rate (HR) reactivity using non-invasive beat-to-beat BP measurement in normotensive and hypertensive subjects with OSAS. Methods: Finger arterial BP and oxygen saturation monitoring with nocturnal polysomnography were performed on 10 OSAS patients (mean age 52.2 ± 12.4 years; 9 males, 1 female; respiratory disturbance index (RDI) > 5) for one baseline night and one CPAP night. Beat-to-beat BP and HR were measured with a finger arterial BP monitor (Finapres®), and mean arterial oxygen saturation (SaO2) was measured at 2-second intervals on both nights. We compared the mean values of cardiovascular and respiratory variables between the baseline and CPAP nights using the Wilcoxon signed-rank test. Delta (Δ) BP, defined as baseline-night BP minus CPAP-night BP, was correlated with age, body mass index (BMI), baseline-night values of BP, BP variability, HR, HR variability, mean SaO2 and RDI, and CPAP-night values of TWT% (total wake time %) and CPAP pressure, using Spearman's correlation. Results: 1) Although an increase of mean SaO2 (p<.01) and a decrease of RDI (p<.01) were observed on the CPAP night, there were no significant differences in the other variables between the two nights. 2) However, delta BP tended to increase or decrease depending on baseline-night BP values and age.
Delta systolic BP and baseline systolic BP showed a significant positive correlation (p<.01), but delta diastolic BP and baseline diastolic BP did not, except for a positive correlation in the wake stage (p<.01). Delta diastolic BP and age showed a significant negative correlation (p<.05) during all stages except REM, but delta systolic BP and age did not. 3) Delta systolic and diastolic BPs did not significantly correlate with the other factors (BMI, baseline-night BP variability, HR, HR variability, mean SaO2 and RDI, and CPAP-night TWT% and CPAP pressure), except for a positive correlation of delta diastolic pressure with CPAP-night TWT% (p<.01). Conclusions: We observed that systolic and diastolic BP tended to decrease, increase, or remain unchanged depending on baseline-night systolic BP level and age. We suggest that BP reactivity to CPAP be treated as a complex phenomenon rather than a simple undifferentiated BP decrease.
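The delta-BP analysis reported above can be illustrated with a small sketch. The ten paired readings are invented; only the definition (delta BP = baseline-night BP minus CPAP-night BP) and the use of Spearman's rank correlation follow the abstract.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho as the Pearson correlation of ranks (no ties here)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Invented paired systolic readings (mmHg) for ten subjects
baseline_sbp = np.array([118, 125, 131, 140, 152, 160, 147, 135, 128, 122])
cpap_sbp     = np.array([120, 124, 128, 133, 140, 145, 138, 130, 126, 123])
delta_sbp = baseline_sbp - cpap_sbp    # positive = systolic BP fell on CPAP

rho = spearman_rho(delta_sbp, baseline_sbp)
# A positive rho reproduces the reported pattern: the higher the baseline
# systolic BP, the larger its fall under CPAP.
```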


Risk Factor Analysis for Preventing Foodborne Illness in Restaurants and the Development of Food Safety Training Materials (레스토랑 식중독 예방을 위한 위해 요소 규명 및 위생교육 매체 개발)

  • Park, Sung-Hee;Noh, Jae-Min;Chang, Hye-Ja;Kang, Young-Jae;Kwak, Tong-Kyung
    • Korean journal of food and cookery science / v.23 no.5 / pp.589-600 / 2007
  • Recently, with the rapid expansion of franchise restaurants, ensuring food safety has become essential for restaurant growth, and the need for food safety training and related materials is in increasing demand. In this study, we identified potentially hazardous risk factors for ensuring food safety in restaurants through a food safety monitoring tool, and developed training materials for restaurant employees based on the results. The surveyed restaurants, consisting of 6 Korean restaurants and 1 Japanese restaurant, were located in Seoul. Their average check was 15,500 won, ranging from 9,000 to 23,000 won. Their total space ranged from 297.5 to 1,322.4 m², and kitchen space accounted for 4.4 to 30 percent of the total area. The mean score for food safety management performance was 57 out of 100 points, with a range of 51 to 73 points. In the risk factor analysis, the most frequently cited sanitary violations involved handwashing methods/handwashing facility supplies (7.5%), receiving activities (7.5%), checking and recording of frozen/refrigerated food temperatures (0%), holding foods off the floor (0%), washing of fruits and vegetables (42%), planning and supervising facility cleaning and maintenance programs (50%), pest control (13%), and toilets equipped/cleaned (13%). Based on these results, the main points addressed in the hygiene training of restaurant employees comprised 4 principles and 8 concepts. The four principles were personal hygiene, prevention of food contamination, time/temperature control, and refrigerator storage. The eight concepts included: (1) personal hygiene and cleanliness with proper handwashing, (2) approved food sources and receiving management, (3) refrigerator and freezer control, (4) storage management, (5) labeling, (6) prevention of food contamination, (7) cooking and reheating control, and (8) cleaning, sanitation, and plumbing control.
Finally, a hygiene training manual and poster leaflets were developed as food safety training materials for restaurant employees.

Comparison of Imposed Work of Breathing Between Pressure-Triggered and Flow-Triggered Ventilation During Mechanical Ventilation (기계환기시 압력유발법과 유량유발법 차이에 의한 부가적 호흡일의 비교)

  • Choi, Jeong-Eun;Lim, Chae-Man;Koh, Youn-Suck;Lee, Sang-Do;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong
    • Tuberculosis and Respiratory Diseases / v.44 no.3 / pp.592-600 / 1997
  • Background: The level of imposed work of breathing (WOB) is important for patient-ventilator synchrony and during weaning from mechanical ventilation. The triggering method and the sensitivity of the demand system are important determinants of the imposed WOB. The flow triggering method is available on several modern ventilators and is believed to impose less work on a patient-triggered breath than the pressure triggering method. We intended to compare the level of imposed WOB between the two triggering methods and between different sensitivity levels within each method (0.7 L/min vs 2.0 L/min for flow triggering; -1 cmH2O vs -2 cmH2O for pressure triggering). Methods: The subjects were 12 patients (64.8 ± 4.2 yrs) on mechanical ventilation with stable respiratory patterns on CPAP of 3 cmH2O. The four triggering sensitivities were applied in random order. For determination of imposed WOB, tracheal end pressure was measured through the monitoring lumen of a Hi-Lo Jet tracheal tube (Mallinckrodt, New York, USA) using a pneumotachograph/pressure transducer (CP-100 pulmonary monitor, Bicore, Irvine, CA, USA). Other respiratory mechanics data were also obtained with the CP-100 pulmonary monitor. Results: The imposed WOB was decreased by 37.5% at 0.7 L/min flow triggering compared to -2 cmH2O pressure triggering, and by 14% at -1 cmH2O compared to -2 cmH2O pressure triggering (p < 0.05 each). The PTP (pressure-time product) was also decreased significantly at 0.7 L/min flow triggering and at -1 cmH2O pressure triggering compared to -2 cmH2O pressure triggering (p < 0.05 each). The proportion of imposed WOB in total WOB ranged from 37% to 85%, with no significant changes among the different methods and sensitivities. The physiologic WOB also showed no significant changes among the different triggering methods and sensitivities.
Conclusion: To reduce the imposed WOB, flow triggering with a sensitivity of 0.7 L/min would be a better method than pressure triggering with a sensitivity of -2 cmH2O.
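The pressure-time product (PTP) compared above is the time integral of the tracheal pressure deflection below the baseline (CPAP) level during inspiratory effort. A minimal sketch with a synthetic pressure waveform (all values invented):

```python
import numpy as np

dt = 0.01                        # s, sampling interval
t = np.arange(0.0, 1.0, dt)      # one synthetic 1-s inspiratory effort
cpap = 3.0                       # cmH2O, baseline airway pressure
# Invented tracheal pressure: dips below baseline during triggering effort
p_trach = cpap - 1.5 * np.sin(np.pi * t)

# PTP: integrate the pressure deflection below baseline over time
deflection = np.clip(cpap - p_trach, 0.0, None)   # cmH2O below baseline
ptp = float(np.sum(deflection) * dt)              # cmH2O*s per breath
```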


APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1051-1054 / 1993
  • The International Atomic Energy Agency's Statute in Article III.A.5 allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable.
A containment measure is one designed to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment, taking advantage of structural characteristics such as containers, tanks or pipes. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) Provision by the State of information such as: design information describing nuclear installations; accounting reports listing nuclear material inventories, receipts and shipments; documents amplifying and clarifying reports, as applicable; and notification of international transfers of nuclear material. (b) Collection by the IAEA of information through inspection activities such as: verification of design information; examination of records and reports; measurement of nuclear material; examination of containment and surveillance measures; and follow-up activities in case of unusual findings. (c) Evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation.
For implementation strategy analysis purposes, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of access of inspectors; falsification of records, reports and other material; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur.
All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows the detection of any diversion independent of the diversion route taken. Material accountancy detects a diversion after it has actually happened; it is thus powerless to physically prevent diversion and can only deter, by the risk of early detection, any contemplation by State authorities of carrying one out. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations.
Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would require difficult and expensive collection. Above all, realistic appraisal of truth needs sound human judgment.
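The paper's central point, that safeguards indicators are fuzzy rather than crisp, can be illustrated with a minimal membership function. The quantity graded here (material unaccounted for, MUF) and its breakpoints are invented for illustration, not taken from the paper or from IAEA practice.

```python
def anomaly_membership(muf_kg, low=1.0, high=8.0):
    """Degree (0..1) to which a material balance is 'anomalous'.

    Below `low` kg of material unaccounted for (MUF) the balance is
    treated as normal (0.0); above `high` kg it is fully anomalous
    (1.0); in between, membership rises linearly. The thresholds are
    illustrative placeholders only.
    """
    if muf_kg <= low:
        return 0.0
    if muf_kg >= high:
        return 1.0
    return (muf_kg - low) / (high - low)
```

A graded value like this, rather than a yes/no flag, is what leaves room for the human judgment the abstract emphasizes.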


Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.123-139 / 2019
  • The job classification systems of major job sites differ from site to site and from the 'SQF (Sectoral Qualifications Framework)' classification proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF against the job posting information of major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data, mapping the job classification systems of major job sites to the SQF. First, major job sites were selected to obtain information on the job classification systems of the SW market. We then identified ways to collect job information from each site and collected the data through open APIs. Focusing on the relationships between the data, only job postings published on multiple job sites at the same time were kept; other job postings were deleted. Next, the job classification systems of the sites were mapped using the association rules derived from the association analysis. After completing the mapping between these market systems, we discussed the results with experts, further mapped them to the SQF, and finally proposed a new job classification system. As a result, more than 30,000 job listings were collected in XML format using the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering down to about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method.
Based on the 800 association rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consisted of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation jobs, consisted of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consisted of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology jobs, consisted of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike other existing systems. WORKNET divides jobs into three levels; JOBKOREA divides jobs into two levels, with further subdivisions as keywords; saramin likewise divides jobs into two levels, with further subdivisions in keyword form. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In this system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We also combined the rules derived from the collected market data and association analysis with experts' opinions.
Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping between occupations based on data, through association analysis between occupations, rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand as it changes over time, because data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate public recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to be transferable to other industries given its success in the SW industry.
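The frequent-pattern step described above can be sketched in miniature. The postings and the support threshold below are invented; only the idea (treating cross-posted job listings as transactions and mining co-occurring categories Apriori-style) follows the abstract.

```python
from collections import Counter
from itertools import combinations

# Invented "transactions": category labels attached to cross-posted jobs
postings = [
    {"web programming", "web planning"},
    {"web programming", "security"},
    {"web programming", "web planning", "game"},
    {"database", "system operation"},
    {"database", "system operation", "security"},
]
min_support = 0.4                 # fraction of postings (illustrative)

# Pass 1: frequent single categories
counts = Counter(c for p in postings for c in p)
frequent_1 = {c for c, n in counts.items() if n / len(postings) >= min_support}

# Pass 2: count candidate pairs built only from frequent singles (the
# Apriori pruning step), then keep pairs meeting the support threshold
pair_counts = Counter(
    pair
    for p in postings
    for pair in combinations(sorted(frequent_1 & p), 2)
)
frequent_2 = {pair: n / len(postings)
              for pair, n in pair_counts.items()
              if n / len(postings) >= min_support}
```

Frequent pairs such as these are the raw material from which cross-site mapping rules would then be derived.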

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures in IT facilities are irregular because of interdependence, and their causes are difficult to identify. Previous studies on data center failure prediction treated each server as a single, independent state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the focus was on analyzing complex failures occurring within servers. Server-external failures include power, cooling, user errors, etc.; since such failures can be prevented in the early stages of data center facility construction, various solutions are already being developed. On the other hand, the causes of failures occurring within servers are difficult to determine, and adequate prevention has not yet been achieved, particularly because server failures do not occur singly: one server's failure can cause failures in other servers, or be triggered by them. In other words, while existing studies analyzed failures on the assumption that a single server does not affect other servers, this study assumes that failures have effects between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device are sorted in chronological order, and when a failure occurs in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes is defined as occurring simultaneously. After configuring sequences of devices that failed at the same time, the 5 devices that most frequently failed simultaneously within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used in consideration of the fact that the level of multiple failures differs for each server. This algorithm increases prediction accuracy by giving more weight to a server as its impact on failure increases. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were treated as a single-server state and as a multiple-server state, and the results were compared and analyzed. The second experiment improved the prediction accuracy for the complex-server case by optimizing the threshold of each server.
In the first experiment, which compared the single-server and multi-server assumptions, the single-server model predicted no failure on three of the five servers even though failures actually occurred, while the multi-server model correctly predicted failures on all five servers. This result supports the hypothesis that servers affect one another: prediction performance was superior under the multi-server assumption. In particular, applying the Hierarchical Attention Network, which assumes that each server exerts a different degree of influence, improved the analysis, and applying a separate threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model for predicting server failures in data centers. The results are expected to help prevent failures in advance.
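The co-failure rule described above (failures on different devices within 5 minutes are treated as simultaneous) can be sketched as follows. This is a minimal illustration, not the authors' code; the event format of `(device, timestamp)` pairs and the function name are assumptions.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the paper's co-failure rule: failure events on
# different devices are grouped as "simultaneous" when each event falls
# within 5 minutes of the previous one in the chronologically sorted log.
WINDOW = timedelta(minutes=5)

def group_cofailures(events):
    """Sort (device, timestamp) events chronologically and group events
    separated by no more than the 5-minute window."""
    events = sorted(events, key=lambda e: e[1])
    groups, current = [], []
    for device, ts in events:
        if current and ts - current[-1][1] > WINDOW:
            groups.append(current)
            current = []
        current.append((device, ts))
    if current:
        groups.append(current)
    return groups

events = [
    ("srv-01", datetime(2021, 3, 1, 10, 0)),
    ("srv-02", datetime(2021, 3, 1, 10, 3)),  # within 5 min -> same episode
    ("srv-03", datetime(2021, 3, 1, 11, 0)),  # separate failure episode
]
print([len(g) for g in group_cofailures(events)])  # [2, 1]
```

Sequences built from such groups would then feed the LSTM, with the attention layer weighting each server's contribution.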

Evaluation of Cryptosporidium Disinfection by Ozone and Ultraviolet Irradiation Using Viability and Infectivity Assays (크립토스포리디움의 활성/감염성 판별법을 이용한 오존 및 자외선 소독능 평가)

  • Park Sang-Jung;Cho Min;Yoon Je-Yong;Jun Yong-Sung;Rim Yeon-Taek;Jin Ing-Nyol;Chung Hyen-Mi
    • Journal of Life Science
    • /
    • v.16 no.3 s.76
    • /
    • pp.534-539
    • /
    • 2006
  • In the ozone disinfection unit process of a piston-type batch reactor with continuous ozone analysis using a flow injection analysis (FIA) system, the CT values for 1-log inactivation of Cryptosporidium parvum by the DAPI/PI and excystation viability assays were 1.8~2.2 mg/L·min at 25°C and 9.1 mg/L·min at 5°C, respectively. At the low temperature, the ozone requirement rises 4~5 times to achieve the same level of disinfection as at room temperature. In a 40 L pilot plant with continuous flow and a constant 5-minute retention time, disinfection effects were evaluated simultaneously by the excystation, DAPI/PI, and cell infection methods. At a CT value of about 8 mg/L·min, about 0.2-log inactivation of Cryptosporidium was estimated by the DAPI/PI and excystation assays, and 1.2-log inactivation by the cell infectivity assay. The difference between the DAPI/PI and excystation assays in the CT values estimated for ozone was not significant in either the piston-reactor or the pilot-reactor experiment. In the pilot study, however, there was a significant difference between the viability assays, which are based on intact cell wall structure and function, and the infectivity assay, which is based on the development of oocysts into sporozoites and merozoites. The developmental stage appears to be more sensitive to ozone oxidation than the cell wall intactness of oocysts. The difference in CT values estimated by the viability assays between the two studies may come partly from underestimation of the residual ozone concentration due to manual monitoring in the pilot study, or from the difference in reactor scale (50 mL vs. 40 L) and type (batch vs. continuous). In the UV irradiation process, the adequate It value (UV dose) to achieve 1-log and 2-log inactivation of Cryptosporidium at 25°C by DAPI/PI was 25 mWs/cm² and 50 mWs/cm², respectively.
At 5°C, 40 mWs/cm² was required for 1-log inactivation and 80 mWs/cm² for 2-log inactivation. The roughly 60% increase in the required It value to compensate for the 20°C drop in temperature was attributed to the low-voltage, low-output lamp emitting weaker UV rays at lower temperatures.
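The reported CT values translate directly into required contact times: CT (mg/L·min) is the product of the residual ozone concentration C (mg/L) and the contact time T (min), so T = CT / C. A back-of-envelope sketch using the abstract's figures; the 0.5 mg/L residual concentration is an assumed operating value, not from the study.

```python
# CT values taken from the abstract; the residual ozone concentration
# of 0.5 mg/L is an assumed operating value for illustration only.
CT_1LOG_25C = 2.0  # mg/L·min, ~1-log inactivation at 25 °C (1.8~2.2 reported)
CT_1LOG_5C = 9.1   # mg/L·min, 1-log inactivation at 5 °C

def contact_time(ct_value, residual_mg_per_l):
    """Contact time (min) needed to reach a target CT at a given residual."""
    return ct_value / residual_mg_per_l

print(contact_time(CT_1LOG_25C, 0.5))  # 4.0 minutes at 25 °C
print(contact_time(CT_1LOG_5C, 0.5))   # 18.2 minutes at 5 °C
```

The 4~5-fold increase in contact time at 5°C follows directly from the ratio of the two CT values.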

Field Studies of In-situ Aerobic Cometabolism of Chlorinated Aliphatic Hydrocarbons

  • Semprini, Lewis
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference
    • /
    • 2004.04a
    • /
    • pp.3-4
    • /
    • 2004
  • Results will be presented from two field studies that evaluated the in-situ treatment of chlorinated aliphatic hydrocarbons (CAHs) using aerobic cometabolism. In the first study, a cometabolic air sparging (CAS) demonstration was conducted at McClellan Air Force Base (AFB), California, to treat CAHs in groundwater using propane as the cometabolic substrate. A propane-biostimulated zone was sparged with a propane/air mixture and a control zone was sparged with air alone. Propane-utilizers were effectively stimulated in the saturated zone with repeated intermittent sparging of propane and air. Propane delivery, however, was not uniform, with propane mainly observed in down-gradient observation wells. Trichloroethene (TCE), cis-1,2-dichloroethene (c-DCE), and dissolved oxygen (DO) concentrations decreased in proportion with propane usage, with c-DCE decreasing more rapidly than TCE. The more rapid removal of c-DCE indicated biotransformation and not just physical removal by stripping. Propane utilization rates and rates of CAH removal slowed after three to four months of repeated propane additions, which coincided with the depletion of nitrogen (as nitrate). Ammonia was then added to the propane/air mixture as a nitrogen source. After a six-month period between propane additions, rapid propane utilization was observed. Nitrate was present due to groundwater flow into the treatment zone and/or the oxidation of the previously injected ammonia. In the propane-stimulated zone, c-DCE concentrations decreased below the detection limit (1 µg/L), and TCE concentrations ranged from less than 5 µg/L to 30 µg/L, representing removals of 90 to 97%. In the air-sparged control zone, TCE was removed only at the two monitoring locations nearest the sparge well, to concentrations of 15 µg/L and 60 µg/L.
The responses indicate that stripping as well as biological treatment were responsible for the removal of contaminants in the biostimulated zone, with biostimulation enhancing removals to lower contaminant levels. As part of that study, bacterial population shifts that occurred in the groundwater during CAS and air-sparging control were evaluated by length heterogeneity polymerase chain reaction (LH-PCR) fragment analysis. The results showed that an organism(s) with a fragment size of 385 base pairs (385 bp) was positively correlated with propane removal rates. The 385 bp fragment made up as much as 83% of the total fragments in the analysis when propane removal rates peaked. A 16S rRNA clone library made from the bacteria sampled in propane-sparged groundwater included clones of a TM7 division bacterium that had a 385 bp LH-PCR fragment; no other bacterial species with this fragment size were detected. Both propane removal rates and the 385 bp LH-PCR fragment decreased as nitrate levels in the groundwater decreased. In the second study, the potential for bioaugmentation with a butane-utilizing culture was evaluated in a series of field tests conducted at the Moffett Field Air Station in California. A butane-utilizing mixed culture that was effective in transforming 1,1-dichloroethene (1,1-DCE), 1,1,1-trichloroethane (1,1,1-TCA), and 1,1-dichloroethane (1,1-DCA) was added to the saturated zone at the test site. This mixture of contaminants was evaluated because the three compounds are often present together as the result of 1,1,1-TCA contamination and the abiotic and biotic transformation of 1,1,1-TCA to 1,1-DCE and 1,1-DCA. Model simulations were performed prior to the initiation of the field study. The simulations used a transport code that included processes for in-situ cometabolism, including microbial growth and decay, substrate and oxygen utilization, and the cometabolism of dual contaminants (1,1-DCE and 1,1,1-TCA).
Based on the results of detailed kinetic studies with the culture, the cometabolic transformation kinetics incorporated mixed inhibition by butane of 1,1-DCE and 1,1,1-TCA transformation, and competitive inhibition by 1,1-DCE and 1,1,1-TCA of butane utilization. A transformation capacity term was also included in the model formulation, which results in cell loss due to contaminant transformation. Parameters for the model simulations were determined independently in kinetic studies with the butane-utilizing culture and through batch microcosm tests with groundwater and aquifer solids from the field test zone to which the butane-utilizing culture was added. In the microcosm tests, the model simulated well the repeated utilization of butane and the cometabolism of 1,1,1-TCA and 1,1-DCE, as well as the transformation of 1,1-DCE as it was repeatedly transformed at increased aqueous concentrations. Model simulations were then performed under the transport conditions of the field test to explore the effects of the bioaugmentation dose and the response of the system to biostimulation with alternating pulses of dissolved butane and oxygen in the presence of 1,1-DCE (50 µg/L) and 1,1,1-TCA (250 µg/L). A uniform aquifer bioaugmentation dose of 0.5 mg/L of cells resulted in complete utilization of the butane 2 meters downgradient of the injection well within 200 hours of bioaugmentation and butane addition. 1,1-DCE was transformed much more rapidly than 1,1,1-TCA, and efficient 1,1,1-TCA removal occurred only after the 1,1-DCE and butane concentrations decreased. The simulations demonstrated the strong inhibition of 1,1,1-TCA transformation by both 1,1-DCE and butane, and the more rapid 1,1-DCE transformation kinetics. Results of the field demonstration indicated that bioaugmentation was successfully implemented; however, it was difficult to maintain effective treatment for long periods of time (50 days or more).
The demonstration showed that the bioaugmented experimental leg effectively transformed 1,1-DCE and 1,1-DCA, and was somewhat effective in transforming 1,1,1-TCA. The indigenous experimental leg, treated in the same way as the bioaugmented leg, was much less effective in treating the contaminant mixture. The best operating performance was achieved in the bioaugmented leg, with approximately 90%, 80%, and 60% removal of 1,1-DCE, 1,1-DCA, and 1,1,1-TCA, respectively. Molecular methods were used to track and enumerate the bioaugmented culture in the test zone; Real-Time PCR analysis was used for the enumeration. The results show that higher numbers of the bioaugmented microorganisms were present in the treatment-zone groundwater when the contaminants were being effectively transformed, and a decrease in these numbers was associated with a reduction in treatment performance. The results of the field tests indicated that although bioaugmentation can be successfully implemented, competition for the growth substrate (butane) by the indigenous microorganisms likely led to the decrease in long-term performance.
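The inhibition structure described in the abstract (butane inhibits contaminant transformation, and the contaminants competitively inhibit butane utilization, with a transformation capacity causing cell loss) can be sketched with standard Monod-type rate forms. This is an illustrative sketch only: the parameter values are invented for demonstration and the function names are assumptions, not the authors' model code.

```python
# Illustrative cometabolic rate forms; all parameter values are assumed,
# not fitted. S = butane, C = a contaminant (1,1-DCE or 1,1,1-TCA),
# X = biomass; concentrations in consistent units.

def butane_rate(S, C_dce, C_tca, X, k=1.0, Ks=0.1, Ki_dce=0.05, Ki_tca=0.5):
    """Butane utilization with competitive inhibition by both contaminants:
    the contaminants inflate the effective half-saturation constant."""
    return k * X * S / (Ks * (1 + C_dce / Ki_dce + C_tca / Ki_tca) + S)

def contaminant_rate(C, S, X, kc=0.2, Kc=0.05, Ki_s=0.1):
    """Cometabolic contaminant transformation, competitively inhibited by
    the growth substrate butane (a simplification of mixed inhibition)."""
    return kc * X * C / (Kc * (1 + S / Ki_s) + C)

def cell_loss_rate(r_contaminant, Tc=0.1):
    """Cell loss from a finite transformation capacity Tc
    (mass of contaminant transformed per mass of cells inactivated)."""
    return r_contaminant / Tc

# Qualitative behavior matching the field observations: butane present
# suppresses contaminant transformation, and vice versa.
print(contaminant_rate(0.05, 0.0, 1.0) > contaminant_rate(0.05, 0.5, 1.0))  # True
print(butane_rate(1.0, 0.0, 0.0, 1.0) > butane_rate(1.0, 0.1, 0.0, 1.0))    # True
```

This structure reproduces the reported behavior that efficient 1,1,1-TCA removal occurs only after butane and 1,1-DCE concentrations fall.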
