• Title/Summary/Keyword: 분 단위 (minute units)

Search Results: 903

Environmental Pollution in Korea and Its Control (우리나라의 환경오염 현황과 그 대책)

  • 윤명조
    • Proceedings of the KOR-BRONCHOESO Conference / 1972.03a / pp.5-6 / 1972
  • Noise and air pollution, which accompany industrial development and population growth, contribute to the deterioration of the urban environment. The air pollution level of Seoul has gradually increased, and city residents suffer from severe noise pollution. If no measures are taken against pollution, the emission of pollutants into the air would reach 36.7 thousand tons per year per square kilometer in 1975, three times the 1970 level and comparable to that of the United States in 1968. The main sources of air pollution in Seoul are exhaust gas from vehicles and the combustion of bunker-C oil for heating. It is therefore urgent that an exhaust gas cleaner be installed on every car and that the fuel be replaced with oil of lower sulfur content. Transportation noise (vehicular and train noise) is the main component of the urban noise problem. The average noise level in the downtown area is about 75 dB with a maximum of 85 dB, and vehicle horns were measured at around 100 dB. Therefore, reducing the number of bus stops, strictly regulating horn use in the downtown area, and better vehicle maintenance would be effective measures against urban noise pollution. Within 200 metres of the railroad, train noise exceeds the limit specified by the Korean pollution control law. In particular, the levels of train noise and steam whistles, as measured by the ISO evaluation, can adversely affect the community activities of residents. To prevent environmental destruction, many developed countries have taken more positive action against worsening pollution, and such action is now urgently required in this country.


The Use of Radioactive $^{51}Cr$ in Measurement of Intestinal Blood Loss ($^{51}Cr$을 사용(使用)한 장관내(腸管內) 출혈량측정법(出血量測定法))

  • Lee, Mun-Ho
    • The Korean Journal of Nuclear Medicine / v.4 no.1 / pp.19-26 / 1970
  • 1. Sixteen normal healthy subjects, free from occult blood in the stool, were selected and given their own $^{51}Cr$-labeled blood via duodenal tube, and the recovery rate of radioactivity in feces and urine was measured. The average fecal recovery was 90.7 per cent ($85.7{\sim}97.7%$) of the administered radioactivity, and the average urinary excretion was 0.8 per cent ($0.5{\sim}1.5%$). 2. There was a close correlation between the amount of blood administered and the fecal recovery rate; the more blood administered, the higher the recovery rate. It was also found that administering more than 15 ml of tagged blood was suitable for measuring the radioactivity in the stools. 3. In five normal healthy subjects whose circulating erythrocytes had been tagged with $^{51}Cr$, there was little fecal excretion of radioactivity (an average of 0.9 ml of blood per day). This excretion is not related to hemorrhage, and the main route of excretion of this negligible radioactivity was postulated to be gastric juice and bile. 4. Comparing the radioactivity in the blood and feces of patients with $^{51}Cr$-labeled erythrocytes appears to be a valid way of estimating intestinal blood loss.
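The estimate described in point 4 amounts to simple proportionality: the blood-equivalent volume lost into the gut is the fecal radioactivity divided by the radioactivity concentration of circulating blood, optionally corrected by the ~90.7% average fecal recovery reported above. A minimal sketch with hypothetical count values (the function name and numbers are illustrative, not taken from the paper):

```python
def intestinal_blood_loss_ml(fecal_activity_cpm, blood_activity_cpm_per_ml,
                             fecal_recovery=0.907):
    """Estimate intestinal blood loss (ml) from 51Cr counts.

    fecal_activity_cpm        -- total radioactivity recovered in the stool
    blood_activity_cpm_per_ml -- radioactivity per ml of circulating blood
    fecal_recovery            -- average fraction of intraluminal activity
                                 recovered in feces (90.7% in this study)
    """
    return fecal_activity_cpm / blood_activity_cpm_per_ml / fecal_recovery

# Hypothetical example: 4,500 cpm in a pooled stool collection,
# 300 cpm per ml of whole blood.
print(round(intestinal_blood_loss_ml(4500, 300), 1))  # ~16.5 ml
```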


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, a proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, IT equipment fails irregularly because of interdependence, and the cause of failure is difficult to identify. Previous studies on predicting failure in data centers treated each server as a single, isolated state and did not assume that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Failures outside the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the cause of failures occurring within the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur in isolation: a failing server can cause failures in other servers, or be triggered by failures propagating from other servers. In other words, whereas existing studies analyzed failures under the assumption that a server does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures of each device are sorted in chronological order, and when a failure occurs in one device, any failure occurring in another device within 5 minutes of that time is defined as a simultaneous failure. After constructing sequences of devices that failed together, the five devices that most frequently co-occurred within those sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states, was used. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used to account for the fact that each server contributes differently to a complex failure. This algorithm improves prediction accuracy by assigning greater weight to servers whose impact on the failure is larger (sketches of the co-occurrence grouping and of the attention weighting follow after this abstract). The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data was treated once as a single-server state and once as a multi-server state, and the two settings were compared. The second experiment improved prediction accuracy for the complex-failure case by optimizing the decision threshold for each server (a thresholding sketch also follows below). In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to fail. This result supports the hypothesis that servers affect one another. Overall, the study confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the influence of each server differs, helped improve the analysis. In addition, prediction accuracy could be further improved by applying a different threshold for each server. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that predicts failures occurring in data center servers. It is expected that failures can be prevented in advance using the results of this study.
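The 5-minute co-occurrence rule described in the abstract can be expressed as a small grouping routine. The sketch below assumes a plain list of (timestamp, device) failure events; whether the window is measured from the first or the most recent failure in a group is a detail the abstract leaves open, and this sketch chains from the most recent event. Names and data are illustrative, not from the paper.

```python
from datetime import datetime, timedelta

# Hypothetical failure log: (timestamp, device) pairs, one row per failure event.
events = [
    (datetime(2020, 1, 1, 9, 0), "server-01"),
    (datetime(2020, 1, 1, 9, 3), "server-02"),   # within 5 min of server-01 -> same group
    (datetime(2020, 1, 1, 9, 20), "network-node-07"),
]

def group_simultaneous(events, window=timedelta(minutes=5)):
    """Group failures into 'simultaneous' sequences: a failure joins the current
    group if it occurs within `window` of the previous failure in that group."""
    events = sorted(events)                      # chronological order
    groups, current = [], []
    for ts, device in events:
        if current and ts - current[-1][0] > window:
            groups.append(current)               # close the current sequence
            current = []
        current.append((ts, device))
    if current:
        groups.append(current)
    return groups

for g in group_simultaneous(events):
    print([device for _, device in g])
# ['server-01', 'server-02']
# ['network-node-07']
```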
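The abstract's idea of weighting servers by their impact on a complex failure, on top of an LSTM encoder over each server's resource time series, could look roughly like the PyTorch sketch below. This is not the authors' model: the layer sizes, the single server-level attention layer, and the per-server sigmoid output are assumptions made only to illustrate the weighting mechanism.

```python
import torch
import torch.nn as nn

class ServerAttentionPredictor(nn.Module):
    """Sketch of an attention-weighted multi-server failure predictor.

    A shared LSTM encodes each server's resource time series; an attention
    layer then scores the servers, and the attention-weighted context is
    combined with each server's encoding to predict a failure probability
    per server.
    """
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)            # scores each server encoding
        self.classifier = nn.Linear(hidden * 2, 1)  # server encoding + context

    def forward(self, x):
        # x: (batch, n_servers, time_steps, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)                       # (batch, servers, hidden)
        weights = torch.softmax(self.attn(h), dim=1)      # per-server attention
        context = (weights * h).sum(dim=1, keepdim=True)  # weighted summary
        context = context.expand(-1, s, -1)
        logits = self.classifier(torch.cat([h, context], dim=-1))
        return torch.sigmoid(logits).squeeze(-1), weights.squeeze(-1)

# Hypothetical batch: 8 samples, 5 servers, 30 time steps, 12 resource metrics.
model = ServerAttentionPredictor(n_features=12)
probs, attn = model(torch.randn(8, 5, 30, 12))
print(probs.shape, attn.shape)  # torch.Size([8, 5]) torch.Size([8, 5])
```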
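The second experiment's per-server threshold tuning can be sketched as a search over candidate cutoffs, keeping the cutoff that maximizes a validation metric separately for each server. The choice of F1 as the metric, the candidate grid, and the use of scikit-learn are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.metrics import f1_score

def best_threshold_per_server(y_true, y_prob, candidates=np.linspace(0.05, 0.95, 19)):
    """Pick, for each server (column), the probability cutoff that maximizes F1
    on validation data. y_true, y_prob: arrays of shape (samples, servers)."""
    thresholds = []
    for s in range(y_true.shape[1]):
        scores = [f1_score(y_true[:, s], y_prob[:, s] >= t) for t in candidates]
        thresholds.append(candidates[int(np.argmax(scores))])
    return thresholds

# Hypothetical validation outputs for 5 servers.
rng = np.random.default_rng(0)
y_prob = rng.random((200, 5))
y_true = (y_prob + rng.normal(0, 0.2, (200, 5)) > 0.7).astype(int)
print(best_threshold_per_server(y_true, y_prob))
```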