• Title/Summary/Keyword: data value prediction

Search Results: 1,091

Dst Prediction Based on Solar Wind Parameters (태양풍 매개변수를 이용한 Dst 예측)

  • Park, Yoon-Kyung; Ahn, Byung-Ho
    • Journal of Astronomy and Space Sciences / v.26 no.4 / pp.425-438 / 2009
  • We reevaluate the Burton equation (Burton et al. 1975) for predicting the Dst index, using high-quality hourly solar wind data supplied by the ACE satellite for the period from 1998 to 2006. Sixty magnetic storms with a monotonically decreasing main phase are selected. In order to determine the injection term ($Q$) and the decay time ($\tau$) of the equation, we examine the relationships between $Dst^*$ and $VB_s$, ${\Delta}Dst^*$ and $VB_s$, and ${\Delta}Dst^*$ and $Dst^*$ during the magnetic storms. For this analysis, we take into account one hour of propagation time from the ACE satellite to the magnetopause and a half hour of response time of the magnetosphere/ring current to the solar wind forcing. The injection term is found to be $Q(nT/h) = -3.56VB_s$ for $VB_s > 0.5mV/m$ and $Q(nT/h) = 0$ for $VB_s \leq 0.5mV/m$. The decay time $\tau$ (hours) is estimated as $0.060Dst^* + 16.65$ for $Dst^* > -175nT$ and 6.15 hours for $Dst^* \leq -175nT$. Based on these empirical relationships, we predict the 60 magnetic storms and find that the correlation coefficient between the observed and predicted $Dst^*$ is 0.88. To evaluate the performance of our prediction scheme, the 60 magnetic storms are predicted again using the models of Burton et al. (1975) and O'Brien & McPherron (2000a); the correlation coefficients thus obtained are 0.85 for both models. In this respect, our model is a slight improvement over the other two as far as the correlation coefficient is concerned. In particular, our model does a better job than the other two models in predicting intense magnetic storms ($Dst^* \lesssim -200nT$).
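
For readers who want to experiment with the empirical relationships quoted above, here is a minimal Python sketch that integrates the Burton-type equation $dDst^*/dt = Q - Dst^*/\tau$ using the injection and decay terms reported in the abstract. It assumes an hourly $VB_s$ series already shifted for the 1.5-hour propagation and response delay; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def predict_dst(vbs, dt=1.0, dst0=0.0):
    """Integrate dDst*/dt = Q(VBs) - Dst*/tau(Dst*) with the empirical
    Q and tau reported in the abstract.

    vbs : hourly solar wind VBs values (mV/m), delay-shifted
    dt  : time step in hours
    dst0: initial Dst* (nT)
    """
    dst = np.empty(len(vbs) + 1)
    dst[0] = dst0
    for i, v in enumerate(vbs):
        # Injection term (nT/h): active only above the 0.5 mV/m threshold
        q = -3.56 * v if v > 0.5 else 0.0
        # Decay time (hours): linear in Dst* above -175 nT, constant below
        tau = 0.060 * dst[i] + 16.65 if dst[i] > -175.0 else 6.15
        dst[i + 1] = dst[i] + (q - dst[i] / tau) * dt
    return dst[1:]
```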

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye; Kim, Min-Seung; Lee, Chan-Ho; Choi, Jung-Hwan; Lee, Jeong-Hee; Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.131-145 / 2020
  • In line with the trend of industrial innovation, IoT technology is being utilized in a variety of fields and is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. Data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-based smart systems, since they can support customized intelligent services through analysis of user environments and behavior patterns. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters in congested public transportation such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data face limitations: degraded object-detection performance under privacy constraints and abnormal conditions. The IoT device-based sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be effectively utilized to build intelligent public services for unspecified numbers of people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily; temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting real-time sensor data was established at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where the actual passenger traffic is high, and the temperature change was measured for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values per unit of time were calculated; this methodology maximizes the detected movement within the detection area. In addition, the data were scaled by a factor of 10 in order to reflect temperature differences between areas more sensitively; for example, a temperature reading of 28.5℃ was analyzed as the value 285. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which performs well for image classification, with an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas.
We verified the validity of the proposed model through performance comparison with other artificial intelligence algorithms such as the Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). As a result of the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared with MLP, LSTM, and RNN-LSTM. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, are expected to be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification of the application in other environments remains a limitation. In the future, the proposed model is expected to gain more reliability if experimental data are sufficiently collected in various environments or if the training data are extended with measurements from other sensors.
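
The abstract does not include an implementation, but the described architecture (a CNN encoding each 4×4 thermal frame, followed by an LSTM over the sequence of frame encodings) can be sketched as below; all layer sizes and names here are assumptions for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Hypothetical sketch of the CNN-LSTM idea: a small CNN encodes each
    4x4 thermal frame, and an LSTM models the sequence of encodings."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=2),   # 4x4 -> 3x3 feature maps
            nn.ReLU(),
            nn.Flatten(),                     # 8*3*3 = 72 features per frame
        )
        self.lstm = nn.LSTM(input_size=72, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # predicted passenger count

    def forward(self, x):                     # x: (batch, time, 1, 4, 4)
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, 1, 4, 4)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # predict from the last time step
```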

Study on Sea Fog Detection near the Korean Peninsula Using GMS-5 Satellite Data (GMS-5 위성자료를 이용한 한반도 주변 해무탐지 연구)

  • 윤홍주
    • Journal of the Korea Institute of Information and Communication Engineering / v.4 no.4 / pp.875-884 / 2000
  • Sea fog/stratus is very difficult to detect because of the characteristics of air-sea interaction, its locality, and the scarcity of observational data over the oceans from ships or ocean buoys. The aim of this study is to develop a new algorithm for sea fog detection using the Geostationary Meteorological Satellite-5 (GMS-5) and to suggest techniques for its continuous detection. In this study, atmospheric synoptic patterns on sea fog days in May 1999 are classified into a cold air advection type (00UTC, May 10, 1999) and a warm air advection type (00UTC, May 12, 1999), and two case days were collected in order to analyze variations of water vapor at the Osan observation station during May 9-10, 1999. To detect daytime sea fog/stratus (00UTC, May 10, 1999), a composite image, the visible accumulated histogram method, and the surface albedo method are used. The daytime characteristic values were A(min) ≥ 20% and DA < 10% when the visible accumulated histogram method was applied, and the detected sea fog region is similar between the composite image analysis and the surface albedo method. Inland stations where visibility and relative humidity were below 1 km and 80%, respectively, at 00UTC, May 10, 1999 were Poryoung for the visible accumulated histogram method, and Poryoung, Mokp'o, and Kangnung for the surface albedo method. For nighttime sea fog (18UTC, May 10, 1999), the IR accumulated histogram method and the maximum brightness temperature method are used. The maximum brightness temperature method detected sea fog better than the IR accumulated histogram method, with the characteristic value T_max < T_max_trs, where T_max_trs is the 700 hPa temperature from GDAPS (Global Data Assimilation and Prediction System). The sea fog region detected by the maximum brightness temperature method was similar to the result of the NOAA/AVHRR (National Oceanic and Atmospheric Administration/Advanced Very High Resolution Radiometer) DCD (Dual Channel Difference), although visibility and relative humidity inland do not always agree well.
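
The two threshold tests described above lend themselves to a compact sketch. Assuming gridded albedo statistics for daytime, and brightness temperature plus GDAPS 700 hPa temperature fields for nighttime, a hypothetical implementation of the decision rules might look like this (array names and the exact inequality forms are illustrative assumptions):

```python
import numpy as np

def daytime_fog_mask(albedo_min, albedo_range):
    # Daytime test from the visible accumulated histogram method:
    # minimum albedo of at least 20% and albedo spread DA under 10%.
    return (albedo_min >= 20.0) & (albedo_range < 10.0)

def nighttime_fog_mask(tb_max, t700):
    # Nighttime test: maximum IR brightness temperature below the
    # GDAPS 700 hPa temperature threshold (T_max < T_max_trs).
    return tb_max < t700
```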


Application of the Semi-Distributed Hydrological Model (TOPMODEL) for Prediction of Discharge at the Deciduous and Coniferous Forest Catchments in Gwangneung, Gyeonggi-do, Republic of Korea (경기도(京畿道) 광릉(光陵)의 활엽수림(闊葉樹林)과 침엽수림(針葉樹林) 유역(流域)의 유출량(流出量) 산정(算定)을 위한 준분포형(準分布型) 수문모형(水文模型)(TOPMODEL)의 적용(適用))

  • Kim, Kyongha; Jeong, Yongho; Park, Jaehyeon
    • Journal of Korean Society of Forest Science / v.90 no.2 / pp.197-209 / 2001
  • TOPMODEL, a semi-distributed hydrological model, is frequently applied to predict the amount of discharge, the main flow pathways, and water quality in a forested catchment, especially in a spatial dimension. TOPMODEL is a conceptual model rather than a physical one; its main concept is built on the topographic index and soil transmissivity, two components that can be used to predict the surface and subsurface contributing areas. This study was conducted to validate the applicability of TOPMODEL to small forested catchments in Korea. The experimental area is located in the Gwangneung forest operated by the Korea Forest Research Institute, Gyeonggi-do, near the Seoul metropolitan area. The two study catchments in this area have been in operation since 1979: one is a natural mature deciduous forest (22.0 ha) about 80 years old, and the other is a planted young coniferous forest (13.6 ha) about 22 years old. The data collected during two events in July 1995 and June 2000 at the mature deciduous forest and three events in July 1995, July 1999, and August 2000 at the young coniferous forest were used as the observed data sets. The topographic index was calculated using a $10m{\times}10m$ resolution raster digital elevation map (DEM); its distribution ranged from 2.6 to 11.1 at the deciduous and from 2.7 to 16.0 at the coniferous catchment. The optimization, using the forecasting efficiency as the objective function, showed that the model parameter m and the catchment mean of the surface saturated transmissivity, $lnT_0$, had high sensitivity. The optimized values of m and $lnT_0$ were 0.034 and 0.038; 8.672 and 9.475 at the deciduous catchment, and 0.031, 0.032, and 0.033; 5.969, 7.129, and 7.575 at the coniferous catchment, respectively. The forecasting efficiencies resulting from simulation with the optimized parameters were comparatively high: 0.958 and 0.909 at the deciduous and 0.825, 0.922, and 0.961 at the coniferous catchment. The observed and simulated hyeto-hydrographs showed that the times of lag to peak coincided well. Although the total runoff and peak flow of some events showed a discrepancy between the observed and simulated output, TOPMODEL could overall predict the hydrologic output with an estimation error of less than 10%. Therefore, TOPMODEL is a useful tool for predicting runoff at ungauged forested catchments in Korea.
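
The topographic index at the heart of TOPMODEL is ln(a / tan β), where a is the specific upslope contributing area per unit contour length and β is the local slope angle. A minimal sketch of its computation on a 10 m DEM grid follows; the input names are illustrative assumptions, and the upslope areas would come from a standard flow-accumulation routine.

```python
import numpy as np

def topographic_index(upslope_area, slope_rad, cell_size=10.0):
    """Compute the TOPMODEL topographic index ln(a / tan(beta)).

    upslope_area : 2-D array of upslope contributing areas (m^2)
    slope_rad    : 2-D array of local slope angles (radians)
    cell_size    : DEM resolution (m), used as the contour length
    """
    a = upslope_area / cell_size                     # specific area (m)
    tan_beta = np.tan(np.maximum(slope_rad, 1e-6))   # avoid division by zero
    return np.log(a / tan_beta)
```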


Mathematical Transformation Influencing Accuracy of Near Infrared Spectroscopy (NIRS) Calibrations for the Prediction of Chemical Composition and Fermentation Parameters in Corn Silage (수 처리 방법이 근적외선분광법을 이용한 옥수수 사일리지의 화학적 조성분 및 발효품질의 예측 정확성에 미치는 영향)

  • Park, Hyung-Soo; Kim, Ji-Hye; Choi, Ki-Choon; Kim, Hyeon-Seop
    • Journal of The Korean Society of Grassland and Forage Science / v.36 no.1 / pp.50-57 / 2016
  • This study was conducted to determine the effect of mathematical transformation on near infrared spectroscopy (NIRS) calibrations for the prediction of chemical composition and fermentation parameters in corn silage. Corn silage samples (n=407) were collected from cattle farms and feed companies in Korea between 2014 and 2015. Samples of silage were scanned in intact fresh condition at 1 nm intervals over the wavelength range of 680~2,500 nm, and the optical data were recorded as log 1/Reflectance (log 1/R). The spectral data were regressed against a range of chemical parameters using partial least squares (PLS) multivariate analysis in conjunction with several spectral math treatments to reduce the effect of extraneous noise. The optimum calibrations were selected based on the highest coefficient of determination in cross validation ($R^2{_{cv}}$) and the lowest standard error of cross validation (SECV). The results revealed that the NIRS method can predict chemical constituents accurately ($R^2{_{cv}}$ ranging from 0.77 to 0.91). The best mathematical treatment for moisture and crude protein (CP) was a first-order derivative (1, 16, 16 and 1, 4, 4, respectively), whereas the best treatment for neutral detergent fiber (NDF) and acid detergent fiber (ADF) was 2, 16, 16. The calibration models for fermentation parameters had lower predictive accuracy than those for chemical constituents; however, pH and lactic acid were predicted with considerable accuracy ($R^2{_{cv}}$ of 0.74 to 0.77), with best mathematical treatments of 1, 8, 8 and 2, 16, 16, respectively. The results demonstrate that the NIRS method can predict the chemical composition and fermentation quality of fresh corn silage as a routine analysis for feeding-value evaluation and for giving advice to farmers.
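
A calibration pipeline of the kind described (a derivative math treatment of the log 1/R spectra followed by PLS regression with cross-validation) can be sketched as below. The Savitzky-Golay settings, function name, and array layout are assumptions for illustration, not the paper's exact software.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def calibrate(spectra, y, deriv=1, window=17, n_components=10):
    """spectra: (n_samples, n_wavelengths) log(1/R) over 680-2,500 nm
    y: reference chemical values, e.g. crude protein (%)"""
    # Derivative math treatment to suppress baseline/scatter noise
    x = savgol_filter(spectra, window_length=window, polyorder=2,
                      deriv=deriv, axis=1)
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, x, y, cv=10).ravel()  # CV predictions
    secv = np.sqrt(np.mean((y - y_cv) ** 2))            # SECV
    r2cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
    pls.fit(x, y)                                       # final calibration
    return pls, r2cv, secv
```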

Development of an Early Warning System for Technology Leakage of Small and Medium Enterprises (중소기업 기술 유출에 대한 조기경보시스템 개발에 대한 연구)

  • Seo, Bong-Goon; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.143-159 / 2017
  • Due to the rapid development of IT in recent years, the leakage not only of personal information but also of the key technologies and information that companies hold is becoming an important issue. The core technology a company possesses is vital for its survival and for sustaining its competitive advantage, and there have recently been many cases of technology infringement. Technology leaks not only cause tremendous financial losses, such as falling stock prices, but also damage corporate reputation and delay corporate development. For SMEs, where core technology makes up a larger share of the enterprise than in large corporations, preparation against technology leakage is an indispensable factor in the firm's survival. As the necessity and importance of Information Security Management (ISM) emerge, enterprises need to check for and prepare against the threat of technology infringement early. Nevertheless, prior work is dominated by policy alternatives, which account for about 90% of previous studies; in terms of research method, literature analysis accounts for 76%, while empirical and statistical analysis accounts for a relatively low 16%. For this reason, management models and prediction models that prevent technology leakage and fit the characteristics of SMEs need to be studied. In this study, before the empirical analysis, we classified the influencing factors identified in prior research into technology characteristics, from the perspective of technology value, and organizational characteristics, from the perspective of technology control. A total of 12 variables related to these two factors were selected and used in the analysis. We use three years of data from the "Small and Medium Enterprise Technical Statistics Survey" conducted by the Small and Medium Business Administration. The analysis data cover 30 industries based on the KSIC 2-digit classification, and the number of companies affected by technology leakage is 415 over the 3 years. From these data, we conducted randomized sampling within the same industry (by KSIC) and the same year, preparing 1:1 matched samples of affected companies (n = 415) and unaffected companies (n = 415) for analysis. We conduct an empirical analysis to search for the factors influencing technology leakage and propose an early warning system built through data mining. Specifically, based on the SME questionnaire survey conducted by the Small and Medium Business Administration, we classified the factors that affect the technology leakage of SMEs into the two factor groups (technology characteristics and organization characteristics), and we propose a model that signals the possibility of technology infringement using the Support Vector Machine (SVM), one of the representative data mining techniques, with the factors validated through statistical analysis. Unlike previous studies, this study covers cases from various industries over several years, and an artificial intelligence model was developed in the process. In addition, since the factors are derived empirically from actual SME technology leakage, the results can suggest to policy makers which companies should be managed from the viewpoint of technology protection. Finally, the early warning model on the possibility of technology leakage proposed in this study is expected to give enterprises and the government an opportunity to prevent technology leakage in advance.
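
A hypothetical sketch of such an early-warning classifier, an SVM over the 12 technology/organization variables trained on the 1:1 matched sample, might look as follows; the kernel choice, scaling step, and variable layout are assumptions, not the paper's reported setup.

```python
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def build_early_warning_model(X, y):
    """X: (n_firms, 12) matrix of technology/organization characteristics
    y: 1 if the firm experienced technology leakage, else 0"""
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", probability=True))
    scores = cross_val_score(model, X, y, cv=5)  # generalization check
    model.fit(X, y)                              # final model for alerting
    return model, scores.mean()
```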

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis, an important technology that can distinguish low- from high-quality content through the text data of products, has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. Indeed, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Because real online reviews are openly available, such information is easy to collect, and it directly affects business: in marketing, real-world information from customers is gathered from websites rather than surveys, and depending on whether a website's posts are positive or negative, the customer response is reflected in sales. However, many reviews on a website are not clearly positive or negative and are difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. A lack of accuracy is nevertheless recognized, because sentiment scores change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the benchmark IMDB review data set. First, as comparative models, the text classification algorithms adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential attributes of the data. An RNN handles order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem, which LSTM is used to solve. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and tried to figure out how and why the models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows: a CNN can extract features for the classification automatically by applying convolution layers with massively parallel processing, whereas an LSTM is not capable of highly parallel processing.
Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem. Furthermore, when the LSTM is attached after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN but faster than the LSTM, and more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weakness of each individual model, with the added advantage of layer-wise learning through the end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
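
As an illustration of the integrated architecture argued for above (convolutional feature extraction feeding an LSTM through a pooling layer), here is a minimal sketch; the vocabulary size, embedding dimension, and other sizes are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class SentimentCNNLSTM(nn.Module):
    """Hypothetical sketch: a 1-D convolution extracts local n-gram
    features from word embeddings, pooling shortens the sequence, and
    an LSTM models the remaining order information."""
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(4)                    # shorten the sequence
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)                # positive/negative logit

    def forward(self, tokens):                         # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)         # (batch, embed, seq)
        x = self.pool(torch.relu(self.conv(x)))        # (batch, 64, seq/4)
        x, _ = self.lstm(x.transpose(1, 2))            # (batch, seq/4, hidden)
        return self.out(x[:, -1]).squeeze(-1)          # logit per review
```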

Relationship between Steady Flow and Dynamic Rheological Properties for Viscoelastic Polymer Solutions - Examination of the Cox-Merz Rule Using a Nonlinear Strain Measure - (점탄성 고분자 용액의 정상유동특성과 동적 유변학적 성질의 상관관계 - 비선형 스트레인 척도를 사용한 Cox-Merz 법칙의 검증 -)

  • 송기원; 김대성; 장갑식
    • The Korean Journal of Rheology / v.10 no.4 / pp.234-246 / 1998
  • The objective of this study is to investigate the correlation between steady shear flow (nonlinear behavior) and dynamic viscoelastic (linear behavior) properties for concentrated polymer solutions. Using an Advanced Rheometric Expansion System (ARES) and a Rheometrics Fluids Spectrometer (RFS II), the steady shear viscosity and the dynamic viscoelastic properties of concentrated poly(ethylene oxide) (PEO), polyisobutylene (PIB), and polyacrylamide (PAAm) solutions were measured over a wide range of shear rates and angular frequencies. The validity of some previously proposed relationships was checked against the experimentally measured data. In addition, the effect of solution concentration on the applicability of the Cox-Merz rule was examined by comparing the steady flow viscosity with the magnitude of the complex viscosity. Finally, the applicability of the Cox-Merz rule was discussed theoretically by introducing a nonlinear strain measure. The main results can be summarized as follows: (1) Among the previously proposed relationships considered in this study, the Cox-Merz rule, implying the equivalence between the steady flow viscosity and the magnitude of the complex viscosity, has the best validity. (2) For polymer solutions with relatively low concentration, the steady flow viscosity is higher than the complex viscosity; this relation between the two viscosities is reversed for highly concentrated polymer solutions. (3) The nonlinear strain measure decreases with increasing strain magnitude after reaching a maximum in the small-strain range; this behavior differs from the theoretical prediction, which has the shape of a damped oscillatory function. (4) The applicability of the Cox-Merz rule is influenced by the $\beta$ value, which indicates the slope of the nonlinear strain measure (namely, the degree of nonlinearity) at large shear deformations; the Cox-Merz rule shows better applicability as the $\beta$ value becomes smaller.
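
The Cox-Merz rule itself, $\eta(\dot{\gamma}) = |\eta^*(\omega)|$ evaluated at $\dot{\gamma} = \omega$, is straightforward to test numerically once both viscosity curves are measured. A hypothetical sketch, assuming ascending frequency grids and illustrative argument names:

```python
import numpy as np

def cox_merz_deviation(gamma_dot, eta_steady, omega, eta_complex_mag):
    """Ratio eta(gamma_dot) / |eta*(omega)| at matched gamma_dot = omega.

    gamma_dot, eta_steady    : steady shear rates (1/s) and viscosities (Pa.s)
    omega, eta_complex_mag   : ascending angular frequencies (rad/s) and |eta*|
    A ratio of 1 means the Cox-Merz rule holds at that shear rate.
    """
    # Interpolate |eta*| onto the steady shear-rate grid in log-log space
    eta_star = np.exp(np.interp(np.log(gamma_dot), np.log(omega),
                                np.log(eta_complex_mag)))
    return eta_steady / eta_star
```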


Prediction of the risk of skin cancer caused by UVB radiation exposure using a method of meta-analysis (Meta-analysis를 이용한 UVB 조사량에 따른 피부암 발생 위해도의 예측 연구)

  • Shin, D.C.; Lee, J.T.; Yang, J.Y.
    • Journal of Preventive Medicine and Public Health / v.31 no.1 s.60 / pp.91-103 / 1998
  • Under experimental conditions, UVB radiation, a type of ultraviolet radiation, has been shown to relate to the occurrence of skin erythema (sunburn) in humans and skin cancer in experimental animals. Cumulative exposure to UVB is also believed to be at least partly responsible for the 'aging' process of human skin, and it has been observed to alter DNA (deoxyribonucleic acid). UVB radiation is both an initiator and a promoter of non-melanoma skin cancer. Meta-analysis is a discipline that critically reviews and statistically combines the results of previous research, and a recent review of meta-analysis in the field of public health emphasized its growing importance. Using a meta-analysis, we explored more reliable dose-response relationships between UVB radiation and skin cancer incidence, and estimated skin cancer incidence using the UVB radiation dose measured in a local area of Seoul (Shinchon-dong). Studies showing dose-response relationships between UVB radiation and non-melanoma skin cancer incidence were searched and selected for the meta-analysis, and the data from 7 reported epidemiological studies in three countries (USA, England, Australia) were pooled to estimate the risk. We estimated the rate of change of skin cancer incidence from the pooled data using exponential and power models. With either model, the regression coefficients for UVB did not differ significantly by gender and age; in each analysis of variance, the residual effect on non-melanoma skin cancer incidence after removing the gender, age, and UVB effects was not significant (p > 0.01). The coefficients for UVB dose were estimated as $2.07\times10^{-6}$ by the exponential model and 2.49 by the power model. For the local area of Seoul (Shinchon-dong), the BAF (biological amplification factor) values were estimated as 1.90 and 2.51 by the exponential and power models, respectively. The BAF estimates obtained through meta-analysis have greater statistical power than those of the individual primary studies.
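
The two dose-response forms compared in the study can be written down directly from the reported coefficients. A brief sketch, where the baseline incidence and baseline dose arguments are illustrative assumptions:

```python
import numpy as np

# With UVB coefficient b, the biological amplification factor (BAF)
# relates a relative increase in UVB dose to the relative increase in
# skin cancer incidence. Coefficients below are the reported estimates.

def incidence_exponential(dose, base, b=2.07e-6):
    return base * np.exp(b * dose)           # I = I0 * exp(b * D)

def incidence_power(dose, base_dose, base, b=2.49):
    return base * (dose / base_dose) ** b    # I = I0 * (D / D0)^b
```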


The Prognostic Value of the Seventh Day APACHE III Score in the Medical Intensive Care Unit (내과계 중환자들의 예후 판정에 있어서 제 7병일 APACHE III 점수의 임상적 유용성)

  • Kim, Mi-Ok; Yun, Soo-Mi; Park, Eun-Joo; Sohn, Jang-Won; Yang, Seok-Chul; Yoon, Ho-Joo; Shin, Dong-Ho; Park, Sung-Soo
    • Tuberculosis and Respiratory Diseases / v.50 no.2 / pp.236-244 / 2001
  • Background: Most current research using prognostic scoring systems in critically ill patients has focused on prediction using first intensive care unit (ICU) day data or daily updated data. The mean ICU length of stay in Korea is usually longer than in the Western world, so a more cost-effective and practical prognostic parameter is required. The principal aim of this study was to assess the prognostic value of the seventh day (7th day, the average ICU length of stay) APACHE III score in a medical intensive care unit. Methods: 241 medical ICU patients from July 1997 to April 1998 were enrolled. The $1^{st}$ and $7^{th}$ day scores were measured using the APACHE III scoring system and compared between survivors and non-survivors. Logistic regression analysis was performed to determine the relationship between the $1^{st}$ and $7^{th}$ day APACHE III scores and the mortality risk. Results: 1) The mean length of stay in the ICU was $10.3{\pm}13.8$ days. 2) The mean $1^{st}$ and $7^{th}$ day APACHE III scores were $59.7{\pm}30.9$ and $37.9{\pm}27.7$. 3) The mean $1^{st}$ day APACHE III score was significantly lower in survivors than in non-survivors ($49.9{\pm}23.8$ vs $86.3{\pm}32.3$, P<0.0001). 4) The mean $7^{th}$ day APACHE III score was significantly lower in survivors than in non-survivors ($30.1{\pm}18.5$ vs $80.1{\pm}30.4$, P<0.0001). 5) The odds ratios relating the $1^{st}$ and $7^{th}$ day APACHE III scores to mortality were 1.0507 and 1.0779, respectively. Conclusion: These results suggest that the seventh day APACHE III score is as useful in predicting outcome as the first day APACHE III score. Therefore, compared with daily APACHE III scoring, measuring only the $1^{st}$ and $7^{th}$ day APACHE III scores is also useful for predicting the prognosis of critically ill patients in terms of cost-effectiveness.
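
To make the reported odds ratios concrete: under the fitted logistic model, each one-point increase in the APACHE III score multiplies the odds of death by the odds ratio. A small sketch, where the intercept is an illustrative assumption (the abstract does not report it):

```python
import numpy as np

def mortality_probability(score, odds_ratio, intercept=-5.0):
    """Logistic model logit(p) = b0 + b1 * score, where b1 = ln(odds ratio).
    The abstract reports odds ratios of 1.0507 (day 1) and 1.0779 (day 7);
    the intercept b0 here is a hypothetical placeholder."""
    logit = intercept + np.log(odds_ratio) * score
    return 1.0 / (1.0 + np.exp(-logit))

# Example: a 10-point higher day-7 score multiplies the odds of death by
rel_odds = 1.0779 ** 10   # roughly 2.1-fold
```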
