• Title/Summary/Keyword: Mean Value Reliability Method


Correlation between Bone Mineral Density Measured by Dual-Energy X-Ray Absorptiometry and Hounsfield Units Measured by Diagnostic CT in Lumbar Spine

  • Lee, Sungjoon;Chung, Chun Kee;Oh, So Hee;Park, Sung Bae
    • Journal of Korean Neurosurgical Society / v.54 no.5 / pp.384-389 / 2013
  • Objective: Use of quantitative computed tomography (CT) to evaluate bone mineral density was suggested in the 1970s. Despite its reliability and accuracy, technical shortcomings restricted its usage, and dual-energy X-ray absorptiometry (DXA) became the gold-standard evaluation method. Advances in CT technology have reduced its previous limitations, and CT evaluation of bone quality may now be applicable in clinical practice. The aim of this study was to determine if the Hounsfield unit (HU) values obtained from CT correlate with patient age and bone mineral density. Methods: A total of 128 female patients who underwent lumbar CT for back pain were enrolled in the study. Their mean age was 66.4 years. Among them, 70 patients also underwent DXA. The patients were stratified by decade of life, forming five age groups. Lumbar vertebrae L1-4 were analyzed. The HU value of each vertebra was determined by averaging three measurements of the vertebra's trabecular portion, as shown in consecutive axial CT images. The HU values were compared between age groups, and correlations of HU value with bone mineral density and T-scores were determined. Results: The HU values consistently decreased with increasing age, with significant differences between age groups (p<0.001). There were significant positive correlations (p<0.001) of HU value with bone mineral density and T-score. Conclusion: The trabecular-area HU value consistently decreases with age. Based on the strong positive correlation between HU value and bone mineral density, CT-based HU values might be useful in detecting bone mineral diseases such as osteoporosis.
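The measurement scheme above (per-vertebra HU as the average of three trabecular readings, then correlation against DXA values) can be sketched as follows. All numbers are illustrative placeholders, not the study's data:

```python
import math

def vertebra_hu(roi_readings):
    """Per-vertebra HU: mean of three trabecular ROI measurements."""
    return sum(roi_readings) / len(roi_readings)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative values only: HU falling with age, BMD falling in step,
# which yields a strong positive correlation as the study reports.
hu = [vertebra_hu(r) for r in [(180, 175, 185), (150, 148, 152),
                               (120, 118, 122), (90, 92, 88)]]
bmd = [1.10, 0.95, 0.82, 0.70]
print(round(pearson_r(hu, bmd), 3))
```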

The Comparative Study for Software Reliability Models Based on NHPP (NHPP에 기초한 소프트웨어 신뢰도 모형에 대한 비교연구)

  • Gan, Gwang-Hyeon;Kim, Hui-Cheol;Lee, Byeong-Su
    • The KIPS Transactions:PartD / v.8D no.4 / pp.393-400 / 2001
  • This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP). The failure process is analyzed to develop a suitable mean value function for the NHPP, and expressions are given for several performance measures. Actual software failure data are compared with the generalized Goel model, which depends on a constant reflecting the quality of testing. The performance measures and parameter inferences of the new models, based on the Rayleigh and Gumbel distributions, are discussed. The new models are applied to real software failure data and compared with the Goel-Okumoto model and the Yamada, Ohba and Osaki model. Parameters were inferred by maximum likelihood estimation, with the bisection algorithm used to compute the nonlinear roots. Model selection was based on the sum of squared errors. A numerical example using the NTDS data is illustrated.
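As a concrete reference point, the Goel-Okumoto NHPP has mean value function m(t) = a(1 - e^(-bt)), and the sum-of-squared-errors criterion mentioned above compares m(t_i) against observed cumulative failure counts. A minimal sketch with illustrative failure data (not the NTDS data set):

```python
import math

def m_goel_okumoto(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def sse(mean_fn, times, counts):
    """Sum of squared errors between the model and cumulative failure counts."""
    return sum((mean_fn(t) - n) ** 2 for t, n in zip(times, counts))

# Illustrative cumulative failure counts at test times 1..6.
times = [1, 2, 3, 4, 5, 6]
counts = [5, 9, 12, 14, 15, 16]

# Compare two candidate parameter sets; the smaller SSE is preferred.
for a, b in [(17.0, 0.45), (25.0, 0.15)]:
    print((a, b), round(sse(lambda t: m_goel_okumoto(t, a, b), times, counts), 2))
```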


The Bayesian Analysis for Software Reliability Models Based on NHPP (비동질적 포아송과정을 사용한 소프트웨어 신뢰 성장모형에 대한 베이지안 신뢰성 분석에 관한 연구)

  • Lee, Sang-Sik;Kim, Hee-Cheul;Kim, Yong-Jae
    • The KIPS Transactions:PartD / v.10D no.5 / pp.805-812 / 2003
  • This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) and performs Bayesian inference using prior information. The failure process is analyzed to develop a suitable mean value function for the NHPP, and expressions are given for several performance measures. Parameter inference for the Logarithmic Poisson, Crow, and Rayleigh models is discussed, with Bayesian computation and model selection based on the sum of squared errors. The models are applied to real software failure data, with parameters inferred by Gibbs sampling and the Metropolis algorithm. A numerical example using Musa's T1 data is illustrated.
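The Metropolis machinery mentioned above can be illustrated with a minimal random-walk sampler. The target below is a generic unnormalized log-posterior for a rate parameter (exponential likelihood with a Gamma(1, 1) prior) chosen for illustration only; it is not the paper's exact model, and the data are made up:

```python
import math
import random

def log_post(b, data):
    """Unnormalized log-posterior for a rate b: exponential likelihood
    plus a vague Gamma(1, 1) prior (an illustrative assumption)."""
    if b <= 0:
        return float("-inf")
    return len(data) * math.log(b) - b * sum(data) - b  # log-lik + log-prior

def metropolis(data, n_iter=5000, step=0.2, seed=1):
    """Random-walk Metropolis: propose b' ~ Normal(b, step), accept with
    probability min(1, post(b') / post(b))."""
    rng = random.Random(seed)
    b, samples = 1.0, []
    for _ in range(n_iter):
        b_new = b + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(b_new, data) - log_post(b, data):
            b = b_new
        samples.append(b)
    return samples

data = [0.8, 1.1, 0.5, 1.4, 0.9, 1.2]  # illustrative inter-failure times
draws = metropolis(data)[1000:]        # drop burn-in
print(round(sum(draws) / len(draws), 2))
```

With this conjugate setup the exact posterior is Gamma(n + 1, sum(data) + 1), which makes it easy to sanity-check the sampler before moving to an NHPP likelihood where no closed form exists.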

4:2:1 compromise plans using Min-Max method (Min-Max 방법을 적용한 4:2:1 절충적 계획)

  • 최재혁;강창욱
    • Journal of Korean Society of Industrial and Systems Engineering / v.21 no.47 / pp.1-10 / 1998
  • Testing high-reliability devices under normal operating conditions is difficult, because the devices are unlikely to fail in the relatively short time available for testing. For most applications it is necessary to accelerate the causes of failure by increasing a stress above its normal value. Previous accelerated life test (ALT) plans have shown how to find the optimum allocation, lowest stress, and sample size subject to minimizing the variance of the mean-life estimator. In these ALT plans, the highest acceptable test stress was assumed to be specified in advance by the experimenter, with no guidance for selecting it. This assumption is, however, inappropriate for many applications: testing devices at too-high stress levels can invalidate the extrapolation model, or introduce failure mechanisms that are not anticipated under normal operating conditions. In this paper, we propose new 4:2:1 compromise plans using the Min-Max method to minimize this risk, present the minimized test-stress levels (max, middle, min), and find the sample allocation based on the Min-Max 4:2:1 compromise plans. Finally, we compare previous 4:2:1 compromise plans with a specified maximum test stress against the Min-Max 4:2:1 compromise plans with a minimized maximum test stress.
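The 4:2:1 allocation itself is simple: test units are split across the low, middle, and high stress levels in proportions 4:2:1, putting most units at the stress closest to normal conditions. A minimal sketch (the rounding rule for leftover units is an assumption, not from the paper):

```python
def allocate_421(n_total):
    """Split n_total test units across (low, middle, high) stress levels
    in the 4:2:1 ratio. Leftover units from integer rounding are assigned
    to the low-stress level (an assumption for illustration)."""
    unit = n_total // 7
    low, middle, high = 4 * unit, 2 * unit, unit
    low += n_total - (low + middle + high)  # absorb the remainder
    return low, middle, high

print(allocate_421(70))  # (40, 20, 10)
print(allocate_421(50))  # (29, 14, 7)
```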


Analysis of the Optimal Window Size of Hampel Filter for Calibration of Real-time Water Level in Agricultural Reservoirs (농업용저수지의 실시간 수위 보정을 위한 Hampel Filter의 최적 Window Size 분석)

  • Joo, Dong-Hyuk;Na, Ra;Kim, Ha-Young;Choi, Gyu-Hoon;Kwon, Jae-Hwan;Yoo, Seung-Hwan
    • Journal of The Korean Society of Agricultural Engineers / v.64 no.3 / pp.9-24 / 2022
  • Currently, a vast amount of hydrologic data is accumulated in real time through automatic water-level instruments in agricultural reservoirs, and at the same time false and missing data points are increasing. The applicability and reliability of quality control for hydrological data must be secured for efficient agricultural water management, through calculation of water supply, and for disaster management. Considering the irregularities in hydrological data caused by irrigation water usage and rainfall patterns, the Korea Rural Community Corporation currently applies the Hampel filter as its water-level data quality-management method. The filter's key parameter is the window size: if it is too large, the data may be distorted, and if it is too small, many outliers are left unremoved, which reduces the reliability of the corrected data. The optimal window size therefore needs to be selected for each individual reservoir. To ensure reliability, we compared and analyzed the RMSE (Root Mean Square Error) and NSE (Nash-Sutcliffe model efficiency coefficient) of the corrected data against the daily water levels in the RIMS (Rural Infrastructure Management System) data, as well as the automatic outlier-detection standards used by the Ministry of Environment. To select the optimal window size, we used the classification-performance indices of the error matrix together with rainfall data from the irrigation period; the optimum was found at 3 h. This efficient automatic calibration technique for reservoirs can reduce the manpower and time required for manual calibration, and is expected to improve the reliability of water-level data and the value of water resources.
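The Hampel filter described above can be sketched as follows: within a sliding window, a point is flagged as an outlier when it deviates from the window median by more than a threshold times the scaled MAD, and flagged points are replaced by that median. Window size and threshold are the tuning parameters; the values and water-level trace below are illustrative, not RIMS data:

```python
def hampel_filter(series, window=3, n_sigma=3.0):
    """Replace outliers with the rolling-window median.

    window  -- half-width: the window spans `window` points on each side
    n_sigma -- deviation threshold in scaled-MAD "standard deviations"
    """
    k = 1.4826  # scales MAD to the std. dev. of Gaussian noise
    out = list(series)
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        win = sorted(series[lo:hi])
        med = win[len(win) // 2]
        mad = sorted(abs(v - med) for v in win)[len(win) // 2]
        if abs(series[i] - med) > n_sigma * k * mad:
            out[i] = med  # replace the flagged outlier
    return out

# Illustrative water-level trace with one spike at index 3.
levels = [12.1, 12.2, 12.2, 19.9, 12.3, 12.3, 12.4]
print(hampel_filter(levels))
```

The window-size trade-off in the abstract is visible directly here: a larger `window` smooths genuine irrigation-driven drops, while a smaller one lets wide spikes survive.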

Estimation of Drought Rainfall by Regional Frequency Analysis Using L and LH-Moments (II) - On the method of LH-moments - (L 및 LH-모멘트법과 지역빈도분석에 의한 가뭄우량의 추정 (II)- LH-모멘트법을 중심으로 -)

  • Lee, Soon-Hyuk;Yoon, Seong-Soo;Maeng, Sung-Jin;Ryoo, Kyong-Sik;Joo, Ho-Kil;Park, Jin-Seon
    • Journal of The Korean Society of Agricultural Engineers / v.46 no.5 / pp.27-39 / 2004
  • In the first part of this study, five regions of Korea, homogeneous in topographical and geographical aspects and excluding Jeju and Ulreung islands, were delineated by the K-means clustering method. A total of 57 rain gauges were used for the regional frequency analysis of minimum rainfall series for consecutive durations. The Generalized Extreme Value distribution was confirmed as optimal among the applied distributions. Drought rainfalls for various return periods were estimated by at-site and regional frequency analysis using the L-moments method, and the design drought rainfalls estimated by regional frequency analysis were shown to be more appropriate than those from at-site frequency analysis. In this second part of the study, the LH-moment ratio diagram and the Kolmogorov-Smirnov test were applied to the Gumbel (GUM), Generalized Extreme Value (GEV), Generalized Logistic (GLO) and Generalized Pareto (GPA) distributions to obtain the optimal probability distribution. Design drought rainfalls were estimated by both at-site and regional frequency analysis using LH-moments and the GEV distribution, confirmed as optimal among the applied distributions, using the observed data and data simulated by Monte Carlo techniques. Design drought rainfalls derived by regional frequency analysis using the L1-, L2-, L3- and L4-moments (LH-moments) method showed higher reliability than those of at-site frequency analysis in view of RRMSE (Relative Root-Mean-Square Error), RBIAS (Relative Bias) and RR (Relative Reduction). Relative efficiencies were calculated to judge the relative merits of the design drought rainfalls derived by regional frequency analysis using the L-moments and LH-moments applied in the first and second parts of this study, respectively. Consequently, design drought rainfalls derived by regional frequency analysis using L-moments were shown to be more reliable than those using LH-moments. Finally, design drought rainfalls for the five homogeneous regions and the various consecutive durations were derived by regional frequency analysis using L-moments, confirmed through this study as the more reliable method. Maps of these design drought rainfalls were produced by inverse distance weighting with Arc-View, a GIS technique.
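The sample L-moments underlying the analysis above can be computed from probability-weighted moments of the ordered sample; here λ1 is the L-location (the mean), λ2 the L-scale, and τ3 = λ3/λ2 the L-skewness used in L-moment ratio diagrams. A minimal sketch:

```python
from math import comb  # requires Python 3.8+

def sample_l_moments(data):
    """First three sample L-moments via probability-weighted moments b_r."""
    x = sorted(data)
    n = len(x)
    def b(r):
        # Unbiased PWM estimator: b_r = (1/n) * sum C(i, r)/C(n-1, r) * x_(i)
        return sum(comb(i, r) * x[i] for i in range(n)) / (n * comb(n - 1, r))
    b0, b1, b2 = b(0), b(1), b(2)
    l1 = b0                      # L-location (equals the sample mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3 / l2       # (lambda1, lambda2, tau3 = L-skewness)

# A symmetric sample has L-skewness ~0; a right-skewed one has tau3 > 0.
print(sample_l_moments([1, 2, 3, 4, 5]))
```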

A Study on The Velocity Distribution in Closed Conduit by Using The Entropy Concept (엔트로피 개념을 이용한 관수로내의 유속분포에 관한 연구)

  • Choo, Tai Ho;Ok, Chi Youl;Kim, Jin Won;Maeng, Seung Jin
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.4B / pp.357-363 / 2009
  • The Darcy-Weisbach friction-loss equation is generally used to obtain the mean velocity in a closed conduit, but it is inconvenient because the friction coefficient f is a function of the Reynolds number and the relative roughness (ε/d), so a more convenient estimation equation is needed. To prove the reliability and accuracy of Chiu's velocity equation, this study showed very good agreement with velocity data measured in a closed conduit using a non-intrusive Laser Doppler Velocimeter (LDV), an intrusive Pitot tube, and a transit-time ultrasonic flowmeter. By demonstrating the theoretical linear relation between maximum velocity and mean velocity in a laboratory flume without increase or decrease of discharge, it was shown that a conduit that reaches the equilibrium state corresponding to the entropy parameter M tends to maintain that state. Once the entropy parameter M representing a cross-section is determined, the mean velocity can be obtained by measuring the velocity only at the point of maximum velocity, so the discharge can be estimated simply. This indicates that the method can serve as a theoretical basis for designing, managing, and operating closed conduits.
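The linear relation referred to above is commonly stated in the entropy-velocity literature as ū/u_max = e^M/(e^M - 1) - 1/M, so one velocity reading at the maximum-velocity point suffices once M is calibrated for the section. A minimal sketch (the u_max and M values below are illustrative, not the paper's data):

```python
import math

def velocity_ratio(M):
    """Entropy-based mean/maximum velocity ratio as a function of the
    entropy parameter M: phi(M) = e^M / (e^M - 1) - 1/M."""
    return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

def mean_velocity(u_max, M):
    """Mean velocity from a single measurement at the point of maximum
    velocity, once M is known for the cross-section."""
    return u_max * velocity_ratio(M)

# Illustrative values: u_max in m/s measured at the velocity maximum,
# M calibrated once for the section.
print(round(mean_velocity(2.0, 2.2), 3))
```

Multiplying the resulting mean velocity by the conduit's cross-sectional area gives the simple discharge estimate the abstract describes.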

Nurses' Attitudes Toward Complementary and Alternative Therapies (간호사의 보완대체요법에 대한 태도)

  • Son, Haeng-Mi
    • Korean Journal of Adult Nursing / v.14 no.1 / pp.62-72 / 2002
  • Purpose: This study was performed to develop a scale for evaluating attitudes toward complementary and alternative therapies (CAT) and to investigate nurses' attitudes toward CAT. Method: The subjects were 263 nurses working at university hospitals in Seoul and Inchon. The personally designed questionnaire was tested for reliability and validity, and nurses' attitudes toward CAT were evaluated with it. Results: Cronbach's α coefficient was 0.7405. Twenty-three items were selected by item analysis, and four factors (application, therapeutic effect, social interest, and communication about CAT) were identified by factor analysis. The mean scores for attitudes and the subcategories were high; that of communication was especially high. Nurses responded positively to CAT on several items: acceptance as a nursing intervention, its therapeutic value, its role as a complement to conventional medicine, and open communication about CAT. Attitudes differed significantly according to education and number of working years. Attitudes correlated strongly with all subcategories except communication. Conclusion: The scale of attitudes toward CAT was proven reliable and valid. Nurses' positive attitudes toward CAT will help patients receive CAT in a proper and safe way.
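The reliability figure reported above is Cronbach's α for a k-item scale, α = k/(k-1) · (1 - Σσ²_item / σ²_total), where σ²_total is the variance of the summed scores. A minimal sketch with illustrative responses (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists,
    all over the same respondents, in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Illustrative 5-point Likert responses: 3 items, 5 respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```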


Comparative analysis of wavelet transform and machine learning approaches for noise reduction in water level data (웨이블릿 변환과 기계 학습 접근법을 이용한 수위 데이터의 노이즈 제거 비교 분석)

  • Hwang, Yukwan;Lim, Kyoung Jae;Kim, Jonggun;Shin, Minhwan;Park, Youn Shik;Shin, Yongchul;Ji, Bongjun
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.209-223 / 2024
  • In the context of the fourth industrial revolution, data-driven decision-making has become increasingly pivotal. However, the integrity of data analysis is compromised if data quality is not adequately ensured, potentially leading to biased interpretations. This is particularly critical for water level data, essential for water resource management, which often suffers quality issues such as missing values, spikes, and noise. This study addresses noise-induced data quality deterioration, which complicates trend analysis and may produce anomalous outliers. To mitigate this issue, we propose a noise removal strategy employing the Wavelet Transform, a technique renowned for its efficacy in signal processing and noise elimination. The advantage of the Wavelet Transform lies in its operational efficiency: it reduces both time and cost, as it obviates the need for acquiring the true values of the collected data. This study conducted a comparative performance evaluation between our Wavelet Transform-based approach and the Denoising Autoencoder, a prominent machine learning method for noise reduction. The findings demonstrate that the Coiflets wavelet function outperforms the Denoising Autoencoder across various metrics, including Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Mean Squared Error (MSE). The superiority of the Coiflets function suggests that selecting a wavelet function tailored to the specific application environment can effectively address data quality issues caused by noise. This study underscores the potential of the Wavelet Transform as a robust tool for enhancing the quality of water level data, thereby contributing to the reliability of water resource management decisions.
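The wavelet-denoising pipeline (decompose, threshold the detail coefficients, reconstruct) can be illustrated with a single-level Haar transform standing in for the Coiflets family used in the study; soft thresholding shrinks the detail coefficients toward zero. The trace below is illustrative, not the study's data:

```python
import math

def haar_forward(x):
    """One-level orthonormal Haar DWT; assumes an even-length input."""
    s = 1 / math.sqrt(2)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert the one-level Haar DWT."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

def soft_threshold(coeffs, thr):
    """Shrink each coefficient toward zero by thr (soft thresholding)."""
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def denoise(x, thr):
    approx, detail = haar_forward(x)
    return haar_inverse(approx, soft_threshold(detail, thr))

# Illustrative noisy water-level trace.
noisy = [10.0, 10.4, 10.1, 10.5, 10.2, 10.6, 10.3, 10.7]
print([round(v, 2) for v in denoise(noisy, 0.2)])
```

With the threshold at zero the transform reconstructs the input exactly, which is a handy correctness check before tuning the threshold against real data.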

Feasibility of the Lapse Rate Prediction at an Hourly Time Interval (기온감률의 일중 경시변화 예측 가능성)

  • Kim, Soo-ock;Yun, Jin I.
    • Korean Journal of Agricultural and Forest Meteorology / v.18 no.1 / pp.55-63 / 2016
  • The temperature lapse rate within the planetary boundary layer shows a diurnal cycle with substantial variation, so the widely used lapse rate of the standard atmosphere may produce unacceptable errors when used to interpolate hourly temperature in complex terrain. We propose a simple method for estimating the hourly lapse rate and evaluate whether it outperforms the conventional method using the standard lapse rate. A standard curve for the lapse rate over the diurnal course of temperature was drawn using upper-air temperatures at the 1000 hPa and 925 hPa standard pressure levels, and was modulated by the hourly sky condition (amount of cloud). To test the reliability of this method, hourly lapse rates for the 500-600 m layer over the Daegwallyeong site were estimated and compared with values measured by an ultrasonic temperature profiler, giving a mean error of -0.0001°C/m and a root mean square error of 0.0024°C/m for this vertical-profile experiment. An additional experiment tested whether the method is applicable to the mountain-slope lapse rate: hourly lapse rates for the 313-401 m slope range in a complex watershed ('Hadong Watermark 2') were estimated and compared with observations. We found the method useful for describing the diurnal cycle and variation of the mountain-slope lapse rate over complex terrain, despite larger errors than in the vertical-profile experiment.
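A lapse rate between two levels is simply the temperature drop per unit height, Γ = (T_lower - T_upper)/(z_upper - z_lower); the standard-atmosphere value the paper contrasts with is about 0.0065°C/m. A minimal sketch of that arithmetic (the paper's cloud-modulation step is not reproduced here):

```python
def lapse_rate(t_lower, t_upper, z_lower, z_upper):
    """Lapse rate in degC per metre between two levels (positive when
    temperature falls with height)."""
    return (t_lower - t_upper) / (z_upper - z_lower)

def temp_at(t_ref, z_ref, z, gamma):
    """Interpolate temperature to height z with a given lapse rate."""
    return t_ref - gamma * (z - z_ref)

# Standard atmosphere: 15.0 degC at sea level, 8.5 degC at 1000 m.
gamma = lapse_rate(15.0, 8.5, 0.0, 1000.0)
print(gamma)                             # 0.0065 degC/m
print(temp_at(15.0, 0.0, 600.0, gamma))  # about 11.1 degC
```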