• Title/Summary/Keyword: underestimation

Search Result 348

A REVIEW OF NEUTRON SCATTERING CORRECTION FOR THE CALIBRATION OF NEUTRON SURVEY METERS USING THE SHADOW CONE METHOD

  • KIM, SANG IN;KIM, BONG HWAN;KIM, JANG LYUL;LEE, JUNG IL
    • Nuclear Engineering and Technology
    • /
    • v.47 no.7
    • /
    • pp.939-944
    • /
    • 2015
  • The calibration methods for neutron-measuring devices such as neutron survey meters each have advantages and disadvantages. To compare the calibration factors obtained by the shadow cone method and the semi-empirical method, 10 neutron survey meters of five different types were used in this study. The experiment was performed at the Korea Atomic Energy Research Institute (KAERI; Daejeon, South Korea), and the calibration neutron fields were constructed using a californium-252 ($^{252}$Cf) neutron source positioned at the center of the neutron irradiation room. The neutron spectra of the calibration fields were measured by a europium-activated lithium iodide scintillator in combination with KAERI's Bonner sphere system. With the shadow cone method, the 10 single-moderator-based survey meters exhibited calibration factors 3.1-9.3% smaller than those obtained with the semi-empirical method. This finding indicates that the neutron survey meters underestimated the scattered and attenuated neutrons (i.e., the total scatter correction). The underestimation of the calibration factor was attributed to the fact that single-moderator-based survey meters have an under-ambient dose equivalent response in thermal or thermal-dominant neutron fields. As a result, when the shadow cone method is used for a single-moderator-based survey meter, an additional correction, together with International Organization for Standardization (ISO) standard 8529-2 for room-scattered neutrons, should be considered. The shadow-cone subtraction underlying this comparison is sketched below.
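A minimal sketch of the shadow-cone subtraction referenced above, assuming hypothetical instrument readings and a hypothetical reference dose rate; the air-attenuation and geometry corrections prescribed by ISO 8529-2 are deliberately omitted.

```python
# Minimal sketch of a shadow-cone calibration-factor calculation.
# All numbers are hypothetical; air-attenuation and geometry corrections
# required by ISO 8529-2 are intentionally omitted.

def shadow_cone_calibration_factor(m_total, m_shadow, h_ref_rate):
    """Calibration factor N = conventionally true H*(10) rate / net reading.

    m_total    : reading with the bare source (direct + room-scattered), counts/s
    m_shadow   : reading with the shadow cone in place (room-scattered only), counts/s
    h_ref_rate : conventionally true ambient dose equivalent rate H*(10), uSv/h
    """
    m_net = m_total - m_shadow      # remove the room- and air-scattered component
    return h_ref_rate / m_net       # uSv/h per (counts/s)

if __name__ == "__main__":
    n = shadow_cone_calibration_factor(m_total=120.0, m_shadow=18.0, h_ref_rate=85.0)
    print(f"Calibration factor: {n:.3f} uSv/h per cps")
```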

Prevalence, comorbidities, diagnosis, and treatment of nonallergic rhinitis: real-world comparison with allergic rhinitis

  • Yum, Hye Yung;Ha, Eun Kyo;Shin, Yoon Ho;Han, Man Yong
    • Clinical and Experimental Pediatrics
    • /
    • v.64 no.8
    • /
    • pp.373-383
    • /
    • 2021
  • Rhinitis is among the most common respiratory diseases in children. Nonallergic rhinitis, which involves nasal symptoms without evidence of systemic allergic inflammation or infection, is a heterogeneous entity with diverse manifestations and intensities. Nonallergic rhinitis accounts for 16%-89% of chronic rhinitis cases, affecting 1%-50% (median 10%) of the total pediatric population. Its clinical course is generally mild and less likely to be associated with allergic comorbidities than that of allergic rhinitis. Here, we aimed to estimate the rate of comorbidities coexisting with nonallergic rhinitis. Nonallergic rhinitis is more prevalent during the first 2 years of life; however, its underestimation in children with atopic tendencies is likely due to the low positive rates of specific allergy tests during early childhood. Local allergic rhinitis is a recently recognized phenotype, found at rates similar to those in adults (median, 44%; range, 4%-67%) among patients previously diagnosed with nonallergic rhinitis. Idiopathic rhinitis, a subtype of nonallergic rhinitis, has been poorly studied in children, and its rates are known to be lower than those in adults. The prevalence of nonallergic rhinitis with eosinophilia syndrome is even lower. A correlation between nonallergic rhinitis and pollution has been suggested owing to the recent increase in nonallergic rhinitis rates in rapidly developing regions such as some Asian countries, but many aspects remain unknown. Conventional treatments include antihistamines and intranasal corticosteroids; more recent treatments include combinations of intranasal corticosteroids with azelastine or decongestants. Here we review the prevalence, diagnosis, comorbidities, and treatment recommendations for nonallergic rhinitis versus allergic rhinitis in children.

FORECAST OF DAILY MAJOR FLARE PROBABILITY USING RELATIONSHIPS BETWEEN VECTOR MAGNETIC PROPERTIES AND FLARING RATES

  • Lim, Daye;Moon, Yong-Jae;Park, Jongyeob;Park, Eunsu;Lee, Kangjin;Lee, Jin-Yi;Jang, Soojeong
    • Journal of The Korean Astronomical Society
    • /
    • v.52 no.4
    • /
    • pp.133-144
    • /
    • 2019
  • We develop forecast models of the daily probabilities of major flares (M- and X-class) based on empirical relationships between photospheric magnetic parameters and daily flaring rates from May 2010 to April 2018. In this study, we consider ten magnetic parameters characterizing the size, distribution, and non-potentiality of vector magnetic fields from Solar Dynamics Observatory (SDO)/Helioseismic and Magnetic Imager (HMI) data and Geostationary Operational Environmental Satellites (GOES) X-ray flare data. The magnetic parameters are classified into three types: total unsigned parameters, total signed parameters, and mean parameters. We divide the data chronologically into two sets: 70% for training and 30% for testing. The empirical relationships between the parameters and flaring rates are used to predict flare occurrence probabilities for a given magnetic parameter value (see the sketch below). Major results of this study are as follows. First, major flare occurrence rates are well correlated with all ten parameters, with correlation coefficients above 0.85. Second, the logarithms of the flaring rates are well approximated by linear equations. Third, the total unsigned and total signed parameters achieved better flare-prediction performance than the mean parameters in terms of verification measures for probabilistic and converted binary forecasts. We conclude that the total quantity of the non-potentiality of the magnetic field is crucial for flare forecasting among the magnetic parameters considered in this study. When applied operationally, the model can be run with data taken at 21:00 TAI, at the cost of a slight underestimation of 2-6.3%.
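As a minimal sketch of the empirical-relationship approach in the abstract above, the snippet below fits the logarithm of the daily flaring rate as a linear function of a single magnetic parameter and converts the predicted rate into a daily major-flare probability. The data values are synthetic, and the Poisson rate-to-probability conversion is an assumption not stated in the abstract.

```python
# Sketch: fit log10(daily major-flare rate) vs. log10(magnetic parameter),
# then convert the predicted rate to a daily probability assuming Poisson
# occurrence. All data values are synthetic.

import numpy as np

param = np.array([1e21, 3e21, 1e22, 3e22, 1e23])   # e.g., total unsigned flux (arbitrary units)
rate = np.array([0.01, 0.03, 0.10, 0.35, 1.20])    # observed flares per day (synthetic)

# Linear fit in log-log space: log10(rate) = a * log10(param) + b
a, b = np.polyfit(np.log10(param), np.log10(rate), 1)

def daily_flare_probability(p):
    """Probability of at least one major flare in the next day."""
    predicted_rate = 10 ** (a * np.log10(p) + b)   # flares per day
    return 1.0 - np.exp(-predicted_rate)           # Poisson: P(N >= 1)

print(f"P(major flare | param = 5e22) = {daily_flare_probability(5e22):.2f}")
```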

Evaluation of Matrix Effects in Quantifying Microbial Secondary Metabolites in Indoor Dust Using Ultraperformance Liquid Chromatograph-Tandem Mass Spectrometer

  • Jaderson, Mukhtar;Park, Ju-Hyeong
    • Safety and Health at Work
    • /
    • v.10 no.2
    • /
    • pp.196-204
    • /
    • 2019
  • Background: Liquid chromatography-tandem mass spectrometry (LC-MSMS) for the simultaneous analysis of multiple microbial secondary metabolites (MSMs) is potentially subject to interference by matrix components. Methods: We examined potential matrix effects (MEs) in analyses of 31 MSMs using ultraperformance LC-MSMS. Twenty-one dust aliquots from three buildings (seven aliquots/building) were spiked with seven concentrations of each of the MSMs (6.2-900 pg/µl) and then extracted. Another set of 21 aliquots was first extracted, and the extracts were then spiked with the same concentrations. We added deepoxy-deoxynivalenol (DOM) to all aliquots as a universal internal standard. Ten microliters of each extract was injected into the ultraperformance LC-MSMS. ME was calculated as 100 minus the percentage ratio of the analyte response in the spiked extract to that in the neat standard. The spiked-extract results were used to create a matrix-matched calibration (MMC) curve for estimating MSM concentrations in dust spiked before extraction (both calculations are sketched below). Results: Analysis of variance was used to examine the effects of compound (MSM), building, and concentration on response. MEs (range: 63.4%-99.97%) differed significantly by MSM (p < 0.01) and building (p < 0.05). Mean percent recoveries adjusted with DOM and with the MMC method were 246.3% (SD = 226.0) and 86.3% (SD = 70.7), respectively. Conclusion: We found that dust MEs resulted in substantial underestimation when quantifying MSMs, that DOM was not an optimal universal internal standard for the adjustment, and that the MMC method gave more accurate and precise recoveries than DOM. More research on adjustment methods for dust MEs in the simultaneous analysis of multiple MSMs using LC-MSMS is warranted.
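A minimal sketch of the ME formula and a matrix-matched calibration (MMC) curve as defined in the abstract; the peak areas and the sample response are hypothetical.

```python
# Sketch of the matrix-effect (ME) calculation and an MMC curve, following
# the definitions in the abstract. Peak areas are hypothetical.

import numpy as np

def matrix_effect(response_spiked_extract, response_neat_standard):
    """ME (%) = 100 - (response in spiked extract / response in neat standard) * 100.
    Positive values indicate signal suppression by the dust matrix."""
    return 100.0 - (response_spiked_extract / response_neat_standard) * 100.0

# Matrix-matched calibration: fit response vs. concentration using extracts
# spiked AFTER extraction, then back-calculate samples spiked BEFORE extraction.
conc = np.array([6.2, 25.0, 100.0, 400.0, 900.0])               # pg/ul spiked into the extract
response = np.array([180.0, 700.0, 2900.0, 11200.0, 25800.0])   # hypothetical peak areas
slope, intercept = np.polyfit(conc, response, 1)

def quantify(sample_response):
    """Concentration (pg/ul) back-calculated from the MMC curve."""
    return (sample_response - intercept) / slope

print(f"ME = {matrix_effect(1800.0, 2600.0):.1f} %")
print(f"Estimated concentration = {quantify(5200.0):.1f} pg/ul")
```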

Correction of Depth Perception in Virtual Environment Using Spatial Components and Perceptual Clues (공간 구성요소 및 지각단서를 활용한 가상환경 내 깊이지각 보정)

  • Chae, Byung-Hoon;Lee, In-Soo;Chae, U-Ri;Lee, Joo-Yeoun
    • Journal of Digital Convergence
    • /
    • v.17 no.8
    • /
    • pp.205-219
    • /
    • 2019
  • As virtual-environment-based education and training are applied to a growing range of fields, their usefulness continues to expand. However, depth perception tends to be underestimated in such training environments. Previous work attempted to address this problem by applying a top-down correction method, but it is difficult to determine whether the resulting improvement reflects a learning effect or an actual change in perception. In this study, we confirmed that the proportions of the spatial components had a significant effect on depth perception, and that size perception was corrected along with it. We therefore propose a correction method that uses spatial components and perceptual cues to improve the accuracy of depth perception.

Biases in the Assessment of Left Ventricular Function by Compressed Sensing Cardiovascular Cine MRI

  • Yoon, Jong-Hyun;Kim, Pan-ki;Yang, Young-Joong;Park, Jinho;Choi, Byoung Wook;Ahn, Chang-Beom
    • Investigative Magnetic Resonance Imaging
    • /
    • v.23 no.2
    • /
    • pp.114-124
    • /
    • 2019
  • Purpose: We investigate biases in the assessment of left ventricular function (LVF) by compressed sensing (CS) cine magnetic resonance imaging (MRI). Materials and Methods: Cardiovascular cine images with a short-axis view were obtained for 8 volunteers without CS. LVF was assessed with subsampled data at compression factors (CF) of 2, 3, 4, and 8. A semi-automatic segmentation program was used for the assessment. The assessments by three CS methods (ITSC, FOCUSS, and view sharing (VS)) were compared to those without CS. Bland-Altman analysis and the paired t-test were used for comparison. In addition, real-time CS-cine imaging was performed at CF of 2, 3, 4, and 8 for the same volunteers, and LVF was assessed similarly for the CS data. A fixed compensation technique is suggested to reduce the bias (see the sketch below). Results: The assessment of LVF by CS-cine includes bias and random noise, and the bias was much larger than the random noise. The median end-diastolic volume (EDV) with CS-cine (ITSC or FOCUSS) was 1.4% to 7.1% smaller than that of standard cine, depending on the CF (2 to 8). End-systolic volume (ESV) was 1.6% to 14.3% larger, stroke volume (SV) 2.4% to 16.4% smaller, and ejection fraction (EF) 1.1% to 9.2% smaller, with P < 0.05. The bias in EF was reduced from -5.6% to -1.8% by the compensation applied to real-time CS-cine (CF = 8). Conclusion: The loss of temporal resolution caused by adopting missing data from nearby cardiac frames leads to underestimation of EDV and overestimation of ESV, and hence underestimation of SV and EF. The bias is not random, so it should be removed or reduced for better diagnosis. A fixed compensation is suggested to reduce the bias in the assessment of LVF.
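As a minimal sketch of the underlying arithmetic, the snippet below computes SV and EF from EDV and ESV and applies a fixed percentage-bias compensation of the kind suggested above; the volumes and compensation factors are hypothetical, not the paper's calibrated values.

```python
# Sketch: LV-function arithmetic plus a fixed (constant-percentage) bias
# compensation. Volumes and bias percentages are hypothetical.

def lv_function(edv, esv):
    """Stroke volume (ml) and ejection fraction (%) from EDV and ESV (ml)."""
    sv = edv - esv
    ef = 100.0 * sv / edv
    return sv, ef

def compensate(volume, bias_percent):
    """Undo a known fixed percentage bias (e.g. a -7.1% EDV underestimation)."""
    return volume / (1.0 + bias_percent / 100.0)

# Hypothetical CS-cine measurements at a high compression factor.
edv_cs, esv_cs = 130.0, 62.0
sv, ef = lv_function(compensate(edv_cs, -7.1), compensate(esv_cs, 14.3))
print(f"Compensated SV = {sv:.1f} ml, EF = {ef:.1f} %")
```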

A Method to Calculate Off-site Radionuclide Concentration for Multi-unit Nuclear Power Plant Accident (다수기 원자력발전소 사고 시 소외 방사성물질 농도 계산 방법)

  • Lee, Hye Rin;Lee, Gee Man;Jung, Woo Sik
    • Journal of the Korean Society of Safety
    • /
    • v.33 no.6
    • /
    • pp.144-156
    • /
    • 2018
  • Level 3 Probabilistic Safety Assessment (PSA) is performed for risk assessments that calculate radioactive material dispersion to the environment, typically with the MELCOR Accident Consequence Code System (MACCS2 or WinMACCS). For the off-site consequence analysis of a multi-unit nuclear power plant (NPP) accident, the single-location (Center Of Mass, COM) method has usually been adopted, with the assumption that all NPPs on the site are located at the same COM point. It is well known that this COM calculation can lead to underestimated or overestimated radionuclide concentrations. To overcome this underestimation or overestimation, the Multiple Location (ML) method was developed in this study. The radionuclide concentrations for the individual NPPs are calculated separately and then summed at every location on the site by post-processing based on two-dimensional Gaussian plume equations (sketched below). To demonstrate the efficiency of the ML method, radionuclide concentrations were calculated for a six-unit NPP site, and the concentrations from the ML method were compared with those from the COM method. The comparison was performed for constant weather, yearly weather in Korea, and the four seasons, and the results are discussed. This new ML method (1) improves the accuracy of radionuclide concentrations when a multi-unit NPP accident occurs, (2) calculates realistic atmospheric dispersion of radionuclides under various weather conditions, and (3) supports the optimization of off-site emergency plans. It is recommended that this new method be applied to the risk assessment of multi-unit NPP accidents. The method drastically improves the accuracy of radionuclide concentrations at locations adjacent or very close to the NPPs, and it has a great strength over the COM method when people live near the nuclear site, since it provides accurate radionuclide concentrations and radiation doses.
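A minimal sketch of the ML superposition idea, assuming a simple ground-level Gaussian plume with crude power-law dispersion parameters; the source terms, unit coordinates, and meteorology are hypothetical, and a real MACCS2/WinMACCS calculation includes far more physics (depletion, deposition, dose conversion, weather sampling).

```python
# Sketch of the Multiple Location (ML) method: evaluate a 2D Gaussian plume
# for each unit at its own coordinates and sum the concentrations on a
# common grid. All parameters are hypothetical.

import numpy as np

def dispersion(x):
    """Crude, hypothetical power-law dispersion parameters sigma_y, sigma_z (m)."""
    return 0.22 * x ** 0.9, 0.12 * x ** 0.9

def gaussian_plume(x, y, q, u, h):
    """Ground-level (z = 0) concentration (Bq/m^3) from a point source at height h (m).
    x, y: downwind and crosswind distances (m); q: release rate (Bq/s); u: wind speed (m/s)."""
    x = np.maximum(x, 1.0)                      # avoid the singularity at the source
    sy, sz = dispersion(x)
    return (q / (2.0 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2.0 * sy**2))
            * 2.0 * np.exp(-h**2 / (2.0 * sz**2)))   # reflection off the ground

# Hypothetical unit coordinates (m) within the site; identical release rates assumed.
units = [(-300.0, 0.0), (0.0, 0.0), (300.0, 0.0)]
xg, yg = np.meshgrid(np.linspace(500.0, 5000.0, 100), np.linspace(-2000.0, 2000.0, 100))

total = np.zeros_like(xg)
for ux, uy in units:                            # ML method: sum the plumes unit by unit
    total += gaussian_plume(xg - ux, yg - uy, q=1e10, u=3.0, h=50.0)

print(f"Peak grid concentration: {total.max():.3e} Bq/m^3")
```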

A Development of Nonstationary Frequency Analysis Model using a Bayesian Multiple Non-crossing Quantile Regression Approach (베이지안 다중 비교차 분위회귀 분석 기법을 이용한 비정상성 빈도해석 모형 개발)

  • Uranchimeg, Sumiya;Kim, Yong-Tak;Kwon, Young-Jun;Kwon, Hyun-Han
    • Journal of Coastal Disaster Prevention
    • /
    • v.4 no.3
    • /
    • pp.119-131
    • /
    • 2017
  • Global warming and its direct impacts on glaciers and sea level are well-known issues. However, there is a lack of research on the indirect impacts of climate change, such as on coastal structure design, which is mainly based on frequency analysis of water levels under the stationary assumption that the maximum sea level will not vary significantly over time. In general, this assumption does not hold and may not be valid under a changing climate. Therefore, this study aims to develop a novel approach to explore possible distributional changes in annual maximum sea levels (AMSLs) and to provide design water level estimates for coastal structures using a nonstationary frequency analysis based on multiple non-crossing quantile regression within a Bayesian framework (a simplified sketch follows). In this study, 20 tide gauge stations where more than 30 years of hourly records are available are considered. First, the possible distributional changes in the AMSLs are explored, focusing on changes in the scale and location parameters of the probability distributions. Most of the AMSLs are found to show upward-convergent or upward-divergent patterns in their distributions, and a significance test on the distributional changes is then performed. We confirm that a stationary assumption under the current climate may lead to underestimation of the design sea level, which increases the failure risk of coastal structures. A detailed discussion of the role of the distributional changes in the design water level is provided.
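As a simplified sketch of the idea (not the paper's method), the snippet below fits ordinary quantile regressions of synthetic AMSLs on time for several quantiles, so that high design quantiles are allowed to trend over time; the Bayesian, multiple non-crossing formulation of the paper is not reproduced here.

```python
# Sketch: nonstationary view of annual maximum sea levels (AMSLs) via
# quantile regression on time. Synthetic data; no non-crossing constraint
# and no Bayesian inference, unlike the paper's method.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
years = np.arange(1985, 2016)
amsl = 250.0 + 0.8 * (years - 1985) + rng.gumbel(0.0, 12.0, size=years.size)  # cm, synthetic

X = sm.add_constant(years - years.min())        # [intercept, elapsed years]
for q in (0.5, 0.9, 0.95):                      # median and upper design quantiles
    res = sm.QuantReg(amsl, X).fit(q=q)
    const, slope = res.params
    print(f"q={q}: AMSL ~ {const:.1f} + {slope:.2f} * (year - {years.min()}) cm")
```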

Body Surface Area Is Not a Reliable Predictor of Tracheal Tube Size in Children

  • Uzumcugil, Filiz;Celebioglu, Emre Can;Ozkaragoz, Demet Basak;Yilbas, Aysun Ankay;Akca, Basak;Lotfinagsh, Nazgol;Celebioglu, Bilge
    • Clinical and Experimental Otorhinolaryngology
    • /
    • v.11 no.4
    • /
    • pp.301-308
    • /
    • 2018
  • Objectives. The age-based Cole formula has been employed for the estimation of endotracheal tube (ETT) size because of its ease of use, but it may not appropriately account for differing growth rates among children. Child growth is assessed by calculating the body surface area (BSA). The association between the outer diameter of an appropriate uncuffed endotracheal tube (ETT-OD) and the BSA values of patients 24-96 months of age was our primary outcome. Methods. The Cole formula, BSA, age, height, weight, and ultrasound measurement of the subglottic transverse diameter were evaluated for correlation with the correct uncuffed ETT-OD. The Cole formula, BSA, and ultrasound measurements were analyzed for estimation rates in all patients and in age subgroups. The maximum allowed error for the estimation of ETT-OD was ≤0.3 mm. Patients' tracheas were intubated with tubes chosen by the Cole formula, the correct ETT-OD values were determined using the leak test, and ETT exchange rates were recorded. Results. One hundred twenty-seven patients were analyzed for the determination of estimation rates. Thirteen patients aged ≥72 months were intubated with a cuffed ETT-OD of 8.4 mm and were assumed to need an uncuffed ETT-OD >8.4 mm; they were included in the estimation rates but excluded from the correlation analyses for size. One hundred fourteen patients were analyzed for correlations between the correct ETT-OD (determined by the leak test) and the outcome parameters. The Cole formula, ultrasonography, and BSA had similar correct estimation rates, and all three parameters had higher underestimation rates as age increased. Conclusion. The Cole formula, BSA, and ultrasonography had similar estimation rates in patients aged ≥24 to ≤96 months. BSA had a correct estimation rate of 40.2% and may not be reliable in clinical practice for predicting uncuffed ETT size. (The two formula-based predictors are sketched below.)
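A minimal sketch of the two formula-based predictors compared in the abstract. The Cole formula gives an internal diameter, whereas the study's outcome is the outer diameter of the correct tube, and the Mosteller BSA formula is an assumption here since the abstract does not state which BSA formula was used; the example patient values are hypothetical.

```python
# Sketch: the age-based Cole formula and a BSA calculation (Mosteller formula,
# assumed here). Example patient values are hypothetical.

import math

def cole_uncuffed_id_mm(age_years):
    """Cole formula: uncuffed ETT internal diameter (mm) = age/4 + 4."""
    return age_years / 4.0 + 4.0

def bsa_mosteller_m2(height_cm, weight_kg):
    """Body surface area (m^2) by the Mosteller formula: sqrt(height * weight / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

# Example: a 4-year-old, 102 cm, 16 kg (hypothetical).
print(f"Cole ETT ID: {cole_uncuffed_id_mm(4):.1f} mm")
print(f"BSA: {bsa_mosteller_m2(102.0, 16.0):.2f} m^2")
```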

A Methodological Thinking on Valuation Analysis of the Architectural Aesthetic based on the Hedonic Calculus by Bentham (Bentham의 쾌락계산법에 기초한 건축미 가치추정 방법론적 소고)

  • Lee, Dong-Joo;Ko, Eun-Hyung
    • Journal of the Architectural Institute of Korea Planning & Design
    • /
    • v.34 no.4
    • /
    • pp.11-18
    • /
    • 2018
  • Beauty has been likened to the froth on beer, and behavioral scholars have regarded it as not worth studying. As a result, the aesthetic in construction projects has been underestimated or neglected. The fundamental cause is that it is not easy to estimate the value of the aesthetic. Bentham's hedonic calculus offers a possible way to value intangible or invisible goods. In this context, this study proposes a method to value the architectural aesthetic based on Bentham's hedonic calculus. As a precondition for the valuation method, this study defines the architectural aesthetic as the value of attraction that affects the value of the built environment. As the concept of beauty that corresponds to the architectural aesthetic, it adopts Hartmann's notion of beauty as 'the phenomenon of combining foreground and background'. In addition, the scope of the value measurement is defined as the 'built environment' so as to include not only the building but also its surroundings. We reinterpret the seven dimensions of the hedonic calculus proposed by Bentham and systematize a method for valuing the architectural aesthetic based on them. This study is meaningful in that it presents a new perspective and approach to the architectural aesthetic, and it can serve as a basis for the valuation and analysis of the architectural aesthetic and for expanding the field of study from aesthetics to value.