• Title/Summary/Keyword: root mean square (제곱평균제곱근)


Bayesian networks-based probabilistic forecasting of hydrological drought considering drought propagation (가뭄의 전이 현상을 고려한 수문학적 가뭄에 대한 베이지안 네트워크 기반 확률 예측)

  • Shin, Ji Yae; Kwon, Hyun-Han; Lee, Joo-Heon; Kim, Tae-Woong
    • Journal of Korea Water Resources Association, v.50 no.11, pp.769-779, 2017
  • As droughts have recently become more frequent, reliable drought forecasting is required for drought mitigation and proactive management of water resources. This study developed a probabilistic hydrological drought forecasting method, named the Propagated Bayesian Networks Drought Forecasting (PBNDF) model, which uses Bayesian networks and the drought propagation relationship to estimate future drought together with the forecast uncertainty. The proposed PBNDF model is composed of four nodes: past information, current information, multi-model ensemble (MME) forecast information, and the drought propagation relationship. Using the Palmer Hydrological Drought Index (PHDI), the PBNDF model was applied to forecast hydrological drought conditions at 10 gauging stations in the Nakdong River basin. Receiver operating characteristic (ROC) curve analysis was applied to measure the forecast skill of the forecast mean values, and the root mean squared error (RMSE) and skill score (SS) were employed to compare the forecast performance with previously developed forecast models (persistence forecast and Bayesian network drought forecast). The PBNDF model showed better performance, with lower RMSE and an SS of 0.1~0.15, indicating good potential for probabilistic drought forecasting; the RMSE and skill-score measures are sketched in the example below.
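The RMSE and skill score used above can be illustrated with a short sketch. This is not the authors' code: it assumes the common MSE-based skill-score definition, SS = 1 - MSE_forecast / MSE_reference, takes a persistence forecast as the reference, and uses made-up PHDI values.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean squared error between observations and forecasts."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def skill_score(obs, pred, ref):
    """Skill score relative to a reference forecast:
    1 = perfect, 0 = no improvement over the reference, < 0 = worse."""
    mse_pred = np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)
    mse_ref = np.mean((np.asarray(ref) - np.asarray(obs)) ** 2)
    return 1.0 - mse_pred / mse_ref

# Toy monthly PHDI series: observations, a model forecast, and a persistence
# forecast (previous month's observation carried forward).
obs = np.array([-1.2, -1.8, -2.4, -2.1, -1.5])
model = np.array([-1.0, -1.9, -2.2, -2.3, -1.4])
persistence = np.array([-1.0, -1.2, -1.8, -2.4, -2.1])

print("RMSE(model):", round(rmse(obs, model), 3))
print("SS vs persistence:", round(skill_score(obs, model, persistence), 3))
```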

Co-registration of PET-CT Brain Images using a Gaussian Weighted Distance Map (가우시안 가중치 거리지도를 이용한 PET-CT 뇌 영상정합)

  • Lee, Ho; Hong, Helen; Shin, Yeong-Gil
    • Journal of KIISE: Software and Applications, v.32 no.7, pp.612-624, 2005
  • In this paper, we propose a surface-based registration method using a Gaussian weighted distance map for PET-CT brain image fusion. Our method is composed of three main steps: extraction of feature points, generation of the Gaussian weighted distance map, and weight-based similarity measurement. First, we segment the head using inverse region growing and remove noise attached to the segmented head using region-growing-based labeling in the PET and CT images, respectively, and then extract feature points of the head using a sharpening filter. Second, a Gaussian weighted distance map is generated from the feature points of the CT images; this drives the feature points to converge robustly to the optimal location even under large geometrical displacement. Third, weight-based cross-correlation searches for the optimal location using the Gaussian weighted distance map of the CT images and the feature points extracted from the PET images. In our experiments, we generated a software phantom dataset to evaluate the accuracy and robustness of the method and used a clinical dataset for computation time and visual inspection. The accuracy test evaluates the root mean square error on arbitrarily transformed software phantom datasets, and the robustness test checks whether the weight-based cross-correlation reaches its maximum at the optimal location for phantom datasets with large geometrical displacement and noise. Experimental results showed that our method converges more accurately and robustly than conventional surface-based registration; a small sketch of the Gaussian weighted distance map is given below.
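As a rough illustration of the idea (not the authors' implementation), the sketch below builds a Gaussian weighted distance map from a binary feature-point mask and scores a candidate alignment by summing the CT weights at PET feature positions. The weight form exp(-d²/2σ²), the σ value, and the scoring scheme are assumptions made here for illustration.

```python
import numpy as np
from scipy import ndimage

def gaussian_weighted_distance_map(feature_mask, sigma=5.0):
    """Distance from every pixel to the nearest feature point, converted to a
    Gaussian weight: large near CT feature points, decaying smoothly away."""
    # distance_transform_edt measures distance to the nearest zero pixel,
    # so invert the mask (feature points become zeros).
    dist = ndimage.distance_transform_edt(~feature_mask)
    return np.exp(-(dist ** 2) / (2.0 * sigma ** 2))

# Toy 2D example standing in for a CT slice with a few surface feature points
mask = np.zeros((64, 64), dtype=bool)
mask[20, 30] = mask[40, 15] = mask[50, 50] = True
weights = gaussian_weighted_distance_map(mask, sigma=5.0)

# Weight-based similarity: sum the CT weight map at PET feature positions, so
# a candidate transform scores higher when PET points land near CT features.
pet_points = np.array([[21, 31], [39, 16], [51, 49]])
score = weights[pet_points[:, 0], pet_points[:, 1]].sum()
print("similarity score:", round(float(score), 3))
```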

Boosting the Performance of Python-based Geodynamic Code using the Just-In-Time Compiler (Just-In-Time 컴파일러를 이용한 파이썬 기반 지구동역학 코드 가속화 연구)

  • Park, Sangjin; An, Soojung; So, Byung-Dal
    • Geophysics and Geophysical Exploration, v.24 no.2, pp.35-44, 2021
  • As the execution speed of Python is slower than that of other programming languages (e.g., C, C++, and FORTRAN), Python is not considered efficient for writing numerical geodynamic code that requires numerous iterations. Recently, many computational techniques, such as the Just-In-Time (JIT) compiler, have been developed to enhance the calculation speed of Python. Here, we developed a two-dimensional (2D) numerical geodynamic code, based on Python, that is optimized for the JIT compiler. Our code simulates mantle convection by combining the Particle-In-Cell (PIC) scheme and the finite element method (FEM), both of which are commonly used in geodynamic modeling. We benchmarked our code against well-known mantle convection problems to evaluate its reliability, and confirmed that the root mean square velocity and Nusselt number obtained from our numerical modeling were consistent with those of the benchmark problems. The matrix assembly and PIC processes in our code, when run with the JIT compiler, achieved speed-ups of 30× and 258×, respectively, compared with running without the JIT compiler. Our Python-based FEM-PIC code shows the high potential of Python for geodynamic modeling cases that require complex computations; a minimal JIT example is sketched below.
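The kind of loop-heavy assembly that benefits from a JIT compiler can be sketched as follows. This is not the paper's code: it applies Numba's @njit to a toy lumped-mass assembly over a random triangular mesh, with made-up node positions, connectivity, and density.

```python
import time
import numpy as np
from numba import njit

@njit(cache=True)
def assemble_lumped_mass(nodes, elements, density):
    """Toy FEM-style assembly: accumulate a lumped 'mass' per node by looping
    over triangular elements. Plain Python loops like this are what the JIT
    compiler turns into fast machine code."""
    mass = np.zeros(nodes.shape[0])
    for e in range(elements.shape[0]):
        n0, n1, n2 = elements[e, 0], elements[e, 1], elements[e, 2]
        # element area via the shoelace formula
        area = 0.5 * abs((nodes[n1, 0] - nodes[n0, 0]) * (nodes[n2, 1] - nodes[n0, 1])
                         - (nodes[n2, 0] - nodes[n0, 0]) * (nodes[n1, 1] - nodes[n0, 1]))
        contrib = density * area / 3.0
        mass[n0] += contrib
        mass[n1] += contrib
        mass[n2] += contrib
    return mass

rng = np.random.default_rng(0)
nodes = rng.random((20_000, 2))
elements = rng.integers(0, 20_000, size=(100_000, 3))

assemble_lumped_mass(nodes, elements, 3300.0)   # first call triggers compilation
t0 = time.perf_counter()
assemble_lumped_mass(nodes, elements, 3300.0)   # subsequent calls run compiled code
print("compiled assembly time:", round(time.perf_counter() - t0, 4), "s")
```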

Quantitative precipitation estimation of X-band radar using empirical relationship (경험적 관계식을 이용한 X밴드 레이더의 정량적 강우 추정)

  • Song, Jae In; Lim, Sanghun; Cho, Yo Han; Jeong, Hyeon Gyo
    • Journal of Korea Water Resources Association, v.55 no.9, pp.679-686, 2022
  • As flash floods have become more frequent due to climate change, faster and more accurate precipitation observation using X-band radar has become important. The Ministry of Environment therefore installed two dual-polarization X-band radars at Samcheok and Uljin. The radar data used in this study were obtained from two different elevation angles and composited to reduce the shielding effect. To obtain quantitative rainfall, quality control (QC), KDP retrieval, and Hybrid Surface Rainfall (HSR) methods were applied in sequence. To improve the accuracy of the quantitative precipitation estimation (QPE) of the X-band radar, we retrieved parameters for the relationship between rainfall rate and specific differential phase, commonly called the R-KDP relationship, using an empirical approach based on multiple rain gauges for the two radars. The newly suggested relationship, $R = 27.4K_{DP}^{0.81}$, increased the correlation coefficient slightly (by about 1%) compared with the relationship suggested in a previous study. The root mean square error decreased from 3.88 mm/hr to 3.68 mm/hr, and the bias of the estimated precipitation also decreased from -1.72 mm/hr to -0.92 mm/hr over all cases, showing the improvement of the new method; a small sketch of applying the R-KDP relationship is given below.
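Applying the fitted power law and scoring it against gauges can be sketched as follows. This is not the authors' code: the KDP and gauge values are made up, and the sign handling for negative KDP is an assumption.

```python
import numpy as np

def rain_rate_from_kdp(kdp, a=27.4, b=0.81):
    """R-KDP power law: rainfall rate R [mm/hr] from specific differential
    phase KDP [deg/km], with empirically fitted coefficients a and b."""
    kdp = np.asarray(kdp, dtype=float)
    return a * np.sign(kdp) * np.abs(kdp) ** b

# Toy comparison against rain-gauge observations (made-up values)
kdp = np.array([0.3, 0.8, 1.5, 2.2])        # deg/km
gauge = np.array([9.0, 23.0, 37.0, 52.0])   # mm/hr
radar = rain_rate_from_kdp(kdp)

rmse = np.sqrt(np.mean((radar - gauge) ** 2))
bias = np.mean(radar - gauge)
print("QPE:", radar.round(1), "RMSE:", round(rmse, 2), "bias:", round(bias, 2))
```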

Propensity score methods for estimating treatment delay effects (생존자료분석에서 성향 점수를 이용한 treatment delay effect 추정법에 대한 연구)

  • Jooyi Jung; Hyunjin Song; Seungbong Han
    • The Korean Journal of Applied Statistics, v.36 no.5, pp.415-445, 2023
  • Time-dependent treatment covariates and time-dependent confounders often exist in observational studies, and correctly adjusting for time-dependent confounders is an important problem in propensity score analysis. For survival data, Hade et al. (2020) recently used a propensity score matching method to estimate the treatment delay effect, defined as the effect of the delay in receiving treatment, when a time-dependent confounder affects the time to treatment. In this paper, we propose a Cox-model-based marginal structural model (Cox-MSM) framework to estimate the treatment delay effect and conduct extensive simulation studies to compare the proposed Cox-MSM with the propensity score matching method of Hade et al. (2020). Our simulation results show that the Cox-MSM yields more accurate estimates of the treatment delay effect than two sequential matching schemes based on propensity scores. An example from a study of treatment discontinuation, together with simulated data, illustrates the practical advantages of the proposed Cox-MSM; the weighting idea behind a marginal structural Cox model is sketched below.
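A marginal structural Cox model rests on inverse-probability-of-treatment weights. The sketch below is not the authors' Cox-MSM (which handles a time-dependent treatment): it shows stabilized weights for a single baseline treatment and a weighted Cox fit with lifelines, on fully simulated data, purely to illustrate the weighting step.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                              # confounder
a = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))     # treatment depends on confounder
t = rng.exponential(1 / np.exp(0.5 * x - 0.7 * a))  # survival times
df = pd.DataFrame({"x": x, "a": a, "T": t, "E": np.ones(n, dtype=int)})

# Stabilized inverse-probability-of-treatment weights:
#   numerator   P(A=a)               (marginal treatment probability)
#   denominator P(A=a | confounder)  (propensity score)
ps = LogisticRegression().fit(df[["x"]], df["a"]).predict_proba(df[["x"]])[:, 1]
p_marg = df["a"].mean()
df["w"] = np.where(df["a"] == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

# Weighted Cox model: the confounder is handled by the weights, not the model
cph = CoxPHFitter()
cph.fit(df[["T", "E", "a", "w"]], duration_col="T", event_col="E",
        weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```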

A simulation study for various propensity score weighting methods in clinical problematic situations (임상에서 발생할 수 있는 문제 상황에서의 성향 점수 가중치 방법에 대한 비교 모의실험 연구)

  • Siseong Jeong; Eun Jeong Min
    • The Korean Journal of Applied Statistics, v.36 no.5, pp.381-397, 2023
  • The most representative design used in clinical trials is randomization, which allows the treatment effect to be estimated accurately. However, comparisons between the treatment and control groups in an observational study without randomization are biased by various unadjusted differences, such as patient characteristics. Propensity score weighting is a widely used method to address these problems, minimizing bias by adjusting for confounding when assessing treatment effects. Inverse probability weighting, the most popular method, assigns weights proportional to the inverse of the conditional probability of receiving a specific treatment assignment given the observed covariates. However, this method often suffers from extreme propensity scores, resulting in biased estimates and excessive variance. Several alternative methods, including trimming, overlap weights, and matching weights, have been proposed to mitigate these issues. In this paper, we conduct a simulation study to compare the performance of various propensity score weighting methods under diverse situations, such as limited overlap, a misspecified propensity score, and treatment contrary to prediction. The simulation results show that overlap weights and matching weights consistently outperform inverse probability weighting and trimming in terms of bias, root mean squared error, and coverage probability; the weighting schemes compared are sketched below.
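For reference, the weighting schemes being compared can be written down in a few lines. This is a generic sketch, not the authors' simulation code; the trimming bounds of 0.1 and 0.9 are one common choice and are an assumption here.

```python
import numpy as np

def propensity_weights(a, e, method="ipw", trim=(0.1, 0.9)):
    """Weights for a 0/1 treatment indicator a and propensity score e = P(A=1|X)."""
    a, e = np.asarray(a, float), np.asarray(e, float)
    if method == "ipw":        # inverse probability weighting (ATE)
        return a / e + (1 - a) / (1 - e)
    if method == "trim":       # IPW restricted to a propensity-score range
        w = a / e + (1 - a) / (1 - e)
        return np.where((e >= trim[0]) & (e <= trim[1]), w, 0.0)
    if method == "overlap":    # overlap weights
        return a * (1 - e) + (1 - a) * e
    if method == "matching":   # matching weights
        return np.minimum(e, 1 - e) / (a * e + (1 - a) * (1 - e))
    raise ValueError(method)

# Extreme propensity scores blow up IPW but leave overlap/matching weights bounded
a = np.array([1, 1, 0, 0, 1])
e = np.array([0.97, 0.60, 0.40, 0.03, 0.50])
for m in ("ipw", "trim", "overlap", "matching"):
    print(f"{m:9s}", propensity_weights(a, e, m).round(2))
```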

Fabrication and Characteristics of Zinc Oxide- and Gallium doped Zinc Oxide thin film transistor using Radio Frequency Magnetron sputtering at Room Temperature (Zinc Oxide와 갈륨이 도핑 된 Zinc Oxide를 이용하여 Radio Frequency Magnetron Sputtering 방법에 의해 상온에서 제작된 박막 트랜지스터의 특성 평가)

  • Jeon, Hoon-Ha; Verma, Ved Prakash; Noh, Kyoung-Seok; Kim, Do-Hyun; Choi, Won-Bong; Jeon, Min-Hyon
    • Journal of the Korean Vacuum Society, v.16 no.5, pp.359-365, 2007
  • In this paper we present bottom-gate zinc oxide (ZnO) and gallium (Ga)-doped zinc oxide (GZO) thin film transistors (TFTs) fabricated by radio frequency (RF) magnetron sputtering at room temperature. The gate leakage current can be reduced to several pA by using thermally grown $SiO_2$ instead of new gate oxide materials. The root mean square (RMS) surface roughness values of the ZnO and GZO films were measured as 1.07 nm and 1.65 nm, respectively. The transmittances of the ZnO and GZO films were more than 80% and 75%, respectively, and varied with film thickness. The ZnO and GZO films had a wurtzite structure well aligned along the (002) orientation. The ZnO TFT had a threshold voltage of 2.5 V, a field effect mobility of $0.027\;cm^2/(V{\cdot}s)$, an on/off ratio of $10^4$, and a gate voltage swing of 17 V/decade, and it operated in enhancement mode. The GZO TFT operated in depletion mode with a threshold voltage of -3.4 V, a field effect mobility of $0.023\;cm^2/(V{\cdot}s)$, an on/off ratio of $2{\times}10^4$, and a gate voltage swing of 3.3 V/decade. We successfully demonstrated that TFTs of both enhancement and depletion mode types can be fabricated using pure ZnO and 1 wt% Ga-doped ZnO; the RMS roughness measure is sketched below.
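RMS surface roughness is simply the root mean square of the height deviations from the mean plane of an AFM scan. The snippet below is a generic illustration on a synthetic height map (its 1.07 nm scale only mirrors the reported ZnO value), not the authors' measurement.

```python
import numpy as np

def rms_roughness(height_nm):
    """RMS surface roughness: root mean square of height deviations from the
    mean level, the quantity typically reported from AFM scans."""
    z = np.asarray(height_nm, dtype=float)
    return np.sqrt(np.mean((z - z.mean()) ** 2))

# Toy 256x256 height map standing in for an AFM scan of a sputtered film
rng = np.random.default_rng(0)
height = rng.normal(loc=0.0, scale=1.07, size=(256, 256))   # nm
print(f"RMS roughness: {rms_roughness(height):.2f} nm")
```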

Assembly and Testing of a Visible and Near-infrared Spectrometer with a Shack-Hartmann Wavefront Sensor (샤크-하트만 센서를 이용한 가시광 및 근적외선 분광기 조립 및 평가)

  • Hwang, Sung Lyoung; Lee, Jun Ho; Jeong, Do Hwan; Hong, Jin Suk; Kim, Young Soo; Kim, Yeon Soo; Kim, Hyun Sook
    • Korean Journal of Optics and Photonics, v.28 no.3, pp.108-115, 2017
  • We report the assembly procedure and performance evaluation of a visible and near-infrared spectrometer for the 400-900 nm wavelength region, which is later to be combined with fore-optics (a telescope) to form an f/2.5 imaging spectrometer with a field of view of ${\pm}7.68^{\circ}$. The detector at the final image plane is a $640{\times}480$ charge-coupled device with a $24{\mu}m$ pixel size. The spectrometer is in an Offner relay configuration consisting of two concentric spherical mirrors, the secondary of which is replaced by a convex grating mirror. A double-pass test with an interferometer is often applied when assembling precision optics, but it was excluded from our study because of the large residual wavefront error (WFE) of the optical design, 210 nm root mean square (RMS) ($0.35{\lambda}$ at 600 nm); we therefore used a single-pass test method with a Shack-Hartmann sensor. The final assembly was tested to have an RMS WFE increase of less than 90 nm over the entire field of view, a keystone of 0.08 pixels, a smile of 1.13 pixels, and a spectral resolution of 4.32 nm. During the procedure, we confirmed the validity of using a Shack-Hartmann wavefront sensor to monitor alignment in the assembly of an Offner-like spectrometer; a toy wavefront reconstruction is sketched below.
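A Shack-Hartmann sensor measures local wavefront slopes, from which the wavefront and its RMS error are reconstructed. The sketch below is a deliberately minimal modal reconstruction: a least-squares fit of three low-order polynomial modes to synthetic slope data in arbitrary units. It is not the sensor's or the authors' reconstruction algorithm.

```python
import numpy as np

# Modes over normalized pupil coordinates (x, y in [-1, 1]), each given as
# (wavefront W, dW/dx, dW/dy): defocus, astigmatism, and an x*y term.
modes = [
    (lambda x, y: x**2 + y**2, lambda x, y: 2 * x, lambda x, y: 2 * y),
    (lambda x, y: x**2 - y**2, lambda x, y: 2 * x, lambda x, y: -2 * y),
    (lambda x, y: x * y,       lambda x, y: y,     lambda x, y: x),
]

# Lenslet sample positions inside a circular pupil
x, y = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
inside = x**2 + y**2 <= 1.0
x, y = x[inside], y[inside]

# Synthetic "measured" slopes: mostly defocus plus a little astigmatism + noise
rng = np.random.default_rng(0)
true_coeffs = np.array([0.12, 0.03, 0.0])
sx = sum(c * m[1](x, y) for c, m in zip(true_coeffs, modes)) + 0.002 * rng.normal(size=x.size)
sy = sum(c * m[2](x, y) for c, m in zip(true_coeffs, modes)) + 0.002 * rng.normal(size=x.size)

# Least-squares fit of the mode coefficients to the stacked slope measurements
A = np.vstack([np.concatenate([m[1](x, y), m[2](x, y)]) for m in modes]).T
coeffs, *_ = np.linalg.lstsq(A, np.concatenate([sx, sy]), rcond=None)

# Reconstructed wavefront and its RMS error (piston removed by subtracting the mean)
wavefront = sum(c * m[0](x, y) for c, m in zip(coeffs, modes))
print("fitted coefficients:", coeffs.round(3), " RMS WFE:", round(float(np.std(wavefront)), 4))
```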

Origin-Destination Estimation Based on Cellular Phone's Base Station (휴대폰 기지국 정보를 이용한 O/D 추정기법 연구)

  • Kim, Si-Gon; Yu, Byeong-Seok; Gang, Seung-Pil
    • Journal of Korean Society of Transportation, v.23 no.1, pp.93-102, 2005
  • An origin-destination (O/D) matrix is one of the key inputs for route choice and trip assignment. The household interview survey is the traditional and most widely used method for building a sample O/D and expanding it to a total O/D, and some researchers have studied dynamic O/D estimation from the relationship between link volumes and trip assignment models. Owing to the recent rapid spread of cellular phones, location information obtained through the base station (BS) is now considered an alternative source for O/D estimation. In this study, a methodology for generating a BS-based O/D and a methodology for converting this O/D into an administrative-district-based O/D are proposed. GPS positions and cellular BS positions were acquired by installing GPS equipment and cellular phones on taxis in Cheongju; three weeks of data were collected and used to estimate O/D by matching them on a digital map. Scatter diagrams and sample correlation coefficients were used to investigate the similarity of the GPS-based O/D pattern among weeks, among days, and among times of day. The results show few significant differences among weeks, but the O/D pattern differs between weekday and weekend, and also between the morning peak and the afternoon peak. Two methodologies are proposed to convert the BS-based O/D into an administrative-district-based O/D: one uses the distribution pattern of GPS coordinates, and the other uses the coverage areas of the BSs. To validate the converted O/D, the GPS O/D is used as the true value. Statistical analyses using scatter diagrams, MAE, and RMSE show few significant differences in pattern between the estimated BS-based O/D and the GPS O/D. When only cellular information is available, the methodology using the coverage areas of the BSs is recommended for O/D estimation; a toy version of this coverage-area conversion is sketched below.
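The coverage-area conversion amounts to distributing BS-to-BS trips onto district pairs in proportion to how much of each BS's coverage area falls in each district. The sketch below is a made-up toy example (the share matrix, BS-level O/D, and reference O/D are all invented), not the study's data or method as implemented.

```python
import numpy as np

# Share of each BS coverage area that overlaps each administrative district
# (rows: base stations, columns: districts; each row sums to 1).
bs_to_district = np.array([
    [0.7, 0.3, 0.0],
    [0.0, 0.6, 0.4],
    [0.2, 0.2, 0.6],
])
bs_od = np.array([            # trips between base stations
    [ 0, 40, 10],
    [30,  0, 25],
    [15, 20,  0],
], dtype=float)

# Distribute BS-to-BS trips onto district pairs by the coverage-area shares
district_od = bs_to_district.T @ bs_od @ bs_to_district

# Validate against a reference O/D (here, a GPS-based O/D would be used)
gps_od = np.array([[5, 35, 15], [25, 10, 28], [18, 22, 6]], dtype=float)
err = district_od - gps_od
print("district O/D:\n", district_od.round(1))
print("MAE :", np.abs(err).mean().round(2))
print("RMSE:", np.sqrt((err ** 2).mean()).round(2))
```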

Development of Optimum Traffic Safety Evaluation Model Using the Back-Propagation Algorithm (역전파 알고리즘을 이용한 최적의 교통안전 평가 모형개발)

  • Kim, Joong-Hyo; Kwon, Sung-Dae; Hong, Jeong-Pyo; Ha, Tae-Jun
    • KSCE Journal of Civil and Environmental Engineering Research, v.35 no.3, pp.679-690, 2015
  • In order to minimize damage from traffic accidents, the causes of accidents must be eliminated through engineering improvements of vehicles and road systems. Generally, traffic accidents are more likely to occur on roads that lack safety measures, which can only be improved with tremendous time and cost; in particular, traffic accidents at intersections are on the rise due to inappropriate environmental factors and are causing great losses for the nation as a whole. This study presents safety countermeasures against the causes of accidents by developing an intersection traffic safety evaluation model, and diagnoses vulnerable locations using the back-propagation algorithm (BPA), an artificial neural network technique recently investigated in the field of artificial intelligence. It also aims at more efficient traffic safety improvement projects in terms of operating signalized intersections and establishing traffic safety policies. As a result of this study, the approximate mean square error between the predicted and actually measured numbers of traffic accidents from the BPA is estimated to be 3.89, and the BPA showed superior traffic safety evaluation ability compared to the multiple regression model. In other words, the BPA can be effectively utilized in diagnosing the safety of actual signalized intersections and in establishing practical transportation policy; a small comparison sketch follows below.
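The comparison of a back-propagation network with multiple regression by mean square error can be sketched generically. This is not the authors' model or data: the intersection features, the accident-count relationship, and the network size below are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic intersection data: entering volume, conflict points, cycle length
# -> accident count with a mildly nonlinear (made-up) relationship.
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.uniform(5, 60, n),      # entering volume (thousand veh/day)
    rng.integers(4, 33, n),     # number of conflict points
    rng.uniform(60, 180, n),    # signal cycle length (s)
])
y = 0.05 * X[:, 0] + 0.002 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Back-propagation network (multilayer perceptron) vs. multiple regression
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                 random_state=0)).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)

print("MSE (back-propagation MLP):", round(mean_squared_error(y_te, mlp.predict(X_te)), 3))
print("MSE (multiple regression) :", round(mean_squared_error(y_te, lin.predict(X_te)), 3))
```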