• Title/Summary/Keyword: Gaussian difference (가우시안 차이)

Search Results: 75

Error Analysis of Waterline-based DEM in Tidal Flats and Probabilistic Flood Vulnerability Assessment using Geostatistical Simulation (지구통계학적 시뮬레이션을 이용한 수륙경계선 기반 간석지 DEM의 오차 분석 및 확률론적 침수 취약성 추정)

  • KIM, Yeseul;PARK, No-Wook;JANG, Dong-Ho;YOO, Hee Young
    • Journal of The Geomorphological Association of Korea / v.20 no.4 / pp.85-99 / 2013
  • The objective of this paper is to analyze the spatial distribution of errors in a DEM generated from waterlines extracted from multi-temporal remote sensing data, and to assess flood vulnerability. Unlike conventional research, in which only global error statistics are reported, this paper quantitatively analyzes the spatial distribution of errors from a probabilistic viewpoint using geostatistical simulation. The initial DEM of the Baramarae tidal flats was generated from corrected tidal-level values and waterlines extracted from multi-temporal Landsat data acquired in the 2010s. When compared with ground-surveyed height data, the waterline-based DEM underestimated the actual heights overall, and local variations in the errors were observed. By applying sequential Gaussian simulation based on the spatial autocorrelation of the DEM errors, multiple alternative error distributions were generated. After correcting the initial DEM with the simulated error distributions, flood-vulnerability probabilities were estimated under the IPCC SRES sea-level-rise scenarios. The error-analysis methodology based on geostatistical simulation can model both the uncertainty of the error assessment and error-propagation problems within a probabilistic framework. It is therefore expected that the methodology applied in this paper can be used effectively both for the error assessment of waterline-based DEMs in tidal flats and for the probabilistic assessment of errors in various thematic maps.
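The probabilistic workflow the abstract describes (simulate equally probable, spatially autocorrelated error fields, correct the DEM with each realization, then count flooded realizations per cell) can be sketched in NumPy. This is a minimal illustration using Cholesky-based unconditional Gaussian simulation as a simple stand-in for sequential Gaussian simulation; the 1-D transect, exponential covariance model, and sea-level value are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D transect of an initial waterline-based DEM (heights in m).
n = 50
x = np.arange(n, dtype=float)
dem = 0.5 + 0.01 * x            # gently sloping tidal flat (illustrative)

# Spatially autocorrelated Gaussian errors: exponential covariance model.
sill, corr_range = 0.05**2, 10.0   # variance and correlation range (assumed)
cov = sill * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_range)
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

# Multiple equally probable error realizations (unconditional simulation).
n_real = 500
errors = (L @ rng.standard_normal((n, n_real))).T   # (n_real, n)

# Correct the DEM with each realization and estimate flood probability
# under a hypothetical sea-level-rise scenario.
sea_level = 0.6                                     # assumed scenario value (m)
flooded = (dem + errors) < sea_level                # boolean per realization
flood_prob = flooded.mean(axis=0)                   # probability per cell
```

The per-cell flood probability replaces the single yes/no answer a deterministic DEM would give, which is the point of the paper's probabilistic framework.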

Effect of Noise on Density Differences of Tissue in Computed Tomography (컴퓨터 단층촬영의 조직간 밀도차이에 대한 노이즈 영향)

  • Yang, Won Seok;Son, Jung Min;Chon, Kwon Su
    • Journal of the Korean Society of Radiology / v.12 no.3 / pp.403-407 / 2018
  • Currently, the cancer with the highest death rate in Korea is lung cancer, a typical cancer that is difficult to detect early. Low-dose chest CT is used for early detection and yields a lung cancer diagnosis rate roughly three times that of conventional chest X-ray images. However, low-dose chest CT not only significantly reduces image resolution but also produces a weak signal that is sensitive to noise. In addition, the air-filled lungs are low-density organs, so the presence of noise can significantly affect the early diagnosis of cancer. In this study, Visual C++ was used to construct a phantom consisting of a large circle with a density of 2.0 on a background with a density of 1.0 (the density of water), containing five small circles of differing densities. Gaussian noise of 1%, 2%, 3%, and 4% was added to determine its effect on the mean value, the standard deviation, and the signal-to-noise ratio (SNR). With 1% noise, the SNR was 4.669 in the region where the density difference between the large and small circles was greatest, and 1.183 in the region where the density difference was smallest. The SNR was similarly high whether the density difference was positive or negative. Image quality was also clearly better when the density difference was large, and as the noise level increased, the SNR decreased and the noise became a significant factor. For low-density organs, or organs whose density is similar to that of a tumor, noise effects will therefore be substantial, and the influence of the density difference on the noise will affect diagnosis.
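The phantom experiment can be sketched in NumPy (the study itself used Visual C++). The geometry, the single small-circle density of 2.5, and the ROI placement are assumptions for illustration, and SNR is computed here simply as the ROI mean divided by the ROI standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_phantom(size=256, bg=1.0, big=2.0):
    """Large circle of density `big` on a water background (`bg`), with a
    small inner circle of a differing density (value assumed)."""
    yy, xx = np.mgrid[:size, :size]
    img = np.full((size, size), bg)
    img[(xx - 128)**2 + (yy - 128)**2 < 100**2] = big
    img[(xx - 128)**2 + (yy - 128)**2 < 20**2] = 2.5   # small circle
    return img

def snr_in_roi(img, noise_pct):
    """Add Gaussian noise of the given percentage of the maximum density and
    return mean, standard deviation, and SNR inside the small-circle ROI."""
    noisy = img + rng.normal(0.0, noise_pct / 100.0 * img.max(), img.shape)
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    roi = (xx - 128)**2 + (yy - 128)**2 < 20**2
    m, s = noisy[roi].mean(), noisy[roi].std()
    return m, s, m / s

# Repeat the measurement at the study's four noise levels.
for pct in (1, 2, 3, 4):
    mean, std, snr = snr_in_roi(make_phantom(), pct)
```

As in the abstract, the SNR falls as the noise percentage rises, since the ROI mean stays fixed while its standard deviation grows with the noise.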

Gaussian Filtering Effects on Brain Tissue-masked Susceptibility Weighted Images to Optimize Voxel-based Analysis (화소 분석의 최적화를 위해 자화감수성 영상에 나타난 뇌조직의 가우시안 필터 효과 연구)

  • Hwang, Eo-Jin;Kim, Min-Ji;Jahng, Geon-Ho
    • Investigative Magnetic Resonance Imaging / v.17 no.4 / pp.275-285 / 2013
  • Purpose: The objective of this study was to investigate the effects of different smoothing kernel sizes on brain tissue-masked susceptibility-weighted images (SWI) obtained from normal elderly subjects using voxel-based analyses. Materials and Methods: Twenty healthy volunteers (mean age ± SD = 67.8 ± 6.09 years; 14 females, 6 males) were studied after informed consent. A fully first-order flow-compensated three-dimensional (3D) gradient-echo sequence was run to obtain axial magnitude and phase images for generating SWI data. In addition, sagittal 3D T1-weighted images were acquired with a magnetization-prepared rapid acquisition of gradient-echo sequence for brain tissue segmentation and image registration. Both paramagnetically (PSWI) and diamagnetically (NSWI) phase-masked SWI data were obtained with non-brain tissues masked out. Finally, both tissue-masked PSWI and NSWI data were smoothed with isotropic Gaussian kernels of 0, 2, 4, and 8 mm. Voxel-based comparisons between PSWI and NSWI were performed using a paired t-test for each smoothing kernel size. Results: The significance of the comparisons increased with increasing smoothing kernel size. Signals from NSWI were greater than those from PSWI. A smoothing kernel size of 4 mm was optimal for the voxel-based comparisons. Bilaterally different areas were found in multiple brain regions. Conclusion: The paramagnetic (positive) phase mask reduced signals from high-susceptibility areas. To minimize partial-volume effects and the contribution of large vessels, voxel-based analysis of SWI with non-brain components masked should be used.
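The smooth-then-compare step can be sketched with SciPy: Gaussian smoothing at each kernel size followed by a voxel-wise paired t-test across subjects. The synthetic PSWI/NSWI values, the 2-D grid, and the 2 mm voxel size are assumptions for illustration; sigma is derived from the kernel FWHM as FWHM / 2.355.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Hypothetical PSWI/NSWI maps for 20 subjects on a small 2-D grid.
n_subj, shape = 20, (32, 32)
pswi = rng.normal(100.0, 5.0, (n_subj, *shape))
nswi = pswi + rng.normal(2.0, 5.0, (n_subj, *shape))   # NSWI > PSWI on average

voxel_mm = 2.0                                # assumed isotropic voxel size
for fwhm_mm in (0, 2, 4, 8):                  # kernel sizes from the study
    if fwhm_mm == 0:
        sp, sn = pswi, nswi                   # no smoothing
    else:
        sigma = fwhm_mm / (2.355 * voxel_mm)  # FWHM -> Gaussian sigma (voxels)
        sp = np.stack([gaussian_filter(v, sigma) for v in pswi])
        sn = np.stack([gaussian_filter(v, sigma) for v in nswi])
    # paired t-test at every voxel across subjects
    t, p = ttest_rel(sn, sp, axis=0)
```

Because smoothing suppresses voxel noise while preserving the mean difference, the t-statistics grow with kernel size, matching the abstract's observation that significance increased with the smoothing kernel.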

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE: Software and Applications / v.32 no.11 / pp.1021-1028 / 2005
  • A general solution for classification and regression problems can be found by building matrices that match the information in the real world and then learning these matrices in neural networks. This paper treats the primary space as the real world, and the dual space as the space to which the primary space is mapped through a kernel. In practice there are two kinds of problems: complete systems, whose answer can be obtained using the inverse matrix, and ill-posed or singular systems, whose answer cannot be obtained directly from the inverse of the given matrix. Moreover, problems are often of the latter kind; it is therefore necessary to find a regularization parameter that converts an ill-posed or singular problem into a complete system. This paper compares the performance, on both classification and regression problems, of GCV and the L-curve, which are well-known methods for obtaining regularization parameters, against kernel methods. Both GCV and the L-curve yield good regularization parameters with similar performance, although their results differ somewhat depending on the conditions of the problem. However, these are two-step solutions: the regularization parameter must be computed first, and only then can the given problem be solved by another method. Compared with GCV and the L-curve, kernel methods are a one-step solution that learns the regularization parameter simultaneously with the pattern weights during training. This paper also suggests a dynamic momentum, learned under a bounded proportional relation between the learning epoch and the performance on the given problem, to increase the performance and precision of the regularization.
Finally, this paper shows that the suggested solution obtains better or equivalent results compared with GCV and the L-curve in experiments using the Iris data, a standard benchmark for classification; Gaussian data, which are typical of singular systems; and the Shaw data, a one-dimensional image-restoration problem.
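The two-step route the paper compares against (pick a Tikhonov regularization parameter by GCV, then solve) can be sketched for a Shaw-like near-singular system. The matrix construction, noise level, and search grid below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small ill-conditioned (near-singular) system A x = b, in the spirit of
# the Shaw test problem; all sizes and values are illustrative.
n = 12
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)                 # rapidly decaying singular values
A = U @ np.diag(s) @ V.T

c = np.exp(-np.arange(n))                 # smooth solution coefficients
x_true = V @ c
b = A @ x_true + 1e-6 * rng.standard_normal(n)

def gcv(lam):
    """Generalized cross-validation score for the Tikhonov parameter lam."""
    filt = s**2 / (s**2 + lam**2)         # Tikhonov filter factors
    beta = U.T @ b
    resid = np.sum(((1.0 - filt) * beta) ** 2)
    dof = n - np.sum(filt)                # effective residual degrees of freedom
    return resid / dof**2

# One-dimensional search over a logarithmic grid of candidate parameters.
lams = 10.0 ** np.linspace(-10, 0, 200)
lam_best = lams[np.argmin([gcv(l) for l in lams])]

# Tikhonov-regularized solution with the GCV-chosen parameter.
x_reg = V @ (s / (s**2 + lam_best**2) * (U.T @ b))
```

A direct `np.linalg.solve(A, b)` amplifies the noise through the tiny singular values, while the regularized solution filters those components out; this is the "complete system" conversion the abstract refers to.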

Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.6 / pp.486-491 / 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to apply this method to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans on an ECAT EXACT 47 scanner and myocardial perfusion SPECT on a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of 555-740 MBq of $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and the lower bound is maximized by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate a non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in nine regions: the apex, four areas in the mid wall, and four areas in the base wall. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: The major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of the 20 patients. Mean myocardial blood flow was 1.2 ± 0.40 ml/min/g at rest and 1.85 ± 1.12 ml/min/g under stress. Blood flow values obtained by one operator on two different occasions were highly correlated (r = 0.99).
In the myocardium component image, the image contrast between the left ventricle and the myocardium was 1:2.7 on average. The perfusion reserve differed significantly between regions with and without stenosis detected by coronary angiography (P < 0.01). In the 66 segments with stenosis confirmed by angiography, segments showing a reversible perfusion decrease on perfusion SPECT had lower perfusion-reserve values in $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained with nuclear medicine techniques.
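Separating dynamic-PET pixel curves into underlying component time courses can be sketched with plain symmetric FastICA, used here as a stand-in for the paper's ensemble (variational Bayesian) ICA with a rectified Gaussian prior. The blood-pool and myocardium time-activity curves, the mixing model, and the noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical time-activity curves: blood pool (fast washout) and
# myocardium (slower uptake); shapes and constants are illustrative only.
t = np.linspace(0.0, 120.0, 60)
blood = np.exp(-t / 15.0)
myo = (1.0 - np.exp(-t / 30.0)) * np.exp(-t / 200.0)
S = np.vstack([blood, myo])                      # true sources, (2, 60)

# Each "pixel" curve is a non-negative mixture of the two kinetics plus noise.
A = rng.uniform(0.2, 1.0, (500, 2))              # mixing (pixels x sources)
X = A @ S + 0.01 * rng.standard_normal((500, 60))

# Centering and PCA whitening down to two components.
Xc = X - X.mean(axis=1, keepdims=True)
U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = np.sqrt(Xc.shape[1]) * Vt[:2]                # whitened data, (2, 60)

# Symmetric FastICA with a tanh contrast.
W = rng.standard_normal((2, 2))
for _ in range(200):
    WZ = np.tanh(W @ Z)
    W_new = (WZ @ Z.T) / Z.shape[1] - np.diag((1 - WZ**2).mean(axis=1)) @ W
    ew, ev = np.linalg.eigh(W_new @ W_new.T)     # symmetric decorrelation:
    W = ev @ np.diag(ew**-0.5) @ ev.T @ W_new    # W <- (W W^T)^(-1/2) W

S_est = W @ Z   # estimated component time courses (up to sign and order)
```

The ensemble ICA of the paper additionally enforces non-negativity through the rectified Gaussian posterior, which plain FastICA does not; the recovered components here are defined only up to sign and ordering.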