• Title/Abstract/Keywords: k-mean algorithm

Search Results: 1,274 (Processing Time: 0.03 seconds)

Quantitative Assessment Technology of Small Animal Myocardial Infarction PET Image Using Gaussian Mixture Model (다중가우시안혼합모델을 이용한 소동물 심근경색 PET 영상의 정량적 평가 기술)

  • Woo, Sang-Keun;Lee, Yong-Jin;Lee, Won-Ho;Kim, Min-Hwan;Park, Ji-Ae;Kim, Jin-Su;Kim, Jong-Guk;Kang, Joo-Hyun;Ji, Young-Hoon;Choi, Chang-Woon;Lim, Sang-Moo;Kim, Kyeong-Min
    • Progress in Medical Physics / v.22 no.1 / pp.42-51 / 2011
  • Nuclear medicine images (SPECT, PET) are widely used tools for the assessment of myocardial viability and perfusion. However, it is difficult to define the myocardial infarct region accurately. The purpose of this study was to investigate a methodological approach for automatic measurement of rat myocardial infarct size using a polar map with an adaptive threshold. A rat myocardial infarction model was induced by ligation of the left circumflex artery. PET images were obtained after intravenous injection of 37 MBq ¹⁸F-FDG. After 60 min of uptake, each animal was scanned for 20 min with ECG gating. PET data were reconstructed using 2D ordered-subset expectation maximization (OSEM). To automatically delineate the myocardial contour and generate the polar map, we used QGS software (Cedars-Sinai Medical Center). The reference infarct size was defined as the percentage of infarcted area within the total left myocardium, measured by TTC staining. We used three threshold methods: a predefined threshold, Otsu's method, and a multi-Gaussian mixture model (MGMM). The predefined threshold method is commonly used in other studies; we applied threshold values from 10% to 90% in steps of 10%. The Otsu algorithm selects the threshold that maximizes the between-class variance. The MGMM method estimates the distribution of image intensity using multiple Gaussian mixture models (MGMM2, …, MGMM5) and calculates an adaptive threshold. The infarct size in the polar map was calculated as the percentage of the area below the threshold relative to the total polar map area. The infarct sizes measured with the different threshold methods were evaluated by comparison with the reference infarct size. The mean differences between the polar map defect size obtained with the predefined thresholds (20%, 30%, and 40%) and the reference infarct size were 7.04±3.44%, 3.87±2.09%, and 2.15±2.07%, respectively; for Otsu versus the reference infarct size it was 3.56±4.16%, and for MGMM versus the reference infarct size it was 2.29±1.94%. The predefined threshold (30%) showed the smallest mean difference from the reference infarct size. However, MGMM was more accurate than the predefined threshold for reference infarct sizes under 10% (MGMM: 0.006%, predefined threshold: 0.59%). In this study, we evaluated myocardial infarct size in the polar map using a multi-Gaussian mixture model. The MGMM method provides an adaptive threshold for each subject and will be useful for automatic measurement of infarct size.
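
As a rough illustration of the adaptive-threshold idea (not the authors' QGS-based pipeline), the sketch below fits a 1-D Gaussian mixture to polar-map intensities and places a threshold between the two lowest-mean components, alongside an Otsu threshold for comparison; all names and the toy data are hypothetical, and scikit-learn/scikit-image are assumed available.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage.filters import threshold_otsu

def gmm_adaptive_threshold(intensities, n_components=2):
    """Fit a 1-D Gaussian mixture and put the threshold midway between the
    lowest-mean (defect) and next (viable myocardium) component means."""
    x = np.asarray(intensities, float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    lo, hi = np.sort(gmm.means_.ravel())[:2]
    return 0.5 * (lo + hi)

# Toy polar-map intensities: a low-uptake defect mixed with normal tissue.
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(0.2, 0.05, 300), rng.normal(0.8, 0.1, 700)])
t_gmm, t_otsu = gmm_adaptive_threshold(vals), threshold_otsu(vals)
infarct_pct = 100.0 * np.mean(vals < t_gmm)  # defect area as % of polar map
print(f"GMM threshold={t_gmm:.3f}, Otsu={t_otsu:.3f}, infarct={infarct_pct:.1f}%")
```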

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics / v.23 no.2 / pp.91-98 / 2012
  • Verification of internal organ motion during treatment, and its feedback, is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and availability through a phantom study. For verification of organ motion using live cine EPID images, a pattern-matching algorithm using an internal surrogate that is easily distinguishable and represents organ motion in the treatment field, such as the diaphragm, was employed in the self-developed analysis software. For the system performance test, we developed a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 s per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per second (2 MU/frame) with 1,024×768 pixels on a linear accelerator (10 MV X-ray). Organ motion of the target was tracked using the self-developed analysis software. To evaluate accuracy, the results were compared with the planned data of the motion phantom and with data from a video-image-based tracking system (RPM, Varian, USA) using an external surrogate. For quantitative analysis, we analyzed the correlation between the two data sets in terms of the average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average cycles of motion from the IMVS and the RPM system were 3.98±0.11 s (IMVS, 3.3 fps), 4.005±0.001 s (IMVS, 6.6 fps), and 3.95±0.02 s (RPM), respectively, in good agreement with the real value (4 s/cycle). The average amplitudes of motion tracked by our system were 1.85±0.02 cm (3.3 fps) and 1.94±0.02 cm (6.6 fps), differing slightly from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, due to the time resolution of image acquisition. In the analysis of the motion pattern, the RMS value from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes, including verification of organ motion during treatment against 4D treatment planning data, with feedback for accurate dose delivery to the moving target.
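
The paper's in-house tracking software is not described at code level; as a generic stand-in for the pattern-matching step, the sketch below tracks an internal surrogate (e.g., a diaphragm template) across cine EPID frames by normalized cross-correlation with OpenCV. The frame source and crop coordinates are hypothetical.

```python
import cv2
import numpy as np

def track_surrogate(frames, template):
    """Locate a surrogate template in each EPID frame via normalized
    cross-correlation; returns the best-match (x, y) position per frame."""
    positions = []
    for frame in frames:
        score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)  # top-left corner of best match
        positions.append(max_loc)
    return np.array(positions)

# Hypothetical usage: crop a template around the diaphragm in the first frame.
# import glob
# frames = [cv2.imread(f, cv2.IMREAD_GRAYSCALE)
#           for f in sorted(glob.glob("cine_epid/*.png"))]
# template = frames[0][400:460, 300:420]
# y_motion = track_surrogate(frames[1:], template)[:, 1]  # superior-inferior
```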

The Comparative Study of NHPP Software Reliability Model Based on Exponential and Inverse Exponential Distribution (지수 및 역지수 분포를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.2 / pp.133-140 / 2016
  • Software reliability is an important issue in the software development process. Software process improvement helps in finishing with a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, we propose reliability models based on the exponential and inverse exponential distributions, which are efficient for software reliability applications. Parameters were estimated using the maximum likelihood estimator together with the bisection method, and model selection was based on the mean squared error (MSE) and the coefficient of determination (R²). Failure analysis using a real data set was performed to compare the properties of the exponential and inverse exponential distributions. To verify the reliability of the data, the Laplace trend test was employed. In this study, it was confirmed that the inverse exponential distribution model is also efficient in terms of reliability (coefficient of determination of 80% or more) and can be used as an alternative to the conventional model. Based on this work, software developers should consider the life distribution, informed by prior knowledge of the software, to help identify failure modes.
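
As a minimal sketch of the model-selection criteria named above (not the authors' estimation code), the snippet below computes MSE and R² between observed cumulative failure counts and a fitted mean value function; the data and fitted values are hypothetical placeholders.

```python
import numpy as np

def mse_r2(observed, fitted):
    """MSE and coefficient of determination between observed cumulative
    failure counts and a model's mean value function."""
    observed, fitted = np.asarray(observed, float), np.asarray(fitted, float)
    resid = observed - fitted
    mse = np.mean(resid**2)
    r2 = 1.0 - np.sum(resid**2) / np.sum((observed - observed.mean())**2)
    return mse, r2

# Hypothetical failure data and two candidate mean value functions.
n_obs = np.array([2, 5, 9, 12, 16, 19, 21, 24])
m_exp = np.array([2.3, 5.4, 8.6, 12.1, 15.5, 18.7, 21.6, 24.2])
m_inv = np.array([1.8, 5.9, 9.5, 12.4, 15.1, 18.0, 21.2, 24.6])
for name, m in [("exponential", m_exp), ("inverse exponential", m_inv)]:
    mse, r2 = mse_r2(n_obs, m)
    print(f"{name}: MSE={mse:.3f}, R^2={r2:.3f}")  # lower MSE / higher R^2 wins
```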

Comparison of the sound source localization methods appropriate for a compact microphone array (소형 마이크로폰 배열에 적용 가능한 음원 위치 추정법 비교)

  • Jung, In-Jee;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea / v.39 no.1 / pp.47-56 / 2020
  • The sound source localization technique has various application fields in the era of the Internet of Things, for which the probe size becomes critical. Localization methods using the acoustic intensity vector have the advantage of a downsized array layout, owing to the small finite-difference error over the short distance between adjacent microphones. In this paper, the acoustic intensity vector and the Time Difference of Arrival (TDoA) method are compared from the viewpoint of localization error in the far field. The comparison is made according to the spacing between adjacent microphones of a three-dimensional microphone array arranged in a tetrahedral shape. An additional test is conducted in a reverberant field by varying the reverberation time, to verify the effectiveness of the methods in actual environments. For estimating the TDoA, the Generalized Cross-Correlation Phase Transform (GCC-PHAT) algorithm is adopted in the computation. It is found that the mean localization error of the acoustic intensimetry is 2.9° and that of the GCC-PHAT is 7.3° for T60 = 0.4 s, while the errors increase to 9.9° and 13.0°, respectively, for T60 = 1.0 s. The data support the conclusion that a compact array employing acoustic intensimetry can localize the sound source in actual environments with moderate reflection conditions.
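
The abstract names GCC-PHAT for TDoA estimation; below is a common textbook-style NumPy sketch (not the authors' code). The cross-power spectrum is whitened by its magnitude so that only phase information contributes to the correlation peak, which makes the estimator robust to reverberation.

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Estimate how much y lags x (in seconds) using GCC-PHAT."""
    n = len(x) + len(y)                      # zero-pad to avoid circular wrap
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    R = Y * np.conj(X)                       # cross-power spectrum
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)  # PHAT: keep phase only
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Sanity check: y lags x by 25 samples, so the estimate should be 25/fs s.
fs = 16000
x = np.random.default_rng(0).standard_normal(4096)
y = np.roll(x, 25)                           # circular delay is fine here
print(round(gcc_phat(x, y, fs) * fs))        # -> 25
```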

Improvement of the PFCM(Possibilistic Fuzzy C-Means) Clustering Method (PFCM 클러스터링 기법의 개선)

  • Heo, Gyeong-Yong;Choe, Se-Woon;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.1 / pp.177-185 / 2009
  • Cluster analysis, or clustering, is a kind of unsupervised learning method in which a set of data points is divided into a given number of homogeneous groups. Fuzzy clustering, one of the most popular clustering methods, allows a point to belong to all clusters with different degrees, and thus produces more intuitive and natural clusters than hard clustering methods do. Moreover, some fuzzy clustering variants are noise-immune. In this paper, we improve the Possibilistic Fuzzy C-Means (PFCM) method, which generates a typicality matrix as well as a membership matrix, using the Gath-Geva (GG) method. The proposed method focuses on the boundaries of clusters, unlike most other methods, which focus on the centers of clusters. The generated membership values are suitable for classification-type applications. As the typicality values generated by the algorithm have a distribution similar to the density function of a Gaussian distribution, the method is also useful for Gaussian-type density estimation. Moreover, the GG method can handle clusters having different numbers of data points, which the other well-known method, by Gustafson and Kessel, cannot. All of these points are evident in the experimental results.
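
For context, a minimal fuzzy c-means (FCM) iteration is sketched below; PFCM extends this scheme with a typicality matrix, and the paper's GG-based variant further changes the distance measure, neither of which is shown here. All names and the toy data are illustrative.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U (n points x c clusters), with fuzzifier m > 1."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # each row sums to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # fuzzy-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # membership update
    return centers, U

# Two toy blobs; memberships are near 0/1 at the cores, graded in between.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fcm(X, c=2)
print(centers)
```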

Automatic Clustering on Trained Self-organizing Feature Maps via Graph Cuts (그래프 컷을 이용한 학습된 자기 조직화 맵의 자동 군집화)

  • Park, An-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.35 no.9 / pp.572-587 / 2008
  • The Self-Organizing Feature Map (SOFM), one of the unsupervised neural networks, is a very powerful tool for data clustering and visualization of high-dimensional data sets. Although the SOFM has been applied to many engineering problems, similar weights on the trained SOFM need to be clustered into one class as a post-processing step, which in many cases is performed manually. Traditional clustering algorithms applied to the trained SOFM, such as k-means, do not yield satisfactory results, however, especially when clusters have arbitrary shapes. This paper proposes automatic clustering of a trained SOFM that can deal with arbitrary cluster shapes and is globally optimized by graph cuts. When using graph cuts, the graph must have two additional vertices, called terminals, and the weights between the terminals and the vertices of the graph are generally set based on data obtained manually from users. The proposed method sets these weights automatically, based on mode-seeking on a distance matrix. Experimental results demonstrate the effectiveness of the proposed method in texture segmentation. In the experiments, the proposed method improved precision rates compared with traditional clustering algorithms, as it can deal with arbitrary cluster shapes based on graph-theoretic clustering.
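
As a sketch of the baseline the paper argues against (k-means post-processing of a trained map, not the proposed graph-cut method), one might cluster the SOFM codebook vectors directly; `weights` below is a stub standing in for a trained map's weight array.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stub for a trained SOFM weight array of shape (rows, cols, input_dim).
rows, cols, dim = 10, 10, 3
weights = np.random.default_rng(0).random((rows, cols, dim))

codebook = weights.reshape(-1, dim)            # one vector per map unit
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codebook)
label_map = labels.reshape(rows, cols)         # cluster id per SOFM unit
print(label_map)  # contiguity/arbitrary shapes are NOT guaranteed by k-means
```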

Optimized DSP Implementation of Audio Decoders for Digital Multimedia Broadcasting (디지털 방송용 오디오 디코더의 DSP 최적화 구현)

  • Park, Nam-In;Cho, Choong-Sang;Kim, Hong-Kook
    • Journal of Broadcast Engineering / v.13 no.4 / pp.452-462 / 2008
  • In this paper, we address issues associated with the real-time implementation of the MPEG-1/2 Layer-II (or MUSICAM) and MPEG-4 ER-BSAC decoders for Digital Multimedia Broadcasting (DMB) on a TMS320C64x+, a fixed-point DSP processor with a clock speed of 330 MHz. To meet the real-time requirement, the decoders are optimized in several steps, as follows. First, C-code-level optimization is performed by sharing memory, adjusting data types, and unrolling loops. Next, algorithm-level optimization is carried out, including reconfiguration of bitstream reading, modification of synthesis filtering, and rearrangement of the window coefficients for synthesis filtering. In addition, the C code of the synthesis filtering module of the MPEG-1/2 Layer-II decoder is rewritten using the linear assembly programming technique, because the synthesis filtering module requires the most processing time among all modules of the decoder. To demonstrate the real-time implementation, we measure the percentage of processing time spent on decoding and compute an RMS value between the audio signals decoded by the reference MPEG decoder and by the DSP version implemented in this paper. As a result, the processing times of the MPEG-1/2 Layer-II and MPEG-4 ER-BSAC decoders occupy less than 3% and 11% of the DSP clock cycles, respectively, and the RMS values of both decoders satisfy the criterion of -77.01 dB defined by the MPEG standards.
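
A sketch of the conformance check described above (RMS of the difference between reference and DSP decoder outputs, expressed in dB relative to full scale) might look like the following; the signal arrays are placeholders, not actual decoder output.

```python
import numpy as np

def rms_error_db(ref, test):
    """RMS of the sample-wise difference between two decoded signals,
    in dB relative to full scale (signals normalized to [-1, 1))."""
    diff = np.asarray(ref, float) - np.asarray(test, float)
    rms = np.sqrt(np.mean(diff ** 2))
    return 20.0 * np.log10(rms + 1e-30)

# Placeholder signals: the "DSP" output differs from the reference only by
# tiny fixed-point rounding noise, so the result lands far below -77.01 dB.
rng = np.random.default_rng(0)
ref = rng.uniform(-1, 1, 48000)
test = ref + rng.normal(0, 1e-6, ref.size)
print(f"{rms_error_db(ref, test):.2f} dB")  # pass if below the MPEG criterion
```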

A simple approach to refraction statics with the Generalized Reciprocal Method and the Refraction Convolution Section (GRM과 RCS 방법을 이용한 굴절파 정적 시간차를 구하는 간단한 방법)

  • Palmer, Derecke;Jones, Leonie
    • Geophysics and Geophysical Exploration / v.8 no.1 / pp.18-25 / 2005
  • We derive refraction statics for seismic data recorded in a hard-rock terrain in which there are large and rapid variations in the depth of weathering. The statics corrections range from less than 10 ms to more than 70 ms, often over distances as short as 12 receiver intervals. This study is another demonstration of the importance of obtaining accurate initial refraction models of the weathering in hard-rock terrains, where automatic residual statics may fail. We show that the statics values computed with a simple model of the weathering, using the Generalized Reciprocal Method (GRM) and the Refraction Convolution Section (RCS), are comparable in accuracy to those computed with a more complex model of the weathering using least-mean-squares inversion with the conjugate gradient algorithm (Taner et al., 1998). The differences in statics values between the GRM model and that of Taner et al. (1998) vary systematically from an average of 2 ms to 4 ms over a distance of 8.8 km. The differences between these two refraction models and the final statics model, which includes the automatic residual values, are generally less than 5 ms. The residuals for the GRM model are frequently less than those for the model of Taner et al. (1998). The RCS statics are picked approximately 10 ms later, but their relative accuracy is comparable to that of the GRM statics. The residual statics values show a general correlation with the refraction statics values, and they can be reduced in magnitude by using a lower average seismic velocity in the weathering. These results suggest that inaccurate average seismic velocities in the weathered layer, rather than any shortcomings of the inversion algorithms in determining averaged delay times from the traveltimes, may often be a source of short-wavelength statics.
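
As a simple illustration of why the weathering velocity matters for statics (not the GRM or RCS computation itself), the sketch below converts a refraction delay time into a weathering depth and a static shift under a standard two-layer assumption; all velocities and the delay time are hypothetical.

```python
import numpy as np

def weathering_depth(delay, v1, v2):
    """Two-layer refraction: depth from the delay time,
    z = delay * v1 * v2 / sqrt(v2**2 - v1**2)."""
    return delay * v1 * v2 / np.sqrt(v2**2 - v1**2)

def static_shift(z, v1, v2):
    """Time removed by replacing the weathered layer (v1) with the
    bedrock velocity (v2) down to depth z."""
    return z / v1 - z / v2

delay, v2 = 0.020, 4500.0         # 20 ms delay time, hard-rock velocity
for v1 in (600.0, 900.0):         # two candidate weathering velocities
    z = weathering_depth(delay, v1, v2)
    print(f"v1={v1:.0f} m/s -> z={z:.1f} m, "
          f"static={1e3 * static_shift(z, v1, v2):.1f} ms")
# The same picked delay time yields different statics for different v1,
# consistent with the paper's point about inaccurate weathering velocities.
```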

High-Frequency Bottom Loss Measured at Near-Normal Incidence Grazing Angle in Jinhae Bay (진해만에서 측정된 높은 수평입사각에서의 고주파 해저면 반사손실)

  • La, Hyoung-Sul;Park, Chi-Hyung;Cho, Sung-Ho;Choi, Jee-Woong;Na, Jung-Yul;Yoon, Kwan-Seob;Park, Kyung-ju;Park, Joung-Soo
    • The Journal of the Acoustical Society of Korea / v.29 no.4 / pp.223-228 / 2010
  • High-frequency bottom loss measurements at a grazing angle of 82°, over the frequency range 17-40 kHz, were made in Jinhae Bay, in the southern part of Korea. The observed bottom loss showed strong variation as a function of frequency and was compared with values predicted using a two-layered sediment reflection model. The geoacoustic parameters of the second sediment layer, including sound speed, density, and attenuation coefficient, were predicted from empirical relations with the mean grain size obtained from sediment core analysis. The geoacoustic parameters of the surficial sediment layer were inverted using a Monte Carlo inversion algorithm. A sensitivity study of the geoacoustic parameters showed that the bottom loss was most sensitive to variation in the thickness of the surficial sediment layer.
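
As a rough single-interface illustration (simpler than the paper's two-layered model, and neglecting attenuation), bottom loss can be computed from the Rayleigh reflection coefficient as BL = -20 log10 |R|; the geoacoustic values below are placeholders, not the paper's inverted parameters.

```python
import numpy as np

def bottom_loss(theta_deg, c1, rho1, c2, rho2):
    """Single-interface Rayleigh bottom loss (dB) at grazing angle theta.
    Water: c1 (m/s), rho1 (kg/m^3); sediment: c2, rho2. No attenuation."""
    th1 = np.radians(theta_deg)
    # Snell's law with grazing angles: cos(th1)/c1 = cos(th2)/c2
    cos_th2 = np.cos(th1) * c2 / c1
    sin_th2 = np.sqrt(1.0 - cos_th2**2 + 0j)  # complex below critical angle
    z1 = rho1 * c1 / np.sin(th1)              # grazing-incidence impedances
    z2 = rho2 * c2 / sin_th2
    R = (z2 - z1) / (z2 + z1)
    return -20.0 * np.log10(abs(R) + 1e-30)   # |R| = 1 gives 0 dB loss

# Placeholder parameters: seawater over a sandy-silt sediment, 82° grazing.
print(f"{bottom_loss(82.0, c1=1520.0, rho1=1024.0, c2=1580.0, rho2=1700.0):.1f} dB")
```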

Improvement of 2-pass DInSAR-based DEM Generation Method from TanDEM-X bistatic SAR Images (TanDEM-X bistatic SAR 영상의 2-pass 위성영상레이더 차분간섭기법 기반 수치표고모델 생성 방법 개선)

  • Chae, Sung-Ho
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.847-860 / 2020
  • The 2-pass DInSAR (Differential Interferometric SAR) processing steps for DEM generation consist of co-registration of the SAR image pair, interferogram generation, phase unwrapping, calculation of DEM errors, and geocoding. The process requires complicated steps, and the accuracy of the data processing at each step affects the performance of the finally generated DEM. In this study, we developed an improved method for enhancing the performance of DEM generation based on the 2-pass DInSAR technique applied to TanDEM-X bistatic SAR images. The developed method can significantly reduce both the DEM error in the unwrapped phase image and the error that may occur during the geocoding step. The performance of the developed algorithm was analyzed by comparing the vertical accuracy (root mean square error, RMSE) of the existing method and the newly proposed method against ground control points (GCPs) generated from a GPS survey. The vertical accuracy of the DInSAR-based DEM generated without correction for the unwrapped phase error and the geocoding error is 39.617 m, whereas the vertical accuracy of the DEM generated by the proposed method is 2.346 m, confirming that the DEM accuracy is improved by the proposed correction method. Through the proposed 2-pass DInSAR-based DEM generation method, the SRTM DEM error observed by DInSAR was compensated for in the SRTM 30 m DEM (vertical accuracy 5.567 m) used as a reference. In this way, a DEM with about 5 times better spatial resolution and about 2.4 times better vertical accuracy could finally be created. In addition, the spatial resolution of the DEM generated by the proposed method was matched to the SRTM 30 m DEM and the TanDEM-X 90 m DEM, and the vertical accuracies were compared. As a result, the vertical accuracy was confirmed to be improved by about 1.7 and 1.6 times, respectively, showing that more accurate DEM generation is possible with the proposed method. If the method derived in this study is used to continuously update DEMs for regions with frequent morphological changes, it will be possible to update DEMs effectively, in a short time and at low cost.
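
The vertical-accuracy figures above are RMSE values against GCP heights; a minimal sketch of that evaluation, with hypothetical arrays standing in for the DEM samples and the GPS-surveyed heights, is:

```python
import numpy as np

def vertical_rmse(dem_heights, gcp_heights):
    """RMSE between DEM heights sampled at GCP locations and the
    GPS-surveyed GCP heights (both in meters)."""
    d = np.asarray(dem_heights, float) - np.asarray(gcp_heights, float)
    return np.sqrt(np.mean(d ** 2))

# Hypothetical GCP comparison for an uncorrected and a corrected DEM.
gcp = np.array([102.1, 98.4, 110.7, 95.2, 101.9])
dem_uncorrected = gcp + np.array([35.0, -42.0, 38.5, -44.1, 40.2])
dem_corrected = gcp + np.array([2.1, -2.6, 1.8, -2.9, 2.4])
print(f"uncorrected RMSE = {vertical_rmse(dem_uncorrected, gcp):.3f} m")
print(f"corrected RMSE   = {vertical_rmse(dem_corrected, gcp):.3f} m")
```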