• Title/Summary/Keyword: gaussian model

1,397 search results

Common Spectrum Assignment for low power Devices for Wireless Audio Microphone (WPAN용 디지털 음향기기 및 통신기기간 스펙트럼 상호운용을 위한 채널 할당기술에 관한 연구)

  • Kim, Seong-Kweon;Cha, Jae-Sang
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.724-729 / 2008
  • This paper presents a calculation of the required common frequency bandwidth, applying queueing theory to maximize the efficiency of the frequency resources of WPAN (Wireless Personal Area Network) based digital acoustic and communication devices. It is assumed that an LBT device (ZigBee) and FH devices (DCP, RFID and Bluetooth) coexist in the common frequency band for these devices. Frequency hopping (FH) and listen before talk (LBT) have been used for interference avoidance in short range devices (SRD). An LBT system transmits data after searching for usable frequency bandwidth in the radio wave environment, whereas an FH system transmits data without such a search. Queueing theory is employed to model the FH and LBT systems, and the throughput of each channel is analyzed by statistically processing the usage frequency and the service-time interval of each channel. When the common frequency bandwidth is shared by SRDs using 250 mW, about 35 channels are required to reach a throughput of 84%, determined under a Gaussian-distributed input condition implying reliable communication. The common frequency bandwidth is therefore estimated by multiplying the number of channels by the bandwidth per channel. This methodology will be useful for the efficient usage of frequency bandwidth.
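
For intuition about how a required channel count can be backed out of a throughput target, the sketch below uses the classical Erlang B loss formula; the 30-Erlang offered load and the 84% target are illustrative stand-ins, not values derived from the paper's FH/LBT queueing model.

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability of an M/M/c/c loss system (Erlang B), computed iteratively."""
    b = 1.0
    for c in range(1, channels + 1):
        b = (traffic_erlangs * b) / (c + traffic_erlangs * b)
    return b

def required_channels(traffic_erlangs: float, target_throughput: float) -> int:
    """Smallest channel count whose carried/offered ratio reaches the target."""
    c = 1
    while (1.0 - erlang_b(traffic_erlangs, c)) < target_throughput:
        c += 1
    return c

# Illustrative numbers only; the paper derives its load from FH/LBT usage statistics.
print(required_channels(traffic_erlangs=30.0, target_throughput=0.84))
```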

New Illumination compensation algorithm improving a multi-view video coding performance by advancing its temporal and inter-view correlation (다시점 비디오의 시공간적 중복도를 높여 부호화 성능을 향상시키는 새로운 조명 불일치 보상 기법)

  • Lee, Dong-Seok;Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.15 no.6 / pp.768-782 / 2010
  • Because of the different shooting positions of multi-view cameras and imperfect camera calibration, illumination mismatches can occur in multi-view video. This variation can degrade the performance of multi-view video coding (MVC). A histogram matching algorithm can be applied in a prefiltering step to compensate for these inconsistencies: once all camera frames of a multi-view sequence are adjusted to a predefined reference through histogram matching, the coding efficiency of MVC is improved. However, the histogram distribution can differ not only between neighboring views but also between successive frames, on account of movements of the camera and of some objects, especially people. Therefore, a histogram matching algorithm that references all frames in the chosen view is not appropriate for compensating the illumination differences of such sequences. We thus propose two new algorithms: an image classification algorithm that applies two criteria to improve the correlation between inter-view frames, and a histogram matching algorithm that references and matches a group of pictures (GOP) as a unit to improve the correlation between successive frames. Experimental results show that the compression ratio of the proposed algorithm is improved compared with the conventional algorithms.
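
As a reference point for the prefiltering step, here is a minimal grayscale histogram-matching routine in NumPy; the paper's GOP-based, classified variant is more involved, and the function and parameter names here are illustrative.

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap the gray levels of `source` (uint8) so its CDF follows that of `reference`."""
    src_hist, _ = np.histogram(source.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source level, pick the reference level with the nearest-or-higher CDF value.
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return mapping[source]

# adjusted = match_histogram(current_view_frame, reference_view_frame)
```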

RPCA-GMM for Speaker Identification (화자식별을 위한 강인한 주성분 분석 가우시안 혼합 모델)

  • 이윤정;서창우;강상기;이기용
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.519-527 / 2003
  • Speech is strongly influenced by outliers introduced by unexpected events such as additive background noise, changes in the speaker's utterance pattern, and voice detection errors. These outliers may severely degrade speaker recognition performance. In this paper, we propose a GMM based on robust principal component analysis (RPCA-GMM) using M-estimation to solve both the outlier problem and the high dimensionality of the training feature vectors in speaker identification. First, a new feature vector with reduced dimension is obtained by robust PCA derived from M-estimation. The robust PCA projects the original feature vector onto a reduced-dimensional linear subspace spanned by the leading eigenvectors of the covariance matrix of the feature vectors. Second, a GMM with diagonal covariance matrices is trained on these transformed feature vectors. We performed speaker identification experiments to show the effectiveness of the proposed method, comparing the proposed method (RPCA-GMM) with standard PCA and the conventional GMM with diagonal covariance. For every 2% increase in the proportion of outliers, the proposed method maintains almost the same speaker identification rate, with only 0.03% degradation, while the conventional GMM and the PCA degrade by 0.65% and 0.55%, respectively. This means that our method is more robust to the existence of outliers.
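
A minimal sketch of the overall pipeline, with ordinary PCA from scikit-learn standing in for the robust, M-estimation-based PCA the paper actually uses; the dimension and mixture counts are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def train_speaker_model(features: np.ndarray, n_dims: int = 16, n_mixtures: int = 32):
    """Project features onto leading eigenvectors, then fit a diagonal-covariance GMM."""
    # Classical PCA stands in here for the robust, M-estimation-based variant.
    pca = PCA(n_components=n_dims).fit(features)
    reduced = pca.transform(features)
    gmm = GaussianMixture(n_components=n_mixtures, covariance_type="diag").fit(reduced)
    return pca, gmm

def score_utterance(pca, gmm, features: np.ndarray) -> float:
    """Average log-likelihood of an utterance under one speaker's model."""
    return gmm.score(pca.transform(features))
```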

Coupled Finite Element Analysis of Partially Saturated Soil Slope Stability (유한요소 연계해석을 이용한 불포화 토사사면 안전성 평가)

  • Kim, Jae-Hong;Lim, Jae-Seong;Park, Seong-Wan
    • Journal of the Korean Geotechnical Society / v.30 no.4 / pp.35-45 / 2014
  • Limit equilibrium methods of slope stability analysis have been widely adopted, mainly because of their simplicity and applicability. However, the conventional methods may not give reliable and convincing results for various geological conditions such as nonhomogeneous and anisotropic soils. They also take into account neither the soil slope history nor the initial state of stress, for example from excavation or fill placement. In contrast to limit equilibrium analysis, the analysis of deformation and stress distribution by the finite element method can deal with complex loading sequences and the growth of the inelastic zone with time. This paper proposes a technique to determine the critical slip surface and to calculate the factor of safety for shallow failure of a partially saturated soil slope. Based on the effective stress field in the finite element analysis, all stresses are estimated at each Gauss point of the elements. The search strategy for a noncircular critical slip surface along weak points is appropriate for rainfall-induced shallow slope failure. The change of unit weight due to seepage force affects the horizontal and vertical displacements of the soil slope. The Drucker-Prager failure criterion was adopted as the stress-strain relation to calculate the coupled hydraulic and mechanical behavior of the partially saturated soil slope.
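
For reference, a Drucker-Prager yield check as it might be evaluated at a single Gauss point; the alpha and k expressions shown are one common fit to Mohr-Coulomb parameters and may differ from the paper's calibration, and the sign convention (tension positive) is an assumption.

```python
import numpy as np

def drucker_prager_yield(stress: np.ndarray, c: float, phi: float) -> float:
    """Drucker-Prager yield function f = sqrt(J2) + alpha*I1 - k at one Gauss point.
    `stress` is the 3x3 effective stress tensor (tension positive); f >= 0 means yielding."""
    i1 = np.trace(stress)
    s = stress - i1 / 3.0 * np.eye(3)          # deviatoric stress
    j2 = 0.5 * np.tensordot(s, s)              # second deviatoric invariant
    # One common match to Mohr-Coulomb cohesion c and friction angle phi (radians).
    alpha = 2.0 * np.sin(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    k = 6.0 * c * np.cos(phi) / (np.sqrt(3.0) * (3.0 - np.sin(phi)))
    return np.sqrt(j2) + alpha * i1 - k
```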

Skin Region Detection Using Histogram Approximation Based Mean Shift Algorithm (Mean Shift 알고리즘 기반의 히스토그램 근사화를 이용한 피부 영역 검출)

  • Byun, Ki-Won;Joo, Jae-Heum;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.21-29 / 2011
  • In existing skin detection methods, which use skin color information defined from prior knowledge, the threshold value used to separate the background from the skin region is decided subjectively through experiments, and it is selected manually according to the background and illumination environment. Such methods have the drawback that their performance depends entirely on a threshold estimated through repeated experiments. To overcome this drawback, this paper proposes a skin region detection method using a histogram approximation based on the mean shift algorithm. The proposed method divides the background and skin regions by applying mean shift to the histogram of a skin map, which is generated by comparing the input image with a standard skin color in the CbCr color space, and by adaptively finding the maximum to which the shift converges over brightness levels. Since the histogram is a discontinuous function accumulated over pixel brightness values, it is approximated as a Gaussian mixture model (GMM) using the Bezier curve method. Thus, unlike existing methods that rely on a manually selected threshold, the proposed method adaptively finds the maximum, which becomes the dividing point. Experiments show that the proposed method detects skin regions effectively.
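
A bare-bones version of the mode-seeking step: one-dimensional mean shift climbing a brightness histogram with a Gaussian kernel. The paper first smooths the histogram into a Bezier-approximated Gaussian mixture; here the raw histogram, the bandwidth, and the starting point are illustrative.

```python
import numpy as np

def mean_shift_mode(hist: np.ndarray, start: float, bandwidth: float = 8.0, tol: float = 1e-3) -> float:
    """Climb a 1D brightness histogram to the nearest mode with mean-shift iterations."""
    bins = np.arange(hist.size, dtype=float)
    x = float(start)
    while True:
        weights = hist * np.exp(-0.5 * ((bins - x) / bandwidth) ** 2)  # Gaussian kernel
        x_new = np.sum(weights * bins) / (np.sum(weights) + 1e-12)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

# dividing_point = mean_shift_mode(skin_map_histogram, start=128.0)
```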

Fast Bayesian Inversion of Geophysical Data (지구물리 자료의 고속 베이지안 역산)

  • Oh, Seok-Hoon;Kwon, Byung-Doo;Nam, Jae-Cheol;Kee, Duk-Kee
    • Journal of the Korean Geophysical Society / v.3 no.3 / pp.161-174 / 2000
  • Bayesian inversion is a stable approach to inferring subsurface structure from the limited data of geophysical exploration. In the geophysical inverse process, uncertainties are inherent due to the finite and discrete character of field data and of the modeling process, so a probabilistic approach to geophysical inversion is required. The Bayesian framework provides the theoretical basis for confidence and uncertainty analysis of the inference. However, most Bayesian inversions require high-dimensional integration, so massive calculations such as Monte Carlo integration are needed. Although this approach is well suited to geophysical problems, which are highly nonlinear, promptness and convenience are required in field processing. In this study, by applying a Gaussian approximation to the observed data and the a priori information, a fast Bayesian inversion scheme is developed and applied to model problems with electric well logging and dipole-dipole resistivity data. Each covariance matrix is derived by a geostatistical method, and an optimization technique yields the maximum a posteriori estimate. In particular, the a priori information is evaluated by cross-validation. Finally, uncertainty analysis is performed to interpret the resistivity structure by simulating the a posteriori covariance matrix.
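
Under a Gaussian approximation and a linearized forward operator, the maximum a posteriori estimate has the closed form sketched below; G, C_d and C_m play the roles of the Jacobian, data covariance and prior model covariance, and the single-step form glosses over the iterative relinearization a resistivity problem would need.

```python
import numpy as np

def gaussian_map_update(G, d, m_prior, C_d, C_m):
    """One MAP update for a linear(ized) problem d = G m + noise with Gaussian statistics."""
    Cd_inv = np.linalg.inv(C_d)
    A = G.T @ Cd_inv @ G + np.linalg.inv(C_m)   # posterior precision matrix
    b = G.T @ Cd_inv @ (d - G @ m_prior)
    m_map = m_prior + np.linalg.solve(A, b)
    C_post = np.linalg.inv(A)                   # a posteriori covariance for uncertainty analysis
    return m_map, C_post
```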

Object-Based Integral Imaging Depth Extraction Using Segmentation (영상 분할을 이용한 객체 기반 집적영상 깊이 추출)

  • Kang, Jin-Mo;Jung, Jae-Hyun;Lee, Byoung-Ho;Park, Jae-Hyeung
    • Korean Journal of Optics and Photonics / v.20 no.2 / pp.94-101 / 2009
  • A novel method for the reconstruction of 3D shape and texture from elemental images has been proposed. Using this method, we can estimate a full 3D polygonal model of objects with seamless triangulation. In the triangulation process, however, all the objects are stitched together, which generates phantom surfaces that bridge depth discontinuities between different objects. To solve this problem we need to connect points only within a single object, and we adopt a segmentation process to this end. The entire procedure of the proposed method is as follows. First, the central pixel of each elemental image is computed to extract the spatial position of objects by correspondence analysis. Second, the object points of central pixels from neighboring elemental images are projected onto a specific elemental image. Then the center sub-image is segmented and each object is labeled. We used the normalized cut algorithm for segmentation of the center sub-image, and applied the watershed algorithm beforehand to speed up the segmentation. Using the segmentation results, the subdivision process is applied only to pixels within the same object. The refined grid is filtered with median and Gaussian filters to improve reconstruction quality. Finally, the vertices are connected and an object-based triangular mesh is formed. We conducted experiments with real objects and verified the proposed method.
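
The grid-refinement filtering mentioned at the end can be reproduced with off-the-shelf SciPy filters, as sketched below; the kernel size and sigma are placeholders rather than the paper's settings.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def refine_depth_grid(depth: np.ndarray, median_size: int = 3, sigma: float = 1.0) -> np.ndarray:
    """Smooth a per-object depth grid before triangulation: the median pass removes
    correspondence outliers, the Gaussian pass suppresses the remaining jitter."""
    return gaussian_filter(median_filter(depth, size=median_size), sigma=sigma)
```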

A Method of Detecting the Aggressive Driving of Elderly Driver (노인 운전자의 공격적인 운전 상태 검출 기법)

  • Koh, Dong-Woo;Kang, Hang-Bong
    • KIPS Transactions on Software and Data Engineering / v.6 no.11 / pp.537-542 / 2017
  • Aggressive driving is a major cause of car accidents. Previous studies have mainly analyzed young drivers' tendency toward aggressive driving, and only through pure clustering or classification techniques from machine learning. However, since elderly people have different driving habits due to their fragile physical condition, a new method, such as enhancing the characteristics of the driving data, is necessary to properly analyze aggressive driving by elderly drivers. In this study, acceleration data collected from a smartphone in a driving vehicle are analyzed by the newly proposed ECA (Enhanced Clustering method for Acceleration data) technique, coupled with conventional clustering techniques (K-means clustering and the expectation-maximization algorithm). ECA selects high-intensity data from the cluster groups detected through K-means and EM across all subjects' data and models the characteristic data through a scaled value. Using this method, aggressive driving data were obtained for all young and elderly participants, unlike with the pure clustering methods. We further found that K-means clustering has higher detection efficiency than the EM method. The K-means results also show that the young drivers' driving strength is 1.29 times higher than that of the elderly drivers. In conclusion, the proposed method is able to detect aggressive driving maneuvers from data of elderly drivers with low operating intensity, and can be used to build a customized safe-driving system for elderly drivers. In the future, it will be possible to detect abnormal driving conditions and to use the collected data for early warnings to drivers.
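
A rough illustration of the clustering-plus-scaling idea: cluster smartphone acceleration magnitudes with K-means and score the strongest cluster relative to the overall mean. The ECA scaling itself is not specified in the abstract, so the index below is an assumption for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def driving_intensity_index(accel: np.ndarray, n_clusters: int = 3) -> float:
    """Cluster |a| samples (accel has shape (n, 3) for x, y, z) and report the mean of the
    strongest cluster scaled by the overall mean; values well above 1 suggest a pronounced
    high-intensity (aggressive) cluster."""
    mag = np.linalg.norm(accel, axis=1).reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(mag)
    cluster_means = [mag[labels == k].mean() for k in range(n_clusters)]
    return max(cluster_means) / mag.mean()
```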

A study on the Derivation of GIUH-Clark Model (GIUH-Clark 모형의 유도에 관한 연구)

  • Lee, Byung Woon;Jang, Dae Won;Kim, Hung Soo;Seoh, Byung Ha
    • Proceedings of the Korea Water Resources Association Conference / 2004.05b / pp.731-736 / 2004
  • Methods that link hydrological response times, such as lag time and time of concentration, to the geomorphological factors of a basin are widely used to analyze and predict the rainfall-runoff process more accurately. In this study, the rainfall-runoff response of a gauged basin was simulated using the Clark method together with the geomorphological instantaneous unit hydrograph (GIUH), and the results were compared with observations to examine the applicability of the approach to ungauged basins. The channel and geomorphological characteristics of the study basin were obtained with Arc-View and compared with values reported in the literature. In determining the parameters of the Clark method, the time-area curve was taken from the dimensionless relation in HEC-1, the time of concentration was computed with the Kirpich formula, and the storage coefficient was set to the value at which the peak discharge of the instantaneous unit hydrograph estimated by the Clark method equals the peak discharge obtained as a function of Horton's order ratios. The study basin is the Seolmacheon basin, with a drainage area of 8.5 km² and its outlet at Jeonjeokbi Bridge; the rainfall events used for comparison with the simulated rainfall-runoff response are the 10-minute rainfall records of 4 August 2002 and 6 October 2002. Comparing the runoff hydrographs simulated with the Clark method and the GIUH against the observed hydrographs, the simulated peak discharge was 21% larger for the August event and 35% smaller for the October event, and the simulated peak times arrived 10 and 20 minutes earlier, respectively. The results were most sensitive to the basin's time of concentration. Therefore, provided that care is taken in estimating the time of concentration, the combination of the Clark method and the GIUH is also considered useful for estimating hydrographs of ungauged basins with a similar fractal dimension.
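
For reference, the Kirpich time-of-concentration formula used for the Clark parameters, in one common metric form (channel length in metres, slope in m/m, result in minutes); the example values are hypothetical.

```python
def kirpich_tc(length_m: float, slope: float) -> float:
    """Kirpich (1940) time of concentration in minutes, metric form."""
    return 0.0195 * length_m ** 0.77 * slope ** -0.385

# e.g. kirpich_tc(4500.0, 0.02) for a hypothetical channel length and average slope
```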

Algorithms for Indexing and Integrating MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 인덱싱 및 결합 알고리즘)

  • Song, Chi-Ill;Nang, Jong-Ho
    • Journal of KIISE:Software and Applications / v.34 no.1 / pp.1-10 / 2007
  • This paper proposes a new indexing mechanism for MPEG-7 visual descriptors, especially the Dominant Color and Contour Shape descriptors, that guarantees an efficient similarity search over multimedia databases whose visual metadata are represented with MPEG-7. Since the similarity metric used in the Dominant Color descriptor is based on a Gaussian mixture model, the descriptor itself can be transformed into a color histogram in which the distribution of the color values follows a Gaussian distribution. The transformed Dominant Color descriptor (i.e., the color histogram) is then indexed in the proposed indexing mechanism. For the indexing of the Contour Shape descriptor, we use a two-pass algorithm. In the first pass, since the similarity of two shapes can be roughly measured with the global parameters used in the Contour Shape descriptor, such as eccentricity and circularity, dissimilar image objects are excluded using these global parameters. Then the similarities between the query and the remaining image objects are measured with the peak parameters of the Contour Shape descriptor. This two-pass approach reduces the computation needed to measure the similarity of image objects with the Contour Shape descriptor. This paper also proposes two schemes for integrating visual descriptors for efficient retrieval of multimedia databases. One uses the weight of a descriptor to determine the number of similar image objects selected with respect to that descriptor; the other uses the weight as the degree of importance of the descriptor in the global similarity measurement. Experimental results show that the proposed indexing and integration schemes produce a remarkable speed-up compared to exact similarity search, with some loss of accuracy due to the approximate computation in indexing. The proposed schemes could be used to build a multimedia database represented in MPEG-7 that guarantees efficient retrieval.
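
One way to realize the Dominant Color-to-histogram conversion is to evaluate the descriptor's Gaussian mixture at the centres of a quantized color grid, as sketched below; the bin count and the per-channel-variance assumption are illustrative, not the paper's exact construction.

```python
import numpy as np

def dominant_color_to_histogram(means, weights, variances, bins_per_axis=8):
    """Expand a Dominant Color descriptor (component means, percentages, per-channel
    variances) into a normalized color histogram by evaluating the Gaussian mixture
    density at each bin centre of a uniformly quantized color cube."""
    centers = (np.arange(bins_per_axis) + 0.5) * (256.0 / bins_per_axis)
    grid = np.stack(np.meshgrid(centers, centers, centers, indexing="ij"), axis=-1)  # (B, B, B, 3)
    hist = np.zeros((bins_per_axis,) * 3)
    for mu, w, var in zip(means, weights, variances):
        mu, var = np.asarray(mu, float), np.asarray(var, float)
        norm = np.prod(np.sqrt(2.0 * np.pi * var))
        dens = np.exp(-0.5 * (((grid - mu) ** 2) / var).sum(axis=-1)) / norm
        hist += w * dens
    return hist / hist.sum()
```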