• Title/Summary/Keyword: gaussian weight

Search results: 113

Robust Image Mosaic using Geometrical Feature Model (기하학적 특징 모델을 이용한 강건한 영상 모자이크 기법)

  • 김정훈;김대현;윤용인;최종수
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.13-16
    • /
    • 2000
  • This paper presents a robust method for combining a collection of images with small fields of view into an image with a large field of view. Previous work falls into two main areas: cross-correlation-based methods and feature-based methods. The former are based on motion estimation from video sequences, so they have trouble when the camera rotates about its optical axis; in the latter, it is difficult to match corresponding feature points correctly. To find correct correspondences, we propose a geometrical feature model and correspondence filters, together with a Gaussian distribution weight function to blend the images smoothly. The experiments show that our method is robust and effective.
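
As a rough illustration of the Gaussian distribution weight idea mentioned above, the sketch below blends two aligned overlap regions with Gaussian weights; the weight profile, sigma_frac, and the left/right layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_blend(overlap_a, overlap_b, sigma_frac=0.3):
    """Blend two aligned overlap regions (H x W [x C]) with Gaussian weights.

    overlap_a is assumed to come from the left image and overlap_b from the
    right image; the weights fall off smoothly across the seam.
    """
    h, w = overlap_a.shape[:2]
    x = np.arange(w, dtype=np.float64)
    sigma = sigma_frac * w
    # Each Gaussian weight peaks at its own image's side of the overlap.
    w_a = np.exp(-0.5 * (x / sigma) ** 2)                # strongest at the left edge
    w_b = np.exp(-0.5 * ((x - (w - 1)) / sigma) ** 2)    # strongest at the right edge
    norm = w_a + w_b
    w_a, w_b = w_a / norm, w_b / norm                    # weights sum to 1 per column
    shape = (1, w) + (1,) * (overlap_a.ndim - 2)         # broadcast over rows (and channels)
    return overlap_a * w_a.reshape(shape) + overlap_b * w_b.reshape(shape)
```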


Automatic Clustering of Speech Data Using Modified MAP Adaptation Technique (수정된 MAP 적응 기법을 이용한 음성 데이터 자동 군집화)

  • Ban, Sung Min;Kang, Byung Ok;Kim, Hyung Soon
    • Phonetics and Speech Sciences
    • /
    • v.6 no.1
    • /
    • pp.77-83
    • /
    • 2014
  • This paper proposes a speaker and environment clustering method to overcome the degradation of speech recognition performance caused by varying noise and speaker characteristics. Instead of using the distance between Gaussian mixture model (GMM) weight vectors, as in Google's approach, the distance between the adapted mean vectors obtained from a modified maximum a posteriori (MAP) adaptation is used as the distance measure for vector quantization (VQ) clustering. According to our experiments on simulation data generated by adding noise to clean speech, the proposed clustering method yields an error rate reduction of 10.6% compared with the baseline speaker-independent (SI) model, which is slightly better than Google's approach.
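
The sketch below illustrates the general idea of representing an utterance by MAP-adapted GMM mean vectors and comparing utterances by the distance between them; it uses the standard relevance-MAP mean update, not the paper's specific modification, and all parameter values are assumptions.

```python
import numpy as np

def map_adapted_means(frames, ubm_means, ubm_vars, ubm_weights, relevance=16.0):
    """Relevance-MAP adaptation of GMM mean vectors (diagonal-covariance UBM).

    frames: (T, D) feature vectors; ubm_means/ubm_vars: (K, D); ubm_weights: (K,).
    Returns the (K, D) adapted means used as the utterance representation.
    """
    # Log-likelihood of each frame under each diagonal Gaussian component.
    diff = frames[:, None, :] - ubm_means[None, :, :]                     # (T, K, D)
    log_gauss = -0.5 * np.sum(diff ** 2 / ubm_vars
                              + np.log(2 * np.pi * ubm_vars), axis=2)     # (T, K)
    log_post = np.log(ubm_weights) + log_gauss
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)                               # responsibilities
    n_k = post.sum(axis=0)                                                # soft counts (K,)
    e_k = (post.T @ frames) / np.maximum(n_k[:, None], 1e-10)             # data means (K, D)
    alpha = (n_k / (n_k + relevance))[:, None]                            # adaptation factor
    return alpha * e_k + (1.0 - alpha) * ubm_means

def utterance_distance(means_a, means_b):
    """Euclidean distance between two supervectors of adapted means."""
    return np.linalg.norm(means_a - means_b)
```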

A Greedy Merging Method for User-Steered Mesh Segmentation

  • Ha, Jong-Sung;Park, Young-Jin;Yoo, Kwan-Hee
    • International Journal of Contents
    • /
    • v.3 no.2
    • /
    • pp.25-29
    • /
    • 2007
  • In this paper, we discuss the mesh segmentation problem, which divides a given 3D mesh into several disjoint sets. To solve the problem, we propose a greedy method based on a merging priority metric defined to represent the geometric properties of meaningful parts. The proposed priority metric is a weighted function of five geometric parameters, namely the distribution of the Gaussian map, boundary path concavity, boundary path length, cardinality, and segmentation resolution. In particular, the weight values of the proposed geometric parameters can be tuned to obtain visually better mesh segmentation. Finally, we carry out an experiment on several 3D mesh models using the proposed methods and visualize the results.
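
A minimal sketch of a weighted merging-priority function over the five geometric parameters listed above; the normalization to [0, 1] and the default weight values are assumptions for illustration, not the paper's settings.

```python
def merge_priority(params, weights=None):
    """Weighted merging priority for a candidate pair of adjacent segments.

    params: dict with the five geometric parameters named in the abstract:
        'gauss_map'   - spread of the Gaussian map of the merged segment
        'concavity'   - boundary path concavity
        'path_length' - boundary path length
        'cardinality' - size of the merged segment
        'resolution'  - current segmentation resolution
    All values are assumed pre-normalized to [0, 1]; the weights are user-steerable.
    """
    if weights is None:
        weights = {'gauss_map': 0.3, 'concavity': 0.3, 'path_length': 0.2,
                   'cardinality': 0.1, 'resolution': 0.1}
    return sum(weights[k] * params[k] for k in weights)
```

A greedy pass would then repeatedly merge the adjacent pair with the best priority value and recompute priorities until the desired segmentation resolution is reached.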

A Study on Optimization of Speech Encryption Scheme using Hopping Filter in order to Solve the Synchronization Problem (동기 문제 해결을 위한 호핑 필터를 이용한 음성 보호 방식의 최적화에 관한 연구)

  • 정지원;이경호;원동호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.11
    • /
    • pp.1677-1688
    • /
    • 1993
  • The two-dimensional amplitude scrambling algorithm using a hopping filter, which improves on the drawbacks of conventional speech encryption schemes, is a powerful encryption scheme for analog speech signals. In this paper, we propose a variable delay weight algorithm using the hopping filter in order to solve the synchronization problem of two-dimensional amplitude scrambling. Furthermore, by analyzing the distortion of the received signal transmitted over a Gaussian noise channel, we determine the optimal encryption algorithm and the optimal SNR through simulation.
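
The sketch below illustrates the kind of channel simulation the abstract refers to: passing a signal through an additive white Gaussian noise channel at a target SNR and measuring the resulting distortion. The scrambling algorithm itself is not reproduced, and the helper names are hypothetical.

```python
import numpy as np

def awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so the channel operates at the requested SNR (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

def distortion_db(original, received):
    """Signal-to-distortion ratio (dB) between the original and the received signal."""
    err = original - received
    return 10.0 * np.log10(np.mean(original ** 2) / np.mean(err ** 2))
```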


Non-Local Mean based Post Processing Scheme for Performance Enhancement of Image Interpolation Method (이미지 보간기법의 성능 개선을 위한 비국부평균 기반의 후처리 기법)

  • Kim, Donghyung
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.16 no.3
    • /
    • pp.49-58
    • /
    • 2020
  • Image interpolation, a technology that converts low-resolution images into high-resolution images, has been widely used in various image processing fields such as CCTV, web cams, and medical imaging. The proposed technique is based on the observation that the statistical distribution of white Gaussian noise and that of the difference between the interpolated image and the original image are similar. The proposed algorithm consists of three steps. First, the interpolated image is obtained by an arbitrary image interpolation method. Second, we derive the weighting functions used for non-local mean filtering. In the final step, the prediction error is corrected by performing non-local mean filtering with the selected weighting function. The method can be regarded as a post-processing algorithm that further reduces the prediction error after an arbitrary image interpolation algorithm has been applied. Simulation results show that the proposed method yields reasonable performance.
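
A minimal sketch of plain non-local mean filtering, the building block of the post-processing step described above; the patch size, search window, and filtering parameter h are illustrative assumptions, and the paper's weighting-function selection is not reproduced.

```python
import numpy as np

def nlm_filter(img, patch=3, search=7, h=10.0):
    """Naive non-local means on a grayscale float image."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode='reflect')
    out = np.zeros_like(img, dtype=np.float64)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, acc = 0.0, 0.0
            # Average similar patches inside the search window.
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))   # Gaussian-shaped patch-similarity weight
                    weights += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / weights
    return out
```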

Real time Background Estimation and Object Tracking (실시간 배경갱신 및 이를 이용한 객체추적)

  • Lee, Wan-Joo
    • The Journal of Information Technology
    • /
    • v.10 no.4
    • /
    • pp.27-39
    • /
    • 2007
  • Object tracking in a real-time environment has been one of the challenging subjects in computer vision over the past several years. This paper proposes a method for object detection and tracking using adaptive background estimation in a real-time environment. To obtain a stable and adaptive background, we combine a 3-frame differencing method with a running-average single-Gaussian background model. Using this background model, we can detect moving objects while minimizing false detections caused by noise. In the tracking phase, we propose a matching criterion in which the weights of position and inner brightness distribution are controlled by object size. We also adopt a Kalman filter to handle occlusion of tracked objects. Experiments show that objects can be detected and tracked successfully in a real-time environment.
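
A minimal sketch of a running-average single-Gaussian background model of the kind mentioned above; the learning rate, threshold, and initial variance are assumptions, and the 3-frame differencing stage is not included.

```python
import numpy as np

class RunningGaussianBackground:
    """Single Gaussian per pixel, kept up to date with a running average."""

    def __init__(self, first_frame, alpha=0.02, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 25.0)   # assumed initial variance
        self.alpha, self.k = alpha, k

    def update(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        # Foreground where the pixel deviates by more than k standard deviations.
        fg = np.abs(diff) > self.k * np.sqrt(self.var)
        bg = ~fg
        # Update the background statistics only at background pixels.
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * diff[bg] ** 2
        return fg
```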


Suppressing Artefacts in the ECG by Independent Component Analysis (독립성분 분석기법에 의한 심전도 신호의 왜곡 보정)

  • Kim, Jeong-Hwan;Kim, Kyeong-Seop;Kim, Hyun-Tae;Lee, Jeong-Whan
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.62 no.6
    • /
    • pp.825-832
    • /
    • 2013
  • In this study, Independent Component Analysis (ICA) algorithms are suggested to extract the original ECG component from a mixed signal contaminated with unwanted frequency components, especially 60 Hz power-line disturbances. To this end, we implement a method to suppress baseline-wandering disturbances and power-line artefacts in patch-electrode ECG data by separating the sources with an optimal unmixing weight W found from the kurtosis value. By applying brute-force and gradient-ascent search to find W, we conclude that the unwanted frequency components, especially in ambulatory ECG data, can be eliminated by Independent Component Analysis.
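
A minimal sketch of kurtosis-driven source separation on whitened data, using gradient ascent on the magnitude of the kurtosis to find one unmixing weight vector w; the learning rate, iteration count, and update details are assumptions rather than the authors' exact procedure.

```python
import numpy as np

def whiten(x):
    """Center and whiten mixed signals x of shape (channels, samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    cov = np.cov(x)
    d, e = np.linalg.eigh(cov)                       # eigenvalues assumed nonzero
    return e @ np.diag(1.0 / np.sqrt(d)) @ e.T @ x

def kurtosis_ica_weight(x_white, lr=0.1, iters=200, seed=0):
    """Find one unmixing vector w by gradient ascent on |kurtosis(w @ x)|."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=x_white.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        y = w @ x_white                               # candidate source estimate
        kurt = np.mean(y ** 4) - 3.0                  # excess kurtosis (whitened, unit-norm w)
        grad = 4.0 * (x_white * y ** 3).mean(axis=1)  # gradient of E[y^4] w.r.t. w
        w += lr * np.sign(kurt) * grad                # climb |kurtosis|
        w /= np.linalg.norm(w)                        # stay on the unit sphere
    return w
```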

Performance Improvement of Microphone Array Speech Recognition Using Features Weighted Mahalanobis Distance (가중특징 Mahalanobis거리를 이용한 마이크 어레이 음석인식의 성능향상)

  • Nguyen, Dinh Cuong;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.1E
    • /
    • pp.45-53
    • /
    • 2010
  • In this paper, we present the use of the Features Weighted Mahalanobis Distance (FWMD) to improve the performance of the Likelihood Maximizing Beamforming (Limabeam) algorithm in microphone-array speech recognition. The proposed approach replaces the traditional distance measure in a Gaussian classifier with a Mahalanobis distance in which each feature is weighted according to its distance after variance normalization. Using the Features Weighted Mahalanobis Distance with the Limabeam algorithm (FWMD-Limabeam), we obtained correct word recognition rates of 90.26% for calibrated Limabeam and 87.23% for unsupervised Limabeam, 3% and 6% higher, respectively, than those produced by the original Limabeam. By alternatively implementing an HM-Net speech recognition strategy, we could save memory and reduce computational complexity.
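
A minimal sketch of a per-feature weighted Mahalanobis distance with a diagonal covariance, as a rough illustration of the FWMD idea; how the feature weights are derived (the paper obtains them from the distances after variance normalization) is not reproduced, so they are passed in as given.

```python
import numpy as np

def features_weighted_mahalanobis(x, mean, var, feat_weights):
    """Diagonal-covariance Mahalanobis distance with an extra per-feature weight.

    x, mean, var, feat_weights: (D,) arrays. The variances normalize each
    dimension; feat_weights emphasize features according to their reliability.
    """
    z2 = (x - mean) ** 2 / var            # variance-normalized squared distances
    return np.sqrt(np.sum(feat_weights * z2))
```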

Signal Compensation of LiDAR Sensors and Noise Filtering (LiDAR 센서 신호 보정 및 노이즈 필터링 기술 개발)

  • Park, Hong-Sun;Choi, Joon-Ho
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.5
    • /
    • pp.334-339
    • /
    • 2019
  • In this study, we propose a method for compensating noisy raw LiDAR data and filtering noise as part of the signal processing of LiDAR sensors during the development phase. The raw LiDAR data include constant errors caused by delays in transmitting and receiving signals, which can be resolved by signal compensation. The compensation consists of two stages: first, LiDAR sensor calibration to compensate for geometric distortion, and second, walk-error compensation. LiDAR data also contain fluctuation and outlier noise. In this study, we compensate for the fluctuation with a Kalman filter and remove the outlier noise by applying a Gaussian weight function.
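
A minimal sketch pairing a scalar Kalman filter (to suppress fluctuation in a range track) with a Gaussian weight function (to down-weight samples far from the smoothed track); the process/measurement noise values and sigma are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1.0):
    """Scalar constant-level Kalman filter applied to a sequence of range measurements z."""
    x, p = float(z[0]), 1.0
    out = np.empty(len(z), dtype=np.float64)
    for i, zi in enumerate(z):
        p = p + q                      # predict: add process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (zi - x)           # update with the new measurement
        p = (1.0 - k) * p
        out[i] = x
    return out

def gaussian_outlier_weights(z, smoothed, sigma=0.5):
    """Gaussian weight: samples far from the smoothed track get weights near zero."""
    return np.exp(-0.5 * ((np.asarray(z) - smoothed) / sigma) ** 2)
```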

Performance estimation of the noise reduction by window function on a single tone (단일 신호에 대한 창 함수의 잡음 제거 성능 평가)

  • Baek, Moon-Yeol;Kim, Byoung-Sam
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.13 no.5
    • /
    • pp.38-43
    • /
    • 1996
  • The purpose of windowing routines is to reduce the sidelobes in the spectral output of FFT or DFT routines. They accomplish this by forcing the beginning and end of a sequence to approach each other in value; since they must work with any sequence, they force the beginning and ending samples toward zero. To make up for this reduction in power, windowing routines give extra weight to the values near the middle of the sequence. The difference between windows is the way in which they transition from the low weights near the edges to the higher weights near the middle of the sequence. The signal-to-noise ratio (SNR) can be determined from the ratio of the output noisy-signal variance to the input noisy-signal variance of a window. The standard deviation of the noise is reduced by windowing; thus the windowing operation improves the SNR of the noisy signal. This paper presents a performance estimation of windowing on a single tone with added Gaussian noise and uniform noise.
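
A minimal sketch of the variance-ratio measurement described above, using a Hann window on a single tone with added Gaussian noise; the tone frequency, noise level, and choice of window are assumptions, and the paper also considers uniform noise and other windows.

```python
import numpy as np

fs, f0, n = 1000, 50.0, 1024
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * f0 * t)
noise = np.random.default_rng(0).normal(0.0, 0.2, n)   # added Gaussian noise
window = np.hanning(n)

noisy = tone + noise
windowed = window * noisy

# Noise variance before and after windowing: the ratio used as the SNR measure.
var_in = np.var(noise)
var_out = np.var(window * noise)
print(f"noise variance ratio (out/in): {var_out / var_in:.3f}")

# The windowed spectrum shows lower sidelobes around the tone than the rectangular one.
spectrum_rect = np.abs(np.fft.rfft(noisy))
spectrum_hann = np.abs(np.fft.rfft(windowed))
```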
