• Title/Summary/Keyword: 잡음 강인성 (noise robustness)

Search results: 207 (processing time: 0.03 seconds)

A Study on Optimum Threshold for Robust Watermarking (강인한 워터마킹을 위한 최적 임계치 설정에 관한 연구)

  • Park, Ki-Bum;Lee, Kang-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2005.11a / pp.739-742 / 2005
  • This paper proposes a blind watermarking algorithm that embeds a watermark in the frequency domain of digital image data using the wavelet transform. Through experiments, we study the optimum threshold by considering the trade-offs among the watermark capacity at various thresholds, the degree of image degradation (PSNR), copyright verification, and the detection value (correlation response). We also study a watermarking method that is robust against common attacks by applying a human visual system (HVS) technique, embedding the watermark in visually significant regions while preserving the invisibility of the image. A Gaussian random sequence is embedded as the watermark, and the proposed algorithm with the optimum threshold was evaluated on several images. The results show that the quality of the watermarked images is visually imperceptible in terms of invisibility, and that the method achieves improved correlation and robustness against attacks such as JPEG lossy compression, linear filtering, noise addition, and cropping.

  • PDF
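The threshold-based embedding and correlation detection described in the abstract above can be sketched roughly as follows. This is a minimal illustration with synthetic coefficients standing in for a real wavelet decomposition; the threshold, embedding strength `alpha`, and sequence length are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for wavelet coefficients of an image (the paper uses a real DWT).
coeffs = rng.normal(0, 10, size=10_000)

alpha = 0.5       # embedding strength (assumed)
threshold = 8.0   # only "visually significant" large coefficients are marked (assumed)
watermark = rng.standard_normal(coeffs.size)  # Gaussian random sequence

# Embed into coefficients whose magnitude exceeds the threshold.
mask = np.abs(coeffs) > threshold
marked = coeffs.copy()
marked[mask] += alpha * np.abs(coeffs[mask]) * watermark[mask]

# Blind detection: correlate the received coefficients with a candidate sequence.
def correlation_response(received, wm, mask):
    return float(np.dot(received[mask], wm[mask]) / mask.sum())

resp_true = correlation_response(marked, watermark, mask)                     # correct key
resp_false = correlation_response(marked, rng.standard_normal(coeffs.size), mask)  # wrong key
```

The correct sequence yields a large correlation response while an unrelated sequence correlates near zero, which is the basis for the copyright-verification decision.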

A Study on the Digital Watermarking using Set Partitioning In Hierarchical Trees (계층구조상에서 집합 분할방식을 이용한 디지털 워터마킹에 관한 연구)

  • 조홍용;조영;박장한;남궁재찬
    • Proceedings of the Korean Information Science Society Conference / 2002.10d / pp.547-549 / 2002
  • This paper proposes a watermark embedding method that uses SPIHT (Set Partitioning In Hierarchical Trees), a compression method for wavelet-transformed images. Conventional methods that embed the watermark only in a specific band cannot solve the two problems of image degradation and compression at the same time. The proposed method exploits the correlation between sub-bands whose wavelet coefficients share the same orientation, and embeds the watermark only in significant coefficients rather than in a specific band, thereby increasing both robustness and invisibility. The watermark is extracted using the coefficient difference between the watermarked image and a PN (pseudo-noise) code, and a statistical approach is used to authenticate the watermarked image. In experiments, the watermarked images were subjected to lossy compression, noise, cropping, resizing, and collusion attacks, and an average similarity value of 0.987 demonstrated a high extraction rate, proving the method's robustness.

  • PDF

Comparison of Independent Component Analysis and Blind Source Separation Algorithms for Noisy Data (잡음환경에서 독립성분 분석과 암묵신호분리 알고리즘의 성능비교)

  • O, Sang-Hun;Cichocki, Andrzej;Choe, Seung-Jin;Lee, Su-Yeong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.2 / pp.10-20 / 2002
  • Various blind source separation (BSS) and independent component analysis (ICA) algorithms have been developed. However, a comparative study of BSS/ICA algorithms has not yet been carried out extensively. The main objective of this paper is to compare various promising BSS/ICA algorithms in terms of several factors, such as robustness to sensor noise, computational complexity, the conditioning of the mixing matrix, the number of sensors, and the number of training patterns. We propose several benchmarks which are useful for the evaluation of these algorithms. This comparison study will be useful for real-world applications, especially EEG/MEG analysis and separation of mixed speech signals.
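One benchmark commonly used in BSS/ICA comparison studies of this kind is the Amari performance index of the global matrix W·A, which is zero when separation is perfect up to scaling and permutation. A minimal sketch (the mixing matrix here is an arbitrary illustration, not taken from the paper):

```python
import numpy as np

def amari_index(P):
    # Amari performance index of the global matrix P = W @ A.
    # 0 means perfect separation up to scaling and permutation of sources.
    P = np.abs(P)
    n = P.shape[0]
    row = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1
    col = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return float((row.sum() + col.sum()) / (2 * n * (n - 1)))

A = np.array([[1.0, 0.5],
              [0.3, 1.0]])            # hypothetical mixing matrix
W_perfect = np.linalg.inv(A)          # ideal unmixing matrix
W_poor = np.eye(2)                    # no separation at all

idx_perfect = amari_index(W_perfect @ A)   # ~0: sources recovered
idx_poor = amari_index(W_poor @ A)         # clearly > 0: sources still mixed
```

Because the index normalizes each row and column by its maximum, it is invariant to the scaling and permutation ambiguities inherent in BSS, which makes it suitable for comparing algorithms fairly.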

Audio Fingerprint Extraction Method Using Multi-Level Quantization Scheme (다중 레벨 양자화 기법을 적용한 오디오 핑거프린트 추출 방법)

  • Song Won-Sik;Park Man-Soo;Kim Hoi-Rin
    • The Journal of the Acoustical Society of Korea / v.25 no.4 / pp.151-158 / 2006
  • In this paper, we propose a new audio fingerprint extraction method, based on Philips' music retrieval algorithm, which uses the energy differences of neighboring filter banks and the probabilistic characteristics of music. Since the Philips method uses too many filter banks in a limited frequency band, its audio fingerprints tend to be highly sensitive to additive noise and to have high correlation between neighboring bands. The proposed method improves robustness to noise by reducing the number of filter banks, while maintaining discriminative power by representing the energy difference between bands with 2 bits, where the quantization levels are determined by probabilistic characteristics. The correlation that exists among the 4 levels in the 2 bits is utilized not only in similarity measurement, but also in efficient reduction of the search area. Experiments show that the proposed method is not only more robust to various environmental noises (street, department store, car, office, and restaurant), but also takes less time for database search than the Philips method when the music is highly degraded.
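The 2-bit, probability-matched quantization of neighboring-band energy differences can be illustrated as follows. The filter-bank energies are synthetic, and using percentile thresholds is one assumed way to realize "quantization levels determined by probabilistic characteristics", so that each of the four levels is roughly equally likely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical filter-bank energies, shape (frames, bands); a real system
# derives these from overlapping FFT frames of the audio signal.
energies = rng.lognormal(0.0, 1.0, size=(500, 16))

# Energy difference of neighboring bands (the Philips-style fingerprint input).
diff = np.diff(energies, axis=1)

# 2-bit quantization: thresholds at the 25th/50th/75th percentiles of the
# difference distribution, so the four levels are (approximately) equiprobable.
thresholds = np.percentile(diff, [25, 50, 75])
codes = np.digitize(diff, thresholds)   # values in {0, 1, 2, 3}
```

Because the levels are ordered, the distance between two codes still carries similarity information, which is what the abstract exploits for both matching and search-space pruning.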

A Study on Power Variations of Magnitude Controlled Input of Algorithms based on Cross-Information Potential and Delta Functions (상호정보 에너지와 델타함수 기반의 알고리즘에서 크기 조절된 입력의 전력변화에 대한 연구)

  • Kim, Namyong
    • Journal of Internet Computing and Services / v.18 no.6 / pp.1-6 / 2017
  • For the cross-information potential with delta functions (CIPD) algorithm, which has superior performance in impulsive-noise environments, this paper proposes a new method that employs the information of power variations of the magnitude controlled input (MCI) in the weight update equation of the CIPD, where the input of the CIPD is modified by the Gaussian kernel of the error. To prove its effectiveness compared to the conventional CIPD algorithm, the distance between the current weight vector and the previous one is analyzed and compared under impulsive noise. In the simulation results, the proposed method shows a two-fold improvement in steady-state stability, 1.8-times faster convergence, and a 2 dB lower minimum MSE in the impulsive-noise situation.
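A minimal sketch of the magnitude controlled input (MCI) idea, where each input sample is scaled by the Gaussian kernel of its error so that impulse-dominated samples are suppressed. The kernel width `sigma` and the sample values below are assumptions for illustration.

```python
import numpy as np

def magnitude_controlled_input(x, e, sigma):
    # Scale each input sample by the Gaussian kernel of its error:
    # large (impulse-dominated) errors shrink the effective input toward zero.
    return x * np.exp(-e**2 / (2 * sigma**2))

x = np.array([1.0, 1.0, 1.0])
e = np.array([0.1, 1.0, 10.0])   # the last error is impulse-like
mci = magnitude_controlled_input(x, e, sigma=1.0)
```

A small error leaves the input almost unchanged, while an impulsive error drives the effective input (and hence its contribution to the weight update) essentially to zero.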

3D Mesh Model Watermarking Based on POCS (POCS에 기반한 3D 메쉬 모델 워터마킹)

  • Lee Suk-Hwan;Kwon Ki-Ryong;Lee Kuhn-Il
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11C / pp.1592-1599 / 2004
  • In this paper, we propose 3D mesh watermarking using projection onto convex sets (POCS). The 3D mesh is projected iteratively onto two constraint convex sets until it satisfies the convergence condition. These sets consist of a robustness set and an invisibility set designed for watermark embedding. The watermark is extracted without the original mesh by using the decision values and the indices where the watermark is embedded. Experimental results verified that the watermarked mesh has robustness against mesh simplification, cropping, affine transformation, and vertex randomization, as well as invisibility.

Performance Analysis of Correntropy-Based Blind Algorithms Robust to Impulsive Noise (충격성 잡음에 강인한 코렌트로피 기반 블라인드 알고리듬의 성능분석)

  • Kim, Namyong
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.12 / pp.2324-2330 / 2015
  • In blind signal processing in impulsive-noise environments, the maximum cross-correntropy (MCC) algorithm shows superior performance compared to MSE-based algorithms. However, the optimum weight conditions of the MCC algorithm and its properties related to robustness against impulsive noise have not been studied sufficiently. In this paper, through analysis of the behavior of its optimum weight and its relationship with the MSE-based LMS algorithm, it is shown that the optimum weights of MCC and of MSE-based LMS have the same solution. It is also shown through simulation that the factor that keeps the optimum weight of MCC undisturbed and stable under impulsive noise is the magnitude controlled input.
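The robustness mechanism can be illustrated with a single weight update: an LMS-style step is driven directly by the error, while a correntropy-kernel-weighted step (a simplified supervised analogue of MCC, not the paper's blind criterion) damps an impulsive error almost to zero. Step size, kernel width, and sample values are assumptions.

```python
import numpy as np

def lms_step(w, x, e, mu):
    # MSE-based LMS: the update grows linearly with the error.
    return w + mu * e * x

def mcc_step(w, x, e, mu, sigma):
    # Correntropy-style step: the error passed through the Gaussian kernel
    # suppresses impulsive samples, leaving the weight nearly undisturbed.
    return w + mu * np.exp(-e**2 / (2 * sigma**2)) * e * x

w0 = np.zeros(2)
x = np.array([1.0, -1.0])
impulse_error = 50.0   # an impulsive-noise-dominated error sample

w_lms = lms_step(w0, x, impulse_error, mu=0.01)
w_mcc = mcc_step(w0, x, impulse_error, mu=0.01, sigma=1.0)
```

The LMS weight jumps far from its previous value on the impulse, while the correntropy-weighted update barely moves, which is consistent with the stability analysis summarized in the abstract.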

DS/SS Code Acquisition Scheme Based on Signed-Rank Statistic in Non-Gaussian Impulsive Noise Environments (비정규 충격성 잡음 환경에서 부호 순위 통계량에 바탕을 둔 직접수열 대역확산 부호 획득기법)

  • Kim, Sang-Hun;Ahn, Sang-Ho;Lee, Young-Yoon;Yoo, Seung-Soo;Yoon, Seok-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.2C / pp.200-207 / 2008
  • In this paper, a new detector is proposed for code acquisition, which employs the signs and ranks of the received signal samples instead of their actual values, and thus does not require knowledge of the non-Gaussian noise dispersion. The mean acquisition performance of the proposed detector is compared with that of the detector of [1]. The simulation results show that the proposed scheme is not only robust to deviations from the true value of the non-Gaussian noise dispersion, but also has performance comparable to that of the scheme of [1], which uses exact knowledge of the non-Gaussian noise dispersion.
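A sketch of the underlying signed-rank statistic, which uses only the signs and ranks of the samples and therefore needs no estimate of the noise dispersion. The sample values below are illustrative, not from the paper.

```python
import numpy as np

def signed_rank_statistic(samples):
    # Rank the magnitudes (1 = smallest), then sum the ranks with signs.
    # Only order information is used, so the noise scale cancels out.
    ranks = np.argsort(np.argsort(np.abs(samples))) + 1
    return float(np.sum(np.sign(samples) * ranks))

# Correlator outputs when the spreading code is aligned (positive bias) ...
stat_signal = signed_rank_statistic(np.array([0.4, 1.2, 2.0, -0.1, 0.9, 1.5]))
# ... and when only (possibly impulsive) noise is present.
stat_noise = signed_rank_statistic(np.array([0.4, -1.2, 2.0, -0.1, 0.9, -1.5]))
```

With the code aligned, the signs are mostly positive and the statistic is large; under noise alone, positive and negative ranks cancel, so comparing the statistic to a threshold yields a dispersion-free acquisition decision.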

PCMM-Based Feature Compensation Method Using Multiple Model to Cope with Time-Varying Noise (시변 잡음에 대처하기 위한 다중 모델을 이용한 PCMM 기반 특징 보상 기법)

  • 김우일;고한석
    • The Journal of the Acoustical Society of Korea / v.23 no.6 / pp.473-480 / 2004
  • In this paper, we propose an effective feature compensation scheme based on a speech model in order to achieve robust speech recognition. The proposed feature compensation method is based on the parallel combined mixture model (PCMM). Previous PCMM work requires a highly sophisticated procedure for estimating the combined mixture model in order to reflect the time-varying noise conditions at every utterance. The proposed schemes can cope with time-varying background noise by employing an interpolation method over multiple mixture models. We apply the 'data-driven' method to PCMM for more reliable model combination and introduce a frame-synched version for a posteriori estimation of the environment. In order to reduce the computational complexity due to the multiple models, we propose a technique for mixture sharing: statistically similar Gaussian components are selected and smoothed versions are generated for sharing. The performance is examined on Aurora 2.0 and on a speech corpus recorded while driving a car. The experimental results indicate that the proposed schemes are effective in realizing robust speech recognition and in reducing computational complexity under both simulated environments and real-life conditions.

Robust Speech Recognition Using Missing Data Theory (손실 데이터 이론을 이용한 강인한 음성 인식)

  • 김락용;조훈영;오영환
    • The Journal of the Acoustical Society of Korea / v.20 no.3 / pp.56-62 / 2001
  • In this paper, we apply missing data theory to speech recognition so that high recognizer performance can be maintained when missing data occur. In general, a hidden Markov model (HMM) is used as a stochastic classifier for the speech recognition task. Acoustic events are represented by continuous probability density functions in a continuous-density HMM (CDHMM). Missing data theory has the advantage of being easily applicable to this CDHMM. A marginalization method is used for processing missing data because it has low complexity and is easy to apply to automatic speech recognition (ASR). Spectral subtraction is used for detecting missing data: if the difference between the energy of speech and that of the background noise is below a given threshold, we determine that data are missing. We propose a new method that examines the reliability of the detected missing data using the voicing probability, which is used to find voiced frames; it is applied to process missing data in voiced regions, which carry more redundant information than consonants. The experimental results showed that our method improves performance over a baseline system that uses spectral subtraction alone. In a 452-word isolated word recognition experiment, the proposed method using the voicing probability reduced the average word error rate by 12% in a typical noise situation.

  • PDF
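The spectral-subtraction-based detection of missing data described above amounts to a local-SNR mask over spectral bins; a minimal sketch follows, where the 3 dB threshold and the energy values are assumptions for illustration.

```python
import numpy as np

def missing_data_mask(speech_energy, noise_energy, threshold_db=3.0):
    # Mark a spectral bin "reliable" when its local SNR exceeds the threshold;
    # bins below it are treated as missing and marginalized by the recognizer.
    snr_db = 10 * np.log10(np.maximum(speech_energy, 1e-12) /
                           np.maximum(noise_energy, 1e-12))
    return snr_db > threshold_db

speech = np.array([100.0, 10.0, 1.0, 50.0])  # per-bin speech energies
noise  = np.array([  5.0, 10.0, 4.0,  1.0])  # estimated noise energies
mask = missing_data_mask(speech, noise)      # True = reliable, False = missing
```

The recognizer then evaluates its CDHMM likelihoods only over the reliable bins (marginalization), which is what keeps performance high when parts of the spectrum are masked by noise.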