• Title/Summary/Keyword: Perceptual model


Japanese Adults' Perceptual Categorization of Korean Three-way Distinction (한국어 3중 대립 음소에 대한 일본인의 지각적 범주화)

  • Kim, Jee-Hyun;Kim, Jung-Oh
    • Proceedings of the Korean Society for Cognitive Science Conference / 2005.05a / pp.163-167 / 2005
  • Current theories of cross-language speech perception claim that patterns of perceptual assimilation of non-native segments to native categories predict the relative difficulty of learning to perceive (and produce) non-native phones. Perceptual assimilation patterns by Japanese listeners of the three-way voicing distinction in Korean syllable-initial obstruent consonants were assessed directly. According to the Speech Learning Model (SLM) and the Perceptual Assimilation Model (PAM), the resulting assimilation pattern predicts relative difficulty in discriminating lenis from aspirated consonants, and relative ease in discriminating fortis. This study compared the effects of two training conditions on Japanese adults' perceptual categorization of the Korean three-way distinction. In one condition, participants were trained to discriminate the lenis and aspirated consonants predicted to be problematic, whereas in the other condition participants were trained with all three classes of consonants. The results suggested that 'learnability' did not depend lawfully on the perceived cross-language similarity of Korean and Japanese consonants.


A Model-Based Image Steganography Method Using Watson's Visual Model

  • Fakhredanesh, Mohammad;Safabakhsh, Reza;Rahmati, Mohammad
    • ETRI Journal / v.36 no.3 / pp.479-489 / 2014
  • This paper presents a model-based image steganography method based on Watson's visual model. Model-based steganography assumes a model for cover-image statistics. This approach, however, has some weaknesses, including perceptual detectability. We propose using Watson's visual model to improve the perceptual undetectability of model-based steganography. The proposed method prevents visually perceptible changes during embedding. First, the maximum acceptable change in each discrete cosine transform coefficient is extracted based on Watson's visual model. Then, a model is fitted to a low-precision histogram of such coefficients and the message bits are encoded into this model. Finally, the encoded message bits are embedded in those coefficients whose maximum possible changes are visually imperceptible. Experimental results show that changes resulting from the proposed method are perceptually undetectable, whereas model-based steganography retains perceptually detectable changes. This perceptual undetectability is achieved while the perceptual quality (based on the structural similarity measure) and the security (based on two steganalysis methods) show no significant changes.
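
The per-coefficient "maximum acceptable change" step can be sketched as follows. The luminance-masking exponent follows Watson's formulation, but the base sensitivity table and mean-DC constant below are illustrative placeholders, not Watson's published values:

```python
import numpy as np

def dct2(block):
    # Orthonormal 2-D DCT-II via the basis matrix.
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def max_acceptable_change(block, base_t, a=0.649, dc_mean=1024.0):
    # Luminance masking: scale the base sensitivity table by the block's
    # DC coefficient relative to a nominal mean DC.
    dc = dct2(block)[0, 0]
    return base_t * (max(dc, 1.0) / dc_mean) ** a

# A mid-gray 8x8 block (DC == dc_mean) reproduces the base table exactly.
slack = max_acceptable_change(np.full((8, 8), 128.0), np.ones((8, 8)))
```

A message bit would then be embedded in a coefficient only if the required change stays below its slack.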

Perceptual Evaluation of Duration Models in Spoken Korean

  • Chung, Hyun-Song
    • Speech Sciences / v.9 no.1 / pp.207-215 / 2002
  • Perceptual evaluation of duration models of spoken Korean was carried out based on the Classification and Regression Tree (CART) model for text-to-speech conversion. A reference set of durations was produced by a commercial text-to-speech synthesis system for comparison. The duration model which was built in the previous research (Chung & Huckvale, 2001) was applied to a Korean language speech synthesis diphone database, 'Hanmal (HN 1.0)'. The synthetic speech produced by the CART duration model was preferred in the subjective preference test by a small margin and the synthetic speech from the commercial system was superior in the clarity test. In the course of preparing the experiment, a labeled database of spoken Korean with 670 sentences was constructed. As a result of the experiment, a trained duration model for speech synthesis was obtained. The 'Hanmal' diphone database for Korean speech synthesis was also developed as a by-product of the perceptual evaluation.
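
The core CART operation, choosing the feature/threshold split that most reduces the squared error of predicted durations, can be sketched with toy data (the features and millisecond durations below are invented for illustration; the paper's actual feature set comes from Chung & Huckvale, 2001):

```python
import numpy as np

def best_split(X, y):
    # One CART step: pick the (feature, threshold) pair minimizing the
    # summed squared error of the two resulting leaves.
    best = (None, None, np.inf)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[2]:
                best = (f, t, sse)
    return best

# Toy rows: [is_vowel, phrase_final]; durations in ms (phrase-final lengthening).
X = np.array([[1, 0], [0, 0], [1, 1], [0, 1], [1, 0], [0, 1]], float)
y = np.array([80.0, 50.0, 140.0, 95.0, 85.0, 90.0])
f, t, _ = best_split(X, y)  # the phrase_final feature gives the best split
```

Recursing on each leaf until a stopping criterion yields the full duration tree.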


Adaptive Digital Watermarking Based on Wavelet Transform Using Successive Subband Quantization and Perceptual Model

  • Kim, Ju-Young;Kwon, Seong-geun;Hwang, Hee-Chul;Kwon, Ki-Ryong;Kim, Duk-Gyoo
    • Proceedings of the IEEK Conference / 2002.07b / pp.1240-1243 / 2002
  • In this paper, we propose an adaptive digital image watermarking algorithm using successive subband quantization (SSQ) and a perceptual model in the wavelet domain. The watermark is embedded into the perceptually significant coefficients (PSCs) of the image. The PSCs in the baseband are selected according to the amplitude of the coefficients, and those in the high-frequency subbands are selected by SSQ. To embed the watermark, we use a perceptual model based on the computation of the noise visibility function (NVF), which embeds stronger watermarks in texture and edge regions.
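
The NVF-based strength modulation can be sketched as follows (the window size and the D constant are common choices for this model, assumed here rather than taken from the paper):

```python
import numpy as np

def nvf(img, win=3, D=100.0):
    # Noise visibility function for a non-stationary Gaussian model:
    # NVF = 1 / (1 + theta * local_variance), theta = D / max_variance.
    h = win // 2
    pad = np.pad(img.astype(float), h, mode='reflect')
    var = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            var[i, j] = pad[i:i + win, j:j + win].var()
    theta = D / max(var.max(), 1e-9)
    return 1.0 / (1.0 + theta * var)

# Flat regions give NVF near 1 (embed weakly); textured regions give
# NVF near 0 (embed strongly), matching the abstract's strategy.
img = np.zeros((8, 8))
img[:, 4:] = (np.indices((8, 4)).sum(axis=0) % 2) * 255.0  # textured right half
v = nvf(img)
```

The embedding strength at each pixel is then typically (1 - NVF) times a texture strength plus NVF times a small flat-region strength.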


Perceptual Quality-based Video Coding with Foveated Contrast Sensitivity (Foveated Contrast Sensitivity를 이용한 인지품질 기반 비디오 코딩)

  • Ryu, Jiwoo;Sim, Donggyu
    • Journal of Broadcast Engineering / v.19 no.4 / pp.468-477 / 2014
  • This paper proposes a novel perceptual quality-based (PQ-based) video coding method with foveated contrast sensitivity (FCS). Conventional PQ-based video coding methods with contrast sensitivity minimize the perceptual quality loss of compressed video by exploiting a property of the human visual system (HVS): its sensitivity varies with the spatial frequency of visual stimuli. PQ-based video coding with foveated masking (FM), on the other hand, exploits the difference in HVS sensitivity between central and peripheral vision. In this study, a novel FCS model is proposed that considers both the conventional DCT-based JND model and the FM model. A psychophysical study is conducted to construct the proposed FCS model, and the model is applied to a PQ-based video coding algorithm implemented on the HM10.0 reference software. Experimental results show that the proposed method decreases bitrate by an average of 10% without loss of perceptual quality.
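
The foveation half of such a model, sensitivity falling with both spatial frequency and eccentricity, can be sketched with a Geisler–Perry style contrast-threshold formula (the constants below come from that general foveation model, not from this paper, whose FCS additionally folds in a DCT-based JND term):

```python
import numpy as np

def foveated_sensitivity(f, ecc, alpha=0.106, e2=2.3, ct0=1.0 / 64):
    # Contrast threshold grows exponentially with spatial frequency f
    # (cycles/degree) and retinal eccentricity ecc (degrees);
    # sensitivity is the reciprocal of the threshold.
    ct = ct0 * np.exp(alpha * f * (ecc + e2) / e2)
    return 1.0 / ct
```

A PQ-based coder can then spend fewer bits on regions and frequencies where this sensitivity is low.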

Object Motion Detection and Tracking Based on Human Perception System (인간의 지각적인 시스템을 기반으로 한 연속된 영상 내에서의 움직임 영역 결정 및 추적)

  • 정미영;최석림
    • Proceedings of the IEEK Conference / 2003.07e / pp.2120-2123 / 2003
  • This paper presents a moving-object detection and tracking algorithm that uses edge information based on the human perceptual system. The human visual system recognizes shapes and objects easily and rapidly, and perceptual organization is believed to play an important role in this ability. The paper presents an edge model (GCS) based on features extracted according to perceptual organization principles, and extracts edge information as defined by that model. Through this perceptual framework, the computer recognizes moving objects from edge information much as humans do.
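
The edge-based motion cue can be sketched as follows; a plain Sobel magnitude stands in for the paper's GCS edge model, whose definition is not given in the abstract:

```python
import numpy as np

def sobel_mag(img):
    # Gradient magnitude from the two 3x3 Sobel kernels.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    g = np.zeros(img.shape, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            g[i, j] = np.hypot((win * kx).sum(), (win * kx.T).sum())
    return g

def moving_edge_mask(prev, curr, thresh=100.0):
    # Edges present in the current frame but absent in the previous one
    # are candidate moving-object boundaries.
    return (sobel_mag(curr) > thresh) & ~(sobel_mag(prev) > thresh)

prev = np.zeros((10, 10))
curr = prev.copy()
curr[3:7, 3:7] = 255.0  # a bright square appears between frames
mask = moving_edge_mask(prev, curr)
```

Tracking then follows the connected regions of this mask from frame to frame.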


Perceptual Ad-Blocker Design For Adversarial Attack (적대적 공격에 견고한 Perceptual Ad-Blocker 기법)

  • Kim, Min-jae;Kim, Bo-min;Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.871-879 / 2020
  • Perceptual ad-blocking is a new advertising-blocking technique that detects online advertising using an artificial-intelligence-based advertising image classification model. A recent study has shown that these perceptual ad-blocking models are vulnerable to adversarial attacks, which use adversarial examples to add noise to images so that they are misclassified. In this paper, we show that the existing perceptual ad-blocking technique is weak against several adversarial examples, and that Defense-GAN and MagNet, which performed well on the MNIST and CIFAR-10 datasets, are also effective on an advertising dataset. Using the Defense-GAN and MagNet techniques, we present a new advertising image classification model that is robust to adversarial attacks. Experiments with various existing adversarial attack techniques show that the proposed approach secures accuracy and performance through robust image classification, and can even defend, to a certain level, against white-box attacks by attackers who know the details of the defense techniques.
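
The kind of attack these defenses target can be sketched with the fast gradient sign method (FGSM) on a stand-in logistic-regression "ad classifier" (the weights and inputs below are invented; the paper attacks a neural image classifier):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    # The cross-entropy gradient w.r.t. the input of a logistic-regression
    # classifier is (p - y) * w; step eps in its sign and clip to [0, 1].
    p = sigmoid(x @ w + b)
    return np.clip(x + eps * np.sign((p - y) * w), 0.0, 1.0)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.9, 0.1]), 1.0       # a correctly classified "ad"
x_adv = fgsm(x, y, w, b)               # perturbed toward "not ad"
```

Defense-GAN and MagNet counter such inputs by projecting them back onto (or detecting deviation from) the learned data manifold before classification.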

Subjective Evaluation on Perceptual Tracking Errors from Modeling Errors in Model-Based Tracking

  • Rhee, Eun Joo;Park, Jungsik;Seo, Byung-Kuk;Park, Jong-Il
    • IEIE Transactions on Smart Processing and Computing / v.4 no.6 / pp.407-412 / 2015
  • In model-based tracking, an accurate 3D model of a target object or scene is mostly assumed to be known or given in advance, but the accuracy of the model should be guaranteed for accurate pose estimation. In many application domains, on the other hand, end users are not highly distracted by tracking errors from certain levels of modeling errors. In this paper, we examine perceptual tracking errors, which are predominantly caused by modeling errors, on subjective evaluation and compare them to computational tracking errors. We also discuss the tolerance of modeling errors by analyzing their permissible ranges.
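
A common computational tracking error of the kind compared against the subjective scores is 2-D reprojection RMSE under a pinhole camera model (the intrinsics and points below are illustrative, not from the paper):

```python
import numpy as np

def project(pts3d, K):
    # Pinhole projection: image coordinates = (K @ X) / depth.
    uvw = (K @ pts3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def tracking_rmse(pts3d, observed2d, K):
    # Root-mean-square distance between model points projected with the
    # estimated pose and the points actually observed in the image.
    err = project(pts3d, K) - observed2d
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
obs = project(pts, K) + np.array([3.0, 4.0])  # tracked points offset by 5 px
```

The paper's point is that subjective tolerance to such errors can exceed what this number alone suggests.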

A Perceptually-Adaptive High-Capacity Color Image Watermarking System

  • Ghouti, Lahouari
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.570-595 / 2017
  • Robust and perceptually-adaptive image watermarking algorithms have mainly targeted gray-scale images, at either the modeling or the embedding level, despite the widespread availability of color images. Only a few existing algorithms are specifically designed for color images, where color correlation and perception are constructively exploited. In this paper, a new perceptual and high-capacity color image watermarking solution is proposed based on an extension of the algorithm of Tsui et al. The $CIEL^*a^*b^*$ space and the spatio-chromatic Fourier transform (SCFT) are combined with a perceptual model to hide watermarks in color images, where the embedding process reconciles the conflicting requirements of digital watermarking. The perceptual model, based on an emerging color image model, exploits the non-uniform just-noticeable color difference (NUJNCD) thresholds of the $CIEL^*a^*b^*$ space. Also, spread-spectrum techniques and semi-random low-density parity check codes (SR-LDPC) are used to boost the watermark robustness and capacity. Unlike existing color-based models, the data hiding capacity of our scheme relies on a game-theoretic model from which upper bounds for watermark embedding are derived. Finally, the proposed watermarking solution outperforms existing color-based watermarking schemes in terms of robustness to standard image/color attacks, hiding capacity, and imperceptibility.
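
The spread-spectrum embed/detect cycle can be sketched as follows; here the strength alpha is a constant, whereas the paper modulates it per location with the NUJNCD thresholds:

```python
import numpy as np

def chips(shape, seed=0):
    # Keyed pseudo-random +/-1 chip sequence.
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=shape)

def ss_embed(coeffs, bit, alpha=2.0, seed=0):
    # Add the chip sequence, signed by the message bit.
    return coeffs + alpha * (1.0 if bit else -1.0) * chips(coeffs.shape, seed)

def ss_detect(marked, original, seed=0):
    # Correlating the difference with the same keyed chips recovers the bit.
    return ((marked - original) * chips(marked.shape, seed)).sum() > 0

cover = np.zeros((16, 16))
```

Without the key (seed), the chip sequence looks like noise, which is what gives the scheme its robustness.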

Speech Enhancement Based on Psychoacoustic Model

  • Lee, Jingeol;Kim, Soowon
    • The Journal of the Acoustical Society of Korea / v.19 no.3E / pp.12-18 / 2000
  • Psychoacoustic-model-based methods have recently been introduced to enhance speech signals corrupted by ambient noise. In particular, the perceptual filter is analytically derived so that the frequency content of the input noisy signal is made the same as that of the estimated clean signal in the auditory domain. However, the analytical derivation relies on deconvolution with the spreading function of the psychoacoustic model, which results in an ill-conditioned problem. To cope with this problem, we propose a novel psychoacoustic-model-based speech enhancement filter whose principle is the same as that of the perceptual filter, but which is derived by a constrained optimization that provides solutions to the ill-conditioned problem. It is demonstrated with artificially generated signals that the proposed filter operates according to this principle. It is also shown that the proposed filter outperforms the perceptual filter, provided that the clean speech signal is separable from the noise.
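
The deconvolution and a regularized escape from its ill-conditioning can be sketched as follows. The spreading matrix is a toy symmetric exponential over Bark bands, and Tikhonov-regularized least squares stands in for the paper's constrained optimization, whose exact constraints are not given in the abstract:

```python
import numpy as np

def spreading_matrix(n=24, decay=0.25):
    # Toy inter-band spreading function over n Bark bands.
    i = np.arange(n)
    return 10.0 ** (-decay * np.abs(i[:, None] - i[None, :]))

def regularized_deconv(S, target, lam=1e-3):
    # Tikhonov-regularized least squares instead of a raw inverse:
    # solve (S'S + lam I) g = S' target.
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[0]), S.T @ target)

S = spreading_matrix()
g_true = np.linspace(1.0, 2.0, 24)          # "clean" excitation pattern
g_hat = regularized_deconv(S, S @ g_true)   # recover it after spreading
```

As the spreading grows broader (smaller decay), S approaches singularity and the regularization term becomes essential.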
