• Title/Summary/Keyword: Gray coding


Interband Vector Quantization of Remotely Sensed Satellite Image Using Edge Region Compensation (에지 영역 보상을 이용한 원격 센싱된 인공위성 화상의 대역간 벡터양자화)

  • Ban, Seong-Won; Kim, Young-Choon; Lee, Kuhn-Il
    • Journal of Sensor Science and Technology, v.8 no.2, pp.124-132, 1999
  • In this paper, we propose interband vector quantization of remotely sensed satellite images using edge region compensation. The method classifies each pixel vector according to the spectral reflection characteristics of the satellite image data. For each class, classified intraband VQ and classified interband VQ are performed to remove intraband and interband redundancies, respectively. In edge regions, the class is compensated using the class information of neighboring blocks and the gray values of the quantized reference band; classified interband VQ is then performed with the compensated class information to remove interband redundancy effectively. Experiments on remotely sensed satellite images show that the coding efficiency of the proposed method is better than that of the conventional method.

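The classified VQ scheme described in the abstract above assigns each block to a class and quantizes it with that class's codebook. Below is a minimal, illustrative sketch of classified VQ encoding; the block size, the edge/flat classifier, and the random stand-in codebooks are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of classified vector quantization (illustrative, not the paper's code).
import numpy as np

def classify_block(block, edge_threshold=30.0):
    """Toy classifier: call a block 'edge' if its gray-level range is large."""
    return "edge" if block.max() - block.min() > edge_threshold else "flat"

def vq_encode_block(block, codebooks):
    """Return (class label, index of nearest codeword) for one image block."""
    label = classify_block(block)
    codebook = codebooks[label]                       # shape: (num_codewords, block_dim)
    vector = block.reshape(-1).astype(np.float64)
    distances = np.sum((codebook - vector) ** 2, axis=1)
    return label, int(np.argmin(distances))

# Example: 4x4 blocks, random codebooks standing in for trained per-class codebooks.
rng = np.random.default_rng(0)
codebooks = {"flat": rng.uniform(0, 255, (64, 16)),
             "edge": rng.uniform(0, 255, (64, 16))}
block = rng.uniform(0, 255, (4, 4))
print(vq_encode_block(block, codebooks))
```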

An Adaptive Finite State Vector Quantization Method Using a New Side Match Distortion Function for Image Coding (영상 부호화를 위한 새로운 사이드 매치 왜곡 함수를 이용한 적응 유한상태 벡터 양자화 기법)

  • Lee, Sang-Un; Lee, Doo-Soo; Lim, In-Chil
    • Journal of the Korean Institute of Telematics and Electronics S, v.35S no.10, pp.118-125, 1998
  • We introduce an adaptive finite state vector quantization method using a new side match distortion function. The conventional side match distortion function makes the gray-level transition across block boundaries as smooth as possible and selects proper state codebooks in flat areas, where the spatial correlation is high, but it cannot select proper codebooks in edge areas, where the spatial correlation is low. The proposed distortion function adds variances that represent the image characteristics to the conventional side match distortion function as weighting values, so it can select better state codebooks than the conventional function. In addition, if the quantizer predicts a wrong state, the proposed method can correct it. As a result, satisfactory image quality is obtained.

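The side match distortion mentioned in the abstract above compares a candidate codeword's boundary pixels with the boundaries of previously decoded neighboring blocks, and the proposed variant adds a variance-based weight. The sketch below is an illustrative reconstruction of that idea, not the paper's exact function; the form of the variance term and the weight parameter are assumptions.

```python
# Illustrative side-match distortion with an assumed variance weighting term.
import numpy as np

def side_match_distortion(codeword, upper_block, left_block, weight=0.0):
    """codeword, upper_block, left_block: 2-D arrays of the same block size."""
    # Conventional side match: compare shared boundaries with decoded neighbors.
    top_error = np.sum((codeword[0, :] - upper_block[-1, :]) ** 2)
    left_error = np.sum((codeword[:, 0] - left_block[:, -1]) ** 2)
    # Assumed variance term: penalize codewords whose activity differs from that
    # of the neighboring blocks, to help codebook selection in edge areas.
    variance_gap = abs(np.var(codeword) - np.var(np.vstack([upper_block, left_block])))
    return top_error + left_error + weight * variance_gap
```

In a finite state VQ, a distortion of this kind would typically be evaluated over the master codebook to pick the state codebook for the current block; the exact selection rule used in the paper is not reproduced here.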

Classification of Whale Sounds using LPC and Neural Networks (신경망과 LPC 계수를 이용한 고래 소리의 분류)

  • An, Woo-Jin; Lee, Eung-Jae; Kim, Nam-Gyu; Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing, v.18 no.2, pp.43-48, 2017
  • Underwater transient signals are complex, time-varying, nonlinear, and of short duration, so it is very hard to model them with reference patterns. In this paper, we divide the whole signal into short frames of constant length with overlap between consecutive frames. Twentieth-order LPC (Linear Predictive Coding) coefficients are extracted from the original signals using the Durbin algorithm and fed to a neural network. 65% of the signals were used for training and 35% for testing with a neural network with two hidden layers. The whale types used for sound classification are the Blue whale, Dulsae whale, Gray whale, Humpback whale, Minke whale, and Northern Right whale. Finally, we obtained a classification rate of more than 83% on the test signals.

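The abstract above mentions extracting 20th-order LPC coefficients with the Durbin algorithm. A minimal sketch of the Levinson-Durbin recursion is given below; the Hamming window and frame length are illustrative choices, not details stated in the paper.

```python
# Minimal sketch of frame-wise LPC extraction via the Levinson-Durbin recursion.
import numpy as np

def lpc_coefficients(frame, order=20):
    """Return LPC coefficients a[1..order] for one signal frame."""
    frame = frame * np.hamming(len(frame))            # illustrative window choice
    # Autocorrelation at lags 0..order.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this step of the recursion.
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / error
        a[1:i + 1] += k * a[i - 1::-1][:i]            # update a[1..i]
        error *= (1.0 - k * k)
    return a[1:]                                      # 'order' coefficients per frame

# Example: one 256-sample frame of a toy signal.
t = np.arange(256)
frame = np.sin(0.3 * t) + 0.1 * np.random.default_rng(1).standard_normal(256)
print(lpc_coefficients(frame).shape)                  # (20,)
```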

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.4, pp.562-567, 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human Robot Interface) as well as in HCI (Human Computer Interaction). By using facial expressions, we can elicit reactions that correspond to the emotional state of the user in HCI, and service agents such as intelligent robots can infer which services are suitable to provide to the user. In this article, we address the issue of expressive face modeling using an advanced Active Appearance Model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most widely expressed by the eyes and the mouth. To recognize human emotion from a facial image, we need to extract feature points such as Ekman's Action Units (AU). The Active Appearance Model (AAM) is one of the commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the initial parameters of the model, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial parameters of the AAM from the reconstructed facial model. Then the distance error between the model and the target contour is reduced by adjusting the parameters of the model. Finally, the model that matches the facial feature outline after several iterations is obtained, and the facial emotion is recognized using the Bayesian Network.
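The AAM fitting step described in the abstract above iteratively adjusts the model parameters to reduce the error between the synthesized appearance and the target face region. The sketch below shows a generic AAM-style update loop under the common assumption of a precomputed linear parameter-update matrix R; it illustrates the fitting idea only, not the authors' procedure, and the Bayesian Network classification stage is omitted.

```python
# Generic AAM-style parameter refinement loop (illustrative assumption, not the paper's method).
import numpy as np

def fit_appearance_model(params, target_texture, synthesize, R, iterations=20, tol=1e-3):
    """Iteratively refine appearance-model parameters.

    params          -- initial parameter vector (e.g. from sample-based learning)
    target_texture  -- texture sampled from the input image (1-D vector)
    synthesize      -- function mapping parameters to a model texture vector
    R               -- precomputed matrix mapping texture residuals to parameter updates
    """
    for _ in range(iterations):
        residual = synthesize(params) - target_texture   # texture difference
        params = params - R @ residual                    # additive parameter update
        if np.linalg.norm(residual) < tol:                # stop once the fit is close enough
            break
    return params
```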