• Title/Abstract/Keyword: Quantization Error

Search results: 296 items (processing time 0.025 s)

RGB 공간상의 국부 영역 블럭을 이용한 칼라 영상 양자화 (Color Image Quantization Using Local Region Block in RGB Space)

  • 박양우;이응주;김기석;정인갑;하영호
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송공학회 1995년도 학술대회
    • /
    • pp.83-86
    • /
    • 1995
  • Many image display devices allow only a limited number of colors to be displayed simultaneously. When displaying a natural color image with a color palette, it is necessary to construct an optimal palette and to map each pixel of the original image to a palette color quickly. In this paper, we propose a clustering algorithm that uses a local region block centered on each color cluster in the prequantized 3-D histogram. Cluster pairs with the least distortion error are merged according to a distortion measure, and the clustering process is repeated until the desired number of colors is obtained. In the same way, each original color is mapped to a palette color via a local region block centered on its prequantized value. The proposed algorithm also incorporates a spatial activity weighting value for smooth regions. The method produces high-quality display images and considerably reduces computation time.
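The core of the palette-construction step is a greedy merge of the cluster pair with the smallest distortion increase. Below is a minimal Python sketch of that criterion; the function name `merge_clusters` and the exhaustive pairwise search are illustrative only, since the paper restricts candidate pairs to a local region block around each cluster to save computation.

```python
import numpy as np

def merge_clusters(colors, counts, n_palette):
    """Greedy palette-construction sketch: repeatedly merge the pair of color
    clusters whose merge causes the least weighted squared-error increase,
    until n_palette clusters remain. `colors` is an (N, 3) array of
    prequantized RGB cluster centers, `counts` holds each cluster's pixel count."""
    centers = [np.asarray(c, dtype=float) for c in colors]
    weights = [float(w) for w in counts]
    while len(centers) > n_palette:
        best = None
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                # increase in total squared error if i and j are merged
                wi, wj = weights[i], weights[j]
                d = (wi * wj) / (wi + wj) * np.sum((centers[i] - centers[j]) ** 2)
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        wi, wj = weights[i], weights[j]
        centers[i] = (wi * centers[i] + wj * centers[j]) / (wi + wj)
        weights[i] = wi + wj
        del centers[j], weights[j]
    return np.array(centers)
```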

X-ray 의료영상 압축을 위한 ADCT-VQ와 JPEG의 성능 비교 (Performance Comparison of the ADCT-VQ and JPEG for X-ray Image Compression)

  • 김근섭;임호근;권용무;이재천;김형곤
    • 대한의용생체공학회:학술대회논문집
    • /
    • 대한의용생체공학회 1992년도 추계학술대회
    • /
    • pp.29-33
    • /
    • 1992
  • We examine the compression performance of two irreversible (lossy) compression techniques, ADCT-VQ (Adaptive Discrete Cosine Transform - Vector Quantization) and JPEG (Joint Photographic Experts Group), which form the basis of medical image information systems. At the same compression ratio, the MSE (Mean Square Error) is 0.578 lower for JPEG than for ADCT-VQ, while the SNR (Signal-to-Noise Ratio) is 1.236 dB higher for JPEG than for ADCT-VQ.
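For reference, the two figures of merit used in this comparison can be computed as below; this is a generic sketch (function names are mine), not code from the paper.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between two images of equal shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in dB: signal power over error power."""
    signal_power = np.mean(original.astype(float) ** 2)
    return 10.0 * np.log10(signal_power / mse(original, reconstructed))
```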


연속된 데이터를 위한 새로운 롬 압축 방식 (A New ROM Compression Method for Continuous Data)

  • 양병도;김이섭
    • 대한전자공학회논문지SD
    • /
    • Vol.40 No.5
    • /
    • pp.354-360
    • /
    • 2003
  • A new ROM compression method for continuous data is proposed. It is based on two newly proposed ROM compression algorithms. The first is a region-selection ROM compression algorithm, which divides the data space into several regions by magnitude and address and stores only the regions that actually contain data. The second is a quantization-ROM and error-ROM compression algorithm, which stores the quantized data and the quantization error separately. Using these two algorithms, ROM size reductions of 40-60% are obtained for a variety of continuous data.
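A minimal sketch of the second algorithm (quantization ROM plus error ROM) is given below; the function name and interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def split_quant_error_rom(data, quant_step):
    """Split samples into a coarse quantized ROM and a narrow error ROM.
    For smooth (continuous) data the residuals are small, so they can be
    stored with far fewer bits than the original word width; the scheme is
    lossless because quant + error reproduces the original sample."""
    data = np.asarray(data, dtype=int)
    quant = (data // quant_step) * quant_step   # coarse quantized value
    error = data - quant                        # residual, in [0, quant_step)
    error_bits = int(np.ceil(np.log2(quant_step)))
    return quant, error, error_bits
```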

TCM/PSK의 양자화 Radix-trellis Viterbi 복호 (Radix-trellis Viterbi Decoding of TCM/PSK using Metric Quantization)

    • 한국전자파학회논문지
    • /
    • Vol.11 No.5
    • /
    • pp.731-737
    • /
    • 2000
  • This paper proposes a fast decoding scheme for TCM/PSK by applying the radix-trellis decoding concept, previously used for conventional convolutional coding (with hard-decision Viterbi decoding), to Ungerboeck's TCM/PSK coded modulation. As a concrete example, 16-stage trellis-coded 8-ary PSK is considered. The computation of the path metric (PM) and branch metric (BM) values is described for radix-4 and radix-16 trellis decoding, and simulations are used to analyze the performance according to the quantization levels of the I-Q values, branch metric values, and path metric values, from which the appropriate number of quantization bits (binary digits) for each is derived.
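As a rough illustration of the metric-quantization step (not the paper's radix-4/16 trellis structure), the sketch below uniformly quantizes soft values such as I-Q samples or branch metrics to a given number of bits; all names and parameters are assumptions.

```python
import numpy as np

def quantize_metric(values, n_bits, max_abs):
    """Uniformly quantize soft metrics (e.g., I-Q samples or branch metrics)
    to signed n_bits words, clipping at +/- max_abs."""
    levels = 2 ** (n_bits - 1) - 1
    step = max_abs / levels
    q = np.clip(np.round(np.asarray(values, dtype=float) / step), -levels, levels)
    return q.astype(int), step

def branch_metric(received, candidate):
    """Squared Euclidean distance between a received sample and a trellis
    branch's 8-PSK constellation point."""
    return abs(received - candidate) ** 2

# example: quantize the branch metrics of one received sample to 4 bits
psk8 = np.exp(1j * 2 * np.pi * np.arange(8) / 8)
metrics = [branch_metric(0.9 + 0.1j, c) for c in psk8]
q_metrics, step = quantize_metric(metrics, n_bits=4, max_abs=4.0)
```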


Reduction of Reconstruction Errors in Kinoform CGHs by Modified Simulated Annealing Algorithm

  • Yang, Han-Jin;Cho, Jeong-Sik;Won, Yong-Hyub
    • Journal of the Optical Society of Korea
    • /
    • Vol.13 No.1
    • /
    • pp.92-97
    • /
    • 2009
  • In this paper, a conventional simulated annealing (SA) method for optimizing a kinoform computer-generated hologram (CGH) is analyzed, and the SA method is modified to reduce the reconstruction error rate (ER) of the CGH. The dependence of the ER on the quantization level of the hologram pattern and on the size of the data is analyzed. To overcome saturation of the ER, the conventional SA method is modified so that it magnifies the Fourier-transformed pattern in an intermediate step. The proposed method achieves an ER of less than 1%, which is not attainable with the conventional SA method.
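A minimal simulated-annealing sketch for a quantized phase-only hologram is shown below; it uses a plain FFT reconstruction and omits the paper's pattern-magnification modification, and every identifier and default parameter is an assumption for illustration.

```python
import numpy as np

def anneal_kinoform(target, levels=16, iters=5000, t0=1.0, alpha=0.999, seed=0):
    """Simulated-annealing sketch for a phase-only (kinoform) CGH.
    One pixel's quantized phase level is perturbed per step; the move is kept
    if the FFT-reconstruction error drops, or with a Boltzmann probability
    otherwise. `target` is the desired (non-negative) intensity pattern."""
    rng = np.random.default_rng(seed)
    phase = rng.integers(0, levels, size=target.shape)
    goal = target / target.sum()

    def recon_error(ph):
        field = np.exp(2j * np.pi * ph / levels)
        intensity = np.abs(np.fft.fft2(field)) ** 2
        return np.sum((intensity / intensity.sum() - goal) ** 2)

    err, temp = recon_error(phase), t0
    for _ in range(iters):
        i, j = rng.integers(target.shape[0]), rng.integers(target.shape[1])
        old = phase[i, j]
        phase[i, j] = rng.integers(0, levels)
        new_err = recon_error(phase)
        if new_err < err or rng.random() < np.exp((err - new_err) / temp):
            err = new_err          # accept the move
        else:
            phase[i, j] = old      # reject and restore
        temp *= alpha
    return phase, err
```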

차이영상에 대한 DCT 계수의 끼워짜기를 이용한 비트율 감소 (Bitrate Reduction by Interleaving DCT Coefficients for Differential Images)

  • 이상길;양경호;이충웅
    • 전자공학회논문지B
    • /
    • Vol.30B No.7
    • /
    • pp.14-23
    • /
    • 1993
  • This paper proposes an algorithm to reduce the bitrate for transmission of MCP (motion-compensated prediction) error signals. Many digital image coders employ hybrid coding schemes that perform motion compensation, the DCT, quantization, and variable-length coding. Variable-length coding compresses the quantized DCT coefficient data by removing their statistical redundancy, but some DCT blocks have interblock statistical redundancy as well as intrablock redundancy. To exploit both, the DCT blocks are classified into an interleaving group and a non-interleaving group: each DCT block in the non-interleaving group is encoded independently, while the DCT blocks in the interleaving group are encoded after their DCT coefficients are interleaved. Simulations show that the proposed method outperforms the conventional method in which each DCT block is encoded independently.
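The interleaving idea can be sketched as follows: coefficients of the blocks in the interleaving group are emitted coefficient-index first rather than block first, which groups similar values (and zeros) together for the variable-length coder. The block-classification step itself is not reproduced, and the function names are illustrative.

```python
import numpy as np

def interleave_blocks(blocks):
    """Interleave quantized 8x8 DCT coefficients across a group of blocks:
    the k-th coefficient of every block is emitted before the (k+1)-th.
    When the blocks are statistically similar, this lengthens zero runs."""
    stacked = np.stack([b.reshape(-1) for b in blocks])   # (n_blocks, 64)
    return stacked.T.reshape(-1)                          # coefficient-major order

def deinterleave_blocks(stream, n_blocks):
    """Invert interleave_blocks, recovering the list of 8x8 blocks."""
    coeffs = np.asarray(stream).reshape(64, n_blocks).T
    return [c.reshape(8, 8) for c in coeffs]
```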


웨이브렛 변환과 다중 가중치를 이용한 강인한 패턴 워터마킹 (Robust pattern watermarking using wavelet transform and multi-weights)

  • 김현환;김용민;김두영
    • 한국통신학회논문지
    • /
    • Vol.25 No.3B
    • /
    • pp.557-564
    • /
    • 2000
  • This paper presents a watermarking algorithm for embedding a visually recognizable pattern (mark, logo, symbol, stamp, or signature) into an image. First, the color image (RGB model) is converted to the YCbCr model, and the Y component is decomposed with a 3-level wavelet transform. The coefficients are then combined with the pattern watermark, a PN (pseudo-noise) code as used in spread-spectrum communication, and multilevel watermark weights, and the result is inserted into the discrete wavelet domain. In our scheme, a new calculation method is designed to compute the wavelet transform with integer values, taking the quantization error into account, and the color conversion uses fixed-point arithmetic to ease future hardware implementation. We also introduce a multilevel-threshold scheme to be robust against common signal distortions and malicious attacks while preserving image quality with respect to the human visual system. Experimental results show that the proposed watermarking algorithm is superior to other similar watermarking algorithms and is robust to common signal processing and geometric transforms such as brightness and contrast changes, filtering, scaling, JPEG lossy compression, and geometric deformation.
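A heavily simplified sketch of spread-spectrum embedding into a wavelet subband is shown below. It uses a single-level Haar transform and one detail subband instead of the paper's 3-level transform, multilevel weights, and integer/fixed-point arithmetic; all names and parameters are assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform of an image with even height and width;
    returns the LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    return (a + b + c + d) / 2, (a - b + c - d) / 2, \
           (a + b - c - d) / 2, (a - b - c + d) / 2

def embed_watermark(image, mark_bits, alpha=2.0, seed=1):
    """Spread-spectrum sketch: each watermark bit is spread over a PN chip
    sequence and added with weight alpha to the HH (diagonal detail) subband,
    then the image is rebuilt with the inverse Haar transform."""
    ll, lh, hl, hh = haar_dwt2(image)
    rng = np.random.default_rng(seed)        # the PN generator seed acts as the key
    flat = hh.reshape(-1)                    # view into hh, modified in place
    chip = flat.size // len(mark_bits)
    for k, bit in enumerate(mark_bits):
        pn = rng.choice([-1.0, 1.0], size=chip)
        flat[k * chip:(k + 1) * chip] += alpha * (1.0 if bit else -1.0) * pn
    out = np.empty(image.shape, dtype=float)  # inverse one-level Haar
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out
```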


Concurrent Support Vector Machine 프로세서 (Concurrent Support Vector Machine Processor)

  • 위재우;이종호
    • 대한전기학회논문지:시스템및제어부문D
    • /
    • Vol.53 No.8
    • /
    • pp.578-584
    • /
    • 2004
  • The CSVM (Concurrent Support Vector Machine), a digital architecture that performs all phases of the SVM (Support Vector Machine) recognition process, including kernel computation, learning, and recall, on a single chip, is proposed. Concurrent operation by a parallel architecture of processing elements yields high speed and throughput, so classification problems on high-dimensional biological data are solved quickly and easily with the CSVM. The quadratic programming of the original SVM learning algorithm is not suitable for hardware implementation due to its complexity and large memory consumption, so hardware-friendly SVM learning algorithms, the kernel adatron and the kernel perceptron, are embedded on the chip. Experiments on the fixed-point algorithm, which is subject to quantization error, are performed and their results are compared with those of the floating-point algorithm. The CSVM implemented on an FPGA chip produces fast and accurate results on high-dimensional cancer data.
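One of the hardware-friendly learners mentioned, the kernel perceptron, can be sketched in a few lines; the fixed-point rounding below only mimics a quantized datapath in software and is not the chip's actual arithmetic, and all names are assumptions.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def to_fixed_point(value, frac_bits=8):
    """Round to a fixed-point grid with `frac_bits` fractional bits,
    mimicking the quantization error of a narrow hardware datapath."""
    scale = 2.0 ** frac_bits
    return np.round(value * scale) / scale

def kernel_perceptron(X, y, epochs=20, frac_bits=8):
    """Kernel perceptron sketch with fixed-point-quantized kernel values.
    `y` holds labels in {-1, +1}; alpha[i] counts the mistakes on sample i."""
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[to_fixed_point(rbf_kernel(X[i], X[j]), frac_bits)
                   for j in range(n)] for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            if y[i] * np.dot(alpha * y, K[:, i]) <= 0:   # misclassified sample
                alpha[i] += 1.0
    return alpha
```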

A Review of Fixed-Complexity Vector Perturbation for MU-MIMO

  • Mohaisen, Manar
    • Journal of Information Processing Systems
    • /
    • Vol.11 No.3
    • /
    • pp.354-369
    • /
    • 2015
  • Recently, there has been an increasing demand for high-data-rate services, and several multiuser multiple-input multiple-output (MU-MIMO) techniques have been introduced to meet it. Among these techniques, vector perturbation combined with linear precoding techniques, such as zero-forcing and minimum mean-square error precoding, has proven efficient in reducing the transmit power and hence performs close to the optimum algorithm. In this paper, we review several fixed-complexity vector perturbation techniques and investigate their performance under both perfect and imperfect channel knowledge at the transmitter. We also investigate the combination of block diagonalization with vector perturbation and outline its merits.
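For intuition, a brute-force vector-perturbation sketch with zero-forcing precoding is given below; the fixed-complexity schemes reviewed in the paper replace this exhaustive search with a bounded tree search. The names, perturbation spacing `tau`, and search range are illustrative assumptions.

```python
import numpy as np
from itertools import product

def zf_vector_perturbation(H, s, tau=4.0, search=(-1, 0, 1)):
    """Exhaustive vector-perturbation sketch for a small MU-MIMO system.
    The zero-forcing inverse maps the perturbed symbol vector s + tau*l to
    the transmit vector; the integer offset l (real and imaginary parts drawn
    from `search`) is chosen to minimize the transmit power."""
    Hinv = np.linalg.pinv(H)
    best_x, best_p = None, np.inf
    K = len(s)
    for re in product(search, repeat=K):
        for im in product(search, repeat=K):
            l = np.array(re) + 1j * np.array(im)
            x = Hinv @ (s + tau * l)
            p = np.real(np.vdot(x, x))     # transmit power of this candidate
            if p < best_p:
                best_p, best_x = p, x
    return best_x, best_p
```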

클래스 히스토그램 등화 기법에 의한 강인한 음성 인식 (Robust Speech Recognition by Utilizing Class Histogram Equalization)

  • 서영주;김회린;이윤근
    • 대한음성학회지:말소리
    • /
    • No.60
    • /
    • pp.145-164
    • /
    • 2006
  • This paper proposes class histogram equalization (CHEQ) to compensate noisy acoustic features for robust speech recognition. CHEQ aims to compensate for the acoustic mismatch between training and test speech recognition environments and to overcome the limitations of conventional histogram equalization (HEQ). In contrast to HEQ, CHEQ adopts multiple class-specific distribution functions for the training and test environments and equalizes the features using their class-specific training and test distributions. According to the class-information extraction method, CHEQ takes two forms: hard-CHEQ, based on vector quantization, and soft-CHEQ, using a Gaussian mixture model. Experiments on the Aurora 2 database confirmed the effectiveness of CHEQ, yielding a relative word error reduction of 61.17% over the baseline mel-cepstral features and of 19.62% over conventional HEQ.
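The equalization step itself (mapping test features through the test CDF and the inverse reference CDF) can be sketched as below; applying it per class, with classes chosen by VQ or a GMM, is what distinguishes CHEQ from plain HEQ. The function name and the empirical-quantile implementation are assumptions.

```python
import numpy as np

def histogram_equalize(test_feat, train_ref):
    """Map each test feature value through the empirical test CDF and then
    through the inverse of the training (reference) CDF, so the equalized
    features follow the reference distribution. CHEQ applies this per class
    (e.g., per VQ codeword or per Gaussian component) rather than to the
    pooled data as plain HEQ does."""
    test_sorted = np.sort(test_feat)
    # empirical CDF value of each test sample, in (0, 1]
    ranks = np.searchsorted(test_sorted, test_feat, side='right') / len(test_feat)
    # inverse reference CDF evaluated via quantiles of the training data
    return np.quantile(train_ref, ranks)
```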
