• Title/Summary/Keyword: Re-quantization

A STUDY ON THE RE-QUANTIZATION METHOD FOR PREVENTING DISTORTION OF CORRELATION RESULT (상관결과의 왜곡 방지를 위한 재양자화 방법에 관한 연구)

  • Yeom, Jae-Hwan;Oh, Se-Jin;Roh, Duk-Gyoo;Oh, Chung-Sik;Jung, Jin-Seung;Chung, Dong-Kyu;Oyama, Tomoaki;Kawaguchi, Noriyuki;Kobayashi, Hideyuki;Kawakami, Kazuyuki;Onuki, Hirofumi;Ozeki, Kensuke
    • Publications of The Korean Astronomical Society
    • /
    • v.27 no.5
    • /
    • pp.419-429
    • /
    • 2012
  • In this paper, we propose a new re-quantization method, applied after FFT processing, to prevent distortion of the correlation results of the VLBI Correlation Subsystem (VCS). Re-quantization rearranges the data bits so as to reduce the data rate of the 16-bit FFT output of the VCS. We found that the re-quantization method introduced in the initial VCS design produced a distorted correlation spectrum in delay tracking experiments. To solve this, two re-quantization methods, the comparison-type and the selection-type, are proposed. The first re-quantizes the FFT result to a valid bit-width by comparing it with the input data after determining an adequate threshold. The second manually selects the valid bits of the FFT result after finding the valid field of the data according to the bit distribution of the input data. Experiments confirmed that the second method is more effective than the first, and it can be implemented without major modification of the applied method within the limited FPGA resources. However, when the proposed selection-type method re-quantizes the FFT result with 4 bits, distortion of the correlation result still appears. To fix this problem, the re-quantization bit-width was extended to 8 bits. Simulation and correlation experiments with the VCS verify that the proposed 8-bit selection-type method eliminates the distortion of the correlation result.
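The selection-type re-quantization described above can be illustrated with a minimal sketch: pick the bit-field of the 16-bit FFT output that actually carries signal, then keep only the requested number of bits. This is an assumption-laden Python illustration, not the VCS FPGA logic; the real method derives the field from the bit distribution of the input data.

```python
import numpy as np

def select_valid_field(fft_vals, out_bits=8):
    """Selection-type re-quantization sketch: drop the bits below the
    field that actually carries signal, keeping out_bits signed bits.

    Illustrative only -- the field-selection heuristic (peak magnitude
    of the block) is an assumption, not the paper's exact rule.
    """
    x = np.asarray(fft_vals, dtype=np.int32)
    peak = int(np.max(np.abs(x)))
    top_bit = peak.bit_length()                    # highest bit in use
    shift = max(top_bit - (out_bits - 1), 0)       # reserve 1 bit for sign
    y = x >> shift                                 # arithmetic right shift
    lo, hi = -(1 << (out_bits - 1)), (1 << (out_bits - 1)) - 1
    return np.clip(y, lo, hi), shift

# 16-bit-ish FFT outputs re-quantized to 8 bits.
vals = np.array([1200, -3400, 800, 2500], dtype=np.int32)
q8, shift = select_valid_field(vals, out_bits=8)
```

With too few output bits (e.g. 4), large values saturate at the clip step, which mirrors the distortion the paper observed before extending the width to 8 bits.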

IMAGE COMPRESSION USING VECTOR QUANTIZATION

  • Pantsaena, Nopprat;Sangworasil, M.;Nantajiwakornchai, C.;Phanprasit, T.
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.979-982
    • /
    • 2002
  • Compressing image data using Vector Quantization (VQ) [1]-[3] compares training vectors with a codebook; the result is the index of the codeword with minimum distortion. Using a random codebook reduces image quality. This research presents the splitting solution [4],[5] for building the codebook, which improves image quality [6]: the training vectors are averaged, and the average is then split into codewords with minimum distortion. The resulting codebook gives better image quality than a random codebook.
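The splitting approach above can be sketched in the style of the classic LBG algorithm: start from the mean of all training vectors, split each codeword into a perturbed pair, and refine by nearest-neighbour averaging. This is a rough sketch under assumed parameters (`eps`, iteration count), not the paper's implementation.

```python
import numpy as np

def lbg_codebook(train, n_codewords, n_iter=10, eps=0.01):
    """Codebook design by splitting (LBG-style sketch).

    Begin with one codeword (the global mean of the training vectors),
    then repeatedly split every codeword into a +/- perturbed pair and
    refine with Lloyd iterations until n_codewords is reached.
    """
    train = np.asarray(train, dtype=float)
    codebook = train.mean(axis=0, keepdims=True)   # 1 codeword: global mean
    while len(codebook) < n_codewords:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            # Assign each training vector to its nearest codeword.
            d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            idx = d.argmin(axis=1)
            for k in range(len(codebook)):         # re-centre each codeword
                members = train[idx == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

# Two well-separated clusters of 4-dimensional image blocks.
blocks = np.vstack([np.zeros((50, 4)), np.ones((50, 4)) * 10])
cb = lbg_codebook(blocks, n_codewords=2)
```

Because every codeword descends from the training average rather than a random draw, the initial codebook already sits near the data, which is the quality advantage the paper reports over a random codebook.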


DEVELOPMENT AND PERFORMANCE EVALUATION OF SOFTWARE SIMULATOR FOR APPROVING OF VLBI CORRELATION SUBSYSTEM (VLBI상관서브시스템의 검증을 위한 소프트웨어 시뮬레이터의 개발 및 성능시험)

  • Oh, Se-Jin;Roh, Duk-Gyoo;Yeom, Jae-Hwan;Chung, Hyun-Soo;Lee, Chang-Hoon;Kim, Hyo-Ryoung;Kim, Kwang-Dong;Kang, Yong-Woo;Park, Sun-Yeop
    • Publications of The Korean Astronomical Society
    • /
    • v.23 no.2
    • /
    • pp.73-90
    • /
    • 2008
  • A software simulator was developed to verify the VLBI Correlation Subsystem (VCS) trial-product hardware. The simulator includes delay tracking, fringe rotation, bit-jump, FFT analysis, re-quantization, and auto/cross-correlation functions, so that the functions of the VCS trial-product hardware can be confirmed. To verify the effectiveness of the simulator, we carried out experiments using simulated data, a software-generated mixture of white noise and a tone signal. We confirmed that the simulator's performance closely matches that of the hardware system. In the spectral-analysis and re-quantization experiments, the simulator revealed a serious problem in the VCS hardware: the bit-width specified for the FFT result data stream in the VCS hardware specification is insufficient. Through these experiments, the simulator was verified to be effective. In the future, we will improve and extend the simulator so that it can be used as a software correlator for the Korea-Japan Joint VLBI Correlator (KJJVC).
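The core FX-style processing chain such a simulator mimics (FFT each station's stream, then conjugate-multiply and accumulate per segment) can be sketched as follows. This is a hypothetical minimal illustration, not the actual simulator code; delay tracking, fringe rotation, bit-jump, and re-quantization are omitted.

```python
import numpy as np

def fx_cross_correlate(x, y, fft_len=256):
    """Minimal FX-correlator sketch: FFT per station, then conjugate
    multiplication and accumulation over FFT segments.
    """
    n_seg = min(len(x), len(y)) // fft_len
    acc = np.zeros(fft_len, dtype=complex)
    for s in range(n_seg):
        X = np.fft.fft(x[s * fft_len:(s + 1) * fft_len])
        Y = np.fft.fft(y[s * fft_len:(s + 1) * fft_len])
        acc += X * np.conj(Y)          # cross-power spectrum, accumulated
    return acc / n_seg

# Test data in the spirit of the paper: white noise plus a common tone
# (here placed exactly at FFT bin 32; bin choice is an assumption).
rng = np.random.default_rng(1)
t = np.arange(4096)
tone = np.cos(2 * np.pi * 32 / 256 * t)
x = tone + 0.1 * rng.standard_normal(t.size)
y = tone + 0.1 * rng.standard_normal(t.size)
spec = fx_cross_correlate(x, y)
```

The common tone should stand out in the averaged cross-power spectrum while the independent noise averages down, which is how a tone-plus-noise test exposes spectral or re-quantization defects.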

Performance Evaluation of VLBI Correlation Subsystem Main Product (VLBI 상관 서브시스템 본제품의 제작현장 성능시험)

  • Oh, Se-Jin;Roh, Duk-Gyoo;Yeom, Jae-Hwan;Oyama, Tomoaki;Park, Sun-Youp;Kang, Yong-Woo;Kawaguchi, Noriyuki;Kobayashi, Hideyuki;Kawakami, Kazuyuki
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.322-332
    • /
    • 2011
  • In this paper, we introduce the first performance evaluation of the VLBI Correlation Subsystem (VCS) main product, the core system of the Korea-Japan Joint VLBI Correlator (KJJVC). The main goal of this evaluation is to enhance the completeness of the overall system by checking unresolved issues through experiments on various test items at the manufacturer, before field installation. The functional test covered the overflow problem that occurred in the FFT re-quantization module of the VCS trial product due to insufficient effective bits. Through the factory performance test of the VCS main product, the FFT re-quantization problem discovered in the 2008 trial-product test was clearly solved, and important functions such as delay tracking, delay compensation, and frequency binning were added. We also confirmed that the predicted correlation results (fringes) were obtained in correlation tests using real astronomical observation data (wideband/narrowband).

Robust Image Hashing for Tamper Detection Using Non-Negative Matrix Factorization

  • Tang, Zhenjun;Wang, Shuozhong;Zhang, Xinpeng;Wei, Weimin;Su, Shengjun
    • Journal of Ubiquitous Convergence Technology
    • /
    • v.2 no.1
    • /
    • pp.18-26
    • /
    • 2008
  • The invariance relation existing in non-negative matrix factorization (NMF) is used for constructing robust image hashes in this work. The image is first re-scaled to a fixed size. Low-pass filtering is performed on the luminance component of the re-sized image to produce a normalized matrix. Entries in the normalized matrix are pseudo-randomly re-arranged under the control of a secret key to generate a secondary image. Non-negative matrix factorization is then performed on the secondary image. As the relation between most pairs of adjacent entries in the NMF's coefficient matrix is essentially invariant to ordinary image processing, a coarse quantization scheme is devised to compress the extracted features contained in the coefficient matrix. The resulting binary elements are scrambled under another key to form the image hash. Similarity between hashes is measured by the Hamming distance. Experimental results show that the proposed scheme is robust against perceptually acceptable modifications to the image such as Gaussian filtering, moderate noise contamination, JPEG compression, re-scaling, and watermark embedding. Hashes of different images have very low collision probability. Tampering in local image areas can be detected by comparing the Hamming distance with a predetermined threshold, indicating the usefulness of the technique in digital forensics.
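The coarse quantization step above, which turns the relation between adjacent coefficient entries into bits and then compares hashes by Hamming distance, can be sketched in isolation. This is a minimal illustration of those two steps only; the NMF itself and the key-based scrambling are omitted, and the example coefficient values are invented.

```python
import numpy as np

def coarse_quantize(coeffs):
    """Turn a coefficient sequence into bits by comparing adjacent
    entries -- the invariant relation the hashing scheme exploits.
    """
    c = np.asarray(coeffs, dtype=float).ravel()
    return (c[1:] >= c[:-1]).astype(np.uint8)   # 1 if next entry >= current

def hamming_distance(h1, h2):
    """Number of differing hash bits."""
    return int(np.count_nonzero(np.asarray(h1) != np.asarray(h2)))

h_a = coarse_quantize([0.2, 0.5, 0.4, 0.9])        # reference image
h_b = coarse_quantize([0.21, 0.52, 0.41, 0.88])    # mildly processed copy
h_t = coarse_quantize([0.2, 0.1, 0.4, 0.9])        # locally tampered
```

Mild processing perturbs the values without flipping the adjacent-entry ordering, so the hash is unchanged, while tampering flips some relations and raises the Hamming distance above a detection threshold.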


A Quantization-adaptive Watermarking Algorithm to Protect MPEG Moving Picture Contents (MPEG 동영상 컨텐츠 보호를 위한 양자화-적응적 워터마킹 알고리즘)

  • Kim Joo-Hyuk;Choi Hyun-Jun;Seo Young-Ho;Kim Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.6
    • /
    • pp.149-158
    • /
    • 2005
  • This paper proposes a blind watermarking method for video contents that satisfies both invisibility and robustness to attacks, to prevent counterfeiting, modification, illegal usage, and illegal reproduction of video contents. The algorithm targets the MPEG compression system and is designed to control the amount of watermark inserted according to the adaptive quantization scale code, following the adaptive quantization of the compression system. The insertion positions of the watermark were chosen by considering the frequency properties of an image and the horizontal, vertical, and diagonal properties of an $8{\times}8$ image block. The amount of watermarking for each watermark bit was decided by considering the quantization step. The algorithm was implemented in C++ and tested for invisibility and robustness with an MPEG-2 system. The experimental results showed that the method fully satisfies invisibility of the inserted watermark and robustness against attacks: for general attacks, the error rate of the extracted watermark was less than $10\%$, which is sufficient robustness. Therefore, this algorithm is expected to be used effectively as a real-time watermarking component in many MPEG systems, especially in applications sensitive to network environments.
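The idea of scaling the embedding amount by the quantization step can be sketched with a quantization-index-modulation-style rule: push a coefficient to an even or odd multiple of a step proportional to the encoder's quantization step. The `alpha` factor and the even/odd parity mapping are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def embed_bit(coeff, bit, q_step, alpha=0.5):
    """Quantization-adaptive embedding sketch: the watermark strength
    (delta) tracks the encoder's quantization step, so the mark survives
    quantization without becoming visible.
    """
    delta = alpha * q_step
    k = np.round(coeff / delta)
    if int(k) % 2 != bit:                # force parity to encode the bit
        k += 1 if coeff >= k * delta else -1
    return float(k * delta)

def extract_bit(coeff, q_step, alpha=0.5):
    """Blind extraction: recover the bit from the parity of the nearest
    multiple of delta (no original video needed)."""
    delta = alpha * q_step
    return int(np.round(coeff / delta)) % 2

w = embed_bit(13.7, 1, q_step=8)         # embed bit 1 into a coefficient
```

Because `delta` grows with the quantization step, coarsely quantized blocks receive a proportionally stronger mark, which is the adaptive behaviour described above.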

DCT and DWT Based Robust Audio Watermarking Scheme for Copyright Protection

  • Deb, Kaushik;Rahman, Md. Ashikur;Sultana, Kazi Zakia;Sarker, Md. Iqbal Hasan;Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.1
    • /
    • pp.1-8
    • /
    • 2014
  • Digital watermarking techniques are attracting attention as a proper solution for protecting the copyright of multimedia data. This paper proposes a new audio watermarking method based on the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) for copyright protection. In the proposed method, the original audio is transformed into the DCT domain and divided into two parts. A synchronization code is applied to the signal in the first part, and a 2-level DWT is applied to the signal in the second part. The absolute values of the DWT coefficients are divided into an arbitrary number of segments, and the energy and middle peak of each segment are calculated. Watermarks are then embedded into each middle peak, and extracted by performing the inverse of the embedding process. Experimental results show that the hidden watermark is robust to re-sampling, low-pass filtering, re-quantization, MP3 compression, cropping, echo addition, delay, pitch shifting, and amplitude change. Performance analysis of the proposed scheme shows low error probability rates.

Improvement of the TCX Module in AMR-WB+ Codec Using Pyramid VQ (Pyramid VQ를 이용한 AMR-WB+ 코덱 내 TCX 모듈의 성능 개선)

  • Park, Sang-Kuk;Park, Jung-Eun;Baik, Seung-Kweon;Seo, Jung-Il;Kang, Sang-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.3
    • /
    • pp.109-114
    • /
    • 2007
  • In this paper, we propose a pyramid VQ to quantize the transform coefficients of the TCX module, to improve the audio quality of the AMR-WB+ codec. The proposed pyramid VQ is compared with the $RE_8$ lattice VQ used in the AMR-WB+ standard codec, demonstrating improvements of 4% and 5.7% in Mean Squared Error (MSE), and 3.3% and 4.7% in Perceptual Evaluation of Audio Quality (PEAQ), for 8-dimensional and 16-dimensional pyramid VQ respectively.

The First Quantization Parameter Decision Algorithm for the H.264/AVC Encoder (H.264/AVC를 위한 초기 Quantization Parameter 결정 알고리즘)

  • Kwon, Soon-Young;Lee, Sang-Heon;Lee, Dong-Ha
    • Journal of KIISE:Information Networking
    • /
    • v.35 no.3
    • /
    • pp.235-242
    • /
    • 2008
  • To improve video quality and coding efficiency, H.264/AVC adopted adaptive rate control. However, this method cannot predict an accurate quantization parameter (QP) for the first frame: the first QP is chosen from four constant values using the encoder input parameters. Because it does not consider the encoded bits, it causes significant fluctuation of image quality and decreases the average quality of the whole coded sequence. In this paper, we propose a new algorithm for deciding the first-frame QP in the H.264/AVC encoder. The QP is first decided by the existing algorithm and the first frame is encoded. According to the encoded bits, a new initial QP is decided; the optimal value can be predicted because there is a linear relationship between the encoded bits and the new initial QP. The first frame is then re-encoded using the new initial QP. Experimental results show that the proposed algorithm not only achieves better quality than the state-of-the-art algorithm, but also enables rate control for sequences that were impossible with the existing algorithm. By reducing fluctuation, subjective quality is also improved.
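The two-pass idea above, encode once with the table-based QP and then correct the QP from the produced bit count, can be sketched as follows. The paper fits a linear bits-to-QP model; this sketch instead uses the common rule of thumb that +6 QP roughly halves the bit rate, so the constant here is an assumption for illustration, not the paper's fitted model.

```python
import math

def refit_initial_qp(qp0, bits0, target_bits, dqp_per_doubling=6):
    """Two-pass first-frame QP sketch: given the bits produced by a
    first encode at qp0, predict the QP that would hit target_bits.

    dqp_per_doubling=6 encodes the H.264 rule of thumb that raising QP
    by 6 doubles the quantizer step and roughly halves the bits.
    """
    dqp = dqp_per_doubling * math.log2(bits0 / target_bits)
    return max(0, min(51, qp0 + round(dqp)))   # clamp to H.264 QP range

# First pass at QP 30 produced twice the target bits -> raise the QP.
qp_new = refit_initial_qp(qp0=30, bits0=40000, target_bits=20000)
```

The first frame would then be re-encoded at `qp_new`, removing the quality fluctuation that a badly chosen table QP causes at the start of the sequence.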

A STUDY ON DEVELOPMENT OF VLBI CORRELATION SUBSYSTEM TRIAL PRODUCT (VLBI상관서브시스템 시작품의 개발에 관한 연구)

  • Oh, Se-Jin;Roh, Duk-Gyoo;Yeom, Jae-Hwan;Chung, Hyun-Soo;Lee, Chang-Hoon;Kobayashi, Hideyuki;Kawaguchi, Noriyuki;Kawakami, Kazuyuki
    • Publications of The Korean Astronomical Society
    • /
    • v.24 no.1
    • /
    • pp.65-81
    • /
    • 2009
  • We present the performance test results of the VLBI Correlation Subsystem (VCS) trial product, which was developed over one year from August 2007. It is a core component of the Korea-Japan Joint VLBI Correlator (KJJVC). The aims of developing the VCS trial product were to improve the performance of the VCS main product, to reduce effort and cost, and to resolve design problems through preliminary testing of the manufactured trial product. The VCS trial product is an FX-type system able to process 2 stations (1 baseline) at 8 Gbps/station input speed and 1.2 Gbps output speed. It consists of a Read Data Control Board (RDC), a Fourier Transform Board (FTB), and a Correlation and Accumulation Board (CAB); almost all of the main functions are integrated in the FTB and CAB boards. To confirm the functions of the VCS trial product, spectral analysis, delay compensation, and correlation processing experiments were carried out using simulated and real observation data. In the delay compensation experiment, we found an overflow problem in the re-quantization after FFT processing, and confirmed that it was caused by the valid-bit expression of the re-quantized data. To solve this problem, a new method will be applied to the VCS main product. The preliminary experimental results verified the effectiveness of the VCS trial product, despite the overflow problem.