• Title/Summary/Keyword: Local quantization

Adaptive Digital Watermarking using Stochastic Image Modeling Based on Wavelet Transform Domain (웨이브릿 변환 영역에서 스토케스틱 영상 모델을 이용한 적응 디지털 워터마킹)

  • 김현천;권기룡;김종진
    • Journal of Korea Multimedia Society / v.6 no.3 / pp.508-517 / 2003
  • This paper presents a perceptual model with stochastic multiresolution characteristics that can be applied to watermark embedding in the biorthogonal wavelet domain. The adaptive watermarking algorithm embeds the watermark more strongly in texture and edge regions by means of the SSQ. Watermark embedding is based on the computation of a noise visibility function (NVF) that captures local image properties. Because the watermark has noise-like properties, the method uses a non-stationary Gaussian model and a stationary generalized Gaussian (GG) model. Embedding with the stationary GG model uses the shape parameter and variance of each subband region of the multiresolution decomposition, with the shape parameter estimated by a moment-matching method; the non-stationary Gaussian model uses the local mean and variance of each subband. Simulation results show excellent invisibility and robustness, with robustness against distortions evaluated using the Stirmark 3.1 benchmark.

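As a rough illustration of the NVF-based embedding described above, the following Python sketch computes a noise visibility function from the local variance (the non-stationary Gaussian case) and embeds a watermark more strongly where the NVF is low (texture and edges). The window size, the `strength_texture`/`strength_flat` weights, and the `theta_scale` normalization are hypothetical choices, not the paper's parameters; the generalized Gaussian branch and the wavelet decomposition itself are omitted.

```python
import numpy as np

def local_mean_var(x, win=3):
    """Local mean and variance over a (2*win+1)^2 sliding window (simple loop version)."""
    h, w = x.shape
    mean = np.zeros_like(x, dtype=float)
    var = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            block = x[max(0, i - win):i + win + 1, max(0, j - win):j + win + 1]
            mean[i, j] = block.mean()
            var[i, j] = block.var()
    return mean, var

def embed_nvf(subband, watermark, strength_texture=5.0, strength_flat=0.5, theta_scale=100.0):
    """Adaptive embedding: strong in texture/edges (low NVF), weak in flat areas (high NVF)."""
    _, var = local_mean_var(subband)
    theta = theta_scale / var.max()          # normalization constant (illustrative)
    nvf = 1.0 / (1.0 + theta * var)          # noise visibility function, close to 1 in flat regions
    return subband + (1 - nvf) * strength_texture * watermark + nvf * strength_flat * watermark

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    band = rng.normal(0, 10, (64, 64))             # stand-in for a wavelet subband
    wm = rng.choice([-1.0, 1.0], size=band.shape)  # bipolar watermark
    marked = embed_nvf(band, wm)
    print("mean absolute change:", np.abs(marked - band).mean())
```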

Joint Optimization of Source Codebooks and Channel Modulation Signal for AWGN Channels (AWGN 채널에서 VQ 부호책과 직교 진폭변조신호 좌표의 공동 최적화)

  • 한종기;박준현
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.6C / pp.580-593 / 2003
  • A joint design scheme is proposed to optimize the source encoder and the modulation signal constellation by minimizing the end-to-end distortion, which includes both the quantization error and the channel distortion. The proposed scheme first optimizes the VQ codebook for a fixed modulation signal set, and then optimizes the modulation signals for the fixed VQ codebook; these two steps are repeated iteratively until a locally optimal solution is reached. It is shown that the performance of the proposed system can be further enhanced by employing a new, efficient mapping scheme between codevectors and modulation signals. Simulation results show that a jointly optimized system based on the proposed algorithms outperforms a conventional system based on a standard QAM signal set and a VQ codebook designed for a noiseless channel.
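
The alternating structure of the joint design can be pictured as follows: estimate the end-to-end distortion (quantization error plus channel errors) by Monte Carlo, then alternately adjust the VQ codebook with the constellation fixed and the constellation with the codebook fixed, keeping only changes that reduce the distortion. The random-perturbation updates below are a crude stand-in for the paper's update rules, and the 4-point constellation, noise level, and training set are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, DIM, SIGMA = 4, 2, 0.3                       # codebook size, vector dimension, AWGN std (illustrative)
train = rng.normal(0.0, 1.0, (1500, DIM))       # training source vectors
noise = rng.normal(0.0, SIGMA, (1500, 2))       # fixed channel noise realizations (common random numbers)

def distortion(codebook, constellation):
    """End-to-end MSE: encode -> modulate -> AWGN -> demodulate -> decode."""
    idx = np.argmin(((train[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    rx = constellation[idx] + noise
    jhat = np.argmin(((rx[:, None, :] - constellation[None]) ** 2).sum(-1), axis=1)
    return ((train - codebook[jhat]) ** 2).sum(-1).mean()

codebook = train[rng.choice(len(train), K, replace=False)].copy()
constellation = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])   # 4-QAM start

best = distortion(codebook, constellation)
for it in range(200):
    # Step 1: perturb the codebook with the constellation fixed, keep the change if distortion drops.
    cand = codebook + rng.normal(0.0, 0.05, codebook.shape)
    d = distortion(cand, constellation)
    if d < best:
        codebook, best = cand, d
    # Step 2: perturb the constellation with the codebook fixed (average symbol energy preserved).
    cand = constellation + rng.normal(0.0, 0.05, constellation.shape)
    cand *= np.sqrt((constellation ** 2).mean() / (cand ** 2).mean())
    d = distortion(codebook, cand)
    if d < best:
        constellation, best = cand, d

print("end-to-end MSE after joint refinement:", round(best, 4))
```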

Enhanced VLAD

  • Wei, Benchang;Guan, Tao;Luo, Yawei;Duan, Liya;Yu, Junqing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.7 / pp.3272-3285 / 2016
  • Recently, the Vector of Locally Aggregated Descriptors (VLAD) has been proposed to index images with compact representations; it encodes powerful local descriptors and significantly improves search performance with less memory than the state of the art. However, its performance relies heavily on the size of the codebook used to generate the VLAD representation: better accuracy requires a higher-dimensional representation and therefore more memory. In this paper, we enhance the VLAD image representation by using two-level hierarchical codebooks, which provides more accurate search while keeping the VLAD size unchanged. In addition, the hierarchical codebooks are used to construct multiple inverted files for more accurate non-exhaustive search. Experimental results show that our method significantly improves both the VLAD image representation and non-exhaustive search.
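
A minimal sketch of VLAD encoding, plus a coarse-to-fine assignment step in the spirit of the two-level hierarchical codebooks, is shown below. Residuals are still accumulated against the coarse centroids, so the representation length is unchanged; the exact way the paper combines the two codebook levels (and builds the multiple inverted files) is not reproduced here, and the codebook sizes are arbitrary.

```python
import numpy as np

def vlad(descriptors, centroids):
    """Plain VLAD: accumulate residuals of local descriptors w.r.t. their nearest centroid."""
    k, d = centroids.shape
    assign = np.argmin(((descriptors[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centroids[c]
    v = np.sign(v) * np.sqrt(np.abs(v))          # signed square-root (power) normalization
    n = np.linalg.norm(v)
    return (v / n).ravel() if n > 0 else v.ravel()

def hierarchical_assign(descriptors, coarse, fine):
    """Coarse-to-fine assignment: pick a coarse cell, then the nearest fine centroid inside it."""
    coarse_idx = np.argmin(((descriptors[:, None, :] - coarse[None]) ** 2).sum(-1), axis=1)
    fine_idx = np.array([np.argmin(((x - fine[c]) ** 2).sum(-1))
                         for x, c in zip(descriptors, coarse_idx)])
    return coarse_idx, fine_idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    desc = rng.normal(0, 1, (500, 16))                        # stand-in for local descriptors
    coarse = rng.normal(0, 1, (8, 16))                        # coarse codebook
    fine = {c: rng.normal(0, 1, (4, 16)) for c in range(8)}   # per-cell fine codebook
    rep = vlad(desc, coarse)
    ci, fi = hierarchical_assign(desc, coarse, fine)
    print("VLAD length:", rep.shape[0], "| example (coarse, fine) assignment:", (int(ci[0]), int(fi[0])))
```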

On the Characteristics of MSE-Optimal Symmetric Scalar Quantizers for the Generalized Gamma, Bucklew-Gallagher, and Hui-Neuhoff Sources

  • Rhee, Jagan;Na, Sangsin
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.7 / pp.1217-1233 / 2015
  • The paper studies the characteristics of minimum mean-square-error symmetric scalar quantizers for the generalized gamma, Bucklew-Gallagher, and Hui-Neuhoff probability density functions. Toward this goal, asymptotic formulas for the inner- and outermost thresholds and for the distortion are derived for nonuniform quantizers for the Bucklew-Gallagher and Hui-Neuhoff densities, paralleling previous studies for the generalized gamma density, and optimal uniform and nonuniform quantizers are designed numerically, with their characteristics tabulated for integer rates up to 20 and 16 bits, respectively, except for the Hui-Neuhoff density. The asymptotic formulas are found to be consistently more accurate as the rate increases, making their convergence to the true values numerically acceptable over the studied bit range, except for the Hui-Neuhoff density, for which they remain consistent and suggestive of convergence. Also investigated is the uniqueness problem of the differentiation method for finding the optimal step size of a uniform quantizer: for the commonly studied densities the distortion has a unique local minimizer, so the differentiation method yields the optimal step size, but for numerous generalized gamma densities it leads to multiple solutions.
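
The design of an MSE-optimal symmetric uniform quantizer can be illustrated numerically: compute the granular-plus-overload distortion as a function of the step size and minimize it. The sketch below uses a unit-variance Laplacian source for concreteness and one-dimensional numerical minimization in place of the differentiation (root-finding) method; it is not the paper's asymptotic analysis, and the rate range is chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def laplacian_pdf(x, lam=np.sqrt(2.0)):
    """Unit-variance Laplacian density."""
    return 0.5 * lam * np.exp(-lam * np.abs(x))

def uniform_quantizer_mse(delta, n_levels, pdf=laplacian_pdf):
    """MSE of a symmetric uniform quantizer with n_levels (even) levels and step size delta."""
    half = n_levels // 2
    d = 0.0
    for i in range(1, half + 1):
        level = (2 * i - 1) * delta / 2.0            # reconstruction level of the i-th positive cell
        lo = (i - 1) * delta
        hi = i * delta if i < half else np.inf       # outermost cell is unbounded (overload region)
        d += quad(lambda x: (x - level) ** 2 * pdf(x), lo, hi)[0]
    return 2.0 * d                                   # symmetric density: double the positive half

if __name__ == "__main__":
    for rate in (2, 3, 4):                           # rate in bits, N = 2**rate levels
        n = 2 ** rate
        res = minimize_scalar(uniform_quantizer_mse, bounds=(1e-3, 5.0),
                              args=(n,), method="bounded")
        print(f"R={rate} bits: optimal step ~ {res.x:.4f}, distortion ~ {res.fun:.5f}")
```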

Robust Image Hashing for Tamper Detection Using Non-Negative Matrix Factorization

  • Tang, Zhenjun;Wang, Shuozhong;Zhang, Xinpeng;Wei, Weimin;Su, Shengjun
    • Journal of Ubiquitous Convergence Technology / v.2 no.1 / pp.18-26 / 2008
  • The invariance relation existing in non-negative matrix factorization (NMF) is used to construct robust image hashes in this work. The image is first re-scaled to a fixed size. Low-pass filtering is performed on the luminance component of the re-sized image to produce a normalized matrix. Entries in the normalized matrix are pseudo-randomly re-arranged under the control of a secret key to generate a secondary image. Non-negative matrix factorization is then performed on the secondary image. As the relation between most pairs of adjacent entries in the NMF coefficient matrix is essentially invariant to ordinary image processing, a coarse quantization scheme is devised to compress the extracted features contained in the coefficient matrix. The resulting binary elements are scrambled under another key to form the image hash. Similarity between hashes is measured by the Hamming distance. Experimental results show that the proposed scheme is robust against perceptually acceptable modifications to the image such as Gaussian filtering, moderate noise contamination, JPEG compression, re-scaling, and watermark embedding. Hashes of different images have very low collision probability. Tampering in local image areas can be detected by comparing the Hamming distance with a predetermined threshold, indicating the usefulness of the technique in digital forensics.

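A toy version of the hashing pipeline, assuming scikit-learn's NMF is an acceptable stand-in for the factorization step: keyed permutation of the normalized image, NMF, bits from comparisons of adjacent coefficient-matrix entries, keyed scrambling, and Hamming-distance comparison. The resizing, low-pass filtering, and the paper's coarse quantization rule are simplified away, and the key values and factorization rank below are arbitrary.

```python
import numpy as np
from sklearn.decomposition import NMF

def image_hash(img, key1=1, key2=2, rank=2):
    """Toy NMF hash: normalize -> keyed permutation -> NMF -> adjacent-coefficient comparison bits."""
    # 1. "normalize": here just a float matrix in [0, 1]; real use would resize and low-pass filter
    m = img.astype(float) / 255.0
    # 2. pseudo-random re-arrangement of entries under key1 (the secondary image)
    rng1 = np.random.default_rng(key1)
    perm = rng1.permutation(m.size)
    secondary = m.ravel()[perm].reshape(m.shape)
    # 3. NMF of the secondary image; bits come from comparing adjacent entries of the coefficient matrix
    model = NMF(n_components=rank, init="random", random_state=0, max_iter=500)
    model.fit_transform(secondary)
    h = model.components_.ravel()
    bits = (h[1:] > h[:-1]).astype(np.uint8)
    # 4. scramble the bit string under key2
    rng2 = np.random.default_rng(key2)
    return bits[rng2.permutation(bits.size)]

def hamming(a, b):
    return int(np.count_nonzero(a != b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64))
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)   # mild noise contamination
    print("hash length:", image_hash(img).size,
          "| Hamming distance to noisy copy:", hamming(image_hash(img), image_hash(noisy.astype(int))))
```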

3D Model Compression For Collaborative Design

  • Liu, Jun;Wang, Qifu;Huang, Zhengdong;Chen, Liping;Liu, Yunhua
    • International Journal of CAD/CAM / v.7 no.1 / pp.1-10 / 2007
  • The compression of CAD models is a key technology for realizing Internet-based collaborative product development, because large model sizes often prevent rapid transmission of product information. Although some algorithms exist for compressing discrete CAD models, this paper focuses on original precise CAD models. Here, the characteristics of the hierarchical structures in CAD models and the distribution of their redundant data are exploited to develop a novel data encoding method in which different encoding rules are applied to different types of data. Geometric data is the major concern for reducing model sizes; for it, the control points of B-spline curves and surfaces are compressed with second-order predictions in a local coordinate system. Based on an analysis of the distortion induced by quantization, an efficient method for computing the distortion is provided. The results indicate that the data size of CAD models can be reduced efficiently with the proposed compression method.
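
The second-order prediction of control points can be sketched as closed-loop prediction plus uniform quantization of the residuals: each point is predicted by linear extrapolation from the two previously reconstructed points, and only the quantized prediction error is stored. The local coordinate transform and the entropy coder are omitted, and the step size below is an arbitrary example value, not the paper's setting.

```python
import numpy as np

def compress_control_points(points, step=0.001):
    """Second-order prediction + uniform quantization of the residuals (lossy)."""
    pts = np.asarray(points, dtype=float)
    recon = pts[:2].copy()                           # first two points kept as-is (sent as headers)
    residual_codes = []
    for i in range(2, len(pts)):
        pred = 2.0 * recon[-1] - recon[-2]           # linear extrapolation from the two previous points
        code = np.round((pts[i] - pred) / step).astype(int)
        residual_codes.append(code)
        recon = np.vstack([recon, pred + code * step])   # decoder-side reconstruction (closed loop)
    return recon, np.array(residual_codes)

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 20)
    ctrl = np.column_stack([t, np.sin(2 * np.pi * t), t ** 2])   # stand-in for B-spline control points
    recon, codes = compress_control_points(ctrl, step=0.01)
    print("max reconstruction error:", np.abs(recon - ctrl).max())
    print("largest residual code magnitude:", np.abs(codes).max())
```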

Block-based Adaptive Bit Allocation for Reference Memory Reduction (효율적인 참조 메모리 사용을 위한 블록기반 적응적 비트할당 알고리즘)

  • Park, Sea-Nae;Nam, Jung-Hak;Sim, Dong-Gy;Joo, Young-Hun;Kim, Yong-Serk;Kim, Hyun-Mun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.68-74 / 2009
  • In this paper, we propose an effective memory reduction algorithm to reduce the size of the reference frame buffer and the memory bandwidth in video encoders and decoders. In general video codecs, previously decoded frames must be stored and referenced to reduce temporal redundancy. Recently, reference frames have been recompressed for memory efficiency and to reduce the bandwidth between the main processor and external memory; however, such algorithms can hurt coding efficiency. Several algorithms have been proposed to reduce the amount of reference memory with minimal quality degradation, but they still suffer from quality degradation due to fixed-bit allocation. In this paper, we propose an adaptive block-based min-max quantization that considers the local characteristics of the image. In the proposed algorithm, the basic processing unit is an $8{\times}8$ block for memory alignment, and adaptive quantization is applied to each $4{\times}4$ block to minimize quality degradation. We found that the proposed algorithm obtains around 1.7% BD-bitrate gain and 0.03dB BD-PSNR gain compared with the conventional fixed-bit min-max algorithm with 37.5% memory saving.
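
A simplified picture of block-based min-max quantization: each 8x8 unit is split into 4x4 sub-blocks, and sub-blocks with a wider dynamic range receive more bits from a fixed pool. The `bit_pool` and the range-based allocation rule are illustrative assumptions, not the paper's allocation; the per-sub-block min/max side information and memory-word alignment are also left out.

```python
import numpy as np

def minmax_quantize_8x8(block8, bit_pool=(5, 5, 3, 3)):
    """Compress an 8x8 block: split into 4x4 sub-blocks, min-max quantize each with an adaptive bit depth.
    The 4x4 sub-blocks with the larger dynamic range simply get the larger bit depths from bit_pool."""
    subs = [block8[r:r + 4, c:c + 4] for r in (0, 4) for c in (0, 4)]
    order = np.argsort([-(s.max() - s.min()) for s in subs])     # widest range first
    bits = np.empty(4, dtype=int)
    bits[order] = sorted(bit_pool, reverse=True)
    recon = np.zeros_like(block8, dtype=float)
    for k, (r, c) in enumerate([(0, 0), (0, 4), (4, 0), (4, 4)]):
        s = block8[r:r + 4, c:c + 4].astype(float)
        lo, hi = s.min(), s.max()
        levels = 2 ** bits[k] - 1
        q = np.round((s - lo) / (hi - lo) * levels) if hi > lo else np.zeros_like(s)
        recon[r:r + 4, c:c + 4] = lo + q / levels * (hi - lo) if hi > lo else lo
    return recon, bits

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8))
    recon, bits = minmax_quantize_8x8(block)
    print("bits per 4x4 sub-block:", bits, "| max abs error:", np.abs(recon - block).max())
```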

Person Identification based on Clothing Feature (의상 특징 기반의 동일인 식별)

  • Choi, Yoo-Joo;Park, Sun-Mi;Cho, We-Duke;Kim, Ku-Jin
    • Journal of the Korea Computer Graphics Society / v.16 no.1 / pp.1-7 / 2010
  • With the widespread use of vision-based surveillance systems, the capability for person identification is now an essential component. However, the CCTV cameras used in surveillance systems tend to produce relatively low-resolution images, making it difficult to use face recognition techniques for person identification. Therefore, an algorithm is proposed for person identification in CCTV camera images based on the clothing. Whenever a person is authenticated at the main entrance of a building, the clothing feature of that person is extracted and added to the database. Using a given image, the clothing area is detected using background subtraction and skin color detection techniques. The clothing feature vector is then composed of textural and color features of the clothing region, where the textural feature is extracted based on a local edge histogram, while the color feature is extracted using octree-based quantization of a color map. When given a query image, the person can then be identified by finding the most similar clothing feature from the database, where the Euclidean distance is used as the similarity measure. Experimental results show an 80% success rate for person identification with the proposed algorithm, and only a 43% success rate when using face recognition.
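
The clothing-feature matching step can be sketched as follows, with a gradient-orientation histogram standing in for the local edge histogram and uniform color quantization standing in for the octree-based quantization; background subtraction and skin-color detection are omitted. Identification is a Euclidean nearest-neighbor search over the stored features.

```python
import numpy as np

def edge_histogram(gray, bins=8):
    """Texture feature: histogram of gradient orientations (a stand-in for the local edge histogram)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                          # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def color_histogram(rgb, levels=4):
    """Color feature: uniform quantization to levels^3 bins (a stand-in for octree quantization)."""
    q = (rgb.astype(int) * levels // 256).reshape(-1, 3)
    idx = q[:, 0] * levels * levels + q[:, 1] * levels + q[:, 2]
    hist = np.bincount(idx, minlength=levels ** 3).astype(float)
    return hist / hist.sum()

def clothing_feature(rgb):
    gray = rgb.mean(axis=2)
    return np.concatenate([edge_histogram(gray), color_histogram(rgb)])

def identify(query_feat, database):
    """Return the database key whose stored feature is closest in Euclidean distance."""
    return min(database, key=lambda k: np.linalg.norm(database[k] - query_feat))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = {name: clothing_feature(rng.integers(0, 256, (60, 40, 3))) for name in ("A", "B", "C")}
    query = db["B"] + rng.normal(0, 0.001, db["B"].shape)   # slightly perturbed copy of person B
    print("identified as:", identify(query, db))
```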

Bit Assignment for Wyner-Ziv Video Coding (Wyner-Ziv 비디오 부호화를 위한 비트배정)

  • Park, Jong-Bin;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.128-138 / 2010
  • In this paper, we propose a new bit assignment scheme for Wyner-Ziv video coding. Distributed video coding (DVC) is a video coding paradigm that enables very low-complexity encoding because it has no motion prediction module at the encoder, which makes it well suited to applications such as video communication, video surveillance, extremely low-power video coding, and other portable applications. Theoretically, Wyner-Ziv video coding is proved to achieve rate-distortion (RD) performance comparable to that of joint video coding. In practice, however, there is still a large RD gap compared to MC-DCT-based video coding such as H.264/AVC. Moreover, Transform-Domain Wyner-Ziv (TDWZ) video coding, a form of DVC with a transform module, has difficulty assigning bits precisely because the entire image is treated as a single message. In this paper, we propose a feasible bit assignment algorithm for TDWZ video coding that uses adaptive quantization matrix selection. The proposed method can calculate a suitable bit amount for each region from the local characteristics of the image. Simulation results show that the proposed method enhances coding performance.
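
One way to picture region-adaptive bit assignment in a TDWZ-style coder is to pick a quantization matrix per transform block from its local activity, so busy regions get finer quantization and more bits. The matrices, thresholds, and rough bit estimate below are hypothetical illustrations, not the paper's adaptive quantization-matrix selection rule.

```python
import numpy as np

# a few 4x4 quantization matrices of increasing coarseness (illustrative values, not the paper's tables)
BASE = np.array([[1, 2, 4, 8], [2, 4, 8, 16], [4, 8, 16, 32], [8, 16, 32, 64]])
QMS = [q * BASE for q in (1, 2, 4)]

def select_qm(block):
    """Pick a quantization matrix index per block from its local activity (AC energy)."""
    ac_energy = (block ** 2).sum() - block[0, 0] ** 2
    if ac_energy > 4000:      # busy region: finest matrix, more bits
        return 0
    if ac_energy > 500:
        return 1
    return 2                  # flat region: coarsest matrix, fewest bits

def assign_bits(coeff_blocks):
    """Return per-block QM indices and a rough bit estimate (magnitude bits plus sign bits)."""
    out = []
    for b in coeff_blocks:
        k = select_qm(b)
        q = np.round(b / QMS[k]).astype(int)
        bits = int(np.ceil(np.log2(np.abs(q) + 1)).sum()) + int(np.count_nonzero(q))
        out.append((k, bits))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.zeros((4, 4)); flat[0, 0] = 200          # mostly DC: a flat region
    busy = rng.normal(0, 40, (4, 4))                   # high AC energy: a busy region
    for name, blk in (("flat", flat), ("busy", busy)):
        k, bits = assign_bits([blk])[0]
        print(f"{name}: QM index {k}, ~{bits} bits")
```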

An Empirical Digital Image Watermarking using Frequency Properties of DWT (DWT의 주파수 특성을 이용한 실험적 디지털 영상 워터마킹)

  • Kang, I-Seul;Lee, Yong-Seok;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.22 no.3 / pp.295-312 / 2017
  • Digital video content is highly information-intensive and high-value content, so it is necessary to protect its intellectual property rights; this paper proposes a watermarking method for digital images for that purpose. The proposed method uses the frequency characteristics of the 2-Dimensional Discrete Wavelet Transform (2D-DWT) and embeds the digital watermark into global data without using local or specific data of the image. Watermark insertion uses simple Quantization Index Modulation (QIM) together with a multiple watermarking scheme that inserts the same watermark data several times. When extracting the watermark, the multiple embedded watermarks are extracted and the final watermark data is determined by a simple statistical method. The parameters of the embedding process are determined empirically through experiments. Experiments on various images under various attacks show the superiority of the proposed method compared with representative existing methods.
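
The QIM embedding with multiple watermarking and a simple statistical (majority-vote) decision can be sketched as below. The step size `DELTA`, the number of copies, and the flat coefficient array standing in for 2D-DWT subband data are assumptions for illustration; the paper determines its parameters empirically.

```python
import numpy as np

DELTA = 8.0   # quantization step (illustrative; the paper sets parameters empirically)

def qim_embed(coeff, bit):
    """Embed one bit into one coefficient with binary-dither QIM."""
    dither = 0.0 if bit == 0 else DELTA / 2.0
    return DELTA * np.round((coeff - dither) / DELTA) + dither

def qim_extract(coeff):
    """Decode the bit whose quantizer lattice is closest to the received coefficient."""
    d0 = abs(coeff - qim_embed(coeff, 0))
    d1 = abs(coeff - qim_embed(coeff, 1))
    return 0 if d0 <= d1 else 1

def embed_multiple(coeffs, bits, copies=3):
    """Repeat the same watermark bits over several groups of coefficients (multiple watermarking)."""
    out = coeffs.astype(float).copy()
    for c in range(copies):
        for i, b in enumerate(bits):
            out[c * len(bits) + i] = qim_embed(out[c * len(bits) + i], b)
    return out

def extract_majority(coeffs, n_bits, copies=3):
    """Extract every copy and decide each bit by majority vote (the simple statistical step)."""
    votes = np.array([[qim_extract(coeffs[c * n_bits + i]) for i in range(n_bits)]
                      for c in range(copies)])
    return (votes.sum(axis=0) * 2 >= copies).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    band = rng.normal(0, 20, 96)                          # stand-in for 2D-DWT subband coefficients
    wm = rng.integers(0, 2, 32)
    marked = embed_multiple(band, wm)
    attacked = marked + rng.normal(0, 1.0, marked.shape)  # mild noise attack
    recovered = extract_majority(attacked, len(wm))
    print("bit errors after attack:", int(np.count_nonzero(recovered != wm)))
```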