• Title/Summary/Keyword: discrete cosine transform (DCT)

Search Result 346

ECG-based Biometric Authentication Using Random Forest (랜덤 포레스트를 이용한 심전도 기반 생체 인증)

  • Kim, JeongKyun;Lee, Kang Bok;Hong, Sang Gi
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.6 / pp.100-105 / 2017
  • This work presents an ECG biometric recognition system for biometric authentication. ECG biometric approaches fall into two major categories: fiducial-based and non-fiducial-based methods. This paper proposes a new non-fiducial framework using the discrete cosine transform and a Random Forest classifier. Under the DCT, most of the signal information tends to concentrate in a few low-frequency components, so DCT feature vectors for ECG heartbeats are constructed from the first 40 DCT coefficients. Random Forest (RF) is based on computing a large number of decision trees; it is relatively fast, robust, and inherently suited to multi-class problems. Furthermore, a threshold inside the RF classifier trades off between admission and rejection of an ID. As a result, the proposed method achieves a 99.9% recognition rate when tested on the MIT-BIH NSRDB.
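
The feature-extraction step described above (DCT of a heartbeat, keep the first 40 coefficients) can be sketched in a few lines of numpy. This is a minimal sketch under assumptions: the orthonormal DCT-II is spelled out by hand, beat segmentation is assumed done, and the Random Forest stage is omitted.

```python
import numpy as np

def dct_ii(x):
    # Orthonormal DCT-II: the transform compacts heartbeat energy
    # into the low-frequency coefficients, as the abstract notes.
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    C = np.cos(np.pi * (n + 0.5) * k / N)
    X = C @ np.asarray(x, dtype=float)
    X[0] *= np.sqrt(1.0 / N)
    X[1:] *= np.sqrt(2.0 / N)
    return X

def heartbeat_features(beat, n_coeffs=40):
    # Keep only the first 40 DCT coefficients, per the abstract;
    # these vectors would then be fed to the RF classifier.
    return dct_ii(beat)[:n_coeffs]
```

The 40-dimensional vectors returned here would be the training rows of the Random Forest; any standard RF implementation could consume them directly.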

Image Contrast Enhancement using Adaptive Unsharp Mask and Directional Information (방향성 정보와 적응적 언샾 마스크를 이용한 영상의 화질 개선)

  • Lee, Im-Geun
    • Journal of the Korea Society of Computer and Information / v.16 no.3 / pp.27-34 / 2011
  • In this paper, a novel approach to image contrast enhancement is introduced. The method is based on the unsharp mask and the directional information of the image. Since unsharp mask techniques give better visual quality than conventional sharpening masks, there has been much work on image enhancement using unsharp masks. The proposed algorithm decomposes the image into blocks and extracts directional information using the DCT. From the geometric properties of each block, the block is labeled with an appropriate type and processed by an adaptive unsharp mask. The masking process is skipped in flat areas to reduce noise artifacts, while in texture and edge areas the adaptive unsharp mask is applied to enhance contrast based on the edge direction. Experiments show that the proposed algorithm produces contrast-enhanced images with superior visual quality, suppressing noise effects and enhancing edges at the same time.
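
The skip-flat-blocks idea can be illustrated with a small numpy sketch. This is a simplified assumption-laden version: the paper's DCT-based directional labeling is reduced to a block-variance test, and a 3x3 box blur stands in for the low-pass filter of the unsharp mask.

```python
import numpy as np

def adaptive_unsharp(img, block=8, flat_thresh=5.0, amount=0.8):
    # Sharpen only non-flat blocks; flat blocks are skipped so that
    # noise is not amplified, as the abstract describes.
    out = img.astype(float).copy()
    # 3x3 box blur as a simple low-pass for the unsharp mask
    pad = np.pad(out, 1, mode='edge')
    blur = np.zeros_like(out)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blur += pad[1 + dy:1 + dy + out.shape[0],
                        1 + dx:1 + dx + out.shape[1]]
    blur /= 9.0
    for y in range(0, out.shape[0], block):
        for x in range(0, out.shape[1], block):
            tile = out[y:y + block, x:x + block]
            if tile.std() > flat_thresh:  # texture/edge block
                tile += amount * (tile - blur[y:y + block, x:x + block])
    return out
```

In the paper the per-block decision is richer (edge direction from DCT coefficients selects a directional mask); the sketch only shows the flat-vs-active gating.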

A Steganalysis using Blockiness in JPEG images (블록 왜곡도를 이용한 JPEG 기반의 심층암호분석)

  • 장정아;유정재;이상진
    • Journal of the Korea Institute of Information Security & Cryptology / v.14 no.4 / pp.39-47 / 2004
  • In general, steganographic algorithms for embedding messages in JPEG images, such as Jsteg$^{(1)}$, JP Hide & Seek$^{(2)}$, F5$^{(3)}$, and OutGuess$^{(4)}$, replace the LSBs of DCT coefficients with the message bits. Both Jsteg and JP Hide & Seek can be detected by the $\chi^2$-test steganalytic technique, though its detection rate is low. In this paper, we propose a new steganalysis method that not only determines exactly whether a JPEG image contains a hidden message, but also identifies the steganographic algorithm used. The method is an advance on the Blockiness technique$^{(5)}$. Its advantages include computational efficiency, accuracy, and the ability to detect hidden messages without knowing the steganographic algorithm. Experimental results show the superiority of our approach over Blockiness$^{(5)}$.
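
A basic version of the blockiness statistic this attack builds on is easy to state: sum the absolute pixel differences across every 8x8 block boundary. The sketch below is an assumption (the cited Blockiness technique compares this statistic before and after recompression, which is not shown).

```python
import numpy as np

def blockiness(img, B=8):
    # Sum of absolute differences across the horizontal and vertical
    # 8x8 block boundaries of a decoded JPEG image.
    img = img.astype(float)
    h, w = img.shape
    v = sum(np.abs(img[y, :] - img[y + 1, :]).sum()
            for y in range(B - 1, h - 1, B))
    hz = sum(np.abs(img[:, x] - img[:, x + 1]).sum()
             for x in range(B - 1, w - 1, B))
    return v + hz
```

Embedding in DCT-coefficient LSBs tends to raise this value relative to a clean recompressed reference, which is what the detector exploits.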

Color Image Splicing Detection using Benford's Law and color Difference (밴포드 법칙과 색차를 이용한 컬러 영상 접합 검출)

  • Moon, Sang-Hwan;Han, Jong-Goo;Moon, Yong-Ho;Eom, Il-Kyu
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.5 / pp.160-167 / 2014
  • This paper presents a spliced color image detection method using Benford's Law and color difference. For a suspicious image, after color conversion, the discrete wavelet transform and the discrete cosine transform are performed. We extract as features the differences between the ideal Benford distribution and the empirical Benford distribution of the suspicious image. The differences between the Benford distributions of the color components are also used as features. After training an SVM classifier on the extracted feature vectors, we determine whether the image contains a splicing forgery. Experimental results show that the proposed method outperforms existing methods in splicing detection accuracy while using only 13 features.
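
The core feature (empirical first-digit distribution of transform coefficients minus the ideal Benford distribution) can be sketched as follows. This is a generic sketch, not the paper's exact 13-feature construction, and the coefficient source (DWT/DCT of a color channel) is assumed.

```python
import numpy as np

def benford_feature(coeffs):
    # Difference between the empirical first-significant-digit
    # distribution of nonzero coefficients and the ideal Benford law
    # P(d) = log10(1 + 1/d), d = 1..9.
    ideal = np.log10(1.0 + 1.0 / np.arange(1, 10))
    mags = np.abs(np.asarray(coeffs, dtype=float))
    mags = mags[mags > 0]
    # first significant digit of each magnitude
    digits = (mags / 10.0 ** np.floor(np.log10(mags))).astype(int)
    emp = np.bincount(digits, minlength=10)[1:10] / len(digits)
    return emp - ideal
```

Concatenating such difference vectors per color component would give the kind of feature vector the SVM is trained on.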

Image Compression Using DCT Map FSVQ and Single - side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society / v.4 no.10 / pp.2615-2628 / 1997
  • In this paper, a new codebook design algorithm is proposed. It uses a DCT map based on the two-dimensional discrete cosine transform (2D DCT) and a finite state vector quantizer (FSVQ) designed for image transmission. The map is made by dividing the input image according to edge quantity; guided by the map, the significant features of the training images are extracted using the 2D DCT. A master codebook for the FSVQ is generated by partitioning the training set with a binary tree-structured method. The state codebooks are constructed from the master codebook, and the index of an input vector is then searched in the state codebook rather than the master codebook. Because index coding is an important part of high-speed digital transmission, fixed-length codes are converted to variable-length codes according to an entropy coding rule: Huffman coding assigns transmission codes to the codebook entries. This paper proposes a single-side growing Huffman tree to speed up the Huffman code generation process. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm improves picture quality by 2.04 dB and 2.48 dB over PNN, and by 1.75 dB and 0.99 dB over CVQ, respectively.
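
The entropy-coding step (variable-length codes for codebook indices) can be sketched with the standard bottom-up Huffman construction. This is a baseline sketch only: the paper's single-side growing Huffman tree is a speed-oriented variant of this construction and is not reproduced here.

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    # Standard Huffman construction: repeatedly merge the two least
    # frequent subtrees; each merge prefixes '0'/'1' to its symbols.
    tiebreak = count()  # unique key so dicts are never compared
    heap = [(f, next(tiebreak), {sym: ''}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]
```

For index frequencies `{'a': 5, 'b': 2, 'c': 1, 'd': 1}` the frequent index gets a 1-bit code and the rare ones 3-bit codes, which is exactly the fixed-to-variable-length conversion the abstract refers to.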


Design of Multiple-symbol Lookup Table for Fast Thumbnail Generation in Compressed Domain (압축영역에서 빠른 축소 영상 추출을 위한 다중부호 룩업테이블 설계)

  • Yoon, Ja-Cheon;Sull, Sanghoon
    • Journal of Broadcast Engineering / v.10 no.3 / pp.413-421 / 2005
  • As HDTV becomes more widespread, among the many useful features of modern set-top boxes (STBs) and digital video recorders (DVRs), video browsing, visual bookmarks, and picture-in-picture are very frequently required. These features typically employ reduced-size versions of video frames, or thumbnail images. Most thumbnail generation approaches generate DC images directly from a compressed video stream. The discrete cosine transform (DCT) coefficient whose frequency is zero in both dimensions of a compressed block is called the DC coefficient and is used directly to construct a DC image. If a block has been encoded with field DCT, a few AC coefficients are needed in addition to the DC coefficient. However, the bit length of a codeword coded with variable length coding (VLC) cannot be determined until the previous VLC codeword has been decoded, so all codewords must be fully decoded regardless of whether they are needed for DC image generation. In this paper, we propose a method for fast DC image generation from an I-frame using a multiple-symbol lookup table (mLUT). The experimental results show that the mLUT greatly improves performance by reducing the LUT count by 50%.
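
The DC-image idea itself is simple: one thumbnail pixel per 8x8 block, taken from the block's (0,0) DCT coefficient. The sketch below computes the DCT itself for clarity; in the compressed-domain setting of the paper the coefficients are read from the bitstream instead, which is where the VLC-parsing cost arises.

```python
import numpy as np

def dct2(block):
    # Orthonormal 2D DCT of a square block.
    N = block.shape[0]
    n = np.arange(N)
    C = np.cos(np.pi * (n + 0.5)[None, :] * n[:, None] / N)
    C *= np.sqrt(2.0 / N)
    C[0] = np.sqrt(1.0 / N)
    return C @ block @ C.T

def dc_thumbnail(frame, B=8):
    # DC image: one pixel per BxB block. For the orthonormal DCT the
    # (0,0) coefficient equals B * block mean, hence the division.
    h, w = frame.shape
    thumb = np.empty((h // B, w // B))
    for i in range(0, h, B):
        for j in range(0, w, B):
            thumb[i // B, j // B] = dct2(frame[i:i + B, j:j + B])[0, 0] / B
    return thumb
```

So the thumbnail is exactly the per-block mean image, obtainable from DC coefficients alone (frame-DCT case); field-DCT blocks additionally need a few AC terms, as the abstract notes.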

Point Cloud Video Codec using 3D DCT based Motion Estimation and Motion Compensation (3D DCT를 활용한 포인트 클라우드의 움직임 예측 및 보상 기법)

  • Lee, Minseok;Kim, Boyeun;Yoon, Sangeun;Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.26 no.6 / pp.680-691 / 2021
  • Due to recent developments in attaining 3D content using devices such as 3D scanners, the diversity of content used in AR (Augmented Reality)/VR (Virtual Reality) fields is increasing significantly. There are several ways to represent 3D data, and point clouds are one of them. A point cloud is a cluster of points, with the advantage of capturing real 3D data with high precision. However, expressing 3D content requires much more data than 2D images, and the data needed to represent dynamic point cloud objects consisting of multiple frames is especially large, which is why an efficient compression technology for this kind of data must be developed. In this paper, a motion estimation and compensation method for dynamic point cloud objects using the 3D DCT is proposed. The method converts the 3D video frames into I frames and P frames, which ensures a higher compression ratio. We then confirm the compression efficiency of the proposed technology by comparing it with the anchor technology, an intra-frame based compression method, and 2D-DCT based V-PCC.
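
A 3D DCT over a voxel block is just the 1D DCT applied separably along each axis. The sketch below shows that step in isolation; the block contents (e.g. occupancy or attributes of a point-cloud region across frames) and the surrounding I/P-frame pipeline are assumptions not shown here.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix of size N x N.
    n = np.arange(N)
    C = np.cos(np.pi * (n + 0.5)[None, :] * n[:, None] / N) * np.sqrt(2.0 / N)
    C[0] = np.sqrt(1.0 / N)
    return C

def dct3(volume):
    # Separable 3D DCT: apply the 1D DCT along each of the 3 axes.
    X = volume.astype(float)
    for axis in range(3):
        C = dct_matrix(X.shape[axis])
        X = np.moveaxis(np.tensordot(C, np.moveaxis(X, axis, 0), axes=1),
                        0, axis)
    return X
```

Because each 1D transform is orthonormal, the 3D transform preserves energy, and most of it concentrates near the (0,0,0) corner for smooth volumes, which is what makes the subsequent quantization and coding efficient.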

Multiple Reference Frame based Error-Resilient Video Coding (다중 레프런스 프레임 기반의 에러에 강인한 동영상 부호화 기법)

  • 정한승;김인철;이상욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.10B / pp.1382-1389 / 2001
  • Motion compensation-discrete cosine transform (MC-DCT) based video coding is widely used for its coding efficiency and implementation simplicity, but it is structurally vulnerable in error-prone environments. This paper proposes an error-resilient video coding scheme using multiple reference frames based on long-term memory motion-compensated prediction (LTMP). An error concealment (EC) scheme based on the proposed algorithm is also implemented. Specifically, a spread factor for temporal motion vectors is added to the rate-distortion (R-D) optimization, increasing both error resilience and the effectiveness of error concealment. In addition, the proposed algorithm suppresses temporal error propagation using feedback information (negative acknowledgement, NAK): the NAK is used to estimate the regions lost to channel errors and the regions to which errors have propagated, and to exclude them from motion compensation. Consequently, the proposed algorithm approaches forced intra update (FIU) in terms of PSNR but, unlike FIU, avoids the increase in bit rate, making efficient use of bandwidth-limited networks. Computer simulations show that the proposed algorithm outperforms conventional H.263 and LTMP-based coders in both subjective and objective picture quality under error-prone conditions.
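
The multiple-reference-frame search at the heart of LTMP can be sketched as an exhaustive SAD search over every retained reference frame and a small spatial window. This is a minimal sketch with assumed names; in the paper's scheme, frames flagged lost or contaminated via NAK would simply be excluded from the reference list.

```python
import numpy as np

def best_match(block, refs, top_left, radius=2):
    # Try every reference frame and every displacement in a small
    # window; keep the (frame index, dy, dx) with the lowest SAD.
    y0, x0 = top_left
    B = block.shape[0]
    best = (None, None, None, np.inf)
    for t, ref in enumerate(refs):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                    continue  # candidate falls outside the frame
                sad = np.abs(ref[y:y + B, x:x + B] - block).sum()
                if sad < best[3]:
                    best = (t, dy, dx, sad)
    return best
```

The returned frame index is the extra degree of freedom multiple-reference coding adds over single-frame MC, which is also what lets the decoder route prediction around damaged frames.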


A Study on an Implementation of CAVLC Decoder for H.264/AVC (H.264/AVC용 CAVLC 디코더의 구현 연구)

  • Bong, Jae-Hoon;Kim, One-Sam;Sun, Sung-Il
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.552-555 / 2007
  • H.264 is the technology widely used in terrestrial DMB and similar services; it produces high-resolution video at a comparatively low bit rate. This lossy compression pipeline involves preprocessing such as intra and inter prediction, the DCT (Discrete Cosine Transform), and quantization, but the stage where the actual compression happens in H.264 is entropy coding. H.264 supports three entropy coding schemes: Exp-Golomb, CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding). Of these, CAVLC uses a table-based compression technique. Using the tables means comparing codeword lengths and values, which incurs power consumption and computational delay from numerous memory accesses. In this paper, instead of finding data by comparing codeword lengths and values against tables, we formulate the regularities present in the tables so that the data can be derived directly from the transmitted bitstream: the number of '0's before the first '1' and the value of the code bits that follow are used to extract the data needed at each stage. The algorithm was designed in VHDL.
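
The count-the-zeros-then-read-a-suffix rule the abstract describes is exactly how H.264's Exp-Golomb ue(v) codes are parsed, so that decoder makes a compact illustration. (The paper applies an analogous formula-based rule to the CAVLC tables; that part is not reproduced here.)

```python
def decode_ue(bits, pos=0):
    # Exp-Golomb ue(v): count leading zeros, skip the terminating '1',
    # read that many suffix bits; value = 2**zeros - 1 + suffix.
    zeros = 0
    while bits[pos] == '0':
        zeros += 1
        pos += 1
    pos += 1  # skip the '1' that ends the prefix
    suffix = int(bits[pos:pos + zeros], 2) if zeros else 0
    return (1 << zeros) - 1 + suffix, pos + zeros
```

For example, the bitstring '00100' has two leading zeros and suffix '00', so it decodes to 3, consuming 5 bits, with no table lookup at all.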


Adaptive Medical Image Compression Based on Lossy and Lossless Embedded Zerotree Methods

  • Elhannachi, Sid Ahmed;Benamrane, Nacera;Abdelmalik, Taleb-Ahmed
    • Journal of Information Processing Systems / v.13 no.1 / pp.40-56 / 2017
  • With the progress of digital medical imaging techniques, it has become necessary to compress a wide variety of medical images. In medical imaging, reversible compression of an image's region of interest (ROI), which is diagnostically relevant, is considered essential. The global compression rate of the image can then be improved by coding the ROI and the remaining image (called the background) separately. For this purpose, the present work proposes an efficient reversible discrete cosine transform (RDCT) based embedded image coder designed for lossless ROI coding at very high compression ratios. Motivated by the wavelet-like structure of the DCT, the proposed rearranged structure couples well with a lossless embedded zerotree wavelet (LEZW) coder, while the background is highly compressed using the set partitioning in hierarchical trees (SPIHT) technique. Coding results show that the performance of the proposed coder is much superior to that of various state-of-the-art still image compression methods.
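
The lossless-ROI/lossy-background split can be illustrated in miniature. This sketch is only the organizing idea under stated assumptions (a boolean ROI mask and uniform quantization of the background); the paper's actual coder uses RDCT + LEZW for the ROI and SPIHT for the background, which is far more elaborate.

```python
import numpy as np

def roi_compress(img, mask, q=16):
    # Keep diagnostically relevant (mask=True) pixels exactly;
    # coarsely quantize everything else.
    img = img.astype(int)
    background = (img // q) * q  # lossy: uniform quantization step q
    return np.where(mask, img, background)
```

Pixels inside the mask round-trip exactly (the reversible requirement), while the background carries far less information and so compresses much harder under any downstream entropy coder.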