• Title/Summary/Keyword: DCT-IF


An Efficient Method for Mining Frequent Patterns based on Weighted Support over Data Streams (데이터 스트림에서 가중치 지지도 기반 빈발 패턴 추출 방법)

  • Kim, Young-Hee; Kim, Won-Young; Kim, Ung-Mo
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.8 / pp.1998-2004 / 2009
  • Recently, due to technical developments in storage devices and networks, the amount of data has been increasing rapidly. The large volume of data streams poses unique space and time constraints on the data mining process, and the continuous nature of streaming data necessitates algorithms that require only one scan over the stream for knowledge discovery. Most support-based research is concerned with frequent itemsets but ignores infrequent itemsets even when they are crucial. In this paper, we propose an efficient method, WSFI-Mine (Weighted Support Frequent Itemsets Mine), that mines all frequent itemsets in one scan of the data stream and can discover closed frequent itemsets using a DCT (Data Stream Closed Pattern Tree). We compare the performance of our algorithm with DSM-FI and THUI-Mine under different minimum supports. The results show that WSFI-Mine not only runs significantly faster but also consumes less memory.
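
As a rough illustration of the weighted-support idea, the sketch below scores candidate itemsets by the product of their relative frequency and their average item weight; this definition, along with the transactions, weights, and threshold, is an assumption for illustration, and the paper's DCT tree structure is not reproduced.

```python
from itertools import combinations
from collections import defaultdict

def weighted_support_mine(transactions, weights, min_wsup, max_len=3):
    """Toy one-pass mining: count itemsets per transaction, then keep those
    whose weighted support (mean item weight x relative frequency) meets
    the threshold. Illustrative only, not the WSFI-Mine tree algorithm."""
    counts = defaultdict(int)
    n = len(transactions)
    for t in transactions:                      # single scan over the stream
        items = sorted(set(t))
        for k in range(1, min(max_len, len(items)) + 1):
            for itemset in combinations(items, k):
                counts[itemset] += 1
    result = {}
    for itemset, c in counts.items():
        w = sum(weights.get(i, 1.0) for i in itemset) / len(itemset)
        wsup = w * c / n                        # weighted support of itemset
        if wsup >= min_wsup:
            result[itemset] = wsup
    return result

stream = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
weights = {"a": 0.9, "b": 0.4, "c": 0.7}        # hypothetical item weights
print(weighted_support_mine(stream, weights, min_wsup=0.5))
```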

Video Indexing and Retrieval of MPEG Video using Motion and DCT Coefficients in Compressed Domain (움직임과 DCT 계수를 이용한 압축영역에서 MPEG 비디오의 인덱싱과 검색)

  • 박한엽; 최연성; 김무영; 강진석; 장경훈; 송왕철; 김장형
    • Journal of Korea Multimedia Society / v.3 no.2 / pp.121-132 / 2000
  • Most video indexing applications depend on fast and efficient archiving, browsing, and retrieval techniques. Until now, most approaches have considered only pixel-domain analysis. Because most multimedia data is stored in compressed form, those approaches incur the costly overhead of decompression; if we can analyze the compressed data directly, we avoid that overhead. In this paper, we analyze the compressed video stream directly and extract features usable for video indexing. From these features we derive a cut-detection technique that divides the stream into shots. We also propose a new, concise key-frame selection technique and an efficient video indexing method using spatial information (DCT coefficients) as well as temporal information (motion vectors).
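
Compressed-domain cut detection of the kind the abstract describes can be approximated from DC coefficients alone. The sketch below flags a shot boundary when the luminance histogram of consecutive DC images (one DCT DC value per 8x8 block) jumps; the bin count and threshold are illustrative choices, not the paper's values.

```python
import numpy as np

def detect_cuts(dc_frames, threshold=0.35):
    """Flag frame indices where the normalized DC-image histogram changes
    sharply (L1 distance), suggesting a shot boundary."""
    cuts = []
    prev_hist = None
    for idx, dc in enumerate(dc_frames):
        hist, _ = np.histogram(dc, bins=16, range=(0, 255))
        hist = hist / hist.sum()
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(idx)
        prev_hist = hist
    return cuts

# Toy DC images: a scene change between frames 1 and 2
frames = [np.full((30, 40), 60.0), np.full((30, 40), 62.0),
          np.full((30, 40), 200.0), np.full((30, 40), 198.0)]
print(detect_cuts(frames))  # -> [2]
```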


Raining Image Enhancement and Its Processing Acceleration for Better Human Detection (사람 인식을 위한 비 이미지 개선 및 고속화)

  • Park, Min-Woong; Jeong, Geun-Yong; Cho, Joong-Hwee
    • IEMEK Journal of Embedded Systems and Applications / v.9 no.6 / pp.345-351 / 2014
  • This paper presents pedestrian recognition with improved performance for vehicle safety and surveillance systems. A pedestrian detection method using HOG (Histograms of Oriented Gradients) achieves a 90% recognition rate, but when a picture is taken in the rain, the image may be distorted by rain streaks and the recognition rate drops to 62%. To solve this problem, we applied an image decomposition method using MCA (Morphological Component Analysis); this rain removal improves the recognition rate from 62% to 70%. However, conventional MCA-based decomposition is too slow for real-time use in vehicle safety or surveillance systems. To alleviate this, we propose a rain removal method using a low-pass filter and the DCT (Discrete Cosine Transform): the DCT helps separate the rain components of the image, which are then removed by Butterworth filtering. Experimental results show that our method achieves a 90% recognition rate and accelerates processing to 17.8 ms, which is acceptable for a real-time system.
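
A minimal sketch of the DCT-plus-Butterworth idea, assuming the rain streaks live mostly in high DCT frequencies; the cutoff and filter order below are illustrative, not the paper's tuned parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def butterworth_lowpass_dct(image, cutoff=0.2, order=2):
    """Take the 2-D DCT of the image and attenuate high frequencies with a
    Butterworth-style low-pass mask, then invert the transform."""
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    # Normalized frequency distance from the DC coefficient at (0, 0)
    u = np.arange(h)[:, None] / h
    v = np.arange(w)[None, :] / w
    d = np.sqrt(u**2 + v**2)
    mask = 1.0 / (1.0 + (d / cutoff) ** (2 * order))  # Butterworth low-pass
    return idctn(coeffs * mask, norm="ortho")

rng = np.random.default_rng(0)
img = rng.normal(128, 10, (64, 64))       # stand-in for a rainy frame
smoothed = butterworth_lowpass_dct(img)
print(img.std(), smoothed.std())          # high-frequency energy is reduced
```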

An adaptive bandwidth allocation for the two-layer VBR video transmission in ATM networks (ATM망에서 2계층 VBR 비디오 전송을 위한 적응적인 대역할당)

  • 이동은; 이청훈; 이팔진; 김영선; 김영천
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.8 / pp.1928-1936 / 1996
  • In this paper, we propose an adaptive bandwidth allocation algorithm for the transmission of VBR video over ATM networks. To evaluate the bandwidth required for VBR video, the characteristics of the compressed VBR video generated by a two-layer coder are analyzed with variations in the number of GOP frames (N), the quantizer scale (q), and the number of low-frequency DCT coefficients (β). The two-layer coder, which separates the stream by the number of DCT coefficients, is designed to transmit VBR video efficiently. The compressed data generated by the two-layer coder is split into high-priority and low-priority cells; if congestion occurs in the ATM network, minimum image quality is maintained by the high-priority cells. The bandwidth required for VBR video is estimated with a prediction algorithm using scene and frame correlations as well as the statistical properties of the VBR video sources: strong correlation among adjacent slices within a frame is represented by the scene correlation, and strong correlation among frames by the frame correlation. The performance of the proposed bandwidth allocation scheme is evaluated in terms of bandwidth utilization, cell loss rate, and SNR with variations in q, N, and β. Simulation results show that the proposed scheme is superior to conventional methods.
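
The two-layer split by DCT coefficient count can be sketched as follows: the first β low-frequency coefficients (in scan order) go to the high-priority base layer, the rest to the low-priority layer. The zigzag order is only approximated here, and β is a hypothetical value rather than one from the paper.

```python
import numpy as np
from scipy.fft import dctn

def two_layer_split(block, beta):
    """Split a block's DCT coefficients into a high-priority base layer
    (first `beta` low-frequency coefficients) and a low-priority
    enhancement layer. Scan order sorted on u+v approximates zigzag."""
    coeffs = dctn(block, norm="ortho")
    h, w = coeffs.shape
    order = sorted(((u, v) for u in range(h) for v in range(w)),
                   key=lambda p: (p[0] + p[1], p[0]))
    base, enhancement = np.zeros_like(coeffs), np.zeros_like(coeffs)
    for i, (u, v) in enumerate(order):
        (base if i < beta else enhancement)[u, v] = coeffs[u, v]
    return base, enhancement

block = np.random.default_rng(1).normal(128, 30, (8, 8))
hp, lp = two_layer_split(block, beta=10)
print(np.count_nonzero(hp), np.count_nonzero(lp))  # 10 and 54 coefficients
```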


A Deep Learning based Inter-Layer Reference Picture Generation Method for Improving SHVC Coding Performance (SHVC 부호화 성능 개선을 위한 딥러닝 기반 계층간 참조 픽처 생성 방법)

  • Lee, Wooju; Lee, Jongseok; Sim, Dong-Gyu; Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.24 no.3 / pp.401-410 / 2019
  • In this paper, we propose a deep learning-based reference picture generation method for inter-layer prediction to improve SHVC coding performance. We describe a structure that filters a DCT-IF based upsampled picture with a VDSR network to generate a new reference picture, together with a training method for generating SHVC inter-layer reference pictures. The proposed method is implemented on top of SHM 12.0. To evaluate performance, we compare it with a method that generates the inter-layer predictor by dictionary learning. The coding performance of the enhancement layer shows a bitrate reduction of up to 13.14% compared to the dictionary-learning method, and of up to 15.39% (6.46% on average) compared to SHM.
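
As a sketch of the DCT-IF upsampling stage this pipeline builds on, the code below applies the HEVC half-sample luma DCT-IF taps separably for 2x upsampling. Using this filter as a stand-in for SHVC's upsampler is a simplification, and the subsequent VDSR filtering is omitted.

```python
import numpy as np

# HEVC half-sample luma DCT-IF taps (sum to 64), used here as a stand-in
# for the SHVC upsampling filter at phase 1/2.
DCT_IF_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.float64)

def upsample_2x_1d(line):
    """Insert a DCT-IF interpolated half-sample between neighbouring samples."""
    padded = np.pad(line, (3, 4), mode="edge")
    out = np.empty(2 * len(line))
    out[0::2] = line                              # integer positions: copy
    for i in range(len(line)):                    # half positions: 8-tap filter
        out[2 * i + 1] = padded[i:i + 8] @ DCT_IF_HALF / 64.0
    return out

def upsample_2x(picture):
    """Separable 2x upsampling, rows then columns. In the paper's pipeline
    a VDSR-style CNN would then filter this picture; that step is omitted."""
    rows = np.stack([upsample_2x_1d(r) for r in picture])
    return np.stack([upsample_2x_1d(c) for c in rows.T]).T

low = np.arange(16.0).reshape(4, 4)
print(upsample_2x(low).shape)  # (8, 8)
```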

An Adaptive Algorithm for the Quantization Step Size Control of MPEG-2

  • Cho, Nam-Ik
    • Journal of Electrical Engineering and Information Science / v.2 no.6 / pp.138-145 / 1997
  • This paper proposes an adaptive algorithm for the quantization step size control of MPEG-2, using information obtained from the previously encoded picture. Before the DCT coefficients are quantized, the reconstruction error properties of each macroblock (MB) are predicted from the previous frame: an MB-sized block in the previous frame is chosen using the motion vector, and since both the original and reconstructed images of the previous frame are available in the encoder, the reconstruction error of this block can be calculated. This error is taken as the expected error of the current MB if it were quantized with the same step size and bit rate. The error of each MB is compared with the average over all MBs: if it is larger than the average, a smaller step size is assigned to the MB, and vice versa. As a result, the error distribution across MBs is more concentrated around the average, giving lower variance and improved image quality. Especially for low-bit-rate applications, the proposed algorithm gives a much smaller error variance and a higher PSNR than TM5 (Test Model 5).
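
A toy version of the step-size rule: macroblocks whose predicted reconstruction error exceeds the frame average receive a smaller quantizer step, and vice versa. The power-law scaling with `gain` is an assumed form, not TM5's or the paper's exact formula.

```python
import numpy as np

def adapt_qp(predicted_errors, base_step, gain=0.5):
    """Scale the quantizer step per macroblock: error above the frame
    average -> ratio > 1 -> smaller step (finer quantization)."""
    errors = np.asarray(predicted_errors, dtype=np.float64)
    ratio = (errors / errors.mean()) ** gain
    return base_step / ratio

mb_errors = [4.0, 9.0, 16.0, 25.0]   # hypothetical per-MB predicted errors
print(adapt_qp(mb_errors, base_step=8.0))
```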


Error Resilient Video Coding Techniques Using Multiple Description Scheme (다중 표현을 이용한 에러에 강인한 동영상 부호화 방법)

  • 김일구; 조남익
    • Journal of Broadcast Engineering / v.9 no.1 / pp.17-31 / 2004
  • This paper proposes an algorithm for the robust transmission of video in error-prone environments, using multiple description coding (MDC) with an optimal split of DCT coefficients and a rate-distortion (RD) optimization framework. In MDC, a source signal is split into several coded streams, called descriptions, and each description is transmitted to the decoder over a different channel. Structured correlation between descriptions is introduced at the encoder, and the decoder exploits this correlation to reconstruct the original signal even if some descriptions are missing. MDC has been shown to be more resilient than single description coding (SDC) under severe packet loss rate (PLR) conditions, but the excess redundancy in MDC, i.e., the correlation between descriptions, degrades RD performance under low PLR. To overcome this problem, we propose a hybrid method that switches between SDC and MDC according to channel conditions: SDC is used for coding efficiency at low PLR, and MDC for error resilience at high PLR. To control the SDC/MDC switching optimally, an RD optimization framework is used; Lagrangian optimization minimizes the RD cost function D + λR, where R is the actual coded bit rate and D is the estimated distortion. A recursive optimal per-pixel estimation technique is adopted to estimate the decoder distortion accurately. Experimental results show that the proposed optimal split of DCT coefficients and SD/MD switching algorithm is more effective than conventional MDC algorithms under low as well as high PLR conditions.
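
The SDC/MDC switch reduces to comparing Lagrangian costs J = D + λR per mode. In the sketch below, the rate and distortion numbers are hypothetical stand-ins for the recursive per-pixel decoder-distortion estimate under the current packet loss rate.

```python
def lagrangian_mode_switch(candidates, lam):
    """Pick the coding mode minimizing J = D + lambda * R."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

# Hypothetical estimates at a high packet loss rate: SDC bits are cheaper,
# but its expected decoder distortion is much larger.
modes = [
    {"name": "SDC", "R": 1000, "D": 95.0},
    {"name": "MDC", "R": 1400, "D": 40.0},
]
print(lagrangian_mode_switch(modes, lam=0.01)["name"])  # -> MDC
```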

Improvement of Image Compression Using EZW Based in HWT (HWT에 기초한 EZW를 이용한 영상압축 개선)

  • Kim, Jang-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2641-2646 / 2011
  • In this paper, we propose an EZW algorithm based on the HWT (Haar Wavelet Transform) as an effective compression technique for wavelet-transformed images. The proposed Haar-EZW algorithm codes the image with zerotree coding, exploiting the self-similarity of HWT coefficients. If an HWT coefficient exceeds the threshold in magnitude, it is coded as POS when positive and NEG when negative. If an insignificant coefficient is the root of a zerotree, it is coded as ZTR; if it is insignificant but not the root of a zerotree, it is coded as IZ. This process is repeated until all HWT coefficients have been encoded. We compare the Haar-EZW algorithm with the Daubechies and Antonini wavelets; the results show that the PSNR of the Haar-EZW algorithm is better than that of Daubechies and Antonini.
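
The four-symbol decision of the EZW dominant pass can be sketched directly. The code below classifies one coefficient given its descendants; the tree traversal and the subordinate (refinement) pass are left out.

```python
def ezw_symbol(coeff, descendants, threshold):
    """EZW dominant-pass symbol: POS/NEG when significant, ZTR when the
    coefficient and all descendants are insignificant, IZ when only a
    descendant carries a significant value."""
    if abs(coeff) >= threshold:
        return "POS" if coeff > 0 else "NEG"
    if all(abs(d) < threshold for d in descendants):
        return "ZTR"          # zerotree root: whole subtree insignificant
    return "IZ"               # isolated zero: some descendant is significant

print(ezw_symbol(45, [3, -2, 7], 32))    # -> POS
print(ezw_symbol(-40, [1, 0, 2], 32))    # -> NEG
print(ezw_symbol(5, [3, -2, 7], 32))     # -> ZTR
print(ezw_symbol(5, [3, -60, 7], 32))    # -> IZ
```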

Development of 3-State Blind Digital Watermark based on the Correlation Function (신호상관함수를 이용한 3 상태 능동적 디지털 워터마크의 개발)

  • Choi, YongSoo
    • Journal of Software Assessment and Valuation / v.16 no.2 / pp.143-151 / 2020
  • The security and authentication of digital content are important in digital content applications, and digital watermarking is one authentication method. This paper presents a digital watermark authentication method for digital images. The proposed watermark carries three-state information and performs embedding and detection blindly, without the original content. An autocorrelation function is used when authenticating the owner information of the digital content, and a spread-spectrum method is used to adapt to the signal of the original content in the frequency domain (DWT domain), which reduces the possibility of errors in detecting the hidden information. Watermarking in the DWT domain also has the advantage of faster embedding and detection than other transform domains (DFT, DCT, etc.): for an image of size N = m × m, the computational amount can be reduced from O(N log N) to O(N). A particular advantage is that more information (bits) can be hidden per embedded symbol, since each symbol carries one of three states.
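
A minimal sketch of blind spread-spectrum embedding and correlation-based detection in the DWT domain, using PyWavelets. The subband choice, key handling, and strength below are illustrative assumptions, and the paper's three-state design is not reproduced.

```python
import numpy as np
import pywt

def embed(image, key, strength=4.0):
    """Add a key-generated pseudo-random pattern to detail subbands of a
    one-level Haar DWT, then reconstruct the watermarked image."""
    ll, (lh, hl, hh) = pywt.dwt2(image, "haar")
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=lh.shape)
    marked = pywt.idwt2((ll, (lh + strength * pattern,
                              hl + strength * pattern, hh)), "haar")
    return marked, pattern

def detect(image, pattern):
    """Blind detection: correlate one detail subband with the key pattern;
    a correlation well above zero indicates the watermark is present."""
    _, (lh, _, _) = pywt.dwt2(image, "haar")
    return float(np.mean(lh * pattern))

img = np.random.default_rng(7).normal(128, 20, (64, 64))
marked, pat = embed(img, key=1234)
print(detect(marked, pat), detect(img, pat))  # high vs near-zero correlation
```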

Directional Postprocessing Techniques to Improve Image Quality in Wavelet-based Image Compression (웨이블릿 기반 압축영상의 화질 향상을 위한 방향성 후처리 기법)

  • 김승종; 정제창
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.6B / pp.1028-1040 / 2000
  • Since image data is voluminous, proper image compression is necessary to transmit and store it efficiently. Image compression reduces the bit rate but introduces artifacts: blocking artifacts and mosquito noise are observed in DCT-based compressed images, while ringing artifacts are perceived around edges in wavelet-based compressed images. In this paper, we propose a directional postprocessing technique that improves decoded image quality, exploiting the fact that human vision is sensitive to ringing artifacts around image edges. First, we detect the edge direction in each block; then we perform directional postprocessing according to the detected direction. If the correlation coefficients are equivalent in all directions, postprocessing is not performed for that block, which shortens the postprocessing time.
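
The per-block direction test can be sketched as below. A mean-absolute-difference smoothness score stands in for the paper's correlation coefficients, the candidate directions and margin are illustrative, and blocks with no dominant direction return None so they skip postprocessing.

```python
import numpy as np

# Unit pixel offsets for four candidate edge directions
DIRECTIONS = {"horizontal": (0, 1), "vertical": (1, 0),
              "diag_45": (1, 1), "diag_135": (1, -1)}

def shifted_pair(block, dy, dx):
    """Overlapping views of the block and its (dy, dx)-shifted neighbour."""
    h, w = block.shape
    y0, y1 = max(dy, 0), h + min(dy, 0)
    x0, x1 = max(dx, 0), w + min(dx, 0)
    return block[y0:y1, x0:x1], block[y0 - dy:y1 - dy, x0 - dx:x1 - dx]

def dominant_direction(block, margin=1.0):
    """Score each direction by smoothness (negative mean absolute neighbour
    difference); return None when no direction clearly dominates."""
    scores = {}
    for name, (dy, dx) in DIRECTIONS.items():
        a, b = shifted_pair(block, dy, dx)
        scores[name] = -np.mean(np.abs(a - b))
    best = max(scores, key=scores.get)
    runner_up = max(v for k, v in scores.items() if k != best)
    return best if scores[best] - runner_up > margin else None

block = np.tile(np.linspace(0, 255, 8).reshape(8, 1), (1, 8))  # horizontal stripes
print(dominant_direction(block))                                # -> "horizontal"
```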
