• Title/Summary/Keyword: Image complexity

Document Image Layout Analysis Using Image Filters and Constrained Conditions (이미지 필터와 제한조건을 이용한 문서영상 구조분석)

  • Jang, Dae-Geun;Hwang, Chan-Sik
    • The KIPS Transactions: Part B / v.9B no.3 / pp.311-318 / 2002
  • Document image layout analysis consists of segmenting a document image into detailed regions and classifying the segmented regions as text, picture, table, etc. In the region classification step, the size of a region, the density of black pixels, and the complexity of the pixel distribution are the bases of classification. For pictures, however, these measures vary over such wide ranges that it is difficult to set classification thresholds separating pictures from the other region types; as a result, pictures suffer a higher classification error than other regions. In this paper, we propose a document image layout analysis method whose picture and text region classification performance is better than that of previous methods, including commercial software. For picture and text region classification, a median filter is used to reduce the influence of region size, black-pixel density, and pixel-distribution complexity, as illustrated in the sketch below. Furthermore, remaining classification errors are corrected by a region-expanding filter and constrained conditions.
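
As a rough illustration of the idea described above (not the authors' algorithm), the sketch below median-filters a binarized region before measuring black-pixel density and a simple pixel-distribution complexity; the filter size and thresholds are placeholder assumptions.

```python
# Hedged sketch: median-filter a binarized region, then classify it from
# black-pixel density and horizontal black/white transitions.
import numpy as np
from scipy.ndimage import median_filter

def classify_region(region_bin, size=5, density_thr=0.5, complexity_thr=0.1):
    """region_bin: 2-D binary array (1 = black pixel) for one segmented region.
    Thresholds and filter size are illustrative assumptions, not the paper's values."""
    smoothed = median_filter(region_bin.astype(np.uint8), size=size)
    density = smoothed.mean()                                  # black-pixel density
    transitions = (smoothed[:, 1:] != smoothed[:, :-1]).mean() # pixel-distribution complexity
    # After median filtering, text keeps many regular transitions at moderate
    # density, while pictures tend toward high density with few transitions.
    if density > density_thr and transitions < complexity_thr:
        return "picture"
    return "text"
```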

Super-Resolution Algorithm Using Motion Estimation for Moving Vehicles (움직임 추정 기법을 이용한 움직이는 차량의 초고해상도 복원 알고리즘)

  • Kim, Seung-Hoon;Cho, Sang-Bock
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.4 / pp.23-31 / 2012
  • This paper proposes a motion-estimation-based super-resolution algorithm that restores a super-resolution image from low-resolution input images containing large motion. Sub-pixel motion estimation is difficult in such images compared with typical test images, and finding reference and candidate images with a general motion estimation method incurs high computational complexity. To address these problems, the reference image is determined by applying the proposed registration threshold to conventional two-dimensional motion estimation, and the candidate images with the minimum weight among the best candidates are passed to the restoration process; a new image registration algorithm is proposed for this selection, and a simplified threshold-based sketch follows below. According to the experimental results, the average PSNR of the proposed algorithm is 31.89 dB, which is higher than that of the traditional super-resolution algorithm, and the computational complexity is also improved.
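
The abstract does not fully specify the registration criterion; as a hedged illustration of the general idea only, the sketch below selects candidate frames for super-resolution with full-search block matching and an assumed MSE-based registration threshold. The block size, search range, and threshold value are not the paper's parameters.

```python
# Illustrative sketch: keep only candidate frames that register well against the
# reference frame, judged by block-matching MSE against an assumed threshold.
import numpy as np

def block_motion_mse(ref, cand, block=16, search=8):
    """Mean best-match MSE between ref and cand using full-search block matching;
    a low value means cand registers well against ref."""
    h, w = ref.shape
    errors = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tgt = ref[y:y + block, x:x + block].astype(np.float64)
            best = np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        blk = cand[yy:yy + block, xx:xx + block].astype(np.float64)
                        best = min(best, float(np.mean((tgt - blk) ** 2)))
            errors.append(best)
    return float(np.mean(errors))

def select_candidates(ref, frames, mse_threshold=50.0):
    """Registration threshold (placeholder value) applied to each candidate frame."""
    return [f for f in frames if block_motion_mse(ref, f) < mse_threshold]
```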

Single Image Haze Removal Algorithm using Dual DCP and Adaptive Brightness Correction (Dual DCP 및 적응적 밝기 보정을 통한 단일 영상 기반 안개 제거 알고리즘)

  • Kim, Jongho
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.11 / pp.31-37 / 2018
  • This paper proposes an effective single-image haze-removal algorithm with low complexity by using a dual dark channel prior (DCP) and an adaptive brightness correction technique. The dark channel of a small patch preserves the edge information of the image, but is sensitive to noise and local brightness variations. On the other hand, the dark channel of a large patch is advantageous for estimating the exact haze value, but halo artifacts caused by blocking deteriorate haze-removal performance. To solve this problem, the proposed algorithm builds a dual DCP as a combination of dark channels from patches of different sizes (a rough sketch of this computation follows below), which meets low-memory and low-complexity requirements, whereas the conventional method uses a matting technique that requires a large amount of memory and heavy computation. Moreover, an adaptive brightness correction technique applied to the recovered image preserves the objects in the image more clearly. Experimental results for various hazy images demonstrate that the proposed algorithm removes haze effectively while requiring far fewer computations and less memory than conventional methods.
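
A rough sketch of the dual dark channel computation is shown below, assuming NumPy/SciPy; the patch sizes, blending weight, and omega value are illustrative assumptions, not the paper's values or its exact combination rule.

```python
# Hedged sketch of the dual dark channel idea: blend the dark channel of a small
# patch (edge-preserving) with that of a large patch (robust haze estimate).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch):
    """img: HxWx3 float array in [0,1]; patch: window size of the minimum filter."""
    min_rgb = img.min(axis=2)                      # per-pixel minimum over channels
    return minimum_filter(min_rgb, size=patch)     # spatial minimum over the patch

def dual_dark_channel(img, small=3, large=15, alpha=0.5):
    # alpha is a placeholder blending weight, not the paper's rule.
    return alpha * dark_channel(img, small) + (1 - alpha) * dark_channel(img, large)

def estimate_transmission(img, airlight, omega=0.95):
    """Standard DCP-style transmission estimate using the dual dark channel."""
    normalized = img / np.maximum(airlight, 1e-6)
    return 1.0 - omega * dual_dark_channel(normalized)
```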

Design and Implementation of Image Detection System Using Vertical Histogram-Based Shadow Removal Algorithm (수직 히스토그램 기반 그림자 제거 알고리즘을 이용한 영상 감지 시스템 설계 및 구현)

  • Jang, Young-Hwan;Lee, Jae-Chul;Park, Seok-Cheon;Lee, Bong-Gyou;Lee, Sang-Soon
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.91-99 / 2020
  • Shadow removal, the base technology of image detection systems, poses a problem for real-time image processing: its computational complexity reduces the processing speed, and it is sensitive to illumination and lighting when shadows are removed using brightness difference alone. Therefore, in this paper, we improve real-time performance by removing the weighting step, reducing the computational complexity of the conventional system. In addition, we design and evaluate an image detection system based on a shadow removal algorithm that improves the shadow recognition rate using a vertical histogram; a simple column-histogram heuristic is sketched below. The evaluation results confirm that the average processing speed improved by approximately 5.6 ms and the detection rate improved by approximately 5.5 %p compared with the conventional image detection system.
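
As a hedged illustration of the vertical-histogram idea, the sketch below clears columns of a foreground mask whose column-wise pixel count falls below an assumed fraction of the peak; the cut-off rule is a placeholder heuristic, not the paper's exact algorithm.

```python
# Conceptual sketch: trim shadow spill from a foreground mask using a vertical
# (column-wise) histogram. The ratio threshold is an assumed heuristic.
import numpy as np

def remove_shadow_columns(mask, ratio=0.3):
    """mask: 2-D binary foreground mask (1 = object or shadow pixel).
    Columns whose foreground count falls below ratio * peak are treated as
    shadow spill and cleared."""
    col_hist = mask.sum(axis=0)                    # vertical histogram
    peak = col_hist.max() if col_hist.max() > 0 else 1
    keep = col_hist >= ratio * peak                # columns dominated by the object
    cleaned = mask.copy()
    cleaned[:, ~keep] = 0
    return cleaned
```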

Application of PRA to the Difference Image for Prediction Error Reduction in DPCM Image Coding (DPCM 영상 부호화기에서 예측 오차를 줄이기 위한 변환된 영상에서의 PRA 적용)

  • 문주희;고종석;김재균
    • Proceedings of the Korean Institute of Communication Sciences Conference / 1986.10a / pp.56-58 / 1986
  • This paper proposes a conversion method to reduce the prediction error produced when a PRA (Pel Recursive Algorithm) motion estimation method is applied to real images. The method is to compute a spatial difference image from the given raw image and then apply any PRA method to the difference image; a rough sketch of this combination follows below. The algorithm proposed in this paper is compared with several algorithms, including the ubiquitous Netravali and Robbins algorithm, in terms of performance and hardware complexity. Computer simulation shows that the difference-image conversion method is about 4.5 dB better than the other algorithms with regard to prediction error power.
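
A minimal sketch of the difference-image-plus-PRA idea follows, using a horizontal first difference and a Netravali-Robbins style steepest-descent update; the step size and iteration count are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: take a spatial difference image first, then run a pel-recursive
# (Netravali-Robbins style) displacement update on the difference images.
import numpy as np

def spatial_difference(img):
    """Horizontal first difference used as a simple spatial difference image."""
    f = img.astype(np.float64)
    d = np.zeros_like(f)
    d[:, 1:] = f[:, 1:] - f[:, :-1]
    return d

def pel_recursive_displacement(cur, prev, y, x, eps=1e-3, iters=5):
    """Refine the displacement estimate d = (dy, dx) at pixel (y, x);
    cur and prev are difference images of consecutive frames."""
    gy, gx = np.gradient(prev)
    d = np.zeros(2)
    for _ in range(iters):
        yy = int(np.clip(round(y - d[0]), 0, prev.shape[0] - 1))
        xx = int(np.clip(round(x - d[1]), 0, prev.shape[1] - 1))
        dfd = cur[y, x] - prev[yy, xx]                 # displaced frame difference
        d = d - eps * dfd * np.array([gy[yy, xx], gx[yy, xx]])
    return d
```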

A Low-Complexity and High-Quality Image Compression Method for Digital Cameras

  • Xie, Xiang;Li, Guolin;Wang, Zhihua
    • ETRI Journal / v.28 no.2 / pp.260-263 / 2006
  • This letter proposes a new near-lossless image compression method requiring only one line buffer for digital cameras with Bayer-format images. For such data, it provides a low average bit rate (4.24 bits/pixel) with high image quality (larger than 46.37 dB, with the error of every pixel less than two). The experimental results show that the near-lossless compression method performs better than JPEG-LS (lossless) with δ = 2 for a Bayer-format image; a generic bounded-error DPCM sketch is given below.
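
As a generic illustration of near-lossless coding with a bounded per-pixel error (not the paper's codec), the sketch below runs a bounded-error DPCM over one row, consistent in spirit with a one-line-buffer constraint; the predictor and quantizer are assumptions, and the Bayer color-plane separation is ignored.

```python
# Hedged sketch: near-lossless DPCM with |reconstruction error| <= delta,
# predicting each pixel from the previously decoded pixel.
import numpy as np

def near_lossless_dpcm_row(row, delta=2):
    """Encode one row; prediction uses reconstructed values, as a decoder would."""
    step = 2 * delta + 1
    recon = np.empty_like(row, dtype=np.int32)
    symbols = []
    prev = 128                                    # fixed initial predictor
    for i, pixel in enumerate(row.astype(np.int32)):
        residual = int(pixel) - prev
        q = int(np.round(residual / step))        # uniform quantizer, |error| <= delta
        symbols.append(q)                         # these symbols would be entropy coded
        recon[i] = np.clip(prev + q * step, 0, 255)
        prev = int(recon[i])
    return symbols, recon

row = np.array([100, 103, 101, 120, 119, 118], dtype=np.uint8)
syms, rec = near_lossless_dpcm_row(row)
assert np.all(np.abs(rec.astype(int) - row.astype(int)) <= 2)
```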

Improvement of 3D Stereoscopic Perception Using Depth Map Transformation (깊이맵 변환을 이용한 3D 입체감 개선 방법)

  • Jang, Seong-Eun;Jung, Da-Un;Seo, Joo-Ha;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.16 no.6 / pp.916-926 / 2011
  • It is well known that high-resolution 3D movie content frequently does not deliver the same 3D perception as low-resolution 3D images. To solve this problem, we propose a novel method that produces a new stereoscopic image based on depth map transformation using the spatial complexity of the image. After analyzing the depth map histogram, the depth map is decomposed into multiple depth planes, which are transformed according to the spatial complexity; the transformed depth planes are then composited into a new depth map (a simplified sketch follows below). Experimental results demonstrate that the lower the spatial complexity, the higher the perceived video quality and depth perception. A visual fatigue test also showed that the generated stereoscopic images cause less visual fatigue.
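
A simplified sketch of the depth-plane transformation idea is given below; the spatial-complexity measure (mean local variance), the number of planes, and the gain mapping are assumptions for illustration only, not the paper's definitions.

```python
# Hedged sketch: split a depth map into histogram-based depth planes and stretch
# the depth range more when the image's spatial complexity is low.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_complexity(gray):
    """Mean local variance as a simple (assumed) spatial-complexity measure."""
    g = gray.astype(np.float64)
    local_mean = uniform_filter(g, size=9)
    local_sq_mean = uniform_filter(g * g, size=9)
    return float(np.mean(local_sq_mean - local_mean ** 2))

def transform_depth(depth, gray, planes=4, max_gain=1.5):
    """Scale each depth plane about its centre; lower complexity -> larger gain."""
    c = spatial_complexity(gray)
    gain = 1.0 + (max_gain - 1.0) / (1.0 + c / 100.0)      # placeholder mapping
    edges = np.quantile(depth, np.linspace(0, 1, planes + 1))
    out = depth.astype(np.float64).copy()
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (depth >= lo) & ((depth <= hi) if i == planes - 1 else (depth < hi))
        centre = (lo + hi) / 2.0
        out[sel] = centre + gain * (out[sel] - centre)
    return np.clip(out, 0, 255)
```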

A Study on Frequency Hopping Signal Detection Using a Polyphase DFT Filterbank (다상 DFT 필터뱅크를 이용한 도약신호 검출에 관한 연구)

  • Kwon, Jeong-A;Lee, Cho-Ho;Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.4 / pp.789-796 / 2013
  • It is known that detecting hopping signals without any information about the hopping duration and hopping frequency is rather difficult. This paper considers the blind detection of hopping signal information, such as hopping duration and hopping frequency, from sampled wideband signals. To find hopping information in the wideband signal, multiple narrow-band filters are generally required, which leads to huge implementation complexity. Instead, this paper employs a polyphase DFT (discrete Fourier transform) filterbank to reduce the implementation complexity and proposes a hopping signal detection algorithm that operates on the polyphase DFT filterbank output; a minimal channelizer sketch follows below. Specifically, based on binary image processing, the proposed algorithm is designed to decrease the memory size and hardware complexity. The performance of the proposed algorithm is evaluated through computer simulation and FPGA (field programmable gate array) implementation.
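
A minimal polyphase DFT filterbank channelizer is sketched below for illustration; the channel count, prototype filter, and commutation convention are assumptions, and the hop-detection logic applied to the channelized output is omitted.

```python
# Hedged sketch of a polyphase DFT filterbank: decompose a prototype low-pass
# filter into M polyphase branches, filter the M decimated input streams, then
# apply an M-point FFT across the branches.
import numpy as np
from scipy.signal import firwin, lfilter

def polyphase_dft_filterbank(x, M=16, taps_per_branch=8):
    proto = firwin(M * taps_per_branch, 1.0 / M)      # prototype low-pass filter
    poly = proto.reshape(taps_per_branch, M)          # polyphase components h[kM + m]
    n_blocks = len(x) // M
    blocks = x[:n_blocks * M].reshape(n_blocks, M)    # commutate input into M streams
    branch_out = np.empty((n_blocks, M), dtype=complex)
    for m in range(M):
        branch_out[:, m] = lfilter(poly[:, m], [1.0], blocks[:, m])
    return np.fft.fft(branch_out, axis=1)             # rows: time steps, columns: channels

# A hopping tone shows up as a bright streak in one channel of the magnitude output.
t = np.arange(4096)
x = np.cos(2 * np.pi * 0.23 * t)
channels = np.abs(polyphase_dft_filterbank(x))
print(channels.shape)   # (256, 16)
```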

An Image Interpolation Method using an Improved Least Square Estimation (개선된 Least Square Estimation을 이용한 영상 보간 방법)

  • Lee Dong Ho;Na Seung Je
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.10C / pp.1425-1432 / 2004
  • Because of its high performance in edge regions, the existing LSE (Least Square Estimation) method provides much better results than other methods. However, since it emphasizes not only edge components but also noise components, some parts of the interpolated images look unnatural. It also requires very high computational complexity and a large amount of memory for implementation. We propose a new LSE interpolation method that requires much lower complexity and memory but provides better performance than the existing method. To reduce the computational complexity, we propose and adopt a simple sample window, and a direction detector reduces the required memory without blurring the image. To avoid emphasizing noise components, the bilinear interpolation method is incorporated into the LSE formula; a local least-squares sketch of this blend is given below. The simulation results show that the proposed method provides better subjective and objective performance with lower complexity than the existing method.
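
As a hedged sketch of blending a local least-squares estimate with bilinear interpolation (in the spirit of LSE/NEDI-style methods, not the authors' exact formula), the code below estimates four prediction weights from a local window of the low-resolution image; the window size and blend factor are assumptions.

```python
# Hedged sketch: local least-squares interpolation of the HR pixel at the centre
# of a 2x2 LR cell, blended with plain bilinear to avoid amplifying noise.
import numpy as np

def lse_diagonal_estimate(lr, y, x, win=3, blend=0.5):
    """Estimate the HR pixel at the centre of the LR cell (y, x)-(y+1, x+1)."""
    g = lr.astype(np.float64)
    rows, targets = [], []
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            yy, xx = y + dy, x + dx
            if 1 <= yy < g.shape[0] - 1 and 1 <= xx < g.shape[1] - 1:
                # train weights by predicting LR pixels from their diagonal LR neighbours
                rows.append([g[yy-1, xx-1], g[yy-1, xx+1], g[yy+1, xx-1], g[yy+1, xx+1]])
                targets.append(g[yy, xx])
    A, b = np.asarray(rows), np.asarray(targets)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)          # local least-squares weights
    diag = np.array([g[y, x], g[y, x+1], g[y+1, x], g[y+1, x+1]])
    lse = float(w @ diag)
    bilinear = float(diag.mean())                      # bilinear value at the cell centre
    return blend * lse + (1.0 - blend) * bilinear      # blending suppresses noise boost
```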

Adaptive In-loop Filter Method for High-efficiency Video Coding (고효율 비디오 부호화를 위한 적응적 인-루프 필터 방법)

  • Jung, Kwang-Su;Nam, Jung-Hak;Lim, Woong;Jo, Hyun-Ho;Sim, Dong-Gyu;Choi, Byeong-Doo;Cho, Dae-Sung
    • Journal of Broadcast Engineering / v.16 no.1 / pp.1-13 / 2011
  • In this paper, we propose an adaptive in-loop filter to improve coding efficiency. Recently, the post-filter hint SEI and block-based adaptive filter control (BAFC) methods have been introduced in video coding standards; both are based on the Wiener filter, which minimizes the mean square error between the input image and the decoded image. However, since the post-filter hint SEI is applied only to the output image, it cannot reduce the prediction errors of subsequent frames. Because BAFC is conducted independently of the deblocking filter, it also imposes high computational complexity on the encoder and decoder sides. In this paper, we propose a low-complexity adaptive in-loop filter (LCALF) that adaptively reuses the H.264/AVC deblocking filter, has lower computational complexity, and shows better performance than the conventional method; the core Wiener-filter step is sketched below. In the experimental results, the computational complexity of the proposed method is reduced by about 22% compared with the conventional method, and the coding efficiency is about 1% better than that of BAFC.
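
The core Wiener-filter step behind such in-loop filters can be sketched as a least-squares fit of FIR taps that minimize the MSE between the decoded and original frames; the tap count below is an assumption, and the block-level on/off adaptation and deblocking interaction of the actual methods are omitted.

```python
# Hedged sketch: train Wiener (least-squares) filter taps on a decoded frame
# against the original, then apply them to the decoded frame.
import numpy as np
from scipy.ndimage import correlate

def train_wiener_taps(decoded, original, radius=2):
    """Least-squares fit of (2*radius+1)^2 taps over interior pixels."""
    h, w = decoded.shape
    rows, target = [], []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = decoded[y - radius:y + radius + 1, x - radius:x + radius + 1]
            rows.append(patch.astype(np.float64).ravel())
            target.append(float(original[y, x]))
    taps, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(target), rcond=None)
    return taps.reshape(2 * radius + 1, 2 * radius + 1)

def apply_wiener(decoded, taps):
    """Filter the decoded frame with the trained taps (edges replicated)."""
    return correlate(decoded.astype(np.float64), taps, mode="nearest")
```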