• Title/Summary/Keyword: Image processing

LOSSLESS DATA COMPRESSION ON SAR DISPLAY IMAGES (SAR 디스플레이 영상을 위한 무손실 압축)

  • Lee, Tae-hee;Song, Woo-jin;Do, Dae-won;Kwon, Jun-chan;Yoon, Byung-woo
    • Proceedings of the IEEK Conference
    • /
    • 2001.09a
    • /
    • pp.117-120
    • /
    • 2001
  • Synthetic aperture radar (SAR) is a promising active remote sensing technique for obtaining large-scale terrain information about the earth in all weather conditions. SAR is useful in many applications, including terrain mapping and geographic information systems (GIS), which use SAR display images. These applications usually require enormous data storage because they deal with wide, high-resolution terrain images, so compression is a useful approach to handling SAR display images with limited storage. Because some data loss is unavoidable in the conversion of a complex SAR image to a display image, applications that need high-resolution images cannot tolerate further loss during compression; lossless compression is therefore appropriate for them. In this paper, we propose a novel lossless compression technique for SAR display images using a one-step predictor and block arithmetic coding.

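A minimal sketch of the paper's two-stage idea, assuming a plain previous-pixel predictor: predict each pixel from its left neighbor and measure the empirical entropy of the residuals, which is the rate an ideal arithmetic coder would approach. The synthetic ramp image and all parameters are illustrative assumptions, not the paper's data or coder.

    # One-step prediction front-end for lossless compression of a display image.
    import numpy as np

    def one_step_residuals(img):
        """Predict each pixel from its left neighbor; the first column stays raw."""
        img = img.astype(np.int16)
        residuals = img.copy()
        residuals[:, 1:] = img[:, 1:] - img[:, :-1]  # prediction error
        return residuals

    def residual_entropy_bits(residuals):
        """Empirical entropy: the bits/pixel an ideal arithmetic coder approaches."""
        _, counts = np.unique(residuals, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # Smooth synthetic 8-bit image standing in for a SAR display image.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 255, 512)
    img = (np.add.outer(x, x) / 2 + rng.normal(0, 2, (512, 512))).clip(0, 255).astype(np.uint8)
    print(f"{residual_entropy_bits(one_step_residuals(img)):.2f} bits/pixel (vs. 8 raw)")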

Image Processing-based Validation of Unrecognizable Numbers in Severely Distorted License Plate Images

  • Jang, Sangsik;Yoon, Inhye;Kim, Dongmin;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.1
    • /
    • pp.17-26
    • /
    • 2012
  • This paper presents an image processing-based validation method for unrecognizable numbers in severely distorted license plate images that have been degraded by various factors, including low resolution, low light level, geometric distortion, and periodic noise. Existing vehicle license plate recognition (LPR) methods assume that most image degradation factors have been removed before the printed numbers and letters are recognized; if this is not the case, conventional LPR becomes impossible. The proposed method adopts a novel approach in which a set of reference number images is intentionally degraded using the same factors estimated from the input image. After a series of image processing steps, including geometric transformation, super-resolution, and filtering, a cross-correlation comparison between the intentionally degraded references and the input image can successfully identify the visually unrecognizable numbers. The proposed method makes it possible to validate numbers in a license plate image taken under low light-level conditions. In experiments on an extended set of test images that are unrecognizable to human vision, the proposed method achieves a recognition rate of over 95%, whereas most existing LPR methods fail due to the severe distortion.

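A minimal sketch of the validation idea, assuming a Gaussian-blur-plus-downsampling degradation model: clean reference digit templates are degraded with the factors estimated from the input, and the digit whose degraded template correlates best with the unrecognizable patch wins. The function names and parameters here are illustrative, not the paper's estimated factors.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def degrade(template, blur_sigma, scale):
        """Apply the degradation estimated from the input image to a clean template."""
        small = zoom(template, scale, order=1)            # resolution loss
        return gaussian_filter(small, sigma=blur_sigma)   # optical blur

    def ncc(a, b):
        """Normalized cross-correlation of two equal-size patches."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    def validate_digit(patch, templates, blur_sigma, scale):
        """Return the digit whose degraded reference best matches the patch.
        The patch and the degraded templates are assumed to have equal size."""
        scores = {d: ncc(patch, degrade(t, blur_sigma, scale))
                  for d, t in templates.items()}
        return max(scores, key=scores.get)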

On Post-Processing of Coded Images by Using the Narrow Quantization Constraint (협 양자화 제약 조건을 이용한 부호화된 영상의 후처리)

  • 박섭형;김동식;이상훈
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.4
    • /
    • pp.648-661
    • /
    • 1997
  • This paper presents a new method for post-processing coded images based on low-pass filtering followed by projection onto the narrow quantization constraint set (NQCS). We also investigate how the proposed method works on JPEG-coded real images. The starting point of QCS-based post-processing techniques is the centroid of the QCS, to which the original image belongs. Low-pass filtering followed by projection onto the QCS places the image on the boundary of the QCS; it is likely, however, that the original image lies inside the QCS. Hence, projection onto the NQCS gives a lower mean square error (MSE) than projection onto the QCS. Simulation results show that setting the narrowing coefficient of the NQCS to 0.2 yields the best performance in most cases. Even if the JPEG-coded image is repeatedly low-pass filtered and projected onto the NQCS, there is no guarantee that the resulting image has a lower MSE or is closer to the original image, so a single iteration is sufficient for post-processing. This is interesting because the main drawback of iterative post-processing techniques is their heavy computational burden; the single-iteration method reduces this burden and makes a real-time VLSI post-processor easy to implement.

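A single post-processing pass in the spirit of the paper, assuming a flat 3x3 low-pass filter and image dimensions that are multiples of 8: filter the decoded image, then clip each 8x8 block's DCT coefficients to the narrowed quantization interval. With the paper's best narrowing coefficient of 0.2, each coefficient is confined to a fifth of its original quantization cell around the dequantized value.

    import numpy as np
    from scipy.fft import dctn, idctn
    from scipy.ndimage import uniform_filter

    def nqcs_postprocess(decoded, qtable, alpha=0.2):
        """One iteration: low-pass filter, then project onto the NQCS."""
        out = uniform_filter(decoded.astype(float), size=3)   # low-pass step
        result = np.empty_like(out)
        for i in range(0, decoded.shape[0], 8):
            for j in range(0, decoded.shape[1], 8):
                coeff = dctn(out[i:i+8, j:j+8], norm='ortho')
                # Quantization indices implied by the decoded block.
                k = np.round(dctn(decoded[i:i+8, j:j+8].astype(float),
                                  norm='ortho') / qtable)
                lo = (k - 0.5 * alpha) * qtable               # narrowed interval
                hi = (k + 0.5 * alpha) * qtable
                result[i:i+8, j:j+8] = idctn(np.clip(coeff, lo, hi), norm='ortho')
        return result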

Feature Based Multi-Resolution Registration of Blurred Images for Image Mosaic

  • Fang, Xianyong;Luo, Bin;He, Biao;Wu, Hao
    • International Journal of CAD/CAM
    • /
    • v.9 no.1
    • /
    • pp.37-46
    • /
    • 2010
  • Existing methods for registering blurred images are efficient for artificially blurred images or planar registration, but they are not suitable for the naturally blurred images encountered in real image mosaicking. In this paper, we address this problem and propose a method for distortion-free stitching of naturally blurred images for image mosaics. It combines a multi-resolution scheme with robust feature-based inter-layer registration. In each layer, the Harris corner detector is used to detect features effectively, and RANSAC is used to find reliable matches for further calibration, as well as an initial homography serving as the initial motion of the next layer. Simplex and subspace trust-region methods are then used to estimate a stable focal length and rotation matrix through the transformation properties of the feature matches. To stitch multiple images together, an iterative registration strategy is also adopted to estimate the focal length of each image. Experimental results demonstrate the performance of the proposed method.

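A sketch of the per-layer step described above, using Harris corners and RANSAC to obtain reliable matches and an initial homography. BRIEF descriptors are an added assumption so that corners can be matched; the paper's simplex and subspace trust-region refinement of focal length and rotation is omitted.

    import numpy as np
    from skimage.feature import corner_harris, corner_peaks, BRIEF, match_descriptors
    from skimage.measure import ransac
    from skimage.transform import ProjectiveTransform

    def initial_homography(img1, img2):
        """Harris corners + BRIEF matching + RANSAC homography for one layer."""
        k1 = corner_peaks(corner_harris(img1), min_distance=5)
        k2 = corner_peaks(corner_harris(img2), min_distance=5)
        extractor = BRIEF()
        extractor.extract(img1, k1)
        d1, k1 = extractor.descriptors, k1[extractor.mask]
        extractor.extract(img2, k2)
        d2, k2 = extractor.descriptors, k2[extractor.mask]
        matches = match_descriptors(d1, d2, cross_check=True)
        src = k2[matches[:, 1]][:, ::-1]    # (row, col) -> (x, y)
        dst = k1[matches[:, 0]][:, ::-1]
        model, inliers = ransac((src, dst), ProjectiveTransform, min_samples=4,
                                residual_threshold=2, max_trials=500)
        return model, inliers
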
Statistical Image Processing using Java on the Web

  • Lim, Dong Hoon;Park, Eun Hee
    • Communications for Statistical Applications and Methods
    • /
    • v.9 no.2
    • /
    • pp.355-366
    • /
    • 2002
  • The web is one of the most plentiful sources of images, and it has an immediate need for image processing technology in Java. This paper provides a practical introduction to statistical image processing using Java on the web. It describes how images are represented in Java and covers four image processing operations based on basic statistical methods: point processing, spatial filtering, edge detection, and image segmentation.

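The four operations are standard; a compact sketch below shows one representative choice for each (in Python rather than the paper's Java, to keep all sketches in this listing in one language). The specific filters and thresholds are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter, sobel

    def point_process(img):      # point processing: linear contrast stretch
        lo, hi = int(img.min()), int(img.max())
        return ((img - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

    def spatial_filter(img):     # spatial filtering: 3x3 mean filter
        return uniform_filter(img.astype(float), size=3)

    def edge_detect(img):        # edge detection: Sobel gradient magnitude
        f = img.astype(float)
        return np.hypot(sobel(f, axis=0), sobel(f, axis=1))

    def segment(img):            # segmentation: mean-based threshold
        return np.where(img > img.mean(), 255, 0).astype(np.uint8)
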
A Classification Algorithm Using Ant Colony System (개미 군락 시스템을 이용한 영역 분류 알고리즘)

  • Kim, In-Kyeom;Yun, Min-Young
    • The KIPS Transactions:PartB
    • /
    • v.15B no.3
    • /
    • pp.245-252
    • /
    • 2008
  • We present a classification algorithm based on the ant colony system (ACS) for classifying digital images. The ACS has recently emerged as a useful tool for pattern recognition, image extraction, and edge detection. Classification algorithms for digital images are very important in digital image coding, image analysis, and image recognition because they significantly influence image quality. Conventional procedures usually classify digital images with fixed values for the associated parameters and require post-processing. In contrast, the proposed algorithm, which exploits the randomness of ants, yields stable and enhanced images even for rapidly changing images. Owing to this stability and flexibility, it is also expected that images with various kinds of noise can be classified stably, and that error signals arising from processing drastically fast-moving images can be automatically compensated and minimized.

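A minimal pheromone-update sketch in the spirit of an ant colony system for pixel classification, not the authors' exact algorithm: ants walk the image preferring high-gradient pixels and deposit pheromone there, and thresholding the pheromone field classifies pixels as edge-like or region-like. All parameters (ant count, steps, evaporation rate) are illustrative assumptions.

    import numpy as np

    def acs_classify(img, n_ants=200, n_steps=300, rho=0.05, seed=0):
        """Classify pixels by thresholding an ant-deposited pheromone field."""
        rng = np.random.default_rng(seed)
        h, w = img.shape
        gy, gx = np.gradient(img.astype(float))
        attract = np.hypot(gx, gy) + 1e-6      # heuristic desirability
        pher = np.ones((h, w))
        pos = np.column_stack([rng.integers(0, h, n_ants),
                               rng.integers(0, w, n_ants)])
        moves = np.array([(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)])
        for _ in range(n_steps):
            for a in range(n_ants):
                nbrs = pos[a] + moves          # 8-neighborhood, clipped to image
                nbrs = nbrs[(nbrs[:, 0] >= 0) & (nbrs[:, 0] < h) &
                            (nbrs[:, 1] >= 0) & (nbrs[:, 1] < w)]
                p = pher[nbrs[:, 0], nbrs[:, 1]] * attract[nbrs[:, 0], nbrs[:, 1]]
                pos[a] = nbrs[rng.choice(len(nbrs), p=p / p.sum())]
                pher[pos[a][0], pos[a][1]] += attract[pos[a][0], pos[a][1]]
            pher *= 1.0 - rho                  # evaporation
        return pher > pher.mean()              # True = edge-like pixel
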
Off-Site Distortion and Color Compensation of Underwater Archaeological Images Photographed in the Very Turbid Yellow Sea

  • Jung, Young-Hwa;Kim, Gyuho;Yoo, Woo Sik
    • Journal of Conservation Science
    • /
    • v.38 no.1
    • /
    • pp.14-32
    • /
    • 2022
  • Underwater photography and image recording are essential for pre-excavation surveys and during excavation in underwater archaeology. Unlike photography on land, all underwater images suffer various quality degradations such as shape distortion, color shift, blur, low contrast, and high noise levels. The outcome often depends heavily on the photographic equipment and the photographer. Excavation schedules, weather conditions, and water conditions can put burdens on divers, and usable images are very limited relative to the effort. In underwater archaeological studies in very turbid water such as the Yellow Sea (between mainland China and the Korean peninsula), underwater photography is very challenging. In this study, off-site image distortion and color compensation techniques using image processing/analysis software are investigated as an alternative image quality enhancement method. As sample images, photographs taken during the excavation of the 800-year-old Taean Mado shipwrecks in the Yellow Sea in 2008-2010 were mainly used. Significant improvement in distortion and color compensation of archived images was obtained by simple post-processing using image processing/analysis software (PicMan) customized for the given view ports, lenses, and cameras, with and without optical axis offsets. Post-processing is found to be very effective for distortion and color compensation of both recent and archived images from various photographic equipment models and configurations. The merits and demerits of in-situ distortion- and color-compensated photography with sophisticated equipment versus conventional photography, which requires post-processing, are compared.

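PicMan's calibrated per-lens corrections are proprietary, so the sketch below shows generic stand-ins for the two compensations discussed: gray-world color balancing for the turbid-water color shift and a simple radial model for view-port distortion. The distortion coefficient k1 is an illustrative assumption.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def gray_world(img_rgb):
        """Scale each channel so its mean matches the global mean (color balance)."""
        means = img_rgb.reshape(-1, 3).mean(axis=0)
        return np.clip(img_rgb * (means.mean() / means), 0, 255).astype(np.uint8)

    def undistort_radial(channel, k1=-0.15):
        """Inverse-map pixels through r' = r * (1 + k1 * r^2), r normalized."""
        h, w = channel.shape
        y, x = np.mgrid[0:h, 0:w].astype(float)
        cy, cx = (h - 1) / 2, (w - 1) / 2
        ny, nx = (y - cy) / cy, (x - cx) / cx
        factor = 1 + k1 * (nx**2 + ny**2)
        return map_coordinates(channel, [cy + ny * factor * cy,
                                         cx + nx * factor * cx], order=1)
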
Comparison of Pre-processed Brain Tumor MR Images Using Deep Learning Detection Algorithms

  • Kwon, Hee Jae;Lee, Gi Pyo;Kim, Young Jae;Kim, Kwang Gi
    • Journal of Multimedia Information System
    • /
    • v.8 no.2
    • /
    • pp.79-84
    • /
    • 2021
  • Detecting brain tumors of different sizes is a challenging task. This study aimed to identify brain tumors using detection algorithms. Most studies in this area use segmentation; however, we utilized detection owing to its advantages. Data were obtained from 64 patients and 11,200 MR images. The deep learning model used was RetinaNet, based on ResNet152. The model learned three types of pre-processed images: normal, general histogram equalization, and contrast-limited adaptive histogram equalization (CLAHE). During pre-processing, we converted the MR images from DICOM to JPG format and adjusted the window level and width. The three image types were compared to determine which pre-processing technique performs best with the deep learning algorithm; CLAHE showed the best performance, with a sensitivity of 81.79%. The RetinaNet model for detecting brain tumors through deep learning demonstrated satisfactory performance in finding lesions. In the future, we plan to develop a new model that improves detection performance using well-processed data. This study lays the groundwork for future detection technologies that can help doctors find lesions more easily in clinical tasks.

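A sketch of the three pre-processing variants compared in the paper, applied after window level/width conversion of the DICOM pixel data to 8-bit. The window settings and CLAHE parameters below are illustrative assumptions, not the study's values.

    import numpy as np
    import cv2

    def window_to_uint8(pixel_array, level, width):
        """Map a DICOM pixel array to 8-bit using window level/width."""
        lo = level - width / 2.0
        return (np.clip((pixel_array - lo) / width, 0, 1) * 255).astype(np.uint8)

    def preprocess_variants(img8):
        """Return the three inputs compared: normal, global HE, and CLAHE."""
        hist_eq = cv2.equalizeHist(img8)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img8)
        return img8, hist_eq, clahe   # CLAHE performed best in the paper
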
A FUZZY NEURAL NETWORK-BASED DECISION OF ROAD IMAGE QUALITY FOR THE EXTRACTION OF LANE-RELATED INFORMATION

  • YI U. K.;LEE J. W.;BAEK K. R.
    • International Journal of Automotive Technology
    • /
    • v.6 no.1
    • /
    • pp.53-63
    • /
    • 2005
  • We propose a fuzzy neural network (FNN) capable of judging the quality of a road image prior to extracting lane-related information. The accuracy of lane-related information obtained by image processing depends on the quality of the raw images, which can be classified as good or bad according to how visible the lane marks are. Enhancing the accuracy of the information by an image-processing algorithm alone is limited by noise corruption, which makes image processing difficult. The FNN, on the other hand, decides whether road images are good or bad with respect to the degree of noise corruption. A cumulative distribution function (CDF) of the edge histogram is used to extract input parameters for the FNN, based on the fact that the shape of the CDF is strongly correlated with road image quality; a suitability analysis confirms this strong correlation between the parameters and image quality. The input pattern vector of the FNN consists of nine parameters: eight from the CDF and one from the intensity distribution of the raw images. Experimental results show that the proposed FNN system is quite successful: in simulations with real images taken under various lighting and weather conditions, it made correct decisions about 99% of the time.
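
A sketch of the input extraction described above, assuming the eight CDF parameters are uniform samples of the edge-histogram CDF and the ninth is a simple intensity statistic; the paper defines its own nine parameters.

    import numpy as np
    from scipy.ndimage import sobel

    def fnn_input_vector(road_img, n_cdf_points=8):
        """Nine-dimensional feature vector: 8 CDF samples + 1 intensity statistic."""
        f = road_img.astype(float)
        edges = np.hypot(sobel(f, axis=0), sobel(f, axis=1))
        hist, _ = np.histogram(edges, bins=256)
        cdf = np.cumsum(hist) / hist.sum()
        idx = np.linspace(0, 255, n_cdf_points).astype(int)
        return np.concatenate([cdf[idx], [f.std() / 255.0]])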