• Title/Abstract/Keywords: Images, processing

Search results: 4,224 items (processing time: 0.032 s)

SAR 디스플레이 영상을 위한 무손실 압축 (LOSSLESS DATA COMPRESSION ON SAR DISPLAY IMAGES)

  • Lee, Tae-hee;Song, Woo-jin;Do, Dae-won;Kwon, Jun-chan;Yoon, Byung-woo
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2001년도 제14회 신호처리 합동 학술대회 논문집 / pp.117-120 / 2001
  • Synthetic aperture radar (SAR) is a promising active remote sensing technique for obtaining large-scale terrain information about the earth in all weather conditions. SAR is useful in many applications, including terrain mapping and geographic information systems (GIS), which use SAR display images. These applications usually require enormous data storage because they deal with wide terrain images at high resolution, so compression is a useful approach to handling SAR display images with limited storage. Because some data loss is unavoidable in converting a complex SAR image to a display image, applications that need high-resolution images cannot tolerate further loss during compression. Therefore, lossless compression is appropriate for these applications. In this paper, we propose a novel lossless compression technique for SAR display images using a one-step predictor and block arithmetic coding.
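The predictor stage of such a scheme is easy to illustrate. Below is a minimal sketch (not the paper's implementation) of a one-step horizontal predictor in Python/NumPy: residuals cluster near zero, which is what makes the subsequent entropy coder effective; the block arithmetic coder itself is omitted.

```python
import numpy as np

def one_step_residuals(image):
    """Predict each pixel from its left neighbor and return residuals.

    The first column is kept verbatim; every other pixel is replaced by
    (pixel - left neighbor), which concentrates values near zero and
    makes them cheaper to entropy-code.
    """
    img = image.astype(np.int16)       # widen so differences do not wrap
    residuals = img.copy()
    residuals[:, 1:] = img[:, 1:] - img[:, :-1]
    return residuals

def reconstruct(residuals):
    """Invert the predictor losslessly via a cumulative sum along rows."""
    return np.cumsum(residuals, axis=1).astype(np.uint8)

img = np.array([[10, 12, 11, 11],
                [200, 201, 199, 198]], dtype=np.uint8)
res = one_step_residuals(img)
assert np.array_equal(reconstruct(res), img)   # round trip is exact
```

The round-trip assertion is the point of "lossless": the decoder recovers the display image bit-for-bit.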

Image Processing-based Validation of Unrecognizable Numbers in Severely Distorted License Plate Images

  • Jang, Sangsik;Yoon, Inhye;Kim, Dongmin;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / Vol. 1, No. 1 / pp.17-26 / 2012
  • This paper presents an image processing-based validation method for unrecognizable numbers in severely distorted license plate images that have been degraded by various factors, including low resolution, low light levels, geometric distortion, and periodic noise. Existing vehicle license plate recognition (LPR) methods assume that most of the image degradation factors have been removed before the printed numbers and letters are recognized; if this is not the case, conventional LPR becomes impossible. The proposed method adopts a novel approach in which a set of reference number images is intentionally degraded using the same factors estimated from the input image. After a series of image processing steps, including geometric transformation, super-resolution, and filtering, a cross-correlation comparison between the intentionally degraded references and the input image can successfully identify the visually unrecognizable numbers. The proposed method makes it possible to validate numbers in a license plate image taken under low light-level conditions. In experiments on an extended set of test images that are unrecognizable to human vision, the proposed method achieved a recognition rate of over 95%, whereas most existing LPR methods fail due to the severe distortion.
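The core matching step can be sketched as follows. This is a toy illustration, not the paper's pipeline: the 3x3 "digit" patterns and the `validate_digit` helper are invented for the example, and the degradation, geometric transformation, and super-resolution stages are omitted; only the zero-mean normalized cross-correlation comparison is shown.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between equal-size patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def validate_digit(patch, degraded_refs):
    """Return the reference label whose degraded image best matches the patch."""
    scores = {label: ncc(patch, ref) for label, ref in degraded_refs.items()}
    return max(scores, key=scores.get), scores

# toy "degraded reference" patterns (hypothetical 3x3 digits)
refs = {
    "1": np.array([[0, 9, 0], [0, 9, 0], [0, 9, 0]]),
    "7": np.array([[9, 9, 9], [0, 0, 9], [0, 9, 0]]),
}
noisy_one = np.array([[1, 8, 0], [0, 9, 1], [1, 8, 0]])  # a noisy "1"
label, _ = validate_digit(noisy_one, refs)
assert label == "1"
```

Because both the reference and the input carry the *same* estimated degradation, the correlation peak survives distortions that would defeat a clean-template matcher.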

협 양자화 제약 조건을 이용한 부호화된 영상의 후처리 (On Post-Processing of Coded Images by Using the Narrow Quantization Constraint)

  • 박섭형;김동식;이상훈
    • 한국통신학회논문지 / Vol. 22, No. 4 / pp.648-661 / 1997
  • This paper presents a new method for post-processing of coded images based on low-pass filtering followed by projection onto the NQCS (narrow quantization constraint set). We also investigate how the proposed method works on JPEG-coded real images. The starting point of QCS-based post-processing techniques is the centroid of the QCS, to which the original image belongs. Low-pass filtering followed by projection onto the QCS places the images on the boundary of the QCS; however, the original image is likely to lie inside the QCS. Hence, projection onto the NQCS gives a lower MSE (mean square error) than projection onto the QCS. Simulation results show that setting the narrowing coefficient of the NQCS to 0.2 yields the best performance in most cases. Even if the JPEG-coded image is low-pass filtered and projected onto the NQCS repeatedly, there is no guarantee that the resulting image has a lower MSE or moves closer to the original image. Thus, a single iteration is sufficient for post-processing of coded images. This is interesting because the main drawback of iterative post-processing techniques is their heavy computational burden. The single-iteration method reduces the computational burden and provides an easy way to implement a real-time VLSI post-processor.
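A minimal sketch of the projection step, under one plausible reading of the narrowing coefficient (the quantization interval around each dequantized coefficient is shrunk by the factor alpha; the function and variable names are invented):

```python
import numpy as np

def project_onto_nqcs(coeffs, quantized, q_step, alpha=0.2):
    """Clip filtered DCT coefficients back into the narrowed constraint set.

    Each dequantized coefficient c = quantized * q_step is known to lie in
    [c - q/2, c + q/2] (the QCS).  The NQCS shrinks that interval by the
    narrowing coefficient alpha, so low-pass-filtered values are clipped to
    [c - alpha*q/2, c + alpha*q/2].
    """
    center = quantized * q_step
    half = alpha * q_step / 2.0
    return np.clip(coeffs, center - half, center + half)

# one quantized coefficient: level 3, step 10 -> dequantized value 30
filtered = np.array([27.0, 30.5, 34.0])    # values after low-pass filtering
projected = project_onto_nqcs(filtered, quantized=3, q_step=10, alpha=0.2)
# the narrowed interval is [29, 31]: 27 -> 29, 30.5 unchanged, 34 -> 31
assert np.allclose(projected, [29.0, 30.5, 31.0])
```

With alpha = 1 this reduces to ordinary QCS projection; shrinking the interval keeps the result near the centroid, which is why the NQCS projection tends to land closer to the original image.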

Feature Based Multi-Resolution Registration of Blurred Images for Image Mosaic

  • Fang, Xianyong;Luo, Bin;He, Biao;Wu, Hao
    • International Journal of CAD/CAM / Vol. 9, No. 1 / pp.37-46 / 2010
  • Existing methods for registering blurred images are efficient for artificially blurred images or planar registration, but not suitable for the naturally blurred images that arise in real image mosaicking. In this paper, we address this problem and propose a method for distortion-free stitching of naturally blurred images for image mosaic. It combines a multi-resolution scheme with robust feature-based inter-layer registration. In each layer, the Harris corner detector is used to detect features, and RANSAC is used to find reliable matches, which serve both for further calibration and to provide an initial homography as the initial motion for the next layer. Simplex and subspace trust-region methods are then applied in sequence to estimate a stable focal length and rotation matrix from the feature matches. To stitch multiple images together, an iterative registration strategy is also adopted to estimate the focal length of each image. Experimental results demonstrate the performance of the proposed method.
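The feature-detection stage can be sketched with a plain NumPy Harris response (a textbook formulation, not the paper's code; the RANSAC matching, simplex, and trust-region stages are omitted, and `box3` is an invented helper for 3x3 window averaging):

```python
import numpy as np

def box3(a):
    """Average over a 3x3 window via edge-padded shifted sums."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 of the smoothed
    structure tensor M; R is large only where gradients vary in two
    directions (a corner), and negative along one-directional edges."""
    gy, gx = np.gradient(img.astype(float))
    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# a bright square whose corner sits at (6, 6)
img = np.zeros((12, 12))
img[6:, 6:] = 1.0
r = harris_response(img)
assert np.unravel_index(np.argmax(r), r.shape) == (6, 6)  # peak at the corner
assert r[5, 9] < 0   # straight edge: negative response
```

In practice the detected corners would be matched across layers and fed to a RANSAC homography estimator (e.g. OpenCV's `cv2.findHomography`) as the abstract describes.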

Statistical Image Processing using Java on the Web

  • Lim, Dong Hoon;Park, Eun Hee
    • Communications for Statistical Applications and Methods / Vol. 9, No. 2 / pp.355-366 / 2002
  • The web is one of the most plentiful sources of images, and it has an immediate need for image processing technology in Java. This paper provides a practical introduction to statistical image processing using Java on the web. It describes how images are represented in Java and covers four image processing operations based on basic statistical methods: point processing, spatial filtering, edge detection, and image segmentation.
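Two of the four operations are easy to sketch. The paper works in Java; for brevity the sketch below uses Python/NumPy, but the operations translate directly:

```python
import numpy as np

def negate(img):
    """Point processing: each output pixel depends on one input pixel only."""
    return 255 - img

def mean_filter3(img):
    """Spatial filtering: each output pixel is the mean of its 3x3
    neighborhood (edge-padded), a basic statistical smoothing operation."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    acc = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return acc / 9.0

img = np.array([[0, 90, 90],
                [0, 90, 90],
                [0, 90, 90]], dtype=np.uint8)
assert negate(img)[0, 0] == 255
# the centre pixel averages a window holding three 0s and six 90s -> 60
assert np.isclose(mean_filter3(img)[1, 1], 60.0)
```

Edge detection and segmentation follow the same pattern: a neighborhood statistic (gradient, local mean/variance) computed per pixel, then a decision rule.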

개미 군락 시스템을 이용한 영역 분류 알고리즘 (A Classification Algorithm Using Ant Colony System)

  • 김인겸;윤민영
    • 정보처리학회논문지B / Vol. 15B, No. 3 / pp.245-252 / 2008
  • In this study, we propose a method for classifying regions of a digital image using an ant colony system. The Ant Colony System (ACS) has recently been applied not only to combinatorial optimization problems but also to image processing tasks such as pattern recognition, image extraction, and edge detection. In digital image processing, region classification is known to be one of the most important steps in analysis and recognition, and well-classified regions lead to better results in applications such as digital image coding, image analysis, and image recognition. Conventional region classification is driven by fixed parameters, so it requires post-processing, and its results vary with the characteristics of the image. By exploiting the randomness of ants, however, the proposed method still obtains stable results even when the image changes to some degree. We expect this stability and flexibility to remain robust against the various kinds of noise that can occur while an image is being captured, and to allow compensation for blur caused by rapid motion in video sequences.
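A heavily simplified, deterministic sketch of the pheromone mechanism (real ACS uses probabilistic state transitions and separate local/global pheromone update rules; every parameter and helper name here is invented for illustration):

```python
import numpy as np

def gradient_magnitude(img):
    """Heuristic information: region borders have high gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def ant_colony_pheromone(img, n_steps=5, rho=0.1, tau0=0.01):
    """Greedy ants climb toward high tau*eta neighbors and deposit
    pheromone proportional to the heuristic, so pheromone accumulates
    along region boundaries while evaporation fades it elsewhere."""
    eta = gradient_magnitude(img)
    tau = np.full(img.shape, tau0)
    h, w = img.shape
    for _ in range(n_steps):
        deposit = np.zeros_like(tau)
        for r in range(h):               # one ant starts on every pixel
            for c in range(w):
                y, x = r, c
                for _ in range(3):       # each ant takes a few greedy moves
                    best, by, bx = -1.0, y, x
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and (dy or dx):
                                score = tau[ny, nx] * eta[ny, nx]
                                if score > best:
                                    best, by, bx = score, ny, nx
                    y, x = by, bx
                    deposit[y, x] += eta[y, x]   # lay pheromone on the path
        tau = (1.0 - rho) * tau + deposit        # evaporation + reinforcement
    return tau

img = np.zeros((8, 8))
img[:, 4:] = 100.0                      # vertical step edge at column 4
tau = ant_colony_pheromone(img)
assert tau[:, 3:5].mean() > tau[:, :2].mean()   # pheromone marks the border
```

Thresholding the final pheromone map would separate boundary pixels from interior ones, which is the region-classification idea the abstract describes.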

Off-Site Distortion and Color Compensation of Underwater Archaeological Images Photographed in the Very Turbid Yellow Sea

  • Jung, Young-Hwa;Kim, Gyuho;Yoo, Woo Sik
    • 보존과학회지 / Vol. 38, No. 1 / pp.14-32 / 2022
  • Underwater photography and image recording are essential for pre-excavation surveys and during excavation in underwater archaeology. Unlike photography on land, all underwater images suffer various quality degradations such as shape distortion, color shift, blur, low contrast, and high noise levels. The outcome often depends heavily on the photographic equipment and the photographer. Excavation schedules, weather conditions, and water conditions can place burdens on divers, and the number of usable images is very limited relative to the effort. In underwater archaeological studies in very turbid water, such as the Yellow Sea (between mainland China and the Korean peninsula), underwater photography is very challenging. In this study, off-site image distortion and color compensation using image processing/analysis software is investigated as an alternative image quality enhancement method. As sample images, photographs taken during the 2008-2010 excavation of the 800-year-old Taean Mado shipwrecks in the Yellow Sea were mainly used. Significant improvements in distortion and color compensation of archived images were obtained by simple post-processing with image processing/analysis software (PicMan) customized for the given view ports, lenses, and cameras, with and without optical-axis offsets. Post-processing is found to be very effective for distortion and color compensation of both recent and archived images from various photographic equipment models and configurations. The merits and demerits of in-situ distortion- and color-compensated photography with sophisticated equipment versus conventional photography, which requires post-processing, are compared.
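PicMan is proprietary, so its algorithms are not shown here; a generic color compensation step in the same spirit is gray-world white balancing, sketched below (purely illustrative, not the paper's method):

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world color compensation: scale each channel so its mean
    matches the global mean, removing a uniform color cast such as the
    green-brown tint of turbid water."""
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means   # per-channel gain
    return np.clip(img * gains, 0, 255)

# a toy image with a strong blue-green cast (R, G, B means 40/120/140)
cast = np.full((4, 4, 3), [40.0, 120.0, 140.0])
balanced = gray_world_balance(cast)
means = balanced.reshape(-1, 3).mean(axis=0)
assert np.allclose(means, means[0])   # channel means are equalized
```

Geometric (view-port) distortion correction would additionally require a lens/port distortion model fitted to reference targets, which is the camera-specific customization the abstract refers to.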

Comparison of Pre-processed Brain Tumor MR Images Using Deep Learning Detection Algorithms

  • Kwon, Hee Jae;Lee, Gi Pyo;Kim, Young Jae;Kim, Kwang Gi
    • Journal of Multimedia Information System / Vol. 8, No. 2 / pp.79-84 / 2021
  • Detecting brain tumors of different sizes is a challenging task. This study aimed to identify brain tumors using detection algorithms. Most studies in this area use segmentation; however, we utilized detection owing to its advantages. Data were obtained from 64 patients and 11,200 MR images. The deep learning model used was RetinaNet, based on ResNet152. The model was trained on three types of pre-processed images: normal, general histogram equalization, and contrast-limited adaptive histogram equalization (CLAHE). The three types were compared to determine which pre-processing technique performs best with the deep learning algorithms. During pre-processing, we converted the MR images from DICOM to JPG format and regulated the window level and width. Among the pre-processed images, CLAHE showed the best performance, with a sensitivity of 81.79%. The RetinaNet model for detecting brain tumors through deep learning demonstrated satisfactory performance in finding lesions. In the future, we plan to develop a new model to improve detection performance using well-processed data. This study lays the groundwork for future detection technologies that can help doctors find lesions more easily in clinical tasks.
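Global histogram equalization, the second of the three pre-processing variants, can be sketched as follows; CLAHE applies the same mapping per tile with a clipped histogram, which is why it avoids over-amplifying noise (this is a generic textbook sketch, not the study's pipeline):

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization: map each gray level through the
    normalized CDF so intensities spread over the full [0, 255] range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first non-empty level
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]                      # apply the lookup table

# a low-contrast image occupying only levels 100-103
img = np.array([[100, 101], [102, 103]], dtype=np.uint8)
out = equalize_histogram(img)
assert out.min() == 0 and out.max() == 255   # full dynamic range restored
```

For CLAHE itself one would normally call a library routine such as OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)` rather than reimplement the tiled, clipped variant by hand.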

A FUZZY NEURAL NETWORK-BASED DECISION OF ROAD IMAGE QUALITY FOR THE EXTRACTION OF LANE-RELATED INFORMATION

  • YI U. K.;LEE J. W.;BAEK K. R.
    • International Journal of Automotive Technology / Vol. 6, No. 1 / pp.53-63 / 2005
  • We propose a fuzzy neural network (FNN) capable of judging the quality of a road image prior to extracting lane-related information. The accuracy of lane-related information obtained by image processing depends on the quality of the raw images, which can be classified as good or bad according to how visible the lane marks are. Enhancing the accuracy of this information by an image-processing algorithm alone is limited because noise corruption makes image processing difficult. The FNN, on the other hand, decides whether road images are good or bad with respect to their degree of noise corruption. A cumulative distribution function (CDF) of the edge histogram is used to extract input parameters for the FNN, based on the fact that the shape of the CDF is strongly correlated with road image quality; a suitability analysis confirms this correlation between the parameters and the image quality. The input pattern vector of the FNN consists of nine parameters, eight from the CDF and one from the intensity distribution of the raw image. Experimental results showed that the proposed FNN system was quite successful: in simulations with real images taken under various lighting and weather conditions, it made correct decisions about 99% of the time.
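The CDF-based feature extraction can be sketched as follows (a simplified reading of the abstract: the bin count, sampling positions, and helper name are invented, and the ninth intensity-based parameter is omitted):

```python
import numpy as np

def edge_cdf_features(img, n_params=8, bins=64):
    """Build the CDF of the edge-magnitude histogram and sample it at
    evenly spaced points, yielding a fixed-length feature vector whose
    shape reflects how much strong-edge content (visible lane marks)
    versus noise the image contains."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                       # edge magnitude per pixel
    hist, _ = np.histogram(mag, bins=bins)
    cdf = hist.cumsum() / hist.sum()             # normalized CDF in [0, 1]
    idx = np.linspace(0, bins - 1, n_params).astype(int)
    return cdf[idx]

rng = np.random.default_rng(0)
road = rng.integers(0, 256, size=(32, 32)).astype(float)
feats = edge_cdf_features(road)
assert feats.shape == (8,)
assert np.all(np.diff(feats) >= 0) and np.isclose(feats[-1], 1.0)
```

Such a vector (plus an intensity-distribution parameter) would then be fed to the FNN, which outputs the good/bad quality decision.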