• Title/Summary/Keyword: 에지 영상 (edge image)


A Car License Plate Recognition Using Morphological Characteristic, Difference Operator and ART2 Algorithm (형태학적 특징 및 차 연산과 ART2 알고리즘을 이용한 차량 번호판 인식)

  • Kang, Moo-Jin; Kim, Jae-Kun; Kim, Kwang-Baek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.431-435 / 2008
  • Since the introduction of the new vehicle license plates in November 2006, new and old plates have been in mixed use. Accordingly, recognition systems tailored to the characteristics of license plates are required in a variety of settings, such as speed and traffic-signal enforcement, unmanned parking management, apprehension of criminal and fleeing vehicles, and automatic toll collection to relieve the congestion caused by toll payment at highway tollgates. To address this, this paper proposes a vehicle license plate recognition method using morphological characteristics, a difference operator, and the ART2 algorithm. From a plate image acquired by an unmanned camera, edges are extracted with the difference operator and the image is binarized block by block. Applying the morphological characteristics of new and old plates to an 8-directional contour tracing algorithm on the binarized image, noise regions are removed and the plate region is extracted. Mean binarization and max-min binarization are then applied to the extracted plate region, noise is removed by considering the morphological characteristics of the plate's individual sub-regions, and individual characters are extracted with a labeling algorithm and then combined. The resulting individual character and digit codes are trained and recognized with the ART2 algorithm. To evaluate the proposed extraction and recognition method, experiments on 100 green-plate and 100 white-plate images confirmed that it is effective.

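The difference-operator edge extraction and block binarization described in the abstract above can be sketched roughly as follows. This is not the authors' implementation: the horizontal difference operator, the 8-pixel block size, the per-block mean threshold, and the file name plate.jpg are all assumptions for illustration, using OpenCV and NumPy.

```python
# Minimal sketch of difference-operator edge extraction followed by
# block-wise binarization (hypothetical parameters, not the paper's code).
import cv2
import numpy as np

def difference_edges(gray: np.ndarray) -> np.ndarray:
    """Absolute horizontal difference between neighboring pixels."""
    diff = np.abs(np.diff(gray.astype(np.int16), axis=1))
    return np.pad(diff, ((0, 0), (0, 1))).astype(np.uint8)

def block_binarize(edges: np.ndarray, block: int = 8) -> np.ndarray:
    """Threshold each block against its own mean intensity."""
    out = np.zeros_like(edges)
    h, w = edges.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            roi = edges[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (roi > roi.mean()) * 255
    return out

if __name__ == "__main__":
    gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    binary = block_binarize(difference_edges(gray))
    cv2.imwrite("plate_blocks.png", binary)
```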

High Accurate Cup Positioning System for a Coffee Printer (커피 프린터를 위한 커피 잔 정밀 측위 시스템)

  • Kim, Heeseung; Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.10 / pp.1950-1956 / 2017
  • In the food-printing field, precise positioning of the printing object is very important. In this paper, we propose a cup positioning method for a latte-art printer based on image processing. A camera sensor is installed on the upper side of the printer, and the image obtained from it is projected and converted into a top-view image. The edge lines of the image are then detected, and the center coordinates and radius of the cup are found through a circular Hough transform. The performance evaluation shows that the image processing time is 0.1~0.125 s and the cup detection rate is 92.26%. This means that a cup is detected almost perfectly without affecting the overall latte-art printing time. The center coordinates and radius values of the detected cups show very small errors of less than 1.5 mm on average. Therefore, the printing position error problem appears to be solved.
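
A minimal sketch of the circular-Hough detection step, assuming OpenCV's HoughCircles with illustrative parameter values; the paper's top-view projection and the pixel-to-millimeter calibration are omitted.

```python
# Sketch: detect the cup's center and radius in a top-view image with a
# circular Hough transform (parameter values are illustrative only).
import cv2
import numpy as np

top_view = cv2.imread("cup_top_view.png")          # hypothetical top-view image
gray = cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                     # suppress noise before edge voting

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
    param1=100,   # Canny high threshold used internally
    param2=40,    # accumulator threshold: lower -> more (possibly false) circles
    minRadius=80, maxRadius=200)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle
    print(f"cup center=({x}, {y}) px, radius={r} px")
    # A camera calibration (px -> mm) would convert these to physical coordinates.
```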

Evaluation of Edge Detector's Smoothness using Fuzzy Ambiguity (퍼지 애매성을 이용한 에지검출기의 평활화 정도평가)

  • Kim, Tae-Yong; Han, Joon-Hee
    • Journal of KIISE: Software and Applications / v.28 no.9 / pp.649-661 / 2001
  • While conventional edge detection can be considered the problem of determining the existence of edges at certain locations, fuzzy edge modeling can be considered the problem of determining the membership values of edges. Thus, if the location of an edge is unclear, or if the intensity function differs from the ideal edge model, the degree of edgeness at that location is represented as a fuzzy membership value. Using this concept of fuzzy edgeness, an automatic smoothing-parameter evaluation and selection method for a conventional edge detector is proposed. The evaluation method uses the fuzzy edge modeling and can analyze the effect of the smoothing parameter to determine an optimal parameter for a given image. Using the selected parameter, the least ambiguous edges a detection method can produce for an image are obtained. The effectiveness of the parameter evaluation method is analyzed and demonstrated on a set of synthetic and real images.

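The smoothing-parameter selection idea can be sketched as follows, under assumptions that differ from the paper: Gaussian blur stands in for the detector's smoothing parameter, a normalized gradient magnitude stands in for the fuzzy edge membership, and a linear index of fuzziness stands in for the ambiguity measure.

```python
# Sketch: pick the Gaussian smoothing scale whose edge-membership map is
# least ambiguous (simplified stand-in for the paper's fuzzy edgeness measure).
import cv2
import numpy as np

def edge_membership(gray: np.ndarray, sigma: float) -> np.ndarray:
    smoothed = cv2.GaussianBlur(gray, (0, 0), sigma)
    gx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-9)            # membership values in [0, 1]

def linear_fuzziness(mu: np.ndarray) -> float:
    # 0 when every membership is 0 or 1, maximal when everything is 0.5
    return 2.0 * np.minimum(mu, 1.0 - mu).mean()

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
sigmas = [0.5, 1.0, 1.5, 2.0, 3.0]
best = min(sigmas, key=lambda s: linear_fuzziness(edge_membership(gray, s)))
print("least ambiguous smoothing sigma:", best)
```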

The Improved Deblocking Filter for Low-bit Rate H.264/AVC Video (저해상도 H.264/AVC 비디오를 위한 개선된 디블럭킹 필터)

  • Kwon, Dong-Jin; Ryu, Sung-Pil; Kwak, Nae-Joung; Ahn, Jae-Hyeong
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.2 / pp.284-289 / 2008
  • H.264/AVC, among the moving picture compression standards, is the standard for high compression rates and reliable video transmission. Because it compresses video with a block-based DCT, it generates blocking artifacts, and it therefore includes a de-blocking filter to reduce them. However, this filter can over-smooth the video and degrade its quality. In this paper, we propose an improved de-blocking filter to address this drawback. The proposed filter redetermines the block boundary strength and applies corner filtering to eliminate artifacts in the low-frequency region. To evaluate the performance, we apply the proposed de-blocking filter and the existing method to various videos and assess the image quality subjectively and objectively. The simulation results show that the proposed method preserves the edges of the video, reduces blocking artifacts, and achieves a higher PSNR than the existing method.
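
A toy illustration of the boundary-strength idea on vertical 8x8 block boundaries; the actual H.264/AVC de-blocking filter and the proposed corner filtering are far more elaborate, and the threshold alpha below is an arbitrary stand-in.

```python
# Sketch: decide a boundary strength from the intensity step across a block
# edge and smooth only weak (artifact-like) boundaries, leaving real edges alone.
import numpy as np

def filter_vertical_boundary(img: np.ndarray, x: int, alpha: int = 12) -> None:
    """Filter the vertical block boundary between columns x-1 and x in place."""
    p0, q0 = img[:, x - 1].astype(np.int16), img[:, x].astype(np.int16)
    step = np.abs(p0 - q0)
    weak = step < alpha            # small step: likely a blocking artifact, not an edge
    avg = (p0 + q0) // 2
    img[:, x - 1] = np.where(weak, (p0 + avg) // 2, p0).astype(img.dtype)
    img[:, x]     = np.where(weak, (q0 + avg) // 2, q0).astype(img.dtype)

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in decoded frame
for bx in range(8, frame.shape[1], 8):                     # every 8x8 block boundary
    filter_vertical_boundary(frame, bx)
```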

Driving Assist System using Semantic Segmentation based on Deep Learning (딥러닝 기반의 의미론적 영상 분할을 이용한 주행 보조 시스템)

  • Kim, Jung-Hwan; Lee, Tae-Min; Lim, Joonhong
    • Journal of IKEEE / v.24 no.1 / pp.147-153 / 2020
  • Conventional lane detection algorithms suffer from a lowered detection rate in road environments with large changes in curvature and illumination. The probabilistic Hough transform method has a low lane detection rate since it exploits only edges and a restricted range of angles. The sliding-window method, on the other hand, can detect a curved lane because the lane is detected by dividing the image into windows, but its detection rate is affected by road slopes because it uses an affine transformation. In order to detect lanes robustly and avoid obstacles, we propose a driving assist system using semantic segmentation based on deep learning. The segmentation architecture is SegNet based on VGG-16. The semantic segmentation output is used to calculate the safe space and predict collisions, so that the vehicle is controlled with adaptive MPC to avoid objects and keep its lane. Simulation results with CARLA show that the proposed algorithm detects lanes robustly and avoids unknown obstacles in front of the vehicle.
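
As a rough sketch of how a per-pixel segmentation map can feed the safety-space calculation, the snippet below measures drivable distance per image column. The class indices, image size, and obstacle are invented; SegNet inference and the adaptive-MPC controller are not shown.

```python
# Sketch: from a semantic segmentation map, compute per-column free distance
# ahead of the vehicle (hypothetical class ids; model inference omitted).
import numpy as np

ROAD, OBSTACLE = 1, 2                      # assumed class ids

def free_space_per_column(seg: np.ndarray) -> np.ndarray:
    """For each image column, count drivable road pixels upward from the
    bottom row until a non-road pixel is hit."""
    h, w = seg.shape
    free = np.zeros(w, dtype=int)
    for col in range(w):
        column = seg[::-1, col]            # bottom of image = closest to the car
        blocked = np.flatnonzero(column != ROAD)
        free[col] = blocked[0] if blocked.size else h
    return free

seg_map = np.full((240, 320), ROAD, dtype=np.uint8)   # toy segmentation output
seg_map[100:140, 150:200] = OBSTACLE                  # a fake obstacle ahead
print("min free distance (px):", free_space_per_column(seg_map).min())
```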

Scleral Diagnostic System Implementation with Color and Blood Vessel Sign Pattern Code Generations (컬러와 혈관징후패턴 코드 생성에 의한 공막진단시스템 구현)

  • Ryu, Kwang Ryol
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.12 / pp.3029-3034 / 2014
  • This paper describes the implementation of a scleral diagnostic system for human eyes based on scleral color code and vessel sign pattern code generation. The system is built on a high-performance DSP image signal processor, programmable gain control (PGC) for preprocessing, and RISC-based SD frame storage. The RGB image signals are optimized by the PGC, and the edge image is detected from the converted gray image. The processing algorithms generate a scleral color code and a scleral vessel sign pattern code for discrimination and matching. The scleral symptomatic color code is generated from tolerance-matched YCbCr values in a memory map, and the vessel sign pattern code is created by digitizing 24 clock sectors and 13 ring zones, with overlay matching and tolerances. The experimental results show that the system runs in 40 ms, and the color and pattern diagnostic errors are around 20% and 24% on average, respectively. The system and technique enable scleral diagnosis with subdivided patterns and a patient database.
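
The 24-clock-by-13-ring digitization can be sketched as below, assuming a binary vessel-edge image centered on the sclera; the zone geometry, fill threshold, and input are illustrative rather than the paper's actual code generation.

```python
# Sketch: digitize a binary vessel-edge image into a 24 x 13 sign-pattern code
# (24 clock sectors x 13 ring zones), a simplified stand-in for the paper's scheme.
import numpy as np

def pattern_code(vessel: np.ndarray, sectors: int = 24, rings: int = 13,
                 fill_thresh: float = 0.05) -> np.ndarray:
    h, w = vessel.shape
    cy, cx = h / 2.0, w / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx) % (2 * np.pi)
    ring_idx = np.minimum((r / (r.max() / rings)).astype(int), rings - 1)
    sector_idx = np.minimum((theta / (2 * np.pi / sectors)).astype(int), sectors - 1)
    code = np.zeros((sectors, rings), dtype=np.uint8)
    for s in range(sectors):
        for g in range(rings):
            zone = vessel[(sector_idx == s) & (ring_idx == g)]
            code[s, g] = 1 if zone.size and zone.mean() > fill_thresh else 0
    return code

vessel_img = (np.random.rand(200, 200) > 0.97).astype(np.uint8)  # toy edge map
print(pattern_code(vessel_img).sum(), "of 312 zones marked")
```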

Colormap Construction and Combination Method between Colormaps (컬러맵의 생성과 컬러맵간의 결합 방법)

  • Kim, Jin-Hong; Jo, Cheol-Hyo; Kim, Du-Yeong
    • The Transactions of the Korea Information Processing Society / v.1 no.4 / pp.541-550 / 1994
  • A true-color image requires a large amount of data for transmission and storage. We therefore want to describe a color image with a small amount of data without noticeable visual degradation. This paper presents a 256-color colormap construction method in RGB and YIQ/YUV space, along with a common colormap expression method for merging colormaps so that original color images with different colormaps can be displayed on a monitor at the same time. The results processed in RGB and YIQ/YUV space were compared by PSNR, standard deviation, and an edge preservation rate based on the Sobel operator. The processing time is 3 seconds for colormap construction and 2 seconds for merging between colormaps. In PSNR, RGB space is higher than YIQ and YUV space by 0.15 and 0.34 on average, and its standard deviation is lower by 0.15 and 0.41 on average. In terms of data compression, however, the YIQ/YUV spaces have about a 1/3 compression advantage over RGB space because only 4 of the 8 bits of each color component are used.

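A rough sketch of producing a 256-entry colormap for an RGB image and scoring the result with PSNR and a Sobel-based edge-preservation ratio. Pillow's palette quantization stands in for the paper's colormap construction, and photo.png is an assumed input.

```python
# Sketch: quantize an RGB image to a 256-entry colormap, then compare the
# original and quantized images by PSNR and Sobel edge preservation.
import cv2
import numpy as np
from PIL import Image

orig = Image.open("photo.png").convert("RGB")            # hypothetical input image
quantized = orig.quantize(colors=256).convert("RGB")     # 256-color colormap image

a = np.asarray(orig, dtype=np.float64)
b = np.asarray(quantized, dtype=np.float64)

mse = np.mean((a - b) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

def edge_energy(rgb: np.ndarray) -> float:
    gray = cv2.cvtColor(rgb.astype(np.uint8), cv2.COLOR_RGB2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return float(np.hypot(gx, gy).sum())

preservation = edge_energy(b) / edge_energy(a)           # 1.0 = edges fully kept
print(f"PSNR={psnr:.2f} dB, edge preservation={preservation:.3f}")
```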

An Adaptive De-blocking Algorithm in Low Bit-rate Video Coding (저 비트율 비디오를 위한 적응적 블록킹 현상 제거 기법)

  • 김종호; 김해욱; 정제창
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.505-513 / 2004
  • Most video codecs, including the international standards, use a block-based hybrid structure for efficient compression. In low bit-rate applications such as video transmission over wireless channels, however, blocking artifacts degrade image quality seriously. In this paper, we propose an adaptive de-blocking algorithm that uses the characteristics of the block boundaries. Blocking artifacts contain high-frequency components near the block boundaries, so low-pass filtering can remove them; however, simple low-pass filtering causes blurring by removing important information such as edges. To overcome this problem, we determine the filtering mode from the characteristics of the pixels adjacent to the block boundary, and then apply the proper filter to each area. Simulation results show that the proposed method improves de-blocking performance compared to that of MPEG-4.
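
A simplified 1-D illustration of the mode decision: pixels adjacent to the boundary are classified as flat, default, or edge, and a matching filter is applied. The thresholds and filters below are invented stand-ins, not the proposed algorithm.

```python
# Sketch: classify each block boundary as flat / default / edge from the pixels
# next to it, then filter accordingly (thresholds are illustrative only).
import numpy as np

def deblock_row(p: np.ndarray, q: np.ndarray, flat_t: int = 2, edge_t: int = 20):
    """p: last 3 pixels of the left block, q: first 3 pixels of the right block."""
    p, q = p.astype(np.int16), q.astype(np.int16)
    activity = max(np.abs(np.diff(p)).max(), np.abs(np.diff(q)).max())
    step = abs(int(p[-1]) - int(q[0]))
    if step >= edge_t:                    # large step: a real edge, leave it alone
        return p, q
    if activity <= flat_t:                # flat area: strong 1-D low-pass across boundary
        line = np.concatenate([p, q]).astype(np.float32)
        sm = np.convolve(line, np.ones(3) / 3, mode="same")
        return sm[:3].astype(np.int16), sm[3:].astype(np.int16)
    # default mode: only soften the two pixels touching the boundary
    avg = (p[-1] + q[0]) // 2
    p[-1], q[0] = (p[-1] + avg) // 2, (q[0] + avg) // 2
    return p, q

left = np.array([100, 101, 102], dtype=np.uint8)
right = np.array([110, 110, 111], dtype=np.uint8)
print(deblock_row(left, right))
```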

Directional Interpolation of Lost Block Using Difference of DC values and Similarity of AC Coefficients (DC값 차이와 AC계수 유사성을 이용한 방향성 블록 보간)

  • Lee Hong Yub; Eom Il Kyu; Kim Yoo Shin
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.6C / pp.465-474 / 2005
  • In this paper, a directional reconstruction method for lost blocks in images transmitted over a noisy channel is presented. The DCT coefficients or pixel values of a lost block are recovered by linear interpolation from available neighboring blocks, which are selected adaptively by directional measures composed of the DDC (difference of DC values between opposite blocks) and the SAC (similarity of AC coefficients between opposite blocks) around the lost block. The proposed directional recovery method is effective for strong edge and texture regions because it does not rely on the fixed four neighboring blocks but adaptively exploits varying neighboring blocks according to the directional information in the local image. We describe a novel directional measure (CDS: combination of DDC and SAC) and use it to select the blocks usable for recovering the lost block. The proposed method shows an average PSNR improvement of about 0.6 dB compared to conventional methods.
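
A simplified sketch of the directional selection: the DC difference and AC dissimilarity of opposite neighbor blocks are compared, and the lost block is linearly interpolated from the better-matched pair. The scoring and the pixel-domain interpolation below are stand-ins for the paper's DDC/SAC/CDS measures, not a reproduction of them.

```python
# Sketch: choose an interpolation direction for a lost 8 x 8 block from the DC
# difference and AC dissimilarity of its opposite neighbors, then interpolate
# linearly between the chosen pair of neighbors.
import cv2
import numpy as np

def dc_ac(block: np.ndarray):
    coeffs = cv2.dct(block.astype(np.float32))
    dc = float(coeffs[0, 0])
    ac = coeffs.copy(); ac[0, 0] = 0.0
    return dc, ac

def recover(top, bottom, left, right):
    (dc_t, ac_t), (dc_b, ac_b) = dc_ac(top), dc_ac(bottom)
    (dc_l, ac_l), (dc_r, ac_r) = dc_ac(left), dc_ac(right)
    # smaller DC difference and more similar AC content = better direction
    vert_score = abs(dc_t - dc_b) + np.abs(ac_t - ac_b).sum()
    horz_score = abs(dc_l - dc_r) + np.abs(ac_l - ac_r).sum()
    w = np.linspace(0, 1, 8)[:, None]          # linear interpolation weights
    if vert_score <= horz_score:               # interpolate top -> bottom
        return (1 - w) * top[-1, :] + w * bottom[0, :]
    return (1 - w.T) * left[:, -1:] + w.T * right[:, :1]   # left -> right

blocks = [np.full((8, 8), v, dtype=np.float64) for v in (100, 102, 60, 160)]
print(recover(*blocks).round()[0])
```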

A study on Improved De-Interlacing Applying Newton Difference Interpolation (Newton 차분법을 이용한 개선된 디인터레이싱 연구)

  • Baek, Kyunghoon
    • The Journal of the Convergence on Culture Technology / v.6 no.1 / pp.449-454 / 2020
  • We propose an improved de-interlacing method that converts an interlaced image into a progressive image from a single field. First, inter-pixel values are calculated by applying Newton's forward-difference and backward-difference interpolation to the five pixel values above and below the missing line. Using these inter-pixel values, the edge-direction estimate is made more accurate by exploiting the correlation between the upper and lower pixels. Once an edge direction is determined from the correlation, the missing pixel value is computed as the average of the upper and lower pixels along the predicted edge direction. Simulation results show that the proposed method improves subjective image quality in edge regions and improves objective image quality by 0.2~0.3 dB in PSNR compared to various previous de-interlacing methods.
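
A simplified, ELA-style sketch of the directional interpolation: the missing pixel is the average of the upper and lower pixels along the candidate direction with the smallest difference. The Newton forward/backward-difference refinement of the inter-pixel values described in the abstract is not reproduced here.

```python
# Sketch: fill a missing interlaced line by averaging the upper and lower pixels
# along the direction with the smallest |upper - lower| difference.
import numpy as np

def interpolate_line(upper: np.ndarray, lower: np.ndarray,
                     directions=(-2, -1, 0, 1, 2)) -> np.ndarray:
    w = upper.shape[0]
    out = np.empty(w, dtype=np.float32)
    up, lo = upper.astype(np.float32), lower.astype(np.float32)
    for x in range(w):
        best, best_val = None, np.inf
        for d in directions:                      # candidate edge directions
            xu, xl = x + d, x - d
            if 0 <= xu < w and 0 <= xl < w:
                diff = abs(up[xu] - lo[xl])
                if diff < best_val:
                    best, best_val = (xu, xl), diff
        out[x] = (up[best[0]] + lo[best[1]]) / 2  # average along chosen direction
    return out

field_above = np.array([10, 10, 10, 200, 200, 200], dtype=np.uint8)
field_below = np.array([10, 10, 200, 200, 200, 200], dtype=np.uint8)
print(interpolate_line(field_above, field_below))
```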