• Title/Summary/Keyword: edge extraction

Object Extraction and Tracking out of Color Image in Real-Time (실시간 칼라영상에서 객체추출 및 추적)

  • Choi, Nae-Won;Oh, Hae-Seok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.1
    • /
    • pp.81-86
    • /
    • 2003
  • In this paper, we propose a method for tracking a moving object that is extracted from the difference between a background image and a target image within a fixed domain. To locate the object, the extraction step evaluates not every pixel of the full image but only a predefined set of edge pixels. Since the central area is excluded from the calculation, the extraction time is reduced efficiently. To extract the object in the predefined area, a starting point is obtained in advance and then the width and height of the object are extracted. The central coordinate is used to track the moved object.
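
  A minimal sketch of the background-difference extraction described above, assuming OpenCV and NumPy; the file names and the threshold value are illustrative, and the paper's restriction to predefined edge pixels is simplified to a scan of the whole difference mask:

      import cv2
      import numpy as np

      # Hypothetical inputs: a fixed background image and the current frame.
      background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
      frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

      # Difference between background and target image, then binarization.
      diff = cv2.absdiff(frame, background)
      _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

      # Bounding box (width/height) and central coordinate of the object;
      # the central coordinate is what the tracker follows between frames.
      ys, xs = np.nonzero(mask)
      if xs.size > 0:
          x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
          center = ((x0 + x1) // 2, (y0 + y1) // 2)
          print("size:", x1 - x0 + 1, y1 - y0 + 1, "center:", center)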

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.646-654
    • /
    • 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as a superimposed text region located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most video text detection and extraction methods in previous work are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction remains difficult due to the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates considering the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to texts of various colors. Text region updating between frames is also exploited to reduce the processing time. Experiments are performed on diverse videos to confirm the efficiency of the proposed method.
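
  A minimal sketch of the corner-density idea above, assuming OpenCV; the block size, window size, and thresholds are illustrative rather than the paper's values:

      import cv2
      import numpy as np

      gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

      # 1) Corner map with the Harris corner detector.
      harris = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
      corners = (harris > 0.01 * harris.max()).astype(np.uint8)

      # 2) Text candidates where the local density of corners is high.
      density = cv2.boxFilter(corners.astype(np.float32), -1, (31, 31), normalize=False)
      candidates = (density > 10).astype(np.uint8)

      # 3) Text region determination by connected-component labeling,
      #    followed by a simple area-based post-processing filter.
      n, labels, stats, _ = cv2.connectedComponentsWithStats(candidates)
      boxes = [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 200]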

Extraction of Text Alignment by Tensor Voting and its Application to Text Detection (텐서보팅을 이용한 텍스트 배열정보의 획득과 이를 이용한 텍스트 검출)

  • Lee, Guee-Sang;Dinh, Toan Nguyen;Park, Jong-Hyun
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.11
    • /
    • pp.912-919
    • /
    • 2009
  • A novel algorithm using 2D tensor voting and an edge-based approach is proposed for text detection in natural scene images. Tensor voting is used based on the fact that characters in a text line are usually close together on a smooth curve, so the tokens corresponding to the centers of these characters have high curve saliency values. First, a suitable edge-based method is used to find all possible text regions. Since the false positive rate of the text detection result generated by the edge-based method is high, 2D tensor voting is applied to remove false positives and retain only text regions. The experimental results show that our method successfully detects text regions in many complex natural scene images.
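
  A heavily simplified sketch of the curve-saliency computation, assuming NumPy; each token (character center) accumulates Gaussian-weighted outer products of the directions to its neighbours, and the eigenvalue gap stands in for the full stick/ball voting of the paper (sigma is illustrative):

      import numpy as np

      def curve_saliency(tokens, sigma=20.0):
          tokens = np.asarray(tokens, dtype=float)   # (N, 2) character centers
          saliency = np.zeros(len(tokens))
          for i, p in enumerate(tokens):
              T = np.zeros((2, 2))
              for j, q in enumerate(tokens):
                  if i == j:
                      continue
                  d = q - p
                  dist = np.linalg.norm(d)
                  u = d / dist
                  T += np.exp(-(dist ** 2) / sigma ** 2) * np.outer(u, u)
              lam = np.linalg.eigvalsh(T)            # ascending eigenvalues
              saliency[i] = lam[1] - lam[0]          # curve saliency
          return saliency

      # Tokens lying on a line score far higher than the isolated outlier.
      print(curve_saliency([(0, 0), (10, 0), (20, 0), (30, 0), (5, 40)]))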

Correction of Signboard Distortion by Vertical Stroke Estimation

  • Lim, Jun Sik;Na, In Seop;Kim, Soo Hyung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.9
    • /
    • pp.2312-2325
    • /
    • 2013
  • In this paper, we propose a preprocessing method that corrects the distortion of text areas in Korean signboard images in order to improve character recognition. Perspective distortion in Korean signboard text can cause a low recognition rate. The proposed method consists of four main steps and eight sub-steps: the main steps are potential vertical component detection, vertical component detection, text-boundary estimation, and distortion correction. First, potential vertical line component detection consists of four sub-steps: edge detection for each connected component, pixel distance normalization along the edge, dominant-point detection on the edge, and removal of horizontal components. Second, vertical line component detection is composed of removal of diagonal components and extraction of vertical line components. Third, the outline estimation step detects the left and right boundary lines. Finally, the distortion of the text image is corrected by a bilinear transformation based on the estimated outline. We compared OCR recognition rates before and after applying the proposed algorithm: the recognition rate on the distortion-corrected signboard images is 29.63% higher at the character level and 21.9% higher at the text level than on the original images.
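
  A minimal sketch of the final correction step, assuming OpenCV and that the four corners of the text region are already known from the estimated left and right boundary lines; the paper applies a bilinear transformation, and OpenCV's perspective warp is used here as a stand-in for the quadrilateral-to-rectangle mapping (corner coordinates and output size are hypothetical):

      import cv2
      import numpy as np

      img = cv2.imread("signboard.png")

      # Hypothetical corner estimates: top-left, top-right, bottom-right, bottom-left.
      src = np.float32([[42, 30], [580, 75], [590, 260], [35, 215]])
      w, h = 600, 200
      dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

      # Map the distorted text quadrilateral onto an upright rectangle.
      M = cv2.getPerspectiveTransform(src, dst)
      corrected = cv2.warpPerspective(img, M, (w, h))
      cv2.imwrite("signboard_corrected.png", corrected)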

An Edge Extraction Method Using K-means Clustering In Image (영상에서 K-means 군집화를 이용한 윤곽선 검출 기법)

  • Kim, Ga-On;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.12 no.11
    • /
    • pp.281-288
    • /
    • 2014
  • A method for edge detection using K-means clustering is proposed in this paper. The method is performed in three steps: histogram equalization is applied to the image to obtain a uniform intensity distribution, pixels are clustered by the K-means clustering technique, and then a Sobel mask is applied to detect edges. Experiments showed that this method detects edges better than conventional methods.
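
  A minimal sketch of the three steps, assuming OpenCV and a grayscale input; K = 4 clusters is an illustrative choice, not the paper's setting:

      import cv2
      import numpy as np

      gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

      # 1) Histogram equalization for a more uniform intensity distribution.
      eq = cv2.equalizeHist(gray)

      # 2) K-means clustering of pixel intensities.
      samples = eq.reshape(-1, 1).astype(np.float32)
      criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
      _, labels, centers = cv2.kmeans(samples, 4, None, criteria, 3,
                                      cv2.KMEANS_RANDOM_CENTERS)
      clustered = centers[labels.flatten()].reshape(gray.shape).astype(np.uint8)

      # 3) Sobel mask on the clustered image to detect edges.
      gx = cv2.Sobel(clustered, cv2.CV_32F, 1, 0, ksize=3)
      gy = cv2.Sobel(clustered, cv2.CV_32F, 0, 1, ksize=3)
      edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))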

Image Retrieval Using Color feature and GLCM and Direction in Wavelet Transform Domain (Wavelet 변환 영역에서 칼라 정보와 GLCM 및 방향성을 이용한 영상 검색)

  • 이정봉
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.585-589
    • /
    • 2002
  • In this paper, a hierarchical retrieval system based on efficient feature extraction is proposed in order to retrieve images robustly under geometrical transformations such as translation, scaling, and rotation. After performing a 2-level wavelet transform on the image, we extract moments from the low-level subband, which is subdivided into subimages, together with a texture feature, the contrast of the GLCM (Gray Level Co-occurrence Matrix). Candidate images in the database are first retrieved using these features. To perform a more accurate image retrieval, the edge information in the high-level subband is subdivided horizontally, vertically, and diagonally; the energy rate of edges per direction is then determined and used to compare the edge energy rates between images for higher accuracy.
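
  A minimal sketch of the feature extraction, assuming PyWavelets and scikit-image; the wavelet basis, the re-quantization of the low-level subband, and the omission of the subimage moments are simplifications:

      import cv2
      import numpy as np
      import pywt
      from skimage.feature import graycomatrix, graycoprops

      img = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

      # 2-level wavelet transform; ll2 is the low-level subband,
      # (h1, v1, d1) are the level-1 horizontal/vertical/diagonal details.
      ll2, (h2, v2, d2), (h1, v1, d1) = pywt.wavedec2(img, "haar", level=2)

      # Texture feature: GLCM contrast of the (re-quantized) low-level subband.
      ll8 = np.clip(ll2, 0, 255).astype(np.uint8)
      glcm = graycomatrix(ll8, distances=[1], angles=[0], levels=256)
      contrast = graycoprops(glcm, "contrast")[0, 0]

      # Energy rate of edges per direction from the high-level detail subbands.
      energies = np.array([np.sum(h1 ** 2), np.sum(v1 ** 2), np.sum(d1 ** 2)])
      energy_rate = energies / energies.sum()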

An Onboard Image Processing System for Road Images (도로교통 영상처리를 위한 고속 영상처리시스템의 하드웨어 구현)

  • 이운근;이준웅;조석빈;고덕화;백광렬
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.9 no.7
    • /
    • pp.498-506
    • /
    • 2003
  • A computer vision system for an intelligent safety vehicle is required to run on small, real-time, special-purpose hardware rather than on a general-purpose computer. In addition, the system should be highly reliable even in adverse road traffic environments. This paper presents the design and implementation of an onboard hardware system for high-speed image processing to analyze road traffic scenes. The system is mainly composed of two parts: an early processing module on an FPGA and a postprocessing module on a DSP. The early processing module is designed to extract several image primitives, such as the intensity of a gray-level image and edge attributes, in real time; in particular, the module is optimized for the Sobel edge operation. The DSP postprocessing module uses the image features from the early processing module for image understanding and analysis of the road traffic scene. The performance of the proposed system is evaluated by an experiment on lane-related information extraction. The experiment shows a successful image processing speed of twenty-five 320×240-pixel frames per second.
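
  A software reference for the Sobel operation that the FPGA early-processing module computes in hardware (a minimal sketch with NumPy and SciPy, not the onboard design):

      import numpy as np
      from scipy.signal import convolve2d

      SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int32)
      SOBEL_Y = SOBEL_X.T

      def sobel_edges(gray):
          gx = convolve2d(gray.astype(np.int32), SOBEL_X, mode="same", boundary="symm")
          gy = convolve2d(gray.astype(np.int32), SOBEL_Y, mode="same", boundary="symm")
          # |gx| + |gy| is the usual hardware-friendly magnitude approximation.
          return np.abs(gx) + np.abs(gy)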

Silhouette-Edge-Based Descriptor for Human Action Representation and Recognition

  • Odoyo, Wilfred O.;Choi, Jae-Ho;Moon, In-Kyu;Cho, Beom-Joon
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.2
    • /
    • pp.124-131
    • /
    • 2013
  • The extraction and representation of postures and/or gestures from human activities in videos have been a focus of research in action recognition. With various applications arising from different fields, this paper seeks to improve the performance of action recognition systems by proposing a shape-based silhouette-edge descriptor for the human body. Information entropy, a measure of the randomness of a sequence of symbols, is used to aid the selection of vital key postures from video frames. Morphological operations are applied to extract and stack edges so that different actions are uniquely represented shape-wise. To classify an action from a new input video, a Hausdorff distance measure is applied between the gallery representations and the query images formed by the proposed procedure. The method is tested on known public databases for validation. An effective method of human action annotation and description is thereby achieved.
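
  A minimal sketch of the representation and matching steps, assuming SciPy and binary silhouette masks; the entropy-based key-posture selection is omitted for brevity:

      import numpy as np
      from scipy.ndimage import binary_erosion
      from scipy.spatial.distance import directed_hausdorff

      def edge_stack(silhouettes):
          """Stack morphological edges (mask minus its erosion) of binary frames."""
          acc = np.zeros_like(np.asarray(silhouettes[0], dtype=bool))
          for mask in silhouettes:
              mask = np.asarray(mask, dtype=bool)
              acc |= mask & ~binary_erosion(mask)
          return acc

      def action_distance(stack_a, stack_b):
          """Symmetric Hausdorff distance between two stacked edge point sets."""
          pts_a = np.argwhere(stack_a)
          pts_b = np.argwhere(stack_b)
          return max(directed_hausdorff(pts_a, pts_b)[0],
                     directed_hausdorff(pts_b, pts_a)[0])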

Security Algorithm for Vehicle Type Recognition (에지영상의 비율을 이용한 차종 인식 보안 알고리즘)

  • Rhee, Eugene
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.2
    • /
    • pp.77-82
    • /
    • 2017
  • In this paper, a new security algorithm that recognizes the vehicle type from a vehicle image given as input is proposed. The vehicle recognition security algorithm is composed of five core parts: image input, background removal, edge area extraction, pre-processing (binarization), and vehicle recognition. The final recognition rate of the security algorithm for vehicle type recognition can therefore be affected by the function and efficiency of each step. After converting the input image to gray scale and removing the background, binarization is performed on the extracted edge region. After a pre-processing step that makes the outlines clear, vehicles are categorized into large vehicles, passenger cars, and motorcycles using the ratio of the vehicle's height to its width.
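
  A minimal sketch of the classification step, assuming OpenCV; Canny edge extraction stands in for the paper's edge-region binarization, and the ratio thresholds and category boundaries are illustrative, not the paper's values:

      import cv2
      import numpy as np

      gray = cv2.imread("vehicle.png", cv2.IMREAD_GRAYSCALE)
      edges = cv2.Canny(gray, 100, 200)          # binary edge region

      # Height-to-width ratio of the edge region's bounding box.
      ys, xs = np.nonzero(edges)
      ratio = (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)

      # Illustrative thresholds only.
      if ratio > 1.2:
          vehicle_type = "motorcycle"
      elif ratio > 0.6:
          vehicle_type = "large vehicle"
      else:
          vehicle_type = "passenger car"
      print(vehicle_type)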

Image-based Extraction of Histogram Index for Concrete Crack Analysis

  • Kim, Bubryur;Lee, Dong-Eun
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.912-919
    • /
    • 2022
  • This study is an image-based assessment that uses image processing techniques to determine the condition of concrete with surface cracks. Preparation of the dataset includes resizing and image filtering to ensure statistical homogeneity and reduce noise. The image dataset is then segmented, making it more suited for extracting important features and easier to evaluate. The image is transformed into grayscale, which removes the hue and saturation but retains the luminance. To create a clean edge map, an edge detection process is used to extract the major edge features of the image. The Otsu method is used to minimize the intraclass variance between black and white pixels. Additionally, a median filter is employed to reduce noise while preserving the borders of the image. These image processing techniques enhance the significant features of the concrete image, especially the defects. In this study, the tonal zones of the histogram and their properties are used to analyze the condition of the concrete. By examining the histogram, the viewer can determine information about the image from the number of pixels associated with each tonal characteristic on the graph. The features of the five tonal zones of the histogram, which reflect the qualities of the concrete image, may be evaluated based on the contrast, brightness, highlights, shadow spikes, or the condition of the shadow region that corresponds to the foreground.
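
  A minimal sketch of the preprocessing and the tonal-zone histogram, assuming OpenCV; the median-filter size, Canny thresholds, and the use of five equal-width zones over 0-255 are illustrative choices:

      import cv2
      import numpy as np

      gray = cv2.imread("concrete.png", cv2.IMREAD_GRAYSCALE)
      gray = cv2.medianBlur(gray, 5)             # reduce noise, keep borders

      # Otsu threshold separates dark crack pixels from the brighter surface.
      _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      # Clean edge map highlighting the major defect edges.
      edges = cv2.Canny(gray, 50, 150)

      # Histogram index: pixel counts in five tonal zones (shadows .. highlights).
      hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
      zones = [int(c.sum()) for c in np.array_split(hist, 5)]
      print("tonal-zone pixel counts:", zones)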
