• Title/Summary/Keyword: template matching


Detection of Traffic Light using Color after Morphological Preprocessing (형태학적 전처리 후 색상을 이용한 교통 신호의 검출)

  • Kim, Chang-dae;Choi, Seo-hyuk;Kang, Ji-hun;Ryu, Sung-pil;Kim, Dong-woo;Ahn, Jae-hyeong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.367-370
    • /
    • 2015
  • This paper proposes a method for improving the detection of traffic lights for autonomous driving cars. Earlier detection methods adopted color thresholding, template matching, and machine-learning-based approaches, but these suffer from problems such as reduced recognition rates and slow processing times. The proposed method uses both a detection mask and morphological preprocessing. First, the input color image is converted to a YCbCr image in order to strengthen its illumination component, and horizontal edge components are extracted from the Y channel. Second, the region of interest is detected according to the morphological characteristics of traffic lights. Finally, the traffic signal is detected based on color distributions. Experiments showed that the proposed method improves the detection rate and processing time compared with the conventional algorithm in several surrounding environments.
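
The sketch below only illustrates the kind of pipeline the abstract describes (YCbCr conversion, horizontal edges in the Y channel, morphological grouping, colour check); it assumes OpenCV, and every threshold, kernel size, and the crude Cr-mean colour test are invented placeholders rather than values from the paper.

```python
import cv2

def detect_traffic_light_candidates(bgr_image):
    # Separate luminance from chrominance (OpenCV stores this order as YCrCb).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # Horizontal edge components from the Y channel (vertical intensity gradient).
    edges = cv2.convertScaleAbs(cv2.Sobel(y, cv2.CV_16S, 0, 1, ksize=3))
    _, edge_mask = cv2.threshold(edges, 60, 255, cv2.THRESH_BINARY)

    # Morphological closing merges edge fragments into housing-shaped blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    closed = cv2.morphologyEx(edge_mask, cv2.MORPH_CLOSE, kernel)

    # Keep regions whose aspect ratio resembles a horizontal light housing.
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y0, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0:
            # A real classifier would model the colour distribution here;
            # the mean Cr value serves only as a crude red/green proxy.
            roi_cr = cr[y0:y0 + h, x:x + w]
            colour = "red" if roi_cr.mean() > 150 else "green"
            candidates.append(((x, y0, w, h), colour))
    return candidates
```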


Automatic Phonetic Segmentation of Korean Speech Signal Using Phonetic-acoustic Transition Information (음소 음향학적 변화 정보를 이용한 한국어 음성신호의 자동 음소 분할)

  • 박창목;왕지남
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.8
    • /
    • pp.24-30
    • /
    • 2001
  • This article is concerned with automatic segmentation of Korean speech signals. All transition cases between phonetic units are classified into three types, and a different strategy is applied to each type. Type 1 is the discrimination of silence, voiced speech, and unvoiced speech; histogram analysis of indicators consisting of wavelet coefficients and the SVF (Spectral Variation Function) of the wavelet coefficients is used for type 1 segmentation. Type 2 is the discrimination of adjacent vowels; vowel transitions can be characterized by the spectrogram, so given the phonetic transcription and a transition-pattern spectrogram, speech signals containing consecutive vowels are automatically segmented by template matching. Type 3 is the discrimination of vowels and voiced consonants; the smoothed short-time RMS energy of the wavelet low-pass component and the SVF of the cepstral coefficients are adopted for type 3 segmentation. The experiment was performed on a set of 342 word utterances gathered from 6 speakers. The results show the validity of the method.
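
As a rough illustration of a spectral variation function of the kind used here to flag transition frames, the sketch below computes frame-to-frame spectral change from an STFT; the paper applies the SVF to wavelet and cepstral coefficients, so this NumPy/SciPy version is an assumption about the general idea, not the authors' exact definition.

```python
import numpy as np
from scipy.signal import stft

def spectral_variation(signal, fs, frame_len=400, hop=160):
    # Short-time spectra of the speech signal.
    _, _, Z = stft(signal, fs=fs, nperseg=frame_len, noverlap=frame_len - hop)
    mag = np.abs(Z)
    # Normalise each frame so the measure reflects spectral shape, not level.
    mag = mag / (mag.sum(axis=0, keepdims=True) + 1e-12)
    # Frame-to-frame distance between successive spectra; peaks suggest
    # phonetic transitions that histogram analysis can then threshold.
    return np.linalg.norm(np.diff(mag, axis=1), axis=0)
```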


Generation Method of Spatiotemporal Image for Detecting Leukocyte Motions in a Microvessel (미소혈관내 백혈구 운동검출을 위한 시공간 영상 생성법)

  • Kim, Eung Kyeu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.9
    • /
    • pp.99-109
    • /
    • 2016
  • This paper presents a method for generating spatiotemporal images to detect leukocyte motions in a microvessel. Using the constraint that leukocytes move along the contour line of the blood vessel wall, the method detects leukocyte motions and then generates spatiotemporal images. First, the translational motion caused by in vivo movement is removed by template matching. Next, the blood vessel region is detected by an automatic threshold selection method that binarizes the temporal variance image, and the blood vessel wall's contour is expressed by a B-spline function. With the detected contour as an initial curve, the plasma layer at the most accurate position is determined as the spatial axis by a snake. Finally, the spatiotemporal images are generated. Experimental results on three image sequences show, through comparison of each step, that the spatiotemporal images are generated effectively.
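
A minimal sketch of two of the steps described, assuming OpenCV and grayscale frames: translational registration by template matching, and stacking of intensities sampled along a precomputed vessel-wall contour into a spatiotemporal image. The B-spline fitting and snake refinement are omitted, and `template_box` and `contour_points` are hypothetical inputs.

```python
import cv2
import numpy as np

def register_offset(reference, frame, template_box):
    # Re-locate a fixed reference patch in the new frame to estimate the
    # translational motion caused by in vivo movement.
    x, y, w, h = template_box
    template = reference[y:y + h, x:x + w]
    _, _, _, (mx, my) = cv2.minMaxLoc(
        cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED))
    return mx - x, my - y

def spatiotemporal_image(frames, contour_points, template_box):
    # One row per frame: intensities sampled along the vessel-wall contour,
    # with each frame compensated for its estimated translational offset.
    rows = []
    for frame in frames:
        dx, dy = register_offset(frames[0], frame, template_box)
        rows.append([frame[int(py + dy), int(px + dx)]
                     for (px, py) in contour_points])
    return np.array(rows, dtype=np.uint8)
```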

VALIDATION OF SEA ICE MOTION DERIVED FROM AMSR-E AND SSM/I DATA USING MODIS DATA

  • Yaguchi, Ryota;Cho, Ko-Hei
    • Proceedings of the KSRS Conference
    • /
    • 2008.10a
    • /
    • pp.301-304
    • /
    • 2008
  • Since longer-wavelength microwave radiation can penetrate clouds, satellite passive microwave sensors can observe the sea ice of the entire polar region on a daily basis. Thus, it is becoming popular to derive sea ice motion vectors from a pair of satellite passive microwave images observed at an interval of one or a few days. Usually, the accuracy of the derived vectors is validated by comparison with the position data of drifting buoys; however, the number of buoys available for validation is always quite limited compared with the large number of vectors derived from satellite images. In this study, sea ice motion vectors automatically derived from pairs of AMSR-E 89 GHz images (IFOV = 3.5 × 5.9 km) by image-to-image cross correlation were validated by comparison with sea ice motion vectors manually derived from pairs of cloudless MODIS images (IFOV = 250 × 250 m). Since AMSR-E and MODIS are both on NASA's Aqua satellite, the observation times of the two sensors are the same. The relative errors of the AMSR-E vectors against the MODIS vectors were calculated, and the accuracy validation was conducted for 5 scenes. If vectors with a relative error of less than 30% are accepted as correct, 75% to 92% of the AMSR-E vectors derived from one scene were correct. On the other hand, the percentage of correct sea ice vectors derived from a pair of SSM/I 85 GHz images (IFOV = 15 × 13 km) observed nearly simultaneously with one of the AMSR-E images was 46%. The difference in accuracy between AMSR-E and SSM/I reflects the difference in IFOV. The accuracies of the H and V polarizations differed from scene to scene, which may reflect differences in sea ice distribution and snow cover in each scene.
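
The block-matching sketch below shows the general form of deriving motion vectors by image-to-image cross correlation between two co-registered images; it assumes OpenCV, and the block size, search radius, and correlation threshold are illustrative values, not those used in the study.

```python
import cv2

def ice_motion_vectors(img_t0, img_t1, block=32, search=16, step=32, min_peak=0.6):
    # Slide each block from the first image over a search window in the second
    # image and take the correlation peak as that block's displacement.
    vectors = []
    h, w = img_t0.shape
    for y in range(search, h - block - search, step):
        for x in range(search, w - block - search, step):
            template = img_t0[y:y + block, x:x + block]
            window = img_t1[y - search:y + block + search,
                            x - search:x + block + search]
            score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
            _, peak, _, (px, py) = cv2.minMaxLoc(score)
            if peak > min_peak:                 # discard weak correlations
                vectors.append((x, y, px - search, py - search))
    return vectors                              # (x, y, dx, dy) in pixels
```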


Detecting and Tracking Vehicles at Local Region by using Segmented Regions Information (분할 영역 정보를 이용한 국부 영역에서 차량 검지 및 추적)

  • Lee, Dae-Ho;Park, Young-Tae
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.10
    • /
    • pp.929-936
    • /
    • 2007
  • A novel vision-based scheme for extracting traffic parameters in real time is proposed in this paper. Vehicles are detected and tracked within a local region designated by the operator. The local region is divided into segmented regions using edges and frame differences, and the segmented regions are classified into vehicle, road, shadow, and headlight using statistical and geometrical features. Vehicles are detected from the result of this classification. Traffic parameters such as velocity, length, occupancy, and distance are estimated by tracking with template matching in the local region. Because no background image is used, the method can be applied under various conditions such as different weather, time slots, and locations. It performed well, with a 90.16% detection rate on various databases. If the camera direction, angle, and iris are adapted to the operating conditions, the method is expected to serve as the core of traffic monitoring systems.
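
A minimal sketch of the tracking step, assuming OpenCV: a detected vehicle patch is re-located in the next frame by template matching and the pixel displacement is converted to speed. The `metres_per_pixel` scale and frame interval are hypothetical calibration inputs, not values from the paper.

```python
import cv2

def track_vehicle_speed(prev_frame, next_frame, vehicle_box,
                        metres_per_pixel, frame_interval_s):
    # Use the detected vehicle patch from the previous frame as a template.
    x, y, w, h = vehicle_box
    template = prev_frame[y:y + h, x:x + w]
    score = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (nx, ny) = cv2.minMaxLoc(score)
    # Convert the pixel displacement into an approximate speed.
    dx, dy = nx - x, ny - y
    speed_mps = ((dx ** 2 + dy ** 2) ** 0.5) * metres_per_pixel / frame_interval_s
    return (nx, ny, w, h), speed_mps
```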

The Implementation of Automatic Compensation Modules for Digital Camera Image by Recognition of the Eye State (눈의 상태 인식을 이용한 디지털 카메라 영상 자동 보정 모듈의 구현)

  • Jeon, Young-Joon;Shin, Hong-Seob;Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.3
    • /
    • pp.162-168
    • /
    • 2013
  • This paper examines the implementation of automatic compensation modules for digital camera images taken while a person's eyes are closed. The modules detect the face and eye regions and then recognize the eye state. If the image is taken while the eyes are closed, the modules correct the eyes and produce an image using the most satisfactory eye state from among the past frames stored in a buffer. In order to recognize the face and eyes precisely, image-correction preprocessing is carried out using the SURF algorithm and a homography method. For the detection of the face and eye regions, the Haar-like feature algorithm is used. To decide whether an eye is open, a similarity comparison is used together with template matching of the eye region. The modules were tested in various facial environments and confirmed to effectively correct images containing faces.
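
The sketch below illustrates the eye-open check in the spirit of the abstract: Haar cascades locate the face and eyes, and a normalised template match against a reference open-eye patch scores the state. It assumes OpenCV's bundled cascade files; the `open_eye_template` and the 0.55 threshold are invented for illustration.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_is_open(gray_frame, open_eye_template, threshold=0.55):
    faces = face_cascade.detectMultiScale(gray_frame, 1.1, 5)
    for (fx, fy, fw, fh) in faces:
        face_roi = gray_frame[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(face_roi, 1.1, 5)
        for (ex, ey, ew, eh) in eyes:
            # Compare the detected eye patch with a reference open-eye template.
            eye_roi = cv2.resize(face_roi[ey:ey + eh, ex:ex + ew],
                                 open_eye_template.shape[::-1])
            score = cv2.matchTemplate(eye_roi, open_eye_template,
                                      cv2.TM_CCOEFF_NORMED).max()
            if score < threshold:
                return False        # this eye looks closed
    return True
```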

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.55-70
    • /
    • 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In that sense, inferring the emotional state of a person from facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are assumed to pass through the neutral state. In this work, however, we propose an enhanced transition framework that, in addition to the traditional transition model, allows transitions between emotional states without passing through the neutral state. For the localization of facial features from the video sequence, we exploit template matching and optical flow. The facial feature displacements traced by the optical flow are used as input parameters to the HMM for facial expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.
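
A sketch of the feature-tracking front end only, assuming OpenCV: facial feature points are tracked with pyramidal Lucas-Kanade optical flow, and their frame-to-frame displacements would form the observation vector fed to the expression HMM (the HMM itself is omitted here).

```python
import cv2

def track_feature_displacements(prev_gray, next_gray, prev_points):
    # Pyramidal Lucas-Kanade optical flow moves each feature point into the
    # next frame; displacements of successfully tracked points become the
    # observations for the expression HMM.
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_points, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    displacements = (next_points[good] - prev_points[good]).reshape(-1, 2)
    return next_points[good], displacements

# Initial feature points might come from template matching of facial landmarks
# or, as a stand-in, a corner detector:
# pts = cv2.goodFeaturesToTrack(first_gray, maxCorners=30, qualityLevel=0.01,
#                               minDistance=5).astype("float32")
```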


Automatic Recognition of Direction Information in Road Sign Image Using OpenCV (OpenCV를 이용한 도로표지 영상에서의 방향정보 자동인식)

  • Kim, Gihong;Chong, Kyusoo;Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.293-300
    • /
    • 2013
  • Road signs are important infrastructure for safe and smooth traffic, providing useful information to drivers. It is necessary to establish a road sign DB to manage road signs systematically. Such a DB can be built by manually detecting and recognizing signs from imagery; however, this is time-consuming and costly. In this study, we propose algorithms for the automatic recognition of direction information in road sign images. We implemented the algorithms using the OpenCV library and applied them to road sign images. To automatically detect and recognize direction information, we developed a program composed of modules for image enhancement, image binarization, arrow region extraction, interest point extraction, and template image matching. The results confirm the possibility of automatically recognizing direction information in road sign images.
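
As a rough illustration of the final matching module, the sketch below binarises a cropped arrow region and picks the best-matching direction template with OpenCV; the template dictionary and the upstream arrow-extraction step are assumptions, only the matching logic is shown.

```python
import cv2

def recognize_direction(arrow_gray, templates):
    """templates: dict such as {'left': img, 'right': img, 'straight': img}."""
    # Otsu binarisation makes the match insensitive to sign brightness.
    _, arrow_bin = cv2.threshold(arrow_gray, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():
        tmpl_resized = cv2.resize(tmpl, (arrow_bin.shape[1], arrow_bin.shape[0]))
        score = cv2.matchTemplate(arrow_bin, tmpl_resized,
                                  cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```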

Accurate Pose Measurement of Label-attached Small Objects Using a 3D Vision Technique (3차원 비전 기술을 이용한 라벨부착 소형 물체의 정밀 자세 측정)

  • Kim, Eung-su;Kim, Kye-Kyung;Wijenayake, Udaya;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.10
    • /
    • pp.839-846
    • /
    • 2016
  • Bin picking is the task of picking a small object out of a bin. For accurate bin picking, the 3D pose information (position and orientation) of a small object is required, because the object is mixed with other objects of the same type in the bin. Using this 3D pose information, a robotic gripper can pick an object using exact distance and orientation measurements. In this paper, we propose a 3D vision technique for accurately measuring the 3D position and orientation of small objects to whose surface a paper label is attached. We use the maximally stable extremal regions (MSER) algorithm to detect the label areas in the left bin image acquired from a stereo camera. In each label area, image features are detected and their correspondence with the right image is determined by a stereo vision technique. Then, the 3D position and orientation of the objects are measured accurately using a transformation from the camera coordinate system to a new label coordinate system. For stable measurement during a bin picking task, the pose information is filtered by averaging at fixed time intervals. Our experimental results indicate that the proposed technique yields an accuracy of 0.4~0.5 mm in position and 0.2~0.6° in angle.
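
A minimal sketch of the label-detection step, assuming OpenCV's MSER implementation: stable regions are extracted from the left image of the stereo pair and filtered to label-sized boxes. Stereo correspondence and the coordinate transform are omitted, and the area limits are illustrative.

```python
import cv2

def detect_label_regions(left_gray, min_area=500, max_area=20000):
    # MSER returns regions that stay stable over a range of thresholds,
    # which suits high-contrast paper labels on the object surface.
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(left_gray)
    label_boxes = []
    for (x, y, w, h) in boxes:
        if min_area <= w * h <= max_area:     # keep label-sized regions only
            label_boxes.append((x, y, w, h))
    return label_boxes
```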

Meter Numeric Character Recognition Using Illumination Normalization and Hybrid Classifier (조명 정규화 및 하이브리드 분류기를 이용한 계량기 숫자 인식)

  • Oh, Hangul;Cho, Seongwon;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.71-77
    • /
    • 2014
  • In this paper, we propose an improved numeric character recognition method that can recognize numeric characters well in low-illumination and shaded environments. LN (Local Normalization) preprocessing is used to enhance the quality of low-illumination and shaded images. The reading area is detected using line-segment information extracted from the illumination-normalized meter images, and a three-phase procedure is then performed to segment the numeric characters in the reading area. Finally, an efficient hybrid classifier is used to classify the segmented numeric characters. The proposed classifier is a combination of a multi-layered feedforward neural network and a template matching module, and robust heuristic rules are applied to classify the numeric characters. Experiments were conducted on a meter image database built from various kinds of meters under low-illumination and shaded conditions. The experimental results indicate the superiority of the proposed numeric character recognition method.
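
To illustrate the idea of a hybrid classifier, the sketch below combines a neural-network probability vector with per-digit template-matching scores under a simple weighted rule; the weighting and the rejection threshold are assumptions, not the paper's heuristic rules.

```python
import numpy as np

def hybrid_digit_decision(nn_probs, template_scores, weight=0.5):
    """nn_probs, template_scores: length-10 arrays (one entry per digit 0-9)."""
    nn_probs = np.asarray(nn_probs, dtype=float)
    tm = np.asarray(template_scores, dtype=float)
    # Rescale raw match scores to [0, 1] so the two sources are comparable.
    tm = (tm - tm.min()) / (np.ptp(tm) + 1e-12)
    combined = weight * nn_probs + (1.0 - weight) * tm
    digit = int(np.argmax(combined))
    confident = combined[digit] > 0.6          # simple rejection threshold
    return digit, confident
```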