• Title/Summary/Keyword: template matching


Generation Method of Spatiotemporal Image for Detecting Leukocyte Motions in a Microvessel (미소혈관내 백혈구 운동검출을 위한 시공간 영상 생성법)

  • Kim, Eung Kyeu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.9
    • /
    • pp.99-109
    • /
    • 2016
  • This paper presents a method for generating spatiotemporal images to detect leukocyte motions in a microvessel. Using the constraint that leukocytes move along the contour of the blood vessel wall, the method detects leukocyte motions and then generates spatiotemporal images. First, the translational motion caused by in vivo movement is removed by template matching. Next, the blood vessel region is detected by binarizing the temporal variance image with an automatic threshold selection method, and the contour of the blood vessel wall is expressed by a B-spline function. With the detected contour as an initial curve, the most accurate position of the plasma layer is determined as the spatial axis by a snake. Finally, the spatiotemporal images are generated. The experimental results, compared at each step over three image sequences, show that the spatiotemporal images are generated effectively.
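
The translational-motion removal step described above can be sketched as a sum-of-squared-differences template match over a small search window. This is a minimal illustration, not the paper's implementation; the template location, size, and search range are assumptions.

```python
import numpy as np

def estimate_shift(ref, frame, top, left, h, w, search=5):
    """Estimate the (dy, dx) translation of `frame` relative to `ref` by
    matching a template cut from `ref` over a small search window (SSD)."""
    tpl = ref[top:top + h, left:left + w].astype(float)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue  # candidate window falls outside the frame
            cand = frame[y:y + h, x:x + w].astype(float)
            ssd = float(np.sum((cand - tpl) ** 2))
            if ssd < best:
                best, best_dy, best_dx = ssd, dy, dx
    return best_dy, best_dx
```

Subtracting the estimated shift from each frame stabilizes the sequence before the vessel-wall contour is extracted.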

VALIDATION OF SEA ICE MOTION DERIVED FROM AMSR-E AND SSM/I DATA USING MODIS DATA

  • Yaguchi, Ryota;Cho, Ko-Hei
    • Proceedings of the KSRS Conference
    • /
    • 2008.10a
    • /
    • pp.301-304
    • /
    • 2008
  • Since longer-wavelength microwave radiation can penetrate clouds, satellite passive microwave sensors can observe sea ice over the entire polar region on a daily basis. Thus, it is becoming popular to derive sea ice motion vectors from a pair of satellite passive microwave images observed at a one- or few-day interval. Usually, the accuracy of the derived vectors is validated by comparison with position data from drifting buoys. However, the number of buoys available for validation is always quite limited compared with the large number of vectors derived from satellite images. In this study, sea ice motion vectors automatically derived from pairs of AMSR-E 89 GHz images (IFOV = 3.5 × 5.9 km) by image-to-image cross correlation were validated against sea ice motion vectors manually derived from pairs of cloudless MODIS images (IFOV = 250 × 250 m). Since AMSR-E and MODIS are both on NASA's Aqua satellite, the observation times of the two sensors are the same. The relative errors of the AMSR-E vectors against the MODIS vectors were calculated, and the validation was conducted for 5 scenes. If we accept a relative error of less than 30% as a correct vector, 75% to 92% of the AMSR-E vectors derived from one scene were correct. On the other hand, the percentage of correct sea ice vectors derived from a pair of SSM/I 85 GHz images (IFOV = 15 × 13 km) observed nearly simultaneously with one of the AMSR-E images was 46%. The difference in accuracy between AMSR-E and SSM/I reflects the difference in IFOV. The accuracies of the H and V polarizations differed from scene to scene, which may reflect differences in the sea ice distribution and its snow cover in each scene.
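
The image-to-image cross correlation used here can be sketched as follows: a patch from the first image is swept over a search window in the second, and the displacement maximizing the zero-mean normalized cross-correlation is taken as the motion vector. The patch position, size, and search radius below are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def ice_motion_vector(img1, img2, top, left, size, search=4):
    """Displacement (dy, dx) of the patch at (top, left) in img1 that
    maximizes NCC in img2 -- one motion vector per patch."""
    patch = img1[top:top + size, left:left + size]
    best, vec = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            cand = img2[y:y + size, x:x + size]
            if cand.shape != patch.shape:
                continue  # search position runs off the image
            s = ncc(patch, cand)
            if s > best:
                best, vec = s, (dy, dx)
    return vec
```

Repeating this over a grid of patches yields the dense vector field that is then validated against the manually derived MODIS vectors.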


Detecting and Tracking Vehicles at Local Region by using Segmented Regions Information (분할 영역 정보를 이용한 국부 영역에서 차량 검지 및 추적)

  • Lee, Dae-Ho;Park, Young-Tae
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.10
    • /
    • pp.929-936
    • /
    • 2007
  • A novel vision-based scheme for extracting traffic parameters in real time is proposed in this paper. Vehicles are detected and tracked within a local region installed by an operator. The local region is divided into segmented regions by edges and frame differences, and the segmented regions are classified into vehicle, road, shadow, and headlight by statistical and geometrical features. Vehicles are detected from the result of this classification. Traffic parameters such as velocity, length, occupancy, and distance are estimated by tracking with template matching within the local region. Because no background image is used, the scheme can operate under various conditions of weather, time of day, and location. It performs well, with a 90.16% detection rate across various databases. If the camera direction, angle, and iris are fitted to the operating conditions, we expect the scheme to serve as the core of traffic monitoring systems.
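
The frame-difference step that seeds the segmentation can be sketched in a few lines: pixels that change between consecutive frames are marked, and the fraction of changed pixels in the local region gives a crude occupancy estimate. The threshold value is an assumption, and the paper's full pipeline (edge segmentation, four-class classification) is not reproduced here.

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh=25):
    """Binary change mask from the absolute frame difference -- a common
    first step for segmenting moving vehicles in a local detection region."""
    # Widen to int16 first so uint8 subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def occupancy(mask):
    """Fraction of changed pixels in the local region (a simple
    occupancy proxy)."""
    return float(mask.mean())
```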

The Implementation of Automatic Compensation Modules for Digital Camera Image by Recognition of the Eye State (눈의 상태 인식을 이용한 디지털 카메라 영상 자동 보정 모듈의 구현)

  • Jeon, Young-Joon;Shin, Hong-Seob;Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.3
    • /
    • pp.162-168
    • /
    • 2013
  • This paper examines the implementation of automatic compensation modules for digital camera images taken while a person's eyes are closed. The modules detect the face and eye regions and then recognize the eye state. If an image is taken while the eyes are closed, the modules correct the eye region and produce an output image using the most satisfactory eye state among the past frames stored in a buffer. For precise recognition of the face and eyes, image correction is first carried out using the SURF algorithm and a homography method. The Haar-like feature algorithm is used to detect the face and eye regions. To decide whether an eye is open, a similarity comparison is used together with template matching of the eye region. The modules were tested in various facial environments and confirmed to effectively correct images containing faces.
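
The open/closed decision by template matching can be sketched as comparing the detected eye region against an "open" and a "closed" template by normalized correlation and taking the better match. The templates and the correlation-based score are illustrative assumptions, not the paper's exact similarity measure.

```python
import numpy as np

def is_eye_open(eye_patch, open_tpl, closed_tpl):
    """Classify the eye state by template matching: return True when the
    eye region correlates better with the 'open' template."""
    def score(tpl):
        # Zero-mean normalized correlation between patch and template.
        a = eye_patch - eye_patch.mean()
        b = tpl - tpl.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0
    return score(open_tpl) >= score(closed_tpl)
```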

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.55-70
    • /
    • 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that aims to provide a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state from facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are assumed to pass through the neutral state. In this work, however, we propose an enhanced transition framework that allows direct transitions between emotional states without passing through the neutral state, in addition to the traditional transition model. For the localization of facial features in the video sequence we exploit template matching and optical flow. The facial feature displacements traced by the optical flow are used as input parameters to the HMM for facial expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.
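
The optical-flow tracking of a facial feature can be sketched with the classic single-window Lucas-Kanade estimate: spatial and temporal gradients inside a window around the feature are combined in a least-squares system whose solution is the displacement. This is a textbook sketch under the brightness-constancy assumption, not the paper's tracker; the window size is an assumption.

```python
import numpy as np

def lucas_kanade(im1, im2, y, x, win=3):
    """Single-window Lucas-Kanade flow at (y, x): solve the least-squares
    system built from spatial gradients (Ix, Iy) and the temporal
    difference It inside the window."""
    Iy, Ix = np.gradient(im1.astype(float))       # axis 0 = y, axis 1 = x
    It = im2.astype(float) - im1.astype(float)
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, -b, rcond=None)    # brightness constancy
    return v                                       # (vx, vy)
```

The per-feature displacements obtained this way are what feed the HMM as observation parameters.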


Automatic Recognition of Direction Information in Road Sign Image Using OpenCV (OpenCV를 이용한 도로표지 영상에서의 방향정보 자동인식)

  • Kim, Gihong;Chong, Kyusoo;Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.4
    • /
    • pp.293-300
    • /
    • 2013
  • Road signs are important infrastructure for safe and smooth traffic, providing useful information to drivers. To manage road signs systematically, it is necessary to establish a road sign database. Such a database can be built by manually detecting and recognizing signs from imagery, but this is time-consuming and costly. In this study, we propose algorithms for the automatic recognition of direction information in road sign images. We developed the algorithms using the OpenCV library and applied them to road sign images. To automatically detect and recognize direction information, we developed a program composed of modules for image enhancement, image binarization, arrow region extraction, interest point extraction, and template image matching. The results confirm the possibility of automatically recognizing direction information in road sign images.
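
The binarization module in such a pipeline is commonly Otsu's automatic threshold selection, which picks the gray level maximizing the between-class variance of the histogram. The sketch below is a plain-numpy version of that standard technique; the abstract does not state which binarization the authors used, so this is an assumption.

```python
import numpy as np

def otsu_threshold(img):
    """Automatic threshold selection (Otsu): choose the gray level that
    maximizes the between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))       # class-0 cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # empty classes score zero
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold form the foreground from which the arrow region is extracted.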

Accurate Pose Measurement of Label-attached Small Objects Using a 3D Vision Technique (3차원 비전 기술을 이용한 라벨부착 소형 물체의 정밀 자세 측정)

  • Kim, Eung-su;Kim, Kye-Kyung;Wijenayake, Udaya;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.10
    • /
    • pp.839-846
    • /
    • 2016
  • Bin picking is the task of picking a small object from a bin. For accurate bin picking, the 3D pose, position, and orientation of the object are required, because the object is mixed with other objects of the same type in the bin. Using this 3D pose information, a robotic gripper can pick an object with exact distance and orientation measurements. In this paper, we propose a 3D vision technique for accurately measuring the 3D position and orientation of small objects that have a paper label stuck to their surface. We use the maximally stable extremal regions (MSER) algorithm to detect label areas in the left image of a stereo camera. In each label area, image features are detected and their correspondences with the right image are determined by a stereo vision technique. The 3D position and orientation of the objects are then measured accurately using a transformation from the camera coordinate system to a new label coordinate system. For stable measurement during a bin-picking task, the pose information is filtered by averaging at fixed time intervals. Our experimental results indicate that the proposed technique yields a pose accuracy of 0.4-0.5 mm in position and 0.2-0.6° in angle.
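
The fixed-interval pose averaging mentioned at the end can be sketched as a sliding-window mean over recent pose samples. The six-element (x, y, z, roll, pitch, yaw) layout and the window length are illustrative assumptions; averaging angles this way also assumes they stay far from wrap-around.

```python
from collections import deque
import numpy as np

class PoseFilter:
    """Sliding-window average of pose samples (x, y, z, roll, pitch, yaw)
    to stabilise measurements during a bin-picking task."""
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)  # oldest sample drops out automatically

    def update(self, pose):
        """Add a new pose sample and return the windowed average."""
        self.buf.append(np.asarray(pose, dtype=float))
        return np.mean(self.buf, axis=0)
```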

Meter Numeric Character Recognition Using Illumination Normalization and Hybrid Classifier (조명 정규화 및 하이브리드 분류기를 이용한 계량기 숫자 인식)

  • Oh, Hangul;Cho, Seongwon;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.1
    • /
    • pp.71-77
    • /
    • 2014
  • In this paper, we propose an improved numeric character recognition method that works well under low-light and shaded illumination. An LN (Local Normalization) preprocessing step is used to enhance the quality of low-light and shaded images. The reading area is detected using line-segment information extracted from the illumination-normalized meter images, and a three-phase procedure then segments the numeric characters within the reading area. Finally, an efficient hybrid classifier, a combination of a multi-layered feedforward neural network and a template matching module, classifies the segmented numeric characters, with robust heuristic rules applied to the classification. Experiments were conducted on a meter image database built from various kinds of meters under low-light and shaded illumination. The experimental results indicate the superiority of the proposed method.
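
Local normalization typically subtracts a local mean and divides by a local standard deviation, so that dim and shaded regions end up on a comparable scale. The sketch below is one common formulation, not necessarily the exact LN variant used in the paper; the window size is an assumption, and the loops favour clarity over speed.

```python
import numpy as np

def local_normalize(img, win=7):
    """Local normalization: for each pixel, subtract the mean and divide by
    the standard deviation of its win x win neighbourhood."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            s = w.std()
            # Guard against flat neighbourhoods (zero variance).
            out[i, j] = (img[i, j] - w.mean()) / (s if s > 1e-6 else 1.0)
    return out
```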

Separation of the Occluding Object from the Stack of 3D Objects Using a 2D Image (겹쳐진 3차원 물체의 2차원 영상에서 가리는 물체의 구분기법)

  • Song, Pil-Jae;Hong, Min-Cheol;Han, Heon-Su
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.11-22
    • /
    • 2004
  • Conventional algorithms for separating overlapped objects are mostly based on template matching, so their application domain is restricted to 2D objects and the processing time grows with the number of templates (object models). To solve these problems, this paper proposes a new approach that separates the occluding object from a stack of 3D objects using the relationships between surfaces, without any prior information about the objects. The proposed algorithm considers an object to be a combination of surfaces, each consisting of a set of boundary edges. Overlap of 3D objects appears as overlap of surfaces, and thus as crossings of edges in the 2D image. Based on this observation, the types of edge crossings are classified, from which the types of overlap of the 3D objects can be identified. The relationships between surfaces are represented by an attributed graph in which the types of overlap are represented by relation values. Using these relation values, the surfaces belonging to the same object are discerned, and the occluding object on top of the stack can be separated. The performance of the proposed algorithm has been proved by experiments using overlapped images of 3D objects selected from standard industrial parts.
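
The final grouping step, discerning which surfaces belong to the same object from the relation values in the attributed graph, can be sketched with a union-find pass: surface pairs whose relation value marks them as parts of one object are merged, while occlusion relations are left alone. The string relation labels below are a simplification of the paper's relation values.

```python
def group_surfaces(n, relations):
    """Group n surfaces into objects from an attributed-graph edge map.
    `relations` maps a surface pair (a, b) to a relation value; pairs
    labelled 'same' are merged into one object (union-find)."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for (a, b), rel in relations.items():
        if rel == "same":
            parent[find(a)] = find(b)

    groups = {}
    for s in range(n):
        groups.setdefault(find(s), []).append(s)
    return sorted(groups.values())
```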

A Study on Attitude Estimation of UAV Using Image Processing (영상 처리를 이용한 UAV의 자세 추정에 관한 연구)

  • Paul, Quiroz;Hyeon, Ju-Ha;Moon, Yong-Ho;Ha, Seok-Wun
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.5
    • /
    • pp.137-148
    • /
    • 2017
  • Recently, researchers have been actively seeking to utilize Unmanned Aerial Vehicles (UAVs) for military and industrial applications. One such application is covertly tracing a preceding flight, for example when the route of a suspicious reconnaissance aircraft must be tracked; this requires estimating the attitude of the target flight, its roll, yaw, and pitch angles, at each instant. In this paper, we propose a method for estimating the attitude of a target aircraft in real time using video provided by an external camera on the following aircraft. Image processing methods such as color-space division and template matching, together with statistical methods such as linear regression, are applied to detect key points and estimate the Euler angles. Comparing the X-Plane flight data with the data estimated in the simulation experiment shows that the proposed method can be an effective way to estimate the flight attitude of the preceding flight.
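
As a toy illustration of estimating one Euler angle from detected key points: if two wing-tip key points are located in the image, the roll angle of the target can be read off as the inclination of the line joining them. The wing-tip key points are a hypothetical example; the abstract does not specify which key points the authors use, and a real image coordinate system has an inverted y axis.

```python
import math

def roll_from_wingtips(left_tip, right_tip):
    """Roll angle in degrees from two (x, y) wing-tip key points, taken as
    the inclination of the line through them (y assumed to increase upward)."""
    dx = right_tip[0] - left_tip[0]
    dy = right_tip[1] - left_tip[1]
    return math.degrees(math.atan2(dy, dx))
```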