• Title/Summary/Keyword: color edge detection

Recognition of Events by Human Motion for Context-aware Computing (상황인식 컴퓨팅을 위한 사람 움직임 이벤트 인식)

  • Cui, Yao-Huan;Shin, Seong-Yoon;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.47-57 / 2009
  • Event detection and recognition is an active and challenging topic in computer vision. This paper describes a new method for recognizing events caused by human motion in video sequences of an office environment. The proposed approach analyzes human motion using Motion History Image (MHI) sequences and is invariant to body shape, the type or color of clothing, and the position of target objects. The method has two advantages: it is less sensitive to illumination changes than methods that rely on the color of objects of interest, and it is scale invariant compared with methods that rely on prior knowledge such as object appearance or shape. Combined with edge detection, geometric characteristics of the human shape in the MHI sequences are used as features. A further advantage is that the event detection framework is easy to extend by adding event descriptions. The proposed method is thus a core technology for event detection in context-aware computing as well as for vision-based surveillance systems. (See the sketch below.)
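
To make the MHI idea concrete, here is a minimal sketch (not the authors' implementation) of building a Motion History Image from frame differences and taking an edge map of the decayed silhouette; the frame-difference threshold, one-second duration, and Canny levels are illustrative assumptions.

```python
import cv2
import numpy as np

# Minimal MHI sketch: moving pixels are stamped with the current time and fade
# out after MHI_DURATION seconds; edges of the decayed silhouette give shape
# cues that do not depend on clothing color. Values are illustrative.
MHI_DURATION = 1.0     # seconds a motion trace remains visible
DIFF_THRESHOLD = 32    # frame-difference level treated as motion

def mhi_edges(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, np.float32)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    t = 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        t += 1.0 / fps
        moving = cv2.absdiff(gray, prev) >= DIFF_THRESHOLD
        prev = gray
        mhi[moving] = t                               # stamp moving pixels with time
        mhi[~moving & (mhi < t - MHI_DURATION)] = 0   # expire old traces
        silhouette = np.uint8(255 * np.clip((mhi - (t - MHI_DURATION)) / MHI_DURATION, 0, 1))
        yield cv2.Canny(silhouette, 50, 150)          # edge map of the moving shape
```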

Implementation of an Effective Human Head Tracking System Using the Ellipse Modeling and Color Information (타원 모델링과 칼라정보를 이용한 효율적인 머리 추적 시스템 구현)

  • Park, Dong-Sun;Yoon, Sook
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.6 / pp.684-691 / 2001
  • In this paper, we design and implement a system that recognizes and tracks a human head in a sequence of images. Skin color and ellipse modeling are used as features to recognize the head, and a modified time-varying edge detection method together with vertical projection is used to extract motion regions from images with very complex backgrounds. To select the head among the candidate regions, thresholding on the I component of the YIQ color space is combined with matching against the ellipse model. The system performs well on rotated, occluded, and tilted heads as well as on normal upright heads, and a combination of motion-based and recognition-based tracking keeps the head tracked accurately even when it moves fast. (See the sketch below.)
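
A rough sketch of two of the steps described above: thresholding the I component of YIQ as a skin candidate and fitting an ellipse to the largest candidate blob. The RGB-to-YIQ coefficients are the standard ones, while the I range is an illustrative guess rather than the paper's values.

```python
import cv2
import numpy as np

def skin_mask_yiq(bgr, i_low=10, i_high=60):
    """Skin-candidate mask from the I (in-phase) channel of YIQ; range is illustrative."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (0.596 * r - 0.274 * g - 0.322 * b) * 255.0   # standard YIQ I channel
    return ((i >= i_low) & (i <= i_high)).astype(np.uint8) * 255

def head_ellipse(mask):
    """Fit an ellipse to the largest skin blob as a rough head hypothesis."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    if len(biggest) < 5:              # cv2.fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(biggest)    # ((cx, cy), (major, minor), angle)
```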

An Iris Detection Algorithm for Disease Prediction based Iridology (홍채학기반이 질병예측을 위한 홍채인식 알고리즘)

  • Cho, Young-bok;Woo, Sung-Hee;Lee, Sang-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.107-114 / 2017
  • Iridology is an alternative-medicine practice that diagnoses a patient's condition from differences in iris pattern, color, and other characteristics. This paper proposes a disease prediction algorithm that analyzes iris change from the differential image of iris photographs, so that iris change can serve as an indicator in a patient's health examination. Because most previous studies only locate sign patterns in a single iris image, they are not sufficient for an iris diagnosis system. We developed an iris diagnosis system based on an image processing approach; it provides extraction algorithms for eight major iris signs, with manual correction to improve the accuracy of the analysis. As a result, the PSNR of the edge-detected image is about 132, and pattern-matching region recognition shows the practical feasibility of automatic diagnosis that estimates the condition of the human body from the iris, with about 91% accuracy. (See the sketch below.)
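
The differential-image and PSNR steps might look roughly like the following sketch; the file names are hypothetical and the two images are assumed to be aligned grayscale iris photographs of the same eye.

```python
import cv2
import numpy as np

# Hypothetical, pre-aligned iris photographs of the same eye
before = cv2.imread("iris_before.png", cv2.IMREAD_GRAYSCALE)
after  = cv2.imread("iris_after.png",  cv2.IMREAD_GRAYSCALE)

diff  = cv2.absdiff(after, before)     # differential image highlights changed regions
edges = cv2.Canny(diff, 50, 150)       # edge map of the changed iris areas

# PSNR between the two images (illustrative quality measure)
mse  = np.mean((before.astype(np.float64) - after.astype(np.float64)) ** 2)
psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```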

Multi-legged robot system enabled to decide route and recognize obstacle based on hand posture recognition (손모양 인식기반의 경로교사와 장애물 인식이 가능한 자율보행 다족로봇 시스템)

  • Kim, Min-Sung;Jeong, Woo-Won;Kwan, Bae-Guen;Kang, Dong-Joong
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.8 / pp.1925-1936 / 2010
  • In this paper, a multi-legged robot was designed and built using a stable walking-pattern algorithm. The robot has an embedded camera and wireless communication, and can recognize both hand postures and obstacles. It decides its moving path and recognizes and avoids obstacles by applying the Hough transform to edge-detected images from the image sensor. The robot is controlled by hand posture using the Mahalanobis distance to previously learned average skin-color values, which determine the destination. The developed system showed an obstacle detection rate of 96% and a hand posture recognition rate of 94%. (See the sketch below.)
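
A compact sketch of the two recognition ingredients named above: Hough-transformed edges for obstacle boundaries and a Mahalanobis distance to learned skin color for hand-posture pixels. All thresholds are illustrative, and `mean` / `inv_cov` stand in for previously learned skin-color statistics.

```python
import cv2
import numpy as np

def detect_obstacle_lines(bgr):
    """Straight segments (candidate obstacle boundaries) from edge map + Hough transform."""
    gray  = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=8)

def mahalanobis_skin(bgr, mean, inv_cov):
    """Per-pixel Mahalanobis distance to a learned skin-color distribution."""
    pixels = bgr.reshape(-1, 3).astype(np.float64)
    d = pixels - mean
    dist2 = np.einsum("ij,jk,ik->i", d, inv_cov, d)   # squared Mahalanobis distance
    return np.sqrt(dist2).reshape(bgr.shape[:2])
```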

Facial Contour Extraction in Moving Pictures by using DCM mask and Initial Curve Interpolation of Snakes (DCM 마스크와 스네이크의 초기곡선 보간에 의한 동영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won;Jun Byung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.4 s.310 / pp.58-66 / 2006
  • In this paper, we apply a DCM (Dilation of Color and Motion information) mask and Active Contour Models (snakes) to extract facial outlines in moving pictures with complex backgrounds. First, we propose the DCM mask, built by applying morphological dilation and an AND operation to combine facial color and motion information; this mask isolates the facial region from the complex background and removes noise from the image energy. To overcome the sensitivity of Active Contour Models to their initial curves, the initial curves are set automatically according to the rotation angle estimated from the geometric ratios of facial features. Both edge intensity and brightness are used as the snake's image energy so that the contour can be extracted even where edges are weak. For experiments, we acquired a total of 480 frames with various head poses from sixteen people, with both eyes visible, by taking pictures indoors and by capturing broadcast footage. The results show that a more accurate facial contour is extracted, at an average processing time of 0.28 seconds, when initial curves are interpolated according to the facial rotation angle and the combined edge-intensity and brightness image energy is used. (See the sketch below.)
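
A simplified sketch of a DCM-style mask: dilate a motion mask and a coarse skin-color mask and combine them with AND. The YCrCb skin range, kernel size, and difference threshold below are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def dcm_mask(frame_bgr, prev_gray):
    """Combine dilated motion and skin-color masks; returns (mask, current gray frame)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Motion information from simple frame differencing
    motion = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255, cv2.THRESH_BINARY)[1]

    # Facial color information from a coarse skin range in YCrCb (illustrative bounds)
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    color = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Dilate both cues, then AND them so only moving skin-colored areas survive
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    motion = cv2.dilate(motion, kernel)
    color  = cv2.dilate(color, kernel)
    return cv2.bitwise_and(motion, color), gray
```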

A new approach for overlay text detection from complex video scene (새로운 비디오 자막 영역 검출 기법)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of Broadcast Engineering / v.13 no.4 / pp.544-553 / 2008
  • With the development of video editing technology, overlay text is increasingly inserted into video content to give viewers better visual understanding. Since the content of a scene or the editor's intention is well represented by the inserted text, it is useful for video information retrieval and indexing. Most previous approaches are based on low-level features such as edge, color, and texture information, but existing methods have difficulty handling text with varying contrast or text inserted over a complex background. In this paper, we propose a novel framework to localize overlay text in a video scene. Based on our observation that transient colors exist between inserted text and its adjacent background, a transition map is generated; candidate regions are then extracted from the transition map, and overlay text is finally determined from the density of state in each candidate. The proposed method is robust to the color, size, position, style, and contrast of the overlay text, and it is language independent. Text regions are also updated between frames to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method. (See the sketch below.)
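
A very simplified stand-in for the transition-map idea: mark pixels with strong horizontal intensity jumps (where text and background alternate) and keep connected regions that are dense enough. The thresholds are illustrative, and the paper's color-based transition measure is reduced here to a grayscale jump.

```python
import cv2
import numpy as np

def transition_candidates(bgr, jump=40, min_density=0.15):
    """Candidate overlay-text boxes from a crude horizontal-jump map."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    jump_map = (np.abs(np.diff(gray, axis=1)) > jump).astype(np.uint8) * 255
    jump_map = np.pad(jump_map, ((0, 0), (0, 1)))        # restore original width

    # Connected components of the dilated map give candidate regions
    dilated = cv2.dilate(jump_map, np.ones((3, 9), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dilated)
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        density = jump_map[y:y + h, x:x + w].mean() / 255.0
        if density > min_density:                        # keep only dense candidates
            boxes.append((x, y, w, h))
    return boxes
```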

Image Tracking Based Lane Departure Warning and Forward Collision Warning Methods for Commercial Automotive Vehicle (이미지 트래킹 기반 상용차용 차선 이탈 및 전방 추돌 경고 방법)

  • Kim, Kwang Soo;Lee, Ju Hyoung;Kim, Su Kwol;Bae, Myung Won;Lee, Deok Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.39 no.2 / pp.235-240 / 2015
  • With the advancement of digital equipment, active safety systems are increasingly demanded in the market for medium and heavy commercial vehicles over 4.5 tons, not only in the passenger-car market. Unlike in a passenger car, the camera in a medium or heavy commercial vehicle is mounted relatively high, which is a disadvantage for lane recognition. In this work, we present a lane recognition method based on spatial-domain processing with Sobel edge detection, the Hough transform, and color conversion correction. We also propose a low-error method for recognizing vehicles ahead, reducing the detection errors of frontal-camera object recognition methods such as Haar-like features, AdaBoost, SVM, and template matching. Vehicle tests verify a lane-recognition reliability of over 98%. (See the sketch below.)
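
The lane-detection step could be sketched as follows, combining a simple contrast correction, a horizontal-gradient Sobel edge map, and a probabilistic Hough transform. The region-of-interest split and all thresholds are illustrative choices for a high-mounted camera, not the paper's calibration.

```python
import cv2
import numpy as np

def detect_lanes(bgr):
    """Candidate lane segments from Sobel edges and a probabilistic Hough transform."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                     # simple contrast correction

    # Horizontal-gradient Sobel emphasizes near-vertical lane markings
    sobel = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)
    edges = cv2.threshold(cv2.convertScaleAbs(sobel), 60, 255, cv2.THRESH_BINARY)[1]

    h, w = edges.shape
    edges[: h // 2, :] = 0                            # keep only the lower (road) region

    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)
```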

Shape region segmentation based on color and edge characteristics of moving images (동영상의 컬러 및 에지 정보에 기초한 shape 영역 segmentation 기법 연구)

  • Park, Jin-Nam;Lee, Jae-Duck;Yoon, Sung-Soo;Huh, Young;Jung, Sung-Hwan
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2001.11b / pp.149-154 / 2001
  • As the MPEG-7 standard for multimedia content description advances rapidly, retrieval technologies based on it are also under active development. In research on content-based retrieval of large volumes of video, the first issue to address is grouping frames whose content is continuous. This requires real-time automatic cut detection at physical scene changes and automatic content description of the resulting cut frames. As a preprocessing step for the automatic content description of each cut frame, this paper proposes a method that automatically performs shape region segmentation using only the color characteristics and edge information of the frame at a scene change, without any prior information about the image and without user intervention. The performance of the proposed method is evaluated by the similarity between the segmented image and the original image through region comparison; simulation results show that the proposed algorithm segments regions correctly in more than 90% of cases on average and produces robust segmentation even for natural images whose colors are not clearly separable. (See the sketch below.)
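
A rough sketch of color-plus-edge region segmentation on a single cut frame: k-means color clustering produces homogeneous color regions and a Canny edge map marks region boundaries. The cluster count K and edge thresholds are illustrative, and this is only an approximation of the proposed method.

```python
import cv2
import numpy as np

def segment_regions(bgr, K=6):
    """Label map from k-means color clustering, with edge pixels marked as boundaries."""
    data = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, K, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    label_img = labels.reshape(bgr.shape[:2]).astype(np.int32)

    # Edge map keeps color regions from leaking across object boundaries
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 80, 160)
    label_img[edges > 0] = -1          # mark boundary pixels

    return label_img, centers.astype(np.uint8)
```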

Development of Classification System for Thermal Comfort Behavior of Pigs by Image Processing and Neural Network (영상처리와 인공신경망을 이용한 돼지의 체온조절행동 분류 시스템 개발)

  • 장동일;임영일;장홍희
    • Journal of Biosystems Engineering / v.24 no.5 / pp.431-438 / 1999
  • Environmental control based on interactive thermoregulatory behavior has many advantages over conventional temperature-based control methods in swine production. This study therefore compared various feature selection methods using postural images of growing pigs under various environmental conditions. A color CCD camera captured the behavioral images, which were converted to binary images and processed by thresholding, edge detection, and thinning to separate the pigs from the background. The following features were used as input patterns to the neural network: (1) perimeter, (2) area, (3) Fourier coefficients (5×5), (4) perimeter + area, (5) perimeter + Fourier coefficients, (6) area + Fourier coefficients, and (7) perimeter + area + Fourier coefficients. With these input patterns the neural network classified the training images with success rates of 96%, 96%, 96%, 100%, 100%, 96%, and 100%, and the testing images with 88%, 86%, 93%, 96%, 91%, 90%, and 98%, respectively. The combination of perimeter, area, and Fourier coefficients of the thinned images thus gave the best performance (98%) in behavioral classification. (See the sketch below.)
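
The feature-extraction stage could be sketched as below: perimeter, area, and Fourier descriptors of the pig silhouette contour taken from a binary image. Note that the paper's 5×5 Fourier coefficient grid is simplified here to the first few coefficients of a contour-based FFT.

```python
import cv2
import numpy as np

def shape_features(binary, n_fourier=10):
    """Perimeter, area, and contour Fourier descriptors of the largest blob."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)

    perimeter = cv2.arcLength(c, True)
    area = cv2.contourArea(c)

    # Fourier descriptors of the contour treated as a complex signal x + iy
    pts = c[:, 0, :].astype(np.float64)
    signal = pts[:, 0] + 1j * pts[:, 1]
    coeffs = np.abs(np.fft.fft(signal))[:n_fourier]
    coeffs = coeffs / (coeffs[0] + 1e-9)          # crude scale normalization

    return np.hstack([perimeter, area, coeffs])   # feature vector for the classifier
```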

Car Frame Extraction using Background Frame in Video (동영상에서 배경프레임을 이용한 차량 프레임 검출)

  • Nam, Seok-Woo;Oh, Hea-Seok
    • The KIPS Transactions: Part B / v.10B no.6 / pp.705-710 / 2003
  • In recent years, with the rapid development of multimedia technology, video database systems that retrieve video data efficiently have become a core technology in the information-oriented society. This paper describes an efficient automatic frame detection and localization method for content-based video retrieval. The frame extraction part consists of an incoming/outgoing car frame extraction stage and a car number (plate) frame extraction stage, which yield the start and end times of each car sequence as well as the plate frames. Frames are sampled from the video at fixed time intervals, and key frames are selected with a color histogram and an edge operation. The recognized car frames can then be searched by content-based retrieval. (See the sketch below.)
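
A sketch of the background-frame idea under simple assumptions: the first frame is taken as the empty background, a frame is flagged as containing a car when enough pixels differ from that background, and key frames are picked where the color histogram changes sharply. All thresholds are illustrative.

```python
import cv2
import numpy as np

def car_key_frames(video_path, presence_ratio=0.05, hist_jump=0.3):
    """Indices of key frames in which a car is present, using a fixed background frame."""
    cap = cv2.VideoCapture(video_path)
    ok, background = cap.read()
    if not ok:
        return []
    bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
    prev_hist, key_frames, idx = None, [], 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Car present if enough pixels differ from the background frame
        moving = cv2.threshold(cv2.absdiff(gray, bg_gray), 30, 255, cv2.THRESH_BINARY)[1]
        present = (moving > 0).mean() > presence_ratio

        # Key frame when the color histogram changes sharply from the previous one
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, None).flatten()
        if present and (prev_hist is None or
                        cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > hist_jump):
            key_frames.append(idx)
        prev_hist = hist
    return key_frames
```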