• Title/Abstract/Keywords: Color-based Vision System


능동 스테레오 비젼 시스템을 이용한 자율이동로봇의 목표물 추적에 관한 연구 (Study on the Target Tracking of a Mobile Robot Using Active Stereo-Vision System)

  • 이희명;이수희;이병룡;양순용;안경관
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2003년도 춘계학술대회 논문집 / pp.915-919 / 2003
  • This paper presents a fuzzy-motion-control based tracking algorithm for mobile robots, which uses geometrical information derived from the active stereo-vision system mounted on the mobile robot. The active stereo-vision system consists of two color cameras that rotate in two angular dimensions. With the stereo-vision system, the center position and depth information of the target object can be calculated. The proposed fuzzy motion controller calculates the tracking velocity and angular position of the mobile robot, which keeps the robot following the object at a constant distance and orientation.
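The abstract does not spell out the geometry; the following is a minimal sketch of how target depth could be recovered from a calibrated, rectified stereo pair using the standard disparity relation. The focal length, baseline, and pixel coordinates are purely illustrative values, not the paper's setup.

```python
import numpy as np

def stereo_target_depth(u_left, u_right, focal_px, baseline_m):
    """Depth of a target from horizontal pixel coordinates in a rectified stereo pair.

    Assumes a calibrated, rectified pair: depth Z = f * B / disparity.
    """
    disparity = float(u_left - u_right)          # pixels
    if disparity <= 0:
        raise ValueError("Target must have positive disparity (closer than infinity).")
    return focal_px * baseline_m / disparity     # metres

# Hypothetical numbers: 700 px focal length, 12 cm baseline, 35 px disparity
print(stereo_target_depth(410, 375, focal_px=700.0, baseline_m=0.12))  # ~2.4 m
```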


Computer Vision-based Method to Detect Fire Using Color Variation in Temporal Domain

  • Hwang, Ung;Jeong, Jechang;Kim, Jiyeon;Cho, JunSang;Kim, SungHwan
    • Quantitative Bio-Science / Vol. 37, No. 2 / pp.81-89 / 2018
  • High false-detection rates commonly hinder immediate vision-based fire monitoring systems. To circumvent this challenge, we propose a fire detection algorithm that exploits RGB color variation in the temporal domain, aiming to reduce false detection rates. Despite interfering images (e.g., background noise and sudden intervention), the proposed method proves robust in capturing distinguishable temporal features of fire. In numerical studies, we carried out extensive real-data experiments on fire detection using 24 video sequences, indicating that the proposed algorithm serves as an effective decision rule for fire detection (e.g., false detection rate < 10%).
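The paper's exact decision rule is not reproduced here; the sketch below only illustrates the general idea of flagging fire-like pixels by combining red-channel dominance with frame-to-frame (temporal) variance. The window size, margin, and threshold are assumptions, not the authors' values.

```python
import numpy as np

def fire_candidate_mask(frames_rgb, var_thresh=150.0, red_margin=20):
    """Flag pixels whose red channel both dominates and fluctuates over time.

    frames_rgb: (T, H, W, 3) uint8 stack of consecutive video frames.
    Returns a boolean (H, W) mask of fire-like candidate pixels.
    """
    stack = frames_rgb.astype(np.float32)
    r, g, b = stack[..., 0], stack[..., 1], stack[..., 2]
    red_dominant = (r.mean(axis=0) > g.mean(axis=0) + red_margin) & \
                   (r.mean(axis=0) > b.mean(axis=0) + red_margin)
    temporal_var = r.var(axis=0)   # flicker of the red channel over the window
    return red_dominant & (temporal_var > var_thresh)
```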

랜덤 포레스트와 칼라 코렐로그램을 이용한 도로추출 (Road Extraction Based on Random Forest and Color Correlogram)

  • 최지혜;송광열;이준웅
    • 제어로봇시스템학회논문지 / Vol. 17, No. 4 / pp.346-352 / 2011
  • This paper presents a road-extraction system for traffic images from a single camera. The road in the images is subject to large changes in appearance because of environmental effects. The proposed system is based on the integration of color correlograms and a random forest. The color correlogram captures the color properties of an image well, and with the random forest, road extraction is formulated as a learning problem. The combination of color correlograms and the random forest yields a robust system capable of extracting the road under highly variable conditions.
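As a rough illustration of this pipeline (not the authors' implementation), the sketch below computes a simplified color autocorrelogram per image patch, considering only a single horizontal pixel distance, and feeds it to scikit-learn's RandomForestClassifier. The bin count, distance, and patch labelling are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def autocorrelogram(img_rgb, n_bins=4, d=1):
    """Simplified color autocorrelogram: for each quantized color, the probability
    that the horizontal neighbour at distance d has the same color."""
    q = (img_rgb // (256 // n_bins)).astype(np.int32)
    labels = q[..., 0] * n_bins * n_bins + q[..., 1] * n_bins + q[..., 2]
    base, shifted = labels[:, :-d], labels[:, d:]
    feats = np.zeros(n_bins ** 3)
    for c in range(n_bins ** 3):
        same_color = (base == c)
        feats[c] = ((same_color) & (shifted == c)).sum() / max(same_color.sum(), 1)
    return feats

# Hypothetical training data: image patches labelled road / non-road
# X = np.stack([autocorrelogram(p) for p in patches]); y = labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(X, y); road_prediction = clf.predict(X_test)
```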

야시조명계통 요구도 분석 (Analysis of Requirements for Night Vision Imaging System)

  • 권종광;이대열;김환우
    • 한국군사과학기술학회지 / Vol. 10, No. 3 / pp.51-61 / 2007
  • This paper concerns the requirements analysis for a night vision imaging system (NVIS), whose purpose is to intensify the available nighttime near-infrared (IR) radiation sufficiently to be perceived by the human eye on a miniature green phosphor screen. The requirements for NVIS are NVIS radiance (NR), chromaticity, daylight legibility/readability, etc. The NR is a quantitative measure of the night vision goggle (NVG) compatibility of a light source as viewed through the goggles. Chromaticity is the quality of a color as determined by its purity and dominant wavelength. Daylight legibility/readability is the degree to which words are readable based on appearance, and a measure of an instrument's ability to display incremental changes in its output value. In this paper, the requirements of NR, chromaticity, and daylight legibility/readability for Type I and Class B/C NVIS are analyzed, and the rationale behind those requirements is given.
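The abstract defines NR only qualitatively. As a hedged reference, NVIS radiance is usually written as a spectral integral of the goggle's relative response against the source radiance (the form used in MIL-STD-3009); the class-specific response curve and the integration limits below should be checked against the applicable standard.

```latex
% Sketch of the usual NVIS radiance definition:
% G(\lambda): relative NVG spectral response for the relevant NVIS class
% N(\lambda): spectral radiance of the light source under test
NR = \int_{450\,\mathrm{nm}}^{930\,\mathrm{nm}} G(\lambda)\, N(\lambda)\, \mathrm{d}\lambda
```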

Human Tracking using Multiple-Camera-Based Global Color Model in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6, No. 1 / pp.39-46 / 2006
  • We propose a global-color-model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an intelligent space, a human-robot coexistent system. An intelligent space is a space in which many intelligent devices, such as computers and sensors (e.g., color CCD cameras), are distributed; human beings can be a part of the intelligent space as well. One of the main goals of the intelligent space is to assist humans and to provide various services for them, so it must be able to perform various human-related tasks. One of these is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed over a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves through the space and the intelligent space should determine the appropriate one. This paper describes appearance-based tracking of unknown objects with the distributed vision system in the intelligent space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from these data. Then, we discuss the global color model built from the local color information. The learning process within the global model and the experimental results are also presented.
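A minimal sketch of the color-appearance idea, assuming a hue histogram as the local color model and Bhattacharyya similarity for cross-camera identity matching; the bin count, acceptance threshold, and function names are illustrative rather than the authors' global-model construction.

```python
import numpy as np

def hue_histogram(patch_hsv, n_bins=30):
    """Normalized hue histogram of an object patch (HSV image region)."""
    hist, _ = np.histogram(patch_hsv[..., 0], bins=n_bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def bhattacharyya(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))

def match_across_cameras(global_models, new_hist, accept=0.7):
    """Assign a newly observed object to the global identity with the most similar
    color model, or report it as unknown (None)."""
    scores = {oid: bhattacharyya(model, new_hist) for oid, model in global_models.items()}
    best = max(scores, key=scores.get) if scores else None
    return best if best is not None and scores[best] >= accept else None
```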

컬러 정보를 이용한 실시간 표정 데이터 추적 시스템 (Realtime Facial Expression Data Tracking System using Color Information)

  • 이윤정;김영봉
    • 한국콘텐츠학회논문지 / Vol. 9, No. 7 / pp.159-170 / 2009
  • Capturing the face and extracting facial expression data in real time is a crucial task for online 3D facial animation. Recently, vision-based methods that capture an actor's expression from a video stream and map it directly onto a 3D face model have been studied actively. This paper proposes a system that automatically detects the face and facial feature points from real-time video input and tracks them. The proposed system consists of face detection followed by facial feature-point extraction and tracking. For face detection, skin regions are segmented using a 3D YCbCr skin-color model, and a Haar-based detector determines whether a region is a face. The eye and mouth regions, which govern facial expression, are detected using brightness information and the characteristic color information of each region. From the detected eye and mouth regions, ten feature points defined with reference to the MPEG-4 FAPs are extracted, and their displacements across consecutive frames are obtained by tracking color probability distributions. Experimental results show that the proposed system tracks expression data at about 8 frames per second.
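A minimal OpenCV sketch of the first two stages described above: skin-color segmentation in YCrCb followed by Haar-cascade face verification. The fixed Cr/Cb bounds stand in for the paper's 3D YCbCr skin-color model and are assumptions.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_in_skin_regions(frame_bgr):
    """Segment skin-colored pixels in YCrCb, then verify face candidates with a Haar cascade."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Illustrative Cr/Cb skin bounds (the paper uses a full 3D YCbCr skin-color model)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=skin)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces  # list of (x, y, w, h) rectangles
```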

AGV 운행을 위한 비전기반 유도선 해석 기술 (A Vision Based Guideline Interpretation Technique for AGV Navigation)

  • 변성민;김민환
    • 한국멀티미디어학회논문지 / Vol. 15, No. 11 / pp.1319-1329 / 2012
  • AGVs are increasingly used on production lines, and magnetic-tape-guided AGVs, which are inexpensive and fast, are widely deployed. However, such AGV navigation systems suffer from high installation cost and poor flexibility when routes change, which makes them hard to apply to small-quantity, multi-product production or collaboration-based production systems. This paper presents a technique that uses camera vision to detect and interpret guidelines made of color tape or paint, which are very easy to install and modify. To allow AGV routes to be configured and changed freely, we present a method that automatically analyzes guideline sections with complex structures, such as branching and merging points, and we also present a method for determining a suitable guideline tracking direction for stable AGV navigation. Real-time navigation experiments with an actual industrial AGV running the proposed technique confirm that it can be applied reliably on the factory floor.
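A minimal sketch of the guideline-extraction step, assuming a yellow tape segmented by HSV thresholding and summarized by a fitted line for the tracking direction; the color range is illustrative, and the paper's branch/merge analysis is not covered here.

```python
import cv2

def guideline_direction(frame_bgr, lo=(20, 80, 80), hi=(35, 255, 255)):
    """Extract a color-tape guideline by HSV thresholding and return its direction.

    Returns (vx, vy, x0, y0) of the fitted line, or None if no guideline is visible.
    The HSV range corresponds to a yellow tape and is an assumption.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo, hi)
    pts = cv2.findNonZero(mask)
    if pts is None or len(pts) < 50:
        return None
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return float(vx), float(vy), float(x0), float(y0)
```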

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2005년도 ICCAS / pp.68-73 / 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems, intelligent transport systems (ITSs), and so forth. In particular, the idea of a distributed vision system is needed to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, and visual tracking of multiple targets in real time. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent), and solve the identity-matching problem for a target as a handover through a protocol-based approach. We propose the identified contract net (ICN) protocol for this approach. The ICN protocol is independent of the number of vision agents and requires no calibration between vision agents, which improves the speed, scalability, and modularity of the system. We apply the ICN protocol in the ubiquitous vision system constructed for our experiments; the system shows reliable results, and the ICN protocol operates successfully across several experiments.
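A hedged, contract-net-style sketch of the handover idea: the tracking agent announces the target, other vision agents bid with a visibility estimate, and the best bidder is awarded the track. The message structure and the estimate_visibility method are hypothetical and not the authors' actual ICN message format.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    agent_id: str
    visibility: float   # e.g., how well this camera expects to see the target

def handover(target_id, appearance_model, agents, current_agent):
    """Contract-net-style handover: announce, collect bids, award to the best bidder."""
    bids = [Bid(a.agent_id, a.estimate_visibility(target_id, appearance_model))
            for a in agents if a.agent_id != current_agent.agent_id]
    bids = [b for b in bids if b.visibility > 0.0]
    if not bids:
        return None                 # no agent can see the target; keep or drop the track
    winner = max(bids, key=lambda b: b.visibility)
    return winner.agent_id          # the awarded agent continues tracking target_id
```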


A Tracking-by-Detection System for Pedestrian Tracking Using Deep Learning Technique and Color Information

  • Truong, Mai Thanh Nhat;Kim, Sanghoon
    • Journal of Information Processing Systems / Vol. 15, No. 4 / pp.1017-1028 / 2019
  • Pedestrian tracking is a particular object tracking problem and an important component in various vision-based applications, such as autonomous cars and surveillance systems. Following several years of development, pedestrian tracking in videos remains challenging, owing to the diversity of object appearances and surrounding environments. In this research, we proposed a tracking-by-detection system for pedestrian tracking, which incorporates a convolutional neural network (CNN) and color information. Pedestrians in video frames are localized using a CNN-based algorithm, and then detected pedestrians are assigned to their corresponding tracklets based on similarities between color distributions. The experimental results show that our system is able to overcome various difficulties to produce highly accurate tracking results.
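A minimal sketch of the color-based data-association step, assuming CNN detections are already available as bounding boxes: HSV histograms are compared with the Bhattacharyya distance and matched to tracklets with the Hungarian algorithm. The histogram sizes and gating threshold are assumptions, not the paper's values.

```python
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

def color_hist(frame_bgr, box):
    """Normalized hue-saturation histogram of a detection box (x, y, w, h)."""
    x, y, w, h = box
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def assign_detections(frame_bgr, tracklet_hists, detection_boxes, max_dist=0.6):
    """Match detections to tracklets by Bhattacharyya distance between color histograms."""
    if not tracklet_hists or not detection_boxes:
        return []
    det_hists = [color_hist(frame_bgr, b) for b in detection_boxes]
    cost = np.array([[cv2.compareHist(t, d, cv2.HISTCMP_BHATTACHARYYA)
                      for d in det_hists] for t in tracklet_hists])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```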

가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발 (Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes)

  • 전영산;최종은;이정욱
    • 제어로봇시스템학회논문지 / Vol. 20, No. 11 / pp.1098-1102 / 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use a GPS (Global Positioning System) for its localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in images captured from the vision sensor are obtained using a GPU (Graphics Processing Unit) based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information obtained from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and the map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
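A minimal sketch of the mapping idea with scikit-learn: a Gaussian process fitted over 2D positions to an occupancy-like value, then queried at arbitrary locations. The kernel, the toy data, and the use of color consistency as the target value are assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: (x, y) positions of observed feature points and an
# occupancy-like value derived from their color consistency (1 = surface observed).
positions = np.array([[0.2, 1.0], [0.5, 1.1], [1.4, 0.3], [2.0, 2.2]])
occupancy = np.array([1.0, 1.0, 0.0, 1.0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(positions, occupancy)

# Query the continuous map at arbitrary locations; the GP returns a mean and uncertainty.
query = np.array([[0.35, 1.05], [1.7, 1.0]])
mean, std = gp.predict(query, return_std=True)
```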