• Title/Abstract/Keywords: FlowVision

Search results: 190

A Low Power Analog CMOS Vision Chip for Edge Detection Using Electronic Switches

  • Kim, Jung-Hwan;Kong, Jae-Sung;Suh, Sung-Ho;Lee, Min-Ho;Shin, Jang-Kyoo;Park, Hong-Bae;Choi, Chang-Auck
    • ETRI Journal / Vol. 27, No. 5 / pp. 539-544 / 2005
  • An analog CMOS vision chip for edge detection with power consumption below 20 mW was designed by adopting electronic switches. An electronic switch separates the edge detection circuit into two parts: a logarithmic-compression photocircuit and a signal-processing circuit for edge detection. The switch controls the connection between the two circuits; when it is OFF, it cuts off the current flow through the signal-processing circuit and restricts the current to below several hundred nA. The estimated power consumption of the chip, with 128 × 128 pixels, was below 20 mW. The vision chip was designed using a 0.25 µm 1-poly 5-metal standard full-custom CMOS process technology.

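The abstract's two power figures can be cross-checked with simple arithmetic. A back-of-the-envelope sketch, assuming a 2.5 V supply typical of a 0.25 µm process (the supply voltage is not stated in the abstract):

```python
# Back-of-the-envelope power budget for the 128 x 128 vision chip.
# Assumption (not from the abstract): 2.5 V supply, all pixels active.
PIXELS = 128 * 128          # 16384 pixels
VDD = 2.5                   # volts (assumed)
P_BUDGET = 20e-3            # 20 mW total budget from the abstract

i_total = P_BUDGET / VDD            # total current budget, amps
i_per_pixel = i_total / PIXELS      # per-pixel share, amps

print(f"total current budget : {i_total * 1e3:.1f} mA")
print(f"per-pixel budget     : {i_per_pixel * 1e9:.0f} nA")
```

Under this assumed supply, the per-pixel budget comes out to roughly 500 nA, which is consistent with the abstract's claim that the switch restricts current flow to below several hundred nA.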

Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition

  • 송희준;이선구;강태구;김동원;서삼준;박귀태
    • Proceedings of the KIEE Conference / 2006 KIEE Symposium, Information and Control Section / pp. 123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution of a biped robot, as well as for a human-robot interaction (HRI) system. For carrying out certain tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed on the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.

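The modified optical flow algorithm itself is not spelled out in the abstract, but the classical Lucas-Kanade least-squares step it presumably builds on can be sketched in plain NumPy on a synthetic image pair (the paper's modification is not reproduced here):

```python
import numpy as np

def lucas_kanade(img1, img2):
    """Estimate a single (u, v) flow vector over the whole window
    by solving the Lucas-Kanade normal equations A v = b."""
    Iy, Ix = np.gradient(img1)          # spatial gradients (axis 0 = y)
    It = img2 - img1                    # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)        # (u, v)

# Synthetic test: a Gaussian blob shifted by one pixel in x.
y, x = np.mgrid[0:40, 0:40]
blob = lambda cx, cy: np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * 3.0**2))
u, v = lucas_kanade(blob(20, 20), blob(21, 20))
print(f"estimated flow: u={u:.2f}, v={v:.2f}")   # close to (1, 0)
```

The single-window estimate above suffices for illustration; a tracker would apply the same solve per feature window, usually within an image pyramid for larger motions.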

Vision-Based Obstacle Collision Risk Estimation of an Unmanned Surface Vehicle

  • 우주현;김낙완
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 12 / pp. 1089-1099 / 2015
  • This paper proposes a vision-based collision risk estimation method for an unmanned surface vehicle. A robust image-processing algorithm is suggested to detect target obstacles from the vision sensor. Vision-based target motion analysis (TMA) was performed to transform visual information into target motion information; a camera model and optical flow are adopted for this purpose. Collision risk was calculated by a fuzzy estimator that uses target motion information and vision information as input variables. To validate the suggested collision risk estimation method, an experiment with an unmanned surface vehicle was performed.
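The fuzzy estimator is only named in the abstract; a minimal Mamdani-style sketch, with illustrative membership ranges and rules over distance and closing speed that are assumptions rather than the paper's actual design, might look like:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def collision_risk(distance_m, closing_speed_mps):
    """Mamdani-style fuzzy risk estimate in [0, 1].
    Membership ranges and the rule base are illustrative assumptions."""
    near = tri(distance_m, -1, 0, 60)
    far  = tri(distance_m, 40, 100, 201)
    slow = tri(closing_speed_mps, -1, 0, 6)
    fast = tri(closing_speed_mps, 4, 10, 21)

    # Rule base: (firing strength, risk level of the consequent)
    rules = [(min(near, fast), 1.0),   # near & fast -> high risk
             (min(near, slow), 0.5),   # near & slow -> medium
             (min(far,  fast), 0.5),   # far  & fast -> medium
             (min(far,  slow), 0.0)]   # far  & slow -> low
    num = sum(w * r for w, r in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(collision_risk(10, 8))   # close and fast-approaching -> high risk
print(collision_risk(90, 1))   # far and slow -> low risk
```

The weighted-average defuzzification keeps the output continuous in both inputs, which is what makes a fuzzy estimator attractive for graded risk rather than a binary alarm.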

Effect of Korean Red Ginseng Supplementation on Ocular Blood Flow in Patients with Glaucoma

  • Kim, Na-Rae;Kim, Ji-Hyun;Kim, Chan-Yun
    • Journal of Ginseng Research / Vol. 34, No. 3 / pp. 237-245 / 2010
  • The purpose of this study was to evaluate the effect of Korean red ginseng (KRG) on ocular blood flow in patients with glaucoma. In a prospective, randomized, placebo-controlled, double-masked crossover trial, 36 patients with open-angle glaucoma were consecutively recruited. Subjects were randomly assigned into two groups. Group A received 1.5 g KRG, administered orally three times daily for 12 weeks, followed by a wash-out period of 8 weeks and 12 weeks of placebo treatment (identical capsules filled with 1.5 g corn starch). Group B underwent the same regimen, but took the placebo first and then KRG. Blood pressure, heart rate, and intraocular pressure were measured at baseline and at the end of each phase of the study. Visual field examination and ocular blood flow measurements by the Heidelberg Retina Flowmeter were performed at baseline and at the end of each phase of the study. Changes in blood pressure, heart rate, intraocular pressure, visual field indices, and retinal peripapillary blood flow were evaluated. Blood pressure, heart rate, intraocular pressure, and visual field indices did not change after placebo or KRG treatment. After KRG treatment, retinal peripapillary blood flow in the temporal peripapillary region significantly improved (p=0.005). No significant changes were found in retinal peripapillary blood flow in either the rim region or the nasal peripapillary region (p=0.051 and 0.278, respectively). KRG ingestion appears to improve retinal peripapillary blood flow in patients with open-angle glaucoma. These results imply that KRG ingestion might be helpful for glaucoma management.

Moving Object Detection for a Biped Walking Robot Platform

  • 강태구;황상현;김동원;박귀태
    • Proceedings of the KIEE Conference / 2006 KIEE Conference, Information and Control Section / pp. 570-572 / 2006
  • This paper discusses a method of moving object detection for biped robot walking. Most research on vision-based object detection has focused on fixed-camera algorithms. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be utilized in real life. Methods for moving object detection have been developed for task assignment and execution of a biped robot as well as for human-robot interaction (HRI) systems, but these methods are not suitable for a biped walking robot. We therefore suggest an advanced method suited to a biped walking robot platform. For carrying out certain tasks, an object detecting system using a modified optical flow algorithm with a wireless vision camera is implemented on a biped walking robot.

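The fixed-camera baseline that the abstract contrasts against can be sketched as simple frame differencing; it works only while the camera is still between frames, which is precisely why a walking robot needs a modified approach:

```python
import numpy as np

def detect_moving_object(prev, curr, thresh=0.2):
    """Return the bounding box (y0, y1, x0, x1) of the region that
    changed between two frames, or None if nothing moved.
    Frame differencing: a static-camera baseline only."""
    mask = np.abs(curr.astype(float) - prev.astype(float)) > thresh
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max())

# Synthetic frames: a bright square moves 3 pixels to the right.
prev = np.zeros((30, 30)); prev[10:15, 5:10] = 1.0
curr = np.zeros((30, 30)); curr[10:15, 8:13] = 1.0
print(detect_moving_object(prev, curr))   # → (10, 14, 5, 12)
```

On a walking robot the whole frame changes with every step, so the difference mask fires everywhere; an optical-flow-based method can instead separate ego-motion from independent object motion.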

Pose Estimation of a Ground Test Bed Using Ceiling Landmarks and Optical Flow Based on Single Camera/IMU Fusion

  • 신옥식;박찬국
    • Journal of Institute of Control, Robotics and Systems / Vol. 18, No. 1 / pp. 54-61 / 2012
  • In this paper, a pose estimation method for a satellite GTB (ground test bed) using a vision/MEMS IMU (inertial measurement unit) integrated system is presented. The GTB, used to verify a satellite system on the ground, is similar to a mobile robot with thrusters and a reaction wheel as actuators, floating on the floor on compressed air. An EKF (extended Kalman filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs serving as ceiling landmarks. The fusion filter generally uses the positions of feature points in the image as measurements. However, this can cause position error due to the bias of the MEMS IMU whenever no camera image is obtained, if the bias is not properly estimated by the filter. Therefore, a fusion method is proposed that uses both the positions of feature points and the camera velocity determined from the optical flow of the feature points. Experiments verify that the proposed method is more robust to IMU bias than the method that uses only the feature-point positions.
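The benefit of adding a flow-derived velocity measurement can be illustrated with a single Kalman update on a toy 1-D constant-velocity state; the matrices below are illustrative only, and the paper's EKF over the full GTB state is of course more involved:

```python
import numpy as np

# State: [position, velocity]; prior after IMU-only propagation,
# where unestimated bias has inflated both uncertainties.
x_prior = np.array([1.0, 0.2])
P_prior = np.diag([0.5, 0.5])

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_post = x + K @ (z - H @ x)
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x_post, P_post

# (a) Position-only update (feature-point locations).
H_pos = np.array([[1.0, 0.0]])
x_a, P_a = kf_update(x_prior, P_prior, np.array([1.1]),
                     H_pos, np.array([[0.01]]))

# (b) Position + velocity update (feature points + optical flow).
H_pv = np.eye(2)
x_b, P_b = kf_update(x_prior, P_prior, np.array([1.1, 0.05]),
                     H_pv, np.diag([0.01, 0.01]))

print(np.diag(P_a))   # velocity variance untouched by position-only update
print(np.diag(P_b))   # velocity variance reduced by the flow measurement
```

With a diagonal prior, the position-only update leaves the velocity variance at its inflated value, while the added velocity measurement shrinks it directly, which is the mechanism behind the claimed robustness to IMU bias.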

Optical Flow Based Collision Avoidance of Multi-Rotor UAVs in Urban Environments

  • Yoo, Dong-Wan;Won, Dae-Yeon;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences / Vol. 12, No. 3 / pp. 252-259 / 2011
  • This paper focuses on dynamic modeling and control system design, as well as vision-based collision avoidance, for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are defined as rotary-wing UAVs with multiple rotors. They can be utilized in various military situations such as surveillance and reconnaissance, and for obtaining visual information from steep terrain or disaster sites. In this paper, a quad-rotor model is introduced along with its control system, which is designed around a proportional-integral-derivative controller and a vision-based collision avoidance control system. For a UAV to navigate safely in areas such as buildings and offices with many obstacles, a collision avoidance algorithm must be installed in the UAV's hardware, covering the detection of obstacles, avoidance maneuvering, and so on. The optical flow method, one of the vision-based collision avoidance techniques, is introduced, and collision avoidance simulations of a multi-rotor UAV in various virtual environments are described to demonstrate its avoidance performance.
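A common optical-flow avoidance heuristic in simulations of this kind is the balance strategy: yaw away from the image side with the larger average flow magnitude, since nearer obstacles produce faster apparent motion. A sketch, with an assumed gain and sign convention:

```python
import numpy as np

def balance_yaw_command(flow, gain=1.0):
    """Yaw-rate command from a dense flow field of shape (H, W, 2).
    Positive command = turn right, away from a faster left side.
    Gain and sign convention are illustrative assumptions."""
    mag = np.linalg.norm(flow, axis=2)
    w = flow.shape[1]
    left, right = mag[:, :w // 2].mean(), mag[:, w // 2:].mean()
    return gain * (left - right) / (left + right + 1e-9)

# Obstacle close on the left -> large flow there -> steer right.
flow = np.zeros((8, 8, 2))
flow[:, :4, 0] = 2.0     # fast horizontal flow on the left half
flow[:, 4:, 0] = 0.5     # slow flow on the right half
cmd = balance_yaw_command(flow)
print(cmd)               # positive -> steer right
```

Normalizing by the total flow keeps the command bounded in [-1, 1] regardless of absolute speed, so the same gain works at different forward velocities.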

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36, No. 6 / pp. 913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object extraction method based on Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data of all the individual robots. Global mapping normally takes a long time because map data are exchanged between individual robots while the whole area is searched. An omnidirectional image sensor has many advantages for object detection and mapping because it measures all the information around a robot simultaneously. The computational cost of the correction algorithm is reduced relative to existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created for each robot with an omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the algorithm with the real maps.
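The merging step can be illustrated on aligned occupancy grids by keeping, per cell, the most confident robot's value; registration between robot frames, which the paper obtains from the omnidirectional SLAM poses, is assumed already done in this sketch:

```python
import numpy as np

def merge_maps(grids):
    """Merge aligned occupancy grids (values in [0, 1], 0.5 = unknown)
    by keeping, per cell, the value farthest from 'unknown'."""
    stack = np.stack(grids)
    idx = np.abs(stack - 0.5).argmax(axis=0)   # most informative robot
    return np.take_along_axis(stack, idx[None], axis=0)[0]

# Robot A observed an occupied left wall; robot B confirmed the
# rightmost column as free. Everything else is unknown (0.5).
a = np.full((4, 4), 0.5); a[:, 0] = 1.0
b = np.full((4, 4), 0.5); b[:, 3] = 0.0
g = merge_maps([a, b])
print(g[:, 0], g[:, 3])   # occupied column and free column both survive
```

Taking the most informative value per cell lets each robot contribute the region it actually observed, while cells no robot saw stay unknown in the global map.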

Real-time Interactive Particle Art with Human Motion Based on Computer Vision Techniques

  • 조익현;박거태;정순기
    • Journal of Korea Multimedia Society / Vol. 21, No. 1 / pp. 51-60 / 2018
  • We present real-time interactive particle art driven by human motion, based on computer vision techniques. Computer vision techniques are used to reduce the amount of equipment required for appreciating media art. We analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction is applied to find the audience, and the audience image is converted into particles on a grid of cells. Optical flow is used to detect the audience's motion and create particle effects, and a virtual button is defined for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be easily configured with a single camera sensor.
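The background-subtraction and grid-cell particle steps can be sketched as follows; the threshold, cell size, and fill ratio are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def particles_from_silhouette(frame, background, cell=4, thresh=0.2):
    """Return (row, col) centers of grid cells whose area contains
    mostly foreground pixels; these centers seed the particles."""
    fg = np.abs(frame - background) > thresh   # background subtraction
    h, w = fg.shape
    centers = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            if fg[r:r + cell, c:c + cell].mean() > 0.5:
                centers.append((r + cell // 2, c + cell // 2))
    return centers

# Synthetic scene: an 8x8 "audience" blob against a flat background.
bg = np.zeros((16, 16))
frame = bg.copy(); frame[4:12, 4:12] = 1.0
pts = particles_from_silhouette(frame, bg)
print(pts)   # → [(6, 6), (6, 10), (10, 6), (10, 10)]
```

Quantizing the silhouette into cells keeps the particle count bounded regardless of audience size, which matters for maintaining real-time frame rates.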

Collision Avoidance Using Omnidirectional Vision SLAM Based on Fisheye Images

  • 최윤원;최정원;임성규;이석규
    • Journal of Institute of Control, Robotics and Systems / Vol. 22, No. 3 / pp. 210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omnidirectional-vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robots. Conventional methods suggest avoidance paths by constructing an arbitrary force field around the obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles with a speed command based on robot modeling and the robot's curved movement path, and recent research has improved such algorithms for actual robots. However, robots that use omnidirectional-vision SLAM to acquire surrounding information all at once have been comparatively less studied. The robot with the proposed algorithm avoids obstacles along an avoidance path estimated from the map obtained through omnidirectional-vision SLAM on fisheye images, then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components derived from motion information obtained by analyzing the surroundings of the obstacles. The experimental results confirm the reliability of the avoidance algorithm through a comparison between the position obtained by the proposed algorithm and the real position recorded while avoiding the obstacles.
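The conventional force-field approach the abstract contrasts against can be sketched as a repulsive velocity that grows as the gap to the obstacle closes; the gain and influence radius below are assumed values for illustration:

```python
import numpy as np

def repulsive_velocity(robot, obstacle, radius=2.0, gain=1.0):
    """Classic potential-field repulsion: push the robot directly away
    from the obstacle, with magnitude growing as the gap closes.
    Zero outside the influence radius; gain/radius are assumptions."""
    d_vec = np.asarray(robot, float) - np.asarray(obstacle, float)
    d = np.linalg.norm(d_vec)
    if d >= radius or d == 0.0:
        return np.zeros(2)
    return gain * (1.0 / d - 1.0 / radius) * d_vec / d

v = repulsive_velocity(robot=(1.0, 0.0), obstacle=(0.0, 0.0))
print(v)   # points along +x, away from the obstacle
```

The proposed method instead plans the avoidance path and speed from the SLAM map and the obstacle's estimated motion, rather than reacting cell by cell to a force field.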