• Title/Abstract/Keywords: Motion vision

Search results: 539

무인선의 비전기반 장애물 충돌 위험도 평가 (Vision-Based Obstacle Collision Risk Estimation of an Unmanned Surface Vehicle)

  • 우주현;김낙완
    • 제어로봇시스템학회논문지 / Vol. 21 No. 12 / pp.1089-1099 / 2015
  • This paper proposes a vision-based collision risk estimation method for an unmanned surface vehicle. A robust image-processing algorithm is suggested to detect target obstacles from the vision sensor. Vision-based target motion analysis (TMA) is performed to transform visual information into target motion information; in the vision-based TMA, a camera model and optical flow are adopted. Collision risk is then calculated by a fuzzy estimator that takes the target motion information and vision information as input variables. To validate the suggested collision risk estimation method, an experiment with an unmanned surface vehicle was performed.
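
As a rough illustration of how such a fuzzy estimator might fuse vision-derived target-motion cues into a risk value, the sketch below combines a bearing-rate cue and an image-expansion (looming) cue with a fuzzy AND. The cue choice, membership thresholds, and function names are assumptions for illustration, not the estimator used in the paper.

```python
# Hedged sketch: fuse two vision-derived target-motion cues into a collision
# risk in [0, 1]. Thresholds and the choice of cues are illustrative assumptions.
import numpy as np

def ramp(x, lo, hi):
    """Saturating fuzzy membership: 0 below lo, 1 above hi, linear in between."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def collision_risk(bearing_rate_dps, expansion_rate):
    """bearing_rate_dps: target bearing change (deg/s) from its tracked image position;
    expansion_rate: fractional growth of the target's image area per second."""
    constant_bearing = 1.0 - ramp(abs(bearing_rate_dps), 0.5, 3.0)  # near-constant bearing
    closing_fast = ramp(expansion_rate, 0.0, 0.10)                  # target is looming
    return min(constant_bearing, closing_fast)                      # fuzzy AND of the cues

# A target holding its bearing (0.2 deg/s) while growing 8 %/s in the image:
print(collision_risk(0.2, 0.08))   # roughly 0.8, i.e. high collision risk
```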

Effects of Visual Information on Joint Angular Velocity of Trunk and Lower Extremities in Sitting and Squat Motion

  • Bu, Kyoung hee;Oh, Tae young
    • The Journal of Korean Physical Therapy / Vol. 27 No. 2 / pp.89-95 / 2015
  • Purpose: The purpose of this study is to determine the effects of visual information on movement time and on the angular velocities of the trunk and lower-extremity joints while healthy adults perform sitting and squat motions. Methods: Participants were 20 healthy male and female adults; movement time and the angular velocities of the trunk, pelvis, hip, knee, and ankle during sitting and squat motions under common vision, a visual task, and visual block were analyzed using a three-dimensional motion analysis system. Results: The angular velocities of the trunk, pelvis, hip, knee, and ankle in phase 2 of sitting showed significant differences according to the type of visual information (p<0.05). Movement time and the angular velocities of the pelvis and hip in phase 2 of the squat motion showed significant differences according to the type of visual information (p<0.05). Under common vision, the angular velocities of the knee and ankle in phase 1 were significantly faster in sitting (p<0.05), and the angular velocities of the trunk, pelvis, hip, knee, and ankle in phase 2 were significantly faster in sitting (p<0.05). Conclusion: Visual information affects the joint angular velocities in a simple action such as sitting, whereas in the more complicated squat motion it affects both the angular velocities and the movement time. In addition, under common vision, the visual task, and the visual block alike, the angular velocities of all joints were faster in sitting than in the squat motion.
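
For readers who want to reproduce this kind of measure, the sketch below shows one way a peak joint angular velocity per movement phase could be computed from a joint-angle time series exported by a 3D motion analysis system. The 120 Hz sampling rate, the synthetic knee-angle trace, and the phase boundaries are assumptions for illustration, not the study's protocol.

```python
# Minimal sketch: peak joint angular velocity (deg/s) within a movement phase,
# computed from a sampled joint-angle trace. Sampling rate and phases are assumed.
import numpy as np

def peak_angular_velocity(angle_deg, fs_hz, phase):
    """angle_deg: 1-D joint-angle series; phase: (start_index, end_index) slice."""
    omega = np.gradient(angle_deg, 1.0 / fs_hz)        # deg/s by finite differences
    start, end = phase
    return float(np.max(np.abs(omega[start:end])))

# Synthetic knee flexion, 0 to 90 deg over 2 s at 120 Hz, split into two phases.
fs = 120.0
t = np.arange(0.0, 2.0, 1.0 / fs)
knee = 45.0 * (1.0 - np.cos(np.pi * t / 2.0))          # smooth 0-90 deg curve
print(peak_angular_velocity(knee, fs, (0, 120)),       # phase 1: first second
      peak_angular_velocity(knee, fs, (120, None)))    # phase 2: second second
```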

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • 한국멀티미디어학회논문지 / Vol. 10 No. 6 / pp.716-725 / 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When the two hands or two feet cross at any position and separate again, an adaptive algorithm is presented to recognize which is the left one and which is the right. Real motion is motion in 3D coordinates, whereas a mono image provides only 2D coordinates and no distance from the camera. With stereo vision, as with human vision, 3D motion can be acquired, such as left/right motion as well as the distance of objects from the camera. Converting the 2D image coordinates (x and y axes) into 3D coordinates requires a depth value (z axis), which is calculated from the stereo disparity at the end-effector locations only. The positions of the inner joints are then calculated, and the 3D character is visualized using inverse kinematics.

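Below is a minimal sketch of the depth-from-disparity step described above, assuming OpenCV's semi-global block matcher and illustrative camera parameters (focal length in pixels and stereo baseline); the parameter values and the function name are assumptions, not those of the paper's system.

```python
# Hedged sketch: z coordinate of a detected end-effector blob from stereo
# disparity, z = f * B / d. Camera parameters and SGBM settings are assumed.
import cv2
import numpy as np

def end_effector_depth(left_gray, right_gray, blob_xy,
                       focal_px=700.0, baseline_m=0.12):
    """Return depth in meters at one image point, or None if no valid match."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    x, y = int(blob_xy[0]), int(blob_xy[1])
    d = disparity[y, x]
    if d <= 0.0:                       # occluded or unmatched pixel
        return None
    return focal_px * baseline_m / d   # depth grows as disparity shrinks
```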

Vision System을 이용한 다관절 로봇팔의 장애물 우회에 관한 연구 (A Study on the Obstacle Avoidance of a Multi-Link Robot System using Vision System)

  • 송경수;이병룡
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 2000년도 추계학술대회 논문집 / pp.691-694 / 2000
  • In this paper, a motion control algorithm based on a neural network is proposed that makes a robot arm successfully avoid unexpected obstacles while the robot is moving from the start to the goal position. During the motion, the vision system recognizes any obstacle that appears, and at every step an optimization algorithm quickly chooses one motion among the robot's possible motions. The proposed algorithm shows good avoidance behavior in simulation.

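The greedy "choose the best feasible motion at each step" idea can be sketched as below for a planar two-link arm and a circular obstacle reported by the vision system; the arm geometry, candidate step set, and clearance check are simplifying assumptions standing in for the paper's neural-network and optimization stages.

```python
# Illustrative sketch: at each step, keep only candidate joint increments whose
# links stay clear of the vision-detected obstacle, then pick the one that
# brings the end-effector closest to the goal. Geometry and step set are assumed.
import itertools
import numpy as np

LINK1, LINK2 = 0.5, 0.4                             # link lengths (m), assumed

def fk(q):
    """Base, elbow, and tip positions of a planar two-link arm."""
    elbow = np.array([LINK1 * np.cos(q[0]), LINK1 * np.sin(q[0])])
    tip = elbow + np.array([LINK2 * np.cos(q[0] + q[1]), LINK2 * np.sin(q[0] + q[1])])
    return np.zeros(2), elbow, tip

def clear_of_obstacle(q, obs_center, obs_radius, margin=0.05):
    """Coarse clearance test: sample points along both links."""
    base, elbow, tip = fk(q)
    for a, b in ((base, elbow), (elbow, tip)):
        for s in np.linspace(0.0, 1.0, 10):
            if np.linalg.norm(a + s * (b - a) - obs_center) < obs_radius + margin:
                return False
    return True

def best_step(q, goal, obs_center, obs_radius, dq=0.05):
    """Greedy one-step motion choice among the 3^2 candidate joint increments."""
    candidates = [q + dq * np.array(d) for d in itertools.product((-1, 0, 1), repeat=2)]
    feasible = [c for c in candidates if clear_of_obstacle(c, obs_center, obs_radius)]
    return min(feasible, key=lambda c: np.linalg.norm(fk(c)[2] - goal), default=q)
```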

컴퓨터 비전 기술을 활용한 관객의 움직임과 상호작용이 가능한 실시간 파티클 아트 (Real-time Interactive Particle-art with Human Motion Based on Computer Vision Techniques)

  • 조익현;박거태;정순기
    • 한국멀티미디어학회논문지 / Vol. 21 No. 1 / pp.51-60 / 2018
  • We present real-time interactive particle art driven by human motion and based on computer vision techniques. Computer vision is used to reduce the amount of equipment required to appreciate the media art, and we analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction is applied to find the audience, and the audience image is converted into particles on a grid of cells. Optical flow is used to detect the motion of the audience and create particle effects, and a virtual button is defined for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be configured easily with a single camera sensor.
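
A minimal sketch of that camera-only pipeline (background subtraction for the audience silhouette, dense optical flow for the motion that drives the particles) might look like the following with OpenCV; the camera index, flow parameters, and the way the mean flow is reported are assumptions for illustration.

```python
# Hedged sketch: audience silhouette by background subtraction, audience motion
# by dense optical flow; the mean flow inside the silhouette is what would push
# the particles. Camera index and all parameters are assumed.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumed camera index
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = bg.apply(frame) > 0                 # audience silhouette (foreground)
    if prev_gray is not None and mask.any():
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        fx = float(flow[..., 0][mask].mean())  # mean horizontal audience motion
        fy = float(flow[..., 1][mask].mean())  # mean vertical audience motion
        print(f"audience motion: ({fx:+.2f}, {fy:+.2f}) px/frame")
    prev_gray = gray
    cv2.imshow("silhouette", mask.astype(np.uint8) * 255)
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```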

Unusual Motion Detection for Vision-Based Driver Assistance

  • Fu, Li-Hua;Wu, Wei-Dong;Zhang, Yu;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 15 No. 1 / pp.27-34 / 2015
  • For a vision-based driver assistance system, unusual motion detection is one of the important means of preventing accidents. In this paper, we propose a real-time unusual-motion-detection model that contains two stages: salient region detection and unusual motion detection. In the salient-region-detection stage, we present an improved temporal attention model. In the unusual-motion-detection stage, three factors (speed, motion direction, and distance) are extracted for detecting unusual motion. A series of experimental results demonstrates the feasibility of the proposed model.
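
The second stage (scoring motion as unusual from speed, direction, and distance) could be sketched roughly as follows; the weights, thresholds, and the image-row proxy for distance are assumptions for illustration, not the factors' definitions in the paper.

```python
# Hedged sketch: score tracked points as "unusual" from their speed, their
# deviation from the dominant motion direction, and a coarse distance proxy
# (lower image rows assumed closer). All weights and thresholds are assumptions.
import numpy as np

def unusual_motion_scores(flow_vectors, positions, image_height,
                          speed_thresh=8.0, angle_thresh=np.pi / 3):
    """flow_vectors, positions: (N, 2) per-point displacements and pixel coordinates."""
    speeds = np.linalg.norm(flow_vectors, axis=1)
    angles = np.arctan2(flow_vectors[:, 1], flow_vectors[:, 0])
    dominant = np.arctan2(np.median(flow_vectors[:, 1]), np.median(flow_vectors[:, 0]))
    deviation = np.abs(np.angle(np.exp(1j * (angles - dominant))))  # wrapped to [0, pi]
    closeness = positions[:, 1] / image_height                      # crude distance proxy
    score = (0.5 * np.clip(speeds / speed_thresh, 0.0, 1.0) +
             0.5 * np.clip(deviation / angle_thresh, 0.0, 1.0)) * (0.5 + 0.5 * closeness)
    return score   # values near 1 suggest motion worth flagging as unusual
```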

이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가 (Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot)

  • 박재홍;반욱;최태영;권현일;조동일;김광수
    • 제어로봇시스템학회논문지 / Vol. 15 No. 5 / pp.496-504 / 2009
  • This paper presents the recognition performance of a VOR (Vestibular-Ocular Reflex) based vision tracking system for a mobile robot. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, thus keeping the image of the object of interest at the center of the retina. We applied this physiological concept to a vision tracking system for high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage and on a rate table for linear motion and angular motion, respectively. The experimental results show that the recognition rates of the VOR-based method are three times higher than those of a conventional non-VOR vision system, mainly because the VOR-based vision tracking system keeps the line of sight of the vision system fixed on the object, reducing image blur in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
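
The core VOR idea (counter-rotating the camera against the measured body rotation, with a small visual correction that keeps the target centered) reduces to a very short control law; the gains and the sensor/actuator interface below are assumptions for illustration, not the system's actual controller.

```python
# Minimal sketch of a VOR-style stabilization law: pan the camera opposite to
# the gyro-measured body rate, plus a small vision-based centering term.
# Gains and units are illustrative assumptions.
def vor_pan_rate(gyro_yaw_dps, target_offset_px, k_vor=1.0, k_vision=0.02):
    """Pan-rate command (deg/s) for the camera actuation module."""
    return -k_vor * gyro_yaw_dps - k_vision * target_offset_px

# Robot yawing at +30 deg/s while the target sits 40 px right of image center:
print(vor_pan_rate(30.0, 40.0))   # about -30.8 deg/s: counter-rotate the camera
```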

휴머노이드 로봇을 위한 비전기반 장애물 회피 시스템 개발 (Development of Vision based Autonomous Obstacle Avoidance System for a Humanoid Robot)

  • 강태구;김동원;박귀태
    • 전기학회논문지 / Vol. 60 No. 1 / pp.161-166 / 2011
  • This paper addresses a vision-based autonomous walking control system. To handle obstacles that lie beyond the field of view (FOV), we use a 3D panoramic depth image. Moreover, so that the humanoid robot can decide its avoidance direction and walking motion for obstacle avoidance by itself, we propose vision-based path planning using the 3D panoramic depth image. In this path planning, the path and walking motion are decided from environmental conditions such as the size of the obstacle and the available avoidance space. The vision-based path planning is applied to a humanoid robot, URIA. The evaluation results show that the proposed method can be applied effectively to decide the avoidance direction and walking motion of a practical humanoid robot.
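
One way an avoidance direction could be chosen from a panoramic depth image is sketched below: take a horizontal slice of the depth panorama, find the widest angular gap deeper than a clearance threshold, and head for its center. The field of view, clearance threshold, and minimum gap width are assumptions, not the paper's decision rules.

```python
# Illustrative sketch: pick an avoidance bearing from a 1-D panoramic depth
# profile by locating the widest sufficiently deep gap. All parameters assumed.
import numpy as np

def avoidance_bearing(depth_m, fov_deg=180.0, clearance_m=1.0, min_gap_deg=30.0):
    """depth_m: depths sampled across the panorama; returns a bearing (deg) or None."""
    free = depth_m > clearance_m
    deg_per_bin = fov_deg / len(depth_m)
    best_width, best_bearing = 0.0, None
    i = 0
    while i < len(free):
        if free[i]:
            j = i
            while j < len(free) and free[j]:
                j += 1
            width = (j - i) * deg_per_bin
            if width >= min_gap_deg and width > best_width:
                best_width = width
                best_bearing = 0.5 * (i + j) * deg_per_bin - 0.5 * fov_deg
            i = j
        else:
            i += 1
    return best_bearing   # None means no gap is wide enough: stop or turn in place

# An obstacle blocking part of a 180-degree depth slice at 1-degree resolution:
depths = np.r_[np.full(60, 3.0), np.full(30, 0.6), np.full(90, 3.0)]
print(avoidance_bearing(depths))   # 45.0: steer toward the wider free gap
```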

Zynq를 이용한 비전 및 모션 컨트롤러 통합모듈 구현 (Implementation of Integration Module of Vision and Motion Controller using Zynq)

  • 문용선;노상현;이영필
    • 한국전자통신학회논문지 / Vol. 8 No. 1 / pp.159-164 / 2013
  • Recently, many solutions have been developed that integrate vision and motion controllers, two key elements of automation systems. However, such solutions usually integrate vision processing and motion control over a network, or combine them in a single module as a two-chip solution. In this study, we implemented a one-chip solution that integrates a vision and motion controller using the Zynq-7000, a recently developed extensible processing platform. For motion control, we adopted EtherCAT, an industrial Ethernet protocol compatible with the open Ethernet standard that guarantees real-time control while handling large amounts of data.

출력옵셋의 제거기능을 가지는 윤곽 및 움직임 검출용 시각칩 (Vision Chip for Edge and Motion Detection with a Function of Output Offset Cancellation)

  • 박종호;김정환;서성호;신장규;이민호
    • 센서학회지 / Vol. 13 No. 3 / pp.188-194 / 2004
  • With remarkable advances in CMOS (complementary metal-oxide-semiconductor) process technology, a variety of vision sensors with signal-processing circuits for complicated functions are being actively developed. In particular, as the principles of signal processing in the human retina have been revealed, a series of vision chips imitating the human retina have been reported. The human retina is able to detect the edges and motion of an object effectively. Edge detection, one of the several functions of the retina, is accomplished by cells called photoreceptors, horizontal cells, and bipolar cells. We designed a CMOS vision chip by modeling the retinal cells involved in edge and motion detection in hardware. The designed vision chip was fabricated using a 0.6 μm CMOS process and its characteristics were measured. Having reliable output characteristics, this chip can be used as the input stage for many applications, such as target tracking systems, fingerprint recognition systems, and human-friendly robot systems.
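
In software terms, the photoreceptor / horizontal-cell / bipolar-cell arrangement described above amounts to a center-surround computation: the bipolar output is the photoreceptor signal minus a locally averaged (horizontal-cell) signal, and a temporal difference of that output responds to motion. The sketch below uses a Gaussian blur as the lateral average; the kernel size and sigma are assumptions, and this is a software analogue, not the chip's circuit.

```python
# Hedged software analogue of the retina model: edge response as center minus
# surround, motion response as its frame-to-frame change. Parameters assumed.
import cv2
import numpy as np

def bipolar_response(gray):
    """Edge map: photoreceptor signal minus horizontal-cell (smoothed) signal."""
    photoreceptor = gray.astype(np.float32)
    horizontal_cell = cv2.GaussianBlur(photoreceptor, (9, 9), 2.0)  # lateral average
    return photoreceptor - horizontal_cell                          # center-surround

def motion_response(prev_gray, gray):
    """Transient signal: temporal change of the edge (bipolar) response."""
    return np.abs(bipolar_response(gray) - bipolar_response(prev_gray))
```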