• Title/Summary/Keyword: Motion vision


Vision-Based Obstacle Collision Risk Estimation of an Unmanned Surface Vehicle (무인선의 비전기반 장애물 충돌 위험도 평가)

  • Woo, Joohyun;Kim, Nakwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.12
    • /
    • pp.1089-1099
    • /
    • 2015
  • This paper proposes a vision-based collision risk estimation method for an unmanned surface vehicle. A robust image-processing algorithm is suggested to detect target obstacles from the vision sensor. Vision-based Target Motion Analysis (TMA) is performed to transform visual information into target motion information; a camera model and optical flow are adopted in the vision-based TMA. Collision risk is calculated by a fuzzy estimator that takes the target motion information and vision information as input variables. To validate the suggested collision risk estimation method, an unmanned surface vehicle experiment was performed.
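
The abstract above describes a pipeline in which optical-flow-based target motion analysis feeds a fuzzy collision-risk estimator. The abstract does not give the membership functions or rule base, so the following is only a minimal sketch of the general idea, with invented inputs (bearing rate and apparent expansion rate) and invented triangular memberships.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with peak at b (illustrative values only)."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def collision_risk(bearing_rate_deg_s, expansion_rate):
    """Toy fuzzy-style risk estimate from two visual cues.

    A nearly constant bearing combined with a growing apparent size is the
    classic visual signature of a collision course; the breakpoints below
    are placeholders, not values from the cited paper.
    """
    steady_bearing = tri(abs(bearing_rate_deg_s), -0.5, 0.0, 2.0)  # high when bearing barely changes
    closing_fast = tri(expansion_rate, 0.0, 0.05, 0.10)            # high when the target looms quickly
    return min(steady_bearing, closing_fast)  # min-rule in place of a full fuzzy rule base

print(collision_risk(bearing_rate_deg_s=0.3, expansion_rate=0.06))  # ~0.8 -> high risk
```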

Effects of Visual Information on Joint Angular Velocity of Trunk and Lower Extremities in Sitting and Squat Motion

  • Bu, Kyoung hee;Oh, Tae young
    • The Journal of Korean Physical Therapy
    • /
    • v.27 no.2
    • /
    • pp.89-95
    • /
    • 2015
  • Purpose: The purpose of this study is to determine the effects of visual information on movement time and on the angular velocities of the trunk and lower-extremity joints while healthy adults perform sitting and squat motions. Methods: Participants were 20 healthy male and female adults. Movement time and the angular velocities of the trunk, pelvis, hip, knee, and ankle during sitting and squat motion under common vision, visual task, and visual block conditions were analyzed using a three-dimensional motion analysis system. Results: The angular velocities of the trunk, pelvis, hip, knee, and ankle in phase 2 of sitting showed significant differences according to the type of visual information (p<0.05). Movement time and the angular velocities of the pelvis and hip in phase 2 of squat motion showed significant differences according to the type of visual information (p<0.05). Under common vision, the angular velocities of the knee and ankle in phase 1 were significantly faster in sitting (p<0.05), and the angular velocities of the trunk, pelvis, hip, knee, and ankle in phase 2 were significantly faster in sitting (p<0.05). Conclusion: Visual information affects the angular velocity of a simple action such as sitting, whereas in the more complicated squat motion it affects both the angular velocity and the movement time. In addition, under common vision, visual task, and visual block conditions, the angular velocities of all joints were faster in sitting than in squat motion.
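
The joint angular velocities analyzed above are, in practice, time derivatives of sampled joint angles from the motion analysis system. As a side illustration only (the sampling rate and angle trace below are invented, not data from this study), a finite-difference estimate might look like this:

```python
import numpy as np

fs = 120.0                                             # assumed motion-capture sampling rate in Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
knee_angle_deg = 90.0 * (1 - np.cos(np.pi * t)) / 2    # synthetic knee flexion trace, 0 -> 90 deg

# Central-difference estimate of the joint angular velocity in deg/s.
knee_vel_deg_s = np.gradient(knee_angle_deg, 1.0 / fs)
print(f"peak knee angular velocity: {knee_vel_deg_s.max():.1f} deg/s")
```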

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.6
    • /
    • pp.716-725
    • /
    • 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When the two hands or two feet cross at any position and separate again, an adaptive algorithm is presented to recognize which is the left one and which is the right one. Real motion takes place in 3D coordinates, whereas a mono image provides only 2D coordinates and no distance from the camera. With stereo vision, as with human vision, 3D motion data such as left-right and up-down motion and distance from the camera can be acquired. This requires a depth value in addition to the x- and y-axis coordinates of the mono image in order to transform them into 3D coordinates. The depth value (z axis) is calculated from the stereo disparity, using only the end effectors of the images. The positions of the inner joints are then calculated, and a 3D character can be visualized using inverse kinematics.
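
The depth recovery described above is the standard stereo relation z = f·B/d between focal length, baseline, and disparity. The snippet below illustrates that relation with placeholder calibration values; it is not the calibration or implementation from this paper.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Pinhole stereo relation z = f * B / d.

    focal_px and baseline_m are placeholder values; a real system would
    take them from calibration of its own stereo rig.
    """
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)            # zero disparity -> point at infinity
    valid = d > 0
    z[valid] = focal_px * baseline_m / d[valid]
    return z

# Disparities (pixels) measured only at the end effectors, e.g. both hands.
print(depth_from_disparity([42.0, 18.5]))  # -> depths in metres
```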


A Study on the Obstacle Avoidance of a Multi-Link Robot System using Vision System (Vision System을 이용한 다관절 로봇팔의 장애물 우회에 관한 연구)

  • 송경수;이병룡
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.691-694
    • /
    • 2000
  • In this paper, a motion control algorithm based on a neural network is proposed that allows a robot arm to avoid an unexpected obstacle while moving from the start to the goal position. If an obstacle appears during the motion, the vision system recognizes it, and at each step an optimization algorithm quickly chooses one motion among the robot's possible motions. The proposed algorithm shows good avoidance characteristics in simulation.
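
The abstract above describes choosing, at each step, one motion from the set of motions the arm can make once the vision system has located an obstacle. The paper uses a neural network for this; the sketch below replaces it with a hand-written cost (progress toward the goal minus an obstacle-clearance penalty) purely to illustrate the selection step, with invented geometry.

```python
import numpy as np

def choose_motion(tip_pos, goal, obstacle, candidates, clearance=0.15):
    """Pick the candidate tip displacement with the best heuristic score.

    This is not the paper's neural-network policy; the cost below is a
    placeholder used only to show what selecting 'a motion among the
    possible motions' can mean.
    """
    best, best_score = None, -np.inf
    for step in candidates:
        p = tip_pos + step
        progress = np.linalg.norm(goal - tip_pos) - np.linalg.norm(goal - p)
        penalty = 10.0 * max(0.0, clearance - np.linalg.norm(p - obstacle))
        score = progress - penalty
        if score > best_score:
            best, best_score = step, score
    return best

steps = [np.array(v) for v in ([0.05, 0.0], [0.0, 0.05], [0.035, 0.035], [-0.05, 0.0])]
print(choose_motion(np.array([0.0, 0.0]), np.array([0.5, 0.0]),
                    np.array([0.25, 0.02]), steps))
```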


Real-time Interactive Particle-art with Human Motion Based on Computer Vision Techniques (컴퓨터 비전 기술을 활용한 관객의 움직임과 상호작용이 가능한 실시간 파티클 아트)

  • Jo, Ik Hyun;Park, Geo Tae;Jung, Soon Ki
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.1
    • /
    • pp.51-60
    • /
    • 2018
  • We present a real-time interactive particle art system driven by human motion and based on computer vision techniques. Using computer vision reduces the amount of equipment required for appreciating the media art. We analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction is applied to locate the audience, and the audience image is converted into particles on a grid of cells. Optical flow is used to detect the motion of the audience and create particle effects, and a virtual button is defined for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be easily configured with a single camera sensor.
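
A minimal sketch of the two vision modules named above, background subtraction to find the audience silhouette and dense optical flow to drive the particles, is shown below using OpenCV. The grid-cell size and thresholds are placeholders, and the particle rendering itself is omitted.

```python
import cv2

cap = cv2.VideoCapture(0)                                  # the single camera sensor
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
prev_gray = None
cell = 16                                                  # grid-cell size in pixels (placeholder)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    mask = backsub.apply(frame)                            # 1) where is the audience?

    if prev_gray is not None:                              # 2) how is the audience moving?
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray.shape
        for y in range(0, h - cell, cell):
            for x in range(0, w - cell, cell):
                if mask[y:y + cell, x:x + cell].mean() > 127:
                    fx, fy = flow[y:y + cell, x:x + cell].mean(axis=(0, 1))
                    # a particle system would apply (fx, fy) as a force to
                    # the particles of this grid cell (rendering not shown)
    prev_gray = gray

    cv2.imshow("audience mask", mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```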

Unusual Motion Detection for Vision-Based Driver Assistance

  • Fu, Li-Hua;Wu, Wei-Dong;Zhang, Yu;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.1
    • /
    • pp.27-34
    • /
    • 2015
  • For a vision-based driver assistance system, unusual motion detection is one of the important means of preventing accidents. In this paper, we propose a real-time unusual-motion-detection model consisting of two stages: salient region detection and unusual motion detection. In the salient-region-detection stage, we present an improved temporal attention model. In the unusual-motion-detection stage, three factors (speed, motion direction, and distance) are extracted for detecting unusual motion. A series of experiments demonstrates the proposed method and shows the feasibility of the model.
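
The abstract above names three cues, speed, motion direction, and distance, that are fused into an unusualness score. The exact combination is not given in the abstract, so the snippet below only shows the general shape of such a fusion, with invented normalisation ranges and weights.

```python
import numpy as np

def unusual_motion_score(speed_px_s, heading_deg, distance_m, lane_heading_deg=90.0):
    """Toy fusion of the three cues named in the abstract.

    The normalisation ranges, the reference lane heading, and the weights
    are placeholders, not values from the cited model.
    """
    speed_factor = np.clip(speed_px_s / 200.0, 0.0, 1.0)          # faster -> more unusual
    angle_err = abs((heading_deg - lane_heading_deg + 180.0) % 360.0 - 180.0)
    direction_factor = np.clip(angle_err / 90.0, 0.0, 1.0)        # crossing traffic -> more unusual
    distance_factor = np.clip(1.0 - distance_m / 50.0, 0.0, 1.0)  # nearer -> more critical
    return float(0.4 * speed_factor + 0.3 * direction_factor + 0.3 * distance_factor)

print(unusual_motion_score(speed_px_s=150, heading_deg=10, distance_m=12))
```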

Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot (이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가)

  • Park, Jae-Hong;Bhan, Wook;Choi, Tae-Young;Kwon, Hyun-Il;Cho, Dong-Il;Kim, Kwang-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.496-504
    • /
    • 2009
  • This paper presents the recognition performance of a VOR (Vestibular-Ocular Reflex) based vision tracking system for a mobile robot. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, thus keeping the image of an object of interest centered on the retina. We applied this physiological concept to a vision tracking system to achieve high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage for linear motion and on a rate table for angular motion. The experimental results show that the recognition rate of the VOR-based method is three times that of a conventional non-VOR vision system, mainly because the VOR-based vision tracking system keeps the line of sight fixed on the object and thereby reduces image blurring in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
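
The core of the VOR idea above is to counter-rotate the camera by the rotation the motion sensor measures on the robot body, so the line of sight stays on the object. A minimal sketch of that compensation loop follows; the gyro read-out and the pan actuator command are stubbed, and the gain and period are illustrative.

```python
import time

VOR_GAIN = 1.0          # an ideal VOR counter-rotates 1:1 with the body rotation
DT = 0.01               # control period in seconds (placeholder)

def read_gyro_yaw_rate():
    """Stub: would return the body yaw rate in deg/s from the motion sensor module."""
    return 5.0

def command_pan_angle(angle_deg):
    """Stub: would send the target pan angle to the camera actuation module."""
    print(f"pan -> {angle_deg:+.2f} deg")

pan_angle = 0.0
for _ in range(5):
    body_rate = read_gyro_yaw_rate()
    pan_angle -= VOR_GAIN * body_rate * DT   # rotate the camera against the body motion
    command_pan_angle(pan_angle)
    time.sleep(DT)
```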

Development of Vision based Autonomous Obstacle Avoidance System for a Humanoid Robot (휴머노이드 로봇을 위한 비전기반 장애물 회피 시스템 개발)

  • Kang, Tae-Koo;Kim, Dong-Won;Park, Gwi-Tae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.60 no.1
    • /
    • pp.161-166
    • /
    • 2011
  • This paper addresses a vision-based autonomous walking control system. To handle obstacles that lie beyond the field of view (FOV), we use a 3D panoramic depth image. Moreover, so that a humanoid robot can decide the avoidance direction and walking motion for obstacle avoidance by itself, we propose vision-based path planning using the 3D panoramic depth image. In this vision-based path planning, the path and walking motion are decided according to environmental conditions such as the size of the obstacle and the available avoidance space. The vision-based path planning is applied to a humanoid robot, URIA. The evaluation results show that the proposed method can effectively decide the avoidance direction and walking motion of a practical humanoid robot.
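
Deciding the avoidance direction from a panoramic depth image essentially means finding the direction with enough free space at walking height. The paper's criterion also accounts for obstacle size and the robot's available walking motions; the sketch below reduces it to a simple deepest-free-column search over one row of a depth image, with invented dimensions.

```python
import numpy as np

def pick_avoidance_direction(depth_row_m, fov_deg=180.0, min_clearance_m=1.0):
    """Pick a heading from one row of a panoramic depth image.

    depth_row_m holds depth in metres per horizontal pixel at obstacle
    height. The FOV and clearance threshold are placeholders; the paper
    additionally selects a walking motion, which is not modelled here.
    """
    depth_row_m = np.asarray(depth_row_m, dtype=float)
    free = depth_row_m > min_clearance_m
    if not free.any():
        return None                                   # no free direction at all
    best_px = int(np.argmax(depth_row_m * free))      # deepest free column
    # Map the pixel index to a heading angle, with 0 deg straight ahead.
    return (best_px / (len(depth_row_m) - 1) - 0.5) * fov_deg

row = np.concatenate([np.full(40, 0.6), np.full(20, 3.0), np.full(40, 0.8)])
print(pick_avoidance_direction(row))   # -> heading of the open gap, in degrees
```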

Implementation of Integration Module of Vision and Motion Controller using Zynq (Zynq를 이용한 비전 및 모션 컨트롤러 통합모듈 구현)

  • Moon, Yong-Seon;Roh, Sang-Hyun;Lee, Young-Pil
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.1
    • /
    • pp.159-164
    • /
    • 2013
  • Recently, many integrated solutions combining vision and motion control, both important elements in automation systems, have been developed. Typically, however, such solutions either connect vision processing and motion control over a network or organize a two-chip solution on one module. We implement a one-chip solution integrating vision and motion control using the Zynq-7000, which was recently released as an extensible processing platform. We also apply EtherCAT, an industrial Ethernet protocol compatible with the open Ethernet standard, to the motion control, because EtherCAT guarantees real-time control and can handle large amounts of data.

Vision Chip for Edge and Motion Detection with a Function of Output Offset Cancellation (출력옵셋의 제거기능을 가지는 윤곽 및 움직임 검출용 시각칩)

  • Park, Jong-Ho;Kim, Jung-Hwan;Suh, Sung-Ho;Shin, Jang-Kyoo;Lee, Min-Ho
    • Journal of Sensor Science and Technology
    • /
    • v.13 no.3
    • /
    • pp.188-194
    • /
    • 2004
  • With remarkable advances in CMOS (complementary metal-oxide-semiconductor) process technology, a variety of vision sensors with signal-processing circuits for complicated functions are being actively developed. In particular, as the principles of signal processing in the human retina have been revealed, a series of vision chips imitating the human retina have been reported. The human retina can detect the edge and motion of an object effectively. Among the several functions of the retina, edge detection is accomplished by cells called photoreceptors, horizontal cells, and bipolar cells. We designed a CMOS vision chip by modeling, in hardware, the retinal cells involved in edge and motion detection. The designed vision chip was fabricated using a 0.6 μm CMOS process, and its characteristics were measured. With its reliable output characteristics, this chip can be used as the input stage for many applications, such as target tracking systems, fingerprint recognition systems, and human-friendly robot systems.
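
In retina models of this kind, the bipolar-cell edge signal is commonly taken as the difference between the photoreceptor response and the spatially pooled horizontal-cell response, i.e. a centre-surround operation. The snippet below is only a software analogue of that behaviour; the smoothing width is a placeholder, and it does not model the CMOS circuit or its output-offset cancellation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_edge_response(image, surround_sigma=3.0):
    """Centre-surround (photoreceptor minus horizontal-cell) edge signal.

    image: 2-D grayscale array. surround_sigma is an illustrative choice
    for the horizontal-cell spatial pooling, not a value from the chip.
    """
    photoreceptor = image.astype(float)
    horizontal = gaussian_filter(photoreceptor, sigma=surround_sigma)  # local average (surround)
    bipolar = photoreceptor - horizontal                               # edge (difference) signal
    return bipolar

# Synthetic test pattern: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
edges = retina_edge_response(img)
print(edges.min(), edges.max())   # strongest responses along the square's border
```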