• Title/Summary/Keyword: Motion vision

Development of a Correspondence Point Selection Algorithm for Visual Servo Control (시각 서보 제어에 있어서 대응점 선택 알고리즘 개발)

  • 문용선;정남채
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.2
    • /
    • pp.66-76
    • /
    • 1999
  • This paper proposes a method that acquires binocular disparity information at high speed, using a stereo camera undergoing translational and forward motion, without running into the correspondence-point problem. It shows that stereo vision with translational motion yields disparity information that is free of error and highly reliable, and that stereo vision with forward motion can capture a horizontal component that conventional stereo vision cannot detect. Moreover, because the correspondence between the left and right images is restricted in advance, stereo matching can be performed at high speed. However, a reasonable binocular disparity cannot be obtained when the vicinity of the image center coincides with an occluded region in stereo vision with forward motion.

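The speed-up the abstract attributes to pre-limiting left-right correspondence can be illustrated with a minimal block-matching sketch: when the camera motion is purely translational, the search for each point's match is restricted to a single scanline and a small disparity range. The function and parameters below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def disparity_1d(left, right, x, y, win=5, max_d=16):
    """Find the disparity of the patch at (x, y) in the left image by
    searching only along the same scanline of the right image -- the
    correspondence search is restricted in advance, which is what makes
    constrained stereo matching fast (illustrative sketch only)."""
    h = win // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(max_d):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = np.sum((ref - cand) ** 2)  # sum of squared differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Restricting the candidate set to one scanline turns a 2D search into a 1D one, which is the speed argument the abstract makes.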

A Method to Reduce the Driving Time of a Gantry (겐트리 구동시간의 단축 방법)

  • Kim, Soon Ho;Kim, Chi Su
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.11
    • /
    • pp.405-410
    • /
    • 2018
  • When more parts are mounted in the same amount of time on a surface mount machine, total output increases and productivity improves. In this paper, we propose a method to reduce the gantry drive time from component pickup to placement in order to improve the productivity of surface mount equipment. The approach is to have the gantry reach its maximum velocity as it passes in front of the camera during vision inspection. We developed drive-time calculation algorithms for stop-motion, fly1-motion, and fly2-motion vision inspection, calculated the driving times of the three methods, and compared them. As a result, the fly1-motion method shortened the time by 13% and the fly2-motion method by 18% compared with the stop-motion method.
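As a rough illustration of why inspecting on the fly saves time, consider rest-to-rest moves under a trapezoidal velocity profile: a stop-motion cycle pays two full accelerate/decelerate ramps plus a dwell at the camera, while a fly-motion cycle is one continuous move. The profile model, distances, and inspection dwell below are illustrative assumptions, not the paper's machine parameters.

```python
import math

def move_time(d, a=2.0, vmax=1.0):
    """Time for a rest-to-rest move of distance d under a trapezoidal
    velocity profile with acceleration a and top speed vmax."""
    if d >= vmax * vmax / a:           # long enough to reach cruise speed
        return d / vmax + vmax / a
    return 2.0 * math.sqrt(d / a)      # short move: triangular profile

def stop_motion(d1, d2, t_inspect=0.05):
    # stop in front of the camera, inspect, then move on to the mount point
    return move_time(d1) + t_inspect + move_time(d2)

def fly_motion(d1, d2):
    # inspect while passing the camera: one continuous move, no stop
    return move_time(d1 + d2)
```

The fly variant wins because it avoids one deceleration-to-zero and one re-acceleration, in addition to the inspection dwell.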

Research on a New Vision Test Chart Measuring Visual and Spatial Sense of Moire Fringes (무아레 무늬의 시각적 공간감각을 측정하는 시표로서의 가능성 조사)

  • Woo, Hyun Kyung;Lee, Seongjae;Jeong, Youn Hong
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.15 no.3
    • /
    • pp.241-245
    • /
    • 2010
  • Purpose: In this work we propose a grating vision-test chart that can be used to measure the sense of distance and motion of an object. Methods: A pair of gratings with a periodic structure was fabricated. Through a lens, the grating images showing geometrical shapes were projected onto a vision-test chart to form a new grating chart. As the gratings were rotated and translated, the examinee perceived the change in grating position as a change in the sense of distance and motion. Results: The sense of distance and motion measured while rotating and translating the gratings showed average errors of ~2.98% and ~1.73%, respectively, at $\theta=15^{\circ}$ compared with calculated values. Conclusions: The grating vision-test chart suggested in this work can be used as a new test chart that lets an examinee perceive a sense of distance and motion of an object.
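For context, the fringe geometry such a chart relies on follows the standard moiré relation: two identical line gratings of pitch p overlaid at a small relative angle θ produce fringes with spacing D = p / (2 sin(θ/2)). A quick sketch, where the pitch value is an arbitrary example rather than the paper's:

```python
import math

def moire_spacing(pitch, theta_deg):
    """Fringe spacing of the moire pattern formed by two identical line
    gratings of the given pitch rotated by theta degrees relative to each
    other (standard relation D = p / (2 sin(theta/2)))."""
    t = math.radians(theta_deg)
    return pitch / (2.0 * math.sin(t / 2.0))
```

The spacing grows as the angle shrinks, which is why small rotations of the gratings produce large, easily perceived fringe motion.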

Steering Control of Autonomous Vehicle by the Vision System

  • Kim, Jung-Ha;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.91.1-91
    • /
    • 2001
  • The subject of this paper is the vision-system analysis of an autonomous vehicle. Autonomous driving is a difficult topic because of several constraints on mobility and vehicle speed and a lack of environment information; we therefore apply a vision system to the autonomous vehicle. The vision system of an autonomous vehicle is analogous to the human eyes. This paper is divided into two parts. First, an acceleration system and a brake control system handle longitudinal motion control. Second, a real-time lane-detection vision system handles lateral motion control; this part covers the lane detection method and the image processing method. Finally, the paper focuses on the integration of the tele-operated vehicle and autonomous ...

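The abstract does not detail the lane-detection method, but a minimal stand-in shows the lateral-control signal such a vision stage produces: threshold the image to lane-marking pixels, fit a line, and report the lane's offset from the image center at the bottom row. The function name and threshold below are hypothetical.

```python
import numpy as np

def lane_offset(image, threshold=200):
    """Fit a straight line x = m*y + b to bright (lane-marking) pixels and
    return the lane's lateral offset from the image center at the bottom
    row -- a minimal stand-in for a real-time lane-detection stage."""
    ys, xs = np.nonzero(image >= threshold)
    m, b = np.polyfit(ys, xs, 1)               # least-squares line fit
    bottom = image.shape[0] - 1
    lane_x = m * bottom + b
    return lane_x - image.shape[1] / 2.0       # positive: lane right of center
```

The sign and magnitude of this offset are exactly the kind of signal a lateral (steering) controller consumes.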

Study on the Target Tracking of a Mobile Robot Using Active Stereo-Vision System (능동 스테레오 비젼을 시스템을 이용한 자율이동로봇의 목표물 추적에 관한 연구)

  • 이희명;이수희;이병룡;양순용;안경관
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.915-919
    • /
    • 2003
  • This paper presents a fuzzy-motion-control based tracking algorithm for mobile robots, which uses the geometrical information derived from the active stereo-vision system mounted on the mobile robot. The active stereo-vision system consists of two color cameras that rotate in two angular dimensions. With the stereo-vision system, the center position and depth information of the target object can be calculated. The proposed fuzzy motion controller calculates the tracking velocity and angular position of the mobile robot, which makes the robot keep following the object at a constant distance and orientation.

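A toy version of such a fuzzy tracking controller makes the idea concrete: fuzzify the target's horizontal image offset into left/center/right memberships, defuzzify to a turn rate, and regulate forward speed on the stereo depth error. The membership ranges and gains below are invented for illustration and are not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_track(offset, depth, target_depth=1.0):
    """Toy Mamdani-style tracker: turn toward the target's image offset and
    drive forward/backward to hold a constant distance (illustrative only)."""
    left  = tri(offset, -1.5, -1.0, 0.0)   # target left of image center
    mid   = tri(offset, -1.0,  0.0, 1.0)   # target centered
    right = tri(offset,  0.0,  1.0, 1.5)   # target right of center
    w = left + mid + right
    # defuzzify: weighted average of the rules' turn-rate outputs
    omega = (left * 1.0 + mid * 0.0 + right * -1.0) / w if w else 0.0
    v = 0.5 * (depth - target_depth)       # proportional term on range error
    return v, omega
```

With only three rules this degenerates to a smooth interpolated steering law, which is the usual behavior of small fuzzy controllers.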

Real-time Marker-free Motion Capture System to Create an Agent in the Virtual Space (가상 공간에서 에이전트 생성을 위한 실시간 마커프리 모션캡쳐 시스템)

  • 김성은;이란희;박창준;이인호
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.199-202
    • /
    • 2002
  • We describe a real-time 3D computer vision system called MIMIC (Motion Interface for Motion Information Capture) that can capture and save the motion of an actor. The system analyzes input images from vision sensors and searches for feature information such as the head, hands, and feet. It then estimates intermediate joints such as the elbow and knee from this feature information and builds a 3D human model with 20 joints. The virtual human model mimics the motion of the actor in real time. Because it generates intermediate joints for the complete human body, unlike other marker-free motion capture systems, this system can reproduce the actor's movement naturally.


A Comparison of the Moving Time about Gantry (겐트리에 대한 구동 시간의 비교)

  • Kim, Soon Ho;Kim, Chi Su
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.3
    • /
    • pp.135-140
    • /
    • 2017
  • A surface mount technology (SMT) machine picks up electronic components and places them precisely onto PCBs. To do this, the gantry stops in front of a camera installed midway for vision inspection, and then moves on for placement. In this paper, we compare placing a component after inspecting it while stopped in front of the camera with placing it after inspecting it while moving. As a result, this paper shows that the time efficiency of the fly-motion method was 9 percent higher than that of the stop-motion method.

Low-Complexity Motion Estimation for H.264/AVC Through Perceptual Video Coding

  • An, Byoung-Man;Kim, Young-Seop;Kwon, Oh-Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.8
    • /
    • pp.1444-1456
    • /
    • 2011
  • This paper presents a low-complexity algorithm for an H.264/AVC encoder. The proposed motion estimation scheme determines the best coding mode for a given macroblock (MB) by finding motion-blurred MBs and selecting their mode early, before motion estimation, hence saving processing time for these MBs. It has been observed that human vision is more sensitive to the movement of well-structured objects than to the movement of randomly structured objects. This study analyzed permissible perceptual distortions and assigned a larger inter-mode value to regions that are perceptually less sensitive to human vision. Simulation results show that the algorithm can reduce the computational complexity of motion estimation by up to 47.16% while maintaining high compression efficiency.
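The early-selection idea, deciding a cheap coding mode for perceptually forgiving macroblocks before running motion estimation, can be sketched with a simple per-MB activity test. A variance threshold here stands in for the paper's perceptual blur/structure measure; the threshold and block layout are illustrative assumptions.

```python
import numpy as np

def early_mode_mbs(frame, mb=16, var_thresh=50.0):
    """Flag macroblocks whose pixel variance falls below a threshold as
    candidates for an early (large-partition / skip) mode decision, so full
    motion estimation can be bypassed for them. The variance test is an
    illustrative stand-in for a perceptual blur measure."""
    h, w = frame.shape
    flags = np.zeros((h // mb, w // mb), dtype=bool)
    for i in range(h // mb):
        for j in range(w // mb):
            block = frame[i * mb:(i + 1) * mb, j * mb:(j + 1) * mb]
            flags[i, j] = block.var() < var_thresh
    return flags
```

Every flagged MB skips the expensive search over partitions and motion vectors, which is where the complexity saving comes from.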

A New Refinement Method for Structure from Stereo Motion (스테레오 연속 영상을 이용한 구조 복원의 정제)

  • 박성기;권인소
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.11
    • /
    • pp.935-940
    • /
    • 2002
  • For robot navigation and visual reconstruction, structure from motion (SFM) is an active issue in the computer vision community, and its properties are also becoming well understood. In this paper, using a stereo image sequence and the direct method as tools for SFM, we present a new method for overcoming the bas-relief ambiguity. We first show that direct methods based on the optical flow constraint equation are also intrinsically exposed to this ambiguity, even though they introduce robust estimation. Therefore, regarding the motion and depth estimates of the robust direct method as approximations, we suggest a method that refines both the stereo displacement and the motion displacement with sub-pixel accuracy, which is the central process for resolving the ambiguity. Experiments with real image sequences were carried out, and we show that the proposed algorithm improves the estimation accuracy.
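For reference, the optical flow constraint equation the abstract builds on is the linearized brightness-constancy condition: for image intensity $I(x, y, t)$ and flow $(u, v)$,

$$I_x u + I_y v + I_t = 0,$$

where subscripts denote partial derivatives. Direct methods substitute a parametric camera-motion-and-depth model for $(u, v)$ in this constraint and estimate the parameters from image gradients, without explicit feature correspondences; the bas-relief ambiguity appears as a poorly conditioned trade-off between the translation and rotation parameters in that estimation.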

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.8
    • /
    • pp.868-874
    • /
    • 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on extracting an obstacle's features using Lucas-Kanade Optical Flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors have distortion problems because they use a fish-eye lens or mirror, but they enable real-time image processing for mobile robots because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacle; this yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fish-eye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and a map is created. We confirm the reliability of the mapping algorithm by comparing maps obtained using the proposed algorithm with real maps.
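The LKOF step, estimating a motion vector from image gradients, reduces for a single patch to a small least-squares problem. Below is a minimal single-window Lucas-Kanade sketch in plain NumPy; the window size and the test imagery are illustrative, and the paper of course operates on fisheye panoramas rather than a synthetic blob.

```python
import numpy as np

def lucas_kanade_patch(prev, curr, x, y, win=7):
    """Estimate the optical flow (u, v) of the square patch centered at
    (x, y) by solving the Lucas-Kanade least-squares system
    [Ix Iy] [u v]^T = -It over the window."""
    h = win // 2
    p0 = prev[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = curr[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Ix = np.gradient(p0, axis=1).ravel()   # spatial gradients of the patch
    Iy = np.gradient(p0, axis=0).ravel()
    It = (p1 - p0).ravel()                 # temporal difference
    A = np.stack([Ix, Iy], axis=1)
    flow, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return flow                             # (u, v)
```

In the SLAM pipeline each such per-obstacle flow vector then feeds the Extended Kalman Filter as a measurement of relative obstacle motion.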