• Title/Summary/Keyword: vision-based method


Automation of a Teleoperated Microassembly Desktop Station Supervised by Virtual Reality

  • Ferreira, Antoine;Fontaine, Jean-Guy;Hirai, Shigeoki
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.4 no.1
    • /
    • pp.23-31
    • /
    • 2002
  • We propose a concept of a desktop micro-device factory for visually servoed, teleoperated microassembly assisted by a virtual reality (VR) interface. It is composed of two micromanipulators equipped with micro tools operating under a light microscope. First, a manipulation control method that makes a micro object follow a planned trajectory during pushing operations is proposed, based on vision-based position control. Then, we present a cooperative control strategy for micro handling operations under vision-based force control, integrating a sensor-fusion framework. A guiding system based on a virtual micro-world, exactly reconstructed from the CAD-CAM databases of the real environment under consideration, is presented to cope with the imprecisely calibrated micro world. Finally, experimental results of microassembly tasks performed on millimeter-sized components are provided.

A vision based people tracking and following for mobile robots using CAMSHIFT and KLT feature tracker (캠시프트와 KLT특징 추적 알고리즘을 융합한 모바일 로봇의 영상기반 사람추적 및 추종)

  • Lee, S.J.;Won, Mooncheol
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.7
    • /
    • pp.787-796
    • /
    • 2014
  • Many mobile robot navigation methods utilize laser scanners, ultrasonic sensors, vision cameras, and so on for obstacle detection and path following. Humans, however, rely only on visual information from their eyes for navigation. In this paper, we study a mobile robot control method based only on camera vision. A Gaussian mixture model and a shadow removal technique are used to separate the foreground from the background in the camera image. The mobile robot follows a person using a combination of the CAMSHIFT and KLT feature tracking algorithms applied to the foreground information. The algorithm is verified by experiments in which a person is tracked and followed by the robot in a hallway.
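
The CAMSHIFT half of such a tracker is, at its core, a mean-shift of the search window toward the centroid of a color back-projection map (the KLT half tracks sparse features between frames). A minimal NumPy sketch of that mean-shift step on a synthetic back-projection, with illustrative window sizes that are not from the paper:

```python
import numpy as np

def mean_shift(prob, window, n_iter=10):
    """Shift a search window to the centroid of a probability
    (back-projection) map until it stops moving -- the core step
    that CAMSHIFT repeats while also adapting the window size."""
    x, y, w, h = window  # top-left corner and size, in pixels
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:                   # no target mass under the window
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * roi).sum() / total    # centroid inside the window
        cy = (ys * roi).sum() / total
        nx = int(round(x + cx - w / 2))  # re-center the window on it
        ny = int(round(y + cy - h / 2))
        nx = min(max(nx, 0), prob.shape[1] - w)
        ny = min(max(ny, 0), prob.shape[0] - h)
        if (nx, ny) == (x, y):           # converged
            break
        x, y = nx, ny
    return (x, y, w, h)

# Synthetic back-projection: a bright 10x10 blob (the "person")
# centered near (60, 40); start the window slightly off-target.
prob = np.zeros((100, 100))
prob[35:45, 55:65] = 1.0
window = mean_shift(prob, (45, 28, 20, 20))
```

In the actual tracker the probability map would come from the hue histogram of the detected foreground, and the KLT features help re-initialize the window when mean-shift loses the person.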

A Study of Line Recognition and Driving Direction Control On Vision based AGV (Vision을 이용한 자율주행 로봇의 라인 인식 및 주행방향 결정에 관한 연구)

  • Kim, Young-Suk;Kim, Tae-Wan;Lee, Chang-Goo
    • Proceedings of the KIEE Conference
    • /
    • 2002.07d
    • /
    • pp.2341-2343
    • /
    • 2002
  • This paper describes vision-based line recognition and driving-direction control for an AGV (autonomous guided vehicle). A black stripe attached to the corridor floor is used as the navigation guide, and a binary image of the guide stripe is captured by a CCD camera. To detect the guideline quickly and exactly, a variable thresholding algorithm is used. This low-cost line-tracking system runs efficiently using PC-based real-time vision processing. Steering control is achieved by a controller driven by the guide-line angle error. The method is tested on a typical AGV with a single camera in a laboratory environment.
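
The variable-thresholding step can be sketched as comparing each pixel against its local mean, which keeps the dark stripe detectable even when corridor lighting drifts across the frame; the block size and offset below are illustrative guesses, not the paper's values:

```python
import numpy as np

def adaptive_threshold(img, block=15, c=5.0):
    """Mark pixels darker than their local block mean by more than c
    (a simple variable-thresholding scheme; a fixed global threshold
    would fail under the corridor's lighting gradient)."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    # Integral image for O(1) local sums per pixel.
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    s = (ii[block:, block:] - ii[:-block, block:]
         - ii[block:, :-block] + ii[:-block, :-block])
    local_mean = s / (block * block)
    return img < local_mean - c    # True on the dark guide stripe

# Synthetic corridor image: brightness drifts from 100 to 200 down the
# frame, with a dark guide stripe in columns 48..51.
img = np.tile(np.linspace(100, 200, 100)[:, None], (1, 100))
img[:, 48:52] -= 40
mask = adaptive_threshold(img)

# Per-row stripe centroid, then a line fit gives the heading error.
cols = np.array([np.mean(np.where(row)[0]) for row in mask])
slope = np.polyfit(np.arange(len(cols)), cols, 1)[0]
angle_err = np.degrees(np.arctan(slope))   # ~0 for a vertical stripe
```

The steering controller the abstract mentions would then act on `angle_err` (and the stripe's lateral offset) each frame.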


Estimation of Angular Acceleration By a Monocular Vision Sensor

  • Lim, Joonhoo;Kim, Hee Sung;Lee, Je Young;Choi, Kwang Ho;Kang, Sung Jin;Chun, Sebum;Lee, Hyung Keun
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.3 no.1
    • /
    • pp.1-10
    • /
    • 2014
  • Recently, the monitoring of two-body ground vehicles carrying extremely hazardous materials has been considered one of the most important national issues, as accidents involving them incur large costs in terms of the national economy and social welfare. To monitor accidents and respond to them promptly, an efficient methodology is required. For accident monitoring, GPS can be utilized in most cases. However, it is widely known that GPS cannot provide sufficient continuity in urban canyons and tunnels. To complement this weakness of GPS, this paper proposes an accident monitoring method based on a monocular vision sensor. The proposed method estimates angular acceleration from a sequence of image frames captured by the sensor. The possibility of using angular acceleration to detect accidents such as jackknifing and rollover is investigated. The feasibility of the proposed method is evaluated through an experiment based on actual measurements.
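
The quantity being estimated reduces to a second time-derivative of the heading extracted from successive frames. A minimal sketch (the vision front-end that produces the heading sequence is assumed, not shown):

```python
import numpy as np

def angular_acceleration(theta, dt):
    """Angular acceleration from a sampled heading sequence via the
    second central difference: (th[k+1] - 2 th[k] + th[k-1]) / dt^2."""
    theta = np.asarray(theta, dtype=float)
    return (theta[2:] - 2.0 * theta[1:-1] + theta[:-2]) / dt ** 2

# Headings a vision front-end might report at 10 Hz while the trailer
# yaws with a constant angular acceleration of 3 rad/s^2.
dt = 0.1
t = np.arange(0, 1, dt)
theta = 0.5 * 3.0 * t ** 2
alpha = angular_acceleration(theta, dt)   # ~3.0 throughout
```

A monitoring rule could then flag jackknifing or rollover when |alpha| exceeds a vehicle-specific limit; the threshold itself would have to come from experiments like the paper's.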

Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation (비전 센서를 갖는 이동 로봇의 복도 주행 시 직진 속도 제어)

  • Kwon, Ji-Wook;Hong, Suk-Kyo;Chwa, Dong-Kyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.9
    • /
    • pp.896-902
    • /
    • 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on controlling mobile robots using camera vision information, the forward velocity is set to a constant and only the rotational velocity of the robot is controlled. More efficient motion, however, requires controlling the forward velocity as well, depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method, such that the mobile robot can move faster when the corner of the corridor is far away and slow down as it approaches the dead end of the corridor. In this way, smooth turning motion along the corridor becomes possible. To this end, visual information from the camera is used to obtain the perspective lines and the distance from the current robot position to the dead end. The vanishing point and a pseudo desired position are then obtained, and the forward and rotational velocities are controlled by the LOS (Line Of Sight) guidance law. Both numerical and experimental results are included to demonstrate the validity of the proposed method.
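
The two geometric ingredients, the vanishing point of the corridor's perspective lines and a speed law that slows the robot near the dead end, can be sketched as follows; the gains and the homogeneous line coordinates are illustrative, not the paper's:

```python
import numpy as np

def vanishing_point(l1, l2):
    """Intersection of two image lines in homogeneous form (a, b, c),
    each satisfying a*x + b*y + c = 0."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def guidance(vp_x, img_cx, dist_to_end, v_max=0.5, k=0.004, d_slow=2.0):
    """LOS-style law: steer toward the vanishing point, and scale the
    forward speed down once the corridor end is within d_slow meters."""
    omega = -k * (vp_x - img_cx)                  # rad/s
    v = v_max * min(1.0, dist_to_end / d_slow)    # m/s
    return v, omega

# Corridor edges of a 640-px-wide image: y = x and y = 640 - x meet
# at the image center, so a centered robot needs no steering.
left = np.array([1.0, -1.0, 0.0])
right = np.array([1.0, 1.0, -640.0])
vp = vanishing_point(left, right)        # (320, 320)
v, omega = guidance(vp[0], 320.0, dist_to_end=5.0)
```

The distance-dependent speed term is what distinguishes this scheme from the constant-forward-velocity controllers the abstract criticizes.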

Visual Servoing of a Mobile Manipulator Based on Stereo Vision

  • Lee, H.J.;Park, M.G.;Lee, M.C.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.767-771
    • /
    • 2003
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute its position using the stereo vision system. While a monocular vision system needs additional properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without additional information. Many algorithms have been studied and developed for object recognition. However, most of these approaches suffer from computational complexity and are inadequate for real-time visual servoing. In contrast, color information is useful for simple recognition in real-time visual servoing. This paper addresses object recognition using colors, the stereo matching method, recovery of 3D space, and visual servoing.


Visual Servoing of a Mobile Manipulator Based on Stereo Vision (스테레오 영상을 이용한 이동형 머니퓰레이터의 시각제어)

  • Lee, Hyun Jeong;Park, Min Gyu;Lee, Min Cheol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.5
    • /
    • pp.411-417
    • /
    • 2005
  • In this study, a stereo vision system is applied to a mobile manipulator for effective task execution. The robot can recognize a target and compute the position of the target using the stereo vision system. While a monocular vision system needs additional properties such as the geometric shape of a target, a stereo vision system enables the robot to find the position of a target without additional information. Many algorithms have been studied and developed for object recognition. However, most of these approaches suffer from computational complexity and are inadequate for real-time visual servoing. Color information, in contrast, is useful for simple recognition in real-time visual servoing. This paper addresses object recognition using colors, a stereo matching method that reduces calculation time, recovery of 3D space, and visual servoing.
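
The "recovery of 3D space" step that both versions of this work describe rests on the rectified-stereo relation Z = fB/d; a minimal sketch with illustrative camera numbers (not the authors' rig):

```python
import numpy as np

def point_from_disparity(xl, yl, xr, f, cx, cy, baseline):
    """Recover a 3-D point (camera frame, meters) from a matched pixel
    pair in rectified left/right images: Z = f * B / (xl - xr)."""
    d = xl - xr                      # disparity in pixels
    Z = f * baseline / d
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return np.array([X, Y, Z])

# Illustrative rig: 700 px focal length, 10 cm baseline, 640x480 image.
p = point_from_disparity(xl=425.0, yl=240.0, xr=390.0,
                         f=700.0, cx=320.0, cy=240.0, baseline=0.10)
```

Color-based recognition then only has to supply the matched pixel pair (e.g. the centroid of the same color blob in both views) for the servo loop to obtain a target position, which is why it suits real-time use.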

Development of a SLAM System for Small UAVs in Indoor Environments using Gaussian Processes (가우시안 프로세스를 이용한 실내 환경에서 소형무인기에 적합한 SLAM 시스템 개발)

  • Jeon, Young-San;Choi, Jongeun;Lee, Jeong Oog
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.11
    • /
    • pp.1098-1102
    • /
    • 2014
  • Localization of aerial vehicles and map building of flight environments are key technologies for the autonomous flight of small UAVs. In outdoor environments, an unmanned aircraft can easily use GPS (Global Positioning System) for localization with acceptable accuracy. However, as GPS is not available in indoor environments, a SLAM (Simultaneous Localization and Mapping) system suitable for small UAVs is needed. In this paper, we suggest a vision-based SLAM system that uses vision sensors and an AHRS (Attitude Heading Reference System) sensor. Feature points in the images captured by the vision sensor are obtained using a GPU (Graphics Processing Unit)-based SIFT (Scale-Invariant Feature Transform) algorithm. These feature points are then combined with attitude information from the AHRS to estimate the position of the small UAV. Based on the location information and color distribution, a Gaussian process model is generated, which serves as the map. The experimental results show that the position of a small unmanned aircraft is estimated properly and that a map of the environment is constructed by the proposed method. Finally, the reliability of the proposed method is verified by comparing the estimated values with the actual values.
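
The map-building step fits a Gaussian process over the estimated positions; a pure-NumPy sketch of GP regression with an RBF kernel (the hyperparameters and the scalar map value are placeholders, not the paper's):

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-3):
    """Posterior mean of GP regression with an RBF kernel:
    m(x*) = k(x*, X) (K + noise * I)^-1 y."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    return k(X_test, X_train) @ np.linalg.solve(K, y_train)

# Estimated UAV positions with an observed map value (e.g. occupancy
# or a color statistic) at each; query a nearby location.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 0.0, 0.0])
m = gp_predict(X, y, np.array([[0.05, 0.0]]))
```

The GP's interpolation between sparse pose estimates is what lets a continuous map emerge from a handful of localized observations.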

Simultaneous Tracking of Multiple Construction Workers Using Stereo-Vision (다수의 건설인력 위치 추적을 위한 스테레오 비전의 활용)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.7 no.1
    • /
    • pp.45-53
    • /
    • 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio-frequency technologies that require attaching tags to every target entity, and implementing them incurs time and costs for attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking has been presented in a previous research work in which the location of construction manpower, vehicles, and materials was successfully tracked. However, that system is still in its infancy and not yet ready for practical applications, for two reasons. First, it does not involve entity matching across two views, and thus cannot track multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity matching procedure is added to acquire the matching pairs of the same entities across two views, which is necessary for tracking multiple entities. The proposed method also simplifies the calibration process by avoiding the use of a checkerboard, making it more suitable for realistic deployment on construction sites.
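
Once an entity is matched across the two views, its 3-D location follows from linear (DLT) triangulation; a self-contained sketch with illustrative camera matrices (the paper's checkerboard-free calibration is not reproduced here):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point matched across two views;
    P1, P2 are 3x4 projection matrices, x1, x2 pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector of A = the point
    X = Vt[-1]
    return X[:3] / X[3]

# Two cameras 1 m apart along x with identical intrinsics, watching a
# worker 5 m away (all numbers illustrative).
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
worker = np.array([0.5, 0.2, 5.0])
xh = np.append(worker, 1.0)
x1 = (P1 @ xh)[:2] / (P1 @ xh)[2]
x2 = (P2 @ xh)[:2] / (P2 @ xh)[2]
est = triangulate(P1, P2, x1, x2)
```

Entity matching across views then amounts to pairing detections so that the epipolar constraint (here, equal image rows) and appearance agree before triangulating each pair.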

A fast high-resolution vibration measurement method based on vision technology for structures

  • Son, Ki-Sung;Jeon, Hyeong-Seop;Chae, Gyung-Sun;Park, Jae-Seok;Kim, Se-Oh
    • Nuclear Engineering and Technology
    • /
    • v.53 no.1
    • /
    • pp.294-303
    • /
    • 2021
  • Various types of sensors are used at industrial sites to measure vibration. As vibration measurement methods have diversified, vibration monitoring methods using camera equipment have recently been introduced. However, owing to the physical limitations of the hardware, their measurement resolution is lower than that of conventional sensors, and real-time processing is difficult because of the extensive image processing involved; as a result, most such methods in practice only monitor status trends. To address these disadvantages, a high-resolution vibration measurement method using image analysis of the edge region of a structure has been reported. While this method exhibits higher resolution than existing camera-based vibration measurement techniques, it requires a significant amount of computation. In this study, a method is proposed for rapidly processing the large amount of image data acquired from vision equipment and measuring the vibration of structures with high resolution. The method is then verified through experiments, which show that it can quickly measure structural vibrations remotely.
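
One cheap way to get sub-pixel motion from an edge region, in the spirit of the edge-based approach this paper builds on, is to track the centroid of the intensity gradient along a 1-D profile crossing the edge; the synthetic sigmoid edge below is illustrative, not the paper's test structure:

```python
import numpy as np

def edge_position(profile):
    """Sub-pixel edge location on a 1-D intensity profile, taken as the
    centroid of the gradient magnitude (differences sit at i + 0.5)."""
    g = np.abs(np.diff(profile.astype(float)))
    idx = np.arange(len(g)) + 0.5
    return (idx * g).sum() / g.sum()

def smooth_edge(n, pos, width=2.0):
    """Synthetic blurred edge (logistic step) centered at pos."""
    x = np.arange(n)
    return 1.0 / (1.0 + np.exp(-(x - pos) / width))

# The edge vibrates sinusoidally by +/- 0.3 px; per-frame estimates
# recover the sub-pixel waveform from 1-D profiles alone.
t = np.arange(100)
true_disp = 0.3 * np.sin(2 * np.pi * t / 25)
est = np.array([edge_position(smooth_edge(64, 32.0 + d)) for d in true_disp])
est = est - est.mean()
```

Reducing each frame to a short 1-D profile is the kind of data reduction that makes fast processing of large image streams plausible; an FFT of `est` would then yield the vibration spectrum.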