• Title/Abstract/Keyword: vision-based method


Pose alignment control of robot using polygonal approximated gripper images (다각 근사화된 그리퍼 영상을 이용한 로봇의 위치 정렬)

  • Park, Kwang-Ho;Kim, Nam-Seong;Kee, Seok-Ho;Kee, Chang-Doo
    • Proceedings of the KSME Conference
    • /
    • 2000.11a
    • /
    • pp.559-563
    • /
    • 2000
  • In this paper we describe a method for aligning a robot gripper using image information. The gripper region is represented in the HSI color model, whose major advantage is independence from brightness. To extract feature points for vision-based position control, we find the corners of the gripper shape using a polygonal approximation method that determines the segment size and curvature at each point. We apply the vision-based scheme to the task of aligning the gripper to a desired position using two RGB cameras. Experiments are carried out to demonstrate the effectiveness of vision-based control using feature points obtained from the polygonal approximation of the gripper.

  • PDF

A study on the rigid body placement task of a robot system based on a computer vision system (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식;유창규;신광수;김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1114-1119
    • /
    • 1995
  • This paper presents the development of an estimation model and a control method based on a new computer vision approach. The proposed control method is accomplished using a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed, based on a model that generalizes the known 4-axis SCARA robot kinematics to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iteration method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. These results show that the control scheme used is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.

  • PDF

A Study on the Rigid Body Placement Task Based on a Robot Vision System (로봇 비젼시스템을 이용한 강체 배치 실험에 대한 연구)

  • 장완식;신광수;안철봉
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.11
    • /
    • pp.100-107
    • /
    • 1998
  • This paper presents the development of an estimation model and a control method based on a new robot vision approach. The proposed control method is accomplished using a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed, based on a model that generalizes the known 4-axis SCARA robot kinematics to accommodate unknown relative camera position and orientation. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iteration method. The method is experimentally tested in two ways: an estimation model test and a three-dimensional rigid body placement task. These results show that the control scheme used is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.

  • PDF

Investigation of the super-resolution methods for vision based structural measurement

  • Wu, Lijun;Cai, Zhouwei;Lin, Chenghao;Chen, Zhicong;Cheng, Shuying;Lin, Peijie
    • Smart Structures and Systems
    • /
    • v.30 no.3
    • /
    • pp.287-301
    • /
    • 2022
  • Machine-vision based structural displacement measurement methods are widely used due to their flexible deployment and non-contact measurement characteristics. The accuracy of vision measurement is directly related to the image resolution. In the field of computer vision, super-resolution reconstruction is an emerging method for improving image resolution. In particular, deep-learning based image super-resolution methods have shown great potential for improving image resolution and thus machine-vision based measurement. In this article, we first review the latest progress of several deep-learning based super-resolution models, together with the public benchmark datasets and performance evaluation indices. Second, we construct a binocular visual measurement platform to measure the distances between adjacent corners on a chessboard, which is universally used as a target when measuring structural displacement via machine-vision based approaches. Then, several typical deep-learning based super-resolution algorithms are employed to improve the visual measurement performance. Experimental results show that super-resolution reconstruction can improve the accuracy of distance measurement between adjacent corners. The experimental results also show that the measurement accuracy improvement of the super-resolution algorithms is not consistent with the existing quantitative performance evaluation indices. Lastly, the current challenges and future trends of super-resolution algorithms for visual measurement applications are pointed out.
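The link between image resolution and measurement accuracy that motivates this work can be illustrated with a toy quantization model: a corner position can only be reported at the nearest pixel center, so a finer (super-resolved) grid reduces the worst-case distance error. The pixel pitches and corner positions below are made up for illustration and are not from the paper.

```python
def quantize(x, pitch):
    # Round a physical coordinate to the nearest pixel center on a grid
    # with the given pitch (physical units per pixel).
    return round(x / pitch) * pitch

true_a, true_b = 3.14, 27.89           # hypothetical corner positions (mm)
true_dist = true_b - true_a

for pitch in (1.0, 0.25):              # native grid vs 4x super-resolved grid
    measured = quantize(true_b, pitch) - quantize(true_a, pitch)
    err = abs(measured - true_dist)
    print(f"pitch={pitch} mm/px: distance error = {err:.3f} mm")
```

Under this toy model the 4x finer grid cuts the quantization error of the corner-to-corner distance, which is the mechanism by which super-resolution can help vision-based measurement; the paper's point is that the actual gain does not track standard image-quality indices.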

A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance (불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구)

  • Jang, W.-S.;Kim, K.-S.;Shin, K.-S.;Joo, C.;Yoon, H.-K.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.12
    • /
    • pp.87-98
    • /
    • 1999
  • This study concentrates on the development of a real-time estimation model and vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done based on an estimation model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation as well as uncertainty of the manipulator. This vision control method is robust and reliable, overcoming the difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method to control the manipulator's position is provided by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.

  • PDF

Development of the Lighting System Design Code for Computer Vision (컴퓨터 비전용 조명 시스템 설계 코드 개발)

  • Ahn, In-Mo;Lee, Kee-Sang
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.51 no.11
    • /
    • pp.514-520
    • /
    • 2002
  • In industrial computer vision systems, the image quality depends on parameters such as the light source, illumination method, optics, and surface properties. Most of these are related to the lighting system, which is usually designed heuristically based on the designer's experience. In this paper, a design code by which the optimal lighting method and light source for computer vision systems can be found is suggested, based on experimental results. The design code is applied to the design of the lighting system for a transistor marking inspection system, and the overall performance of the machine vision system with this lighting system shows the effectiveness of the proposed design code.

Investigation on the Real-Time Environment Recognition System Based on Stereo Vision for Moving Object (스테레오 비전 기반의 이동객체용 실시간 환경 인식 시스템)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.3
    • /
    • pp.143-150
    • /
    • 2008
  • In this paper, we investigate a real-time environment recognition system based on stereo vision for a moving object. The system consists of stereo matching, obstacle detection, and distance estimation. In the stereo matching part, depth maps are obtained from real road images captured by an adjustable-baseline stereo vision system using the belief propagation (BP) algorithm. In the detection part, various obstacles are detected using only the depth map, by means of both the v-disparity and column detection methods, under real road conditions. Finally, in the estimation part, asymmetric parabola fitting with the NCC method improves the obstacle detection estimates. This stereo vision system can be applied to many applications, such as unmanned vehicles and robots.

  • PDF
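The v-disparity representation mentioned in the abstract above can be sketched as a row-wise disparity histogram: the road surface projects to a slanted line in the map, while an obstacle at roughly constant disparity shows up as a concentrated vertical segment. The tiny synthetic depth map below is an assumption for illustration only.

```python
import numpy as np

def v_disparity(disp, max_d):
    """Row-wise disparity histogram.

    vmap[v, d] counts how many pixels in image row v have disparity d,
    which is the standard construction of a v-disparity map.
    """
    h, w = disp.shape
    vmap = np.zeros((h, max_d + 1), dtype=np.int32)
    for v in range(h):
        for d in disp[v]:
            if 0 <= d <= max_d:
                vmap[v, d] += 1
    return vmap

# Toy scene: background at disparity 2, an obstacle patch at disparity 8.
disp = np.full((6, 10), 2, dtype=np.int32)
disp[1:4, 3:7] = 8
vmap = v_disparity(disp, max_d=10)
print(vmap)
```

Rows 1 to 3 accumulate counts in the disparity-8 column, so the obstacle appears as a vertical segment in the map; thresholding such segments is one way obstacles are extracted from the depth map alone, as the abstract describes.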

Vision-based multipoint measurement systems for structural in-plane and out-of-plane movements including twisting rotation

  • Lee, Jong-Han;Jung, Chi-Young;Choi, Eunsoo;Cheung, Jin-Hwan
    • Smart Structures and Systems
    • /
    • v.20 no.5
    • /
    • pp.563-572
    • /
    • 2017
  • The safety of structures is closely associated with their out-of-plane behavior. In particular, long and slender beam structures are increasingly used in design and construction. Therefore, evaluating the lateral and torsional behavior of a structure is important for its safety during construction as well as under service conditions. Current contact measurement methods using displacement meters cannot measure independent movements directly and also require caution when installing the meters. Therefore, in this study, a vision-based system was used to measure the in-plane and out-of-plane displacements of a structure. The image processing algorithm was based on reference objects, including multiple targets, in Lab color space. The captured targets were synchronized using a load indicator connected wirelessly to a data logger system in the server. A laboratory beam test was carried out to compare the displacements and rotation obtained from the proposed vision-based measurement system with those from the current measurement method using string potentiometers. The test results showed that the proposed vision-based measurement system can be applied successfully and easily to evaluating both the in-plane and out-of-plane movements of a beam, including twisting rotation.
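The core of such target-based vision measurement is tracking the centroid of each target blob between frames and scaling the pixel shift to physical units. The sketch below is a minimal illustration under assumed inputs (binary target masks and a known mm-per-pixel scale); it is not the paper's multi-target Lab-color pipeline.

```python
import numpy as np

def target_displacement(mask_ref, mask_cur, mm_per_px):
    """Planar displacement of a target blob between two frames.

    mask_ref, mask_cur: binary images in which the target pixels are 1.
    mm_per_px: calibration scale converting pixel shift to millimetres.
    Returns (dx, dy) in mm.
    """
    def centroid(m):
        ys, xs = np.nonzero(m)
        return np.array([xs.mean(), ys.mean()])
    return (centroid(mask_cur) - centroid(mask_ref)) * mm_per_px

# Hypothetical example: a 2x2-pixel target moves 3 px right and 1 px down.
ref = np.zeros((10, 10)); ref[2:4, 2:4] = 1
cur = np.zeros((10, 10)); cur[3:5, 5:7] = 1
print(target_displacement(ref, cur, mm_per_px=0.5))
```

With several such targets on one cross-section, the differences between their displacements give the rotation (twisting) of the section, which is how multipoint systems recover out-of-plane motion.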

Evaluation of Robot Vision Control Scheme Based on EKF Method for Slender Bar Placement in the Appearance of Obstacles (장애물 출현 시 얇은 막대 배치작업에 대한 EKF 방법을 이용한 로봇 비젼제어기법 평가)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.5
    • /
    • pp.471-481
    • /
    • 2015
  • This paper presents robot vision control schemes using the Extended Kalman Filter (EKF) method for slender bar placement in the presence of obstacles during robot movement. The vision system model used in this study involves six camera parameters ($C_1{\sim}C_6$). To develop the robot vision control scheme, the six parameters are first estimated. Then, based on the estimated parameters, the robot's joint angles are estimated for the slender bar placement. In particular, the robot trajectory affected by obstacles is divided into three obstacle regions: a beginning region, a middle region, and a near-target region. Finally, the effects of the number of obstacles on the proposed robot vision control schemes are investigated in each obstacle region through slender bar placement experiments.
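The EKF measurement update at the heart of such parameter-estimation schemes can be sketched generically. This is a textbook update step, not the paper's six-parameter camera model; the state dimension, measurement model, and noise values below are assumptions for illustration.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update.

    x, P: prior state estimate and covariance.
    z:    measurement vector.
    h:    predicted measurement h(x) at the prior state.
    H:    Jacobian of the measurement function at x.
    R:    measurement noise covariance.
    """
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - h)             # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: directly observed 2-state system with small noise.
x = np.zeros(2)
P = np.eye(2)
z = np.array([1.0, 1.0])
H = np.eye(2)
R = 0.01 * np.eye(2)
x1, P1 = ekf_update(x, P, z, h=H @ x, H=H, R=R)
print(x1)
```

In a camera-parameter setting, `x` would hold the parameters being estimated, `h` and `H` would come from the (linearized) projection model, and repeating this update over successive images yields the sequential estimates the abstract refers to.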

Vision-based Autonomous Semantic Map Building and Robot Localization (영상 기반 자율적인 Semantic Map 제작과 로봇 위치 지정)

  • Lim, Joung-Hoon;Jeong, Seung-Do;Suh, Il-Hong;Choi, Byung-Uk
    • Proceedings of the KIEE Conference
    • /
    • 2005.10b
    • /
    • pp.86-88
    • /
    • 2005
  • An autonomous semantic-map building method is proposed, with the robot localized in the semantic map. Our semantic map is organized by objects represented as SIFT features, and vision-based relative localization is employed as a process model to implement extended Kalman filters. Thus, we expect that robust SLAM performance can be obtained even under poor conditions in which localization cannot be achieved by classical odometry-based SLAM.

  • PDF
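Associating observed features with map objects, as in the SIFT-based semantic map above, typically relies on nearest-neighbor descriptor matching with Lowe's ratio test: a match is kept only when the best candidate is clearly closer than the second best. The sketch below uses toy 2-D descriptors as an assumption; real SIFT descriptors are 128-dimensional, but the test is identical.

```python
import numpy as np

def ratio_match(query_desc, map_desc, ratio=0.8):
    """Match each query descriptor to the map using Lowe's ratio test.

    Returns (query_index, map_index) pairs for unambiguous matches only.
    """
    matches = []
    for i, d in enumerate(query_desc):
        dists = np.linalg.norm(map_desc - d, axis=1)
        j, k = np.argsort(dists)[:2]       # best and second-best candidates
        if dists[j] < ratio * dists[k]:    # accept only clear winners
            matches.append((i, int(j)))
    return matches

# Toy map of three object descriptors and two observations:
# the first observation is unambiguous, the second is equidistant
# from two map entries and must be rejected.
map_desc = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 0.0]])
obs_desc = np.array([[0.1, 0.0], [2.5, 0.0]])
print(ratio_match(obs_desc, map_desc))
```

Discarding ambiguous associations in this way keeps bad correspondences out of the EKF update, which is one reason feature-based localization can stay robust where odometry alone drifts.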