

A 160×120 Light-Adaptive CMOS Vision Chip for Edge Detection Based on a Retinal Structure Using a Saturating Resistive Network

  • Kong, Jae-Sung;Kim, Sang-Heon;Sung, Dong-Kyu;Shin, Jang-Kyoo
    • ETRI Journal / v.29 no.1 / pp.59-69 / 2007
  • We designed and fabricated a vision chip for edge detection with a 160×120 pixel array using 0.35 μm standard complementary metal-oxide-semiconductor (CMOS) technology. The designed vision chip is based on a retinal structure with a resistive network to improve operation speed. To improve the quality of the final edge images, we applied a saturating resistive circuit to the resistive network. The light-adaptation mechanism of the edge detection circuit was quantitatively analyzed using a simple model of the saturating resistive element. To verify the improvement, we compared simulation results of the proposed circuit with those of previous circuits.
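The retinal smoothing-and-differencing idea behind the chip can be illustrated numerically. Below is a hypothetical 1-D sketch (not the chip's actual circuit equations): each node diffuses toward its neighbors through a conductance that falls off as the local voltage difference saturates, so a step edge is preserved while gradual variation is smoothed, and the edge signal is the difference between the photoreceptor input and the network output.

```python
import numpy as np

def saturating_conductance(dv, g0=1.0, vsat=0.5):
    # Conductance drops as the voltage difference grows, mimicking a
    # saturating resistive element: large edges are smoothed less.
    return g0 / (1.0 + (dv / vsat) ** 2)

def smooth(photo, iters=200, dt=0.1):
    """Relax a 1-D saturating resistive network toward steady state."""
    v = photo.copy()
    for _ in range(iters):
        vp = np.pad(v, 1, mode="edge")
        dv_l = vp[:-2] - v                      # difference to left neighbor
        dv_r = vp[2:] - v                       # difference to right neighbor
        flow = (saturating_conductance(dv_l) * dv_l
                + saturating_conductance(dv_r) * dv_r)
        v = v + dt * (flow + (photo - v))       # leak toward photoreceptor input
    return v

photo = np.concatenate([np.zeros(20), np.ones(20)])  # a step edge at index 20
edges = photo - smooth(photo)                        # retina-style edge signal
print(int(np.argmax(np.abs(edges))))                 # strongest response at the step
```

The saturating conductance is what distinguishes this from plain Gaussian smoothing: the edge stays sharp in the smoothed image, so the difference signal is localized.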

Development of Non-Contacting Automatic Inspection Technology of Precise Parts (정밀부품의 비접촉 자동검사기술 개발)

  • Lee, Woo-Sung;Han, Sung-Hyun
    • Transactions of the Korean Society of Machine Tool Engineers / v.16 no.6 / pp.110-116 / 2007
  • This paper presents a new technique for real-time recognition of the shapes and model numbers of parts based on an active vision approach. The main focus of this paper is to apply 3D object recognition to the non-contacting inspection of the shape and external form of precision parts based on pattern recognition. In the field of computer vision, there have been many object recognition approaches, and most of them focus on recognition from a single given input image (passive vision). It is, however, hard to distinguish an object among model objects that look similar to each other. Recently, active vision has been perceived as one of the promising approaches to realizing a robust object recognition system. The performance is illustrated by experiments on several parts and models.
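The core idea of active vision, choosing the next viewpoint rather than relying on one fixed image, can be sketched in a few lines. The feature table and view indices below are purely illustrative, not the paper's recognition pipeline: given candidate models that look alike from the current view, pick the view where their stored features differ most.

```python
import numpy as np

def next_best_view(candidates, features):
    """features[model, view] -> a scalar appearance feature.
    Return the view index where the still-ambiguous candidate
    models are most spread apart (toy active-vision sketch)."""
    n_views = features.shape[1]
    spreads = [float(np.ptp(features[candidates, v])) for v in range(n_views)]
    return int(np.argmax(spreads))

# Two part models that look identical from view 0 but differ from view 2.
features = np.array([
    [1.0, 1.1, 0.2],   # model A as seen from views 0, 1, 2
    [1.0, 1.2, 0.9],   # model B
])
print(next_best_view([0, 1], features))  # → 2
```

A passive system stuck at view 0 cannot separate A from B; moving the camera to the view returned here resolves the ambiguity with one more image.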

Experiments of Urban Autonomous Navigation using Lane Tracking Control with Monocular Vision (도심 자율주행을 위한 비전기반 차선 추종주행 실험)

  • Suh, Seung-Beum;Kang, Yeon-Sik;Roh, Chi-Won;Kang, Sung-Chul
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.480-487 / 2009
  • Vision-based autonomous lane detection is a difficult problem because of varying road conditions, such as shadowed road surfaces, varying lighting conditions, and signs painted on the road. In this paper, we propose a robust lane detection algorithm that overcomes the shadowed-road problem using a statistical method. The algorithm is applied to a vision-based mobile robot system, and the robot follows the lane using a lane-following controller. In parallel with the lane-following controller, the global position of the robot is estimated by the developed localization method to identify locations where the lane is discontinued. The results of experiments, conducted in a region where GPS measurement is unreliable, show good performance in detecting and following the lane under complex conditions with shadows, water marks, and so on.
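One way statistics can make lane detection robust to shadows is to threshold each image row against its own intensity distribution instead of a fixed global value. The sketch below is a minimal illustration of that principle (the specific per-row mean-plus-k-sigma rule and the synthetic image are assumptions, not the paper's algorithm):

```python
import numpy as np

def detect_lane_pixels(gray, k=2.0):
    """Mark pixels brighter than mean + k*std of their own row.
    Per-row statistics adapt the threshold to shading: a shadowed
    row has a lower mean, so lane paint still stands out."""
    mean = gray.mean(axis=1, keepdims=True)
    std = gray.std(axis=1, keepdims=True)
    return gray > mean + k * std

# Synthetic road: dark asphalt (50) with a bright lane stripe at column 12,
# and the bottom half uniformly darkened as if in shadow.
road = np.full((20, 32), 50.0)
road[:, 12] = 200.0
road[10:] *= 0.4
mask = detect_lane_pixels(road)
cols = sorted({int(c) for c in np.where(mask)[1]})
print(cols)  # → [12]
```

A fixed threshold tuned for the sunlit half would miss the darkened stripe in the shadowed rows; the row-relative rule finds it in both.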

Vision-based Method for Estimating Cable Tension Using the Stay Cable Shape (사장재 케이블 형태를 이용하여 케이블 장력을 추정하는 영상기반 방법)

  • Jin-Soo Kim;Jae-Bong Park;Deok-Keun Lee;Dong-Uk Park;Sung-Wan Kim
    • Journal of the Korea institute for structural maintenance and inspection / v.28 no.1 / pp.98-106 / 2024
  • Due to advancements in construction technology and analytical tools, an increasing number of cable-stayed bridges have been designed and constructed in recent years. A cable is a structural element that transmits the main load of a cable-stayed bridge and plays the most crucial role in reflecting the overall condition of the entire bridge system. In this study, a vision-based method was applied to estimate the tension of stay cables located at a long distance. To measure the response of a cable using a vision-based method, it is usually necessary to install feature points or targets on the cable. However, depending on the location of the point to be measured, there may be no feature points on the cable, and there may also be limitations in installing a target on it. Hence, a way of measuring cable response is needed that overcomes these limitations of existing vision-based methods. This study proposes a method for measuring cable responses by utilizing the characteristics of the cable shape. The proposed method extracts the cable shape from the acquired image and determines the center of the extracted shape to measure the cable response. The natural frequencies of the vibration modes were extracted from the measured responses, and the tension was estimated by applying them to the vibration method. To verify the reliability of the vision-based method, cable images were obtained from the Hwatae Bridge in service under ambient vibration conditions. The reliability of the proposed method was confirmed by applying the vision-based responses to the vibration method, resulting in estimated tensions with an error of less than 1% compared to tensions estimated using an accelerometer.
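The vibration method the abstract refers to commonly rests on the taut-string relation f_n = (n / 2L)·sqrt(T / m), inverted to T = 4 m L² (f_n / n)². The sketch below uses hypothetical cable parameters (length, mass, tension are invented, and sag and bending stiffness, which practical procedures correct for, are neglected):

```python
import numpy as np

def tension_from_frequencies(freqs_hz, mode_numbers, mass_per_m, length_m):
    """Taut-string vibration method: T = 4 m L^2 (f_n / n)^2.
    Averaging over several modes reduces the effect of noise in
    any single identified natural frequency."""
    f = np.asarray(freqs_hz, dtype=float)
    n = np.asarray(mode_numbers, dtype=float)
    return float(np.mean(4.0 * mass_per_m * length_m**2 * (f / n) ** 2))

# Hypothetical cable: 100 m long, 60 kg/m, true tension 3.0 MN.
m, L, T_true = 60.0, 100.0, 3.0e6
f1 = np.sqrt(T_true / m) / (2 * L)        # fundamental frequency of a taut string
freqs = [f1 * n for n in (1, 2, 3)]       # ideal harmonic series
T_est = tension_from_frequencies(freqs, (1, 2, 3), m, L)
print(round(T_est))                       # recovers the true 3.0 MN
```

In the study's setting, the frequencies fed into this formula come from spectra of the image-measured cable-center response rather than an accelerometer.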

Unconscious Personal Recognition Method using Personal Footprint (발자국 정보를 이용한 무의식적 개인 식별 방법)

  • 정진우;김대진;박광현;변증남
    • Proceedings of the IEEK Conference / 2002.06e / pp.137-140 / 2002
  • We introduce a personal identification method that can determine a user's ID without any action by the user. There have been two approaches to this: vision-based and pressure-based. The pressure-based approach has advantages over the vision-based one with respect to illumination, occlusion, and the amount of data. Previous studies on pressure-based personal identification placed restrictions on body posture for extracting normalized footprints. Since such an approach cannot be extended to unconscious and continuous identification, we propose a more natural method and verify it by experiments.
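The identification step in such systems is often a nearest-neighbor match of a footprint feature vector against stored per-user templates. The feature set and template values below are purely illustrative, not the paper's:

```python
import numpy as np

def identify(footprint_features, templates):
    """Return the user whose stored template is nearest (Euclidean
    distance) to the measured pressure-feature vector."""
    best_id, best_d = None, float("inf")
    for user_id, tmpl in templates.items():
        d = float(np.linalg.norm(footprint_features - tmpl))
        if d < best_d:
            best_id, best_d = user_id, d
    return best_id

# Hypothetical features: contact area, peak pressure, stance time.
templates = {
    "alice": np.array([120.0, 3.1, 0.55]),
    "bob":   np.array([150.0, 2.4, 0.62]),
}
print(identify(np.array([123.0, 3.0, 0.56]), templates))  # → alice
```

Because pressure sensing works in the dark and through occlusion, the same matching loop keeps working where a camera-based feature extractor would fail.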

Vision-Based Robust Control of Robot Manipulators with Jacobian Uncertainty (자코비안 불확실성을 포함하는 로봇 매니퓰레이터의 영상기반 강인제어)

  • Kim, Chin-Su;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.10 no.2 / pp.113-120 / 2006
  • In this paper, a vision-based robust controller for tracking the desired trajectory of a robot manipulator is proposed. The trajectory is generated to move the feature point to the desired position, which the robot follows. To compensate for the parametric uncertainties of the robot manipulator contained in the control input, a robust controller is proposed. In addition, to compensate for uncertainties in the Jacobian, a vision-based robust controller with a compensating control input is also proposed. The stability of the closed-loop system is shown by the Lyapunov method. The performance of the proposed method is demonstrated by simulations and experiments on a two-degree-of-freedom 5-link robot manipulator.
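A minimal way to see why control can tolerate Jacobian uncertainty: drive the joints with the pseudo-inverse of a *model* Jacobian and check that the feature error still contracts. The toy linear camera-robot model and the 20% scaling error below are assumptions for illustration; this is plain image-based visual servoing, not the paper's robust control law.

```python
import numpy as np

def ibvs_step(q, feature, target, jacobian, gain=0.5):
    """One image-based visual-servoing step: joint update from the
    pseudo-inverse of the (possibly uncertain) image Jacobian."""
    error = target - feature
    dq = gain * np.linalg.pinv(jacobian) @ error
    return q + dq

# Toy model: the image feature depends linearly on the joints.
J_true = np.array([[2.0, 0.3], [0.1, 1.5]])
J_model = J_true * 1.2            # 20% Jacobian uncertainty
q = np.zeros(2)
target = np.array([1.0, -0.5])
for _ in range(30):
    q = ibvs_step(q, J_true @ q, target, J_model)
err = float(np.linalg.norm(target - J_true @ q))
print(err < 1e-6)                 # converges despite the scaled Jacobian
```

Here the error shrinks by a constant factor per step even though the controller never knows the true Jacobian; the paper's contribution is guaranteeing this kind of convergence for the nonlinear manipulator dynamics via a Lyapunov argument.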

Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures

  • Lee, Jong Jae;Fukuda, Yoshio;Shinozuka, Masanobu;Cho, Soojin;Yun, Chung-Bang
    • Smart Structures and Systems / v.3 no.3 / pp.373-384 / 2007
  • For structural health monitoring (SHM) of civil infrastructure, displacement is a good descriptor of structural behavior under all potential disturbances. However, it is not easy to measure the displacement of civil infrastructure, since conventional sensors need a reference point, and inaccessibility to a reference point is sometimes caused by geographic conditions, such as a highway or river under a bridge, which makes installation of measuring devices time-consuming and costly, if not impossible. To resolve this issue, a vision-based real-time displacement measurement system using digital image processing techniques is developed. The effectiveness of the proposed system was verified by comparing the load-carrying capacities of a steel plate-girder bridge obtained from conventional sensors and from the present system. Further, to measure multiple points simultaneously, a synchronized vision-based system is developed using a master/slave architecture with wireless data communication. For verification, the displacement measured by the synchronized vision-based system was compared with data measured by conventional contact-type sensors, linear variable differential transformers (LVDTs), in a laboratory test.
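The digital image processing at the heart of such a system is typically a target-tracking step: estimate how far a marked region moved between a reference frame and the current frame. A minimal cross-correlation sketch is shown below (integer-pixel only; real systems add sub-pixel refinement and a mm-per-pixel scale factor from camera calibration, and the random image stands in for a camera frame):

```python
import numpy as np

def match_shift(ref, cur):
    """Integer-pixel shift between two frames via FFT-based
    circular cross-correlation; the correlation peak sits at
    the displacement of cur relative to ref."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    # Interpret wrap-around indices as negative shifts.
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                 # stand-in for the reference frame
cur = np.roll(ref, (3, -2), axis=(0, 1))   # target moved 3 px down, 2 px left
print(match_shift(ref, cur))               # → (3, -2)
```

Multiplying the recovered pixel shift by the calibrated physical size of a pixel at the target distance yields displacement in engineering units, which is what gets compared against the LVDT readings.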

Vision-based remote 6-DOF structural displacement monitoring system using a unique marker

  • Jeon, Haemin;Kim, Youngjae;Lee, Donghwa;Myung, Hyun
    • Smart Structures and Systems / v.13 no.6 / pp.927-942 / 2014
  • Structural displacement is an important indicator for assessing structural safety. For structural displacement monitoring, vision-based displacement measurement systems have been widely developed; however, most systems estimate only 1- or 2-DOF translational displacement. To monitor 6-DOF structural displacement with high accuracy, a vision-based displacement measurement system with a uniquely designed marker is proposed in this paper. The system is composed of the marker and a camera with zooming capability, and the relative translational and rotational displacement between the marker and the camera is estimated by finding a homography transformation. The novel marker is designed to make the system robust to measurement noise based on a sensitivity analysis of the conventional marker, which has been verified through Monte Carlo simulations. The performance of the displacement estimation has been verified through two kinds of experimental tests, using a shaking table and a motorized stage. The results show that the system estimates the 6-DOF structural displacement, especially the translational displacement along the Z-axis, with high accuracy in real time and is robust to measurement noise.
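The homography-finding step can be sketched with a plain Direct Linear Transform (DLT) from four marker-corner correspondences. The unit-square corners and pure-translation motion below are assumptions for illustration; the actual system additionally decomposes the homography, using the camera intrinsics, into full 6-DOF rotation and translation, which is omitted here.

```python
import numpy as np

def homography_dlt(src, dst):
    """DLT: estimate H with dst ~ H @ src (homogeneous) from >= 4
    point pairs, as the null vector of the stacked constraints."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the projective scale

corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # marker corners
moved = [(x + 0.2, y - 0.1) for x, y in corners]             # pure image translation
H = homography_dlt(corners, moved)
print(np.round(H[:2, 2], 3))      # recovered translation ≈ [0.2, -0.1]
```

For this pure-translation example the displacement appears directly in the last column of H; general motion requires the decomposition step to separate rotation from translation.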

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera (멀티카메라를 이용한 영상정보 기반의 소형무인기 실내비행시험환경 연구)

  • Won, Dae-Yeon;Oh, Hyon-Dong;Huh, Sung-Sik;Park, Bong-Gyun;Ahn, Jong-Sun;Shim, Hyun-Chul;Tahk, Min-Jea
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.12 / pp.1209-1216 / 2009
  • This paper presents the pose estimation of a small UAV using visual information from low-cost cameras installed indoors. To overcome the limitations of outdoor flight experiments, an indoor flight test environment based on a multi-camera system is proposed. Computer vision algorithms for the proposed system include camera calibration, color marker detection, and pose estimation. The well-known extended Kalman filter is used to obtain accurate position and pose estimates for the small UAV. The paper concludes with several experimental results illustrating the performance and properties of the proposed vision-based indoor flight test environment.
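The filtering backbone of such a tracker can be shown with a single predict/update cycle. The sketch below reduces the problem to a linear Kalman filter on one position axis with a constant-velocity model (the EKF used in the paper additionally linearizes a nonlinear multi-camera measurement model at each step; the dt, Q, and R values are invented):

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One Kalman filter predict/update cycle."""
    x = F @ x                        # predict state
    P = F @ P @ F.T + Q              # predict covariance
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # update with camera measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model (pos, vel)
H = np.array([[1.0, 0.0]])              # cameras observe position only
Q = np.eye(2) * 1e-4
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
for k in range(50):                     # UAV moving at 1 m/s, clean measurements
    x, P = kf_step(x, P, np.array([(k + 1) * dt]), F, H, Q, R)
print(round(float(x[1]), 2))            # velocity estimate approaches the true 1.0 m/s
```

Although only position is measured, the filter infers velocity through the motion model, which is what lets the UAV controller use smooth state estimates rather than raw marker detections.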

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / v.36 no.6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on omnidirectional-vision simultaneous localization and mapping (SLAM), using an object extraction method based on Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map using map data obtained from all of the individual robots. Global mapping takes a long time because map data are exchanged among individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can capture all information around a robot simultaneously. The computational cost of the correction algorithm is reduced compared with existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on omnidirectional-vision SLAM; second, a global map is generated by merging the individual maps from multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the algorithm against real maps.
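The Lucas-Kanade step used for motion detection solves a small least-squares problem from image gradients. The sketch below reduces it to one spatial dimension and a single window for clarity (the Gaussian "blob" signal is an invented stand-in for image intensity; the real method works on 2-D fisheye frames with a 2x2 system per window):

```python
import numpy as np

def lucas_kanade_1d(frame0, frame1, center, radius=3):
    """Single-window Lucas-Kanade: least-squares displacement d
    minimizing sum (Ix*d + It)^2 over the window, from the
    brightness-constancy assumption frame1(x) ≈ frame0(x - d)."""
    ix = np.gradient(frame0)          # spatial gradient
    it = frame1 - frame0              # temporal gradient
    w = slice(center - radius, center + radius + 1)
    return -float(np.sum(ix[w] * it[w]) / np.sum(ix[w] ** 2))

x = np.arange(64, dtype=float)
frame0 = np.exp(-((x - 30.0) ** 2) / 50.0)   # a blob centered at 30
frame1 = np.exp(-((x - 30.5) ** 2) / 50.0)   # the blob moved 0.5 px right
print(round(lucas_kanade_1d(frame0, frame1, 30), 1))  # ≈ 0.5
```

Windows whose recovered flow disagrees with the robot's own motion mark moving objects; restricting the subsequent map correction to those objects' feature points is what keeps the correction cheap.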