• Title/Summary/Keyword: Robot Vision


Fast and Fine Control of a Visual Alignment System Based on the Misalignment Estimation Filter (정렬오차 추정 필터에 기반한 비전 정렬 시스템의 고속 정밀제어)

  • Jeong, Hae-Min;Hwang, Jae-Woong;Kwon, Sang-Joo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1233-1240 / 2010
  • In the flat panel display and semiconductor industries, the visual alignment system is considered a core technology that determines the productivity of a manufacturing line. It consists of a vision system to extract the centroids of alignment marks and a stage control system to compensate for the alignment error. In this paper, we develop a Kalman filter algorithm to estimate the alignment mark postures and propose a coarse-fine alignment control method that uses both the original fine images and reduced coarse ones in the visual feedback. The error compensation trajectory for the distributed joint servos of the alignment stage is generated from the inverse kinematic solution for the misalignment in task space. In constructing the estimation algorithm, the equation of motion for the alignment marks is first derived from the forward kinematics of the alignment stage. Second, the measurements of the alignment mark centroids are obtained from the reduced images by geometric template matching. As a result, the proposed Kalman filter based coarse-fine alignment control method enables a considerable reduction in alignment time.
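The abstract does not state the authors' filter model, so as a minimal sketch of the kind of Kalman estimation it describes, the following tracks one alignment-mark coordinate from noisy centroid measurements; the constant-position motion model and noise variances are illustrative assumptions.

```python
# Scalar Kalman filter sketch for one alignment-mark coordinate.
# q, r, x0, p0 are assumed values, not the paper's parameters.

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=100.0):
    """Filter a sequence of noisy centroid measurements.

    q: process-noise variance, r: measurement-noise variance (assumed).
    Returns the list of posterior state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict (constant-position model: state carries over).
        p = p + q
        # Update with the new centroid measurement.
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # posterior mean
        p = (1.0 - k) * p        # posterior variance
        estimates.append(x)
    return estimates

# Noisy measurements of a mark fixed near 10.0 px converge toward 10.0.
est = kalman_1d([10.2, 9.8, 10.1, 9.9, 10.05])
```

In the coarse-fine scheme described above, such an estimate from reduced coarse images would pre-position the stage before fine images refine the residual error.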

Visual Object Tracking based on Particle Filters with Multiple Observation (다중 관측 모델을 적용한 입자 필터 기반 물체 추적)

  • Koh, Hyeung-Seong;Jo, Yong-Gun;Kang, Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.5 / pp.539-544 / 2004
  • We investigate a visual object tracking algorithm based on particle filters, namely CONDENSATION, that combines multiple observation models: active contours of the digitally subtracted image and particle measurements of object color. The former is applied to matching the contour of the moving target, and the latter is used to independently enhance the likelihood of tracking a particular color of the object. Particle filters are efficient because the tracking mechanism follows the Bayesian inference rule of conditional probability propagation. The experimental results demonstrate that the suggested contour-tracking particle filters are robust in the cluttered environments of robot vision.
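The fusion step described above can be sketched as a CONDENSATION-style weight update in which each particle's weight is the product of two independent observation likelihoods. The toy scoring functions and parameters below are assumptions, not the paper's own models.

```python
import math
import random

def update_weights(particles, contour_score, color_score):
    """Weight each particle by the product of two independent likelihoods."""
    weights = [contour_score(p) * color_score(p) for p in particles]
    total = sum(weights)
    return [w / total for w in weights]

def resample(particles, weights, rng=random.Random(0)):
    """Draw a new particle set proportionally to the weights."""
    return rng.choices(particles, weights=weights, k=len(particles))

# Toy 1-D state: the target sits near x = 5 and both models peak there.
particles = [1.0, 3.0, 5.0, 7.0, 9.0]
contour = lambda x: math.exp(-0.5 * (x - 5.0) ** 2)   # contour-match likelihood
color = lambda x: math.exp(-0.5 * (x - 5.2) ** 2)     # color-match likelihood
w = update_weights(particles, contour, color)
survivors = resample(particles, w)
```

Multiplying the likelihoods assumes the contour and color observations are conditionally independent given the state, which is the usual justification for fusing cues this way.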

Performing Missions of a Minicar Using a Single Camera (단안 카메라를 이용한 소형 자동차의 임무 수행)

  • Kim, Jin-Woo;Ha, Jong-Eun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.12 no.1 / pp.123-128 / 2017
  • This paper deals with performing missions through autonomous navigation using a camera and other sensors. Extracting the pose of the car is necessary to navigate safely within the given road, and a homography is used to find it. The color image is converted into a grey image, and thresholding and edge detection are used to find control points. Two control points are converted into world coordinates using the homography to find the angle and position of the car. Color is used to detect the traffic signal. Experiments confirmed that the given tasks were performed well.
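The pose-extraction step above can be sketched as applying a planar homography to the two control points and taking the heading from the resulting world coordinates. The matrix below is a made-up scale-and-translate example, not the authors' calibration.

```python
import math

def apply_homography(H, u, v):
    """Map image pixel (u, v) to world coordinates via a 3x3 homography H."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w   # dehomogenize

def heading_angle(p1, p2):
    """Heading of the car from two control points in world coordinates."""
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

# Illustrative homography: 0.01 m per pixel plus a 1 m offset in x.
H = [[0.01, 0.0, 1.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
p1 = apply_homography(H, 100, 50)
p2 = apply_homography(H, 100, 150)
angle = heading_angle(p1, p2)
```

With both control points mapped onto the road plane, the lateral offset and heading error fall out directly, which is what the lateral controller needs.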

A study on the real time obstacle recognition by scanned line image (스캔라인 연속영상을 이용한 실시간 장애물 인식에 관한 연구)

  • Cheung, Sheung-Youb;Oh, Jun-Ho
    • Transactions of the Korean Society of Mechanical Engineers A / v.21 no.10 / pp.1551-1560 / 1997
  • This study is devoted to the detection of 3-dimensional point obstacles on a plane using accumulated scan line images. Accumulating only one scan line per frame allows the image to be processed in real time. Because the time between image frames is short, the motion of a feature in the image is small, so tracking features does not take much time. A Kalman filter is used to recursively obtain optimal obstacle positions and robot motion along the motion of the camera. Applying the Kalman filter in a fixed environment yields a 3-dimensional obstacle point map; the position and motion of moving obstacles can also be obtained by pre-segmentation. Finally, to resolve the stereo ambiguity arising from multiple matches, the camera motion is actively used to discard mis-matched features. A parallel stereo camera setup is used to obtain the relative distance of obstacles from the camera. To evaluate the proposed algorithm, experiments were carried out with a small test vehicle.
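For the parallel stereo setup mentioned above, depth follows the standard relation Z = f·B/d for focal length f, baseline B, and disparity d. A minimal sketch, with illustrative numbers rather than the paper's rig parameters:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Distance to a matched feature from its horizontal disparity
    in a parallel (rectified) stereo pair."""
    d = x_left - x_right            # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return focal_px * baseline_m / d

# A feature at column 320 (left) and 300 (right), f = 700 px, B = 0.12 m.
z = stereo_depth(320, 300, focal_px=700, baseline_m=0.12)
```

The ambiguity-resolution idea in the abstract amounts to rejecting matches whose implied depth is inconsistent with the known camera motion between frames.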

3D Image Processing System for a Robotic Milking System (로봇 착유기를 위한 3차원 위치정보획득 시스템)

  • Kim, W.;Kwon, D.J.;Seo, K.W.;Lee, D.W.
    • Journal of Animal Environmental Science / v.8 no.3 / pp.165-170 / 2002
  • This study was carried out to measure the 3D distance of a model cow teat for possible application in a Robotic Milking System (RMS). A teat recognition algorithm was developed to find the 3D distance of the model using Gonzalez's theory. Some of the results are as follows. 1. In the distance measurement experiment on the test board, the error values increased as the measured length, and the length between the center of the image surface and the measured image point, became longer. 2. The model teat was installed and the error was measured at random positions. The error in the X and Y coordinates was less than 5 mm, and that in the Z coordinate was less than 20 mm. The error increased as the distance from the camera increased. 3. The equation for acquiring distance information yielded distances accurate enough for a milking robot to trace teats. The teat recognition algorithm recognized all four model cow teats well, with a processing time of about 1 second. It appears that the algorithm could be used to determine the 3D distance of a cow teat in developing an RMS.


A Comparison of System Performances Between Rectangular and Polar Exponential Grid Imaging System (POLAR EXPONENTIAL GRID와 장방형격자 영상시스템의 영상분해도 및 영상처리능력 비교)

  • Jae Kwon Eem
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.2 / pp.69-79 / 1994
  • The conventional machine vision system, which has a uniform rectangular grid, requires a tremendous amount of computation for processing and analysing an image, especially in 2-D image transformations such as scaling and rotation and in the 3-D recovery problem typical of robot application environments. In this study, an imaging system with nonuniformly distributed image sensors simulating the human visual system, referred to as the Polar Exponential Grid (PEG), is compared with the conventional uniform rectangular grid system in terms of image resolution and computational complexity. By mimicking the geometric structure of the PEG sensor cell, we obtained PEG-like images using computer simulation. With the images obtained from the simulation, the image resolution of the two systems is compared, and some basic image processing tasks such as image scaling and rotation are implemented on the PEG sensor system to examine its performance. Furthermore, the Fourier transform of a PEG image is described and implemented from an image analysis point of view. Also, the range and heading-angle measurement errors usually encountered in 3-D coordinate recovery with a stereo camera system are calculated for the PEG sensor system and compared with those of the uniform rectangular grid system. The PEG imaging system not only reduces the computational requirements but also has scale and rotational invariance in the Fourier spectrum. Hence the PEG system provides a more suitable image coordinate system for image scaling, rotation, and recognition problems. The range and heading-angle measurement errors of the PEG system are smaller than those of the uniform rectangular grid system in the practical measurement range.
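The invariance property claimed above comes from the log-polar nature of the mapping: a pixel (x, y) goes to (log r, θ), so scaling the image shifts the log-radius axis and rotation shifts the angle axis. A minimal sketch of the coordinate transform (grid quantization omitted):

```python
import math

def to_log_polar(x, y):
    """Map Cartesian coordinates to (log-radius, angle).
    Scaling by s becomes a shift of log s along the first axis;
    rotation by phi becomes a shift of phi along the second."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

# Scaling a point by 2 shifts log r by log 2 and leaves theta unchanged.
u1, t1 = to_log_polar(3.0, 4.0)    # r = 5
u2, t2 = to_log_polar(6.0, 8.0)    # r = 10, same direction
```

Because scaling and rotation reduce to translations in these coordinates, their Fourier spectra differ only by a phase factor, which is the source of the scale and rotational invariance noted in the abstract.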


Autonomous Traveling of Unmanned Golf-Car using GPS and Vision system (GPS와 비전시스템을 이용한 무인 골프카의 자율주행)

  • Jung, Byeong Mook;Yeo, In-Joo;Cho, Che-Seung
    • Journal of the Korean Society for Precision Engineering / v.26 no.6 / pp.74-80 / 2009
  • Path tracking of an unmanned vehicle is the basis of autonomous driving and navigation. For path tracking, it is very important to find the exact position of the vehicle. GPS is used to obtain the position of the vehicle, and a direction sensor and a velocity sensor are used to compensate for the position error of GPS. To detect path lines in a road image, the bird's eye view transform is employed, which makes it simpler to design a lateral control algorithm than working from the perspective view of the image. Because the driving speed of the vehicle should be decreased at curved lanes and crossroads, we suggest a speed control algorithm that uses GPS and image data. The control algorithm is simulated and tested on the basis of expert drivers' knowledge data. In the experiments, the results show that the bird's eye view transform works well for steering control, and the speed control algorithm is also stable in real driving.
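The abstract does not give the authors' control law, so as an illustrative sketch of the kind of speed scheduling it describes, the following reduces the target speed as the upcoming turn (e.g. from GPS waypoints) sharpens; the linear schedule, thresholds, and speeds are assumptions.

```python
def target_speed(turn_angle_deg, v_max=20.0, v_min=5.0, angle_limit=45.0):
    """Linearly reduce target speed from v_max (straight road) to v_min
    as the upcoming turn angle approaches angle_limit; clamp beyond it."""
    a = min(abs(turn_angle_deg), angle_limit)
    return v_max - (v_max - v_min) * (a / angle_limit)

# Straight road, gentle curve, sharp curve, crossroads-grade turn.
speeds = [target_speed(a) for a in (0, 15, 45, 90)]
```

A lookup built from expert drivers' recorded speeds at known curvatures, as the abstract suggests, would replace the linear schedule here.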

A Study on ISpace with Distributed Intelligent Network Devices for Multi-object Recognition (다중이동물체 인식을 위한 분산형 지능형네트워크 디바이스로 구현된 공간지능화)

  • Jin, Tae-Seok;Kim, Hyun-Deok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.950-953 / 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space where many intelligent devices, such as computers and sensors, are distributed. For these cooperating intelligent devices to offer useful services, it is very important that the system knows the location information of objects in the environment. To achieve this goal, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd.


A Relative Depth Estimation Algorithm Using Focus Measure (초점정보를 이용한 패턴간의 상대적 깊이 추정알고리즘 개발)

  • Jeong, Ji-Seok;Lee, Dae-Jong;Shin, Yong-Nyuo;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.527-532 / 2013
  • Depth estimation is an essential factor for robot vision, 3D scene modeling, and motion control. The depth estimation method is based on focus values calculated from a series of images taken by a single camera at different distances between the lens and the object. In this paper, we propose a relative depth estimation method using a focus measure. The proposed method computes a focus value for each image obtained at a different lens position and then estimates depth by considering the relative distance of two patterns. We performed various experiments on effective focus measures for depth estimation using various patterns and evaluated their usefulness.
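A depth-from-focus pipeline of the kind outlined above scores each lens position with a focus measure and picks the sharpest. The abstract does not name the authors' measure; the Laplacian-based measure and toy images below are illustrative assumptions.

```python
def focus_measure(img):
    """Sum of absolute 4-neighbour Laplacian responses over interior
    pixels: sharp (high-frequency) images score higher than blurry ones."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (4 * img[i][j] - img[i - 1][j] - img[i + 1][j]
                   - img[i][j - 1] - img[i][j + 1])
            total += abs(lap)
    return total

def best_focus_index(stack):
    """Index of the lens position whose image maximizes the focus measure."""
    scores = [focus_measure(img) for img in stack]
    return scores.index(max(scores))

# Toy focal stack: the middle image is in focus (strong local contrast).
blurry = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
sharp = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]
idx = best_focus_index([blurry, sharp, blurry])
```

The relative depth of two patterns then follows from which lens positions maximize their respective focus curves.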

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3136-3150 / 2015
  • Vision-based 3D tracking of an articulated human hand is one of the major issues in applications of human-computer interaction and understanding the control of a robot hand. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between an actual hand observed by the Kinect and a hypothesized 3D hand model. Since each 3D hand pose has 23 degrees of freedom, hand articulation tracking imposes an excessive computational burden in minimizing the 3D shape discrepancy between an observed hand and a 3D hand model. For this, we first created a 3D hand model that represents the hand with 17 different parts. Second, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model, and was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Through experiments, we showed that the proposed method improves hand part recognition rates and runs at 20-30 fps. The results confirm its practical use in classifying hand areas, and the method successfully tracked and recovered the 3D hand pose in real time.
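Haar-like features of the kind mentioned above are rectangle-sum differences that become constant-time once an integral image is built, which is what makes region-level (rather than per-pixel) classification cheap. The two-rectangle feature and the toy depth patch below are illustrative assumptions, not the paper's exact feature set.

```python
def integral_image(img):
    """Summed-area table with a zero border row and column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            ii[i + 1][j + 1] = (img[i][j] + ii[i][j + 1]
                                + ii[i + 1][j] - ii[i][j])
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle using four table lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

def haar_two_rect(ii, top, left, height, width):
    """Left-half sum minus right-half sum: a vertical edge response."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

# A depth patch with a step edge gives a large (negative) response.
patch = [[10, 10, 50, 50],
         [10, 10, 50, 50]]
ii = integral_image(patch)
resp = haar_two_rect(ii, 0, 0, 2, 4)
```

A trained Random Forest would threshold many such responses at varied positions and scales to vote on which of the 17 hand parts a region belongs to.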