• Title/Summary/Keyword: Robot vision


Loop Closure in a Line-based SLAM (직선기반 SLAM에서의 루프결합)

  • Zhang, Guoxuan; Suh, Il-Hong
    • The Journal of Korea Robotics Society / v.7 no.2 / pp.120-128 / 2012
  • The loop closure problem is one of the most challenging issues in the vision-based simultaneous localization and mapping (SLAM) community: it requires the robot to recognize a previously visited place from its current camera measurements. Whereas previous work has usually relied on a visual bag-of-words built from point features, in this paper we propose a line-based method for loop closure in corridor environments. We use both floor lines and anchored vanishing points as loop-closing features, and we devise a two-step loop closure algorithm that detects a known place and performs a global pose correction. The anchored vanishing point is proposed as a novel loop closure feature because it carries position information and represents vanishing points in both directions. In our system, the accumulated heading error is first reduced using observations of previously registered anchored vanishing points, and observations of known floor lines then allow further pose correction (a minimal sketch of this two-step correction follows below). Experimental results show that our method is an efficient and suitable loop closure solution in structured indoor environments.
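
The two-step correction described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the planar pose model, the function names, and the way the vanishing-point direction and floor-line offset are parameterized are assumptions made purely for illustration.

```python
import numpy as np

def correct_heading(pose_theta, observed_vp_dir, registered_vp_dir):
    """Step 1: reduce the accumulated heading error by aligning the observed
    vanishing-point direction with the direction registered in the map.
    Angles are in radians; the planar pose model is an assumption."""
    error = np.arctan2(np.sin(observed_vp_dir - registered_vp_dir),
                       np.cos(observed_vp_dir - registered_vp_dir))
    return pose_theta - error

def correct_position(pose_xy, observed_offset, registered_offset, line_normal):
    """Step 2: shift the pose along the floor line's normal so that the observed
    perpendicular offset to the line matches the registered one."""
    n = np.asarray(line_normal, dtype=float)
    n /= np.linalg.norm(n)
    return np.asarray(pose_xy, dtype=float) - (observed_offset - registered_offset) * n

# Usage: heading first (anchored vanishing point), then position (known floor line).
theta = correct_heading(pose_theta=0.12, observed_vp_dir=0.10, registered_vp_dir=0.02)
xy = correct_position(pose_xy=[3.0, 1.5], observed_offset=0.45,
                      registered_offset=0.40, line_normal=[0.0, 1.0])
```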

A Study of Efficient Pattern Classification on Texture Feature Representation Coordinate System (텍스처 특징 표현 좌표체계에서의 효율적인 패턴 분류 방법에 대한 연구)

  • Woo, Kyeong-Deok; Kim, Sung-Gook; Baik, Sung-Wook
    • Journal of Korea Multimedia Society / v.13 no.2 / pp.237-248 / 2010
  • Real-world scenes perceived for computer/robot vision purposes contain a great many texture-based patterns. This paper introduces a coordinate system for representing texture features in which many different patterns can be represented with a mathematical model (the Gabor function). Representing the texture features of each pattern on this coordinate system yields high performance in texture pattern classification. A decision tree algorithm is used to classify the pattern data represented on the proposed coordinate system (a sketch of this feature-plus-classifier pipeline follows below). Experimental results on texture pattern classification show that the proposed method outperforms previous approaches.
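
A minimal, generic pipeline in this spirit is sketched below: Gabor-filter responses summarized into a feature vector and fed to a decision tree. The filter-bank parameters and the mean/standard-deviation summary are assumptions; the paper's specific coordinate system is not reproduced.

```python
import numpy as np
import cv2
from sklearn.tree import DecisionTreeClassifier

def gabor_features(gray_image):
    """Summarize responses to a small Gabor filter bank by their mean and
    standard deviation -- a common generic choice, not the paper's exact
    feature representation."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        for lambd in (4.0, 8.0):                          # 2 wavelengths
            kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0)
            response = cv2.filter2D(gray_image.astype(np.float32), cv2.CV_32F, kernel)
            feats.extend([response.mean(), response.std()])
    return np.array(feats)

# Hypothetical usage with lists of grayscale patches and their texture labels:
# X = np.stack([gabor_features(p) for p in patches])
# clf = DecisionTreeClassifier(max_depth=8).fit(X, labels)
# prediction = clf.predict(gabor_features(new_patch).reshape(1, -1))
```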

Fast and Fine Control of a Visual Alignment System Based on the Misalignment Estimation Filter (정렬오차 추정 필터에 기반한 비전 정렬 시스템의 고속 정밀제어)

  • Jeong, Hae-Min; Hwang, Jae-Woong; Kwon, Sang-Joo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1233-1240 / 2010
  • In the flat panel display and semiconductor industries, the visual alignment system is considered a core technology that determines the productivity of a manufacturing line. It consists of a vision system that extracts the centroids of alignment marks and a stage control system that compensates for the alignment error. In this paper, we develop a Kalman filter algorithm to estimate the alignment mark postures and propose a coarse-fine alignment control method that utilizes both the original fine images and reduced coarse ones in the visual feedback. The error compensation trajectory for the distributed joint servos of the alignment stage is generated from the inverse kinematic solution for the misalignment in task space. In constructing the estimation algorithm, the equation of motion for the alignment marks is first given using the forward kinematics of the alignment stage; second, measurements of the alignment mark centroids are obtained from the reduced images by geometric template matching. As a result, the proposed Kalman-filter-based coarse-fine alignment control method enables a considerable reduction of the alignment time (a minimal filtering sketch follows below).
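
The estimation part can be illustrated with a minimal linear Kalman filter over the mark posture. This is a generic sketch, not the paper's formulation: the constant-pose process model, the state parameterization [x, y, theta], the identity measurement model, and the noise levels are assumptions.

```python
import numpy as np

class MarkPoseKalmanFilter:
    """Minimal linear Kalman filter over an alignment-mark posture [x, y, theta].
    The process/measurement models and noise levels are illustrative assumptions."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(3)       # estimated posture
        self.P = np.eye(3)         # estimate covariance
        self.Q = q * np.eye(3)     # process noise
        self.R = r * np.eye(3)     # measurement noise

    def predict(self, stage_motion):
        # stage_motion would come from the alignment stage's forward kinematics.
        self.x = self.x + np.asarray(stage_motion, dtype=float)
        self.P = self.P + self.Q
        return self.x

    def update(self, measured_pose):
        # measured_pose comes from template matching on the (coarse) image.
        K = self.P @ np.linalg.inv(self.P + self.R)                    # Kalman gain
        self.x = self.x + K @ (np.asarray(measured_pose, dtype=float) - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x
```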

Visual Object Tracking based on Particle Filters with Multiple Observation (다중 관측 모델을 적용한 입자 필터 기반 물체 추적)

  • Koh, Hyeung-Seong; Jo, Yong-Gun; Kang, Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.5 / pp.539-544 / 2004
  • We investigate a visual object tracking algorithm based on particle filters, namely CONDENSATION, in order to combine multiple observation models: active contours of a digitally subtracted image and a per-particle measurement of object color. The former is applied to matching the contour of the moving target, and the latter is used to independently enhance the likelihood of tracking a particular color of the object. Particle filters are more efficient than other tracking algorithms because the tracking mechanism follows the Bayesian inference rule of conditional probability propagation (one such filtering step is sketched below). The experimental results demonstrate that the proposed contour-tracking particle filter is robust in the cluttered environments of robot vision.
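
A minimal CONDENSATION-style step combining two observation models multiplicatively might look as follows. The random-walk motion model and the likelihood callables (standing in for the contour-matching and color scores) are assumptions for illustration.

```python
import numpy as np

def particle_filter_step(particles, weights, contour_likelihood, color_likelihood,
                         motion_std=2.0, rng=None):
    """One CONDENSATION-style step; the likelihood callables are placeholders
    for the contour-matching and color scores described in the abstract."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(particles)
    # 1. Resample particles in proportion to their current weights.
    particles = particles[rng.choice(n, size=n, p=weights)]
    # 2. Predict: diffuse with a simple random-walk motion model (an assumption).
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # 3. Measure: combine the contour and color observation models per particle.
    weights = np.array([contour_likelihood(p) * color_likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # 4. Estimate the target state as the weighted mean of the particles.
    estimate = (weights[:, None] * particles).sum(axis=0)
    return particles, weights, estimate
```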

Performing Missions of a Minicar Using a Single Camera (단안 카메라를 이용한 소형 자동차의 임무 수행)

  • Kim, Jin-Woo; Ha, Jong-Eun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.12 no.1 / pp.123-128 / 2017
  • This paper deals with performing missions through autonomous navigation using a camera and other sensors. Extracting the pose of the car is necessary to navigate safely within the given road, and a homography is used to find it. The color image is converted to a grayscale image, and thresholding and edge detection are used to find control points. The two control points are converted into world coordinates using the homography to find the angle and position of the car (a sketch of this step follows below). Color is used to detect the traffic signal. Experiments confirmed that the given tasks were performed well.
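
The control-point-to-pose step might be sketched as below. The precomputed homography H, the choice of near/far control points, and the pose convention (lateral offset plus heading angle) are assumptions; the paper's exact conventions are not reproduced.

```python
import numpy as np
import cv2

def car_pose_from_control_points(H, p_near, p_far):
    """Map two image control points to world (road-plane) coordinates with a
    precomputed 3x3 homography H, then derive a lateral position and heading
    angle. H, the point choice, and the pose convention are assumptions."""
    pts = np.float32([[p_near, p_far]])            # shape (1, 2, 2) for OpenCV
    world = cv2.perspectiveTransform(pts, H)[0]    # two points on the road plane
    (x1, y1), (x2, y2) = world
    heading = np.degrees(np.arctan2(x2 - x1, y2 - y1))  # angle of the path direction
    lateral_offset = x1                            # position of the near control point
    return lateral_offset, heading
```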

A study on the real time obstacle recognition by scanned line image (스캔라인 연속영상을 이용한 실시간 장애물 인식에 관한 연구)

  • Cheung, Sheung-Youb; Oh, Jun-Ho
    • Transactions of the Korean Society of Mechanical Engineers A / v.21 no.10 / pp.1551-1560 / 1997
  • This study is devoted to detecting 3-dimensional point obstacles on a plane by using accumulated scan-line images. Accumulating only a single scan line from each image allows processing in real time (the accumulation is sketched below). Because the time between image frames is short, the motion of a feature in the image is small, so tracking features does not take much time. A Kalman filter is used to recursively obtain optimal estimates of obstacle positions and robot motion as the camera moves. After applying the Kalman filter in a fixed environment, a 3-dimensional obstacle point map is obtained. The positions and motions of moving obstacles can also be obtained by pre-segmentation. Finally, to resolve the stereo ambiguity arising from multiple matches, the camera motion is actively used to discard mismatched features. A parallel stereo camera setup is used to obtain the relative distance of obstacles from the camera. To evaluate the proposed algorithm, experiments are carried out with a small test vehicle.
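
A minimal sketch of accumulating one scan line per frame into a spatio-temporal image is given below. The fixed row index, buffer length, and class interface are assumptions; the paper's Kalman filtering and stereo matching on top of this buffer are not reproduced.

```python
import numpy as np

class ScanLineAccumulator:
    """Accumulate one image scan line per frame into a spatio-temporal image
    (rows = time, columns = image x), in which obstacle features appear as
    traces. The fixed row index, buffer length, and interface are assumptions."""
    def __init__(self, row_index, width, max_frames=200):
        self.row_index = row_index
        self.max_frames = max_frames
        self.buffer = np.zeros((0, width), dtype=np.uint8)

    def add_frame(self, gray_frame):
        # Take a single scan line from the new frame and append it to the buffer.
        line = gray_frame[self.row_index:self.row_index + 1, :]
        self.buffer = np.vstack([self.buffer, line])[-self.max_frames:]
        return self.buffer   # feature tracking / filtering would run on this image
```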

3D Image Processing System for a Robotic Milking System (로봇 착유기를 위한 3차원 위치정보획득 시스템)

  • Kim, W.; Kwon, D.J.; Seo, K.W.; Lee, D.W.
    • Journal of Animal Environmental Science / v.8 no.3 / pp.165-170 / 2002
  • This study was carried out to measure the 3-D distance of a model cow teat to evaluate its applicability to a robotic milking system (RMS). A teat recognition algorithm was developed to find the 3-D distance of the model using Gonzalez's theory (a generic sketch of 3-D point recovery follows below). Some of the results are as follows. 1. In the distance measurement experiment on the test board, the error increased as the measured length, and the distance between the center of the image plane and the measured image point, became longer. 2. The model teat was installed at random positions and the error was measured: the error in the X and Y coordinates was less than 5 mm, and that in the Z coordinate was less than 20 mm; the error increased as the distance from the camera increased. 3. The equation for acquiring distance information yielded distances accurate enough for a milking robot to trace teats, and the teat recognition algorithm recognized the four model cow teats well, with a processing time of about 1 second. It appears that the teat recognition algorithm could be used to determine the 3-D distance of a cow teat in developing an RMS.
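
One common way to recover a 3-D point from image measurements, shown here with a calibrated parallel stereo pair and the standard pinhole model, is sketched below. This generic formulation is an illustrative stand-in and not necessarily the Gonzalez-based derivation used in the paper.

```python
def point_from_parallel_stereo(xl, yl, xr, focal_px, baseline_m, cx, cy):
    """Recover a 3-D point (X, Y, Z) in the left-camera frame from a matched
    feature in a calibrated parallel stereo pair using the pinhole model.
    All parameter names are illustrative assumptions."""
    disparity = xl - xr                    # horizontal disparity in pixels
    Z = focal_px * baseline_m / disparity  # depth along the optical axis
    X = (xl - cx) * Z / focal_px           # lateral offset from the optical center
    Y = (yl - cy) * Z / focal_px           # vertical offset from the optical center
    return X, Y, Z
```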


A Comparison of System Performances Between Rectangular and Polar Exponential Grid Imaging System (POLAR EXPONENTIAL GRID와 장방형격자 영상시스템의 영상분해도 및 영상처리능력 비교)

  • Jae Kwon Eem
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.2 / pp.69-79 / 1994
  • A conventional machine vision system with a uniform rectangular grid requires a tremendous amount of computation for processing and analyzing an image, especially for 2-D image transformations such as scaling and rotation and for the 3-D recovery problems typical of robot application environments. In this study, an imaging system with nonuniformly distributed image sensors simulating the human visual system, referred to as the Polar Exponential Grid (PEG), is compared with the conventional uniform rectangular grid system in terms of image resolution and computational complexity (a log-polar sampling sketch follows below). By mimicking the geometric structure of the PEG sensor cell, we obtained PEG-like images using computer simulation. With the images obtained from the simulation, the image resolution of the two systems is compared, and some basic image processing tasks such as image scaling and rotation are implemented on the PEG sensor system to examine its performance. Furthermore, the Fourier transform of a PEG image is described and implemented from an image analysis point of view. Also, the range and heading-angle measurement errors usually encountered in 3-D coordinate recovery with a stereo camera system are calculated for the PEG sensor system and compared with those obtained from the uniform rectangular grid system. In fact, the PEG imaging system not only reduces the computational requirements but also has a scale and rotation invariance property in the Fourier spectrum. Hence the PEG system provides a more suitable image coordinate system for image scaling, rotation, and image recognition problems. The range and heading-angle measurement errors with the PEG system are smaller than those of the uniform rectangular grid system over the practical measurement range.
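
A minimal sketch of sampling an image onto a polar-exponential (log-polar) grid is given below; on such a grid, scaling and rotation of the input become shifts along the radial and angular axes. The grid sizes and nearest-neighbor sampling are assumptions, not the paper's sensor geometry.

```python
import numpy as np

def log_polar_sample(image, n_rings=64, n_wedges=128):
    """Resample a grayscale image onto a polar-exponential (log-polar) grid:
    ring radii grow exponentially with eccentricity, so scaling and rotation
    of the input become shifts along the output axes. Grid sizes and the
    nearest-neighbor sampling are illustrative assumptions."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cx, cy)
    rho = np.exp(np.linspace(0.0, np.log(max_r), n_rings))         # exponential radii
    theta = np.linspace(0.0, 2 * np.pi, n_wedges, endpoint=False)  # wedge angles
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]                                           # (n_rings, n_wedges)
```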


Autonomous Traveling of Unmanned Golf-Car using GPS and Vision system (GPS와 비전시스템을 이용한 무인 골프카의 자율주행)

  • Jung, Byeong Mook; Yeo, In-Joo; Cho, Che-Seung
    • Journal of the Korean Society for Precision Engineering / v.26 no.6 / pp.74-80 / 2009
  • Path tracking of an unmanned vehicle is the basis of autonomous driving and navigation. For path tracking, it is very important to find the exact position of the vehicle. GPS is used to obtain the vehicle's position, and a direction sensor and a velocity sensor are used to compensate for the GPS position error. To detect path lines in a road image, the bird's-eye-view transform is employed, which makes it simpler to design a lateral control algorithm than working from the perspective view of the image (a sketch of this transform follows below). Because the driving speed of the vehicle should be decreased at curved lanes and crossroads, we propose a speed control algorithm that uses GPS and image data. The control algorithm is simulated and tested on the basis of an expert driver's knowledge data. The experimental results show that the bird's-eye-view transform works well for steering control and that the speed control algorithm is also stable in real driving.
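
A common way to obtain a bird's-eye (top-down) view of the road is to warp the image with a planar perspective homography, sketched below. The four source points (image corners of a known flat rectangle on the road) and the output size are assumptions.

```python
import numpy as np
import cv2

def birds_eye_view(road_image, src_points, dst_size=(400, 600)):
    """Warp a perspective road image onto a top-down (bird's-eye) view with a
    planar homography. src_points are the image corners of a known flat
    rectangle on the road, ordered bottom-left, bottom-right, top-right,
    top-left; both the points and the output size are assumptions."""
    w, h = dst_size
    dst_points = np.float32([[0, h], [w, h], [w, 0], [0, 0]])  # same ordering
    H = cv2.getPerspectiveTransform(np.float32(src_points), dst_points)
    return cv2.warpPerspective(road_image, H, dst_size)

# In the top-down view, lane lines become nearly parallel, so the lateral offset
# and heading error needed for steering control are straightforward to measure.
```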

A Study on ISpace with Distributed Intelligent Network Devices for Multi-object Recognition (다중이동물체 인식을 위한 분산형 지능형네트워크 디바이스로 구현된 공간지능화)

  • Jin, Tae-Seok; Kim, Hyun-Deok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.950-953 / 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. Because these devices cooperate to observe the environment, it is very important that the system knows location information in order to offer useful services. To achieve these goals, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with an application to pedestrian tracking in a crowd.
