• Title/Summary/Keyword: projective invariance

Automatic Edge Detection Method for Mobile Robot Application (이동로봇을 위한 영상의 자동 엣지 검출 방법)

  • Kim Dongsu;Kweon Inso;Lee Wangheon
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.423-428 / 2005
  • This paper proposes a new edge detection method using a $3{\times}3$ ideal binary pattern and a lookup table (LUT) for mobile robot localization without any parameter adjustment. We take the mean of the pixels within the $3{\times}3$ block as a threshold that divides the pixels into two groups. The edge magnitude is calculated as the difference of the average intensities of the two groups, and the edge orientation is obtained by searching the directional code in the LUT. In addition, the input image is partitioned into multiple groups according to intensity similarity using the histogram, and the threshold of each group is determined automatically by fuzzy reasoning. Finally, edges are determined through non-maximum suppression using an edge confidence measure and edge linking. Applying this edge detection method to mobile robot localization using the projective invariance of the cross ratio, we demonstrate the robustness of the proposed method to illumination changes in a corridor environment.
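The block-mean thresholding step described above can be sketched as follows (a minimal illustration; the function name is hypothetical, and the paper's LUT-based orientation lookup, fuzzy thresholding, and edge linking are omitted):

```python
import numpy as np

def block_edge_magnitude(block):
    """Edge magnitude of a 3x3 block: threshold the pixels at the
    block mean, then take the difference of the two group averages."""
    pixels = np.asarray(block, dtype=float).ravel()
    mean = pixels.mean()
    high = pixels[pixels >= mean]
    low = pixels[pixels < mean]
    if len(low) == 0 or len(high) == 0:   # flat block: no edge
        return 0.0
    return high.mean() - low.mean()

# A block containing a vertical step edge
print(block_edge_magnitude([[10, 10, 200],
                            [10, 10, 200],
                            [10, 10, 200]]))  # → 190.0
```

Because only the difference of group averages is used, a uniform brightness shift across the block leaves the magnitude unchanged, which is consistent with the robustness to illumination change claimed above.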

Extraction of Camera Parameters Using Projective Invariance for Virtual Studio

  • Han, Seo-Won;Lee, Joon-Whaon;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.141-146 / 1998
  • Currently, virtual studios use the chromakey method, in which an image is captured and the blue portion of that image is replaced by a graphic image or a real image. The replaced image must be changed according to the camera motion. This paper proposes a novel method to extract camera parameters using the recognition of pentagonal patterns painted on the blue screen. The parameters are the position, direction, and focal length of the camera in the virtual studio. First, the pentagonal patterns are found using the invariant features of the pentagon. Then, the projective transformation between the two projected images and the camera parameters are calculated from the matched points. Simulation results indicate that the camera parameters are calculated more easily than with conventional methods.
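The invariant features mentioned above are ratios in which the unknown projective transformation cancels. A minimal sketch, assuming one standard determinant-ratio invariant of five coplanar points (the paper does not specify its exact invariants, and `five_point_invariant` is a hypothetical name):

```python
import numpy as np

def det3(a, b, c):
    """Determinant of three points in homogeneous coordinates."""
    return np.linalg.det(np.array([a, b, c], dtype=float).T)

def five_point_invariant(p):
    """One projective invariant of five coplanar points p[0..4]
    (each a homogeneous 3-vector): a ratio of 3x3 determinants in
    which the homography determinant and all scale factors cancel."""
    num = det3(p[0], p[1], p[2]) * det3(p[0], p[3], p[4])
    den = det3(p[0], p[1], p[3]) * det3(p[0], p[2], p[4])
    return num / den

# The invariant is unchanged when the points are mapped by a homography.
pts = [np.array([x, y, 1.0]) for x, y in
       [(0, 0), (4, 0), (5, 3), (2, 5), (-1, 3)]]  # pentagon vertices
H = np.array([[1.1, 0.2, 3.0],
              [-0.1, 0.9, 1.0],
              [0.001, 0.002, 1.0]])                # arbitrary homography
warped = [H @ p for p in pts]
print(np.isclose(five_point_invariant(pts), five_point_invariant(warped)))  # → True
```

Comparing such ratios for the observed image points against the known blue-screen pattern is how pentagon recognition can survive arbitrary camera viewpoint changes.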

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.229-232 / 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the problem of object grasping, based on a stand-alone binocular system. The basic idea is to consider the vision system as a task-dedicated sensor included in a servo control loop; automatic grasping follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing scheme ensures the convergence of the image features to the desired trajectories by using the Jacobian matrix, which is proved by Lyapunov stability theory. We also stress the importance of projective-invariant object/gripper alignment. The alignment between two solids in 3-D projective space can be represented in a view-invariant way; more precisely, it can be mapped easily into an image set-point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task is not affected by discrepancies between the Euclidean setups at the preparation and execution stages. The set-point is then computed from the projective alignment, and the robot gripper moves to the desired position under the image-based control law; in this paper we adopt a constant Jacobian. The method described herein integrates vision, robotics, and automatic control: it overcomes the disadvantages of discrepancies between the different Euclidean setups and provides a control law for the binocular stand-alone case. The experimental simulation shows that this image-based approach is effective in performing precise alignment between the robot end-effector and the object.
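The image-based control law referred to above is conventionally written v = -λ J⁺ (s - s*). A minimal sketch under that assumption, with a hypothetical constant 2×2 Jacobian standing in for the paper's stereo image Jacobian:

```python
import numpy as np

def ibvs_velocity(J, s, s_star, gain=0.5):
    """Classical image-based visual servoing law
    v = -gain * pinv(J) @ (s - s*): the camera velocity that drives
    the image-feature error toward zero."""
    error = np.asarray(s, float) - np.asarray(s_star, float)
    return -gain * np.linalg.pinv(J) @ error

# With a constant image Jacobian (as the paper adopts), the feature
# error contracts at every simulated step.
J = np.array([[1.0, 0.2],
              [0.0, 1.5]])            # hypothetical 2x2 image Jacobian
s, s_star = np.array([3.0, -2.0]), np.zeros(2)
for _ in range(20):
    s = s + J @ ibvs_velocity(J, s, s_star, gain=0.3)
print(np.linalg.norm(s - s_star) < 1e-2)  # → True
```

When J is exact and constant, each step scales the error by (1 - gain), which is the geometric convergence the Lyapunov argument formalizes.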

The Extraction of Camera Parameters using Projective Invariance for Virtual Studio (가상 스튜디오를 위한 카메라 파라메터의 추출)

  • Han, Seo-Won;Eom, Gyeong-Bae;Lee, Jun-Hwan
    • The Transactions of the Korea Information Processing Society / v.6 no.9 / pp.2540-2547 / 1999
  • The chromakey method is one of the key technologies for realizing a virtual studio: the blue portions of a captured image are replaced with a computer-generated or real image. The replaced image must be changed according to the camera parameters of the studio for natural merging with the non-blue portions of the captured image. This paper proposes a novel method to extract camera parameters using the recognition of pentagonal patterns painted on a blue screen. We extract corresponding points between the blue screen and a captured image using the projective invariant features of a pentagon, and then calculate the camera parameters from the corresponding points by a modification of Tsai's method. Experimental results indicate that the proposed method is more accurate than the conventional method and can process about twelve video frames per second on a 166 MHz Pentium-MMX processor.

Camera Extrinsic Parameter Estimation using 2D Homography and LM Method based on PPIV Recognition (PPIV 인식기반 2D 호모그래피와 LM방법을 이용한 카메라 외부인수 산출)

  • Cha Jeong-Hee;Jeon Young-Min
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.2 s.308 / pp.11-19 / 2006
  • In this paper, we propose a method to estimate camera extrinsic parameters based on projective and permutation invariant point features. Because the feature information used in previous research is variant to the camera viewpoint, extracting corresponding points is difficult. We therefore propose a method for extracting invariant point features, together with a new matching method that uses a similarity evaluation function and Graham's search method to reduce time complexity and find corresponding points accurately. In the camera extrinsic parameter calculation stage, we also propose a two-stage motion parameter estimation method to improve the convergence of the LM algorithm. In experiments on various indoor images, we compare and analyze the proposed method against existing methods to demonstrate the superiority of the proposed algorithms.
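The 2D homography in the title is conventionally estimated linearly by the Direct Linear Transform before nonlinear LM refinement. A minimal DLT sketch under that assumption (the paper's PPIV matching and two-stage LM estimation are omitted; names are illustrative):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H @ src from >= 4 point correspondences, as the linear
    initialization that LM refinement would then polish."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Recover a known homography from four exact correspondences.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -2.0],
                   [0.001, 0.0, 1.0]])
src = [(0, 0), (10, 0), (10, 10), (0, 10)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H = homography_dlt(src, dst)
print(np.allclose(H, H_true, atol=1e-6))  # → True
```

With noisy correspondences the DLT solution is only a least-squares fit, which is why a nonlinear refinement step such as the LM method discussed above is applied afterwards.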