• Title/Summary/Keyword: Camera orientation


A Study on the Geometric Correction of a CCD Camera Scanner Using the Exterior Orientation Parameters (외부표정요소를 이용한 CCD 카메라 스캐너의 기하학적 왜곡 보정기법 연구)

  • 안기원;문명상
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.11 no.2
    • /
    • pp.69-77
    • /
    • 1993
  • This paper investigates a detailed procedure for computer-assisted automatic correction of scanning errors in digital images of close-range photographs scanned by a CCD camera scanner. After determination of the exterior orientation parameters, photo coordinates of all pixels were calculated using the collinearity equations. A geometrically corrected image was then generated from these photo coordinates using an inverse-distance-weighted averaging method. The accuracy of the resulting image was checked by comparing its image coordinates with the corresponding ground coordinates at the check points.

  • PDF
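The inverse-distance-weighted averaging step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the choice of the four surrounding pixels, and the distance power are assumptions.

```python
import numpy as np

def idw_resample(image, x, y, power=1.0):
    """Estimate the intensity at a non-integer position (x, y) by
    inverse-distance-weighted averaging of the 4 surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    total_w, total = 0.0, 0.0
    for yi in (y0, y0 + 1):
        for xi in (x0, x0 + 1):
            d = np.hypot(x - xi, y - yi)
            if d < 1e-9:                     # exact pixel hit
                return float(image[yi, xi])
            w = 1.0 / d ** power             # closer pixels weigh more
            total_w += w
            total += w * image[yi, xi]
    return total / total_w
```

In the paper's setting, (x, y) would be the photo coordinate computed from the collinearity equations for each output pixel of the corrected image.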

Single Photo Resection Using Cosine Law and Three-dimensional Coordinate Transformation (코사인 법칙과 3차원 좌표 변환을 이용한 단사진의 후방교회법)

  • Hong, Song Pyo;Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.3
    • /
    • pp.189-198
    • /
    • 2019
  • In photogrammetry, single photo resection determines the exterior orientation parameters, i.e., the position and attitude of the camera at the time of exposure, from known interior orientation parameters, ground coordinates, and image coordinates. In this study, we proposed a single photo resection algorithm that determines the exterior orientation parameters of the camera using the cosine law and a linear-equation-based three-dimensional coordinate transformation. The proposed algorithm first calculates the scale between the ground coordinates and the corresponding normalized coordinates using the cosine law. The exterior orientation parameters are then determined by applying a linear-equation-based three-dimensional coordinate transformation between the scaled normalized coordinates and the ground coordinates. Unlike nonlinear formulations that require partial derivatives, the proposed algorithm is not sensitive to initial values, because each ground coordinate is divided by the longest distance among the combinations of ground coordinates. In addition, since the exterior orientation parameters can be determined from only three points, the method is stable with respect to the geometric arrangement of the control points.
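The second step above, fitting a rigid transformation between scaled camera-frame points and ground points, resembles a standard absolute-orientation fit. The sketch below uses the SVD-based (Kabsch) solution as an illustrative stand-in for the paper's linear-equation formulation; it is not the authors' exact algorithm.

```python
import numpy as np

def fit_rigid_transform(cam_pts, gnd_pts):
    """Least-squares rotation R and translation t with gnd ≈ R @ cam + t.
    cam_pts, gnd_pts: (N, 3) arrays of corresponding points."""
    cam_c = cam_pts.mean(axis=0)
    gnd_c = gnd_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (cam_pts - cam_c).T @ (gnd_pts - gnd_c)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = gnd_c - R @ cam_c
    return R, t
```

Given at least three non-collinear correspondences, R and t together encode the attitude and position that single photo resection seeks.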

Defect Length Measurement using Underwater Camera and A Laser Slit Beam

  • Kim, Young-Hwan;Yoon, Ji-Sup
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.746-751
    • /
    • 2003
  • A method of measuring the length of defects on the wall of a spent nuclear fuel pool using image processing and a laser slit beam is proposed. Since the defect-monitoring camera is suspended from a crane and hinged to the crane hook, the camera's viewing direction cannot be adjusted to be exactly perpendicular to the wall. The image taken by the camera, which is horizontally rotated about the axis of the camera supporting beam, is therefore distorted, and the precise length cannot be measured. In this paper, the horizontal rotation angle of the camera is estimated using a laser slit beam generator. Once the angle is obtained, the distorted image can easily be reconstructed into an image normal to the wall. The estimation algorithm adopts a three-dimensional coordinate transformation of the image plane in which both the laser slit beam and the original image of the defects lie. The estimation equation is obtained from the beam projected on the wall, and its parameters are determined experimentally. With this algorithm, a defect image taken at an arbitrary rotation angle can be reconstructed into an image normal to the wall. A series of experiments showed that the defect length is measured within error bounds of 0.6% and 1.3% of the real defect size in air and underwater, respectively, at a 30-degree inclination of the laser slit beam generator. The error increases as the inclination angle increases up to 60 degrees; beyond this angle the defect image disappears and the length cannot be measured. The proposed algorithm enables accurate measurement of the defect length using only a single camera and a laser slit beam.

  • PDF

Obtaining 3-D Depth from a Monochrome Shaded Image (단시안 명암강도를 이용한 물체의 3차원 거리측정)

  • Byung Il Kim
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.29B no.7
    • /
    • pp.52-61
    • /
    • 1992
  • An iterative scheme for computing the three-dimensional position and the surface orientation of an opaque object from a single shaded image is proposed. The method demonstrates that the depth (distance) between the camera and the object can be calculated from one shaded video image. Most previous work on the 'Shape from Shading' problem, even the 'Photometric Stereo Method', involved the determination of surface orientation only; measuring depth additionally requires the reflectance properties of the surface. Assuming that the object surface is uniformly Lambertian, the measured intensity at a given image pixel (x, y) becomes a function of the surface orientation and the depth component of the object. The derived image irradiance equation cannot be solved without further information, since three unknown variables (p, q, and D) appear in one nonlinear equation. As an additional constraint, the surface is assumed to satisfy a smoothness condition; the equation can then be solved iteratively using standard methods of the calculus of variations. After checking the sensitivity of the algorithm to errors in the input parameters, the theoretical results were tested by experiments on three objects (a plane, a cylinder, and a sphere). These initial results are encouraging, matching the theoretical calculations within 20% error in simple experiments.

  • PDF
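The Lambertian assumption in the abstract gives a standard reflectance map relating image intensity to the surface gradient (p, q) and the light-source direction (ps, qs). A minimal sketch of that relation follows; the function name and unit-albedo default are illustrative, not from the paper.

```python
import numpy as np

def lambertian_intensity(p, q, ps, qs, albedo=1.0):
    """Reflectance map R(p, q) of a uniform Lambertian surface.
    (p, q) is the surface gradient; (ps, qs) encodes the light direction.
    Intensity is the cosine of the angle between surface normal and light."""
    num = 1.0 + p * ps + q * qs
    den = np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + ps**2 + qs**2)
    return albedo * max(num, 0.0) / den   # clamp self-shadowed points to 0
```

A surface patch facing the light (p = q = ps = qs = 0) yields maximum intensity; tilting the patch reduces it, which is the constraint the iterative scheme inverts.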

The Accuracy of Stereo Digital Camera Photogrammetry (스테레오 디지털 카메라를 이용한 사진측량의 정확도)

  • Kim, Gi-Hong;Youn, Jun-Hee;Park, Ha-Jin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.6
    • /
    • pp.663-668
    • /
    • 2010
  • In this study a stereo digital camera system was developed. Using this system, information such as the coordinates and lengths of all objects shown in the image can be collected simply by taking a digital photograph in the field. The system has the advantage of obtaining stereo images with fixed exterior orientation parameters, although accuracy worsens slightly because the baseline of a close-range stereo digital camera system is restricted to about 1 m. We took images at various exposure distances and angles to the objects for experimental error assessment and analyzed the effect of image coordinate errors.
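The accuracy trade-off described above follows from the standard parallax relation: depth is Z = f·B/d, so with a short baseline B the depth error grows roughly as Z². A small sketch (variable names are illustrative assumptions, not from the paper):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from the standard parallax relation Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, depth_m, disp_err_px=1.0):
    """First-order depth uncertainty: dZ ≈ Z^2 / (f * B) * dd,
    showing why a ~1 m baseline limits close-range accuracy."""
    return depth_m**2 / (focal_px * baseline_m) * disp_err_px
```

For example, with f = 1000 px and B = 1 m, a one-pixel disparity error at 10 m depth already corresponds to about 0.1 m of depth error.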

The Extract of 3D Road Centerline Using Video Camera (비디오 카메라를 이용한 3차원 도로중심선 추출)

  • Seo Dong-Ju;Lee Jong-Chool
    • International Journal of Highway Engineering
    • /
    • v.8 no.1 s.27
    • /
    • pp.65-75
    • /
    • 2006
  • With the development of computer technology, the fourth generation of digital photogrammetry is being adopted widely. In particular, the method of using a digital video camera is very practical and has the advantage of being affordable for non-specialists. In the road field, which is a central facility of national industry, this method has been used to acquire road information for safety diagnosis and maintenance. In this study, three-dimensional position information of a road centerline was extracted using a practical and economical digital video camera. These data could serve as a basic source for road information projects.

  • PDF

Calibration of the depth measurement system with a laser pointer, a camera and a plain mirror

  • Kim, Hyong-Suk;Lin, Chun-Shin;Gim, Seong-Chan;Chae, Hee-Sung
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1994-1998
    • /
    • 2005
  • A characteristic analysis of a depth measurement system with a laser, a camera, and a rotating mirror has been performed, and a parameter calibration technique for it is proposed. In the proposed depth measurement system, the laser beam is reflected to the object by the rotating mirror, and the position of the laser spot is observed through the same mirror by the camera. The depth of the object pointed at by the laser beam is computed from the pixel position on the CCD. Several internal and external parameters, such as the inter-pixel distance, focal length, and the positions and orientations of the system components, contribute to the depth measurement error. An error sensitivity analysis of these parameters shows that the most important error sources are the angle of the laser beam and the inter-pixel distance. Calibration techniques to minimize the effect of these major parameters are proposed.

  • PDF
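The core geometry, recovering depth from where the laser ray intersects the camera ray through an observed pixel, can be sketched as basic active triangulation. This simplified model omits the rotating mirror and all calibration parameters the paper analyzes; the coordinate conventions are assumptions.

```python
import math

def triangulate_depth(baseline, focal, pixel_x, laser_angle):
    """Depth of the point where the laser ray meets the camera ray.
    Camera at the origin looking along +Z with focal length `focal`;
    laser source at (baseline, 0), tilted `laser_angle` radians toward
    the optical axis. Camera ray: X = pixel_x / focal * Z.
    Laser ray:  X = baseline - Z * tan(laser_angle)."""
    return baseline * focal / (pixel_x + focal * math.tan(laser_angle))
```

The sensitivity result in the abstract corresponds to how strongly this expression varies with `laser_angle` and with errors in `pixel_x` (i.e., the inter-pixel distance).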

Vision Based Position Control of a Robot Manipulator Using an Elitist Genetic Algorithm (엘리트 유전 알고리즘을 이용한 비젼 기반 로봇의 위치 제어)

  • Park, Kwang-Ho;Kim, Dong-Joon;Kee, Seok-Ho;Kee, Chang-Doo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.19 no.1
    • /
    • pp.119-126
    • /
    • 2002
  • In this paper, we present a new approach based on an elitist genetic algorithm for aligning the position of a robot gripper using CCD cameras. The vision-based control scheme for aligning the gripper with the desired position is implemented using image information. The relationship between camera-space locations and robot joint coordinates is estimated using a camera-space parameter model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation. To find the joint angles of the robot manipulator that reach the target position in image space, we apply an elitist genetic algorithm instead of a nonlinear least-squares method. Since the genetic algorithm employs a parallel search, it performs well on optimization problems. To improve convergence speed, a real coding method and geometric constraint conditions are used. Experiments demonstrate the effectiveness of vision-based control using an elitist genetic algorithm with real coding.
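A minimal real-coded elitist genetic algorithm of the kind the abstract describes can be sketched as follows. The operators (blend crossover, uniform mutation, truncation selection) and all parameter values are illustrative assumptions; the paper's geometric constraint conditions are not modeled.

```python
import random

def elitist_ga(fitness, bounds, pop_size=30, gens=100, elite=2,
               mut_rate=0.1, seed=0):
    """Minimize `fitness` over real-coded genes within `bounds`.
    bounds: list of (lo, hi) per gene. Returns the best individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:elite]                          # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # parents from top half
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:        # uniform mutation
                    child[i] = rng.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

In the paper's setting, the genes would be joint angles and the fitness the image-space distance between the gripper and the target position.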

A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets (이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.23 no.5
    • /
    • pp.485-497
    • /
    • 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and Extended Kalman Filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters: three represent the uncertainty of the camera's orientation and focal length, and three represent the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate these six parameters. Based on the six parameters estimated using three cameras, the robot's joint angles are computed with respect to the moving targets using both methods. The two robot vision control schemes are tested experimentally by tracking a moving target, and the results are compared to evaluate their strengths and weaknesses.
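The Newton-Raphson estimation the abstract refers to can be illustrated with a generic multivariate root-finder; the finite-difference Jacobian, tolerances, and function names below are assumptions, not the paper's formulation.

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-10, max_iter=50, h=1e-6):
    """Drive residual(x) (an n-vector function of n parameters) to zero,
    starting from x0, using a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(residual(x), dtype=float)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):                 # numerical Jacobian, column j
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.asarray(residual(xp)) - r) / h
        x = x - np.linalg.solve(J, r)           # Newton step
    return x
```

In the paper's setting, `residual` would compare predicted and measured image coordinates as a function of the six camera parameters.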

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal
    • /
    • v.37 no.4
    • /
    • pp.766-771
    • /
    • 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion because of the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that can extract human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation; once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, the ambiguity between rotation and translation is eliminated, and the exact motion of the target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, for example in interactive games designed for special purposes such as remote rehabilitation care systems.
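The exo-camera triangulation step mentioned above is commonly implemented with the linear (DLT) method; the sketch below is a standard two-view version under that assumption, not necessarily the authors' exact procedure.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Each view contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # null vector = homogeneous 3-D point
    return X[:3] / X[3]
```

With the point's position fixed by such a triangulation, only the orientation remains for the ego-camera to resolve, which is how the method removes the rotation/translation ambiguity.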