• Title/Summary/Keyword: Camera Position

Pedestrian Navigation System using Inertial Sensors and Vision (관성센서와 비전을 이용한 보행용 항법 시스템)

  • Park, Sang-Kyeong;Suh, Young-Soo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.11
    • /
    • pp.2048-2057
    • /
    • 2010
  • In this paper, a pedestrian inertial navigation system with vision is proposed. A navigation system using only inertial sensors has two problems: it is difficult to determine the initial position, and the position error grows over time. To solve these problems, a vision system is used in addition to the inertial navigation system, where a camera is attached to the pedestrian. Landmarks are installed at known positions so that the position and orientation of the camera can be computed whenever the camera views a landmark. Using this position information, estimation errors in the inertial navigation system are compensated.
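
A minimal sketch of the correction idea described above: a dead-reckoned position drifts, and an absolute fix from a known landmark pulls it back. The landmark map, the blending gain, and the function names are illustrative assumptions, not the authors' implementation (the paper uses a full inertial mechanization with error estimation):

```python
import numpy as np

# Hypothetical landmark map: landmark ID -> known world position (x, y) in meters.
LANDMARKS = {0: np.array([0.0, 0.0]), 1: np.array([10.0, 0.0]), 2: np.array([10.0, 8.0])}

def ins_predict(position, velocity, dt):
    """Dead-reckoning step: integrate velocity to propagate position.
    Error accumulates here, which the vision update compensates."""
    return position + velocity * dt

def vision_update(position, landmark_id, camera_position_estimate, gain=0.8):
    """When the camera sees a landmark at a known position, it yields an
    absolute fix; blend it with the INS estimate (a stand-in for a Kalman update)."""
    if landmark_id not in LANDMARKS:
        return position
    return (1.0 - gain) * position + gain * camera_position_estimate

# Walk for 5 s with a slightly biased velocity, then observe landmark 1.
pos = np.array([0.0, 0.0])
for _ in range(50):
    pos = ins_predict(pos, velocity=np.array([2.05, 0.0]), dt=0.1)
pos = vision_update(pos, landmark_id=1, camera_position_estimate=np.array([10.0, 0.0]))
print(pos)  # drifted estimate pulled toward the absolute fix at (10, 0)
```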

Correction of Position Error Using Modified Hough Transformation For Inspection System with Low Precision X- Y Robot (저정밀 X-Y 로봇을 이용한 검사 시스템의 변형된 Hough 변환을 이용한 위치오차보정)

  • 최경진;이용현;박종국
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.9 no.10
    • /
    • pp.774-781
    • /
    • 2003
  • The important factors that cause position error in an X-Y robot are inertial force, friction, and spring distortion in the screw or coupling. These factors must be estimated precisely to correct position errors, which is very difficult. In this paper, we build a system that inspects the metal stencils used to print solder paste onto the SMD pads of a PCB, using a low-precision X-Y robot and a vision system. To correct the position error caused by the low-precision X-Y robot, we define a position error vector formed from the positions of objects that appear in both the reference image and the camera image. We apply the MHT (Modified Hough Transformation) to determine the dominant position error vector. We then modify the reference image using the extracted dominant position error vector and obtain a reference image that matches the camera image. The effectiveness and performance of this method are verified by simulation and experiment.
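
A minimal sketch of Hough-style voting for the dominant displacement between reference and camera feature points. The accumulator bin size and the point sets are illustrative assumptions; the paper's specific MHT modifications are not reproduced here:

```python
import numpy as np
from collections import Counter

def dominant_error_vector(ref_points, cam_points, bin_size=1.0):
    """Vote over all candidate displacements in a Hough-style accumulator
    and return the most-voted bin as the dominant position error vector."""
    votes = Counter()
    for r in ref_points:
        for c in cam_points:
            dx, dy = c - r
            votes[(round(dx / bin_size), round(dy / bin_size))] += 1
    (bx, by), _ = votes.most_common(1)[0]
    return np.array([bx * bin_size, by * bin_size])

ref = np.array([[10.0, 10.0], [40.0, 12.0], [25.0, 30.0]])
cam = ref + np.array([3.0, -2.0])        # every object shifted by the same error
print(dominant_error_vector(ref, cam))   # ~[3., -2.]
```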

Camera Calibration for Machine Vision Based Autonomous Vehicles (머신비젼 기반의 자율주행 차량을 위한 카메라 교정)

  • Lee, Mun-Gyu;An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.9
    • /
    • pp.803-811
    • /
    • 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of a camera in the machine vision system. To find accurate values of these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and then eight nonlinear equations are generated based on the points. The least-squares method is used to find estimates for the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the corresponding real-world points. Experimental results prove the feasibility of the proposed algorithm.
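
A minimal sketch of the estimation machinery, not the paper's eight equations: a handful of image/world correspondences from lane features feed a nonlinear least-squares solve for Group I-type parameters (pitch, focal length, scale factor). The flat-road pinhole model, the fixed camera height, the principal point, and all numbers below are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical correspondences: image points (u, v) of lane features whose
# ground-plane coordinates (X lateral, Y forward, in m) follow from the
# known lane width and line thickness.
world = np.array([[0.0, 5.0], [3.5, 5.0], [0.0, 10.0],
                  [3.5, 10.0], [0.0, 20.0], [3.5, 20.0]])
image = np.array([[310.0, 420.0], [480.0, 420.0], [330.0, 300.0],
                  [450.0, 300.0], [345.0, 210.0], [430.0, 210.0]])

def residuals(params, world, image):
    """Group I unknowns: pitch angle, focal length, image scale factor.
    The camera height (a Group II quantity) is fixed here for simplicity."""
    pitch, f, s = params
    h = 1.2  # assumed camera height above the road (m)
    cp, sp = np.cos(pitch), np.sin(pitch)
    res = []
    for (X, Y), (u, v) in zip(world, image):
        # Generic flat-road pinhole projection (illustrative model).
        denom = Y * cp + h * sp
        u_pred = 400.0 + s * f * X / denom
        v_pred = 300.0 + f * (h * cp - Y * sp) / denom
        res += [u_pred - u, v_pred - v]
    return res

sol = least_squares(residuals, x0=[0.05, 500.0, 1.0], args=(world, image))
print(sol.x)  # estimated pitch (rad), focal length (px), scale factor
```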

HVCM (Hybrid Voice Coil Motor) Actuator apply performance improvement through the AUTO Focusing Camera Module (HVCM(Hybrid Voice Coil Motor) Actuator적용을 통한 AUTO Focusing Camera Module 성능개선)

  • Kwon, Tae-Kwon;Kim, Young-Kil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.05a
    • /
    • pp.307-309
    • /
    • 2011
  • Camera modules recently assembled into high-end handsets generally carry an auto-focusing function. The resolution of these camera modules is getting higher, and customers demand a more precise and stable auto-focusing function. During auto focusing, camera modules using a conventional VCM usually suffer from an error in the lens focusing position and a resolution deviation that varies with the module's orientation. For this reason, I propose a Hybrid VCM with an improved structure for stable actuator operation and a higher resolution level.
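
Auto focusing itself is a search loop over lens positions; the sketch below is a generic contrast-maximization routine, not the paper's HVCM driver, and the hardware hooks `move_lens` and `capture_frame` are hypothetical:

```python
import numpy as np

def sharpness(frame):
    """Contrast metric: variance of a Laplacian-like second difference
    over a grayscale frame. Higher means sharper focus."""
    lap = (frame[:-2, 1:-1] + frame[2:, 1:-1] + frame[1:-1, :-2]
           + frame[1:-1, 2:] - 4 * frame[1:-1, 1:-1])
    return float(np.var(lap))

def autofocus(move_lens, capture_frame, positions=range(0, 256, 8)):
    """Sweep the lens through candidate positions and return the position
    with the highest contrast score. A real driver would refine around
    the peak with a finer second pass."""
    best_pos, best_score = None, -1.0
    for pos in positions:
        move_lens(pos)  # hypothetical actuator command
        score = sharpness(capture_frame().astype(np.float64))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```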

Line Image Correction of the Positron Camera in the Secondary Beam Course of HIMAC

  • Iseki, Yasushi;Mizuno, Hideyuki;Kanai, Tatsuaki;Kanazawa, Mitsutaka;Kitagawa, Atsushi;Suda, Mitsuru;Tomitani, Takehiro;Urakabe, Eriko
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2002.09a
    • /
    • pp.195-198
    • /
    • 2002
  • A positron camera, consisting of a pair of Anger-type scintillation detectors, has been developed for verifying the ranges of irradiation beams in heavy-ion radiotherapy. Images obtained by a centroid calculation of photomultiplier outputs exhibit a distortion near the edge of the crystal plane in an Anger-type scintillation detector. The images of a $^{68}$Ge line source were detected and look-up tables were prepared for the position correction parameters. Asymmetry of the position distribution detected by the positron camera was prevented with this correction. As a result, a linear position response and a position resolution of 8.6 mm were obtained over a wide measurement field.
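
The look-up-table correction amounts to mapping each measured centroid through an interpolated distortion table; a minimal 1-D sketch under an assumed table (the paper's tables are 2-D and detector-specific):

```python
import numpy as np

# Assumed calibration table from a line-source scan: measured centroid
# position (mm) -> true source position (mm). Edge bins show the distortion.
measured = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
true_pos = np.array([-100.0, -62.0, -30.5, 0.0, 30.5, 62.0, 100.0])

def correct_position(x_measured):
    """Linearly interpolate the LUT to undo the edge distortion of the
    centroid (Anger-type) position estimate."""
    return np.interp(x_measured, measured, true_pos)

print(correct_position(85.0))  # compressed edge reading mapped back outward
```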

Linear Velocity Control of the Mobile Robot with the Vision System at Corridor Navigation (비전 센서를 갖는 이동 로봇의 복도 주행 시 직진 속도 제어)

  • Kwon, Ji-Wook;Hong, Suk-Kyo;Chwa, Dong-Kyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.9
    • /
    • pp.896-902
    • /
    • 2007
  • This paper proposes a vision-based kinematic control method for mobile robots with an on-board camera. In the previous literature on the control of mobile robots using camera vision information, the forward velocity is set to a constant, and only the rotational velocity of the robot is controlled. More efficient motion, however, is achieved by also controlling the forward velocity depending on the position in the corridor. Thus, both forward and rotational velocities are controlled in the proposed method, such that the mobile robot can move faster when the corner of the corridor is far away and slows down as it approaches the dead end of the corridor. In this way, smooth turning motion along the corridor is possible. To this end, visual information from the camera is used to obtain the perspective lines and the distance from the current robot position to the dead end. Then, the vanishing point and the pseudo desired position are obtained, and the forward and rotational velocities are controlled by the LOS (Line of Sight) guidance law. Both numerical and experimental results are included to demonstrate the validity of the proposed method.
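
A minimal sketch of the two steps named above: intersecting the corridor's perspective lines for a vanishing point, then steering with a LOS-style law. The gains, the distance scaling, and the example lines are illustrative assumptions:

```python
import numpy as np

def vanishing_point(line1, line2):
    """Intersect two image lines, each given as (a, b, c) for ax + by + c = 0,
    via the homogeneous cross product."""
    p = np.cross(np.array(line1), np.array(line2))
    return p[:2] / p[2]

def los_velocities(vp_x, image_center_x, distance_to_end, v_max=0.5, k_w=0.004, k_v=0.2):
    """LOS-style guidance: rotational velocity steers toward the vanishing
    point; forward velocity shrinks as the corridor's dead end gets closer."""
    heading_error_px = vp_x - image_center_x
    omega = -k_w * heading_error_px
    v = min(v_max, k_v * distance_to_end)
    return v, omega

left_edge = (1.0, -1.0, 100.0)    # assumed corridor edge lines in image coordinates
right_edge = (1.0, 1.0, -740.0)
vp = vanishing_point(left_edge, right_edge)          # -> (320, 420)
print(los_velocities(vp[0], image_center_x=320.0, distance_to_end=4.0))
```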

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik;Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.54-61
    • /
    • 2012
  • In this paper, a pose estimation method for the satellite GTB (Ground Test Bed) using a vision/MEMS IMU (Inertial Measurement Unit) integrated system is presented. The GTB, used for verifying a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats above the floor on compressed air. An EKF (Extended Kalman Filter) is used to fuse the MEMS IMU and the vision system, which consists of a single camera and infrared LEDs serving as ceiling landmarks. Such a fusion filter generally uses the positions of feature points in the image as measurements. However, this approach can cause position error due to the bias of the MEMS IMU when no camera image is available, if the bias is not properly estimated by the filter. Therefore, a fusion method is proposed that uses both the positions of feature points and the velocity of the camera determined from the optical flow of the feature points. Experiments verify that the performance of the proposed method is robust to the IMU bias compared with the method that uses only the positions of feature points.
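
A minimal sketch of the measurement side of such a filter: a linear Kalman update on a [position, velocity, accelerometer-bias] state, where position comes from landmark observations and velocity from optical flow. The matrices and noise values are illustrative assumptions, not the paper's EKF:

```python
import numpy as np

dt = 0.01
# State x = [position, velocity, accelerometer bias] along one axis.
F = np.array([[1.0, dt, -0.5 * dt**2],
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
# Measure position (from ceiling landmarks) and velocity (from optical flow).
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
Q = np.diag([1e-6, 1e-4, 1e-8])   # process noise (assumed)
R = np.diag([1e-4, 1e-3])         # measurement noise (assumed)

def kf_step(x, P, accel_meas, z):
    """Predict with the accelerometer (bias removed via the state), then
    update with the vision position and optical-flow velocity z."""
    u = np.array([0.5 * dt**2, dt, 0.0]) * accel_meas
    x = F @ x + u                 # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R           # update with z = [position, velocity]
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = np.zeros(3), np.eye(3) * 0.1
x, P = kf_step(x, P, accel_meas=0.2, z=np.array([0.001, 0.02]))
print(x)
```

Because the bias is part of the state, the velocity measurement keeps it observable even between landmark sightings, which is the robustness the abstract refers to.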

Development of a Vision-based Position Estimation System for the Inspection and Maintenance Manipulator of Steam Generator Tubes in a Nuclear Power Plant

  • Jeong, Kyung-Min;Cho, Jae-Wan;Kim, Seung-Ho;Jung, Seung-Ho;Shin, Ho-Chul;Choi, Chang-Whan;Seo, Yong-Chil
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.772-777
    • /
    • 2003
  • A vision-based tool position estimation system for the inspection and maintenance manipulator working inside the steam generator bowl of a nuclear power plant can help human operators ensure that the inspection probe or plug is inserted into the targeted tube. Previous research proposed a simplified tube position verification system that counts the tubes passed during motion and displays only the position of the tool. In this paper, by using a general camera calibration approach, the tool orientation is also estimated. In order to reduce the computation time and avoid the parameter bias problem in ellipse fitting, a small number of edge points are collected around a large section of the ellipse boundary. Experimental results show that the camera calibration parameters, detected ellipses, and estimated tool position are appropriate.
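
A minimal sketch of fitting an ellipse from a sparse set of boundary points; `cv2.fitEllipse` is a standard least-squares ellipse fit, and the synthetic tube-end points below are an assumption standing in for the paper's edge detector:

```python
import numpy as np
import cv2

# Synthetic edge points sampled sparsely around a wide arc of a tube-end
# ellipse: center (200, 150), semi-axes 60 and 40, rotated 20 degrees.
theta = np.linspace(0.2, 4.0, 12)            # a large arc, not the full boundary
a, b, phi = 60.0, 40.0, np.deg2rad(20.0)
x = 200 + a * np.cos(theta) * np.cos(phi) - b * np.sin(theta) * np.sin(phi)
y = 150 + a * np.cos(theta) * np.sin(phi) + b * np.sin(theta) * np.cos(phi)
points = np.stack([x, y], axis=1).astype(np.float32)

# Least-squares ellipse fit on the sparse points; sampling a wide arc
# rather than a short one reduces the parameter bias the abstract mentions.
center, axes, angle = cv2.fitEllipse(points)
print(center, axes, angle)   # ~ (200, 150), full axes ~ (80, 120), ~ 20 deg
```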

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection locates the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system in which a single camera is placed above the monitor and the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, the nostrils, and the lip corners) automatically in 2D camera images. From the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D positions of the features. In experiments on a 19-inch monitor, the accuracy between the computed gaze positions and the real ones is about 2.01 inches of RMS error.
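
The final step, going from three moved feature positions to a screen point, is a plane-normal and ray-plane intersection computation; a minimal sketch with an assumed monitor plane and feature geometry (the paper's calibration is not reproduced):

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3, monitor_point, monitor_normal):
    """Gaze ray: starts at the centroid of three facial feature points and
    runs along the normal of the plane they define; intersect it with the
    monitor plane given by a point on the plane and its normal."""
    origin = (p1 + p2 + p3) / 3.0
    n_face = np.cross(p2 - p1, p3 - p1)
    n_face /= np.linalg.norm(n_face)
    t = np.dot(monitor_point - origin, monitor_normal) / np.dot(n_face, monitor_normal)
    return origin + t * n_face

# Assumed geometry (cm): facial features ~50 cm in front of the monitor plane z = 0.
p1 = np.array([-3.0, 0.0, 50.0])
p2 = np.array([3.0, 0.0, 50.0])
p3 = np.array([0.0, -4.0, 52.0])
print(gaze_point_on_monitor(p1, p2, p3,
                            monitor_point=np.array([0.0, 0.0, 0.0]),
                            monitor_normal=np.array([0.0, 0.0, 1.0])))
```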

Head Position Detection Using Omnidirectional Camera (전 방향 카메라 영상에서 사람의 얼굴 위치검출 방법)

  • Bae, Kwang-Hyuk;Park, Kang-Ryoung;Kim, Jai-Hie
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.283-284
    • /
    • 2007
  • This paper proposes a method for real-time segmentation of moving regions and detection of the head position in images from a single omnidirectional camera. Segmentation of the moving region uses a background modeling method based on a mixture of Gaussians (MOG) and a shadow detection method. A circular constraint is proposed for detecting the head position.
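
A minimal sketch of the pipeline named above, using OpenCV's MOG-style background subtractor (with its built-in shadow detection) and a circularity test as a stand-in for the paper's circular constraint:

```python
import cv2
import numpy as np

# MOG-style background model; detectShadows marks shadow pixels as gray (127).
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)

def detect_head_candidates(frame, min_area=200.0, min_circularity=0.6):
    """Segment moving regions, drop shadow pixels, then keep contours that
    are roughly circular as head candidates."""
    mask = subtractor.apply(frame)
    mask[mask == 127] = 0  # remove pixels flagged as shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    heads = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if area < min_area or perim == 0:
            continue
        circularity = 4.0 * np.pi * area / (perim * perim)  # 1.0 for a perfect circle
        if circularity >= min_circularity:
            (x, y), r = cv2.minEnclosingCircle(c)
            heads.append((x, y, r))
    return heads
```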
