• Title/Summary/Keyword: Camera scene control


A Calibration Algorithm Using Known Angle (각도 정보를 이용한 카메라 보정 알고리듬)

  • 권인소;하종은
    • Journal of Institute of Control, Robotics and Systems / v.10 no.5 / pp.415-420 / 2004
  • We present a new algorithm for camera calibration and for the recovery of 3D scene structure, up to scale, from image sequences, using known angles between lines in the scene. Traditional calibration methods based on scene constraints require several types of constraints because of their stratified approach. The proposed method requires only one type of scene constraint, a known angle, and directly recovers metric structure, up to an unknown scale, from projective structure. Specifically, we recover the homography between the projective structure and the Euclidean structure using the angles. Since this homography is unique for a given set of image sequences, the problem of varying camera intrinsic parameters is handled easily. Experimental results on synthetic and real images demonstrate the feasibility of the proposed algorithm.
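The constraint this abstract builds on is that angles between scene lines survive a metric (similarity) transform but not a general projective one. A minimal sketch of that fact, with illustrative direction vectors not taken from the paper:

```python
import math

def angle_between(d1, d2):
    """Angle in degrees between two 3D line direction vectors."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(a * a for a in d2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Two perpendicular scene lines.
d1, d2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# A similarity transform (uniform scale s) preserves the known angle...
s = 3.7
scaled1 = tuple(s * a for a in d1)
scaled2 = tuple(s * a for a in d2)
assert abs(angle_between(scaled1, scaled2) - 90.0) < 1e-9

# ...while an anisotropic, projective-like distortion does not.
skewed1 = (1.0, 0.5, 0.0)
print(angle_between(skewed1, d2))  # no longer 90 degrees
```

This is why a single known angle can pin down the homography from projective to Euclidean structure: the metric upgrade must restore the measured angles.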

A development of the simple camera calibration system using the grid type frame with different line widths (다른 선폭들로 구성된 격자형 교정판을 이용한 간단한 카메라 교정 시스템의 개발)

  • 정준익;최성구;노도환
    • Institute of Control, Robotics and Systems Conference Proceedings / 1997.10a / pp.371-374 / 1997
  • Recent advances in computing have enabled systems that approximate the mechanics of the human visual system. Three-dimensional measurement with a monocular vision system requires camera calibration. Existing calibration techniques require a reference target in the scene, but they are inefficient because they involve many calculation steps and are difficult to analyze. This paper therefore proposes a method that needs no reference target in the scene: we use a grid-type frame with different line widths. The method exploits the vanishing point, which carries the rotation parameters of the camera, and the perspective ratio with which each line width is projected into the image. We confirmed the accuracy of the estimated calibration parameters through experiments with a grid paper printed with different line widths.
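The vanishing point this abstract relies on is simply the image-plane intersection of the projections of parallel grid lines, and in homogeneous coordinates it falls out of two cross products. A minimal sketch with made-up image coordinates:

```python
def cross(a, b):
    """Cross product of two homogeneous 2D entities (points or lines)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line (a, b, c) through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersection(l1, l2):
    """Intersection of two homogeneous lines, dehomogenized to (x, y)."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

# Projections of two parallel grid lines converge at the vanishing point.
l1 = line_through((0.0, 0.0), (4.0, 1.0))
l2 = line_through((0.0, 2.0), (4.0, 2.5))
print(intersection(l1, l2))  # → (16.0, 4.0)
```

Repeating this for grid lines of each direction yields the vanishing points from which the camera rotation parameters can be recovered.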


A Study on the Algorithm Development of End-point Position Tracking for Aerial Work Platform with Bend-linked Boom (굴절링크 붐을 갖는 장비의 끝점 좌표 추적 알고리즘 개발에 대한 연구)

  • Oh, Seok-Hyung;Hong, Yong
    • Journal of Power System Engineering / v.20 no.3 / pp.64-73 / 2016
  • In this research work, an algorithm was developed to track the end point of an aerial work platform with a jib profile and bend-linked boom, computing the X, Y, and Z coordinates with a coordinate transformation matrix. The matrix is built from device status values (lengths and angles), referenced to the camera position axis, which the PLUS+1 device controller transmits over the CAN protocol. These values are used to compute the distance and angle from the camera to the end point, and with them the monitoring system controls the PAN/TILT/ZOOM state of the camera to obtain an adequate view of the workplace. The program was written in Java, C#, and C for mobile devices. The results provide the information needed for secure operation of the aerial work device.
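Chaining status values (lengths and angles) into end-point coordinates, as described above, is a sequence of homogeneous transforms. A planar sketch with hypothetical link lengths and joint angles, not values from the actual platform:

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot(theta):
    """Planar rotation as a 3x3 homogeneous matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def trans(x, y):
    """Planar translation as a 3x3 homogeneous matrix."""
    return [[1.0, 0.0, x], [0.0, 1.0, y], [0.0, 0.0, 1.0]]

def end_point(links):
    """Chain (angle, length) pairs: rotate the joint, then move
    along the link; return the end-point coordinates."""
    t = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for angle, length in links:
        t = mat_mul(t, mat_mul(rot(angle), trans(length, 0.0)))
    return (t[0][2], t[1][2])

# Main boom raised 30 degrees, bend link folded back 90 degrees.
x, y = end_point([(math.radians(30), 10.0), (math.radians(-90), 3.0)])
print(round(x, 3), round(y, 3))  # → 10.16 2.402
```

The real system works in 3D with status values arriving over CAN, but the composition of per-joint transforms is the same idea.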

SATELLITE ORBIT AND ATTITUDE MODELING FOR GEOMETRIC CORRECTION OF LINEAR PUSHBROOM IMAGES

  • Park, Myung-Jin;Kim, Tae-Jung
    • Proceedings of the KSRS Conference / 2002.10a / pp.543-547 / 2002
  • In this paper, we introduce a camera modeling method for linear pushbroom images that improves on the method proposed by Orun and Natarajan (ON). The ON model achieves an accuracy within 1 pixel when more than 10 ground control points (GCPs) are provided. In general, platform position and attitude parameters are highly correlated, and the ON model ignores attitude variation in order to sidestep this correlation. We propose a new method that obtains an optimal parameter set without ignoring attitude variation: we first assume the attitude parameters are constant and estimate the platform position parameters, then estimate the attitude parameters using the estimated position parameters. As a result, we can set up an accurate camera model for a linear pushbroom satellite scene. In particular, the model can be applied to surrounding scenes, because it provides sufficient information on the satellite's position and attitude not only for a single scene but for a whole imaging segment. We tested two images: one with a 6.6 m × 6.6 m pixel size acquired by EOC (Electro-Optical Camera), and one with a 10 m × 10 m pixel size acquired by SPOT. Applying our camera model procedures to these images gave satisfying results: root-mean-square errors of 0.5 pixel and 0.3 pixel with 25 GCPs and 23 GCPs, respectively.
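The two-step strategy described above, freezing attitude while solving for position and then refining attitude from the fixed positions, is an instance of alternating estimation. A toy sketch on a deliberately simple 1-D model (the model, data, and parameter names are illustrative only, far simpler than a real pushbroom sensor model):

```python
# Toy model: observation = position + attitude_rate * t.
# Alternate closed-form least-squares solves for each parameter
# while the other is held fixed, mirroring the position/attitude
# split described in the abstract.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
obs = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated with position=1, rate=2

position, rate = 0.0, 0.0
for _ in range(50):
    # Step 1: attitude fixed -> solve for position.
    position = sum(o - rate * t for o, t in zip(obs, ts)) / len(obs)
    # Step 2: position fixed -> solve for attitude rate.
    rate = (sum((o - position) * t for o, t in zip(obs, ts))
            / sum(t * t for t in ts))

print(round(position, 6), round(rate, 6))  # converges to 1.0 2.0
```

Each step here shrinks the error by a constant factor, so the alternation converges to the joint least-squares solution despite the correlation between the two parameters.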


INITIAL GEOMETRIC ACCURACY OF KOMPSAT-2 HIGH RESOLUTION IMAGE

  • Seo, Doo-Chun;Lim, Hyo-Suk;Shin, Ji-Hyeon;Kim, Moon-Gyu
    • Proceedings of the KSRS Conference / v.2 / pp.780-783 / 2006
  • The KOrea Multi-Purpose Satellite-2 (KOMPSAT-2) was launched in July 2006; its main mission is high-resolution imaging for cartography of the Korean peninsula using Multi-Spectral Camera (MSC) images. The camera resolution is 1 m in the panchromatic band and 4 m in the multi-spectral bands. This paper provides an initial geometric accuracy assessment of KOMPSAT-2 high-resolution images without ground control points and briefly introduces the KOMPSAT-2 sensor model. We also investigated and evaluated the 3-dimensional terrain information obtained using the MSC pass image and scene images acquired from the satellite.


Active Focusing Technique for Extracting Depth Information (액티브 포커싱을 이용한 3차원 물체의 깊이 계측)

  • 이용수;박종훈;최종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.2 / pp.40-49 / 1992
  • In this paper, a new approach is proposed for measuring the depth of 3-D objects from several 2-D images, using linear movement of the lens position in a camera and the focal distance at each position. Sharply focused edges are extracted from the images obtained by moving the camera lens, that is, by varying the distance between the lens and the image plane within the range allowed by the lens system; the depth information of those edges is then obtained from the lens position. Our method requires neither an accurate and complicated camera control system nor a special algorithm for tracing the exact focus point, and it has the advantage that the depths of all objects in a scene are measured by only the linear movement of the lens. The accuracy of the extracted depth information is approximately 5% for object distances between 1 m and 2 m. These results show the method's potential for 3-D depth measurement.
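The core of such a depth-from-focus sweep, finding the lens position at which an edge is sharpest and converting it to object distance, can be sketched with the thin-lens equation. The focal length and sharpness samples below are invented for illustration, not taken from the paper:

```python
# Depth from focus, sketched: for each lens-to-image-plane distance v
# we record an edge-sharpness score; the sharpest v yields the object
# distance u through the thin-lens equation 1/f = 1/u + 1/v.
f = 50.0  # focal length in mm (illustrative)

# (lens-to-image-plane distance v in mm, measured edge sharpness)
sweep = [(51.0, 0.2), (51.5, 0.6), (52.0, 1.4), (52.5, 0.9), (53.0, 0.3)]

v_best = max(sweep, key=lambda s: s[1])[0]  # sharpest lens position
u = 1.0 / (1.0 / f - 1.0 / v_best)          # object distance in mm
print(round(u, 1))  # → 1300.0
```

In practice the sharpness score would come from an edge-strength measure on the image, and the sweep is exactly the linear lens movement the abstract describes.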


A Study on the Rotation Angle Estimation of HMD for the Tele-operated Vision System (원격 비전시스템을 위한 HMD의 방향각 측정 알고리즘에 관한 연구)

  • Ro, Young-Shick;Yoon, Seung-Jun;Kang, Hee-Jun;Suh, Young-Soo
    • The Transactions of The Korean Institute of Electrical Engineers / v.58 no.3 / pp.605-613 / 2009
  • In this paper, we study real-time azimuthal measurement of an HMD (Head-Mounted Display) used to control a tele-operated vision system on a mobile robot. In previous tele-operated vision systems, a joystick was used to control the pan-tilt unit of the remote camera. To give the operator a sense of presence, we display the remote scene on an HMD, measure the HMD's rotation angles in real time, and transmit the measured angles to the mobile robot controller so that the pan-tilt angles of the remote camera stay synchronized with the HMD. We propose an algorithm that estimates the HMD rotation angles in real time from feature points extracted from a PC-camera image, and a simple experiment is conducted to demonstrate its feasibility.
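One common way to turn a tracked feature point's horizontal displacement into a yaw angle, consistent with the feature-point approach sketched above but not necessarily the authors' exact formulation, uses the pinhole relation between pixel offset and ray angle. The focal length and pixel values here are hypothetical:

```python
import math

def yaw_from_feature(x0, x1, focal_px):
    """Yaw change in degrees implied by a feature moving from pixel
    column x0 to x1, for a pinhole camera with the given focal
    length in pixels."""
    return math.degrees(math.atan2(x1 - x0, focal_px))

# A fixed feature drifts 100 px across the image; with an 800 px
# focal length this maps to roughly a 7-degree head rotation.
print(round(yaw_from_feature(420.0, 320.0, 800.0), 2))  # → -7.13
```

The resulting angle would then be streamed to the pan-tilt controller so the remote camera follows the operator's head.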

Depth Measurement System Using Structured Light, Rotational Plane Mirror and Mono-Camera (선형 레이저와 회전 평면경 및 단일 카메라를 이용한 거리측정 시스템)

  • Yoon Chang-Bae;Kim Hyong-Suk;Lin Chun-Shin;Son Hong-Rak;Lee Hye-Jeong
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.406-410 / 2005
  • A depth measurement system consisting of a single camera, a laser light source, and a rotating mirror is investigated. The camera and the light source are fixed, facing the rotating mirror. The laser light is reflected by the mirror and projected onto the scene objects whose locations are to be determined, and the camera detects the laser light on the object surfaces through the same mirror. The area to be measured is scanned by rotating the mirror. The advantages are that (1) the image of the light stripe remains sharp while the background is blurred by the mirror rotation, and (2) the only rotating part of the system is the mirror, yet the mirror angle is not involved in the depth computation, which minimizes the imprecision caused by possible inaccurate angle measurement. The detailed arrangement and experimental results are reported.
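The depth computation behind a structured-light setup of this kind reduces to intersecting the laser ray with the camera's viewing ray across a known baseline. A minimal planar triangulation sketch (the baseline, angles, and geometry are illustrative; the paper's own derivation notably avoids using the mirror angle at all):

```python
import math

def triangulate(baseline, cam_angle, laser_angle):
    """Intersect the camera ray (from the origin) with the laser ray
    (from (baseline, 0)); both angles are measured from the baseline.
    Returns the (x, z) position of the lit surface point."""
    ta = math.tan(cam_angle)    # camera ray:  z = x * tan(cam_angle)
    tb = math.tan(laser_angle)  # laser ray:   z = (baseline - x) * tan(laser_angle)
    x = baseline * tb / (ta + tb)
    return (x, x * ta)

# Symmetric 60-degree rays over a 1 m baseline.
x, z = triangulate(1.0, math.radians(60), math.radians(60))
print(round(x, 3), round(z, 3))  # → 0.5 0.866
```

The stripe's sharpness against the motion-blurred background, noted in the abstract, is what makes locating the laser pixel (and hence the camera ray angle) reliable.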

SPOT Camera Modeling Using Auxiliary Data (영상보조자료를 이용한 SPOT 카메라 모델링)

  • 김만조;차승훈;고보연
    • Korean Journal of Remote Sensing / v.19 no.4 / pp.285-290 / 2003
  • In this paper, a camera modeling method that utilizes ephemeris data and imaging geometry is presented. The proposed method constructs a mathematical model using only parameters contained in the auxiliary files and requires no ground control points for model construction. Control points are needed only to eliminate the model's geolocation error, which originates from errors embedded in the parameters used to build the model. With a few (one or two) control points, an RMS error of around one pixel can be obtained, and the control points need not be uniformly distributed along the line direction of the scene. This advantage is crucial in large-scale projects and can reduce project cost dramatically.

Vehicle Detection for Adaptive Head-Lamp Control of Night Vision System (적응형 헤드 램프 컨트롤을 위한 야간 차량 인식)

  • Kim, Hyun-Koo;Jung, Ho-Youl;Park, Ju H.
    • IEMEK Journal of Embedded Systems and Applications / v.6 no.1 / pp.8-15 / 2011
  • This paper presents an effective method for detecting vehicles in front of a camera-assisted car during nighttime driving. The proposed method detects vehicles by detecting their headlights and taillights using image segmentation and clustering techniques. First, to extract the spotlights of interest effectively, a pre-processing step based on a camera lens filter and a labeling method is applied to the road-scene images. Second, to spatially cluster the detected lamps into vehicles, a grouping process uses light tracking and the location of vehicle lighting patterns. For evaluation, the method was implemented on a DaVinci 7437 DSP board with a visible-light mono camera and tested on urban and rural roads. In these tests, classification performance exceeded 89% precision and 94% recall in a real-time environment.
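The segmentation-and-labeling step this abstract outlines, isolating bright lamp spots before grouping them into vehicles, can be sketched as a threshold followed by connected-component labeling. This toy version (the tiny image and threshold are invented) stands in for the paper's lens-filter plus labeling pipeline:

```python
from collections import deque

def label_spots(img, threshold):
    """Threshold a grayscale image (list of rows) and return the
    connected bright components, each as a list of (row, col) pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for r in range(h):
        for c in range(w):
            if img[r][c] >= threshold and not seen[r][c]:
                # BFS flood fill over 4-connected bright neighbors.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    cr, cc = queue.popleft()
                    comp.append((cr, cc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and img[nr][nc] >= threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                spots.append(comp)
    return spots

# Two bright "headlights" on a dark road scene.
img = [[10, 10, 10, 10, 10, 10],
       [10, 250, 250, 10, 240, 240],
       [10, 250, 250, 10, 240, 240],
       [10, 10, 10, 10, 10, 10]]
print(len(label_spots(img, 200)))  # → 2
```

The subsequent grouping stage would pair such components by size, symmetry, and horizontal alignment into headlight or taillight candidates.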