• Title/Abstract/Keyword: Camera

DESIGN OF CAMERA CONTROLLER FOR HIGH RESOLUTION SPACE-BORN CAMERA SYSTEM

  • Heo, Haeng-Pal;Kong, Jong-Pil;Kim, Young-Sun;Park, Jong-Euk;Yong, Sang-Soon
    • 대한원격탐사학회:학술대회논문집 / Proceedings of ISRS 2007 / pp.130-133 / 2007
  • In order to obtain high-quality, high-resolution image data from a space-borne camera system, the image chain from the sensor to the user at the ground-station needs to be designed and controlled with extreme care. The behavior of the camera system needs to be controlled by ground commands to support on-orbit calibration, to adjust imaging parameters, and to perform early-stage on-orbit image correction such as gain and offset control and non-uniformity correction. The operational status, including the sensor temperature, needs to be transferred to the ground-station. The preparation time of the camera system for imaging with specific parameters should be minimized. The camera controller needs to synchronize the operation of the cameras for every channel and every spectral band. Detailed timing information of the image data needs to be provided for image data correction at the ground-station. In this paper, the design of the camera controller for the AEISS on KOMPSAT-3 is introduced. It is described how the image chain is controlled and which imaging parameters can be adjusted. The camera controller contains software for flexible operation of the camera by the ground-station operators and can be reconfigured by ground commands. A simple concept of the camera operations and the design of the camera controller, covering not only the hardware but also the controller software, are introduced in this paper.
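The gain/offset control and non-uniformity correction mentioned above can be illustrated with a generic two-point correction. The following is a minimal NumPy sketch of that standard technique using simulated dark and flat-field frames; it is not the AEISS on-board implementation, and all names and values are hypothetical.

```python
import numpy as np

def compute_nuc_gain(dark_frame, flat_frame):
    """Per-pixel gain derived from a dark frame and a uniform (flat-field) frame."""
    signal = flat_frame - dark_frame
    return signal.mean() / np.maximum(signal, 1e-6)     # flattens the pixel response

def apply_nuc(raw, gain, dark_frame):
    """Two-point correction: corrected = gain * (raw - dark)."""
    return gain * (raw.astype(np.float64) - dark_frame)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark = rng.normal(100.0, 2.0, (512, 512))           # simulated per-pixel offsets
    flat = dark + rng.normal(3000.0, 50.0, (512, 512))  # simulated non-uniform response
    gain = compute_nuc_gain(dark, flat)
    corrected = apply_nuc(flat, gain, dark)
    print(flat.std(), corrected.std())                  # residual non-uniformity drops
```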

자율주행을 위한 이중초점 스테레오 카메라 시스템을 이용한 깊이 영상 생성 방법 (Depth Generation using Bifocal Stereo Camera System for Autonomous Driving)

  • 이은경
    • 한국전자통신학회논문지 / Vol. 16, No. 6 / pp.1311-1316 / 2021
  • In this paper, we propose a dual-view stereo camera system that combines two cameras with different focal lengths in order to generate dual-view stereo images and the corresponding depth maps. To generate a depth map with the proposed bifocal stereo camera system, camera calibration is first performed to extract the camera parameters of the two cameras with different focal lengths. Using the camera parameters, a common image plane for depth-map generation is constructed and stereo image rectification is performed. Finally, the depth map is generated from the rectified stereo images. In this paper, the SGM (Semi-Global Matching) algorithm is used to generate the depth map. The proposed bifocal stereo camera system performs the functions required of the individual cameras with different focal lengths and, at the same time, produces distance information to vehicles, pedestrians, and obstacles in the current driving environment through stereo matching between the two cameras, enabling the design of safer autonomous vehicles.
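The calibration, rectification, and SGM steps described above correspond to the standard OpenCV stereo pipeline. The sketch below assumes the intrinsic/extrinsic parameters (K_l, D_l, K_r, D_r, R, T) have already been estimated by calibration and that the inputs are 8-bit grayscale images; the SGBM settings are illustrative placeholders, not the parameters of the proposed bifocal rig.

```python
import cv2
import numpy as np

def rectify_and_match(img_l, img_r, K_l, D_l, K_r, D_r, R, T):
    """Rectify a stereo pair and compute disparity/depth with semi-global matching."""
    size = (img_l.shape[1], img_l.shape[0])
    # Rectification transforms projecting both views onto a common image plane.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, D_l, K_r, D_r, size, R, T)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    # Semi-global matching (OpenCV's SGBM variant of SGM).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)    # disparity -> 3D points
    return disparity, points_3d
```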

파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션 (Multiple Camera Calibration for Panoramic 3D Virtual Environment)

  • 김세환;김기영;우운택
    • 전자공학회논문지CI / Vol. 41, No. 2 / pp.137-148 / 2004
  • In this paper, we propose a calibration method for multiple rotating multi-view cameras for generating an image-based panoramic 3D virtual environment (VE). In general, the accuracy of the camera parameters obtained by a camera calibration algorithm degrades considerably as the distance between the camera and the calibration pattern increases, which makes such parameters unsuitable for panoramic image generation. To overcome this problem, the geometric relationships between the lenses of a multi-view camera and between the rotating multi-view cameras are exploited to increase the accuracy. First, intra-camera calibration is performed by comparing the camera parameters obtained with Tsai's calibration algorithm against the prior geometric information between the camera lenses and correcting for the resulting error. Then, inter-camera calibration is performed by applying the ICP algorithm to the 3D point clouds back-projected into the virtual space. Extending this, the rotating multiple multi-view cameras are calibrated by successively performing inter-camera calibration, around the position of a reference camera, on the 3D point clouds acquired while rotating the cameras. Because comparatively improved camera parameters can be obtained through this calibration method, they can be used in the registration process for generating a panoramic 3D virtual environment. In addition, the method can be utilized in various AR application fields, such as real-time 3D object tracking and AR application systems.
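The inter-camera calibration step aligns back-projected 3D point clouds with ICP. The sketch below shows only the core rigid-alignment update (a single Kabsch/SVD step under known correspondences); a full ICP loop would re-estimate nearest-neighbor correspondences and iterate, and the function names here are illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate R, t such that R @ src[i] + t ~= dst[i] for Nx3 corresponding point sets."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Usage idea: src = point cloud back-projected from one multi-view lens,
# dst = the overlapping cloud from the reference camera; R, t then refine
# the relative pose between the two cameras.
```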

머신비젼 기반의 자율주행 차량을 위한 카메라 교정 (Camera Calibration for Machine Vision Based Autonomous Vehicles)

  • 이문규;안택진
    • 제어로봇시스템학회논문지 / Vol. 8, No. 9 / pp.803-811 / 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of a camera in the machine vision system. To find accurate values of these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and then eight nonlinear equations are generated based on the points. The least-squares method is used to find estimates of the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the corresponding real-world scene. Experimental results prove the feasibility of the proposed algorithm.
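The Group I estimation amounts to nonlinear least squares over geometric constraints built from known lane geometry. The sketch below is a simplified stand-in, not the paper's eight equations: it fits pitch, yaw, and focal length by minimizing reprojection error for six hypothetical control points on two lane boundaries of known width, with the camera position assumed known.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation(pitch, yaw):
    """Camera orientation relative to the road frame (roll assumed zero)."""
    cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Rx @ Ry

def residual(params, world_pts, image_pts, cam_pos):
    """Reprojection error for orientation (pitch, yaw) and focal length."""
    pitch, yaw, focal = params
    cam = (rotation(pitch, yaw) @ (world_pts - cam_pos).T).T   # road -> camera frame
    proj = focal * cam[:, :2] / cam[:, 2:3]                    # pinhole projection
    return (proj - image_pts).ravel()

# Hypothetical control points on two lane boundaries 3.5 m apart, at three ranges.
world = np.array([[x, 0.0, z] for z in (5.0, 10.0, 15.0) for x in (-1.75, 1.75)])
cam_pos = np.array([0.0, -1.4, 0.0])              # camera 1.4 m above the road (y down)
true_params = np.array([0.05, 0.02, 800.0])       # pitch, yaw (rad), focal (px)
measured = residual(true_params, world, np.zeros((6, 2)), cam_pos).reshape(-1, 2)
fit = least_squares(residual, x0=np.array([0.0, 0.0, 700.0]),
                    args=(world, measured, cam_pos))
print(fit.x)                                      # recovers ~[0.05, 0.02, 800]
```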

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / Vol. 1, No. 2 / pp.78-87 / 2012
  • In this paper, we address the tracking problem caused by camera motion and the rolling shutter effects associated with CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed for simultaneously tracking objects and compensating for the effects of camera motion. The proposed method uses an elastic registration (ER) algorithm that considers the global affine motion as well as the brightness and contrast between images, assuming that camera motion results in an affine transform of the image between two successive frames. Because the camera motion is modeled globally by an affine transform, only the global affine model is considered instead of a local model. Only the brightness parameter is used for intensity variation; the contrast parameters used in the original ER algorithm are ignored because the change in illumination between temporally adjacent frames is sufficiently small. The proposed particle filtering consists of the following four steps: (i) prediction step, (ii) compensation of the prediction state error based on camera motion estimation, (iii) update step, and (iv) re-sampling step. A larger number of particles is needed when camera motion generates a prediction state error of an object at the prediction step. The proposed method robustly tracks the object of interest by compensating for the prediction state error using the affine motion model estimated from ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly in consumer handheld imaging devices.
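A rough sketch of the four steps, with the prediction-error compensation of step (ii) driven by a global affine camera-motion estimate, is given below. Feature tracking with cv2.calcOpticalFlowPyrLK plus cv2.estimateAffine2D stands in for the elastic registration algorithm, the state is reduced to 2-D object position, and parameter values are illustrative.

```python
import cv2
import numpy as np

def estimate_global_affine(prev_gray, curr_gray):
    """Approximate frame-to-frame camera motion with a global 2x3 affine matrix."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    A, _ = cv2.estimateAffine2D(pts[good], nxt[good])
    return A                                        # [[a, b, tx], [c, d, ty]]

def predict(particles, A, motion_std=3.0):
    """(i)-(ii): diffuse particles, then compensate for camera-induced motion."""
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    ones = np.ones((len(particles), 1))
    return np.hstack([particles, ones]) @ A.T       # apply the affine to (x, y) states

def update(particles, likelihood_fn):
    """(iii): weight each particle by the observation likelihood, then normalize."""
    w = np.array([likelihood_fn(p) for p in particles])
    return w / (w.sum() + 1e-12)

def resample(particles, weights):
    """(iv): systematic resampling."""
    positions = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]
```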

특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법 (Localization of a Monocular Camera using a Feature-based Probabilistic Map)

  • 김형진;이동화;오택준;명현
    • 제어로봇시스템학회논문지 / Vol. 21, No. 4 / pp.367-371 / 2015
  • In this paper, a novel localization method for a monocular camera is proposed by using a feature-based probabilistic map. The localization of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, the camera pose is estimated by probabilistic approaches under a lack of features; an extra system is then needed because the camera alone cannot estimate the full state of the robot pose. Therefore, we propose an accurate localization method for a monocular camera that uses a probabilistic approach in the case of an insufficient image dataset, without any extra system. In our system, features from a probabilistic map are projected onto the image plane using a linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image, the accurate pose of the monocular camera is estimated from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in a 3D space.
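The two-stage idea, an initial pose from PnP refined by minimizing a Mahalanobis distance weighted by the map features' covariances, can be sketched with OpenCV and SciPy as below. The covariance handling and function names are a generic illustration under assumed known 3D-2D correspondences, not the paper's exact formulation.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def initial_pose(map_pts3d, img_pts2d, K):
    """Initial camera pose from 3D-to-2D correspondences via PnP."""
    _, rvec, tvec = cv2.solvePnP(map_pts3d, img_pts2d, K, None)
    return np.hstack([rvec.ravel(), tvec.ravel()])

def mahalanobis_residual(pose, map_pts3d, img_pts2d, covs2d, K):
    """Whitened reprojection residuals: each feature weighted by its 2x2 covariance."""
    rvec, tvec = pose[:3], pose[3:]
    proj, _ = cv2.projectPoints(map_pts3d, rvec, tvec, K, None)
    res = []
    for p, z, S in zip(proj.reshape(-1, 2), img_pts2d, covs2d):
        L = np.linalg.cholesky(np.linalg.inv(S))    # whitening transform
        res.append(L.T @ (p - z))
    return np.concatenate(res)

def refine_pose(map_pts3d, img_pts2d, covs2d, K):
    pose0 = initial_pose(map_pts3d, img_pts2d, K)
    fit = least_squares(mahalanobis_residual, pose0,
                        args=(map_pts3d, img_pts2d, covs2d, K))
    return fit.x[:3], fit.x[3:]                     # refined rvec, tvec
```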

HVCM(Hybrid Voice Coil Motor) Actuator적용을 통한 AUTO Focusing Camera Module 성능개선 (Performance Improvement of an Auto-Focusing Camera Module through Application of an HVCM (Hybrid Voice Coil Motor) Actuator)

  • 권태권;김영길
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2011 Spring Conference / pp.307-309 / 2011
  • Most camera modules in recently released high-end mobile phones are equipped with an auto-focusing function, and as the pixel count of camera modules increases, consumers demand more precise and more stable AF actuation. This paper proposes a hybrid VCM with an improved structure to guarantee resolution and stable actuator operation, addressing a problem of the VCM actuators applied to current camera modules: resolution deviation that occurs during auto focusing depending on the lens focus position and the attitude of the module.

SURF와 Label Cluster를 이용한 이동형 카메라에서 동적물체 추출 (Moving Object Detection Using SURF and Label Cluster Update in Active Camera)

  • 정용한;박은수;이형호;왕덕창;허욱열;김학일
    • 제어로봇시스템학회논문지 / Vol. 18, No. 1 / pp.35-41 / 2012
  • This paper proposes a moving object detection algorithm for an active camera system that can be applied to mobile robots and intelligent surveillance systems. Most moving object detection algorithms are based on a stationary camera system: they are used in fixed surveillance systems that do not consider the motion of the background, or in robot tracking systems that track pre-learned objects. Unlike the stationary camera system, the active camera system has the problem that it is difficult to extract the moving object because of the error caused by the movement of the camera. In order to overcome this problem, the motion of the camera is compensated for by using SURF and a pseudo-perspective model, and then the moving object is extracted efficiently using a stochastic label cluster transport model. This method can detect the moving object because it minimizes the effect of the background movement. Our approach proves robust and effective for moving object detection in an active camera system.
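The camera-motion compensation idea can be sketched as follows: match features between consecutive frames, fit a global transform, warp the previous frame, and difference it against the current one. ORB is used here in place of SURF (SURF lives in OpenCV's non-free contrib module), a homography stands in for the pseudo-perspective model, the label-cluster update step is not shown, and threshold values are illustrative.

```python
import cv2
import numpy as np

def motion_compensated_diff(prev_gray, curr_gray, diff_thresh=30):
    """Foreground mask after compensating for global (camera) motion between frames."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # global camera motion
    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, warped_prev)             # residual motion = moving objects
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```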

Search for Gravity Waves with a New All-sky Camera System

  • Kim, Yong-Ha;Chung, Jong-Kyun;Won, Yong-In;Lee, Bang-Yong
    • Ocean and Polar Research / Vol. 24, No. 3 / pp.263-266 / 2002
  • Gravity waves have been searched for with a new all-sky camera system over the Korean Peninsula. The all-sky camera consists of a 37 mm/F4.5 Mamiya fisheye lens with a 180 deg field of view, interference filters, and a 1024 by 1024 CCD camera. The all-sky camera was tested near Daejeon city and then moved to Mt. Bohyun, where the largest astronomical telescope in Korea is operated. A clear wave pattern was successfully detected in OH filter images over Mt. Bohyun on July 18, 2001, indicating that small-scale coherent gravity waves perturbed the OH airglow near the mesopause. Other wave features have since been observed with the Na 589.8 nm and OI 630.0 nm filters. Since a Japanese all-sky camera network has already detected traveling ionospheric disturbances (TIDs) over the northeast-southwest extent of the Japanese islands, we hope our all-sky camera extends the coverage of TID observations to the west. We plan to operate our all-sky camera all year round to study the seasonal variation of wave activity over the mid-latitude upper atmosphere.

Design & Test of Stereo Camera Ground Model for Lunar Exploration

  • Heo, Haeng-Pal;Park, Jong-Euk;Shin, Sang-Youn;Yong, Sang-Soon
    • 대한원격탐사학회지 / Vol. 28, No. 6 / pp.693-704 / 2012
  • Space-borne remote sensing camera systems tend to be developed to have very high performance. They are developed to provide extremely small ground sample distance, wide swath width, and good MTF (Modulation Transfer Function) at the expense of large volume, heavy mass, and high power consumption. Therefore, the camera system occupies a relatively large portion of the satellite bus in terms of mass and volume. However, camera systems for lunar exploration do not need such high performance. Instead, they should be versatile for various usages under various operating environments, and should be light, small, and low in power consumption. In order to be used for the national lunar exploration program, a versatile electro-optical camera system, called MAEPLE (Multi-Application Electro-Optical Payload for Lunar Exploration), has been designed after the derivation of the camera system requirements. A ground model of the camera system has been manufactured to identify and secure the relevant key technologies. The ground model was mounted on an aircraft and checked to see whether the basic design concept is valid and whether the versatile functions implemented in the camera system work properly. In this paper, the results of the design and of the functional tests performed through field campaigns and airborne imaging are introduced.