• Title/Summary/Keyword: Camera


DESIGN OF CAMERA CONTROLLER FOR HIGH RESOLUTION SPACE-BORNE CAMERA SYSTEM

  • Heo, Haeng-Pal;Kong, Jong-Pil;Kim, Young-Sun;Park, Jong-Euk;Yong, Sang-Soon
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.130-133
    • /
    • 2007
  • To obtain high-quality, high-resolution image data from a space-borne camera system, the image chain from the sensor to the user at the ground station needs to be designed and controlled with extreme care. The behavior of the camera system must be controllable by ground commands to support on-orbit calibration, to adjust imaging parameters, and to perform early-stage on-orbit image correction such as gain and offset control and non-uniformity correction. The operational status, including the sensor temperature, needs to be transferred to the ground station. The preparation time of the camera system for imaging with specific parameters should be minimized, and the camera controller needs to synchronize the operation of the cameras for every channel and every spectral band. Detailed timing information for the image data must be provided for image-data correction at the ground station. In this paper, the design of the camera controller for the AEISS on KOMPSAT-3 is introduced. It is described how the image chain is controlled and which imaging parameters can be adjusted. The camera controller carries software for flexible operation of the camera by ground-station operators and can be reconfigured by ground commands. A simple concept of camera operations and the design of the camera controller, covering both hardware and controller software, are introduced.
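The gain/offset and non-uniformity corrections mentioned in the abstract reduce, per pixel, to applying calibration tables to the raw detector readout. A minimal NumPy sketch (the array shapes and values are hypothetical; the paper does not give the actual correction pipeline):

```python
import numpy as np

def correct_image(raw, gain, offset):
    """Per-pixel non-uniformity correction: subtract a fixed-pattern
    offset and scale by a gain table (both uplinked by ground command
    in the scenario the abstract describes)."""
    return (raw.astype(np.float64) - offset) * gain

# Hypothetical 4x4 detector readout with uniform offset and gain tables.
raw = np.full((4, 4), 100.0)
offset = np.full((4, 4), 10.0)
gain = np.full((4, 4), 1.5)

corrected = correct_image(raw, gain, offset)  # (100 - 10) * 1.5 = 135 per pixel
```

In practice the gain and offset tables vary per pixel (that is what makes the correction "non-uniformity" correction); uniform tables are used here only to keep the example checkable by hand.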


Depth Generation using Bifocal Stereo Camera System for Autonomous Driving (자율주행을 위한 이중초점 스테레오 카메라 시스템을 이용한 깊이 영상 생성 방법)

  • Lee, Eun-Kyung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.6
    • /
    • pp.1311-1316
    • /
    • 2021
  • In this paper, we present a bifocal stereo camera system that combines two cameras with different focal lengths to generate stereoscopic images and their corresponding depth maps. To obtain depth data with the bifocal stereo camera system, we perform camera calibration to extract the intrinsic and extrinsic parameters of each camera. Using these parameters, we calculate a common image plane and perform image rectification, and finally apply the SGM (semi-global matching) algorithm to generate the depth map. The proposed bifocal stereo camera system not only performs the two cameras' own functions but also generates distance information about vehicles, pedestrians, and obstacles in the current driving environment, making it possible to design safer autonomous vehicles.
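Once the views are rectified onto a common image plane and SGM has produced a disparity map, the depth follows from the standard stereo relation Z = f·B/d. A sketch of that last conversion step (focal length and baseline values are made up; the paper's actual SGM stage is assumed to have run already):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d.
    Assumes the two views were already rectified onto a common image plane,
    as described for the bifocal stereo system."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)      # zero disparity -> point at infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Hypothetical 2x2 disparity map, 800 px focal length, 0.5 m baseline.
disp = np.array([[0.0, 10.0],
                 [20.0, 40.0]])
depth = depth_from_disparity(disp, focal_px=800.0, baseline_m=0.5)
```

In a real pipeline the disparity map would come from an SGM implementation (e.g. OpenCV's `StereoSGBM`); a constant array stands in for it here so the arithmetic is transparent.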

Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • Kim, Se-Hwan;Kim, Ki-Young;Woo, Woon-Tack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.137-148
    • /
    • 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras to generate an image-based panoramic 3D virtual environment. Since calibration accuracy worsens as the distance between the camera and the calibration pattern increases, conventional camera calibration algorithms are not suitable for panoramic 3D VE generation. To remedy this, the geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and the geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for all lenses of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are compensated by iteratively reducing the discrepancy between estimated and actual distances, where the estimated distances are calculated using the extrinsic parameters of every lens. Inter-camera calibration then arranges the multiple cameras in a geometric relationship by applying the Iterative Closest Point (ICP) algorithm to back-projected 3D point clouds. Finally, by repeatedly applying intra- and inter-camera calibration to all lenses of the rotating multi-view cameras, improved extrinsic parameters can be obtained at every rotated position for middle-range distances. Consequently, the proposed method can be applied to stitching 3D point clouds for panoramic 3D VE generation, and it may also be adopted in various 3D AR applications.
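The inner step of the ICP alignment used for inter-camera calibration is a best-fit rigid transform between two point clouds with known correspondences, classically solved with an SVD (the Kabsch method). A self-contained sketch on synthetic data (the paper's actual back-projected clouds and its full ICP loop, which also re-estimates correspondences, are not reproduced here):

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Best-fit rotation R and translation t with dst ~= src @ R.T + t,
    given known point correspondences (the inner step of one ICP iteration),
    computed via SVD of the cross-covariance matrix."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate a random cloud 90 degrees about z and shift it.
rng = np.random.default_rng(0)
src = rng.standard_normal((50, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ R_true.T + t_true
R, t = fit_rigid_transform(src, dst)
```

With exact correspondences the true transform is recovered; in ICP proper, correspondences are re-estimated (nearest neighbors) and this fit is repeated until convergence.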

Camera Calibration for Machine Vision Based Autonomous Vehicles (머신비젼 기반의 자율주행 차량을 위한 카메라 교정)

  • Lee, Mun-Gyu;An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.9
    • /
    • pp.803-811
    • /
    • 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of the camera in the machine vision system. To find accurate values for these parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic-lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated from these points. The least-squares method is used to find estimates for the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the corresponding real-world scene. Experimental results prove the feasibility of the proposed algorithm.
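The Group I estimation above boils down to a least-squares solve of an overdetermined system (eight equations, a handful of unknowns). The paper's equations are nonlinear and would be linearized first; as an illustration of only the least-squares step, here is a toy linear system with the same 8-equations shape (the matrix and unknowns are invented, not the paper's):

```python
import numpy as np

# Toy overdetermined system: 8 equations in 3 unknowns, standing in for
# the (linearized) equations that relate the six control points to the
# Group I parameters. A and x_true are synthetic.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true                      # consistent right-hand side, no noise

# Least-squares estimate; with noise-free b this recovers x_true.
x_est, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
```

With measurement noise added to `b`, `x_est` would instead be the minimum-residual estimate, which is the role least squares plays in the calibration described.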

Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.2
    • /
    • pp.78-87
    • /
    • 2012
  • In this paper, we address the tracking problem caused by camera motion and the rolling-shutter effects associated with CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed for simultaneously tracking objects and compensating for the effects of camera motion. The proposed method uses an elastic registration (ER) algorithm that considers global affine motion as well as the brightness and contrast between images, assuming that camera motion results in an affine transform of the image between two successive frames. Because the camera motion is modeled globally by an affine transform, only the global affine model, rather than a local model, is considered, and only the brightness parameter is used for intensity variation; the contrast parameters of the original ER algorithm are ignored because the change in illumination between temporally adjacent frames is small. The proposed particle filtering consists of four steps: (i) prediction, (ii) compensation of the prediction-state error based on camera motion estimation, (iii) update, and (iv) re-sampling. A larger number of particles would otherwise be needed when camera motion introduces a prediction-state error at the prediction step. The proposed method robustly tracks the object of interest by compensating for the prediction-state error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and can track moving objects robustly in consumer handheld imaging devices.
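The four-step cycle (predict, compensate, update, resample) can be sketched with a deliberately tiny 1-D particle filter. This is a toy under strong assumptions: the state is a single scalar position, the camera motion is reduced from an affine transform to a pure translation, and the measurement model is Gaussian; none of these simplifications are from the paper itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def particle_filter_step(particles, measurement, camera_shift, noise=1.0):
    """One cycle of the four steps described in the abstract, in 1-D:
    (i) prediction, (ii) camera-motion compensation, (iii) weight update,
    (iv) resampling. Returns the resampled particle set."""
    # (i) prediction: random-walk dynamics
    particles = particles + rng.normal(0.0, noise, size=particles.shape)
    # (ii) compensation: remove the estimated global camera motion
    #      (a pure shift here, standing in for the affine ER estimate)
    particles = particles - camera_shift
    # (iii) update: Gaussian likelihood of the measurement
    weights = np.exp(-0.5 * ((particles - measurement) / noise) ** 2)
    weights /= weights.sum()
    # (iv) resampling: draw particles proportionally to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Object truly near position 5.0; particles start far away at 0.0.
particles = np.zeros(500)
for _ in range(15):
    particles = particle_filter_step(particles, measurement=5.0,
                                     camera_shift=0.0)
estimate = particles.mean()
```

After a few cycles the particle cloud migrates to the measured position; with a nonzero `camera_shift`, step (ii) is what keeps the cloud from drifting with the camera instead of the object.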


Localization of a Monocular Camera using a Feature-based Probabilistic Map (특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법)

  • Kim, Hyungjin;Lee, Donghwa;Oh, Taekjun;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.4
    • /
    • pp.367-371
    • /
    • 2015
  • In this paper, a novel localization method for a monocular camera is proposed using a feature-based probabilistic map. The pose of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, the camera pose is estimated by probabilistic approaches even with few features, but an extra system is needed because the camera alone cannot estimate the full state of the robot pose. We therefore propose an accurate localization method for a monocular camera that uses a probabilistic approach, works with an insufficient image dataset, and requires no extra system. In our system, features from a probabilistic map are projected onto the image plane using a linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and those extracted from a query image, the pose of the monocular camera is accurately estimated starting from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in 3D space.
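The quantity being minimized is the Mahalanobis distance, which weights the pixel error between a projected map feature and an extracted image feature by the map feature's uncertainty. A short sketch of that distance on made-up 2-D feature coordinates (the covariance values are illustrative, not from the paper):

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance between an extracted feature x and a
    projected probabilistic-map feature with the given mean and covariance."""
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

mean = np.array([100.0, 50.0])              # projected feature (pixels)
cov = np.array([[4.0, 0.0],                 # uncertain along the x axis
                [0.0, 1.0]])                # confident along the y axis
a = np.array([102.0, 50.0])                 # 2 px off along the uncertain axis
b = np.array([100.0, 52.0])                 # 2 px off along the confident axis
da, db = mahalanobis_sq(a, mean, cov), mahalanobis_sq(b, mean, cov)
```

The same 2-pixel offset scores 1.0 along the uncertain axis but 4.0 along the confident one, which is exactly why this metric, rather than plain Euclidean distance, suits a probabilistic map: errors in directions the map is unsure about are penalized less.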

Auto-Focusing Camera Module Performance Improvement through Application of an HVCM (Hybrid Voice Coil Motor) Actuator (HVCM(Hybrid Voice Coil Motor) Actuator적용을 통한 AUTO Focusing Camera Module 성능개선)

  • Kwon, Tae-Kwon;Kim, Young-Kil
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2011.05a
    • /
    • pp.307-309
    • /
    • 2011
  • Recently released camera modules assembled into high-end handsets generally carry an auto-focusing function. The resolution of these camera modules keeps increasing, and customers demand more precise and stable auto focusing. While performing auto focusing, camera modules using a conventional VCM often suffer from errors in the lens focusing position and from resolution deviation depending on the module's orientation. For this reason, we propose a Hybrid VCM with an improved structure for stable actuator operation and a higher resolution level.


Moving Object Detection Using SURF and Label Cluster Update in Active Camera (SURF와 Label Cluster를 이용한 이동형 카메라에서 동적물체 추출)

  • Jung, Yong-Han;Park, Eun-Soo;Lee, Hyung-Ho;Wang, De-Chang;Huh, Uk-Youl;Kim, Hak-Il
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.1
    • /
    • pp.35-41
    • /
    • 2012
  • This paper proposes a moving-object detection algorithm for an active camera system that can be applied to mobile robots and intelligent surveillance systems. Most moving-object detection algorithms are based on a stationary camera system: either a fixed surveillance system that does not consider motion of the background, or a robot tracking system that tracks pre-learned objects. Unlike a stationary camera system, an active camera system has difficulty extracting the moving object because of errors caused by the movement of the camera. To overcome this problem, the motion of the camera is compensated using SURF and a pseudo-perspective model, and the moving object is then extracted efficiently using a stochastic label-cluster transport model. This method can detect the moving object because it minimizes the effect of background movement. Our approach proves robust and effective for moving-object detection in an active camera system.
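The compensate-then-difference idea behind this kind of active-camera detection can be sketched with NumPy, reducing the pseudo-perspective camera model to a pure integer-pixel shift (a strong simplification; the paper estimates the motion from SURF matches, which is omitted here):

```python
import numpy as np

def detect_moving(prev, curr, shift, thresh=10.0):
    """Compensate global camera motion (here an integer-pixel shift, a
    stand-in for the pseudo-perspective model) by warping the previous
    frame, then difference the frames; large residuals mark moving pixels."""
    compensated = np.roll(prev, shift, axis=(0, 1))
    diff = np.abs(curr.astype(np.float64) - compensated)
    return diff > thresh

# Synthetic frames: static background pattern, camera shifted by (1, 2)
# pixels between frames, one bright "object" appearing at pixel (5, 5).
rng = np.random.default_rng(3)
prev = rng.uniform(0.0, 255.0, (16, 16))
curr = np.roll(prev, (1, 2), axis=(0, 1)).copy()
curr[5, 5] += 100.0
mask = detect_moving(prev, curr, shift=(1, 2))
```

With the correct shift, the background cancels exactly and only the object pixel survives the threshold; with the wrong shift, the whole background would light up, which is precisely the failure mode of applying stationary-camera differencing to an active camera.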

Search for Gravity Waves with a New All-sky Camera System

  • Kim, Yong-Ha;Chung, Jong-Kyun;Won, Yong-In;Lee, Bang-Yong
    • Ocean and Polar Research
    • /
    • v.24 no.3
    • /
    • pp.263-266
    • /
    • 2002
  • Gravity waves have been searched for with a new all-sky camera system over the Korean Peninsula. The all-sky camera consists of a 37 mm/F4.5 Mamiya fisheye lens with a 180-degree field of view, interference filters, and a 1024 by 1024 CCD camera. The all-sky camera was tested near Daejeon city and then moved to Mt. Bohyun, where the largest astronomical telescope in Korea is operated. A clear wave pattern was successfully detected in OH filter images over Mt. Bohyun on July 18, 2001, indicating that small-scale coherent gravity waves perturbed OH airglow near the mesopause. Other wave features have since been observed with the Na 589.8 nm and OI 630.0 nm filters. Since a Japanese all-sky camera network has already detected traveling ionospheric disturbances (TIDs) over the northeast-southwest range of the Japanese islands, we hope our all-sky camera extends the coverage of TID observations to the west. We plan to operate our all-sky camera year-round to study the seasonal variation of wave activity over the mid-latitude upper atmosphere.

Design & Test of Stereo Camera Ground Model for Lunar Exploration

  • Heo, Haeng-Pal;Park, Jong-Euk;Shin, Sang-Youn;Yong, Sang-Soon
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.6
    • /
    • pp.693-704
    • /
    • 2012
  • Space-borne remote sensing camera systems tend to be developed for very high performance: extremely small ground sample distance, wide swath width, and good MTF (modulation transfer function), at the expense of large volume, heavy weight, and high power consumption. The camera system therefore occupies a relatively large portion of the satellite bus in terms of mass and volume. Camera systems for lunar exploration, however, do not need such high performance. Instead, they should be versatile for various usages under various operating environments, and should be light, small, and low-power. For use in the national lunar exploration program, a versatile electro-optical camera system called MAEPLE (Multi-Application Electro-Optical Payload for Lunar Exploration) was designed after derivation of the camera system requirements. A ground model of the camera system was manufactured to identify and secure the relevant key technologies. The ground model was mounted on an aircraft to check whether the basic design concept is valid and whether the versatile functions implemented in the camera system work properly. In this paper, the results of the design and of the functional tests performed through field campaigns and airborne imaging are introduced.