• Title/Summary/Keyword: camera model


Modified Particle Filtering for Unstable Handheld Camera-Based Object Tracking

  • Lee, Seungwon;Hayes, Monson H.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.1 no.2 / pp.78-87 / 2012
  • In this paper, we address the tracking problems caused by camera motion and by the rolling-shutter effects of the CMOS sensors in consumer handheld cameras, such as mobile cameras, digital cameras, and digital camcorders. A modified particle filtering method is proposed that simultaneously tracks objects and compensates for the effects of camera motion. The proposed method uses an elastic registration (ER) algorithm that considers global affine motion as well as brightness and contrast between images, assuming that camera motion results in an affine transform of the image between two successive frames. Because camera motion is modeled globally by an affine transform, only the global affine model, rather than a local model, is considered. Only the brightness parameter is used for intensity variation; the contrast parameters of the original ER algorithm are ignored because the change in illumination between temporally adjacent frames is small. The proposed particle filter consists of four steps: (i) prediction, (ii) compensation of the prediction state error based on camera motion estimation, (iii) update, and (iv) re-sampling. Many more particles would otherwise be needed when camera motion introduces a prediction state error at the prediction step; the proposed method tracks the object of interest robustly by compensating for this error using the affine motion model estimated by ER. Experimental results show that the proposed method outperforms the conventional particle filter and tracks moving objects robustly on consumer handheld imaging devices. (A minimal sketch of the compensated prediction step is given after this entry.)

  • PDF
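
The abstract above lists the four filtering steps without pseudocode; the following is a minimal sketch of one iteration, assuming hypothetical `estimate_affine` (standing in for the ER-based global affine estimate) and `likelihood` callbacks. It is a sketch of the general idea, not the authors' implementation.

```python
import numpy as np

def track_step(particles, weights, frame, prev_frame,
               estimate_affine, likelihood, motion_noise=5.0):
    """One iteration of a particle filter whose predicted states are
    corrected by a global affine camera-motion estimate (sketch only)."""
    # (i) Prediction: propagate particle positions with random-walk noise.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)

    # (ii) Compensation: warp the predicted states by the global affine motion
    # estimated between the previous and current frames (e.g. by ER).
    A, t = estimate_affine(prev_frame, frame)      # 2x2 matrix, 2-vector
    particles = particles @ A.T + t

    # (iii) Update: re-weight particles by the observation likelihood.
    weights = weights * np.array([likelihood(frame, p) for p in particles])
    weights = weights / weights.sum()

    # (iv) Re-sampling: draw particles proportionally to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))

    # Point estimate of the target position.
    return particles, weights, particles.mean(axis=0)
```

The only difference from a standard bootstrap particle filter is step (ii), which warps the predicted particle states by the estimated camera motion before weighting.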

Design & Test of Stereo Camera Ground Model for Lunar Exploration

  • Heo, Haeng-Pal;Park, Jong-Euk;Shin, Sang-Youn;Yong, Sang-Soon
    • Korean Journal of Remote Sensing / v.28 no.6 / pp.693-704 / 2012
  • Space-borne remote sensing camera systems tend to be developed for very high performance. They provide extremely small ground sample distance, wide swath width, and good MTF (Modulation Transfer Function) at the expense of large volume, heavy weight, and high power consumption, so the camera system occupies a relatively large portion of the satellite bus in terms of mass and volume. Camera systems for lunar exploration, however, do not need such high performance. Instead, they should be versatile for various uses under various operating environments, and they should be light, small, and low-power. For use in the national lunar exploration program, a versatile electro-optical camera system called MAEPLE (Multi-Application Electro-Optical Payload for Lunar Exploration) was designed after deriving the camera system requirements. A ground model of the camera system was manufactured to identify and secure the relevant key technologies. The ground model was mounted on an aircraft to check whether the basic design concept is valid and whether the versatile functions implemented in the camera system work properly. In this paper, the results of the design and of the functional tests performed through field campaigns and airborne imaging are introduced.

Robust Camera Calibration using TSK Fuzzy Modeling

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • International Journal of Fuzzy Logic and Intelligent Systems / v.7 no.3 / pp.216-220 / 2007
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, meanwhile, is a very popular fuzzy system that approximates any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules, and it offers not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple camera calibration technique for machine vision using a TSK fuzzy model. The proposed method divides the world into several regions according to the camera view and uses clustered 3D geometric knowledge. The TSK fuzzy system estimates the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration method.
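
As a rough illustration of the Takagi-Sugeno-Kang inference the abstract relies on, the sketch below combines per-rule linear consequents with Gaussian rule memberships over input regions. The variable names (`centers`, `widths`, `coeffs`) are hypothetical, and the paper's construction of rules from clustered 3D geometric knowledge is not reproduced here.

```python
import numpy as np

def tsk_predict(x, centers, widths, coeffs):
    """First-order TSK inference: a membership-weighted average of
    per-rule linear consequents (generic sketch)."""
    # Rule firing strengths: Gaussian membership of x to each region centre.
    w = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    w = w / w.sum()
    # Linear consequent of each rule: y_i = A_i @ [x; 1].
    x1 = np.append(x, 1.0)
    y = coeffs @ x1                      # shape: (n_rules, output_dim)
    # Weighted combination of the rule outputs.
    return (w[:, None] * y).sum(axis=0)
```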

The Design of an Intelligent Assembly Robot System for Lens Modules of Phone Camera.

  • Song, Jun-Yeob;Lee, Chang-Woo;Kim, Yeong-Gyoo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.649-652 / 2005
  • Camera phones have taken a large share of the cellular phone market in recent years. Varied customer demand drives rapid model changes, and spatial resolution has moved from VGA to multi-megapixel. The 1.3-megapixel (MP) camera phone was first released into the Korean market in October 2003; the major phone companies then released 2MP camera phones with a zoom function, and the 2MP camera phone has settled into the Korean market. This creates keen price competition and demands automated assembly of phone camera modules that can keep up with fast model changes. Hard automation techniques that rely on dedicated manufacturing systems are too inflexible to meet this requirement. Therefore, in this study the system is designed around a flexibility concept in order to cope with phone camera module changes. The system uses a common platform with X-Y-Z or X-Z motion at μm-order accuracy and a special gripper chosen according to the type of component to be assembled; if the camera model changes, only the gripper needs to be updated to fit the new module. The controller acquires data sets containing information about the assembly parts on each tray, obtained in an upstream inspection step, and uses this information to exclude defective parts from assembly and thereby reduce defective goods. The assembly jig has a self-adjustment function that reduces tact time and further reduces defects. Finally, the intelligent assembly system for phone camera modules is designed to achieve the flexibility needed for model changes and high productivity with high reliability.

  • PDF

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology / v.17 no.8 / pp.1-9 / 2019
  • With the development of imaging and information communication technology in modern society, image acquisition and mass production technologies have developed rapidly. However, crimes exploiting these technologies have increased, and forensic studies are conducted to counter them. Identification techniques for image acquisition devices have been studied extensively, but the field has been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. Video frames are analyzed using a model trained on images. By taking the frame characteristics of video into account during training and analysis, we show the superiority of a model that uses P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment with 5 video camera models, we obtained up to 96.18% per-frame identification accuracy, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
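
The majority-based decision step mentioned in the abstract can be illustrated as below; this is only a sketch, assuming per-frame predictions (e.g. from a CNN applied to the video's P frames) are already available.

```python
from collections import Counter

def identify_camera_model(frame_predictions):
    """Majority vote over per-frame camera-model predictions."""
    votes = Counter(frame_predictions)
    model, count = votes.most_common(1)[0]
    confidence = count / len(frame_predictions)
    return model, confidence

# Example: predictions for 7 frames of one video.
print(identify_camera_model(["A", "A", "B", "A", "A", "C", "A"]))  # ('A', ~0.71)
```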

Assembling three one-camera images for three-camera intersection classification

  • Marcella Astrid;Seung-Ik Lee
    • ETRI Journal / v.45 no.5 / pp.862-873 / 2023
  • Determining whether an autonomous self-driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which drives us to use three cameras positioned in the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three-camera model, which would enable us to more easily compile a variety of training data to endow our model with improved generalizability. In this work, we provide three separate fusion methods (feature, early, and late) of combining the information from three cameras. Extensive pedestrian-view intersection classification experiments show that our feature fusion model provides an area under the curve and F1-score of 82.00 and 46.48, respectively, which considerably outperforms contemporary three- and one-camera models.
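
As a hedged illustration of the feature-fusion variant described above, the following PyTorch sketch shares one per-view encoder across the front, left, and right images and classifies the concatenated features. The class name, hidden size, and head structure are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FeatureFusionClassifier(nn.Module):
    """Feature-level fusion of front/left/right views: one shared per-view
    encoder, concatenated features, then a small classification head."""
    def __init__(self, encoder: nn.Module, feat_dim: int, n_classes: int = 2):
        super().__init__()
        self.encoder = encoder                      # shared across the 3 views
        self.head = nn.Sequential(
            nn.Linear(3 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, front, left, right):
        # Encode each view with the same backbone, then fuse by concatenation.
        f = torch.cat([self.encoder(front),
                       self.encoder(left),
                       self.encoder(right)], dim=1)
        return self.head(f)
```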

Monocular Camera based Real-Time Object Detection and Distance Estimation Using Deep Learning (딥러닝을 활용한 단안 카메라 기반 실시간 물체 검출 및 거리 추정)

  • Kim, Hyunwoo;Park, Sanghyun
    • The Journal of Korea Robotics Society / v.14 no.4 / pp.357-362 / 2019
  • This paper proposes a model, and a method for training it, that detects objects and estimates their distances in real time from a monocular camera by applying deep learning. The YOLOv2 model was used because its fast image processing speed suits autonomous vehicles and robots. We modified the loss function and retrained the model so that YOLOv2 can detect objects and estimate distances at the same time. The YOLOv2 loss function was extended with a term for learning the distance value z alongside the bounding box values x, y, w, and h and the classification losses, and the distance term was multiplied by a parameter to balance the learning. We trained the model with object locations and classes recognized from the camera and with distance data measured by lidar, so that it can estimate objects and their distances from a monocular camera even when the vehicle is going up or down a hill. To evaluate object detection and distance estimation performance, mAP (mean Average Precision) and adjusted R-squared were used, and performance was compared with previous research papers. In addition, the frame rate (FPS, frames per second) of the original YOLOv2 model was compared with that of our model to measure speed.
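
A minimal sketch of the extended localization term described above: squared errors on x, y, w, h plus a weighted squared error on the distance z, with `lambda_z` playing the role of the balancing parameter mentioned in the abstract. This is not the paper's exact YOLOv2 loss; the objectness and class terms are omitted.

```python
import torch

def box_distance_loss(pred, target, obj_mask, lambda_z=0.1):
    """Localization-style loss extended with a depth term, summed over the
    cells/anchors that contain an object (sketch only)."""
    # pred, target: (..., 5) tensors holding (x, y, w, h, z); obj_mask: (...,)
    box_err = ((pred[..., :4] - target[..., :4]) ** 2).sum(dim=-1)
    z_err = (pred[..., 4] - target[..., 4]) ** 2
    return ((box_err + lambda_z * z_err) * obj_mask).sum()
```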

Stereo Calibration Using Support Vector Machine

  • Kim, Se-Hoon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2003.10a / pp.250-255 / 2003
  • The position of a three-dimensional (3D) point can be measured using a calibrated stereo camera; to obtain more accurate measurements, more accurate camera calibration is required. Many calibration methods exist. Simple linear methods are usually not accurate because of nonlinear lens distortion. Nonlinear methods are more accurate than linear ones, but they increase computational cost and need a good initial guess. Multi-step methods require knowledge of some parameters of the camera used. In recent years, explicit model-based camera calibration has advanced with the development of more precise camera models that correct lens distortion, but it still has disadvantages, so implicit camera calibration methods have been derived. One popular implicit calibration method uses a neural network. In this paper, we propose an implicit stereo camera calibration method for 3D reconstruction using a support vector machine (SVM). The SVM learns the relationship between 3D coordinates and image coordinates and is robust in the presence of noise and lens distortion; simulation results are shown in Section 4. (A minimal sketch of this idea is given after this entry.)

  • PDF
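
The implicit calibration idea can be sketched with support vector regression: learn the mapping from stereo image coordinates directly to 3D world coordinates from calibration points, letting the regressor absorb lens distortion instead of an explicit camera model. The sketch below uses scikit-learn's SVR with placeholder random data; the kernel and hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Training data: stereo image coordinates (uL, vL, uR, vR) of calibration
# points and their known 3D world coordinates (X, Y, Z). Placeholder values.
uv = np.random.rand(200, 4) * 640
xyz = np.random.rand(200, 3) * 100

# Implicit calibration: regress the 3D position directly from image
# coordinates, one SVR per output dimension.
model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(uv, xyz)

point_3d = model.predict(uv[:1])   # reconstruct a 3D point from image coords
```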

An Improved Fast Camera Calibration Method for Mobile Terminals

  • Guan, Fang-li;Xu, Ai-jun;Jiang, Guang-yu
    • Journal of Information Processing Systems / v.15 no.5 / pp.1082-1095 / 2019
  • Camera calibration is an important part of machine vision and close-range photogrammetry. Since current calibration methods cannot efficiently obtain ideal internal and external camera parameters with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Building on the traditional camera calibration method, the new method introduces second-order radial and tangential distortion models to establish a camera model with nonlinear distortion terms. The nonlinear least-squares Levenberg-Marquardt (L-M) algorithm is used to optimize the parameter iteration, so the new method can quickly obtain high-precision internal and external camera parameters. The experimental results show that the new method improves the efficiency and precision of camera calibration: a simulation experiment on a PC indicates that the time consumed by parameter iteration was reduced from 0.220 s to 0.063 s (0.234 s on mobile terminals) and that the average reprojection error was reduced from 0.25 pixel to 0.15 pixel. Therefore, the new method is well suited to camera calibration on mobile terminals and can expand the application range of 3D reconstruction and close-range photogrammetry technology on mobile terminals.
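
As a simplified illustration of the nonlinear refinement step described above, the sketch below models second-order radial plus tangential distortion and refines intrinsic parameters with SciPy's Levenberg-Marquardt least-squares solver. The parameterization (intrinsics only, known normalized coordinates) is an assumption for brevity and omits the extrinsic parameters the paper also optimizes.

```python
import numpy as np
from scipy.optimize import least_squares

def distort(xn, yn, k1, k2, p1, p2):
    """Second-order radial plus tangential distortion of normalized coords."""
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn**2)
    yd = yn * radial + p1 * (r2 + 2 * yn**2) + 2 * p2 * xn * yn
    return xd, yd

def residuals(params, xn, yn, u_obs, v_obs):
    """Reprojection residuals for the L-M refinement."""
    fx, fy, cx, cy, k1, k2, p1, p2 = params
    xd, yd = distort(xn, yn, k1, k2, p1, p2)
    return np.concatenate([fx * xd + cx - u_obs, fy * yd + cy - v_obs])

# xn, yn: normalized coordinates of calibration-board points (from a linear
# initial estimate); u_obs, v_obs: detected pixel coordinates; x0: initial guess.
# result = least_squares(residuals, x0, args=(xn, yn, u_obs, v_obs), method="lm")
```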

Research on Thermal Refocusing System of High-resolution Space Camera

  • Li, Weiyan;Lv, Qunbo;Wang, Jianwei;Zhao, Na;Tan, Zheng;Pei, Linlin
    • Current Optics and Photonics / v.6 no.1 / pp.69-78 / 2022
  • A high-resolution camera is a precise optical system. Vibration during transportation and launch, together with changes in temperature and in the gravity field in orbit, leads to varying degrees of camera defocus. Thermal refocusing is one solution to in-orbit defocusing, but few thermal refocusing mathematical models exist for systematic analysis and research. Therefore, taking the development of the super-resolution camera of the high-resolution micro-nano satellite CX6-02 as an example, we established a thermal refocusing mathematical model based on thermal elasticity theory and the position of the secondary mirror, and the detailed design of the thermal refocusing system was carried out under the guidance of this model. Through optical-mechanical-thermal integration analysis and Zernike polynomial calculation, we found that the error in the obtained data was about 1% and that the deformation of the secondary mirror surface conformed to the optical index, indicating the accuracy and reliability of the thermal refocusing mathematical model. In the final ground test, thermal-vacuum verification data and in-orbit imaging results showed that the behavior of the thermal refocusing system is consistent with the model and that its performance is stable, providing theoretical and technical support for the future development of thermal refocusing space cameras.
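
The Zernike polynomial calculation mentioned in the abstract can be illustrated, under simplifying assumptions, as a least-squares fit of a few low-order Zernike terms to a sampled mirror-surface deformation. This is only a generic sketch, not the paper's optical-mechanical-thermal analysis pipeline.

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order (unnormalized) Zernike terms on the unit pupil."""
    return np.stack([
        np.ones_like(rho),            # piston
        rho * np.cos(theta),          # tilt x
        rho * np.sin(theta),          # tilt y
        2 * rho**2 - 1,               # defocus
        rho**2 * np.cos(2 * theta),   # astigmatism 0/90
        rho**2 * np.sin(2 * theta),   # astigmatism 45
    ], axis=1)

def fit_zernike(rho, theta, sag):
    """Least-squares Zernike coefficients of a sampled surface deformation."""
    coeffs, *_ = np.linalg.lstsq(zernike_basis(rho, theta), sag, rcond=None)
    return coeffs
```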