• Title/Summary/Keyword: camera model

Search Results: 1,492

Vision-based Camera Localization using DEM and Mountain Image (DEM과 산영상을 이용한 비전기반 카메라 위치인식)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.6 s.38
    • /
    • pp.177-186
    • /
    • 2005
  • In this paper, we propose a vision-based camera localization technique that uses 3D information created by mapping a DEM onto a mountain image. Image features typically used for localization have drawbacks: they vary with the camera viewpoint, and the amount of information grows over time. In this paper, we extract geometric invariant features that are independent of the camera viewpoint and estimate the camera extrinsic parameters through accurate corresponding-point matching using a proposed similarity evaluation function and the Graham search method. We also propose a method for creating 3D information using graphic theory and visual clues. The proposed method has three stages: extraction of invariant point-feature vectors, 3D information creation, and camera extrinsic parameter estimation. In the experiments, we compare and analyze the proposed method against existing methods to demonstrate its superiority.

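The final stage of the entry above, estimating camera extrinsic parameters from matched 2D-3D correspondences, can be illustrated with a standard perspective-n-point solve. This is a minimal sketch using OpenCV's solvePnP with placeholder points and assumed intrinsics, not the paper's own matching or DEM pipeline:

```python
import numpy as np
import cv2

# Placeholder 3D points in world coordinates (metres) and their matched 2D
# image projections (pixels); in the paper these would come from the
# DEM-to-image mapping and the similarity/Graham-search matching step.
object_points = np.array([[0, 0, 0], [100, 0, 20], [0, 150, 35],
                          [120, 160, 50], [60, 80, 10], [200, 40, 25]],
                         dtype=np.float64)
image_points = np.array([[320, 240], [410, 250], [300, 180],
                         [430, 170], [360, 210], [500, 230]], dtype=np.float64)

# Assumed pinhole intrinsics (focal length and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion

# Solve for the extrinsic parameters (rotation and translation).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)      # 3x3 rotation matrix
camera_position = -R.T @ tvec   # camera centre in world coordinates
print(camera_position.ravel())
```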

Real-Time Augmented Reality on 3-D Mobile Display using Stereo Camera Tracking (스테레오 카메라 추적을 이용한 모바일 3차원 디스플레이 상의 실시간 증강현실)

  • Park, Jungsik;Seo, Byung-Kuk;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.362-371
    • /
    • 2013
  • This paper presents a framework for real-time augmented reality on a 3-D mobile display using stereo camera tracking. In the framework, camera poses are jointly estimated with the geometric relationship between the stereoscopic images, based on model-based tracking. With the estimated camera poses, virtual content is correctly augmented onto the stereoscopic images through image rectification. For real-time performance, stereo camera tracking and image rectification are performed efficiently using multiple threads, and image rectification and color conversion are accelerated with GPU processing. The proposed framework is tested and demonstrated on a commercial smartphone equipped with a stereoscopic camera and a parallax-barrier 3-D display.
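The image rectification step described above can be sketched with OpenCV's standard stereo rectification routines. All calibration values below are assumptions for illustration, not the framework's actual parameters:

```python
import numpy as np
import cv2

# Assumed intrinsics for the left/right cameras of a stereoscopic phone
# camera and their relative pose (illustrative values only).
K1 = K2 = np.array([[700.0, 0.0, 320.0],
                    [0.0, 700.0, 240.0],
                    [0.0, 0.0, 1.0]])
d1 = d2 = np.zeros(5)
R = np.eye(3)                          # relative rotation between the cameras
T = np.array([[-0.03], [0.0], [0.0]])  # ~3 cm baseline along x (metres)
size = (640, 480)

# Compute rectification transforms so that epipolar lines become horizontal;
# the rectified pair is what the virtual content would be overlaid on.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

# left_rect  = cv2.remap(left_image,  map1x, map1y, cv2.INTER_LINEAR)
# right_rect = cv2.remap(right_image, map2x, map2y, cv2.INTER_LINEAR)
```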

Development of a Remote Object's 3D Position Measuring System (원격지 물체의 삼차원 위치 측정시스템의 개발)

  • Park, Kang
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.8
    • /
    • pp.60-70
    • /
    • 2000
  • In this paper, a 3D position measuring device that finds the 3D position of an arbitrarily placed object using a camera system is introduced. The camera system consists of three stepping motors, a CCD camera, and a laser. The viewing direction of the camera is controlled by two stepping motors (pan and tilt motors), and the direction of the laser is controlled by a third stepping motor (laser motor). If an object in a remote place is selected from a live video image, the x, y, z coordinates of the object with respect to the reference coordinate system can be obtained by calculating the distance from the camera to the object using a structured-light scheme and by obtaining the orientation of the camera, which is controlled by the two stepping motors. The angles of the stepping motors are controlled by an SGI O2 workstation through a parallel port. The mathematical model of the camera and the distance measuring system are calibrated to calculate an accurate position of the object. This 3D position measuring device can be used to acquire the information needed to monitor a remote place.

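Given the pan and tilt angles and the structured-light range measurement described above, recovering the object's x, y, z coordinates is a spherical-to-Cartesian conversion. A minimal sketch, assuming pan about the vertical axis and tilt above the horizontal plane (the paper's exact frame conventions may differ):

```python
import math

def object_position(pan_deg, tilt_deg, distance_m):
    """Convert the camera's pan/tilt angles and the measured range into
    x, y, z coordinates in the reference frame. Assumed convention: camera
    at the origin, pan about the vertical z-axis, tilt above the horizon."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = distance_m * math.cos(tilt) * math.cos(pan)
    y = distance_m * math.cos(tilt) * math.sin(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z

print(object_position(pan_deg=30.0, tilt_deg=10.0, distance_m=2.5))
```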

Determination of Physical Camera Parameters from DLT Parameters

  • Jeong Soo;Lee Changno;Oh Jaehong
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.233-236
    • /
    • 2004
  • In this study, we analyzed the accuracy of the conversion from DLT parameters to physical camera parameters and optimized the use of the DLT model for non-metric cameras in photogrammetric tasks. Using simulated data, we computed two sets of physical camera parameters, one from DLT parameters and one from bundle adjustment, for various cases. Comparing the two results based on the RMSE values of check points, we optimized the arrangement of GCPs for the DLT.

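Converting DLT parameters to physical camera parameters is commonly done by decomposing the 3x4 projection matrix with an RQ factorization. A sketch under the usual convention that the twelfth DLT element is normalized to one (not necessarily the authors' exact formulation):

```python
import numpy as np
from scipy.linalg import rq

def dlt_to_physical(L):
    """Recover physical camera parameters (intrinsics K, rotation R,
    camera centre C) from the 11 DLT parameters. Standard decomposition
    sketch, assuming the 12th element is normalised to 1."""
    # Build the 3x4 projection matrix from the 11 DLT parameters.
    P = np.append(np.asarray(L, dtype=float), 1.0).reshape(3, 4)
    M = P[:, :3]

    # RQ decomposition: M = K R with K upper-triangular, R orthonormal.
    K, R = rq(M)

    # Fix signs so the focal terms on the diagonal of K are positive.
    S = np.diag(np.sign(np.diag(K)))
    K, R = K @ S, S @ R

    K /= K[2, 2]                      # normalise the intrinsic matrix
    C = -np.linalg.inv(M) @ P[:, 3]   # camera centre in world coordinates
    return K, R, C
```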

Estimating Population Density of Leopard Cat (Prionailurus bengalensis) from Camera Traps in Maekdo Riparian Park, South Korea

  • Park, Heebok;Lim, Anya;Choi, Tae-Young;Lim, Sang-Jin;Park, Yung-Chul
    • Journal of Forest and Environmental Science
    • /
    • v.33 no.3
    • /
    • pp.239-242
    • /
    • 2017
  • Although camera traps have been widely used to assess the abundance of wildlife in recent decades, the effort has been restricted to the small subset of wildlife that can be marked and recaptured. The Random Encounter Model offers an alternative approach that estimates absolute abundance from the camera-trap detection rate for any animal, without the need for individual recognition. Our study aims to examine the feasibility and validity of the Random Encounter Model for estimating the density of endangered leopard cats (Prionailurus bengalensis) in Maekdo riparian park, Busan, South Korea. According to the model, the estimated leopard cat density was 1.76 km⁻² (95% CI, 0.74-3.49), which corresponds to 2.46 leopard cats in the 1.4 km² study area. This estimate was not statistically different from the previous leopard cat population count (2.33 ± 0.58) in the same area. Our research thus demonstrates the applicability and usefulness of the Random Encounter Model for density estimation of unmarked wildlife, which helps to manage and protect target species with a better understanding of their status.
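The density estimate above follows the Random Encounter Model of Rowcliffe et al. (2008), D = (y/t) · π / (v · r · (2 + θ)), where y/t is the trap rate, v the animal's day range, r the detection radius, and θ the detection angle. A minimal sketch with illustrative inputs (not the study's field data):

```python
import math

def rem_density(photos, camera_days, speed_km_per_day, radius_km, angle_rad):
    """Random Encounter Model: estimate density from the camera-trap
    detection rate without individual recognition."""
    trap_rate = photos / camera_days  # y / t
    return (trap_rate * math.pi) / (speed_km_per_day * radius_km * (2 + angle_rad))

# Example: 40 independent detections over 300 camera-trap days, a day range
# of 1.5 km/day, a 10 m detection radius and a 40-degree detection zone.
d = rem_density(photos=40, camera_days=300,
                speed_km_per_day=1.5, radius_km=0.010,
                angle_rad=math.radians(40))
print(f"{d:.2f} individuals per km^2")
```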

Three Dimension Scanner System Using Parallel Camera Model (패러렐 카메라모델을 이용한 3차원 스캐너 시스템)

  • Lee, Hee-Man
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.2
    • /
    • pp.27-32
    • /
    • 2001
  • In this paper, a three-dimensional scanner system employing the parallel camera model is discussed. The camera calibration process and the three-dimensional scanning algorithm are developed. A laser stripe line is utilized to assist stereo matching. The object being scanned rotates on a plate driven by a stepping motor. The world coordinates, i.e., the distances measured from the camera to the object, are converted into model coordinates. The facets created from the point cloud in model coordinates are used to render the scanned model with a graphics library such as OpenGL. Unmatched points that have no valid matching points are interpolated from the valid matching points of neighboring epipolar lines.

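Under the parallel (rectified) camera model used above, depth follows directly from the disparity of matched laser-stripe points, Z = f·B/d. A minimal sketch with illustrative calibration values:

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Parallel camera model: depth Z = f * B / d, where d is the horizontal
    disparity of a matched point between the left and right images.
    Focal length and baseline below are illustrative, not a real calibration."""
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return focal_px * baseline_m / disparity

# Matched laser-stripe points on the same epipolar line (pixel columns).
z = depth_from_disparity(x_left=[412.0, 415.5, 420.0],
                         x_right=[398.0, 400.0, 402.5],
                         focal_px=820.0, baseline_m=0.06)
print(z)  # depths in metres for each matched point
```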

A Study on Three-Dimensional Model Reconstruction Based on Laser-Vision Technology (레이저 비전 기술을 이용한 물체의 3D 모델 재구성 방법에 관한 연구)

  • Nguyen, Huu Cuong;Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.7
    • /
    • pp.633-641
    • /
    • 2015
  • In this study, we proposed a three-dimensional (3D) scanning system based on a laser-vision technique and a rotary mechanism for automatic 3D model reconstruction. The proposed scanning system consists of a laser projector, a camera, and a turntable. A new and simple method was proposed for laser-camera calibration. 3D point cloud data of the scanned object's surface was collected by integrating the laser profiles extracted from the laser stripe images, each associated with the corresponding rotary angle of the rotary mechanism. The problem of obscured laser profiles was solved by adding an additional camera at another viewpoint. From the collected 3D point cloud data, the 3D model of the scanned object was reconstructed based on a facet representation. The reconstructed 3D models demonstrate the effectiveness and applicability of the proposed 3D scanning system to 3D model-based applications.
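Merging the laser profiles collected at successive turntable angles into a single point cloud amounts to rotating each profile back into a common object frame. A sketch assuming rotation about a vertical z-axis (the system's actual axis convention is not given in the abstract):

```python
import numpy as np

def assemble_point_cloud(profiles):
    """Merge laser-triangulated profiles taken at successive turntable angles
    into one point cloud. 'profiles' maps a rotation angle in degrees to an
    (N, 3) array of points measured at that angle; rotating by -angle about
    the assumed vertical z-axis undoes each turntable step."""
    cloud = []
    for angle_deg, points in profiles.items():
        a = np.radians(angle_deg)
        Rz = np.array([[np.cos(-a), -np.sin(-a), 0.0],
                       [np.sin(-a),  np.cos(-a), 0.0],
                       [0.0,         0.0,        1.0]])
        cloud.append(points @ Rz.T)
    return np.vstack(cloud)

# Two illustrative one-point profiles at 0 and 15 degrees (metres).
profiles = {0.0: np.array([[0.10, 0.0, 0.02]]),
            15.0: np.array([[0.098, 0.0, 0.05]])}
print(assemble_point_cloud(profiles))
```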

Multiple Camera-based Person Correspondence using Color Distribution and Context Information of Human Body (색상 분포 및 인체의 상황정보를 활용한 다중카메라 기반의 사람 대응)

  • Chae, Hyun-Uk;Seo, Dong-Wook;Kang, Suk-Ju;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.9
    • /
    • pp.939-945
    • /
    • 2009
  • In this paper, we propose a method that establishes correspondence between people observed in a structured space by multiple cameras. Such correspondence plays an important role in multiple-camera systems. The proposed method consists of three main steps. First, moving objects are detected by background subtraction using a multiple background model; temporal differencing is used simultaneously to reduce noise caused by temporal change. When more than two people are detected, the detected regions are separated into individual labels so that each represents one person. Second, each detected region is segmented into features for correspondence according to a criterion based on the color distribution and context information of the human body. The segmented region is represented as a set of blobs, each described by a Gaussian probability distribution, i.e., a person model is generated from the blobs as a Gaussian Mixture Model (GMM). Finally, the GMM of each person from one camera is matched with the models of people from the other cameras by maximum likelihood, which identifies the same person across different views. The experiments were performed on three scenarios, and the performance was verified with qualitative and quantitative results.
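The color-model matching step can be sketched with an off-the-shelf Gaussian mixture implementation: fit a GMM to each segmented person's pixels and match by maximum average log-likelihood. The data below are synthetic stand-ins, and scikit-learn's GaussianMixture is used in place of whatever estimator the authors implemented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_person_model(pixels_rgb, n_components=3):
    """Fit a GMM to the colour pixels of a segmented person region
    (e.g. one component per body blob: head, torso, legs)."""
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(pixels_rgb)

def match_person(query_pixels, candidate_models):
    """Return the index of the candidate whose GMM gives the query region
    the highest average log-likelihood (maximum-likelihood correspondence)."""
    scores = [m.score(query_pixels) for m in candidate_models]
    return int(np.argmax(scores))

# Illustrative data: random pixel blocks standing in for segmented people.
rng = np.random.default_rng(0)
person_a = rng.normal([200, 30, 30], 10, size=(500, 3))  # reddish clothing
person_b = rng.normal([30, 30, 200], 10, size=(500, 3))  # bluish clothing
models = [fit_person_model(person_a), fit_person_model(person_b)]

query = rng.normal([198, 32, 28], 12, size=(400, 3))     # seen from camera 2
print(match_person(query, models))  # expected: 0
```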

Locally Initiating Line-Based Object Association in Large Scale Multiple Cameras Environment

  • Cho, Shung-Han;Nam, Yun-Young;Hong, Sang-Jin;Cho, We-Duke
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.3
    • /
    • pp.358-379
    • /
    • 2010
  • Multiple object association is an important capability in visual surveillance systems with multiple cameras. In this paper, we introduce locally initiating line-based object association with the parallel projection camera model, which is applicable to situations without a common (ground) plane. The parallel projection camera model supports camera movement (i.e., panning, tilting, and zooming) by using a simple table-based compensation for non-ideal camera parameters. We propose a threshold-distance-based homographic line generation algorithm that takes into account uncertain factors such as transformation error, height uncertainty of objects, and synchronization issues between cameras. Thus, the proposed algorithm associates multiple objects on demand in surveillance systems where the camera configuration changes dynamically. We verify the proposed method with actual image frames. Finally, we discuss a strategy to improve association performance by using temporal and spatial redundancy.
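A much-simplified, hypothetical version of threshold-distance association: transfer an object's reference point from one view into the other and accept the candidate line whose perpendicular distance falls under the threshold. A plain homography stands in for the paper's parallel-projection transfer, and all values are illustrative:

```python
import numpy as np

def associate_by_line_distance(H, points_cam1, lines_cam2, threshold_px):
    """Transfer each camera-1 reference point with transformation H, then
    associate it with the first camera-2 object line (a, b, c), ax+by+c=0,
    whose perpendicular distance is below the threshold."""
    matches = {}
    for i, p in enumerate(points_cam1):
        q = H @ np.append(p, 1.0)
        q = q[:2] / q[2]                     # transferred point in camera 2
        for j, (a, b, c) in enumerate(lines_cam2):
            dist = abs(a * q[0] + b * q[1] + c) / np.hypot(a, b)
            if dist < threshold_px:
                matches[i] = j
                break
    return matches

H = np.eye(3)                # identity stands in for a real view-transfer matrix
points = [np.array([100.0, 200.0])]
lines = [(1.0, 0.0, -103.0)]  # vertical object line x = 103 in camera 2
print(associate_by_line_distance(H, points, lines, threshold_px=5.0))  # {0: 0}
```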

A Study on the Real-Time Vision Control Method for Manipulator's position Control in the Uncertain Circumstance (불확실한 환경에서 매니퓰레이터 위치제어를 위한 실시간 비젼제어기법에 관한 연구)

  • Jang, W.-S.;Kim, K.-S.;Shin, K.-S.;Joo, C.;Yoon, H.-K.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.12
    • /
    • pp.87-98
    • /
    • 1999
  • This study focuses on the development of a real-time estimation model and vision control method, together with experimental tests. The proposed method permits a kind of adaptability not otherwise available, in that the relationship between the camera-space location of manipulable visual cues and the vector of manipulator joint coordinates is estimated in real time. This is done based on an estimation model that generalizes known manipulator kinematics to accommodate unknown relative camera position and orientation as well as uncertainty of the manipulator. This vision control method is robust and reliable, and it overcomes the difficulties of conventional research, such as precise calibration of the vision sensor, exact kinematic modeling of the manipulator, and correct knowledge of the position and orientation of the CCD camera with respect to the manipulator base. Finally, evidence of the ability of the real-time vision control method to control a manipulator's position is provided by performing thin-rod placement in space with a two-cue test model, completed without prior knowledge of camera or manipulator positions. This feature opens the door to a range of manipulation applications, including a mobile manipulator with stationary cameras tracking and providing information for control of the manipulator.

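The core idea above, estimating the mapping between camera-space cue locations and manipulator coordinates in real time, can be illustrated with an online recursive least-squares fit. The affine model below is a simplification of the paper's estimation model, and all numbers are illustrative:

```python
import numpy as np

class RecursiveViewModel:
    """Minimal sketch of estimating, online, the mapping between a visual
    cue's nominal 3-D position (from manipulator kinematics) and its
    camera-space (pixel) location. An affine model fitted by recursive
    least squares stands in for the paper's estimation model."""

    def __init__(self, forgetting=0.98):
        self.theta = np.zeros((4, 2))  # affine parameters per image axis
        self.P = np.eye(4) * 1e3       # parameter covariance
        self.lam = forgetting          # discounts old observations

    def update(self, xyz, uv):
        phi = np.append(np.asarray(xyz, float), 1.0)  # regressor [x, y, z, 1]
        y = np.asarray(uv, float)
        K = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta += np.outer(K, y - phi @ self.theta)
        self.P = (self.P - np.outer(K, phi @ self.P)) / self.lam

    def predict(self, xyz):
        return np.append(np.asarray(xyz, float), 1.0) @ self.theta

model = RecursiveViewModel()
for xyz, uv in [([0.3, 0.1, 0.5], [310, 250]),
                ([0.4, 0.1, 0.5], [350, 252]),
                ([0.3, 0.2, 0.5], [312, 210]),
                ([0.3, 0.1, 0.6], [308, 246])]:
    model.update(xyz, uv)
print(model.predict([0.35, 0.15, 0.55]))  # predicted camera-space location
```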