• Title/Summary/Keyword: Camera Modeling

Search Results: 334

Projective Reconstruction Method for 3D modeling from Un-calibrated Image Sequence (비교정 영상 시퀀스로부터 3차원 모델링을 위한 프로젝티브 재구성 방법)

  • Hong Hyun-Ki;Jung Yoon-Yong;Hwang Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.2 s.302, pp.113-120, 2005
  • 3D reconstruction of a scene structure from un-calibrated image sequences has long been one of the central problems in computer vision. For 3D reconstruction in Euclidean space, projective reconstruction, which is classified into merging methods and factorization methods, is needed as a preceding step. By calculating all camera projection matrices and structures at the same time, the factorization method suffers less from error accumulation than the merging method. However, factorization has difficulty analyzing long sequences precisely because it is based on the assumption that all correspondences must remain in all views from the first frame to the last. This paper presents a new projective reconstruction method for recovering 3D structure over long sequences. We break a full sequence into sub-sequences based on a quantitative measure that considers the number of matching points between frames, the homography error, and the distribution of matching points over the frame. All of the projective reconstructions of the sub-sequences are then registered into the same coordinate frame for a complete description of the scene. Experimental results showed that the proposed method can recover more precise 3D structure than the merging method.
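
A minimal sketch of the kind of sub-sequence splitting measure described above, assuming OpenCV feature matching; the way the match count, homography error, and point spread are combined here is a hypothetical weighting, not the authors' exact formulation.

```python
import cv2
import numpy as np

def frame_link_score(img_a, img_b, min_matches=50):
    """Score how well two grayscale frames can stay in the same sub-sequence."""
    orb = cv2.ORB_create(2000)                      # any feature detector works here
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return 0.0                                  # too few correspondences

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    if H is None:
        return 0.0

    # Homography transfer error of the matches (lower is better).
    proj = cv2.perspectiveTransform(pts_a.reshape(-1, 1, 2), H).reshape(-1, 2)
    h_err = float(np.mean(np.linalg.norm(proj - pts_b, axis=1)))

    # Spread of the matched points over the frame (higher is better).
    spread = float(np.std(pts_a, axis=0).mean()) / max(img_a.shape[:2])

    # Hypothetical combination: a new sub-sequence is started when this drops.
    return len(matches) * spread / (1.0 + h_err)
```

The sequence would be cut whenever this score falls below a chosen threshold, and the partial projective reconstructions would then be registered into one coordinate frame.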

Design and Implementation of Distributed QoS Management Architecture for Real-time Negotiation and Adaptation Control on CORBA Environments (CORBA 환경에서 실시간 협약 및 작응 제어를 위한 분사 QoS 관리 구조의 설계 및 구현)

  • Lee, Won-Jung;Shin, Chang-Sun;Jeong, Chang-Won;Joo, Su-Chong
    • The Journal of Korean Institute of Communications and Information Sciences, v.27 no.1C, pp.21-35, 2002
  • Nowadays, with increasing expectations for multimedia streaming services on the Internet, many distributed applications are being required and developed. However, the models of existing systems cannot support extensibility and reusability when QoS-related functions are developed as integrated modules suited only to centrally controlled, special-purpose application services. To cope with these problems, this paper suggests a distributed QoS management system on CORBA, an object-oriented middleware standard. The suggested system provides not only the existing functions of efficient resource control, various service QoS levels, and QoS control, but also real-time QoS negotiation and dynamic adaptation. The system consists of a QoS Control Management Module (QoS CMM) on the client side and a QoS Management Module (QoS MM) on the server side. These distributed modules interface with each other via CORBA across different systems for distributed QoS management while serving distributed streaming applications. In the design phase, we used UML (Unified Modeling Language) to design each component of the modules, their method calls, and the detailed functions for controlling the QoS of stream services. For the implementation, we used OrbixWeb 3.1c, which follows the CORBA specification, on Solaris 2.5/2.7, the Java language, Java Media Framework API 2.0 beta2, Mini-SQL 1.0.16, and multimedia equipment such as the SunVideoPlus/SunVideo capture boards and the Sun Camera. Finally, while the distributed QoS management system executed a given streaming service, we dynamically displayed on the client and server GUIs the numerical data controlled by the real-time negotiation and adaptation procedures based on QoS map information.
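
The paper's modules are implemented in Java on CORBA (OrbixWeb); the sketch below only illustrates, in Python and with hypothetical class names, rates, and thresholds, the negotiate-then-adapt control flow exchanged between a client-side QoS CMM and a server-side QoS MM.

```python
from dataclasses import dataclass

@dataclass
class QoSSpec:
    frame_rate: int      # frames per second requested by the client
    resolution: str      # e.g. "640x480"

class QoSManagementModule:              # server side (QoS MM)
    def __init__(self, available_bandwidth_kbps):
        self.available = available_bandwidth_kbps

    def negotiate(self, requested: QoSSpec) -> QoSSpec:
        # Accept the request if resources allow, otherwise counter-offer.
        needed = requested.frame_rate * 40          # rough kbps estimate (assumed)
        if needed <= self.available:
            return requested
        return QoSSpec(frame_rate=self.available // 40,
                       resolution=requested.resolution)

class QoSControlManagementModule:       # client side (QoS CMM)
    def __init__(self, server: QoSManagementModule):
        self.server = server

    def start_stream(self, wanted: QoSSpec) -> QoSSpec:
        return self.server.negotiate(wanted)        # real-time negotiation

    def adapt(self, agreed: QoSSpec, measured_fps: float) -> QoSSpec:
        # Dynamic adaptation: renegotiate when delivery degrades.
        if measured_fps < 0.8 * agreed.frame_rate:
            return self.server.negotiate(
                QoSSpec(frame_rate=int(measured_fps),
                        resolution=agreed.resolution))
        return agreed

# Example: negotiate 30 fps, then adapt after the measured rate drops.
mm = QoSManagementModule(available_bandwidth_kbps=1000)
cmm = QoSControlManagementModule(mm)
agreed = cmm.start_stream(QoSSpec(frame_rate=30, resolution="640x480"))
agreed = cmm.adapt(agreed, measured_fps=18.0)
```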

Implementation of virtual reality for interactive disaster evacuation training using close-range image information (근거리 영상정보를 활용한 실감형 재난재해 대피 훈련 가상 현실 구현)

  • KIM, Du-Young;HUH, Jung-Rim;LEE, Jin-Duk;BHANG, Kon-Joon
    • Journal of the Korean Association of Geographic Information Studies, v.22 no.1, pp.140-153, 2019
  • Close-range image information from drones and ground-based cameras has frequently been used in the field of disaster mitigation for 3D modeling and mapping. In addition, the utilization of virtual reality (VR) is increasing as realistic 3D models are combined with VR technology to simulate disaster circumstances at large scale. In this paper, we created a VR training program by extracting realistic 3D models from close-range images taken with an unmanned aircraft and a hand-held digital camera, and we examined several issues that arose during the implementation as well as the effectiveness of applying VR to disaster-mitigation training. First, we built a disaster scenario and created 3D models after processing the close-range imagery. The 3D models were imported into Unity, a tool for creating augmented/virtual reality content, as the background for Android-based mobile phones, and the VR environment was scripted in C#. The generated virtual reality includes a scenario in which the trainee moves to a safe place along the evacuation route in the event of a disaster, and we found that effective training can be carried out in this virtual environment. In addition, training in virtual reality has advantages over actual evacuation training in terms of cost, space, and time efficiency.

3D Modeling from 2D Stereo Image using 2-Step Hybrid Method (2단계 하이브리드 방법을 이용한 2D 스테레오 영상의 3D 모델링)

  • No, Yun-Hyang;Go, Byeong-Cheol;Byeon, Hye-Ran;Yu, Ji-Sang
    • Journal of KIISE: Software and Applications, v.28 no.7, pp.501-510, 2001
  • Generally, accurate disparity estimation is essential for 3D modeling from stereo images. Because existing methods calculate disparities over the whole image, they require too much computational time and suffer from mismatching. In this article, exploiting the fact that disparity vectors in stereo images are not distributed uniformly over the whole image but exist only around the background and the object, we apply a wavelet transform to the stereo images and, in the first step, estimate coarse disparity fields from the reduced lowpass band using an area-based method. From these coarse disparity vectors we generate a disparity histogram and use it to separate the object from the background. Afterwards, we restore only the object area to the original resolution and estimate dense, accurate disparity with the second-step pixel-based method, which uses the second gradient rather than pixel brightness. We also extract feature points from the separated object area and estimate depth information by applying the disparity vectors and camera parameters. Finally, we generate a 3D model using the feature points and their z coordinates. With the proposed method, we can considerably reduce the computation time and estimate precise disparity through the additional pixel-based step using a LoG filter. Furthermore, the proposed foreground/background separation solves the mismatching problem of existing Delaunay triangulation and generates an accurate 3D model.
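
A minimal sketch of the two-step idea under assumed details (Haar wavelet, OpenCV block matching, a simple histogram split); it is not the paper's exact implementation.

```python
import cv2
import numpy as np
import pywt

def coarse_disparity(left, right):
    # Step 1: area-based matching on the reduced lowpass band of a wavelet transform.
    ll_left, _ = pywt.dwt2(left.astype(np.float32), 'haar')
    ll_right, _ = pywt.dwt2(right.astype(np.float32), 'haar')
    ll_left = cv2.normalize(ll_left, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    ll_right = cv2.normalize(ll_right, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    bm = cv2.StereoBM_create(numDisparities=32, blockSize=15)
    return bm.compute(ll_left, ll_right).astype(np.float32) / 16.0

def separate_object(disp):
    # Object/background separation from the disparity histogram: the closer
    # object collects in the higher-disparity bins (hypothetical split rule).
    valid = disp[disp > 0]
    hist, edges = np.histogram(valid, bins=32)
    threshold = edges[np.argmax(hist) + 1]
    return disp > threshold                       # boolean object mask

# Step 2 (not shown): restore only the masked object area to full resolution
# and refine its disparity pixel by pixel on LoG (second-gradient) images,
# e.g. cv2.Laplacian(cv2.GaussianBlur(img, (5, 5), 0), cv2.CV_32F).
```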

A Feasibility Study for Mapping Using The KOMPSAT-2 Stereo Imagery (아리랑위성 2호 입체영상을 이용한 지도제작 가능성 연구)

  • Lee, Kwang-Jae;Kim, Youn-Soo;Seo, Hyun-Duck
    • Journal of the Korean Association of Geographic Information Studies, v.15 no.1, pp.197-210, 2012
  • The KOrea Multi-Purpose SATellite (KOMPSAT)-2 can provide cross-track stereo imagery, acquired from two different orbits, for generating various kinds of spatial information. However, in order to fully realize the potential of KOMPSAT-2 stereo imagery for mapping, various tests are necessary. The purpose of this study is to evaluate the possibility of mapping with KOMPSAT-2 stereo imagery. To this end, digital plotting was conducted on the stereoscopic images, and a Digital Elevation Model (DEM) and an ortho-image were generated from the plotting results. The accuracy of the digital plotting, the DEM, and the ortho-image was evaluated by comparison with existing data. We found that the horizontal and vertical errors of the modeling results based on the Rational Polynomial Coefficients (RPCs) were less than 1.5 meters compared with Global Positioning System (GPS) survey results. The maximum vertical difference between the plotted results in this study and the existing 1/5,000-scale digital map was more than 5 meters, depending on the topographic characteristics. Although there was some irregular parallax in the images, it was possible to interpret and plot at least seventy percent of the layers required for a 1/5,000-scale digital map. The accuracy of the DEM generated from the digital plotting was also compared with an existing LiDAR DEM. We found that the ortho-images generated using the extracted DEM sufficiently satisfied the geometric accuracy requirement for a 1/5,000-scale ortho-image map.
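
The accuracy figures above come from comparing plotted coordinates with GPS-surveyed check points; the snippet below shows that comparison as a plain RMSE computation, with made-up illustrative coordinates rather than the study's data.

```python
import numpy as np

def rmse(observed, reference):
    """Root-mean-square of the per-point coordinate errors (meters)."""
    diff = np.asarray(observed, float) - np.asarray(reference, float)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

# Hypothetical (E, N) coordinates in meters for three check points.
plotted = [(3200.4, 5100.9), (3310.2, 5220.1), (3450.7, 5330.5)]
gps     = [(3201.1, 5100.2), (3309.5, 5221.0), (3451.4, 5329.8)]
print(f"horizontal RMSE: {rmse(plotted, gps):.2f} m")
```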

Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems, v.26 no.6, pp.471-476, 2016
  • In this paper, we describe a depth-based recognition system for continuous human actions that uses motion history images (MHI) and histograms of oriented gradients (HOG) together with a spotter model, and we propose the spotter model, which performs action spotting, to improve recognition performance. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In pre-processing, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the human action and spotter modeling step generates observation sequences from the extracted features. A human action model for each defined action and the proposed spotter model are created from these sequences using hidden Markov models. Continuous human action recognition uses the spotter model to separate meaningful actions from meaningless ones in the continuous action sequence, and then recognizes each meaningful action sequence by comparing the probability values of the action models. Experimental results demonstrate that the proposed model efficiently improves recognition performance in the continuous action recognition system.
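
A minimal sketch of the MHI/HOG feature extraction and HMM scoring steps, with assumed parameters and using scikit-image and hmmlearn rather than the authors' implementation; the depth-derived motion mask is taken as given.

```python
import numpy as np
from skimage.feature import hog
from hmmlearn import hmm

def update_mhi(mhi, motion_mask, decay=1.0 / 30):
    """Simple motion history image: new motion = 1, old motion fades out."""
    mhi = np.maximum(mhi - decay, 0.0)
    mhi[motion_mask] = 1.0
    return mhi

def mhi_hog_feature(mhi):
    """HOG descriptor of the motion history image (one observation vector)."""
    return hog(mhi, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

def train_model(sequences, n_states=5):
    """Fit one Gaussian HMM (per action, or the spotter model) from lists of
    per-frame feature vectors."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type='diag',
                            n_iter=20)
    model.fit(X, lengths)
    return model

def classify(segment, action_models, spotter_model):
    """Keep the segment only if some action model beats the spotter model."""
    scores = {name: m.score(segment) for name, m in action_models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > spotter_model.score(segment) else None
```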

Real-Time Object Tracking Algorithm based on Minimal Contour in Surveillance Networks (서베일런스 네트워크에서 최소 윤곽을 기초로 하는 실시간 객체 추적 알고리즘)

  • Kang, Sung-Kwan;Park, Yang-Jae
    • Journal of Digital Convergence, v.12 no.8, pp.337-343, 2014
  • This paper proposes a minimal-contour tracking algorithm that reduces the data transmitted for tracking mobile objects in surveillance networks, in terms of both detection and communication load. The algorithm performs detection for object tracking and, when transmitting image data from the camera to the server, minimizes the communication load by reducing the quantity of transmitted data. It uses a minimal tracking area based on the kinematics of the object: modeling the object's kinematics allows pruning the part of the tracking area that cannot be mechanically reached by the mobile object within the scheduled time. In applications that detect an object in real time and transmit large amounts of image data, this makes it possible to reduce the transmission load.
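
A small sketch of the kinematics-based pruning idea with assumed parameters (maximum speed, frame interval): the search region for the next frame is clipped to the area the object can physically reach.

```python
import numpy as np

def reachable_roi(last_xy, v_max_px_per_s, dt_s, frame_shape, margin=4):
    """Bounding box the object can reach since the last detection."""
    reach = v_max_px_per_s * dt_s + margin          # maximum travel in pixels
    x, y = last_xy
    h, w = frame_shape[:2]
    x0, x1 = int(max(0, x - reach)), int(min(w, x + reach))
    y0, y1 = int(max(0, y - reach)), int(min(h, y + reach))
    return x0, y0, x1, y1                           # search/transmit only this crop

# Example: only the reachable crop is searched and sent to the server.
x0, y0, x1, y1 = reachable_roi((320, 240), v_max_px_per_s=80, dt_s=1 / 15,
                               frame_shape=(480, 640))
```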

Implementation of Layered Clouds considering Frame Rate and Reality in Real-time Flight Simulation (비행시뮬레이션에서 프레임율과 현실감을 고려한 계층형 구름 구현 방안)

  • Kang, Seok-Yoon;Kim, Ki-Il
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.1, pp.72-77, 2014
  • There are two main techniques for implementing cloud effects in a flight simulator: cloud modeling using a particle system and texture mapping. The former may cause a low frame rate, while the latter produces an unrealistic cloud effect. To solve this problem, we propose applying a fog effect to the camera to display a more realistic cloud effect at a high frame rate. The proposed method was tested in a massive terrain database environment using software implemented with OpenSceneGraph. As a result, compared to the texture mapping method, the frame rate differs by only 1 or 2 Hz, while the cloud effect is improved to a level of realism comparable to the particle system.
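
A fog effect attached to the camera typically evaluates a standard exponential fog factor per fragment; the sketch below shows that generic formula (an assumption here, not the paper's exact OpenSceneGraph settings).

```python
import numpy as np

def fog_blend(distance_m, density=0.0004):
    """Exponential-squared fog factor: 1 = fully clear, 0 = fully fogged."""
    return np.exp(-(density * distance_m) ** 2)

def apply_fog(scene_rgb, cloud_rgb, distance_m, density=0.0004):
    """Blend the scene colour toward the cloud colour with distance."""
    f = fog_blend(distance_m, density)
    return f * np.asarray(scene_rgb) + (1 - f) * np.asarray(cloud_rgb)
```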

An effective filtering for noise smoothing using the area information of 3D mesh (3차원 메쉬의 면적 정보를 이용한 효과적인 잡음 제거)

  • Hyeon, Dae-Hwan;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.44 no.2 s.314, pp.55-62, 2007
  • This paper proposes a method to obtain refined 3D data by removing the noise introduced by errors that occur in 3D reconstruction through camera auto-calibration. When 3D data are reconstructed with previous noise-removal methods, meshes with abnormally large areas caused by noise remain a problem. Because mesh area is informative, the proposed algorithm applies a preprocessing step that removes such unnecessary triangle meshes from the acquired 3D data. The method analyzes the characteristics of the noise using the area information of the 3D meshes, separates peak noise from Gaussian noise by these characteristics, and removes the noise effectively. We give a quantitative evaluation of the proposed preprocessing filter, compare it with mesh smoothing procedures, and demonstrate that it outperforms them in terms of accuracy and resistance to over-smoothing.
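
A minimal sketch of the area-based preprocessing idea with an assumed outlier rule: triangles whose area is far larger than typical are treated as reconstruction noise and removed before any smoothing.

```python
import numpy as np

def triangle_areas(vertices, faces):
    """vertices: (N, 3) float array, faces: (M, 3) int array of vertex indices."""
    a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
    b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def remove_large_area_faces(vertices, faces, k=5.0):
    """Drop faces whose area exceeds median + k * MAD (hypothetical rule)."""
    areas = triangle_areas(vertices, faces)
    med = np.median(areas)
    mad = np.median(np.abs(areas - med)) + 1e-12
    keep = areas <= med + k * mad
    return faces[keep]
```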

Boundary Depth Estimation Using Hough Transform and Focus Measure (허프 변환과 초점정보를 이용한 경계면 깊이 추정)

  • Kwon, Dae-Sun;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.1, pp.78-84, 2015
  • Depth estimation is often required for robot vision, 3D modeling, and motion control. Previous methods are based on focus measures calculated over a series of images taken by a single camera at different distances from the object. These methods, however, have the disadvantage of taking a long time to calculate the focus measure, since the mask operation is performed for every pixel in the image. In this paper, we estimate depth using the focus measure of only the boundary pixels between objects, in order to minimize the estimation time. To detect the boundary of an object consisting of straight lines and circles, we use the Hough transform, and we then estimate depth from the focus measure at those boundaries. We performed various experiments on PCB images and obtained more effective depth estimation results than previous methods.
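
A minimal sketch of the boundary-only focus measure idea under assumed details (Canny edges plus Hough line/circle boundaries, variance of the Laplacian as the focus measure): the focus measure is evaluated only at boundary pixels across a focal stack, and the best-focused index stands in for depth.

```python
import cv2
import numpy as np

def boundary_pixels(gray):
    """Line and circle boundary points detected with the Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    pts = []
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=30, maxLineGap=5)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1
            xs = np.linspace(x1, x2, n).astype(int)
            ys = np.linspace(y1, y2, n).astype(int)
            pts.append(np.stack([ys, xs], axis=1))
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=150, param2=40)
    if circles is not None:
        for cx, cy, r in circles[0]:
            t = np.linspace(0, 2 * np.pi, 180)
            pts.append(np.stack([(cy + r * np.sin(t)).astype(int),
                                 (cx + r * np.cos(t)).astype(int)], axis=1))
    if not pts:
        return np.empty((0, 2), int)
    pts = np.concatenate(pts)
    h, w = gray.shape
    pts[:, 0] = np.clip(pts[:, 0], 0, h - 1)
    pts[:, 1] = np.clip(pts[:, 1], 0, w - 1)
    return pts

def depth_index(focal_stack, pts, win=7):
    """Index of the sharpest image at the boundary points; the focus measure
    here is the local variance of the Laplacian, a common choice."""
    scores = []
    r = win // 2
    for img in focal_stack:
        lap = cv2.Laplacian(img, cv2.CV_32F, ksize=3)
        vals = [np.var(lap[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1])
                for y, x in pts]
        scores.append(np.mean(vals))
    return int(np.argmax(scores))     # mapped to metric depth via calibration
```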