• Title/Summary/Keyword: 3D model reconstruction

Analysis of the Increase of Matching Points for Accuracy Improvement in 3D Reconstruction Using Stereo CCTV Image Data

  • Moon, Kwang-il;Pyeon, MuWook;Eo, YangDam;Kim, JongHwa;Moon, Sujung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.2 / pp.75-80 / 2017
  • Recently, there has been growing interest in spatial data that combines information and communication technology with smart cities. High-precision LiDAR (Light Detection and Ranging) equipment is mainly used to collect three-dimensional spatial data, and the acquired data are also used to model geographic features and to manage plant construction and cultural heritage sites that require precision. LiDAR equipment can collect precise data, but it is expensive and takes a long time to collect data. In the field of computer vision, on the other hand, research is being conducted on methods of acquiring image data and performing 3D reconstruction from that data without expensive equipment. Precise 3D spatial data can therefore be constructed efficiently by collecting and processing image data from CCTVs, which are installed as infrastructure facilities in smart cities. However, this method can suffer from accuracy problems compared with existing equipment. In this study, experiments were conducted and the results analyzed to increase the number of extracted matching points by applying feature-based and area-based methods, in order to improve the precision of 3D spatial data built from image data acquired with stereo CCTVs. The SIFT algorithm and the PATCH algorithm were used to extract matching points. If precise 3D reconstruction is possible using image data from stereo CCTVs, 3D spatial data can be collected with low-cost equipment, and data can be collected and built in real time because image data can easily be acquired through the Web from smartphones and drones.
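
As a rough illustration of the feature-based matching step this abstract refers to, the sketch below extracts SIFT matching points from a stereo pair with OpenCV. The file names, the brute-force matcher, and the ratio-test threshold are assumptions made for the example, not details taken from the paper.

```python
# Minimal sketch: SIFT matching points between a stereo CCTV image pair.
import cv2

left = cv2.imread("cctv_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("cctv_right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, desc_l = sift.detectAndCompute(left, None)
kp_r, desc_r = sift.detectAndCompute(right, None)

# Match descriptors and keep pairs that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(desc_l, desc_r, k=2)
good = []
for pair in knn:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# Matched point coordinates, usable as input to triangulation.
pts_l = [kp_l[m.queryIdx].pt for m in good]
pts_r = [kp_r[m.trainIdx].pt for m in good]
print(f"{len(good)} matching points extracted")
```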

Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.212-214 / 2021
  • In a society with Covid-19 as part of our daily lives, we have had to adapt to a new reality to keep our lifestyles as normal as possible, for example through teleworking and online classes. However, several issues appeared as we started this new way of living. One of them is the difficulty of knowing whether real people are in front of the camera or whether someone is paying attention during a lecture. We address this issue by creating a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, and fit expression coefficients from 2D facial landmarks to drive the 3D model. With this model, it is possible to represent our faces with an avatar and fully control its bones with rotation and translation parameters. We propose the above method as our solution for reconstructing facial expressions during online meetings.
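
A hedged sketch of one way expression coefficients can be fitted from 2D landmarks, in the spirit of the pipeline described above. The blendshape formulation, the orthographic projection, the array shapes, and the assumption that the model is already rigidly aligned to the image are all illustrative, not the authors' implementation.

```python
# Sketch: solve a linear least-squares problem for blendshape weights from 2D landmarks.
import numpy as np

def fit_expression_coefficients(landmarks_2d, neutral_3d, blendshapes_3d):
    """landmarks_2d: (L, 2) detected 2D landmarks.
    neutral_3d: (L, 3) neutral-face model points at the landmark vertices.
    blendshapes_3d: (K, L, 3) per-expression offsets at the same vertices."""
    # Orthographic projection of an already-aligned model: drop the depth coordinate.
    residual_2d = landmarks_2d - neutral_3d[:, :2]            # (L, 2)
    basis_2d = blendshapes_3d[:, :, :2]                       # (K, L, 2)

    # Stack into a linear system A @ w ~= b and solve for the K coefficients.
    A = basis_2d.reshape(basis_2d.shape[0], -1).T             # (2L, K)
    b = residual_2d.reshape(-1)                               # (2L,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)                               # keep weights in a sane range
```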

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.13-16 / 2015
  • Purpose: We conducted a study on reconstructing the shape of the head in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range imaging camera that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and we used the SDF method for a precise reconstruction. Materials and Methods: To generate a precise model, a mesh was created using marching cubes and an SDF. Results: The ground truth was determined by measuring 10 experiment participants three times each, and the corresponding part of the reconstructed 3D model was measured as well. The actual head circumference and the reconstructed model were measured according to the layer 3 standard, and measurement errors were calculated. We obtained accurate results with an average error of 0.9 cm (standard deviation 0.9, min 0.2, max 1.4). Conclusion: The suggested method completed the 3D model while minimizing errors, and it is very effective for quantitative and objective evaluation. However, because measurements were made according to the layer 3 standard, the measurement range lacks some of the 3D information needed to manufacture protective helmets. The measurement range will therefore need to be widened, by scanning the entire head circumference, to support production of more precise and effective protective helmets in the future.
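
A minimal sketch of the mesh-generation step mentioned above: running marching cubes over a signed distance volume. The volume here is a synthetic sphere standing in for a fused head SDF; building the SDF by integrating ToF depth frames (e.g. TSDF fusion) is outside the scope of this sketch.

```python
# Sketch: extract a triangle mesh from a signed distance field with marching cubes.
import numpy as np
from skimage import measure

# Synthetic signed distance field of a sphere on a 64^3 grid.
grid = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# The zero level set of the SDF is the reconstructed surface.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")
```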

Developing Stereo-vision based Drone for 3D Model Reconstruction of Collapsed Structures in Disaster Sites (재난지역의 붕괴지형 3차원 형상 모델링을 위한 스테레오 비전 카메라 기반 드론 개발)

  • Kim, Changyoon;Lee, Woosik
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.6 / pp.33-38 / 2016
  • Understanding the current state of collapsed buildings, terrain, and other infrastructure is a critical issue for disaster site managers. However, a comprehensive site investigation of the current location of survivors buried under the remains of a building is difficult because of the difficulty of acquiring information on disaster sites. To overcome these circumstances, such as large disaster sites and the limited capacity of rescue workers, this study uses a drone (unmanned aerial vehicle) to obtain current image data from large disaster areas effectively. A framework for 3D model reconstruction of disaster sites using aerial imagery acquired by drones is also presented. The proposed methodology is expected to help fire fighters and workers on disaster sites make a rapid and accurate identification of survivors under collapsed buildings.

Human Face Recognition and 3-D Human Face Modelling (얼굴 영상 인식 및 3차원 얼굴 모델 구현 알고리즘)

  • 이효종;이지항
    • Proceedings of the IEEK Conference / 2000.11c / pp.113-116 / 2000
  • Human face recognition and 3D human face reconstruction are studied in this paper. To find facial feature points, edges are extracted from the input image and the accumulated histogram of the edge information is analyzed. A Generic Face Model, implemented with OpenGL and built from 500 polygons, is used to display the 3D face model. To make the 3D face model more realistic, we propose a group-matching mapping method between the detected facial feature points and those of the Generic Face Model. A personalized 3D face model that resembles the real face can be generated automatically in less than 5 seconds on a Pentium PC.
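
A rough sketch of the idea described above: detect edges in a face image and use accumulated (projection) histograms of the edge map to locate likely feature rows such as the eyes and mouth. The Canny thresholds, smoothing window, and peak-picking rule are illustrative assumptions.

```python
# Sketch: edge map + row-wise accumulated histogram for facial feature localization.
import cv2
import numpy as np

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(face, 50, 150)

# Accumulate edge pixels along each row; strong peaks tend to fall on the
# eyebrow/eye and mouth lines of a roughly frontal face.
row_hist = edges.sum(axis=1).astype(np.float64)
row_hist = np.convolve(row_hist, np.ones(9) / 9.0, mode="same")  # smooth the histogram

candidate_rows = np.argsort(row_hist)[-5:]   # rows with the most edge energy
print("candidate feature rows:", sorted(candidate_rows.tolist()))
```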

Image-based Modeling by Minimizing Projection Error of Primitive Edges (정형체의 투사 선분의 오차 최소화에 의한 영상기반 모델링)

  • Park Jong-Seung
    • The KIPS Transactions:PartB / v.12B no.5 s.101 / pp.567-576 / 2005
  • This paper proposes an image-based modeling method that recovers 3D models using projected line segments in multiple images. Using the method, a user obtains accurate 3D model data through several steps of simple manual work. The embedded nonlinear minimization technique in the model parameter estimation stage is based on the distances between the user-provided image line segments and the projected line segments of the primitives. Defining the error over finite line segments increases the accuracy of the model parameter estimation. The error is the sum of differences between the observed image line segments provided by the user and the predicted image line segments computed from the current model and camera parameters. The method is robust in the sense that it recovers 3D structure even from partially occluded objects and is not seriously affected by small measurement errors in the reconstruction process. This paper also describes experimental results on real images and the difficulties and tricks found while implementing the image-based modeler.
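
A hedged sketch of the estimation idea described above: adjust model parameters so that projected model edges fall onto observed image line segments, using a nonlinear least-squares solver. The single-parameter edge model, the fixed camera intrinsics, and the endpoint-difference residual are simplifications for illustration, not the paper's formulation.

```python
# Sketch: fit a model parameter by minimizing projected-vs-observed segment error.
import numpy as np
from scipy.optimize import least_squares

K = np.array([[800.0, 0.0, 320.0],     # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d):
    """Project camera-frame 3D points with the pinhole model."""
    p = (K @ points_3d.T).T
    return p[:, :2] / p[:, 2:3]

def residuals(params, observed_segments):
    """params = (width,): one primitive edge width; residual = projected vs observed endpoints."""
    w = params[0]
    p0 = np.array([-w / 2.0, 0.0, 4.0])   # model edge endpoints in the camera frame
    p1 = np.array([ w / 2.0, 0.0, 4.0])
    proj = project(np.array([p0, p1]))
    (q0, q1), = observed_segments
    return (proj - np.array([q0, q1])).ravel()

# One observed image line segment, for illustration.
observed_segments = [((170.0, 240.0), (470.0, 240.0))]

fit = least_squares(residuals, x0=[1.0], args=(observed_segments,))
print("estimated edge width:", fit.x[0])   # ~1.5 with the assumed camera
```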

The Three Dimensional Modeling Method of Structure in Urban Areas using Airborne Multi-sensor Data (다중센서 데이터를 이용한 구조물의 3차원 모델링)

  • Son, Ho-Woong;Kim, Ki-Young;Kim, Young-Kyung
    • Journal of the Korean Geophysical Society / v.9 no.1 / pp.7-19 / 2006
  • Laser scanning is a new technology for obtaining Digital Surface Models (DSM) of the earth's surface. It is a fast method for sampling the surface with high point density and high point accuracy. This paper addresses building extraction from LiDAR point data. The core of the building construction step is a parameter filter that distinguishes terrain from non-terrain laser points. The 3D geometric properties of the building facades are obtained by plane fitting using least-squares adjustment. The reconstruction step is based on the adjacency among the roof facades, and primitive extraction and facade intersections are used for building reconstruction. To overcome the difficulty of reconstructing from laser point data alone, digital camera images are used together with the point data, and 3D buildings of the urban area are also reconstructed using a digital map. Finally, the paper presents 3D building modeling using the digital map and LiDAR data.
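
A minimal sketch of the plane-fitting step mentioned above: estimating a best-fit plane for a cluster of LiDAR points by least squares, here via an SVD of the centered points. The synthetic points stand in for a segmented roof or facade patch.

```python
# Sketch: least-squares plane fitting for a patch of LiDAR points.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the least-squares plane through Nx3 points."""
    centroid = points.mean(axis=0)
    # The normal is the singular vector of the centered points with the smallest
    # singular value, i.e. the direction of least variance.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Synthetic roof patch: z = 0.2x + 0.1y + 5 plus a little noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 5.0 + rng.normal(0, 0.02, 200)
pts = np.column_stack([xy, z])

centroid, normal = fit_plane(pts)
print("plane normal:", normal)
```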

Surface Reconstruction from unorganized 3D Points by an improved Shrink-wrapping Algorithm (개선된 Shrink-wrapping 알고리즘을 이용한 비조직 3차원 데이터로부터의 표면 재구성)

  • Park, Eun-Jin;Koo, Bon-Ki;Choi, Young-Kyu
    • The KIPS Transactions:PartA / v.14A no.3 s.107 / pp.133-140 / 2007
  • The SWBF (shrink-wrapped boundary face) algorithm is a recent mesh reconstruction method for constructing a surface model from a set of unorganized 3D points. In this paper, we point out the surface duplication problem of SWBF and propose an improved mesh reconstruction scheme. Our method classifies the non-boundary cells as inner or outer cells and builds an initial mesh without surface duplication by adopting an improved boundary face definition. To handle the directional imbalance of surface sampling density that arises in typical 3D scanners, two-dimensional connectivity in the cell image is introduced and utilized. Experiments show that our method overcomes the surface duplication problem of the SWBF algorithm.

3D Model Reconstruction Algorithm Using a Focus Measure Based on Higher Order Statistics (고차 통계 초점 척도를 이용한 3D 모델 복원 알고리즘)

  • Lee, Joo-Hyun;Yoon, Hyeon-Ju;Han, Kyu-Phil
    • Journal of Korea Multimedia Society / v.16 no.1 / pp.11-18 / 2013
  • This paper presents an SFF (shape from focus) algorithm using a new focus measure based on higher-order statistics for exact depth estimation. Because conventional SFF-based 3D depth reconstruction algorithms use SML (sum of modified Laplacian) as the focus measure, their performance depends strongly on the image characteristics; they are efficient only for well-focused images with rich texture. This paper therefore adopts a new focus measure based on HOS (higher-order statistics) to extract focus values for images with relatively poor texture and focus. An initial best-focus area map is generated from this measure, and area refinement, thinning, and corner detection are then applied successively to extract the locally best focus points. Finally, a 3D model is reconstructed from the carefully selected points by Delaunay triangulation.
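
A hedged sketch contrasting the two kinds of focus measure this abstract refers to: the conventional SML (sum of modified Laplacian) and a higher-order-statistics measure, here taken as the local fourth-order central moment. The window size and the exact HOS definition are illustrative assumptions, not the paper's implementation.

```python
# Sketch: SML and HOS focus measures, and depth selection over a focal stack.
import numpy as np
from scipy import ndimage

def sml(image, window=5):
    """Sum of modified Laplacian accumulated in a local window."""
    img = image.astype(np.float64)
    dxx = np.abs(ndimage.convolve1d(img, [1.0, -2.0, 1.0], axis=1))
    dyy = np.abs(ndimage.convolve1d(img, [1.0, -2.0, 1.0], axis=0))
    return ndimage.uniform_filter(dxx + dyy, size=window)

def hos_focus(image, window=5):
    """Fourth-order central moment in a local window as a focus measure."""
    img = image.astype(np.float64)
    mean = ndimage.uniform_filter(img, size=window)
    return ndimage.uniform_filter((img - mean) ** 4, size=window)

def depth_from_focus(stack, measure=hos_focus):
    """For a focal stack (list of images), pick the frame index maximizing the focus measure per pixel."""
    volume = np.stack([measure(frame) for frame in stack], axis=0)
    return np.argmax(volume, axis=0)
```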

Computational integral imaging reconstruction method using round-type mapping model (원형 매핑 모델을 사용하는 컴퓨터 직접 영상 재생 방식)

  • Sin, Dong-Hak;Kim, Nam-Woo;Lee, Jun-Jae;Lee, Byeong-Guk
    • Proceedings of the Optical Society of Korea Conference / 2007.07a / pp.259-260 / 2007
  • In this paper, we propose a novel computational integral imaging reconstruction (CIIR) method using a round-type mapping model. The proposed CIIR method can overcome the problem of non-uniformly reconstructed images caused by the conventional method and improve the resolution of 3D images. To show the usefulness of the proposed method, both a computational experiment and an optical experiment were carried out, and their results are presented.
