• Title/Summary/Keyword: 3D Coordinates


Determination of 3D Object Coordinates from Overlapping Omni-directional Images Acquired by a Mobile Mapping System (모바일매핑시스템으로 취득한 중첩 전방위 영상으로부터 3차원 객체좌표의 결정)

  • Oh, Tae-Wan;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.3
    • /
    • pp.305-315
    • /
    • 2010
  • This research aims to develop a method to determine the 3D coordinates of an object point from overlapping omni-directional images acquired by a ground mobile mapping system and to assess its accuracy. In the proposed method, we first define an individual coordinate system on each sensor and on the object space and determine the geometric relationships between these systems. Based on these systems and their relationships, we derive, for a point in an omni-directional image, a straight line containing the candidate object points, and determine the 3D coordinates of the object point by intersecting the pair of straight lines derived from a pair of matched points. For accuracy assessment and analysis, we compared the object coordinates determined by the proposed method with those measured by GPS and a total station. According to the experimental results, with an appropriate baseline length and suitable mutual positions between the cameras and objects, we can determine the relative coordinates of an object point with an accuracy of several centimeters. The accuracy of the absolute coordinates ranges from several centimeters to 1 m due to systematic errors. In the future, we plan to improve the absolute accuracy by determining more precisely the relationship between the camera and GPS/INS coordinate systems and by calibrating the omni-directional camera.
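The intersection step described in the abstract above — finding the object point where two rays through matched image points (nearly) meet — can be sketched with the standard midpoint-of-closest-approach construction. The camera centers and ray directions below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment joining two rays.

    Each ray is given by a camera center c and a direction d; with
    noisy matches the rays rarely intersect exactly, so the midpoint
    of closest approach is used as the 3D object point.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    k = d1 @ d2
    denom = 1.0 - k ** 2              # zero only for parallel rays
    t1 = (d1 @ b - k * (d2 @ b)) / denom
    t2 = (k * (d1 @ b) - d2 @ b) / denom
    p1 = c1 + t1 * d1                 # closest point on ray 1
    p2 = c2 + t2 * d2                 # closest point on ray 2
    return 0.5 * (p1 + p2)

# Hypothetical example: two cameras one meter apart (the baseline),
# both observing a point at (0.5, 0, 5).
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([1.0, 0.0, 0.0])
target = np.array([0.5, 0.0, 5.0])
p = triangulate_midpoint(c1, target - c1, c2, target - c2)
```

With exact rays the midpoint recovers the target point; with real matched points the residual gap between the two rays gives a useful per-point quality measure.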

Constructing 3D Outlines of Objects based on Feature Points using Monocular Camera (단일카메라를 사용한 특징점 기반 물체 3차원 윤곽선 구성)

  • Park, Sang-Heon;Lee, Jeong-Oog;Baik, Doo-Kwon
    • The KIPS Transactions:PartB
    • /
    • v.17B no.6
    • /
    • pp.429-436
    • /
    • 2010
  • This paper presents a method to extract the 3D outlines of objects in an image obtained from monocular vision. It first detects the general outlines of an object with the MOPS (Multi-Scale Oriented Patches) algorithm and obtains their spatial coordinates. Simultaneously, it obtains the spatial coordinates of the feature points lying within the object outlines through the SIFT (Scale Invariant Feature Transform) algorithm. The shape of the object is then grasped by joining the spatial coordinates of the outlines with the SIFT feature points. Because the proposed method forms only the general outlines of objects, it enables rapid computation, and it also has the advantage of collecting detailed data because the SIFT feature points supply information about the interior of the outlines.

Light 3D Modeling with mobile equipment (모바일 카메라를 이용한 경량 3D 모델링)

  • Ju, Seunghwan;Seo, Heesuk;Han, Sunghyu
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.12 no.4
    • /
    • pp.107-114
    • /
    • 2016
  • Recently, 3D-related technology has become a hot topic in IT. 3D technologies such as 3DTV, Kinect, and 3D printers are becoming more and more popular. Following this trend, the goal of this study is to expose the general public to 3D technology easily. We have developed a web-based application that performs 3D modeling from front and side facial photographs taken with a mobile phone. To realize the 3D modeling, the two photographs (front and side) are taken with a mobile camera, and the ASM (Active Shape Model) and a skin binarization technique are used to extract facial features and facial heights, such as the nose, from the front and side photographs. Three-dimensional coordinates are generated using the face extracted from the front photograph and the facial heights obtained from the side photograph. Using these 3D coordinates as control points for a standard face model, the model is deformed into the subject's face by RBF (Radial Basis Function) interpolation. Also, to cover the face model with the subject's face, the control points found in the front photograph are mapped to texture-map coordinates to generate a texture image. Finally, the deformed face model is covered with the texture image, and the 3D-modeled result is displayed to the user.
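The RBF deformation step described above — moving a standard face mesh so that its control points land on the measured 3D coordinates — can be sketched as follows. The Gaussian kernel and the toy control points are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def rbf_deform(src_ctrl, dst_ctrl, points, sigma=1.0):
    """Deform `points` with an RBF warp that maps src_ctrl -> dst_ctrl.

    A Gaussian kernel is used here for simplicity; any radial kernel
    (thin-plate spline, multiquadric, ...) works the same way.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K = kernel(src_ctrl, src_ctrl)               # (n, n) kernel matrix
    W = np.linalg.solve(K, dst_ctrl - src_ctrl)  # displacement weights
    return points + kernel(points, src_ctrl) @ W

# Toy example: three control points on a line, the middle one lifted
# by 0.5 (standing in for a nose height from the side photograph).
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
dst = src.copy()
dst[1, 2] += 0.5
moved = rbf_deform(src, dst, src)
```

Mesh vertices between the control points would be displaced smoothly, which is what lets a sparse set of facial landmarks deform the whole standard model.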

A Study on Estimating Smartphone Camera Position (스마트폰 카메라의 이동 위치 추정 기술 연구)

  • Oh, Jongtaek;Yoon, Sojung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.6
    • /
    • pp.99-104
    • /
    • 2021
  • The technology of estimating a movement trajectory using a monocular camera such as a smartphone and composing a surrounding 3D image is key not only to indoor positioning but also to metaverse services. The most important step in this technique is estimating the coordinates of the moving camera center. In this paper, a new algorithm for geometrically estimating the movement distance is proposed. The coordinates of a 3D object point are obtained from the first and second photographs, and the movement distance vector is obtained using the matched feature points of the first and third photographs. Then, while moving the assumed origin of the third camera, the position is found at which the 3D object point and the feature point of the third photograph coincide. The feasibility and accuracy of the algorithm were verified by applying it to actual continuous image data.
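The search described above — sliding the assumed third-camera origin until the 3D object point reprojects onto the observed feature point — can be sketched as a one-dimensional scan over the movement distance. The pinhole projection, the known direction of motion, and the sample values are all simplifying assumptions for illustration:

```python
import numpy as np

def project(point, cam_center, f=1.0):
    """Pinhole projection of a 3D point for a camera at cam_center
    looking down +z with focal length f (no rotation, for simplicity)."""
    p = point - cam_center
    return f * p[:2] / p[2]

def estimate_move_distance(obj_point, move_dir, observed_uv, candidates):
    """Scan candidate distances along move_dir and keep the one whose
    reprojection of obj_point best matches the observed feature point."""
    errs = [np.linalg.norm(project(obj_point, t * move_dir) - observed_uv)
            for t in candidates]
    return candidates[int(np.argmin(errs))]

# Toy setup: object point at (0.5, 0, 5); the third camera actually
# moved 1.2 units along +x, producing the observed image coordinates.
obj = np.array([0.5, 0.0, 5.0])
move_dir = np.array([1.0, 0.0, 0.0])
uv = project(obj, 1.2 * move_dir)
t_hat = estimate_move_distance(obj, move_dir, uv,
                               np.linspace(0.0, 3.0, 301))
```

In practice the error would be summed over many matched feature points rather than one, which makes the minimum far more robust to matching noise.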

3D Boundary Extraction of A Building Using Terrestrial Laser Scanner (지상라이다를 이용한 건축물의 3차원 경계 추출)

  • Lee, In-Su
    • Spatial Information Research
    • /
    • v.15 no.1
    • /
    • pp.53-65
    • /
    • 2007
  • A terrestrial laser scanner provides highly accurate 3D images: by sweeping a laser beam over a scene or object, the scanner can record the coordinates of millions of 3D points in a short period, so it has become prominent in various application fields as one of the representative surveying instruments. This study deals with 3D building boundary extraction using a terrestrial laser scanner. The results show that highly accurate 3D coordinates for building boundaries can be acquired quickly, but since the terrestrial laser scanner is a ground-based system, it captures no roofs and misses the lower parts of buildings occluded by trees, utility poles, and the like. It is expected that combining the total station, terrestrial laser scanner, and airborne laser scanner with aerial photogrammetry will contribute to the acquisition of effective 3D spatial information.


THE POSITIONAL RELATIONSHIP BETWEEN THE MANDIBLE AND THE HYOID BONE IN MANDIBULAR PROTRUSION AFTER ORTHOGNATHIC SURGERY EVALUATED WITH 3-D CT (3-D CT를 이용한 악교정술 전후의 하악과 설골의 위치에 관한 연구)

  • Lee, Sang-Han;Nam, Jeong-Hun;Jung, Chang-Wook;Kwon, Tae-Geon
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.29 no.3
    • /
    • pp.173-181
    • /
    • 2003
  • Purpose: This study was intended to evaluate the positional relationship between the hyoid bone and the mandible in patients with mandibular protrusion after mandibular set-back surgery by means of 3D CT. Materials and methods: Preoperative (3 weeks before) and postoperative (6 weeks after) 3D CT and cephalograms were taken of 32 patients (12 male, 20 female, mean age 23.2) treated by bilateral sagittal split osteotomy with rigid fixation. The angular measurements on the 3D CT basilar view were the deviation of Me and H and the long-axis angles of the left and right cornu majus. The linear measurements on the 3D CT basilar view comprised the intercondylar line and the coordinates (x, y) of Me and H. The angular and linear measurements on the lateral cephalogram comprised the mandibular plane angle, SNA, SNB, ANB, FH-NA, and FH-NB, the coordinates (x, y) of B, Pog, Me, and H, and PAS, Lpw, MPH, and IAS. On the frontal cephalogram, the deviation of Me was evaluated. Results: The mean mandibular set-back was 8.0 mm horizontally, and the mandibular plane angle was slightly increased. The hyoid bone was displaced postero-inferiorly, the distance between MP (mandibular plane) and H (hyoid bone) was increased, and the posterior airway space values (PAS, Lpw, IAS) were decreased. The coordinates Me (x, y) and H (x, y) and the deviation angles Me' and H' showed a strong positive correlation. Conclusion: The results revealed that the horizontal, vertical, and transverse movements of the mandible and the hyoid bone were significantly correlated in patients who underwent mandibular set-back surgery.

Coordinate Determination for Texture Mapping using Camera Calibration Method (카메라 보정을 이용한 텍스쳐 좌표 결정에 관한 연구)

  • Jeong K. W.;Lee Y.Y.;Ha S.;Park S.H.;Kim J. J.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.9 no.4
    • /
    • pp.397-405
    • /
    • 2004
  • Texture mapping is the process of covering 3D models with texture images in order to increase the visual realism of the models. For proper mapping, the coordinates of the texture images need to coincide with those of the 3D models. When projective images from a camera are used as texture images, the texture image coordinates are defined by a camera calibration method: they are determined by the relationship between the coordinate systems of the camera image and the 3D object. With projective camera images, the distortion caused by the camera lens should be compensated in order to obtain accurate texture coordinates. The distortion problem has been dealt with by iterative methods, in which the camera calibration coefficients are first computed without considering the distortion and then modified accordingly. Such methods not only shift the position of the camera perspective line in the image plane but also require more control points. In this paper, a new iterative method is suggested that reduces the error by fixing the principal point in the image plane. The method considers the image distortion independently and fixes the values of the correction coefficients, with which the distortion coefficients can be computed from fewer control points. It is shown that the camera distortion is compensated with fewer control points than in previous methods and that the projective texture mapping produces a more realistic image.
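The lens-distortion compensation discussed above can be sketched with the common polynomial radial model and its fixed-point inversion; the coefficients below are made-up illustrative values, not ones estimated in the paper:

```python
import numpy as np

def apply_radial_distortion(xy, k1, k2):
    """Map ideal normalized image coordinates to distorted ones using
    the polynomial radial model x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = (xy ** 2).sum(-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted point by the distortion factor evaluated at
    the current undistorted estimate."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = (xy ** 2).sum(-1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy

# Illustrative coefficients and a point near the image corner.
k1, k2 = -0.2, 0.05
ideal = np.array([[0.4, 0.3]])
distorted = apply_radial_distortion(ideal, k1, k2)
recovered = undistort(distorted, k1, k2)
```

For moderate distortion the iteration converges in a handful of steps; texture coordinates computed from the undistorted positions then line up with the 3D model geometry.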

Standard Terminology System Referenced by 3D Human Body Model

  • Choi, Byung-Kwan;Lim, Ji-Hye
    • Journal of information and communication convergence engineering
    • /
    • v.17 no.2
    • /
    • pp.91-96
    • /
    • 2019
  • In this study, a system to increase the expressiveness of existing standard terminology using three-dimensional (3D) data is designed. We analyze the existing medical terminology systems by searching the reference literature and performing a focus survey with an expert group. A human body image is generated using a 3D modeling tool, and the anatomical positions of the human body are mapped to 3D coordinate identifiers (IDs) and metadata. We define terms to represent 3D human body positions in a total of 12 categories, including semantic terminology entity and semantic disorder. The Blender and 3ds Max programs are used to create the 3D model from medical imaging data. The generated 3D human body model is expressed by IDs of the coordinate type (x, y, and z axes) based on anatomical position and mapped to the semantic entities carrying the meaning. We propose a standard terminology system enabling the integration and utilization of the 3D human body model, the coordinate IDs, and the metadata. In the future, through cooperation with Electronic Health Record systems, this work will contribute to clinical research generating higher-quality big data.
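The mapping described above — tying an anatomical term to a coordinate-style ID on the body model plus metadata — can be sketched as a simple lookup structure; the ID scheme and the entries are invented for illustration only:

```python
# Hypothetical sketch of the described mapping: each anatomical term
# points to a coordinate-style ID (x, y, z on the 3D body model) and
# metadata such as its semantic category. Entries are illustrative.
body_map = {
    "left shoulder": {"coord_id": (-18, 0, 140),
                      "category": "semantic terminology entity"},
    "right knee":    {"coord_id": (10, 2, 48),
                      "category": "semantic terminology entity"},
}

def lookup(term):
    """Return the coordinate ID and category for an anatomical term,
    or None if the term is not in the terminology system."""
    entry = body_map.get(term)
    return None if entry is None else (entry["coord_id"], entry["category"])

cid, cat = lookup("right knee")
```

A production system would back this with the full 12-category term set and resolve the IDs against the actual 3D model, but the term-to-(ID, metadata) shape stays the same.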

Plane Detection Method Using 3-D Characteristics at Depth Pixel Unit (깊이 화소 단위의 3차원 특성을 통한 평면 검출 방법)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.5
    • /
    • pp.580-587
    • /
    • 2019
  • In this paper, a plane detection method using depth information is proposed. The 3D characteristics of a pixel are defined as the direction and length of a normal vector, which is calculated from a plane fitted to a local region centered on the pixel. The image coordinates of each pixel are transformed to 3D coordinates in order to obtain the local planes. The region of each plane is then detected by calculating the similarity of the 3D characteristics, which consists of the direction and distance similarities of the normal vectors. If the similarity of the characteristics between two adjacent pixels is high enough, the two pixels are regarded as belonging to the same plane. Simulation results show that the proposed method using the depth picture detects plane areas more accurately than the conventional method.
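The per-pixel test described above — computing a local normal from neighboring 3D points and merging pixels whose normals agree in direction and plane offset — can be sketched as follows; the cross-product normal and the thresholds are simplifying assumptions:

```python
import numpy as np

def local_normal(p, p_right, p_down):
    """Normal of the local plane through a 3D point and its right/down
    neighbors, computed as a normalized cross product."""
    n = np.cross(p_right - p, p_down - p)
    return n / np.linalg.norm(n)

def same_plane(p1, n1, p2, n2, ang_thresh=0.99, dist_thresh=0.05):
    """Two pixels belong to one plane if their normals point the same
    way (cosine above ang_thresh) and their plane offsets n.p agree."""
    direction_ok = abs(n1 @ n2) > ang_thresh
    distance_ok = abs(n1 @ p1 - n2 @ p2) < dist_thresh
    return bool(direction_ok and distance_ok)

# Toy depth-derived points: two pixels on the plane z = 2 with small
# right/down neighbor offsets standing in for adjacent depth pixels.
p = np.array([0.0, 0.0, 2.0])
q = np.array([1.0, 0.0, 2.0])
n_p = local_normal(p, p + [0.1, 0.0, 0.0], p + [0.0, 0.1, 0.0])
n_q = local_normal(q, q + [0.1, 0.0, 0.0], q + [0.0, 0.1, 0.0])
flat = same_plane(p, n_p, q, n_q)
```

Checking the plane offset n·p as well as the normal direction is what separates two parallel surfaces at different depths, which a direction-only test would wrongly merge.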

Automatic Road Lane Matching Using Aerial Images (항공사진을 이용한 도로차선 자동매칭)

  • 김진곤;한동엽;유기윤;김용일
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2003.10a
    • /
    • pp.147-152
    • /
    • 2003
  • Aerial images are usually used to extract the 3D coordinates of various urban features. In this process, stereo matching of the images should be performed precisely to extract this information from the aerial images. In this research, we propose a matching technique based on the geometric features of road lanes. We extracted lanes from aerial images and grouped them into four lane types: lane lines, dotted lines, arrow lanes, and safety zones. After preprocessing, we match them by spatial relationships, for example the distance and orientation between the extracted features. In the future, we will obtain lane coordinates and reconstruct the 3D coordinates of roads.
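The spatial-relationship matching mentioned above — pairing extracted lane features across images by comparing distance and orientation — can be sketched as a greedy nearest-neighbor test; the (x, y, orientation) feature representation and the thresholds are assumptions for illustration:

```python
import math

def match_lanes(features_a, features_b, max_dist=5.0, max_angle=10.0):
    """Greedily pair lane features (x, y, orientation_deg) across two
    images when they are close in both position and orientation."""
    pairs = []
    used = set()
    for i, (xa, ya, ta) in enumerate(features_a):
        best, best_d = None, max_dist
        for j, (xb, yb, tb) in enumerate(features_b):
            if j in used:
                continue
            d = math.hypot(xa - xb, ya - yb)
            if d < best_d and abs(ta - tb) <= max_angle:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

# Toy features from two overlapping aerial images.
a = [(0.0, 0.0, 90.0), (12.0, 3.0, 45.0)]
b = [(11.0, 3.5, 44.0), (0.5, 0.2, 89.0)]
pairs = match_lanes(a, b)
```

The matched pairs are exactly what a subsequent intersection step would consume to reconstruct the 3D coordinates of the road lanes.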
