• Title/Summary/Keyword: 3D물체 (3D object)

Search Result 874

A Research on Joint Types in Physics Engines for 3D Games (3차원게임을 위한 물리엔진에서의 관절체 구조 연구)

  • Heo, Won; Son, Min-Woo; Shin, Dong-Il; Shin, Dong-Kyoo
    • Proceedings of the Korea Information Processing Society Conference / 2003.05a / pp.3-6 / 2003
  • Early 3D game engines were developed with a focus on rendering three-dimensional images, so they gave little consideration to the classical laws of motion, such as the linear and rotational motion that arises between objects in the real world, or to the physical phenomena caused by that motion. The importance of physics for making the objects and elements of a game feel realistic has since been recognized by 3D game developers, and many developers and studios now apply real-world physical phenomena to their games. Accordingly, this paper studies the types and structures of articulated joints that help physics engines for 3D games represent objects and their movements realistically.

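Since the abstract above concerns the joint types used to articulate bodies in a game physics engine, the following minimal Python sketch enumerates a few joint categories commonly exposed by such engines (fixed, hinge, slider, universal, ball-and-socket) together with their unconstrained degrees of freedom. The names, DOF counts, and the two-link arm example are generic illustrations and are not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

class JointType(Enum):
    FIXED = auto()        # no relative motion between the two bodies
    HINGE = auto()        # 1 rotational DOF about a single axis (e.g. elbow, door)
    SLIDER = auto()       # 1 translational DOF along an axis (prismatic joint)
    UNIVERSAL = auto()    # 2 rotational DOF (two perpendicular hinge axes)
    BALL_SOCKET = auto()  # 3 rotational DOF (e.g. shoulder, hip)

# Degrees of freedom left unconstrained by each joint: (rotational, translational).
JOINT_DOF = {
    JointType.FIXED: (0, 0),
    JointType.HINGE: (1, 0),
    JointType.SLIDER: (0, 1),
    JointType.UNIVERSAL: (2, 0),
    JointType.BALL_SOCKET: (3, 0),
}

@dataclass
class Joint:
    """A joint linking two rigid bodies in an articulated figure."""
    parent: str
    child: str
    joint_type: JointType

    def dof(self) -> int:
        rot, trans = JOINT_DOF[self.joint_type]
        return rot + trans

# Example: a simple two-link arm (shoulder + elbow).
arm = [
    Joint("torso", "upper_arm", JointType.BALL_SOCKET),
    Joint("upper_arm", "forearm", JointType.HINGE),
]
print(sum(j.dof() for j in arm))  # 4 degrees of freedom in total
```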

3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi; Ko, Young Min; Bae, Chulkyun; Kim, Dae Jin
    • Journal of Broadcast Engineering / v.24 no.2 / pp.281-291 / 2019
  • Depth from defocus estimates 3D depth by exploiting the fact that an object in the camera's focal plane forms a sharp image while an object away from the focal plane appears blurred. This paper studies algorithms that estimate 3D depth by analyzing the degree of blur in images taken with a single camera, using either one image or two images captured at different focus settings, and determines the optimized object range for each case. For depth estimation from one image, the best performance was achieved with a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation from two images showed the best 3D depth estimation range with focal lengths of 150 mm and 250 mm for smartphone images, and 200 mm and 300 mm for DSLR images.
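
As a rough illustration of the depth-from-defocus cue described above, the sketch below compares a per-pixel blur measure (local variance of the Laplacian, one common choice) between two photographs of the same scene focused at different distances. It is only a sketch of the general idea; the window size, the sharpness measure, and the mapping from the resulting cue to metric depth (which the paper determines experimentally) are all assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def local_sharpness(gray: np.ndarray, win: int = 9) -> np.ndarray:
    """Per-pixel sharpness: local variance of the Laplacian response.

    In-focus regions keep their high-frequency content and give a large value;
    defocus blur suppresses it.
    """
    lap = laplace(gray.astype(np.float64))
    mean = uniform_filter(lap, win)
    mean_sq = uniform_filter(lap * lap, win)
    return np.maximum(mean_sq - mean * mean, 0.0)

def relative_depth_from_two_focus(img_near: np.ndarray, img_far: np.ndarray) -> np.ndarray:
    """Crude depth-from-defocus cue from two images focused at different distances.

    Values near +1 mean the pixel is sharper in the near-focus shot, values near -1
    mean it is sharper in the far-focus shot.  Converting this cue to metric depth
    requires a camera-specific calibration curve.
    """
    s_near = local_sharpness(img_near)
    s_far = local_sharpness(img_far)
    return (s_near - s_far) / (s_near + s_far + 1e-9)
```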

Natural Photography Generation with Text Guidance from Spherical Panorama Image (360 영상으로부터 텍스트 정보를 이용한 자연스러운 사진 생성)

  • Kim, Beomseok; Jung, Jinwoong; Hong, Eunbin; Cho, Sunghyun; Lee, Seungyong
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.65-75 / 2017
  • As a 360-degree image carries information in all directions, it often contains too much information. Moreover, to examine a 360-degree image on a 2D display, a user must either click and drag the image with a mouse or project it onto a 2D panorama, which inevitably introduces severe distortion. Consequently, inspecting a 360-degree image and finding an object of interest in it can be a tedious task. To resolve this issue, this paper proposes a method that finds a region of interest and produces a natural-looking 2D image from a given 360-degree image that best matches a description given by a user as a natural language sentence. The method also considers photo composition so that the resulting image is aesthetically pleasing. It first converts the 360-degree image to a 2D cubemap. Since objects in a 360-degree image may appear distorted or split into multiple pieces in a typical cubemap, causing detection of such objects to fail, a modified cubemap is introduced. The method then applies a Long Short-Term Memory (LSTM) network-based object detection method to find a region of interest matching the given natural language sentence. Finally, it produces an image that contains the detected region and has an aesthetically pleasing composition.
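
The first step of the pipeline above, projecting the 360-degree image onto a cubemap, can be sketched for a single cube face with the standard equirectangular-to-cubemap mapping shown below. This is a generic projection, not the paper's modified cubemap (which rearranges faces so that objects are not split), and the face size and nearest-neighbour sampling are simplifying assumptions.

```python
import numpy as np

def equirect_to_cube_face(pano: np.ndarray, face_size: int = 512) -> np.ndarray:
    """Project an equirectangular 360 panorama onto the front (+z) cube face.

    pano: H x W x 3 equirectangular image (longitude spans the width, latitude
    the height).  Nearest-neighbour sampling keeps the sketch short; a real
    pipeline would interpolate and render all six faces.
    """
    h, w = pano.shape[:2]
    # Face pixel grid in [-1, 1] x [-1, 1].
    u, v = np.meshgrid(np.linspace(-1, 1, face_size), np.linspace(-1, 1, face_size))
    # Rays through the +z face of a unit cube.
    x, y, z = u, -v, np.ones_like(u)
    norm = np.sqrt(x * x + y * y + z * z)
    lon = np.arctan2(x, z)        # longitude in [-pi, pi]
    lat = np.arcsin(y / norm)     # latitude in [-pi/2, pi/2]
    # Map spherical coordinates back to panorama pixel coordinates.
    px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return pano[py, px]
```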

The Laser Calibration Based On Triangulation Method (삼각법을 기반으로 한 레이저 캘리브레이션)

  • 주기세
    • Journal of the Korea Institute of Information and Communication Engineering / v.3 no.4 / pp.859-865 / 1999
  • Many sensors, such as lasers and CCD cameras, have been used to obtain 3D information, but most laser-calibration algorithms are inefficient because they require large amounts of memory and experimental data. In this paper, a calibration algorithm for a slit-beam laser based on the triangulation method is introduced to calculate 3D information in the real world. The laser beam, mounted orthogonally on an XY table, is projected onto the floor, and a CCD camera observes the intersection of the light plane with the object plane. The 3D information is then calculated from the observed and calibration data. The method saves memory and experimental data because the 3D information is obtained by simple triangulation.

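The geometry underlying this kind of slit-beam triangulation can be sketched with the textbook relation below: with the laser sheet projected straight down and the camera viewing at a known angle, the observed sideways shift of the stripe is proportional to the object height. The function name, the pixel-to-millimetre scale, and the viewing angle are illustrative calibration parameters, not values from the paper.

```python
import numpy as np

def height_from_stripe_shift(shift_px: np.ndarray,
                             mm_per_px: float,
                             camera_angle_deg: float) -> np.ndarray:
    """Textbook laser-triangulation relation (not the paper's exact derivation).

    The laser sheet is projected straight down onto the reference plane and the
    camera views the scene at `camera_angle_deg` from the laser axis.  An object
    of height h displaces the observed stripe on the reference plane by
    h * tan(angle); inverting that gives the height.

    shift_px         : stripe displacement in pixels relative to the empty-plane position
    mm_per_px        : calibrated image scale on the reference plane
    camera_angle_deg : calibrated angle between the laser axis and the camera viewing ray
    """
    shift_mm = shift_px * mm_per_px
    return shift_mm / np.tan(np.radians(camera_angle_deg))

# Example: a stripe shifted by 12 px with 0.5 mm/px and a 30-degree viewing angle.
print(height_from_stripe_shift(np.array([12.0]), 0.5, 30.0))  # ~[10.39] mm
```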

Object Detection Based on Hellinger Distance IoU and Objectron Application (Hellinger 거리 IoU와 Objectron 적용을 기반으로 하는 객체 감지)

  • Kim, Yong-Gil; Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.2 / pp.63-70 / 2022
  • Although 2D object detection has improved greatly in recent years with advances in deep learning and the use of large labeled image datasets, 3D object detection from 2D imagery remains challenging in applications such as robotics, owing to the lack of data and the diversity of object appearances and shapes within a category. Google recently announced Objectron, which provides a novel data pipeline based on mobile augmented-reality session data; it, too, is a 2D-driven 3D object detection technique. This study explores a more mature 2D object detection method and applies its 2D projection to the Objectron 3D lifting system. Most object detection methods use bounding boxes to encode the shape and location of objects. In this work, we explore a stochastic representation of object regions using Gaussian distributions and present a similarity measure for the Gaussian distributions based on the Hellinger distance, which can be viewed as a stochastic Intersection-over-Union. Our experimental results show that the proposed Gaussian representations are closer to the annotated segmentation masks in available datasets; thus, the limited accuracy that is one of several limitations of Objectron can be alleviated.
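
The stochastic Intersection-over-Union idea can be made concrete with the closed-form Hellinger distance between two Gaussians, sketched below. Representing each box by a centre and a diagonal covariance, and taking 1 - H as the IoU-like similarity, are plausible but assumed choices; the paper's exact parameterisation may differ.

```python
import numpy as np

def gaussian_hellinger(mu1, cov1, mu2, cov2) -> float:
    """Hellinger distance between two Gaussians N(mu1, cov1) and N(mu2, cov2).

    H^2 = 1 - BC, where the Bhattacharyya coefficient for Gaussians is
    BC = det(S1)^(1/4) det(S2)^(1/4) / det(Sm)^(1/2)
         * exp(-1/8 (mu1-mu2)^T Sm^{-1} (mu1-mu2)),  with Sm = (S1+S2)/2.
    """
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov_m = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    bc = (np.linalg.det(cov1) ** 0.25 * np.linalg.det(cov2) ** 0.25
          / np.sqrt(np.linalg.det(cov_m))
          * np.exp(-0.125 * diff @ np.linalg.solve(cov_m, diff)))
    return float(np.sqrt(max(0.0, 1.0 - bc)))

def box_to_gaussian(cx, cy, w, h):
    """Represent an axis-aligned box as a Gaussian (centre + diagonal covariance)."""
    return np.array([cx, cy]), np.diag([(w / 2.0) ** 2, (h / 2.0) ** 2])

# Two overlapping boxes: the smaller the Hellinger distance, the higher the
# IoU-like similarity (here taken as 1 - H, one plausible choice).
mu_a, cov_a = box_to_gaussian(50, 50, 40, 30)
mu_b, cov_b = box_to_gaussian(55, 52, 40, 30)
h = gaussian_hellinger(mu_a, cov_a, mu_b, cov_b)
print(h, 1.0 - h)
```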

Fast Axis Estimation from 3D Axially-Symmetric Object's Fragment (3차원 회전축 대칭 물체 조각의 축 추정 방법)

  • Li, Liang; Han, Dong-Jin; Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.748-754 / 2010
  • To reduce the computational cost of assembling vessel fragments from surface geometry, this paper proposes a fast axis estimation method that finds the axis of symmetry using the circular constraint of pottery and a local planar-patch assumption. First, the circular constraint is used: a circularly symmetric pot can be regarded as a union of many cylinders with different radii. The method selects an arbitrary point on the fragment surface and searches for a path through that point along which a circumference lies; the variance of curvature is computed along candidate paths, the path with the minimum variance is selected, and the symmetry axis must pass through the center of that circle. Second, the planar-patch assumption and the profile curve are used: the fragment surface is divided into small patches, each approximated as a plane, and the surface normal of each patch intersects the axis in 3D space because every planar patch faces the center of the pot. A histogram method and minimization of the profile-curve error are used to find the probability distribution of the axis location. Experimental results demonstrate the improved speed and robustness of the algorithms.
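
To illustrate the geometric fact exploited in the second step, that every surface normal of a surface of revolution intersects the symmetry axis, the sketch below collects closest-approach midpoints of random pairs of patch-normal lines and fits a 3D line to them with PCA. This replaces the paper's histogram and profile-curve error minimisation with a simpler estimator and is meant only as an illustration; the pair count and random seed are arbitrary.

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two 3D lines p + t*d."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:   # nearly parallel normals carry no axis information
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

def estimate_symmetry_axis(points, normals, n_pairs=2000, seed=0):
    """Axis sketch for a surface of revolution (illustrative, not the paper's procedure).

    Every surface normal of a revolved surface intersects the axis, so the
    closest-approach midpoints of random normal-line pairs cluster along it.
    A 3D line is then fitted to those midpoints with PCA.
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(points), size=(n_pairs, 2))
    mids = [closest_point_between_lines(points[i], normals[i], points[j], normals[j])
            for i, j in idx if i != j]
    mids = np.array([m for m in mids if m is not None])
    centre = mids.mean(axis=0)
    _, _, vt = np.linalg.svd(mids - centre)
    return centre, vt[0]   # a point on the axis and the axis direction
```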

Object-Based Integral Imaging Depth Extraction Using Segmentation (영상 분할을 이용한 객체 기반 집적영상 깊이 추출)

  • Kang, Jin-Mo; Jung, Jae-Hyun; Lee, Byoung-Ho; Park, Jae-Hyeung
    • Korean Journal of Optics and Photonics / v.20 no.2 / pp.94-101 / 2009
  • A novel method for the reconstruction of 3D shape and texture from elemental images has been proposed. Using this method, we can estimate a full 3D polygonal model of objects with seamless triangulation. But in the triangulation process, all the objects are stitched. This generates phantom surfaces that bridge depth discontinuities between different objects. To solve this problem we need to connect points only within a single object. We adopt a segmentation process to this end. The entire process of the proposed method is as follows. First, the central pixel of each elemental image is computed to extract spatial position of objects by correspondence analysis. Second, the object points of central pixels from neighboring elemental images are projected onto a specific elemental image. Then, the center sub-image is segmented and each object is labeled. We used the normalized cut algorithm for segmentation of the center sub-image. To enhance the speed of segmentation we applied the watershed algorithm before the normalized cut. Using the segmentation results, the subdivision process is applied to pixels only within the same objects. The refined grid is filtered with median and Gaussian filters to improve reconstruction quality. Finally, each vertex is connected and an object-based triangular mesh is formed. We conducted experiments using real objects and verified our proposed method.
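
The correspondence analysis between elemental images boils down to a stereo-style triangulation: two elemental lenses form a small-baseline stereo pair whose effective focal length is the lens-array-to-sensor gap. The sketch below shows only that relation; the parameter names are illustrative, and the paper's segmentation-guided subdivision and filtering steps are not reproduced.

```python
import numpy as np

def depth_from_elemental_disparity(disparity_px, pixel_pitch_mm,
                                   lens_pitch_mm, gap_mm, lens_separation=1):
    """Stereo-style depth cue between neighbouring elemental images.

    Two elemental lenses act like a stereo pair with baseline
    B = lens_separation * lens_pitch_mm and 'focal length' equal to the
    lens-array-to-sensor gap g, so z ~= g * B / d for a disparity d measured
    on the sensor.  All parameter names here are illustrative assumptions.
    """
    d_mm = np.asarray(disparity_px, float) * pixel_pitch_mm
    baseline = lens_separation * lens_pitch_mm
    return gap_mm * baseline / np.maximum(d_mm, 1e-9)

# Example: 3 px disparity, 10 um pixels, 1 mm lens pitch, 3 mm gap -> ~100 mm depth.
print(depth_from_elemental_disparity(3, 0.01, 1.0, 3.0))
```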

The Slit Beam Laser Calibration Method Based On Triangulation (삼각법을 이용한 슬릿 빔 레이저 캘리브레이션)

  • 주기세
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.05a / pp.168-173 / 1999
  • Many sensors, such as lasers and CCD cameras, have been used to obtain 3D information, but most calibration algorithms are inefficient because laser calibration requires large amounts of memory and experimental data. In this paper, a calibration algorithm for a slit-beam laser based on the triangulation method is introduced to calculate 3D information in the real world. The laser beam, mounted orthogonally on an XY table, is projected onto the floor, and a CCD camera observes the intersection of the light plane with the object plane. The 3D information is calculated from the observed and calibration data. The method saves memory and experimental data because the 3D information is obtained by simple triangulation.

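Complementing the closed-form triangulation relation sketched earlier, the calibration data mentioned in this abstract can be used to fit a compact mapping from the observed stripe position to object height, as in the sketch below. The measurement values are made-up examples and a first-order polynomial is an assumed model; the point is simply that a few fitted coefficients replace a large lookup table.

```python
import numpy as np

# Calibration measurements: the stripe's image row observed when a gauge block of
# known height is placed on the table (the numbers below are made-up examples).
rows_px = np.array([402.0, 388.5, 375.2, 361.8, 348.6])
heights_mm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])

# A low-order polynomial row -> height model keeps only a handful of coefficients,
# in the spirit of the abstract's point that triangulation-based calibration avoids
# storing large tables of experimental data.  (A strictly linear fit is often enough.)
coeffs = np.polyfit(rows_px, heights_mm, deg=1)
row_to_height = np.poly1d(coeffs)

# Convert a newly observed stripe row to an object height.
print(row_to_height(370.0))  # interpolated height in mm
```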

Reconstruction Of Photo-Realistic 3D Assets For Actual Objects Combining Photogrammetry And Computer Graphics (사진측량과 컴퓨터 그래픽의 결합을 통한 실제 물체의 사실적인 3D 에셋 재건)

  • Yan, Yong
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.147-161 / 2021
  • With photogrammetry techniques alone, what current research can achieve at present is a rough 3D mesh and color map of an object, rather than a usable photo-realistic 3D asset. This research proposes a new method for creating photo-realistic 3D assets that can be used in visualization applications by combining photogrammetry with computer graphics modeling. Through the production process of three real-world objects - "Bullet Box", "Gun", and "Metal Beverage Bottle" - it introduces in detail the concepts, functions, operating skills, and software packages used in each step: photographing the object, white balance, reconstruction, reconstruction cleanup, retopology, UV unwrapping, projection, texture baking, de-lighting, and creating material maps. To increase the flexibility of the method, alternative software packages are also recommended for each step. The resulting 3D assets are accurate in shape, correct in color, easy to render, and their textures respond physically to dynamic lighting. The new method obtains more realistic visual effects at a faster speed and does not require large teams or expensive equipment and software packages, so it is suitable for small studios, independent artists, and educational institutions.