• Title/Summary/Keyword: 3D scene access (3D 장면 접근)

Development of an X3D Python Language Binding Viewer Providing a 3D Data Interface (3D 데이터 인터페이스를 제공하는 X3D Python 언어 바인딩 뷰어 개발)

  • Kim, Ha Seong;Lee, Myeong Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.6
    • /
    • pp.243-250
    • /
    • 2021
  • With the growing development of 3D VR applications driven by recent VR/AR/MR technologies and by advances in 3D devices, interchangeability and portability of 3D data have become essential. 3D files should be processed in a standard data format for common use between applications. Providing standardized libraries and data structures along with a standard file format allows a more efficient system organization and avoids the unnecessary processing caused by using different file formats and data structures in different applications. In order to support a common data file and data structure, this research provides a programming binding tool for generating and storing standardized data so that various services can be developed by accessing common 3D files. To achieve this, the paper defines a common data structure, including classes and functions, for accessing X3D files with a standardized scheme using the Python programming language. It describes the implementation of a Python language binding viewer, an X3D VR viewer that renders standard X3D data files based on the language binding interface. The VR viewer includes Python-based 3D scene libraries and a data structure for the creation, modification, exchange, and transfer of X3D objects. In addition, the viewer displays X3D objects and processes events using these libraries and the data structure.
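
The abstract above describes a class-based Python binding for accessing X3D scene data. The following is only a minimal sketch of what such a binding could look like; the class names (X3DNode, Scene, Transform, Shape, Box) and the XML serialization are illustrative assumptions, not the paper's actual library.

```python
# Hypothetical sketch of a Python class binding for building and serializing
# an X3D-style scene graph; names and fields are illustrative only.

class X3DNode:
    """Base node; children form the scene graph."""
    def __init__(self, **fields):
        self.fields = fields
        self.children = []

    def add(self, node):
        """Append a child node and return it, so calls can be chained."""
        self.children.append(node)
        return node

    def to_xml(self, indent=0):
        """Serialize the subtree as X3D-style XML."""
        name = type(self).__name__
        attrs = " ".join(f'{k}="{v}"' for k, v in self.fields.items())
        tag = f"{name} {attrs}".strip()
        pad = "  " * indent
        if not self.children:
            return f"{pad}<{tag}/>"
        body = "\n".join(c.to_xml(indent + 1) for c in self.children)
        return f"{pad}<{tag}>\n{body}\n{pad}</{name}>"

class Scene(X3DNode): pass
class Transform(X3DNode): pass
class Shape(X3DNode): pass
class Box(X3DNode): pass

# Build a tiny scene and emit its X3D-style markup.
scene = Scene()
group = scene.add(Transform(translation="0 1 0"))
group.add(Shape()).add(Box(size="2 2 2"))
print(scene.to_xml())
```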

3D Motion of Objects in an Image Using Vanishing Points (소실점을 이용한 2차원 영상의 물체 변환)

  • 김대원;이동훈;정순기
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.11
    • /
    • pp.621-628
    • /
    • 2003
  • This paper addresses a method for giving objects in an image apparent 3D motion. Many researchers have approached this problem by reconstructing a 3D model from several images using image-based modeling techniques, or by building a cube-modeled scene from camera calibration using vanishing points. This paper, however, demonstrates the possibility of image-based motion without exact 3D information about scene geometry or camera calibration. The proposed system treats the image plane as a projective plane with respect to a viewpoint and models a 2D frame of a projected 3D object using only lines and points. The modeled frame then refers to its vanishing points as local coordinates when it is transformed.
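
As a worked illustration of the geometry involved (not code from the paper), a vanishing point can be obtained as the intersection of two projected parallel edges using homogeneous coordinates:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines; it is a vanishing point when
    the lines are images of parallel 3D edges."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]          # back to inhomogeneous image coordinates

# Two edges of a box that are parallel in 3D but converge in the image.
edge_a = line_through((100, 400), (300, 320))
edge_b = line_through((120, 600), (340, 480))
print("vanishing point:", intersect(edge_a, edge_b))
```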

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.6 no.4
    • /
    • pp.1-7
    • /
    • 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image space criteria are used; however, the switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
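
A minimal sketch of the representation-selection rule described above is given here; the thresholds and the MixedObject fields are illustrative assumptions, not WorldToolKit node types.

```python
# Illustrative sketch of per-object representation selection: full 3D model
# when interacting or close enough to perceive internal depth, billboard at
# intermediate range, environment-map entry at far range.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MixedObject:
    model: Optional[object]      # full 3D geometry, if available
    billboard: Optional[object]  # viewer-facing image sprite
    env_map: object              # far-range environment-map entry
    depth_radius: float          # distance at which internal depth becomes visible

def select_representation(obj, viewer_distance, interacting, far_range=100.0):
    # During interaction a 3D representation is always preferred, if it exists.
    if interacting and obj.model is not None:
        return obj.model
    # Switch to the 3D model once the user could perceive the object's depth.
    if obj.model is not None and viewer_distance < obj.depth_radius:
        return obj.model
    if obj.billboard is not None and viewer_distance < far_range:
        return obj.billboard
    return obj.env_map
```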

Denoising neural network to improve the foam effect via screen projection method (스크린 투영 방식의 거품 효과를 개선하기 위한 노이즈 제거 신경망)

  • Kim, Jong-Hyun;Kim, Donghui;Kim, Soo Kyun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.663-666
    • /
    • 2021
  • This paper introduces a framework that can express foam effects in large-scale water simulations, such as ocean scenes, in detail and without noise. The positions where foam is generated and the advection of the foam particles are computed using the existing screen projection approach. The projection map is crucial in this process, but noise arises when momentum is projected onto the discretized screen space. This paper resolves the problem efficiently by using a denoising neural network. Once the regions in which foam should be generated are selected from the projection map, foam particles are created by inverse-transforming the 2D screen space back into 3D space. As a result, the method stably produces clean foam effects without losing foam during the denoising process.
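
A minimal sketch of the denoising step, assuming PyTorch and a small residual convolutional network (the paper's actual architecture is not given here), could look like this:

```python
# Hypothetical small convolutional denoiser applied to a single-channel
# momentum projection map; architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class ProjectionMapDenoiser(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict the residual noise and subtract it from the noisy input.
        return x - self.net(x)

# Toy usage: denoise a noisy 128x128 projection map.
noisy_map = torch.rand(1, 1, 128, 128)
denoised = ProjectionMapDenoiser()(noisy_map)
print(denoised.shape)  # torch.Size([1, 1, 128, 128])
```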

A Study on Virtual Reality Techniques for Immersive Traditional Fairy Tale Contents Production (몰입형 전래동화 콘텐츠 제작을 위한 가상현실 기술에 대한 연구)

  • Jeong, Kisung;Han, Seunghun;Lee, Dongkyu;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.43-52
    • /
    • 2016
  • This paper studies virtual reality techniques for maximizing users' immersion in differentiated interactive content based on Korean traditional fairy tales. To raise interest in Korean traditional fairy tales, we produce interactive 3D content and propose a new approach to system design that applies virtual reality devices such as an HMD and Leap Motion. First, using a Korean traditional fairy tale, we generate interactive content consisting of scenes that heighten the user's tension through interaction during game progress. Based on the generated interactive content, we design scene generation with the Oculus HMD, a gaze-based input process, and a hand interface using Leap Motion, in order to provide multi-dimensional scene transmission and an input method that intensifies the sense of reality. Through diverse tests, we verify whether the proposed virtual reality content and input-processing techniques actually intensify immersion in the virtual reality while minimizing users' motion sickness.
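
As one hypothetical illustration of a gaze-based input process of the kind described (not the paper's implementation), a dwell-time selector fires once the HMD gaze ray has stayed on the same object long enough:

```python
# Illustrative dwell-time gaze selection; the dwell duration and update
# protocol are assumptions for the sketch.
DWELL_SECONDS = 1.5

class GazeSelector:
    def __init__(self, dwell=DWELL_SECONDS):
        self.dwell = dwell
        self.target = None
        self.elapsed = 0.0

    def update(self, gazed_object, dt):
        """Call once per frame with the object hit by the gaze ray (or None).
        Returns the selected object when the dwell time is reached."""
        if gazed_object is not self.target:
            self.target, self.elapsed = gazed_object, 0.0
            return None
        if self.target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell:
            self.elapsed = 0.0
            return self.target          # fire a selection event
        return None
```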

Recognition and Modeling of 3D Environment based on Local Invariant Features (지역적 불변특징 기반의 3차원 환경인식 및 모델링)

  • Jang, Dae-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.3
    • /
    • pp.31-39
    • /
    • 2006
  • This paper presents a novel approach to real-time recognition of 3D environments and objects for various applications such as intelligent robots, intelligent vehicles, and intelligent buildings. First, we establish the three fundamental principles that humans use for recognizing and interacting with the environment. These principles have led to the development of an integrated approach to real-time 3D recognition and modeling, as follows: 1) It starts with a rapid but approximate characterization of the geometric configuration of the workspace by identifying global plane features. 2) It quickly recognizes known objects in the environment and replaces them by their models in a database based on 3D registration. 3) It models the geometric details on the fly, adaptively to the needs of the given task, based on a multi-resolution octree representation. SIFT features with their 3D position data, referred to here as stereo-sis SIFT, are used extensively, together with point clouds, for fast extraction of global plane features, fast recognition of objects, and fast registration of scenes, as well as for overcoming the incomplete and noisy nature of point clouds.
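
The rapid plane-based characterization of the workspace can be illustrated with a simple RANSAC plane fit on a point cloud; this is only a sketch of that one component, with parameters chosen arbitrarily, not the paper's implementation.

```python
# Illustrative RANSAC fit of one dominant plane in a 3D point cloud.
import numpy as np

def fit_plane_ransac(points, iters=500, threshold=0.02, seed=None):
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Toy usage: a noisy horizontal plane plus random outliers.
plane_pts = np.c_[np.random.rand(500, 2), 0.01 * np.random.randn(500)]
outliers = np.random.rand(100, 3)
plane, mask = fit_plane_ransac(np.vstack([plane_pts, outliers]))
print("plane normal:", plane[0], "inliers:", int(mask.sum()))
```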

Recognition of 3D Environment for Intelligent Robots (지능로봇을 위한 3차원 환경인식)

  • Jang, Dae-Sik
    • Journal of Internet Computing and Services
    • /
    • v.7 no.5
    • /
    • pp.135-145
    • /
    • 2006
  • This paper presents a novel approach to real-time recognition of 3D environments and objects for intelligent robots. First, we establish the three fundamental principles that humans use for recognizing and interacting with the environment. These principles have led to the development of an integrated approach to real-time 3D recognition and modeling, as follows: 1) It starts with a rapid but approximate characterization of the geometric configuration of the workspace by identifying global plane features. 2) It quickly recognizes known objects in the environment and replaces them by their models in a database based on 3D registration. 3) It models the geometric details on the fly, adaptively to the needs of the given task, based on a multi-resolution octree representation. SIFT features with their 3D position data, referred to here as stereo-sis SIFT, are used extensively, together with point clouds, for fast extraction of global plane features, fast recognition of objects, and fast registration of scenes, as well as for overcoming the incomplete and noisy nature of point clouds. The experimental results show the feasibility of real-time and behavior-oriented 3D modeling of the workspace for robotic manipulation tasks.
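
The multi-resolution octree used for adaptive detail modeling can be sketched as follows; the per-point insertion depth stands in for the task-dependent level of detail, and everything here is an illustrative assumption rather than the paper's code.

```python
# Illustrative octree with task-adaptive insertion depth.
import numpy as np

class OctreeNode:
    def __init__(self, center, half_size):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.children = None          # lazily created child octants
        self.points = []

    def insert(self, point, max_depth):
        if max_depth == 0:
            self.points.append(point)
            return
        if self.children is None:
            self.children = {}
        # Child octant from the sign of each coordinate offset.
        octant = tuple(point[i] >= self.center[i] for i in range(3))
        if octant not in self.children:
            offset = np.where(octant, 0.5, -0.5) * self.half_size
            self.children[octant] = OctreeNode(self.center + offset,
                                               self.half_size / 2)
        self.children[octant].insert(point, max_depth - 1)

# Refine a coarse workspace cell only around a region of interest.
root = OctreeNode(center=(0, 0, 0), half_size=2.0)
for p in np.random.uniform(-2, 2, size=(1000, 3)):
    depth = 6 if np.linalg.norm(p) < 0.5 else 2   # finer near the task region
    root.insert(p, depth)
```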

Development of a Solid Modeler for Web-based Collaborative CAD System (웹 기반 협동CAD시스템의 솔리드 모델러 개발)

  • 김응곤;윤보열
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.5
    • /
    • pp.747-754
    • /
    • 2002
  • We propose a Web-based collaborative CAD system that is independent of any platform, and we develop a 3D solid modeler for the system. We developed a new prototype 3D solid modeler for the web using the Java 3D API, which can be executed without any additional 3D graphics software and works collaboratively, interacting with each user. The modeler can create primitive objects and obtain various 3D objects by using a loader. Interactive controls are available for manipulating objects, such as picking, translating, rotating, and zooming. Users connect to this solid modeler, create 3D objects, and modify them as they want. When this solid modeler is incorporated into a collaborative design system, it will prove its real worth in today's CAD systems. Moreover, if we improve this solid modeler by adding 3D graphics features such as rendering and animation, it will be able to support more detailed design and effect views.
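
The paper's modeler is built on the Java 3D API; purely as an illustration of the interactive transforms it exposes (translate, rotate, and zoom on a picked object), a numpy sketch of composing a model matrix might look like this:

```python
# Illustrative 4x4 transforms for a picked object; not Java 3D code.
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4); m[:3, 3] = (tx, ty, tz); return m

def rotate_y(angle_rad):
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    return m

def scale(factor):
    m = np.eye(4); m[:3, :3] *= factor; return m

# Compose the picked object's model matrix from the user's drag/wheel input.
model_matrix = translate(1.0, 0.0, -2.0) @ rotate_y(np.pi / 6) @ scale(1.5)
vertex = np.array([0.5, 0.5, 0.5, 1.0])       # a cube corner in object space
print(model_matrix @ vertex)                   # transformed into world space
```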

GA based Adaptive Sampling for Image-based Walkthrough (영상기반 항해를 위한 유전 알고리즘 기반 적응적 샘플링)

  • Lee, Dong-Hoon;Kim, Jong-Ryul;Jung, Soon-Ki
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.11a
    • /
    • pp.721-723
    • /
    • 2005
  • This paper proposes an image sampling algorithm for obtaining an optimal set of image samples for image-based walkthrough. Starting from an initially oversampled image sequence, the problem is formulated as a set covering problem based on a decremental sampling approach that selects the minimal set of samples guaranteeing adequate rendering quality over the entire scene. The region over which each viewpoint guarantees the best image quality is represented as a coverage region using a 3D warping algorithm, and the resulting set covering problem is cast as an optimization problem solved with a genetic algorithm. Experimental results show that obtaining the optimal solution with the proposed method yields satisfactory image-based walkthrough results.
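
A minimal sketch of the genetic-algorithm treatment of the set covering formulation (toy coverage data and parameters, not the paper's implementation) is shown below: each bit of a chromosome selects one candidate image sample, and fitness rewards full scene coverage first and fewer samples second.

```python
# Illustrative GA for set covering over toy per-sample coverage regions.
import random

coverage = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}, {1, 4}]   # regions per sample
universe = set().union(*coverage)

def fitness(bits):
    selected = [coverage[i] for i, b in enumerate(bits) if b]
    covered = set().union(*selected) if selected else set()
    missing = len(universe - covered)
    return missing * 100 + sum(bits)   # cover everything first, then minimize samples

def evolve(pop_size=30, generations=200, mutation=0.1):
    pop = [[random.randint(0, 1) for _ in coverage] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        next_pop = pop[:2]                                  # elitism
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)               # parents from the best
            cut = random.randrange(1, len(coverage))
            child = a[:cut] + b[cut:]                       # one-point crossover
            child = [1 - g if random.random() < mutation else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)

best = evolve()
print("selected samples:", [i for i, b in enumerate(best) if b])
```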

Development of a Web Service Generation System Using Virtual Environments (가상공간을 이용한 웹 서비스 생성 시스템 개발)

  • Park Chang-Keun;Lee Myeong Won
    • Journal of Internet Computing and Services
    • /
    • v.4 no.1
    • /
    • pp.27-37
    • /
    • 2003
  • This paper presents a Web service generation system using virtual environments and databases. Its main feature is that the environments and the databases are generated and maintained correspondingly: the virtual environments are changed automatically when the databases are updated, and the databases are likewise maintained as the information about the environments is modified in the scene. End users can modify the properties of the virtual environments directly in the scene using the VRML edit interface, which visualizes the structure of the virtual environments. Each object can be accessed through the VRML editor, its properties can be modified directly, and the information is updated in the database automatically; the Web service pages are maintained accordingly. In addition, we define a texture mapping method based on weighted view interpolation using two photo images per scene. A texture mapping interface is also provided so that end users can generate realistic images themselves.
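
The weighted view interpolation of two photographs into a single texture can be sketched as below; the cosine-based weights and the function names are illustrative assumptions, not the paper's exact method.

```python
# Illustrative viewpoint-weighted blend of two registered photographs.
import numpy as np

def view_weight(view_dir, photo_dir):
    """Weight grows as the viewing direction approaches the photo's direction."""
    cos = float(np.dot(view_dir, photo_dir) /
                (np.linalg.norm(view_dir) * np.linalg.norm(photo_dir)))
    return max(cos, 0.0)

def interpolate_texture(img_a, img_b, view_dir, dir_a, dir_b):
    wa, wb = view_weight(view_dir, dir_a), view_weight(view_dir, dir_b)
    if wa + wb == 0.0:
        wa = wb = 0.5
    wa, wb = wa / (wa + wb), wb / (wa + wb)
    return (wa * img_a + wb * img_b).astype(img_a.dtype)

# Toy usage with two 64x64 RGB "photos".
img_a = np.full((64, 64, 3), 200, dtype=np.uint8)
img_b = np.full((64, 64, 3), 50, dtype=np.uint8)
tex = interpolate_texture(img_a, img_b,
                          view_dir=np.array([0.2, 0.0, 1.0]),
                          dir_a=np.array([0.0, 0.0, 1.0]),
                          dir_b=np.array([1.0, 0.0, 1.0]))
print(tex[0, 0])
```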
