• Title/Summary/Keyword: Virtual Space Map

A Concurrency Control and a Collaborative Editing Mechanism in a Collaborative Virtual Environment for Designing a Game Map (게임 맵 디자인을 위한 협업 가상 환경에서의 동시성 제어 및 공동 편집 방법)

  • Park, Sung-Jun; Lee, Jun; Lim, Min-Gyu; Kim, Jee-In
    • Journal of Korea Game Society / v.11 no.4 / pp.15-26 / 2011
  • Game level design is a collaborative effort to create a virtual world for a computer game, including maps, agents, monsters, objects, players and events, based on a predefined game scenario, and it is a promising collaborative design application. Game level design generally requires considerable time and cost as the size of the target game space grows. However, traditional game level design tools provide neither concurrency control among multiple participating game designers nor consistent undo and redo for erroneous collaborative tasks during iterative modifications and updates by multiple designers. In this paper, we propose a concurrency control and collaborative editing mechanism to enhance the productivity of collaborative game level design. The proposed system organizes shared objects into hierarchical structures and applies a concurrency control mechanism to each object. It also provides a consistent undo and redo mechanism to support modifications and updates on intermediate results of the level design process.
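
The abstract combines two mechanisms: per-object concurrency control over a hierarchy of shared objects and a consistent undo/redo history. The sketch below is a minimal illustration of how such a pair could fit together; the names (MapNode, LockManager, History) and the locking rule are assumptions, not the paper's actual design.

```python
# Minimal sketch of per-object locking on a hierarchy of shared map objects,
# plus a shared undo/redo history. Illustrative only -- the class and method
# names (MapNode, LockManager, History) are hypothetical, not the paper's API.

class MapNode:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        self.owner = None                      # designer currently holding the lock
        if parent:
            parent.children.append(self)

    def ancestors(self):
        node = self.parent
        while node:
            yield node
            node = node.parent

class LockManager:
    """Grant a lock only if no ancestor or descendant is locked by someone else."""
    def try_lock(self, node, designer):
        if any(a.owner not in (None, designer) for a in node.ancestors()):
            return False
        stack = [node]
        while stack:
            n = stack.pop()
            if n.owner not in (None, designer):
                return False
            stack.extend(n.children)
        node.owner = designer
        return True

    def unlock(self, node, designer):
        if node.owner == designer:
            node.owner = None

class History:
    """Consistent undo/redo: each entry stores do/undo closures."""
    def __init__(self):
        self.undo_stack, self.redo_stack = [], []

    def apply(self, do, undo):
        do()
        self.undo_stack.append((do, undo))
        self.redo_stack.clear()                # a new edit invalidates the redo branch

    def undo(self):
        if self.undo_stack:
            do, undo = self.undo_stack.pop()
            undo()
            self.redo_stack.append((do, undo))

    def redo(self):
        if self.redo_stack:
            do, undo = self.redo_stack.pop()
            do()
            self.undo_stack.append((do, undo))

# Usage: lock a sub-tree, edit it, then undo the edit.
world = MapNode("world")
forest = MapNode("forest", world)
tree = MapNode("tree_01", forest)
locks, history = LockManager(), History()
assert locks.try_lock(forest, "designerA")     # locks the forest sub-tree
assert not locks.try_lock(tree, "designerB")   # blocked by designerA's lock
state = {"pos": (0, 0)}
history.apply(lambda: state.update(pos=(5, 3)), lambda: state.update(pos=(0, 0)))
history.undo()
assert state["pos"] == (0, 0)
```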

Generation of Multi-view Images Using Depth Map Decomposition and Edge Smoothing (깊이맵의 정보 분해와 경계 평탄 필터링을 이용한 다시점 영상 생성 방법)

  • Kim, Sung-Yeol; Lee, Sang-Beom; Kim, Yoo-Kyung; Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.471-482 / 2006
  • In this paper, we propose a new scheme for generating multi-view images using depth map decomposition and adaptive edge smoothing. After applying smoothing with an adaptive window size to edge regions of the depth map, we decompose the smoothed depth map into four types of images: regular mesh, object boundary, feature point, and number-of-layers images. We then generate 3-D scenes from the decomposed images using a 3-D mesh triangulation technique. Finally, we extract multi-view images from the reconstructed 3-D scenes by changing the position of a virtual camera in 3-D space. Experimental results show that our scheme generates multi-view images successfully, minimizing the rubber-sheet problem through edge smoothing, and renders consecutive 3-D scenes in real time through the information decomposition of depth maps. In addition, because the depth data are preserved, unlike in the previous asymmetric filtering method, the proposed scheme can be used for 3-D applications that need depth information, such as depth keying.
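
As a rough illustration of the edge-smoothing step described in the abstract, the sketch below blurs only the pixels near depth discontinuities and widens the averaging window as the depth jump grows. The gradient threshold and window sizes are made-up parameters, not the paper's values.

```python
# Edge-adaptive smoothing of a depth map: depth discontinuities are detected with a
# gradient threshold and averaged over a window whose size grows with the local jump.
import numpy as np

def smooth_depth_edges(depth, grad_thresh=8.0, max_half_window=4):
    h, w = depth.shape
    gy, gx = np.gradient(depth.astype(np.float64))
    grad = np.hypot(gx, gy)
    out = depth.astype(np.float64).copy()
    ys, xs = np.nonzero(grad > grad_thresh)          # edge pixels only
    for y, x in zip(ys, xs):
        # larger depth jump -> larger averaging window (adaptive size)
        k = min(max_half_window, 1 + int(grad[y, x] // grad_thresh))
        y0, y1 = max(0, y - k), min(h, y + k + 1)
        x0, x1 = max(0, x - k), min(w, x + k + 1)
        out[y, x] = depth[y0:y1, x0:x1].mean()
    return out

# Usage: a synthetic depth map with a sharp step gets its boundary softened,
# which reduces the rubber-sheet stretching when the mesh is triangulated.
depth = np.zeros((64, 64), dtype=np.float64)
depth[:, 32:] = 100.0
smoothed = smooth_depth_edges(depth)
print(depth[10, 30:34], "->", smoothed[10, 30:34])
```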

A 3D Terrain Reconstruction System using Navigation Information and Realtime-Updated Terrain Data (항법정보와 실시간 업데이트 지형 데이터를 사용한 3D 지형 재구축 시스템)

  • Baek, In-Sun; Um, Ky-Hyun; Cho, Kyung-Eun
    • Journal of Korea Game Society / v.10 no.6 / pp.157-168 / 2010
  • A terrain is an essential element for constructing a virtual world in which game characters and objects interact with one another in various ways. Creating a terrain requires a great deal of time and repetitive editing. This paper presents a 3D terrain reconstruction system that creates 3D terrain in virtual space from real terrain data. The system converts the height maps generated from a stereo camera and a laser scanner from the global GPS coordinate system into 3D world coordinates using the x and z axis vectors of the GPS coordinate system, and it calculates the movement vectors and rotation matrices frame by frame. Terrain meshes are dynamically generated and rendered for virtual areas represented as an undirected graph, and the rendered meshes are created and updated accurately by correcting errors in the terrain data. In our experiments, the FPS of the system was checked regularly while the terrain was reconstructed, and the visualization quality of the terrain was reviewed. As a result, our system achieves three times the FPS of other quadtree-based terrain management systems for small areas and about 40% higher FPS for large areas, while the visualized terrain keeps the same shape as the contours of the real terrain. The system could be used as the terrain component of real-time 3D games to generate terrain on the fly, and for terrain design work in CG movies.
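
The coordinate step in the abstract, mapping GPS-referenced height samples into a local x/z world frame before meshing, could look roughly like the sketch below. The equirectangular local approximation and the grid-to-mesh helper are stand-ins chosen for illustration, not the paper's actual transform.

```python
# Map GPS-referenced height samples into a local x/z world frame, then triangulate
# the height grid into a mesh. The local approximation is an assumption for small areas.
import math

EARTH_RADIUS_M = 6378137.0

def gps_to_local_xz(lat, lon, origin_lat, origin_lon):
    """Approximate metres east (x) and north (z) of the origin for small areas."""
    x = math.radians(lon - origin_lon) * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    z = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return x, z

def grid_to_mesh(heights, cell_size):
    """Turn a rows x cols height grid into vertices and triangle indices."""
    rows, cols = len(heights), len(heights[0])
    vertices = [(c * cell_size, heights[r][c], r * cell_size)
                for r in range(rows) for c in range(cols)]
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            triangles += [(i, i + cols, i + 1), (i + 1, i + cols, i + cols + 1)]
    return vertices, triangles

# Usage: place a 3x3 height patch relative to a GPS origin and build its mesh.
x, z = gps_to_local_xz(37.5670, 126.9785, 37.5665, 126.9780)
verts, tris = grid_to_mesh([[0, 1, 2], [1, 2, 3], [2, 3, 4]], cell_size=1.0)
print(f"patch offset ~({x:.1f} m, {z:.1f} m), {len(verts)} vertices, {len(tris)} triangles")
```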

Automated Construction of IndoorGML Data Using Point Cloud (포인트 클라우드를 이용한 IndoorGML 데이터의 자동적 구축)

  • Kim, Sung-Hwan; Li, Ki-Joune
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.611-622 / 2020
  • With the advancement of indoor positioning systems and measuring devices such as LiDAR (Light Detection And Ranging) and cameras, demand for analyzing and searching indoor spaces and for visualization services via virtual and augmented reality has been increasing rapidly. To this end, it is necessary to model 3D objects from data measured from real-world structures. It is also important to store these structured data in standardized formats to improve applicability and interoperability. In this paper, we propose a method to construct IndoorGML data, an international standard for indoor modeling, from point cloud data acquired by LiDAR sensors. After examining the considerations that should be addressed in IndoorGML data, we present a construction method consisting of free-space extraction and connectivity detection processes. Experimental results demonstrate that the proposed method can effectively reconstruct the 3D model from a point cloud.
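
The two stages named in the abstract, free-space extraction and connectivity detection, can be pictured on a toy 2D slice as below: grid cells not hit by any point are treated as free space, and adjacent free cells form the edges of a node/edge graph in the spirit of IndoorGML. The cell size and grid extent are assumptions.

```python
# Free-space extraction and connectivity detection on a toy 2D slice of a point cloud.
from itertools import product

def occupied_cells(points, cell=0.5):
    return {(int(x // cell), int(y // cell)) for x, y in points}

def free_space_graph(points, extent, cell=0.5):
    """Return free cells and the adjacency edges between 4-connected free cells."""
    occ = occupied_cells(points, cell)
    nx, ny = int(extent[0] // cell), int(extent[1] // cell)
    free = {(i, j) for i, j in product(range(nx), range(ny)) if (i, j) not in occ}
    edges = {tuple(sorted((c, (c[0] + dx, c[1] + dy))))
             for c in free for dx, dy in ((1, 0), (0, 1))
             if (c[0] + dx, c[1] + dy) in free}
    return free, edges

# Usage: scanned wall points split a 4 m x 2 m area; the graph connects the free cells.
wall = [(2.0, y / 10) for y in range(0, 12)]          # a wall segment at x = 2 m
free, edges = free_space_graph(wall, extent=(4.0, 2.0), cell=0.5)
print(len(free), "free cells,", len(edges), "connectivity edges")
```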

RDB Schema Model of XML Document for Storage Capacity and Searching Efficiency (저장 공간과 검색 효율을 위한 XML 문서의 RDB 스키마 모델)

  • Kim Jeong-Hee; Kwak Ho-Young; Kwon Hoon
    • The Journal of the Korea Contents Association / v.6 no.4 / pp.19-28 / 2006
  • XML instances used for information exchange are normally stored in legacy relational databases, so integration with relational databases is required for effective XML applications. To support this requirement, virtual decomposition and decomposition storage methods, which store the structures of instances separately in a relational database, have been researched. However, these methods keep different schema-structure and layer information, which makes query processing during search difficult and increases overhead because separate storage leads to duplicate saving. In this research, an additional field, 'Eltype', is therefore introduced into the existing database schema so that the instance and the schema structure are stored together, consistent level information is provided, and each field is mapped to a schema field of the relational database. As a result, XML instances and their structures can be stored together, minimizing overhead and the required storage space. In addition, the synchronized storage-layer structure makes search queries easier to process.
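
To make the schema idea concrete, the sketch below stores both the structure and the values of an XML instance in one relational table with an 'Eltype' column and a level column; apart from 'Eltype', the column names and table layout are illustrative guesses, not the paper's exact schema.

```python
# One relational table holding XML structure and values together, with an 'Eltype'
# column distinguishing elements from attributes and a 'level' column for depth.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE xml_node (
    id      INTEGER PRIMARY KEY,
    parent  INTEGER,
    name    TEXT,
    Eltype  TEXT,      -- 'element' or 'attribute'
    level   INTEGER,   -- depth in the document, shared by structure and instance
    value   TEXT)""")

def store(elem, parent=None, level=0):
    cur = conn.execute(
        "INSERT INTO xml_node(parent, name, Eltype, level, value) VALUES (?,?,?,?,?)",
        (parent, elem.tag, "element", level, (elem.text or "").strip()))
    node_id = cur.lastrowid
    for key, val in elem.attrib.items():
        conn.execute(
            "INSERT INTO xml_node(parent, name, Eltype, level, value) VALUES (?,?,?,?,?)",
            (node_id, key, "attribute", level + 1, val))
    for child in elem:
        store(child, node_id, level + 1)

# Usage: one small instance, stored once, then queried by structure.
store(ET.fromstring('<book id="b1"><title>XML in RDB</title></book>'))
for row in conn.execute("SELECT name, Eltype, level, value FROM xml_node ORDER BY id"):
    print(row)
```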

A Study on the Global Possibilities of Gugak Broadcasting as K-Music Content through the Metaverse Audition Platform

  • KIM, JOY
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.37-43 / 2022
  • This study examines the sustainability of K-Music beyond K-Pop through new media. New media are, literally, 'new' media: when TV, now classified as legacy media, first appeared, it was an innovative new media platform, whereas today it is considered the most traditional legacy media. The definition of new media therefore changes with the times, and most of what is called new media today is based on online and mobile platforms. This thesis focuses on popular music, including the crossover traditional music genre. We define popular music exported abroad as K-pop and propose the possibility of globalizing Korean music by using K-pop users and new media, a metaverse-based K-pop audition platform, as consumers and suppliers in the global market. Hallyu studies of K-Pop, such as research on attitudes toward and the economic effects of K-pop, on reactions to its spread, and on the fans who like it, are constantly being discussed in various ways, but there has been no cultural technology research linking the sustainability of Gugak, Korean traditional music, through new media to a K-pop business platform. As overflowing data pours into virtual space, the online world can become an open market that provides reliable information all over the world. We therefore propose the sustainability of Korean music through a 'Korean Traditional Music Broadcasting Metaverse Audition' that goes beyond the K-pop business model as K-Music content in the era of cultural technology.

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung; Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to its limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switch between the image and the 3D model occurs at the distance at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of the viewing distance, if one exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
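
A minimal sketch of the mixed-representation node described above: one scene-graph entry holds a 3D model, a billboard, and an environment-map reference, and picks one per frame from the viewing distance, except during interaction, where the 3D model is preferred. The fixed thresholds stand in for the paper's perceptual switching criterion and are assumptions.

```python
# One scene-graph node keeping several representations (3D model, billboard,
# environment map) and selecting one per frame from distance and interaction state.
from dataclasses import dataclass, field

@dataclass
class MixedLODNode:
    name: str
    representations: dict = field(default_factory=dict)   # e.g. {"model3d": ..., "billboard": ...}
    near_limit: float = 10.0     # below this, internal depth is visible -> 3D model
    far_limit: float = 60.0      # beyond this, fold the object into the environment map

    def select(self, distance, interacting=False):
        if interacting and "model3d" in self.representations:
            return "model3d"                      # interaction always gets full geometry
        if distance < self.near_limit:
            return "model3d"
        if distance < self.far_limit:
            return "billboard"
        return "envmap"

# Usage: the same object is rendered differently as the camera pulls back.
statue = MixedLODNode("statue", {"model3d": object(), "billboard": object(), "envmap": object()})
for d in (5.0, 30.0, 120.0):
    print(d, "->", statue.select(d))
print("grabbed at 120 m ->", statue.select(120.0, interacting=True))
```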

Preliminary Design and Implementation of 3D Sound Play Interface for Graphic Contents Developer (그래픽 콘텐츠 개발자를 위한 입체음 재생 인터페이스 기본 설계 및 구현)

  • Won, Yong-Tae; Jang, Bong-Seog; Ahn, Dong-Soon; Kwak, Hoon-Sung
    • Journal of Digital Contents Society / v.9 no.2 / pp.203-211 / 2008
  • Due to advances in hardware and software techniques for playing 3D sound, virtual spaces composed of 3D graphics and sound can provide users with greater realism and vividness. However, for small 3D content developers and companies, it is hard to implement 3D sound techniques because doing so requires expensive sound engines, technical understanding of 3D sound, and 3D sound programming skills. A 3D sound playing interface is therefore necessary for easy and cost-effective 3D sound implementation; using such an interface, graphics experts can easily add 3D sound techniques to their applications. In this paper, the following are designed and implemented as a preliminary stage in developing the 3D sound playing interface. First, we develop software modules that convert mono sound to 3D sound on PC-based systems. Second, we develop interconnection modules that map 3D graphic objects to sound sources. The developed modules allow the user to perceive sound source positions and surround effects while moving through the virtual world. In future work, we plan to develop a more complete 3D sound playing interface that includes synchronization of sound with moving objects and HRTF processing.
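
As a rough picture of the object-to-sound mapping such an interface exposes, the sketch below turns an object's position relative to the listener into left/right gains using constant-power panning and distance attenuation; this simple panning is only a stand-in for the HRTF processing mentioned in the abstract, and the numbers are assumptions.

```python
# Map a 3D object's position relative to the listener to stereo gains:
# constant-power panning for direction, 1/distance falloff for loudness.
import math

def object_to_stereo_gains(obj_pos, listener_pos, listener_yaw=0.0, ref_dist=1.0):
    dx = obj_pos[0] - listener_pos[0]
    dz = obj_pos[2] - listener_pos[2]
    dist = max(ref_dist, math.hypot(dx, dz))
    azimuth = math.atan2(dx, dz) - listener_yaw        # 0 = straight ahead
    # constant-power pan: map azimuth in [-pi/2, pi/2] onto [0, pi/2]
    pan = (max(-math.pi / 2, min(math.pi / 2, azimuth)) + math.pi / 2) / 2
    attenuation = ref_dist / dist                      # simple 1/d falloff
    return attenuation * math.cos(pan), attenuation * math.sin(pan)  # (left, right)

# Usage: a monster to the listener's right is louder in the right channel.
left, right = object_to_stereo_gains(obj_pos=(4.0, 0.0, 4.0), listener_pos=(0.0, 0.0, 0.0))
print(f"left={left:.2f}, right={right:.2f}")
```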

A Study of 3D Modeling of Compressed Urban LiDAR Data Using VRML (VRML을 이용한 도심지역 LiDAR 압축자료의 3차원 표현)

  • Jang, Young-Woon; Choi, Yun-Woong; Cho, Gi-Sung
    • Journal of Korean Society for Geospatial Information Science / v.19 no.2 / pp.3-8 / 2011
  • Recently, demand from enterprises for map services and portal-site services that provide a 3D virtual city model to public users has been expanding. When 3D information is provided through the web or mobile devices, data accuracy, transfer rate, and updating over time are considered increasingly important factors. With the latest technology, various kinds of 3D data can be viewed through the web. VRML has been actively adopted because it can provide a virtual display of the world and interaction through the web, requiring only a simple plug-in installed at no extra cost. A LiDAR system can obtain spatial data easily and accurately, as supported by numerous studies and applications. In general, however, LiDAR data are obtained as an irregular point cloud, so if the data are used without conversion, a powerful processor is needed to present the 3D point data, and the amount of data increases. This study expresses urban LiDAR data in 3D from 2D raster data to which a compression algorithm was applied in order to address the problems of large storage space and heavy processing. To express the data in 3D, an algorithm was developed that converts the compressed LiDAR data into VRML code. Finally, the urban area was expressed in 3D with the ground and features represented separately.
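
The raster-to-VRML conversion mentioned in the abstract could, in its simplest form, write a decompressed height grid as a VRML97 ElevationGrid node, as sketched below. The grid values and spacing are invented for illustration, and the paper's compression and decoding stages are not reproduced.

```python
# Write a small height grid as a VRML97 ElevationGrid node that a browser plug-in
# can render directly.
def heights_to_vrml(heights, spacing=1.0):
    rows, cols = len(heights), len(heights[0])
    flat = " ".join(f"{h:.2f}" for row in heights for h in row)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  appearance Appearance { material Material { diffuseColor 0.5 0.7 0.5 } }\n"
        "  geometry ElevationGrid {\n"
        f"    xDimension {cols}\n"
        f"    zDimension {rows}\n"
        f"    xSpacing {spacing}\n"
        f"    zSpacing {spacing}\n"
        f"    height [ {flat} ]\n"
        "  }\n"
        "}\n"
    )

# Usage: write a 3x3 urban patch to a .wrl file a VRML viewer can open.
patch = [[12.1, 12.3, 15.0], [12.2, 12.4, 15.2], [12.0, 12.5, 15.1]]
with open("patch.wrl", "w") as f:
    f.write(heights_to_vrml(patch, spacing=2.0))
```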

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.26 no.6 / pp.692-703 / 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points carrying 3D spatial coordinates and color information, and point clouds require huge data storage and high-performance computing devices to provide various services. Video-based Point Cloud Compression (V-PCC), currently being standardized by the international standards organization MPEG, is a projection-based method that projects the point cloud onto 2D planes and then compresses them using 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the Occupancy map, Geometry image, and Attribute image, together with auxiliary information that encodes the relationship between the 2D planes and 3D space. When increasing the density of a point cloud or enlarging an object, 3D calculations are generally used, but they are complicated and time-consuming, and it is difficult to determine the correct location of a new point. This paper proposes a method that generates additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected in V-PCC.
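
A minimal sketch of the 2D-interpolation idea, assuming toy stand-ins for the decoded occupancy and geometry maps: new samples are placed midway between adjacent occupied pixels and inherit the averaged depth, skipping jumps that would cross patch borders. The function name and the depth-gap threshold are assumptions, not part of V-PCC.

```python
# Densify a projected point cloud on its 2D geometry image instead of in 3D space.
import numpy as np

def densify_geometry_2d(occupancy, geometry, max_depth_gap=4):
    """Return (u, v, depth) samples: originals plus midpoints of adjacent occupied pixels."""
    points = [(float(u), float(v), float(geometry[v, u]))
              for v, u in zip(*np.nonzero(occupancy))]
    h, w = occupancy.shape
    for v in range(h):
        for u in range(w - 1):
            if occupancy[v, u] and occupancy[v, u + 1]:
                d0, d1 = int(geometry[v, u]), int(geometry[v, u + 1])
                if abs(d0 - d1) <= max_depth_gap:      # skip jumps across patch borders
                    points.append((u + 0.5, float(v), (d0 + d1) / 2.0))
    return points

# Usage: a 1x4 occupied strip gains interpolated samples between its pixels.
occ = np.array([[1, 1, 1, 1]], dtype=np.uint8)
geo = np.array([[10, 11, 12, 30]], dtype=np.uint8)
print(len(densify_geometry_2d(occ, geo)))   # 4 originals + 2 midpoints (the 12->30 gap is skipped)
```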