• Title/Summary/Keyword: 3D-Virtual Reality

Production of fusion-type realistic contents using 3D motion control technology (3D모션 컨트롤 기술을 이용한 융합형 실감 콘텐츠 제작)

  • Jeong, Sun-Ri;Chang, Seok-Joo
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.4
    • /
    • pp.146-151
    • /
    • 2019
  • In this paper, we developed multi-view video content based on realistic-media technology, produced a pilot using that production technology, and provided a realistic-content production technique that lets users select a desired viewing direction by offering images from various viewpoints. We also created multi-view video content through which local cultural tourism resources can be experienced indirectly, and produced cyber tour content based on multi-view video. This technology can be used to create 3D interactive realistic content for public education venues such as libraries, kindergartens, elementary and middle schools, universities for the elderly, housewives' classrooms, and lifelong education centers. The domestic VR market is still in its infancy and is expected to develop in combination with the 3D market for games and shopping malls. As domestic educational trends and the demand for a public social education system grow, this market is expected to expand gradually.
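
The viewpoint-selection step described above can be sketched as follows; the camera layout, angles, and function names are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: selecting the nearest camera stream in a
# multi-view video player, given the viewing direction a user requests.

def nearest_view(user_angle_deg, camera_angles_deg):
    """Return the index of the camera whose angle is closest to the
    requested viewing direction (angles measured on a circle)."""
    def circular_distance(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(range(len(camera_angles_deg)),
               key=lambda i: circular_distance(user_angle_deg,
                                               camera_angles_deg[i]))

# Five cameras placed every 45 degrees around the subject.
cameras = [0, 45, 90, 135, 180]
print(nearest_view(100, cameras))  # camera at 90 degrees -> index 2
```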

Efficient 3D Geometric Structure Inference and Modeling for Tensor Voting based Region Segmentation (효과적인 3차원 기하학적 구조 추정 및 모델링을 위한 텐서 보팅 기반 영역 분할)

  • Kim, Sang-Kyoon;Park, Soon-Young;Park, Jong-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.10-17
    • /
    • 2012
  • Image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours. In this paper, we propose a method for creating 3D virtual scenes from a single 2D image that is completely automatic and requires only one scene as input data. The proposed method is similar to the creation of a pop-up illustration in a children's book. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting to image segmentation. Tensor voting exploits the fact that pixels of a homogeneous region usually lie close together on a smooth surface, so the tokens corresponding to the centers of these regions have high saliency values. Our algorithm then labels regions of the input image into coarse categories: "ground", "sky", and "vertical". These labels are used to "cut and fold" the image into a pop-up model using a set of simple assumptions. The experimental results show that our method successfully segments coarse regions in many complex natural scene images and can create a 3D pop-up model that infers structure information from the segmented regions.
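
The saliency computation at the heart of tensor voting can be sketched in a heavily simplified 2D form; the Gaussian decay, the outer-product vote, and the saliency definitions below are a common textbook simplification, not the exact formulation used in the paper.

```python
import numpy as np

def tensor_voting_saliency(points, sigma=1.0):
    """For each 2D token, accumulate votes from its neighbours as
    Gaussian-weighted outer products of the unit direction vectors,
    then eigen-decompose: (lam1 - lam2) is stick (curve) saliency
    and lam2 is ball (region/junction) saliency."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    stick = np.zeros(n)
    ball = np.zeros(n)
    for i in range(n):
        T = np.zeros((2, 2))
        for j in range(n):
            if i == j:
                continue
            v = points[j] - points[i]
            d = np.linalg.norm(v)
            if d == 0:
                continue
            u = v / d
            T += np.exp(-(d * d) / (sigma * sigma)) * np.outer(u, u)
        lam = np.sort(np.linalg.eigvalsh(T))[::-1]  # lam1 >= lam2
        stick[i] = lam[0] - lam[1]
        ball[i] = lam[1]
    return stick, ball

# Tokens on a line give a highly anisotropic tensor (high stick saliency,
# near-zero ball saliency); tokens surrounded on all sides accumulate a
# more isotropic tensor, as at the center of a homogeneous region.
line = [(x, 0.0) for x in range(5)]
stick, ball = tensor_voting_saliency(line, sigma=2.0)
```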

Three-Dimensional GSIS for Determination of Optimal Route (3차원 GSIS를 이용한 최적노선 선정)

  • Kang, In-Joon;Choi, Hyun;Park, Hun-Shik
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.11 no.1 s.24
    • /
    • pp.71-75
    • /
    • 2003
  • Highways have changed greatly with sustained economic growth over a long period: traffic volumes have increased, vehicle performance has improved, vehicles have become larger, and speeds have risen. Optimal route selection models have been studied since the late 1980s, enabled by developments in computing and GSIS, and domestic research on optimal routes using digital terrain models has covered earthwork volume calculation, mass-curve output, and the construction of automated systems. Recently, driving simulation of highways and virtual reality using VGIS (Virtual Geographic Information System) have been studied. This study examines, through a 3D simulation method, the selection of alternative highway alignments in consideration of surrounding facilities, development plans, and estimated traffic volumes, together with the additional possibility of view analysis and environmental impact analysis.
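
A minimal sketch of route selection over a digital terrain model: Dijkstra's algorithm over an elevation grid, with a cost that penalizes elevation change so the route trades length against grade and earthwork. The grid, the penalty weight, and the cost function are illustrative assumptions, not the paper's model.

```python
import heapq

def optimal_route(elevation, start, goal, slope_penalty=10.0):
    """Dijkstra over a grid of elevations. Moving to a neighbouring cell
    costs one unit of distance plus a penalty proportional to the
    elevation change between the two cells."""
    rows, cols = len(elevation), len(elevation[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1.0 + slope_penalty * abs(elevation[nr][nc] - elevation[r][c])
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk the predecessor chain back from the goal.
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1]

# A ridge of high ground in the middle column pushes the route around it.
terrain = [
    [0, 5, 0],
    [0, 0, 0],
    [0, 5, 0],
]
print(optimal_route(terrain, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (1, 1), (1, 2), (0, 2)]
```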

D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Mani Golparvar-Fard;Feniosky Pena-Mora
    • International conference on construction engineering and project management
    • /
    • 2009.05a
    • /
    • pp.30-31
    • /
    • 2009
  • Early detection of schedule delay in field construction activities is vital to project management. It provides the opportunity to initiate remedial actions and increases the chance of controlling such overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach to progress monitoring that promptly identifies, processes, and communicates discrepancies between actual and as-planned performance. Despite its importance, systematic implementation of progress monitoring is challenging: (1) current progress monitoring is time-consuming, as it needs extensive as-planned and as-built data collection; (2) the excessive amount of work required may cause human error and reduce the quality of manually collected data, and since only an approximate visual inspection is usually performed, the collected data are subjective; (3) existing methods of progress monitoring are non-systematic and may create a time lag between when progress is reported and when it is actually accomplished; (4) progress reports are visually complex and do not reflect the spatial aspects of construction; and (5) current reporting methods increase the time required to describe and explain progress in coordination meetings, which in turn can delay decision making. In summary, with current methods it may not be easy to understand the progress situation clearly and quickly. To overcome these inefficiencies, this research explores the application of unsorted daily progress photograph logs - available on any construction site - together with IFC-based 4D models for progress monitoring. Our approach computes, from the images themselves, the photographers' locations and orientations, along with a sparse 3D geometric representation of the as-built scene, and superimposes the reconstructed scene over the as-planned 4D model.
Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be interactively explored. In addition, sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components; consequently, a location-based image processing technique can be implemented and progress data extracted automatically. The result of the progress comparison between as-planned and as-built performance can then be visualized in the D4AR (4D Augmented Reality) environment using a traffic-light metaphor. In such an environment, project participants can: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performance; and 5) visually represent progress discrepancies through superimposition of 4D as-planned models over progress photographs, make control decisions, and effectively communicate them to project participants. We present preliminary results on two ongoing construction projects and discuss implementation, perceived benefits, and potential future enhancements of this new technology in construction across automatic data collection, processing, and communication.
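
The traffic-light metaphor mentioned in the abstract can be sketched as a simple classification of the deviation between as-planned and as-built completion; the tolerance band and threshold values below are illustrative assumptions, not from the paper.

```python
def progress_color(planned_pct, built_pct, tolerance=5.0):
    """Traffic-light metaphor for progress deviation: green when the
    as-built percentage meets the as-planned one, yellow when the
    element is behind within a tolerance band, red beyond it."""
    deviation = planned_pct - built_pct
    if deviation <= 0:
        return "green"   # on or ahead of schedule
    if deviation <= tolerance:
        return "yellow"  # slightly behind
    return "red"         # behind schedule

# Hypothetical per-element report for a D4AR-style visualization.
report = {
    "foundation": progress_color(100, 100),  # green
    "columns":    progress_color(80, 77),    # yellow
    "slab":       progress_color(60, 40),    # red
}
```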

Implementation of Real-time Interactive Ray Tracing on GPU (GPU 기반의 실시간 인터렉티브 광선추적법 구현)

  • Bae, Sung-Min;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.7 no.3
    • /
    • pp.59-66
    • /
    • 2007
  • Ray tracing is one of the classical global illumination methods for generating photo-realistic rendered images with various lighting effects such as reflection and refraction. However, its computational load restricts its use in real-time applications. To overcome these limitations, much research on GPU (Graphics Processing Unit) based ray tracing has been presented. In this paper, we implement the ray tracing algorithm by J. Purcell and combine it with two methods to improve rendering performance for interactive applications. First, intersection points of the primary rays are determined efficiently using rasterization on graphics hardware. We then construct an acceleration structure over the 3D objects to further improve rendering performance. Few studies have analysed in detail the performance gains these considerations bring to ray tracing rendering. We compare our system with GPU-based environment mapping and implement a wireless remote rendering system. This system is useful for interactive applications such as real-time compositing, augmented reality, and virtual reality.
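
The core primitive test that such a ray tracer evaluates per pixel (on the GPU, in a fragment program) is the ray-sphere intersection; a minimal CPU sketch, with an illustrative scene rather than anything from the paper:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t where the ray
    origin + t*direction hits the sphere, or None if it misses.
    The direction is assumed to be normalized (so a = 1 in the
    quadratic a*t^2 + b*t + c = 0)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray down the z-axis hits a unit sphere centred 5 units away at t = 4.
t = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(t)  # 4.0
```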

Augmented Reality Based Tangible Interface For Digital Lighting of CAID System (CAID 시스템의 디지털 라이팅을 위한 증강 현실 기반의 실체적 인터페이스에 관한 연구)

  • Hwang, Jung-Ah;Nam, Tek-Jin
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.119-128
    • /
    • 2007
  • With the development of digital technologies, CAID has become an essential part of the industrial design process. Creating photo-realistic images from a virtual scene with 3D models is one of the specialized tasks for CAID users. It requires a complex interface for setting the positions and parameters of the camera and lights to obtain optimal rendering results. However, the user interfaces of existing CAID tools are not simple for designers, because the task is mostly accomplished in a parameter-setting dialogue window. This research addresses these interface issues, in particular those related to lighting, by developing and evaluating TLS (Tangible Lighting Studio), which uses augmented reality and a tangible user interface. The interface for positioning objects and setting parameters becomes tangible and is distributed in the workspace to support a more intuitive rendering task. TLS consists of markers, physical controllers, and a see-through HMD (head-mounted display). The user can directly control the lighting parameters in the AR workspace. In an evaluation experiment, TLS provided higher effectiveness, efficiency, and user satisfaction than the existing GUI (graphical user interface) method. We expect that the application of TLS can be expanded to photography education and architecture simulation.

A Study on Enhancement of 3D Sound Using Improved HRTFS (개선된 머리전달함수를 이용한 3차원 입체음향 성능 개선 연구)

  • Koo, Kyo-Sik;Cha, Hyung-Tai
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.557-565
    • /
    • 2009
  • To perceive the direction and distance of a sound, listeners rely on several cues. The head-related transfer function (HRTF) captures how sound travels from a source to the listener's ears, including differences in level, phase, and frequency spectrum. For two-channel reproduction systems, HRTFs are applied in many algorithms that produce 3D sound. However, localization fails for sources in certain regions, a problem known as the cone of confusion. In this paper, we propose a new algorithm to reduce this confusion in sound image localization. Differences in frequency spectrum, together with psychoacoustic theory, are used to boost the spectral cues that distinguish directions. To confirm the performance of the algorithm, informal listening tests were carried out. As a result, we can produce improved 3D sound in a headphone-based two-channel system, with sound quality much better than that of conventional methods.
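
The basic two-channel rendering step that such algorithms build on can be sketched as convolution with the head-related impulse responses (HRIRs, the time-domain form of the HRTF); the toy impulse responses below are illustrative, and the paper's spectral-cue boosting is not reproduced here.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialize a mono signal for headphone playback by convolving it
    with the left- and right-ear HRIRs measured for the target
    direction, yielding a 2 x N stereo array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Toy HRIRs: the right ear receives the source one sample later and
# quieter, mimicking a source located to the listener's left.
mono = np.array([1.0, 0.5, 0.25])
hrir_l = np.array([1.0, 0.0])
hrir_r = np.array([0.0, 0.6])
stereo = binaural_render(mono, hrir_l, hrir_r)
```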

Character Motion Control by Using Limited Sensors and Animation Data (제한된 모션 센서와 애니메이션 데이터를 이용한 캐릭터 동작 제어)

  • Bae, Tae Sung;Lee, Eun Ji;Kim, Ha Eun;Park, Minji;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.85-92
    • /
    • 2019
  • A 3D virtual character playing a role in digital storytelling has a unique style in its appearance and motion. Because the style reflects the unique personality of the character, preserving the style and keeping it consistent is very important. However, when the character's motion is directly controlled by the motion of a user wearing motion sensors, the unique style can be lost. We present a novel character motion control method that uses only a small amount of animation data, created solely for the character, to preserve the style of the character's motion. Instead of machine learning approaches requiring a large amount of training data, we suggest a search-based method, which directly searches the animation data for the character pose most similar to the current user's pose. To show the usability of our method, we conducted experiments with a character model and its animation data created by an expert designer for a virtual reality game. To show that our method preserves the original motion style of the character, we compared our result with the result obtained using general human motion capture data. In addition, to show the scalability of our method, we present experimental results with different numbers of motion sensors.
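
The search step can be sketched as a nearest-neighbour query over the animation frames; the joint layout, the Euclidean distance metric, and the example poses are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def nearest_character_pose(sensor_pose, animation_poses):
    """Search-based motion control: rather than learning a mapping,
    return the index of the animation frame whose tracked-joint
    positions are closest (Frobenius/Euclidean) to the current
    sensor readings."""
    poses = np.asarray(animation_poses, dtype=float)   # (frames, joints, 3)
    query = np.asarray(sensor_pose, dtype=float)       # (joints, 3)
    dists = np.linalg.norm(poses - query, axis=(1, 2)) # per-frame distance
    return int(np.argmin(dists))

# Three animation frames, each with two tracked joints (head, hand) in 3D.
frames = [
    [[0, 1.7, 0], [0.3, 1.0, 0.2]],   # idle
    [[0, 1.7, 0], [0.6, 1.5, 0.3]],   # hand raised
    [[0, 1.2, 0], [0.3, 0.6, 0.2]],   # crouch
]
sensors = [[0, 1.68, 0], [0.55, 1.45, 0.3]]
print(nearest_character_pose(sensors, frames))  # 1
```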

A Watermarking Algorithm of 3D Mesh Model Using Spherical Parameterization (구면 파라미터기법을 이용한 3차원 메쉬 모델의 워터마킹 알고리즘)

  • Cui, Ji-Zhe;Kim, Jong-Weon;Choi, Jong-Uk
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.18 no.1
    • /
    • pp.149-159
    • /
    • 2008
  • In this paper, we propose a blind watermarking algorithm for 3D mesh models using spherical parameterization. Spherical parameterization is a useful method applicable to 3D data processing: features of the vertex distribution of a 3D mesh model that cannot be analysed in the orthogonal coordinate system can be analysed and processed in the spherical one. We set the centroid of the 3D model as the origin of the spherical coordinate system, transformed the orthogonal coordinates into spherical coordinates, and then applied the spherical parameterization. The watermark was embedded via addition/modification of vertices after a feature analysis of the geometrical and topological information. The algorithm is robust against typical geometrical attacks such as translation, scaling, and rotation, and also against mesh reordering, file format change, mesh simplification, and smoothing; about 90~98% of the watermark information can be extracted from an attacked model. This makes the algorithm applicable to games, virtual reality, and rapid prototyping.
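
The centroid-centred spherical idea can be sketched as embedding one bit per vertex by nudging the vertex's radius about the centroid; this is a deliberately simplified, non-blind illustration (the paper's scheme is blind and selects vertices by feature analysis), and all names and parameters below are assumptions.

```python
import numpy as np

def embed_watermark(vertices, bits, strength=0.01):
    """Move the model to its centroid and embed one watermark bit per
    vertex (here simply the first len(bits) vertices) by scaling the
    vertex's radius up (bit 1) or down (bit 0). Scaling the position
    vector changes only r, leaving the angles theta and phi intact."""
    v = np.asarray(vertices, dtype=float)
    centroid = v.mean(axis=0)
    p = v - centroid                      # centroid-centred coordinates
    for i, bit in enumerate(bits):
        scale = 1 + strength if bit else 1 - strength
        p[i] *= scale
    return p + centroid

def extract_watermark(marked, original, n_bits):
    """Non-blind extraction: compare marked and original radii about
    the original model's centroid."""
    centroid = np.asarray(original, dtype=float).mean(axis=0)
    r_marked = np.linalg.norm(np.asarray(marked, dtype=float) - centroid, axis=1)
    r_orig = np.linalg.norm(np.asarray(original, dtype=float) - centroid, axis=1)
    return [1 if rm > ro else 0
            for rm, ro in zip(r_marked[:n_bits], r_orig[:n_bits])]

# Embed three bits into the corners of a cube and read them back.
cube = [[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
marked = embed_watermark(cube, [1, 0, 1])
print(extract_watermark(marked, cube, 3))  # [1, 0, 1]
```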

Research on the Visual Elements of VR game (VR 게임 <엘더 스크롤 V : 스카이 림 VR>의 시각 요소 연구)

  • JIANG, QIANQIAN;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.18 no.12
    • /
    • pp.507-512
    • /
    • 2020
  • In recent years, various virtual reality games have become commercialized, and a variety of large 3D online games popular with users have begun to be transformed into VR games. <The Elder Scrolls V: Skyrim VR>, a derivative of the large-scale 3D online game <The Elder Scrolls V: Skyrim Special Edition>, has firmly maintained a leading position among VR games since its launch on the Steam platform in April 2018, because its original UI design maximizes the user's immersive experience. Although <The Elder Scrolls V: Skyrim VR> and <The Elder Scrolls V: Skyrim Special Edition> are visually very similar, the visual elements of their UIs differ considerably. This paper studies five visual UI elements that enhance immersion in the game: characters, scenes, operating interfaces, colors, and fonts. It provides an interface design reference for the immersive experience of future VR games.