• Title/Summary/Keywords: realistic (immersive) content generation


A Suggestion for Structure of Interactive Storytelling that Mediates Online and Offline: Focusing on the Comparison between ARG and AR Games (온·오프라인 매개 인터랙티브 스토리텔링 구조 제안 : 대체현실게임과 AR게임의 비교를 중심으로)

  • Kim, Ji-Young;Kwon, Byung-Woong
    • The Journal of the Korea Contents Association / v.21 no.6 / pp.687-700 / 2021
  • The advent of realistic technologies such as AR has extended the interaction area from the computer environment to offline space. As demand is expected to increase, the need for research on interactive storytelling that mediates online and offline is emerging. This study proposes a storytelling structure for achieving a balance between interactivity and narrativity in interactive narratives characterized by online-offline mediation. A case study of ARGs and AR games based on Henry Jenkins' theory of 'Environmental Storytelling' shows that there should be a balance between the space designed by the game designer and the space created by the player's interaction, and that roles should be properly distributed across both online and offline spaces so that they jointly contribute to the formation of the narrative. In addition, it is necessary to borrow the characteristics of ARGs, which achieve a balance of interactivity and narrativity based on offline spatiality. The significance of this study is that it expands the area of interactive storytelling, which has mostly been discussed with a focus on online environments, to offline, and suggests the interaction area as a factor to consider. As a basic study of storytelling that mediates online and offline, it is also expected to provide a direction for the development of content based on realistic technologies.

Producing Stereoscopic Video Contents Using Transformation of Character Objects (캐릭터 객체의 변환을 이용하는 입체 동영상 콘텐츠 제작)

  • Won, Jee-Yean;Lee, Kwan-Wook;Kim, Man-Bae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.307-309 / 2010
  • In this paper, we propose a method that uses a depth map to create stereoscopic video with 'living' objects. In a living-object stereoscopic video, each object in the input image is made to move, so that living objects can be seen even while watching a 2D image. The proposed system was implemented in the C language: given a single image, a graphics tool is used to generate a background image, a mask image, a background depth map, and an object depth map. Using the input and mask images, each object is translated, rotated, and scaled so that it effectively becomes a living object, and the transformed image is combined with the depth maps to produce a realistic stereoscopic video. For the experiment, 'Dano Pungjeong' by Shin Yun-bok, a painter of the Joseon Dynasty, was converted into a stereoscopic video.
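As a rough illustration of the final rendering step implied by this abstract (the authors' system is written in C and its details are not given), the following Python/NumPy sketch shifts each pixel horizontally by a disparity derived from an 8-bit depth map to form a left/right stereo pair; the maximum-disparity parameter and the placeholder inputs are assumptions, and hole filling is omitted.

```python
# Hypothetical sketch: generate a left/right stereo pair from an image and
# an 8-bit depth map by horizontal pixel shifting (simple depth-based rendering).
import numpy as np

def render_stereo_pair(image, depth, max_disparity=16):
    """image: HxWx3 uint8, depth: HxW uint8 (255 = near). Returns (left, right)."""
    h, w = depth.shape
    # Map depth to a disparity in pixels (nearer objects shift more).
    disparity = (depth.astype(np.float32) / 255.0) * max_disparity
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        shift = (disparity[y] / 2).astype(np.int32)
        xl = np.clip(xs + shift, 0, w - 1)   # left-eye view: shift right
        xr = np.clip(xs - shift, 0, w - 1)   # right-eye view: shift left
        left[y, xl] = image[y]
        right[y, xr] = image[y]
    return left, right

if __name__ == "__main__":
    img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # placeholder image
    dep = np.random.randint(0, 255, (480, 640), dtype=np.uint8)     # placeholder depth map
    left_view, right_view = render_stereo_pair(img, dep)
```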


Overview and Performance analysis of the HEVC based 3D Video Coding (HEVC 기반 3차원 비디오 부호화 기법 성능 분석)

  • Park, Daemin;Choi, Haechul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2013.11a / pp.186-189 / 2013
  • In response to user demand for a variety of 3D contents, research is under way on high-quality 3D broadcasting services at HD (high definition) and higher resolutions (FHD, UHD), and 3D video technology, which is attracting attention as a next-generation imaging technology, can provide users with realistic images. However, since it is impractical to capture every viewpoint, a method is needed that reduces the number of transmitted views by using camera depth information and synthesizes view images, thereby generating more viewpoints than the number of cameras actually used. The 3D Video Coding (3DVC) activity of MPEG (Moving Picture Experts Group), the international standardization body, is standardizing efficient coding techniques for 3D video with depth images. This paper introduces the standard tools used in HEVC-based 3D-HEVC and analyzes the performance of the tools currently in use.
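View synthesis of the kind described here relies on converting a transmitted depth map into per-pixel disparities toward a virtual camera position. The sketch below is a simplification and not part of the 3D-HEVC reference software; it assumes the common 8-bit inverse-depth quantization with near/far clipping planes and a rectified horizontal camera rig, and the focal length and baseline values are placeholders.

```python
# Hypothetical sketch: convert an 8-bit depth map into pixel disparities
# for synthesizing a virtual view on a rectified horizontal camera rig.
import numpy as np

def depth_to_disparity(depth8, z_near, z_far, focal_px, baseline_m):
    """depth8: HxW uint8 samples (255 = nearest). Returns disparity in pixels."""
    v = depth8.astype(np.float64) / 255.0
    # Inverse-depth quantization commonly used for depth maps in 3D video.
    z = 1.0 / (v * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    # For a rectified horizontal rig: disparity = focal_length * baseline / depth.
    return focal_px * baseline_m / z

if __name__ == "__main__":
    depth = np.full((768, 1024), 128, dtype=np.uint8)           # placeholder depth map
    disp = depth_to_disparity(depth, z_near=0.5, z_far=50.0,    # placeholder camera setup
                              focal_px=1000.0, baseline_m=0.05)
    print(disp.mean())
```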


Development of a Chameleonic Pin-Art Equipment for Generating Realistic Solid Shapes (실감 입체 형상 생성을 위한 카멜레온형 핀아트 장치 개발)

  • Kwon, Ohung;Kim, Jinyoung;Lee, Sulhee;Kim, Juhea;Lee, Sang-won;Cho, Jayang;Kim, Hyungtae
    • Journal of Broadcast Engineering / v.25 no.4 / pp.497-506 / 2020
  • The chameleonic surface proposed in this study is a pin-art 3D display device for generating arbitrary shapes. A smooth, continuous surface is formed by slim telescopic actuators and a high-elasticity composite material, and realistic 3D shapes are generated continuously by projecting dynamic mapping images onto the surface. The slim telescopic actuator was designed to provide a long stroke while minimizing the area required for stacking. A 3D shape is formed by thrusting and extruding the high-elasticity material with multiple telescopic actuators; this structure is advantageous for generating arbitrary continuous surfaces, projecting dynamic images, and reducing weight. For real-time synchronization, a distributed controller based on EtherCAT was applied to operate hundreds of telescopic actuators smoothly. Integrated operating software consecutively generates realistic scenes by coordinating the extruded shapes with 3D images projected from multiple projectors. An opera piece was optimized for the chameleonic surface and shown to an audience in an actual concert.
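One sub-problem implied by this description is turning a target surface (for example, a height field of the scene to be displayed) into stroke commands for the grid of telescopic actuators. The sketch below is only a guess at that mapping layer, unrelated to the EtherCAT controller itself; the grid size, stroke range, and patch-averaging scheme are assumptions.

```python
# Hypothetical sketch: sample a target height field onto a pin grid and
# clamp the result to the stroke range of each telescopic actuator.
import numpy as np

def height_field_to_strokes(height_mm, grid_rows, grid_cols, max_stroke_mm):
    """height_mm: HxW array of desired surface heights. Returns grid_rows x grid_cols strokes."""
    h, w = height_mm.shape
    rows = np.array_split(np.arange(h), grid_rows)
    cols = np.array_split(np.arange(w), grid_cols)
    strokes = np.zeros((grid_rows, grid_cols))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            # Average the height field over the patch covered by each pin.
            strokes[i, j] = height_mm[np.ix_(r, c)].mean()
    return np.clip(strokes, 0.0, max_stroke_mm)

if __name__ == "__main__":
    target = np.random.rand(240, 320) * 300.0            # placeholder height field (mm)
    cmd = height_field_to_strokes(target, 16, 24, 250.0)  # placeholder 16x24 pin grid, 250 mm stroke
```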

Reconstruction of the Lost Hair Depth for 3D Human Actor Modeling (3차원 배우 모델링을 위한 깊이 영상의 손실된 머리카락 영역 복원)

  • Cho, Ji-Ho;Chang, In-Yeop;Lee, Kwan-H.
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.1-9 / 2007
  • In this paper, we propose a technique for reconstructing the lost hair region for 3D human actor modeling. An active depth sensor system can capture both color and geometry information of objects simultaneously in real time, but it fails to acquire regions whose surfaces are shiny or dark. To obtain a natural 3D human model, such lost regions in the depth image, especially the hair region, must be recovered. The recovery uses both the color and depth images: the hair region is first detected from the color image, its boundary is estimated, and the interior depth is then filled using an interpolation technique and a closing operation. A 3D mesh model is generated through a series of operations including adaptive sampling, triangulation, mesh smoothing, and texture mapping. The proposed method generates the recovered 3D mesh stream automatically, and the final 3D human model allows view interaction or haptic interaction in a realistic broadcasting system.
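As a rough analogue of the recovery step described above (detect the hair region in the color image, then fill the missing depth inside it with interpolation and a closing operation), here is a hedged OpenCV sketch; the dark-pixel threshold and kernel size are placeholders, and the authors' actual segmentation and interpolation may well differ.

```python
# Hypothetical sketch: fill lost depth in the hair region using the color
# image for segmentation, inpainting for interpolation, and a closing step.
import cv2
import numpy as np

def recover_hair_depth(color_bgr, depth8, dark_thresh=60):
    """color_bgr: HxWx3 uint8, depth8: HxW uint8 with 0 where depth was lost."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    # Crude hair detection: dark pixels in the color image with no depth sample.
    hair_mask = ((gray < dark_thresh) & (depth8 == 0)).astype(np.uint8) * 255
    # Interpolate the missing depth from surrounding valid samples.
    filled = cv2.inpaint(depth8, hair_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    # Closing smooths small holes left inside the recovered region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(filled, cv2.MORPH_CLOSE, kernel)

if __name__ == "__main__":
    color = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # placeholder inputs
    depth = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    recovered = recover_hair_depth(color, depth)
```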


3DTIP: 3D Stereoscopic Tour-Into-Picture of Korean Traditional Paintings (3DTIP: 한국 고전화의 3차원 입체 Tour-Into-Picture)

  • Jo, Cheol-Yong;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.14 no.5 / pp.616-624 / 2009
  • This paper presents a 3D stereoscopic TIP (Tour Into Picture) for Korean classical paintings composed of persons, boats, and landscape. Unlike conventional TIP methods that produce 2D images or video, the proposed TIP provides users with 3D stereoscopic content; navigating a picture with stereoscopic viewing delivers more realistic and immersive perception. The method first prepares input data consisting of a foreground mask, a background image, and a depth map. The second step navigates the picture and obtains rendered images by orthographic or perspective projection. Two depth-enhancement schemes, a depth template and Laws depth, are then used to reduce the cardboard effect and thus enhance the perceived 3D depth of the foreground objects. In experiments, the proposed method was tested on 'Danopungjun' and 'Muyigido', famous paintings of the Chosun Dynasty, and the stereoscopic animation was shown to deliver new 3D perception compared with 2D video.
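The 'depth template' mentioned above replaces an object's flat (cardboard) depth with a smooth gradient inside its mask so the object gains internal relief. The sketch below only illustrates that general notion with a simple ellipsoidal template; it is an assumption about the idea, not the paper's exact template or its Laws-based refinement.

```python
# Hypothetical sketch: give a foreground object internal depth relief by
# blending an ellipsoidal depth template inside its binary mask.
import numpy as np

def apply_depth_template(depth8, mask, relief=30):
    """depth8: HxW uint8 depth map, mask: HxW bool foreground mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return depth8
    cy, cx = ys.mean(), xs.mean()
    ry, rx = max(np.ptp(ys) / 2.0, 1.0), max(np.ptp(xs) / 2.0, 1.0)
    yy, xx = np.mgrid[0:depth8.shape[0], 0:depth8.shape[1]]
    # Ellipsoidal bulge: maximal at the object center, zero at its extent.
    r2 = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2
    bulge = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0)) * relief
    out = depth8.astype(np.float32)
    out[mask] += bulge[mask]
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    depth = np.full((480, 640), 100, dtype=np.uint8)   # placeholder flat object depth
    obj_mask = np.zeros((480, 640), dtype=bool)
    obj_mask[150:350, 250:420] = True                   # placeholder object mask
    enhanced = apply_depth_template(depth, obj_mask)
```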

3D Library Platform Construction using Drone Images and its Application to Kangwha Dolmen (드론 촬영 영상을 활용한 3D 라이브러리 플랫폼 구축 및 강화지석묘에의 적용)

  • Kim, Kyoung-Ho;Kim, Min-Jung;Lee, Jeongjin
    • Cartoon and Animation Studies / s.48 / pp.199-215 / 2017
  • Although drones were originally built for military purposes, they are now used for general-purpose applications and are actively employed for content creation and image acquisition. In this paper, we develop a 3D library module platform using 3D mesh model data generated from drone images and their point cloud. First, a large set of 2D images is captured by a drone, a point cloud is generated from these images, and a 3D mesh is acquired from the point cloud. We then develop a service library platform that makes the converted 3D data available for multi-purpose use. Our platform with 3D data can minimize the cost and time of content creation for special effects during the production of a movie, drama, or documentary, and can contribute to training experts in digital content production in the fields of realistic media, special imaging, and exhibitions.
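The pipeline described here (drone photographs, then a point cloud, then a 3D mesh for a reusable library) is commonly prototyped with photogrammetry output plus a surface-reconstruction step. The hedged sketch below uses the Open3D library; the file names, normal-estimation parameters, and Poisson depth are placeholders, and the paper does not state which tools were actually used.

```python
# Hypothetical sketch: turn a photogrammetry point cloud into a triangle
# mesh with Poisson surface reconstruction (Open3D), then save it.
import open3d as o3d

def point_cloud_to_mesh(ply_path, out_path, poisson_depth=9):
    pcd = o3d.io.read_point_cloud(ply_path)  # point cloud derived from drone images
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=poisson_depth)
    mesh.compute_vertex_normals()
    o3d.io.write_triangle_mesh(out_path, mesh)
    return mesh

if __name__ == "__main__":
    # Placeholder paths; the point cloud would come from a photogrammetry tool.
    point_cloud_to_mesh("dolmen_points.ply", "dolmen_mesh.obj")
```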

Optimal Camera Placement Learning of Multiple Cameras for 3D Environment Reconstruction (3차원 환경 복원을 위한 다수 카메라 최적 배치 학습 기법)

  • Kim, Ju-hwan;Jo, Dongsik
    • Smart Media Journal / v.11 no.9 / pp.75-80 / 2022
  • Recently, research and development on immersive virtual reality (VR) technology that provides a realistic experience has been widely conducted. To give VR participants a realistic experience, virtual environments should be built as highly realistic environments using 3D reconstruction. In this paper, to acquire 3D information of a real space with multiple cameras during the reconstruction process, we propose a novel method of optimal camera placement that minimizes the distortion of the 3D information and enables accurate reconstruction. With this approach, real 3D information can be obtained with minimal error during environment reconstruction, and the resulting virtual environment can provide a more immersive experience.
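The abstract does not detail the learning procedure, but the underlying objective — place several cameras so that as much of the capture volume as possible is observed from multiple views — can be illustrated with a simple random-search baseline. The sketch below is only such a baseline under assumed field-of-view and range parameters, not the paper's method.

```python
# Hypothetical sketch: score candidate multi-camera placements by how many
# sample points in the capture volume are covered by at least two cameras.
import numpy as np

rng = np.random.default_rng(0)

def covered(points, cam_pos, cam_dir, fov_deg=60.0, max_range=5.0):
    """Boolean mask of points inside one camera's viewing cone."""
    v = points - cam_pos
    dist = np.linalg.norm(v, axis=1)
    cosang = (v @ cam_dir) / np.maximum(dist, 1e-9)
    return (dist < max_range) & (cosang > np.cos(np.radians(fov_deg / 2)))

def placement_score(points, cams):
    counts = sum(covered(points, p, d) for p, d in cams)
    return np.mean(counts >= 2)  # fraction of points seen by at least two cameras

def random_search(points, n_cams=4, iters=200):
    best, best_score = None, -1.0
    for _ in range(iters):
        cams = []
        for _ in range(n_cams):
            pos = rng.uniform([-3, -3, 1], [3, 3, 3])  # placeholder room bounds
            d = -pos / np.linalg.norm(pos)              # aim roughly at the center
            cams.append((pos, d))
        s = placement_score(points, cams)
        if s > best_score:
            best, best_score = cams, s
    return best, best_score

if __name__ == "__main__":
    volume = rng.uniform([-1, -1, 0], [1, 1, 2], size=(2000, 3))  # placeholder capture volume
    cams, score = random_search(volume)
    print(f"best coverage: {score:.2f}")
```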

Free-viewpoint Stereoscopic TIP Generation Using Virtual Camera and Depth Map (가상 카메라와 깊이 맵을 활용하는 자유시점 입체 TIP 생성)

  • Lee, Kwang-Hoon;Jo, Cheol-Yong;Choi, Chang-Yeol;Kim, Man-Bae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.11a / pp.219-222 / 2009
  • Free-viewpoint video is active video in which the viewer freely selects the desired viewpoint rather than passively watching. It is generally produced from footage captured by multiple cameras at various positions and angles, and the technology is used in diverse fields such as museum tours and entertainment. As a new branch of free-viewpoint video, this paper proposes free-viewpoint stereoscopic Tour-Into-Picture (TIP), which navigates the interior of a single image using a virtual camera and a depth map. TIP has long been studied as a technique for exploring the inside of a single photograph and presenting it as an animation. As preprocessing, the proposed method obtains a foreground mask, a background image, and a depth map by automatic and manual methods, and then acquires projected images while navigating the interior of the image. Based on the background image and the 3D modeling data of the foreground objects, the free-viewpoint video is realized using various virtual-camera functions such as 3D translation, yaw/pitch/roll rotation, the look-around effect, and zoom-in. We also describe how to configure camera functions that deliver striking viewing effects according to the characteristics and structure of the depth information. The software was built on OpenGL and MFC Visual C++; the Joseon-era painting 'Dano Pungjeong' by Shin Yun-bok was used as the test image, and the result was produced as a stereoscopic animation that provides more realistic content.
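A core ingredient of the free-viewpoint TIP described above is a virtual camera whose pose (3D translation plus yaw, pitch, and roll) produces the projected views. The authors' system is OpenGL/MFC C++; the NumPy sketch below only illustrates, under assumed axis conventions, how such a camera pose can be turned into a 4x4 world-to-camera view matrix.

```python
# Hypothetical sketch: build a view matrix for a virtual camera from its
# position and yaw/pitch/roll angles (right-handed, column-vector convention).
import numpy as np

def rotation_yaw_pitch_roll(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about y
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about z
    return Ry @ Rx @ Rz

def view_matrix(position, yaw, pitch, roll):
    """World-to-camera 4x4 matrix for the given virtual-camera pose."""
    R = rotation_yaw_pitch_roll(yaw, pitch, roll)
    V = np.eye(4)
    V[:3, :3] = R.T                                   # inverse rotation
    V[:3, 3] = -R.T @ np.asarray(position, dtype=float)
    return V

if __name__ == "__main__":
    # Placeholder look-around motion: the camera sways slightly while aiming into the picture.
    for t in np.linspace(0.0, 1.0, 5):
        V = view_matrix(position=[0.2 * np.sin(t * np.pi), 0.0, 2.0],
                        yaw=0.1 * np.sin(t * np.pi), pitch=0.0, roll=0.0)
        print(V.round(3))
```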


Multi-View Video System using Single Encoder and Decoder (단일 엔코더 및 디코더를 이용하는 다시점 비디오 시스템)

  • Kim Hak-Soo;Kim Yoon;Kim Man-Bae
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.116-129 / 2006
  • The progress of data transmission technology over the Internet has spread a variety of realistic contents. One such content type is multi-view video acquired from multiple camera sensors. In general, multi-view video processing requires as many encoders and decoders as there are cameras, and this complexity makes practical implementation difficult. To solve this problem, this paper considers a simple multi-view system utilizing a single encoder and a single decoder. On the encoder side, the input multi-view YUV sequences are combined on a GOP basis by a video mixer, and the mixed sequence is compressed by a single H.264/AVC encoder. The decoder side consists of a single decoder and a scheduler controlling the decoding process. The goal of the scheduler is to assign an approximately equal number of decoded frames to each view sequence by estimating the decoder utilization of a GOP and then applying frame-skip algorithms. Within the frame skip, efficient frame-selection algorithms are studied for the H.264/AVC baseline and main profiles based on a cost function related to perceived video quality. The proposed method was tested on various multi-view test sequences adopted by MPEG 3DAV. Experimental results show that approximately equal decoder utilization is achieved for each view sequence, so each view is displayed fairly; the performance of the proposed method is also examined in terms of bit rate and PSNR using rate-distortion curves.
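The scheduler described above aims to give each view roughly the same number of decoded frames within a fixed decoding budget, skipping frames where necessary. The sketch below is a simplified, hedged illustration of that balancing idea with a per-GOP budget; it does not reproduce the paper's cost function or its H.264/AVC profile-specific frame selection.

```python
# Hypothetical sketch: distribute a per-GOP decoding budget evenly across
# views and decide which frames of each view's GOP to decode (others are skipped).
from collections import defaultdict

def schedule_gop(views, gop_size, decode_budget):
    """views: list of view ids. Returns {view: list of frame indices to decode}."""
    per_view = max(1, decode_budget // len(views))  # equal share of the budget
    plan = defaultdict(list)
    for v in views:
        if per_view >= gop_size:
            plan[v] = list(range(gop_size))         # no skipping needed
        else:
            # Keep decoded frames evenly spaced; skipped frames are redisplayed ones.
            step = gop_size / per_view
            plan[v] = sorted({int(i * step) for i in range(per_view)})
    return dict(plan)

if __name__ == "__main__":
    # Placeholder setup: 8 views, GOP of 15 frames, budget of 60 decoded frames per GOP period.
    schedule = schedule_gop(views=list(range(8)), gop_size=15, decode_budget=60)
    for view, frames in schedule.items():
        print(view, frames)
```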