3D Scene Editing

3D SCENE EDITING BY RAY-SPACE PROCESSING

  • Lv, Lei;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.732-736 / 2009
  • In this paper we focus on the EPI (Epipolar-Plane Image), the horizontal cross section of Ray-Space, and propose a novel method that selects desired objects and edits scenes using multi-view images. On an EPI acquired by a camera array uniformly distributed along a line, every object is represented as a straight line, and the slope of each line is determined by the distance between the object and the camera plane. Detecting a straight line of a specific slope and removing it therefore means that an object at a specific depth has been detected and removed. We propose a scheme in which a layer of a specific slope competes with the other layers, instead of extracting layers sequentially from front to back. This enables effective removal of obstacles and object manipulation, producing a clearer 3D scene containing only what we want to see.

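
The slope-depth relation this abstract relies on can be sketched numerically: with focal length f (pixels) and camera spacing B, a point at depth z shifts f·B/z pixels per camera step, so it traces an EPI line whose slope encodes its depth, and zeroing all lines of one slope deletes one depth layer. A toy sketch with assumed camera parameters, not the paper's implementation:

```python
import numpy as np

FOCAL_PX = 100.0   # focal length in pixels (assumed)
BASELINE = 0.02    # camera spacing in metres (assumed)

def epi_slope(depth):
    # A point at `depth` shifts this many pixels per camera step,
    # so its EPI line's slope directly encodes its depth.
    return FOCAL_PX * BASELINE / depth

def synth_epi(n_cams, width, points):
    # Toy EPI: each (u0, depth, label) point becomes one straight line.
    epi = np.zeros((n_cams, width), dtype=int)
    for u0, depth, label in points:
        for s in range(n_cams):
            u = int(round(u0 + s * epi_slope(depth)))
            if 0 <= u < width:
                epi[s, u] = label
    return epi

def remove_depth_layer(epi, depth):
    # Deleting every line with this depth's slope removes exactly
    # the scene content lying at that distance.
    n_cams, width = epi.shape
    out = epi.copy()
    slope = epi_slope(depth)
    for u0 in range(width):
        cols = [u0 + int(round(s * slope)) for s in range(n_cams)]
        if all(0 <= c < width and epi[s, c] for s, c in enumerate(cols)):
            for s, c in enumerate(cols):
                out[s, c] = 0
    return out

epi = synth_epi(8, 64, [(5, 1.0, 1), (40, 2.0, 2)])
cleaned = remove_depth_layer(epi, 1.0)  # the near object is removed
```

The competing-layers scheme in the paper replaces the naive front-to-back sweep shown here; the geometry of the slope test is the same.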

Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.746-754 / 2008
  • A 3D stereoscopic image is generated by interleaving, with video editing tools, scenes rendered from two camera views in 3D modeling tools such as Autodesk MAX(R) and Autodesk MAYA(R). However, the depth of objects in a static scene and a continuous stereo effect under view transformations are not reproduced naturally. This is because, after choosing an arbitrary convergence angle and the distance between the model and the two cameras, the user must render the view from both cameras, repeatedly adjusting the camera interval and re-rendering, which takes a great deal of time. In this paper, we therefore propose a 3D stereoscopic image editing system that solves these problems. The system generates the two camera views and confirms the stereo effect in real time inside the 3D modeling tool, so the user can intuitively judge the immersion of the 3D stereoscopic image in real time using the stereoscopic preview function.

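
The convergence/interaxial bookkeeping that forces the repeated re-rendering described above can be illustrated with the textbook parallel-camera disparity model (an illustrative assumption, not the paper's system): a point on the convergence plane lands at zero screen parallax, nearer points get crossed (negative) disparity, and farther points get uncrossed (positive) disparity.

```python
def screen_disparity(depth, interaxial, convergence, focal_px):
    # Parallel, sensor-shifted stereo cameras converged at `convergence`:
    # d = f * t * (1/C - 1/Z). Zero on the convergence plane, negative
    # (crossed) for nearer points, positive (uncrossed) for farther ones.
    return focal_px * interaxial * (1.0 / convergence - 1.0 / depth)

# 65 mm interaxial, converged 3 m away, 1000 px focal length (assumed):
near = screen_disparity(2.0, 0.065, 3.0, 1000.0)   # crossed: pops out
far  = screen_disparity(10.0, 0.065, 3.0, 1000.0)  # uncrossed: recedes
```

Every change of interaxial or convergence rescales these disparities, which is why a real-time preview beats render-and-check iteration.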

Indoor Scene Classification based on Color and Depth Images for Automated Reverberation Sound Editing (자동 잔향 편집을 위한 컬러 및 깊이 정보 기반 실내 장면 분류)

  • Jeong, Min-Heuk;Yu, Yong-Hyun;Park, Sung-Jun;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.384-390 / 2020
  • The reverberation effect applied to sound when producing movies or VR content is a very important factor for realism and liveliness. The appropriate reverberation time for a space is specified by RT60 (Reverberation Time 60 dB), the time for sound to decay by 60 dB. In this paper, we propose a scene recognition technique for automatic reverberation editing. To this end, we devised a classification model that trains on color images and predicted depth images independently within the same model. Indoor scene classification with color information alone is limited by the structural similarity of interiors, so deep-learning-based depth estimation is used to exploit spatial depth information. Ten scene classes were constructed based on RT60, and model training and evaluation were conducted. The proposed SCR + DNet (Scene Classification for Reverb + Depth Net) classifier achieves 92.4% accuracy, higher than conventional CNN classifiers.
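
For context, a first-order RT60 estimate for a room comes from the classical Sabine formula RT60 = 0.161·V/A, where V is the room volume in m³ and A the total absorption in metric sabins. A minimal sketch; the room dimensions and absorption coefficients below are illustrative, not taken from the paper:

```python
def rt60_sabine(volume_m3, surfaces):
    # Sabine's estimate: RT60 = 0.161 * V / A, where A is the total
    # absorption (sum of surface area x absorption coefficient).
    absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / absorption

# A 5 m x 4 m x 3 m room (illustrative coefficients):
room_rt60 = rt60_sabine(
    volume_m3=60.0,
    surfaces=[(20.0, 0.02),   # floor, hard tile
              (20.0, 0.10),   # ceiling, plasterboard
              (54.0, 0.05)],  # walls, painted concrete
)
```

A hard-surfaced room like this lands near two seconds of reverberation, which is the kind of per-class target the paper's ten RT60-based scene classes stand in for.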

Automatic Image Segmentation of Brain CT Image (뇌조직 CT 영상의 자동영상분할)

  • 유선국;김남현
    • Journal of Biomedical Engineering Research / v.10 no.3 / pp.317-322 / 1989
  • In this paper, brain CT images are automatically segmented to reconstruct a 3-D scene from consecutive CT sections. A contextual segmentation technique is applied to overcome the partial-volume artifact and the statistical fluctuation of soft-tissue images. Images are analyzed hierarchically by region growing and graph editing techniques, and the segmented regions are assigned descriptively to the final organs using semantic information.

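
The region-growing step this abstract mentions can be illustrated with a minimal flood-fill variant that absorbs 4-connected pixels whose intensity stays within a tolerance of the seed value; this is a toy stand-in for the paper's contextual, hierarchical scheme, and the tiny image below is invented for illustration:

```python
from collections import deque

def region_grow(image, seed, tol):
    # BFS region growing: start from `seed` and absorb 4-connected
    # neighbours whose intensity is within `tol` of the seed value.
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

ct = [[10, 10, 50, 50],     # toy CT slice: two tissue intensities
      [10, 12, 48, 50],
      [ 9, 11, 49, 51]]
soft_tissue = region_grow(ct, (0, 0), tol=3)
```

The tolerance test is where a contextual method would instead consult neighbouring slices and statistical models before accepting a pixel.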

Study of Scene Directing with Cinemachine

  • Park, Sung-Suk;Kim, Jae-Ho
    • International Journal of Contents / v.18 no.1 / pp.98-104 / 2022
  • With Unity, footage can be created using 3D motion, 2D motion, particles, and sound, and even post-production video editing is possible by combining the footage. In particular, Cinemachine, a suite of camera tools for Unity that greatly affects screen layout and the flow of video images, can implement most of the functions of a physical camera, and visual aesthetics can be achieved through it. However, since it is part of a game engine, an understanding of the game engine must come first, and doubts may arise as to how similar it is to a physical camera. Accordingly, the purpose of this study is to examine the advantages of and cautions for virtual cameras in Cinemachine, and to explore their potential for development by implementing storytelling directly.

Standardization Strategy on 3D Animation Contents (3D 애니메이션 콘텐츠의 SCORM 기반 표준화 전략)

  • Jang, Jae-Kyung;Kim, Sun-Hye;Kim, Ho-Sung
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.218-222 / 2006
  • In making 3D animation with digital technology, it is necessary to increase productivity and reusability by managing the production pipeline systematically through standardization of animation content. For this purpose, we develop an animation content management system that can manage all information on the production pipeline, based on the SCORM e-learning standard, considering production, publication, and re-editing. A scene, as the unit of visual semantics, is standardized into an object that contains meta-data about place, cast, weather, season, time, and viewpoint. The meta-data of the content includes information on copyright, publication, description, and so on, and thus plays an important role in management and publication. If an effective meta-data management system such as an ontology is implemented, powerful multimedia content search becomes possible, encouraging the production and publication of UCC. Using the meta-data of content objects, users and producers can easily search and reuse the contents, choose content objects according to their preference, and produce their own creative animation by reorganizing and packaging the selected objects.


Feature-Based Light and Shadow Estimation for Video Compositing and Editing (동영상 합성 및 편집을 위한 특징점 기반 조명 및 그림자 추정)

  • Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society / v.18 no.1 / pp.1-9 / 2012
  • Video-based modeling and rendering, developed to produce photo-realistic video content, has been an important research topic in computer graphics and computer vision. To combine original input video clips and 3D graphic models smoothly, geometric information about the light sources and cameras used to capture the real-world scene is essential. In this paper, we present a simple technique to estimate the position and orientation of an optimal light source from the topology of objects and the silhouettes of shadows appearing in the original video clips. The technique generates well-matched shadows and renders the inserted models under the estimated light sources. Shadows are an important visual cue that empirically indicates the relative location of objects in 3D space, so the proposed real-time shadow generation and rendering algorithms enhance realism in the final composited videos.

A Plan to Maximizing the Visual Immersion of 3D Media Art (3D 미디어아트의 시각적 몰입감 극대화 방안)

  • Kim, Ki-Bum;Kim, Kyoung-Soo
    • Journal of Digital Contents Society / v.16 no.4 / pp.659-669 / 2015
  • Recently, media art has been transforming from analogue to digital and from 2D to 3D. In particular, the range of 3D media art is widening as it merges with other content genres in digital environments, such as media façades, holograms, virtual reality, and mobile applications. We therefore analyzed the factors that affect the sensation of visual immersion in the 3D award-winning works of Prix Ars Electronica, regarded as today's most outstanding media art, and derived strategies for maximizing viewers' interest and heightening their emotions. The analysis shows that works which develop visual concepts such as 3D modeling and mapping with 'creativity' and 'variability', maintain 'consistency' throughout all concepts, and apply stronger 'restriction' of concept in animation and postproduction attract more interest from viewers. From the standpoint of four-step visual composition, arranging qualitative changes of 3D 'shape' and 'material' according to the four-step rule, gradually increasing quantitative changes in 'number' and 'size', and systematizing editing changes such as 'scene changes' heighten viewers' emotions. Thus, to maximize the sensation of visual immersion, strategies of 'developing 3D visual concepts' while 'synchronizing' them, and 'strengthening the four steps of 3D visual composition' while 'systematizing' them, should be emphasized.

Development of a Haptic Modeling and Editing Tool (촉감 모델링 및 편집 툴 개발)

  • Seo, Yong-Won;Lee, Beom-Chan;Cha, Jong-Eun;Kim, Jong-Phil;Ryu, Je-Ha
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.373-378 / 2007
  • In recent years, haptics has been widely studied in medicine, education, military, entertainment, and broadcasting because it lets users touch digital content. Although haptics offers many advantages, such as more realistic and natural interaction by providing tactile sensation in addition to audiovisual information, the field is still unfamiliar to general users. One reason is the lack of content that supports haptic interaction. Moreover, with growing interest in virtual environments and many attempts to combine haptics with them, the demand for haptic modeling is also increasing. In general, a haptic model consists of graphic models with material properties. The graphic modeling can be done with common modeling tools (MAYA, 3D MAX, etc.), but the haptic properties must be added manually after the content is created. Graphic modeling is intuitive because the user works while directly seeing the result; likewise, haptic modeling should proceed while the user actually feels the haptic feedback. Because graphic and haptic modeling are not performed simultaneously, producing haptic content takes a long time and is not intuitive, and for high productivity such modeling must be done quickly. For these reasons, a new interface for haptic modeling is needed. This paper describes a haptic modeler that allows intuitive creation and manipulation of haptic-enabled content. Using a 3-DOF haptic device, the user can touch 3D content (static, dynamic, or deformable 2D, 2.5D, and 3D scenes) in real time and intuitively edit its surface haptic properties through a Haptic User Interface (HUI). The HUI includes both a conventional mouse-operated 2D graphical user interface and a 3D interface composed of buttons, radio buttons, sliders, and a joystick operated with the haptic device. The user can change surface haptic property values with these components and feel the result in real time by touching a part of the HUI, so property values can be set intuitively. In addition, an XML-based file format allows created content to be saved, loaded, or added to other content. This system can greatly help even those unfamiliar with haptics to perform haptic modeling intuitively.


Technology Status and Improvement Direction of Special Theaters in Korea by Format (국내 특수상영관 포맷별 기술현황과 개선방향)

  • Jung, Hyun-Jin
    • Journal of Korea Entertainment Industry Association / v.15 no.4 / pp.73-87 / 2021
  • Special theaters were created to provide immersion and spectacle through differentiated screens, sound, seating facilities, advanced services, and expanded screens. The purpose of this study is to comparatively analyze the technical characteristics of the formats shown in special theaters (3D film, 4DX, IMAX, ScreenX, and VR) in order to identify their technological limitations in production and find ways to overcome them. The formats differ in field of view depending on the exhibition technology, and these differences affect the mise-en-scène, narrative, and editing of a film, consequently changing the production environment and process. Directors and creators must therefore understand the technological features and limitations of a new format before approaching it. However, new formats face limitations on production sets due to the decline of technical education and succession. Where shooting with a special camera is essential, the particular characteristics of each format should be considered carefully from the planning stage, but financial problems arise as production periods and costs increase. To overcome these various obstacles, it is essential to first identify problems and present alternatives through in-depth research on the production set of each format. Finally, this research explores the prototype of each format and analyzes the current state of production technology, showing that formats that have not adapted to market trends can survive in new ways by combining with other formats.