• Title/Summary/Keyword: Rendering quality

Search Results: 247

Study of threshold and opacity in three-dimensional CT volume rendering of oral and maxillofacial area (구강악안면영역의 3차원 CT 영상 재형성시 역치 및 불투명도에 대한 연구)

  • Choi, Mun-Kyung;Lee, Sam-Sun;Huh, Kyung-Hoe;Yi, Won-Jin;Choi, Soon-Chul
    • Imaging Science in Dentistry
    • /
    • v.39 no.1
    • /
    • pp.13-18
    • /
    • 2009
  • Purpose: This study was designed to determine proper threshold and opacity values for three-dimensional CT volume rendering of the oral and maxillofacial area. Materials and Methods: Three-dimensional CT data were obtained retrospectively from 50 patients who had undergone orthognathic surgery in the Department of Oral and Maxillofacial Radiology of Seoul National University. Twelve volume-rendering post-processing protocols combining threshold (100 HU, 150 HU, 221 HU, 270 HU) and opacity (58%, 80%, 90%) were applied. Five observers independently evaluated image quality using a five-point rating scale, and the results were analyzed with receiver operating characteristic (ROC) curves, ANOVA, and Kappa values. In addition, three oral and maxillofacial surgeons selected all images they considered clinically acceptable. Results: The area under each ROC curve indicated diagnostic accuracy. The highest diagnostic accuracy appeared at 100 HU with 58% opacity, and the lowest at 221 HU with 58% opacity, the protocol currently used in the Department of Oral and Maxillofacial Radiology of Seoul National University. However, no statistically significant difference was noted between any of the protocols. The largest numbers of clinically acceptable images chosen by the three surgeons occurred with protocol 8 (221 HU, 80% opacity) and protocol 11 (270 HU, 80% opacity), in that order. Conclusion: Threshold and opacity in volume rendering can be controlled easily and affect diagnostic accuracy, so proper values for these factors should be selected.


Effective Volume Rendering and Virtual Staining Framework for Visualizing 3D Cell Image Data (3차원 세포 영상 데이터의 효과적인 볼륨 렌더링 및 가상 염색 프레임워크)

  • Kim, Taeho;Park, Jinah
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.1
    • /
    • pp.9-16
    • /
    • 2018
  • In this paper, we introduce a visualization framework for cell image data obtained from optical diffraction tomography (ODT), including a method for representing cell morphology in a 3D virtual environment and a color mapping protocol. Unlike commonly known volume data sets, such as CT images of human organs or industrial machinery, which have solid structural information, cell image data carry rather vague information with much morphological variation at the boundaries. It is therefore difficult to produce a consistent representation of cell structure in the visualization results. To obtain the desired visual representation of cellular structures, we propose an interactive visualization technique for ODT data. To visualize the 3D shape of the cell, we adopt a volume rendering technique commonly applied to volume data visualization and improve the quality of the rendering result using an empty-space jittering method. Furthermore, we provide a layer-based independent rendering method with multiple transfer functions to represent two or more cellular structures in a unified render window. In the experiments, we examined the effectiveness of the proposed method by visualizing various types of cells captured by a microscope that records ODT and fluorescence images together.
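The layer-based idea above — several independent transfer functions composited into one render window — can be sketched per sample. The piecewise transfer functions and refractive-index ranges below are invented for illustration only.

```python
def blend_layers(sample, layers):
    """'Over'-composite the colors of several independent transfer
    functions ("layers") evaluated on one scalar sample, so two or more
    cellular structures can share a unified render window.

    Each layer is a function sample -> (r, g, b, a).
    """
    out = [0.0, 0.0, 0.0]
    for tf in layers:  # back-to-front compositing order
        r, g, b, a = tf(sample)
        out = [c * (1.0 - a) + s * a for c, s in zip(out, (r, g, b))]
    return tuple(out)

# Hypothetical transfer functions for two structures (values are made up)
membrane_tf = lambda v: (0.2, 0.8, 0.2, 0.6) if 1.34 <= v < 1.36 else (0.0, 0.0, 0.0, 0.0)
nucleus_tf  = lambda v: (0.8, 0.2, 0.2, 0.8) if v >= 1.36 else (0.0, 0.0, 0.0, 0.0)
```

Because each layer has its own transfer function, recoloring one structure (the "virtual staining") does not require re-tuning the others.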

Group-based Adaptive Rendering for 6DoF Immersive Video Streaming (6DoF 몰입형 비디오 스트리밍을 위한 그룹 분할 기반 적응적 렌더링 기법)

  • Lee, Soonbin;Jeong, Jong-Beom;Ryu, Eun-Seok
    • Journal of Broadcast Engineering
    • /
    • v.27 no.2
    • /
    • pp.216-227
    • /
    • 2022
  • The MPEG-I (Immersive) group is working on a standardization project for immersive video that provides six degrees of freedom (6DoF). The MPEG Immersive Video (MIV) standard is intended to provide limited 6DoF based on the depth-image-based rendering (DIBR) technique. Many efficient coding methods have been suggested for MIV, but efficient transmission strategies have received little attention in MPEG-I. This paper proposes a group-based adaptive rendering method for immersive video streaming. Each group can be transmitted independently using group-based encoding, enabling adaptive transmission depending on the user's viewport. In the rendering process, the proposed method derives per-group weights for view synthesis and allocates the high-quality bitstream according to the given viewport. The method is implemented in the Test Model for Immersive Video (TMIV). In the experiments, it demonstrates 17.0% Bjontegaard-delta rate (BD-rate) savings on the peak signal-to-noise ratio (PSNR) and 14.6% on the Immersive Video PSNR (IV-PSNR) across various end-to-end evaluation metrics.
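The viewport-adaptive allocation described above can be sketched as ranking groups by angular distance to the user's viewport and giving the nearest groups the high-quality bitstream. The yaw-only weighting and the two-level quality split are simplifying assumptions, not the paper's actual weight derivation.

```python
def allocate_quality(group_yaws, viewport_yaw, high_count=2):
    """Assign 'high' bitstream quality to the groups whose view
    direction (yaw, in degrees) is closest to the user's viewport,
    and 'low' to the rest. A toy of viewport-adaptive group selection.
    """
    def angular_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)  # wrap-around angular distance

    order = sorted(range(len(group_yaws)),
                   key=lambda i: angular_dist(group_yaws[i], viewport_yaw))
    quality = ["low"] * len(group_yaws)
    for i in order[:high_count]:
        quality[i] = "high"
    return quality
```

Because each group is encoded independently, switching a group between the two quality levels needs no re-encoding, only a different bitstream request.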

Composed Animation Production Pipeline using Miniature Set (미니어처 세트를 이용한 합성 애니메이션 제작 공정)

  • Kim, Jaejung;Kim, Minji;Seo, Jihye;Kim, Jinmo;Jung, Seowon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.63-73
    • /
    • 2016
  • Animation content grows steadily every year, but the production period and budget available for a single animation remain insufficient. In particular, animation series broadcast on television require many episodes within a short production term, so a full three-dimensional animation pipeline is frequently chosen. However, full 3D animation introduces another problem: it also requires a great deal of time for producing high-quality backgrounds and for rendering. Composed animation is a production pipeline that attempts to solve this problem by compositing computer graphics (CG) characters over real backgrounds. It requires relatively few human resources compared with the full 3D pipeline, so natural-looking images can be produced under an efficient structure and rendering time can be reduced. This paper proposes an efficient process for producing composed animation using a miniature set and three-dimensional computer graphics.

Hole-Filling Method for Depth-Image-Based Rendering for which Modified-Patch Matching is Used (개선된 패치 매칭을 이용한 깊이 영상 기반 렌더링의 홀 채움 방법)

  • Cho, Jea-Hyung;Song, Wonseok;Choi, Hyuk
    • Journal of KIISE
    • /
    • v.44 no.2
    • /
    • pp.186-194
    • /
    • 2017
  • Depth-image-based rendering is a technique that can be applied in a variety of 3D display systems. It generates images as captured from virtual viewpoints by using a depth map. However, filling disoccluded holes remains a challenging issue, as newly exposed areas appear in the virtual view. Image inpainting is a popular approach for filling the hole region. This paper presents a robust hole-filling method that reduces errors and generates a high-quality virtual view. First, an adaptive patch size is decided using the color and depth information. A partial filling method based on patch similarity is also proposed. These measures reduce both the occurrence and the propagation of errors. The experimental results show that the proposed method synthesizes virtual views with higher visual comfort than existing methods.
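The patch-similarity matching at the heart of such hole filling can be sketched as a masked sum of squared differences: hole pixels are excluded from the score, so a patch is judged only on the part of the target that is known. This is an illustrative sketch, not the paper's method (which additionally adapts the patch size using color and depth).

```python
def patch_similarity(target, candidate, mask):
    """Mean squared difference over known (mask == 1) pixels only;
    hole pixels do not contribute to the score."""
    score, count = 0.0, 0
    for t_row, c_row, m_row in zip(target, candidate, mask):
        for t, c, m in zip(t_row, c_row, m_row):
            if m:  # known pixel
                score += (t - c) ** 2
                count += 1
    return score / count if count else float("inf")

def best_patch(target, mask, candidates):
    """Choose the candidate patch most similar to the known part of the
    target patch; its pixels then fill the target's hole region."""
    return min(candidates, key=lambda c: patch_similarity(target, c, mask))
```

Copying the winning patch's pixels into the masked positions completes one fill step; inpainting repeats this from the hole boundary inward.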

Realistic Re-lighting System with Occlusion (사실적인 리라이팅 시스템 연구)

  • Park, Jae-Wook
    • Cartoon and Animation Studies
    • /
    • s.38
    • /
    • pp.133-144
    • /
    • 2015
  • Lighting work in 3D animation requires much time and labor. For example, about 60 lighting artists were needed for the theatrical animation Postman Pat: The Movie, globally distributed in 2014, and they had to respond rapidly to shot feedback. However, complicated shots required long rendering times, making it difficult to adjust work in the middle of test rendering. After 2005, re-lighting techniques were developed to cope with such situations, but existing re-lighting techniques could not create shadows, so the results did not look natural or realistic. This paper realizes a re-lighting system that achieves realistic quality while maintaining fast computation, by applying rasterized ambient occlusion to the re-lighting calculation of each light. The system was used for the Hollywood animation "Postman Pat: The Movie", globally released in 2014. This method is expected to be more widely used in animation production, enabling other studios to reduce expenses and create better-finished output.
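The core of a re-lighting pass like the one above is that expensive terms (here, a precomputed ambient-occlusion factor and per-light N·L values) are cached once, so changing a light only re-runs a cheap per-pixel sum. The scalar grayscale model below is a minimal sketch under that assumption, not the paper's production system.

```python
def relight(albedo, normal_dot_lights, light_colors, ambient_occlusion):
    """Per-pixel re-lighting with a cached ambient-occlusion term.

    `normal_dot_lights` holds the precomputed N.L value for each light;
    `ambient_occlusion` in [0, 1] darkens creases and contact areas,
    approximating the shadowing that plain re-lighting lacks.
    """
    shaded = 0.0
    for n_dot_l, light in zip(normal_dot_lights, light_colors):
        shaded += albedo * max(n_dot_l, 0.0) * light  # clamp back-facing
    return shaded * ambient_occlusion
```

Because the loop is linear in the number of lights and touches no geometry, an artist can dial light colors and intensities interactively between full test renders.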

Generation of Stereoscopic Image from 2D Image based on Saliency and Edge Modeling (관심맵과 에지 모델링을 이용한 2D 영상의 3D 변환)

  • Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.20 no.3
    • /
    • pp.368-378
    • /
    • 2015
  • 3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. 3D conversion plays an important role in the augmented functionality of three-dimensional television (3DTV) because it can easily provide 3D content. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (depth-image-based rendering) to produce a stereoscopic image. However, except for some particular images, depth cues are rare, so consistent depth-map quality cannot be guaranteed. It is therefore imperative to devise a 3D conversion method that produces satisfactory and consistent 3D for diverse video content. From this viewpoint, this paper proposes a novel method applicable to general types of images, utilizing saliency as well as edges. To generate a depth map, geometric perspective, an affinity model, and a binomial filter are used. In the experiments, the proposed method was applied to 24 video clips with a variety of content. A subjective test of 3D perception and visual fatigue validated satisfactory and comfortable viewing of the 3D content.
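The DIBR step shared by this paper and the hole-filling work above can be sketched on one scanline: each pixel is shifted horizontally by a disparity proportional to its depth, and positions that nothing maps to become the disoccluded holes. The linear depth-to-disparity mapping and lack of a z-test are simplifications for illustration.

```python
def dibr_shift_row(colors, depths, baseline=10.0):
    """Naive DIBR warp of one scanline to a virtual viewpoint.

    Each pixel moves by a disparity proportional to its normalized
    depth in [0, 1]; positions left as None are disoccluded holes that
    a subsequent hole-filling (inpainting) step must repair.
    """
    width = len(colors)
    out = [None] * width  # None marks a hole
    for x in range(width):
        disparity = int(round(baseline * depths[x]))
        nx = x + disparity
        if 0 <= nx < width:
            out[nx] = colors[x]  # a real warp would z-test overlaps; omitted
    return out
```

Running this on a row where the right half is nearer than the left shows the characteristic hole opening up at the depth discontinuity.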

A 2-Tier Server Architecture for Real-time Multiple Rendering (실시간 다중 렌더링을 위한 이중 서버 구조)

  • Lim, Choong-Gyoo
    • Journal of Korea Game Society
    • /
    • v.12 no.4
    • /
    • pp.13-22
    • /
    • 2012
  • The widespread availability of broadband Internet service makes cloud computing-based gaming services possible: a game program is executed on a cloud node, its live image is fed to a remote user's display device via video streaming, and the user's input is immediately transmitted back and applied to the game. Minimizing the time to process the remote user's input and transmit the live image back to the user satisfies gaming's requirement of instant responsiveness and thus makes the service feasible. However, the cost of building such servers can be very high for high-quality 3D games, because the general-purpose graphics system a cloud node is likely to have supports only a single 3D application at a time. The server therefore needs a 'real-time multiple rendering' technology to execute multiple 3D games simultaneously. This paper proposes a new 2-tier server architecture in which one group of cloud nodes executes multiple games and the other produces the games' live images. A few experiments are performed to prove the feasibility of the new architecture.

A Reference Frame Selection Method Using RGB Vector and Object Feature Information of Immersive 360° Media (실감형 360도 미디어의 RGB 벡터 및 객체 특징정보를 이용한 대표 프레임 선정 방법)

  • Park, Byeongchan;Yoo, Injae;Lee, Jaechung;Jang, Seyoung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE
    • /
    • v.24 no.4
    • /
    • pp.1050-1057
    • /
    • 2020
  • Immersive 360-degree media suffers from slow video recognition when processed by conventional methods, because a variety of rendering methods are used and the video is of higher quality and far larger volume than existing video. In addition, owing to the characteristics of immersive 360-degree media, in most cases only one scene is captured with the camera fixed in a specific place, so it is not necessary to extract feature information from all scenes. In this paper, we propose a reference frame selection method for immersive 360-degree media and describe its application to copyright protection technology. The proposed method performs three pre-processing steps: frame extraction from the immersive 360-degree media, frame downsizing, and spherical-form rendering. In the rendering step, the video is divided into 16 frames and captured. In the central part, where there is much object information, objects are extracted using a per-pixel RGB vector and deep learning, and a reference frame is selected using the object feature information.
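One plausible reading of "selecting a reference frame using per-pixel RGB vectors" is sketched below: summarize each candidate frame by its mean RGB vector and pick the frame closest to the set's centroid as the representative. The centroid criterion is an assumption; the paper additionally uses deep-learning object features, which are omitted here.

```python
def mean_rgb(frame):
    """Mean RGB vector of a frame given as a list of (r, g, b) pixels."""
    n = len(frame)
    return tuple(sum(p[i] for p in frame) / n for i in range(3))

def reference_frame(frames):
    """Return the index of the frame whose mean RGB vector is closest
    (squared L2 distance) to the centroid of all frames' mean vectors.
    An illustrative stand-in for the paper's feature-based selection.
    """
    means = [mean_rgb(f) for f in frames]
    n = len(means)
    centroid = tuple(sum(m[i] for m in means) / n for i in range(3))

    def dist2(m):
        return sum((a - b) ** 2 for a, b in zip(m, centroid))

    return min(range(len(frames)), key=lambda i: dist2(means[i]))
```

Since only the chosen reference frame needs full feature extraction, the later copyright-matching stage avoids processing every scene of the extra-large 360-degree video.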

Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang;KiHong Kim;JuneSok Lee;JiChu Tang;GaoHe Zhang;ZhengRan Liu;QianRu Liu;ShiJie Sun;YuTing Wang;KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.106-123
    • /
    • 2024
  • This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures for combining cartoon virtual characters with virtual studio technology, providing reference experience for multi-scenario applications of digital humans. Compared with earlier live broadcasts combining reality and virtuality, we further upgraded our virtual production and digital human-driving technology, adopted industry-leading real-time virtual production and monocular-camera driving technology, and launched a virtual cartoon character talk show, "Beast Town", that blends the real and the virtual, further enhancing program immersion and the audio-visual experience and expanding the boundaries of virtual production. In the talk show, motion capture is used for final picture synthesis. The virtual scene must present dynamic effects while the digital human is driven and moves with the push, pull, and pan of the overall picture. This places very high demands on multi-party data synchronization, real-time driving of digital humans, and synthetic picture rendering. We focus on issues such as virtual-real data docking and monocular-camera motion capture, and combine outward camera tracking, multi-scene picture perspective, multi-machine rendering, and other solutions to effectively solve picture linkage and rendering-quality problems in a deeply immersive space environment, presenting users with visual effects that link digital humans and live guests.