• Title/Summary/Keyword: Image-Based Rendering

Artificial Neural Network Method Based on Convolution to Efficiently Extract the DoF Embodied in Images

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.3
    • /
    • pp.51-57
    • /
    • 2021
  • In this paper, we propose a method that uses an efficient convolutional neural network to find the DoF (depth of field), i.e., the region of an image blurred by the camera's focusing and out-focusing. Our approach uses an RGB channel-based cross-correlation filter to classify the DoF region of the image efficiently and to build training data for the convolutional neural network. Each training sample pairs an image with its DoF weight map. The DoF weight maps extracted by the cross-correlation filter are smoothed before training to increase the convergence rate of the network. The DoF weight image obtained at test time stably locates the DoF region in the input image. As a result, the proposed method can treat the DoF area as the user's ROI (region of interest) and be applied in various settings such as NPR (non-photorealistic rendering) and object detection.
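
A minimal sketch of the training-data construction described above, assuming the cross-correlation filter measures per-channel local sharpness; the high-pass kernel, smoothing radius, and normalization are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import correlate, gaussian_filter

def dof_weight_map(rgb, kernel=None, smooth_sigma=2.0):
    """Estimate an in-focus (DoF) weight map from an RGB image.

    Illustrative stand-in for the RGB channel-based cross-correlation
    filter: each channel is cross-correlated with a high-pass kernel,
    the responses are combined, and the map is smoothed before being
    used as a per-pixel training target.
    """
    if kernel is None:
        # High-pass kernel assumed as a proxy for local sharpness.
        kernel = np.array([[0, -1, 0],
                           [-1, 4, -1],
                           [0, -1, 0]], dtype=np.float32)
    rgb = rgb.astype(np.float32) / 255.0
    # Per-channel cross-correlation, then average the absolute responses.
    responses = [np.abs(correlate(rgb[..., c], kernel)) for c in range(3)]
    weight = np.mean(responses, axis=0)
    # Smoothing step mentioned in the abstract (improves training convergence).
    weight = gaussian_filter(weight, sigma=smooth_sigma)
    # Normalize to [0, 1] so the map can pair with the image as one training sample.
    return (weight - weight.min()) / (weight.max() - weight.min() + 1e-8)
```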

3D Rendering of Magnetic Resonance Images using Visualization Toolkit and Microsoft.NET Framework

  • Madusanka, Nuwan;Zaben, Naim Al;Shidaifat, Alaaddin Al;Choi, Heung-Kook
    • Journal of Multimedia Information System
    • /
    • v.2 no.2
    • /
    • pp.207-214
    • /
    • 2015
  • In this paper, we propose new software for 3D rendering of MR images in the medical domain, built on the C# wrapper of the Visualization Toolkit (VTK) and the Microsoft .NET Framework. Our objective in developing this software was to provide medical image segmentation, 3D rendering, and visualization of the hippocampus from DICOM images for the diagnosis of patients with Alzheimer's disease. Such three-dimensional visualization can play an important role in diagnosing Alzheimer's disease. Segmented images can be used to reconstruct the 3D volume of the hippocampus, from which features can be extracted and the surface area and volume of the hippocampus measured to assist the diagnostic process. The software was designed with interactive user interfaces and graphics kernels on the Microsoft .NET Framework to benefit from C# programming techniques, in particular design patterns and rapid application development: a preliminary interactive window is created in C#, the VTK kernel is embedded into that window, and the graphics resources are allocated there. Visualization is presented through an interactive window so that the data can be rendered according to the user's preferences.
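
The pipeline described above (DICOM volume in, segmented surface out, rendered in an embedded VTK view) can be sketched with VTK's Python bindings; the paper embeds VTK in a C#/.NET window, so the version below only illustrates the reader → isosurface → mapper → actor → renderer flow, and the directory path and iso-value are placeholder assumptions.

```python
# Minimal VTK pipeline sketch: read a DICOM series, extract an isosurface
# (e.g., from a segmented hippocampus label volume), and render it interactively.
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("dicom_series/")   # assumed path to the DICOM slices

surface = vtk.vtkMarchingCubes()           # isosurface from the volume
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 1.0)                   # assumed label / iso value

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(surface.GetOutputPort())
mapper.ScalarVisibilityOff()

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```

Attaching vtkMassProperties to a triangulated copy of the surface output would give the surface-area and volume measures mentioned in the abstract.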

Image-based Modeling and Rendering (영상 기반 모델링 및 렌더링)

  • 한정현
    • CDE review
    • /
    • v.7 no.3
    • /
    • pp.41-46
    • /
    • 2001
  • Image-based modeling and rendering is a field that has been studied intensively since the early 1990s. Because output images are generated directly from input images, photorealism is achieved naturally and rendering becomes independent of scene complexity. This paper surveys the research achievements in image-based modeling and rendering over the past ten years, focusing on panorama rendering, light field rendering, and LDI rendering.

Haptic Interaction with Objects Displayed in a Picture based on Surface Normal Estimation (사진 속 피사체의 법선 벡터 예측에 기반한 햅틱 상호 작용)

  • Kim, Seung-Chan;Kwon, Dong-Soo
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.3
    • /
    • pp.179-185
    • /
    • 2013
  • In this paper, we propose a haptic interaction system that physically represents the underlying geometry of objects displayed in a 2D picture, i.e., a digital image. To obtain the geometry of an object displayed in the picture, we estimate the physical transformation between the object plane and the image plane from homographic information. We then calculate the rotated surface normal vector of the object's face and place it on the corresponding part of the 2D image. The purpose of this setup is to create a force that can be rendered along with the image without distorting the visual information. We evaluated the proposed haptic rendering system using a set of pictures of objects with different orientations. The experimental results show that participants reliably identified the geometric configuration by touching the object in the picture. We conclude the paper with a set of applications.
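
A minimal sketch of the normal-from-homography step described above, using OpenCV's homography estimation and decomposition as a stand-in for the paper's procedure; the camera intrinsics, point correspondences, and the spring-like force law are assumed for illustration.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],     # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Correspondences between a canonical object-plane patch and its image pixels.
obj_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)
img_pts = np.array([[100, 120], [300, 130], [290, 330], [110, 320]],
                   dtype=np.float32)

H, _ = cv2.findHomography(obj_pts, img_pts)
# Decompose the plane-induced homography into rotation, translation, and plane normal.
_, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
n = normals[0].ravel()                  # one candidate normal of the pictured face
n = n / np.linalg.norm(n)

def contact_force(penetration_depth, stiffness=200.0):
    """Spring-like force pushing the haptic proxy out along the recovered normal."""
    return stiffness * penetration_depth * n

print(contact_force(0.002))             # e.g. a 2 mm penetration into the surface
```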

Real-Time Shadow Generation using Image Warping (이미지 와핑을 이용한 실시간 그림자 생성 기법)

  • Kang, Byung-Kwon;Ihm, In-Sung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.5
    • /
    • pp.245-256
    • /
    • 2002
  • Shadows are important elements in producing a realistic image. Generating the exact shapes and positions of shadows is essential in rendering since it provides users with visual cues about the scene. It is also very important to be able to create the soft shadows that result from area light sources, since they drastically increase visual realism. In spite of their importance, the existing shadow generation algorithms still have problems producing realistic shadows in real time. While image-based rendering techniques can often be applied effectively to real-time shadow generation, such techniques usually demand a large amount of memory for storing preprocessed shadow maps. An effective compression method can help reduce the memory requirement, but only at additional decoding cost. In this paper, we propose a new image-based shadow generation method based on image warping. With this method, it is possible to generate realistic shadows using only small pre-generated shadow maps, and the method is easy to extend to soft shadow generation. Our method can be used efficiently to generate realistic scenes in many real-time applications such as 3D games and virtual reality systems.
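
The warping idea can be sketched as a classical 3D image warp: samples of a pre-generated reference shadow map (a depth image seen from a reference light) are unprojected to world space and reprojected into a nearby target light's view. The pinhole model, matrix conventions, and the simple z-buffer splat below are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def warp_shadow_map(ref_depth, K, ref_cam_to_world, world_to_tgt_cam):
    """Approximate the target light's shadow map by warping a reference one."""
    h, w = ref_depth.shape
    K_inv = np.linalg.inv(K)
    tgt_depth = np.full((h, w), np.inf)

    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Unproject reference shadow-map samples to world space.
    cam_pts = K_inv @ pix * ref_depth.reshape(1, -1)
    world = ref_cam_to_world @ np.vstack([cam_pts, np.ones((1, cam_pts.shape[1]))])
    # Reproject into the target light's camera.
    tgt = world_to_tgt_cam @ world
    z = tgt[2]
    uv = (K @ tgt[:3]) / z
    u, v = np.round(uv[0]).astype(int), np.round(uv[1]).astype(int)
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Keep the nearest depth per target pixel (simple z-buffer splat).
    for ui, vi, zi in zip(u[ok], v[ok], z[ok]):
        if zi < tgt_depth[vi, ui]:
            tgt_depth[vi, ui] = zi
    return tgt_depth
```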

An Improved PCF Technique for The Generation of Shadows (그림자생성을 위한 개선된 PCF 기법)

  • Yu, Young-Jung;Choi, Jin-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.8
    • /
    • pp.1442-1449
    • /
    • 2007
  • Shadows are important elements for realistic rendering of a 3D scene; without shadows, we cannot judge the distances of objects in the scene. Two families of methods, image-based and object-based, are widely used for rendering shadows. Object-based methods can generate accurate shadow boundaries, but they cannot be used to generate real-time shadows because their time complexity depends on the complexity of the 3D scene. Image-based methods are widely used because of their fast calculation time, but they suffer from aliasing problems. PCF is a method for solving this aliasing problem: using the PCF technique, antialiased shadow boundaries can be generated. However, PCF with a large filter size requires more time to calculate antialiased shadow boundaries. This paper proposes an improved PCF technique that generates antialiased shadow boundaries similar to those of PCF. Compared with PCF, the proposed technique can generate antialiased shadows in less time.
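
For reference, textbook percentage-closer filtering looks like the sketch below: each screen pixel makes several binary shadow-map comparisons in a small window and averages them, which softens the boundary but costs (2k+1)² comparisons per pixel, the cost the paper's improved technique aims to reduce. Filter size and depth bias are illustrative choices.

```python
import numpy as np

def pcf_shadow(shadow_map, u, v, receiver_depth, half_size=2, bias=1e-3):
    """Return a shadow factor in [0, 1]; 1 = fully lit, 0 = fully shadowed."""
    h, w = shadow_map.shape
    lit = 0.0
    taps = 0
    for dv in range(-half_size, half_size + 1):
        for du in range(-half_size, half_size + 1):
            uu = min(max(u + du, 0), w - 1)
            vv = min(max(v + dv, 0), h - 1)
            # Each tap is a binary depth comparison, as in ordinary shadow mapping.
            lit += float(receiver_depth <= shadow_map[vv, uu] + bias)
            taps += 1
    return lit / taps   # fraction of taps that see the receiver as lit
```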

Implementation of Virtual Maritime Environment for LWIR Homing Missile Test (원적외선 호밍 유도탄 시험을 위한 가상 해상 환경의 구현)

  • Park, Hyeryeong
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.19 no.2
    • /
    • pp.185-194
    • /
    • 2016
  • Generating synthetic images is essential for testing and evaluating a guided missile system in hardware-in-the-loop simulation. For the evaluation results to be reliable, the fidelity and rendering performance of the synthetic image cannot be ignored. Simulating the LWIR sensor signature of the sea surface, which depends on the incident angle, poses numerous challenges, especially in the maritime environment. In this paper, we investigate the key factors that determine the apparent temperature of the sea surface and propose an approximate formula combining the optical characteristics of the sea surface and the sky radiance. We find that the reflectivity of the sea surface increases with the incident angle, and that the sky radiance increases with the water vapor concentration in the atmosphere. Based on this, we generate a virtual maritime environment in the LWIR region using SE-WORKBENCH, a physically based rendering package. The margin of error is under seven percentage points.
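
The abstract does not spell out the approximate formula, so the sketch below assumes the standard decomposition for an opaque surface: apparent radiance = emissivity(θ)·B(λ, T_sea) + reflectivity(θ)·L_sky, with a Fresnel-like reflectivity that grows with incident angle and the sky term modeled by an effective sky temperature that rises with water vapor.

```python
import numpy as np

H = 6.626e-34; C = 2.998e8; KB = 1.381e-23   # Planck constant, light speed, Boltzmann constant

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody (W / m^2 / sr / m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def sea_reflectivity(theta_deg, rho_normal=0.02):
    """Toy Fresnel-like reflectivity: small at nadir, approaching 1 at grazing angles."""
    t = np.radians(theta_deg)
    return rho_normal + (1.0 - rho_normal) * (1.0 - np.cos(t)) ** 5   # Schlick approximation

def apparent_radiance(theta_deg, sea_temp_k=290.0, sky_temp_k=250.0, wavelength_m=10e-6):
    """Apparent LWIR radiance of the sea surface at incident angle theta.

    sky_temp_k is an assumed effective clear-sky temperature; more water vapor
    raises it, and with it the reflected sky term.
    """
    rho = sea_reflectivity(theta_deg)
    eps = 1.0 - rho                      # opaque surface: emissivity = 1 - reflectivity
    return (eps * planck_radiance(wavelength_m, sea_temp_k)
            + rho * planck_radiance(wavelength_m, sky_temp_k))

# The reflected sky term dominates as the view approaches grazing incidence.
for theta in (0, 45, 70, 85):
    print(theta, apparent_radiance(theta))
```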

MHN Filter-based Brush Stroke Generation for Painterly Rendering (회화적 렌더링을 위한 MHN 필터 기반 브러시 스트로크 생성기법)

  • Seo, Sang-Hyun;Yoon, Kyung-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.8
    • /
    • pp.1045-1053
    • /
    • 2006
  • We introduce a new method of painterly rendering. Instead of using the gradient direction of the source image to generate brush strokes, we extract regions that can be drawn in one stroke using MHN filtering followed by identification of connected components, and make a brush stroke from each region based on an approximation of its medial axis. This method produces realistic-looking brush strokes of varying width that take on irregular directions where necessary.
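
The abstract does not expand "MHN", so the sketch below substitutes a Mexican-hat (Laplacian-of-Gaussian) response for the MHN filter and otherwise follows the stated pipeline: threshold the response, label connected components, and take each component's medial axis as the stroke path, with the distance transform giving the varying width. Ordering the skeleton pixels into a polyline and rendering a textured brush along it are omitted.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import medial_axis

def strokes_from_image(gray, sigma=3.0, threshold=0.02):
    """Extract (stroke path pixels, widths) pairs from a grayscale image."""
    gray = gray.astype(np.float32) / 255.0
    # Mexican-hat-like band-pass response (assumed proxy for MHN filtering).
    response = -ndimage.gaussian_laplace(gray, sigma=sigma)
    mask = response > threshold
    labels, n = ndimage.label(mask)            # connected components = stroke regions
    strokes = []
    for i in range(1, n + 1):
        region = labels == i
        skeleton, dist = medial_axis(region, return_distance=True)
        ys, xs = np.nonzero(skeleton)
        widths = 2.0 * dist[ys, xs]            # stroke width from the medial-axis distance
        strokes.append((np.column_stack([xs, ys]), widths))
    return strokes
```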

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.12
    • /
    • pp.1-13
    • /
    • 2012
  • 3D video is regarded as next-generation content for numerous applications, and 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a mature 3D video era. In 2D-to-3D conversion, the depth image of each scene in the 2D video is estimated, and stereoscopic video is then synthesized using DIBR (depth image based rendering) technologies. This paper proposes a novel depth fusion algorithm that integrates the multiple depth cues contained in a 2D video to generate stereoscopic video. For proper depth fusion, the reliability of each cue in the current scene is checked. Based on the results of these reliability tests, the current scene is classified into one of four scene types, and scene-adaptive depth fusion combines the reliable depth cues to generate the final depth information. Simulation results show that each depth cue is utilized appropriately according to the scene type, and that the final depth is generated from the cues that best represent the current scene.
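
A minimal sketch of reliability-gated depth-cue fusion; the abstract does not name the cues, the reliability tests, or the four scene types, so the cue names, the variance-based reliability test, and the equal-weight averaging below are illustrative assumptions.

```python
import numpy as np

def cue_reliability(depth_cue):
    """Toy reliability test: a nearly flat cue carries little depth structure
    and is treated as unreliable for the current scene."""
    return float(np.std(depth_cue)) > 0.05

def fuse_depth_cues(cues):
    """cues: dict of name -> depth map in [0, 1], all the same shape."""
    reliable = {k: d for k, d in cues.items() if cue_reliability(d)}
    if not reliable:                       # no trustworthy cue: fall back to flat depth
        return np.full(next(iter(cues.values())).shape, 0.5)
    # The scene "type" here is simply the set of surviving cues; each type
    # gets equal-weight fusion of its reliable cues.
    return np.mean(list(reliable.values()), axis=0)

# Usage with hypothetical cue maps (motion parallax, defocus, geometric perspective):
# depth = fuse_depth_cues({"motion": d_motion, "defocus": d_defocus,
#                          "geometry": d_geometry})
```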

Development of Maya Shader Plug in Based on Subsurface Scattering for Realistic Skin Rendering (사실적인 피부 렌더링을 위해 표면하 산란 모델을 적용한 마야 쉐이더 플러그인 개발)

  • Yoo Tae Kyung;Lee Won Hyung;Jahng Sung Ghab
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.1
    • /
    • pp.88-100
    • /
    • 2005
  • In computer graphics, realistic skin rendering has long been regarded as a difficult task and remains an important research subject. Translucent materials like skin have complicated optical properties, including subsurface scattering. In this paper, we propose a skin shader based on subsurface scattering for rendering realistic skin; it has been implemented as a plug-in for the 3D package Maya. Images rendered with the proposed skin shader appear more realistic than images rendered with classical shading techniques. Furthermore, we model the sebum, epidermis, and dermis layers using specular reflection, multiple scattering, and single scattering, respectively.
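
The layer-by-term idea above (specular for the sebum, multiple scattering for the epidermis, single scattering for the dermis) can be sketched as three summed shading terms. The concrete models below (Blinn-Phong specular, wrapped diffuse for multiple scattering, a simple forward-scattering lobe for single scattering) are illustrative stand-ins, not the paper's models.

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def skin_shade(n, l, v, albedo=np.array([0.9, 0.6, 0.5])):
    """Shade one point given surface normal n, light direction l, view direction v."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)

    # Sebum layer: glossy specular highlight (Blinn-Phong as a stand-in).
    specular = 0.05 * max(np.dot(n, h), 0.0) ** 60

    # Epidermis: multiple scattering approximated by wrapped diffuse,
    # letting light bleed past the terminator like soft scattering.
    wrap = 0.4
    multiple = albedo * max((np.dot(n, l) + wrap) / (1.0 + wrap), 0.0)

    # Dermis: a crude single-scattering lobe that favors forward scattering.
    single = 0.1 * albedo * max(np.dot(-v, l), 0.0) ** 4

    return specular + multiple + single

print(skin_shade(n=[0, 0, 1], l=[0.3, 0.2, 1.0], v=[0, 0, 1]))
```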
