• Title/Summary/Keyword: 3D Rendering

Service Rendering Study for Adaptive Service of 3D Graphics Contents in Middleware (3D 그래픽 콘텐츠의 적응적 서비스를 위한 미들웨어에서의 서비스 렌더링 연구)

  • Kim, Hak-Ran;Park, Hwa-Jin;Yoon, Yong-Ik
    • The KIPS Transactions:PartA / v.14A no.5 / pp.279-286 / 2007
  • The need for contents adaptation in ubiquitous environments is growing in order to support multiple target platforms for 3D graphics contents. Since 3D graphics involve large data sets and high performance requirements, a service adaptation to context changes is required to manipulate graphics contents with more sophisticated methods on multiple devices such as desktops, laptops, PDAs, and mobile phones. In this paper, we suggest a new notion of service adaptation middleware based on a service rendering algorithm, which provides flexible and customized services for user-centric 3D graphics contents. The service adaptation middleware consists of Service Adaptation (SA), which analyzes environments, and Service Rendering (SR), which reconfigures customized services by processing customized data. These adaptation services can intelligently and dynamically provide the same computer graphics contents with good quality when user environments change.
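
The SA/SR split described in the abstract can be illustrated with a minimal sketch. All names below (Context, ServiceAdaptation, ServiceRendering, the lod/texture_size profile) are hypothetical illustrations of analyzing the environment and reconfiguring content per device, not the paper's actual middleware API.

```python
# Hypothetical sketch of a Service Adaptation (SA) + Service Rendering (SR) split.
from dataclasses import dataclass

@dataclass
class Context:
    device: str          # e.g. "desktop", "pda", "phone"
    bandwidth_kbps: int
    screen_width: int

class ServiceAdaptation:
    """Analyzes the user environment and produces an adaptation profile."""
    def analyze(self, ctx: Context) -> dict:
        if ctx.device == "desktop":
            return {"lod": 0, "texture_size": 1024}
        if ctx.device == "pda":
            return {"lod": 2, "texture_size": 256}
        return {"lod": 3, "texture_size": 128}   # phones and unknown devices

class ServiceRendering:
    """Reconfigures the 3D content service according to the profile."""
    def reconfigure(self, scene: dict, profile: dict) -> dict:
        return {
            "meshes": [m for m in scene["meshes"] if m["lod"] >= profile["lod"]],
            "texture_size": profile["texture_size"],
        }

scene = {"meshes": [{"name": "hi_detail", "lod": 0}, {"name": "lo_detail", "lod": 3}]}
profile = ServiceAdaptation().analyze(Context("pda", 256, 320))
print(ServiceRendering().reconfigure(scene, profile))
```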

Comparison of LoG and DoG for 3D reconstruction in haptic systems (햅틱스 시스템용 3D 재구성을 위한 LoG 방법과 DoG 방법의 성능 분석)

  • Sung, Mee-Young;Kim, Ki-Kwon
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.711-721 / 2012
  • The objective of this study is to propose an efficient 3D reconstruction method for developing a stereo-vision-based haptics system which can replace "robotic eyes" and "robotic touch." Haptic rendering of 3D images requires capturing the depth and edge information of stereo images. This paper proposes 3D reconstruction methods that use the LoG (Laplacian of Gaussian) and DoG (Difference of Gaussian) algorithms for edge detection, in addition to the basic 3D depth extraction method, for better haptic rendering. Experiments are performed to evaluate the CPU time and error rates of these methods. The experimental results lead us to conclude that the DoG method is more efficient for haptic rendering. This paper may contribute to investigating effective methods for 3D image reconstruction, for example in improving the performance of mobile patrol robots.
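
For readers unfamiliar with the two edge detectors being compared, the following is a minimal sketch of LoG versus DoG edge detection with a rough timing comparison, using OpenCV and a synthetic frame in place of a rectified stereo image; it does not reproduce the paper's haptic-rendering pipeline.

```python
import time
import cv2
import numpy as np

# Synthetic frame standing in for one rectified stereo image.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# LoG: Gaussian smoothing followed by the Laplacian operator.
t0 = time.perf_counter()
log_edges = cv2.Laplacian(cv2.GaussianBlur(img, (5, 5), 1.0), cv2.CV_32F)
t_log = time.perf_counter() - t0

# DoG: difference of two Gaussians (a sigma ratio near 1.6 approximates LoG).
t0 = time.perf_counter()
dog_edges = (cv2.GaussianBlur(img, (0, 0), 1.0).astype(np.float32)
             - cv2.GaussianBlur(img, (0, 0), 1.6).astype(np.float32))
t_dog = time.perf_counter() - t0

print(f"LoG: {t_log * 1000:.2f} ms, DoG: {t_dog * 1000:.2f} ms")
```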

Painterly Rendering Reflecting 2D Image Relighting and Color Change (2D 이미지 재조명에 따른 색채변화를 반영한 비사실적 렌더링)

  • Hwi-Jin Kim;Jong-Hyun Kim
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.399-402 / 2023
  • In this paper, we propose a painterly rendering method that reflects 2D image relighting and the resulting color changes in order to show how an oil painting changes under light. The method relights a 2D image and renders it by applying color changes weighted by the resulting shading values. For relighting, the 2D image is approximated as a 3D image to estimate normals, and the angle between each normal and the light position is extracted and used as the shading value. The light position can be specified by the user, and the color changes caused by light follow previously published results. Compared with the simple, flat results of existing automatic painterly rendering that approximates local images, the proposed approach reflects faded colors and a sense of volume through relighting, producing rendering results that are as vivid and three-dimensional as works existing in reality, and thereby aims to contribute to the expression of cultural and artistic works and to the prediction and restoration of color change.
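
A rough sketch of the relighting step described above: the 2D image is treated as a height field to approximate normals, and the angle between each normal and the light direction gives a shading weight. The height-field assumption, the light position, and all parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shading_weights(gray: np.ndarray, light_pos=(1.0, 1.0, 2.0)) -> np.ndarray:
    """Approximate per-pixel shading from a grayscale image used as a height proxy."""
    h = gray.astype(np.float32) / 255.0
    gy, gx = np.gradient(h)                            # height-field slopes
    normals = np.dstack([-gx, -gy, np.ones_like(h)])   # unnormalized surface normals
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_pos, dtype=np.float32)
    light /= np.linalg.norm(light)
    cos_angle = np.clip(normals @ light, 0.0, 1.0)     # Lambert-style term
    return cos_angle                                    # 1 = fully lit, 0 = shadowed

gray = np.random.randint(0, 256, (64, 64), np.uint8)    # stand-in for the input image
w = shading_weights(gray)
print(w.shape, float(w.min()), float(w.max()))
```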

Implementation of AR Remote Rendering Techniques for Real-time Volumetric 3D Video

  • Lee, Daehyeon;Lee, Munyong;Lee, Sang-ha;Lee, Jaehyun;Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.90-97 / 2020
  • Recently, with the growth of the mixed reality industrial infrastructure, related convergence research has been proposed. For real-time mixed reality services such as remote video conferencing, research on real-time acquisition-processing-transfer methods is required. This paper aims to implement an AR remote rendering method for volumetric 3D video data. We propose and implement two modules: a module that parses the volumetric 3D video into a game engine, and a server rendering module. The experimental results showed that volumetric 3D video sequence data of about 15 MB was compressed by 6-7%. The remote module streamed at 27 fps at a resolution of 1200 by 1200. The results of this paper are expected to be applied to AR cloud services.
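
A minimal sketch of the acquire-compress-transfer idea behind the server module, using zlib and a length-prefixed TCP stream; the paper's actual codec, protocol, and game-engine parsing are not specified here, and all names below are assumptions.

```python
import socket
import struct
import zlib

def stream_frames(frames, host="127.0.0.1", port=9000):
    """Send each raw volumetric frame as a zlib-compressed, length-prefixed message."""
    with socket.create_connection((host, port)) as sock:
        for raw in frames:                                  # raw: bytes of one frame
            payload = zlib.compress(raw, level=6)
            sock.sendall(struct.pack("!I", len(payload)))   # 4-byte big-endian length
            sock.sendall(payload)
```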

Reconfigurable Architecture Design for H.264 Motion Estimation and 3D Graphics Rendering of Mobile Applications (이동통신 단말기를 위한 재구성 가능한 구조의 H.264 인코더의 움직임 추정기와 3차원 그래픽 렌더링 가속기 설계)

  • Park, Jung-Ae;Yoon, Mi-Sun;Shin, Hyun-Chul
    • Journal of KIISE:Computer Systems and Theory / v.34 no.1 / pp.10-18 / 2007
  • Mobile communication devices such as PDAs and cellular phones need to perform several kinds of computation-intensive functions, including H.264 encoding/decoding and 3D graphics processing. In this paper, a new reconfigurable architecture is described which can perform either motion estimation for H.264 or rendering for 3D graphics. The proposed motion estimation techniques use a new efficient SAD computation ordering together with the DAU and FDVS algorithms. The new approach reduces computation by 70% on average compared with JM 8.2, without affecting quality. In 3D rendering, a midline traversal algorithm is used for parallel processing to increase throughput. Memories are partitioned into 8 blocks so that 2.4 Mbits (47%) of memory can be shared and selective power shutdown is possible during motion estimation and 3D graphics rendering. Processing elements are also shared, further reducing the chip area by 7%.
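
As background for the motion-estimation part, the following is a plain full-search block matcher showing where the SAD cost is computed; the paper's SAD reordering, DAU, and FDVS optimizations (and the hardware mapping) are not reproduced here.

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def full_search(cur: np.ndarray, ref: np.ndarray, bx: int, by: int,
                block: int = 16, rng: int = 8):
    """Exhaustive search around (bx, by); returns the best motion vector and its SAD."""
    cur_blk = cur[by:by + block, bx:bx + block]
    best = (0, 0, float("inf"))
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - block and 0 <= x <= ref.shape[1] - block:
                cost = sad(cur_blk, ref[y:y + block, x:x + block])
                if cost < best[2]:
                    best = (dx, dy, cost)
    return best

cur = np.random.randint(0, 256, (64, 64), np.uint8)
ref = np.roll(cur, (2, 3), axis=(0, 1))          # synthetic "previous" frame
print(full_search(cur, ref, 16, 16))
```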

Non-Photorealistic Rendering Using CUDA-Based Image Segmentation (CUDA 기반 영상 분할을 사용한 비사실적 렌더링)

  • Yoon, Hyun-Cheol;Park, Jong-Seung
    • KIPS Transactions on Software and Data Engineering / v.4 no.11 / pp.529-536 / 2015
  • When three-dimensional objects and photo images are rendered together, the non-photorealistic rendering results are in visual discord because the two kinds of content have independent color distributions. This paper proposes a non-photorealistic rendering technique that renders both three-dimensional objects and photo images in styles such as cartoons and sketches. The proposed technique computes the color distribution of the photo images and reduces the number of colors of both the photo images and the 3D objects. NPR is then performed based on the reduced colormaps and edge features. To enhance the natural scene presentation, an image region segmentation step is preferred when extracting and applying colormaps. However, image segmentation requires a large amount of computation, so non-photorealistic rendering of large frames takes a long time. To speed up the time-consuming segmentation procedure, we use GPGPU parallel computing on the GPU. As a result, the execution speed of the algorithm is significantly improved.
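
A CPU-only sketch of the colormap-reduction and edge steps described above, with OpenCV k-means standing in for the segmentation-based colormap extraction; the paper's CUDA/GPGPU parallelization is omitted.

```python
import cv2
import numpy as np

def npr_cartoon(img_bgr: np.ndarray, n_colors: int = 8) -> np.ndarray:
    """Reduce the image to a small colormap and overlay dark edge lines."""
    pixels = img_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, n_colors, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    quantized = centers[labels.flatten()].reshape(img_bgr.shape).astype(np.uint8)
    edges = cv2.Canny(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY), 80, 160)
    quantized[edges > 0] = 0                     # draw edge features on top
    return quantized

img = np.random.randint(0, 256, (120, 160, 3), np.uint8)   # stand-in input frame
out = npr_cartoon(img)
print(out.shape, out.dtype)
```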

Effective Image Sequence Format in 3D Animation Production Pipeline (3D 애니메이션 제작 공정에 있어서 효율적인 이미지 시퀀스 포맷)

  • Kim, Ho
    • The Journal of the Korea Contents Association / v.7 no.8 / pp.134-141 / 2007
  • In the 3D animation rendering process, although the output can be rendered as a movie file, most productions use image sequences in their rendering pipelines. This image sequence rendering step is extremely important for final compositing in the movie industry. Although there are various ways of producing rendered images, the TGA format is one of the most widely used bitmap file formats in the industry. One may ask whether the TGA format is the most suitable in every case; since 3D software packages have their own image formats, this needs to be verified. In this paper, we focus on Alias' 3D package MAYA and analyze image sequence compression, image quality, alpha channel support in compositing, and Z-depth information. The purpose of this paper is to provide 3D pipelines with a guideline on effective image sequence formats.
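
The kind of format check discussed above can be sketched outside MAYA, here with Pillow: one RGBA frame of a sequence is written as TGA and as JPEG to see whether the alpha channel survives. File names and sizes are illustrative only.

```python
import numpy as np
from PIL import Image

# One tiny RGBA frame standing in for a rendered image-sequence frame.
frame = np.zeros((16, 16, 4), np.uint8)
frame[..., 3] = 128                                # semi-transparent alpha channel

Image.fromarray(frame, "RGBA").save("frame.0001.tga")
Image.fromarray(frame[..., :3], "RGB").save("frame.0001.jpg")   # JPEG cannot carry alpha

print(Image.open("frame.0001.tga").mode)           # 'RGBA' -> alpha preserved
print(Image.open("frame.0001.jpg").mode)           # 'RGB'  -> alpha lost
```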

Enhanced Image Mapping Method for Computer-Generated Integral Imaging System (집적 영상 시스템을 위한 향상된 이미지 매핑 방법)

  • Lee Bin-Na-Ra;Cho Yong-Joo;Park Kyoung-Shin;Min Sung-Wook
    • The KIPS Transactions:PartB / v.13B no.3 s.106 / pp.295-300 / 2006
  • The integral imaging system is an auto-stereoscopic display that allows users to see 3D images without wearing special glasses. In the integral imaging system, the 3D object information is captured from several viewpoints and stored as elemental images. Users can then see a 3D reconstructed image when the elemental images are displayed through a lens array. The elemental images can also be created by computer graphics, which is referred to as computer-generated integral imaging. The process of creating the elemental images is called image mapping. Several image mapping methods have been proposed in the past, such as PRR (Point Retracing Rendering), MVR (Multi-Viewpoint Rendering), and PGR (Parallel Group Rendering). However, they suffer from heavy rendering computation or hit a performance barrier as the number of elemental lenses in the lens array increases. Thus, it is difficult to use them in real-time graphics applications such as virtual reality or real-time interactive games. In this paper, we propose a new image mapping method named VVR (Viewpoint Vector Rendering) that improves real-time rendering performance. This paper first describes the concept of VVR and compares the performance of its image mapping process with that of previous methods, and then discusses possible directions for future improvements.
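
A toy sketch of multi-viewpoint style image mapping for context: one small elemental image is generated per elemental lens and tiled into a sheet. The render_view function is a hypothetical stand-in for a real renderer, and the sketch does not implement the paper's VVR method.

```python
import numpy as np

def render_view(lens_x: int, lens_y: int, size: int = 16) -> np.ndarray:
    """Placeholder renderer: encodes the lens position into a flat-colored tile."""
    tile = np.full((size, size, 3), 32, np.uint8)
    tile[..., 0] = (lens_x * 16) % 256
    tile[..., 1] = (lens_y * 16) % 256
    return tile

def make_elemental_images(nx: int, ny: int, size: int = 16) -> np.ndarray:
    """Render one elemental image per lens of an nx-by-ny lens array and tile them."""
    sheet = np.zeros((ny * size, nx * size, 3), np.uint8)
    for j in range(ny):
        for i in range(nx):
            sheet[j * size:(j + 1) * size, i * size:(i + 1) * size] = render_view(i, j, size)
    return sheet

print(make_elemental_images(10, 10).shape)   # (160, 160, 3): 10x10 lens array
```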

3D Real-Time Cartoon Rendering Design using Hardware (하드웨어를 이용한 3D 실시간 카툰렌더링 설계)

  • Han, Deuk-Su;Kim, Kwang-Min;Lim, Pyung-Jong;Kwak, Hoon-Sung
    • Proceedings of the Korea Information Processing Society Conference / 2006.11a / pp.219-222 / 2006
  • In the context of the emergence of digital culture and the changes in animation production techniques brought about by the digital era, this study analyzes cartoon rendering, a technique in the category of non-photorealistic rendering (NPR) among the 3D computer graphics techniques used in combination with cel animation works, and presents a real-time cartoon rendering technique for PC-based non-photorealistic rendering performed in real time. We review conventional cartoon rendering and examine how existing non-photorealistic image techniques have been applied.
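
A minimal sketch of the cel-shading idea behind cartoon rendering: the Lambertian diffuse term is quantized into a few flat bands instead of a smooth ramp. In a hardware pipeline this would typically be a 1D ramp-texture lookup in a shader; the Python below only illustrates the math.

```python
import numpy as np

def toon_shade(normal: np.ndarray, light_dir: np.ndarray, bands: int = 3) -> float:
    """Quantized Lambertian shading: returns a flat band value in [0, 1]."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    diffuse = max(float(n @ l), 0.0)
    return float(np.floor(diffuse * bands) / bands)   # step the ramp into flat bands

print(toon_shade(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.4, 0.85])))
```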

A Study on the Effective Image Sequence Format in 3D Animation Production (3D 애니메이션 제작에 있어서 효율적인 Image Sequence format에 관한 연구)

  • Kim Ho
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.131-136 / 2005
  • In the 3D animation rendering process, although the output can be rendered as a movie file, most productions use image sequences in their rendering pipelines. This image sequence rendering step is extremely important for final compositing in the movie industry. Although there are various ways of producing rendered images, the TGA format is one of the most widely used bitmap file formats in the industry. One may ask whether the TGA format is the most suitable in every case; since 3D software packages have their own image formats, this needs to be verified. In this paper, we focus on Alias' 3D package MAYA and analyze image sequence compression, image quality, alpha channel support in compositing, and Z-depth information. The purpose of this paper is to provide 3D pipelines with a guideline on effective image sequence formats.
