• Title/Summary/Keyword: Rendering Map

A Haptic Rendering Technique for 3D Objects with Vector Field (벡터 필드를 가진 3차원 오브젝트의 햅틱 렌더링 기법)

  • Kim, Lae-Hyun; Park, Se-Hyung
    • Journal of KIISE: Computer Systems and Theory, v.33 no.4, pp.216-222, 2006
  • Vector fields are commonly used to visualize data that are invisible or hard to explain, for instance scientific data such as the direction and magnitude of wind or water flow, heat transfer through thermally conductive materials, and electromagnetic fields. In this paper, we present a technique that enables intuitive recognition of such data through haptic feedback combined with visual feedback. To add tactile information to a graphical vector field, we model a haptic vector field and apply it to a haptic map that guides a user to a destination and to a haptic simulation of water flow on 2D images, both of which can be used in everyday life. These systems allow one to recognize vector information intuitively through a haptic interface. We expect that this haptic rendering technique for vector fields can be applied to various applications such as education, training, and entertainment.
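As an illustration of how a graphical vector field can be turned into a force command, the following minimal Python sketch interpolates a 2D vector field at the haptic cursor position and scales it by a gain. The grid sampling, the bilinear interpolation, and the gain constant are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def force_from_vector_field(field, pos, gain=0.5):
    """Return a haptic force for a cursor position inside a 2D vector field.

    field : (H, W, 2) array of (fx, fy) samples on a regular grid
    pos   : (x, y) cursor position in pixel coordinates
    gain  : scale factor mapping field magnitude to device force (assumed)
    """
    h, w, _ = field.shape
    x = np.clip(pos[0], 0, w - 1.001)
    y = np.clip(pos[1], 0, h - 1.001)
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    # Bilinear interpolation of the four neighbouring field samples
    f = (field[y0, x0]         * (1 - dx) * (1 - dy) +
         field[y0, x0 + 1]     * dx       * (1 - dy) +
         field[y0 + 1, x0]     * (1 - dx) * dy       +
         field[y0 + 1, x0 + 1] * dx       * dy)
    return gain * f

# Example: a field that pushes the cursor toward the image centre
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
field = np.dstack([(w / 2 - xs), (h / 2 - ys)]).astype(float)
field /= np.linalg.norm(field, axis=2, keepdims=True) + 1e-8
print(force_from_vector_field(field, (10.0, 10.0)))
```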

A Study on Create Depth Map using Focus/Defocus in single frame (단일 프레임 영상에서 초점을 이용한 깊이정보 생성에 관한 연구)

  • Han, Hyeon-Ho; Lee, Gang-Seong; Lee, Sang-Hun
    • Journal of Digital Convergence, v.10 no.4, pp.191-197, 2012
  • In this paper, we present a method for creating a 3D image from a 2D image by extracting initial depth values derived from focus measurements. The initial depth values are obtained from focal information computed by comparing the original image with its Gaussian-filtered version. This initial depth information is assigned to the object segments obtained with the normalized cut technique, and the depth of each object is then corrected to the average depth within that object so that a single object has a uniform depth. The generated depth map is used to convert the image to 3D using DIBR (Depth Image Based Rendering), and the resulting 3D image is compared with images generated by other techniques.
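The focus measure described above (difference between the original image and its Gaussian-blurred version, averaged per segment) can be illustrated with a short sketch, assuming OpenCV and a precomputed label map. The sigma value and the 8-bit normalization are assumptions; the paper uses normalized cut for the segmentation itself.

```python
import numpy as np
import cv2

def initial_depth_from_focus(gray, labels, sigma=2.0):
    """Rough per-segment depth from a single image, following the idea of
    comparing the original image with its Gaussian-blurred version.

    gray   : single-channel image (uint8 or float32)
    labels : integer segment map of the same size (from any segmentation
             method; the paper uses normalized cut)
    """
    gray = gray.astype(np.float32)
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
    # Sharp (in-focus) regions change more under blurring
    focus = np.abs(gray - blurred)
    depth = np.zeros_like(focus)
    for seg in np.unique(labels):
        mask = labels == seg
        # Every pixel of an object gets the object's mean focus value
        depth[mask] = focus[mask].mean()
    # Normalise to 0..255 so the result can serve as an 8-bit depth map
    depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX)
    return depth.astype(np.uint8)
```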

Up-Sampling Method of Depth Map Using Weighted Joint Bilateral Filter (가중치 결합 양방향 필터를 이용한 깊이 지도의 업샘플링 방법)

  • Oh, Dong-ryul; Oh, Byung Tae; Shin, Jitae
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.6, pp.1175-1184, 2015
  • A depth map is an image that contains 3D distance information. It is generally difficult to acquire a high-resolution, noise-free, good-quality depth map directly from a camera, so much research has focused on obtaining such depth maps by up-sampling and pre-/post-processing a low-resolution depth map. However, much of this work lacks effective up-sampling in edge regions, which strongly affect perceptual image quality. In this paper, we propose an up-sampling method based on the joint bilateral filter that improves up-sampling of edge regions and the visual quality of synthesized images by assigning different weights to edge parts, which are sensitive to human perception. The proposed method shows gains in PSNR and subjective video quality over previous work.
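The core operation is joint bilateral up-sampling of a low-resolution depth map guided by a high-resolution color image. The unoptimized sketch below shows only the plain joint bilateral filter; the paper's perception-driven edge re-weighting is omitted, and the filter radius and sigma values are assumptions, not values from the paper.

```python
import numpy as np
import cv2

def joint_bilateral_upsample(depth_lr, color_hr, radius=5,
                             sigma_s=3.0, sigma_r=12.0):
    """Plain joint bilateral up-sampling of a low-resolution depth map guided
    by a high-resolution 3-channel colour image (demonstration only; O(HWr^2))."""
    h, w = color_hr.shape[:2]
    # Naively upscale first, then re-filter with colour-guided weights
    depth_up = cv2.resize(depth_lr, (w, h),
                          interpolation=cv2.INTER_LINEAR).astype(np.float32)
    guide = color_hr.astype(np.float32)
    out = np.zeros_like(depth_up)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = radius
    d_pad = np.pad(depth_up, pad, mode='edge')
    g_pad = np.pad(guide, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    for y in range(h):
        for x in range(w):
            d_win = d_pad[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            g_win = g_pad[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            # Range weight: colour similarity to the centre pixel of the guide
            diff = g_win - guide[y, x]
            range_w = np.exp(-(diff ** 2).sum(axis=2) / (2 * sigma_r ** 2))
            w_all = spatial * range_w
            out[y, x] = (w_all * d_win).sum() / (w_all.sum() + 1e-8)
    return out
```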

3D Clothes Modeling of Virtual Human for Metaverse (메타버스를 위한 가상 휴먼의 3차원 의상 모델링)

  • Kim, Hyun Woo; Kim, Dong Eon; Kim, Yujin; Park, In Kyu
    • Journal of Broadcast Engineering, v.27 no.5, pp.638-653, 2022
  • In this paper, we propose a new method of creating a 3D virtual human that reflects the pattern of the clothes worn by a person in a high-resolution whole-body front image, together with the person's body shape data. To obtain the clothing pattern, we perform instance segmentation and clothes parsing with Cascade Mask R-CNN. We then use Pix2Pix to blur the boundaries and estimate the background color, and obtain the UV map of the 3D clothes mesh through UV-map-based warping. We also estimate the body shape with SMPL-X and deform the original clothes and body mesh accordingly. With the clothes UV map and the deformed clothes and body mesh, the user can finally see an animation of a 3D virtual human reflecting the user's appearance, rendered with a state-of-the-art game engine, i.e., Unreal Engine.

Object Segmentation Using Depth Map (깊이 맵을 이용한 객체 분리 방법)

  • Yu, Kyung-Min; Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2013.10a, pp.639-640, 2013
  • This study presents a new method that finds the area containing objects of interest in order to generate higher-quality DIBR-based intermediate images. The method complements the existing GrabCut segmentation algorithm by finding the bounding box automatically, whereas the existing algorithm requires a user to select the region manually. The histogram of the depth map is then used to separate the background from the frontal objects after the GrabCut algorithm is applied. The new method is found to produce better results than the existing algorithm. This paper describes the method and future research.
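A minimal sketch of the described pipeline, assuming OpenCV: the bounding box is derived from a thresholded depth map and passed to GrabCut instead of a user-drawn rectangle. Otsu thresholding stands in here for the paper's depth-histogram analysis, so the thresholding step and all parameters are assumptions.

```python
import numpy as np
import cv2

def segment_with_depth_box(image, depth, iterations=5):
    """Segment the frontal object with GrabCut, initialising the bounding
    box automatically from an 8-bit depth map instead of a user selection.

    image : 8-bit BGR image
    depth : 8-bit single-channel depth map (larger value assumed nearer)
    """
    # Depth values above an automatic (Otsu) threshold = candidate foreground
    _, fg = cv2.threshold(depth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.where(fg > 0)
    if len(xs) == 0:
        return np.zeros(image.shape[:2], np.uint8)
    rect = (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min()), int(ys.max() - ys.min()))
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd, fgd, iterations, cv2.GC_INIT_WITH_RECT)
    # Definite and probable foreground labels form the final object mask
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)
```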

Extracting Graphics Information for Better Video Compression

  • Hong, Kang Woon; Ryu, Won; Choi, Jun Kyun; Lim, Choong-Gyoo
    • ETRI Journal, v.37 no.4, pp.743-751, 2015
  • Cloud gaming services depend heavily on the efficiency of real-time video streaming technology owing to the limited bandwidth of the wired or wireless networks through which consecutive frame images are delivered to gamers. Video compression algorithms typically take advantage of similarities among video frames or within a single frame. This paper presents a method for computing and extracting both graphics information and an object's boundary from consecutive frame images of a game application. The method allows video compression algorithms to determine the positions and sizes of similar image blocks, which in turn helps achieve better compression ratios. The proposed method can be easily implemented using function call interception, a programmable graphics pipeline, and off-screen rendering. It is implemented with the widely used Direct3D API and applied to a well-known sample application to verify its feasibility and analyze its performance. The proposed method computes various kinds of graphics information with minimal overhead.

Hole-Filling Method for Depth-Image-Based Rendering for which Modified-Patch Matching is Used (개선된 패치 매칭을 이용한 깊이 영상 기반 렌더링의 홀 채움 방법)

  • Cho, Jea-Hyung; Song, Wonseok; Choi, Hyuk
    • Journal of KIISE, v.44 no.2, pp.186-194, 2017
  • Depth-image-based rendering (DIBR) is a technique that can be applied to a variety of 3D display systems; it uses a depth map to generate images as if captured from virtual viewpoints. However, filling disoccluded holes remains a challenging issue, because newly exposed areas appear in the virtual view. Image inpainting is a popular approach for filling these hole regions. This paper presents a robust hole-filling method that reduces errors and generates a high-quality virtual view. First, an adaptive patch size is decided using color and depth information. A partial filling method based on patch similarity is also proposed. These steps reduce both the occurrence and the propagation of errors. The experimental results show that the proposed method synthesizes a virtual view with higher visual comfort than existing methods.
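The abstract does not spell out the adaptive patch-matching fill itself, so the snippet below only marks where hole filling sits in a DIBR pipeline, using OpenCV's standard Telea inpainting as a stand-in baseline rather than the proposed method.

```python
import cv2

def fill_disocclusion_holes(virtual_view, hole_mask):
    """Baseline hole filling for a DIBR-synthesised view using OpenCV's
    Telea inpainting. The paper replaces this step with a patch-matching
    scheme whose patch size adapts to colour and depth information.

    virtual_view : 8-bit BGR image warped to the virtual viewpoint
    hole_mask    : 8-bit mask, non-zero where disoccluded pixels appear
    """
    return cv2.inpaint(virtual_view, hole_mask, 3, cv2.INPAINT_TELEA)
```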

Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho; Hong, Yeong-Pyo; Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society, v.13 no.3, pp.1302-1309, 2012
  • In this paper, we propose applying differential 2D/3D video conversion techniques to objects grouped by depth information. One problem with converting 2D images to 3D by tracking pixel motion is that objects that do not move between adjacent frames provide no depth information. This can be solved by applying a relative-height cue only to objects with no motion information between frames, after separating the background from the objects and extracting depth information from the motion vectors between objects. With this technique, both the background and every object obtain their own depth information. The proposed method is used to generate a depth map for creating 3D images with DIBR (Depth Image Based Rendering), and we verified that objects with no movement between frames also receive depth information.
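For context, the DIBR step that consumes such a depth map can be sketched as a simple horizontal pixel shift driven by a depth-derived disparity. The linear depth-to-disparity mapping, the maximum disparity, and the "larger depth value means nearer" convention are assumptions for illustration.

```python
import numpy as np

def dibr_shift(image, depth, max_disparity=16):
    """Minimal DIBR: shift each pixel horizontally by a disparity derived
    from its depth value to synthesise a second (right-eye) view.

    image : (H, W, 3) colour image
    depth : (H, W) 8-bit depth map, larger value assumed nearer
    """
    h, w = depth.shape
    right = np.zeros_like(image)
    best = np.full((h, w), -1, int)          # nearest source seen per target
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            # Nearer pixels (larger disparity) win when targets collide
            if 0 <= nx < w and disparity[y, x] > best[y, nx]:
                right[y, nx] = image[y, x]
                best[y, nx] = disparity[y, x]
    holes = best < 0                          # disoccluded pixels to be filled
    return right, holes
```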

Depth-map Preprocessing Algorithm Using Two Step Boundary Detection for Boundary Noise Removal (경계 잡음 제거를 위한 2단계 경계 탐색 기반의 깊이지도 전처리 알고리즘)

  • Pak, Young-Gil; Kim, Jun-Ho; Lee, Si-Woong
    • The Journal of the Korea Contents Association, v.14 no.12, pp.555-564, 2014
  • Boundary noise in DIBR-based image synthesis consists of noisy pixels that are separated from foreground objects into the background region. It is generated mainly by edge misalignment between the reference image and the depth map, or by blurred edges in the reference image. Since hole areas are generally filled with neighboring pixels, boundary noise adjacent to a hole is the main cause of quality degradation in synthesized images. To solve this problem, this paper proposes a new boundary noise removal algorithm based on preprocessing of the depth map. The most common way to eliminate boundary noise caused by boundary misalignment is to modify the depth map so that its boundaries match those of the reference image. Most conventional methods, however, detect boundaries poorly, especially at blurred edges, because they rely on a simple boundary search that exploits only the signal gradient. The proposed method adopts a two-step hierarchical approach that enables effective boundary detection between the transition and background regions. Experimental results show that the proposed method outperforms conventional ones both subjectively and objectively.

A Parallel Processing Technique for Large Spatial Data (대용량 공간 데이터를 위한 병렬 처리 기법)

  • Park, Seunghyun; Oh, Byoung-Woo
    • Spatial Information Research, v.23 no.2, pp.1-9, 2015
  • A graphics processing unit (GPU) contains many arithmetic logic units (ALUs). Because these ALUs can be exploited for parallel processing, a GPU provides efficient data processing. Spatial data require many geographic coordinates to represent shapes on a map, and the coordinates are usually stored as geodetic longitude and latitude. To display a map in a 2-dimensional Cartesian coordinate system, the geodetic longitude and latitude must be converted to the Universal Transverse Mercator (UTM) coordinate system. Both this coordinate conversion and the rendering process that maps the converted coordinates to the screen involve heavy floating-point computation. In this paper, we propose a parallel processing technique that performs the conversion and the rendering on the GPU to improve performance. Large spatial data sets are stored on disk in files; to process them efficiently, we propose merging the spatial data files into one large file and accessing it as a memory-mapped file. We implement the proposed technique and run experiments with the 747,302,971 points of the TIGER/Line spatial data. The results show that coordinate conversion on the GPU is 30.16 times faster than the CPU-only method and rendering is 80.40 times faster.
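A rough sketch of the described data path, assuming the merged file stores float64 longitude/latitude pairs: the file is accessed through a memory-mapped array and converted to UTM in bulk. The conversion here uses a vectorised pyproj call on the CPU, whereas the paper performs it on the GPU; the binary layout and the UTM zone are assumptions.

```python
import numpy as np
from pyproj import Transformer

def convert_memmapped_points(path, n_points, utm_epsg=32652):
    """Convert a memory-mapped file of geodetic lon/lat points to UTM.

    path     : merged binary file of float64 (longitude, latitude) pairs
    n_points : number of points stored in the file
    utm_epsg : EPSG code of the target UTM zone (assumed here)
    """
    # Memory-mapped view: no full read of the large file into RAM up front
    pts = np.memmap(path, dtype=np.float64, mode='r', shape=(n_points, 2))
    to_utm = Transformer.from_crs("EPSG:4326", f"EPSG:{utm_epsg}",
                                  always_xy=True)
    # Bulk conversion of all coordinates (GPU kernel in the paper)
    easting, northing = to_utm.transform(pts[:, 0], pts[:, 1])
    return np.column_stack([easting, northing])
```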