• Title/Summary/Keyword: realistic image


3D Image Capturing and 3D Content Generation for Realistic Broadcasting (실감방송을 위한 3차원 영상 촬영 및 3차원 콘텐츠 제작 기술)

  • Kang, Y.S.; Ho, Y.S.
    • Smart Media Journal, v.1 no.1, pp.10-16, 2012
  • Stereo and multi-view cameras have been used to capture three-dimensional (3D) scenes for 3D content generation. In addition, depth sensors are frequently used to obtain 3D information about the captured scene in real time. In order to generate 3D content from captured images, several preprocessing operations are needed to reduce noise and distortion in the images. 3D content is considered the basic medium for realistic broadcasting, which provides a photo-realistic and immersive feeling to users. In this paper, we review technical trends in 3D image capture and content generation, and explain some core techniques of 3D image processing for realistic 3DTV broadcasting.
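
A minimal sketch of the kind of depth-map preprocessing the abstract mentions (noise reduction before content generation). The filter choices, parameters, and file names are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative depth-map preprocessing: median filtering to suppress speckle
# noise plus edge-preserving smoothing. File names and parameters are
# assumptions made for this sketch, not taken from the paper.
import cv2
import numpy as np

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)        # 16-bit depth, assumed
depth_f = depth.astype(np.float32)

# Remove salt-and-pepper style sensor noise (small kernel keeps edges).
depth_median = cv2.medianBlur(depth_f, 5)

# Edge-preserving smoothing of the remaining depth noise.
depth_smooth = cv2.bilateralFilter(depth_median, d=9, sigmaColor=30.0, sigmaSpace=9.0)

# Simple validity mask: zero depth means "no measurement" for many sensors.
valid = depth_smooth > 0

cv2.imwrite("depth_clean.png", depth_smooth.astype(np.uint16))
print("valid pixels:", int(valid.sum()))
```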


Image-based Realistic Facial Expression Animation

  • Yang, Hyun-S.; Han, Tae-Woo; Lee, Ju-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 1999.06a, pp.133-140, 1999
  • In this paper, we propose an image-based three-dimensional modeling method for realistic facial expression. In the proposed method, real human facial images are used to deform a generic three-dimensional mesh model, and the deformed model is animated to generate facial expression animation. First, we take several pictures of the same person from several viewing angles. Then we project a three-dimensional face model onto the plane of each facial image and match the projected model with each image. The results are combined to generate a deformed three-dimensional model. We use feature-based image metamorphosis to match the projected models with the images. We then create a synthetic image from the two-dimensional images of a specific person's face. This synthetic image is texture-mapped onto the cylindrical projection of the three-dimensional model. We also propose a muscle-based animation technique to generate realistic facial expression animations, which facilitates control of the animation. Lastly, we show the animation results for the six representative facial expressions.
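
The cylindrical texture-mapping step mentioned in the abstract can be illustrated with a small sketch: each mesh vertex is mapped to (u, v) coordinates on a cylinder around the head's vertical axis. The axis choice and normalization below are assumptions for illustration, not the authors' exact formulation.

```python
# Map 3D face-mesh vertices to cylindrical texture coordinates (u, v).
# Assumes the y-axis is the head's vertical axis and the mesh is roughly
# centered on it; both are assumptions made for this sketch.
import numpy as np

def cylindrical_uv(vertices: np.ndarray) -> np.ndarray:
    """vertices: (N, 3) array of (x, y, z) -> (N, 2) array of (u, v) in [0, 1]."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(x, z)                        # angle around the vertical axis
    u = (theta + np.pi) / (2.0 * np.pi)             # wrap angle into [0, 1]
    v = (y - y.min()) / (y.max() - y.min() + 1e-8)  # normalized height
    return np.stack([u, v], axis=1)

verts = np.random.rand(1000, 3) - 0.5               # stand-in for a face mesh
uv = cylindrical_uv(verts)
print(uv.shape, uv.min(), uv.max())
```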

Augmented Reality of Robust Tracking with Realistic Illumination

  • Kim, Young-Baek; Lee, Hong-Chang; Rhee, Sang-Yong
    • International Journal of Fuzzy Logic and Intelligent Systems, v.10 no.3, pp.178-183, 2010
  • In this study we augment a virtual object onto an image of a flexible surface, such as a sheet of paper, acquired from a web camera. To increase the sense of presence, we take realistic illumination into account when augmenting. To obtain the geometric relation between the camera and the flexible surface, we use markers printed on the surface. Using the known marker information, the three-dimensional coordinates of the surface can be calculated. After the markers are removed from the input image, we attach a two-dimensional texture and a shadow to the flexible surface while accounting for realistic illumination.
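
A minimal sketch of the marker-based geometry step described above (recovering the camera-to-surface pose from detected marker corners). The marker size, camera intrinsics, and corner coordinates are placeholder assumptions, and the illumination and shadow parts of the paper are not reproduced here.

```python
# Estimate the pose of a planar marker from its four detected image corners.
# Marker size, camera intrinsics, and the corner coordinates are placeholder
# assumptions; a real system would detect the corners in each frame.
import cv2
import numpy as np

marker_len = 0.05  # assumed marker side length in meters
obj_pts = np.array([[0, 0, 0],
                    [marker_len, 0, 0],
                    [marker_len, marker_len, 0],
                    [0, marker_len, 0]], dtype=np.float32)

img_pts = np.array([[320, 240], [380, 242], [378, 300], [318, 298]],
                   dtype=np.float32)               # detected corners (assumed)

K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float32)        # assumed camera intrinsics
dist = np.zeros(5, dtype=np.float32)               # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)                     # rotation: surface -> camera
    print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```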

Incremental Image-Based Motion Rendering Technique for Implementation of Realistic Computer Animation (사실적인 컴퓨터 애니메이션 구현을 위한 증분형 영상 기반 운동 렌더링 기법)

  • Han, Young-Mo
    • The KIPS Transactions: Part B, v.15B no.2, pp.103-112, 2008
  • Image-based motion capture technology is often used to make realistic computer animation. In this paper we implement image-based motion rendering by attaching a camera to a PC. Existing image-based rendering algorithms suffer from either a high computational burden or low accuracy; the former makes animation production take too long, while the latter degrades the realism of the resulting animation. To compensate for these disadvantages, this paper presents an image-based motion rendering algorithm with low computational load and high estimation accuracy. In the proposed approach, an incremental motion rendering algorithm with low computational load is analyzed from the perspective of optimal control theory and revised so that its estimation accuracy is enhanced. When applied to optical motion capture systems, the proposed approach offers the additional advantages that motion capture can be performed without any markers and at low cost in terms of equipment and space.
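
The "incremental, low computational load" idea can be illustrated with a generic recursive estimator that refines a motion parameter frame by frame instead of re-solving from scratch. This is a textbook-style sketch under assumed scalar random-walk dynamics, not the algorithm derived in the paper.

```python
# Generic incremental (recursive) estimation of a scalar motion parameter:
# each new measurement updates the previous estimate with O(1) work instead
# of re-fitting all frames. Noise levels and dynamics are assumptions.
import numpy as np

def incremental_update(x_est, p_est, z, q=1e-4, r=1e-2):
    """One predict/update step for a random-walk state (Kalman-style)."""
    p_pred = p_est + q                 # predict: uncertainty grows slightly
    k = p_pred / (p_pred + r)          # gain: trust measurement vs. prediction
    x_new = x_est + k * (z - x_est)    # correct estimate with the innovation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

rng = np.random.default_rng(0)
true_angle = np.linspace(0.0, 1.0, 200)              # slowly varying motion
measurements = true_angle + rng.normal(0, 0.1, 200)  # noisy per-frame estimates

x, p = measurements[0], 1.0
for z in measurements[1:]:
    x, p = incremental_update(x, p, z)
print("final estimate:", x, "true value:", true_angle[-1])
```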

An Efficient Feature Point Extraction and Comparison Method through Distorted Region Correction in 360-degree Realistic Contents

  • Park, Byeong-Chan; Kim, Jin-Sung; Won, Yu-Hyeon; Kim, Young-Mo; Kim, Seok-Yoon
    • Journal of the Korea Society of Computer and Information, v.24 no.1, pp.93-100, 2019
  • One of the critical issues in dealing with 360-degree realistic contents is the performance degradation in the search and recognition process, since such contents support up to 4K UHD quality and cover all viewing angles, including the front, back, left, right, top, and bottom parts of a screen. To solve this problem, in this paper we propose an efficient search and comparison method for 360-degree realistic contents. The proposed method first corrects the distortion in the less distorted regions, such as the front, left, and right parts of the image, excluding the severely distorted upper and lower parts; it then extracts feature points from the corrected regions and selects representative images through sequence classification. When a query image is input, search results are provided through feature point comparison. The experimental results show that the proposed method can solve the performance deterioration problem when 360-degree realistic contents are recognized, compared with traditional 2D contents.
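
A rough sketch of the "extract feature points in the corrected region and compare with a query" step, using ORB features and brute-force Hamming matching as stand-ins for the paper's descriptors. The crop-based region selection below is a crude substitute for the paper's distortion correction, and the file names and threshold are assumptions.

```python
# Feature-point extraction on the less distorted band of an equirectangular
# frame and comparison against a query image. ORB + Hamming matching are
# stand-ins; file names and the crop-based "correction" are assumptions.
import cv2

frame = cv2.imread("equirect_frame.jpg", cv2.IMREAD_GRAYSCALE)
query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)

# Crude region selection: keep the middle latitude band (front/left/right),
# discarding the heavily distorted top and bottom of the equirectangular image.
h, w = frame.shape
front_band = frame[h // 4: 3 * h // 4, :]

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(front_band, None)
kp2, des2 = orb.detectAndCompute(query, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

score = len([m for m in matches if m.distance < 40])  # assumed threshold
print("good matches:", score)
```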

A Technical Analysis on Deep Learning based Image and Video Compression (딥 러닝 기반의 이미지와 비디오 압축 기술 분석)

  • Cho, Seunghyun; Kim, Younhee; Lim, Woong; Kim, Hui Yong; Choi, Jin Soo
    • Journal of Broadcast Engineering, v.23 no.3, pp.383-394, 2018
  • In this paper, we investigate image and video compression techniques based on deep learning, which have been actively studied recently. Deep learning based image compression feeds the image to be compressed into a deep neural network, extracts a latent vector recurrently or all at once, and encodes it. To increase compression efficiency, the neural network is trained so that the encoded latent vector can be expressed with fewer bits while the quality of the reconstructed image is enhanced. These techniques can produce images of superior quality, especially at low bit rates, compared with conventional image compression techniques. On the other hand, deep learning based video compression technology takes the approach of improving the performance of the coding tools employed in existing video codecs rather than directly processing the video to be compressed. The deep neural network technologies introduced in this paper replace the in-loop filter of the latest video codecs or are used as an additional post-processing filter to improve compression efficiency by improving the quality of the reconstructed image. Likewise, deep neural network techniques applied to intra prediction and encoding are used together with the existing intra prediction tools to improve compression efficiency by increasing prediction accuracy or adding a new intra coding process.
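
A minimal sketch of the latent-vector idea described above: a convolutional autoencoder trained with a loss that trades reconstruction quality against how compactly the latent can be coded (approximated here by an L1 penalty). This is a toy illustration, not any of the surveyed codecs; the layer sizes, rate proxy, and loss weight are assumptions.

```python
# Toy learned image compression: encode to a small latent, decode back, and
# train with distortion (MSE) plus a crude rate proxy (L1 on the latent).
# Architecture and loss weights are assumptions for illustration only.
import torch
import torch.nn as nn

class TinyCompressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 8, 3, padding=1),            # 8-channel latent
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = TinyCompressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(4, 3, 64, 64)                       # stand-in training batch

recon, latent = model(x)
distortion = nn.functional.mse_loss(recon, x)
rate_proxy = latent.abs().mean()                   # crude stand-in for bit cost
loss = distortion + 0.01 * rate_proxy
loss.backward()
opt.step()
print(float(distortion), float(rate_proxy))
```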

Facial Image Synthesis by Controlling Skin Microelements (피부 미세요소 조절을 통한 얼굴 영상 합성)

  • Kim, Yujin; Park, In Kyu
    • Journal of Broadcast Engineering, v.27 no.3, pp.369-377, 2022
  • Recent deep learning-based face synthesis research has produced realistic faces, including overall style and elements such as hair, glasses, and makeup. However, previous methods cannot create a face at a very detailed level, such as the microstructure of the skin. In this paper, to overcome this limitation, we propose a technique for synthesizing a more realistic facial image from a single face label image by controlling the types and intensity of skin microelements. The proposed technique uses Pix2PixHD, an image-to-image translation method, to convert a label image indicating the facial region and skin elements such as wrinkles, pores, and redness into a facial image with the microelements added. Experimental results show that various realistic face images reflecting fine skin elements can be created by generating label images with correspondingly adjusted skin element regions.
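
Controlling "the types and intensity of skin microelements" amounts to editing the label input before translation. The sketch below only prepares such an edited label map with NumPy; the label indices are assumptions, and `generator` is a hypothetical stand-in for a trained Pix2PixHD-style network, which is not reproduced here.

```python
# Adjust skin-microelement regions in a label map before image-to-image
# translation. Label indices, the intensity channel, and `generator` are
# hypothetical; a trained Pix2PixHD-style network would be plugged in.
import numpy as np

FACE, WRINKLE, PORE, REDNESS = 1, 2, 3, 4          # assumed label indices

def adjust_microelements(label_map, intensity, element, scale):
    """Scale the intensity of one skin element inside its labeled region."""
    out = intensity.copy()
    mask = label_map == element
    out[mask] = np.clip(out[mask] * scale, 0.0, 1.0)
    return out

h, w = 256, 256
label_map = np.full((h, w), FACE, dtype=np.uint8)  # stand-in label image
label_map[100:120, 60:200] = WRINKLE
intensity = np.random.rand(h, w).astype(np.float32)

stronger_wrinkles = adjust_microelements(label_map, intensity, WRINKLE, 1.5)

# synthetic = generator(label_map, stronger_wrinkles)  # hypothetical network call
print(stronger_wrinkles.max(), stronger_wrinkles[110, 100])
```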

A WEATHERED IMAGE GENERATION METHOD FOR LANDSCAPE SIMULATION

  • Mukai, Nobuhiko; Morino, Masashi; Kosugi, Makoto
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.816-820, 2009
  • In landscape simulation, it is necessary to express very realistic images generated by computer graphics. One solution is texture mapping; however, obtaining images for texture mapping requires a great deal of work and time, since there is a huge variety of images for buildings, roads, stations, and so on, and the landscape changes with the weather and time of day. In particular, weathered images, such as stains on walls and cracks on roads, are needed to make a landscape image very realistic. These weathered images do not have to be exact, so a great deal of work and time spent collecting texture images can be saved if a variety of weathered images can be generated automatically. Therefore, this paper describes how to generate a variety of weathered images automatically by changing the weathered shape of the original image.
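
A small sketch of the general idea of generating weathering procedurally: a noise-based stain mask is blended over a clean wall texture, and changing the mask's shape parameters yields different weathered variants. The noise recipe, threshold, and blend factor are assumptions, not the paper's method.

```python
# Procedurally generate a stain-like weathering mask and blend it onto a
# clean texture. The noise recipe, threshold, and blend factor are
# assumptions chosen for illustration only.
import cv2
import numpy as np

rng = np.random.default_rng(1)
wall = cv2.imread("wall.jpg").astype(np.float32) / 255.0
h, w = wall.shape[:2]

# Low-frequency noise -> blobby stain shapes after blurring and thresholding.
noise = rng.random((h // 8, w // 8)).astype(np.float32)
noise = cv2.resize(noise, (w, h), interpolation=cv2.INTER_CUBIC)
noise = cv2.GaussianBlur(noise, (0, 0), sigmaX=15)
stain_mask = np.clip((noise - 0.5) * 4.0, 0.0, 1.0)[..., None]

stain_color = np.array([0.15, 0.18, 0.20], dtype=np.float32)  # dark grime tone
weathered = wall * (1.0 - 0.6 * stain_mask) + stain_color * (0.6 * stain_mask)

cv2.imwrite("wall_weathered.jpg", (weathered * 255).astype(np.uint8))
```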


A Survey of Image-based Virtual Try-on Technology (이미지 기반 가상 착용 이미지 합성 기술 동향)

  • S.C. Park; J.A. Park; J.Y. Park
    • Electronics and Telecommunications Trends, v.39 no.3, pp.107-115, 2024
  • Image synthesis has developed remarkably in the computer vision domain, and various studies have been proposed to generate realistic and high-resolution images. In particular, image-based virtual try-on is an application in the fashion domain that simulates wearing clothes. Specifically, given input images of a fashion model and products, a realistic image of the model wearing the provided garments is synthesized. In this paper, we present a comprehensive review of technical trends in image-based virtual try-on technology. We first introduce relevant datasets and discuss their characteristics. Then, we categorize existing image synthesis methods into three main streams: warping-based methods, encoding-decoding-based methods, and diffusion-based methods. Finally, we explore other important research issues in the field of virtual try-on and analyze related research aimed at tackling those challenges.
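
As a toy illustration of the warping-based stream mentioned above: a garment image is aligned to body keypoints with a simple affine transform before compositing. Real methods learn dense warps (e.g. TPS or flow fields); the keypoints and file names below are placeholder assumptions.

```python
# Crude warping-based try-on step: align a garment image to three assumed
# body keypoints with an affine transform, then composite it onto the person
# image using the garment's alpha mask. Keypoints and files are placeholders.
import cv2
import numpy as np

person = cv2.imread("person.jpg")
garment = cv2.imread("garment.png", cv2.IMREAD_UNCHANGED)    # BGRA with mask

# Three corresponding points: shoulders and waist center (assumed values).
src = np.float32([[40, 30], [260, 30], [150, 300]])          # on garment image
dst = np.float32([[180, 220], [360, 225], [270, 520]])       # on person image

M = cv2.getAffineTransform(src, dst)
h, w = person.shape[:2]
warped = cv2.warpAffine(garment, M, (w, h))

alpha = warped[..., 3:4].astype(np.float32) / 255.0          # warped mask
composite = person.astype(np.float32) * (1 - alpha) + warped[..., :3] * alpha
cv2.imwrite("tryon.jpg", composite.astype(np.uint8))
```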

Realistic-Contents Generation Techniques with Stereoscopic and Composite Image Data (영상 데이터의 입체화 및 합성 기반 실감 콘텐츠 생성 기법)

  • Kim, Manbae; Hong, Donghee; Cho, Youngran; Kim, Haksoo
    • Journal of Broadcast Engineering, v.9 no.4 s.25, pp.402-410, 2004
  • Recently, there has been much interest in realistic broadcasting, a new field following HDTV and 3DTV. In general, realistic broadcasting is composed of diverse components such as acquisition, authoring, compression, transmission, and display, posing many challenging tasks. The types of realistic contents need to be defined prior to the development of realistic broadcasting systems, and the other components should then be designed and developed based on them. In this paper, we propose several realistic contents suitable for realistic broadcasting, as well as techniques for generating them. The proposed contents consist of stereoscopic multiview sequences, object-based stereoscopic images, depth map-based image compositing, and the composition of stereoscopic real and graphics images. Content generation techniques and their associated software modules are presented, along with realistic images produced in our experiments. These contents are produced to deliver stereoscopic perception, immersion, and realism to users, as shown in our experimental results.
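
The depth map-based compositing and stereoscopic generation theme can be illustrated with a very simple depth-image-based rendering (DIBR) sketch: pixels are shifted horizontally in proportion to their depth to synthesize a second view. The disparity scaling, depth convention, and hole handling below are simplifying assumptions; production pipelines are considerably more involved.

```python
# Minimal depth-image-based rendering: synthesize a right view by shifting
# pixels horizontally according to a depth map. Disparity scale, the
# "brighter = nearer" convention, and hole filling are assumptions.
import cv2
import numpy as np

color = cv2.imread("color.png")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

h, w = depth.shape
max_disp = 24                                      # assumed maximum disparity (px)
disparity = (depth * max_disp).astype(np.int32)    # nearer (brighter) shifts more

right = np.zeros_like(color)
filled = np.zeros((h, w), dtype=bool)
xs = np.arange(w)
for y in range(h):
    new_x = np.clip(xs - disparity[y], 0, w - 1)   # shift left for the right eye
    right[y, new_x] = color[y, xs]
    filled[y, new_x] = True

# Fill disocclusion holes by propagating the last valid pixel from the left.
for y in range(h):
    for x in range(1, w):
        if not filled[y, x]:
            right[y, x] = right[y, x - 1]

cv2.imwrite("right_view.png", right)
```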