• Title/Summary/Keyword: video to images

Search Results: 1,363 (processing time: 0.036 seconds)

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.36-42
    • /
    • 2012
  • Recently, smart applications such as smartphones and smart TVs have become a hot issue in IT consumer markets. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video uses stereoscopic or multi-view images to provide a depth experience through 3D display systems. Binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints of a multi-view video is limited, 3D display devices should generate arbitrary viewpoint images using the available adjacent view images. In this paper, after briefly explaining a view synthesis method, we propose a new algorithm to compensate for view synthesis errors around object boundaries. We describe a 3D warping technique that exploits the depth map for viewpoint shifting and a hole-filling method that uses multi-view images. We then propose an algorithm to remove the boundary noise generated by mismatches of object edges in the color and depth images. The proposed method reduces annoying boundary noise near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.

  • PDF
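The 3D warping and multi-view hole-filling steps described in the abstract can be sketched as follows; this is a minimal illustration assuming parallel cameras, a grayscale image, and a disparity proportional to depth (the function names and the `baseline_shift` parameter are illustrative, not from the paper):

```python
import numpy as np

def warp_view(color, depth, baseline_shift):
    """Shift each pixel horizontally by a disparity proportional to its
    depth value (a simplified DIBR-style 3D warp for parallel cameras)."""
    h, w = depth.shape
    warped = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(round(baseline_shift * depth[y, x]))
            nx = x + d
            if 0 <= nx < w:
                warped[y, nx] = color[y, x]
                filled[y, nx] = True
    return warped, filled

def fill_holes(warped, filled, other_view):
    """Fill disocclusion holes with the co-located texture from the
    other reference view, as in multi-view hole filling."""
    out = warped.copy()
    out[~filled] = other_view[~filled]
    return out
```

The boundary-noise removal step proposed in the paper would go further, replacing suspect textures near object edges with textures from the other reference image.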

A New Denoising Method for Time-lapse Video using Background Modeling

  • Park, Sanghyun
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.125-138
    • /
    • 2020
  • Due to the development of camera technology, the cost of producing time-lapse video has fallen, and time-lapse videos are being applied in many fields. A time-lapse video is created from images captured over a long period at long intervals. In this paper, we propose a method to improve the quality of time-lapse videos that monitor changes in plants. Considering the characteristics of time-lapse video, we propose a method of separating desired objects from unnecessary ones and removing the unnecessary elements. The characteristic of time-lapse videos that we exploit is that unnecessary elements appear only intermittently in the captured images. In the proposed method, noise is removed by applying a codebook background-modeling algorithm that uses this characteristic. Experimental results show that the proposed method is simple and accurate in finding and removing unnecessary elements in time-lapse videos.
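A much-simplified stand-in for the background-modeling step (a per-pixel temporal median rather than a full codebook model, with an illustrative threshold) might look like:

```python
import numpy as np

def build_background(frames):
    """Per-pixel temporal median as a simplified background model
    (a stand-in for the full codebook algorithm in the paper)."""
    return np.median(np.stack(frames), axis=0)

def remove_intermittent_objects(frames, threshold=20):
    """Replace pixels that deviate strongly from the background model
    (intermittent, unwanted objects) with the background value."""
    bg = build_background(frames)
    cleaned = []
    for f in frames:
        mask = np.abs(f.astype(float) - bg) > threshold
        out = f.copy()
        out[mask] = bg[mask]
        cleaned.append(out)
    return cleaned
```

Because unwanted objects appear only intermittently, a temporal statistic taken over many frames recovers the background at every pixel; the codebook algorithm generalizes this by keeping several intensity codewords per pixel.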

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Sang, Hong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06b
    • /
    • pp.147-153
    • /
    • 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics or augmented reality, 3D information of a model object is usually necessary to generate 2D views of the model, which are then inserted into or overlaid on environmental views or real video frames. However, our method requires no three-dimensional model, only images of the model object at some locations, to render views according to the motion of the video camera, which is calculated by an SfM (structure-from-motion) algorithm using point matches under a weak-perspective (scaled-orthographic) projection model. Thus, a linear view-interpolation algorithm is applied, rather than a 3D ray-tracing method, to obtain a view of the model at viewpoints different from the model views. In order to generate novel views that agree with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time, based on 3D information recovered from the video images and model views, respectively. During the sequence, motion parameters from the video frames are used to compute interpolation parameters, and the rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given. Finally, a discussion of the limitations of the method and subjects for future research is provided.

  • PDF
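The linear view interpolation at the core of the method can be illustrated with a toy sketch: under the weak-perspective model, blending corresponding image points of two model views with an interpolation parameter yields a plausible in-between view (the function name and two-view setup are illustrative):

```python
import numpy as np

def interpolate_view(points_a, points_b, alpha):
    """Linear view interpolation: blend corresponding image points of
    two model views by alpha (0 -> view A, 1 -> view B). Under
    weak-perspective projection this linear blend corresponds to a
    physically plausible intermediate viewpoint."""
    return (1.0 - alpha) * np.asarray(points_a) + alpha * np.asarray(points_b)
```

In the paper, the interpolation parameters would be derived from the camera motion estimated by the SfM step, so the rendered model view follows the real camera.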

Registration of Video Avatar by Comparing Real and Synthetic Images (실제와 합성영상의 비교에 의한 비디오 아바타의 정합)

  • Park Moon-Ho;Ko Hee-Dong;Byun Hye-Ran
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.8
    • /
    • pp.477-485
    • /
    • 2006
  • In this paper, a video avatar, made from live video streams captured of a real participant, is used to represent a virtual participant. Using a video avatar increases the sense of reality for participants, but correct registration is also an important issue. We configured the real and virtual cameras to have the same characteristics in order to register the video avatar. Registration between the video avatar captured in the real environment and the virtual environment is resolved by comparing real and synthetic images, which is possible because of the similarity between the real and virtual cameras. The degree of misregistration is represented as an energy, which is then minimized to produce seamless registration. Experimental results show that the proposed method can be used effectively for the registration of a video avatar.
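The energy-minimization idea can be sketched as follows; here the registration energy is taken as a plain sum of squared differences and minimized by a brute-force search over integer shifts (the paper's actual energy and parameterization may differ):

```python
import numpy as np

def registration_energy(real, synth):
    """Energy measuring registration error as the sum of squared
    differences between the real and synthetic images."""
    return float(np.sum((real.astype(float) - synth.astype(float)) ** 2))

def best_shift(real, synth, max_shift=3):
    """Grid-search the vertical/horizontal shift of the synthetic image
    that minimizes the energy (a toy stand-in for the paper's
    energy-minimization step)."""
    best, best_e = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(synth, dy, axis=0), dx, axis=1)
            e = registration_energy(real, shifted)
            if e < best_e:
                best_e, best = e, (dy, dx)
    return best
```

A full registration would optimize over camera pose rather than image shifts, but the principle is the same: the energy reaches its minimum when the synthetic image lines up with the real one.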

A Study on Super Resolution Image Reconstruction for Effective Spatial Identification

  • Park Jae-Min;Jung Jae-Seung;Kim Byung-Guk
    • Spatial Information Research
    • /
    • v.13 no.4 s.35
    • /
    • pp.345-354
    • /
    • 2005
  • Super-resolution image reconstruction refers to image-processing algorithms that produce a high-resolution (HR) image from several observed low-resolution (LR) images of the same scene. This method has proven useful in many practical cases where multiple frames of the same scene can be obtained, such as satellite imaging, video surveillance, video enhancement and restoration, digital mosaicking, and medical imaging. In this paper, we apply a spatial-domain super-resolution reconstruction method to video sequences. The test images are adjacently sampled images from continuous video sequences and overlap at a high rate. We construct the observation model between the HR image and the LR images and apply the maximum a posteriori (MAP) reconstruction method, one of the major approaches to super-resolution reconstruction. Based on the MAP method, we reconstruct high-resolution images from low-resolution images and compare the results with those from other known interpolation methods.

  • PDF
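A 1-D sketch of spatial-domain MAP super-resolution, assuming an average-pooling observation model and a quadratic smoothness prior (the paper's actual observation model, prior, and optimizer may differ):

```python
import numpy as np

def downsample(x, factor):
    """Observation model: average pooling as the blur + decimation operator."""
    return x.reshape(-1, factor).mean(axis=1)

def map_super_resolve(lr_obs, factor, iters=200, lam=0.01, step=0.5):
    """Gradient descent on a MAP cost: data term ||D(x) - y||^2 plus a
    smoothness prior lam * sum (x[i+1] - x[i])^2, a 1-D sketch of
    spatial-domain MAP reconstruction."""
    x = np.repeat(lr_obs, factor).astype(float)   # initial HR guess
    for _ in range(iters):
        # gradient of the data term: D^T (D x - y), up to a constant
        resid = downsample(x, factor) - lr_obs
        g = np.repeat(resid, factor) / factor
        # gradient of the smoothness prior (first differences)
        d = np.diff(x)
        g_prior = np.concatenate(([-d[0]], d[:-1] - d[1:], [d[-1]]))
        x -= step * (g + lam * g_prior)
    return x
```

With several shifted LR observations instead of one, the data term would sum over all frames, which is where the extra resolution actually comes from.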

Scramble and Descramble Scheme on Multiple Images (다수의 영상에 대한 스크램블 및 디스크램블 방법)

  • Kim Seung-Youl;You Young-Gap
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.6
    • /
    • pp.50-55
    • /
    • 2006
  • This paper presents a scheme that scrambles and descrambles images from multiple video channels. A combined image frame is formed by concatenating the incoming frames from the channels in a two-dimensional array. The algorithm applies an encryption scheme to the row and column numbers of the combined image frame and thereby yields an encrypted combined image. The proposed algorithm encrypts multiple images at a time: it recomposes the images from the multiple video channels into one composite image and encrypts that composite image, resulting in higher security.

  • PDF
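A minimal sketch of the row/column scrambling idea, using a key-seeded pseudo-random permutation of row and column indices in place of the paper's specific encryption scheme (the function names and keying are illustrative):

```python
import numpy as np

def scramble(image, key):
    """Permute rows and columns of the combined frame with a
    key-seeded pseudo-random permutation (a simple stand-in for the
    row/column encryption scheme in the paper)."""
    rng = np.random.default_rng(key)
    rp = rng.permutation(image.shape[0])
    cp = rng.permutation(image.shape[1])
    return image[rp][:, cp]

def descramble(image, key):
    """Invert the row/column permutations using the same key."""
    rng = np.random.default_rng(key)
    rp = rng.permutation(image.shape[0])
    cp = rng.permutation(image.shape[1])
    out = np.empty_like(image)
    out[rp[:, None], cp[None, :]] = image
    return out
```

The combined frame would first be formed by tiling the per-channel frames into one 2D array, e.g. `np.concatenate([frame1, frame2], axis=1)`, so a single scramble covers every channel at once.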

- Development of Digital Fluoroscopic Image Recording System for Customer Safety - (고객 안전을 위한 디지털 방사선장치(DRF)의 투시영상기록장치 개발)

  • Rhim Jae Dong;Kang Kyong Sik
    • Journal of the Korea Safety Management & Science
    • /
    • v.6 no.3
    • /
    • pp.303-309
    • /
    • 2004
  • Many fluoroscopic and general X-ray devices in diagnostic radiography have been changing from analog to digital operation. In addition, among diagnostic imaging and radiologic examinations, fluoroscopic studies, which require functional diagnosis, are widely used. The video recording of fluoroscopic studies has been useful for functional image diagnosis and dynamic image observation, but the utility of its image quality is reduced by limitations in setting playback segments on the video player, inconvenient playback, difficulty preserving reproduced images, changes in image quality, and so on. To address these shortcomings, it is necessary to facilitate access to patient diagnostic information, such as storing, editing, and sharing functional diagnosis images, in line with the trend toward digital radiographic and fluoroscopic (DRF) systems. This study therefore designed and implemented a device that stores functional dynamic images in real time using a computer rather than conventional video recording, aiming to contribute to functional image diagnosis.

3D-Distortion Based Rate Distortion Optimization for Video-Based Point Cloud Compression

  • Yihao Fu;Liquan Shen;Tianyi Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.435-449
    • /
    • 2023
  • The state-of-the-art video-based point cloud compression (V-PCC) compresses 3D point clouds efficiently by projecting points onto 2D images. These images are then padded and compressed by High Efficiency Video Coding (HEVC). Pixels in the padded 2D images fall into three groups: origin pixels, padded pixels, and unoccupied pixels. Origin pixels are generated by projecting the 3D point cloud. Padded pixels and unoccupied pixels are generated by copying values from origin pixels during image padding. Padded pixels, like origin pixels, are reconstructed into 3D space during geometry reconstruction; unoccupied pixels are not reconstructed. The rate-distortion optimization (RDO) used in HEVC mainly balances video distortion against video bitrate. However, traditional RDO is unreliable for padded and unoccupied pixels, which leads to a significant waste of bits in geometry reconstruction. In this paper, we propose a new RDO scheme that takes 3D distortion into account instead of traditional video distortion for padded and unoccupied pixels. First, these pixels are classified based on the occupancy map. Second, different strategies are applied to these pixels to calculate their 3D distortions. Finally, the obtained 3D distortions replace the sum of squared errors (SSE) in the full RDO process for intra and inter prediction. The proposed method is applied to geometry frames. Experimental results show that the proposed algorithm achieves average bitrate savings of 31.41% and 6.14% for the D1 metric in the Random Access and All Intra settings, respectively, on geometry videos compared with the V-PCC anchor.
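The occupancy-driven distortion weighting can be caricatured as follows; the per-class weights here are purely illustrative stand-ins for the paper's computed 3D-distortions, not its actual formulas:

```python
import numpy as np

def rdo_distortion(orig, recon, pixel_class, w_origin=1.0, w_padded=0.5):
    """Per-class distortion for RDO on a geometry frame.
    pixel_class: 0 = unoccupied, 1 = padded, 2 = origin.
    Unoccupied pixels are never reconstructed to 3D, so their error is
    excluded; padded pixels get a reduced weight standing in for their
    estimated 3D-distortion contribution (weights are illustrative)."""
    se = (orig.astype(float) - recon.astype(float)) ** 2
    w = np.where(pixel_class == 2, w_origin,
                 np.where(pixel_class == 1, w_padded, 0.0))
    return float(np.sum(se * w))
```

In the paper, this class-aware distortion replaces the plain SSE inside the encoder's intra/inter mode decision, so bits are no longer spent faithfully coding pixels that never reach the reconstructed point cloud.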

A Method for Object Tracking Based on Background Stabilization (동적 비디오 기반 안정화 및 객체 추적 방법)

  • Jung, Hunjo;Lee, Dongeun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.14 no.1
    • /
    • pp.77-85
    • /
    • 2018
  • This paper proposes a robust digital video stabilization algorithm for extracting and tracking an object, which uses phase-correlation-based motion correction. The proposed algorithm consists of background stabilization based on motion estimation and extraction of a moving object. Motion vectors are estimated by calculating the phase correlation over a series of frames in eight sub-images located at the corners of the video frame. The global motion vector is estimated, and the image compensated, using the multiple local motions of the sub-images. Through the phase-correlation calculations, the background motion can be subtracted between the former frame and the compensated frame, which share the same background, and the moving objects in the video can then be extracted. Tracking robust motion vectors via phase correlation compensates for vibrations such as translation, rotation, enlargement, and reduction of the video in all directions of the sub-images. Experimental results show that the proposed digital image stabilization algorithm provides continuously stabilized videos and tracks object movements.
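The core phase-correlation step can be sketched for a single pair of (sub-)images; the peak of the inverse FFT of the normalized cross-power spectrum gives the integer translation between frames (this sketch handles pure translation only, not the rotation/zoom cases combined from the eight sub-images):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between two frames via the
    normalized cross-power spectrum. Returns (dy, dx) such that
    np.roll(b, (dy, dx), axis=(0, 1)) aligns b with a."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep only phase
    corr = np.fft.ifft2(cross).real          # delta at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak positions to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Running this on the eight corner sub-images yields eight local motion vectors, from which a global motion model (including rotation and zoom) can be fitted as the abstract describes.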

2D Adjacency Matrix Generation using DCT for UWV contents

  • Li, Xiaorui;Lee, Euisang;Kang, Dongjin;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.11a
    • /
    • pp.39-42
    • /
    • 2016
  • As display devices such as TVs and signage grow larger, media types are shifting toward wider views, such as UHD, panoramic, and jigsaw-like media. In particular, panoramic and jigsaw-like media are realized by stitching video clips captured by different cameras or devices. In order to stitch those video clips, it is necessary to find a 2D adjacency matrix that describes the spatial relationships among them. The Discrete Cosine Transform (DCT), used as a compression transform, converts each frame of a video source from the spatial domain (2D) into the frequency domain. Based on these compressed features, the 2D adjacency matrix of the images can be found, so the spatial map of the images can be built efficiently using the DCT. This paper proposes a new method of generating the 2D adjacency matrix using the DCT to produce panoramic and jigsaw-like media from various individual video clips.

  • PDF
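One way to picture the DCT-based adjacency test: take low-frequency DCT coefficients of each clip's border row/column as a signature and score how well two borders continue each other (the signature length, scoring, and function names are illustrative, not the paper's exact procedure):

```python
import numpy as np

def dct_1d(v):
    """Type-II DCT of a 1-D signal (the transform used per row/column
    in image codecs), implemented directly from its definition."""
    n = v.size
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return np.cos(np.pi * (m + 0.5) * k / n) @ v

def edge_signature(frame, side):
    """DCT feature of one border of a clip: low-frequency coefficients
    of the boundary row/column."""
    edge = {"left": frame[:, 0], "right": frame[:, -1],
            "top": frame[0, :], "bottom": frame[-1, :]}[side]
    return dct_1d(edge.astype(float))[:8]

def adjacency_score(clip_a, clip_b):
    """Similarity of A's right border to B's left border; higher scores
    suggest B sits to the right of A in the 2D adjacency matrix."""
    a = edge_signature(clip_a, "right")
    b = edge_signature(clip_b, "left")
    return -float(np.sum((a - b) ** 2))
```

Scoring every ordered pair of clips in this way fills one direction of the 2D adjacency matrix; repeating with top/bottom borders fills the other.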