• Title/Summary/Keyword: Panorama View

A Study on Registration Correction and Layout for Multi-view Videos Implementation (실감영상 구현을 위한 다면영상 정합보정 및 화면구성에 대한 연구)

  • Moon, Dae Hyuk
    • Journal of Digital Convergence / v.15 no.12 / pp.531-541 / 2017
  • Realistic video content using multi-view videos is created so that what is shown on multi-view displays or screens looks real. Such images have mostly been used for special exhibition videos, but recently systems such as Screen X have established multi-view images as a format for storytelling content such as movies. This study used HD-grade broadcast digital video cameras with three zoom lenses to shoot wide to close-up shots focusing on a person, in the same way as Screen X, and identified and analyzed problems found during multi-view image registration correction. The results suggest that, provided the shooting technique and equipment are improved, the multi-view format can be used to convey stories and information. Future research will need to investigate and supplement relevant techniques that will enable production of high-quality multi-view image contents by using a cinema-grade camera with standard lenses, instead of broadcasting-grade zoom lenses.
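
The abstract does not describe the registration procedure itself; the sketch below only illustrates one common way to align two adjacent camera views, feature-based homography estimation with OpenCV, as a rough reference. All function names and parameters here are illustrative, not taken from the paper.

```python
# Hedged sketch: generic feature-based registration of two adjacent camera
# views via a homography.  The paper does not specify its correction
# algorithm; everything below is illustrative, not the authors' method.
import cv2
import numpy as np

def register_views(left_img, right_img, ratio=0.75):
    """Estimate a homography that maps right_img onto left_img."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(left_img, None)
    k2, d2 = orb.detectAndCompute(right_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(d2, d1, k=2):
        # Lowe's ratio test to keep only distinctive matches.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Usage: warp the right view into the left view's frame before blending, e.g.
#   H = register_views(left, right)
#   canvas = cv2.warpPerspective(right, H, (left.shape[1] * 2, left.shape[0]))
```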

Objective Quality Assessment Method for Stitched Images (스티칭 영상의 객관적 영상화질의 평가 방법)

  • Billah, Meer Sadeq;Ahn, Heejune
    • Journal of Broadcast Engineering / v.23 no.2 / pp.227-234 / 2018
  • Recently, stitching techniques have been used to obtain wide-FOV content, e.g., panoramas, from normal cameras. Despite the many proposed algorithms, no objective quality evaluation method has been developed, so algorithms are compared only subjectively. This paper proposes a Delaunay-triangulation based objective assessment method for evaluating the geometric and photometric distortions of stitched or warped images. The reference and target images are segmented by Delaunay triangulation based on matched points between the two images; the average Euclidean distance is used as the geometric distortion measure, and the average or histogram of PSNR as the photometric measure. We show preliminary results with several test images and stitching methods to demonstrate the benefits and applications.
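
As a rough illustration of the assessment idea, the sketch below triangulates the matched points, uses the mean point displacement as the geometric score, and averages PSNR over the triangles as the photometric score. The matching step and all parameters are assumptions; the authors' exact procedure may differ.

```python
# Hedged sketch of a Delaunay-triangulation based quality measure.
# ref_pts / tgt_pts are (N, 2) arrays of matched point coordinates in the
# reference and target (stitched) images; obtaining them is not shown.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def stitch_quality(ref, target, ref_pts, tgt_pts):
    # Geometric distortion: mean Euclidean distance between matched points.
    geo = float(np.mean(np.linalg.norm(ref_pts - tgt_pts, axis=1)))

    # Photometric distortion: PSNR averaged over Delaunay triangles of ref_pts.
    tri = Delaunay(ref_pts)
    psnrs = []
    for simplex in tri.simplices:
        mask = np.zeros(ref.shape[:2], np.uint8)
        cv2.fillConvexPoly(mask, ref_pts[simplex].astype(np.int32), 255)
        m = mask.astype(bool)
        if not m.any():
            continue
        mse = np.mean((ref[m].astype(np.float64) - target[m].astype(np.float64)) ** 2)
        psnrs.append(10 * np.log10(255.0 ** 2 / mse) if mse > 0 else 100.0)
    return geo, float(np.mean(psnrs))
```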

A Study on the Internet Broadcasting Image Processing based on Offloading Technique on the Mobile Environments (모바일 환경에서 오프로딩 기술 기반 인터넷 방송 영상 처리에 관한 연구)

  • Kang, Hong-gue
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.6 / pp.63-68 / 2018
  • Offloading is a method of sending part of an application's work from a local computer to a remote machine for processing and receiving the results back, in order to overcome limits on computing resources and computational speed. Recently, it has been applied to mobile games, multimedia data, 360-degree video processing, and image processing for Internet broadcasting to speed up processing and reduce battery consumption in the mobile computing sector. This paper implements a viewer that lets users convert 360-degree content into various flat-image layouts and view it in a wireless Internet environment, and presents experimental results so that the images can be easily understood. Through the interface, a 360-degree spherical image is successfully converted into flat images in Double Panorama, Quad, Single Rectangle, or 360 Overview + 3 Rectangle layouts, depending on the acquisition position of the 360-degree camera. During the experiment, more than 100 360-degree spherical images were successfully converted into flat images through the interface.
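
For reference, one plausible way to implement the "Single Rectangle" conversion such a viewer performs is to resample the equirectangular image into a perspective view. The sketch below is such a mapping with NumPy and OpenCV; the viewer's actual layouts and its offloading split are not reproduced, and all parameter names are illustrative.

```python
# Hedged sketch: render a flat perspective view from an equirectangular
# 360-degree image.  Axis conventions may need flipping for a given panorama.
import cv2
import numpy as np

def equirect_to_perspective(equi, fov_deg=90, yaw_deg=0, pitch_deg=0, out_size=(640, 480)):
    w, h = out_size
    H, W = equi.shape[:2]
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)

    # Pixel rays in camera coordinates, normalised to unit length.
    x, y = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    z = np.full_like(x, f, dtype=np.float64)
    d = np.stack([x, y, z], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    # Rotate rays by yaw (around the vertical axis) and pitch (around the horizontal axis).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    d = d @ (Ry @ Rx).T

    # Convert rays to longitude/latitude, then to equirectangular pixel coordinates.
    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))
    map_x = ((lon / np.pi + 1) * 0.5 * W).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1) * 0.5 * H).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)
```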

Image Stitching focused on Priority Object using Deep Learning based Object Detection (딥러닝 기반 사물 검출을 활용한 우선순위 사물 중심의 영상 스티칭)

  • Rhee, Seongbae;Kang, Jeonho;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.25 no.6 / pp.882-897 / 2020
  • Recently, the use of immersive media content such as panorama and 360° video has been increasing. Since the viewing angle obtainable with a general camera is limited, image stitching is commonly used to combine images taken with multiple cameras into a single image with a wide field of view. However, if the parallax between the cameras is large, parallax distortion may appear in the stitched image and disturb the user's immersion, so a stitching method that overcomes parallax distortion is required. Existing seam-optimization based stitching methods use an energy function or object segmentation information to reflect the locations of objects, but their applicability can be limited by the initial seam location, background information, the performance of the object detector, and the placement of objects. Therefore, this paper proposes an image stitching method that overcomes these limitations by using deep-learning based object detection to add class-dependent weights to the energy values.
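
The sketch below illustrates the weighting idea in its simplest form: detected object regions raise the seam energy by a class-dependent amount, and a minimum-cost vertical seam is then found by dynamic programming. The weights, energy term, and detector interface are placeholders, not the authors' design.

```python
# Hedged sketch: object-weighted seam energy plus a dynamic-programming seam.
import numpy as np

CLASS_WEIGHT = {"person": 1000.0, "car": 500.0}  # illustrative priorities

def weighted_energy(img_a, img_b, detections):
    """img_a, img_b: overlapping grayscale regions; detections: (label, x0, y0, x1, y1)."""
    energy = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    for label, x0, y0, x1, y1 in detections:
        # Raise the cost of cutting through a detected object.
        energy[y0:y1, x0:x1] += CLASS_WEIGHT.get(label, 0.0)
    return energy

def vertical_seam(energy):
    """Classic minimum-cost vertical seam via dynamic programming."""
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):
        left = np.roll(cost[y - 1], 1);   left[0] = np.inf
        right = np.roll(cost[y - 1], -1); right[-1] = np.inf
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]  # seam[y] = column of the cut in row y
```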

RANSAC-based Orthogonal Vanishing Point Estimation in the Equirectangular Images

  • Oh, Seon Ho;Jung, Soon Ki
    • Journal of Korea Multimedia Society / v.15 no.12 / pp.1430-1441 / 2012
  • In this paper, we present an algorithm that quickly and effectively estimates orthogonal vanishing points in equirectangular images of urban environments. Our algorithm is based on the RANSAC (RANdom SAmple Consensus) algorithm and on the characteristics of line segments in a spherical panorama image with a 360° longitude and 180° latitude field of view. These characteristics can be used to reduce the geometric ambiguity in line segment classification as well as to improve the robustness of vanishing point estimation. The proposed algorithm is validated experimentally on a wide set of images. The results show that our algorithm provides excellent accuracy for vanishing point estimation as well as line segment classification.
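
A rough sketch of the underlying RANSAC step: in an equirectangular image a straight scene line maps to a great circle with unit normal n, and a vanishing direction v satisfies n·v ≈ 0 for every line converging to it. The code below samples line pairs and counts inliers; the segment extraction and the orthogonality constraint among the three vanishing points are omitted, and all thresholds are assumptions.

```python
# Hedged sketch: RANSAC estimation of one vanishing direction from the
# great-circle normals of line segments detected in an equirectangular image.
import numpy as np

def ransac_vanishing_point(normals, iters=1000, thresh=np.radians(2.0), rng=None):
    """normals: (N, 3) unit normals of great circles fitted to line segments."""
    rng = rng or np.random.default_rng()
    best_v, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(normals), size=2, replace=False)
        v = np.cross(normals[i], normals[j])     # direction consistent with both lines
        nv = np.linalg.norm(v)
        if nv < 1e-8:
            continue
        v /= nv
        # A line supports v when its normal is (nearly) perpendicular to v.
        residual = np.abs(np.pi / 2 - np.arccos(np.clip(np.abs(normals @ v), 0, 1)))
        inliers = int(np.sum(residual < thresh))
        if inliers > best_inliers:
            best_v, best_inliers = v, inliers
    return best_v, best_inliers
```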

A study on great wall design of the main gate in campus (벽천 디자인에 관한 연구)

  • Han, Hae-Ryon
    • Proceedings of the Korean Institute of Interior Design Conference / 2004.11a / pp.173-174 / 2004
  • The great wall is an element of the university that stands out as a landmark. It is located in front of the grand staircases of the gymnasium in the main gate area. The falling water and lights show a spectacular panorama from various points of view. Water falls from the top of the grand staircases and the front walls, and red, blue, and green lights brighten the falling water in the evenings. The relief of a palm tree and a turtle symbolizes the university's identity. The wall encompasses not only day and night but also the four seasons. The water, lights, and relief coordinate well with the new building on campus.


2D Adjacency Matrix Generation using DCT for UWV contents

  • Li, Xiaorui;Lee, Euisang;Kang, Dongjin;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.39-42 / 2016
  • As display devices such as TVs and signage grow larger, media types are shifting toward wider views such as UHD, panoramic, and jigsaw-like media. In particular, panoramic and jigsaw-like media are produced by stitching video clips captured by different cameras or devices. In order to stitch those video clips, a 2D adjacency matrix, which describes the spatial relationships among the clips, must be found. The Discrete Cosine Transform (DCT), commonly used as a compression transform, converts each frame of a video source from the spatial domain into the frequency domain. Based on these compressed features, the 2D adjacency matrix of the images can be found, so a spatial map of the images can be built efficiently using the DCT. This paper proposes a new method for generating the 2D adjacency matrix using the DCT in order to produce panoramic and jigsaw-like media from various individual video clips.
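
One plausible reading of the approach is to compare DCT coefficients of the border strips of two clips and take a small distance as evidence that the clips are spatially adjacent. The sketch below follows that reading; the strip width, number of coefficients, and similarity measure are assumptions rather than the paper's design.

```python
# Hedged sketch: DCT features of frame borders as an adjacency cue.
import cv2
import numpy as np

def edge_dct(frame_gray, side, strip=16, keep=8):
    """DCT feature of one border strip ('left', 'right', 'top' or 'bottom')."""
    if side == "left":
        patch = frame_gray[:, :strip]
    elif side == "right":
        patch = frame_gray[:, -strip:]
    elif side == "top":
        patch = frame_gray[:strip, :]
    else:
        patch = frame_gray[-strip:, :]
    # cv2.dct requires even-sized arrays, so crop to even dimensions.
    patch = patch[: patch.shape[0] // 2 * 2, : patch.shape[1] // 2 * 2]
    coeffs = cv2.dct(np.float32(patch))
    return coeffs[:keep, :keep].flatten()   # keep only low-frequency terms

def adjacency_score(frame_a, frame_b):
    """Low score suggests frame_b lies directly to the right of frame_a."""
    fa = edge_dct(frame_a, "right")
    fb = edge_dct(frame_b, "left")
    return float(np.linalg.norm(fa - fb))
```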


Active Object Tracking using Image Mosaic Background

  • Jung, Young-Kee;Woo, Dong-Min
    • Journal of information and communication convergence engineering / v.2 no.1 / pp.52-57 / 2004
  • In this paper, we propose a panorama-based object tracking scheme for wide-view surveillance systems that can detect and track moving objects with a pan-tilt camera. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For camera motion estimation, we calculate affine motion parameters for each frame sequentially with respect to its previous frame. The camera motion is robustly estimated on the background by discriminating between background and foreground regions. A modified block-based motion estimation is used to separate the background region. Each moving object is segmented by image subtraction from the mosaic background. The proposed tracking system has demonstrated good performance for several test video sequences.
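
The sketch below outlines the pipeline in its most basic form: estimate frame-to-frame affine camera motion from tracked features, then detect moving objects by subtracting the registered mosaic background from the current frame. Feature choice, thresholds, and the mosaic update step are simplified assumptions, not the authors' exact implementation.

```python
# Hedged sketch: affine camera-motion estimation and mosaic-based subtraction.
import cv2
import numpy as np

def frame_affine(prev_gray, cur_gray):
    """Affine motion of the camera between two consecutive grayscale frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good_prev = pts[status.flatten() == 1]
    good_next = nxt[status.flatten() == 1]
    A, _ = cv2.estimateAffinePartial2D(good_prev, good_next, method=cv2.RANSAC)
    return A  # 2x3 affine matrix

def detect_moving(mosaic, cur_gray, A_to_mosaic, thresh=30):
    """Register the mosaic onto the current view and subtract to find movers."""
    h, w = cur_gray.shape
    bg = cv2.warpAffine(mosaic, A_to_mosaic, (w, h), flags=cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(cur_gray, bg)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```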

Creating Full View Panorama Image from Multiple Images (다중영상으로부터 360도 파노라마 생성)

  • Joe, Jun-Seong;Lee, Bum-Jong;Park, Jong-Seung
    • Proceedings of the Korean Information Science Society Conference / 2007.10b / pp.162-166 / 2007
  • To overcome the limited field of view of a single image, multiple images can be combined into a single panorama image. A panorama image can provide a horizontal field of view of up to 360 degrees, which is useful when a complex real environment is to be used as the background of a virtual environment. In this paper, we propose a panorama image generation technique that can be used as a background in virtual environments. Multiple images are captured and used to generate a single spherical panorama image. We present a production technique for securing a vertical field of view of up to 180 degrees. We also present a process for converting the generated spherical panorama image into a texture suitable for 3D rendering. When virtualizing a real environment, using a panorama background allows the background to be represented three-dimensionally without dense 3D modeling, so the proposed technique can be usefully applied to virtual reality applications.
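
As a small illustration of using a spherical panorama as a rendering background, the function below maps a 3D viewing direction to the (u, v) coordinate of an equirectangular texture; a renderer can use such a lookup to wrap the panorama around the scene. This is a generic mapping, not the conversion pipeline proposed in the paper.

```python
# Hedged sketch: equirectangular texture lookup for a given view direction.
import numpy as np

def direction_to_uv(d):
    """d: (..., 3) unit view directions -> (u, v) in [0, 1] for an equirectangular texture."""
    d = np.asarray(d, dtype=np.float64)
    lon = np.arctan2(d[..., 0], d[..., 2])          # [-pi, pi], 0 at +z
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))  # [-pi/2, pi/2]
    u = (lon / np.pi + 1.0) * 0.5
    v = (lat / (np.pi / 2) + 1.0) * 0.5
    return np.stack([u, v], axis=-1)

# e.g. direction_to_uv([0.0, 0.0, 1.0]) -> [0.5, 0.5], the centre of the panorama.
```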


High Resolution 360 degree Video Generation System using Multiple Cameras (다수의 카메라를 이용한 고해상도 360도 동영상 생성 시스템)

  • Jeong, Jinwook;Jun, Kyungkoo
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1329-1336 / 2016
  • This paper develops a 360-degree video system using multiple off-the-shelf webcams and a set of embedded boards. Existing 360-degree cameras have the shortcoming that they do not support real-time video generation, since recorded videos must be copied to computers or smartphones, which then perform the stitching. Another shortcoming is that wide-FoV (field of view) cameras cannot provide sufficiently high resolution. Moreover, the resulting images are visually distorted, bending straight lines. By employing an array of 65-degree FoV webcams, we were able to generate videos on the spot and achieve over 6K resolution with much less distortion. We describe the configuration and algorithms of the proposed system and present performance evaluation results for our early-stage prototype.
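
As a reference point only, the sketch below shows how frames grabbed simultaneously from several webcams could be combined with OpenCV's generic stitcher; the paper's real-time pipeline on embedded boards and its camera-specific calibration are not reproduced, and the stitcher settings are assumptions.

```python
# Hedged sketch: stitch simultaneously captured webcam frames into one wide frame.
import cv2

def stitch_frames(frames):
    """frames: list of BGR images grabbed at the same instant from each webcam."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return pano

# Usage: grab one frame per cv2.VideoCapture device, then call stitch_frames(list_of_frames).
```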