• Title/Summary/Keyword: Panoramic video stitching

Study on 3 DoF Image and Video Stitching Using Sensed Data

  • Kim, Minwoo;Chun, Jonghoon;Kim, Sang-Kyun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4527-4548 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data sensed by inertial sensors to enhance the stitching results. The challenge of image stitching increases when the images are taken from two different mobile phones with no posture calibration. Using the inertial sensor data obtained by each phone, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process is performed. The stitching performance of the conventional feature extraction algorithms (e.g., feature extraction time, number of inlier points, stitching accuracy) is reported, along with the stitching performance with and without the inertial sensor data. In addition, the stitching accuracy for video data is improved using the same sensed data, with discrete calculation of the homography matrix. Experimental results for stitching accuracy and speed using the sensed data are presented.
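
To make the preprocessing idea concrete, the following is a minimal sketch (not the authors' pipeline) of rotating an image by the sensed yaw/pitch/roll before SIFT matching; the intrinsic matrix `K`, the angle values, and the 0.75 ratio-test threshold are assumptions for illustration.

```python
import cv2
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Compose a 3x3 rotation matrix from yaw (Z), pitch (Y), roll (X) in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def pre_align(img, K, yaw, pitch, roll):
    """Warp an image by the homography K @ R @ K^-1 induced by a pure rotation."""
    H = K @ rotation_from_euler(yaw, pitch, roll) @ np.linalg.inv(K)
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))

def stitch_pair(img_a, img_b):
    """Estimate a homography between two pre-aligned images with SIFT + RANSAC."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inlier_mask.sum())  # homography and inlier count
```

Reducing the relative rotation between the two phones before matching tends to increase the inlier count, which is the effect the paper measures.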

Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data by ORB feature point detection, texture transformation, panoramic video data compression, and RTSP-based streaming transmission. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated by CUDA for complex processing such as camera calibration, stitching, blending, and encoding. Our experiment evaluated the frames per second (fps) of the transmitted 360-degree panoramic video. The results verified that the technique achieves at least 30 fps at 4K output resolution, which indicates that it can both generate and transmit 360-degree panoramic video data in real time.
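
The abstract above describes ORB-based stitching accelerated with CUDA and delivered over RTSP; the sketch below shows only the ORB matching and homography step on the CPU (the CUDA and streaming stages are not reproduced), with the feature count chosen arbitrarily.

```python
import cv2
import numpy as np

def orb_homography(frame_a, frame_b, n_features=2000):
    """Match ORB keypoints between two camera frames and fit a homography
    (CPU variant; the paper accelerates this stage with CUDA)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    # Hamming distance for binary ORB descriptors; cross-check enforces symmetry.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```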

Enhancement on 3 DoF Image Stitching Using Inertia Sensor Data (관성 센서 데이터를 활용한 3 DoF 이미지 스티칭 향상)

  • Kim, Minwoo;Kim, Sang-Kyun
    • Journal of Broadcast Engineering / v.22 no.1 / pp.51-61 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data sensed by an inertial sensor to enhance the stitching results. The challenge of image stitching increases when the images are taken from two different mobile phones with no posture calibration. Using the inertial sensor data obtained by each phone, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process is performed. The stitching performance of the conventional feature extraction algorithms (e.g., feature extraction time, number of inlier points, stitching accuracy) is reported, along with the stitching performance with and without the inertial sensor data.
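
Since the paper reports extraction time and inlier counts across feature extractors, here is a rough benchmarking sketch under the assumption that SIFT and ORB stand in for the compared algorithms (SURF and MPEG-7 CDVS are generally not available in stock OpenCV builds, so they are omitted).

```python
import time
import cv2
import numpy as np

def compare_detectors(img_a, img_b):
    """Report feature extraction time and RANSAC inlier count per detector."""
    detectors = {"SIFT": (cv2.SIFT_create(), cv2.NORM_L2),
                 "ORB": (cv2.ORB_create(nfeatures=2000), cv2.NORM_HAMMING)}
    for name, (det, norm) in detectors.items():
        t0 = time.perf_counter()
        kp_a, des_a = det.detectAndCompute(img_a, None)
        kp_b, des_b = det.detectAndCompute(img_b, None)
        extract_ms = (time.perf_counter() - t0) * 1000
        matches = cv2.BFMatcher(norm, crossCheck=True).match(des_a, des_b)
        src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        print(f"{name}: {extract_ms:.1f} ms extraction, {int(mask.sum())} inliers")
```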

Panoramic Video Generation Method Based on Foreground Extraction (전경 추출에 기반한 파노라마 비디오 생성 기법)

  • Kim, Sang-Hwan;Kim, Chang-Su
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.441-445 / 2011
  • In this paper, we propose an algorithm for generating panoramic videos from multiple fixed cameras. We estimate a background image for each camera and then calculate the perspective relationships between images using extracted feature points. To eliminate stitching errors due to different image depths, we process background and foreground images separately in the overlap regions between adjacent cameras by projecting regions of the foreground images selectively. The proposed algorithm can be used to enhance the efficiency and convenience of wide-area surveillance systems.
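
As a hedged illustration of the background/foreground separation idea (not the paper's exact blending rule), the sketch below keeps a pre-stitched panoramic background and pastes only the warped foreground pixels of each camera frame onto it; the MOG2 background model and the fixed per-camera homography `H` are assumptions.

```python
import cv2
import numpy as np

def update_panorama(pano_bg, frame, subtractor, H):
    """Overlay only the warped foreground of `frame` onto the pre-stitched
    panoramic background `pano_bg`, using the camera's fixed homography H."""
    h, w = pano_bg.shape[:2]
    fg_mask = subtractor.apply(frame)                      # per-camera background model
    fg_mask = cv2.medianBlur(fg_mask, 5)                   # suppress isolated noise
    warped_frame = cv2.warpPerspective(frame, H, (w, h))
    warped_mask = cv2.warpPerspective(fg_mask, H, (w, h))
    out = pano_bg.copy()
    out[warped_mask > 0] = warped_frame[warped_mask > 0]   # paste foreground selectively
    return out

# Usage sketch: one background model per fixed camera.
# subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
# pano = update_panorama(pano_bg, frame, subtractor, H_cam)
```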

Fast Stitching Algorithm and Cubic Panoramic Image Reducing Distortions (빠른 스티칭 알고리즘과 왜곡현상을 해소하는 큐브 파노라마 영상)

  • Kim Eung-Kon;Seo Seung-Wan
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.580-584 / 2005
  • One of the problems of panoramic image stitching methods is that their computational cost is so high that the required image processing usually cannot be done in real time. Real-time performance is important in applications such as video surveillance, because we must see the current scene, yet it takes more than several seconds to calculate the transform coefficients between images. Panoramic VR technologies such as Apple QuickTime VR also have the problem of distorting the top and bottom of the image. This paper presents a fast stitching method and a method for reducing top and bottom distortion in cubic panoramic images.
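
One common way to avoid top/bottom distortion in spherical panoramas is cube mapping; the sketch below resamples a single cube face from an equirectangular panorama, as a rough illustration rather than the authors' method (only the front face is shown, and the face layout is an assumption).

```python
import cv2
import numpy as np

def equirect_to_cube_face(pano, face_size, face="front"):
    """Resample one cube face from an equirectangular panorama with cv2.remap.
    Cube mapping avoids the stretched poles of the equirectangular projection."""
    u = (np.arange(face_size) + 0.5) / face_size * 2 - 1
    xx, yy = np.meshgrid(u, -u)                  # face-plane coordinates, y up
    if face == "front":
        dirs = np.stack([xx, yy, np.ones_like(xx)], axis=-1)  # look along +Z
    else:
        raise NotImplementedError("only the front face is shown in this sketch")
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])              # [-pi, pi]
    lat = np.arcsin(dirs[..., 1])                             # [-pi/2, pi/2]
    h, w = pano.shape[:2]
    map_x = ((lon / np.pi + 1) * 0.5 * (w - 1)).astype(np.float32)
    map_y = ((0.5 - lat / np.pi) * (h - 1)).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)
```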

The Power Line Deflection Monitoring System using Panoramic Video Stitching and Deep Learning (딥 러닝과 파노라마 영상 스티칭 기법을 이용한 송전선 늘어짐 모니터링 시스템)

  • Park, Eun-Soo;Kim, Seunghwan;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.25 no.1 / pp.13-24 / 2020
  • There are about nine million power line poles and 1.3 million kilometers of power line for electric power distribution in Korea. Maintaining such a large number of electric power facilities requires a great deal of manpower and time. Recently, various fault diagnosis techniques using artificial intelligence have been studied. Therefore, this paper proposes a power line deflection monitoring system that applies artificial intelligence and computer vision technology to images taken by a vision system. The proposed system proceeds as follows: (i) detection of transmission towers using an object detection system; (ii) histogram equalization to compensate for degraded image quality in the video data; (iii) panoramic video stitching to capture the entire power line, since the distance between two transmission towers is generally long; and (iv) deflection detection using computer vision after applying a power line detection algorithm. This paper explains and experimentally evaluates each step.
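
Step (ii) can be illustrated with a short luminance-equalization routine; whether the authors used global equalization or CLAHE is not stated in the abstract, so the CLAHE variant and its parameters below are assumptions.

```python
import cv2

def equalize_luminance(bgr_frame):
    """Equalize the luma channel (here with CLAHE) before power line detection,
    leaving the chroma channels untouched."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y_eq = clahe.apply(y)
    return cv2.cvtColor(cv2.merge([y_eq, cr, cb]), cv2.COLOR_YCrCb2BGR)
```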

2D Adjacency Matrix Generation using DCT for UWV Contents (DCT를 통한 UWV 콘텐츠의 2D 인접도 행렬 생성)

  • Xiaorui, Li;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.22 no.3 / pp.366-374 / 2017
  • Since display devices such as TVs and digital signage are getting larger, media types are changing toward wider views such as UHD, panoramic, and jigsaw-like media. In particular, panoramic and jigsaw-like media are realized by stitching video clips captured by different cameras or devices. However, the stitching process takes a long time and is difficult to apply in real time. Thus, this paper suggests finding a 2D adjacency matrix, which describes the spatial relationships among the video clips, in order to decrease the stitching processing time. Using the Discrete Cosine Transform (DCT), each frame of a video source is converted from the spatial domain (2D) into the frequency domain. Based on these features, the 2D adjacency matrix of the images can be found, so the spatial map of the images can be built efficiently using DCT. This paper proposes a new method of generating a 2D adjacency matrix using DCT for producing panoramic and jigsaw-like media from various individual video clips.
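
The abstract does not give the exact matching rule, so the sketch below shows only the general idea under assumptions: use a low-frequency block of each frame's 2D DCT as a compact signature and mark clip pairs whose signatures correlate strongly as adjacent; the resize size, block size, and threshold are arbitrary.

```python
import cv2
import numpy as np

def dct_signature(frame, size=64, keep=8):
    """Compact frequency-domain signature: low-frequency block of the 2D DCT."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size)).astype(np.float32)  # cv2.dct needs even sizes
    return cv2.dct(small)[:keep, :keep].flatten()

def adjacency_matrix(frames, threshold=0.9):
    """Score every clip pair by correlation of DCT signatures; pairs above the
    threshold are marked as spatially adjacent (simplified illustration)."""
    sigs = [dct_signature(f) for f in frames]
    n = len(sigs)
    adj = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        for j in range(i + 1, n):
            corr = np.corrcoef(sigs[i], sigs[j])[0, 1]
            adj[i, j] = adj[j, i] = int(corr > threshold)
    return adj
```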

2D Adjacency Matrix Generation using DCT for UWV contents

  • Li, Xiaorui;Lee, Euisang;Kang, Dongjin;Kim, Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.39-42 / 2016
  • Since display devices such as TVs and signage are getting larger, media types are changing toward wider views such as UHD, panoramic, and jigsaw-like media. In particular, panoramic and jigsaw-like media are realized by stitching video clips captured by different cameras or devices. To stitch those video clips, it is necessary to find a 2D adjacency matrix, which describes the spatial relationships among the video clips. The Discrete Cosine Transform (DCT), which is used as a compression transform, converts each frame of a video source from the spatial domain (2D) into the frequency domain. Based on these compressed features, the 2D adjacency matrix of the images can be found, so the spatial map of the images can be built efficiently using DCT. This paper proposes a new method of generating a 2D adjacency matrix using DCT for producing panoramic and jigsaw-like media from various individual video clips.

Fixed Homography-Based Real-Time SW/HW Image Stitching Engine for Motor Vehicles

  • Suk, Jung-Hee;Lyuh, Chun-Gi;Yoon, Sanghoon;Roh, Tae Moon
    • ETRI Journal / v.37 no.6 / pp.1143-1153 / 2015
  • In this paper, we propose an efficient architecture for a real-time image stitching engine for vision SoCs in motor vehicles. To enlarge the obstacle-detection distance and area for safety, we adopt panoramic images from multiple telegraphic cameras. We propose a stitching method based on a fixed homography that is derived from the initial frame of a video sequence and is used to warp all input images without regeneration. Because the fixed homography is generated only once, at the initial state, we can calculate it in SW to reduce HW costs. The proposed warping HW engine is based on a linear transform of the pixel positions of the warped images and reduces the computational complexity by 90% or more compared to a conventional method. A dual-core SW/HW image stitching engine stitches input frames in parallel to improve performance by 70% or more compared to a single-core engine. In addition, the dual-core structure detects failures in the state machines using lock-step logic to satisfy the ISO 26262 standard. The dual-core SW/HW image stitching engine is fabricated in an SoC with 254,968 gate counts using GlobalFoundries' 65 nm CMOS process. The single-core engine can make panoramic images from three YCbCr 4:2:0 formatted VGA images at 44 frames per second at a frequency of 200 MHz without an LCD display.
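
A software analogue of the fixed-homography idea can be sketched as follows: estimate the homography once from the initial frame pair, precompute the per-pixel source coordinates, and warp every later frame by table lookup. The helper below is illustrative and is not the SoC's HW datapath; the output size and the use of `cv2.remap` are assumptions.

```python
import cv2
import numpy as np

def build_fixed_maps(H, out_size):
    """Precompute source coordinates for a fixed homography so that every later
    frame is warped with a cheap table lookup (cv2.remap) instead of re-estimating H."""
    w, h = out_size
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    src = np.linalg.inv(H) @ pts                 # back-project output pixels
    src /= src[2]                                # perspective divide
    map_x = src[0].reshape(h, w).astype(np.float32)
    map_y = src[1].reshape(h, w).astype(np.float32)
    return map_x, map_y

# H is estimated once (in SW) from the initial frame pair, e.g. with
# cv2.findHomography; afterwards each video frame is warped by lookup only:
# map_x, map_y = build_fixed_maps(H, (pano_w, pano_h))
# warped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```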

Feature-Based Panoramic Background Generation for Object Tracking in Dynamic Video (가변시점 비디오 객체추적을 위한 특징점 기반 파노라마 배경 생성)

  • Im, Jae-Hyun;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.108-116 / 2008
  • In this paper, we propose an algorithm for generating a panoramic background and tracking objects using a pan-tilt-zoom camera. We estimate the relations between images for cylindrical projection, image rearrangement, stitching, and blending. We can then build the panoramic background and track objects against it. After the background is generated, the proposed algorithm tracks the moving object; it covers a wide area and tracks the object continuously, so it can be used to detect and track objects over a wide field of view.
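
As a hedged sketch of the cylindrical-projection step (the focal length in pixels is an assumed input), the function below remaps an image onto a cylinder so that camera panning appears as nearly horizontal translation, which simplifies the subsequent rearrangement and stitching.

```python
import cv2
import numpy as np

def cylindrical_warp(img, focal):
    """Project an image onto a cylinder of radius `focal` (in pixels) before
    stitching, so a pan-tilt camera's rotation becomes an approximate translation."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    theta = (xs - cx) / focal                   # angle around the cylinder axis
    hgt = (ys - cy) / focal                     # height along the axis
    # Inverse mapping: source pixel that lands on each cylinder pixel.
    map_x = (focal * np.tan(theta) + cx).astype(np.float32)
    map_y = (focal * hgt / np.cos(theta) + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```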