• Title/Summary/Keyword: Equirectangular

RANSAC-based Orthogonal Vanishing Point Estimation in the Equirectangular Images

  • Oh, Seon Ho;Jung, Soon Ki
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.12
    • /
    • pp.1430-1441
    • /
    • 2012
  • In this paper, we present an algorithm that quickly and effectively estimates orthogonal vanishing points in equirectangular images of urban environments. Our algorithm is based on RANSAC (RANdom SAmple Consensus) and on the characteristics of line segments in a spherical panorama image with a $360^{\circ}$ longitude and $180^{\circ}$ latitude field of view. These characteristics can be used to reduce the geometric ambiguity in line segment classification as well as to improve the robustness of vanishing point estimation. The proposed algorithm is validated experimentally on a wide set of images. The results show that our algorithm provides excellent accuracy in both vanishing point estimation and line segment classification.
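
The geometric fact underlying such methods is that a straight 3D line projects to a great circle in an equirectangular image, so each detected segment can be summarized by the unit normal of its great-circle plane, and a vanishing direction is (near-)orthogonal to the normals of all segments converging to it. A minimal RANSAC sketch in Python/NumPy, recovering one dominant vanishing direction from such normals, is shown below; this is an illustration only, not the authors' implementation, and the iteration count and inlier threshold are assumed values. The full method would additionally enforce mutual orthogonality of the three estimated directions, which this sketch omits.

```python
import numpy as np

def ransac_vanishing_direction(normals, iters=500, thresh=0.02, seed=0):
    """Estimate one dominant vanishing direction from great-circle normals.

    normals: (N, 3) unit plane normals, one per detected line segment
    (a 3D line projects to a great circle in an equirectangular image).
    A vanishing direction v satisfies n . v ~ 0 for every segment that
    converges to it.
    """
    rng = np.random.default_rng(seed)
    best_v, best_count = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(normals), size=2, replace=False)
        v = np.cross(normals[i], normals[j])   # direction orthogonal to both
        norm = np.linalg.norm(v)
        if norm < 1e-8:                        # nearly parallel normals
            continue
        v /= norm
        count = int(np.sum(np.abs(normals @ v) < thresh))
        if count > best_count:
            best_v, best_count = v, count
    return best_v, best_count
```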

Proposal and Implementation of Intelligent Omni-directional Video Analysis System (지능형 전방위 영상 분석 시스템 제안 및 구현)

  • Jeon, So-Yeon;Heo, Jun-Hak;Park, Goo-Man
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.850-853
    • /
    • 2017
  • In this paper, we propose an image analysis system that displays omnidirectional imagery and tracked objects using a super-wide-angle camera. To generate spherical images, two wide-angle images are projected into an equirectangular panoramic image, which is then rendered as a spherical image by converting rectangular to spherical coordinates. Object tracking starts from an initially selected object, and the KCF (Kernelized Correlation Filter) algorithm is used so that tracking remains robust even when the object's shape changes. In the initial dialog, the file and mode are selected, and the result is then displayed in a new dialog. When object tracking mode is selected, the ROI is set by dragging over the desired area in the new window.
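
The rectangular-to-spherical conversion mentioned above is a standard per-pixel mapping. A minimal sketch (not the paper's code; the axis conventions are an assumption) that maps an equirectangular pixel to a unit ray on the viewing sphere:

```python
import numpy as np

def equirect_to_sphere(u, v, width, height):
    """Map equirectangular pixel (u, v) to a unit 3D ray.

    Longitude spans [-pi, pi] across the image width;
    latitude spans [pi/2, -pi/2] down the image height.
    """
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)
```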

Compression Efficiency Evaluation for Virtual Reality Videos by Projection Scheme

  • Kim, Byeong Chul;Rhee, Chae Eun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.2
    • /
    • pp.102-108
    • /
    • 2017
  • Videos for 360-degree virtual reality (VR) systems carry a large amount of data because they are composed of several videos from multiple cameras. To store VR data in limited space or transmit it over a channel with limited bandwidth, the data must be compressed at a high ratio. This paper focuses on the compression efficiency of VR videos for good visual quality. Generally, 360-degree VR videos must be projected into a planar format to work with modern video coding standards. Among the various projection schemes, three typical ones (equirectangular, line-cubic, and cross-cubic) are selected and compared in terms of compression efficiency and quality on various videos.
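
For context on how cubic layouts relate to the equirectangular source: resampling a cube face from an ERP frame amounts to computing, for each face pixel, the longitude/latitude its ray sees and reading the corresponding ERP location. A sketch under assumed axis conventions (front face only; this is not the paper's pipeline):

```python
import numpy as np

def front_face_to_erp_coords(face_size):
    """ERP sampling coordinates for the front face of a cubemap.

    For each pixel of the face, compute the longitude/latitude its ray
    sees and return normalized (u, v) positions in the equirectangular
    frame; a full cubemap repeats this for all six face orientations
    and bilinearly samples the ERP image at (u, v).
    """
    s = (np.arange(face_size) + 0.5) / face_size * 2 - 1  # [-1, 1]
    a, b = np.meshgrid(s, -s)                  # horizontal, vertical
    x, y, z = a, b, np.ones_like(a)            # front face: z = +1 plane
    lon = np.arctan2(x, z)
    lat = np.arctan2(y, np.sqrt(x**2 + z**2))
    u = lon / (2 * np.pi) + 0.5                # [0, 1] across ERP width
    v = 0.5 - lat / np.pi                      # [0, 1] down ERP height
    return u, v
```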

Preprocessing Technique for Improving Action Recognition Performance in ERP Video with Multiple Objects (다중 객체가 존재하는 ERP 영상에서 행동 인식 모델 성능 향상을 위한 전처리 기법)

  • Park, Eun-Soo;Kim, Seunghwan;Ryu, Eun-Seok
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.374-385
    • /
    • 2020
  • In this paper, we propose a preprocessing technique to address the problems of action recognition in Equirectangular Projection (ERP) video. The technique treats the person object as the subject of the action, i.e., the Object of Interest (OOI), and the area surrounding the OOI as the ROI. It consists of three modules: I) recognize person objects in the image with an object recognition model; II) create a saliency map from the input image; III) select the action subject using the recognized person objects and the saliency map. The bounding box of the selected action subject is then input to the action recognition model to improve recognition performance. Compared with feeding the original ERP image to the action recognition model, the proposed preprocessing improves performance by up to 99.6% when only the OOI is detected; the approach can also benefit related tasks such as video summarization.
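
The abstract does not give the exact rule for combining the detector output with the saliency map. One plausible reading of module III, shown purely as a sketch (the function name and scoring rule are assumptions), is to score each detected person box by its mean saliency and crop the winner:

```python
import numpy as np

def select_action_subject(person_boxes, saliency):
    """Pick the Object of Interest among detected person boxes.

    person_boxes: list of integer pixel boxes (x1, y1, x2, y2) from any
                  person detector.
    saliency:     HxW saliency map with values in [0, 1].
    The box whose region accumulates the highest mean saliency is taken
    as the action subject; its box becomes the ROI crop for recognition.
    """
    best_box, best_score = None, -1.0
    for (x1, y1, x2, y2) in person_boxes:
        patch = saliency[y1:y2, x1:x2]
        if patch.size == 0:
            continue
        score = float(patch.mean())
        if score > best_score:
            best_box, best_score = (x1, y1, x2, y2), score
    return best_box
```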

Smartphone PVR-based Cultural Assets Experience (스마트폰 PVR 기반 문화재 체험)

  • Choi, Hong-Seon;Lee, Kang-Hee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2012.07a
    • /
    • pp.17-18
    • /
    • 2012
  • In this paper, the target cultural asset is photographed as a panorama, the captured panorama photographs are stitched into an equirectangular image, mapped onto a virtual space in 3D studio max, and exported in the mqo format. We then describe how to load the exported mqo file in Android OpenGL and implement panoramic virtual-reality technology using GUI buttons.
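
The core of such a panorama viewer is wrapping the equirectangular texture onto a sphere so that texture coordinates follow longitude and latitude directly. A minimal sketch of sphere vertex/UV generation (illustrative only; the paper's 3D studio max/mqo assets would carry equivalent data):

```python
import numpy as np

def sphere_uv(lat_steps=32, lon_steps=64):
    """Generate sphere vertices with UVs for an equirectangular texture.

    Each vertex's (u, v) addresses the stitched equirectangular image
    directly: u follows longitude, v follows latitude, so the panorama
    wraps the sphere without any extra warping.
    """
    verts, uvs = [], []
    for i in range(lat_steps + 1):
        lat = np.pi * (i / lat_steps - 0.5)
        for j in range(lon_steps + 1):
            lon = 2 * np.pi * j / lon_steps - np.pi
            verts.append((np.cos(lat) * np.sin(lon),
                          np.sin(lat),
                          np.cos(lat) * np.cos(lon)))
            uvs.append((j / lon_steps, 1 - i / lat_steps))
    return np.array(verts), np.array(uvs)
```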

Preprocessing Methods for Action Recognition Model in 360-degree ERP Video (360 도 ERP 영상에서 행동 인식 모델 성능 향상을 위한 전처리 기법)

  • Park, Eun-Soo;Ryu, Jaesung;Kim, Seunghwan;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.11a
    • /
    • pp.252-255
    • /
    • 2019
  • In this paper, we show that performance improves when Equirectangular Projection (ERP) video is preprocessed with the proposed technique before being input to an action recognition model. By its nature, ERP video contains more regions irrelevant to action recognition than video captured with an ordinary 2D camera, and in action recognition the person is the Object of Interest (OOI). We therefore detect human objects with an object recognition model and extract a Region of Interest (ROI), which removes the unnecessary regions and also reduces distortion. After preprocessing with the proposed technique, performance was tested with a CNN-LSTM model. Comparing action recognition accuracy on preprocessed versus unprocessed data, preprocessing with the proposed technique improved performance by up to 61%, depending on the characteristics of the data.
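
The CNN-LSTM used for evaluation is not specified in detail; as a generic sketch of this architecture class in PyTorch (layer sizes and the tiny per-frame CNN are assumptions, not the authors' model):

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Per-frame CNN features fed to an LSTM for clip classification."""

    def __init__(self, num_classes, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(               # tiny per-frame encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                   # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)            # h[-1]: last hidden state
        return self.head(h[-1])                 # per-clip class logits
```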

Point-level deep learning approach for 3D acoustic source localization

  • Lee, Soo Young;Chang, Jiho;Lee, Seungchul
    • Smart Structures and Systems
    • /
    • v.29 no.6
    • /
    • pp.777-783
    • /
    • 2022
  • Although several deep learning-based methods have been applied to acoustic source localization, previous works have relied on two-dimensional representations of beamforming maps, particularly with planar array systems. Since we live and hear in a 3D world, acoustic sources are better localized with a spherical microphone array system, yet the conventional 2D equirectangular rendering of a spherical beamforming map is highly vulnerable to the distortion introduced when the 3D map is projected onto 2D space. In this study, a 3D deep learning approach is proposed to achieve accurate source localization via a distortion-free 3D representation. A target function is first proposed to obtain 3D source distribution maps that represent the positions and strengths of multiple sources. Because the proposed target map turns source localization into a point-wise prediction task, a PointNet-based deep neural network is developed to precisely estimate the positions and strengths of multiple sources. Evaluation shows that the proposed method achieves improved localization results from both quantitative and qualitative perspectives.
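
The abstract does not define the target function precisely; one way such a point-wise 3D source distribution target might be constructed (the Gaussian kernel, its width, and the max-combination are assumptions) is:

```python
import numpy as np

def source_distribution_target(points, sources, strengths, sigma=0.1):
    """Per-point target map for point-wise 3D source localization.

    points:    (N, 3) unit vectors sampling the sphere around the array.
    sources:   (S, 3) unit vectors of true source directions.
    strengths: (S,)  source strengths.
    Each point's target is the strongest nearby source, attenuated by a
    Gaussian in angular distance, so peak locations encode position and
    peak heights encode strength.
    """
    target = np.zeros(len(points))
    for s, q in zip(sources, strengths):
        ang = np.arccos(np.clip(points @ s, -1.0, 1.0))  # angular distance
        target = np.maximum(target, q * np.exp(-(ang / sigma) ** 2))
    return target
```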

Real-time multi-GPU-based 8KVR stitching and streaming on 5G MEC/Cloud environments

  • Lee, HeeKyung;Um, Gi-Mun;Lim, Seong Yong;Seo, Jeongil;Gwak, Moonsung
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.62-72
    • /
    • 2022
  • In this study, we propose a multi-GPU-based 8KVR stitching system that operates in real time in both local and cloud machine environments. The proposed system first obtains multiple 4K video inputs, decodes them, and generates a stitched 8KVR video stream in real time. The generated 8KVR video stream can be downloaded and rendered omnidirectionally in player apps on smartphones, tablets, and head-mounted displays. To speed up processing, we adopt group-of-pictures-based distributed decoding/encoding and buffering with the NV12 format, along with multi-GPU-based parallel processing. Furthermore, we develop several algorithms, such as equirectangular projection-based color correction, real-time CG overlay, and object motion-based seam estimation and correction, to improve the stitching quality. From experiments in both local and cloud machine environments, we confirm the feasibility of the proposed 8KVR stitching system, with stitching speeds of up to 83.7 fps for six-channel and 62.7 fps for eight-channel inputs. In addition, in an 8KVR live streaming test on the 5G MEC/cloud, the proposed system achieves stable performance at 8K@30 fps in both indoor and outdoor environments, even during motion.
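
Equirectangular-projection-based color correction typically equalizes camera exposures over regions where neighboring warped inputs overlap. A deliberately minimal sketch of one such step (gain matching on overlap statistics; not the authors' algorithm):

```python
import numpy as np

def overlap_gain(ref_overlap, src_overlap, eps=1e-6):
    """Per-channel gain matching one camera's colors to a reference.

    ref_overlap, src_overlap: (N, 3) pixel samples taken from the same
    scene region where the two cameras' equirectangular warps overlap.
    Matching mean intensities in the shared region is a simple form of
    exposure/color correction applied before blending the seams.
    """
    return ref_overlap.mean(axis=0) / (src_overlap.mean(axis=0) + eps)

# Usage sketch: scale the source camera's frame before seam blending.
# corrected = np.clip(src_frame * overlap_gain(ref_px, src_px), 0, 255)
```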

Approaching Vehicles Alert System Based on the 360 Degree Camera (360 도 카메라를 활용한 보행 시 차량 접근 알림 시스템)

  • Yoon, Soyeon;Kim, Eun-ji;Lee, Won-young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.556-559
    • /
    • 2021
  • This study concerns a system that uses equirectangular video captured with an Insta evo 360° camera to distinguish vehicles that are dangerous to pedestrians and to issue vehicle-approach alerts in real time. To detect and track dangerous vehicles in the 360° video, the system uses You Only Look Once v5 (YOLOv5) transfer-trained on panorama and ordinary road image datasets, the object tracking algorithm Simple Online and Realtime Tracking with a Deep Association Metric (DeepSORT), and a non-dangerous-vehicle filtering algorithm developed through experiments. When video captured with the Insta evo 360° camera mounted above the head is fed to the final system, it distinguishes non-dangerous from dangerous vehicles with about 90% accuracy and, for dangerous vehicles, visually indicates the vehicle's direction. Based on this work, we expect that warning pedestrians about dangerous vehicles outside their field of view will reduce pedestrian traffic accidents, and that applications of 360° cameras, which can see in all directions, will broaden beyond pedestrian safety systems.
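
The filtering rule developed in the experiments is not spelled out in the abstract. A hypothetical heuristic in the same spirit, flagging DeepSORT tracks whose boxes grow over time and reporting their bearing in the ERP frame, could look like this (the growth threshold and the front-at-image-center convention are assumptions):

```python
def approaching_and_bearing(track_boxes, frame_width, growth_ratio=1.2):
    """Flag a tracked vehicle as approaching and report its bearing.

    track_boxes: chronological (x1, y1, x2, y2) boxes for one DeepSORT
    track in the equirectangular frame.  A box whose area grows past
    `growth_ratio` between the first and last observation is treated as
    approaching; the bearing is the last box center's longitude in
    degrees, with 0 deg straight ahead (image center).
    """
    def area(box):
        return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

    if len(track_boxes) < 2 or area(track_boxes[0]) == 0:
        return False, None
    growing = area(track_boxes[-1]) / area(track_boxes[0]) >= growth_ratio
    cx = (track_boxes[-1][0] + track_boxes[-1][2]) / 2
    bearing = (cx / frame_width) * 360.0 - 180.0
    return growing, bearing
```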