• Title/Summary/Keyword: Omnidirectional Images


Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture a great deal of information about the environment from only a few images, various methods for calibration and 3D reconstruction from omnidirectional images have been actively studied. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, corresponding contours across an image sequence are useful for computing the camera transformation, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors of each corresponding point. The final parameters are then computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that the algorithm achieves precise contour matching and camera motion estimation.
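The coarse first step can be illustrated numerically. The toy Python below is my own reconstruction, not the authors' code: it builds an essential matrix E = [t]×R from a known motion, measures the angular error between each back-projected unit ray and its epipolar plane, and recovers a one-parameter rotation by grid search. All function names and the synthetic points are assumptions for illustration.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def normalize(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]
def matvec(M, v): return [dot(row, v) for row in M]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def essential(R, t):
    # E = [t]_x R, so the epipolar constraint is x2^T E x1 = 0
    tx = [[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]]
    return [[sum(tx[i][k] * R[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def angular_error(E, x1, x2):
    """Angle between unit ray x2 and the epipolar plane with normal E*x1."""
    n = normalize(matvec(E, x1))
    s = max(-1.0, min(1.0, dot(n, x2)))
    return abs(math.asin(s))

def ray_pair(R, t, P):
    """Unit back-projected rays of world point P in both cameras."""
    q = matvec(R, P)
    return normalize(P), normalize([q[i] + t[i] for i in range(3)])

# Coarse step: grid-search the rotation angle that minimizes the
# summed angular error over all correspondences.
R_true, t_true = rot_z(0.10), [1.0, 0.0, 0.0]
points = [[1, 2, 5], [-2, 1, 4], [3, -1, 6], [0.5, 2, 3]]
pairs = [ray_pair(R_true, t_true, P) for P in points]

def total_error(theta):
    E = essential(rot_z(theta), t_true)
    return sum(angular_error(E, x1, x2) for x1, x2 in pairs)

best = min((i / 100.0 for i in range(21)), key=total_error)
print(best)  # grid point closest to the true rotation angle 0.10
```

In the paper this coarse estimate would then seed the second, contour-distance minimization step.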

Watermark Extraction Method of Omnidirectional Images Using CNN (CNN을 이용한 전방위 영상의 워터마크 추출 방법)

  • Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.25 no.2
    • /
    • pp.151-156
    • /
    • 2020
  • In this paper, we propose a watermark extraction method for omnidirectional images using a CNN (Convolutional Neural Network) to improve on the extraction accuracy of previous deterministic, algorithm-based methods. The CNN consists of a restoration stage, which extracts watermarks by correcting the distortion introduced during omnidirectional image generation and/or by malicious attacks, and a classification stage, which identifies which watermark has been extracted. Experiments with various attacks confirm that the extracted watermarks are more accurate than those of the previous methods.

Development of Annular Optics for the Inspection of Surface Defects on Screw Threads Using Ray Tracing Simulation (광선추적을 사용한 나사산 표면결함 검사용 환형 광학계 개발)

  • Lee, Jiwon;Lim, Yeong Eun;Park, Keun;Ra, Seung Woo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.33 no.6
    • /
    • pp.491-497
    • /
    • 2016
  • This study aims to develop a vision inspection system for screw threads. To inspect external defects on screw threads, the system uses front-light illumination, which yields bright images. The front-light setup, however, requires multiple side images to inspect the entire thread surface, a task that can be performed with omnidirectional optics. In this study, an omnidirectional optical system was designed to obtain annular images of screw threads using an image sensor and two reflection mirrors: one large concave mirror and one small convex mirror. Optical simulations using backward and forward ray tracing were performed to determine the dimensional parameters of the proposed optical system, so that a high-quality, high-resolution annular image of the screw threads could be obtained. Microscale surface defects on the screw threads were successfully detected using the developed annular inspection system.

Omnidirectional Camera-based Image Rendering Synchronization System Using Head Mounted Display (헤드마운티드 디스플레이를 활용한 전방위 카메라 기반 영상 렌더링 동기화 시스템)

  • Lee, Seungjoon;Kang, Suk-Ju
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.6
    • /
    • pp.782-788
    • /
    • 2018
  • This paper proposes a novel omnidirectional camera-based image rendering synchronization system using a head mounted display. The proposed system has two main processes. The first renders remotely photographed 360-degree images to the head mounted display. It is based on the transmission control protocol/internet protocol (TCP/IP): sequential images are rapidly captured and transmitted to the server as byte arrays over TCP/IP. The server then collects the byte arrays and reassembles them into images, which the observer views through the head mounted display. The second process displays the specific region determined by the user's head rotation. After extracting the user's head Euler angles from the head mounted display's inertial measurement unit sensor, the proposed system displays the region corresponding to these angles. The experimental results show that rendering the original image at full resolution in the given network environment causes a loss of frame rate, while rendering at the full frame rate causes a loss of resolution. Therefore, optimal parameters must be selected according to the environmental requirements.
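The abstract does not give the wire format, so the following is only a plausible sketch of the byte-array transmission it describes: length-prefixed image frames over a TCP-style socket. The framing scheme and the names `send_frame`/`recv_frame` are my assumptions, not the paper's protocol.

```python
import socket
import struct

def send_frame(sock, data):
    """Prefix each image's byte array with a 4-byte big-endian length."""
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_exact(sock, n):
    """Read exactly n bytes (recv may return partial chunks)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo: a connected socket pair stands in for the capture client
# and the rendering server.
client, server = socket.socketpair()
frame = b"\xff\xd8fake-jpeg-bytes" + bytes(64)  # stand-in image bytes
send_frame(client, frame)
received = recv_frame(server)
print(received == frame)  # True
client.close(); server.close()
```

The length prefix is what lets the server split a continuous TCP byte stream back into individual images before reassembly.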

Performance Analysis on View Synthesis of 360 Video for Omnidirectional 6DoF

  • Kim, Hyun-Ho;Lee, Ye-Jin;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.11a
    • /
    • pp.22-24
    • /
    • 2018
  • The MPEG-I Visual group is actively working on enhancing immersive experiences with up to six degrees of freedom (6DoF). In the virtual space of omnidirectional 6DoF, which is defined as providing 6DoF within a restricted area, looking at the scene from another viewpoint (another position in space) requires rendering additional viewpoints called virtual omnidirectional viewpoints. This paper presents a performance analysis of view synthesis, carried out as an exploration experiment (EE) in MPEG-I, from sets of 360 videos providing omnidirectional 6DoF with different distances, directions, and numbers of input views. In addition, we compare the subjective quality of images synthesized from one input view and from two input views.


Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.10-23
    • /
    • 2022
  • Despite significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image setting in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the images' connected structure at different camera pose estimation steps, we can estimate camera poses more accurately from all structured mosaic-based image sets, including omnidirectional scenes.
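A minimal sketch of what a "constraint term for consistent rotation vectors" could look like inside a bundle-adjustment cost: a quadratic penalty over neighboring cameras in the array, added to the usual reprojection data term. The quadratic form, the λ weight, and all names here are my assumptions, not the paper's published formulation.

```python
def reprojection_cost(residuals):
    """Sum of squared reprojection residuals (the usual BA data term)."""
    return sum(r * r for r in residuals)

def rotation_consistency_penalty(rot_vecs, neighbor_pairs, lam=1.0):
    """Soft constraint pulling neighboring array cameras toward
    similar rotation vectors."""
    total = 0.0
    for i, j in neighbor_pairs:
        total += sum((a - b) ** 2 for a, b in zip(rot_vecs[i], rot_vecs[j]))
    return lam * total

# Toy 1x3 camera array: the middle camera's rotation vector is noisy.
rot_vecs = [[0.00, 0.0, 0.0], [0.08, 0.0, 0.0], [0.01, 0.0, 0.0]]
neighbors = [(0, 1), (1, 2)]
residuals = [0.5, -0.3, 0.2]

cost = reprojection_cost(residuals) + rotation_consistency_penalty(
    rot_vecs, neighbors, lam=10.0)
print(round(cost, 4))  # 0.493
```

An optimizer minimizing this composite cost trades a little reprojection accuracy for rotations that agree with the known array layout.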

Omnidirectional Camera System Design for a Security Robot (경비용 로봇을 위한 전방향 카메라 장치 설계)

  • Kim, Kilsu;Do, Yongtae
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.3 no.2
    • /
    • pp.74-81
    • /
    • 2008
  • This paper describes a low-cost omnidirectional camera system designed to give a security robot intruder detection capability. Moving targets in sequential images are first detected by an adaptive background subtraction technique, and the targets are identified as intruders if they fail to enter a password within a preset time. A warning message is then sent to the owner's mobile phone, and the owner can check scene pictures posted by the system on the web. The developed system worked well in experiments, including a situation in which the indoor lighting was suddenly changed.
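The adaptive background subtraction step can be sketched with a running-average model, which adapts to gradual changes (such as lighting) while flagging abrupt ones. This is a generic textbook variant under assumed parameters (α, threshold), not the authors' exact implementation.

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average model: bg <- (1 - alpha)*bg + alpha*frame, so the
    background slowly adapts to gradual scene changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30.0):
    """Flag pixels that differ from the adapted background model."""
    return [abs(f - b) > thresh for f, b in zip(frame, bg)]

# Toy 1-D "image" of 5 pixels.
bg = [100.0] * 5
for _ in range(20):                      # static scene: model converges
    bg = update_background(bg, [100.0] * 5)

intruder_frame = [100.0, 100.0, 200.0, 100.0, 100.0]
mask = foreground_mask(bg, intruder_frame)
print(mask)  # [False, False, True, False, False]
```

A sudden global lighting change raises many pixels at once, which is why such systems typically re-adapt quickly or gate the alarm on the size and shape of the foreground region.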


Acquisition of Intrinsic Image by Omnidirectional Projection of ROI and Translation of White Patch on the X-chromaticity Space (X-색도 공간에서 ROI의 전방향 프로젝션과 백색패치의 평행이동에 의한 본질 영상 획득)

  • Kim, Dal-Hyoun;Hwang, Dong-Guk;Lee, Woo-Ram;Jun, Byoung-Min
    • The KIPS Transactions:PartB
    • /
    • v.18B no.2
    • /
    • pp.51-56
    • /
    • 2011
  • Algorithms for intrinsic images reduce color differences in RGB images caused by the temperature of black-body radiators. Because they rely on a reference light and detect a single invariant direction, these algorithms perform poorly on real images, which can have multiple invariant directions when the scene illuminant is colored. To solve these problems, this paper proposes a method of acquiring an intrinsic image by omnidirectional projection of an ROI and a translation of the white patch in the χ-chromaticity space. Because analyzing an image in the three-dimensional RGB space is not easy, the χ-chromaticity, which omits the brightness factor, is employed. After the effect of the colored illuminant is reduced by a translation of the white patch, an invariant direction is detected by omnidirectional projection of an ROI in this chromaticity space. When the RGB image has multiple invariant directions, a single ROI is selected using the bin with the highest frequency in the 3D histogram. The two subsequent operations, projection and inverse transformation, then yield the intrinsic image. In the experiments, the test images were the four datasets presented by Ebner, and the evaluation metrics were the standard deviation of the invariant direction, the constancy measure, the color space measure, and the color constancy measure. The experimental results show that the proposed method has a lower standard deviation than the entropy-based method and that its performance is twice that of the compared algorithm.
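The direction search can be sketched as follows: pixels are mapped to a brightness-free 2-D chromaticity, every projection direction from 0° to 179° is tried (a stand-in for the paper's omnidirectional projection of an ROI), and the direction along which the projected values spread least is kept. Variance is used here as the selection criterion instead of the paper's exact measures; all names and the synthetic data are assumptions.

```python
import math

def log_chromaticity(rgb):
    """Brightness-free 2-D chromaticity: (log(R/G), log(B/G))."""
    r, g, b = (max(v, 1e-6) for v in rgb)
    return (math.log(r / g), math.log(b / g))

def invariant_direction(points):
    """Scan projection directions 0..179 degrees and return the one
    with the smallest spread of projected values."""
    best_deg, best_spread = 0, float("inf")
    for deg in range(180):
        th = math.radians(deg)
        proj = [x * math.cos(th) + y * math.sin(th) for x, y in points]
        mean = sum(proj) / len(proj)
        spread = sum((p - mean) ** 2 for p in proj)
        if spread < best_spread:
            best_deg, best_spread = deg, spread
    return best_deg

# Synthetic ROI: one surface under varying black-body temperature drifts
# along the 45-degree line in chromaticity space, so projections onto the
# perpendicular 135-degree direction are (nearly) constant.
c = math.cos(math.radians(45))
points = [(0.1 + t * c, 0.2 + t * c) for t in (-0.3, -0.1, 0.0, 0.2, 0.4)]
print(invariant_direction(points))  # 135
```

Projecting all pixels onto that direction and inverting the transform is what removes the illuminant-temperature variation from the image.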

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.1
    • /
    • pp.70-77
    • /
    • 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow on warped images obtained through fish-eye lenses mounted on the robot. The omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera viewing a reflective mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm can be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-facing fish-eye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot's position and angle with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed algorithm by comparing its results (position and angle) with measurements from the Global Vision Localization System.
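The third step can be illustrated in a deliberately simplified form: for a pure yaw rotation, a panoramic image shifts horizontally by a constant column offset, so a RANSAC-style consensus over the horizontal flow components maps directly to a rotation angle. The shift-to-angle mapping and every parameter below are assumptions for illustration, not the paper's full ego-motion model (which also uses the vanishing point and estimates position).

```python
import random
import statistics

def yaw_from_panoramic_flow(flow_u, width_px, tol=2.0, iters=50, seed=0):
    """RANSAC-style consensus on horizontal flow (pixels): keep the
    largest set of mutually consistent shifts, then convert the mean
    shift to a rotation angle in degrees."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        candidate = rng.choice(flow_u)          # hypothesize one shift
        inliers = [u for u in flow_u if abs(u - candidate) <= tol]
        if len(inliers) > len(best):
            best = inliers
    shift = statistics.mean(best)
    return shift * 360.0 / width_px             # full width = full turn

# 40 consistent flow vectors (10 px) plus 3 outliers from moving objects.
flow_u = [10.0] * 40 + [55.0, -30.0, 80.0]
angle = yaw_from_panoramic_flow(flow_u, width_px=720)
print(angle)  # 5.0 degrees
```

The RANSAC step is what keeps flow vectors from independently moving objects out of the ego-motion estimate.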

Comparison of 3D Reconstruction Methods to Create 3D Indoor Models with Different LODs

  • Hong, Sungchul;Choi, Hyunsang
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.674-675
    • /
    • 2015
  • A 3D indoor model has become an indispensable component of BIM (Building Information Modeling) and GIS (Geographic Information System). However, a huge amount of time and human resources is required to collect spatial measurements and create such a 3D indoor model, and varied forms of 3D indoor models exist depending on their purpose of use. Thus, in this study, three different 3D indoor models are defined: 1) omnidirectional images, 2) a 3D realistic model, and 3) a 3D indoor as-built model. A series of reconstruction methods is then introduced to construct each type of 3D indoor model: an omnidirectional image acquisition method, a hybrid surveying method, and a terrestrial LiDAR-based method. The reconstruction methods are applied to a large and complex atrium, and their 3D modeling results are compared and analyzed.
