• Title/Summary/Keyword: omnidirectional image


Optical Design of a Modified Catadioptric Omnidirectional Optical System for a Capsule Endoscope to Image Simultaneously Front and Side Views on a RGB/NIR CMOS Sensor (RGB/NIR CMOS 센서에서 정면 영상과 측면 영상을 동시에 결상하는 캡슐 내시경용 개선된 반사굴절식 전방위 광학계의 광학 설계)

  • Hong, Young-Gee;Jo, Jae Heung
    • Korean Journal of Optics and Photonics
    • /
    • v.32 no.6
    • /
    • pp.286-295
    • /
    • 2021
  • A modified catadioptric omnidirectional optical system (MCOOS) using an RGB/NIR CMOS sensor is optically designed for a capsule endoscope, with a front field of view (FOV) in visible light (RGB) and a side FOV in visible and near-infrared (NIR) light. The front image is captured by the front imaging lens system of the MCOOS, which consists of three additional lenses arranged behind the secondary mirror of the catadioptric omnidirectional optical system (COOS) together with the imaging lens system of the COOS. The side image is formed by the COOS itself. The Nyquist frequencies of the sensor in the RGB and NIR spectra are 90 lp/mm and 180 lp/mm, respectively. The design specifications fix an overall length of 12 mm, an F-number of 3.5, a front half FOV of 70°, and a side FOV of 50°-120°. As a result, a spatial frequency of 154 lp/mm at a modulation transfer function (MTF) of 0.3, a depth of focus (DOF) of -0.051 to +0.052 mm, and a cumulative probability of tolerance (CPT) of 99% are obtained from the COOS. Likewise, a spatial frequency of 170 lp/mm at an MTF of 0.3, a DOF of -0.035 to +0.051 mm, and a CPT of 99.9% are attained from the front imaging lens system of the optimized MCOOS.
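The sensor Nyquist frequencies quoted above follow directly from pixel pitch. A minimal sketch, assuming (hypothetically) a native pitch of about 2.78 µm for the NIR channel and a doubled effective pitch for the mosaicked RGB channels, which reproduces the abstract's figures:

```python
def nyquist_lp_mm(pixel_pitch_um):
    """Nyquist frequency in line pairs per mm for a given pixel pitch in micrometers."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# Assumed pitches chosen to reproduce the abstract's figures: a ~2.78 um
# native pitch gives the NIR Nyquist of ~180 lp/mm, and the doubled effective
# RGB pitch (~5.56 um, from a color filter mosaic) gives ~90 lp/mm.
print(round(nyquist_lp_mm(2.78)))  # prints 180
print(round(nyquist_lp_mm(5.56)))  # prints 90
```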

Omnidirectional Camera-based Image Rendering Synchronization System Using Head Mounted Display (헤드마운티드 디스플레이를 활용한 전방위 카메라 기반 영상 렌더링 동기화 시스템)

  • Lee, Seungjoon;Kang, Suk-Ju
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.6
    • /
    • pp.782-788
    • /
    • 2018
  • This paper proposes a novel omnidirectional camera-based image rendering synchronization system using a head mounted display. The proposed system has two main processes. The first is rendering remotely captured 360-degree images to the head mounted display. This method is based on the transmission control protocol/internet protocol (TCP/IP): the sequential images are rapidly captured and transmitted to the server over TCP/IP in a byte-array format. The server then collects the byte-array data and reassembles them into images, which the observer views while wearing the head mounted display. The second process is displaying the specific region determined by the user's head rotation. After extracting the user's head Euler angles from the head mounted display's inertial measurement unit (IMU) sensor, the proposed system displays the region corresponding to these angles. In the experimental results, rendering the original image at the same resolution in the given network environment causes loss of frame rate, while rendering at the same frame rate results in loss of resolution. Therefore, it is necessary to select optimal parameters considering the environmental requirements.
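Because TCP delivers a continuous byte stream, sending sequential images as byte arrays, as described above, requires some framing so the server can split the stream back into individual images. A minimal sketch of one common approach (length-prefixed framing; the function names are illustrative, not from the paper):

```python
import struct

def frame_image(data: bytes) -> bytes:
    """Prefix an image's byte array with a 4-byte big-endian length so the
    receiver can split the TCP stream back into individual images."""
    return struct.pack(">I", len(data)) + data

def read_frames(stream: bytes):
    """Split a received byte stream back into the original image payloads."""
    frames, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        frames.append(stream[offset:offset + length])
        offset += length
    return frames
```

For example, concatenating `frame_image(jpeg1) + frame_image(jpeg2)` on the sender side lets `read_frames` recover the two image buffers on the server side.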

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.1
    • /
    • pp.70-77
    • /
    • 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on the robots. The omnidirectional image sensor is a desirable sensor for real-time view-based recognition by a robot because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, which is obtained by a camera using a reflecting mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous 360° panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we extract motion vectors using Lucas-Kanade optical flow in the preprocessed image. Third, we estimate the robot's position and angle using the ego-motion method, which uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm through a comparison between the results (position and angle) obtained using the proposed algorithm and those measured by a Global Vision Localization System.
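The rotational part of the ego-motion estimate has a simple geometric core: on a 360° panoramic (warped) image, a pure yaw rotation shifts every pixel horizontally by the same amount, so the mean horizontal component of the optical-flow vectors converts directly to an angle. A minimal sketch of that idea only (the paper's full method additionally uses RANSAC and the vanishing point, which this omits):

```python
def yaw_from_panoramic_flow(flow_dx_pixels, image_width):
    """Estimate yaw rotation from a 360-degree panoramic image: average the
    horizontal components of the optical-flow vectors (in pixels) and
    convert pixels to degrees using the panorama's width."""
    mean_dx = sum(flow_dx_pixels) / len(flow_dx_pixels)
    return mean_dx * 360.0 / image_width

# e.g. flow vectors averaging 10 px on a 720-px-wide panorama -> 5.0 degrees
```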

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • v.36 no.6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots using an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, based on an object extraction method that uses Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map by using the map data obtained from all of the individual robots. Global mapping takes a long time to process because it exchanges map data between individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computation of the correction algorithm is improved over existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on an omnidirectional-vision SLAM approach. Second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm with real maps.
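The second step, merging individual maps into a global map, can be sketched for the simple case of occupancy grids with known robot pose offsets; this is an illustrative assumption, not the paper's implementation, and overlapping cells here just keep the maximum occupancy estimate:

```python
def merge_local_maps(local_maps, offsets, global_shape):
    """Merge per-robot occupancy grids (2D lists with values in 0..1) into a
    global grid, assuming each robot's (row, col) offset in the global frame
    is known. Overlapping cells keep the maximum occupancy estimate."""
    h, w = global_shape
    global_map = [[0.0] * w for _ in range(h)]
    for grid, (oy, ox) in zip(local_maps, offsets):
        for y, row in enumerate(grid):
            for x, v in enumerate(row):
                gy, gx = y + oy, x + ox
                if 0 <= gy < h and 0 <= gx < w:
                    global_map[gy][gx] = max(global_map[gy][gx], v)
    return global_map
```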

Omnidirectional Environmental Projection Mapping with Single Projector and Single Spherical Mirror (단일 프로젝터와 구형 거울을 활용한 전 방향프로젝션 시스템)

  • Kim, Bumki;Lee, Jungjin;Kim, Younghui;Jeong, Seunghwa;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.1
    • /
    • pp.1-11
    • /
    • 2015
  • Researchers have developed virtual reality environments to provide audiences with more visually immersive experiences than previously possible. One of the most popular solutions for building an immersive VR space is a multi-projection technique. However, using multiple projectors requires a large space, high cost, and accurate geometric calibration among the projectors. This paper presents a novel omnidirectional projection system with a single projector and a single spherical mirror. We designed a simple and intuitive calibration system to define the shape of the environment and the relative positions of the mirror and projector. For successful image projection, our optimized omnidirectional image generation step resolves the image distortion produced by the spherical mirror and the calibration problem caused by unknown parameters such as the shape of the environment and the relative position between the mirror and the projector. Additionally, focus correction is performed to improve the quality of the projection. The experimental results show that our method can generate an optimized image from a normal panoramic image for omnidirectional projection in a rectangular space.

Multi License Plate Recognition System using High Resolution 360° Omnidirectional IP Camera (고해상도 360° 전방위 IP 카메라를 이용한 다중 번호판 인식 시스템)

  • Ra, Seung-Tak;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.4
    • /
    • pp.412-415
    • /
    • 2017
  • In this paper, we propose a multi license plate recognition system using a high-resolution 360° omnidirectional IP camera. The proposed system consists of a planar division part for the 360° circular image and a multi license plate recognition part. The planar division part converts the 360° circular image into a planar image with enhanced quality through processes such as circular image acquisition, circular image segmentation, conversion to a plane image, pixel correction using color interpolation, color correction, and edge correction on the high-resolution 360° omnidirectional IP camera. The multi license plate recognition part recognizes the plates in the planar image by extracting multiple plate candidate regions, normalizing and restoring the candidate areas, and recognizing the plate numbers and characters using a neural network. To evaluate the proposed system, we ran experiments with a specialist operating an intelligent parking control system and confirmed a high plate recognition rate of 97.8%.
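The conversion of the circular omnidirectional image to a plane image is, at its core, a polar-to-rectangular resampling: each panorama column sweeps the angle and each row sweeps the radius of the donut-shaped source image. A minimal nearest-neighbor sketch under that assumption (the paper's pipeline adds color interpolation and edge correction, which this omits):

```python
import math

def unwrap_circular(img, cx, cy, r_in, r_out, out_w, out_h):
    """Nearest-neighbor unwrap of a circular (donut) omnidirectional image
    into a rectangular panorama: columns sweep the angle 0..2*pi, rows sweep
    the radius r_in..r_out. img is a 2D list indexed as img[y][x]."""
    pano = [[0] * out_w for _ in range(out_h)]
    for row in range(out_h):
        r = r_in + (r_out - r_in) * row / max(out_h - 1, 1)
        for col in range(out_w):
            theta = 2.0 * math.pi * col / out_w
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= y < len(img) and 0 <= x < len(img[0]):
                pano[row][col] = img[y][x]
    return pano
```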

Development of Annular Optics for the Inspection of Surface Defects on Screw Threads Using Ray Tracing Simulation (광선추적을 사용한 나사산 표면결함 검사용 환형 광학계 개발)

  • Lee, Jiwon;Lim, Yeong Eun;Park, Keun;Ra, Seung Woo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.33 no.6
    • /
    • pp.491-497
    • /
    • 2016
  • This study aims to develop a vision inspection system for screw threads. To inspect external defects in screw threads, the vision inspection system was developed using front light illumination from which bright images can be obtained. The front light system, however, requires multiple side images for inspection of the entire thread surface, which can be performed by omnidirectional optics. In this study, an omnidirectional optical system was designed to obtain annular images of screw threads using an image sensor and two reflection mirrors; one large concave mirror and one small convex mirror. Optical simulations using backward and forward ray tracing were performed to determine the dimensional parameters of the proposed optical system, so that an annular image of the screw threads could be obtained with high quality and resolution. Microscale surface defects on the screw threads could be successfully detected using the developed annular inspection system.
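At the heart of both the backward and forward ray tracing used to size the two mirrors is the specular reflection law. A minimal sketch of that single step (illustrative only; the study's simulation traces full ray bundles through both mirrors):

```python
def reflect(d, n):
    """Reflect a 3D direction vector d off a surface with unit normal n,
    using the specular reflection law r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling straight down onto a horizontal mirror bounces straight up:
# reflect((0, 0, -1), (0, 0, 1)) -> (0, 0, 1)
```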

Acquisition of Intrinsic Image by Omnidirectional Projection of ROI and Translation of White Patch on the X-chromaticity Space (X-색도 공간에서 ROI의 전방향 프로젝션과 백색패치의 평행이동에 의한 본질 영상 획득)

  • Kim, Dal-Hyoun;Hwang, Dong-Guk;Lee, Woo-Ram;Jun, Byoung-Min
    • The KIPS Transactions:PartB
    • /
    • v.18B no.2
    • /
    • pp.51-56
    • /
    • 2011
  • Algorithms for intrinsic images reduce color differences in RGB images caused by the temperature of black-body radiators. Because they rely on a reference light and detect a single invariant direction, these algorithms perform poorly on real images, which can have multiple invariant directions when the scene illuminant is colored. To solve these problems, this paper proposes a method of acquiring an intrinsic image by omnidirectional projection of a region of interest (ROI) and a translation of the white patch in the χ-chromaticity space. Because it is not easy to analyze an image in the three-dimensional RGB space, this paper also employs the χ-chromaticity, which omits the brightness factor. After the effect of the colored illuminant is reduced by a translation of the white patch, an invariant direction is detected by omnidirectional projection of an ROI in this chromaticity space. In case the RGB image has multiple invariant directions, only one ROI is selected, using the bin with the highest frequency in the 3D histogram. Then two operations, projection and inverse transformation, yield the intrinsic image. In the experiments, the test images were the four datasets presented by Ebner, and the evaluation metrics were as follows: the standard deviation of the invariant direction, the constancy measure, the color space measure, and the color constancy measure. The experimental results showed that the proposed method had a lower standard deviation than the entropy method, and that its performance was two times higher than that of the compared algorithm.
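A chromaticity space of this kind maps each RGB pixel to 2D by dividing out brightness; a common construction (assumed here, since the abstract does not spell out the exact definition) uses log ratios against the geometric mean, after which projecting onto the invariant direction gives a 1D illumination-invariant value:

```python
import math

def chromaticity(r, g, b):
    """Map an RGB pixel to a 2D log-chromaticity point by dividing out
    brightness (the geometric mean) and taking logs. This is one common
    choice of chromaticity space, assumed for this sketch."""
    m = (r * g * b) ** (1.0 / 3.0)
    return (math.log(r / m), math.log(g / m))

def project(point, direction):
    """Project a 2D chromaticity point onto the invariant direction,
    yielding a 1D illumination-invariant value."""
    return point[0] * direction[0] + point[1] * direction[1]
```

A neutral gray pixel such as (2, 2, 2) maps to the origin of this space, as expected for a brightness-free representation.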

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard;Lee, Jiyeong;Blanco, Ariel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.5
    • /
    • pp.319-333
    • /
    • 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve indoor navigation, such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current automated methods need expensive sensors or datasets that are difficult and expensive to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of a Shooting Point is defined to accommodate the use of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator, and the indoor spaces were then identified using image processing implemented in Python. The relative positions of spaces obtained from CAD (Computer-Aided Design) drawings were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.
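A node-relation graph of the kind described above maps each indoor space to a node (an IndoorGML state) and each adjacency or connectivity relation to an edge (a transition). A minimal sketch with hypothetical room identifiers, not taken from the study area:

```python
def build_node_relation_graph(spaces, relations):
    """Build an IndoorGML-style node-relation graph: each indoor space
    becomes a node and each pairwise relation (adjacency or connectivity)
    becomes an undirected edge."""
    graph = {s: set() for s in spaces}
    for a, b in relations:
        graph[a].add(b)
        graph[b].add(a)
    return graph

# Hypothetical example: two rooms connected to a corridor through doors.
rooms = ["R101", "R102", "Corridor"]
doors = [("R101", "Corridor"), ("R102", "Corridor")]
# build_node_relation_graph(rooms, doors)["Corridor"] -> {"R101", "R102"}
```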

Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.10-23
    • /
    • 2022
  • Despite the significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image settings in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the individual images' connected structure at different camera pose estimation steps, we can estimate camera poses more accurately for all structured mosaic-based image sets, including omnidirectional scenes.
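The reprojection error that bundle adjustment minimizes has a simple form for a pinhole camera: project the 3D point with the camera intrinsics and measure the pixel distance to the observation. A minimal sketch of that error term alone, assuming a camera-frame point and ignoring the paper's rotation-consistency constraint:

```python
def reprojection_error(point3d, observed_px, f, cx, cy):
    """Pinhole reprojection error of the kind minimized during bundle
    adjustment: project a camera-frame 3D point with focal length f and
    principal point (cx, cy), then return the pixel distance to the
    observed image point."""
    X, Y, Z = point3d
    u = f * X / Z + cx
    v = f * Y / Z + cy
    du, dv = u - observed_px[0], v - observed_px[1]
    return (du * du + dv * dv) ** 0.5
```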