• Title/Summary/Keyword: Fisheye lens

Development of 360° Omnidirectional IP Camera with High Resolution of 12 Million Pixels (1200만 화소의 고해상도 360° 전방위 IP 카메라 개발)

  • Lee, Hee-Yeol;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.3
    • /
    • pp.268-271
    • /
    • 2017
  • In this paper, we propose the development of a high-resolution $360^{\circ}$ omnidirectional IP camera with 12 million pixels. The proposed camera consists of a lens unit with a $360^{\circ}$ omnidirectional viewing angle and a 12-megapixel high-resolution IP camera unit. The lens unit adopts the isochronous lens design method and the catadioptric facet production method to obtain images without the peripheral distortion that is inevitably generated by a fisheye lens. The 12-megapixel high-resolution IP camera unit consists of a CMOS sensor & ISP unit, a DSP unit, and an I/O unit; it converts the image input to the camera into a digital image, performs image distortion correction, image enhancement, and image compression, and then transmits the result to an NVR (Network Video Recorder). To evaluate the performance of the proposed camera, the 12.3-megapixel image performance, the $360^{\circ}$ omnidirectional lens angle of view, and the electromagnetic certification standard were measured.
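
The peripheral distortion the lens unit is designed to avoid comes from the standard fisheye mapping. A minimal sketch contrasting the equidistant fisheye model with an ordinary perspective lens (the focal length below is hypothetical, not the paper's lens):

```python
import math

def equidistant_radius(theta_rad, focal_mm):
    """Equidistant fisheye model: the image radius grows linearly with
    the incidence angle, r = f * theta, so a 180-degree lens maps
    theta = pi/2 to a finite radius."""
    return focal_mm * theta_rad

def rectilinear_radius(theta_rad, focal_mm):
    """Ordinary perspective model: r = f * tan(theta), which diverges
    as theta approaches 90 degrees; this is why a normal lens cannot
    cover a full hemisphere."""
    return focal_mm * math.tan(theta_rad)

f = 1.8  # hypothetical focal length in mm
r_eq = equidistant_radius(math.pi / 4, f)    # finite for both models
r_re = rectilinear_radius(math.pi / 4, f)
r_edge = equidistant_radius(math.pi / 2, f)  # still finite at the 90-degree edge
```

The equidistant mapping keeps the full hemisphere in frame but compresses the periphery, which is the distortion the catadioptric lens design described above seeks to remove optically.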

Mobile Robot Localization and Mapping using Scale-Invariant Features (스케일 불변 특징을 이용한 이동 로봇의 위치 추정 및 매핑)

  • Lee, Jong-Shill;Shen, Dong-Fan;Kwon, Oh-Sang;Lee, Eung-Hyuk;Hong, Seung-Hong
    • Journal of IKEEE
    • /
    • v.9 no.1 s.16
    • /
    • pp.7-18
    • /
    • 2005
  • A key capability of an autonomous mobile robot is to localize itself accurately and build a map of the environment simultaneously. In this paper, we propose a vision-based mobile robot localization and mapping algorithm using scale-invariant features. A camera with a fisheye lens facing the ceiling is attached to the robot to acquire high-level features with scale invariance. These features are used in the map building and localization processes. As pre-processing, input images from the fisheye lens are calibrated to remove radial distortion, and then labeling and convex hull techniques are used to segment the ceiling region from the wall region. In the initial map building process, features are calculated for the segmented regions and stored in a map database. Features are continuously calculated from sequential input images and matched against the existing map until the map building process is finished; features that are not matched are added to the map. Localization is performed simultaneously with feature matching during map building: when features are matched with the existing map, the pose is estimated and the map database is updated at the same time. The proposed method can build a map of a $50m^2$ area in 2 minutes. The positioning accuracy is ${\pm}13cm$, and the average error in robot heading is ${\pm}3$ degrees.
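
The pre-processing step above, removing radial distortion before feature extraction, can be sketched with a simple polynomial distortion model; the coefficients below are hypothetical, not the paper's calibration values:

```python
def undistort_radius(r_d, k1, k2=0.0):
    """Polynomial radial-distortion model: maps a distorted radius r_d
    to an undistorted radius r_u = r_d * (1 + k1*r_d**2 + k2*r_d**4)."""
    return r_d * (1.0 + k1 * r_d ** 2 + k2 * r_d ** 4)

def undistort_point(x, y, k1, k2=0.0):
    """Undistort a normalized image point given relative to the
    principal point; the point on the optical axis is unchanged."""
    r_d = (x * x + y * y) ** 0.5
    if r_d == 0.0:
        return (0.0, 0.0)
    scale = undistort_radius(r_d, k1, k2) / r_d
    return (x * scale, y * scale)
```

With a positive k1, off-axis points are pushed outward, compensating the inward compression (barrel distortion) of the fisheye image before the labeling and convex hull steps run.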

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard;Lee, Jiyeong;Blanco, Ariel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.5
    • /
    • pp.319-333
    • /
    • 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve navigation indoors such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current methods in automation need expensive sensors or datasets that are difficult and expensive to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of Shooting Point is defined to accommodate the usage of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator and indoor spaces are then identified using image processing implemented in Python. Relative positions of spaces obtained from CAD (Computer-Assisted Drawing) were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.
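
The node-relation graph at the heart of the IndoorGML model above can be sketched as a plain adjacency structure over identified spaces; the room identifiers below are hypothetical, not from the study area:

```python
from collections import defaultdict

def build_nrg(edges):
    """Build an undirected node-relation graph (NRG) from a list of
    (space_a, space_b) pairs, e.g. door connections between rooms."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def connected(graph, start, goal):
    """Reachability check: the kind of query an indoor navigation or
    evacuation service runs on the connectivity graph."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph[node] - seen)
    return False

# Hypothetical spaces: two rooms joined by a corridor.
g = build_nrg([("room101", "corridor"), ("corridor", "room102")])
```

Each node would correspond to a space identified from a Shooting Point's omnidirectional image, with edge types (adjacency, connectivity, accessibility) attached as labels in a fuller implementation.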

A Study of Selecting Sequential Viewpoint and Examining the Effectiveness of Omni-directional Angle Image Information in Grasping the Characteristics of Landscape (경관 특성 파악에 있어서의 시퀀스적 시점장 선정과 전방위 화상정보의 유효성 검증에 관한 연구)

  • Kim, Heung Man;Lee, In Hee
    • KIEAE Journal
    • /
    • v.9 no.2
    • /
    • pp.81-90
    • /
    • 2009
  • Concerning the grasp of sequential landscape characteristics in consideration of the behavioral characteristics of the subject experiencing visual perception, this study examined the main walking-line sections for visitors to the three treasured Buddhist temples. In particular, as a method of obtaining data for grasping the sequentially perceived landscape, the researcher employed momentary sequential viewpoint setup at arbitrarily spaced points, together with fisheye-lens camera photography using the obtained omni-directional visual information. As a result, in terms of viewpoint selection, factors such as the approach road form, changes in the circulation axis, changes in ground surface level, and the appearance of objects were verified to have an effect, and among these, the approach road form and circulation axis change turned out to be the greatest influences. In addition, in a review of effectiveness with the test subjects, the qualitative evaluation of landscape components using the VR picture images obtained while acquiring the omni-directional visual information yielded positive results above threshold values for panoramic vision, scene reproduction, three-dimensional perspective, and so on. This supports the future use of omni-directional image information for the qualitative evaluation of landscape and for landscape studies based on it.

Search for Gravity Waves with a New All-sky Camera System

  • Kim, Yong-Ha;Chung, Jong-Kyun;Won, Yong-In;Lee, Bang-Yong
    • Ocean and Polar Research
    • /
    • v.24 no.3
    • /
    • pp.263-266
    • /
    • 2002
  • Gravity waves have been searched for with a new all-sky camera system over the Korean Peninsula. The all-sky camera consists of a 37mm/F4.5 Mamiya fisheye lens with a 180-degree field of view, interference filters, and a 1024 by 1024 CCD camera. The all-sky camera was tested near Daejeon city and then moved to Mt. Bohyun, where the largest astronomical telescope in Korea is operated. A clear wave pattern was successfully detected in OH filter images over Mt. Bohyun on July 18, 2001, indicating that small-scale coherent gravity waves perturbed OH airglow near the mesopause. Other wave features have since been observed with Na 589.8nm and OI 630.0nm filters. Since a Japanese all-sky camera network has already detected traveling ionospheric disturbances (TID) over the northeast-southwest range of the Japanese islands, we hope our all-sky camera extends the coverage of TID observations to the west. We plan to operate our all-sky camera all year round to study the seasonal variation of wave activity over the mid-latitude upper atmosphere.

A Study on Effective Stitching Technique of 360° Camera Image (360° 카메라 영상의 효율적인 스티칭 기법에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.16 no.2
    • /
    • pp.335-341
    • /
    • 2018
  • This study investigates effective stitching techniques for video recorded with a dual-lens $360^{\circ}$ camera composed of two fisheye lenses. First, the study identified problems in the results of stitching with the camera's bundled program. It then focused on finding a more efficient and nearly seamless stitching technique by comparatively analyzing the results of stitching with Autopano Video Pro and Autopano Giga, professional stitching programs. The problems of the bundled program were found to be horizontal and vertical distortion, exposure and color mismatch, and rough stitching lines. The horizontal and vertical problems could be solved with the Automatic Horizon and Verticals Tool of Autopano Video Pro and Autopano Giga, the exposure and color problems with Levels, Color, and Edit Color Anchors, and the stitching-line problems with the Mask function. Based on this study, it is hoped that more nearly seamless $360^{\circ}$ VR video content can be produced through efficient stitching of video recorded with dual-lens $360^{\circ}$ cameras in the future.

Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi;Seo, Woong;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.89-99
    • /
    • 2016
  • One of the fundamental elements in developing mixed reality applications is to effectively analyze the environmental lighting information and apply it to image synthesis. In particular, interactive applications must process dynamically varying light sources in real time, reflecting them properly in the rendered results. Previous related works are often inappropriate for this because they are usually designed to synthesize photorealistic images, generating too many, often exponentially increasing, light sources or incurring too heavy a computational cost. In this paper, we present a fast light source estimation technique that searches on the fly for primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of found light sources approximately to a user-specified count. Thus, it can be used effectively in Phong-illumination-model-based direct illumination or in soft shadow generation through light sampling over area lights.
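
Estimating a bounded number of primary light sources from a fisheye environment image can be sketched as thresholding plus greedy clustering of bright pixels. This is an illustrative stand-in, not the paper's algorithm, and all names and parameters are invented:

```python
def estimate_lights(image, threshold, max_lights, merge_dist=2.0):
    """Threshold a grayscale environment map (list of rows), greedily
    merge nearby bright pixels into clusters, and return the weighted
    centroids of the brightest max_lights clusters."""
    bright = [(v, x, y) for y, row in enumerate(image)
              for x, v in enumerate(row) if v >= threshold]
    bright.sort(reverse=True)  # brightest pixels first
    clusters = []  # each cluster: [sum_v, sum_x*v, sum_y*v]
    for v, x, y in bright:
        for c in clusters:
            cx, cy = c[1] / c[0], c[2] / c[0]
            if (cx - x) ** 2 + (cy - y) ** 2 <= merge_dist ** 2:
                c[0] += v; c[1] += x * v; c[2] += y * v
                break
        else:
            clusters.append([v, x * v, y * v])
    clusters.sort(key=lambda c: -c[0])
    return [(c[1] / c[0], c[2] / c[0]) for c in clusters[:max_lights]]

# Hypothetical 4x4 luminance map with two isolated bright pixels.
env = [[0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 9]]
lights = estimate_lights(env, threshold=5, max_lights=2)
```

Capping the result at max_lights mirrors the paper's key property: the caller, not the scene, decides roughly how many light sources the renderer has to handle per frame.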

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the environment from only a few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model. Therefore, the corresponding contours across image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors of each corresponding point. The final parameters are then computed by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.
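
The angular error minimized in the first step can be illustrated with plain vector algebra: for a correct match and correct motion parameters, the back-projected ray in the second view lies in the epipolar plane spanned by the baseline and the ray in the first view. A toy sketch with made-up geometry, not the paper's data:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def angular_error(baseline, ray1, ray2):
    """Angle between ray2 and the epipolar plane spanned by the
    baseline (translation direction) and ray1; ~0 for a correct match."""
    n = cross(baseline, ray1)  # epipolar-plane normal
    s = dot(n, ray2) / (norm(n) * norm(ray2))
    return abs(math.asin(max(-1.0, min(1.0, s))))

# Toy geometry: baseline along x, both rays in the x-z plane -> zero error.
e0 = angular_error((1, 0, 0), (0.3, 0, 1), (-0.2, 0, 1))
# Perturb ray2 out of the plane -> positive error.
e1 = angular_error((1, 0, 0), (0.3, 0, 1), (-0.2, 0.1, 1))
```

Summing this error over all correspondences and minimizing over rotation and translation gives the coarse parameters that seed the second, contour-distance minimization step.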

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.125-132
    • /
    • 2008
  • Omnidirectional camera systems with a wide viewing angle are widely used in the surveillance and robotics areas. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have been established among the views beforehand. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translation and rotation, by using the epipolar constraint on the matched feature points. After choosing the points of interest adjacent to two or more contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inverse-projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.

A Hardware Design for Realtime Correction of a Barrel Distortion Using the Nearest Pixels on a Corrected Image (보정 이미지의 최 근접 좌표를 이용한 실시간 방사 왜곡 보정 하드웨어 설계)

  • Song, Namhun;Yi, Joonhwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.12
    • /
    • pp.49-60
    • /
    • 2012
  • In this paper, we propose a hardware design for the correction of barrel distortion using the nearest coordinates in the corrected image. Because it uses the nearest coordinates on the corrected image rather than adjacent coordinates on the distorted image, picture quality is improved over the whole image area and the staircase artifact in the outer region is eliminated. However, the bilinear interpolation design increases the required arithmetic operations. To solve this, a look-up table (LUT) structure is proposed and the coordinate rotation digital computer (CORDIC) algorithm is applied. Synthesis results using Design Compiler show that the design implementing the entire interpolation process in hardware achieves higher throughput than the previous design; in the rear-view camera case, the design combining the LUT with hardware is smaller than the all-hardware implementation.
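
The LUT idea, precomputing for every pixel of the corrected image its source coordinate in the distorted image and then interpolating bilinearly, can be sketched in software as follows (a simplified single-coefficient barrel model, not the paper's hardware design; k1 and the image sizes are hypothetical):

```python
def build_lut(width, height, k1):
    """For every pixel of the *corrected* image, precompute the source
    coordinate in the distorted image under a simple barrel model."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    lut = {}
    for y in range(height):
        for x in range(width):
            dx, dy = x - cx, y - cy
            r2 = dx * dx + dy * dy
            s = 1.0 + k1 * r2  # single-coefficient radial model
            lut[(x, y)] = (cx + dx * s, cy + dy * s)
    return lut

def bilinear(img, fx, fy):
    """Bilinearly interpolate img (list of rows of gray values) at (fx, fy)."""
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, len(img[0]) - 1), min(y0 + 1, len(img) - 1)
    ax, ay = fx - x0, fy - y0
    top = img[y0][x0] * (1 - ax) + img[y0][x1] * ax
    bot = img[y1][x0] * (1 - ax) + img[y1][x1] * ax
    return top * (1 - ay) + bot * ay

def correct(img, k1):
    """Apply the LUT, leaving pixels black when the source falls outside."""
    h, w = len(img), len(img[0])
    lut = build_lut(w, h, k1)
    out = [[0.0] * w for _ in range(h)]
    for (x, y), (fx, fy) in lut.items():
        if 0 <= fx < w - 1e-9 and 0 <= fy < h - 1e-9:
            out[y][x] = bilinear(img, fx, fy)
    return out
```

In the paper's hardware the per-pixel square roots and divisions behind such a mapping are what the LUT and CORDIC replace; this sketch only shows the data flow of the table-driven remap itself.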