• Title/Summary/Keyword: $360^{\circ}$ Panoramic Image


A Study on 360° Image Production Method for VR Image Contents (VR 영상 콘텐츠 제작에 유용한 360도 이미지 제작 방법에 관한 연구)

  • Guo, Dawei; Chung, Jeanhun
    • Journal of Digital Convergence / v.15 no.12 / pp.543-548 / 2017
  • A $360^{\circ}$ panoramic image can give viewers an unprecedented visual experience, and there are many different ways to produce one. In this paper, we introduce two easy and effective methods: the first creates a $360^{\circ}$ panoramic image from 48 photographs, and the second from 6 photographs. We compare the two methods and discuss which one suits which users. These simple production methods show that VR content design has become easy and popular: even non-professionals can make $360^{\circ}$ panoramic images, which promotes the VR image content industry.
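As an illustration of the kind of multi-photo stitching workflow this paper describes, here is a minimal sketch using OpenCV's high-level Stitcher API. The file names, photo count, and shooting setup are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal panorama-stitching sketch (assumed workflow, not the paper's exact method).
# Requires: pip install opencv-python
import cv2

# Hypothetical input: overlapping photos shot on a rotating tripod head,
# e.g. 6 wide-angle shots or 48 narrower ones covering the full view.
paths = [f"shot_{i:02d}.jpg" for i in range(6)]
images = [cv2.imread(p) for p in paths]
assert all(img is not None for img in images), "missing input photo"

# OpenCV's Stitcher handles feature matching, warping, and blending internally.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", pano)
else:
    print(f"stitching failed with status code {status}")
```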

Catadioptric Omnidirectional Optical System Using a Spherical Mirror with a Central Hole and a Plane Mirror for Visible Light (중심 구멍이 있는 구면거울과 평면거울을 이용한 가시광용 반사굴절식 전방위 광학계)

  • Seo, Hyeon Jin; Jo, Jae Heung
    • Korean Journal of Optics and Photonics / v.26 no.2 / pp.88-97 / 2015
  • An omnidirectional optical system is a special optical system that images in real time a panoramic scene covering a $360^{\circ}$ azimuthal angle and the altitude angles corresponding to the fields of view above and below the horizon. In this paper, for easy fabrication and compact size, we designed and fabricated a catadioptric omnidirectional optical system for visible light, consisting of a mirror part (a spherical mirror with a central hole, i.e. a central obscuration, and a plane mirror) and an imaging lens part (three spherical singlet lenses and a spherical doublet). We evaluated its imaging performance by measuring the cut-off spatial frequency using automobile license plates, and the vertical field of view using an ISO 12233 chart. The resulting system has a vertical field of view from $+53^{\circ}$ to $-17^{\circ}$ and an azimuthal angle of $360^{\circ}$. It clearly imaged the letters on a car's front license plate at an object distance of 3 m, which corresponds to a cut-off spatial frequency of 135 lp/mm.
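Catadioptric systems like this one form a ring-shaped (donut) image on the sensor, which is typically unwrapped into a rectangular panorama in software. The sketch below shows one standard polar-to-rectangular unwarping with NumPy/OpenCV; the optical center, ring radii, and output size are illustrative assumptions, not values from the paper.

```python
# Polar-to-rectangular unwarping of a catadioptric (donut) image.
# A standard post-processing step for mirror-based omnidirectional optics;
# the center and radius values here are illustrative assumptions.
import cv2
import numpy as np

donut = cv2.imread("omni_frame.jpg")      # hypothetical captured frame
cx, cy = 640.0, 480.0                     # assumed optical center (pixels)
r_in, r_out = 100.0, 450.0                # assumed inner/outer ring radii

out_w, out_h = 1440, 350                  # panorama size: 0.25 deg/px in azimuth
theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
radius = np.linspace(r_out, r_in, out_h)  # top row = outer (horizon) side

# Build remap tables: each output pixel (row, col) samples the source ring.
map_x = (cx + radius[:, None] * np.cos(theta[None, :])).astype(np.float32)
map_y = (cy + radius[:, None] * np.sin(theta[None, :])).astype(np.float32)

panorama = cv2.remap(donut, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("unwrapped_panorama.jpg", panorama)
```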

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok; Lim, Mee-Seub; Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information involved, which requires extensive storage and processing time. To deal with these challenges, we propose using features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows: First, we use an omni-directional camera that captures instantaneous $360^{\circ}$ panoramic images around the robot. Second, nodes around the robot are identified from the correlation coefficients of the Circular Horizontal Line (CHL) between each landmark image and the currently captured image. Third, the robot position is determined from the node locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are pre-selected using color information, and the correlation values are calculated using Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
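The FFT step exploits the fact that circular cross-correlation of two 1-D signatures, here the circular horizontal line of each image, can be evaluated for all cyclic shifts at once in O(n log n). Below is a minimal NumPy sketch of that general technique with made-up signature arrays; it is not the paper's exact pipeline.

```python
# Circular cross-correlation of two 1-D image signatures via FFT.
# Sketch of the general technique: correlating the Circular Horizontal Line
# (CHL) of a stored landmark against the current panoramic image, over all
# 360 possible rotations at once.
import numpy as np

def circular_correlation(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Normalized circular cross-correlation for all cyclic shifts of b."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    # Correlation theorem: corr(a, b) = IFFT(FFT(a) * conj(FFT(b)))
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return corr / len(a)

# Hypothetical CHL signatures sampled once per degree of azimuth.
landmark_chl = np.random.rand(360)
current_chl = np.roll(landmark_chl, 42) + 0.05 * np.random.rand(360)

corr = circular_correlation(current_chl, landmark_chl)
shift = int(np.argmax(corr))   # best-matching relative heading (degrees)
score = float(corr[shift])     # similarity used to rank candidate nodes
print(f"heading offset: {shift} deg, correlation: {score:.3f}")
```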

Global Localization of Mobile Robots Using Omni-directional Images (전방위 영상을 이용한 이동 로봇의 전역 위치 인식)

  • Han, Woo-Sup; Min, Seung-Ki; Roh, Kyung-Shik; Yoon, Suk-June
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.4 / pp.517-524 / 2007
  • This paper presents a global localization method based on circular correlation of omni-directional images. Localization of a mobile robot, especially indoors, is a key component in the development of useful service robots. Although stereo vision is widely used for localization, its performance is limited by computational complexity and a narrow view angle. To compensate for these shortcomings, we utilize a single omni-directional camera that captures instantaneous $360^{\circ}$ panoramic images around the robot. Nodes near the robot are identified from the correlation coefficients of the CHL (Circular Horizontal Line) between each landmark image and the currently captured image. After finding candidate nodes, the robot moves to the nearest node based on the correlation values and the node positions. To accelerate computation, the correlation values are calculated using Fast Fourier Transforms. Experiments in a real home environment demonstrate the feasibility of the method.
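Building on the same FFT correlation idea, global localization here amounts to ranking the stored map nodes by their best CHL correlation with the current view and moving toward the winner. A self-contained sketch follows; the node names and signatures are invented for illustration.

```python
# Ranking stored map nodes by CHL correlation to find the nearest node.
# Illustrative sketch only; node names and signatures are invented.
import numpy as np

def best_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Peak normalized circular correlation over all cyclic shifts."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return float(corr.max() / len(a))

rng = np.random.default_rng(0)
# Hypothetical map: one CHL signature stored per node in the home.
node_signatures = {name: rng.random(360) for name in ("kitchen", "hall", "door")}
# Current view: the robot is near the hall node, seen from a rotated heading.
current = np.roll(node_signatures["hall"], 120) + 0.05 * rng.random(360)

scores = {name: best_correlation(current, sig) for name, sig in node_signatures.items()}
nearest = max(scores, key=scores.get)
print(f"nearest node: {nearest}, scores: {scores}")
```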

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won; Choi, Kyung Sik; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, using Lucas-Kanade optical flow on warped images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time, view-based recognition because all the information around the robot is obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether captured with a mirror-based camera or assembled from multiple camera images, is essential because it is difficult to extract information from the raw image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-mounted fish-eye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot's position and heading with an ego-motion method that uses the vector directions and a vanishing point obtained by RANSAC. We confirmed the reliability of the algorithm by comparing the positions and angles it estimates against measurements from a Global Vision Localization System.
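The motion-vector extraction step is standard sparse Lucas-Kanade tracking between consecutive frames. A generic OpenCV sketch is given below; the file names and tracker parameters are illustrative assumptions, not the authors' configuration.

```python
# Sparse Lucas-Kanade optical flow between two consecutive frames.
# Generic sketch of the motion-vector extraction step; file names and
# parameters are illustrative, not the authors' configuration.
import cv2
import numpy as np

prev = cv2.imread("warped_t0.png", cv2.IMREAD_GRAYSCALE)  # preprocessed fisheye frame
curr = cv2.imread("warped_t1.png", cv2.IMREAD_GRAYSCALE)

# Pick corner features in the previous frame to track.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Track them into the current frame with pyramidal Lucas-Kanade.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                           winSize=(21, 21), maxLevel=3)

good_old = p0[status.flatten() == 1].reshape(-1, 2)
good_new = p1[status.flatten() == 1].reshape(-1, 2)
vectors = good_new - good_old          # per-feature motion vectors

# The dominant vector direction (e.g. via RANSAC on the flow lines) feeds
# the ego-motion estimate; here we just report the mean flow as a stand-in.
print("mean flow (dx, dy):", vectors.mean(axis=0))
```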

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Dai, Yanyan; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on extracting obstacle features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fish-eye lenses mounted on the robot. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but they enable real-time image processing for mobile robots because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images; the proposed algorithm corrects only the obstacle feature points, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-mounted fish-eye lenses. Second, we remove floor-surface feature points with a histogram filter and label the extracted obstacle candidates. Third, we estimate obstacle locations from the LKOF motion vectors. Finally, the robot position is estimated with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and a map is created. We confirm the reliability of the mapping algorithm by comparing the maps obtained with the proposed algorithm against real maps.
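To make the final EKF step concrete, here is a minimal sketch of a single EKF measurement update for a 2-D robot pose given a range-bearing observation of a known obstacle. The state and measurement models are simplifying assumptions chosen for illustration, not the paper's actual formulation.

```python
# Minimal EKF update for a 2-D robot pose from a range-bearing observation
# of a known obstacle. Generic sketch; the state and measurement models are
# simplifying assumptions, not the paper's formulation.
import numpy as np

def ekf_update(x, P, z, obstacle, R):
    """x = [px, py, theta]; z = [range, bearing] to a known obstacle."""
    dx, dy = obstacle[0] - x[0], obstacle[1] - x[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    # Predicted measurement and its Jacobian w.r.t. the state.
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Toy usage: true pose near (1, 2, 0.1), obstacle at (4, 6).
x = np.array([0.8, 2.2, 0.0])        # current pose estimate
P = np.diag([0.5, 0.5, 0.1])         # estimate covariance
R = np.diag([0.05, 0.02])            # measurement noise
z = np.array([5.0, 0.82])            # observed range/bearing (from the LKOF step)
x, P = ekf_update(x, P, z, obstacle=(4.0, 6.0), R=R)
print("updated pose:", x)
```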