• Title/Summary/Keyword: Omnidirectional Images


Catadioptric Omnidirectional Optical System Using a Spherical Mirror with a Central Hole and a Plane Mirror for Visible Light (중심 구멍이 있는 구면거울과 평면거울을 이용한 가시광용 반사굴절식 전방위 광학계)

  • Seo, Hyeon Jin;Jo, Jae Heung
    • Korean Journal of Optics and Photonics / v.26 no.2 / pp.88-97 / 2015
  • An omnidirectional optical system is a special optical system that captures, in real time, a panoramic image with an azimuthal angle of $360^{\circ}$ and an altitude angle corresponding to the upper and lower fields of view about the horizon. In this paper, for easy fabrication and compact size, we designed and fabricated a catadioptric omnidirectional optical system for the visible spectrum, consisting of a mirror part (a spherical mirror with a central hole, i.e. an obscuration, and a plane mirror) and an imaging lens part (three single spherical lenses and a spherical doublet). We evaluated its imaging performance by measuring the cut-off spatial frequency using automobile license plates, and the vertical field of view using an ISO 12233 chart. The resulting system has a vertical field of view from $+53^{\circ}$ to $-17^{\circ}$ and an azimuthal angle of $360^{\circ}$. It cleanly imaged the letters on a car's front license plate at an object distance of 3 meters, which corresponds to a cut-off spatial frequency of 135 lp/mm.

Updating Obstacle Information Using Object Detection in Street-View Images (스트리트뷰 영상의 객체탐지를 활용한 보행 장애물 정보 갱신)

  • Park, Seula;Song, Ahram
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.599-607 / 2021
  • Street-view images, which are omnidirectional scenes centered on specific locations along a road, can provide diverse obstacle information for pedestrians. Pedestrian network data for navigation services should reflect up-to-date obstacle information to ensure the mobility of pedestrians, including people with disabilities. In this study, an object detection model was trained on street-view images with a deep learning algorithm to detect bollards, a major obstacle in Seoul. A process was also proposed for updating the presence and number of bollards as obstacle attributes of crosswalk nodes, through spatial matching between the detected bollards and the pedestrian nodes. Missing crosswalk information can be updated concurrently by the same process. The proposed approach is well suited to crowdsourced data, as the model trained on street-view images can also be applied to photos taken with a smartphone while walking. With additional training on various obstacles captured in street-view images, it is expected to enable efficient updates of obstacle information on roads.
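The spatial-matching step described above can be sketched as a simple nearest-node assignment; the function name, coordinates, and distance threshold below are illustrative assumptions, not the paper's implementation:

```python
import math

def match_bollards_to_nodes(bollards, nodes, max_dist=10.0):
    """Assign each detected bollard to the nearest crosswalk node
    within max_dist (same units as the coordinates) and count them."""
    counts = {node_id: 0 for node_id, _ in nodes}
    for bx, by in bollards:
        best_id, best_d = None, max_dist
        for node_id, (nx, ny) in nodes:
            d = math.hypot(bx - nx, by - ny)
            if d < best_d:
                best_id, best_d = node_id, d
        if best_id is not None:
            counts[best_id] += 1
    return counts

nodes = [("crosswalk_1", (0.0, 0.0)), ("crosswalk_2", (50.0, 0.0))]
bollards = [(1.0, 1.0), (2.0, -1.0), (49.0, 0.5)]
print(match_bollards_to_nodes(bollards, nodes))
# → {'crosswalk_1': 2, 'crosswalk_2': 1}
```

A real pipeline would match in a projected coordinate system and handle ties, but the per-node count update follows the same pattern.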

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.4 / pp.487-498 / 2022
  • A depth map is an image that encodes distance information of 3D space on a 2D plane, and it is used in various 3D vision tasks. Many existing depth estimation studies mainly use narrow-FoV images, in which a significant portion of the scene is lost. In this paper, we propose a technique for generating 360° omnidirectional RGBD images from a sparse set of narrow-FoV images. The proposed generative adversarial network-based image generation model estimates the relative FoV of the inputs within the entire panoramic image from a small number of non-overlapping images and produces a 360° RGB image and depth map simultaneously. It also shows improved performance by configuring the network to reflect the spherical characteristics of 360° images.
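The "spherical characteristics" of a 360° image referred to above arise from the equirectangular mapping between pixels and directions on the viewing sphere. A minimal sketch of that standard mapping (the resolution and function names are assumptions for illustration):

```python
import math

def pixel_to_spherical(x, y, width, height):
    """Map an equirectangular pixel (x, y) to spherical angles:
    longitude in [-pi, pi), latitude in [-pi/2, pi/2]."""
    lon = (x / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (y / height) * math.pi
    return lon, lat

def spherical_to_unit_vector(lon, lat):
    """Convert spherical angles to a 3D unit vector on the viewing sphere."""
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

lon, lat = pixel_to_spherical(960, 480, 1920, 960)  # image center
print(lon, lat)  # → 0.0 0.0
```

The severe horizontal stretching near the poles of this mapping is one reason ordinary planar convolutions underperform on 360° images.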

Watermark Extraction of Omnidirectional Images Using CNN (CNN을 이용한 전방위 영상의 워터마크 추출)

  • Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.210-212 / 2019
  • This paper proposes a method for extracting a watermark from an omnidirectional image using a CNN. The network's inputs are regions cropped from the omnidirectional image around SIFT feature points; the network compensates for the distortion introduced when the omnidirectional image is generated and classifies the watermark. In addition, the training set includes, beyond the original images, images subjected to JPEG compression, Gaussian noise, Gaussian blurring, and sharpening attacks, so that the network learns robustness against these attacks. The validity of the proposed method is confirmed by comparing the watermark extracted by the trained network with the watermark extracted algorithmically.
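The patch-cropping step around feature points can be sketched as follows; this illustrates only the cropping on a plain 2D array, with SIFT detection and the CNN itself omitted, and all names are assumptions rather than the authors' code:

```python
def crop_patches(image, keypoints, size=8):
    """Crop size x size patches centered on each keypoint (x, y),
    skipping keypoints too close to the image border."""
    h, w = len(image), len(image[0])
    half = size // 2
    patches = []
    for x, y in keypoints:
        if half <= x <= w - half and half <= y <= h - half:
            patch = [row[x - half:x + half] for row in image[y - half:y + half]]
            patches.append(patch)
    return patches

# 16x16 toy grayscale image; one interior and one border keypoint.
image = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
patches = crop_patches(image, [(8, 8), (1, 1)], size=8)
print(len(patches))  # → 1 (the border keypoint is skipped)
```

Each surviving patch would then be fed to the classifier as one training or inference sample.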


A Study on the Development of Camera Gimbal System for Unmanned Flight Vehicle with VR 360 Degree Omnidirectional Photographing (360도 VR 촬영을 위한 무인 비행체용 카메라 짐벌 시스템 개발에 관한 연구)

  • Jung, Nyum;Kim, Sang-Hoon
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.8 / pp.767-772 / 2016
  • The purpose of this paper is to develop a gimbal system installed on a UFV (unmanned flight vehicle) for 360-degree VR video. In particular, even if the UFV rotates in any direction, the camera orientation is kept fixed using a gyro sensor to minimize shaking, so that the camera system remains stable for taking $360^{\circ}$ panoramic VR images.
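The abstract does not specify the stabilization algorithm; a complementary filter is one common way to fuse gyro rates with a drift-free reference angle in gimbal controllers, sketched here under assumed parameter values:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro angular rate (deg/s) with a reference angle (deg) from
    the accelerometer: the gyro term tracks fast motion, while the
    small accelerometer term corrects long-term drift."""
    angle = accel_angles[0]
    history = [angle]
    for rate, accel in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * accel
        history.append(angle)
    return history

# A stationary sensor with a small constant gyro bias: the estimate
# stays near 0 deg instead of drifting with the integrated bias.
est = complementary_filter([0.5] * 1000, [0.0] * 1000)
print(abs(est[-1]) < 1.0)  # → True
```

With `alpha = 0.98` and a 0.5 deg/s bias, the estimate settles near 0.25 deg, whereas raw integration would drift by 5 deg over the same 10 s.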

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on extracting obstacle features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but they enable real-time image processing for mobile robots because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, yielding faster processing than previous systems. The core of the proposed algorithm can be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through a fish-eye lens mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, we estimate the robot position with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and create a map. The reliability of the mapping algorithm is confirmed by comparing maps obtained with the proposed algorithm against real maps.
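The floor-removal step above can be sketched as a histogram filter over radial distances from the image center; the binning scheme and names are assumptions for illustration, not the authors' exact filter:

```python
import math
from collections import Counter

def remove_floor_points(points, center, bin_width=10.0):
    """Histogram-filter sketch: bin feature points by radial distance
    from the image center and drop the dominant bin, which in a
    downward-facing fisheye view is assumed to be the floor surface."""
    cx, cy = center
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    bins = Counter(int(r // bin_width) for r in radii)
    floor_bin = bins.most_common(1)[0][0]
    return [p for p, r in zip(points, radii) if int(r // bin_width) != floor_bin]

floor = [(100 + i, 100) for i in range(5)]   # radii 0-4: dominant bin
obstacle = [(160, 100), (100, 165)]          # radii 60 and 65
kept = remove_floor_points(floor + obstacle, center=(100, 100))
print(kept)  # → [(160, 100), (100, 165)]
```

Only the surviving points would then be labeled as obstacle candidates and tracked with optical flow.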

Morphological study on abdominal organs of healthy cats using omnidirectional radiography and fluoroscopy (다각도 방사선촬영 및 투시법을 이용한 정상 고양이 장기의 형태학적 연구)

  • Shin, Sa-kyeng;Hirose, Tsuneo;Sato, Motoyoshi;Miyahara, Kazuro
    • Korean Journal of Veterinary Research / v.36 no.4 / pp.949-966 / 1996
  • To establish the most effective method of radiography and fluoroscopy, the abdominal organs of cats were investigated from omnidirectional angles, with the center of the body as the axis, using an omnidirectional protective-shielding X-ray system and a $360^{\circ}$ rotary restraint unit for small animals. The organs examined were the diaphragm, liver, stomach, colon, spleen, and kidney. The results obtained in the present study were as follows: 1. Regardless of whether gas was present in the stomach, it was feasible to distinguish the left and right crura in the lumbar portion of the diaphragm in oblique projections inclined over $30^{\circ}$ and under $90^{\circ}$ from the lateral projection. 2. Outlines of the exterior left lobe and the interior right lobe of the liver were observed in oblique images inclined up to $60^{\circ}$ from the lateral image, while that of the exterior right lobe was noted in oblique images inclined up to $60^{\circ}$ from the ventrodorsal-dorsoventral images. 3. Gas had to be present in the stomach for detailed morphological observation of the stomach; it was most clearly observed in the right $30^{\circ}$ ventral-left dorsal oblique projection ($120^{\circ}$ image) and the left $60^{\circ}$ dorsal-right ventral oblique projection ($300^{\circ}$ image). 4. The morphology of the colon was observable in detail in oblique projections inclined over $30^{\circ}$ from the lateral projection. 5. To observe the whole spleen, images were required from the ventrodorsal projection ($90^{\circ}$ image) to the right $60^{\circ}$ ventral-left dorsal oblique projection ($150^{\circ}$ image), as well as from the dorsoventral projection ($270^{\circ}$ image) to the left-right lateral projection ($0^{\circ}$ image). 6. The dorsal and ventral sides of the kidney were observable in oblique images inclined $30^{\circ}$ from the lateral image. 7. Considering the above findings collectively, the results of the present study should be useful for the analysis of abnormalities in each organ of the cat.


A Study on the Photo-realistic 3D City Modeling Using the Omnidirectional Image and Digital Maps (전 방향 이미지와 디지털 맵을 활용한 3차원 실사 도시모델 생성 기법 연구)

  • Kim, Hyungki;Kang, Yuna;Han, Soonhung
    • Korean Journal of Computational Design and Engineering / v.19 no.3 / pp.253-262 / 2014
  • A 3D city model, consisting of 3D building models together with their geospatial positions and orientations, is becoming a valuable resource in virtual reality, navigation systems, civil engineering, etc. The purpose of this research is to propose a new framework for generating a 3D city model that satisfies the visual and physical requirements of ground-oriented simulation systems. At the same time, the framework should be automatic and cost-effective, which facilitates the usability of the proposed approach. To that end, we suggest a framework that leverages a mobile mapping system, which automatically gathers high-resolution images along with supplementary sensor information such as the position and direction of each image. To resolve the problems caused by sensor noise and the large number of occlusions, fusion with digital map data is used. This paper describes the overall framework, its major processing steps, and the recommended or required techniques for each step.

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are bulky and slow at computing depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculated fusion points from the planar coordinates of obstacles obtained by the two-dimensional laser scanner and the outlines of obstacles obtained by the omni-directional image sensor, which acquires the surrounding view all at once. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
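Fusing 2D laser points with a fisheye image requires projecting points into fisheye pixel coordinates. A minimal sketch under the common equidistant fisheye model $r = f\theta$, with focal length and image center chosen arbitrarily for illustration (the paper's calibration is not given):

```python
import math

def project_equidistant(point, f=300.0, center=(640.0, 640.0)):
    """Project a 3D point (camera frame, z along the optical axis) into
    a fisheye image using the equidistant model r = f * theta."""
    x, y, z = point
    theta = math.atan2(math.hypot(x, y), z)   # angle from the optical axis
    phi = math.atan2(y, x)                    # azimuth in the image plane
    r = f * theta
    cx, cy = center
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A point on the optical axis lands exactly at the image center.
u, v = project_equidistant((0.0, 0.0, 2.0))
print(u, v)  # → 640.0 640.0
```

Given such a projection, each laser-detected obstacle point can be associated with the obstacle outline pixels observed in the fisheye image to form a fusion point.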

Image Analysis for the Simultaneous Measurement of Underwater Flow Velocity and Direction (수중 유속 및 유향의 동시 측정을 위한 이미지 분석 기술에 관한 연구)

  • Dongmin Seo;Sangwoo Oh;Sung-Hoon Byun
    • Journal of Sensor Science and Technology / v.32 no.5 / pp.307-312 / 2023
  • To measure the flow velocity and direction in the near field of an unmanned underwater vehicle, an optical measurement unit containing an image sensor and a phosphor-integrated pillar that mimics the neuromasts of a fish was constructed. To analyze pillar movement, which changes with fluid flow, fluorescence image analysis was conducted. To analyze the flow velocity, mean force analysis, which could determine the relationship between the light intensity of a fluorescence image and an external force, and length-force analysis, which could determine the distance between the center points of two fluorescence images, were employed. Additionally, angle analysis that can determine the angles at which pixels of a digital image change was selected to analyze the direction of fluid flow. The flow velocity analysis results showed a high correlation of 0.977 between the external force and the light intensity of the fluorescence image, and in the case of direction analysis, omnidirectional movement could be analyzed. Through this study, we confirmed the effectiveness of optical flow sensors equipped with phosphor-integrated pillars.
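The direction and magnitude analysis from pillar deflection can be sketched as a centroid-displacement computation with `atan2`; the function, coordinates, and units below are illustrative assumptions, not the paper's processing chain:

```python
import math

def flow_from_centroids(rest, deflected, pixels_per_unit=1.0):
    """Estimate flow direction (degrees, counterclockwise from +x) and
    deflection magnitude from the centroids of two fluorescence images:
    the pillar at rest and the pillar deflected by the flow."""
    dx = (deflected[0] - rest[0]) / pixels_per_unit
    dy = (deflected[1] - rest[1]) / pixels_per_unit
    magnitude = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx)) % 360.0
    return magnitude, direction

mag, ang = flow_from_centroids((100.0, 100.0), (103.0, 104.0))
print(mag, ang)  # magnitude 5.0, angle ≈ 53.13 degrees
```

Because `atan2` covers the full circle, this kind of analysis naturally yields the omnidirectional direction estimate the abstract describes; a calibration curve would map the magnitude to flow velocity.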