• Title/Summary/Keyword: 화각 (angle of view)

Search results: 89

3D Map Construction from Spherical Video using Fisheye ORB-SLAM Algorithm (어안 ORB-SLAM 알고리즘을 사용한 구면 비디오로부터의 3D 맵 생성)

  • Kim, Ki-Sik;Park, Jong-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.1080-1083 / 2020
  • This paper proposes a SLAM system based on spherical panoramas. The wider the field of view captured by a vision SLAM system, the faster it can grasp its surroundings from fewer frames, and the larger amount of surrounding data enables more stable estimation. Spherical panoramic video has the widest angle of view of any imagery; because every direction can be used, the 3D map can be extended more quickly than with fisheye video. Among existing systems, those based on fisheye video accept only the frontal wide angle, so their coverage is narrower than when spherical panoramas are used as input. This paper proposes a method that extends an existing fisheye-video-based SLAM system to the domain of spherical panoramas. The proposed method computes exactly the parameters required by the camera projection model and, through a dual fisheye model, exploits the full field of view without loss.
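At the core of any spherical-panorama SLAM front end is a camera model that maps image pixels to bearing rays on the unit sphere, so that features from every direction can feed the pose estimator. A minimal sketch for an equirectangular (ERP) frame; the function name and axis conventions are illustrative, not taken from the paper:

```python
import math

def erp_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit bearing ray.

    Longitude spans [-pi, pi) across the width and latitude
    [pi/2, -pi/2] down the height, so every pixel yields a valid
    direction: the full 360-degree field of view the paper exploits.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Every pixel produces a valid ray, which is what lets a spherical input extend the map faster than a front-facing fisheye image whose rays cover only part of the sphere.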

Estimation of Object Position from Multiple Spherical Images (다중 구면 영상으로부터 물체의 3D 위치 추정)

  • Hong, Cheol-gi;Park, Jong-Seung
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.570-573 / 2020
  • Because of its nature, a pinhole camera can capture only part of the whole space, so 3D reconstruction that considers the entire space requires far more data than spherical images do. This paper proposes a method for estimating the actual 3D positions of objects captured in multiple spherical images. Unlike stereo vision, where the two cameras are closely spaced, the proposed method places several cameras at wide intervals to overcome occlusion by obstacles. Since the angle of view of a spherical camera covers the entire space, correspondences over the whole field can be computed even when the capture intervals and camera rotation angles are large. Experimental results show estimates close to the actual positions of the objects appearing in the spherical images.
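The position estimation described above reduces to triangulating a point from bearing rays: each spherical camera contributes a ray (its center plus a unit direction toward the object), and the point minimizing the summed squared distance to all rays solves a 3×3 linear system. A self-contained sketch of the classic midpoint method — names are illustrative, and the paper's actual estimator may differ:

```python
import math

def triangulate(centers, dirs):
    """Least-squares point closest to a set of rays (midpoint method).

    Ray i starts at centers[i] with direction dirs[i]; solves
    sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) c_i for p.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for c, d in zip(centers, dirs):
        n = math.sqrt(sum(x * x for x in d))
        d = [x / n for x in d]  # normalize the bearing direction
        for r in range(3):
            for s in range(3):
                m = (1.0 if r == s else 0.0) - d[r] * d[s]
                A[r][s] += m
                b[r] += m * c[s]
    # solve the 3x3 system A p = b by Gaussian elimination with pivoting
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for s in range(i, 3):
                A[r][s] -= f * A[i][s]
            b[r] -= f * b[i]
    p = [0.0] * 3
    for i in range(2, -1, -1):
        p[i] = (b[i] - sum(A[i][s] * p[s] for s in range(i + 1, 3))) / A[i][i]
    return p
```

With widely spaced cameras, as in the paper, the rays meet at large angles, which makes this system well conditioned and the estimate robust to small bearing errors.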

Omnidirectional Environmental Projection Mapping with Single Projector and Single Spherical Mirror (단일 프로젝터와 구형 거울을 활용한 전 방향프로젝션 시스템)

  • Kim, Bumki;Lee, Jungjin;Kim, Younghui;Jeong, Seunghwa;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.21 no.1 / pp.1-11 / 2015
  • Researchers have developed virtual reality environments to provide audiences with more visually immersive experiences than previously possible. One of the most popular solutions for building an immersive VR space is the multi-projection technique. However, using multiple projectors requires a large space, high cost, and accurate geometric calibration among the projectors. This paper presents a novel omnidirectional projection system with a single projector and a single spherical mirror. We designed a simple and intuitive calibration system to determine the shape of the environment and the relative position of the mirror and projector. For successful image projection, our optimized omnidirectional image generation step resolves the image distortion produced by the spherical mirror and the calibration problem produced by unknown parameters such as the shape of the environment and the relative position between the mirror and the projector. Additionally, focus correction is performed to improve projection quality. The experimental results show that our method can generate an optimized image from a normal panoramic image for omnidirectional projection in a rectangular space.
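The geometry behind projecting via a spherical mirror is the mirror reflection law: each projector ray is intersected with the sphere and reflected about the surface normal before it lands on the room. A minimal sketch under an ideal-mirror assumption (function name and conventions are illustrative, not the paper's calibration pipeline):

```python
import math

def reflect_off_sphere(origin, direction, center, radius):
    """Intersect a ray with a sphere and return the reflected ray.

    Returns (hit_point, reflected_dir), or None if the ray misses.
    The reflected direction follows r = d - 2 (d . n) n, with n the
    outward sphere normal at the hit point.
    """
    norm = math.sqrt(sum(x * x for x in direction))
    d = [x / norm for x in direction]
    oc = [origin[i] - center[i] for i in range(3)]
    # |o + t d - c|^2 = R^2  ->  t^2 + b t + c0 = 0 (since |d| = 1)
    b = 2.0 * sum(d[i] * oc[i] for i in range(3))
    c0 = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c0
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0  # nearest intersection
    if t < 0.0:
        return None
    hit = [origin[i] + t * d[i] for i in range(3)]
    n = [(hit[i] - center[i]) / radius for i in range(3)]
    dn = sum(d[i] * n[i] for i in range(3))
    refl = [d[i] - 2.0 * dn * n[i] for i in range(3)]
    return hit, refl
```

Tracing every projector pixel this way yields the warped image the projector must emit so that, after reflection, the picture appears undistorted on the surrounding walls.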

Optical Design of a Reflecting Omnidirectional Vision System for Long-wavelength Infrared Light (원적외선용 반사식 전방위 비전 시스템의 광학 설계)

  • Ju, Yun Jae;Jo, Jae Heung;Ryu, Jae Myung
    • Korean Journal of Optics and Photonics / v.30 no.2 / pp.37-47 / 2019
  • A reflecting omnidirectional optical system with four spherical and aspherical mirrors, for use with long-wavelength infrared (LWIR) light for night surveillance, is proposed. It is designed to include a collecting pseudo-Cassegrain reflector and an imaging inverse pseudo-Cassegrain reflector, and the design process and performance analysis are reported in detail. The half-field of view (HFOV) and F-number of this optical system are 40-110° and 1.56, respectively. For LWIR imaging, the image size must be similar to that of the LWIR microbolometer sensor; as a result, the image should be 5.9 mm × 5.9 mm if possible. The image size ratio for an HFOV range of 40° to 110° after optimizing the design is 48.86%. At a spatial frequency of 20 lp/mm and an HFOV of 110°, the modulation transfer function (MTF) for LWIR is 0.381. Additionally, the cumulative probability of tolerance for LWIR at a spatial frequency of 20 lp/mm is 99.75%. Athermalization analysis over the temperature range of -32 °C to +55 °C shows that the secondary mirror of the inverse pseudo-Cassegrain reflector can function as a compensator, alleviating MTF degradation with rising temperature.

Large-area High-speed Single Photodetector Based on the Static Unitary Detector Technique for High-performance Wide-field-of-view 3D Scanning LiDAR (고성능 광각 3차원 스캐닝 라이다를 위한 스터드 기술 기반의 대면적 고속 단일 광 검출기)

  • Munhyun Han;Bongki Mheen
    • Korean Journal of Optics and Photonics / v.34 no.4 / pp.139-150 / 2023
  • Despite the variety of light detection and ranging (LiDAR) architectures, it is very difficult to achieve long-range detection and high resolution in both the vertical and horizontal directions together with a wide field of view (FOV). A scanning architecture is advantageous for high-performance LiDAR that must attain long-range detection and high resolution in both directions. However, a large-area photodetector (PD), which is disadvantageous for detection speed, is essential for securing a wide FOV. We therefore propose a PD based on the static unitary detector (STUD) technique, which can operate multiple small-area PDs as a single large-area PD at high speed. The proposed InP/InGaAs STUD PIN-PD is fabricated in various types, including a 1,256 μm × 949 μm device composed of 32 small-area PDs of 1,256 μm × 19 μm each. In addition, we measure and analyze the noise and signal characteristics of the LiDAR receiving board, as well as the performance and sensitivity of the various STUD PD types. Finally, a LiDAR receiving board using the STUD PD is applied to a 3D scanning LiDAR prototype based on a 1.5-μm master oscillator power amplifier laser. This LiDAR precisely detects objects more than 50 m away and simultaneously acquires high-resolution 3D images of 320 × 240 pixels with a diagonal FOV of 32.6 degrees.
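The ranging figure quoted above (objects more than 50 m away) follows from the time-of-flight principle shared by pulsed scanning LiDARs: the pulse travels to the target and back, so range is half the round-trip time multiplied by the speed of light. A trivial sketch of that relation (not code from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_ns):
    """Target range from the measured round-trip time of a laser pulse.

    The pulse covers the distance twice (out and back), hence the
    factor of 1/2; a 50 m target corresponds to roughly 333 ns.
    """
    return C * round_trip_ns * 1e-9 / 2.0
```

The detector requirement follows directly: resolving decimeter-scale range differences means resolving sub-nanosecond timing, which is why a large-area PD that still responds at high speed is the crux of the paper.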

Motion-based ROI Extraction with a Standard Angle-of-View from High Resolution Fisheye Image (고해상도 어안렌즈 영상에서 움직임기반의 표준 화각 ROI 검출기법)

  • Ryu, Ar-Chim;Han, Kyu-Phil
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.395-401 / 2020
  • In this paper, a motion-based ROI extraction algorithm for high-resolution fisheye images is proposed for multi-view monitoring systems. Fisheye cameras are now widely used because of their wide angle of view, and they typically provide lens correction as well as various viewing modes. However, since the distortion-free angle of conventional algorithms is quite narrow owing to the severe distortion ratio, there are many unintended dead areas, and the algorithms require considerable computation time to find undistorted coordinates. The proposed algorithm therefore adopts image decimation and motion detection methods that can extract an undistorted ROI image with a standard angle of view for fast, intelligent surveillance systems. In addition, a mesh-type ROI is presented to reduce the lens correction time, so that this independent ROI scheme can be parallelized to maximize processor utilization.
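Extracting an undistorted, standard angle-of-view ROI amounts to mapping each pixel of the desired perspective view back into the fisheye image and sampling there. A sketch under the common equidistant fisheye model (image radius r = f·θ); the paper's actual lens model and parameters may differ:

```python
import math

def perspective_to_fisheye(x, y, f_persp, f_fish):
    """Map a pixel (x, y) of an undistorted perspective view (origin at
    the optical center) to equidistant-fisheye image coordinates.

    The perspective ray angle is theta = atan(r / f_persp); under the
    equidistant model the fisheye radius is r_fish = f_fish * theta.
    """
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    theta = math.atan2(r, f_persp)
    r_fish = f_fish * theta
    return (x / r * r_fish, y / r * r_fish)
```

Evaluating this mapping only on a coarse grid and interpolating in between, in the spirit of the paper's mesh-type ROI, avoids the per-pixel trigonometry that dominates conventional correction time.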

Optical Lens Design of Image Sensor (이미지 센서용 광학렌즈설계)

  • Lee, Chan-Ku;Lee, Su-Dae;Joung, Maeng-Sig
    • Journal of Korean Ophthalmic Optics Society / v.8 no.2 / pp.99-103 / 2003
  • This paper presents lens optimization of resolution and distortion for a four-element lens design. To obtain a compact optical system, we used a telephoto-type lens composed of a positive and a negative power element instead of a retro-focus lens. The design specifications are a focal length of 7.2 mm, an F-number of 2.8, and a field angle of 54.7°. The MTF values are higher than 0.5 at spatial frequencies up to 110 lp/mm for all of the designed object heights. The design is expected to fulfill all the requirements of a digital still camera lens and, because of its low profile, is especially suited to building low-cost, compact digital cameras.
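The quoted specifications are tied together by two elementary first-order relations: the entrance-pupil diameter follows from the F-number definition N = f/D, and the half-diagonal image height of a distortion-free lens is h = f·tan(ω), with 2ω the full field angle. A quick sketch of both (not code from the paper):

```python
import math

def aperture_diameter_mm(focal_mm, f_number):
    """Entrance-pupil diameter from the F-number definition N = f / D."""
    return focal_mm / f_number

def image_half_diagonal_mm(focal_mm, full_field_deg):
    """Half-diagonal image height of a distortion-free lens:
    h = f * tan(full_field / 2)."""
    return focal_mm * math.tan(math.radians(full_field_deg / 2.0))
```

For f = 7.2 mm, F/2.8, and a 54.7° field angle, these give an aperture of about 2.6 mm and a half-diagonal image height of about 3.7 mm, consistent with a small image sensor.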


Optical System Design for CCTV Camera (CCTV 카메라용 광학계 설계)

  • Lee, Soo Cheon
    • Journal of Korean Ophthalmic Optics Society / v.13 no.1 / pp.31-35 / 2008
  • Purpose: This study designs a triplet optical system for a CCTV camera lens. Methods: The design is a telescopic lens with a 5° field angle, a 56 mm focal length, a 20 mm diameter, and a 2/3-inch CCD array detector. Results: The performance of the optical system was evaluated using ray fans, spot diagrams, and the diffraction MTF. The system was achromatized at the Fraunhofer C, d, and F lines, and both the tangential and sagittal MTF exceed 70% at a spatial frequency of 50 line pairs/mm. Conclusions: A marketable triplet optical system for CCTV cameras was designed and its utility considered.


A Study on FOV for developing 3D Game Contents (3D게임 콘텐츠 개발을 위한 시야각(FOV) 연구)

  • Lee, Hwan-joong;Kim, young-bong
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.163-168 / 2009
  • Since players can freely control the point of view and the field of view, 3D games produce a strong sense of reality and immersion. In most 3D games, the point of view and field of view are determined by the camera position and its FOV (field of view). Although the FOV is a simple technical factor, it can introduce graphical distortion, affect the immersiveness of the game, and cause physical discomfort in some players. We therefore suggest guidelines for setting the FOV in 3D games, based on examining and analysing various rendering results under controlled FOV changes and on the actual FOV settings used in published games, all within the same 3D modeling environment.
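A practical detail behind the distortion the authors describe: games usually quote a horizontal FOV, while rendering APIs typically take a vertical FOV, and converting between them incorrectly stretches the image. The conversion for a standard pinhole projection is a sketch worth having (not from the paper):

```python
import math

def vertical_fov_deg(horizontal_fov_deg, aspect_w_over_h):
    """Convert horizontal to vertical FOV for a pinhole projection:
    tan(v/2) = tan(h/2) / aspect, with aspect = width / height."""
    half_h = math.radians(horizontal_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half_h) / aspect_w_over_h))
```

At a 16:9 aspect ratio, a 90° horizontal FOV corresponds to a vertical FOV of about 58.7°; treating the two interchangeably is a common source of the stretching and discomfort discussed above.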


Activated Viewport based Surveillance Event Detection in 360-degree Video (360도 영상 공간에서 활성 뷰포트 기반 이벤트 검출)

  • Shim, Yoo-jeong;Lee, Myeong-jin
    • Journal of Broadcast Engineering / v.25 no.5 / pp.770-775 / 2020
  • Since the 360-degree ERP frame structure has location-dependent distortion, existing video surveillance algorithms cannot be applied directly to 360-degree video. In this paper, an activated-viewport-based event detection method for 360-degree video is proposed. After extracting activated viewports enclosing object candidates, objects are detected within those viewports. The objects are then tracked in 360-degree video space for region-based event detection. The proposed method is shown to improve the recall and the false-negative rate by more than 30% compared to a conventional method without activated viewports.
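The "activated viewport" idea rests on rectilinear (gnomonic) reprojection: a small perspective viewport is cut out of the ERP frame so that a standard detector sees distortion-free content. A sketch of the pixel mapping such an extraction needs; the conventions and names here are illustrative, not the paper's:

```python
import math

def viewport_to_erp(u, v, vp_size, fov_deg, yaw_deg, pitch_deg, erp_w, erp_h):
    """Map a pixel (u, v) of a square rectilinear viewport to ERP frame
    coordinates.  The viewport looks along (yaw, pitch) with the given
    field of view; the returned (x, y) indexes the ERP frame."""
    f = (vp_size / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # ray in the viewport camera frame (z forward, x right, y down)
    x = (u - vp_size / 2.0) / f
    y = (v - vp_size / 2.0) / f
    z = 1.0
    # rotate by pitch (about the x-axis), then yaw (about the y-axis)
    p = math.radians(pitch_deg)
    yw = math.radians(yaw_deg)
    y2 = y * math.cos(p) - z * math.sin(p)
    z2 = y * math.sin(p) + z * math.cos(p)
    x3 = x * math.cos(yw) + z2 * math.sin(yw)
    z3 = -x * math.sin(yw) + z2 * math.cos(yw)
    # direction -> (longitude, latitude) -> ERP pixel
    lon = math.atan2(x3, z3)
    lat = math.atan2(-y2, math.hypot(x3, z3))
    ex = (lon / math.pi + 1.0) / 2.0 * erp_w
    ey = (0.5 - lat / math.pi) * erp_h
    return (ex, ey)
```

Sampling the ERP frame at these coordinates produces the distortion-free viewport; tracking the detected objects back in 360-degree space then uses the inverse of the same mapping.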