• Title/Summary/Keyword: Fisheye Camera

Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.769-781 / 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) have shown successful performance in many computer vision applications such as image classification and object detection. To implement a deep learning network on an embedded system with limited processing power and memory, the network may need to be simplified. However, a simplified deep learning network cannot learn every possible scene. One realistic strategy for embedded deep learning is to construct a simplified network model optimized for the scene images of the installation site; commercialization then requires automatic training. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach, reconfirms them with a GoogLeNet-based CNN, and finally refines them more correctly and tightly by applying a saliency object detection technique. The accuracy and tightness improvements of the proposed human localization method, AGTLM, are shown through several experiments.
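
The final refinement step lends itself to a small illustration. The sketch below is not the authors' AGTLM code: it only shows how a candidate box might be tightened with a saliency map, using OpenCV's contrib StaticSaliencyFineGrained as a stand-in for the paper's saliency object detection; the function name refine_box_with_saliency, the example box, and the file name are hypothetical.

```python
# Minimal sketch (not the authors' AGTLM code): tighten a candidate person
# box using a static saliency map, as a stand-in for the refinement step.
import cv2
import numpy as np

def refine_box_with_saliency(image, box):
    """box = (x, y, w, h) candidate region; returns a tightened box."""
    x, y, w, h = box
    roi = image[y:y + h, x:x + w]

    # OpenCV contrib saliency (requires opencv-contrib-python).
    saliency = cv2.saliency.StaticSaliencyFineGrained_create()
    ok, sal_map = saliency.computeSaliency(roi)
    if not ok:
        return box

    # Threshold the saliency map and keep the bounding box of salient pixels.
    sal_u8 = (sal_map * 255).astype(np.uint8)
    _, mask = cv2.threshold(sal_u8, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return box
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)

# Example with a hypothetical candidate box from the detector stage:
# img = cv2.imread("fisheye_frame.png")
# tight = refine_box_with_saliency(img, (120, 80, 90, 180))
```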

Development of Camera Module for Vehicle Safety Support (차량 안전 지원용 카메라 모듈 개발)

  • Shin, Seong-Yoon;Cho, Seung-Pyo;Shin, Kwang-Seong;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.672-673 / 2022
  • In this paper, we discuss a camera that is fixed to the same view as the TOF sensor and can be installed horizontally along the vehicle's direction of travel. The camera uses a 1280×720 resolution to improve object recognition accuracy, outputs images at 30 fps, and can be fitted with a wide-angle fisheye lens of 180° or more.
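
As a rough illustration of the stated capture parameters only (the paper describes a hardware module, not code), the following sketch opens a 1280×720, 30 fps stream with OpenCV; the camera index is an assumption.

```python
# Minimal sketch (assumption, not the authors' module firmware): opening a
# 1280x720, 30 fps stream from a wide-angle camera with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                  # camera index 0 is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
if ok:
    print("frame shape:", frame.shape)     # expect (720, 1280, 3)
cap.release()
```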

De-blurring Algorithm for Performance Improvement of Searching a Moving Vehicle on Fisheye CCTV Image (어안렌즈사용 CCTV이미지에서 차량 정보 수집의 성능개선을 위한 디블러링 알고리즘)

  • Lee, In-Jung
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.4C / pp.408-414 / 2010
  • When collecting traffic information from CCTV images, a detection zone must be set in the image area while the pan-tilt system is in operation. Automating the detection zone with a pan-tilt system is not easy because of mechanical error, so a camera with a fisheye lens or a convex-mirror camera is needed to obtain wide-area images. This, however, causes problems such as reduced system speed and image distortion. The distortion is caused by occlusion of angled rays, similar to a shaken snapshot from a digital camera. In this paper, we propose two de-blurring methods to overcome the distortion: image segmentation by a nonlinear diffusion equation, and deformation of selected segmented areas. With the proposed de-blurring methods, the PSNR of the de-blurred image increases by 15 dB, and the detection rate for collecting traffic information improves by more than 5% compared with the distorted images.
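
Since the abstract names segmentation by a nonlinear diffusion equation, here is a minimal Perona-Malik-style diffusion sketch as a generic example of that ingredient; it is not the authors' algorithm, and the parameters (n_iter, kappa, dt) are illustrative.

```python
# Minimal sketch (assumption): Perona-Malik-style nonlinear diffusion, a
# standard instance of the "nonlinear diffusion equation" smoothing used as a
# segmentation ingredient; not the authors' exact algorithm.
import numpy as np

def nonlinear_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """img: 2-D float array (grayscale). Returns the diffused image."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (np.roll wraps at the border,
        # which is acceptable for a sketch).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance: small across strong edges.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# PSNR between a reference image `ref` and a result `out` could be checked as:
# psnr = 10 * np.log10(255.0 ** 2 / np.mean((ref - out) ** 2))
```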

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are large and slow to compute depth information for omni-directional images. In this paper, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculate fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
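
To make the laser-image fusion step concrete, the sketch below projects laser points into a fisheye image under a simple equidistant model (r = f·θ); the calibration values f, cx, cy and the example point are assumptions, not the authors' setup.

```python
# Minimal sketch (assumption, not the authors' calibration): projecting 2-D
# laser-scanner points into a fisheye image with an equidistant model
# (r = f * theta), to pair range data with obstacle outlines in the image.
import numpy as np

def project_equidistant(points_xyz, f, cx, cy):
    """points_xyz: (N, 3) points in the camera frame (z along the optical axis).
    Returns (N, 2) pixel coordinates under the equidistant fisheye model."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)     # angle from the optical axis
    phi = np.arctan2(y, x)                    # azimuth around the axis
    r = f * theta                             # equidistant projection
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

# Example: a laser point 1.5 m ahead and 0.4 m to the side, already transformed
# into the (downward-facing) camera frame; f, cx, cy are illustrative values.
pt = np.array([[0.4, 0.0, 1.5]])
print(project_equidistant(pt, f=320.0, cx=640.0, cy=480.0))
```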

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.19 no.6 / pp.547-554 / 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional formation methods for multiple robots often use a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with its small angle of view, to enlarge the camera's field of view. In addition, to make up for the lack of image information about the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The whole system controls the formation of the robots based on their moving directions and velocities, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multiple robots through both simulation and experiment.
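
A minimal sketch of the keypoint-plus-optical-flow idea follows; ORB stands in for SURF (which requires the non-free OpenCV contrib build), and the function name and file names are hypothetical, so this is only an approximation of the paper's pipeline.

```python
# Minimal sketch (assumption): keypoint-based region estimate followed by
# Lucas-Kanade optical flow, mirroring the SURF + LK pipeline; ORB is used as
# a freely available stand-in for SURF (cv2.xfeatures2d.SURF_create).
import cv2
import numpy as np

def track_robot_motion(prev_gray, curr_gray):
    detector = cv2.ORB_create(nfeatures=500)
    kps = detector.detect(prev_gray, None)
    if not kps:
        return None
    pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

    # Pyramidal Lucas-Kanade optical flow between consecutive fisheye frames.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.flatten() == 1].reshape(-1, 2)
    good_next = next_pts[status.flatten() == 1].reshape(-1, 2)

    flow = good_next - good_prev              # per-keypoint motion vectors
    return flow.mean(axis=0)                  # average direction/velocity

# prev, curr = cv2.imread("frame0.png", 0), cv2.imread("frame1.png", 0)
# print(track_robot_motion(prev, curr))
```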

Tunnel Mosaic Images Using Fisheye Lens Camera (어안렌즈 카메라를 이용한 터널 모자이크 영상 제작)

  • Kim, Gi-Hong;Song, Yeong-Sun;Kim, Baek-Seok
    • Journal of Korean Society for Geospatial Information Science / v.17 no.1 / pp.105-111 / 2009
  • Construction can be more convenient and safer with adequate information. Consequently, studies on collecting various kinds of information using the newest surveying technologies and applying that information to construction have recently been in progress. Digital images are easy to obtain and contain a variety of information, so with the recent development of image processing technology, the application field of digital images is getting wider. In this study, we propose to use a fisheye lens camera at underground construction sites, especially tunnels, to overcome the inconvenience of photographing with general lens cameras. A program for mapping the surface of a tunnel and producing a mosaic image is also developed. The mosaic image can be used to observe and analyze abnormal phenomena on the tunnel surface such as cracks, water leakage, and exfoliation.
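
As a generic illustration of mapping a fisheye view onto an unrolled surface (not the authors' mapping program), the sketch below unrolls an equidistant fisheye image into a (θ, azimuth) panorama with cv2.remap; the model parameters f, cx, cy and the output size are assumptions.

```python
# Minimal sketch (assumption): unrolling an equidistant fisheye image into a
# (theta, azimuth) panorama, the basic operation behind mosaicking a tunnel
# surface from fisheye shots.
import cv2
import numpy as np

def unroll_fisheye(img, f, cx, cy, out_w=1024, out_h=256, theta_max=np.pi / 2):
    az = np.linspace(-np.pi, np.pi, out_w)    # azimuth around the optical axis
    th = np.linspace(0.0, theta_max, out_h)   # angle from the optical axis
    azg, thg = np.meshgrid(az, th)
    r = f * thg                               # equidistant model: r = f * theta
    map_x = (cx + r * np.cos(azg)).astype(np.float32)
    map_y = (cy + r * np.sin(azg)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# pano = unroll_fisheye(cv2.imread("tunnel_fisheye.png"), f=300.0, cx=512, cy=512)
```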

Panorama Image Stitching Using Synthetic Fisheye Image (Synthetic fisheye 이미지를 이용한 360° 파노라마 이미지 스티칭)

  • Kweon, Hyeok-Joon;Cho, Donghyeon
    • Journal of Broadcast Engineering / v.27 no.1 / pp.20-30 / 2022
  • Recently, as VR (Virtual Reality) technology has been in the spotlight, 360° panoramic images that provide lively VR content are attracting a lot of attention. Image stitching is a key technology for producing 360° panorama images, and many studies are being actively conducted. Typical stitching algorithms are based on feature-point matching. However, conventional feature-point-based image stitching methods have the problem that the stitching results are strongly affected by the feature points. To solve this problem, deep-learning-based image stitching technologies have recently been studied, but there are still many problems when there is little overlap between images or large parallax. In addition, fully supervised learning is limited because labeled ground-truth panorama images cannot be obtained in a real environment. Therefore, we produced three fisheye images with different camera centers and the corresponding ground-truth image through the CARLA simulator, which is widely used in the autonomous driving field. We propose an image stitching model that creates a 360° panorama image from the produced fisheye images. The final experiments, on virtual datasets configured similarly to the actual environment, verify that the stitching results are robust to various environments and large parallax.
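
For contrast with the learned model described above, the sketch below runs the conventional feature-point-based baseline via OpenCV's high-level Stitcher; the input file names are placeholders, and the paper's own CARLA-trained model is not reproduced here.

```python
# Minimal sketch (assumption): the conventional feature-point-based stitching
# baseline, via OpenCV's high-level Stitcher API.
import cv2

# Placeholder file names; replace with three overlapping views.
images = [cv2.imread(p) for p in ["cam_left.png", "cam_center.png", "cam_right.png"]]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.png", pano)
else:
    print("stitching failed, status =", status)
```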

Real time Omni-directional Object Detection Using Background Subtraction of Fisheye Image (어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지)

  • Choi, Yun-Won;Kwon, Kee-Koo;Kim, Jong-Hyo;Na, Kyung-Jin;Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.8 / pp.766-772 / 2015
  • This paper proposes an object detection method based on motion estimation using background subtraction in fisheye images obtained through an omni-directional camera mounted on a vehicle. Recently, most vehicles have come with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection on images obtained from a camera, it is difficult to apply complicated algorithms on the embedded system installed in the vehicle because of its inherently low processing performance. In general, an embedded system needs a system-dependent algorithm because it has lower processing performance than a computer. In this paper, the location of an object is estimated from its motion, obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
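
A minimal frame-differencing sketch in the spirit of the lightweight background subtraction described above (not the authors' embedded code) is given below; the threshold, blending factor, and minimum contour area are assumptions.

```python
# Minimal sketch (assumption): background subtraction by differencing the
# current fisheye frame against a slowly updated background model.
import cv2
import numpy as np

def detect_motion(background, frame, thresh=25, alpha=0.05):
    """background, frame: grayscale uint8 images of equal size.
    Returns (bounding boxes of moving regions, updated background)."""
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Slowly blend the current frame into the background model.
    new_bg = cv2.addWeighted(background, 1.0 - alpha, frame, alpha, 0.0)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    return boxes, new_bg
```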

Verification Method of Omnidirectional Camera Model by Projected Contours (사영된 컨투어를 이용한 전방향 카메라 모델의 검증 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.994-999 / 2007
  • Omnidirectional camera systems can acquire a large amount of information about the surrounding scene from a relatively small number of images, so research on self-calibration and 3D reconstruction using omnidirectional images has been actively conducted. This paper proposes a new method for verifying the accuracy of a projection model estimated with previously proposed calibration methods. The many straight lines in the real world are projected onto an omnidirectional image as contours, and the trajectory of each contour can be estimated from the projection model and the coordinates of the contour's two endpoints. The accuracy of the omnidirectional camera's projection model can then be verified from the distance error between the estimated contour trajectory and the contour observed in the image. To evaluate the performance of the proposed method, the algorithm was applied to spherically mapped synthetic images and to real images acquired with a fisheye lens to judge the accuracy of the projection model.
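
To illustrate the verification idea, the sketch below projects sample points of a 3-D line through an assumed equidistant model and measures the distance error to observed contour points; the projection model, parameters, and function names are assumptions rather than the paper's estimated model.

```python
# Minimal sketch (assumption): verify a fisheye/omnidirectional projection
# model by projecting samples of a 3-D line and measuring the distance error
# to the observed contour points in the image.
import numpy as np

def project(points, f, cx, cy):
    """Equidistant fisheye projection of (N, 3) camera-frame points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)
    phi = np.arctan2(y, x)
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)

def contour_distance_error(line_p0, line_p1, observed_uv, f, cx, cy, n=200):
    """Mean distance between the projected line and the observed contour."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    samples = (1.0 - t) * line_p0 + t * line_p1      # points on the 3-D line
    proj = project(samples, f, cx, cy)               # predicted contour
    # For each observed contour pixel, distance to the nearest predicted point.
    d = np.linalg.norm(observed_uv[:, None, :] - proj[None, :, :], axis=2)
    return d.min(axis=1).mean()
```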

Realization for Image Distortion Correction Processing System with Fisheye Lens Camera

  • Kim, Ja-Hwan;Ryu, Kwang-Ryol;Sclabassi, Robert J.
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.281-284 / 2007
  • A realization of an image distortion correction processing system on a DSP processor is presented in this paper. The image distortion correction algorithm is implemented on the DSP processor with a focus on real-time processing rather than image quality. The lens and camera distortion coefficients are processed through YCbCr lookup tables, and the correction algorithm applies a reverse mapping method for the geometric transform. Experiments with the system show a processing time of about 34.6 ms for a 720×480 distorted image over a 150° visual range.
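
As a rough illustration of the reverse-mapping/lookup-table idea (not the DSP implementation, and without the YCbCr tables), the sketch below builds remap lookup tables once for an assumed equidistant fisheye and applies them per frame with cv2.remap; the focal length and image center are illustrative values.

```python
# Minimal sketch (assumption): reverse mapping via precomputed lookup tables.
# For each output (undistorted) pixel, store the source fisheye pixel, then
# apply the tables per frame with cv2.remap.
import cv2
import numpy as np

def build_undistort_maps(w, h, f, cx, cy):
    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    x, y = (u - cx) / f, (v - cy) / f        # ideal pinhole ray directions
    theta = np.arctan(np.hypot(x, y))        # angle from the optical axis
    phi = np.arctan2(y, x)
    r = f * theta                            # equidistant fisheye radius
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return map_x, map_y

# Build once (720x480, illustrative intrinsics), then reuse every frame:
# map_x, map_y = build_undistort_maps(720, 480, f=250.0, cx=360.0, cy=240.0)
# corrected = cv2.remap(fisheye_frame, map_x, map_y, cv2.INTER_LINEAR)
```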
