• Title/Summary/Keyword: Fisheye Lens Cameras

Deep Learning based Object Detector for Vehicle Recognition on Images Acquired with Fisheye Lens Cameras (어안렌즈 카메라로 획득한 영상에서 차량 인식을 위한 딥러닝 기반 객체 검출기)

  • Hieu, Tang Quang; Yeon, Sungho; Kim, Jaemin
    • Journal of Korea Multimedia Society / v.22 no.2 / pp.128-135 / 2019
  • This paper presents a deep learning-based object detection method for recognizing vehicles in images acquired through cameras installed on the ceiling of an underground parking lot. First, we present an image enhancement method that improves vehicle detection performance in dark lighting environments. Second, we present new CNN-based multiscale classifiers for detecting vehicles in images acquired through cameras with fisheye lenses. Experiments show that the presented vehicle detector performs better than conventional detectors.
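
The abstract does not describe the enhancement step itself; as a minimal sketch of the kind of low-light preprocessing such a pipeline might use (a generic stand-in, not the authors' method), a CLAHE pass on the luminance channel with OpenCV looks like this:

```python
import cv2

def enhance_low_light(frame_bgr, clip_limit=3.0, tile_grid=(8, 8)):
    """Brighten a dark parking-lot frame with CLAHE on the L channel.

    Illustrative stand-in: the paper's own enhancement method is not
    described in the abstract.
    """
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
```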

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D camera systems with multiple cameras are large and slow when calculating depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained using the proposed algorithm with real maps.
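
The abstract does not give the fusion geometry. A minimal sketch, assuming an equidistant fisheye model (r = f·θ), a camera looking straight down, and a fixed vertical offset between the camera and the scanner plane (all of these are assumptions, not details from the paper), might project each 2D laser point into the fisheye image like this:

```python
import numpy as np

def project_laser_to_fisheye(laser_xy, f_px, cx, cy, scanner_height=0.3):
    """Project 2D laser points (x, y in metres, scanner frame) into a
    downward-facing fisheye image using an equidistant model r = f * theta.

    Assumptions (not from the paper): the camera looks straight down, the
    scanner plane lies `scanner_height` metres below the camera, and the
    scanner and camera frames share the same x/y axes.
    """
    pts = np.asarray(laser_xy, dtype=float)        # shape (N, 2)
    x, y = pts[:, 0], pts[:, 1]
    z = np.full_like(x, scanner_height)            # depth below the camera
    theta = np.arctan2(np.hypot(x, y), z)          # angle from the optical axis
    phi = np.arctan2(y, x)                         # azimuth around the axis
    r = f_px * theta                               # equidistant projection (pixels)
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    return np.stack([u, v], axis=1)                # pixel coordinates
```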

Motion-based ROI Extraction with a Standard Angle-of-View from High Resolution Fisheye Image (고해상도 어안렌즈 영상에서 움직임기반의 표준 화각 ROI 검출기법)

  • Ryu, Ar-Chim; Han, Kyu-Phil
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.395-401 / 2020
  • In this paper, a motion-based ROI extraction algorithm for high-resolution fisheye images is proposed for multi-view monitoring systems. Fisheye cameras have recently come into wide use because of their wide angle-of-view, and they typically provide lens correction functionality as well as various viewing modes. However, since the distortion-free angle of conventional algorithms is quite narrow due to the severe distortion ratio, there are many unintentional dead areas, and the algorithms require considerable computation time to find undistorted coordinates. Thus, the proposed algorithm adopts image decimation and motion detection methods that can extract an undistorted ROI image with a standard angle-of-view for fast and intelligent surveillance systems. In addition, a mesh-type ROI is presented to reduce the lens correction time, so that this independent ROI scheme can be parallelized to maximize the processor's utilization.
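
As a rough illustration of the decimate-then-detect-motion idea (the paper's actual decimation factor, motion detector, and mesh-type ROI are not reproduced here; the threshold and scale below are assumptions), a frame-difference pass on a downscaled copy can locate a coarse ROI that is then mapped back to full-resolution coordinates:

```python
import cv2
import numpy as np

def motion_roi_decimated(prev_bgr, curr_bgr, scale=0.25, thresh=25):
    """Find a coarse motion ROI on a decimated copy of a high-resolution
    fisheye frame, then map it back to full-resolution coordinates."""
    small_prev = cv2.resize(prev_bgr, None, fx=scale, fy=scale)
    small_curr = cv2.resize(curr_bgr, None, fx=scale, fy=scale)
    diff = cv2.absdiff(cv2.cvtColor(small_prev, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(small_curr, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                                  # no motion detected
    x0, x1 = xs.min() / scale, xs.max() / scale      # back to full resolution
    y0, y1 = ys.min() / scale, ys.max() / scale
    return int(x0), int(y0), int(x1), int(y1)        # ROI bounding box
```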

Tunnel Mosaic Images Using Fisheye Lens Camera (어안렌즈 카메라를 이용한 터널 모자이크 영상 제작)

  • Kim, Gi-Hong; Song, Yeong-Sun; Kim, Baek-Seok
    • Journal of Korean Society for Geospatial Information Science / v.17 no.1 / pp.105-111 / 2009
  • Construction can be more convenient and safer with adequate information. Consequently, studies on collecting various kinds of information using the newest surveying technology and applying them to construction have recently been making progress. Digital images are easy to obtain and contain a wide variety of information. Therefore, with the recent development of image processing technology, the application field of digital images is getting wider. In this study, we propose the use of a fisheye lens camera in underground construction sites, especially tunnels, to overcome the inconvenience of photographing with general lens cameras. A program for mapping the surface of a tunnel and making a mosaic image was also developed. This mosaic image can be used to observe and analyze abnormal phenomena on the tunnel surface such as cracks, water leakage, and exfoliation.
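
The mapping program itself is not described in the abstract; a simple polar unwrapping around the image centre, sketched below under that assumption, gives an idea of how a circular fisheye view of a tunnel bore could be turned into a rectangular strip for mosaicking:

```python
import cv2

def unwrap_tunnel_fisheye(fisheye_bgr, radial_px=512, angular_px=2048):
    """Unwrap a circular fisheye view of a tunnel bore into a rectangular
    strip via a simple polar mapping around the image centre.

    Illustrative stand-in: the paper's actual surface-mapping and mosaicking
    program is not described in the abstract. In the output, radius runs
    along the x-axis and angle (0-360 degrees) along the y-axis; rotate or
    transpose the strip for a horizontal panorama.
    """
    h, w = fisheye_bgr.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)
    return cv2.warpPolar(fisheye_bgr, (radial_px, angular_px), center,
                         max_radius, cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
```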

Geometric Correction of Vehicle Fish-eye Lens Images (차량용 어안렌즈영상의 기하학적 왜곡 보정)

  • Kim, Sung-Hee; Cho, Young-Ju; Son, Jin-Woo; Lee, Joong-Ryoul; Kim, Myoung-Hee
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2009.02a / pp.601-605 / 2009
  • Because a fish-eye lens can provide a super-wide field of view, over 180 degrees, with a minimum number of cameras, many vehicles are being equipped with such camera systems. Camera calibration must be performed first, and geometric correction of the radial distortion is needed to provide images for driver assistance. However, vehicle fish-eye cameras produce diagonal output images rather than circular images and have asymmetric distortion beyond the horizontal angle. In this paper, we introduce a camera model and a metric calibration method for vehicle cameras that uses feature points of the image. We then undistort the input image through a perspective projection, in which straight lines appear straight. The method was fitted to vehicle fish-eye lenses with different fields of view.
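
As a rough sketch of the calibrate-then-undistort workflow (using OpenCV's generic symmetric fisheye model rather than the asymmetric vehicle-lens model the paper develops), a perspective view in which straight lines appear straight can be generated like this, assuming K and D come from a prior calibration such as cv2.fisheye.calibrate on checkerboard corners:

```python
import cv2
import numpy as np

def undistort_fisheye(img, K, D, balance=0.0):
    """Undistort a fisheye frame to a perspective view using OpenCV's
    symmetric fisheye model.

    K (3x3 intrinsics) and D (4x1 distortion) are assumed to come from a
    prior calibration; this is not the paper's asymmetric vehicle-lens model.
    """
    h, w = img.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```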

Realtime Vehicle Tracking and Region Detection in Indoor Parking Lot for Intelligent Parking Control (지능형 주차 관제를 위한 실내주차장에서 실시간 차량 추적 및 영역 검출)

  • Yeon, Seungho; Kim, Jaemin
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.418-427 / 2016
  • Smart parking management requires tracking a vehicle in an indoor parking lot and detecting the place where the vehicle is parked. An advanced parking system watches the entire space of the parking lot with CCTV cameras, which can be used for vehicle tracking and detection. In order to cover a wide area with a single camera, a fisheye lens is used. In this case, the shape and size of a moving vehicle vary greatly with the distance and angle to the camera, which makes vehicle detection and tracking difficult. In addition to the fisheye lens, the vehicle headlights also make detection and tracking difficult. This paper describes a method of real-time vehicle detection and tracking that is robust to the harsh conditions described above. In each image frame, we update the region of a vehicle and estimate the vehicle's movement. First, we approximate the shape of a car with a quadrangle and estimate the four sides of the car using multiple histograms of oriented gradients. Second, we create a template by applying a distance transform to the car region and estimate the motion of the car with a template matching method.
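
A minimal sketch of the distance-transform-based template matching step (the HOG-based quadrangle estimation and the paper's exact matching cost are not reproduced; the chamfer-style cost below is an assumption):

```python
import cv2
import numpy as np

def chamfer_track(frame_gray, car_edge_template):
    """Chamfer-style matching: slide the car's edge template over the
    distance transform of the frame's edge map and take the position with
    the lowest summed distance.

    Illustrative only: the paper builds its template from a distance
    transform of the tracked car region; the cost used here is an assumption.
    """
    frame_edges = cv2.Canny(frame_gray, 50, 150)
    # Distance from every pixel to the nearest edge pixel in the frame.
    frame_dt = cv2.distanceTransform(cv2.bitwise_not(frame_edges),
                                     cv2.DIST_L2, 3)
    templ = (car_edge_template > 0).astype(np.float32)
    # Cross-correlating edges with distances sums the chamfer cost per offset.
    cost = cv2.matchTemplate(frame_dt, templ, cv2.TM_CCORR)
    _, _, min_loc, _ = cv2.minMaxLoc(cost)
    return min_loc        # top-left corner of the best (lowest-cost) match
```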

Mixing Collaborative and Hybrid Vision Devices for Robotic Applications (로봇 응용을 위한 협력 및 결합 비전 시스템)

  • Bazin, Jean-Charles; Kim, Sung-Heum; Choi, Dong-Geol; Lee, Joon-Young; Kweon, In-So
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.210-219 / 2011
  • This paper studies how to combine devices such as monocular/stereo cameras, motors for panning/tilting, fisheye lenses, and convex mirrors in order to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two mixed versions of the new systems. The first system is a robot photographer with a conventional pan/tilt perspective camera and a fisheye lens. The second system is an omnidirectional detector for a complete 360-degree field-of-view surveillance system. We build an original device that combines a stereo-catadioptric camera and a pan/tilt stereo-perspective camera, and we also apply it in a real environment. Compared to previous systems, we show the benefits of the two proposed systems: maintaining both high speed and high resolution with collaborative moving cameras, and covering an enormous search space with the hybrid configuration. Experimental results are provided to show the effectiveness of the mixed collaborative and hybrid systems.
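
The hand-off between the fisheye detector and the pan/tilt perspective camera is not specified in the abstract; under the assumption of an equidistant fisheye model and roughly aligned optical axes, converting a fisheye detection into pan/tilt angles could look like the following hypothetical helper:

```python
import numpy as np

def fisheye_target_to_pan_tilt(u, v, cx, cy, f_px):
    """Convert a target detected at pixel (u, v) in the fisheye image into
    pan/tilt angles for the collaborating perspective camera.

    Assumptions (not from the paper): equidistant fisheye model r = f * theta
    and a pan/tilt unit whose axes are aligned with the fisheye optical axis.
    """
    dx, dy = u - cx, v - cy
    pan = np.degrees(np.arctan2(dy, dx))           # azimuth around the optical axis
    tilt = np.degrees(np.hypot(dx, dy) / f_px)     # radius -> angle off the axis
    return pan, tilt
```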

Comparison the Mapping Accuracy of Construction Sites Using UAVs with Low-Cost Cameras

  • Jeong, Hohyun; Ahn, Hoyong; Shin, Dongyoon; Choi, Chuluong
    • Korean Journal of Remote Sensing / v.35 no.1 / pp.1-13 / 2019
  • The advent of the fourth industrial revolution, built on advances in digital technology, has coincided with studies using various unmanned aerial vehicles (UAVs) being performed worldwide. However, the accuracy of different sensors and their suitability for particular research studies are factors that need to be carefully evaluated. In this study, we evaluated UAV photogrammetry using smart technology. To assess the performance of digital photogrammetry, the accuracy of common procedures for generating orthomosaic images and digital surface models (DSMs) was measured against terrestrial laser scanning (TLS) data. Two different types of non-surveying cameras (a smartphone camera and a fisheye camera) were attached to the UAV platform. For the fisheye camera, lens distortion was corrected by considering the characteristics of the lens. The accuracy of the generated orthoimages and DSMs was comparatively analyzed using aerial and TLS data. The accuracy comparison proceeded as follows. First, we used the orthomosaic images to compare check points over a certain area. In addition, the vertical errors of each camera's DSM were compared and analyzed against the TLS data. In this study, we propose and evaluate the feasibility of UAV photogrammetry, which can acquire 3-D spatial information at low cost on a construction site.
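
As an illustration of the DSM vertical-error comparison against TLS (the paper's check-point selection, resampling, and outlier handling are not described in the abstract, so the helper below is a simplified assumption operating on co-registered grids):

```python
import numpy as np

def dsm_vertical_rmse(dsm_z, tls_z):
    """Compare DSM heights against TLS reference heights at the same grid
    cells and report the mean error and RMSE of the vertical differences.

    Assumption: both inputs are co-registered 2-D arrays with NaN marking
    cells that lack data in either grid.
    """
    dz = np.asarray(dsm_z, float) - np.asarray(tls_z, float)
    dz = dz[np.isfinite(dz)]                 # drop cells missing in either grid
    mean_err = dz.mean()
    rmse = np.sqrt(np.mean(dz ** 2))
    return mean_err, rmse
```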

Image Data Loss Minimized Geometric Correction for Asymmetric Distortion Fish-eye Lens (비대칭 왜곡 어안렌즈를 위한 영상 손실 최소화 왜곡 보정 기법)

  • Cho, Young-Ju; Kim, Sung-Hee; Park, Ji-Young; Son, Jin-Woo; Lee, Joong-Ryoul; Kim, Myoung-Hee
    • Journal of the Korea Society for Simulation / v.19 no.1 / pp.23-31 / 2010
  • Because a fish-eye lens can provide a super-wide field of view, over 180 degrees, with a minimum number of cameras, many vehicles are being equipped with such camera systems. To use the camera not only as a viewing system but also as a camera sensor, camera calibration must be performed first, and geometric correction of the radial distortion is needed to provide images for driver assistance. In this paper, we introduce a geometric correction technique that minimizes the loss of image data from a vehicle fish-eye lens having a field of view over 180 degrees and an asymmetric distortion. Geometric correction is a process in which a camera model with a distortion model is established and a corrected view is generated after the camera parameters are calculated through a calibration process. First, the FOV model, which imitates an asymmetric distortion configuration, is used as the distortion model. We then unify the axis ratio, because the horizontal view of the vehicle fish-eye lens is asymmetrically wide for the driver, and estimate the parameters by applying a non-linear optimization algorithm. Finally, we create a corrected view by backward mapping and provide a function to optimize the ratio of the horizontal and vertical axes. This minimizes image data loss and improves visual perception when the input image is undistorted through a perspective projection.
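
The FOV distortion model referred to here is commonly written as r_d = arctan(2 r_u tan(ω/2)) / ω, with r_u and r_d the undistorted and distorted radii. A minimal backward-mapping sketch under that single-parameter symmetric form (the paper additionally handles asymmetric axes and a non-linear parameter estimation step, which are omitted here) is:

```python
import numpy as np

def fov_backward_map(out_shape, omega, f, cx, cy):
    """Build a backward map for the FOV distortion model: for every pixel of
    the corrected (perspective) view, find the source pixel in the distorted
    fisheye image.

    Assumptions (not from the paper): a single symmetric omega (radians) and
    a shared principal point (cx, cy); f is the focal length in pixels.
    """
    h, w = out_shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    x = (u - cx) / f
    y = (v - cy) / f
    r_u = np.hypot(x, y)
    r_d = np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega
    # Ratio r_d / r_u, with the analytic limit used at the image centre.
    scale = np.full_like(r_u, 2.0 * np.tan(omega / 2.0) / omega)
    np.divide(r_d, r_u, out=scale, where=r_u > 1e-8)
    map_x = (x * scale * f + cx).astype(np.float32)
    map_y = (y * scale * f + cy).astype(np.float32)
    return map_x, map_y
```

The returned maps can then be passed to cv2.remap to resample the fisheye image into the corrected view.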

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won; Choi, Jeong Won; Im, Sung Gyu; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which can be detected using Lucas-Kanade optical flow in images obtained through fish-eye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an arbitrary force field around an obstacle found in the complete map obtained through SLAM. Robots can also avoid obstacles by using speed commands based on robot modeling and the curved movement path of the robot. Recent research has improved these approaches by optimizing the algorithms for actual robots. However, research on robots that use omni-directional vision SLAM to acquire surrounding information all at once has been comparatively limited. A robot running the proposed algorithm avoids obstacles according to the estimated avoidance path, based on the map obtained through omni-directional vision SLAM using a fisheye image, and then returns to the original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm through a comparison between the position obtained by the proposed algorithm and the real position recorded while avoiding the obstacles.
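
A minimal sketch of the Lucas-Kanade step (how the paper converts the resulting flow vectors into obstacle positions, avoidance paths, and acceleration commands is not detailed in the abstract):

```python
import cv2
import numpy as np

def obstacle_flow(prev_gray, curr_gray, max_corners=200):
    """Track sparse feature points between consecutive fisheye frames with
    pyramidal Lucas-Kanade optical flow and return per-point motion vectors.

    Illustrative only: the feature-detector settings here are assumptions,
    and the paper's mapping from flow to avoidance commands is not shown.
    """
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None,
                                                   winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    p0 = prev_pts.reshape(-1, 2)[ok]
    p1 = next_pts.reshape(-1, 2)[ok]
    return p0, p1 - p0        # tracked points and their flow vectors
```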