• Title/Summary/Keyword: single-view camera

Method of Measuring Color Difference Between Images using Corresponding Points and Histograms (대응점 및 히스토그램을 이용한 영상 간의 컬러 차이 측정 기법)

  • Hwang, Young-Bae;Kim, Je-Woo;Choi, Byeong-Ho
    • Journal of Broadcast Engineering / v.17 no.2 / pp.305-315 / 2012
  • Color correction between two or more images is crucial for the development of subsequent algorithms and for stereoscopic 3D camera systems. Although various color correction methods have been proposed recently, few methods exist for measuring their performance. In addition, when two images exhibit view variation due to different camera positions, previous performance measures may not be appropriate. In this paper, we propose a method of measuring the color difference between corresponding images for color correction. The method finds matching points that have the same colors in the two scenes, using correspondence searches to account for the view variation. We then calculate statistics over the neighborhoods of these matching points to measure the color difference. In contrast to a conventional geometric transformation by a single homography, this approach can account for misalignment of corresponding points. To handle the case where matching points do not cover the whole image, we also calculate color-difference statistics over the entire image region. Finally, the color difference is computed as a weighted sum of the correspondence-based and whole-region-based measures, with the weight determined by the ratio of the image area covered by the correspondence-based comparison.
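
A minimal Python sketch of the weighting described above, assuming SIFT correspondences and mean-color patch statistics; the function name, patch statistic, and ratio-test threshold are illustrative choices, not the paper's implementation:

```python
# Hedged sketch: weighted combination of correspondence-based and
# whole-image color-difference statistics. Names are illustrative.
import numpy as np
import cv2

def color_difference(img_a, img_b, patch=7):
    """Estimate per-channel color difference between two views of a scene."""
    # 1) Find corresponding points (SIFT + ratio test as one plausible choice).
    sift = cv2.SIFT_create()
    ka, da = sift.detectAndCompute(img_a, None)
    kb, db = sift.detectAndCompute(img_b, None)
    good = []
    for pair in cv2.BFMatcher().knnMatch(da, db, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # 2) Correspondence-based statistic: mean absolute color difference
    #    over small patches around each matched pair.
    h = patch // 2
    diffs, covered = [], np.zeros(img_a.shape[:2], bool)
    for m in good:
        xa, ya = np.int32(ka[m.queryIdx].pt)
        xb, yb = np.int32(kb[m.trainIdx].pt)
        pa = img_a[ya-h:ya+h+1, xa-h:xa+h+1]
        pb = img_b[yb-h:yb+h+1, xb-h:xb+h+1]
        if pa.shape == pb.shape and pa.size:
            diffs.append(np.abs(pa.mean((0, 1)) - pb.mean((0, 1))))
            covered[ya-h:ya+h+1, xa-h:xa+h+1] = True
    d_corr = np.mean(diffs, axis=0) if diffs else 0.0

    # 3) Whole-region statistic: difference of global mean colors.
    d_glob = np.abs(img_a.mean((0, 1)) - img_b.mean((0, 1)))

    # 4) Weight by the fraction of the image covered by correspondences.
    w = covered.mean()
    return w * d_corr + (1 - w) * d_glob
```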

Image Mosaicking Using Feature Points Based on Color-invariant (칼라 불변 기반의 특징점을 이용한 영상 모자이킹)

  • Kwon, Oh-Seol;Lee, Dong-Chang;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.89-98 / 2009
  • In the field of computer vision, image mosaicking is a common method for effectively increasing the restricted field of view of a camera by combining a set of separate images into a single seamless image. Image mosaicking based on feature points has recently been a focus of research because it allows simple estimation of the geometric transformation, regardless of the distortions and intensity differences generated by camera motion across consecutive images. Yet, since most feature-point matching algorithms extract feature points from gray values, identifying corresponding points becomes difficult under changing illumination and in images with similar intensities. Accordingly, to solve these problems, this paper proposes an image mosaicking method based on feature points that uses the color information of the images. Essentially, the digital values acquired from a digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values that depend on the surface reflectance and are invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The matching accuracy of the feature points extracted with the proposed method is increased, while image mosaicking using color information is also achieved.
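
The abstract does not give the exact narrow-band transform, so the sketch below uses a classical stand-in from the same family: spatial gradients of log band ratios, which are invariant to a von Kries (diagonal) illuminant change. This is an assumed substitute, not the authors' derivation:

```python
# Hedged sketch: a reflectance-based color invariant via log band ratios.
import numpy as np

def log_ratio_gradients(rgb):
    """Compute spatial gradients of log band ratios of an RGB image.

    Under a von Kries (diagonal) illumination model, each channel is
    scaled by a constant, so log(R/G) and log(B/G) shift by a constant
    over the whole image; their spatial gradients are therefore
    invariant to the illuminant's chromaticity."""
    eps = 1e-6
    log_rg = np.log(rgb[..., 0] + eps) - np.log(rgb[..., 1] + eps)
    log_bg = np.log(rgb[..., 2] + eps) - np.log(rgb[..., 1] + eps)
    # Gradient magnitude of each log-ratio plane as a simple invariant map.
    gy1, gx1 = np.gradient(log_rg)
    gy2, gx2 = np.gradient(log_bg)
    return np.hypot(gx1, gy1), np.hypot(gx2, gy2)
```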

Vision-based Obstacle Detection using Geometric Analysis (기하학적 해석을 이용한 비전 기반의 장애물 검출)

  • Lee Jong-Shill;Lee Eung-Hyuk;Kim In-Young;Kim Sun-I.
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.3 s.309 / pp.8-15 / 2006
  • Obstacle detection is an important task for many mobile robot applications. Methods using stereo vision or optical flow are computationally expensive. This paper therefore presents a vision-based obstacle detection method that uses only two view images, a single passive camera, and odometry, and that runs in real time. The proposed method detects obstacles through 3D reconstruction from the two views. Processing begins with feature extraction for each input image using Lowe's SIFT (Scale Invariant Feature Transform) and establishes the correspondence of features across the input images. Using the extrinsic camera rotation and translation matrices provided by odometry, the 3D positions of these corresponding points are calculated by triangulation. The triangulation results form a partial 3D reconstruction of the obstacles. The proposed method has been tested successfully on an indoor mobile robot and is able to detect obstacles within 75 msec.
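
A minimal sketch of the triangulation step, assuming calibrated intrinsics K and odometry-supplied extrinsics R, t; the use of cv2.triangulatePoints and the variable names are illustrative, not necessarily the authors' implementation:

```python
# Hedged sketch: two-view triangulation with odometry-supplied extrinsics.
import numpy as np
import cv2

def triangulate_obstacles(pts1, pts2, K, R, t):
    """Triangulate matched pixels (Nx2 arrays) seen from two robot poses.

    K is the 3x3 camera intrinsic matrix; R (3x3) and t (3x1) are the
    rotation and translation between the two poses, e.g. from odometry."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first pose
    P2 = K @ np.hstack([R, t.reshape(3, 1)])            # second pose
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float),
                                pts2.T.astype(float))
    X = (X_h[:3] / X_h[3]).T          # homogeneous -> Euclidean, Nx3
    return X[X[:, 2] > 0]             # keep points in front of the camera
```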

Lane Information Fusion Scheme using Multiple Lane Sensors (다중센서 기반 차선정보 시공간 융합기법)

  • Lee, Soomok;Park, Gikwang;Seo, Seung-woo
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.142-149 / 2015
  • Most mono-camera-based lane detection systems are fragile under poor illumination conditions. To compensate for the limitations of a single sensor, a lane information fusion system using multiple lane sensors is an alternative that stabilizes performance and guarantees high precision. However, conventional fusion schemes, which concern only object detection, are inappropriate for lane information fusion. Even the few studies that consider lane information fusion have dealt only with limited back-up-sensor aids, or have omitted the cases of asynchronous multi-rate sampling and differing coverage. In this paper, we propose a lane information fusion scheme utilizing multiple lane sensors with different coverages and cycles. Precise lane information fusion is achieved by the proposed framework, which considers the individual ranging capability and processing time of diverse types of lane sensors. In addition, a novel lane estimation model is proposed to synchronize multi-rate sensors precisely by up-sampling sparse lane information signals. Through quantitative vehicle-level experiments with an around-view monitoring system and a frontal camera system, we demonstrate the robustness of the proposed lane fusion scheme.
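
As a rough illustration of up-sampling a slower lane sensor onto a faster one and fusing the two, here is a hedged sketch; the linear interpolation and inverse-variance weights are assumptions, and the paper's estimation model is more elaborate:

```python
# Hedged sketch: aligning two lane sensors with different rates by
# up-sampling the slower signal to the faster timeline, then fusing.
import numpy as np

def fuse_lane_offsets(t_fast, y_fast, t_slow, y_slow, var_fast, var_slow):
    """Fuse lateral lane-offset measurements from two asynchronous sensors.

    t_* are sample timestamps (s), y_* the lane offsets (m)."""
    # Up-sample the slow sensor onto the fast sensor's timestamps.
    y_slow_up = np.interp(t_fast, t_slow, y_slow)
    # Inverse-variance (maximum-likelihood for Gaussian noise) weighting.
    w_fast = 1.0 / var_fast
    w_slow = 1.0 / var_slow
    return (w_fast * y_fast + w_slow * y_slow_up) / (w_fast + w_slow)
```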

Tele-operation of a Mobile Robot Using Force Reflection Joystick with Single Hall Sensor (단일 홀센서 힘반영 조이스틱을 이용한 모바일 로봇 원격제어)

  • Lee, Jang-Myung;Jeon, Chan-Sung;Cho, Seung-Keun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.17-24 / 2006
  • Though the final goal of mobile robot navigation is full autonomy, an operator's intelligent and skillful decisions are necessary when there are many scattered obstacles. There are several limitations even in camera-based tele-operation of a mobile robot, which is very popular for mobile robot navigation. For example, shadowed and curved areas cannot be viewed using a narrow view-angle camera, especially in bad weather such as snowy or rainy days. Therefore, other sensory information is necessary for reliable tele-operation. In this paper, sixteen ultrasonic sensors are attached around a mobile robot in a ring pattern to measure the distances to obstacles. A collision vector is introduced as a new tool for obstacle avoidance, defined as a normal vector from an obstacle to the mobile robot. Based on this collision vector, a virtual reflection force is generated to avoid the obstacles, and this reflection force is transferred to the operator who is holding a joystick to control the mobile robot. Relying on the reflection force, the operator can control the mobile robot more smoothly and safely. For this bi-directional tele-operation, a master joystick system using a single Hall sensor was designed to resolve the nonlinear sections that are usual for a general joystick with two motors and potentiometers. Finally, the efficiency of the force reflection joystick is verified through a comparison of two vision-based tele-operation experiments, with and without force reflection.
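
A hedged sketch of how a ring of ultrasonic ranges could be turned into a repulsive reflection force via collision vectors; the 1/d force law, gain, and cutoff distance are illustrative assumptions, not the paper's control law:

```python
# Hedged sketch: a repulsive "reflection force" from a ring of ultrasonic
# range readings, in the spirit of the collision vector described above.
import numpy as np

def reflection_force(ranges, d_max=1.0, gain=1.0):
    """Sum repulsive forces from 16 ultrasonic ranges (meters).

    Sensor i points outward at angle 2*pi*i/16; an obstacle closer than
    d_max pushes the robot (and the operator's joystick) away from it."""
    n = len(ranges)                          # 16 sensors in a ring
    angles = 2.0 * np.pi * np.arange(n) / n
    force = np.zeros(2)
    for theta, d in zip(angles, ranges):
        if 0.0 < d < d_max:
            # Collision vector: from the obstacle toward the robot center.
            direction = -np.array([np.cos(theta), np.sin(theta)])
            force += gain * (1.0 / d - 1.0 / d_max) * direction
    return force                             # (fx, fy) sent to the joystick
```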

Catadioptric Omnidirectional Stereo Imaging System and Reconstruction of 3-dimensional Coordinates (Catadioptric 전방향 스테레오 영상시스템 및 3차원 좌표 복원)

  • Kim, Soon-Cheol;Yi, Soo-Yeong
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.6 / pp.4108-4114 / 2015
  • Image acquisition using an optical mirror is called a catadioptric method. The catadioptric imaging method is generally used to capture 360-degree omnidirectional visual information in a single image; an exemplary omnidirectional optic is the bowl-shaped hyperbolic mirror. In this paper, a single-camera omnidirectional stereo imaging method using an additional concave lens is studied. It is possible to obtain the 3-dimensional coordinates of objects in the environment from the omnidirectional stereo image by matching the stereo views, which have different viewpoints. The omnidirectional stereo imaging system in this paper is cost-effective and makes correspondence matching relatively easy, because the camera intrinsic parameters are consistent across the stereo image. The parameters of the imaging system are extracted through a 3-step calibration, and the performance of 3-dimensional coordinate reconstruction is verified through experiments. The measurable range of the proposed imaging system is also presented through a depth-resolution analysis.
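
A hedged sketch of the ray-intersection geometry behind such a system, assuming a vertical-baseline model with two effective viewpoints whose elevation angles come from the system's calibration; this is an illustrative model, not the paper's code:

```python
# Hedged sketch: triangulation for a vertical-baseline omnidirectional
# stereo pair with effective viewpoints separated by baseline b.
import math

def radial_depth(phi_upper, phi_lower, b):
    """Intersect two rays in the vertical plane through the mirror axis.

    phi_upper/phi_lower: elevation angles (rad) of the same scene point
    from the upper and lower effective viewpoints; b: baseline (m).
    Returns (horizontal range r, height z above the lower viewpoint)."""
    # From the lower viewpoint: z = r*tan(phi_lower);
    # from the upper viewpoint: z = b + r*tan(phi_upper). Equating:
    denom = math.tan(phi_lower) - math.tan(phi_upper)
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; point too far to triangulate")
    r = b / denom
    return r, r * math.tan(phi_lower)
```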

Infrastructure 2D Camera-based Real-time Vehicle-centered Estimation Method for Cooperative Driving Support (협력주행 지원을 위한 2D 인프라 카메라 기반의 실시간 차량 중심 추정 방법)

  • Ik-hyeon Jo;Goo-man Park
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.23 no.1 / pp.123-133 / 2024
  • Existing autonomous driving technology has been developed based on sensors attached to the vehicle that detect the environment and formulate driving plans. However, it has limitations, such as performance degradation in specific situations like adverse weather, backlighting, and occlusion caused by obstructions. To address these issues, cooperative autonomous driving technology, which extends the perception range of autonomous vehicles through the support of road infrastructure, has attracted attention. Nevertheless, real-time analysis of the 3D centroids of objects, as required by international standards, is challenging with single-lens cameras. This paper proposes an approach for detecting objects and estimating vehicle centroids in real time, using the fixed field of view of road infrastructure cameras and pre-measured geometric information. The proposed method was confirmed to estimate object center points effectively when checked against GPS positioning equipment, and it is expected to contribute to the proliferation and adoption of cooperative autonomous driving infrastructure technology applicable to both vehicles and road infrastructure.
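
One plausible reading of "pre-measured geometric information" is an image-to-ground homography for the camera's fixed view; the sketch below maps a detection's bottom-center pixel to road coordinates under that assumption, with made-up reference points for illustration:

```python
# Hedged sketch: vehicle ground-plane position from a fixed infrastructure
# camera via a homography fitted to pre-measured reference points.
import numpy as np
import cv2

# Pre-measured correspondences: pixel coords <-> world ground coords (m).
# These four points are invented for illustration only.
img_pts = np.float32([[410, 620], [980, 615], [890, 300], [480, 305]])
gnd_pts = np.float32([[0, 0], [3.5, 0], [3.5, 30], [0, 30]])
H, _ = cv2.findHomography(img_pts, gnd_pts)

def vehicle_ground_point(bbox):
    """Map the bottom-center of a detected bounding box (x, y, w, h)
    to ground-plane coordinates; the box bottom is assumed to touch
    the road surface."""
    x, y, w, h = bbox
    foot = np.float32([[[x + w / 2.0, y + h]]])        # bottom-center pixel
    return cv2.perspectiveTransform(foot, H)[0, 0]     # (X, Y) in meters
```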

Flame Propagation Characteristics in a Heavy Duty Liquid Phase LPG Injection SI Engine by Flame Visualization (대형 액상 LPG 분사식 SI 엔진에서 화염 가시화를 이용한 희박영역에서의 화염 전파특성 연구)

  • 김승규;배충식;이승목;김창업;강건용
    • Transactions of the Korean Society of Automotive Engineers / v.10 no.4 / pp.23-32 / 2002
  • Combustion and flame propagation characteristics of a liquid-phase LPG injection (LPLI) engine were investigated in a single-cylinder optical engine. Lean-burn operation is needed to reduce the thermal stress of the exhaust manifold and engine knock in a heavy-duty LPG engine, and an LPLI system has advantages for lean operation. Optimized engine design parameters such as swirl, injection timing, and piston geometry can improve lean-burn performance with an LPLI system. In this study, the effects of piston geometry, along with injection timing and swirl ratio, on flame propagation characteristics were investigated. A series of bottom-view flame images was taken by direct visualization using a UV-intensified high-speed CCD camera. The concept of flame area speed, in addition to flame propagation patterns and thermodynamic heat release analysis, was introduced to analyze the flame propagation characteristics. The results show the correlation between the flame propagation characteristics, which relate to engine performance in the lean region, and engine design parameters such as swirl ratio, piston geometry, and injection timing. Stronger swirl resulted in faster flame propagation under open-valve injection. The flame speed was significantly affected by injection timing under open-valve injection conditions, supposedly due to charge stratification. Piston geometry affected flame propagation through squish effects.
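
Flame area speed can be illustrated as the growth rate of the binarized luminous flame area in the image sequence; the threshold and unit handling below are assumptions, since the abstract does not define the metric exactly:

```python
# Hedged sketch: "flame area speed" as d(area)/dt of the luminous flame
# region in successive bottom-view images.
import numpy as np

def flame_area_speed(frames, dt, px_per_mm, threshold=0.3):
    """frames: list of grayscale images (float 0..1) at interval dt (s).

    Returns per-step area speed in mm^2/s from the luminous flame area."""
    areas = []
    for f in frames:
        burned_px = np.count_nonzero(f > threshold)   # luminous pixels
        areas.append(burned_px / px_per_mm**2)        # pixel count -> mm^2
    areas = np.asarray(areas)
    return np.diff(areas) / dt                        # d(area)/dt
```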

Construction of Cubic Panoramic Image for Realistic Virtual Reality Contents (실감형 VR 콘텐츠 제작을 위한 큐브 파노라마 영상의 구성)

  • Kim, Eung-Kon;Seo, Seung-Wan
    • Proceedings of the Korea Contents Association Conference / 2006.05a / pp.431-435 / 2006
  • A panoramic image provides a wider field of view than an image from acquisition equipment such as a camera, and offers users more realism and immersion than a single image. A cubic panoramic image provides three-dimensional access, with zooming and rotating in the top, bottom, left, and right directions. However, commercial software is usually required to make a panoramic image, and distorted images are often seen in the top and bottom directions. This paper presents a method that constructs a cubic panoramic virtual-reality image using Apple QuickTimeVR's cubic data structure, without any commercial software, to produce realistic images in the top and bottom directions of the cubic panoramic virtual-reality space.
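
The core operation in building a cube panorama is sampling each cube face from a source panorama; the sketch below fills the front (+X) face from an equirectangular image, with face-orientation conventions that are illustrative rather than QuickTimeVR's specification:

```python
# Hedged sketch: sampling one cube-map face from an equirectangular panorama.
import numpy as np

def front_face(pano, size):
    """pano: equirectangular image (H x W x 3); returns the front cube face."""
    h, w = pano.shape[:2]
    # Pixel grid on the face, mapped to [-1, 1].
    v, u = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size),
                       indexing="ij")
    x = np.ones_like(u)            # front face sits at x = +1
    # Direction of each face pixel -> spherical angles.
    lon = np.arctan2(u, x)                              # azimuth
    lat = np.arctan2(v, np.sqrt(u**2 + x**2))           # elevation
    # Spherical angles -> equirectangular pixel coordinates.
    px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return pano[py, px]            # nearest-neighbor sampling
```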

Asynchronous Sensor Fusion using Multi-rate Kalman Filter (다중주기 칼만 필터를 이용한 비동기 센서 융합)

  • Son, Young Seop;Kim, Wonhee;Lee, Seung-Hi;Chung, Chung Choo
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.11 / pp.1551-1558 / 2014
  • We propose a multi-rate sensor fusion of vision and radar using a Kalman filter to solve the problems of asynchronous and multi-rate sampling periods in object-vehicle tracking. A model-based prediction of object vehicles is performed with a decentralized multi-rate Kalman filter for each sensor (vision and radar). To improve the position-prediction performance, a different weighting is applied to each sensor's predicted object position from the multi-rate Kalman filter. The proposed method can provide the estimated positions of the object vehicles at every ECU sampling time. The Mahalanobis distance is used to establish correspondence between the measured and predicted objects. Through experimental results, we validate that the post-processed fusion data gives improved tracking performance. The proposed method obtained a twofold improvement in object-tracking performance compared with a single-sensor method (camera or radar) in terms of root-mean-square error.
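
A hedged sketch of the pattern described above: a constant-velocity Kalman filter that predicts at the ECU rate and updates whenever an asynchronous sensor sample arrives, with Mahalanobis-distance gating for association; the noise values and gate threshold are illustrative, not the paper's exact filter:

```python
# Hedged sketch: multi-rate Kalman filtering with Mahalanobis gating.
import numpy as np

class MultiRateKF:
    def __init__(self, q=0.5, r=1.0):
        self.x = np.zeros(4)                  # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)
        self.q, self.R = q, np.eye(2) * r

    def predict(self, dt):                    # run every ECU cycle
        F = np.eye(4); F[0, 2] = F[1, 3] = dt # constant-velocity model
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(4)
        return self.x[:2]                     # position estimate for the ECU

    def update(self, z, gate=9.21):           # run when a sensor reports
        y = np.asarray(z) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        if y @ np.linalg.solve(S, y) > gate:  # squared Mahalanobis gating
            return                            # reject unassociated measurement
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```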