• Title/Summary/Keyword: Image based localization


A Study on Automatic Detection of Speed Bump by using Mathematical Morphology Image Filters while Driving (수학적 형태학 처리를 통한 주행 중 과속 방지턱 자동 탐지 방안)

  • Joo, Yong Jin;Hahm, Chang Hahk
    • Journal of Korean Society for Geospatial Information Science / v.21 no.3 / pp.55-62 / 2013
  • This paper aims to detect speed bumps using an omni-directional camera and to suggest a real-time update scheme for speed bumps through a vision-based approach. To detect a speed bump in a sequence of camera images, noise must be removed, and the spots whose shape and pattern match a speed bump must first be detected. Since a speed bump has a regular pattern of white and yellow areas, we extract speed bumps on the road by applying erosion and dilation morphological operations and by using the HSV color model. By collecting large numbers of panoramic images from the camera, we are able to detect the target object and calculate its distance using GPS log data. Finally, we evaluated the accuracy of the obtained results and the detection algorithm by implementing SLAMS (Simultaneous Localization and Mapping System).
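
The noise-removal step this abstract describes can be sketched as a morphological opening (erosion followed by dilation) on a binary candidate mask, e.g. one produced by HSV thresholding on white/yellow. This is a minimal illustration, not the authors' implementation; all names are made up here.

```python
def erode(mask, k=1):
    """Keep a pixel only if its whole (2k+1)x(2k+1) neighborhood is set."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                      for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(mask, k=1):
    """Set a pixel if any pixel in its (2k+1)x(2k+1) neighborhood is set."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                      for dy in range(-k, k + 1) for dx in range(-k, k + 1)) else 0
             for x in range(w)] for y in range(h)]

def opening(mask, k=1):
    # Opening removes specks smaller than the structuring element while
    # preserving large regions such as the striped bump area.
    return dilate(erode(mask, k), k)
```

Isolated noise pixels vanish under the opening, while a solid region larger than the structuring element survives intact.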

Real-Time Individual Tracking of Multiple Moving Objects for Projection based Augmented Visualization (다중 동적객체의 실시간 독립추적을 통한 프로젝션 증강가시화)

  • Lee, June-Hyung;Kim, Ki-Hong
    • Journal of Digital Convergence / v.12 no.11 / pp.357-364 / 2014
  • If the markers being tracked move quickly, AR content flickers while the images captured from the camera are updated. Conventional methods that employ image-based markers and SLAM algorithms for object tracking do not allow two or more objects in the same camera scene to be tracked simultaneously and to interact with each other. In this paper, an improved SLAM-type algorithm for tracking dynamic objects is proposed and investigated to solve this problem. To this end, two virtual cameras are used for one physical camera, which lets the two tracked objects interact with each other, because the two objects are perceived separately through the single physical camera. Mobile robots used as the dynamic objects are synchronized with virtual robots in purpose-designed contents, demonstrating the usefulness of applying individual tracking of multiple moving objects to augmented visualization.
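
The "two virtual cameras for one physical camera" idea above can be sketched by splitting the single frame into two virtual views and tracking one object centroid per view, so the two objects are never confused with each other. This is an illustrative toy, not the paper's code; the half-frame split is an assumption.

```python
def centroid(region):
    """Mean position of the set pixels in a 2-D 0/1 list, or None if empty."""
    pts = [(x, y) for y, row in enumerate(region) for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def track_two(frame):
    """Track one object independently in each half (virtual camera) of a frame."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]    # virtual camera #1
    right = [row[half:] for row in frame]   # virtual camera #2
    cl, cr = centroid(left), centroid(right)
    if cr is not None:
        cr = (cr[0] + half, cr[1])          # shift back to full-frame coords
    return cl, cr
```

Because each virtual camera sees only its own region, the two tracked positions can then drive interaction logic between the objects.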

Detection of eye using optimal edge technique and intensity information (눈 영역에 적합한 에지 추출과 밝기값 정보를 이용한 눈 검출)

  • Mun, Won-Ho;Choi, Yeon-Seok;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.196-199 / 2010
  • The human eyes are important facial landmarks for image normalization because of their relatively constant interocular distance. This paper introduces a novel approach to the eye detection task using an optimal segmentation method for eye representation. The method consists of three steps: (1) an edge extraction method that accurately extracts the eye region from a gray-scale face image, (2) extraction of the eye region using a labeling method, and (3) eye localization based on intensity information. Experimental results show that a correct eye detection rate of 98.9% can be achieved on 2408 FERET images with variations in lighting conditions and facial expressions.
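
Step (2) above, labeling, is standard connected-component labeling of the binary edge map so that each candidate eye region gets its own label. A minimal sketch (4-connectivity, iterative flood fill; the function name is hypothetical):

```python
from collections import deque

def label_components(binary):
    """Label 4-connected components of a 2-D 0/1 list; returns (labels, count)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                count += 1
                q = deque([(sy, sx)])
                labels[sy][sx] = count
                while q:  # breadth-first flood fill of one component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] \
                                and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```

Each labeled region can then be scored by the intensity criterion of step (3) to pick the actual eye regions.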


Channel Error Detection and Concealment Techniques for the MPEG-2 Video Standard (MPEG-2 동영상 표준방식에 대한 채널 오차의 검출 및 은폐 기법)

  • 김종원;박종욱;이상욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.10 / pp.2563-2578 / 1996
  • In this paper, channel error characteristics are investigated to alleviate the channel error propagation problem in digital TV transmission systems. First, the error propagation problems, which are mainly caused by the inter-frame dependency and variable-length coding of the MPEG-2 baseline encoder, are analyzed in depth. Next, existing channel-resilient schemes are systematically classified into two kinds: those for the encoder and those for the decoder. Comparing performance and implementation cost, encoder-side schemes such as error localization, layered coding, and error-resilient bit-stream generation are described. Also, to reflect the practicality of real transmission situations, an efficient error detection scheme for the decoder is proposed that employs a priori information from the bit-stream syntax, checks the encoding conditions set at the encoder stage, and exploits the statistics of the image itself. Finally, an error concealment technique based on a DCT coefficient recovery algorithm is adopted to evaluate the performance of the proposed error resilience technique. Computer simulation results show that the quality of the received image is significantly improved even when the bit error rate is as high as $10^{-5}$.
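
One common form of the DCT-coefficient-recovery concealment mentioned above is to estimate a lost block's low-order DCT coefficients from its intact neighbors and zero the rest. The sketch below is only a hedged illustration of that family of techniques, not the paper's algorithm; the averaging rule and block layout are assumptions.

```python
def conceal_block(above, below, n_recover=3):
    """Estimate a lost block's DCT coefficients (zig-zag order) by averaging
    the co-located coefficients of the intact blocks above and below it;
    higher-order coefficients are zeroed, giving a smooth patch."""
    size = len(above)
    recovered = [0.0] * size
    for i in range(min(n_recover, size)):
        recovered[i] = (above[i] + below[i]) / 2.0
    return recovered
```

Recovering only the first few coefficients keeps the concealed block smooth, which is usually less visible than a block reconstructed from corrupted high-frequency data.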


A leak detection and 3D source localization method on a plant piping system by using multiple cameras

  • Kim, Se-Oh;Park, Jae-Seok;Park, Jong Won
    • Nuclear Engineering and Technology / v.51 no.1 / pp.155-162 / 2019
  • To reduce the secondary damage caused by leakage accidents in plant piping systems, a constant surveillance system is necessary. To ensure leaks are promptly addressed, the surveillance system should detect not only the leak itself but also its location. Recently, methods have been studied that use cameras to detect leakage and estimate its location. However, existing methods only estimate whether a leak exists, or provide only the two-dimensional coordinates of the leakage location. In this paper, a method using multiple cameras to detect leakage and estimate the three-dimensional coordinates of the leakage location is presented. Leakage is detected by each camera using MADI (Moving Average Differential Image) and histogram analysis. The two-dimensional leakage location is estimated from the detected leakage area, and the three-dimensional location is then estimated from it. To achieve this, the coordinates (x, z) of the leakage are calculated on a horizontal section (XZ plane) of the monitored area, and the y-coordinate is then calculated using a vertical section from each camera. The proposed method can accurately estimate the three-dimensional location of a leak using multiple cameras.
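
The per-camera detection step can be sketched as follows: keep a moving average of past frames, difference the current frame against it, threshold, and flag a leak when enough pixels changed. This is only an assumed reading of "MADI and histogram analysis" (a changed-pixel count stands in for the histogram step); thresholds and names are illustrative.

```python
def madi_step(frame, avg, alpha=0.1, diff_thresh=20, count_thresh=3):
    """One MADI-style update. frame/avg are flat lists of pixel intensities.
    Returns (updated moving average, leak detected?)."""
    # Pixels that deviate strongly from the moving-average background.
    changed = sum(1 for f, a in zip(frame, avg) if abs(f - a) > diff_thresh)
    # Exponential moving average of the background.
    new_avg = [a + alpha * (f - a) for f, a in zip(frame, avg)]
    return new_avg, changed >= count_thresh
```

A static scene leaves the average unchanged and raises no alarm; a sudden bright plume of pixels trips the count threshold.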

Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image (어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피)

  • Choi, Yun Won;Choi, Jeong Won;Im, Sung Gyu;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fish-eye cameras mounted on the robot. Conventional methods suggest avoidance paths by constructing an arbitrary force field around an obstacle found in the complete map obtained through SLAM; robots can also avoid obstacles by using speed commands based on the robot model and a curved movement path. Recent research has optimized such algorithms for actual robots, but comparatively little work has used omni-directional vision SLAM, which acquires the surrounding information all at once. A robot running the proposed algorithm avoids obstacles along an avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm through a comparison between the position obtained by the proposed algorithm and the real position recorded while avoiding the obstacles.
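
The force-field idea mentioned above can be sketched as a repulsive potential: the commanded velocity points away from the obstacle and grows as the robot gets closer, vanishing outside an influence radius. This is a generic potential-field illustration, not the paper's controller; the gain and radius are assumptions.

```python
import math

def avoid_velocity(robot, obstacle, influence=2.0, gain=1.0):
    """Repulsive velocity command pushing `robot` (x, y) away from `obstacle`."""
    dx, dy = robot[0] - obstacle[0], robot[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= influence or d == 0.0:
        return (0.0, 0.0)                      # outside the influence region
    mag = gain * (1.0 / d - 1.0 / influence)   # grows as the obstacle nears
    return (mag * dx / d, mag * dy / d)
```

Summing this repulsion with an attractive term toward the goal yields the curved avoidance-and-return paths the abstract describes.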

Tracking of Walking Human Based on Position Uncertainty of Dynamic Vision Sensor of Quadcopter UAV (UAV기반 동적영상센서의 위치불확실성을 통한 보행자 추정)

  • Lee, Junghyun;Jin, Taeseok
    • Journal of Institute of Control, Robotics and Systems / v.22 no.1 / pp.24-30 / 2016
  • The accuracy of small, low-cost CCD cameras is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a quadrotor UAV can hover over a tracked human target by using data from a CCD camera rather than imprecise GPS data. To realize this, a quadcopter UAV needs to recognize its position and posture in both known and unknown environments, and its localization should occur naturally. It is desirable for a hovering quadcopter UAV to estimate its position by resolving this uncertainty, as it is one of the most important problems. In this paper, we describe a method for determining the altitude of a quadcopter UAV using image information of a moving object such as a walking human. The method combines the position observed from GPS sensors and the position estimated from images captured by a fixed camera to localize the UAV. Using the a priori known path of the quadcopter UAV in world coordinates and a perspective camera model, we derive the geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated altitude of the UAV. Since the equations are based on a geometric constraint, measurement error is always present. The proposed method utilizes the error between the observed and estimated image coordinates to localize the quadcopter UAV; a Kalman filter scheme is applied. Its performance is verified by computer simulation and experiments.
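
The Kalman-filter fusion step above can be sketched in one dimension: a predicted altitude is corrected by each image-derived measurement, weighted by the relative uncertainties. This is a textbook 1-D filter under a random-walk altitude model, not the paper's filter; the variances and measurement values are illustrative.

```python
def kalman_update(x, p, z, q=0.01, r=1.0):
    """One Kalman step. x: altitude estimate, p: its variance,
    z: image-derived altitude measurement, q/r: process/measurement noise."""
    p = p + q                  # predict: uncertainty grows between measurements
    k = p / (p + r)            # Kalman gain: trust in the new measurement
    x = x + k * (z - x)        # correct with the measurement residual
    p = (1.0 - k) * p          # uncertainty shrinks after the correction
    return x, p

x, p = 0.0, 10.0               # deliberately poor initial altitude guess
for z in [5.2, 4.9, 5.1, 5.0, 5.0]:
    x, p = kalman_update(x, p, z)
```

After a few noisy measurements near 5 m, the estimate converges close to 5 and its variance drops well below the initial uncertainty.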

Face Recognition Robust to Local Distortion Using Modified ICA Basis Image

  • Kim Jong-Sun;Yi June-Ho
    • Proceedings of the Korea Institute of Information Security and Cryptology Conference / 2006.06a / pp.251-257 / 2006
  • The performance of face recognition methods using subspace projection is directly related to the characteristics of their basis images, especially in cases of local distortion or partial occlusion. For a subspace projection method to be robust to local distortion and partial occlusion, the basis images it generates should exhibit a part-based local representation. We propose an effective part-based local representation method, locally salient ICA (LS-ICA), for face recognition that is robust to local distortion and partial occlusion. The LS-ICA method employs only locally salient information from important facial parts in order to maximize the benefit of the idea of 'recognition by parts.' It creates part-based local basis images by imposing an additional localization constraint in the process of computing ICA architecture I basis images. We have contrasted the LS-ICA method with other part-based representations such as LNMF (Localized Non-negative Matrix Factorization) and LFA (Local Feature Analysis). Experimental results show that the LS-ICA method performs better than the PCA, ICA architecture I, ICA architecture II, LFA, and LNMF methods, especially in cases of partial occlusion and local distortion.


Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / v.36 no.6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object extraction method based on Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by all of the individual robots. Global mapping normally takes a long time because map data are exchanged among the individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it measures all the information around a robot simultaneously. The processing cost of the correction algorithm is lower than that of existing methods because only the objects' feature points are corrected. The proposed algorithm has two steps: first, a local map is created for each robot based on the omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the proposed algorithm with real maps.
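
The second step above, merging individual maps into one global map, can be sketched for two local occupancy grids already registered in a common frame (alignment is assumed done). The cell coding and the merge rule are illustrative assumptions, not the paper's scheme: -1 unknown, 0 free, 1 occupied; known cells override unknown, and occupied dominates free.

```python
def merge_maps(a, b):
    """Merge two equally-sized occupancy grids cell by cell."""
    def merge_cell(x, y):
        if x == -1:
            return y          # anything known beats unknown
        if y == -1:
            return x
        return max(x, y)      # occupied (1) dominates free (0)
    return [[merge_cell(x, y) for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

Letting "occupied" dominate is the conservative choice for navigation: a cell either robot saw as an obstacle stays an obstacle in the global map.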

Synthesis of 3D Sound Movement by Embedded DSP

  • Komata, Shinya;Sakamoto, Noriaki;Kobayashi, Wataru;Onoye, Takao;Shirakawa, Isao
    • Proceedings of the IEEK Conference / 2002.07a / pp.117-120 / 2002
  • A single-DSP implementation of 3D sound movement is described. Using a real-time 3D acoustic image localization algorithm, an efficient approach is devised for synthesizing 3D sound movement by interpolating only two parameters, "delay" and "gain". Based on this algorithm, real-time 3D sound synthesis is performed by a commercially available 16-bit fixed-point DSP with a computational load of 65 MIPS and a memory space of 9.6k words, which demonstrates that the algorithm can be used even for mobile applications.
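
The two-parameter scheme above can be sketched as linear interpolation of (delay, gain) pairs between keyframes, with the delay applied as a sample shift and the gain as a multiplier. This is a floating-point illustration of the idea only; the fixed-point DSP details, parameter values, and function names are assumptions.

```python
def interp_params(t, t0, p0, t1, p1):
    """Linearly interpolate (delay, gain) pairs p0 -> p1 over [t0, t1]."""
    a = (t - t0) / (t1 - t0)
    return (p0[0] + a * (p1[0] - p0[0]),   # delay, in samples
            p0[1] + a * (p1[1] - p0[1]))   # gain

def render(sample_idx, signal, delay, gain):
    """Produce one output sample: delayed (rounded to a sample) and scaled."""
    i = sample_idx - int(round(delay))
    return gain * signal[i] if 0 <= i < len(signal) else 0.0
```

Interpolating only these two trajectories per ear is what keeps the computational load small enough for a single fixed-point DSP.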
